
2 Getting Started with Spark: the reduce and reduceByKey Operations

The previous article covered map, whose job is transformation: it replaces each element with a new one, one for one. reduce, by contrast, is about aggregation: it combines all the elements into a single result.

package reduce;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

import java.util.Arrays;
import java.util.List;

/**
 * @author wuweifeng wrote on 2018/4/13.
 */
public class SimpleReduce {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("JavaWordCount").master("local").getOrCreate();
        // Spark's reduce operation over an ordinary List
        JavaSparkContext javaSparkContext = new JavaSparkContext(sparkSession.sparkContext());
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
        JavaRDD<Integer> originRDD = javaSparkContext.parallelize(data);

        Integer sum = originRDD.reduce((a, b) -> a + b);
        System.out.println(sum);

        // reduceByKey: reduce together the values that share the same key
        List<String> list = Arrays.asList("key1", "key1", "key2", "key2", "key3");
        JavaRDD<String> stringRDD = javaSparkContext.parallelize(list);
        // convert to key-value pairs, mapping each key to a count of 1
        JavaPairRDD<String, Integer> pairRDD = stringRDD.mapToPair(k -> new Tuple2<>(k, 1));
        List<Tuple2<String, Integer>> list1 = pairRDD.reduceByKey((x, y) -> x + y).collect();
        System.out.println(list1);

        // release the Spark resources when done
        sparkSession.stop();
    }
}

The code is straightforward. The first example sums the numbers: with a single partition, reduce proceeds as 1 + 2 = 3, then 3 + 3 = 6, then 6 + 4 = 10, then 10 + 5 = 15. Note that Spark actually reduces within each partition and then merges the partial results, so the function you pass must be commutative and associative.
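The same pattern works for any commutative and associative function, not only addition. As a minimal sketch (reusing the originRDD defined above), here is reduce computing the maximum instead of the sum:

// Math::max is commutative and associative, so the result is
// independent of how Spark splits the data across partitions.
Integer max = originRDD.reduce(Math::max);
System.out.println(max);   // prints 5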

第二個是reduceByKey,就是將key相同的鍵值對,按照Function進行計算。程式碼中就是將key相同的各value進行累加。結果就是[(key2,2), (key3,1), (key1,2)]
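reduceByKey is also the heart of the classic word count. As a rough sketch of how the pieces above fit together (data.txt is a hypothetical input file containing space-separated words):

// Word count: split lines into words, pair each word with 1,
// then sum the 1s per word with reduceByKey.
JavaRDD<String> lines = javaSparkContext.textFile("data.txt");
JavaPairRDD<String, Integer> counts = lines
        .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
        .mapToPair(word -> new Tuple2<>(word, 1))
        .reduceByKey(Integer::sum);
counts.collect().forEach(System.out::println);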