
Spark Study Notes (10): wordcount Execution Flow Analysis

1 Starting the Cluster

  • Start HDFS: start-dfs.sh
  • Start the Spark cluster: /home/hadoop/apps/spark-1.6.3-bin-hadoop2.6/sbin/start-all.sh
  • Start the Spark shell: /home/hadoop/apps/spark-1.6.3-bin-hadoop2.6/bin/spark-shell --master spark://node1:7077 --executor-memory 512m --total-executor-cores 2
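
After the shell starts, it is worth confirming that it really attached to the standalone master rather than falling back to local mode. A minimal check from the Scala prompt (the values in the comments are what this setup should report given the flags above, not captured output):

scala> sc.master               // spark://node1:7077
scala> sc.defaultParallelism   // 2, reflecting --total-executor-cores 2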

2 wordcount Execution Flow

The chain produces five RDDs in total:

  • textFile produces two RDDs: a HadoopRDD and a MapPartitionsRDD (see the sketch after this list)
  • flatMap produces one RDD: a MapPartitionsRDD
  • map produces one RDD: a MapPartitionsRDD
  • reduceByKey produces one RDD: a ShuffledRDD
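
Why does textFile alone account for two RDDs? In Spark 1.x it is implemented as a hadoopFile call followed by a map, and a rough equivalent can be typed into the same shell (a simplified sketch; the real implementation also names the RDD after the path):

scala> import org.apache.hadoop.io.{LongWritable, Text}
scala> import org.apache.hadoop.mapred.TextInputFormat
scala> // hadoopFile builds the HadoopRDD of (byte offset, line) records;
scala> // mapping each record to just the line text builds the MapPartitionsRDD
scala> val lines = sc.hadoopFile("hdfs://node1:9000/wc",
     |   classOf[TextInputFormat], classOf[LongWritable], classOf[Text])
     |   .map(pair => pair._2.toString)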
scala> val rdd = sc.textFile("hdfs://node1:9000/wc").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[9] at reduceByKey at <console>:27

scala> rdd.saveAsTextFile("hdfs://node1:9000/wcout1")

scala> rdd.toDebugString
res4: String = 
(3) ShuffledRDD[9] at reduceByKey at <console>:27 []
 +-(3) MapPartitionsRDD[8] at map at <console>:27 []
    |  MapPartitionsRDD[7] at flatMap at <console>:27 []
    |  hdfs://node1:9000/wc MapPartitionsRDD[6] at textFile at <console>:27 []
    |  hdfs://node1:9000/wc HadoopRDD[5] at textFile at <console>:27 []
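
Reading the debug string: the number in parentheses is each RDD's partition count (3 here, determined by the HDFS input splits), the bracketed numbers [5] through [9] are RDD ids assigned in creation order (ids 0 to 4 were taken by earlier commands in this session), and the indentation step at +-(3) marks the shuffle boundary introduced by reduceByKey, i.e. where the job splits into two stages. This can be cross-checked from the shell, for example:

scala> rdd.partitions.length   // 3, matching the (3) above
scala> rdd.dependencies        // a single ShuffleDependency, confirming the stage split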