Spark Deployment, Development Environment Setup, and Running jars
http://www.cnblogs.com/datahunter/p/4002331.html
1. Install the JDK
Extract the JDK package into the /usr/lib directory:

sudo cp jdk-7u67-linux-x64.gz /usr/lib
cd /usr/lib
sudo tar -xvzf jdk-7u67-linux-x64.gz
sudo gedit /etc/profile

Append the following environment variables to the end of /etc/profile:

export JAVA_HOME=/usr/lib/jdk1.7.0_67
export JRE_HOME=/usr/lib/jdk1.7.0_67/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

Save the file and reload /etc/profile:

source /etc/profile

Verify that the JDK was installed successfully:

java -version
2. Install and Configure SSH

sudo apt-get update
sudo apt-get install openssh-server
sudo /etc/init.d/ssh start

Generate a key pair and add the public key to authorized_keys:

ssh-keygen -t rsa -P ""
cd /home/hduser/.ssh
cat id_rsa.pub >> authorized_keys

Log in via ssh:

ssh localhost
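As an optional check that password-less login really works, BatchMode makes ssh fail rather than prompt, so the line below should print ok without asking for a password:

# Should print "ok" with no password prompt if the key setup worked.
ssh -o BatchMode=yes localhost 'echo ok'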
3. Install hadoop 2.4.0
We install hadoop 2.4.0 in pseudo-distributed mode. Extract hadoop 2.4.0 into the /usr/local directory:

sudo cp hadoop-2.4.0.tar.gz /usr/local/
cd /usr/local
sudo tar -xzvf hadoop-2.4.0.tar.gz

Append the following environment variables to the end of /etc/profile:

export HADOOP_HOME=/usr/local/hadoop-2.4.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

Save the file and reload /etc/profile:

source /etc/profile
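A quick way to confirm the shell now finds the hadoop binaries:

# Prints the Hadoop version if $HADOOP_HOME/bin is on PATH.
hadoop version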
In hadoop-env.sh and yarn-env.sh, both located in /usr/local/hadoop-2.4.0/etc/hadoop, update the jdk path:

cd /usr/local/hadoop-2.4.0/etc/hadoop
sudo gedit hadoop-env.sh
sudo gedit yarn-env.sh

The change is the same in both files: point JAVA_HOME at the JDK installed in step 1.
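A sketch of the edit, assuming the jdk path used in step 1:

# In hadoop-env.sh and yarn-env.sh, set JAVA_HOME explicitly:
export JAVA_HOME=/usr/lib/jdk1.7.0_67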
Edit core-site.xml:

sudo gedit core-site.xml

Add the following between <configuration> and </configuration> (fs.default.name is the legacy Hadoop 1.x key; Hadoop 2.x prefers fs.defaultFS, but the old name still works):

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
</property>
Edit hdfs-site.xml:

sudo gedit hdfs-site.xml

Add the following between <configuration> and </configuration> (the data directory belongs to the datanode, so its property is dfs.datanode.data.dir):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/app/hadoop/dfs/nn</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/app/hadoop/dfs/dn</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
Edit yarn-site.xml:

sudo gedit yarn-site.xml

Add the following between <configuration> and </configuration>:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
Copy mapred-site.xml.template to mapred-site.xml:

sudo cp mapred-site.xml.template mapred-site.xml
sudo gedit mapred-site.xml

Add the following between <configuration> and </configuration>:

<property>
  <name>mapreduce.jobtracker.address</name>
  <value>hdfs://localhost:9001</value>
</property>
Before starting hadoop, create the /app directory and give hduser ownership of it, so that hadoop can write its logs and data:

sudo mkdir /app
sudo chown -R hduser:hduser /app
Format the hadoop namenode:

hadoop namenode -format

Start hdfs and yarn. For Spark development, only hdfs needs to be running:

sbin/start-dfs.sh
sbin/start-yarn.sh

Open http://localhost:50070/ in a browser to view hdfs status information:
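As an optional smoke test, create a directory on hdfs and list it (the /user/hduser path is just an example):

# Requires the hdfs daemons started above.
hadoop fs -mkdir -p /user/hduser
hadoop fs -ls /user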
4. Install scala

sudo cp /home/hduser/Download/scala-2.9.3.tgz /usr/local
cd /usr/local
sudo tar -xvzf scala-2.9.3.tgz

Append the following environment variables to the end of /etc/profile:

export SCALA_HOME=/usr/local/scala-2.9.3
export PATH=$SCALA_HOME/bin:$PATH

Save the file and reload /etc/profile:

source /etc/profile

Verify that scala was installed successfully:

scala -version
5. Install Spark

sudo cp spark-1.1.0-bin-hadoop2.4.tgz /usr/local
cd /usr/local
sudo tar -xvzf spark-1.1.0-bin-hadoop2.4.tgz

Append the following environment variables to the end of /etc/profile:

export SPARK_HOME=/usr/local/spark-1.1.0-bin-hadoop2.4
export PATH=$SPARK_HOME/bin:$PATH

Save the file and reload /etc/profile:

source /etc/profile
In the conf directory, copy spark-env.sh.template to spark-env.sh:

cd /usr/local/spark-1.1.0-bin-hadoop2.4/conf
sudo cp spark-env.sh.template spark-env.sh
sudo gedit spark-env.sh

Add the following to spark-env.sh:

export SCALA_HOME=/usr/local/scala-2.9.3
export JAVA_HOME=/usr/lib/jdk1.7.0_67
export SPARK_MASTER_IP=localhost
export SPARK_WORKER_MEMORY=1000m
Start Spark:

cd /usr/local/spark-1.1.0-bin-hadoop2.4
sbin/start-all.sh

Verify that Spark was installed successfully:

cd /usr/local/spark-1.1.0-bin-hadoop2.4
bin/run-example SparkPi
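If everything is working, SparkPi prints a line like "Pi is roughly 3.14...". The standalone master also serves a web UI, by default on port 8080, that lists the registered worker; one quick way to peek at it:

# Fetch the first lines of the master web UI (default port 8080).
curl -s http://localhost:8080 | head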
6. Set Up the Spark Development Environment
The recommended IDE for Spark development in this article is IntelliJ IDEA, although Eclipse is also an option. Before using IntelliJ IDEA, you need to install the scala plugin. Click Configure:
Click Plugins:
Click Browse repositories...:
Type scala into the search box and select the Scala plugin to install it. Since the plugin is already installed here, the screenshot below shows no install option:
After installation, IntelliJ IDEA asks for a restart. After restarting, click Create New Project:
For Project SDK, choose the jdk installation directory; the jdk version in the development environment should match the one on the Spark cluster. Click Maven on the left, check Create from archetype, and select org.scala-tools.archetypes:scala-archetype-simple:
Click Next, then fill in GroupId, ArtifactId, and Version as needed:
Click Next; if maven is not installed on this machine an error is reported, so make sure maven is installed beforehand:
Click Next and enter the project name to finish the last step of New Project:
After clicking Finish, maven generates pom.xml and downloads the dependencies automatically. We need to change the scala version in pom.xml, since Spark 1.1.0 is built against Scala 2.10:

<properties>
  <scala.version>2.10.4</scala.version>
</properties>

Add the following between <dependencies> and </dependencies>:

<!-- Spark -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
</dependency>

<!-- HDFS -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.4.0</version>
</dependency>

Add the following between <build><plugins> and </plugins></build>:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.2</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
            <excludes>
              <exclude>META-INF/*.SF</exclude>
              <exclude>META-INF/*.DSA</exclude>
              <exclude>META-INF/*.RSA</exclude>
            </excludes>
          </filter>
        </filters>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <!-- Remember to change this to your own main class -->
            <mainClass>mark.lin.App</mainClass>
          </transformer>
          <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
            <resource>reference.conf</resource>
          </transformer>
        </transformers>
        <shadedArtifactAttached>true</shadedArtifactAttached>
        <shadedClassifierName>executable</shadedClassifierName>
      </configuration>
    </execution>
  </executions>
</plugin>
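The same packaging can also be run from the command line instead of the IDE; with the shade configuration above, the self-contained jar lands in target/ with the "executable" classifier in its name:

# Build the project and produce the shaded (executable) jar.
mvn clean package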
The Spark development environment is now fully set up. One more thing: the wordcount example code:

package mark.lin // remember to change this to your own package

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._

import scala.collection.mutable.ListBuffer

/**
 * Hello world!
 *
 */
object App {
  def main(args: Array[String]) {
    if (args.length != 1) {
      println("Usage: java -jar code.jar dependencies.jar")
      System.exit(0)
    }
    // Collect the comma-separated dependency jars passed on the command line.
    val jars = ListBuffer[String]()
    args(0).split(",").foreach(jars += _)

    val conf = new SparkConf()
    conf.setMaster("spark://localhost:7077").setAppName("wordcount").set("spark.executor.memory", "128m").setJars(jars)

    val sc = new SparkContext(conf)

    val file = sc.textFile("hdfs://localhost:9000/hduser/wordcount/input/input.csv")
    val count = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    count.collect().foreach(println) // print the word counts on the driver
    count.saveAsTextFile("hdfs://localhost:9000/hduser/wordcount/output/")
    sc.stop()
  }
}
7. Build & Run
Build the source with maven: click the bottom-left corner, click package in the right-hand panel, then click the green triangle to start the build.
The jars produced by maven appear in the target directory. Among them, helloworld-1.0-SNAPSHOT-executable.jar is the one we need to run on the Spark cluster.
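Following the Usage message in the example's main method, an invocation might look like the following (jar names depend on your ArtifactId; the paths here are placeholders):

# The single argument is a comma-separated list of jars that
# setJars ships to the executors.
java -jar target/helloworld-1.0-SNAPSHOT-executable.jar target/helloworld-1.0-SNAPSHOT.jar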