
Environment variable configuration for upgrading Spark 1.6.3 to Spark 2.3.1

Since Spark 2.3 requires JDK 1.8, spark-submit and spark-sql must be told which JDK to use; the test cases below show the exact configuration.
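Before touching the configuration, it can help to confirm that JDK 1.8 is actually installed at the path used in the rest of this post (assuming the install location /usr/local/jdk1.8.0_161):

/usr/local/jdk1.8.0_161/bin/java -version

The output should report something like java version "1.8.0_161".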

1. Edit the per-user configuration file ~/.bashrc: comment out the old Spark and Java environment settings and add the new ones:

vi ~/.bashrc

#export SPARK_HOME=/opt/core/spark-1.6.3-bin-hadoop2.6

#export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera

export SPARK_HOME=/opt/core/spark-2.3.1-bin-hadoop2.6

export JAVA_HOME=/usr/local/jdk1.8.0_161

export PATH=${SPARK_HOME}/bin:${JAVA_HOME}/bin:$PATH

After saving, apply the changes by running: source ~/.bashrc

The configuration is now complete.
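As a quick sanity check that the new variables took effect, the following commands should now point at Spark 2.3.1 and JDK 1.8:

echo $SPARK_HOME        # should print /opt/core/spark-2.3.1-bin-hadoop2.6

java -version           # should report 1.8.0_161

spark-submit --version  # version banner should show 2.3.1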

2. Run the SparkPi test case (computing an approximation of pi):

spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --driver-memory 4g \
  --executor-memory 2g \
  --executor-cores 1 \
  --conf "spark.executorEnv.JAVA_HOME=/usr/local/jdk1.8.0_161" \
  --conf "spark.yarn.appMasterEnv.JAVA_HOME=/usr/local/jdk1.8.0_161" \
  --queue root.bigdata.statistics \
  /opt/core/spark/examples/jars/spark-examples*.jar 10
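If the job runs successfully under the new JDK, the driver output should contain a line of the form (the exact value varies between runs):

Pi is roughly 3.14...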

3. Launch the spark-sql example:

spark-sql --master yarn \
  --deploy-mode client \
  --driver-memory 2g \
  --executor-memory 8g \
  --executor-cores 4 \
  --conf "spark.executorEnv.JAVA_HOME=/usr/local/jdk1.8.0_161" \
  --conf "spark.yarn.appMasterEnv.JAVA_HOME=/usr/local/jdk1.8.0_161" \
  --queue root.bigdata.statistics
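Once the spark-sql> prompt appears, a trivial query is enough to confirm that the shell and its YARN executors are healthy, for example:

spark-sql> SELECT 1;

spark-sql> SHOW DATABASES;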