Pitfalls Encountered While Setting Up Spark
Original post (Chinese): https://mp.csdn.net/postedit/82423831
Problems like the ones below can have many causes; in my case several were triggered by swapping in a different Hive version while leaving Spark untouched, so the two versions no longer matched.
I. Lessons Learned
- Spark Streaming has three computation modes: non-stateful, stateful, and windowed.
- Kafka can be configured to use its bundled ZooKeeper cluster.
- Every Spark operation ultimately comes down to an operation on RDDs.
- When deploying a Spark job, there is no need to copy the whole jar; copy only the modified files, then compile and package on the target server.
- Do not point Kafka's log.dirs at a directory under /tmp; /tmp appears to have limits on file count and disk capacity.
- ES shards are analogous to Kafka partitions.
- Spark GraphX builds the graph from the edge collection; the vertex collection merely specifies which vertices in the graph are valid.
- There is no need to run a Presto cluster on YARN: Hadoop depends on HDFS and struggles when some machines have small disks, whereas Presto computes purely in memory and does not depend on disk. Installed standalone, it can span multiple clusters; wherever there is memory, there can be Presto.
- Once the Presto process starts, its JVM server keeps occupying memory.
- If Maven downloads are very slow, the default repository is probably unreachable from your network; add a domestic mirror under the mirrors tag of Maven's settings.xml, for example:

<mirror>
  <id>nexus-aliyun</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexus aliyun</name>
  <url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>
- When building Spark for Hive on Spark, do not add the -Phive flag; add -Phive only if you need Spark SQL to support Hive syntax.
- Check the Hive source pom.xml to find the compatible Spark version; only the major version needs to match, e.g. Spark 1.6.0 and 1.6.2 both fit.
- To check whether Hive is bound to Spark, open the Hive CLI and look for a log line such as "SLF4J: Found binding in [jar:file:/work/poa/hive-2.1.0-bin/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]".
- Kafka's consumer group ID has no effect with Spark direct streaming.
- Shuffle write: when a stage finishes computing, so that the next stage can run shuffle-type operators, the data handled by each task is classified by key and all records sharing a key are written to the same disk file; each disk file belongs to exactly one task of the downstream stage. Records are buffered in memory before being flushed to disk, and each task of the current stage creates as many disk files as the next stage has tasks.
- Do not give a single Spark job too many executor cores, or other jobs will be delayed.
- Data skew occurs only during a shuffle. Operators that may trigger a shuffle include distinct, groupByKey, reduceByKey, aggregateByKey, join, cogroup, repartition, and so on.
- Deleting the Hadoop data directories at runtime breaks every job that depends on HDFS.
- In a Spark SQL UDAF, the second parameter of update (input: Row) is not a DataFrame row but a row projected through inputSchema.
- The Spark driver receives results only when an action runs.
- When Spark needs a globally aggregated variable, use an accumulator (Accumulator).
- Kafka organizes consumption by topic and consumer group: each consumer group subscribed to a topic consumes all of that topic's messages. If you want one consumer to receive every message of a topic, make it the only member of its group. A group must not have more consumers than the topic has partitions, or the surplus consumers will have nothing to consume.
- All custom classes must implement the Serializable interface, or they will not work on the cluster.
- Files under resources should be read on the Spark driver and passed into closures as local variables.
- A DStream transformation only produces a temporary stream object; keep a reference to it if you want to use it again.
- Jobs submitted in yarn-cluster mode cannot print to the console; write output to a log file with log4j instead.
- HDFS paths are written as hdfs://master:9000/path, where master is the namenode's hostname and 9000 is the HDFS port.
- Do not format HDFS casually: it causes data version mismatches and many other problems. Empty the data directories before formatting.
- When building a cluster, configure the hostnames first and reboot the machines so they take effect.
- For passwordless SSH across many Linux machines, merge the public keys into a single authorized_keys file.
- Files smaller than 128 MB each still occupy a full 128 MB block; merge or delete small files to save space.
- "Non DFS Used" refers to all non-HDFS files on the datanode disks.
- Spark has two repartitioning methods, coalesce and repartition: the former is a narrow dependency and leaves the partitions uneven; the latter is a wide dependency that triggers a shuffle and produces even partitions.
- Writing data from Spark to ElasticSearch must be done per RDD inside an action.
- Hive on Spark performance can be tuned through hive-site.xml via spark.executor.instances, spark.executor.cores, spark.executor.memory and similar settings, though dynamic resource allocation is the better choice.
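The shuffle-write layout described in the bullet above can be sketched in a few lines of plain Python (an illustration of the file layout only, not Spark's actual implementation; the function name is made up for this sketch):

```python
def shuffle_write(map_outputs, num_reduce_tasks):
    """map_outputs: one (key, value) list per map task of the current stage.
    Returns a dict mapping (map_task_id, reduce_task_id) -> records,
    i.e. one 'disk file' per map task per downstream task."""
    files = {}
    for map_id, records in enumerate(map_outputs):
        for key, value in records:
            # Same key always hashes to the same downstream task's file,
            # no matter which map task produced the record.
            reduce_id = hash(key) % num_reduce_tasks
            files.setdefault((map_id, reduce_id), []).append((key, value))
    return files

map_outputs = [[("a", 1), ("b", 2)], [("a", 3), ("c", 4)]]
files = shuffle_write(map_outputs, num_reduce_tasks=3)
# Both occurrences of key "a" land in the same reduce bucket:
buckets_for_a = {rid for (mid, rid), recs in files.items()
                 for k, v in recs if k == "a"}
print(len(buckets_for_a))  # prints 1
```

With M map tasks and R downstream tasks this produces up to M * R files, which is exactly why the bullet warns that each current-stage task creates as many disk files as the next stage has tasks.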
II. Basics

0. Common problems:
1. Error: Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory. The project is missing the slf4j-api.jar and slf4j-log4j12.jar jars.
2. Error: java.lang.NoClassDefFoundError: org/apache/log4j/LogManager. The project is missing log4j.jar.
3. Error: Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.MDC.getCopyOfContextMap()Ljava/util/Map. Caused by conflicting jar versions.
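For the version-conflict case, Maven can show which dependency pulls in each slf4j artifact. A typical check, run from the project root:

```
mvn dependency:tree -Dincludes=org.slf4j
```

Excluding the stale artifact from the offending dependency in pom.xml then resolves the conflict.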
1. Running spark-submit (a CDH build) fails with:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
    at org.apache.spark.deploy.SparkSubmitArguments.handleUnknown(SparkSubmitArguments.scala:451)
    at org.apache.spark.launcher.SparkSubmitOptionParser.parse(SparkSubmitOptionParser.java:178)
    at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:97)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:113)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 5 more
Solution: add the following line to spark-env.sh:
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
2. When starting spark-shell, the log reports:
INFO cluster.YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:34965/user/Executor#1736210263] with ID 1
INFO util.RackResolver: Resolved services07 to /default-rack
INFO storage.BlockManagerMasterActor: Registering block manager services07:51154 with 534.5 MB RAM
Solution: in Spark's spark-env.sh, set export SPARK_WORKER_MEMORY, export SPARK_DRIVER_MEMORY, and export SPARK_YARN_AM_MEMORY to values smaller than 534.5 MB.
3. Starting Spark SQL fails with:
Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver ") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
Solution: configure the following in $SPARK_HOME/conf/spark-env.sh:
export SPARK_CLASSPATH=$HIVE_HOME/lib/mysql-connector-java-5.1.6-bin.jar
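As a side note, SPARK_CLASSPATH is deprecated in later Spark releases; a sketch of the usual replacement is a pair of entries in spark-defaults.conf (the `/path/to/hive` prefix is a placeholder for your actual Hive install, since spark-defaults.conf does not expand environment variables):

```
# Put the JDBC driver on both driver and executor classpaths.
spark.driver.extraClassPath    /path/to/hive/lib/mysql-connector-java-5.1.6-bin.jar
spark.executor.extraClassPath  /path/to/hive/lib/mysql-connector-java-5.1.6-bin.jar
```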
4. Starting Spark SQL fails with:
java.sql.SQLException: Access denied for user 'services02 '@'services02' (using password: YES)
Solution: check hive-site.xml for the following property:

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>password to use against metastore database</description>
</property>

and verify that this password matches the MySQL login password.
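A quick way to test the credentials from the command line (the host and user below are placeholders; take the real values from the javax.jdo.option.ConnectionURL and ConnectionUserName properties in hive-site.xml):

```
# If this login fails, fix either the MySQL grant or the hive-site.xml value.
mysql -h <metastore-db-host> -u <metastore-db-user> -p'123456' -e 'SELECT 1;'
```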
5. Launching a compute job fails with:
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [120 seconds]. This timeout is controlled by spark.rpc.askTimeout
Solution: not enough cores were allocated; assign a few more CPU cores to the job.
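If more cores are not an option, the timeout named in the message can also be raised directly in spark-defaults.conf (the values here are illustrative):

```
spark.rpc.askTimeout   600s
spark.network.timeout  600s
```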
6. A compute job makes no progress, repeatedly logging:
status.SparkJobMonitor: 2017-01-04 11:53:51,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:54,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:55,564 Stage-0_0: 0(+1)/1
status.SparkJobMonitor: 2017-01-04 11:53:56,564 Stage-0_0: 0(+1)/1
Solution: insufficient resources; allocate more memory (the default is 512 MB).
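A sketch of the corresponding settings, with illustrative sizes, in spark-defaults.conf (for Hive on Spark the same keys can also go into hive-site.xml):

```
spark.executor.memory  2g
spark.driver.memory    1g
```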
7. Starting Spark as the execution engine fails with:
java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "m1/192.168.179.201"; destination host is: "m1":9000;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1474)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:681)
17/01/06 11:01:43 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over m2/192.168.179.202:9000 after 9 fail over attempts. Trying to fail over immediately.
Solution: this error can have several causes. I hit it while running Hive on Spark, and the fix was to configure the following property correctly in hive-site.xml:
<property>
  <name>spark.yarn.jar</name>
  <value>hdfs://ns1/Jar/spark-assembly-1.6.0-hadoop2.6.0.jar</value>
</property>
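For this to work, the assembly jar must actually exist at the HDFS path named in the property. Assuming the jar sits in Spark's lib directory locally (`/path/to/spark` is a placeholder for your layout), it can be uploaded with:

```
hdfs dfs -mkdir -p hdfs://ns1/Jar
hdfs dfs -put /path/to/spark/lib/spark-assembly-1.6.0-hadoop2.6.0.jar hdfs://ns1/Jar/
```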
8. Starting the Spark cluster with start-master.sh fails with:
Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/Logger
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
    at java.lang.Class.getMethod0(Class.java:3018)
    at java.lang.Class.getMethod(Class.java:1784)
    at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 7 more
Solution: copy slf4j-api-1.7.5.jar, slf4j-log4j12-1.7.5.jar and commons-logging-1.1.3.jar from /home/centos/soft/hadoop/share/hadoop/common/lib to /home/centos/soft/spark/lib.
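The copy described above, as shell commands (the paths are the ones from the text):

```
cd /home/centos/soft/hadoop/share/hadoop/common/lib
cp slf4j-api-1.7.5.jar slf4j-log4j12-1.7.5.jar commons-logging-1.1.3.jar \
   /home/centos/soft/spark/lib/
```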
9. Starting the Spark cluster with start-master.sh fails with:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2570)
    at java.lang.Class.getMethod0(Class.java:2813)
    at java.lang.Class.getMethod(Class.java:1663)
    at sun.launcher.LauncherHelper.getMainMethod(LauncherHelper.java:494)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:486)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 6 more
Solution:
Official reference: https://spark.apache.org/docs/latest/hadoop-provided.html#apache-hadoop
Edit /home/centos/soft/spark/conf/spark-env.sh and add:
export SPARK_DIST_CLASSPATH=$(/home/centos/soft/hadoop/bin/hadoop classpath)
10. Running an HPL/SQL stored procedure fails with:
2017-01-10T15:20:18,491 ERROR [HiveServer2-Background-Pool: Thread-97] exec.TaskRunner: Error in executeTask
java.lang.OutOfMemoryError: PermGen space
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
2017-01-10T15:20:18,491 ERROR [HiveServer2-Background-Pool: Thread-97] ql.Driver: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. PermGen space
2017-01-10T15:20:18,491 INFO [HiveServer2-Background-Pool: Thread-97] ql.Driver: Completed executing command(queryId=centos_20170110152016_240c1b5e-3153-4179-80af-9688fa7674dd); Time taken: 2.113 seconds
2017-01-10T15:20:18,500 ERROR [HiveServer2-Background-Pool: Thread-97] operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. PermGen space
    at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:388)
    at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:244)
    at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
Caused by: java.lang.OutOfMemoryError: PermGen space
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
Solution:
Reference: http://blog.csdn.net/xiao_jun_0820/article/details/45038205
The problem occurs because Spark uses all available resources by default and the host's memory was exhausted; cap the memory size in the Spark configuration. In hive-site.xml, configure:
<property>
  <name>spark.driver.extraJavaOptions</name>
  <value>-XX:PermSize=128M -XX:MaxPermSize=512M</value>
</property>
Or configure it in spark-defaults.conf:
spark.driver.extraJavaOptions -XX:PermSize=128M -XX:MaxPermSize=256M