
Fixing spark-shell on YARN: "Yarn application already ended, might be killed or not able to launch application master"

Today I tried to run spark-shell in yarn-client mode, and it failed with an error:

[hadoop@localhost spark-1.0.1-bin-hadoop2]$ bin/spark-shell --master yarn-client
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/22 17:28:46 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:28:46 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:28:46 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:46 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:46 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:49827
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_55)
Type in expressions to have them evaluated.
Type :help for more information.
14/07/22 17:28:51 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

14/07/22 17:28:51 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:28:51 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:28:51 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:28:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:28:51 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/07/22 17:28:51 INFO Remoting: Starting remoting
14/07/22 17:28:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@localhost:41257]
14/07/22 17:28:51 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@localhost:41257]
14/07/22 17:28:51 INFO spark.SparkEnv: Registering MapOutputTracker
14/07/22 17:28:51 INFO spark.SparkEnv: Registering BlockManagerMaster
14/07/22 17:28:51 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140722172851-5d58
14/07/22 17:28:51 INFO storage.MemoryStore: MemoryStore started with capacity 294.9 MB.
14/07/22 17:28:51 INFO network.ConnectionManager: Bound socket to port 36159 with id = ConnectionManagerId(localhost,36159)
14/07/22 17:28:51 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/07/22 17:28:51 INFO storage.BlockManagerInfo: Registering block manager localhost:36159 with 294.9 MB RAM
14/07/22 17:28:51 INFO storage.BlockManagerMaster: Registered BlockManager
14/07/22 17:28:51 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:51 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:57197
14/07/22 17:28:51 INFO broadcast.HttpBroadcast: Broadcast server started at http://localhost:57197
14/07/22 17:28:51 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-9b5a359c-37cf-4530-85d6-fcdbc534bc84
14/07/22 17:28:51 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:28:51 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:51 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:34888
14/07/22 17:28:52 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:28:52 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
14/07/22 17:28:52 INFO ui.SparkUI: Started SparkUI at http://localhost:4040
14/07/22 17:28:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
--args is deprecated. Use --arg instead.
14/07/22 17:28:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/07/22 17:28:53 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
14/07/22 17:28:53 INFO yarn.Client: Queue info ... queueName: default, queueCurrentCapacity: 0.0, queueMaxCapacity: 1.0,
      queueApplicationCount = 1, queueChildQueueCount = 0
14/07/22 17:28:53 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 8192
14/07/22 17:28:53 INFO yarn.Client: Preparing Local resources
14/07/22 17:28:53 INFO yarn.Client: Uploading file:/home/hadoop/spark/assembly/target/scala-2.10/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar to hdfs://localhost:9000/user/hadoop/.sparkStaging/application_1406018656679_0002/spark-assembly_2.10-0.9.1-hadoop2.2.0.jar
14/07/22 17:28:54 INFO yarn.Client: Setting up the launch environment
14/07/22 17:28:54 INFO yarn.Client: Setting up container launch context
14/07/22 17:28:54 INFO yarn.Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName=\"spark-10325217-bdb0-4213-8ae8-329940b98b95\", -Dspark.yarn.secondary.jars=\"\", -Dspark.home=\"/home/hadoop/spark\", -Dspark.repl.class.uri=\"http://localhost:49827\", -Dspark.driver.host=\"localhost\", -Dspark.app.name=\"Spark shell\", -Dspark.jars=\"\", -Dspark.fileserver.uri=\"http://localhost:34888\", -Dspark.executor.extraClassPath=\"/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar\", -Dspark.master=\"yarn-client\", -Dspark.driver.port=\"41257\", -Dspark.driver.extraClassPath=\"/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar\", -Dspark.httpBroadcast.uri=\"http://localhost:57197\", -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null, --args 'localhost:41257' , --executor-memory, 1024, --executor-cores, 1, --num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/07/22 17:28:54 INFO yarn.Client: Submitting application to ASM
14/07/22 17:28:54 INFO impl.YarnClientImpl: Submitted application application_1406018656679_0002 to ResourceManager at /0.0.0.0:8032
14/07/22 17:28:54 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: ACCEPTED

14/07/22 17:28:55 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: ACCEPTED

14/07/22 17:28:56 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: ACCEPTED

14/07/22 17:28:57 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: ACCEPTED

14/07/22 17:28:58 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: ACCEPTED

14/07/22 17:28:59 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021334568
	 yarnAppState: FAILED

org.apache.spark.SparkException: Yarn application already ended,might be killed or not able to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApp(YarnClientSchedulerBackend.scala:105)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:82)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:136)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:318)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:957)
    at $iwC$$iwC.<init>(<console>:8)
    at $iwC.<init>(<console>:14)
    at <init>(<console>:16)
    at .<init>(<console>:20)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:121)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:120)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:263)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:120)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:913)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:142)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:104)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:56)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:930)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:303)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Spark context available as sc.
Checking the submitted applications in the YARN web UI on port 8088 showed them as failed:


Applications 0001 and 0002 had failed.
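The same check works from the command line; assuming the yarn client script is on the PATH, querying one of the failed applications looks like this (a sketch, not from the original session):

[hadoop@localhost spark-1.0.1-bin-hadoop2]$ yarn application -status application_1406018656679_0002
# Prints an application report; look at the State and Final-State fields,
# which read FINISHED/FAILED (or KILLED) for a dead application.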

From here you can click the Tracking UI link on the right side of each application to view the job's history.

Clicking through leads to this screen:

This gives a first hint: the failure happened while calling runWorker. Still not detailed enough. Below that, there are links to the ApplicationMaster's logs, so we click through:

There are two logs, stdout and stderr. stdout is empty, so naturally we open stderr:

The log reads:

Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher
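If you prefer staying in the terminal, the same container logs can usually be pulled with the yarn CLI, provided log aggregation is enabled (yarn.log-aggregation-enable in yarn-site.xml); again a sketch, not part of the original session:

[hadoop@localhost spark-1.0.1-bin-hadoop2]$ yarn logs -applicationId application_1406018656679_0002
# Dumps the stdout/stderr of every container for the application,
# including the stderr of the failed ApplicationMaster shown above.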

So the class simply cannot be found, which immediately suggests that the Spark assembly jar was never exported via SPARK_JAR. The "Uploading" line in the failed run confirms it: the client picked up a stale spark-assembly_2.10-0.9.1 jar from an old build tree under /home/hadoop/spark/assembly, and that 0.9.x assembly predates the ExecutorLauncher class name used by 1.0.1 (the client-mode AM class was named differently in 0.9.x, consistent with the runWorker hint above).
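To double-check that diagnosis, list the assembly jar that ships with the 1.0.1 distribution and confirm the class really is in it (a quick sanity check; paths follow the install above):

[hadoop@localhost spark-1.0.1-bin-hadoop2]$ jar tf lib/spark-assembly-1.0.1-hadoop2.2.0.jar | grep ExecutorLauncher
# Expect org/apache/spark/deploy/yarn/ExecutorLauncher.class (plus inner
# classes) in the output; if grep prints nothing, the jar is the wrong one.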

Export the assembly jar first, then run on YARN again, and the problem is gone:

[hadoop@localhost spark-1.0.1-bin-hadoop2]$ export SPARK_JAR=lib/spark-assembly-1.0.1-hadoop2.2.0.jar
[hadoop@localhost spark-1.0.1-bin-hadoop2]$ bin/spark-shell --master yarn-client
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/22 17:34:02 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:34:02 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:34:02 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:34:02 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:34:02 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:51297
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.0.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_55)
Type in expressions to have them evaluated.
Type :help for more information.
14/07/22 17:34:07 WARN spark.SparkConf: 
SPARK_CLASSPATH was detected (set to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath
        
14/07/22 17:34:07 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:34:07 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar' as a work-around.
14/07/22 17:34:07 INFO spark.SecurityManager: Changing view acls to: hadoop
14/07/22 17:34:07 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop)
14/07/22 17:34:07 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/07/22 17:34:07 INFO Remoting: Starting remoting
14/07/22 17:34:07 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@localhost:58666]
14/07/22 17:34:07 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@localhost:58666]
14/07/22 17:34:07 INFO spark.SparkEnv: Registering MapOutputTracker
14/07/22 17:34:07 INFO spark.SparkEnv: Registering BlockManagerMaster
14/07/22 17:34:07 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20140722173407-9c9c
14/07/22 17:34:07 INFO storage.MemoryStore: MemoryStore started with capacity 294.9 MB.
14/07/22 17:34:07 INFO network.ConnectionManager: Bound socket to port 41701 with id = ConnectionManagerId(localhost,41701)
14/07/22 17:34:07 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/07/22 17:34:07 INFO storage.BlockManagerInfo: Registering block manager localhost:41701 with 294.9 MB RAM
14/07/22 17:34:07 INFO storage.BlockManagerMaster: Registered BlockManager
14/07/22 17:34:07 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:34:07 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:34:07 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:52090
14/07/22 17:34:07 INFO broadcast.HttpBroadcast: Broadcast server started at http://localhost:52090
14/07/22 17:34:07 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-c4e1f63c-c50a-49af-bda5-580eabeff77c
14/07/22 17:34:07 INFO spark.HttpServer: Starting HTTP Server
14/07/22 17:34:07 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:34:07 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:38401
14/07/22 17:34:08 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/07/22 17:34:08 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
14/07/22 17:34:08 INFO ui.SparkUI: Started SparkUI at http://localhost:4040
14/07/22 17:34:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
--args is deprecated. Use --arg instead.
14/07/22 17:34:08 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/07/22 17:34:09 INFO yarn.Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
14/07/22 17:34:09 INFO yarn.Client: Queue info ... queueName: default, queueCurrentCapacity: 0.0, queueMaxCapacity: 1.0,
      queueApplicationCount = 2, queueChildQueueCount = 0
14/07/22 17:34:09 INFO yarn.Client: Max mem capabililty of a single resource in this cluster 8192
14/07/22 17:34:09 INFO yarn.Client: Preparing Local resources
14/07/22 17:34:09 INFO yarn.Client: Uploading file:/home/hadoop/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar to hdfs://localhost:9000/user/hadoop/.sparkStaging/application_1406018656679_0003/spark-assembly-1.0.1-hadoop2.2.0.jar
14/07/22 17:34:12 INFO yarn.Client: Setting up the launch environment
14/07/22 17:34:12 INFO yarn.Client: Setting up container launch context
14/07/22 17:34:12 INFO yarn.Client: Command for starting the Spark ApplicationMaster: List($JAVA_HOME/bin/java, -server, -Xmx512m, -Djava.io.tmpdir=$PWD/tmp, -Dspark.tachyonStore.folderName=\"spark-9c1f20d9-47ba-42e7-8914-057a19e7659f\", -Dspark.yarn.secondary.jars=\"\", -Dspark.home=\"/home/hadoop/spark\", -Dspark.repl.class.uri=\"http://localhost:51297\", -Dspark.driver.host=\"localhost\", -Dspark.app.name=\"Spark shell\", -Dspark.jars=\"\", -Dspark.fileserver.uri=\"http://localhost:38401\", -Dspark.executor.extraClassPath=\"/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar\", -Dspark.master=\"yarn-client\", -Dspark.driver.port=\"58666\", -Dspark.driver.extraClassPath=\"/home/hadoop/spark-1.0.1-bin-hadoop2/lib/*.jar\", -Dspark.httpBroadcast.uri=\"http://localhost:52090\",  -Dlog4j.configuration=log4j-spark-container.properties, org.apache.spark.deploy.yarn.ExecutorLauncher, --class, notused, --jar , null,  --args  'localhost:58666' , --executor-memory, 1024, --executor-cores, 1, --num-executors , 2, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
14/07/22 17:34:12 INFO yarn.Client: Submitting application to ASM
14/07/22 17:34:12 INFO impl.YarnClientImpl: Submitted application application_1406018656679_0003 to ResourceManager at /0.0.0.0:8032
14/07/22 17:34:12 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:13 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:14 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:15 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:16 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:17 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: ACCEPTED

14/07/22 17:34:18 INFO cluster.YarnClientSchedulerBackend: Application report from ASM: 
	 appMasterRpcPort: 0
	 appStartTime: 1406021652123
	 yarnAppState: RUNNING

14/07/22 17:34:20 INFO cluster.YarnClientClusterScheduler: YarnClientClusterScheduler.postStartHook done
14/07/22 17:34:21 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

scala> 14/07/22 17:34:25 INFO cluster.YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@localhost:58394/user/Executor#1230717717] with ID 1
14/07/22 17:34:27 INFO cluster.YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@localhost:39934/user/Executor#520226618] with ID 2
14/07/22 17:34:28 INFO storage.BlockManagerInfo: Registering block manager localhost:52134 with 589.2 MB RAM
14/07/22 17:34:28 INFO storage.BlockManagerInfo: Registering block manager localhost:58914 with 589.2 MB RAM


scala> 

scala> 
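Before moving on, a trivial job (not part of the original session) makes a decent smoke test that the two registered executors actually run tasks; the output below is what a healthy run would typically print:

scala> sc.parallelize(1 to 100).count()
res0: Long = 100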

The web UI now shows the result:

Application 0003 shows as RUNNING, and we can happily get back to work.
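Rather than exporting SPARK_JAR by hand in every shell, it can be set once in conf/spark-env.sh, which the launch scripts source on startup; the absolute path below is an assumption based on the install location used throughout this post:

# conf/spark-env.sh -- sourced by bin/spark-shell and bin/spark-submit
export SPARK_JAR=/home/hadoop/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.2.0.jar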
