
Spark startup error -- hostname alias

Xshell 4 (Build 0127)
Copyright (c) 2002-2013 NetSarang Computer, Inc. All rights reserved.


Type `help' to learn how to use Xshell prompt.
Xshell:\> 


Connecting to 192.168.48.6:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.


[hadoop@cdh3 ~]$ ping www.baidu.com
ping: unknown host www.baidu.com
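
Even a public name like www.baidu.com fails here, so name resolution is broken in general, not just for one host. Two quick checks, assuming a stock CentOS-style setup like this box:

cat /etc/resolv.conf          # is any nameserver configured at all?
getent hosts www.baidu.com    # empty output means the lookup itself fails, independent of ping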
[hadoop@cdh3 ~]$ su root
Password: 
[root@cdh3 hadoop]# spark-shell
16/07/03 02:34:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/03 02:34:42 INFO spark.SecurityManager: Changing view acls to: root
16/07/03 02:34:42 INFO spark.SecurityManager: Changing modify acls to: root
16/07/03 02:34:42 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/03 02:34:42 INFO spark.HttpServer: Starting HTTP Server
16/07/03 02:34:42 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/03 02:34:42 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:46734
16/07/03 02:34:42 INFO util.Utils: Successfully started service 'HTTP class server' on port 46734.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/


Using Scala version 2.10.4 (Java HotSpot(TM) Client VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.
16/07/03 02:35:00 WARN util.Utils: Your hostname, cdh3 resolves to a loopback address: 127.0.0.1; using 192.168.48.6 instead (on interface eth0)
16/07/03 02:35:00 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/03 02:35:00 INFO spark.SparkContext: Running Spark version 1.4.0
16/07/03 02:35:00 INFO spark.SecurityManager: Changing view acls to: root
16/07/03 02:35:00 INFO spark.SecurityManager: Changing modify acls to: root
16/07/03 02:35:00 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/03 02:35:03 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/07/03 02:35:04 INFO Remoting: Starting remoting
16/07/03 02:35:05 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.48.6:48327]
16/07/03 02:35:05 INFO util.Utils: Successfully started service 'sparkDriver' on port 48327.
16/07/03 02:35:05 INFO spark.SparkEnv: Registering MapOutputTracker
16/07/03 02:35:05 INFO spark.SparkEnv: Registering BlockManagerMaster
16/07/03 02:35:06 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-1a40975b-2ec9-4b68-aaa4-b2e30de95143/blockmgr-4ae6edc5-4604-4e8e-a5e6-15e87eff1b5b
16/07/03 02:35:06 INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
16/07/03 02:35:06 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1a40975b-2ec9-4b68-aaa4-b2e30de95143/httpd-7b58fb35-3c97-4301-807d-10939d208cf3
16/07/03 02:35:06 INFO spark.HttpServer: Starting HTTP Server
16/07/03 02:35:06 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/03 02:35:06 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:55711
16/07/03 02:35:06 INFO util.Utils: Successfully started service 'HTTP file server' on port 55711.
16/07/03 02:35:06 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/07/03 02:35:07 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/07/03 02:35:08 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/03 02:35:08 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/07/03 02:35:08 INFO ui.SparkUI: Started SparkUI at http://192.168.48.6:4040
16/07/03 02:35:08 INFO executor.Executor: Starting executor ID driver on host localhost
16/07/03 02:35:08 INFO executor.Executor: Using REPL class URI: http://192.168.48.6:46734
16/07/03 02:35:09 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38759.
16/07/03 02:35:09 INFO netty.NettyBlockTransferService: Server created on 38759
16/07/03 02:35:09 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/07/03 02:35:09 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:38759 with 267.3 MB RAM, BlockManagerId(driver, localhost, 38759)
16/07/03 02:35:09 INFO storage.BlockManagerMaster: Registered BlockManager
16/07/03 02:35:10 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.
16/07/03 02:35:15 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
16/07/03 02:35:18 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/07/03 02:35:18 INFO metastore.ObjectStore: ObjectStore, initialize called
16/07/03 02:35:19 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/07/03 02:35:19 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/07/03 02:35:20 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/07/03 02:35:23 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
16/07/03 02:35:30 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/07/03 02:35:30 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
16/07/03 02:35:32 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/07/03 02:35:32 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/07/03 02:35:35 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/07/03 02:35:35 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/07/03 02:35:36 INFO metastore.ObjectStore: Initialized ObjectStore
16/07/03 02:35:36 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:105)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:163)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:161)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:167)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:130)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 56 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 61 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: cdh1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:112)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:144)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:504)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 66 more
Caused by: java.net.UnknownHostException: cdh1
... 92 more


<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^


scala> Stopping spark context.
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/03 02:50:43 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/03 02:50:43 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.48.6:4040
16/07/03 02:50:43 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/07/03 02:50:43 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/03 02:50:43 INFO util.Utils: path = /tmp/spark-1a40975b-2ec9-4b68-aaa4-b2e30de95143/blockmgr-4ae6edc5-4604-4e8e-a5e6-15e87eff1b5b, already present as root for deletion.
16/07/03 02:50:43 INFO storage.MemoryStore: MemoryStore cleared
16/07/03 02:50:43 INFO storage.BlockManager: BlockManager stopped
16/07/03 02:50:43 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/07/03 02:50:43 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/03 02:50:43 INFO spark.SparkContext: Successfully stopped SparkContext
16/07/03 02:50:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/03 02:50:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/03 02:50:43 INFO util.Utils: Shutdown hook called
16/07/03 02:50:43 INFO util.Utils: Deleting directory /tmp/spark-b96a6954-a15b-455c-8a07-3783f423b671
16/07/03 02:50:43 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/07/03 02:50:43 INFO util.Utils: Deleting directory /tmp/spark-1a40975b-2ec9-4b68-aaa4-b2e30de95143
16/07/03 02:50:43 INFO util.Utils: Deleting directory /tmp/spark-cdcb324d-b8d6-4c47-bad9-1f3c3b671388
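
The stack trace above bottoms out in java.net.UnknownHostException: cdh1: while creating sqlContext, the Hive metastore tries to open its HDFS warehouse on a NameNode called cdh1, and that name no longer resolves on this box, so HiveMetaStoreClient (and with it sqlContext) never comes up. A quick way to locate the stale name, using the paths from this session:

getent hosts cdh1                                            # empty output: the alias does not resolve
grep -rn "cdh1" /user/local/hadoop-2.6.0/etc/hadoop/*.xml    # which Hadoop config still points at it
grep -rn "cdh1" /user/local/spark-1.4.0-bin-hadoop2.6/conf/  # and Spark's own conf, as it turns out later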
[root@cdh3 hadoop]# cd $SPARK_HOME/bin
[root@cdh3 bin]# cat /etc/hosts
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 name01
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 cdh3
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#10.99.174.85 hadoop.example.com hadoop
#192.168.0.101 name01
#192.168.0.102 data01
#192.168.0.103 data02
192.168.48.6 cdh3
#192.168.111.126 cdh1
#192.168.111.127 cdh2
#192.168.111.128 cdh3
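
Note that the only active mapping is "192.168.48.6 cdh3"; every line that mentions cdh1 is commented out, which is exactly why nothing on this host can resolve it. Two ways forward: point the Hadoop/Spark configs at cdh3 (what the rest of this session does), or restore the alias so cdh1 resolves again. A minimal sketch of the second option, assuming cdh1 is meant to be this same machine:

echo "192.168.48.6 cdh1" >> /etc/hosts
ping -c 1 cdh1        # should now answer from 192.168.48.6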
[root@cdh3 bin]# cat /etc/profile
# /etc/profile


# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc




#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
#export HADOOP_HOME=/user/local/hadoop
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH


#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HIVE_HOME=/user/local/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
#export TOMCAT_HOME=/root/solr/apache-tomcat-6.0.37
#export JRE_HOME=$JAVA_HOME/jre
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH


#FLUME
#export FLUME_HOME=/usr/local/hadoop/flume/apache-flume-1.5.0-bin
#export FLUME_CONF_DIR=$FLUME_HOME/conf
#export PATH=$PATH:$FLUME_HOME/bin


#mvn
export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin


#scala
export SCALA_HOME=/user/local/scala-2.9.3  
export PATH=$PATH:$SCALA_HOME/bin
#spark
export SPARK_HOME=/user/local/spark-1.4.0-bin-hadoop2.6  
export PATH=$PATH:$SPARK_HOME/bin


#hbase
export HBASE_HOME=/user/local/hbase-0.98.20-hadoop2
export PATH=$PATH:$HBASE_HOME/bin


#zk
export ZOOKEEPER_HOME=/user/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin


#storm
export STORM_HOME=/user/local/apache-storm-0.9.2-incubating
export PATH=$PATH:$STORM_HOME/bin


#kafaka
export KAFKA_HOME=/user/local/kafka_2.9.2-0.8.1.1
export PATH=$PATH:$KAFKA_HOME/bin


[root@cdh3 bin]#






===================================
===================================


Connecting to 192.168.48.6:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.


Last login: Sun Jul  3 02:28:29 2016 from 192.168.48.1
[root@cdh3 ~]# jps
4007 SparkSubmit
4177 Jps
[root@cdh3 ~]# jps
4007 SparkSubmit
4204 Jps
[root@cdh3 ~]# cat /etc/hosts
#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 name01
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 cdh3
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#10.99.174.85 hadoop.example.com hadoop
#192.168.0.101 name01
#192.168.0.102 data01
#192.168.0.103 data02
192.168.48.6 cdh3
#192.168.111.126 cdh1
#192.168.111.127 cdh2
#192.168.111.128 cdh3
[root@cdh3 ~]# jps
4007 SparkSubmit
4240 Jps
[root@cdh3 ~]# hive
-bash: hive: command not found
[root@cdh3 ~]# echo $HIVE_HOME
/user/local/hive
[root@cdh3 ~]# cd /user/local/hive
-bash: cd: /user/local/hive: No such file or directory
[root@cdh3 ~]# cd /user/local/
[root@cdh3 local]# ls -l
total 1113216
drwxr-xr-x 11 root   root         4096 Jun 28 05:50 apache-storm-0.9.2-incubating
drwxr-xr-x  5 root   root         4096 Apr 21  2012 examples-ch02-getting_started-master
drwxr-xr-x 10 root   root         4096 Jun 16 16:19 hadoop-2.6.0
drwxr-xr-x  9 root   root         4096 Jun 27 08:40 hbase-0.98.20-hadoop2
drwxr-xr-x  8 root   root         4096 Jun 16 02:50 jdk
drwxr-xr-x 10 root   root         4096 Jun 27 23:49 jzmq
drwxr-xr-x  6 root   root         4096 Jun 28 15:58 kafka_2.9.2-0.8.1.1
drwxr-x--- 18   1000   1002       4096 Jun 27 23:56 Python-2.7.2
drwxr-xr-x  8    119    129       4096 Jun 17 08:35 scala-2.9.3
drwxr-xr-x 13 hadoop hadoop       4096 Jun 17 09:03 spark-1.4.0-bin-hadoop2.6
drwxr-xr-x  2 root   root         4096 Jun 28 23:02 src_old
-rwxr-xr-x  1 root   root   1203927040 Jun 28 18:19 src_old.tar
drwxr-xr-x  4 root   root         4096 Jun 28 18:04 src_temp
drwxrwxr-x 11   1000   1000       4096 Jun 27 23:44 zeromq-4.1.2
drwxr-xr-x 11   1000   1000       4096 Jun 28 03:56 zookeeper-3.4.6
[root@cdh3 local]# find / -name cdh1
[root@cdh3 local]# cat /etc/profile
# /etc/profile


# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc


# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.




#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
#export HADOOP_HOME=/user/local/hadoop
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH


#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HIVE_HOME=/user/local/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
#export TOMCAT_HOME=/root/solr/apache-tomcat-6.0.37
#export JRE_HOME=$JAVA_HOME/jre
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH


#FLUME
#export FLUME_HOME=/usr/local/hadoop/flume/apache-flume-1.5.0-bin
#export FLUME_CONF_DIR=$FLUME_HOME/conf
#export PATH=$PATH:$FLUME_HOME/bin


#mvn
export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin


#scala
export SCALA_HOME=/user/local/scala-2.9.3  
export PATH=$PATH:$SCALA_HOME/bin
#spark
export SPARK_HOME=/user/local/spark-1.4.0-bin-hadoop2.6  
export PATH=$PATH:$SPARK_HOME/bin


#hbase
export HBASE_HOME=/user/local/hbase-0.98.20-hadoop2
export PATH=$PATH:$HBASE_HOME/bin


#zk
export ZOOKEEPER_HOME=/user/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin


#storm
export STORM_HOME=/user/local/apache-storm-0.9.2-incubating
export PATH=$PATH:$STORM_HOME/bin


#kafaka
export KAFKA_HOME=/user/local/kafka_2.9.2-0.8.1.1
export PATH=$PATH:$KAFKA_HOME/bin


[root@cdh3 local]# pwd
/user/local
[root@cdh3 local]# ll
total 1113216
drwxr-xr-x 11 root   root         4096 Jun 28 05:50 apache-storm-0.9.2-incubating
drwxr-xr-x  5 root   root         4096 Apr 21  2012 examples-ch02-getting_started-master
drwxr-xr-x 10 root   root         4096 Jun 16 16:19 hadoop-2.6.0
drwxr-xr-x  9 root   root         4096 Jun 27 08:40 hbase-0.98.20-hadoop2
drwxr-xr-x  8 root   root         4096 Jun 16 02:50 jdk
drwxr-xr-x 10 root   root         4096 Jun 27 23:49 jzmq
drwxr-xr-x  6 root   root         4096 Jun 28 15:58 kafka_2.9.2-0.8.1.1
drwxr-x--- 18   1000   1002       4096 Jun 27 23:56 Python-2.7.2
drwxr-xr-x  8    119    129       4096 Jun 17 08:35 scala-2.9.3
drwxr-xr-x 13 hadoop hadoop       4096 Jun 17 09:03 spark-1.4.0-bin-hadoop2.6
drwxr-xr-x  2 root   root         4096 Jun 28 23:02 src_old
-rwxr-xr-x  1 root   root   1203927040 Jun 28 18:19 src_old.tar
drwxr-xr-x  4 root   root         4096 Jun 28 18:04 src_temp
drwxrwxr-x 11   1000   1000       4096 Jun 27 23:44 zeromq-4.1.2
drwxr-xr-x 11   1000   1000       4096 Jun 28 03:56 zookeeper-3.4.6
[root@cdh3 local]# service mysqld status
mysqld (pid  1970) is running...
[root@cdh3 local]# cat /etc/profile
# /etc/profile


# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc




#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
#export HADOOP_HOME=/user/local/hadoop
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH


#Hadoop Env
export HADOOP_HOME_WARN_SUPPRESS=1
export JAVA_HOME=/user/local/jdk
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HIVE_HOME=/user/local/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME:/bin:$PATH
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
#export TOMCAT_HOME=/root/solr/apache-tomcat-6.0.37
#export JRE_HOME=$JAVA_HOME/jre
#export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HIVE_HOME/bin:$PATH


#FLUME
#export FLUME_HOME=/usr/local/hadoop/flume/apache-flume-1.5.0-bin
#export FLUME_CONF_DIR=$FLUME_HOME/conf
#export PATH=$PATH:$FLUME_HOME/bin


#mvn
export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin


#scala
export SCALA_HOME=/user/local/scala-2.9.3  
export PATH=$PATH:$SCALA_HOME/bin
#spark
export SPARK_HOME=/user/local/spark-1.4.0-bin-hadoop2.6  
export PATH=$PATH:$SPARK_HOME/bin


#hbase
export HBASE_HOME=/user/local/hbase-0.98.20-hadoop2
export PATH=$PATH:$HBASE_HOME/bin


#zk
export ZOOKEEPER_HOME=/user/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin


#storm
export STORM_HOME=/user/local/apache-storm-0.9.2-incubating
export PATH=$PATH:$STORM_HOME/bin


#kafaka
export KAFKA_HOME=/user/local/kafka_2.9.2-0.8.1.1
export PATH=$PATH:$KAFKA_HOME/bin


[root@cdh3 local]# spark -version
-bash: spark: command not found
[root@cdh3 local]# jps
4543 MainGenericRunner
4580 Jps
4485 MainGenericRunner
[root@cdh3 local]# cd /user/local/spark-1.4.0-bin-hadoop2.6/
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/07/03 03:01:34 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Starting namenodes on [cdh1]
cdh1: ssh: Could not resolve hostname cdh1: Temporary failure in name resolution
localhost: starting datanode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-datanode-cdh3.out
Starting secondary namenodes [cdh1]
cdh1: ssh: Could not resolve hostname cdh1: Temporary failure in name resolution
starting yarn daemons
starting resourcemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-cdh3.out
localhost: starting nodemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-nodemanager-cdh3.out
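
start-dfs.sh picks the NameNode hosts from the Hadoop configuration, not from /etc/hosts, which is why it still insists on cdh1 here. Assuming HADOOP_HOME/bin is on PATH as set in /etc/profile above, the value it resolved can be checked directly:

hdfs getconf -namenodes      # prints the host(s) start-dfs.sh will ssh to for the NameNode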
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
4543 MainGenericRunner
5029 NodeManager
4742 DataNode
4937 ResourceManager
5063 Jps
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# cd /user/local/hadoop-2.6.0/
[root@cdh3 hadoop-2.6.0]# ll
total 56
drwxr-xr-x 2 root root  4096 Jun 16 15:38 bin
drwxr-xr-x 3 root root  4096 Jun 16 15:37 etc
drwxr-xr-x 2 root root  4096 Jun 16 15:38 include
drwxr-xr-x 3 root root  4096 Jun 16 18:04 lib
drwxr-xr-x 2 root root  4096 Jun 16 15:38 libexec
-rw-r--r-- 1 root root 15429 Jun 16 15:37 LICENSE.txt
drwxr-xr-x 3 root root  4096 Jul  3 03:02 logs
-rw-r--r-- 1 root root   101 Jun 16 15:37 NOTICE.txt
-rw-r--r-- 1 root root  1366 Jun 16 15:37 README.txt
drwxr-xr-x 2 root root  4096 Jun 16 15:38 sbin
drwxr-xr-x 4 root root  4096 Jun 16 15:37 share
[root@cdh3 hadoop-2.6.0]# cd etc/
[root@cdh3 etc]# ll
total 4
drwxr-xr-x 2 root root 4096 Jun 17 09:38 hadoop
[root@cdh3 etc]# cd hadoop/
[root@cdh3 hadoop]# ll
total 160
-rw-r--r-- 1 root root  4436 Jun 16 16:13 capacity-scheduler.xml
-rw-r--r-- 1 root root  1335 Jun 16 16:13 configuration.xsl
-rw-r--r-- 1 root root   318 Jun 16 16:13 container-executor.cfg
-rw-r--r-- 1 root root  1415 Jun 16 16:13 core-site.xml
-rw-r--r-- 1 root root   721 Jun 17 10:08 derby.log
-rw-r--r-- 1 root root  3670 Jun 16 16:13 hadoop-env.cmd
-rw-r--r-- 1 root root  4257 Jun 16 16:52 hadoop-env.sh
-rw-r--r-- 1 root root  2598 Jun 16 16:13 hadoop-metrics2.properties
-rw-r--r-- 1 root root  2490 Jun 16 16:13 hadoop-metrics.properties
-rw-r--r-- 1 root root  9683 Jun 16 16:13 hadoop-policy.xml
-rw-r--r-- 1 root root  1259 Jun 16 17:49 hdfs-site.xml
-rw-r--r-- 1 root root  1449 Jun 16 16:13 httpfs-env.sh
-rw-r--r-- 1 root root  1657 Jun 16 16:13 httpfs-log4j.properties
-rw-r--r-- 1 root root    21 Jun 16 16:13 httpfs-signature.secret
-rw-r--r-- 1 root root   620 Jun 16 16:13 httpfs-site.xml
-rw-r--r-- 1 root root  3523 Jun 16 16:13 kms-acls.xml
-rw-r--r-- 1 root root  1325 Jun 16 16:13 kms-env.sh
-rw-r--r-- 1 root root  1631 Jun 16 16:13 kms-log4j.properties
-rw-r--r-- 1 root root  5511 Jun 16 16:13 kms-site.xml
-rw-r--r-- 1 root root 11291 Jun 16 16:13 log4j.properties
-rw-r--r-- 1 root root   938 Jun 16 16:13 mapred-env.cmd
-rw-r--r-- 1 root root  1383 Jun 16 16:13 mapred-env.sh
-rw-r--r-- 1 root root  4113 Jun 16 16:13 mapred-queues.xml.template
-rw-r--r-- 1 root root  1064 Jun 16 16:13 mapred-site.xml
-rw-r--r-- 1 root root   758 Jun 16 16:13 mapred-site.xml.template
-rw-r--r-- 1 root root   216 Jun 16 17:40 slaves
-rw-r--r-- 1 root root  2316 Jun 16 16:13 ssl-client.xml.example
-rw-r--r-- 1 root root  2268 Jun 16 16:13 ssl-server.xml.example
-rw-r--r-- 1 root root  2237 Jun 16 16:13 yarn-env.cmd
-rw-r--r-- 1 root root  4556 Jun 16 16:13 yarn-env.sh
-rw-r--r-- 1 root root  1506 Jun 16 16:13 yarn-site.xml
[root@cdh3 hadoop]# vim hdfs-site.xml
[root@cdh3 hadoop]# jps
4543 MainGenericRunner
5188 Jps
5029 NodeManager
4742 DataNode
4485 MainGenericRunner
[root@cdh3 hadoop]# pwd
/user/local/hadoop-2.6.0/etc/hadoop
[root@cdh3 hadoop]# cd /user/local/spark-1.4.0-bin-hadoop2.6/
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
16/07/03 03:04:35 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Stopping namenodes on [cdh1]
cdh1: ssh: Could not resolve hostname cdh1: Temporary failure in name resolution
localhost: stopping datanode
Stopping secondary namenodes [cdh3]
The authenticity of host 'cdh3 (127.0.0.1)' can't be established.
RSA key fingerprint is 07:07:8e:1c:c0:7e:7f:1f:ca:6a:e6:d3:cb:7f:b7:a1.
Are you sure you want to continue connecting (yes/no)? yes
cdh3: Warning: Permanently added 'cdh3' (RSA) to the list of known hosts.
cdh3: no secondarynamenode to stop
stopping yarn daemons
no resourcemanager to stop
localhost: stopping nodemanager
localhost: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
4543 MainGenericRunner
5569 Jps
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/07/03 03:05:16 WARN hdfs.DFSUtil: Namenode for null remains unresolved for ID null.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Starting namenodes on [cdh1]
cdh1: ssh: Could not resolve hostname cdh1: Temporary failure in name resolution
localhost: starting datanode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-datanode-cdh3.out
Starting secondary namenodes [cdh3]
cdh3: starting secondarynamenode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-cdh3.out
starting yarn daemons
starting resourcemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-cdh3.out
localhost: starting nodemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-nodemanager-cdh3.out
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
5705 DataNode
6054 NodeManager
4543 MainGenericRunner
6186 Jps
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# pwd
/user/local/spark-1.4.0-bin-hadoop2.6
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# cd /user/local/hadoop-2.6.0/etc/hadoop/
[root@cdh3 hadoop]# pwd
/user/local/hadoop-2.6.0/etc/hadoop
[root@cdh3 hadoop]# ll
total 160
-rw-r--r-- 1 root root  4436 Jun 16 16:13 capacity-scheduler.xml
-rw-r--r-- 1 root root  1335 Jun 16 16:13 configuration.xsl
-rw-r--r-- 1 root root   318 Jun 16 16:13 container-executor.cfg
-rw-r--r-- 1 root root  1415 Jun 16 16:13 core-site.xml
-rw-r--r-- 1 root root   721 Jun 17 10:08 derby.log
-rw-r--r-- 1 root root  3670 Jun 16 16:13 hadoop-env.cmd
-rw-r--r-- 1 root root  4257 Jun 16 16:52 hadoop-env.sh
-rw-r--r-- 1 root root  2598 Jun 16 16:13 hadoop-metrics2.properties
-rw-r--r-- 1 root root  2490 Jun 16 16:13 hadoop-metrics.properties
-rw-r--r-- 1 root root  9683 Jun 16 16:13 hadoop-policy.xml
-rw-r--r-- 1 root root  1259 Jul  3 03:03 hdfs-site.xml
-rw-r--r-- 1 root root  1449 Jun 16 16:13 httpfs-env.sh
-rw-r--r-- 1 root root  1657 Jun 16 16:13 httpfs-log4j.properties
-rw-r--r-- 1 root root    21 Jun 16 16:13 httpfs-signature.secret
-rw-r--r-- 1 root root   620 Jun 16 16:13 httpfs-site.xml
-rw-r--r-- 1 root root  3523 Jun 16 16:13 kms-acls.xml
-rw-r--r-- 1 root root  1325 Jun 16 16:13 kms-env.sh
-rw-r--r-- 1 root root  1631 Jun 16 16:13 kms-log4j.properties
-rw-r--r-- 1 root root  5511 Jun 16 16:13 kms-site.xml
-rw-r--r-- 1 root root 11291 Jun 16 16:13 log4j.properties
-rw-r--r-- 1 root root   938 Jun 16 16:13 mapred-env.cmd
-rw-r--r-- 1 root root  1383 Jun 16 16:13 mapred-env.sh
-rw-r--r-- 1 root root  4113 Jun 16 16:13 mapred-queues.xml.template
-rw-r--r-- 1 root root  1064 Jun 16 16:13 mapred-site.xml
-rw-r--r-- 1 root root   758 Jun 16 16:13 mapred-site.xml.template
-rw-r--r-- 1 root root   216 Jun 16 17:40 slaves
-rw-r--r-- 1 root root  2316 Jun 16 16:13 ssl-client.xml.example
-rw-r--r-- 1 root root  2268 Jun 16 16:13 ssl-server.xml.example
-rw-r--r-- 1 root root  2237 Jun 16 16:13 yarn-env.cmd
-rw-r--r-- 1 root root  4556 Jun 16 16:13 yarn-env.sh
-rw-r--r-- 1 root root  1506 Jun 16 16:13 yarn-site.xml
[root@cdh3 hadoop]# less hdfs-site.xml
[root@cdh3 hadoop]# less slaves
[root@cdh3 hadoop]# less mapred-site.xml
[root@cdh3 hadoop]# vim mapred-site.xml
[root@cdh3 hadoop]# vim yarn-site.xml
[root@cdh3 hadoop]# vim core-site.xml
[root@cdh3 hadoop]# pwd
/user/local/hadoop-2.6.0/etc/hadoop
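
The actual edits are not captured above, but the next successful start-all.sh run below reports "Starting namenodes on [cdh3]", so presumably these files were switched from the dead cdh1 alias to cdh3. A sketch of what that change typically looks like; the property name is standard Hadoop 2.x, while the port and the sed shortcut are assumptions, not taken from this log:

# core-site.xml
#   <property>
#     <name>fs.defaultFS</name>
#     <value>hdfs://cdh3:9000</value>
#   </property>
# blunt alternative over the files just edited:
sed -i 's/cdh1/cdh3/g' core-site.xml hdfs-site.xml yarn-site.xml mapred-site.xml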
[root@cdh3 hadoop]# cd /user/local/spark-1.4.0-bin-hadoop2.6/
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# pwd
/user/local/spark-1.4.0-bin-hadoop2.6
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# start-all.sh.cc
-bash: start-all.sh.cc: command not found
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jsp
-bash: jsp: command not found
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
5705 DataNode
6054 NodeManager
4543 MainGenericRunner
6234 Jps
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# stopt-all.sh
-bash: stopt-all.sh: command not found
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [cdh3]
cdh3: no namenode to stop
localhost: stopping datanode
Stopping secondary namenodes [cdh3]
cdh3: no secondarynamenode to stop
stopping yarn daemons
no resourcemanager to stop
localhost: stopping nodemanager
localhost: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
6633 Jps
4543 MainGenericRunner
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [cdh3]
cdh3: starting namenode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-namenode-cdh3.out
localhost: starting datanode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-datanode-cdh3.out
Starting secondary namenodes [cdh3]
cdh3: starting secondarynamenode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-cdh3.out
starting yarn daemons
starting resourcemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-cdh3.out
localhost: starting nodemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-nodemanager-cdh3.out
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# jps
7226 NodeManager
6992 SecondaryNameNode
6738 NameNode
7135 ResourceManager
4543 MainGenericRunner
6823 DataNode
7537 Jps
4485 MainGenericRunner
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# pwd
/user/local/spark-1.4.0-bin-hadoop2.6
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# ll
total 684
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 bin
-rw-r--r-- 1 hadoop hadoop 561149 Jun  2  2015 CHANGES.txt
drwxr-xr-x 2 hadoop hadoop   4096 Jun 17 09:03 conf
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 data
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 ec2
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 examples
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 lib
-rw-r--r-- 1 hadoop hadoop  50902 Jun  2  2015 LICENSE
drwxr-xr-x 2 root   root     4096 Jun 17 09:03 logs
-rw-r--r-- 1 hadoop hadoop  22559 Jun  2  2015 NOTICE
drwxr-xr-x 6 hadoop hadoop   4096 Jun  2  2015 python
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 R
-rw-r--r-- 1 hadoop hadoop   3624 Jun  2  2015 README.md
-rw-r--r-- 1 hadoop hadoop    134 Jun  2  2015 RELEASE
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 sbin
drwxr-xr-x 2 root   root     4096 Jun 17 09:03 work
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# cd sbin/start-all.sh
-bash: cd: sbin/start-all.sh: Not a directory
[root@cdh3 spark-1.4.0-bin-hadoop2.6]#  sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-cdh3.out
failed to launch org.apache.spark.deploy.master.Master:
full log in /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-cdh3.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-cdh3.out
localhost: failed to launch org.apache.spark.deploy.worker.Worker:
localhost: full log in /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-cdh3.out
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# less /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-cdh3.out
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# cat /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-cdh3.out
Spark Command: /user/local/jdk/bin/java -cp /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../conf/:/user/local/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/user/local/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/user/local/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/user/local/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/user/local/hadoop-2.6.0/etc/hadoop/ -Xms512m -Xmx512m -XX:MaxPermSize=128m org.apache.spark.deploy.master.Master --ip cdh1 --port 7077 --webui-port 8080
========================================
16/07/03 03:16:06 INFO master.Master: Registered signal handlers for [TERM, HUP, INT]
16/07/03 03:16:10 WARN util.Utils: Your hostname, cdh3 resolves to a loopback address: 127.0.0.1; using 192.168.48.6 instead (on interface eth0)
16/07/03 03:16:10 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/07/03 03:16:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/03 03:16:16 INFO spark.SecurityManager: Changing view acls to: root
16/07/03 03:16:16 INFO spark.SecurityManager: Changing modify acls to: root
16/07/03 03:16:16 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/03 03:16:23 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/07/03 03:16:23 INFO Remoting: Starting remoting
Exception in thread "main" java.net.UnknownHostException: cdh1: Temporary failure in name resolution
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getAllByName0(InetAddress.java:1246)
at java.net.InetAddress.getAllByName(InetAddress.java:1162)
at java.net.InetAddress.getAllByName(InetAddress.java:1098)
at java.net.InetAddress.getByName(InetAddress.java:1048)
at akka.remote.transport.netty.NettyTransport$$anonfun$addressToSocketAddress$1$$anonfun$apply$6.apply(NettyTransport.scala:383)
at akka.remote.transport.netty.NettyTransport$$anonfun$addressToSocketAddress$1$$anonfun$apply$6.apply(NettyTransport.scala:383)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)
at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)
at scala.concurrent.package$.blocking(package.scala:50)
at akka.remote.transport.netty.NettyTransport$$anonfun$addressToSocketAddress$1.apply(NettyTransport.scala:383)
at akka.remote.transport.netty.NettyTransport$$anonfun$addressToSocketAddress$1.apply(NettyTransport.scala:383)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
16/07/03 03:16:26 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/03 03:16:26 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/03 03:16:27 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/07/03 03:16:27 INFO util.Utils: Shutdown hook called
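
The launch command at the top of this log still carries "--ip cdh1", so the standalone Master dies with the same UnknownHostException even though HDFS and YARN are now fine. That flag is filled in by the start scripts from SPARK_MASTER_IP, so the stale value lives in Spark's own conf; it can be located with the path from this session:

grep -rn "cdh1" /user/local/spark-1.4.0-bin-hadoop2.6/conf/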
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# pwd
/user/local/spark-1.4.0-bin-hadoop2.6
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# ll
total 684
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 bin
-rw-r--r-- 1 hadoop hadoop 561149 Jun  2  2015 CHANGES.txt
drwxr-xr-x 2 hadoop hadoop   4096 Jun 17 09:03 conf
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 data
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 ec2
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 examples
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 lib
-rw-r--r-- 1 hadoop hadoop  50902 Jun  2  2015 LICENSE
drwxr-xr-x 2 root   root     4096 Jul  3 03:15 logs
-rw-r--r-- 1 hadoop hadoop  22559 Jun  2  2015 NOTICE
drwxr-xr-x 6 hadoop hadoop   4096 Jun  2  2015 python
drwxr-xr-x 3 hadoop hadoop   4096 Jun  2  2015 R
-rw-r--r-- 1 hadoop hadoop   3624 Jun  2  2015 README.md
-rw-r--r-- 1 hadoop hadoop    134 Jun  2  2015 RELEASE
drwxr-xr-x 2 hadoop hadoop   4096 Jun  2  2015 sbin
drwxr-xr-x 2 root   root     4096 Jun 17 09:03 work
[root@cdh3 spark-1.4.0-bin-hadoop2.6]# cd conf/
[root@cdh3 conf]# ll
total 40
-rw-r--r-- 1 hadoop hadoop  202 Jun  2  2015 docker.properties.template
-rw-r--r-- 1 hadoop hadoop  303 Jun  2  2015 fairscheduler.xml.template
-rw-r--r-- 1 hadoop hadoop  632 Jun  2  2015 log4j.properties.template
-rw-r--r-- 1 hadoop hadoop 5565 Jun  2  2015 metrics.properties.template
-rw-r--r-- 1 root   root     80 Jun 17 08:40 slaves
-rw-r--r-- 1 hadoop hadoop   80 Jun  2  2015 slaves.template
-rw-r--r-- 1 hadoop hadoop  507 Jun  2  2015 spark-defaults.conf.template
-rwxr-xr-x 1 root   root   3576 Jun 17 09:03 spark-env.sh
-rwxr-xr-x 1 hadoop hadoop 3318 Jun  2  2015 spark-env.sh.template
[root@cdh3 conf]# less slaves
[root@cdh3 conf]# less spark-env.sh
[root@cdh3 conf]# cat spark-env.sh
#!/usr/bin/env bash


# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.


# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append


# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos


# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.


# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers


# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR      Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR       Where log files are stored.  (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR       Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING  A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS      The scheduling priority for daemons. (Default: 0)


export SCALA_HOME=/user/local/scala-2.9.3
#export JAVA_HOME=$JAVA_HOME
export JAVA_HOME=/user/local/jdk
export HADOOP_HOME=/user/local/hadoop-2.6.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_IP=cdh1  
export SPARK_DRIVER_MEMORY=512M
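
SPARK_MASTER_IP is still cdh1 here, and that is precisely the "--ip cdh1" that killed the Master above. The vim edit below presumably rebinds it to a name this host can actually resolve; a minimal sketch of the changed line (cdh3, or the raw IP, is the assumption):

export SPARK_MASTER_IP=cdh3          # or: export SPARK_MASTER_IP=192.168.48.6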
[root@cdh3 conf]#
[root@cdh3 conf]# vim spark-env.sh
[root@cdh3 conf]# jps
7226 NodeManager
6992 SecondaryNameNode
6738 NameNode
7135 ResourceManager
4543 MainGenericRunner
7758 Worker
7931 Jps
6823 DataNode
4485 MainGenericRunner
[root@cdh3 conf]#  sbin/start-all.sh
-bash: sbin/start-all.sh: No such file or directory
[root@cdh3 conf]#  ../sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /user/local/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark-root-org.apache.spark.deploy.master.Master-1-cdh3.out
localhost: org.apache.spark.deploy.worker.Worker running as process 7758.  Stop it first.
[root@cdh3 conf]# jps
7226 NodeManager
6992 SecondaryNameNode
6738 NameNode
7135 ResourceManager
8014 Master
4543 MainGenericRunner
7758 Worker
6823 DataNode
8190 Jps
4485 MainGenericRunner
[root@cdh3 conf]#
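
With NameNode, DataNode, Master and Worker all showing up in jps, a quick end-to-end check that the alias problem is gone; the master URL assumes the Master was rebound to cdh3 as sketched above, and the ports are the defaults from the launch command:

curl -s http://192.168.48.6:8080 | head                    # standalone Master web UI
$SPARK_HOME/bin/spark-shell --master spark://cdh3:7077     # should start without UnknownHostException: cdh1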