
Installing and starting Hadoop 2.6.1 on Windows

On 64-bit Windows there is no need to fiddle with Cygwin to install Hadoop: unpack the official Hadoop release locally -> make a minimal edit to 4 configuration files -> run 1 start command -> done. The one prerequisite is that the JDK is already installed and the Java environment variables are set; I used jdk1.7.0_15. The steps are detailed below, using Hadoop 2.6.1 as the example.
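As a quick pre-flight check of that prerequisite (a sketch, assuming a Python interpreter happens to be available; any equivalent check at the command prompt works just as well), you can confirm the environment variables this guide relies on are actually visible:

```python
import os
import shutil

# Print the environment variables this guide depends on; "<not set>" flags a gap.
for var in ("JAVA_HOME", "HADOOP_HOME"):
    print(var, "=", os.environ.get(var, "<not set>"))

# 'hadoop' only resolves if %HADOOP_HOME%\bin has been added to PATH (step 2 below).
print("hadoop on PATH:", shutil.which("hadoop") is not None)
```

If `JAVA_HOME` prints `<not set>`, fix that before going any further; the Hadoop start scripts will fail without it.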

  2. Unpacking needs no explanation either: copy the archive to the root of drive D and extract it there, producing the directory D:\hadoop-2.6.1 (if extraction fails on Windows, extract it in a Linux environment and copy the result back to Windows). Set the HADOOP_HOME environment variable to this directory and add %HADOOP_HOME%\bin to PATH.

Then download the helper binaries from http://download.csdn.net/download/abcdefg0929/10153550, extract them, and drop the files into D:\hadoop-2.6.1\bin; also put a copy of hadoop.dll under C:\Windows\System32.

  3. Go to D:\hadoop-2.6.1\etc\hadoop, find the following 4 files, and paste in the minimal configuration below:
  
core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/data/dfs/datanode</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
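Mis-pasted XML is the most common failure at this step: a stray character in any of these files only surfaces later as a daemon startup error. As a sanity check, the snippet below (a sketch, assuming Python is available; the `read_props` helper is mine, not part of Hadoop) parses a `*-site.xml` document and prints its properties, so a malformed file fails immediately:

```python
import xml.etree.ElementTree as ET

def read_props(xml_text):
    """Parse a Hadoop *-site.xml document and return {property name: value}."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

# Inline copy of the core-site.xml above; in practice, read the file from
# D:\hadoop-2.6.1\etc\hadoop instead.
core_site = """
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
"""
print(read_props(core_site))  # {'fs.defaultFS': 'hdfs://localhost:9000'}
```

Run it once per edited file; a clean dictionary for each of the 4 files means the configuration step is done.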

4. Open a Windows command prompt, change to the hadoop-2.6.1\bin directory, and run the following 2 commands: first format the NameNode, then start Hadoop.

D:\hadoop-2.6.1\bin>hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/05/13 07:16:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = *****/192.168.8.5
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.1
STARTUP_MSG:   classpath = D:\hadoop-2.6.1\etc\hadoop;D:\hadoop-2.6.1\share\hadoop\common\lib\... (long classpath listing omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG:   java = 1.8.0_101
************************************************************/
17/05/13 07:16:40 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf
17/05/13 07:16:42 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/05/13 07:16:42 INFO blockmanagement.BlockManager: The block deletion will start around 2017 May 13 07:16:42
17/05/13 07:16:42 INFO util.GSet: Computing capacity for map BlocksMap
17/05/13 07:16:42 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:42 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/13 07:16:42 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/05/13 07:16:42 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplication             = 512
17/05/13 07:16:42 INFO blockmanagement.BlockManager: minReplication             = 1
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/05/13 07:16:42 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/05/13 07:16:42 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsOwner             = Administrator (auth:SIMPLE)
17/05/13 07:16:42 INFO namenode.FSNamesystem: supergroup          = supergroup
17/05/13 07:16:42 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/13 07:16:42 INFO namenode.FSNamesystem: HA Enabled: false
17/05/13 07:16:42 INFO namenode.FSNamesystem: Append Enabled: true
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map INodeMap
17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:43 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/05/13 07:16:43 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/05/13 07:16:43 INFO namenode.FSDirectory: ACLs enabled? false
17/05/13 07:16:43 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/13 07:16:43 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/13 07:16:43 INFO namenode.NameNode: Caching file names occuring more than  times
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/13 07:16:43 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/13 07:16:43 INFO util.GSet: VM type       = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/05/13 07:16:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/05/13 07:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-664414510-192.168.8.5-1494631003212
17/05/13 07:16:43 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
17/05/13 07:16:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/05/13 07:16:43 INFO util.ExitUtil: Exiting with status 0
17/05/13 07:16:43 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ******/192.168.8.5
************************************************************/

D:\hadoop-2.6.1\bin>cd ..\sbin

D:\hadoop-2.6.1\sbin>start-all.cmd
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons

D:\hadoop-2.6.1\sbin>jps
DataNode
NodeManager
Jps
NameNode
ResourceManager

D:\hadoop-2.6.1\sbin>

The jps command shows that all 4 daemons are up, so the Hadoop installation and startup are complete. You can now point a browser at localhost:8088 to watch MapReduce jobs, and at localhost:50070 -> Utilities -> Browse the file system to inspect HDFS files. When restarting Hadoop there is no need to format the NameNode again; just run stop-all.cmd followed by start-all.cmd.
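The browser check can also be scripted. This is a minimal sketch (assuming Python is available and the default web UI ports implied by the configuration above; `is_up` is my helper, not a Hadoop API) that probes both web UIs after startup:

```python
import urllib.request
import urllib.error

def is_up(url, timeout=3):
    """Return True if the HTTP endpoint answers with 200, False on any connection error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Default web UI ports in Hadoop 2.x: 50070 (NameNode), 8088 (ResourceManager).
for name, url in [("HDFS NameNode", "http://localhost:50070"),
                  ("YARN ResourceManager", "http://localhost:8088")]:
    print(name, "up" if is_up(url) else "DOWN")
```

Both lines should report "up" shortly after start-all.cmd; "DOWN" usually means the daemon window has already exited with an error worth reading.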

Starting those 4 daemons pops up 4 console windows; let's look at what each daemon does at startup:
DataNode

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/12/11 10:31:24 INFO datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = n-*********/169.254.194.63
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.1
STARTUP_MSG:   classpath = D:\hadoop-2.6.1\etc\hadoop;D:\hadoop-2.6.1\share\hadoop\common\lib\... (long classpath listing omitted; the original log is truncated here)