
Two ways to configure the Hadoop secondarynamenode

Two ways to configure the Hadoop secondarynamenode; the Hadoop version used here is hadoop-1.0.4.

Cluster role assignment:

master: JobTracker && NameNode
node1: SecondaryNameNode
node2: TaskTracker && DataNode
node3: TaskTracker && DataNode
node4: TaskTracker && DataNode

Configuration 1:

1. conf/core-site.xml:

<configuration>
<property>
	<name>hadoop.tmp.dir</name>
	<value>/home/hadoop/hadooptmp</value>
	<description>A base for other temporary directories.</description>
</property>

<property>
	<name>fs.default.name</name>
	<value>hdfs://master:9000</value>
</property>
</configuration>
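
fs.default.name is the NameNode RPC endpoint that every daemon and client will use, and hadoop.tmp.dir is the base under which otherwise-unspecified paths default. Once HDFS is up, two stock commands make a quick sanity check of the address (a sketch, run from any node using this conf directory):

# Should list node2..node4 as live datanodes once they have registered.
bin/hadoop dfsadmin -report
# Listing the HDFS root exercises the fs.default.name RPC address directly.
bin/hadoop fs -ls /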

2. conf/hadoop-env.sh:

export JAVA_HOME=/home/hadoop/jdk1.x.x_xx

3. conf/hdfs-site.xml:

<configuration>
<property>
	<name>dfs.replication</name>
	<value>2</value>
</property>

<property>
	<name>dfs.data.dir</name>
	<value>/home/hadoop/hadoopfs/data</value>
</property>
<property>
	<name>dfs.http.address</name>
	<value>master:50070</value>
</property>

<property>
	<name>dfs.secondary.http.address</name>
	<value>node1:50070</value>
</property>

<property>
	<name>dfs.name.dir</name>
	<value>/home/hadoop/hadoopfs/name</value>
</property>

<property>
	<name>fs.checkpoint.dir</name>
	<value>/home/hadoop/hadoopcheckpoint</value>
</property>

<property>
	<name>dfs.permissions</name>
	<value>false</value>
</property>
</configuration>
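
fs.checkpoint.dir is where the secondarynamenode on node1 assembles its merged fsimage, and dfs.secondary.http.address is the address the NameNode fetches the merged image back from; a checkpoint fires every fs.checkpoint.period seconds (3600 by default). A sketch of how to confirm checkpointing is actually happening, run on node1 after the first interval has elapsed:

# A populated current/ subdirectory containing an fsimage file indicates
# a completed checkpoint.
ls -lR /home/hadoop/hadoopcheckpoint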

4. conf/mapred-site.xml:

<configuration>
<property>
	<name>mapred.job.tracker</name>
	<value>master:9001</value>
</property>
<property>
	<name>mapred.tasktracker.map.tasks.maximum</name>
	<value>4</value>
</property>
<property>
	<name>mapred.tasktracker.reduce.tasks.maximum</name>
	<value>4</value>
</property>
<property>
	<name>mapred.child.java.opts</name>
	<value>-Xmx1000m</value>
</property>
</configuration>
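
A sizing note: 4 map slots plus 4 reduce slots, each allowed a 1000 MB child heap, means a fully loaded TaskTracker can use roughly 8 GB of task heap on top of the daemon JVMs, so these three values should be scaled to the memory actually present on node2..node4. A quick way to double-check what is configured (a sketch):

# Print the slot counts and child heap currently configured.
grep -E -A 1 'tasks.maximum|child.java.opts' conf/mapred-site.xml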

5. conf/masters:

master

6. conf/secondarynamenode (a newly created file):

node1
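
This file uses the same one-hostname-per-line format as conf/slaves, and the script edits in steps 8 and 9 below read it to decide where the secondarynamenode runs. Creating it is a one-liner (sketch):

echo node1 > conf/secondarynamenode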

7. conf/slaves:

node2
node3
node4

8. bin/start-dfs.sh (change the secondarynamenode line to read the new hosts file):

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode start secondarynamenode

9. bin/stop-dfs.sh (the same change on the stop side):

"$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR stop namenode $nameStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR stop datanode $dataStartOpt
"$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts secondarynamenode stop secondarynamenode

Configuration 2:

Steps 1 through 4 (conf/core-site.xml, conf/hadoop-env.sh, conf/hdfs-site.xml, and conf/mapred-site.xml) are identical to Configuration 1 above; the only change is the masters file, and no script edits are needed.

5. conf/masters:

node1

Despite its name, conf/masters in Hadoop 1.x is the host list that the stock start-dfs.sh passes to the secondarynamenode via --hosts masters; it does not control where the NameNode runs. Listing node1 here is therefore enough to place the secondarynamenode on node1 without touching any scripts.

6. conf/slaves:

node2
node3
node4

One more note on yesterday's post about using the secondarynamenode: that procedure should also include copying the secondarynamenode's files back to the NameNode. Yesterday's test ran entirely on one machine, so the copy step was invisible; testing on this cluster today is what exposed it.
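
One way to script that copy-and-restore, as a sketch rather than the exact procedure from that post (the paths match the configs above; -importCheckpoint is the stock Hadoop 1.x startup option that rebuilds dfs.name.dir from fs.checkpoint.dir):

# On master, with dfs.name.dir empty, pull the checkpoint over from node1:
scp -r node1:/home/hadoop/hadoopcheckpoint /home/hadoop/
# Then start the NameNode in import mode; it validates the checkpoint and
# writes a fresh image into dfs.name.dir:
bin/hadoop namenode -importCheckpoint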

Both configurations above have been tested on this cluster and shown to work.

Share, be happy, grow.