Fully distributed Hadoop
1. Clone three clients (CentOS 7).
Right-click s200 -> Manage -> Clone -> ... -> Full clone
2. Start the clients.
3. Enable the guests' shared folders.
4. Edit the hostname and IP address files.
https://blog.csdn.net/ssllkkyyaa/article/details/83410871
Fixing "Permission denied (publickey,gssapi-keyex,gssapi-with-mic)" on passwordless ssh login
When the Permission denied (publickey,gssapi-keyex,gssapi-with-mic) warning appears, congratulations: you are very close to success.
Here the remote host is slave2 and the user is Hadoop.
The local host is slave1.
Everything below is configured on the remote host slave2 so that slave1 can log in to slave2 without a password. For passwordless login in both directions, the principle is the same: apply the identical configuration on slave1.
(1) First, edit the ssh server configuration file.
This must be done as the root user.
vi /etc/ssh/sshd_config
Set these options to no:
#PermitRootLogin yes
#UsePAM yes
#PasswordAuthentication yes
If a line starts with a #, remove the # and then change yes to no.
After the change:
PermitRootLogin no
UsePAM no
PasswordAuthentication no
Set these options to yes:
RSAAuthentication yes
PubkeyAuthentication yes
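(These edits can also be scripted; a minimal sketch using GNU sed, assuming the stock CentOS 7 sshd_config where each key appears at most once, commented or not. Check the result with the grep at the end.)
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?UsePAM .*/UsePAM no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?RSAAuthentication .*/RSAAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
grep -E 'PermitRootLogin|UsePAM|PasswordAuthentication|RSAAuthentication|PubkeyAuthentication' /etc/ssh/sshd_config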
(2) Restart the sshd service.
systemctl restart sshd.service
systemctl status sshd.service   # check the status of the ssh service
#systemctl start sshd.service   # start the ssh service
#systemctl enable sshd.service  # start ssh at boot; the counterpart is disable
#systemctl stop sshd.service    # stop it
Normally the status should show Active: active (running).
The permissions must also be set correctly:
1. ~/.ssh must be 700:
chmod 700 /home/centos/.ssh
2. ~/.ssh/authorized_keys must be 644:
chmod 644 /home/centos/.ssh/authorized_keys
3. The same applies under the root account.
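(To apply both permission fixes on every node in one pass, a sketch; if passwordless ssh is not working yet, simply run the two chmod commands locally on each host instead:)
for h in s201 s202 s203; do
    ssh centos@$h 'chmod 700 ~/.ssh && chmod 644 ~/.ssh/authorized_keys'
done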
[/etc/hostname]
s201
[/etc/sysconfig/network-scripts/ifcfg-eno16777736]
...
IPADDR=... (192.168.77.200 -> 201 / 202 / 203)
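(For reference, a complete static configuration for this file looks roughly like the sketch below; the values are illustrative, and GATEWAY must match your own VMware NAT settings:)
TYPE=Ethernet
BOOTPROTO=static
NAME=eno16777736
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.77.201    # 202 / 203 on the other clones
NETMASK=255.255.255.0
GATEWAY=192.168.77.2     # illustrative; use your VMnet8 gateway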
5. Restart the network service.
$>sudo service network restart
6. Edit the /etc/resolv.conf file:
nameserver 192.168.231.2
7. Repeat steps 3 ~ 6 above for each of the clones.
-----------------------
Set up ssh authorization
1. Delete /home/centos/.ssh/* on all hosts.
The method in this article is only a one-way jump; for linking all four machines see:
https://blog.csdn.net/ssllkkyyaa/article/details/82839298
2. Generate a key pair on host s200:
$>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
3. Remote-copy s200's public key file id_rsa.pub to hosts s201 ~ s203,
placing it as /home/centos/.ssh/authorized_keys:
$>scp id_rsa.pub [email protected]:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub [email protected]:/home/centos/.ssh/authorized_keys
$>scp id_rsa.pub [email protected]:/home/centos/.ssh/authorized_keys
The public key is now in place on the three machines s201, s202 and s203.
-----------------
Name node:
s200
Data nodes (slaves):
s201 s202 s203
---------------------------
Delete the old symlink:
cd /home/centos/soft
rm hadoop
--------------------------
Recreate the symlink (on s200, s201, s202, s203):
ln -s /home/centos/soft/full hadoop
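(The same link is needed on all four machines; a small loop saves logging in to each one. A sketch, assuming the identical /home/centos/soft layout everywhere; run the same two commands locally on s200:)
for h in s201 s202 s203; do
    ssh centos@$h 'cd /home/centos/soft && rm -f hadoop && ln -s full hadoop'
done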
-------
Configure Hadoop on s200.
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://s200/</value>
    </property>
</configuration>
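(Once the configuration is distributed, the value can be sanity-checked from any node; with no port given, clients use the default NameNode RPC port 8020:)
$>hdfs getconf -confKey fs.defaultFS    # should print hdfs://s200/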
hdfs-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
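(With three data nodes, replication 3 places a copy of every block on every node. The value is only a default; it can be overridden per file once the cluster is running, for example:)
$>hdfs dfs -setrep -w 2 /user/centos/data.txt    # hypothetical path; -w waits for re-replication to finish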
yarn-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>s200</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
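(After the cluster is started in step 11, the ResourceManager on s200 should report the three NodeManagers:)
$>yarn node -list    # expect s201, s202 and s203 in RUNNING state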
mapred-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
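(In Hadoop 2.x the distribution ships only mapred-site.xml.template, so the file may need to be created first. Once everything is running, the bundled pi example makes a good end-to-end smoke test; the jar path assumes the usual share/hadoop/mapreduce layout:)
$>cp mapred-site.xml.template mapred-site.xml
$>hadoop jar /home/centos/soft/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10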
slaves
s201
s202
s203
hadoop-env.sh
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.
# The java implementation to use.
export JAVA_HOME=/home/centos/soft/jdk
# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol. Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}
export HADOOP_CONF_DIR=/home/centos/soft/hadoop/etc/hadoop
# Extra Java CLASSPATH elements. Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
if [ "$HADOOP_CLASSPATH" ]; then
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
else
export HADOOP_CLASSPATH=$f
fi
done
# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""
# Extra Java runtime options. Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"
# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol. This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
# Where log files are stored. $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""
###
# Advanced Users Only!
###
# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
# the user that will run the hadoop daemons. Otherwise there is the
# potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
----------------
Copy the configuration to the other machines.
pwd:
/home/centos/soft/hadoop/etc/hadoop
scp -r ./* [email protected]:/home/centos/soft/hadoop/etc/hadoop
scp -r ./* [email protected]:/home/centos/soft/hadoop/etc/hadoop
scp -r ./* [email protected]:/home/centos/soft/hadoop/etc/hadoop
9. Delete the temporary directory files.
$>cd /tmp
$>rm -rf hadoop-centos
$>ssh s201 rm -rf /tmp/hadoop-centos
$>ssh s202 rm -rf /tmp/hadoop-centos
$>ssh s203 rm -rf /tmp/hadoop-centos
10. Format the filesystem (run on s200, the name node):
$>hadoop namenode -format
11. Start the Hadoop daemons:
$>start-all.sh
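(With this layout, jps should show NameNode, SecondaryNameNode and ResourceManager on s200, and DataNode plus NodeManager on each of s201 ~ s203. The standard Hadoop 2 web UIs are at http://s200:50070 for HDFS and http://s200:8088 for YARN.)
$>jps
$>ssh s201 jps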
Delete the Hadoop logs:
$>cd /home/centos/soft/hadoop/logs
$>rm -rf *
$>ssh s201 rm -rf /home/centos/soft/hadoop/logs/*
$>ssh s202 rm -rf /home/centos/soft/hadoop/logs/*
$>ssh s203 rm -rf /home/centos/soft/hadoop/logs/*
----------------------------------------------------
To restart from a clean state:
stop-dfs.sh
Delete hadoop-centos under /tmp:
$>cd /tmp
$>rm -rf hadoop-centos
$>ssh s201 rm -rf /tmp/hadoop-centos
$>ssh s202 rm -rf /tmp/hadoop-centos
$>ssh s203 rm -rf /tmp/hadoop-centos
$>hadoop namenode -format
start-dfs.sh
=================
jps symlink
ln -s /home/centos/soft/jdk/bin/jps /usr/local/bin/jps
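(The link has to exist on every node for a remote jps call to work; a sketch, assuming the centos user has sudo rights and using ssh -t so sudo can prompt for a password if it needs one:)
for h in s201 s202 s203; do
    ssh -t centos@$h sudo ln -s /home/centos/soft/jdk/bin/jps /usr/local/bin/jps
done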
=====================
Put xcall.sh and xsync.sh into /usr/local/bin/ (a minimal sketch of xcall.sh follows below).
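(The real scripts are in the blog post linked above; for reference, a minimal xcall.sh does nothing more than run its arguments on every node in turn:)
#!/bin/bash
# minimal xcall.sh sketch: run the given command on all four nodes
for h in s200 s201 s202 s203; do
    echo ===== $h =====
    ssh $h "$@"
done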
xcall.sh rm -rf /home/centos/hadooptmp/
hadoop namenode -format
start-all.sh
Stop:
stop-all.sh