
Big Data --- Hadoop Troubleshooting Roundup, Ultimate Edition --- continuously updated

Big Data, Hadoop

1. Software Environment

OS: RHEL6    JDK: jdk-8u45    Hadoop: hadoop-2.8.1.tar.gz    SSH

IP address     Role   Hostname
xx.xx.xx.xx    NN     hadoop1
xx.xx.xx.xx    DN     hadoop2
xx.xx.xx.xx    DN     hadoop3
xx.xx.xx.xx    DN     hadoop4
xx.xx.xx.xx    DN     hadoop5

This pseudo-distributed deployment involves only the host hadoop1.


2. SSH Mutual Trust Problem at Startup

Starting HDFS:

[hadoop@hadoop01 hadoop]$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop01]
The authenticity of host 'hadoop01 (172.16.18.133)' can't be established.
RSA key fingerprint is 8f:e7:6c:ca:6e:40:78:b8:df:6a:b4:ca:52:c7:01:4b.
Are you sure you want to continue connecting (yes/no)? yes
hadoop01: Warning: Permanently added 'hadoop01' (RSA) to the list of known hosts.
hadoop01: chown: changing ownership of `/opt/software/hadoop-2.8.1/logs': Operation not permitted
hadoop01: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out
hadoop01: /opt/software/hadoop-2.8.1/sbin/hadoop-daemon.sh: line 159: /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out: Permission denied

If startup prompts interactively for a password, and skipping the password leads to permission errors, it is because SSH mutual trust has not been configured.

Even for a pseudo-distributed deployment on a single machine, passwordless SSH login to the machine itself must be configured.

For non-root users, the public key file (authorized_keys) must have 600 permissions (root is the exception).

Configure passwordless SSH login for the hadoop user:

[hadoop@hadoop01 .ssh]$ cat id_rsa.pub > authorized_keys
[hadoop@hadoop01 .ssh]$ chmod 600 authorized_keys
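The cat above assumes the hadoop user already has an RSA key pair under ~/.ssh. If it has not been generated yet, a minimal sketch (the empty passphrase and the 700 mode on ~/.ssh are assumptions, not shown in the original transcript) would be:

# Generate a passphrase-less RSA key pair for the hadoop user (assumed step)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# ~/.ssh itself should be 700, and authorized_keys 600, for non-root users
chmod 700 ~/.ssh
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys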

[hadoop@hadoop01 hadoop]$ ssh hadoop01 date
[hadoop@hadoop01 .ssh]$

[hadoop@hadoop01 hadoop]$ ./sbin/start-dfs.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop01.out
hadoop01: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop01]
hadoop01: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop01.out
[hadoop@hadoop01 hadoop]$ jps
1761 Jps
1622 SecondaryNameNode
1388 DataNode
1276 NameNode
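Besides jps, a quick way to confirm that HDFS is actually serving requests (a sketch, assuming the Hadoop binaries are on the hadoop user's PATH) is:

# Report live DataNodes and capacity; hadoop01 should show up as a live node
hdfs dfsadmin -report
# Simple smoke test: list the HDFS root directory
hdfs dfs -ls /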


3. The "process information unavailable" Problem

There are two cases:

1. The process no longer exists, and jps shows "process information unavailable".

2. The process still exists, but jps shows "process information unavailable".

For the first case:

[hadoop@hadoop01 sbin]$ jps
3108 DataNode
4315 Jps
4156 SecondaryNameNode
2990 NameNode

[hadoop@hadoop01 hsperfdata_hadoop]$ ls
5295  5415  5640
[hadoop@hadoop01 hsperfdata_hadoop]$ ll
total 96
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5295
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5415
-rw------- 1 hadoop hadoop 32768 Apr 27 09:35 5640
[hadoop@hadoop01 hsperfdata_hadoop]$ pwd
/tmp/hsperfdata_hadoop

The directory /tmp/hsperfdata_hadoop records the IDs of the processes that jps displays. If at this point jps reports errors like:

[hadoop@hadoop01 tmp]$ jps
3330 SecondaryNameNode    -- process information unavailable
3108 DataNode             -- process information unavailable
3525 Jps
2990 NameNode             -- process information unavailable

Check whether the abnormal process still exists:

[hadoop@hadoop01 tmp]$ ps -ef |grep 3330
hadoop    3845  2776  0 09:29 pts/6    00:00:00 grep 3330

If the process no longer exists, simply delete the corresponding file under /tmp/hsperfdata_xxx and restart the process, as sketched below.
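A minimal sketch of that cleanup, using the SecondaryNameNode PID 3330 from the transcript above (substitute your own stale PID and daemon name):

# Confirm there really is no process behind the PID any more
ps -p 3330 > /dev/null || echo "PID 3330 is gone"
# Remove the stale perfdata file that jps is still reading
rm -f /tmp/hsperfdata_hadoop/3330
# Restart the corresponding daemon (SecondaryNameNode in this example)
/opt/software/hadoop-2.8.1/sbin/hadoop-daemon.sh start secondarynamenode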


jps only reads the files under /tmp/hsperfdata_<current user>/ belonging to the current user.
[root@hadoop01 ~]# jps
7153 -- process information unavailable
8133 -- process information unavailable
7495 -- process information unavailable
8489 Jps
[root@hadoop01 ~]# ps -ef |grep 7153    --- check whether the abnormal process exists
hadoop    7153     1  2 09:47 ?        00:00:17 /usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/software/hadoop-2.8.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/software/hadoop-2.8.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/software/hadoop-2.8.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/software/hadoop-2.8.1/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop01.log -Dhadoop.home.dir=/opt/software/hadoop-2.8.1 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/software/hadoop-2.8.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
root      8505  2752  0 09:58 pts/6    00:00:00 grep 7153

If the process does exist but the current user sees "process information unavailable", run `ps -ef | grep <PID>` to find out which user the process is actually running as; if it is not the current user, switch to that user and check again:

[hadoop@hadoop01 hadoop]$ jps             ----- switch to the hadoop user and check the processes
7153 NameNode
8516 Jps
8133 DataNode
7495 SecondaryNameNode

After switching users, all the processes look normal.
In this case the problem was simply running jps as the wrong user: the daemons were started by the hadoop user, but jps was run as a different user. No action is needed; the services are running fine.

Summary: when you hit the "process information unavailable" error, handle it as follows:

1. Check whether the process still exists (if it does not, delete the stale file under /tmp/hsperfdata_xxx and restart the process).

2. If the process exists, check which user it is running as; if it is not the current user, switch to that user and run jps again (see the sketch below).
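A small sketch that strings both checks together (the script name and the su/ps invocations are illustrative assumptions, not from the original article):

#!/bin/bash
# Usage: ./check_jps_pid.sh <PID>   -- diagnose a "process information unavailable" PID
pid=$1
if ! ps -p "$pid" > /dev/null; then
    # Case 1: the process is gone -- remove the stale perfdata file and restart the daemon
    echo "PID $pid no longer exists; remove /tmp/hsperfdata_*/$pid and restart the daemon"
else
    # Case 2: the process exists -- find the owning user and re-run jps as that user
    owner=$(ps -o user= -p "$pid")
    echo "PID $pid is running as user: $owner"
    su - "$owner" -c jps
fi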
