
A Summary of Common Big Data Cluster Problems


  The project is nearly finished and the launch went smoothly. We ran into quite a few problems during development, so while there is some spare time I am summarizing the common ones as a set of notes. The problems are as follows:

  1. java.io.IOException: Could not obtain block: blk_194219614024901469_1100 file=/user/hive/warehouse/src_20180124_log/src_20180124_log

This usually means a node has gone down or cannot be reached. Check the configuration and restart the service.

  2. java.lang.OutOfMemoryError: Java heap space

This exception clearly means the JVM ran out of heap; increase the JVM heap size on every datanode, for example:

java -Xms1024m -Xmx4096m

As a rule of thumb, the maximum JVM heap should be about half of the machine's total memory. Our machines have 8 GB of RAM, so we set 4096m, though this value may still not be optimal.
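Where the datanode JVM options live depends on the Hadoop version; the following is a minimal sketch for a Hadoop 1.x-style installation, assuming conf/hadoop-env.sh is the file your distribution actually reads and that 4 GB suits your hardware:

# conf/hadoop-env.sh -- give every datanode a 1 GB initial / 4 GB maximum heap
export HADOOP_DATANODE_OPTS="-Xms1024m -Xmx4096m $HADOOP_DATANODE_OPTS"

Restart the datanodes afterwards so the new heap settings take effect.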

  3. Problems with IO write operations

0-1246359584298, infoPort=50075, ipcPort=50020): Got exception while serving blk_-5911099437886836280_1292 to /172.16.100.165:

java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/172.16.100.165:5001 remote=/172.16.100.165:50930]

at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)

at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)

at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)

at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)

at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)

at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)

at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)

at java.lang.Thread.run(Thread.java:619)

It seems there are many reasons that it can timeout; the example given in HADOOP-3831 is a slow reading client.

This is a connection timeout. Solution: set dfs.datanode.socket.write.timeout=0 in hadoop-site.xml.
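A sketch of the corresponding configuration entry, in the same property format used elsewhere in this post; note that 0 disables the write timeout completely, so a larger finite value (in milliseconds) is a gentler alternative:

<property>

<name>dfs.datanode.socket.write.timeout</name>

<value>0</value>

</property>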

  4. Solving the Hadoop OutOfMemoryError problem: increase the task heap via mapred.child.java.opts:

<property>

<name>mapred.child.java.opts</name>

<value>-Xmx800M -server</value>

</property>

Or pass it on the command line: hadoop jar jarfile [main class] -D mapred.child.java.opts=-Xmx800M

  5. Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.

When using Nutch 1.0, I get this error:

Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.

Solution: delete conf/log4j.properties so that the detailed error report becomes visible. If it turns out to be an out-of-memory error, add the options -Xms64m -Xmx512m when launching the main class org.apache.nutch.crawl.Crawl.

  6. Namenode in safe mode

Solution: run the command bin/hadoop dfsadmin -safemode leave
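A quick sketch using the same dfsadmin tool to check the state first; the NameNode normally leaves safe mode by itself once enough blocks have been reported, so forcing it out is only a workaround:

bin/hadoop dfsadmin -safemode get

bin/hadoop dfsadmin -safemode leave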

  7. java.net.NoRouteToHostException: No route to host

Solution: sudo /etc/init.d/iptables stop
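Stopping iptables this way only lasts until the next reboot; a sketch of making it stick on an RHEL/CentOS-style system, assuming the chkconfig tool is present:

sudo /etc/init.d/iptables stop

sudo chkconfig iptables off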

  8. After changing the NameNode, select queries run in Hive still point to the old NameNode address

Solution: replace every old NameNode address stored in the metastore with the current NameNode address.
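A minimal sketch of the replacement for a MySQL-backed metastore; the database name hive and the hdfs://oldnn:9000 / hdfs://newnn:9000 URIs are placeholders for your own values, and the metastore should be backed up before any update. DBS and SDS are the metastore tables that hold database and table/partition locations:

mysql -u root -p hive -e "UPDATE DBS SET DB_LOCATION_URI = REPLACE(DB_LOCATION_URI, 'hdfs://oldnn:9000', 'hdfs://newnn:9000');"

mysql -u root -p hive -e "UPDATE SDS SET LOCATION = REPLACE(LOCATION, 'hdfs://oldnn:9000', 'hdfs://newnn:9000');"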

  9. ERROR metadata.Hive (Hive.java:getPartitions(499)) - javax.jdo.JDODataStoreException: Required table missing : ""PARTITIONS"" in Catalog "" Schema "". JPOX requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "org.jpox.autoCreateTables"

Cause: org.jpox.fixedDatastore was set to true in hive-default.xml; it should be set to false.
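A sketch of the relevant entries for this Hive version's JPOX naming (newer Hive releases expose the same settings under datanucleus.* property names):

<property>

<name>org.jpox.fixedDatastore</name>

<value>false</value>

</property>

<property>

<name>org.jpox.autoCreateTables</name>

<value>true</value>

</property>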

  10. INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010

> INFO hdfs.DFSClient: Abandoning block blk_-8575812198227241296_1001

> INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010

> INFO hdfs.DFSClient: Abandoning block blk_-2932256218448902464_1001

> INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010

> INFO hdfs.DFSClient: Abandoning block blk_-1014449966480421244_1001

> INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.16:50010

> INFO hdfs.DFSClient: Abandoning block blk_7193173823538206978_1001

> WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.

at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2731)

at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)

at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)

> WARN hdfs.DFSClient: Error Recovery for block blk_7193173823538206978_1001 bad datanode[2] nodes == null

> WARN hdfs.DFSClient: Could not get block locations. Source file "/user/umer/8GB_input" - Aborting...

> put: Bad connect ack with firstBadLink 192.168.1.16:50010

Solution:

1) '/etc/init.d/iptables stop' --> stop the firewall

2) set SELINUX=disabled in the '/etc/selinux/config' file --> disable SELinux (see the sketch below)
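A sketch of applying both steps on an RHEL/CentOS-style node; setenforce turns SELinux off immediately while the config edit keeps it off after a reboot, and chkconfig keeps the firewall disabled across reboots:

/etc/init.d/iptables stop

chkconfig iptables off

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Run these as root on every node that shows up as a firstBadLink.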

  11. A MapReduce job that normally runs fine throws the following error:

java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)

java.io.IOException: Could not get block locations. Aborting…

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)

at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)

The cause is that the Linux machine had too many open files. Solution: ulimit -n shows that Linux's default open-file limit is 1024; edit /etc/security/limits.conf and raise the limit for the hadoop user to 65535 (ideally on every datanode), then rerun the job and the problem is solved.
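A sketch of the limits.conf entries, assuming the Hadoop daemons run as the hadoop user; the standard syntax also needs the nofile item name:

hadoop soft nofile 65535

hadoop hard nofile 65535

Log in again (or restart the datanode processes) and verify the new limit with ulimit -n.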

  12. bin/hadoop jps reports the following exception:

Exception in thread "main" java.lang.NullPointerException

at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)

at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)

at sun.tools.jps.Jps.main(Jps.java:45)

Solution: the /tmp directory under the system root has been deleted. Recreate /tmp and the error goes away.
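A sketch of recreating it with the permissions /tmp normally carries (world-writable with the sticky bit); jps fails because it keeps its hsperfdata directories there:

mkdir -p /tmp

chmod 1777 /tmp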

  13. bin/hive reports: unable to create log directory /tmp/

Solution: the /tmp directory under the system root has been deleted. Recreate /tmp and the error goes away.

  14. MySQL error

[root@localhost mysql]# ./bin/mysqladmin -u root password '123456'

./bin/mysqladmin: connect to server at 'localhost' failed

error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)'

Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

The client cannot connect to MySQL through '/tmp/mysql.sock'. PHP's standard configuration uses '/tmp/mysql.sock', but some MySQL installation methods put the socket at /var/lib/mysql.sock or somewhere else. You can fix this by editing /etc/my.cnf, which contains a setting like:

[mysqld]

socket=/var/lib/mysql.sock

Changing that line fixes it, but it can cause other problems, for example the mysql client program can no longer connect, so also add:

[mysql]

socket=/tmp/mysql.sock

Alternatively, change the setting in my.ini to use a different mysql.sock, or simply create a symlink:

ln -s /var/lib/mysql/mysql.sock /tmp/mysql.sock

That does the trick.

  15. The NameNode cannot fail over

less $BEH_HOME/logs/hadoop/hadoop-hadoop-zkfc-hadoop001.log

The log file contains the error: Unable to create SSH session

com.jcraft.jsch.JSchException: java.io.FileNotFoundException: ~/.ssh/id_rsa (No such file or directory)

Solution: vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml

and modify the following property:

<property>

<name>dfs.ha.fencing.ssh.private-key-files</name>

<value>/home/hadoop/.ssh/id_rsa</value>

<final>true</final>

<description/>

</property>
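If the key pair really is missing on the node running the ZKFC, the following sketch creates and distributes it; the hadoop user and the hadoop002 peer hostname are assumptions, adjust them to your cluster:

ssh-keygen -t rsa -f /home/hadoop/.ssh/id_rsa -N ""

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub hadoop@hadoop002

Pointing dfs.ha.fencing.ssh.private-key-files at the absolute path matters because the ~ in ~/.ssh/id_rsa is typically not expanded by the JSch library that performs SSH fencing.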
