
Getting rid of the annoying warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Problem:

Every hadoop command prints the WARN below. It doesn't actually break anything, but seeing it on every single command is irritating enough that it has to go.

[root@master logs]# hdfs dfs -cat /output/part-r-00000
18/12/20 14:55:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
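
A quick way to confirm that the native library really is not being loaded (not part of the original post, but built into Hadoop 2.x) is the checknative command; it lists the native components (hadoop, zlib, snappy, lz4, bzip2, openssl) and whether each one was found:

hadoop checknative -a    # with this problem, the "hadoop" line reports false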

A quick web search shows there are two possible causes.

 

Solution 1:

Turn on debug logging:

export HADOOP_ROOT_LOGGER=DEBUG,console
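
(Once you are done troubleshooting, drop back to the default log level, otherwise every command stays this verbose; the hadoop scripts fall back to INFO,console when the variable is unset:)

unset HADOOP_ROOT_LOGGER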

Run the command again and pay attention to the NativeCodeLoader lines (the part marked in red in the original post).

[root@master native]# hdfs dfs -cat /output/part-r-00000
18/12/20 17:20:44 DEBUG util.Shell: setsid exited with exit code 0
18/12/20 17:20:44 DEBUG conf.Configuration: parsing URL jar:file:/opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar!/core-default.xml
18/12/20 17:20:44 DEBUG conf.Configuration: parsing input stream [email protected]cbe0
18/12/20 17:20:44 DEBUG conf.Configuration: parsing URL file:/opt/hadoop/hadoop-2.9.2/etc/hadoop/core-site.xml
18/12/20 17:20:44 DEBUG conf.Configuration: parsing input stream [email protected]
18/12/20 17:20:44 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
18/12/20 17:20:44 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
18/12/20 17:20:44 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
18/12/20 17:20:44 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since startup])
18/12/20 17:20:44 DEBUG lib.MutableMetricsFactory: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Renewal failures since last successful login])
18/12/20 17:20:44 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/12/20 17:20:44 DEBUG security.SecurityUtil: Setting hadoop.security.token.service.use_ip to true
18/12/20 17:20:44 DEBUG security.Groups: Creating new Groups object
18/12/20 17:20:44 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
18/12/20 17:20:44 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0)
18/12/20 17:20:44 DEBUG util.NativeCodeLoader: java.library.path=/opt/hadoop/hadoop-2.9.2/lib:/opt/hadoop/hadoop-2.9.2/lib/native
18/12/20 17:20:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/12/20 17:20:44 DEBUG util.PerformanceAdvisory: Falling back to shell based
18/12/20 17:20:44 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
18/12/20 17:20:44 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
18/12/20 17:20:45 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/12/20 17:20:45 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/12/20 17:20:45 DEBUG security.UserGroupInformation: hadoop login
18/12/20 17:20:45 DEBUG security.UserGroupInformation: hadoop login commit
18/12/20 17:20:45 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
18/12/20 17:20:45 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
18/12/20 17:20:45 DEBUG security.UserGroupInformation: User entry: "root"
18/12/20 17:20:45 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
18/12/20 17:20:45 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
18/12/20 17:20:45 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
18/12/20 17:20:45 DEBUG core.Tracer: span.receiver.classes = ; loaded no span receivers
18/12/20 17:20:45 DEBUG fs.FileSystem: Loading filesystems
18/12/20 17:20:45 DEBUG fs.FileSystem: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: ftp:// = class org.apache.hadoop.fs.ftp.FTPFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/common/hadoop-common-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: hftp:// = class org.apache.hadoop.hdfs.web.HftpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: hsftp:// = class org.apache.hadoop.hdfs.web.HsftpFileSystem from /opt/hadoop/hadoop-2.9.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.9.2.jar
18/12/20 17:20:45 DEBUG fs.FileSystem: Looking for FS supporting hdfs
18/12/20 17:20:45 DEBUG fs.FileSystem: looking for configuration option fs.hdfs.impl
18/12/20 17:20:45 DEBUG fs.FileSystem: Looking in service filesystems for implementation class
18/12/20 17:20:45 DEBUG fs.FileSystem: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
18/12/20 17:20:45 DEBUG impl.DfsClientConf: dfs.client.use.legacy.blockreader.local = false
18/12/20 17:20:45 DEBUG impl.DfsClientConf: dfs.client.read.shortcircuit = false
18/12/20 17:20:45 DEBUG impl.DfsClientConf: dfs.client.domain.socket.data.traffic = false
18/12/20 17:20:45 DEBUG impl.DfsClientConf: dfs.domain.socket.path =
18/12/20 17:20:45 DEBUG hdfs.DFSClient: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
18/12/20 17:20:45 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
18/12/20 17:20:45 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcProtobufRequest, rpcInvoker=or[email protected]932bc4a
18/12/20 17:20:45 DEBUG ipc.Client: getting client out of cache: [email protected]
18/12/20 17:20:45 DEBUG util.PerformanceAdvisory: Both short-circuit local reads and UNIX domain socket are disabled.
18/12/20 17:20:45 DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
18/12/20 17:20:45 DEBUG ipc.Client: The ping interval is 60000 ms.
18/12/20 17:20:45 DEBUG ipc.Client: Connecting to master/192.168.102.3:8020
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root: starting, having connections 1
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root got value #0
18/12/20 17:20:45 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 42ms
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root sending #1 org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root got value #1
18/12/20 17:20:45 DEBUG ipc.ProtobufRpcEngine: Call: getBlockLocations took 2ms
18/12/20 17:20:45 DEBUG hdfs.DFSClient: newInfo = LocatedBlocks{ fileLength=60 underConstruction=false blocks=[LocatedBlock{BP-414157957-192.168.102.3-1545285971113:blk_1073741832_1008; getBlockSize()=60; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[192.168.102.4:50010,DS-95901563-8713-4527-a4e3-1517663a515a,DISK], DatanodeInfoWithStorage[192.168.102.5:50010,DS-ca41aefb-6ecd-48c8-a063-dab5052a96d4,DISK]]}] lastLocatedBlock=LocatedBlock{BP-414157957-192.168.102.3-1545285971113:blk_1073741832_1008; getBlockSize()=60; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[192.168.102.5:50010,DS-ca41aefb-6ecd-48c8-a063-dab5052a96d4,DISK], DatanodeInfoWithStorage[192.168.102.4:50010,DS-95901563-8713-4527-a4e3-1517663a515a,DISK]]} isLastBlockComplete=true}
18/12/20 17:20:45 DEBUG hdfs.DFSClient: Connecting to datanode 192.168.102.4:50010
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root sending #2 org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root got value #2
18/12/20 17:20:45 DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 0ms
18/12/20 17:20:45 DEBUG sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /192.168.102.4, datanodeId = DatanodeInfoWithStorage[192.168.102.4:50010,DS-95901563-8713-4527-a4e3-1517663a515a,DISK]
hadoop 3
hbase 1
hive 2
mapreduce 1
spark 2
sqoop 1
storm 1
18/12/20 17:20:45 DEBUG ipc.Client: stopping client from cache: [email protected]
18/12/20 17:20:45 DEBUG ipc.Client: removing client from cache: [email protected]
18/12/20 17:20:45 DEBUG ipc.Client: stopping actual client because no more references remain: [email protected]
18/12/20 17:20:45 DEBUG ipc.Client: Stopping client
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root: closed
18/12/20 17:20:45 DEBUG ipc.Client: IPC Client (1380806038) connection to master/192.168.102.3:8020 from root: stopped, remaining connections 0
18/12/20 17:20:45 DEBUG util.ShutdownHookManager: Completed shutdown in 0.004 seconds; Timeouts: 0
18/12/20 17:20:45 DEBUG util.ShutdownHookManager: ShutdownHookManger completed shutdown.


So the warning is caused by a mismatch between the glibc version installed on the system and the version that libhadoop.so requires.

Check the system's libc version:

[root@master native]# ll /lib64/libc.so.6
lrwxrwxrwx. 1 root root 12 Dec 18 11:03 /lib64/libc.so.6 -> libc-2.12.so

The system ships glibc 2.12, which is older than the version `GLIBC_2.14' required by libhadoop.so.1.0.0.
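
To double-check the mismatch yourself, you can compare the symbol versions the system libc provides with the ones libhadoop.so asks for (these two commands are my suggestion, not part of the original post; adjust the libhadoop path to your install):

strings /lib64/libc.so.6 | grep ^GLIBC_          # versions the system glibc provides
objdump -T /opt/hadoop/hadoop-2.9.2/lib/native/libhadoop.so.1.0.0 | grep GLIBC_   # versions libhadoop needs

Here the first list should stop at GLIBC_2.12, while the second contains GLIBC_2.14, which is exactly what the UnsatisfiedLinkError complains about.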

Installing gcc 4.8 offline (reference):

https://blog.csdn.net/qq805934132/article/details/82893724

Download glibc

1. Build glibc-2.14 (my cluster sits on an isolated internal network, so I had to find another server to do the compilation):

wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
mv glibc-2.14.tar.gz /opt/software
cd /opt/software
tar xf glibc-2.14.tar.gz
cd glibc-2.14
mkdir build
cd build
../configure --prefix=/usr/local/glibc-2.14
make -j4
make install

This build failed because too many required libraries were missing. I'll find a way around that later.

Solution 2:

The other cause: the hadoopXXX.tar.gz binaries downloaded from the Apache Hadoop site were built on a 32-bit machine (annoying, I know), while my cluster runs 64-bit, so loading the .so file fails. This mostly does not stop Hadoop from working, but if you run something like Mahout machine-learning jobs, the native library can fail to load and the job exits immediately, so it is still worth getting rid of the WARN.

The fix:

1. Download the hadoop-2.9.2-src.tar.gz source code: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.9.2/hadoop-2.9.2-src.tar.gz

2. Build it on a 64-bit machine (my cluster machines are on an internal network, so I had to find a server with Internet access to build on).

3. Replace the existing $HADOOP_HOME/lib/native with the newly built native libraries (a sketch of this step follows the list).
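
A minimal sketch of step 3, assuming the build output lands in the default location of the 2.9.2 source tree (hadoop-dist/target/hadoop-2.9.2) — adjust the paths to your environment:

# on each cluster node: back up the old native libs, then copy in the freshly built ones
mv $HADOOP_HOME/lib/native $HADOOP_HOME/lib/native.bak
cp -r hadoop-2.9.2-src/hadoop-dist/target/hadoop-2.9.2/lib/native $HADOOP_HOME/lib/
# "hadoop checknative -a" should now report hadoop: true, and the WARN should disappear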

Compiling Hadoop from source

Build steps:

First, install the following software on the build machine (a VM in my case):

1. Install the JDK and configure its environment variables.

2. Install Maven and configure its environment variables.

Download it from http://maven.apache.org/download.cgi and pick the version that suits you; I chose apache-maven-3.6.0-bin.tar.gz. Unpack it:

tar -zxvf apache-maven-3.6.0-bin.tar.gz

3. Configure the Maven environment variables: vi ~/.bashrc and add
export MAVEN_HOME=/home/yuany/hadoop/apache-maven-3.6.0

export PATH=$MAVEN_HOME/bin:/home/yuany/android-studio/bin:/usr/local/lib/anaconda2/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

source ~/.bashrc

4. Verify that Maven is installed correctly:
mvn -version
5. Install the build dependencies:
sudo apt-get install g++ autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev

6. Install protobuf

  1. Download the protobuf source from https://github.com/protocolbuffers/protobuf/releases
  2. Build and install it:
tar xzvf protobuf-all-3.6.1.tar.gz
cd protobuf-3.6.1/
./configure --prefix=/usr/local/protobuf
make
make install

  3. That completes the installation; now configure it:

  (1) vim ~/.bashrc and add:

export PATH=$PATH:/usr/local/protobuf/bin/
export PKG_CONFIG_PATH=/usr/local/protobuf/lib/pkgconfig/
  Save the file and run source ~/.bashrc. Then run protoc --version to verify; if it prints libprotoc 3.6.1, protobuf is installed correctly. (One caveat I would add: the Hadoop 2.9.2 build normally expects protoc 2.5.0 and checks the version, so if the Maven build later fails with a protoc version mismatch, build and install protobuf 2.5.0 in the same way instead.)
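
One more note from experience, not from the original post: because protobuf is installed under the non-standard prefix /usr/local/protobuf, protoc may fail to start with an error about the libprotoc/libprotobuf shared libraries. If that happens, also add the library path to ~/.bashrc:

export LD_LIBRARY_PATH=/usr/local/protobuf/lib:$LD_LIBRARY_PATH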

Compiling Hadoop

Copy the source onto the Linux build machine and go into the source directory /home/yuany/hadoop/hadoop-2.9.2-src.

Run:

mvn clean package -Pdist,native -DskipTests -Dtar 
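
If the build later dies with an OutOfMemoryError from Maven, give it more heap and re-run the command above (the values below are the ones Hadoop's own BUILDING.txt suggests):

export MAVEN_OPTS="-Xms256m -Xmx1536m"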

Then wait... it takes quite a while. If you see the following output, the build succeeded!