
HBase: Java API connection to HBase fails with ERROR AsyncProcess: Failed to get region location

1. Problem description

Operating HBase through the Java API fails with the error below. On inspection, HBase itself was fine: tables could be created and rows inserted from the shell, but the Java API simply could not connect. This cost two days with no leads. Everything worth trying was tried; the hostname mapping was configured in both the CentOS and Windows hosts files, yet the connection still failed.

18/11/23 07:31:53 INFO ZooKeeper: Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
18/11/23 07:31:53 INFO ZooKeeper: Client environment:java.compiler=<NA>
18/11/23 07:31:53 INFO ZooKeeper: Client environment:os.name=Windows 10
18/11/23 07:31:53 INFO ZooKeeper: Client environment:os.arch=amd64
18/11/23 07:31:53 INFO ZooKeeper: Client environment:os.version=10.0
18/11/23 07:31:53 INFO ZooKeeper: Client environment:user.name=Administrator
18/11/23 07:31:53 INFO ZooKeeper: Client environment:user.home=C:\Users\Administrator
18/11/23 07:31:53 INFO ZooKeeper: Client environment:user.dir=E:\Tools\WorkspaceforMyeclipse\StreamingProduct
18/11/23 07:31:53 INFO ZooKeeper: Initiating client connection, connectString=hadoop:2181 sessionTimeout=180000 watcher=hconnection-0x6a28ffa40x0, quorum=hadoop:2181, baseZNode=/hbase
18/11/23 07:31:53 INFO ClientCnxn: Opening socket connection to server hadoop/119.3.92.224:2181. Will not attempt to authenticate using SASL (unknown error)
18/11/23 07:31:53 INFO ClientCnxn: Socket connection established to hadoop/119.3.92.224:2181, initiating session
18/11/23 07:31:53 INFO ClientCnxn: Session establishment complete on server hadoop/119.3.92.224:2181, sessionid = 0x167320861c80068, negotiated timeout = 40000
18/11/23 07:39:34 ERROR AsyncProcess: Failed to get region location 
java.net.ConnectException: Connection refused: no further information
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:416)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:722)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
	at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
	at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:34070)
	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1582)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1398)
	at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:395)
	at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495)
	at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1086)
	at com.Utils.HBaseUtils.put(HBaseUtils.java:79)
	at com.Utils.HBaseUtils.main(HBaseUtils.java:125)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: ConnectException: 1 time, 
	at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247)
	at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227)
	at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1758)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
	at org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
	at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1495)
	at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1086)
	at com.Utils.HBaseUtils.put(HBaseUtils.java:79)
	at com.Utils.HBaseUtils.main(HBaseUtils.java:125)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

Process finished with exit code 0

2. Cause

Who would have guessed: the cause was nothing more than the Linux (CentOS) hosts file, specifically the default localhost.localdomain entries. Most likely the RegionServer registered itself under an address that resolved back to 127.0.0.1, so the remote client ended up connecting to its own loopback and was refused. The original file:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.147 hadoop hadoop

3. Solution

Remove all of the localhost.localdomain entries:

#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost
172.16.0.147 hadoop hadoop

4. Summary

(1) Problems like this are mostly caused by hosts mappings, so the hostname mapping must be configured in both the Windows and Linux hosts files: the Windows hosts file maps the hostname to the public IP, while the Linux hosts file maps it to the private IP. This nasty case was an exception in that it was additionally affected by localhost.localdomain.
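A quick way to check, from the client side, what a hostname actually resolves to is a small sketch like the one below (the class name HostCheck is made up for illustration; pass the cluster hostname, e.g. hadoop, as the first argument). If the hostname resolves to 127.0.0.1 on the client, the HBase client will try to reach the RegionServer on its own loopback and get "Connection refused":

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Client-side sanity check of hostname resolution.
public class HostCheck {
    static String resolve(String host) throws UnknownHostException {
        // Forward lookup: the IP the JVM will actually connect to.
        return InetAddress.getByName(host).getHostAddress();
    }

    public static void main(String[] args) throws UnknownHostException {
        String host = args.length > 0 ? args[0] : "localhost";
        System.out.println(host + " -> " + resolve(host));
    }
}
```

Run it on both the Windows client and the Linux server; the two should print the public and private IP respectively, never 127.0.0.1 for the cluster hostname.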

(2) On the server, the hadoop hostname must be listed in HBase's conf/regionservers.

(3) On the server, add the hadoop hostname to ZooKeeper's conf/zoo.cfg.
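For reference, a minimal zoo.cfg sketch assuming a single-node ensemble; the dataDir path and the peer ports (2888/3888) are typical placeholders, not values from this article, and only the hadoop hostname is taken from this setup:

```properties
# conf/zoo.cfg -- hypothetical single-node example
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
# the server entry uses the hadoop hostname, matching /etc/hosts
server.1=hadoop:2888:3888
```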

(4) The Java API program must reference the cluster by the mapped hostname in its configuration:

        // HBaseConfiguration.create() loads hbase-default.xml/hbase-site.xml;
        // a plain new Configuration() would miss the HBase defaults
        Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.rootdir", "hdfs://hadoop:8020/hbase");
        configuration.set("hbase.zookeeper.quorum", "hadoop");
        configuration.set("hbase.zookeeper.property.clientPort", "2181");

(5) Add the dependencies

Versions:
 <properties>
    <hadoop.version>2.6.0-cdh5.7.0</hadoop.version>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.9.0.0</kafka.version>
    <hbase.version>1.2.0-cdh5.7.0</hbase.version>
  </properties>

Dependencies:
 <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>


    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>${kafka.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
      <version>2.1.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/net.jpountz.lz4/lz4 -->
    <dependency>
      <groupId>net.jpountz.lz4</groupId>
      <artifactId>lz4</artifactId>
      <version>1.3.0</version>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
    <dependency>
      <groupId>org.apache.hbase</groupId>
      <artifactId>hbase-client</artifactId>
      <version>${hbase.version}</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-server -->
    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-server</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->

    <!-- HDFS Client -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
      <scope>test</scope>
    </dependency>

    <!-- https://mvnrepository.com/artifact/org.apache.zookeeper/zookeeper -->
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.5-cdh5.7.0</version>
      <type>pom</type>
    </dependency>

  </dependencies>