Bulk-Importing Data into HBase from the Command Line

To set the stage, a quick end-to-end example:

Create the table in the HBase shell:
hbase(main):003:0> create 'people','0'

Upload the prepared data file to HDFS:
[hadoop@h71 ~]$ vi people.txt
1,jimmy,25,jiujinshan
2,tina,25,hunan

[hadoop@h71 ~]$ hadoop fs -mkdir /bulkload
[hadoop@h71 ~]$ hadoop fs -put people.txt /bulkload
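
To double-check that the file landed in HDFS as expected (optional):
[hadoop@h71 ~]$ hadoop fs -cat /bulkload/people.txt
1,jimmy,25,jiujinshan
2,tina,25,hunan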

Now import the file just uploaded to HDFS into HBase via bulk load:

importtsv:
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.columns=HBASE_ROW_KEY,0:name,0:age,0:province \
  -Dimporttsv.bulk.output=hdfs:///bulkload/output \
  people hdfs:///bulkload/people.txt

(The importtsv tool reads only from HDFS, which is why the data first had to be copied from the local Linux filesystem into HDFS.)

[hadoop@h71 ~]$ hadoop fs -lsr /bulkload
drwxr-xr-x   - hadoop supergroup          0 2017-03-20 02:16 /bulkload/output
drwxr-xr-x   - hadoop supergroup          0 2017-03-20 02:15 /bulkload/output/0
-rw-r--r--   2 hadoop supergroup       1247 2017-03-20 02:16 /bulkload/output/0/e9124651e9e04ab29794572e67b87736
-rw-r--r--   2 hadoop supergroup          0 2017-03-20 02:16 /bulkload/output/_SUCCESS
-rw-r--r--   2 hadoop supergroup         38 2017-03-20 01:50 /bulkload/people.txt

completebulkload:
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload \
  hdfs:///bulkload/output people
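
For reference, completebulkload is just a shorthand entry point for the LoadIncrementalHFiles tool, so the following should be an equivalent invocation (a sketch, using the same paths as above):
/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles hdfs:///bulkload/output people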

hbase(main):004:0> scan 'people'
ROW                                         COLUMN+CELL                                                                                                                 
 1                                          column=0:age, timestamp=1489947175529, value=25                                                                             
 1                                          column=0:name, timestamp=1489947175529, value=jimmy                                                                         
 1                                          column=0:province, timestamp=1489947175529, value=jiujinshan                                                                
 2                                          column=0:age, timestamp=1489947175529, value=25                                                                             
 2                                          column=0:name, timestamp=1489947175529, value=tina                                                                          
 2                                          column=0:province, timestamp=1489947175529, value=hunan
HBase itself already ships with command-line tools for bulk-importing data directly (method 4 below is not built in; it comes from a third-party project). In my experience, though, these command-line tools only suit fairly simple scenarios; once the requirements get complex, you will probably have to write your own code.

So far I have collected four methods:
(1) Importing a file into HBase with ImportTsv
This can import a CSV file straight into an HBase table, but the target table has to be created in HBase first:
hbase(main):012:0> create 'hbase-tb1-001','cf'
[hadoop@h71 ~]$ vi simple.csv
1,"tom"
2,"sam"
3,"jerry"
4,"marry"
5,"john"
[hadoop@h71 ~]$ hadoop fs -put simple.csv /

Then run:
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf \
  hbase-tb1-001 /simple.csv
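
As far as I recall, ImportTsv skips malformed input lines by default (they show up in the job counters); passing -Dimporttsv.skip.bad.lines=false makes the job fail on the first invalid line instead, e.g. (a sketch based on the command above):
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.skip.bad.lines=false \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf \
  hbase-tb1-001 /simple.csv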

(2) Importing data into HBase with completebulkload
This is the same approach as the people import at the top of the article, except that there the table had to be created in HBase beforehand, while here no manual create is needed: the command itself creates the table in HBase automatically.
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.bulk.output=/output \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf \
  hbase-tb1-002 /simple.csv
(This generates HFiles under the specified path and creates the empty table hbase-tb1-002 in HBase.)
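
Before running completebulkload, you can optionally confirm in the HBase shell that the empty table really was created:
hbase(main):013:0> describe 'hbase-tb1-002'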
HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` \
/home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload \
  /output hbase-tb1-002
or, equivalently, this command:
hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload /output hbase-tb1-002

hbase(main):014:0> scan 'hbase-tb1-002'
ROW                                         COLUMN+CELL                                                                                                                 
 1                                          column=cf:, timestamp=1489846700133, value="tom"                                                                            
 2                                          column=cf:, timestamp=1489846700133, value="sam"                                                                            
 3                                          column=cf:, timestamp=1489846700133, value="jerry"                                                                          
 4                                          column=cf:, timestamp=1489846700133, value="marry"                                                                          
 5                                          column=cf:, timestamp=1489846700133, value="john"
Note: methods (1) and (2) are really the same thing as the two commands in the warm-up at the start of this article; only the form of the command differs slightly.

(3) Importing data into HBase with import
Start from the existing hbase-tb1-002 table, which already holds data:

hbase(main):014:0> scan 'hbase-tb1-002'
ROW                                         COLUMN+CELL                                                                                                                 
 1                                          column=cf:, timestamp=1489846700133, value="tom"                                                                            
 2                                          column=cf:, timestamp=1489846700133, value="sam"                                                                            
 3                                          column=cf:, timestamp=1489846700133, value="jerry"                                                                          
 4                                          column=cf:, timestamp=1489846700133, value="marry"                                                                          
 5                                          column=cf:, timestamp=1489846700133, value="john"
hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar export hbase-tb1-002 /test-output
(In HBase 0.96 the corresponding command is bin/hbase org.apache.hadoop.hbase.mapreduce.Export hbase-tb1-002 /test-output.)
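
To my knowledge, Export also accepts optional trailing arguments, so the dump can be limited to a number of versions or a time range; the general form is:
hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]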
[hadoop@h71 hbase-1.0.0-cdh5.5.2]$ hadoop fs -lsr /test-output
-rw-r--r--   2 hadoop supergroup          0 2017-03-19 00:16 /test-output/_SUCCESS
-rw-r--r--   2 hadoop supergroup        344 2017-03-19 00:16 /test-output/part-m-00000
(The generated data file is in SequenceFile format; viewing it with hadoop fs -cat just prints garbage.)
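hadoop fs -text at least decodes the SequenceFile framing, though the values are serialized HBase Result objects, so the output is still only partly readable (and the HBase jars must be on the Hadoop classpath for it to work):
[hadoop@h71 ~]$ hadoop fs -text /test-output/part-m-00000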
hbase(main):025:0> create 'hbase-tb1-003','cf'
hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar import hbase-tb1-003 /test-output
(Also, /test-output/part-m-00000 is not deleted afterwards, unlike the HFiles, which disappear when completebulkload consumes them.)
hbase(main):026:0> scan 'hbase-tb1-003'
ROW                                         COLUMN+CELL                                                                                                                 
 1                                          column=cf:, timestamp=1489853023886, value="tom"                                                                            
 2                                          column=cf:, timestamp=1489853023886, value="sam"                                                                            
 3                                          column=cf:, timestamp=1489853023886, value="jerry"                                                                          
 4                                          column=cf:, timestamp=1489853023886, value="marry"                                                                          
 5                                          column=cf:, timestamp=1489853023886, value="john"
(4) Loading large batches of data with Phoenix via MapReduce (bulkload)
Reference: http://blog.csdn.net/maomaosi2009/article/details/45623821 (that post claims that pointing the import at a local path with file:/// throws an error but still loads the data; in my test it threw the error and loaded nothing, and querying the table in Phoenix returned no rows)
http://blog.csdn.net/d6619309/article/details/51334126
(This experiment worked for me with the Apache builds of HBase and Phoenix, but failed with the CDH builds, reporting this error:
Error: java.lang.ClassNotFoundException: org.apache.commons.csv.CSVFormat
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.phoenix.mapreduce.CsvToKeyValueMapper$CsvLineParser.<init>(CsvToKeyValueMapper.java:282)
        at org.apache.phoenix.mapreduce.CsvToKeyValueMapper.setup(CsvToKeyValueMapper.java:142)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Since Phoenix does not officially support the CDH builds out of the box, I had recompiled it with Maven against CDH 5.5.2, so at first I assumed the error was caused by something I changed during that rebuild.)
Fix: I then tentatively copied phoenix-4.6.0-cdh5.5.2-client.jar into /home/hadoop/hbase-1.0.0-cdh5.5.2/lib on the master node, and after that the command above worked.
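
A minimal sketch of that fix, assuming the rebuilt client jar sits in the current directory on the master node:
[hadoop@h40 ~]$ cp phoenix-4.6.0-cdh5.5.2-client.jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/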

Create the user table from the Phoenix CLI:
0: jdbc:phoenix:h40,h41,h42:2181> create table user (id varchar primary key,account varchar ,passwd varchar);

Create data_import.txt under the PHOENIX_HOME directory with the following contents:
[hadoop@h40 ~]$ vi data_import.txt
001,google,AM
002,baidu,BJ
003,alibaba,HZ

Run the MapReduce job:
[hadoop@h40 phoenix-4.6.0-HBase-1.0-bin]$ hadoop jar phoenix-4.6.0-HBase-1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table USER --input /data_import.txt

0: jdbc:phoenix:h40,h41,h42:2181> select * from user;
+------------------------------------------+------------------------------------------+------------------------------------------+
|                    ID                    |                 ACCOUNT                  |                  PASSWD                  |
+------------------------------------------+------------------------------------------+------------------------------------------+
| 001                                      | google                                   | AM                                       |
| 002                                      | baidu                                    | BJ                                       |
| 003                                      | alibaba                                  | HZ                                       |
+------------------------------------------+------------------------------------------+------------------------------------------+
hbase(main):004:0> scan 'USER'
ROW                                                          COLUMN+CELL                                                                                                                                                                     
 001                                                         column=0:ACCOUNT, timestamp=1492424759793, value=google                                                                                                                         
 001                                                         column=0:PASSWD, timestamp=1492424759793, value=AM                                                                                                                              
 001                                                         column=0:_0, timestamp=1492424759793, value=                                                                                                                                    
 002                                                         column=0:ACCOUNT, timestamp=1492424759793, value=baidu                                                                                                                          
 002                                                         column=0:PASSWD, timestamp=1492424759793, value=BJ                                                                                                                              
 002                                                         column=0:_0, timestamp=1492424759793, value=                                                                                                                                    
 003                                                         column=0:ACCOUNT, timestamp=1492424759793, value=alibaba                                                                                                                        
 003                                                         column=0:PASSWD, timestamp=1492424759793, value=HZ                                                                                                                              
 003                                                         column=0:_0, timestamp=1492424759793, value=
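
(The empty 0:_0 cell on each row is Phoenix's internal marker column, which Phoenix adds to every row it writes.)

For messier inputs, CsvBulkLoadTool has a couple more options worth knowing; to my knowledge -d sets the field delimiter and -z selects the ZooKeeper quorum explicitly, e.g. (a sketch; the quorum here matches the JDBC URL used above):
[hadoop@h40 phoenix-4.6.0-HBase-1.0-bin]$ hadoop jar phoenix-4.6.0-HBase-1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table USER --input /data_import.txt -d ',' -z h40,h41,h42:2181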