
Several Ways to Import and Export Hive Data

1. Several Ways to Import Data into Hive

First, here are the Hive tables and data files used to demonstrate the import methods below.

Hive tables:

Create testA:

CREATE TABLE testA (
	id INT,
	name string,
	area string
) PARTITIONED BY (create_time string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Create testB:

CREATE TABLE testB (
	id INT,
	name string,
	area string,
	code string
) PARTITIONED BY (create_time string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE;

Data file (sourceA.txt):

1,fish1,SZ
2,fish2,SH
3,fish3,HZ
4,fish4,QD
5,fish5,SR

Data file (sourceB.txt):

1,zy1,SZ,1001
2,zy2,SH,1002
3,zy3,HZ,1003
4,zy4,QD,1004
5,zy5,SR,1005

(1) Importing a local file into a Hive table

hive> LOAD DATA LOCAL INPATH '/home/hadoop/sourceA.txt' INTO TABLE testA PARTITION(create_time='2015-07-08');
Copying data from file:/home/hadoop/sourceA.txt
Copying file: file:/home/hadoop/sourceA.txt
Loading data to table default.testa partition (create_time=2015-07-08)
Partition default.testa{create_time=2015-07-08} stats: [numFiles=1, numRows=0, totalSize=58, rawDataSize=0]
OK
Time taken: 0.237 seconds
hive> LOAD DATA LOCAL INPATH '/home/hadoop/sourceB.txt' INTO TABLE testB PARTITION(create_time='2015-07-09');
Copying data from file:/home/hadoop/sourceB.txt
Copying file: file:/home/hadoop/sourceB.txt
Loading data to table default.testb partition (create_time=2015-07-09)
Partition default.testb{create_time=2015-07-09} stats: [numFiles=1, numRows=0, totalSize=73, rawDataSize=0]
OK
Time taken: 0.212 seconds
hive> select * from testA;
OK
1	fish1	SZ	2015-07-08
2	fish2	SH	2015-07-08
3	fish3	HZ	2015-07-08
4	fish4	QD	2015-07-08
5	fish5	SR	2015-07-08
Time taken: 0.029 seconds, Fetched: 5 row(s)
hive> select * from testB;
OK
1	zy1	SZ	1001	2015-07-09
2	zy2	SH	1002	2015-07-09
3	zy3	HZ	1003	2015-07-09
4	zy4	QD	1004	2015-07-09
5	zy5	SR	1005	2015-07-09
Time taken: 0.047 seconds, Fetched: 5 row(s)
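
Note that LOAD DATA ... INTO TABLE appends: running the same load twice leaves two copies of the file in the partition. To replace the partition's contents instead, add OVERWRITE; a minimal sketch:

hive> LOAD DATA LOCAL INPATH '/home/hadoop/sourceA.txt' OVERWRITE INTO TABLE testA PARTITION(create_time='2015-07-08');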


(2) Importing from one Hive table into another

Import the data of testB into testA:

hive> INSERT INTO TABLE testA PARTITION(create_time='2015-07-11') select id, name, area from testB where id = 1;
...(omitted)
OK
Time taken: 14.744 seconds
hive> INSERT INTO TABLE testA PARTITION(create_time) select id, name, area, code from testB where id = 2;
...(omitted)
OK
Time taken: 19.852 seconds
hive> select * from testA;
OK
2	zy2	SH	1002
1	fish1	SZ	2015-07-08
2	fish2	SH	2015-07-08
3	fish3	HZ	2015-07-08
4	fish4	QD	2015-07-08
5	fish5	SR	2015-07-08
1	zy1	SZ	2015-07-11
Time taken: 0.032 seconds, Fetched: 7 row(s)

Notes:

1. The row of testB with id=1 is imported into testA under the static partition create_time='2015-07-11'.

2. The row of testB with id=2 is imported into testA via dynamic partitioning: the create_time partition value is taken from that row's code column (1002).
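
A dynamic-partition insert like the second statement is normally rejected unless dynamic partitioning is enabled for the session. A minimal sketch of the standard settings:

hive> set hive.exec.dynamic.partition=true;
hive> set hive.exec.dynamic.partition.mode=nonstrict;

nonstrict mode is needed here because every partition column in the insert is dynamic; the default strict mode requires at least one static partition value.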

(3) Importing an HDFS file into a Hive table

Upload sourceA.txt and sourceB.txt to HDFS, at the paths /home/hadoop/sourceA.txt and /home/hadoop/sourceB.txt respectively.
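
The upload itself is ordinary HDFS shell work; a minimal sketch, assuming the files sit under /home/hadoop on the local disk:

$ hadoop fs -mkdir -p /home/hadoop
$ hadoop fs -put /home/hadoop/sourceA.txt /home/hadoop/sourceA.txt
$ hadoop fs -put /home/hadoop/sourceB.txt /home/hadoop/sourceB.txt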

hive> LOAD DATA INPATH '/home/hadoop/sourceA.txt' INTO TABLE testA PARTITION(create_time='2015-07-08');
...(omitted)
OK
Time taken: 0.237 seconds
hive> LOAD DATA INPATH '/home/hadoop/sourceB.txt' INTO TABLE testB PARTITION(create_time='2015-07-09');
...(omitted)
OK
Time taken: 0.212 seconds
hive> select * from testA;
OK
1	fish1	SZ	2015-07-08
2	fish2	SH	2015-07-08
3	fish3	HZ	2015-07-08
4	fish4	QD	2015-07-08
5	fish5	SR	2015-07-08
Time taken: 0.029 seconds, Fetched: 5 row(s)
hive> select * from testB;
OK
1	zy1	SZ	1001	2015-07-09
2	zy2	SH	1002	2015-07-09
3	zy3	HZ	1003	2015-07-09
4	zy4	QD	1004	2015-07-09
5	zy5	SR	1005	2015-07-09
Time taken: 0.047 seconds, Fetched: 5 row(s)
/home/hadoop/sourceA.txt was imported into table testA, and /home/hadoop/sourceB.txt into table testB.
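
One difference from the LOCAL variant worth noting: LOAD DATA INPATH moves the HDFS file into the table's warehouse directory rather than copying it, so the source path is gone after the load. A quick check, as a sketch:

$ hadoop fs -ls /home/hadoop/sourceA.txt

After the load, this should report that the file no longer exists.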

(4) Importing from another table while creating a new table

hive> create table testC as select name, code from testB;
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1449746265797_0106, Tracking URL = http://hadoopcluster79:8088/proxy/application_1449746265797_0106/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job  -kill job_1449746265797_0106
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-24 16:40:17,981 Stage-1 map = 0%,  reduce = 0%
2015-12-24 16:40:23,115 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.11 sec
MapReduce Total cumulative CPU time: 1 seconds 110 msec
Ended Job = job_1449746265797_0106
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://hadoop2cluster/tmp/hive-root/hive_2015-12-24_16-40-09_983_6048680148773453194-1/-ext-10001
Moving data to: hdfs://hadoop2cluster/home/hadoop/hivedata/warehouse/testc
Table default.testc stats: [numFiles=1, numRows=0, totalSize=45, rawDataSize=0]
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 1.11 sec   HDFS Read: 297 HDFS Write: 45 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 110 msec
OK
Time taken: 14.292 seconds
hive> desc testC;
OK
name                	string              	                    
code                	string              	                    
Time taken: 0.032 seconds, Fetched: 2 row(s)
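
CTAS copies both the schema of the selected columns and the data; note that testC has no partitions, since CTAS cannot create a partitioned table. If only the structure of an existing table is wanted, CREATE TABLE ... LIKE is the lighter alternative; a sketch, with testD as a hypothetical table name:

hive> create table testD like testB;

This duplicates testB's full definition, including the create_time partition column, but copies no data.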

2. Several Ways to Export Data from Hive

(1) Exporting to the local file system

hive> INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/output' ROW FORMAT DELIMITED FIELDS TERMINATED by ',' select * from testA;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1451024007879_0001, Tracking URL = http://hadoopcluster79:8088/proxy/application_1451024007879_0001/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job  -kill job_1451024007879_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-25 17:04:30,447 Stage-1 map = 0%,  reduce = 0%
2015-12-25 17:04:35,616 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.16 sec
MapReduce Total cumulative CPU time: 1 seconds 160 msec
Ended Job = job_1451024007879_0001
Copying data to local directory /home/hadoop/output
Copying data to local directory /home/hadoop/output
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 1.16 sec   HDFS Read: 305 HDFS Write: 110 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 160 msec
OK
Time taken: 16.701 seconds

Check the result:

[[email protected] output]$ cat /home/hadoop/output/000000_0 
1,fish1,SZ,2015-07-08
2,fish2,SH,2015-07-08
3,fish3,HZ,2015-07-08
4,fish4,QD,2015-07-08
5,fish5,SR,2015-07-08
INSERT OVERWRITE LOCAL DIRECTORY exports the data of Hive table testA to the /home/hadoop/output directory. As with most HQL, the statement runs as a MapReduce job, and /home/hadoop/output is in effect that job's output path, so the result lands in a file named 000000_0. Because this is an OVERWRITE, any existing contents of the target directory are replaced.
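
Since the SELECT can be any query, a subset can be exported the same way; a sketch, with a hypothetical target directory:

hive> INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/output_sz' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' select * from testA where area = 'SZ';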

(2) Exporting to HDFS

Exporting to HDFS is much like exporting to the local file system: simply drop the LOCAL keyword from the HQL statement.

hive> INSERT OVERWRITE DIRECTORY '/home/hadoop/output' select * from testA; 
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1451024007879_0002, Tracking URL = http://hadoopcluster79:8088/proxy/application_1451024007879_0002/
Kill Command = /home/hadoop/apache/hadoop-2.4.1/bin/hadoop job  -kill job_1451024007879_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-25 17:08:51,034 Stage-1 map = 0%,  reduce = 0%
2015-12-25 17:08:59,313 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.4 sec
MapReduce Total cumulative CPU time: 1 seconds 400 msec
Ended Job = job_1451024007879_0002
Stage-3 is selected by condition resolver.
Stage-2 is filtered out by condition resolver.
Stage-4 is filtered out by condition resolver.
Moving data to: hdfs://hadoop2cluster/home/hadoop/hivedata/hive-hadoop/hive_2015-12-25_17-08-43_733_1768532778392261937-1/-ext-10000
Moving data to: /home/hadoop/output
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 1.4 sec   HDFS Read: 305 HDFS Write: 110 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 400 msec
OK
Time taken: 16.667 seconds

Check the HDFS output file:

[[email protected] bin]$ ./hadoop fs -cat /home/hadoop/output/000000_0
1fish1SZ2015-07-08
2fish2SH2015-07-08
3fish3HZ2015-07-08
4fish4QD2015-07-08
5fish5SR2015-07-08
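
The rows look glued together because no ROW FORMAT clause was given this time, so Hive fell back to its default field separator \001 (Ctrl-A), which is invisible in a terminal. One way to inspect the file, as a sketch:

$ hadoop fs -cat /home/hadoop/output/000000_0 | tr '\001' ','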

Other methods

Data can also be exported with hive's -e and -f options.

With -e, the option is followed by a SQL statement; the path after >> is the output file.

[[email protected] bin]$ ./hive -e "select * from testA" >> /home/hadoop/output/testA.txt
15/12/25 17:15:07 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead

Logging initialized using configuration in file:/home/hadoop/apache/hive-0.13.1/conf/hive-log4j.properties
OK
Time taken: 1.128 seconds, Fetched: 5 row(s)
[[email protected] bin]$ cat /home/hadoop/output/testA.txt 
1	fish1	SZ	2015-07-08
2	fish2	SH	2015-07-08
3	fish3	HZ	2015-07-08
4	fish4	QD	2015-07-08
5	fish5	SR	2015-07-08
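
Notice that "OK" and "Time taken" appear on the terminal but not in testA.txt: the CLI writes status messages to stderr, so only the query result reaches the redirected file. To suppress the status messages as well, add the -S (silent mode) flag; a sketch:

$ hive -S -e "select * from testA" >> /home/hadoop/output/testA.txt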

With -f, the option is followed by a file containing the SQL statement; the path after >> is the output file.

The SQL statement file:

[[email protected] bin]$ cat /home/hadoop/output/sql.sql 
select * from testA

Run with the -f option:

[[email protected] bin]$ ./hive -f /home/hadoop/output/sql.sql >> /home/hadoop/output/testB.txt
15/12/25 17:20:52 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead

Logging initialized using configuration in file:/home/hadoop/apache/hive-0.13.1/conf/hive-log4j.properties
OK
Time taken: 1.1 seconds, Fetched: 5 row(s)

Check the result:

[[email protected] bin]$ cat /home/hadoop/output/testB.txt 
1	fish1	SZ	2015-07-08
2	fish2	SH	2015-07-08
3	fish3	HZ	2015-07-08
4	fish4	QD	2015-07-08
5	fish5	SR	2015-07-08