
Big Data 012 - Hive Bucketing in Detail

In plain terms, Hive bucketing means splitting the data of a table (or of a partition; either way it is just a directory on HDFS, with the actual data stored in files under that directory) across several files. Suppose table buck (a directory holding some file such as sz.data) contains 1,000,000 rows. When working with large datasets, it is very convenient during query development and debugging to be able to trial-run a query against a small slice of the data, so we can store the table as, say, 4 files instead.
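Which file a row lands in is determined by hashing the bucketing column and taking the result modulo the number of buckets. A minimal sketch against the buck table built below (hash() and pmod() are built-in Hive functions; in Hive 1.x this matches the bucketing rule for string keys):

select id, pmod(hash(id), 4) as bucket_no from buck;

For example, hash('1') = 49 and pmod(49, 4) = 1, so the row with id 1 ends up in bucket 1, i.e. output file 000001_0 — which the dfs -cat output further down confirms.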
What follows records the whole process from start to finish, including the problems that came up along the way.

Connect with beeline, create the database myhive2, and switch to it:

[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ./beeline 
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: root
Enter password for jdbc:hive2://localhost:10000: ******
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| myhive         |
+----------------+--+
2 rows selected (1.795 seconds)
0: jdbc:hive2://localhost:10000> create database myhive2;
No rows affected (0.525 seconds)
0: jdbc:hive2://localhost:10000> use myhive2;
No rows affected (0.204 seconds)

Create a bucketed table, load data into it, and look at its contents:

0: jdbc:hive2://localhost:10000> create table buck(id string,name string)
0: jdbc:hive2://localhost:10000> clustered by (id) sorted by (id) into 4 buckets
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
No rows affected (0.34 seconds)
0: jdbc:hive2://localhost:10000> desc buck;
+-----------+------------+----------+--+
| col_name  | data_type  | comment  |
+-----------+------------+----------+--+
| id        | string     |          |
| name      | string     |          |
+-----------+------------+----------+--+
2 rows selected (0.55 seconds)
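For reference, sz.data is a plain comma-delimited text file. Its 12 rows (91 bytes, matching the totalSize reported by the load below) are:

1,zhangsan
2,lisi
3,wangwu
4,furong
5,fengjie
6,aaa
7,bbb
8,ccc
9,ddd
10,eee
11,fff
12,ggg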
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table buck;
INFO  : Loading data to table myhive2.buck from file:/root/sz.data
INFO  : Table myhive2.buck stats: [numFiles=1, totalSize=91]
No rows affected (1.411 seconds)
0: jdbc:hive2://localhost:10000> select * from buck;
+----------+------------+--+
| buck.id  | buck.name  |
+----------+------------+--+
| 1        | zhangsan   |
| 2        | lisi       |
| 3        | wangwu     |
| 4        | furong     |
| 5        | fengjie    |
| 6        | aaa        |
| 7        | bbb        |
| 8        | ccc        |
| 9        | ddd        |
| 10       | eee        |
| 11       | fff        |
| 12       | ggg        |
+----------+------------+--+

If the table were really bucketed, the buck directory should now contain 4 files. Checking the HDFS web UI:

(HDFS web UI screenshot: the buck directory still contains only the loaded sz.data file)

It does not; there is still just the single file we loaded. That is because neither Hive nor Hadoop splits an already-loaded file into buckets for us; the data has to be written out bucketed in the first place. We need to enable bucket enforcement and set the number of reduce tasks equal to the number of buckets:

0: jdbc:hive2://localhost:10000> set hive.enforce.bucketing = true;
No rows affected (0.063 seconds)
0: jdbc:hive2://localhost:10000> set hive.enforce.bucketing ;
+------------------------------+--+
|             set              |
+------------------------------+--+
| hive.enforce.bucketing=true  |
+------------------------------+--+
1 row selected (0.067 seconds)
0: jdbc:hive2://localhost:10000> set mapreduce.job.reduces=4;

So instead we create another table tp and move its data into buck with an insert ... select, telling Hive how to distribute the rows at insert time. The rows then get split into 4 buckets, each sorted internally. In other words, a table becomes bucketed because the data is written out bucketed at insert time, not because Hive carves up an existing file afterwards.
Next: truncate buck, create table tp, load sz.data into tp, and insert into buck from a query over tp.

0: jdbc:hive2://localhost:10000> truncate table buck;
No rows affected (0.316 seconds)
0: jdbc:hive2://localhost:10000> create table tp(id string,name string)
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
No rows affected (0.112 seconds)
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table tp;
INFO  : Loading data to table myhive2.tp from file:/root/sz.data
INFO  : Table myhive2.tp stats: [numFiles=1, totalSize=91]
No rows affected (0.419 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| buck      |
| tp        |
+-----------+--+
2 rows selected (0.128 seconds)
0: jdbc:hive2://localhost:10000> select * from tp;
+--------+-----------+--+
| tp.id  |  tp.name  |
+--------+-----------+--+
| 1      | zhangsan  |
| 2      | lisi      |
| 3      | wangwu    |
| 4      | furong    |
| 5      | fengjie   |
| 6      | aaa       |
| 7      | bbb       |
| 8      | ccc       |
| 9      | ddd       |
| 10     | eee       |
| 11     | fff       |
| 12     | ggg       |
+--------+-----------+--+
12 rows selected (0.243 seconds)
0: jdbc:hive2://localhost:10000> insert into buck 
0: jdbc:hive2://localhost:10000> select id,name from tp distribute by (id) sort by (id);
INFO  : Number of reduce tasks determined at compile time: 4
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1508216103995_0028
INFO  : The url to track the job: http://mini1:8088/proxy/application_1508216103995_0028/
INFO  : Starting Job = job_1508216103995_0028, Tracking URL = http://mini1:8088/proxy/application_1508216103995_0028/
INFO  : Kill Command = /root/apps/hadoop-2.6.4/bin/hadoop job  -kill job_1508216103995_0028
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 4
INFO  : 2017-10-19 03:57:23,631 Stage-1 map = 0%,  reduce = 0%
INFO  : 2017-10-19 03:57:29,349 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.18 sec
INFO  : 2017-10-19 03:57:40,096 Stage-1 map = 100%,  reduce = 25%, Cumulative CPU 2.55 sec
INFO  : 2017-10-19 03:57:41,152 Stage-1 map = 100%,  reduce = 75%, Cumulative CPU 5.29 sec
INFO  : 2017-10-19 03:57:42,375 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.61 sec
INFO  : MapReduce Total cumulative CPU time: 6 seconds 610 msec
INFO  : Ended Job = job_1508216103995_0028
INFO  : Loading data to table myhive2.buck from hdfs://192.168.25.127:9000/user/hive/warehouse/myhive2.db/buck/.hive-staging_hive_2017-10-19_03-57-14_624_1985499545258899177-1/-ext-10000
INFO  : Table myhive2.buck stats: [numFiles=4, numRows=12, totalSize=91, rawDataSize=79]
No rows affected (29.238 seconds)
0: jdbc:hive2://localhost:10000> select * from buck;
+----------+------------+--+
| buck.id  | buck.name  |
+----------+------------+--+
| 11       | fff        |
| 4        | furong     |
| 8        | ccc        |
| 1        | zhangsan   |
| 12       | ggg        |
| 5        | fengjie    |
| 9        | ddd        |
| 2        | lisi       |
| 6        | aaa        |
| 10       | eee        |
| 3        | wangwu     |
| 7        | bbb        |
+----------+------------+--+

By now it should be clear the table is bucketed; otherwise the ids would have come back 1 through 12 in order. Each of the 4 buckets was sorted independently; unlike order by, there is no global sort.
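For comparison, a global sort would use order by, which in Hive funnels all rows through a single reducer; since id is a string, the ids would come back in full lexicographic order (1, 10, 11, 12, 2, ...):

select * from buck order by id;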

Checking the HDFS web UI confirms there really are 4 files now. Let's inspect their contents from the client (beeline can pass dfs commands straight through to HDFS):

0: jdbc:hive2://localhost:10000> dfs -ls /user/hive/warehouse/myhive2.db/buck;
+-----------------------------------------------------------------------------------------------------------+--+
|                                                DFS Output                                                 |
+-----------------------------------------------------------------------------------------------------------+--+
| Found 4 items                                                                                             |
| -rwxr-xr-x   2 root supergroup         22 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000000_0  |
| -rwxr-xr-x   2 root supergroup         34 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000001_0  |
| -rwxr-xr-x   2 root supergroup         13 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000002_0  |
| -rwxr-xr-x   2 root supergroup         22 2017-10-19 03:57 /user/hive/warehouse/myhive2.db/buck/000003_0  |
+-----------------------------------------------------------------------------------------------------------+--+
5 rows selected (0.028 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat  /user/hive/warehouse/myhive2.db/buck/000000_0;
+-------------+--+
| DFS Output  |
+-------------+--+
| 11,fff      |
| 4,furong    |
| 8,ccc       |
+-------------+--+
3 rows selected (0.02 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat  /user/hive/warehouse/myhive2.db/buck/000001_0;
+-------------+--+
| DFS Output  |
+-------------+--+
| 1,zhangsan  |
| 12,ggg      |
| 5,fengjie   |
| 9,ddd       |
+-------------+--+
4 rows selected (0.08 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat  /user/hive/warehouse/myhive2.db/buck/000002_0;
+-------------+--+
| DFS Output  |
+-------------+--+
| 2,lisi      |
| 6,aaa       |
+-------------+--+
2 rows selected (0.088 seconds)
0: jdbc:hive2://localhost:10000> dfs -cat  /user/hive/warehouse/myhive2.db/buck/000003_0;
+-------------+--+
| DFS Output  |
+-------------+--+
| 10,eee      |
| 3,wangwu    |
| 7,bbb       |
+-------------+--+
3 rows selected (0.062 seconds)
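This is where the motivation from the beginning pays off: on a bucketed table, TABLESAMPLE can restrict a query to a single bucket, so a trial query reads only a quarter of the data. A small example (bucket numbering starts at 1, so bucket 1 out of 4 corresponds to file 000000_0):

select * from buck tablesample(bucket 1 out of 4 on id);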

Note: in the statement select id,name from tp distribute by (id) sort by (id), distribute by (id) decides which bucket each row goes to (by hashing id), and sort by (id) sorts the rows within each bucket, ascending by default. When the distribute and sort columns are the same, cluster by (id) is shorthand for both, so the statement can also be written as

insert into buck select id, name from tp cluster by (id);

with the same effect.

What bucketing is for

Consider the following statement.

select a.id,a.name,b.addr from a join b on a.id = b.id;

If tables a and b are both bucketed tables, bucketed on the id column, then this join no longer requires a full-table Cartesian product: rows with the same id hash are guaranteed to land in corresponding buckets, so the buckets can be joined against each other directly.
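A sketch of how to take advantage of that (assuming a and b are bucketed on id into the same number of buckets, or counts that are multiples of each other; hive.optimize.bucketmapjoin is the Hive setting that enables the bucket map join optimization, and older Hive versions may additionally need a MAPJOIN hint):

set hive.optimize.bucketmapjoin = true;
select a.id,a.name,b.addr from a join b on a.id = b.id;

With it enabled, each mapper loads just one bucket of the smaller table into memory and joins it against the corresponding bucket of the larger table.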