
Learning Hive: Basic Syntax

1. Creating a database

create database database_name;
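For example, a database to hold the web-log tables used below could be created like this (the name weblog is only an illustration); adding if not exists makes the statement safe to re-run:

-- "weblog" is an illustrative name; if not exists avoids an error if it already exists
create database if not exists weblog;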

2. Using a database

use database_name;
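Continuing the illustration above, switch to that database and check what it contains:

-- all following statements now run against the weblog database
use weblog;
-- list the tables in the current database
show tables;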

3. Creating tables

Internal (managed) table: the table directory follows Hive's own conventions and is placed under the Hive warehouse directory /user/hive/warehouse.

create table t_pv_log(ip string, url string, access_time string)
row format delimited
fields terminated by ',';
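To confirm that the table directory really lives under the warehouse path, the table metadata can be inspected; the Location field in the output should point at a directory under /user/hive/warehouse:

-- show detailed metadata, including the table's HDFS location
describe formatted t_pv_log;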

External table: the table directory is specified by the user.

Create a directory on HDFS:

 hadoop fs -mkdir -p /pvlog/2017-09-16

Prepare test data (saved locally as pv.log):

192.168.33.1,http://sina.com/a,2017-09-16 12:52:01
192.168.33.2,http://sina.com/a,2017-09-16 12:51:01
192.168.33.1,http://sina.com/a,2017-09-16 12:50:01
192.168.33.2,http://sina.com/b,2017-09-16 12:49:01
192.168.33.1,http://sina.com/b,2017-09-16 12:48:01
192.168.33.4,http://sina.com/a,2017-09-16 12:47:01
192.168.33.3,http://sina.com/a,2017-09-16 12:46:01
192.168.33.2,http://sina.com/b,2017-09-16 12:45:01
192.168.33.2,http://sina.com/a,2017-09-16 12:44:01
192.168.33.1,http://sina.com/a,2017-09-16 13:43:01

Upload the data to /pvlog/2017-09-16 on HDFS:

hadoop fs -put ./pv.log /pvlog/2017-09-16

Create the external table:

create external table t_pv_log(ip string, url string, access_time string)
row format delimited
fields terminated by ','
location '/pvlog/2017-09-16';
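Because the table points at the directory that already contains pv.log, the data is queryable immediately, with no separate load step; a quick sanity check:

-- rows are read straight from /pvlog/2017-09-16
select * from t_pv_log limit 5;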

Differences between internal and external tables:

    When an internal table is dropped, both the table and its data are deleted.

    When an external table is dropped, only the table definition is removed; the data files remain on HDFS.
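A sketch of how to observe this, reusing the external table above. In the Hive shell:

drop table t_pv_log;

Then, on the command line, the data files should still be listed:

hadoop fs -ls /pvlog/2017-09-16

Dropping an internal table would instead remove its directory under /user/hive/warehouse along with the table definition.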

4. Partitioned tables

    The essence of a partitioned table is that partition subdirectories are created for the data files inside the table directory, so that at query time the MR job can work only on the data in the relevant partition subdirectory, narrowing the range of data that has to be read.

    For example, a website produces page-view records every day, and those records should be stored in one table; sometimes, however, we only need to analyse a single day's records.

    In that case the table can be created as a partitioned table, with each day's data loaded into one of its partitions.
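On HDFS, each partition is simply a subdirectory such as day=20170915 or day=20170916 inside the table directory. Once data has been loaded (as in the steps below), this layout can be inspected; the path here assumes the default warehouse location and the default database:

# each loaded partition shows up as a day=... subdirectory
hadoop fs -ls /user/hive/warehouse/t_pv_log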

Prepare the data:

    192.168.33.1,http://sina.com/a,2017-09-16 12:52:01
    192.168.33.2,http://sina.com/a,2017-09-16 12:51:01
    192.168.33.1,http://sina.com/a,2017-09-16 12:50:01
    192.168.33.2,http://sina.com/b,2017-09-16 12:49:01
    192.168.33.1,http://sina.com/b,2017-09-15 12:48:01
    192.168.33.4,http://sina.com/a,2017-09-15 12:47:01
    192.168.33.3,http://sina.com/a,2017-09-15 12:46:01
    192.168.33.2,http://sina.com/b,2017-09-15 12:45:01
    192.168.33.2,http://sina.com/a,2017-09-15 12:44:01
    192.168.33.1,http://sina.com/a,2017-09-15 13:43:01

    Create the partitioned table:

create table t_pv_log(ip string, url string, access_time string)
partitioned by(day string)
row format delimited
fields terminated by ',';

    Load the data into the newly created table:

load data local inpath '/usr/local/hivetest/pv.log.15' into table t_pv_log partition(day='20170916');
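Each day's records go into their own partition, so a second file can be loaded the same way and the existing partitions listed afterwards; a sketch in which the local path is only an assumption:

-- hypothetical file holding the 2017-09-15 records
load data local inpath '/usr/local/hivetest/pv.log.0915' into table t_pv_log partition(day='20170915');
-- list the partitions that now exist
show partitions t_pv_log;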

    Query data by the partition field:

0: jdbc:hive2://hadoop00:10000> select * from t_pv_log where day ='20170916';
+---------------+--------------------+-----------------------+---------------+--+
|  t_pv_log.ip  |    t_pv_log.url    | t_pv_log.access_time  | t_pv_log.day  |
+---------------+--------------------+-----------------------+---------------+--+
| 192.168.33.1  | http://sina.com/a  | 2017-09-16 12:52:01   | 20170916      |
| 192.168.33.2  | http://sina.com/a  | 2017-09-16 12:51:01   | 20170916      |
| 192.168.33.1  | http://sina.com/a  | 2017-09-16 12:50:01   | 20170916      |
| 192.168.33.2  | http://sina.com/b  | 2017-09-16 12:49:01   | 20170916      |
| 192.168.33.1  | http://sina.com/b  | 2017-09-16 12:48:01   | 20170916      |
| 192.168.33.4  | http://sina.com/a  | 2017-09-16 12:47:01   | 20170916      |
| 192.168.33.3  | http://sina.com/a  | 2017-09-16 12:46:01   | 20170916      |
| 192.168.33.2  | http://sina.com/b  | 2017-09-16 12:45:01   | 20170916      |
| 192.168.33.2  | http://sina.com/a  | 2017-09-16 12:44:01   | 20170916      |
| 192.168.33.1  | http://sina.com/a  | 2017-09-16 13:43:01   | 20170916      |
+---------------+--------------------+-----------------------+---------------+--+
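Since the partition column behaves like an ordinary column in queries, it can also be combined with aggregation, for example to count page views per URL within a single day's partition:

-- page views per URL on 2017-09-16; only that partition's files are read
select url, count(*) as pv
from t_pv_log
where day = '20170916'
group by url;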

5. Importing files

Method 1:

    Manually place the data file into the table directory with an HDFS command.
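For example, for the internal t_pv_log table created earlier, the file could be placed directly into its table directory; the path below assumes the default warehouse location and the default database:

# copy the local log file straight into the table's HDFS directory
hadoop fs -put ./pv.log /user/hive/warehouse/t_pv_log/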

Method 2: in Hive's interactive shell, use a Hive command to load local data into the table directory

    load data local inpath '/usr/local/data/' into table `order`;
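(Backticks are needed around order because it is a reserved word in HiveQL.) Adding the overwrite keyword replaces whatever data is already in the table instead of appending to it:

load data local inpath '/usr/local/data/' overwrite into table `order`;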

Method 3: use a Hive command to load a data file that is already on HDFS into the table directory

    load data inpath 'access.log' into table t_access partition(day='20170916');

Note the difference between importing a local file and importing an HDFS file:

    Importing a local file into a table: the file is copied.

    Importing an HDFS file into a table: the file is moved.
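A sketch of how to verify this, reusing the commands above:

# after "load data local inpath ...", the local source file is still present
ls /usr/local/hivetest/pv.log.15

# after "load data inpath ...", the original HDFS path no longer exists (the file was moved), so this listing fails
hadoop fs -ls access.log
# the file now lives under the table's partition directory instead (path assumes the default warehouse layout)
hadoop fs -ls /user/hive/warehouse/t_access/day=20170916/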