
Hive Managed Tables vs. External Tables (When to Use External Tables)

Official documentation

Managed and External Tables

By default Hive creates managed tables, where files, metadata and statistics are managed by internal Hive processes. A managed table is stored under the hive.metastore.warehouse.dir path property, by default in a folder path similar to /apps/hive/warehouse/databasename.db/tablename/. The default location can be overridden by the location property during table creation. If a managed table or partition is dropped, the data and metadata associated with that table or partition are deleted. If the PURGE option is not specified, the data is moved to a trash folder for a defined duration. Use managed tables when Hive should manage the lifecycle of the table, or when generating temporary tables.

An external table describes the metadata / schema on external files. External table files can be accessed and managed by processes outside of Hive. External tables can access data stored in sources such as Azure Storage Volumes (ASV) or remote HDFS locations. If the structure or partitioning of an external table is changed, an MSCK REPAIR TABLE table_name statement can be used to refresh metadata information. Use external tables when files are already present or in remote locations, and the files should remain even if the table is dropped.

Managed or external tables can be identified using the DESCRIBE FORMATTED table_name command, which will display either MANAGED_TABLE or EXTERNAL_TABLE depending on table type. Statistics can be managed on internal and external tables and partitions for query optimization.
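The two commands mentioned in the documentation can be sketched as follows, using the emp table from the examples below (not runnable outside a Hive session):

```sql
-- Check whether a table is managed or external:
-- the "Table Type" field shows MANAGED_TABLE or EXTERNAL_TABLE.
DESCRIBE FORMATTED emp;

-- After files or partitions of an external table change outside Hive,
-- refresh the metastore's partition metadata:
MSCK REPAIR TABLE emp;
```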

From this documentation we can see that Hive manages two kinds of data: data and metadata.
The data is stored on HDFS, while the metadata is stored in MySQL (the metastore). These two are the key to understanding managed and external tables.

I. Managed (internal) tables
1. Create a managed table

hive (default)> create table emp_manager(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double, 
              > deptno int
              > )row format delimited fields terminated by '\t';
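To populate the new table, data can be loaded from a local file. A minimal sketch, assuming a tab-delimited emp.txt exists at a local path of your choosing (the path below is hypothetical):

```sql
-- Copy a local, tab-delimited file into the table's
-- warehouse directory on HDFS.
LOAD DATA LOCAL INPATH '/root/data/emp.txt' INTO TABLE emp_manager;
```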

2. Inspect the data and metadata
The emp_manager table's data is visible on HDFS:

[root@hadoop001 maven-3.3.9]# hadoop fs -ls /user/hive/warehouse/
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-10-07 21:55 /user/hive/warehouse/emp
drwxr-xr-x   - root supergroup          0 2017-10-07 23:33 /user/hive/warehouse/emp_manager

The metadata for emp_manager is visible in MySQL:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
*************************** 2. row ***************************
            TBL_ID: 6
       CREATE_TIME: 1507390381
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 6
          TBL_NAME: emp_manager
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
2 rows in set (0.00 sec)

3. Check the data and metadata after dropping emp_manager
Drop the table:

hive (default)> drop table emp_manager;
OK
Time taken: 2.738 seconds

The data is deleted along with the table:

[root@hadoop001 maven-3.3.9]# hadoop fs -ls /user/hive/warehouse/
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-10-07 21:55 /user/hive/warehouse/emp

The metadata is deleted as well:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
1 row in set (0.00 sec)

II. External tables
1. Create an external table (at a user-specified location)

hive (default)> create external table emp_external(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double, 
              > deptno int
              > )row format delimited fields terminated by '\t'
              > location '/hive/external_table/';

Load data into emp_external:

[root@hadoop001 data]# hadoop fs -put emp.txt /hive/external_table/

2. Inspect the data and metadata
The emp_external table's data is visible on HDFS:

[root@hadoop001 data]# hadoop fs -ls /hive/external_table/
Found 1 items
-rw-r--r--   1 root supergroup        700 2017-10-07 23:50 /hive/external_table/emp.txt

The metadata for emp_external is visible in MySQL:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
*************************** 2. row ***************************
            TBL_ID: 7
       CREATE_TIME: 1507391060
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 7
          TBL_NAME: emp_external
          TBL_TYPE: EXTERNAL_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
2 rows in set (0.00 sec)

3. Check the data and metadata after dropping emp_external
Drop the table:

hive (default)> drop table emp_external;
OK

The data is NOT deleted:

[root@hadoop001 data]# hadoop fs -ls /hive/external_table/
Found 1 items
-rw-r--r--   1 root supergroup        700 2017-10-07 23:50 /hive/external_table/emp.txt

The metadata is deleted:

mysql> select * from TBLS \G
*************************** 1. row ***************************
            TBL_ID: 1
       CREATE_TIME: 1506370951
             DB_ID: 1
  LAST_ACCESS_TIME: 0
             OWNER: root
         RETENTION: 0
             SD_ID: 1
          TBL_NAME: emp
          TBL_TYPE: MANAGED_TABLE
VIEW_EXPANDED_TEXT: NULL
VIEW_ORIGINAL_TEXT: NULL
1 row in set (0.00 sec)

So if an external table is dropped by accident, you can create a new table with the same schema pointed at the same location (location '/hive/external_table/'), and the data is recovered.
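The recovery can be sketched by reusing the original DDL and location (if the table were partitioned, an MSCK REPAIR TABLE statement would also be needed to rebuild the partition metadata):

```sql
-- Recreate the dropped external table over the surviving HDFS files.
create external table emp_external(
  empno int,
  ename string,
  job string,
  mgr int,
  hiredate string,
  sal double,
  comm double,
  deptno int
)row format delimited fields terminated by '\t'
location '/hive/external_table/';

-- The old data is immediately queryable again:
select * from emp_external limit 5;
```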

III. Use cases for external tables
External tables are the right choice for raw log files that are operated on by multiple departments at the same time. Even if the metadata is accidentally deleted, the data is still on HDFS and can be recovered, which improves data safety.