
Installing and Configuring Hive 3.0.0 on Hadoop 2.9.1 and Ubuntu 16.04, with Troubleshooting [Detailed]

The later sections cover solutions to many problems that came up during configuration. Main references:
https://www.cnblogs.com/pejsidney/p/8944305.html
https://blog.csdn.net/sjmz30071360/article/details/82080189

1. Configure MySQL

Install the MySQL server and client:

    sudo apt-get install mysql-server mysql-client

Start MySQL:

sudo /etc/init.d/mysql start      (on Ubuntu)
 * Starting MySQL database server mysqld [ OK ]   (started successfully)

2. Download Hive (all steps run as the hadoop user)

 	su - hadoop
    cd /tmp
    wget http://www-us.apache.org/dist/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz
    sudo tar xvzf apache-hive-3.0.0-bin.tar.gz -C  /usr/local

3. Hive environment variables

Open ~/.bashrc and add the following:

    export HIVE_HOME=/usr/local/apache-hive-3.0.0-bin
    export HIVE_CONF_DIR=/usr/local/apache-hive-3.0.0-bin/conf
    export PATH=$HIVE_HOME/bin:$PATH
    export CLASSPATH=$CLASSPATH:/usr/local/hadoop/lib/*:.
    export CLASSPATH=$CLASSPATH:/usr/local/apache-hive-3.0.0-bin/lib/*:.
source ~/.bashrc
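As a quick sanity check (a sketch, using the paths configured above), confirm that the Hive bin directory actually landed on PATH:

```shell
# Re-export the variables and verify $HIVE_HOME/bin is on PATH.
export HIVE_HOME=/usr/local/apache-hive-3.0.0-bin
export PATH=$HIVE_HOME/bin:$PATH
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin on PATH" ;;   # prints this when the export worked
  *)                    echo "hive bin missing" ;;
esac
```

If this prints "hive bin missing", re-check the export lines and re-run `source ~/.bashrc`.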

4. Create the tmp and usr directories in HDFS (the original blogger's version was needlessly convoluted)

hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -mkdir -p /usr/hive

Change the permissions on the tmp directory:

hdfs dfs -chmod -R 777 /tmp

5. Configure Hive

cd $HIVE_HOME/conf
sudo cp hive-env.sh.template hive-env.sh

Edit hive-env.sh and add:

export HADOOP_HOME=/usr/local/hadoop
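The edit can also be scripted. The snippet below is a sketch that appends the line to a scratch copy; point HIVE_CONF at the real conf directory (here /usr/local/apache-hive-3.0.0-bin/conf) to apply it for real:

```shell
# Append HADOOP_HOME to hive-env.sh; HIVE_CONF defaults to a scratch dir for the demo.
HIVE_CONF=${HIVE_CONF:-/tmp/hive-conf-demo}
mkdir -p "$HIVE_CONF"
echo 'export HADOOP_HOME=/usr/local/hadoop' >> "$HIVE_CONF/hive-env.sh"
grep -q 'HADOOP_HOME' "$HIVE_CONF/hive-env.sh" && echo "hive-env.sh updated"
```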

6. Download Apache Derby

cd /tmp
wget http://archive.apache.org/dist/db/derby/db-derby-10.13.1.1/db-derby-10.13.1.1-bin.tar.gz
sudo tar xvzf db-derby-10.13.1.1-bin.tar.gz -C /usr/local

Continue configuring Derby: open ~/.bashrc and append the following

    export DERBY_HOME=/usr/local/db-derby-10.13.1.1-bin
    export PATH=$PATH:$DERBY_HOME/bin
    export CLASSPATH=$CLASSPATH:$DERBY_HOME/lib/derby.jar:$DERBY_HOME/lib/derbytools.jar
source ~/.bashrc

Then create a data directory:

sudo mkdir $DERBY_HOME/data

7. Start Hive

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3213,96,"file:/usr/hive-3.0.0/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2964)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2733)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2605)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1362)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4967)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5040)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5127)
at org.apache.hadoop.hive.conf.HiveConf.&lt;init&gt;(HiveConf.java:5070)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:97)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3213,96,"file:/usr/hive-3.0.0/conf/hive-site.xml"]
at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2799)
… 17 more

[Solution]:

Before (line 3213 of hive-site.xml contains an invisible 0x08 control byte between "for" and "transactional"):

3212
3213 Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for�transactional tables. This ensures that inserts (w/o overwrite) running concurrently
3214 are not hidden by the INSERT OVERWRITE.
3215

After (the stray byte replaced with a plain space):

3212
3213 Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for transactional tables. This ensures that inserts (w/o overwrite) running concurrently
3214 are not hidden by the INSERT OVERWRITE.
3215
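Control bytes like this 0x08 are invisible in most editors, which makes them hard to spot by eye. A small sketch for locating them, demonstrated on a scratch file (in a real install, point the grep at /usr/hive-3.0.0/conf/hive-site.xml):

```shell
# GNU grep -P finds raw control bytes (0x00-0x08, 0x0B, 0x0C, 0x0E-0x1F) and
# prints the number of each line that contains one.
demo=/tmp/ctrl-demo.xml
printf 'clean line\nbad\010byte here\n' > "$demo"
grep -nP '[\x00-\x08\x0B\x0C\x0E-\x1F]' "$demo"   # reports line 2, where the 0x08 byte sits
```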

Start again ======>

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 50498504-b385-43e7-bf20-57cb74238499

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at org.apache.hadoop.fs.Path.initialize(Path.java:254)
at org.apache.hadoop.fs.Path.&lt;init&gt;(Path.java:212)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:703)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:620)
at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:585)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.&lt;init&gt;(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:251)
… 12 more

[Solution]:

Create a local tmp directory for Hive: mkdir /usr/hive-3.0.0/tmp. Then, in hive-site.xml,
replace every occurrence of ${system:java.io.tmpdir} with /usr/hive-3.0.0/tmp

and every occurrence of ${system:user.name} with ${user.name}. The affected properties:

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

  <property>
    <name>hive.querylog.location</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    <description>Location of Hive run time structured log file</description>
  </property>

  <property>
    <name>hive.druid.basePersistDirectory</name>
    <value/>
    <description>Local temporary directory used to persist intermediate indexing state, will default to JVM system property java.io.tmpdir.</description>
  </property>

  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

Start again ======>:

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = d4cac70f-e7d5-42d0-a8e7-0093c00ef7f1

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
hive>

[Solution]:

Run schematool -initSchema -dbType derby to initialize the metastore:

[root@master hive-3.0.0]# schematool -initSchema -dbType derby
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.derby.sql

Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

This happens because the first run of the hive command already created a metastore_db directory in the working directory:

[root@master hive-3.0.0]# ls -lrt
total 216
-rw-r--r--. 1 root root 230 May 15 17:42 NOTICE
-rw-r--r--. 1 root root 20798 May 15 17:42 LICENSE
-rw-r--r--. 1 root root 143769 May 15 18:32 RELEASE_NOTES.txt
drwxr-xr-x. 2 root root 44 Aug 26 10:16 jdbc
drwxr-xr-x. 2 root root 4096 Aug 26 10:17 binary-package-licenses
drwxr-xr-x. 4 root root 34 Aug 26 10:17 examples
drwxr-xr-x. 3 root root 157 Aug 26 10:17 bin
drwxr-xr-x. 4 root root 35 Aug 26 10:17 scripts
drwxr-xr-x. 4 root root 12288 Aug 26 10:17 lib
drwxr-xr-x. 7 root root 68 Aug 26 10:17 hcatalog
drwxr-xr-x. 3 root root 42 Aug 26 10:27 ${system:java.io.tmpdir}
drwxr-xr-x. 2 root root 4096 Aug 26 10:39 conf
drwxr-xr-x. 3 root root 33 Aug 26 10:44 tmp
drwxr-xr-x. 5 root root 133 Aug 26 10:44 metastore_db
-rw-r--r--. 1 root root 19964 Aug 26 10:44 derby.log

By default metastore_db is created in whatever directory hive is run from; edit hive-site.xml to pin it to a fixed location:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/usr/hive-3.0.0/derby_db/metastore_db;create=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

Delete the metastore_db directory (rm -rf metastore_db) and run the initialization again:
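Before wiping anything, it may be worth guarding the delete so it only removes a directory that really is a Derby database (Derby writes a service.properties marker into every database directory). A sketch, demonstrated on a scratch directory:

```shell
# Remove metastore_db only if it carries Derby's service.properties marker;
# afterwards, schematool -initSchema -dbType derby can be re-run safely.
demo=/tmp/metastore-demo
mkdir -p "$demo/metastore_db"
touch "$demo/metastore_db/service.properties"
cd "$demo"
if [ -d metastore_db ] && [ -f metastore_db/service.properties ]; then
  rm -rf metastore_db derby.log
fi
ls metastore_db 2>/dev/null || echo "metastore_db removed"
```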

[root@master hive-3.0.0]# schematool -initSchema -dbType derby
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.derby.sql
Initialization script completed
schemaTool completed

Start Hive again ======>:

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 28e290e1-3e2e-42d4-be61-5314ccc70f99

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 1.936 seconds, Fetched: 1 row(s)
hive> create table student(stu_id int,stu_name string);
OK
Time taken: 3.705 seconds
hive> insert into student values(1,'tom');
Query ID = root_20180826111007_d1f4e8e0-ca95-456e-a497-652591b0855e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=&lt;number&gt;
In order to set a constant number of reducers:
set mapreduce.job.reduces=&lt;number&gt;
Starting Job = job_1535252886720_0001, Tracking URL = http://master:8088/proxy/application_1535252886720_0001/
Kill Command = /usr/hadoop/hadoop-2.9.0/bin/mapred job -kill job_1535252886720_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-08-26 11:11:18,864 Stage-1 map = 0%, reduce = 0%
2018-08-26 11:11:53,149 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 8.07 sec
2018-08-26 11:12:25,559 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 10.25 sec
2018-08-26 11:12:33,224 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 14.02 sec
MapReduce Total cumulative CPU time: 14 seconds 20 msec
Ended Job = job_1535252886720_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://master:9000/user/hive/warehouse/student/.hive-staging_hive_2018-08-26_11-10-07_127_6657782755309499529-1/-ext-10000
Loading data to table default.student
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 14.02 sec HDFS Read: 15278 HDFS Write: 240 SUCCESS
Total MapReduce CPU Time Spent: 14 seconds 20 msec
OK
Time taken: 155.381 seconds
hive> insert into student values(2,'bob');
Query ID = root_20180826111257_e89f212c-4a4a-4005-a649-14ed76aa8e69
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=&lt;number&gt;
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=&lt;number&gt;
In order to set a constant number of reducers:
set mapreduce.job.reduces=&lt;number&gt;
Starting Job = job_1535252886720_0002, Tracking URL = http://master:8088/proxy/application_1535252886720_0002/
Kill Command = /usr/hadoop/hadoop-2.9.0/bin/mapred job -kill job_1535252886720_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-08-26 11:13:34,223 Stage-1 map = 0%, reduce = 0%
2018-08-26 11:13:54,804 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.96 sec
2018-08-26 11:14:16,059 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 11.71 sec
MapReduce Total cumulative CPU time: 11 seconds 710 msec
Ended Job = job_1535252886720_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://master:9000/user/hive/warehouse/student/.hive-staging_hive_2018-08-26_11-12-57_895_8590756589454066874-1/-ext-10000
Loading data to table default.student
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 11.71 sec HDFS Read: 15308 HDFS Write: 240 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 710 msec
OK
Time taken: 81.864 seconds
hive> select * from student;
OK
1 tom
2 bob
Time taken: 0.824 seconds, Fetched: 2 row(s)
hive>

Done!