
Setting Up Hive in Remote Mode


I. Environment

1. Software versions: apache-hive-2.3.0-bin.tar.gz, mysql-community-server-5.7.19

2. MySQL JDBC driver: mysql-connector-java-5.1.44.tar.gz

3. MySQL is already installed on hadoop5

4. Host plan:

hadoop3    Remote: client
hadoop4    Remote: client
hadoop5    Remote: server; MySQL
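
In Remote mode the metastore runs as a standalone service (here on hadoop5) and keeps its metadata in MySQL; the clients on hadoop3 and hadoop4 do not talk to MySQL directly but reach the metastore over Thrift (port 9083 by default), as configured in section III.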

II. Basic Configuration

1. Unpack and move Hive

[root@hadoop5 ~]# tar -zxf apache-hive-2.3.0-bin.tar.gz
[root@hadoop5 ~]# cp -r apache-hive-2.3.0-bin /usr/local/hive

2. Set environment variables

[root@hadoop5 ~]# vim /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH
[root@hadoop5 ~]# source /etc/profile
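
A quick sanity check that the variables took effect (optional):

[root@hadoop5 ~]# echo $HIVE_HOME     # should print /usr/local/hive
[root@hadoop5 ~]# which hive          # should resolve to /usr/local/hive/bin/hive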

3. Copy the template configuration files

[root@hadoop5 ~]# cd /usr/local/hive/conf/
[root@hadoop5 conf]# cp hive-env.sh.template hive-env.sh
[root@hadoop5 conf]# cp hive-default.xml.template hive-site.xml
[root@hadoop5 conf]# cp hive-log4j2.properties.template hive-log4j2.properties
[root@hadoop5 conf]# cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

4. Edit hive-env.sh

[root@hadoop5 conf]# vim hive-env.sh    # append the following at the end
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HIVE_HOME=/usr/local/hive    
export HIVE_CONF_DIR=/usr/local/hive/conf

5. Copy the MySQL JDBC driver

[root@hadoop5 ~]# tar -zxf mysql-connector-java-5.1.44.tar.gz
[root@hadoop5 ~]# cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar /usr/local/hive/lib/
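
To confirm the driver jar actually landed in Hive's lib directory:

[root@hadoop5 ~]# ls /usr/local/hive/lib/ | grep mysql-connector
mysql-connector-java-5.1.44-bin.jar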

6. Create the following directories in HDFS and open their permissions; Hive uses them for its warehouse, scratch space, and query logs

hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
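
A listing of /user/hive should now show the three directories with rwxrwxrwx permissions (assumes HDFS is already up):

hdfs dfs -ls /user/hive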

7. Create the metastore database and user in MySQL

mysql> create database metastore;
Query OK, 1 row affected (0.03 sec)
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.26 sec)
mysql> grant all on metastore.* to 'hive'@'%' identified by 'hive123456';
Query OK, 0 rows affected, 1 warning (0.03 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
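
Because the metastore service will connect to MySQL over the network as the hive user, it is worth confirming the grant from another node before going further (a quick check, assuming the mysql client is installed on hadoop3):

[root@hadoop3 ~]# mysql -h hadoop5 -u hive -phive123456 -e "show databases;"

The metastore database should appear in the output.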

8. Copy Hive to hadoop3 and hadoop4 with scp

[root@hadoop5 ~]# scp -r /usr/local/hive root@hadoop3:/usr/local/
[root@hadoop5 ~]# scp -r /usr/local/hive root@hadoop4:/usr/local/

III. Edit the Configuration Files

1. Server-side hive-site.xml (on hadoop5)

<configuration>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop5:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive123456</value>
</property>
</configuration>
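
Hive 2.x does not create the metastore tables automatically by default, so the schema usually has to be initialized once on the server before the metastore service is started. A minimal sketch, run on hadoop5 with the connection settings above:

[root@hadoop5 ~]# schematool -dbType mysql -initSchema
[root@hadoop5 ~]# schematool -dbType mysql -info    # optional: confirm the schema version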

2. Client-side hive-site.xml (on hadoop3 and hadoop4)

<configuration>
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop5:9083</value>
</property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<property>
    <name>hive.metastore.local</name>
    <value>false</value>
</property>
</configuration>
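
On the clients, hive.metastore.uris is the setting that actually switches Hive into Remote mode; hive.metastore.local is deprecated in Hive 2.x and is inferred from the URI. Once the metastore service is running (section IV), a simple reachability check from hadoop3 or hadoop4 (assuming nc is installed) is:

nc -zv hadoop5 9083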

IV. Start Hive (two methods)

1. Direct start (Hive CLI)

service:

[root@hadoop5 ~]# hive --service metastore

client:

[root@hadoop3 ~]# hive
hive> show databases;
OK
default
Time taken: 1.599 seconds, Fetched: 1 row(s)
hive> quit;
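
A quick way to confirm that both clients really share the MySQL-backed metastore is to create a test table on one client and list it from the other (test_db and t1 are hypothetical names):

hive> create database test_db;
hive> create table test_db.t1 (id int, name string);
hive> show tables in test_db;

Running show tables in test_db; on the other client (hadoop4) should return t1, and the table definition also shows up in the TBLS table of the metastore database in MySQL.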

2. Beeline

First, add the following proxy-user settings to Hadoop's core-site.xml:

<property>  
  <name>hadoop.proxyuser.root.groups</name>  
  <value>*</value>  
</property>  
<property>  
  <name>hadoop.proxyuser.root.hosts</name>  
  <value>*</value>  
</property>
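
These proxy-user settings must be picked up by the NameNode (and by the ResourceManager for YARN jobs). Restarting Hadoop is the simplest option; on a running cluster they can usually be reloaded without a restart (run with HDFS/YARN admin privileges):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration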

service:

[root@hadoop5 ~]# nohup hiveserver2 &
[root@hadoop5 ~]# netstat -nptl | grep 10000
tcp        0      0 0.0.0.0:10000           0.0.0.0:*               LISTEN      3464/java
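
HiveServer2 can take a little while to bind port 10000 after launch; if netstat shows nothing yet, check the startup log (nohup writes it to nohup.out in the directory the command was launched from) and retry:

[root@hadoop5 ~]# tail -n 50 nohup.out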

client:

[root@hadoop3 ~]# beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://hadoop5:10000 hive hive123456
Connecting to jdbc:hive2://hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Supplied authorities: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.Utils: Resolved authority: hadoop5:10000
17/09/21 09:47:31 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hadoop5:10000
Connected to: Apache Hive (version 2.3.0)
Driver: Hive JDBC (version 1.2.1.spark2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://hadoop5:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (2.258 seconds)
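
For scripting, the same connection can also be made non-interactively with beeline's -u/-n/-p/-e options (same credentials as above):

[root@hadoop3 ~]# beeline -u jdbc:hive2://hadoop5:10000 -n hive -p hive123456 -e "show databases;"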


This article is from the "lullaby" blog; please keep this attribution: http://lullaby.blog.51cto.com/10815696/1967629
