
Hive 2.3.0 Configuration and Deployment

Configuring MySQL

Install

yum -y install mysql mysql-server mysql-devel

Start the service

service mysqld start

Enable start on boot

chkconfig mysqld on

Log in

mysql -u root

Initialize the password
Inside the mysql shell, run:

use mysql;
update user set password=password('root') where user='root';
exit;
service mysqld restart
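
To confirm the new password took effect, a quick check from the shell (assuming the password was set to 'root' as above):

mysql -u root -proot -e "select version();"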

Enable remote connections
Inside the mysql shell, run:

grant all privileges on *.* to root@'%' identified by 'root';
exit;
service mysqld restart
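
To verify remote access, try connecting from another machine (assuming the server's hostname is master, as used in the JDBC URL later, and that port 3306 is reachable):

mysql -h master -u root -proot -e "show databases;"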

Using MySQL to store Hive metadata

Create a Linux user

useradd hive
passwd hive

Create the database in MySQL

CREATE DATABASE hive;

Create a MySQL user and grant privileges

CREATE USER 'hive' IDENTIFIED BY 'hive';
grant all privileges on *.* to 'hive' identified by 'hive';
flush privileges;
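
As a sanity check, the new account should be able to log in and see the hive database (user and password 'hive', as granted above):

mysql -u hive -phive -e "show databases;"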

Hive deployment

Environment variables

export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop/
export HIVE_HOME=/usr/local/hive
export HIVE_CONF_DIR=/usr/local/hive/conf
export PATH=$PATH:$HIVE_HOME/bin:$HADOOP_HOME/bin
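
These exports typically go into /etc/profile or the hive user's ~/.bashrc; after adding them, reload the file and confirm the hive command is on the PATH:

source /etc/profile
which hive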

Create temporary directories

cd /usr/local/hive/
mkdir -p tmp/resources

Create HDFS directories

hadoop fs -mkdir -p /tmp/hive
hadoop fs -mkdir -p /user/hive/warehouse

Create a user group, add the hive user to it, and set the owner, group, and permissions on the HDFS directories under /user/hive/ and /tmp/hive/

groupadd hadoop
usermod -G hadoop hive
hadoop fs -chown -R hive:hadoop /user/hive/warehouse
hadoop fs -chown -R hive:hadoop /tmp/hive/
hadoop fs -chmod 755 /user/hive/warehouse
hadoop fs -chmod 777 /tmp/hive/
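
To confirm the ownership and permission changes took effect (the output should show hive:hadoop), a simple check is:

hadoop fs -ls -d /user/hive/warehouse
hadoop fs -ls -d /tmp/hive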

Hive configuration

Rename the configuration files
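For Hive 2.3.0 the configuration templates ship in conf/; a typical way to create the working copies (assuming the stock template file names) is:

cd /usr/local/hive/conf
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml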

Edit hive-env.sh

export HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf

Replace the following values in hive-site.xml

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
<description>Username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
<description>password to use against metastore database</description>
</property>

<property>
<name>hive.exec.local.scratchdir</name>
<value>/usr/local/hive/tmp</value>
<description>Local scratch space for Hive jobs</description>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/usr/local/hive/tmp/resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
<description>location of default database for the warehouse</description>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission.</description>
</property>

<property>
<name>hive.hbase.snapshot.restoredir</name>
<value>/tmp</value>
<description>The directory in which to restore the HBase table snapshot.</description>
</property>
<property>
<name>hive.scratch.dir.permission</name>
<value>700</value>
<description>The permission for the user specific scratch directories that get created.</description>
</property>
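
Note that com.mysql.jdbc.Driver is not bundled with Hive; assuming the MySQL Connector/J jar has been downloaded, it goes into Hive's lib directory (the wildcard matches whichever version is on hand):

cp mysql-connector-java-*.jar /usr/local/hive/lib/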

Using Hive

Start Hive with the hive script under the bin directory, then run:

show databases;

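A minimal smoke test can also be run from the shell; the table name t_smoke below is just an example:

hive -e "create table t_smoke(id int); show tables; drop table t_smoke;"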

View the metadata tables in MySQL
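For example, the metastore tables (TBLS, DBS, COLUMNS_V2, and so on) can be listed from the shell, using the hive account granted earlier:

mysql -u hive -phive -e "use hive; show tables;"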

Start HiveServer2

hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000
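
Once HiveServer2 is up, a client can connect with beeline (host and user follow this setup; adjust as needed):

beeline -u jdbc:hive2://master:10000 -n hive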

Troubleshooting


If MySQL does not yet contain the Hive metadata tables, but Hive has already been started before (including runs where Hive errored out after starting), run the following command in the bin directory to initialize the metastore schema:

 ./schematool -dbType mysql -initSchema

Error: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

This means there is an error in the configuration files; check them carefully.