
Deploying Hive 2.1.1 on Hadoop 2.7.3

This article installs Hive 2.1.1 in remote mode, storing Hive's metadata in a MySQL database.

1 Install the MySQL database

sudo apt-get install mysql-server


Restart the MySQL service so the configuration file takes effect:

sudo service mysql restart

Create a dedicated hive account:

 CREATE USER 'hive'@'%' IDENTIFIED BY '123456';

Grant all privileges to the hive account:

GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;

Flush the privilege tables so the change takes effect:

flush privileges;
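Before continuing, it may be worth confirming that the new account can actually log in; the credentials below are the example values used in this article.

```shell
# Should print the server version if the grant took effect. Note that -p
# with no space passes the password inline -- fine on a test box, but avoid
# it in production since the password lands in the shell history.
mysql -u hive -p123456 -e "SELECT VERSION();"
```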

2 Extract and install Hive

cd /usr/local
sudo tar -xvzf apache-hive-2.1.1-bin.tar.gz
sudo mv apache-hive-2.1.1-bin/ hive-2.1.1

Configure the system environment variables:

gedit ~/.bashrc
export HIVE_HOME=/usr/local/hive-2.1.1
export PATH=$HIVE_HOME/bin:$HIVE_HOME/lib:$PATH


Apply the environment variable changes:

source ~/.bashrc
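A quick way to confirm the variables are set as intended; this is a minimal sketch that re-applies the same exports so it also works in a fresh shell.

```shell
# Re-apply the exports and confirm Hive's bin directory is on PATH.
export HIVE_HOME=/usr/local/hive-2.1.1
export PATH=$HIVE_HOME/bin:$HIVE_HOME/lib:$PATH
echo "$PATH" | grep -o '/usr/local/hive-2.1.1/bin' | head -n 1
# → /usr/local/hive-2.1.1/bin
```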

3 Configure Hive
3.1 Edit the conf/hive-env.sh file

cd /usr/local/hive-2.1.1/conf/
sudo cp hive-env.sh.template hive-env.sh
sudo chown hadoop:hadoop hive-env.sh
sudo vi hive-env.sh
HADOOP_HOME=/usr/local/hadoop-2.7.3
export HIVE_CONF_DIR=/usr/local/hive-2.1.1/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive-2.1.1/lib


3.2 Edit the log properties files to set the log directory
Edit hive-log4j2.properties:

sudo cp hive-log4j2.properties.template hive-log4j2.properties
sudo chown hadoop:hadoop hive-log4j2.properties
sudo  vi hive-log4j2.properties
property.hive.log.dir = /usr/local/hive-2.1.1/logs

Edit llap-cli-log4j2.properties:

property.hive.log.dir = /usr/local/hive-2.1.1/logs
property.hive.log.file = llap-cli.log
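The log directory configured above (and the local scratch directory referenced later in hive-site.xml) is not created automatically; a short sketch assuming this article's layout and the hadoop user that runs Hive:

```shell
# Create the log and local scratch directories and hand them to the
# hadoop user, matching the ownership used for the config files above.
sudo mkdir -p /usr/local/hive-2.1.1/logs /usr/local/hive-2.1.1/tmp
sudo chown -R hadoop:hadoop /usr/local/hive-2.1.1/logs /usr/local/hive-2.1.1/tmp
```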

3.3 Edit the hive-site.xml configuration file; the main properties to change are listed below.

  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/usr/local/hive-2.1.1/tmp</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/usr/local/hive-2.1.1/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>
 <property>
    <name>hive.querylog.location</name>
    <value>/usr/local/hive-2.1.1/logs</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/usr/local/hive-2.1.1/logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>
 <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/usr/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.80.130:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
<property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.80.130:3306/metastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
 <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
<property>
    <name>hive.hwi.listen.host</name>
    <value>0.0.0.0</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
 <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>0.0.0.0</value>
    <description>Bind host on which to run the HiveServer2 Thrift service.</description>
  </property>
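A few first-run steps remain before the remote metastore can serve clients. The paths below follow this article's layout; the Connector/J jar name is an example, so substitute the version you downloaded.

```shell
# Hive does not ship with the MySQL JDBC driver; copy Connector/J into lib/
# so javax.jdo.option.ConnectionDriverName can be loaded.
sudo cp mysql-connector-java-5.1.40-bin.jar /usr/local/hive-2.1.1/lib/

# Create the HDFS directories referenced by hive-site.xml
# (hive.metastore.warehouse.dir and hive.exec.scratchdir).
hdfs dfs -mkdir -p /usr/hive/warehouse /tmp/hive
hdfs dfs -chmod 733 /tmp/hive

# Hive 2.x no longer creates the metastore schema on first use; initialize
# it once against MySQL, then start the metastore service in the background.
schematool -dbType mysql -initSchema
nohup hive --service metastore > /usr/local/hive-2.1.1/logs/metastore.log 2>&1 &
```

Once the metastore is up, any client whose hive.metastore.uris points at thrift://192.168.80.130:9083 can run the `hive` CLI against it.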