
Hive 1.2 Local Mode Installation Tutorial -- Learning Hive

Hive can be installed in three modes: embedded, local, and remote. Since this setup is mainly for learning Hive for a project, the simpler and more convenient local mode is used. It relies on MySQL to host the metastore; the detailed steps follow.

I. Environment Setup

1. Hadoop Setup

Hive works by translating SQL queries into MapReduce jobs, so a working Hadoop installation is a prerequisite (which in turn requires the JDK and some related configuration). Hadoop installation is not covered here; my earlier blog posts describe it, and there are plenty of tutorials online.

2. MySQL Installation

(1) Install

sudo yum install mysql-server

(2) Start

sudo service mysqld start
sudo chkconfig mysqld on    # start on boot
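
To confirm MySQL is running and registered to start on boot (the exact output wording varies by distribution):

sudo service mysqld status
chkconfig --list mysqld    # runlevels 2-5 should show "on"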

(3) Set the login password

mysqladmin -u root password '123'

(4) Create a database user


CREATE USER 'hive'@'localhost' IDENTIFIED BY '123';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' IDENTIFIED BY '123';
FLUSH PRIVILEGES;
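
As a quick sanity check that the new account works (assuming the password '123' set above):

mysql -u hive -p123 -e "SHOW DATABASES;"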

The base environment is now in place.

II. Hive Installation and Configuration

1. Install

(1) Unpack the Hive tarball

sudo tar zxvf apache-hive-1.2.1-bin.tar.gz
sudo mv apache-hive-1.2.1-bin hive-1.2

(2) Configure environment variables

Edit /etc/profile and add the following entries for Hive:
export HIVE_HOME=/opt/hive-1.2
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$SPARK_HOME/lib:$HADOOP_HOME/lib:$SCALA_HOME/lib:$MEKA_HOME/lib:$HIVE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$SCALA_HOME/bin:$HIVE_HOME/bin
Apply the changes: source /etc/profile
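
Once sourced, you can confirm the variables took effect in the current shell:

echo $HIVE_HOME    # should print /opt/hive-1.2
which hive         # should resolve to /opt/hive-1.2/bin/hive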

(3) Edit the Hive configuration files
Modify hive-env.sh. Create it from the template:

sudo mv hive-env.sh.template hive-env.sh
Set the Hadoop path in hive-env.sh:
# Set HADOOP_HOME to point to a specific hadoop install directory
 HADOOP_HOME=/opt/hadoop-2.6.0

Modify hive-site.xml

Create hive-site.xml from the template:
sudo mv hive-default.xml.template hive-site.xml

The main properties to change are listed below. First create a local scratch directory:
sudo mkdir -p ~/hive/tmp

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123</value>
  <description>password to use against metastore database</description>
</property>

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/luo/hive/tmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/luo/hive/tmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
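
With createDatabaseIfNotExist=true in the JDBC URL, Hive creates the metastore database in MySQL on its first start, so nothing needs to be created by hand. After the first start you can verify it from MySQL (assuming the 'hive' user and password configured above); metastore tables such as DBS and TBLS should be listed:

mysql -u hive -p123 -e "SHOW DATABASES LIKE 'hive'; USE hive; SHOW TABLES;"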

2. Other Steps

(1) Firewall and SELinux

Stop the firewall:
service iptables stop
chkconfig iptables off

Disable SELinux by editing /etc/selinux/config:
SELINUX=disabled
This takes effect after a reboot.
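
You can check the current SELinux state with:

getenforce    # prints "Disabled" once the change has taken effect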

(2) Add the MySQL JDBC driver

Hive connects to the MySQL metastore over JDBC, so the connector jar must be placed in Hive's lib directory:

 mv mysql-connector-java-5.1.33-bin.jar /opt/hive-1.2/lib/

(3) Replace the jline jar

Hive 1.2 ships jline 2.12 in $HIVE_HOME/lib, while Hadoop 2.6's YARN lib carries a much older jline; if the old one is picked up first, the Hive CLI fails at startup with a terminal-initialization error. Copy the newer jar into Hadoop:

cp jline-2.12.jar /opt/hadoop-2.6.0/share/hadoop/yarn/lib/
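
If an older jline jar is still present in that directory (on Hadoop 2.6 it is typically jline-0.9.94.jar), move it aside so it cannot shadow the new one:

mv /opt/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar ~/jline-0.9.94.jar.bak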

Hive is now set up. Next, start it; Hadoop must already be running.
Since hive was added to PATH earlier, simply type hive:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/hive-1.2/lib/hive-common-1.2.1.jar!/hive-log4j.properties
hive>
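
The hive> prompt means the CLI is up. As a quick smoke test (the database and table names below are just illustrative examples, not part of the setup):

hive> show databases;
hive> create database testdb;
hive> use testdb;
hive> create table t1 (id int, name string);
hive> select count(*) from t1;    -- compiled into a MapReduce job, as described at the start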