
Installing and Configuring Hive 2.1.1 on Windows 7

1 Install and configure Hadoop, version: hadoop-2.7.6

2 Extract Hive to: D:\Soft\apache-hive-2.1.1-bin

3 Download the MySQL JDBC driver and place it at: D:\Soft\apache-hive-2.1.1-bin\lib\mysql-connector-java-5.1.30-bin.jar

4 Set environment variables:

   HIVE_HOME: D:\Soft\apache-hive-2.1.1-bin  (the Hive root directory, not its bin subdirectory)

   path: %HIVE_HOME%\bin;
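
To confirm the variables took effect, open a new cmd window and check (an optional sanity check; `where` is the built-in Windows lookup command):

echo %HIVE_HOME%
where hive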

5 Hive configuration

The config directory D:\Soft\apache-hive-2.1.1-bin\conf ships with four default template files; copy each one to a new file name (note that Hive 2.1.1 uses the log4j2 templates):

hive-default.xml.template              ----->  hive-site.xml
hive-env.sh.template                   ----->  hive-env.sh
hive-exec-log4j2.properties.template   ----->  hive-exec-log4j2.properties
hive-log4j2.properties.template        ----->  hive-log4j2.properties

6 Create a local directory (referenced by the configuration below):

D:\Soft\apache-hive-2.1.1-bin\my_hive

7 Edit hive-env.sh

export HADOOP_HOME=D:\Soft\hadoop-2.7.6

export HIVE_CONF_DIR=D:\Soft\apache-hive-2.1.1-bin\conf

export HIVE_AUX_JARS_PATH=D:\Soft\apache-hive-2.1.1-bin\lib

8 Edit hive-site.xml


<!-- Hive warehouse directory; this path lives on HDFS -->
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

<!-- Hive scratch (temporary) directory; this path also lives on HDFS -->
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>

<!-- local scratch directory -->
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>D:/Soft/apache-hive-2.1.1-bin/my_hive/scratch_dir</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<!-- local downloaded-resources directory -->
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>D:/Soft/apache-hive-2.1.1-bin/my_hive/resources_dir/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

<!-- local query-log directory -->
<property>
  <name>hive.querylog.location</name>
  <value>D:/Soft/apache-hive-2.1.1-bin/my_hive/querylog_dir</value>
  <description>Location of Hive run time structured log file</description>
</property>

<!-- local operation-logs directory -->
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>D:/Soft/apache-hive-2.1.1-bin/my_hive/operation_logs_dir</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

<!-- metastore database connection URL -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore.</description>
</property>

<!-- JDBC driver class -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<!-- database user name -->
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>

<!-- database password -->
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>wmzycn</value>
  <description>password to use against metastore database</description>
</property>

<!-- fixes: Caused by: MetaException(message:Version information not found in metastore.) -->
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>
    Enforce metastore schema version consistency.
    True: Verify that version information stored in the metastore is compatible with one from Hive jars. Also disable automatic
    schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
    proper metastore schema migration. (Default)
    False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
  </description>
</property>

<!-- auto-create all schema objects -->
<!-- fixes the error: Required table missing : "DBS" in Catalog "" Schema "" -->
<property>
  <name>datanucleus.schema.autoCreateAll</name>
  <value>true</value>
  <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>
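
The last description recommends schematool over auto-creation for production use. With this MySQL metastore, the one-time schema initialization would look like the following (a sketch, assuming the schematool script in %HIVE_HOME%\bin is runnable on your setup):

schematool -dbType mysql -initSchema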

9 Create the HDFS directories on Hadoop

hadoop fs  -mkdir       /tmp
hadoop fs  -mkdir       /user/
hadoop fs  -mkdir       /user/hive/
hadoop fs  -mkdir       /user/hive/warehouse 
hadoop fs  -chmod g+w   /tmp
hadoop fs  -chmod g+w   /user/hive/warehouse
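
To confirm the directories and permissions (optional; assumes HDFS is running):

hadoop fs -ls /
hadoop fs -ls /user/hive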

10 Create the MySQL database

Create a database named hive; note the encoding: latin1

Grant the root user access privileges.
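
The database and privileges described above can be created from the mysql client, for example (a sketch for MySQL 5.x; the password must match the ConnectionPassword value in hive-site.xml):

mysql> CREATE DATABASE hive DEFAULT CHARACTER SET latin1;
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'root'@'localhost' IDENTIFIED BY 'wmzycn';
mysql> FLUSH PRIVILEGES;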

11 Start Hive

1) Start Hadoop

2) Start the metastore service: hive --service metastore

On a successful start, the metastore checks whether its database tables exist and creates them if they don't.

3) Start the Hive CLI: hive

Note: step 2) is only needed the first time; once the metadata tables have been created, later runs only need steps 1) and 3).

12 Use Hive to work with data

Create a table:

hive> create table test_table(id INT,name string);

Show tables:

hive> show tables;
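
A few more statements to exercise the new table (optional; INSERT ... VALUES is supported from Hive 0.14 onward):

hive> insert into test_table values (1, 'test');
hive> select * from test_table;
hive> drop table test_table;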