
Baffling errors when installing Hive by the book (fully solved): both methods below work

   Either of the two methods below works; Method One is recommended!

If anything here is wrong, please see the blog post.

Method One:

  Step 1: yum -y install mysql-server

  Step 2: service mysqld start

  Step 3: mysql -u root -p
  Enter password: (the default password is empty; just press Enter)

    mysql > CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';

    mysql > GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;

    mysql > flush privileges;

    mysql > exit;

  Step 4: go to the lib directory under the Hive installation directory and upload mysql-connector-java-5.1.21.jar into it.
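  A minimal sketch of this step, assuming the JAR has already been downloaded to the hadoop user's home directory and that Hive lives under the install path used later in this post:

  # Assumed paths; adjust to your environment
  cp ~/mysql-connector-java-5.1.21.jar /home/hadoop/app/hive-1.2.1/lib/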

  Step 5: go to the conf directory under the Hive installation directory and configure hive-site.xml

<property>
         <name>javax.jdo.option.ConnectionDriverName</name>
         <value>com.mysql.jdbc.Driver</value>
         <description>Driver class name for a JDBC metastore</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionURL</name>
         <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
         <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
         <description>Username to use against metastore database</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
         <description>password to use against metastore database</description>
</property>

  Step 6: switch to the root user, edit /etc/profile, and run source to apply it.
  Step 7: Done!
 
 When writing client code, you can now use your own IP, e.g. "jdbc:hive2://192.168.80.128:10000/hivebase".
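 Assuming a HiveServer2 instance is running on that machine on the default port 10000, connectivity can be sanity-checked with the beeline client that ships with Hive; the IP, database name hivebase, and the hive/hive credentials are taken from this guide and may differ in your setup:

 # Quick connectivity check with beeline (URL and credentials are assumptions based on this guide)
 beeline -u "jdbc:hive2://192.168.80.128:10000/hivebase" -n hive -p hive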

 Method Two:

Step 1:

[[email protected] app]# yum -y install mysql-server
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
* base: mirror.bit.edu.cn
* extras: mirror.bit.edu.cn
* updates: mirror.bit.edu.cn
base | 3.7 kB 00:00
extras | 3.4 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 3.1 MB 00:04
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package mysql-server.x86_64 0:5.1.73-7.el6 will be installed
--> Processing Dependency: mysql = 5.1.73-7.el6 for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl-DBI for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl-DBD-MySQL for package: mysql-server-5.1.73-7.el6.x86_64
--> Processing Dependency: perl(DBI) for package: mysql-server-5.1.73-7.el6.x86_64
--> Running transaction check
---> Package mysql.x86_64 0:5.1.73-7.el6 will be installed
--> Processing Dependency: mysql-libs = 5.1.73-7.el6 for package: mysql-5.1.73-7.el6.x86_64
---> Package perl-DBD-MySQL.x86_64 0:4.013-3.el6 will be installed
---> Package perl-DBI.x86_64 0:1.609-4.el6 will be installed
--> Running transaction check
---> Package mysql-libs.x86_64 0:5.1.71-1.el6 will be updated
---> Package mysql-libs.x86_64 0:5.1.73-7.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================
Installing:
mysql-server x86_64 5.1.73-7.el6 base 8.6 M
Installing for dependencies:
mysql x86_64 5.1.73-7.el6 base 894 k
perl-DBD-MySQL x86_64 4.013-3.el6 base 134 k
perl-DBI x86_64 1.609-4.el6 base 705 k
Updating for dependencies:
mysql-libs x86_64 5.1.73-7.el6 base 1.2 M

Transaction Summary
=======================================================================================================================================================================
Install 4 Package(s)
Upgrade 1 Package(s)

Total download size: 12 M
Downloading Packages:
(1/5): mysql-5.1.73-7.el6.x86_64.rpm | 894 kB 00:01
(2/5): mysql-libs-5.1.73-7.el6.x86_64.rpm | 1.2 MB 00:02
(3/5): mysql-server-5.1.73-7.el6.x86_64.rpm | 8.6 MB 00:15
(4/5): perl-DBD-MySQL-4.013-3.el6.x86_64.rpm | 134 kB 00:00
(5/5): perl-DBI-1.609-4.el6.x86_64.rpm | 705 kB 00:01
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 548 kB/s | 12 MB 00:21
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
** Found 3 pre-existing rpmdb problem(s), 'yum check' output follows:
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so()(64bit)
1:libreoffice-core-4.0.4.2-9.el6.x86_64 has missing requires of libjawt.so(SUNWprivate_1.1)(64bit)
1:libreoffice-ure-4.0.4.2-9.el6.x86_64 has missing requires of jre >= ('0', '1.5.0', None)
Updating : mysql-libs-5.1.73-7.el6.x86_64 1/6
Installing : perl-DBI-1.609-4.el6.x86_64 2/6
Installing : perl-DBD-MySQL-4.013-3.el6.x86_64 3/6
Installing : mysql-5.1.73-7.el6.x86_64 4/6
Installing : mysql-server-5.1.73-7.el6.x86_64 5/6
Cleanup : mysql-libs-5.1.71-1.el6.x86_64 6/6
Verifying : mysql-5.1.73-7.el6.x86_64 1/6
Verifying : mysql-libs-5.1.73-7.el6.x86_64 2/6
Verifying : perl-DBD-MySQL-4.013-3.el6.x86_64 3/6
Verifying : mysql-server-5.1.73-7.el6.x86_64 4/6
Verifying : perl-DBI-1.609-4.el6.x86_64 5/6
Verifying : mysql-libs-5.1.71-1.el6.x86_64 6/6

Installed:
mysql-server.x86_64 0:5.1.73-7.el6

Dependency Installed:
mysql.x86_64 0:5.1.73-7.el6 perl-DBD-MySQL.x86_64 0:4.013-3.el6 perl-DBI.x86_64 0:1.609-4.el6

Dependency Updated:
mysql-libs.x86_64 0:5.1.73-7.el6

Complete!
[[email protected] app]#

Step 2:

[[email protected] app]# service mysqld start
Initializing MySQL database: Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h HadoopSlave2 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

[ OK ]
Starting mysqld: [ OK ]
[[email protected] app]#
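As the install output above suggests, you may also want mysqld to start automatically at boot; on CentOS 6 (SysV init) that would be roughly:

# Optional: enable the MySQL service at boot
chkconfig mysqld on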

 Step 3:

[[email protected] app]# mysql -u root -p
Enter password: (the default password is empty; just press Enter)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> set password for root@localhost=password('rootroot');
Query OK, 0 rows affected (0.10 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[[email protected] app]#

 Step 4:

 

[[email protected] app]# mysql -uroot -prootroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create user 'hive'@'%' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on *.* to 'hive'@'HadoopSlave2' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> set password for hive@HadoopSlave2=password('hive');
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[[email protected] app]#

 Step 5:

[[email protected] app]# mysql -uhive -phive
ERROR 1045 (28000): Access denied for user 'hive'@'localhost' (using password: YES)
[[email protected] app]# mysql -uroot -prootroot
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to 'hive'@'localhost' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)

mysql> set password for hive@localhost=password('hive');
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[[email protected] app]#

Step 6:

[[email protected] app]# mysql -uhive -phive
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE hive;
Query OK, 1 row affected (0.00 sec)

mysql> exit;
Bye
[[email protected] app]#

Step 7:

[[email protected] app]# su hadoop
[[email protected] app]$ cd hive-1.2.1/
[[email protected] hive-1.2.1]$ cd conf/
[[email protected] conf]$ ll
total 188
-rw-rw-r--. 1 hadoop hadoop 1139 Apr 30 2015 beeline-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 168431 Jun 19 2015 hive-default.xml.template
-rw-rw-r--. 1 hadoop hadoop 2378 Apr 30 2015 hive-env.sh.template
-rw-rw-r--. 1 hadoop hadoop 2662 Apr 30 2015 hive-exec-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 3050 Apr 30 2015 hive-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop 1593 Apr 30 2015 ivysettings.xml
[[email protected] conf]$ cp hive-env.sh.template hive-env.sh
[[email protected] conf]$ cp hive-default.xml.template hive-site.xml
[[email protected] conf]$ cp hive-exec-log4j.properties.template hive-exec-log4j.properties
[[email protected] conf]$ cp hive-log4j.properties.template hive-log4j.properties
[[email protected] conf]$ vim hive-site.xml

<property>
         <name>javax.jdo.option.ConnectionDriverName</name>
         <value>com.mysql.jdbc.Driver</value>
         <description>Driver class name for a JDBC metastore</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionURL</name>
         <value>jdbc:mysql://HadoopSlave2:3306/hive?characterEncoding=UTF-8</value>
         <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionUserName</name>
         <value>hive</value>
         <description>Username to use against metastore database</description>
</property>
<property>
         <name>javax.jdo.option.ConnectionPassword</name>
         <value>hive</value>
         <description>password to use against metastore database</description>
</property>

Step 8: copy mysql-connector-java-5.1.21.jar into Hive's lib directory (the same connector JAR used in Method One, Step 4).

 Step 9:

 

[[email protected] lib]$ cd ..
[[email protected] hive-1.2.1]$ su root
Password:
[[email protected] hive-1.2.1]# vim /etc/profile

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.6
export HIVE_HOME=/home/hadoop/app/hive-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin

[[email protected] hive-1.2.1]# source /etc/profile

If Hive reports the following error at startup:

Exception in thread "main" java.lang.RuntimeException:

java.lang.IllegalArgumentException: java.net.URISyntaxException:

Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}

then create a temporary I/O directory named iotmp under the Hive installation directory and point the following properties at it:

<property>
         <name>hive.querylog.location</name>
         <value>/home/hadoop/app/hive-1.2.1/iotmp</value>
         <description>Location of Hive run time structured log file</description>
</property>
<property>
         <name>hive.exec.local.scratchdir</name>
         <value>/home/hadoop/app/hive-1.2.1/iotmp</value>
         <description>Local scratch space for Hive jobs</description>
</property>
<property>
         <name>hive.downloaded.resources.dir</name>
         <value>/home/hadoop/app/hive-1.2.1/iotmp</value>
         <description>Temporary local directory for added resources in the remote file system.</description>
</property>
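A minimal sketch of creating that directory, assuming the Hive install path used elsewhere in this post:

# Create the local scratch/log directory referenced by the three properties above
mkdir -p /home/hadoop/app/hive-1.2.1/iotmp
chmod 755 /home/hadoop/app/hive-1.2.1/iotmp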

    Alternatively, use /usr/local/data/hive/iotmp.

   I tested on both the master node and the slave nodes, and this problem appears on all of them!

[[email protected] hive-1.2.1]$ su root
Password:
[[email protected] hive-1.2.1]# cd ..
[[email protected] app]# clear
[[email protected] app]# service mysqld start
Starting mysqld: [ OK ]
[[email protected] app]# su hadoop
[[email protected] app]$ cd hive-1.2.1/
[[email protected] hive-1.2.1]$ bin/hive

Logging initialized using configuration in file:/home/hadoop/app/hive-1.2.1/conf/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Got exception: java.io.IOException No FileSystem for scheme: hfds)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1213)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:106)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more

 At this point I had no idea where the error was coming from!

   Guess 1: could it be a version issue between Hive 1.x and Hive 0.x?

   Guess 2: could it be the uppercase letters in HadoopSlave2? Most people use hostnames like master, slave1, slave2.

  After testing, it is neither a version problem nor an uppercase problem.

  That said, for a three-node cluster it is still best to name the hosts master, slave1, slave2;

     for a five-node cluster, master, slave1, slave2, slave3, slave4;

      and even for a single-node setup, lowercase hostnames are best!

   The actual cause: hdfs had been mistyped as hfds.
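  One quick way to find that kind of slip is to grep the Hive and Hadoop configuration directories for the misspelling (paths assumed from this post's layout):

  # Search the config files for the misspelled scheme "hfds"
  grep -rn "hfds" /home/hadoop/app/hive-1.2.1/conf /home/hadoop/app/hadoop-2.6.0/etc/hadoop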

  For more advanced Hive configuration, see: http://www.cnblogs.com/braveym/p/6685045.html

[[email protected] hadoop]# cd ..
[[email protected] etc]# cd ..
[[email protected] hadoop-2.6.0]# cd ..
[[email protected] app]# service mysqld start
Starting mysqld: [ OK ]
[[email protected] app]# su hadoop
[[email protected] app]$ cd hive-1.2.1/
[[email protected] hive-1.2.1]$ bin/hive

Logging initialized using configuration in file:/home/hadoop/app/hive-1.2.1/conf/hive-log4j.properties
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101)
at jline.TerminalFactory.get(TerminalFactory.java:158)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Exception in thread "main" java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
at org.apache.hadoop.hive.cli.CliDriver.setupConsoleReader(CliDriver.java:787)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:721)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
[[email protected] hive-1.2.1]$

   Reference: http://blog.csdn.net/jdplus/article/details/46493553

1. Delete jline from the Hadoop lib directory (it's only pulled in transitively from ZooKeeper).
2. export HADOOP_USER_CLASSPATH_FIRST=true

   Then run it again!

   References: http://blog.csdn.net/zhumin726/article/details/8027802

               http://blog.csdn.net/xgjianstart/article/details/52192879

Fix: the jline versions do not match. Make the jline-*.jar under HADOOP_HOME/share/hadoop/yarn/lib and the one under HIVE_HOME/lib the same version by copying the newer of the two over the other.

  In other words, just keep the newer version.
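  A sketch of that copy, assuming the jline versions typically shipped with Hive 1.2.1 (jline-2.12.jar) and Hadoop 2.6.0 (jline-0.9.94.jar); check the actual file names in your own directories first:

  # Replace Hadoop's older jline with the newer one shipped with Hive (file names are assumptions)
  mv /home/hadoop/app/hadoop-2.6.0/share/hadoop/yarn/lib/jline-0.9.94.jar /tmp/
  cp /home/hadoop/app/hive-1.2.1/lib/jline-2.12.jar /home/hadoop/app/hadoop-2.6.0/share/hadoop/yarn/lib/
  # Or, instead of copying, make Hive's classpath win:
  export HADOOP_USER_CLASSPATH_FIRST=true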

 

  OK, Hive now starts successfully! (For reference, hive-env.sh was configured as follows:)

# Set HADOOP_HOME to point to a specific hadoop install directory
  export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0

# Hive Configuration Directory can be controlled by:
  export HIVE_CONF_DIR=/home/hadoop/app/hive-1.2.1/conf

# Folder containing extra ibraries required for hive compilation/execution can be controlled by:
  export HIVE_AUX_JARS_PATH=/home/hadoop/app/hive-1.2.1/lib

  Then save the file and apply it with source ./hive-env.sh.

Before making these changes, create the corresponding directories so they match the paths in the configuration file; otherwise Hive will report errors at run time.

mkdir -p /home/hadoop/data/hive-1.2.1/warehouse
mkdir -p /home/hadoop/data/hive-1.2.1/tmp
mkdir -p /home/hadoop/data/hive-1.2.1/log

<property>
<name>hive.metastore.warehouse.dir</name>
<value>/home/hadoop/data/hive-1.2.1/warehouse</value>
<description>location of default database for the warehouse</description>
</property>

<property>
<name>hive.exec.scratchdir</name>
<value>/home/hadoop/data/hive-1.2.1/tmp</value>
</property>

<property>
<name>hive.querylog.location</name>
<value>/home/hadoop/data/hive-1.2.1/log</value>
</property>

At this point the hive-site.xml modifications are complete!

  Then, in the conf folder, run cp hive-log4j.properties.template hive-log4j.properties

Open hive-log4j.properties: sudo gedit hive-log4j.properties on Ubuntu, or vim hive-log4j.properties on CentOS.

Find hive.log.dir=
This controls where Hive writes its log files at run time.

(Mine: hive.log.dir=/home/hadoop/data/hive-1.2.1/log/${user.name})

hive.log.file=hive.log
This is the name of the Hive log file.
The default is fine, as long as you can recognize it as the log.

Only one setting really needs to be changed; otherwise you will see the warning below.

log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
If you do not change it, you will see:

WARNING: org.apache.hadoop.metrics.EventCounter is deprecated.
please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
(Just modify it as the warning suggests.)

At this point the hive-log4j.properties modifications are complete.
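After the edits, the relevant lines in hive-log4j.properties would end up looking roughly like this (the log directory is the one chosen above; adjust it to your own layout):

hive.log.dir=/home/hadoop/data/hive-1.2.1/log/${user.name}
hive.log.file=hive.log
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter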

  Now for the main event!

  Next we configure Hive to use MySQL as the metastore database (in this mode Hive's metadata is stored in MySQL; if the MySQL environment supports two-way replication and clustering, the Hive metadata ends up backed up on at least two database servers). Use Hadoop to create the corresponding directory paths

and set their permissions:

bin/hadoop fs -mkdir /user/hadoop/hive/warehouse

bin/hadoop fs -mkdir /user/hadoop/hive/tmp

bin/hadoop fs -mkdir /user/hadoop/hive/log

bin/hadoop fs -chmod g+w /user/hadoop/hive/warehouse

bin/hadoop fs -chmod g+w /user/hadoop/hive/tmp

bin/hadoop fs -chmod g+w /user/hadoop/hive/log

Continue configuring hive-site.xml:

[1]

<property>
<name>hive.metastore.warehouse.dir</name>
<value>hdfs://localhost:9000/user/hadoop/hive/warehouse</value>
</property>

(This is the single-node form. My setup is a 3-node cluster with Hive installed on HadoopSlave1, so the host here should be HadoopMaster, but that kept throwing errors!)

(So: on a single node it hardly matters, but on a 3-node cluster like mine it is best to install Hive on the HadoopMaster machine itself. A hard-earned lesson.)
(This value corresponds to the earlier hadoop fs -mkdir -p /user/hadoop/hive/warehouse.)

(Alternatively, if Hive stays on HadoopSlave1, use /home/hadoop/data/hive-1.2.1/warehouse.)

Here localhost refers to the author's NameNode hostname.

[2]

<property>
<name>hive.exec.scratchdir</name>
<value>hdfs://localhost:9000/user/hadoop/hive/scratchdir</value>  
</property>

[3]

// This one is unchanged, the same as in the Derby-based configuration
<property>
<name>hive.querylog.location</name>
<value>/usr/hive/log</value>
</property>
-------------------------------------

[4]

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
javax.jdo.option.ConnectionURL
This parameter sets the metadata connection string.

Note: this property already exists in hive-site.xml (it was highlighted in red in the original post); you do not need to add it yourself.

My own mistake: I could not find this property in the file, so I added it myself, and as a result Hive kept reporting errors on startup. Only after I located the existing property entry in the file and modified it did Hive start successfully.

Unable to open a test connection to the given database. JDBC url = jdbc:derby:;databaseName=/usr/local/hive121/metastore_db;create=true, username = hive. Terminating connection pool (set lazyInit to true if you expect to start your database after your app). Original Exception: ------

java.sql.SQLException: Failed to create database '/usr/local/hive121/metastore_db', see the next exception for details.

at org.apache.derby.impl.jdbc.SQLE

-------------------------------------

[5]

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
javax.jdo.option.ConnectionDriverName

When Java code needs to interact with MySQL from Hive, a MySQL connector is required: it translates the database operations expressed in Java into statements that MySQL understands.

The connector is a JAR file written in Java and can be downloaded from the official MySQL website; in practice it still works even if the connector and MySQL version numbers do not match exactly.

Copy the connector into the /usr/local/hive1.2.1/lib directory.

[6]

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
This javax.jdo.option.ConnectionUserName
sets the username for the database (here, a MySQL database) in which Hive stores its metadata.

[7]

--------------------------------------
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
This javax.jdo.option.ConnectionPassword sets
the password to be entered when logging in to that database (the database above).

[8]

<property>
<name>hive.aux.jars.path</name>
<value>file:///usr/local/hive/lib/hive-hbase-handler-0.13.1.jar,file:///usr/local/hive/lib/protobuf-java-2.5.0.jar,file:///usr/local/hive/lib/hbase-client-0.96.0-hadoop2.jar,file:///usr/local/hive/lib/hbase-common-0.96.0-hadoop2.jar,file:///usr/local/hive/lib/zookeeper-3.4.5.jar,file:///usr/local/hive/lib/guava-11.0.2.jar</value>
</property>

The corresponding JAR packages must be copied from HBase's lib folder into Hive's lib folder.
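A sketch of that copy, assuming HBase is installed under /usr/local/hbase (adjust both paths and version numbers to match your layout and the JARs listed in hive.aux.jars.path above):

# Copy the HBase/protobuf/ZooKeeper/Guava JARs referenced above into Hive's lib directory (paths assumed)
cp /usr/local/hbase/lib/hbase-client-0.96.0-hadoop2.jar \
   /usr/local/hbase/lib/hbase-common-0.96.0-hadoop2.jar \
   /usr/local/hbase/lib/protobuf-java-2.5.0.jar \
   /usr/local/hbase/lib/zookeeper-3.4.5.jar \
   /usr/local/hbase/lib/guava-11.0.2.jar \
   /usr/local/hive/lib/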

[9]

<property>  
<name>hive.metastore.uris</name>  
<value>thrift://localhost:9083</value>  
</property>  
</configuration>

---------------------------------------- That concludes the overview of how the configuration works.
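Because hive.metastore.uris points at thrift://localhost:9083, a standalone metastore service has to be running before clients connect. A minimal way to start it, assuming hive is on the PATH as set in /etc/profile above:

# Start the Hive metastore service in the background on the default port 9083
nohup hive --service metastore > /home/hadoop/app/hive-1.2.1/metastore.log 2>&1 &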

Use Hadoop to create the corresponding file paths,
and set permissions on them:
bin/hadoop fs  -mkdir -p  /usr/hive/warehouse
bin/hadoop fs  -mkdir -p /usr/hive/tmp
bin/hadoop fs  -mkdir -p /usr/hive/log
bin/hadoop fs   -chmod g+w /usr/hive/warehouse
bin/hadoop fs   -chmod g+w /usr/hive/tmp
bin/hadoop fs   -chmod g+w /usr/hive/log
