
1. HA review + configuring MySQL as the Hive metastore on weekend110 (a fully working setup, CentOS edition, including uninstalling the system's bundled MySQL)

This post covers:

  - A review of HA
  - The MySQL database
  - Creating the hive database in MySQL first
  - Configuring Hive

Below is a summary of Apache Hadoop HA, split into HDFS HA and YARN HA.

 

  The material above draws on《Hadoop海量資料處理 技術詳解與專案實戰》, which I strongly recommend reading first.

  What I want to stress is that Hive is just a tool: its data analysis relies on MapReduce, and its data management relies on external systems.

  metastore_db is created in whichever directory you run Hive from.

 

  As you can see, metastore_db is generated under whatever path you run the hive command from. Building a separate set of database files per directory is highly inappropriate: if everyone in a company runs Hive from a different place, the metadata becomes a mess and colleagues cannot share it.

  Hence we need something shared, MySQL, as Hive's metadata store.

  If you do nothing, Hive defaults to Derby, which is single-user: inconvenient and unsuitable for multiple users.

  As for ordering: usually Hive is installed first and MySQL second, though I have seen people do it the other way around, and that works too.

Configure the MySQL metastore (switch to the root user)

         Configure the HIVE_HOME environment variable

1. Install MySQL online

  An experienced big-data engineer, whether installing the JDK (note: CentOS 6.5 ships with its own) or installing MySQL, first checks whether a MySQL package is already on the system.

[root@weekend110 app]# rpm -qa|grep mysql
mysql-libs-5.1.71-1.el6.x86_64
[root@weekend110 app]# rpm -e --nodeps mysql-libs-5.1.71-1.el6.x86_64
[root@weekend110 app]# rpm -qa|grep mysql
[root@weekend110 app]#

[root@weekend110 app]# pwd
/home/hadoop/app
[root@weekend110 app]# yum -y install mysql-server    (CentOS)

  If you are on Ubuntu, see the Ubuntu-specific instructions instead.

Alternatively, you can install from RPM packages, but these two have to be downloaded in advance:

rpm -ivh MySQL-server-5.1.73-1.glibc23.i386.rpm
rpm -ivh MySQL-client-5.1.73-1.glibc23.i386.rpm

 For details, see: http://blog.csdn.net/u014726937/article/details/52142048

2. Start the MySQL service

 

[root@weekend110 app]# service mysqld start    (CentOS)
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:
/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h weekend110 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!
[  OK  ]
Starting mysqld:  [  OK  ]
[root@weekend110 app]#

[root@weekend110 app]# mysql -u root -p        (log in)

Enter password: (just press Enter; the initial password is empty)

 Actually, just a few statements are enough here:

mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';    // create an account: username hive, password hive

mysql> GRANT ALL PRIVILEGES ON *.* to 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION;   // grant the hive user all privileges, connecting from any host

mysql> FLUSH PRIVILEGES;

mysql> exit;

 Here, *.* means every table of every database, and % means connections from any IP address or host are allowed.

[root@weekend110 app]# mysql -u root -p
Enter password:   (press Enter)
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* to 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[root@weekend110 app]#

   Additionally, for remote clients to be able to reach MySQL, edit /etc/mysql/my.cnf (Ubuntu) or /etc/my.cnf (CentOS) and comment out the bind-address line.

Of course, this can also be done later.

  Note that on CentOS the file has no such line by default.

   Then restart MySQL: sudo /etc/init.d/mysql restart (on CentOS: service mysqld restart).
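Commenting out bind-address can also be scripted. Here is a sketch using sed, demonstrated on a scratch file rather than the real config; on an actual machine point cnf at /etc/my.cnf (CentOS) or /etc/mysql/my.cnf (Ubuntu):

```shell
# Scratch copy of a my.cnf that binds MySQL to localhost only
cnf=$(mktemp)
printf '[mysqld]\nbind-address = 127.0.0.1\n' > "$cnf"

# Prefix any bind-address line with '#' so MySQL accepts remote connections
sed -i 's/^[[:space:]]*bind-address/#&/' "$cnf"
cat "$cnf"
```

Remember to restart the MySQL service afterwards for the change to take effect.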

3. Set a password for the MySQL root user

  When MySQL is freshly installed, its root user has no password yet (the default password is empty, so just press Enter). First, set the root user's password.

[root@weekend110 app]# mysql -u root -p
Enter password: (press Enter)
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> select user,host,password from mysql.user;
+------+------------+----------+
| user | host       | password |
+------+------------+----------+
| root | localhost  |          |
| root | weekend110 |          |
| root | 127.0.0.1  |          |
|      | localhost  |          |
|      | weekend110 |          |
+------+------------+----------+
5 rows in set (0.00 sec)

mysql> set password for root@localhost=password('rootroot');
Query OK, 0 rows affected (0.00 sec)

mysql> set password for root@weekend110=password('rootroot');
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,password from mysql.user;
+------+------------+-------------------------------------------+
| user | host       | password                                  |
+------+------------+-------------------------------------------+
| root | localhost  | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root | weekend110 | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root | 127.0.0.1  |                                           |
|      | localhost  |                                           |
|      | weekend110 |                                           |
+------+------------+-------------------------------------------+
5 rows in set (0.00 sec)

mysql>

4. Create a MySQL account hive for Hive, with password hive.

 

mysql> create user 'hive' identified by 'hive';    // create an account: username hive, password hive
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on *.* to 'hive'@'weekend110' identified by 'hive';  // grant all privileges to the hive user at host weekend110
Query OK, 0 rows affected (0.00 sec)

mysql> select user,host,password from mysql.user;
+------+------------+-------------------------------------------+
| user | host       | password                                  |
+------+------------+-------------------------------------------+
| root | localhost  | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root | weekend110 | *6C362347EBEAA7DF44F6D34884615A35095E80EB |
| root | 127.0.0.1  |                                           |
|      | localhost  |                                           |
|      | weekend110 |                                           |
| hive | %          | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
| hive | weekend110 | *4DF1D66463C18D44E3B001A8FB1BBFBEA13E27FC |
+------+------------+-------------------------------------------+
7 rows in set (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

mysql> exit;
Bye
[root@weekend110 app]#

5. Log in with the newly created hive account and create Hive's dedicated metastore database

 

[root@weekend110 app]# mysql -uhive -hweekend110 -phive
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE hive;     // create hive, the metastore database dedicated to the hive user
Query OK, 1 row affected (0.01 sec)

mysql> SHOW DATABASES;   (by convention, SQL keywords in uppercase)
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.03 sec)

  Three of these, information_schema, mysql, and test, are created automatically by the MySQL installer: the mysql database contains the MySQL grant tables, information_schema holds server metadata, and test is a practice database for users.

mysql> exit;
Bye
[root@weekend110 app]#

[root@weekend110 app]# mysql -u hive -h weekend110 -p hive
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> exit;
Bye
[root@weekend110 app]# mysql -uhive  -phive
ERROR 1045 (28000): Access denied for user 'hive'@'localhost' (using password: YES)
[root@weekend110 app]# mysql -uroot  -prootroot
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.73 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant all on *.* to 'hive'@'localhost' identified by 'hive';
Query OK, 0 rows affected (0.00 sec)

mysql> set password for hive@localhost=password('hive');
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit;
Bye
[root@weekend110 app]#

That is the final, working state (the fully correct configuration this post describes).

  If you run into errors, see the related posts.

Configure Hive

A quick tip for searching inside the file with vi: press Esc, then type / followed by the search term (use ? for a backward search).

Change four settings (a beginner only needs these four):

javax.jdo.option.ConnectionURL
javax.jdo.option.ConnectionDriverName
javax.jdo.option.ConnectionUserName
javax.jdo.option.ConnectionPassword

   Of course, some people say five settings, or even more, should be changed.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

becomes

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://weekend110:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

becomes

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>APP</value>
  <description>username to use against metastore database</description>
</property>

becomes

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mine</value>
  <description>password to use against metastore database</description>
</property>

becomes

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
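For copy-paste convenience, here is how the four modified properties read together in hive-site.xml; the weekend110 host and the hive/hive credentials are the ones created in the MySQL steps above, so adjust them if your setup differs:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://weekend110:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
```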

 

After that, Hive may still fail to start: the metastore now needs MySQL's JDBC driver jar, but Hive ships only with the Derby driver.
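Since Hive ships only the Derby driver, the MySQL JDBC driver jar has to be copied into $HIVE_HOME/lib. A sketch of that step, with scratch directories standing in for the download directory and $HIVE_HOME/lib so the commands run end to end; the connector jar name (mysql-connector-java-5.1.28.jar) is an assumption, use whatever version you downloaded:

```shell
# Scratch stand-ins for the download directory and $HIVE_HOME/lib
downloads=$(mktemp -d)
hive_lib=$(mktemp -d)
touch "$downloads/mysql-connector-java-5.1.28.jar"   # pretend we downloaded it

# The actual fix: drop the MySQL JDBC driver into Hive's lib directory
cp "$downloads/mysql-connector-java-5.1.28.jar" "$hive_lib/"
ls "$hive_lib"
```

Hive picks up every jar in its lib directory at startup, so no further classpath configuration is needed.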

 

   In addition, the following problems may appear at run time.

 Step 2: create a directory for Hive's temporary files, in my case /home/spark/app/hive-1.2.1/iotmp, and point these properties at it:

<property>
  <name>hive.querylog.location</name>
  <value>/home/spark/app/hive-1.2.1/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/spark/app/hive-1.2.1/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/spark/app/hive-1.2.1/iotmp</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>

 

 Step 4: a jline version mismatch. Fix it by making the jline-*.jar under HADOOP_HOME/share/hadoop/yarn/lib and under HIVE_HOME/lib the same version: copy the higher version over the lower one.

  In my case, HADOOP_HOME/share/hadoop/yarn/lib had jline 0.9.x; delete that jar, then copy jline-2.12.jar from HIVE_HOME/lib into HADOOP_HOME/share/hadoop/yarn/lib.
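The jline swap can be sketched as follows. Scratch directories stand in for $HADOOP_HOME/share/hadoop/yarn/lib and $HIVE_HOME/lib so the commands run as shown, and the 0.9.94 version number is assumed for illustration:

```shell
# Scratch stand-ins for the two lib directories
yarn_lib=$(mktemp -d)    # plays $HADOOP_HOME/share/hadoop/yarn/lib
hive_lib=$(mktemp -d)    # plays $HIVE_HOME/lib
touch "$yarn_lib/jline-0.9.94.jar" "$hive_lib/jline-2.12.jar"

# Remove the old jline from the YARN lib and copy Hive's newer one over
rm -f "$yarn_lib"/jline-0.9*.jar
cp "$hive_lib/jline-2.12.jar" "$yarn_lib/"
ls "$yarn_lib"
```

On a real cluster, substitute the two environment variables and run the rm and cp lines only.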

Verify as the root user:

[root@weekend110 bin]# pwd
/home/hadoop/app/hive-0.12.0/bin
[root@weekend110 bin]# ./hive
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/10/10 09:26:36 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive> exit;
[root@weekend110 bin]#

Then verify as the hadoop user:

[hadoop@weekend110 bin]$ pwd
/home/hadoop/app/hive-0.12.0/bin
[hadoop@weekend110 bin]$ ./hive
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/10/10 09:28:16 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive> exit;
[hadoop@weekend110 bin]$

   To sum up: on a single-node cluster, Hadoop and Hive run on the same node.

With the configuration above, or with this ConnectionURL instead:

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://weekend110:3306/hive?useUnicode=true&amp;characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>

  either way it starts fine.

However, when I used the same configuration on HadoopSlave1 of a 3-node cluster (HadoopMaster, HadoopSlave1, HadoopSlave2) and started Hive, it kept throwing errors.

  Perhaps that is because it was not configured on the master node.
