
Starting and stopping hadoop+zookeeper+hbase with scripts

[hadoop@hdp01 ~]$ ll
total 44
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Desktop
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Documents
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Downloads
-rwxr-xr-x  1 hadoop hadoop  378 11月 18 12:45 McDbDownAll.sh
-rwxr-xr-x  1 hadoop hadoop  387 11月 18 12:45 McDbUpAll.sh
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Music
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Pictures
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Public
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Templates
drwxrwxr-x  2 hadoop hadoop 4096 11月 17 10:01 test
drwxr-xr-x. 2 hadoop hadoop 4096 11月 16 15:11 Videos

Adjust the paths to match your actual installation directories:

[hadoop@hdp01 ~]$ cat McDbUpAll.sh
#!/bin/bash
echo 'start hadoop...'
/usr/local/bg/hadoop-2.7.1/sbin/start-all.sh
echo 'start zookeeper1...'
/opt/zookeeper-3.4.6/bin/zkServer.sh start
echo 'start zookeeper2...'
ssh hadoop@hdp02 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
echo 'start zookeeper3...'
ssh hadoop@hdp03 "/opt/zookeeper-3.4.6/bin/zkServer.sh start"
echo 'start hbase...'
/opt/hbase-1.2.4/bin/start-hbase.sh

[hadoop@hdp01 ~]$ cat McDbDownAll.sh
#!/bin/bash
echo 'stop hbase...'
/opt/hbase-1.2.4/bin/stop-hbase.sh
echo 'stop zookeeper3...'
ssh hadoop@hdp03 "/opt/zookeeper-3.4.6/bin/zkServer.sh stop"
echo 'stop zookeeper2...'
ssh hadoop@hdp02 "/opt/zookeeper-3.4.6/bin/zkServer.sh stop"
echo 'stop zookeeper1...'
/opt/zookeeper-3.4.6/bin/zkServer.sh stop
echo 'stop hadoop...'
/usr/local/bg/hadoop-2.7.1/sbin/stop-all.sh
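The two scripts above are intentionally minimal. If the node list grows, a single parameterized script is easier to keep in sync. The following is only a rough sketch along the same lines: it assumes the same install paths as above, passwordless SSH from the master node, and that the remote ZooKeeper nodes are hdp02 and hdp03 (inferred from the test output below; adjust to your cluster).

#!/bin/bash
# McDbCtl.sh -- start or stop hadoop + zookeeper + hbase in dependency order.
# Usage: ./McDbCtl.sh start | ./McDbCtl.sh stop
set -e   # abort at the first failing step

ACTION="$1"
HADOOP_SBIN=/usr/local/bg/hadoop-2.7.1/sbin
ZK_BIN=/opt/zookeeper-3.4.6/bin
HBASE_BIN=/opt/hbase-1.2.4/bin
# Remote ZooKeeper nodes; zookeeper1 runs locally on this host.
ZK_REMOTE_HOSTS="hdp02 hdp03"

case "$ACTION" in
  start)
    echo 'start hadoop...'
    "$HADOOP_SBIN/start-all.sh"
    echo 'start zookeeper (local)...'
    "$ZK_BIN/zkServer.sh" start
    for host in $ZK_REMOTE_HOSTS; do
      echo "start zookeeper on $host..."
      ssh "hadoop@$host" "$ZK_BIN/zkServer.sh start"
    done
    echo 'start hbase...'
    "$HBASE_BIN/start-hbase.sh"
    ;;
  stop)
    echo 'stop hbase...'
    "$HBASE_BIN/stop-hbase.sh"
    for host in $ZK_REMOTE_HOSTS; do
      echo "stop zookeeper on $host..."
      ssh "hadoop@$host" "$ZK_BIN/zkServer.sh stop"
    done
    echo 'stop zookeeper (local)...'
    "$ZK_BIN/zkServer.sh" stop
    echo 'stop hadoop...'
    "$HADOOP_SBIN/stop-all.sh"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac

Invoked as ./McDbCtl.sh start or ./McDbCtl.sh stop it keeps the same ordering as McDbUpAll.sh and McDbDownAll.sh: HBase is always the last thing up and the first thing down, with ZooKeeper in between.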

Test:

[hadoop@hdp01 ~]$ ./McDbUpAll.sh
start hadoop...
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hdp01]
hdp01: starting namenode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hdp01.out
hdp03: starting datanode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hdp03.out
hdp02: starting datanode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hdp02.out
hdp04: ssh: connect to host hdp04 port 22: No route to host
Starting secondary namenodes [hdp01]
hdp01: starting secondarynamenode, logging to /usr/local/bg/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-hdp01.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-hdp01.out
hdp03: starting nodemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hdp03.out
hdp02: starting nodemanager, logging to /usr/local/bg/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hdp02.out
hdp04: ssh: connect to host hdp04 port 22: No route to host
start zookeeper1...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start zookeeper2...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start zookeeper3...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
start hbase...
starting master, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-master-hdp01.out
hdp03: starting regionserver, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-hdp03.out
hdp02: starting regionserver, logging to /opt/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-hdp02.out
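
Before opening an HBase shell it is worth confirming that every daemon actually came up. A minimal check could look like the sketch below, again assuming hdp01–hdp03 are the live nodes and the same ZooKeeper path as above:

#!/bin/bash
# checkMcDb.sh -- quick health check after startup.
# Lists the JVM processes on every node and asks each ZooKeeper for its mode.
for host in hdp01 hdp02 hdp03; do
  echo "=== $host ==="
  # Expect NameNode/ResourceManager/HMaster on the master,
  # DataNode/NodeManager/HRegionServer on the slaves, QuorumPeerMain everywhere.
  ssh "hadoop@$host" "jps"
  # Reports "Mode: leader" or "Mode: follower" when the quorum is healthy.
  ssh "hadoop@$host" "/opt/zookeeper-3.4.6/bin/zkServer.sh status"
done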


[hadoop@hdp01 ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/bg/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-11-22 14:03:00,125 WARN  [main] conf.Configuration: hbase-site.xml:an attempt to override final parameter: dfs.replication;  Ignoring.
2016-11-22 14:03:01,100 WARN  [main] conf.Configuration: hbase-site.xml:an attempt to override final parameter: dfs.replication;  Ignoring.
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016

hbase(main):001:0> list
TABLE                                                                                                                                                 

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2293)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getTableNames(MasterRpcServices.java:900)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55650)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)

Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:

  hbase> list
  hbase> list 'abc.*'
  hbase> list 'ns:abc.*'
  hbase> list 'ns:.*'

The PleaseHoldException only means the HMaster has not finished initializing yet; retrying the command a few seconds later succeeds:

hbase(main):006:0> list
TABLE                                                                                                                                                 
t1                                                                                                                                                    
1 row(s) in 0.0760 seconds

=> ["t1"]
hbase(main):007:0> scan 't1'
ROW                                    COLUMN+CELL                                                                                                    
 k1                                    column=c1:a, timestamp=1479371230254, value=value1                                                             
 k2                                    column=c1:b, timestamp=1479371248124, value=value2                                                             
2 row(s) in 0.4220 seconds
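
For reference, a table shaped like t1 above can also be created and populated non-interactively, which makes a convenient smoke test after running the start script. This is just a sketch that mirrors the rows shown in the scan output:

#!/bin/bash
# smokeTestHbase.sh -- create a small table, write two cells, and scan it.
# Note: the create step fails if 't1' already exists.
hbase shell <<'EOF'
create 't1', 'c1'
put 't1', 'k1', 'c1:a', 'value1'
put 't1', 'k2', 'c1:b', 'value2'
scan 't1'
EOF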

[hadoop@hdp01 ~]$ ./McDbDownAll.sh
stop hbase...
stopping hbase........................
stop zookeeper3...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop zookeeper2...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop zookeeper1...
JMX enabled by default
Using config: /opt/zookeeper-3.4.6/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
stop hadoop...
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [hdp01]
hdp01: stopping namenode
hdp02: stopping datanode
hdp03: stopping datanode
hdp04: ssh: connect to host hdp04 port 22: No route to host
Stopping secondary namenodes [hdp01]
hdp01: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
hdp03: stopping nodemanager
hdp02: stopping nodemanager
hdp04: ssh: connect to host hdp04 port 22: No route to host
no proxyserver to stop
[hadoop@hdp01 ~]$

