
HBase 1.2.6 Fully Distributed Cluster Installation and Deployment, Step by Step

Apache HBase is a highly reliable, high-performance, column-oriented, scalable distributed storage system. It is a NoSQL database, an open-source implementation of the ideas behind Google Bigtable, and it can be used to build large-scale structured-storage clusters on inexpensive PC servers. It uses Hadoop HDFS as its file storage system, Hadoop MapReduce to process HBase's massive data sets, and ZooKeeper to coordinate the server cluster. More background is available on the Apache HBase website.

Setting up a fully distributed Apache HBase cluster is not complicated. The detailed deployment steps follow:

1. Plan the HBase cluster nodes

This walkthrough uses 4 nodes, on which the HBase Master, backup Master, and RegionServers are configured. The nodes run CentOS 6.9, and the processes are planned as follows:

Host  IP          Processes
hd1   172.17.0.1  Master, ZooKeeper
hd2   172.17.0.2  Backup Master, RegionServer, ZooKeeper
hd3   172.17.0.3  RegionServer, ZooKeeper
hd4   172.17.0.4  RegionServer
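
All four hostnames must resolve on every node. Assuming no internal DNS, a minimal /etc/hosts entry set (identical on each node) would look like this:

172.17.0.1 hd1
172.17.0.2 hd2
172.17.0.3 hd3
172.17.0.4 hd4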

2. Install the JDK, ZooKeeper, and Hadoop

On every server node, turn off the firewall and set SELinux to disabled.
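
On CentOS 6 this can be done as follows (a sketch; run as root, and note the SELinux change fully applies after a reboot):

# stop the firewall now and keep it off across reboots
service iptables stop
chkconfig iptables off

# set SELinux to permissive immediately, and disable it permanently
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config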

Install the JDK, ZooKeeper, and an Apache Hadoop distributed cluster (see my other blog post for the detailed steps).

After installing, set the following environment variables; they are needed when installing and configuring HBase.

export JAVA_HOME=/usr/java/jdk1.8.0_131
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

export HADOOP_HOME=/home/ahadoop/hadoop-2.8.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export ZOOKEEPER_HOME=/home/ahadoop/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin
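
A quick sanity check that the variables took effect after sourcing your profile:

# all three should print real versions/paths, not empty output
java -version
hadoop version
echo $ZOOKEEPER_HOME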

3. Install NTP so the server nodes' clocks agree

If the clocks on the server nodes disagree, HBase can misbehave; the HBase documentation stresses this point. Here, the first node, hd1, is set up as the NTP server: it synchronizes its own clock from the National Time Service Center, and the other nodes (hd2, hd3, hd4) synchronize from hd1 as clients.

(1) Install NTP

# install the NTP service
yum -y install ntp 

# enable the service at boot
chkconfig --add ntpd

chkconfig ntpd on

Start the NTP service

service ntpd start

(2) Configure the NTP server

On node hd1, edit the /etc/ntp.conf file to configure the NTP service; the specific changes are marked with comments below.

vi /etc/ntp.conf


# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).
driftfile /var/lib/ntp/drift
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# add the network range that is allowed to send requests
restrict 172.17.0.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst

# upstream servers to synchronize the clock from
server 210.72.145.44 prefer      # China National Time Service Center
server 202.112.10.36             # 1.cn.pool.ntp.org
server 59.124.196.83             # 0.asia.pool.ntp.org

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# allow the upstream time servers to adjust this machine's clock
restrict 210.72.145.44 nomodify notrap noquery
restrict 202.112.10.36 nomodify notrap noquery
restrict 59.124.196.83 nomodify notrap noquery

# when the external time servers are unreachable, serve local time instead
server 127.0.0.1 # local clock
fudge 127.0.0.1 stratum 10

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor

Restart the NTP service

service ntpd restart

Then check the ntpd status

[root@hd1 ahadoop]# service ntpd status
ntpd dead but pid file exists

This error turns out to come from a limitation of ntpd: it only synchronizes the clock when the offset from the NTP server is within 1000 s. The node's clock was off from the actual time by more than 1000 s, so the operating system time first had to be set manually to within 1000 s of the NTP server before synchronization could work.

# if the operating system's time zone is wrong, fix it first (Asia/Shanghai)
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# set the date and the time
date -s 20170703
date -s 15:32:00

There is another small trick: right after installing NTP, fetch the accurate time once from a public time server, which saves the manual adjustment. The command is:

ntpdate -u pool.ntp.org

[Note] If you run this time-sync operation inside a Docker container, the system reports an error:

9 Jan 05:13:57 ntpdate[7299]: step-systime: Operation not permitted

This error means the system will not let you set the time. A Docker container shares the host's kernel, and setting the system time is a kernel-level operation, so by default the time cannot be changed from inside the container.
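
If you control how the container is created, one workaround is to grant it the SYS_TIME capability at launch; a sketch, assuming a plain docker run (the image name is illustrative):

# allow settimeofday/adjtimex inside the container
docker run --cap-add=SYS_TIME -it centos:6 /bin/bash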

(3) Configure the NTP clients

On nodes hd2, hd3, and hd4, edit the /etc/ntp.conf file to configure the NTP client; the specific changes are marked with comments below.

vi /etc/ntp.conf


# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst

# synchronize time from the server node
server 172.17.0.1

restrict 172.17.0.1 nomodify notrap noquery

# if synchronization fails, use the local clock
server 127.0.0.1
fudge 127.0.0.1 stratum 10

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto
includefile /etc/ntp/crypto/pw
    
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys 
    
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor

Restart the NTP service

service ntpd restart

After it starts, check how the time synchronization is doing

$ ntpq -p
$ ntpstat
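
If a client's clock is still far off, a one-shot sync against hd1 before restarting ntpd avoids the 1000 s stepping limit mentioned above:

service ntpd stop
ntpdate -u 172.17.0.1
service ntpd start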

4. Raise the ulimit

The HBase documentation recommends raising the ulimit when running HBase, to increase the number of files that can be open at the same time: at least 10,000, but preferably 10,240 ("It is recommended to raise the ulimit to at least 10,000, but more likely 10,240, because the value is usually expressed in multiples of 1024.")

Edit the /etc/security/limits.conf file and append the nofile (number of open files) and nproc (number of processes) limits at the end, as follows:

vi /etc/security/limits.conf

* soft nofile 65536
* hard nofile 65536
* soft nproc  65536
* hard nproc  65536

After the change, reboot the server for it to take effect

reboot
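
After logging back in, verify the new limits took effect (note that on CentOS 6 the nproc value may additionally be capped by /etc/security/limits.d/90-nproc.conf):

ulimit -n   # open files, should print 65536
ulimit -u   # max user processes, should print 65536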

5. Install and configure Apache HBase

The Apache HBase website provides reference configuration examples; it is worth reading them before starting the configuration.

This walkthrough uses a separately deployed ZooKeeper ensemble, shared with Hadoop; see my other blog post for the ZooKeeper setup. HBase can also run its own managed ZooKeeper, but in a production environment a standalone deployment is recommended, as it is easier to administer day to day.

(1) Download Apache HBase
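
Assuming the Apache archive mirror, the tarball can be fetched with wget:

wget http://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz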

Then unpack it

tar -zxvf hbase-1.2.6-bin.tar.gz

Configure the environment variables

vi ~/.bash_profile


export HBASE_HOME=/home/ahadoop/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin


# make the environment variables take effect
source ~/.bash_profile

(2) Copy the hdfs-site.xml configuration file

Copy $HADOOP_HOME/etc/hadoop/hdfs-site.xml into the $HBASE_HOME/conf directory so that HDFS and HBase work from the same settings; this is the approach the HBase documentation recommends. The documentation gives an example: if HDFS is configured with a replication factor of 5 (the default being 3) and the up-to-date hdfs-site.xml has not been copied into $HBASE_HOME/conf, HBase will still write 3 replicas. The two sides then disagree, which leads to errors.

cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/

(3) Configure hbase-site.xml

Edit $HBASE_HOME/conf/hbase-site.xml

<configuration>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hd1,hd2,hd3</value>
    <description>Comma-separated list of servers in the ZooKeeper quorum.
    </description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/ahadoop/zookeeper-data</value>
    <description>
    Note: this ZooKeeper data directory is shared with the Hadoop HA setup, i.e. it must match the dataDir configured in zoo.cfg.
    Property from ZooKeeper config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hd1:9000/hbase</value>
    <description>The directory shared by RegionServers.
                 The documentation stresses repeatedly that this directory must not be created in advance; HBase creates it itself, otherwise it attempts a migration and errors out.
                 As for the port, some setups use 8020 and others 9000; check $HADOOP_HOME/etc/hadoop/hdfs-site.xml. This walkthrough takes it from
                 dfs.namenode.rpc-address.hdcluster.nn1 , dfs.namenode.rpc-address.hdcluster.nn2
    </description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>Set to true for a distributed cluster, false for a single-node setup.
      The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed ZooKeeper
      true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>
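
To confirm which RPC port your NameNodes actually expose (and therefore what to put in hbase.rootdir), a quick grep of the HDFS configuration helps:

# look for the dfs.namenode.rpc-address.* values
grep -A1 'rpc-address' $HADOOP_HOME/etc/hadoop/hdfs-site.xml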

(4) Configure the regionservers file

Edit the $HBASE_HOME/conf/regionservers file and list the hostnames that will run a RegionServer

hd2
hd3
hd4

(5) Configure the backup-masters file (standby master nodes)

HBase supports running multiple master nodes, so there is no single point of failure, but only one master is active at a time (the active master); the others are backup masters. Edit the $HBASE_HOME/conf/backup-masters file and list the hostnames of the backup master nodes

hd2

(6) Configure the hbase-env.sh file

Edit $HBASE_HOME/conf/hbase-env.sh to set up the daemons' environment. Since this walkthrough uses a separately deployed ZooKeeper, set HBASE_MANAGES_ZK to false

export HBASE_MANAGES_ZK=false
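
It is also common to set JAVA_HOME explicitly in the same file, so the daemons do not depend on the login shell's environment (path as used earlier in this walkthrough):

export JAVA_HOME=/usr/java/jdk1.8.0_131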

With that, the HBase configuration is complete.

6. Start Apache HBase

The whole cluster can be started with $HBASE_HOME/bin/start-hbase.sh, but to use that command the cluster nodes must have passwordless SSH login set up, so the script can reach each node and start its services.
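
Passwordless SSH from hd1 to the other nodes can be set up like this (a sketch, assuming the ahadoop user exists on every node):

# generate a key pair without a passphrase, then push the public key out
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in hd2 hd3 hd4; do ssh-copy-id ahadoop@$host; done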

To get a deeper look at the HBase startup process, this walkthrough starts the daemons on each node one by one. Reading the start-hbase.sh script shows the startup order:

if [ "$distMode" == 'false' ] 
then
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master [email protected]
else
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" $commandToRun zookeeper
  "$bin"/hbase-daemon.sh --config "${HBASE_CONF_DIR}" $commandToRun master 
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_REGIONSERVERS}" $commandToRun regionserver
  "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backup
fi

That is, it uses hbase-daemon.sh to start zookeeper, master, regionserver, and master-backup in turn

So we start the daemons on each node in the same order

Before starting HBase, Hadoop must be started first, so that HBase can initialize and read the data it stores on HDFS

(1) Start ZooKeeper (nodes hd1, hd2, hd3)

zkServer.sh start &

(2) Start the Hadoop distributed cluster (for the cluster configuration and node plan, see my other blog post)

# start the journalnodes (hd1, hd2, hd3)
hdfs journalnode &

# start the active namenode (hd1)
hdfs namenode &

# start the standby namenode (hd2)
hdfs namenode &

# start the ZKFailoverController (hd1, hd2)
hdfs zkfc &

# start the datanodes (hd2, hd3, hd4)
hdfs datanode &
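
Before moving on to HBase, it is worth confirming that HDFS HA came up as expected (assuming the nn1/nn2 NameNode IDs from hdfs-site.xml):

hdfs haadmin -getServiceState nn1   # expect: active
hdfs haadmin -getServiceState nn2   # expect: standby
hadoop fs -ls /                     # basic HDFS sanity check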

(3) Start the HBase master (hd1)

hbase-daemon.sh start master &

(4) Start the HBase regionservers (hd2, hd3, hd4)

hbase-daemon.sh start regionserver &

(5) Start the HBase backup master (hd2)

hbase-daemon.sh start master --backup &

Oddly, $HBASE_HOME/bin/start-hbase.sh starts the backup masters with the following command:

"$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
    --hosts "${HBASE_BACKUP_MASTERS}" $commandToRun master-backup

But running that command by hand fails with an error saying the class master-backup cannot be loaded

[ahadoop@1620d6ed305d ~]$ hbase-daemon.sh start master-backup &
[5] 1113
[ahadoop@1620d6ed305d ~]$ starting master-backup, logging to /home/ahadoop/hbase-1.2.6/logs/hbase-ahadoop-master-backup-1620d6ed305d.out
Error: Could not find or load main class master-backup

After some research, the following command was used instead to start the backup master:

hbase-daemon.sh start master --backup &

With the steps above, the HBase cluster has been started successfully. You can run the jps command on each node to check HBase's processes.
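
For example, based on the node plan above, jps on hd2 should show roughly the following daemons (PIDs omitted; the backup master also appears as HMaster):

jps
# expected on hd2: NameNode, DataNode, JournalNode, DFSZKFailoverController,
# QuorumPeerMain, HRegionServer, HMaster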

After startup, look again at the /hbase directory in HDFS and in ZooKeeper: both have been initialized and populated with the corresponding files, as shown below

[ahadoop@ee8319514df6 ~]$ hadoop fs -ls /hbase
17/07/02 13:14:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 7 items
drwxr-xr-x   - ahadoop supergroup          0 2017-07-02 12:55 /hbase/.tmp
drwxr-xr-x   - ahadoop supergroup          0 2017-07-02 12:55 /hbase/MasterProcWALs
drwxr-xr-x   - ahadoop supergroup          0 2017-07-02 13:03 /hbase/WALs
drwxr-xr-x   - ahadoop supergroup          0 2017-07-02 12:55 /hbase/data
-rw-r--r--   3 ahadoop supergroup         42 2017-07-02 12:55 /hbase/hbase.id
-rw-r--r--   3 ahadoop supergroup          7 2017-07-02 12:55 /hbase/hbase.version
drwxr-xr-x   - ahadoop supergroup          0 2017-07-02 12:55 /hbase/oldWALs
[ahadoop@31d48048cb1e ~]$ zkCli.sh -server hd1:2181
Connecting to hd1:2181
2017-07-05 11:31:44,663 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-07-05 11:31:44,667 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=31d48048cb1e
2017-07-05 11:31:44,668 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
2017-07-05 11:31:44,672 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-07-05 11:31:44,673 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_131/jre
2017-07-05 11:31:44,674 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/home/ahadoop/zookeeper-3.4.10/bin/../build/classes:/home/ahadoop/zookeeper-3.4.10/bin/../build/lib/*.jar:/home/ahadoop/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/ahadoop/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/home/ahadoop/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/home/ahadoop/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/home/ahadoop/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/home/ahadoop/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/home/ahadoop/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/home/ahadoop/zookeeper-3.4.10/bin/../conf:.:/usr/java/jdk1.8.0_131/lib:/usr/java/jdk1.8.0_131/lib/dt.jar:/usr/java/jdk1.8.0_131/lib/tools.jar:/home/ahadoop/apache-ant-1.10.1/lib
2017-07-05 11:31:44,674 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-07-05 11:31:44,675 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-07-05 11:31:44,675 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2017-07-05 11:31:44,678 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2017-07-05 11:31:44,679 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2017-07-05 11:31:44,679 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.10.105-1.el6.elrepo.x86_64
2017-07-05 11:31:44,680 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=ahadoop
2017-07-05 11:31:44,680 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/home/ahadoop
2017-07-05 11:31:44,681 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/ahadoop
2017-07-05 11:31:44,686 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=hd1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@799f7e29
Welcome to ZooKeeper!
2017-07-05 11:31:44,724 [myid:] - INFO  [main-SendThread(31d48048cb1e:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server 31d48048cb1e/172.17.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2017-07-05 11:31:44,884 [myid:] - INFO  [main-SendThread(31d48048cb1e:2181):ClientCnxn$SendThread@876] - Socket connection established to 31d48048cb1e/172.17.0.1:2181, initiating session
[zk: hd1:2181(CONNECTED) 0] 2017-07-05 11:31:44,912 [myid:] - INFO  [main-SendThread(31d48048cb1e:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server 31d48048cb1e/172.17.0.1:2181, sessionid = 0x15d10c18fc70002, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: hd1:2181(CONNECTED) 1] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, region-in-transition, online-snapshot, running, recovering-regions, draining, hbaseid, table]

7. Trying out HBase

Use hbase shell to enter HBase's interactive command-line interface and try it out

hbase shell

(1) Check the cluster status and number of nodes

hbase(main):001:0> status
1 active master, 1 backup masters, 4 servers, 0 dead, 0.5000 average load

(2) Create a table

hbase(main):002:0> create 'testtable','c1','c2'
0 row(s) in 1.4850 seconds

=> Hbase::Table - testtable

The arguments to create are the table name followed by one or more column family names (c1 and c2 above are column families, not plain columns).
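
Each column family can also be given as a dictionary of options instead of a bare name, e.g. to keep three versions of every cell (standard HBase shell syntax; the table name here is illustrative):

create 'testtable2', {NAME => 'c1', VERSIONS => 3}, 'c2'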

(3) List the table

hbase(main):003:0> list 'testtable'
TABLE                                                                                                                                                                                          
testtable                                                                                                                                                                                      
1 row(s) in 0.0400 seconds

=> ["testtable"]

(4) Insert data

hbase(main):004:0> put 'testtable','row1','c1','row1_c1_value'
0 row(s) in 0.2230 seconds

hbase(main):005:0> put 'testtable','row2','c2:s1','row1_c2_s1_value'
0 row(s) in 0.0310 seconds

hbase(main):006:0> put 'testtable','row2','c2:s2','row1_c2_s2_value'
0 row(s) in 0.0170 seconds

The put command takes the table name, a row key, a column (written as family:qualifier — the colon separates the column family from a qualifier nested under it), and the cell value

(5) Scan the whole table

hbase(main):007:0> scan 'testtable'
ROW                                              COLUMN+CELL                                                                                                                                   
 row1                                            column=c1:, timestamp=1499225862922, value=row1_c1_value                                                                                      
 row2                                            column=c2:s1, timestamp=1499225869471, value=row1_c2_s1_value                                                                                 
 row2                                            column=c2:s2, timestamp=1499225870375, value=row1_c2_s2_value                                                                                 
2 row(s) in 0.0820 seconds

(6) Query data by row key

hbase(main):008:0> get 'testtable','row1'
COLUMN                                           CELL                                                                                                                                          
 c1:                                             timestamp=1499225862922, value=row1_c1_value                                                                                                  
1 row(s) in 0.0560 seconds

hbase(main):009:0> get 'testtable','row2'
COLUMN                                           CELL                                                                                                                                          
 c2:s1                                           timestamp=1499225869471, value=row1_c2_s1_value                                                                                               
 c2:s2                                           timestamp=1499225870375, value=row1_c2_s2_value                                                                                               
2 row(s) in 0.0350 seconds

(7) Disable a table

The disable command takes a table offline. A disabled table cannot be used; a full-table scan, for example, fails with an error, as shown below

hbase(main):010:0> disable 'testtable'
0 row(s) in 2.3090 seconds

hbase(main):011:0> scan 'testtable'
ROW                                              COLUMN+CELL                                                                                                                                   

ERROR: testtable is disabled.

Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications.  Scanner specific
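
To make the table usable again, enable it; a disabled table can also be dropped permanently:

enable 'testtable'

# or remove it entirely:
disable 'testtable'
drop 'testtable'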