
HBase Shell Basic Operations


1. Start HBase
$ cd <hbase_home>/bin
$ ./start-hbase.sh
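
In standalone mode this launches a single JVM; you can verify it with the JDK's jps tool (expect an HMaster process, plus HRegionServer processes in a distributed setup):

$ jps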

2. Start the HBase shell

# find hadoop-hbase dfs files
hadoop fs -ls /hbase

# start the shell
hbase shell

# run a command to verify that the cluster is actually running
list
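
A few other smoke-test commands are available once the shell is up (output format varies by release):

status   # brief cluster summary: number of servers, regions, average load
version  # HBase version string
whoami   # the user the shell is running as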


3. Log configuration
Change the default log location by editing <hbase_home>/conf/hbase-env.sh:
export HBASE_LOG_DIR=/new/location/logs
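
The change only takes effect after a restart:

$ ./stop-hbase.sh
$ ./start-hbase.sh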

4. Web management

HBase comes with a web-based management console:
– http://localhost:60010

5. Ports

Both the Master and the Region servers run a web server
– Browsing the Master UI leads you to the region servers
– Region servers listen on port 60030
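
On a headless machine you can still probe both UIs from the command line; the status-page paths below match older (0.9x) releases and are an assumption for yours:

$ curl -s http://localhost:60010/master-status | head
$ curl -s http://localhost:60030/rs-status | head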

6. Basic commands

Quote all names
– Table and column names
– Single quotes for text
• hbase> get 't1', 'myRowId'

– Double quotes for binary
• Use the hexadecimal representation of the binary value
• hbase> get 't1', "key\x03\x3f\xcd"
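
The distinction exists because the shell is JRuby underneath: single-quoted Ruby strings keep backslashes literally, while double-quoted strings interpret escape sequences such as \x03. So (illustrative rowkeys):

hbase> get 't1', 'key\x03'    # rowkey is the literal 7 characters k,e,y,\,x,0,3
hbase> get 't1', "key\x03"    # rowkey is 'key' followed by the byte 0x03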

Display the cluster's status via the status command
– hbase> status
– hbase> status 'detailed'
• Similar information can be found on the HBase Web Management Console
– http://localhost:60010

7. Create a table

Create a table called 'Blog' with the following schema:
– 2 column families:
– 'info' with 3 columns: 'title', 'author', and 'date'
– 'content' with 1 column: 'post'
First create the table with its column families:
create 'Blog', {NAME=>'info'}, {NAME=>'content'}
Then insert data. Note that HBase is a rowkey-oriented column store: you can add one or more columns at a time, but every put must specify a rowkey.
Use the put command:
hbase> put 'table', 'row_id', 'family:column', 'value'
Examples:
put 'Blog', 'Michelle-001', 'info:title', 'Michelle'
put 'Blog', 'Matt-001', 'info:author', 'Matt123'
put 'Blog', 'Matt-001', 'info:date', '2009.05.01'
put 'Blog', 'Matt-001', 'content:post', 'here is content'

Columns can be extended arbitrarily at any time, for example:

put 'Blog', 'Matt-001', 'content:news', 'news is new column'
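
To double-check the schema and the inserted rows (a quick verification step, not part of the original example):

describe 'Blog'   # shows both column families and their settings
scan 'Blog'       # dumps every cell in the table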

8. Read data by rowkey

# count rows in the table
count 'Blog'
count 'Blog', {INTERVAL=>2}

# get a row
get 'table', 'row_id'
get 'Blog', 'Matt-001'
get 'Blog', 'Matt-001', {COLUMN=>['info:author','content:post']}
# restrict by timestamp
get 'Blog', 'Michelle-004', {COLUMN=>['info:author','content:post'], TIMESTAMP=>1326061625690}
# versions
get 'Blog', 'Matt-001', {VERSIONS=>1}
get 'Blog', 'Matt-001', {COLUMN=>'info:date', VERSIONS=>1}
get 'Blog', 'Matt-001', {COLUMN=>'info:date', VERSIONS=>2}
get 'Blog', 'Matt-001', {COLUMN=>'info:date'}
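
get also accepts a TIMERANGE of [min, max) timestamps, useful for "everything written before/after a point in time"; the bounds here are illustrative:

get 'Blog', 'Matt-001', {COLUMN=>'info:date', TIMERANGE=>[0, 1326061625690]}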


9. Read data over a range with scan. Note that scan results are returned in rowkey order (lexicographic, binary-sorted).
Limit which columns are retrieved
– hbase> scan 'table', {COLUMNS=>['col1', 'col2']}
• Scan a time range
– hbase> scan 'table', {TIMERANGE => [1303, 13036]}
• Limit results with a filter
– hbase> scan 'Blog', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
– More about filters later

scan 'Blog', {COLUMNS=>'info:title'}
Starts at 'John' (inclusive) and stops at 'Matt' (exclusive):
scan 'Blog', {COLUMNS=>'info:title', STARTROW=>'John', STOPROW=>'Matt'}
scan 'Blog', {COLUMNS=>'info:title', STOPROW=>'Matt'}
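
scan also takes a LIMIT option to cap the number of rows returned; it combines with the options above:

scan 'Blog', {COLUMNS=>'info:title', LIMIT=>2}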


10. Versions
put 'Blog', 'Michelle-004', 'info:date', '1990.07.06'
put 'Blog', 'Michelle-004', 'info:date', '1990.07.07'
put 'Blog', 'Michelle-004', 'info:date', '1990.07.08'
put 'Blog', 'Michelle-004', 'info:date', '1990.07.09'
get 'Blog', 'Michelle-004', {COLUMN=>'info:date', VERSIONS=>3}
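
The get above can only return as many versions as the column family retains (3 by default in older releases, 1 in newer ones). To retain more, raise the per-family setting with alter; on older releases the table may need to be disabled first:

disable 'Blog'
alter 'Blog', {NAME=>'info', VERSIONS=>5}
enable 'Blog'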


11. Delete records
delete 'Blog', 'Bob-003', 'info:date'
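
delete removes a single cell, optionally at a specific timestamp; to remove an entire row use deleteall (the timestamp below is illustrative):

delete 'Blog', 'Bob-003', 'info:date', 1326061625690
deleteall 'Blog', 'Bob-003'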

12. Drop a table
– Must disable the table before dropping it
– disable puts the table "offline" so that schema-based operations can be performed
– hbase> disable 'table_name'
– hbase> drop 'table_name'
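
A full teardown with a check before and after; enable is the inverse of disable if you change your mind:

exists 'Blog'
disable 'Blog'
drop 'Blog'
exists 'Blog'   # should now report that the table does not exist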
