
[Hands-on] Deploying a Ceph Cluster (version 10.2.2) on CentOS 7

1 Introduction

A Ceph deployment consists mainly of the following types of nodes:

- Ceph OSDs: a Ceph OSD daemon stores data and handles data replication, recovery, backfilling and rebalancing, and supplies monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the Ceph Storage Cluster keeps two copies of the data, at least two Ceph OSD daemons must be in the active+clean state (this text assumes two copies by default; the value is adjustable).

- Monitors: a Ceph Monitor maintains the cluster maps, including the monitor map, the OSD map, the Placement Group (PG) map and the CRUSH map. Ceph also keeps a history (called an "epoch") of every state change of the Ceph Monitors, Ceph OSD daemons and PGs.

- MDSs: a Ceph Metadata Server (MDS) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers make it possible for users to run basic POSIX file-system commands such as ls and find.


2 Cluster planning

Before creating the cluster, plan it out first.

2.1 Deployment environment

The ceph cluster is deployed on VMware virtual machines.

2.2 Node planning

Node         Network    IP                Roles
node0        HostOnly   192.168.92.100    admin, osd (sdb)
node1        HostOnly   192.168.92.101    osd (sdb), mon
node2        HostOnly   192.168.92.102    osd (sdb), mon, mds
node3        HostOnly   192.168.92.103    osd (sdb), mon, mds
client-node  HostOnly   192.168.92.109    client; mounts the storage exported by the ceph cluster for testing

3 Preparation

3.1 Create a ceph user

Create a ceph user on each of the nodes above (as root, or with root privileges):

Create the user:

[root@node0 ~]# adduser -d /home/ceph -m ceph

Set its password:

[root@node0 ~]# passwd ceph

Grant sudo privileges:

[root@node0 ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
[root@node0 ~]# chmod 0440 /etc/sudoers.d/ceph
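The same three steps have to be repeated on node1-node3. A minimal sketch that does this from node0 (assuming root SSH access to each node; the password here is a placeholder to change):

for n in node1 node2 node3; do
    ssh root@$n '
        adduser -d /home/ceph -m ceph
        echo "ceph:CHANGE_ME" | chpasswd
        echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
        chmod 0440 /etc/sudoers.d/ceph
    '
done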

3.2 Edit /etc/hosts

[ceph@node0 cluster]$ sudo cat /etc/hosts
[sudo] password for ceph:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.92.100 node0
192.168.92.101 node1
192.168.92.102 node2
192.168.92.103 node3
[ceph@node0 cluster]$
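All nodes need the same name resolution, so the file can be pushed out from the admin node (a sketch, again assuming root SSH access to the other nodes):

for n in node1 node2 node3; do
    scp /etc/hosts root@$n:/etc/hosts
done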

3.3 Adjust sudoers

Run visudo and modify the sudoers file:

1. Comment out Defaults requiretty

Change Defaults requiretty to #Defaults requiretty, so that no controlling terminal is required.

Otherwise you will get: sudo: sorry, you must have a tty to run sudo

2. Add the line Defaults visiblepw

Otherwise you will get: sudo: no tty present and no askpass program specified
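Applying these two changes by hand on four nodes is tedious; they can also be scripted. A hedged sketch (editing /etc/sudoers directly is risky, hence the visudo -c check at the end):

sudo sed -i 's/^Defaults[[:space:]]*requiretty/#Defaults requiretty/' /etc/sudoers
grep -q '^Defaults[[:space:]]*visiblepw' /etc/sudoers \
    || echo 'Defaults visiblepw' | sudo tee -a /etc/sudoers
sudo visudo -c    # verify the file still parses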

If the ceph-deploy new <node-hostname> step still fails, the error looks like this:

[ceph@node0 cluster]$ ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy new node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0xee0b18>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xef9a28>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: node0
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure ``requiretty`` is disabled for node3
[ceph@node0 cluster]$

3.4 Set up passwordless SSH

Configure passwordless SSH from the admin node to the other nodes.

Step 1: On the admin node, run:

ssh-keygen

Note: to keep things simple, just press Enter at every prompt.

Step 2: Copy the key from step 1 to the other nodes:

ssh-copy-id ceph@node0

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3

Also add the following to the ~/.ssh/config file:

[ceph@node0 cluster]$ cat ~/.ssh/config
Host node0
    Hostname node0
    User ceph
Host node1
    Hostname node1
    User ceph
Host node2
    Hostname node2
    User ceph
Host node3
    Hostname node3
    User ceph
[ceph@node0 cluster]$

3.5 Disable the firewall

[root@node0 ceph]# systemctl stop firewalld.service
[root@node0 ceph]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@node0 ceph]#

3.6 Disable SELinux

Disable it for the current session:

setenforce 0

Disable it permanently:

[root@node0 ceph]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
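The permanent change can also be made with one sed line instead of hand-editing; a sketch:

sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo setenforce 0    # effective immediately; the file change applies after reboot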


3.6.1 Fixing "Bad owner or permissions on .ssh/config"

Error message:

Bad owner or permissions on /home/ceph/.ssh/config fatal: The remote end hung up unexpectedly

Fix:

$ sudo chmod 600 ~/.ssh/config

4 Install ceph-deploy

Step 1: Configure the Ceph yum repository:

sudo vim /etc/yum.repos.d/ceph.repo

Add the following:

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
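A quick sanity check that yum actually picks up the new repository:

sudo yum clean all && sudo yum makecache
yum repolist | grep -i ceph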

Step 2: Update the package index and install ceph-deploy and the time-synchronization packages:

[ceph@node0 cluster]$ sudo yum update && sudo yum install ceph-deploy
[ceph@node0 cluster]$ sudo yum install ntp ntpdate ntp-doc
See http://blog.csdn.net/younger_china/article/details/73656331 for details.
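Monitors are sensitive to clock skew, so it helps to start and enable the time service on every node; a sketch using the ntp package just installed:

sudo systemctl enable ntpd.service
sudo systemctl start ntpd.service
ntpq -p    # confirm that time peers are reachable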

Step 3: Disable the firewall and security options on all nodes (run on every node), and install a couple of helper packages:

[ceph@node0 cluster]$ sudo systemctl stop firewalld.service
[ceph@node0 cluster]$ sudo systemctl disable firewalld.service

[ceph@node0 cluster]$ sudo yum install yum-plugin-priorities

Summary: with the steps above, all prerequisites are in place; the actual ceph deployment comes next.

5 Deploy the Ceph cluster

As the ceph user created earlier, create a working directory on the admin node:

[ceph@node0 ~]$ mkdir cluster
[ceph@node0 ~]$ cd cluster

5.1 Clean up old Ceph data

First clear out any previous ceph data. On a fresh install this step can be skipped; when redeploying, run the following:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

ceph-deploy forgetkeys

For example:

[ceph@node0 cluster]$ ceph-deploy purgedata node0 node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy purgedata node0 node1 node2 node3
…
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[node3][INFO ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
[ceph@node0 cluster]$ ceph-deploy forgetkeys
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/bin/ceph-deploy forgetkeys
…
[ceph_deploy.cli][INFO ] default_release : False
[ceph@node0 cluster]$

5.2 Create the cluster and set up the monitor nodes

On the admin node, create the cluster with ceph-deploy. The arguments after new are the hostnames of the monitor nodes; if there are several monitors, separate their hostnames with spaces. Multiple mon nodes back each other up.

[ceph@node0 cluster]$ ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy new node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x29f2b18>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2a15a70>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node0
[node1][INFO ] Running command: ssh -CT -o BatchMode=yes node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo /usr/sbin/ip link show
[node1][INFO ] Running command: sudo /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.92.101', '192.168.1.102', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.92.101
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host: node0
[node2][INFO ] Running command: ssh -CT -o BatchMode=yes node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: sudo /usr/sbin/ip link show
[node2][INFO ] Running command: sudo /usr/sbin/ip addr show
[node2][DEBUG ] IP addresses found: ['192.168.1.103', '192.168.122.1', '192.168.92.102']
[ceph_deploy.new][DEBUG ] Resolving host node2
[ceph_deploy.new][DEBUG ] Monitor node2 at 192.168.92.102
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: node0
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: sudo /usr/sbin/ip link show
[node3][INFO ] Running command: sudo /usr/sbin/ip addr show
[node3][DEBUG ] IP addresses found: ['192.168.122.1', '192.168.1.104', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Resolving host node3
[ceph_deploy.new][DEBUG ] Monitor node3 at 192.168.92.103
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.92.101', '192.168.92.102', '192.168.92.103']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph@node0 cluster]$

List the generated files:

[ceph@node0 cluster]$ ls
ceph.conf ceph.log ceph.mon.keyring

Look at the ceph configuration file; node1, node2 and node3 are now listed as the monitor (control) nodes:

[ceph@node0 cluster]$ cat ceph.conf
[global]
fsid = 3c9892d0-398b-4808-aa20-4dc622356bd0
mon_initial_members = node1, node2, node3
mon_host = 192.168.92.101,192.168.92.102,192.168.92.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
[ceph@node0 cluster]$

5.2.1 Change the replica count

Change the default number of replicas to 2 by setting osd_pool_default_size = 2 in ceph.conf (in the [global] section). If the line does not exist, add it.

[ceph@node0 cluster]$ grep "osd_pool_default_size" ./ceph.conf
osd_pool_default_size = 2
[ceph@node0 cluster]$
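Since the ceph.conf written by ceph-deploy new contains only a [global] section, simply appending the line works; a minimal sketch:

grep -q '^osd_pool_default_size' ceph.conf \
    || echo 'osd_pool_default_size = 2' >> ceph.conf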

5.2.2 Handling multiple networks

If a node's IP is not unique, i.e. there are other networks besides the one the ceph cluster uses:

For example:

eno16777736: 192.168.175.100

eno50332184: 192.168.92.110

virbr0: 192.168.122.1

then the public_network parameter must be added to the [global] section of ceph.conf:

public_network = {ip-address}/{netmask}

For example:

public_network = 192.168.92.0/24
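With the addresses used in this cluster, the [global] section would then look roughly like this (a /24 netmask is assumed for the 192.168.92.0 host-only network):

[global]
fsid = 3c9892d0-398b-4808-aa20-4dc622356bd0
mon_initial_members = node1, node2, node3
mon_host = 192.168.92.101,192.168.92.102,192.168.92.103
public_network = 192.168.92.0/24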

5.3 Install ceph on all nodes

From the admin node, use ceph-deploy to install ceph on every node:

ceph-deploy install {ceph-node}[{ceph-node} ...]

or, from a local mirror:

ceph-deploy install {ceph-node}[{ceph-node} ...] --local-mirror=/opt/ceph-repo --no-adjust-repos --release=jewel

For example:

[ceph@node0 cluster]$ ceph-deploy install node0 node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy install node0 node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2ae0560>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x2a53668>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['node0', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node0 node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host node0 ...
[node0][DEBUG ] connection detected need for sudo
[node0][DEBUG ] connected to host: node0
[node0][DEBUG ] detect platform information from remote host
[node0][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[node0][INFO ] installing Ceph on node0
[node0][INFO ] Running command: sudo yum clean all
[node0][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[node0][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[node0][DEBUG ] Cleaning up everything
[node0][DEBUG ] Cleaning up list of fastest mirrors
[node0][INFO ] Running command: sudo yum -y install epel-release
[node0][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[node0][DEBUG ] Determining fastest mirrors
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excluded due to repository priority protections
[node0][DEBUG ] Package epel-release-7-7.noarch already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node0][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[node0][DEBUG ] Loading mirror speeds from cached hostfile
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excluded due to repository priority protections
[node0][DEBUG ] Package yum-plugin-priorities-1.1.31-34.el7.noarch already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][DEBUG ] Configure Yum priorities to include obsoletes
[node0][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node0][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node0][INFO ] Running command: sudo rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node0][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node0][DEBUG ] Preparing... ########################################
[node0][DEBUG ] Updating / installing...
[node0][DEBUG ] ceph-release-1-1.el7 ########################################
[node0][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node0][WARNIN] altered ceph.repo priorities to contain: priority=1
[node0][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node0][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[node0][DEBUG ] Loading mirror speeds from cached hostfile
[node0][DEBUG ] * epel: mirror01.idc.hinet.net
[node0][DEBUG ] 25 packages excluded due to repository priority protections
[node0][DEBUG ] Package 1:ceph-10.2.2-0.el7.x86_64 already installed and latest version
[node0][DEBUG ] Package 1:ceph-radosgw-10.2.2-0.el7.x86_64 already installed and latest version
[node0][DEBUG ] Nothing to do
[node0][INFO ] Running command: sudo ceph --version
[node0][DEBUG ] ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
….

5.4 Initialize the monitors and gather keyrings

Initialize the monitor nodes and collect the keyrings:

[ceph@node0 cluster]$ ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbe46804cb0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fbe467f6aa0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitor keyring file
[node1][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring --setuser 1001 --setgroup 1001
[node1][DEBUG ] ceph-mon: mon.noname-a 192.168.92.101:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][INFO ] Running command: sudo systemctl enable ceph.target
[node1][INFO ] Running command: sudo systemctl enable ceph-mon@node1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node1.service to /usr/lib/systemd/system/ceph-mon@.service.
[node1][INFO ] Running command: sudo systemctl start ceph-mon@node1
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ] "election_epoch": 0,
[node1][DEBUG ] "extra_probe_peers": [
[node1][DEBUG ] "192.168.92.102:6789/0",
[node1][DEBUG ] "192.168.92.103:6789/0"
[node1][DEBUG ] ],
[node1][DEBUG ] "monmap": {
[node1][DEBUG ] "created": "2016-06-24 14:43:29.944474",
[node1][DEBUG ] "epoch": 0,
[node1][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node1][DEBUG ] "modified": "2016-06-24 14:43:29.944474",
[node1][DEBUG ] "mons": [
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "192.168.92.101:6789/0",
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "rank": 0
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/1",
[node1][DEBUG ] "name": "node2",
[node1][DEBUG ] "rank": 1
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/2",
[node1][DEBUG ] "name": "node3",
[node1][DEBUG ] "rank": 2
[node1][DEBUG ] }
[node1][DEBUG ] ]
[node1][DEBUG ] },
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "outside_quorum": [
[node1][DEBUG ] "node1"
[node1][DEBUG ] ],
[node1][DEBUG ] "quorum": [],
[node1][DEBUG ] "rank": 0,
[node1][DEBUG ] "state": "probing",
[node1][DEBUG ] "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO ] monitor: mon.node1 is running
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] remote hostname: node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] create the mon path if it does not exist
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create the monitor keyring file
[node2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring --setuser 1001 --setgroup 1001
[node2][DEBUG ] ceph-mon: mon.noname-b 192.168.92.102:6789/0 is local, renaming to mon.node2
[node2][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[node2][DEBUG ] create the init path if it does not exist
[node2][INFO ] Running command: sudo systemctl enable ceph.target
[node2][INFO ] Running command: sudo systemctl enable ceph-mon@node2
[node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node2.service to /usr/lib/systemd/system/ceph-mon@.service.
[node2][INFO ] Running command: sudo systemctl start ceph-mon@node2
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ] "election_epoch": 1,
[node2][DEBUG ] "extra_probe_peers": [
[node2][DEBUG ] "192.168.92.101:6789/0",
[node2][DEBUG ] "192.168.92.103:6789/0"
[node2][DEBUG ] ],
[node2][DEBUG ] "monmap": {
[node2][DEBUG ] "created": "2016-06-24 14:43:34.865908",
[node2][DEBUG ] "epoch": 0,
[node2][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node2][DEBUG ] "modified": "2016-06-24 14:43:34.865908",
[node2][DEBUG ] "mons": [
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.92.101:6789/0",
[node2][DEBUG ] "name": "node1",
[node2][DEBUG ] "rank": 0
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.92.102:6789/0",
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "rank": 1
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "0.0.0.0:0/2",
[node2][DEBUG ] "name": "node3",
[node2][DEBUG ] "rank": 2
[node2][DEBUG ] }
[node2][DEBUG ] ]
[node2][DEBUG ] },
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "outside_quorum": [],
[node2][DEBUG ] "quorum": [],
[node2][DEBUG ] "rank": 1,
[node2][DEBUG ] "state": "electing",
[node2][DEBUG ] "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO ] monitor: mon.node2 is running
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node3 ...
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[node3][DEBUG ] determining if provided host has same hostname in remote
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] deploying mon to node3
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] remote hostname: node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node3][DEBUG ] create the mon path if it does not exist
[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done
[node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node3/done
[node3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create the monitor keyring file
[node3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node3 --keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring --setuser 1001 --setgroup 1001
[node3][DEBUG ] ceph-mon: mon.noname-c 192.168.92.103:6789/0 is local, renaming to mon.node3
[node3][DEBUG ] ceph-mon: set fsid to 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
[node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node3 for mon.node3
[node3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[node3][DEBUG ] create the init path if it does not exist
[node3][INFO ] Running command: sudo systemctl enable ceph.target
[node3][INFO ] Running command: sudo systemctl enable ceph-mon@node3
[node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node3.service to /usr/lib/systemd/system/ceph-mon@.service.
[node3][INFO ] Running command: sudo systemctl start ceph-mon@node3
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[node3][DEBUG ] ********************************************************************************
[node3][DEBUG ] status for monitor: mon.node3
[node3][DEBUG ] {
[node3][DEBUG ] "election_epoch": 1,
[node3][DEBUG ] "extra_probe_peers": [
[node3][DEBUG ] "192.168.92.101:6789/0",
[node3][DEBUG ] "192.168.92.102:6789/0"
[node3][DEBUG ] ],
[node3][DEBUG ] "monmap": {
[node3][DEBUG ] "created": "2016-06-24 14:43:39.800046",
[node3][DEBUG ] "epoch": 0,
[node3][DEBUG ] "fsid": "4f8f6c46-9f67-4475-9cb5-52cafecb3e4c",
[node3][DEBUG ] "modified": "2016-06-24 14:43:39.800046",
[node3][DEBUG ] "mons": [
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.92.101:6789/0",
[node3][DEBUG ] "name": "node1",
[node3][DEBUG ] "rank": 0
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.92.102:6789/0",
[node3][DEBUG ] "name": "node2",
[node3][DEBUG ] "rank": 1
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.92.103:6789/0",
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "rank": 2
[node3][DEBUG ] }
[node3][DEBUG ] ]
[node3][DEBUG ] },
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "outside_quorum": [],
[node3][DEBUG ] "quorum": [],
[node3][DEBUG ] "rank": 2,
[node3][DEBUG ] "state": "electing",
[node3][DEBUG ] "sync_provider": []
[node3][DEBUG ] }
[node3][DEBUG ] ********************************************************************************
[node3][INFO ] monitor: mon.node3 is running
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][WARNIN] mon.node1 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node2
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node3
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmp5_jcSr
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] fetch remote file
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.node1.asok mon_status
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.admin osd allow * mds allow * mon allow *
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-node1/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp5_jcSr
[ceph@node0 cluster]$

List the generated files:

[ceph@node0 cluster]$ ls
ceph.bootstrap-mds.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
[ceph@node0 cluster]$
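The upstream quick start normally follows gatherkeys by pushing the configuration and the admin keyring to every node, so that ceph -s can be run anywhere. A sketch of that optional step:

[ceph@node0 cluster]$ ceph-deploy admin node0 node1 node2 node3
# then, on each node, make the keyring readable:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring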

6 OSD management

6.1 Prepare OSDs

Command:

ceph-deploy osd prepare {ceph-node}:/path/to/directory
ceph-deploy osd prepare {ceph-node}:{disk}[:{journal}]

Example (using a whole disk):

[ceph@node0 cluster]$ ceph-deploy osd prepare node1:/dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node1', '/dev/sdb', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x10b4d88>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x10a9398>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb:
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb journal None activate False
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdb
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] set_type: Will colocate journal with data on /dev/sdb
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = journal
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] create_partition: Creating journal partition num 2 size 5120 on /dev/sdb
[node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --new=2:0:+5120M --change-name=2:ceph journal --partition-guid=2:75a991e0-d8e1-414f-825d-1635edc8fbe5 --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdb
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb2 uuid path is /sys/dev/block/8:18/dm/uuid
[node1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/75a991e0-d8e1-414f-825d-1635edc8fbe5
[node1][WARNIN] prepare_device: Journal is GPT partition /dev/disk/by-partuuid/75a991e0-d8e1-414f-825d-1635edc8fbe5
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] set_data_partition: Creating osd partition on /dev/sdb
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = data
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdb
[node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:25ae7735-d1ca-45d5-9d98-63567b424248 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdb
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
[node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdb1
[node1][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdb1
[node1][DEBUG ] meta-data=/dev/sdb1 isize=2048 agcount=4, agsize=982975 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=0 finobt=0
[node1][DEBUG ] data = bsize=4096 blocks=3931899, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=0
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.6_6ywP with options noatime,inode64
[node1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.6_6ywP/ceph_fsid.14578.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.6_6ywP/ceph_fsid.14578.tmp
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.6_6ywP/fsid.14578.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.6_6ywP/fsid.14578.tmp
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.6_6ywP/magic.14578.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.6_6ywP/magic.14578.tmp
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.6_6ywP/journal_uuid.14578.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.6_6ywP/journal_uuid.14578.tmp
[node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.6_6ywP/journal -> /dev/disk/by-partuuid/75a991e0-d8e1-414f-825d-1635edc8fbe5
[node1][WARNIN] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.6_6ywP
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
[node1][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /sbin/partprobe /dev/sdb
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match sdb1
[node1][INFO ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
[ceph@node0 cluster]$
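Per the plan in section 2.2, every node carries an OSD on sdb, so the remaining disks can be prepared the same way. A sketch, assuming each node really has an empty /dev/sdb:

for n in node0 node2 node3; do
    ceph-deploy osd prepare $n:/dev/sdb
done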


6.2 Activate OSDs

Command:

ceph-deploy osd activate {ceph-node}:/path/to/directory

For an OSD prepared on a whole disk, as above, activate its data partition instead, e.g. node1:/dev/sdb1. Example with directory-backed OSDs:

[ceph@node0 cluster]$ ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd0

Check the cluster status:

[ceph@node0 ~]$ ceph -s
cluster 4f8f6c46-9f67-4475-9cb5-52cafecb3e4c
health HEALTH_WARN
64 pgs degraded
64 pgs stuck unclean
64 pgs undersized
mon.node2 low disk space
mon.node3 low disk space
monmap e1: 3 mons at {node1=192.168.92.101:6789/0,node2=192.168.92.102:6789/0,node3=192.168.92.103:6789/0}
election epoch 18, quorum 0,1,2 node1,node2,node3
osdmap e12: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v173: 64 pgs, 1 pools, 0 bytes data, 0 objects
20254 MB used, 22120 MB / 42374 MB avail
64 active+undersized+degraded
[ceph@node0 ~]$

6.3 Remove an OSD

To take an OSD out of the cluster by its numeric ID, a small helper script can be used:

[ceph@node0 cluster]$ cat rmosd.sh
#!/bin/bash
###############################################################################
# Author      : Younger Liu
# File Name   : rmosd.sh
# Description : remove osd.<id> from the cluster
###############################################################################
if [ $# != 1 ]; then
    echo "Usage: $0 <osd-id>";
    exit 1;
fi
ID=${1}
sudo systemctl stop ceph-osd@${ID}
ceph osd crush remove osd.${ID}
ceph osd down ${ID}
ceph auth del osd.${ID}
ceph osd rm ${ID}
[ceph@node0 cluster]$
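Usage, for example, to remove osd.2:

[ceph@node0 cluster]$ sh rmosd.sh 2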

6.4 View the OSD tree

[ceph@node0 cluster]$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05835 root default
-2 0.01459 host node1
1 0.01459 osd.1 up 1.00000 1.00000
-3 0.01459 host node3
3 0.01459 osd.3 up 1.00000 1.00000
-4 0.01459 host node0
0 0.01459 osd.0 up 1.00000 1.00000
-5 0.01459 host node2
2 0.01459 osd.2 up 1.00000 1.00000
[ceph@node0 cluster]$


7 Enable the ceph services at boot

Run the following on every node:

[ceph@node0 cluster]$ sudo systemctl enable ceph-mon.target
[ceph@node0 cluster]$ sudo systemctl enable ceph-osd.target
[ceph@node0 cluster]$ sudo systemctl enable ceph.target

Or, conveniently, from the admin node:

[ceph@node0 cluster]$ ssh node0 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target"
[ceph@node0 cluster]$ ssh node1 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target"
[ceph@node0 cluster]$ ssh node2 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target"
[ceph@node0 cluster]$ ssh node3 "sudo systemctl enable ceph-mon.target;sudo systemctl enable ceph-osd.target;sudo systemctl enable ceph.target"
[ceph@node0 cluster]$
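The four ssh invocations can be collapsed into a loop; systemctl enable accepts several units at once:

for n in node0 node1 node2 node3; do
    ssh $n "sudo systemctl enable ceph-mon.target ceph-osd.target ceph.target"
done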

Author: Younger Liu

Original post: http://blog.csdn.net/younger_china/article/details/51823571
