
Learning Ceph (Part 3): Deploying Ceph Luminous


1. Configure ceph.repo and install the batch management tool ceph-deploy

[root@ceph-node1 ~]# vim /etc/yum.repos.d/ceph.repo 
[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[root@ceph-node1 ~]# yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@ceph-node1 ~]# yum makecache
[root@ceph-node1 ~]# yum update -y
[root@ceph-node1 ~]# yum install -y ceph-deploy

2. Ceph node deployment

(1) Install NTP. Install the NTP service on all Ceph nodes (especially the Ceph Monitor nodes) to avoid failures caused by clock drift.

[root@ceph-node1 ~]# yum install -y ntp ntpdate ntp-doc
[root@ceph-node2 ~]# yum install -y ntp ntpdate ntp-doc
[root@ceph-node3 ~]# yum install -y ntp ntpdate ntp-doc

[root@ceph-node1 ~]# ntpdate ntp1.aliyun.com
31 Jul 03:43:04 ntpdate[973]: adjust time server 120.25.115.20 offset 0.001528 sec
[root@ceph-node1 ~]# hwclock 
Tue 31 Jul 2018 03:44:55 AM EDT  -0.302897 seconds
[root@ceph-node1 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com

Make sure the NTP service is started on every Ceph node, and that all nodes use the same NTP server.
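If you prefer the ntpd daemon over the ntpdate cron job, a minimal sketch (assuming the ntp package installed above) is:

[root@ceph-node1 ~]# systemctl enable ntpd
[root@ceph-node1 ~]# systemctl start ntpd
[root@ceph-node1 ~]# ntpq -pn          # confirm the node is actually syncing against the chosen server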

(2) Install an SSH server and add hosts entries

SSH is installed by default, so this step can be skipped:
[root@ceph-node1 ~]# yum install openssh-server
[root@ceph-node2 ~]# yum install openssh-server
[root@ceph-node3 ~]# yum install openssh-server

Make sure the SSH server is running on all Ceph nodes.

[root@ceph-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.11 ceph-node1
192.168.56.12 ceph-node2
192.168.56.13 ceph-node3

(3) Allow passwordless SSH login

[root@ceph-node1 ~]# ssh-keygen
[root@ceph-node1 ~]# ssh-copy-id  root@ceph-node1
[root@ceph-node1 ~]# ssh-copy-id  root@ceph-node2
[root@ceph-node1 ~]# ssh-copy-id  root@ceph-node3

Recommended approach:
Edit the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created, without having to pass --username {username} every time you run ceph-deploy. This also simplifies ssh and scp usage. Replace {username} with the user name you created.

[root@ceph-node1 ~]# cat .ssh/config 
Host node1
   Hostname ceph-node1
   User root
Host node2
   Hostname ceph-node2
   User root
Host node3
   Hostname ceph-node3
   User root
[root@ceph-node1 ~]# chmod 600 .ssh/config 
[root@ceph-node1 ~]# systemctl restart sshd

(4) Disable SELinux

On CentOS and RHEL, SELinux is in Enforcing mode by default. To simplify installation, we recommend setting SELinux to Permissive or disabling it entirely, i.e. making sure the cluster installs and works correctly before hardening the system configuration. Set SELinux to Permissive with the following commands:

[root@ceph-node1 ~]# setenforce 0
[root@ceph-node2 ~]# setenforce 0
[root@ceph-node3 ~]# setenforce 0

To make the SELinux setting persistent (if SELinux really is the source of a problem), modify its configuration file /etc/selinux/config, for example:
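One way to do this, a sketch assuming the default layout of /etc/selinux/config:

[root@ceph-node1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@ceph-node1 ~]# grep '^SELINUX=' /etc/selinux/config    # should now show SELINUX=permissive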

(5) Disable the firewall

[root@ceph-node1 ~]# systemctl stop firewalld.service
[root@ceph-node2 ~]# systemctl stop firewalld.service
[root@ceph-node3 ~]# systemctl stop firewalld.service
[root@ceph-node1 ~]# systemctl disable firewalld.service
[root@ceph-node2 ~]# systemctl disable firewalld.service
[root@ceph-node3 ~]# systemctl disable firewalld.service
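Stopping firewalld keeps a lab setup simple. If you would rather leave the firewall running, an alternative sketch is to open only the Ceph ports on every node (6789/tcp for monitors, 6800-7300/tcp for OSD and mgr daemons):

[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
[root@ceph-node1 ~]# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
[root@ceph-node1 ~]# firewall-cmd --reload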

(6) Install the EPEL repository and enable yum priorities

[root@ceph-node1 ~]# yum install -y epel-release
[root@ceph-node2 ~]# yum install -y epel-release
[root@ceph-node3 ~]# yum install -y epel-release
[root@ceph-node1 ~]# yum install -y yum-plugin-priorities
[root@ceph-node2 ~]# yum install -y yum-plugin-priorities
[root@ceph-node3 ~]# yum install -y yum-plugin-priorities

3. Create the cluster

Create a Ceph storage cluster with one Monitor and two OSD daemons. Once the cluster reaches the active + clean state, expand it: add a third OSD, a metadata server, and two more Ceph Monitors. On the admin node, create a directory to hold the configuration files and keyrings that ceph-deploy generates.

(1) Create the ceph working directory and configure ceph.conf

[root@ceph-node1 ~]# mkdir /etc/ceph && cd /etc/ceph
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1    # set up the monitor node

The ceph-deploy new subcommand deploys a new cluster with the default name ceph, and it generates the cluster configuration file and keyring file. List the current working directory and you will see the ceph.conf and ceph.mon.keyring files.

[root@ceph-node1 ceph]# vim ceph.conf
public network = 192.168.56.0/24
[root@ceph-node1 ceph]# ll
total 20
-rw-r--r-- 1 root root   253 Jul 31 21:36 ceph.conf   # ceph configuration file
-rw-r--r-- 1 root root 12261 Jul 31 21:36 ceph-deploy-ceph.log  # ceph-deploy log file
-rw------- 1 root root    73 Jul 31 21:36 ceph.mon.keyring  # monitor keyring file
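For reference, the generated ceph.conf should look roughly like the sketch below; the fsid is the cluster id that ceph -s reports later, and the public network line is the one added by hand above:

[global]
fsid = c6165f5b-ada0-4035-9bab-1916b28ec92a
mon_initial_members = ceph-node1
mon_host = 192.168.56.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 192.168.56.0/24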

Problem encountered:
[root@ceph-node1 ceph]# ceph-deploy new ceph-node1
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 18, in <module>
    from ceph_deploy.cli import main
  File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
    import pkg_resources
ImportError: No module named pkg_resources
Solution:
[root@ceph-node1 ceph]# yum install -y python-setuptools

(2) Install Ceph on the admin node and all OSD nodes

[root@ceph-node1 ceph]# ceph-deploy install ceph-node1 ceph-node2 ceph-node3

The ceph-deploy tool first installs the Ceph Luminous release and all of its dependencies. After the command completes successfully, check the Ceph version and health status on every node, as shown below:

[root@ceph-node1 ceph]# ceph --version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
[root@ceph-node2 ~]#  ceph --version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
[root@ceph-node3 ~]#  ceph --version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)

(3) Initialize the monitor (MON)

Create the first Ceph monitor on ceph-node1:

[root@ceph-node1 ceph]# ceph-deploy mon create-initial        # deploy the initial monitor(s) and gather all keys
[root@ceph-node1 ceph]# ll      # after this completes, these keyrings should appear in the current directory
total 92
-rw------- 1 root root   113 Jul 31 21:48 ceph.bootstrap-mds.keyring
-rw------- 1 root root   113 Jul 31 21:48 ceph.bootstrap-mgr.keyring
-rw------- 1 root root   113 Jul 31 21:48 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Jul 31 21:48 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   151 Jul 31 21:48 ceph.client.admin.keyring

Note: the bootstrap-rgw keyring is only created when installing Hammer or a newer release.

Note: if this step fails with a message such as "Unable to find /etc/ceph/ceph.client.admin.keyring", confirm that the IP specified for the monitor in ceph.conf is the Public IP, not the Private IP. Then check the cluster status:

[root@ceph-node1 ceph]# ceph -s
  cluster:
    id:     c6165f5b-ada0-4035-9bab-1916b28ec92a
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-node1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   0 kB used, 0 kB / 0 kB avail
    pgs:     
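This guide runs all ceph commands from ceph-node1, whose /etc/ceph directory already holds the admin keyring. If you also want to run ceph commands from ceph-node2 and ceph-node3, a common extra step (not shown in the original walkthrough) is to push the configuration and admin keyring to them:

[root@ceph-node1 ceph]# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3   # copies ceph.conf and ceph.client.admin.keyring
[root@ceph-node2 ~]# chmod 600 /etc/ceph/ceph.client.admin.keyring           # keep the keyring readable by root only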

(4) Enable the monitoring module

View the modules the cluster supports:

[root@ceph-node1 ceph]# ceph mgr dump   
[root@ceph-node1 ceph]# ceph mgr module enable dashboard   # enable the dashboard module

Add the following to /etc/ceph/ceph.conf:

[mgr]
mgr modules = dashboard

Set the dashboard IP and port:

[root@ceph-node1 ceph]# ceph config-key put mgr/dashboard/server_addr 192.168.56.11
set mgr/dashboard/server_addr
[root@ceph-node1 ceph]# ceph config-key put mgr/dashboard/server_port 7000
set mgr/dashboard/server_port
[root@ceph-node1 ceph]# netstat -tulnp |grep 7000
tcp6       0      0 :::7000                 :::*                    LISTEN      13353/ceph-mgr 
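If the dashboard is not listening on the new address and port yet, these config-key settings (and the [mgr] section added to ceph.conf) are normally picked up after restarting the active mgr daemon; replace node1 with your own mgr id:

[root@ceph-node1 ceph]# systemctl restart ceph-mgr@node1
[root@ceph-node1 ceph]# ceph mgr services      # should list the dashboard URL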

Visit: http://192.168.56.11:7000/


(5) Create OSDs on ceph-node1

[root@ceph-node1 ceph]# ceph-deploy disk zap ceph-node1 /dev/sdb

[root@ceph-node1 ceph]# ceph-deploy disk list ceph-node1    # list all available disks on ceph-node1
......
[ceph-node1][INFO  ] Running command: fdisk -l
[ceph-node1][INFO  ] Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors
[ceph-node1][INFO  ] Disk /dev/sdc: 1073 MB, 1073741824 bytes, 2097152 sectors
[ceph-node1][INFO  ] Disk /dev/sdd: 1073 MB, 1073741824 bytes, 2097152 sectors
[ceph-node1][INFO  ] Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors

From the output, carefully select the disks on which to create Ceph OSDs (excluding the operating-system disk); here they are sdb, sdc, and sdd. The disk zap subcommand destroys the existing partition table and all data on the disk, so make sure you have chosen the correct device names before running it.
The osd create subcommand first prepares the disk; in the older filestore layout this meant formatting it with xfs and activating the first and second partitions as the data and journal partitions. (With ceph-deploy 2.x and Luminous, osd create drives ceph-volume and by default builds a BlueStore OSD on an LVM volume instead.)

[root@ceph-node1 ceph]# ceph-deploy mgr create node1  # deploy a manager daemon; required only for luminous+ builds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [(node1, node1)]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xe1e5a8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xda5f50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts node1:node1
[node1][DEBUG ] connected to host: node1 
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.4.1708 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][WARNIN] mgr keyring does not exist yet, creating one
[node1][DEBUG ] create a keyring file
[node1][DEBUG ] create path recursively if it doesnt exist
[node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.node1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-node1/keyring
[node1][INFO  ] Running command: systemctl enable ceph-mgr@node1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@node1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[node1][INFO  ] Running command: systemctl start ceph-mgr@node1
[node1][INFO  ] Running command: systemctl enable ceph.target

(6) Create the OSDs

Add three OSDs. For the purposes of these instructions, assume each node has an unused disk /dev/sdb. Make sure the device is not currently in use and does not contain any important data.
Syntax: ceph-deploy osd create --data {device} {ceph-node}

[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdb node1
[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdc node1
[root@ceph-node1 ceph]# ceph-deploy osd create --data /dev/sdd node1
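Once the three OSDs have been created, a quick sanity check (output omitted here) is:

[root@ceph-node1 ceph]# ceph -s          # should eventually report 3 osds up and in, health HEALTH_OK
[root@ceph-node1 ceph]# ceph osd tree    # CRUSH view of hosts and their OSDs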
