
Upgrading a Ceph Cluster from Jewel to Luminous


References

https://www.virtualtothecore.com/en/upgrade-ceph-cluster-luminous/
http://www.chinastor.com/distristor/11033L502017.html

Background

First, see the earlier posts covering the original installation and testing:
http://blog.51cto.com/jerrymin/2139045
http://blog.51cto.com/jerrymin/2139046
mon: ceph0, ceph2, ceph3
osd: ceph0, ceph1, ceph2, ceph3
rgw: ceph1
deploy: ceph0
I had previously tested a Jewel cluster on CentOS 7.5. As my understanding of Ceph deepened, I planned to run a more recent LTS release in production and eventually settled on Luminous.

The original plan was to redeploy Luminous from scratch, but since this is a test environment where the risk of data loss is low, I decided to try upgrading in place from Jewel to Luminous. The cluster had been installed with yum, so in principle the upgrade just replaces the binaries and then restarts the services. The documented steps looked simple, but during testing I hit a pitfall that users in China are bound to run into: the upgrade automatically rewrites the yum repo to point at an overseas site, and with high network latency the 300-second timeout triggers, the connection is dropped, and the upgrade aborts. For the remaining steps I therefore switched to a domestic mirror and upgraded the RPMs by hand. Because the packages pull each other in through dependencies, a single yum install ceph ceph-radosgw was enough to upgrade all the Ceph packages; restarting the relevant services then completed the upgrade. In the end no data was lost and all functions worked normally.

Upgrade Procedure

Follow the official upgrade guide carefully, step by step, and make sure the cluster is in a healthy state before starting.
1. Log in and confirm that the sortbitwise flag is enabled; the set command below enables it if it is not already on, and is harmless to repeat:

[root@idcv-ceph0 yum.repos.d]# ceph osd set sortbitwise
set sortbitwise

2. Set the noout flag to tell Ceph not to rebalance the cluster. This is optional, but recommended: otherwise, every time a node is stopped, Ceph tries to rebalance by copying its data to the other available nodes.

[root@idcv-ceph0 yum.repos.d]# ceph osd set noout
set noout
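Both flags should now appear on the osdmap "flags" line reported by `ceph osd dump` (and by `ceph -s`). As a quick sanity check, that line can be parsed with a small helper; the `has_flag` function below is my own illustrative sketch, not a ceph tool, run here against a sample flags line from this cluster:

```shell
# Sketch: check whether a given flag appears in the osdmap "flags" line.
# On a live cluster, capture the line with:  ceph osd dump | grep '^flags'
has_flag() {
    line="$1"; flag="$2"
    case ",${line#flags }," in
        *",$flag,"*) return 0 ;;
        *)           return 1 ;;
    esac
}

sample="flags noout,sortbitwise,require_jewel_osds"
has_flag "$sample" noout       && echo "noout set"
has_flag "$sample" sortbitwise && echo "sortbitwise set"
```

If either check fails, set the missing flag before proceeding with the upgrade.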

3. You can either upgrade each node by hand or let ceph-deploy drive the upgrade automatically. For a manual upgrade on CentOS, first edit the Ceph yum repo so that it points at the new Luminous release instead of Jewel, which is a simple text substitution:

[root@idcv-ceph0 yum.repos.d]# sed -i 's/jewel/luminous/' /etc/yum.repos.d/ceph.repo
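Before touching /etc/yum.repos.d, the substitution can be rehearsed on a scratch file to confirm what it does to the baseurl lines (the one-line file below is illustrative, not the full repo file):

```shell
# Rehearse the jewel -> luminous repo edit on a scratch copy first.
tmp=$(mktemp)
printf 'baseurl=http://download.ceph.com/rpm-jewel/el7/$basearch\n' > "$tmp"
sed -i 's/jewel/luminous/' "$tmp"    # GNU sed, as on CentOS
new_line=$(cat "$tmp")
echo "$new_line"
rm -f "$tmp"
```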

4. With ceph-deploy, the cluster upgrade can be driven by a single command. First upgrade ceph-deploy itself:

[root@idcv-ceph0 yum.repos.d]# yum install ceph-deploy python-pushy
Running transaction
Updating : ceph-deploy-2.0.0-0.noarch 1/2
Cleanup : ceph-deploy-1.5.39-0.noarch 2/2
Verifying : ceph-deploy-2.0.0-0.noarch 1/2
Verifying : ceph-deploy-1.5.39-0.noarch 2/2
Updated:
ceph-deploy.noarch 0:2.0.0-0
Complete!
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph-deploy
ceph-deploy-2.0.0-0.noarch

5. Once ceph-deploy has been upgraded, the first thing to do is upgrade Ceph on the same machine.
From this point on, the official steps no longer worked from within China: the repo had been switched back to the overseas source and the download speed could not keep up, so the rest of the upgrade was done by hand while still following the official sequence.

[root@idcv-ceph0 yum.repos.d]# ceph -s
    cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
     health HEALTH_WARN
            noout flag(s) set
     monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
            election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
     osdmap e49: 4 osds: 4 up, 4 in
            flags noout,sortbitwise,require_jewel_osds
      pgmap v53288: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
            296 GB used, 84824 MB / 379 GB avail
                 272 active+clean
[root@idcv-ceph0 yum.repos.d]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[root@idcv-ceph0 yum.repos.d]# cd /root/cluster/
[root@idcv-ceph0 cluster]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf               
[root@idcv-ceph0 cluster]# ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/bin/ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  testing                       : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f38ae7a1d40>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  dev_commit                    : None
[ceph_deploy.cli][INFO  ]  install_mds                   : False
[ceph_deploy.cli][INFO  ]  stable                        : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  adjust_repos                  : True
[ceph_deploy.cli][INFO  ]  func                          : <function install at 0x7f38ae9d8ed8>
[ceph_deploy.cli][INFO  ]  install_mgr                   : False
[ceph_deploy.cli][INFO  ]  install_all                   : False
[ceph_deploy.cli][INFO  ]  repo                          : False
[ceph_deploy.cli][INFO  ]  host                          : ['idcv-ceph0']
[ceph_deploy.cli][INFO  ]  install_rgw                   : False
[ceph_deploy.cli][INFO  ]  install_tests                 : False
[ceph_deploy.cli][INFO  ]  repo_url                      : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  install_osd                   : False
[ceph_deploy.cli][INFO  ]  version_kind                  : stable
[ceph_deploy.cli][INFO  ]  install_common                : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  dev                           : master
[ceph_deploy.cli][INFO  ]  nogpgcheck                    : False
[ceph_deploy.cli][INFO  ]  local_mirror                  : None
[ceph_deploy.cli][INFO  ]  release                       : luminous
[ceph_deploy.cli][INFO  ]  install_mon                   : False
[ceph_deploy.cli][INFO  ]  gpg_url                       : None
[ceph_deploy.install][DEBUG ] Installing stable version luminous on cluster ceph hosts idcv-ceph0
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0 
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][INFO  ] installing Ceph on idcv-ceph0
[idcv-ceph0][INFO  ] Running command: yum clean all
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph0][DEBUG ] Cleaning up everything
[idcv-ceph0][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph0][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph0][INFO  ] Running command: yum -y install epel-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Determining fastest mirrors
[idcv-ceph0][DEBUG ]  * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ]  * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][INFO  ] Running command: yum -y install yum-plugin-priorities
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ]  * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ]  * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package yum-plugin-priorities-1.1.31-45.el7.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][DEBUG ] Configure Yum priorities to include obsoletes
[idcv-ceph0][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[idcv-ceph0][INFO  ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[idcv-ceph0][INFO  ] Running command: yum remove -y ceph-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ]  Package              Arch           Version            Repository         Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Removing:
[idcv-ceph0][DEBUG ]  ceph-release         noarch         1-1.el7            installed         535  
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Remove  1 Package
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Installed size: 535  
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ]   Erasing    : ceph-release-1-1.el7.noarch                                  1/1 
[idcv-ceph0][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[idcv-ceph0][DEBUG ]   Verifying  : ceph-release-1-1.el7.noarch                                  1/1 
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Removed:
[idcv-ceph0][DEBUG ]   ceph-release.noarch 0:1-1.el7                                                 
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][INFO  ] Running command: yum install -y https://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Examining /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[idcv-ceph0][DEBUG ] Marking /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm to be installed
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ]  Package          Arch       Version     Repository                        Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ]  ceph-release     noarch     1-1.el7     /ceph-release-1-0.el7.noarch     544  
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install  1 Package
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Total size: 544  
[idcv-ceph0][DEBUG ] Installed size: 544  
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ]   Installing : ceph-release-1-1.el7.noarch                                  1/1 
[idcv-ceph0][DEBUG ]   Verifying  : ceph-release-1-1.el7.noarch                                  1/1 
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Installed:
[idcv-ceph0][DEBUG ]   ceph-release.noarch 0:1-1.el7                                                 
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[idcv-ceph0][WARNIN] altered ceph.repo priorities to contain: priority=1
[idcv-ceph0][INFO  ] Running command: yum -y install ceph ceph-radosgw
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ]  * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ]  * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ]  * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-osd = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mon = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mgr = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mds = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-selinux = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librgw2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libibverbs.so.1()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libceph-common.so.0()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 1:10.2.10-0.el7 for package: 1:ceph-base-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rbd = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs2 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rgw = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cephfs = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rados = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librbd1 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-prettytable for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs.so.2()(64bit) for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-mgr.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cherrypy for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: pyOpenSSL for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-pecan for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libibverbs.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: rdma-core(x86-64) = 15-7.el7_5 for package: libibverbs-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:rbd-nbd-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:libradosstriper1-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libcephfs1.x86_64 1:10.2.10-0.el7 will be obsoleted
[idcv-ceph0][DEBUG ] ---> Package libcephfs2.x86_64 2:12.2.5-0.el7 will be obsoleting
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package pyOpenSSL.x86_64 0:0.13.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-cherrypy.noarch 0:3.2.2-4.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-pecan.noarch 0:0.4.5-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webtest >= 1.3.1 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webob >= 1.2 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-simplegeneric >= 0.8 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-mako >= 0.4.0 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-singledispatch for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-prettytable.noarch 0:0.7.2-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rgw.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package rdma-core.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: pciutils for package: rdma-core-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package pciutils.x86_64 0:3.5.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-mako.noarch 0:0.8.1-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-beaker for package: python-mako-0.8.1-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-singledispatch.noarch 0:3.4.0.2-2.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webob.noarch 0:1.2.3-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webtest.noarch 0:1.3.4-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-beaker.noarch 0:1.5.4-10.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-paste for package: python-beaker-1.5.4-10.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-tempita for package: python-paste-1.7.5.1-9.20111221hg1498.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-tempita.noarch 0:0.5.1-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ]  Package                Arch    Version                          Repository
[idcv-ceph0][DEBUG ]                                                                            Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ]  libcephfs2             x86_64  2:12.2.5-0.el7                   Ceph     432 k
[idcv-ceph0][DEBUG ]      replacing  libcephfs1.x86_64 1:10.2.10-0.el7
[idcv-ceph0][DEBUG ] Updating:
[idcv-ceph0][DEBUG ]  ceph                   x86_64  2:12.2.5-0.el7                   Ceph     3.0 k
[idcv-ceph0][DEBUG ]  ceph-radosgw           x86_64  2:12.2.5-0.el7                   Ceph     3.8 M
[idcv-ceph0][DEBUG ] Installing for dependencies:
[idcv-ceph0][DEBUG ]  ceph-mgr               x86_64  2:12.2.5-0.el7                   Ceph     3.6 M
[idcv-ceph0][DEBUG ]  libibverbs             x86_64  15-7.el7_5                       updates  224 k
[idcv-ceph0][DEBUG ]  pciutils               x86_64  3.5.1-3.el7                      base      93 k
[idcv-ceph0][DEBUG ]  pyOpenSSL              x86_64  0.13.1-3.el7                     base     133 k
[idcv-ceph0][DEBUG ]  python-beaker          noarch  1.5.4-10.el7                     base      80 k
[idcv-ceph0][DEBUG ]  python-cherrypy        noarch  3.2.2-4.el7                      base     422 k
[idcv-ceph0][DEBUG ]  python-mako            noarch  0.8.1-2.el7                      base     307 k
[idcv-ceph0][DEBUG ]  python-paste           noarch  1.7.5.1-9.20111221hg1498.el7     base     866 k
[idcv-ceph0][DEBUG ]  python-pecan           noarch  0.4.5-2.el7                      epel     255 k
[idcv-ceph0][DEBUG ]  python-prettytable     noarch  0.7.2-3.el7                      base      37 k
[idcv-ceph0][DEBUG ]  python-rgw             x86_64  2:12.2.5-0.el7                   Ceph      73 k
[idcv-ceph0][DEBUG ]  python-simplegeneric   noarch  0.8-7.el7                        epel      12 k
[idcv-ceph0][DEBUG ]  python-singledispatch  noarch  3.4.0.2-2.el7                    epel      18 k
[idcv-ceph0][DEBUG ]  python-tempita         noarch  0.5.1-6.el7                      base      33 k
[idcv-ceph0][DEBUG ]  python-webob           noarch  1.2.3-7.el7                      base     202 k
[idcv-ceph0][DEBUG ]  python-webtest         noarch  1.3.4-6.el7                      base     102 k
[idcv-ceph0][DEBUG ]  rdma-core              x86_64  15-7.el7_5                       updates   48 k
[idcv-ceph0][DEBUG ] Updating for dependencies:
[idcv-ceph0][DEBUG ]  ceph-base              x86_64  2:12.2.5-0.el7                   Ceph     3.9 M
[idcv-ceph0][DEBUG ]  ceph-common            x86_64  2:12.2.5-0.el7                   Ceph      15 M
[idcv-ceph0][DEBUG ]  ceph-mds               x86_64  2:12.2.5-0.el7                   Ceph     3.6 M
[idcv-ceph0][DEBUG ]  ceph-mon               x86_64  2:12.2.5-0.el7                   Ceph     5.0 M
[idcv-ceph0][DEBUG ]  ceph-osd               x86_64  2:12.2.5-0.el7                   Ceph      13 M
[idcv-ceph0][DEBUG ]  ceph-selinux           x86_64  2:12.2.5-0.el7                   Ceph      20 k
[idcv-ceph0][DEBUG ]  librados2              x86_64  2:12.2.5-0.el7                   Ceph     2.9 M
[idcv-ceph0][DEBUG ]  libradosstriper1       x86_64  2:12.2.5-0.el7                   Ceph     330 k
[idcv-ceph0][DEBUG ]  librbd1                x86_64  2:12.2.5-0.el7                   Ceph     1.1 M
[idcv-ceph0][DEBUG ]  librgw2                x86_64  2:12.2.5-0.el7                   Ceph     1.7 M
[idcv-ceph0][DEBUG ]  python-cephfs          x86_64  2:12.2.5-0.el7                   Ceph      82 k
[idcv-ceph0][DEBUG ]  python-rados           x86_64  2:12.2.5-0.el7                   Ceph     172 k
[idcv-ceph0][DEBUG ]  python-rbd             x86_64  2:12.2.5-0.el7                   Ceph     105 k
[idcv-ceph0][DEBUG ]  rbd-nbd                x86_64  2:12.2.5-0.el7                   Ceph      81 k
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install  1 Package  (+17 Dependent packages)
[idcv-ceph0][DEBUG ] Upgrade  2 Packages (+14 Dependent packages)
[idcv-ceph0][DEBUG ] 
[idcv-ceph0][DEBUG ] Total download size: 57 M
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[idcv-ceph0][WARNIN] No data was received after 300 seconds, disconnecting...
[idcv-ceph0][INFO  ] Running command: ceph --version
[idcv-ceph0][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

Error 1

Delta RPMs disabled because /usr/bin/applydeltarpm not installed.

Solution

[root@idcv-ceph0 cluster]# yum install deltarpm -y
Loaded plugins: fastestmirror, priorities
Existing lock /var/run/yum.pid: another copy is running as pid 90654.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 132 M RSS (523 MB VSZ)
Started: Tue Jul 10 16:15:59 2018 - 10:22 ago
State : Sleeping, pid: 90654
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 132 M RSS (523 MB VSZ)
Started: Tue Jul 10 16:15:59 2018 - 10:24 ago
State : Sleeping, pid: 90654
^C
Exiting on user cancel.
[root@idcv-ceph0 cluster]# kill -9 90654
[root@idcv-ceph0 cluster]# yum install deltarpm -y

Error 2

No data was received after 300 seconds, disconnecting...

Solution

The Ceph repo also needs to point at a domestic yum mirror such as Aliyun, otherwise you get the error "No data was received after 300 seconds, disconnecting...":
[root@idcv-ceph0 cluster]# sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
However, ceph-deploy keeps rewriting the repo back to the overseas source. The workaround is to check the installed ceph packages with rpm -qa | grep ceph, switch the repo to the domestic mirror, and then install directly with yum install (or with ceph-deploy install):
[root@idcv-ceph0 ceph]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[root@idcv-ceph0 yum.repos.d]# yum -y install ceph ceph-radosgw
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph
ceph-deploy-2.0.0-0.noarch
libcephfs2-12.2.5-0.el7.x86_64
python-cephfs-12.2.5-0.el7.x86_64
ceph-selinux-12.2.5-0.el7.x86_64
ceph-radosgw-12.2.5-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-base-12.2.5-0.el7.x86_64
ceph-mon-12.2.5-0.el7.x86_64
ceph-osd-12.2.5-0.el7.x86_64
ceph-12.2.5-0.el7.x86_64
ceph-common-12.2.5-0.el7.x86_64
ceph-mds-12.2.5-0.el7.x86_64
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 yum.repos.d]# ceph -s
    cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
     health HEALTH_WARN
            noout flag(s) set
     monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
            election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
     osdmap e49: 4 osds: 4 up, 4 in
            flags noout,sortbitwise,require_jewel_osds
      pgmap v53473: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
            296 GB used, 84819 MB / 379 GB avail
                 272 active+clean
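After the manual install it is worth confirming that every installed ceph package is at the target build, as the rpm -qa listing above shows. The helper below is a sketch of my own for scanning such a list (ceph-deploy and ceph-release carry their own version numbers and are skipped):

```shell
# Sketch: flag any ceph package that is not at the target build.
# Feed it the output of:  rpm -qa | grep ceph
check_build() {
    target="$1"
    while read -r pkg; do
        case "$pkg" in
            ceph-release-*|ceph-deploy-*) ;;   # versioned separately, ignore
            *-"$target"-*)                ;;   # at the target build
            *) echo "stale: $pkg"; return 1 ;;
        esac
    done
    return 0
}

printf '%s\n' \
    ceph-mon-12.2.5-0.el7.x86_64 \
    ceph-osd-12.2.5-0.el7.x86_64 \
    ceph-deploy-2.0.0-0.noarch |
check_build 12.2.5 && echo "all ceph packages at 12.2.5"
```

A leftover 10.2.10 package in the list would be printed as stale, which is the cue to rerun yum install before restarting any daemons.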

6. On each monitor node, restart the mon service:

[root@idcv-ceph0 cluster]# systemctl restart ceph-mon.target
[root@idcv-ceph0 cluster]# systemctl status ceph-mon.target
● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
   Active: active since Tue 2018-07-10 17:27:39 CST; 11s ago
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Starting ceph target allowing to start/stop all ceph-mon@.service instances at once.
[root@idcv-ceph0 cluster]# ceph -v
ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
[root@idcv-ceph0 cluster]# ceph -s
  cluster:
    id:     812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_WARN
            too many PGs per OSD (204 > max 200)
            noout flag(s) set

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: no daemons active
    osd: 4 osds: 4 up, 4 in
         flags noout

  data:
    pools:   12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage:   296 GB used, 84817 MB / 379 GB avail
    pgs:     272 active+clean
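The same restart has to be repeated on the remaining monitor nodes (idcv-ceph2 and idcv-ceph3 in this cluster). The loop below only prints the per-node commands as a sketch; the ssh form is an assumption about how you reach the nodes, and in practice each mon should be restarted one at a time, waiting for quorum to recover before moving on:

```shell
# Print the mon restart command for each monitor node, one per line.
mons="idcv-ceph0 idcv-ceph2 idcv-ceph3"
for h in $mons; do
    echo "ssh root@$h systemctl restart ceph-mon.target"
done
```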

7. The Ceph manager daemon was introduced in Kraken. From Luminous onward the ceph-mgr process is required for normal operation, whereas in Kraken it was optional. My Jewel cluster therefore had no manager, so one has to be deployed here:

[root@idcv-ceph0 cluster]# ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.0): /usr/bin/ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('idcv-ceph0', 'idcv-ceph0'), ('idcv-ceph1', 'idcv-ceph1'), ('idcv-ceph2', 'idcv-ceph2'), ('idcv-ceph3', 'idcv-ceph3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f229723e320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f2297b16a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts idcv-ceph0:idcv-ceph0 idcv-ceph1:idcv-ceph1 idcv-ceph2:idcv-ceph2 idcv-ceph3:idcv-ceph3
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0 
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph0][DEBUG ] create a keyring file
[idcv-ceph0][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph0][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph0/keyring
[idcv-ceph0][INFO  ] Running command: systemctl enable ceph-mgr@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph0][INFO  ] Running command: systemctl start ceph-mgr@idcv-ceph0
[idcv-ceph0][INFO  ] Running command: systemctl enable ceph.target
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1 
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph1/keyring
[idcv-ceph1][INFO  ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph1][INFO  ] Running command: sudo systemctl start ceph-mgr@idcv-ceph1
[idcv-ceph1][INFO  ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2 
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph2][DEBUG ] create a keyring file
[idcv-ceph2][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph2][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph2/keyring
[idcv-ceph2][INFO  ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph2][INFO  ] Running command: sudo systemctl start ceph-mgr@idcv-ceph2
[idcv-ceph2][INFO  ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3 
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph3][DEBUG ] create a keyring file
[idcv-ceph3][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph3][INFO  ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph3/keyring
[idcv-ceph3][INFO  ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph3][INFO  ] Running command: sudo systemctl start ceph-mgr@idcv-ceph3
[idcv-ceph3][INFO  ] Running command: sudo systemctl enable ceph.target
[root@idcv-ceph0 cluster]# ceph -s 
  cluster:
    id:     812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_WARN
            too many PGs per OSD (204 > max 200)
            noout flag(s) set

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph1, idcv-ceph2, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
         flags noout

  data:
    pools:   12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage:   296 GB used, 84816 MB / 379 GB avail
    pgs:     272 active+clean

8. Restart the OSDs
The prerequisite is that every node has already been switched to the domestic Luminous yum mirror (as described in step 6) and has run `yum -y install ceph ceph-radosgw`, which upgrades the binaries.

[root@idcv-ceph0 ceph]# systemctl restart ceph-osd.target
[root@idcv-ceph0 ceph]# ceph versions
{
    "mon": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
    },
    "osd": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 11
    }
}

9. Now that all components are on 12.2.5, we can forbid pre-Luminous OSDs and enable the Luminous-only features:

[root@idcv-ceph0 ceph]# ceph osd require-osd-release luminous
recovery_deletes is set

This also means that from now on, only Luminous nodes can join this cluster.

10. The rgw service, which runs on ceph1, also needs a restart:

[root@idcv-ceph1 system]# systemctl restart ceph-radosgw.target
[root@idcv-ceph1 system]# systemctl status ceph-radosgw.target
● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
Active: active since Tue 2018-07-10 18:02:25 CST; 6s ago
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Reached target ceph target allowing to start/stop all ceph-radosgw@.service instances at once.
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Starting ceph target allowing to start/stop all ceph-radosgw@.service instances at once.

11. Enable the dashboard

[root@idcv-ceph0 ceph]# rpm -qa |grep mgr
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 ceph]#  ceph mgr module enable dashboard      
[root@idcv-ceph0 ceph]# ceph mgr dump
{
    "epoch": 53,
    "active_gid": 34146,
    "active_name": "idcv-ceph0",
    "active_addr": "172.20.1.138:6804/95951",
    "available": true,
    "standbys": [
        {
            "gid": 44129,
            "name": "idcv-ceph2",
            "available_modules": [
                "balancer",
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "restful",
                "selftest",
                "status",
                "zabbix"
            ]
        },
        {
            "gid": 44134,
            "name": "idcv-ceph1",
            "available_modules": [
                "balancer",
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "restful",
                "selftest",
                "status",
                "zabbix"
            ]
        },
        {
            "gid": 44135,
            "name": "idcv-ceph3",
            "available_modules": [
                "balancer",
                "dashboard",
                "influx",
                "localpool",
                "prometheus",
                "restful",
                "selftest",
                "status",
                "zabbix"
            ]
        }
    ],
    "modules": [
        "balancer",
        "dashboard",
        "restful",
        "status"
    ],
    "available_modules": [
        "balancer",
        "dashboard",
        "influx",
        "localpool",
        "prometheus",
        "restful",
        "selftest",
        "status",
        "zabbix"
    ],
    "services": {
        "dashboard": "http://idcv-ceph0:7000/"
    }
}

Open http://172.20.1.138:7000 in a browser.

(Screenshot: the Ceph dashboard web UI)

12. The last step: unset noout, so that from now on the cluster can rebalance itself whenever needed:

[root@idcv-ceph0 ceph]# ceph osd unset noout
noout is unset
[root@idcv-ceph0 ceph]# ceph -s
  cluster:
    id:     812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools:   12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage:   296 GB used, 84830 MB / 379 GB avail
    pgs:     272 active+clean

  io:
    client:   0 B/s rd, 0 op/s rd, 0 op/s wr

Error
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

Solution
[root@idcv-ceph0 ceph]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'test_pool'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[root@idcv-ceph0 ceph]# ceph osd pool application enable  test_pool
Invalid command: missing required parameter app(<string(goodchars [A-Za-z0-9-_.])>)
osd pool application enable <poolname> <app> {--yes-i-really-mean-it} :  enable use of an application <app> [cephfs,rbd,rgw] on pool <poolname>
Error EINVAL: invalid command
[root@idcv-ceph0 ceph]# ceph osd pool application enable test_pool rbd
enabled application 'rbd' on pool 'test_pool'
[root@idcv-ceph0 ceph]# ceph -s
  cluster:
    id:     812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools:   12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage:   296 GB used, 84829 MB / 379 GB avail
    pgs:     272 active+clean

  io:
    client:   0 B/s rd, 0 op/s rd, 0 op/s wr

Summary

The whole upgrade took a little over an hour, most of it spent working around domestic network conditions; apart from that, the upgrade itself is fairly simple. The L release also adds the mgr daemon and the dashboard. After the upgrade I tested both the object storage and block storage functions, and everything works normally.
