
Building Ceph on CentOS 7.4



This article uses the ceph-deploy tool to quickly build a Ceph cluster.


1. Environment Preparation


  • Set the hostname of each node according to the table below (the environment runs CentOS 7.4)

    [root@admin-node ~]# cat /etc/redhat-release
    CentOS Linux release 7.4.1708 (Core)

    IP            Hostname     Role
    10.10.10.20   admin-node   ceph-deploy
    10.10.10.21   node1        mon
    10.10.10.22   node2        osd
    10.10.10.23   node3        osd


  • Set up name resolution (here we edit the /etc/hosts file)

  • Configure on every node

    [root@admin-node ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.10.10.20 admin-node
    10.10.10.21 node1
    10.10.10.22 node2
    10.10.10.23 node3


  • Configure the yum repositories (an optional repo sketch follows the listing)

  • Configure on every node

    [root@admin-node ~]# mv /etc/yum.repos.d{,.bak}
    [root@admin-node ~]# mkdir /etc/yum.repos.d
    [root@admin-node ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    [root@admin-node ceph]# cat /etc/yum.repos.d/ceph.repo
    [Ceph]
    name=Ceph
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
    enabled=1
    gpgcheck=0
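
    If you also want the Ceph binary packages (not just noarch packages such as ceph-deploy) to come from the same mirror, an extra section can be added to ceph.repo. This is a sketch assuming the Aliyun mirror layout; the [Ceph-x86_64] section name is my own choice and is not part of the original article:

    # /etc/yum.repos.d/ceph.repo -- hypothetical extra section for binary packages
    [Ceph-x86_64]
    name=Ceph x86_64 packages
    baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
    enabled=1
    gpgcheck=0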


  • Disable the firewall and SELinux (a hedged alternative follows the listing)

  • Configure on every node

    [root@admin-node ~]# systemctl stop firewalld.service
    [root@admin-node ~]# systemctl disable firewalld.service
    [root@admin-node ~]# setenforce 0
    [root@admin-node ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
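
    If you prefer not to disable firewalld entirely, a minimal alternative (my own addition, not from the original article) is to open only the ports Ceph needs on each node:

    firewall-cmd --permanent --add-port=6789/tcp        # monitor
    firewall-cmd --permanent --add-port=6800-7300/tcp   # OSD daemons
    firewall-cmd --reload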


  • Set up passwordless SSH login between the nodes (a verification sketch follows the listing)

  • Configure on every node

    [root@admin-node ~]# ssh-keygen
    [root@admin-node ~]# ssh-copy-id 10.10.10.21
    [root@admin-node ~]# ssh-copy-id 10.10.10.22
    [root@admin-node ~]# ssh-copy-id 10.10.10.23
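
    A quick way to confirm the keys work (my own addition) is to run a command on each node from the admin-node and check that no password prompt appears:

    for ip in 10.10.10.21 10.10.10.22 10.10.10.23; do ssh $ip hostname; done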


  • Synchronize time with chrony (a configuration sketch follows the listing)

  • Configure on every node

    [root@admin-node ~]# yum install chrony -y
    [root@admin-node ~]# systemctl restart chronyd
    [root@admin-node ~]# systemctl enable chronyd
    [root@admin-node ~]# chronyc sources -v   (check whether time is synchronized; a * marks the source currently in sync)
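
    Clock drift between nodes will make the monitors complain, so it helps if every node points at the same NTP servers. A sketch of /etc/chrony.conf, assuming the Aliyun public NTP servers are reachable (my assumption, not from the original article):

    # /etc/chrony.conf -- replace the default server lines, then restart chronyd
    server ntp1.aliyun.com iburst
    server ntp2.aliyun.com iburst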


2. Installing Ceph (Jewel)


  • Install ceph-deploy

  • Only on the admin-node

    [root@admin-node ~]# yum install ceph-deploy -y


  • On the admin node, create a directory to hold the configuration files and keys that ceph-deploy generates

  • Only on the admin-node

    [root@admin-node ~]# mkdir /etc/ceph
    [root@admin-node ~]# cd /etc/ceph/


  • Clear old configuration (run the following only if you want to reinstall from scratch)

  • Run only on the admin-node

    [root@admin-node ceph]# ceph-deploy purgedata node1 node2 node3
    [root@admin-node ceph]# ceph-deploy forgetkeys


  • Create the cluster

  • Run only on the admin-node

    [root@admin-node ceph]# ceph-deploy new node1


  • Edit the Ceph configuration and set the default number of replicas to 2 (a hedged note on the public network follows the listing)

  • Only on the admin-node

    [root@admin-node ceph]# vi ceph.conf
    [global]
    fsid = 183e441b-c8cd-40fa-9b1a-0387cb8e8735
    mon_initial_members = node1
    mon_host = 10.10.10.21
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    filestore_xattr_use_omap = true
    osd pool default size = 2
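
    On hosts with more than one network interface it is common to also pin Ceph to the storage network. This line is my own addition (the original ceph.conf does not include it); the subnet is assumed from the addresses used in this article:

    # add under [global] -- hypothetical, based on the 10.10.10.0/24 addressing above
    public network = 10.10.10.0/24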


  • Install Ceph on all nodes

  • Run only on the admin-node

    [root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3


    [node3][DEBUG ] Configure Yum priorities to include obsoletes
    [node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
    [node3][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
    [node3][INFO ] Running command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    [node3][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    [node3][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
    [node3][DEBUG ] Preparing...                          ########################################
    [node3][DEBUG ] Updating / installing...
    [node3][DEBUG ] ceph-release-1-1.el7                  ########################################
    [node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
    [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-noarch'

    The install fails at this point because a newer ceph-release package has already been installed on the nodes.
    Fix: run yum remove ceph-release on every node, then re-run the previous ceph-deploy install command (see the sketch below).
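
    A sketch of the recovery step (my own wording of the fix above; the loop relies on the SSH keys set up earlier):

    yum remove -y ceph-release                                       # on the admin-node itself
    for h in node1 node2 node3; do ssh $h "yum remove -y ceph-release"; done
    ceph-deploy install admin-node node1 node2 node3                 # re-run the install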


  • Configure the initial monitor(s) and gather all the keys

  • Run only on the admin-node

    [root@admin-node ceph]# ceph-deploy mon create-initial
    [root@admin-node ceph]# ls
    ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log
    ceph.bootstrap-mgr.keyring  ceph.client.admin.keyring   ceph.mon.keyring
    ceph.bootstrap-osd.keyring  ceph.conf                   rbdmap
    [root@admin-node ceph]# ceph -s   (check the cluster status)
        cluster 8d395c8f-6ac5-4bca-bbb9-2e0120159ed9
         health HEALTH_ERR
                no osds
         monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
                election epoch 3, quorum 0 node1
         osdmap e1: 0 osds: 0 up, 0 in
                flags sortbitwise,require_jewel_osds
          pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
                0 kB used, 0 kB / 0 kB avail
                      64 creating

    (HEALTH_ERR with "no osds" is expected at this point, because no OSDs have been added yet.)


  • Create the OSDs (node2 and node3 act as the OSD nodes; a note on persistent mounts follows the listing)

    [root@node2 ~]# lsblk
    NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    fd0               2:0    1    4K  0 disk
    sda               8:0    0   20G  0 disk
    ├─sda1            8:1    0    1G  0 part /boot
    └─sda2            8:2    0   19G  0 part
      ├─cl-root     253:0    0   17G  0 lvm  /
      └─cl-swap     253:1    0    2G  0 lvm  [SWAP]
    sdb               8:16   0   50G  0 disk /var/local/osd0
    sdc               8:32   0    5G  0 disk
    sr0              11:0    1  4.1G  0 rom
    [root@node2 ~]# mkfs.xfs /dev/sdb
    [root@node2 ~]# mkdir /var/local/osd0
    [root@node2 ~]# mount /dev/sdb /var/local/osd0
    [root@node2 ~]# chown ceph:ceph /var/local/osd0
    [root@node3 ~]# mkdir /var/local/osd1
    [root@node3 ~]# mkfs.xfs /dev/sdb
    [root@node3 ~]# mount /dev/sdb /var/local/osd1/
    [root@node3 ~]# chown ceph:ceph /var/local/osd1
    [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1   (run on the admin-node)
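
    The mounts above do not survive a reboot. A sketch (my own addition, not in the original article) of making them persistent via /etc/fstab on the OSD nodes:

    echo "/dev/sdb /var/local/osd0 xfs defaults 0 0" >> /etc/fstab    # on node2
    echo "/dev/sdb /var/local/osd1 xfs defaults 0 0" >> /etc/fstab    # on node3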


  • Copy the keys and configuration files from admin-node to each node (a verification sketch follows)

  • Run only on the admin-node

    [root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3
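
    To confirm the push worked (my own addition), list /etc/ceph on one of the other nodes from the admin-node and check that ceph.conf and ceph.client.admin.keyring are present:

    [root@admin-node ceph]# ssh node1 ls /etc/ceph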


  • Make sure ceph.client.admin.keyring has the correct read permission

  • Run only on the OSD nodes

    [root@node2 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring


  • From the admin node, run ceph-deploy to prepare the OSDs

    [root@admin-node ceph]# ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1


  • Activate the OSDs

    [root@admin-node ceph]# ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1


  • Check the cluster's health (a small smoke test follows the listing)

    [root@admin-node ceph]# ceph health
    HEALTH_OK
    [root@admin-node ceph]# ceph -s
        cluster 69f64f6d-f084-4b5e-8ba8-7ba3cec9d927
         health HEALTH_OK
         monmap e1: 1 mons at {node1=10.10.10.21:6789/0}
                election epoch 3, quorum 0 node1
         osdmap e14: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
                15459 MB used, 45950 MB / 61410 MB avail
                      64 active+clean
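
    As a final smoke test (my own addition, not part of the original article; the pool and image names are arbitrary), create a small pool and an RBD image to confirm data can actually be stored:

    [root@admin-node ceph]# ceph osd pool create test 64        # pool with 64 placement groups
    [root@admin-node ceph]# rbd create test/disk1 --size 1024   # 1 GiB image in the test pool
    [root@admin-node ceph]# rbd ls test
    [root@admin-node ceph]# ceph df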




This article is from the "若不奮鬥,何以稱王" blog; please keep this attribution: http://wangzc.blog.51cto.com/12875919/1966109
