Ceph Primer: Deploying a Three-Node Ceph Distributed Storage Cluster on CentOS 7
1. Ceph Cluster Environment
Three virtual machines are used. One of them also serves as the admin node, and all three act as both monitor nodes and OSD nodes.
The operating system is CentOS 7 Minimal. Download: http://124.205.69.134/files/4128000005F9FCB3/mirrors.zju.edu.cn/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso
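For reference, the node layout described above can be written as an /etc/hosts-style fragment (the IP addresses are the ones used later in this article; adjust them to your own network):

```text
192.168.59.131  ceph1   # mon + osd; also the admin node running ceph-deploy
192.168.59.132  ceph2   # mon + osd
192.168.59.133  ceph3   # mon + osd
```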
2. Preparation (perform on every host)
# hostnamectl set-hostname ceph1 \\ set the hostname
# vi /etc/sysconfig/network-scripts/ifcfg-ens32 (or nmtui) \\ configure the IP address
# systemctl restart network \\ restart the network service
\\ On CentOS Minimal the Tab key cannot complete command arguments, so running the next command is recommended (experienced users can skip it)
# yum -y install bash-completion.noarch
# date \\ check the system time and make sure all nodes agree
# echo '192.168.59.131 ceph1' >> /etc/hosts \\ edit the hosts file, adding a mapping for every server
# setenforce 0 \\ disable SELinux for the current session
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config \\ edit the config file so SELinux stays disabled after reboot
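If you want to confirm that the sed expression above does what you expect before touching the real file, here is a minimal sketch run against a scratch copy (the two sample lines are assumptions standing in for your actual config):

```shell
# Create a scratch file that mimics /etc/selinux/config.
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"

# Apply the same substitution as in the tutorial.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$tmp"

# Only the SELINUX= line should have changed.
result=$(grep '^SELINUX=' "$tmp")
echo "$result"    # → SELINUX=disabled
rm -f "$tmp"
```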
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent \\ add firewall rules for the Ceph ports (6789 for monitors, 6800-7100 for OSD daemons)
# firewall-cmd --reload \\ apply the firewall rules
# ssh-keygen \\ generate an SSH key pair
# ssh-copy-id [email protected] \\ the key must be copied between every pair of servers
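Since password-less SSH is needed between all the servers, the ssh-copy-id step has to be repeated for each host pair. A minimal sketch that only builds and prints the required commands (host names ceph1–ceph3 are the ones used in this walkthrough; nothing is executed against the hosts):

```shell
# Build the full list of ssh-copy-id invocations needed for a mesh of
# password-less SSH between the three nodes (commands are printed, not run).
cmds=""
for src in ceph1 ceph2 ceph3; do
    for dst in ceph1 ceph2 ceph3; do
        [ "$src" = "$dst" ] && continue    # a node need not copy to itself
        cmds="$cmds
run on $src: ssh-copy-id root@$dst"
    done
done
echo "$cmds"    # six commands in total
```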
3. Install ceph-deploy (only one machine needs this)
# vi /etc/yum.repos.d/ceph.repo \\ add a Ceph yum repository with the following content
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
# yum update && reboot \\ update the system and reboot
# yum install ceph-deploy -y \\ install ceph-deploy
a. An error occurred here:
Downloading packages:
(1/4): python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm | 12 kB 00:00:00
(2/4): python-backports-1.0-8.el7.x86_64.rpm | 5.8 kB 00:00:02
ceph-deploy-1.5.38-0.noarch.rp FAILED ] 90 kB/s | 298 kB 00:00:04 ETA
http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ceph-noarch clean metadata
Trying other mirror.
(3/4): python-setuptools-0.9.8-7.el7.noarch.rpm | 397 kB 00:00:05
Error downloading packages:
ceph-deploy-1.5.38-0.noarch: [Errno 256] No more mirrors to try.
The fix:
#rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
b. This produced another error:
Retrieving http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
warning: /var/tmp/rpm-tmp.gyId2U: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY
error: Failed dependencies:
python-distribute is needed by ceph-deploy-1.5.38-0.noarch
The fix:
# yum install python-distribute -y
Then run the rpm install again:
#rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
4. Deploy the monitor service
# mkdir ~/ceph-cluster && cd ~/ceph-cluster \\ create a cluster configuration directory
# ceph-deploy new ceph1 ceph2 ceph3 \\ this generates three files: a Ceph configuration file, a monitor keyring, and a log file
#ls -l
-rw-r--r-- 1 root root 266 Sep 19 16:41 ceph.conf
-rw-r--r-- 1 root root 172037 Sep 19 16:32 ceph-deploy-ceph.log
-rw------- 1 root root 73 Sep 19 11:03 ceph.mon.keyring
# ceph-deploy mon create-initial \\ initialize the cluster
5. Install Ceph
# ceph-deploy install ceph1 ceph2 ceph3 \\ install Ceph on ceph1, ceph2 and ceph3
a. An error occurred here:
[ceph1][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[ceph1][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
[ceph1][DEBUG ] Preparing... ########################################
[ceph1][DEBUG ] Updating / installing...
[ceph1][DEBUG ] ceph-release-1-1.el7 ########################################
[ceph1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
The fix:
# yum remove ceph-release -y
Then run # ceph-deploy install ceph1 ceph2 ceph3 again.
6. Create the OSDs
# ceph-deploy disk list ceph{1,2,3} \\ list the disks on each server
# ceph-deploy --overwrite-conf osd prepare ceph1:sdc:/dev/sdb ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb \\ prepare the disks: sdc as the data disk, sdb as the journal disk
# ceph-deploy osd activate ceph1:sdc:/dev/sdb ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb \\ activate the OSDs
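The prepare and activate arguments above follow ceph-deploy's host:data-disk:journal pattern. A small sketch that assembles that argument list (pure string handling, nothing is executed against the cluster; it assumes sdc as the data disk and /dev/sdb as the journal on every node, as in this walkthrough):

```shell
# Build the "host:data:journal" argument string for all three nodes.
args=""
for host in ceph1 ceph2 ceph3; do
    args="$args $host:sdc:/dev/sdb"    # data disk sdc, journal on /dev/sdb
done

# The resulting command line, exactly as used above:
echo "ceph-deploy --overwrite-conf osd prepare$args"
```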
An error occurred here for which I could not find a solution online (advice from anyone who knows it is welcome). It did not affect the Ceph deployment: the lsblk output below shows the disk was mounted successfully. (One untested guess: the activate step is probing the whole device /dev/sdc, while udev had already activated and mounted the partition /dev/sdc1.)
[ceph1][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc: Line is truncated:
[ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─cl-root 253:0 0 18G 0 lvm /
└─cl-swap 253:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 30G 0 disk
└─sdb1 8:17 0 5G 0 part
sdc 8:32 0 40G 0 disk
└─sdc1 8:33 0 40G 0 part /var/lib/ceph/osd/ceph-0
sr0 11:0 1 680M 0 rom
rbd0 252:0 0 1G 0 disk /root/rbddir
7. Deployment succeeded
# ceph -s
cluster e508bdeb-b986-4ee8-82c6-c25397a5f1eb
health HEALTH_OK
monmap e2: 3 mons at {ceph1=192.168.59.131:6789/0,ceph2=192.168.59.132:6789/0,ceph3=192.168.59.133:6789/0}
election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e55: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v13638: 384 pgs, 5 pools, 386 MB data, 125 objects
1250 MB used, 118 GB / 119 GB avail
384 active+clean
Original author: 三石頭
All rights reserved by the author. For commercial reproduction, please contact the author for permission; for non-commercial reproduction, please credit the source.