Distributed Storage with Ceph (1): Deploying Ceph
阿新 • Published: 2019-04-20
(2) On all cluster nodes (including the client), create the cent user, set its password, then run the following commands:
(3) On the deploy node, switch to the cent user and set up passwordless SSH login to every node, including the client node
Alternatively, the repo definition above can be appended to the existing CentOS-Base.repo.
(2) From the domestic Ceph mirror at https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/, download the RPM packages listed below. Note: the ceph-deploy RPM only needs to be installed on the deploy node; to get it, find the latest matching ceph-deploy-xxxxx.noarch.rpm at https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/ and download it.
(3) Copy the downloaded RPMs to every node and install them. Note: ceph-deploy-xxxxx.noarch.rpm is needed only on the deploy node; the other nodes do not need it, while the deploy node also needs all the remaining RPMs.
(4) On the deploy node, install ceph-deploy: as root, change into the directory containing the downloaded RPMs and run:
Note: if you hit the error below:
Workaround 1: the install may fail with the problem shown below; remove python-distribute and then install again (or run yum remove python-setuptools -y)
The optional parameters are as follows:
(6) On the deploy node (as the cent user): install the Ceph packages on all nodes
All nodes need the following packages:
Install the packages above on all nodes (including the client):
(7) On the deploy node: install Ceph on all nodes
(8) On the deploy node, initialize the cluster (as the cent user):
(9) On the OSD nodes, prepare the Object Storage Daemons:
(10) On each node, partition the second disk, format it as an XFS file system, and mount it at /data:
Note: before preparing, put an XFS file system on the disk, mount it at /var/lib/ceph/osd, and make sure both the owner and group are ceph:
List a node's disks: ceph-deploy disk list node1
Zap a node's disk: ceph-deploy disk zap node1:/dev/vdb1
(12) Prepare the Object Storage Daemons:
(13) Activate the Object Storage Daemons:
(14) On the deploy node, transfer the config files
(15) Check from any node in the Ceph cluster:
On the deploy node, install the Ceph client and configure it:
(2) On the client, run:
(3) On the client, configure the RBD block device:
(4) File System (CephFS) configuration:
On the deploy node, choose a node on which to create the MDS:
Run the following on node1:
On the MDS node node1, create the cephfs_data and cephfs_metadata pools
Enable the pools as a file system:
Show the Ceph file systems:
Run the following on the client; install ceph-fuse:
Get the admin key:
Mount the CephFS:
Stop the ceph-mds service:
Preface:
Many people want to learn Ceph, but deployment alone stops beginners in their tracks. Because of the overseas package sources (you can guess why), downloads and installs stall and go nowhere. Even after setting up a domestic mirror, ceph-deploy install runs off to the overseas sources again, which is maddening. That is what halts so many people's Ceph studies, so I wrote this article: once you can locate and download the correct Ceph RPMs from a domestic mirror, deploying Ceph is trivial.
I. Deployment preparation:
Prepare 5 machines running CentOS 7.6 (you can get by with as few as 3 by letting the deploy node and client double as Ceph nodes):
- 1 deploy node (one disk; runs ceph-deploy)
- 3 Ceph nodes (two disks each: the first is the system disk and runs a mon, the second is the OSD data disk)
- 1 client (consumes the file system, block storage, and object storage that Ceph provides)
(1) On all Ceph cluster nodes (including the client), set up static name resolution:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.254.163 dlp
172.16.254.64 node1
172.16.254.65 node2
172.16.254.66 node3
172.16.254.63 controller
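Before moving on, it can be worth checking that every name in the hosts file actually resolves on each node. A minimal sketch, using the host names from the list above:

```shell
# Check that each cluster hostname resolves; prints one status line per host
for h in dlp node1 node2 node3 controller; do
  if getent hosts "$h" >/dev/null; then
    echo "$h ok"
  else
    echo "$h FAILED to resolve"
  fi
done
```

Run it on every node (including the client) before continuing; a single unresolvable name will make ceph-deploy fail later with a much less obvious error.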
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
$ ssh-keygen
$ ssh-copy-id node1
$ ssh-copy-id node2
$ ssh-copy-id node3
$ ssh-copy-id controller
(4) On the deploy node, switch to the cent user and, in cent's home directory, create the following file: vi ~/.ssh/config (define all nodes and users)
Host dlp
Hostname dlp
User cent
Host node1
Hostname node1
User cent
Host node2
Hostname node2
User cent
Host node3
Hostname node3
User cent
chmod 600 ~/.ssh/config
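Typing the stanza above by hand for every node is error-prone; the same file can be generated in a loop. A sketch (it writes to a temp file here so nothing clobbers a real ~/.ssh/config; point the output at ~/.ssh/config once you are happy with it):

```shell
# Generate one "Host" stanza per node, all logging in as the cent user
cfg=$(mktemp)
for h in dlp node1 node2 node3; do
  printf 'Host %s\n    Hostname %s\n    User cent\n' "$h" "$h" >> "$cfg"
done
chmod 600 "$cfg"   # ssh refuses config files with loose permissions
cat "$cfg"
```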
II. Configure a domestic Ceph repo on all nodes:
(1) On all nodes (including the client), create ceph-yunwei.repo under /etc/yum.repos.d/:
[ceph-yunwei]
name=ceph-yunwei-install
baseurl=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel/
enabled=1
gpgcheck=0
ceph-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm
ceph-deploy-1.5.39-0.noarch.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm
ceph-resource-agents-10.2.11-0.el7.x86_64.rpm
ceph-selinux-10.2.11-0.el7.x86_64.rpm
ceph-test-10.2.11-0.el7.x86_64.rpm
libcephfs1-10.2.11-0.el7.x86_64.rpm
libcephfs1-devel-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-10.2.11-0.el7.x86_64.rpm
libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm
librados2-10.2.11-0.el7.x86_64.rpm
librados2-devel-10.2.11-0.el7.x86_64.rpm
libradosstriper1-10.2.11-0.el7.x86_64.rpm
libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm
librbd1-10.2.11-0.el7.x86_64.rpm
librbd1-devel-10.2.11-0.el7.x86_64.rpm
librgw2-10.2.11-0.el7.x86_64.rpm
librgw2-devel-10.2.11-0.el7.x86_64.rpm
python-ceph-compat-10.2.11-0.el7.x86_64.rpm
python-cephfs-10.2.11-0.el7.x86_64.rpm
python-rados-10.2.11-0.el7.x86_64.rpm
python-rbd-10.2.11-0.el7.x86_64.rpm
rbd-fuse-10.2.11-0.el7.x86_64.rpm
rbd-mirror-10.2.11-0.el7.x86_64.rpm
rbd-nbd-10.2.11-0.el7.x86_64.rpm
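Rather than clicking through the mirror page, the download URLs can be built from the package names above and fed to wget. A sketch with a subset of the list (the base URL is the mirror path given earlier; network access is assumed when you actually fetch):

```shell
# Build download URLs for the ceph-jewel packages;
# pipe the printed list into: wget -nc -i -
base=https://mirrors.aliyun.com/centos/7.6.1810/storage/x86_64/ceph-jewel
for p in ceph-10.2.11-0.el7.x86_64.rpm \
         ceph-common-10.2.11-0.el7.x86_64.rpm \
         ceph-mon-10.2.11-0.el7.x86_64.rpm \
         ceph-osd-10.2.11-0.el7.x86_64.rpm; do
  echo "$base/$p"
done
```

Extend the list with the remaining package names from the table above; the ceph-deploy RPM lives under a different path (the rpm-jewel/el7/noarch mirror), so fetch that one separately.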
yum localinstall -y ./*
(or sudo yum install ceph-deploy)
Create a working directory for ceph:
mkdir ceph && cd ceph
Workaround 1: the install may fail with the problem shown below; remove python-distribute and then install again (or run yum remove python-setuptools -y)
Note: if you are not installing from the RPMs added above but from a network repo, every node must run yum install ceph ceph-radosgw -y
Workaround 2: install the dependency package python-distribute:
# yum install python-distribute -y
yum reports that python-setuptools has been obsoleted by python2-setuptools, so it installs python2-setuptools-22.0.5-1.el7.noarch (from the openstack-ocata repo) instead, and the transaction completes. Then install again: ceph-deploy-1.5.39-0.noarch.rpm -y
# yum localinstall ceph-deploy-1.5.39-0.noarch.rpm -y
This fails: ceph-deploy-1.5.39-0.noarch requires python-distribute, and although python-setuptools-0.9.8-7.el7.noarch (base) provides python-distribute = 0.9.8-7.el7, it is obsoleted by the already-installed python2-setuptools-22.0.5-1.el7.noarch. yum suggests trying --skip-broken or rpm -Va --nofiles --nodigest.
Remove python2-setuptools-22.0.5-1.el7.noarch:
# yum remove python2-setuptools-22.0.5-1.el7.noarch -y
The removal completes without pulling anything else out.
After adjusting the yum repos, the install goes through as long as the installed python-setuptools is version 0.9.8-7.el7:
# yum localinstall ceph-deploy-1.5.39-0.noarch.rpm -y
This time yum pulls in python-setuptools-0.9.8-7.el7 from the base repo as a dependency and installs ceph-deploy-1.5.39-0 successfully. Check the version:
# ceph -v
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
(5) On the deploy node (as the cent user): configure the new cluster
ceph-deploy new node1 node2 node3
vim ./ceph.conf
Add: osd pool default size = 2
public_network = 192.168.254.0/24
cluster_network = 172.16.254.0/24
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
osd_crush_chooseleaf_type = 1
[mon]
mon_clock_drift_allowed = 0.5
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_max_sync_interval = 5
filestore_min_sync_interval = 0.1
filestore_fd_cache_size = 655350
filestore_omap_header_cache_size = 655350
filestore_fd_cache_random = true
osd op threads = 8
osd disk threads = 4
filestore op threads = 8
max_open_files = 655350
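The osd_pool_default_pg_num value above should not be copied blindly: a common rule of thumb is (number of OSDs × 100) / replica count, rounded up to a power of two. A quick shell sketch of that calculation (3 OSDs and a replica count of 2, matching this walkthrough's layout, are assumptions; adjust for your cluster):

```shell
# Rule-of-thumb placement-group count: (OSDs * 100) / replicas,
# rounded up to the next power of 2
osds=3
replicas=2
target=$(( osds * 100 / replicas ))   # 150 for this layout
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # prints 256
```

For a tiny lab cluster a much smaller value (like the 8 above, or 128 used for the CephFS pools later) is fine; the rule of thumb matters when sizing production pools.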
$ ls   # in ~/cephjrpm
ceph-10.2.11-0.el7.x86_64.rpm ceph-resource-agents-10.2.11-0.el7.x86_64.rpm librbd1-10.2.11-0.el7.x86_64.rpm
ceph-base-10.2.11-0.el7.x86_64.rpm          ceph-selinux-10.2.11-0.el7.x86_64.rpm            librbd1-devel-10.2.11-0.el7.x86_64.rpm
ceph-common-10.2.11-0.el7.x86_64.rpm ceph-test-10.2.11-0.el7.x86_64.rpm librgw2-10.2.11-0.el7.x86_64.rpm
ceph-devel-compat-10.2.11-0.el7.x86_64.rpm libcephfs1-10.2.11-0.el7.x86_64.rpm librgw2-devel-10.2.11-0.el7.x86_64.rpm
cephfs-java-10.2.11-0.el7.x86_64.rpm libcephfs1-devel-10.2.11-0.el7.x86_64.rpm python-ceph-compat-10.2.11-0.el7.x86_64.rpm
ceph-fuse-10.2.11-0.el7.x86_64.rpm libcephfs_jni1-10.2.11-0.el7.x86_64.rpm python-cephfs-10.2.11-0.el7.x86_64.rpm
ceph-libs-compat-10.2.11-0.el7.x86_64.rpm libcephfs_jni1-devel-10.2.11-0.el7.x86_64.rpm python-rados-10.2.11-0.el7.x86_64.rpm
ceph-mds-10.2.11-0.el7.x86_64.rpm librados2-10.2.11-0.el7.x86_64.rpm python-rbd-10.2.11-0.el7.x86_64.rpm
ceph-mon-10.2.11-0.el7.x86_64.rpm librados2-devel-10.2.11-0.el7.x86_64.rpm rbd-fuse-10.2.11-0.el7.x86_64.rpm
ceph-osd-10.2.11-0.el7.x86_64.rpm libradosstriper1-10.2.11-0.el7.x86_64.rpm rbd-mirror-10.2.11-0.el7.x86_64.rpm
ceph-radosgw-10.2.11-0.el7.x86_64.rpm libradosstriper1-devel-10.2.11-0.el7.x86_64.rpm rbd-nbd-10.2.11-0.el7.x86_64.rpm
yum localinstall ./* -y
ceph-deploy install dlp node1 node2 node3
ceph-deploy mon create-initial
mkdir /data && chown ceph.ceph /data
# fdisk /dev/vdb
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 40G 0 disk
├─vda1 252:1 0 512M 0 part /boot
└─vda2 252:2 0 39.5G 0 part
├─cl-root 253:0 0 35.5G 0 lvm /
└─cl-swap 253:1 0 4G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
└─vdb1 252:17 0 10G 0 part
# mkfs -t xfs /dev/vdb1
# mount /dev/vdb1 /data/
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 252:0 0 40G 0 disk
├─vda1 252:1 0 512M 0 part /boot
└─vda2 252:2 0 39.5G 0 part
├─cl-root 253:0 0 35.5G 0 lvm /
└─cl-swap 253:1 0 4G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
└─vdb1 252:17 0 10G 0 part /data
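The mount above does not survive a reboot. To make it persistent, an entry along these lines can be added to /etc/fstab (a sketch: /dev/vdb1 matches the lsblk output above, but using the filesystem UUID instead of the device name is safer if device names can change between boots):

```
/dev/vdb1  /data  xfs  defaults  0 0
```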
(11) Create the OSD mount directory under /data/:
mkdir /data/osd
chown -R ceph.ceph /data/
chmod 750 /data/osd/
ln -s /data/osd /var/lib/ceph
ceph-deploy osd prepare node1:/var/lib/ceph/osd node2:/var/lib/ceph/osd node3:/var/lib/ceph/osd
ceph-deploy osd activate node1:/var/lib/ceph/osd node2:/var/lib/ceph/osd node3:/var/lib/ceph/osd
ceph-deploy admin dlp node1 node2 node3
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
ceph -s
III. Client setup:
(1) The client also needs the cent user:
useradd cent && echo "123" | passwd --stdin cent
echo -e 'Defaults:cent !requiretty\ncent ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph
chmod 440 /etc/sudoers.d/ceph
ceph-deploy install controller
ceph-deploy admin controller
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
Create an RBD image:        rbd create disk01 --size 10G --image-feature layering
List RBD images:            rbd ls -l
Map the image:              sudo rbd map disk01
Show the mapping:           rbd showmapped
Format disk01 with XFS:     sudo mkfs.xfs /dev/rbd0
Mount the device:           sudo mount /dev/rbd0 /mnt
Verify the mount succeeded: df -hT
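Like the filesystem mount, the rbd map does not persist across reboots. One option is the rbdmap facility shipped with ceph-common: list the image in /etc/ceph/rbdmap and enable the rbdmap service. A sketch (the image and keyring names match this walkthrough; adjust for your pool and user):

```
# /etc/ceph/rbdmap -- images to map at boot
rbd/disk01  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
```

Then `sudo systemctl enable rbdmap` so the image is mapped before any fstab entry that mounts /dev/rbd0.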
ceph-deploy mds create node1
sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls
ceph mds stat
yum -y install ceph-fuse
[email protected] "sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring" > admin.key
chmod600 admin.key
mount -t ceph node1:6789:/ /mnt -o name=admin,secretfile=admin.key
df -hT
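To remount the filesystem automatically at boot, a kernel-client /etc/fstab entry of this shape can be used (a sketch: the secretfile path is wherever the admin.key created in the previous step lives, and _netdev delays the mount until the network is up):

```
node1:6789:/  /mnt  ceph  name=admin,secretfile=/path/to/admin.key,_netdev  0 0
```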
systemctl stop ceph-mds@node1.service
ceph mds fail 0
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd lspools
Output: 0 rbd, 1 cephfs_data, 2 cephfs_metadata
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
IV. Removing the environment:
ceph-deploy purge dlp node1 node2 node3 controller
ceph-deploy purgedata dlp node1 node2 node3 controller
ceph-deploy forgetkeys
rm -rf ceph*