This article walks through building an HA NFS share backed by ceph rbd, with corosync and pacemaker providing the high availability.
1. Architecture diagram
2. Environment preparation
2.1 IP planning
Two rbd-capable NFS server hosts: 10.20.18.97 and 10.20.18.111
VIP: 10.20.18.123, on the same subnet as the two hosts
2.2 Software installation
# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
2.3 SSH mutual trust between the nodes (omitted)
2.4 NTP configuration (omitted)
2.5 Configure /etc/hosts (both nodes)
# vi /etc/hosts
10.20.18.97 SZB-L0005908
10.20.18.111 SZB-L0005469
3. Corosync configuration (both nodes)
3.1 Configure corosync
# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}

amf {
    mode: disabled
}

service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
bindnetaddr is the node's own IP address.
mcastaddr can be any valid multicast address.
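If you are unsure which address to put in bindnetaddr, list the node's IPv4 addresses first; this is just a quick check, and the interface name depends on your hosts:

# ip -4 addr show        # confirm the address/subnet of the cluster interface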
3.2 Start corosync
# service corosync start
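Before moving on, it is worth confirming that corosync has actually formed its ring on both nodes; a minimal check with the standard corosync tooling:

# corosync-cfgtool -s        # ring 0 should be active with no faults
# service corosync status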
3.3 Cluster properties (because there are only two nodes, the quorum requirement is ignored)
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
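To confirm the two properties were accepted, crm configure show prints the live configuration; a quick sanity check:

# crm configure show | grep -E "stonith-enabled|no-quorum-policy"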
3.4 Check node status (both nodes should show Online)
# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]
4. Pacemaker resource configuration
Note: Pacemaker is responsible for managing the cluster resources. To build the rbd-backed NFS service, it manages the rbd map, filesystem mount, NFS export, and VIP resources; in short, exporting the rbd image over NFS happens automatically.
4.1 Create and format the rbd image
(The image used in this example is share/share2; these steps only need to be run once, on one node.)
# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
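Note that mkfs.xfs runs on the device reported by rbd showmapped (/dev/rbd1 here, but the number depends on map order), while the Filesystem resource configured below uses the stable udev path /dev/rbd/share/share2. While the image is still mapped, you can verify that symlink exists, assuming the standard ceph udev rules are installed:

# ls -l /dev/rbd/share/share2        # should point at the mapped /dev/rbdN device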
4.2 Pacemaker resource configuration
4.2.1 Prepare the rbd.in resource agent
(Copy the script src/ocf/rbd.in from the ceph source tree into the directory below; do this on all nodes.)
# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
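To confirm pacemaker can actually see the new resource agent, crmsh can list and describe it; run this on each node as a sanity check:

# crm ra list ocf ceph               # should list rbd.in
# crm ra info ocf:ceph:rbd.in        # prints the agent's parameters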
Note: the configuration below only needs to be done on one node.
4.2.2 Configure the rbd map resource
(The following sections can be pasted directly via the crm configure edit command.)
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
4.2.3 mount 文件系統
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
4.2.4 nfs-export
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
4.2.5 VIP 配置
primitive p_vip_1 IPaddr \
params ip=10.20.18.123 cidr_netmask=24 \
op monitor interval=5
4.2.6 nfs 服務配置
primitive p_rpcbind lsb:rpcbind \
op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
op monitor interval=10s timeout=30s
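The two lsb: primitives simply wrap the distribution's init scripts, so /etc/init.d/rpcbind and /etc/init.d/nfs must exist on both nodes; a quick check, and optionally disable them at boot so only pacemaker starts them:

# ls -l /etc/init.d/rpcbind /etc/init.d/nfs
# chkconfig rpcbind off ; chkconfig nfs off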
4.3 Resource groups
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
4.4 Resource location constraint
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
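Besides the hard failover test in section 5.2, once the resources are running (after section 4.6) crmsh can move the group by hand, which is a gentler way to exercise the agents; migrate adds a temporary location constraint and unmigrate removes it again:

# crm resource migrate g_rbd_share_1 SZB-L0005908
# crm resource unmigrate g_rbd_share_1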
4.5 View the full configuration (optional)
# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1
4.6 Restart the corosync service (both nodes)
# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]
Resource Group: g_rbd_share_1
p_rbd_map_1 (ocf::ceph:rbd.in): Started SZB-L0005469
p_fs_rbd_1 (ocf::heartbeat:Filesystem): Started SZB-L0005469
p_export_rbd_1 (ocf::heartbeat:exportfs): Started SZB-L0005469
p_vip_1 (ocf::heartbeat:IPaddr): Started SZB-L0005469
Clone Set: clo_nfs [g_nfs]
Started: [ SZB-L0005469 SZB-L0005908 ]
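Before testing over NFS, a quick sanity check on the active node (SZB-L0005469 here) confirms that the VIP, mount, and export are all in place:

# ip addr show | grep 10.20.18.123
# mount | grep /mnt/share2
# exportfs -v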
5. Testing
5.1 Check the export list (via the VIP)
# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
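From a client covered by the clientspec (10.20.0.0/24 in this configuration), the share can then be mounted through the VIP; the mount point /mnt/nfs-test below is just an example:

# mkdir -p /mnt/nfs-test
# mount -t nfs 10.20.18.123:/mnt/share2 /mnt/nfs-test
# touch /mnt/nfs-test/hello && ls /mnt/nfs-test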
5.2 Failover test
# service corosync stop # run on SZB-L0005469
# crm_mon -1 # run on SZB-L0005908
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]
Resource Group: g_rbd_share_1
p_rbd_map_1 (ocf::ceph:rbd.in): Started SZB-L0005908
p_fs_rbd_1 (ocf::heartbeat:Filesystem): Started SZB-L0005908
p_export_rbd_1 (ocf::heartbeat:exportfs): Started SZB-L0005908
p_vip_1 (ocf::heartbeat:IPaddr): Started SZB-L0005908
Clone Set: clo_nfs [g_nfs]
Started: [ SZB-L0005908 ]
Stopped: [ SZB-L0005469 ]
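To bring the stopped node back and confirm it rejoins the cluster (with resource-stickiness=0 and the inf: location constraint on SZB-L0005469, the group should fail back to that node once it is online again):

# service corosync start     # run on SZB-L0005469
# crm_mon -1                 # g_rbd_share_1 should return to SZB-L0005469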
This completes the ceph rbd + corosync + pacemaker HA-NFS setup: the rbd mapping, filesystem mount, NFS export, and VIP are managed as a single group, so the share keeps working through a node failure.