1. The real size of an rbd image
Because Ceph uses thin provisioning, a block is only allocated when data is written to it. Creating a very large image therefore completes instantly: apart from some metadata, Ceph has not actually allocated any space. So how much space does an rbd image really consume? Using my environment as an example:
[root@osd1 /]# rbd ls myrbd
hello.txt
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.13446b8b4567
format: 2
features: layering
[root@osd1 /]# rbd diff myrbd/rbd1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
14.2812 MB
[root@osd1 /]# rbd diff myrbd/rbd1
Offset Length Type
0 131072 data
4194304 16384 data
130023424 16384 data
260046848 16384 data
390070272 16384 data
520093696 4194304 data
524288000 4194304 data
528482304 2129920 data
650117120 16384 data
780140544 16384 data
910163968 16384 data
1040187392 16384 data
1069547520 4194304 data
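The awk one-liner above can be wrapped in a small helper. The sketch below (my own, not from the original article; `sum_used_mb` is a hypothetical name) sums the Length column of `rbd diff` output; it is fed the captured listing from above through a here-document so it runs without a cluster — against a live cluster you would pipe `rbd diff myrbd/rbd1` into it instead:

```shell
#!/bin/sh
# Sum the Length column of `rbd diff` plain output and report it in MB.
# Skips the header line because its third field is "Type", not "data".
sum_used_mb() {
    awk '$3 == "data" { sum += $2 } END { printf "%.2f MB\n", sum / 1024 / 1024 }'
}

# Captured listing from the example above; with a real cluster use:
#   rbd diff myrbd/rbd1 | sum_used_mb
sum_used_mb <<'EOF'
0 131072 data
4194304 16384 data
130023424 16384 data
260046848 16384 data
390070272 16384 data
520093696 4194304 data
524288000 4194304 data
528482304 2129920 data
650117120 16384 data
780140544 16384 data
910163968 16384 data
1040187392 16384 data
1069547520 4194304 data
EOF
# prints "14.28 MB"
```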
2. rbd format1 vs rbd format2
rbd format1:
[root@osd1 /]# rbd create myrbd/rbd1 -s 8
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
size 8192 kB in 2 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.13fb.6b8b4567
format: 1
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool image snap device
0 myrbd rbd1 - /dev/rbd0
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0
dd: writing to '/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.25155 s, 3.7 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
rb.0.13fb.6b8b4567.000000000001
rb.0.13fb.6b8b4567.000000000000
$image_name.rbd: contains the id of this image (rb.0.13fb.6b8b4567)
$rbd_id.$fragment: a data object
rbd_directory: the list of rbd images in the current pool
rbd format2:
[root@osd1 /]# rbd create myrbd/rbd1 -s 8 --image-format=2
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
size 8192 kB in 2 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.13436b8b4567
format: 2
features: layering
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_header.13436b8b4567
rbd_id.rbd1
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool image snap device
0 myrbd rbd1 - /dev/rbd0
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0
dd: writing to '/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.14407 s, 3.9 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_data.13436b8b4567.0000000000000000
rbd_data.13436b8b4567.0000000000000001
rbd_header.13436b8b4567
rbd_id.rbd1
rbd_data.$rbd_id.$fragment: a data object
rbd_directory: the list of rbd images in the current pool
rbd_header.$rbd_id: the image's metadata
rbd_id.$image_name: contains the id of this image (13436b8b4567)
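In both naming schemes the fragment number is simply the byte offset divided by the object size (4 MB here, since order = 22); format 1 pads it to 12 hex digits, format 2 to 16. A rough sketch of the mapping, assuming the example prefixes from above (`objname_for_offset` is a hypothetical helper name):

```shell
#!/bin/sh
# Map a byte offset inside an rbd image to the RADOS object holding it.
# order=22 means 4 MB (4194304-byte) objects.
objname_for_offset() {
    # $1 = image format (1 or 2), $2 = block_name_prefix, $3 = byte offset
    frag=$(( $3 / 4194304 ))
    if [ "$1" -eq 1 ]; then
        printf '%s.%012x\n' "$2" "$frag"    # format 1: 12 hex digits
    else
        printf '%s.%016x\n' "$2" "$frag"    # format 2: 16 hex digits
    fi
}

objname_for_offset 1 rb.0.13fb.6b8b4567 4194304
# prints "rb.0.13fb.6b8b4567.000000000001"
objname_for_offset 2 rbd_data.13436b8b4567 4194304
# prints "rbd_data.13436b8b4567.0000000000000001"
```

The two outputs match the object names seen in the `rados ls` listings above.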
3. Ceph Primary Affinity
[root@mon0 yum.repos.d]# ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok config show | grep primary_affinity
"mon_osd_allow_primary_affinity": "false",
# enable primary affinity in ceph.conf
mon osd allow primary affinity = true
[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep '\[0,' | wc -l
dumped all in format plain
[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep ',0\]' | wc -l
dumped all in format plain
# ceph osd primary-affinity osd.0 0.5
set osd.0 primary-affinity to 0.5 (8327682)
# ceph pg dump | grep active+clean | egrep '\[0,' | wc -l
# ceph pg dump | grep active+clean | egrep ',0\]' | wc -l
# ceph osd primary-affinity osd.0 0
set osd.0 primary-affinity to 0 (802)
# ceph pg dump | grep active+clean | egrep '\[0,' | wc -l
# ceph pg dump | grep active+clean | egrep ',0\]' | wc -l
180
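The egrep patterns above only catch osd.0 at the head or tail of an acting set. A more general tally of how often each OSD acts as primary (the first OSD in the acting set, e.g. the 0 in [0,2,1]) can be sketched like this — my own helper, run here against captured sample sets so it works offline; note that a real `ceph pg dump` line also contains the up set, so the relevant column would need to be selected first:

```shell
#!/bin/sh
# Count how many PGs each OSD leads. The primary is the first OSD listed
# in the acting set, e.g. the "0" in "[0,2,1]".
count_primaries() {
    grep -o '\[[0-9][0-9,]*\]' |       # pull out bracketed OSD sets
        sed 's/^\[//; s/,.*//; s/\]//' |  # keep only the first (primary) OSD id
        sort | uniq -c                    # tally per OSD
}

# Captured sample acting sets; with a cluster, feed in the acting-set
# column of `ceph pg dump | grep active+clean` instead.
count_primaries <<'EOF'
[0,2,1]
[2,0,1]
[0,1,2]
[1,2,0]
EOF
```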
4. Upgrading Ceph
On the 29th, Ceph released version 0.87 (Giant), and we upgraded right away. The process is very simple: change one thing in ceph.repo, then run `yum update ceph`. After the upgrade completes, restart the various services. ceph.repo looks like this:
[root@mon0 software]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/$basearch
priority=1
gpgcheck=1
type=rpm-md
[ceph-source]
name=Ceph source packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/SRPMS
priority=1
gpgcheck=1
type=rpm-md
[Ceph-noarch]
name=Ceph noarch packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/noarch
priority=1
gpgcheck=1
type=rpm-md
5. ceph admin socket
The ceph admin socket exposes a daemon's live parameters, which is very handy for verification and debugging.
$ ceph --admin-daemon /path/to/your/ceph/socket <command>
[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok help
{ "config diff": "dump diff of current config and default config",
  "config get": "config get <field>: get the config value",
  "config set": "config set <field> <val> [<val> ...]: set a config variable",
  "config show": "dump current config settings",
  "dump_blacklist": "dump blacklisted clients and times",
  "dump_historic_ops": "show slowest recent ops",
  "dump_op_pq_state": "dump op priority queue state",
  "dump_ops_in_flight": "show the ops currently in flight",
  "dump_reservations": "show recovery reservations",
  "dump_watchers": "show clients which have active watches, and on which objects",
  "flush_journal": "flush the journal to permanent store",
  "get_command_descriptions": "list available commands",
  "getomap": "output entire object map",
  "git_version": "get git sha1",
  "help": "list available commands",
  "injectdataerr": "inject data error into omap",
  "injectmdataerr": "inject metadata error",
  "log dump": "dump recent log entries to log file",
  "log flush": "flush log entries to log file",
  "log reopen": "reopen log file",
  "objecter_requests": "show in-progress osd requests",
  "perf dump": "dump perfcounters value",
  "perf schema": "dump perfcounters schema",
  "rmomapkey": "remove omap key",
  "setomapheader": "set omap header",
  "setomapval": "set omap key",
  "status": "high-level status of OSD",
  "truncobj": "truncate object to length",
  "version": "get ceph version"}
Getting the journal-related parameter settings:
[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.osd2.asok config show | grep journal
"debug_journaler": "0\/5",
"debug_journal": "1\/3",
"journaler_allow_split_entries": "true",
"journaler_write_head_interval": "15",
"journaler_prefetch_periods": "10",
"journaler_prezero_periods": "5",
"journaler_batch_interval": "0.001",
"journaler_batch_max": "0",
"mds_kill_journal_at": "0",
"mds_kill_journal_expire_at": "0",
"mds_kill_journal_replay_at": "0",
"mds_journal_format": "1",
"osd_journal": "\/var\/lib\/ceph\/osd\/ceph-osd2\/journal",
"osd_journal_size": "5120",
"filestore_fsync_flushes_journal_data": "false",
"filestore_journal_parallel": "false",
"filestore_journal_writeahead": "false",
"filestore_journal_trailing": "false",
"journal_dio": "true",
"journal_aio": "true",
"journal_force_aio": "false",
"journal_max_corrupt_search": "10485760",
"journal_block_align": "true",
"journal_write_header_frequency": "0",
"journal_max_write_bytes": "10485760",
"journal_max_write_entries": "100",
"journal_queue_max_ops": "300",
"journal_queue_max_bytes": "33554432",
"journal_align_min_size": "65536",
"journal_replay_from": "0",
"journal_zero_on_create": "false",
"journal_ignore_corruption": "false",
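A single value can be fished out of `config show` output with a little awk. The sketch below (`get_setting` is a hypothetical helper of mine) runs against a captured fragment of the output above so it works offline; against a live daemon you would pipe `ceph --admin-daemon <socket> config show` into it instead:

```shell
#!/bin/sh
# Extract one setting's value from `config show` output lines of the
# form: "key": "value",
get_setting() {
    # $1 = setting name; reads config output on stdin
    awk -F'[:,]' -v key="$1" '$0 ~ ("\"" key "\"") {
        gsub(/[" ]/, "", $2)   # strip quotes and spaces around the value
        print $2
    }'
}

# Captured fragment of the journal settings shown above.
get_setting journal_max_write_bytes <<'EOF'
"journal_max_write_bytes": "10485760",
"journal_queue_max_ops": "300",
EOF
# prints "10485760"
```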