ceph-deploy Source Code Analysis: the osd Module
The osd.py module of ceph-deploy manages OSD daemons; its main tasks are creating and activating OSDs.
The osd subcommand has the following format:
ceph-deploy osd [-h] {list,create,prepare,activate} ...
list: show information about the OSDs on the given hosts
create: create an OSD (prepare plus activate)
prepare: prepare an OSD by formatting/partitioning the disk
activate: activate a previously prepared OSD
OSD Management
The make function
Its priority is 50.
The default handler registered for the osd subcommand is the osd function.
@priority(50)
def make(parser):
    """
    Prepare a data disk on remote host.
    """
    sub_command_help = dedent("""
    Manage OSDs by preparing a data disk on remote host.

    For paths, first prepare and then activate:

        ceph-deploy osd prepare {osd-node-name}:/path/to/osd
        ceph-deploy osd activate {osd-node-name}:/path/to/osd

    For disks or journals the `create` command will do prepare and activate
    for you.
    """)
    parser.formatter_class = argparse.RawDescriptionHelpFormatter
    parser.description = sub_command_help

    osd_parser = parser.add_subparsers(dest='subcommand')
    osd_parser.required = True

    osd_list = osd_parser.add_parser(
        'list',
        help='List OSD info from remote host(s)'
        )
    osd_list.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='remote host to list OSDs from'
        )

    osd_create = osd_parser.add_parser(
        'create',
        help='Create new Ceph OSD daemon by preparing and activating disk'
        )
    osd_create.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_create.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs',
                 'btrfs'
                 ],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_create.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_create.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_create.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_create.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_prepare = osd_parser.add_parser(
        'prepare',
        help='Prepare a disk for use as Ceph OSD by formatting/partitioning disk'
        )
    osd_prepare.add_argument(
        '--zap-disk',
        action='store_true',
        help='destroy existing partition table and content for DISK',
        )
    osd_prepare.add_argument(
        '--fs-type',
        metavar='FS_TYPE',
        choices=['xfs',
                 'btrfs'
                 ],
        default='xfs',
        help='filesystem to use to format DISK (xfs, btrfs)',
        )
    osd_prepare.add_argument(
        '--dmcrypt',
        action='store_true',
        help='use dm-crypt on DISK',
        )
    osd_prepare.add_argument(
        '--dmcrypt-key-dir',
        metavar='KEYDIR',
        default='/etc/ceph/dmcrypt-keys',
        help='directory where dm-crypt keys are stored',
        )
    osd_prepare.add_argument(
        '--bluestore',
        action='store_true', default=None,
        help='bluestore objectstore',
        )
    osd_prepare.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to prepare',
        )

    osd_activate = osd_parser.add_parser(
        'activate',
        help='Start (activate) Ceph OSD from disk that was previously prepared'
        )
    osd_activate.add_argument(
        'disk',
        nargs='+',
        metavar='HOST:DISK[:JOURNAL]',
        type=colon_separated,
        help='host and disk to activate',
        )

    parser.set_defaults(
        func=osd,
        )
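Every positional disk argument above is run through the colon_separated callable passed as type=, which turns a HOST:DISK[:JOURNAL] string into the (hostname, disk, journal) tuples the later loops unpack. That helper is defined elsewhere in ceph-deploy and is not listed in this article; the version below is only a minimal sketch of the parsing it must perform, and its exact error handling is an assumption.

import argparse


def colon_separated(s):
    # split HOST:DISK[:JOURNAL] into at most three fields
    parts = s.split(':')
    if len(parts) > 3:
        raise argparse.ArgumentTypeError('expected HOST:DISK[:JOURNAL], got %r' % s)
    # pad with None so the result always unpacks into (host, disk, journal)
    while len(parts) < 3:
        parts.append(None)
    host, disk, journal = parts
    return host, disk, journal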
The osd function dispatches the subcommands: list, create, prepare, and activate are handled by osd_list, prepare, prepare, and activate respectively (create calls prepare with activation enabled).
def osd(args):
    cfg = conf.ceph.load(args)

    if args.subcommand == 'list':
        osd_list(args, cfg)
    elif args.subcommand == 'prepare':
        prepare(args, cfg, activate_prepared_disk=False)
    elif args.subcommand == 'create':
        prepare(args, cfg, activate_prepared_disk=True)
    elif args.subcommand == 'activate':
        activate(args, cfg)
    else:
        LOG.error('subcommand %s not implemented', args.subcommand)
        sys.exit(1)
Listing OSDs
Command-line format: ceph-deploy osd list [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The osd_list function:
Runs `ceph --cluster=ceph osd tree --format=json` on a monitor to obtain OSD information
Runs `ceph-disk list` on each host to obtain disk and partition information
Combines the results of the two commands with the files in each OSD directory to assemble and print the OSD listing
def osd_list(args, cfg):
    monitors = mon.get_mon_initial_members(args, error_on_empty=True, _cfg=cfg)

    # get the osd tree from a monitor host
    mon_host = monitors[0]
    distro = hosts.get(
        mon_host,
        username=args.username,
        callbacks=[packages.ceph_is_installed]
    )
    # run `ceph --cluster=ceph osd tree --format=json` to get the OSD information
    tree = osd_tree(distro.conn, args.cluster)
    distro.conn.exit()

    interesting_files = ['active', 'magic', 'whoami', 'journal_uuid']

    for hostname, disk, journal in args.disk:
        distro = hosts.get(hostname, username=args.username)
        remote_module = distro.conn.remote_module
        # list the OSD names under the OSD directory /var/lib/ceph/osd
        osds = distro.conn.remote_module.listdir(constants.osd_path)

        # run `ceph-disk list` to get disk and partition information
        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        output, err, exit_code = remoto.process.check(
            distro.conn,
            [
                ceph_disk_executable,
                'list',
            ]
        )

        # iterate over the OSDs on this host
        for _osd in osds:
            # OSD path, e.g. /var/lib/ceph/osd/ceph-0
            osd_path = os.path.join(constants.osd_path, _osd)
            # journal path
            journal_path = os.path.join(osd_path, 'journal')
            # OSD id
            _id = int(_osd.split('-')[-1])  # split on dash, get the id
            osd_name = 'osd.%s' % _id
            metadata = {}
            json_blob = {}

            # piggy back from ceph-disk and get the mount point
            # match the ceph-disk list output against the OSD name to find its device
            device = get_osd_mount_point(output, osd_name)
            if device:
                metadata['device'] = device

            # read interesting metadata from files
            # i.e. the active, magic, whoami, journal_uuid files in the OSD directory
            for f in interesting_files:
                osd_f_path = os.path.join(osd_path, f)
                if remote_module.path_exists(osd_f_path):
                    metadata[f] = remote_module.readline(osd_f_path)

            # do we have a journal path?
            if remote_module.path_exists(journal_path):
                metadata['journal path'] = remote_module.get_realpath(journal_path)

            # is this OSD in osd tree?
            for blob in tree['nodes']:
                if blob.get('id') == _id:  # matches our OSD
                    json_blob = blob

            # print the OSD information
            print_osd(
                distro.conn.logger,
                hostname,
                osd_path,
                json_blob,
                metadata,
            )

        distro.conn.exit()
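The osd_tree helper called at the top of osd_list is what actually runs `ceph --cluster=<cluster> osd tree --format=json` on the monitor and parses the JSON. The sketch below is a simplified reconstruction rather than the verbatim source; in particular, the import paths and the way remoto's stdout lines are joined are assumptions.

import json

from ceph_deploy.lib import remoto
from ceph_deploy.util import system


def osd_tree(conn, cluster):
    # run `ceph --cluster=<cluster> osd tree --format=json` on the monitor host
    ceph_executable = system.executable_path(conn, 'ceph')
    output, err, exit_code = remoto.process.check(
        conn,
        [
            ceph_executable,
            '--cluster={0}'.format(cluster),
            'osd',
            'tree',
            '--format=json',
        ]
    )
    # remoto returns stdout as a list of lines; join and parse them into the
    # dict whose 'nodes' list osd_list matches OSD ids against
    return json.loads(''.join(output))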
Creating / Preparing an OSD
Command-line format for creating an OSD: ceph-deploy osd create [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
Command-line format for preparing an OSD: ceph-deploy osd prepare [-h] [--zap-disk] [--fs-type FS_TYPE] [--dmcrypt] [--dmcrypt-key-dir KEYDIR] [--bluestore] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The prepare function: when the activate_prepared_disk argument is True it creates the OSD, when False it only prepares it.
Calls exceeds_max_osds; a warning is logged for any host that gets more than 20 OSDs (see the helper sketches after the function body below)
Calls get_bootstrap_osd_key to read ceph.bootstrap-osd.keyring from the current directory
Loops over the disks:
Writes the configuration to /etc/ceph/ceph.conf
Creates and writes /var/lib/ceph/bootstrap-osd/ceph.keyring
Calls prepare_disk to prepare the OSD
Checks the OSD status and logs a warning for any abnormal state
def prepare(args, cfg, activate_prepared_disk):
    LOG.debug(
        'Preparing cluster %s disks %s',
        args.cluster,
        ' '.join(':'.join(x or '' for x in t) for t in args.disk),
    )
    # warn if a single host ends up with more than 20 OSDs
    hosts_in_danger = exceeds_max_osds(args)

    if hosts_in_danger:
        LOG.warning('if ``kernel.pid_max`` is not increased to a high enough value')
        LOG.warning('the following hosts will encounter issues:')
        for host, count in hosts_in_danger.items():
            LOG.warning('Host: %8s, OSDs: %s' % (host, count))

    # read ceph.bootstrap-osd.keyring from the current directory
    key = get_bootstrap_osd_key(cluster=args.cluster)

    bootstrapped = set()
    errors = 0
    for hostname, disk, journal in args.disk:
        try:
            if disk is None:
                raise exc.NeedDiskError(hostname)

            distro = hosts.get(
                hostname,
                username=args.username,
                callbacks=[packages.ceph_is_installed]
            )
            LOG.info(
                'Distro info: %s %s %s',
                distro.name,
                distro.release,
                distro.codename
            )

            if hostname not in bootstrapped:
                bootstrapped.add(hostname)
                LOG.debug('Deploying osd to %s', hostname)

                conf_data = conf.ceph.load_raw(args)
                # write the configuration to /etc/ceph/ceph.conf
                distro.conn.remote_module.write_conf(
                    args.cluster,
                    conf_data,
                    args.overwrite_conf
                )

                # create and write /var/lib/ceph/bootstrap-osd/ceph.keyring
                create_osd_keyring(distro.conn, args.cluster, key)

            LOG.debug('Preparing host %s disk %s journal %s activate %s',
                      hostname, disk, journal, activate_prepared_disk)

            storetype = None
            if args.bluestore:
                storetype = 'bluestore'

            # prepare the OSD
            prepare_disk(
                distro.conn,
                cluster=args.cluster,
                disk=disk,
                journal=journal,
                activate_prepared_disk=activate_prepared_disk,
                init=distro.init,
                zap=args.zap_disk,
                fs_type=args.fs_type,
                dmcrypt=args.dmcrypt,
                dmcrypt_dir=args.dmcrypt_key_dir,
                storetype=storetype,
            )

            # give the OSD a few seconds to start
            time.sleep(5)
            # check the OSD status and log a warning for any abnormal state
            catch_osd_errors(distro.conn, distro.conn.logger, args)
            LOG.debug('Host %s is now ready for osd use.', hostname)
            distro.conn.exit()

        except RuntimeError as e:
            LOG.error(e)
            errors += 1

    if errors:
        raise exc.GenericError('Failed to create %d OSDs' % errors)
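Two helpers referenced in the step list above are not reproduced in the article: exceeds_max_osds, which counts how many DISK arguments each host received and flags hosts above 20, and get_bootstrap_osd_key, which reads {cluster}.bootstrap-osd.keyring from the directory ceph-deploy is run in. The versions below are illustrative reconstructions of that behaviour rather than the verbatim source.

from collections import Counter


def exceeds_max_osds(args, max_osds=20):
    # count how many HOST:DISK[:JOURNAL] tuples target each host and return
    # {hostname: count} for every host above the threshold
    counts = Counter(hostname for hostname, disk, journal in args.disk)
    return dict(
        (host, count) for host, count in counts.items() if count > max_osds
    )


def get_bootstrap_osd_key(cluster):
    # the keyring is expected in the working directory, e.g.
    # ceph.bootstrap-osd.keyring for the default cluster name
    path = '{cluster}.bootstrap-osd.keyring'.format(cluster=cluster)
    try:
        with open(path, 'rb') as f:
            return f.read()
    except IOError:
        raise RuntimeError('bootstrap-osd keyring not found; run \'gatherkeys\'')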
The prepare_disk function:
Runs `ceph-disk -v prepare` to prepare the OSD
If activate_prepared_disk is True, enables the ceph service so it starts at boot (see the enable_service sketch after the function body below)
def prepare_disk(
        conn,
        cluster,
        disk,
        journal,
        activate_prepared_disk,
        init,
        zap,
        fs_type,
        dmcrypt,
        dmcrypt_dir,
        storetype):
    """
    Run on osd node, prepares a data disk for use.
    """
    ceph_disk_executable = system.executable_path(conn, 'ceph-disk')
    args = [
        ceph_disk_executable,
        '-v',
        'prepare',
        ]
    if zap:
        args.append('--zap-disk')
    if dmcrypt:
        args.append('--dmcrypt')
        if dmcrypt_dir is not None:
            args.append('--dmcrypt-key-dir')
            args.append(dmcrypt_dir)
    if storetype:
        args.append('--' + storetype)
    args.extend([
        '--cluster',
        cluster,
        '--fs-type',
        fs_type,
        '--',
        disk,
    ])

    if journal is not None:
        args.append(journal)
    # run the `ceph-disk -v prepare` command
    remoto.process.run(
        conn,
        args
    )

    # if activating, enable the ceph service so it starts at boot
    if activate_prepared_disk:
        # we don't simply run activate here because we don't know
        # which partition ceph-disk prepare created as the data
        # volume. instead, we rely on udev to do the activation and
        # just give it a kick to ensure it wakes up. we also enable
        # ceph.target, the other key piece of activate.
        if init == 'systemd':
            system.enable_service(conn, 'ceph.target')
        elif init == 'sysvinit':
            system.enable_service(conn, 'ceph')
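system.enable_service, called both here and in activate below, enables the given unit or service on the remote host so it comes up at boot. A minimal sketch, assuming it shells out through remoto and that the caller supplies the init type (the real helper detects the init system itself):

from ceph_deploy.lib import remoto


def enable_service(conn, service='ceph', init='systemd'):
    # enable the ceph unit/service on the remote node so it starts at boot
    if init == 'systemd':
        remoto.process.run(conn, ['systemctl', 'enable', service])
    else:
        # sysvinit-style hosts
        remoto.process.run(conn, ['chkconfig', service, 'on'])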
Activating an OSD
Command-line format: ceph-deploy osd activate [-h] HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
The activate function:
Runs `ceph-disk -v activate` to activate the OSD
Checks the OSD status via catch_osd_errors and logs a warning for any abnormal state (a sketch follows the function body below)
Enables the ceph service so it starts at boot
def activate(args, cfg):
    LOG.debug(
        'Activating cluster %s disks %s',
        args.cluster,
        # join elements of t with ':', t's with ' '
        # allow None in elements of t; print as empty
        ' '.join(':'.join((s or '') for s in t) for t in args.disk),
    )

    for hostname, disk, journal in args.disk:
        distro = hosts.get(
            hostname,
            username=args.username,
            callbacks=[packages.ceph_is_installed]
        )
        LOG.info(
            'Distro info: %s %s %s',
            distro.name,
            distro.release,
            distro.codename
        )

        LOG.debug('activating host %s disk %s', hostname, disk)
        LOG.debug('will use init type: %s', distro.init)

        ceph_disk_executable = system.executable_path(distro.conn, 'ceph-disk')
        # run `ceph-disk -v activate` to activate the OSD
        remoto.process.run(
            distro.conn,
            [
                ceph_disk_executable,
                '-v',
                'activate',
                '--mark-init',
                distro.init,
                '--mount',
                disk,
            ],
        )
        # give the OSD a few seconds to start
        time.sleep(5)
        # check the OSD status and log a warning for any abnormal state
        catch_osd_errors(distro.conn, distro.conn.logger, args)

        # enable the ceph service so it starts at boot
        if distro.init == 'systemd':
            system.enable_service(distro.conn, 'ceph.target')
        elif distro.init == 'sysvinit':
            system.enable_service(distro.conn, 'ceph')

        distro.conn.exit()
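catch_osd_errors is the post-start sanity check used by both prepare and activate. The idea is to query `ceph osd stat --format=json` on the node and warn (without failing) when OSDs are reported down or out, or when the cluster is full or nearfull. The sketch below is a reconstruction under those assumptions; the field names follow the JSON that `ceph osd stat` returns in the releases this tooling targets.

import json

from ceph_deploy.lib import remoto
from ceph_deploy.util import system


def osd_status_check(conn, cluster):
    # ask the cluster for its OSD summary, e.g.
    # {"num_osds": 1, "num_up_osds": 1, "num_in_osds": 1, "full": false, ...}
    ceph_executable = system.executable_path(conn, 'ceph')
    output, err, exit_code = remoto.process.check(
        conn,
        [
            ceph_executable,
            '--cluster={0}'.format(cluster),
            'osd',
            'stat',
            '--format=json',
        ]
    )
    try:
        return json.loads(''.join(output))
    except ValueError:
        return {}


def catch_osd_errors(conn, logger, args):
    # warn, but do not abort, when the freshly started OSD is not up/in
    logger.info('checking OSD status...')
    status = osd_status_check(conn, args.cluster)
    osds = int(status.get('num_osds', 0))
    up_osds = int(status.get('num_up_osds', 0))
    in_osds = int(status.get('num_in_osds', 0))

    if osds > up_osds:
        logger.warning('there are %d OSDs down' % (osds - up_osds))
    if osds > in_osds:
        logger.warning('there are %d OSDs out' % (osds - in_osds))
    if str(status.get('full', 'false')).lower() == 'true':
        logger.warning('OSDs are full!')
    if str(status.get('nearfull', 'false')).lower() == 'true':
        logger.warning('OSDs are near full!')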
Managing OSDs Manually
Using disk sdb on host ceph-231 as an example, create an OSD by hand.
Creating / preparing an OSD
Prepare the OSD:
[root@ceph-231 ~]# ceph-disk -v prepare --zap-disk --cluster ceph --fs-type xfs -- /dev/sdb
Creating an OSD involves one extra step: enable the ceph service so it starts at boot
[root@ceph-231 ~]# systemctl enable ceph.target
Activating an OSD
Check the init system:
[root@ceph-231 ~]# cat /proc/1/comm
systemd
Activate the OSD:
[root@ceph-231 ~]# ceph-disk -v activate --mark-init systemd --mount /dev/sdb1
Enable the ceph service so it starts at boot:
[root@ceph-231 ~]# systemctl enable ceph.target