
How to Use a Ceph Storage Backend with Kolla-Ansible


This article walks through how to use a Ceph storage backend with Kolla-Ansible. It is shared here as a practical, hands-on reference; follow along with the steps below.

Configuring Ceph

Log in as the osdev user:

$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/
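Before creating any pools, it is worth a quick sanity check that the cluster is healthy (the exact output will vary by cluster):

$ ceph -s
$ ceph health detail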

Creating Pools

Creating the Images Pool

Used to store Glance images:

$ ceph osd pool create images 32 32
pool 'images' created

Creating the Volume Pools

Used to store Cinder volumes:

$ ceph osd pool create volumes 32 32
pool 'volumes' created

Used to store Cinder volume backups:

$ ceph osd pool create backups 32 32
pool 'backups' created

Creating the VM Pool

Used to store virtual machine system disks:

$ ceph osd pool create vms 32 32
pool 'vms' created

Listing Pools

$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms
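On Ceph Luminous and later, pools used for RBD should also be tagged with the rbd application; a suggested follow-up (not required on older releases):

$ ceph osd pool application enable images rbd
$ ceph osd pool application enable volumes rbd
$ ceph osd pool application enable backups rbd
$ ceph osd pool application enable vms rbd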

Creating Users

Viewing Users

List all users:

$ ceph auth list
installed auth entries:
mds.osdev01
 key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
 caps: [mds] allow
 caps: [mon] allow profile mds
 caps: [osd] allow rwx
mds.osdev02
 key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
 caps: [mds] allow
 caps: [mon] allow profile mds
 caps: [osd] allow rwx
mds.osdev03
 key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
 caps: [mds] allow
 caps: [mon] allow profile mds
 caps: [osd] allow rwx
osd.0
 key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
 caps: [mgr] allow profile osd
 caps: [mon] allow profile osd
 caps: [osd] allow *
osd.1
 key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
 caps: [mgr] allow profile osd
 caps: [mon] allow profile osd
 caps: [osd] allow *
osd.2
 key: AQD9JH5bbPi6IRAA7DbwaCh6JBaa6RfWPoe9VQ==
 caps: [mgr] allow profile osd
 caps: [mon] allow profile osd
 caps: [osd] allow *
client.admin
 key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
 caps: [mds] allow *
 caps: [mgr] allow *
 caps: [mon] allow *
 caps: [osd] allow *
client.bootstrap-mds
 key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
 caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
 key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
 caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
 key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
 caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
 key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
 caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
 key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
 caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
 key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
 caps: [mon] allow rw
 caps: [osd] allow rwx
client.rgw.osdev02
 key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
 caps: [mon] allow rw
 caps: [osd] allow rwx
client.rgw.osdev03
 key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
 caps: [mon] allow rw
 caps: [osd] allow rwx
mgr.osdev01
 key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
 caps: [mds] allow *
 caps: [mon] allow profile mgr
 caps: [osd] allow *
mgr.osdev02
 key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
 caps: [mds] allow *
 caps: [mon] allow profile mgr
 caps: [osd] allow *
mgr.osdev03
 key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
 caps: [mds] allow *
 caps: [mon] allow profile mgr
 caps: [osd] allow *

View a specific user:

$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
 key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
 caps mds = "allow *"
 caps mgr = "allow *"
 caps mon = "allow *"
 caps osd = "allow *"

Creating the Glance User

Create the glance user and grant it access to the images pool:

$ ceph auth get-or-create client.glance
[client.glance]
 key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance

View the glance user's keyring and save it to a file:

$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
 key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
 caps mon = "allow r"
 caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
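As a side note, the create-then-set-caps steps above can be collapsed into a single call; a minimal equivalent for glance, assuming the same pool layout:

$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' -o /opt/ceph/deploy/ceph.client.glance.keyring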

Creating the Cinder Users

Create the cinder-volume user and grant it access to the volumes pool:

$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
 key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume

View the cinder-volume user's keyring and save it to a file:

$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
 key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
 caps mon = "allow r"
 caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume

Create the cinder-backup user and grant it access to the volumes and backups pools:

$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
 key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup

View the cinder-backup user's keyring and save it to a file:

$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
 key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
 caps mon = "allow r"
 caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup

Creating the Nova User

Create the nova user and grant it access to the vms pool:

$ ceph auth get-or-create client.nova
[client.nova]
 key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova

View the nova user's keyring and save it to a file:

$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
 key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
 caps mon = "allow r"
 caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
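Before handing these keyrings to Kolla-Ansible, it is worth verifying that each one actually authenticates against the cluster; a quick spot check (adjust the paths if yours differ):

$ ceph -n client.glance --keyring /opt/ceph/deploy/ceph.client.glance.keyring health
$ rbd --id nova --keyring /opt/ceph/deploy/ceph.client.nova.keyring -p vms ls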

Configuring Kolla-Ansible

Log in to the osdev01 deployment node as root and set the environment variables:

$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig

Global Configuration

Edit globals.yml to disable Kolla's own Ceph deployment (the external cluster configured above is used instead):

enable_ceph: "no"

Enable the Cinder service, and enable the Ceph backends for Glance, Cinder, and Nova:

enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
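A quick grep confirms the flags are set as intended (the pattern matches the variables used here):

$ grep -E 'enable_ceph|enable_cinder|_backend_ceph' globals.yml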

Configuring Glance

Configure Glance to use the Ceph images pool as the glance user:

$ mkdir -pv config/glance
mkdir: created directory 'config/glance'
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
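Optionally, if you want Cinder and Nova to make copy-on-write clones directly from rbd-backed images rather than downloading them, Glance must also expose image locations. A commonly used addition to the same glance-api.conf (note that exposing backend URLs has security implications):

[DEFAULT]
show_image_direct_url = True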

Add Glance's Ceph client configuration and the glance user's keyring file:

$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
'/opt/ceph/deploy/ceph.client.glance.keyring' -> 'config/glance/ceph.client.glance.keyring'

Configuring Cinder

Configure the Cinder volume service to use the volumes pool as the Ceph cinder-volume user, and the Cinder backup service to use the backups pool as the cinder-backup user:

$ mkdir -pv config/cinder/
mkdir: created directory 'config/cinder/'
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
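The {{ cinder_rbd_secret_uuid }} in cinder-volume.conf is a Jinja variable that Kolla-Ansible renders from passwords.yml at deploy time; you can confirm it is populated (variable name as used by Kolla-Ansible):

$ grep rbd_secret_uuid ${KOLLA_ROOT}/myconfig/passwords.yml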

Add the Ceph client configuration and keyring files for the Cinder volume and backup services:

$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory 'config/cinder/cinder-backup/'
mkdir: created directory 'config/cinder/cinder-volume/'
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
'/opt/ceph/deploy/ceph.client.cinder-volume.keyring' -> 'config/cinder/cinder-backup/ceph.client.cinder-volume.keyring'
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
'/opt/ceph/deploy/ceph.client.cinder-backup.keyring' -> 'config/cinder/cinder-backup/ceph.client.cinder-backup.keyring'
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
'/opt/ceph/deploy/ceph.client.cinder-volume.keyring' -> 'config/cinder/cinder-volume/ceph.client.cinder-volume.keyring'

Configuring Nova

Configure Nova to use the vms pool as the Ceph nova user:

$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
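Depending on the Kolla-Ansible release, the [libvirt] section may also need the UUID of the libvirt cephx secret, which Kolla templates from passwords.yml; a hedged sketch of what that line typically looks like (not necessarily required on every release):

[libvirt]
rbd_secret_uuid = {{ rbd_secret_uuid }}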

Add Nova's Ceph client configuration and the nova user's keyring file:

$ cp -v config/glance/ceph.conf config/nova/ceph.conf
'config/glance/ceph.conf' -> 'config/nova/ceph.conf'
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
'/opt/ceph/deploy/ceph.client.nova.keyring' -> 'config/nova/ceph.client.nova.keyring'
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
'/opt/ceph/deploy/ceph.client.cinder-volume.keyring' -> 'config/nova/ceph.client.cinder.keyring'

Deploying and Testing

Starting the Deployment

Edit the deployment script osdev.sh:

#!/bin/bash
set -uexv

usage() {
    echo -e "usage:\n$0 <action>"
    echo -e "\t\$1: action"
}

${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1

Make the script executable:

$ chmod a+x osdev.sh

Deploy the OpenStack cluster:

$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh  destroy --yes-i-really-really-mean-it

View an overview of the deployed services:

$ openstack service list
+----------------------------------+-------------+----------------+
| ID | Name | Type |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron | network |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat | orchestration |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2 | volumev2 |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi | metric |
| 7ae6f98018fb4d509e862e45ebf10145 | glance | image |
| a0ec333149284c09ac0e157753205fd6 | nova | compute |
| b15e90c382864723945b15c37d3317a6 | placement | placement |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3 | volumev3 |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone | identity |
| edf5c8b894a74a69b65bb49d8e014fff | cinder | volume |
+----------------------------------+-------------+----------------+
$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02 | nova | enabled | up | 2018-08-27T11:33:27.000000 |
| cinder-volume | rbd:volumes@rbd-1 | nova | enabled | up | 2018-08-27T11:33:18.000000 |
| cinder-backup | osdev02 | nova | enabled | up | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Initializing the Environment

View the initial state of the RBD pools; they are all empty:

$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls

Set the environment variables and initialize the OpenStack environment:

$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce

View the newly added image:

$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2018-08-27T11:25:29Z |
| disk_format | qcow2 |
| file | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file |
| id | 293b25bb-30be-4839-b4e2-1dba3c43a56a |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2018-08-27T11:25:30Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+

View the changes in the RBD pools: the image is stored in the images pool, and it has one snapshot:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
 size 12 MiB in 2 objects
 order 23 (8 MiB objects)
 id: 178f4008d95
 block_name_prefix: rbd_data.178f4008d95
 format: 2
 features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
 op_features: 
 flags: 
 create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME SIZE TIMESTAMP 
 6 snap 12 MiB Mon Aug 27 19:25:30 2018

Creating a Virtual Machine

Create a virtual machine:

$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field | Value |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 65cVBJ7S6yaD |
| config_drive | |
| created | 2018-08-27T11:29:03Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 309f1364-4d58-413d-a865-dfc37ff04308 |
| image | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name | mykey |
| name | demo1 |
| progress | 0 |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2018-08-27T11:29:03Z |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
| volumes_attached | |
+-------------------------------------+-----------------------------------------------+
$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | osdev03 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03 |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2018-08-27T11:29:16.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | demo-net=10.0.0.11 |
| config_drive | |
| created | 2018-08-27T11:29:03Z |
| flavor | m1.tiny (1) |
| hostId | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id | 309f1364-4d58-413d-a865-dfc37ff04308 |
| image | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name | mykey |
| name | demo1 |
| progress | 0 |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2018-08-27T11:29:16Z |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------+

As shown, the virtual machine created a disk in the vms pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
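Whether this disk is a full copy of the image or a copy-on-write clone depends on the Glance/Nova settings discussed earlier; one way to check is to look for a parent on the disk and for children of the image snapshot:

$ rbd -p vms info 309f1364-4d58-413d-a865-dfc37ff04308_disk
$ rbd children images/293b25bb-30be-4839-b4e2-1dba3c43a56a@snap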

Log in to the node hosting the VM. The VM's system disk is the volume created in vms, and the process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:

$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
 Id Name State
----------------------------------------------------
 1 instance-00000001 running
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <auth username='nova'>
      <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
    </auth>
    <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
      <host name='172.29.101.166' port='6789'/>
      <host name='172.29.101.167' port='6789'/>
      <host name='172.29.101.168' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </disk>
$ ps -aux | grep qemu
42436 2678909 4.6 0.0 1341144 171404 ? Sl 19:29 0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
 librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
 libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)

Creating a Volume

Create a volume:

$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-08-27T11:33:52.000000 |
| description | None |
| encrypted | False |
| id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
+---------------------+--------------------------------------+

Check the pool state; the new volume is placed in the volumes pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk

Creating a Backup

Create a volume backup; it is created in the backups pool:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | f2321578-88d5-4337-b93c-798855b817ce |
| name | None |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None | available | 1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| availability_zone | nova |
| container | backups |
| created_at | 2018-08-27T11:39:40.000000 |
| data_timestamp | 2018-08-27T11:39:40.000000 |
| description | None |
| fail_reason | None |
| has_dependent_backups | False |
| id | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental | False |
| name | None |
| object_count | 0 |
| size | 1 |
| snapshot_id | None |
| status | available |
| updated_at | 2018-08-27T11:39:46.000000 |
| volume_id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base

Create a second backup: the backups pool shows no new image; a snapshot is simply added to the existing backup base volume:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 07132063-9bdb-4391-addd-a791dae2cfea |
| name | None |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME SIZE TIMESTAMP 
 4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018 
 5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
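The Ceph backup driver layers these snapshots automatically; with recent python-openstackclient releases you can also request an incremental backup explicitly:

$ openstack volume backup create --incremental 3ccca300-bee3-4b5a-b89b-32e6b8b806d9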

Attaching the Volume

Attach the new volume to the VM created earlier:

$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-08-27T11:33:52.000000 |
| description | None |
| encrypted | False |
| id | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status | None |
| multiattach | False |
| name | volume1 |
| os-vol-host-attr:host | rbd:volumes@rbd-1#rbd-1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 68ada1726a864e2081a56be0a2dca3a0 |
| properties | attached_mode='rw' |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| type | None |
| updated_at | 2018-08-27T11:44:52.000000 |
| user_id | c7111728fbbd4fd79bdd2b60e7d7cb42 |
+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

On the VM's host node, inspect the libvirt domain XML again; a new RBD disk has been added:

$ sudo docker exec -it nova_libvirt virsh dumpxml 1
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <auth username='nova'>
      <secret type='ceph' uuid='2ea5db42-c8f1-4601-927c-3c64426907aa'/>
    </auth>
    <source protocol='rbd' name='vms/309f1364-4d58-413d-a865-dfc37ff04308_disk'>
      <host name='172.29.101.166' port='6789'/>
      <host name='172.29.101.167' port='6789'/>
      <host name='172.29.101.168' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
    <alias name='virtio-disk0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
  </disk>
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none' discard='unmap'/>
    <auth username='cinder-volume'>
      <secret type='ceph' uuid='3fa55f7c-b556-4095-9253-b908d5408ec8'/>
    </auth>
    <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'>
      <host name='172.29.101.166' port='6789'/>
      <host name='172.29.101.167' port='6789'/>
      <host name='172.29.101.168' port='6789'/>
    </source>
    <target dev='vdb' bus='virtio'/>
    <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
    <alias name='virtio-disk1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </disk>
...

Create a floating IP for the VM and log in over SSH:

$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value |
+-------+-------------------------------------------------------------------------------------+
| type | novnc |
| url | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2018-08-27T11:49:02Z |
| description | |
| fixed_ip_address | None |
| floating_ip_address | 192.168.162.52 |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name | 192.168.162.52 |
| port_id | None |
| project_id | 68ada1726a864e2081a56be0a2dca3a0 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2018-08-27T11:49:02Z |
+---------------------+--------------------------------------+
$ openstack server add floating ip demo1 192.168.162.52
$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9
(username: cirros, password: gocubsgo)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11
$ sudo passwd root
Changing password for root
New password: 
Bad password: too weak
Retype password: 
Password for root changed by root
$ su -
Password:

Create a filesystem on the volume, write a test file, and finally unmount it:

# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk 
|-vda1 253:1 0 1015M 0 part /
`-vda15 253:15 0 8M 0 part 
vdb 253:16 0 1G 0 disk
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks: 
 32768, 98304, 163840, 229376
Allocating group tables: done 
Writing inode tables: done 
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem Size Used Available Use% Mounted on
/dev 240.1M 0 240.1M 0% /dev
/dev/vda1 978.9M 23.9M 914.1M 3% /
tmpfs 244.2M 0 244.2M 0% /dev/shm
tmpfs 244.2M 92.0K 244.1M 0% /run
/dev/vdb 975.9M 1.3M 907.4M 0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem Size Used Available Use% Mounted on
/dev 240.1M 0 240.1M 0% /dev
/dev/vda1 978.9M 23.9M 914.1M 3% /
tmpfs 244.2M 0 244.2M 0% /dev/shm
tmpfs 244.2M 92.0K 244.1M 0% /run

Detaching the Volume

Detach the volume and watch the change inside the VM:

$ openstack server remove volume demo1 volume1
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 1G 0 disk 
|-vda1 253:1 0 1015M 0 part /
`-vda15 253:15 0 8M 0 part

Map and mount the RBD volume on the host, then inspect the file created inside the VM earlier; the contents are identical:

$ rbd showmapped
id pool image snap device 
0 rbd rbd_test - /dev/rbd0
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test lost+found/ 
$ cat /mnt/volume1/ceph_rbd_test 
hello openstack, volume test.
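After verifying, unmount and unmap the device so the volume can be cleanly reattached later:

$ umount /mnt/volume1
$ rbd unmap /dev/rbd1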

That concludes this walkthrough of using a Ceph storage backend with Kolla-Ansible. Hopefully the steps above serve as a useful reference.
