[ironman] Notes on Deploying an OpenStack Platform
1.1 Basic Environment Configuration
Use the two cloud hosts provided, with the 8vCPU/16G/100G instance type. Check the security group policy yourself to make sure network communication and SSH connections work, then configure the servers as follows:
(1) Set the hostname of the master node to master and the hostname of the worker node to node;
(2) Edit the hosts file to map the IP addresses to the hostnames.
On the first VM (master), run: [root@xxx ~]# hostnamectl set-hostname master
On the second VM (node), run: [root@xxx ~]# hostnamectl set-hostname node
On master, run: [root@master ~]# vi /etc/hosts
Insert the following, substituting your own master and node IP addresses:
10.26.7.30 master
10.26.3.57 node
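The mapping step can be scripted and checked in one shot; a minimal sketch using a scratch copy of the hosts file (the IPs are the example values from above and must be replaced with your own):

```shell
# Append the name mappings to a hosts file. A scratch copy is used here
# so the sketch is safe to run anywhere; on a real node target /etc/hosts.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
10.26.7.30 master
10.26.3.57 node
EOF
# Resolve a name from the file the same way the resolver would.
lookup() { awk -v h="$1" '$2 == h {print $1}' "$HOSTS_FILE"; }
lookup master   # prints 10.26.7.30
```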
1.2 Yum Repository Configuration
Use the provided HTTP service address http://10.24.1.46/centos/, which serves a CentOS 7.9 network yum repository, as the installation source. Create the yum repository file http.repo on both the master node and the node node.
[root@master ~]# mv /etc/yum.repos.d/* /media/
[root@master ~]# vi /etc/yum.repos.d/http.repo
Write the following content; replace http://10.24.1.46/centos/ with the HTTP service address you were given:
[httpiso]
name=httpiso
baseurl=http://10.24.1.46/centos/
gpgcheck=0
enabled=1
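The repo file can also be generated from a single variable so the mirror address is changed in one place; a minimal sketch writing to a scratch directory (on a real node the target is /etc/yum.repos.d):

```shell
# Generate http.repo from a variable. The URL below is the example
# address and must be replaced with the one you were given.
BASEURL="http://10.24.1.46/centos/"
REPO_DIR=$(mktemp -d)          # on a real node: /etc/yum.repos.d
cat > "$REPO_DIR/http.repo" <<EOF
[httpiso]
name=httpiso
baseurl=${BASEURL}
gpgcheck=0
enabled=1
EOF
cat "$REPO_DIR/http.repo"
```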
1.3 Configuring Passwordless SSH
Configure the master node for passwordless SSH access to the node node; when done, test by connecting to node by hostname over SSH.
On master, run: [root@master ~]# ssh-keygen -t rsa
Press Enter three times to accept the defaults.
On node, run the same: [root@node ~]# ssh-keygen -t rsa
Press Enter three times to accept the defaults.
On master, run:
[root@master ~]# cd ~/.ssh
[root@master ~]# ssh-copy-id master
Answer yes at the (yes/no)? prompt.
Enter the host password Abc@1234 at the password prompt.
[root@master ~]# ssh-copy-id node
Answer the same prompts as above.
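The interactive exchange above can be scripted; a non-authoritative sketch that assumes sshpass is available (it is not part of the original steps) and, with DRY_RUN=1, only prints the commands it would run:

```shell
# Scripted version of the key exchange. sshpass is an assumption, not
# part of the original steps; DRY_RUN=1 echoes commands instead of
# executing them, so the sketch can be reviewed safely.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }
PASS='Abc@1234'
run ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
for host in master node; do
    run sshpass -p "$PASS" ssh-copy-id -o StrictHostKeyChecking=no "root@$host"
done
```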
Platform Deployment -- Deploying the Container Cloud Platform
Using the master and node cloud hosts, build a Kubernetes 1.22.1 cluster with the kubeeasy tool.
Package address: http://xxxxx.iso
Download the provided installation package xxx.iso to /root on the master node and extract it to /opt:
Install the kubeeasy tool on the master node:
[root@master ~]# curl -O http://10.24.1.46/competition/chinaskills_cloud_paas_v2.1.iso
[root@master ~]# mount -o loop chinaskills_cloud_paas_v2.1.iso /mnt/
[root@master ~]# cp -rfv /mnt/* /opt/
[root@master ~]# umount /mnt/
[root@master ~]# mv /opt/kubeeasy /usr/bin/kubeeasy
[root@master ~]# kubeeasy install depend --host 10.1.9.239,10.1.15.164 --user root --password Pio2I8nL --offline-file /opt/dependencies/packages.tar.gz
[root@master ~]# kubeeasy install kubernetes --master 10.1.9.239 --worker 10.1.15.164 --user root --password Pio2I8nL --version 1.22.1 --offline-file /opt/kubernetes.tar.gz
(Note: use the package address you were given, and replace the IP addresses 10.1.9.239 and 10.1.15.164 in the commands above with your own master and node IP addresses.)
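Once kubeeasy finishes, the cluster is usually verified with kubectl get nodes. The sketch below only demonstrates the parsing step, feeding it a sample of that output so it runs without a cluster; on the real master, replace SAMPLE with the live command output:

```shell
# Count Ready nodes from `kubectl get nodes --no-headers` style output.
# SAMPLE stands in for live output so the sketch runs offline; on the
# master it would be: SAMPLE=$(kubectl get nodes --no-headers)
SAMPLE='master   Ready    control-plane,master   5m    v1.22.1
node     Ready    <none>                 4m    v1.22.1'
ready=$(printf '%s\n' "$SAMPLE" | awk '$2 == "Ready" {c++} END {print c+0}')
echo "Ready nodes: $ready"
```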
1.4 Deploying the Istio Service Mesh
Install the Istio service mesh on the Kubernetes cluster.
[root@master ~]# kubeeasy add --istio istio
1.5 Deploying KubeVirt Virtualization
Install the KubeVirt virtualization environment on the Kubernetes cluster.
[root@master ~]# kubeeasy add --virt kubevirt
1.6 Deploying the Harbor Registry and Helm Package Manager
Deploy the Harbor image registry and the Helm package manager on the master node. Then define a custom Chart based on the nginx image, with a Deployment named nginx and 1 replica, and deploy that Chart to the default namespace with the release name web.
[root@master ~]# kubeeasy add --registry harbor
[root@master ~]# helm create mychart
Expected output:
Creating mychart
[root@master ~]# rm -rf mychart/templates/*
[root@master ~]# vi mychart/templates/deployment.yaml
Insert the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Install the chart with Helm:
[root@master ~]# helm install web mychart
Expected output:
NAME: web
LAST DEPLOYED: Tue Sep 13 16:23:12 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify on the master node with the helm status web command:
[root@master ~]# helm status web
Expected output:
NAME: web
LAST DEPLOYED: Tue Sep 13 16:23:12 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
(2) Deploying the OpenStack Base Environment
Run the iaas-pre-host.sh script on both the controller node and the compute node to deploy the OpenStack base environment. When it finishes, reboot the VMs with reboot so the configuration takes effect.
[root@controller ~]# iaas-pre-host.sh
[root@compute ~]# iaas-pre-host.sh
Deploying the MariaDB Database and RabbitMQ Message Queue
Run the script on the controller node to deploy the MariaDB database and the RabbitMQ message queue service.
[root@controller ~]# iaas-install-mysql.sh
Deploying the Keystone Service
Run the script on the controller node to deploy the Keystone service.
[root@controller ~]# iaas-install-keystone.sh
Deploying the Glance Service
Run the script on the controller node to deploy the Glance service.
[root@controller ~]# iaas-install-glance.sh
Deploying the Nova Service
Run the scripts on the controller node to deploy the Nova controller services.
[root@controller ~]# iaas-install-placement.sh
[root@controller ~]# iaas-install-nova-controller.sh
After the scripts above finish, run the script on the compute node to deploy the Nova compute service; this adds the compute node's CPU, memory, and disk resources to the OpenStack resource pool.
[root@compute ~]# iaas-install-nova-compute.sh
Deploying the Neutron Service
Run the script on the controller node to deploy the Neutron controller service.
[root@controller ~]# iaas-install-neutron-controller.sh
Run the script on the compute node to deploy the Neutron compute service.
[root@compute ~]# iaas-install-neutron-compute.sh
Deploying the Dashboard Service
Run the script on the controller node to deploy the Dashboard service.
[root@controller ~]# iaas-install-dashboard.sh
After installation, open the OpenStack dashboard in a Chrome browser at http://192.168.100.10/dashboard. The Domain is demo, the User Name is admin, and the Password is 000000.
Deploying the Cinder Service
Run the script on the controller node to deploy the Cinder controller service.
[root@controller ~]# iaas-install-cinder-controller.sh
When the controller-side script completes, run the script on the compute node to deploy Cinder's compute-node service.
[root@compute ~]# iaas-install-cinder-compute.sh
Deploying the Swift Service
Run the script on the controller node to deploy the Swift controller service.
[root@controller ~]# iaas-install-swift-controller.sh
When the controller-side script completes, run the script on the compute node to deploy Swift's compute-node service.
[root@compute ~]# iaas-install-swift-compute.sh
Glance Image Service
Case Implementation
1. Creating an Image
(1) Download the CirrOS image file
CirrOS is a minimal cloud operating system, well suited for practicing Glance operations. Upload the provided cirros-0.3.4-x86_64-disk.img image to the root directory on the controller node.
[root@controller ~]# curl -O http://mirrors.douxuedu.com/newcloud/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# ls
…
cirros-0.3.4-x86_64-disk.img
After uploading the image to the controller node, inspect the image file with the file command.
[root@controller ~]# file cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img: QEMU QCOW Image (v2), 41126400 bytes
(2) Create the image
Create the image with the glance CLI; the command format is as follows:
[root@controller ~]# glance help image-create
Parameter description:
- --disk-format: disk format of the image (e.g. qcow2).
- --container-format: container format of the image (e.g. bare).
- --progress: show upload progress.
- --file: local image file to upload.
- --name: name of the image after upload.
Upload the cirros-0.3.4-x86_64-disk.img image to OpenStack:
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# glance image-create --name cirros-0.3.4 --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
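After the upload, the image's status field should read active. A small sketch of extracting that field from the table-style output, fed with a sample so it runs offline; on the controller, pipe the real glance image-show output in instead:

```shell
# Extract the "status" field from a `glance image-show` style table.
# SAMPLE mimics two rows of the table so the sketch is self-contained.
SAMPLE='| size   | 13267968 |
| status | active   |'
status=$(printf '%s\n' "$SAMPLE" | awk -F'|' '$2 ~ /status/ {gsub(/ /,"",$3); print $3}')
echo "$status"   # prints: active
```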
2. Managing Images
(1) View images
List the images currently uploaded to Glance with the following command:
[root@controller ~]# glance image-list
+--------------------------------------+--------------+
| ID | Name |
+--------------------------------------+--------------+
| 32a2513c-e5ba-438b-a5ee-63c35c03b284 | cirros-0.3.4 |
+--------------------------------------+--------------+
You can also view an image's detailed information with the following command:
[root@controller ~]# glance image-show 32a2513c-e5ba-438b-a5ee-63c35c03b284
+------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2022-02-10T03:15:29Z |
| disk_format | qcow2 |
| id | 32a2513c-e5ba-438b-a5ee-63c35c03b284 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.4 |
| os_hash_algo | sha512 |
| os_hash_value | f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2d
6f919ca35d9046d4 |
| | 37c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232 |
| os_hidden | False |
| owner | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protected | False |
| size | 13267968 |
| status | active |
| tags | [] |
| updated_at | 2022-02-10T03:15:29Z |
| virtual_size | Not available |
| visibility | shared |
+------------------+--------------------------------------------------------------------+
(2) Update an image
Image metadata can be updated with glance image-update; the command format is as follows:
[root@controller ~]# glance help image-update
usage: glance image-update [--architecture <ARCHITECTURE>]
[--protected [True|False]] [--name <NAME>]
[--instance-uuid <INSTANCE_UUID>]
[--min-disk <MIN_DISK>] [--visibility <VISIBILITY>]
[--kernel-id <KERNEL_ID>]
[--os-version <OS_VERSION>]
[--disk-format <DISK_FORMAT>]
[--os-distro <OS_DISTRO>] [--owner <OWNER>]
[--ramdisk-id <RAMDISK_ID>] [--min-ram <MIN_RAM>]
[--container-format <CONTAINER_FORMAT>]
[--property <key=value>] [--remove-property key]
<IMAGE_ID>
Parameter description:
- --min-disk: minimum boot disk size for the image.
- --name: image name.
- --disk-format: disk format of the image.
- --min-ram: minimum boot RAM for the image.
- --container-format: container format of the image.
To raise the image's minimum boot disk requirement (min-disk) to 1 GB (min-disk is in GB by default), update the image with glance image-update as follows:
[root@controller ~]# glance image-update --min-disk=1 32a2513c-e5ba-438b-a5ee-63c35c03b284
+------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2022-02-10T03:15:29Z |
| disk_format | qcow2 |
| id | 32a2513c-e5ba-438b-a5ee-63c35c03b284 |
| min_disk | 1 |
| min_ram | 0 |
| name | cirros-0.3.4 |
| os_hash_algo | sha512 |
| os_hash_value | f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed
2d6f919ca35d9046d4 |
| | 37c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232 |
| os_hidden | False |
| owner | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protected | False |
| size | 13267968 |
| status | active |
| tags | [] |
| updated_at | 2022-02-10T03:16:59Z |
| virtual_size | Not available |
| visibility | shared |
+------------------+--------------------------------------------------------------------+
You can likewise raise the minimum boot RAM requirement (min-ram) to 1 GB; min-ram is in MB by default, so set it to 1024:
[root@controller ~]# glance image-update --min-ram=1024 32a2513c-e5ba-438b-a5ee-63c35c03b284
+------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2022-02-10T03:15:29Z |
| disk_format | qcow2 |
| id | 32a2513c-e5ba-438b-a5ee-63c35c03b284 |
| min_disk | 1 |
| min_ram | 1024 |
| name | cirros-0.3.4 |
| os_hash_algo | sha512 |
| os_hash_value | f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2
d6f919ca35d9046d4 |
| | 37c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232 |
| os_hidden | False |
| owner | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protected | False |
| size | 13267968 |
| status | active |
| tags | [] |
| updated_at | 2022-02-10T03:17:21Z |
| virtual_size | Not available |
| visibility | shared |
+------------------+--------------------------------------------------------------------+
(3) Delete an image
Images uploaded to OpenStack can be deleted with glance image-delete; the command format is as follows:
[root@controller ~]# glance help image-delete
usage: glance image-delete <IMAGE_ID> [<IMAGE_ID> ...]
Delete specified image.
Positional arguments:
<IMAGE_ID> ID of image(s) to delete.
Run `glance --os-image-api-version 1 help image-delete` for v1 help
Simply pass the image ID to the command:
[root@controller ~]# glance image-delete 32a2513c-e5ba-438b-a5ee-63c35c03b284
[root@controller ~]# glance image-list
+--------------------------------------+-------------------------------+
| ID | Name |
+--------------------------------------+-------------------------------+
+--------------------------------------+-------------------------------+
Nova Service
1. Node Planning
The node plan is shown in Table 1.
Table 1 Node plan
+---------------+------------+---------------+
| IP            | Hostname   | Node          |
+---------------+------------+---------------+
| 10.24.195.113 | controller | IaaS-allinone |
| 10.24.195.114 | -          | VNC test node |
+---------------+------------+---------------+
2. Basic Preparation
Use the OpenStack platform provided as the experiment environment.
Case Implementation
1. Creating a Flavor
A flavor is the instance-size type that OpenStack requires when creating a cloud host; different resource sizes are defined as different flavors.
(1) Create a flavor
[root@controller ~]# openstack help flavor create
usage: openstack flavor create [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--fit-width] [--print-empty] [--noindent]
[--prefix PREFIX] [--id <id>] [--ram <size-mb>]
[--disk <size-gb>] [--ephemeral <size-gb>]
[--swap <size-mb>] [--vcpus <vcpus>]
[--rxtx-factor <factor>] [--public | --private]
[--property <key=value>] [--project <project>]
[--project-domain <project-domain>]
<flavor-name>
Create new flavor
Create a flavor with a 10 GB disk, 512 MB of RAM, and 1 vCPU, with ID 10 and name centos:
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack flavor create --disk 10 --ram 512 --vcpus 1 --id 10 centos
+------------------------------+--------+
| Field | Value |
+------------------------------+--------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 10 |
| id | 10 |
| name | centos |
| os-flavor-access:is_public | True |
| properties | |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+------------------------------+--------+
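When several flavors are needed, the creation commands can be generated from a compact spec list. A sketch (the m2.large entry is purely illustrative) that prints the commands for review rather than calling the API:

```shell
# Expand "name:vcpus:ram:disk" specs into openstack flavor create
# commands. The output can be inspected, or piped to sh on a node
# where the admin credentials are sourced.
gen_flavor_cmds() {
    for spec in "$@"; do
        IFS=: read -r name vcpus ram disk <<EOF
$spec
EOF
        echo "openstack flavor create --vcpus $vcpus --ram $ram --disk $disk $name"
    done
}
gen_flavor_cmds centos:1:512:10 m2.large:4:4096:40
```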
(2) View flavors
List the flavors with the "openstack flavor list" command:
[root@controller ~]# openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 10 | 0 | 1 | True |
| 10 | centos | 512 | 10 | 0 | 1 | True |
| 2 | m1.small | 1024 | 20 | 0 | 1 | True |
| 3 | m1.medium | 2048 | 40 | 0 | 2 | True |
+----+-----------+------+------+-----------+-------+-----------+
You can also view the details of a specific flavor with "openstack flavor show". The command format is as follows:
[root@controller ~]# openstack help flavor show
usage: openstack flavor show [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>] [--fit-width]
[--print-empty] [--noindent] [--prefix PREFIX]
<flavor>
View the details of the newly created "centos" flavor:
[root@controller ~]# openstack flavor show centos
+-----------------------------+--------+
| Field | Value |
+-----------------------------+--------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| disk | 10 |
| id | 10 |
| name | centos |
| os-flavor-access:is_public | True |
| properties | |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+-----------------------------+--------+
2. Security Groups
A security group is an access-policy group that OpenStack attaches to cloud hosts; its rules control inbound and outbound traffic to the hosts.
(1) View security groups
List the currently defined security groups with "openstack security group list":
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 896ce430-21f8-4673-8110-afce97e43715 | default | Default security group | 1776912d52a7444d8b2d09eb86e8d1d9 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
"default" is the security group that ships with OpenStack. Its rules can be listed with the following command:
[root@controller ~]# openstack security group rule list default
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
| 1e6c27ff-b456-4d2a-a64d-51197fea048e | None | IPv4 | 0.0.0.0/0 | | 896ce430-21f8-4673-8110-afce97e43715 |
| 699e2744-e926-4bb4-9e4f-54885f669bc5 | None | IPv6 | ::/0 | | None |
| 7aa363c8-5df3-4ce3-a775-9e453f086c87 | None | IPv6 | ::/0 | | 896ce430-21f8-4673-8110-afce97e43715 |
| bb08b786-09f4-44f3-a030-71b189a0f84f | None | IPv4 | 0.0.0.0/0 | | None |
+--------------------------------------+-------------+-----------+-----------+------------+--------------------------------------+
The rule list does not show each rule's concrete policy; use "openstack security group rule show" to view the details of any rule:
[root@controller ~]# openstack security group rule show 7aa363c8-5df3-4ce3-a775-9e453f086c87
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| created_at | 2022-02-10T03:21:40Z |
| description | None |
| direction | ingress |
| ether_type | IPv6 |
| id | 7aa363c8-5df3-4ce3-a775-9e453f086c87 |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protocol | None |
| remote_group_id | 896ce430-21f8-4673-8110-afce97e43715 |
| remote_ip_prefix | ::/0 |
| revision_number | 0 |
| security_group_id | 896ce430-21f8-4673-8110-afce97e43715 |
| tags | [] |
| updated_at | 2022-02-10T03:21:40Z |
+-------------------+-------------------------------------------------------------------+
(2) Create a security group
Create a new security group; the command format is as follows:
[root@controller ~]# openstack help security group create
usage: openstack security group create [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--fit-width] [--print-empty]
[--noindent] [--prefix PREFIX]
[--description <description>]
[--project <project>]
[--project-domain <project-domain>]
<name>
Create a new security group named test:
[root@controller ~]# openstack security group create test
+-----------------+---------------------------------------------------------------------+
| Field | Value |
+-----------------+---------------------------------------------------------------------+
| created_at | 2022-02-10T03:25:18Z |
| description | test |
| id | 96373f68-be50-4819-b9a6-8fc8d3e9dc0a |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | test |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| revision_number | 1 |
| rules | created_at='2022-02-10T03:25:18Z', direction='egress', ethertype='IPv4', id='2bbc98ad-4784-419d-b815-4ee2c6c75b54', updated_at='2022-02-10T03:25:18Z' |
| | created_at='2022-02-10T03:25:19Z', direction='egress', ethertype='IPv6', id='70fcb5e0-fd86-461e-84a4-2a83b4b90730', updated_at='2022-02-10T03:25:19Z' |
| tags | [] |
| updated_at | 2022-02-10T03:25:18Z |
+-----------------+---------------------------------------------------------------------+
(3) Delete a security group
Security groups that are no longer needed can be deleted:
[root@controller ~]# openstack security group delete test
[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+---------+------------------------+----------------------------------+------+
| 896ce430-21f8-4673-8110-afce97e43715 | default | Default security group | 1776912d52a7444d8b2d09eb86e8d1d9 | [] |
+--------------------------------------+---------+------------------------+----------------------------------+------+
(4) Add security rules
Add the three needed rules to the default security group with "openstack security group rule create"; the command format is as follows:
[root@controller ~]# openstack help security group rule create
usage: openstack security group rule create [-h]
[-f {json,shell,table,value,yaml}]
[-c COLUMN]
[--max-width <integer>]
[--fit-width] [--print-empty]
[--noindent] [--prefix PREFIX]
[--remote-ip <ip-address> | --remote-group <group>]
[--description <description>]
[--dst-port <port-range>]
[--icmp-type <icmp-type>]
[--icmp-code <icmp-code>]
[--protocol <protocol>]
[--ingress | --egress]
[--ethertype <ethertype>]
[--project <project>]
[--project-domain <project-domain>]
<group>
Add a rule to the "default" security group that allows all inbound ICMP traffic:
[root@controller ~]# openstack security group rule create --protocol icmp --ingress default
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| created_at | 2022-02-10T04:47:42Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 61014f36-5c20-46ce-b779-7d0c7458e691 |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 896ce430-21f8-4673-8110-afce97e43715 |
| tags | [] |
| updated_at | 2022-02-10T04:47:42Z |
+-------------------+-------------------------------------------------------------------+
Add a rule to the "default" security group that allows all inbound TCP traffic:
[root@controller ~]# openstack security group rule create --protocol tcp --ingress default
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| created_at | 2022-02-10T04:47:59Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 03ace6cf-ec1a-42a9-a754-c21fe887d1c0 |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 896ce430-21f8-4673-8110-afce97e43715 |
| tags | [] |
| updated_at | 2022-02-10T04:47:59Z |
+-------------------+-------------------------------------------------------------------+
Add a rule to the "default" security group that allows all inbound UDP traffic:
[root@controller ~]# openstack security group rule create --protocol udp --ingress default
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| created_at | 2022-02-10T04:48:22Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 9ec501e5-2c16-4d89-8a15-57a16a8fe3cd |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| protocol | udp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 896ce430-21f8-4673-8110-afce97e43715 |
| tags | [] |
| updated_at | 2022-02-10T04:48:22Z |
+-------------------+-------------------------------------------------------------------+
List all rules in the "default" security group:
[root@controller ~]# openstack security group rule list default
+--------------------------------------+--------------+-----------+-----------+------------+--------------------------------------+
| ID | IP Protocol | Ethertype | IP Range | Port Range | Remote Security Group |
+--------------------------------------+--------------+-----------+-----------+------------+--------------------------------------+
| 03ace6cf-ec1a-42a9-a754-c21fe887d1c0 | tcp | IPv4 | 0.0.0.0/0 | | None |
| 1e6c27ff-b456-4d2a-a64d-51197fea048e | None | IPv4 | 0.0.0.0/0 | | 896ce430-21f8-4673-8110-afce97e43715 |
| 61014f36-5c20-46ce-b779-7d0c7458e691 | icmp | IPv4 | 0.0.0.0/0 | | None |
| 699e2744-e926-4bb4-9e4f-54885f669bc5 | None | IPv6 | ::/0 | | None |
| 7aa363c8-5df3-4ce3-a775-9e453f086c87 | None | IPv6 | ::/0 | | 896ce430-21f8-4673-8110-afce97e43715 |
| 9ec501e5-2c16-4d89-8a15-57a16a8fe3cd | udp | IPv4 | 0.0.0.0/0 | | None |
| bb08b786-09f4-44f3-a030-71b189a0f84f | None | IPv4 | 0.0.0.0/0 | | None |
+--------------------------------------+--------------+-----------+-----------+------------+--------------------------------------+
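The three ingress rules differ only in protocol, so the commands can be generated in one loop; the sketch below prints them rather than executing them, so it can be reviewed (or piped to sh) on the controller:

```shell
# Generate the three ingress-rule commands used above. Printing instead
# of executing keeps the sketch runnable without an OpenStack endpoint.
cmds=$(for proto in icmp tcp udp; do
    echo "openstack security group rule create --protocol $proto --ingress default"
done)
printf '%s\n' "$cmds"
```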
3. Launching a Virtual Machine
(1) Check available images
Upload an image, then list the currently available images with "openstack image list":
[root@controller ~]# curl -O http://mirrors.douxuedu.com/newcloud/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# glance image-create --name cirros-0.3.4 --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
…
[root@controller ~]# openstack image list
+--------------------------------------+--------------+---------+
| ID | Name | Status |
+--------------------------------------+--------------+---------+
| 32a2513c-e5ba-438b-a5ee-63c35c03b284 | cirros-0.3.4 | active |
+--------------------------------------+--------------+---------+
List the available flavors with "openstack flavor list":
[root@controller ~]# openstack flavor list
+----+-----------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 10 | 0 | 1 | True |
| 10 | centos | 1024 | 10 | 0 | 2 | True |
| 2 | m1.small | 1024 | 20 | 0 | 1 | True |
| 3 | m1.medium | 2048 | 40 | 0 | 2 | True |
+----+-----------+------+------+-----------+-------+-----------+
(2) Create a network and subnet
Create a network with "openstack network create":
[root@controller ~]# openstack network create --provider-network-type vlan --provider-physical-network provider --provider-segment 200 network-vlan
+---------------------------+-----------------------------------------------------------+
| Field | Value |
+---------------------------+-----------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-02-10T05:02:18Z |
| description | |
| dns_domain | None |
| id | cccedc78-027d-40e9-afbd-708154923ca6 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| mtu | 1500 |
| name | network-vlan |
| port_security_enabled | True |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| provider:network_type | vlan |
| provider:physical_network | provider |
| provider:segmentation_id | 200 |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2022-02-10T05:02:18Z |
+---------------------------+-----------------------------------------------------------+
Create a subnet with "openstack subnet create":
[root@controller ~]# openstack subnet list
[root@controller ~]# openstack subnet create --network network-vlan --allocation-pool start=192.168.200.100,end=192.168.200.200 --gateway 192.168.200.1 --subnet-range 192.168.200.0/24 subnet-vlan
+-------------------+-------------------------------------------------------------------+
| Field | Value |
+-------------------+-------------------------------------------------------------------+
| allocation_pools | 192.168.200.100-192.168.200.200 |
| cidr | 192.168.200.0/24 |
| created_at | 2022-02-10T05:03:52Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.200.1 |
| host_routes | |
| id | 69c14fff-de95-440a-bc8e-fe9f43e4b424 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='000000', project.id='1776912d52a7444d8b2d09eb86e8d1d9', project.name='admin', region_name='', zone= |
| name | subnet-vlan |
| network_id | cccedc78-027d-40e9-afbd-708154923ca6 |
| prefix_length | None |
| project_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2022-02-10T05:03:52Z |
+-------------------+-------------------------------------------------------------------+
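Before creating the subnet it is worth checking that the allocation pool actually falls inside the CIDR. A self-contained sketch of that check in plain shell arithmetic, using the values from above:

```shell
# Verify that an IPv4 allocation pool sits inside the subnet CIDR.
ip2int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
CIDR_BASE=$(ip2int 192.168.200.0); PREFIX=24
POOL_START=$(ip2int 192.168.200.100); POOL_END=$(ip2int 192.168.200.200)
MASK=$(( 0xFFFFFFFF << (32 - PREFIX) & 0xFFFFFFFF ))
in_cidr() { [ $(( $1 & MASK )) -eq $(( CIDR_BASE & MASK )) ]; }
if in_cidr "$POOL_START" && in_cidr "$POOL_END" && [ "$POOL_START" -le "$POOL_END" ]; then
    echo "pool OK"
fi
```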
(3) Adjust the OpenStack platform
Edit the Nova service configuration file and set the parameter "virt_type=qemu":
[root@controller ~]# crudini --set /etc/nova/nova.conf libvirt virt_type qemu
[root@controller ~]# systemctl restart openstack-nova-compute
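crudini --set simply writes the key under the named section of the INI file. The sketch below reproduces the resulting state on a scratch file (crudini itself may not be installed everywhere) and reads the value back the way crudini --get would:

```shell
# Resulting nova.conf fragment after the crudini --set above,
# reproduced on a scratch file so the sketch is self-contained.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[DEFAULT]
debug = false

[libvirt]
virt_type = qemu
EOF
# Read the option back from its section, as crudini --get would.
sed -n '/^\[libvirt\]/,/^\[/p' "$CONF" | grep 'virt_type'
```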
(4) Launch the cloud host
Create a cloud host with "openstack server create"; the command format is as follows:
[root@controller ~]# openstack help server create
usage: openstack server create [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--fit-width] [--print-empty] [--noindent]
[--prefix PREFIX]
(--image <image> | --volume <volume>) --flavor
<flavor> [--security-group <security-group>]
[--key-name <key-name>]
[--property <key=value>]
[--file <dest-filename=source-filename>]
[--user-data <user-data>]
[--availability-zone <zone-name>]
[--block-device-mapping <dev-name=mapping>]
[--nic <net-id=net-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,port-id=port-uuid,auto,none>]
[--network <network>] [--port <port>]
[--hint <key=value>]
[--config-drive <config-drive-volume>|True]
[--min <count>] [--max <count>] [--wait]
<server-name>
Create a cloud host named "cirros-test" from the cirros image, using the flavor with 1 vCPU, 512 MB RAM, and a 10 GB disk, attached to the network-vlan network:
[root@controller ~]# openstack server create --image cirros-0.3.4 --flavor 10 --network network-vlan cirros-test
+-------------------------------------+-------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 3QV3njyWpTnk |
| config_drive | |
| created | 2022-03-01T07:08:26Z |
| flavor | centos (10) |
| hostId | |
| id | d152e1e5-7ff2-4f4e-9a1f-4133d8c4d6fe |
| image | cirros-0.3.4 (84a1ae85-7638-4d77-b5ae-7257b522bd13) |
| key_name | None |
| name | cirros-test |
| progress | 0 |
| project_id | 84b07b58499c419d9bb3c6de945abc21 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2022-03-01T07:08:27Z |
| user_id | 641a71d3af054cf29e99cef1c6f7e534 |
| volumes_attached | |
+-------------------------------------+-------------------------------------------------+
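The server starts in BUILD status and transitions to ACTIVE. A polling sketch; get_status is stubbed with a fixed value here so it runs offline, while on the controller it would call openstack server show cirros-test -f value -c status:

```shell
# Poll until the instance reports ACTIVE. The stub below stands in for
# the real CLI call so the sketch runs without an OpenStack endpoint.
get_status() { echo "ACTIVE"; }   # real: openstack server show cirros-test -f value -c status
for i in 1 2 3 4 5; do
    st=$(get_status)
    [ "$st" = "ACTIVE" ] && break
    sleep 5
done
echo "final status: $st"
```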
4. Managing Virtual Machines
(1) View virtual machines
List the virtual machines with "openstack server list":
[root@controller ~]# openstack server list
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| d152e1e5-7ff2-4f4e-9a1f-4133d8c4d6fe | cirros-test | ACTIVE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
You can view a VM's detailed information, including its security groups, flavor, and network, with "openstack server show":
[root@controller ~]# openstack server show cirros-test
+-------------------------------------+-------------------------------------------------+
| Field | Value |
+-------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | controller |
| OS-EXT-SRV-ATTR:hypervisor_hostname | controller |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2022-03-01T07:08:42.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | network-vlan=192.168.200.187 |
| config_drive | |
| created | 2022-03-01T07:08:26Z |
| flavor | centos (10) |
| hostId | 3f5e51b24503c97ac5e8033e5552e14e990f49f7e5583898f5b7329c |
| id | d152e1e5-7ff2-4f4e-9a1f-4133d8c4d6fe |
| image | cirros-0.3.4 (84a1ae85-7638-4d77-b5ae-7257b522bd13) |
| key_name | None |
| name | cirros-test |
| progress | 0 |
| project_id | 84b07b58499c419d9bb3c6de945abc21 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2022-03-01T07:08:42Z |
| user_id | 641a71d3af054cf29e99cef1c6f7e534 |
| volumes_attached | |
+-------------------------------------+-------------------------------------------------+
(2) Operate virtual machines
Virtual machines can be stopped, started, and rebooted from the command line. Stop the VM as follows:
[root@controller ~]# openstack server stop cirros-test
[root@controller ~]# openstack server list
+--------------------------------------+-------------+---------+-------------------------------+--------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+---------+-------------------------------+--------------+--------+
| 7e424f14-eed1-44f5-a29a-0b64749cbc4d | cirros-test | SHUTOFF | network-vlan=192.168.200.187 | cirros-0.3.4 | centos |
+--------------------------------------+-------------+---------+-------------------------------+--------------+--------+
通过命令操作虚拟机,对虚拟机进行开机操作,命令如下:
[root@controller ~]# openstack server start cirros-test
[root@controller ~]# openstack server list
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| 7e424f14-eed1-44f5-a29a-0b64749cbc4d | cirros-test | ACTIVE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
通过命令操作虚拟机,对虚拟机进行重启操作,命令如下:
[root@controller ~]# openstack server reboot cirros-test
[root@controller ~]# openstack server list
+--------------------------------------+-------------+--------+-------------------------------+--------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+-------------------------------+--------------+--------+
| 7e424f14-eed1-44f5-a29a-0b64749cbc4d | cirros-test | ACTIVE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos |
+--------------------------------------+-------------+--------+-------------------------------+--------------+--------+
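上面每次操作后都手动执行“openstack server list”查询状态,这个过程也可以脚本化:轮询状态直到变为期望值。以下为示意脚本,其中用一个模拟的 openstack 函数代替真实命令以便演示,实际使用时删除该模拟函数即可:

```shell
#!/bin/sh
# 示意:轮询云主机状态,直到变为期望值
openstack() {            # 模拟命令,始终返回 ACTIVE;真实环境中删除此函数
  echo "ACTIVE"
}

wait_for_status() {
  server="$1"; want="$2"
  for i in 1 2 3 4 5; do
    # -f value -c status 只输出状态字段,便于脚本判断
    st=$(openstack server show "$server" -f value -c status)
    [ "$st" = "$want" ] && { echo "$server is $want"; return 0; }
    sleep 2
  done
  echo "timeout waiting for $want" >&2
  return 1
}

wait_for_status cirros-test ACTIVE
```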
5. 云主机调整类型大小
(1)修改配置文件
修改controller节点nova.conf配置文件,添加调整类型大小的参数,controller节点设置参数如下所示:
[root@controller ~]# crudini --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True
[root@controller ~]# crudini --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
修改完配置文件后重启相关服务。命令如下所示:
[root@controller ~]# systemctl restart openstack-nova*
(2)创建云主机类型
现有云主机的硬盘和内存不满足使用需求,需要对云主机进行资源扩容:将内存扩容至1G、硬盘扩容至15G。首先创建一个名称为“centos1”的新云主机类型以满足扩容需求。通过命令创建新云主机类型,命令如下所示:
[root@controller ~]# openstack flavor create --disk 15 --ram 1024 --vcpus 2 centos1
+-----------------------------+---------------------------------------+
| Field | Value |
+-----------------------------+---------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 15 |
| id | a99a75ba-5afb-448b-bfc8-6bc656471476 |
| name | centos1 |
| os-flavor-access:is_public | True |
| properties | |
| ram | 1024 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+-----------------------------+---------------------------------------+
查看当前云主机类型列表,命令如下:
[root@controller ~]# openstack flavor list
+------------------------------------+---------+-----+-----+---------+-----+----------+
| ID |Name | RAM |Disk |Ephemeral|VCPUs|Is Public |
+------------------------------------+---------+-----+-----+---------+-----+----------+
| 1 | m1.tiny | 512 | 10 | 0 | 1 | True |
| 10 | centos | 512 | 10 | 0 | 1 | True |
| 2 | m1.small|1024 | 20 | 0 | 1 | True |
| 3 |m1.medium|2048 | 40 | 0 | 2 | True |
|a99a75ba-5afb-448b-bfc8-6bc656471476|centos1 |1024 | 15 | 0 | 2 | True |
+------------------------------------+---------+-----+-----+---------+-----+----------+
(3)调整云主机类型
查看云主机列表,通过命令查看云主机列表。命令如下:
[root@controller ~]# openstack server list
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
| 7e424f14-eed1-44f5-a29a-0b64749cbc4d | cirros-test | ACTIVE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos |
+--------------------------------------+-------------+--------+------------------------------+--------------+--------+
使用命令“openstack server resize”调整云主机类型,命令格式如下:
[root@controller ~]# openstack help server resize
usage: openstack server resize [-h] [--flavor <flavor> | --confirm | --revert]
[--wait]
<server>
<server> Server (name or ID)
optional arguments:
-h, --help show this help message and exit
--flavor <flavor> Resize server to specified flavor
--confirm Confirm server resize is complete
--revert Restore server state before resize
--wait Wait for resize to complete
使用命令调整云主机“cirros-test”的类型为centos1。调整需要一定时间,添加--wait参数后,命令会在调整完成时返回“Complete”。命令如下所示:
[root@controller ~]# openstack server resize --flavor centos1 --wait cirros-test
Complete
[root@controller ~]# openstack server list
+--------------------------------------+-------------+---------------+------------------------------+--------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+---------------+------------------------------+--------------+---------+
| d152e1e5-7ff2-4f4e-9a1f-4133d8c4d6fe | cirros-test | VERIFY_RESIZE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos1 |
+--------------------------------------+-------------+---------------+------------------------------+--------------+---------+
此时云主机处于VERIFY_RESIZE(待确认)状态,可以直接执行“openstack server resize --confirm cirros-test”确认,也可以登录OpenStack平台在页面上确认,如图1所示:
图1 登录openstack平台
单击右上角设置,选择简体中文,单击“保存”按钮,如图2所示:
图2 设置中文
在左侧导航栏选择“项目→计算→实例”,在实例最后的动作下拉菜单中选择“确认 调整大小/迁移”,如图3与图4所示:
图3 确认调整大小/迁移
图4 调整成功
在命令执行完毕后,通过命令查看云主机列表信息。命令如下:
[root@controller ~]# openstack server list
+--------------------------------------+-------------+--------+------------------------------+--------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------+--------+------------------------------+--------------+---------+
| d152e1e5-7ff2-4f4e-9a1f-4133d8c4d6fe | cirros-test | ACTIVE | network-vlan=192.168.200.187 | cirros-0.3.4 | centos1 |
+--------------------------------------+-------------+--------+------------------------------+--------------+---------+
Cinder服务
1. 规划节点
节点规划见表1。
表1 节点规划
+---------------+------------+---------------+
| IP            | 主机名     | 节点          |
+---------------+------------+---------------+
| 10.24.194.153 | controller | IaaS-allinone |
+---------------+------------+---------------+
2. 基础准备
使用平台提供的OpenStack平台作为实验节点。
案例实施
1. 块存储服务
(1)创建镜像和网络:
[root@controller ~]# curl -O http://mirrors.douxuedu.com/newcloud/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# glance image-create --name cirros-0.3.4 --disk-format qcow2 --container-format bare --progress < cirros-0.3.4-x86_64-disk.img
…
[root@controller ~]# openstack network create --provider-network-type vlan --provider-physical-network provider network-vlan --provider-segment 200
…
[root@controller ~]# openstack subnet create --network network-vlan --allocation-pool start=192.168.200.100,end=192.168.200.200 --gateway 192.168.200.1 --subnet-range 192.168.200.0/24 subnet-vlan
…
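上述子网的地址池从192.168.200.100到192.168.200.200,共101个可分配地址(含首尾),可以用简单算术核对:

```shell
# 核对 allocation-pool 的可分配地址数(首尾均含)
start=100
end=200
count=$(( end - start + 1 ))
echo "$count"    # 101
```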
(2)修改OpenStack平台
修改Nova服务配置文件,设置参数“virt_type=qemu”。命令参数如下:
[root@controller ~]# crudini --set /etc/nova/nova.conf libvirt virt_type qemu
[root@controller ~]# systemctl restart openstack-nova-compute
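crudini修改的是ini格式配置文件,修改后可以用文本工具快速核对写入结果。以下为示意,用临时文件模拟nova.conf的相关片段(真实环境直接对/etc/nova/nova.conf执行同样的提取即可):

```shell
# 示意:提取 ini 段中的键值,核对 crudini 的写入结果
conf=$(mktemp)
cat > "$conf" <<'EOF'
[DEFAULT]
allow_resize_to_same_host = True
[libvirt]
virt_type = qemu
EOF

# 取出 [libvirt] 段中 virt_type 的值
val=$(sed -n '/^\[libvirt\]/,/^\[/{s/^virt_type *= *//p;}' "$conf")
echo "$val"    # qemu
rm -f "$conf"
```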
(3)启动云主机:
[root@controller ~]# openstack server create --image cirros-0.3.4 --flavor 2 --network network-vlan cirros-test
(4)查看Cinder服务状态
使用“openstack volume service list”命令查询块存储服务状态,命令代码如下所示:
[root@controller ~]# openstack volume service list
+------------------+-------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-volume | compute@lvm | nova | enabled | up | 2022-02-10T05:21:08.000000 |
| cinder-scheduler | controller | nova | enabled | up | 2022-02-10T05:21:06.000000 |
+------------------+-------------+------+---------+-------+----------------------------+
(5)创建块存储
通过使用命令“openstack volume create”创建块存储,命令格式如下:
[root@controller ~]# openstack help volume create
usage: openstack volume create [-h] [-f {json,shell,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--fit-width] [--print-empty] [--noindent]
[--prefix PREFIX] [--size <size>]
[--type <volume-type>]
[--image <image> | --snapshot <snapshot> | --source <volume> | --source-replicated <replicated-volume>]
[--description <description>] [--user <user>]
[--project <project>]
[--availability-zone <availability-zone>]
[--consistency-group <consistency-group>]
[--property <key=value>] [--hint <key=value>]
[--multi-attach] [--bootable | --non-bootable]
[--read-only | --read-write]
<name>
通过命令创建块存储,大小为2G,名称为“volume”。命令如下所示:
[root@controller ~]# openstack volume create --size 2 volume
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2022-02-10T05:21:32.000000 |
| description | None |
| encrypted | False |
| id | 67634904-65eb-471f-9ab6-79296e2494b7 |
| migration_status | None |
| multiattach | False |
| name | volume |
| properties | |
| replication_status | None |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | e415c8bc53884e72a7993dffbcde2a1e |
+---------------------+--------------------------------------+
(6)查看块存储
使用“openstack volume list”命令查看块存储列表信息。命令如下:
[root@controller ~]# openstack volume list
+--------------------------------------+--------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+--------+-----------+------+-------------+
| 67634904-65eb-471f-9ab6-79296e2494b7 | volume | available | 2 | |
+--------------------------------------+--------+-----------+------+-------------+
通过openstack命令查看某一块存储的详细信息。命令如下:
[root@controller ~]# openstack volume show volume
+--------------------------------+--------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2022-02-10T05:21:32.000000 |
| description | None |
| encrypted | False |
| id | 67634904-65eb-471f-9ab6-79296e2494b7 |
| migration_status | None |
| multiattach | False |
| name | volume |
| os-vol-host-attr:host | compute@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1776912d52a7444d8b2d09eb86e8d1d9 |
| properties | |
| replication_status | None |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2022-02-10T05:21:33.000000 |
| user_id | e415c8bc53884e72a7993dffbcde2a1e |
+--------------------------------+--------------------------------------+
(7)挂载云硬盘
块存储设备创建成功后,可以在OpenStack上将其挂载至云主机,作为一块云硬盘使用。下面给云主机添加一块磁盘。
将块存储挂载至云主机的命令为“openstack server add volume”,其命令格式为:
[root@controller ~]# openstack help server add volume
usage: openstack server add volume [-h] [--device <device>] <server> <volume>
Add volume to server
positional arguments:
<server> Server (name or ID)
<volume> Volume to add (name or ID)
使用命令将创建的“volume”块存储添加至云主机“cirros-test”上。命令如下:
[root@controller ~]# openstack server add volume cirros-test volume
使用命令查看块存储的列表信息,命令代码如下所示:
[root@controller ~]# openstack volume list
+--------------------------------------+--------+--------+------+--------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+--------+--------+------+--------------------------------------+
| 67634904-65eb-471f-9ab6-79296e2494b7 | volume | in-use | 2 | Attached to cirros-test on /dev/vdb |
+--------------------------------------+--------+--------+------+--------------------------------------+
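挂载成功后,“Attached to”列给出了卷在云主机内对应的设备名。脚本中常用“-f value”输出提取字段;下面用一行示例输出演示解析方式(示意,实际应直接对“openstack volume list -f value”的输出做同样处理):

```shell
# 示意:从 volume list 的一行输出中提取挂载设备名(最后一个字段)
line="67634904-65eb-471f-9ab6-79296e2494b7 volume in-use 2 Attached to cirros-test on /dev/vdb"
device=$(echo "$line" | awk '{print $NF}')
echo "$device"    # /dev/vdb
```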
2. 扩展卷
(1)扩展卷大小
创建完卷后可能因为需求的变更,需要对已有的卷进行扩容操作,这时需要用到“openstack volume set”命令修改卷的信息。命令格式如下:
[root@controller ~]# openstack help volume set
usage: openstack volume set [-h] [--name <name>] [--size <size>]
[--description <description>] [--no-property]
[--property <key=value>]
[--image-property <key=value>] [--state <state>]
[--type <volume-type>]
[--retype-policy <retype-policy>]
[--bootable | --non-bootable]
[--read-only | --read-write]
<volume>
扩容前需先将卷从云主机分离。通过命令将“volume”卷从2G扩容至3G,使用--size参数可修改已创建卷的大小。命令操作如下所示:
[root@controller ~]# openstack server remove volume cirros-test volume
[root@controller ~]# openstack volume set --size 3 volume
[root@controller ~]# openstack volume list
+--------------------------------------+--------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+--------+-----------+------+-------------+
| 67634904-65eb-471f-9ab6-79296e2494b7 | volume | available | 3 | |
+--------------------------------------+--------+-----------+------+-------------+
(2)验证卷大小
将扩容后的卷“volume”挂载至云主机“cirros-test”上,操作命令如下所示:
[root@controller ~]# openstack server add volume cirros-test volume
[root@controller ~]# openstack volume list
OpenStack服务运维案例
Swift服务
案例实施
1. 对象存储服务
(1)查看服务状态
在OpenStack平台中使用命令“swift stat”查看对象存储服务状态,执行命令如下所示:
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# swift stat
(2)创建容器
使用命令创建容器,名称为“swift-test”,操作命令如下:
[root@controller ~]# openstack container create swift-test
+---------------------------------------+----------+------------------------------------+
| account |container | x-trans-id |
+---------------------------------------+----------+------------------------------------+
| AUTH_13b5e35202d54a84ae7a5ae5c57b9846 |swift-test| tx14edab5036414bfab0e64-006204b5ef |
+---------------------------------------+----------+------------------------------------+
(3)查看容器
使用命令查询容器列表信息,命令如下所示:
[root@controller ~]# openstack container list
+------------+
| Name |
+------------+
| swift-test |
+------------+
使用命令查询容器详细信息,命令如下所示:
[root@controller ~]# openstack container show swift-test
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| account | AUTH_13b5e35202d54a84ae7a5ae5c57b9846 |
| bytes_used | 0 |
| container | swift-test |
| object_count | 0 |
+--------------+---------------------------------------+
(4)创建对象
创建完容器后,可以通过命令“openstack object create”在容器中创建对象。命令格式如下所示:
[root@controller ~]# openstack help object create
usage: openstack object create [-h] [-f {csv,json,table,value,yaml}]
[-c COLUMN] [--max-width <integer>]
[--fit-width] [--print-empty] [--noindent]
[--quote {all,minimal,none,nonnumeric}]
[--sort-column SORT_COLUMN] [--name <name>]
<container> <filename> [<filename> ...]
在使用命令创建对象前,需要先在本地创建与上传后相同的目录结构。在本地创建名为“test”的目录“/root/test”,将/root/anaconda-ks.cfg文件复制至“/root/test”目录中。命令代码如下所示:
[root@controller ~]# mkdir test
[root@controller ~]# cp anaconda-ks.cfg test/
创建对象的过程也是向容器中上传文件,使用命令创建“test/anaconda-ks.cfg”对象。操作命令如下所示:
[root@controller ~]# openstack object create swift-test test/anaconda-ks.cfg
+----------------------+------------+----------------------------------+
| object | container | etag |
+----------------------+------------+----------------------------------+
| test/anaconda-ks.cfg | swift-test | 41656296ae6768ae924a5b5f3fe15bf0 |
+----------------------+------------+----------------------------------+
(5)查看对象
创建完对象后,通过命令查看容器中对象信息,使用命令“openstack object list”查看对象信息,命令格式如下所示:
[root@controller ~]# openstack help object list
usage: openstack object list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN]
[--max-width <integer>] [--fit-width]
[--print-empty] [--noindent]
[--quote {all,minimal,none,nonnumeric}]
[--sort-column SORT_COLUMN] [--prefix <prefix>]
[--delimiter <delimiter>] [--marker <marker>]
[--end-marker <end-marker>]
[--limit <num-objects>] [--long] [--all]
<container>
使用命令查看容器“swift-test”中所有对象信息,操作命令如下:
[root@controller ~]# openstack object list swift-test
+----------------------+
| Name |
+----------------------+
| test/anaconda-ks.cfg |
+----------------------+
通过查询命令可以看出,在通过命令上传对象时,本地路径即为容器内对象路径。使用命令“openstack object show”查询“swift-test”容器中“test/anaconda-ks.cfg”对象详细信息,命令如下所示:
[root@controller opt]# openstack object show swift-test test/anaconda-ks.cfg
+----------------+---------------------------------------+
| Field | Value |
+----------------+---------------------------------------+
| account | AUTH_13b5e35202d54a84ae7a5ae5c57b9846 |
| container | swift-test |
| content-length | 6880 |
| content-type | application/octet-stream |
| etag | 41656296ae6768ae924a5b5f3fe15bf0 |
| last-modified | Thu, 10 Feb 2022 06:54:30 GMT |
| object | test/anaconda-ks.cfg |
+----------------+---------------------------------------+
(6)下载对象
存储在容器中的对象,可以在需要时通过“openstack object save”命令下载至本地,命令格式如下所示:
[root@controller ~]# openstack help object save
usage: openstack object save [-h] [--file <filename>] <container> <object>
Save object locally
使用命令将“swift-test”容器中“test/anaconda-ks.cfg”对象下载至本地/opt/目录下。操作命令如下所示:
[root@controller ~]# cd /opt/
[root@controller opt]# openstack object save swift-test test/anaconda-ks.cfg
[root@controller opt]# ls test/
anaconda-ks.cfg
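对于非分片对象,前面“openstack object show”输出中的etag即对象内容的MD5值,下载后可用md5sum校验完整性。以下为示意,用本地生成的临时文件模拟“对象与服务端返回的etag”:

```shell
# 示意:用 md5sum 校验下载对象与 etag 是否一致
f=$(mktemp)
printf 'hello swift\n' > "$f"
etag=$(md5sum "$f" | awk '{print $1}')       # 模拟服务端返回的 etag
local_md5=$(md5sum "$f" | awk '{print $1}')  # 本地下载文件的 MD5
if [ "$etag" = "$local_md5" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
rm -f "$f"
```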
(7)删除对象
使用“openstack object delete”命令删除容器内的对象,命令格式如下所示:
[root@controller opt]# openstack help object delete
usage: openstack object delete [-h] <container> <object> [<object> ...]
使用删除对象命令将“swift-test”容器内“test/anaconda-ks.cfg”删除,查看“swift-test”容器中对象列表信息。操作命令如下所示:
[root@controller opt]# openstack object delete swift-test test/anaconda-ks.cfg
[root@controller opt]# openstack object list swift-test
(8)删除容器
使用“openstack container delete”命令删除容器,命令格式如下所示:
[root@controller opt]# openstack help container delete
usage: openstack container delete [-h] [--recursive]
<container> [<container> ...]
使用删除容器命令将“swift-test”容器删除:
[root@controller opt]# openstack container delete swift-test
查看容器列表信息。操作命令如下:
[root@controller opt]# openstack container list
[root@controller opt]# cd
2. 分片存储案例
(1)创建容器
使用命令创建一个容器test并查看容器的状态信息,命令如下:
[root@controller ~]# swift post test
[root@controller ~]# swift stat test
Account: AUTH_13b5e35202d54a84ae7a5ae5c57b9846
Container: test
Objects: 0
Bytes: 0
Read ACL:
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
Last-Modified: Thu, 10 Feb 2022 07:00:01 GMT
X-Timestamp: 1644476400.54127
X-Trans-Id: tx1523620b734d425fb7249-006204b7f7
Content-Type: application/json; charset=utf-8
X-Openstack-Request-Id: tx1523620b734d425fb7249-006204b7f7
(2)上传镜像并分片存储
将提供的cirros-0.3.4-x86_64-disk.img镜像上传至controller节点的/root目录下,并使用命令上传至test容器中,进行分片存储,每个片段的大小为10M,命令如下:
[root@controller ~]# curl -O http://mirrors.douxuedu.com/newcloud/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# ll
total 8088152
-rw-------. 1 root root 1741 Oct 20 16:11 anaconda-ks.cfg
-rw-r--r--. 1 root root 13287936 Oct 21 14:23 cirros-0.3.4-x86_64-disk.img
上传镜像至容器并进行分片:
[root@controller ~]# swift upload test -S 10000000 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img segment 0
cirros-0.3.4-x86_64-disk.img segment 1
cirros-0.3.4-x86_64-disk.img
查看cirros镜像的存储路径:
[root@controller ~]# swift stat test cirros-0.3.4-x86_64-disk.img
Account: AUTH_13b5e35202d54a84ae7a5ae5c57b9846
Container: test
Object: cirros-0.3.4-x86_64-disk.img
Content Type: application/octet-stream
Content Length: 13267968
Last Modified: Thu, 10 Feb 2022 07:01:25 GMT
ETag: "fedf8be64303d80840c0c67304617bb2"
Manifest: test_segments/cirros-0.3.4-x86_64-disk.img/1644463107.000000/13267968/10000000/
Meta Mtime: 1644463107.000000
Accept-Ranges: bytes
X-Timestamp: 1644476484.47485
X-Trans-Id: tx80ea2f784bd046f1813c7-006204b85a
X-Openstack-Request-Id: tx80ea2f784bd046f1813c7-006204b85a
查看存储路径中的数据片:
[root@controller ~]# swift list test_segments
cirros-0.3.4-x86_64-disk.img/1644463107.000000/13267968/10000000/00000000
cirros-0.3.4-x86_64-disk.img/1644463107.000000/13267968/10000000/00000001
可以看到,cirros镜像上传至Swift对象存储后被分片存储,单个存储片的大小为10M;该镜像大小约为13M,所以分成了两个存储片。
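分片数量可按“文件大小除以分片大小向上取整”估算:13267968字节按10000000字节一片,即2片,与上面列出的两个数据片一致。

```shell
# 按分片大小向上取整计算 swift upload -S 产生的分片数
size=13267968        # cirros 镜像字节数
seg=10000000         # swift upload -S 指定的分片大小
segments=$(( (size + seg - 1) / seg ))
echo "$segments"     # 2
```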
1 使用Heat模板创建用户
编写Heat模板文件create_user.yaml,栈名为test-user,创建名为heat-user的用户,该用户属于admin项目,并赋予heat-user用户admin角色权限,配置用户密码为123456。模板内容如下:
[root@controller ~]# vi create_user.yaml
heat_template_version: 2014-10-16
resources:
user:
type: OS::Keystone::User
properties:
name: heat-user
password: "123456"
domain: demo
default_project: admin
roles: [{"role": admin, "project": admin}]
执行模板文件,命令如下:
[root@controller ~]# openstack stack create -t create_user.yaml test-user
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | e67b28ff-6df3-45e0-9c5f-1ea56229bb49 |
| stack_name | test-user |
| description | No description |
| creation_time | 2023-08-08T01:09:13Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+--------------------------------------+
查询创建结果,命令如下:
[root@controller ~]# openstack user list |grep heat-user
| d2d62ed0897c4102a1e60d4f34104208 | heat-user |
2 使用Heat模板创建网络与子网
编写Heat模板create_net.yaml,创建名为Heat-Network的网络,设置为不共享;创建名为Heat-Subnet的子网,子网网段设置为10.20.2.0/24,开启DHCP服务,地址池为10.20.2.10-10.20.2.100。模板内容如下:
[root@controller ~]# vi create_net.yaml
heat_template_version: 2014-10-16
description: Generated template
resources:
network_1:
type: OS::Neutron::Net
properties:
admin_state_up: true
name: Heat-Network
shared: false
subnet_1:
type: OS::Neutron::Subnet
properties:
allocation_pools:
- end: 10.20.2.100
start: 10.20.2.10
cidr: 10.20.2.0/24
enable_dhcp: true
host_routes: []
ip_version: 4
name: Heat-Subnet
network_id:
get_resource: network_1
使用命令执行该Heat模板文件,命令如下:
[root@controller ~]# openstack stack create -t create_net.yaml test
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | 9eba9c8c-22de-44c7-9fab-2ad7f3d8992c |
| stack_name | test |
| description | Generated template |
| creation_time | 2023-08-08T01:52:36Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+--------------------------------------+
查询创建结果,命令如下:
[root@controller ~]# openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| c9de6482-5876-40a3-a412-67dff8ee3f82 | Heat-Network | 2901706e-281f-48dd-ac0e-1c973b6a5b0e |
+--------------------------------------+--------------+--------------------------------------+
3 使用Heat模板创建容器
编写Heat模板create_container.yaml文件,创建名为heat-swift的容器。模板内容如下:
[root@controller ~]# vi create_container.yaml
heat_template_version: 2014-10-16
resources:
user:
type: OS::Swift::Container
properties:
name: heat-swift
使用命令执行该Heat模板文件,命令如下:
[root@controller ~]# openstack stack create -t create_container.yaml test-container
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | deac2d1e-90a7-43fd-af96-386b6a002b1e |
| stack_name | test-container |
| description | No description |
| creation_time | 2023-08-08T02:19:35Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+--------------------------------------+
查询创建结果,命令如下:
[root@controller ~]# swift list
heat-swift
CI/CD
1 部署Harbor
(1)基础准备
下载软件包到本地:
[root@master ~]# curl -O http://mirrors.douxuedu.com/competition/BlueOcean.tar.gz
解压软件包:
[root@master ~]# tar -zxf BlueOcean.tar.gz
(2)部署Harbor
安装Docker Compose:
[root@master ~]# cp BlueOcean/tools/docker-compose-Linux-x86_64 /usr/bin/docker-compose
[root@master ~]# docker-compose version
docker-compose version 1.25.0, build 0a186604
docker-py version: 4.1.0
CPython version: 3.7.4
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
安装Harbor仓库:
[root@master ~]# tar -zxf BlueOcean/harbor-offline-installer.tar.gz -C /opt/
[root@master ~]# sh /opt/harbor/install.sh
部署完成后Harbor镜像仓库默认用户为admin,密码为Harbor12345。
(3)访问Harbor
在Web端使用火狐浏览器登录Harbor(http://master),登录成功后如图2所示:
图2
新建springcloud项目,访问级别设置为公开,如图3所示:
图3
创建完成后如图4所示:
图4
上传镜像到Harbor(IP为master节点地址):
[root@master ~]# docker login -uadmin -pHarbor12345 10.26.15.244
[root@master ~]# docker load -i BlueOcean/images/maven_latest.tar
[root@master ~]# docker tag maven 10.26.15.244/library/maven
[root@master ~]# docker push 10.26.15.244/library/maven
[root@master ~]# docker load -i BlueOcean/images/java_8-jre.tar
[root@master ~]# docker load -i BlueOcean/images/jenkins_jenkins_latest.tar
[root@master ~]# docker load -i BlueOcean/images/gitlab_gitlab-ce_latest.tar
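重打标签与推送可以写成循环,避免逐条敲命令。以下为示意脚本:仓库地址10.26.15.244需替换为自己的master节点IP,镜像清单也只是假设的示例;脚本先打印将要执行的命令,确认无误后去掉echo即可真正执行:

```shell
# 示意:批量为本地镜像生成 Harbor 标签并推送(先打印命令,确认后去掉 echo)
REGISTRY=10.26.15.244
for img in maven java jenkins/jenkins gitlab/gitlab-ce; do
  # ${img##*/} 去掉镜像名中的路径前缀,只保留最后一段
  target="$REGISTRY/library/${img##*/}"
  echo docker tag "$img" "$target"
  echo docker push "$target"
done
```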
2 部署Jenkins
(1)安装Jenkins
新建命名空间:
[root@master ~]# kubectl create ns devops
部署Jenkins需要使用到一个拥有相关权限的serviceAccount,名称为jenkins-admin,可以给jenkins-admin赋予一些必要的权限,也可以直接绑定一个cluster-admin的集群角色权限,此处选择给予集群角色权限。编写Jenkins资源清单文件:
[root@master ~]# vi jenkins-deploy.yaml
apiVersion: v1
kind: Service
metadata:
name: jenkins
labels:
app: jenkins
spec:
type: NodePort
ports:
- name: http
port: 8080
targetPort: 8080
nodePort: 30880
- name: agent
port: 50000
targetPort: agent
nodePort: 30850
selector:
app: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
labels:
app: jenkins
spec:
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
serviceAccountName: jenkins-admin
containers:
- name: jenkins
image: jenkins/jenkins:latest
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 0
privileged: true
ports:
- name: http
containerPort: 8080
volumeMounts:
- mountPath: /var/jenkins_home
name: jenkinshome
- mountPath: /usr/bin/docker
name: docker
- mountPath: /var/run/docker.sock
name: dockersock
- mountPath: /usr/bin/kubectl
name: kubectl
- mountPath: /root/.kube
name: kubeconfig
volumes:
- name: jenkinshome
hostPath:
path: /home/jenkins_home
- name: docker
hostPath:
path: /usr/bin/docker
- name: dockersock
hostPath:
path: /var/run/docker.sock
- name: kubectl
hostPath:
path: /usr/bin/kubectl
- name: kubeconfig
hostPath:
path: /root/.kube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: jenkinshome
annotations:
volume.beta.kubernetes.io/storage-class: local-path
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1024Mi
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins-admin
labels:
name: jenkins
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: jenkins-admin
labels:
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins-admin
namespace: devops
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
这里通过NodePort的形式来暴露了Jenkins的8080端口,另外还需要暴露一个agent的端口,这个端口主要用于Jenkins的Master和Slave之间的通信。
部署Jenkins:
[root@master ~]# kubectl -n devops apply -f jenkins-deploy.yaml
查看Pod:
[root@master ~]# kubectl -n devops get pods
NAME READY STATUS RESTARTS AGE
jenkins-cc97fd4fc-v5dh2 1/1 Running 0 21s
(2)访问Jenkins
查看Jenkins Service端口:
[root@master ~]# kubectl -n devops get svc
在Web端使用浏览器访问Jenkins(http://master:30880)
获取Jenkins密码:
[root@master ~]# kubectl -n devops exec deploy/jenkins -- cat /var/jenkins_home/secrets/initialAdminPassword
a3ac7ba3812746d0bc8ed40e122ba20b
输入密码后单击“继续”按钮,如图6所示:
图6
将离线插件包拷贝到Jenkins:
[root@master ~]# kubectl -n devops cp BlueOcean/plugins/ jenkins-cc97fd4fc-v5dh2:/var/jenkins_home
重启Jenkins:
[root@master ~]# kubectl -n devops rollout restart deployment jenkins
刷新Jenkins页面,选择“跳过插件安装”,安装完成后进入用户创建页面,创建一个用户jenkins,密码000000,如图7所示:
图7
单击“保存并完成”按钮,如图8所示:
图8
单击“保存并完成”,如图9所示:
图9
单击“开始使用Jenkins”按钮并使用新创建的用户登录Jenkins,如图10所示:
图10
3 部署GitLab
(1)部署GitLab
编写GitLab资源清单文件:
[root@master ~]# vi gitlab-deploy.yaml
apiVersion: v1
kind: Service
metadata:
name: gitlab
spec:
type: NodePort
ports:
- port: 443
nodePort: 30443
targetPort: 443
name: gitlab-443
- port: 80
nodePort: 30888
targetPort: 80
name: gitlab-80
selector:
app: gitlab
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: gitlab
spec:
selector:
matchLabels:
app: gitlab
revisionHistoryLimit: 2
template:
metadata:
labels:
app: gitlab
spec:
containers:
- image: gitlab/gitlab-ce:latest
name: gitlab
imagePullPolicy: IfNotPresent
env:
- name: GITLAB_ROOT_PASSWORD # 设置root用户密码
value: admin@123
- name: GITLAB_PORT
value: "80"
ports:
- containerPort: 443
name: gitlab-443
- containerPort: 80
name: gitlab-80
部署GitLab:
[root@master ~]# kubectl -n devops apply -f gitlab-deploy.yaml
查看Pod:
[root@master ~]# kubectl -n devops get pods
NAME READY STATUS RESTARTS AGE
gitlab-645dd88cd7-6vv2q 1/1 Running 0 29s
jenkins-cc97fd4fc-kmjtl 1/1 Running 0 7m20s
查看GitLab Service:
[root@master ~]# kubectl -n devops get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gitlab NodePort 192.104.250.77 <none> 443:30443/TCP,80:30888/TCP 57s
jenkins NodePort 192.98.107.152 <none> 8080:30880/TCP,50000:30850/TCP 7m48s
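PORT(S)列中冒号后的数字即NodePort。脚本中可以这样从该字段里提取端口号;以下对示例字符串做处理(示意,实际也可配合“kubectl get svc -o jsonpath”直接取字段):

```shell
# 示意:从 "443:30443/TCP,80:30888/TCP" 中提取全部 NodePort
ports="443:30443/TCP,80:30888/TCP"
# 逗号拆行后,取每行冒号与斜杠之间的数字
nodeports=$(echo "$ports" | tr ',' '\n' | sed 's#.*:\([0-9]*\)/.*#\1#' | tr '\n' ' ')
echo $nodeports    # 30443 30888
```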
GitLab启动较慢,可以通过“kubectl logs”查看其启动状态。启动完成后,在Web端访问GitLab(http://master:30888),如图11所示:
图11
登录GitLab,如图12所示:
图12
(2)创建项目
单击“New project”按钮,如图13所示:
图13
单击“Create blank project”按钮创建项目springcloud,可见等级选择“Public”,如图14所示:
图14
单击“Create project”按钮,进入项目,如图15所示:
图15
push源代码到GitLab的springcloud项目:
[root@master ~]# cd BlueOcean/springcloud/
[root@master springcloud]# git config --global user.name "administrator"
[root@master springcloud]# git config --global user.email "admin@example.com"
[root@master springcloud]# git remote remove origin
[root@master springcloud]# git remote add origin http://10.26.10.143:30888/root/springcloud.git
[root@master springcloud]# git add .
[root@master springcloud]# git commit -m "initial commit"
[root@master springcloud]# git push -u origin master
刷新网页,springcloud项目master分支中的文件已经更新了(使用火狐浏览器打开网页),如图16所示:
图16
4 配置Jenkins连接GitLab
(1)设置Outbound requests
登录Gitlab管理员界面(http://master:30888/admin),如图17所示:
图17
在左侧导航栏选择“Settings→Network”,设置“Outbound requests”,勾选“Allow requests to the local network from web hooks and services”复选框,如图18所示:
图18
配置完成后保存。
(2)创建GitLab API Token
单击GitLab用户头像图标,如图19所示:
图19
在左侧导航栏选择“Preferences”,如图20所示:
图20
在左侧导航栏选择“Access Tokens”添加Token,如21图所示:
图21
单击“Create personal access token”按钮生成Token,如图22所示:
图22
记录下Token(U6p_ubRixGSdRvs6MGft),后面配置Jenkins时会用到。
(3)设置Jenkins
登录Jenkins首页,选择“系统管理→系统配置”,配置GitLab信息,取消勾选“Enable authentication for ‘/project’ end-point”,输入“Connection name”和“Gitlab host URL”,如图23所示:
图23
添加Credentials,单击“添加”→“Jenkins”按钮添加认证信息,将Gitlab API Token填入,如图24所示:
图24
选择新添加的证书,然后单击“Test Connection”按钮,如图25所示:
图25
返回结果为Success,说明Jenkins可以正常连接GitLab。
5 Jenkinsfile
(1)新建任务
登录Jenkins首页,新建任务springcloud,任务类型选择“流水线”,如图26所示:
图26
单击“确定”按钮,配置构建触发器,如图27所示:
图27
记录下GitLab webhook URL的地址(http://10.26.15.244:30880/project/springcloud),后期配置webhook需要使用。
配置流水线,在定义域中选择“Pipeline script from SCM”,此选项指示Jenkins从源代码管理(SCM)仓库获取流水线。在SCM域中选择“Git”,然后输入“Repository URL”,如图28所示:
图28
在Credentials中选择“添加”,凭据类型选择“Username with password”,然后输入对应信息,如图29所示:
图29
单击“保存”按钮,回到流水线中,在Credentials域选择刚才添加的凭证,如图30所示:
图30
保存任务。
(2)编写流水线
Pipeline有两种创建方法:可以直接在Jenkins的Web UI界面中输入脚本;也可以创建一个Jenkinsfile脚本文件放入项目源码库中。
一般推荐让Jenkins直接从源代码管理(SCM)中载入Jenkinsfile Pipeline这种方法。
登录GitLab进入springcloud项目,选择新建文件,如图31所示:
图31
将流水线脚本输入到Jenkinsfile中,如图32所示:
图32
Pipeline包括声明式和脚本式两种语法,两者有本质区别:声明式语法对使用者更友好,使流水线代码更易编写和阅读;脚本式语法则提供更丰富的语法特性。
此处选择声明式Pipeline,完整的流水线脚本如下:
pipeline {
    agent none
    stages {
        stage('mvn-build') {
            agent {
                docker {
                    image '10.26.15.244/library/maven'
                    args '-v /root/.m2:/root/.m2'
                }
            }
            steps {
                sh 'cp -rfv /opt/repository /root/.m2/ && ls -l /root/.m2/repository'
                sh 'mvn package -DskipTests'
                archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
            }
        }
        stage('image-build') {
            agent any
            steps {
                sh 'cd gateway && docker build -t 10.26.15.244/springcloud/gateway -f Dockerfile .'
                sh 'cd config && docker build -t 10.26.15.244/springcloud/config -f Dockerfile .'
                sh 'docker login 10.26.15.244 -u=admin -p=Harbor12345'
                sh 'docker push 10.26.15.244/springcloud/gateway'
                sh 'docker push 10.26.15.244/springcloud/config'
            }
        }
        stage('cloud-deploy') {
            agent any
            steps {
                sh 'sed -i "s/sqshq\\/piggymetrics-gateway/10.26.15.244\\/springcloud\\/gateway/g" yaml/deployment/gateway-deployment.yaml'
                sh 'sed -i "s/sqshq\\/piggymetrics-config/10.26.15.244\\/springcloud\\/config/g" yaml/deployment/config-deployment.yaml'
                sh 'kubectl create ns springcloud'
                sh 'kubectl apply -f yaml/deployment/gateway-deployment.yaml'
                sh 'kubectl apply -f yaml/deployment/config-deployment.yaml'
                sh 'kubectl apply -f yaml/svc/gateway-svc.yaml'
                sh 'kubectl apply -f yaml/svc/config-svc.yaml'
            }
        }
    }
}
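The cloud-deploy stage relies on sed to swap the upstream image names for the Harbor ones. The substitution can be sanity-checked locally before committing the Jenkinsfile; a minimal sketch, using a one-line stand-in for the real manifest:

```shell
# One-line stand-in resembling the deployment manifest's image field.
echo '        image: sqshq/piggymetrics-gateway' > /tmp/gateway-deployment.yaml

# Same substitution the pipeline's sh step runs (backslash-escaped slashes).
sed -i "s/sqshq\/piggymetrics-gateway/10.26.15.244\/springcloud\/gateway/g" /tmp/gateway-deployment.yaml

# The upstream name is now replaced by the Harbor path.
cat /tmp/gateway-deployment.yaml
```

If the real manifests were already rewritten in a previous build, the sed is simply a no-op, so re-running the stage is safe.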
(3) Enable Jenkins Anonymous Access
Log in to the Jenkins home page, choose "Manage Jenkins → Configure Global Security", and under Authorization select "Anyone can do anything", as shown in Figure 33.
Figure 33
6 Building the CI/CD Flow
(1) Trigger a Build
In a GitLab project, webhook events are typically used to trigger builds: once configured, GitLab sends a POST request to the specified URL on each event.
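For push events, the POST carries an X-Gitlab-Event header and a JSON body whose object_kind field identifies the event type. A minimal sketch of such a request (the payload is trimmed to a few illustrative fields; real payloads carry many more):

```shell
# Minimal illustrative push-event payload.
cat > /tmp/push-event.json <<'EOF'
{
  "object_kind": "push",
  "ref": "refs/heads/master",
  "project": { "name": "springcloud" }
}
EOF

# The request GitLab issues against the Jenkins webhook URL recorded earlier
# (commented out here, since the endpoint only exists in the lab environment):
# curl -X POST http://10.26.15.244:30880/project/springcloud \
#      -H "Content-Type: application/json" \
#      -H "X-Gitlab-Event: Push Hook" \
#      --data @/tmp/push-event.json

# Prints the object_kind line of the payload.
grep '"object_kind"' /tmp/push-event.json
```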
Log in to GitLab, enter the springcloud project, choose "Settings → Webhooks" in the left navigation bar, fill in the GitLab webhook URL recorded earlier, and disable SSL verification, as shown in Figure 34.
Figure 34
Click the "Add webhook" button to add the webhook; the result is shown in Figure 35:
Figure 35
Click "Test → Push events" to test it, as shown in Figure 36:
Figure 36
An HTTP 200 response indicates that the webhook is configured successfully.
(2) View in Jenkins
Log in to Jenkins; the springcloud project has started building, as shown in Figure 37:
Figure 37
Choose "Open Blue Ocean" in the left navigation bar, as shown in Figure 38:
Figure 38
Blue Ocean is a visual UI for Pipeline that remains compatible with classic freestyle jobs. Designed for Jenkins Pipeline from the ground up, it reduces the clutter of the classic interface and adds clarity for every member of the team.
Click the project name springcloud, as shown in Figure 39:
Figure 39
Click the running pipeline to open the stage view, as shown in Figure 40:
Figure 40
Click any ">" symbol to view the build details of each step, as shown in Figure 41:
Figure 41
If the build succeeds, the Blue Ocean interface turns green. The finished build is shown in Figure 42:
Figure 42
Exit the stage view, as shown in Figure 43:
Figure 43
Return to the Jenkins home page, as shown in Figure 44:
Figure 44
(3) View in Harbor
Enter the springcloud project in the Harbor registry and check the image list; a gateway image has been uploaded automatically, as shown in Figure 45:
Figure 45
(4) View in Kubernetes
Pods start slowly; wait 3–5 minutes. Check the Pods on the command line:
[root@master ~]# kubectl -n springcloud get pods
NAME READY STATUS RESTARTS AGE
config-6b6875fffd-p2g7j 1/1 Running 0 3m6s
gateway-5d5f8cc944-vstgm 1/1 Running 0 3m6s
Check the services:
[root@master ~]# kubectl -n springcloud get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
config NodePort 192.109.170.192 <none> 8888:30015/TCP 3m18s
gateway NodePort 192.110.243.17 <none> 4000:30010/TCP 3m18s
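The NodePort can also be pulled out of the PORT(S) column programmatically, which is handy in scripts; a small sketch against the gateway line captured above:

```shell
# Line captured from `kubectl -n springcloud get service` above.
line='gateway   NodePort   192.110.243.17   <none>   4000:30010/TCP   3m18s'

# PORT(S) is field 5, formatted port:nodePort/proto; keep only the nodePort.
node_port=$(echo "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"   # → 30010
```

In a live cluster the same value comes from `kubectl -n springcloud get service gateway -o jsonpath='{.spec.ports[0].nodePort}'`.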
Access the service via port 30010, as shown in Figure 46:
Figure 46
At this point, the complete CI/CD flow is finished.
[root@k8s-master-node1 ~]# curl -O http://mirrors.douxuedu.com/competition/Explorer.tar.gz
[root@k8s-master-node1 ~]# tar -zxvf Explorer.tar.gz
Import the CentOS base image:
[root@k8s-master-node1 ~]# docker load -i KodExplorer/CentOS_7.9.2009.tar
(2) Write the Dockerfile
Write the mysql_init.sh script:
[root@k8s-worker-node1 ~]# cd KodExplorer/
[root@k8s-worker-node1 KodExplorer]# vi mysql_init.sh
#!/bin/bash
mysql_install_db --user=root
mysqld_safe --user=root &
sleep 8
mysqladmin -u root password 'root'
mysql -uroot -proot -e "grant all on *.* to 'root'@'%' identified by 'root'; flush privileges;"
Write the yum repo file:
[root@k8s-worker-node1 KodExplorer]# vi local.repo
[yum]
name=yum
baseurl=file:///root/yum
gpgcheck=0
enabled=1
Write the Dockerfile:
[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-mariadb
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rfv /etc/yum.repos.d/*
COPY local.repo /etc/yum.repos.d/
COPY yum /root/yum
ENV LC_ALL en_US.UTF-8
RUN yum -y install mariadb-server
COPY mysql_init.sh /opt/
RUN bash /opt/mysql_init.sh
EXPOSE 3306
CMD ["mysqld_safe","--user=root"]
(3) Build the Image
Build the image:
[root@k8s-worker-node1 KodExplorer]# docker build -t kod-mysql:v1.0 -f Dockerfile-mariadb .
3. Containerized Deployment of Redis
(1) Write the Dockerfile
Write the Dockerfile:
[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-redis
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
COPY local.repo /etc/yum.repos.d/
COPY yum /root/yum
RUN yum -y install redis
RUN sed -i 's/127.0.0.1/0.0.0.0/g' /etc/redis.conf && \
sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis.conf
EXPOSE 6379
CMD ["/usr/bin/redis-server","/etc/redis.conf"]
(2) Build the Image
[root@k8s-worker-node1 KodExplorer]# docker build -t kod-redis:v1.0 -f Dockerfile-redis .
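The two sed edits in Dockerfile-redis (bind address and protected-mode) can be verified outside the image before building; a quick sketch against a two-line stub that stands in for the real /etc/redis.conf:

```shell
# Stub containing only the two directives the Dockerfile rewrites.
printf 'bind 127.0.0.1\nprotected-mode yes\n' > /tmp/redis.conf

# Same substitutions as the RUN instruction.
sed -i 's/127.0.0.1/0.0.0.0/g' /tmp/redis.conf
sed -i 's/protected-mode yes/protected-mode no/g' /tmp/redis.conf

cat /tmp/redis.conf
# bind 0.0.0.0
# protected-mode no
```

Without these edits Redis would only listen on loopback inside the container and refuse unauthenticated remote connections, so the other services could not reach it.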
4. Containerized Deployment of PHP
(1) Write the Dockerfile
Write the Dockerfile:
[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-php
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
COPY local.repo /etc/yum.repos.d/
COPY yum /root/yum
RUN yum install httpd php php-cli unzip php-gd php-mbstring -y
WORKDIR /var/www/html
COPY php/kodexplorer4.37.zip .
RUN unzip kodexplorer4.37.zip
RUN chmod -R 777 /var/www/html
RUN sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /etc/httpd/conf/httpd.conf
EXPOSE 80
CMD ["/usr/sbin/httpd","-D","FOREGROUND"]
(2) Build the Image
[root@k8s-worker-node1 KodExplorer]# docker build -t kod-php:v1.0 -f Dockerfile-php .
5. Containerized Deployment of Nginx
(1) Write the Dockerfile
Write the Dockerfile:
[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-nginx
FROM centos:centos7.9.2009
MAINTAINER Chinaskills
RUN rm -rf /etc/yum.repos.d/*
COPY local.repo /etc/yum.repos.d/
COPY yum /root/yum
RUN yum -y install nginx
RUN /bin/bash -c 'echo init ok'
EXPOSE 80
CMD ["nginx","-g","daemon off;"]
(2) Build the Image
[root@k8s-worker-node1 KodExplorer]# docker build -t kod-nginx:v1.0 -f Dockerfile-nginx .
6. Orchestrate and Deploy the Services
(1) Write docker-compose.yaml
[root@k8s-worker-node1 KodExplorer]# vi docker-compose.yaml
version: '3.2'
services:
  nginx:
    container_name: nginx
    image: kod-nginx:v1.0
    volumes:
      - ./www:/data/www
      - ./nginx/logs:/var/log/nginx
    ports:
      - "443:443"
    restart: always
    depends_on:
      - php-fpm
    links:
      - php-fpm
    tty: true
  mysql:
    container_name: mysql
    image: kod-mysql:v1.0
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./mysql/logs:/var/lib/mysql-logs
    ports:
      - "3306:3306"
    restart: always
  redis:
    container_name: redis
    image: kod-redis:v1.0
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    restart: always
    command: redis-server /usr/local/etc/redis/redis.conf
  php-fpm:
    container_name: php-fpm
    image: kod-php:v1.0
    ports:
      - "8090:80"
    links:
      - mysql
      - redis
    restart: always
    depends_on:
      - redis
      - mysql
(2) Deploy the Services
[root@k8s-worker-node1 KodExplorer]# docker-compose up -d
Creating network "kodexplorer_default" with the default driver
Creating redis ... done
Creating mysql ... done
Creating php-fpm ... done
Creating nginx ... done
Check the services:
[root@k8s-worker-node1 KodExplorer]# docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------
mysql mysqld_safe --user=root Up 0.0.0.0:3306->3306/tcp,:::3306->3306/tcp
nginx nginx -g daemon off; Up 0.0.0.0:443->443/tcp,:::443->443/tcp, 80/tcp
php-fpm /usr/sbin/httpd -D FOREGROUND Up 0.0.0.0:8090->80/tcp,:::8090->80/tcp
redis redis-server /usr/local/et ... Up 0.0.0.0:6379->6379/tcp,:::6379->6379/tcp
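Rather than scanning the table by eye, a scripted check can confirm that every service reports Up; a sketch run against the `docker-compose ps` output captured above:

```shell
# The data rows of the `docker-compose ps` table above, captured verbatim.
cat > /tmp/compose-ps.txt <<'EOF'
mysql     mysqld_safe --user=root         Up   0.0.0.0:3306->3306/tcp,:::3306->3306/tcp
nginx     nginx -g daemon off;            Up   0.0.0.0:443->443/tcp,:::443->443/tcp, 80/tcp
php-fpm   /usr/sbin/httpd -D FOREGROUND   Up   0.0.0.0:8090->80/tcp,:::8090->80/tcp
redis     redis-server /usr/local/et ...  Up   0.0.0.0:6379->6379/tcp,:::6379->6379/tcp
EOF

# All four services should be in the Up state.
grep -c ' Up ' /tmp/compose-ps.txt   # → 4
```

In a live environment the same idea is `docker-compose ps | grep -c ' Up '`, compared against the number of services in the compose file.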
2. Install and Deploy Grafana
2.1 Create the Data Directory and Set Permissions
cd /data/containers
mkdir -p grafana/data
2.2 Create the docker-compose.yml File
Create the configuration file: vi grafana/docker-compose.yml
name: "grafana"
services:
  grafana:
    image: grafana/grafana-oss:10.4.4
    container_name: grafana
    restart: always
    user: '0'
    networks:
      - app-tier
    ports:
      - '3000:3000'
    volumes:
      - ./data:/var/lib/grafana
networks:
  app-tier:
    name: app-tier
    driver: bridge
    #external: true
2.3 Create and Start the Service
- Start the service
cd /data/containers/grafana
docker compose up -d
2.4 Verify the Container Status
- Check the grafana container status
$ docker compose ps
NAME      IMAGE                        COMMAND     SERVICE   CREATED          STATUS          PORTS
grafana   grafana/grafana-oss:10.4.4   "/run.sh"   grafana   14 seconds ago   Up 12 seconds   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
- Check the grafana service logs
# Check the logs for container errors; output omitted
$ docker compose logs -f
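Beyond the logs, Grafana exposes an HTTP health endpoint that can confirm the service is actually serving requests. A sketch follows; the sample JSON reflects the typical shape of the response and should be treated as illustrative:

```shell
# In the live environment:
#   curl -s http://localhost:3000/api/health
# A healthy instance typically answers with JSON like the sample below.
cat > /tmp/health.json <<'EOF'
{ "database": "ok", "version": "10.4.4" }
EOF

# A simple readiness check looks for database: ok in the response.
grep -o '"database": "ok"' /tmp/health.json
```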