2023 Sample Questions


2023 BRICS Provincial Competition (2023_金砖_省赛)


2023JZ_S_YT

Module A: OpenStack Platform Deployment and Operations (30 points)

Business scenario:

A company plans to use OpenStack to build an enterprise cloud platform for deploying internal and external business applications. The cloud platform provides IT resource pooling, elastic allocation, centralized management, performance optimization, and unified security authentication. The system structure is shown in the figure below:

The enterprise cloud platform is built on two cloud servers provided by the competition platform, configured as shown in the following table:

Table 1: IP address plan

Device          Hostname    Interface  IP address
Cloud server 1  controller  eth0       Public IP: ********  Private IP: 192.168.100.*/24
                            eth1       Private IP: 192.168.200.*/24
Cloud server 2  compute     eth0       Public IP: ********  Private IP: 192.168.100.*/24
                            eth1       Private IP: 192.168.200.*/24

Notes:

1. Contestants should check that the workstation PC hardware and network are working properly.

2. The competition runs in cluster mode; each team is given a Huawei Cloud account and password and an exam-system account and password, which contestants use to log in to Huawei Cloud and the exam system respectively.

3. All software packages needed for the competition are under /root on the cloud hosts.

4. The public and private IPs in Table 1 are as displayed on your own cloud hosts; every contestant's public and private IPs are different. Use the public IP when connecting to the cloud hosts with third-party remote-access software.

Task 1: OpenStack Private Cloud Platform Setup (15 points)

1. Basic environment configuration (2 points)

Set the hostname of the controller node to controller and the hostname of the compute node to compute, edit the hosts file to map the IP addresses to the hostnames, and change the root password to 000000.

On the controller node, submit the output of cat /etc/hosts to the answer box.
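A minimal command sketch for this step (the IP addresses below are taken from the sample output and will differ in your environment):

hostnamectl set-hostname controller        # on the controller node
hostnamectl set-hostname compute           # on the compute node
echo root:000000 | chpasswd                # change the root password on both nodes
cat >> /etc/hosts <<EOF
192.168.100.225 controller
192.168.100.154 compute
EOF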

[root@controller ~]# cat /etc/hosts
::1	localhost	localhost.localdomain	localhost6	localhost6.localdomain6
127.0.0.1	localhost	localhost.localdomain	localhost4	localhost4.localdomain4
127.0.0.1	controller-0001	controller-0001
192.168.100.225 controller
192.168.100.154 compute

2. Yum repository configuration (2 points)

Install an FTP service on the controller node and build yum repositories from the provided CentOS image and OpenStack packages; use this FTP source as the network source for installing the OpenStack platform on the compute node. Name the yum repository file local.repo on the controller node and ftp.repo on the compute node.

On the compute node, submit the output of cat /etc/yum.repos.d/ftp.repo to the answer box.

[root@compute ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
enabled=1
gpgcheck=0
baseurl=ftp://controller/centos
[iaas]
name=iaas
enabled=1
gpgcheck=0
baseurl=ftp://controller/openstack
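The controller-side configuration is not reproduced above; a minimal sketch, assuming the CentOS and OpenStack package trees are unpacked under /opt and served by vsftpd with anon_root pointing at /opt (adjust paths to your own environment):

yum install -y vsftpd
echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
systemctl enable --now vsftpd
cat > /etc/yum.repos.d/local.repo <<EOF
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=file:///opt/openstack
gpgcheck=0
enabled=1
EOF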

3. Passwordless SSH configuration (1 point)

Configure the controller node for passwordless SSH access to the compute node. When the configuration is done, test it by SSHing to the compute node's hostname.

On the controller node, submit the output of cat .ssh/known_hosts &&ssh compute to the answer box.
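A minimal sketch of the key setup on the controller node:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
ssh-copy-id root@compute                   # push the public key to the compute node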

[root@controller ~]# cat .ssh/known_hosts &&ssh compute
compute,192.168.100.154 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHOQIhYQUXb9G0K+o3U5sS2WUabhDY2citZa9R0DMBz8yiJNlOCw8rk2+YYdPK00QKLrSEM5zwSaOjEDPWaHIA4=
Last login: Mon Sep 11 13:58:18 2023 from 153.101.215.86
	
	Welcome to Huawei Cloud Service

[root@compute ~]#

4. Base installation (1 point)

Install the openstack-shell package on both the controller node and the compute node, and set the basic variables in the script file /root/variable.sh on both nodes according to Table 2. When done, run openstack-completion.sh on both the controller node and the compute node to deploy the platform client.

Table 2: Cloud platform configuration

Service  Variable  Parameter/Password
Mysql root 000000
Keystone 000000
Glance 000000
Nova 000000
Neutron 000000
Heat 000000
Cinder 000000
Swift 000000
Ceilometer 000000
Manila 000000
Cloudkitty 000000
barbican 000000
Keystone DOMAIN_NAME demo
Admin 000000
Rabbit 000000
Glance 000000
Nova 000000
Neutron 000000
Heat 000000
Cinder 000000
Swift 000000
Ceilometer 000000
Manila 000000
Cloudkitty 000000
barbican 000000
Neutron Metadata 000000
External Network eth1 (use the actual interface name)

On the controller node, submit the output of cat /root/variable.sh |grep -Ev '^$|#' to the answer box.

[root@controller ~]# yum install openstack-shell -y
[root@controller ~]# cat /root/variable.sh |grep -Ev '^$|#'
HOST_IP=192.168.100.225
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.100.154
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.100.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_IP_HOST=192.168.100.225
INTERFACE_IP_NODE=192.168.100.154
INTERFACE_NAME_HOST=eth1
INTERFACE_NAME_NODE=eth1
Physical_NAME=provider
minvlan=100
maxvlan=200
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=sdb1
SWIFT_PASS=000000
OBJECT_DISK=sdb2
STORAGE_LOCAL_NET_IP=192.168.100.154
HEAT_DBPASS=000000
HEAT_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
MANILA_DBPASS=000000
MANILA_PASS=000000
SHARE_DISK=sdb3
CLOUDKITTY_DBPASS=000000
CLOUDKITTY_PASS=000000
BARBICAN_DBPASS=000000
BARBICAN_PASS=000000

5. Database installation and tuning (1 point)

On the controller node, install MariaDB, Memcached, RabbitMQ, and related services with the openstack-controller-mysql.sh script. After installation, edit /etc/my.cnf to meet the following requirements:

1. Configure the database's handling of upper and lower case (table-name case sensitivity);

2. Set the buffer for caching InnoDB table indexes, data, and inserted rows to 4G;

3. Set the database log buffer to 64 MB;

4. Set the database redo log size to 256 MB;

5. Set the number of redo log files in the group to 2.

On the controller node, submit the output of cat /etc/my.cnf | grep -Ev ^'(#|$)' to the answer box.

[root@controller bin]# vi /etc/my.cnf
lower_case_table_names=1
innodb_buffer_pool_size=4G
innodb_log_buffer_size = 64M
innodb_log_file_size=256M
innodb_log_files_in_group=2
[root@controller bin]# cat /etc/my.cnf | grep -Ev ^'(#|$)'
[client-server]
[mysqld]
symbolic-links=0
!includedir /etc/my.cnf.d
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections = 10000
lower_case_table_names=1
innodb_buffer_pool_size=4G
innodb_log_buffer_size = 64M
innodb_log_file_size=256M
innodb_log_files_in_group=2

6. Keystone installation and usage (1 point)

On the controller node, install the Keystone service with the openstack-controller-keystone.sh script. After installation, use the appropriate commands to create a user named competition with password 000000.

On the controller node, submit the output of source /root/admin-openrc && openstack service list && openstack user list to the answer box.
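The creation command itself is not reproduced here; one possible form (the --domain demo value is an assumption based on the DOMAIN_NAME variable in Table 2):

source /root/admin-openrc
openstack user create --domain demo --password 000000 competition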

[root@controller bin]# source /root/admin-openrc && openstack service list && openstack user list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| 3cc896e820c943f29f3eb06c0551e635 | keystone | identity |
+----------------------------------+----------+----------+
+----------------------------------+-------------+
| ID                               | Name        |
+----------------------------------+-------------+
| 51cc36d547874921912de25ee09e4369 | admin       |
| cc00792c4ce14d6b82b90639aa1df452 | demo        |
| 9488d3f53ae647d5802373c3252a62f4 | competition |
+----------------------------------+-------------+

7. Glance installation and usage (1 point)

On the controller node, install the Glance service with the openstack-controller-glance.sh script. Use the CLI to upload the provided cirros-0.3.4-x86_64-disk.img image (available for download from the HTTP service) to the platform, name it cirros, and set the minimum boot disk to 10G and the minimum boot memory to 1G.

On the controller node, submit the output of source /root/admin-openrc && openstack-service status|grep glance && openstack image show cirros to the answer box.
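The upload command is not reproduced here; one possible form, consistent with the sample output below (adjust the image path to wherever you downloaded the file):

openstack image create cirros --file /root/cirros-0.3.4-x86_64-disk.img --disk-format raw --container-format bare --min-disk 10 --min-ram 1024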

[root@controller bin]# source /root/admin-openrc && openstack-service status|grep glance && openstack image show cirros
MainPID=12634 Id=openstack-glance-api.service ActiveState=active
MainPID=12635 Id=openstack-glance-registry.service ActiveState=active
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2023-09-11T10:57:00Z                                                                                                                                                                       |
| disk_format      | raw                                                                                                                                                                                        |
| file             | /v2/images/21612ef4-983a-4d74-a762-4f4d6e9086dc/file                                                                                                                                       |
| id               | 21612ef4-983a-4d74-a762-4f4d6e9086dc                                                                                                                                                       |
| min_disk         | 10                                                                                                                                                                                         |
| min_ram          | 1024                                                                                                                                                                                       |
| name             | cirros                                                                                                                                                                                     |
| owner            | f0a429e782bd4dac87be996b623d9376                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 13287936                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2023-09-11T10:57:00Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | shared                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

8. Nova installation (1 point)

Install the Nova service with the openstack-controller-nova.sh script on the controller node and the openstack-compute-nova.sh script on the compute node. After installation, modify the relevant Nova configuration files to fix the failure in which an instance waits too long during boot, times out, and fails to obtain an IP address.

On the controller node, submit the output of cat /etc/nova/nova.conf | grep -Ev ^'(#|$)' to the answer box.

cat /etc/nova/nova.conf | grep vif_plugging_is_fatal
vif_plugging_is_fatal=false
systemctl restart openstack-nova*
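A sketch of setting the options non-interactively (crudini being installed is an assumption about the image; editing the [DEFAULT] section of /etc/nova/nova.conf by hand is equivalent, and vif_plugging_timeout=0 is often set alongside the option shown above), followed by the same service restart:

crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal false
crudini --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0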

9. Neutron installation (1 point)

Install the Neutron service on the controller and compute nodes with the provided openstack-controller-neutron.sh and openstack-compute-neutron.sh scripts.

On the controller node, submit the output of source /root/admin-openrc && openstack-service status | grep neutron && openstack network agent list to the answer box.

[root@controller bin]# source /root/admin-openrc && openstack-service status | grep neutron && openstack network agent list
MainPID=15390 Id=neutron-dhcp-agent.service ActiveState=active
MainPID=15392 Id=neutron-l3-agent.service ActiveState=active
MainPID=15397 Id=neutron-linuxbridge-agent.service ActiveState=active
MainPID=15391 Id=neutron-metadata-agent.service ActiveState=active
MainPID=15388 Id=neutron-server.service ActiveState=active
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 05af4941-54a0-46a8-b663-a0150bfd7f8b | Linux bridge agent | compute    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 540bf5cf-1303-414f-bb38-6a120a196cd0 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

10. Dashboard installation (1 point)

On the controller node, install the Dashboard service with the openstack-controller-dashboard.sh script. After installation, change the Dashboard's Django session data to be stored in files.

On the controller node, submit the output of cat /etc/openstack-dashboard/local_settings | grep -Ev ^'(#|$)' | grep django to the answer box.

[root@controller bin]# cat /etc/openstack-dashboard/local_settings | grep -Ev ^'(#|$)' | grep django
from django.utils.translation import ugettext_lazy as _
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
    # django.db.backends will still log unless it is disabled explicitly.
        'django': {
        # Logging from django.db.backends is VERY verbose, send to null
        'django.db.backends': {
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',

11. Swift installation (1 point)

Install the Swift service with the openstack-controller-swift.sh script on the controller node and the openstack-compute-swift.sh script on the compute node. After installation, use the CLI to create a container named examcontainer and upload the cirros-0.3.4-x86_64-disk.img image into it as a segmented object with a segment size of 10M.

On the controller node, submit the output of source /root/admin-openrc && openstack-service status | grep swift && swift list examcontainer_segments to the answer box.

[root@controller bin]# swift post examcontainer
[root@controller bin]# swift upload -S 10485760 examcontainer /root/cirros-0.3.4-x86_64-disk.img 
root/cirros-0.3.4-x86_64-disk.img segment 1
root/cirros-0.3.4-x86_64-disk.img segment 0
root/cirros-0.3.4-x86_64-disk.img
[root@controller bin]# source /root/admin-openrc && openstack-service status | grep swift && swift list examcontainer_segments
MainPID=16849 Id=openstack-swift-proxy.service ActiveState=active
root/cirros-0.3.4-x86_64-disk.img/1694429662.415803/13287936/10485760/00000000
root/cirros-0.3.4-x86_64-disk.img/1694429662.415803/13287936/10485760/00000001

12. Cinder disk creation (1 point)

Install the Cinder service with the openstack-controller-cinder.sh script on the controller node and the openstack-compute-cinder.sh script on the compute node. Then expand the block storage on the compute node: create an additional 5G partition on the compute node and add it to Cinder's backend storage.

On the compute node, submit the output of openstack-service status | grep cinder && vgdisplay to the answer box.
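A sketch of creating the extra 5G partition before the pvcreate/vgextend steps shown below (the spare disk /dev/sdc is an assumption inferred from the /dev/sdc1 device in the output; adjust to your host):

parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 0% 5GiB
partprobe /dev/sdc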

[root@compute bin]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.
[root@compute bin]# vgextend cinder-volumes /dev/sdc1
  Volume group "cinder-volumes" successfully extended
[root@compute bin]# openstack-service status | grep cinder && vgdisplay
MainPID=12692 Id=openstack-cinder-volume.service ActiveState=active
  --- Volume group ---
  VG Name               cinder-volumes
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               23.62 GiB
  PE Size               4.00 MiB
  Total PE              6047
  Alloc PE / Size       4539 / 17.73 GiB
  Free  PE / Size       1508 / 5.89 GiB
  VG UUID               6ZQxL9-Ff1V-SP8H-K12N-gtBX-BYrT-Xe6fXw

13. Cloudkitty installation and usage (1 point)

Install the Cloudkitty service with the openstack-controller-cloudkitty.sh script. After installation, enable the hashmap rating module, create a volume_thresholds group, create a service matching rule volume.size, and set the price to 0.01 per GB. Then, to apply a discount for large volumes, create a threshold in the volume_thresholds group that applies a 2% discount (rate 0.98) above 50GB.

On the controller node, submit the output of source /root/admin-openrc && cloudkitty hashmap threshold list -s $(cloudkitty hashmap service list |grep volume | awk -F '|' '{print $3}') to the answer box.

[root@controller bin]# source /root/admin-openrc 
[root@controller bin]# openstack rating module enable hashmap
+---------+---------+----------+
| Module  | Enabled | Priority |
+---------+---------+----------+
| hashmap | True    |        1 |
+---------+---------+----------+
[root@controller bin]# openstack rating  hashmap service create volume.size
+-------------+--------------------------------------+
| Name        | Service ID                           |
+-------------+--------------------------------------+
| volume.size | d8557ddd-f096-483e-b493-a494a8979782 |
+-------------+--------------------------------------+
[root@controller bin]# openstack rating hashmap group create  volume_thresholds 
+-------------------+--------------------------------------+
| Name              | Group ID                             |
+-------------------+--------------------------------------+
| volume_thresholds | ff3d8749-3d95-4135-840b-4b7b61393846 |
+-------------------+--------------------------------------+
[root@controller bin]# openstack rating hashmap mapping create   -s d8557ddd-f096-483e-b493-a494a8979782 -g ff3d8749-3d95-4135-840b-4b7b61393846  -t flat  0.01
+--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| Mapping ID                           | Value | Cost       | Type | Field ID | Service ID                           | Group ID                             | Project ID |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| fdbf6fca-fe09-42c5-b888-781a3b724b7c | None  | 0.01000000 | flat | None     | d8557ddd-f096-483e-b493-a494a8979782 | ff3d8749-3d95-4135-840b-4b7b61393846 | None       |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
[root@controller bin]# openstack rating hashmap threshold create   -s d8557ddd-f096-483e-b493-a494a8979782 -g ff3d8749-3d95-4135-840b-4b7b61393846  -t rate 50 0.98
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| Threshold ID                         | Level       | Cost       | Type | Field ID | Service ID                           | Group ID                             | Project ID |
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| b8c0690a-6376-4b1a-9569-e114b5c24ab9 | 50.00000000 | 0.98000000 | rate | None     | d8557ddd-f096-483e-b493-a494a8979782 | ff3d8749-3d95-4135-840b-4b7b61393846 | None       |
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
[root@controller bin]# source /root/admin-openrc && cloudkitty hashmap threshold list -s $(cloudkitty hashmap service list |grep volume | awk -F '|' '{print $3}')
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| Threshold ID                         | Level       | Cost       | Type | Field ID | Service ID                           | Group ID                             | Project ID |
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+
| b8c0690a-6376-4b1a-9569-e114b5c24ab9 | 50.00000000 | 0.98000000 | rate | None     | d8557ddd-f096-483e-b493-a494a8979782 | ff3d8749-3d95-4135-840b-4b7b61393846 | None       |
+--------------------------------------+-------------+------------+------+----------+--------------------------------------+--------------------------------------+------------+

Task 2: OpenStack Private Cloud Service Operations (15 points)

1. OpenStack platform memory optimization (1 point)

After building the OpenStack platform, disable the system's memory sharing and enable transparent huge pages.

On the controller node, submit the output of cat /sys/kernel/mm/transparent_hugepage/defrag to the answer box.
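A hedged sketch of one way to do this, assuming the memory sharing in question is KSM managed by the ksm/ksmtuned services, and matching the defrag value shown in the sample output below:

systemctl disable --now ksm ksmtuned                        # stop kernel samepage merging
echo always > /sys/kernel/mm/transparent_hugepage/enabled   # enable transparent huge pages
echo never > /sys/kernel/mm/transparent_hugepage/defrag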

[root@controller ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]

2. Increase the file handle limit (1 point)

Linux servers under heavy concurrency usually need kernel parameters tuned in advance. By default, the maximum number of open file handles on Linux is 1024; when the server reaches this limit under high concurrency it reports "too many open files". Create a cloud instance and modify the relevant configuration to permanently raise the controller node's maximum file handle count to 65535.

On the controller node, submit the output of ulimit -n && cat /etc/security/limits.conf | grep -Ev ^'(#|$)' to the answer box.

[root@controller ~]# ulimit -n && cat /etc/security/limits.conf | grep -Ev ^'(#|$)'
65535
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535

3. Linux system tuning: SYN flood protection (1 point)

Modify the relevant configuration file on the controller node to enable SYN cookies and protect against SYN flood attacks.

On the controller node, submit the output of cat /etc/sysctl.conf | grep -Ev ^'(#|$)' to the answer box.
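A minimal sketch of the change:

echo 'net.ipv4.tcp_syncookies = 1' >> /etc/sysctl.conf
sysctl -p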

[root@controller ~]# cat /etc/sysctl.conf | grep -Ev ^'(#|$)'
net.ipv4.tcp_syncookies = 1

4. Keystone access control (1 point)

On your self-built OpenStack private cloud platform, modify ordinary users' permissions so that they can no longer create or delete images.

On the controller node, submit the output of cat /etc/glance/policy.json to the answer box.

[root@controller ~]# cat /etc/glance/policy.json
{
    "context_is_admin":  "role:admin",
    "default": "role:admin",

    "add_image": "role:admin",
    "delete_image": "role:admin",
    "get_image": "",
    "get_images": "",
    "modify_image": "",
    "publicize_image": "role:admin",
    "communitize_image": "",
    "copy_from": "",

    "download_image": "",
    "upload_image": "",

    "delete_image_location": "",
    "get_image_location": "",
    "set_image_location": "",

    "add_member": "",
    "delete_member": "",
    "get_member": "",
    "get_members": "",
    "modify_member": "",

    "manage_image_cache": "role:admin",

    "get_task": "",
    "get_tasks": "",
    "add_task": "",
    "modify_task": "",
    "tasks_api_access": "role:admin",

    "deactivate": "",
    "reactivate": "",

    "get_metadef_namespace": "",
    "get_metadef_namespaces":"",
    "modify_metadef_namespace":"",
    "add_metadef_namespace":"",

    "get_metadef_object":"",
    "get_metadef_objects":"",
    "modify_metadef_object":"",
    "add_metadef_object":"",

    "list_metadef_resource_types":"",
    "get_metadef_resource_type":"",
    "add_metadef_resource_type_association":"",

    "get_metadef_property":"",
    "get_metadef_properties":"",
    "modify_metadef_property":"",
    "add_metadef_property":"",

    "get_metadef_tag":"",
    "get_metadef_tags":"",
    "modify_metadef_tag":"",
    "add_metadef_tag":"",
    "add_metadef_tags":""

}

5. Nova: preserve instance state (1 point)

If the OpenStack platform loses power unexpectedly, the platform itself starts again automatically once power is restored, but running instances have to be started manually by the administrator. Configure instance auto-start on the OpenStack platform so that when the host boots, instances are restored to their previous state: an instance that was shut down before the outage stays shut down, and an instance that was running is running again after the host comes back up.

On the controller node, submit the output of cat /etc/nova/nova.conf | grep -Ev ^'(#|$)' | grep true to the answer box.
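A sketch of setting the relevant option (crudini being installed is an assumption; editing [DEFAULT] in /etc/nova/nova.conf by hand is equivalent):

crudini --set /etc/nova/nova.conf DEFAULT resume_guests_state_on_host_boot true
systemctl restart openstack-nova*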

[root@controller ~]# cat /etc/nova/nova.conf | grep -Ev ^'(#|$)' | grep true
resume_guests_state_on_host_boot=true
service_metadata_proxy = true
enabled = true

6. Image conversion (2 points)

Use your self-built OpenStack platform. Upload the cirros-0.3.4-x86_64-disk.img image, then use the qemu tools to convert it to a raw-format image named cirros.raw stored in /root.

On the controller node, submit the output of qemu-img info /root/cirros.raw to the answer box.

[root@controller ~]# qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img /root/cirros.raw
[root@controller ~]# qemu-img info /root/cirros.raw
image: /root/cirros.raw
file format: raw
virtual size: 39M (41126400 bytes)
disk size: 18M

7. Create a network with a Heat template (2 points)

On your self-built OpenStack private cloud platform, write a Heat template create_network.yaml in /root that creates a network named Heat-Network (not shared) with a subnet named Heat-Subnet, CIDR 10.20.2.0/24, DHCP enabled, and an allocation pool of 10.20.2.20-10.20.2.100. Do not create the stack after writing the template.

On the controller node, submit the output of source /root/admin-openrc && openstack stack create -t /root/create_network.yaml heat-network && cat /root/create_network.yaml to the answer box.

[root@controller ~]# cat create_network.yaml 
heat_template_version: 2018-08-31

description: Create Heat-Network and Heat-Subnet

resources:
  my_network:
    type: OS::Neutron::Net
    properties:
      name: Heat-Network
      shared: False

  my_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Heat-Subnet
      network: { get_resource: my_network }
      cidr: 10.20.2.0/24
      enable_dhcp: True
      allocation_pools:
        - start: 10.20.2.20
          end: 10.20.2.100
[root@controller bin]# source /root/admin-openrc && openstack stack create -t /root/create_network.yaml heat-network && cat /root/create_network.yaml 
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | fbddf7c3-d682-43a3-b914-e922f97c4678 |
| stack_name          | heat-network                         |
| description         | Create Heat-Network and Heat-Subnet  |
| creation_time       | 2023-09-11T15:02:47Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
heat_template_version: 2018-08-31

description: Create Heat-Network and Heat-Subnet

resources:
  my_network:
    type: OS::Neutron::Net
    properties:
      name: Heat-Network
      shared: False

  my_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Heat-Subnet
      network: { get_resource: my_network }
      cidr: 10.20.2.0/24
      enable_dhcp: True
      allocation_pools:
        - start: 10.20.2.20
          end: 10.20.2.100

8. Glance image storage quota (2 points)

On the OpenStack platform, modify the Glance backend configuration file to limit each user's image storage quota to 20GB.

On the controller node, submit the output of cat /etc/glance/glance-api.conf | grep 20G to the answer box.
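A sketch of the change (crudini being installed is an assumption; the quota value is written as in the sample output below, though some Glance releases document the unit as 20GB):

crudini --set /etc/glance/glance-api.conf DEFAULT user_storage_quota 20G
systemctl restart openstack-glance-api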

[root@controller bin]# cat /etc/glance/glance-api.conf | grep 20G
user_storage_quota = 20G

9. KVM I/O optimization (2 points)

Optimize the KVM I/O scheduler by changing the default scheduling mode to none.

Submit the output of cat /sys/block/vda/queue/scheduler | grep mq-deadline to the answer box.
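A minimal sketch, assuming the system disk is vda and the multi-queue schedulers referenced by the verification command are in use:

cat /sys/block/vda/queue/scheduler           # e.g. [mq-deadline] kyber none
echo none > /sys/block/vda/queue/scheduler   # switch the active scheduler to none
cat /sys/block/vda/queue/scheduler           # mq-deadline kyber [none]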

10. Cinder rate limiting (2 points)

When using the Cinder service, to mitigate slowdowns in data access from instances, OpenStack Block Storage supports rate-limiting the bandwidth used for copying volume data. Modify the Cinder backend configuration file to cap the volume copy bandwidth at 100 MiB/s.

On the controller node, submit the output of cat /etc/cinder/cinder.conf | grep -Ev ^'(#|$)' to the answer box.

[root@controller queue]# cat /etc/cinder/cinder.conf | grep -Ev ^'(#|$)'
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.100.225
volume_copy_bps_limit = 100MiB/s
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = demo
user_domain_name = demo
project_name = service
username = cinder
password = 000000
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]

Module B: Container Cloud (40 points)

The company is building a Kubernetes container cloud cluster and introducing KubeVirt for a full transition from OpenStack to Kubernetes, using Kubernetes to manage every virtualization runtime, including bare metal, VMs, and containers. The development team has also decided to build a Kubernetes-based CI/CD environment to implement a DevOps workflow, and to introduce the Istio service mesh for canary releases of business systems, governing and optimizing the company's microservices, and developing automated operations programs.

Table 1: IP address plan

Device          Hostname  Interface  IP address                                      Notes
Cloud server 1  master    eth0       Public IP: ********  Private IP: 192.168.100.*/24  Harbor also runs on this cloud server
Cloud server 2  node      eth0       Public IP: ********  Private IP: 192.168.100.*/24

Notes:

1. The public and private IPs in Table 1 are as displayed on your own cloud hosts; every contestant's public and private IPs are different. Use the public IP when connecting to the cloud hosts with third-party remote-access software.

2. The cloud hosts in Huawei Cloud are already named; use the hosts with the corresponding names directly.

3. All software packages needed for the competition are under /root on the cloud hosts.

Task 1: Container Cloud Service Setup (10 points)

1. Deploy the container cloud platform (10 points)

Set the root password on the master node and the node node to 000000, deploy the Kubernetes cluster, and deploy the Istio service mesh, KubeVirt virtualization, and the Harbor image registry (on the master node run k8s_harbor_install.sh, k8s_image_push.sh, k8s_master_install.sh, and k8s_project_install.sh in order; on the node node run k8s_node_install.sh).

Submit the output of kubectl cluster-info&&kubectl -n istio-system get all&&kubectl -n kubevirt get deployment to the answer box.

echo "000000" | passwd --stdin root
sh k8s_harbor_install.sh
sh k8s_image_push.sh
sh k8s_master_install.sh
sh k8s_project_install.sh
sh k8s_node_install.sh
[root@master opt]# kubectl cluster-info&&kubectl -n istio-system get all&&kubectl -n kubevirt get deployment
Kubernetes control plane is running at https://192.168.100.83:6443
CoreDNS is running at https://192.168.100.83:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
NAME                                       READY   STATUS                   RESTARTS   AGE
pod/grafana-56bdf8bf85-d8q5k               0/1     ContainerStatusUnknown   1          2m43s
pod/grafana-56bdf8bf85-ttcn4               0/1     Pending                  0          116s
pod/istio-egressgateway-85649899f8-5n74s   0/1     Completed                0          2m49s
pod/istio-egressgateway-85649899f8-wt9xz   0/1     Pending                  0          77s
pod/istio-ingressgateway-f56888458-kkgvp   0/1     Completed                0          2m49s
pod/istio-ingressgateway-f56888458-vbgvm   0/1     Pending                  0          84s
pod/istiod-64848b6c78-9c84k                0/1     Completed                0          2m53s
pod/istiod-64848b6c78-pq94p                0/1     Pending                  0          66s
pod/jaeger-76cd7c7566-kqcbd                0/1     Completed                0          2m43s
pod/jaeger-76cd7c7566-wjxwm                0/1     Pending                  0          54s
pod/kiali-646db7568f-fhjh4                 0/1     Pending                  0          39s
pod/kiali-646db7568f-pw7dc                 0/1     ContainerStatusUnknown   1          2m43s
pod/prometheus-85949fddb-kgtb8             0/2     ContainerStatusUnknown   2          2m43s
pod/prometheus-85949fddb-lwhcp             0/2     Pending                  0          91s

NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
service/grafana                ClusterIP      10.96.167.231    <none>        3000/TCP                                                                     2m43s
service/istio-egressgateway    ClusterIP      10.101.60.246    <none>        80/TCP,443/TCP                                                               2m49s
service/istio-ingressgateway   LoadBalancer   10.100.129.5     <pending>     15021:30168/TCP,80:31469/TCP,443:31449/TCP,31400:30750/TCP,15443:32058/TCP   2m49s
service/istiod                 ClusterIP      10.109.185.173   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        2m53s
service/jaeger-collector       ClusterIP      10.100.0.126     <none>        14268/TCP,14250/TCP,9411/TCP                                                 2m43s
service/kiali                  ClusterIP      10.108.1.54      <none>        20001/TCP,9090/TCP                                                           2m43s
service/prometheus             ClusterIP      10.96.250.82     <none>        9090/TCP                                                                     2m43s
service/tracing                ClusterIP      10.101.180.223   <none>        80/TCP,16685/TCP                                                             2m43s
service/zipkin                 ClusterIP      10.107.223.241   <none>        9411/TCP                                                                     2m43s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                0/1     1            0           2m43s
deployment.apps/istio-egressgateway    0/1     1            0           2m49s
deployment.apps/istio-ingressgateway   0/1     1            0           2m49s
deployment.apps/istiod                 0/1     1            0           2m53s
deployment.apps/jaeger                 0/1     1            0           2m43s
deployment.apps/kiali                  0/1     1            0           2m43s
deployment.apps/prometheus             0/1     1            0           2m43s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-56bdf8bf85               1         1         0       2m43s
replicaset.apps/istio-egressgateway-85649899f8   1         1         0       2m49s
replicaset.apps/istio-ingressgateway-f56888458   1         1         0       2m49s
replicaset.apps/istiod-64848b6c78                1         1         0       2m53s
replicaset.apps/jaeger-76cd7c7566                1         1         0       2m43s
replicaset.apps/kiali-646db7568f                 1         1         0       2m43s
replicaset.apps/prometheus-85949fddb             1         1         0       2m43s
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
virt-operator   0/2     2            0           90s

Task 2: Container Cloud Service Operations (16 points)

1. Containerized Memcached service (2 points)

Write a Dockerfile-memcached file to build the blog-memcached:v1.0 image, with the following requirements (the needed packages are in DJANGOBL.TGZ inside Technology_packageV1.0.iso):

1) Base image: centos:7.9.2009;

2) Install the memcached service;

3) Expose port 11211;

4) Set the service to start automatically.

Build the image with the appropriate docker build command, then submit the output of docker run -d --name blog-test blog-memcached:v1.0 && sleep 5 && docker exec blog-test ps -aux && docker rm -f blog-test to the answer box.
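A minimal Dockerfile-memcached sketch, assuming the memcached package can be installed from the yum source configured on the build host:

# Dockerfile-memcached (sketch)
FROM centos:7.9.2009
RUN yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["memcached", "-u", "daemon", "-p", "11211"]

Built with, for example: docker build -t blog-memcached:v1.0 -f Dockerfile-memcached .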

[root@k8s-master-node1]# docker run -d --name blog-test blog-memcached:v1.0 && sleep 5 && docker exec blog-test ps -aux && docker rm -f blog-test
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
daemon       1  0.2  0.3   xxxx   yyyy ?        Ssl  Sep12   x:xx memcached -u daemon
root         2  1.4  0.9   xxxx   yyyy pts/0    Rs+  Sep12   x:xx ps aux
blog-test

2. Containerized MariaDB service (2 points)

Write a Dockerfile-mariadb file to build the blog-mysql:v1.0 image, based on CentOS, that installs and configures the MariaDB service and sets it to start automatically.

1) Base image: centos:7.9.2009;

2) Install MariaDB and set the root user's password to root;

3) Create the djangoblog database and import sqlfile.sql into it;

4) Expose port 3306 and set the service to start automatically.

When done, build the image with the appropriate docker build command. (The needed packages are in DJANGOBL.TGZ inside Technology_packageV1.0.iso.)

Submit the output of docker run -d --name mariadb-test blog-mysql:v1.0 && sleep 15 && docker exec mariadb-test mysql -uroot -proot -e "use djangoblog;show tables" && docker rm -f mariadb-test to the answer box.
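A minimal Dockerfile-mariadb sketch under the stated requirements (it assumes sqlfile.sql sits in the build context and that mariadb-server is installable from the configured yum source; the in-build initialization shown here is one of several workable approaches):

# Dockerfile-mariadb (sketch)
FROM centos:7.9.2009
RUN yum install -y mariadb-server && yum clean all
COPY sqlfile.sql /opt/sqlfile.sql
RUN mysql_install_db --user=mysql && \
    (mysqld_safe --user=mysql &) && sleep 10 && \
    mysql -e "SET PASSWORD FOR 'root'@'localhost' = PASSWORD('root'); CREATE DATABASE djangoblog;" && \
    mysql -uroot -proot djangoblog < /opt/sqlfile.sql
EXPOSE 3306
CMD ["mysqld_safe", "--user=mysql"]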

[root@k8s-master-node1]# docker run -d --name mariadb-test blog-mysql:v1.0 && sleep 15 && docker exec mariadb-test mysql -uroot -proot -e "use djangoblog;show tables" && docker rm -f mariadb-test
+---------------------+
| Tables_in_djangoblog|
+---------------------+
| table1              |
| table2              |
| table3              |
+---------------------+
mariadb-test

3. Containerized front-end service (2 points)

Write a Dockerfile-nginx file to build the blog-nginx:v1.0 image, based on CentOS, that installs and configures the Nginx service and sets it to start automatically. Requirements:

1) Base image: centos:7.9.2009;

2) Install the nginx service and use the provided nginx.conf as the default configuration file;

3) Expose port 80 and set the service to start automatically. (The needed packages are in Technology_packageV1.0.iso.)

Build the image with the appropriate docker build command, then submit the output of docker run -d --name nginx-test blog-nginx:v1.0 && docker logs nginx-test && docker rm -f nginx-test to the answer box.
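A minimal Dockerfile-nginx sketch, assuming nginx.conf sits in the build context and the nginx package is installable from the configured yum source:

# Dockerfile-nginx (sketch)
FROM centos:7.9.2009
RUN yum install -y nginx && yum clean all
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]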

[root@k8s-master-node1]# docker run -d --name nginx-test blog-nginx:v1.0 && docker logs nginx-test && docker rm -f nginx-test
nginx: configuration file /etc/nginx/nginx.conf test is successful  
2023/09/12 14:30:31 [notice] 1#1: using the "epoll" event method  
2023/09/12 14:30:31 [notice] 1#1: openresty/1.19.3.2  
2023/09/12 14:30:31 [notice] 1#1: built by gcc 8.4.0 (Ubuntu ...)   
2023/09/12 14:30:31 [notice] 1#1: OS: Linux x.x.x.x  
...
worker_connections are not enough while connecting to upstream   
nginx-test

4. Containerized blog service (2 points)

Write a Dockerfile-blog file to build the blog-service:v1.0 image: base it on centos:7.9.2009, install a Python 3.6 environment, install the packages listed in requirements.txt offline with pip3, install the DjangoBlog service, expose port 8000, and set the DjangoBlog service to start automatically. (The needed packages are in DJANGOBL.TGZ inside Technology_packageV1.0.iso.)

Build the image with the appropriate docker build command, then submit the output of docker build -t blog-service:v1.0 -f Dockerfile-blog . to the answer box.
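One possible Dockerfile-blog consistent with the 8-step build log below (the pip --find-links path and the DJANGOBL.TGZ layout are placeholders/assumptions carried over from that log):

# Dockerfile-blog (sketch)
FROM centos:7.9.2009
RUN yum install -y python36 python36-pip
COPY requirements.txt .
RUN pip3 install --no-index --find-links=/path/to/Technology_packageV1.0.iso -r requirements.txt
COPY DJANGOBL.TGZ .
RUN tar -xvzf DJANGOBL.TGZ && rm DJANGOBL.TGZ
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]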

[root@k8s-master-node1]# docker build -t blog-service:v1.0 -f Dockerfile-blog .
Sending build context to Docker daemon  2.048kB
Step 1/8 : FROM centos:7.9.2009
 ---> b5b4d78bc90c
Step 2/8 : RUN yum install -y python36 python36-pip
 ---> Running in f0f10e0a30b3
...
Removing intermediate container f0f10e0a30b3
 ---> a8eb75452755 
Step 3/8 : COPY requirements.txt .
 ---> c15d842ec230 
Step 4/8 : RUN pip3 install --no-index --find-links=/path/to/Technology_packageV1.0.iso -r requirements.txt 
 ---> Running in e5aa49e92ddc 
Collecting Django==x.x.x (from -r requirements.txt (line x))  
...
Successfully installed Django-x.x.x  
Removing intermediate container e5aa49e92ddc 
 ---> dbcd6a30563a 
Step 5/8 : COPY DJANGOBL.TGZ .
 ---> ddcb7bbf6ea1  
Step 6/8 : RUN tar -xvzf DJANGOBL.TGZ && rm DJANGOBL.TGZ   
...
Removing intermediate container bbd7a58fe707   
---> fdb76cfbef63   
Step 7/8: EXPOSE 8000    
---> Running in c39ba537ad07   
Removing intermediate container c39ba537ad07    
---> af59ef4ec802    
Step 8/8: CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]    
---> Running in a783be647cd3     
Removing intermediate container a783be647cd3     
---> cbf246def441      
Successfully built cbf246def441       
Successfully tagged blog-service:v1.0

5. Compose deployment of the blog system (2 points)

Write a docker-compose.yaml file with the following requirements (the needed packages are in DJANGOBL.TGZ inside Technology_packageV1.0.iso):

Container: blog-memcached; image: blog-memcached:v1.0; port mapping: 11211:11211;

Container: blog-mysql; image: blog-mysql:v1.0; port mapping: 3306:3306;

Container: blog-nginx; image: blog-nginx:v1.0; port mapping: 80:8888;

Container: blog-service; image: blog-service:v1.0; port mapping: 8000:8000.

After writing the file, start the stack with docker-compose up -d, check it with docker-compose ps, verify it with curl -L http://$(hostname -i):8888 | grep title, and submit the output of these commands to the answer box.
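A minimal docker-compose.yaml sketch for the four services (the nginx mapping is written host:container as 8888:80 so that the curl check on port 8888 reaches nginx; this is an interpretation of the "80:8888" wording above):

version: '3'
services:
  blog-memcached:
    image: blog-memcached:v1.0
    ports:
      - "11211:11211"
  blog-mysql:
    image: blog-mysql:v1.0
    ports:
      - "3306:3306"
  blog-nginx:
    image: blog-nginx:v1.0
    ports:
      - "8888:80"
  blog-service:
    image: blog-service:v1.0
    ports:
      - "8000:8000"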

[root@k8s-master-node1]# docker-compose up -d 
Creating network "default" with the default driver  
Creating service_blog-service_1 ... done  
Creating service_blog-nginx_1 ... done  
Creating service_blog-mysql_1 ... done  
Creating service_blog-memcached_1 ... done   

[root@k8s-master-node1]# docker-compose ps 
          Name                        Command               State           Ports         
-------------------------------------------------------------------------------------------
service_blog-service_1     python manage.py runserver ...   Up      8000/tcp              
service_blog-nginx_1       /usr/sbin/nginx -g daemon off;   Up      443/tcp, 80/tcp       
service_blog-mysql_1       docker-entrypoint.sh mysqld     Up      33060/tcp, 3306/tcp    
service_blog-memcached_1   memcached                       Up      11211/tcp              

[root@k8s-master-node1]# curl -L http://$(hostname -i):8888 | grep title 
<title>My Blog</title>

6. Service mesh: create DestinationRules (2 points)

Deploy the Bookinfo application (project/istio/istio-1.17.2/services/bookinfo.yaml under the project directory in the kubernetes image) to the default namespace, and create default destination rules for Bookinfo's four microservices, named productpage, reviews, ratings, and details. Define the available versions: v1 for productpage; v1, v2, and v3 for reviews; v1 and v2 for ratings; and v1 and v2 for details.

Submit the output of kubectl get destinationrule reviews -o jsonpath={.spec.subsets} to the answer box.
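A minimal manifest sketch for the reviews rule (the other three services follow the same pattern with their own subset lists):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3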

[root@k8s-master-node1]# kubectl get destinationrule reviews -o jsonpath={.spec.subsets}
[{name: v1, labels: {version: v1}}, {name: v2, labels: {version: v2}}, {name: v3, labels: {version: v3}}]

7. KubeVirt operations: create a VM (2 points)

Using the provided image (images/fedora-virt_v1.0.tar), create a VM named test-vm in the default namespace with 1Gi of memory, the virtio disk driver, and the Manual run strategy. If the VM fails to schedule, modify the KubeVirt configuration to enable emulation.

Submit the output of kubectl describe vm test-vm to the answer box.
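A minimal test-vm manifest sketch (the containerDisk image name is copied from the sample output below and is otherwise an assumption; if scheduling fails, software emulation can be enabled through the useEmulation flag under developerConfiguration in the kubevirt custom resource):

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: test-vm
  namespace: default
spec:
  runStrategy: Manual
  template:
    spec:
      domain:
        devices:
          disks:
          - name: disk0
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: disk0
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo:v1.0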

[root@k8s-master-node1]# kubectl describe vm test-vm
Name:         test-vm
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  kubevirt.io/v1alpha3
Kind:         VirtualMachine
Metadata:
  Creation Timestamp:  2023-09-12:11:52:53
Spec:
  Running:  false
  Template:
    Metadata:
      Creation Timestamp:  <nil>
    Spec:
      Domain:
        Devices:
          Disks:
            Disk Name:   disk0 
            Disk Device Type : virtio 
        Resources:
          Requests:
            Memory:      1Gi   
      Networks :
        - name : default    
          pod : {}   
      Volumes :
        - containerDisk :
            image : kubevirt/fedora-cloud-container-disk-demo:v1.0  
          name : disk0   
Status:
...
Events:<none or any relevant events>

8. Ingress resource management: create an Ingress (2 points)

Create a new nginx Ingress resource:

Name: pong

Namespace: ing-internal

Expose the service hello on path /hello using service port 5678

Submit the output of kubectl describe ingress -n ing-internal to the answer box.

[root@k8s-master-node1]# kubectl describe ingress -n ing-internal
Name:             pong
Namespace:        ing-internal
Address:          
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /hello   hello:5678 (<none>)
Annotations:  
Events:
[root@master ~]# cat ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
[root@master ~]# kubectl create namespace ing-internal
namespace/ing-internal created
[root@master ~]# kubectl apply -f ingress.yaml 
ingress.networking.k8s.io/pong created
[root@master ~]# kubectl describe ingress -n ing-internal
Name:             pong
Labels:           <none>
Namespace:        ing-internal
Address:          
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /hello   hello:5678 (<error: endpoints "hello" not found>)
Annotations:  <none>
Events:       <none>

Task 3: Deploy the ownCloud Network Disk Service (14 points)

ownCloud is a free, open-source, professional private cloud storage project. It lets you quickly set up a dedicated private file-sync network disk on a personal computer or server, providing cross-platform file synchronization, sharing, version control, and team collaboration in the style of Baidu Cloud.

1. Create a PV and PVC (4 points)

Write a YAML file (filename of your choice) that creates a PV and a PVC to provide persistent storage for the files and data of the ownCloud service.

Requirements: PV (read-write access mode, mountable by a single node only; 5Gi of storage; hostPath storage type with a path of your choice); PVC (read-write access mode, mountable by a single node only; 5Gi of requested storage).

Submit the output of kubectl get pv,pvc to the answer box.

[root@master ~]# vi pvpvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: owncloud-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath
  hostPath:
    path: /home

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: hostpath
[root@master ~]# kubectl apply -f pvpvc.yaml 
persistentvolume/owncloud-pv created
persistentvolumeclaim/owncloud-pvc created
[root@master ~]# kubectl get pv,pvc
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/owncloud-pv   5Gi        RWO            Retain           Bound    default/owncloud-pvc   hostpath                87s

NAME                                 STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/owncloud-pvc   Bound    owncloud-pv   5Gi        RWO            hostpath       87s


2. Configure a ConfigMap (4 points)

Write a YAML file (filename of your choice) that creates a ConfigMap object specifying ownCloud's environment variables. The login account corresponds to the environment variable OWNCLOUD_ADMIN_USERNAME and the password to OWNCLOUD_ADMIN_PASSWORD (values of your choice).

Submit the output of kubectl get ConfigMap to the answer box.

[root@master ~]# vi configmap.yaml
[root@master ~]# cat configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: owncloud-config
data:
  OWNCLOUD_ADMIN_USERNAME: "root"
  OWNCLOUD_ADMIN_PASSWORD: "000000"
[root@master ~]# kubectl apply -f configmap.yaml 
configmap/owncloud-config created
[root@master ~]# kubectl get ConfigMap
NAME                 DATA   AGE
istio-ca-root-cert   1      61m
kube-root-ca.crt     1      64m
owncloud-config      2      7s


3. Create a Secret (2 points)

Write a YAML file (filename of your choice) that creates a Secret object holding the ownCloud database password, with the original password encoded in base64.

Submit the output of kubectl get Secret to the answer box.

[root@master ~]# echo -n "your_database_password_here" | base64
eW91cl9kYXRhYmFzZV9wYXNzd29yZF9oZXJl
[root@master ~]# vi secret.yaml
[root@master ~]# cat secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: owncloud-db-secret
type: Opaque
data:
  password: eW91cl9kYXRhYmFzZV9wYXNzd29yZF9oZXJl
[root@master ~]# kubectl apply -f secret.yaml 
secret/owncloud-db-secret created
[root@master ~]# kubectl get Secret
NAME                 TYPE     DATA   AGE
owncloud-db-secret   Opaque   1      2s


4. Deploy the ownCloud Deployment (2 points)

Write a YAML file (filename of your choice) that creates a Deployment object specifying the ownCloud container and the related environment variables. (Name the Deployment resource owncloud-deployment, use the owncloud:latest image from the Harbor registry, mount the storage at /var/www/html, and configure the rest as appropriate.)

Submit the output of kubectl describe pod to the answer box.

[root@master ~]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: owncloud-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
        - name: owncloud
          image: harbor.example.com/owncloud:latest  # replace with your Harbor registry address and image name
          ports:
            - containerPort: 80
          volumeMounts:
            - name: owncloud-data
              mountPath: /var/www/html  # storage mount path
          env:
            - name: OWNCLOUD_ADMIN_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: owncloud-config
                  key: OWNCLOUD_ADMIN_USERNAME
            - name: OWNCLOUD_ADMIN_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: owncloud-config
                  key: OWNCLOUD_ADMIN_PASSWORD
      volumes:
        - name: owncloud-data
          emptyDir: {}
[root@master ~]# kubectl describe pod
Name:             owncloud-deployment-67cc8846c5-9fbxf
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=owncloud
                  pod-template-hash=67cc8846c5
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/owncloud-deployment-67cc8846c5
Containers:
  owncloud:
    Image:      harbor.example.com/owncloud:latest
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:
      OWNCLOUD_ADMIN_USERNAME:  <set to the key 'OWNCLOUD_ADMIN_USERNAME' of config map 'owncloud-config'>  Optional: false
      OWNCLOUD_ADMIN_PASSWORD:  <set to the key 'OWNCLOUD_ADMIN_PASSWORD' of config map 'owncloud-config'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxwm5 (ro)
      /var/www/html from owncloud-data (rw)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  owncloud-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-sxwm5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  7s    default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.



5. Create a Service (2 points)

Write a YAML file (filename of your choice) that creates a Service object exposing ownCloud outside the cluster, so that ownCloud can be viewed at http://IP:port.

Submit the output of kubectl get svc -A to the answer box.

[root@master ~]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: owncloud-service
spec:
  selector:
    app: owncloud
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
[root@master ~]# kubectl get svc -A
NAMESPACE              NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
default                kubernetes                  ClusterIP      10.96.0.1        <none>        443/TCP                                                                      71m
default                owncloud-service            LoadBalancer   10.104.216.193   <pending>     80:31042/TCP                                                                 44s
istio-system           grafana                     ClusterIP      10.96.167.231    <none>        3000/TCP                                                                     67m
istio-system           istio-egressgateway         ClusterIP      10.101.60.246    <none>        80/TCP,443/TCP                                                               67m
istio-system           istio-ingressgateway        LoadBalancer   10.100.129.5     <pending>     15021:30168/TCP,80:31469/TCP,443:31449/TCP,31400:30750/TCP,15443:32058/TCP   67m
istio-system           istiod                      ClusterIP      10.109.185.173   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        68m
istio-system           jaeger-collector            ClusterIP      10.100.0.126     <none>        14268/TCP,14250/TCP,9411/TCP                                                 67m
istio-system           kiali                       ClusterIP      10.108.1.54      <none>        20001/TCP,9090/TCP                                                           67m
istio-system           prometheus                  ClusterIP      10.96.250.82     <none>        9090/TCP                                                                     67m
istio-system           tracing                     ClusterIP      10.101.180.223   <none>        80/TCP,16685/TCP                                                             67m
istio-system           zipkin                      ClusterIP      10.107.223.241   <none>        9411/TCP                                                                     67m
kube-system            kube-dns                    ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP                                                       71m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.105.120.109   <none>        8000/TCP                                                                     70m
kubernetes-dashboard   kubernetes-dashboard        NodePort       10.98.32.156     <none>        443:30001/TCP                                                                70m


Module C: Ansible (40 points)

1. Set the hostnames: the ansible node's hostname to ansible, the zabbix_server node's hostname to zabbix_server, and the zabbix_agent node's hostname to zabbix_agent, then install Ansible on the ansible node using the provided autoDeployment.tar package. Submit the output of ansible --version to the answer box.

tar -xvf autoDeployment.tar
sudo yum localinstall *.rpm
sudo hostnamectl set-hostname ansible
## on the other two nodes
sudo hostnamectl set-hostname zabbix_server
sudo hostnamectl set-hostname zabbix_agent
## configure the ansible hosts file, normally located at /etc/ansible/hosts
[servers]
ansible ansible_host=ansible_ip_address
zabbix_server ansible_host=zabbix_server_ip_address
zabbix_agent ansible_host=zabbix_agent_ip_address

2. Configure passwordless login (3 points): On the ansible node, configure passwordless login and push the public key to the zabbix_server node and the zabbix_agent node by IP address. On the ansible node, submit the output of ssh <zabbix_server IP address> to the answer box.

ssh-keygen
ssh-copy-id root@zabbix_server_ip_address
ssh-copy-id root@zabbix_agent_ip_address
ssh <the host required by the task>

3. Configure the host inventory (2 points)

On the ansible node, configure the host inventory with a server host group and an agent host group; the server group contains the zabbix_server node's IP address and the agent group contains the zabbix_agent node's IP address.

On the ansible node, submit the output of ansible agent -m ping to the answer box.

vi /etc/ansible/hosts
## add the following
[server]
zabbix_server ansible_host=zabbix_server_ip_address

[agent]
zabbix_agent ansible_host=zabbix_agent_ip_address

## test
ansible server -m ping
ansible agent -m ping

4. Create the Ansible working environment (2 points): On the ansible node, configure an FTP service, then create the directory /opt/zabbix/files and, in the files directory, create an ftp.repo file for the zabbix_server and zabbix_agent nodes. Finally, create a repo.yaml file in the zabbix directory that does the following: 1) delete the repo files that ship with the zabbix_server and zabbix_agent nodes; 2) set the permissions of /etc/yum.repos.d on both nodes to 755; 3) send the ftp.repo file to both the zabbix_server and zabbix_agent nodes. Submit the output of ansible-playbook repo.yaml to the answer box.

sudo yum install vsftpd
sudo systemctl start vsftpd
sudo systemctl enable vsftpd
sudo mkdir -p /opt/zabbix/files
## create ftp.repo (write it yourself, pointing at the ftp source you configured)
## create repo.yaml
---
- hosts: server:agent
  tasks:
  - name: Delete the repo files that ship with the nodes
    shell: rm -f /etc/yum.repos.d/*.repo

  - name: Set permissions for yum.repos.d directory 
    file:
      path: "/etc/yum.repos.d"
      mode: "0755"

  - name: Copy ftp.repo to nodes 
    copy:
      src: "/opt/zabbix/files/ftp.repo"
      dest: "/etc/yum.repos.d/ftp.repo"
## run the playbook
ansible-playbook /opt/zabbix/repo.yaml

5. Install nginx and PHP (2 points): On the ansible node, create nginx_php.yaml in /opt/zabbix and run it. The playbook must: 1) install nginx and php74 on the zabbix_server node (related packages: php74-php-fpm, php74-php-common, php74-php-cli, php74-php-gd, php74-php-ldap, php74-php-mbstring, php74-php-mysqlnd, php74-php-xml, php74-php-bcmath, php74-php); 2) start the nginx and php74 services and enable them at boot. On the ansible node, submit the output of ansible server -m shell -a "nginx -v && php74 -v" to the answer box.

## nginx_php.yaml
---
- hosts: server
  tasks:
    - name: Install EPEL repository (for nginx)
      yum:
        name: epel-release
        state: present

    - name: Install REMI repository (for PHP 7.4)
      yum:
        name: http://rpms.remirepo.net/enterprise/remi-release-7.rpm
        state: present

    - name: Enable remi-php74 repository
      command: yum-config-manager --enable remi-php74

    - name: Install Nginx and PHP 7.4 packages
      yum:
        name:
          - nginx
          - php74-php-fpm
          - php74-php-common
          - php74-php-cli
          - php74-php-gd
          - php74-php-ldap
          - php74-php-mbstring
          - php74-php-mysqlnd
          - php74-php-xml
          - php74-php-bcmath
          - php74-php
        state: present

    - name: Start and enable Nginx service
      service:
        name: nginx
        state: started
        enabled: yes

    - name: Start and enable PHP-FPM service
      service:
        name: php74-php-fpm
        state: started
        enabled: yes
## run the playbook
ansible-playbook /opt/zabbix/nginx_php.yaml

6. Install the Zabbix server side (2 points): On the ansible node, create zabbix_server.yaml in /opt/zabbix and run it. The playbook must: 1) install the Zabbix server, agent, and web components on the zabbix_server node; 2) start zabbix-server and zabbix-agent. On the ansible node, submit the output of ansible server -a "systemctl status zabbix-server" to the answer box.

---
- hosts: server
  tasks:
    - name: Install Zabbix release package
      yum:
        name: https://repo.zabbix.com/zabbix/5.4/rhel/7/x86_64/zabbix-release-5.4-1.el7.noarch.rpm
        state: present

    - name: Install Zabbix server, agent and frontend
      yum:
        name:
          - zabbix-server-mysql 
          - zabbix-web-mysql 
          - zabbix-apache-conf 
          - zabbix-agent 
        state: present

    - name: Start and enable Zabbix server service
      service:
        name: zabbix-server
        state: started
        enabled: yes

    - name: Start and enable Zabbix agent service
      service:
        name: zabbix-agent
        state: started
        enabled: yes
## Note: this zabbix_server.yaml installs from the Zabbix network repo; if the environment is offline, install the same packages from the local ftp repo and make sure the repo version matches the Zabbix 6.0 frontend expected in task 13.
## Run the playbook
ansible-playbook /opt/zabbix/zabbix_server.yaml

7. Install the database (2 points)

On the ansible node, create mariadb.yaml in the /opt/zabbix directory and run it. The playbook must:
1) Install MariaDB-server on the zabbix_server node.
2) Start the database service and enable it at boot.

On the ansible node, submit the output of ansible server -m shell -a "systemctl status mariadb | head -n 5" to the answer box.

---
- hosts: server
  tasks:
    - name: Install MariaDB server
      yum:
        name: mariadb-server
        state: present

    - name: Start and enable MariaDB service
      service:
        name: mariadb
        state: started
        enabled: yes   
## The above is mariadb.yaml; run it with
ansible-playbook /opt/zabbix/mariadb.yaml

8. Configure the database (2 points)

On the ansible node, create mariadb_cfg.yaml in the /opt/zabbix directory and run it. The playbook must:
1) Set the MySQL login password to password for the default login account root.
2) Create the database zabbix.
3) Create the user zabbix with password password and grant that user all privileges on the zabbix database.
4) Import the Zabbix database schema and data from schema.sql, images.sql and data.sql, in that order (the order must not change).

On the ansible node, submit the output of ansible server -m shell -a "mysql -uroot -ppassword -e 'show grants for 'zabbix'@'localhost';'" to the answer box.

##mariadb_cfg.yaml
---
- hosts: server
  tasks:
    - name: Install python MySQLdb module 
      yum:
        name: MySQL-python
        state: present

    - name: Change root user password 
      mysql_user:
        login_user: root
        login_password: ""
        user: root
        password: password

    - name: Remove anonymous MySQL server user 
      mysql_user:
        login_user: root
        login_password: password
        user: ""
        host_all: yes
        state: absent

    - name: Ensure the zabbix database is present
      mysql_db:
        login_user: root
        login_password: password
        db: zabbix
        state: present

    - name: Add the zabbix user and grant all privileges on the zabbix database
      mysql_user:
        login_user: root
        login_password: password
        name: zabbix
        password: password
        priv: 'zabbix.*:ALL'
        state: present

    - name: Import the Zabbix schema and data (the order must not change)
      shell: |
        # replace /path/to with the directory that holds the SQL files in the environment
        mysql -u zabbix --password=password zabbix < /path/to/schema.sql
        mysql -u zabbix --password=password zabbix < /path/to/images.sql
        mysql -u zabbix --password=password zabbix < /path/to/data.sql
## Run the playbook
ansible-playbook /opt/zabbix/mariadb_cfg.yaml

9. Configuration files (2 points)

On the ansible node, create zabbix_server.conf.j2 and zabbix_agentd.conf.j2 in the /opt/zabbix directory, then write zsa.yml and run it. The playbook must:
1) Use template to send each j2 file to its corresponding location on the zabbix_server node.
2) Restart the corresponding services so the configuration takes effect.

On the ansible node, submit the output of ansible server -m shell -a "cat /etc/zabbix_server.conf | grep -v '^#\|^$'" to the answer box.

First, create the zabbix_server.conf.j2 and zabbix_agentd.conf.j2 files in the /opt/zabbix/ directory on the ansible node. These are Jinja2 templates that will be used to generate the actual configuration files on the Zabbix server.

An example of each:

zabbix_server.conf.j2:

LogFile=/var/log/zabbix/zabbix_server.log
LogFileSize=0
PidFile=/var/run/zabbix/zabbix_server.pid
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword={{ zabbix_db_password }}
ListenPort=10051
StartPollers=5

zabbix_agentd.conf.j2:
PidFile=/var/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0
Server={{ zabbix_server_ip }}
ServerActive={{ zabbix_server_ip }}
Hostname={{ ansible_hostname }}
Include=/etc/zabbix/zabbix_agentd.d/*.conf

Then create a playbook named zsa.yml that copies these templates to the Zabbix server and restarts the necessary services:

---
- hosts: server

  vars:
    zabbix_db_password: password
    zabbix_server_ip: 192.168.1.10   # replace with the zabbix_server private IP

  tasks:

    - name: Copy Zabbix server configuration file from template
      template:
        src: /opt/zabbix/zabbix_server.conf.j2
        dest: /etc/zabbix/zabbix_server.conf   # the task's check reads /etc/zabbix_server.conf; adjust dest if that is where the installed config lives

    - name: Copy Zabbix agent configuration file from template
      template:
        src: /opt/zabbix/zabbix_agentd.conf.j2
        dest: /etc/zabbix/zabbix_agentd.conf

    - name: Restart Zabbix server service
      service:
        name: zabbix-server
        state: restarted

    - name: Restart Zabbix agent service
      service:
        name: zabbix-agent
        state: restarted

## Run the playbook
ansible-playbook /opt/zabbix/zsa.yml

To verify that the configuration was applied, run the following command on the ansible node and submit its output:
ansible server -m shell -a "cat /etc/zabbix_server.conf | grep -v '^#\|^$'"

10. Configuration files (2 points)

On the ansible node, create php.ini.j2 in the /opt/zabbix directory. The php.ini.j2 must limit the maximum POST data size to 16M, the script execution time to 300, and the maximum time a PHP page may spend receiving data to 300, and set the timezone to Asia/Shanghai. Then write php.yaml and run it. The playbook must:
1) Use template to send the j2 file to its corresponding location on the zabbix_server node.
2) Restart the php74 service.

On the ansible node, submit the output of ansible server -m shell -a "cat /etc/opt/remi/php74/php.ini | grep -v '^;'|grep max" to the answer box.

First, create the php.ini.j2 file in the /opt/zabbix/ directory on the ansible node. This is a Jinja2 template that will be used to generate the actual configuration file on the Zabbix server. An example php.ini.j2:
post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Asia/Shanghai

Then create a playbook named php.yaml that copies this template to the Zabbix server and restarts the necessary service:
---
- hosts: server

  tasks:
  
    - name: Copy PHP configuration file from template
      template:
        src: /opt/zabbix/php.ini.j2
        dest: /etc/opt/remi/php74/php.ini

    - name: Restart PHP-FPM service
      service:
        name: php74-php-fpm
        state: restarted
Run the playbook with:
ansible-playbook /opt/zabbix/php.yaml
To verify that the configuration was applied, run the following command on the ansible node and submit its output:
ansible server -m shell -a "cat /etc/opt/remi/php74/php.ini | grep -v '^;' | grep max"

11. Configuration files (2 points)

On the ansible node, create www.conf.j2 in the /opt/zabbix directory (set both the user and the group to nginx), then write www.yaml and run it. The playbook must use template to send the j2 file to its corresponding location on the zabbix_server node.

On the ansible node, submit the output of ansible server -m shell -a "cat /etc/php-fpm.d/www.conf | grep nginx" to the answer box.

First, create the www.conf.j2 file in the /opt/zabbix/ directory on the ansible node. This is a Jinja2 template that will be used to generate the actual configuration file on the Zabbix server.

An example www.conf.j2:
[global]
pid = /run/php-fpm/php-fpm.pid
error_log = /var/log/php-fpm/error.log

[www]
user = nginx
group = nginx
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1


Then create a playbook named www.yaml that copies this template to its corresponding location on the Zabbix server:
---
- hosts: server

  tasks:
  
    - name: Copy PHP-FPM configuration file from template
      template:
        src: /opt/zabbix/www.conf.j2
        dest: /etc/php-fpm.d/www.conf

## Run the playbook
ansible-playbook /opt/zabbix/www.yaml

## Verify
ansible server -m shell -a "cat /etc/php-fpm.d/www.conf | grep nginx"

12. Configuration files (2 points)

On the ansible node, create default.conf.j2 in the /opt/zabbix directory (listen on port 80; adjust the other parameters as needed), then write default.yaml and run it. The playbook must use template to send the j2 file to its corresponding location on the zabbix_server node.

On the ansible node, submit the output of ansible server -m shell -a "cat /etc/nginx/conf.d/default.conf | grep fastcgi" to the answer box.
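A minimal sketch of default.conf.j2 and default.yaml, assuming the Zabbix web frontend is installed under /usr/share/zabbix and that PHP-FPM listens on 127.0.0.1:9000 as in www.conf.j2 above (both paths are assumptions; adjust them to the actual environment):

## /opt/zabbix/default.conf.j2 (example)
server {
    listen 80;
    server_name _;
    root /usr/share/zabbix;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

## /opt/zabbix/default.yaml (example)
---
- hosts: server
  tasks:
    - name: Copy nginx default.conf from template
      template:
        src: /opt/zabbix/default.conf.j2
        dest: /etc/nginx/conf.d/default.conf

## Run the playbook
ansible-playbook /opt/zabbix/default.yaml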

13. Configuration files (2 points)

On the ansible node, create zabbix.conf.j2 in the /opt/zabbix directory (set both the user and the group to nginx), then write zabbix.yaml and run it. The playbook must:
1) Use template to send the j2 file to its corresponding location on the zabbix_server node.
2) Restart the related services.
After this, opening http://<public IP>/setup.php in a browser shows the Zabbix 6.0 setup page.

On the ansible node, submit the output of curl http://<public IP>/setup.php | head -n 10 to the answer box.
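A minimal sketch, assuming zabbix.conf here is the PHP-FPM pool file used by the Zabbix frontend and that it is deployed to /etc/php-fpm.d/zabbix.conf (the destination path, pool name and listen address are assumptions; only user/group = nginx is fixed by the task):

## /opt/zabbix/zabbix.conf.j2 (example)
[zabbix]
user = nginx
group = nginx
listen = 127.0.0.1:9001
listen.allowed_clients = 127.0.0.1
php_value[post_max_size] = 16M
php_value[max_execution_time] = 300
php_value[max_input_time] = 300
php_value[date.timezone] = Asia/Shanghai

## /opt/zabbix/zabbix.yaml (example)
---
- hosts: server
  tasks:
    - name: Copy zabbix.conf from template
      template:
        src: /opt/zabbix/zabbix.conf.j2
        dest: /etc/php-fpm.d/zabbix.conf   # assumed location; check which file the frontend package actually uses

    - name: Restart the related services
      service:
        name: "{{ item }}"
        state: restarted
      loop:
        - nginx
        - php74-php-fpm

## Run the playbook
ansible-playbook /opt/zabbix/zabbix.yaml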

14. Write a playbook (2 points)

On the ansible node, edit agent.yaml in the /opt/zabbix directory and run it. The playbook must:
1) Copy ftp.repo to the corresponding location on the zabbix_agent node.
2) Install the zabbix-agent service on the zabbix_agent node.
3) Use template to send zabbix_agentd.conf.j2 to its corresponding location on the zabbix_agent node.
4) Restart the zabbix-agent service.

On the ansible node, submit the output of ansible agent -m shell -a "systemctl status zabbix-agent | tail -n 3" to the answer box.
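A minimal agent.yaml sketch covering the four requirements, reusing ftp.repo and zabbix_agentd.conf.j2 from the earlier steps; the agent configuration path and the zabbix_server IP variable are assumptions to adjust:

## /opt/zabbix/agent.yaml (example)
---
- hosts: agent
  vars:
    zabbix_server_ip: 192.168.1.10   # replace with the zabbix_server private IP (used by zabbix_agentd.conf.j2)
  tasks:
    - name: Copy ftp.repo to the zabbix_agent node
      copy:
        src: /opt/zabbix/files/ftp.repo
        dest: /etc/yum.repos.d/ftp.repo

    - name: Install zabbix-agent
      yum:
        name: zabbix-agent
        state: present

    - name: Copy the Zabbix agent configuration file from template
      template:
        src: /opt/zabbix/zabbix_agentd.conf.j2
        dest: /etc/zabbix/zabbix_agentd.conf   # assumed location for the packaged agent

    - name: Restart zabbix-agent
      service:
        name: zabbix-agent
        state: restarted
        enabled: yes

## Run the playbook
ansible-playbook /opt/zabbix/agent.yaml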

[root@ansible ansible]# ansible agent -m shell -a "systemctl status zabbix-agent | tail -n 3"
zabbix_agent | CHANGED | rc=0 >>
Sep 12 12:21 systemd[1]: Started ZABBIX Agent.
Sep 12 12:21 systemd[1]: Starting ZABBIX Agent...
Sep 12 12:21 zabbix-agent[12345]: Starting ZABBIX Agent [ZabbiX].