[ironman] Notes on Deploying a Container Cloud Platform

Posted by yd_232190478 on 2024/11/21 20:10:55

1. Cluster host environment configuration

Create the master node's yum repository file http.repo, pointing at the HTTP repository:

[root@master ~]# mv /etc/yum.repos.d/* /media/

[root@master ~]# vi /etc/yum.repos.d/http.repo

[httpiso]
name=httpiso
baseurl=http://10.24.1.46/centos/
gpgcheck=0
enabled=1

 

4) Configure passwordless SSH

Configure the master node for passwordless SSH access to the node host. When finished, test the setup by connecting to the node via its hostname with ssh.

On the master node, generate a key pair:

[root@master ~]# ssh-keygen -t rsa

Press Enter three times to accept the defaults.

Do the same on the node:

[root@node ~]# ssh-keygen -t rsa

Press Enter three times to accept the defaults.

On the master node, copy the public key to both hosts:

[root@master ~]# cd ~/.ssh

[root@master ~]# ssh-copy-id master

Type yes at the (yes/no)? prompt, and enter the host password Abc@1234 when prompted.

[root@master ~]# ssh-copy-id node

Respond to the prompts as above.

1. Node planning

The node plan is shown in Table 1.

Table 1  Node plan

IP                Hostname      Node
192.168.200.12    controller    Control node
192.168.200.21    compute       Compute node
192.168.200.20    -             Desktop test node

2. Preparation

The platform provides two CentOS 7.9 cloud hosts, each with 4 vCPUs, 12 GB of memory, a 40 GB disk, and a 20 GB ephemeral disk. Network interface 1 is attached to the external network and is used for host communication and management; network interface 2 is attached to the internal network and mainly provides the host with a second NIC. The platform also provides a desktop cloud host with Google Chrome installed for accessing the OpenStack platform.

Implementation

1. Environment configuration

1) Configure hostnames

The default host password is Abc@1234. Set each node's hostname with the following Linux commands.

[root@controller ~]# hostnamectl set-hostname controller

[root@controller ~]# hostname

controller

[root@compute ~]# hostnamectl set-hostname compute

[root@compute ~]# hostname

compute

After the change, click the browser's Refresh button to reload the page so that the new hostname takes effect.

2) Configure name resolution

Use vi to add the following entries to /etc/hosts on both the controller and compute nodes. When done, type :wq to save the file and exit.

[root@controller ~]# vi /etc/hosts

192.168.200.12 controller

192.168.200.21 compute

 

[root@compute ~]# vi /etc/hosts

192.168.200.12 controller

192.168.200.21 compute

Fill in the IP addresses according to the actual addresses of the cloud hosts.
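The /etc/hosts entries can also be appended from a script instead of edited interactively; a sketch (add_host is our own helper, and it skips entries whose hostname already exists):

```shell
#!/bin/sh
# Append "ip hostname" to a hosts file unless the hostname is already present.
add_host() {
    hosts_file="$1"; ip="$2"; name="$3"
    grep -qw "$name" "$hosts_file" || echo "$ip $name" >> "$hosts_file"
}

hosts_file="demo_hosts"   # use /etc/hosts on the real nodes
: > "$hosts_file"
add_host "$hosts_file" 192.168.200.12 controller
add_host "$hosts_file" 192.168.200.21 compute
add_host "$hosts_file" 192.168.200.12 controller   # duplicate call is a no-op
cat "$hosts_file"
```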

3) Configure the yum environment

Download the OpenStack ISO files to the /root directory of the controller node with curl. Create the centos7-2009 and iaas directories under /opt, then copy the contents of the two images into them.

[root@controller ~]# curl -O http://mirrors.douxuedu.com/competition/chinaskills_cloud_iaas_v2.0.1.iso

[root@controller ~]# curl -O http://mirrors.douxuedu.com/competition/CentOS-7-x86_64-DVD-2009.iso

[root@controller ~]# mkdir /opt/{centos7-2009,iaas}

[root@controller ~]# mount /root/CentOS-7-x86_64-DVD-2009.iso /mnt/

mount: /dev/loop0 is write-protected, mounting read-only

[root@controller ~]# cp -r /mnt/* /opt/centos7-2009/

[root@controller ~]# umount /mnt/

[root@controller ~]# mount /root/chinaskills_cloud_iaas_v2.0.1.iso /mnt/

mount: /dev/loop0 is write-protected, mounting read-only

[root@controller ~]# cp -r /mnt/* /opt/iaas/

[root@controller ~]# umount /mnt/

Create the controller node's yum repository file yum.repo, pointing at the local directories.

[root@controller ~]# mv /etc/yum.repos.d/* /media/

[root@controller ~]# vi /etc/yum.repos.d/yum.repo

[centos]

name=centos7-2009

baseurl=file:///opt/centos7-2009

gpgcheck=0

enabled=1

[openstack]

name=openstack-train

baseurl=file:///opt/iaas/iaas-repo

gpgcheck=0

enabled=1

[root@controller ~]# yum clean all && yum repolist

repo id            repo name                        status

centos            centos7-2009                      4,070

openstack         openstack-train                     953

repolist: 5,023

Install the vsftpd service on the controller node with yum and share the /opt directory over anonymous FTP.

[root@controller ~]# yum install -y vsftpd

Installed:

  vsftpd.x86_64 0:3.0.2-28.el7                                                               

Complete!

[root@controller ~]# echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf

[root@controller ~]# systemctl start vsftpd

[root@controller ~]# systemctl enable vsftpd

Create the compute node's yum repository file yum.repo, pointing at the directories shared by the controller node.

[root@compute ~]# mv /etc/yum.repos.d/* /media/

[root@compute ~]# vi /etc/yum.repos.d/yum.repo

[centos]

name=centos7-2009

baseurl=ftp://controller/centos7-2009

gpgcheck=0

enabled=1

[openstack]

name=openstack-train

baseurl=ftp://controller/iaas/iaas-repo

gpgcheck=0

enabled=1

[root@compute ~]# yum clean all && yum repolist

repo id            repo name                        status

centos            centos7-2009                      4,070

openstack         openstack-train                     953

repolist: 5,023
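The controller and compute repository files differ only in their baseurl values; a small generator keeps the two files consistent (a sketch; write_repo is our own helper name):

```shell
#!/bin/sh
# Append one yum repository section (gpgcheck off, enabled) to a .repo file.
write_repo() {
    file="$1"; id="$2"; name="$3"; url="$4"
    {
        echo "[$id]"
        echo "name=$name"
        echo "baseurl=$url"
        echo "gpgcheck=0"
        echo "enabled=1"
    } >> "$file"
}

repo_file="demo.repo"   # /etc/yum.repos.d/yum.repo on the real nodes
: > "$repo_file"
# The compute node points at the controller's FTP share:
write_repo "$repo_file" centos centos7-2009 ftp://controller/centos7-2009
write_repo "$repo_file" openstack openstack-train ftp://controller/iaas/iaas-repo
cat "$repo_file"
```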

4) Partition the disk

Create two 9 GB partitions on the compute node's ephemeral disk vdb.

[root@compute ~]# lsblk

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

vda    253:0    0  40G  0 disk

└─vda1 253:1    0  40G  0 part /

vdb    253:16   0  20G  0 disk

vdc    253:32   0   1M  0 disk

[root@compute ~]# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

Device does not contain a recognized partition table

Building a new DOS disklabel with disk identifier 0x64513eb0.

 

Command (m for help): n

Partition type:

   p   primary (0 primary, 0 extended, 4 free)

   e   extended

Select (default p): p

Partition number (1-4, default 1):

First sector (2048-41943039, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +9G

Partition 1 of type Linux and of size 9 GiB is set

 

Command (m for help): n

Partition type:

   p   primary (1 primary, 0 extended, 3 free)

   e   extended

Select (default p): p

Partition number (2-4, default 2):

First sector (18876416-41943039, default 18876416):

Using default value 18876416

Last sector, +sectors or +size{K,M,G} (18876416-41943039, default 41943039): +9G

Partition 2 of type Linux and of size 9 GiB is set

 

Command (m for help): p

 

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x64513eb0

 

   Device Boot      Start         End      Blocks   Id  System

/dev/vdb1            2048    18876415     9437184   83  Linux

/dev/vdb2        18876416    37750783     9437184   83  Linux

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

[root@compute ~]# lsblk

NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

vda    253:0    0  40G  0 disk

└─vda1 253:1    0  40G  0 part /

vdb    253:16   0  20G  0 disk

├─vdb1 253:17   0   9G  0 part

└─vdb2 253:18   0   9G  0 part

vdc    253:32   0   1M  0 disk
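The interactive fdisk session above can also be scripted; a non-interactive sketch using sfdisk (the layout matches the two 9 GiB type-83 partitions created above; the sfdisk call itself is left commented out because it is destructive):

```shell
#!/bin/sh
# sfdisk layout for two 9 GiB Linux (type 83) primary partitions,
# equivalent to the interactive fdisk session above.
disk="/dev/vdb"   # the compute node's ephemeral disk
layout=',9G,83
,9G,83'

echo "$layout"
# On the compute node (destructive -- double-check the device first):
# echo "$layout" | sfdisk "$disk"
```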

2. OpenStack cloud platform deployment

With the base environment in place, deployment of the OpenStack platform can begin.

1) Configure environment variables

Install the OpenStack installer script package on both the controller and compute nodes.

[root@controller ~]# yum install -y openstack-iaas

Installed:

  openstack-train.x86_64 0:v1.0.2-1.el7                                                      

Complete!

 

[root@compute ~]# yum install -y openstack-iaas

Installed:

  openstack-train.x86_64 0:v1.0.2-1.el7                                                      

Complete!

On both the controller and compute nodes, edit the environment variable file /etc/openstack/openrc.sh. Press "i" to enter insert mode:

[root@controller ~]# vi /etc/openstack/openrc.sh

Set the values below in the script. When done, press ESC and enter:

:%s/^.\{1\}//

to delete the leading character of every line, then press ESC again and type :wq to save and exit:

HOST_IP=192.168.200.12
HOST_PASS=Abc@1234           # root password of the controller node
HOST_NAME=controller
HOST_IP_NODE=192.168.200.21
HOST_PASS_NODE=Abc@1234      # root password of the compute node
HOST_NAME_NODE=compute
network_segment_IP=192.168.200.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
PLACEMENT_DBPASS=000000
PLACEMENT_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_NAME=eth1          # name of the cloud host's second NIC
Physical_NAME=provider
minvlan=1
maxvlan=1000
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=vdb1              # first partition on the compute node
SWIFT_PASS=000000
OBJECT_DISK=vdb2             # second partition on the compute node
STORAGE_LOCAL_NET_IP=192.168.200.21
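Rather than hand-editing every line, the variables can be set with sed; a sketch (set_var is our own helper; it replaces an existing KEY=value line or appends one):

```shell
#!/bin/sh
# Set KEY=value in a shell-style config file, replacing any existing line.
set_var() {
    file="$1"; key="$2"; value="$3"
    if grep -q "^$key=" "$file"; then
        sed -i "s|^$key=.*|$key=$value|" "$file"
    else
        echo "$key=$value" >> "$file"
    fi
}

conf="demo_openrc.sh"   # /etc/openstack/openrc.sh on the real nodes
printf 'HOST_IP=\nHOST_NAME=\n' > "$conf"
set_var "$conf" HOST_IP 192.168.200.12
set_var "$conf" HOST_NAME controller
set_var "$conf" ADMIN_PASS 000000
cat "$conf"
```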

2) Deploy the OpenStack base environment

Run the iaas-pre-host.sh script on both the controller and compute nodes to deploy the OpenStack base environment. When it finishes, reboot both machines with reboot for the configuration to take effect.

[root@controller ~]# iaas-pre-host.sh

[root@compute ~]# iaas-pre-host.sh

3) Deploy the MariaDB database and RabbitMQ message queue

Run the script on the controller node to deploy the MariaDB database and the RabbitMQ message queue service.

[root@controller ~]# iaas-install-mysql.sh

4) Deploy the Keystone service

Run the script on the controller node to deploy Keystone.

[root@controller ~]# iaas-install-keystone.sh

5) Deploy the Glance service

Run the script on the controller node to deploy Glance.

[root@controller ~]# iaas-install-glance.sh

6) Deploy the Nova service

Run the scripts on the controller node to deploy Nova's control services.

[root@controller ~]# iaas-install-placement.sh

[root@controller ~]# iaas-install-nova-controller.sh

After the scripts above complete, run the compute-side script on the compute node to deploy Nova's compute service; this adds the compute node's CPU, memory, and disk resources to the OpenStack resource pool.

[root@compute ~]# iaas-install-nova-compute.sh

7) Deploy the Neutron service

Run the script on the controller node to deploy Neutron's control services.

[root@controller ~]# iaas-install-neutron-controller.sh

Run the script on the compute node to deploy Neutron's compute services.

[root@compute ~]# iaas-install-neutron-compute.sh

8) Deploy the Dashboard service

Run the script on the controller node to deploy the Dashboard service.

[root@controller ~]# iaas-install-dashboard.sh

After installation, open the OpenStack dashboard in Google Chrome at http://192.168.100.10/dashboard; the domain is demo, the user name is admin, and the password is 000000. The result is shown in Figure 1.

 

Figure 1  OpenStack dashboard (1)

In the menu bar at the top of the page, open the admin user's drop-down menu and choose "Settings". On the User Settings page, set Language to Simplified Chinese and Timezone to the Shanghai time zone, as shown in Figure 2.

Figure 2  Setting Simplified Chinese

When the settings are applied, return to the platform home page, as shown in Figure 3.

Figure 3  OpenStack dashboard (2)

9) Deploy the Cinder service

Run the script on the controller node to deploy Cinder's control services.

[root@controller ~]# iaas-install-cinder-controller.sh

When the controller-side script completes, run the script on the compute node to deploy Cinder's compute services.

[root@compute ~]# iaas-install-cinder-compute.sh

10) Deploy the Swift service

Run the script on the controller node to deploy Swift's control services.

[root@controller ~]# iaas-install-swift-controller.sh

When the controller-side script completes, run the script on the compute node to deploy the Swift service on the compute node.

[root@compute ~]# iaas-install-swift-compute.sh

3. Create a cirros instance

1) Upload the image

Upload the cirros image on the controller node.

[root@controller ~]# source /etc/keystone/admin-openrc.sh

[root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare < /opt/iaas/images/cirros-0.3.4-x86_64-disk.img

+------------------+--------------------------------------------------------------------+

| Property         | Value                                                              |

+------------------+--------------------------------------------------------------------+

| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                                   |

| container_format | bare                                                               |

| created_at       | 2022-02-16T02:58:23Z                                               |

| disk_format      | qcow2                                                              |

| id               | 76ce1b38-b1fa-465c-947f-288ea4760761                               |

| min_disk         | 0                                                                  |

| min_ram          | 0                                                                  |

| name             | cirros                                                             |

| os_hash_algo     | sha512                                                             |

| os_hash_value    | 1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f7        |

|                  | 39d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2                   |

| os_hidden        | False                                                              |

| owner            | 1ac0739939db4dc78bf42802ba0205e9                                   |

| protected        | False                                                              |

| size             | 13287936                                                           |

| status           | active                                                             |

| tags             | []                                                                 |

| updated_at       | 2022-02-16T02:58:24Z                                               |

| virtual_size     | Not available                                                      |

| visibility       | shared                                                             |

+------------------+--------------------------------------------------------------------+

2) Create a network

On the controller node, create a network named net with the OpenStack CLI.

[root@controller ~]# source /etc/keystone/admin-openrc.sh

[root@controller ~]# openstack network create net --mtu 1350

[root@controller ~]# openstack subnet create --network net --subnet-range 10.0.0.0/24 --gateway 10.0.0.1 subnet

3) Create the instance

Log in to the dashboard and choose "Project → Compute → Instances" in the left menu. Click "Launch Instance", enter the instance name cirros, keep the default availability zone nova and a count of 1, then click "Next"; the step is shown in Figure 4.

Figure 4  Instance creation (1)

Select the shared cirros image, choose not to create a new volume, and click "Next"; the step is shown in Figure 5.

Figure 5  Instance creation (2)

Select the m1.tiny flavor and click "Next"; the step is shown in Figure 6.

Figure 6  Instance creation (3)

Select net as the instance's network, then click "Launch Instance" to complete the creation; the step is shown in Figure 7.

Figure 7  Instance creation (4)

Once the instance has been created, its status shows "Running", as shown in Figure 8. The only requirement is that the cirros instance be created without errors; being able to connect to it with CRT is not required.

 

 

 

 

 

2. Create a user with a Heat template

Write a Heat template file create_user.yaml; the stack is named test-user. The template creates a user named heat-user that belongs to the admin project, grants heat-user the admin role, and sets the user's password to 123456. The template contents are as follows:

[root@controller ~]# vi create_user.yaml

heat_template_version: 2014-10-16

resources:

  user:

    type: OS::Keystone::User

    properties:

      name: heat-user

      password: "123456"

      domain: demo

      default_project: admin

      roles: [{"role": admin, "project": admin}]

Run the template file with the following command:

[root@controller ~]# openstack stack create -t create_user.yaml test-user

+---------------------+--------------------------------------+

| Field               | Value                                |

+---------------------+--------------------------------------+

| id                  | e67b28ff-6df3-45e0-9c5f-1ea56229bb49 |

| stack_name          | test-user                            |

| description         | No description                       |

| creation_time       | 2023-08-08T01:09:13Z                 |

| updated_time        | None                                 |

| stack_status        | CREATE_IN_PROGRESS                   |

| stack_status_reason | Stack CREATE started                 |

+---------------------+--------------------------------------+

 

Query the result:

[root@controller ~]# openstack user list |grep heat-user

| d2d62ed0897c4102a1e60d4f34104208 | heat-user         |
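openstack stack create returns while the stack is still CREATE_IN_PROGRESS; a small poll loop can wait for CREATE_COMPLETE before querying results (retry_until is our own sketch, not an OpenStack command):

```shell
#!/bin/sh
# Re-run a command until its output contains the wanted pattern,
# giving up after a fixed number of attempts.
retry_until() {
    tries="$1"; pattern="$2"; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        out=$("$@" 2>/dev/null)
        case "$out" in
            *"$pattern"*) echo "$out"; return 0 ;;
        esac
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# On the controller, for example:
# retry_until 30 CREATE_COMPLETE \
#     openstack stack show test-user -f value -c stack_status
```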

 

3. Create a network and subnet with a Heat template

Write a Heat template create_net.yaml that creates a non-shared network named Heat-Network and a subnet named Heat-Subnet with the CIDR 10.20.2.0/24, DHCP enabled, and an allocation pool of 10.20.2.20-10.20.2.100. The template contents are as follows:

[root@controller ~]# vi create_net.yaml

heat_template_version: 2014-10-16

description: Generated template

resources:

  network_1:

    type: OS::Neutron::Net

    properties:

      admin_state_up: true

      name: Heat-Network

      shared: false

  subnet_1:

    type: OS::Neutron::Subnet

    properties:

      allocation_pools:

      - end: 10.20.2.100

        start: 10.20.2.20

      cidr: 10.20.2.0/24

      enable_dhcp: true

      host_routes: []

      ip_version: 4

      name: Heat-Subnet

      network_id:

        get_resource: network_1

 

Run the Heat template file:

[root@controller ~]# openstack stack create -t create_net.yaml test

+---------------------+--------------------------------------+

| Field               | Value                                |

+---------------------+--------------------------------------+

| id                  | 9eba9c8c-22de-44c7-9fab-2ad7f3d8992c |

| stack_name          | test                                 |

| description         | Generated template                   |

| creation_time       | 2023-08-08T01:52:36Z                 |

| updated_time        | None                                 |

| stack_status        | CREATE_IN_PROGRESS                   |

| stack_status_reason | Stack CREATE started                 |

+---------------------+--------------------------------------+

 

Query the result:

[root@controller ~]# openstack network list

+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| c9de6482-5876-40a3-a412-67dff8ee3f82 | Heat-Network | 2901706e-281f-48dd-ac0e-1c973b6a5b0e |
+--------------------------------------+--------------+--------------------------------------+

 

4. Create a container with a Heat template

Write a Heat template file create_container.yaml that creates a container named heat-swift. The template contents are as follows:

[root@controller ~]# vi create_container.yaml

heat_template_version: 2014-10-16

resources:

  user:

    type: OS::Swift::Container

    properties:

      name: heat-swift

Run the Heat template file:

[root@controller ~]# openstack stack create -t create_container.yaml test-container

+---------------------+--------------------------------------+

| Field               | Value                                |

+---------------------+--------------------------------------+

| id                  | deac2d1e-90a7-43fd-af96-386b6a002b1e |

| stack_name          | test-container                       |

| description         | No description                       |

| creation_time       | 2023-08-08T02:19:35Z                 |

| updated_time        | None                                 |

| stack_status        | CREATE_IN_PROGRESS                   |

| stack_status_reason | Stack CREATE started                 |

+---------------------+--------------------------------------+

 

Query the result:

[root@controller ~]# swift list

heat-swift

 

 


 

 

 

 

 

2. Basic environment configuration

Download the provided package chinaskills_cloud_paas_v2.0.1.iso to the /root directory of the master node and extract it to /opt:

[root@localhost ~]# curl -O http://mirrors.douxuedu.com/competition/chinaskills_cloud_paas_v2.0.1.iso

[root@localhost ~]# mount -o loop chinaskills_cloud_paas_v2.0.1.iso /mnt/

[root@localhost ~]# cp -rfv /mnt/* /opt/

[root@localhost ~]# umount /mnt/

1.1 Install kubeeasy

kubeeasy is a dedicated Kubernetes cluster deployment tool that greatly simplifies the deployment process.

Install the kubeeasy tool on the master node:

[root@localhost ~]# mv /opt/kubeeasy /usr/bin/kubeeasy

1.2 Install dependency packages

This step installs docker-ce, git, unzip, vim, wget, and other tools.

Run the following command on the master node to install the dependencies:

[root@localhost ~]# kubeeasy install depend \

--host 10.24.2.10,10.24.2.11 \

--user root \

--password Abc@1234 \

--offline-file /opt/dependencies/base-rpms.tar.gz

The parameters are explained below:

  • --host: the IPs of all cluster nodes. A range such as 10.24.1.2-10.24.1.10, joined with "-", denotes every IP from 10.24.1.2 through 10.24.1.10. If the addresses are not contiguous, list them all separated by commas, e.g. 10.24.1.2,10.24.1.7,10.24.1.9.

Installation details and errors can be inspected with "tail -f /var/log/kubeinstall.log".
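The range form accepted by --host can be illustrated with a small expander that varies the last octet (expand_last_octet is our own illustration, not part of kubeeasy):

```shell
#!/bin/sh
# Expand "A.B.C.X-A.B.C.Y" into the comma-separated list kubeeasy would target.
expand_last_octet() {
    start="${1%-*}"; end="${1#*-}"
    prefix="${start%.*}"
    first="${start##*.}"; last="${end##*.}"
    seq "$first" "$last" | sed "s/^/$prefix./" | paste -sd, -
}

expand_last_octet "10.24.1.2-10.24.1.5"
# prints: 10.24.1.2,10.24.1.3,10.24.1.4,10.24.1.5
```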

1.3 Configure passwordless SSH

Installing a Kubernetes cluster requires passwordless login between the cluster nodes, which makes file transfer and inter-node communication easier.

Run the following command on the master node to check connectivity between the cluster nodes:

[root@localhost ~]# kubeeasy check ssh \

--host 10.24.2.10,10.24.2.11 \

--user root \

--password Abc@1234

Run the following command on the master node to configure passwordless SSH among all cluster nodes:

[root@localhost ~]# kubeeasy create ssh-keygen \

--master 10.24.2.10 \

--worker 10.24.2.11 \

--user root --password Abc@1234

Implementation (lab manual)

1. About the Explorer file manager

Explorer is an open-source web file manager that provides online file management, preview, editing, upload, and download. It can be thought of as a personal cloud drive, although it lacks many typical cloud-drive features.

Because it is browser-based, no client software needs to be installed, and it supports multiple users, platforms, and languages, as well as online viewing and editing of many file formats.

It also provides a plugin mechanism for extending functionality, such as online image processing, compression, and decompression.

2. Containerized MariaDB deployment

1) Prepare the base environment

Download and extract the package:

[root@k8s-master-node1 ~]# curl -O http://mirrors.douxuedu.com/competition/Explorer.tar.gz

[root@k8s-master-node1 ~]# tar -zxvf Explorer.tar.gz

Import the CentOS base image:

[root@k8s-master-node1 ~]# docker load -i KodExplorer/CentOS_7.9.2009.tar

2) Write the Dockerfile

Write the database init script:

[root@k8s-worker-node1 ~]# cd KodExplorer/

[root@k8s-worker-node1 KodExplorer]# vi mysql_init.sh

#!/bin/bash

mysql_install_db --user=root

mysqld_safe --user=root &

sleep 8

mysqladmin -u root password 'root'

mysql -uroot -proot -e "grant all on *.* to 'root'@'%' identified by 'root'; flush privileges;"

Write the yum repository file:

[root@k8s-worker-node1 KodExplorer]# vi local.repo

[yum]

name=yum

baseurl=file:///root/yum

gpgcheck=0

enabled=1

Write the Dockerfile:

[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-mariadb

FROM centos:centos7.9.2009

MAINTAINER Chinaskills

RUN rm -rfv /etc/yum.repos.d/*

COPY local.repo /etc/yum.repos.d/

COPY yum /root/yum

ENV LC_ALL en_US.UTF-8

RUN yum -y install mariadb-server

COPY mysql_init.sh /opt/

RUN bash /opt/mysql_init.sh

EXPOSE 3306

CMD ["mysqld_safe","--user=root"]

3) Build the image

[root@k8s-worker-node1 KodExplorer]# docker build -t kod-mysql:v1.0 -f Dockerfile-mariadb .


[+] Building 104.7s (12/12) FINISHED              docker:default

 => [internal] load .dockerignore                    0.2s

 => => transferring context: 2B                     0.0s

 => [internal] load build definition from Dockerfile-mariadb       0.2s

 => => transferring dockerfile: 397B                   0.0s

 => [internal] load metadata for docker.io/library/centos:centos7.9.2009 0.0s

 => CACHED [1/7] FROM docker.io/library/centos:centos7.9.2009      0.0s

 => [internal] load build context                    10.8s

 => => transferring context: 350.48MB                  3.4s

 => [2/7] RUN rm -rfv /etc/yum.repos.d/*                10.9s

 => [3/7] COPY local.repo /etc/yum.repos.d/               2.8s

 => [4/7] COPY yum /root/yum                      14.5s

 => [5/7] RUN yum -y install mariadb-server               55.3s

 => [6/7] COPY mysql_init.sh /opt/                    0.2s

 => [7/7] RUN bash /opt/mysql_init.sh                  16.1s

 => exporting to image                         4.5s

 => => exporting layers                         4.4s

 => => writing image sha256:d7b697a36449c5bbcd382d4e260d4a5b4e559985dfd5aca76b73bd0823cda7df  0.0s

 => => naming to docker.io/library/kod-mysql:v1.0            0.0s

3. Containerized Redis deployment

1) Write the Dockerfile

[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-redis

FROM centos:centos7.9.2009

MAINTAINER Chinaskills

RUN rm -rf /etc/yum.repos.d/*

COPY local.repo /etc/yum.repos.d/

COPY yum /root/yum

RUN yum -y install redis

RUN sed -i 's/127.0.0.1/0.0.0.0/g' /etc/redis.conf && \

  sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis.conf

EXPOSE 6379

CMD ["/usr/bin/redis-server","/etc/redis.conf"]
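The two sed edits in this Dockerfile are what open Redis to remote connections; applied to a miniature stand-in for /etc/redis.conf they work like this (a sketch over a local demo file):

```shell
#!/bin/sh
# Reproduce the Dockerfile's sed edits on a two-line stand-in for /etc/redis.conf.
conf="demo_redis.conf"
printf 'bind 127.0.0.1\nprotected-mode yes\n' > "$conf"

sed -i 's/127.0.0.1/0.0.0.0/g' "$conf"
sed -i 's/protected-mode yes/protected-mode no/g' "$conf"

cat "$conf"
# prints:
# bind 0.0.0.0
# protected-mode no
```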

2) Build the image

[root@k8s-worker-node1 KodExplorer]# docker build -t kod-redis:v1.0 -f Dockerfile-redis .

[+] Building 27.1s (11/11) FINISHED              docker:default

 => [internal] load .dockerignore                    0.0s

 => => transferring context: 2B                     0.0s

 => [internal] load build definition from Dockerfile-redis        0.0s

 => => transferring dockerfile: 449B                   0.0s

 => [internal] load metadata for docker.io/library/centos:centos7.9.2009 0.0s

 => [1/6] FROM docker.io/library/centos:centos7.9.2009         0.0s

 => [internal] load build context                    0.0s

 => => transferring context: 40.81kB                   0.0s

 => CACHED [2/6] RUN rm -rf /etc/yum.repos.d/*              0.0s

 => [3/6] COPY local.repo /etc/yum.repos.d/               0.2s

 => [4/6] COPY yum /root/yum                      14.1s

 => [5/6] RUN yum -y install redis                     7.3s

 => [6/6] RUN sed -i 's/127.0.0.1/0.0.0.0/g' /etc/redis.conf &&   sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis.conf         0.7s

 => exporting to image                         4.5s

 => => exporting layers                         4.5s

 => => writing image sha256:7aa1768dfe2ce212982778da9d2c872d3d691a1a0f10cb23fcc9c7eae7dc6f44  0.0s

 => => naming to docker.io/library/kod-redis:v1.0            0.0s

4. Containerized PHP deployment

1) Write the Dockerfile

[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-php

FROM centos:centos7.9.2009

MAINTAINER Chinaskills

RUN rm -rf /etc/yum.repos.d/*

COPY local.repo /etc/yum.repos.d/

COPY yum /root/yum

RUN yum install httpd php php-cli unzip php-gd php-mbstring -y

WORKDIR /var/www/html

COPY php/kodexplorer4.37.zip .

RUN unzip kodexplorer4.37.zip

RUN chmod -R 777 /var/www/html

RUN sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /etc/httpd/conf/httpd.conf

EXPOSE 80

CMD ["/usr/sbin/httpd","-D","FOREGROUND"]

2) Build the image

[root@k8s-worker-node1 KodExplorer]# docker build -t kod-php:v1.0 -f Dockerfile-php .

[+] Building 54.2s (15/15) FINISHED                                                     docker:default

 => [internal] load build definition from Dockerfile-php         0.0s

 => => transferring dockerfile: 565B                   0.0s

 => [internal] load .dockerignore                    0.0s

 => => transferring context: 2B                     0.0s

 => [internal] load metadata for docker.io/library/centos:centos7.9.2009 0.0s

 => [ 1/10] FROM docker.io/library/centos:centos7.9.2009         0.0s

 => [internal] load build context                    0.2s

 => => transferring context: 13.89MB                   0.2s

 => CACHED [ 2/10] RUN rm -rf /etc/yum.repos.d/*             0.0s

 => CACHED [ 3/10] COPY local.repo /etc/yum.repos.d/           0.0s

 => CACHED [ 4/10] COPY yum /root/yum                  0.0s

 => [ 5/10] RUN yum install httpd php php-cli unzip php-gd php-mbstring -y 39.4s

 => [ 6/10] WORKDIR /var/www/html                    0.2s

 => [ 7/10] COPY php/kodexplorer4.37.zip .                0.5s

 => [ 8/10] RUN unzip kodexplorer4.37.zip                2.8s

 => [ 9/10] RUN chmod -R 777 /var/www/html                8.5s

 => [10/10] RUN sed -i 's/#ServerName www.example.com:80/ServerName localhost:80/g' /etc/httpd/conf/httpd.conf                 0.9s

 => exporting to image                          1.5s

 => => exporting layers                         1.5s

 => => writing image sha256:c3c6f684f8b15691ed391a3dc0474a03d3eb222252e8e4aeee116f732e948504  0.0s

 => => naming to docker.io/library/kod-php:v1.0             0.0s

5. Containerized Nginx deployment

1) Write the Dockerfile

[root@k8s-worker-node1 KodExplorer]# vi Dockerfile-nginx

FROM centos:centos7.9.2009

MAINTAINER Chinaskills

RUN rm -rf /etc/yum.repos.d/*

COPY local.repo /etc/yum.repos.d/

COPY yum /root/yum

RUN yum -y install nginx

RUN /bin/bash -c 'echo init ok'

EXPOSE 80

CMD ["nginx","-g","daemon off;"]

2) Build the image

[root@k8s-worker-node1 KodExplorer]# docker build -t kod-nginx:v1.0 -f Dockerfile-nginx .

[+] Building 16.0s (11/11) FINISHED              docker:default

 => [internal] load .dockerignore                    0.0s

 => => transferring context: 2B                     0.0s

 => [internal] load build definition from Dockerfile-nginx        0.0s

 => => transferring dockerfile: 338B                   0.0s

 => [internal] load metadata for docker.io/library/centos:centos7.9.2009 0.0s

 => [1/6] FROM docker.io/library/centos:centos7.9.2009         0.0s

 => [internal] load build context                    0.1s

 => => transferring context: 40.81kB                   0.0s

 => CACHED [2/6] RUN rm -rf /etc/yum.repos.d/*              0.0s

 => CACHED [3/6] COPY local.repo /etc/yum.repos.d/            0.0s

 => CACHED [4/6] COPY yum /root/yum                    0.0s

 => [5/6] RUN yum -y install nginx                   14.2s

 => [6/6] RUN /bin/bash -c 'echo init ok'                0.7s

 => exporting to image                         0.8s

 => => exporting layers                         0.8s

 => => writing image sha256:1c904f24ba2ca44f5014bf92920b4728694b34b2239974f9a5a42015cc5a4201  0.0s

 => => naming to docker.io/library/kod-nginx:v1.0            0.0s

6. Orchestrate and deploy the services

1) Write docker-compose.yaml

[root@k8s-worker-node1 KodExplorer]# vi docker-compose.yaml

version: '3.2'
services:
  nginx:
    container_name: nginx
    image: kod-nginx:v1.0
    volumes:
      - ./www:/data/www
      - ./nginx/logs:/var/log/nginx
    ports:
      - "443:443"
    restart: always
    depends_on:
      - php-fpm
    links:
      - php-fpm
    tty: true

  mysql:
    container_name: mysql
    image: kod-mysql:v1.0
    volumes:
      - ./data/mysql:/var/lib/mysql
      - ./mysql/logs:/var/lib/mysql-logs
    ports:
      - "3306:3306"
    restart: always

  redis:
    container_name: redis
    image: kod-redis:v1.0
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf
    restart: always
    command: redis-server /usr/local/etc/redis/redis.conf

  php-fpm:
    container_name: php-fpm
    image: kod-php:v1.0
    ports:
      - "8090:80"
    links:
      - mysql
      - redis
    restart: always
    depends_on:
      - redis
      - mysql

2) Deploy the services

[root@k8s-worker-node1 KodExplorer]# docker-compose up -d

Creating network "kodexplorer_default" with the default driver

Creating redis ... done

Creating mysql ... done

Creating php-fpm ... done

Creating nginx  ... done

Check the services:

[root@k8s-worker-node1 KodExplorer]# docker-compose ps

Name        Command        State           Ports          

-----------------------------------------------------------------------------------------------

mysql   mysqld_safe --user=root     Up   0.0.0.0:3306->3306/tcp,:::3306->3306/tcp  

nginx   nginx -g daemon off;       Up   0.0.0.0:443->443/tcp,:::443->443/tcp, 80/tcp

php-fpm  /usr/sbin/httpd -D FOREGROUND  Up   0.0.0.0:8090->80/tcp,:::8090->80/tcp    

redis   redis-server /usr/local/et ...  Up   0.0.0.0:6379->6379/tcp,:::6379->6379/tcp
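Before opening KodExplorer in a browser, a small poll loop can confirm that the published ports accept connections (wait_for_port is our own helper; it relies on bash's /dev/tcp, and the port numbers follow the compose file above):

```shell
#!/bin/bash
# Poll a TCP port until it accepts a connection or the attempts run out.
wait_for_port() {
    host="$1"; port="$2"; tries="${3:-10}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# On the worker node, for example:
# wait_for_port 127.0.0.1 8090 && echo "KodExplorer is reachable"
```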

 

 

 

 

 

1. Deploy GitLab

1) Preparation

Extract the package and import the images:

[root@master ~]# curl -O http://mirrors.douxuedu.com/competition/Gitlab-CI.tar.gz

[root@master ~]# tar -zxvf Gitlab-CI.tar.gz

[root@master ~]# ctr -n k8s.io image import gitlab-ci/images/images.tar

[root@master ~]# docker load < gitlab-ci/images/images.tar

 

2) Deploy the GitLab service

Create the gitlab-ci namespace:

[root@master ~]# kubectl create ns gitlab-ci

namespace/gitlab-ci created

 

Deploy GitLab in the gitlab-ci namespace, exposing container port 80 externally as NodePort 30880. The YAML resource file is as follows:

[root@master ~]# cd gitlab-ci

[root@master gitlab-ci]# vi gitlab-deploy.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: gitlab

  namespace: gitlab-ci

  labels:

    name: gitlab

spec:

  selector:

    matchLabels:

      name: gitlab

  template:

    metadata:

      name: gitlab

      labels:

        name: gitlab

    spec:

      containers:

      - name: gitlab

        image: gitlab/gitlab-ce:latest

        imagePullPolicy: IfNotPresent

        env:

        - name: GITLAB_ROOT_PASSWORD

          value: Abc@1234

        - name: GITLAB_ROOT_EMAIL

          value: 123456@qq.com

        ports:

        - name: http

          containerPort: 80

        volumeMounts:

        - name: gitlab-config

          mountPath: /etc/gitlab

        - name: gitlab-logs

          mountPath: /var/log/gitlab

        - name: gitlab-data

          mountPath: /var/opt/gitlab

      volumes:

      - name: gitlab-config

        hostPath:

          path: /home/gitlab/conf

      - name: gitlab-logs

        hostPath:

          path: /home/gitlab/logs

      - name: gitlab-data

        hostPath:

          path: /home/gitlab/data

 

Create a Service to expose the port:

[root@master gitlab-ci]# vi gitlab-svc.yaml

apiVersion: v1

kind: Service

metadata:

  name: gitlab

  namespace: gitlab-ci

  labels:

    name: gitlab

spec:

  type: NodePort

  ports:

    - name: http

      port: 80

      targetPort: http

      nodePort: 30880

  selector:

    name: gitlab

Create the resources:

[root@master gitlab-ci]# kubectl apply -f gitlab-deploy.yaml

deployment.apps/gitlab created

[root@master gitlab-ci]# kubectl apply -f gitlab-svc.yaml

service/gitlab created

 

Check the Pod:

[root@master gitlab-ci]# kubectl -n gitlab-ci get pods

NAME                      READY   STATUS    RESTARTS   AGE

gitlab-7b54df755-6ljtp    1/1     Running   0          45s

 

3) Customize hosts resolution

Check the GitLab Pod's IP address:

[root@master gitlab-ci]# kubectl -n gitlab-ci get pods -owide

NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE               NOMINATED NODE   READINESS GATES

gitlab-7b54df755-6ljtp    1/1     Running   0          50s   10.244.1.43   k8s-worker-node1   <none>           <none>

 

Add a custom hosts entry in the cluster to resolve the gitlab Pod:

[root@master gitlab-ci]# kubectl edit configmap coredns -n kube-system

...

apiVersion: v1

data:

  Corefile: |

    .:53 {

        errors

        health {

           lameduck 5s

        }

        ready

        kubernetes cluster.local in-addr.arpa ip6.arpa {

           pods insecure

           fallthrough in-addr.arpa ip6.arpa

           ttl 30

        }

## Add the following block

        hosts {

            10.244.1.43 gitlab-7b54df755-6ljtp

            fallthrough

        }

        prometheus :9153

## Delete the following three lines

        forward . /etc/resolv.conf {

           max_concurrent 1000

        }

        

        cache 30

        loop

        reload

        loadbalance

    }

...

[root@master gitlab-ci]# kubectl -n kube-system rollout restart deploy coredns

deployment.apps/coredns restarted

 

Enter the gitlab Pod:

[root@master gitlab-ci]# kubectl exec -ti -n gitlab-ci gitlab-7b54df755-6ljtp -- bash

root@gitlab-7b54df755-6ljtp:/# vi /etc/gitlab/gitlab.rb

 

Add this as the first line (the IP is the Pod's IP address):

external_url 'http://10.244.1.43:80'

root@gitlab-7b54df755-6ljtp:/# reboot

root@gitlab-7b54df755-6ljtp:/# exit

 

4) Access GitLab

Check the Service:

[root@master gitlab-ci]# kubectl -n gitlab-ci get svc

NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE

gitlab   NodePort   10.96.108.3   <none>        80:30880/TCP   3m33s

 

Access GitLab at http://10.24.2.14:30880 with the username 123456@qq.com and password Abc@1234, as shown in Figure 1:


[Figure 1]

5) Upload the project

Click the “Create a project” button, as shown in Figure 2:


[Figure 2]

Click “Create blank project” to create the project demo-2048, choosing “Public” as the visibility level, as shown in Figure 3:


[Figure 3]

Click “Create project” and enter the project, as shown in Figure 4:


[Figure 4]

Push the source code to the project:

[root@master gitlab-ci]# cd /root/gitlab-ci/demo-2048

[root@master demo-2048]# git config --global user.name "administrator"

[root@master demo-2048]# git config --global user.email "admin@example.com"

[root@master demo-2048]# git remote remove origin

[root@master demo-2048]# git remote add origin http://10.24.2.14:30880/root/demo-2048.git

[root@master demo-2048]# git add .

[root@master demo-2048]# git commit -m "initial commit"

[drone (root-commit) 105c032] initial commit

[root@master demo-2048]# git push -u origin drone

Username for 'http://10.24.2.14:30880': root

Password for 'http://root@10.24.2.14:30880':              # enter the password Abc@1234

Counting objects: 189, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (137/137), done.

Writing objects: 100% (189/189), 43.35 KiB | 0 bytes/s, done.

Total 189 (delta 40), reused 0 (delta 0)

remote: Resolving deltas: 100% (40/40), done.

To http://10.24.2.14:30880/root/demo-2048.git

 * [new branch]      drone -> drone

Branch drone set up to track remote branch drone from origin.
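The remote and push sequence above can be rehearsed locally before touching the real server, using a bare repository as a stand-in for GitLab (all /tmp paths below are illustrative, not part of the actual deployment):

```shell
#!/bin/sh
set -e
# A local bare repository plays the role of the GitLab server
rm -rf /tmp/fake-gitlab.git /tmp/demo-2048
git init --bare /tmp/fake-gitlab.git
# A throwaway working copy plays the role of /root/gitlab-ci/demo-2048
mkdir /tmp/demo-2048
cd /tmp/demo-2048
git init .
git config user.name  "administrator"
git config user.email "admin@example.com"
echo "demo" > README.md
git add .
git commit -m "initial commit"
# Same shape as above: set origin, then publish the current branch as 'drone'
git remote add origin /tmp/fake-gitlab.git
git push -u origin HEAD:drone
git ls-remote --heads origin        # should list refs/heads/drone
```

The only difference from the real workflow is the remote URL: against GitLab it is the http://…/root/demo-2048.git address and git prompts for the credentials.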

 

Refresh the page; it should look like Figures 5 and 6:


[Figure 5]


[Figure 6]

2. Deploy the GitLab CI Runner

1) Obtain the GitLab CI registration token

Log in to the GitLab admin area (http://10.24.2.14:30880/admin), then click “Runners” under CI/CD in the left menu, as shown in Figure 7:


[Figure 7]

Click the button on the right, as shown in Figure 8:


[Figure 8]

Record the value of the Registration token parameter; it will be used later when registering Runners.

2) Modify the GitLab Runner manifests

First create a ServiceAccount named gitlab-ci:

[root@master ~]# cd /root/gitlab-ci/

[root@master gitlab-ci]# cat runner-sa.yaml

apiVersion: v1

kind: ServiceAccount

metadata:

  name: gitlab-ci

  namespace: gitlab-ci

[root@master gitlab-ci]# cat runner-role.yaml

kind: Role

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: gitlab-ci

  namespace: gitlab-ci

rules:

  - apiGroups: [""]

    resources: ["*"]

    verbs: ["*"]

[root@master gitlab-ci]# cat runner-rb.yaml

kind: RoleBinding

apiVersion: rbac.authorization.k8s.io/v1

metadata:

  name: gitlab-ci

  namespace: gitlab-ci

subjects:

  - kind: ServiceAccount

    name: gitlab-ci

    namespace: gitlab-ci

roleRef:

  kind: Role

  name: gitlab-ci

  apiGroup: rbac.authorization.k8s.io

[root@master gitlab-ci]# kubectl apply -f runner-sa.yaml

serviceaccount/gitlab-ci created

[root@master gitlab-ci]# kubectl apply -f runner-role.yaml

role.rbac.authorization.k8s.io/gitlab-ci created

[root@master gitlab-ci]# kubectl apply -f runner-rb.yaml

rolebinding.rbac.authorization.k8s.io/gitlab-ci created

[root@master gitlab-ci]# kubectl -n gitlab-ci get sa

NAME        SECRETS   AGE

default     1         10m

gitlab-ci   1         21s

 

Grant permissions to the default ServiceAccount:

[root@master gitlab-ci]# vi default.yaml

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: default

  labels:

    k8s-app: gitlab-default

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: default

  namespace: gitlab-ci

[root@master gitlab-ci]# kubectl apply -f default.yaml

clusterrolebinding.rbac.authorization.k8s.io/default created

 

Modify the values.yaml file:

[root@master gitlab-ci]# tar -zxvf gitlab-runner-0.43.0.tgz

[root@master gitlab-ci]# vi gitlab-runner/values.yaml

...

  ## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)

  ##

  # serviceAccountName: default

  serviceAccountName: gitlab-ci   # add; mind the indentation

...

## The GitLab Server URL (with protocol) that want to register the runner against

## ref: https://docs.gitlab.com/runner/commands/index.html#gitlab-runner-register

##

# gitlabUrl: http://gitlab.your-domain.com/

gitlabUrl: http://10.24.2.14:30880/      # add at the top level (no indentation)

...

## The Registration Token for adding new Runners to the GitLab Server. This must

## be retrieved from your GitLab Instance.

## ref: https://docs.gitlab.com/ce/ci/runners/index.html

##

# runnerRegistrationToken: ""

runnerRegistrationToken: "riU8c4D2SNkKAv8GS9q_"    # add at the top level (no indentation)

...

  config: |

    [[runners]]

      [runners.kubernetes]

        namespace = "{{.Release.Namespace}}"

        image = "ubuntu:16.04"

        privileged = true     # add; mind the indentation

 

When packaging with build tools such as Maven or npm, dependencies are pulled from the private registry by default. To speed up builds, they can be cached locally, so a PVC is created here to persist the build cache. To save storage space, a single global cache is configured instead of one per project.

Create a PVC to mount into the Pods:

[root@master gitlab-ci]# cat gitlab-runner/templates/pv.yaml

apiVersion: v1

kind: PersistentVolume

metadata:

  name: ci-build-cache-pv

  namespace: gitlab-ci

  labels:

    type: local

spec:

  storageClassName: manual

  capacity:

    storage: 10Gi

  accessModes:

    - ReadWriteOnce

  hostPath:

    path: "/opt/ci-build-cache"

[root@master gitlab-ci]# cat gitlab-runner/templates/pvc.yaml

apiVersion: v1

kind: PersistentVolumeClaim

metadata:

  name: ci-build-cache-pvc

  namespace: gitlab-ci

spec:

  storageClassName: manual

  accessModes:

    - ReadWriteOnce

  resources:

    requests:

      storage: 5Gi

 

Edit the values.yaml file and add the build-cache configuration:

[root@master gitlab-ci]# vi gitlab-runner/values.yaml

## configure build cache

cibuild:

  cache:

    pvcName: ci-build-cache-pvc

    mountPath: /home/gitlab-runner/ci-build-cache

 

The official runner image is used to register the runner; the default runner configuration file is /home/gitlab-runner/.gitlab-runner/config.toml. Edit templates/configmap.yaml and add the runner configuration to the entrypoint section, just before the start command, so that the runner mounts the PVC according to this configuration when it creates build Pods:

[root@master gitlab-ci]# vi gitlab-runner/templates/configmap.yaml

    cat >>/home/gitlab-runner/.gitlab-runner/config.toml <<EOF

      [[runners.kubernetes.volumes.pvc]]

      name = "{{.Values.cibuild.cache.pvcName}}"

      mount_path = "{{.Values.cibuild.cache.mountPath}}"

    EOF

 

    # Start the runner

    exec /entrypoint run --user=gitlab-runner \

      --working-directory=/home/gitlab-runner
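The effect of this entrypoint patch can be previewed locally. The sketch below uses illustrative stand-ins (a /tmp path instead of the real config.toml, and the literal values Helm would render into the `{{.Values.*}}` placeholders) to perform the same heredoc append and show the resulting fragment:

```shell
#!/bin/sh
set -e
# Illustrative stand-ins for the Helm-rendered values
PVC_NAME="ci-build-cache-pvc"
MOUNT_PATH="/home/gitlab-runner/ci-build-cache"
CONFIG=/tmp/config.toml   # stands in for /home/gitlab-runner/.gitlab-runner/config.toml

# A minimal runner config, roughly what registration writes out
cat > "$CONFIG" <<'EOF'
[[runners]]
  [runners.kubernetes]
    namespace = "gitlab-ci"
EOF

# The same append the patched entrypoint performs just before starting the runner
cat >> "$CONFIG" <<EOF
      [[runners.kubernetes.volumes.pvc]]
      name = "$PVC_NAME"
      mount_path = "$MOUNT_PATH"
EOF

cat "$CONFIG"
```

With this fragment in place, every build Pod the runner creates mounts the ci-build-cache-pvc claim at the configured path, which is what lets later jobs reuse the Maven output.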

 

3) Deploy the GitLab Runner

Deploy the GitLab Runner:

[root@master gitlab-ci]# helm -n gitlab-ci install gitlab-runner gitlab-runner

NAME: gitlab-runner

LAST DEPLOYED: Wed Jul 27 11:17:11 2022

NAMESPACE: gitlab-ci

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

Your GitLab Runner should now be registered against the GitLab instance reachable at: "http://10.24.2.14:30880/"

 

Runner namespace "gitlab-ci" was found in runners.config template.

 

Check the Release and Pods:

[root@master gitlab-ci]# helm -n gitlab-ci list

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION

gitlab-runner   gitlab-ci       1               2022-07-27 11:17:11.456495093 +0800 CST deployed        gitlab-runner-0.43.0    15.2.0     

[root@master gitlab-ci]# kubectl -n gitlab-ci get pods

NAME                             READY   STATUS    RESTARTS   AGE

gitlab-7b54df755-6ljtp           1/1     Running   0          30m

gitlab-runner-5bc5578655-2ct85   1/1     Running   0          58s

 

Return to the Runners page and refresh it, as shown in Figure 9:


[Figure 9]

The Runner status is online, which indicates that it has registered successfully.

3. Configure GitLab

1) Add the Kubernetes cluster

In the GitLab Admin area, click “Settings” → “Network”, expand “Outbound requests”, tick “Allow requests to the local network from webhooks and integrations”, and save, as shown in Figure 10:


[Figure 10]

Enter the demo-2048 project and create a configuration file (.gitlab/agents/<agent-name>/config.yaml), in this case .gitlab/agents/kubernetes-agent/config.yaml, as shown in Figure 11:


[Figure 11]

The config.yaml file format is as follows:

gitops:

  manifest_projects:

  - id: gitlab-org/cluster-integration/gitlab-agent

    default_namespace: my-ns

    paths:

      # Read all YAML files from this directory.

    - glob: '/team1/app1/*.yaml'

      # Read all .yaml files from team2/apps and all subdirectories.

    - glob: '/team2/apps/**/*.yaml'

      # If 'paths' is not specified or is an empty list, the configuration below is used.

    - glob: '/**/*.{yaml,yml,json}'

    reconcile_timeout: 3600s

    dry_run_strategy: none

    prune: true

    prune_timeout: 3600s

    prune_propagation_policy: foreground

    inventory_policy: must_match

 

In the left menu, click “Operate” → “Kubernetes clusters”, as shown in Figure 12:


[Figure 12]

Click “Connect a cluster” and select the configuration file kubernetes-agent, as shown in Figure 13:


[Figure 13]

Click “Register”, as shown in Figure 14:


[Figure 14]

Install the agent with the following command, replacing the values of config.token and config.kasAddress with the values shown on the page in the previous step:

[root@master gitlab-ci]# helm upgrade --install kubernetes-agent gitlab-agent-1.1.0.tgz --namespace gitlab-ci --create-namespace --set image.tag=v16.2.0 --set config.token=vTPAASMpwTW-tEQ3NHYc3y5YKCHCFep466q52dgaRCstXyXDzg --set config.kasAddress=ws://10.244.0.23/-/kubernetes-agent/

NAME: kubernetes-agent

LAST DEPLOYED: Wed Jul 13 17:27:21 2022

NAMESPACE: gitlab-ci

STATUS: deployed

REVISION: 1

TEST SUITE: None

 

Check the Release and Pods:

[root@k8s-master-node1 gitlab-ci]# helm -n gitlab-ci list

NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION

gitlab-runner           gitlab-ci       1               2022-07-27 11:17:11.456495093 +0800 CST deployed        gitlab-runner-0.43.0    15.2.0     

kubernetes-agent        gitlab-ci       1               2022-07-27 11:22:27.285028745 +0800 CST deployed        gitlab-agent-1.1.0      v15.0.0

[root@master gitlab-ci]# kubectl get pod -n gitlab-ci

NAME                                             READY   STATUS    RESTARTS      AGE

gitlab-7665cf47c5-8ghbw                          1/1     Running   1 (89m ago)   96m

gitlab-runner-665f4647b9-zhrlh                   1/1     Running   0             18m

kubernetes-agent-gitlab-agent-6df7787756-b4rzx   1/1     Running   0             14s

 

Click “Close” and refresh the page, as shown in Figure 15:


[Figure 15]

The Kubernetes cluster has been connected successfully.

2) Enable the Container Registry

Enable the Container Registry in GitLab: enter the demo-2048 project and click “Settings” → “CI/CD”, as shown in Figure 16:


[Figure 16]

Expand “Variables” and configure the registry-related parameters.

Add a REGISTRY variable whose value is the Harbor registry address, as shown in Figure 17:


[Figure 17]

After it has been added, the page looks like Figure 18:


[Figure 18]

Then add the variables REGISTRY_IMAGE (demo), REGISTRY_USER (admin), REGISTRY_PASSWORD (Harbor12345), REGISTRY_PROJECT (demo) and HOST (10.24.2.14), and save them, as shown in Figure 19:


[Figure 19]

4. Configure the Harbor registry

1) Update the Harbor registry

Modify the Helm values of the Harbor registry:

[root@master ~]# vi /opt/harbor/values.yaml

# change 127.0.0.1 to the actual IP address of the master node

externalURL: http://10.26.7.197:80

 

After the modification, upgrade the Harbor release:

[root@master ~]# helm -n harbor upgrade harbor /opt/harbor

2) Add the demo project

Log in to the Harbor registry and create a new public project named demo, as shown in Figure 20:


[Figure 20]

Push the tomcat:8.5.64-jdk8 image to the registry:

[root@master gitlab-ci]# ctr -n k8s.io images tag docker.io/library/tomcat:8.5.64-jdk8 10.24.2.14/library/tomcat:8.5.64-jdk8

[root@master gitlab-ci]# ctr -n k8s.io images push 10.24.2.14/library/tomcat:8.5.64-jdk8 --plain-http=true --user admin:Harbor12345

 

Modify the containerd configuration file:

[root@master ~]# vi /etc/containerd/config.toml

...

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.com"]

          endpoint = ["http://harbor.com"]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.24.2.14"]

          endpoint = ["http://10.24.2.14"]

[root@master ~]# systemctl daemon-reload

[root@master ~]# systemctl restart containerd

 

5. The .gitlab-ci.yml file

1) Introduction to .gitlab-ci.yml

GitLab CI manages and configures jobs through a YAML file that defines how each job should work. The file lives in the root of the repository and is named .gitlab-ci.yml by default.

  • The .gitlab-ci.yml file specifies the CI trigger conditions, the work to be done, and the workflow; writing and understanding this file is the most important step in CI practice. The tasks it specifies make up one pipeline; a pipeline contains several stages, and each stage contains concrete job script tasks.

When new content is pushed to the repository, or code is merged, GitLab checks whether a .gitlab-ci.yml file exists; if it does, the Runners build that commit according to the file's contents.

2)Pipeline

Once a .gitlab-ci.yml file is triggered, it forms a pipeline, which is processed by gitlab-runner. A pipeline is essentially one build task that can contain many stages, such as installing dependencies, running tests, compiling, deploying to a test server, and deploying to production. Any commit or Merge Request merge can trigger a pipeline build.

3)Stages

A stage represents one build phase, that is, one of the steps mentioned above. A pipeline can define multiple stages, which have the following characteristics:

All stages run in order: a stage starts only after the previous one has completed

The build task (pipeline) succeeds only when all stages have completed

If any stage fails, the subsequent stages do not run and the build task (pipeline) fails

4)Jobs

A job represents a piece of build work executed within a stage. A stage can define multiple jobs, which have the following characteristics:

Jobs in the same stage run in parallel

A stage succeeds only when all of its jobs succeed

If any job fails, its stage fails, and therefore the build task (pipeline) fails

A job is defined as a list of parameters that specify its behavior. The main job parameters are listed in Table 2:

Table 2: Main job parameters

| Parameter | Required | Description |
| --- | --- | --- |
| script | yes | Shell script or commands executed by the Runner |
| image | no | Docker image to use |
| services | no | Docker services to use |
| stages | no | Defines the build stages |
| types | no | Alias for stages (deprecated) |
| before_script | no | Commands run before each job |
| after_script | no | Commands run after each job |
| variables | no | Defines build variables |
| cache | no | Defines a set of files to cache for later runs |
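To illustrate how these parameters fit together before looking at the real pipeline, here is a minimal sketch of a .gitlab-ci.yml; the stage names, image, and cache path are arbitrary examples, not taken from the demo-2048 project:

```yaml
stages:            # stages run in order: test, then build
  - test
  - build

variables:
  APP_ENV: "ci"    # a build variable visible to every job

before_script:
  - echo "runs before each job"

unit_test:
  stage: test
  image: alpine:3.18        # illustrative image
  script:
    - echo "testing in $APP_ENV"

lint:
  stage: test               # same stage as unit_test, so it runs in parallel
  script:
    - echo "linting"

package:
  stage: build              # starts only after every job in 'test' succeeds
  script:
    - echo "packaging"
  cache:
    paths:
      - build/              # kept for later pipeline runs
```

unit_test and lint share the test stage and therefore run in parallel; package starts only after both succeed, and if either fails the pipeline stops there.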

5) Write the pipeline script

Write the .gitlab-ci.yml:

stages:

  - build

  - release

  - review

 

variables:

  MAVEN_OPTS: "-Dmaven.repo.local=/opt/cache/.m2/repository"

 

maven_build:

  image: maven:3.6-jdk-8

  stage: build

  only:

    - drone

  script:

    - cp -r /opt/repository /opt/cache/.m2/

    - mvn clean install -DskipTests=true

    - cd target && jar -xf 2048.war

    - cp -rfv 2048 /home/gitlab-runner/ci-build-cache

 

image_build:

  image: docker:18.09.7

  stage: release

  variables:

    DOCKER_DRIVER: overlay

    DOCKER_HOST: tcp://localhost:2375

    #CI_DEBUG_TRACE: "true"

  services:

    - name: docker:18.09.7-dind

      command: ["--insecure-registry=0.0.0.0/0"]

  script:

    - cp -rfv /home/gitlab-runner/ci-build-cache/2048 .

    - sed -i "s/10.24.2.3/$REGISTRY/g" ./Dockerfiles/Dockerfile

    - docker build -t "${REGISTRY_IMAGE}:latest" -f ./Dockerfiles/Dockerfile .

    - docker tag "${REGISTRY_IMAGE}:latest" "${REGISTRY}/${REGISTRY_PROJECT}/${REGISTRY_IMAGE}:latest"

    - docker login -u "${REGISTRY_USER}" -p "${REGISTRY_PASSWORD}" "${REGISTRY}"

    - docker push "${REGISTRY}/${REGISTRY_PROJECT}/${REGISTRY_IMAGE}:latest"

 

deploy_review:

  image: kubectl:1.22

  stage: review

  only:

    - drone

  script:

    - sed -i "s/REGISTRY/$REGISTRY/g" template/demo-2048.yaml

    - kubectl apply -f template/
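The sed call in the release stage swaps the registry address hard-coded in the Dockerfile for the REGISTRY CI variable. The substitution can be sanity-checked locally; the Dockerfile content below is an illustrative guess at the shape of the real one, and the /tmp path is a stand-in for the checked-out repository:

```shell
#!/bin/sh
set -e
REGISTRY="10.24.2.14"      # what the CI variable $REGISTRY would hold
mkdir -p /tmp/Dockerfiles
# Illustrative Dockerfile carrying the hard-coded registry the pipeline replaces
cat > /tmp/Dockerfiles/Dockerfile <<'EOF'
FROM 10.24.2.3/library/tomcat:8.5.64-jdk8
COPY 2048 /usr/local/tomcat/webapps/2048
EOF
# Same substitution as the image_build job performs on ./Dockerfiles/Dockerfile
sed -i "s/10.24.2.3/$REGISTRY/g" /tmp/Dockerfiles/Dockerfile
head -1 /tmp/Dockerfiles/Dockerfile   # FROM 10.24.2.14/library/tomcat:8.5.64-jdk8
```

After the substitution, the docker build in the release stage pulls its base image from the Harbor registry configured earlier rather than from the stale 10.24.2.3 address.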

 

6. Run the CI/CD build

1) Trigger the build

After the pipeline script is committed, a build is triggered automatically. Enter the demo-2048 project and click “build” → “Pipelines”; GitLab CI starts executing the build task, as shown in Figure 21:


[Figure 21]

Click “running” to view the build details, as shown in Figure 22:


[Figure 22]

Click any stage of the pipeline to view its details, as shown in Figure 23:


[Figure 23]

At this point a new Pod also appears in the namespace where the Runner Pod runs:

[root@master gpmall]# kubectl -n gitlab-ci get pods

NAME                                             READY   STATUS    RESTARTS   AGE

gitlab-7b54df755-6ljtp                           1/1     Running   0          3h6m

gitlab-runner-5dc59b5b77-x2vw8                   1/1     Running   0          129m

kubernetes-agent-gitlab-agent-64bf6d87f4-vgxbx   1/1     Running   0          151m

runner-x16szo9v-project-2-concurrent-0jzq5h      2/2     Running   0          8s

 

This new Pod is the one that executes the actual job.

After the build completes, the page looks like Figure 24:


[Figure 24]

Check the newly deployed Pod:

[root@master manifests]# kubectl -n gitlab-ci get pods

NAME                                             READY   STATUS    RESTARTS   AGE

demo-2048-6bf767d4d4-kks65                       1/1     Running   0          2m22s

gitlab-7b54df755-6ljtp                           1/1     Running   0          3h8m

gitlab-runner-5dc59b5b77-x2vw8                   1/1     Running   0          132m

kubernetes-agent-gitlab-agent-64bf6d87f4-vgxbx   1/1     Running   0          153m

 

2) Check Harbor

Log in to the Harbor registry and enter the demo project, as shown in Figure 25:


[Figure 25]

The image has been built and pushed successfully.

3) Verify the service

Check the Service:

[root@master gitlab-ci]# kubectl -n gitlab-ci get svc

NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE

demo-2048   NodePort   10.96.222.104   <none>        8080:8889/TCP   3m14s

 
