Deploying the MongoDB database service
Extract mongodb-repo.tar.gz:
tar -zxvf ~/mongodb-repo.tar.gz
Check the extracted files:
ls
brotli-1.0.7-5.el7.x86_64.rpm libX11-common-1.6.7-4.el7_9.noarch.rpm
createrepo-0.9.9-28.el7.noarch.rpm libXau-1.0.8-2.1.el7.x86_64.rpm
deltarpm-3.6-3.el7.x86_64.rpm libxcb-1.13-1.el7.x86_64.rpm
gcc-c++-4.8.5-44.el7.x86_64.rpm libXext-1.3.3-3.el7.x86_64.rpm
GraphicsMagick-1.3.38-1.el7.x86_64.rpm mongodb-org-4.0.28-1.el7.x86_64.rpm
jasper-libs-1.900.1-33.el7.x86_64.rpm mongodb-org-mongos-4.0.28-1.el7.x86_64.rpm
jbigkit-libs-2.0-11.el7.x86_64.rpm mongodb-org-server-4.0.28-1.el7.x86_64.rpm
lcms2-2.6-3.el7.x86_64.rpm mongodb-org-shell-4.0.28-1.el7.x86_64.rpm
libICE-1.0.9-9.el7.x86_64.rpm mongodb-org-tools-4.0.28-1.el7.x86_64.rpm
libjpeg-turbo-1.2.90-8.el7.x86_64.rpm nodejs-12.22.12-1nodesource.x86_64.rpm
libSM-1.2.2-2.el7.x86_64.rpm nodejs-16.15.0-3.el7.x86_64.rpm
libstdc++-devel-4.8.5-44.el7.x86_64.rpm nodejs-libs-16.15.0-3.el7.x86_64.rpm
libtiff-4.0.3-35.el7.x86_64.rpm openssl11-1.1.1k-3.el7.x86_64.rpm
libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm openssl11-libs-1.1.1k-3.el7.x86_64.rpm
libuv-1.44.2-1.el7.x86_64.rpm python-deltarpm-3.6-3.el7.x86_64.rpm
libwebp-0.3.0-10.el7_9.x86_64.rpm repodata
libwmf-lite-0.2.8.4-44.el7.x86_64.rpm urw-base35-fonts-legacy-20170801-10.el7.noarch.rpm
libX11-1.6.7-4.el7_9.x86_64.rpm
Install from the local RPM files:
yum install -y brotli-1.0.7-5.el7.x86_64.rpm \
libX11-1.6.7-4.el7_9.x86_64.rpm \
libX11-common-1.6.7-4.el7_9.noarch.rpm \
libXau-1.0.8-2.1.el7.x86_64.rpm \
libxcb-1.13-1.el7.x86_64.rpm \
libXext-1.3.3-3.el7.x86_64.rpm \
gcc-c++-4.8.5-44.el7.x86_64.rpm \
GraphicsMagick-1.3.38-1.el7.x86_64.rpm \
jasper-libs-1.900.1-33.el7.x86_64.rpm \
jbigkit-libs-2.0-11.el7.x86_64.rpm \
lcms2-2.6-3.el7.x86_64.rpm \
libICE-1.0.9-9.el7.x86_64.rpm \
libjpeg-turbo-1.2.90-8.el7.x86_64.rpm \
libSM-1.2.2-2.el7.x86_64.rpm \
libstdc++-devel-4.8.5-44.el7.x86_64.rpm \
libtiff-4.0.3-35.el7.x86_64.rpm \
libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm \
libuv-1.44.2-1.el7.x86_64.rpm \
libwebp-0.3.0-10.el7_9.x86_64.rpm \
libwmf-lite-0.2.8.4-44.el7.x86_64.rpm \
openssl11-1.1.1k-3.el7.x86_64.rpm \
openssl11-libs-1.1.1k-3.el7.x86_64.rpm
yum install -y mongodb-org-4.0.28-1.el7.x86_64.rpm \
mongodb-org-mongos-4.0.28-1.el7.x86_64.rpm \
mongodb-org-server-4.0.28-1.el7.x86_64.rpm \
mongodb-org-shell-4.0.28-1.el7.x86_64.rpm \
mongodb-org-tools-4.0.28-1.el7.x86_64.rpm
# Start the MongoDB service
systemctl start mongod
# Enable MongoDB to start on boot
systemctl enable mongod
# Check the MongoDB service status
systemctl status mongod
Connect to MongoDB:
mongo
Primary/secondary (replica set) management
Edit the configuration file on all three servers:
vim /etc/mongod.conf
# Change the bind IP to 0.0.0.0, then add:
replication:
  replSetName: "cloud"
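For reference, after these edits the relevant parts of /etc/mongod.conf would look roughly like this (a minimal sketch; the rest of the file keeps its stock defaults):
net:
  port: 27017
  bindIp: 0.0.0.0
replication:
  replSetName: "cloud"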
Restart the service:
sudo systemctl restart mongod
Initialize the replica set
mongo    # enter the MongoDB shell
rs.initiate({
_id: "cloud",
members: [
{ _id: 0, host: "172.16.2.128:27017" },
{ _id: 1, host: "172.16.2.76:27017" }
]
})
Verify the replica set configuration:
rs.status()    # run in the MongoDB shell
Verify the secondary node:
rs.conf()    # run in the MongoDB shell
# Check which node is the primary
rs.isMaster()
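Note that the configuration step above mentions three servers while rs.initiate() only lists two members; if a third node should join the set, it can be added from the primary. The address below is a hypothetical placeholder for the third server:
rs.add("172.16.2.130:27017")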
1. Install dependencies
Install Node.js
According to the provided packages, Node.js 12.22.12 needs to be installed:
yum install -y nodejs-12.22.12-1nodesource.x86_64.rpm
Install the other dependencies:
yum install -y gcc-c++ make
yum install -y epel-release GraphicsMagick
npm config set registry https://registry.npmmirror.com/
npm config set ELECTRON_MIRROR https://cdn.npmmirror.com/dist/electron/
Deployment
Extract rocketchat-cloud.tar.gz and enter the directory:
tar -xzvf rocketchat-cloud.tar.gz
cd rocketchat-cloud
Install the Rocket.Chat dependencies:
cd bundle/programs/server/
sudo npm install
Set up the Rocket.Chat installation directory
# Move the bundle to /opt and name it Rocket.Chat
mv bundle /opt/Rocket.Chat
# Add a dedicated user
useradd -M rocketchat && usermod -L rocketchat
# Grant ownership of the directory
chown -R rocketchat:rocketchat /opt/Rocket.Chat
- Verify the replica set name:
- Connect to the MongoDB instance to determine the current replica set name. First connect to MongoDB:
mongo --host 172.16.2.76 --port 27017
- Once inside the MongoDB shell, run the following command to view the replica set status:
rs.status()
- Find the set field in the output; it holds the name of the current replica set.
Find the node binary path:
which node
vi /lib/systemd/system/rocketchat.service
[Unit]
Description=The Rocket.Chat server
After=network.target remote-fs.target nss-lookup.target nginx.service mongod.service
[Service]
ExecStart=/usr/local/node/bin/node /opt/Rocket.Chat/main.js
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rocketchat
User=rocketchat
Environment=MONGO_URL=mongodb://192.168.1.182:27017/rocketchat?replicaSet=<value of the set field>
Environment=MONGO_OPLOG_URL=mongodb://192.168.1.182:27017/local?replicaSet=<value of the set field>
Environment=ROOT_URL=http://localhost:3000/ PORT=3000
[Install]
WantedBy=multi-user.target
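Earlier in this guide the replica set was initialized with the name cloud; under that assumption the two MONGO_* lines would read (keep your own MongoDB host address):
Environment=MONGO_URL=mongodb://192.168.1.182:27017/rocketchat?replicaSet=cloud
Environment=MONGO_OPLOG_URL=mongodb://192.168.1.182:27017/local?replicaSet=cloud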
systemctl start rocketchat
systemctl status rocketchat
1. Create a cloud host
Create an x86 cloud host on Huawei Cloud using the CentOS 7.9 image.
2. Install chkrootkit
Assume the makechk.tar.gz file has already been downloaded to the host.
# Extract makechk.tar.gz
tar -xzvf makechk.tar.gz
cd makechk
# Build and install chkrootkit
make sense
sudo cp chkrootkit /usr/local/bin
3. Scan the system and save the log
# Create the log directory
sudo mkdir -p /var/log/chkrootkit
# Run chkrootkit and save the scan result
sudo /usr/local/bin/chkrootkit > /var/log/chkrootkit/chkrootkit.log
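If the scan is meant to run regularly rather than once, a single cron entry appended to /etc/crontab can reuse the same log file (optional; not part of the original task):
0 3 * * * root /usr/local/bin/chkrootkit >> /var/log/chkrootkit/chkrootkit.log 2>&1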
4. Fix vulnerabilities
Review the scan results and fix any issues found. This usually means manually checking and removing suspicious files, updating system packages, and checking network connections and configuration.
5. Submit information
Make sure the services on the host are running normally, then submit the following:
Username: <your username>
Password: <your password>
Public IP address: <your public IP>
Task 2: Install ELK and add data
1. Create a cloud host
Create an x86 cloud host on Huawei Cloud using the CentOS 7.9 image.
2. Configure the YUM repository and install Docker
Configure the Docker repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io
# Start and enable the Docker service
sudo systemctl start docker
sudo systemctl enable docker
3. Install the ELK service
Assume the sepb_elk_latest.tar file has already been downloaded to the host.
# Load the Docker image
sudo docker load < sepb_elk_latest.tar
# Start the ELK service
sudo docker run -d --name elk -p 5601:5601 -p 9200:9200 -p 5044:5044 <image ID>
4. Add data
Download the RPM packages needed on the monitored target node to the /root directory of the local host. Assume the RPM packages are already local.
# Assume the RPM packages have been downloaded to /root
ls /root/*.rpm
Deploy Helm
Step 1: Create the ChartMuseum namespace
Create the chartmuseum namespace in the Kubernetes cluster with the following command:
kubectl create namespace chartmuseum
Step 2: Write a YAML file to deploy the ChartMuseum service
Create a file named chartmuseum-deployment.yaml and define the ChartMuseum Deployment and Service in it.
apiVersion: apps/v1
kind: Deployment
metadata:
name: chartmuseum
namespace: chartmuseum
spec:
replicas: 1
selector:
matchLabels:
app: chartmuseum
template:
metadata:
labels:
app: chartmuseum
spec:
containers:
- name: chartmuseum
image: chartmuseum/chartmuseum:latest
ports:
- containerPort: 8080
env:
- name: STORAGE
value: "local"
- name: STORAGE_LOCAL_ROOTDIR
value: "/chartmuseum"
volumeMounts:
- name: chartmuseum-storage
mountPath: /chartmuseum
volumes:
- name: chartmuseum-storage
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: chartmuseum
namespace: chartmuseum
spec:
type: ClusterIP
ports:
- port: 8080
targetPort: 8080
selector:
app: chartmuseum
Deploy ChartMuseum
Deploy ChartMuseum with the following command:
kubectl apply -f chartmuseum-deployment.yaml
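To confirm the workload came up before moving on, the usual checks are:
kubectl get pods -n chartmuseum
kubectl get svc -n chartmuseum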
Step 3: Install Helm
- Download Helm: first download and extract the Helm 3.3.0 release tarball:
wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz
tar -zxvf helm-v3.3.0-linux-amd64.tar.gz
- Move Helm onto the PATH:
sudo mv linux-amd64/helm /usr/local/bin/helm
If you lack the permission, you can instead move Helm into the ~/bin directory under your home:
mkdir -p ~/bin
mv linux-amd64/helm ~/bin/
echo 'export PATH=$PATH:~/bin' >> ~/.bashrc
source ~/.bashrc
- Verify the Helm installation:
helm version
Step 4: Connect to the kcloud cluster
To connect to your kcloud cluster you need a kubeconfig file containing the required credentials. This file typically includes:
- the username
- the password
- the cluster's public IP address
You can obtain this information from the Kubernetes console. In brief:
- Get the cluster's public IP address: list the external IPs of the cluster nodes with:
kubectl get nodes -o wide
- Get the connection user and password: these are usually provided when the cluster is created, or can be found in the console under Access Management.
Create a namespace
kubectl create ns <namespace-name>
chartmuseum.yaml
apiVersion: v1
kind: Pod
metadata:
name: chartmuseum
namespace: chartmuseum
labels:
app: chartmuseum
spec:
containers:
- image: chartmuseum/chartmuseum:latest
name: chartmuseum
ports:
- containerPort: 8080
protocol: TCP
env:
- name: DEBUG
value: "1"
- name: STORAGE
value: local
- name: STORAGE_LOCAL_ROOTDIR
value: /charts
volumeMounts:
- name: chartspath
mountPath: /charts
volumes:
- name: chartspath
hostPath:
path: /data/charts
service.yaml
apiVersion: v1
kind: Service
metadata:
name: chartmuseum
namespace: chartmuseum
labels:
app: chartmuseum
spec:
selector:
app: chartmuseum
type: ClusterIP
ports:
- port: 8080
protocol: TCP
targetPort: 8080
Deploy the repository
kubectl apply -f chartmuseum.yaml -f service.yaml
First, extract the Chart package so that its contents can be modified.
tar -xzf wordpress-13.0.23.tgz
cd wordpress
3. Create a Persistent Volume (PV)
Assuming the wordpress Chart already includes a PVC, we need to create a PV manually so it can bind to the PVC. Adjust the PersistentVolume configuration below as needed.
Create a file named wordpress-pv.yaml with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
name: wordpress-pv
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
Apply the PV configuration:
kubectl apply -f wordpress-pv.yaml
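To check that the volume exists and eventually binds to the chart's PVC, look at the PV:
kubectl get pv wordpress-pv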
Edit the values.yaml file and change service.type to NodePort:
service:
type: NodePort
port: 80
nodePort: 30080 # you may pin a specific NodePort or let Kubernetes assign one
cd ..
helm package wordpress
1. Check the certificate and key paths
Make sure the client-certificate and client-key paths are correct. If certfile.cert and keyfile.key are relative paths, make sure they are relative to the current working directory, or use absolute paths.
For example, if these files are under ~/.kube/, you can change the entries like this:
users:
- name: CloudShell
user:
client-certificate: ~/.kube/certfile.cert
client-key: ~/.kube/keyfile.key
# Create a namespace (optional)
kubectl create namespace wordpress
# Deploy WordPress using the modified Chart package
helm install my-wordpress ./wordpress-13.0.23.tgz --namespace wordpress
# Check the deployment status
kubectl get all -n wordpress
# Get the NodePort of the WordPress service
kubectl get svc -n wordpress
Install the Huawei Cloud SDK dependencies
# Elastic Cloud Server (ECS)
pip install huaweicloudsdkecs
# Virtual Private Cloud (VPC)
pip install huaweicloudsdkvpc
# Image Management Service (IMS)
pip install huaweicloudsdkims
# Cloud Container Engine (CCE)
pip install huaweicloudsdkcce
# Relational Database Service (RDS)
pip install huaweicloudsdkrds
Install everything
pip install huaweicloudsdkall
Key pair (Python)
import os
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkcore.exceptions import exceptions
from huaweicloudsdkecs.v2 import *
from huaweicloudsdkecs.v2.region.ecs_region import EcsRegion
def main():
# Set your Huawei Cloud AK/SK
ak = 'your Huawei Cloud AK'
sk = 'your Huawei Cloud SK'
project_id = 'your project ID'  # shown on the project details page of the console
# Create the credentials object
credentials = BasicCredentials(ak, sk, project_id)
# Initialize the ECS client (set the region to match your environment)
ecs_client = EcsClient.new_builder() \
.with_credentials(credentials) \
.with_region(EcsRegion.value_of("cn-north-4")) \
.build()
keypair_name = "chinaskills_keypair"
# Check whether the key pair already exists
try:
list_request = ListKeypairsRequest()
list_response = ecs_client.list_keypairs(list_request)
keypairs = list_response.keypairs
for keypair in keypairs:
if keypair.keypair.name == keypair_name:
# If the key pair exists, delete it
delete_request = DeleteKeypairRequest()
delete_request.keypair_name = keypair_name
ecs_client.delete_keypair(delete_request)
print(f"Deleted existing keypair: {keypair_name}")
break
except exceptions.ClientRequestException as e:
print(f"Failed to list or delete keypair: {e}")
# Create a new key pair
try:
create_request = CreateKeypairRequest()
create_request.body = CreateKeypairRequestBody(
keypair=CreateKeypairOption(name=keypair_name)
)
create_response = ecs_client.create_keypair(create_request)
new_keypair = create_response.keypair
print(f"Created keypair: {new_keypair.name}")
print(f"Public key: {new_keypair.public_key}")
print(f"Private key (save this securely!): {new_keypair.private_key}")
except exceptions.ClientRequestException as e:
print(f"Failed to create keypair: {e}")
if __name__ == "__main__":
main()
- ak: your Access Key
- sk: your Secret Key
- your-region: the region you use, for example cn-north-4
EVS disks (Python)
Use the SDK's EVS (cloud disk) management methods to implement create, delete, query, and update operations for cloud disks.
Create the file create_block_store.py in the /root/huawei directory and use the SDK to write Python code that creates a Huawei Cloud EVS disk with the following requirements:
(1) Availability zone: cn-north-4a
(2) Disk name: chinaskills_volume
(3) Disk type and size: ultra-high I/O, 100 GB
(4) Enable disk sharing
(5) Enable disk encryption with the default KMS key
(6) If the disk already exists, the code must delete it first
(7) Print the disk's details (its status must be available)
When finished, submit the username, password, and IP address of the cloud server node.
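Below is a minimal create_block_store.py sketch for this task. It follows the same client pattern as the ECS scripts in this guide, but the EVS model names (EvsClient, ListVolumesRequest, DeleteVolumeRequest, CreateVolumeRequest/RequestBody/Option), the "SSD" volume type for ultra-high I/O, and the metadata keys used for KMS encryption are assumptions to verify against the installed huaweicloudsdkevs version; the AK/SK and KMS key ID are placeholders.
import time
from huaweicloudsdkcore.auth.credentials import BasicCredentials
from huaweicloudsdkevs.v2 import EvsClient, ListVolumesRequest, DeleteVolumeRequest, \
    CreateVolumeRequest, CreateVolumeRequestBody, CreateVolumeOption
from huaweicloudsdkevs.v2.region.evs_region import EvsRegion

ak = "your-access-key"
sk = "your-secret-key"
client = EvsClient.new_builder() \
    .with_credentials(BasicCredentials(ak, sk)) \
    .with_region(EvsRegion.value_of("cn-north-4")) \
    .build()

name = "chinaskills_volume"

# If a disk with this name already exists, delete it first
for vol in client.list_volumes(ListVolumesRequest()).volumes:
    if vol.name == name:
        client.delete_volume(DeleteVolumeRequest(volume_id=vol.id))
        time.sleep(10)  # give the deletion time to finish

# Ultra-high I/O ("SSD"), 100 GB, shareable, encrypted with the default KMS key
# ("__system__cmkid" must hold the default KMS key ID of your project)
client.create_volume(CreateVolumeRequest(body=CreateVolumeRequestBody(
    volume=CreateVolumeOption(
        availability_zone="cn-north-4a",
        name=name,
        size=100,
        volume_type="SSD",
        multiattach=True,
        metadata={"__system__encrypted": "1", "__system__cmkid": "your-kms-key-id"},
    )
)))

# Poll until the new disk reports "available", then print its details
while True:
    vols = [v for v in client.list_volumes(ListVolumesRequest()).volumes if v.name == name]
    if vols and vols[0].status == "available":
        print(vols[0])
        break
    time.sleep(5)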
Create a cloud server (ECS management script)
import argparse
import json
import yaml
import huaweicloudsdkcore.auth.credentials as credentials
import huaweicloudsdkcore.exceptions as exceptions
import huaweicloudsdkcore.http.http_config as http_config
from huaweicloudsdkecs.v2 import *
from huaweicloudsdkecs.v2.region.ecs_region import EcsRegion
def init_client():
# Replace with your actual AK/SK and region
ak = "your-access-key"
sk = "your-secret-key"
region = "cn-north-4"
auth = credentials.BasicCredentials(ak, sk)
config = http_config.HttpConfig.get_default_config()
config.ignore_ssl_verification = True
client = EcsClient.new_builder() \
.with_http_config(config) \
.with_credentials(auth) \
.with_region(EcsRegion.value_of(region)) \
.build()
return client
def create_instance(client, instance_info):
try:
server_name = instance_info['name']
image_id = instance_info['imagename']
# Create ECS instance
create_request = NovaCreateServersRequest()
create_request.body = NovaCreateServersRequestBody(
server=NovaCreateServersOption(
name=server_name,
image_ref=image_id,
flavor_ref="s2.small.1",
availability_zone="cn-north-4a",
networks=[NovaServerNetwork(id="your-network-id")],
security_groups=[NovaServerSecurityGroup(name="default")]
)
)
create_response = client.nova_create_servers(create_request)
server_id = create_response.server.id
# Wait for the instance to be active
while True:
show_request = ShowServerRequest(server_id)
show_response = client.show_server(show_request)
if show_response.server.status == "ACTIVE":
print(json.dumps(show_response.server.to_dict(), indent=4))
break
except exceptions.ClientRequestException as e:
print(f"Error: {e.status_code}, {e.error_msg}")
except Exception as e:
print(f"Unexpected error: {str(e)}")
def get_instance(client, name, output_file=None):
try:
list_request = ListServersDetailsRequest()
list_response = client.list_servers_details(list_request)
servers = list_response.servers
for server in servers:
if server.name == name:
server_info = json.dumps(server.to_dict(), indent=4)
if output_file:
with open(output_file, 'w') as f:
f.write(server_info)
else:
print(server_info)
return
print(f"No server with name {name} found.")
except exceptions.ClientRequestException as e:
print(f"Error: {e.status_code}, {e.error_msg}")
except Exception as e:
print(f"Unexpected error: {str(e)}")
def get_all_instances(client, output_file=None):
try:
list_request = ListServersDetailsRequest()
list_response = client.list_servers_details(list_request)
servers = list_response.servers
servers_info = [server.to_dict() for server in servers]
output = yaml.dump(servers_info, default_flow_style=False)
if output_file:
with open(output_file, 'w') as f:
f.write(output)
else:
print(output)
except exceptions.ClientRequestException as e:
print(f"Error: {e.status_code}, {e.error_msg}")
except Exception as e:
print(f"Unexpected error: {str(e)}")
def delete_instance(client, name):
try:
list_request = ListServersDetailsRequest()
list_response = client.list_servers_details(list_request)
servers = list_response.servers
for server in servers:
if server.name == name:
delete_request = DeleteServerRequest(server.id)
client.delete_server(delete_request)
print(f"Deleted server with name {name}")
return
print(f"No server with name {name} found.")
except exceptions.ClientRequestException as e:
print(f"Error: {e.status_code}, {e.error_msg}")
except Exception as e:
print(f"Unexpected error: {str(e)}")
def main():
parser = argparse.ArgumentParser(description='ECS Manager')
subparsers = parser.add_subparsers(dest='command')
# Create instance command
create_parser = subparsers.add_parser('create', help='Create an ECS instance')
create_parser.add_argument('-i', '--input', required=True, help='JSON formatted instance info')
# Get instance command
get_parser = subparsers.add_parser('get', help='Get an ECS instance')
get_parser.add_argument('-n', '--name', required=True, help='Instance name')
get_parser.add_argument('-o', '--output', help='Output file')
# Get all instances command
get_all_parser = subparsers.add_parser('getall', help='Get all ECS instances')
get_all_parser.add_argument('-o', '--output', help='Output file')
# Delete instance command
delete_parser = subparsers.add_parser('delete', help='Delete an ECS instance')
delete_parser.add_argument('-n', '--name', required=True, help='Instance name')
args = parser.parse_args()
client = init_client()
if args.command == 'create':
instance_info = json.loads(args.input)
create_instance(client, instance_info)
elif args.command == 'get':
get_instance(client, args.name, args.output)
elif args.command == 'getall':
get_all_instances(client, args.output)
elif args.command == 'delete':
delete_instance(client, args.name)
else:
parser.print_help()
if __name__ == "__main__":
main()
Create an ECS instance:
python3 /root/huawei/ecs_manager.py create --input '{ "name": "chinaskill001", "imagename": "your-image-id"}'
Query an ECS instance by name:
python3 /root/huawei/ecs_manager.py get --name "chinaskill001" --output instance_info.json
Query all ECS instances:
python3 /root/huawei/ecs_manager.py getall --output all_instances.yaml
Delete an ECS instance by name:
python3 /root/huawei/ecs_manager.py delete --name "chinaskill001"
VPC management (Python + FastAPI)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import huaweicloudsdkcore.auth.credentials as credentials
import huaweicloudsdkcore.exceptions as exceptions
import huaweicloudsdkvpc.v2 as vpc
from huaweicloudsdkvpc.v2.region.vpc_region import VpcRegion
app = FastAPI()
# Replace with your actual AK/SK
ak = "your-access-key"
sk = "your-secret-key"
region = "cn-north-4"
auth = credentials.BasicCredentials(ak, sk)
client = vpc.VpcClient.new_builder() \
.with_credentials(auth) \
.with_region(VpcRegion.value_of(region)) \
.build()
class VpcCreate(BaseModel):
name: str
cidr: str
class VpcUpdate(BaseModel):
new_name: str
old_name: str
class VpcDelete(BaseModel):
vpc_name: str
@app.post("/cloud_vpc/create_vpc")
async def create_vpc(vpc_details: VpcCreate):
try:
request = vpc.CreateVpcRequest()
request.body = vpc.CreateVpcRequestBody(
vpc=vpc.CreateVpcOption(
name=vpc_details.name,
cidr=vpc_details.cidr
)
)
response = client.create_vpc(request)
return response.to_dict()
except exceptions.ClientRequestException as e:
raise HTTPException(status_code=e.status_code, detail=e.error_msg)
@app.get("/cloud_vpc/vpc/{vpc_name}")
async def get_vpc(vpc_name: str):
try:
request = vpc.ListVpcsRequest()
response = client.list_vpcs(request)
for vpc_item in response.vpcs:
if vpc_item.name == vpc_name:
return vpc_item.to_dict()
raise HTTPException(status_code=404, detail="VPC not found")
except exceptions.ClientRequestException as e:
raise HTTPException(status_code=e.status_code, detail=e.error_msg)
@app.get("/cloud_vpc/vpc")
async def get_all_vpcs():
try:
request = vpc.ListVpcsRequest()
response = client.list_vpcs(request)
return [vpc_item.to_dict() for vpc_item in response.vpcs]
except exceptions.ClientRequestException as e:
raise HTTPException(status_code=e.status_code, detail=e.error_msg)
@app.put("/cloud_vpc/update_vpc")
async def update_vpc(vpc_update: VpcUpdate):
try:
request = vpc.ListVpcsRequest()
response = client.list_vpcs(request)
for vpc_item in response.vpcs:
if vpc_item.name == vpc_update.old_name:
update_request = vpc.UpdateVpcRequest(vpc_item.id)
update_request.body = vpc.UpdateVpcRequestBody(
vpc=vpc.UpdateVpcOption(
name=vpc_update.new_name
)
)
update_response = client.update_vpc(update_request)
return update_response.to_dict()
raise HTTPException(status_code=404, detail="VPC not found")
except exceptions.ClientRequestException as e:
raise HTTPException(status_code=e.status_code, detail=e.error_msg)
@app.delete("/cloud_vpc/delete_vpc")
async def delete_vpc(vpc_delete: VpcDelete):
try:
request = vpc.ListVpcsRequest()
response = client.list_vpcs(request)
for vpc_item in response.vpcs:
if vpc_item.name == vpc_delete.vpc_name:
delete_request = vpc.DeleteVpcRequest(vpc_item.id)
client.delete_vpc(delete_request)
return {"detail": "VPC deleted successfully"}
raise HTTPException(status_code=404, detail="VPC not found")
except exceptions.ClientRequestException as e:
raise HTTPException(status_code=e.status_code, detail=e.error_msg)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=7045)
Start it from the command line:
uvicorn main:app --host 0.0.0.0 --port 7045
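Once the service is listening, a quick smoke test of the create endpoint can be done with curl (the VPC name and CIDR here are arbitrary examples):
curl -X POST http://127.0.0.1:7045/cloud_vpc/create_vpc -H 'Content-Type: application/json' -d '{"name": "vpc-demo", "cidr": "192.168.0.0/16"}'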
Install kubectl
/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
yum install -y kubectl-1.25.1
Then proceed with the installation; note that the kubectl version must match the cluster version.
Configure kubectl:
mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config
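With the kubeconfig in place, a quick check that kubectl can reach the cluster:
kubectl get nodes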
mu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: mu-pod
namespace: default
spec:
containers:
- name: containers01
image: nginx
ports:
- name: http
containerPort: 80
- name: containers02
image: tomcat
ports:
- name: tomcat
containerPort: 80
my-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: test
vi secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysecret
namespace: default
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
type: Opaque
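The two data values are base64-encoded; they decode to admin and 1f2d1e2e67df respectively, which can be confirmed with:
echo YWRtaW4= | base64 -d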
cat mariadbnamespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: mariadb
[Task 13] ChartMuseum repository deployment [1 point]
Create the chartmuseum namespace in the k8s cluster and write a YAML file that deploys a local private chart repository in the chartmuseum namespace using the chartmuseum:latest image, with its storage directory set to /data/charts on the host. Write a service.yaml file that creates a Service access policy for the private chart repository using the ClusterIP access mode. Start the chartmuseum service when done. Submit the username, password, and public IP address for connecting to the kcloud cluster node.
Verifying that the chartmuseum service responds correctly is worth 1 point.
apiVersion: v1
kind: Namespace
metadata:
name: chartmuseum
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: chartmuseum
name: chartmuseum
namespace: chartmuseum
spec:
replicas: 1
selector:
matchLabels:
app: chartmuseum
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app: chartmuseum
spec:
containers:
- image: chartmuseum/chartmuseum:latest
imagePullPolicy: IfNotPresent
name: chartmuseum
ports:
- containerPort: 8080
protocol: TCP
env:
- name: DEBUG
value: "1"
- name: STORAGE
value: local
- name: STORAGE_LOCAL_ROOTDIR
value: /charts
resources:
limits:
cpu: 500m
memory: 256Mi
requests:
cpu: 100m
memory: 64Mi
volumeMounts:
- mountPath: /charts
name: charts-volume
volumes:
- name: charts-volume
nfs:
path: /data/charts
server: 192.168.200.10
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: chartmuseum
namespace: chartmuseum
spec:
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
app: chartmuseum
[Task 14] Private repository management [2 points]
On the master node, add the local private chart repository you built as a repo source named chartmuseum, and upload the wordpress-13.0.23.tgz package to the chartmuseum private repository. The local chart repository source can then be used to deploy applications. When finished, submit the username, password, and public IP address for connecting to the kcloud cluster node.
Verifying that wordpress-13.0.23 exists in the chartmuseum repository source is worth 2 points.
# Grant 777 permissions on /data/charts
chmod 777 /data/charts/
# Check the Service
[root@kcloud-server ~]# kubectl get svc -n chartmuseum
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
chartmuseum ClusterIP 10.247.199.133 <none> 8080/TCP 24m
# Add the local repository source, named chartmuseum
[root@kcloud-server ~]# helm repo add chartmuseum http://10.247.199.133:8080
"chartmuseum" has been added to your repositories
[root@kcloud-server ~]# helm repo list
NAME URL
chartmuseum http://10.247.199.133:8080
# Upload the wordpress-13.0.23.tgz package to the chartmuseum private repository
[root@kcloud-server ~]# curl --data-binary "@wordpress-13.0.23.tgz" http://10.247.199.133:8080/api/charts
{"saved":true}[root@kcloud-server ~]#
# Update the repositories
[root@kcloud-server ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "chartmuseum" chart repository
Update Complete. ⎈ Happy Helming!⎈
# List the chart
[root@kcloud-server ~]# helm search repo wordpress
NAME CHART VERSION APP VERSION DESCRIPTION
chartmuseum/wordpress 13.0.23 5.9.2 WordPress is the world's most popular blogging ...
# Check the /data/charts/ directory
[root@kcloud-server charts]# ls
index-cache.yaml wordpress-13.0.23.tgz
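With the chart now in the private repository, an application can be deployed straight from that source; for example (the release name here is an arbitrary choice):
helm install my-wordpress chartmuseum/wordpress --version 13.0.23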
Cloud host security system
# Purchase a CentOS 7.9 cloud host
# Upload the makechk.tar.gz and chkrootkit.tar.gz packages
# Extract makechk.tar.gz
# Configure the yum repository
[root@ecs-cecc ~]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///root/makechk
gpgcheck=0
enabled=1
[root@ecs-cecc ~]# yum makecache
# Install the build dependencies
[root@ecs-cecc packages]# cd /root/ && yum install -y gcc gcc-c++ make glibc*
# Extract chkrootkit.tar.gz
# List the directory contents
[root@ecs-cecc ~]# cd chkrootkit-0.55/
[root@ecs-cecc chkrootkit-0.55]# ls
ACKNOWLEDGMENTS chkdirs.c chkproc.c chkrootkit.lsm chkwtmp.c ifpromisc.c patch README.chklastlog strings.c
check_wtmpx.c chklastlog.c chkrootkit chkutmp.c COPYRIGHT Makefile README README.chkwtmp
# Build and install
[root@ecs-cecc chkrootkit-0.55]# make sense
cc -DHAVE_LASTLOG_H -o chklastlog chklastlog.c
cc -DHAVE_LASTLOG_H -o chkwtmp chkwtmp.c
cc -DHAVE_LASTLOG_H -D_FILE_OFFSET_BITS=64 -o ifpromisc ifpromisc.c
cc -o chkproc chkproc.c
cc -o chkdirs chkdirs.c
cc -o check_wtmpx check_wtmpx.c
cc -static -o strings-static strings.c
cc -o chkutmp chkutmp.c
# Copy chkrootkit into the PATH
[root@ecs-cecc ~]# cp -r chkrootkit-0.55/ /usr/local/chkrootkit
[root@ecs-cecc ~]# cd /usr/local/chkrootkit/
[root@ecs-cecc chkrootkit]# ls
ACKNOWLEDGMENTS chkdirs chklastlog.c chkrootkit chkutmp.c COPYRIGHT Makefile README.chklastlog strings-static
check_wtmpx chkdirs.c chkproc chkrootkit.lsm chkwtmp ifpromisc patch README.chkwtmp
check_wtmpx.c chklastlog chkproc.c chkutmp chkwtmp.c ifpromisc.c README strings.c
[root@ecs-cecc chkrootkit]# cp chkrootkit /usr/bin/
# Check the version
[root@ecs-cecc chkrootkit]# chkrootkit -V
chkrootkit version 0.55
# Create the /var/log/chkrootkit/chkrootkit.log file
[root@ecs-cecc ~]# mkdir /var/log/chkrootkit/
[root@ecs-cecc ~]# touch /var/log/chkrootkit/chkrootkit.log
# Scan the system and save the output to /var/log/chkrootkit/chkrootkit.log
[root@ecs-cecc ~]# chkrootkit > /var/log/chkrootkit/chkrootkit.log
# View the scan results
[root@ecs-cecc ~]# cat /var/log/chkrootkit/chkrootkit.log
Log analysis
# Upload docker-repo.tar.gz and sepb_elk_latest.tar
# Extract docker-repo.tar.gz
# Configure the yum repository and install Docker
[root@ecs-cecc ~]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///opt/docker-repo
gpgcheck=0
enabled=1
[root@ecs-cecc ~]# yum clean all
[root@ecs-cecc ~]# yum makecache
# Install Docker
[root@ecs-cecc ~]# yum install -y docker-ce
# Start Docker and enable it on boot
[root@ecs-cecc ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Check the status
[root@ecs-cecc ~]# systemctl status docker
# Load the image
[root@ecs-cecc ~]# docker load -i sepb_elk_latest.tar
# Start the ELK container (Elasticsearch needs a larger number of virtual memory areas: append vm.max_map_count=262144 to /etc/sysctl.conf)
[root@ecs-cecc ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m -it --name elk sebp/elk:latest
[root@ecs-cecc ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1bf5111a8a0c sebp/elk:latest "/usr/local/bin/star…" About a minute ago Up About a minute 0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 9300/tcp, 0.0.0.0:9200->9200/tcp, 9600/tcp elk
[root@ecs-cecc ~]#
# Upload filebeat-7.13.2-x86_64.rpm
# Install filebeat
[root@ecs-cecc ~]# yum install -y filebeat-7.13.2-x86_64.rpm
# Start it
[root@ecs-cecc ~]# systemctl start filebeat
# Check the status
[root@ecs-cecc ~]# systemctl status filebeat
# Using filebeat
Option 1: collect yum log data into a local file
[root@ecs-cecc ~]# vi /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: True
paths:
- /var/log/yum.log
output.file:
path: "/tmp"
filename: "filebeat-test.txt"
# Restart the filebeat service
[root@ecs-cecc ~]# systemctl restart filebeat
# Install the httpd service (to generate a yum log entry)
[root@ecs-cecc ~]# yum install -y httpd
# Verify
[root@ecs-cecc tmp]# cat /tmp/filebeat-test.txt
{"@timestamp":"2022-10-16T09:20:03.410Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.13.2"},"log":{"offset":2213,"file":{"path":"/var/log/yum.log"}},"message":"Oct 16 17:20:02 Installed: httpd-2.4.6-97.el7.centos.5.x86_64","input":{"type":"log"},"host":{"hostname":"ecs-cecc","architecture":"x86_64","name":"ecs-cecc","os":{"family":"redhat","name":"CentOS Linux","kernel":"3.10.0-1160.53.1.el7.x86_64","codename":"Core","type":"linux","platform":"centos","version":"7 (Core)"},"id":"acca19161ce94d449c58923b12797030","containerized":false,"ip":["192.168.1.151","fe80::f816:3eff:fe79:d168","172.17.0.1","fe80::42:40ff:fef4:5e7","fe80::14fb:49ff:feec:ffad"],"mac":["fa:16:3e:79:d1:68","02:42:40:f4:05:e7","16:fb:49:ec:ff:ad"]},"agent":{"version":"7.13.2","hostname":"ecs-cecc","ephemeral_id":"a522699e-3e6b-44a7-b833-d14b43d2edba","id":"67d653cb-908e-418f-9356-5b7f2461dbe8","name":"ecs-cecc","type":"filebeat"},"ecs":{"version":"1.8.0"},"cloud":{"machine":{"type":"c6s.xlarge.2"},"service":{"name":"Nova"},"provider":"openstack","instance":{"name":"ecs-cecc.novalocal","id":"i-0129dc00"},"availability_zone":"cn-east-2c"}}
Option 2: collect yum log data into Elasticsearch
# Edit the configuration file
[root@ecs-cecc ~]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: True
paths:
- /var/log/yum.log
output.elasticsearch:
hosts: ["localhost:9200"]
# Restart
[root@ecs-cecc ~]# systemctl restart filebeat
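After the restart, new yum events should show up as a filebeat index in Elasticsearch; a quick check against the ELK container:
curl localhost:9200/_cat/indices?v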
2.1.1 Installing KubeEdge
(1) Basic environment check and configuration
After starting the cloud-side host from the k8s-allinone image, wait a while for the cluster to finish initializing, then check the k8s cluster status and the hostname:
[root@master ~]# kubectl get nodes,pod -A
NAME STATUS ROLES AGE VERSION
node/master Ready control-plane,master 5m14s v1.22.1
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-dashboard pod/dashboard-7575cf67b7-5s5hz 1/1 Running 0 4m51s
kube-dashboard pod/dashboard-agent-69456b7f56-nzghp 1/1 Running 0 4m50s
kube-system pod/coredns-78fcd69978-klknx 1/1 Running 0 4m58s
kube-system pod/coredns-78fcd69978-xwzgr 1/1 Running 0 4m58s
kube-system pod/etcd-master 1/1 Running 0 5m14s
kube-system pod/kube-apiserver-master 1/1 Running 0 5m11s
kube-system pod/kube-controller-manager-master 1/1 Running 0 5m13s
kube-system pod/kube-flannel-ds-9gdnl 1/1 Running 0 4m51s
kube-system pod/kube-proxy-r7gq9 1/1 Running 0 4m58s
kube-system pod/kube-scheduler-master 1/1 Running 0 5m11s
kube-system pod/metrics-server-77564bc84d-tlrp7 1/1 Running 0 4m50s
The output above shows that the cluster and the Pods are all healthy.
Check the current node's hostname:
[root@master ~]# hostnamectl
Static hostname: master
Icon name: computer-vm
Chassis: vm
Machine ID: cc2c86fe566741e6a2ff6d399c5d5daa
Boot ID: 94e196b737b6430bac5fbc0af88cbcd1
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.el7.x86_64
Architecture: x86-64
Change the hostname of the edge node:
[root@localhost ~]# hostnamectl set-hostname kubeedge-node
[root@kubeedge-node ~]# hostnamectl
Static hostname: kubeedge-node
Icon name: computer-vm
Chassis: vm
Machine ID: cc2c86fe566741e6a2ff6d399c5d5daa
Boot ID: c788c13979e0404eb5afcd9b7bc8fd4b
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.el7.x86_64
Architecture: x86-64
Configure the host mapping file on the cloud node and the edge node respectively:
[root@master ~]# cat >> /etc/hosts <<EOF
10.26.17.135 master
10.26.7.126 kubeedge-node
EOF
[root@kubeedge-node ~]# cat >> /etc/hosts <<EOF
10.26.17.135 master
10.26.7.126 kubeedge-node
EOF
(2) Configure the yum repository on the cloud and edge nodes
Download the kubernetes_kubeedge.tar.gz package to /root on the cloud-side master node and extract it to /opt:
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/kubernetes_kubeedge_allinone.tar.gz
[root@master ~]# tar -zxvf kubernetes_kubeedge_allinone.tar.gz -C /opt/
[root@master ~]# ls
docker-compose-Linux-x86_64 harbor-offline-installer-v2.5.0.tgz kubeedge kubernetes_kubeedge.tar.gz
ec-dashboard-sa.yaml k8simage kubeedge-counter-demo yum
Configure the yum repository on the cloud-side master node:
[root@master ~]# mv /etc/yum.repos.d/* /media/
[root@master ~]# cat > /etc/yum.repos.d/local.repo <<EOF
[docker]
name=docker
baseurl=file:///opt/yum
gpgcheck=0
enabled=1
EOF
[root@master ~]# yum -y install vsftpd
[root@master ~]# echo anon_root=/opt >> /etc/vsftpd/vsftpd.conf
Start the service and enable it on boot:
[root@master ~]# systemctl enable vsftpd --now
Configure the yum repository on the edge node kubeedge-node:
[root@kubeedge-node ~]# mv /etc/yum.repos.d/* /media/
[root@kubeedge-node ~]# cat >/etc/yum.repos.d/ftp.repo <<EOF
[docker]
name=docker
baseurl=ftp://master/yum
gpgcheck=0
enabled=1
EOF
(3) Configure Docker on the cloud and edge nodes
The cloud-side master node already has Docker installed; configure it so images can be pulled from the local registry:
[root@master ~]# vi /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "200m",
"max-file": "5"
},
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 655360,
"Soft": 655360
},
"nproc": {
"Name": "nproc",
"Hard": 655360,
"Soft": 655360
}
},
"live-restore": true,
"oom-score-adjust": -1000,
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 10,
"insecure-registries": ["0.0.0.0/0"]
}
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
Install Docker on the edge node and configure it to pull from the local registry:
[root@kubeedge-node ~]# yum -y install docker-ce
[root@kubeedge-node ~]# vi /etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "200m",
"max-file": "5"
},
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 655360,
"Soft": 655360
},
"nproc": {
"Name": "nproc",
"Hard": 655360,
"Soft": 655360
}
},
"live-restore": true,
"oom-score-adjust": -1000,
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 10,
"insecure-registries": ["0.0.0.0/0"]
}
[root@kubeedge-node ~]# systemctl daemon-reload
[root@kubeedge-node ~]# systemctl enable docker --now
(4) Deploy the Harbor registry on the cloud node
Deploy a local Harbor image registry on the cloud-side master node:
[root@master ~]# cd /opt/
[root@master opt]# mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
[root@master opt]# tar -zxvf harbor-offline-installer-v2.5.0.tgz
[root@master opt]# cd harbor && cp harbor.yml.tmpl harbor.yml
[root@master harbor]# vi harbor.yml
hostname: 10.26.17.135 # change hostname to the cloud node's IP
[root@master harbor]# ./install.sh
……
✔ ----Harbor has been installed and started successfully.----
[root@master harbor]# docker login -u admin -p Harbor12345 master
….
Login Succeeded
Open a browser and use the cloud-side master node IP to visit the Harbor page, log in with the default credentials (admin/Harbor12345), and create a project named "k8s", as shown below:
Figure 2-1: Creating the k8s project
Load the local images and push them to the Harbor registry:
[root@master harbor]# cd /opt/k8simage/ && sh load.sh
[root@master k8simage]# sh push.sh
Enter your Harbor registry address (without http): 10.26.17.135 # the address of the cloud-side master node
(5) Configure node affinity
On the cloud node, configure node affinity for the flannel pod and the kube-proxy pod respectively:
[root@master k8simage]# kubectl edit daemonset -n kube-system kube-flannel-ds
......
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
- key: node-role.kubernetes.io/edge # add this configuration before the containers field
operator: DoesNotExist
[root@master k8simage]# kubectl edit daemonset -n kube-system kube-proxy
spec:
affinity: # add this configuration before the containers field
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
[root@master k8simage]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-q7mfq 1/1 Running 0 13m
kube-proxy-wxhkm 1/1 Running 0 39s
After the edits, both pods are recreated and return to the Running state.
(6) Set up the KubeEdge cloud side
On the cloud-side master node, put the required cloud-side packages and service configuration files in place:
[root@master k8simage]# cd /opt/kubeedge/
[root@master kubeedge]# mv keadm /usr/bin/
[root@master kubeedge]# mkdir /etc/kubeedge
[root@master kubeedge]# tar -zxf kubeedge-1.11.1.tar.gz
[root@master kubeedge]# cp -rf kubeedge-1.11.1/build/tools/* /etc/kubeedge/
[root@master kubeedge]# cp -rf kubeedge-1.11.1/build/crds/ /etc/kubeedge/
[root@master kubeedge]# tar -zxf kubeedge-v1.11.1-linux-amd64.tar.gz
[root@master kubeedge]# cp -rf * /etc/kubeedge/
Start the cloud-side service:
[root@master kubeedge]# cd /etc/kubeedge/
[root@master kubeedge]# keadm deprecated init --kubeedge-version=1.11.1 --advertise-address=10.26.17.135
……
KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
CloudCore started
● --kubeedge-version=: the KubeEdge version; it must be specified for an offline installation, otherwise the latest version is downloaded automatically.
● --advertise-address=: the advertised IP; use the internal IP of the node running keadm. To connect an on-premises cluster you would use the public IP, but since everything here runs in the cloud the internal IP is enough.
Check the cloud-side service:
[root@master kubeedge]# netstat -ntpl |grep cloudcore
tcp6 0 0 :::10000 :::* LISTEN 974/cloudcore
tcp6 0 0 :::10002 :::* LISTEN 974/cloudcore
(7) Set up the KubeEdge edge side
On the edge node kubeedge-node, copy the cloud-side packages to the local machine:
[root@kubeedge-node ~]# scp root@master:/usr/bin/keadm /usr/local/bin/
[root@kubeedge-node ~]# mkdir /etc/kubeedge
[root@kubeedge-node ~]# cd /etc/kubeedge/
[root@kubeedge-node kubeedge]# scp -r root@master:/etc/kubeedge/* /etc/kubeedge/
Query the token on the cloud-side master node; remove any line breaks from the copied token value:
[root@master kubeedge]# keadm gettoken
1f0f213568007af1011199f65ca6405811573e44061c903d0f24c7c0379a5f65.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTEwNTc2ODN9.48eiBKuwwL8bFyQcfYyicnFSogra0Eh0IpyaRMg5NvY
On the edge node kubeedge-node, join the cluster with the following command:
[root@kubeedge-node ~]# keadm deprecated join --cloudcore-ipport=10.26.17.135:10000 --kubeedge-version=1.11.1 --token=1f0f213568007af1011199f65ca6405811573e44061c903d0f24c7c0379a5f65.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTEwNTc2ODN9.48eiBKuwwL8bFyQcfYyicnFSogra0Eh0IpyaRMg5NvY
install MQTT service successfully.
......
[Run as service] service file already exisits in /etc/kubeedge//edgecore.service, skip download
kubeedge-v1.11.1-linux-amd64/
kubeedge-v1.11.1-linux-amd64/edge/
kubeedge-v1.11.1-linux-amd64/edge/edgecore
kubeedge-v1.11.1-linux-amd64/version
kubeedge-v1.11.1-linux-amd64/cloud/
kubeedge-v1.11.1-linux-amd64/cloud/csidriver/
kubeedge-v1.11.1-linux-amd64/cloud/csidriver/csidriver
kubeedge-v1.11.1-linux-amd64/cloud/iptablesmanager/
kubeedge-v1.11.1-linux-amd64/cloud/iptablesmanager/iptablesmanager
kubeedge-v1.11.1-linux-amd64/cloud/cloudcore/
kubeedge-v1.11.1-linux-amd64/cloud/cloudcore/cloudcore
kubeedge-v1.11.1-linux-amd64/cloud/controllermanager/
kubeedge-v1.11.1-linux-amd64/cloud/controllermanager/controllermanager
kubeedge-v1.11.1-linux-amd64/cloud/admission/
kubeedge-v1.11.1-linux-amd64/cloud/admission/admission
KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
If yum reports an error, remove the extra yum repository files and rerun the join command:
[root@kubeedge-node kubeedge]# rm -rf /etc/yum.repos.d/epel*
Check that the service status is active:
[root@kubeedge-node kubeedge]# systemctl status edgecore
● edgecore.service
Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2023-08-03 06:05:39 UTC; 15s ago
Main PID: 8405 (edgecore)
Tasks: 15
Memory: 34.3M
CGroup: /system.slice/edgecore.service
└─8405 /usr/local/bin/edgecore
On the cloud-side master node, check that the edge node has joined successfully:
[root@master kubeedge]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubeedge-node Ready agent,edge 5m19s v1.22.6-kubeedge-v1.11.1
master Ready control-plane,master 176m v1.22.1
If two nodes are listed and both are in the Ready state, the edge node has joined successfully.
(8) Deploy the monitoring service on the cloud node
Configure the certificate on the cloud-side master node:
[root@master kubeedge]# export CLOUDCOREIPS="10.26.17.135"
The IP here is the cloud-side master node's IP:
[root@master kubeedge]# cd /etc/kubeedge/
[root@master kubeedge]# ./certgen.sh stream
Update the cloud-side configuration so that monitoring data can be sent to the cloud master node:
[root@master kubeedge]# vi /etc/kubeedge/config/cloudcore.yaml
cloudStream:
enable: true # change to true
streamPort: 10003
router:
address: 0.0.0.0
enable: true # change to true
port: 9443
restTimeout: 60
Update the edge-side configuration:
[root@kubeedge-node kubeedge]# vi /etc/kubeedge/config/edgecore.yaml
edgeStream:
enable: true # change to true
handshakeTimeout: 30
serviceBus:
enable: true # change to true
Restart the cloud-side service:
[root@master kubeedge]# kill -9 $(netstat -lntup |grep cloudcore |awk 'NR==1 {print $7}' |cut -d '/' -f 1)
[root@master kubeedge]# cp -rfv cloudcore.service /usr/lib/systemd/system/
[root@master kubeedge]# systemctl start cloudcore.service
[root@master kubeedge]# netstat -lntup |grep 10003
tcp6 0 0 :::10003 :::* LISTEN 15089/cloudcore
Check the port with netstat -lntup |grep 10003; if port 10003 is listening, cloudStream has been enabled successfully.
Restart the edge-side service:
[root@kubeedge-node kubeedge]# systemctl restart edgecore.service
Deploy the service on the cloud side and view the collected metrics:
[root@master kubeedge]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
kubeedge-node 24m 0% 789Mi 6%
master 278m 3% 8535Mi 54%
After the service is deployed, it takes a while before kubeedge-node's resource usage shows up, because the data has not yet been synced to the cloud node.
2.1.2 Install dependency packages
First install the gcc compiler (some systems ship with it preinstalled; check with gcc --version and install it if missing, otherwise the Python build may fail). For Python versions below 3.7.0, libffi-devel can be skipped.
On the cloud node, download the offline yum repository and install the packages:
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/gcc-repo.tar.gz
[root@master ~]# tar -zxvf gcc-repo.tar.gz
[root@master ~]# vi /etc/yum.repos.d/gcc.repo
[gcc]
name=gcc
baseurl=file:///root/gcc-repo
gpgcheck=0
enabled=1
[root@master ~]# yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel libffi-devel gcc
2.1.3 Build and install Python 3.7
On the cloud node, download the Python 3.7 source and related packages, then extract and build them:
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/Python-3.7.3.tar.gz
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/volume_packages.tar.gz
[root@master ~]# mkdir /usr/local/python3 && tar -zxvf Python-3.7.3.tar.gz
[root@master ~]# cd Python-3.7.3/
[root@master Python-3.7.3]# ./configure --prefix=/usr/local/python3
[root@master Python-3.7.3]# make && make install
[root@master Python-3.7.3]# cd /root
2.1.4 Create Python symlinks
Extract the volume_packages archive, symlink the compiled Python 3.7 into /usr/bin, and check the version:
[root@master ~]# tar -zxvf volume_packages.tar.gz
[root@master ~]# yes |mv volume_packages/site-packages/* /usr/local/python3/lib/python3.7/site-packages/
[root@master ~]# ln -s /usr/local/python3/bin/python3.7 /usr/bin/python3
[root@master ~]# ln -s /usr/local/python3/bin/pip3.7 /usr/bin/pip3
[root@master ~]# python3 --version
Python 3.7.3
[root@master ~]# pip3 list
Package Version
------------------------ --------------------
absl-py 1.4.0
aiohttp 3.8.4
aiosignal 1.3.1
anyio 3.7.0
async-timeout 4.0.2
asynctest 0.13.0
...... remaining output omitted ......
2.2 Setting up MongoDB
2.2.1 Set up MongoDB
Download the mongoRepo.tar.gz package onto the edge node and extract it:
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/mongoRepo.tar.gz
[root@kubeedge-node ~]# tar -zxvf mongoRepo.tar.gz -C /opt/
[root@kubeedge-node ~]# vi /etc/yum.repos.d/mongo.repo
[mongo]
name=mongo
enabled=1
gpgcheck=0
baseurl=file:///opt/mongoRepo
Install MongoDB on the edge node:
[root@kubeedge-node ~]# yum -y install mongodb*
After the installation, configure mongod:
[root@kubeedge-node ~]# vi /etc/mongod.conf
# find the following fields and modify them
net:
port: 27017
bindIp: 0.0.0.0 # change to 0.0.0.0
After the change, restart the service:
[root@kubeedge-node ~]# systemctl restart mongod && systemctl enable mongod
Verify the service:
[root@kubeedge-node ~]# netstat -lntup |grep 27017
tcp 0 0 0.0.0.0:27017 0.0.0.0:* LISTEN 10195/mongod
If port 27017 is listening, the MongoDB service started successfully.
2.2.2 Create the database
Log in to MongoDB on the edge node and create the database and collections:
[root@kubeedge-node ~]# mongo
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
> use edgesql
switched to db edgesql
> show collections
> db.createCollection("users")
{ "ok" : 1 }
> db.createCollection("ai_data")
{ "ok" : 1 }
> db.createCollection("ai_model")
{ "ok" : 1 }
> show collections
ai_data
ai_model
users
>
# press Ctrl+D to exit
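As an optional sanity check (the user registration microservice later writes to this same collection), you can insert and read back a throwaway document in the users collection:
> db.users.insertOne({"username": "demo", "password": "demo"})
> db.users.find()
> db.users.deleteOne({"username": "demo"})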
2.3 Setting up the H5 front end
ydy_cloudapp_front_dist is the compiled front-end H5 application; it only needs to be served by a web server.
2.3.1 Running the H5 front end on Linux
On the edge node, download and extract the gcc-repo and ydy_cloudapp_front_dist archives, configure the yum repository, and copy the extracted files into the Nginx site directory:
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/gcc-repo.tar.gz
[root@kubeedge-node ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/ydy_cloudapp_front_dist.tar.gz
[root@kubeedge-node ~]# tar -zxvf gcc-repo.tar.gz
[root@kubeedge-node ~]# tar -zxvf ydy_cloudapp_front_dist.tar.gz
[root@kubeedge-node ~]# vi /etc/yum.repos.d/gcc.repo
[gcc]
name=gcc
baseurl=file:///root/gcc-repo
gpgcheck=0
enabled=1
[root@kubeedge-node ~]# yum install -y nginx
[root@kubeedge-node ~]# rm -rf /usr/share/nginx/html/*
[root@kubeedge-node ~]# mv ydy_cloudapp_front_dist/index.html /usr/share/nginx/html/
[root@kubeedge-node ~]# mv ydy_cloudapp_front_dist/static/ /usr/share/nginx/html/
[root@kubeedge-node ~]# vi /etc/nginx/nginx.conf
# Configure the nginx reverse proxy: find the corresponding server block further down in the file and edit it as follows
server {
listen 80;
listen [::]:80;
server_name localhost;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
location ~ ^/cloudedge/(.*) {
proxy_pass http://10.26.17.135:30850/cloudedge/$1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type';
add_header 'Access-Control-Allow-Credentials' 'true';
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
[root@kubeedge-node ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@kubeedge-node ~]# systemctl restart nginx && systemctl enable nginx
The project lives in the ydy_cloudapp_backend_framework directory. First download the package onto the cloud node and extract it:
[root@master ~]# curl -O http://mirrors.douxuedu.com/KubeEdge/python-kubeedge/ydy_cloudapp_backend_framework.tar.gz
[root@master ~]# tar -zxvf ydy_cloudapp_backend_framework.tar.gz
[root@master ~]# cd ydy_cloudapp_backend_framework
Figure 3-1: The imported project
3.2 User management microservice
3.2.1 Overview
The user management microservice is a back-end service for user registration and login. It is built on FastAPI, stores user data in MongoDB, and hashes passwords with bcrypt to keep them secure.
3.2.2 Microservice main program
(1) The FastAPI service entry point is fastapi_user/main.py; start it as follows:
[root@master ydy_cloudapp_backend_framework]# cd /root/ydy_cloudapp_backend_framework/fastapi_user/
[root@master fastapi_user]# python3 main.py &
[1] 22763
[root@master fastapi_user]# INFO: Will watch for changes in these directories: ['/root/ydy_cloudapp_backend_framework/fasta
pi_user']
INFO: Uvicorn running on http://0.0.0.0:8046 (Press CTRL+C to quit)
INFO: Started reloader process [22763] using StatReload
INFO: Started server process [22769]
INFO: Waiting for application startup.
INFO: Application startup complete.
...... the following code is the contents of main.py ......
# ===========================================
# Copyright (C) 2023 Jiangsu One Cloud Technology Development Co. LTD. All Rights Reserved.
# Copyright: Jiangsu One Cloud Technology Development Co., Ltd. All rights reserved.
# ===========================================
# @Author : YiDaoYun
# @Software: Cloud Application Development Software V1.0
from fastapi import FastAPI
from apps.user import manager_user
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(manager_user.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
if __name__ == '__main__':
import uvicorn
uvicorn.run(app='main:app', host='0.0.0.0', port=8046,reload=True)
(2) The actual microservice implementation is fastapi_user/apps/user/manager_user.py; it is started as follows:
[root@master fastapi_user]# cd apps/user/
[root@master user]# python3 manager_user.py &
[2] 24794
...... the code below is shown as the manager_user.py file ......
from fastapi import FastAPI
from apps.user import manager_user
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(manager_user.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
if __name__ == '__main__':
import uvicorn
uvicorn.run(app='main:app', host='0.0.0.0', port=8046,reload=True)
A FastAPI application object is created, and uvicorn starts an HTTP server listening on the given host and port. The program is started with python3 main.py. The static directory is mounted locally so that the FastAPI API docs load quickly; the docs can then be opened at http://<cloud node>:8046/docs, as shown below:
Figure 3-2: User management FastAPI microservice
3.2.3 User registration
The implementation is in fastapi_user/apps/user/manager_user.py. First modify the user registration code; note that the service reloads automatically after the file is saved:
[root@master user]# vi manager_user.py
router = APIRouter()
MongoDBUrl = os.environ.get('MongoDBUrl') if os.environ.get('MongoDBUrl') else "mongodb://10.26.3.179:27017/" # change to the IP of the MongoDB host
client = MongoClient(MongoDBUrl)
db = client["edgesql"]
users_collection = db["users"]
class User(BaseModel):
username: str
password: str
@router.post("/register")
def register_user(user: User):
if users_collection.find_one({"username": user.username}):
raise HTTPException(status_code=400, detail="用户已经注册!")
hashed_password = bcrypt.hashpw(user.password.encode('utf-8'),bcrypt.gensalt())
user_dict = user.dict()
user_dict['password'] = hashed_password.decode('utf-8')
users_collection.insert_one(user_dict)
return {"detail": "用户注册成功!"}
First the User data model is defined as the registration request body, with username and password fields; str means both fields are strings. The /register route is then defined, and the register_user function implements the registration logic.
The password is hashed with the bcrypt library for extra security. If the user is not already registered, users_collection.insert_one inserts the document into the database, completing the registration.
● URL: /register
● Method: POST
● Request body: user information in JSON
{
"username": "your_username",
"password": "your_password"
}
● Response: HTTP status code 200 on successful registration, otherwise HTTP 400 with the error message in the detail field. Back in the browser, expand "/register Register User", click "Try it out", change the two string values, then click "Execute" to test:
Figure 3-3: User registration FastAPI microservice
On the edge node, query the MongoDB database, as shown below:
Figure 3-4: Querying the MongoDB database
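The same registration can also be exercised from the command line. A quick curl sketch (the port 8046 comes from main.py above; the host and credentials are arbitrary examples):
curl -X POST http://<cloud-node-ip>:8046/register -H 'Content-Type: application/json' -d '{"username": "test11", "password": "123456"}'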
3.2.4 User login
The implementation is in fastapi_user/apps/user/manager_user.py; the login code is as follows:
@router.post("/login")
def login_user(user: User):
db_user = users_collection.find_one({"username": user.username})
if not db_user:
raise HTTPException(status_code=400, detail="用户名或密码无效!")
if not bcrypt.checkpw(user.password.encode('utf-8'),db_user['password'].encode('utf-8')):
raise HTTPException(status_code=400, detail="用户名或密码无效!")
return ({"message": "用户登陆成功!"})
FastAPI's router decorator creates a route that handles POST requests at the path /login.
The login_user function implements the login logic. It takes a user parameter passed in the request body, which must match the User model.
users_collection.find_one looks up the user by the provided username; users_collection is the database collection that stores user information. If no matching user is found (db_user is empty), an HTTP 400 exception is raised with the detail "invalid username or password", meaning the username does not exist in the database.
bcrypt.checkpw compares the provided password with the stored hash. If they do not match, an HTTP 400 exception is raised with the same detail; if they match, a login-success message is returned.
● URL: /login
● Method: POST
● Request body: user information in JSON
{
"username": "your_username",
"password": "your_password"
}
● Response: HTTP status code 200 with a login-success message on successful login, otherwise HTTP 400 with the error message in the detail field. Expand "/login Login User", click "Try it out", change the two string values, then click "Execute" to test:
Figure 3-5: User login FastAPI microservice
3.3 Cloud-side management microservice
3.3.1 Overview
The node management microservice is a back-end service that uses the K8S SDK to manage information about the cloud node and edge nodes. Built on FastAPI, it provides a set of APIs for getting node information, getting node resource usage, adding/updating node labels, and deleting node labels.
3.3.2 Microservice main program
Back on the cloud node, the main program is fastapi_cloud/main.py; start it as follows:
[root@master user]# cd /root/ydy_cloudapp_backend_framework/fastapi_cloud
[root@master fastapi_cloud]# python3 main.py &
[3] 15348
[root@master fastapi_cloud]# INFO: Will watch for changes in these directories: ['/root/ydy_cloudapp_backend_framework/fast
api_cloud']
INFO: Uvicorn running on http://0.0.0.0:8070 (Press CTRL+C to quit)
INFO: Started reloader process [15348] using StatReload
INFO: Started server process [15352]
INFO: Waiting for application startup.
INFO: Application startup complete.
...... the following code is the contents of main.py ......
# ===========================================
# Copyright (C) 2023 Jiangsu One Cloud Technology Development Co. LTD. All Rights Reserved.
# Copyright: Jiangsu One Cloud Technology Development Co., Ltd. All rights reserved.
# ===========================================
# @Author : YiDaoYun
# @Software: Cloud Application Development Software V1.0
# Before running the node management microservice, make sure the Kubernetes client is configured.
from fastapi import FastAPI
from apps.node import cloud_node
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(cloud_node.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
if __name__ == '__main__':
import uvicorn
uvicorn.run(app='main:app', host='0.0.0.0', port=8047,reload=True)
A FastAPI application object is created, the CORS middleware and the static file mount are configured, the routes from the custom module are added to the application, and uvicorn starts an HTTP server listening on the given host and port. The program is started with python3 main.py.
The static directory is mounted so that the FastAPI API docs (http://ip/docs and http://ip/redoc) load quickly both offline and online; without it, the docs pages request JavaScript files from foreign CDNs by default and render as a blank page.
3.3.3 Microservice wrapping
The cloud-side management microservice is built on FastAPI and wraps access to Kubernetes resources, including nodes, edge nodes, and node information.
It also wraps the gateway interface for the front end to consume.
3.4 Edge-side management microservice development
3.4.1 Overview
The edge management microservice is a back-end service for managing edge devices and edge applications. It uses FastAPI and the Kubernetes SDK to implement a set of APIs that mainly wrap access to the CRD resources introduced by KubeEdge.
3.4.2 Microservice main program
The main program is fastapi_cloud_edge/main.py; start it as follows:
[root@master fastapi_cloud]# cd /root/ydy_cloudapp_backend_framework/fastapi_cloud_edge/
[root@master fastapi_cloud_edge]# python3 main.py &
[4] 17167
[root@master fastapi_cloud_edge]# INFO: Will watch for changes in these directories: ['/root/ydy_cloudapp_backend_framework
/fastapi_cloud_edge']
INFO: Uvicorn running on http://0.0.0.0:8045 (Press CTRL+C to quit)
INFO: Started reloader process [17167] using StatReload
INFO: Started server process [17184]
INFO: Waiting for application startup.
INFO: Application startup complete.
...... the following code is the contents of main.py ......
# ===========================================
# Copyright (C) 2023 Jiangsu One Cloud Technology Development Co. LTD. All Rights Reserved.
# Copyright: Jiangsu One Cloud Technology Development Co., Ltd. All rights reserved.
# ===========================================
# @Author : YiDaoYun
# @Software: Cloud Application Development Software V1.0
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from apps.device import device_manager
from apps.device import device_model_manager
from apps.pod import pod_manager
from apps.data import data_manager
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(device_model_manager.router)
app.include_router(device_manager.router)
app.include_router(pod_manager.router)
app.include_router(data_manager.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
if __name__ == '__main__':
import uvicorn
uvicorn.run(app='main:app', host='0.0.0.0', port=8045,reload=True)
3.4.3 Microservice wrapping
The edge-side management microservice is built on FastAPI and wraps access to KubeEdge resources, covering DeviceModel, Device, RuleEndpoint, Rule, Pod applications, AI application data, and AI application models.
It also wraps the gateway interface for the front end to consume.
3.5 PCB defect recognition AI microservice
3.5.1 Overview
The PCB defect recognition application is built on YOLOv5 and a pre-trained model; recognition results are sent over MQTT to the edge-side management microservice, which stores them and notifies the cloud side.
3.5.2 Microservice main program
Wrapped with FastAPI; the main program file is fastapi_ai_pcb/main.py. First edit pcb_detect_service.py and comment out the line from pcb_detect import detect_image, then run the service:
[root@master fastapi_cloud_edge]# cd /root/ydy_cloudapp_backend_framework/fastapi_ai_pcb/
[root@master fastapi_ai_pcb]# vi pcb_detect_service.py
from pcb_detect import detect_image # comment out this line
[root@master fastapi_ai_pcb]# python3 main.py &
[5] 22383
[root@master fastapi_ai_pcb]# INFO: Will watch for changes in these directories: ['/root/ydy_cloudapp_backend_framework/fas
tapi_ai_pcb']
INFO: Uvicorn running on http://0.0.0.0:8055 (Press CTRL+C to quit)
INFO: Started reloader process [22383] using StatReload
INFO: Started server process [22403]
INFO: Waiting for application startup.
INFO: Application startup complete.
...... the following is the contents of main.py ......
# ===========================================
# Copyright (C) 2023 Jiangsu One Cloud Technology Development Co. LTD. All Rights Reserved.
# Copyright: Jiangsu One Cloud Technology Development Co., Ltd. All rights reserved.
# ===========================================
# @Author : YiDaoYun
# @Software: Cloud Application Development Software V1.0
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
import pcb_detect_service
app = FastAPI(timeout=60)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(pcb_detect_service.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
if __name__ == '__main__':
import uvicorn
uvicorn.run(app='main:app', host='0.0.0.0', port=8055,reload=True)
3.5.3 PCB recognition AI service
The AI service can be accessed through its API.
See the documentation for the description of the POST interface.
Open http://ai-api.douxuedu.com:8056/docs to access it, as shown below:
Figure 3-6: FastAPI interface
Response format:
{
"model_version": "best",
"image_base64": "this field holds the image converted to base64",
"result_txt": "2 0.67041 0.808411 0.0107422 0.0205607\n1 0.859863 0.571028 0.00878906 0.0242991\n1 0.504395 0.58972 0.0107422 0.0242991\n"
}
Field descriptions:
● model_version: the name of the model;
● image_base64: the recognized image converted to base64;
● result_txt: the coordinates of the detection boxes.
3.5.4 Microservice wrapping
FastAPI wraps the image detection microservice, which calls the PCB recognition AI service (section 3.5.3) and sends the detection results over MQTT to the edge side for storage.
In addition, KubeEdge's cloud-edge communication mechanism is used to push model updates and to receive status updates or device start/stop commands from the cloud.
3.6 Service gateway development
3.6.1 Overview
The service gateway is a back-end service that aggregates and forwards multiple microservices; the H5 front end only talks to the gateway microservice. The gateway is built on the fastapi-gateway open-source framework, and through it the different microservices can be reached. This case study develops directly on the fastapi-gateway source code.
3.6.2 Running the gateway service
The gateway startup code is fastapi_gateway/fastapi_gateway_service/main.py; start it as follows:
[root@master fastapi_ai_pcb]# cd /root/ydy_cloudapp_backend_framework/fastapi_gateway/fastapi_gateway_service/
[root@master fastapi_gateway_service]# python3 main.py &
[5] 25569
[root@master fastapi_gateway_service]# INFO: Started server process [25569]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:30850 (Press CTRL+C to quit)
...... the following is the main.py code ......
app = FastAPI(title="API Gateway")
# microservice addresses / gateway endpoints
router1 = APIRouter(prefix="/cloudedge")
router2 = APIRouter(tags=["Without service path"])
SERVICE_URL = os.environ.get('SERVICE_URL')
SERVICE_USER_URL = os.environ.get('SERVICE_USER_URL')
SERVICE_CLOUD_URL = os.environ.get('SERVICE_CLOUD_URL')
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# ============== User management microservice gateway endpoints
# User registration
@route(
request_method=router1.post,
service_url=SERVICE_USER_URL,
gateway_path="/register",
service_path="/register",
status_code=status.HTTP_200_OK,
override_headers=False,
body_params=["RegisterUser"],
tags=["Edge_User"],
)
async def check_params(RegisterUser: User, request: Request, response: Response):
pass
# User login
@route(
request_method=router1.post,
service_url=SERVICE_USER_URL,
gateway_path="/login",
service_path="/login",
status_code=status.HTTP_200_OK,
override_headers=False,
body_params=["LoginUser"],
tags=["Edge_User"],
)
async def check_params(LoginUser: User, request: Request, response: Response):
pass
......
app.mount("/static", StaticFiles(directory="static"), name="static")
app.include_router(router1)
if __name__ == '__main__':
import uvicorn
uvicorn.run(app, host='0.0.0.0', port=8050)
This code defines the user service routes in the gateway. A FastAPI application is first created as the main gateway application, with the title "API Gateway" that appears in the docs. An APIRouter is then created to handle routes under the /cloudedge prefix; this router carries the microservice endpoints. The SERVICE_*_URL variables hold the addresses of the individual microservices, and @route defines the mapping between a gateway route and a microservice.
The gateway service is finally started with python3 main.py.
The static directory is mounted so that the FastAPI docs (http://ip/docs, http://ip/redoc) load quickly whether offline or online; without it, the docs pages request JavaScript files from foreign CDNs by default and render as a blank page.
Gateway route definition:
@route(
request_method=router1.post,
service_url=SERVICE_USER_URL,
gateway_path="/login",
service_path="/login",
status_code=status.HTTP_200_OK,
override_headers=False,
body_params=["LoginUser"],
tags=["Edge_User"],
)
async def check_params(LoginUser: User, request: Request, response: Response):
pass
● Request method: POST
● Service URL: SERVICE_USER_URL
● Gateway path: /login
● Microservice path: /login
● Return status code: 200 (HTTP 200 OK)
● Override response headers: no (override_headers=False)
● Request body parameter: LoginUser
● Tags: ["Edge_User"]
Purpose: this route handles login. It forwards incoming POST requests to the microservice path /login and returns the microservice's response to the client unchanged.
Parameter descriptions:
● Request method: request_method=router1.post, the route uses HTTP POST.
● Service URL: service_url=SERVICE_USER_URL, the base URL of the microservice, used to build the full microservice path.
● Gateway path: gateway_path="/login", the path on the gateway that clients call.
● Microservice path: service_path="/login", the path the microservice listens on for login requests.
● Return status code: status_code=status.HTTP_200_OK, the gateway returns HTTP 200 OK to the client when the microservice succeeds.
● Override response headers: override_headers=False, the gateway does not modify the response headers returned by the microservice; they are passed straight through to the client.
● Tags: tags=["Edge_User"], used to group the route in the API docs.
If the microservice needs a request body, define the body_params parameter in the route; if the microservice takes additional parameters, define query_params in the route.
For example, if the microservice API is /xxx/{xxx}, the gateway route can use query_params=["xxx"].
If the microservice code defines a request model, create a model.py file in the gateway directory, put the model definitions there, and import them in the route:
● model.py:
class User(BaseModel):
username: str
password: str
● main.py:
# import the stored model from the model file
from models import User
body_params=["LoginUser"]
async def check_params(LoginUser: User, request: Request, response: Response):
pass
# User is the name of the model; LoginUser must match the name given in body_params
3.6.3 Service gateway endpoints
Once started, open http://<cloud node address>:30850/docs to access the gateway. Expand "/cloudedge/login Check Params", click "Try it out", modify the two string values, then click "Execute", as shown below:
Figure 3-7: User login endpoint
3.6.4 Front-end login
Before logging in through the front end, make sure SELinux and the firewall are turned off on all nodes, and add a hosts mapping in the desktop environment:
[root@desktop ~/Desktop]# vi /etc/hosts
# Kubernetes-managed hosts file.
10.26.17.186 localedge # add this mapping; the address is the node running the gateway service
Once configured, open http://<front-end node IP>/index.html. The username and password are test11/123456, as shown below:
Figure 3-8: Front-end login page
Figure 3-9: Front end after logging in from the browser