Personal notes by 流域非

Posted by yd_298865073 on 2024/11/06 20:23:28

Deploy the MongoDB database service

Extract mongodb-repo.tar.gz

tar -zxvf ~/mongodb-repo.tar.gz 

Check the extracted files

ls 
brotli-1.0.7-5.el7.x86_64.rpm            libX11-common-1.6.7-4.el7_9.noarch.rpm
createrepo-0.9.9-28.el7.noarch.rpm       libXau-1.0.8-2.1.el7.x86_64.rpm
deltarpm-3.6-3.el7.x86_64.rpm            libxcb-1.13-1.el7.x86_64.rpm
gcc-c++-4.8.5-44.el7.x86_64.rpm          libXext-1.3.3-3.el7.x86_64.rpm
GraphicsMagick-1.3.38-1.el7.x86_64.rpm   mongodb-org-4.0.28-1.el7.x86_64.rpm
jasper-libs-1.900.1-33.el7.x86_64.rpm    mongodb-org-mongos-4.0.28-1.el7.x86_64.rpm
jbigkit-libs-2.0-11.el7.x86_64.rpm       mongodb-org-server-4.0.28-1.el7.x86_64.rpm
lcms2-2.6-3.el7.x86_64.rpm               mongodb-org-shell-4.0.28-1.el7.x86_64.rpm
libICE-1.0.9-9.el7.x86_64.rpm            mongodb-org-tools-4.0.28-1.el7.x86_64.rpm
libjpeg-turbo-1.2.90-8.el7.x86_64.rpm    nodejs-12.22.12-1nodesource.x86_64.rpm
libSM-1.2.2-2.el7.x86_64.rpm             nodejs-16.15.0-3.el7.x86_64.rpm
libstdc++-devel-4.8.5-44.el7.x86_64.rpm  nodejs-libs-16.15.0-3.el7.x86_64.rpm
libtiff-4.0.3-35.el7.x86_64.rpm          openssl11-1.1.1k-3.el7.x86_64.rpm
libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm   openssl11-libs-1.1.1k-3.el7.x86_64.rpm
libuv-1.44.2-1.el7.x86_64.rpm            python-deltarpm-3.6-3.el7.x86_64.rpm
libwebp-0.3.0-10.el7_9.x86_64.rpm        repodata
libwmf-lite-0.2.8.4-44.el7.x86_64.rpm    urw-base35-fonts-legacy-20170801-10.el7.noarch.rpm
libX11-1.6.7-4.el7_9.x86_64.rpm

Install from the RPM files

yum install -y brotli-1.0.7-5.el7.x86_64.rpm \
                   libX11-1.6.7-4.el7_9.x86_64.rpm \
                   libX11-common-1.6.7-4.el7_9.noarch.rpm \
                   libXau-1.0.8-2.1.el7.x86_64.rpm \
                   libxcb-1.13-1.el7.x86_64.rpm \
                   libXext-1.3.3-3.el7.x86_64.rpm \
                   gcc-c++-4.8.5-44.el7.x86_64.rpm \
                   GraphicsMagick-1.3.38-1.el7.x86_64.rpm \
                   jasper-libs-1.900.1-33.el7.x86_64.rpm \
                   jbigkit-libs-2.0-11.el7.x86_64.rpm \
                   lcms2-2.6-3.el7.x86_64.rpm \
                   libICE-1.0.9-9.el7.x86_64.rpm \
                   libjpeg-turbo-1.2.90-8.el7.x86_64.rpm \
                   libSM-1.2.2-2.el7.x86_64.rpm \
                   libstdc++-devel-4.8.5-44.el7.x86_64.rpm \
                   libtiff-4.0.3-35.el7.x86_64.rpm \
                   libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm \
                   libuv-1.44.2-1.el7.x86_64.rpm \
                   libwebp-0.3.0-10.el7_9.x86_64.rpm \
                   libwmf-lite-0.2.8.4-44.el7.x86_64.rpm \
                   openssl11-1.1.1k-3.el7.x86_64.rpm \
                   openssl11-libs-1.1.1k-3.el7.x86_64.rpm
yum install -y mongodb-org-4.0.28-1.el7.x86_64.rpm \
                   mongodb-org-mongos-4.0.28-1.el7.x86_64.rpm \
                   mongodb-org-server-4.0.28-1.el7.x86_64.rpm \
                   mongodb-org-shell-4.0.28-1.el7.x86_64.rpm \
                   mongodb-org-tools-4.0.28-1.el7.x86_64.rpm

# Start the MongoDB service

systemctl start mongod 

# Enable MongoDB to start on boot

 systemctl enable mongod 

# Check the MongoDB service status

 systemctl status mongod

Connect to MongoDB

mongo

Replica set (primary/secondary) database management

Modify the configuration file on all three servers

vim /etc/mongod.conf
# Change the bind IP to 0.0.0.0 and add the replication section:
replication:
  replSetName: "cloud"
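For reference, a minimal sketch of the relevant /etc/mongod.conf fragments after this change (the port and any other existing settings are assumed to stay as they are; the same change is made on all three servers):

net:
  port: 27017
  bindIp: 0.0.0.0        # listen on all interfaces so the other replica set members can connect

replication:
  replSetName: "cloud"   # identical replica set name on every member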

Restart the service

sudo systemctl restart mongod

Initialize the replica set

mongo    # enter the mongo shell
rs.initiate({
  _id: "cloud",
  members: [
    { _id: 0, host: "172.16.2.128:27017" },
    { _id: 1, host: "172.16.2.76:27017" }
  ]
})

Verify the replica set configuration

rs.status()      # run in the mongo shell

Verify the secondary nodes

rs.conf()        # run in the mongo shell
rs.isMaster()    # confirm which member is primary

1. Install dependencies

Install Node.js

Based on the provided files, install Node.js version 12.22.12:

yum install -y nodejs-12.22.12-1nodesource.x86_64.rpm

Install the other dependencies

yum install -y gcc-c++ make
yum install -y epel-release  GraphicsMagick
npm config set registry https://registry.npmmirror.com/
npm config set ELECTRON_MIRROR https://cdn.npmmirror.com/dist/electron/

Deploy

Extract rocketchat-cloud.tar.gz and enter the directory:

tar -xzvf rocketchat-cloud.tar.gz
cd rocketchat-cloud

Install the Rocket.Chat dependencies:

cd bundle/programs/server/
sudo npm install
(MongoDB itself was already installed in the earlier section.)

# Move the bundle to /opt and rename it Rocket.Chat

mv bundle /opt/Rocket.Chat

# Add a user

useradd -M rocketchat && usermod -L rocketchat

# Grant ownership

chown -R rocketchat:rocketchat /opt/Rocket.Chat

  1. Verify the replica set name:
    • Connect to the MongoDB instance to determine the current replica set name. First, connect to MongoDB:
mongo --host 172.16.2.76 --port 27017
    • Once in the MongoDB shell, check the replica set status:
rs.status()
    • Find the set field in the output; it is the name of the current replica set.

Check the node path

which node

vi /lib/systemd/system/rocketchat.service

[Unit]
Description=The Rocket.Chat server
After=network.target remote-fs.target nss-lookup.target nginx.service mongod.service
[Service]
ExecStart=/usr/local/node/bin/node  /opt/Rocket.Chat/main.js
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=rocketchat
User=rocketchat
Environment=MONGO_URL=mongodb://192.168.1.182:27017/rocketchat?replicaSet=<set field value>
Environment=MONGO_OPLOG_URL=mongodb://192.168.1.182:27017/local?replicaSet=<set field value>
Environment=ROOT_URL=http://localhost:3000/ PORT=3000
[Install]
WantedBy=multi-user.target

systemctl start rocketchat

systemctl status rocketchat

Task 1: Install chkrootkit and scan the system

1. Create a cloud server

On Huawei Cloud, create an x86 cloud server using the CentOS 7.9 image.

2. Install chkrootkit

Assume the makechk.tar.gz file has already been downloaded to the host.

# Extract makechk.tar.gz
tar -xzvf makechk.tar.gz
cd makechk

# Build and install chkrootkit
make sense
sudo cp chkrootkit /usr/local/bin

3. Scan the system and save the log

# Create the log directory
sudo mkdir -p /var/log/chkrootkit

# Run chkrootkit and save the scan results
sudo /usr/local/bin/chkrootkit > /var/log/chkrootkit/chkrootkit.log

4. Fix vulnerabilities

Review the scan results and fix any issues found. Typically this means manually inspecting and removing suspicious files, updating system packages, and checking network connections and configuration.
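For example, a quick way to pull only the flagged findings out of the scan log saved above:

grep INFECTED /var/log/chkrootkit/chkrootkit.log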

5. Submit the information

Make sure the services on the host are running normally, then submit the following:

Username: <your username>
Password: <your password>
Public IP address: <your public IP>

Task 2: Install ELK and add data

1. Create a cloud server

On Huawei Cloud, create an x86 cloud server using the CentOS 7.9 image.

2. Configure the YUM repository and install Docker

# Configure the Docker repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker
sudo yum install -y docker-ce docker-ce-cli containerd.io

# Start and enable the Docker service
sudo systemctl start docker
sudo systemctl enable docker

3. Install the ELK service

Assume the sepb_elk_latest.tar file has already been downloaded to the host.

# Load the Docker image
sudo docker load < sepb_elk_latest.tar

# Start the ELK service
sudo docker run -d --name elk -p 5601:5601 -p 9200:9200 -p 5044:5044 <image ID>

4. Add data

Download the RPM packages required on the monitored target node to the /root directory of the local host. Assume the RPM packages are already there.

# The RPM packages are assumed to be in /root
ls /root/*.rpm

Deploy Helm

Step 1: Create the ChartMuseum namespace

Create the chartmuseum namespace in the Kubernetes cluster with the following command:

kubectl create namespace chartmuseum

Step 2: Write a YAML file to deploy the ChartMuseum service

Create a file named chartmuseum-deployment.yaml that defines the ChartMuseum Deployment and Service.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chartmuseum
  template:
    metadata:
      labels:
        app: chartmuseum
    spec:
      containers:
        - name: chartmuseum
          image: chartmuseum/chartmuseum:latest
          ports:
            - containerPort: 8080
          env:
            - name: STORAGE
              value: "local"
            - name: STORAGE_LOCAL_ROOTDIR
              value: "/chartmuseum"
          volumeMounts:
            - name: chartmuseum-storage
              mountPath: /chartmuseum
      volumes:
        - name: chartmuseum-storage
          emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: chartmuseum

Deploy ChartMuseum

Deploy ChartMuseum with the following command:

kubectl apply -f chartmuseum-deployment.yaml

Step 3: Install Helm

  1. Download Helm: first download and extract the Helm 3.3.0 archive:
wget https://get.helm.sh/helm-v3.3.0-linux-amd64.tar.gz
tar -zxvf helm-v3.3.0-linux-amd64.tar.gz
  2. Move Helm onto the PATH
sudo mv linux-amd64/helm /usr/local/bin/helm

If you lack the required permission, Helm can instead be placed in ~/bin under your home directory:

mkdir -p ~/bin
mv linux-amd64/helm ~/bin/
echo 'export PATH=$PATH:~/bin' >> ~/.bashrc
source ~/.bashrc
  3. Verify the Helm installation
helm version

Step 4: Connect to the kcloud cluster

To connect to the kcloud cluster you need a kubeconfig file containing the connection credentials. It typically includes:

  • Username
  • Password
  • The cluster's public IP address

The required connection details can be found in the Kubernetes console. Briefly:

  1. Get the cluster's public IP address: the external IPs of the cluster nodes can be listed with:
kubectl get nodes -o wide
  2. Get the connection username and password: these are usually provided when the cluster is created, or can be found in the console's "Access Management" section.

Create a namespace

kubectl create ns <namespace name>

chartmuseum.yaml

apiVersion: v1
kind: Pod
metadata:
  name: chartmuseum
  namespace: chartmuseum 
  labels:
    app: chartmuseum 
spec:
  containers:
  - image: chartmuseum/chartmuseum:latest
    name: chartmuseum
    ports:
    - containerPort: 8080
      protocol: TCP
    env:
    - name: DEBUG
      value: "1"
    - name: STORAGE
      value: local
    - name: STORAGE_LOCAL_ROOTDIR
      value: /charts
    volumeMounts:
    - name: chartspath
      mountPath: /charts
  volumes:
  - name: chartspath
    hostPath:
      path: /data/charts

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
  labels:
    app: chartmuseum
spec:
  selector:
    app: chartmuseum
  type: ClusterIP 
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080 

Command to deploy the repository

kubectl apply -f  chartmuseum.yaml  -f service.yaml 
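A quick check that the Pod and Service defined above actually came up in the chartmuseum namespace:

kubectl get pod,svc -n chartmuseum
kubectl describe pod chartmuseum -n chartmuseum    # inspect events here if the Pod is not Running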

First, extract the chart package so that its contents can be modified.

tar -xzf wordpress-13.0.23.tgz
cd wordpress

3. Create a PersistentVolume (PV)

Assuming the wordpress chart already includes a PVC, we need to create a PV manually for it to bind to. Adjust the PersistentVolume configuration below as needed:

Create a file named wordpress-pv.yaml with the following content:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Apply the PV configuration:

kubectl apply -f wordpress-pv.yaml


Edit the values.yaml file and change service.type to NodePort:

service:
  type: NodePort
  port: 80
  nodePort: 30080  # specify a particular NodePort or let Kubernetes assign one

Then repackage the chart:

cd ..
helm package wordpress

1. Check the certificate and key paths

Make sure the client-certificate and client-key paths are correct. If the certfile.cert and keyfile.key files you provide use relative paths, make sure they are relative to the current working directory, or use absolute paths.

For example, if the files are under ~/.kube/, the config can be changed like this:

users:
- name: CloudShell
  user:
    client-certificate: ~/.kube/certfile.cert
    client-key: ~/.kube/keyfile.key
# Create a namespace (optional)
kubectl create namespace wordpress

# Deploy WordPress using the modified chart package
helm install my-wordpress ./wordpress-13.0.23.tgz --namespace wordpress
# Check the deployment status
kubectl get all -n wordpress

# Get the WordPress service's NodePort
kubectl get svc -n wordpress

Install the Huawei Cloud SDK dependencies

# Elastic Cloud Server (ECS)
pip install huaweicloudsdkecs

# Virtual Private Cloud (VPC)
pip install huaweicloudsdkvpc

# Image Management Service (IMS)
pip install huaweicloudsdkims

# Cloud Container Engine (CCE)
pip install huaweicloudsdkcce

# Relational Database Service (RDS)
pip install huaweicloudsdkrds
# Or install everything at once
pip install huaweicloudsdkall

Key pair — Python

import huaweicloudsdkcore.auth.credentials as credentials
import huaweicloudsdkcore.exceptions as exceptions
import huaweicloudsdkcore.http.http_config as http_config
from huaweicloudsdkecs.v2 import *
from huaweicloudsdkecs.v2.region.ecs_region import EcsRegion

def create_keypair(client, keypair_name):
    try:
        # Check if the keypair already exists
        list_request = ListKeypairsRequest()
        list_response = client.list_keypairs(list_request)
        keypairs = list_response.keypairs

        for keypair in keypairs:
            if keypair.keypair.name == keypair_name:
                # Delete the existing keypair
                delete_request = DeleteKeypairRequest()
                delete_request.body = DeleteKeypairRequestBody(name=keypair_name)
                client.delete_keypair(delete_request)
                print(f"Deleted existing keypair: {keypair_name}")

        # Create a new keypair
        create_request = CreateKeypairRequest()
        create_request.body = CreateKeypairRequestBody()
        create_request.body.name = keypair_name
        create_response = client.create_keypair(create_request)
        print(f"Created keypair: {create_response.keypair}")

    except exceptions.ClientRequestException as e:
        print(f"Error: {e.status_code}, {e.error_msg}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

def main():
    # Replace with your actual AK/SK
    ak = "your-access-key"
    sk = "your-secret-key"

    # Replace with the region you are using
    region = EcsRegion.value_of("your-region")

    # Initialize the credentials
    auth = credentials.BasicCredentials(ak, sk)

    # Initialize HTTP configuration
    config = http_config.HttpConfig.get_default_config()
    config.ignore_ssl_verification = True

    # Initialize the ECS client
    client = EcsClient.new_builder() \
        .with_http_config(config) \
        .with_credentials(auth) \
        .with_region(region) \
        .build()

    # Keypair name
    keypair_name = "chinaskills_keypair"

    # Create the keypair
    create_keypair(client, keypair_name)

if __name__ == "__main__":
    main()

ak: your Access Key

sk: your Secret Key

your-region: the region you are using, for example cn-north-4
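As a side note, the AK/SK do not have to be hardcoded in the script; a minimal sketch that reads them from environment variables instead (the variable names HUAWEICLOUD_AK and HUAWEICLOUD_SK are arbitrary names chosen here for illustration):

import os
import huaweicloudsdkcore.auth.credentials as credentials

# Read the access key and secret key from the environment instead of the source file
ak = os.environ["HUAWEICLOUD_AK"]   # hypothetical variable name
sk = os.environ["HUAWEICLOUD_SK"]   # hypothetical variable name
auth = credentials.BasicCredentials(ak, sk)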

Cloud disk (EVS) — Python

Call the SDK's cloud disk management methods to implement create, delete, query and update operations.
In the /root/huawei directory, write a create_block_store.py file that uses the SDK to create a Huawei Cloud EVS disk, with the following requirements:
(1) Disk availability zone: cn-north-4a
(2) Disk name: chinaskills_volume
(3) Disk type and size: Ultra-high I/O, 100 GB
(4) Enable disk sharing (multi-attach)
(5) Enable disk encryption, using the default KMS key
(6) If the disk already exists, the code must delete it first
(7) Output the disk's details (its status must be available)
When finished, submit the cloud server node's username, password and IP address in the answer box.

Create a cloud server (ECS) — Python

import argparse
import json
import time
import yaml
import huaweicloudsdkcore.auth.credentials as credentials
import huaweicloudsdkcore.exceptions as exceptions
import huaweicloudsdkcore.http.http_config as http_config
from huaweicloudsdkecs.v2 import *
from huaweicloudsdkecs.v2.region.ecs_region import EcsRegion

def init_client():
    # Replace with your actual AK/SK and region
    ak = "your-access-key"
    sk = "your-secret-key"
    region = "cn-north-4"

    auth = credentials.BasicCredentials(ak, sk)
    config = http_config.HttpConfig.get_default_config()
    config.ignore_ssl_verification = True

    client = EcsClient.new_builder() \
        .with_http_config(config) \
        .with_credentials(auth) \
        .with_region(EcsRegion.value_of(region)) \
        .build()
    
    return client

def create_instance(client, instance_info):
    try:
        server_name = instance_info['name']
        image_id = instance_info['imagename']

        # Create ECS instance
        create_request = NovaCreateServersRequest()
        create_request.body = NovaCreateServersRequestBody(
            server=NovaCreateServersOption(
                name=server_name,
                imageRef=image_id,
                flavorRef="s2.small.1",
                availability_zone="cn-north-4a",
                networks=[NovaServerNetwork(id="your-network-id")],
                security_groups=[NovaServerSecurityGroup(name="default")]
            )
        )
        create_response = client.nova_create_servers(create_request)
        server_id = create_response.server.id

        # Wait for the instance to become ACTIVE, pausing between polls
        while True:
            show_request = ShowServerRequest(server_id)
            show_response = client.show_server(show_request)
            if show_response.server.status == "ACTIVE":
                print(json.dumps(show_response.server.to_dict(), indent=4))
                break
            time.sleep(5)

    except exceptions.ClientRequestException as e:
        print(f"Error: {e.status_code}, {e.error_msg}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

def get_instance(client, name, output_file=None):
    try:
        list_request = ListServersDetailsRequest()
        list_response = client.list_servers_details(list_request)
        servers = list_response.servers

        for server in servers:
            if server.name == name:
                server_info = json.dumps(server.to_dict(), indent=4)
                if output_file:
                    with open(output_file, 'w') as f:
                        f.write(server_info)
                else:
                    print(server_info)
                return

        print(f"No server with name {name} found.")

    except exceptions.ClientRequestException as e:
        print(f"Error: {e.status_code}, {e.error_msg}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

def get_all_instances(client, output_file=None):
    try:
        list_request = ListServersDetailsRequest()
        list_response = client.list_servers_details(list_request)
        servers = list_response.servers

        servers_info = [server.to_dict() for server in servers]
        output = yaml.dump(servers_info, default_flow_style=False)

        if output_file:
            with open(output_file, 'w') as f:
                f.write(output)
        else:
            print(output)

    except exceptions.ClientRequestException as e:
        print(f"Error: {e.status_code}, {e.error_msg}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

def delete_instance(client, name):
    try:
        list_request = ListServersDetailsRequest()
        list_response = client.list_servers_details(list_request)
        servers = list_response.servers

        for server in servers:
            if server.name == name:
                delete_request = DeleteServerRequest(server.id)
                client.delete_server(delete_request)
                print(f"Deleted server with name {name}")
                return

        print(f"No server with name {name} found.")
    
    except exceptions.ClientRequestException as e:
        print(f"Error: {e.status_code}, {e.error_msg}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")

def main():
    parser = argparse.ArgumentParser(description='ECS Manager')
    subparsers = parser.add_subparsers(dest='command')

    # Create instance command
    create_parser = subparsers.add_parser('create', help='Create an ECS instance')
    create_parser.add_argument('-i', '--input', required=True, help='JSON formatted instance info')

    # Get instance command
    get_parser = subparsers.add_parser('get', help='Get an ECS instance')
    get_parser.add_argument('-n', '--name', required=True, help='Instance name')
    get_parser.add_argument('-o', '--output', help='Output file')

    # Get all instances command
    get_all_parser = subparsers.add_parser('getall', help='Get all ECS instances')
    get_all_parser.add_argument('-o', '--output', help='Output file')

    # Delete instance command
    delete_parser = subparsers.add_parser('delete', help='Delete an ECS instance')
    delete_parser.add_argument('-n', '--name', required=True, help='Instance name')

    args = parser.parse_args()
    client = init_client()

    if args.command == 'create':
        instance_info = json.loads(args.input)
        create_instance(client, instance_info)
    elif args.command == 'get':
        get_instance(client, args.name, args.output)
    elif args.command == 'getall':
        get_all_instances(client, args.output)
    elif args.command == 'delete':
        delete_instance(client, args.name)
    else:
        parser.print_help()

if __name__ == "__main__":
    main()

Create an ECS instance:

python3 /root/huawei/ecs_manager.py create --input '{ "name": "chinaskill001", "imagename": "your-image-id"}'

Query an ECS instance by name:

python3 /root/huawei/ecs_manager.py get --name "chinaskill001" --output instance_info.json

Query all ECS instances:

python3 /root/huawei/ecs_manager.py getall --output all_instances.yaml

Delete an ECS instance by name:

python3 /root/huawei/ecs_manager.py delete --name "chinaskill001"

VPC — Python (FastAPI)

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import huaweicloudsdkcore.auth.credentials as credentials
import huaweicloudsdkcore.exceptions as exceptions
import huaweicloudsdkvpc.v2 as vpc
from huaweicloudsdkvpc.v2.region.vpc_region import VpcRegion

app = FastAPI()

# Replace with your actual AK/SK
ak = "your-access-key"
sk = "your-secret-key"
region = "cn-north-4"

auth = credentials.BasicCredentials(ak, sk)
client = vpc.VpcClient.new_builder() \
    .with_credentials(auth) \
    .with_region(VpcRegion.value_of(region)) \
    .build()

class VpcCreate(BaseModel):
    name: str
    cidr: str

class VpcUpdate(BaseModel):
    new_name: str
    old_name: str

class VpcDelete(BaseModel):
    vpc_name: str

@app.post("/cloud_vpc/create_vpc")
async def create_vpc(vpc_details: VpcCreate):
    try:
        request = vpc.CreateVpcRequest()
        request.body = vpc.CreateVpcRequestBody(
            vpc=vpc.CreateVpcOption(
                name=vpc_details.name,
                cidr=vpc_details.cidr
            )
        )
        response = client.create_vpc(request)
        return response.to_dict()
    except exceptions.ClientRequestException as e:
        raise HTTPException(status_code=e.status_code, detail=e.error_msg)

@app.get("/cloud_vpc/vpc/{vpc_name}")
async def get_vpc(vpc_name: str):
    try:
        request = vpc.ListVpcsRequest()
        response = client.list_vpcs(request)
        for vpc_item in response.vpcs:
            if vpc_item.name == vpc_name:
                return vpc_item.to_dict()
        raise HTTPException(status_code=404, detail="VPC not found")
    except exceptions.ClientRequestException as e:
        raise HTTPException(status_code=e.status_code, detail=e.error_msg)

@app.get("/cloud_vpc/vpc")
async def get_all_vpcs():
    try:
        request = vpc.ListVpcsRequest()
        response = client.list_vpcs(request)
        return [vpc_item.to_dict() for vpc_item in response.vpcs]
    except exceptions.ClientRequestException as e:
        raise HTTPException(status_code=e.status_code, detail=e.error_msg)

@app.put("/cloud_vpc/update_vpc")
async def update_vpc(vpc_update: VpcUpdate):
    try:
        request = vpc.ListVpcsRequest()
        response = client.list_vpcs(request)
        for vpc_item in response.vpcs:
            if vpc_item.name == vpc_update.old_name:
                update_request = vpc.UpdateVpcRequest(vpc_item.id)
                update_request.body = vpc.UpdateVpcRequestBody(
                    vpc=vpc.UpdateVpcOption(
                        name=vpc_update.new_name
                    )
                )
                update_response = client.update_vpc(update_request)
                return update_response.to_dict()
        raise HTTPException(status_code=404, detail="VPC not found")
    except exceptions.ClientRequestException as e:
        raise HTTPException(status_code=e.status_code, detail=e.error_msg)

@app.delete("/cloud_vpc/delete_vpc")
async def delete_vpc(vpc_delete: VpcDelete):
    try:
        request = vpc.ListVpcsRequest()
        response = client.list_vpcs(request)
        for vpc_item in response.vpcs:
            if vpc_item.name == vpc_delete.vpc_name:
                delete_request = vpc.DeleteVpcRequest(vpc_item.id)
                client.delete_vpc(delete_request)
                return {"detail": "VPC deleted successfully"}
        raise HTTPException(status_code=404, detail="VPC not found")
    except exceptions.ClientRequestException as e:
        raise HTTPException(status_code=e.status_code, detail=e.error_msg)

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=7045)

Start with the following command

uvicorn main:app --host 0.0.0.0 --port 7045
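Once the service is listening on port 7045, the endpoints defined above can be exercised with curl, for example:

# create a VPC (the name and CIDR are example values)
curl -X POST http://localhost:7045/cloud_vpc/create_vpc \
     -H "Content-Type: application/json" \
     -d '{"name": "vpc-demo", "cidr": "192.168.0.0/16"}'

# query a single VPC by name
curl http://localhost:7045/cloud_vpc/vpc/vpc-demo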

Install kubectl

/etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

yum install -y kubectl-1.25.1

Note: the kubectl version must match the cluster's version.

Configure kubectl

mkdir -p $HOME/.kube
mv -f kubeconfig.json $HOME/.kube/config
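With the kubeconfig in place, connectivity to the cluster can be verified before continuing:

kubectl cluster-info
kubectl get nodes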

mu-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: mu-pod
  namespace: default
spec:
  containers:
    - name: containers01
      image: nginx
      ports:
        - name: http
          containerPort: 80
    - name: containers02
      image: tomcat
      ports:
        - name: tomcat
          containerPort: 80

my-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: test

vi secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: default
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
type: Opaque

cat mariadbnamespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: mariadb
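A minimal sketch of applying and checking the manifests above (file names as used in this section):

kubectl apply -f my-namespace.yaml -f mariadbnamespace.yaml
kubectl apply -f mu-pod.yaml -f secret.yaml
kubectl get pod mu-pod -n default
kubectl get secret mysecret -n default
kubectl get ns test mariadb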

[Task 13] chartmuseum repository deployment [1 point]

Create a chartmuseum namespace in the k8s cluster, and write a YAML file that uses the chartmuseum:latest image to create a local private chart repository in the chartmuseum namespace, with the repository storage directory set to the host's /data/charts directory. Write a service.yaml file that creates a Service access policy for the private chart repository, defined with the ClusterIP access mode. After writing the files, start the chartmuseum service. Submit the username, password and public IP address for connecting to the kcloud cluster node in the answer box.

Checking that the chartmuseum service responds correctly is worth 1 point.

apiVersion: v1
kind: Namespace
metadata:
  name: chartmuseum
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: chartmuseum
  name: chartmuseum
  namespace: chartmuseum
spec:
  replicas: 1
  selector:
    matchLabels:
      app: chartmuseum
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: chartmuseum
    spec:
      containers:
      - image: chartmuseum/chartmuseum:latest
        imagePullPolicy: IfNotPresent
        name: chartmuseum
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: DEBUG
          value: "1"
        - name: STORAGE
          value: local
        - name: STORAGE_LOCAL_ROOTDIR
          value: /charts
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 64Mi
        volumeMounts:
        - mountPath: /charts
          name: charts-volume
      volumes:
      - name: charts-volume
        nfs:
          path: /data/charts
          server: 192.168.200.10
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: chartmuseum
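After saving the manifest (for example as chartmuseum.yaml), apply and check it:

kubectl apply -f chartmuseum.yaml
kubectl get pod,svc -n chartmuseum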


[Task 14] Private repository management [2 points]

On the master node, add the local private chart repository you built as a repository source named chartmuseum, and upload the wordpress-13.0.23.tgz package to the chartmuseum private repository. The local chart repository source can then be used to deploy applications. When finished, submit the username, password and public IP address for connecting to the kcloud cluster node in the answer box.

Checking that wordpress-13.0.23 exists in the chartmuseum repository source is worth 2 points.

# Grant 777 permissions on /data/charts

chmod 777 /data/charts/

# Check the svc

[root@kcloud-server ~]# kubectl get svc -n chartmuseum
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
chartmuseum   ClusterIP   10.247.199.133   <none>        8080/TCP   24m

# Add the local repository source, named chartmuseum

[root@kcloud-server ~]# helm repo add chartmuseum http://10.247.199.133:8080
"chartmuseum" has been added to your repositories
[root@kcloud-server ~]# helm repo list
NAME          URL
chartmuseum   http://10.247.199.133:8080

# Upload the wordpress-13.0.23.tgz package to the chartmuseum private repository

[root@kcloud-server ~]# curl --data-binary "@wordpress-13.0.23.tgz" http://10.247.199.133:8080/api/charts
{"saved":true}[root@kcloud-server ~]#

# Update the repository

[root@kcloud-server ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "chartmuseum" chart repository
Update Complete. ⎈ Happy Helming!⎈

# List the chart

[root@kcloud-server ~]# helm search repo wordpress
NAME                    CHART VERSION   APP VERSION   DESCRIPTION
chartmuseum/wordpress   13.0.23         5.9.2         WordPress is the world's most popular blogging ...

# Check the /data/charts/ directory

[root@kcloud-server charts]# ls
index-cache.yaml  wordpress-13.0.23.tgz
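To confirm the repository can actually serve charts, an application can be installed straight from the chartmuseum source added above (the release name here is only an example):

helm install my-wordpress chartmuseum/wordpress --version 13.0.23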

Cloud server security system

# Purchase a CentOS 7.9 cloud server

# Upload the makechk.tar.gz and chkrootkit.tar.gz packages

# Extract makechk.tar.gz

# Configure the yum repository

[root@ecs-cecc ~]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///root/makechk
gpgcheck=0
enabled=1

[root@ecs-cecc ~]# yum makecache

# Install the build dependencies

[root@ecs-cecc packages]# cd /root/ && yum install -y gcc gcc-c++ make glibc*

# Extract chkrootkit.tar.gz

# List the directory contents

[root@ecs-cecc ~]# cd chkrootkit-0.55/
[root@ecs-cecc chkrootkit-0.55]# ls
ACKNOWLEDGMENTS  chkdirs.c     chkproc.c   chkrootkit.lsm  chkwtmp.c  ifpromisc.c  patch   README.chklastlog  strings.c
check_wtmpx.c    chklastlog.c  chkrootkit  chkutmp.c       COPYRIGHT  Makefile     README  README.chkwtmp

# Build and install

[root@ecs-cecc chkrootkit-0.55]# make sense
cc -DHAVE_LASTLOG_H -o chklastlog chklastlog.c
cc -DHAVE_LASTLOG_H -o chkwtmp chkwtmp.c
cc -DHAVE_LASTLOG_H -D_FILE_OFFSET_BITS=64 -o ifpromisc ifpromisc.c
cc -o chkproc chkproc.c
cc -o chkdirs chkdirs.c
cc -o check_wtmpx check_wtmpx.c
cc -static -o strings-static strings.c
cc -o chkutmp chkutmp.c

# Add chkrootkit to the PATH

[root@ecs-cecc ~]# cp -r chkrootkit-0.55/ /usr/local/chkrootkit
[root@ecs-cecc ~]# cd /usr/local/chkrootkit/
[root@ecs-cecc chkrootkit]# ls
ACKNOWLEDGMENTS  chkdirs     chklastlog.c  chkrootkit      chkutmp.c  COPYRIGHT    Makefile  README.chklastlog  strings-static
check_wtmpx      chkdirs.c   chkproc       chkrootkit.lsm  chkwtmp    ifpromisc    patch     README.chkwtmp
check_wtmpx.c    chklastlog  chkproc.c     chkutmp         chkwtmp.c  ifpromisc.c  README    strings.c
[root@ecs-cecc chkrootkit]# cp chkrootkit /usr/bin/

# Check the version

[root@ecs-cecc chkrootkit]# chkrootkit -V
chkrootkit version 0.55

# Create the /var/log/chkrootkit/chkrootkit.log file

[root@ecs-cecc ~]# mkdir /var/log/chkrootkit/
[root@ecs-cecc ~]# touch /var/log/chkrootkit/chkrootkit.log

# Scan the system and save the results to /var/log/chkrootkit/chkrootkit.log

[root@ecs-cecc ~]# chkrootkit > /var/log/chkrootkit/chkrootkit.log

# View the scan results

[root@ecs-cecc ~]# cat /var/log/chkrootkit/chkrootkit.log

Log analysis

# Upload docker-repo.tar.gz and sepb_elk_latest.tar

# Extract docker-repo.tar.gz

# Configure the yum repository and install Docker

[root@ecs-cecc ~]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///opt/docker-repo
gpgcheck=0
enabled=1

[root@ecs-cecc ~]# yum clean all
[root@ecs-cecc ~]# yum makecache

# Install Docker

[root@ecs-cecc ~]# yum install -y docker-ce

# Start Docker and enable it at boot

[root@ecs-cecc ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

# Check the status

[root@ecs-cecc ~]# systemctl status docker

# Load the image

[root@ecs-cecc ~]# docker load -i sepb_elk_latest.tar

# Start the elk container (Elasticsearch needs a larger number of virtual memory areas, so append vm.max_map_count=262144 to /etc/sysctl.conf)

[root@ecs-cecc ~]# docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -e ES_MIN_MEM=128m -e ES_MAX_MEM=1024m -it --name elk sebp/elk:latest
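If Elasticsearch inside the container exits because of the virtual memory limit mentioned above, raise the kernel parameter first and then re-run the container:

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p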

[root@ecs-cecc ~]# docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED              STATUS              PORTS                                                                                          NAMES
1bf5111a8a0c   sebp/elk:latest   "/usr/local/bin/star…"   About a minute ago   Up About a minute   0.0.0.0:5044->5044/tcp, 0.0.0.0:5601->5601/tcp, 9300/tcp, 0.0.0.0:9200->9200/tcp, 9600/tcp   elk
[root@ecs-cecc ~]#

# Upload filebeat-7.13.2-x86_64.rpm

# Install filebeat

[root@ecs-cecc ~]# yum install -y filebeat-7.13.2-x86_64.rpm

# Start

[root@ecs-cecc ~]# systemctl start filebeat

# Check the status

[root@ecs-cecc ~]# systemctl status filebeat


# Use filebeat

Method 1: collect yum log data into a local file

[root@ecs-cecc ~]# vi /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: True
  paths:
    - /var/log/yum.log

output.file:
  path: "/tmp"
  filename: "filebeat-test.txt"

# Restart the filebeat service

[root@ecs-cecc ~]# systemctl restart filebeat

# Install the httpd service (to generate a yum log entry)

[root@ecs-cecc ~]# yum install -y httpd

# Verify

[root@ecs-cecc tmp]# cat /tmp/filebeat-test.txt

{"@timestamp":"2022-10-16T09:20:03.410Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.13.2"},"log":{"offset":2213,"file":{"path":"/var/log/yum.log"}},"message":"Oct 16 17:20:02 Installed: httpd-2.4.6-97.el7.centos.5.x86_64","input":{"type":"log"},"host":{"hostname":"ecs-cecc","architecture":"x86_64","name":"ecs-cecc","os":{"family":"redhat","name":"CentOS Linux","kernel":"3.10.0-1160.53.1.el7.x86_64","codename":"Core","type":"linux","platform":"centos","version":"7 (Core)"},"id":"acca19161ce94d449c58923b12797030","containerized":false,"ip":["192.168.1.151","fe80::f816:3eff:fe79:d168","172.17.0.1","fe80::42:40ff:fef4:5e7","fe80::14fb:49ff:feec:ffad"],"mac":["fa:16:3e:79:d1:68","02:42:40:f4:05:e7","16:fb:49:ec:ff:ad"]},"agent":{"version":"7.13.2","hostname":"ecs-cecc","ephemeral_id":"a522699e-3e6b-44a7-b833-d14b43d2edba","id":"67d653cb-908e-418f-9356-5b7f2461dbe8","name":"ecs-cecc","type":"filebeat"},"ecs":{"version":"1.8.0"},"cloud":{"machine":{"type":"c6s.xlarge.2"},"service":{"name":"Nova"},"provider":"openstack","instance":{"name":"ecs-cecc.novalocal","id":"i-0129dc00"},"availability_zone":"cn-east-2c"}}

Method 2: collect yum log data into Elasticsearch

# Modify the configuration file

[root@ecs-cecc ~]# cat /etc/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: True
  paths:
    - /var/log/yum.log

output.elasticsearch:
  hosts: ["localhost:9200"]

# Restart

[root@ecs-cecc ~]# systemctl restart filebeat
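To confirm the events reach Elasticsearch, the Filebeat index can be checked on the ELK host (the index name follows the Filebeat default pattern):

curl -s http://localhost:9200/_cat/indices?v | grep filebeat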
