The previous installment covered:
1. Setting up the base lab environment
2. Installing the Keystone identity service (controller node)
3. Installing the Glance image service (controller node)
>> Click here for Part 1: Building an OpenStack Platform on Kunpeng Servers <<
This installment continues with the most important parts:
4 Install the Nova compute service (controller node)
4.1 Install the Nova compute service on the controller node
Create the Nova databases. Note that this release of the Nova service adds two extra databases, so pay attention here:
mysql -u root -p123456
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
flush privileges;
show databases;
select user,host from mysql.user;
exit
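A quick way to confirm the grants took effect is to log in as the nova user itself; only the databases nova was granted should be visible (a sketch, assuming the password 'nova' from the grants above):
# expect information_schema, nova, nova_api and nova_cell0 in the output
mysql -unova -pnova -e "show databases;"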
4.2 Register the nova service with Keystone
Create the service credentials.
(1) Create the nova user in Keystone
cd /server/tools
source admin-openrc.sh
openstack user create --domain default --password=nova nova
openstack user list
(2) Grant the nova user the admin role and add it to the service project
openstack role add --project service --user nova admin
(3) Create the nova compute service entity
openstack service create --name nova --description "OpenStack Compute" compute
openstack service list
(4) Create the API endpoints for the compute service
openstack endpoint create --region RegionOne compute public http://jack20_controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://jack20_controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://jack20_controller:8774/v2.1
openstack endpoint list
4.3 Install the Nova services on the controller node
(1) Install the Nova packages
yum --enablerepo=centos-openstack-stein,epel install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y --nogpgcheck
(2) Quickly edit the Nova configuration
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@jack20_controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@jack20_controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@jack20_controller
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://jack20_controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers jack20_controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.76
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://jack20_controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://jack20_controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
By default, the Compute service uses an internal firewall. Since the Networking service provides its own firewalling, Compute's built-in firewall must be disabled by setting the `nova.virt.firewall.NoopFirewallDriver` driver.
Check the effective Nova configuration:
egrep -v "^#|^$" /etc/nova/nova.conf
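Individual keys can also be spot-checked instead of dumping the whole file; openstack-config (a crudini wrapper from the openstack-utils package) supports a --get mode (a sketch):
openstack-config --get /etc/nova/nova.conf DEFAULT transport_url
openstack-config --get /etc/nova/nova.conf keystone_authtoken auth_url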
4.4 Sync the Nova databases (mind the sync order)
nova_api has 33 tables; nova_cell0 and nova have 111 tables each.
(1) Initialize the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
Verify the database:
mysql -h192.168.0.76 -unova -pnova -e "use nova_api;show tables;"
mysql -h192.168.0.76 -unova -pnova -e "use nova_api;show tables;"|wc -l
(2) Initialize the nova_cell0 and nova databases
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Initialize the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
Verify the databases:
mysql -h192.168.0.76 -unova -pnova -e "use nova_cell0;show tables;"
mysql -h192.168.0.76 -unova -pnova -e "use nova;show tables;"
mysql -h192.168.0.76 -unova -pnova -e "use nova_cell0;show tables;"|wc -l
mysql -h192.168.0.76 -unova -pnova -e "use nova;show tables;"|wc -l
Comparing the two outputs shows that these databases currently have identical schemas, 111 tables each (too many to screenshot here; try it yourself and compare).
(3) Confirm that cell0 and cell1 are registered
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
The records returned are stored in the cell_mappings table of the nova_api database.
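Since that output comes straight from cell_mappings, the table can also be inspected directly (a sketch using the same credentials as the earlier checks; column names per the Stein schema):
mysql -h192.168.0.76 -unova -pnova -e "use nova_api;select uuid,name,transport_url,database_connection from cell_mappings\G"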
4.5 Start the Nova services
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl list-unit-files |grep openstack-nova* |grep enabled
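With the four services up, the control-plane services should have registered themselves; nova-conductor and nova-scheduler should be listed with state "up" (a quick check):
cd /server/tools
source admin-openrc.sh
openstack compute service list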
5 Install the Nova compute service (compute node)
Note: make sure the compute node has completed all prior setup, including section 1.4.
5.1 Install the Nova packages on the compute node
(1) Install the Nova packages
yum --enablerepo=centos-openstack-stein,epel install openstack-nova-compute python-openstackclient openstack-utils -y --nogpgcheck
(2) Quickly edit the configuration file (/etc/nova/nova.conf)
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@jack20_controller
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://jack20_controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers jack20_controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.131
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://jack20_controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://jack20_controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://jack20_controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement
The server component listens on all IP addresses, while the proxy component listens only on the compute node's management interface IP. Check the effective configuration:
egrep -v "^#|^$" /etc/nova/nova.conf
(3) Configure hardware acceleration for virtual machines
First determine whether your compute node supports hardware acceleration. (QEMU is recommended here: with KVM there is a good chance an instance gets stuck at the GRUB boot screen and the OS never installs, so hardware CPU virtualization is not recommended in this setup.)
egrep -c '(vmx|svm)' /proc/cpuinfo
Note: if this returns 0, the compute node does not support hardware acceleration, and libvirt must be configured to manage VMs with QEMU:
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
Note: any other value means the compute node supports hardware acceleration and needs no extra configuration; in that case use KVM:
openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
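One caveat on Kunpeng hardware: vmx and svm are x86 CPU flags, so on an aarch64 host the egrep check above returns 0 even when virtualization extensions are present. A more direct check is whether the KVM device node exists and libvirt is happy with the host (a sketch; virt-host-validate ships with the libvirt packages pulled in above):
ls /dev/kvm
virt-host-validate qemu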
(4) Start the Nova services and enable them at boot
Two services need to start. (The compute node and the jack20_controller node must be running at the same time for all services to come up; keep both nodes running for the experiments that follow.)
systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl list-unit-files |grep libvirtd.service
systemctl list-unit-files |grep openstack-nova-compute.service
(5) Add the compute node to the cell database (on the controller node)
cd /server/tools
source admin-openrc.sh
Confirm that the new compute node is in the database:
openstack compute service list --service nova-compute
Manually add the new compute node to the OpenStack cluster:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
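discover_hosts writes the host-to-cell mapping into the nova_api database; it can be confirmed with list_hosts (a sketch):
su -s /bin/sh -c "nova-manage cell_v2 list_hosts" nova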
6 Install the Neutron networking service (controller node)
6.1 Host networking setup and connectivity tests
(1) Host name resolution
Confirm that the hosts file on both the controller node and the compute node contains the following entries:
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.76 jack20_controller
192.168.0.131 calculate block1
(2) Test connectivity from each node to the controller node and to the public Internet
Controller node:
ping -c 3 www.huaweicloud.com
ping -c 3 calculate
ping -c 3 block1
Compute node:
ping -c 3 www.huaweicloud.com
ping -c 3 jack20_controller
6.2 Register the Neutron services in the Keystone database
(1) Create the neutron database and grant appropriate access
mysql -u root -p123456
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
exit
(2) Create the neutron user in Keystone
cd /server/tools
source admin-openrc.sh
openstack user create --domain default --password=neutron neutron
openstack user list
(3) Add the neutron user to the service project and grant it the admin role
openstack role add --project service --user neutron admin
(4) Create the neutron service entity
openstack service create --name neutron --description "OpenStack Networking" network
openstack service list
(5) Create the API endpoints for the Neutron networking service
openstack endpoint create --region RegionOne network public http://124.70.35.167:9696
openstack endpoint create --region RegionOne network internal http://124.70.35.167:9696
openstack endpoint create --region RegionOne network admin http://124.70.35.167:9696
openstack endpoint list
6.3 Install the Neutron networking components on the controller node
Neutron offers two networking options; this guide uses provider networks (the ML2 settings below also enable VXLAN tenant networks).
(1) Install the Neutron packages
yum --enablerepo=centos-openstack-stein,epel install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y --nogpgcheck
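(2) Quickly configure /etc/neutron/neutron.conf
The neutron server itself needs its database, RabbitMQ, Keystone, and Nova sections set before the ML2 step that follows. This step is given here only as a minimal sketch: it reuses the hostnames and passwords established above, and the notify_nova_* options and nova section follow the upstream Stein install guide rather than anything shown in this article.
# sketch: controller-side neutron.conf, using the neutron DB/user created in 6.2
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@jack20_controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@124.70.35.167
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://124.70.35.167:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://124.70.35.167:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers 124.70.35.167:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://124.70.35.167:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp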
(3) Quickly configure /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types flat,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
View the effective configuration:
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
(4) Quickly configure /etc/neutron/plugins/ml2/openvswitch_agent.ini (use ifconfig first to confirm the NIC carrying this node's IP, and adjust the corresponding values below)
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2population True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings provider:br-provider
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 192.168.0.76
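The bridge_mappings entry above references an OVS bridge named br-provider, which must exist and carry the provider NIC before the agent can use it. A minimal sketch, assuming the provider interface is eth0 (substitute the NIC name found with ifconfig; moving the node's only NIC into a bridge will cut its network access, so use a dedicated provider NIC):
ovs-vsctl add-br br-provider
ovs-vsctl add-port br-provider eth0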
The following kernel parameters are set to 1 automatically once the Neutron agent service starts:
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
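If those sysctl reads fail with "No such file or directory", the br_netfilter kernel module is not loaded yet; it can be loaded and the values set by hand (a sketch for CentOS 7):
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1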
(5) Configure the layer-3 agent
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge br-provider
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT metadata_port 9697
View the effective configuration:
egrep -v '(^$|^#)' /etc/neutron/l3_agent.ini
(6) Quickly configure /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT ovs_integration_bridge br-int
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
View the effective configuration:
egrep -v '(^$|^#)' /etc/neutron/dhcp_agent.ini
(7) Quickly configure /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host 124.70.35.167
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT memcache_servers 124.70.35.167:11211
View the effective configuration:
egrep -v '(^$|^#)' /etc/neutron/metadata_agent.ini
(8) Configure the Compute service to use the Networking service
Quickly configure /etc/nova/nova.conf to add the neutron section:
openstack-config --set /etc/nova/nova.conf neutron url http://124.70.35.167:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://124.70.35.167:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal True
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 300
View the effective configuration:
egrep -v '(^$|^#)' /etc/nova/nova.conf
(9) Initialize the network plugin
Create a symbolic link for the network plugin: the network initialization scripts expect /etc/neutron/plugin.ini, which the ML2 plugin configuration provides.
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
(10) Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
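As with the Nova databases, the sync can be spot-checked by counting Neutron's tables (a sketch; the exact count varies by release, so treat the number as informational):
mysql -h192.168.0.76 -uneutron -pneutron -e "use neutron;show tables;"|wc -l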
(11) Restart the nova-api service
systemctl restart openstack-nova-api.service
(12) Start the Neutron services and enable them at boot
systemctl start neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl status neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl list-unit-files |grep neutron* |grep enabled
At this point the controller-side Neutron service is installed. Next, install the networking components on the compute node so that it can join the OpenStack cluster.
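Before moving on, a quick sanity check from the controller: the DHCP, L3, metadata, and Open vSwitch agents should all report :-) in the Alive column.
openstack network agent list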
6.4 Install the Neutron networking components on the compute node
(1) Install the Neutron packages
yum --enablerepo=centos-openstack-stein,epel install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y --nogpgcheck
(2) Quickly configure /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@124.70.35.167
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://124.70.35.167:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://124.70.35.167:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers 124.70.35.167:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT state_path /var/lib/neutron
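As on the controller, view the effective configuration:
egrep -v '(^$|^#)' /etc/neutron/neutron.conf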
(3) Quickly configure /etc/neutron/plugins/ml2/openvswitch_agent.ini (use ifconfig first to confirm the NIC carrying this node's IP, and adjust local_ip below accordingly)
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 192.168.0.131
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2population True
Note: if you use the linuxbridge agent instead, its physical_interface_mappings option must name the compute node's own NIC (e.g. provider:ens33); the Open vSwitch agent configured above uses local_ip and bridge_mappings for the same purpose.
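(4) Configure this node's Compute service to use Neutron
The compute node's nova.conf needs a neutron section mirroring the one set on the controller in 6.3; a minimal sketch reusing the same endpoints and credentials as above:
openstack-config --set /etc/nova/nova.conf neutron url http://124.70.35.167:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://124.70.35.167:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron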
(5) Restart the Compute service
systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service
(6) Start the Neutron networking component and enable it at boot
Only one service needs to start here, the Open vSwitch bridge agent:
systemctl restart neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service
systemctl enable neutron-openvswitch-agent.service
systemctl list-unit-files |grep neutron* |grep enabled
At this point the compute node's network configuration is complete; return to the controller node for verification.
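Back on the controller, re-running the agent check should now also list the compute node's Open vSwitch agent:
openstack network agent list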
>> Click here for Part 3: Building an OpenStack Platform on Kunpeng Servers <<
Part 3 will cover:
7 Installing the Horizon dashboard (controller node)
8 Installing the Cinder block storage service (controller node)
9 Deploying a VPC to test OpenStack