try devstack cont. (on ecs)

黄生, published 2023/03/14 23:53:57

The difference between IaaS, PaaS, and SaaS
My personal take, viewed from the software stack: IaaS stops at the OS, PaaS stops at the application/middleware runtime (e.g. web and database servers), and SaaS delivers the service or data itself ready to use (a blog-hosting service, say :)).

The name OpenStack is interesting: "Open" presumably stands for open-source software, and "stack" for stacking. Put together, a stack of open-source software makes up OpenStack?
OpenStack belongs to the IaaS layer described above.

It can be deployed on a single node or across multiple nodes. A single-node deployment puts all the services and components on one physical machine.
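For a sense of what drives a single-node deployment: devstack is configured through one local.conf file. A minimal sketch (every value below is an illustrative placeholder, not taken from this deployment) looks roughly like:

```ini
# Minimal single-node devstack local.conf sketch -- all values are placeholders.
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.92
```

With this in place, running stack.sh installs and starts all the core services on the one host.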

OpenStack on OpenStack, also known as TripleO, is more complex and mind-bending, so let's set it aside. Here we only look at OpenStack deployed directly on Linux.

After installation, check the version:

stack@ecs-os:~$ nova-manage --version
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.
27.0.0

OpenStack's dashboard, Horizon, is based on Django and provides a web UI for OpenStack resources.
Horizon depends on a web server and on the Keystone service (which in turn depends on memcached).

Unexpectedly, port 80 is only listening on tcp6:

tcp6       0      0 :::80                   :::*                    LISTEN      -
Connections work fine, though:
tcp6       0      0 10.0.0.92:80            117.152.88.1??:9106     TIME_WAIT

Have I been unknowingly using tcp6 over IPv6? And why do the addresses still look like IPv4?
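This is ordinary Linux dual-stack behavior: a tcp6 socket bound to :: with IPV6_V6ONLY off (the default when net.ipv6.bindv6only=0) also accepts IPv4 clients, which then show up as IPv4-mapped addresses, and netstat files the whole connection under tcp6. A small sketch to reproduce it (the port is picked by the OS; nothing here comes from the deployment):

```python
import socket

# An IPv6 TCP socket with dual-stack mode explicitly enabled
# (IPV6_V6ONLY=0, the Linux default when net.ipv6.bindv6only=0).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))          # like Apache's "Listen 80" showing up as :::80
srv.listen(1)
port = srv.getsockname()[1]

# Connect to it over plain IPv4.
cli = socket.create_connection(("127.0.0.1", port))
conn, peer = srv.accept()

# The IPv4 client appears as an IPv4-mapped IPv6 address.
print(peer[0])               # ::ffff:127.0.0.1
conn.close(); cli.close(); srv.close()
```

So the tcp6 listener on :::80 was serving the IPv4 clients all along; netstat just prints their addresses in IPv4 form.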

stack@ecs-os:~$ vim /etc/openstack/clouds.yaml  # already generated by the devstack install
stack@ecs-os:~$ export OS_CLOUD=devstack-admin  # select one section of that config; devstack's default is demo
stack@ecs-os:~$ openstack server list  # list the running instances
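For reference, a devstack-generated clouds.yaml has roughly this shape; OS_CLOUD simply names one of the entries under clouds. All values here are illustrative placeholders, not the real file:

```yaml
# Illustrative sketch of /etc/openstack/clouds.yaml -- placeholders only.
clouds:
  devstack:              # the unprivileged "demo" view (the default)
    auth:
      auth_url: http://10.0.0.92/identity
      username: demo
      password: secret
      project_name: demo
    region_name: RegionOne
  devstack-admin:        # what OS_CLOUD=devstack-admin selects
    auth:
      auth_url: http://10.0.0.92/identity
      username: admin
      password: secret
      project_name: admin
    region_name: RegionOne
```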

stack@ecs-os:~$ openstack server set --property novncproxy-base-url=http://116.63.90.1??:6080/vnc_auto.html --property vncserver_listen=0.0.0.0 cirros
stack@ecs-os:~$ openstack console url show cirros  # even with the two properties above set on the instance, the VNC console still cannot be opened from outside: the URL still carries the internal IP
+----------+------------------------------------------------------------------------------------------+
| Field    | Value                                                                                    |
+----------+------------------------------------------------------------------------------------------+
| protocol | vnc                                                                                      |
| type     | novnc                                                                                    |
| url      | http://10.0.0.92:6080/vnc_lite.html?path=%3Ftoken%3D17027a18-ec38-4a71-a0b7-2ef7a6026dee |
+----------+------------------------------------------------------------------------------------------+
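Rewriting the host in the token URL is the easy part; a hypothetical helper (the public IP below is a documentation placeholder) shows the idea. Note this alone does not make the console reachable: nova's [vnc] novncproxy_base_url also has to point at an address the browser can actually reach, or the token handshake still goes to the internal IP.

```python
from urllib.parse import urlsplit, urlunsplit

def rewrite_console_host(url: str, public_host: str) -> str:
    """Swap the host in a noVNC console URL for a publicly reachable one,
    keeping scheme, port, path, and the token query string intact."""
    parts = urlsplit(url)
    netloc = f"{public_host}:{parts.port}" if parts.port else public_host
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

url = "http://10.0.0.92:6080/vnc_lite.html?path=%3Ftoken%3D17027a18-ec38-4a71-a0b7-2ef7a6026dee"
# 203.0.113.10 is a placeholder public address (TEST-NET-3 range).
print(rewrite_console_host(url, "203.0.113.10"))
# http://203.0.113.10:6080/vnc_lite.html?path=%3Ftoken%3D17027a18-ec38-4a71-a0b7-2ef7a6026dee
```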

stack@ecs-os:~$ openstack network agent list
+--------------------------------------+------------------------------+--------+-------------------+-------+-------+----------------------------+
| ID                                   | Agent Type                   | Host   | Availability Zone | Alive | State | Binary                     |
+--------------------------------------+------------------------------+--------+-------------------+-------+-------+----------------------------+
| 953cfe6c-a060-4aa3-93a2-0079ce18bd04 | OVN Controller Gateway agent | ecs-os |                   | :-)   | UP    | ovn-controller             |
| c23dd5b3-65e3-5000-9c47-5eb5e52a21c4 | OVN Metadata agent           | ecs-os |                   | :-)   | UP    | neutron-ovn-metadata-agent |
+--------------------------------------+------------------------------+--------+-------------------+-------+-------+----------------------------+

root@ecs-os:~# systemctl status devstack@q-svc.service  # the "q" prefix comes from Neutron's old name, Quantum
● devstack@q-svc.service - Devstack devstack@q-svc.service
     Loaded: loaded (/etc/systemd/system/devstack@q-svc.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-03-13 00:00:02 CST; 21min ago
   Main PID: 678 (/usr/bin/python)
      Tasks: 6 (limit: 9442)
     Memory: 394.1M
     CGroup: /system.slice/system-devstack.slice/devstack@q-svc.service
             ├─ 678 /usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
             ├─7788 neutron-server: api worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_con>
             ├─7789 neutron-server: api worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_con>
             ├─7812 neutron-server: rpc worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_con>
             ├─7813 neutron-server: MaintenanceWorker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/>
             └─7815 neutron-server: periodic worker (/usr/bin/python3.8 /usr/local/bin/neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml>

root@ecs-os:~# systemctl status devstack@q-agt.service  # no separate L2 agent: under ML2/OVN, ovn-controller takes that role
Unit devstack@q-agt.service could not be found.

ip a  # with one cirros instance started
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:16:3e:bc:bc:18 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:8c:00:48:56:f2 brd ff:ff:ff:ff:ff:ff
4: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
    link/ether 46:6a:84:52:8d:42 brd ff:ff:ff:ff:ff:ff
5: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:c2:1f:6e:d8:47 brd ff:ff:ff:ff:ff:ff
6: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:ec:45:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
7: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:ec:45:bf brd ff:ff:ff:ff:ff:ff
8: tap0f401b6c-40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:e1:07:0a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fee1:70a/64 scope link
       valid_lft forever preferred_lft forever
9: tap50c7b4ef-50@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 5a:35:2c:4f:b1:3d brd ff:ff:ff:ff:ff:ff link-netns ovnmeta-50c7b4ef-56da-4a7e-93b7-54410a38d2a9
    inet6 fe80::5835:2cff:fe4f:b13d/64 scope link
       valid_lft forever preferred_lft forever
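A side note on the MTU values above: eth0 is 1500, while br-int and the instance tap are 1442. With ML2/OVN, tenant traffic is Geneve-encapsulated, and Neutron subtracts the encapsulation overhead from the tenant network MTU. A back-of-the-envelope check, assuming an IPv4 underlay and Neutron's default geneve max_header_size of 30 (both assumptions, not read from this deployment):

```python
# Why the tenant-side devices show MTU 1442 while the physical NIC is 1500.
ip_header     = 20   # outer IPv4 header
udp_header    = 8    # outer UDP header (Geneve runs over UDP/6081)
geneve_header = 30   # Geneve base header + option space (Neutron's default
                     # ml2 geneve max_header_size)

overhead = ip_header + udp_header + geneve_header
print(1500 - overhead)   # 1442
```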

stack@ecs-os:~$ ovs-vsctl list-br
ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (Permission denied)
stack@ecs-os:~$ logout
root@ecs-os:~# ovs-vsctl list-br
br-ex
br-int

root@ecs-os:~# ovs-vsctl list-ports br-int
patch-br-int-to-provnet-dbc4f406-97b3-4cf2-8668-8f02c5708e23
tap0f401b6c-40
tap50c7b4ef-50

root@ecs-os:~# ovs-vsctl list-ports br-ex
patch-provnet-dbc4f406-97b3-4cf2-8668-8f02c5708e23-to-br-int
Note: this patch port pairs with patch-br-int-to-provnet-… on br-int, linking "br-ex" to "br-int", the OVS integration bridge. "provnet" in the name refers to the provider (external) network, so this patch pair carries provider-network traffic between the two bridges.

root@ecs-os:~# ovs-vsctl add-port br-ex eth0

The SSH connection to the OpenStack host dropped: enslaving eth0 to the OVS bridge takes the NIC out of the host's normal IP stack.
And in the Huawei Cloud environment, I could not reconnect even after moving eth0's IP address and other network settings onto br-ex.
Probably a restriction of the cloud environment, so I attached a second elastic NIC, eth1, to the ECS, on a different subnet of the same VPC as eth0,
then ran del-port eth0 followed by add-port eth1:

9: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000

After a reboot it looks like this:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
5: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN group default qlen 1000
6: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
7: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
8: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
9: tap0f401b6c-40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc fq_codel master ovs-system state UNKNOWN group default qlen 1000
10: tap50c7b4ef-50@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000

OpenStack networking is still a muddle to me.

Look at the external (physical) networks:

stack@ecs-os:~$ neutron net-external-list
neutron CLI is deprecated and will be removed in the Z cycle. Use openstack CLI instead.
+--------------------------------------+--------+----------------------------------+----------------------------------------------------+
| id                                   | name   | tenant_id                        | subnets                                            |
+--------------------------------------+--------+----------------------------------+----------------------------------------------------+
| 6eb8727f-7df7-4fb3-83df-88a6359113d6 | public | 306eaa9b5204455eaaf9d5aafd34efb6 | 09e3447f-afc9-486e-b7d1-c976e697470e 2001:db8::/64 |
|                                      |        |                                  | 2b058b4c-be1c-4869-9b74-9d10f48621f4 10.0.1.0/24   |
+--------------------------------------+--------+----------------------------------+----------------------------------------------------+

A cirros instance on either private (self-created) or shared can get a subnet IP via DHCP,
but the one on private cannot fetch metadata, and neither can ping its default gateway.

Below, let's figure out what the long-mysterious interface tap50c7b4ef-50@if2 is for. It appears at the same moment the instance's port is created, and 169.254.169.254 shows up with it:

11: tap50c7b4ef-50@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 6e:db:06:d6:71:79 brd ff:ff:ff:ff:ff:ff link-netns ovnmeta-50c7b4ef-56da-4a7e-93b7-54410a38d2a9
    inet6 fe80::6cdb:6ff:fed6:7179/64 scope link
       valid_lft forever preferred_lft forever

stack@ecs-os:~$ sudo ip netns exec ovnmeta-50c7b4ef-56da-4a7e-93b7-54410a38d2a9 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tap50c7b4ef-51@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:3b:7c:ed brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.169.254/32 brd 169.254.169.254 scope global tap50c7b4ef-51
       valid_lft forever preferred_lft forever
    inet 192.168.233.2/24 brd 192.168.233.255 scope global tap50c7b4ef-51
       valid_lft forever preferred_lft forever
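So the mystery interface is the OVN metadata agent's leg inside the ovnmeta- namespace, answering 169.254.169.254, the well-known cloud metadata endpoint that cloud-init inside the guest queries. That address sits in the IPv4 link-local block, which is why a guest can reach it without any routed path. A quick check:

```python
import ipaddress

# 169.254.169.254 is the standard cloud metadata address. It lives in the
# IPv4 link-local block 169.254.0.0/16, so guests reach it on their local
# segment without a configured route; here the OVN metadata agent serves it
# from the ovnmeta- namespace shown above.
meta = ipaddress.ip_address("169.254.169.254")
print(meta.is_link_local)                               # True
print(meta in ipaddress.ip_network("169.254.0.0/16"))   # True
```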

When a network of type shared is created, its ports automatically include one whose Attached Device is network:distributed, holding the .2 address (the subnet gateway conventionally takes .1).

Then creating a new cirros instance suddenly started failing with a timeout. I don't want to invest any more effort; it feels like falling into the trap of reinforcing failure piecemeal. That's it, done with devstack.

[Copyright notice] This article is original content by a user of the Huawei Cloud community. Reprints must credit the source (Huawei Cloud community), the article link, and the author; otherwise the author and the community reserve the right to pursue liability.