Why don't the management containers receive IPs when installing with OpenStack-Ansible?


For testing purposes, I want to install OpenStack on two VirtualBox instances using Ansible. As described, I pre-configured the local network with four VLANs and created the bridge interfaces. After that, network connectivity works fine.
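
The host-side bridge setup looks roughly like this (a minimal sketch in ifupdown style; the physical NIC name enp0s8 and VLAN ID 10 are placeholders, and only the management bridge is shown):

# /etc/network/interfaces.d/br-mgmt.cfg (sketch; requires the vlan and bridge-utils packages)
auto enp0s8.10
iface enp0s8.10 inet manual
    vlan-raw-device enp0s8

# Management bridge; containers attach to it via veth pairs
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports enp0s8.10
    address 172.29.236.11
    netmask 255.255.252.0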

I also configured the openstack_user_config.yml file:

---
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22
used_ips:
  - "172.29.236.1,172.29.236.255"
  - "172.29.240.1,172.29.240.255"
  - "172.29.244.1,172.29.244.255"

global_overrides:
  internal_lb_vip_address: 192.168.33.22
  external_lb_vip_address: dev-ows.hive
  tunnel_bridge: "br-vxlan"
  management_bridge: "br-mgmt"
  provider_networks:
    - network:
      container_bridge: "br-mgmt"
      container_type: "veth"
      container_interface: "eth1"
      ip_from_q: "container"
      type: "raw"
      group_binds:
        - all_containers
        - hosts
      is_container_address: true
    - network:
      container_bridge: "br-vxlan"
      container_type: "veth"
      container_interface: "eth10"
      ip_from_q: "tunnel"
      type: "vxlan"
      range: "1:1000"
      net_name: "vxlan"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-vlan"
      container_type: "veth"
      container_interface: "eth11"
      type: "flat"
      net_name: "flat"
      group_binds:
        - neutron_linuxbridge_agent
    - network:
      container_bridge: "br-storage"
      container_type: "veth"
      container_interface: "eth2"
      ip_from_q: "storage"
      type: "raw"
      group_binds:
        - glance_api
        - cinder_api
        - cinder_volume
        - nova_compute
...
But after running the playbook I get errors:

# openstack-ansible setup-hosts.yml
...
TASK [lxc_container_create : Gather container facts] *********************************************************************************************************************************************************************************************************
fatal: [controller01_horizon_container-6da3ab23]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_horizon_container-6da3ab23\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_utility_container-3d6724b2]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_utility_container-3d6724b2\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [controller01_keystone_container-01c915b6]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"controller01_keystone_container-01c915b6\". Make sure this host can be reached over ssh", "unreachable": true}
...
I found that the LXC containers created by the Ansible playbooks have no network interfaces and therefore no IP addresses. That is why Ansible reports "host unreachable" errors when it tries to connect to these containers over SSH.
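
One way to confirm this on the container host (a sketch; flags as in LXC 2.x, container name taken from the error output above):

$ # list containers; the IPV4 column stays empty for the affected containers
$ lxc-ls --fancy

$ # inside a container, only the loopback device shows up (assumption based on the symptom)
$ lxc-attach -n controller01_horizon_container-6da3ab23 -- ip addr show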


Please give me some advice on what I am doing wrong.

As you noticed, the containers do not get a management IP.

Have you made sure that the br-mgmt bridge works as expected on both VirtualBox machines? Check the connectivity between the two hosts over br-mgmt, for example by pinging between the hosts using their br-mgmt IP addresses, as shown below.
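
For example (addresses taken from the ip route output further down):

$ # on infra1: check infra2 over the management bridge
$ ping -c 3 -I br-mgmt 172.29.236.12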

If the VLANs and bridges are set up correctly, you should be able to establish connections between the hosts over each specific bridge:

$ ansible -vi inventory/myos all -m shell -a "ip route" --limit infra,compute
Using /etc/ansible/ansible.cfg as config file
infra2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.12 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.12 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.12 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.12 

infra1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.11 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.11 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.11 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.11 

infra3 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.0.3.0/24 dev lxcbr0  proto kernel  scope link  src 10.0.3.1 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.13 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.13 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.13 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.13

compute1 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.16 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.16 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.16 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.16 

compute2 | SUCCESS | rc=0 >>
default via 10.255.0.1 dev eno1 onlink 
10.255.0.0/24 dev eno1  proto kernel  scope link  src 10.255.0.17 
172.29.236.0/22 dev br-mgmt  proto kernel  scope link  src 172.29.236.17 
172.29.240.0/22 dev br-vxlan  proto kernel  scope link  src 172.29.240.17 
172.29.244.0/22 dev br-storage  proto kernel  scope link  src 172.29.244.17 

So, using the br-mgmt IP of any of the hosts above (172.29.236.x), I should be able to reach any peer on the same br-mgmt subnet.
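
The same check works for SSH from the deployer node (a sketch; it assumes infra1 is the deployer and that root SSH access is set up, as the playbooks expect):

$ # from the deployer: reach infra2 over br-mgmt
$ ssh root@172.29.236.12 hostname
infra2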

The hosts can reach each other over every bridge interface. I assume one of the two VirtualBox machines is also the deployer node. Can the deployer node connect to the other hosts via SSH (again over br-mgmt)? Could you show us the complete openstack_user_config as well as your hosts' network configuration, so that we know more?

I am running the ansible playbooks directly, not from a virtual environment.

I see two things in your setup that may or may not be causing your problem: your deployment node should also be on the same br-mgmt network, and according to your openstack_user_config you should use the br-mgmt IPs to identify your hosts, e.g. shared-infra_hosts: controller01: ip: ...

I think the hint is this: it indicates that the containers' network interfaces are not being configured according to ... Please check the documentation to see whether you are missing a step (see the sketch below for how the provider_networks section is nested in the documented examples), or better, check with the upstream project folks on #openstack-ansible for further help.
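
For comparison, a sketch of how the provider_networks section is laid out in the openstack-ansible documentation examples. The attributes are nested one level below the network key; is_ssh_address is taken from the documented example and is not present in the configuration above:

  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        # per the documented example; marks this network as the SSH address for Ansible
        is_ssh_address: true

If the attributes end up as siblings of the network key instead of children, the dynamic inventory may not associate the networks with the containers at all, which would match the missing-interface symptom.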