In this tutorial, we will study how OpenStack's Flat mode works, how to activate it and, finally, how to use it.
Flat mode is a network mode used by OpenStack to provide network connectivity to its instances.
As a reminder, this mode belongs to the nova-network component. This network component currently coexists with the newer one, Quantum. Both components aim to provide all the network functionality expected by instances. However, we won't go deeper into Quantum, as this article focuses on flat mode.
With the historical nova-network component, there are three modes available:
- Flat: The one we will describe
- Flat DHCP: adds a DHCP server that automatically assigns IP addresses to instances
- VLAN Manager: adds Layer 2 (OSI) isolation, allowing a distinct network per project
More information can be found here: http://docs.openstack.org/trunk/openstack-compute/admin/content/networking-options.html
This mode aims to let you deploy OpenStack without worrying about the underlying network topology. Thanks to its simplicity, it is easy to configure, deploy and use in a production environment.
Flat mode relies on Linux bridging. The idea is to provide a bridge between the instance and the desired network. Network addressing is manual: the network(s) are attached when the instance is created.
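To make the bridging idea concrete, here is a minimal sketch of roughly what nova-network does under the hood (illustrative only; br100 and eth1.100 are the names used later in this tutorial, and the instance's virtual interface is added at boot time):

```shell
# Create a bridge and attach the physical (VLAN) interface to it
compute-node:~# brctl addbr br100
compute-node:~# brctl addif br100 eth1.100
compute-node:~# ip link set br100 up
# When an instance boots, its virtual interface (vnetX) is added to the bridge
```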
Here is a diagram explaining this mode (source: OpenStack documentation):
The eth1 interfaces correspond to the 'real' OpenStack network. It is on this network that the compute and controller nodes communicate directly with each other.
When an instance is launched, it is attached to the bridge br100. Of course, you can declare multiple networks, and therefore use bridges on multiple VLANs.
In this tutorial, we will configure OpenStack and an instance using two network interfaces, as shown in the picture (source: OpenStack documentation):
First of all, install the packages needed to create bridges, VLANs and bonds:
compute-node:~# apt-get install bridge-utils vlan ifenslave
Activate the VLAN module:
compute-node:~# modprobe 8021q
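Once the module is loaded, you can also create a VLAN sub-interface such as eth1.100 by hand to test it (vconfig is provided by the vlan package installed above):

```shell
# Create sub-interface eth1.100 tagged with VLAN ID 100 and bring it up
compute-node:~# vconfig add eth1 100
compute-node:~# ip link set eth1.100 up
```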
For example, you can configure the network interfaces of a compute node as follows:
compute-node:~# vi /etc/network/interfaces
auto eth0
iface eth0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    gateway 10.0.0.254

auto eth1
iface eth1 inet manual

auto br100
iface br100 inet manual
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    bridge_ports eth1.100

auto br101
iface br101 inet manual
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    bridge_ports bond1.101
eth0 is the interface on the OpenStack network. eth1 is the second interface, which allows us to create bridges on the different VLANs. It is of course also possible to set up bonding across multiple interfaces, as br101 does above with bond1.
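The bond1 interface referenced by br101 must itself be declared in /etc/network/interfaces. Here is a sketch of what that could look like, assuming an active-backup bond over two hypothetical slave interfaces eth2 and eth3:

```
# /etc/network/interfaces (excerpt) - hypothetical bonding setup for bond1
auto bond1
iface bond1 inet manual
    bond-slaves eth2 eth3
    bond-mode active-backup
    bond-miimon 100
```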
Edit the OpenStack configuration:
compute-node:~# vi /etc/nova/nova.conf
[...]
# NETWORK
network_manager=nova.network.manager.FlatManager
# Inject the network configuration into instances
flat_injected=true
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Run nova-network on each compute node
multi_host=true
[...]
In this configuration, two very useful options have been set. OpenStack now "injects" the network configuration when creating an instance instead of relying on the image's own network configuration.
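Concretely, with flat_injected=true, the instance's disk image is modified at build time and a static network configuration is written into it. For a Debian/Ubuntu image, the injected /etc/network/interfaces looks roughly like this (the addresses shown are illustrative, matching the 172.16.100.0/24 network used in this tutorial):

```
# /etc/network/interfaces as injected into the instance (illustrative)
auto eth0
iface eth0 inet static
    address 172.16.100.2
    netmask 255.255.255.0
    gateway 172.16.100.254
```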
OpenStack Network Creation
The next step is to create the two networks we want to use in OpenStack:
compute-node:~# nova-manage network create --multi_host=T \
    --fixed_range_v4=172.16.100.0/24 --bridge=br100 --bridge_interface=br100 \
    --num_networks=1 --network_size=256 --label=network100 \
    --gateway=172.16.100.254 --dns1=172.16.100.254
compute-node:~# nova-manage network create --multi_host=T \
    --fixed_range_v4=172.16.101.0/24 --bridge=br101 --bridge_interface=br101 \
    --num_networks=1 --network_size=256 --label=network101 \
    --gateway=172.16.101.254 --dns1=172.16.101.254
Let's check that the networks have been properly created:
compute-node:~# nova network-list
+--------------------------------------+------------+-----------------+
| ID                                   | Label      | Cidr            |
+--------------------------------------+------------+-----------------+
| 069dedb6-c97a-432c-bcf6-54b2b4311928 | network100 | 172.16.100.0/24 |
| 0c8bd87e-c824-439d-a567-5f37e724292c | network101 | 172.16.101.0/24 |
+--------------------------------------+------------+-----------------+
Note: This can also be checked in the controller node's database:
controller-node:~# mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "select * from networks\G" nova
*************************** 1. row ***************************
         created_at: 2012-11-27 10:43:39
         updated_at: 2012-11-27 15:34:26
         deleted_at: NULL
            deleted: 0
                 id: 1
           injected: 1
               cidr: 172.16.100.0/24
            netmask: 255.255.255.0
             bridge: br100
            gateway: 172.16.100.254
          broadcast: 172.16.100.255
               dns1: 172.16.100.254
               vlan: NULL
 vpn_public_address: NULL
    vpn_public_port: NULL
vpn_private_address: NULL
         dhcp_start: NULL
         project_id: NULL
               host: NULL
            cidr_v6: NULL
         gateway_v6: NULL
              label: network100
         netmask_v6: NULL
   bridge_interface: br100
         multi_host: 1
*************************** 2. row ***************************
         created_at: 2012-12-26 08:34:13
         updated_at: 2012-12-26 08:42:18
         deleted_at: NULL
            deleted: 0
                 id: 2
           injected: 1
               cidr: 172.16.101.0/24
            netmask: 255.255.255.0
             bridge: br101
            gateway: 172.16.101.254
          broadcast: 172.16.101.255
               dns1: 172.16.101.254
               vlan: NULL
 vpn_public_address: NULL
    vpn_public_port: NULL
vpn_private_address: NULL
         dhcp_start: NULL
         project_id: NULL
               host: NULL
            cidr_v6: NULL
         gateway_v6: NULL
              label: network101
         netmask_v6: NULL
   bridge_interface: br101
         multi_host: 1
Creating an instance with these two networks
With the OpenStack network configuration above and the two networks created, we can boot an instance as follows:
compute-node:~# nova boot --flavor 1 --image precise-cloud \
    --nic net-id=069dedb6-c97a-432c-bcf6-54b2b4311928,v4-fixed-ip=172.16.100.2 \
    --nic net-id=0c8bd87e-c824-439d-a567-5f37e724292c,v4-fixed-ip=172.16.101.2 \
    --security_group default instance-nw100-101
Now that the instance is created, we can check its network configuration:
compute-node:~# nova show instance-nw100-101 | grep network
| network100 network | 172.16.100.2 |
| network101 network | 172.16.101.2 |
Finally, we check the bridges on the compute node:
compute-node:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br100           8000.00137238fb51       no              eth1.100
                                                        vnet0
br101           8000.00137238fb51       no              bond1.101
                                                        vnet1
Connect to the instance to ensure its network configuration is correct:
instance:~$ ping smile.fr -c 1
PING smile.fr (188.8.131.52) 56(84) bytes of data.
64 bytes from 184.108.40.206: icmp_req=1 ttl=46 time=7.24 ms

instance:~$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         172.16.100.254  0.0.0.0         UG        0 0          0 eth0
172.16.100.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0
172.16.101.0    0.0.0.0         255.255.255.0   U         0 0          0 eth1
It works! Note, however, that one of the two default gateways had to be removed by hand, since an instance can only have one default route.
For the metadata service to work, requests leaving the bridges for 169.254.169.254 on port 80 must be redirected to the server hosting the nova-api service. This redirection is usually performed on the instances' gateway.
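Such a redirection can be done with an iptables DNAT rule on the gateway, along these lines (the nova-api host address 10.0.0.1 is an assumption for this example; 8775 is the default nova-api metadata port):

```shell
# On the instances' gateway: redirect metadata traffic to the nova-api host
# (adjust 10.0.0.1:8775 to your actual nova-api metadata address and port)
gateway:~# iptables -t nat -A PREROUTING -d 169.254.169.254/32 \
    -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:8775
```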
Layer 3 filtering, performed by iptables, does not work in this mode. The filtering offered by the OpenStack API will therefore not work either.
That's it! You now know how to deploy and configure OpenStack's flat network mode... but all the remaining OpenStack components are still left to explore!