Jeff Rametta | Yet Another Cloud and Networking Blog
Devstack Icehouse With Brocade ML2 Plugin
Devstack is a scripted installation of OpenStack that can be used for development or
demo purposes. This writeup covers a simple two-node devstack installation using the Brocade VCS plugin for
OpenStack networking (aka Neutron).
The VCS ML2 plugin supports both Open vSwitch and Linux Bridge agents and realizes tenant networks as
port-profiles in the physical network infrastructure. A port-profile in a Brocade Ethernet fabric is like a
network policy for VMs or OpenStack instances and can contain information like VLAN assignment, QoS
information, and ACLs. Because tenant networks are provisioned end-to-end, no additional networking setup is
required anywhere in the network.
Deployment Topology
My hardware environment is pretty simple. I have two servers on which to run OpenStack – one will be a
controller/compute node, the other will just be a compute node.
Server Configuration
I used Ubuntu Precise as the OS platform. The network interfaces were configured as below on the controller;
the compute node is similar.
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 10.17.88.129
netmask 255.255.240.0
gateway 10.17.80.1
dns-nameservers 10.17.80.21
# Private tenant network interface (connected to VCS fabric)
auto eth1
iface eth1 inet manual
up ifconfig eth1 up promisc
The VCS plugin currently uses NETCONF to communicate with the Ethernet fabric, so the ncclient Python library
is required on the controller node.
OpenStack runs as a non-root user with sudo privileges. I usually have such a user already set up, but devstack
will create a new user if you try to run stack.sh as root.
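If you prefer to create the user yourself ahead of time, a minimal sketch along these lines works; the stack name and /opt/stack home directory are conventional, not required:

```shell
# Create a non-root "stack" user with passwordless sudo for devstack
sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo chmod 0440 /etc/sudoers.d/stack
```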
The controller node should also contain an ML2 configuration file, ml2_conf_brocade.ini, identifying
the authentication credentials and management virtual IP for the VCS fabric. This file usually lives somewhere
under /etc/neutron/plugins, but its location should be specified via the Q_PLUGIN_EXTRA_CONF_PATH parameter
in local.conf above. I happened to just place it in stack’s home directory.
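For reference, the contents of ml2_conf_brocade.ini look roughly like the following; the section and option names here reflect the Icehouse-era Brocade mechanism driver, and the address and credentials are placeholders for your fabric's management virtual IP and login:

```ini
[ml2_brocade]
username = admin
password = password
address  = 10.17.88.10
ostype   = NOS
physical_networks = physnet1
```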
Source the openrc file in the devstack directory to obtain credentials, then use the CLI to have a look around,
create networks, and launch new virtual machine instances. Alternatively, log in to the Horizon dashboard at
http://controllerNodeIP and use the GUI (user: admin or demo, password: openstack).
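As a rough sketch of that workflow with the Icehouse-era clients (the network name, subnet range, image, and flavor below are illustrative):

```shell
# Load credentials created by devstack
source openrc admin admin

# Create a tenant network and subnet; with the Brocade plugin this
# should result in a new port-profile in the VCS fabric
neutron net-create demo-net
neutron subnet-create demo-net 192.168.10.0/24 --name demo-subnet

# Boot an instance attached to the new network
NET_ID=$(neutron net-show -f value -c id demo-net)
nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec \
  --nic net-id=$NET_ID demo-vm1
```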
Within the VCS fabric, check that a new port-profile is created for every tenant network. Two
new port-profiles should exist after running stack.sh for the first time; these correspond to the initial
public and private networks that the devstack script creates.
As new instances are launched, they should be tied to the port-profile corresponding to the network they belong
to. Any instances on the same network should be able to communicate with each other through the VCS fabric.
VDX1# show port-profile status
Port-Profile           PPID  Activated  Associated MAC   Interface
openstack-profile-2    1     Yes        fa16.3e1b.95d0   None
                                        fa16.3e64.fce8   Gi 2/0/28
                                        fa16.3e85.5b2f   Gi 2/0/28
                                        fa16.3ea6.3741   Gi 2/0/5
                                        fa16.3ecd.bfc1   Gi 2/0/5
                                        fa16.3eeb.87f7   Gi 2/0/28
openstack-profile-3    2     Yes        fa16.3e2c.0baf   None