OpenFlow wired testbed

The i2CAT OpenFlow wired testbed is integrated into the Fed4FIRE+ federation. It provides 5 OpenFlow 1.0 switches connected in a full mesh, attached to 2 virtualisation servers. The infrastructure is interconnected with the VirtualWall facilities (iMec).

As an experimenter, you can request any overlay following the examples below.


The status of the devices in the i2CAT infrastructure can be checked at the Fed4FIRE Federation Monitor. Specific monitoring is available for VTAM and OFAM.


It is necessary to accept the terms on the testbed privacy page. You can read and accept or deny them while creating a new experiment in jFed, or by loading your Fed4FIRE user certificate in your browser and opening this page.


These are the interconnections for the OpenFlow 1.0 testbed. You must take them into account when defining your overlay (an XML definition called an RSpec).

See specific details on the hardware provided.


This testbed offers two kinds of resources: virtualised computing nodes (VTAM) and OpenFlow resources (OFAM). Experimenters request resources by identifying them in an RSpec (resource specification) file and submitting it to either the GENIv2 or GENIv3 API exposed by the software stack.

For detailed user manuals on GENIv3, you may have a look here.

Requesting resources

To request resources, you must use a GENI-compliant client such as jFed (GUI) or OMNI (CLI). Some examples follow:


Using jFed: in the left sidebar, pick “Virtual Machine”. In the new window, select the “i2cat.vtam” testbed. You may change the name and choose a specific node.

Editing the RSpec: if you are not using jFed, or you wish to change some details directly, you should change the following:

  • component_id: URN of the server you want to define compute nodes at


Using jFed: not supported for requests on OpenFlow wired devices.

Editing the RSpec: you should change the following:

  • e-mail: your e-mail account
  • controller’s ip:port: the IP (on eth0, something like 10.216.12.X) and port of the controller running in a VM within the i2CAT facilities (due to firewall restrictions, non-managed public IPs will not work)
  • definition of switches and ports: you may use all of them (for learning_switch/fwd apps) or a more restricted subset
  • matching conditions: typically the VLAN is enough. Note that, if you are connecting to another testbed, you should set here 1) the edge switches and ports connecting the two testbeds, and 2) the VLAN that you previously requested to define the dedicated external network between facilities (see the Interconnecting testbeds section)
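As an illustration of submitting such an RSpec from the command line with OMNI, a minimal sketch follows; the slice name, RSpec filename and aggregate manager URL are placeholders, not actual i2CAT values.

```shell
# Hypothetical example: submit an OFAM RSpec via OMNI (GENI AM API v2).
# Replace the AM URL, slice name and RSpec path with your own values.
omni.py -a https://ofam.example.i2cat.net:443/xmlrpc/am \
        createsliver myslice openflow_request.rspec
```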

Configuring resources


The VLAN you pick for your flowspace directly impacts how the traffic must be generated. You should run the following in any VM at i2CAT:
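The exact commands depend on your environment; as a sketch, assuming a Linux VM whose data-plane interface is eth1, an RSpec VLAN of 1234 and a 192.168.10.0/24 experiment subnet (all three are assumptions, not i2CAT-specific values), tagged traffic can be generated through a VLAN subinterface:

```shell
# Sketch only: eth1, VLAN 1234 and 192.168.10.0/24 are placeholders.
sudo ip link add link eth1 name eth1.1234 type vlan id 1234  # create tagged subinterface
sudo ip addr add 192.168.10.1/24 dev eth1.1234               # address for the experiment
sudo ip link set dev eth1.1234 up                            # bring it up
```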

Similarly, the following route configuration must be done in the iMec VirtualWall2 node:
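Again as an assumption-laden sketch (the interface name and subnet below are placeholders for your actual experiment interface and the subnet used on the i2CAT side), the VirtualWall2 node needs a route towards the i2CAT experiment subnet:

```shell
# Sketch only: eth2 and 192.168.10.0/24 are placeholders.
sudo ip route add 192.168.10.0/24 dev eth2
```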


The switches will connect, through the infrastructure, to the controller URL you define in your flowspace. To manage them, you must run your controller on a reachable IP (within i2CAT’s facilities: either inside the experimenter’s VPN or on one of the public IPs under i2CAT’s premises).
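Before relying on the flowspace, it can be useful to check that the controller endpoint is reachable from within the i2CAT network. A quick sketch, where the IP keeps the document's X placeholder and 6633 is the conventional OpenFlow 1.0 controller port:

```shell
# Run from a VM inside the i2CAT network; replace IP/port with your controller's.
nc -zv 10.216.12.X 6633   # succeeds only if the controller is listening
```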

Interconnecting testbeds

The i2CAT facilities are connected to those of iMec (see topology diagram above).

Using jFed: select a “Dedicated Ext. Network” from the left sidebar. In the new window, select the “i2cat vlan XXX network edge”, where “XXX” is the VLAN previously requested in your OFAM RSpec.

Editing the RSpec: you should change the following:

  • component_manager_id: in the first node (a server or VM), this string (URN) defines the VLAN that connects the testbeds
  • component_manager’s name: in the link between the testbeds, this string (URN) defines the VLAN that connects the testbeds

Defining the connectivity

Once the interconnections are set up and the VMs are properly tagged, it is time to run the controller. You should run it on the IP and port you provided in the OFAM RSpec. This machine must be visible from the switches (i.e., located in the i2CAT network, using either a private or a public IP); however, no specific configuration or tagging is needed on the controller machine.

Some typical controllers are POX or Ryu (small footprint), and ONOS or ODL (larger, production-like deployments).

Find some working examples below, using static routing (POX) and a learning switch (ONOS). These assume you have reserved the whole topology under one specific VLAN of your choice. After you run and set up the controller, you can ping between VMs within the servers at i2CAT and/or at the VirtualWall (iMec) facilities.
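For instance, assuming the VMs on both sides share a subnet (the IP below is a placeholder, not an i2CAT-assigned address), connectivity can be verified with a plain ping from one VM to the other:

```shell
# Placeholder: 192.168.10.2 stands for a VM at the other facility.
ping -c 3 192.168.10.2
```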


Download the POX source into your controller VM (e.g., ~/pox) and place the following script under the ~/pox/forwarding folder.

Finally, run the following command to start POX (replace the VLAN with the one defined in your OFAM RSpec):

python ~/pox/pox.py log.level --DEBUG forwarding.f4f_i2cat_vw --vlan=$VLAN


Use the following commands to download the sources, compile them (your VM must have at least 4 GB of free memory), attach to the console, and enable the fwd app (reactive forwarding, a sort of L2 learning switch).
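As a sketch of what such a sequence typically looks like (assuming a recent ONOS release built with Bazel; the repository URL, paths and app name are the standard upstream ones, not i2CAT-specific):

```shell
# Sketch, assuming ONOS 2.x built from source with Bazel.
git clone https://gerrit.onosproject.org/onos
cd onos
bazel build onos                      # compile; needs several GB of free memory
bazel run onos-local -- clean debug   # start a local ONOS instance
# In another terminal: attach to the ONOS CLI and activate reactive forwarding
tools/test/bin/onos localhost
# onos> app activate org.onosproject.fwd
```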