Iris, the reconfigurable radio testbed at Trinity College Dublin, provides virtualized radio hardware to support the experimental investigation of the interplay between radio capabilities and networks. The facility pairs flexible underlying radio and computational resources with various hypervisors, in the form of software radio frameworks, to realize diverse research and testing configurations. We employ the following hardware elements as underlying radio resources:

  • 18 x ceiling-mounted USRP N210s equipped with SBX daughterboards covering 400-4400 MHz Rx/Tx (40 MHz bandwidth);
  • 2 x USRP N210 beam elements with SBX daughterboards covering 400-4400 MHz Rx/Tx (40 MHz bandwidth);
  • Each physical resource is also equipped with 4 CPU cores and 4 GB of RAM.
  • These platforms are connected to a private computational cloud, allowing users to deploy an array of computational environments.

    To expose the functionality of this equipment for a variety of applications, we employ several radio hypervisors, each with different capabilities, organized into two categories: open-standards-compliant and blue-sky-oriented systems. Open-standards-compliant hypervisors include frameworks based on open implementations of proven waveforms, such as OpenBTS or Amarisoft. Blue-sky-oriented hypervisors, on the other hand, freely enable prototyping of wireless systems, as exemplified by GNU Radio. Software Defined Radio (SDR) hypervisors available at the Iris testbed include:

  • GNU Radio SDR,
  • Iris SDR,
  • and the srsLTE 3GPP library.

    In addition to these frameworks, we also offer support for:

  • spectrum visualization using the fosphor GNU Radio block
  • the WiSHFUL framework for the Iris and GNU Radio SDR frameworks
  • the OpenAir-CN Evolved Packet Core networks
  • Docker, LXC, the OpenAirInterface Evolved Packet Core, and migratable containers
  • Plain Ubuntu 14.04
  • Plain Ubuntu 16.04
  • Open vSwitch
  • Future Internet Named Data Networking framework: NDN C++ library with eXperimental eXtensions, Named Data Networking forwarding daemon (NFD), and NDN Repo-Ng (Data Store)

    These software elements can be deployed on the radio hardware or on one of the 40 available virtual machines. Together, these radio hypervisors and software tools enable the realization of heterogeneous radio platforms for composition into networks, as illustrated below. The facility is therefore ideally equipped to investigate the combination of various physical-layer approaches into coexisting or coherent networks. The Iris architecture supports building 5G radio networks that can simplify, automate, and virtualize the delivery of a very diverse set of services over heterogeneous mobile networks.
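Because every radio front-end in the testbed is an SBX daughterboard with a fixed tuning range, a simple up-front check can tell an experimenter whether a planned waveform fits the hardware. The sketch below is illustrative only (the function name and usage are not part of any testbed API); it encodes the 400-4400 MHz Rx/Tx range stated above.

```python
# Hedged sketch: hypothetical helper, not a testbed API.
# The SBX daughterboards on the Iris USRP N210s tune 400-4400 MHz (Rx/Tx),
# so requested carrier frequencies can be sanity-checked before deployment.

SBX_MIN_HZ = 400e6
SBX_MAX_HZ = 4400e6

def carriers_supported(carriers_hz):
    """Return True if every requested carrier lies in the SBX tuning range."""
    return all(SBX_MIN_HZ <= f <= SBX_MAX_HZ for f in carriers_hz)

# e.g. an LTE band 7 downlink carrier (2.66 GHz) plus a 900 MHz GSM carrier
print(carriers_supported([2.66e9, 900e6]))  # True
print(carriers_supported([5.8e9]))          # 5.8 GHz is out of range: False
```

A check like this is useful precisely because the testbed mixes waveform frameworks (OpenBTS, Amarisoft, GNU Radio, srsLTE): each targets a different part of the spectrum, but all must fit the same front-end.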

    Architecture Iris 2

    Functional Layers

    Logically, one can think of the testbed as consisting of four layers, as illustrated in the Figure below. These include:

  • The bottom layer provides the physical elements, such as servers, USRPs, storage, etc., that can be controlled through one of multiple hypervisors.
  • The next layer includes SDR elements such as GNU Radio and srsLTE, an open-source implementation of the LTE standard developed by CONNECT researchers.
  • The next layer corresponds to the virtualized testbed, comprising virtual machines associated to the physical radio units.
  • Finally, at the top sits the definition of each experiment that uses the resources provided by the lower layer.

    Iris's Cloud Based Testbed Manager (CBTM) sets up experimentation units across these functional layers (Figure below) by allowing users to create virtual machines on the physical rack servers and to specify particular software and hardware elements. The CBTM interacts with the KVM hypervisor to provision these resources. All of these elements are available to jFed users for experimentation.

    Functional Layers

    Setting up an Experiment

    Wireless Testbed 

    Users set up an experiment from their own machines using jFed. jFed communicates with the Aggregate Manager (AM) gateway to authenticate the user and to receive information about which resources are available. The AM then interacts with the Cloud Based Testbed Manager (CBTM) to set up Experimentation Units, which act as software-defined radio transmitters and receivers.

    Experimentation Units

    The Experimentation Units enable software-defined radio experimentation by connecting the user to a virtual machine. An Experimentation Unit consists of a virtual machine paired with a USRP N210.

    Up to 16 USRPs can be used in a single experiment, and users simply SSH into the virtual machines to run their experiments.
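The two constraints above (one VM per USRP, at most 16 USRPs per experiment, access via SSH) can be sketched as follows. The hostnames, naming scheme, and function are hypothetical, invented for illustration; only the 16-unit limit and the SSH workflow come from the text.

```python
# Hedged sketch: hostname pattern and helper are hypothetical,
# not the testbed's real addressing plan.

MAX_USRPS = 16  # stated limit: up to 16 USRPs per experiment

def plan_experiment(unit_ids):
    """Map each requested Experimentation Unit to an SSH target."""
    unit_ids = list(unit_ids)
    if len(unit_ids) > MAX_USRPS:
        raise ValueError(f"at most {MAX_USRPS} USRPs per experiment")
    # One SSH target per unit; users run their SDR code inside the VM.
    return [f"ssh experimenter@vm-{uid}.iris.example" for uid in unit_ids]

for cmd in plan_experiment(range(3)):
    print(cmd)
```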

    Aggregate Manager (AM)

    After a user's request arrives, the AM authenticates the user, advertises the resources available to them through RSpecs, and interacts with the CBTM on the user's behalf to instantiate the requested resources. The AM implements the GENI AM API v3 and is written as a wrapper around the reference AM.


    Cloud Based Testbed Management (CBTM)

    The principal responsibility of the CBTM is to create virtual machines on the servers, where users can run experiment-specific software. The CBTM interacts with the KVM hypervisor. We are currently running CBTM v2.0, which is written in Python and uses the libvirt library to interact with KVM.
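To make the CBTM-to-KVM step concrete: libvirt defines guests from an XML domain description. The sketch below builds a minimal domain definition for one Experimentation Unit VM (4 cores and 4 GB of RAM, matching the hardware described earlier) using only the standard library; it is an illustration of the mechanism, not the CBTM's actual code, and the VM name is invented.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: a minimal libvirt domain definition, of the kind a
# CBTM-like manager could generate per Experimentation Unit VM.
# The VM name and OS details are hypothetical.

def domain_xml(name, vcpus=4, mem_mib=4096):
    """Build a minimal KVM domain description (4 cores / 4 GB by default)."""
    dom = ET.Element("domain", type="kvm")
    ET.SubElement(dom, "name").text = name
    ET.SubElement(dom, "vcpu").text = str(vcpus)
    ET.SubElement(dom, "memory", unit="MiB").text = str(mem_mib)
    os_el = ET.SubElement(dom, "os")
    ET.SubElement(os_el, "type", arch="x86_64").text = "hvm"
    return ET.tostring(dom, encoding="unicode")

xml = domain_xml("iris-exp-unit-01")
print(xml)

# A real deployment would hand this string to libvirt, e.g.:
#   import libvirt
#   conn = libvirt.open("qemu:///system")
#   conn.defineXML(xml)
```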

    CBTM Overall

    Available Software for Testbed

    Iris testbed overview and basic experiments with jFed


    Project Name     Years
    CREW             2005-2010 (Complete)
    FORGE            2013-2016 (Complete)
    FUTEBOL          2016-2019
    Fed4FIRE         2012-2016 (Complete)
    Fed4FIRE Plus    2017-2021
    eWINE            2016-2018 (Complete)
    WiSHFUL          2015-2018 (Complete)
    5GINFIRE         2017-2019
    ORCA             2017-2019