The Continuous Deployment lab provided by AT&T for Radio Edge Cloud (REC) currently has five Nokia OpenEdge servers and three Dell servers dedicated to it. Additional Dell, HP, and OpenEdge servers are also available and/or in use, but are not dedicated to REC validation.

Logs from the CD installation of Telco Appliance (the parent blueprint for REC) are available at

Logs from the post-install of the O-RAN-SC's RAN Intelligent Controller are available at

Logs from the Telco Appliance Cloud Test Automation Framework are available at

The basic structure of the CD part of the flow is an hourly check for new ISO images. If a new ISO build is found, it is downloaded and a deployment to a bare-metal OpenEdge cluster is attempted. If the ISO installs successfully, the RIC install is attempted as a post-install job and a set of tests is run automatically.
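The hourly check-deploy-test cycle can be sketched as follows. This is an illustrative outline only, not the actual Jenkins job definitions: the stage callables and the checksum-based "new build" detection are assumptions for the sketch.

```python
import hashlib

def run_cd_cycle(fetch_iso, deploy, install_ric, run_tests, state):
    """One hourly CD cycle, as described above.

    fetch_iso/deploy/install_ric/run_tests stand in for the real Jenkins
    stages (hypothetical names); `state` remembers the checksum of the last
    ISO that was deployed so unchanged builds are skipped.
    """
    iso = fetch_iso()
    sha = hashlib.sha256(iso).hexdigest()
    if sha == state.get("last_sha"):
        return "skipped"            # no new ISO build this hour
    if not deploy(iso):             # bare-metal TA install attempt
        return "deploy-failed"
    state["last_sha"] = sha
    if not install_ric():           # RIC install as a post-install job
        return "ric-failed"
    run_tests()                     # automated validation tests
    return "deployed"
```

For example, running two cycles against the same ISO bytes deploys once and then skips, because the recorded checksum has not changed.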

Currently the only ISO is the Telco Appliance (TA) ISO. The REC Continuous Deployment Jenkins (the private, per-contributor part of the diagram below) installs TA on bare metal and then installs the RIC using a REC workflow script. The future plan is to incorporate the RIC at build time rather than at install time. When this happens, the Akraino Continuous Integration Jenkins (the shared/public part of the diagram below) will produce a REC ISO in addition to the TA ISO. The TA ISO will continue to exist and be available for creating new blueprints in the TA blueprint family, but REC will no longer need it, because the REC ISO will be built directly from the TA Gerrit repositories and build products (e.g. RPMs), combined with RIC components from the O-RAN-SC Gerrit and image repository. For now only the TA ISO exists, because the CI jobs that integrate RIC source code and TA source code into a combined REC ISO have not yet been built.

Continuous Integration → Continuous Deployment Flow

Akraino Release 1 Continuous Deployment Hardware

The OpenEdge servers are manufactured by Nokia and published under the Open Compute Project as

Sales Item

  • AF RMC module for OE chassis
  • AF OE packaging for OE 3U chassis
  • AF OE Shelf for OE chassis
  • AF 2kW AC PSU for OE chassis (F to R)
  • AF 25GbE dual port LP-MD2 NIC card CX5
  • AF 25GbE dual port OCP NIC card CX5
  • AF CPU heat sink type A
  • AF 6210U Intel processor 20c 2.5GHz 150W
  • AF OE riser card type A for OE 1U server
  • AF OE Server L6 Barebone 1U C621
  • AirFrame OE 3U Chassis
  • AF SSD 480GB SATA 1dwpd M.2 2280
  • AF SSD 960GB SATA 3dwpd 2.5 inch


Chassis Level Specification

Total Physical Compute Cores: 100 (200 vCPUs)

Total Physical Compute Memory: 960GB

Total SSD-based OS Storage: 4.8TB (10 x 480GB SSDs)

Total Application-based Raw Storage: 9.60TB (10 x 960GB SSD)

Networking per Server:

  • Apps - 4 x 25GbE (per server)
  • DCIM - 2 x 10GbE + 1 x 1GbT (shared)

Power: 2 x 2kW AC PSUs

Cooling: Front-to-Rear AirFlow
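The chassis totals above can be cross-checked against per-blade figures. The per-blade numbers below are inferred from the totals and the sales-item list (one single-socket 20-core 6210U per blade, two SSDs of each type per blade), not taken from a spec sheet:

```python
# Sanity-check the chassis-level totals from assumed per-blade figures.
BLADES = 5                                 # 3U chassis holds 5 server blades
CPUS_PER_BLADE, CORES_PER_CPU = 1, 20      # AF 6210U Intel processor, 20c
MEM_PER_BLADE_GB = 192                     # inferred: 960GB / 5 blades
OS_SSDS_PER_BLADE, OS_SSD_GB = 2, 480      # AF SSD 480GB M.2 (OS storage)
APP_SSDS_PER_BLADE, APP_SSD_GB = 2, 960    # AF SSD 960GB 2.5" (app storage)

total_cores = BLADES * CPUS_PER_BLADE * CORES_PER_CPU           # 100
total_vcpus = total_cores * 2                                   # hyper-threading
total_mem_gb = BLADES * MEM_PER_BLADE_GB                        # 960
total_os_tb = BLADES * OS_SSDS_PER_BLADE * OS_SSD_GB / 1000     # 4.8
total_app_tb = BLADES * APP_SSDS_PER_BLADE * APP_SSD_GB / 1000  # 9.6
```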

Shown: 3U chassis with 5 server blades (one partially removed) and two power supplies (hidden below the partially removed blade)

Shown: One server blade, 1U high by half width of a 19 inch rack

Additional Radio Edge Cloud Hardware (not currently in CD Flow)

  • HP DL380 Generation 9
  • HP DL380 Generation 10
  • Dell R740xd

Radio Edge Cloud Network Cabling

Each server is cabled to the top-of-rack switch with two pairs of 10Gbps cabling. Future plans include 25Gbps, but that is not in CD at this time. The Nokia OpenEdge chassis includes one internal switch, so a single uplink from the chassis to the top of rack provides an out-of-band (OOB) network connection to the baseboard management controller (BMC) on each server blade. For the DL380 and R740xd servers, a separate OOB cable to the top of rack is needed for each server. Several subnets and VLANs are allocated as indicated below, and the top-of-rack switch/router provides uplinks to the Radio Access Network (simulated, in the case of a lab environment) and to a remote management/support network.
