
Licensing


Radio Edge Cloud (REC) is Apache 2.0 licensed. The goal of the project is the packaging and installation of upstream Open Source projects. Each of those upstream projects is separately licensed. For a full list of the packages included in REC, refer to https://logs.akraino.org/production/vex-yul-akraino-jenkins-prod-1/ta-ci-build-amd64/313/work/results/rpmlists/rpmlist (the 313 in this URL is the Akraino REC/TA build number; see https://logs.akraino.org/production/vex-yul-akraino-jenkins-prod-1/ta-ci-build-amd64/ for the latest build). All of the upstream projects that are packaged into the REC/TA build image are Open Source.

Introduction

This document outlines the steps to deploy a Radio Edge Cloud (REC) cluster, which has a minimum of three controller nodes and may optionally include worker nodes. REC was designed from the ground up to be a highly available, flexible, and cost-efficient system for the use and support of Cloud RAN and 5G networks. The production deployment of Radio Edge Cloud is intended to be done using the Akraino Regional Controller, which was significantly enhanced during the Akraino Release 1 timeframe, but for evaluation purposes it is possible to deploy REC without the Regional Controller. Regardless of whether the Regional Controller is used, the installation process is cluster oriented: the Regional Controller or a human being initiates the process on the first controller in the cluster, and that controller then automatically installs an image onto every other server in the cluster using IPMI and Ironic (from OpenStack) to perform a zero touch install.

In a Regional Controller based deployment, the Regional Controller API is used to upload the REC Blueprint YAML (available from the REC repository), which tells the Regional Controller where to obtain the REC ISO images, the REC workflows (executable code for creating, modifying and deleting REC sites) and the REC remote installer component (a container image that is instantiated by the create workflow and that then invokes the REC Deployer, located in the ISO DVD disc image file, which conducts the rest of the installation).

The instructions below skip most of this and directly invoke the REC Deployer from the Baseboard Management Controller (BMC), integrated Lights Out (iLO) or integrated Dell Remote Access Controller (iDRAC) of a physical server. The basic workflow of the REC deployer is to copy a base image to the first controller in the cluster and then read the contents of a configuration file (typically called user_config.yaml) to deploy the base OS and all additional software to the rest of the nodes in the cluster.

...

Recent builds can be obtained from the Akraino Nexus server. Choose either "latest" or a specific build number: use the old release images directory for builds prior to the AMD/ARM split, or the AMD64 or ARM64 build directories for newer builds, and download the file install.iso.

Akraino Release | REC or TA ISO Build | Build Date | Notes
1 | Build 9. This build has been removed from Nexus (probably due to age). | 2019-05-30 | Build number 9 is known to NOT work on Dell servers or any of the ARM options listed below. If attempting to install on Dell servers, it is suggested to use builds from no earlier than June 10th.
2 | Build 237. This build has been removed from Nexus (probably due to age). | 2019-11-18 | It is possible that there may still be some issues on Dell servers. Most testing has been done on Open Edge. Some builds between June 10th and November 18th have been successfully used on Dell servers, but because of a current lack of Remote Installer support for Dell (or indeed anything other than Open Edge), the manual testing is not as frequent as the automated testing of REC on Open Edge. If you are interested in testing or deploying on platforms other than Open Edge, please join the Radio Edge Cloud Project Meetings.
3 - AMD64 | Build 237. This build has been removed from Nexus (probably due to age). | 2020-05-29 | This is a minor update to Akraino Release 2 of AMD64 based Radio Edge Cloud.
3 - ARM64 | Arm build 134. This build has been removed from Nexus (probably due to age). | 2020-04-13 | This is the first ARM based release of Radio Edge Cloud.
4 - AMD64 | | 2020-11-03 | The ARM build is unchanged since Release 3. The Dell server caveats noted for Release 2 still apply.

Options for booting the ISO on your target hardware include NFS, HTTP, or a USB memory stick. You must place the ISO in a suitable location (e.g., an NFS server, an HTTP(S) server, or a USB memory stick) before starting the boot process. The file bootcd.iso, which is in the same directory, is used only when deploying via the Akraino Regional Controller using the Telco Appliance Remote Installer. You can ignore bootcd.iso when following the manual procedure below.
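As an illustration only, a build could be fetched and staged on an NFS server roughly as follows; the URL path, local directory, and subnet are placeholders and not part of the official procedure:

   # Fetch install.iso from the Nexus build directory chosen above
   # (replace ISO_URL with the actual link copied from the directory listing)
   ISO_URL="https://nexus.akraino.org/REPLACE/WITH/REAL/PATH/install.iso"
   sudo mkdir -p /srv/nfs/rec
   sudo wget -O /srv/nfs/rec/install.iso "$ISO_URL"

   # Export the directory over NFS so a BMC/iLO/iDRAC can mount the image
   # (192.0.2.0/24 is a placeholder for your management network)
   echo "/srv/nfs/rec 192.0.2.0/24(ro,no_root_squash)" | sudo tee -a /etc/exports
   sudo exportfs -ra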

...


Nokia OpenEdge Servers

Using the BMC, configure a userid and password on each blade and ensure that the VMedia Access checkbox is checked.

The expected physical configuration, as described in the Radio Edge Cloud Validation Lab, is that each server in the cluster has two 480 GB SATA (1 DWPD) M.2 2280 SSDs on a riser card inside the server and two 960 GB SATA (3 DWPD) 2.5 inch SSDs on the front panel. No RAID configuration is used. The reference implementation in the Radio Edge Cloud Validation Lab uses one M.2 drive as the physical volume for LVM and both 2.5 inch SSDs as Ceph volumes.
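Once the BMC userid and password are in place, it can be worth confirming that IPMI over LAN is reachable from your workstation before starting the remote installation; the address and credentials below are placeholders:

   # Expect a "System Power : on" (or "off") line if the BMC, userid and password are correct
   ipmitool -I lanplus -H 192.0.2.21 -U <bmc-user> -P <bmc-password> chassis status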



HP Servers



Dell Servers

Provision the disk configuration of the server via iDRAC such that the desired disks will be visible to the OS in the desired order. The installation will use /dev/sda as the root disk and /dev/sdb and /dev/sdc as the Ceph volumes.
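If an operating system or rescue image is already running on the server, the resulting disk ordering can be double-checked before starting the installation; this optional check is not part of the official procedure:

   # The installer expects sda as the root/LVM disk and sdb/sdc as the Ceph volumes
   lsblk -d -o NAME,SIZE,MODEL,SERIAL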



Ampere Servers

Darrin Vallis

Download and print the hardware configuration guide for the REC test installation

Each server requires 2 SSDs

Each server requires 3 NIC ports and 1 BMC connection

A "dumb" switch or VLAN is connected to two NIC ports

One NIC port and the BMC are connected to the router via a switch

The REC ISO will recognize Ampere-based Lenovo HR330 1U, HR350 2U or openEDGE sleds with the "Hawk" motherboard

Designate one server as Node 1. It runs all of the code to complete the installation

Boot each server with a monitor attached to the VGA port. Note the BMC IP address.

Boot Node 1 into the operating system. Note the Linux names for all Ethernet ports on the hardware guide.

Download and edit user_config.yaml (a quick sanity check is shown after this list):

  • cidr is the range of IP addresses for each network
  • infra_external is the network with external network access
  • infra_internal is the VLAN or dumb-switch network without external network access
  • infra_storage is the set of IP addresses on the internal network used for storage
  • interface_net_mapping must be set with the NIC port names previously obtained for Node 1
  • hwmgmt lists the IP addresses of all BMCs; Node 1 is the master
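A quick sanity check of the edited file before starting the deployment; this assumes python3 with PyYAML is available on the workstation, and it only checks syntax and the presence of the keys listed above, not the full schema:

   python3 -c "import yaml; yaml.safe_load(open('user_config.yaml')); print('user_config.yaml: YAML syntax OK')"
   grep -nE 'cidr|infra_external|infra_internal|infra_storage|interface_net_mapping|hwmgmt' user_config.yaml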



Marvell Servers

@ Carl Yang <carlyang@marvell.com>


...


Nokia OpenEdge Servers

Log in to the controller-1 BMC IP address using a web browser (https://xxx.xxx.xxx.xxx).

Go to Settings/Media Redirection/General Settings.

Select the Remote Media Support.

Select the Mount CD/DVD.

Type the NFS server IP address.

Type the NFS share path.

Select nfs as the Share Type for CD/DVD.

Click Save.

Click OK to restart the VMedia Service.

Go to Settings/Media Redirection/Remote Images.

Select the image for the first CD/DVD device from the drop-down list.

Click the play button to map the image with the server’s CD/DVD devices. The Redirection Status changes to Started when the image redirection succeeds.

Go to Control & Maintain/Remote Control to open the Remote Console.

Reset the server.

Press F11 to enter the boot menu and select boot from the CD/DVD device.



HP Servers

Log in to the iLO of Controller 1 for the installation

Go to Remote Console & Media

Scroll to HTML 5 Console

  • URL: enter http://XXX.XXX.XXX.XX:XXXX/REC_RC1/install.iso in the Virtual Media URL field

  • NFS: enter <IP of the NFS server>/<file path>/install.iso as the Virtual Media URL

Check “Boot on Next Reset”, then select Insert Media

Reset System
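Before inserting the media, it can help to confirm that the web server actually serves the image; the address below is a placeholder for the URL entered above:

   # Expect HTTP 200 and a Content-Length roughly matching the ISO size
   curl -I http://192.0.2.10:8080/REC_RC1/install.iso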



Dell Servers

Go to Configuration/Virtual Media

Scroll down to Remote File Share and enter the URL for the ISO into the Image File Path field:

  • URL: http://XXX.XXX.XXX.XX:XXXX/REC_RC1/install.iso

  • NFS: <IP of the NFS server>/<file path>/install.iso

 Select Connect.

Open Virtual Console, and go to Boot

Set Boot Action to Virtual CD/DVD/ISO

Then Power/Reset System

Be sure to read the note below on Dell servers



Ampere Servers

Darrin Vallis

Download install.aarch64.iso (the latest Telco Appliance build) from the TA repository; use build 273 or later. Note 1/24/20: waiting for the Jenkins build to incorporate the Hawk hardware detector; until then, use rec_install_testbuild.iso for Hawk-based openEDGE sleds. A complete list of ARM aarch64 builds is available from the ARM64 build directory referenced above.

Make install.aarch64.iso available on an NFS share on a Linux file system in the same network as the REC cluster

Download and unzip ampere_virtual_media_v2.zip

Edit mount_media.sh and dismount_media.sh: set IPMI_LOCATION to the IP address of the Node 1 BMC, NFS_IP to the IP of the NFS server, and ISO_LOCATION to the NFS path of install.aarch64.iso (see the example after these steps)

Run mount_media.sh. This will connect install.aarch64.iso as a CDROM on Node 1

Boot Node 1 into the BIOS. Force boot from CD by selecting the "Save & Exit" tab in the BIOS, then Boot Override → CD-ROM

REC Telco Appliance will begin installation

See instructions below
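As an illustration, the three variables mentioned above might look like this once edited; the addresses and path are placeholders, and the rest of mount_media.sh is not reproduced here:

   # Near the top of mount_media.sh and dismount_media.sh
   IPMI_LOCATION=192.0.2.31                      # BMC IP address of Node 1
   NFS_IP=192.0.2.5                              # NFS server exporting the ISO
   ISO_LOCATION=/export/rec/install.aarch64.iso  # NFS path to install.aarch64.iso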




Marvell Servers

@ Carl Yang <carlyang@marvell.com>


...

Note:  When the deployment to all the nodes has completed, “controller-1” will reboot automatically.

Note: Special Attention Required on Dell

A note on deploying on Dell servers:

Currently, a manual step is required when doing an installation on Dell servers.  After the networking has been set up and the deployment has started, the following message will be shown on the console screen on controller-2 and controller-3:

[Console screenshot]

At this point, both controller-2 and controller-3 should be set to boot from virtual CD/DVD/ISO.

To do this:

  • Log on to the iDrac web interface
  • Select "Launch Virtual Console"
  • In the Virtual Console:
    • Select "Boot | Virtual CD/DVD/ISO" and confirm
    • Select "Power | Reset System (Warm Boot)" and confirm

Again, this needs to be done for both controller-2 and controller-3.  After this, the installation should continue normally.
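If IPMI over LAN is enabled on the iDRACs, the same two actions can often be scripted instead of using the web console; this is a sketch with placeholder addresses and credentials, not part of the official procedure:

   # For controller-2 and controller-3: boot from (virtual) CD/DVD on next boot, then reset
   for bmc in 192.0.2.32 192.0.2.33; do
      ipmitool -I lanplus -H "$bmc" -U <idrac-user> -P <idrac-password> chassis bootdev cdrom
      ipmitool -I lanplus -H "$bmc" -U <idrac-user> -P <idrac-password> chassis power reset
   done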


As a reference, during this time, viewing the file /srv/deployment/log/cm.log on controller-1 will show the following:

   FAILED - RETRYING: Verify node provisioning state. Waiting for 60mins max. (278 retries left).

   FAILED - RETRYING: Verify node provisioning state. Waiting for 60mins max. (277 retries left).

   FAILED - RETRYING: Verify node provisioning state. Waiting for 60mins max. (276 retries left).

This will continue until the above manual step is completed or a timeout happens.  After the manual step, the following messages will appear:

   ok: [controller-2 -> localhost]

   ok: [controller-3 -> localhost]
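To follow this progress in real time, the same log can simply be watched on controller-1:

   tail -f /srv/deployment/log/cm.log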

Verifying Deployment

A post-installation verification is required to ensure that all nodes and services were properly deployed.
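The detailed checks are described below. As a rough illustration of the kind of verification involved, once the deployment has finished one would typically confirm from controller-1 that every node has joined the Kubernetes cluster and that no pods are stuck; this is a sketch, not the authoritative procedure:

   # All controller (and any worker) nodes should be listed and Ready
   kubectl get nodes -o wide
   # List any pods that are not Running or Completed
   kubectl get pods --all-namespaces | grep -vE 'Running|Completed'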

...