Licensing

Radio Edge Cloud (REC) is Apache 2.0 licensed. The goal of the project is the packaging and installation of upstream Open Source projects, each of which is separately licensed. For a full list of packages included in REC, refer to https://logs.akraino.org/production/vex-yul-akraino-jenkins-prod-1/ta-ci-build-amd64/313/work/results/rpmlists/rpmlist (the 313 in this URL is the Akraino REC/TA build number; see https://logs.akraino.org/production/vex-yul-akraino-jenkins-prod-1/ta-ci-build-amd64/ for the latest build). All of the upstream projects that are packaged into the REC/TA build image are Open Source.

Introduction

This document outlines the steps to deploy a Radio Edge Cloud (REC) cluster. A cluster has a minimum of three controller nodes and may optionally include worker nodes. REC was designed from the ground up to be a highly available, flexible, and cost-efficient system for the use and support of Cloud RAN and 5G networks. The production deployment of Radio Edge Cloud is intended to be done using the Akraino Regional Controller, which was significantly enhanced during the Akraino Release 1 timeframe, but for evaluation purposes it is possible to deploy REC without the Regional Controller. Regardless of whether the Regional Controller is used, the installation process is cluster oriented: the Regional Controller or a human being initiates the process on the first controller in the cluster, and that controller then automatically installs an image onto every other server in the cluster, using IPMI and Ironic (from OpenStack) to perform a zero touch install.

...

Recent builds can be obtained from the Akraino Nexus server. Choose either "latest" or a specific build number from the old release images directory (for builds prior to the AMD/ARM split), the AMD64 builds, or the ARM64 builds, and download the file install.iso.
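If you prefer to script the download, the sketch below shows one way to do it. It is not part of the official procedure, and the image directory URL is a placeholder: substitute whichever Nexus directory (old release, AMD64, or ARM64) you selected above.

```shell
# Sketch only: build the install.iso URL for whichever Nexus image
# directory you chose, then download it with curl or wget.
rec_iso_url() {
    # $1 = image directory URL; strip any trailing slash, append install.iso
    printf '%s/install.iso\n' "${1%/}"
}

# "IMAGE-DIR" is a placeholder, not a real path on the Nexus server:
rec_iso_url "https://nexus.akraino.org/IMAGE-DIR/latest"
# To fetch:  curl -L -O "$(rec_iso_url <image-directory-url>)"
```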

Akraino Release: 1
REC or TA ISO Build: Build 9. This build has been removed from Nexus (probably due to age).
Build Date: 2019-05-30
Notes: Build number 9 is known to NOT work on Dell servers or any of the ARM options listed below. If attempting to install on Dell servers, it is suggested to use builds from no earlier than June 10th.

Akraino Release: 2
REC or TA ISO Build: Build 237. This build has been removed from Nexus (probably due to age).
Build Date: 2019-11-18
Notes: It is possible that there may still be some issues on Dell servers. Most testing has been done on Open Edge. Some builds between June 10th and November 18th have been successfully used on Dell servers, but because of a current lack of Remote Installer support for Dell (or indeed anything other than Open Edge), the manual testing is not as frequent as the automated testing of REC on Open Edge. If you are interested in testing or deploying on platforms other than Open Edge, please join the Radio Edge Cloud Project Meetings.

Akraino Release: 3 - AMD64
REC or TA ISO Build: Build 237. This build has been removed from Nexus (probably due to age).
Build Date: 2020-05-29
Notes: This is a minor update to Akraino Release 2 of AMD64 based Radio Edge Cloud.

Akraino Release: 3 - ARM64
REC or TA ISO Build: Arm build 134. This build has been removed from Nexus (probably due to age).
Build Date: 2020-04-13
Notes: This is the first ARM based release of Radio Edge Cloud.

Akraino Release: 4 - AMD64
Build Date: 2020-11-03
Notes: The ARM build is unchanged since Release 3.

Options for booting the ISO on your target hardware include NFS, HTTP, or USB memory stick. You must place the ISO in a suitable location (e.g., an NFS server, an HTTP(S) server, or a USB memory stick) before starting the boot process. The file bootcd.iso, which is in the same directory, is used only when deploying via the Akraino Regional Controller using the Telco Appliance Remote Installer. You can ignore bootcd.iso when following the manual procedure below.
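For the USB option, one common approach (our suggestion, not a step from the official procedure) is to write the ISO to the stick with dd. The device path /dev/sdX is a placeholder; the function below only prints the command, so nothing is overwritten until you run it deliberately.

```shell
# Sketch: print the dd command that would write install.iso to a USB stick.
# /dev/sdX is a placeholder -- confirm the real device with lsblk first,
# because dd overwrites the entire target device.
usb_write_cmd() {
    iso="$1"
    dev="$2"
    printf 'sudo dd if=%s of=%s bs=4M status=progress conv=fsync\n' "$iso" "$dev"
}

usb_write_cmd install.iso /dev/sdX
```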

...


Nokia OpenEdge Servers

Using the BMC, configure a user ID and password on each blade and ensure that the VMedia Access checkbox is checked.

The expected physical configuration, as described in the Radio Edge Cloud Validation Lab, is that each server in the cluster has two 480 GB SATA M.2 2280 SSDs (1 DWPD) on a riser card inside the server and two 960 GB SATA 2.5-inch SSDs (3 DWPD) on the front panel. No RAID configuration is used. The reference implementation in the Radio Edge Cloud Validation Lab uses one M.2 drive as the physical volume for LVM and both 2.5-inch SSDs as Ceph volumes.



HP Servers



Dell Servers

Provision the disk configuration of the server via iDRAC such that the desired disks will be visible to the OS in the desired order. The installation will use /dev/sda as the root disk and /dev/sdb and /dev/sdc as the Ceph volumes.



Ampere Servers

Contact: Darrin Vallis

  • Download and print the hardware configuration guide for the REC test installation.
  • Each server requires 2 SSDs.
  • Each server requires 3 NIC ports and 1 BMC connection.
  • A "dumb" switch or VLAN is connected to two NIC ports.
  • 1 NIC port and the BMC are connected to the router via a switch.
  • The REC ISO will recognize Ampere-based Lenovo HR330 1U, HR350 2U, or OpenEdge sleds with the "Hawk" motherboard.
  • Designate 1 server as Node 1. It runs all the code to complete the installation.
  • Boot each server with a monitor attached to the VGA port. Note the BMC IP address.
  • Boot Node 1 into the operating system. Note the Linux names for all Ethernet ports on the hardware guide.
  • Download and edit user_config.yaml:
      • cidr is the range of IPs
      • infra_external has network access
      • infra_internal is the VLAN or dumb switch without network access
      • infra_storage is the set of IPs on the internal network used for storage
      • interface_net_mapping must be set with the NIC port names previously obtained from Node 1
      • hwmgmt is the IP addresses of all BMCs; Node 1 is the master
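A minimal sketch of how those user_config.yaml entries might fit together is shown below. This mirrors the checklist above, not the exact schema; every address and interface name is a placeholder, and the user_config.yaml template shipped with the build you downloaded is the authoritative reference.

```yaml
# Illustrative sketch only -- values and nesting are placeholders mapped
# to the checklist above; consult the template shipped with the ISO for
# the authoritative schema.
networking:
  infra_external:            # network with outside access
    cidr: 10.65.1.0/24       # cidr = range of IPs
  infra_internal:            # VLAN or "dumb" switch, no outside access
    cidr: 192.168.12.0/24
  infra_storage:             # internal-network IPs used for storage
    cidr: 192.168.13.0/24

hosts:
  node-1:                        # the server designated to drive the install
    interface_net_mapping:       # Linux NIC names noted when Node 1 booted
      infra_external: eno1       # placeholder interface name
      infra_internal: eno2       # placeholder interface name
    hwmgmt:
      address: 10.65.1.201       # BMC IP noted during first boot; Node 1 is master
```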



Marvell Servers

Contact: Carl Yang <carlyang@marvell.com>


...