Installing a new RC on a VM from the Build Server is a subset of the process of installing a new RC on a bare metal server. The major difference is that the Build Server does not configure the target RC server's BIOS or install the Linux operating system; it installs only the Network Cloud Regional Controller software.
This installation procedure creates a new Regional Controller on a pre-prepared VM. The VM which will become the RC is termed the 'Target RC' or just 'Target VM' in this guide.
Unlike an RC built by the Build Server on a bare metal server, this installation is performed directly on a pre-prepared Ubuntu 16.04 VM and installs only the Network Cloud specific and other supporting software packages on the Target VM to create a new Regional Controller. Once the RC is built, it is used to subsequently deploy either Rover or Unicycle pods.
The installation procedure is executed directly on the Target VM and automatically installs the following:
- Network Cloud Regional Controller specific software, including
- PostgreSQL DB
- Camunda Workflow and Decision Engine
- Akraino Web Portal
- LDAP configuration
- A number of supporting supplementary software components, including
- OpenStack Tempest tests
- YAML builds
- ONAP scripts
- Sample VNFs
During the later stages of the installation, the Target Server's 'host' interface must have connectivity to the internet in order to download the necessary repos and packages.
When the RC is installed on a VM, an Ubuntu 16.04 Linux operating system must be installed and updated before the RC can be built.
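As a quick sanity check (not part of the official procedure), the Target VM's OS version can be verified before starting, using the standard `/etc/os-release` file:

```shell
# Confirm the Target VM is running Ubuntu 16.04 before starting the install
if grep -q 'VERSION_ID="16.04"' /etc/os-release; then
    echo "OS check passed: Ubuntu 16.04"
else
    echo "WARNING: this VM does not appear to be running Ubuntu 16.04"
fi
```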
Preflight RC Region Specific Input Data
Deploying the RC
RC Specific Software Installation
If you haven't done so already, elevate yourself to root:
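For example:

```shell
# Become the root user for the remainder of the installation
sudo -i
```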
Clone the Akraino Regional Controller repository:
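A sketch of the clone step; both the repository URL (based on the Akraino Gerrit) and the destination directory are assumptions here, so substitute the values for your release if they differ:

```shell
# Clone the Regional Controller repository (URL and target directory are
# assumptions -- verify against your release notes)
git clone https://gerrit.akraino.org/r/regional_controller /opt/akraino/region
```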
Change to the /opt/akraino/region directory and run the start_regional_controller.sh script:
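For example (piping the output to a log file is a convenience, not part of the official procedure):

```shell
cd /opt/akraino/region
# Run the installer; keep a log so progress can be reviewed afterwards
./start_regional_controller.sh 2>&1 | tee /tmp/rc_install.log
```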
This step will take 10 to 20 minutes.
A successful installation will end with the following message:
Note: The IP address shown (10.51.34.230) is an example 'host' address for an RC deployed in a validation lab.
The Regional Controller Node installation is now complete.
At this point there will be one new directory containing the cloned NC artifacts.
Please note: It will be necessary to generate rsa keys on the newly commissioned RC. The public key must then be copied into the 'genesis_ssh_public_key' attribute of the site input yaml file used when subsequently deploying each Unicycle pod at any edge site controlled by the newly built RC. This is covered in the Unicycle installation instructions.
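A minimal sketch of the key generation step, assuming keys are generated for root with no passphrase (adjust to your site's security policy):

```shell
# Generate an rsa key pair on the new RC if one does not already exist
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# Print the public key; this value is pasted into the
# 'genesis_ssh_public_key' attribute of each Unicycle site input yaml file
cat /root/.ssh/id_rsa.pub
```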
Accessing the new Regional Controller's Portal UI
During the installation, a UI will have been installed on the newly deployed RC. This UI is used to subsequently deploy all Rover and Unicycle pods to edge locations. The RC's portal can be opened in Chrome via the portal URL.
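The URL takes the following form, where the port shown is an assumption; confirm the portal port for your release:

```
http://TARGET_SERVER_IP:8080
```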
TARGET_SERVER_IP is the RC's 'host' IP address. Note: IE or Edge browsers may not currently work with this UI.
Use the following credentials:
- Username: akadmin
- Password: akraino
Upon successful login, the Akraino Portal home page will appear. Please note that the extra entries under the MTN3 site are present because this screenshot was taken after a Unicycle pod had been deployed from this RC.