This guide describes how to build and install an Akraino Edge Stack (AES) Regional Controller node.


Overview

The Regional Controller Node installation includes the following components:

Operating System

  • Redfish Integrated Dell Remote Access Controller (iDRAC) bootstrapping and hardware configuration
  • Linux OS (Ubuntu)

Regional Controller

  • PostgreSQL DB
  • Camunda Workflow and Decision Engine
  • Akraino Web Portal
  • LDAP configuration

Supplementary Components

Various supporting files are also installed on the Regional Controller, including:

  • OpenStack Tempest tests
  • YAML builds
  • ONAP scripts
  • Sample VNFs

 


Info

This installation guide refers to the following by way of example:

  • 192.168.2.43 (aknode43): Build Server (Linux server with a Docker container)
  • 192.168.2.44 (aknode44): Bare Metal Server
  • 192.168.41.44: Bare Metal Server iDRAC

Steps herein presume the use of a root account. All steps are performed from the Build Server.

A clean, out-of-the-box Ubuntu environment is strongly recommended before proceeding.

Prerequisites

AES Regional Controller installation is orchestrated from a Build Server acting upon a Bare Metal Server.

Build Server

  • Any server or VM with Ubuntu Release 16.04
  • Latest version of the following apt packages (an example install command follows this list):
    • docker    (used to run the DHCP and web containers)
    • python    (used for Redfish API calls to the Bare Metal Server)
    • python-requests    (used for Redfish API calls to the Bare Metal Server)
    • python-pip    (used to install the HPE Redfish tools)
    • sshpass    (used to copy keys to the new server)
    • xorriso    (used to extract Ubuntu files to the web server)
    • make    (used to build the custom iPXE EFI file used during Bare Metal Server boot)
    • gcc    (used to build the custom iPXE EFI file used during Bare Metal Server boot)
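
A minimal sketch of installing these prerequisites on the Build Server. The package names assume the stock Ubuntu 16.04 repositories; in particular, docker.io is assumed as the Docker package, so substitute your preferred Docker distribution if you install Docker from another source.

Code Block
languagebash
# Install the Build Server prerequisites (example only; assumes stock Ubuntu 16.04 package names)
apt-get update
apt-get install -y docker.io python python-requests python-pip sshpass xorriso make gcc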

Bare Metal Server

  • Dell PowerEdge R740 Gen 14 or HPE DL380 Gen10 server with no installed OS [additional hardware types will be supported in a future release]
  • Two network interfaces, bonded for primary network connectivity
  • 802.1q VLAN tagging on the primary network interfaces

System Check

Build Server 

Ensure the Build Server is running Ubuntu Release 16.04 (specifically) and that the Docker version is 1.13.1 or newer:

...

Code Block
languagebash
# apt list python python-requests python-pip sshpass xorriso make gcc 
Listing... Done 
gcc/xenial,now 4:5.3.1-1ubuntu1 amd64 [installed] 
make/xenial,now 4.1-6 amd64 [installed,automatic] 
python/xenial-updates,now 2.7.12-1~16.04 amd64 [installed] 
python-pip/xenial-updates,xenial-updates,now 8.1.1-2ubuntu0.4 all [installed] 
python-requests/xenial-updates,xenial-updates,now 2.9.1-3ubuntu0.1 all [installed] 
sshpass/xenial,now 1.05-1 amd64 [installed]
xorriso/xenial,now 1.4.2-4ubuntu1 amd64 [installed]

Network Connectivity 

The Build Server must have connectivity to the Bare Metal Server iDRAC interface on ports 80 (http) and 443 (https).
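
A quick way to confirm reachability from the Build Server is to probe the iDRAC over both ports. This sketch uses the example iDRAC address from this guide (192.168.41.44) and only checks that the ports answer; the /redfish/v1/ path is the standard Redfish service root on iDRAC.

Code Block
languagebash
# Verify connectivity to the Bare Metal Server iDRAC on ports 443 and 80 (example address from this guide)
curl -sk -o /dev/null -w "https status: %{http_code}\n" https://192.168.41.44/redfish/v1/
curl -s -o /dev/null -w "http status: %{http_code}\n" http://192.168.41.44/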

...

Verification of the Build Server and Bare Metal Server primary networks is beyond the scope of this guide.

Installation

Repository Cloning

Repositories are located under /opt/akraino:

...

Code Block
languagebash
## Download the latest Regional_controller artifacts from LF Nexus ## 


mkdir -p /opt/akraino/region
NEXUS_URL=https://nexus.akraino.org
curl -L "$NEXUS_URL/service/local/artifact/maven/redirect?r=snapshots&g=org.akraino.regional_controller&a=regional_controller&v=0.0.2-SNAPSHOT&e=tgz" | tar -xozv -C /opt/akraino/region
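
After the download completes, a simple listing confirms the artifacts were extracted where the later steps expect them:

Code Block
languagebash
# Confirm the Regional Controller artifacts were extracted
ls -l /opt/akraino/region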


Configuration

Copy the Bare Metal Server configuration template into /opt/akraino/server-config/AKRAINO_NODE_RC, where AKRAINO_NODE_RC is the Bare Metal Server name followed by rc (for example, aknode44rc):
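
A minimal sketch of this step. The source filename serverrc.template is an assumption for illustration only; use the template actually shipped with the tools. The target name aknode44rc matches the example server name used throughout this guide.

Code Block
languagebash
# Copy the configuration template and edit it for the target server
# (the source filename below is illustrative; use the template provided with the tools)
cp /opt/akraino/server-config/serverrc.template /opt/akraino/server-config/aknode44rc
vi /opt/akraino/server-config/aknode44rc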

...

Code Block
languagebash
# host name for server
SRV_NAME=aknode44

# server oem - Dell or HPE (case sensitive)
SRV_OEM=Dell

# out of band interface information for server (idrac/ilo/etc)
SRV_OOB_IP=192.168.41.44
SRV_OOB_USR=root
SRV_OOB_PWD=ROOT_PASSWORD

# mac address of server to be used during the build - not required for Dell servers
# SRV_MAC=3c:fd:fe:b8:10:60

# name of network interface used during build when ipxe.efi is booted and when os is booted
# ipxe numbers ports from 0-n in pci bus order.
# the netx value will depend on how many nics are in the server
# and which pci device number is assigned to the slot
SRV_IPXE_INF=net8

# the build interface is the nic used by the Ubuntu installer to load the OS
SRV_BLD_INF=enp135s0f0

# the boot device is the device name on which the OS will be loaded
SRV_BOOT_DEVICE=sdg

# ipxe script to use - based on the os version and kernel to install
# valid options are script-hwe-16.04.5-amd64.ipxe or script-16.04.5-amd64.ipxe
SRV_BLD_SCRIPT=script-hwe-16.04.5-amd64.ipxe

# template xml file to set bios and raid configuration settings
SRV_BIOS_TEMPLATE=dell_r740_g14_uefi_base.xml.template
SRV_BOOT_TEMPLATE=dell_r740_g14_uefi_httpboot.xml.template
SRV_HTTP_BOOT_DEV=NIC.Slot.7-1-1

# VLAN to use during build and for final network configuration
SRV_VLAN=41

# basic network information for dhcp config and final server network settings
SRV_MTU=9000
SRV_IP=192.168.2.44
SRV_SUBNET=192.168.2.0
SRV_NETMASK=255.255.255.0
SRV_GATEWAY=192.168.2.200
SRV_DNS=192.168.2.85
SRV_DOMAIN=lab.akraino.org
SRV_DNSSEARCH=lab.akraino.org
SRV_NTP=ntp.ubuntu.org

# root password for server being built
SRV_PWD=SERVER_PASSWORD

# network bond information
SRV_BOND=bond0
SRV_SLAVE1=enp135s0f0
SRV_SLAVE2=enp135s0f1
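
Before starting the build, a quick sanity check (a sketch, not part of the tooling) confirms that the rc file parses cleanly and that the key values are what you expect. The file name matches the example used in this guide.

Code Block
languagebash
# Source the rc file in a subshell and echo the values the build depends on
( . /opt/akraino/server-config/aknode44rc && \
  echo "name=$SRV_NAME oob=$SRV_OOB_IP ip=$SRV_IP vlan=$SRV_VLAN boot=$SRV_BOOT_DEVICE" )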

Operating System

Begin the OS installation:
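
The exact command is in the collapsed block below. As an illustration only, the invocation typically resembles the sketch that follows; the script name install_server_os.sh, its directory, and its flags are assumptions here, so check the tools installed on your Build Server for the current name before running anything.

Code Block
languagebash
# Illustrative only: start the OS build against the rc file created above
# (script name, directory, and flags are assumptions; verify against the installed tools)
cd /opt/akraino
nohup ./install_server_os.sh --rc /opt/akraino/server-config/aknode44rc --skip-confirm &
tail -f nohup.out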

...

Note that any time estimates printed during the install (e.g., "This step could take up to 15 minutes") and the elapsed times shown are likely inaccurate; the total install time is longer, on the order of hours.

Regional Controller

Update the Akraino run command (rc) file in /opt/akraino/region:
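
The actual file contents are in the collapsed block below. Since they are not reproduced here, the sketch that follows only shows how one might locate the address settings that need to point at this node; it does not name the real variables.

Code Block
languagebash
# Illustrative only: find the rc file shipped with the extracted artifacts and the IP settings to update
cd /opt/akraino/region
ls
grep -rn "IP" .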

...

The Regional Controller Node installation is now complete.

Akraino Portal Operations

Login

Visit the portal URL http://REGIONAL_NODE_IP:8080/AECPortalMgmt/ where REGIONAL_NODE_IP is the Regional Controller node IP address (192.168.2.44 in this guide's example).
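
Before opening a browser, you can confirm the portal is answering with a quick check from any host that can reach it (using the example Regional Controller node IP from this guide):

Code Block
languagebash
# Confirm the portal responds before logging in (example IP from this guide)
curl -s -o /dev/null -w "portal status: %{http_code}\n" http://192.168.2.44:8080/AECPortalMgmt/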

...