
Introduction

This installation procedure creates a new Regional Controller on a bare metal server. The bare metal server that will become the RC is referred to as the 'Target RC' or simply the 'Target Server' in this guide.

The Build Server remotely installs the Linux operating system, Network Cloud-specific software, and other supporting packages on the Target Server to create a new Regional Controller. Once the RC is built, it is used to deploy either Rover or Unicycle pods. After the Build Server has completed the creation of the Regional Controller, it plays no further role in any Network Cloud Rover or Unicycle pod deployment.

The installation procedure is executed from the Build Server and automatically performs all the following on the Target Server:

  • Modify the BIOS including DHCP and PXE boot configuration by issuing Redfish API commands to the Target Server's iDRAC or iLO BMC 
  • Install and update an Ubuntu 16.04 operating system
  • Install Network Cloud Regional Controller specific software including
    • PostgreSQL DB
    • Camunda Workflow and Decision Engine
    • Akraino Web Portal
    • LDAP configuration
  • Install a number of supplementary software components including
    • OpenStack Tempest tests
    • YAML builds
    • ONAP scripts
    • Sample VNFs

Preflight requirements

Networking

The Target RC has multiple physical and VLAN interfaces. The Build Server uses different interfaces during the different stages of its creation of an RC on the Target Server. A detailed description of the entire networking setup can be found in the Network Architecture section of this release documentation <INSERT LINK>. In addition, the networking configuration used in the validation labs is contained in the Validation Labs section of this release documentation <INSERT LINK>.

The Build Server must have IP connectivity to the Target Server's dedicated BMC port using ports 80 (http) <is 80 actually used?> and 443 (https) in order to issue Redfish commands to configure the Target Server's BIOS settings. The Target Server's BMC IP address is denoted as <SRV_OOB_IP> in this guide. The Target Server's BMC must be manually preconfigured with the <SRV_OOB_IP> address.
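Beyond a simple port scan, the BMC's Redfish service can be probed directly; both iDRAC and iLO expose the standard Redfish service root at /redfish/v1/. The helper below is a hypothetical convenience, not part of the Akraino tooling:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of the Akraino install scripts): build the
# standard Redfish service-root URL for a BMC address such as <SRV_OOB_IP>.
redfish_root() {
  echo "https://$1/redfish/v1/"
}

# Example usage, with the BMC address from the sample file at the end of
# this page; -k is needed because BMCs typically present self-signed certs:
#   curl -sk "$(redfish_root 10.51.35.146)"
redfish_root 10.51.35.146   # prints: https://10.51.35.146/redfish/v1/
```

A JSON response from that URL confirms the Redfish API is reachable before any BIOS configuration is attempted.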

After setting the Target Server's BIOS, the Build Server (usually) acts as the DHCP server for the Target Server's initial boot. The Redfish API commands automatically configure the Target Server to send its initial DHCP Request from one of its main NICs via the VLAN-tagged 'host' network. The Target Server's 'host' interface and the Build Server's DHCP server interface must therefore be in the same broadcast domain so that the DHCP Request broadcast frame can reach the Build Server. DHCP relay/helper functionality in the TOR could relay the Target Server's DHCP requests across an IP-routed network, removing the need for the two servers to share an L2 domain; however, this has not been verified in the R1 release, and this guide assumes the Build and Target Servers are on the same L2 broadcast domain as described in the detailed networking section.
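Because the R1 release assumes the Build and Target Servers share the 'host' broadcast domain, a quick subnet check on the planned addresses can catch obvious mistakes. This is only a sketch using sample addresses from the example file at the end of this page; sharing an IP subnet does not by itself prove L2 adjacency on the tagged VLAN:

```shell
#!/usr/bin/env bash
# Hypothetical preflight check: do two addresses fall in the same subnet?
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

same_subnet() {
  # $1, $2: host addresses; $3: netmask
  [ $(( $(ip_to_int "$1") & $(ip_to_int "$3") )) \
    -eq $(( $(ip_to_int "$2") & $(ip_to_int "$3") )) ]
}

# SRV_IP and SRV_GATEWAY from the sample config, with SRV_NETMASK:
same_subnet 10.51.34.230 10.51.34.225 255.255.255.224 && echo "same subnet"
```

With the sample values this prints `same subnet`; a silent exit means the two addresses are in different subnets and the DHCP broadcast would not reach the Build Server.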

During the later stages of the installation the Target Server's 'host' interface must have internet connectivity in order to download the necessary repos and packages.

Software

When the RC is installed on a new bare metal server no software is required on the Target Server. All software will be installed from the Build Server and/or external repos via the internet.

Preflight checks

To verify the necessary IP connectivity from the Build Server to the Target Server's BMC, confirm from the Build Server that at least port 443 is open to the Target Server's iDRAC/iLO BMC IP address <SRV_OOB_IP>: <INSERT_IP ADDRESS BELOW>

build_server# #nmap -sS <SRV_OOB_IP>

build_server# nmap -sS <INSERT_IP ADDRESS>


Starting Nmap 7.01 ( https://nmap.org ) at 2018-07-10 13:55 UTC
Nmap scan report for <INSERT_IP ADDRESS>
Host is up (0.00085s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
5900/tcp open  vnc
Nmap done: 1 IP address (1 host up) scanned in 1.77 seconds
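If nmap is not available on the Build Server, bash's built-in /dev/tcp pseudo-device can stand in for a single-port check. A minimal sketch, not part of the install tooling:

```shell
#!/usr/bin/env bash
# Hypothetical reachability check using bash's /dev/tcp redirection.
bmc_port_open() {
  local ip="$1" port="$2"
  # Succeeds only if a TCP connection to ip:port can be opened within 2s.
  timeout 2 bash -c ">/dev/tcp/${ip}/${port}" 2>/dev/null
}

# Example (<SRV_OOB_IP> is the BMC address preconfigured earlier):
#   bmc_port_open <SRV_OOB_IP> 443 && echo "BMC https reachable"
```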


<IS THIS NEXT STEP REALLY NECESSARY - DOESN'T THE INSTALL SCRIPT FORMAT THE SERVER BY DEFAULT?>

Next, use nmap to check for a "clean slate" Bare Metal Server. The results will show the host as being down (due to no OS).

# nmap -sS <SRV_HOST_ADDRESS>

# nmap -sS <INSERT HOST ADDRESS>
 
Starting Nmap 7.01 ( https://nmap.org ) at 2018-07-10 13:55 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.63 seconds

Preflight Input Data 

The automated deployment process configures the new RC based on a set of user defined values. These values must be created and stored in a yaml configuration file before the RC deployment process can be started.

During the previous Build Server installation, a generic template called serverrc.template was created on the Build Server in /opt/akraino/redfish/. This template should be used to create the deployment-specific input file for the new RC. The example below uses a file called aknode29rc to create an RC on a server called aknode29.

root@build-server# mkdir -p /opt/akraino/server-config

root@build-server# #cp /opt/akraino/redfish/serverrc.template /opt/akraino/server-config/<NEW_RC_SRV_NAME>rc

root@build-server# cp /opt/akraino/redfish/serverrc.template /opt/akraino/server-config/aknode29rc

root@build-server# #vi /opt/akraino/server-config/<NEW_RC_SRV_NAME>rc

root@build-server# vi /opt/akraino/server-config/aknode29rc

The actual serverrc input file used in the validation labs to build their RCs is shown in the Validation Labs section of the release documentation <INSERT LINK>. A copy of an example file is also included at the end of this page for immediate reference.
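Before starting the deployment it is worth confirming that the new rc file defines the variables used throughout this guide (see the sample file at the end of this page). The checker below is a hypothetical convenience, not part of the shipped redfish scripts:

```shell
#!/usr/bin/env bash
# Hypothetical sanity check: verify an rc input file defines key variables.
check_rcfile() {
  local file="$1" var missing=0
  for var in SRV_NAME SRV_OOB_IP SRV_OOB_USR SRV_OOB_PWD SRV_IP SRV_VLAN; do
    if ! grep -q "^${var}=" "$file"; then
      echo "missing: ${var}"
      missing=1
    fi
  done
  return "$missing"
}

# Example:
#   check_rcfile /opt/akraino/server-config/aknode29rc || echo "fix the file"
```

A non-zero exit status lists each missing variable, so the file can be corrected before the install scripts fail part-way through.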

Deploying the RC 

The RC is deployed in two stages: first the bare metal and Linux OS installation occurs, then the Network Cloud-specific software is installed on the Target RC server.


Elevate yourself to root:

user@build_server:/# sudo -i

Operating System Installation

To begin the Regional Controller's BIOS provisioning and Linux installation do the following (aknode29 is an example):

root@build-server# #/opt/akraino/redfish/install_server_os.sh --rc /opt/akraino/server-config/<RC_INPUT_FILE_NAME> --skip-confirm

root@build-server# /opt/akraino/redfish/install_server_os.sh --rc /opt/akraino/server-config/aknode29rc --skip-confirm


During the installation, progress can be monitored by viewing the following logfile on the Build Server: <INSERT CODE SHOWING LOG FILE>

The BIOS configuration and Linux installation result in numerous reboots of the Target Server, each taking many minutes. In addition, software packages are transferred to and updated on the Target RC server, resulting in a total installation time of <INSERT BALL PARK FIGURE>. The actual time will vary depending on the time taken to retrieve packages from their external repos.

A successful installation of this stage of the RC deployment will result in the following message:

Completed bare metal install of regional server [aknode44] at Mon Jul 2 20:09:35 UTC 2018
SUCCESS:  Try connecting with 'ssh root@192.168.2.42' as user root
Elapsed time was 9 minutes and 22 seconds


RC Specific Software Installation

Once the previous stage is complete the Network Cloud specific software must then be installed.

During the second stage of the RC deployment process, the Build Server connects to the Target RC server via its 'host' address. The RC's planned 'host' address must be manually configured in a file on the Build Server called akrainorc in /opt/akraino/region/.

Locate and set the TARGET_SERVER_IP value in the akrainorc file to the planned Regional Controller's 'host' address. All other values must be left as-is.

root@build-server# vim /opt/akraino/region/akrainorc

Once this has been set and the file saved, export the value: <INSERT ADDRESS BELOW>

root@build-server# #export TARGET_SERVER_IP=<RC_planned_host_address>

root@build-server# export TARGET_SERVER_IP=<INSERT ADDRESS>
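To keep the exported value in step with the file, the address can also be read out of akrainorc rather than retyped. A hypothetical helper, assuming the KEY=value format used in that file:

```shell
#!/usr/bin/env bash
# Hypothetical helper: extract TARGET_SERVER_IP from an rc file.
get_target_ip() {
  grep '^TARGET_SERVER_IP=' "$1" | cut -d= -f2
}

# Example, using the file edited above:
#   export TARGET_SERVER_IP="$(get_target_ip /opt/akraino/region/akrainorc)"
```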


The final step installs the Regional Controller software:

root@build-server# /opt/akraino/region/install_akraino_portal


This will also take many minutes. <INSERT BALL PARK>


A successful installation will end with the following message <UPDATE IP ADDRESS BELOW>

...
  
Setting up tempest content/repositories
Setting up ONAP content/repositories
Setting up sample vnf content/repositories
Setting up airshipinabottle content/repositories
Setting up redfish tools content/repositories
SUCCESS:  Portal can be accessed at http://192.168.2.44:8080/AECPortalMgmt/
SUCCESS:  Portal install completed


The Regional Controller Node installation is now complete.


Please note: It will be necessary to generate RSA keys on the newly commissioned RC. These keys must then be copied into the 'genesis_ssh_public_key' attribute in the site input yaml file used when subsequently deploying each Unicycle pod at any edge site controlled by the newly built RC. This is covered in the Unicycle installation instructions.

Accessing the new Regional Controller's Portal UI

During the final stage of the installation, a UI is installed on the newly deployed RC. This UI is used to deploy all Rover and Unicycle pods to edge locations. Open the RC's portal in Chrome via the URL http://REGIONAL_NODE_IP:8080/AECPortalMgmt/, where REGIONAL_NODE_IP is the RC's 'host' IP address. Note: IE and Edge browsers may not currently work with this UI.
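Before opening the browser, the portal endpoint can be probed from the Build Server with curl. The helper below merely assembles the URL pattern given above; it is a hypothetical convenience, with 192.168.2.44 taken from the sample install output earlier:

```shell
#!/usr/bin/env bash
# Hypothetical helper: build the portal URL from the RC's 'host' address.
portal_url() {
  echo "http://$1:8080/AECPortalMgmt/"
}

# Example probe (an HTTP status code is printed once the portal is up):
#   curl -s -o /dev/null -w '%{http_code}\n' "$(portal_url 192.168.2.44)"
portal_url 192.168.2.44   # prints: http://192.168.2.44:8080/AECPortalMgmt/
```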

Use the following credentials:

  • Username: akadmin
  • Password: akraino

Upon successful login, the Akraino Portal home page will appear.

Parameter Summary

  • SRV_OOB_IP — the Target Server's BMC (iDRAC/iLO) IP address
  • TARGET_SERVER_IP = REGIONAL_NODE_IP = SRV_IP — the RC's 'host' address
  • NEW_RC_SRV_NAME — the new RC's host name (SRV_NAME, e.g. aknode29)


Example RC Configuration Input File

This section includes a sample input file that was used in a Validation Lab to build a Regional Controller.

This file is /opt/akraino/server-config/aknode29rc:

# host name for server
SRV_NAME=aknode29

# server oem - Dell or HPE (case sensitive)
SRV_OEM=Dell

# out of band interface information for server (idrac/ilo/etc)
SRV_OOB_IP=10.51.35.146
SRV_OOB_USR=root
SRV_OOB_PWD=calvin

# mac address of server to be used during the build - not required for Dell 10G servers
# SRV_MAC=3c:fd:fe:b8:10:60

# the boot device is the device name on which the OS will be loaded
SRV_BOOT_DEVICE=sda

# Ubuntu kernel and version to use for os install
# valid options are hwe-16.04.6-amd64 or 16.04.6-amd64
SRV_BLD_SCRIPT=hwe-16.04.6-amd64

# template xml file to set bios and raid configuration settings
SRV_BIOS_TEMPLATE=dell_r740_g14_uefi_base.xml.template
SRV_BOOT_TEMPLATE=dell_r740_g14_uefi_httpboot.xml.template
SRV_HTTP_BOOT_DEV=NIC.Slot.7-1-1

# template to run to configure OS after first boot
# current options are: firstboot.sh.template, firstboot-genesis.sh.template or firstboot-airship-iab.sh.template
SRV_FIRSTBOOT_TEMPLATE=firstboot.sh.template

# VLAN to use during build and for final network configuration
# This VLAN will be trunked from TOR to RC and the RC's DHCP Requests and HTTP PXE boot will occur on this tagged VLAN
SRV_VLAN=408

# basic network information for dhcp config and final server network settings
SRV_MTU=9000
SRV_IP=10.51.34.230				#This is the RC's IP address on the 'host' network
SRV_SUBNET=10.51.34.224
SRV_NETMASK=255.255.255.224
SRV_GATEWAY=10.51.34.225
SRV_DNS=10.64.73.100
SRV_DOMAIN=vran.k2.ericsson.se
SRV_DNSSEARCH=vran.k2.ericsson.se
SRV_NTP=seki20-ntp1.k2.ericsson.se

# network bond information - NOTE: SRV_SLAVE1 will be used for OS install
SRV_BOND=bond0
SRV_SLAVE1=enp134s0f0
SRV_SLAVE2=enp134s0f1

# password to set for root after OS is installed
SRV_PWD=akraino,d