

Introduction


Unicycle pods are deployed from an existing Regional Controller and consist of 3 master nodes and 0 to 4 worker nodes. The number of nodes may be increased after validation in subsequent releases.

A number of different options exist, including the choice of Dell or HP servers and whether tenant VM networking is supported with an OVS-DPDK or SR-IOV data plane. The choice of which is deployed is made simply by creating different pod specific yaml input files, as illustrated after the table below.

In R1 the options include:

Blueprint | Servers | Dataplane | Validated HW details | Validated by
Unicycle | Dell 740XD | OVS-DPDK | Ericsson Unicycle OVS-DPDK Validation HW, Networking and IP plan | Ericsson
Unicycle | Dell 740XD | SR-IOV | ATT Unicycle SR-IOV Validation HW, Networking and IP plan | AT&T
Unicycle | Dell 740XD | SR-IOV | Ericsson Unicycle SR-IOV Validation HW, Networking and IP plan | Ericsson
Unicycle | HP 380 Gen10 | SR-IOV | <INSERT LINK TO ATT HP SERVER SPEC> | AT&T
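
For illustration only, the sketch below assumes a hypothetical directory of pod specific yaml input files on the RC; the path and file names are examples, not the official R1 manifests. Deploying a different blueprint variant is simply a matter of supplying a different input file.

# Hypothetical layout of pod specific yaml input files on the RC;
# actual paths and file names will differ per deployment.
root@regional_server# ls /opt/akraino/yaml_builds/
site1_dell740xd_ovs-dpdk.yaml
site2_dell740xd_sr-iov.yaml
site3_hp380gen10_sr-iov.yaml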

Preflight Requirements

Servers

Deployment has been validated on two types of servers, Dell and HP.

The exact server specifications used during validation activities are defined in the links in the table above. Whilst the blueprint allows for variation in the servers, the installation includes provisioning of the BIOS and HW specific components such as NICs, SSDs, HDDs and CPUs, so only these exact server types are supported and validated in the R1 release.

A unicycle pod deployment consists of at least 3 servers for the master nodes (which can also run tenant VM workloads) and 0 to 4 dedicated worker nodes.

Every Unicycle pod server's iDRAC/iLO IP address and subnet must be manually provisioned on the server before installation begins. This includes the genesis, master and any worker nodes to be deployed in a pod.
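
As a quick pre-deployment sanity check, each provisioned iDRAC/iLO address can be verified from the RC. The sketch below uses a placeholder address and credentials and assumes ipmitool is available; any equivalent BMC query tool can be used.

# Placeholder BMC address and credentials - substitute the values
# provisioned for each genesis, master and worker node.
root@regional_server# ping -c 3 192.168.41.21
root@regional_server# ipmitool -I lanplus -H 192.168.41.21 -U <bmc_user> -P <bmc_password> chassis status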

Networking

The automated deployment process configures all aspects of the servers including BIOS, linux operating system and Network Cloud specific software. The deployment and configuration of all external switching components is outside the scope of the R1 release and must be completed as a pre-requisite before attempting a unicycle pod deployment.

Details of the networking are given in the Network Architecture section; the networking used during validation is described in the ATT Validation Labs and Ericsson Validation Labs sections of the release documentation.

Software

When a unicycle pod's nodes are installed on new bare metal servers, no software is required on the Target Servers. All software will be installed from the Build Server and/or external repos via the internet.
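
Since everything is pulled from the Build Server and/or external repositories, it is worth confirming outbound connectivity from the Build Server before starting; the repository URL below is only an example.

# Example outbound connectivity check; substitute the repositories
# actually used by the deployment.
root@build_server# curl -sSI https://registry.hub.docker.com | head -n 1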

Generate RC ssh Key Pair

In order for the RC to ssh into the Unicycle nodes during the deployment process, an ssh key pair must be generated on the RC and the public key must then be inserted into the Unicycle pod's site specific input file.

root@regional_server# ssh-keygen -t rsa 
  
.....

root@regional_server# cd /root/.ssh 
root@regional_server# ls -lrt
total 12
-rw------- 1 root root  399 May 26 00:34 authorized_keys
-rw-r--r-- 1 root root  395 May 26 00:55 id_rsa.pub
-rw------- 1 root root 1679 May 26 00:55 id_rsa
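
A minimal sketch of the remaining step, assuming the pod's site specific yaml input file contains a field for the RC public key (the file name below is illustrative): display the generated public key and paste its contents into that file.

# Display the RC public key so it can be copied into the Unicycle
# pod's site specific yaml input file (file name is illustrative).
root@regional_server# cat /root/.ssh/id_rsa.pub
root@regional_server# vi <pod_site_specific_input_file>.yaml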

 

<COMPLETE RSA GENERATION AND INPUT FILE UPDATE>

Preflight Checks

<KEEP CONSISTENT WITH PREVIOUS ROVER SECTION>

Preflight Unicycle Pod Specific Input Data

<KEEP CONSISTENT WITH PREVIOUS ROVER SECTION>

Unicycle Pods with OVS-DPDK Dataplane


Unicycle Pods with SR-IOV Dataplane


Deploying a Unicycle Pod

<INSERT UI DRIVEN PROCEDURE HERE>

Unicycle Pod Site Specific Configuration Input Files

This section contains links to the input files used to build the Unicycle pods in ATT's and Ericsson's validation labs for the R1 release. Being pod and site specific, the enumerated values will differ. Full details of the relevant validation lab setup that should be referenced when looking at these files are contained in the Validation Labs section of this documentation.

Please note, superficially these files may appear very similar, but they are all included because examination of the details shows the differences due to HW differences such as vendor and slot location of NICs, the method of defining the pod to implement an OVS-DPDK or SR-IOV dataplane, as well as site specific differences due to VIDs, subnets etc.
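
One convenient way to see those differences is to diff two of the linked input files; the file names below are placeholders for whichever pair is being compared.

# Compare two pod specific input files to highlight HW and site
# specific differences (file names are placeholders).
root@regional_server# diff -u site_a_input.yaml site_b_input.yaml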








