
Introduction

In Akraino's R1 release, the Network Cloud (NC) family provides for the automated deployment of three types of edge pod:

  • Rover
  • Unicycle with an OVS-DPDK data plane for guest VMs
  • Unicycle with an SR-IOV data plane for guest VMs

Each of these pod types is considered a distinct blueprint species within the NC family. Multiple pods of the same or different species can be deployed at any given edge location. Edge sites would typically be Central Offices (COs) or Mobile Switching Offices (MSOs), but could be any site within the operator's network.

The deployment process is based on a two-tier hierarchical model incorporating a Regional Controller (RC), which is responsible for the deployment of any number of independent Rover and/or Unicycle pods at one or more edge locations within its region of control. In a large geographic deployment, multiple RCs would be used to subdivide the network into a multi-region deployment.

Before Rover or Unicycle edge pods can be deployed to an edge site, an RC must be deployed to orchestrate their deployment. The same RC is used to deploy all three supported blueprint species.
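To make the two-tier model concrete, the minimal sketch below models the hierarchy as a data structure: one RC per region, any number of edge sites, and any mix of blueprint species per site. All class and instance names are illustrative assumptions, not part of the NC software.

    from dataclasses import dataclass, field
    from enum import Enum


    class Species(Enum):
        """The three NC R1 blueprint species."""
        ROVER = "rover"
        UNICYCLE_OVS_DPDK = "unicycle-ovs-dpdk"
        UNICYCLE_SRIOV = "unicycle-sriov"


    @dataclass
    class Pod:
        name: str
        species: Species


    @dataclass
    class EdgeSite:
        """An edge location (e.g. a CO or MSO) holding any mix of pods."""
        name: str
        pods: list[Pod] = field(default_factory=list)


    @dataclass
    class RegionalController:
        """One RC deploys all pods at all edge sites in its region."""
        region: str
        sites: list[EdgeSite] = field(default_factory=list)


    # Hypothetical example: one region, two edge sites, mixed species.
    rc = RegionalController(
        region="us-east",
        sites=[
            EdgeSite("co-atlanta", [Pod("rover-01", Species.ROVER)]),
            EdgeSite("mso-miami", [
                Pod("uni-dpdk-01", Species.UNICYCLE_OVS_DPDK),
                Pod("uni-sriov-01", Species.UNICYCLE_SRIOV),
            ]),
        ],
    )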

The RC may be implemented on a physical Bare Metal (BM) server or in a VM. 

A 'build server' is used to create an RC. Once the RC is created, the build server has no further role in NC deployments (unless a new RC is to be created).

The diagram below shows the deployment process with time running from top to bottom.


The fully automated deployment process includes:

  • Bare metal provisioning of the BIOS on the RC and Rover/Unicycle pod servers via each server's BMC using the Redfish API (a hedged example follows this list)
  • Provisioning and blueprint-species-specific configuration of an Ubuntu operating system on all nodes
  • Deployment of all containerization, virtualization and associated Network Cloud specific software
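
To illustrate the Redfish step, the sketch below requests a one-time PXE boot followed by a restart through the standard DMTF Redfish API, the same kind of call the automated bare metal provisioning makes against each server's BMC. The BMC address, credentials and system ID are placeholder assumptions, not values from the NC tooling.

    import requests
    from requests.auth import HTTPBasicAuth

    # Placeholder BMC details -- assumptions, replace with your lab's values.
    BMC = "https://192.0.2.10"
    SYSTEM = f"{BMC}/redfish/v1/Systems/1"
    AUTH = HTTPBasicAuth("root", "calvin")

    # Ask the BMC for a one-time PXE boot via the standard Redfish
    # ComputerSystem Boot object.
    resp = requests.patch(
        SYSTEM,
        json={"Boot": {
            "BootSourceOverrideEnabled": "Once",
            "BootSourceOverrideTarget": "Pxe",
        }},
        auth=AUTH,
        verify=False,  # lab BMCs often use self-signed certificates
    )
    resp.raise_for_status()

    # Power-cycle the node so the override takes effect.
    resp = requests.post(
        f"{SYSTEM}/Actions/ComputerSystem.Reset",
        json={"ResetType": "ForceRestart"},
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()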


Summary

Build Server - a temporary VM or physical server initially used to deploy Regional Controllers.

Regional Controller - a VM or physical server used to deploy Rover and/or Unicycle pods at edge sites within its geographical region of control.

Edge pods - physical servers deployed by a Regional Controller at edge sites in either Rover or Unicycle pod configurations.


Servers

Build Server

The Build Server node is built using a pre-provisioned VM. Full details are contained in the Validation Labs section of the NC's R1 documentation.

Regional Controller

The Regional Controller (RC) node may be deployed either on a bare metal server or in a pre-provisioned VM. Full details of the server and VM deployment options are contained in the Validation Labs section of the NC's R1 documentation.

Rover

A Rover edge pod consists of a single node deployed on a bare metal server. Full details of the servers are contained in the Validation Labs section of the NC's R1 documentation.

Unicycle

A Unicycle edge pod consists of 3 to 7 nodes deployed on bare metal servers: 3 controller nodes (1 genesis node and 2 master nodes) and 0 to 4 worker nodes. Full details of the servers are contained in the Validation Labs section of the NC's R1 documentation.

Switching Subsystem

The switching subsystem is considered a 'black box' in the R1 NC release, providing a set of L1, L2, L3 and BGP networking services. As such, any switching hardware can be used that provides the necessary services, which are described in detail in the R1 Network Architecture documentation.

In R1, the provisioning of the switching subsystem is considered a prerequisite that is completed before the deployment of a Build Server, a Regional Controller and any Rover and/or Unicycle edge pods. The R1 release does not configure the switching subsystem.
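
Because the switching subsystem is assumed to already be in place, a deployment run might begin with a simple reachability pre-check along the lines of the sketch below. The check and the placeholder addresses are illustrative assumptions; R1 itself performs no switch configuration or validation.

    import subprocess

    # Illustrative pre-flight check: confirm the pre-provisioned switching
    # fabric forwards traffic to a few known addresses before starting a
    # deployment. Hosts are placeholders for your own gateways/servers.
    PREREQ_HOSTS = ["192.0.2.1", "198.51.100.1"]

    def reachable(host: str) -> bool:
        """Return True if a single ICMP ping to `host` succeeds."""
        return subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            capture_output=True,
        ).returncode == 0

    unreachable = [h for h in PREREQ_HOSTS if not reachable(h)]
    if unreachable:
        raise SystemExit(f"Switching prerequisite not met, unreachable: {unreachable}")
    print("Switching subsystem reachability check passed.")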
