...

Each edge location has an infra-local-controller, which hosts a bootstrap cluster that has all the components required to boot up the compute cluster.

Platform Architecture

...


draw.io Diagram: Platform Architecture


Infra-global-controller: 

...

As shown in the figure above, the bootstrap machine itself is based on K8s.  Note that this K8s is different from the K8s that gets installed on the compute nodes; that is, these are two different K8s clusters. The bootstrap machine is itself a complete K8s cluster with one node that has both master and minion software combined.  All the components of the infra-local-controller (such as Flux, Cluster API, Metal3 and Ironic) are containers.

...

  • As a USB bootable disk:  One should be able to take any bare-metal server, insert the USB disk and restart the server. This means that the USB bootable disk shall bring up basic Linux, K8s and all the containers without any user action.  It must also have the packages and OS images that are required to provision the actual compute nodes.  As in the example above, these binaries, OS images and packages are installed on 9 compute nodes.
  • As individual entities:  As a developer, one shall be able to use any machine without inserting a USB disk.  In this case, the developer can choose a machine as the bootstrap machine, install a Linux OS, install K8s using kubeadm and then bring up Flux, Cluster API, Metal3 and Ironic. Packages can then be uploaded to the system via the REST APIs provided by the BPA.
  • As a KVM/QEMU virtual machine image:  One shall be able to use any VM as a bootstrap machine using this image.

Note that the infra-local-controller can run without the infra-global-controller. In the interim release, we expect that only the infra-local-controller is supported; the infra-global-controller is targeted for the final Akraino R6 release. It is the goal that any operations done manually on the infra-local-controller in the interim release are automated by the infra-global-controller. Hence the interface provided by the infra-local-controller is flexible enough to support both manual and automated actions.

...

  1. Bring up a Linux operating system.
  2. Provision the software with the right configuration.
  3. Bring up basic K8s components (such as kubelet, containerd, kubectl, kubeadm, etc.).
  4. Bring up additional components that can be installed using kubectl.

Steps 1 and 2 are performed by Metal3 and Ironic.  Step 3 is performed by Cluster API, and step 4 is done by Flux.

Metal3 Bare Metal Operator & Ironic

The Bare Metal Operator provides provisioning of compute nodes (either bare metal or VM) by using the K8s API. The Bare Metal Operator defines a CRD BareMetalHost object representing a physical server, including its hardware inventory. Ironic is responsible for provisioning the physical servers, and the Bare Metal Operator is responsible for wrapping Ironic and representing the servers as CRD objects.
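As a minimal sketch (all names, addresses and URLs below are placeholder assumptions, not values from an actual ICN deployment), a BareMetalHost CR registering one compute node for the Bare Metal Operator and Ironic might look like this:

    # Illustrative BareMetalHost; BMC address and image URLs are placeholders.
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: compute-node-1
      namespace: metal3
    spec:
      online: true
      bootMACAddress: "00:1a:2b:3c:4d:5e"   # NIC used for inspection/provisioning
      bmc:
        # Redfish with virtual media (preferred in ICN); ipmi://... also works
        address: redfish-virtualmedia://10.0.0.10/redfish/v1/Systems/1
        credentialsName: compute-node-1-bmc-secret
      image:
        url: http://provisioning-server/images/ubuntu-20.04.qcow2
        checksum: http://provisioning-server/images/ubuntu-20.04.qcow2.md5sum

Ironic inspects the host out of band via the BMC address and writes the OS image to disk; the Bare Metal Operator reflects the result back into the CR's status.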

...

Cluster API (CAPI)

CAPI provisions the bare metal infrastructure with Metal3 and bootstraps K8s with kubeadm.  CAPI provides CRs to accomplish the following (see the sketch after this list):

  • To upload site-specific information - compute nodes and their roles.
  • To instantiate the binary package installation.
  • To get hold of the application-K8s kubeconfig file.
  • To get the status of the installation.
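As a minimal sketch (cluster and object names are assumptions), the CAPI objects tying a workload cluster to its Metal3 infrastructure and kubeadm control plane look roughly like this:

    # Illustrative Cluster API objects; names and namespaces are placeholders.
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: compute-cluster
      namespace: metal3
    spec:
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: compute-cluster-control-plane
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: Metal3Cluster
        name: compute-cluster

The application-K8s kubeconfig is then retrievable from the <cluster-name>-kubeconfig Secret that CAPI creates in the same namespace.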

The BPA also provides some RESTful APIs for the following:

  • To upload binary images that are used to install software on the compute nodes.
  • To upload a Linux operating system image that is needed by the compute nodes.
  • To get the status of the installation of all packages as prescribed before.



Flux

Flux implements GitOps workflows to install K8s-based packages after K8s is bootstrapped (see the sketch after this list):

  • Upload binary images that are used to install the packages on compute nodes.
  • Get the status of the installation of all packages as prescribed before.
  • When a new compute node is added, once the administrator adds the new compute node to the site list, it takes care of installing the packages.
  • If a new binary package version is uploaded, it takes care of figuring out the compute nodes that require this new version and updates those compute nodes with the new version.
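As a minimal sketch (repository URL, path and intervals are assumptions, and the exact API versions depend on the Flux release), the GitOps hookup for add-on packages could look like this:

    # Illustrative Flux source + reconciliation; URL and path are placeholders.
    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: GitRepository
    metadata:
      name: icn-addons
      namespace: flux-system
    spec:
      interval: 5m
      url: https://example.com/icn/addon-manifests.git
      ref:
        branch: main
    ---
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
    kind: Kustomization
    metadata:
      name: icn-addons
      namespace: flux-system
    spec:
      interval: 10m
      sourceRef:
        kind: GitRepository
        name: icn-addons
      path: ./addons
      prune: true

With this in place, adding a node or bumping a package version reduces to a Git commit that Flux reconciles onto the cluster.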


BPA and Ironic related integration:

...

Infra-local-controller

Since compute nodes may not have Internet connectivity:

  • The infra-local-controller also acts as a local Docker Hub repository and ensures that all K8s container packages (that need to be installed on the application-K8s) are served locally from here.
  • The infra-local-controller also configures the container runtime to access packages from this local repository.

The infra-local-controller is expected to store any private key and secret information in CSM.

  • SSH passwords used to authenticate with the compute nodes are expected to be stored in the SMS of CSM.
  • The kubeconfig used to authenticate with application-K8s is expected to be stored there as well.

Software Platform Architecture

Local Controller: kubeadm, Flux, Cluster API, Metal3, Bare Metal Operator, Ironic, EMCO

Global Controller: kubeadm, KUD, K8s Provisioning Manager, Binary Provisioning Manager, CSM

The R6 release covers only the infra-local-controller:


draw.io Diagram: Software Platform Architecture

Metal3

One of the major challenges for a cloud admin managing multiple clusters in different edge locations is coordinating the control plane configuration of each cluster remotely and managing patches and updates/upgrades across multiple machines. In the ICN family stack, the Bare Metal Operator from the Metal3 project is used as the bare metal provider. It is used as a machine actuator that uses Ironic to provide a K8s API for managing the physical servers that also run K8s clusters on bare-metal hosts.

Cluster API & Flux

Cluster API uses infrastructure and bootstrap providers (Metal3 and kubeadm in this case) to bring up a K8s deployment and some add-ons on a provisioned machine.  Flux uses a GitOps workflow to bring up additional add-ons in the K8s deployment.  One of the K8s clusters, provisioned and configured with high availability by Cluster API and Flux, will be used to deploy EMCO on K8s. The ICN family uses Edge Multi-Cluster Orchestration (EMCO) for service orchestration. EMCO provides a set of helm charts to be used to run the workloads on multiple clusters.
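As a minimal sketch of the high-availability piece (replicas, version and template names are assumptions), a KubeadmControlPlane backed by Metal3 machines could be declared as:

    # Illustrative HA control plane; the K8s version is an example, not ICN-mandated.
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: compute-cluster-control-plane
      namespace: metal3
    spec:
      replicas: 3                      # three control-plane nodes for HA
      version: v1.21.0
      machineTemplate:
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: Metal3MachineTemplate
          name: compute-cluster-controlplane
      kubeadmConfigSpec:
        initConfiguration:
          nodeRegistration:
            criSocket: unix:///run/containerd/containerd.sock   # containerd runtime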

...

EMCO is the Service Orchestration Engine in the ICN family and is responsible for VNF life cycle management, tenant management and tenant resource quota allocation, and for managing the Resource Orchestration Engine (ROE) to schedule VNF workloads with multi-site scheduler awareness and Hardware Platform Abstraction (HPA). It can be used to deploy the K8s App components (as shown in fig. II), NFV-specific components and the NFVi SDN controller in the edge cluster.  An Akraino dashboard that sits on top of EMCO is required to deploy K8s add-ons such as OVN, NFD, and Intel device plugins such as SR-IOV in the edge location (as shown in figure I), and to deploy the VNFs.

K8s Block and Modules:

K8s is the Resource Orchestration Engine in the ICN family, managing network, storage and compute resources for the VNF applications. The ICN family uses containerd as the de-facto container runtime. Each release supports different container runtimes that are focused on use cases.

...

ICN uses the Metal3 project for provisioning servers in the edge locations. The ICN project uses Redfish with virtual media (preferred) or the IPMI protocol to identify the servers in the edge locations, and uses Ironic and Ironic Inspector to provision the OS in the edge location. For the R6 release, the ICN project provisions Ubuntu 20.04 on each server, and uses distinct networks, such as the provisioning network and the bare-metal network, for inspection and Redfish/IPMI provisioning.

The ICN project injects user data into each server for network configuration, a GRUB update to enable IOMMU, and remote command execution using SSH, and it maintains a common secure mechanism for provisioning all the servers. Each local controller maintains IP address management for that edge location. For more information, refer to Metal3 Bare Metal Operator in ICN stack.
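As a minimal sketch (interface names, VLAN IDs and addresses are placeholders, not the ICN site plan), the injected network configuration could be a netplan snippet along these lines:

    # Illustrative netplan user data; all values are placeholders.
    network:
      version: 2
      ethernets:
        eno1:
          dhcp4: true                      # provisioning network
      vlans:
        vlan112:
          id: 112
          link: eno1
          addresses: [172.16.112.21/24]    # e.g. the private network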

BPA Operator: 

ICN uses the BPA operator to install KUD. It can install KUD either on baremetal hosts or on virtual machines. The BPA operator is also used to install software on the machines after KUD has been installed successfully.

KUD Installation

Baremetal hosts: When a new provisioning CR is created, the BPA operator's reconcile function is triggered. It then uses a dynamic client to get a list of all baremetal hosts that were provisioned using Metal3. It reads the MAC addresses from the provisioning CR and compares them with the baremetal host list to confirm that a host with each MAC address exists; if so, it searches the DHCP lease file for the corresponding IP address of the host. Using the IP addresses of all the hosts in the provisioning CR, it creates an inventory file and triggers a job that installs KUD on the machines using that inventory file. When the job completes successfully, a K8s cluster is running on the baremetal hosts. The BPA operator then creates a ConfigMap using the host names as keys and their corresponding IP addresses as values. If a host with a specified MAC address does not exist, the BPA operator throws an error.
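As a minimal sketch (the ConfigMap name and label key are assumptions), the resulting host-to-IP ConfigMap could look like:

    # Illustrative ConfigMap produced after a successful KUD install job.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-x-configmap
      labels:
        cluster: cluster-x               # matched later by software CRs
    data:
      node1: 172.16.111.11
      node2: 172.16.111.12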

Software Installation

When a new software CR is created, the reconcile loop is triggered. On seeing that it is a software CR, the BPA operator checks for a ConfigMap with a cluster label corresponding to the one in the software CR. If it finds one, it gets the IP addresses of all the master and worker nodes, SSHes into the hosts and installs the required software. If no corresponding ConfigMap is found, it throws an error.
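As a hypothetical shape of such a software CR (the actual BPA schema may differ), split by node role:

    # Hypothetical software CR; group/version/kind and fields are assumptions.
    apiVersion: bpa.akraino.org/v1alpha1
    kind: Software
    metadata:
      name: cluster-x-software
      labels:
        cluster: cluster-x               # must match the ConfigMap's cluster label
    spec:
      masterSoftware:
        - curl
        - jq
      workerSoftware:
        - tcpdump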


BPA Rest Agent:

Provides a straightforward RESTful API that exposes three resources: binary images, container images, and OS images. This is accomplished by using MinIO for object storage and MongoDB for metadata.

POST - Creates a new image resource using a JSON file.

GET - Lists available image resources.

PATCH - Uploads images to the MinIO backend and updates MongoDB.

DELETE - Removes the image from MinIO and MongoDB.

More on the BPA RESTful API can be found at ICN Rest API.
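As a sketch of the POST flow (field names are assumptions, shown in YAML form for consistency; the actual metadata schema may differ), the image-resource metadata might carry:

    # Hypothetical image-resource metadata for POST; all fields are assumptions.
    id: ubuntu-20.04-qcow2
    type: os_image                       # assumed resource kind
    version: "20.04"
    description: Base OS image for compute nodes

A subsequent PATCH then streams the image bytes to MinIO and updates the corresponding MongoDB record.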


Cluster API: 

ICN uses the Cluster API to provision the infrastructure and bootstrap K8s clusters.

Flux: 

ICN uses Flux to deploy additional K8s packages after the cluster is bootstrapped.

EMCO:

EMCO is used as the service orchestrator in the ICN BP. EMCO is installed as a plugin in any cluster provisioned with Cluster API and Flux. EMCO installs the Composite vFW application in any edge location.

...

Please refer to the list of software components in the ICN R6 Release Notes.

Hardware and Software Management

Software Management

ICN R6 Timelines

Hardware Management

Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN (connected to extreme 480 switch) | 10GbE: NIC#, VLAN, Network (connected with IZ1 switch)
Jump | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node1 | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node2 | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node3 | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node4 | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node5 | 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)


Licensing

Refer to the Software Components list.