
Overview

The Provider Access Edge blueprint is part of Akraino's Kubernetes-Native Infrastructure family of blueprints. As such, it leverages best practices and tools from the Kubernetes community to declaratively manage edge computing stacks at scale, with a consistent, uniform user experience from the infrastructure up to the services, and from developer environments to production environments, whether on bare metal or in a public cloud.

This blueprint targets small-footprint deployments able to host NFV (in particular vRAN) and MEC (e.g. AR/VR, machine learning) workloads. Its key features are:

  • Lightweight, self-managing clusters based on CoreOS and Kubernetes (OKD distro).
  • Support for VMs (via KubeVirt) and containers on a common infrastructure.
  • Application lifecycle management using the Operator Framework.
  • Support for multiple networks using Multus (see the sketch after this list).
  • Support for real-time workloads using CentOS-rt.
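
The sketch below illustrates the Multus item from the list above: a secondary network is declared as a NetworkAttachmentDefinition (the CRD Multus consumes) through the Kubernetes Python client. This is a minimal, hypothetical example; the resource name "sriov-net1", the SR-IOV CNI configuration, and the "default" namespace are illustrative assumptions, not values prescribed by the blueprint.

    # Minimal sketch: create a Multus NetworkAttachmentDefinition with the
    # Kubernetes Python client. Name, namespace and CNI config are assumptions.
    import json
    from kubernetes import client, config

    net_attach_def = {
        "apiVersion": "k8s.cni.cncf.io/v1",
        "kind": "NetworkAttachmentDefinition",
        "metadata": {"name": "sriov-net1", "namespace": "default"},
        "spec": {
            # Example CNI config for an SR-IOV secondary interface; adapt the
            # device and IPAM settings to the actual deployment.
            "config": json.dumps({
                "cniVersion": "0.3.1",
                "type": "sriov",
                "ipam": {"type": "host-local", "subnet": "192.168.10.0/24"},
            })
        },
    }

    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="k8s.cni.cncf.io",
        version="v1",
        namespace="default",
        plural="network-attachment-definitions",
        body=net_attach_def,
    )

Pods (including the pods KubeVirt launches for VMs) can then request the extra interface by adding the annotation k8s.v1.cni.cncf.io/networks: sriov-net1.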

Architecture

Resource Requirements

Deployments to AWS

nodes                    | instance type
1x bootstrap (temporary) | EC2: m4.xlarge, EBS: 120GB GP2
3x masters               | EC2: m4.xlarge, EBS: 120GB GP2
3x workers               | EC2: m4.large, EBS: 120GB GP2
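
To make the table concrete, the following hypothetical Python sketch generates an openshift-install install-config.yaml requesting the instance types and EBS root volumes listed above. The base domain, cluster name, AWS region, pull secret, and SSH key are placeholders, and the field names follow the OpenShift/OKD 4.x install-config schema, so they should be verified against the installer documentation for the release in use.

    # Minimal sketch: emit an install-config.yaml matching the AWS sizing table.
    # All values marked as placeholders are assumptions, not blueprint values.
    import yaml  # PyYAML

    install_config = {
        "apiVersion": "v1",
        "baseDomain": "example.com",        # placeholder
        "metadata": {"name": "pae-aws"},    # placeholder cluster name
        "controlPlane": {
            "name": "master",
            "replicas": 3,
            "platform": {"aws": {"type": "m4.xlarge",
                                 "rootVolume": {"size": 120, "type": "gp2"}}},
        },
        "compute": [{
            "name": "worker",
            "replicas": 3,
            "platform": {"aws": {"type": "m4.large",
                                 "rootVolume": {"size": 120, "type": "gp2"}}},
        }],
        "platform": {"aws": {"region": "us-east-1"}},  # placeholder region
        "pullSecret": "<pull-secret-json>",            # placeholder
        "sshKey": "<ssh-public-key>",                  # placeholder
    }

    with open("install-config.yaml", "w") as f:
        yaml.safe_dump(install_config, f, default_flow_style=False)

Running openshift-install create cluster against such a config brings up the temporary bootstrap node and then the three masters and three workers sized as in the table.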

Deployments to Bare Metal

nodes                            | requirements
1x provisioning host (temporary) | 12 cores, 16GB RAM, 200GB free disk, 3 NICs (1 internet connectivity, 1 provisioning+storage, 1 cluster)
3x masters                       | 12 cores, 16GB RAM, 200GB free disk, 2 NICs (1 provisioning+storage, 1 cluster)
3x workers                       | 12 cores, min. 16GB RAM, 200GB free disk, 2 SR-IOV-capable NICs (1 provisioning+storage, 1 cluster)


The blueprint validation lab uses 7 SuperMicro SuperServer 1028R-WTR (Black) servers with the following per-node specs:

Units | Type | Description
2     | CPU  | BDW-EP 12C E5-2650V4 2.2G 30M 9.6GT QPI
8     | Mem  | 16GB DDR4-2400 2RX8 ECC RDIMM
1     | SSD  | Samsung PM863, 480GB, SATA 6Gb/s, VNAND, 2.5" SSD - MZ7LM480HCHP-00005
4     | HDD  | Seagate 2.5" 2TB SATA 6Gb/s 7.2K RPM 128M, 512N (Avenger)
2     | NIC  | Standard LP 40GbE with 2 QSFP ports, Intel XL710


Networking for the machines has to be set up as follows:

Deployments to libvirt

nodes                    | requirements
1x bootstrap (temporary) | 2 vCPUs, 2GB RAM, 2GB disk (sparse)
3x masters               | 2 vCPUs, 8GB RAM, 2GB disk (sparse)
3x workers               | 2 vCPUs, 4GB RAM, 2GB disk (sparse)

Documentation

See KNI Blueprint User Documentation.

Project Team

Member | Company | Contact | Role | Bio
Andrew Bays | Red Hat | | Committer |
Frank Zdarsky | Red Hat | | Committer | Senior Principal Software Engineer, Red Hat Office of the CTO; Edge Computing and
Jennifer Koerv | Intel | | Committer |
Manjari Asawa | Wipro | Manjari Asawa <manjari.asawa@wipro.com> | Committer |
Mikko Ylinen | Intel | | Committer |
Ned Smith | Intel | | Committer |
Ricardo Noriega | Red Hat | Ricardo Noriega | Committer | Red Hat NFVPE - CTO office - Networking
Sukhdev Kapur | Juniper | | Committer | Distinguished Engineer; Contrail Software - CTO Org
Yolanda Robla | Red Hat | Yolanda Robla Mota | PTL, Committer | Red Hat NFVPE - Edge, baremetal provisioning

Use Case Template

Attributes | Description | Informational
Type | New |
Industry Sector | Telco and carrier networks |
Business Driver | |
Business Use Cases | |
Business Cost - Initial Build Cost Target Objective | |
Business Cost - Target Operational Objective | |
Security Need | |
Regulations | |
Other Restrictions | |
Additional Details | |

Blueprint Template

Attributes | Description | Informational
Type | New |
Blueprint Family - Proposed Name | Kubernetes-Native Infrastructure for Edge (KNI-Edge) |
Use Case | Provider Access Edge (PAE) |
Blueprint - Proposed Name | Provider Access Edge (PAE) |
Initial POD Cost (CAPEX) | less than $150k (TBC) |
Scale & Type | 3 to 7 x86 servers (Xeon class) |
Applications | vRAN (RIC), MEC apps (CDN, AI/ML, …) |
Power Restrictions | less than 10kW (TBC) |
Infrastructure orchestration | End-to-end Service Orchestration: ONAP; Middlewares: Kubeflow (AI/ML), NEV SDK (TBC); App Lifecycle Management: Kubernetes Operators (mix of Helm and native); Cluster Lifecycle Management: Kubernetes Cluster API/Controller; Cluster Monitoring: Prometheus; Container Platform: Kubernetes (OKD 4.0); Container Runtime: CRI-O; VM Runtime: KubeVirt; OS: CentOS/CentOS-rt 7.6 |
SDN | Tungsten Fabric (w/ SR-IOV, DPDK, and multi-i/f); leaf-and-spine fabric mgmt. |
SDS | Ceph |
Workload Type | containers, VMs |
Additional Details | |

