...

The ICN blueprint family intends to address deployment of workloads across a large number of edge locations and also in public clouds, using K8s as the resource orchestrator in each site and Edge Multi-Cluster Orchestration (EMCO) as the service-level orchestrator (across sites).  ICN also intends to integrate infrastructure orchestration, which is needed to bring up a site using bare-metal servers. Infrastructure orchestration, which is the focus of this page, needs to ensure that the infrastructure software required on edge servers is installed on a per-site basis, but controlled from a central dashboard.  Infrastructure orchestration is expected to do the following:

...

  1. SDEWAN Controller with open-source-based SD-WAN CNF and SDEWAN Hub to establish IPsec tunneling between edge distributions, with Service Function Chaining (SFC)
  2. Composite vFirewall (vFW) to showcase telco and cable use cases using EMCO

Where on the Edge

Today, best efforts are made to keep the cloud-native control plane close to the workload to reduce latency, increase performance, and improve fault tolerance. A single orchestration engine should be lightweight and maintain the resources in a cluster of compute nodes, where the customer can deploy multiple network functions, such as VNFs, CNFs, microservices, or Functions as a Service (FaaS), and also scale the orchestration infrastructure depending on customer demand.

ICN targets the on-premises edge: 5G, IoT, SD-WAN, video streaming, and edge gaming cloud, with a single deployment model to target multiple edge use cases.

...

On an edge deployment, there may be multiple edges that need to be brought up.  Having an administrator go to each location and use the infra-local-controller to bring up application-K8s clusters on the compute nodes of each location is not scalable.  Therefore, we have an "infra-global-controller" to control multiple "infra-local-controllers", which in turn control the worker nodes. The infra-global-controller is expected to provide a centralized software provisioning and configuration system.  It provides a single pane of glass for administering the edge locations with respect to infrastructure. The worker nodes may be bare-metal servers, or they may be virtual machines resident on the infra-local-controller. The minimum platform configuration is therefore one global controller and one local controller (although the local controller can be run without a global controller).

Since there are a few K8s clusters involved, let us define them:

  • infra-global-controller-K8s:  This is the K8s cluster where the infra-global-controller related containers are run.
  • infra-local-controller-K8s:  This is the K8s cluster where the infra-local-controller related containers are run, which bring up compute nodes.
  • application-K8s:  These are K8s clusters on compute nodes, where application workloads are run.

...

The infra-global-controller runs in its own K8s cluster. All of its components are containers.  The following components are part of the infra-global-controller:

  • Provisioning Controller (PC) microservices
  • Binary Provisioning Manager (BPM) microservices
  • K8s Provisioning Manager (KPM) microservices
  • Certificate and Secret Management (CSM) related microservices
  • Cluster-API related microservices
  • MongoDB for storing packages and OS images
  • Prometheus for monitoring and alerting

Since we expect the infra-global-controller to be reachable from the Internet, it should be secured using:

  • Istio and Envoy (for internal as well as external communication)
  • Storage of Citadel private keys using CSM
  • Storage of secrets using the SMS component of CSM

...

As shown in the picture above, the bootstrap machine itself is based on K8s.  Note that this K8s is different from the K8s that gets installed on the compute nodes; that is, these are two different K8s clusters. In the case of the bootstrap machine, it is itself a complete K8s cluster with one node that has both master and worker software combined.  All the components of the infra-local-controller (such as the BPA, Metal3 and Ironic) are containers.
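Because the bootstrap machine is a single-node cluster, its one node must be allowed to schedule regular pods in addition to running the control plane. The following is a minimal sketch of that step, assuming a kubeadm-style setup and using the Python kubernetes client; the node name is illustrative.

```python
# Remove the control-plane NoSchedule taint so the single bootstrap node can
# also run the infra-local-controller workloads (BPA, Metal3, Ironic).
# Assumes a kubeadm-style single-node cluster; "bootstrap-node" is illustrative.
from kubernetes import client, config

config.load_kube_config()                      # uses the bootstrap cluster kubeconfig
v1 = client.CoreV1Api()

node_name = "bootstrap-node"                   # hypothetical node name
node = v1.read_node(node_name)
taints = node.spec.taints or []
remaining = [t for t in taints if "node-role.kubernetes.io" not in t.key]
v1.patch_node(node_name, {"spec": {"taints": remaining}})
```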

Since we expect the infra-local-controller to be reachable from outside, we expect it to be secured using:

  • Istio and Envoy (for internal as well as external communication)

...

  • As a USB bootable disk:   One should be able to take any bare-metal server, insert the USB disk and restart the server. This means that the USB bootable disk shall bring up basic Linux, K8s and all containers without any user actions.  It must also contain the packages and OS images that are required to provision the actual compute nodes.  As in the example above, these binaries, OS images and packages are installed on nine compute nodes.
  • As individual entities:  As a developer, one shall be able to use any machine without inserting a USB disk.  In this case, the developer can choose a machine as a bootstrap machine, install a Linux OS, install K8s using kubeadm, and then bring up the BPA, Metal3 and Ironic. Packages are then uploaded to the system via the REST APIs provided by the BPA.
  • As a KVM/QEMU virtual machine image:   One shall be able to use any VM as a bootstrap machine using this image.

Note that the infra-local-controller can be run without the infra-global-controller. In the interim release, we expect that only the infra-local-controller is supported.  The infra-global-controller is targeted for the Akraino R6 release. The goal is that any operations done manually on the infra-local-controller in the interim release are automated by the infra-global-controller. Hence, the interface provided by the infra-local-controller is flexible enough to support both manual and automated actions.

As indicated above, the infra-local-controller will bring up K8s clusters on the compute nodes used for workloads.  Bringing up a workload K8s cluster normally requires the following steps:

  1. Bring up a Linux operating system.
  2. Provision the software with the right configuration.
  3. Bring up basic K8s components (such as kubelet, Docker, kubectl, kubeadm, etc.).
  4. Bring up components that can be installed using kubectl.

Steps 1 and 2 are performed by Metal3 and Ironic.  Step 3 is performed by the BPA, and Step 4 is done by talking to the application-K8s.
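For step 4, anything that is expressed as plain K8s manifests can be applied against the application-K8s once its kubeconfig is available. A minimal sketch, assuming the application-K8s kubeconfig has already been retrieved through the BPA (the file paths are illustrative):

```python
# Apply an add-on manifest to the application-K8s cluster (step 4).
# Assumes the application-K8s kubeconfig has already been retrieved via the BPA;
# the paths below are illustrative.
from kubernetes import client, config, utils

config.load_kube_config(config_file="/opt/icn/application-k8s.conf")
k8s_client = client.ApiClient()

# Equivalent to `kubectl apply -f addon.yaml` for create-only resources.
utils.create_from_yaml(k8s_client, "addon.yaml", verbose=True)
```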

Metal3

...

Bare Metal Operator & Ironic

The Bare Metal Operator provides provisioning of compute nodes (either bare-metal or VM) by using the K8s API. The Bare Metal Operator defines a CRD with a BareMetalHost object representing a physical server, including its hardware inventory. Ironic is responsible for provisioning the physical servers, and the Bare Metal Operator is responsible for wrapping Ironic and representing the servers as CRD objects.
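As an illustration, the BareMetalHost objects created by the Bare Metal Operator can be read like any other custom resource. A minimal sketch using the Python kubernetes client follows; metal3.io/v1alpha1 is the usual Metal3 group/version, and the namespace is illustrative.

```python
# List BareMetalHost custom resources and print their provisioning state.
# Assumes the Bare Metal Operator CRDs are installed with the usual
# metal3.io/v1alpha1 group/version; the namespace is illustrative.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

hosts = crd_api.list_namespaced_custom_object(
    group="metal3.io", version="v1alpha1",
    namespace="metal3", plural="baremetalhosts")

for host in hosts.get("items", []):
    name = host["metadata"]["name"]
    state = host.get("status", {}).get("provisioning", {}).get("state", "unknown")
    mac = host.get("spec", {}).get("bootMACAddress", "")
    print(f"{name}: state={state} bootMAC={mac}")
```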

...

The job of the BPA is to install on the application-K8s all the packages that can't be installed using kubectl.  Hence, the BPA is used right after the compute nodes get installed with the Linux operating system and before K8s-based packages are installed.  The BPA is also implemented as a CRD controller of the infra-local-controller-K8s.  We expect to have the following CRs:

  • A CR to upload site-specific information - compute nodes and their roles
  • A CR to instantiate the binary package installation (see the sketch after this list)
  • A CR to get hold of the application-K8s kubeconfig file
  • A CR to get the status of the installation
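A sketch of how such a CR could be created from a client is shown below. The group/version, kind and field names are assumptions made for illustration; the real schema is defined by the BPA controller.

```python
# Create a (hypothetical) provisioning CR that asks the BPA to instantiate the
# binary package installation for a named cluster. Group/version, kind and
# field names below are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

provisioning_cr = {
    "apiVersion": "bpa.akraino.org/v1alpha1",    # assumed group/version
    "kind": "Provisioning",                      # assumed kind
    "metadata": {"name": "cluster-edge1", "labels": {"cluster": "edge1"}},
    "spec": {
        # assumed fields: compute nodes, their roles and MAC addresses
        "masters": [{"master-1": {"mac-address": "00:1e:67:fe:f4:19"}}],
        "workers": [{"worker-1": {"mac-address": "00:1e:67:fe:f4:20"}}],
    },
}

crd_api.create_namespaced_custom_object(
    group="bpa.akraino.org", version="v1alpha1",
    namespace="default", plural="provisionings", body=provisioning_cr)
```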

...

  • The BPA also acts as a local Docker registry and ensures that all K8s container images (that need to be installed on the application-K8s) are served locally from here.
  • The BPA also configures Docker to pull images from this local repository (a sketch follows this list).
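Pointing Docker at the BPA's local registry typically amounts to adding the registry to the Docker daemon configuration and restarting Docker. A minimal sketch follows; the registry address is an assumption.

```python
# Add the BPA's local registry to /etc/docker/daemon.json so the compute nodes
# pull images locally. The registry address is an assumed example.
import json, pathlib

daemon_json = pathlib.Path("/etc/docker/daemon.json")
conf = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}

registry = "192.168.1.2:5000"  # assumed address of the BPA local registry
insecure = set(conf.get("insecure-registries", []))
insecure.add(registry)
conf["insecure-registries"] = sorted(insecure)

daemon_json.write_text(json.dumps(conf, indent=2))
# Docker must be restarted afterwards, e.g. `systemctl restart docker`.
```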

...

  • SSH passwords used to authenticate with the compute nodes are expected to be stored in the SMS of CSM.
  • The kubeconfig used to authenticate with the application-K8s.

BPA and Ironic related integration:

...

Software Platform Architecture

Local Controller: kubeadm, Metal3, Bare Metal Operator, Ironic, Prometheus, EMCO

Global Controller: kubeadm, KUD, K8s Provisioning Manager, Binary Provisioning Manager, Prometheus, CSM

The R5 release covers only the infra-local-controller:

...

Bare Metal Operator

One of the major challenges for a cloud admin managing multiple clusters in different edge locations is coordinating the control-plane configuration of each cluster remotely and managing patches and updates/upgrades across multiple machines. Cluster-API provides declarative APIs to represent clusters and the machines inside a cluster.  Cluster-API provides abstractions for common logic found across cluster providers such as GKE, AWS and vSphere. Cluster-API consolidates this logic and provides abstractions for functions such as grouping machines for upgrades and the auto-scaling mechanism.

In the ICN family stack, the Bare Metal Operator from the Metal3 project is used as the bare-metal provider. It acts as a machine actuator that uses Ironic to provide a K8s API for managing the physical servers, which also run K8s clusters on the bare-metal hosts.

KuD

K8s deployment (KUD) is a project that uses Kubespray to bring up a K8s deployment and some add-ons on a provisioned machine. As it is already part of EMCO, it can be effectively reused to deploy the K8s app components (as shown in Fig. II), NFV-specific components and the NFVi SDN controller in the edge cluster. One of the K8s clusters with high availability, which is provisioned and configured by KUD, will be used to deploy EMCO on K8s. The ICN family uses Edge Multi-Cluster Orchestration for service orchestration. EMCO provides a set of Helm charts to be used to run the workloads on multiple clusters.

...

EMCO will be the service orchestration engine in the ICN family. It is responsible for VNF life-cycle management, tenant management, tenant resource quota allocation, and managing the Resource Orchestration Engine (ROE) to schedule VNF workloads with multi-site scheduler awareness and Hardware Platform Abstraction (HPA).

EMCO Block and Modules:

EMCO can be used to deploy the K8s app components (as shown in Fig. II), the NFV-specific components and the NFVi SDN controller in the edge cluster.  In the R5 release, EMCO will be used to deploy K8s add-ons such as Virtlet, OVN, NFD, and Intel device plugins such as SR-IOV in the edge location (as shown in Fig. I).  An Akraino dashboard that sits on top of EMCO is required to deploy the VNFs.

K8s Block and Modules:

K8s will be the resource orchestration engine in the ICN family to manage network, storage and compute resources for the VNF applications. The ICN family will use multiple container runtimes, with Virtlet and Docker as the de facto container runtimes. Each release supports different container runtimes, focused on its use cases.

The K8s module is divided into three groups - K8s app components, NFV-specific components and NFVi SDN controller components; all these components will be installed using EMCO.

K8s app components: This block has the K8s storage plugins, container runtime, OVN for networking, service proxy, and Prometheus for monitoring, and is responsible for application management.

NFV-specific components: This block is responsible for K8s compute management to support both software and hardware acceleration (including network acceleration) with CPU pinning and device plugins such as SR-IOV.

...

Modules Design & Architecture:

...

Metal3: 

ICN uses the Metal3 project for provisioning servers in the edge locations. The ICN project uses the IPMI protocol to identify the servers in the edge locations, and uses Ironic and Ironic Inspector to provision the OS in the edge location. For the R5 release, the ICN project provisions Ubuntu 18.04 on each server, and uses separate networks, such as the provisioning network and the bare-metal network, for inspection and IPMI provisioning.

The ICN project injects user data into each server for network configuration, a GRUB update to enable IOMMU, and remote command execution using SSH, and maintains a common secure mechanism for provisioning all the servers. Each local controller maintains IP address management for its edge location. For more information, refer to Metal3 Bare Metal Operator in the ICN stack.

BPA Operator: 

ICN uses the BPA operator to install KUD. It can install KUD either on bare-metal hosts or on virtual machines. The BPA operator is also used to install software on the machines after KUD has been installed successfully.

...

Bare-metal hosts: When a new provisioning CR is created, the BPA operator function is triggered. It then uses a dynamic client to get a list of all bare-metal hosts that were provisioned using Metal3. It reads the MAC addresses from the provisioning CR and compares them with the bare-metal host list to confirm that a host with that MAC address exists. If it exists, it searches the DHCP lease file for the corresponding IP address of the host. Using the IP addresses of all the hosts in the provisioning CR, it then creates an inventory file and triggers a job that installs KUD on the machines using that inventory file. When the job is completed successfully, a K8s cluster is running on the bare-metal hosts. The BPA operator then creates a ConfigMap using the host names as keys and their corresponding IP addresses as values. If a host containing a specified MAC address does not exist, the BPA operator throws an error.
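The MAC-to-IP lookup step can be pictured as follows; this is a simplified sketch that assumes a dnsmasq-style lease file, and the file path and MAC addresses are illustrative.

```python
# Match the MAC addresses listed in the provisioning CR against a dnsmasq-style
# DHCP lease file to find the IP of each provisioned bare-metal host.
# Lease file path and MAC addresses are illustrative.
LEASE_FILE = "/var/lib/dhcp/dnsmasq.leases"

def mac_to_ip(macs):
    """Return {mac: ip} for every requested MAC found in the lease file."""
    found = {}
    with open(LEASE_FILE) as f:
        for line in f:
            # dnsmasq lease format: <expiry> <mac> <ip> <hostname> <client-id>
            parts = line.split()
            if len(parts) >= 3 and parts[1].lower() in macs:
                found[parts[1].lower()] = parts[2]
    return found

requested = {"00:1e:67:fe:f4:19", "00:1e:67:fe:f4:20"}   # from the provisioning CR
ips = mac_to_ip(requested)
missing = requested - ips.keys()
if missing:
    raise RuntimeError(f"no DHCP lease found for MAC(s): {sorted(missing)}")
print(ips)   # used to build the KUD inventory file and, later, the ConfigMap
```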

Virtual machines: The ICN project uses Virtlet for provisioning virtual machines in the edge locations. For this release, it involves a nested K8s implementation. K8s is first installed with Virtlet. Pod spec files are created with cloud-init user data, a network annotation with the MAC address, and CPU and memory requests. Virtlet VMs are created as per the cluster spec or requirement. Corresponding provisioning custom resources are created to match the MAC addresses of the Virtlet VMs.

The BPA operator checks the provisioning custom resource and maps the MAC address(es) to the running Virtlet VM(s). The BPA operator gets the IP addresses of those VMs and initiates an installer job that runs the KUD scripts in those VMs. Upon completion, the K8s cluster is up and running in the Virtlet VMs.

...

When a new software CR is created, the reconcile loop is triggered. On seeing that it is a software CR, the BPA operator checks for a ConfigMap with a cluster label corresponding to that in the software CR. If it finds one, it gets the IP addresses of all the master and worker nodes, SSHes into the hosts, and installs the required software. If no corresponding ConfigMap is found, it throws an error.
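A condensed sketch of that flow is shown below, using the Python kubernetes client for the ConfigMap lookup and paramiko for the SSH step. The label convention, credentials and package list are assumptions for illustration.

```python
# Sketch of the software-CR flow: find the cluster ConfigMap, then SSH into each
# node and install the requested packages. The label/ConfigMap convention, SSH
# credentials and package list are assumptions for illustration.
import paramiko
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

cms = v1.list_namespaced_config_map("default", label_selector="cluster=edge1")
if not cms.items:
    raise RuntimeError("no ConfigMap found for cluster label cluster=edge1")

node_ips = cms.items[0].data.values()          # hostname -> IP mapping from the BPA

for ip in node_ips:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, username="root", password="secret")   # credentials are illustrative
    _, stdout, _ = ssh.exec_command("apt-get install -y curl jq")  # example packages
    stdout.channel.recv_exit_status()          # wait for the install to finish
    ssh.close()
```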

...

The BPA REST agent provides a straightforward RESTful API that exposes three resources: binary images, container images, and OS images. This is accomplished by using MinIO for object storage and MongoDB for metadata.

POST - Creates a new image resource using a JSON file.

...

PATCH - Uploads images to the MinIO backend and updates MongoDB.

DELETE - Removes the image from MinIO and MongoDB.

More on the BPA RESTful API can be found at ICN REST API.
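A usage sketch of the three verbs with the requests library is shown below. The base URL, resource paths and JSON fields are assumptions based on the description above; consult the ICN REST API page for the authoritative definitions.

```python
# Illustrative calls against the BPA REST agent. The base URL, resource paths
# and JSON fields are assumptions; consult the ICN REST API page for the
# authoritative definitions.
import requests

BASE = "http://bpa-rest-agent:9015/v1"          # assumed address and API prefix

# POST - create a new (container) image resource from a JSON description.
meta = {"image_name": "ubuntu", "image_length": 12345, "upload_complete": False}
resp = requests.post(f"{BASE}/container_images", json=meta)
image_id = resp.json().get("id")

# PATCH - upload the image bytes; the agent streams them to MinIO and
# updates the record in MongoDB.
with open("ubuntu.tar", "rb") as f:
    requests.patch(f"{BASE}/container_images/{image_id}", data=f,
                   headers={"Content-Type": "application/octet-stream"})

# DELETE - remove the image from MinIO and MongoDB.
requests.delete(f"{BASE}/container_images/{image_id}")
```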

KuD

K8s deployment (KUD) is a project that uses Kubespray to bring up a K8s deployment and some add-ons on a provisioned machine. As it is already part of EMCO, it can be effectively reused to deploy the K8s app components (as shown in Fig. II), NFV-specific components and the NFVi SDN controller in the edge cluster. In the R4 release, KUD was used to deploy K8s add-ons such as Virtlet, OVN, NFD, CPU Manager for Kubernetes (CMK) and Intel device plugins such as SR-IOV and QAT in the edge location (as shown in Fig. I).

EMCO:

EMCO is used for service orchestration in the ICN BP. EMCO was developed as part of the Multicloud-k8s project in the ONAP community. The ICN BP developed a containerized KUD multi-cluster to install EMCO as a plugin in any cluster provisioned by the BPA operator. EMCO installs the composite vFW application in any edge location.

...

The SDEWAN CNF module works as a software-defined router located in each edge location and in the central hub K8s cluster to manage central-edge and edge-edge communication. Its functionality is realized via a CNF (Containerized Network Function) deployed by K8s. It is based on OpenWrt (an open-source Linux-based project used on embedded devices to route network traffic) and leverages Linux kernel functionality for packet processing to support network functionalities such as multiple WAN link support (mwan3), firewall/SNAT/DNAT (fw3) and IPsec (strongSwan). It exposes RESTful APIs for configuration; detailed information can be found at: SDEWAN CNF.

The SDEWAN Configure Agent (also named the SDEWAN Controller) module works as a K8s controller located in each edge location and in the central hub K8s cluster to support configuration of SDEWAN CNF functionalities (e.g. mwan3, firewall, SNAT, DNAT, IPsec) and to monitor SDEWAN CNF status. It exposes CRDs to support configuration via the K8s API server for unified authentication and authorization; detailed information can be found at: SDEWAN CRD Controller.
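Since configuration is done through CRDs, an SDEWAN rule is applied like any other custom resource. The sketch below assumes an mwan3-policy-style CRD; the group/version, kind and spec fields are illustrative, and the SDEWAN CRD Controller documentation defines the real schema.

```python
# Apply a (hypothetical) SDEWAN mwan3 policy CR through the K8s API server,
# which is how the SDEWAN CRD controller is configured. Group/version, kind
# and spec fields are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()
crd_api = client.CustomObjectsApi()

mwan3_policy = {
    "apiVersion": "batch.sdewan.akraino.org/v1alpha1",   # assumed group/version
    "kind": "Mwan3Policy",                                # assumed kind
    "metadata": {"name": "balance-wan", "namespace": "default"},
    "spec": {
        "members": [                                      # assumed fields
            {"network": "wan0", "weight": 2, "metric": 1},
            {"network": "wan1", "weight": 1, "metric": 2},
        ]
    },
}

crd_api.create_namespaced_custom_object(
    group="batch.sdewan.akraino.org", version="v1alpha1",
    namespace="default", plural="mwan3policies", body=mwan3_policy)
```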

Cloud Storage:

...

  1. Storage service for the local controller: this is used by the BPA REST agent to provide a storage service for image objects (binary, container and operating system images). There are two candidate solutions, MinIO and GridFS. Considering cloud-native design and data reliability, we propose to use MinIO, which is a CNCF project for object storage; it is compatible with the Amazon S3 API, provides language plugins for client applications, and is easy to deploy in K8s with flexible scale-out (see the sketch after this list). MinIO also provides a storage service for the HTTP server. Since MinIO needs an exported volume at bootstrap, local storage is a simple solution but lacks reliability for data safety; we will switch to reliable volumes provided by Ceph CSI RBD in the next release.
  2. Optane Persistent Memory plugin in KUD: this can provide LVM and direct volumes on Optane PM namespaces. Since Optane PM has high performance and low latency compared with a normal SSD storage device, it can be used as a cache, as a metadata volume, or in other high-throughput, low-latency scenarios.
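A minimal sketch of the object-storage side using the MinIO Python client is shown below; the endpoint, credentials and bucket name are illustrative.

```python
# Store an OS image object in MinIO, as the BPA REST agent's storage backend does.
# Endpoint, credentials and bucket name are illustrative.
from minio import Minio

mc = Minio("minio.kud-bpa.svc:9000",
           access_key="ICN_ACCESS_KEY", secret_key="ICN_SECRET_KEY",
           secure=False)

bucket = "os-images"
if not mc.bucket_exists(bucket):
    mc.make_bucket(bucket)

# Upload the image; metadata about it is kept separately in MongoDB.
mc.fput_object(bucket, "ubuntu-18.04.iso", "/images/ubuntu-18.04.iso")
```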

...


Components | Link | License | Akraino Release target
ICN | https://github.com/akraino-edge-stack/icn - v0.5.0 | Apache License 2.0 | R5
Provision stack - Metal3 | https://github.com/akraino-icn/baremetal-operator - v2.10-icn | Apache License 2.0 | R5
Ironic - Ironic IPA downloader | https://github.com/akraino-icn/ironic-ipa-downloader - v1.0-icn | Apache License 2.0 | R5
Ironic - Ironic image | https://github.com/akraino-icn/ironic-image - v1.0-icn | Apache License 2.0 | R5
Ironic - Ironic Inspector image | https://github.com/akraino-icn/ironic-inspector-image - v1.0-icn | Apache License 2.0 | R5
Host operating system | Ubuntu 18.04 | GNU General Public License | R5
NIC drivers | XL710 - https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xl710-10-40-controller-datasheet.pdf | GNU General Public License Version 2 | R5
QAT drivers | Intel® C627 Chipset - https://ark.intel.com/content/www/us/en/ark/products/97343/intel-c627-chipset.html | GNU General Public License Version 2 | R5
Intel® Optane™ DC Persistent Memory | Intel® Optane™ DC 256GB Persistent Memory Module - https://www.intel.com/content/www/us/en/products/memory-storage/optane-dc-persistent-memory/optane-dc-256gb-persistent-memory-module.html; PMDK: Persistent Memory Development Kit - https://github.com/pmem/pmdk/ | SPDX-License-Identifier: BSD-3-Clause | R5
EMCO (formerly known as ONAP4K8s) | https://github.com/open-ness/EMCO | Apache License 2.0 | R5
SDEWAN CNFs | https://github.com/akraino-edge-stack/icn-sdwan - v1.0; https://hub.docker.com/repository/docker/integratedcloudnative/openwrt - 0.3.1 | GNU General Public License Version 2 | R5
KUD | https://git.onap.org/multicloud/k8s/ | Apache License 2.0 | R5
Kubespray | https://github.com/kubernetes-sigs/kubespray - v2.14.1 | Apache License 2.0 | R5
K8s | https://github.com/kubernetes/kubeadm - v1.18.9 | Apache License 2.0 | R5
Docker | https://github.com/docker - 19.03 | Apache License 2.0 | R5
Virtlet | https://github.com/Mirantis/virtlet - 1.4.4 | Apache License 2.0 | R5
SDN - OVN | https://github.com/akraino-icn/ovn - v20.06.0 (mirror repo - https://github.com/ovn-org/ovn) | Apache License 2.0 | R5
vSwitch - OVS | https://github.com/akraino-icn/ovs - v2.14.0 (mirror repo - https://github.com/openvswitch/ovs) | Apache License 2.0 | R5
Ansible | https://github.com/ansible/ansible - 2.9.7 | Apache License 2.0 | R5
Helm | https://github.com/helm/helm - 3.2.4 | Apache License 2.0 | R5
Istio | https://github.com/istio/istio - 1.0.3 | Apache License 2.0 | R5
Rook/Ceph | https://rook.io/docs/rook/v1.0/helm-operator.html - v1.0 | Apache License 2.0 | R5
MetalLB | https://github.com/danderson/metallb/releases - v0.7.3 | Apache License 2.0 | R5
OVN4NFV-K8s-Plugin | https://github.com/opnfv/ovn4nfv-k8s-plugin - v2.2.0 | Apache License 2.0 | R5
SDEWAN controller | https://github.com/akraino-edge-stack/icn-sdwan - v1.0; https://hub.docker.com/repository/docker/integratedcloudnative/sdewan-controller - 0.3.0 | Apache License 2.0 | R5
Device Plugins | https://github.com/intel/intel-device-plugins-for-kubernetes - QAT 0.19.0-kerneldrv | Apache License 2.0 | R5
Node Feature Discovery | https://github.com/kubernetes-sigs/node-feature-discovery - v0.7.0 | Apache License 2.0 | R5
CNI | https://github.com/coreos/flannel/ - v0.12.0; https://github.com/containernetworking/cni - v0.7.0; https://github.com/containernetworking/plugins - v0.8.7; https://github.com/akraino-icn/multus-cni - v3.7; https://github.com/k8snetworkplumbingwg/sriov-cni | Apache License 2.0 | R5
Containerized Data Importer (CDI) | https://github.com/kubevirt/containerized-data-importer - v1.34.1 | Apache License 2.0 | R5
CPU Manager for Kubernetes (CMK) | https://github.com/integratedcloudnative/CPU-Manager-for-Kubernetes - v1.4.1-no-taint | Apache License 2.0 | R5
KubeVirt | https://github.com/kubevirt/kubevirt - v0.41.0 | Apache License 2.0 | R5
SR-IOV Network Operator | https://github.com/k8snetworkplumbingwg/sriov-network-operator - 4.8.0 | Apache License 2.0 | R5

Hardware and Software Management

Software Management

ICN R5 Timelines

Hardware Management

Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN (connected to extreme 480 switch) | 10GbE: NIC#, VLAN, Network (connected with IZ1 switch)
Jump | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node1 | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node2 | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node3 | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node4 | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)
node5 | 2xE5-2699 | 64GB | 3TB (SATA), 180 (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | IF2: VLAN 112 (Private), VLAN 114 (Management); IF3: VLAN 113 (Storage), VLAN 1115 (Public)


Licensing

Refer to the Software Components list above.