
OpenNESS 19.12 Design

OpenNESS 19.12 was released on December 21, 2019. This release removes the Kubernetes + NTS deployment mode; two modes are now supported: Native Deployment Mode (based on pure docker/libvirt) and Infrastructure Deployment Mode (based on kube-ovn). The table below summarizes the differences between the two modes:

 

| Functionality | Native Deployment Mode | Infrastructure Deployment Mode |
| --- | --- | --- |
| Usage Scenarios | On-Premises Edge | Network Edge |
| Infrastructure | Virtualization base: docker/libvirt. Orchestration: OpenNESS controller. Network: docker network (container) + NTS (through a newly added KNI interface) | Orchestration: Kubernetes. Network: kube-ovn CNI |
| Micro-Services in OpenNESS Controller | Web UI (controller UI); edge node / edge application lifecycle management; core network configuration; telemetry | Core network configuration: configure the access network (e.g., LTE/CUPS, 5G) control plane; telemetry |
| Micro-Services in OpenNESS Node | EAA: application/service registration, authentication, etc. ELA/EVA/EDA: used by the controller to configure host interfaces and network policy (used by NTS) and to create/destroy applications. DNS: for clients to access micro-services on the edge node. NTS: traffic steering | EAA: application/service registration, authentication, etc. EIS (Edge Interface Service): appears similar to the providernet feature implemented in the ovn4nfv-k8s CNI. DNS: for clients to access micro-services on the edge node |
| Application on-boarding | OpenNESS Controller Web UI or RESTful API | Kubernetes (e.g., kubectl apply -f application.yaml); a sketch of such a manifest follows this table. Note: unlike 19.09, no UI is used to on-board applications |
| Edge node interface configuration | ELA (Edge Lifecycle Agent, implemented by OpenNESS), configured by the OpenNESS controller | EIS (Edge Interface Service, a kubectl extension that configures edge node host network adapters), e.g., kubectl interfaceservice attach $NODE_NAME $PCI_ADDRESS |
| Traffic Policy configuration | EDA (Edge Dataplane Agent, implemented by OpenNESS), configured by the OpenNESS controller | Kubernetes NetworkPolicy CRD, e.g., kubectl apply -f network_policy.yml. Note: unlike 19.09, no UI is used to configure policy |
| DataPlane Service | NTS (implemented in OpenNESS based on DPDK), providing an additional KNI interface for containers | kube-ovn + NetworkPolicy |
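In Infrastructure Mode, an application is on-boarded with an ordinary Kubernetes manifest. Below is a minimal sketch of such an application.yaml; the application name, image, and port are placeholders for illustration, not part of the OpenNESS release:

Example application manifest (placeholder values)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-edge-app             # placeholder application name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-edge-app
  template:
    metadata:
      labels:
        app: sample-edge-app
    spec:
      containers:
      - name: sample-edge-app
        image: sample-edge-app:1.0  # placeholder image; substitute the real application image
        ports:
        - containerPort: 5000       # matches the port opened in the NetworkPolicy example below

The manifest is applied with kubectl apply -f application.yaml, after which standard Kubernetes lifecycle management takes over.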

Gap Analysis for Integrating OpenNESS with ICN

Network Policy

By default, in a Network Edge environment, all ingress traffic is blocked (services running inside deployed applications are not reachable) and all egress traffic is enabled (pods can reach the internet). The following NetworkPolicy definition is used:

Default network policy: block all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-all-ingress
  namespace: default        # selects the default namespace
spec:
  podSelector: {}           # matches all pods in the default namespace
  policyTypes:
  - Ingress
  ingress: []               # no rules allowing ingress traffic = ingress blocked
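The default policy is an ordinary Kubernetes resource, so it can be (re)applied with kubectl; the filename below is arbitrary:

kubectl apply -f block_all_ingress.yml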

An admin can enable access to a given service by applying a NetworkPolicy CRD. For example:

1. To deploy a Network Policy allowing ingress traffic on port 5000 (TCP and UDP) from the 192.168.1.0/24 network to the OpenVINO consumer application pod, create the following specification file for this Network Policy:

Admin defined network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openvino-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: openvino-cons-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 5000
    - protocol: UDP
      port: 5000

2. Create the Network Policy:
kubectl apply -f network_policy.yml
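
To verify that the policy was created and selects the intended pod, standard kubectl queries can be used:

kubectl get networkpolicy openvino-policy -n default
kubectl describe networkpolicy openvino-policy -n default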

Cross-Node communication

OS (Ubuntu)
