OpenNESS released 19.12 on December 21, 2019, and this release removed the Kubernetes + NTS deployment mode. Two modes are now supported: Native Deployment Mode (based on pure docker/libvirt) and Infrastructure Deployment Mode (based on kube-ovn). Below is a brief summary of the differences between these two modes:
| Functionality | Native Deployment Mode | Infrastructure Deployment Mode |
|---|---|---|
| Usage scenarios | On-Premises Edge | Network Edge |
| Infrastructure | Virtualization base: docker/libvirt; Orchestration: OpenNESS controller; Network: docker network (container) + NTS (through a newly added KNI interface) | Orchestration: Kubernetes; Network: kube-ovn CNI |
| Micro-services in OpenNESS Controller | Web UI (controller UI); edge node / edge application lifecycle management; core network configuration; telemetry | Core network configuration: configure the access network (e.g., LTE/CUPS, 5G) control plane; telemetry |
| Micro-services in OpenNESS Node | EAA: application/service registration, authentication, etc.; ELA/EVA/EDA: used by the controller to configure host interfaces and network policy (used by NTS), create/destroy applications, etc.; DNS: for clients to access micro-services on the edge node; NTS: traffic steering | EAA: application/service registration, authentication, etc.; EIS (Edge Interface Service), which looks similar to the providernet implemented in the ovn4nfv k8s CNI; DNS: for clients to access micro-services on the edge node |
| Application on-boarding | OpenNESS Controller web UI or RESTful API | Kubernetes (e.g., kubectl apply -f application.yaml). Note: unlike 19.09, no UI is used to on-board applications |
| Edge node interface configuration | ELA (Edge Lifecycle Agent, implemented by OpenNESS), configured by the OpenNESS controller | EIS (Edge Interface Service, a kubectl extension to configure the edge node's host network adapters), e.g., kubectl interfaceservice attach $NODE_NAME $PCI_ADDRESS |
| Traffic policy configuration | EDA (Edge Dataplane Agent, implemented by OpenNESS), configured by the OpenNESS controller | Kubernetes NetworkPolicy CRD, e.g., kubectl apply -f network_policy.yml. Note: unlike 19.09, no UI is used to configure policy |
| Dataplane service | NTS (implemented based on DPDK in OpenNESS) to provide an additional KNI interface for containers | kube-ovn + network policy |
Network policy and DNS are used for traffic steering. Network policy restricts access among services but does NOT "proactively" forward traffic, while the OpenNESS DNS service can help "redirect" an external client's traffic to the edge application service.
By default, in a Network Edge environment, all ingress traffic is blocked (services running inside deployed applications are not reachable) and all egress traffic is enabled (pods are able to reach the internet). The following NetworkPolicy definition is used:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-all-ingress
  namespace: default      # selects default namespace
spec:
  podSelector: {}         # matches all the pods in the default namespace
  policyTypes:
  - Ingress
  ingress: []             # no rules allowing ingress traffic = ingress blocked
```
An admin can enable access to a certain service by applying a NetworkPolicy CRD. For example:
1. To deploy a Network Policy allowing ingress traffic on port 5000 (TCP and UDP) from the 192.168.1.0/24 network to the OpenVINO consumer application pod, create the following specification file for this Network Policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: openvino-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: openvino-cons-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 5000
    - protocol: UDP
      port: 5000
```
2. Create the Network Policy:
kubectl apply -f network_policy.yml
The DNS service can help "redirect" the external client's traffic to the edge application service. This gap analysis investigates whether the OpenNESS DNS can be used for ICN traffic steering.
OpenNESS provides a DNS server which resolves a micro-service's IP address based on its FQDN. OpenNESS extends the kubectl utility with the kubectl edgedns command to set/delete DNS entries. For example:
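The sketch below shows the kind of invocation this implies, assuming the edgedns kubectl extension is installed on the controller; the node name, file name, and record contents are illustrative, and the JSON field names follow the OpenNESS edgedns examples but should be verified against the release in use.

```shell
# Illustrative sketch: add an A record for an edge application on node "edgenode01".
# The record fields (record_type/fqdn/addresses) are assumed from OpenNESS examples.
cat > openvino-dns-entry.json <<'EOF'
{
  "record_type": "A",
  "fqdn": "openvino.openness",
  "addresses": ["10.16.0.10"]
}
EOF

# Set the entry on the edge node's DNS server, then remove it when no longer needed.
kubectl edgedns set edgenode01 openvino-dns-entry.json
kubectl edgedns del edgenode01 openvino-dns-entry.json
```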
Below are implementation details of the OpenNESS DNS server:
The OpenNESS DNS service is different from Kubernetes' CoreDNS in order to support different usages:
Edge apps can be divided into producers and consumers. This gap analysis investigates the communication between producers and consumers that are on different edge nodes.
Edge applications must introduce themselves to the OpenNESS framework and identify whether they would like to activate new edge services or consume an existing service. The Edge Application Agent (EAA) component is the handler of all the edge applications hosted by the OpenNESS edge node and acts as their point of contact.
OpenNESS-awareness involves (a) authentication, (b) service activation/deactivation, (c) service discovery, (d) service subscription, and (e) WebSocket connection establishment. The WebSocket connection retains a channel for EAA to forward notifications to pre-subscribed consumer applications. Notifications are generated by "producer" edge applications and absorbed by "consumer" edge applications.
The sequence of operations for the producer application:
The sequence of operations for the consumer application:
Edge apps access EAA through eaa.openness (name.namespace), which is a Kubernetes service:
https://github.com/open-ness/edgecontroller/blob/master/kube-ovn/openness.yaml#L18
For example, as the following links show, the OpenVINO consumer accesses http://eaa.openness:443/auth for authentication:
https://github.com/open-ness/edgeapps/blob/master/openvino/consumer/cmd/main.go#L24
https://github.com/open-ness/edgeapps/blob/master/openvino/consumer/cmd/main.go#L66
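A minimal sketch of that first authentication call, using curl instead of the Go client in the linked code; the request body (a JSON-wrapped CSR) and the TLS handling are assumptions based on the EAA authentication flow described above, not verbatim from the OpenNESS API spec.

```shell
# Illustrative only: authenticate an edge app against EAA from inside a pod.
# eaa.openness:443 is the Kubernetes service referenced above; the payload
# fields below are assumed and should be checked against the EAA API in use.
curl -sk -X POST "https://eaa.openness:443/auth" \
  -H "Content-Type: application/json" \
  -d '{
        "csr": "-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----"
      }'
# On success EAA returns credentials that the app then uses for service
# discovery, subscription, and the WebSocket notification connection.
```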
EAA is deployed as a Kubernetes Deployment, and only one EAA instance is deployed:
https://github.com/open-ness/edgecontroller/blob/master/kube-ovn/openness.yaml#L41
Because all edge apps access only one EAA instance, it doesn't matter that EAA is stateful.
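A quick way to confirm this on a running cluster, assuming the openness namespace and the eaa names from the linked manifest (the label selector in the last command is an assumption):

```shell
# Check the EAA service and its single-replica Deployment in the openness namespace.
kubectl -n openness get service eaa
kubectl -n openness get deployment eaa
kubectl -n openness get pods -l name=eaa -o wide   # label selector is an assumption
```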
For example:
Only one EAA is deployed, on node1. producer1 and producer2 activate their new services with EAA, and consumer1 and consumer2 consume the services stored in EAA. Because all the information is stored in that single EAA instance, there won't be any issues.
(Diagram: eaa runs on node1; producer1 and consumer1 are on node1, producer2 and consumer2 are on node2, and all of them use the same eaa.)
Because edge apps on different edge nodes can all access the eaa service, a consumer can consume a service provided by a producer that is on a different node.
For example:
producer1 is located on node1 and consumer2 is located on node2. The networking flow will be:
producer1 -> service eaa -> pod eaa
consumer2 -> service eaa -> pod eaa
(Diagram: eaa and producer1 run on node1; consumer2 runs on node2; both producer1 and consumer2 reach the eaa pod through the eaa service.)
OpenNESS only supports CentOS, but ICN is based on Ubuntu 18.04. This gap analysis investigates how to deploy OpenNESS on Ubuntu 18.04.
OpenNESS only supports CentOS, but ICN is based on Ubuntu 18.04. By changing the OpenNESS Ansible scripts, it is possible to deploy OpenNESS on Ubuntu 18.04. The following parts of the Ansible scripts need to change (command-level sketches for some items follow the list):
1. The following Ansible roles can be removed for the OpenNESS master: grub, cnca, multus, nfd. The grub Ansible role can be removed for the OpenNESS node, because:
2. CentOS uses yum to install packages; for Ubuntu, apt needs to be used instead.
3. Some packages installed by the Ansible scripts should be removed or replaced:
4. SELinux is not used on Ubuntu, so the Ansible scripts that configure SELinux need to be removed.
5. The EPEL repository is for CentOS; Ubuntu doesn't need this repository.
6. A proxy is set for yum; the scripts need to be changed to set the proxy for apt instead.
7. Docker installation differs between CentOS and Ubuntu; the scripts need to be changed to follow the Ubuntu installation guide. For example, the Docker repository is different for CentOS and Ubuntu.
8. auditd is used for Docker. auditd is delivered with CentOS by default, but it needs to be installed on Ubuntu.
9. Kubernetes installation differs between CentOS and Ubuntu; the scripts need to be changed to follow the Ubuntu installation guide. For example, the GPG key is different, and Ubuntu uses a deb (apt) repository while CentOS uses a yum repository.
10. The cgroup driver differs between CentOS (systemd) and Ubuntu (cgroupfs). Since the default cgroup driver is cgroupfs, the Ansible scripts that configure the cgroup driver to systemd need to be removed.
11. firewalld is used on CentOS and needs to be changed to ufw, which is used by Ubuntu.
12. The packages for installing Open vSwitch and OVN are different: CentOS uses RPMs, while Ubuntu uses openvswitch-switch, ovn-common, ovn-central, and ovn-host.
13. Topology Manager and CPU Manager are configured for the edge node's kubelet. Topology Manager is not needed, so this configuration can be removed.
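As a rough illustration of items 2, 6, 7, and 8 above, the following sketch shows the kind of commands the adapted Ansible tasks would run on Ubuntu 18.04. The proxy address is a placeholder, and the Docker repository setup follows the standard upstream Ubuntu instructions rather than anything OpenNESS-specific.

```shell
# Item 6: apt proxy (placeholder proxy address) instead of the yum proxy setting.
cat > /etc/apt/apt.conf.d/99proxy <<'EOF'
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";
EOF

# Items 2 and 8: install packages with apt instead of yum; auditd is not
# present by default on Ubuntu and must be installed explicitly.
apt-get update
apt-get install -y auditd

# Item 7: Docker repository setup for Ubuntu (upstream Docker instructions).
apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io
```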
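Similarly for items 9 and 10: a sketch of the Ubuntu-side Kubernetes repository setup of that era and the cgroup-driver difference. Version pinning is omitted, and the daemon.json fragment only illustrates the systemd setting that would be dropped on Ubuntu (Docker and kubelet simply have to agree on the driver).

```shell
# Item 9: Kubernetes apt repository and GPG key for Ubuntu (upstream instructions at the time).
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat > /etc/apt/sources.list.d/kubernetes.list <<'EOF'
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl   # version pinning omitted for brevity

# Item 10: the CentOS playbooks force the systemd cgroup driver, e.g. via a
# daemon.json fragment like the one below; on Ubuntu the default is cgroupfs,
# so this step is removed.
# {
#   "exec-opts": ["native.cgroupdriver=systemd"]
# }
```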
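Finally, for items 11 and 12: a sketch of replacing firewalld rules with ufw and installing the Ubuntu OVS/OVN packages. The port number is a placeholder for whatever the OpenNESS firewalld tasks actually open.

```shell
# Item 11: ufw instead of firewalld (placeholder port; mirror whatever the
# CentOS firewalld tasks open, e.g. the Kubernetes API server port).
ufw allow 6443/tcp
ufw --force enable

# Item 12: Ubuntu packages for Open vSwitch and OVN instead of the CentOS RPMs.
apt-get install -y openvswitch-switch ovn-common ovn-central ovn-host
```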