
...

OpenNESS integration for Multus, SR-IOV CNI, SR-IOV Network Device Plugin, FPGA, BIOS, Topology Manager, CMK, NFD

Microservice

Integration Detail

Components

Testing

Dependency

Request to openness team

Propose to integrate

multus

Version 3.3
Download the following three files:
    kustomization.yml[1]  rename_default_net.yml[2]  multus-daemonset.yml[3]

Then run the following command to kustomize the YAML files and apply the result:
    kubectl kustomize . | kubectl apply -f -

The kustomize step adds the parameter “--rename-conf-file=true” to the daemonset YAML, as follows:
containers:
     - args:
        - --multus-conf-file=auto
        - --cni-version=0.3.1
        - --rename-conf-file=true

This parameter adds the suffix “.old” to the original CNI conf file. For example, “.old” is appended to the kube-ovn conf file as shown below:
[root@master net.d]# ls /etc/cni/net.d/
00-kube-ovn.conflist.old  00-multus.conf  multus.d

[1]https://github.com/open-ness/openness-experience-kits/blob/master/roles/multus/files/kustomization.yml
[2]https://github.com/open-ness/openness-experience-kits/blob/master/roles/multus/files/rename_default_net.yml
[3]https://raw.githubusercontent.com/intel/multus-cni/v3.3/images/multus-daemonset.yml
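
For reference, a minimal sketch of what the kustomization and its strategic-merge patch could look like, assuming the upstream v3.3 daemonset names (kube-multus-ds-amd64 / kube-multus); the authoritative files are [1] and [2] above:
# kustomization.yml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - multus-daemonset.yml
patchesStrategicMerge:
  - rename_default_net.yml
---
# rename_default_net.yml (sketch): re-declares the container args so that the
# strategic merge replaces the list and appends --rename-conf-file=true
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: kube-multus
          args:
            - --multus-conf-file=auto
            - --cni-version=0.3.1
            - --rename-conf-file=true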

multus running as daemonset

 Not found

 Nothing

Test cases are missing; we need to ask where they are.

No
This is standard multus and only changes a parameter.

sriov cni

git clone https://github.com/intel/sriov-cni
docker build . -t nfvpe/sriov-cni
kubectl create -f ./images/k8s-v1.16/sriov-cni-daemonset.yaml [1]
kubectl create -f openness-sriov-crd.yml[2]

[1]https://github.com/intel/sriov-cni/blob/master/images/k8s-v1.16/sriov-cni-daemonset.yaml
[2]https://github.com/open-ness/openness-experience-kits/blob/master/roles/sriov/master/files/openness-sriov-crd.yml
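
For reference, a hedged sketch of the kind of NetworkAttachmentDefinition that openness-sriov-crd.yml [2] creates; the name, resource annotation, and IPAM subnet below are assumptions, not values taken from that file:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-openness
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_netdevice
spec:
  config: '{
      "type": "sriov",
      "cniVersion": "0.3.1",
      "name": "sriov-openness-network",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.2.0/24",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "gateway": "192.168.2.1"
      }
    }'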

sriov cni running as daemonset

 Not found

SR-IOV enabled NIC

Test cases are missing; we need to ask where they are.

No
This is standard sriov cni.

sriov network device plugin

git clone https://github.com/intel/sriov-network-device-plugin

if fpga_sriov_userspace.enabled:
      patch FPGA_SRIOV_USERSPACE_DEV_PLUGIN.patch[1] to sriov network device plugin directory

make image

if fpga_sriov_userspace.enabled:
      kubectl create -f fpga_configMap[2]
else:
      kubectl create -f sriov_configMap[3]

kubectl create -f ./deployments/k8s-v1.16/sriovdp-daemonset.yaml[4]

Ansible scripts are provided to create the VFs and bind the igb_uio driver.

If FPGA is used, fpga_sriov_userspace.enabled should be set to true. FPGA_SRIOV_USERSPACE_DEV_PLUGIN.patch will then be applied to the sriov network device plugin source. This patch enables the sriov network device plugin to control FPGA devices that are bound to a userspace driver. fpga_configMap will be applied; this configmap creates the resources intel_fec_5g and intel_fec_lte, based on the FPGA device, by specifying vendor_id, device_id, and driver.

[1]https://github.com/open-ness/edgecontroller/blob/master/fpga/FPGA_SRIOV_USERSPACE_DEV_PLUGIN.patch
[2]https://github.com/open-ness/edgecontroller/blob/master/fpga/configMap.yaml
[3]https://github.com/open-ness/edgecontroller/blob/master/sriov/configMap.yaml
[4]https://github.com/intel/sriov-network-device-plugin/blob/master/deployments/k8s-v1.16/sriovdp-daemonset.yaml
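
For reference, a sketch of the shape of such a configMap. The resource names match the description above, while the ConfigMap name/namespace and the vendor/device/driver selector values are placeholders, not the real contents of configMap.yaml [2]:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriovdp-config
  namespace: kube-system
data:
  config.json: |
    {
      "resourceList": [
        {
          "resourceName": "intel_fec_5g",
          "selectors": {
            "vendors": ["8086"],
            "devices": ["<fec_5g_vf_device_id>"],
            "drivers": ["igb_uio"]
          }
        },
        {
          "resourceName": "intel_fec_lte",
          "selectors": {
            "vendors": ["8086"],
            "devices": ["<fec_lte_vf_device_id>"],
            "drivers": ["igb_uio"]
          }
        }
      ]
    }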

sriov network device plugin running as daemonset

 Not found

SR-IOV enabled device

Test cases are missing; we need to ask where they are.

Yes
A special patch is applied to this project and is needed for FPGA support.

fpga

bbdev_config_service, n3000-1-3-5-beta-rte-setup.zip, n3000-1-3-5-beta-cfg-2x2x25g-setup.zip, flexran-dpdk-bbdev-v19-10.patch, and the FPGA image for 5GNR vRAN are not available; we need to ask the OpenNESS team for them.

fpga_sriov_userspace.enabled should be set to true. 

On the master node, build the kubectl plugin rsu (Remote System Update) and move the binary to the /usr/bin/ directory. This plugin creates Kubernetes jobs and runs OPAE in those jobs. OPAE (Open Programmable Acceleration Engine) enables programming of the FPGA and is used to program the FPGA factory image or the user image (5GNR FEC vRAN). The plugin also allows obtaining basic FPGA telemetry such as temperature, power usage, and FPGA image information.

On the worker node, n3000-1-3-5-beta-rte-setup.zip (which can be used to install OPAE) and n3000-1-3-5-beta-cfg-2x2x25g-setup.zip are used to build the Docker image ‘fpga-opae-pacn3000:1.0’; OPAE is installed in this image. rsu creates a Kubernetes job that uses the image ‘fpga-opae-pacn3000:1.0’ as below:
apiVersion: batch/v1
kind: Job
metadata:
  name: fpga-opae-job
spec:
  template:
    spec:
      containers:
      - securityContext:
          privileged: true
        name: fpga-opea
        image: fpga-opae-pacn3000:1.0
        imagePullPolicy: Never
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "./check_if_modules_loaded.sh && fpgasupdate /root/images/<img_name> <RSU_PCI_bus_function_id> && rsu bmcimg (RSU_PCI_bus_function_id)" ]
        volumeMounts:
        - name: class
          mountPath: /sys/devices
          readOnly: false
        - name: image-dir
          mountPath: /root/images
          readOnly: false
      volumes:
      - hostPath:
          path: "/sys/devices"
        name: class
      - hostPath:
          path: "/temp/vran_images"
        name: image-dir
      restartPolicy: Never
      nodeSelector:
        kubernetes.io/hostname: samplenodename
  backoffLimit: 0

User FPGA images will be put in the directory /temp/vran_images/.

To configure the VFs with the necessary number of queues for the vRAN workload, the BBDEV configuration utility is run as a job within a privileged container.
make build-docker-fpga-cfg
kubectl create -f fpga-sample-configmap.yaml[1]
kubectl create -f fpga-config-job.yaml[2]

A sample pod requesting the FPGA (FEC) VF may look like this:
apiVersion: v1
kind: Pod
metadata:
  name: test
  labels:
    env: test
spec:
  containers:
  - name: test
    image: centos:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 300000; done;" ]
    resources:
      requests:
        intel.com/intel_fec_5g: '1'
      limits:
        intel.com/intel_fec_5g: '1'

[1]https://github.com/open-ness/edgecontroller/blob/master/fpga/fpga-sample-configmap.yaml
[2]https://github.com/open-ness/edgecontroller/blob/master/fpga/fpga-config-job.yaml

kubectl plugin rsu

 Not found

Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000,
DPDK 18.08,
Hugepage support

1. Test cases are missing; we need to ask where they are.

2. bbdev_config_service, n3000-1-3-5-beta-rte-setup.zip, n3000-1-3-5-beta-cfg-2x2x25g-setup.zip, and flexran-dpdk-bbdev-v19-10.patch are not available. Need to request these packages.

3. The FPGA image for 5GNR vRAN is not available. Need to request this image.

4. What is the difference between FlexRAN and vRAN?

Yes
This is developed by the OpenNESS team, and FPGA support is requested by Srini.

bios

On the master node, build the kubectl plugin biosfw and move the binary to the /usr/bin/ directory. This plugin creates a Kubernetes job and runs syscfg in that job. The Intel® System Configuration Utility (Syscfg) is a command-line utility that can be used to save and restore BIOS and firmware settings to a file, or to set and display individual settings.

On the worker node, syscfg_package.zip is used to build the Docker image ‘openness-biosfw’; Syscfg is unzipped inside this image. The Kubernetes job created by the kubectl plugin biosfw uses the image ‘openness-biosfw’ as below:
apiVersion: batch/v1
kind: Job
metadata:
  name: openness-biosfw-job
spec:
  backoffLimit: 0
  activeDeadlineSeconds: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: openness-biosfw-job
          image: openness-biosfw
          imagePullPolicy: Never
          securityContext:
            privileged: true
          args: ["$(BIOSFW_COMMAND)"]
          env:
            - name: BIOSFW_COMMAND
              valueFrom:
                configMapKeyRef:
                  name: biosfw-config
                  key: COMMAND
          volumeMounts:
            - name: host-devices
              mountPath: /dev/mem
            - name: biosfw-config-volume
              mountPath: /biosfw-config/
      volumes:
        - name: host-devices
          hostPath:
            path: /dev/mem
        - name: biosfw-config-volume
          configMap:
            name: biosfw-config
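
The job above reads its command from a ConfigMap named biosfw-config (key COMMAND) and mounts the same ConfigMap as a volume. A minimal sketch of that ConfigMap; the COMMAND value shown is only an assumed example of a syscfg invocation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: biosfw-config
data:
  # Assumed example value; the plugin supplies the actual syscfg sub-command.
  COMMAND: "save saved_bios.ini"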

Kubectl plugin biosfw

 Not found

certain Intel® Server platforms

https://downloadcenter.intel.com/download/28713/Save-and-Restore-System-Configuration-Utility-SYSCFG-

1. Test cases are missing; we need to ask where they are.

2. Ask which server model, motherboard version, and BIOS version are needed to test the EPA BIOS feature.

Yes
This is developed by the OpenNESS team.

topology manager

Configure kubelet on the worker node as below:
1. Set cpuManagerPolicy to static
2. Set topologyManagerPolicy to best-effort
# BEGIN OpenNESS configuration - General

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
kubeletCgroups: "/systemd/system.slice"
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
featureGates:
  TopologyManager: True
podPidsLimit: 2048
# END OpenNESS configuration - General
# BEGIN OpenNESS configuration - CPU Manager
cpuManagerPolicy: static
kubeReserved:
  cpu: "1"
# END OpenNESS configuration - CPU Manager
# BEGIN OpenNESS configuration - Topology Manager
topologyManagerPolicy: best-effort
# END OpenNESS configuration - Topology Manager

Kubelet component

 Not found

K8s 1.16

Test cases are missing; we need to ask where they are.

No
This is standard topology manager.

CMK

Download the following files:
cmk-namespace.yaml[1] cmk-serviceaccount.yaml[2] cmk-rbac-rules.yaml[3] cmk-cluster-init-pod.yaml[4]

Copy the following files into the same directory as cmk-namespace.yaml, cmk-serviceaccount.yaml, cmk-rbac-rules.yaml, and cmk-cluster-init-pod.yaml:
kustomization.yml[5] rewrite_args.yml.j2[6]

Run the following command:
kubectl kustomize . | kubectl apply -f -
This kustomize command will change the parameters in cmk-cluster-init-pod.yaml:
- args:
      # Change this value to pass different options to cluster-init.
      - "/cmk/cmk.py cluster-init --host-list=node1,node2,node3 --saname=cmk-serviceaccount --namespace=cmk-namespace"

On each worker node, clone the project https://github.com/intel/CPU-Manager-for-Kubernetes and check out the commit e3df769521558cff7734c568ac5d3882d4f41af9. Use the ‘make’ command to build the Docker image.

[1]https://raw.githubusercontent.com/intel/CPU-Manager-for-Kubernetes/e3df769521558cff7734c568ac5d3882d4f41af9/resources/authorization/cmk-namespace.yaml
[2]https://raw.githubusercontent.com/intel/CPU-Manager-for-Kubernetes/e3df769521558cff7734c568ac5d3882d4f41af9/resources/authorization/cmk-serviceaccount.yaml
[3]https://raw.githubusercontent.com/intel/CPU-Manager-for-Kubernetes/e3df769521558cff7734c568ac5d3882d4f41af9/resources/authorization/cmk-rbac-rules.yaml
[4]https://raw.githubusercontent.com/intel/CPU-Manager-for-Kubernetes/e3df769521558cff7734c568ac5d3882d4f41af9/resources/pods/cmk-cluster-init-pod.yaml
[5]https://github.com/open-ness/openness-experience-kits/blob/master/roles/cmk/master/files/kustomization.yml
[6]https://github.com/open-ness/openness-experience-kits/blob/master/roles/cmk/master/templates/rewrite_args.yml.j2


Not found

 Nothing

Test cases are missing; we need to ask where they are.

No
This is standard CMK.

nfd

version: v0.4.0

Download the following files:
nfd-master.yaml.template[1] nfd-worker-daemonset.yaml.template[2]

Copy the following files into the same directory as nfd-master.yaml.template and nfd-worker-daemonset.yaml.template:
add_nfd_namespace.yml[3] kustomization.yml[4] replace_cluster_role_binding_namespace.yml[5] replace_service_account_namespace.yml[6] enable_nfd_master_certs.yml.j2[7] enable_nfd_worker_certs.yml.j2[8]

Run the following command to kustomize the files (nfd-master.yaml and nfd-worker-daemonset.yaml):
kubectl kustomize . | kubectl apply -f -
The above kustomize command replaces the namespace ‘default’ with ‘openness’ and adds certs to nfd-master and nfd-worker.

Apply the network policy below to allow communication between nfd-master and nfd-worker:
https://github.com/open-ness/edgecontroller/blob/master/kube-ovn/nfd_network_policy.yml

[1]https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.4.0/nfd-master.yaml.template
[2]https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.4.0/nfd-worker-daemonset.yaml.template
[3]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/files/add_nfd_namespace.yml
[4]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/files/kustomization.yml
[5]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/files/replace_cluster_role_binding_namespace.yml
[6]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/files/replace_service_account_namespace.yml
[7]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/templates/enable_nfd_master_certs.yml.j2
[8]https://github.com/open-ness/openness-experience-kits/blob/master/roles/nfd/templates/enable_nfd_worker_certs.yml.j2
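
For reference, a hedged sketch of such a policy; the namespace, pod labels, and port below are assumptions, and the authoritative file is the nfd_network_policy.yml linked above:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nfd-network-policy
  namespace: openness
spec:
  podSelector:
    matchLabels:
      app: nfd-master
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: nfd-worker
      ports:
        - protocol: TCP
          port: 8080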

nfd-master running as daemonset on kubernetes master node

nfd-worker running as daemonset 

 Not found

 Nothing

Test cases are missing; we need to ask where they are.

No
This is standard NFD; only a few changes are applied, such as the namespace and certs.

OpenNESS integration test plan for Multus, SR-IOV CNI, SR-IOV Network Device Plugin, Topology Manager, CMK, NFD

Microservice

ICN

OPENNESS

Difference

Next

MULTUS

  1. Apply the bridge-network.yaml[1].
  2. Create Multus-deployment.yaml[2] with two bridge interfaces.
  3. Run the following command to check whether the “net1” interface was created:

         kubectl exec -it $deployment_pod -- ip a

[1]https://github.com/onap/multicloud-k8s/blob/9c63ce2a7b2b66b3e3fce5d1f553f327148df83f/kud/tests/_common.sh#L856

[2]https://github.com/onap/multicloud-k8s/blob/9c63ce2a7b2b66b3e3fce5d1f553f327148df83f/kud/tests/_common.sh#L873

  1. Apply the macvlan-network.yaml.
  2. Create a pod with macvlan annotation.
  3. Verify the “net1” interface was configured in the deployed pod.

Link:

openness multus usage: https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md#multus-usage 

  • Different network types are used for testing: ICN uses the ‘bridge’ type, while OpenNESS uses ‘macvlan’.
  • Update the ICN test case to also verify the macvlan network type; a sketch follows.
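
A minimal sketch of the proposed macvlan verification; the master interface name and the IPAM range are assumptions and must match the node:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-network
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: macvlan-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-network
spec:
  containers:
    - name: test
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 300000; done;" ]

The check itself stays the same as today: kubectl exec -it macvlan-test-pod -- ip a, then confirm that a “net1” interface exists.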

SRIOV CNI

  1. Apply the sriov-network.yaml[1].
  2. Check if the Ethernet adapter version is equal to "XL710".
  3. Create a pod[2] with the sriov annotation field and the sriov resource requested. 
  4. Verify the deployed pod status.

kubectl get pods $pod | awk 'NR==2{print $3}'

  5. Check the current SR-IOV resource allocation status[3].

[1]https://github.com/onap/multicloud-k8s/blob/9c63ce2a7b2b66b3e3fce5d1f553f327148df83f/kud/deployment_infra/playbooks/sriov-nad.yml#L1

[2]https://github.com/onap/multicloud-k8s/blob/9c63ce2a7b2b66b3e3fce5d1f553f327148df83f/kud/tests/sriov.sh#L32

[3]https://github.com/onap/multicloud-k8s/blob/9c63ce2a7b2b66b3e3fce5d1f553f327148df83f/kud/tests/sriov.sh#L68

  1. Apply the sriov-network.yaml
  2. Create a pod with the sriov annotation field and the sriov resource requested. 
  3. Verify the “net1” interface was configured  in the deployed pod.


Link:

Openness sriov usage:

https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-sriov-multiple-interfaces.md#usage

  • Besides deploying the pod with an SR-IOV interface, ICN also checks the currently allocated SR-IOV resource status.
  • ICN’s testing is more comprehensive and covers the OpenNESS test scope, so the ICN test case remains unchanged; a sketch of the pod used in both flows follows.
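
For reference, a sketch of the pod used in both flows: it carries the SR-IOV network annotation and requests one VF. The network name and resource name are assumptions and must match the NetworkAttachmentDefinition and device-plugin configMap in use:
apiVersion: v1
kind: Pod
metadata:
  name: sriov-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-openness
spec:
  containers:
    - name: test
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 300000; done;" ]
      resources:
        requests:
          intel.com/intel_sriov_netdevice: '1'
        limits:
          intel.com/intel_sriov_netdevice: '1'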

SRIOV NETWORK DEVICE PLUGIN

NFD

Verify NFD by setting the ‘affinity’ field in pod.yaml.

...

apiVersion: v1
kind: Pod
metadata:
  name: $pod_name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "feature.node.kubernetes.io/kernel-version.major"
            operator: Gt
            values:
            - '3'
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0

...

Link:

KUD test script:

https://github.com/onap/multicloud-k8s/blob/master/kud/tests/nfd.sh 

Verify NFD by setting the ‘nodeSelector’ field in pod.yaml.

apiVersion: v1
kind: Pod
metadata:
  labels:
    env: test
  name: golang-test
spec:
  containers:
    - image: golang
      name: go1
  nodeSelector:
    feature.node.kubernetes.io/cpu-pstate.turbo: 'true'









Link:

Openness nfd usage:

https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-node-feature-discovery.md#usage

  • Node affinity is conceptually similar to nodeSelector: it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.

  • Both tests are roughly equivalent: ICN specifies ‘affinity’ to check whether NFD is effective, and OpenNESS uses the ‘nodeSelector’ field.
  • Add a check condition for the label feature.node.kubernetes.io/cpu-pstate.turbo: 'true'.

CMK

NIL



Link:

CMK official validate solution:

https://github.com/intel/CPU-Manager-for-Kubernetes/blob/master/docs/operator.md#validating-the-environment

Liang’s patch: 

https://gerrit.onap.org/r/c/multicloud/k8s/+/102311

  1. Create a pod that can be used to deploy applications pinned to a core (a sketch follows this row).

Link:

Openness CMK usage:

https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-dedicated-core.md#usage

  • CMK integration is under way in ICN, so ICN does not provide a test case yet.
  • NIL
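
For reference, a hedged sketch of the kind of pod described in the OpenNESS step above, pinning a workload to an exclusive core via cmk isolate. The image name, host paths, pool name, and node name follow the upstream CMK documentation and are assumptions here:
apiVersion: v1
kind: Pod
metadata:
  name: cmk-isolate-pod
spec:
  nodeName: node1
  containers:
    - name: cmk-isolate
      image: cmk:v1.3.1
      # Run the workload (sleep 9000) on a core from the exclusive pool.
      command: [ "/opt/bin/cmk" ]
      args: [ "isolate", "--conf-dir=/etc/cmk", "--pool=exclusive", "sleep", "9000" ]
      volumeMounts:
        - name: cmk-conf-dir
          mountPath: /etc/cmk
        - name: cmk-install-dir
          mountPath: /opt/bin
  volumes:
    - name: cmk-conf-dir
      hostPath:
        path: /etc/cmk
    - name: cmk-install-dir
      hostPath:
        path: /opt/bin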

Topology Manager

NIL





Link:

Topology Manager limitation:

https://kubernetes.io/docs/tasks/administer-cluster/topology-manager/#known-limitations

  1. Create a pod with the guaranteed (requests equal to limits) QoS class; a sketch follows this row.
  2. Check the kubelet logs on your node (journalctl -xeu kubelet).

Link:

Openness TM usage:

https://github.com/open-ness/specs/blob/master/doc/enhanced-platform-awareness/openness-topology-manager.md#usage

  • Topology Manager is not yet implemented in ICN, so ICN does not provide a test case now.
  • Depends on the Kubernetes version.
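
For reference, a minimal sketch of the guaranteed-QoS pod from the OpenNESS step above; the image and resource sizes are arbitrary examples. After creating it, inspect the kubelet logs with journalctl -xeu kubelet for Topology Manager output:
apiVersion: v1
kind: Pod
metadata:
  name: tm-guaranteed-pod
spec:
  containers:
    - name: workload
      image: centos:latest
      command: [ "/bin/bash", "-c", "--" ]
      args: [ "while true; do sleep 300000; done;" ]
      # Requests equal limits, so the pod gets the guaranteed QoS class.
      resources:
        requests:
          cpu: "2"
          memory: "500Mi"
        limits:
          cpu: "2"
          memory: "500Mi"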

OpenNESS DNS config agent design

OpenNESS extends the kubectl command line to set EdgeDNS (described in the 20318887 section). To integrate OpenNESS with ICN, we will not use this; instead, we will create a config agent to set edge DNS. This config agent will monitor the CRD below:

...