...
Overall Test Architecture
We support the following jobs:
CI job
- Triggered by Gerrit patch creation/update.
- The job runs verify.sh under the ICN project; verify.sh currently integrates the golang tests and the bashate test.
- Posts +1/-1 on the Gerrit patch if the build succeeds/fails.
- Uploads the job log to the Nexus server in post-build actions.
CD job for test
- Triggered daily automatically (it can also be triggered manually).
- Runs a make command, which creates VM(s) and deploys the ICN components on the VM(s).
- Uploads the job log to the Nexus server in post-build actions.
CI jobs detail
Updating verify.sh updates the CI job content.
CD job detail
We have the following steps for the CD job:
- On our private Jenkins node, we provision a VM with Vagrant. A Vagrantfile defining the VM's properties is needed; properties we can set include the VM hostname, memory (64GB), CPUs (16), and disk (300GB).
- Log in to the VM and run 'make verifier', which installs the components in the VM.
- Destroy the VM as the last step of the job.
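The steps above can be sketched with a minimal Vagrantfile. This is an illustrative sketch only: the box name, provider, and provisioning path are assumptions, not the actual ICN configuration.

```ruby
# -*- mode: ruby -*-
# Illustrative Vagrantfile sketch; the real ICN Vagrantfile may differ.
Vagrant.configure("2") do |config|
  config.vm.box      = "generic/ubuntu2004"    # assumed base box
  config.vm.hostname = "icn-verifier"          # VM hostname

  config.vm.provider :libvirt do |v|           # assumed provider
    v.memory = 65536                           # 64GB memory
    v.cpus   = 16                              # 16 vCPUs
    # The 300GB disk is typically set via the box or a provider disk option.
  end

  # Run the verifier target once the VM is up.
  config.vm.provision "shell", inline: "cd /vagrant && make verifier"
end
```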
Test Bed
Pod Topology
ICN Master Baremetal Deployment Verifier
ICN Master Virtual Deployment Verifier
Baremetal deployment
Hostname | CPU Model | Memory | BMC Firmware | Storage | 1GbE: NIC#, VLAN (connected to Extreme 480 switch) | 10GbE: NIC#, VLAN, Network (connected to IZ1 switch) | 40GbE: NIC# |
---|---|---|---|---|---|---|---|
Jump | Intel 2xE5-2699 | 64GB | 1.46.9995 | 3TB (SATA) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) | |
node1 | Intel 2xE5-2699 | 64GB | 1.46.9995 | 3TB (SATA) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) | |
node2 | Intel 2xE5-2699 | 64GB | 1.46.9995 | 3TB (SATA) | IF0: VLAN 110 (DMZ) | IF2: VLAN 112 (Private) | IF4: SRIOV |
Virtual deployment
Hostname | CPU Model | Memory | Storage | 1GbE: NIC#, VLAN (connected to Extreme 480 switch) | 10GbE: NIC#, VLAN, Network (connected to IZ1 switch) |
---|---|---|---|---|---|
node1 | Intel 2xE5-2699 | 64GB | 3TB (SATA), 180GB (SSD) | IF0: VLAN 110 (DMZ); IF1: VLAN 111 (Admin) | |
Test Framework
All components are tested with end-to-end testing.
Traffic Generator
A containerized packet generator was developed for traffic-generation testing in cFW.
Test description
Testing
CI Testing:
...
- Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. This is accomplished by Multus acting as a "meta-plugin", a CNI plugin that can call multiple other CNI plugins.
- A 'NetworkAttachmentDefinition' is used to set up the network attachment, i.e. secondary interface for the pod.
- A pod is created requesting specific network annotations with the bridge CNI to create multiple interfaces. When the pod is up and running, we can attach to it and check its network interfaces by running the 'ip a' command.
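The Multus flow above can be sketched as a NetworkAttachmentDefinition plus a pod that requests it; the names, bridge, and subnet below are illustrative assumptions, not the actual test manifests.

```yaml
# Illustrative sketch: a bridge-CNI NetworkAttachmentDefinition and a pod
# requesting it as a secondary interface (names and subnet are assumptions).
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "mybr0",
    "ipam": { "type": "host-local", "subnet": "10.10.0.0/16" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multus-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-conf
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```

Inside the running pod, 'ip a' should then show eth0 plus a secondary interface (e.g. net1) attached through the bridge.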
Virtlet:
- Virtlet is a Kubernetes runtime server which allows you to run VM workloads based on QCOW2 images.
- We create a Virtlet VM pod-spec file adhering to the Virtlet standards to create a VM in a Kubernetes environment.
- The pod-spec file is applied to bring up the Virtlet deployment and confirm it is running. We attach to the pod and verify the VM is running fine by connecting to it and checking details.
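A Virtlet VM pod spec of the kind described above might look like the following sketch; the image URL, vCPU count, and node label follow common upstream Virtlet examples and are assumptions here.

```yaml
# Illustrative Virtlet VM pod sketch (image URL and values are examples).
apiVersion: v1
kind: Pod
metadata:
  name: virtlet-vm
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud   # route the pod to Virtlet
    VirtletVCPUCount: "2"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: extraRuntime
            operator: In
            values: [virtlet]
  containers:
  - name: vm
    # The "virtlet.cloud/" image prefix tells Virtlet to fetch a QCOW2 image.
    image: virtlet.cloud/cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
```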
OVN4NFV:
- OVN4NFV provides provider networks using VLAN networking and Service Function Chaining.
- After the pod is up and running we will be able to attach to the pod and check for multiple interfaces created inside the container.
- OVN4NFV networking is set up and created alongside the EMCO composite vFW testing.
Node Feature Discovery
- Node feature discovery for Kubernetes detects hardware features available on each node in a Kubernetes cluster and advertises those features using node labels.
- Create a pod with specific label requirements so that it is scheduled only on nodes whose major kernel version is 3 or above. Since the NFD master and the worker DaemonSet are already running, the master has all the label information about the nodes, collected by the workers.
- If the kernel version matches, the pod will be scheduled and come up. Otherwise, the pod will stay in a Pending state, since no node carries the labels the pod requests.
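The kernel-version constraint above can be expressed with node affinity on the NFD-advertised label, as in this sketch (pod name and image are illustrative):

```yaml
# Illustrative sketch: schedule only on nodes whose NFD-advertised major
# kernel version is greater than 2 (i.e. 3 and above).
apiVersion: v1
kind: Pod
metadata:
  name: nfd-test-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: feature.node.kubernetes.io/kernel-version.major
            operator: Gt
            values: ["2"]
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```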
SRIOV
- The SRIOV network device plugin is a Kubernetes device plugin for discovering and advertising SRIOV network virtual functions (VFs) in a Kubernetes host.
- We first determine which hosts are SRIOV capable, install the drivers on them, run the DaemonSet, and register the network attachment definition.
- On an SRIOV-capable host, we can get the resources for the node before we run the pod. When we run the test case, the pod requests a VF, so the number of allocated resources for the node increases.
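The VF request in the test above looks roughly like this pod sketch; the resource name and network attachment name depend on the device plugin configMap and registered NetworkAttachmentDefinition, so both are assumptions here.

```yaml
# Illustrative sketch: a pod requesting one SRIOV VF (resource/network
# names are assumptions; actual names come from the plugin configuration).
apiVersion: v1
kind: Pod
metadata:
  name: sriov-test-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        intel.com/intel_sriov_700: "1"
      limits:
        intel.com/intel_sriov_700: "1"
```

After this pod is running, the node's allocated-resources count for the VF resource increases by one, which is what the test case checks.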
QAT
- KUD identifies whether there are QAT devices on the host and decides whether to deploy the QAT device plugin into the Kubernetes cluster.
- The QAT device plugin discovers and advertises QAT virtual functions (VFs) to the Kubernetes cluster.
- KUD assigns 1 QAT VF to the kernel workloads. After the assignment finishes, the allocated resources in the node description increase.
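The single-VF assignment above amounts to a pod resource request such as the following sketch; the resource name is an assumption based on the upstream Intel QAT device plugin, which typically advertises "qat.intel.com/generic".

```yaml
# Illustrative sketch: a pod requesting one QAT VF (resource name assumed).
apiVersion: v1
kind: Pod
metadata:
  name: qat-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        qat.intel.com/generic: "1"
      limits:
        qat.intel.com/generic: "1"
```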
CMK
- CPU Manager for Kubernetes (CMK) provides CPU pinning for Kubernetes workloads. In KUD, there are two test cases, covering the exclusive and shared CPU pools.
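An exclusive-pool workload under CMK is typically wrapped with its "isolate" command, roughly as in this sketch; the binary path and pool name follow common CMK deployments and are assumptions here (the cmk binary is usually mounted into the pod from the host).

```yaml
# Illustrative sketch: pin a workload to the CMK exclusive pool
# (binary path, pool name, and mount details are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: cmk-test-pod
spec:
  containers:
  - name: app
    image: busybox
    # "cmk isolate" picks a free core from the named pool and pins the
    # wrapped command to it.
    command: ["/opt/bin/cmk", "isolate", "--pool=exclusive", "sleep", "3600"]
```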
Optane PM
- The Optane PM plugin is a Kubernetes CSI plugin and driver providing storage volume provisioning for Kubernetes applications.
- Check whether the Optane PM hardware (NVDIMM) exists; if not, skip the validation.
- Configure the Optane PM plugin in KUD, and create the StorageClass and PersistentVolumeClaim used by the Kubernetes application. Check whether the PVC is bound; if yes, the Optane PM volume was created, bound to the PVC, and used by the application, and the validation passes.
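The StorageClass/PVC pair described above might look like this sketch; the provisioner name follows the upstream pmem-csi project, and the object names and size are illustrative assumptions.

```yaml
# Illustrative sketch: PMEM-CSI StorageClass plus a PVC that binds to it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-csi-sc
provisioner: pmem-csi.intel.com        # upstream PMEM-CSI driver name
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pmem-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 4Gi
  storageClassName: pmem-csi-sc
```

The validation passes once this PVC reports the Bound phase and the application pod mounts it.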
SDEWAN
- Use Kud to setup 3 clusters (sdewan-hub, edge-a, edge-b)
- Run the SDEWAN CRD Controller in each cluster.
- Create SDEWAN CNF instance and dummy pod (using httpbin instead) in edge-a, SDEWAN CNF instance and httpbin pod in edge-b
- Create an IPsec CR to configure sdewan-hub as a responder that provides virtual IP addresses to any authenticated party requesting them through the SDEWAN CRD Controller.
- Create IPsec CRs to configure edge-a and edge-b so that they obtain their IP addresses through the SDEWAN CRD Controller.
- Establish the edge-a tunnel to sdewan-hub and the edge-b tunnel to sdewan-hub; the hub's XFRM policies will automatically route traffic between edge-a and edge-b.
- Create an SNAT CR to establish the SNAT rule in edge-a and a DNAT CR to establish the DNAT rule in edge-b, which enables TCP connections from edge-a to edge-b's httpbin service.
- Verify that a curl command from edge-a's dummy pod (using httpbin instead) to edge-b's httpbin service succeeds. The curl command returns the IP address of the requester.
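As a sketch of the DNAT step above, a rule on edge-b might look like the following CR. The API group, kind, and field names are assumptions based on the SDEWAN CRD controller and are not verified against this deployment; all names and addresses are illustrative.

```yaml
# Hypothetical FirewallDNAT CR forwarding tunnel traffic to edge-b's
# httpbin service (kind, fields, and addresses are assumptions).
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: FirewallDNAT
metadata:
  name: httpbin-dnat
  labels:
    sdewanPurpose: cnf1          # selects the SDEWAN CNF instance
spec:
  src: "#ovn-network"
  src_dip: "10.10.10.15"         # virtual IP assigned by the hub (example)
  dest: "#ovn-network"
  dest_ip: "10.21.0.12"          # httpbin service IP on edge-b (example)
  proto: "tcp"
  target: "DNAT"
```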
EMCO:
- EMCO sanity testing checks the health and connectivity of the EMCO microservices once they are installed.
cFW:
- The cloud-native firewall (cFW) has multiple components, such as the packet generator, the sink, and the firewall itself.
- Packet generator: sends packets to the packet sink through the firewall. It includes a script that periodically generates different volumes of traffic inside the container.
- Firewall: reports the volume of traffic passing through to the ONAP DCAE collector.
- Traffic sink: displays the traffic volume that lands at the sink container. Open the sink's node port link in your browser and enable automatic page refresh by clicking the "Off" button; the traffic volume is then shown in the charts.
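The packet generator's behavior of periodically switching traffic volume can be sketched in a few lines of Python. This is not the actual ONAP/cFW packetgen code; the volume levels and function names are illustrative assumptions.

```python
import random

# Illustrative sketch (not the real packetgen script): every interval, the
# generator picks a traffic volume and would send that many packets toward
# the sink through the firewall.
VOLUME_LEVELS = [100, 500, 1000, 5000]  # packets per interval (assumed values)

def pick_volumes(intervals, seed=0):
    """Return the volume chosen for each interval (deterministic per seed)."""
    rng = random.Random(seed)
    return [rng.choice(VOLUME_LEVELS) for _ in range(intervals)]

if __name__ == "__main__":
    # Ten intervals of varying traffic, mirroring the periodic volume changes
    # the sink's charts display.
    print(pick_volumes(10))
```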
BluVal Testing
Status as of June 25th 2021:
Layer | Result | Comments | Nexus |
---|---|---|---|
os/lynis | PASS with exceptions | Exceptions: | Logs |
os/vuls | PASS with exceptions | Exceptions: | Logs |
k8s/conformance | PASS with exceptions | Exceptions: | Logs |
k8s/kube-hunter | PASS | With aquasec/kube-hunter:edge image | Logs |
...
Akraino BluVal Exception Request
...
The Gerrit comments contain the CI log URL. All the CI logs are under the ICN folder: https://jenkins.akraino.org/view/icn/job/icn-master-verify/
CD Logs:
ICN Master Baremetal Deployment Verifier
ICN Master Virtual Deployment Verifier
ICN SDEWAN Master End2End Testing
Test Dashboards
All the testing results are available in the logs linked above.
...