This document describes the blueprint test environment for the Smart Data Transaction for CPS blueprint. The test results and logs are posted in the Akraino Nexus at the link below:
https://nexus.akraino.org/content/sites/logs/fujitsu/job/
N/A
Testing has been carried out at Fujitsu Limited labs without any Akraino Test Working Group resources.
Tests are carried out on the architecture shown in the diagram below.
The test bed consists of four VMs running on x86 hardware in the CI/CD, build, deploy, and master node roles, two edge nodes on ARM64 (Jetson Nano) hardware, and two IP cameras, as listed in the table below.
Node Type | Count | Hardware | OS |
---|---|---|---|
CI/CD | 1 | Intel i5, 2 cores VM | Ubuntu 20.04 |
Build | 1 | Intel i5, 2 cores VM | Ubuntu 20.04 |
Deploy | 1 | Intel i5, 2 cores VM | Ubuntu 20.04 |
Master | 1 | Intel i5, 2 cores VM | Ubuntu 20.04 |
Edge | 2 | Jetson Nano, ARM Cortex-A57, 4 cores | Ubuntu 20.04 |
Camera | 2 | H.View HV-500E6A | N/A (pre-installed) |
The Build VM is used to run the BluVal test framework components outside the system under test.
BluVal and additional tests are carried out using Robot Framework.
N/A
This set of test cases confirms the scripting to change the default runtime of edge nodes.
The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/install/ directory.
The test bed is placed in a state where all nodes are prepared with the required software. No EdgeX or Kubernetes services are running.
Execute the test scripts:
robot cicd/tests/sdt_step2/install/
The test scripts will change the default runtime of edge nodes from runc to nvidia.
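For reference, the runtime change the scripts automate is conceptually equivalent to the following manual steps on each Jetson edge node (a sketch only, based on the standard NVIDIA container runtime configuration; the actual scripts may differ):

# Sketch: switch the Docker default runtime from runc to nvidia (assumed equivalent
# of what the install scripts automate; verify against the actual scripts).
$ sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
$ sudo systemctl restart docker
$ docker info | grep -i 'default runtime'   # confirm the new default runtime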
The robot command should report success for all test cases.
Nexus URL:
Pass (1/1 test case)
These test cases verify that the images for the EdgeX microservices can be built and pushed to the private registry.
The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/build/ directory.
The test bed is placed in a state where all nodes are prepared with required software and the Docker registry is running.
Execute the test scripts:
robot cicd/tests/sdt_step2/build/
The test scripts will build images of the changed services (sync-app, image-app, device-camera) and push the images to the private registry.
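For reference, the build and push steps correspond roughly to the manual commands below (a sketch; the registry address, context directory, and tags are placeholders, not values taken from the test scripts):

# Sketch: build one of the custom service images and push it to the private registry.
$ docker build -t <registry-host>:5000/sync-app:latest <sync-app-source-dir>
$ docker push <registry-host>:5000/sync-app:latest
# Repeat for image-app and device-camera.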
The robot command should report success for all test cases.
Nexus URL:
Pass (2/2 test cases)
These test cases verify that the Kubernetes cluster can be initialized, edge nodes added to it and removed, and the cluster torn down.
The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/cluster/ directory.
The test bed is placed in a state where all nodes are prepared with required software and the Docker registry is running. The registry must be populated with the Kubernetes and Flannel images from upstream.
Execute the test scripts:
robot cicd/tests/sdt_step2/cluster/
The test scripts will start the cluster, add all configured edge nodes, remove the edge nodes, and reset the cluster.
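These operations broadly follow the standard kubeadm workflow sketched below (illustrative only; the blueprint's scripts wrap these steps and may use different options):

# Sketch of the cluster lifecycle exercised by the tests (illustrative kubeadm workflow).
# On the master node: initialize the cluster and install the Flannel CNI.
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl apply -f kube-flannel.yml
# On each edge node: join the cluster using the token printed by kubeadm init.
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
# On the master node: remove an edge node (flag names vary slightly by kubectl version).
$ kubectl drain <edge-node> --ignore-daemonsets --delete-emptydir-data
$ kubectl delete node <edge-node>
# On the master node: tear the cluster down.
$ sudo kubeadm reset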
The robot command should report success for all test cases.
Nexus URL:
Pass (4/4 test cases)
These test cases verify that the EdgeX micro-services can be started and that MQTT messages are passed to the master node from the services.
The test scripts and data are stored in the source repository's cicd/tests/sdt_step2/edgex/ directory.
The test bed is placed in a state where the cluster is initialized and all edge nodes have joined. The Docker registry and mosquitto MQTT broker must be running on the master node. The registry must be populated with all upstream images and custom images. Either the device-camera service or the device-virtual service should be enabled to provide readings.
Execute the test scripts:
robot cicd/tests/sdt_step2/edgex/
The test scripts will start the EdgeX micro-services on all edge nodes, confirm that MQTT messages are being delivered from the edge nodes, and stop the EdgeX micro-services.
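MQTT delivery can also be spot-checked manually on the master node with mosquitto_sub (the wildcard topic below is an assumption; substitute the topic configured for the blueprint):

# Sketch: subscribe to the broker on the master node and watch for messages from the edge nodes.
$ mosquitto_sub -h <master-ip> -t '#' -v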
The robot command should report success for all test cases.
Nexus URL:
Pass (8/8 test cases)
These test cases verify that the device-camera service can capture an image from the IP camera, the sync-app service can share the image with another edge node, the image-app service can analyze the image, and the support-notification service can receive the crowd notification.
The test steps and data are contained in the scripts in the source repository's cicd/tests/sdt_step2/camera/ directory.
The test bed is initialized to the point of having all EdgeX services running, with device-camera and image-app enabled.
Execute the test scripts:
robot cicd/tests/sdt_step2/camera/
The test cases will check that the MQTT messages and the core-data service contain the data for image acquisition, image sharing, and image analysis, and that the support-notification service holds the crowd notification data after the crowd-detection rule is set.
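The same data can also be inspected manually through the EdgeX REST APIs, for example as below (a sketch; ports 59880 and 59860 are the EdgeX v2 defaults for core-data and support-notifications, and the actual exposure, host names, and device names depend on the deployment):

# Sketch: query core-data for events/readings from the camera device (placeholders throughout).
$ curl "http://<edge-node-ip>:59880/api/v2/event/count"
$ curl "http://<edge-node-ip>:59880/api/v2/reading/device/name/<camera-device>?limit=5"
# Sketch: confirm support-notifications is reachable before checking for crowd notifications.
$ curl "http://<edge-node-ip>:59860/api/v2/ping"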
The Robot Framework should report success for all test cases.
Nexus URL:
Pass (9/9 test cases)
N/A
BluVal tests for Lynis, Vuls, and Kube-Hunter were executed on the test bed.
Steps To Implement Security Scan Requirements
https://vuls.io/docs/en/tutorial-docker.html
We use Ubuntu 20.04, so we run the Vuls test as follows:
Create directory
$ mkdir ~/vuls
$ cd ~/vuls
$ mkdir go-cve-dictionary-log goval-dictionary-log gost-log
Fetch NVD
$ docker run --rm -it \
    -v $PWD:/go-cve-dictionary \
    -v $PWD/go-cve-dictionary-log:/var/log/go-cve-dictionary \
    vuls/go-cve-dictionary fetch nvd
Fetch OVAL
$ docker run --rm -it \
    -v $PWD:/goval-dictionary \
    -v $PWD/goval-dictionary-log:/var/log/goval-dictionary \
    vuls/goval-dictionary fetch ubuntu 16 17 18 19 20
Fetch gost
$ docker run --rm -i \
    -v $PWD:/gost \
    -v $PWD/gost-log:/var/log/gost \
    vuls/gost fetch ubuntu
Create config.toml
[servers]

[servers.master]
host    = "192.168.51.22"
port    = "22"
user    = "test-user"
keyPath = "/root/.ssh/id_rsa"  # path to ssh private key in docker
Start vuls container to run tests
$ docker run --rm -it \
    -v ~/.ssh:/root/.ssh:ro \
    -v $PWD:/vuls \
    -v $PWD/vuls-log:/var/log/vuls \
    -v /etc/localtime:/etc/localtime:ro \
    -v /etc/timezone:/etc/timezone:ro \
    vuls/vuls scan \
    -config=./config.toml
Get the report
$ docker run --rm -it \
    -v ~/.ssh:/root/.ssh:ro \
    -v $PWD:/vuls \
    -v $PWD/vuls-log:/var/log/vuls \
    -v /etc/localtime:/etc/localtime:ro \
    vuls/vuls report \
    -format-list \
    -config=./config.toml
Create ~/validation/bluval/bluval-sdtfc.yaml to customize the Test
blueprint:
    name: sdtfc
    layers:
        - os
        - k8s

    os: &os
        -
            name: lynis
            what: lynis
            optional: "False"

    k8s: &k8s
        -
            name: kube-hunter
            what: kube-hunter
            optional: "False"
Update ~/validation/bluval/volumes.yaml file
volumes:
    # location of the ssh key to access the cluster
    ssh_key_dir:
        local: '/home/ubuntu/.ssh'
        target: '/root/.ssh'
    # location of the k8s access files (config file, certificates, keys)
    kube_config_dir:
        local: '/home/ubuntu/kube'
        target: '/root/.kube/'
    # location of the customized variables.yaml
    custom_variables_file:
        local: '/home/ubuntu/validation/tests/variables.yaml'
        target: '/opt/akraino/validation/tests/variables.yaml'
    # location of the bluval-<blueprint>.yaml file
    blueprint_dir:
        local: '/home/ubuntu/validation/bluval'
        target: '/opt/akraino/validation/bluval'
    # location on where to store the results on the local jumpserver
    results_dir:
        local: '/home/ubuntu/results'
        target: '/opt/akraino/results'
    # location on where to store openrc file
    openrc:
        local: ''
        target: '/root/openrc'

# parameters that will be passed to the container at each layer
layers:
    # volumes mounted at all layers; volumes specific for a different layer are below
    common:
        - custom_variables_file
        - blueprint_dir
        - results_dir
    hardware:
        - ssh_key_dir
    os:
        - ssh_key_dir
    networking:
        - ssh_key_dir
    docker:
        - ssh_key_dir
    k8s:
        - ssh_key_dir
        - kube_config_dir
    k8s_networking:
        - ssh_key_dir
        - kube_config_dir
    openstack:
        - openrc
    sds:
    sdn:
    vim:
Update ~/validation/tests/variables.yaml file
### Input variables cluster's master host
host: <IP Address>              # cluster's master host address
username: <username>            # login name to connect to cluster
password: <password>            # login password to connect to cluster
ssh_keyfile: /root/.ssh/id_rsa  # Identity file for authentication
Run Blucon
$ bash validation/bluval/blucon.sh sdtfc
BluVal tests should report success for all test cases.
Vuls results (manual) Nexus URL:
Lynis results (manual) Nexus URL:
Kube-Hunter results Nexus URL:
Nexus URL:
There are 5 CVEs with a CVSS score >= 9.0. Exceptions for these are requested here:
Release 7: Akraino CVE and KHV Vulnerability Exception Request
CVE-ID | CVSS | NVD | Fix/Notes |
---|---|---|---|
CVE-2016-1585 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2016-1585 | No fix available. TODO: File exception request |
CVE-2022-0318 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-0318 | Fix not yet available. TODO: File exception request |
CVE-2022-1927 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-1927 | Fix not yet available. TODO: File exception request |
CVE-2022-20385 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-20385 | No fix available. TODO: File exception request |
CVE-2022-37434 | 9.8 | https://nvd.nist.gov/vuln/detail/CVE-2022-37434 | No fix available (for zlib1g, zlib1g-dev). TODO: File exception request |
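Whether a distribution fix has since become available can be re-checked on the SUT; for example, for the zlib CVE above (a sketch using standard Ubuntu package tooling):

# Sketch: check the installed zlib packages and their changelog for a CVE-2022-37434 fix.
$ dpkg -l zlib1g zlib1g-dev
$ apt-get changelog zlib1g | grep -i CVE-2022-37434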
Nexus URL (run via Bluval, without fixes):
Nexus URL (manual run, with fixes):
The initial results compare with the Lynis Incubation: PASS/FAIL Criteria, v1.0 as follows.
2022-09-14 16:19:49 Test: Checking for program update...
2022-09-14 16:19:49 Result: Update check failed. No network connection?
2022-09-14 16:19:49 Info: to perform an automatic update check, outbound DNS connections should be allowed (TXT record).
2022-09-14 16:19:49 Suggestion: This release is more than 4 months old. Check the website or GitHub to see if there is an update available. [test:LYNIS] [details:-] [solution:-]
TODO Fix: Download and run the latest Lynis directly on the SUT. See the link below:
Steps To Implement Security Scan Requirements#InstallandExecute
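A typical way to fetch and run the latest Lynis directly on the SUT is sketched below (standard upstream usage; see the link above for the blueprint-specific steps):

# Sketch: run the latest upstream Lynis directly on the system under test.
$ git clone https://github.com/CISOfy/lynis
$ cd lynis
$ sudo ./lynis audit system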
No. | Test | Result | Fix |
---|---|---|---|
1 | Test: Checking PASS_MAX_DAYS option in /etc/login.defs | 2022-09-14 16:20:32 Result: password aging limits are not configured | TODO: Set PASS_MAX_DAYS 180 in /etc/login.defs and rerun. |
2 | Performing test ID AUTH-9328 (Default umask values) | 2022-09-14 16:20:32 Result: found umask 022, which could be improved | TODO: Set UMASK 027 in /etc/login.defs |
3 | Performing test ID SSH-7440 (Check OpenSSH option: AllowUsers and AllowGroups) | 2022-09-14 16:20:44 Result: SSH has no specific user or group limitation. Most likely all valid users can SSH to this machine. | TODO: Configure AllowUsers in /etc/ssh/sshd_config (allow only the admin account). |
4 | Test: checking for file /etc/network/if-up.d/ntpdate | 2022-09-14 16:20:46 Result: file /etc/network/if-up.d/ntpdate does not exist | OK |
5 | Performing test ID KRNL-6000 (Check sysctl key pairs in scan profile) : Following sub-tests required | N/A | N/A |
5a | sysctl key fs.suid_dumpable contains equal expected and current value (0) | 2022-09-14 16:20:58 Result: sysctl key fs.suid_dumpable contains equal expected and current value (0) | OK |
5b | sysctl key kernel.dmesg_restrict contains equal expected and current value (1) | 2022-09-14 16:20:58 Result: sysctl key kernel.dmesg_restrict has a different value than expected in scan profile. Expected=1, Real=0 | TODO: Add kernel.dmesg_restrict=1 to /etc/sysctl.d/90-lynis-hardening.conf |
5c | sysctl key net.ipv4.conf.default.accept_source_route contains equal expected and current value (0) | 2022-09-14 16:20:58 Result: sysctl key net.ipv4.conf.default.accept_source_route has a different value than expected in scan profile. Expected=0, Real=1 | TODO: Add net.ipv4.conf.default.accept_source_route=0 to /etc/sysctl.d/90-lynis-hardening.conf |
6 | Test: Check if one or more compilers can be found on the system | 2022-09-14 16:20:59 Result: found installed compiler. See top of logfile which compilers have been found or use /usr/bin/grep to filter on 'compiler' | TODO: Uninstall gcc and remove /usr/bin/as (installed with binutils) |
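The fixes listed in the table above could be applied along the following lines (a sketch; values come from the table, the admin user name is a placeholder, and each change should be verified on the SUT):

# Sketch: apply the hardening fixes from the table above (Ubuntu 20.04 assumed).
# 1. Password aging
$ sudo sed -i 's/^PASS_MAX_DAYS.*/PASS_MAX_DAYS\t180/' /etc/login.defs
# 2. Default umask
$ sudo sed -i 's/^UMASK.*/UMASK\t\t027/' /etc/login.defs
# 3. Restrict SSH logins to the admin account (placeholder user name)
$ echo 'AllowUsers <admin-user>' | sudo tee -a /etc/ssh/sshd_config
$ sudo systemctl restart ssh
# 5b/5c. Kernel hardening via sysctl
$ printf 'kernel.dmesg_restrict = 1\nnet.ipv4.conf.default.accept_source_route = 0\n' | sudo tee /etc/sysctl.d/90-lynis-hardening.conf
$ sudo sysctl --system
# 6. Remove compilers
$ sudo apt-get remove -y gcc
$ sudo rm -f /usr/bin/as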
Results after the above fixes are as follows:
TODO
No. | Test | Result |
---|---|---|
1 | Test: Checking PASS_MAX_DAYS option in /etc/login.defs | TODO |
2 | Performing test ID AUTH-9328 (Default umask values) | TODO |
3 | Performing test ID SSH-7440 (Check OpenSSH option: AllowUsers and AllowGroups) | TODO |
4 | Test: checking for file /etc/network/if-up.d/ntpdate | TODO |
5 | Performing test ID KRNL-6000 (Check sysctl key pairs in scan profile) : Following sub-tests required | |
5a | sysctl key fs.suid_dumpable contains equal expected and current value (0) | TODO |
5b | sysctl key kernel.dmesg_restrict contains equal expected and current value (1) | TODO |
5c | sysctl key net.ipv4.conf.default.accept_source_route contains equal expected and current value (0) | TODO |
6 | Test: Check if one or more compilers can be found on the system | TODO |
The post-fix manual logs can be found at TODO.
Nexus URL: TODO
There are no reported vulnerabilities. Note that this release includes fixes for vulnerabilities found in release 6. See the release 6 test document for details on those vulnerabilities and the fixes.
Note that the results still show one test failure. The "Inside-a-Pod Scanning" test case reports failure, apparently because the log ends with "Kube Hunter couldn't find any clusters" instead of "No vulnerabilities were found." This also occurred during release 6 testing. Because vulnerabilities were detected and reported in release 6 by this test case, and those vulnerabilities are no longer reported, we believe this is a false negative, and may be caused by this issue: https://github.com/aquasecurity/kube-hunter/issues/358
Single-pane view of the test scores for the blueprint.
Total Tests | Test Executed | Pass | Fail | In Progress |
---|---|---|---|---|
26 | 26 | 24 | 2 | 0 |
*Vuls is counted as one test case.
*One Kube-Hunter failure is counted as a pass. See above.
The Vuls and Lynis test cases are failing; an exception request is filed for the Vuls-detected vulnerabilities that cannot be fixed, and the Lynis results have been confirmed to pass the Incubation criteria.
None at this time.
None at this time.