Notes
Release 2 readiness
| Test                | Robot case exists | Docker for AMD64 | Docker for ARM64 | Tested on AMD64    | Tested on ARM64 | Works with Bluval |
| Redfish             |                   |                  |                  |                    |                 |                   |
| LTP                 | Ok                | Ok               |                  | Works with Airship |                 | Works             |
| k8s                 | Ok                | Ok               |                  | Works with k3s     |                 |                   |
| OpenStack (Tempest) | Ok                | Ok               |                  | Works with Airship |                 |                   |
September 11, 2019
"Release readiness" updated
Discussion about Blueprint family names and Blueprint names - it is best to use the same names that are used in repos
Fixing ARM64 support for Sonobuoy required moving to Kubernetes 1.15. The problem is that one test case (Aggregation) that ships with Kubernetes 1.15 works only against Kubernetes 1.15, while almost all blueprints use an earlier Kubernetes version. So the plan is to blacklist that test case in Bluval, so that the same tests can be run everywhere (see the sketch below)
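A minimal sketch of how the blacklisting could be wired up. The blacklist key in bluval.yaml is an assumption, not the current schema; Robot Framework's real --exclude option skips any test tagged with the given name:

```python
# Sketch only: read an assumed 'blacklist' list from bluval.yaml and
# pass each entry to Robot Framework's --exclude option, so the same
# suite runs on every blueprint while the 1.15-only case is left out.
import subprocess
import yaml

with open('bluval/bluval.yaml') as f:
    config = yaml.safe_load(f)

cmd = ['robot', '--outputdir', '/opt/akraino/results']
for tag in config.get('blacklist', []):   # e.g. ['aggregation']
    cmd += ['--exclude', tag]
cmd.append('tests/k8s/conformance')       # hypothetical suite path
subprocess.run(cmd, check=True)
```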
Indu is working to push etcd HA tests to Gerrit
July 24, 2019
News
Mandatory tests were discussed in PTL/TSC meeting on July 23 (but not approved for lack of quorum). Will be on the agenda again on July 25
Sonobuoy also works on ARM64, except for one test case
Requires some patches to run
The target is to check by the end of the week whether that issue can be fixed
Public UI VM
LF has sent out a quote for two xlarge VMs on AWS, a load balancer, and 10 GB of extra storage space
The extra storage will be used for the UI's database
The two VMs will run the UI code
It would be possible to scale the proposal down, but let us go with this
Validation maintenance
We had a discussion about what code should be included in this project's repo
There is already blueprint-related code: bluval-<blueprint name>.yaml
But we agreed that we do not need to allow blueprint-specific code in the repo, at least for the time being
Demo next Tuesday
Will show UI in action with REC lab
Open gerrits
[UI] Support UI partial control - this is work in progress
Add resources for building cord-tester image - has a "-1"
Sonobuoy testing - waiting for DANM 4.0
Juha K will be back from holiday on August 5
Old APs - not discussed
Using k8s on our deployments
Using versioning for our releases
Moving the UI to a new git repo
July 17, 2019
Update about Sonobuoy testing on ARM64: with the latest Sonobuoy version, almost all test cases (except for one) seem to pass
Bluval UI:
First item was about the Bluval UI implementation hosted at LF. LF already has a MySQL instance running in AWS as an elastic DB; if it can be used, then the UI will only need the web server and the code to update the data in the database. This means that the UI can be stateless and all state will be in the DB. To be followed up
Second issue was about how to check whether Bluval has run all the mandatory tests. The test log will have the results of all tests that were run, but the UI cannot know what tests were supposed to run (since it is possible to disable tests). The tentative solution is to create a file from bluval.yaml that lists all tests that were supposed to run, and store it with the logs; bluval.py needs to do that (a sketch follows below)
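A minimal sketch of such a helper for bluval.py, assuming the layer sections of bluval.yaml list test cases under name keys; the file name expected_tests.txt is also an assumption:

```python
# Hypothetical helper for bluval.py: record every test the config says
# should run, so the UI can later diff this list against the Robot log.
import yaml

def write_expected_tests(bluval_yaml, layer, results_dir):
    with open(bluval_yaml) as f:
        config = yaml.safe_load(f)
    # Assumed schema: each layer section is a list of {'name': ...} dicts
    names = [case['name'] for case in config['blueprint'][layer]]
    with open(f'{results_dir}/expected_tests.txt', 'w') as out:
        out.write('\n'.join(names) + '\n')

write_expected_tests('bluval/bluval.yaml', 'k8s', '/opt/akraino/results')
```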
We discussed Redfish testing. Redfish testing currently mainly checks that implementations conform to the schemas that have been defined. The DMTF has also defined a few interoperability profiles, such as for OCP. It would be useful to define inside Akraino a profile that the different hardware could be tested against, to make sure that all implementations have common functionality implemented in the same way (a rough illustration follows below)
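As a rough illustration of profile-style checking (not a DMTF interoperability profile implementation; the REQUIRED set is invented), one could verify that a BMC's service root exposes the resources a profile demands:

```python
# Toy illustration: fetch the Redfish service root (the standard
# /redfish/v1 entry point) and verify that resources an Akraino profile
# might require are present. REQUIRED is an invented example set, not a
# DMTF or Akraino profile.
import requests

REQUIRED = {'Systems', 'Chassis', 'Managers'}

def check_service_root(host, user, password):
    root = requests.get(f'https://{host}/redfish/v1',
                        auth=(user, password), verify=False).json()
    return REQUIRED - root.keys()  # empty set means all links are there

print(check_service_root('10.0.0.1', 'admin', 'secret'))
```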
Recording is here. Unfortunately, it only covers a part of the Redfish discussion
A code badge will show, for a given combination of "blueprint + version + lab", whether the mandatory tests have passed
We had a discussion about how this could be implemented in practice; a sketch follows below
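One possible shape for the badge lookup; the results store is modelled here as a plain dict, and the keys and the mandatory set are assumptions, not the real LF database:

```python
# Sketch of the badge lookup: a badge is 'passing' only if every
# mandatory test passed for that (blueprint, version, lab) combination.
RESULTS = {
    ('REC', '2.0', 'UNH-IOL'): {'conformance': 'PASS', 'ltp': 'PASS'},
}

MANDATORY = {'conformance', 'ltp'}   # assumed mandatory set

def badge(blueprint, version, lab):
    runs = RESULTS.get((blueprint, version, lab), {})
    ok = all(runs.get(test) == 'PASS' for test in MANDATORY)
    return 'passing' if ok else 'failing'

print(badge('REC', '2.0', 'UNH-IOL'))
```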
July 3, 2019
Follow-ups on the APs from last meeting
AP: Next week we can present in this meeting a demo of the UI: done
AP: Ioakeim to copy the developer info for the UI into the wiki: work in progress
AP: Cristina to start the user guide and then the team members complete it with the layers that they worked on: done
AP: Indu to check the overlap between e2e test and conformance test that we now run with sonobuoy: done; conclusion is we will drop the e2e for the first validation release as there are too many tests and they take too long to execute
AP: Indu to take over Miguel's work on the etcd test: needs access to an Airship deployment
AP: Deepak to look into the options that the LTP test has: ran it internally and automated it in Robot; conclusion is that it could be used by any distro
Status on the build jobs: the issue from last week with the UI was fixed; had one more issue with mariadb that was fixed today; overall state is good
Status on Sonobuoy testing on ARM: the patches for it have been merged
AP: Cristina to identify the conformance tests that don't have support for aarch64
Status on permanent lab resources for hosting the UI: Ioakeim sent an email to LF; will open a formal ticket
Status on pending patches: discussion on patch #949; will make a new patch to incorporate the changes discussed
Demo of the UI; discussion on what it takes to consider a project mature
AP: Ioakeim define the requirements better for the next week's meeting
June 26, 2019
Status on the build jobs: builds have been failing for the last week and a half; the patch to fix this has been merged, and we expect the jobs to pass on tonight's run
Status on Sonobuoy testing on ARM: work is still in progress; we need to integrate the build and usage of these images in our own repo, as upstreaming them is not feasible
Status on permanent lab resources for hosting the UI: [Ioakeim] request made at the TSC level; we need to check with LF first (Eric Ball)
Successfully migrated to the ONAP portal SDK (Casablanca version); patch is work in progress
AP: Next week we can present in this meeting a demo of the UI
Status on user documentation
Discussion on what (user guide, developer guide) to store where (git, wiki)
AP: Ioakeim to copy the developer info for the UI into the wiki
AP: Cristina to start the user guide and then the team members complete it with the layers that they worked on
Andrew promised to give us feedback on the user guide practicality
Status on Jenkins job that consumes the validation containers:
This task will take a couple more weeks to implement; Cristina will integrate the k8s test with the IEC blueprint
Status on pending patches
AP: Indu to check the overlap between e2e test and conformance test that we now run with sonobuoy
AP: Indu to take over Miguel's work on the etcd test
Multiarch discussion: quick presentation on how to use the containers
Mandatory tests: presentation from Andrew on what tests should be mandatory for the respective layers
AP: Deepak to look into the options that the LTP test has
AP: Andrew to upload the presentation to this wiki page
We had a discussion about when a blueprint project can be "mature": is the maturation review before Release 2 or after it? Do we want to have "Mature" projects in Release 2? Assumption is yes
We need to define what a Blueprint Validation release is, since the different blueprints need to be able to test against a stable set of tests. It would make sense to call the first release "2", so that Bluval Rel. 2 would be used to test blueprints for Release 2. Most likely we will also need a Bluval Rel. 2.1 to fix bugs, and then the blueprints could choose any version 2.x (the latest is probably the best choice, but there is no pressure to upgrade if an earlier release works)
June 5, 2019
etcd HA test cases
Issue is that there are two implementations, depending on how the k8s cluster has been deployed
It would be better to have a single implementation that would work with both
Juha and Cristina do not have access to an Airship deployment
Docker makefile bug
Fixed now
UI implementation
Discussion was about how to push jobs to validate different blueprints in different labs, since there are three kinds:
The public community lab that runs in UNH-IOL
'Private' company labs which can operate in a 'master/agent' Jenkins model
Company labs that run in a peer model, whereby the edge lab Jenkins only pulls and pushes but is not a slave to the LF Jenkins
All of these will push their results to the LF Nexus, but only the first two can take jobs from LF Jenkins (and hence from the UI); a sketch of the push-only path follows below
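For the peer-model labs, the "push results" step could be a plain HTTP upload to Nexus. A sketch, with the URL layout and credentials as placeholders rather than the real Nexus paths:

```python
# Sketch of the 'push only' model: a lab uploads its Robot log to the
# LF Nexus over HTTP instead of taking jobs from LF Jenkins. The URL
# layout and the credentials below are placeholders.
import requests

def push_log(logfile, blueprint, version, lab):
    url = (f'https://nexus.akraino.org/content/sites/logs/'
           f'{lab}/{blueprint}/{version}/output.xml')  # assumed layout
    with open(logfile, 'rb') as f:
        resp = requests.put(url, data=f, auth=('lab-user', 'lab-token'))
    resp.raise_for_status()

push_log('/opt/akraino/results/output.xml', 'REC', '2.0', 'my-lab')
```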
UI discussion: the web UI can run in Docker containers, Ioakeim will make the instructions available on how to build them
May 22, 2019
May 8, 2019
Automatic signed-off-by checking and linting work now: when you check in code, you need to sign off the commit, and you will get a result from the automatic code syntax check
Building containers and uploading to hub.docker.com does not yet work, LF staff is investigating
There will be a presentation about this project in the Akraino TSC/PTL meeting next week. Tapio Tallgren will make some slides and Cristina Pauna will check if Sonobuoy run via Robot Framework in a container will work (deadline: Monday)
We also had a discussion about the UI framework. The key part is reading the log files, parsing them, and rendering the result on a web page
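Since Robot Framework already writes a machine-readable output.xml, the parsing step could build directly on robot.api; a minimal sketch:

```python
# Minimal sketch of the log-parsing step using Robot Framework's own
# result API: collect each test's status from output.xml so the web
# page can render a pass/fail table.
from robot.api import ExecutionResult, ResultVisitor

class StatusCollector(ResultVisitor):
    def __init__(self):
        self.statuses = {}

    def visit_test(self, test):
        self.statuses[test.name] = test.status  # 'PASS' or 'FAIL'

result = ExecutionResult('output.xml')
collector = StatusCollector()
result.visit(collector)
print(collector.statuses)
```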
We also had a discussion about a common "environment file" which would contain all the parameters that the different tests would need (examples: IP addresses, user names, Kubernetes admin file). This needs to be built from the bottom up
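A sketch of what reading such an environment file could look like, assuming a flat KEY=VALUE format (the variable names are invented examples):

```python
# Sketch of reading a common environment file. The format is plain
# KEY=VALUE lines, so 'docker run --env-file' can consume the same file.
def load_env(path):
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                key, _, value = line.partition('=')
                env[key] = value
    return env

# e.g. CLUSTER_MASTER_IP=10.0.0.10, K8S_ADMIN_CONF=/root/admin.conf
env = load_env('cluster.env')
```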
We also discussed bringing a proposal to the TSC about what the mandatory tests will be. Propose to start with the Kubernetes conformance tests and the Linux Test Project (LTP)
Need to make sure that all mandatory tests work on different architectures
April 17, 2019
JJB for patch verification will be made by Cristina
Patch for creating the k8s conformance test Docker container was merged. Miguel will try it out with the Robot test he's working on. JJB for building the Docker images automatically will be done by Cristina; currently waiting for LF to make the Docker Hub repo available from the Jenkins slaves
Robot tests for Cyclictest are developed by Naga
Documentation and robot tests for LTP and baremetal are developed by Miguel
Juha is looking into Robot Framework
Deepak presented a proposal for a dashboard to trigger and view test results in a user-friendly way; gathered input from the team and will continue the discussion next time.
April 10, 2019
Meeting time check - let's keep the current time, 11 AM Eastern/6 PM Eastern European/8 AM Pacific
New member introduction
New slack channels will be available for Akraino projects, we can use that instead of IRC
New meeting time. Propose Wednesdays 10-11 (an hour earlier)
Using IRC? Let's try the channel #akraino
Let's try to clarify the project scope: passing the mandatory validation tests is part of the Akraino process to mature a blueprint, so it is very important to agree in the Akraino TSC what the mandatory tests are. There can be tests that are supported by the framework but which are not mandatory. The logic should be:
- If the blueprint installs Kubernetes, the Kubernetes e2e conformance tests are mandatory
- If the blueprint installs OpenStack, OpenStack Tempest and Rally are mandatory
We also discussed making some HA tests mandatory.
Then we need to agree on how to move forward. Let's say you run a test; what happens next? If you run the test from LF Jenkins, it would be great to sync the results to the LF Jenkins. This requires that you run your lab in a DMZ, or that you open a lot of ports on the firewall to run a slave Jenkins. So the easier alternative is to copy the results of the tests to the LF Nexus. To make it easy (for scripts) to understand the logs, we need to agree on a common format. For this, we propose running all tests using the Robot Framework.
- Will BVal.yaml be an optional component to make it easy to chain together a number of tests?
- To run the tests, I want to run something like "docker run --env-file abcd.txt --volume output.log akraino_k8s_tests". For this we need:
  - The Docker container
  - A definition of the env-file format: we need to figure out what information the test will require and write it to the environment file
  - The actual code to read the environment file, launch the Robot Framework, run the tests with Robot, and copy the output to output.log
- Once I have all this, I can make a Jenkins job in my lab which will launch the tests, and I can also configure it to copy the output somewhere (a sketch follows below)
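A sketch of how a lab-side Jenkins job could wire these steps together. The image name, file names, and mount path are placeholders; note that a real bind mount needs host:container syntax rather than the shorthand quoted above:

```python
# Sketch of the per-lab Jenkins step: write the env file, run the
# (hypothetical) test container, and keep the log for upload to Nexus.
import subprocess

with open('abcd.txt', 'w') as f:
    f.write('CLUSTER_MASTER_IP=10.0.0.10\n')
    f.write('K8S_ADMIN_CONF=/root/admin.conf\n')

subprocess.run([
    'docker', 'run', '--env-file', 'abcd.txt',
    '--volume', '/tmp/results:/opt/akraino/results',  # proper bind mount
    'akraino_k8s_tests',
], check=True)
# /tmp/results/output.log is then copied to the LF Nexus by the job
```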
March 6, 2019
Discussion about scope of testing + continue discussion about role of xtesting
The plan is to use the Xtesting part of the OPNFV Functest project as the basis for the Akraino Blueprint Validation project. There will be separate containers for the different test sets, hosted on Docker Hub. The "source code" (Dockerfiles, YAML, etc.) will be hosted in the Gerrit repository; a Jenkins job will build the images and then upload them. We need to identify the mandatory tests that all Akraino Blueprint projects must pass. There will be a centralized database to store test results, to make it easy to follow the progress of different projects. A web interface to the database would be nice.
Specific notes:
The Nexus repository does not work well for storing Docker images that must support different architectures, as there is no support for manifest files. So it is best to use the Docker Hub for images
Xtesting has two different modes, a "tutorial" one that installs a Jenkins instance, and one for the "Functest" type, where the xtesting framework only creates the Jenkins jobs.
There is no need to have support for voting in the Jenkins jobs
Trevor Bramwell has set up the OPNFV test database, should ask him
Cedric is working on rewriting the web page code that shows the test results