
Weekly on Wednesdays at 8:00 AM PST / 11:00 AM EST.


https://zoom.us/j/459272075

Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 646 558 8656 US (New York)
        +1 877 369 0926 US Toll-free
        +1 855 880 1246 US Toll-free
Meeting ID: 459272075
Find your local number: https://zoom.us/u/acimKOClJk


Notes

Release 2 readiness

Test                | Robot case exists | Docker for AMD64 | Docker for ARM64 | Tested on AMD64    | Tested on ARM64 | Works with Bluval
Redfish             |                   |                  |                  |                    |                 |
LTP                 | Ok                | Ok               |                  | Works with Airship |                 | Works
k8s                 | Ok                | Ok               |                  | Works with k3s     |                 |
OpenStack (Tempest) | Ok                | Ok               |                  | Works with Airship |                 |


September 18, 2019

  • The UI VM discussion (MariaDB vs MySQL DB): waiting for LF to provide the quotation for MariaDB
  • Discussion on blueprint validation prerequisites (what we request from users with regards to infra and logs)
  • Update from Indu: completed the etcd testing, validated in Airship and REC (ready to merge). Tempest robot test verified on Airship and integrated with bluval
  • Discussion on the OpenStack layer: Tempest and Refstack are complementary. If a project does not support certain features, the tests should detect the missing components and skip the corresponding tests (we need to verify this behavior). As a backup we can also blacklist tests.
  • LTP tests need root access to run; the options are to use sudo or to create a group with the needed privileges rather than using the root password. Open question: can they be run without root access at all?
  • Discussion on security: we need a development guide documentation
  • Discussion on how to handle the situation where the nodes don't have access to internet and we need the test images to be in a local repository inside the cluster
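The detect-and-skip idea above can be sketched with Python's unittest skip decorators; the service probe and hard-coded catalog below are hypothetical stand-ins for illustration, not the actual Tempest mechanism:

```python
import unittest

# Hypothetical service catalog for the deployment under test; a real check
# would query the cloud's service catalog instead of this hard-coded set.
DEPLOYED_SERVICES = {"compute", "network", "image"}

def service_available(name):
    """Return True if the blueprint's cloud exposes the given service."""
    return name in DEPLOYED_SERVICES

class ObjectStorageTests(unittest.TestCase):
    # Skip (rather than fail) when the component is simply not deployed.
    @unittest.skipUnless(service_available("object-store"),
                         "object-store not deployed; skipping, not failing")
    def test_create_container(self):
        self.assertTrue(True)  # placeholder for a real Tempest-style check

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ObjectStorageTests)
result = unittest.TestResult()
suite.run(result)
print("skipped:", len(result.skipped))
```

The run still counts as successful: the missing component produces a skip entry in the log instead of a failure, which is the behavior the discussion asks us to verify.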

September 11, 2019

  • "Release readiness" updated
  • Discussion about Blueprint family names and Blueprint names - it is best to use the same names that are used in repos
  • The UI VM discussion (MariaDB vs MySQL DB) will continue in https://jira.linuxfoundation.org/servicedesk/customer/portal/2/IT-16700
  • All Blueprints that want to have Maturity review before Akraino Release 2 must complete mandatory tests before Oct 31st

September 4, 2019

  • Need to have a discussion with LF about the AWS database solution (the ONAP SDK uses MariaDB, AWS provides a MySQL one) AP: Tapio
  • Blueprint families: discussion about whether the UI and Nexus URL should use blueprint family names. Decision was Yes
  • Showing logs: there was a request to show the CI/CD logs with the UI. This is difficult to do and hence low priority
  • Plan is to show a demo to TSC next Tuesday September 10. Tapio will request a slot

August 28, 2019

Security tools discussion

August 21, 2019

Agenda:

August 14, 2019

  • Update about the UI web page: biggest issue at the moment is that the SDK used has some bugs. These are being fixed upstream
  • ELIOT/Kumar has reported success in running Blueprint validation
  • Indu is planning to test REC with Bluval
  • The Security Sub-Committee/Ken Yi needs to be invited to meetings



July 31, 2019

News:

  • Mandatory tests and UI have been approved

Other items (in random order)

  • Security subcommittee is interested in using the Validation framework for two of their mandatory test sets. To be followed up
  • Next week Tapio will be on holiday, will ask Deepak to start the meeting
  • Naga has committed the patch to separate the Docker and Robot parts (see https://gerrit.akraino.org/r/c/validation/+/1132). He will update the user documentation on the wiki to match the latest changes
  • Fixing ARM64 support for Sonobuoy required moving to Kubernetes 1.15. The problem is that one test case (Aggregation) that comes with Kubernetes 1.15 works only with that Kubernetes version, while almost all blueprints use an earlier one. So the plan is to blacklist that test case in Bluval so that the same tests can be run everywhere
  • Indu is working to push etcd HA tests to Gerrit

July 24, 2019

News

  • Mandatory tests were discussed in the PTL/TSC meeting on July 23 (but not approved for lack of quorum). Will be on the agenda again on July 25
  • Sonobuoy works also on ARM64 except for one test case
    • Requires some patches to run
    • Target is to check by end of week if that issue can be fixed

Public UI VM

  • LF has sent out a quote for two xlarge VMs on AWS, a load balancer, and 10G of extra storage space
  • The extra storage will be used for database for UI
  • The two VMs are running the UI code
  • It is possible to scale down the proposal but let us go with this

Validation maintenance

  • We had a discussion about what code should be included in this project's repo
  • There is already blueprint-related code: bluval-<blueprint name>.yaml
  • But we agreed that we do not need to allow blueprint-specific code in the repo, at least for the time being

Demo next Tuesday

  • Will show UI in action with REC lab


Open gerrits

  • [UI] Support UI partial control - this is work in progress
  • Add resources for building cord-tester image - has a "-1"
  • sonobuoy testing – waiting for DANM4.0
  • Juha K will be back from holiday on August 5

Old APs - not discussed

  • Using k8s on our deployments
  • Using versioning for our releases
  • Moving the UI to a new git repo


July 17, 2019

  • Update about Sonobuoy testing on ARM64: with the latest Sonobuoy version, almost all test cases (except for one) seem to pass
  • Bluval UI:
    • First item was about the Bluval UI implementation hosted at LF. LF already has a MySQL instance running in AWS as an elastic DB; if it can be used, the UI will only need the web server and the code that updates the data in the database. This means the UI can be stateless, with all state kept in the DB. To be followed up
    • Second issue was about how to check whether Bluval has run all the mandatory tests. The test log will contain the results of all tests that were run, but the UI cannot know which tests were supposed to run (since it is possible to disable tests). The tentative solution is to generate, from bluval.yaml, a file listing all tests that were supposed to run and store it with the logs; bluval.py needs to do that
  • We discussed Redfish testing. Redfish testing currently mainly checks that implementations conform to the defined schemas. The DMTF has also defined a few interoperability profiles, such as the one for OCP. It would be useful to define an Akraino profile that the different hardware platforms could be tested against, to make sure that all implementations have common functionality implemented in the same way
  • Recording is here. Unfortunately, it only covers a part of the Redfish discussion
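The tentative planned-tests solution above could look roughly like the sketch below; the dict stands in for the parsed contents of bluval.yaml, and its schema is assumed for illustration, not taken from the real file:

```python
import json

# Simplified stand-in for the parsed contents of bluval.yaml; the real file
# is YAML and its exact schema may differ from this sketch.
bluval_config = {
    "blueprint": {
        "name": "rec",
        "layers": ["os", "k8s"],
        "os": [{"name": "ltp", "what": "ltp", "optional": False}],
        "k8s": [
            {"name": "conformance", "what": "sonobuoy", "optional": False},
            {"name": "ha", "what": "etcd_ha", "optional": True},
        ],
    }
}

def planned_tests(config):
    """List every non-optional test the run was supposed to execute."""
    bp = config["blueprint"]
    return [case["name"]
            for layer in bp["layers"]
            for case in bp.get(layer, [])
            if not case.get("optional", False)]

# Storing this manifest next to the test logs would let the UI compare
# "planned" against "actually run".
manifest = planned_tests(bluval_config)
print(json.dumps(manifest))
```

With the manifest in the log directory, the UI only needs a set difference between planned and executed tests to flag anything that was silently disabled.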

July 10, 2019

  • AP updates:
    • AP: Kubernetes conformance tests failed on 12 test cases, moving to a newer version may fix this (https://gerrit.akraino.org/r/c/validation/+/1152)
    • AP: Ioakeim to copy the developer info for the UI into the wiki: work in progress
    • AP: Indu to take over Miguel's work on the etcd test: Airship access is now ok; will check whether it works
  • Last week items - we did not cover these this time either
    • Using k8s on our deployments
    • Using versioning for our releases
    • Moving the UI to a new git repo
  • Code badge feature (presentation)
    • Code badge will show for a given combination of "blueprint + version + lab" whether the mandatory tests have passed
    • We had a discussion about how this could be implemented in practice

July 3, 2019

  • Follow-ups on the APs from last meeting
    • AP: Next week we can present in this meeting a demo of the UI: done
    • AP: Ioakeim to copy the developer info for the UI into the wiki: work in progress
    • AP: Cristina to start the user guide and then the team members complete it with the layers that they worked on: done
    • AP: Indu to check the overlap between e2e test and conformance test that we now run with sonobuoy: done; conclusion is we will drop the e2e for the first validation release as there are too many tests and they take too long to execute
    • AP: Indu to take over Miguel's work on the etcd test: need to get access to an airship deployment
    • AP: Deepak to look into the options that the LTP test has: ran it internally and automated it in robot; conclusion is that it could be used by any distro
    • AP: Andrew to upload the presentation to this wiki page: done Test Set Proposal to TSC
  • Status on the build jobs: the issue from last week with the UI was fixed; had one more issue with MariaDB that was fixed today; overall state is good
  • Status on Sonobuoy testing on ARM: the patches for it have been merged;
    • AP: Cristina to identify the conformance tests that don't have support for aarch64
  • Status on permanent lab resources for hosting the UI: Ioakeim sent an email to LF; will open a formal ticket
  • Status on pending patches: discussion on patch #949; will make a new patch to incorporate the changes discussed
  • Demo of the UI; discussion on what it takes to consider a project mature
    • AP: Ioakeim define the requirements better for the next week's meeting

June 26, 2019

  • Status on the build jobs: builds have been failing for the last week and a half; the patch to fix it has been merged, we expect the jobs to pass on tonight's run
  • Status on Sonobuoy testing on ARM: work is still in progress; we need to integrate the build and usage of these images in our own repo as up-streaming them is not feasible
  • Status on permanent lab resources for hosting the UI: [Ioakeim] request made at the TSC level; we need to check with LF first (Eric Ball)
    • Successfully migrated to the ONAP portal SDK (Casablanca version); patch is work in progress
    • AP: Next week we can present in this meeting a demo of the UI
  • Status on user documentation
    • Discussion on what (user guide, developer guide) to store where (git, wiki)
    • AP: Ioakeim to copy the developer info for the UI into the wiki
    • AP: Cristina to start the user guide and then the team members complete it with the layers that they worked on
    • Andrew promised to give us feedback on the user guide practicality
  • Status on Jenkins job that consumes the validation containers:
    • This task will take a couple of weeks more until we are able to implement it; Cristina will integrate the k8s test with the IEC blueprint
  • Status on pending patches
    • AP: Indu to check the overlap between e2e test and conformance test that we now run with sonobuoy
    • AP: Indu to take over Miguel's work on the etcd test
  • Multiarch discussion: quick presentation on how to use the containers
  • Mandatory tests: presentation from Andrew on what tests should be mandatory for the respective layers
    • AP: Deepak to look into the options that the LTP test has
    • AP: Andrew to upload the presentation to this wiki page
  • AoB: None

June 19, 2019

  • Sonobuoy testing on ARM
  • Release 2 requirements
    • What do we need for the first release?
      • Kubernetes conformance testing works
      • Docker containers are being built automatically every day
      • Need user documentation, starting from the ready-made containers
      • Need a Jenkins job to install
    • When can we have it?
  • Meetings for next two weeks: Andrew Wilkinson can start the meetings in Zoom, Cristina Pauna will host
  • Need lab resources permanently for hosting the UI – topic for TSC meeting

June 12, 2019

  • New rules enforced: merging requires a "+2" review, and it cannot be your own
    • If you are a committer, you can still give a "+2" to your own patch
    • But one "+2" from someone other than the submitter is needed
    • After this, anyone can merge the patch
  • Sonobuoy doesn't work on ARM, so we have two options → postponed to next meeting
    • call the tests directly without Sonobuoy on ARM
    • adapt Sonobuoy to work on ARM (leaning more toward this option, but it seems difficult to do upstream, so we'll have to figure out a clever solution)
  • Documentation strategy
    • We have projects lined up to test the Blueprint validation project
    • We need to separate user documentation and developer documentation
    • Follow Documentation Sub-Committee requirements (Architecture, User Guide and Release Notes)
  • Proposal to TSC about mandatory tests: https://docs.google.com/presentation/d/1ONU7jmeGVrhbJe2gKRtode1aHKH8teRgfN91LYAPBmo/edit?usp=sharing
  • We had a discussion about when a blueprint project can be "mature": is the maturation review before Release 2 or after it? Do we want to have "Mature" projects in Release 2? Assumption is yes
  • We need to define what a Blueprint Validation release is, since the different blueprints need to be able to test against a stable set of tests. It would make sense to call the first release "2", so that Bluval Rel. 2 would be used to test blueprints for Akraino Release 2. Most likely we will also need a Bluval Rel. 2.1 to fix bugs, and then the blueprints could choose any 2.x version (the latest is probably the best choice, but there is no pressure to upgrade if an earlier release works)

June 5, 2019

  • etcd ha test cases
    • Issue is that there are two implementations, depending on how the k8s cluster has been deployed
    • It would be better to have a single implementation that would work with both
    • Juha and Cristina do not have access to an Airship deployment
  • Docker makefile bug
    • Fixed now
  • UI implementation
    • Discussion was about how to push jobs to validate different blueprints in different labs, since there are three kinds:
      • The public community lab that runs in UNH-IOL
      • 'Private' company labs which can operate in a 'master/agent' Jenkins model
      • Company labs that run in a peer model, whereby the edge lab Jenkins only pulls and pushes but is not a slave to the LF Jenkins
    • All of these will push the results in the LF Nexus, but only the two first ones can take jobs from LF Jenkins (and hence the UI)
    • The UI will soon run in a VM with a public IP

May 29, 2019

  • Pass variables to robot testcases (838)
    • As discussed earlier, we want to make a single file with all variables
    • Decision: go with the YAML version
  • Organize file structure for tests (887)
    • Proposal is to create a structure for storing the different tests
    • Additional idea is to add "layer" to the structure
    • Decision: Go ahead
  • New idea: there should be a single container per layer, use bluval to select what tests to run
  • Add container script for k8s ha (839): looks ok
  • UI discussion: the web UI can run in Docker containers, Ioakeim will make the instructions available on how to build them

May 22, 2019


5-22-19 Validation Framework Call.mp4


May 8, 2019

  • Automatic signed-off-by and linting work now: when you check in code, you need to sign off the commit and you will get a result from automatic code syntax check
  • Building containers and uploading to hub.docker.com does not yet work, LF staff is investigating
  • There will be a presentation about this project in the Akraino TSC/PTL meeting next week. Tapio Tallgren will make some slides and Cristina Pauna will check whether Sonobuoy under the Robot Framework in a container will work (deadline: Monday)
  • We also had a discussion about the UI framework. The key part is reading the log files, parsing them, and rendering the result on a web page
  • We agreed on a new committer: Ioakeim Samaras

April 24

  • General complaints:
    • Let's start having good commit messages to make it easier for outsiders to understand what is going on
    • Let's also have signed-off-by lines in git commits
  • JJB for patch verification is now done (thanks Cristina!)
  • Container scripts are also working, will be automated soon
  • Test containers will be put to hub.docker.com under Akraino/validation (https://hub.docker.com/r/akraino/validation)
  • Dashboard discussion is postponed
  • There is a new Kanban board for the project: https://jira.akraino.org/secure/RapidBoard.jspa?projectKey=VAL&rapidView=5
  • We also had a discussion about a common "environment file" which would contain all the parameters that the different tests would need (examples: IP addresses, user names, Kubernetes admin file). Need to build from bottom up
  • We also discussed bringing a proposal to the TSC about what the mandatory tests will be. Propose to start with Kubernetes conformance tests and Linux Testing Project
  • Need to make sure that all mandatory tests work on different architectures


April 17, 2019

  • JJB for patch verification will be made by Cristina
  • Patch for creating the k8s conformance test docker container was merged. Miguel will try it out with the robot test he's working on. JJB for building the docker images automatically will be done by Cristina; currently waiting for LF to make the dockerhub repo available from the jenkins slaves
  • Robot tests for Cyclictest are developed by Naga
  • Documentation and robot tests for LTP and baremetal are developed by Miguel
  • Juha is looking into robot framework
  • Deepak presented a proposal for a Dashboard to trigger and view tests results in an user-friendly way; gathered input from the team and will continue the discussion next time.

April 10, 2019

March 27, 2019

Agenda:

  • New committers to project
  • New meeting time. Propose Wednesdays 10-11 (an hour earlier)
  • Using IRC? Let's try the channel #akraino
  • Let's try to clarify the project scope: passing the mandatory validation tests is part of the Akraino process for a blueprint to mature, so it is very important to agree in the Akraino TSC on what the mandatory tests are. There can be tests that are supported by the framework but are not mandatory. The logic should be:
        - If the blueprint installs Kubernetes, the Kubernetes e2e conformance tests are mandatory
        - If the blueprint installs OpenStack, OpenStack Tempest and Rally are mandatory
    We also discussed making some HA tests mandatory.
  • Then we need to agree on how to move forward. Say you run a test; what happens next? It would be great to sync the results to the LF Jenkins, but that requires running your lab in a DMZ or opening a lot of ports on the firewall for a slave Jenkins. The easier alternative is to copy the test results to the LF Nexus. To make it easy (for scripts) to understand the logs, we need to agree on a common format; for this, we propose running all tests using the Robot Framework.
    - Will BVal.yaml be an optional component to make it easy to chain together a number of tests?
    - To run the tests, I want to run something like "docker run --env-file abcd.txt --volume output.log akraino_k8s_tests". For this we need
        - The docker container
        - Definition of the env-file format: we need to figure out what information the test will require and write it to the environment file
        - The actual code to read the environment file, launch the Robot Framework, run the tests with Robots, and copy the output to the output.log
    - Once I have all this, I can make a Jenkins job in my lab which will launch the tests, and I can also configure it to copy the output somewhere.
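The per-lab workflow above could be wrapped in a small launcher that assembles the docker run invocation; everything here (image name, mount point, file names) is illustrative rather than the project's actual convention:

```python
import shlex

def build_docker_cmd(image, env_file, results_dir):
    """Assemble the docker run invocation sketched above: pass lab
    parameters via an env file and mount a directory for the Robot logs."""
    return ["docker", "run", "--rm",
            "--env-file", env_file,
            "--volume", f"{results_dir}:/opt/akraino/results",
            image]

# Illustrative values; the actual image name and the mount point inside the
# container depend on the validation project's Dockerfiles.
cmd = build_docker_cmd("akraino/validation:k8s-latest",
                       "abcd.txt", "/tmp/results")
print(shlex.join(cmd))
```

A lab-side Jenkins job would run this command and then copy the contents of the mounted results directory to wherever the logs should land (e.g. the LF Nexus).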


March 6, 2019

Discussion about scope of testing + continue discussion about role of xtesting

Recording / chat

The plan is to use the Xtesting part of the OPNFV Functest project as the basis for the Akraino Blueprint Validation project. There will be separate containers for the different test sets, hosted at the Docker Hub. The "source code" (docker yaml etc) will be hosted in the Gerrit repository, a Jenkins job will build the images, and then they will be uploaded. We need to identify the mandatory tests that all Akraino Blueprint projects must pass. There will be a centralized database to store test results, to make it easy to follow the progress of different projects. A web interface to the database would be nice.

Specific notes:

  • The Nexus repository does not work well for storing Docker images that must support different architectures, as there is no support for manifest files. So it is best to use the Docker Hub for images
  • Xtesting has two different modes, a "tutorial" one that installs a Jenkins instance, and one for the "Functest" type, where the xtesting framework only creates the Jenkins jobs.
  • There is no need to have support for voting in the Jenkins jobs
  • Trevor Bramwell has set up the OPNFV test database, should ask him
  • Cedric is working on rewriting the web page code that shows the test results

February 27, 2019

Discussion about Xtesting.

Recording / chat

February 13, 2019

Agenda/notes

  • Weekly meetings
    • Time is ok?
    • Need to make official somehow
    • Recording or not?
    • Next week?
  • Will use Jira
  • Clarification of workflow
    • Will BVF do testing or just provide the tools?
  • Next tasks
    • Collection of test tools
    • Define "MVP"
    • Get an introduction from OPNFV team
    • What are we missing with this approach?
      • ONAP VFN SDK project
