...

Requirements:
- Access to the Internet (proxy considerations are not covered in this documentation).
- Ubuntu 18.04 as the operating system (the only one tested).
- SSH is configured on all machines that are part of the cluster.
- Logged in as root.

Furthermore, this guide assumes that:
- There are a total of two machines.
- The first machine includes Jenkins, the Kubernetes master node and the first worker node.
- The second machine only includes the second worker node.

...

The lftools Python 3 package is also needed to upload Bluval logs; install it:

pip3 install lftools

The Bluval job depends on templates and scripts from the ci-management repository:

cd ~
git clone --recursive "https://gerrit.akraino.org/r/ci-management"

The following workaround is temporary, until patch https://gerrit.akraino.org/r/c/validation/+/3370 is merged by the validation team:

sed -i 's/ssh:\/\/akraino-jobbuilder@gerrit.akraino.org:29418/https:\/\/github.com\/igordcard/' ci-management/jjb/defaults.yaml

Let's finally get Jenkins to recognize the Bluval job:

pip install jenkins-job-builder
python2 -m jenkins_jobs test ci-management/jjb:icn/ci/jjb icn-bluval-daily-master
python2 -m jenkins_jobs update ci-management/jjb:icn/ci/jjb icn-bluval-daily-master
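
Note: the update step authenticates against the local Jenkins instance. If not already configured, jenkins-job-builder reads its connection settings from a configuration file such as /etc/jenkins_jobs/jenkins_jobs.ini (or ~/.config/jenkins_jobs/jenkins_jobs.ini). A minimal sketch, with placeholder credentials (replace the user and API token with your own):

[jenkins]
user=admin
password=JENKINS_API_TOKEN
url=http://localhost:8080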

Recommendation: install the Rebuilder plugin to easily rebuild a job with the same or similar parameters.
Go to http://localhost:8080/pluginManager/available, install "Rebuilder", then restart Jenkins (a restart will happen shortly anyway).

Since Jenkins will be running a job that calls Docker, it needs permission to run Docker, so add the jenkins user to the docker group:

usermod -aG docker jenkins

Restart Jenkins to apply the new group membership (necessary) and to finalize the Rebuilder plugin installation (also necessary):

systemctl restart jenkins
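
To confirm that the jenkins user can now reach the Docker daemon, an optional sanity check (run as root) is:

sudo -u jenkins docker info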

KUD and Kubernetes

...

Before running any job, the ICN/EMCO flavor of Kubernetes needs to be installed.
Here is the current recommended procedure.

Again, this guide assumes that:
- There are a total of two machines.
- The first machine includes Jenkins, the Kubernetes master node and the first worker node.
- The second machine only includes the second worker node.

The first thing to do is to have the master node's SSH trust itself, i.e. root@localhost.

SSH to localhost and accept the connection to persist the fingerprint in ~/.ssh/known_hosts:

ssh root@localhost

Likewise, the master node should also trust root on the worker node. SSH to it and accept the connection to persist the fingerprint in ~/.ssh/known_hosts. This trust will be needed for Ansible to install the Kubernetes cluster (KUD).

ssh root@WORKER_NODE_IPADDR
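
If key-based authentication to the worker node is not already in place (the requirements above assume SSH is configured), a typical way to set it up from the master node is, for example:

ssh-keygen -t rsa
ssh-copy-id root@WORKER_NODE_IPADDR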

At the master node (where Jenkins is already installed at this point), download the KUD source code with the Kubernetes 1.16 patch (this guide should be updated once that patch is merged):

cd ~
apt-get install -y git-review
git clone "https://gerrit.onap.org/r/multicloud/k8s"
cd k8s
git remote add gerrit https://GERRIT_USERNAME@gerrit.onap.org/r/a/multicloud/k8s
git review -s
git review -d 106869

Replace all localhost references with $HOSTNAME (in vim: :%s/localhost/$HOSTNAME) in KUD's aio.sh:

sed -i 's/localhost/$HOSTNAME/' kud/hosting_providers/baremetal/aio.sh

Remove the [ovn-central], [ovn-controller], [virtlet] and [cmk] groups (and their contents) from the inventory in aio.sh:

vim kud/hosting_providers/baremetal/aio.sh

Configure KUD for multi-node by also modifying aio.sh:

vim kud/hosting_providers/baremetal/aio.sh

Specifically, the only change for this guide's dual-node deployment is to add the worker node details to the [all] and [kube-node] groups, like this:

In [all], add line:
WORKER_NODE_HOSTNAME ansible_ssh_host=WORKER_NODE_IPADDR ansible_ssh_port=22
In [kube-node], add line:
WORKER_NODE_HOSTNAME
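
For illustration only, with a hypothetical worker node named node2 reachable at 192.168.1.12, the two groups would then look roughly like this (the existing entries generated by aio.sh may differ; only the node2 lines are new):

[all]
$HOSTNAME ansible_ssh_host=MASTER_NODE_IPADDR ansible_ssh_port=22
node2 ansible_ssh_host=192.168.1.12 ansible_ssh_port=22

[kube-node]
$HOSTNAME
node2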

In installer.sh, disable KUD addons and plugins:

vim kud/hosting_providers/vagrant/installer.sh
The following lines (near the end of the file) can be commented out, as such:

# install_addons
# if ${KUD_PLUGIN_ENABLED:-false}; then
#     install_plugin
# fi

Finally, install Kubernetes with KUD (Ansible will automatically install it on the worker node too):

kud/hosting_providers/baremetal/aio.sh
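
Once aio.sh completes, an optional sanity check is to list the cluster nodes from the master; both machines should report a Ready status:

kubectl get nodes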

...

The easiest way to check which logs have been uploaded to Nexus is to open the following URL:
https://logs.akraino.org/intel/bluval_results/icn/master/


TODO validation docker host network and fork dependencies