CLI tool

The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a helper CLI tool, knictl, that needs to be installed first.

If necessary, install the Go toolchain (including setting the GOPATH variable) as well as make (e.g. via sudo yum install -y make) on your system.
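
For example, on a yum-based system this could look as follows (a sketch; package names and the packaged Go version vary by distribution):

Code Block
languagebash
# install build prerequisites (assumes a yum-based distro)
sudo yum install -y make git golang
# set up GOPATH and put Go-installed binaries on the PATH
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
mkdir -p $GOPATH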

Then install knictl:

mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make binary
cp bin/knictl $GOPATH/bin/

Secrets

Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:

...

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_kni

The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.redhat.com/openshift/install/metal/user-provisioned, log in (creating a free account, if necessary), and hit "Download Pull Secret".

Create and export a local folder for these two secrets:

mkdir -p $HOME/akraino-secrets
export CREDENTIALS=file://$HOME/akraino-secrets

...

Create a $HOME/.kni folder and copy the following files into it:

  • id_rsa.pub → needs to contain the public key that you want to use to access your nodes
  • pull-secret.json → needs to contain the pull secret previously downloaded (see the example below)
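
For example, assuming the key pair generated earlier and a pull secret saved to your downloads folder (both source paths are illustrative):

Code Block
languagebash
mkdir -p $HOME/.kni
# public half of the SSH key generated earlier
cp ~/.ssh/id_rsa_kni.pub $HOME/.kni/id_rsa.pub
# pull secret downloaded from cloud.redhat.com
cp ~/Downloads/pull-secret.txt $HOME/.kni/pull-secret.json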

Pre-Requisites for Deploying to AWS

...

Store the aws-access-key-id and aws-secret-access-key in a credentials file inside $HOME/.aws, with the following format:

Code Block
languagebash
[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx

Pre-Requisites for Deploying to Bare Metal

...

Please see the upstream documentation for details.

...

Structure of a site

There is a Makefile in the root directory of this project. In order to deploy a blueprint, you will need to use the following syntax:

...

You first need to create a repository with a site. The site configuration is based on kustomize, and needs to use our blueprints as a base, referencing them properly. A sample site can be seen at https://github.com/yrobla/kni-site. A site needs to have this structure:

.
├── 00_install-config
│   ├── install-config.name.patch.yaml
│   ├── install-config.patch.yaml
│   ├── kustomization.yaml
│   └── site-config.yaml
├── 01_cluster-mods
│   ├── kustomization.yaml
│   ├── manifests
│   └── openshift
├── 02_cluster-addons
│   └── kustomization.yaml
└── 03_services
    └── kustomization.yaml

00_install-config

This folder will contain the basic settings for the site, including the base blueprint/profile, and the site name/domain. The following files are needed:

  • kustomization.yaml: the key file; it contains a link to the blueprint/profile being used, and references the patches used to customize the site:


    Code Block
    languageyml
    bases:
    - git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/00_install-config

    patches:
    - install-config.patch.yaml

    patchesJson6902:
    - target:
        version: v1
        kind: InstallConfig
        name: cluster
      path: install-config.name.patch.yaml

    transformers:
    - site-config.yaml

    The entry in bases needs to reference the blueprint being used (in this case blueprint-pae) and the profile install-config file (in this case production.aws/00_install-config). The other entries can be copied literally.

...

Specific site settings: once the site repository has been cloned, it needs to contain a settings.yaml file with the per-site settings. This needs to be the path inside the SITE_REPO where settings.yaml is contained. In Akraino, sample settings files for AWS and libvirt are provided; see aws/sample_settings.yaml and libvirt/sample_settings.yaml in the SITE_REPO. You should create your own settings specific to your deployment.

Site Configurations for Deployments to AWS

Deploying to AWS requires a site-config.yaml file to be created that looks like this:

settings:
  baseDomain: "devcluster.openshift.com"
  clusterName: "kni-edge"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "10.0.0.0/16"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  AWSRegion: "us-east-1"

where

...

  • install-config.patch.yaml: a patch to modify the domain from the base blueprint. Customize it with the domain you want to give to your site:

    Code Block
    languageyml
    apiVersion: v1
    kind: InstallConfig
    metadata:
      name: cluster
    baseDomain: devcluster.openshift.com


  • install-config.name.patch.yaml: a patch to modify the site name from the base blueprint. Customize it with the name you want to give to your site:


Code Block
languageyml
- op: replace
  path: "/metadata/name"
  value: kni-site
  • site-config.yaml: the site configuration file. You can add entries under config to override the behaviour of knictl (currently only releaseImageOverride is supported):


Code Block
languageyml
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config:
  releaseImageOverride: registry.svc.ci.openshift.org/origin/release:4.1

01_cluster-mods

This is the directory that will contain all the customizations for the basic cluster deployment. You can create patches for modifying the number of masters/workers, network settings, and anything else that needs to be changed at cluster deployment time. It needs to have a basic kustomization.yaml file that references the file at the same level in the blueprint, and you can create additional patches following kustomize syntax:

Code Block
languageyml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/01_cluster-mods
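
As an illustration, you could append a patch entry to this kustomization.yaml to scale the workers by targeting a MachineSet (the target name below is hypothetical; the real name depends on the blueprint and site):

Code Block
languageyml
# illustrative kustomize patch entry (MachineSet name is hypothetical)
patchesJson6902:
- target:
    group: machine.openshift.io
    version: v1beta1
    kind: MachineSet
    name: kni-edge-worker-0
    namespace: openshift-machine-api
  path: machineset.replicas.patch.yaml

where machineset.replicas.patch.yaml would then contain something like:

Code Block
languageyml
- op: replace
  path: /spec/replicas
  value: 3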

02_cluster-addons and 03_services

These follow the same structure as 01_cluster-mods, but in this case are for adding additional workloads after cluster deployment. They also need to have a kustomization.yaml file that references the file at the same level in the blueprint, and can include additional resources and patches:
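
A minimal sketch of such a kustomization.yaml, adding one extra manifest on top of the blueprint base (the resource file name is illustrative):

Code Block
languageyml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.aws/03_services

resources:
- my-extra-service.yaml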

How to deploy

The whole deployment workflow is based on the knictl CLI tool that this repository provides.

1. Fetch requirements for a site.

You need to have a site repository with the structure described above. The first step is to fetch the requirements needed for the blueprint that the site references. This is achieved by:

Code Block
languagebash
./knictl fetch_requirements github.com/site-repo.git 

Where the first argument references a site repository, following https://github.com/hashicorp/go-getter syntax. This will download the site repository and create a folder with the site name inside $HOME/.kni. It will also fetch all the needed binaries and store them in the $HOME/.kni/$SITE_NAME/requirements folder.
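
For example, using the sample site repository mentioned earlier (any go-getter compatible reference works):

Code Block
languagebash
# downloads the site repo and its requirements into $HOME/.kni/<site name>
./knictl fetch_requirements github.com/yrobla/kni-site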

2. Prepare manifests for a site

The next step is to prepare all the manifests for deploying the site. This is achieved by applying kustomize on the site repository, combining it with the base manifests for the blueprint, and merging in the manifests generated by the installer at runtime. Run the following command:

Code Block
languagebash
./knictl prepare_manifests $SITE_NAME

This will generate a set of manifests ready to apply, stored in the $HOME/.kni/$SITE_NAME/final_manifests folder. Along with the manifests, a profile.env file is also created in the $HOME/.kni/$SITE_NAME folder. It includes environment variables that can be sourced before deploying the cluster. Variables that can currently be exported are:

  • OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE: used when an image other than the default one is wanted
  • TF_VAR_libvirt_master_memory, TF_VAR_libvirt_master_vcpu: used in the libvirt case to define the memory and vCPUs for the VMs
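
As an illustration, a generated profile.env might look like this (values shown are examples only):

Code Block
languagebash
export OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=registry.svc.ci.openshift.org/origin/release:4.1
export TF_VAR_libvirt_master_memory=8192
export TF_VAR_libvirt_master_vcpu=4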

3. Deploy the cluster

Before starting the deployment, it is recommended to source the environment variables from profile.env:

Code Block
languagebash
source $HOME/.kni/$SITE_NAME/profile.env

Then, you need to deploy the cluster using the generated manifests. This can be achieved with:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install create cluster --dir=$HOME/.kni/$SITE_NAME/final_manifests

This will deploy a cluster based on the generated manifests. You can learn more about how to manage the cluster deployment and how to interact with it at https://docs.openshift.com/container-platform/4.1/welcome/index.html
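
If your terminal session is interrupted while the installer runs, you can resume waiting for completion with the installer's wait-for subcommand (a sketch, pointing at the same manifests directory):

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install wait-for install-complete --dir=$HOME/.kni/$SITE_NAME/final_manifests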

4. Apply workloads

After the cluster has been deployed, the extra workloads that have been specified in the manifests (like kubevirt) need to be applied. This can be achieved by:

Code Block
languagebash
./knictl apply_workloads $SITE_NAME

This will execute kustomize on the site manifests and apply the output to the cluster. After that, the site deployment can be considered finished.
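
To sanity-check the result, you can list the pods across all namespaces and confirm that the workload pods (e.g. in the kubevirt namespace, if that workload was included) are running:

Code Block
languagebash
export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig
kubectl get pods --all-namespaces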

...

The other values (clusterCIDR, clusterSubnetLength, machineCIDR, serviceCIDR, SDNType) can be left at the defaults shown above.

Then, when deploying the cluster, apply the settings using:

make deploy SETTINGS_PATH=site-config.yaml BASE_PATH="aws/3-node"

Note: Please consider that, for security reasons, SSH access to the nodes is disabled on AWS. If you need SSH access for any reason, please take a look at `Unable to SSH into Master Nodes` in the upstream troubleshooting documentation.

Site Configurations for Deployments to libvirt

Deploying to libvirt requires a site-config.yaml file to be created that looks like this:

settings:
  baseDomain: "tt.testing"
  clusterName: "test"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "192.168.126.0/24"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  libvirtURI: "qemu+tcp://192.168.122.1/system"

where

...

Then, when deploying the cluster, apply the settings using:

make deploy SETTINGS_PATH=site-config.yaml BASE_PATH="libvirt/3-node"

There is a recording of the deployment of a blueprint to libvirt (1 master, 1 worker) here: https://www.youtube.com/watch?v=3mDb1cba8uU

Note: Currently the installer fails when adding the console to the cluster on libvirt. In order to make it work, please follow the instructions in https://github.com/openshift/installer/pull/1371.

Accessing the Cluster

After the deployment finishes, a kubeconfig file will be placed inside the auth directory of the final manifests:

export KUBECONFIG=$HOME/.kni/$SITE_NAME/final_manifests/auth/kubeconfig

The cluster can then be managed with the kubectl or oc (a drop-in replacement with advanced functionality) CLI tools. To get the oc client, visit https://cloud.redhat.com/openshift/install/metal/user-provisioned, follow the Download Command-Line Tools link, and download the openshift-client archive that matches your operating system.
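
Once KUBECONFIG is exported, a quick check that the cluster responds could be, for example:

Code Block
languagebash
oc get nodes
oc get clusterversion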

Destroying the Cluster

When needed, the site can be destroyed with the openshift-install command, using the following syntax:

Code Block
languagebash
$HOME/.kni/$SITE_NAME/requirements/openshift-install destroy cluster --dir $HOME/.kni/$SITE_NAME/final_manifests


Troubleshooting the Cluster

...