
Overview

This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.

Pre-Installation Requirements

Resource Requirements

The resource requirements for deployment depend on the specific blueprint and deployment target. Please refer to the documentation of the individual blueprint for details.

Installer

The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a set of light-weight tools that need to be installed first.

If necessary, install Go (including setting the GOPATH environment variable) as well as make (e.g. via sudo yum install -y make) on your system.
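On a CentOS/RHEL system, this could look as follows (a minimal sketch; the GOPATH location shown is a common convention, adjust as needed):

sudo yum install -y golang make
export GOPATH=$HOME/go
mkdir -p $GOPATH/bin
export PATH=$PATH:$GOPATH/bin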

Then install the kni-installer:

mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
make binary
cp bin/* $GOPATH/bin

Secrets

Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:

  • a public SSH key
  • a pull secret

The public SSH key is automatically added to every machine provisioned into the cluster and allows remote access to that machine. If you don't have an existing key or don't want to use one, you can create a new key pair using:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_kni

The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.openshift.com/clusters/install, log in (creating a free account, if necessary), and hit "Download Pull Secret".

Create and export a local folder for these two secrets:

mkdir -p $HOME/akraino-secrets
export CREDENTIALS=file://$HOME/akraino-secrets

And store the public SSH key (id_rsa_kni.pub) and the pull secret there under the names ssh-pub-key and coreos-pull-secret, respectively.
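For example (a sketch; the download location of the pull secret is an assumption, use the path where you actually saved it):

cp ~/.ssh/id_rsa_kni.pub $HOME/akraino-secrets/ssh-pub-key
cp ~/Downloads/pull-secret.txt $HOME/akraino-secrets/coreos-pull-secret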

Pre-Requisites for Deploying to AWS

For deploying a KNI blueprint to AWS, you need to

  • add a public hosted DNS zone for the cluster to Route53,
  • validate your AWS quota in the chosen region is sufficient,
  • set up an API user account with the necessary IAM privileges.

Please see the upstream documentation for details. 

Store the aws-access-key-id and aws-secret-access-key in files of the same name in the akraino-secrets folder you created earlier.
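For example (a sketch using AWS's documented example credentials as placeholders, substitute your own):

echo -n 'AKIAIOSFODNN7EXAMPLE' > $HOME/akraino-secrets/aws-access-key-id
echo -n 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > $HOME/akraino-secrets/aws-secret-access-key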

Pre-Requisites for Deploying to Bare Metal

For deploying a KNI blueprint to bare metal using Metal3, you need to

  • Acquire a server for the Provisioning Host with the following networking topology:
    • one NIC with access to the internet
    • two NICs that will connect to the Provisioning and Baremetal networks
  • Acquire two or more servers to be used as cluster nodes:
    • minimal case: 1 master + 1 worker
    • base cluster with HA: 3 masters
    • master nodes should have 16 GB of RAM
    • worker nodes should have a minimum of 16 GB of RAM
    • each node is required to have 2 independent NICs
  • Networking:
    • one NIC on each of the masters and workers will be put on the Provisioning LAN
    • one NIC on each of the masters and workers will be put on the Baremetal LAN
    • the Baremetal and Provisioning LANs should be isolated and contain neither a DHCP nor a DNS server
  • Deploy the Provisioning Host server with CentOS 7.6:
    • the server requires:
      • 16 GB of RAM
      • 1 socket / 12 cores
      • 200 GB of free space in /
    • connect one NIC to the internet, one to the Baremetal network, and one to the Provisioning network
  • Prepare the Provisioning Host server for virtualization:
    • source utils/prep_host.sh from the kni-installer repo on the host, as shown below.
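A minimal sketch of that last step, assuming the installer repo was cloned into the GOPATH as described above:

cd $GOPATH/src/gerrit.akraino.org/kni/installer
source utils/prep_host.sh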

Pre-Requisites for Deploying to Libvirt

For deploying a KNI blueprint to VMs on KVM/libvirt, you need to

  • provision a machine with CentOS 7.6 (1810) to serve as virthost and
  • prepare the virthost by sourcing utils/prep_host.sh from the kni-installer repo on that host.

Please see the upstream documentation for details.

Deploying a Blueprint

There is a Makefile in the root directory of this project. In order to deploy, use the following syntax:

export CREDENTIALS=file://$(pwd)/akraino-secrets
export BASE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export BASE_PATH="aws/3-node"
export SITE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export SETTINGS_PATH="aws/sample_settings.yaml"
make deploy

where


  • BASE_REPO: Repository for the base manifests. This is the repository where the default manifest templates are stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates
  • BASE_PATH: Folder inside BASE_REPO. There is one specific folder for each blueprint and provider (aws/1-node, libvirt/1-node, etc.), so you need to specify the folder inside BASE_REPO that matches the type of deployment you want to execute.
  • SITE_REPO: Repository for the site manifests. Each site can have different manifests and settings, so this needs to be a per-site repository where individual settings are stored and where the generated per-site manifests will be stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates
  • SETTINGS_PATH: Specific site settings. Once the site repository is cloned, it needs to contain a settings.yaml file with the per-site settings; this is the path inside SITE_REPO where that file is located. In Akraino, sample settings files for AWS and libvirt are provided at aws/sample_settings.yaml and libvirt/sample_settings.yaml in the SITE_REPO. You should create your own settings specific to your deployment.
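As an illustration, a minimal site repository matching the sample SETTINGS_PATH values above could be laid out like this (hypothetical layout):

<site repo root>
├── aws/
│   └── sample_settings.yaml
└── libvirt/
    └── sample_settings.yaml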

How to deploy for AWS

There are two different footprints for AWS: 1 master/1 worker, and 3 masters/3 workers. The Makefile needs to be called with:

make deploy CREDENTIALS=git@github.com:<git_user>/akraino-secrets.git BASE_REPO=git::https://gerrit.akraino.org/r/kni/templates.git BASE_PATH=[aws/1-node | aws/3-node] SITE_REPO=git::https://gerrit.akraino.org/r/kni/templates.git SETTINGS_PATH=aws/sample_settings.yaml

The settings file will look like:

settings:
  baseDomain: "devcluster.openshift.com"
  clusterName: "kni-edge"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "10.0.0.0/16"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  AWSRegion: "us-west-1"

Where:

  • baseDomain: DNS zone matching the one created on Route53
  • clusterName: name you are going to give to the cluster
  • AWSRegion: region where you want to deploy your cluster
  • The other values (clusterCIDR, clusterSubnetLength, machineCIDR, serviceCIDR, SDNType) can be left at their defaults
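To verify that baseDomain matches a hosted zone in Route53, a quick check with the AWS CLI can help (assuming the AWS CLI is installed and configured; the domain shown is the sample value from above):

aws route53 list-hosted-zones-by-name --dns-name devcluster.openshift.com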

The make process will create the needed artifacts and start the deployment of the specified cluster. Please note that, for security reasons, AWS deployments have SSH access to the nodes disabled.

If you need to SSH into the nodes for any reason, please take a look at the "Unable to SSH into Master Nodes" section of the following document: https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md

How to deploy for Libvirt


There are two different footprints for libvirt: 1 master/1 worker, and 3 masters/3 workers. The Makefile needs to be called with:

make deploy CREDENTIALS=git@github.com:<git_user>/akraino-secrets.git BASE_REPO=git::https://gerrit.akraino.org/r/kni/templates.git BASE_PATH=[libvirt/1-node|libvirt/3-node] SITE_REPO=git::https://gerrit.akraino.org/r/kni/templates.git SETTINGS_PATH=libvirt/sample_settings.yaml  INSTALLER_PATH=file:///${GOPATH}/bin/openshift-install

A sample settings.yaml file has been created specifically for Libvirt targets. It needs to look like:

settings:
  baseDomain: "tt.testing"
  clusterName: "test"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "192.168.126.0/24"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  libvirtURI: "qemu+tcp://192.168.122.1/system"

Where:

  • baseDomain: DNS zone matching the entry created in /etc/NetworkManager/dnsmasq.d/openshift.conf during the libvirt-howto machine setup (tt.testing by default)
  • clusterName: name you are going to give to the cluster
  • libvirtURI: URI of the libvirt daemon on the virthost (e.g. qemu+tcp://192.168.122.1/system)

The rest of the options are exactly the same as in an AWS deployment.
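Before deploying, you can verify that the virthost accepts connections on the configured libvirtURI (assuming the virsh client is installed on the machine running the installer):

virsh -c qemu+tcp://192.168.122.1/system list --all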

There is a recording of the deployment of a Libvirt blueprint with 1 master and 1 worker: https://www.youtube.com/watch?v=3mDb1cba8uU

Temporary workaround

Currently, the installer fails when adding the console to the cluster for libvirt. To make it work, please follow the instructions in https://github.com/openshift/installer/pull/1371.

Accessing the Cluster

After the deployment finishes, a kubeconfig file will be placed inside the build/auth directory:

export KUBECONFIG=./build/auth/kubeconfig

The cluster can then be managed with the kubectl or oc (a drop-in replacement with additional functionality) CLI tools. To get the oc client, see "Step 5 - Accessing your new cluster" at https://cloud.openshift.com/clusters/install.
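For a quick sanity check that the cluster came up (a sketch; assumes kubectl is installed and on your PATH):

kubectl get nodes
kubectl get pods --all-namespaces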

How to destroy the cluster

In order to destroy the running cluster and clean up your environment, run

make clean

Building and consuming your own installer

The openshift-installer binaries are published on https://github.com/openshift/installer/releases.

For a faster deployment, you can grab the installer from the link above. However, there may be situations where you need to compile your own installer (such as in the case of libvirt), or where you need a newer version. You can build the binary following the instructions at https://github.com/openshift/installer, or you can use the target provided by our project.
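For instance, a pre-built release could be fetched like this (a sketch; the version and asset name are illustrative, pick the actual release you need from the releases page):

curl -LO https://github.com/openshift/installer/releases/download/v0.16.1/openshift-install-linux-amd64
chmod +x openshift-install-linux-amd64
mv openshift-install-linux-amd64 $GOPATH/bin/openshift-install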

The binary can be produced with the following command:

make binary

It accepts an additional INSTALLER_GIT_TAG parameter that allows you to specify the installer version you want to build. Once built, the openshift-install binary is placed in the build directory. It can be copied into $GOPATH/bin for easier use. Once generated, the new installer can be used with an env var:

export INSTALLER_PATH=http://<url_to_binary>/openshift-install

Or pass it as a parameter to the make command.
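Putting it together, building and consuming a specific installer version might look like this (the tag is illustrative):

make binary INSTALLER_GIT_TAG=v0.16.1
cp build/openshift-install $GOPATH/bin/
export INSTALLER_PATH=file://$GOPATH/bin/openshift-install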

Customization

Use your own manifests

openshift-install is also able to produce manifests that end users can modify in order to deploy a cluster with the modified settings. New manifests can be generated with:

/path/to/openshift-install create manifests

This will generate a pair of folders: manifests and openshift.

Those manifests can be modified with the desired values. After that, the following command can be executed to generate a new cluster based on the modified manifests:

/path/to/openshift-install create cluster