
Overview

This document describes how to deploy blueprints from Akraino's KNI Blueprint Family. It is common to all blueprints in that family, unless otherwise noted.

Pre-Installation Requirements

Resource Requirements

The resource requirements for deployment depend on the specific blueprint and deployment target. Please see:

Installer

The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distro to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a set of lightweight tools that need to be installed first.

If necessary, install the Go toolchain (including setting the GOPATH environment variable) as well as make on your system, for example as sketched below.
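A minimal sketch for a CentOS/RHEL host (the package names and GOPATH location are assumptions; adjust for your distribution):

sudo yum install -y golang make
export GOPATH=$HOME/go        # any writable location works
export PATH=$PATH:$GOPATH/bin
mkdir -p $GOPATH/bin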

Then install the kni-installer:

mkdir -p $GOPATH/src/gerrit.akraino.org/kni
cd $GOPATH/src/gerrit.akraino.org/kni
git clone https://gerrit.akraino.org/r/kni/installer
cd installer
make build
make binary
cp bin/* $GOPATH/bin

Secrets

Most secrets (TLS certificates, Kubernetes API keys, etc.) will be auto-generated for you, but you need to provide at least two secrets yourself:

  • a public SSH key
  • a pull secret

The public SSH key is automatically added to every machine provisioned into the cluster and allows remote access to that machine. If you don't have an existing key, or don't want to use one, you can create a new key pair using:

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_kni

The pull secret is used to download the container images used during cluster deployment. Unfortunately, the OKD Kubernetes distro used by the KNI blueprints does not (yet) provide pre-built container images for all of the deployed components. Instead of going through the hassle of building those from source, we use the ones made available by openshift.com. Therefore, you need to go to https://cloud.redhat.com/openshift/install/metal/user-provisioned, log in (creating a free account, if necessary), and hit "Download Pull Secret".

Create and export a local folder for these two secrets:

mkdir -p $HOME/akraino-secrets
export CREDENTIALS=file://$HOME/akraino-secrets

And store the public SSH key (id_rsa_kni.pub) and the pull secret there under the names ssh-pub-key and coreos-pull-secret, respectively.
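For example (a sketch assuming the key pair created above and a pull secret saved to ~/Downloads/pull-secret.txt; both paths are illustrative):

cp ~/.ssh/id_rsa_kni.pub $HOME/akraino-secrets/ssh-pub-key
cp ~/Downloads/pull-secret.txt $HOME/akraino-secrets/coreos-pull-secret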

Pre-Requisites for Deploying to AWS

For deploying a KNI blueprint to AWS, you need to

  • add a public hosted DNS zone for the cluster to Route53,
  • validate your AWS quota in the chosen region is sufficient,
  • set up an API user account with the necessary IAM privileges.

Please see the upstream documentation for details. 
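For example, the hosted zone can be created with the AWS CLI (a sketch; the domain name is illustrative, and the caller reference just needs to be a unique string):

aws route53 create-hosted-zone --name kni.akraino.org --caller-reference kni-$(date +%s)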

Store the aws-access-key-id and aws-secret-access-key in files of the same name in the akraino-secrets folder you created earlier.
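For example (a sketch using AWS's documented placeholder credentials; substitute the keys of the IAM user you created):

echo 'AKIAIOSFODNN7EXAMPLE' > $HOME/akraino-secrets/aws-access-key-id
echo 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > $HOME/akraino-secrets/aws-secret-access-key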

Pre-Requisites for Deploying to Bare Metal

For deploying a KNI blueprint to bare metal using Metal3, you need to

  • Acquire a server for the Provisioning Host with the following networking topology:
    • one NIC with access to the internet,
    • two NICs that will connect to the Provisioning and Baremetal networks.
  • Acquire two or more servers for the cluster:
    • in the minimal case, 1 master + 1 worker,
    • for a base cluster with HA, 3 masters.
    • Master nodes should have 16 GB of RAM.
    • Worker nodes should have a minimum of 16 GB of RAM.
    • Each node is required to have 2 independent NICs.
  • Networking
    • One NIC on each of the masters and workers will be put on the Provisioning LAN.
    • One NIC on each of the masters and workers will be put on the Baremetal LAN.
    • The Baremetal and Provisioning LANs should be isolated and must not contain a DHCP or DNS server.
  • Deploy the Provisioning Host server with CentOS 7.6.
    • The server requires:
      • 16 GB of RAM,
      • 1 socket / 12 cores,
      • 200 GB of free space in /.
    • Connect one NIC to the internet, one to the Baremetal network and one to the Provisioning network.
  • Prepare the Provisioning Host server for virtualization by sourcing utils/prep_host.sh from the kni-installer repo on the host, as sketched below.
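For example, on the Provisioning Host (a sketch; the repo URL and script path are those used elsewhere in this document):

git clone https://gerrit.akraino.org/r/kni/installer
cd installer
source utils/prep_host.sh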

Pre-Requisites for Deploying to Libvirt

For deploying a KNI blueprint to VMs on KVM/libvirt, you need to

  • provision a machine with CentOS 7 (1810) to serve as the virthost and
  • prepare the virthost by sourcing utils/prep_host.sh from the kni-installer repo on that host, as sketched in the bare-metal section above.

Please see the upstream documentation for details.

Deploying a Blueprint

There is a Makefile in the root directory of the kni-installer repository. In order to deploy, you will need to use the following syntax:

export BASE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export SITE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export SETTINGS_PATH="aws/sample_settings.yaml"
make deploy

where

  • BASE_REPO: Repository for the base manifests, i.e. the repository where the default manifest templates are stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates.
  • BASE_PATH: Folder inside BASE_REPO. There is one specific folder for each blueprint and provider (aws/1-node, libvirt/1-node, etc.), so you need to specify the folder inside BASE_REPO that matches the type of deployment you want to execute.
  • SITE_REPO: Repository for the site manifests. Each site can have different manifests and settings, so this needs to be a per-site repository where the individual settings and the generated per-site manifests are stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates.
  • SETTINGS_PATH: Path inside SITE_REPO to the settings.yaml file with the per-site settings, which the site repository needs to contain once it is cloned. In Akraino, sample settings files for AWS and libvirt are provided at aws/sample_settings.yaml and libvirt/sample_settings.yaml in the SITE_REPO. You should create your own settings specific to your deployment.
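Putting it together, a deployment driven entirely by the Akraino sample templates and settings might look like this (a sketch; substitute your own site repository and settings for a real deployment):

export BASE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export BASE_PATH="libvirt/1-node"
export SITE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export SETTINGS_PATH="libvirt/sample_settings.yaml"
make deploy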

Site Configurations for Deployments to AWS

Deploying to AWS requires a site-config.yaml file to be created that looks like this:

settings:
  baseDomain: "devcluster.openshift.com"
  clusterName: "kni-edge"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "10.0.0.0/16"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  AWSRegion: "us-east-1"

where

  • baseDomain: DNS zone matching the one created on Route53, e.g. "kni.akraino.org"
  • clusterName: name you are going to give to the cluster, e.g. "site-01"
  • AWSRegion: region where you want your cluster to deploy, e.g. "us-east-1"

The other values (clusterCIDR, clusterSubnetLength, machineCIDR, serviceCIDR, SDNType) can be left at the defaults shown above.

Then, when deploying the cluster, apply the settings using:

make deploy SETTINGS_PATH=site-config.yaml BASE_PATH="aws/3-node"


Note: Please consider that, for security reasons, SSH access to the nodes is disabled on AWS deployments. If you need SSH access for any reason, please take a look at `Unable to SSH into Master Nodes` in the upstream troubleshooting documentation.

Site Configurations for Deployments to libvirt

Deploying to libvirt requires a site-config.yaml file to be created that looks like this:

settings:
  baseDomain: "tt.testing"
  clusterName: "test"
  clusterCIDR: "10.128.0.0/14"
  clusterSubnetLength: 9
  machineCIDR: "192.168.126.0/24"
  serviceCIDR: "172.30.0.0/16"
  SDNType: "OpenShiftSDN"
  libvirtURI: "qemu+tcp://192.168.122.1/system"

where

  • baseDomain: DNS zone matching the entry created in /etc/NetworkManager/dnsmasq.d/openshift.conf during the libvirt-howto machine setup ("tt.testing" by default)
  • clusterName: name you are going to give to the cluster, e.g. "site-01"
  • libvirtURI: connection URI pointing at the prepared virthost, e.g. "qemu+tcp://192.168.122.1/system"
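For reference, a minimal sketch of that dnsmasq entry (assuming the default baseDomain and the machineCIDR from the sample settings, where 192.168.126.1 is the libvirt network's address on the virthost):

# /etc/NetworkManager/dnsmasq.d/openshift.conf
server=/tt.testing/192.168.126.1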

The other values (clusterCIDR, clusterSubnetLength, machineCIDR, serviceCIDR, SDNType) can be left at the defaults shown above.

Then, when deploying the cluster, apply the settings using:

make deploy SETTINGS_PATH=site-config.yaml BASE_PATH="libvirt/3-node"


There is a recording of the deployment of a blueprint to libvirt (1 master, 1 worker) here: https://www.youtube.com/watch?v=3mDb1cba8uU


Note: Currently, the installer fails when adding the console to the cluster on libvirt. To make it work, please follow the instructions in https://github.com/openshift/installer/pull/1371.

Accessing the Cluster

After the deployment finishes, a kubeconfig file will be placed inside the build/auth directory:

export KUBECONFIG=./build/auth/kubeconfig

The cluster can then be managed with the kubectl or oc (a drop-in replacement with advanced functionality) CLI tools. To get the oc client, visit https://cloud.redhat.com/openshift/install/metal/user-provisioned, follow the Download Command-Line Tools link, and download the openshift-client archive that matches your operating system.
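To verify access (a quick sanity check, assuming the deployment completed and KUBECONFIG is exported as above):

kubectl get nodes
oc get clusteroperators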

Destroying the Cluster

In order to destroy the running cluster and clean up your environment, run

make clean

Troubleshooting the Cluster

Please see the upstream documentation for details.


