This document describes a basic installation of the KNI blueprints. It targets both the Industrial and Telco use cases, and covers deployments on libvirt and AWS.
Dependencies
You will need to create an account on http://cloud.openshift.com; this is needed for download access to the OpenShift installer artifacts. After that, download the pull secret from https://cloud.openshift.com/clusters/install (step 4 - Deploy the cluster).
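The pull secret is a JSON document, and a truncated copy-and-paste is a common source of later failures. A small sanity check can catch that early; this is a sketch, and the file name `pull-secret.json` is an assumption, not something the installer mandates:

```shell
# Hedged sketch: verify a downloaded pull secret looks like valid JSON
# containing the expected "auths" key. The file name is an example only.
check_pull_secret() {
  f="$1"
  [ -f "$f" ] || { echo "pull secret not found: $f" >&2; return 1; }
  grep -q '"auths"' "$f" || { echo "no auths key in $f" >&2; return 1; }
  echo "pull secret looks sane"
}

# usage: check_pull_secret pull-secret.json
```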
...
git clone https://gerrit.akraino.org/r/kni/installer
How to build
First, the kni-edge-installer binary needs to be produced. To do so, change into the directory of the cloned project and run make with the following syntax:
...
This will produce the kni-edge-installer binary that can be used to deploy a site.
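Since the build relies on make (and a Go toolchain when building from source), a quick prerequisite check before building can save a confusing failure later. A minimal sketch; the tool list is an assumption based on a typical Go project, not taken from the project's Makefile:

```shell
# Hedged sketch: confirm the listed tools exist on PATH before running make.
# The tool list is an assumption for a typical Go project build.
check_tools() {
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing prerequisite: $tool" >&2
      return 1
    fi
  done
  echo "prerequisites ok"
}

# usage: check_tools git go make
```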
How to deploy
There is a Makefile in the root directory of this project. To deploy, use the following syntax:
export CREDENTIALS=file://$(pwd)/akraino-secrets
export BASE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export BASE_PATH="aws/3-node"
export SITE_REPO="git::https://gerrit.akraino.org/r/kni/templates"
export SETTINGS_PATH="aws/sample_settings.yaml"
make deploy
Where:
CREDENTIALS
Content of the private repo. This repository stores the private credentials needed for the deployment. It is recommended to keep those credentials in a private repository that only authorized people can access, with access controlled by SSH keys.
...
export CREDENTIALS=file:///path/to/akraino-secrets
BASE_REPO
Repository for the base manifests. This is the repository where the default manifest templates are stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates
BASE_PATH
Inside BASE_REPO there is one specific folder for each blueprint and provider: aws/1-node, libvirt/1-node, etc. You need to specify the folder inside BASE_REPO that matches the type of deployment you want to execute.
SITE_REPO
Repository for the site manifests. Each site can have different manifests and settings, so this needs to be a per-site repository where the individual settings are stored, and where the generated manifests for each site will be stored. For Akraino, it defaults to git::https://gerrit.akraino.org/r/kni/templates
SETTINGS_PATH
Specific site settings. Once the site repository is cloned, it needs to contain a settings.yaml file with the per-site settings; SETTINGS_PATH is the path inside SITE_REPO where that settings.yaml is located. In Akraino, sample settings files for AWS and libvirt are provided at aws/sample_settings.yaml and libvirt/sample_settings.yaml in the SITE_REPO. You should create your own settings specific to your deployment.
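The five variables above are easy to misspell, and make will fail late if one is empty. A small wrapper that validates them before invoking make deploy might look like the following; the variable names are the ones documented above, but the wrapper itself is an illustration, not part of the project:

```shell
# Hedged sketch: fail fast if any required deploy variable is unset or empty.
check_deploy_env() {
  missing=0
  for var in CREDENTIALS BASE_REPO BASE_PATH SITE_REPO SETTINGS_PATH; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "required variable not set: $var" >&2
      missing=1
    fi
  done
  return $missing
}

# usage: check_deploy_env && make deploy
```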
How to deploy for AWS
Before starting the deploy, please read the following documentation to prepare the AWS account properly: https://github.com/openshift/installer/blob/master/docs/user/aws/README.md
...
If you need to SSH for any reason, please take a look at the `Unable to SSH into Master Nodes` section of the following document: https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md
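The OpenShift installer picks up AWS credentials from the standard locations (environment variables or a credentials file such as ~/.aws/credentials). A hedged pre-flight check is sketched below; the helper name and its messages are illustrative, not part of the installer:

```shell
# Hedged sketch: check that AWS credentials are available either in the
# environment or in a credentials file before starting the deploy.
have_aws_creds() {
  creds_file="$1"
  if [ -n "${AWS_ACCESS_KEY_ID:-}" ] && [ -n "${AWS_SECRET_ACCESS_KEY:-}" ]; then
    echo "using credentials from environment"
    return 0
  fi
  if [ -f "$creds_file" ] && grep -q '^\[' "$creds_file"; then
    echo "using credentials file: $creds_file"
    return 0
  fi
  echo "no AWS credentials found" >&2
  return 1
}

# usage: have_aws_creds "$HOME/.aws/credentials"
```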
How to deploy for Libvirt
First of all, we need to prepare a host by configuring libvirt, iptables, permissions, etc. This repository contains a bash script that will prepare the host for you:
...
Then copy the generated binary into the GOBIN path (see the `Building and consuming your own installer` section of the KNI User Documentation).
There are two different footprints for libvirt: 1 master/1 worker, and 3 masters/3 workers. The Makefile needs to be called with:
...
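Following the BASE_PATH naming convention shown earlier (aws/1-node, libvirt/1-node, etc.), the two footprints can be selected through BASE_PATH. A hedged sketch; the exact folder names are assumptions, so check the templates repository for the authoritative list:

```shell
# Hedged sketch: pick a BASE_PATH for the desired libvirt footprint.
# Folder names follow the convention shown for AWS and are assumptions.
libvirt_base_path() {
  case "$1" in
    1-master-1-worker) echo "libvirt/1-node" ;;
    3-master-3-worker) echo "libvirt/3-node" ;;
    *) echo "unknown footprint: $1" >&2; return 1 ;;
  esac
}

# usage: export BASE_PATH="$(libvirt_base_path 1-master-1-worker)"
```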
There is a recording of the deployment of a Libvirt blueprint with 1 master and 1 worker. Please, see the following link to watch it: https://www.youtube.com/watch?v=3mDb1cba8uU
Temporary workaround
Currently the installer fails when adding the console to the cluster for libvirt. To make it work, please follow the instructions in https://github.com/openshift/installer/pull/1371.
How to use the cluster
After the deployment finishes, a kubeconfig file will be placed inside the build/auth directory:
...
The cluster can then be managed with the oc CLI tool. You can get the client at this link: https://cloud.openshift.com/clusters/install (Step 5: Access your new cluster).
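Assuming the kubeconfig lands in build/auth as described, pointing oc at the cluster is a matter of exporting KUBECONFIG. A small sketch that checks the file exists first (the helper name is illustrative):

```shell
# Hedged sketch: verify the generated kubeconfig exists, then print the
# export line needed before using oc (or kubectl) against the cluster.
use_kubeconfig() {
  cfg="$1"
  [ -f "$cfg" ] || { echo "kubeconfig not found: $cfg" >&2; return 1; }
  echo "export KUBECONFIG=$cfg"
}

# usage: eval "$(use_kubeconfig "$(pwd)/build/auth/kubeconfig")" && oc get nodes
```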
How to destroy the cluster
To destroy the running cluster and clean up the environment, just run the make clean command.
The openshift-installer binaries are published on https://github.com/openshift/installer/releases.
...
Or pass it as a parameter to the make command.
Customization
Use your own manifests
openshift-installer is also able to produce manifests that end users can modify, and then deploy a cluster with the modified settings. New manifests can be generated with:
...
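After the manifests are generated, a typical workflow is to back up a generated YAML file, edit a field, and re-run the deployment. A hedged sketch; the manifest path and the edited field are purely illustrative:

```shell
# Hedged sketch: back up a generated manifest, then tweak a field in place.
# The manifest path and the replicas field are illustrative examples only.
patch_manifest() {
  f="$1"; from="$2"; to="$3"
  [ -f "$f" ] || { echo "no such manifest: $f" >&2; return 1; }
  cp "$f" "$f.bak"                     # keep the original for reference
  sed "s/$from/$to/" "$f.bak" > "$f"   # apply the substitution
}

# usage: patch_manifest manifests/cluster-config.yaml 'replicas: 3' 'replicas: 1'
```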