...

CLI tool: knictl

The current KNI blueprints use the openshift-install tool from the OKD Kubernetes distribution to stand up a minimal Kubernetes cluster. All other Day 1 and Day 2 operations are then driven purely through manipulation of declarative Kubernetes manifests. To use this in the context of Akraino KNI blueprints, the project has created a helper CLI tool, knictl, which needs to be installed first.
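
A typical way to get knictl onto a host is to build it from the kni-installer repository. A minimal sketch (the exact build target may differ between releases, so check the repo README):

git clone https://github.com/akraino-edge-stack/kni-installer.git
cd kni-installer
make build                      # hypothetical target; see the repo README
sudo cp knictl /usr/local/bin/  # put the binary on the PATH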

...

[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx
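
This is the standard AWS shared-credentials format; the default location where AWS tooling looks for it is ~/.aws/credentials. For example:

mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id=xxx
aws_secret_access_key=xxx
EOF
chmod 600 ~/.aws/credentials    # keep the credentials private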

...

  • provision a machine with CentOS 7 (1810) to serve as virthost, and
  • prepare the virthost by running
    source utils/prep_host.sh

    from the kni-installer repo on that host, as sketched below.
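
Put together, the preparation on the virthost might look like this:

# on the CentOS 7 (1810) virthost
git clone https://github.com/akraino-edge-stack/kni-installer.git
cd kni-installer
source utils/prep_host.sh       # prepares the host for virtualized deployments (see the script itself for details)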

...

In order to deploy a blueprint, you need to create a repository with a site. The site configuration is based on kustomize, and needs to use our blueprints as its base, referencing them properly. Sample sites for deploying on libvirt, AWS and baremetal can be seen at: https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites.
A site needs to have this structure:

.
├── 00_install-config
│   ├── install-config.name.patch.yaml
│   ├── install-config.patch.yaml
│   ├── kustomization.yaml
│   └── site-config.yaml
├── 01_cluster-mods
│   ├── kustomization.yaml
│   ├── manifests
│   └── openshift
├── 02_cluster-addons
│   └── kustomization.yaml
└── 03_services
    └── kustomization.yaml
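
One way to bootstrap a new site skeleton with exactly this layout (the site name is illustrative):

SITE=sites/staging.example.com   # illustrative site name
mkdir -p $SITE/{00_install-config,01_cluster-mods/manifests,01_cluster-mods/openshift,02_cluster-addons,03_services}
touch $SITE/00_install-config/{install-config.name.patch.yaml,install-config.patch.yaml,kustomization.yaml,site-config.yaml}
touch $SITE/{01_cluster-mods,02_cluster-addons,03_services}/kustomization.yaml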

00_install-config

This folder will contain the basic settings for the site, including the base blueprint/profile, and the site name/domain. The following files are needed:

...

In the absence of this setting, the installer throws errors like the following:

Error: Error running command '          ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis bootdev pxe;
          ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power cycle || ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx power on;
': exit status 1. Output: Error: Unable to establish IPMI v2 / RMCP+ session
Error: Unable to establish IPMI v2 / RMCP+ session
Error: Unable to establish IPMI v2 / RMCP+ session

Depending on the server, the RMCP+ session may need to be enabled in the security settings of the management console.

After enabling this setting, you can run the command below to verify that it is working as expected, supplying the IP address, username and password:

ipmitool -I lanplus -H x.x.x.x -U xxx -P xxxxx chassis status

(where x.x.x.x is the IPMI address of your master/worker node, followed by the IPMI username and password, e.g. the root credentials of iDRAC)
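
On success, the output looks roughly like the following (exact fields vary by vendor and firmware):

System Power         : on
Power Overload       : false
Main Power Fault     : false
Chassis Intrusion    : inactive
...

If you still see "Unable to establish IPMI v2 / RMCP+ session", re-check the RMCP+ setting and the credentials.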

High level steps

Setup installer node

Install the CentOS operating system on it. Once installed, configure your NICs/VLANs properly (management/external, provisioning, baremetal, IPMI). Be sure to collect the interface/VLAN information, as it is needed in the site configuration below.
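
For example, with iproute2 you can list what is there and create any missing VLAN subinterfaces (interface names and VLAN IDs below are illustrative):

ip -br link                     # list NICs and existing VLAN interfaces
ip -br addr                     # check which addresses are assigned where
# create a VLAN subinterface on top of a physical NIC:
ip link add link enp136s0f0 name enp136s0f0.3009 type vlan id 3009
ip link set enp136s0f0.3009 up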

Create your site

The first step in starting a baremetal deployment is to have a site defined, with all the network and baremetal settings described in the YAML files. A sample site using this baremetal automation is community.baremetal.edge-sites.net in the sites folder linked above.
In order to define the settings for a site, the first section, 00_install-config, needs to be used.
Start by creating a kustomization file like the following: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/kustomization.yaml

In this kustomization file we patch the default install-config, and also add an extra file defining networking (site-config.yaml).
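
As a rough sketch of the idea (the real file is in the link above; the base path here is illustrative and must point at the 00_install-config folder of the blueprint profile you are deploying):

# kustomization.yaml - sketch only
bases:
- ../../../profiles/production.baremetal/00_install-config   # illustrative path to the blueprint base
patchesStrategicMerge:
- install-config.patch.yaml
patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml
resources:
- site-config.yaml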

credentials.yaml:

This file is not shown in the site structure, as it contains private content. It needs to have the following structure:

apiVersion: v1
kind: Secret
metadata:
  name: community-lab-ipmi
data:
  username: xxx <- base64 encoded IPMI username
  password: xxx <- base64 encoded IPMI password
type: Opaque
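
The base64 values can be produced with the base64 tool; the -n matters, so that no trailing newline gets encoded:

echo -n 'admin' | base64      # -> value for the username field
echo -n 's3cret' | base64     # -> value for the password field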

install-config.patch.yaml : https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/install-config.patch.yaml

apiVersion: v1
kind: InstallConfig
baseDomain: baremetal.edge-sites.net <- domain of your site
compute:
 - name: worker
   replicas: 2 <- number of needed workers
controlPlane:
   name: master
   platform: {}
   replicas: 1 <- number of needed masters (1/3)
metadata:
   name: cluster
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
   none: {}
   apiVIP: 192.168.111.4  <- IP for Kubernetes api endpoint, needs to be on the range of your baremetal network
   ingressVIP: 192.168.111.3 <- IP for the Kubernetes ingress endpoint, needs to be on the range of your baremetal network
   dnsVIP: 192.168.111.2 <- IP for the Kubernetes DNS endpoint, needs to be on the range of your baremetal network
   hosts:
      # Master nodes are always RHCOS
      -  name: master-0
         role: master
         bmc:
            address: ipmi://10.11.7.12 <- ipmi address for master
            credentialsName: community-lab-ipmi <- this needs to reference the name of the secret provided in credentials.yaml
         bootMACAddress: 3C:FD:FE:CD:98:C9  <- mac address for the provisioning interface of your master
         sdnMacAddress: 3C:FD:FE:CD:98:C8   <- mac address for the baremetal interface of your master
         # sdnIPAddress: 192.168.111.11     <- Optional -- Set static IP on your baremetal for your master
         hardwareProfile: default
         osProfile: 
            # With role == master, the osType is always rhcos
            # And with type rhcos, the following settings are available
            type: rhcos
            pxe: bios         <- pxe boot type either bios (default if not specified) or uefi
            install_dev: sda  <- where to install the operating system (sda is the default)
      # Worker nodes can be either rhcos (default) || centos (7.x) || rhel (8.x)
      -  name: worker-0
         role: worker
         bmc: 
            address: ipmi://10.11.7.13
            credentialsName: community-lab-ipmi
         bootMACAddress: 3C:FD:FE:CD:9E:91
         sdnMacAddress: 3C:FD:FE:CD:9E:90
         hardwareProfile: default
         provisioning_interface: enp134s0f1 <- specify this if the provisioning interface differs from the one given in site-config.yaml below
         baremetal_interface: enp134s0f0 <- specify this if the baremetal interface differs from the one given in site-config.yaml below
         # If an osProfile/type is not defined, the node defaults to RHCOS
         osProfile: 
            type: centos7
            # With type: rhcos, the following settings are available
            pxe: bios    # pxe boot type either bios (default if not specified) or uefi
            install_dev: sda  # where to install the operating system (sda is the default)
      -  name: worker-1
         role: worker
         bmc: 
            address: ipmi://10.11.7.14
            credentialsName: community-lab-ipmi
         bootMACAddress: 3C:FD:FE:CD:9B:81
         sdnMacAddress: 3C:FD:FE:CD:9B:80
         hardwareProfile: default
         # If an osProfile/type is not defined, the node defaults to RHCOS
         # osProfile: 
            # type: rhcos
            # With type: rhcos, the following settings are available
            # pxe: bios|uefi    # pxe boot type either bios (default if not specified) or uefi
            # install_dev: sda  # where to install the operating system (sda is the default)
pullSecret: 'PULL_SECRET'
sshKey: |
   SSH_PUB_KEY
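
If you fill the PULL_SECRET and SSH_PUB_KEY placeholders in by hand rather than through tooling, a simple substitution works (paths are illustrative, and this assumes a single-line pull secret):

sed -i "s|PULL_SECRET|$(cat ~/pull-secret.json)|" install-config.patch.yaml
sed -i "s|SSH_PUB_KEY|$(cat ~/.ssh/id_rsa.pub)|" install-config.patch.yaml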

site-config.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/site-config.yaml

apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config: {}
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: ens787f1 <- name of the provisioning interface for the masters
    # interface to use for provisioning on the workers
    workerBootInterface: ens787f1 <- name of the provisioning interface for the workers
    # interface to use for baremetal on the masters
    masterSdnInterface: ens787f0 <- name of the baremetal interface for the masters
    # interface to use for baremetal on the workers
    workerSdnInterface: ens787f0 <- name of the baremetal interface for the workers

  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24 <- range of the provisioning network
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11 <- DHCP start range of the provisioning network
    provisioningDHCPEnd: 172.22.0.51 <- DHCP end range of the provisioning network

    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24 <- range of the baremetal network
    # Address map
    # bootstrap: baremetalDHCPStart   i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1  i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2  i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3  i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5  i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10 <- DHCP start range of the baremetal network. Needs to start with an IP that does not conflict with previous baremetal VIP definitions
    baremetalDHCPEnd: 192.168.111.50 <- DHCP end range
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, the cluster DNS IP will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 8.8.8.8
#     external2: 10.11.5.19
#     external3: 10.11.5.19

  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: enp136s0f1 <- this typically needs to be a NIC, not a VLAN (unless your system supports PXE booting from VLANs)
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.1
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: enp136s0f0.3009
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.1
      # Interface on the provisioning host that connects to the internet/external network
      external: enp136s0f0.3008
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning <- typically leave these names as they are
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true <- set it to false only if you have your own DHCP server for the baremetal network
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true <- set it to false only if you have your own DNS on the baremetal network and can configure the names properly
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true
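
Since the site is a plain kustomize overlay, a quick sanity check that everything renders is to run kustomize build against the folder (the deployment tooling consumes this same overlay):

kustomize build sites/community.baremetal.edge-sites.net/00_install-config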

Once the installer node is set up (see above), configure it to run knictl: Install knictl



knictl offers two commands to automate the deployment of a baremetal UPI cluster (and only baremetal UPI, at this time).  As prerequisites to using these commands, you must ensure the following are true:

...