...
Specific instructions for baremetal are going to be provided later.
4. Apply workloads
After the cluster has been generated, the extra workloads specified in the manifests (like kubevirt) need to be applied. This can be achieved by:
./knictl apply_workloads $SITE_NAME
This will execute kustomize on the site manifests and apply the output to the cluster. After that, the site deployment can be considered finished.
Deploying on baremetal
Minimal hardware footprint needed
This is a minimal configuration example where only 3 servers are used. The servers and their roles are given in the table below.
| Server# | Role | Purpose |
|---|---|---|
| 1 | Installer node | This host is used for remotely installing and configuring the master and worker nodes. It also hosts the bootstrap node on KVM-QEMU using libvirt. Several components, such as HAProxy, a DNS server, DHCP servers for the provisioning and baremetal networks, CoreDNS, Matchbox, Terraform, IPMItool and TFTPboot, are configured on this server. Since the cluster CoreDNS runs from here, this node will be required later as well. |
| 2 | Master node | This is the control plane (master) node of the K8s cluster, which is based on OpenShift 4.x. |
| 3 | Worker node | This is the worker node, which hosts the applications. |
| 4 | Bootstrap node | The bootstrap node runs as a VM on the installer node; it exists only during the installation and is automatically deleted by the installer afterwards. |
...
The first step to start a baremetal deployment is to have a site defined, with all the network and baremetal settings defined in the yaml files. A sample site using this baremetal automation can be seen here.
In order to define the settings for a site, the first section, 00_install-config, needs to be used.
Start by creating a kustomization file like the following: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/kustomization.yaml
bases:
- git::https://gerrit.akraino.org/r/kni/blueprint-pae.git//profiles/production.baremetal/00_install-config
patches:
- install-config.patch.yaml
patchesJson6902:
- target:
    version: v1
    kind: InstallConfig
    name: cluster
  path: install-config.name.patch.yaml
transformers:
- site-config.yaml
In this kustomization file we are patching the default install-config, and also adding some extra files to define networking (site-config.yaml).
credentials.yaml:
This file is not shown on the site structure as it contains private content. It needs to have following structure:
apiVersion: v1
kind: Secret
metadata:
  name: community-lab-ipmi
stringdata:
  username: xxx <- base64 encoded IPMI username
  password: xxx <- base64 encoded IPMI password
type: Opaque
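The base64-encoded values for the secret can be produced with the standard `base64` utility; for example (the value "root" is just a placeholder, substitute your real IPMI credentials):

```shell
# Encode a placeholder IPMI username for credentials.yaml
# ("root" is an example value; use your real username/password)
echo -n 'root' | base64   # → cm9vdA==
```

Paste the resulting strings into the username and password fields of credentials.yaml.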
install-config.name.patch.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/install-config.name.patch.yaml
...
- op: replace
  path: "/metadata/name"
  value: community <- replace with your cluster name here
install-config.patch.yaml: https://github.com/akraino-edge-stack/kni-blueprint-pae/blob/master/sites/community.baremetal.edge-sites.net/00_install-config/install-config.patch.yaml
...
site-config.yaml:
apiVersion: kni.akraino.org/v1alpha1
kind: SiteConfig
metadata:
  name: notImportantHere
config: {}
provisioningInfrastructure:
  hosts:
    # interface to use for provisioning on the masters
    masterBootInterface: ens787f1 <- name of the provisioning interface for the masters
    # interface to use for provisioning on the workers
    workerBootInterface: ens787f1 <- name of the provisioning interface for the workers
    # interface to use for baremetal on the masters
    masterSdnInterface: ens787f0 <- name of the baremetal interface for the masters
    # interface to use for baremetal on the workers
    workerSdnInterface: ens787f0 <- name of the baremetal interface for the workers
  network:
    # The provisioning network's CIDR
    provisioningIpCidr: 172.22.0.0/24 <- range of the provisioning network
    # PXE boot server IP
    # DHCP range start (usually provHost/interfaces/provisioningIpAddress + 1)
    provisioningDHCPStart: 172.22.0.11 <- DHCP start range of the provisioning network
    provisioningDHCPEnd: 172.22.0.51 <- DHCP end range
    # The baremetal network's CIDR
    baremetalIpCidr: 192.168.111.0/24 <- range of the baremetal network
    # Address map
    # bootstrap: baremetalDHCPStart i.e. 192.168.111.10
    # master-0: baremetalDHCPStart+1 i.e. 192.168.111.11
    # master-1: baremetalDHCPStart+2 i.e. 192.168.111.12
    # master-2: baremetalDHCPStart+3 i.e. 192.168.111.13
    # worker-0: baremetalDHCPStart+5 i.e. 192.168.111.15
    # worker-N: baremetalDHCPStart+5+N
    baremetalDHCPStart: 192.168.111.10 <- DHCP start range of the baremetal network. Needs to start with an IP that does not conflict with previous baremetal VIP definitions
    baremetalDHCPEnd: 192.168.111.50 <- DHCP end range
    # baremetal network default gateway, set to proper IP if /provHost/services/baremetalGateway == false
    # if /provHost/services/baremetalGateway == true, baremetalGWIP will be located on provHost/interfaces/baremetal
    # and external traffic will be routed through the provisioning host
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      # if /provHost/services/clusterDNS == true, the cluster IP will be located on provHost/interfaces/provisioning
      # and DNS functionality will be provided by the provisioning host
      cluster: 192.168.111.3
      # Up to 3 external DNS servers to which non-local queries will be directed
      external1: 8.8.8.8
      # external2: 10.11.5.19
      # external3: 10.11.5.19
  provHost:
    interfaces:
      # Interface on the provisioning host that connects to the provisioning network
      provisioning: enp136s0f1 <- it typically needs to be a nic, not a vlan (unless your system supports pxe booting from vlans)
      # Must be in provisioningIpCidr range
      # pxe boot server will be at port 8080 on this address
      provisioningIpAddress: 172.22.0.1
      # Interface on the provisioning host that connects to the baremetal network
      baremetal: enp136s0f0.3009
      # Must be in baremetalIpCidr range
      baremetalIpAddress: 192.168.111.1
      # Interface on the provisioning host that connects to the internet/external network
      external: enp136s0f0.3008
    bridges:
      # These bridges are created on the bastion host
      provisioning: provisioning <- typically leave these fixed names
      baremetal: baremetal
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true <- set to false only if you have your own DHCP for the baremetal network
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true <- set to false only if you have your own DNS in the baremetal network and you can configure your names properly
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true
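The address map described in the comments above can be checked with simple shell arithmetic. A sketch using the sample values (prefix and start are taken from the example baremetalIpCidr and baremetalDHCPStart; adjust them to your site):

```shell
# Derive node IPs from baremetalDHCPStart per the address map:
# bootstrap = start, master-N = start+1+N, worker-N = start+5+N
prefix=192.168.111   # from baremetalIpCidr 192.168.111.0/24
start=10             # last octet of baremetalDHCPStart 192.168.111.10
echo "bootstrap: $prefix.$start"          # → bootstrap: 192.168.111.10
echo "master-0: $prefix.$((start + 1))"   # → master-0: 192.168.111.11
echo "master-2: $prefix.$((start + 3))"   # → master-2: 192.168.111.13
echo "worker-0: $prefix.$((start + 5))"   # → worker-0: 192.168.111.15
```

Note the gap between master-2 and worker-0: addresses start+4 are skipped by design, and both DHCP ranges must stay clear of any previously defined VIPs.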
Setup installer node
Install the CentOS operating system there. Once you have it, configure your NICs/VLANs properly (management/external, provisioning, baremetal, ipmi). Be sure to collect the information about the interfaces/vlans.
Configure the system properly to run knictl on it: Install knictl
knictl offers two commands to automate the deployment of a baremetal UPI cluster (and only baremetal UPI, at this time). As prerequisites to using these commands, you must ensure the following are true:
...
Fetch requirements
Inside the knictl path (typically $HOME/go/src/gerrit.akraino.org/kni/installer), run the fetch_requirements command, pointing to the github repo of the site you created:
...
./knictl fetch_requirements <site repo URI>
...
./knictl prepare_manifests $SITE_NAME
...
platform:
  none: {}
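For context, the `platform: none: {}` stanza sits at the top level of the install-config for a UPI/baremetal deployment, indicating that no cloud provider will provision the machines. A trimmed sketch with illustrative values:

```yaml
# trimmed install-config sketch; field values are illustrative
apiVersion: v1
baseDomain: edge-sites.net
metadata:
  name: community
platform:
  none: {}    # user-provisioned infrastructure: no cloud provider
```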
Once the aforementioned items have been dealt with, deploy your master nodes like so:
./knictl deploy_masters $SITE_NAME
...
For example:
./knictl fetch_requirements https://github.com/akraino-edge-stack/kni-blueprint-pae/tree/master/sites/community.baremetal.edge-sites.net
Prepare manifests
Run the prepare_manifests command, using the name of your site as a parameter:
./knictl prepare_manifests $SITE_NAME
For example:
./knictl prepare_manifests community.baremetal.edge-sites.net
Deploy masters
./knictl deploy_masters $SITE_NAME
This will deploy a bootstrap VM and begin to bring up your master nodes. After this command has successfully executed, monitor your cluster as you normally would while the masters are deploying.
...
Once the masters have reached the ready state, you can then deploy your workers.
Deploy workers
./knictl deploy_workers $SITE_NAME
This will begin to bring up your worker nodes. Monitor your worker nodes as you normally would during this process. If the deployment doesn't hit any errors, you will then have a working baremetal cluster.
After the masters and workers are up, you can apply the workloads using the general procedure as shown here.
Accessing the Cluster
After the deployment finishes, a kubeconfig file will be placed inside the auth directory:
...