...

The BPA is part of the infra local controller, which runs as a bootstrap k8s cluster in the ICN project. As described in the Integrated Cloud Native Akraino project, the purpose of the BPA is to install packages that cannot be installed using kubectl. It is called once the operating system (Linux) has been installed on the compute nodes by the baremetal operator. The Binary Provisioning Agent will carry out the following functions:

  1. Get site-specific information containing details about the compute nodes and their resource capabilities.
  2. Assign roles to compute hosts and create the hosts.ini file required to install Kubernetes on the compute nodes (to create a cluster using kubespray), based on the roles specified in the provisioning custom resource.
  3. Instantiate the binary package installation and get the status of the installation; get the application-k8s kubeconfig file.
  4. Store any private key and secret information in CSM.
  5. Install the packages on newly added compute nodes.
  6. Update package versions in compute nodes that require the update.

...

Prerequisites: This workflow assumes that the baremetal CR and baremetal operator have been created and have successfully installed the compute nodes with a Linux OS. It also assumes that the BPA controller is running.


Fig 1: Illustration of the proposed workflow

Workflow Summary/Description

  1. Create the BPA Provisioning CRD and Software CRD (each is created only once and just registers the resource kind).
  2. The CRDs are stored in etcd.
  3. Start the BPA controller. It then watches for the creation of either a software CR or a provisioning CR (we will be focusing on the provisioning CR here).
  4. Create an instance of the Provisioning custom resource. This can be done at any time once the BPA operator is running.

  5. The BPA operator continues to watch the k8s API server, and once it sees that a new BPA CR object has been created, it queries the k8s API server for the baremetal hosts list. The baremetal hosts list contains information about the provisioned compute nodes, including the IP address, CPU, memory, etc. of each host.

  6. The BPA operator looks into the baremetal hosts list and determines which hosts should be masters and which should be workers. As the master and worker fields have various parameters, it can do this in several ways:

    1. If the MAC address is provided in the BPA CR object, it compares that value with the value in the hosts list and assigns the roles. For example if a mac address of 00:c5:16:05:61:b2 is specified for master in the BPA CR spec, it checks the baremetal list for a host that has that MAC address and gives it the role of master.
    2. If there is no MAC address specified but just resources, it checks the baremetal list for hosts that meet the resource requirements
    3. If both MAC address and resource requirements are provided, it finds the host with the specified MAC address and confirms that the host meets the resource requirement provided in the BPA CR and then assigns the role.
  7. Using the MAC address of each host, the BPA operator looks in the lease file of the DHCP server running on the same host as the operator and determines the IP address that corresponds to that MAC address.
  8. The BPA operator reads a file containing the default username and password for the various hosts, copies its public key to those hosts in order to use kubespray later.
  9. The BPA operator then creates the hosts.ini file using the assigned roles and their corresponding IP addresses.

  10. The BPA operator then installs Kubernetes on the compute nodes using kubespray, thus creating an active Kubernetes cluster. During installation, it continues to check the status of the installation.

  11. On successful completion of the k8s cluster installation, the BPA operator saves the application-k8s kubeconfig file so it can access the k8s cluster later, for example to apply software updates or add a worker node.

In more detail, once roles have been assigned, the BPA operator proceeds as follows:

  1. Confirm that all the hosts specified in the provisioning CR exist in the baremetal hosts list, then query the DHCP lease file using the MAC address of each host to get the corresponding IP addresses.
  2. Create the hosts.ini file using the roles specified in the provisioning CR and the IP addresses from the step above.
  3. Once the hosts.ini file is created, start the KUD job.
  4. The KUD job installs KUD in the hosts.
  5. The BPA operator spawns a thread that continues to check the status of the KUD job.
  6. Once the KUD installation is completed, the BPA operator creates a configmap for that cluster; the configmap contains a mapping of the IP addresses to the host labels specified in the provisioning CR (this step is not shown in the diagram). This configmap will be used when the BPA operator is installing the software specified in the software CR (see BPA Software CR Specs).
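The IP-to-label mapping that goes into the per-cluster configmap can be sketched as a small helper. The function name and the surrounding setup are illustrative assumptions; only the data shape (IP address keyed to the host label from the provisioning CR) comes from the description above.

```go
package main

import "fmt"

// configMapData sketches the per-cluster configmap described above: the
// data section maps each host's IP address to the host label from the
// provisioning CR.
func configMapData(ipByLabel map[string]string) map[string]string {
	data := make(map[string]string, len(ipByLabel))
	for label, ip := range ipByLabel {
		data[ip] = label // keyed by IP, value is the CR host label
	}
	return data
}

func main() {
	data := configMapData(map[string]string{
		"master-1": "172.22.0.55",
		"worker-1": "172.22.0.56",
	})
	fmt.Println(data["172.22.0.55"]) // master-1
}
```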

BPA CRD

The BPA CRD tells the Kubernetes API how to expose the provisioning custom resource object. The CRD yaml file is applied using:

kubectl create -f bpa_v1alpha1_provisioning_crd.yaml

See below for the CRD definition.

 BPA CRD Yaml File (*_crd.yaml)

Code Block
languageyml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: provisionings.bpa.akraino.org
spec:
  group: bpa.akraino.org
  names:
    kind: Provisioning
    listKind: ProvisioningList
    plural: provisionings
    singular: provisioning
    shortNames:
    - bpa
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          description: 
          type: string
        kind:
          description: 
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true

...

Code Block
// ProvisioningSpec defines the desired state of Provisioning
// +k8s:openapi-gen=true
type ProvisioningSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file
	// Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html
	Masters    []map[string]Master `json:"masters,omitempty"`
	Workers    []map[string]Worker `json:"workers,omitempty"`
	KUDPlugins []string            `json:"KUDPlugins,omitempty"`
}

// ProvisioningStatus defines the observed state of Provisioning
// +k8s:openapi-gen=true
type ProvisioningStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "operator-sdk generate k8s" to regenerate code after modifying this file
	// Add custom validation using kubebuilder tags: https://book-v1.book.kubebuilder.io/beyond_basics/generating_crd.html
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Provisioning is the Schema for the provisionings API
// +k8s:openapi-gen=true
// +kubebuilder:subresource:status
type Provisioning struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ProvisioningSpec   `json:"spec,omitempty"`
	Status ProvisioningStatus `json:"status,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// ProvisioningList contains a list of Provisioning
type ProvisioningList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Provisioning `json:"items"`
}

// Master struct contains resource requirements for a master node
type Master struct {
	MACaddress string `json:"mac-address,omitempty"`
	CPU        int32  `json:"cpu,omitempty"`
	Memory     string `json:"memory,omitempty"`
}

// Worker struct contains resource requirements for a worker node
type Worker struct {
	MACaddress string `json:"mac-address,omitempty"`
	CPU        int32  `json:"cpu,omitempty"`
	Memory     string `json:"memory,omitempty"`
	SRIOV      bool   `json:"sriov,omitempty"`
	QAT        bool   `json:"qat,omitempty"`
}

func init() {
	SchemeBuilder.Register(&Provisioning{}, &ProvisioningList{})
}


The variables in the ProvisioningSpec struct are used to create the data structures in the yaml spec for the custom resource. Three variables are defined in the ProvisioningSpec struct:

  1. Masters: This variable contains an array of Master objects. The Master struct, as defined in the *_types.go file above, contains MAC address, CPU and memory information; the BPA operator uses this information to determine which compute nodes to assign the role of master when it gets the baremetal list from the API server.
  2. Workers: This variable contains an array of Worker objects. Similar to the Masters variable, the Worker struct contains resource requirements for the worker nodes, and the BPA operator uses this information to determine which hosts to assign the role of worker.
  3. KUDPlugins: This variable contains the list of KUD plugins to be installed with KUD in the cluster.

 Sample Provisioning CR YAML files

...

Code Block
languageyml
apiVersion: bpa.akraino.org/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-sample
  labels:
    cluster: cluster-abc
    owner: c1
spec:
  masters:
   - master:
      mac-address: 00:c6:14:04:61:b2
  workers:
    - worker-1:
       mac-address: 00:c6:14:04:61:b2

    - worker-2:
       mac-address: 00:c2:12:03:62:b1


Code Block
languageyml
apiVersion: bpa.akraino.org/v1alpha1
kind: Provisioning
metadata:
  name: sample-kud-plugins
  labels:
    cluster: cluster-efg
    owner: c2
spec:
  masters:
    - master-1:
       mac-address: 00:c5:16:05:61:b2
  workers:
    - worker-1:
       mac-address: 00:e1:ba:ce:df:bd
  KUDPlugins:
    - onap4k8s

Code Block
languageyml
apiVersion: bpa.akraino.org/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-sample
  labels:
    cluster: cluster-xyz
    owner: c2
spec:
  masters:
   - master-1:
      cpu: 10
      memory: 4Gi
      mac-address: 00:c5:16:05:61:b2
   - master-2:
      cpu: 10
      memory: 4Gi
      mac-address: 00:c2:14:06:61:b5
  workers:
   - worker:
      cpu: 20
      memory: 8Gi
      mac-address: 00:c6:14:04:61:b2

The YAML file above can be used to create a provisioning custom resource, which is an instance of the provisioning CRD described above. The spec.masters field corresponds to the Masters variable in the ProvisioningSpec struct of the *_types.go file, while the spec.workers field corresponds to the Workers variable in the same struct.

Currently the cpu and memory fields are not used by the BPA operator code. More provisioning CRs can be found here.

 Sample Baremetal Lists from Query

Code Block
languageyml
apiVersion: v1
items:
- apiVersion: metal3.io/v1alpha1
  kind: BareMetalHost
  metadata:
    creationTimestamp: "2019-07-20T01:43:19Z"
    finalizers:
    - baremetalhost.metal3.io
    generation: 2
    name: demo-provisioning
    namespace: metal3
    resourceVersion: "35002"
    selfLink: /apis/metal3.io/v1alpha1/namespaces/metal3/baremetalhosts/demo-provisioning
    uid: 3b22014e-9252-4f15-89a5-67f96e1a07a2
  spec:
    bmc:
      address: ipmi://172.31.1.17
      credentialsName: demo-provisioning-bmc-secret
    description: ""
    externallyProvisioned: false
    hardwareProfile: ""
    image:
      checksum: http://172.22.0.1/images/bionic-server-cloudimg-amd64.md5sum
      url: http://172.22.0.1/images/bionic-server-cloudimg-amd64.qcow2
    online: true
  status:
    errorMessage: ""
    goodCredentials:
      credentials:
        name: demo-provisioning-bmc-secret
        namespace: metal3
      credentialsVersion: "30393"
    hardware:
      cpu:
        arch: x86_64
        clockMegahertz: 3700
        count: 72
        flags:
        - ….
        - xtopology
        - xtpr
        model: Intel(R) Xeon(R) Gold 6140M CPU @ 2.30GHz
      firmware:
        bios:
          date: 11/07/2018
          vendor: Intel Corporation
          version: SE5C620.86B.00.01.0015.110720180833
      hostname: localhost.localdomain
      nics:
      - ip: ""
        mac: 3c:fd:fe:9c:88:60
        model: 0x8086 0x1572
        name: eth0
        pxe: false
        speedGbps: 0
        vlanId: 0
      - ip: 172.22.0.55
        mac: a4:bf:01:64:86:6f
        model: 0x8086 0x37d2
        name: eth5
        pxe: true
        speedGbps: 0
        vlanId: 0
      …
      ramMebibytes: 262144
      storage:
      - hctl: "6:0:0:0"
        model: INTEL SSDSC2KB48
        name: /dev/sda
        rotational: false
        serialNumber: BTYF8290022M480BGN
        sizeBytes: 480103981056
        vendor: ATA
        wwn: "0x55cd2e414fc888c1"
        wwnWithExtension: "0x55cd2e414fc888c1"
      - hctl: "7:0:0:0"
        model: INTEL SSDSC2KB48
        name: /dev/sdb
        rotational: false
        serialNumber: BTYF83160FDB480BGN
        sizeBytes: 480103981056
        vendor: ATA
        wwn: "0x55cd2e414fd7b5a3"
        wwnWithExtension: "0x55cd2e414fd7b5a3"
      systemVendor:
        manufacturer: Intel Corporation
        productName: S2600WFT (SKU Number)
        serialNumber: BQPW84200264
    hardwareProfile: unknown
    lastUpdated: "2019-07-20T02:41:30Z"
    operationalStatus: OK
    poweredOn: false
    provisioning:
      ID: 94fa2511-3cb1-4372-ab42-9c377db8aeca
      image:
        checksum: ""
        url: ""
      state: provisioning
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
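When matching a provisioning CR against entries like the one above, the operator has to scan each host's nics array, since a BareMetalHost reports several interfaces. A minimal sketch, with the metal3 types trimmed to just the fields used here (not the real metal3 API types):

```go
package main

import (
	"fmt"
	"strings"
)

// NIC and BareMetalHost mirror just the fields of the metal3 status
// section that the matching logic needs.
type NIC struct {
	MAC string
	IP  string
}

type BareMetalHost struct {
	Name string
	NICs []NIC
}

// findHostByMAC returns the host whose NIC list contains the given MAC
// address, comparing case-insensitively.
func findHostByMAC(hosts []BareMetalHost, mac string) (BareMetalHost, bool) {
	for _, h := range hosts {
		for _, n := range h.NICs {
			if strings.EqualFold(n.MAC, mac) {
				return h, true
			}
		}
	}
	return BareMetalHost{}, false
}

func main() {
	hosts := []BareMetalHost{{
		Name: "demo-provisioning",
		NICs: []NIC{
			{MAC: "3c:fd:fe:9c:88:60"},
			{MAC: "a4:bf:01:64:86:6f", IP: "172.22.0.55"},
		},
	}}
	if h, ok := findHostByMAC(hosts, "A4:BF:01:64:86:6F"); ok {
		fmt.Println(h.Name) // demo-provisioning
	}
}
```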



In addition, we would also have two other CRDs that the BPA would use to perform its functions:

  1. Software CRD
  2. Cluster CRD

Software CRD

The software CRD will install the required software and drivers, and perform software updates. See BPA Software CR Specs.

Draft Software CRD

Code Block
languageyml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: software.bpa.akraino.org
spec:
  group: bpa.akraino.org
  names:
    kind: software
    listKind: softwareList
    plural: software
    singular: software
    shortNames:
    - su
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          description: 
          type: string
        kind:
          description: 
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true

 Sample Software CR YAML files


Code Block
languageyml

apiVersion: bpa.akraino.org/v1alpha1
kind: Software
metadata:
  labels:
    cluster: cluster-xyz
    owner: c1
  name: example-software
spec:
  masterSoftware:
    - curl
    - htop
    - jq:
        version: 1.5+dfsg-1ubuntu0.1
    - maven:
        version: 3.3.9-3
  workerSoftware:
    - curl
    - htop
    - tmux
    - jq
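Note that each masterSoftware/workerSoftware entry is either a bare package name or a one-key map carrying a pinned version. A small sketch of how such entries might be rendered as apt-style name=version specs; this is illustrative only, not the actual BPA installation code, and the entry types are assumptions based on the sample CR above.

```go
package main

import "fmt"

// pkgSpec renders a software CR entry as an apt-style "name=version"
// spec. An entry is either a plain string, or a one-key map of
// name -> {version: ...}, matching the sample CR shapes.
func pkgSpec(entry interface{}) string {
	switch e := entry.(type) {
	case string:
		return e
	case map[string]map[string]string:
		for name, attrs := range e {
			if v, ok := attrs["version"]; ok {
				return name + "=" + v
			}
			return name
		}
	}
	return ""
}

func main() {
	entries := []interface{}{
		"curl",
		map[string]map[string]string{"jq": {"version": "1.5+dfsg-1ubuntu0.1"}},
	}
	for _, e := range entries {
		fmt.Println(pkgSpec(e))
	}
	// prints:
	// curl
	// jq=1.5+dfsg-1ubuntu0.1
}
```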

Cluster CRD

The cluster CRD will have the Cluster name and contain the provisioning CR and/or the software CR for the specified cluster

...

Code Block
languageyml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cluster.bpa.akraino.org
spec:
  group: bpa.akraino.org
  names:
    kind: cluster
    listKind: clusterList
    plural: clusters
    singular: cluster
    shortNames:
    - cl
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      properties:
        apiVersion:
          description: 
          type: string
        kind:
          description: 
          type: string
        metadata:
          type: object
        spec:
          type: object
        status:
          type: object
  version: v1alpha1
  versions:
  - name: v1alpha1
    served: true
    storage: true

 Sample Cluster CR YAML files


Code Block
languageyml
apiVersion: bpa.akraino.org/v1alpha1
kind: cluster
metadata:
  name: cluster-sample
  labels:
    cluster: cluster-abc
    owner: c1
spec:
  provisioningCR: "provisioning-sample"
  softwareCR: "software-sample"

Open Questions

...

Future Work

This proposal would make it possible to assign roles to nodes based on the features discovered. Currently, the proposal makes use of CPU, memory, SRIOV and QAT. However, the baremetal operator list returns much more information about the nodes, so we would be able to extend the feature to allow the operator to assign roles based on more complex requirements such as CPU model. This would feed into Hardware Platform Awareness (HPA).

...

  1. https://wiki.akraino.org/pages/viewpage.action?pageId=11995877&show-miniview
  2. https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#advanced-topics

Presentation:

Attached file: Akraino-Intergrated-Cloud-Native-NestedK8s HA.pptx