...

This workflow runs well in plain Kubernetes, but KubeVirt does not support it. Because KubeVirt is a CRD implementation, it does not process pod-style configuration in its YAML file; it has its own API server and controller that validate the CRD definition and create a corresponding pod. This means we cannot assign a QAT VF to a KubeVirt VM by adding spec.containers.resources.limits/requests: "1" with the QAT resource name to the VMI configuration file, the way an ordinary pod does (see the sketch after the list below). So the gaps in KubeVirt to enable QAT may be the following items:

  1. Create the corresponding custom resource field to hold QAT devices.
  2. The QAT feature should be optional and configurable through a configmap.
  3. When a QAT VF is assigned to a VM, KubeVirt needs to mount the required PCI device into it.
  4. Perform ordinary PCI device passthrough to assign the device.
  5. Change virt-api to validate a VMI that requests QAT.
  6. Provide an example YAML file to create a VMI with QAT.
  7. Add test cases.
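
For comparison, the sketch below (written with client-go types) shows how an ordinary Kubernetes pod consumes a QAT VF today: it simply requests the extended resource advertised by the QAT device plugin in spec.containers.resources.limits/requests. The resource name qat.intel.com/generic and the image are placeholder assumptions and depend on the actual device plugin deployment.

package qatexample

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// qatPodSketch builds an ordinary pod that requests one QAT VF through
// spec.containers.resources.limits/requests. The resource name
// "qat.intel.com/generic" is only an example; the real name is whatever
// the QAT device plugin registers on the node.
func qatPodSketch() *corev1.Pod {
	qatResource := corev1.ResourceName("qat.intel.com/generic")
	one := resource.MustParse("1")
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qat-workload"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "crypto-app",
				Image: "example/qat-app:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits:   corev1.ResourceList{qatResource: one},
					Requests: corev1.ResourceList{qatResource: one},
				},
			}},
		},
	}
}

This is exactly the path that does not exist for a VMI today, because the VMI spec is a CRD and is not interpreted as a pod spec.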

Integration

CRD Definition

KubeVirt uses the Kubernetes feature named Dynamic Admission Control and exposes the KubeVirt API through a ValidatingAdmissionWebhook. This feature allows KubeVirt to dynamically register an HTTPS webhook with Kubernetes at KubeVirt install time. After registering the custom webhook, all requests related to KubeVirt API objects are forwarded from the Kubernetes API server to our HTTPS endpoint for validation. If our endpoint rejects a request for any reason, the object will not be persisted into etcd and the client receives our response outlining the reason for the rejection.
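
As an illustration of that flow, the sketch below outlines such a validating endpoint: it decodes the AdmissionReview posted by the Kubernetes API server, validates the embedded object, and answers with an allowed or denied response. It is only a minimal outline, not virt-api's actual handler; validateVMI is a placeholder for the real VMI checks, including the QAT feature-gate check shown later in this section.

package qatexample

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serveVMIValidation sketches a validating-webhook endpoint: decode the
// AdmissionReview, validate the embedded object, and echo the review
// back with a response attached.
func serveVMIValidation(w http.ResponseWriter, r *http.Request) {
	review := &admissionv1.AdmissionReview{}
	if err := json.NewDecoder(r.Body).Decode(review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}

	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	if causes := validateVMI(review.Request.Object.Raw); len(causes) > 0 {
		// Rejecting here is what prevents the object from being persisted into etcd.
		resp.Allowed = false
		resp.Result = &metav1.Status{
			Message: "VMI rejected by admission webhook",
			Details: &metav1.StatusDetails{Causes: causes},
		}
	}

	review.Response = resp
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

// validateVMI is a placeholder: the real virt-api code decodes the VMI
// from the raw bytes and runs all admission checks, returning the causes
// of any rejection.
func validateVMI(raw []byte) []metav1.StatusCause {
	return nil
}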

So, to enable QAT in KubeVirt, it is necessary to add the related fields to the validation service and add a method that verifies the QAT feature gate.

  • Add the necessary information to swagger.json, which is the add-on Kubernetes API definition used in KubeVirt

...

"qats": {
"description": "Whether to assign a QAT vf device to the vmi.\n+optional",
"type": "array",
"items": {
"$ref": "#/definitions/v1.QAT"
}
},

...

"v1.QAT": {
"required": [
"name",
"deviceName"
],
"properties": {
"deviceName": {
"type": "string"
},
"name": {
"description": "Name of the QAT device as exposed by a device plugin",
"type": "string"
}
}
},
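
The swagger entries above mirror new Go types in KubeVirt's v1 API package. A rough sketch of those additions is shown below; the names follow the definitions above, everything else is an assumption.

package v1

// QAT describes a single QAT VF requested by the VMI (sketch of the
// proposed API addition).
type QAT struct {
	// Name of the QAT device as exposed by a device plugin.
	Name string `json:"name"`
	// DeviceName is the resource name advertised by the device plugin,
	// for example "qat.intel.com/generic" (example value).
	DeviceName string `json:"deviceName"`
}

// The Devices struct would gain an optional list next to the existing
// device fields:
//
//	QATs []QAT `json:"qats,omitempty"`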

  • Add a feature gate to the webhook validation service so that it can be configured through the configmap

...

if spec.Domain.Devices.QATs != nil && !config.QATPassthroughEnabled() {
	causes = append(causes, metav1.StatusCause{
		Type:    metav1.CauseTypeFieldValueInvalid,
		Message: fmt.Sprintf("QAT feature gate is not enabled in kubevirt-config"),
		Field:   field.Child("QATs").String(),
	})
}

...
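
The config.QATPassthroughEnabled() call used above still has to read the gate from the kubevirt-config configmap. Below is a minimal, self-contained sketch of that check; it assumes the gate is simply named "QAT" and that feature-gates is a comma-separated list, which is an assumption about the configmap layout.

package qatexample

import "strings"

// qatFeatureGate is the assumed name of the gate listed in the
// "feature-gates" entry of the kubevirt-config configmap.
const qatFeatureGate = "QAT"

// qatPassthroughEnabled reports whether the QAT gate appears in the
// comma-separated feature-gates value; the real ClusterConfig method
// would wrap this kind of lookup.
func qatPassthroughEnabled(featureGates string) bool {
	for _, gate := range strings.Split(featureGates, ",") {
		if strings.TrimSpace(gate) == qatFeatureGate {
			return true
		}
	}
	return false
}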


Non-privileged

To meet the Kubernetes and KubeVirt community requirements, the pod should be non-privileged, so we should mount the assigned QAT PCI device into the VM through the interfaces KubeVirt provides.

if util.IsQATVMI(vmi) {
	for _, qat := range vmi.Spec.Domain.Devices.QATs {
		requestResource(&resources, qat.DeviceName)
	}
}
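
util.IsQATVMI and requestResource in the snippet above are small helpers. Hypothetical sketches of both follow; the exact signatures in virt-controller may differ, and they assume the proposed QATs field has been added to the VMI API.

package qatexample

import (
	k8sv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"

	v1 "kubevirt.io/api/core/v1" // import path varies with the KubeVirt version
)

// isQATVMI reports whether the VMI spec asks for any QAT devices; it only
// compiles once the proposed QATs field exists on Devices.
func isQATVMI(vmi *v1.VirtualMachineInstance) bool {
	return len(vmi.Spec.Domain.Devices.QATs) > 0
}

// requestResource adds one unit of the device-plugin resource (the QAT VF
// resource name) to the virt-launcher container's limits and requests, so
// the scheduler places the pod on a node that still has a free VF.
func requestResource(resources *k8sv1.ResourceRequirements, deviceName string) {
	name := k8sv1.ResourceName(deviceName)
	unit := *resource.NewQuantity(1, resource.DecimalSI)
	if resources.Limits == nil {
		resources.Limits = k8sv1.ResourceList{}
	}
	if current, ok := resources.Limits[name]; ok {
		current.Add(unit)
		resources.Limits[name] = current
	} else {
		resources.Limits[name] = unit
	}
	if resources.Requests == nil {
		resources.Requests = k8sv1.ResourceList{}
	}
	resources.Requests[name] = resources.Limits[name]
}

Requesting the resource this way keeps the pod non-privileged and lets the scheduler, rather than KubeVirt, pick a node with a free QAT VF.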

PCI passthrough


<hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x3d' slot='0x02' function='0x2'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</hostdev>
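
The hostdev definition above is what virt-launcher ultimately hands to libvirt for the assigned VF. To fill in the <source> address, virt-launcher needs the PCI address of the VF that the device plugin allocated (how that address reaches virt-launcher, for instance through an environment variable set by the device plugin, is an assumption here) and must split it into domain, bus, slot and function. A small illustrative helper:

package qatexample

import (
	"fmt"
	"regexp"
)

// pciAddressRegex matches a full PCI address such as "0000:3d:02.2"
// (domain:bus:slot.function), the form used in sysfs and by device plugins.
var pciAddressRegex = regexp.MustCompile(`^([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):([0-9a-fA-F]{2})\.([0-7])$`)

// parsePCIAddress splits an address like "0000:3d:02.2" into the four hex
// fields needed for the <address> element inside <source> above. This is
// an illustrative helper only; KubeVirt's own code may do it differently.
func parsePCIAddress(addr string) (domain, bus, slot, function string, err error) {
	m := pciAddressRegex.FindStringSubmatch(addr)
	if m == nil {
		return "", "", "", "", fmt.Errorf("malformed PCI address %q", addr)
	}
	return "0x" + m[1], "0x" + m[2], "0x" + m[3], "0x" + m[4], nil
}

For example, parsePCIAddress("0000:3d:02.2") yields the domain, bus, slot and function values shown in the <source> element of the XML above.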



Example

continue...