
Project Technical Lead: Georg Kunz. Elected 1/17/19.

Project Committer Details:

Initial Committers for a project will be specified at project creation. Committers have the right to commit code to the source code management system for that project.

A Contributor may be promoted to a Committer by the project’s Committers after demonstrating a history of contributions to that project.

Candidates for the project's Project Technical Leader will be derived from the Committers of the project. Candidates must self-nominate by marking "Y" in the Self Nominate column below by Jan. 16th. Voting will take place January 17th.

Only Committers for a project are eligible to vote for a project’s Project Technical Lead.

Please see Akraino Technical Community Document section 3.1.3 for more detailed information.





Contact Info: Andrew Wilkinson
Committer Bio:
Committer Picture:
Self Nominate for PTL (Y/N):

Use Case Details:

Feature project proposers: Ericsson, Nokia, and Radisys (to be confirmed), plus possibly others (to be confirmed).

The following BP proposals require support of OVS-DPDK in Airship:

  • Network Cloud: OVS-DPDK Unicycle Dell Blueprint Proposal
  • Radio Edge Cloud
  • Edge Video Processing
  • (any other dependent BPs?)

Initial list of high-level work items

The list below captures the high-level work items required to enable support for OVS-DPDK in Airship. It is not complete; it is a starting point for design discussions in the feature project. Please feel free to add, modify, or extend work items.

Each work item below lists the task, a description, the affected Airship component, implementation notes, and an upstream reference where available.

Task: create openvswitch agent chart
Description: create a Helm chart for the openvswitch agent
Airship component: openstack-helm
Implementation: in place and being deployed when OVS is enabled

Task: create ovs-dpdk chart
Description: create a Helm chart for the openvswitch DPDK container
Airship component: openstack-helm
Implementation: extend the existing openvswitch chart with config parameters for DPDK

Task: deploy neutron openvswitch agent
Description: ensure the chart of the openvswitch agent is deployed
Airship component: treasuremap
Implementation: in place and being deployed when OVS is enabled
Task: DPDK host config: enable 1G hugepages
Description: modify the kernel cmdline to enable 1G hugepages (hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on)
Airship component: drydock
Implementation: already available:
  • Define the number of available hugepages in the node's HardwareProfile.
  • Define the kernel parameters for enabling hugepages in the node's BaremetalNode configuration (kernel_param section).
Upstream reference: !/story/2004790
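As a sketch, the manual equivalent of this kernel-cmdline change on an Ubuntu host using GRUB would look as follows (file path and sed edit are assumptions; in Airship the change is driven declaratively by drydock via kernel_param, not done by hand):

```shell
# Hypothetical manual equivalent of the drydock-driven kernel_param change.
# Append the hugepage and IOMMU parameters to the kernel command line:
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&hugepagesz=1G hugepages=8 iommu=pt intel_iommu=on /' /etc/default/grub
sudo update-grub

# After a reboot, verify the parameters took effect:
cat /proc/cmdline
grep HugePages_Total /proc/meminfo
```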
Task: DPDK host config: mount hugepages
Description: mount hugepages into the local file system on system boot (hardcoded mount point)
Airship component: Divingbell
Implementation: two alternatives available:
  • Utilize the auto-mount capabilities of Ubuntu. Issue: does not allow fine-grained control of mount options (e.g. specifying the size of the hugepages if multiple sizes are available).
  • Deploy a Divingbell daemonset on the compute nodes with a given chart configuration (values.yaml).
Upstream reference: !/story/2004790
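For reference, either alternative needs to perform the equivalent of the following mount (the mount point is a placeholder; the work item notes it is currently hardcoded):

```shell
# Mount a hugetlbfs instance for 1G pages (mount point is an assumption):
sudo mkdir -p /dev/hugepages-1G
sudo mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1G

# Verify the mount is in place:
mount | grep hugetlbfs
```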
Task: DPDK host config: make hugepage mount point a config option
Description: make the mount point a config option, e.g. for use by Helm charts
Airship component: openstack-helm
Implementation: config option of the Helm chart (see patchset); need to figure out how to pass overrides to the chart
Task: specify PCI IDs of NICs for use by DPDK
Description: specify in the site config which PCI IDs (NICs) should be dedicated to DPDK
Implementation: add parameters to the OVS Helm chart

Task: DPDK host config: install DPDK kernel modules and tools on the host OS
Description: either install the host OS DPDK package or build from source
Airship component: drydock or divingbell
Implementation: depends on the DPDK driver we want to use: igb_uio, uio_pci_generic, or vfio-pci. A custom-built kernel module is only needed for igb_uio.

Task: DPDK host config: load DPDK kernel modules in the host OS
Description: load the DPDK kernel modules uio and igb_uio during host boot-up
Airship component: drydock or divingbell
Implementation: extend the existing openvswitch-vswitchd init container
Task: DPDK host config: bind NICs to DPDK
Description: use dpdk-devbind to bind the specified NICs to DPDK
Airship component: drydock or divingbell
Implementation: look into re-using and/or adapting a tool used by kolla-ansible
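Sketched manually, the bind step amounts to the following (the driver choice and PCI address are placeholders; see the driver discussion in the install work item above):

```shell
# Load a DPDK-compatible driver and bind the NIC to it
# (vfio-pci chosen as an example; the PCI address is a placeholder):
sudo modprobe vfio-pci
sudo dpdk-devbind.py --bind=vfio-pci 0000:03:00.0

# Show which devices are bound to DPDK vs. kernel drivers:
sudo dpdk-devbind.py --status
```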

Task: DPDK host config: enable hugepage support for kubelet
Description: enable hugepage support for the k8s kubelet via a feature-gate option
Airship component: promenade
Implementation: hugepages is a beta feature since K8s 1.10 and enabled by default
Upstream reference: !/story/2004791
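Where the feature gate does need to be set explicitly (clusters older than 1.10, or to toggle it deliberately), the kubelet flag looks like this (a sketch; in Airship, promenade would render this into the kubelet configuration):

```shell
# Only needed on K8s < 1.10; HugePages is beta and on by default from 1.10:
kubelet --feature-gates=HugePages=true
```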
Task: ensure communication between OVS agent and OVS
Description: ensure common socket configuration
Airship component: openstack-helm
Implementation: already in place
Task: adapt OVS bridge configuration for the OVS-DPDK setup
Description: work items:
  • create bridges with datapath type netdev
  • add the physical interface to the physical bridge (br-phy)
Implementation: look into re-using and/or adapting a tool used by kolla-ansible
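The two bridge work items correspond to roughly the following ovs-vsctl calls (bridge/port names and the PCI address are assumptions):

```shell
# Create the physical bridge with the userspace (netdev) datapath:
sudo ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev

# Attach the physical NIC as a DPDK port (PCI address is a placeholder):
sudo ovs-vsctl add-port br-phy dpdk-p0 \
    -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:03:00.0
```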
Task: adapt Neutron (ovs-agent) configuration
Description: adapt neutron.conf and the ml2 plugin config
Airship component: openstack-helm
Implementation: extend the ml2 plugin configuration of neutron in openstack-helm
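A minimal sketch of the resulting OVS agent configuration change (option names follow the Neutron OVS agent config; the socket directory is an assumption):

```ini
; openvswitch_agent.ini fragment (sketch)
[ovs]
datapath_type = netdev
vhostuser_socket_dir = /var/run/openvswitch
```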
Task: adapt IP address assignment to the OVS-DPDK bridge
Description: make sure that the correct IPs get assigned to the OVS bridges running DPDK; every DPDK bridge needs a separate IP address for the tunnel endpoints
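Manually, the assignment amounts to something like the following (bridge name and address are placeholders; where in Airship this should be driven from is the open question of this work item):

```shell
# Assign the tunnel-endpoint IP to the netdev bridge's internal interface:
sudo ip addr add 172.16.10.11/24 dev br-phy
sudo ip link set br-phy up
```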


Task: create docker image with DPDK-enabled OVS
Description: update the image build scripts of openstack-helm to include DPDK in the OVS image
Implementation: openstack-helm-images repo
Alternatives (kept just for reference): Kolla images

Task: update nova and neutron images with a newer version of OVS
Description: the OVS in the current images is outdated
Airship component: openstack-helm-images
Task: update site configuration to deploy ovs-dpdk
Description: create a site configuration which actually deploys ovs-dpdk as the data plane
Airship component: treasuremap
Implementation: enable the openvswitch chart group
Task: add LAG support to the DPDK configuration
Description: configure LAG support on the DPDK NICs in OVS
Airship component: openstack-helm
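In OVS itself, a DPDK LAG could look like the following bond (names, PCI addresses, and the bond/LACP modes are assumptions to be settled in the chart design):

```shell
# Bond two DPDK ports on the physical bridge:
sudo ovs-vsctl add-bond br-phy dpdkbond dpdk-p0 dpdk-p1 \
    -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:03:00.0 \
    -- set Interface dpdk-p1 type=dpdk options:dpdk-devargs=0000:03:00.1 \
    -- set port dpdkbond bond_mode=balance-tcp lacp=active
```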


Legend: done / available
