ta/build-tools (tree view): build-tools contains the code that forms the backbone of the current SCM system. The description of the whole pipeline can be found here, together with the configuration of the image builder tool (DIB: https://github.com/openstack/diskimage-builder )
- the YAML file lists the exact upstream RPM packages that need to be installed during the disk image creation procedure
- the XML file collects the AKREC source repositories that need to be installed, and specifies the branch or exact revision of each repo from which the delivery package needs to be built
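As an illustration, diskimage-builder consumes RPM package lists in its package-installs YAML convention, roughly as below. The package names here are made-up examples for the sketch, not the actual AKREC package set:

```yaml
# Illustrative package-installs.yaml fragment for DIB:
# each key names an RPM to install into the image; an optional
# mapping can pin the installation phase.
lvm2:
chrony:
grub2:
  phase: pre-install.d
```

Packages without a mapping are installed in the default phase; the `phase` key lets a package (such as the bootloader here) be installed earlier in the element ordering.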
ta/virtual-installer: tooling to deploy a virtualized AKREC installation on top of VMs. For testing purposes only!
CaaS (Container as a Service) related repos
ta/caas-cpupooler (tree view): packages, configures, and integrates the components of the upstream CPU-Pooler project (https://github.com/nokia/CPU-Pooler ). This component is responsible for providing advanced CPU management policies to both containerized CaaS and application components.
ta/caas-danm (tree view): packages, configures, and integrates the components of all upstream projects related to the network management service. Components come from the DANM (https://github.com/nokia/danm ), Flannel (https://github.com/coreos/flannel ), and SR-IOV Device Plugin (https://github.com/intel/sriov-network-device-plugin ) repositories.
ta/caas-etcd (tree view): packages, configures, and integrates the components of the upstream Etcd database project (https://github.com/etcd-io/etcd ). This is the data backend of the Kubernetes management plane.
ta/caas-helm (tree view): packages, configures, and integrates the components of the upstream Helm project (https://github.com/helm/helm ), to provide package management capabilities for containerized applications. Also contains the source code of the AKREC Helm chart repository component.
ta/caas-install (tree view): contains the generic deployment playbooks installing the whole CaaS sub-system during the post-configuration phase. Contains AKREC utility scripts installed to the target operating system under “utils”, and the Helm Chart of the CaaS layer under “infra-charts”. Note: not all CaaS components are installed via Helm.
ta/caas-kubedns (tree view): packages, configures, and integrates the components of the upstream Kubernetes DNS project (https://github.com/kubernetes/dns ). Kube-DNS backs the CaaS built-in service discovery feature.
ta/caas-kubernetes (tree view): packages, configures, and integrates the major components of the Kubernetes management plane (https://github.com/kubernetes/kubernetes ). Includes the code related to deploying the API server, scheduler, controller-manager, kube-proxy, and kubelet components.
ta/caas-logging (tree view): packages, configures, and integrates the major components making up the CaaS log management pipeline. Fluentd (https://github.com/fluent/fluentd ) is used to gather and forward the standard output channels of the infrastructure components, and Elasticsearch (https://github.com/elastic/elasticsearch ) is the central log store that collects them.
ta/caas-metrics (tree view): packages, configures, and integrates the major components making up the CaaS performance management pipeline. Metrics Server (https://github.com/kubernetes-incubator/metrics-server ) integrates the containers' core metrics, while Prometheus (https://github.com/prometheus ) and the custom metrics adapter (https://github.com/kubernetes-incubator/custom-metrics-apiserver ) integrate custom metrics into the Horizontal Pod Autoscaler API.
ta/caas-registry (tree view): packages, configures, and integrates the components responsible for managing container images. Docker Registry (https://github.com/docker/distribution ) is used as the front-end, and Swift object store (https://github.com/openstack/swift ) is used as the backend component.
L3 Deployer related repos
ta/hw-detector (tree view): responsible for recognizing the specific hardware type the deployment is executed on. Can be used both as a library and through a CLI. Uses IPMI. Contains the hardware-specific configuration templates.
ta/infra-ansible (tree view): This repository contains all the generic deployment playbooks, which configure services running directly on the host. Includes playbooks for disk partitioning, Ceph configuration, hardening, security, SSH, operating system level user management etc.
ta/ironic-virtmedia-driver (tree view): this project contains Ironic drivers for bare-metal provisioning using virtual media, targeting Quanta hardware and virtual environments. The main motivation for writing our own drivers is to avoid the L2 network dependency and to support L3-based deployment.
ta/openstack-ansible-XYZ: these projects are re-used from the upstream OpenStack-Ansible project (https://github.com/openstack/openstack-ansible ) for the purpose of deploying Galera, Keystone, RabbitMQ, and Ironic. These services are used by various middleware and deployer components.
ta/os-net-config (tree view): contains a fork of the OpenStack os-net-config tool (https://github.com/openstack/os-net-config ). Used to configure the host network interfaces based on the deployment configuration.
ta/python-ilorest-library (tree view): forked from https://github.com/HewlettPackard/python-ilorest-library. Used to remotely manage the iLO and iLO Chassis Manager based HPE servers.
ta/start-menu (tree view): the installation menu used to configure the external IP of the installation controller and to start the installation after the user config is copied to the installation controller.
ta/ironic (tree view): Patched version of OpenStack Ironic that supports setting the boot media to floppy. Used for L3 provisioning on certain hardware. (https://github.com/openstack/ironic, https://github.com/rdo-packages/ironic-distgit)
ta/ironicclient (tree view): Patched version of OpenStack ironicclient that adds support for floppy boot media. (https://github.com/openstack/python-ironicclient, https://github.com/rdo-packages/ironicclient-distgit)
- validators are responsible for ensuring that only semantically correct configuration changes are admitted into the configuration manager server’s backend (that is, Redis)
- userconfighandlers can mutate the content of a user’s configuration change based on domain-specific policies
- inventoryhandlers are responsible for creating Ansible inventories from the configuration data. Ansible inventories are consumed by the Deployer playbooks
- activators are invoked when something changes in the data of their respective domain. These plugins are responsible for executing run-time changes in the system based on the submitted configuration data changes
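The validator role above can be sketched as follows. The class and method names here are assumptions made for illustration; the actual plugin interface is defined in the configuration manager repo:

```python
import ipaddress


class ValidationError(Exception):
    """Raised when a proposed configuration change is semantically invalid."""


class NetworkingValidator:
    """Hypothetical validator plugin.

    Illustrates the idea only: a validator inspects the proposed
    configuration and rejects it before it reaches the backend (Redis).
    Here we simply check that every network defines a well-formed CIDR.
    """

    def validate_configuration(self, proposed_config):
        networking = proposed_config.get('cloud.networking', {})
        for name, net in networking.items():
            cidr = net.get('cidr')
            if cidr is None:
                continue
            try:
                ipaddress.ip_network(cidr)
            except ValueError:
                raise ValidationError(
                    'Invalid CIDR %r in network %r' % (cidr, name))


# Usage: a semantically correct change passes silently,
# a malformed one raises ValidationError and is not admitted.
validator = NetworkingValidator()
validator.validate_configuration(
    {'cloud.networking': {'infra_internal': {'cidr': '192.168.1.0/24'}}})
```

The same plugin shape applies to the other three roles: each receives the relevant slice of configuration data and either transforms it (userconfighandlers), renders it (inventoryhandlers), or reacts to it (activators).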
ta/distributed-state-server (tree view): a service for persistent state management. It is used to store and share state information between multiple nodes, using either an etcd or a file-based backend. CM uses it to store, for example, the configuration activation state.
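Conceptually, the file-based backend boils down to a persistent key-value store. A minimal sketch under that assumption (class and method names are hypothetical; the real service also supports etcd and shares state across nodes):

```python
import json
import os
import tempfile


class FileStateBackend:
    """Hypothetical sketch of a file-backed persistent state store."""

    def __init__(self, path):
        self._path = path

    def _load(self):
        # An absent file simply means no state has been stored yet.
        if not os.path.exists(self._path):
            return {}
        with open(self._path) as f:
            return json.load(f)

    def set_state(self, key, value):
        state = self._load()
        state[key] = value
        # Write to a temp file and rename, so readers never see a
        # half-written state file.
        tmp = self._path + '.tmp'
        with open(tmp, 'w') as f:
            json.dump(state, f)
        os.replace(tmp, self._path)

    def get_state(self, key, default=None):
        return self._load().get(key, default)


# Usage: persist the kind of flag CM keeps, e.g. whether the
# configuration has been activated.
backend = FileStateBackend(os.path.join(tempfile.gettempdir(), 'dss-state.json'))
backend.set_state('config-activated', True)
```

Because every value survives process restarts, a second node (or a restarted CM) reading the same backend sees the state the first writer left behind, which is the property the distributed-state-server provides.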