The Akraino Regional Controller provides a standardized way to install Akraino Blueprints on disparate, network-connected hardware.  It is designed to be agnostic about what is being installed, and concerns itself only with providing a standard framework for running the workflows associated with Blueprints, in order to perform the lifecycle management functions of a particular Blueprint.  It does this via a REST-based API.


Project Technical Lead:  TBD

Project Committers detail:

Initial Committers for a project will be specified at project creation. Committers have the right to commit code to the source code management system for that project.

A Contributor may be promoted to a Committer by the project’s Committers after demonstrating a history of contributions to that project.

Candidates for the project’s Project Technical Leader will be derived from the Committers of the project. Candidates must self-nominate by marking "Y" in the Self Nominate column below by TBD. Voting will take place TBD.

Only Committers for a project are eligible to vote for a project’s Project Technical Lead.

Please see Akraino Technical Community Document section 3.1.3 for more detailed information.





Contact Info

Committer | Committer Bio | Committer Picture | Self Nominate for PTL (Y/N)
Andrew Wilkinson (Ericsson) | | |

Use Case Details:



Companies Participating / Committers

Requested Release / Timeline


Akraino Regional Controller

The current Akraino Portal provides a user interface and a collection of workflows and services to execute the actions requested by the user.  This proposal is to separate the workflows and services from the portal user interface so that actions can be performed through the portal, direct REST calls from an external orchestration tool, or a CLI that could be developed as part of a different feature project.

  1. Define an API for the various actions that a user might request including user management, blueprint definition, blueprint deployment, monitoring, etc.
  2. Modify existing services and workflows to be initiated through the new API.
  3. Develop new services and workflows as needed to address common tasks required by blueprints.
  4. Coordinate with the Portal feature project to use the new API.
  5. Standardize the software definition of a "Blueprint" so that multiple software entities can interact with blueprints in a defined way.
  6. Standardize the software definition of a "hardware profile" to enforce some rigor in what types of hardware individual blueprints may use.
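A minimal sketch of what a client call against such an API might look like, assuming a JSON-over-HTTP design. The base URL, endpoint name, and payload fields below are illustrative assumptions, not the actual Regional Controller API:

```python
import json
import urllib.request

# Hypothetical base URL -- the proposal above does not fix the exact paths,
# so everything here is illustrative only.
BASE_URL = "https://regional-controller.example.com/api/v1"

def build_request(action: str, payload: dict) -> urllib.request.Request:
    """Compose (but do not send) a REST request for one of the proposed
    actions, e.g. blueprint definition or blueprint deployment."""
    return urllib.request.Request(
        url=f"{BASE_URL}/{action}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: registering a blueprint definition (field names are assumptions).
req = build_request("blueprints", {
    "name": "radio-edge-cloud",
    "version": "1.0",
})
print(req.get_method(), req.full_url)
```

The same helper could cover the other proposed actions (user management, deployment, monitoring) by varying the `action` path and payload; the point is that portal, CLI, and external orchestrators would all compose the same requests.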



Impacted Blueprint Family - Network Cloud and Radio Edge Cloud

See attachment for additional details




  1. A question:

    Does the remote installer or software manager expect some entity, such as Ironic, to be present in each edge location on a bootstrap/initial machine?

    Is there any plugin added in this project that talks to "Cluster-API"?  In the ICN BP, we are thinking of using "Cluster-API" for provisioning bare-metal servers or VM servers that would eventually host K8s, hence the question. Please see some of the design discussions in ICN here: ICN - Infrastructure Orchestration (Architecture & Design document - WIP).


    1. There needs to be something that does the bare metal installation. The architecture of the new regional controller (i.e. not the original one from the Network Cloud seed code, but the one that was written in 2019 to support Radio Edge Cloud) is a collection of containers that does not include the remote installer but which can, if commanded to by a workflow, instantiate a remote installer container. Currently the only remote installer container available is the one provided by the Telco Appliance blueprint family, of which Radio Edge Cloud is the first family member.

      The architecture supports the concept of other blueprints either reusing the TA remote installer or providing their own remote installer that works totally differently, as long as it is provided as a container image and as long as the blueprint supplies a workflow that instantiates the correct remote installer container that it needs. The regional controller provides the functionality of parsing the blueprint's YAML specification in order to obtain the workflow and any required images. The RC sets up an environment for the workflow to execute in. That environment includes the ability to instantiate a remote installer container and to use Apache Airflow for structuring the workflow functionality (though the workflow is not required to explicitly invoke any Apache Airflow functionality if the blueprint doesn't see a need for it).
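As a purely illustrative sketch of that parse step: the field names below are assumptions, not the real Akraino blueprint schema, and the specification is shown as an already-loaded dict rather than raw YAML to stay dependency-free:

```python
# Sketch of the parse step described above: the regional controller reads a
# blueprint specification and pulls out the workflow entry point and any
# container images the workflow needs. Field names are assumptions.
blueprint_spec = {
    "name": "radio-edge-cloud",
    "workflow": {
        "create": {"url": "http://workflows.example.com/create.py"},
    },
    "images": [
        "remote-installer:latest",  # e.g. the TA remote installer container
    ],
}

def extract_workflow(spec: dict, event: str) -> str:
    """Return the workflow URL registered for a lifecycle event."""
    return spec["workflow"][event]["url"]

def extract_images(spec: dict) -> list:
    """Return the container images the workflow environment must provide."""
    return list(spec.get("images", []))

print(extract_workflow(blueprint_spec, "create"))
print(extract_images(blueprint_spec))
```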

      In the case of Radio Edge Cloud, the TA remote installer sets up an NFS server and uses RedFish API to cause a physical server to NFS mount an ISO image of a DVD as virtual media and boot off of it. Other blueprints could use other mechanisms in their own remote installer containers.
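The virtual-media boot sequence just described can be sketched as two Redfish requests, composed but not sent. The paths follow the standard DMTF Redfish schema, but the manager/system IDs and virtual-media device name ("1", "CD") vary by vendor and are assumptions here:

```python
import json
import urllib.request

# Two-step Redfish sketch: insert an ISO as virtual media, then set a
# one-time boot override to CD. Illustrative only; real BMCs differ in
# resource IDs and device names.
BMC = "https://bmc.example.com"

def insert_media_request(iso_url: str) -> urllib.request.Request:
    """POST the VirtualMedia.InsertMedia action with the ISO's URL."""
    return urllib.request.Request(
        url=f"{BMC}/redfish/v1/Managers/1/VirtualMedia/CD"
            f"/Actions/VirtualMedia.InsertMedia",
        data=json.dumps({"Image": iso_url}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def boot_override_request() -> urllib.request.Request:
    """PATCH the system so the next boot comes from the virtual CD."""
    body = {"Boot": {"BootSourceOverrideTarget": "Cd",
                     "BootSourceOverrideEnabled": "Once"}}
    return urllib.request.Request(
        url=f"{BMC}/redfish/v1/Systems/1",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )

req = insert_media_request("nfs://regional-controller/images/rec.iso")
print(req.get_method(), req.full_url)
```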

  2. Hi,

    I have installed the regional controller on a VM; however, I am unable to log in using akadim/akraino. Has the password been changed?

    From where do we change the password to log in to the UI?



    1. It is important to note that there are two different regional controllers.  The original R.C. was provided with the Network Cloud blueprint and works only with that blueprint family.  That R.C. comes with a portal/web GUI and uses the login/password you specified.

      The R.C. listed on this page (under Approved Incubation Projects) is entirely new work, and consists of only an API server.  It is designed to be used with many blueprints, not just Network Cloud. It also uses a different login/password combination (listed on the Frequently Asked Questions page).