
The Continuous Deployment Lab provided by Enea for IEC has two ThunderX1 servers on which we do nightly virtual deploys of IEC Type 2, run validation, and install the SEBA use case.


Continuous Integration → Continuous Deployment Flow


Enea's CI/CD validation lab is run using Jenkins and is connected to the Akraino Linux Foundation Jenkins master.

The Jenkins jobs are configured via Jenkins Job Builder (JJB) in the Akraino ci-management project: https://gerrit.akraino.org/r/gitweb?p=ci-management.git;a=tree;f=jjb/iec

These jobs are loaded into the LF Jenkins master and triggered periodically: https://jenkins.akraino.org/view/iec/

Each daily deploy has a master job that calls four downstream jobs (e.g.: https://jenkins.akraino.org/view/iec/job/iec-type2-fuel-virtual-centos7-daily-master/)
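As a rough illustration, a periodically triggered master job in JJB could be defined along the following lines. The schedule and the downstream project name shown here are hypothetical; the real definitions are in the jjb/iec directory linked above.

    - job:
        name: 'iec-type2-fuel-virtual-centos7-daily-master'
        triggers:
          # Hypothetical schedule: nightly, at a hashed minute past 01:00
          - timed: 'H 1 * * *'
        builders:
          # Call a downstream job and wait for it to finish;
          # the project name below is illustrative only
          - trigger-builds:
              - project: 'iec-type2-fuel-virtual-centos7-daily-deploy'
                block: true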

Logs from the CD installation of Integrated Edge Cloud (IEC) are available at: https://nexus.akraino.org/content/sites/logs/production/vex-yul-akraino-jenkins-prod-1/


Akraino Continuous Deployment Hardware

Sales Item | Description                     | QTY
cn8890     | Gigabyte ThunderX R120-T32 (1U) | 2

Chassis Level Specification

  • Total Physical Compute Cores: 48
  • Total Physical Compute Memory: 256 GB
  • Total SSD-based OS Storage: 480 GB
  • Networking per Server: 2x 1G and 2x 10G


IEC Cabling


Virtual Deploy Using the Fuel@OPNFV Installer

Based on the configuration passed to it, the installer handles creating the VMs and virtual networks, installing the OS, and installing IEC.

The setup is created based on a Pod Descriptor File (PDF) and an Installer Descriptor File (IDF). The files for the two servers in the Enea lab are at https://gerrit.akraino.org/r/gitweb?p=iec.git;a=tree;f=ci/labs/arm

The PDF contains information about the VMs (RAM, CPUs, disks). The IDF contains the virtual subnets that need to be created for the cluster and the Jumphost interfaces that will be connected to the cluster. The first two interfaces on the Jumphost are used for the Admin and Public subnets.
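For orientation, a PDF node entry might look roughly like the sketch below. The field names are simplified and the values are illustrative; consult the actual files in the ci/labs/arm directory linked above for the authoritative layout.

    nodes:
      - name: node1
        node:
          type: virtual        # VMs are created by the installer
          cpus: 4              # illustrative value
          memory: 8G           # illustrative value
        disks:
          - name: disk1
            disk_capacity: 100G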

An installation creates 3 VMs and installs the OS given as a parameter. The supported OSes are Ubuntu 16.04, Ubuntu 18.04 and CentOS 7. Each VM has three subnets (see the IDF sketch after this list):

  • Admin: used during installation
  • Mgmt: used by k8s
  • Public: used for external access
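A minimal IDF net_config section describing these three subnets could look roughly as follows. The key names loosely follow the OPNFV IDF conventions and the CIDR values are made up for illustration; the real definitions are in the lab files linked above.

    idf:
      net_config:
        admin:
          network: 192.168.11.0   # used during installation (illustrative)
          mask: 24
        mgmt:
          network: 172.16.10.0    # used by k8s (illustrative)
          mask: 24
        public:
          network: 10.0.16.0      # used for external access (illustrative)
          mask: 24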

