Introduction
This document covers both Integrated Edge Cloud (IEC) Type 1 and Type 2.
This document provides guidelines on how to manually install the Akraino IEC Release 2, including the required software and hardware configurations. The steps described below are automated in CI using Fuel@OPNFV or Compass; for details on this procedure, see the IEC Type 1 & 2 Test Document for R2.
The audience of this document is assumed to have good knowledge of networking and Unix/Linux administration.
Currently, the chosen operating system (OS) is Ubuntu 16.04 and/or 18.04.
The infrastructure orchestration of IEC is based on Kubernetes, a production-grade container orchestration system with a rich ecosystem.
There are several Container Network Interface (CNI) options for IEC, e.g. Calico, Contiv-VPP and Flannel. The default CNI chosen for Kubernetes is Calico, a high-performance, scalable, policy-enabled and widely used container networking solution that is straightforward to install and supports arm64.
Currently the MACCHIATObin board is used as a typical Type 1 hardware platform, and a guide on how to set up this hardware is provided. There is no explicit difference between Type 1 and Type 2 in the installation method of IEC Release 2.
The installation guide is mostly inherited from that of R1.
How to use this document
The following sections describe the prerequisites for planning an IEC
deployment. Once these are met, installation steps provided should be followed
in order to obtain an IEC compliant Kubernetes cluster.
Deployment Architecture
The reference cluster platform consists of 3 nodes, either bare metal machines or virtual machines:
- the first node will have the role of Kubernetes Master;
- all other nodes will have the role of Kubernetes Slave;
- Calico/Flannel/Contiv will be used as container network interface (CNI);
All machines (including the jumpserver) should be part of at least one common network segment.
Pre-Installation Requirements
Hardware Requirements
Note: Hardware requirements depend on the deployment type. Depending on the intended use case(s), more memory/storage might be required for running/storing the containers.
Minimum Hardware Requirements
HW Aspect | Requirement
---|---
Jumpserver | 1 physical or virtualized machine that has direct network connectivity to the cluster nodes
CPU | Minimum 1 socket (each cluster node)
RAM | Minimum 2GB/server (depending on use case workload)
Disk | Minimum 20GB (each cluster node)
Networks | Minimum 1
Recommended Hardware Requirements
HW Aspect | Requirement
---|---
Jumpserver | 1 physical or virtualized machine that has direct network connectivity to the cluster nodes
CPU | 1 socket (each cluster node)
RAM | 16GB/server (depending on use case workload)
Disk | 100GB (each cluster node)
Networks | 2/3 (management and public, optionally a separate PXE network)
Software Prerequisites
- Ubuntu 16.04/18.04 is installed on each node;
- an SSH server is running on each node, allowing password-based logins;
- a user (named iec by default, but customizable via the config file) is present on each node;
- the iec user has passwordless sudo rights;
- the iec user is allowed password-based SSH login;
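The user prerequisites above can be provisioned with a short root script on each node. This is only a sketch, not part of the IEC scripts themselves; the iec name matches the default HOST_USER, and the sudoers.d path follows standard Ubuntu conventions:

```shell
# Sketch: provision the "iec" user prerequisites (run as root on each node).
IEC_USER=iec
SUDOERS_LINE="${IEC_USER} ALL=(ALL) NOPASSWD:ALL"   # passwordless sudo rule

if [ "$(id -u)" -eq 0 ]; then
    # Create the user if it does not exist yet, then grant passwordless sudo.
    id "${IEC_USER}" >/dev/null 2>&1 || useradd -m -s /bin/bash "${IEC_USER}"
    printf '%s\n' "${SUDOERS_LINE}" > "/etc/sudoers.d/${IEC_USER}"
    chmod 0440 "/etc/sudoers.d/${IEC_USER}"
    echo "provisioned ${IEC_USER}"
else
    echo "not root - would write sudoers rule: ${SUDOERS_LINE}"
fi
```

Password-based SSH login additionally requires PasswordAuthentication yes in the node's sshd configuration.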
Database Prerequisites
Schema scripts
N/A
Other Installation Requirements
Jump Host Requirements
N/A
Network Requirements
- at least one common network segment across all nodes;
- internet connectivity;
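The common-segment requirement can be sanity-checked from the jump host with a quick reachability sweep; the node IP addresses below are only illustrative examples:

```shell
# Sketch: check reachability of the cluster nodes from the jump host.
# The IP addresses are illustrative - substitute your own node addresses.
NODES="10.169.36.152 10.169.40.106"
reach_report=""
for node in ${NODES}; do
    if ping -c 1 -W 2 "${node}" >/dev/null 2>&1; then
        reach_report="${reach_report}${node} reachable\n"
    else
        reach_report="${reach_report}${node} NOT reachable\n"
    fi
done
printf "%b" "${reach_report}"
```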
Bare Metal Node Requirements
N/A
Execution Requirements (Bare Metal Only)
N/A
Installation High-Level Overview
Bare Metal Deployment Guide
Install Bare Metal Jump Host
The jump host (jumpserver) operating system should be pre-provisioned. No special software requirements apply apart from package prerequisites:
- git
- sshpass
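On an Ubuntu jump host, the package prerequisites can be verified with a minimal sketch like the following:

```shell
# Sketch: verify the jump host package prerequisites on Ubuntu.
missing=""
for tool in git sshpass; do
    command -v "${tool}" >/dev/null 2>&1 || missing="${missing} ${tool}"
done
if [ -n "${missing}" ]; then
    echo "Missing:${missing} - install with: sudo apt-get install -y${missing}"
else
    echo "All jump host package prerequisites are present."
fi
```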
Creating a Node Inventory File
N/A
Creating the Settings Files
Clone the IEC git repo and edit the configuration file by setting:
```shell
jenkins@jumpserver:~$ git clone https://gerrit.akraino.org/r/iec.git
jenkins@jumpserver:~$ cd iec/src/foundation/scripts
jenkins@jumpserver:~/iec/src/foundation/scripts$ vim config
#!/bin/bash
# shellcheck disable=SC2034
# Host user which can log into the master and each worker node
HOST_USER=${HOST_USER:-iec}
REPO_URL="https://gerrit.akraino.org/r/iec"
# Log file
LOG_FILE="kubeadm.log"
# Master node IP address
K8S_MASTER_IP="10.169.36.152"
# HOST_USER's password on the master node
K8S_MASTERPW="123456"
######################################################
#
# K8S_WORKER_GROUP is an array whose entries consist
# of 2 parts: the k8s worker IP address and the
# user's password.
#
######################################################
K8S_WORKER_GROUP=(
"10.169.40.106,123456"
)
# K8s parameters
CLUSTER_IP=172.16.1.136 # Align with the value in our K8s setup script
POD_NETWORK_CIDR=192.168.0.0/16
# IEC supports three network solutions for Kubernetes: calico, flannel, contivpp
CNI_TYPE=calico
# kubernetes-cni version: 0.7.5 / 0.6.0
CNI_VERSION=0.6.0
# Kubernetes version: 1.15.2 / 1.13.0
KUBE_VERSION=1.13.0
# DEV_NAME is an associative array listing the network interface device
# names used by contivpp. Use the IP addresses of K8S_WORKER_GROUP as
# keys, for example:
# DEV_NAME=(
# [10.169.40.106]="enp137s0f0"
# )
declare -A DEV_NAME
DEV_NAME=()
```
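Each K8S_WORKER_GROUP entry bundles a worker IP address and the corresponding password in one string. As an assumption about how the installation scripts consume these entries, the split can be sketched with plain parameter expansion:

```shell
# Sketch: splitting K8S_WORKER_GROUP entries into IP and password.
# (Assumption: this mirrors how the IEC scripts parse "<ip>,<password>" pairs.)
K8S_WORKER_GROUP=(
    "10.169.40.106,123456"
)
for worker in "${K8S_WORKER_GROUP[@]}"; do
    worker_ip="${worker%%,*}"     # text before the first comma
    worker_pw="${worker#*,}"      # text after the first comma
    echo "worker ip=${worker_ip}"
done
```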
Running
Simply start the installation script with default parameters in the same directory:
```shell
jenkins@jumpserver:~/iec/src/foundation/scripts$ ./startup.sh -C flannel -k 1.15.2 -c 0.7.5   # Deploy K8s 1.15.2 with the Flannel CNI
jenkins@jumpserver:~/iec/src/foundation/scripts$ ./startup.sh -C contivpp -k 1.15.2 -c 0.7.5  # Deploy K8s 1.15.2 with the Contiv-VPP CNI
```
The startup.sh script accepts the following options:

```
-k|--kube     Kubernetes version to deploy
-c|--cni-ver  kubernetes-cni package version
-C|--cni      CNI type: calico/flannel/contivpp
```
Note: To deploy Kubernetes with Contiv-VPP, you must specify one NIC to be used by Contiv-VPP, then modify the configuration file accordingly (see the DEV_NAME array).
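For example, the DEV_NAME associative array in the config file could be filled in as follows; the interface name enp137s0f0 is the illustrative one from the config file's own comments:

```shell
# Example DEV_NAME entry for a Contiv-VPP deployment: map each worker IP
# (the K8S_WORKER_GROUP key) to the NIC that Contiv-VPP should use.
declare -A DEV_NAME
DEV_NAME=(
    [10.169.40.106]="enp137s0f0"
)
echo "NIC for 10.169.40.106: ${DEV_NAME[10.169.40.106]}"
```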
Virtual Deployment Guide
Standard Deployment Overview
From the installer script's perspective, virtual deployments are identical to bare metal ones.
Pre-provision the virtual machines on the jumpserver node (acting as the hypervisor) using Ubuntu 16.04/18.04, then continue the installation as in the bare metal deployment process described above.
Snapshot Deployment Overview
N/A
Special Requirements for Virtual Deployments
N/A
Install Jump Host
Similar to bare metal deployments. Additionally, a hypervisor solution should
be available for creating the cluster node virtual machines (e.g. KVM).
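Assuming KVM/libvirt as the hypervisor, the cluster node VMs could be created along these lines. The VM name, sizes and ISO path are illustrative assumptions; the sizes follow the minimum hardware requirements above (1 socket, 2GB RAM, 20GB disk):

```shell
# Sketch: create one cluster-node VM with virt-install (KVM/libvirt assumed).
# Name, sizes and ISO path are illustrative - adjust to your environment.
VM_NAME="iec-master"
VM_CMD="virt-install --name ${VM_NAME} --vcpus 1 --memory 2048 \
 --disk size=20 --os-variant ubuntu18.04 \
 --network network=default --cdrom /path/to/ubuntu-18.04-server.iso"
echo "${VM_CMD}"
# To actually create the VM, run the command printed above on the jump host.
```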
Verifying the Setup - VMs
N/A
Upstream Deployment Guide
N/A
Upstream Deployment Key Features
N/A
Special Requirements for Upstream Deployments
N/A
Scenarios and Deploy Settings for Upstream Deployments
N/A
Including Upstream Patches with Deployment
N/A
Running
Similar to virtual deployments, edit the configuration file, then launch the
installation script:
```shell
jenkins@jumpserver:~$ git clone https://gerrit.akraino.org/r/iec.git
jenkins@jumpserver:~$ cd iec/src/foundation/scripts
jenkins@jumpserver:~/iec/src/foundation/scripts$ vim config
jenkins@jumpserver:~/iec/src/foundation/scripts$ ./startup.sh
```
Interacting with Containerized Overcloud
N/A
Verifying the Setup
The IEC installation automatically performs a simple test of the Kubernetes cluster by spawning an nginx container and fetching a sample file from it via HTTP.
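Beyond the automatic nginx check, the cluster state can be inspected manually on the master node; a minimal sketch, assuming kubectl is configured for the new cluster:

```shell
# Sketch: manual sanity checks on the master node (kubectl assumed configured).
if command -v kubectl >/dev/null 2>&1; then
    kubectl get nodes -o wide          # all nodes should report Ready
    kubectl get pods --all-namespaces  # CNI and system pods should be Running
    cluster_check="ran"
else
    echo "kubectl not found - run these checks on the Kubernetes master node"
    cluster_check="skipped"
fi
```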
OpenStack Verification
N/A
Developer Guide and Troubleshooting
Utilization of Images
N/A
Post-deployment Configuration
N/A
OpenDaylight Integration
N/A
Debugging Failures
N/A
Reporting a Bug
All issues should be reported via the IEC JIRA page. When submitting reports, please provide as much relevant information as possible, e.g.:
- output logs;
- the IEC git repository commit used;
- jumpserver info (operating system, versions of the involved software components, etc.);
- command history (when relevant);
Uninstall Guide
N/A
Troubleshooting
Error Message Guide
N/A
Maintenance
N/A
Frequently Asked Questions
N/A
License
Any software developed by the "Akraino IEC" Project is licensed under the
Apache License, Version 2.0 (the "License");
you may not use the content of this software bundle except in compliance with the License.
You may obtain a copy of the License at <https://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
References
For more information on the Akraino Release 1, please see:
...