
Introduction

This document covers the installation requirements for the Integrated Edge Cloud (IEC) Type 2 R5 blueprint. The blueprint can be installed in two different ways:

a) Using the terraform command line utility: The purpose of this terraform template is to provision a multi-node Kubernetes cluster on AWS using microk8s. MicroK8s offers a lightweight Kubernetes environment for edge use cases. The cluster is provisioned by applying the terraform template. EdgeX Foundry can then be installed manually using the deployment specification repository here.

b) Using a Platform approach: The blueprint (terraform template) can be uploaded to gopaddle. The blueprint can then be used to launch multiple Kubernetes environments through an interactive GUI-based approach. Once the cluster is ready, EdgeX Foundry can be installed in the Kubernetes environment by choosing the template from the gopaddle catalog. The northbound APIs to interact with gopaddle can be found here.

Blueprint System Requirements

Installing the blueprint brings up a 3-node cluster with 1 master and 2 worker nodes. Node sizes should be a minimum of t4g.medium. A pre-existing VPC and subnet are required prior to the installation process. The host machines for the cluster require Ubuntu 18.04.

Item | Capacity
Number of nodes | 3
Node Size | t4g.medium - 2 vCPUs - 4 GiB Memory
Disks in Storidge HA Clustering mode (Status: Not Yet Supported) | 3 disks per node - 100 GB each
VPC | Pre-existing VPC
Subnet | Public (for now). Will switch to a private subnet with Gateway configuration in future releases.
Amazon Machine Image (AMI) | Ubuntu Server 18.04 LTS

How to use this document

The following sections describe the prerequisites for planning an IEC deployment. Once these prerequisites are met, follow the installation steps provided in order to obtain an IEC-compliant Kubernetes cluster.

Pre-Installation Requirements

1. Install terraform - Download terraform to the client machine from https://www.terraform.io/downloads.html

(a) Download the zip file based on the server type.
(b) Unzip the file to get the terraform binary.
(c) The currently supported Ubuntu version is 18.04.
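The steps above can be sketched as shell commands. This is a sketch for Ubuntu 18.04; the terraform version and architecture below are illustrative, so pick the build that matches your client machine:

```shell
# Illustrative terraform install; the version is an example, not a requirement.
TERRAFORM_VERSION="0.12.31"
wget "https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
unzip "terraform_${TERRAFORM_VERSION}_linux_amd64.zip"
sudo mv terraform /usr/local/bin/
terraform version   # confirm the binary is on the PATH
```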

2. IAM Access Keys - Permission required for running the template: AmazonEC2FullAccess

...

NOTE: Replace the fields in the variable.tf file with your corresponding values.

In order for Terraform to be able to create resources in your AWS account, you will need to configure the AWS credentials. One of the easiest ways is to set the following environment variables:

export AWS_ACCESS_KEY_ID=(your access key id)
export AWS_SECRET_ACCESS_KEY=(your secret access key)

The credentials can also be set in the variable.tf file.

variable "access_key" {
  description = "access_key"
  default     = "<insertKey>"
}

variable "secret_key" {
  description = "secret_key"
  default     = "<insertKey>"
}

Terraform Template

...

akraino.org/r/c/iec/+/4273.

2. Follow the instructions here to install terraform on the client machine from which the blueprint install is to be executed.


Info - Supported Client OS: Ubuntu 18.04


3. AWS IAM User Access Keys - Create an AWS IAM User by following the steps here. Enable Programmatic Access and choose Attach existing policies directly. Select AmazonEC2FullAccess to grant full access to EC2 services.


Info - Security Consideration: In future releases, access policies will be scoped to specific operations instead of complete EC2 access.
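As an alternative to the console steps above, the IAM user and access keys can be created with the AWS CLI. This is a sketch; the user name iec-blueprint is an example, not a required value:

```shell
# Sketch: create an IAM user, grant EC2 full access, and generate access keys.
aws iam create-user --user-name iec-blueprint
aws iam attach-user-policy \
  --user-name iec-blueprint \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# Prints the AccessKeyId and SecretAccessKey used in the steps below.
aws iam create-access-key --user-name iec-blueprint
```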


4. Generate an AWS Private Key file as described here. The private key file is required to access the EC2 instances during the installation process. Place the private key file in the root directory of the template folder.
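A typical sketch of this step with the AWS CLI, assuming the key pair is named iec-key and the template folder is the current directory:

```shell
# Sketch: create a key pair, save the private key into the template root,
# and restrict its permissions so ssh will accept it.
aws ec2 create-key-pair --key-name iec-key \
  --query 'KeyMaterial' --output text > iec-key.pem
chmod 400 iec-key.pem
```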

5. Initialize the environment variables to configure the AWS-specific inputs. Choose a region, an AMI, a pre-existing VPC and subnet. Here is an example of how these environment variables can be initialized. TF_LOG_PATH specifies the file path where the terraform execution logs will be redirected. TF_LOG can be set to TRACE, DEBUG, INFO, WARN, or ERROR.


Code Block
languagebash
themeEmacs
export TF_VAR_aws_region="us-east-2"
export TF_VAR_aws_ami="ami-026141f3d5c6d2d0c"
export TF_VAR_aws_instance="t4g.medium"
export TF_VAR_vpc_id="vpc-561e9f3e"
export TF_VAR_aws_subnet_id="subnet-d64dcabe"
export TF_VAR_access_key="<aws-access-key>"
export TF_VAR_secret_key="<aws-secret-key>"
export TF_LOG="TRACE"
export TF_LOG_PATH="tf.log"

6. The main.tf template configures the AWS provider with the region chosen above:

provider "aws" {
  region = var.aws_region
}

The user_data script installs microk8s inside the EC2 instance.

#!/bin/bash
sudo su
apt update -y >> microk8s_install.log
apt install snapd -y >> microk8s_install.log
snap install core >> microk8s_install.log
export PATH=$PATH:/snap/bin
snap install microk8s --classic >> microk8s_install.log
microk8s status --wait-ready
microk8s enable dns >> microk8s_install.log
microk8s add-node > microk8s.join_token
microk8s config > configFile

Since terraform does not wait until the user_data script has finished executing, we exec into the instance using the 'remote-exec' provisioner and add the following script. This script makes terraform wait until the microk8s.join_token file is created.

provisioner "remote-exec" {
  inline = ["until [ -f /microk8s.join_token ]; do sleep 5; done; cat /microk8s.join_token"]
}

For testing purposes, we create a security group with 'ALLOW ALL' ingress and egress rules.
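A minimal sketch of such a security group in terraform (the resource name allow_all is illustrative; opening all ports to 0.0.0.0/0 is only acceptable for testing):

```hcl
resource "aws_security_group" "allow_all" {
  name   = "allow_all"
  vpc_id = var.vpc_id

  # Allow all inbound traffic (testing only).
  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```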

Variables.tf file

The provider and resource blocks in the main.tf file can be configured by changing the values in the variables.tf file. For example, to change the aws_instance type from t2.small to t2.micro, replace the value in this block:

variable "aws_instance" {
  type        = string
  description = "instance_type"
  default     = "t2.small"
}

Other resource-specific values such as aws_region, aws_ami, vpc and subnet can be changed the same way by editing the variables.tf file.

Apply terraform

To create a master node with microk8s, run the following commands.

Code Block
languagebash
themeEmacs
terraform init

...


terraform plan

...


terraform apply


Once the worker nodes are created, they will be connected to the master automatically. A multi-node Kubernetes cluster will be provisioned with the Calico CNI.
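Once apply completes, the cluster can be verified from the master node. This is a sketch; the configFile written by the user_data script above can also be copied to the client machine to use a local kubectl:

```shell
# On the master node: list nodes and confirm all three are Ready.
microk8s kubectl get nodes -o wide

# From the client machine, using the kubeconfig written by the user_data script
# (paths and key name are illustrative):
# scp -i <private-key>.pem ubuntu@<master-ip>:configFile ~/.kube/config
# kubectl get nodes
```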