...
Edge Connector and Edge GW with one-click.
Pre-Installation Requirements
In order to use the playbooks, several preconditions must be fulfilled:
- Time must be configured on all hosts (refer to "Configuring time").
- Hosts for the Edge Controller (Kubernetes master) and Edge Nodes (Kubernetes workers) must have proper and unique hostnames (not localhost). These hostnames must be specified in /etc/hosts (refer to "Setup static hostname").
- Ansible inventory must be configured (refer to "Configuring inventory").
- SSH keys must be exchanged with hosts (refer to "Exchanging SSH keys with hosts").
- Proxy must be configured if needed (refer to "Setting proxy").
- If a private repository is used, a GitHub token has to be set up (refer to "GitHub Token").
Configuring time
By default, CentOS ships with the chrony NTP client. It uses the default NTP servers listed below, which might not be available in certain networks:
0.centos.pool.ntp.org
1.centos.pool.ntp.org
2.centos.pool.ntp.org
3.centos.pool.ntp.org
OpenNESS requires the time to be synchronized between all of the nodes and controllers to allow for correct certificate verification.
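In networks where the default pool servers are unreachable, chrony can be pointed at a reachable NTP source by editing /etc/chrony.conf; this is a minimal sketch, and ntp.example.com is a placeholder for your network's actual NTP server:

```text
# /etc/chrony.conf -- comment out the default "server"/"pool" entries and
# point chrony at a reachable NTP source (ntp.example.com is a placeholder)
server ntp.example.com iburst
```

After editing, restart the service with systemctl restart chronyd and verify synchronization with chronyc sources.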
...
To set a custom static hostname, the following command can be used:
hostnamectl set-hostname <host_name>
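The hostname must also resolve locally via /etc/hosts, as required in the preconditions above; the name and address below are example values only:

```text
# /etc/hosts -- example entry mapping the static hostname to the host's IP
192.0.2.10   edgenode01
```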
...
In order to execute playbooks, inventory.ini must be configured to include the specific hosts that the playbooks will run on.
The inventory contains three groups: all, edgenode_group, and controller_group:
- all contains all the hosts (with configuration) used in any playbook.
- controller_group contains the host to be set up as a Kubernetes master / OpenNESS Edge Controller.
WARNING: Since only one Controller is supported, controller_group can contain only one host.
- edgenode_group contains the hosts to be set up as Kubernetes workers / OpenNESS Edge Nodes (Edge Gateways).
NOTE: All nodes will be joined to the master specified in controller_group.
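A minimal inventory.ini illustrating the three groups; the host names, addresses, and the ansible_ssh_user value are example values, and the exact variable set may differ between OpenNESS releases:

```ini
[all]
ctrl  ansible_ssh_user=root ansible_host=192.0.2.2
node1 ansible_ssh_user=root ansible_host=192.0.2.3

[controller_group]
ctrl

[edgenode_group]
node1
```

Note that controller_group lists exactly one host; every host in edgenode_group will be joined to it.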
...
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
The key's randomart image is:
+---[RSA 2048]----+
| .oo.==*|
| . . o=oB*|
| o . . ..o=.=|
| . oE. . ... |
| ooS. |
| ooo. . |
| . ...oo |
| . .*o+.. . |
| =O==.o.o |
+----[SHA256]-----+
Then, the generated key must be copied to every host in the inventory. This is done by running ssh-copy-id, e.g.:
...
Developer Guide and Troubleshooting
Developer Guide
Start by following the page Setting Up Your Development Environment, which covers items such as signing up for a Linux Foundation account, configuring Git, installing Gerrit, and IDE recommendations.
Clone 5G-MEC-CLOUD-GAMING Code
Visit https://gerrit.akraino.org/r/admin/repos/5g-mec-cloud-gaming to obtain the git clone commands.
Download Submodule
- git submodule update --init --recursive
Setup Environment
Enter the work directory:
- cd ./5g-mec-cloud-gaming
Execute the versify.sh script to set up the build environment.
The versify.sh script first installs Golang and Ginkgo, then installs Docker and docker-compose.
Golang:
wget https://dl.google.com/go/go1.13.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.13.4.linux-amd64.tar.gz
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
Ginkgo:
go get github.com/onsi/ginkgo/ginkgo
go get github.com/onsi/gomega/
Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
Docker-compose:
curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
Running Playbooks
For convenience, playbooks can be executed by running helper deployment scripts.
NOTE: All nodes provided in the inventory may reboot during the installation.
The convention for the scripts is: action_mode.sh [group]. The following scripts are available for Network Edge mode:
- deploy_ne.sh [ controller | nodes ]
To deploy only the Edge Nodes or only the Edge Controller, use deploy_ne.sh nodes or deploy_ne.sh controller, respectively.
NOTE: Playbooks for Edge Controller/Kubernetes master must be executed before playbooks for Edge Nodes.
NOTE: Edge Nodes and Edge Controller must be set up on different machines.
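The internals of the wrapper scripts are not shown in this guide. As a rough sketch of the action_mode.sh [group] convention, the optional group argument can be thought of as limiting the Ansible run to one inventory group; the playbook name and the mapping to --limit below are assumptions, not the scripts' actual contents:

```shell
# Hypothetical sketch of the action_mode.sh [group] convention.
# Map the optional [group] argument to an Ansible --limit option
# (group names match the inventory groups described above).
limit_for_group() {
    case "$1" in
        controller) echo "--limit controller_group" ;;
        nodes)      echo "--limit edgenode_group" ;;
        ""|all)     echo "" ;;
        *) echo "usage: deploy_ne.sh [ controller | nodes ]" >&2; return 1 ;;
    esac
}

# Illustrative only: print the invocation such a wrapper might run
echo "ansible-playbook -i inventory.ini network_edge.yml $(limit_for_group nodes)"
```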
Uninstall Guide
The role of the cleanup playbook is to revert changes made by the deploy playbooks. The convention for the scripts is: action_mode.sh [group]. The following script is available for cleanup:
- cleanup_ne.sh [ controller | nodes]
The teardown is done by going through the deployment steps in reverse order and undoing them. For example, since the playbooks for the Edge Controller/Kubernetes master must be executed before the playbooks for the Edge Nodes, during the uninstall operation the cleanup script for the Edge Nodes should be executed first, followed by the cleanup script for the Edge Controller/Kubernetes master.
Note that there might be some leftovers created by installed software.
Troubleshooting
Proxy issues
PRC users who have network problems can try the following mirrors.
- Kubernetes
Kubernetes repo URL: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64, as a replacement of https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64.
Kubernetes repo key: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg, as a replacement of https://packages.cloud.google.com/yum/doc/yum-key.gpg.
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg, as a replacement of https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg.
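One way to apply these replacements is through a yum repository definition. This is a sketch of /etc/yum.repos.d/kubernetes.repo using the Aliyun URLs above; the exact file your playbooks generate may differ:

```ini
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```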
- Kubeovn
Kubeovn repo: https://gitee.com/mirrors/Kube-OVN.git, as a replacement of https://github.com/alauda/kube-ovn.git.
Kubeovn raw file repo: https://gitee.com/mirrors/Kube-OVN/raw, as a replacement of https://raw.githubusercontent.com/alauda/kube-ovn.
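The git repository replacement can be applied without editing playbooks by using Git's url.insteadOf rewrite; this approach is not from the guide, only the mirror URLs above are:

```shell
# Redirect clones of the upstream kube-ovn repo to the Gitee mirror
git config --global url."https://gitee.com/mirrors/Kube-OVN.git".insteadOf \
    "https://github.com/alauda/kube-ovn.git"

# Verify the rewrite is registered
git config --global --get url."https://gitee.com/mirrors/Kube-OVN.git".insteadOf
```

Note that this rewrite covers only git clone/fetch URLs; the raw file URL (raw.githubusercontent.com) must still be replaced separately where it is referenced.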
Useful Commands
To display pods deployed in the default namespace:
kubectl get pods
To display pods running in all namespaces:
kubectl get pods --all-namespaces
To display status and latest events of deployed pods:
kubectl describe pod <pod_name> --namespace=<namespace_name>
To get logs of running pods:
kubectl logs <pod_name> -f --namespace=<namespace_name>
To display the allocatable resources:
kubectl get node <node_name> -o json | jq '.status.allocatable'
To display node information:
kubectl describe node <node_name>
To display available images on local machine (from host):
docker images
License
Any software developed by the "Akraino 5G MEC/Slice System to Support Cloud Gaming, HD Video and Live Broadcasting Blueprint" is licensed under the
Apache License, Version 2.0 (the "License");
you may not use the software except in compliance with the License.
You may obtain a copy of the License at <https://www.apache.org/licenses/LICENSE-2.0>
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.
License information of 5G MEC BP components
Edge Connector (aka edgecontroller in OpenNESS)
No | Software | Version | License |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |
2 | Kubernetes | 1.17 | Apache 2.0 license |
3 | Docker | 19.03 | Apache 2.0 license |
4 | etcd | 3.4.3-0 | Apache 2.0 license |
Edge GW (aka edgenode in OpenNESS)
No | Software | Version | License |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |
2 | Kubernetes | 1.17 | Apache 2.0 license |
3 | Docker | 19.03 | Apache 2.0 license |
4 | openvswitch | 2.11.4 | Apache 2.0 license |
5 | kube-ovn | 0.10.2 | Apache 2.0 license |
5GCEmulator
No | Software | Version | License |
---|---|---|---|
1 | openNESS | 20.03 | Apache 2.0 license |