...
This document assumes the reader is familiar with basic UNIX command line utilities and Kubernetes. Familiarity with Ansible and Docker may also be useful. To interact with the EdgeX micro-services in a running setup, use the APIs as described in the EdgeX documentation. Sensor data can be observed through the MQTT broker mosquitto and its command line utility mosquitto_sub.
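As a rough illustration of what mosquitto_sub delivers, an EdgeX 2.x event is a JSON document containing a device name and a list of readings. The sketch below parses a sample payload; the field names follow the EdgeX 2.1 event DTO and the sample device and values are hypothetical, so verify both against your running services.

```python
import json

# Sample payload shaped like an EdgeX 2.x event as published to MQTT.
# Field names follow the EdgeX 2.1 event DTO; device and values are
# hypothetical examples, not taken from this blueprint.
sample = json.dumps({
    "apiVersion": "v2",
    "deviceName": "temperature-sensor-01",
    "readings": [
        {"resourceName": "Temperature", "value": "22.5", "units": "C"},
        {"resourceName": "Humidity", "value": "41.0", "units": "%"},
    ],
})

def summarize_event(payload: str) -> list[str]:
    """Return 'device resource=value' strings for each reading in an event."""
    event = json.loads(payload)
    device = event.get("deviceName", "?")
    return [
        f'{device} {r["resourceName"]}={r["value"]}'
        for r in event.get("readings", [])
    ]

print(summarize_event(sample))
```

A real monitoring script would receive such payloads from the broker instead of a hard-coded sample.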
Start by reviewing the deployment architecture and requirements in the following sections, then follow the steps in the Installation section to set up the software and start it running. Confirm the services are functioning as expected by following the instructions in the Verifying the Setup section. The later sections in this document describe other tasks that can be performed on a running setup, alternate configuration options, and how to shut down and uninstall the software.
...
- Make sure there are entries for the master and edge node names in /etc/hosts
- Install required software packages including Docker, Kubernetes, pip, and mosquitto
- Install Python packages used by other playbooks (kubernetes and cryptography)
- Make sure the user can run Docker commands
- Prepare basic configuration for Docker and Kubernetes
- Set up a user name and password for the MQTT service
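The /etc/hosts requirement above can be sketched as a small check. The node names and addresses below are hypothetical examples, not taken from this blueprint's inventory.

```python
# Hedged sketch: verify that /etc/hosts-style content has an entry for each
# required node name. Hostnames and addresses here are hypothetical.
REQUIRED_NODES = ["master", "edge1", "edge2"]

hosts_content = """\
127.0.0.1     localhost
192.168.2.10  master
192.168.2.21  edge1
192.168.2.22  edge2
"""

def missing_entries(content: str, required: list[str]) -> list[str]:
    """Return required node names with no matching hostname in the content."""
    known = set()
    for line in content.splitlines():
        fields = line.split("#", 1)[0].split()
        if len(fields) >= 2:
            known.update(fields[1:])   # all names after the address column
    return [name for name in required if name not in known]

print(missing_entries(hosts_content, REQUIRED_NODES))
```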
...
ssh-copy-id -i ~/.ssh/edge.pub edge@nodename
After the administrative account has been created, the following command will perform initial setup on all edge nodes configured in the deploy/playbook/hosts file:
ansible-playbook -i ./hosts edge_install.yml
The playbook will perform the following initialization tasks:
- Make sure there is an entry for the master node in /etc/hosts
- Install required software packages including Docker and kubelet
- Make sure the user can run Docker commands
- Configure Docker, including adding the certificates to secure access to the private registry
Edge Node Kubernetes Requirements
Like the master node, swap should be disabled and the cluster IP address ranges should be excluded from proxy processing if necessary.
Note that the Jetson Nano hardware platform has a service called nvzramconfig that acts as swap and needs to be disabled. Use the following command to disable it:
sudo systemctl disable nvzramconfig.service
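The "swap must be disabled" requirement can be sketched as a check on /proc/swaps, which on Linux lists active swap devices under a header row. The content strings below are illustrative examples, not captured from real hardware.

```python
# Hedged sketch: Kubernetes requires swap to be off. /proc/swaps lists
# active swap devices; a header line with no device rows means none.
def swap_devices(proc_swaps: str) -> list[str]:
    """Return the device names listed in /proc/swaps-style content."""
    lines = proc_swaps.strip().splitlines()
    return [line.split()[0] for line in lines[1:]]  # skip the header row

# Example content as it might appear while a zram swap device is active.
with_zram = """\
Filename                Type        Size    Used    Priority
/dev/zram0              partition   253952  0       5
"""
swap_off = "Filename                Type        Size    Used    Priority\n"

print(swap_devices(with_zram))
print(swap_devices(swap_off))
```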
Building the Custom Services
At this time, images for the two custom services, sync-app and device-lora, need to be built from source and pushed to the private Docker registry. (In the future these images should be available on Docker Hub or another public registry.) Use the following playbooks from the cicd/playbook directory on the deploy node to do so.
This command will install components that support cross-compiling the microservices for ARM devices:
ansible-playbook -i ./hosts setup_build.yml
This command will build local Docker images of the custom microservices:
ansible-playbook -i ./hosts build_images.yml
The build command can take some time, depending on connection speed and the load on the deploy host, especially while cross-compiling the images for ARM targets.
This command will push the images to the private registry:
ansible-playbook -i ./hosts push_images.yml
At the time of writing, this step will also create some workaround images required to enable EdgeX security features in this blueprint's Kubernetes configuration. Hopefully, these images will no longer be needed once fixes have been made upstream.
Starting the Cluster
With the base software installed and configured on the master and edge nodes, the following command will start the cluster:
ansible-playbook -i ./hosts init_cluster.yml --ask-become-pass
This command starts only the master node in the Kubernetes cluster. The state of the master node can be confirmed using the kubectl get node command on the master node.
admin@master:~$ kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   14m   v1.22.7
Adding Edge Nodes to the Cluster
Once the cluster is initialized, the following command will add all the configured edge nodes to the cluster:
ansible-playbook -i ./hosts join_cluster.yml
The kubectl get node command on the master node can be used to confirm the state of the edge nodes.
admin@master:~$ kubectl get node
NAME     STATUS   ROLES                  AGE     VERSION
edge1    Ready    <none>                 2m50s   v1.22.7
edge2    Ready    <none>                 2m45s   v1.22.7
master   Ready    control-plane,master   17m     v1.22.7
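A readiness check like the one performed by eye above can be sketched in a few lines: parse the kubectl get node output and report any node whose STATUS column is not Ready. The sample output below is a hypothetical variation on the transcript above.

```python
# Hedged sketch: parse `kubectl get node` output and report nodes that
# are not Ready. The sample text is hypothetical.
KUBECTL_OUTPUT = """\
NAME     STATUS     ROLES                  AGE     VERSION
edge1    Ready      <none>                 2m50s   v1.22.7
edge2    NotReady   <none>                 2m45s   v1.22.7
master   Ready      control-plane,master   17m     v1.22.7
"""

def not_ready(output: str) -> list[str]:
    """Return names of nodes whose STATUS column is not 'Ready'."""
    rows = output.strip().splitlines()[1:]          # skip the header row
    return [r.split()[0] for r in rows if r.split()[1] != "Ready"]

print(not_ready(KUBECTL_OUTPUT))
```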
Starting EdgeX
After adding the edge nodes to the cluster, the following command will start the EdgeX services on the edge nodes:
ansible-playbook -i ./hosts edgex_start.yml
You can confirm the status of the EdgeX microservices using the kubectl get pod command on the master node.
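In kubectl get pod output, the READY column reads "n/m" and a pod is healthy when both numbers match and STATUS is Running. The sketch below checks that; the pod names in the sample are hypothetical, not the actual names used by this blueprint.

```python
# Hedged sketch: flag pods that are not fully ready and Running in
# `kubectl get pod` output. Pod names below are hypothetical.
POD_OUTPUT = """\
NAME                     READY   STATUS    RESTARTS   AGE
edgex-core-data-abc12    1/1     Running   0          3m
edgex-core-metadata-x9   0/1     Pending   0          3m
"""

def unhealthy_pods(output: str) -> list[str]:
    """Return pod names that are not fully ready and Running."""
    bad = []
    for row in output.strip().splitlines()[1:]:     # skip the header row
        name, ready, status = row.split()[:3]
        done, total = ready.split("/")
        if done != total or status != "Running":
            bad.append(name)
    return bad

print(unhealthy_pods(POD_OUTPUT))
```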
Sensor Nodes
In the test installation, sensor nodes have been constructed using Raspberry Pi devices running a Python script as a service. The script reads temperature and humidity from a DHT-1 sensor and forwards those readings through an LRA-1 USB dongle to a pre-configured destination.
The Python script is located in sensor/dht2lra.py, and an example service definition file for use with systemd is dht2lra.service in the same directory.
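For orientation only, a systemd unit for a script like this generally has the following shape; use the dht2lra.service shipped in the sensor/ directory rather than this sketch, since the paths and user name here are hypothetical.

```ini
# Illustrative sketch only -- the real unit file is sensor/dht2lra.service.
# Paths and the user name are hypothetical.
[Unit]
Description=DHT sensor to LoRa forwarder
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/sensor/dht2lra.py
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```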
The destination edge node is configured by connecting to the LRA-1 USB dongle, for example using the tio program (which can be installed with sudo apt-get install tio):
pi@raspi02:~ $ sudo tio /dev/ttyUSB0
[tio 09:31:52] tio v1.32
[tio 09:31:52] Press ctrl-t q to quit
[tio 09:31:52] Connected
i2-ele LRA1
Ver 1.07.b+
OK
>
At the ">" prompt, enter dst=N, where N is the number in the lora_id variable for the edge node in deploy/playbook/hosts. Then enter the ssave command and disconnect from the dongle (using Ctrl+t q in the case of tio). The destination ID will be stored in the dongle's persistent memory (power cycling will not clear the value).
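The two-step dongle configuration above can be expressed as a small helper, for example when scripting the setup of several sensor nodes. Only the dst=N and ssave commands come from this document; the lora_id value used below is a hypothetical example.

```python
# Hedged sketch: compose the LRA-1 dongle commands described above for a
# given edge node. Only `dst=N` and `ssave` are taken from the document;
# the lora_id value is a hypothetical example.
def dongle_commands(lora_id: int) -> list[str]:
    """Commands to set and persist the LoRa destination ID."""
    return [f"dst={lora_id}", "ssave"]

print(dongle_commands(2))
```

Each command would be written to the dongle's serial port followed by a newline, as when typing it at the ">" prompt.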
Running the script, either directly with python ./dht2lra.py or using the service, will periodically send readings to the edge node. These readings should appear in the core-data database and can be monitored using the edgex-events-nodename channel.
Verifying the Setup
Verification follows the process defined by the Akraino validation feature project, plus any additional testing specific to this blueprint.
...
References
- EdgeX Foundry Documentation (release 2.1): https://docs.edgexfoundry.org/2.1/
Definitions, Acronyms and Abbreviations
...