Introduction
This guide provides instructions for installing and configuring the Smart Data Transaction for CPS blueprint, and also includes recommended hardware and software requirements for the blueprint. The guide describes a minimal installation of the blueprint consisting of a single "master" node and two "edge" nodes, with directions on how the number of nodes can be modified as needed.
...
You can confirm the status of the EdgeX microservices using the kubectl get pod command on the master node.
...
(EdgeX micro-service containers are grouped into one Kubernetes "pod" per node.)
admin@master:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
edgex-edge1-57859dcdff-k8j6g 20/20 Running 16 1m31s
edgex-edge2-5678d8fbbf-q988v 20/20 Running 16 1m26s
Note, during initialization of the services you may see some containers restart one or more times. This is part of the timeout and retry behavior of the services waiting for other services to complete initialization and does not indicate a problem.
Sensor Nodes
In the test installation sensor nodes have been constructed using Raspberry Pi devices running a Python script as a service to read temperature and humidity from a DHT-1 sensor, and forward those readings through an LRA-1 USB dongle to a pre-configured destination.
The Python script is located in sensor/dht2lra.py
, and an example service definition file for use with systemd is dht2lra.service
in the same directory.
The destination edge node is configured by connecting to the LRA-1 USB dongle, for example with the tio program (install it first with sudo apt-get install tio):
pi@raspi02:~ $ sudo tio /dev/ttyUSB0
[tio 09:31:52] tio v1.32
[tio 09:31:52] Press ctrl-t q to quit
[tio 09:31:52] Connected
i2-ele LRA1
Ver 1.07.b+
OK
>
At the ">" prompt, enter dst=N, where N is the value of the lora_id variable for the edge node in deploy/playbook/hosts. Then enter the ssave command and disconnect from the dongle (Ctrl+t q in the case of tio). The destination ID is stored in the dongle's persistent memory, so power cycling will not clear the value.
Running the script, either directly with python ./dht2lra.py or via the service, will periodically send readings to the edge node. The readings should appear in the core-data database and can be monitored on the edgex-events-nodename channel. For example, the following command run on the master node should show the readings arriving at an edge node named "edge1":
mosquitto_sub -t edgex-events-edge1 -u edge -P edgemqtt
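The read-and-forward loop described above can be sketched as follows. This is a minimal illustration, not the actual contents of dht2lra.py: the function names, the JSON payload shape, and the use of plain callables in place of the DHT-1 driver and the pyserial connection to the LRA-1 dongle are all our own assumptions.

```python
import json
import time

def format_reading(temperature_c, humidity_pct):
    # Pack one reading as a compact JSON line. The real payload
    # format used by dht2lra.py may well differ; this is illustrative.
    return json.dumps(
        {"t": round(temperature_c, 1), "h": round(humidity_pct, 1)},
        separators=(",", ":"),
    )

def send_loop(read_sensor, write_line, interval_s=10.0, iterations=1):
    # read_sensor: callable returning (temperature_c, humidity_pct),
    #   standing in for the DHT-1 sensor driver.
    # write_line: callable transmitting one line, standing in for a
    #   pyserial write to the LRA-1 dongle.
    for i in range(iterations):
        t, h = read_sensor()
        write_line(format_reading(t, h) + "\n")
        if i < iterations - 1:
            time.sleep(interval_s)

# Stand-in callables instead of real hardware:
sent = []
send_loop(lambda: (21.5, 48.2), sent.append, iterations=1)
```

Running the real script as a systemd service simply keeps an equivalent loop alive across reboots.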
Verifying the Setup
Test cases for verifying the blueprint's operation are provided in the cicd/tests directory. These are Robot Framework scripts which can be executed using the robot tool. In addition, the cicd/playbook directory contains playbooks supporting setup of a Jenkins-based automated testing environment for CI/CD. For more information, consult the README.md files in those directories.
Developer Guide and Troubleshooting
EdgeX Service Configuration UI
The configuration parameters of the EdgeX micro-services can be accessed through a Consul server on each edge node. The UI is accessible at http://node-address:8500/ui. The node address is assigned automatically by Kubernetes and can be confirmed using the kubectl get node -o wide command on the master node.
Accessing the configuration UI requires a login token, which can be acquired using the get-consul-acl-token.sh script in the edgex directory. Execute it as follows and it will print the Consul access token:
get-consul-acl-token.sh pod-name
The pod-name parameter is the name of the EdgeX pod running on the node. This can be obtained with the kubectl get pod command on the master node; the pod name is shown in the first column of the output and has the form "edgex-nodename-...".
Open the UI address in a web browser running on the master node and click the "log in" button in the upper right. When prompted, paste the access token printed by the get-consul-acl-token.sh script into the text box and press Enter to log in. See the EdgeX documentation and Consul UI documentation for more information.
EdgeX API Access
The EdgeX micro-services each expose REST APIs through an API gateway running on https://node-address:8443. The REST APIs are documented in the EdgeX documentation, and they are mapped to URLs under the API gateway address using path names based on the name of each micro-service. For example, the core-data service's ping endpoint can be accessed through https://node-address:8443/core-data/api/v2/ping. A partial list of these mappings can be found in the EdgeX introduction to the API gateway.
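The path-mapping rule can be expressed as a small helper. The function is ours, written only to make the URL shape explicit; the shape itself comes from the example above.

```python
def gateway_url(node_address, service, endpoint):
    # Build the API gateway URL for an EdgeX micro-service endpoint.
    # The gateway prefixes each service's API with the service name.
    return "https://{}:8443/{}{}".format(node_address, service, endpoint)

print(gateway_url("node-address", "core-data", "/api/v2/ping"))
# -> https://node-address:8443/core-data/api/v2/ping
```

From the master node the resulting URL can be checked with curl, adding -k because of the unsigned certificate noted below.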
Note that the blueprint does not automatically generate signed certificates for the API gateway, so the default certificate will cause warnings when the gateway is accessed from a web browser and requires the -k option when using the curl tool.
There is more information about the API gateway in the EdgeX documentation.
Enabling and Disabling Optional Services
Three EdgeX micro-services can be enabled or disabled using variables in the deploy/playbook/group_vars/all/edgex.yml file. Set a variable to true to enable the corresponding micro-service the next time the edgex_start.yml playbook is run, or to false to disable it. The controlling variables are listed below:
- device_virtual: Enable or disable the device-virtual service, provided by EdgeX Foundry, used for testing.
- device_lora: Enable or disable the device-lora service, one of the custom services provided by this blueprint, which provides support for receiving readings and sending commands to remote sensors over LoRa low-power radio links.
- sync_app: Enable or disable the sync-app application service, the other custom service provided by this blueprint, which provides a way to forward sensor data to other edge nodes.
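As an illustration, an edgex.yml enabling only the two custom services might contain the following (the variable names come from the list above; the values are just an example):

```yaml
# deploy/playbook/group_vars/all/edgex.yml (illustrative fragment)
device_virtual: false
device_lora: true
sync_app: true
```

Re-run the edgex_start.yml playbook for the change to take effect.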
Debugging Failures
Consult the sections under Troubleshooting for commands to debug failures. In particular, using the kubectl
commands described in Accessing Logs, and changing the log levels of services using the configuration UI described above, which can change the logging level of running services, can be useful.
Reporting a Bug
Contact the Smart Data Transaction for CPS mailing list at sdt-blueprint@lists.akraino.org to report potential bugs or get assistance with problems.
Uninstall Guide
Stopping EdgeX
The EdgeX services can be stopped on all edge nodes using the edgex_stop.yml playbook. (It is not currently possible to stop and start the services on individual nodes.)
ansible-playbook -i ./hosts edgex_stop.yml
Confirm that the services have stopped using the kubectl get pod command on the master node; it should show no pods in the default namespace.
After stopping the EdgeX services they can be restarted using the edgex_start.yml playbook as usual. Note, however, that the pod names and access tokens will have changed.
Removing Edge Nodes
The edge nodes can be removed from the cluster using the following command:
ansible-playbook -i ./hosts delete_from_cluster.yml
Run this command before stopping the cluster as described in the following section, to ensure a clean shutdown. The edge nodes can be re-added using join_cluster.yml, perhaps after editing the configuration in the hosts file.
Stopping Kubernetes
Kubernetes can be stopped by running the following command. Do this after all edge nodes have been removed.
ansible-playbook -i ./hosts reset_cluster.yml --ask-become-pass
Stopping and Clearing the Docker Registry
If you need to stop the private Docker registry service for some reason, use the following command:
ansible-playbook -i ./hosts stop_registry.yml
With the registry stopped it is possible to remove the registry entirely. This recovers any disk space used by images stored in the registry, but means that pull_upstream_images.yml, build_images.yml, and push_images.yml will need to be run again.
ansible-playbook -i ./hosts remove_registry.yml
Uninstalling Software Components
Installed software components can be removed with sudo apt remove package-name. See the list of installed software components earlier in this document. Python packages (cryptography and kubernetes) can be removed with the pip uninstall command.
Ansible collections installed with ansible-galaxy (community.docker, kubernetes.core, community.crypto) can be removed by deleting the corresponding directories under ~/.ansible/collections/ansible_collections on the deploy node.
Removing Configuration and Temporary Data
This blueprint stores configuration and data in the following places. When uninstalling the software, these folders and files can also be removed, if present, on the master, deploy and edge nodes.
- Master node:
- ~/.lfedge
- /opt/lfedge
- /etc/mosquitto/conf.d/edge.conf
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Edge node:
- /opt/lfedge
- /etc/docker/certs.d/master:5000/registry.crt
- /usr/local/share/ca-certificates/master.crt
- /etc/docker/daemon.json
- /usr/share/keyrings/kubernetes-archive-keyring.gpg
- Deploy node:
- /etc/profile.d/go.sh
- /usr/local/go
- ~/edgexfoundry
Troubleshooting
Confirming Node and Service Status
The kubectl command can be used to check the status of most cluster components. kubectl get node shows the health of the master and edge nodes, and kubectl get pod shows the overall status of the EdgeX services. The kubectl describe pod pod-name command gives a more detailed report on the status of a particular pod. The EdgeX configuration UI, described in the section EdgeX Service Configuration UI above, also shows the result of an internal health check of all EdgeX services on the node.
Accessing Logs
The main tool for accessing logs is kubectl logs, run on the master node. This command can be used to show the logs of a running container:
kubectl logs -c container-name pod-name
It can also be used to check the logs of a container which has crashed or stopped:
kubectl logs --previous -c container-name pod-name
And it can be used to stream the logs of a container to a terminal:
kubectl logs -c container-name pod-name -f
The container names can be found in the output of kubectl describe pod or in the edgex/deployments/edgex.yml file (the names of the entries in the containers list).
For the rare cases when the Kubernetes log command does not work, it may be possible to use the docker logs command on the node you wish to debug.
Maintenance
Stopping and Restarting EdgeX Services
As described in the Uninstall Guide subsection Stopping EdgeX, the EdgeX services can be stopped and restarted using the edgex_stop.yml and edgex_start.yml playbooks.
Stopping and Restarting the Kubernetes Cluster
Similar to stopping and restarting the EdgeX services, the whole cluster can be stopped and restarted by stopping EdgeX, removing the edge nodes, stopping Kubernetes, starting Kubernetes, adding the edge nodes, and starting EdgeX again:
ansible-playbook -i ./hosts edgex_stop.yml
ansible-playbook -i ./hosts delete_from_cluster.yml
ansible-playbook -i ./hosts reset_cluster.yml --ask-become-pass
ansible-playbook -i ./hosts init_cluster.yml --ask-become-pass
ansible-playbook -i ./hosts join_cluster.yml
ansible-playbook -i ./hosts edgex_start.yml
Adding and Removing Edge Nodes
Edge nodes can be added and removed by stopping the cluster and editing the deploy/playbook/hosts file. The master_install.yml and edge_install.yml playbooks need to be run again to update /etc/hosts and the certificates on any added nodes.
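The hosts file is an Ansible inventory. As a purely hypothetical sketch, an inventory defining two edge nodes with the lora_id variable mentioned in the Sensor Nodes section might look like the fragment below; the group name and addresses are invented for illustration, so consult the actual deploy/playbook/hosts file for the real structure.

```ini
[edge_nodes]
edge1 ansible_host=192.168.2.21 lora_id=1
edge2 ansible_host=192.168.2.22 lora_id=2
```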
Updating the Software
The setup_deploy.yml, master_install.yml, and edge_install.yml playbooks can be run again to update software packages if necessary. Note that Kubernetes is pinned to version 1.22 to avoid problems that might arise from version changes, but it should be possible to update it if desired.
Rebuilding Custom Services
The custom services can be rebuilt by running the build_images.yml playbook in cicd/playbook. After successfully building a new version of a service, use push_images.yml to push the images to the private Docker registry. The source for the services is found in edgex/sync-app and edgex/device-lora.
License
The software provided as part of the Smart Data Transaction for CPS blueprint is licensed under the Apache License, Version 2.0 (the "License");
...