Introduction

This page describes work in progress on REC validation and testing; more information will be added in the near future.

REC installation has been successfully validated with the post-install validation test and with additional tests such as the RIC Robot Test and Nanobot in the AT&T Radio Edge Cloud Validation Lab (Middletown, NJ). These test cases were run manually on two different clusters, HP Gen 10 and Nokia OpenEdge. More clusters will be tested in the AT&T and Nokia labs in later releases.


Post-install validation

A post installation verification is required to ensure that all nodes and services were properly deployed.

You need to establish an SSH connection to the controller's VIP address and log in with administrative rights.

Example
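
A minimal sketch of the login step, assuming a hypothetical controller VIP of 192.168.12.5 and an administrative account named cloudadmin (both are placeholders; substitute the values from your own deployment):

ssh cloudadmin@192.168.12.5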

1. Verify Deployment Success.

Enter the following command:

#tail /srv/deployment/log/bootstrap.log

You should see: Installation complete, Installation Succeeded.
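
If the success message has already scrolled past the last few lines, the whole log can be searched for it instead (a convenience sketch, not part of the official procedure):

#grep "Installation complete" /srv/deployment/log/bootstrap.log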

2. Docker Version Test:

#docker --version

Expected Output: Docker version 18.09.2, build 6247962
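
If only the version number itself is needed, for example in a validation script, Docker's built-in formatting can be used (a sketch using standard Docker CLI options):

#docker version --format '{{.Client.Version}}'

Expected Output: the bare client version number (for example 18.09.2)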

3. Kubernetes Cluster Health Check/Validation

# kubectl get pods --all-namespaces

Expected Output: the status of all pods should be Running

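As a scripted variant of the same check (a sketch; column 4 of the kubectl output is the pod STATUS), the pods that are not Running or Completed can be counted directly, where 0 indicates a healthy cluster:

#kubectl get pods --all-namespaces --no-headers | awk '$4 != "Running" && $4 != "Completed"' | wc -l

Expected Output: 0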

4. Confirm active state of required services

Enter the following commands:

systemctl status --no-pager docker.service

systemctl status --no-pager kubelet.service

Example

systemctl status --no-pager docker.service

* docker.service - Docker Application Container Engine

Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: enabled)

Active: active (running)

#systemctl show -p SubState docker.service  | sed 's/SubState=//g'

Expected Output: running

#systemctl show -p SubState kubelet.service  | sed 's/SubState=//g'

Expected Output: running

#systemctl show -p ActiveState docker.service | sed 's/ActiveState=//g'

Expected Output: active

#systemctl show -p ActiveState kubelet.service | sed 's/ActiveState=//g'

Expected Output: active
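
The four checks above can also be combined into one small loop (a sketch built from the same systemctl commands):

for svc in docker.service kubelet.service; do
  echo "$svc: $(systemctl show -p ActiveState $svc | sed 's/ActiveState=//g')/$(systemctl show -p SubState $svc | sed 's/SubState=//g')"
done

Expected Output: active/running for both services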

5. Verify node functionality

Enter the following commands:

#kubectl get no --no-headers | grep -v Ready

Output:  The command output shows nothing.

#kubectl get no --no-headers | wc -l

Output:  The command output shows the number of CaaS REC nodes.
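
The two commands can be combined into a single check (a sketch; column 2 of the kubectl output is the node STATUS):

not_ready=$(kubectl get no --no-headers | awk '$2 != "Ready"')
if [ -z "$not_ready" ]; then echo "All $(kubectl get no --no-headers | wc -l) nodes are Ready"; else echo "$not_ready"; fi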


6. Verify Components

Enter the following command:

#kubectl get po --no-headers --namespace=kube-system --field-selector status.phase!=Running

Output:  The command output shows nothing.
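
To turn this into a pass/fail check (a sketch using the same field selector), count the pods the command returns; 0 means every kube-system pod is in the Running phase:

#kubectl get po --no-headers --namespace=kube-system --field-selector status.phase!=Running | wc -l

Expected Output: 0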


7. Confirm Package Manager Status (Helm)

  • Docker registry is running, and images can be downloaded:

image=$(docker images -f 'reference=*/reccaas/hyperkube' --format="{{.Repository}}:{{.Tag}}"); docker rmi $image; docker pull $image

Output:  Status: Downloaded newer image for … or Status: Image is up to date for …

  • Chart repository is up and running (the curl command below is a single line):

curl -sS -XGET --cacert /etc/chart-repo/ssl/ca.pem --cert /etc/chart-repo/ssl/chart-repo?repo1.pem --key /etc/chart-repo/ssl/chart-repo?repo1-key.pem https://chart-repo.kube-system.svc.rec.io:8088/charts/index.yaml


  • Helm is able to run a sample application:

helm list

Output:  the deployed releases are listed, for example rec and caas-infra.
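
To confirm that the specific infrastructure releases are present rather than reading the list by eye, the release names shown above can be checked in a loop (a sketch, assuming a Helm 2 client where helm list --short prints one release name per line):

for release in rec caas-infra; do
  helm list --short | grep -qx "$release" && echo "$release: deployed" || echo "$release: MISSING"
done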

Deployment Failures

Sometimes failures happen, usually due to misconfiguration or incorrectly entered addresses.

To re-launch a failed deployment

There are two options for redeploying.

  1. /opt/nokia/cmframework/scripts/bootstrap.sh /opt/nokia/installer-ui/user_config/user_config.yaml (the user_config.yaml file is the one loaded from the installer GUI)
  2. openvt -s -w /opt/start-menu/start_menu.sh &

Note:  In some cases, modifications to the user_config.yaml may be necessary to resolve a failure. 

If re-deployment is not possible, the deployment must be restarted by booting from the REC .iso image.
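
For example, after correcting a mistyped address in user_config.yaml, the deployment can be re-launched with option 1 above (a sketch using only the paths already listed):

vi /opt/nokia/installer-ui/user_config/user_config.yaml
/opt/nokia/cmframework/scripts/bootstrap.sh /opt/nokia/installer-ui/user_config/user_config.yaml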

Additional Testing

More test cases will be added in the near future, for example:

1. Installing RIC on top of REC

2. RIC robot test 

3. Nanobot 


More detail on each of these test cases will be added in the near future. For example:

1. Install RIC Robot test on top of REC

This set of tests provides a deployable robot container for running Robot Framework test cases against RIC. It adds interfaces to the RIC applications and turns them into test cases. It covers an initial view of the Xapp Manager REST interface and some other references to the E2Manager interface.

2. Nanobot 

This is a second set of tests, which runs as a job and then produces a robot test report. The deployment steps are very similar to those of the RIC Robot test, since the existing repository is already cloned as part of deploying ric_robot_suite. Instead of pulling an image from Azure, the container is built locally.

