...

This document outlines the steps to deploy SDN-Enabled Broadband Access (SEBA) for Telco Appliance.  With the exception of the installation of the SEBA application components (VOLTHA, NEM, ONOS), the installation process follows that of REC (REC Installation Guide).  The production deployment of SEBA is intended to be done using the Akraino Regional Controller, but, for evaluation purposes, it is possible to deploy SEBA without the Regional Controller.  In a Regional Controller based deployment, the Regional Controller API is used to upload the SEBA blueprint YAML (for Akraino Release 2, the SEBA blueprint reuses REC_blueprint.yaml, available from the SEBA repository), which informs the Regional Controller of where to obtain the SEBA ISO images and the SEBA workflows (executable code for creating, modifying and deleting SEBA sites).  Instructions on how to deploy the SEBA blueprint using the Regional Controller will be covered in a future release.

An overview and diagram of the network connectivity is available on the Radio Edge Cloud Validation Lab page.  In a Regional Controller based deployment, the create workflow instantiates the SEBA remote installer component (a container image), which in turn invokes the SEBA Deployer (located in the ISO disc image file) to conduct the rest of the installation.  The instructions below skip most of this and directly invoke the SEBA Deployer from the BMC, iLO or iDRAC of a physical server. The basic workflow of the SEBA Deployer is to copy a base image to the first controller in the cluster and then read the contents of a configuration file (typically called user_config.yaml) to deploy the base OS and all additional software to the rest of the nodes in the cluster.

Pre-Installation Requirements for SEBA Cluster

...

The specific recommended configuration as of the Release 2 time frame is the Open Edge configuration for a single cluster documented in the Radio Edge Cloud Validation Lab, with only three server blades populated (instead of the five server blades used for REC).

BIOS Requirements:

...

  • BIOS set to Legacy (Not UEFI)
  • CPU Configuration/Turbo Mode Disabled
  • Virtualization Enabled
  • IPMI Enabled
  • Boot Order set with Hard Disk first in the list.

As of Akraino Release 2, the Telco Appliance blueprint family does not yet include automatic configuration for a pre-boot environment. The following versions were manually loaded on the Open Edge servers in the SEBA Blueprint Validation Lab using the incomplete but functional script available here (note: this is the same script utilized by REC for Akraino Release 1).  In the future, automatic configuration of the pre-boot environment is expected to be a function of the Regional Controller under the direction of the SEBA pod create workflow script.

...

The SEBA installer will configure NTP and DNS using the parameters entered in the user_config.yaml.  However, the network must be configured for the SEBA cluster to be able to access the NTP and DNS servers prior to the install.
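
As a pre-flight sanity check, reachability of the NTP and DNS servers can be verified with generic tooling (these commands are not part of the deployer; the addresses below are placeholders for the values that will go into user_config.yaml):

    Code Block
    # hypothetical addresses -- substitute the DNS/NTP servers from your user_config.yaml
    dig @192.168.12.200 opencord.org +short   # DNS server answers queries
    ntpdate -q 192.168.12.200                 # NTP server responds (query only, no clock change)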

...

The user_config.yaml file contains the details of your SEBA cluster, such as the required network CIDRs, usernames, passwords, DNS and NTP server IP addresses, etc.  The SEBA configuration is flexible, but there are dependencies: for example, using DPDK requires a networking profile with ovs-dpdk type, a performance profile with CPU pinning and hugepages, and performance profile links on the compute node(s).  All values in the user_config.yaml should be updated to match the environment of your deployment.
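
Before starting a deployment, a quick local syntax check of the file can save a failed run (a generic YAML lint, not the deployer's own validation):

    Code Block
    # requires PyYAML; checks YAML syntax only, not SEBA-specific semantics
    python -c "import yaml; yaml.safe_load(open('user_config.yaml'))" && echo "syntax OK"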

Note

The following link points to the latest user_config template with descriptions and examples for every available parameter:

...

  user_config.yaml template


Note

The version number listed in the user_config.yaml needs to closely follow the version of the template. Version checking during deployment is strict for the first two parts of the version number; for example, a configuration written against template version 2.3.0 is accepted by a 2.3.4 template (the third digit is backwards compatible), but not by a 2.4.0 or 3.0.0 template. The following rules apply to the yaml's version parameter:

### Version numbering:
###    X.0.0
###        - Major structural changes compared to the previous version.
###        - Requires all users to update their user configuration to
###          the new template
###    a.X.0
###        - Significant changes in the template within current structure
###          (e.g. new mandatory attributes)
###        - Requires all users to update their user configuration according
###          to the new template (e.g. add new mandatory attributes)
###    a.b.X
###        - Minor changes in template (e.g. new optional attributes or
###          changes in possible values, value ranges or default values)
###        - Backwards compatible

...

Note

Kubernetes 1.14 deprecates several legacy APIs and Kubernetes 1.16 disables them by default. For deployment of SEBA, it is necessary to manually enable these legacy Kubernetes APIs, since the SEBA charts still depend on them and they are disabled by default in the Kubernetes version shipped with Telco Appliance.  The deprecated APIs will be removed entirely in Kubernetes 1.18.

The following commands will install the SEBA software on the cluster.

  • Enable legacy APIs by adding the --runtime-config option to the command section of /etc/kubernetes/manifests/apiserver.yml on each node in the cluster.  Connect to each node using ssh and edit the file to match the example below.

    Code Block
    ssh cloudadmin@10.65.1.51
    sudo vi /etc/kubernetes/manifests/apiserver.yml


    Code Block
    title/etc/kubernetes/manifests/apiserver.yml
    collapsetrue
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: kube-apiserver
          image: registry.kube-system.svc.rec.io:5555/caas/hyperkube:1.16.0-5
          securityContext:
            runAsUser: 144
          command:
            - "/kube-apiserver"
            - --admission-control=DefaultStorageClass,LimitRanger,MutatingAdmissionWebhook,NamespaceExists,NamespaceLifecycle,NodeRestriction,PodSecurityPolicy,ResourceQuota,ServiceAccount,ValidatingAdmissionWebhook
            - --advertise-address=192.168.12.51
            - --allow-privileged=true
            - --anonymous-auth=false
            - --apiserver-count=3
            - --audit-policy-file=/var/lib/caas/policies/audit-policy.yaml
            - --audit-log-format=json
            - --audit-log-maxsize=100
            - --audit-log-maxbackup=88
            - --audit-log-path=/var/log/audit/kube_apiserver/kube-apiserver-audit.log
            - --authorization-mode=Node,RBAC
            - --bind-address=192.168.12.51
            - --client-ca-file=/etc/openssl/ca.pem
            - --enable-bootstrap-token-auth=true
            - --etcd-cafile=/etc/etcd/ssl/ca.pem
            - --etcd-certfile=/etc/etcd/ssl/etcd1.pem
            - --etcd-keyfile=/etc/etcd/ssl/etcd1-key.pem
            - --etcd-servers=https://192.168.12.51:4111,https://192.168.12.52:4111,https://192.168.12.53:4111
            - --experimental-encryption-provider-config=/etc/kubernetes/ssl/secrets.conf
            - --feature-gates=SCTPSupport=True,CPUManager=False,TokenRequest=True,DevicePlugins=True
            - --insecure-port=0
            - --kubelet-certificate-authority=/etc/openssl/ca.pem
            - --kubelet-client-certificate=/etc/kubernetes/ssl/kubelet-server.pem
            - --kubelet-client-key=/etc/kubernetes/ssl/kubelet-server-key.pem
            - --kubelet-https=true
            - --max-requests-inflight=1000
            - --proxy-client-cert-file=/etc/kubernetes/ssl/metrics.crt
            - --proxy-client-key-file=/etc/kubernetes/ssl/metrics.key
            - --requestheader-client-ca-file=/etc/openssl/ca.pem
            - --requestheader-extra-headers-prefix=X-Remote-Extra-
            - --requestheader-group-headers=X-Remote-Group
            - --requestheader-username-headers=X-Remote-User
            - --secure-port=6443
            - --service-account-key-file=/etc/kubernetes/ssl/service-account.pem
            - --service-account-lookup=true
            - --service-cluster-ip-range=10.254.0.0/16
            - --tls-cert-file=/etc/kubernetes/ssl/tls-cert.pem
            - --tls-private-key-file=/etc/kubernetes/ssl/apiserver1-key.pem
            - --token-auth-file=/etc/kubernetes/ssl/tokens.csv
            - --runtime-config=apps/v1beta1=true,apps/v1beta2=true,extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true,extensions/v1beta1/podsecuritypolicies=true
    
          resources:
            requests:
              cpu: "50m"
          volumeMounts:
            - name: time-mount
              mountPath: /etc/localtime
              readOnly: true
            - name: secret-kubernetes
              mountPath: /etc/kubernetes/ssl
              readOnly: true
            - name: secret-root-ca
              mountPath: /etc/openssl/ca.pem
              readOnly: true
            - name: secret-etcd
              mountPath: /etc/etcd/ssl
              readOnly: true
            - name: audit-kube-apiserver
              mountPath: /var/log/audit/kube_apiserver/
              readOnly: false
            - name: audit-policy-dir
              mountPath: /var/lib/caas/policies
              readOnly: true
      volumes:
        - name: time-mount
          hostPath:
            path: /etc/localtime
        - name: secret-kubernetes
          hostPath:
            path: /etc/kubernetes/ssl
        - name: secret-root-ca
          hostPath:
            path: /etc/openssl/ca.pem
        - name: secret-etcd
          hostPath:
            path: /etc/etcd/ssl
        - name: audit-kube-apiserver
          hostPath:
            path: /var/log/audit/kube_apiserver/
        - name: audit-policy-dir
          hostPath:
            path: /var/lib/caas/policies
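
    After editing, a quick check that the flag is in place on each node (a generic grep, not from the original guide):

    Code Block
    grep runtime-config /etc/kubernetes/manifests/apiserver.yml
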
  • Connect to the first controller in the cluster to run the remaining commands.

    Code Block
    ssh cloudadmin@10.65.1.51


  • Delete the kube-apiserver pods and wait for the pods to be recreated.

    Code Block
    kubectl delete pod -n kube-system kube-apiserver-192.168.12.51
    kubectl delete pod -n kube-system kube-apiserver-192.168.12.52
    kubectl delete pod -n kube-system kube-apiserver-192.168.12.53
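
    To confirm the API server pods have been recreated before continuing (generic kubectl usage):

    Code Block
    kubectl get pods -n kube-system | grep kube-apiserver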


  • Add cluster-admin rights to the tiller service account.

    Code Block
    kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
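
    The binding can be confirmed afterwards (generic check):

    Code Block
    kubectl get clusterrolebinding tiller-cluster-admin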


  • Add the CORD repository and update the chart indexes.

    Code Block
    helm repo add cord https://charts.opencord.org
    helm repo update
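
    The charts used below should now appear in the search index (Helm 2 syntax, matching the install commands in this guide):

    Code Block
    helm search cord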


  • Install the CORD platform.

    Code Block
    helm install -n cord-platform --version 6.1.0 cord/cord-platform
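
    Release status can be checked while the platform charts come up (standard Helm 2 usage):

    Code Block
    helm status cord-platform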


  • Wait until all 3 etcd CRDs are present in Kubernetes.  The command below prints the number of etcd CRDs found.

    Code Block
    kubectl get crd | grep -i etcd | wc -l
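
    The count should eventually reach 3. A small polling loop, as an illustrative sketch (not from the original guide):

    Code Block
    until [ "$(kubectl get crd | grep -ci etcd)" -ge 3 ]; do sleep 10; done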


  • Install the SEBA profile.

    Code Block
    helm install -n seba --version 1.0.0 cord/seba


  • Install the AT&T workflow.

    Code Block
    helm install -n att-workflow --version 1.0.2 cord/att-workflow
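
    At this point all three releases (cord-platform, seba, att-workflow) should be listed as deployed (Helm 2 syntax):

    Code Block
    helm list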


  • Wait for all pods to reach Completed or Running status.

    Code Block
    kubectl get pods -o wide


    Code Block
    titleExample output
    collapsetrue
    NAME                                                              READY   STATUS      RESTARTS   AGE     IP            NODE            NOMINATED NODE   READINESS GATES
    att-workflow-att-workflow-driver-6487d77db-rdwgk                  1/1     Running     0          2m1s    10.244.0.27   192.168.12.52   <none>           <none>
    att-workflow-tosca-loader-7btvq                                   0/1     Completed   4          2m1s    10.244.1.37   192.168.12.51   <none>           <none>
    cord-platform-etcd-operator-etcd-backup-operator-84dfbc689vqsj9   1/1     Running     0          4m9s    10.244.2.13   192.168.12.53   <none>           <none>
    cord-platform-etcd-operator-etcd-operator-8b6c64548-nnj2r         1/1     Running     0          4m9s    10.244.2.14   192.168.12.53   <none>           <none>
    cord-platform-etcd-operator-etcd-restore-operator-7f5f5b95sdxw5   1/1     Running     0          4m9s    10.244.0.13   192.168.12.52   <none>           <none>
    cord-platform-grafana-74c589b6db-jqnpv                            2/2     Running     0          4m9s    10.244.1.24   192.168.12.51   <none>           <none>
    cord-platform-kafka-0                                             1/1     Running     1          4m9s    10.244.1.25   192.168.12.51   <none>           <none>
    cord-platform-kafka-1                                             1/1     Running     0          2m31s   10.244.0.26   192.168.12.52   <none>           <none>
    cord-platform-kafka-2                                             1/1     Running     0          96s     10.244.2.29   192.168.12.53   <none>           <none>
    cord-platform-kibana-7459967f55-z7sk8                             1/1     Running     0          4m9s    10.244.2.18   192.168.12.53   <none>           <none>
    cord-platform-logstash-0                                          1/1     Running     0          4m9s    10.244.0.15   192.168.12.52   <none>           <none>
    cord-platform-onos-5b95b8f489-9s56b                               2/2     Running     0          4m8s    10.244.0.19   192.168.12.52   <none>           <none>
    cord-platform-prometheus-alertmanager-7df4f44f4d-tbfcl            2/2     Running     0          4m9s    10.244.2.15   192.168.12.53   <none>           <none>
    cord-platform-prometheus-kube-state-metrics-76c8565f87-wslpw      1/1     Running     0          4m9s    10.244.0.14   192.168.12.52   <none>           <none>
    cord-platform-prometheus-pushgateway-849c597464-pxhrf             1/1     Running     0          4m9s    10.244.1.26   192.168.12.51   <none>           <none>
    cord-platform-prometheus-server-555b77dcd9-brtfk                  2/2     Running     0          4m9s    10.244.2.17   192.168.12.53   <none>           <none>
    cord-platform-zookeeper-0                                         1/1     Running     0          4m9s    10.244.0.16   192.168.12.52   <none>           <none>
    cord-platform-zookeeper-1                                         1/1     Running     0          3m35s   10.244.1.31   192.168.12.51   <none>           <none>
    cord-platform-zookeeper-2                                         1/1     Running     0          2m47s   10.244.2.27   192.168.12.53   <none>           <none>
    etcd-cluster-4btz528zxt                                           1/1     Running     0          2m38s   10.244.0.25   192.168.12.52   <none>           <none>
    etcd-cluster-qpjdpn9wdl                                           1/1     Running     0          3m2s    10.244.1.35   192.168.12.51   <none>           <none>
    etcd-cluster-vg7v7rcdtn                                           1/1     Running     0          2m22s   10.244.2.28   192.168.12.53   <none>           <none>
    kpi-exporter-9b9f87bd5-7xfcw                                      1/1     Running     3          4m8s    10.244.2.16   192.168.12.53   <none>           <none>
    kpi-exporter-9b9f87bd5-gbzpm                                      1/1     Running     2          4m8s    10.244.0.17   192.168.12.52   <none>           <none>
    sadis-server-6c6f649bb4-bfg4m                                     1/1     Running     1          3m2s    10.244.2.21   192.168.12.53   <none>           <none>
    seba-base-kubernetes-tosca-loader-gsdwx                           0/1     Completed   2          3m2s    10.244.2.22   192.168.12.53   <none>           <none>
    seba-fabric-6879cd6dc9-dd2xt                                      1/1     Running     0          3m2s    10.244.2.19   192.168.12.53   <none>           <none>
    seba-fabric-crossconnect-c684c6df5-wvpjp                          1/1     Running     0          3m2s    10.244.0.21   192.168.12.52   <none>           <none>
    seba-kubernetes-bb4fcd749-z4nr8                                   1/1     Running     0          3m2s    10.244.1.32   192.168.12.51   <none>           <none>
    seba-onos-service-86697c97bf-sd2gz                                1/1     Running     0          3m2s    10.244.0.22   192.168.12.52   <none>           <none>
    seba-rcord-6975778bf6-brxvb                                       1/1     Running     0          3m2s    10.244.2.20   192.168.12.53   <none>           <none>
    seba-seba-services-tosca-loader-ddnkz                             0/1     Completed   4          3m2s    10.244.1.34   192.168.12.51   <none>           <none>
    seba-volt-f6549c677-qqfcg                                         1/1     Running     0          3m2s    10.244.1.33   192.168.12.51   <none>           <none>
    xos-chameleon-645f89cb68-5hvld                                    1/1     Running     0          4m7s    10.244.1.29   192.168.12.51   <none>           <none>
    xos-core-868868885d-x9tjx                                         1/1     Running     0          4m7s    10.244.1.30   192.168.12.51   <none>           <none>
    xos-db-7445f8dcb7-6867w                                           1/1     Running     0          4m8s    10.244.0.18   192.168.12.52   <none>           <none>
    xos-gui-858b98bc9f-pc2b5                                          1/1     Running     0          4m8s    10.244.1.27   192.168.12.51   <none>           <none>
    xos-tosca-fdbbc894b-2v264                                         1/1     Running     0          4m7s    10.244.0.20   192.168.12.52   <none>           <none>
    xos-ws-6c76444b89-kj8q7                                           1/1     Running     0          4m8s    10.244.1.28   192.168.12.51   <none>           <none>
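
    Rather than re-running the command, the pods can be monitored continuously until everything settles (generic usage; -o wide adds the IP and NODE columns seen in the example above):

    Code Block
    watch kubectl get pods -o wide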
    
    
