
The purpose of this document is to enumerate the APIs exposed by the Akraino Blueprint project to external projects (Akraino and non-Akraino) for interaction and integration.

This document should be used in conjunction with the architecture document to understand the APIs at a modular level and their interactions.

This document should function as a glossary of APIs with their functionality, interfaces, inputs, and expected outcomes, as in the following example:

API1:

...

API1: Kubernetes native APIs

API2: KubeEdge APIs (Kubernetes API extensions)

API3: ML inference framework APIs 

API4: ML management APIs

        This is the link to the ML management API specification.

API5: ML inference offloading APIs 

ML offloading APIs synchronize the ML inference service with the UE (user equipment) side. They serve application developers and enable machine learning apps to offload computation-intensive jobs from the UE device to nearby edge nodes. ML offloading services satisfy the ML computing resource requirement while responding faster than cloud ML services.
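The offload decision described above can be sketched as follows. This is an illustrative sketch only: the node list, latency figures, and selection rule are hypothetical assumptions for this example, not part of the Akraino or KubeEdge specification.

```python
# Hypothetical sketch: pick a nearby edge node for ML inference,
# falling back to the cloud only when no edge node is faster.

def choose_inference_target(edge_nodes, cloud_latency_ms):
    """Return the name of the lowest-latency edge node running the
    ML Engine, or "cloud" when no edge node beats the cloud latency."""
    candidates = [n for n in edge_nodes if n["has_ml_engine"]]
    if not candidates:
        return "cloud"
    best = min(candidates, key=lambda n: n["latency_ms"])
    return best["name"] if best["latency_ms"] < cloud_latency_ms else "cloud"

# Example node inventory (made up for illustration).
nodes = [
    {"name": "edge-a", "latency_ms": 12, "has_ml_engine": True},
    {"name": "edge-b", "latency_ms": 30, "has_ml_engine": False},
]
print(choose_inference_target(nodes, cloud_latency_ms=80))  # edge-a
```

The point of the sketch is only the selection rule: prefer the closest edge node that can serve the inference request, matching the faster-than-cloud property claimed above.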


The ML framework offloading APIs offer ML inference services (e.g., TensorFlow Serving frameworks) from KubeEdge sites through the ML Engine, which contains a pool of commonly used models with standard APIs. The pre-trained machine learning models in the pool have their features published in detail, and their performance has been tested. Models of different categories can be deployed to the pool from the cloud environment; in the future, the pool can open more categories of models to cover a wide variety of user use cases in the ML domain. The ML Engine enables traditional app developers to leverage the fast response time of edge computing and lowers the entry barrier of machine learning knowledge. If app developers do not have an in-house trained model, they can choose from the existing models in the pool; this lets them quickly adopt the KubeEdge ML offloading solution without having to manage models themselves.
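The model-pool idea above can be illustrated with a small sketch. The pool format, model names, and `pick_model` helper are assumptions made for this example; only the category names mirror sectors mentioned in the text.

```python
# Hypothetical model pool: pre-trained, performance-tested models
# that app developers without in-house models can reuse.
MODEL_POOL = [
    {"name": "face-expr-v1", "category": "facial_recognition", "tested": True},
    {"name": "ocr-basic",    "category": "ocr",                "tested": True},
    {"name": "nlu-intent",   "category": "nlu",                "tested": False},
]

def pick_model(category):
    """Return the first performance-tested model in a category,
    or None when the pool has no tested model for it yet."""
    for model in MODEL_POOL:
        if model["category"] == category and model["tested"]:
            return model["name"]
    return None

print(pick_model("ocr"))  # ocr-basic
```

Filtering on a "tested" flag reflects the claim above that models in the pool have published features and tested performance before developers rely on them.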


The KubeEdge ML offloading service includes a facial recognition demo API. The demo mobile application passes a face image to the nearest edge node, which identifies the expression and returns the corresponding facial expression code to the user device. The ML Engine covers vision, video, OCR, facial recognition, and NLU sectors. A developer's application can provide an image or voice sample to the ML Engine via an HTTPS request, and the edge ML offloading service can identify objects, people, text, scenes, activities, etc. This is a key component of the MEC ecosystem for users who have data security or latency concerns and therefore cannot use cloud resources. With highly scalable model acceleration on demand, mobile app developers do not need to worry about device resource limitations or latency issues from the public cloud.



Here is an example of the facial expression API:

Facial Expression Recognition

This operation takes an input image; a successful response is in JSON format with six human facial expressions, each with a score.


       HTTP Method: POST

Request URL: https://{endpoint}/facialExpression


       Parameters

Image type: PNG image

        Image dimensions: greater than 48x48


Response

JSON:
    {
      "appID": "1234567",
      "faceNumber": 1,
      "emotion": {
        "anger": 0.0,
        "contempt": 0.0,
        "fear": 0.0,
        "happiness": 0.196,
        "sadness": 0.0,
        "surprise": 0.803
      }
    }
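A minimal client sketch for the operation above, using only the Python standard library. The `{endpoint}` placeholder is assumed to resolve to a reachable host, the `image/png` content type and the `top_emotion` helper are illustrative assumptions, and the sample response is the one shown in the spec above.

```python
import json
import urllib.request

def top_emotion(response):
    # Pick the highest-scoring expression from the "emotion"
    # map in the JSON response.
    emotions = response["emotion"]
    return max(emotions, key=emotions.get)

def recognize_expression(endpoint, png_bytes):
    # POST the PNG image (dimensions must exceed 48x48) to the
    # facialExpression API and decode the JSON response.
    # Note: the Content-Type header is an assumption, not spec.
    req = urllib.request.Request(
        f"https://{endpoint}/facialExpression",
        data=png_bytes,
        headers={"Content-Type": "image/png"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Parsing the sample response from the spec above (no network needed).
sample = {
    "appID": "1234567",
    "faceNumber": 1,
    "emotion": {
        "anger": 0.0, "contempt": 0.0, "fear": 0.0,
        "happiness": 0.196, "sadness": 0.0, "surprise": 0.803,
    },
}
print(top_emotion(sample))  # surprise
```

In a real mobile app the `recognize_expression` call would target the nearest edge node, and the returned scores would be mapped to the facial expression code mentioned in the demo description.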