
The ML offloading APIs offer ML inference services (supporting different ML frameworks) from KubeEdge sites, backed by a pool of commonly used models. The models in the pool have detailed feature descriptions published and their performance has been tested, and the pool is organized into categories that cover a wide variety of use cases in the ML domain. The ML APIs enable traditional app developers to leverage the fast response time of edge computing while lowering the entry barrier of machine learning expertise: by calling the ML offloading APIs in an app, stable new ML features can be delivered to user devices from the nearest edge node. The ML engine covers vision, video, OCR, facial recognition, and NLU. Applications provide inputs (image or voice) to the ML offloading APIs via HTTPS requests, and the edge ML offloading service can identify objects, people, text, scenes, activities, and so on. This is a key component of KubeEdge for addressing users' data-security and latency concerns: with scalable, on-demand model acceleration, mobile app developers no longer need to worry about on-device resource limits or the latency of a round trip to the public cloud.

The ML offloading APIs are a set of intelligence services on the edge cloud that offer various AI capabilities and can be triggered by mobile applications. For example, they can be used to determine whether an image contains faces, or to translate text into different languages. These APIs are available only once developers deploy them to KubeEdge. The ML offloading APIs support different ML categories, including vision, ASR, and a dialog engine, with more to come in the future, and are served as REST web services.
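To make the REST invocation concrete, here is a minimal sketch of how an application might package an image into an HTTPS request body for such a service. The endpoint URL and the JSON field names (`image`, `format`) are assumptions for illustration; the real paths and schema come from the published model pool specification.

```python
import base64
import json

# Hypothetical edge-node endpoint; the actual path depends on how the
# KubeEdge ML offloading service is deployed.
ENDPOINT = "https://edge-node.example.com/ml/v1/vision/classify"

def build_request_body(image_bytes: bytes) -> bytes:
    """Encode raw image bytes as a JSON request body for the REST ML API."""
    payload = {
        # Binary image data must be base64-encoded to travel inside JSON.
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "format": "jpeg",  # assumed field; check the published model spec
    }
    return json.dumps(payload).encode("utf-8")

body = build_request_body(b"\xff\xd8\xff\xe0 fake jpeg bytes")
decoded = json.loads(body)
print(decoded["format"])  # jpeg
```

Because the body is plain JSON over HTTPS, any mobile or web client can call the service without an ML-specific SDK.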

Here is an example of a Facial expression API invocation:
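The following is a hedged sketch of such an invocation using only the Python standard library. The URL, the request headers, and the response shape (a `faces` list with a bounding box, an expression label, and a confidence score) are all illustrative assumptions, not the service's actual contract.

```python
import json
import urllib.request

# Hypothetical facial-expression endpoint on the nearest edge node.
URL = "https://edge-node.example.com/ml/v1/vision/facial-expression"

def make_request(body: bytes) -> urllib.request.Request:
    """Build (but do not send) the HTTPS POST carrying the JSON body."""
    return urllib.request.Request(
        URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# An illustrative response an edge node might return for one detected face.
sample_response = json.dumps({
    "faces": [
        {"box": [34, 50, 120, 140], "expression": "happy", "confidence": 0.93}
    ]
})

# The client simply parses JSON; no ML framework is needed on the device.
for face in json.loads(sample_response)["faces"]:
    print(face["expression"], face["confidence"])
```

In a real deployment the app would send the request with `urllib.request.urlopen` (or any HTTP client) and read the JSON response from the edge node instead of the hard-coded sample above.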