“The future is already here – it’s just not evenly distributed yet.” – William Gibson
Remember when the internet emerged and we wondered what it would become? While it was hard to predict its first-order effects (e.g., social media) and second-order effects (e.g., blockchain), we knew its impact would be massive. That’s how I feel about Edge AI.
As some of you might know, Edge AI refers to machine learning algorithms processing data and making decisions on the device itself, without relying on the cloud. It’s the same fascinating technology that makes Face ID work on our smartphones without an internet connection. Since data is processed locally on the device, the technology offers the potential for much better privacy, security, power efficiency and latency by design. The applications of Edge AI are still in their infancy, but it’s certainly going to open up entire worlds of use cases that we never imagined possible.
Okay, enough talk!
We built a simple Edge AI prototype during one of our hackathons at Mercedes-Benz. What you see in the video is a tiny AI computer (yes, the left one on top of a cardboard stand) detecting faces in real time without an internet connection.
So, what’s the big deal?
Face detection using AI has been around for a while, so what’s worth mentioning here? Yes, object detection (including face detection) is a problem AI largely solved years ago. However, object detection is usually done in three steps:
- The user’s device sends the data (e.g., an image of a cat) over the internet to a cloud computing provider such as AWS,
- The decision (e.g., cat present or not) is made in the cloud, and
- The result is sent back to the user’s device.
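To make that round trip concrete, here’s a minimal client-side sketch in Python. The endpoint URL and the response field are hypothetical placeholders, not a real cloud provider API:

```python
import requests

# Hypothetical endpoint and response schema -- illustrative only,
# not a real cloud provider API.
CLOUD_ENDPOINT = "https://cloud-provider.example.com/detect"

def detect_cat_in_cloud(image_path: str) -> bool:
    # Step 1: ship the image from the user's device over the internet
    with open(image_path, "rb") as f:
        response = requests.post(CLOUD_ENDPOINT, files={"image": f})
    response.raise_for_status()
    # Step 2 happens server-side; step 3: the decision travels back to us
    return response.json().get("cat_present", False)

print(detect_cat_in_cloud("cat.jpg"))
```

Notice that every single call pays the cost of shipping the image out and waiting for the answer to come back.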
This round trip raises questions about privacy, security and the time the round trip takes (i.e., latency), among others. So why is the decision made in the cloud? Because the machine learning algorithms that make these decisions need enormous computing power, and the cloud can provide that in a scalable manner. This is precisely where Edge AI is changing the game. With Edge AI, the heavy computing power needed is made available on the user’s device (at the edge) itself, thanks to Moore’s Law and GPUs. For example, the AI computer you see in the video packs a GPU, CPU and RAM into the size of a credit card. For parallel AI workloads, it can outpace the CPU of a normal laptop, and it costs just $99. Amazing, right?
For those technically inclined: we used an NVIDIA Jetson Nano Developer Kit connected to a Raspberry Pi Camera as the hardware. It runs a Python deep learning algorithm to detect faces in real time. If you’d like to experiment or have questions, let me know. Happy to help!
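Our hackathon code isn’t reproduced here, but the sketch below shows the general shape of such an on-device detection loop using OpenCV. A bundled Haar cascade stands in for the deep learning model we actually used; the capture–detect–draw loop structure is the same:

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade; it stands in
# here for the deep learning model used in the actual prototype.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Camera index 0 works for a USB webcam; on the Jetson Nano, the
# CSI-attached Pi Camera is usually opened via a GStreamer pipeline instead.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection runs entirely on the device -- no network round trip
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Edge AI face detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Because every frame is processed locally, this loop keeps running even with the network unplugged — which is exactly the point of Edge AI.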
Footnotes:
- This article is intended to make the topic of Edge AI accessible to people from diverse disciplines, so simplicity is occasionally favored over technical precision.
- The prototype was developed and demoed in July 2019, roughly three months after the Jetson Nano was launched.