What is Edge AI and why is it important

AI systems are being deployed to solve and optimise ever more complex problems every day, but most of them rely on extensive cloud computing resources to function. For many applications this is enough, yet running AI systems in the cloud has its drawbacks. This is where Edge AI comes into play. Simply put, Edge AI means running AI systems on the device itself.

Currently, almost any technology or process that relies on AI uses extensive cloud computing resources. Sensor data is collected by the device, sent to the cloud, analysed there, and the result is sent back to the device. Just open the Google Lens application on your phone. When you scan an image or an object with Google Lens, the picture is sent to Google's servers, analysed there, and the results are sent back to your phone. Without internet connectivity, Google Lens is pretty much useless. And this is exactly how most AI systems work.

Undeniably, running AI in the cloud has its advantages. Most AI systems require a lot of computational resources, and running them in the cloud reduces hardware costs. Particularly for training AI systems, cloud computing can make the whole process go a lot faster.

But having AI in the cloud has its drawbacks too.

Chief among them are bandwidth and the costs associated with it. Imagine an AI system that takes live CCTV footage to monitor a store for theft or to detect traffic violations. The camera has to send HD footage to a remote cloud server every second. Think about the data costs associated with that.
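
To put a rough number on that bandwidth, here is a quick back-of-envelope sketch in Python. The 5 Mbps bitrate is an assumed figure for a typical 1080p stream, not a number from any particular camera vendor.

    # Back-of-envelope estimate of the data one cloud-connected HD camera uploads.
    STREAM_BITRATE_MBPS = 5            # assumed average bitrate of a 1080p video stream
    SECONDS_PER_DAY = 24 * 60 * 60

    megabits_per_day = STREAM_BITRATE_MBPS * SECONDS_PER_DAY
    gigabytes_per_day = megabits_per_day / 8 / 1000   # megabits -> megabytes -> gigabytes
    print(f"One camera uploads roughly {gigabytes_per_day:.0f} GB per day")   # ~54 GB

Multiply that by every camera in a store or at an intersection, and the monthly data bill adds up quickly.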

This reliance on connectivity also severely limits the applications of AI in areas with limited internet access. AI in healthcare, farming and other fields that would be especially useful in rural areas cannot depend on internet connectivity to function properly. Reliance on remote computing is also a problem for AI systems where decisions have to be made in seconds or less. For example, if a pedestrian jumps in front of an autonomous car, the car simply does not have the time to send the video footage to a remote cloud server, have it analysed and get the decision back before it needs to hit the brakes.

Sending sensor data to the cloud also poses serious security risks, as has been demonstrated many times in the recent past with smart home devices. Earlier this year, footage from a Xiaomi camera was sent to the wrong Google account, and there was also the incident in which Amazon sent Alexa recordings to the wrong user. These cases also demonstrate the privacy issues of sending personal data to remote cloud servers. Consumers don't appreciate their photos, videos or questions being sent to cloud servers.

Here's where AI on the edge comes into play.

Here, the collection of sensor data and its processing are carried out on the device itself. But this doesn't mean the system is fully disconnected from the cloud. How much of the data is processed on the device varies with the application, but the training is rarely carried out on the edge. Companies usually carry out the training phase in the cloud, and only the inference happens on the edge.
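
A minimal sketch of that split, assuming a TensorFlow workflow (the tiny model and file name below are placeholders, not any company's actual pipeline): the heavy training runs in the cloud, and the trained model is converted into a compact format that is shipped to the device for inference.

    import tensorflow as tf

    # Cloud side: build and train a model on powerful hardware.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(train_x, train_y, epochs=10)   # the training data stays in the cloud

    # Convert the trained model to TensorFlow Lite for on-device inference.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)                  # this small file is what ships to the edge device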

For example, consider the Echo devices. The detection of the wake word "Alexa" is carried out on the device, with an algorithm that was trained in the cloud using many recordings.
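
Amazon has not published its wake-word engine, but a generic on-device detection loop could look roughly like the sketch below, assuming a TensorFlow Lite model. The model.tflite file and the get_audio_frame() helper are hypothetical placeholders for a real trained model and microphone buffer.

    import numpy as np
    import tensorflow as tf

    # Load the compact model that was trained in the cloud and shipped to the device.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]

    def is_wake_word(audio_frame: np.ndarray) -> bool:
        # Runs entirely on the device; no audio leaves it at this stage.
        interpreter.set_tensor(input_index, audio_frame.astype(np.float32))
        interpreter.invoke()
        score = interpreter.get_tensor(output_index)[0][0]
        return score > 0.9   # only after this fires does the device start streaming to the cloud

    # while True:
    #     if is_wake_word(get_audio_frame()):   # get_audio_frame() is a hypothetical mic helper
    #         stream_audio_to_cloud()           # hypothetical; cloud processing starts only now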

Why AI on Edge?

In the case of the Echo devices, only the wake word is processed on the device; the rest of the processing is performed in the cloud. This demonstrates the relevance of edge AI with regard to bandwidth and cost. Without on-device wake-word detection, every sound picked up by the microphones would have to be sent as a continuous audio stream to the cloud just to check whether someone is talking to the device. Imagine the amount of data that would have to be sent to the cloud if that were the case.
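
For a rough sense of scale, assuming uncompressed 16 kHz, 16-bit mono audio (a typical sample format for speech models, not Amazon's published figure):

    SAMPLE_RATE_HZ = 16_000
    BYTES_PER_SAMPLE = 2             # 16-bit mono audio
    SECONDS_PER_DAY = 24 * 60 * 60

    gb_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY / 1e9
    print(f"~{gb_per_day:.1f} GB of raw audio per device per day")   # ~2.8 GB

That is per device, per day, just to decide whether anyone said the wake word at all.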

Edge AI

Alexa devices represent one way of doing AI on the edge, where only a small amount of processing is done on the device. At the other end of the spectrum, we have autonomous cars. In these cars, the entire processing is done on the device itself (the computers onboard the car). Autonomous cars also reiterate a point mentioned earlier: while the processing is done on the device, they are still not fully disconnected from the cloud. The sensor data is still sent to the cloud and used for further training, and if any issues are reported, the data is used to fix the problem.

The use of edge AI for autonomous driving also showcases another advantage of on-device AI. For systems that simply cannot fail, relying on constant internet connectivity won't work; the latency will be too high for effective decision making. Imagine a robot surgeon performing surgery. Decisions have to be made in real time with zero lag to perform such a task safely. Even though 5G promises faster internet speeds, it remains to be seen whether they will be enough. Perhaps a higher proportion of the processing can be shifted to the cloud, but the majority of it will still have to be done on the device.

Another advantage offered by edge AI is security and privacy. Limiting the reliance on internet connectivity greatly reduces the vulnerability of electronic devices: an attacker would have to be in physical proximity to target a device that is not connected to the internet. It also greatly reduces privacy concerns. With Edge AI, consumers can worry a lot less about a stranger listening in on their private lives.

Of course, on-device AI has its obstacles.

As you can guess, the major obstacle for edge AI is the computing power required. The reason AI systems depend on the cloud is the enormous computing power they need. Switching completely to Edge AI will require some serious hardware.

But the future shines bright.

Advanced algorithms that require less computing power will bring fully on-edge AI closer to reality. Just as the shift from CPUs to GPUs improved the computational power available to machine learning systems, custom chips and hardware built for AI will make it easier to deploy AI on the edge.

Researchers, as well as major chip manufacturers, are looking to novel technologies such as quantum computing and carbon nanotubes to improve computational prowess. Neuromorphic hardware, with chips designed to mimic the human brain, will be capable of processing AI workloads more efficiently and with less power, making edge AI viable for more applications.
