What is it about?
Edge computing uses a network of devices that process data onsite, so large volumes of data can be handled faster. Real-time processing also saves valuable time in critical applications such as self-driving cars. Neuromorphic computers, inspired by the human brain, require very little power, which makes them attractive for edge computing. However, it is not clear how best to train a neuromorphic AI. Two popular training options are evolutionary optimization (EO) and imitation learning (IL). Here, scientists compare the two approaches on a test case in which an autonomous race car is controlled by an edge neuromorphic AI. The results show that EO is more accurate: it produces better-performing small networks that are well suited for edge deployment. However, EO methods also take much longer and are more expensive to train.
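To give a flavor of how EO trains a controller without labeled examples, here is a minimal sketch of a simple elitist evolution strategy. This is not the authors' method: the lane-centering task, the fitness function, and all parameters are invented for illustration, and the "network" is reduced to a two-weight linear controller.

```python
import random

# Hypothetical toy task: steer the car back toward the lane center.
# Fixed set of lateral offsets used to score every candidate controller.
OFFSETS = [i / 10.0 - 1.0 for i in range(21)]  # offsets in [-1, 1]

def fitness(weights):
    # The ideal controller steers opposite to the offset (steer = -offset),
    # so fitness is the negative squared steering error over all offsets.
    return -sum((weights[0] * off + weights[1] + off) ** 2 for off in OFFSETS)

def evolve(generations=200, pop_size=20, sigma=0.1, seed=1):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # random initial weights
    for _ in range(generations):
        # Mutate the current best with Gaussian noise to form offspring,
        # then keep the fittest candidate (elitism: the parent can survive).
        candidates = [best] + [
            [w + rng.gauss(0, sigma) for w in best] for _ in range(pop_size)
        ]
        best = max(candidates, key=fitness)
    return best

weights = evolve()
print(weights)  # the global optimum for this toy task is [-1.0, 0.0]
```

The key contrast with IL is visible here: EO only needs a scalar fitness score per candidate, whereas imitation learning would instead fit the controller to recorded expert steering data.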
Why is it important?
AI applications are popping up everywhere, and onsite data processing saves a lot of time and money. Neuromorphic computers could be the key to unlocking many edge AI applications. This work is one step toward autonomous vehicle control using neuromorphic computing. It shows that existing neuromorphic algorithms can already provide workable solutions, and it offers a roadmap for the future use of edge AI. KEY TAKEAWAY: Neuromorphic computing is a good candidate for edge control applications such as self-driving cars. EO training produces smaller, more accurate networks, making it better suited to edge use than IL.
Read the Original
This page is a summary of: Evolutionary vs imitation learning for neuromorphic control at the edge, Neuromorphic Computing and Engineering, January 2022, Institute of Physics Publishing, DOI: 10.1088/2634-4386/ac45e7.