Microchip Technology’s SuperFlash memBrain™ neuromorphic memory solution provides a substantial reduction in compute power, improving AI inference at the edge.
As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as computer vision and voice recognition. Microchip Technology Inc., via its Silicon Storage Technology (SST) subsidiary, is addressing this challenge by significantly reducing power consumption with its analog memory technology, the memBrain™ neuromorphic memory solution. Based on Microchip’s industry-proven SuperFlash® technology and optimized to perform vector matrix multiplication (VMM) for neural networks, the analog flash memory solution implements VMM through an analog in-memory compute approach, enhancing AI inference at the edge.
Because current neural network models may require 50 million or more synapses (weights) for processing, providing enough bandwidth to an off-chip DRAM becomes challenging, creating a bottleneck for neural network computing and increasing overall compute power. In contrast, the memBrain solution stores synaptic weights in on-chip floating-gate cells, offering significant improvements in system latency. When compared with traditional digital DSP and SRAM/DRAM based approaches, it delivers 10 to 20 times lower power and a significantly reduced overall bill of materials (BOM).
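To see why keeping weights in the memory array helps, consider the core operation involved. In analog in-memory compute, each stored weight acts as a cell conductance; applying input voltages across the array produces output currents that sum naturally along each line, yielding a vector-matrix multiply in a single analog step. The NumPy sketch below is purely illustrative (the matrix sizes, values, and variable names are assumptions, not Microchip's actual implementation); it shows the VMM that the analog array computes without ever shuttling weights to external DRAM.

```python
import numpy as np

# Illustrative sketch only -- not Microchip's actual implementation.
# In an analog in-memory array, each synaptic weight is stored as a
# cell conductance G[i][j]. Driving the inputs with voltages V[j]
# produces currents I[i] = sum_j G[i][j] * V[j] (Ohm's law plus
# Kirchhoff's current law), i.e. a vector-matrix multiplication.

rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 1.0, size=(4, 8))  # conductances (arbitrary units)
inputs = rng.uniform(0.0, 1.0, size=8)        # input voltages

# The analog array produces this result in one step, with the weights
# staying in place -- no off-chip DRAM traffic for the weight matrix.
currents = weights @ inputs

# Digital reference computation for comparison.
reference = np.array(
    [sum(weights[i, j] * inputs[j] for j in range(8)) for i in range(4)]
)
assert np.allclose(currents, reference)
print(currents)
```

In a digital DSP/DRAM architecture, every one of those multiply-accumulate terms requires fetching a weight over the memory bus; the in-memory approach eliminates that data movement, which is where the latency and power savings described above come from.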