Tuesday, November 9, 2021
At Plumerai we believe in building vertically integrated solutions to enable the most advanced AI on embedded devices. We do extensive data collection, build our own intelligent data pipeline, design our own inference software and tiny AI models, which we train using our own training algorithms. We recently showed that our inference engine for Arm Cortex-M is the fastest and smallest in the world. This way we bring powerful AI to small microcontrollers that previously could not run such complex deep learning tasks.
One of our unique technologies is our Binarized Neural Networks (BNNs), which consist of simple single-bit operations instead of 8-bit multiplications. BNNs save significant memory and power, enabling either more weights and activations in the same silicon footprint to reach higher accuracy, or the same networks to run on smaller and cheaper chips that are powered by smaller batteries. Until now, we have been deploying our BNNs on Arm Cortex-M and Arm Cortex-A processors with great results. However, we felt there was more room for improvement, since these CPUs are built to run typical 8-bit and 32-bit workloads and don't provide native support for the single-bit operations that our BNNs rely on.
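To make the single-bit arithmetic concrete, here is a minimal Python sketch (illustrative only, not Plumerai's implementation) of the standard trick behind binarized networks: when vectors over {-1, +1} are bit-packed into integers, a full multiply-accumulate reduces to an XOR followed by a popcount.

```python
# Binarized dot product: each {-1, +1} element is packed into one bit of an
# integer (bit = 1 encodes +1, bit = 0 encodes -1). A multiply-accumulate then
# reduces to XOR plus a popcount, which is why BNNs map so well to FPGA logic.

def binarized_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two packed n-element {-1, +1} vectors."""
    mismatches = bin(a_bits ^ b_bits).count("1")  # positions where signs differ
    return n - 2 * mismatches  # matching bits contribute +1, mismatches -1

# Reference check against the plain arithmetic version:
a = [+1, -1, +1, +1]  # packed (first element as MSB) as 0b1011
b = [+1, +1, -1, +1]  # packed (first element as MSB) as 0b1101
print(binarized_dot(0b1011, 0b1101, len(a)))  # -> 0
print(sum(x * y for x, y in zip(a, b)))       # -> 0
```

On a CPU this still costs XOR and popcount instructions per machine word; in FPGA fabric the same reduction becomes a tree of LUTs processing all lanes in parallel, which is the native support the paragraph above refers to.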
Some of our customers asked if our AI solutions also support FPGAs, since these provide incredible flexibility, cost efficiency, and tighter integration. FPGAs turn out to be an ideal platform for our models and inference engine, as they let us unlock the full potential of our BNNs: in an FPGA we can natively implement the binary arithmetic that our models need. We therefore decided to develop our own AI accelerator IP core, named Ikva, which we introduce for the first time in this blog post. The Ikva accelerator runs our own BNNs and also efficiently supports 8-bit models. Of course, Ikva is fully supported by our extensive tool flow and by our ultra-fast and memory-efficient inference engine, which is integrated with TensorFlow Lite. A 32-bit RISC-V processor controls Ikva, captures the data from the camera, and provides a programmer-friendly runtime environment. During the development of Ikva, we aimed to design a new hardware architecture for our optimized AI models while keeping it highly flexible and suitable for unknown future models. In contrast to other AI companies that develop only models, only training software, or only AI processors, we focus on the full AI stack, and Ikva completes our offering: from data collection, to training and model development, to highly efficient inference engines, and now all the way down to the most optimized hardware implementations.
As you know, we like AI that is tiny, and Ikva fits in small and low-power FPGAs like the Lattice CrossLink-NX. The architecture is scalable, both in memory and in compute power. This means we can target a wide variety of FPGAs, ensure we fit next to other IP blocks, and extract maximum performance out of the resources that are available in the target FPGA device.
The video above showcases one of our proprietary person presence detection models together with our inference software running on the Ikva IP core in a Lattice CrossLink-NX LIFCL-40 FPGA. This is a low-power and low-cost 6x6mm FPGA that is available off-the-shelf and includes a native MIPI camera interface, further reducing the number of components in the system.
Ikva runs our robust and highly accurate person presence detection model 10x faster on the CrossLink-NX FPGA than on a typical Arm Cortex-M microcontroller. Alternatively, the frame rate can be scaled down to 1 or 2 FPS for those applications where low energy consumption is key.
There are many target applications for person presence detection. For instance in your home, to automatically turn off your TV, your lights, or your heating when there's no one in the room. Outside your home, your doorbell can send you a signal when someone walks up to your front door, or a small camera can detect an unexpected visitor in your backyard. In the office, your PC can automatically lock the screen when you leave. Care for the elderly can be improved when you know how much time they spend in bed, in their living room, or outside. The possibilities are endless, whether it's in the home, on the road, in the city, at the office, or on the factory floor. Accurate, inexpensive and battery-powered person detection will enhance our lives.
Of course, besides running Plumerai’s optimized BNN models, you can also run your own model on the Ikva core, or integrate Ikva into your FPGA-based device. We’re excited to enable extremely powerful AI to go to places it couldn’t go before.
The Ikva IP core, the supporting tool flow, and optimized person detection models are available today. Contact us to receive more information or schedule a video call to see our live demonstration. We’re eager to discuss how we can enable your products with Ikva.