Friday, May 6, 2022

MLPerf Tiny benchmark shows Plumerai's inference engine on top for Cortex-M

We recently announced Plumerai’s participation in MLPerf Tiny, the best-known public benchmark suite for evaluating machine learning inference tools and methods. In the latest v0.7 round of MLPerf Tiny results, we participated along with seven other companies. The published results confirm the claims we made earlier on our blog: our inference engine is indeed the world’s fastest on Arm Cortex-M microcontrollers. This has now been validated using standardized methods and reviewed by third parties. What’s more, correctness was externally certified by evaluating model accuracy on four representative neural networks and applications from the domain: anomaly detection, image classification, keyword spotting, and visual wake words. In addition, our inference engine is very memory efficient and works well on Cortex-M devices from all major vendors.

                 Visual Wake Words   Image Classification   Keyword Spotting   Anomaly Detection
STM32 L4R5       220 ms              185 ms                 73 ms              5.9 ms
STM32 F746       59 ms               65 ms                  19 ms              2.4 ms
Cypress PSoC-62  200 ms              203 ms                 64 ms              6.8 ms

Official MLPerf Tiny v0.7 inference latency results for Plumerai’s inference engine on three example devices
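For context, a per-inference latency translates directly into an approximate inference rate. A minimal sketch of that arithmetic, using the STM32 F746 figures from the table above:

```python
# Convert the published MLPerf Tiny latencies (milliseconds per inference)
# for the STM32 F746 into approximate inference rates (inferences per second).
latencies_ms = {
    "Visual Wake Words": 59,
    "Image Classification": 65,
    "Keyword Spotting": 19,
    "Anomaly Detection": 2.4,
}

for task, ms in latencies_ms.items():
    rate = 1000.0 / ms  # inferences per second
    print(f"{task}: {rate:.1f} inferences/sec")
```

For example, the 59 ms visual wake words latency corresponds to roughly 17 inferences per second on that device.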

You can read more about Plumerai’s inference engine in our earlier blog post, or try it out with your own models using our public benchmarking service. Contact us if you are interested in the world’s fastest inference software and the most advanced AI on your embedded device, or if you want to know more about the MLPerf Tiny results.