Can You Beat the AI? How to Quickly Deploy TinyML on MCUs Using TensorFlow Lite Micro

Are you curious about artificial intelligence (AI) and machine learning (ML)? Do you want to know how to use them on the microcontrollers you already work with? In this article, we give you an introduction to ML on microcontrollers, a topic also known as tiny machine learning (TinyML). Get ready to lose against an ESP-EYE at rock, paper, scissors. You will learn about data collection and processing, how to design and train an AI, and how to get it running on an MCU. This example covers everything you need to carry out your own TinyML project from start to finish.

Why Should I Care About TinyML?

Surely you’ve heard of tech companies such as DeepMind and OpenAI. They dominate the ML domain with experts and GPU power. To give a sense of scale, training the best AIs, such as those behind Google Translate, takes months on hundreds of high-performance GPUs running in parallel. TinyML turns the tables by going small: because of memory limitations, large AI models simply don’t fit onto microcontrollers. The figure below shows the disparity in hardware requirements.
