May 18, 2020 · machine learning · embedded systems
Part one of this introduction to Machine Learning for embedded software engineers has been the hard one. When I say hard, I mean hard as in hard skills.
I tried to guide you through the learning process with development boards and code, and I ended the article with the promise that the following part would be softer.
How can this part be softer? Soft as in soft skills?
Here, I just want to point you to some broad concepts and higher-level material that hopefully should allow you to enter a Machine Learning mindset.
I consider this mindset fundamental to understanding the potential Machine Learning can unlock when applied at the edge, on little devices.
In the previous article, I introduced you to Pete Warden, Staff Research Engineer at Google, and the screencasts on his YouTube channel. Those screencasts are meant to extend the content of the TinyML book, of which he is one of the two authors.
The goal of the book is to show the capabilities of Machine Learning on microcontrollers to an audience that is familiar with neither Machine Learning nor embedded software. I think, though, that if you skip the embedded software details, it is a useful resource even for embedded software engineers who are eager to see how Machine Learning can be applied to their well-known tiny devices.
The first six chapters of the book are free to download, and I definitely suggest you read at least the Introduction for that feeling of purpose that guides the people behind the TinyML project. Through those words, and also listening to Pete on his screencasts, it is easy to perceive his enthusiasm for the topic and the possibilities he sees emerging from mixing the worlds of ML and low-power devices.
He is also not alone in promoting these concepts. I hope you already know the embedded.fm podcast: in any case, dedicate an hour to episode 327: a little bit of human knowledge. In this episode, Daniel Situnayake, the other author of the TinyML book, does a perfect job of describing the scope and the utility of Machine Learning at the edge.
Take some expertise from a domain expert and encapsulate it in a model that is put somewhere to answer simple questions with simple answers, pushing understanding of the world down to where it can be applied without human intervention.
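To make that idea concrete, here is a toy sketch of what "encapsulated expertise" can look like on a device: a handful of weights, learned offline, answering one simple question. The weights, threshold, and feature meanings below are made up for illustration; a real edge model would be trained with a framework like TensorFlow and exported to the target.

```python
import math

# Hypothetical parameters, "learned" offline from a domain expert's
# labeled data. On a real device these would come from an exported model.
WEIGHTS = [0.8, -0.5, 1.2]
BIAS = -0.3
THRESHOLD = 0.5

def predict_anomaly(features):
    """Logistic-regression-style inference: three sensor features in,
    one boolean answer out. Small enough to run on a microcontroller."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability > THRESHOLD

# A simple question ("is this vibration pattern anomalous?") gets a
# simple answer, with no human in the loop.
print(predict_anomaly([0.2, 0.1, 0.9]))
```

The point is not the math but the shape of the system: the understanding lives in a few numbers on the device, and the device can act on it by itself.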
Once you have a grasp of the potential of Machine Learning applied to embedded devices, you might be interested in picking up the basic terminology of the Machine Learning world.
I recommend starting very soft with the free AI for Everyone course by Andrew Ng, one of the fathers of modern machine learning.
It is definitely short and easy to follow: perfect for acquiring the Machine Learning mindset I was referring to at the beginning. It does not get you lost in details that are not strictly required when you work with Machine Learning without needing to be an expert in it.
At first, you just need to be able to see where it can be applied and what its limits and advantages are.
Have I been too soft?
I have to admit that with these two articles, I just wanted you to be ready for my Object Classification techniques using the OpenMV Cam H7 talk…
But only because that would be a valid third part for your introduction to Machine Learning as an embedded software engineer!
Joking aside, if you have registered, or can register now, attend the talk and give me your feedback: it is scheduled for May 20th at 12:00 pm (EDT).
In the talk, I will try to get you into more specific concepts and walk you through a complete, concrete example from training to inference, something I think is not easy to find on the Internet at the moment.
BTW, if you are interested in the world of Machine Learning on edge devices, I have opened a LinkedIn group that I invite you to join: Edge Machine Learning.