Hey there!
I started a thread the other day about TFLite on Teensy, mostly focusing on a speech recognition example, and mjs513 broached a similar subject recently here. Very tight, serendipitous timing on all of this; hopefully we can get a critical mass going on Teensy-based ML applications. Looking forward to developments on all these fronts!
Initial thoughts on the resources you mentioned:
1) Picovoice
At first glance, the "Request Access" button peppered throughout their site presents a barrier; I'm a bit reluctant to fill out said form, given that I'm mostly concerned with noncommercial and/or unconventional uses of ML... maybe if I ask nicely they would grant access for academia and exploration? I see that they do have a GitHub repo, but it's all precompiled libraries, so I doubt it would be possible to tweak things to run on Teensy without commercial access. In the end, though, the very existence of such companies is an encouraging reminder that embedded ML at the edge is a viable, vibrant field to explore.
2) pocketsphinx
I had never heard of this project, but I have a lot of respect for CMU and have confidence in their ability to execute; see the Pixy CMUCam. It occurs to me that, even though the documentation only mentions desktop environments, lessons may be learned anyway because they have a good dataset with which to train models. Perhaps we could leverage this to better train TFLite models?
3) Key Word Spotting
What's intriguing is that it mentions TensorFlow, but it's a bit old, predating the recent renaissance we're seeing with Pete Warden's post, etc. The main benefit I derive from this code is that, even though the work predates a lot of the early/mid-2019 advances in this field, it starts a conversation about a number of different models that can be used for microcontroller-based AI/ML.
My short-term plans with TFLite are to better understand how to configure, train, and deploy new models, and to better integrate TF neural networks into more conventional Arduino sketches; with the current state of things, it feels a bit weird that the main Arduino sketch is rather empty while the TF runtime is largely independent and hidden. For example, if we're talking about mechatronics, I'd be thinking of Kiva Systems (now Amazon Robotics) and how they use ML to enable robots to self-optimize in the field via reinforcement learning (see here and here); tinkering with TFLite and various Teensy libraries to do things like stepper motor control would be a much smoother experience if more of this could be done in the Arduino IDE.
- Andrew