Uncanny Eyes is getting expensive

Michael, I get customization requests frequently, so it's usually not a problem to build variations of the standard products, such as adding the memory chips but leaving the headers loose. In that case, just order the standard product and make a comment in the cart to leave the headers loose, or drop me an email.

I also get requests to use double headers for logic analyzer hookups, or some kind of stacking headers. For stacking headers, I use the extra-tall stacking headers that have square pins, instead of the not-so-great flat-bladed Arduino versions, and cut them down to 24 pins to fit. The pins are also extra long and can be cut down in length so assemblies sit flush if needed; otherwise they sit proud by about 3/16". It doesn't make for the shortest assembly, but the connections are solid.

Here are a couple of examples:

Attachments: Teensy 4.1 with stacking header.jpg, Teensy 4.1 with Double Header.JPG, Teensy 4.1 and Audio Board Stack.jpg, Teensy 4.1 and Audio Board Stack 2.jpg

BTW, thanks for the tip about the DFPlayers when I ran into a problem using Uncanny Eyes with the Teensy audio library.

I sourced some straight from the original manufacturer and created an adapter board that mounts 2 of them for a zombie Operation game for my wife's work. I am using 2 so that I can have eerie background music or loud scares from behind the user, while also having various zombie sounds coming from the game itself. The board also connects 2 Uncanny Eyes LCDs (round LCDs inbound), a string of addressable LEDs for lighting effects, a PIR motion sensor input to automatically activate/deactivate the game, and a tweezer vibration motor hookup. Now I just need to find the time to actually complete the game before Halloween!

Attachment: Prop Adapter.jpg
 
Michael, I get customization requests frequently, so it's usually not a problem to build variations of the standard products, such as adding the memory chips but leaving the headers loose. In that case, just order the standard product and make a comment in the cart to leave the headers loose, or drop me an email.

I also get requests to use double headers for logic analyzer hookups, or some kind of stacking headers. For stacking headers, I use the extra-tall stacking headers that have square pins, instead of the not-so-great flat-bladed Arduino versions, and cut them down to 24 pins to fit. The pins are also extra long and can be cut down in length so assemblies sit flush if needed; otherwise they sit proud by about 3/16". It doesn't make for the shortest assembly, but the connections are solid.
It might be helpful to list as options the various ways things can be configured, and of course pictures can help.

When I'm doing breadboarding, etc., I tend to prefer having the Teensy with female stacking headers on each of the sides, plus 5-pin female headers for the row of pins next to the SD card and for USB host. Unfortunately, the placement of the Ethernet header is a problem. It is hard to use the Ethernet pins along with female stacking headers, unless you make a special 2x3 2mm-pitch header cable with one side shaved off.

Or possibly solder a short cable to the pins that then has the 2x3 pins to hook up the Ethernet stuff. The same might work for USB host as well. Of course, the trouble with soldering cables to boards is that eventually the cable may work its way to being unsoldered.

In my last order, in fact, I was thinking I should have a Teensy with the Ethernet pins, but that being said, I've never used the Ethernet support...

In terms of mounting the audio shield with a Teensy 4.1 that has just male pins soldered in, as far as I can see there are several options:
  • Solder the audio shield to a prototype board (or put male headers on one board and female headers on the other) where you have one more parallel row of pins, and solder the male pins on the prototype board in the next row over. This way, in a breadboard (or a prototype board with parallel rows of headers), the audio shield connects on the pins outside of the Teensy.
  • Put female headers or female stacking headers on the audio shield and mount it underneath the Teensy. If you are using a breadboard or prototype board, you could put the 24-pin stacking headers on the audio shield, and just hope you don't get random short circuits from things hitting the exposed pins.
  • Put a prototype board between the Teensy and the audio shield. The prototype board would have 24 pins on each side, but the first 14 would be stacking headers, and the last 10 pins would be clipped. This allows you to make other connections to the Teensy on the prototype board (such as for the 2 SPI displays for Uncanny Eyes).
  • Get a parallel prototype board (such as those made by ElectroCookie), and just connect the 28 wires between the two sides. IIRC, there are issues if the LRCLK or BCLK pin wire is too long, and it may or may not be an issue if you extend the Teensy to another board.
  • Use some other audio output, such as S/PDIF.

BTW, thanks for the tip about the DFPlayers when I ran into a problem using Uncanny Eyes with the Teensy audio library.
You are welcome.

Yes, part of my motivation was to start re-enabling sounds for the Uncanny Eyes (which I had briefly done with the Teensy 3.5/3.6). Your original query prompted me to see what the issue with the audio processing is. With my example handling mono sounds from memory, I didn't see choppiness in playing the audio, but as I said, there might be dead space where it can't start the next recording until the eyes code finishes a whole pass. However, if you are reading from an SD card, it might have issues with latency.
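For reference, the shape of that memory-playback test looks roughly like this (AudioSampleScare and drawEyesOnePass are placeholder names for illustration, not the actual ones from my sketch):

Code:
#include <Audio.h>

extern const unsigned int AudioSampleScare[];  // hypothetical table from wav2sketch
void drawEyesOnePass();                        // hypothetical: one full pass of the eye code

AudioPlayMemory      playMem;
AudioOutputI2S       i2s1;
AudioConnection      patchCord1(playMem, 0, i2s1, 0);
AudioConnection      patchCord2(playMem, 0, i2s1, 1);
AudioControlSGTL5000 sgtl5000_1;

void setup() {
  AudioMemory(10);            // a few blocks suffice for memory playback
  sgtl5000_1.enable();
  sgtl5000_1.volume(0.5);
}

void loop() {
  drawEyesOnePass();
  // Since loop() only regains control between eye passes, the gap before
  // the next clip starts can be as long as one whole pass.
  if (!playMem.isPlaying()) {
    playMem.play(AudioSampleScare);
  }
}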

I sourced some straight from the original manufacturer and created an adapter board that mounts 2 of them for a zombie Operation game for my wife's work. I am using 2 so that I can have eerie background music or loud scares from behind the user, while also having various zombie sounds coming from the game itself. The board also connects 2 Uncanny Eyes LCDs (round LCDs inbound), a string of addressable LEDs for lighting effects, a PIR motion sensor input to automatically activate/deactivate the game, and a tweezer vibration motor hookup. Now I just need to find the time to actually complete the game before Halloween!
Yep, there is always the time element.
 
I thought about doing an à la carte version where you could just select from a list of options for connectors and memory, but I couldn't come up with a clean way to do it without it being overly complicated. I may make a standard variation with memory soldered on and tested but pins left loose, so the user can configure the pins how they want. Many people who don't want to mess with the memory chips are fine with soldering headers on.

Your original query prompted me to see what the issue with the audio processing is. With my example handling mono sounds from memory, I didn't see choppiness in playing the audio, but as I said, there might be dead space where it can't start the next recording until the eyes code finishes a whole pass. However, if you are reading from an SD card, it might have issues with latency.
I was using large files and so needed to run off an SD card. I tried both the T4.1 slot and the audio adapter's. The T4.1 slot performed a bit better, but still not well, and it is probably some type of latency issue reading the files off the SD card, brought on by the high demands of Uncanny Eyes. For some applications, like the side project I am working on, the DFPlayer is probably a better/simpler solution anyway.
 
I thought about doing an à la carte version where you could just select from a list of options for connectors and memory, but I couldn't come up with a clean way to do it without it being overly complicated. I may make a standard variation with memory soldered on and tested but pins left loose, so the user can configure the pins how they want. Many people who don't want to mess with the memory chips are fine with soldering headers on.

Pretty much, that is me. I can do through-hole soldering. I've done SMT soldering in the past, but these days I prefer somebody else do it. And IIRC, the last time I looked at the flash memory, I hadn't seen that the 1 or 2 gigabit flash chips were working.

Also, if you don't have at least a description on the web page of what is possible, people will assume you can only do the options that are listed. And if they want something different, they might assume they would need a large order for you to do special processing (or they'll go elsewhere). The flip side, with global supply chains, is that if you don't stock a particular part they want to use, it may be hard to get that particular thing for their order.

I was using large files and so needed to run off an SD card. I tried both the T4.1 slot and the audio adapter's. The T4.1 slot performed a bit better, but still not well, and it is probably some type of latency issue reading the files off the SD card, brought on by the high demands of Uncanny Eyes. For some applications, like the side project I am working on, the DFPlayer is probably a better/simpler solution anyway.

Yes, I suspect you are right. That way you don't have two things each wanting to disable interrupts and own the main loop. I would imagine driving loads of neopixel/WS2812B LEDs, running loads of servos, and/or doing high-speed UART/CAN/MIDI streams would be other cases where it might make sense to dedicate a separate microprocessor to each task, and just have the boss send out low volumes of data to tell each one what to do.
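A minimal sketch of that boss pattern, assuming a helper micro listening on Serial1 (the one-byte command protocol here is made up purely for illustration):

Code:
enum Cmd : uint8_t { CMD_IDLE = 0, CMD_AMBIENT = 1, CMD_SCARE = 2 };

bool motionDetected();   // hypothetical PIR check
void drawEyesFrame();    // hypothetical eye-rendering pass

void sendCmd(Cmd c) {
  Serial1.write((uint8_t)c);  // a byte per event: negligible load on the eye loop
}

void setup() {
  Serial1.begin(115200);
}

void loop() {
  drawEyesFrame();
  if (motionDetected()) {
    sendCmd(CMD_SCARE);       // the helper micro handles the timing-critical playback
  }
}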
 
I've been continuing to hack away at the Uncanny/M4 Eyes code and thought I'd post a bit of an update on where I'm at:

I've managed to merge in a lot of the M4 codebase, as well as make other improvements, so now my Teensy code supports the following:
  • Support for config.eye configuration files to specify all required parameters for an eye. Note this is similar to but NOT in the same format as the M4 config.eye files.
  • Python code takes the config.eye file and generates C code containing all lookup tables etc for the Teensy to use.
  • A polar angle/distance table is applied to both iris and sclera at runtime (rather than just the iris). This reduces the size of the sclera textures and will allow for angles/spins etc of the sclera.
  • The upper and lower eyelids are now stored as a start/end location for each column, rather than a greyscale threshold table. This hugely reduces the amount of space required to store the eyelids without any impact on performance (see the sketch after this list).
  • Displacement mapping is now applied when rendering the iris/sclera for a more realistic curved effect.
  • Slit pupils are specified in the config.eye file and the polar mappings are generated from this. Previously a greyscale threshold image was needed. This simplifies the creation of new slit eyes, with the downside that only vertical slits are currently supported (e.g. goat's eyes are no longer possible). This could be addressed with a more sophisticated algorithm.
  • The generation of slit pupil polar mappings is much faster than the M4 implementation.
  • The easing table has been removed; the easing is now calculated at runtime.
  • The various M4 eyes have been ported over.
  • Displacement and polar mapping tables can be reused across multiple eyes if they use the same parameters. This hugely reduces the space required to store multiple eyes.
  • Replaced a lot of the #defines with normal logic, so eyes and various other settings can be changed at runtime.
  • Removed the recursion, so loop() now only renders a single frame per call.
  • Probably a few other things I've forgotten.
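To illustrate the eyelid change mentioned above, here's roughly what the per-column representation looks like; the names here are illustrative, not the actual ones in the codebase:

Code:
#include <stdint.h>

// Two bytes per screen column instead of a full threshold image:
// 240 columns x 2 bytes = 480 bytes per lid, versus 240 x 240 = 57,600 bytes
// for a greyscale threshold table.
struct EyelidColumn {
  uint8_t open;    // lid edge row in this column when the eye is fully open
  uint8_t closed;  // lid edge row when the eye is fully closed
};

extern const EyelidColumn upperLid[240];   // generated by the Python codegen

// The current lid edge is a simple interpolation between the two positions,
// with blink = 0.0 (open) .. 1.0 (closed):
static inline uint8_t lidEdge(const EyelidColumn &c, float blink) {
  return (uint8_t)(c.open + (c.closed - c.open) * blink + 0.5f);
}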


With the above changes, it's now easy to swap eyes at runtime. Eyes generally take up a lot less storage space than previously, so I'm currently able to fit 9 or so eyes on a Teensy 4.0. Here's an example video of this in action:



I also hacked together a way to run it on my desktop PC so I could more easily test and debug the code:


Still to do:
  • Performance: currently two eyes render at around 45fps, which is down from the 52fps I was getting before implementing most of the above. I'm hoping to regain most/all of this once I've had a chance to optimise the new code.
  • Add support for different configurations for left/right eyes.
  • Add support for spin/angle/mirror, as per M4.
  • The polar distance map is generally unique per eye, so experiment with generating the polar distance map dynamically. If it can be done fast enough, this will hopefully reduce the storage required for eyes by another 30-40% (see the sketch after this list).
  • Generate binary files that the Teensy can load from SD card etc, so eyes can be swapped/altered without requiring a recompile.
  • Make a PC based UI tool for creating/editing/generating eyes, with realtime preview.
  • Lots of other stuff!
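As a sketch of the dynamic polar map idea from that list: something along these lines, computed once per eye change into RAM (there's comfortably enough on a Teensy 4.x for two 240x240 byte maps). The 0-255 scaling is an assumption based on the byte-sized tables mentioned earlier:

Code:
#include <math.h>
#include <stdint.h>

// Fill size x size byte maps: angle 0-255 around the circle, distance 0-255
// from centre to rim, with 255 also marking pixels outside the eye.
void buildPolarMaps(uint8_t *angleMap, uint8_t *distMap, int size) {
  const float centre = (size - 1) / 2.0f;
  const float maxR = size / 2.0f;
  for (int y = 0; y < size; y++) {
    for (int x = 0; x < size; x++) {
      const float dx = x - centre, dy = y - centre;
      const float r = sqrtf(dx * dx + dy * dy);
      const float a = atan2f(dy, dx);  // -pi .. pi
      const int i = y * size + x;
      angleMap[i] = (uint8_t)((a + (float)M_PI) * (255.0f / (2.0f * (float)M_PI)));
      distMap[i] = (r < maxR) ? (uint8_t)(r * (255.0f / maxR)) : 255;
    }
  }
}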

Note this is all still very much a work in progress (and not everything above is even checked in yet) so I wouldn't recommend using this unless you're OK with getting your hands dirty with very rough and incomplete code. The config file and codegen are likely going to change further, and there's still a lot that hasn't been tested or straight up doesn't work. This includes displays other than 240x240 GC9A01A, rotations/spin/mirroring, external input from buttons/light sensors, ... I'm not even sure I'll fully test/implement everything since I personally am only interested in running it on two 240x240 displays for now, but I'm hoping to at least get it into a state that's passable enough for others to easily use and improve.
 
I've been continuing to hack away at the Uncanny/M4 Eyes code and thought I'd post a bit of an update on where I'm at:

I've managed to merge in a lot of the M4 codebase, as well as make other improvements, so now my Teensy code supports the following:

Cool.

With the above changes, it's now easy to swap eyes at runtime. Eyes generally take up a lot less storage space than previously, so I'm currently able to fit 9 or so eyes on a Teensy 4.0.


I also hacked together a way to run it on my desktop PC so I could more easily test and debug the code:

Cool again.

Still to do:
  • Performance: currently two eyes render at around 45fps, which is down from the 52fps I was getting before implementing most of the above. I'm hoping to regain most/all of this once I've had a chance to optimise the new code.

I must admit, after watching eyes at 6 fps (running the un-optimized version for the 128x128 display on a Teensy 4.0 with 2 displays on the same SPI bus), that while 52fps is nicer than 45fps, I suspect that for a real-world display even 6fps can be fine. I do wonder if there is a limit where the display can be updated too fast, such that it can trigger incidents in people who suffer from epilepsy.

I wonder whether adding back in the special CS/DC support for DMA would help on the Teensy 4.1, since we have 3 CS0 pins (10, 36, and 37) and 2 CS1 pins (0 and 38).

  • Add support for different configurations for left/right eyes.
  • Add support for spin/angle/mirror, as per M4.
  • The polar distance map is generally unique per eye, so experiment with generating the polar distance map dynamically. If it can be done fast enough, this will hopefully reduce the storage required for eyes by another 30-40%.
  • Generate binary files that the Teensy can load from SD card etc, so eyes can be swapped/altered without requiring a recompile.
  • Make a PC based UI tool for creating/editing/generating eyes, with realtime preview.
  • Lots of other stuff!

Note this is all still very much a work in progress (and not everything above is even checked in yet) so I wouldn't recommend using this unless you're OK with getting your hands dirty with very rough and incomplete code. The config file and codegen are likely going to change further, and there's still a lot that hasn't been tested or straight up doesn't work. This includes displays other than 240x240 GC9A01A, rotations/spin/mirroring, external input from buttons/light sensors, ... I'm not even sure I'll fully test/implement everything since I personally am only interested in running it on two 240x240 displays for now, but I'm hoping to at least get it into a state that's passable enough for others to easily use and improve.
Let us know when you have the stuff checked in.

I suspect we may want to also add the 240x240 square displays in addition to the round displays, since they are slightly easier to get (ST7789 driver, with/without CS pin). BTW, my replacement round display (and spare) arrived at the post office today.

Hmmm, I should look at reading .WAV files from an SD card instead of just doing a limited number of mono RAW sound files. Of course, as KenHahn points out, you can run out of steam in more complex environments. I suppose I should also add #ifdefs to support an external DFPlayer, which would be simpler than trying to do both sound and eyes at the same time.
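A minimal sketch of that #ifdef approach, using the DFRobotDFPlayerMini library (the USE_DFPLAYER define and the Serial1 wiring are assumptions):

Code:
#ifdef USE_DFPLAYER
#include <DFRobotDFPlayerMini.h>

DFRobotDFPlayerMini dfplayer;

void soundSetup() {
  Serial1.begin(9600);       // the DFPlayer talks 9600-baud serial
  dfplayer.begin(Serial1);
  dfplayer.volume(20);       // 0-30
}

void playSound(int track) {
  dfplayer.play(track);      // plays 0001.mp3, 0002.mp3, ... from its own SD card
}
#else
void soundSetup() { /* Teensy audio library path */ }
void playSound(int track) { /* AudioPlayMemory / SD playback */ }
#endif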
 
I do wonder if there is a limit where the display can be updated too fast, such that it can trigger incidents in people who suffer from epilepsy.
Hmm, I don't know much about it, but I'm not sure how triggering epilepsy would be possible without some sort of rapid strobing effect? Generally speaking, the higher the frame rate the better. Most PC monitors refresh at 60Hz or more; anything much less than that and any animation/movement starts to look a bit jerky. Faster than 60Hz can look smoother, but it rapidly becomes diminishing returns. If anything, I'd have thought smoother animation would reduce the chance of inducing epilepsy. Viewing the YouTube clip doesn't look great, unfortunately, due to aliasing effects between the (variable) GC9A01 frame rates, tearing of the GC9A01 updates (due to lack of double-buffering), my mobile phone camera's shutter speed/frame rate, and the playback monitor's refresh rate. Even looking at the screens directly, however, does still look stuttery to me at times, especially when the eyes are blinking. A higher frame rate will help with that.

All that aside, the faster the eyes can be rendered the more CPU cycles available for code to perform other tasks between frames. I have a few ideas that will need all the spare cycles I can find :D

I wonder whether adding back in the special CS/DC support for DMA would help on the Teensy 4.1, since we have 3 CS0 pins (10, 36, and 37) and 2 CS1 pins (0 and 38).
Possibly, but I don't know anything about that unfortunately. Once my code is a bit more feature complete and stable and I start looking at performance, I'll try and see what I can figure out. Where can the code you're referring to be found?

Let us know when you have the stuff checked in.
Will do! I'm hoping to have most of it done within the next few days.

I suspect we may want to also add the 240x240 square displays in addition to the round displays, since they are slightly easier to get (ST7789 driver, with/without CS pin). BTW, my replacement round display (and spare) arrived at the post office today.
Other 240x240 displays should be pretty straightforward to support since there's not much code that depends on the GC9A01 specifically. My code's basically the same as uncannyEyes_GC9A01A.ino in that regard. As far as 128x128 displays go, I don't think they'll be too difficult either, they'll just require a bit more testing (and probably a few minor code/config changes) to ensure the different resolution is being handled correctly. Displays bigger than 255x255 would be a much bigger problem since many of the lookup tables and code are currently just bytes for various resolution-related things.

Unfortunately I don't own any displays besides these round ones so not sure if/when I'll look at supporting other types. Assuming it doesn't cause too much of a performance impact, I should at least be able to abstract away most/all of the display-specific code so it's easy for people to add support for other displays.

Hmmm, I should look at reading .WAV files from an SD card instead of just doing a limited number of mono RAW sound files. Of course, as KenHahn points out, you can run out of steam in more complex environments. I suppose I should also add #ifdefs to support an external DFPlayer, which would be simpler than trying to do both sound and eyes at the same time.
Sounds interesting. I haven't tried doing any projects involving sound playback or SD cards on the Teensy yet. Presumably any .WAV loading code would have some overlap with code that loaded eyes from SD.
 
Hmm, I don't know much about it, but I'm not sure how triggering epilepsy would be possible without some sort of rapid strobing effect?
I'm just wondering out loud if pushing the frame rate higher than the current speed might trigger the issue. My sister has epilepsy, but I don't know where the trigger levels are. With my migraines, I have been affected by old tube monitors at slow refresh rates, and probably fluorescent lights.

Will do! I'm hoping to have most of it done within the next few days.

Other 240x240 displays should be pretty straightforward to support since there's not much code that depends on the GC9A01 specifically. My code's basically the same as uncannyEyes_GC9A01A.ino. uncannyEyes_GC9A01A.ino was derived from the ST7789 example. There are only 3 or so places that are different (different constructor, different init function, and different command sent to draw the bits).


As far as 128x128 displays go, I don't think they'll be too difficult either, they'll just require a bit more testing (and probably a few minor code/config changes) to ensure the different resolution is being handled correctly. Displays bigger than 255x255 would be a much bigger problem since many of the lookup tables and code are currently just bytes for various resolution-related things.

I have variants of the TFT and OLED 128x128 displays. I probably have the TFT variant without CS pin lying about as well. At the moment, I tend to only use the historical code on those and run them on Teensy 3.2/3.5s. With the ability to run different patterns, I think 240x240 displays and Teensy 4.x is the way to go. The main advantage the 128x128 displays had was different patterns available without too much tweaking. But you've solved that for the 240x240 display.
 
I'm just wondering out loud if pushing the frame rate higher than the current speed might trigger the issue. My sister has epilepsy, but I don't know where the trigger levels are. With my migraines, I have been affected by old tube monitors at slow refresh rates, and probably fluorescent lights.
These are IPS panels, the same tech as most PC/laptop monitors, so if she can tolerate using a computer screen she should be fine with these too. To my understanding, migraines and epilepsy are triggered not by refresh rates but by strobing/flickering at particular frequencies. Fluorescent lights can flicker at 100/120Hz due to AC current peaks; old CRT monitors flicker due to the phosphors decaying between passes of the raster beam. LEDs used for house lighting generally require PWM for dimming, so they too will appear to flicker if the dimmer isn't running at a high enough frequency. IPS computer displays also use PWM for changing the brightness of the LED backlights, but as far as I can tell the microcontroller ones can only turn the backlight on or off, so there's no PWM and no chance of flicker with them. I'd only expect problems if the content being displayed (e.g. a video of a concert with strobe lights) contained rapidly changing intensities. Maybe if the eyes were made to blink many times in rapid succession...?! I remember old plasma displays could flicker quite badly, as can some OLEDs, since they use a relatively low-frequency PWM. Try taking a video of an OLED screen next to an IPS one and you'll see what I mean.

I have variants of the TFT and OLED 128x128 displays. I probably have the TFT variant without CS pin lying about as well. At the moment, I tend to only use the historical code on those and run them on Teensy 3.2/3.5s. With the ability to run different patterns, I think 240x240 displays and Teensy 4.x is the way to go. The main advantage the 128x128 displays had was different patterns available without too much tweaking. But you've solved that for the 240x240 display.
I see, thanks for that detail. Given that, I won't spend too much time worrying about 128x128 for now, but will still try and make sure support can be added later if need be.
 
I've had a quick play with combining uncannyEyes (GC9A01A variant) with audio playback from SD card.
I suspect we will need to add some yield calls to get the interrupts handled for sound playback.
Yes, we do need to! My explorations discovered that the code basically ends up in split() most of the time, doing clever timing on iris scaling and outputting the frames as it goes. As this prevents loop() from ever exiting, any other code you want to run along with uncannyEyes is stuffed unless you put in a callback to it. The end of frame() is a good place, and I put a yield() call there too.
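In other words, something like this (loopAudio is a placeholder for whatever background work needs regular servicing):

Code:
void loopAudio();     // hypothetical: feed the audio player, check buffers, etc.

void frame(uint16_t iScale) {
  // ... existing eye rendering and display output ...
  loopAudio();        // give non-eye code a turn once per frame
  yield();            // lets EventResponder hooks and serialEvent run too
}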

I was using large files and so needed to run off an SD card. I tried both the T4.1 slot and the audio adapter's. The T4.1 slot performed a bit better, but still not well, and it is probably some type of latency issue reading the files off the SD card, brought on by the high demands of Uncanny Eyes. For some applications, like the side project I am working on, the DFPlayer is probably a better/simpler solution anyway.
AudioPlaySdWav essentially keels over if there's much going on at all, as it loads 512 bytes at a time from the SD card, within the audio interrupt. That's enough for 2 blocks of mono audio, or 1 of stereo, so there's zero margin, plus any attempt to access the card from your sketch will cause mayhem. I've done a "properly buffered" WAV file player, which you can find at https://github.com/h4yn0nnym0u5e/Audio/commits/feature/buffered-SD, and I've started a thread for discussion, feature and bug reports at https://forum.pjrc.com/threads/70963-Yet-Another-File-Player-(and-recorder). With this in use, I reckon you can play back from SD card while uncannyEyes is running, though I'd welcome more opinions on the subject! A 24k audio buffer seems the smallest you can get away with for stereo output, which corresponds to just over 3 frames of eye-writing time. If you're short of heap, the player can buffer in PSRAM.

I've attached my test code, which spews characters to the terminal when the audio callback is run: View attachment uncannyEyes_GC9A01A_Audio.zip. I've made minimal changes to the basic GC9A01A code, and configured it for my hardware in config.h; you'll also have to supply a suitable WAV file on a decent SD card, turn off the test tones, etc. etc.!
 
the code basically ends up in split() most of the time
I got rid of that craziness in my codebase, so loop() now always just renders a single frame. That still might not be fast enough for time-sensitive work to happen since rendering a frame takes around 20ms, but it does make everything a lot more predictable and easier to reason about.

Thanks for the sample sound code, I'll take a look at that when I get a chance to see how well it fits in with my stuff.
 
That sounds great. With that, you can lose the callback to loopAudio() and yield() inside frame(), and just leave the one at the end of loop() - no yield() needed as it's implicit in loop() returning.

I think I'm getting a frame rendered and output in 46ms, so your 20ms would be a great improvement. To go faster you'd have to have a loopEye() with a state machine which does part of the job each time through, and outputs it when the frame buffer is complete. Don't know if that's worth the complexity... Did you get rid of the while (eye[eyeIndex].display->asyncUpdateActive() && (emWait < 1000)) ; spin loop? If not, doing so would be a (fairly easy?) win to free up some wasted CPU cycles.
 
Did you get rid of the while (eye[eyeIndex].display->asyncUpdateActive() && (emWait < 1000)) ; spin loop? If not, doing so would be a (fairly easy?) win to free up some wasted CPU cycles.
Not yet; I haven't looked at performance much at all, as I'm still working on implementing/testing the basic functionality. At some point I'll look at how much CPU is lost to that spinning and figure out how best to improve it. Most likely, just letting any other user code run as normal but skipping the render if the previous update is still running would be a good starting point.
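A minimal sketch of that skip-if-busy starting point, using the asyncUpdateActive()/updateScreenAsync() calls from the _t3n drivers (renderEye is a made-up name, and eye[] stands in for the existing array in the uncannyEyes sketch):

Code:
void renderEye(uint8_t e);   // hypothetical: draw one eye into its frame buffer
uint8_t eyeIndex = 0;

void loop() {
  if (eye[eyeIndex].display->asyncUpdateActive()) {
    return;  // DMA still in flight: give the CPU back instead of spinning
  }
  renderEye(eyeIndex);
  eye[eyeIndex].display->updateScreenAsync();  // kick off the next DMA transfer
  eyeIndex = (eyeIndex + 1) % 2;
}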
 
I got rid of that craziness in my codebase, so loop() now always just renders a single frame. That still might not be fast enough for time-sensitive work to happen since rendering a frame takes around 20ms, but it does make everything a lot more predictable and easier to reason about.

Great! Besides playing sounds, I imagine other things (displaying neopixels, running servos via PWM, and reading keypresses in a timely fashion without using attachInterrupt) would benefit from it not hogging the CPU. I recall in the new M4 code, they had a user function that the code would call at appropriate times in the loop.

That reminds me: when I last hacked on the original source, I did add support for doing neopixels using the prop shield. I had to manually switch pins 11 and 13 from SPI mode to data mode, enable the prop shield level shifter, do the neopixels, and then switch back to SPI mode. Fun, fun, fun... Fortunately, modern neopixels need level shifting less often, and even if they did, I would use a separate chip instead of the prop shield.
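For the curious, the pin juggling looked roughly like this (pin 7 gates the prop shield's level shifter; the Adafruit_NeoPixel usage is illustrative, not my exact code):

Code:
#include <SPI.h>
#include <Adafruit_NeoPixel.h>

Adafruit_NeoPixel strip(8, 11, NEO_GRB + NEO_KHZ800);  // 8 pixels, data on pin 11

void setup() {
  pinMode(7, OUTPUT);        // pin 7 enables the prop shield's level shifter
  strip.begin();
}

void showNeopixels() {
  SPI.end();                 // release pins 11/13 so pin 11 becomes plain GPIO
  digitalWrite(7, HIGH);     // enable the 5V level shifter
  strip.show();              // bit-bang the pixel data
  digitalWrite(7, LOW);      // disable the shifter
  SPI.begin();               // hand pins 11/13 back to the display SPI bus
}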

Of course, the flip side is that returning at the end of each frame can potentially slow the code down (doing the return and the next loop call, plus saving all of the state information) if the only thing running is the eyes.
 
I recall in the new M4 code, they had a user function that the code would call at appropriate times in the loop.
Yes it does. I haven't added that yet but would be easy to do so. I'm currently somewhat more inclined though to get things to a point where the eyes are just a library people call from their own setup and loop functions instead, since that gives the user a lot more control and options than a callback does, especially if the eyes are just a part of a bigger/more complex project.
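As a sketch of what that library-style usage might eventually look like (every name here is hypothetical; this is just the shape of the idea):

Code:
#include <UncannyEyes.h>   // hypothetical header name

UncannyEyes eyes;          // hypothetical class

void updateLeds();         // the user's own code
void pollButtons();

void setup() {
  eyes.begin(/* display and eye configuration */);
}

void loop() {
  eyes.renderFrame();      // one eye/screen update per call
  updateLeds();            // user code runs between frames
  pollButtons();
}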
 
Yes it does. I haven't added that yet but would be easy to do so. I'm currently somewhat more inclined though to get things to a point where the eyes are just a library people call from their own setup and loop functions instead, since that gives the user a lot more control and options than a callback does, especially if the eyes are just a part of a bigger/more complex project.

Yes, I agree with the library approach. I refactored my code to be a library. This way I can have many different .ino files, each of which has different configuration options (which display driver, number of displays, pins used, and various options like smoothing, sound, etc.). Having to clone the original code and edit config.h was tedious. Of course, this only becomes an issue when you have more than one setup. ;) Let's see, right now I have:

  • Teensy 4.1 with two ST7789 square 240x240 eyes and the audio adapter playing mono sounds;
  • Teensy 4.1 with two GC9A01A round 240x240 eyes;
  • Teensy 3.5 with two OLED SSD1351 square 128x128 eyes;
  • Teensy 3.1/3.2 with two TFT ST7735 square 128x128 eyes;
  • I had a Teensy 4.0 wired up for the round eyes, but I moved to the Teensy 4.1; (and)
  • Since I now use a common pinout for each eye, I have run the TFT ST7735 and SSD1351 eyes on a Teensy 4.0 or 4.1. It was of course slower than the original code, but it did work.

If you are going to do the library approach, then I can wait for that code.

When I did my refactoring, I discovered that you can't have both the GC9A01A and ST7789 drivers in the same library. Instead, I had to move the .ino file from the original code into a .h file that is included by the target .ino file (rather than having each driver be a separate .cpp file in the library). I can post this code if desired; the pattern is sketched below.
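The pattern is basically this (file and macro names are made up for illustration):

Code:
// RoundEyes/RoundEyes.ino
#define EYES_USE_GC9A01A        // pick the driver for this sketch...
#define EYES_NUM_DISPLAYS 2     // ...and the rest of the configuration
#include "uncanny_eyes_impl.h"  // the old .ino body, moved into a header

// A second sketch, SquareEyes/SquareEyes.ino, would instead do:
//   #define EYES_USE_ST7789
//   #include "uncanny_eyes_impl.h"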
 
FWIW, I uploaded my variant of the sources (mainly moving the uncanny eyes code into a library, and also support for some mono sounds):


Thanks! I've only just seen this so haven't had a chance to look at it yet, but will do so before I spend any time refactoring into a library/API.

I've made a bit more progress on my side, including fixing a bunch of bugs and adding support for some more of the M4_Eyes features, along with some of my own. There's still plenty to do before it's ready for general use and the API hasn't had a lot of love yet, but the basic functionality is shaping up well. It is also now much easier to create your own eyes, either by modifying existing ones or even completely from scratch. There are 18 eyes so far and I plan to add more, as well as further improve some of the existing ones. Depending on the texture sizes and polar mapping parameters I can currently fit about 10-14 eyes on a Teensy 4.0. That should be able to go much higher with SD card support and dynamic generation of polar distance tables, though 10+ is probably plenty already I think!

My (hopefully working!) code, the beginnings of some documentation, and an updated video are here. It's still not ready for prime-time, more of a tech demo than anything, but if anyone does want to give it a try I'd be happy to hear your feedback.

Some of the issues I'm aware of:
  • There's a rounding bug that can cause the edge of the eyelids to look a bit ragged.
  • I haven't yet done much to improve how eyes and displays are configured in user code, or the API for rendering the eyes themselves.
  • Light meters, joysticks etc aren't supported/implemented yet.
  • I still need to check this, but I suspect the default definition of which screen is left or right, and how they are mirrored, might be reversed compared to the old code (the new M4 code flipped a bunch of stuff around and I didn't notice initially). Once I work on improving the API/configurability this should just come down to configuration settings.
I'm going to be away (and offline) for the first couple of weeks in November, then probably starting a new job, so my time for this project will be reduced. I'll keep chipping away at it when I can, though.
 
Thanks! I've only just seen this so haven't had a chance to look at it yet, but will do so before I spend any time refactoring into a library/API.

NP. My main issue was I wanted to set all of the user interface stuff in the .ino file, so that I didn't have to duplicate the code for each variant (i.e. square vs. round display, 1 or 2 eyes, which SPI controller and CS/DC pins to use, adding sound and neopixel support, etc.).

I've made a bit more progress on my side, including fixing a bunch of bugs and adding support for some more of the M4_Eyes features, along with some of my own. There's still plenty to do before it's ready for general use and the API hasn't had a lot of love yet, but the basic functionality is shaping up well. It is also now much easier to create your own eyes, either by modifying existing ones or even completely from scratch. There are 18 eyes so far and I plan to add more, as well as further improve some of the existing ones. Depending on the texture sizes and polar mapping parameters I can currently fit about 10-14 eyes on a Teensy 4.0. That should be able to go much higher with SD card support and dynamic generation of polar distance tables, though 10+ is probably plenty already I think!
Yes, you can possibly have too many, but having more than 1 or 2 is nice.

My (hopefully working!) code, the beginnings of some documentation, and an updated video are here. It's still not ready for prime-time, more of a tech demo than anything, but if anyone does want to give it a try I'd be happy to hear your feedback.
I just got back from a small vacation. I do have one Halloween party coming up where it may be useful to have them.

Some of the issues I'm aware of:
  • There's a rounding bug that can cause the edge of the eyelids to look a bit ragged.
  • I haven't yet done much to improve how eyes and displays are configured in user code, or the API for rendering the eyes themselves.
  • Light meters, joysticks etc aren't supported/implemented yet.
  • I still need to check this, but I suspect the default definition of which screen is left or right, and how they are mirrored, might be reversed compared to the old code (the new M4 code flipped a bunch of stuff around and I didn't notice initially). Once I work on improving the API/configurability this should just come down to configuration settings.

When I was looking at the code, I wished that instead of passing pins to read for eye movements, blinks, etc., a callback function was passed, and that function could return the appropriate value(s). Perhaps having a default callback that mimics the current behavior would be useful for those that don't need a more elaborate function (something like the sketch below).
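Something like this, say (all the types and names here are hypothetical):

Code:
struct EyeInput {
  float x;        // gaze -1..1
  float y;        // gaze -1..1
  bool blink;     // true while a blink is requested
};

typedef EyeInput (*EyeInputFn)();

// Default callback that mimics the current pin-reading behavior.
EyeInput readPins() {
  EyeInput in;
  in.x = analogRead(A0) / 511.5f - 1.0f;
  in.y = analogRead(A1) / 511.5f - 1.0f;
  in.blink = (digitalRead(2) == LOW);
  return in;
}

// The eye code would then just call the registered function each frame:
EyeInputFn getInput = readPins;   // the user can swap in their own function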

I've thought that perhaps having a pair of PIR sensors (one on each side, assuming the eyes are mounted for normal binocular vision) could be used to have the eyes track people moving (at least sort of) in front of the eyes (https://learn.adafruit.com/tree-ent-sculpture-with-animated-eyes).
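Building on the hypothetical callback above, the two-PIR tracking might look roughly like this (pins and gains are guesses):

Code:
const int PIR_LEFT = 3, PIR_RIGHT = 4;
float gazeX = 0.0f;

EyeInput readPirs() {
  if (digitalRead(PIR_LEFT) == HIGH)  gazeX = -0.8f;  // glance left
  if (digitalRead(PIR_RIGHT) == HIGH) gazeX = 0.8f;   // glance right
  gazeX *= 0.995f;                    // slowly drift back to centre
  EyeInput in = { gazeX, 0.0f, false };
  return in;
}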

Or an IR heat sensor can track one person (https://learn.adafruit.com/monster-m4sk-is-watching-you). As the learning session says, you need a relatively cool setting for the IR sensor to work well (such as outdoors in the northern hemisphere during spooky season).

I'm going to be away (and offline) for the first couple of weeks in November, then probably starting a new job, so my time for this project will be reduced. I'll keep chipping away at it when I can, though.
Good luck with the new job!
 
BTW, I was going through my 'junk' drawer and ran across the acrylic lens holders for the Adafruit Hallowings (both M0 and M4), as well as the lens holder for the Monster M4SK. The opening on both lens holders is ever so slightly larger than the round GC9A01A 240x240 displays. I'm not sure that the lens holder by itself will be enough to keep the display from separating, which happens when you use it in cosplay setups and move the prop around quite a bit. But it certainly will hold the display if you use it to hold in either the plastic or glass convex lenses that Adafruit sells.


While I use the lens holders on both the Hallowings (the M0 has a 128x128 display, the M4 a 240x240 display) and the Monster M4SK (two 240x240 displays), it does bother me that the display is slightly off center in the lens holder. But I haven't pried off the display's double-sided tape on the PCB and moved it to better position the display. With the GC9A01A display, though, the lens opening is just the right size. You could fashion something to hold the display in place.

The Waveshare display (with the separate wires attached via a connector) is slightly better, because it only has a small bump at the top. The other display, which came with male 2.54mm-pitch pins soldered into the header, does have the slight bump at the top.

The Adafruit 1.3" square 240x240 display (https://www.adafruit.com/product/4313) that I have is a little too small for the lens holders. I believe these were made for the 1.54" display (https://www.adafruit.com/product/3787) used in the Hallowing M4 and Monster M4SK.
 
It's been a while. With the slowdown due to the upcoming USA Thanksgiving, I finally decided to take the time to play with Chris's code.

I had to convert the code from PlatformIO back to Arduino, which meant moving some things around due to the different compiler options and the way Arduino 'simplifies' things in ways that break complex builds:
  • I had to add symlinks for the polarDist_240* and disp_240* files in eyes/graphics/240x240 to the toplevel directory so Arduino would build them;
  • I had to change the paths of the includes in config.h;
  • I needed to change config.h to use my pin definitions;
  • I needed to add an .ino file that includes the main.cpp file (see the minimal example after this list).
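The dummy .ino from the last bullet is just a two-line stub:

Code:
// Test-eyes/Test-eyes.ino - Arduino insists the .ino match the directory name;
// this stub just pulls in the PlatformIO entry point so Arduino compiles it.
#include "eyes/main.cpp"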

I have some things I would like to do:
  • Ideally I want to move it into a library, much like I did with the previous code, which means I can have different sketches, each of which has different config options (one eye, two eyes, which driver, sound or neopixels, etc.).
  • I would also like to add ST7789 back in (ST7789 is the one for the square displays that Adafruit and others sell, Chris's code just uses the GC9A01A_t3n driver, which is for the round eyes).
  • It would be nice to allow the 128x128 displays to also be used, but I'm probably not as motivated to do that.
  • I want to dig into the code that changes which eye pattern is used, and slow it down (right now, the eyes change in less than a minute).
  • I would like to add support for doing neopixels. With the current code (before Chris's code), the eye code would not exit the loop function all that often, and neopixels would only get updated occasionally.
  • I want to re-enable printing the fps, so I can compare it with the old code. Maybe add more messages, such as when the eye is changed.
  • At some point, it would be nice to add sound support for reading from a SD card. As we've discussed, it evidently is rather choppy due to interrupts and such.
  • For directory support and such, it would be nice to add in MTP support optionally.

But before I do major changes, I will try to package it up for others to use if desired.

And as for the original post: I had another round display go bad. Fortunately, the last time I bought them, I did buy a spare. The square displays seem to be more rugged (hence my desire to re-enable ST7789 support). On the other hand, I have the square displays mounted with stand-off posts, so that may make them less sensitive to gravity issues (whether caused by cats or just my own ability to knock things to the ground).
 
It's been a while. With the slowdown due to the upcoming USA Thanksgiving, I finally decided to take the time to play with Chris's code.

Thanks for taking a look at this, and for the detailed notes on the various issues you hit on the way!

I had to convert the code from PlatformIO back to Arduino, which meant moving some things around due to the different compiler options and the way Arduino 'simplifies' things in ways that break complex builds

I've never used the standard Arduino .ino approach, so I wasn't even aware that it needs everything in a single directory. Hmm, I'll have a look at other libraries and see how they handle this.

I needed to add an .ino file that includes the main.cpp file.

You might be better off just renaming main.cpp to main.ino, and modifying it as required. Think of this file more of an example program showing the usage of the eyes "library" (I know, it's not quite a real library yet!). Ultimately something like main.cpp/main.ino will likely just live in an "examples" directory to help get people started, rather than as the real app/bootstrapping it currently is.

I have some things I would like to do:
  • Ideally I want to move it into a library, much like I did with the previous code, which means I can have different sketches, each of which has different config options (one eye, two eyes, which driver, sound or neopixels, etc.).

This is my eventual plan too. I'd like to get it to the point where it is trivial for anyone to add to their project, e.g. via the Arduino Library Manager or PlatformIO Library Registry. Before I attempt that though I first want to get the code and API feature-complete and (relatively) stable. It is currently still a while away from that point however, especially since I have much less free time to work on this these days. As you'll see from the commits, I am still chipping away at it though!

  • I would also like to add ST7789 back in (ST7789 is the one for the square displays that Adafruit and others sell, Chris's code just uses the GC9A01A_t3n driver, which is for the round eyes).
  • It would be nice to allow the 128x128 displays to also be used, but I'm probably not as motivated to do that.

One quite big change I made was to try and abstract away the display hardware code, so it will be much easier to add support for different display types. Have a look at Display.h (and the corresponding GC9A01A_Display implementation) for my first attempt at this. Unfortunately I think this might be responsible for cutting performance in half (down to around 22fps per eye) due to extra indirection in drawPixel(). I'm hoping to find some time to investigate and improve this over the weekend ahead. I don't have any ST7789 displays (or 128x128 for that matter) to test with, but maybe I'll order a couple. Also happy to work with you on 128x128 support. I tried to keep most of the code resolution independent (up to 255x255 at least, beyond that will be tricky!) so I'm hoping 128x128 won't be too tricky to get working.

  • I want to dig into the code that changes which eye pattern is used, and slow it down (right now, the eyes change in less than a minute).

Just change EYE_DURATION_MS in main.cpp. Alternatively, update the code at the top of loop() to call nextEye() whenever you like (on a button press for example).

  • I would like to add support for doing neopixels. With the current code (before Chris's code), the eye code would not exit the loop function all that often, and neopixels would only get updated occasionally.

While I don't think neopixel (or sound) code belongs directly in a library like this, it should still play nicely and be easy to support these sorts of use cases. As my code currently stands, each loop() call renders a single frame (i.e. updates a single eye on a single screen). Whether that's quick enough to also allow LEDs and audio to be kept updated from loop() I'm not sure. If it's not, maybe some sort of callback support could be added per scan line (i.e. 240 times per loop() call), or maybe LEDs and audio would need to be handled via interrupts instead?

  • I want to re-enable printing the fps, so I can compare it with the old code. Maybe add more messages, such as when the eye is changed.

For FPS just uncomment the #define SHOW_FPS in Display.h. I had a bunch of other data being displayed at one point too but took it out. Drawing text yourself is currently a bit messy; you'd have to keep hold of your DisplayDefinition object and call left.display->drawText(100, 150, ...). This shouldn't be too hard for me to improve.

  • At some point, it would be nice to add sound support for reading from a SD card. As we've discussed, it evidently is rather choppy due to interrupts and such.

Yes I want to get this working nicely with sound, though I'm not sure baking in audio playback + SD support is the right way to go. I'd rather a more general solution that allowed arbitrary sound code to get the CPU cycles it needed (rather than e.g. just baking in .wav playback support).

  • For directory support and such, it would be nice to add in MTP support optionally.
 
Thanks for taking a look at this, and for the detailed notes on the various issues you hit on the way!
You are welcome.

I've never used the standard Arduino .ino approach, so I wasn't even aware that it needs everything in a single directory. Hmm, I'll have a look at other libraries and see how they handle this.
The main issue is that the polarDist_240*.cpp, disp_240*.cpp, and polarAngle_240.cpp files aren't built, so I had to move them into the top-level directory so they would be built automatically. I was thinking this morning that another way to do it is to have config.h include those files. Then the linker code that deletes unused global data/functions will just delete the unused arrays.

I also had to change the paths in config.h from #include "graphics/240x240/cat.h" to #include "eyes/graphics/240x240/cat.h" because I had the .ino file at the top level, and it included "eyes/main.cpp". Now, I could put the .ino file in that directory, but a quirk of Arduino is that the .ino file has to be named the same as the directory; in this case I called it 'Test-eyes'. But with the long-term goal of moving this stuff into a library, that makes it less of an issue.

You might be better off just renaming main.cpp to main.ino, and modifying it as required. Think of this file more of an example program showing the usage of the eyes "library" (I know, it's not quite a real library yet!). Ultimately something like main.cpp/main.ino will likely just live in an "examples" directory to help get people started, rather than as the real app/bootstrapping it currently is.
Yes, but initially I wanted to make as few changes as possible.

This is my eventual plan too. I'd like to get it to the point where it is trivial for anyone to add to their project, e.g. via the Arduino Library Manager or PlatformIO Library Registry. Before I attempt that though I first want to get the code and API feature-complete and (relatively) stable. It is currently still a while away from that point however, especially since I have much less free time to work on this these days. As you'll see from the commits, I am still chipping away at it though!
I can appreciate that. In fact, I haven't been able to contemplate doing computer stuff for fun until about two weeks ago, since I needed to get the basic changes we need for the future posted before the cut-off of the stage1 builds.


One quite big change I made was to try and abstract away the display hardware code, so it will be much easier to add support for different display types. Have a look at Display.h (and the corresponding GC9A01A_Display implementation) for my first attempt at this. Unfortunately I think this might be responsible for cutting performance in half (down to around 22fps per eye) due to extra indirection in drawPixel(). I'm hoping to find some time to investigate and improve this over the weekend ahead. I don't have any ST7789 displays (or 128x128 for that matter) to test with, but maybe I'll order a couple. Also happy to work with you on 128x128 support. I tried to keep most of the code resolution independent (up to 255x255 at least, beyond that will be tricky!) so I'm hoping 128x128 won't be too tricky to get working.
Yep. I have in front of me one Teensy 4.1 with 2 square 240x240 displays, one Teensy 4.1 with 2 round 240x240 displays, a Teensy 3.2 with 128x128 TFT displays, and a Teensy 3.5 with 128x128 OLED displays. At the moment, since the 128x128 stuff uses the original special optimizations that only run on Teensy 3.x systems, the two 128x128 setups are frozen on the original code. Unfortunately, the 128x128 TFT displays are rather ancient, and it is now rather hard to see them. But with your code, the 240x240 displays have more options.

The two Teensy 4.1s are set up to take an audio shield: one has a prototype board with a parallel 14x2 set of headers for the audio shield (since that Teensy just has normal male pins), while the other has stacking headers, so I can mount the audio shield directly on it. Both boards have wiring to support I2C, a neopixel level shifter, a momentary push button, and 1-2 potentiometers.

Just change EYE_DURATION_MS in main.cpp. Alternatively, update the code at the top of loop() to call nextEye() whenever you like (on a button press for example).
Thanks. I figured it would be simple; I just hadn't delved into it yet. But with the long-term goal of moving it to a library, you want there to be a way to easily alter this (for example, the button sketch below).
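For example, switching eyes on a button press with the Bounce class that ships with Teensyduino (the pin number and the nextEye() hookup are assumptions):

Code:
#include <Bounce.h>

const int BUTTON_PIN = 24;       // an assumption; any free digital pin works
Bounce button(BUTTON_PIN, 10);   // 10ms debounce

void nextEye();                  // from Chris's main.cpp

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  // ... display/eye setup ...
}

void loop() {
  if (button.update() && button.fallingEdge()) {
    nextEye();                   // advance to the next eye on each press
  }
  // ... render one frame per pass ...
}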

While I don't think neopixel (or sound) code belongs directly in a library like this, it should still play nicely and be easy to support these sorts of use cases. As my code currently stands, each loop() call renders a single frame (i.e. updates a single eye on a single screen). Whether that's quick enough to also allow LEDs and audio to be kept updated from loop() I'm not sure. If it's not, maybe some sort of callback support could be added per scan line (i.e. 240 times per loop() call), or maybe LEDs and audio would need to be handled via interrupts instead?
I agree, neither the neopixel nor the sound stuff goes in these files. The issue that I have with the current code is that the loop function won't return until an entire eye cycle is done. And so if you add a second loop function to do the neopixels, and the main loop function calls the eyes loop function and then the neopixel loop function, you will see one sequence of the neopixels drawn, and then it will pause while the next eye cycle is done. Similarly for playing sound: it can't start a new sound until the eye cycle is done.

Now, there are two ways to 'solve' this. The way the current Adafruit code does it, there is a user function called from the display code at appropriate times. Another way is to re-organize the display code so it does the work in smaller pieces and then returns. While I tend to prefer the second method, the first method is simpler.
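A minimal sketch of the second method, with a state machine doing one small piece per call (all the helper names are illustrative):

Code:
enum class EyePhase { Animate, DrawLeft, DrawRight };
EyePhase phase = EyePhase::Animate;

void updateEyeState();    // hypothetical: blink/pupil/gaze bookkeeping
void drawEye(int which);  // hypothetical: render and push one display
void loopNeopixels();
void loopSound();

void loopEye() {
  switch (phase) {
    case EyePhase::Animate:   updateEyeState(); phase = EyePhase::DrawLeft;  break;
    case EyePhase::DrawLeft:  drawEye(0);       phase = EyePhase::DrawRight; break;
    case EyePhase::DrawRight: drawEye(1);       phase = EyePhase::Animate;   break;
  }
}

void loop() {
  loopEye();        // only a fraction of an eye cycle per pass
  loopNeopixels();  // now serviced several times per eye cycle
  loopSound();
}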

For FPS just uncomment the #define SHOW_FPS in Display.h. I had a bunch of other data being displayed at one point too but took it out. Drawing text yourself is currently a bit messy; you'd have to keep hold of your DisplayDefinition object and call left.display->drawText(100, 150, ...). This shouldn't be too hard for me to improve.
Note, in my stuff I just print the FPS to the USB serial monitor; I don't actually paint it on the screen (though I once did that).

Yes I want to get this working nicely with sound, though I'm not sure baking in audio playback + SD support is the right way to go. I'd rather a more general solution that allowed arbitrary sound code to get the CPU cycles it needed (rather than e.g. just baking in .wav playback support).
 
<edit #2>
Note: I updated the zip file with changes. I initially messed up the config.h file and did not set up the frame buffer, so async output did not work. I have fixed this now, and output is much faster.

I also had a typo pulling in the spikes eye in the config file; that has been fixed as well (I forgot the trailing comma).
</edit #2>

FWIW, I put the changes I made to be able to run the uncanny Eyes code in:

The changes include:
  • Adding a dummy .ino file needed by Arduino;
  • Building the various helper arrays needed by the eyes in the main directory if we are running under Arduino (these start with build-*);
  • My config.h changes for pin assignments;
  • Adding the ability to slow down the eye duration by using a define in config.h that is used in main.cpp;
  • Commenting out the deletion of 'display' in the GC9A01A_Display destructor (this was getting a warning); (and)
  • Add all 18 eyes to the build list.

Some comments:
  • Some of your files don't seem to have final newlines in them;
  • The EyeDefinition initialization really needs to be moved to config.h or equivalent, since if you want to add or subtract eyes, you don't want to have to modify two places (including the eyes/*.h in config.h, and then adding the eyes to EyeDefinition and bumping up the array count);
  • The constexpr definitions in main.cpp really need to be in config.h so they can be tweaked;
  • The eyes structure should really have a const char * field to hold the name, so that when we change an eye, we can write out the name to USB serial; (and)
  • At some point, we should use random to select the next eye, rather than sequentially going through the eyes (see the sketch below).
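A sketch of such a random selection that also avoids repeating the current eye (pickNextEye is a made-up name; random() is the stock Arduino helper):

Code:
size_t pickNextEye(size_t current, size_t eyeCount) {
  size_t next = (size_t)random((long)(eyeCount - 1));  // choose among the other eyes
  if (next >= current) next++;                         // skip over the current index
  return next;
}

// In setup(): randomSeed(analogRead(A7));  // noise from an unconnected pin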
 
Thanks for all the additional feedback and your code changes. I'll incorporate what makes sense to do so and keep working towards making things easier to use and customise.

You'll be pleased to hear I just ordered a couple of ST7789 displays from AliExpress, as well as a single 128x128 one. When they (eventually!) arrive I'll have a go at adding support for them.

On the bad-news front, I've looked into the performance discrepancy I've been seeing, and it seems my old code was skipping some of the updates but still counting them as rendered frames, so the FPS numbers I was getting were higher than reality! :( The actual rate is generally (depending on the eye and eyelid state) in the region of 20-25fps. I figured out a hack that speeds things up about 10%, but I'm not sure I'll check that in, as it's rather nasty and I don't think it'll be compatible with the other display types. I also know that changing the code to process columns at a time (rather than rows) slowed things down about 10-15%, but that was done to make the eyelid logic simpler and use less memory, so I don't think I'll change that back. Ah well, I'd still like to improve performance, but for now it's not going to be a priority.

In somewhat related news, I've put myself on the waiting list for one of these. It will be absolutely perfect for getting the eyes to track people's faces, much better than the expensive (and quite limited) IR sensor solution discussed earlier, or trying to squeeze the recognition logic into the already overloaded Teensy! I've also ordered an ESP32-CAM to try out, which can be used to do something similar.
 