Uncanny Eyes is getting expensive

Here's a quick photo of my Literary Clock with the eyes running. It shows up randomly on the clock during Halloween.
Literary_Clock_and_Skull.jpg
 
Thanks for all the additional feedback and your code changes. I'll incorporate what makes sense and keep working towards making things easier to use and customise.
I'm setting up a branch of the tree on GitHub, so it should be easier to send you patches for individual changes.

You'll be pleased to hear I just ordered a couple of ST7789 displays from AliExpress, as well as a single 128x128 one. When they (eventually!) arrive I'll have a go at adding support for them.

Yes, I was going to look at least at the ST7789, since it is similar to the GC9A01A. When I glanced at it, the abstraction seemed wrong: main.cpp should not be referring to the GC9A01A at all. I would think you would want an abstract display base class with virtual functions, and a subclass for each of the ST7789 and GC9A01A. Then in config.h you instantiate the appropriate display before calling into main.cpp. In the old code, where everything was compiled together, it was easy to get things optimized.
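To make that concrete, here's a minimal sketch of the kind of abstraction I have in mind (class and method names are made up for illustration, not the actual library API):

Code:
// Hypothetical display abstraction -- names are illustrative only.
#include <stdint.h>
#include <stddef.h>

class EyeDisplay {
public:
  virtual ~EyeDisplay() {}
  virtual void begin() = 0;
  // Push one rendered frame (RGB565 pixels) out to the panel.
  virtual void drawFrame(const uint16_t *buffer, size_t width, size_t height) = 0;
};

class GC9A01A_Display : public EyeDisplay {
public:
  void begin() override { /* GC9A01A-specific init */ }
  void drawFrame(const uint16_t *buffer, size_t w, size_t h) override { /* start DMA out */ }
};

class ST7789_Display : public EyeDisplay {
public:
  void begin() override { /* ST7789-specific init */ }
  void drawFrame(const uint16_t *buffer, size_t w, size_t h) override { /* start DMA out */ }
};

config.h then instantiates whichever concrete class it wants, and main.cpp only ever deals with an EyeDisplay pointer. The cost is a virtual call, which should be negligible compared to rendering a frame.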

Similarly, as I mentioned for eye selection, you want that all done in config.h. I would envision a function returning a pointer to the next eye class. With the current code you could just keep them in an array, but perhaps in the future read them from SD cards and do things on the fly.
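As a rough illustration of the array version (names are placeholders, and EyeDefinition stands in for whatever the library's eye type ends up being):

Code:
// Hypothetical eye-selection hook living in config.h -- names are illustrative only.
#include <stddef.h>

struct EyeDefinition;                                    // the library's eye data type
extern const EyeDefinition hazelEye, catEye, demonEye;   // made-up example eyes

static const EyeDefinition *eyes[] = { &hazelEye, &catEye, &demonEye };
static size_t currentEye = 0;

// main.cpp calls this whenever it wants to switch eyes; the implementation is
// entirely the user's (an array today, maybe loaded from SD card later).
const EyeDefinition *nextEye() {
  currentEye = (currentEye + 1) % (sizeof(eyes) / sizeof(eyes[0]));
  return eyes[currentEye];
}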

On the bad news front, I've looked into the performance discrepancy I've been seeing and it seems my old code was skipping some of the updates but still counting it as having rendered the frame, so the FPS numbers I was getting were higher than reality! :( The actual rate is generally (depending on the eye and eyelid state) in the region of 20-25fps. I figured out a hack that speeds things up about 10% but not sure I'll check that in as it's rather nasty and I don't think it'll be compatible with the other display types. I also know that changing the code to process columns at a time (rather than rows) slowed things down about 10-15%, but that was done to make the eyelid logic simpler and use less memory, so I don't think I'll change that back. Ah well, I'd still like to improve performance but for now it's not going to be a priority.

For me, the performance is OK. Sure, the older code is faster (but then perhaps it was lying about the actual fps), but for my use of having something in cosplay setups, even when I first ran without the frame buffer and it was 4 fps instead of 20 fps, it was still usable. Sure, 20 fps is better...

In somewhat related news, I've put myself on the waiting list for one of these. It will be absolutely perfect for getting the eyes to track people's faces, much better than the expensive (and quite limited) IR sensor solution discussed earlier, or trying to squeeze the recognition logic into the already overloaded Teensy! I've also ordered an ESP32-CAM to try out which can be used to do something similar.
Interesting sensor. I would have expected it to be a lot pricier for what it delivers. I put in a pre-order for a few also. Thanks.
 
In somewhat related news, I've put myself on the waiting list for one of these. It will be absolutely perfect for getting the eyes to track people's faces, much better than the expensive (and quite limited) IR sensor solution discussed earlier, or trying to squeeze the recognition logic into the already overloaded Teensy! I've also ordered an ESP32-CAM to try out which can be used to do something similar.
BTW, I just got mail from usefulsensors.com when I inquired about availability, and I was told they just sent off a bunch of sensors. Presumably SparkFun will be shipping out orders by the end of next week.
 
Well, my Person Sensors just arrived from SparkFun. Fun, fun, fun. I got the basic Arduino test running, and it can identify faces, but I didn't test it in detail.

I had the Teensy running with the standard 2 eyes, and my wife came over to the computer and asked what it was. I told her it did the eye patterns, that I had just gotten the people sensor, and that I hoped at some point to get the tracking done (either by me or by Chris). She said I was rather spooky. I hope it was meant as a compliment. ;)

<edit>
The only real change I needed to make to the basic Arduino test that just prints the face information was to delete the #include of Serial.h, which is not needed on Teensy (and in fact doesn't exist).

Let's see, the People Sensor examples seem to be at:
 
Test run of CircuitPython eyeball code with Teensy 4.1 and Person Sensor

OK, I wanted to test the Person Sensor eyeball code. Rather than use some other processor, since I already had a Teensy 4.1 wired up to the square ST7789 displays, I decided to run the CircuitPython code on the Teensy 4.1. I first tried CircuitPython 8.0.0-beta5, but it doesn't seem to boot on the Teensy 4.1, so I went back to 7.3.3.

I fired up Arduino to download a program and it started the Teensy loader (note I use Fedora 36 Linux).

  • On the Teensy loader, I clicked File -> Open Hex File, and gave it the pathname to the Teensy 4.1 CircuitPython 7.3.3 hex file;
  • I pressed the Teensy program button, and the Teensy loader installed CircuitPython to the Teensy;
  • After it was done, I power cycled the Teensy;
  • When the CIRCUITPY disk was mounted, I copied the two BMP files from the person_sensor_eyeball release to the CIRCUITPY disk;
  • I copied the code.py to the CIRCUITPY disk;
  • I fired up mu-editor so I could watch the console to find out what libraries I was missing;
  • As I expected, I needed to change the pin assignments. I used my defaults for SPI0 (CS == 22, DC == 9, Reset == 5), along with the default SPI0 pins;
  • I also needed to copy the lib/adafruit_st7789.mpy file and the lib/adafruit_imageload directory from the CircuitPython 7.x.x libraries to the lib subdirectory on the Teensy;
  • And it seems to work.

Here is my changed code:

Code:
# Spooky eyeball using a Person Sensor.
# Adapted from code by @todbot / Tod Kurt, original at
# https://github.com/todbot/circuitpython-tricks/.

import board
import busio
import digitalio
import displayio
import random
import struct
import time

import adafruit_imageload
from adafruit_st7789 import ST7789

last_person_sensor_time = 0


def get_faces(i2c):
    global last_person_sensor_time

    # The person sensor has the I2C ID of hex 62, or decimal 98.
    PERSON_SENSOR_I2C_ADDRESS = 0x62

    # We will be reading raw bytes over I2C, and we'll need to decode them into
    # data structures. These strings define the format used for the decoding, and
    # are derived from the layouts defined in the developer guide.
    PERSON_SENSOR_I2C_HEADER_FORMAT = "BBH"
    PERSON_SENSOR_I2C_HEADER_BYTE_COUNT = struct.calcsize(
        PERSON_SENSOR_I2C_HEADER_FORMAT)

    PERSON_SENSOR_FACE_FORMAT = "BBBBBBbB"
    PERSON_SENSOR_FACE_BYTE_COUNT = struct.calcsize(PERSON_SENSOR_FACE_FORMAT)

    PERSON_SENSOR_FACE_MAX = 4
    PERSON_SENSOR_RESULT_FORMAT = PERSON_SENSOR_I2C_HEADER_FORMAT + \
        "B" + PERSON_SENSOR_FACE_FORMAT * PERSON_SENSOR_FACE_MAX + "H"
    PERSON_SENSOR_RESULT_BYTE_COUNT = struct.calcsize(
        PERSON_SENSOR_RESULT_FORMAT)

    # How long to pause between sensor polls.
    PERSON_SENSOR_DELAY = 0.3

    if time.monotonic() - last_person_sensor_time < PERSON_SENSOR_DELAY:
        return []
    last_person_sensor_time = time.monotonic()

    read_data = bytearray(PERSON_SENSOR_RESULT_BYTE_COUNT)
    i2c.readfrom_into(PERSON_SENSOR_I2C_ADDRESS, read_data)

    offset = 0
    (pad1, pad2, payload_bytes) = struct.unpack_from(
        PERSON_SENSOR_I2C_HEADER_FORMAT, read_data, offset)
    offset = offset + PERSON_SENSOR_I2C_HEADER_BYTE_COUNT

    (num_faces) = struct.unpack_from("B", read_data, offset)
    num_faces = int(num_faces[0])
    offset = offset + 1

    faces = []
    for i in range(num_faces):
        (box_confidence, box_left, box_top, box_right, box_bottom, id_confidence, id,
         is_facing) = struct.unpack_from(PERSON_SENSOR_FACE_FORMAT, read_data, offset)
        offset = offset + PERSON_SENSOR_FACE_BYTE_COUNT
        face = {
            "box_confidence": box_confidence,
            "box_left": box_left,
            "box_top": box_top,
            "box_right": box_right,
            "box_bottom": box_bottom,
            "id_confidence": id_confidence,
            "id": id,
            "is_facing": is_facing,
        }
        faces.append(face)
    checksum = struct.unpack_from("H", read_data, offset)

    return faces


def map_range(s, a1, a2, b1, b2):
    return b1 + ((s - a1) * (b2 - b1) / (a2 - a1))


displayio.release_displays()

spi = busio.SPI(clock=board.D13, MOSI=board.D11, MISO=board.D12)
while not spi.try_lock():
    pass
spi.configure(baudrate=24000000)  # Configure SPI for 24MHz
spi.unlock()

tft_cs = board.D22
tft_dc = board.D9
tft_reset = board.D5

display_bus = displayio.FourWire(
    spi, command=tft_dc, chip_select=tft_cs, reset=tft_reset)

display = ST7789(display_bus, width=240, height=240, rowstart=80)

dw, dh = 240, 240  # display dimensions

# load our eye and iris bitmaps
eyeball_bitmap, eyeball_pal = adafruit_imageload.load("eye0_ball2.bmp")
iris_bitmap, iris_pal = adafruit_imageload.load("eye0_iris0.bmp")
iris_pal.make_transparent(0)

# compute or declare some useful info about the eyes
iris_w, iris_h = iris_bitmap.width, iris_bitmap.height  # iris is normally 110x110
iris_cx, iris_cy = dw//2 - iris_w//2, dh//2 - iris_h//2
r = 20  # allowable deviation from center for iris

main = displayio.Group()
display.show(main)
eyeball = displayio.TileGrid(eyeball_bitmap, pixel_shader=eyeball_pal)
iris = displayio.TileGrid(
    iris_bitmap, pixel_shader=iris_pal, x=iris_cx, y=iris_cy)
main.append(eyeball)
main.append(iris)
x, y = iris_cx, iris_cy
tx, ty = x, y
next_time = time.monotonic()
eye_speed = 0.25
twitch = 2

# The Pico doesn't support board.I2C(), so check before calling it. If it isn't
# present then we assume we're on a Pico and call an explicit function.
try:
    i2c = board.I2C()
except:
    i2c = busio.I2C(scl=board.GP5, sda=board.GP4)

# Wait until we can access the bus.
while not i2c.try_lock():
    pass

while True:
    faces = []
    faces = get_faces(i2c)
    facex, facey = None, None
    if len(faces) > 0:
        facex0 = (faces[0]['box_right'] - faces[0]
                  ['box_left']) // 2 + faces[0]['box_left']
        facey0 = (faces[0]['box_bottom'] - faces[0]
                  ['box_top']) // 2 + faces[0]['box_top']
        facex = map_range(facex0, 0, 255, 40, -40)
        facey = map_range(facey0, 0, 255, -40, 40)
        tx = iris_cx + facex
        ty = iris_cy + facey

    x = x * (1-eye_speed) + tx * eye_speed  # "easing"
    y = y * (1-eye_speed) + ty * eye_speed
    iris.x = int(x)
    iris.y = int(y)
    display.refresh()
 
I just checked my SparkFun order and rather annoyingly it is set in an "exception" state, I guess due to a change I made to it after I placed it, which I understood would resolve automatically but doesn't seem to have. That means my order hasn't even shipped yet, so it won't be until sometime in the New Year that I'm likely to have the sensors and time to try them :( Ah well, the good news is the other screen types I ordered have arrived, as well as an ESP32-CAM which might be a workable substitute for the Human Sensor in the meantime.

I made the SPI frequency configurable (set SPI_SPEED in config.h), which provides a pretty big FPS boost to eye types that aren't CPU-bound, i.e. basically any eye that has eyelids. The eyes without eyelids generally take longer to render a frame than the DMA transfer takes, so there's not much gain there.
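For reference, it's just a define in config.h; the value here is only an example and you'd tune it to whatever your displays tolerate:

Code:
// config.h -- SPI clock used for the display transfers (value shown is just an example)
#define SPI_SPEED 30000000   // 30 MHz; some panels cope with quite a bit more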

I hear your points about the code in main.cpp vs config.h and I've moved some of it out to config.h to help keep it in one place (with more still to be done). It's something of a moot point ultimately because I consider both those files to be user/non-library code, so I haven't paid a lot of attention to it yet. Eventually that code will be moved into one or more examples of how to use the library.

Thanks for the People Sensor examples! I'd seen a few of those but not all of them. Looks like it should be pretty easy to get up and running. At some point I'll probably try and hack together some ESP32-CAM firmware that outputs data in the same format as the People Sensor so it's easy to support both.

I've also got a bit of work-in-progress to only render partial frames for each loop() call. This should make it much easier to support time-sensitive features like sound and LED strips without the eyes hogging too much CPU.

One question I have for you is, how are you powering the Person Sensor + screens? The 3.3v output on the Teensy is rated up to 250mA. The Person Sensor draws ~150mA and I believe the screens are up to around 50mA each, so it sounds like it's pretty tight to power all three modules directly off the Teensy.
 
I just checked my SparkFun order and rather annoyingly it is set in an "exception" state, I guess due to a change I made to it after I placed it, which I understood would resolve automatically but doesn't seem to have. That means my order hasn't even shipped yet, so it won't be until sometime in the New Year that I'm likely to have the sensors and time to try them :( Ah well, the good news is the other screen types I ordered have arrived, as well as an ESP32-CAM which might be a workable substitute for the Human Sensor in the meantime.

I made the SPI frequency configurable (set SPI_SPEED in config.h), which provides a pretty big FPS boost to eye types that aren't CPU-bound, i.e. basically any eye that has eyelids. The eyes without eyelids generally take longer to render a frame than the DMA transfer takes, so there's not much gain there.

I hear your points about the code in main.cpp vs config.h and I've moved some of it out to config.h to help keep it in one place (with more still to be done). It's something of a moot point ultimately because I consider both those files to be user/non-library code, so I haven't paid a lot of attention to it yet. Eventually that code will be moved into one or more examples of how to use the library.
Thanks. Yes, I saw those changes. What I want to do is not have to change the drawing code (i.e. main.cpp) at all, and move the config.h stuff into the .ino file that includes or calls the code in main.cpp. That way I have several different directories with just a config/.ino file and a common library for everything else. At the moment, I have 9 different directories for the different eye combinations.

Thanks for the People Sensor examples! I'd seen a few of those but not all of them. Looks like it should be pretty easy to get up and running. At some point I'll probably try and hack together some ESP32-CAM firmware that outputs data in the same format as the People Sensor so it's easy to support both.

I've also got a bit of work-in-progress to only render partial frames for each loop() call. This should make it much easier to support time-sensitive features like sound and LED strips without the eyes hogging too much CPU.

Great.

One question I have for you is, how are you powering the Person Sensor + screens? The 3.3v output on the Teensy is rated up to 250mA. The Person Sensor draws ~150mA and I believe the screens are up to around 50mA each, so it sounds like it's pretty tight to power all three modules directly off the Teensy.
At the moment, I'm just using the 3.3v power for everything. I put the second ST7789 screen on the same pins as the first screen, and I hooked up a power meter. It is drawing 225mA, so yes, it is close. It is 190mA with only one screen. On one of the prototype boards that I have wired up, I have the option of using VIN to power the SPI devices instead of 3.3v, so I will likely need to redo the second board to add that as an option.
 
(Moved conversation from this thread)

The sound is pretty acceptable with an eye definition without lids (I think it's fisheye, although I don't have the names printing out right now). Sound gets pretty stuttery with the eyes with lids right now.

Hmm, that's interesting, and somewhat surprising, as I was expecting it to be the other way around. The eyes without lids take longer to render than the eyes with lids. Generally the eyes without lids take longer to render a (single display) frame than it takes for the DMA transfer, while the eyes with lids render frames faster than the DMA transfer time. This means the no-lid eyes virtually always render a frame and initiate a DMA transfer every time loop() is called, and so loop() always takes a while to run. The eyes with lids however often have to wait until the DMA is finished before starting the next frame. If you have a look at EyeController::renderFrame() you'll see the method exits immediately if the display is not available (i.e. a DMA transfer is still taking place), and hence loop() might get called 10s or 100s of thousands of times in quick succession, doing almost nothing each time, before it can eventually perform the next render. I wonder if this is somehow affecting your audio code's behaviour? I don't know much about this, but maybe the constant DMA to the screens is starving out the audio side of things?
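In pseudo-ish form, the behaviour I'm describing is roughly this (not the actual EyeController code, just an illustration with made-up function names):

Code:
// Rough illustration only -- not the real EyeController code.
void loop() {
  if (!displayAvailable()) {     // previous frame's DMA transfer still in progress
    return;                      // exits almost immediately; for lidded eyes this can
  }                              // happen tens of thousands of times between renders
  renderNextFrame();             // for no-lid eyes this takes longer than the DMA itself
  startAsyncTransfer();          // kick off the DMA and return straight away
}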

I was able to put the SPI parameter up to 55,000,000 but not 60,000,000.

Note that increasing the SPI speed only tends to improve the framerate of the eyes with lids, since faster DMA transfers mean they don't have to wait as long before starting the next render.

I've got a few ideas on how to speed up eyes with lids further, e.g. by using an additional buffer, or by not including any unchanged eyelid rows in the DMA transfer. I've also got some work-in-progress code to only render a (configurable) portion of the screen on each call to loop() so other code can get more share of the CPU time.

I wonder if yield() calls in some crucial spots would help the stuttering, though?

You could try adding a yield() call in EyeController.h line 492:

Code:
      } // end column
      yield();
    } // end scanline

Given it's eyes with lids that cause audio stuttering though, I'm not sure that it will help as it seems like something else (possibly DMA related?) is going wrong.

with ALL of Chris's eye definitions (including leopard) and the audio stuff, there seems to be lots of memory left

I only have a Teensy 4.0 so can fit between 8-13 eyes onto it (depending on the eyes, some are much bigger than others). It should be quite possible to optimise the eye size even further, which I hope to get around to eventually. Loading on demand from SD card would obviously make a big difference too.

Fun fact: I was out of town with my laptop and didn't have access to my Teensy, so instead of coding I decided to try creating that leopard eye from a photo I took in Botswana a couple of months ago :) I'm not completely happy with the result and will probably end up tweaking them a bit more.

Leopard.jpg

...also beginning support for person sensor.

I still haven't received my Person Sensors yet so wrote that code without being able to test it. If you're feeling especially brave you could give it a go; otherwise I'm guessing I'll have the sensors to try it myself in another week or two.

I've also got some untested code that adds ST7789 support. Hopefully I'll get this tested and committed in the next few days.
 
Fun fact: I was out of town with my laptop and didn't have access to my Teensy, so instead of coding I decided to try creating that leopard eye from a photo I took in Botswana a couple of months ago :) I'm not completely happy with the result and will probably end up tweaking them a bit more.

I was wondering where it came from.


I still haven't received my Person Sensors yet so wrote that code without being able to test it. If you're feeling especially brave you could give it a go; otherwise I'm guessing I'll have the sensors to try it myself in another week or two.

Yes, I was thinking of doing it tomorrow.

I've also got some untested code that adds ST7789 support. Hopefully I'll get this tested and committed in the next few days.

Ok.

In terms of portability and configuring different setups, you might want to check if the person sensor is actually present on the I2C bus.
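For example, a simple probe at startup would be enough to decide whether to enable the tracking code at all (sketch only, using the standard Wire API):

Code:
#include <Wire.h>

// The Person Sensor responds at I2C address 0x62 (see the CircuitPython code above).
static const uint8_t PERSON_SENSOR_I2C_ADDRESS = 0x62;

bool personSensorPresent() {
  Wire.beginTransmission(PERSON_SENSOR_I2C_ADDRESS);
  return Wire.endTransmission() == 0;   // 0 means something ACKed at that address
}

// In setup(): Wire.begin(); then only enable face tracking if personSensorPresent().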

As I said in the other thread, I may want to do a variant with only one eye. I think a lot has been cleaned up, but until I actually try it, I won't know.

At some point, you may want to let config.h add support for switching eyes, beyond just setting the eye duration. For example, when setting up a prop, it might be better to have a push button to switch between the eyes, or some other method (shaking with a sensor, etc.). I kind of prefer to have the eyes go through sequentially on the first run and then go random (which I have in my branch), for instance.

When doing digital reads on pins like BLINK_PIN, you may want to use a bounce library, and only do the blink action when the button is first pressed (after the bounce period).
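For example, with something like the Bounce2 library (pin usage and interval here are just illustrative; BLINK_PIN is whatever config.h defines it to be):

Code:
#include <Bounce2.h>

// Illustrative sketch: debounce BLINK_PIN and only trigger on the initial press.
Bounce blinkButton;

void setupBlinkButton() {
  blinkButton.attach(BLINK_PIN, INPUT_PULLUP);   // BLINK_PIN from config.h
  blinkButton.interval(25);                      // 25 ms debounce period
}

void pollBlinkButton() {
  blinkButton.update();
  if (blinkButton.fell()) {   // fires once per press, not continuously while held
    // do the blink action here
  }
}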

Setting the random seed may want to be under config.h control. After all, A3 may be used elsewhere. I will usually factor in the internal temperature and use a floating analog pin less likely to be used (currently A11). In the past, I used to read a value from EEPROM and then write the new seed back into EEPROM. I don't recall if the Teensy 4 series has a random number instruction or not.
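Something along these lines is the kind of thing I mean (pin and mixing are arbitrary; tempmonGetTemp() is the Teensy 4.x internal temperature read, if I have the name right):

Code:
#include <Arduino.h>
#include <EEPROM.h>

// Illustrative seed routine: noise from a floating analog pin, the internal
// temperature, and a value carried over in EEPROM from the previous run.
void seedRandomNumbers() {
  uint32_t seed = analogRead(A11);                // floating pin, rarely wired up
  seed ^= (uint32_t)(tempmonGetTemp() * 1000.0f); // Teensy 4.x internal temperature
  uint32_t previous = 0;
  EEPROM.get(0, previous);
  seed ^= previous;
  randomSeed(seed);
  EEPROM.put(0, seed * 2654435761u);              // stash something different for next boot
}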
 
Yes, I was thinking of doing it tomorrow.

Great, if you do I'm of course interested to hear how it goes. My sensors were posted about two weeks ago but the tracking number is not working and I'm expecting them to get held up in customs for tax payments anyway so I suspect I still have a bit of a wait for mine :(

As I said in the other thread, I may want to do a variant with only one eye. I think a lot has been cleaned up, but until I actually try it, I won't know.

I've just checked in some initial ST7789 support. Annoyingly I was only able to find one of the two ST7789 displays I bought! I'm sure I'll find the second one soon enough, but in the meantime I've only been able to test the display with a single eye, and you'll be pleased to know it worked OK. There's no custom SPI support yet, and with two displays there might be problems with the way the reset pin is handled, but that shouldn't be too hard to fix. Also, my display doesn't have a CS pin so I haven't tested for displays that do have them.

Note that currently none of the eyes have symmetrical eyelids, so if you want symmetrical eyelids for a single display you'll have to regenerate eyes using the appropriate eyelid bitmaps yourself for now.

At some point, you may want to let config.h add support for switching eyes, beyond just setting the eye duration. For example, when setting up a prop, it might be better to have a push button to switch between the eyes, or some other method (shaking with a sensor, etc.). I kind of prefer to have the eyes go through sequentially on the first run and then go random (which I have in my branch), for instance.

When doing digital reads on pins like BLINK_PIN, you may want to use a bounce library, and only do the blink action when the button is first pressed (after the bounce period).

Sure, but as I've said before I consider all this sort of functionality somewhat application-specific, outside of the scope of the library. Not to say it shouldn't be included in bundled example code, I'm just treating it as fairly low priority for now.

Setting the random seed may want to be under config.h control. After all, A3 may be used elsewhere. I will usually factor in the internal temperature and use a floating analog pin less likely to be used (currently A11). In the past, I used to read a value from EEPROM and then write the new seed back into EEPROM. I don't recall if the Teensy 4 series has a random number instruction or not.

I think that line of code came from one of the other Uncanny Eyes codebases. I guess how the seed is generated is not exactly mission-critical :D but I'll change it to @defragster's suggestion since it looks pretty straightforward to do so.
 
Heh, so about 10 minutes after my previous post I went and checked the mail, and my Person Sensors have arrived! :cool:
 
maybe the constant DMA to the screens is starving out the audio side of things?

That has been my suspicion overnight. I will try your insightful suggestion about where the yield() will do the most good first (Thanks!) and if that doesn't make much difference I will try turning async off.

I had copied my audio configuration from another program and may simplify it further. That program was syncing a NeoPixel display to music using FFT, which seems to have no immediate relevance here (although I guess I could imagine blinking or something). I already commented out the two FFT lines but did not do anything more to the audio layout yet. I think there's built-in FFT hardware, so it is probably much less expensive than I would imagine.

I turned the processor speed up from 600 to 720 MHz and didn't notice much difference, which tended to make me think the DMA was the issue. I think efficient rendering of the eyes is good, but increasing the speed of updating the displays may just give an impression of hypervigilance (continually looking in all directions), so finding more time for audio may not hurt the overall realism of the eyes.

I'll defer testing till later to avoid irritating my wife with stuttery Bald Mtn.
 
That has been my suspicion overnight. I will try your insightful suggestion about where the yield() will do the most good first (Thanks!) and if that doesn't make much difference I will try turning async off.

With yield() where you suggested, it was still very stuttery. Switching to async off helped the sound. The fps dropped by about half, but the eyes still seem quite active and IMHO no less realistic. With my hearing aids in, I am less convinced that switching to fish or skull makes a big difference in the stuttering.

With async off, the demon eye looks like this:
 
For those looking at my GitHub tree, I reorganized it to make merges easier. The 'main' branch is now just a mirror of Chris's branch. The 'meissner' branch still contains the changes from this morning but is now dead; I won't be updating it. The 'meissner2' branch is now the latest branch. Rather than deleting code that I modified, I put it inside #ifdef ORIG_CODE ... #endif lines. Hopefully this will make future merges easier, since the original code is still there.

I did try out the Person Sensor. It seems to work, but it isn't as useful when I'm at the desk, so I re-disabled it.
 
Ok, I have a dual setup now, GC9A01A (round 1.28") on one side, and ST7789 (square, Adafruit 1.3" display) on the other, both powered by Teensy 4.1's. I'm doing this via separate git branches:

  • main is the branch that syncs with Chris's branch
  • meissner2 is the branch that I do most of the changes on (currently defaults to 2 GC9A01A's)
  • st7789 is the branch that just changes the defaults for the ST7789 displays that I have
  • gc9a01a is the specific branch for 2 GC9A01A's.

As before, I needed to make build*.cpp files in the top-level (src) directory because the Arduino builder won't build the files in the sub-directories. I had to add two build files, one for the GC9A01A and one for the ST7789. This in turn needed a new .h file (config-display.h) to define USE_GC9A01A or USE_ST7789. This include file is then included in config.h and in the two display build files, so that each display build can determine whether or not to build the code. This is due to an issue where the Arduino builder just compiles everything and throws it at the linker, which then discards the unused functions; however, both the ST7789 and GC9A01A displays define the same external symbols, so the link fails.
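Roughly, each of those build files is just a guarded include, something like this (file names here are mine and only illustrative):

Code:
// build-st7789.cpp (hypothetical name) -- compiles the ST7789 code only when
// config-display.h selects it, so the other display's symbols never get built.
#include "config-display.h"

#ifdef USE_ST7789
#include "displays/DisplayST7789.cpp"   // illustrative path/name
#endif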

I also needed to have a different named .ino dummy file in each branch, because Arduino IDE insists that the .ino file be named the same as the directory.

While I would eventually like everything moved to libraries, with the config.h file essentially becoming the .ino file like I had in my previous reformulation, it is usable to have different branches and just do 'git rebase' on the sub-branches. Before my current setup using git, when I used CVS, it would have been much harder to do this merging.

In terms of speed, the GC9A01A displays range from being much faster to only slightly faster or the same speed, depending on the eye. Eyes like hazel, bigBlue, and the cat eyes tend to be about twice as fast in terms of fps, while dragon is faster but not double the speed. Skull and toonstripe seem to be roughly the same speed.

Oh, and BTW, in running the dual boards it is really useful to have the ON/OFF pin on the Teensys. I would put both Teensys to sleep, then connect the ON/OFF pin to ground on both boards at the same time with jumper wires, and for the first round they would each run the same 19 eyes in succession, each eye for 16 seconds. I have HAVE_FPS defined in Display.h so each eye prints the fps. That being said, I do wish that pin on the Teensy 4.0 could still have been a true DAC.
 
I was slow to realize that the Teensy 4.1 PSRAM gets used automagically by the compiler. It is clear that Michael and Chris both understood that. What I realized overnight is that I probably need to be intelligent about what goes in EXTMEM. I started by putting four of the largest eye definition files into EXTMEM rather than PROGMEM. That accomplished less than I'd hoped:
Code:
Memory Usage on Teensy 4.1:
  FLASH: code:187664, data:2958508, headers:8396   free for files:4971896
   RAM1: variables:67328, code:182824, padding:13784   free for local variables:260352
   RAM2: variables:15520  free for malloc/new:508768
 EXTRAM: variables:132512

Ideally, it seems to me, everything that gets called only in setup() would go into EXTMEM if stuff didn't fit on the SOC. That seems tough to achieve.
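For anyone who hasn't tried it, marking a definition for PSRAM is just the EXTMEM attribute instead of PROGMEM, e.g. (array name and size made up):

Code:
// PROGMEM keeps the table in flash; EXTMEM puts it in the Teensy 4.1's PSRAM.
// Array name and size are illustrative only.
EXTMEM const uint16_t bigEyeTexture[256 * 256] = { /* ... */ };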

Haven't done any testing with those changes yet. It may be interesting to see how the different eyes sound.

Another thought I've had about where I would hope to take this is to play different sounds based on which eye was chosen. I'm pretty happy with the change I made to select eyes:
Code:
  static elapsedMillis eyeTime{};
  if (fastTouchRead(41)> 23 && eyeTime > EYE_DURATION_MS) {   //jrr duration is here as a sort of debounce
    nextEye();
    eyeTime = 0;
  }

I have truncated sounds in some other projects when working without an SD card, but limiting sound memory by three orders of magnitude gets pretty drastic. The Teensy 4.1's SD slot uses SDIO, and I wonder whether cards with different UHS ratings perform differently in this setting.
 
I have truncated sounds in some other projects when working without an SD card, but limiting sound memory by three orders of magnitude gets pretty drastic. The Teensy 4.1's SD slot uses SDIO, and I wonder whether cards with different UHS ratings perform differently in this setting.
If you didn't know it, you can use the unused flash memory as a LittleFS file system. And you can export this flash memory via MTP so that you can update the sounds being played without having to re-flash the Teensy. The Teensy 4.1 has 8 megabytes of flash while the Teensy 4.0 only has 2 megabytes.
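A minimal sketch of setting that up (the size is arbitrary; the class is LittleFS_Program from the Teensy LittleFS library, if I have the name right):

Code:
#include <LittleFS.h>

// Carve a chunk of the unused program flash out as a LittleFS filesystem.
// 4 MB is just an example -- leave enough flash for the sketch itself.
LittleFS_Program progFS;

void setupProgramFilesystem() {
  if (!progFS.begin(4 * 1024 * 1024)) {
    Serial.println("LittleFS (program flash) init failed");
  }
  // Files copied here (e.g. over MTP) can then be opened with
  // progFS.open("sound.wav", FILE_READ).
}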

However, I don't think we have an audio function that plays sounds from a LittleFS filesystem, only from the SD card.

But I think, if we can, copying the file from something that exports a file system into a larger in-memory buffer that holds the entire stream would simplify the audio playing, since it doesn't have to read blocks of data; it is just one big linear array. I have peeked under the covers of the Audio system, and the way I imagine it, if the .WAV is held entirely in linear memory, the audio routines that run at interrupt level don't need to block. Sure, when you get to larger sizes, things like cache latency still affect performance, but I suspect it is faster than doing a read operation via QSPI in chunks.

We have AudioPlayMemory that can be used. It is mono only, but presumably you could have two objects, one for the left side and one for the right, and use a mixer. What I'm imagining is something that takes a WAV file from a file system and converts it to two arrays in memory that use AudioPlayMemory to play the sound without having to segment the buffers. We could optimize this by doing the conversion on the PC and just storing two raw files that are easily copied to the buffer.
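As a rough sketch of the playback end of that (the left/right arrays are assumed to already be in the format AudioPlayMemory expects, i.e. converted on the PC, e.g. with the wav2sketch tool):

Code:
#include <Audio.h>

// Two mono AudioPlayMemory players feeding the left and right I2S channels.
// leftSample / rightSample are illustrative names for pre-converted sample arrays.
extern const unsigned int leftSample[];
extern const unsigned int rightSample[];

AudioPlayMemory playLeft;
AudioPlayMemory playRight;
AudioMixer4     mixLeft;
AudioMixer4     mixRight;
AudioOutputI2S  i2sOut;

AudioConnection c1(playLeft,  0, mixLeft,  0);
AudioConnection c2(playRight, 0, mixRight, 0);
AudioConnection c3(mixLeft,   0, i2sOut,   0);   // left channel
AudioConnection c4(mixRight,  0, i2sOut,   1);   // right channel

void setupAudio() {
  AudioMemory(12);              // audio block pool for the library
}

void playEyeSound() {
  playLeft.play(leftSample);    // each play() streams straight out of linear memory
  playRight.play(rightSample);
}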
 
PR#451 is in, which aims to make playing raw or WAV audio from any filesystem possible. However, there is still the issue that updating the eyes takes 40-50ms, and these objects only buffer a few milliseconds of audio data, so you might still encounter issues. Plus, of course, you're limited to <8MB of data, which is about 90s of audio if you allow zero space for eye definitions.

As noted in my post #35, with a bit of adaptation to the eyes code my buffered SD player works well, and I've also stolen (well, adopted) the filesystem specification idea. Note I haven't re-written any of my demo code as yet; it seems the system is in a bit of flux while ideas are being worked out, so I'm going to concentrate on other things for a while. But of course if someone needs support to get the buffered player working I'll try to help...
 
If people are following my git sources, I have removed the multiple branches. Instead I made four config-display-<xxx>.h files, and in the directory where you have the files checked out, you have to make a symlink from the appropriate file to config-display.h. I kept getting messed up with git and rebases, etc. In my Perl script to rebuild ~/Arduino, I now make four separate directories and symlink the appropriate config-display-<xxx>.h file in each.

<edit>
I fixed the problem with configuring just one eye.
 
PR#451 is in, which aims to make playing raw or WAV audio from any filesystem possible. However, there is still the issue that updating the eyes takes 40-50ms, and these objects only buffer a few milliseconds of audio data, so you might still encounter issues. Plus, of course, you're limited to <8MB of data, which is about 90s of audio if you allow zero space for eye definitions.

As noted in my post #35, with a bit of adaptation to the eyes code my buffered SD player works well, and I've also stolen (well, adopted) the filesystem specification idea. Note I haven't re-written any of my demo code as yet; it seems the system is in a bit of flux while ideas are being worked out, so I'm going to concentrate on other things for a while. But of course if someone needs support to get the buffered player working I'll try to help...

Ok, good to know.
 
I found this last night but have not used it.
https://github.com/FrankBoesing/Ard...xamples/Mp3FilePlayerLFS/Mp3FilePlayerLFS.ino

Using the flash memory means having about three orders of magnitude (1000x) less memory for sounds.

OK, it sounds like the SD card is best for longer stuff, though for space-cramped things where you don't need lots of sounds, flash or internal memory would work well. In particular, I have in the back of my mind updating my wizard's staff, and there the extra 1" of length on the Teensy 4.1 might be an issue. A Teensy 4.0 + third-party I2S + small speaker will roughly fit under one round GC9A01A.
 