I'm setting up to do a branch of the tree on GitHub, so it should be easier to send you patches for individual changes.

Thanks for all the additional feedback and your code changes. I'll incorporate what makes sense and keep working towards making things easier to use and customise.
You'll be pleased to hear I just ordered a couple of ST7789 displays from AliExpress, as well as a single 128x128 one. When they (eventually!) arrive I'll have a go at adding support for them.
On the bad news front, I've looked into the performance discrepancy I've been seeing, and it turns out my old code was skipping some of the updates but still counting them as rendered frames, so the FPS numbers I was getting were higher than reality! The actual rate is generally (depending on the eye and eyelid state) in the region of 20-25fps. I figured out a hack that speeds things up about 10%, but I'm not sure I'll check that in as it's rather nasty and I don't think it'll be compatible with the other display types. I also know that changing the code to process columns at a time (rather than rows) slowed things down about 10-15%, but that was done to make the eyelid logic simpler and use less memory, so I don't think I'll change that back. Ah well, I'd still like to improve performance, but for now it's not going to be a priority.
In somewhat related news, I've put myself on the waiting list for one of these. It will be absolutely perfect for getting the eyes to track people's faces, much better than the expensive (and quite limited) IR sensor solution discussed earlier, or trying to squeeze the recognition logic into the already overloaded Teensy! I've also ordered an ESP32-CAM to try out, which can be used to do something similar.

Interesting sensor. I would have expected it to be a lot pricier for what it delivers. I put in a pre-order for a few also. Thanks.
BTW, I just got mail from usefulsensors.com when I inquired about availability, and I was told they just sent off a bunch of sensors. Presumably SparkFun will be shipping out orders by the end of next week.
# Spooky eyeball using a Person Sensor.
# Adapted from code by @todbot / Tod Kurt, original at
# https://github.com/todbot/circuitpython-tricks/.
import board
import busio
import digitalio
import displayio
import random
import struct
import time

import adafruit_imageload
from adafruit_st7789 import ST7789

last_person_sensor_time = 0


def get_faces(i2c):
    global last_person_sensor_time
    # The person sensor has the I2C ID of hex 62, or decimal 98.
    PERSON_SENSOR_I2C_ADDRESS = 0x62

    # We will be reading raw bytes over I2C, and we'll need to decode them
    # into data structures. These strings define the format used for the
    # decoding, and are derived from the layouts defined in the developer
    # guide.
    PERSON_SENSOR_I2C_HEADER_FORMAT = "BBH"
    PERSON_SENSOR_I2C_HEADER_BYTE_COUNT = struct.calcsize(
        PERSON_SENSOR_I2C_HEADER_FORMAT)

    PERSON_SENSOR_FACE_FORMAT = "BBBBBBbB"
    PERSON_SENSOR_FACE_BYTE_COUNT = struct.calcsize(PERSON_SENSOR_FACE_FORMAT)

    PERSON_SENSOR_FACE_MAX = 4
    PERSON_SENSOR_RESULT_FORMAT = PERSON_SENSOR_I2C_HEADER_FORMAT + \
        "B" + PERSON_SENSOR_FACE_FORMAT * PERSON_SENSOR_FACE_MAX + "H"
    PERSON_SENSOR_RESULT_BYTE_COUNT = struct.calcsize(
        PERSON_SENSOR_RESULT_FORMAT)

    # How long to pause between sensor polls.
    PERSON_SENSOR_DELAY = 0.3

    if time.monotonic() - last_person_sensor_time < PERSON_SENSOR_DELAY:
        return []
    last_person_sensor_time = time.monotonic()

    read_data = bytearray(PERSON_SENSOR_RESULT_BYTE_COUNT)
    i2c.readfrom_into(PERSON_SENSOR_I2C_ADDRESS, read_data)

    offset = 0
    (pad1, pad2, payload_bytes) = struct.unpack_from(
        PERSON_SENSOR_I2C_HEADER_FORMAT, read_data, offset)
    offset = offset + PERSON_SENSOR_I2C_HEADER_BYTE_COUNT

    num_faces = struct.unpack_from("B", read_data, offset)[0]
    offset = offset + 1

    faces = []
    for i in range(num_faces):
        (box_confidence, box_left, box_top, box_right, box_bottom,
         id_confidence, id, is_facing) = struct.unpack_from(
            PERSON_SENSOR_FACE_FORMAT, read_data, offset)
        offset = offset + PERSON_SENSOR_FACE_BYTE_COUNT
        face = {
            "box_confidence": box_confidence,
            "box_left": box_left,
            "box_top": box_top,
            "box_right": box_right,
            "box_bottom": box_bottom,
            "id_confidence": id_confidence,
            "id": id,
            "is_facing": is_facing,
        }
        faces.append(face)
    checksum = struct.unpack_from("H", read_data, offset)
    return faces


def map_range(s, a1, a2, b1, b2):
    return b1 + ((s - a1) * (b2 - b1) / (a2 - a1))


displayio.release_displays()
spi = busio.SPI(clock=board.D13, MOSI=board.D11, MISO=board.D12)
while not spi.try_lock():
    pass
spi.configure(baudrate=24000000)  # Configure SPI for 24MHz
spi.unlock()

tft_cs = board.D22
tft_dc = board.D9
tft_reset = board.D5
display_bus = displayio.FourWire(
    spi, command=tft_dc, chip_select=tft_cs, reset=tft_reset)
display = ST7789(display_bus, width=240, height=240, rowstart=80)
dw, dh = 240, 240  # display dimensions

# load our eye and iris bitmaps
eyeball_bitmap, eyeball_pal = adafruit_imageload.load("eye0_ball2.bmp")
iris_bitmap, iris_pal = adafruit_imageload.load("eye0_iris0.bmp")
iris_pal.make_transparent(0)

# compute or declare some useful info about the eyes
iris_w, iris_h = iris_bitmap.width, iris_bitmap.height  # iris is normally 110x110
iris_cx, iris_cy = dw // 2 - iris_w // 2, dh // 2 - iris_h // 2
r = 20  # allowable deviation from center for iris

main = displayio.Group()
display.show(main)
eyeball = displayio.TileGrid(eyeball_bitmap, pixel_shader=eyeball_pal)
iris = displayio.TileGrid(
    iris_bitmap, pixel_shader=iris_pal, x=iris_cx, y=iris_cy)
main.append(eyeball)
main.append(iris)
x, y = iris_cx, iris_cy
tx, ty = x, y
next_time = time.monotonic()
eye_speed = 0.25
twitch = 2

# The Pico doesn't support board.I2C(), so check before calling it. If it
# isn't present then we assume we're on a Pico and call an explicit function.
try:
    i2c = board.I2C()
except:
    i2c = busio.I2C(scl=board.GP5, sda=board.GP4)
# Wait until we can access the bus.
while not i2c.try_lock():
    pass

while True:
    faces = get_faces(i2c)
    facex, facey = None, None
    if len(faces) > 0:
        face = faces[0]
        facex0 = (face['box_right'] - face['box_left']) // 2 + face['box_left']
        facey0 = (face['box_bottom'] - face['box_top']) // 2 + face['box_top']
        facex = map_range(facex0, 0, 255, 40, -40)
        facey = map_range(facey0, 0, 255, -40, 40)
        tx = iris_cx + facex
        ty = iris_cy + facey
    x = x * (1 - eye_speed) + tx * eye_speed  # "easing"
    y = y * (1 - eye_speed) + ty * eye_speed
    iris.x = int(x)
    iris.y = int(y)
    display.refresh()
Thanks. Yes, I saw those changes. What I want to do is not have to change the drawing code (i.e. main.cpp) at all, and move the config.h stuff into the .ino file that includes or calls the code in main.cpp. That way I can have several different directories, each containing just a config/.ino file, plus a common library for everything else. At the moment, I have 9 different directories for the different eye combinations.

I just checked my SparkFun order and rather annoyingly it is set in an "exception" state, I guess due to a change I made to it after I placed it, which I understood would resolve automatically but doesn't seem to have. That means my order hasn't even shipped yet, so it won't be until sometime in the New Year that I'm likely to have the sensors and time to try them. Ah well, the good news is the other screen types I ordered have arrived, as well as an ESP32-CAM which might be a workable substitute for the Person Sensor in the meantime.
I made the SPI frequency configurable (set SPI_SPEED in config.h), which provides a pretty big FPS boost to eye types that aren't CPU-bound, i.e. basically any eye that has eyelids. The eyes without eyelids generally take longer to render a frame than the DMA transfer takes, so there's not much gain there.
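For illustration, it's just a compile-time constant in config.h; the value here is only an example, not the shipped default:

// config.h: SPI clock for the displays, in Hz. Higher is faster, but what
// works depends on wiring and the panel; see below for what others reached.
#define SPI_SPEED 30000000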
I hear your points about the code in main.cpp vs config.h, and I've moved some of it out to config.h to help keep it in one place (with more still to be done). It's something of a moot point ultimately because I consider both those files to be user/non-library code, so I haven't paid a lot of attention to it yet. Eventually that code will be moved into one or more examples of how to use the library.
Thanks for the Person Sensor examples! I'd seen a few of those but not all of them. Looks like it should be pretty easy to get up and running. At some point I'll probably try to hack together some ESP32-CAM firmware that outputs data in the same format as the Person Sensor, so it's easy to support both.
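For anyone else tempted to try the same thing: the wire format is fully pinned down by the struct strings in the CircuitPython example above (header "BBH", faces "BBBBBBbB" x4, trailing "H" checksum), so the ESP32-CAM firmware would just fill in a matching packed struct. A sketch of that layout in C, with field names taken from the Python code:

#include <stdint.h>

// Packed mirror of the Person Sensor I2C result decoded above.
typedef struct __attribute__((packed)) {
  uint8_t box_confidence;  // how certain the detection is, 0-255
  uint8_t box_left;        // bounding box coordinates, 0-255 across the frame
  uint8_t box_top;
  uint8_t box_right;
  uint8_t box_bottom;
  uint8_t id_confidence;
  int8_t  id;              // signed ("b" in the Python format string)
  uint8_t is_facing;
} person_sensor_face_t;

typedef struct __attribute__((packed)) {
  uint8_t  reserved[2];    // the two "B" pad bytes in the header
  uint16_t data_size;      // payload_bytes in the Python code
  uint8_t  num_faces;
  person_sensor_face_t faces[4];
  uint16_t checksum;
} person_sensor_results_t;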
I've also got a bit of work-in-progress to only render partial frames for each loop() call. This should make it much easier to support time-sensitive features like sound and LED strips without the eyes hogging too much CPU.
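The rough shape of it (illustrative only; renderEyeRows(), updateAudio() and the constants are stand-in names, not the real code):

// Draw only a band of rows per loop() pass, so each pass returns quickly
// and the time-sensitive work still runs frequently.
const int SCREEN_HEIGHT = 240;
const int ROWS_PER_PASS = 60;

void loop() {
  static int nextRow = 0;
  renderEyeRows(nextRow, ROWS_PER_PASS);      // hypothetical partial renderer
  nextRow += ROWS_PER_PASS;
  if (nextRow >= SCREEN_HEIGHT) nextRow = 0;  // frame complete, start the next
  updateAudio();                              // hypothetical time-sensitive work
}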
One question I have for you is, how are you powering the Person Sensor + screens? The 3.3V output on the Teensy is rated up to 250mA. The Person Sensor draws ~150mA and I believe the screens are up to around 50mA each, so it sounds like it's pretty tight to power all three modules directly off the Teensy.

At the moment, I'm just using the 3.3V power for everything. I put the 2nd ST7789 screen on the same pins as the first screen, and I hooked up a power meter. It is drawing 225mA, so yes, it is close. It is 190mA with only one screen. On one of the prototype boards that I have wired up, I have the option of using VIN to power the SPI devices instead of 3.3V, so I will likely need to redo the second board to add that as an option.
The sound is pretty acceptable with an eye definition without lids (I think it's fisheye, although I don't have the names printing out right now). Sound gets pretty stuttery with the eyes with lids right now.
I was able to put the SPI parameter up to 55,000,000 but not 60,000,000.
I wonder if a yield() in some crucial spots would help the stuttering, though? Something like:
  } // end column
  yield();
} // end scanline
With ALL of Chris's eye definitions (including leopard) and the audio stuff, there seems to be lots of memory left.
...also beginning support for person sensor.
Fun fact: I was out of town with my laptop and didn't have access to my Teensy, so instead of coding I decided to try creating that leopard eye from a photo I took in Botswana a couple of months ago. I'm not completely happy with the result and will probably end up tweaking it a bit more.
I still haven't received my Person Sensors, so I wrote that code without being able to test it. If you're feeling especially brave you could give it a go; otherwise I'm guessing I'll have the sensors to try it myself in another week or two.
I've also got some untested code that adds ST7789 support. Hopefully I'll get this tested and committed in the next few days.
I was wondering where it came from.
...
I don't recall if the Teensy 4 series has a random number instruction or not.
Yes, I was thinking of doing it tomorrow.
As I said in the other thread, I may want to do a variant with only one eye. I think a lot has been cleaned up, but until I actually try it, I won't know.
At some point, you may want to let config.h add support for how the eyes are switched, besides setting the eye duration. For example, when doing setup for a prop, it might be better to have a push button to switch between the eyes, or some other method (shaking with a sensor, etc.). I kind of prefer to have the eyes go sequentially for the first run, and then go random (which I have in my branch), for instance.
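Something like this is all it takes (setEye() and eyeCount are stand-ins for whatever the library ends up exposing):

// First run: step through every eye in order; after that, pick at random.
void advanceEye(int eyeCount) {
  static int shown = 0;
  if (shown < eyeCount) {
    setEye(shown++);           // sequential first pass
  } else {
    setEye(random(eyeCount));  // random thereafter
  }
}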
Doing digital reads on pins like BLINK_PIN, you may want to use a bounce library, and only do the blink action when the button is first pressed (after the bounce period).
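E.g. with the Bounce2 library (from memory, so double-check its examples; doBlink() is a stand-in for the actual blink trigger):

#include <Bounce2.h>

Bounce blinkButton;

void setup() {
  blinkButton.attach(BLINK_PIN, INPUT_PULLUP);  // button wired to ground
  blinkButton.interval(25);                     // 25ms debounce period
}

void loop() {
  blinkButton.update();
  if (blinkButton.fell()) {  // true once, on the initial press only
    doBlink();
  }
}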
Setting the random seed may also want to be under config.h control. After all, A3 may be used elsewhere. I will usually factor in an internal temperature reading and use a floating pin less likely to be used elsewhere (A11 currently). In the past, I used to read a value from EEPROM and then write a new seed back into EEPROM. I don't recall if the Teensy 4 series has a random number instruction or not.
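A rough sketch of the EEPROM variant (address 0 and the mixing are arbitrary choices; for what it's worth, I believe the Teensy 4's i.MX RT chip does have a hardware TRNG peripheral, which libraries like Entropy can use):

#include <EEPROM.h>

// Seed from last boot's stored value plus a floating pin, then store a
// new value so the next boot starts differently. EEPROM address 0 is arbitrary.
void seedRandom() {
  uint32_t seed;
  EEPROM.get(0, seed);
  seed ^= ((uint32_t)analogRead(A11) << 16) ^ micros();
  randomSeed(seed);
  EEPROM.put(0, seed + 1);
}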
Maybe the constant DMA to the screens is starving out the audio side of things?
Memory Usage on Teensy 4.1:
FLASH: code:187664, data:2958508, headers:8396 free for files:4971896
RAM1: variables:67328, code:182824, padding:13784 free for local variables:260352
RAM2: variables:15520 free for malloc/new:508768
EXTRAM: variables:132512
static elapsedMillis eyeTime{};
if (fastTouchRead(41) > 23 && eyeTime > EYE_DURATION_MS) { // jrr: duration is here as a sort of debounce
  nextEye();
  eyeTime = 0;
}
If you didn't know it, you can use the unused flash memory as a LittleFS file system, and you can export this flash memory via MTP so that you can update the sounds being played without having to re-flash the Teensy. The Teensy 4.1 has 8 megabytes of flash while the Teensy 4.0 only has 2 megabytes.

I have truncated sounds in some other projects when working without an SD card, but limiting sound memory by 3 orders of magnitude gets pretty drastic. The Teensy 4.1's SD slot uses fast 4-bit SDIO, and I wonder whether cards with different UHS ratings perform differently in this setting.
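For reference, the LittleFS-in-program-flash plus MTP setup is pretty minimal; something like this, going from memory of the LittleFS and MTP_Teensy examples (size and name are just examples):

#include <LittleFS.h>
#include <MTP_Teensy.h>

LittleFS_Program progFS;  // filesystem carved out of unused program flash

void setup() {
  progFS.begin(4 * 1024 * 1024);        // e.g. 4MB of the T4.1's 8MB flash
  MTP.begin();
  MTP.addFilesystem(progFS, "sounds");  // appears as a drive over USB MTP
}

void loop() {
  MTP.loop();  // service MTP transfers
}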
PR#451 is in, which aims to make playing raw or WAV audio from any filesystem possible. However, there is still the issue that updating the eyes takes 40-50ms while these objects only buffer a few milliseconds of audio data, so you might still encounter dropouts. Plus, of course, you're limited to <8MB of data, which is about 90s of audio if you allow zero space for eye definitions.
As noted in my post #35, with a bit of adaptation to the eyes code my buffered SD player works well, and I've also adopted (OK, stolen) the filesystem specification idea. Note I haven't re-written any of my demo code as yet; the system seems to be in a bit of flux while ideas are being worked out, so I'm going to concentrate on other things for a while. But of course if someone needs support to get the buffered player working I'll try to help...
I don't think we have an audio function that plays sounds from a LittleFS filesystem.
I found this last night but have not used it.
https://github.com/FrankBoesing/Ard...xamples/Mp3FilePlayerLFS/Mp3FilePlayerLFS.ino
Using the flash memory means having about 3 orders of magnitude, i.e. 1000x, less storage for sounds than an SD card.