Uncanny Eyes is getting expensive

FWIW, I was on fleabay and I noticed a US seller (datacenterliquidation) selling the GC9A01 round displays that have a squarish PCB under them for $4-ish plus s/h (i.e. you will need to hide the PCB in your build). I was able to pick up 4 spares for $22.26. I probably have a few spares already, but as the title of the original post says, I can be hard on these displays, so it is helpful to have spares on hand.

The local steampunk event in May is doing an Alice in Wonderland theme, and I was thinking of doing a Cheshire Cat with one of the eyes.
 
True to the title of the thread, I was wiring up the two new eyes I bought (see the previous post, #151). Of course, these new eyes have a different wiring layout than the previous eyes. So to test them out, I added jumper wires to the breadboard where I have the set of round eyes (with the GC9A01A driver). In doing so, I broke one of the previous round eyes. Sigh....

As cool as the pure round eyes are, I don't think I'm going to use them since they seem so fragile. The new eyes have a round screen that sticks up, but the PCB holding the screen is squarish, complete with mounting holes. I bought a pair of summer daisy sunglasses from the dollar store, and when I popped the lenses out, I discovered that the new eyes' round screen fits perfectly in the hole where the sunglasses lenses went. Cool. In addition, the 40mm convex glass or plastic lens with a lip that Adafruit sells for the Hallowing and Monster M4SKs also fits perfectly. I hope it works out, and if it does I will post pictures, probably the first week of May (the steampunk event is the 2nd week).
 
Still waiting for that picture @MichaelMeissner ;)

Thank you for this thread, it helped a lot!
 
For various reasons, I haven't done much with Teensys in a while. I got a question from somebody who wanted to use the Person Sensor support that Chris had done for TeensyEyes and I incorporated in my version of the code. Evidently the Person Sensor is no longer being made, but there is a similar sensor offered by DFRobot.com. The question was whether anybody had modified TeensyEyes to use it?
Gravity: Offline Edge AI Gesture & Face Detection Sensor – 5 Gestures, 10 Faces, 3m
 
Hi everyone,

Chris Miller implemented the Person Sensor in his TeensyEyes code.
However, this sensor is no longer being manufactured and it's sold out (EOL).

There is a similar alternative sensor available from DFRobot that additionally even offers gesture control:

https://wiki.dfrobot.com/SKU_SEN0626_Gesture_and_Face_Detection_Module#Basic Tutorial

But can it easily be integrated into the TeensyEyes code as a replacement for the Person Sensor?
Or is there any other sensor as an alternative? Has anyone tried that out yet?

It would be a shame if we had to forgo controlling the eyes with a face-recognition sensor.
 
Oh, while I was trying to remember my English and making a profile for this forum, Michael already wrote something about my question. Thank you, Michael, for taking care of it right away. I didn't see your post when I was writing mine here.
 
I glanced at the library. It looks like it shouldn't be too hard to add support. I ordered a face/gesture sensor from Amazon.

I've been having a lot of health issues recently, but I'll see if I can add the support in a bit. I imagine somebody might find it useful to have the uncanny eye display with tracking before spooky season.

While I was at Amazon, I noticed there are now 240x240 SPI displays that have touch sensors as well, so I ordered 2 of them also:
 
In the meantime I have created a new eye. I let the iris rotate slowly (spin).
 

Attachments

  • iris.jpg (23.4 KB)
  • sclera.jpg (331 bytes)
Here you go... maybe this iris can be designed even better; I basically wanted to design an iris that I had never seen before.
Thanks. It is an interesting eye pattern. I've updated my git sources to include it.

I took my Teensy eye platform with two round 240x240 GC9A01A displays out of mothballs, and I fixed the cold solder joint that had shown up. I discovered the person sensor I had mounted on it had lost its camera. Fortunately, I had bought 3 or 4 of the sensors when Sparkfun was selling them 3 years ago. So I set things up again with a replacement sensor, and it does work.

I did have the issue that since I have both sensors on the system, I would sometimes pick up the wrong sensor, which doesn't work that well. :cool:

I tried running the detectGesture I2C sketch, and it doesn't seem to find my face. I may look at it later. It is interesting that the code produces a lot of compiler warnings.

I was thinking that I could also use the gesture sensor and use one of them to tell it to switch eye patterns.
 

A good idea. So far, I have implemented the eye change using a sound sensor that reacts to clapping. Gesture control would of course be much more elegant.
 
My Halloween project: a picture frame hanging on the wall... as inspiration for more projects... it features sound (music and speech), LED animations, an animatronic lid, an ultrasonic distance sensor, a person sensor, a light sensor, and an acoustic sensor.
 

Attachments

  • Halloween picture frame_b.jpeg (197.4 KB)
  • Halloween picture frame_a.jpeg (191.8 KB)
FWIW, I bought a 2nd Gravity gesture sensor, and like the first one, I can't get it to return any faces or gestures. I took the person sensor off the I2C bus, just in case it was interfering with things, and it didn't seem to help.

At a high level, it is kind of weird. The person sensor can identify something like 10 faces, and to get the coordinates of each one, you use the face number to ask where face #1 is, where face #2 is, and so on. But at least as described in the API documentation, there doesn't seem to be any way to get the data for the different faces. Also, the example code assumes a face will always be found, and it doesn't consider that maybe all the sensor sees is a gesture and not a face.

The only thing I can think of is to switch to using a serial UART instead of I2C, and maybe try a non-Teensy 4.1 system just to validate it running on I2C.
 
hmmmmm, ok, the Teensy is not explicitly mentioned in the compatibility list:


But what exactly is so special about the Teensy, compared to the compatible boards, when it comes to the standardized I2C interface?
 
I did it again, this time as an elevated artifact with the same technical features.
(I don't like the stand yet. It doesn't have a Halloween design; I still have to
redecorate it in the right style)...
...and thus one less "useful person sensor" ;-)
I would like to implement two more ideas: a book and a crystal ball (fortune telling ball),
each with an eye, of course.
 

Attachments

  • Halloween Artifact_a.jpg (145.2 KB)
  • Halloween Artifact_b.jpg (116.4 KB)
Hi Michael,
Have you been able to find out why the other sensor from DFRobot isn't working with a Teensy 4.x as it should?
 
I received the DFRobot sensor SEN0626 today. I installed

DFRobot_GestureFaceDetection and DFRobot_RTU

on Arduino IDE 2.3.6, took the I2C variant from the DFRobot sample sketch,
and compiled it for the Teensy 4.0. It's working with the Teensy 4.0.
All the gestures, as well as the x and y coordinates of facial recognition (only one face tested so far),
are recognized and displayed in the serial monitor.

So all that remains is to integrate it correctly and, above all, neatly into the program code for the Uncanny Eye.
 
Hi, I love what you all have been doing with these eyes! I have the eyes running on an ESP32 with two GC9A01 240x240 round displays. I know the code for the Teensy and the ESP32 is completely different, but I was wondering if anyone knew how to create new eyes that work with a 240x240 display and can be used on an ESP32?
 
If you dig into this thread, I believe Chris mentioned how he/she/they did new eyes. The Teensy sources that Chris made and I modified all have the eyes compiled into the source. I would imagine you are using the original source that Chris used. In this source, the eyes are read from the flash memory file system, and you can adjust them by using a photo editor on the png files and a text editor on the config files.

The Adafruit code was originally made for the Hallowing M4 (one eye) and later used in the Monster M4SK (two eyes). Both Hallowing M4 and Monster M4SK are made for 240x240 displays. The Adafruit code uses the ST7789 driver for the square displays. Other people have modified the code to use the GC9A01 driver for the round eyes. I don't know if Adafruit has re-incorporated the display code for the GC9A01 into their sources. Given they came out with round eye displays using the GC9A01, I would hope that they now include the support.

The earlier Adafruit learning system has an entry called 'Uncanny Eyes' that was designed explicitly for the Teensy 3.1/3.2/3.5/3.6 boards (it will not work on the Teensy 4.0/4.1 boards). This was later modified to support other boards, such as the Adafruit Hallowing M0. In Uncanny Eyes, only one eye pattern is compiled in. Other Teensy users (and I) have modified Uncanny Eyes so it works on a Teensy 4.0/4.1; it still uses 128x128 displays.
Here are pages of interest from the M4 learning guide about adding new eyes.
 
Here is my code snippet for integrating the DFRobot GestureFaceDetection sensor into the “UncannyEyes” code.
I have limited the code snippet to the areas where I made changes.

As long as no face is recognized, the eye moves automatically as usual. If a face is recognized, the target coordinates targetX and targetY are output correctly and continuously between -1 and +1, depending on where the detected face is moving (so the gfd sensor is working correctly), but the eye does not move to this target. Instead it remains in the last position it was in while it was still moving automatically, i.e., the last position before a face was detected and the automatic movement was switched off. However, the blinking of the eyelids continues to work correctly.

Something's not right. Unfortunately, I'm not a good programmer, and I can't figure out what's wrong. Seems like I'm too dumb for this. ;-) Does anyone see a mistake? I am grateful for any support.


C:
#include <SPI.h>
#include <Wire.h>
#include <Entropy.h>

#include <array>
#include "DFRobot_GestureFaceDetection.h"

#include "eyes/eyes.h"
#include "eyes/EyeController.h"
#include "GC9A01A_Display.h"

#define NUM_EYES  1            // If only 1 eye, use the first SPI bus for the eye

#define DEVICE_ID 0x72         // gfd sensor ID
DFRobot_GestureFaceDetection_I2C gfd(DEVICE_ID);

#include "eyes/240x240/hazel.h"

#define NUM_EYE_PATTERNS 1
#define EYE_PATTERN(left, right)  { left }

std::array<std::array<EyeDefinition, NUM_EYES>, NUM_EYE_PATTERNS> eyeDefinitions{{
    EYE_PATTERN(hazel::eye, hazel::eye),
}};

GC9A01A_Config eyeInfo[] = {                             // Define the pins used for each display
  // CS  DC  MOSI  SCK  RST  ROT  MIRROR USE_FB  ASYNC
    {10, 9,   11,  13,   8,   0,   true,  true,  true},  // left eye -> mirror: true
};

constexpr uint32_t EYE_DURATION_MS{3000};
constexpr uint32_t SPI_SPEED{30000000};

EyeController<NUM_EYES, GC9A01A_Display> *eyes{};

uint16_t eyeNr = 0;
uint16_t faceNr = 0;
uint16_t faceScore = 0;
float faceX = 0.0f;
float faceY = 0.0f;

//------------------------------------------------
void setup() {
    delay(2000);
    Serial.begin(115200);
    Entropy.Initialize();
    randomSeed(Entropy.random());

    gfd.begin(&Wire);
    gfd.setFaceDetectThres(60);
    gfd.setGestureDetectThres(60);
    gfd.setDetectThres(100);

    initEyes(true, true, false);  // bool: autoMove, autoBlink, staticPupils
}

//------------------------------------------------
void loop() {
    eyes->updateDefinitions(eyeDefinitions.at(eyeNr));

    faceNr = gfd.getFaceNumber();
    faceScore = gfd.getFaceScore();
    faceX = gfd.getFaceLocationX();   // gfd X coord: hard left = 0, hard right = 640
    faceY = gfd.getFaceLocationY();   // gfd Y coord: fully up = 0, fully down = 640

    if (faceNr > 0 && faceScore > 50) {
        eyes->setAutoMove(false);

        float targetX = (faceX - 320.0f) / 320.0f;  // eye target x, -1.0 (hard left) to 1.0 (hard right)
        float targetY = (faceY - 320.0f) / 320.0f;  // eye target y, -1.0 (fully up) to 1.0 (fully down)

        Serial.print("targetX: ");
        Serial.print(targetX);
        Serial.print("   targetY: ");
        Serial.println(targetY);

        eyes->setTargetPosition(targetX, targetY);
    } else {
        eyes->setAutoMove(true);
    }

    eyes->renderFrame();
}
 
Hi strunx,
I don't have a DFRobot sensor, but your code looks OK. I replied to your github issue; I think the problem is due to a bugfix that I never merged into main. Sorry about that! Main has the fix now, hopefully it solves your problem?

Also, something I'd be interested to learn is how fast that sensor can update the coordinates of a face. The website says "The face detection information is refreshed every 1.5 seconds" which sounds really bad to me, but in the demo videos the sensor seems a lot more responsive than that, so I'm a bit confused how fast this new sensor really is. For comparison, the Person Sensor updates at ~7 FPS with ~150ms lag, which is still a bit on the slow side for smooth tracking. I haven't had time to try yet (and probably won't for a while), but was thinking about putting a TinyML model on an ESP32-S3-Zero, which I think might be able to detect faces at around 15 FPS.

MichaelMeissner, thanks for helping people out on this thread, as you can probably tell I haven't had much spare time for TeensyEyes in quite a while. I'm he/him BTW :)
 
Hi Chris,

thank you very much for getting in touch so quickly and taking care of the matter right away.
I will test the new main code in the next few days and then give you some feedback.

The frame rate of the DFRobot sensor SEN0626 Offline Edge AI Gesture & Face Detection is exactly 16.67 FPS (60 ms).
 

Hi Chris,
yessssssssssss, your fix solved the problem. The DFRobot SEN0626 Gesture_and_Face_Detection_Module is now working as it should.
Thanks again for your help.
 