New library and example: Read-TSL1402R-Optical-Sensor-using-Teensy-3.x

Mr Mayhem

I published a new class library on Github for reading the AMS TSL1402R linear photodiode array, and sending the sensor pixel data over USB serial to a Processing visualization sketch.
https://github.com/Mr-Mayhem/Read-TSL1402R-Optical-Sensor-using-Teensy-3.x

It is complete with Arduino and Processing sketch examples.
This is my first GitHub posting, so bear with me if it's a little wacky one way or the other.
Please add it to the Teensy libraries index after it passes muster.

It's fairly well optimized for speed, because it combines the sensor's parallel read-out feature with the Teensy ADC library's simultaneous read of two pixels at a time,
rather than the sequential read of the two pixels done by many other examples.
And that's far faster than the sensor's serial read-out circuit, which reads one pixel at a time, of course.

Another speed improvement: I send the data as binary. Rather than sending each sensor value as a string of characters,
I use a byte pair for each sensor reading, plus a unique byte prefix/delimiter for parsing and sync.

I got away with this approach without data falsifying the sync byte by using a bit-shift strategy, taking advantage of there being only 12 bits of data in a 16-bit byte pair.
By shifting prior to sending, no data byte ever equals the sync byte value of 255. I shift back after receiving.
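To illustrate the idea (the library's actual shift may differ from this sketch), one encoding that guarantees no data byte ever equals the 0xFF sync byte is to split the 12 bits unevenly across the byte pair:

```cpp
#include <cstdint>
#include <utility>

// Hypothetical sketch of a sync-safe encoding (not necessarily the exact
// shift the library uses): a 12-bit ADC value (0..4095) is split across two
// bytes so that neither byte can ever equal the 0xFF sync/delimiter byte.
std::pair<uint8_t, uint8_t> encodePixel(uint16_t value) {
    // High byte carries bits 11..5 (max 127); low byte carries bits 4..0
    // shifted up by 3 (max 0xF8 = 248). Neither byte can reach 0xFF.
    uint8_t hi = static_cast<uint8_t>(value >> 5);
    uint8_t lo = static_cast<uint8_t>((value & 0x1F) << 3);
    return {hi, lo};
}

// Receiver side: undo the shift to recover the original 12-bit value.
uint16_t decodePixel(uint8_t hi, uint8_t lo) {
    return (static_cast<uint16_t>(hi) << 5) | (lo >> 3);
}
```

With this scheme the receiver can scan for 0xFF to resynchronize, then decode byte pairs, knowing 0xFF can only ever be the delimiter.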

The thing runs pretty fast, like 250 to 300 frames (complete sensor reads of 256 pixels) per second.
(Speed is important when the data serves as part of a feedback loop for a machine, as in my case, where I use this to measure things live on a moving CNC machine.)

It's to the point that even though the Processing sketch is fairly well stripped down for speed, it still can't keep up with the data stream;
I had to insert a delay in the Arduino loop to slow down the Teensy 3.6 frame rate, because my serial AvailableBytes display
(shown in the top right of the Processing window) kept increasing and latency kept growing!
This is a good thing. I will try to write a C++ visualizer soon and see how fast that can go.

I developed it on Teensy 3.6, but it should work on any 3.x with appropriate pin changes.
The ADC pins must be selected from particular pin groups, so that each of the two ADC inputs uses a separate internal ADC.
It won't work with the two analog inputs sharing the same internal hardware ADC.

I used a Teensy 3.6 pinout card image from the forums, which was edited to show which pins use which ADC.
https://forum.pjrc.com/threads/34808-K66-Beta-Test?p=114102&viewfull=1#post114102

I am working on an extended version that sends the data to an ESP8266 via SPI, which then sends it over WiFi to a PC.
After that, I foresee bringing an ESP32 into the mix.

A lot of the code for extra features on the Processing sketch is commented out, to make it run faster.
I will add a full featured Processing sketch shortly, with all the bells and whistles turned on.

I will be happy to answer questions and tweak things as needed.
 
Very cool! This should be fun to play with. I'll definitely get one on my next order to Mouser.
A picture of the Processing capture would be nice to see.

Sounds like this would really benefit from writing to SPI flash, which I could probably add to the library. That would let you capture at higher frame rates, though you wouldn't be able to see those in real time: you could write to SPI flash at a higher frame rate, but output over serial at a lower one.
edit: Or maybe not, I'm not sure where your limitation is now that I read your post again.
 
Perhaps a stupid question: What can you use this for? Finger gestures for input? Seems pretty expensive for that...
 
It provides precision position info of shadows or laser light cast upon the sensor, etc. See my GitHub readme for a list of possible applications. The list is pretty long. I am using it for height compensation on a CNC. Most PCB blanks are slightly warped, enough to throw off the cutting depth so it misses part of the board on low spots, and goes too deep in high spots. By measuring the height of the surface of the blank in a grid pattern of points, we can correct for these height variations using software, so that the cutting tool ends up tracking the warped surface, etc.
Usually this is accomplished using a probe, which lowers until contact with the blank is detected, then logs the XYZ, raises up, moves to the next point of the grid pattern, and repeats. This method is quite slow, because it typically has to do this for hundreds of points to ensure sufficient resolution or fidelity. Using this sensor combined with a mechanical tip, I can drag without raising the tip, like a dial indicator but digital, so I can rapidly "mow the lawn" to generate the point-cloud table much faster and with high resolution. The table is used in later steps to correct the motion of the machine height-wise, to track the varying height of the part.
 
linuxgeek:

Right now, the Processing app simply can't keep up with the data stream. In my final use, I will be reading the stream on a dedicated board or a C++-based app, so it's not a big deal. I included the Processing sketch to satisfy the need of others to see it working, and to experiment.

SPI flash would be useful in a data logging application, like being able to review a recording of sensor frames after the fact.
Most of the applications I have in mind are real-time, however. Except...

I would like to save X, Y, and Z1, Z2 values (a single point cloud 'point') of the CNC machine's current axis positions, to the card for each sensor read, as the instrument performs a measurement run, and be able to read the file back out later over serial or WiFi, etc.

Z1 is the machine's current z axis height, and this value comes from the CNC controller, along with X and Y axis position values.
Z2 comes from the Teensy which reads the sensor and calculates a position from it.
All values would be written to the SD card as a comma delimited row in a text file.

That would be a golden capability, and I would be very interested in that, because it would simplify integration with various machines. After a run, the point cloud file could be downloaded from the card one way or another, and then used by the correction software to either bend the original gcode cutting program, or by the machine as it runs, using the point cloud in memory as a constant height correction factor.

Speaking of SPI, I mentioned I am writing a version that sends the values over SPI to an ESP8266 for the sake of its WiFi capability. I got it to work OK, with the Teensy 3.6 as SPI master and the ESP8266 as the SPI slave. I wonder if it makes sense to try the SPI DMA library for more speed on the Teensy.

I currently use the normal SPI library for Teensy. After I set it to 2 megabits per second, the clock gets paused briefly after each byte sent, while the background tasks, probably the "Arduino emulator" runs. At 4 Megabits per second, the pauses grow larger still, so I am not really gaining speed despite the higher speed setting. It doesn't mess up SPI, because all SPI signals pause there together, but speed is clamped pretty much at 2 or 4 megabits per second it seems, while the ESP8266 goes up to 40 or maybe 80 megabits per second, if you set the ESP8266 clock at the higher speed of 160 MHz.

If it makes sense to use DMA SPI for this, I need more examples of how to use the DMA SPI library, like a basic description of the data flow. I see the sources and sinks declared at the top of the example code, but the example is limited in that it's a one-shot, and I am doing a loop over and over. It will take a bunch of experimenting to adapt it for my use; I wonder if someone has already used that library in this context, so I can save some time.

I chop the 512 bytes of a complete sensor frame into 32-byte pieces, and send each of these over SPI to the ESP8266, where they go into a buffer for sending over WiFi.
 
I saw this video of one use on the GitHub page: youtube.com/watch?v=htYAWXXoZa0

I liked the Analog breakout card and linked to K66 post 8 and the WIKI page - I missed it before

FrankB has ESP8266 running Serial to Teensy at 4MB - uses at least 3 pins with hardware flow control - but might be easier than working out SPI? Details on this thread - that is with AT firmware on ESP - should work perhaps with Arduino too.
 
Yeah, I did an in-depth study on that motion sensing technique after seeing their paper on it, pretty cool. It would be nice if someone could make an M-code pattern generator, tuned with this application in mind, so we can play with this. I see a bunch of them on the web, but would have to dive in a bit to re-do it for printing or display and have the right parameters exposed for experimenting.


Examining the M-code pattern requirements, we see the paper contains the following:

"Generating m-Sequences
The projected m-sequences consist of a binary pattern in which every m-bit subsequence (window) is unique. For tracking robustness, we further constrain our m-sequences in several ways.
We limit the maximum number of consecutive identical bits (a run of bits) to three and require that every window contain at least one run of length exactly one. These requirements assist with accurate recovery of the spatial frequency of the pattern received on the linear sensors.
Finally, we require that the bit-patterns of different windows differ in at least two places, to ensure that single bit-flips caused by noise could not result in an incorrect identification.
To create m-sequences that fulfill these constraints, we use a sequential generate-and-test approach with backtracking. We use a window size of m=25 for an 800-bit sequence suitable for an 800x600 projection resolution. We used the same approach to find two separate 800-bit sequences with no windows in common, opening the possibility of sequence switching as an additional information channel."
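As a starting point for such a generator, here is a simplified sketch of the paper's generate-and-test-with-backtracking idea, keeping only the window-uniqueness and max-run-of-3 constraints (the length-1-run-per-window and Hamming-distance requirements from the paper are omitted here for brevity):

```cpp
#include <string>
#include <set>
#include <cstddef>

// Simplified generate-and-test with backtracking: build a bit string in
// which every m-bit window is unique and no run of identical bits exceeds
// 3, backing up whenever a prefix cannot be extended.
static bool runTooLong(const std::string& s) {
    size_t n = s.size();
    return n >= 4 && s[n-1] == s[n-2] && s[n-2] == s[n-3] && s[n-3] == s[n-4];
}

static bool extend(std::string& s, std::set<std::string>& seen,
                   size_t m, size_t target) {
    if (s.size() == target) return true;
    for (char bit : {'0', '1'}) {
        s.push_back(bit);
        bool ok = !runTooLong(s);
        bool inserted = false;
        if (ok && s.size() >= m) {
            // reject this bit if it completes a window we have seen before
            inserted = seen.insert(s.substr(s.size() - m)).second;
            ok = inserted;
        }
        if (ok && extend(s, seen, m, target)) return true;
        if (inserted) seen.erase(s.substr(s.size() - m));  // backtrack
        s.pop_back();
    }
    return false;
}

std::string generateMSequence(size_t m, size_t length) {
    std::string s;
    std::set<std::string> seen;
    return extend(s, seen, m, length) ? s : std::string();
}
```

The paper's window size of m=25 and length of 800 bits would use the same skeleton, just with the extra constraints added to the test step.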

Not too complicated huh?

I saw others write scripts which automatically draw encoder wheel patterns in a vector based format, which is what the high res film printer services require. An app like that for M-codes would make it easy to print M-code film transparencies for a lamp projector. Or just output to a mini projector.

I linked that video in my Github readme in application examples section.

Turns out, the sensor manufacturer had published on using M codes with the sensor, see:

Position Detection Using Maximal Length Sequences
Contributed by David J. Mehrl, Dept. of Electr. Engr., Texas Tech University


I got SPI to work, just not faster than 4 megabits per second.

I figure the DMA SPI approach might be lighter on the CPU than, say, the serial approach you suggest, or normal SPI? I don't know for sure how they compare CPU-load-wise at different speeds.
I load things down pretty well already with all that sensor bit-banging and those ADC reads, so the less load imposed by the chosen method of moving the data to the ESP8266, the better. But it still needs to be fast. That's why I am leaning towards DMA SPI, but only as an intuition, because I am not familiar with it yet.
 
linuxgeek:

Here is a screen capture. I set WIDTH_PER_PT to 2 to make the pixels spread out some, but it slows the frame rate, so for speed tests I set this to 1.

What the numbers mean:
The first number is a frame counter that counts from 1 to 60 and repeats, kind of like a video editor's frame counter.

The 2nd number displays AvailableBytes from Serial; low numbers show we are keeping up, rising numbers mean we are falling behind and the buffer is filling up.

For the plot, left and right (x) is pixel number, which correlates with the pixel's position in the sensor, and up and down (y) is the pixel brightness ADC value from 0 to 4095.

A strip at the top, running from left to right, is colored in greyscale according to the pixel's brightness, scaled down from (0 to 4095) to (0 to 255).


[Attached screenshot: Processing_Screen_Capture-000500.png]

The notch you see in the plot is a jumper wire laying across the sensor, with a desk lamp shining down from overhead.

Like I said in the notes, it's pretty minimal for the sake of speed, because I was trying to see how fast I can make it go, and I got a little carried away. There is other code for highlighting the left and right steepest slopes of the notch, and for subpixel location, but it bogs things down a lot, so it probably needs to be done in C, not Processing/Java.

Achieving subpixel resolution is like the holy grail of this kind of thing, because otherwise the precision with which we can determine the shadow's location on the sensor is limited to the spacing and density of the pixels in the sensor.
While looking at alternative code for subpixel center-finding, I wonder if I could virtually hang a catenary or "hanging chain" between the steepest slope locations (on the sides of the notch), and use the lowest point of the "hanging chain" as a subpixel center location. Or fit a parabola or a 2nd catenary to the lowest 3 points on the original catenary, like finding the subpixel of subpixels, almost. I am out of my league on the math, ha. It's like I understand it only enough to get in over my head. But it sounds right, and I will be investigating the catenary as a subpixel center-finding technique.
 
The Serial1 & Serial2 FIFOs have hardware to do blocks of transmissions between interrupts to refill the FIFO, so a few bytes at a time will go out with no attention needed to feed them from the software queue of bytes to transmit; that is less overhead than standard SPI, I think. If you got DMA SPI working it would be even less overhead, just writing to memory and having it work; but your receiver would need to keep up, same in both cases, I suppose.
 
Thank you for the info on Serial 1 & 2 FIFO. I need to do experiments to see if the SPI DMA will bear fruit. I may saturate the WiFi on the ESP8266 before I saturate anything else. I need to group 2 sensor frames into one packet over Wifi, as I have it now the WiFi packets are only half full, and it is much slower than simple Teensy to PC over serial. Like 10 times slower currently.
 
How are you programming the ESP? Arduino?

Well, for the first code I just put up, it's based on a Teensy 3.6 running its flavor of the Arduino library, sending sensor data over a USB serial wire to a PC running a Processing visualization sketch, to plot the data on the screen. And I should mention it runs faster than any other serial data connection I have tried so far, but I am also eyeballing the 480 megabit per second port on the Teensy 3.6. Hmmmm.

However, I finished a 1st working version of an extended version of this, but did not yet post up on Github:

In this extended version, instead of sending the sensor data through a USB serial connection to the PC,
I send it via SPI to ESP8266, which in turn forwards it via WiFi UDP to my home router, then on to my PC, running a different Processing sketch to plot the data on the screen. This isn't running nearly as fast as the USB to serial version; I need to speed up the SPI part of it. The stock SPI library for Teensy is rather slow compared to what the ESP can handle.

Teensy 3.6 is configured as an SPI master, and ESP8266 is configured as an SPI slave. SPI speed is set to 4 megabits per second.

I am using the ESP8266 Arduino library for the ESP8266.

And I am using the SPISlave library for the ESP8266 SPI functionality, which is part of the ESP8266 Arduino library.

In the extended version with SPI, I added two additional wires between the Teensy 3.6 and ESP8266, above and beyond the 4 normal SPI wires, Slave Select, MOSI, MISO, and CLOCK.

Additional wire 1 goes high to inform the Teensy master that the ESP slave has booted and started SPI. This prevents the Teensy master from interfering with the ESP8266's boot, a known issue tied to Slave Select being HIGH during boot.

Additional wire 2 goes high to inform the Teensy master that the ESP slave is ready to receive data. It is LOW while the ESP is processing a frame and sending it to WiFi, and HIGH when it is not. I signal this condition using a wire rather than have the master keep polling for it over SPI, because polling would be less CPU-efficient for both modules.

Ideally the Teensy would be an SPI slave in this situation, which would negate the need for additional wires 1 and 2. With the ESP as master, its boot cycle would never be interfered with, and the ESP would be able to signal its ready state whenever it chooses, as the SPI master controlling the bus; but I do not believe a slave SPI library is available yet for Teensy 3.x.
 
Achieving subpixel resolution is like the holy grail of this kind of thing, because otherwise the precision with which we can determine the shadow's location on the sensor is limited to the spacing and density of the pixels in the sensor.
While looking at alternative code for subpixel center-finding, I wonder if I could virtually hang a catenary or "hanging chain" between the steepest slope locations (on the sides of the notch), and use the lowest point of the "hanging chain" as a subpixel center location. Or fit a parabola or a 2nd catenary to the lowest 3 points on the original catenary, like finding the subpixel of subpixels, almost. I am out of my league on the math, ha. It's like I understand it only enough to get in over my head. But it sounds right, and I will be investigating the catenary as a subpixel center-finding technique.

Looks like it might be worthwhile to just plot the derivative (the difference between adjacent pixels). If you find the inflection points, maybe you could just take the center point between the start of the partial shadow and full shadow.

Also, I'm wondering what light tricks you could do to increase the contrast. I imagine that the light is reflecting from the side of whatever you put in. Can you paint the sides black (minimize reflection) or another color that is detectable by wavelength?

Can you do a sort of calibration period to do elaborate lighting changes that determines the exact size of the object first? Once you know that, you can find those exact datapoints on the shadow and know where it is at all times on your pixel array.
 
Oh, and I was wondering, so the TSL1410R (1280 pixels) is the biggest one that will work I'm guessing w/o too much change to the code?

http://www.mouser.com/catalog/catalogusd/648/194.pdf

I think so, yes. After changing SENSOR_NUM_PIXELS it should work, or be very close to working. I recommend that you include the sensor chip I used in your part purchase, along with the larger sensor you want to try, for sake of making sure all is well with the original circuit first.

Then swap in the larger sensor chip with the higher pixel count. This way you can tell whether errors are caused by the part change or not; the last known good state will be anchored at the original working design. It is easier to troubleshoot a system when you change only one thing at a time beyond the last known "working OK" attempt.
 
linuxgeek:

I just ordered 2 of the TSL1410R sensors online, so I can be of more use with your effort to use it, and learn it myself as well.

I bought from Arrow Electronics; they were cheaper than Mouser and Digikey, plus Mouser, my usual source, was out of them.
 
Ok, cool. I'll look into ordering some too. I was starting to look into this a year ago, but now that you've done so much of the heavy lifting, looking forward to trying it out.

Also, I think I totally misunderstood the difficulties of position tracking, so you can ignore most of what I said before, as you probably already realized.
 
Fine business.

This is a great sensor to learn DSP and sub-pixel stuff with, because it's merely a 1D line of pixels, not a 2D square like a CCD camera, so fewer dimensions means less complexity, I suppose.

I am new to the whole DSP thing; I read lots of stuff on it but never played around with it.
I decided I will try my own sub-pixel resolution stunt to break the ice.

The recipe I am currently exploring to get sub-pixel resolution of the shadow's center location is,

1. set the ADC to average a few ADC readings, not just one.
This reduces noise prior to processing, a good thing generally.

2. prepare a new interpolated data array 10 times larger than the original data

3. interpolate (add 10 new points in between each pair of original sensor data points) and put them, with the original data points, into the new array.
So every 10th element of the resulting array is an original data point, and the rest were added.

4. Find the derivative (slope) for each pair or group of points from step 3.
This would plot as two distinct peaks, corresponding to the sides of the "notch" seen in the original data plot from a narrow shadow cast upon the sensor. (One peak is inverted, because it slopes the opposite way.)
We might simply use the existing "steepest slope" finder rather than compute derivatives per se.

5. Find the index location of the highest value for each of the two peaks, using a peak finder.

6. Average the two peak locations (add the two index values and divide by 2) to find the center.

7. Usually, several of these steps are combined within one loop structure in common practice, to minimize the computational load.

8. Disclaimer, I have no idea what I am doing, I simply saw a mirage on the horizon and am stumbling across the sand towards it.
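Steps 2 through 6 of the recipe might be sketched roughly like this (a toy version for illustration only, not tuned for accuracy or speed, and with hypothetical names):

```cpp
#include <vector>
#include <cstddef>

// Toy sketch of the recipe: upsample by linear interpolation (steps 2-3),
// differentiate and locate the two steepest slopes of the notch (steps 4-5),
// then average their positions for a sub-pixel center estimate (step 6).
double subpixelNotchCenter(const std::vector<double>& pixels, int factor = 10) {
    // steps 2-3: insert 'factor' interpolated points between original pairs
    std::vector<double> up;
    for (size_t i = 0; i + 1 < pixels.size(); ++i)
        for (int k = 0; k < factor; ++k)
            up.push_back(pixels[i] + (pixels[i+1] - pixels[i]) * k / factor);
    up.push_back(pixels.back());

    // steps 4-5: derivative, then indices of steepest falling/rising slopes
    size_t fallIdx = 0, riseIdx = 0;
    double fall = 0, rise = 0;
    for (size_t i = 0; i + 1 < up.size(); ++i) {
        double d = up[i+1] - up[i];
        if (d < fall) { fall = d; fallIdx = i; }  // left edge of the notch
        if (d > rise) { rise = d; riseIdx = i; }  // right edge of the notch
    }

    // step 6: midpoint of the two edges, back in original pixel units
    return (fallIdx + riseIdx) / 2.0 / factor;
}
```

Note that plain linear interpolation keeps the derivative constant across each original interval, so a smoother interpolation (or the parabola fit discussed later in the thread) would sharpen the peak locations.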
 
I am working on the data visualizer Processing sketch this week.

I am cleaning up the code and working on improving the sub pixel code.

I also realized that the shadow/laser location can be sent from Teensy rather than raw data.

Once I am reasonably satisfied the Processing data visualizer code is clean and produces accurate sub-pixel output, I'll port it to Arduino/Teensy 3.x.

I am thinking of letting the Teensy mode be set via serial: option 1 sends raw data, option 2 sends the interpreted location of a shadow, and option 3 the location of a laser beam, which uses different math to find the center.

I want sensors with Freakin' Lasers, man, ya know?

This commercial software explains edge detection:
http://docs.adaptive-vision.com/current/studio/machine_vision_guide/1DEdgeDetection.html

and sub-pixel resolution by fitting a parabola, which is the method I think is currently used in the Processing app code I borrowed.
http://docs.adaptive-vision.com/current/studio/machine_vision_guide/1DEdgeDetection_Subpixel.html

It's pretty jittery, so I am looking for other ways to do it, or adding pre-processing steps.

This book chapter explains a bunch of 2d (camera) edge finding techniques.
I am looking for 1d versions.

Machine Vision Edge Detection
http://www.cse.usf.edu/~r1k/MachineVisionBook/MachineVision.files/MachineVision_Chapter5.pdf

So, just gotta play around with 1d examples of these and see how they compare, etc.

If anyone knows some good open-source 1D (the image is a line, not an area) edge-finder examples in C or Java, let me know.
I suppose I could just flatten a 2D version.
 
I just got the new TSL1410R sensor chips delivered today.

So... it looks like I need to solder some wires to it.

I wonder if anyone made a nice little connector or board for this purpose.
 
Processing Convolution Demos

I uploaded a Processing sketch that shows how to do Convolution.

It smooths raw sensor input data, or other things depending on the impulse or 'kernel' shape.

From an example from the DSP book, translated into java, plus I plot input, impulse, and output on the screen in labeled colors.
The Input Side Algorithm
http://www.dspguide.com/ch6/3.htm
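The same input-side algorithm in C++ looks like this (a minimal version, without the plotting): each input sample scatters a scaled copy of the kernel into the output, which has length N + M - 1.

```cpp
#include <vector>
#include <cstddef>

// Input-side convolution (dspguide.com ch. 6): for each input sample,
// add a scaled copy of the kernel h into the output y. For an input of
// length N and kernel of length M, the output has length N + M - 1.
std::vector<double> convolveInputSide(const std::vector<double>& x,
                                      const std::vector<double>& h) {
    std::vector<double> y(x.size() + h.size() - 1, 0.0);
    for (size_t i = 0; i < x.size(); ++i)       // for each input sample
        for (size_t j = 0; j < h.size(); ++j)   // scatter a kernel copy
            y[i + j] += x[i] * h[j];
    return y;
}
```

With a small averaging kernel like {0.5, 0.5} this smooths the raw sensor data, exactly as the Processing demo does.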
 
https://ams.com/eng/content/download/250184/975725/file/TSL1410R_DS000149_2-00.pdf

Like on page 16?

Can you do that by putting it on a breadboard initially?

Yes, but since I have to solder something to the terminals on the sensor chip, I want to see if there is anything like this pcb with socket, so I can solder pin headers to the sensor and have it plug into a socket or breadboard.

https://hackaday.io/project/9829-linear-ccd-module
That HackaDay example is for a large Toshiba sensor, the TCD1304.
Note the socket, nice.


The sensor chip solder hole pitch is .1" or 2.54mm, standard pitch then.
I need to buy some .1" header pin arrays I think.

Then solder them onto the sensor, and then plug the sensor chip in to a breadboard, or later plug into a pcb socket like that example.

So 0.1" pin headers soldered onto the sensor will probably be the neatest solution.

I just don't want to mess up the sensor by multiple soldering, like soldering wires now, pins later.
I will try to pick up some pin headers tomorrow at Fry's or Radio Shack.

Oh, the pin holes are tiny, probably smaller than typical pin header pin diameter.
I will need to find pin headers with thinner pins, or end up using the thickest wire that can fit and trimming it.
I do have a bunch of Precise Bits micro-drills for my CNC hobby, heh.
Maybe I can enlarge the holes slightly to permit thicker pins.
The sensor contact holes are 0.021 in (0.533 mm) in diameter.

Here is a pin header example, from the Teensy website:
https://www.pjrc.com/store/header_14x1.html

It doesn't say what the diameter is, though I assume it's pretty standard.
 
Ok, drilling worked. Using the tiny existing holes as pilot holes, I drilled a 0.7mm, followed by a 1mm.
All by hand, twisting the bit in my fingers.
Now I can fit some pins in there I hope. Arduino jumper wire tips fit, that's in the neighborhood.

As I mentioned, I had some tiny PCB drill bits hanging around. I broke a bit or two, be careful.
I was able to drill by hand using the broken stub anyway.

I bought them from PreciseBits.com in the drill bit section
I always had excellent service with them, and they sell all kinds of micro cutting tools.

Next up is to get the pins soldered in the holes, and put the sensor on a breadboard.

Then I will get it talking to Teensy 3.6 and Processing.
 