ADNS-3080 optical flow sensor, high speed camera

Status
Not open for further replies.

ohnoitsaninja

Well-known member
I've been shopping for machine vision cameras. I need very high FPS, low resolution, and monochrome, and everything that fits what I'm looking for is too expensive: typically $500 without a lens, and only barely fast enough.

This ADNS-3080 mouse sensor is on eBay with a great little lens and breakout board for $10, and it seems to fit my requirements. Has anyone played with mouse sensors on a Teensy who knows what kind of speed I can expect when using one as a camera? I can also run the machine vision code on the Teensy; it's pretty lightweight and optimized, and that would certainly be faster than trying to pass the video along to my PC. I want to use a spare Teensy 3.2 I have, but I can use a 3.6 if that's not enough. I have a BeagleBone Black and an Arty FPGA if that still isn't enough, but I don't know how to use either of those well; I'd rather use the Teensy :). The ADNS-3080 datasheet says frame rates of over 6400 can be done, which seems too good to be true for the price. Can I really access full camera frames at that speed on a microcontroller, or do I just get x/y tracking movement from the 3080 processing the feed internally?

I prototyped my machine vision code with a PS3 camera (183 fps) and processing.org, and it works great, but I need more FPS! I don't want to admit how many hours of researching the scientific/industrial camera market it took before I came to the conclusion to use mouse sensors.
 
It's a device intended to produce x/y values, with firmware loaded as part of your install process. So if you want, you could write your own machine vision system in 2K and load that... More usefully, see page 20 for the commands that seem to dump it out of image processing and into camera mode. I think it shows 10 µs per byte for 900 pixel bytes, which works out to 9 ms per frame, plus some overhead on either side; call it 10 ms per frame, for 100 fps. That's 30 by 30 pixels at 6-bit dynamic range.
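That back-of-the-envelope arithmetic can be sanity-checked in a couple of lines. A minimal sketch, where the 10 µs/byte and 900-byte figures come from the discussion above and the per-frame overhead value is just a placeholder guess:

```cpp
#include <cassert>

// Frame time for the ADNS-3080 "dump the sensor" mode, given a byte
// transfer time and fixed per-frame overhead (both in microseconds).
constexpr long frame_time_us(long bytes_per_frame, long us_per_byte,
                             long overhead_us) {
    return bytes_per_frame * us_per_byte + overhead_us;
}

// Resulting frame rate, rounded down.
constexpr long frames_per_second(long frame_us) {
    return 1000000L / frame_us;
}
```

For 900 bytes at 10 µs each plus ~1 ms of overhead, that gives the 10 ms / 100 fps figure quoted above.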

Also, it seems to be a bit fussy about the lens mounting arrangement, so check that section as well if you're going to use the thing.
 
Thank you for figuring out that math for me! One of the ways the commercial cameras I've looked at improve FPS is a feature called region of interest, where they just don't read out the entire frame. Do you think I could pull 30x5 pixels and then request the next frame, for higher frame rates?

Also, is that 10 µs-per-byte limit normal? Would I be likely to find another sensor that can do it much faster if I searched enough datasheets?
 
I suspect that 10 µs is about what you get over a serial interface intended for reporting mouse moves. If you look at something like the Raspberry Pi camera, you will see there is a dedicated bus from the camera to the control IC. If you want better speed, you are probably looking for a dedicated CPU/image-sensor pair, which is what this is, except you don't seem to have access to the code on the sensor. It might be worth seeing if anybody has open-sourced a mouse sensor firmware, since this chip could probably do what you want with custom firmware loaded (as in streaming a subset of pixel data, possibly at a faster clock). What I don't know is whether anybody has done such a thing.

If you look in that datasheet you can pull individual pixels, but I suspect that will work out slower than just triggering its hard-coded 'dump the sensor' function.
 
I do a lot with linear photodiode arrays. What is your sensor used for, or what is it trying to see, and what is it controlling or feeding with its result?

I am trying to determine what order of magnitude the update rate should be, based on the context of what you are using it for.

Maybe you could use linear photodiode arrays. I am seeing 800 to 1600 received USB serial data frames (one row of 256 pixels each) per second on a Teensy 3.6, depending on the averaging, etc. That's between 8 and 9 megabits per second over the USB cable to a PC running Processing. If I minimize the eye candy, I can draw about 500 frames per second in Processing while reading from the serial data.

Here is my Teensy sketch that reads linear photodiode arrays and calculates shadow width/position with subpixel accuracy.
Uncomment the test code putPositiveSineWave() and its related variables, which writes a sine wave into sensorIntArray[],
and comment out the sensor read (readIntsParallel()), and you can test without a sensor.
https://github.com/Mr-Mayhem/TSL14XXR_Teensy36
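To illustrate the subpixel idea in miniature (this is a generic sketch of threshold-crossing interpolation, not the exact math used in the sketch linked above):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Locate the first falling edge (light -> shadow) in a row of pixel values
// with subpixel precision, by linearly interpolating where the signal
// crosses `threshold` between two neighbouring pixels.
// Returns -1.0 if no crossing is found.
double subpixelEdge(const std::vector<int>& px, int threshold) {
    for (size_t i = 0; i + 1 < px.size(); ++i) {
        if (px[i] >= threshold && px[i + 1] < threshold) {
            // Fraction of the way from pixel i to pixel i+1 where the
            // signal passes through the threshold.
            double frac = double(px[i] - threshold) /
                          double(px[i] - px[i + 1]);
            return double(i) + frac;
        }
    }
    return -1.0;
}
```

A shadow edge halfway between pixels 2 and 3 reports as 2.5, which is how a 256-pixel array can resolve finer than 256 positions.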

Here is the matching Processing sketch; it receives USB serial binary data and makes pretty visuals, plus shadow location/width with subpixel precision.
The serial code is special: it runs in its own thread and is about 20% faster than Processing's own serialEvent() way of doing things.
https://github.com/Mr-Mayhem/Linear_Array_Sensor_Subpixel_Visualizer_V2

It's not optical flow, but that could perhaps be done using two of these linear photodiode array sensors at right angles to each other, each with a lens. The idea depends on your application, but reading one row of pixels x 2 will beat square sensors in speed tests most of the time.
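A minimal sketch of how 1-D flow could be estimated from two consecutive line-sensor frames, using a brute-force sum-of-squared-differences search over a small shift window (one possible approach, not code from either project linked above):

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Estimate the integer pixel shift between two 1-D frames by minimising
// the sum of squared differences over shifts in [-maxShift, +maxShift].
// A positive result means `cur` is shifted right relative to `prev`.
int estimateShift(const std::vector<int>& prev, const std::vector<int>& cur,
                  int maxShift) {
    int best = 0;
    long bestErr = std::numeric_limits<long>::max();
    int n = (int)prev.size();
    for (int s = -maxShift; s <= maxShift; ++s) {
        long err = 0;
        int count = 0;
        for (int i = 0; i < n; ++i) {
            int j = i + s;                 // compare prev[i] against cur[i+s]
            if (j < 0 || j >= n) continue; // skip pixels shifted off the edge
            long d = prev[i] - cur[j];
            err += d * d;
            ++count;
        }
        if (count > 0 && err < bestErr) { bestErr = err; best = s; }
    }
    return best;
}
```

Two such sensors at right angles would give the x and y flow components; the subpixel interpolation mentioned earlier could then refine the integer shift.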

Discussion and some examples are in this thread; look for the Lumitrack video posted on page 3 to see this idea in action.
https://forum.pjrc.com/threads/3937...Read-TSL1410R-Optical-Sensor-using-Teensy-3-x

Otherwise, maybe my code will run faster for reading and displaying your sensor.
At the end of the day, it's an array filled with sensor values, framed and sent over serial, so maybe you can run faster by trying my framework.
It's a work in progress; at some point I am going to bypass the Processing serial library and talk to its underlying serial interface directly.
 
The PS3 camera code locates the center of a disk and then scans arcs in a circle to find the angle of the disk, doing absolute rotary encoding. The disk is a 3" circle, half white, half black, and I scan near the edge for accuracy. It only needs to compensate for a very small offset between the disk and the center of the camera sensor. I'm trying to avoid mechanically coupling to a high-end commercial absolute rotary encoder, and I've come up with a few ideas: use a laser and galvos to trace a circle and pick up the position with a photodiode, or use a small motor spinning a paper disk at ~9000+ rpm, with a small angle (5 degrees?) of the disk replaced with clear tape, a photodiode above, and the encoding disk beneath. The setup I have now works well; using some good math I get pretty good accuracy (9-10 bits) from the PS3 camera, but it could be better, as could the FPS. Ideally I would be tracking the angle at 1000 Hz.
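For illustration, one simple way to recover the orientation of a half-black/half-white disk from intensity samples on a ring near its edge is an intensity-weighted circular mean. This is a generic sketch, not necessarily the arc-scanning math the code above uses:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double kPi = 3.14159265358979323846;

// Estimate the orientation of a half-black/half-white disk from N
// intensity samples taken at evenly spaced angles around a ring near the
// disk edge. The intensity-weighted circular mean points at the centre of
// the bright half. Returns the angle in radians, in (-pi, pi].
double diskAngle(const std::vector<double>& ring) {
    double sx = 0.0, sy = 0.0;
    int n = (int)ring.size();
    for (int k = 0; k < n; ++k) {
        double theta = 2.0 * kPi * k / n;   // sample angle on the ring
        sx += ring[k] * std::cos(theta);    // accumulate weighted direction
        sy += ring[k] * std::sin(theta);
    }
    return std::atan2(sy, sx);
}
```

Because every sample contributes, noise on individual pixels averages out, which is one way methods like this reach 9-10 bits from a modest camera.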

I haven't found a good community to go to for scientific camera recommendations, and it's not clear how much FPS gain I'd actually get over the spec sheets with region-of-interest readout, but it does seem to be the easy route to replacing my PS3 cam. I've also looked into Pi cameras, but I haven't found anything USB at a good price with over 120 fps.

I'm also thinking about just cutting out a big circle, maybe a foot across, and using it as a quadrature encoder wheel, if I can hold onto my sanity making all those little cuts. I'd much rather keep it at 3" for other purposes.
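If the quadrature-wheel route wins out, the decoding side is a small lookup table. A minimal sketch of the standard two-channel quadrature state machine (states encoded as (A << 1) | B):

```cpp
#include <cassert>

// Incremental quadrature decode: given the previous and current 2-bit
// (A,B) states, return -1, 0, or +1 counts. The table is indexed by
// (prevState << 2) | newState; invalid double-bit jumps count as 0.
int quadStep(int prevState, int newState) {
    static const int table[16] = {
        0, +1, -1,  0,
       -1,  0,  0, +1,
       +1,  0,  0, -1,
        0, -1, +1,  0
    };
    return table[((prevState & 3) << 2) | (newState & 3)];
}
```

Summing quadStep() on every state change gives a signed position count; the forward sequence is 00 → 01 → 11 → 10 → 00.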
 
Oh, I see it talks SPI; they mention "popular 24 MHz SPI" tables all over the datasheet. It has a frame capture mode to get raw data, but I can't discern whether the 6,469 max frame rate is for position-report frames or raw data frames.

I would think raw data is more like 900 bytes per frame x 6,469 frames per second x 8 bits per byte = 46,576,800 bits per second, which is way more than "popular 24 MHz SPI", so it probably only sends the position reports at that 6,469 frames/sec rate.

Maybe my serial code post above was not so relevant; I thought it was serial as in serial port, ha. But it's SPI.
 
Oh, an encoder application. The idea is that the fewer pixels you have to process, the faster you can go, down to a few individual photodiodes. You can also interpolate between them for more accuracy without more physical "pixels" (or photodiodes).

But absolute encoders generally need a row of pixels, to be able to see the different coded stripe tracks at the same time.

Have a look at the Parallax TSL1401 sensor:
https://www.parallax.com/product/28317

My library is for the TSL1402R and TSL1410R and similar ones:
http://ams.com/eng/Products/Light-Sensors/Linear-Array

and the iC-Haus equivalents (some of the sensors here qualify):
https://www.ichaus.de/keyword/Sensor iCs

You could read that "species" of sensor with my code and get a row of pixels watching the wheel, along a path from the center to the outer edge.

I am thinking of the lens and sensor, not so much the board; who knows what other stuff might get in the way of accessing the raw sensor signals. I haven't looked at the schematic on that one. Plus, the sensor goes out of manufacture soon, so one will need to use the iC-Haus surface-mount version of that 128-pixel sensor, which is compatible. I bought a 5-pack of those and am waiting on an SMD soldering heat gun, etc., for tests.

What kind of pattern to use, hmmm. It depends on whether you are using a reflection or beam-breaking strategy, I suppose.
I expect the thing would be cheaper if you did beam breaking, because then you can forgo the lens and the related complexity.

A linear photodiode array, or a row of individual photodiodes on a PCB, can detect dark patterns marked on a transparent plastic wheel, lit by an LED shining through it.

You can use one of the absolute encoder codes from the internet. In fact, there are several apps and plug-ins available to produce the coded patterns in PDF or other vector formats as high-res "masters", then print them on transparency sheets on a laser printer (or with a trip to Staples, hah). There are lots of great "how to make an encoder wheel" tutorials which show some clever innovations in the workflow for getting a nice clean encoder wheel. I think a transparent wheel is the ticket, since it probably gives a higher signal-to-noise ratio between light and dark. A light shines through the wheel, different pixel groups watch different tracks on it, and the dark areas of the code block the light while the light areas let it pass.

If you can get the tracks packed closely enough together to use a linear photodiode array, then that's a possible solution. Photodiode arrays and CCD line arrays do come wider, but at a cost in speed, because they have more pixels. However, you could search for one with wider pixel spacing and few pixels; that way you don't need to solder individual photodiodes, since it's one chip with a glass window on top. Some models have only 16 pixels spread quite widely; that's probably your ticket for a nice, neat packaged reader device. Each pixel can be aligned to its own track going around the wheel, and the absolute position is encoded by the on-off pattern read by the pixels.
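Decoding that on-off pattern is a few lines once the wheel uses a Gray code (the usual choice, since only one track changes between adjacent positions). A minimal sketch, assuming one pixel per track, MSB on the first pixel:

```cpp
#include <cassert>
#include <vector>

// Read an absolute position from one pixel per track: threshold each
// pixel to a bit (MSB first), then convert the Gray code printed on the
// wheel into plain binary.
unsigned readAbsolutePosition(const std::vector<int>& pixels, int threshold) {
    unsigned gray = 0;
    for (int v : pixels)
        gray = (gray << 1) | (v > threshold ? 1u : 0u);

    // Standard Gray-to-binary conversion by cascading XOR shifts.
    unsigned binary = gray;
    for (unsigned shift = 1; shift < pixels.size(); shift <<= 1)
        binary ^= binary >> shift;
    return binary;
}
```

Because adjacent Gray codes differ in one bit, a read taken exactly on a boundary is off by at most one position instead of jumping wildly, which matters at 9000+ rpm.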

If you use an AMS or iC-Haus linear photodiode array of that "species", they CAN do about 8 million pixel clocks per second. Times 12 bits per pixel, that's 96,000,000 bits per second.
Assuming 128 pixels, 8 million divided by 128 = 62,500 frames per second. The only way to read that fast, though, is to use dedicated clock and SI pulse driver logic and a dedicated 10-megasample/sec ADC chip, probably with SPI output, or a CPLD or FPGA with a dedicated ADC.

CPLDs are cheap, so I was thinking about trying one to see how fast I could go with such a sensor. Gotta price out an ADC too, and learn how to do the PCB to keep the signals clean at those frequencies, hah.

At the fancy end, iC-Haus in Germany sells some relatively affordable all-in-one optical encoder chips you might find useful too. They sell small quantities on their online storefront:
http://us-shop.ichaus.com/Default.asp

I have yet to play with those.
 
Thank you for all of your suggestions. I'm curious how much faster SPI would be; it's unclear to me from the datasheet whether that 10 µs delay still applies. 30x30 is not that many pixels, but if I used something like a Gray-code wheel, 14+ bits of accuracy should be possible with the right lens setup, I think?

The solution I've gone with is to not use a live feed, but a recorded video. I have found inexpensive point-and-shoot cameras from Casio and other makers that can record at 1,000 FPS. Timing consistency and latency were always going to be an issue with a USB camera; this way should be more consistent. I plan to use a Teensy to output a timestamp in binary on some LEDs for the video to pick up and for the computer vision to align with. The scientific cameras I was looking at have a GPIO port that allows things like frame triggering, but I am hoping Casio is good with their timekeeping. I might be doing this on a lot of units, so it would have been ideal to use a USB camera so I could calibrate and program each unit in one go; now each unit will have to be handled and plugged into USB twice.
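The LED-timestamp trick reduces to mapping a counter onto n output bits. A minimal sketch of that mapping; the post says plain binary, but Gray-coding the counter first is an optional tweak of mine (not from the post) so that only one LED changes between consecutive timestamps, making a frame caught mid-transition off by at most one count:

```cpp
#include <cassert>

// Pack a frame counter into an n-bit LED pattern, Gray-coded so adjacent
// counts differ in exactly one bit. Bit k of the result drives LED k
// (e.g. via digitalWrite on a Teensy).
unsigned ledPattern(unsigned counter, unsigned nBits) {
    unsigned gray = counter ^ (counter >> 1);   // binary -> Gray
    return gray & ((1u << nBits) - 1u);         // keep only nBits LEDs
}
```

The vision side recovers the counter by reading the LED states off the frame and inverting the Gray code, then aligns frames across units by matching counters.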
 