Feasibility of using Teensy 3.1 for a telescope autoguider (processing small images)

Status
Not open for further replies.
Curious how you work with the MT9M001. I see that Digikey sells the bare chip, but I have not seen it available as an eval-board-type module. Did you make your own PCB for it?
 
This project has been put aside for quite some time (not enough free time...), but I was really happy when the Teensy 3.6 was announced, since it meant that I could buffer a whole LCD frame with 16-bit pixel data.

So far I have been unsuccessful in porting the code that worked on the Teensy 3.2 to the Teensy 3.6. Why that is, I am not sure yet. I can read the image sensor's test image OK, but as soon as I acquire actual image data, it is either extremely noisy, the CMOS imager crashes, there is jitter in the line reads, or I even get an image rotated by 90°.

I suspect that the power supply is not stable enough and is upsetting both the CMOS imager and the Teensy. Does the 3.6 need a much stronger power supply than the 3.2? I also tried running the 3.6 at 96 MHz, which did not help. What I have not tried yet is re-soldering the exact same setup to a 3.2 (my earlier 3.2 tests were done on a breadboard).

Another question: why does it make a difference in what I get if I run my code with 8-bit variables:
Code:
uint8_t data[240][320] = {0};
  cli();
  for (int n = 0; n < 240; n++) {
    while (digitalReadFast(line_active) == 0) {}  // wait for line ready
    for (int i = 0; i < 320; i++) {
      while (digitalReadFast(pix_clk) == 1) {}    // wait for pixel clock
      data[n][i] = GPIOD_PDIR & 0xFF;             // get the lowest 8 bits of port D
      while (digitalReadFast(pix_clk) == 0) {}    // wait for pixel clock
    }
    while (digitalReadFast(line_active) == 1) {}  // wait while line still active
  }
  sei();

Compared to 16-bit variables (I want to use the full 10 bits of the imager):
Code:
uint16_t data[240][320] = {0};
  cli();
  for (int n = 0; n < 240; n++) {
    while (digitalReadFast(line_active) == 0) {}  // wait for line ready
    for (int i = 0; i < 320; i++) {
      while (digitalReadFast(pix_clk) == 1) {}    // wait for pixel clock
      data[n][i] = GPIOD_PDIR & 0x3FF;            // get the lowest 10 bits of port D
      while (digitalReadFast(pix_clk) == 0) {}    // wait for pixel clock
    }
    while (digitalReadFast(line_active) == 1) {}  // wait while line still active
  }
  sei();

Shouldn't both versions run at the same speed on a 32-bit processor?


Is there a difference in digitalReadFast between the 3.2 and 3.6? I did not see much speed improvement for the 180 MHz 3.6 vs. the 96 MHz 3.2...
Is it OK to disable interrupts on the 3.6 as I did here? (This was crucial on the 3.2 to avoid any jitter in the pixel clock readings.)

Any help is greatly appreciated :)
Samo
 
Everything looks like it ought to work.

Of course, this sort of code is timing critical. It might be running faster on Teensy 3.6 even with the same 96 MHz clock speed, due to the much better flash caching present in 3.6.
 
Thanks Paul!
I found a typo (one input had the wrong pin assigned). Now it works perfectly! It is even better like this (the data is valid from 1 ns after the rising edge of the pixel clock), so it is less likely to run out of time and skip a read:

Code:
uint16_t data[240][320] = {0};
  cli();
  for (int n = 0; n < 240; n++) {
    while (digitalReadFast(line_active) == 0) {}  // wait for line ready
    for (int i = 0; i < 320; i++) {
      data[n][i] = GPIOD_PDIR & 0x3FF;            // get the lowest 10 bits of port D
      while (digitalReadFast(pix_clk) == 1) {}    // wait for pixel clock low
      while (digitalReadFast(pix_clk) == 0) {}    // wait for next rising edge
    }
    while (digitalReadFast(line_active) == 1) {}  // wait while line still active
  }
  sei();

So it could do 4 MHz on the Teensy 3.2 (8-bit data), and on the Teensy 3.6 I can go up to 15 MHz when using only 8 bits, and 10 MHz when using 10 bits. That is just AWESOME!
Still... I don't know why there is a difference in speed when storing 8 vs. 10 bits from the port?
Video is now only limited by the LCD speed (unless I read the whole frame and bin it 4x4).
It's more speed than I need; I found that the noise of the CMOS imager increases quite a lot when it is clocked above 3-4 MHz (typical for image sensors from the mid-00s).


How hard (or even possible) would it be to store the image directly to the SD card on the Teensy board during readout? I am wondering if it can really work with 0% CPU load: could I buffer one whole line of the image (1280 pixels as uint16_t) and have it copied to the SD card with DMA while the CPU reads the next line?
 
Samo,
I'm VERY interested in how you got the sub-pixel blob tracking working, as I've been pulling my hair out trying to get it working on a much lower-res sensor, with the aim of getting this sensor up and running later. Is there any way I can have a look at your Teensy 3.2 code to see how it works? Cheers
 
Arsenio,
I am calculating the center of mass of the pixels of a subimage (e.g. I take 12x12 pixels).
Basically it is a simple mathematical formula (the second equation here):
https://en.wikipedia.org/wiki/Center_of_mass#A_system_of_particles
Each pixel is treated as a particle at a certain distance from the start of the subimage (coordinates 0,0). The intensity of the pixel is the "mass".
The code then looks something like what is posted below (some trickery for treating noise is needed in case you encounter dead or hot pixels). I found that the calculation works most reliably when as much background noise as possible is removed, but without summing negative values; in that case the background is treated as 0.

What is shown below is the most basic example of center of mass calculation with bias noise subtraction.
I tried more advanced algorithms. The Teensy 3.6 has a floating-point unit, so I could easily do more advanced calculations fast enough, for example modelling the blob with a Gaussian function (roughly estimating the function parameters from its size) and weighting the center-of-mass calculation with the fit. The weighting function is then iteratively improved until the result is stable. This should bring more precision and insensitivity to noise, but only if the blob is symmetrical; otherwise it is not very robust.
The next step would be implementing non-linear least-squares fitting of a function to the blob, which would give the best result, but I am not sure how the Teensy would handle such a task.

Code:
//calculate total signal intensity; sums of rows; and sums of columns
for (int x=0; x<centroidBoxSize; x++) { //subtract noise only if remains positive and sum in x axis
  for (int y=0; y<centroidBoxSize; y++) {
    if (testarray[x][y] - noisemax > 0) {xaxis[x] += testarray[x][y] - noisemax;}
  }
}
for (int y=0; y<centroidBoxSize; y++) { //sum in y axis and calculate sums of both axes
  for (int x=0; x<centroidBoxSize; x++) {
    if (testarray[x][y] - noisemax > 0) {yaxis[y] += testarray[x][y] - noisemax;}
  }
  totalint += yaxis[y];
}

//here is basic image moment calculation
for (int i=0; i<centroidBoxSize; i++) { //center of mass
  sumpositionx += (xaxis[i] * (i+1));
  sumpositiony += (yaxis[i] * (i+1));
}
//calculate positions; maxcol and maxrow are positioning the selected subimage in the whole image
posx = ((float(sumpositionx) / float(totalint)) - 0.5) + maxcol[1] - centroidBoxSize/2;
posy = ((float(sumpositiony) / float(totalint)) - 0.5) + maxrow[1] - centroidBoxSize/2;
 
Heck yeah, I've been tinkering with implementing this in my system and it appears to work okay so far, though I am parsing MUCH smaller images (60*60 max). Have you looked into running this on the Teensy 4, as it has the hardware DCMI?
 
Unfortunately, due to not having enough time for the hobby, I was not able to finish this project. I have the device in a box with all the hardware to communicate with the telescope, but I would need time to develop the calibration and the application of the measurements that the above code gives me.
Surely the Teensy 4 would be much faster, and the much larger RAM would mean that nearly the whole image could be stored in memory. But in the end I found that the LCD refresh rate is the speed-limiting factor anyway...
 
Thanks to the unfortunate current situation and having more time at home, I have managed to complete this project. Well... kind of; it still has bugs, and my self-taught coding skills are probably the most inefficient :)

To sum up what the telescope autoguider does:
I am taking images from a 1024x1280-pixel monochrome CMOS imager using a Teensy 3.6. Because there is not enough RAM in the Teensy, I bin the data on the fly in 4x4 blocks to get 320x240 images on an LCD.
The imager is pointed at stars on a telescope mount (I use a 180mm f/2.8 Nikkor lens). In the image it recognises the 3 brightest stars, and I have the option to select one. After selection I start to collect 200x200-pixel subframes from the CMOS imager.
From the subframe I take the 16x16 pixels of data around the star (the most intense cluster of 5 pixels: the central pixel plus the 2 adjacent pixels in each column and row direction). Within this 16x16 data array I do the calculations to determine the position of the star with sub-pixel precision.
I tried a few different algorithms to determine the star centroid, even fitting a 2D Gaussian function on the Teensy, but the best result came from simply calculating the center of mass of the image. It works really well.
The autoguider has a Bluetooth module that communicates with the telescope controller.
This is close to doing its job, but it first needs calibration (scale and angle of the image). I move the telescope for 10 s in one direction, calculate how far and in which direction the star has moved, and from that derive the parameters to convert positions into a new, rotated coordinate system.
Then it is ready to go! Once I start guiding, the autoguider measures the position of the star in the image and, if it moves from the target position by more than a defined error margin, sends movement corrections to the telescope. Simple proportional control; it works excellently. I get sub-pixel accuracy, so I can guide a 900mm telescope precisely using only a 180mm lens. I also added the option of "dithering": between every camera exposure I move the telescope by a small random amount from the initial position, up to a given maximum distance in pixels.
It all comes with quite a few settings (for the CMOS imager and for the autoguiding algorithms) to be able to do everything accurately.
Here is a video to see it in action:
https://www.youtube.com/watch?v=eLeirHzpUQQ
Each line in the graph is 1 pixel of difference in star position. As you can see, the position of the star is kept very tight; the peak-to-peak error here was some 0.7 pixel. The RMS is probably about ±0.3 pixel, which with the imager and lens used is some 1.5 arcseconds. That is good enough for a medium-sized telescope (up to 1000mm focal length). I use a relatively inaccurate tracking mount; if I do not make any corrections, the tracking error is about 30 arcseconds.
The best part is that, with a tiny battery-powered unit directly on the lens (good for some 4-5 hours), I have everything ready in 10 minutes, whereas normally this kind of autoguiding would involve a laptop and cables, plus laptop battery life when out in the field. Though I read that some people do it with a Raspberry Pi...
This is how it looks, attached to a 180mm lens:
90390542_10158871114449749_6553818810330120192_n.jpg

The kind of an image that I would make:
m3sm.jpg
 