Problem trying to read OV7670 camera under IRQ Teensy 4.0

Status
Not open for further replies.
Another quick update: I hacked up the ILI9341 side to set up a 240x320 window for the camera, and again did a different setRotation of the screen, and now you can have the picture on the TFT in the same orientation as the camera...

Obviously more can and may be done. But first, here is that version:

Again not sure yet what all I will do with this now...

I am thinking of at least throwing some version up on Github in some project name: T41_CSI_OV7670
And while at it I would like, for example, to clean up some of this code setup; there are probably 2 or 3 different ways we are initializing the camera.
I think the first way configures it, and then we call for a specific configuration again, which does it all over again.

So I would like to clean that out. Plus better integrate the CSI settings, so if I set the camera to X, it also redoes the CSI settings.

Question/Suggestions? I don't need to own this project. I know several of you did the lion's share of the initial work. I don't mind throwing it up on Github, but if someone would prefer to own it and have some of us fork and do PRs, that would be great as well!

Thoughts?
 

Attachments

  • CSI_41_main-201120a.zip
    31.9 KB

My OV7670 camera library, which just got a new version posted in the Project Guidance forum, is guilty of the brute force reinitialization of all camera registers on a resolution change. I need to rethink the resolution-change issue and simplify that code. That code does, however, adjust the CSI registers for the new resolution.

One of the new examples in the library implements a motion-detection algorithm which samples a grid of 83 pixels and looks for changes as a trigger to capture a new frame. I got bitten by the DMAMEM cache demons when I first tested the algorithm with frame buffers in DMAMEM. The CSI transfers new frames into DMAMEM using its DMA hardware. I was reading out sample pixel values in the foreground and found that they didn't change, even with gross camera movement. The problem was that the foreground reads were reading cached data and the CSI DMA transfers into DMAMEM were invisible to the foreground. That was solved with an "arm_dcache_delete(buff, ImageSize())" call.
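A rough sketch of that kind of grid sampler is below. The struct name, the green-channel comparison, and the threshold are all illustrative assumptions, not the library's actual code; on the Teensy you would also need the `arm_dcache_delete()` call mentioned above before reading a DMAMEM frame buffer.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdlib>
#include <cassert>

// Illustrative grid-sampling motion detector: sample 83 pixels spread
// across the frame and count how many changed by more than a threshold
// since the last check.
struct MotionSampler {
    static const int kSamples = 83;     // grid size from the post
    uint16_t last[kSamples] = {0};      // previous sample values

    // frame: RGB565 buffer of numPixels pixels.
    // Returns the number of sample points whose (crude) brightness moved
    // by more than 'threshold' since the previous call.
    int changedSamples(const uint16_t *frame, size_t numPixels, int threshold) {
        int changed = 0;
        for (int i = 0; i < kSamples; ++i) {
            // spread sample points evenly through the buffer
            size_t idx = (numPixels * i) / kSamples;
            uint16_t px = frame[idx];
            // crude luminance proxy: the 6-bit green channel of RGB565
            int g     = (px      >> 5) & 0x3F;
            int gLast = (last[i] >> 5) & 0x3F;
            if (abs(g - gLast) > threshold) ++changed;
            last[i] = px;
        }
        return changed;
    }
};
```

A caller would run this once per frame and treat a nonzero (or above-some-count) return as the trigger to capture.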

Another thing I want to tackle is the ability to save images with JPG compression. Some testing with VGA and QVGA RGB565 bitmap images shows compression ratios of better than 7 to 1 are possible. I've found some JPG compression source code on line---but it will take a bit of work to implement as it is written for RGB888 input. Also, one of the first steps in that JPG algorithm is to convert the RGB bitmap to YUV data. Since the OV7670 can output YUV directly, it may be possible to eliminate that step with proper setup of the camera.
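As a first step toward feeding such an encoder, each RGB565 pixel would need widening to RGB888. A generic helper (not from any particular JPG library) using the usual bit-replication trick, so full-scale 565 maps to full-scale 888:

```cpp
#include <cstdint>
#include <cassert>

// Widen one RGB565 pixel to 8-bit-per-channel RGB888, e.g. to feed a
// JPEG encoder that expects 24-bit input. Replicating the high bits into
// the low bits maps 0x1F -> 0xFF and 0x3F -> 0xFF exactly.
static inline void rgb565ToRgb888(uint16_t px,
                                  uint8_t &r, uint8_t &g, uint8_t &b) {
    uint8_t r5 = (px >> 11) & 0x1F;
    uint8_t g6 = (px >> 5)  & 0x3F;
    uint8_t b5 =  px        & 0x1F;
    r = (uint8_t)((r5 << 3) | (r5 >> 2));   // 5 bits -> 8 bits
    g = (uint8_t)((g6 << 2) | (g6 >> 4));   // 6 bits -> 8 bits
    b = (uint8_t)((b5 << 3) | (b5 >> 2));   // 5 bits -> 8 bits
}
```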
 
@defragster/@mjs513/@Vindar and @mborgerson...

Wondering if it would make sense to do a hardware mod or Rev. to my Camera/display board.

Earlier I had not seen that much of a speed boost in using a hardware CS pin for DC. But if there is some, then maybe I should try it...
Also, I keep forgetting that with the T4.1 we have a few more hardware DC options for SPI; that is, we have pins 36 and 37, which are actually distinct SPI CS pins.
What I mean is that on SPI1 you have pins 0 and 38, but they are the same signal, so only one of them can work in a sketch...

Note: to try the DC change I can maybe cut a trace and jumper to a via...

2nd Note: I also had some luck with reading the scan line on the other board on SPI1... both with hardware CS on DC and without... so I am trying to figure out why it is not working on SPI.

@mborgerson - as for resolution change, as I maybe mentioned, I can see maybe switching horizontal/vertical, but one might want to capture larger images than can be shown on screen...
 
Can do a mod - or time to order a rev'd board is okay. You have board(s) left to mod and test and see if it is worth doing a rev. But the one I have can be edited as needed to test, if it works better there. A photo of the via/cut/jumper is welcome.

With TD1.54 and LittleFS and bundle of new flash chips I can see 'spare time' being busy for days before catching up.
 

The ability to capture and store a full VGA image, while displaying a QVGA or similar smaller image on an LCD display is one of the reasons that I'm looking at the Pixel Pipeline. If I could set up the PXP to scale down the VGA image and rotate it for optimum transfer to the ILI9341 display, that would be great. I'm not sure how to handle other displays that are not a handy sub-multiple of the VGA, but I think the PXP can handle windowing the full VGA bitmap to a smaller display. I'm still deep in the weeds on the PXP scaling and windowing capabilities, so don't put a complete library on your Christmas list!
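For illustration only, the scaling step the PXP would offload is, in software terms, just a box average. A minimal 2:1 grayscale version (hypothetical helper, nothing to do with the actual PXP registers) to make the arithmetic concrete, e.g. VGA 640x480 down to QVGA 320x240:

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// 2:1 box downscale on an 8-bit grayscale buffer: each output pixel is
// the average of a 2x2 block of input pixels. dst must hold
// (srcW/2) * (srcH/2) bytes; srcW and srcH are assumed even.
void downscale2x(const uint8_t *src, int srcW, int srcH, uint8_t *dst) {
    int dstW = srcW / 2, dstH = srcH / 2;
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            int sum = src[(2 * y)     * srcW + 2 * x]
                    + src[(2 * y)     * srcW + 2 * x + 1]
                    + src[(2 * y + 1) * srcW + 2 * x]
                    + src[(2 * y + 1) * srcW + 2 * x + 1];
            dst[y * dstW + x] = (uint8_t)(sum / 4);
        }
    }
}
```

The PXP can do this kind of scaling (plus rotation and format conversion) in hardware, without tying up the CPU the way this loop would.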
 
@mborgerson - Sounds like you are having fun...

@all - still not sure about mod to the board for just that...

Also about doing the scan line detecting, I was continuously shooting myself in the foot! :mad: :eek:

In sketches I did not set up the TOUCH CS signal, so the Touch stuff was interfering with the data being returned (DUH)... So IF I were to mod the board, I would probably add PD resistors to the CS pin.

So it appears like I am now getting proper scan line data. Next up: put in a few measurement APIs, maybe add a set-frame-rate call, and then experiment on when it is safe to start up a writeRect call...

ARGH!
 
I have some hacking done in the ILI9341_t3n library that I put up in the branch ScanLine

Right now I am just trying to get an idea of how long it takes to draw a full image:
WriteRect time: 41051 - so a little over 41 ms

The default refresh rate, information grabbed:
sampleRefreshRate startsum:70809 count:10 period:7081 frames per second:141


I can experiment setting different frame refresh data:
Command character received: f
Setting Frame Rate control to 11
sampleRefreshRate startsum:150641 count:10 period:15064 frames per second:66
Command character received: t
FBX: 10688
WriteRect time: 41052

That slows it down to 66 frames per second, but the image does not look as good. ...
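For reference, the frames-per-second figures above are just the reciprocal of the average measured period in microseconds; in code form (the helper name is made up, not the sketch's actual function):

```cpp
#include <cstdint>
#include <cassert>

// Convert a summed set of refresh-period samples (in microseconds) into
// frames per second. E.g. 70809 us over 10 periods -> ~7081 us each
// -> ~141 fps, matching the numbers in the post.
uint32_t fpsFromSamples(uint32_t sumMicros, uint32_t count) {
    uint32_t periodMicros = sumMicros / count;   // average period, us
    return 1000000UL / periodMicros;             // truncated fps
}
```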

Note: I also received my 90-degree camera rotate boards and built one. So now I can use the normal QVGA and rotation 3... and have it pointing back from the screen...

I put my current version of the OV7670 library in here as well as the sketch... Probably need to move to appropriate places.
 

Attachments

  • CSI_41_OV7670_ILI-201202a.zip
    280.6 KB
After a couple of days of programming that seemed way too close to working for a living, I have the OV7670 sending YUV data to my PC host program. A lot of the programming was in adding the YUV display to the host program. In any case, the results seem to show that the YUV format does reduce some of the quantization problems inherent in the RGB565 data format. In theory, the YUV422 format should have better resolution in luminance and color shading, as it has more bits available (although at lesser spatial resolution) for brightness and color.

Here are two images: one is YUV422 encoded and the other is RGB565 encoded. The YUV image was converted in the PC to an RGB888 image for display. The RGB565 image is displayed using Win10 internal conversions. As you can see, the RGB image has some artifacts and noise at some gradient transitions that are not visible in the YUV image. This improvement was what I hoped for in the transition to YUV format.

Screen_RGB565.png

This is the RGB565 image that shows some noise at the transitions in the gradients.

Screens_YUV.png


This is the YUV422 image that lacks the noise at the same areas in the image. Note that both images show a greenish tint because I did not compensate the camera for the strong green output of the LED fluorescent tube in my computer room.

My next steps are to replace the PC YUV decoding with a pixel pipeline process that changes the VGA YUV image to an RGB888 image that can be transferred to the PC, and which can also be scaled and converted to a QVGA RGB565 image that can be displayed on my ILI9341 display.

A word of warning: there are lots of YUV-to-RGB conversion algorithms out on the internet that differ in some constants and offsets. Figuring out which one was compatible with the OV7670 output took many hours. Now I get to figure out how to set up the PXP to match that algorithm.
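As one example of that family of formulas, here is a fixed-point version of the common full-range BT.601 (JPEG-style) mapping. To be clear, this is just one of the variants being discussed; the one that actually matches the OV7670's register setup may use different constants and offsets:

```cpp
#include <cstdint>
#include <cassert>

static inline uint8_t clamp8(int v) {
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

// Full-range BT.601 (JPEG-style) YUV -> RGB, in 8.8 fixed point:
//   R = Y + 1.402 (V - 128)
//   G = Y - 0.344 (U - 128) - 0.714 (V - 128)
//   B = Y + 1.772 (U - 128)
// 359/256 ~ 1.402, 88/256 ~ 0.344, 183/256 ~ 0.714, 454/256 ~ 1.772.
void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
              uint8_t &r, uint8_t &g, uint8_t &b) {
    int c = y, d = u - 128, e = v - 128;
    r = clamp8(c + ((359 * e) >> 8));
    g = clamp8(c - ((88 * d + 183 * e) >> 8));
    b = clamp8(c + ((454 * d) >> 8));
}
```

Other variants subtract 16 from Y and scale it by 1.164 (studio range), which is exactly the kind of constant/offset mismatch that makes the picture subtly wrong if the wrong formula is picked.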

I suspect that Christmas shopping will have an impact on my progress over the next week! Get your own shopping done while I try to multitask!
 