CSI Camera Library


mborgerson

In a thread on the technical support forum (https://forum.pjrc.com/threads/63195-Problem-trying-to-read-OV7670-camera-under-IRQ-Teensy-4-0), I've been working with the OV7670 camera and a number of different hardware implementations, together with the OP, Cyrille, and the usual list of suspects. I've decided that it is time to convert a lot of our work into a library, and I have a few questions:

1. Should I work on a generic library that can handle multiple different cameras, or concentrate on the OV7670, which is not only cheap ($5 per unit on Amazon) but also quite capable (VGA resolution at 30 FPS)?

2. Should I follow the example of the HardwareSerial libraries and have the library instantiate a single CSI_Camera object, or should it be more like IntervalTimer and allow multiple instantiations? There is a strong argument for the former: the T4.1 has only one CSI interface, and there is no simple way to connect multiple cameras. A single object also makes the interrupt routines simpler, which can get tricky with multiple objects.

3. How should the library handle the fact that a VGA RGB565 image is 614,400 bytes and can only be easily captured using the PSRAM in EXTMEM? Smaller images, like QVGA (320x240), can be captured in the T4.1 DMAMEM, but how should the object check the frame buffer size? (A sizing sketch follows these questions.)

4. How will the class handle various output formats: frame buffer sent to Serial for upload to a PC, frame buffer written to the SD card, frame buffer windowed for transfer to a connected LCD display?
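Regarding question 3 above, a minimal sizing check could look like the sketch below. The function name and the capacity constants are illustrative assumptions, not part of any existing library; they only show the arithmetic (RGB565 is 2 bytes per pixel) and the DMAMEM-versus-EXTMEM decision.

Code:
// Illustrative sizing check for question 3. The budget constants are rough
// assumptions: DMAMEM (RAM2/OCRAM) is 512 KB total and must be shared, while
// EXTMEM is the optional 8 MB (or 16 MB) PSRAM on the Teensy 4.1.
#include <Arduino.h>

constexpr uint32_t kDMAMEMBudget = 400UL * 1024UL;        // leave headroom in RAM2
constexpr uint32_t kEXTMEMBudget = 8UL * 1024UL * 1024UL;

// RGB565 uses 2 bytes per pixel.
constexpr uint32_t bytesForRGB565(uint32_t w, uint32_t h) { return w * h * 2; }

void checkFrameFits(uint32_t w, uint32_t h) {
  uint32_t need = bytesForRGB565(w, h);
  Serial.printf("%lux%lu RGB565 needs %lu bytes: ", (unsigned long)w,
                (unsigned long)h, (unsigned long)need);
  if (need <= kDMAMEMBudget)      Serial.println("fits in DMAMEM");
  else if (need <= kEXTMEMBudget) Serial.println("needs EXTMEM (PSRAM)");
  else                            Serial.println("too large for this board");
}

void setup() {
  Serial.begin(115200);
  checkFrameFits(320, 240);   // QVGA: 153,600 bytes -> DMAMEM is fine
  checkFrameFits(640, 480);   // VGA:  614,400 bytes -> EXTMEM only
}

void loop() {}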

This object will only be available on the T4.1, as it is the only Teensy that has the CSI signals brought out to user-accessible pins.

The easy way out would be to write a library that only handles the OV7670 with a limited number of output formats. I may start that way, but I don't want to paint myself into a corner when there are lots of other camera modules that may work with the T4.1 CSI.
 
OV7670 library

Here is a first attempt at an OV7670 library. It has a very simple API, and I am looking for suggestions on additions. I will look at the Arduino_OV7670_Master code for ideas.

There are only three small examples. The most complex of these implements an MTP responder to transfer .bmp files to a Windows computer for display.

Attachment: OV7670.zip

Just as the SD library instantiates an object named "SD", this library instantiates an object named "OV7670". There is just this single object, since only one camera can be attached to the CSI hardware at a time. Having a single named object also greatly simplifies the code for the CSI IRQ handler.
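For reference, the "single pre-instantiated object" pattern (the way SD.h exposes SD) looks roughly like the sketch below. The class and member names here are placeholders for illustration, not the attached library's actual declarations; see the zip for the real interface.

Code:
// Sketch of the single-instance pattern. All names are placeholders.

// --- header (e.g. OV7670.h) ---
class OV7670_Camera {
public:
  bool begin();            // configure CSI pins, SCCB/I2C, camera registers
  // ...capture and configuration methods would go here...
};

// One global instance, declared in the header and defined once in the .cpp,
// so every sketch (and the CSI IRQ handler) refers to the same object.
extern OV7670_Camera OV7670;

// --- implementation (e.g. OV7670.cpp) ---
OV7670_Camera OV7670;      // the single instance

bool OV7670_Camera::begin() {
  // With exactly one instance, the CSI interrupt handler can call straight
  // into this object instead of looking up which instance owns the interrupt.
  return true;             // real code would report whether the camera answered
}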
 
I've posted a new version of the OV7670 library. Here are a few of the changes:

* Fixed a bug where the .bmp file header was not properly updated when changing the camera window size.
* Added support for sampling pixels for motion detection.
* Added sample code for time-lapse capture of images.
* Added sample code for motion-triggered capture of images.
* Added visible time stamps to images displayed with the ILI9341, and embedded the 4-byte time stamp as the first 4 bytes of the bitmap file.

The new sample code requires the SD updates in TD 1.54B4 or later. Several examples use MTP file transfer, which seems to be in the process of getting updated. When TD 1.54 is released, I will review my implementation of the MTP responder and make sure it is compatible with the latest version of the MTP library.

Attachment: OV7670_1.zip
 
Hi again,

For the fun of it, I am going to play around again with converting my sketch, based on your earlier version, over to your current library.

I will probably try to add in some of the changes I have been playing with.

Things like: more resolutions, and using the windowing capabilities of the camera. I also have a version set up so that, instead of copying the data into another buffer when you get the new frame you wish to work with, I keep a third buffer: I simply tell the CSI system to use that extra buffer for its next reads, and return the previously filled buffer to the sketch to use. This avoids having to copy about 153K of data in the case of QVGA, and four times that in the case of VGA.
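A rough sketch of that buffer-swap idea, as I understand it: when a frame completes, hand the CSI a spare buffer and keep the filled one, so no large memcpy is needed. The buffer names and csiSetNextBuffer() below are placeholders, not real library calls.

Code:
#include <Arduino.h>

constexpr size_t QVGA_PIXELS = 320 * 240;

DMAMEM uint16_t bufA[QVGA_PIXELS];   // two buffers the CSI ping-pongs between
DMAMEM uint16_t bufB[QVGA_PIXELS];
DMAMEM uint16_t bufC[QVGA_PIXELS];   // the extra buffer that rotates in

uint16_t *sketchBuf = bufC;          // buffer the sketch currently owns

// Placeholder for whatever library call points the CSI DMA at a new buffer
// for its next frame (an assumption for this sketch, not a real API).
void csiSetNextBuffer(uint16_t * /*buf*/) { /* library-specific */ }

// Call when the CSI reports that 'filled' holds a complete frame: return the
// buffer the sketch has finished with to the CSI rotation and keep the freshly
// filled one, avoiding a 153,600-byte (QVGA) or 614,400-byte (VGA) copy.
uint16_t *takeFrame(uint16_t *filled) {
  csiSetNextBuffer(sketchBuf);   // old sketch buffer goes back to the CSI
  sketchBuf = filled;            // keep the completed frame for processing
  return sketchBuf;
}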

Again, I'm wondering what your thoughts are on creating a GitHub project for this. For me, having a project helps in understanding what has changed since previous updates.

But for me, this is all just for the fun of it :D

Kurt
 
Regarding the buffer-swap approach: I had a version early on that captured images by switching the CSI buffer address for one cycle. It worked fine until I forgot, changed from QVGA to VGA, and switched in a QVGA-sized buffer. Needless to say, the result was puzzling! To make this work well, I am considering putting the buffer size into the first 4 bytes of the frame buffer before I send the pointer to the camera library. That way the camera library can check the buffer size against the current working resolution and return an error if the buffer isn't big enough to hold the frame. Another possibility is to define a buffer structure with a uint32_t size component followed by an array of bytes (or uint16_t) for frame storage.
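Here is what the second option might look like as a size-tagged structure. This is just a sketch of the idea, using the documented Teensy 4.1 extmem_malloc() heap for the allocation; the struct and function names are mine, not code from the library.

Code:
#include <Arduino.h>

struct FrameBuffer {
  uint32_t sizeBytes;   // capacity of pixels[], filled in at allocation time
  uint16_t pixels[];    // flexible array member (GNU extension): RGB565 data
};

// Allocate a size-tagged frame buffer in PSRAM via the Teensy 4.1 EXTMEM heap.
FrameBuffer *allocFrame(uint32_t width, uint32_t height) {
  uint32_t bytes = width * height * 2u;              // RGB565: 2 bytes/pixel
  FrameBuffer *fb = (FrameBuffer *)extmem_malloc(sizeof(FrameBuffer) + bytes);
  if (fb) fb->sizeBytes = bytes;
  return fb;
}

// The capture routine can then refuse a buffer that is too small for the
// current working resolution instead of silently overrunning it.
bool frameFits(const FrameBuffer *fb, uint32_t width, uint32_t height) {
  return fb != nullptr && fb->sizeBytes >= width * height * 2u;
}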

I've avoided custom resolutions using the windowing capability because I'm not sure how the camera will react if the transmitted window doesn't match the area over which it is doing auto-exposure or AGC sensing. What will happen if a bright patch outside the window reduces the exposure?



As for a GitHub project: I signed up for a GitHub account a month or two ago. However, I've spent the intervening free time fiddling with cameras, displays, and the associated software, not learning to use GitHub.

Now that leaf-raking season is over in Western Oregon, I hope to get back to GitHub and some other projects.
 
Hello again:

As I mentioned, I have been doing some hacking (updates) on your latest library, where I added some extra custom configurations... I always figure that if it does not work for your application, you don't have to use it ;)

As for making sure the buffer is the right size: all three of them are allocated in the main .INO sketch, one after another. Let me know what you think.
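For reference, the kind of back-to-back EXTMEM allocation described above looks something like this minimal sketch. The buffer names and the VGA sizing are mine for illustration, not necessarily what the test sketch uses.

Code:
#include <Arduino.h>

constexpr size_t VGA_PIXELS = 640 * 480;     // 614,400 bytes each as RGB565

// Three frame buffers placed in PSRAM, declared one after another in the .INO.
EXTMEM uint16_t frameBuffer1[VGA_PIXELS];
EXTMEM uint16_t frameBuffer2[VGA_PIXELS];
EXTMEM uint16_t frameBuffer3[VGA_PIXELS];

void setup() {
  Serial.begin(115200);
  // Printing the addresses shows all three land in EXTMEM (0x70000000 region).
  Serial.print("frameBuffer1 at 0x"); Serial.println((uint32_t)frameBuffer1, HEX);
  Serial.print("frameBuffer2 at 0x"); Serial.println((uint32_t)frameBuffer2, HEX);
  Serial.print("frameBuffer3 at 0x"); Serial.println((uint32_t)frameBuffer3, HEX);
}

void loop() {}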

I also added some minor things that I had in my earlier version. For example, when you ask to display the OV7670 registers, I see data like:
Code:
Kurts OV7670 Camera to ILI93xx Test sketch  Compiled on Nov 29 2020 12:11:53
EXT Memory size: 8
After cameraBegin(end setup)
s - Show Information
c - Print CSI Registers
r - Show Camera registers
d - Save Image to SD Card
t - send Image to ILI9341
v - Send continuous images to ILI9341
w - test camera settings x,y,w,h,pixoffset, delay
Command character received: r
00(GAIN):36
01(BLUE):62
02(RED):48
03(VREF):0A
04(COM1):01
05(BAVE):73
06(GbAVE):7B
07(AECHH):40
08(RAVE):76
09(COM2):01
0A(PID):76
0B(VER):73
0C(COM3):00
0D(COM4):40
0E(COM5):61
0F(COM6):4B
10(AECH):7F
11(CLKRC):80
12(COM7):04
13(COM8):C7
14(COM9):6A
15(COM10):22
16(*RSVD*):02
17(HSTART):19
18(HSTOP):37
19(VSTART):16
1A(VSTOP):66
1B(PSHFT):00
1C(MIDH):7F
1D(MIDL):A2
1E(MVFP):07
1F(LAEC):00
20(ADCCTR0):04
21(ADCCTR1):02
22(ADCCTR2):91
23(ADCCTR3):00
24(AEW):95
25(AEB):33
26(VPT):E3
27(BBIAS):80
28(GbBIAS):80
29(*RSVD*):07
2A(EXHCH):00
2B(EXHCL):00
2C(RBIAS):80
2D(ADVFL):00
2E(ADVFH):00
2F(YAVE):3C
30(HSYST):00
31(HSYEN):00
32(HREF):80
33(CHLF):0B
34(ARBLM):11
35(*RSVD*):0B
36(*RSVD*):00
37(ADC):1D
38(ACOM):71
39(OFON):2A
3A(TSLB):0D
3B(COM11):12
3C(COM12):78
3D(COM13):40
3E(COM14):18
3F(EDGE):00
40(COM15):D0
41(COM16):08
42(COM17):00
43(AWBC1):0A
44(AWBC2):F0
45(AWBC3):34
46(AWBC4):58
47(AWBC5):28
48(AWBC6):3A
49(*RSVD*):00
4A(*RSVD*):00
4B(REG4B):09
4C(DNSTH):00
4D(DM_POS):40
4E(*RSVD*):20
4F(MTX1):B3
50(MTX2):B3
51(MTX3):00
52(MTX4):3D
53(MTX5):A7
54(MTX6):E4
55(BRIGHT):32
56(CONTRAS):5C
57(CONTRAS_CENTER):80
58(MTXS):9E
59(AWBC7):88
5A(AWBC8):88
5B(AWBC9):44
5C(AWBC10):67
5D(AWBC11):49
5E(AWBC12):0E
5F(B_LMT):F0
60(R_LMT):F0
61(G_LMT):F0
62(LCC1):00
63(LCC2):00
64(LCC3):50
65(LCC4):30
66(LCC5):00
67(MANU):80
68(MANV):80
69(GFIX):00
6A(GGAIN):40
6B(DBLV):0A
6C(AWBCTR3):0A
6D(AWBCTR2):55
6E(AWBCTR1):11
6F(AWBCTR0):9E
70(SCALING_XSC):3A
71(SCALING_YSC):35
72(SCALING_DCWCTR):00
73(SCALING_PCLK_DIV):F0
74(REG74):10
75(REG75):05
76(REG76):E1
77(REG77):01
78(*RSVD*):04
79(*RSVD*):26
7A(SLOP):20
7B(GAMA1):10
7C(GAMA2):1E
7D(GAMA3):35
7E(GAMA4):5A
7F(GAMA5):69
80(GAMA6):76
81(GAMA7):80
82(GAMA8):88
83(GAMA9):8F
84(GAMA10):96
85(GAMA11):A3
86(GAMA12):AF
87(GAMA13):C4
88(GAMA14):D7
89(GAMA15):E8
8A(*RSVD*):00
8B(*RSVD*):00
8C(?RGB444):00
8D(*RSVD*):4F
8E(*RSVD*):00
8F(*RSVD*):00
90(*RSVD*):00
91(*RSVD*):00
92(DM_LNL):00
93(DM_LNH):00
94(LCC6):50
95(LCC7):50
96(*RSVD*):00
97(*RSVD*):30
98(*RSVD*):20
99(*RSVD*):30
9A(*RSVD*):84
9B(*RSVD*):29
9C(*RSVD*):03
9D(BD50ST):4C
9E(BD60ST):3F
9F(HAECC1):78
A0(HAECC2):68
A1(DSPC3):03
A2(SCALING_PCLK_DELAY):01
A3(*RSVD*):02
A4(NT_CTRL):82
A5(AECGMAX):05
A6(LPH):D8
A7(UPL):D8
A8(TPL):F0
A9(TPH):90
AA(NALG):94
AB(*RSVD*):07
AC(STR-OPT):00
AD(STR_R):80
AE(STR_G):80
AF(STR_B):80
B0(*RSVD*):84
B1(ABLC1):0C
B2(*RSVD*):0E
B3(THL_ST):82
B4(*RSVD*):00
B5(THL_DLT):04
B6(*RSVD*):00
B7(*RSVD*):66
B8(*RSVD*):0A
B9(*RSVD*):06
BA(*RSVD*):00
BB(*RSVD*):00
BC(*RSVD*):00
BD(*RSVD*):00
BE(AD-CHB):08
BF(AD-CHR):07
C0(AD-CHGb):0B
C1(AD-CHRr):0A
C2(*RSVD*):00
C3(*RSVD*):00
C4(*RSVD*):00
C5(*RSVD*):00
C6(*RSVD*):00
C7(*RSVD*):00
C8(*RSVD*):E0
C9(SATCTR):6D
Horizontal start:200 stop:440 width:240
Vertical start:90 stop:410 height:320
I have not yet merged in the same textual information for register writes in debug mode. I also have a function to set the window parameters. This idea came partly from the Adafruit_OV7670 library, whose comments suggest fine-tuning the window parameters to suit your particular camera. I used that before to get an idea of which values may or may not work.
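For anyone reading the dump above: the "Horizontal start/stop" and "Vertical start/stop" lines can be reconstructed from the raw registers. HSTART/HSTOP hold the upper 8 bits of each horizontal edge with the low 3 bits packed into HREF, and VSTART/VSTOP pair with the low 2 bits in VREF. A small decoding sketch (the struct and function names are mine):

Code:
#include <stdint.h>

struct Window { uint16_t hstart, hstop, vstart, vstop; };

// Combine the OV7670 window registers into full pixel positions.
Window decodeWindow(uint8_t hstart, uint8_t hstop, uint8_t href,
                    uint8_t vstart, uint8_t vstop, uint8_t vref) {
  Window w;
  w.hstart = (uint16_t)(hstart << 3) | (href & 0x07);         // 11-bit value
  w.hstop  = (uint16_t)(hstop  << 3) | ((href >> 3) & 0x07);
  w.vstart = (uint16_t)(vstart << 2) | (vref & 0x03);         // 10-bit value
  w.vstop  = (uint16_t)(vstop  << 2) | ((vref >> 2) & 0x03);
  return w;
}

// With the values in the dump (HSTART=0x19, HSTOP=0x37, HREF=0x80,
// VSTART=0x16, VSTOP=0x66, VREF=0x0A) this gives hstart=200, hstop=440
// (width 240) and vstart=90, vstop=410 (height 320), matching the last
// two lines of the output above.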

Anyway, my WIP versions are included here: both your library (with mods) and my test sketch converted over to your library.

Next up, maybe some hacking on my ILI9341_t3n library to see if I can time the VREF of the display and then find which windows of time are good for starting the writeRect...

EDIT: I updated the library file so that WriteRegister can display debug info, but only if you set my debug define to > 1...
 

Attachments

  • CSI_41_OV7670_ILI-201129a.zip
    15.5 KB · Views: 80
  • OV7670.zip
    268.3 KB · Views: 88
Last edited:
KurtE:


I see that you have avoided possible frame buffer overflows by putting all the buffers in EXTMEM where there is plenty of room for a VGA image. You have also avoided resolution changes in the program once the buffers are allocated.

There are some cases where a program will perform better with one or more buffers in DMAMEM, and better still if a buffer is in DTCM. Avoiding EXTMEM greatly speeds up operations that require pseudo-random access to the buffer to operate on single pixels or groups of pixels. For instance, rotating a QVGA image is about 8 times faster if the destination is in DTCM instead of DMAMEM.
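A rough micro-benchmark along these lines is one way to reproduce that kind of comparison. The exact ratio will depend on caching and whatever else the sketch is doing; the buffer placement below is illustrative, not a claim about the library.

Code:
#include <Arduino.h>

constexpr int W = 320, H = 240;              // QVGA, RGB565

DMAMEM uint16_t src[W * H];                  // stand-in for a captured frame
DMAMEM uint16_t dstDMAMEM[W * H];
uint16_t dstDTCM[W * H];                     // plain static array lands in DTCM

// Rotate the image 90 degrees clockwise into 'out' (out is H wide, W tall).
static void rotate90(const uint16_t *in, uint16_t *out) {
  for (int y = 0; y < H; y++)
    for (int x = 0; x < W; x++)
      out[x * H + (H - 1 - y)] = in[y * W + x];
}

void setup() {
  Serial.begin(115200);

  uint32_t t0 = micros();
  rotate90(src, dstDTCM);
  uint32_t tDTCM = micros() - t0;

  t0 = micros();
  rotate90(src, dstDMAMEM);
  uint32_t tDMAMEM = micros() - t0;

  Serial.printf("rotate to DTCM:   %lu us\n", (unsigned long)tDTCM);
  Serial.printf("rotate to DMAMEM: %lu us\n", (unsigned long)tDMAMEM);
}

void loop() {}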

I think I caught the most likely offender by checking that the buffer is in EXTMEM before switching to VGA resolution. With a custom resolution, you can probably compute the buffer requirement from the resolution settings and take an error exit if the buffer is too big for DMAMEM or DTCM. For marginal cases, though, a buffer might fit in DMAMEM or DTCM only after you compare its size against some realistic estimate of the memory left over after code, stack space, USB buffers, etc.
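One way to make that check concrete is to classify the buffer pointer by the Teensy 4.1 (i.MX RT1062) memory map: DTCM starts at 0x20000000, DMAMEM (OCRAM/RAM2) at 0x20200000, and EXTMEM (PSRAM) at 0x70000000. The enum, function names, and the VGA policy below are just a sketch of the idea, not library code.

Code:
#include <Arduino.h>

enum class BufRegion { DTCM, DMAMEM, EXTMEM, Other };

// Classify a pointer by the Teensy 4.1 memory map.
BufRegion regionOf(const void *p) {
  uint32_t a = (uint32_t)p;
  if (a >= 0x70000000u && a < 0x71000000u) return BufRegion::EXTMEM;  // PSRAM
  if (a >= 0x20200000u && a < 0x20280000u) return BufRegion::DMAMEM;  // OCRAM
  if (a >= 0x20000000u && a < 0x20080000u) return BufRegion::DTCM;    // RAM1
  return BufRegion::Other;
}

// Example policy from the discussion above: only allow the switch to VGA when
// the frame buffer lives in EXTMEM, since 614,400 bytes will not fit elsewhere.
bool okForVGA(const void *buf) {
  return regionOf(buf) == BufRegion::EXTMEM;
}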
 
@mborgerson - Yep, this is the worst-performing memory to use. And yes, you could use DMAMEM (malloc) and maybe DTCM...

But right now I'm just playing, and I figured I would start with the worst memory. So far I have not allowed the camera resolution to change; I have not checked whether the library allows that or not. If it did, I might play with converting from landscape to portrait mode, which again is the same size image.

But first up, I'm curious to play with the display library to see when it is safe to do the writeRect without a possible tear.

And, for example, does the screen orientation matter? Lots to try out :D
 