Why does the OctoWS2811 library use an array of Ints to store the RGB data?
It's 6 integers times the number of LEDs per pin, which is 24 bytes times the number of LEDs per pin. Since there are 8 pins, each with that number of LEDs, the size works out to exactly 3 bytes per LED. No bytes are wasted.
32 bit integers are used instead of 8 bit bytes because a byte array has only a 25% chance of being aligned by the compiler onto a 32 bit boundary. Internally the RAM is 32 bits wide, so alignment allows faster copying between the buffers.
#define LEDS_PER_STRIP 500
DMAMEM int displayMemory[LEDS_PER_STRIP*6];
Seems like you don't quite understand. I'll try to explain again, this time with a specific example.
Suppose you use 500 LEDs per pin. There are 8 pins, so that means you have 4000 LEDs. Each LED needs 3 bytes, so you need a total of 12000 bytes.
Here's the code:
Code:
#define LEDS_PER_STRIP 500
DMAMEM int displayMemory[LEDS_PER_STRIP*6];
This array is 500*6 = 3000 ints, which are 4 bytes each. That's exactly the required size of 12000 bytes.
Changing to 16 bit integers or 8 bit bytes doesn't alter the need for 24 bits (3 bytes) per LED, which still adds up to a total of 12000 bytes.