Hi all. Just wondering, in cases where execution speed is the primary concern, is it generally better to use native-sized 32-bit integers on T3.x boards even though I don’t need values that large? I’m mainly talking about integer arithmetic, ‘for’ loop indices, array indices, bit manipulation, etc.
Just thinking that using 8 or 16-bit integers might involve extra packing / unpacking, shifting, masking, etc.
Thanks.
Greg
In general, it depends on the low-level details of the processor. Neither ARM nor AVR is an architecture I've done compiler support for, so I can't say exactly what they do and don't support.
Note, the ISO C/C++ standards say that char/short values are logically converted to int when used in an expression. Typically, most machines provide direct instructions to load and store 8-bit and 16-bit values into 32-bit or 64-bit registers. The arithmetic is done via 32-bit or 64-bit instructions, and then the store only stores the bottom 8 or 16 bits.
On some 64-bit machines, there aren't 32-bit instructions, so the compiler every so often has to do a convert to 32-bit if the expression is being done in int rather than long. But since the Teensy is 32-bit, that isn't an issue for you.
FWIW, the PowerPC does not have a load 8-bit with sign extension, so loading a signed char has to be done as two instructions (a load 8-bit with zero extension, then a sign extend), but it does have loads of 16-bit and 32-bit values with either sign or zero extension.
Be aware of premature optimization. The things you think are going to be the bottlenecks may not be where the chip is actually spending its time.