dauntless89
Well-known member
I've been reviewing datatype standards for a couple projects I'm working on, and have a couple points I could use guidance on. The current project is actually based on a Due but this is more for general programming knowledge and will also pertain to the T3.6 I'm using in another project until the T4 comes out. Both are 32-bit architectures so it should apply to both.
In the Arduino library, "int" and "long" are both 32-bit containers on a Due. Most of my variables can be stored in a 16-bit container, but to get a 16-bit container, you must use "short." A lot of example code found online subscribes to a "use the smallest container you can get away with" doctrine, but this page cautions that using a smaller container than the native architecture can cause delays. Is this true all of the time, or just with certain operations? Should I generally use the native container size until I start running out of space?
The other thing I'm curious about is that I have three routines that measure pulse frequencies by measuring the duration between pulses. Precision requires using micros(), and if I use a 32-bit container for these variables, I'll get a glitch every ~72 minutes as the counter rolls over, which will require manual correction of the datalog after the fact. If I use a 64-bit container, it can store something like 600,000 years' worth of microseconds, but how badly will it slow the program down on a 32-bit processor?
Thanks,
Tony