Bit math: Why aren't bitClear, bitSet and similar READABLE functions used more?


alan856

Well-known member
I've been programming about 40 years (GAK!) and still have not gotten used to the rather 'formal' type names like uint32_t (which I 'get' - I see why that is useful), but I'm still wondering about the rather arcane bit-math syntax: 1<<2 or 3&2>>4... and so on and on, which seems to be de rigueur. Surely very FACTUAL - but not very darn readable!

What is so bad about using bitClear(), bitSet() and the similarly defined bit functions? They do seem to convey the desired operation more clearly - what advantage does the << and >> style syntax offer?
 
Manually doing the bits not only makes you learn how it works (it's how I learned), it also lets you handle multiple bits at once, not just single bits. bitSet and bitClear would be an eyesore for libraries. An example of this would be the automatic filtering setup of mailboxes using offsets in FlexCAN. Throwing in several bitSet statements would be painful code bloat when one or two lines of AND'ing, OR'ing, or shifting are sufficient.
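To make that concrete, here is a minimal sketch of the multi-bit case (the variable and the field position are invented for illustration, not the real FlexCAN layout): a 4-bit field is updated with one AND and one OR, where bitSet/bitClear would need a call per bit plus logic to decide each bit's value.
Code:
#include <stdint.h>

uint32_t ctrl = 0;  // stand-in for a 32-bit control register

// Write a 4-bit field occupying bits 8-11 in one read-modify-write.
void writeField(uint32_t value) {
	ctrl &= ~(0xFUL << 8);          // clear the whole field with one AND
	ctrl |= (value & 0xFUL) << 8;   // drop the new value in with one OR
}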
 
Sorry - I do not agree with your assessment.

Every bit manipulation that can be done with the operators can be done with the functions. And I would not mind seeing several lines of such functions, as I would know at a glance what is being done. Readability in code is paramount, along with copious comments.
 

I have to agree with tonton on this. The bit operators are explicit and cannot be confused. Functions are black boxes -- you know input and output; that is all. Copious comments can also be counter-productive. Comments should be exact and precise, but that does not necessarily mean copious.

By the way, I've been coding non-professionally and professionally since the release of the Intel 4004. I'll let you do the math :)
 
Code:
uint32_t var = 0b110;   // bits 1 and 2 set
var &= ~(3UL << 1);     // clear bits 1-2 with one AND
var |= (3UL << 1);      // set bits 1-2 with one OR
var |= 0b1101001;       // EN, FRAME, TEST, CRC, etc. (one comment covers the whole group of bits)
// var == 0b1101111   <-- answer

If you don't understand that, then by all means write bitSet everywhere.
I am not a professional coder nor did I study for it, but I do believe in performance, clarity from understanding, efficiency, and fewer CPU cycles. When the libraries are polling bits in realtime 24/7, you gotta ask yourself how many instructions you want to throw away endlessly. This is not ideal in a runtime environment.

but I sure wouldn't want this plastered all over the library:
Code:
bitSet(var, 6); // EN
bitSet(var, 5); // FRAME
bitSet(var, 3); // TEST
bitSet(var, 0); // ETC

Then there's 32-bit registers to deal with, fun stuff... 32 lines of editing?

Everyone has their own style and coding ethics and tries to follow the standard, but no one will do this when it affects performance. Configuration, maybe, but not realtime processing :)
 
Well @quadrupel - you may have a beard grayer than mine - but I STILL disagree.

Perhaps in a heavy-duty commercial programming environment... but for many Arduino and Teensy users I think the readability wins out. BTW - this "discussion" has been going on for years, as evidenced by the numerous conversations about it on Stack Exchange. They even mention how the embedded community has embraced the operator syntax. For me - I'll just spell it out - the moist sponge between my ears is drying out, and I see the shifting operators as a big PITA.
 
I sort of shy away from these types of "religious wars", other than to say that the first computer I programmed on in High School was an IBM 1620... My master's project was done using a Franklin Ace...

As for which is more readable or understandable, it's difficult to say. Sometimes it is more about how it is actually done than whether you use the shift/mask operators or these functions/macros.

I try to be pretty pragmatic on these things.

For example I find things like:
Code:
	if (bitOrder == LSBFIRST) {
		ctar |= SPI_CTAR_LSBFE;
	} else {
		ctar &= ~SPI_CTAR_LSBFE;
	}
Just as readable as:
Code:
	if (bitOrder == LSBFIRST) {
		bitSet(ctar, SPI_CTAR_LSBFE_BIT);
	} else {
		bitClear(ctar, SPI_CTAR_LSBFE_BIT);
	}
(Note the value of SPI_CTAR_LSBFE_BIT is not actually defined, but could be)...
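For what it's worth, if someone did want that constant, one way (a sketch, assuming GCC/Clang builtins are acceptable in your codebase) is to derive the bit index from the existing single-bit mask rather than hard-coding a number out of the reference manual:
Code:
// Hypothetical: SPI_CTAR_LSBFE is a single-bit mask, so the bit index that
// bitSet()/bitClear() need can be computed from it.
// __builtin_ctz() counts trailing zero bits.
#define SPI_CTAR_LSBFE_BIT (__builtin_ctz(SPI_CTAR_LSBFE))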

The problem I have had using some of these bit functions is that, many times with hardware registers and the like, the fields are not just one bit.
For example, on T3.x boards with the PUSHR register, you may want to verify that the queue is not FULL before pushing, so you look at the status register (SR), at the TXCTR field (bits 12-15), and see if the count is greater than 3 (SPI object) or 1 (SPI1 and SPI2...). Which is a little harder to do with bitSet/bitClear/bitRead style functions...
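As a rough sketch of that multi-bit check (assuming SPI0_SR from kinetis.h and the TXCTR position given above - verify the field layout against the reference manual for your part):
Code:
uint32_t txctr = (SPI0_SR >> 12) & 0xF;  // extract the 4-bit TXCTR field (bits 12-15)
if (txctr > 3) {
	// TX FIFO already holds more than 3 entries on SPI0 -- wait before writing PUSHR
}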

So again it all depends...
 
I see both shift operators and bit functions a lot; neither is particularly more or less readable to me. The IMPORTANT thing is to not use magic numbers in your code.
Code:
// GOOD, because the register naming with the defined masks means you don't need more comments really, it's self-documenting.
enableReg |= I2C_MASK; // Enable I2C, but you probably already know that now.
enableReg &= ~SPI_MASK; // Disable SPI
setBit(interruptEnable, UART_RX_INT_MASK);

// BAD, when you do this, I have to go read the device documentation to figure out what you're doing.
// And if I have to do that, I will owe you a solid kick to the nuts.
configReg |= 0x4; // there better be a comment here telling me what the hell these bits do!!!
configReg &= ~0x8;
setBit(interruptEnable, 0x2);
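(For completeness, the named constants in the GOOD snippet would live in a header somewhere; hypothetical definitions below, with bit positions invented purely for illustration - the real ones would come from the device's register map.)
Code:
#define I2C_MASK          (1UL << 0)  // enable-register bit gating the I2C block
#define SPI_MASK          (1UL << 1)  // enable-register bit gating the SPI block
#define UART_RX_INT_MASK  (1UL << 2)  // interrupt-enable bit for UART receive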
 
Well @quadrupel - you may have a beard grayer than mine - but I STILL disagree.

Well, that is your opinion, religion, etc. You may be right, so keep it, but honestly, I don't care.
 
I see both shift operators and bit functions a lot; neither is particularly more or less readable to me. The IMPORTANT thing is to not use magic numbers in your code.

Interesting perspective from a software engineer. Those who learnt hardware first and then came into software (once progress had caught up) see things differently. What is BAD about magic numbers is that they may be used in several places in the code, so any change/maintenance might easily "miss" one of them (with BAD consequences). Of that there is no argument, and there never was a program that did not require maintenance.

But there are downsides to using "descriptive/alphabetic" labels. The biggest I find is not knowing where these are defined; if they are buried in some .h file (it might even be a .h that isn't explicitly included), I can waste a lot of time trying to find the mask and spell it correctly so that my program will compile without error. And that's worse to me than using the Reference Manual - where at least all the registers are in one place (even if there are 3000+ pages, as in the i.MX RT1060).

A hardware guy knows which bits have been set by "0x2" or even "0b00000010000000000100000000000001", and can easily compare the mask against the register to make sure that the correct bits have been prepared. What I also learn from this long-winded, error-prone bit setting is what all the other bits are for, and I often have that in the back of my mind for the future. The downside is it's very easy to make a mistake - which is why I don't double-check, but treble-check.

There is no such thing as "self-documenting" code.
 
Interesting perspective from a software engineer.

I'm actually a hardware guy. :)

5 years of digital ASIC design, 11 years of FPGA and electrical schematic/PCB design. It's only the last few years (about 3) that I've shifted to software, as the work I do (accelerated computing) has moved away from custom hardware to things like GPGPU programming, etc.

You raise good points (particularly the need for documentation). However, documentation should never be needed simply to compensate for poor variable/function naming.

Luckily, the issues you raised can usually be solved with good software design guidelines and rules to follow when programming on a team. My team is big on abstraction. In the case of manipulating hardware registers, the SW driver will have something like i2cDriver.cpp, with all bit masks/fields defined in i2cDriverMap.h, where <name>.cpp / <name>Map.h is a convention, so you always know where to find bit and register address mappings. But this is not what you expose to the SW users.

Your driver's public API must expose meaningful functions that capture the functionality and abstract away the particular hardware implementation, so users never deal with bit fields: setI2cClock(freq), enableI2c(), disableI2c(), clearRxInterrupt(), etc. The translation of "task" to "bits" is done in one thin layer. Typically, only the hardware guy, the SW guy who wrote the driver, and the verification engineer who did the unit testing ever look at that code. They are also ALL responsible for the design review of the SW code. Yup, SW code reviewed by HW engineers. That gets interesting with new SW hires who aren't used to it!
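A rough sketch of that thin translation layer (the register address, bit positions, and clock math are placeholders, not a real part's layout; the public function names are the ones mentioned above):
Code:
// i2cDriverMap.h -- hypothetical register map, kept in one place by convention
#include <stdint.h>
#define I2C_CTRL_REG      (*(volatile uint32_t *)0x40066000)  // placeholder address
#define I2C_CTRL_ENABLE   (1UL << 7)
#define I2C_CTRL_DIV_MASK (0x3FUL)

// i2cDriver.cpp -- the only layer where the bit fields appear
static const uint32_t busClockHz = 48000000;  // hypothetical bus clock

void enableI2c()  { I2C_CTRL_REG |=  I2C_CTRL_ENABLE; }
void disableI2c() { I2C_CTRL_REG &= ~I2C_CTRL_ENABLE; }

void setI2cClock(uint32_t freq) {
	uint32_t div = busClockHz / freq;  // placeholder divider calculation
	I2C_CTRL_REG = (I2C_CTRL_REG & ~I2C_CTRL_DIV_MASK) | (div & I2C_CTRL_DIV_MASK);
}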
 
BTW - as far as 'bloat' goes using bitClear() or bitSet() - I got a disassembly listing of a Teensy 4/Teensyduino test sketch (in Visual Studio 2019) - and each of those took ONE line of assembly code.
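That matches how the Arduino core defines them: bitSet/bitClear are just macros over the same operators (paraphrasing Arduino.h from memory - check your core's copy), so the two spellings compile to identical code:
Code:
#include <Arduino.h>

uint32_t a = 0, b = 0;
bitSet(a, 3);      // macro form: expands to ((a) |= (1UL << (3)))
b |= (1UL << 3);   // operator form -- the same single instruction after optimization
// a and b are both 0b1000 here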

And everybody seems to be talking about drivers and libraries - they are yet another animal, not really meant to be looked inside of... except in extenuating performance cases. I'm talking about main-stream, day-to-day coding in the hobby/serious-enthusiast world, where 98% of sketches/programs are one-off affairs. I can't tell you how many older forum entries I've seen where some newcomer is asking for a decryption of the opaque bit manipulation done with shifts, ANDs and ORs. A hardware coder needs to UNDERSTAND what a bit shift is, what setting bits in a register is doing, etc. But HOW it is written in C should be whatever form is most readable. As I said before (to everyone's irritation) I really dislike the "u_int32_t" style of defining vars. It looks like crap when you scan down a many-lined piece of code. There must be a better way!

Let the flames begin! :)

Horses for courses... your mileage may vary.
 
… not a flaming place here as observed … hope it stays that way - fun to come here and not get grief over opinions.

Most of the stuff here that looks like that is 'driver or library level' code - that is why just using the native "C" operators on bits is an acceptable common denominator.

Having done lots of maintenance coding - you just adapt to what is present and fix the actual problem in context.

Comments seem like a great idea - more can be better - though they can be cryptic or become outdated after changes. Having some base level of Doxygen in the whole of the tree would be nice.

Having a tree grepping editor is essential when it comes to navigating sources and finding the definition or use of various things.
 