I have an application capturing both analog and digital data up to a rate of 128ksps.
At the moment the ISR is taking too much time and making the 128ksps rate unviable for streaming purposes.
I have determined that the digital input reading and byte assembly is by far the slowest part of the routine (below).
Note that these inputs are not readable as a single byte (unfortunately).
The digital input read and byte-value assignment is taking an average of 206 ns out of a total of 375 ns, so pretty horrible.
Obviously my code is terribly inefficient, but I'm not sure which approach would be significantly faster.
I have not implemented ISRs to service the digital inputs individually, as I need to keep the ADC ISR very high priority and low jitter.
Code:
void adcDataReadyIsrStream() {
    adc.readData(&res);
    ultemp = micros() - loggerStatus.logStartTimeMicro;
    //mCurPosValue = myEnc1.read(); // taking too long
    mCurPosValue++;
    // update digital inputs here as the ISRs for the digital inputs will be disabled
    bitWrite(digitalInputs, 0, digitalReadFast(DIN0));
    bitWrite(digitalInputs, 1, digitalReadFast(DIN1));
    bitWrite(digitalInputs, 2, digitalReadFast(DIN2));
    bitWrite(digitalInputs, 3, digitalReadFast(DIN3));
    bitWrite(digitalInputs, 4, digitalReadFast(DIN4));
    bitWrite(digitalInputs, 5, digitalReadFast(DIN5));
    bitWrite(digitalInputs, 6, digitalReadFast(DIN6));
    bitWrite(digitalInputs, 7, digitalReadFast(DIN7));
    myQueueWrite(0xAA);                                   // frame header
    myQueueWrite((uint8_t)(ultemp & 0xFF));               // elapsed time, 32 bit
    myQueueWrite((uint8_t)((ultemp >> 8) & 0xFF));
    myQueueWrite((uint8_t)((ultemp >> 16) & 0xFF));
    myQueueWrite((uint8_t)((ultemp >> 24) & 0xFF));
    myQueueWrite((uint8_t)(res.chan1_16 & 0xFF));         // analog channel 1
    myQueueWrite((uint8_t)((res.chan1_16 >> 8) & 0xFF));
    myQueueWrite((uint8_t)(res.chan2_16 & 0xFF));         // analog channel 2
    myQueueWrite((uint8_t)((res.chan2_16 >> 8) & 0xFF));
    myQueueWrite((uint8_t)(res.chan3_16 & 0xFF));         // analog channel 3
    myQueueWrite((uint8_t)((res.chan3_16 >> 8) & 0xFF));
    myQueueWrite((uint8_t)(res.chan4_16 & 0xFF));         // analog channel 4
    myQueueWrite((uint8_t)((res.chan4_16 >> 8) & 0xFF));
    myQueueWrite((uint8_t)(mCurPosValue & 0xFF));         // 32 bit encoder counter
    myQueueWrite((uint8_t)((mCurPosValue >> 8) & 0xFF));
    myQueueWrite((uint8_t)((mCurPosValue >> 16) & 0xFF));
    myQueueWrite((uint8_t)((mCurPosValue >> 24) & 0xFF));
    myQueueWrite(digitalInputs);                          // digital inputs, read above
    // all added for debugging
    /* myQueueWrite((uint8_t)(myCircBuffer.length & 0xFF));          // buffer length
    myQueueWrite((uint8_t)((myCircBuffer.length >> 8) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.length >> 16) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.length >> 24) & 0xFF));
    myQueueWrite((uint8_t)(myCircBuffer.writeIndex & 0xFF));         // buffer write index
    myQueueWrite((uint8_t)((myCircBuffer.writeIndex >> 8) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.writeIndex >> 16) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.writeIndex >> 24) & 0xFF));
    myQueueWrite((uint8_t)(myCircBuffer.readIndex & 0xFF));          // buffer read index
    myQueueWrite((uint8_t)((myCircBuffer.readIndex >> 8) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.readIndex >> 16) & 0xFF));
    myQueueWrite((uint8_t)((myCircBuffer.readIndex >> 24) & 0xFF));
    */
}
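One idea I'm considering, assuming all eight DIN pins can be routed to the same GPIO port: read the port's input register once and assemble the byte with shifts and masks, instead of eight separate digitalReadFast() calls. This is only a sketch, not tested on hardware; the bit positions below are invented for illustration (the real ones would come from the board's pin-to-port map), and `GPIO6_PSR` in the comment is an assumed Teensy 4.x register name.

```cpp
#include <cstdint>

// Hypothetical pin map: DIN0..DIN7 on scattered bits of one 32-bit GPIO
// port input register. These bit numbers are made up for the demo.
static const uint8_t kDinPortBit[8] = {2, 3, 4, 5, 16, 17, 22, 23};

// Assemble the digital-input byte from one snapshot of the port register.
static inline uint8_t packDigitalInputs(uint32_t port) {
    uint8_t b = 0;
    for (int i = 0; i < 8; ++i) {
        b |= (uint8_t)(((port >> kDinPortBit[i]) & 1u) << i);
    }
    return b;
}

// In the ISR this would collapse to a single register read, e.g.
// (Teensy 4.x, assumed):
//   digitalInputs = packDigitalInputs(GPIO6_PSR);
```

If the pins happened to sit on contiguous port bits, the loop would collapse further to a single shift and mask, e.g. `(uint8_t)((port >> 16) & 0xFF)`.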
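Separately, since myQueueWrite() is called 18 times per sample, I'm also wondering whether packing the whole frame into a local buffer and pushing it into the circular buffer in one call would help. A sketch of the packing side, with `myQueueWriteBlock()` as a hypothetical block-write (a memcpy into the ring buffer plus one index update, with wrap handling); the memcpy layout assumes a little-endian target, which matches ARM Cortex-M and the existing low-byte-first order:

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Pack one 18-byte sample frame (header, elapsed time, 4 analog
// channels, encoder count, digital inputs) into buf; returns the length.
static size_t packFrame(uint8_t *buf,
                        uint32_t elapsedUs,
                        const uint16_t chan[4],
                        uint32_t encoderCount,
                        uint8_t digitalInputs) {
    uint8_t *p = buf;
    *p++ = 0xAA;                               // frame header
    memcpy(p, &elapsedUs, 4);    p += 4;       // elapsed time, 32 bit
    memcpy(p, chan, 8);          p += 8;       // analog channels 1..4
    memcpy(p, &encoderCount, 4); p += 4;       // 32 bit encoder counter
    *p++ = digitalInputs;                      // digital inputs
    return (size_t)(p - buf);                  // 18
}

// In the ISR, something like (myQueueWriteBlock is hypothetical):
//   uint8_t frame[18];
//   uint16_t chans[4] = {res.chan1_16, res.chan2_16, res.chan3_16, res.chan4_16};
//   size_t n = packFrame(frame, ultemp, chans, mCurPosValue, digitalInputs);
//   myQueueWriteBlock(frame, n);
```

Would trading 18 function calls for one buffer fill plus one block copy be a significant win here, or is the per-call overhead already negligible next to the digital reads?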