My teensy 3.2 device has an ISR that reads a sensor and writes to a circular inBuffer that handles overflow gracefully. A main loop processes data from the inBuffer, triggers some time-sensitive external actions, and writes logging output to a circular outBuffer that also overflows gracefully. The last step in the main loop is a non-blocking write to the USB Serial port, which is read by a host PC.
Everything runs fine with data rates of 1 - 2 Mbps and no problems for days on end, until the host PC gets distracted and fails to read the serial port for some period of time. At that point my Serial.write call intermittently blocks ("forever", or until the PC begins reading again) even though the code is written to be non-blocking. And of course, blocking is a disaster in my scenario.
Here's where the code blocks:
Code:
const int serialWriteTimeout = 1000; // microseconds

void SendDataToHost()
{
    elapsedMicros serialWriteTime;
    while (serialWriteTime < serialWriteTimeout)
    {
        int bytesAvail = outBuffer.getCountAvailForRead();
        if (bytesAvail == 0)
            break; // quit when there's no remaining data to send
        int serialOutAvail = Serial.availableForWrite(); // normally this will be 64
        if (serialOutAvail == 0)
            continue;
        int bytesWanted = (bytesAvail > serialOutAvail) ? serialOutAvail : bytesAvail;
        int bytesToSend = outBuffer.readBytes(USB_OutBuffer, bytesWanted);
        Serial.write(USB_OutBuffer, bytesToSend);
        // On the Teensy this shouldn't block, because availableForWrite() has
        // promised there's room for this data.
        // Unfortunately, it DOES block, at least some of the time.
    }
}
After convincing myself this was indeed the problem, I looked into the serial implementation and found that in Nov 2014 Paul added a ToDo in the code:
Code:
int usb_serial_write_buffer_free(void)
{
    uint32_t len;

    tx_noautoflush = 1;
    if (!tx_packet) {
        if (!usb_configuration ||
          usb_tx_packet_count(CDC_TX_ENDPOINT) >= TX_PACKET_LIMIT ||
          (tx_packet = usb_malloc()) == NULL) {
            tx_noautoflush = 0;
            return 0;
        }
    }
    len = CDC_TX_SIZE - tx_packet->index;
    // TODO: Perhaps we need "usb_cdc_transmit_flush_timer = TRANSMIT_FLUSH_TIMEOUT"
    // added here, so the SOF interrupt can't take away the available buffer
    // space we just promised the user could write without blocking?
    // But does this come with other performance downsides? Could it lead to
    // buffer data never actually transmitting in some usage cases? More
    // investigation is needed.
    // https://github.com/PaulStoffregen/cores/issues/10#issuecomment-61514955
    tx_noautoflush = 0;
    return len;
}
This (loss of atomicity) is the likely mechanism causing my non-blocking code to block. My questions are:
Question #1: Is there anything that can be done purely in user code to achieve reliably non-blocking data transmission?
Question #2: Has anybody else already investigated this?
Question #3: I have added the line suggested in Paul's note, and Serial.write now appears to behave as expected/desired. Can anybody explain what kinds of scenarios might cause issues (IOW, how to test)?
Question #4: Is there a best practice for using a modified library while still applying bulk library updates periodically?
Thanks!
-- Craig