Hi,
I've been messing around with an application of the Teensy 4.0 where it receives some bytes and replays them back with some modifications. I've been running into issues where, if the writer writes a lot of bytes quickly, the program hangs. I've reduced everything to an MWE where I write 180,000 bytes to the serial port and then wait to read the same 180,000 bytes back.
Code:
use std::fs::OpenOptions;
use std::io::{Read, Write};

use rand::prelude::*;

const BUFFER_SIZE: usize = 180_000;

fn main() {
    let mut f = OpenOptions::new()
        .read(true)
        .write(true)
        .open("/dev/ttyACM0")
        .unwrap();

    let mut send_buffer: [u8; BUFFER_SIZE] = [0; BUFFER_SIZE];
    let mut recv_buffer: [u8; BUFFER_SIZE] = [0; BUFFER_SIZE];

    let mut rng = thread_rng();
    rng.fill_bytes(&mut send_buffer);

    f.write_all(&send_buffer).unwrap();
    f.read_exact(&mut recv_buffer).unwrap();

    assert!(send_buffer == recv_buffer);
    println!("Hello, world!");
}
Then on the teensy side I simply have:
Code:
// As simple as it gets. Just echo back what's seen over USB serial.
void setup() {
    Serial.begin(115200); // baud doesn't matter (native USB).
}

void loop() {
    while (Serial.available()) {
        Serial.write(Serial.read());
    }
}
On my Linux box this hangs forever. The write finishes but the read never completes; at less than 180k, say 150k, everything goes through. It was my understanding that USB flow control should take care of this type of thing, so I would not need any special rate limiting on the host side.
It's not entirely clear to me what is occurring here, because the write terminates but the read never completes. My first thought is that there is some limit on how much data can be buffered for a read on the Linux side. My expectation was that once this buffer filled up, the writes on the Teensy side would block, which in turn would cause the write on the Linux side to block. Instead, only the read blocks.
Ideally, on the user side we just want to expose a library function that looks basically like a file descriptor: you can dump an arbitrary number of bytes into it and later call read on it when you're ready. I thought that should be possible with USB, but perhaps not. Is the only solution here to fragment the transfer, where the client is only allowed to have, say, 10k bytes in flight at a time and has to wait for an acknowledgment that those 10k bytes have been processed before sending the next fragment?
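For concreteness, the fragmented approach I have in mind would look something like this on the host side. This is just a sketch I haven't tested against the hardware: `echo_in_chunks` and `CHUNK_SIZE` are my own made-up names, and it uses the echoed bytes themselves as the "acknowledgment" rather than a separate ack message.

```rust
use std::io::{Read, Write};

/// Hypothetical chunk size; small enough to (hopefully) never overflow
/// whatever buffer is filling up on the host or device side.
const CHUNK_SIZE: usize = 10_000;

/// Write at most CHUNK_SIZE bytes, then block until the same chunk has
/// been echoed back before sending the next one. The echoed bytes act
/// as the acknowledgment, so at most CHUNK_SIZE bytes are in flight.
fn echo_in_chunks<P: Read + Write>(port: &mut P, data: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut received = Vec::with_capacity(data.len());
    for chunk in data.chunks(CHUNK_SIZE) {
        port.write_all(chunk)?;
        let mut buf = vec![0u8; chunk.len()];
        port.read_exact(&mut buf)?; // blocks until this chunk comes back
        received.extend_from_slice(&buf);
    }
    Ok(received)
}
```

The obvious downside is that the link goes idle between chunks, so throughput suffers compared to streaming.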
I guess the other option would be to have some sort of service on the PC responsible for constantly reading off the serial port and managing its own buffer (assuming that a buffer on the PC is what is causing this process to block). That would let us grow the buffer ourselves instead of letting the transfer seize up.
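At its simplest that service could just be a second thread that drains the port while the large write is in flight. A sketch of what I mean, assuming that concurrent draining keeps the host-side receive buffer from filling (`echo_concurrently` is a hypothetical name, and for a real `/dev/ttyACM0` you'd need two handles to the device, e.g. via `File::try_clone`):

```rust
use std::io::{Read, Write};
use std::thread;

/// Hypothetical concurrent echo: a spawned thread writes the whole buffer
/// while the calling thread keeps draining the read side, so the receive
/// buffer is never left to fill up during a large write.
fn echo_concurrently<R, W>(mut reader: R, mut writer: W, data: Vec<u8>) -> std::io::Result<Vec<u8>>
where
    R: Read,
    W: Write + Send + 'static,
{
    let len = data.len();
    let writer_handle = thread::spawn(move || writer.write_all(&data));

    let mut received = vec![0u8; len];
    reader.read_exact(&mut received)?; // drain while the write proceeds

    writer_handle.join().expect("writer thread panicked")?;
    Ok(received)
}
```

If a buffer limit on the host really is the problem, this keeps the whole transfer streaming without any chunking protocol on the device side.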
Thanks for any advice on how to get this done!