DeweyOxberger
Member
So I have Teensy 3.0 code that is a simple serial echo:
// Teensy 3.0 sketch: echo back every byte received over USB serial.
uint8_t Buffer[512];  // receive buffer (was undeclared in the snippet)

void loop()
{
    uint16_t length = Serial.available();
    if (length > 0)
    {
        if (length > sizeof(Buffer)) length = sizeof(Buffer);  // avoid overflow
        Serial.readBytes((char *)Buffer, length);
        Serial.write(Buffer, length);
        Serial.send_now();  // flush the partial USB packet immediately
    }
}
On the PC side I track the total time of a loop that runs 100 times:
send a blob of data
receive the echoed blob back
I'm getting strange results that are much slower than I would expect. I'm hoping someone can shed some light on what is going on.
I was expecting to see it take about 200 ms for any blob size up to 19 * 64 bytes (in theory, anyway).
Instead I see something that makes me think there is some kind of packet count limiting that I haven't seen before:
(There are 100 transactions per test)
Somehow these first tests each send one packet and receive one packet in just 1 ms per transaction. (How does that work?)
16 byte payloads (1600 bytes sent, 1600 bytes received in 139 ms)
32 byte payloads (3200 bytes sent, 3200 bytes received in 101 ms)
48 byte payloads (4800 bytes sent, 4800 bytes received in 99 ms)
64 byte payloads (6400 bytes sent, 6400 bytes received in 101 ms)
Now it bumps to 2 ms per transaction
80 byte payloads (8000 bytes sent, 8000 bytes received in 200 ms)
96 byte payloads (9600 bytes sent, 9600 bytes received in 200 ms)
256 byte payloads (25600 bytes sent, 25600 bytes received in 201 ms)
Then it works its way up to 3 ms per transaction - I was expecting 2 ms each up to about 1 KB per transaction.
272 byte payloads (27200 bytes sent, 27200 bytes received in 248 ms)
288 byte payloads (28800 bytes sent, 28800 bytes received in 297 ms)
304 byte payloads (30400 bytes sent, 30400 bytes received in 299 ms)
640 byte payloads (64000 bytes sent, 64000 bytes received in 302 ms)
Then it jumps to 4 ms per transaction.
656 byte payloads (65600 bytes sent, 65600 bytes received in 399 ms)
672 byte payloads (67200 bytes sent, 67200 bytes received in 400 ms)
and so on.
Did I miss something here? Why is the time going up in discrete steps well below the packet limit? I thought I had done this sort of thing with other microcontrollers and it worked fine (SiLabs and MSP430 are the two most recent; both showed about 1 ms out and 1 ms back, with 9-10 packets per transaction working fine before the time bumps up).
Any ideas?