Teensy 3.1 data transmission problems: stops receiving when sending too much

I'm trying to create a data transmission system between computers using lasers. To debug it I'm currently using wires instead of lasers.

The protocol is programmed in Java; all the microcontroller has to do is forward the data from Serial to Serial1 and the data from Serial1 to Serial.

I previously used two Arduino Micros, which worked fine except that they could only reach speeds of up to 30 kilobytes per second. Because of that I bought two Teensy 3.1s, and now I'm facing a problem with them.

When I transmit a lot of data from the computer to the Teensy, the computer won't receive any bytes from the Teensy anymore. It only happens when I send non-stop data to the Teensy. When I send 4000 bytes and then stop sending for about 100 ms or so, it will somewhat work again (it will still miss some received data during the sending of those 4000 bytes).

I don't quite know why it does this; to me it looks like Serial and Serial1 are using the same buffers, and that might be causing this problem.

Here is my Teensy code. I have two versions: I usually use the buffered one for performance reasons, and the one-byte-at-a-time version for debugging. During debugging I also turned the baud rate down a bit, to be sure that isn't causing any problems.

Buffered version:
Code:
#define SERIAL_BAUD_RATE 115200

#define BUFFER_SIZE 256


char buf[BUFFER_SIZE];


void setup()
{
  Serial.begin( SERIAL_BAUD_RATE );
  Serial1.begin( SERIAL_BAUD_RATE );
}


void loop()
{
  int count;
  
  // Forward anything waiting on USB Serial out over the UART (Serial1)
  count = Serial.available();
  if( count > 0 )
  {
    if( count > BUFFER_SIZE )
    {
      count = BUFFER_SIZE;  // never read more than buf can hold
    }
    Serial.readBytes( buf, count );
    Serial1.write( (uint8_t*)buf, count );
  }
  
  // Forward anything waiting on the UART (Serial1) back over USB Serial
  count = Serial1.available();
  if( count > 0 )
  {
    if( count > BUFFER_SIZE )
    {
      count = BUFFER_SIZE;  // never read more than buf can hold
    }
    Serial1.readBytes( buf, count );
    Serial.write( (uint8_t*)buf, count );
  }
}

One byte at a time version:
Code:
#define SERIAL_BAUD_RATE 115200


void setup()
{
  Serial.begin( SERIAL_BAUD_RATE );
  Serial1.begin( SERIAL_BAUD_RATE );
}


void loop()
{
  if( Serial.available() )
    Serial1.write( Serial.read() );
  
  if( Serial1.available() )
    Serial.write( Serial1.read() );
}

You can download the Java application I've made here: https://dl.dropboxusercontent.com/u...nsmission/Data Transmission v1.10 - Debug.jar
Run it with this bat so you can see the debug information: https://dl.dropboxusercontent.com/u...nsmission/Data Transmission v1.10 - Debug.bat

You need to have two micro controllers connected with each other, the micro controllers can be connected to the same computer or to two different computers.

What you should notice is that everything goes fine (you can send text messages between the two, etc.) until you send a file. When you send a file, the receiver will still work properly, but the sender won't receive any bytes anymore. It will only send packets, not receive anything.
When I debugged the serial readBytes() function, it reads bytes under normal circumstances, but while a file is being sent it always returns null or 0 bytes on the sender's side.

The result on the sender's side is something like this: (--> is sending, <-- is receiving)
Code:
<-- PacketConnectionConfirm
<-- PacketConnectionConfirmRequest
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirmRequest
<-- PacketConnectionConfirm
<-- PacketConnectionConfirm
<-- PacketConnectionConfirm
write started
--> PacketTCPPartialPacketList
--> PacketTCPPartialPacket
--> PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
--> PacketTCPPartialPacket
--> PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
--> PacketTCPPartialPacket
--> PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
--> PacketTCPPartialPacket
--> PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
--> PacketTCPPartialPacketList
--> PacketTCPPartialPacket
--> PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest

The receiver's side works like normal and gives: (--> is sending, <-- is receiving)
Code:
--> PacketConnectionConfirm
--> PacketConnectionConfirmRequest
<-- PacketConnectionConfirm
<-- PacketConnectionConfirm
<-- PacketConnectionConfirm
<-- PacketConnectionConfirmRequest
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirmRequest
<-- PacketTCPPartialPacketList
read started
--> PacketTCPReceiveNotification
--> PacketTCPReceiveNotification
--> PacketTCPReceiveNotification
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
--> PacketConnectionConfirmRequest
<-- PacketTCPPartialPacket
--> PacketConnectionConfirmRequest
--> PacketTCPReceiveNotification
--> PacketTCPReceiveNotification
--> PacketTCPReceiveNotification
--> PacketConnectionConfirmRequest
<-- PacketTCPPartialPacket
<-- PacketConnectionConfirmRequest
<-- PacketConnectionConfirmRequest
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirm
--> PacketConnectionConfirmRequest
--> PacketTCPReceiveNotification
--> PacketTCPReceiveNotification

I hope someone can help me out here.
 
Almost all such data comms should/must have
a) flow control, either in hardware using RTS/CTS handshaking, or in software by one of many methods for the sender/receiver to agree on when it's OK to send more.
b) error detection and correction, often by putting a CRC8 or CRC16 in each packet.
In data links, errors are, as in so many things, a question not of "if" but of "how often".

Without (a), in any datalink, there will be overruns, lost data, etc.

A common method is for the receiving software to send an "ACK" coded packet to the sender to say that the data was received without error and a buffer is available for receiving up to X bytes, where X is either a given at design time or X is passed in the packets. Better to keep it simple and a fixed X.
Also, the sender should put a binary number, say 16 bits, in each transmitted packet. The number increments for each new packet sent, except for those sent to correct an error.
The receiving node can expect the number to advance. If the packet number says it's a duplicate of the last received, the receiver can ACK again and ignore the duplicate.
The packet numbering allows the sender/receiver to detect and correct lost packets. Commonly, the receiver does not ACK a packet that has an older sequence number than expected, or one with a CRC error. In that case, the sender times out waiting for the ACK and retransmits an identical packet (same sequence #), up to n times (retries).

And so forth.

These are commonplace methods and there are many variants.

With UART based links, there's the added problem of loss of frame sync (start/stop bits). The cure for this is one or two byte-times of no data flow.
 
Thanks for the info; I wish I had known this earlier, really. The way my protocol works is that it sends packets like UDP, except that the receiver also reports back which packets it has correctly received. The sender keeps track of the packets that have been sent and the ones that have been correctly received; any that haven't been confirmed after a certain time will be resent.

Anyway, this is not really the problem. The problem is that sending somehow interferes with receiving: when I send data, the receiver will receive all of it, but the receiver can't send anything back; any data the receiver sends won't be received by the sender.

How is this possible? The Arduino Micros didn't do this.
 
Connect the Teensy's UART port TX to RX (loopback). Then write a little program that sends data to itself, and reads it in a manner such that the interrupt handler's buffer cannot overflow. Prove that works, then with what you learned, apply the same concept of flow control to your program.
And you should ensure you have proper flow control design, plus error detection/correction with some sort of checksum or CRC. A checksum can be simply the sum of all the bytes mod 256, or XOR of all bytes, etc. Not as good as CRC but quick and easy.
 
Hmm, the only thing I'm not doing currently is flow control, but is it possible that reading a lot from Serial causes it to miss data from Serial1? If so, that's what I did wrong and what I need to fix.

Error detection and correction is all finished already (I've used CRC32 btw).

I wonder why it worked with the Micros, though. That's probably because the baud rate of Serial is the same as the baud rate of Serial1, so any data in would immediately go out. With the Teensys, the baud rate of Serial is as fast as the maximum USB speed (I think I've read that somewhere), whereas the Serial1 speed isn't.

Anyway, thanks for the help.
 
I wonder now tho, is there a way to let Serial and Serial1 use different buffers?

That way, writing a lot from the PC to the microcontroller (Serial's read buffer) wouldn't flood the buffer used for data coming from the laser receiver (Serial1's read buffer).

Right now it looks like those two buffers are shared, since when I write a lot from the PC to the microcontroller, it won't receive anything from the laser receiver anymore.
The Serial and Serial1 read buffers get flooded because the PC-to-microcontroller connection (the PC's Serial.write()) is way faster than the microcontroller's connection over the laser link (Serial1.write()).

So if they had different buffers, the problem would be fixed: the microcontroller could always read data from the laser receiver, no matter how full Serial's read buffer is.

So, is there a way to let Serial and Serial1 use different buffers?
 
No doubt each serial port interrupt handler already has its own unique buffer; that's always done.

Re flow control:
UDP does the same thing: if you overrun the receiver, successive UDP packets are discarded in the low-level driver.
On a processor like the Teensy, let's say the driver (interrupt handler) buffer is 256 bytes. When the application does not read as fast as the bytes arrive, the 256-byte buffer fills to max, and the driver begins to discard bytes/characters because there's nowhere to store them.

TCP doesn't do that because it has flow control in the TCP protocol itself.

With flow control, the trick is to tell the sender to stop sending BEFORE the buffer fills, i.e., when the buffer is getting too full. The hardware FIFOs come into play a bit, but they really just add a few more bytes to the driver's ring-buffer size. And not all UARTs on all Teensys have a FIFO; the ones on the Teensy 3 are 8 bytes, as I recall.
 
But how could sending a lot of data from the PC to the microcontroller (Serial) cause the buffer for the received laser data (Serial1) to overflow?
 