T4 UART Rx DMA - Disable the Rx FIFO!

Status
Not open for further replies.

Brooks

Well-known member
Just a heads-up: if you're going to DMA data from a T4 UART, disable the UART FIFO during setup!
Code:
    LPUART2_FIFO &= ~LPUART_FIFO_RXFE;  // disable the receive FIFO

Longer version:
I've returned to working on a Teensy-based controller for my Trossen Hexapod robot. The servos use a 1Mb async TTL interface, faster than I wanted to handle with interrupts. I had gotten DMA working with a T3.5, but life intervened...

Over the summer I laid out a T4.0 board with Neopixels (RGB LEDs are great!), AX-12 servo driver logic, and a bunch of other goodies. I carried forward my Neopixel DMA drivers from a prior project, and they came right up.

I used Paul's HardwareSerial code to do the heavy lifting of setting up the UART, and then modified the UART settings to suit. I struggled with the AX-12 driver for weeks! Using a 'scope I could see the polls going to the servo and responses coming back. The DMA logic never seemed to get the last byte (the checksum byte - something I wanted!).

This morning I was re-reading the RT1060 processor manual (again!) and stumbled across an obscure reference that made me wonder if I should turn off the FIFO. I turned it off and Hey Presto! it started working...

It's made my day!
 
Doesn't surprise me; I believe I needed to disable the FIFO to get SPI working with DMA as well...

Also, there is code in the recent HardwareSerial stuff such that when you ask for something like Serial1.available() and our software queue is empty, it looks at the hardware registers. Why? When data comes in on the RX pin, it goes into the FIFO, and only after one character time elapses will the hardware trigger an interrupt to read it in... again, a hardware feature to reduce the number of interrupts.
 
CPU/DMA race story

That reminds me of the strangest CPU/DMA race condition...

My project involves really fast comms with UART0 and UART1 on the Kinetis K64 / K60 / K66 (our locally designed hardware, but the K66 is in the Teensy 3.6 and made a good model before our design was ready to test). So I enable the FIFO and make the UART RxIDLE interrupt the highest priority in the system to catch packet boundaries reliably. The ISR halts DMA, switches to the next buffer, and re-enables DMA. Straightforward, or so I thought.

During long-term stability testing, we saw packets that were one byte short followed by packets that were one byte long. After a very, very difficult period of diagnosing the issue, we discovered that the CPU had outrun the DMA controller: the trailing byte of packet N was left in the FIFO to become an extra leading byte of packet N+1. Simply adding a check for an empty FIFO in the ISR cleared that nasty issue.

I have not studied the 1060 series, but I wonder whether its LPUARTs have a related issue.

I've been doing embedded comms for 35 years and never had a CPU outrun a DMA before...
 
My guess is that it was not outrunning the DMA, but that configuration settings may be in conflict with doing DMA...

That is, in many cases the UART is configured so that when a condition is met, the DMA operation and/or interrupt will happen...

In the case of the T4 (and now T4.1), my first fix for not getting the last data on an RX was not exactly what @Brooks did (disabling the FIFO); instead, what I did was to set the
RXWATER field of the WATER register to 0, which says: if the count goes > 0, do the interrupt or DMA... That still allows stuff to go into the FIFO, but it will trigger fast (and therefore often). Which, again, depending on your needs, can be good or bad.

EDIT: Forgot to mention that the T3.x also has watermark settings; I believe these are split across a couple of registers, RWFIFO and TWFIFO...
 
You make a good point (and it was exactly what I thought during the time we struggled with the byte-slip issue), so I went back to examine the settings we use. We do set RXWATER to 1.

Receive Watermark

When the number of datawords in the receive FIFO/buffer is equal to or greater than the value in this register field, an interrupt via S1[RDRF] or a DMA request via C5[RDMAS] is generated as determined by C5[RDMAS] and C2[RIE]. For proper operation, the value in RXWATER must be set to be less than the receive FIFO/buffer size as indicated by PFIFO[RXFIFOSIZE] and PFIFO[RXFE] and must be greater than 0.

It's also worth mentioning that the slipped-byte phenomenon only occurred about once in 10,000 to 50,000 300-byte packets. We are running the UARTs at the maximum configurable baud rate. Knowing that we were pushing the envelope, we checked everything that made sense. After exhausting all of the other alternatives, trying the FIFO-clearance check resolved the issue. Since then, packet boundary detection has worked perfectly, and we've gone on to work on other parts of the project.

We are running 8 DMA channels, 4 of which are dedicated to the UARTs, as well as running the Ethernet MAC pretty hard, though it has its own DMA support. We looked into crossbar settings as well as DMA channel priority settings, none of which changed the failure statistics in a visible way.

UART0 and UART1 are clocked by the system clock (we run 150 MHz), with a baud rate divisor of "1" for a 9.375 Mbit/second bit rate. The DMA controller is getting a real workout, and I'm guessing that occasionally it might take a few extra clocks to perform some internal state management.

Were it not for tight schedule constraints, this would be an interesting challenge. As it was, it was just a very very hard grind with many sleepless nights before we arrived at the solution.
 