@sicco First, thanks for engaging. Concurrency is hard to get right and I appreciate the discussion. I'm open to being shown new details or flaws in my thinking that I haven't considered. Here's my view of the system; please pick it apart, because maybe there's something I'm missing. I'll describe my train of thought in a "think out loud" manner.
There are three layers of buffering.
The first is where DMA interacts with the `rx_ring` buffers declared in lwip_t41.c. (Note that I'll be referencing the IMXRT1060 Manual, Rev. 3.) The Ethernet hardware is supplied with some external memory that's expected to contain instances of "enhanced buffer descriptors" (as opposed to "legacy buffer descriptors", because I'm enabling interrupts and also IEEE 1588 support). (41.3.14.1 Enhanced receive buffer descriptor, page 2124.)
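To make the descriptor layout concrete, here's a rough sketch of the fields that matter for this discussion. The names and the abridged layout are mine for illustration only; the manual section above is the authority, and the real struct in lwip_t41.c has more fields (the enhanced words for checksum, protocol type, and the IEEE 1588 timestamp are omitted here):

```c
#include <stdint.h>

// Simplified sketch of an enhanced RX buffer descriptor (illustrative names,
// not necessarily the ones used in lwip_t41.c).
typedef struct {
  uint16_t length;  // Data length, written by the MAC when it fills the buffer
  uint16_t flags;   // Control/status bits, including "Empty"
  void    *buffer;  // Pointer to the actual receive data buffer
  // ... enhanced fields follow in the real 32-byte descriptor
} rx_bd_sketch_t;

#define BD_RX_EMPTY 0x8000u  // E: buffer is released to the MAC for filling
#define BD_RX_WRAP  0x2000u  // W: last descriptor in the ring; wrap to start
#define BD_RX_LAST  0x0800u  // L: this buffer is the last one of the frame
```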
Initially, all buffers (there are five of them in the code, see `RX_SIZE`) have the "Empty" flag set. When a packet comes in, the hardware fills one or more of these buffers, marks their "Empty" flags as false, and then generates an interrupt. In `enet_isr()`, a flag is set indicating there's data. When `Ethernet.loop()` is next called, that flag is checked, and if there's data, each non-empty buffer is first copied into an lwIP pbuf (see `enet_rx_next()`; it calls `t41_low_level_input()` to translate the buffer into a pbuf), and then its "Empty" flag is set. pbufs live in one of the lwIP pools.
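Here's a condensed sketch of that receive path, mainly to show the ordering of the copy versus setting "Empty". `rx_ring` and `RX_SIZE` are the names from lwip_t41.c; `rx_next`, `rx_ready`, and the `BD_RX_EMPTY` mask come from the sketch above. The real code also handles ring wrap, cache maintenance, stats, and re-arming the receive descriptor ring:

```c
#include <stddef.h>
#include <stdint.h>
#include "lwip/netif.h"
#include "lwip/pbuf.h"

extern rx_bd_sketch_t rx_ring[RX_SIZE];  // the DMA descriptor ring
static volatile uint32_t rx_ready = 0;   // set by the ISR, consumed by loop()
static size_t rx_next = 0;               // next descriptor to service

void enet_isr_sketch(void) {
  // The real enet_isr() also acknowledges the interrupt; the important part
  // here is that it only records that descriptors may need servicing.
  rx_ready = 1;
}

void rx_poll_sketch(struct netif *netif) {
  if (!rx_ready) {
    return;
  }
  rx_ready = 0;

  // Hardware-owned descriptors have "Empty" set; anything with it cleared
  // holds a received frame waiting for us.
  while ((rx_ring[rx_next].flags & BD_RX_EMPTY) == 0) {
    // 1. Copy the frame out of the DMA buffer into a pool pbuf
    //    (roughly what t41_low_level_input() does).
    struct pbuf *p = pbuf_alloc(PBUF_RAW, rx_ring[rx_next].length, PBUF_POOL);
    if (p != NULL) {
      pbuf_take(p, rx_ring[rx_next].buffer, rx_ring[rx_next].length);
    }

    // 2. Only after the copy, hand the buffer back to the hardware.
    rx_ring[rx_next].flags |= BD_RX_EMPTY;
    rx_next = (rx_next + 1) % RX_SIZE;

    // 3. Feed the pbuf into lwIP -- still from loop(), never from the ISR.
    if (p != NULL && netif->input(p, netif) != ERR_OK) {
      pbuf_free(p);
    }
  }
}
```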
The following detail is important: if my assumption below is true, then new packets can't be stored if there are no "Empty" RX buffers, and I set the "Empty" flag only after I've copied the data into a pbuf. Also, that copy ultimately happens from the main `loop()` and not from an ISR.
An assumption I'm making: when no RX buffers have the "Empty" flag set, the incoming frame is dropped. If anyone knows for sure that this is the case, could you please reply here and explain how you know? I haven't found this detail in the docs.
So far we have "new packets get dropped if full" behaviour, but I'm not sure I can control that.
The second buffering layer is those pbufs, one pbuf per RX buffer. This is where the raw incoming DMA-transferred data gets processed. If there's no more pbuf space, then the "Empty" flag for the RX buffer is set anyway, clearing room for new packets to arrive. This is "old packets get dropped if full" behaviour. I suppose I could change this so that the RX buffer, if there's no space in a pbuf for it, does not have its "Empty" flag set; that would change the behaviour to "new packets get dropped if full."
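That change would be localized to the copy step. Here's a sketch of the two policies at that boundary; the `RX_DROP_NEW_IF_FULL` define is made up purely for illustration and doesn't exist anywhere:

```c
// Fragment of the loop from the earlier sketch, where a descriptor is
// turned into a pbuf.
struct pbuf *p = pbuf_alloc(PBUF_RAW, rx_ring[rx_next].length, PBUF_POOL);
if (p == NULL) {
#if RX_DROP_NEW_IF_FULL
  // "New packets get dropped if full": leave "Empty" cleared so this frame
  // stays parked in its RX buffer and gets retried later; once no "Empty"
  // descriptors remain, the hardware drops incoming frames (assuming my
  // assumption above holds).
  return;
#else
  // Current behaviour, "old packets get dropped if full": give the buffer
  // back to the hardware even though this frame was never delivered to lwIP.
  rx_ring[rx_next].flags |= BD_RX_EMPTY;
  rx_next = (rx_next + 1) % RX_SIZE;
  return;
#endif
}
```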
For UDP, the third buffering layer is that circular buffer. The current behaviour is to "drop old packets if full." This is also done ultimately from the main `loop()` function and not from an ISR.
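To make "drop old packets if full" at that layer concrete, here's a minimal standalone sketch of a ring that overwrites the oldest entry when full. It is not the library's actual buffer, just the shape of the policy:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Packet {
  std::vector<uint8_t> data;  // one received UDP payload
};

template <size_t N>
class PacketRing {
 public:
  // Push a packet; if the ring is full, the oldest packet is overwritten.
  void push(Packet &&p) {
    buf_[head_] = std::move(p);
    head_ = (head_ + 1) % N;
    if (size_ < N) {
      ++size_;
    } else {
      tail_ = head_;  // the slot just written held the oldest packet
    }
  }

  // Pop the oldest packet; returns false if the ring is empty.
  bool pop(Packet &out) {
    if (size_ == 0) return false;
    out = std::move(buf_[tail_]);
    tail_ = (tail_ + 1) % N;
    --size_;
    return true;
  }

 private:
  std::array<Packet, N> buf_{};
  size_t head_ = 0, tail_ = 0, size_ = 0;
};
```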
In summary, the UDP queueing buffers packets from the pbufs and not from the raw RX buffers written to by the hardware.
One of the reasons I've chosen "old packets get dropped if full" behaviour is that I believe it works better for "realer"-time streaming applications, where congestion won't result in starvation. It was primarily a gut choice.
I'd love your opinion (or that of anyone who has experience with this) on these questions:
1. Should I provide a way to change the RX buffer->pbuf transfer behaviour to "new packets get dropped if full"? I'd probably do it with a define or something.
2. Should I provide a way to change the `EthernetUDP` queueing behaviour to "new packets get dropped if full"? I'd either use a define or a `setIgnoreNewIfFull(flag)` function or something; a rough sketch of what that could look like is below.
If the answer to either of these questions is "yes", then why?
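Purely as an illustration of the knob question 2 has in mind (nothing here exists in the library yet; the name comes straight from the question):

```cpp
class EthernetUDP {
 public:
  // When true, an arriving packet is discarded if the queue is full
  // ("new packets get dropped if full"). When false -- the current
  // behaviour -- the oldest queued packet is overwritten instead.
  void setIgnoreNewIfFull(bool flag) { ignoreNewIfFull_ = flag; }

 private:
  bool ignoreNewIfFull_ = false;  // default keeps today's "drop old" policy
  // ... the packet queue, sockets, etc.
};
```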
What do you think? Does this clarify why I don't think there needs to be locking in `EthernetUDP`?