Interfacing to a high-throughput Ethernet sensor

jimmie

Well-known member
I have an Ethernet sensor which outputs about 4000 bytes per second continuously.

Can I read from this sensor using a Teensy 4.1? Reading and decision-making need to be done in real time.

Thanks in advance for your help.
 
Doesn't sound like a lot ... it depends on the process and protocol. Is the device just set to broadcast the data, and in how many messages of what type? Or is it queried some number of times per second to reply with the data?

Is the device info published for reference?
 
Thank you @defragster.

The device is a rotating laser which sweeps from 0 to 270 degrees at 150 Hz, with an angular resolution of 0.75 degrees. For every 0.75 degrees between 0 and 270, the sensor outputs the angle and its associated distance, so every scan has 360 distances. That works out to 54,000 readings per second.

The telegram description is very long and can be found here: http://cdn.sick.com/media/docs/7/27/927/telegram_listing_telegram_listing_ranging_sensors_lms1xx_lms5xx_tim2xx_tim5xx_tim7xx_lms1000_mrs1000_mrs6000_nav310_ld_oem15xx_ld_lrs36xx_lms4000_lrs4000_multiscan100_en_im0045927.pdf
 
On the phone I saw a multipass unit? Couldn't read the details well ... the phone said the download wasn't secure ... but I read it anyhow.

Seems it gets an IP::PORT and just spews data over UDP or TCP as selected?

150 Hz ... 150 passes per second of 360 readings would be 54,000 ... is the data two sets of two bytes each? That is way more than the 4,000 bytes/sec in p#1. Also, it seems the packets may carry time or other info - perhaps an index count?

Is each reading a unique message (54,000 per second?) or are they grouped?

Is each scan one elevation or does it pan up and down some number of elevations?
 
Thank you @defragster for taking the time to read the document.

You are correct, the data is way more than 4,000 bytes/sec. I estimated that number from a PC DLL implementation, but it was incorrect.

Each reading is a unique message. Basically, the sensor sweeps/scans (in 2D) 150 times a second and outputs the data for each scan as a TCP message. Each scan has a time stamp and 360 distances (a distance every 0.75 degrees from 0 to 270 degrees). The scan is 2D, so there is only a single level per scan. There can be up to two distances per 0.75-degree step, but that can be limited to just one in the sensor's firmware.
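
For a rough sense of scale, here is what one scan's worth of data could look like in memory, assuming 16-bit distances; this is only an illustrative container, not the actual SICK telegram layout (see the telegram listing linked above for that).
Code:
#include <stdint.h>

// Illustrative container for one 2D scan - not the SICK wire format.
struct Scan {
  uint32_t timestamp;        // per-scan time stamp from the telegram
  uint16_t distanceMm[360];  // one distance per 0.75 deg over 0..270 deg
};

// Rough raw payload rate, assuming 16-bit distances:
//   150 scans/s * 360 readings * 2 bytes = 108,000 bytes/s
// plus telegram framing, so well above the original 4,000 bytes/sec estimate.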
 
@shawn of github.com/ssilverman/QNEthernet (or others) might have an idea of usable throughput speed for handling such a barrage of messages with a T_4.1.

There are three wired T_4.1's on the desk here, but I haven't gone that direction in simple testing - so far just using the latest CHAT example @shawn added.

The CHAT example uses UDP; I just put a microsecond timer across sending 29 chat messages and it is showing:
Code:
11:54:30 17  5 2023 [28] 29 .send() took 253 us

So 253 us to send 29 messages, and they are received by another T_4.1 client - some quick math (29 messages / 253 us) suggests a T_4.1 can send roughly 114,600 UDP messages per second.
> Rounding up to 10 us per UDP send {290 us for 29 .send() calls} would still be 100,000 per second supported by the network and the T_4.1 here.

TCP messages are a bit different AFAIK ... the question would be whether the receiving T_4.1 could maintain the sustained receive rate and what it could do with all that data ... stuff it into RAM/PSRAM for processing?
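
For reference, a minimal sketch of that kind of timing measurement, using the standard Arduino UDP calls that QNEthernet supports plus elapsedMicros; the destination IP, port, and payload here are placeholders.
Code:
#include <QNEthernet.h>
using namespace qindesign::network;

EthernetUDP udp;
const IPAddress kDest(192, 168, 1, 255);  // placeholder destination/broadcast
const uint16_t kPort = 5000;              // placeholder port

void setup() {
  Serial.begin(115200);
  Ethernet.begin();  // DHCP
  udp.begin(kPort);
}

void loop() {
  static const char msg[] = "chat-style test payload";  // placeholder message
  elapsedMicros t;
  for (int i = 0; i < 29; i++) {
    udp.beginPacket(kDest, kPort);
    udp.write(reinterpret_cast<const uint8_t *>(msg), sizeof(msg) - 1);
    udp.endPacket();
  }
  Serial.printf("29 send() took %lu us\n", (unsigned long)t);
  delay(1000);
}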
 
Thank you! This is encouraging news.

The QNEthernet library is what we are using due to its stability.

I realize that TCP is a bit more "wordy" than UDP, but there is some hope in that we only need to read continuously for about 5 seconds at a time. So this, in combination with temporary RAM/PSRAM storage, may work ...
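
If it helps with sizing that temporary storage, here is the back-of-envelope arithmetic for a 5-second capture, assuming 16-bit distances and a T_4.1 with the optional PSRAM fitted (EXTMEM places the buffer there).
Code:
#include <Arduino.h>

// 150 scans/s * 5 s = 750 scans per capture window.
// 750 scans * 360 distances * 2 bytes = 540,000 bytes of distance data,
// which fits comfortably in an 8 MB PSRAM chip.
constexpr size_t kScansPerCapture = 150 * 5;
constexpr size_t kReadingsPerScan = 360;

EXTMEM static uint16_t captureBuf[kScansPerCapture][kReadingsPerScan];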
 
I've heard of lidar scanners outputting UDP. Why would one want to use TCP? If there's some bad data, the next frame will be fine. TCP is for validating that messages were delivered properly, like for banking. If a pixel is out, does it matter? It shouldn't, in a robust system. Not trying to be argumentative, but what's the motivation for sending lidar data via TCP?
 
Thank you @clinker8

I have no objections to UDP. I am just not sure if the option is available…
 
I noted in p#4, after skimming the manual, that it shows either TCP or UDP. If you can accept/identify a missing scan packet, then UDP would be easier to use and test, like the QNEthernet CHAT sample @shawn wrote. That's what led to the quick speed measurement here ...

And it would ideally be faster, with less clutter from waiting for the ACK confirmation return messages, given the Teensy may be near its limit as it is.
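
If UDP is used and an occasional missing scan is acceptable, spotting the loss could be as simple as watching a per-scan counter. This assumes the telegram carries an incrementing scan counter (the telegram listing above documents the actual fields); extractScanCounter() is a hypothetical placeholder for that parsing.
Code:
#include <Arduino.h>

// Hypothetical check for missed scans; the counter value would come from a
// placeholder parser such as extractScanCounter(packet).
void checkForMissedScans(uint32_t counter) {
  static uint32_t lastCounter = 0;
  static bool haveLast = false;
  if (haveLast && counter != lastCounter + 1) {
    Serial.printf("Missed %lu scan(s)\n",
                  (unsigned long)(counter - lastCounter - 1));
  }
  lastCounter = counter;
  haveLast = true;
}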
 
I wrote the attached code to interface to that sensor. The code is a modified example from the QNEthernet library.

As a way to check for consistent operation, I calculated the time between each update. It should be about 7 ms. However, as you can see from the attached trace, every 10 updates (or so) the code only updates after more than 100 ms, which means it is losing a lot of updates.

What could be causing that?

Thanks in advance for any help.
 

Attachments

  • output_2023-06-02_09-11-47.txt
    36.5 KB · Views: 15
  • QNtest.ino
    10.3 KB · Views: 20
  • tls_template.c
    690 bytes · Views: 14
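
For reference, the inter-update timing check described in that post can be done with something like elapsedMicros; this is only a generic sketch of the idea, not the attached QNtest.ino.
Code:
#include <Arduino.h>

// Measure the gap between completed scan updates. At 150 Hz it should be
// roughly 6.7 ms, so gaps of 100+ ms mean scans were lost or delayed.
elapsedMicros sinceLastScan;

void onScanReceived() {  // hypothetical hook, called once per parsed scan
  uint32_t gapUs = sinceLastScan;
  sinceLastScan = 0;
  if (gapUs > 20000) {  // arbitrary threshold: about 3 scan periods
    Serial.printf("Gap of %lu us since previous scan\n", (unsigned long)gapUs);
  }
}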
Using UDP, the default queue size had to be increased; that was done with:
Code:
// UDP port.
EthernetUDP udp(32);  // allocate a (##)-packet buffer for incoming messages

Looking at: github.com/ssilverman/QNEthernet/blob/master/README.md

It seems this 'EthernetClient client;' is not using UDP - perhaps this will help?:
Code:
setReceiveQueueSize(size): Sets the receive queue size. The minimum possible value is 1 and the default is 1. If a value of zero is used, it will default to 1. If the new size is smaller than the number of items in the queue then all the oldest frames will get dropped.

also noted as:
Code:
Raw frame receive buffering
Similar to UDP buffering, if raw frames come in at a faster rate than they are consumed, some may get dropped. To help mitigate this, the receive queue size can be adjusted with the EthernetFrame.setReceiveQueueSize(size) function. The default queue size is 1 and the minimum size is also 1 (if a zero is passed in then 1 will be used instead).
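
Putting those two notes together, here is a minimal sketch of the UDP-side receive setup, using the 32-packet queue that worked in the testing above; the port number is a placeholder for whatever the sensor streams to.
Code:
#include <QNEthernet.h>
using namespace qindesign::network;

// 32-packet receive queue so bursts of datagrams aren't dropped before
// loop() gets around to reading them.
EthernetUDP udp(32);

void setup() {
  Ethernet.begin();  // DHCP
  udp.begin(2112);   // placeholder local port for the sensor's UDP stream
}

void loop() {
  int size = udp.parsePacket();
  if (size > 0) {
    uint8_t buf[1500];
    udp.read(buf, min(size, (int)sizeof(buf)));  // copy it out, then parse/store
  }
}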
 
Hello @defragster,

Please forgive my newbie question. Should I be using an EthernetUDP instead of an EthernetClient?

If I continue to use the EthernetClient, how do I use EthernetFrame.setReceiveQueueSize(size)? I could not find any code examples.

Thanks again.
 
If the scanner can be set up to use UDP, then using that would be an option. It seems TCP messages are still being used?

As for how to use '.setReceiveQueueSize' - that just came up in a text search when I saw what the UDP option was called, and I'm not sure of its usage.

@shawn made the UDP BroadcastChatUDP.ino example played with here, and he pointed out the EthernetUDP udp(32); change that stopped message loss in the testing done here.

No examples in the library seem to use setReceiveQueueSize. Maybe hacking the line 77 code below to a larger number, e.g. 77: setReceiveQueueSize(32);, would show less loss and indicate it could lead to a solution.
If that doesn't help, then your code isn't using that underlying queue.
Code:
C:\T_Drive\tCode\libraries\QNEthernet\src\QNEthernetFrame.cpp:
   75  FLASHMEM EthernetFrameClass::EthernetFrameClass()
   76      : inBuf_(1) {
   77:   setReceiveQueueSize(1);
   78  }
   79  
   ..
  155  }
  156  
  157: void EthernetFrameClass::setReceiveQueueSize(size_t size) {
  158    if (size < 1) {
  159      size = 1;

C:\T_Drive\tCode\libraries\QNEthernet\src\QNEthernetFrame.h:
  130    // This disables interrupts while changing the queue so as not to interfere
  131    // with the receive function if called from an ISR.
  132:   void setReceiveQueueSize(size_t size);

Not sure how EthernetFrameClass is used/created/accessed to call that method.
 
Thank you @defragster.

I made the change, but there is no difference, so that code is probably not being called by mine.

Do you think using UDP may help?

Thanks.
 
...Do you think using UDP may help?
...

Seems like it should, yes. Per the notes above, it might reduce the overhead of TCP, and quick testing showed no problem sending 28 bytes in 10 us from one Teensy and receiving them fine on another.

Seems the send might take longer than the receive - the receive is certainly not slower, since with the (32) buffers noted above the receiver kept up without loss as fast as the sender could push messages out. But it depends on what is done to store or process the data on receive before the next message arrives.
 
`EthernetFrame` is for raw Ethernet frames (probably not what you want here), `EthernetUDP` is for IP-based UDP, and `EthernetClient` is for IP-based TCP.
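
In sketch form, just to illustrate the three options (not taken from the library docs):
Code:
#include <QNEthernet.h>
using namespace qindesign::network;

EthernetUDP udp;        // IP-based UDP: datagrams, no delivery guarantee
EthernetClient client;  // IP-based TCP: byte stream, acknowledged/retransmitted
// EthernetFrame is a ready-made global object for raw link-layer frames.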
 
Good to know. The hack showed it isn't, as was 'hoped', the same underlying buffer used for IP-based TCP.

Is there a solution to the lost/delayed TCP data in p#12? Or is it the nature of TCP that items need to be handled and acknowledged one at a time ... or ???
 