Ah - that may explain why it ran fine at first, right up until I thought I needed enhancements for something else. Well, I've changed the meaning of my own protocol bytes, so that's no longer a problem. Thanks anyway - it helps to understand!
But man, I'm still having trouble understanding the TX side deeply: when I lower my sending rate, everything runs smoothly without congestion and the receiving end keeps up without any backlog. But I need higher rates, and I'm sure the bus is not saturated, so it has to be a software runtime issue?!
I apologise, but I have some more questions:
- one of my applications runs fine without msg.seq=1, with 6 TX mailboxes to deliver into. But as soon as I set msg.seq=1 and send one frame, it locks up for almost a whole second before the 2nd frame is sent, and so on... same code, same receiver. So what could lead to this situation? Shouldn't mailbox depletion be decoupled from sequencing as long as there are enough TX mailboxes? (My setup is in the sketch after this list.)
- Where do I see or configure the TX transmit queue size? What is it, and how is it linked to the template parameters of the FlexCAN_T4 constructor? (I've put my current guess in the sketch after this list.)
- does write() complete only after it gets the ACK from the receiving node, or is it asynchronous?
- does the receive callback have to run to completion before the next call can be processed? Or can callbacks be "overrun", leading to consistency problems (overwritten volatiles) and, depending on the previous question, to congestion on the bus?
- Can0.events() doesn't seem to return anything? I thought it would be useful for understanding the queue capacity?!
- there is no timeout argument in this library (or not yet?) - so no control over retries or timeout parameters? Correct?
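To make the first two questions concrete, here is a minimal sketch of my current understanding. The mailbox numbers, ID, and baud rate are placeholders from my setup, and the comments mark what I'm assuming rather than what I know:

```cpp
#include <FlexCAN_T4.h>

// My assumption: the 2nd/3rd template parameters are the software
// RX and TX queue depths -- is that the "TX transmit queue size"?
FlexCAN_T4<CAN1, RX_SIZE_256, TX_SIZE_16> Can0;

void setup() {
  Can0.begin();
  Can0.setBaudRate(500000);        // placeholder; my real bus rate differs

  // Six dedicated TX mailboxes, as in my failing test case:
  for (int i = 10; i <= 15; i++) {
    Can0.setMB((FLEXCAN_MAILBOX)i, TX, EXT);
  }
}

void loop() {
  CAN_message_t msg;
  msg.id = 0x1ABCDE;               // example extended ID
  msg.flags.extended = 1;
  msg.len = 8;
  for (uint8_t i = 0; i < 8; i++) msg.buf[i] = i;

  msg.seq = 1;                     // with this line I see the ~1 s stall per frame;
                                   // without it, the same code runs smoothly
  Can0.write(msg);

  Can0.events();                   // my assumption: this services the queues
}
```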
Maybe it's my lack of know-how, or a gap in the documentation, but a few instructions would help anybody who runs into "congestion/locked/hanging" problems, especially when dealing with both sender and receiver code. Maybe there are some best practices on what to do and what not to do. My application is about real-time, low-latency, continuous small messages (either 8 frames with 1 byte each or 1 frame with 8 bytes, with extended IDs) from a master node to a client node. Bus arbitration/layer-1 priority is only needed when another type of message (another message ID) is on the same bus; if that happens, I'd like some control over priority. Those messages come every now and then, not at a constant rate. The reason I went for MBs was the convenience of sorting messages based on type (the message ID, really). But maybe all my application needed was a FIFO sending as fast as possible, with the receiver doing the "triage" in one FIFO callback (parsing based on message ID, etc. - all in software), as in the sketch below...
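This is roughly the FIFO-plus-software-triage variant I'm wondering about; the two IDs and the handler are hypothetical placeholders for my two message types:

```cpp
#include <FlexCAN_T4.h>

FlexCAN_T4<CAN1, RX_SIZE_256, TX_SIZE_16> Can0;

// Hypothetical message IDs standing in for my two types:
const uint32_t ID_FAST_DATA  = 0x1ABCDE; // the continuous low-latency stream
const uint32_t ID_OCCASIONAL = 0x1FF000; // the sporadic messages

void triage(const CAN_message_t &msg) {
  // All sorting done in software instead of per-type mailboxes:
  switch (msg.id) {
    case ID_FAST_DATA:  /* handle the continuous stream */  break;
    case ID_OCCASIONAL: /* handle the sporadic messages */  break;
    default: break;
  }
}

void setup() {
  Can0.begin();
  Can0.setBaudRate(500000);
  Can0.enableFIFO();
  Can0.enableFIFOInterrupt();
  Can0.onReceive(triage);  // one callback, "triage" by ID inside it
}

void loop() {
  Can0.events();           // keep the queues/callbacks serviced
}
```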
Could you spend a few minutes on this, please? I'm happy to pay for your time or donate something!
Thanks.