I have an electronic lead screw (ELS) system running on a Teensy 4.1 which has been in service for about a year and a half. It's been working amazingly well, far beyond my expectations.
I'm adding some features and finding that I'm writing to the ILI9341 too often. This typically happens during a fast feed on the lathe, and it's aggravated by the fact that the DRO scale has 1 µm resolution: every time the position changes (by 1 micron), the display is updated. It works great at lower speeds.
First, I'd like to know roughly how long a small text area takes to update. I only write to a small block on the screen, just to change the number. If you can suggest something to make this part more streamlined, I'd appreciate the advice. I call this in the background (the main loop), but Zval is updated by an encoder ISR.
C-like:
void updateZ()
{
  // Redraw only when the value has changed (zz == 0 forces the first draw)
  if ((Zval != oldZval) || (zz == 0))
  {
    uint16_t w, h;
    int16_t x1, y1;
    oldZval = Zval;

    String newstr = "XXXXXXXXXXXX";   // worst-case width string used to size the erase rectangle
    //tft.setFont(Arial_18_Bold);
    tft.setFont(DroidSansMono_18);
    tft.setTextColor(ILI9341_GREEN, thisGREY);
    tft.getTextBounds(newstr, cxgZ, cygZ, &x1, &y1, &w, &h);
    //Serial.printf("w = %i, h = %i\n", w, h);
    tft.fillRect(x1, y1 - 1, w, h + 1, thisGREY);   // erase the previous value

    tft.setTextDatum(TL_DATUM);
    tft.drawString("Z:", cxgZ, cygZ);
    x1 = tft.getCursorX(); y1 = tft.getCursorY();

    if (metric) {
      float zval = Zval * 25.4;       // Zval is held in inches; convert for the mm display
      if (zval > 0.0) {
        tft.drawString("+", x1, y1);
      }
      else if (zval < 0.0) {
        tft.drawString("-", x1, y1);
      }
      else if (zval == 0.0) {
        tft.drawString(" ", x1, y1);
      }
      x1 = tft.getCursorX(); y1 = tft.getCursorY();
      tft.drawFloat(fabs(zval), 3, x1, y1);
      x1 = tft.getCursorX(); y1 = tft.getCursorY();
      tft.drawString("mm", x1, y1);
      //Serial.printf("Zval = %+6.3f mm\n", fabs(zval));
    }
    else {
      if (Zval > 0.0) {
        tft.drawString("+", x1, y1);
      }
      else if (Zval < 0.0) {
        tft.drawString("-", x1, y1);
      }
      else if (Zval == 0.0) {
        tft.drawString(" ", x1, y1);
      }
      x1 = tft.getCursorX(); y1 = tft.getCursorY();
      tft.drawFloat(fabs(Zval), 4, x1, y1);
      x1 = tft.getCursorX(); y1 = tft.getCursorY();
      tft.drawString("in", x1, y1);
      //Serial.printf("Zval = %+6.4f in\n", Zval);
    }
  }
  zz = zz + 1;
}
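To put a number on my first question on my own hardware, I suppose I could just bracket the call with micros() and print the elapsed time; a rough sketch (untested, wrapper name made up, everything else as in the code above):

C-like:
void updateZTimed()
{
  // Only report when a redraw will actually happen
  bool willDraw = (Zval != oldZval) || (zz == 0);
  uint32_t t0 = micros();
  updateZ();
  uint32_t dt = micros() - t0;
  if (willDraw) {
    Serial.printf("updateZ took %lu us\n", (unsigned long)dt);
  }
}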
Second, I'm pursuing some sort of display rate limiting, but I'm currently thrashing at that. Since the display is updated in the main loop, I need a way to estimate the number of calls generated and make them appropriate to the varying speeds. I haven't implemented the code on the Teensy yet, but I modeled it in Python to get an idea of how to implement it at runtime from the estimated time between updates. I also calculated the number of updates per second if the display were updated every time the carriage moved a micron. It's RPM and feed rate dependent. The answer is given by

update rate [calls/s] = RPM/60 [rev/s] x encoder [pulses/rev] x N/D [steps/pulse] x stepsize [mm/step] x updates [calls/mm]

where RPM is measured by the ELS, the encoder count is known and fixed, N and D depend on the feed rate selected, and the step size and updates/mm are known and fixed.
For 400 RPM, an encoder of 4096 pulses/rev, N=150, D=8128 (for a feed rate of 0.2 mm/rev of the spindle), a step size of 2.65 µm, and 1000 updates/mm, I get an average rate of 1333.33 calls/sec. As a consequence, at this feed rate the display of the Z axis is blurry and there's banding. So on average, the call rate has to be toned down, a lot!
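Evaluating that expression at runtime is cheap, so the predicted rate itself could serve as a control signal; a sketch (the constant and function names are just placeholders for my real ones):

C-like:
// Predicted display call rate from current conditions (sketch, placeholder names)
const float ENCODER_PPR    = 4096.0f;   // spindle encoder pulses per rev
const float STEP_SIZE_MM   = 0.00265f;  // carriage travel per step, mm
const float UPDATES_PER_MM = 1000.0f;   // one display call per micron of travel

float predictedCallRate(float rpm, float N, float D)
{
  float stepsPerSec = (rpm / 60.0f) * ENCODER_PPR * (N / D);
  return stepsPerSec * STEP_SIZE_MM * UPDATES_PER_MM;   // calls per second
}
// predictedCallRate(400.0f, 150.0f, 8128.0f) comes out around 1.3k calls/sec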
From my logged data (courtesy of the great help I got here on data logging), the display update rate I calculate from the data is quite noisy, since I'm effectively differentiating: the estimate is the reciprocal of the time between updates. Nonetheless, I ran a script to see what it would look like if I implemented an alpha filter on the Teensy on that reciprocal (each update being Z changing by one micron). The raw logged data is grey and a mess; the colored traces are the outputs of simple alpha filters that would be easy to implement on a Teensy (I already have such a filter for RPM filtering). This graph is from data captured on my lathe yesterday. During this (not super fast) feed the display (the little box with the DRO value) couldn't keep up. The feed ran from about 52 s to 70 s. Although it looks complete at 67 seconds, it took another 5 seconds to come to a stop, within 3 µm of the stop point. The last 5 seconds seem to take forever...
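The filter itself is trivial; the Python model would translate to something like this on the Teensy (a sketch with made-up names, not tested yet):

C-like:
// Alpha-filtered estimate of the display call rate (sketch, made-up names).
// Called from wherever I detect that Z has moved by one micron.
float filteredRate = 0.0f;            // smoothed calls per second
uint32_t lastUpdateMicros = 0;
const float ALPHA = 0.05f;            // smaller = smoother, slower to respond

void noteDisplayCall()
{
  uint32_t now = micros();
  uint32_t dt = now - lastUpdateMicros;    // unsigned math handles wraparound
  lastUpdateMicros = now;
  if (dt == 0) return;                     // guard against divide by zero
  float instRate = 1.0e6f / (float)dt;     // reciprocal of time between updates
  filteredRate += ALPHA * (instRate - filteredRate);
}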


I want the position display update rate to be normal at low call rates. I'm willing to programmatically skip updates at fast feed rates, as long as they smoothly transition to "normal" at low rates. I need a signal to control the rate that's not too hard to compute. I think the alpha filter is overestimating the rates from 65.2 seconds onward; the carriage is just slowly creeping forward, slowing exponentially, as seen in the second graph.
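One simple scheme I'm considering (just a sketch of the idea, untested, wrapper name made up): enforce a minimum time between redraws. At slow feeds every 1 µm change still gets drawn as soon as it happens; at fast feeds the redraw rate simply saturates at the cap, so the transition between the two should be smooth by construction.

C-like:
// Throttled wrapper around the existing update (sketch, untested).
const uint32_t MIN_REDRAW_MS = 50;   // ~20 Hz cap, plenty for a readable DRO
elapsedMillis sinceRedraw;           // Teensy core convenience timer

void updateZThrottled()
{
  if (sinceRedraw >= MIN_REDRAW_MS) {
    if ((Zval != oldZval) || (zz == 0)) {
      sinceRedraw = 0;
      updateZ();                     // existing routine, unchanged
    }
  }
}

Since the check runs every pass through the main loop, the last change still gets drawn within MIN_REDRAW_MS of the carriage stopping, so the final DRO value shouldn't be left stale.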
Anyway, that's my question. Any ideas or suggestions would be greatly appreciated. I've probably gone into the weeds when a simple solution would suffice!