Help with servo smoothing

Paul, I fully understand and thank you for your code. I didn't post up my code as it is too long. Christoph has been very helpful and gave me an example which is nearly there, but I just cannot achieve my goal. I attach the relevant section of my code. The main purpose of this is to get acceleration and deceleration between two points. It is unidirectional at the moment. The code will receive target updates over Ethernet (I appreciate there is a problem with new updates coming in whilst the servo is moving). Also, I can't find a way to put my code in an attachment, sorry.



// (targetPos, currPos and the Servo object JMservo are declared elsewhere in the full sketch)
int Ramp = 20;
int MaxDelay = 10;
int MinDelay = 2;
int diffPos = 0;
int moveStart = 0;
int JMdelay = 0;
int dly = MaxDelay;            // step delay in ms, updated by CalcDelay()
boolean ServoMoving = false;

int CalcDelay(int i) {
  if (diffPos >= (2 * Ramp)) {            // LONG move: ramp up, constant speed, ramp down
    if (i <= diffPos - (MaxDelay - MinDelay)) { dly = max(MinDelay, MaxDelay - i); }
    else                                      { dly = min(dly + 1, MaxDelay); }
  }
  else {                                  // SHORT move (less than 2 * Ramp): linear ramp up and straight back down
    if (i <= max(diffPos / 2, diffPos - (MaxDelay - MinDelay))) { dly = max(MaxDelay - i, MinDelay); }
    else                                                        { dly = min(dly + 1, MaxDelay); }
  }
  return dly;
}

void loop() {
  CheckNewTarget();
  MoveServo();
}


void CheckNewTarget() {
  if (((targetPos - currPos) > 0) && (!ServoMoving)) {
    diffPos = targetPos - currPos;
    moveStart = currPos;
    Serial.print("###### new target specified is: "); Serial.print(targetPos);
    JMdelay = 0;
    ServoMoving = true;
  }
}


void MoveServo() {
  // static uint8_t JMdelay = 0; // static makes sure that these are not re-initialized to zero in every main loop.
  static elapsedMillis servoTimer;
  // Serial.print("time now: "); Serial.print(servoTimer); Serial.print(" JM delay: "); Serial.println(JMdelay);
  if ((targetPos > currPos) && (servoTimer >= JMdelay)) {

    JMdelay = CalcDelay(currPos - moveStart);

    Serial.print("Current position: "); Serial.print(currPos); Serial.print(" steps into move: "); Serial.print(currPos - moveStart); Serial.print(" JMdelay: "); Serial.println(JMdelay);
    servoTimer = 0;
    currPos += 1;
    // move servo to currPos
    JMservo.writeMicroseconds(currPos + 900);
    Serial.print("Servo moved to: "); Serial.print(currPos); Serial.print(" Target: "); Serial.println(targetPos);
    if (currPos == targetPos) {   // '==' comparison, not '=' assignment
      JMdelay = 0;                // we need this to properly restart the "loop" upon the next target update
      ServoMoving = false;
    }
  }
}
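
One way to handle the "new target arrives while the servo is still moving" case mentioned above is to latch Ethernet updates into a separate pending variable and only copy them into the move parameters between moves. A sketch along those lines (pendingTarget and OnEthernetTarget() are made-up names, not part of the posted code):

Code:
// Sketch: buffer incoming targets and only start a new move when idle.
// 'pendingTarget' and OnEthernetTarget() are hypothetical names.
int pendingTarget = 0;
boolean targetPending = false;

void OnEthernetTarget(int newTarget) {   // called from the Ethernet handling code
  pendingTarget = newTarget;
  targetPending = true;                  // note it, but don't touch targetPos mid-move
}

void CheckNewTarget() {
  if (targetPending && !ServoMoving) {
    targetPos = pendingTarget;           // latch only between moves
    targetPending = false;
    if (targetPos > currPos) {           // still unidirectional, as in the original
      diffPos = targetPos - currPos;
      moveStart = currPos;
      JMdelay = 0;
      ServoMoving = true;
    }
  }
}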
 
Posting code in code tags makes it readable and more usable. You can find an icon "#" for that on 'GO ADVANCED'. From Advanced you can also point to code files to upload them with the 'Paper clip' drop-down selector.
 
I might be missing something obvious here, but from all I see I got the impression that you are trying to GUESS the actual servo position:

if (servoActualPosition < servoTargetPosition) {
  servoActualPosition = servoActualPosition + 1;
} else {
  servoActualPosition = servoActualPosition - 1;
}

Now, if your servo is anything like normal hobby servos, then its movement is quite complex, and any guess has nothing to do with reality. For example, most servos begin moving really fast but then slow down near the target point, which covers at least your deceleration requirement but also makes all assumptions about the actual position invalid. Trying to use a guessed position to control servo movement is exactly what causes jerky motion.
Don't get me wrong, you might be able to achieve some approximation of smooth movement, but in order to achieve truly smooth motion you need to revise your approach.

You have several options here.

1. You can add mechanical inertial damping between the servo and the camera. Then you would simply tell the servo to go to the target position and wait. You still won't know when it reaches it, but at least it will be smooth.

2. You can use a smart motor controller with feedback (e.g. Roboteq) in closed-loop position mode. These controllers have tons of parameters and you can configure precise acceleration/deceleration limits. Again, you will tell the controller to go to the target position and wait. However, now you can actually read the real position back from the controller and know exactly when it reaches the target.

3. You can add position feedback to the servo shaft and use it to correct the servo command in your loop. However, if you are controlling the servo by a PWM signal, it will be tricky and not entirely smooth.

4. Finally, you can replace the servo (which is a position-controlled device) with a motor (which is a speed-controlled device) and implement your own PID controller. There are tons of PID examples on the web. If you do not want to change the existing mechanical design, you can open up your servo, remove the control board and attach wires straight to the motor and the feedback pot. There are inexpensive H-bridge breakouts that will work as a replacement for the removed control board.
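
A minimal sketch of what option 4 could look like, assuming a Teensy-style elapsedMillis timer, the feedback pot on A0, and a simple PWM-plus-direction H-bridge on pins 5 and 6; the pins, gains and feedback range are placeholders, not a tested implementation:

Code:
// Minimal PID position loop (sketch only; pins, gains and the 10-bit
// feedback range are assumptions and need adapting/tuning on real hardware).
const int FEEDBACK_PIN = A0;   // wiper of the feedback pot
const int PWM_PIN = 5;         // H-bridge PWM (speed)
const int DIR_PIN = 6;         // H-bridge direction

float Kp = 2.0, Ki = 0.0, Kd = 0.1;   // placeholder gains
int targetCounts = 512;               // desired position in ADC counts (0..1023)
float integral = 0, lastError = 0;
elapsedMillis pidTimer;

void setup() {
  pinMode(PWM_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

void loop() {
  if (pidTimer >= 10) {                               // ~100 Hz control loop
    float dt = pidTimer / 1000.0;
    pidTimer = 0;
    float error = targetCounts - analogRead(FEEDBACK_PIN);
    integral += error * dt;
    float derivative = (error - lastError) / dt;
    lastError = error;
    float output = Kp * error + Ki * integral + Kd * derivative;
    digitalWrite(DIR_PIN, output >= 0 ? HIGH : LOW);  // direction from sign of output
    analogWrite(PWM_PIN, constrain((int)fabsf(output), 0, 255));
  }
}

targetCounts could then be ramped with the same CalcDelay-style profile already posted, so the acceleration limit lives in the command rather than in the PID itself.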
 
I thank you for your comments, which are accurate. I am quite far into this project and have committed to the purchase of some high-torque hobby servos. My theory is that if I keep the shortest delay between steps just longer than the time the servo itself needs to move one step, then there is a good chance that it will "track" fairly accurately. The movement looks fairly decent on my workbench and I can see the pulse stretching on the scope. In theory, in an exaggerated situation where you increment the pulse width, say, every second and the servo moves really slowly, you always know where it is. Anyway, I will proceed with my current plan. Unfortunately I have to use real delays as I just cannot get my head around using elapsedMillis in this particular set-up, despite lots of helpful guidance from Christoph and Paul.
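
As a rough sanity check (using typical hobby-servo figures, not the actual hardware in this project): a spec of 0.15 s per 60 degrees is about 2.5 ms per degree, and with roughly 1000 us of pulse range spread over 90 to 180 degrees of travel, a single 1 us step is only about 0.1 to 0.2 degrees, i.e. roughly 0.25 to 0.5 ms of servo movement. With MinDelay at 2 ms, the commanded position should therefore never get more than a step or so ahead of the real shaft.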
thanks.
John
 
I did something similar. I measured the time to move various distances (it depends on weight and motor strength) and then created a large table of optimal values. It would immediately apply full motor power up to a certain point and then apply full reverse power. It was much faster than the PID that others tried, and it had no overshoot. It was also simple to code.
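
A sketch of that table-driven approach, with invented pin numbers and table values and a hypothetical setMotorPower() helper standing in for the real motor driver:

Code:
// Sketch of the table-driven "full forward, then full reverse" move described
// above. The pins and table values are invented for illustration; the real
// values came from measuring move times on the actual mechanism.
const int PWM_PIN = 5;   // H-bridge PWM input
const int DIR_PIN = 6;   // H-bridge direction input

struct MoveTiming { int distance; unsigned long forwardMs; unsigned long reverseMs; };

// Hypothetical measured values: how long to drive forward at full power,
// then how long to brake with full reverse power, for a given distance.
const MoveTiming timingTable[] = {
  {  50,  40, 15 },
  { 100,  70, 25 },
  { 200, 120, 40 },
};
const int tableSize = sizeof(timingTable) / sizeof(timingTable[0]);

void setMotorPower(int power) {                  // power: -255..255
  digitalWrite(DIR_PIN, power >= 0 ? HIGH : LOW);
  analogWrite(PWM_PIN, abs(power));
}

void timedMove(int distance) {
  // pick the first table entry that covers the requested distance
  const MoveTiming *t = &timingTable[tableSize - 1];
  for (int i = 0; i < tableSize; i++) {
    if (timingTable[i].distance >= distance) { t = &timingTable[i]; break; }
  }
  setMotorPower(255);        // full forward
  delay(t->forwardMs);       // blocking, as in the original description ("simple to code")
  setMotorPower(-255);       // full reverse to brake
  delay(t->reverseMs);
  setMotorPower(0);          // stop
}

void setup() {
  pinMode(PWM_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  timedMove(100);            // example: one 100-unit move at startup
}

void loop() {}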
 
I think John is not actually guessing the servo's actual position, but merely updating the input to the servo. So the variable "currentPos" should rather be named "currentCommandedPos" or something like that. However, there's still some confusion about the non-blocking delays and acceleration.

John, what's so uncomfortable about using a fixed delay? I still don't really get that, and using a sine curve to calculate the desired speed only makes things more complicated imho. I have read and understood your thought about having just one function instead of an accelerate -> constant speed -> decelerate scheme, but you seem to need a flexible delay to implement it. The sine will also have different slopes depending on the distance between the current position and the target, and thus will not limit acceleration.

My gut feeling is that this discussion will either end without a non-blocking solution, or with a solution someone has coded for you. If you need a different view on elapsedMillis, consider this simple LED blinking example:

Code:
void loop()
{
  static const uint16_t period = 500; // 1 Hz blinking
  static elapsedMillis timer = period; // initialize to period so that we can start toggling the pin immediately
  if (timer >= period)
  {
    static bool ledState = false;
    timer = 0;
    digitalWrite(LED_PIN, ledState);
    ledState = !ledState;
  }
  // CPU is free to do other stuff here
}

You'll have to complete this by writing a setup() function that actually configures LED_PIN, but I hope this helps understanding elapsedMillis.
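
The same pattern also carries over to the servo move once the fixed period becomes a variable that is recomputed each time the timer fires, for example from the CalcDelay() ramp posted above (sketch only; stepTowardsTarget() is a hypothetical stand-in for incrementing currPos and writing the servo):

Code:
void loop()
{
  static elapsedMillis stepTimer;
  static uint16_t nextDelay = 0;                   // 0 so the first step fires immediately
  if (ServoMoving && stepTimer >= nextDelay)
  {
    stepTimer = 0;
    stepTowardsTarget();                           // hypothetical: advance currPos by one and write the servo
    nextDelay = CalcDelay(currPos - moveStart);    // variable delay from the ramp profile
  }
  // CPU is free to do other stuff here
}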
 
Christoph,
as always, thank you for taking the time to respond. My more recent version of the code calculates a delay based on a ramp up - linear - ramp down model. I use a variable delay instead of variable pulse-width increments because in the linear section of the movement I want to take some accurate measurements based on position. Please don't think I am asking anyone to write a solution for me. I have been writing VB code for years and using threads to do this kind of stuff (as a hobby), but I suffer from some sort of mild dyslexia when it comes to dealing with loops whizzing round and delays within delays. I can cope with elapsedMillis on a one-off basis (and already have some implemented in this project) but cannot get to grips with this scenario where I have a for loop. Your suggested code came close and I thought I had cracked it, but in the meantime another loop came whizzing past and mopped up all my variables!!
thanks
John
 
Mopped up the variables? How so? Is there another loop updating the target and interfering with the servo control loop?
 
At last... now I've got it.
In my loop I had a global variable "targetPos" which got updated by various means. What I had to do was create an event when there was a call for a change and then update the variable "diffPos" for subsequent calculations. Stupid me!!
thank you for your patience.
Quick question, if you don't mind:
I have 5 inputs to monitor; 4 are low-res, like voltage and temperature, but I need a high-res (12-bit?) input as well.
If I use the ADC library for this one, will there be any conflicts? Can the high-res input be on any of the analogue input pins?
John
 
Glad you sorted it out, great! The obvious lessons: globals are not a good idea. Not protecting globals when they shouldn't be updated is even worse.

The ADC question should go into a separate thread, really. I don't think that the topic has attracted any ADC library experts, and my guess is that achieving 12 actually usable bits won't be easy...
 