I wanted to measure the execution time of the floating-point cos() routine using the following program:
The idea was to measure the on-time of pin 13 with a scope. The routine time3() is called between setting and clearing the LED. The strange thing is that I measure only 170 ns of on-time, which is far too short for the code to actually execute. If I make sum and d volatile, I measure a few ms, which seems plausible. But why does volatile have an influence here?
Could it be that the compiler optimizes in the following way: time3() does not depend on any external value, so computing the result once and caching it would be legal.
So the function is executed only once, and afterwards the cached result is reused?
Code:
int led = 13;

void setup() {
  pinMode(led, OUTPUT);
  delay(1000);
  Serial.begin(9600);
  Serial.print("Hello World ");
}

int32_t time3() {
  int32_t k;
  float sum, d;
  sum = 0.0;
  d = 0.0;
  for (k = 0; k < 1000; k++) {
    sum += cos(d);
    d += 0.00001;
  }
  return sum;
}

int32_t dummy;

void loop2() {
  while (1) {
    digitalWrite(led, HIGH);
    dummy = time3();
    digitalWrite(led, LOW);
    delay(5);
  }
}

void loop() {
  loop2();
}