What's the resolution of micros() on Teensy 3.1?

amundsen

Well-known member
On 16 MHz Arduino boards, the resolution is 4 µs. On 8 MHz boards, it is 8 µs.

Does it change according to the clock frequency on the Teensy 3.1? Where can I find the values?

Thank you in advance.
 

1 (one) µs.
No, it does not change.
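
A quick way to see this directly (a minimal sketch of my own, not from the original reply; it just prints the difference between two back-to-back readings):

Code:
void setup() {
  Serial.begin(9600);
}

void loop() {
  uint32_t a = micros();
  uint32_t b = micros();
  Serial.println(b - a);   // at 96 MHz this prints 0 or 1, i.e. 1 µs steps
  delay(500);
}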
 
micros() is fairly slow for a low-level clock, since it has to combine the number of millisecond ticks with the clock count within the current millisecond from the SysTick timer. On a 96 MHz Teensy 3.0 this loop:

Code:
    uint32_t starttime = micros(), looptime;
    for (int n = 0; n < 1000; n++)
      looptime = micros() - starttime;   // 1000 back-to-back calls to micros()

runs in 918 µs, that is around 0.9 µs per call to micros(), just barely a 1 µs effective resolution. With a 24 MHz clock the effective micros() resolution would then be around 4 µs.
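
For reference, the combination described above can be sketched roughly like this (a simplified illustration of the idea, not the actual Teensyduino source; the real micros(), quoted later in this thread, also checks for a pending SysTick interrupt):

Code:
uint32_t micros_sketch(void) {
  uint32_t ms = systick_millis_count;                // whole milliseconds, incremented by the SysTick ISR
  uint32_t cycles = (F_CPU / 1000 - 1) - SYST_CVR;   // CPU cycles into the current millisecond (SYST_CVR counts down)
  return ms * 1000 + cycles / (F_CPU / 1000000);     // whole ms in µs, plus the fraction of a ms in µs
}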
 
It might be of interest to implement a clock for microsecond and sub-microsecond intervals. This could be a free-running 32-bit timer (a PIT, or the Cortex cycle counter?) whose routines always read raw 32-bit timer values. For a wait, the wait count would be scaled from nanoseconds to timer counts, instead of converting from timer ticks to micro/nanoseconds; using 32-bit unsigned timer arithmetic handles timer rollover as long as the intervals are not too long.
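
A rough illustration of that idea (my own sketch, not existing code; it assumes the ARM_DWT_* cycle-counter macros from the Teensy 3.x core headers, and the names cycleCounterInit/waitNanoseconds are made up):

Code:
// Enable the Cortex-M4 DWT cycle counter once, e.g. from setup().
void cycleCounterInit() {
  ARM_DEMCR |= ARM_DEMCR_TRCENA;            // enable the debug/trace block
  ARM_DWT_CTRL |= ARM_DWT_CTRL_CYCCNTENA;   // start the free-running 32-bit cycle counter
}

// Busy-wait for 'ns' nanoseconds: scale nanoseconds to timer counts up front,
// then compare raw counter values. Unsigned 32-bit subtraction handles
// rollover as long as the interval stays well under 2^32 cycles (~44 s at 96 MHz).
void waitNanoseconds(uint32_t ns) {
  uint32_t target = (uint32_t)(((uint64_t)ns * (F_CPU / 1000000)) / 1000);
  uint32_t start = ARM_DWT_CYCCNT;
  while (ARM_DWT_CYCCNT - start < target) { }
}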
 
Earlier mlu reported 0.9 usec per call to micros() on a 96 MHz Teensy 3.0. I tried a similar test on a Teensy 3.1 (using Arduino IDE 1.8.4 and Teensyduino 1.38):

Code:
void setup() {
  Serial.begin(9600);   // USB serial; the baud rate is ignored on Teensy
}

void loop() {
  long looptime;
  long starttime = micros();

  for (int n = 0; n < 1000; n++)
    looptime = micros() - starttime;

  String s = String(looptime, DEC) + "\n";
  Serial.print(s);
  delay(500);
}

96 MHz: 333 to 337 usec
72 MHz: 444 to 448 usec
(The time scales inversely with processor speed, unsurprisingly.)

So that's almost 3 times as fast as reported by mlu for the 3.0.

The scope trace shows occasional breaks of 1 usec and 1.7 usec (at 72 MHz), presumably due to other interrupts or to micros() periodically taking a different execution path. This implies that code like the above is not guaranteed to see every microsecond value from micros(). And even though the resolution is better than 0.5 usec most of the time, sometimes it is only about 2 usec.

Another experiment:

Code:
const int ledPin = 13;   // built-in LED on Teensy 3.1
elapsedMicros elapsed;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  unsigned long next_elapsed = 0;
  cli();
  while (true) {
    while (elapsed < next_elapsed) { }   // busy-wait for the next microsecond boundary
    next_elapsed += 1;
    digitalWriteFast(ledPin, HIGH);   // set the LED on
    digitalWriteFast(ledPin, LOW);    // set the LED off
  }
}

At 72 MHz this produces a pulse every 1 usec, with occasional lapses of roughly 0.5 to 1.5 usec, but the total pulse count catches up within a few usec. The catch-up happens because next_elapsed is incremented by exactly 1 each iteration, while the loop body usually takes well under 1 usec.

So, the cli() didn't get rid of the jitter. On the other hand, I was surprised to see that this worked (at least for several tens of minutes) despite the cli(). I had expected some counter to roll over without being serviced, breaking elapsedMicros.
 

micros(), which is called by elapsedMicros, re-enables interrupts. So the timer interrupts still happen.

Code:
uint32_t micros(void)
{
	uint32_t count, current, istatus;

	__disable_irq();
	current = SYST_CVR;
	count = systick_millis_count;
	istatus = SCB_ICSR;	// bit 26 indicates if systick exception pending
	__enable_irq();		// <-- interrupts are re-enabled here
...
 