Are internal timers coded to take clock speed into account?

Can I just confirm that the IntervalTimer objects are set up to take varying clock speeds into account in their timing? If yes, that would presumably mean you get increased timing resolution at higher clock speeds. There would also be greater drift, due to hotter running, to take into account. The idea is pretty much obvious of course, but I thought I would check on that before making assumptions.

I have added heatsinking and I am getting a steady 58-60 deg C at 912MHz. I'm hoping that won't cause too much stress. Can anyone advise as to whether that is sustainable?
 
Assuming from context this is a T_4.x running at 912 MHz? And there's no indication of how much soldering or other added thermal mass is attached.

On the T_3.6 and prior Teensys, the CPU speed was derived from common clocks, and changing it changed most of them.

The T_4.x processor has clocks generally independent of the one driving the processor speed, which is why it can be changed at runtime. So generally the clocking for non-CPU/memory operations is independent and fixed, based on the feature in use.

At a glance that temperature seems to be at the upper end for continued function, and an MCU run beyond spec speed at that temperature is not likely to have a normal, full life.
> An NXP life curve versus temperature was posted some months back; finding that for reference will indicate the expected loss of lifetime at that temperature.

Heat sinks help by adding surface area, but they saturate in still/warm air. If a fan could move cooling ambient air over the unit, it would probably drop ~5 degrees and last longer.

The only Teensy ever lost here was a T_4.0 plugged in after another T_4.0 that had a heat sink. The new one did not have one, the IDE was still absentmindedly set at 916 MHz or so, and that unit melted inside within a couple of hours before I noticed. It was warning me with restarts/hangs, but those were wrongly assumed to come from a new hardware/library issue.
 
Take a look at the clock tree in the reference manual.
On a clock change, only PLL1 changes its frequency. Besides the ARM core clock, IPG is connected to this clock. Everything else is not influenced.
If you re-initialize the things connected to IPG after changing the clock, those will behave correctly too.
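As a hedged illustration of changing the CPU clock at runtime: the Teensyduino core for the T_4.x exposes `set_arm_clock()` and the `F_CPU_ACTUAL` variable (both are core names, not something from this thread; the 396 MHz value and the re-init comment below are my own assumptions):

```cpp
#include <Arduino.h>

// Provided by the Teensy 4.x core (clockspeed.c); returns the actual frequency set.
extern "C" uint32_t set_arm_clock(uint32_t frequency);

void setup() {
  Serial.begin(115200);
}

void loop() {
  set_arm_clock(396000000);   // drop the CPU to 396 MHz at runtime
  Serial.printf("F_CPU_ACTUAL = %lu\n", (unsigned long)F_CPU_ACTUAL);
  // Anything whose prescalers were derived from the old ARM/IPG clocks
  // should be re-initialized here so its timing matches the new clock.
  delay(5000);
  set_arm_clock(600000000);   // back to the stock 600 MHz
  delay(5000);
}
```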
 
Can I just confirm that the IntervalTimer objects are set up to take varying clock speeds into account in their timing?

By default, the timers used by IntervalTimer run directly from the 24 MHz crystal clock.


If yes that would presumably mean that you get increased timing resolution at higher clock speeds.

No. The 24 MHz clock they use is fixed at 24 MHz, regardless of how the rest of the chip runs. Well, except in the deepest sleep modes, where the crystal oscillator shuts off completely.


There would also be greater drift due to hotter running to take into account too.

It is an ordinary (not temperature compensated) good-quality crystal, so frequency drift of around 30 ppm can be expected.


I have added heatsinking and I am getting a steady 58-60 deg C at 912MHz. I'm hoping that won't cause too much stress. Can anyone advise as to whether that is sustainable?

If you're measuring outside the chip, you really should make use of the on-chip temperature sensor.

Even if you keep the temperature reasonable, a 912 MHz overclock uses a higher CPU core voltage, which is expected to reduce the lifespan of the chip.
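The on-chip sensor mentioned above is exposed by the Teensyduino core as `tempmonGetTemp()` (the same call the original poster reports using later in the thread). A minimal sketch to log the die temperature:

```cpp
#include <Arduino.h>

void setup() {
  Serial.begin(115200);
}

void loop() {
  // tempmonGetTemp() reads the i.MX RT1062's internal temperature monitor
  float dieTempC = tempmonGetTemp();
  Serial.printf("die temperature: %.1f C\n", dieTempC);
  delay(1000);
}
```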
 
Thanks for all the info guys. It seems pretty plain how I'm sitting now. I should have given you a little more info in the first post but these things slip.

Yes, it is a T4.0 and I found the curve you were talking about. I've attached it again here. To me there seems to be no problem around the levels I was seeing given my measurement setup here. The heatsinking is simple, just a small 9x9x12mm 5 fin aluminium sink permanently thinly "thermal plastered" to the main chip. The temperatures I was quoting were reported by the system itself using tempmonGetTemp() within my own program as it was running. I would expect internal temperatures to be much higher if I were measuring from the external heatsink. I'm assuming the internal reporting via the software is much closer to the actual truth.

The setup with the clocks was to be expected too. Your advice that the timer base clock does not vary with CPU speed rules out any improvements possible in that direction. I'm back to 600MHz as there is no reason in the other functionality I need for increased speed.

So why does resolution mean so much? The application itself is just an ultra-simple variable pulse delay of up to a max of 1 sec, once every 5 secs or longer. The idea is to push the accuracy as far as I can, aiming at a ballpark 20 nsec resolution (stability is another matter), to work with other gear which achieves that easily. That is of course unachievable with these units, but getting as close as I can without ridiculous complexity or expense should suffice. It is to be used in the bench testing of nautical seismic survey control/recording software and its support equipment. Those accuracies are demanded and met by real-life working gear, and this item takes the place of one of those extreme-accuracy units. We don't need all-round ultimate accuracy for test purposes, but we need a repeatable "exactly approximate" delay, i.e. only very close to what we ask it for, but knowing exactly what it is really giving.

I have this working using digitalPinToInterrupt() to capture the input pulse and start an IntervalTimer, which simply kicks the output pin high on completion. Other than immediately preventing multiple input captures by detaching the interrupt (maybe overkill, but it works) and setting an indicator LED, all other action is locked out during the timing phase. I did consider using the Bounce library but was worried there would be additional time-delay penalties which may not be constant. With the overall simplicity, and the fact that any inherent lost cycles should be constant for every run (I hope), I can trim it for counting accuracy.
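A hedged reconstruction of the scheme described above, for anyone following along (pin numbers, the RISING trigger edge, and the 500 ms default delay are my assumptions; the one-shot behaviour comes from calling `end()` inside the timer callback):

```cpp
#include <Arduino.h>

const int inputPin  = 2;            // assumed: trigger input pin
const int outputPin = 3;            // assumed: delayed pulse output pin
const int ledPin    = LED_BUILTIN;  // indicator: timing in progress

IntervalTimer delayTimer;
volatile uint32_t delay_us = 500000;  // requested delay in microseconds, up to 1 s

void onTrigger();  // forward declaration

void timerDone() {
  delayTimer.end();                    // one-shot: stop the timer first
  digitalWriteFast(outputPin, HIGH);   // kick the output high on completion
  digitalWriteFast(ledPin, LOW);
  // Re-arm for the next input pulse
  attachInterrupt(digitalPinToInterrupt(inputPin), onTrigger, RISING);
}

void onTrigger() {
  // Lock out further captures for the whole timing phase
  detachInterrupt(digitalPinToInterrupt(inputPin));
  digitalWriteFast(outputPin, LOW);
  digitalWriteFast(ledPin, HIGH);
  delayTimer.begin(timerDone, delay_us);
}

void setup() {
  pinMode(outputPin, OUTPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(inputPin, INPUT);
  attachInterrupt(digitalPinToInterrupt(inputPin), onTrigger, RISING);
}

void loop() { }
```

Since the PIT timers behind IntervalTimer run from the fixed 24 MHz clock, the achievable delay granularity here is one ~41.7 ns tick, with any constant interrupt-entry latency trimmable as the poster describes.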
 

Attachments

  • Teensy_Life_Curve.png (103.7 KB)