Forum Rule: Always post complete source code & details to reproduce any issue!

# Thread: Exponential Moving Average - How does it work?

1. Originally Posted by jonr Be careful when using 16 bits - your input sample range is limited. For example, -1024 to 1023 (??) for the code in #21. Some comments or checks would be a good idea.
Agreed, I added a check and more comments on the page: https://tttapa.github.io/Pages/Mathe...implementation

You can now verify the range:
Code:
```EMA<5, int_fast16_t> filter;

static_assert(filter.supports_range(-1024, 1023),
"use a wider state or input type, or a smaller shift factor");``` Originally Posted by WMXZ if data are 16 bit, then IMO the state variable (effectively sample<<K) should be declared as 32 bit
As mentioned by Nominal Animal, a 32-bit state is much slower on an 8-bit MCU, and since a common use case is filtering 10-bit analogRead values with a relatively small K, a 16-bit state is preferable.
Originally Posted by Nominal Animal Note: It looks like the state has to be an exact-width type (so uintN_t and not uint_fastN_t), as any extra bits will mess up the modulo arithmetic.
I don't think this is an issue, especially if you explicitly cast the input to state_t before adding it. Even if you don't cast it, you're probably fine thanks to the usual arithmetic conversions:
[...]
Otherwise, the operand has integer type and integral conversions are applied to produce the common type, as follows:
- If both operands are signed or both are unsigned, the operand with lesser conversion rank is converted to the operand with the greater integer conversion rank
- Otherwise, if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
- Otherwise, if the signed operand's type can represent all values of the unsigned operand, the unsigned operand is converted to the signed operand's type
- Otherwise, both operands are converted to the unsigned counterpart of the signed operand's type.

The conversion rank above increases in order bool, signed char, short, int, long, long long. The rank of any unsigned type is equal to the rank of the corresponding signed type.
Since the rank of the state type is never less than the rank of the input type, and since the state type is unsigned, the input will be converted to the unsigned state type before performing the addition.
The conversion from a signed integer to a wider unsigned integer effectively sign-extends the value to the full size of the unsigned type, because the conversion is defined modulo 2ⁿ:
If the destination type is unsigned, the resulting value is the smallest unsigned value equal to the source value modulo 2ⁿ where n is the number of bits used to represent the destination type.
Once the type is converted, the C and C++ standards guarantee 2ⁿ modular arithmetic for unsigned types, so it's fine.

The rules apply to the number of bits used to represent the type, which could be 32, for uint_fast16_t on ARM, for example.
uint_fast16_t doesn't guarantee arithmetic modulo 2¹⁶, but it does guarantee arithmetic modulo some power of two.

2. Originally Posted by PieterP As mentioned by Nominal Animal, a 32-bit state is much slower on an 8-bit MCU, and since a common use is to filter 10-bit analogRead values with a relatively small K, a 16-bit state is preferable.
This statement is far too application- and hardware-oriented (low-quality ADC and 8-bit MCU).
The EMA is not only used to filter IMU accelerometers and other 10-bit data; it is also used in sound signal detection. There you can easily have mean levels of 8 bits and need a K of 10, so 16 bits are too little.

3. Originally Posted by WMXZ This statement is far too application- and hardware-oriented (low-quality ADC and 8-bit MCU).
The EMA is not only used to filter IMU accelerometers and other 10-bit data; it is also used in sound signal detection. There you can easily have mean levels of 8 bits and need a K of 10, so 16 bits are too little.
For my purposes, and as a beginner-friendly Arduino/Teensy sketch, using the same size for the state as for the input is a perfectly sensible default.
You can still supply your own types as a template parameter if you don't want to use the default. You could even change the default if you feel like that's the right thing to do for your application.

From a practical point of view, having a default state type that's larger than the input type requires some extra template boilerplate to map one type to a larger type. It also cannot be done consistently: what should the state type be if the input is uint64_t?

I expect users working with audio to know about data types and overflow, and I expect users to read the documentation, which clearly states the valid input range and gives two examples of how to compute it, as well as a compile-time check.
If a user ignores the documentation and removes the range checks, that's on them.

4. Originally Posted by WMXZ This statement far too much application and hardware oriented (low quality ADC and 8-bit MCU).
EMA is not only used to filter IMU accelerators and other 10bit data, but is used in sound signal detection. Here you can easily have mean levels of 8 bits and need a K of 10, so 16 bits are too little.
For that particular case, yes.

On a 32-bit microcontroller, it makes sense to use only the 32-bit filter implementation.
On 8-bit microcontrollers, it makes sense to use the 16-bit filter when it suffices, and to switch to 32-bit when it doesn't.

The polymorphic C header implementation can easily be changed to do this automatically, so the user does not need to care.

I'm not exactly sure what the cleanest way to implement the selection in C++ would be. One option is to reuse part of the preprocessor magic from the C version to select the desired state type (16- or 32-bit unsigned integer) and sample type (signed or unsigned 16- or 32-bit integer), emitting a suitable filter class definition (with the preprocessor supplying the types as template parameters). For verification I'd add an override macro, so that a unit test run on a host computer could compare the filter's output against the same PRNG data filtered using floating-point types.

5. Originally Posted by Nominal Animal On a 32-bit microcontroller, it makes sense to use only the 32-bit filter implementation.
That is why I use T3.x and T4.x. OK, there are still T2.x around and being sold.

The question for me (on T4.x) is whether to keep, say, signal detection and the related averaging (to estimate background noise) in the integer domain, or to convert to floating point. But that is another topic and not related to the OP.

6. As the OP shows, floating point allows clearer code - use it if your application isn't near the CPU's capacity limit.

7. Originally Posted by WMXZ integer [..] or [..] floating point
Floats (binary32) have 24 bits of precision, but a 254-bit range.
32-bit integers have 32 bits of precision and a 32-bit range, of course.
If the absolute maximum signal range is known, then 32-bit integer (fixed-point) arithmetic will give you better precision.
However, Teensy 4.x does have hardware double (binary64) support too, with 53 bits of precision and a 2046-bit range.

So, choosing what format to use is a balance between code maintainability, efficiency on the given architecture (Teensy 4.x having hardware support for both floats and doubles, and Teensy 3.5 and 3.6 for floats), precision, and required range.

Finally, it is obviously trivial to add a 64-bit integer filter implementation (next to the 16- and 32-bit ones, I mean; it is just a few characters' difference) for use on 32-bit microcontrollers like the Teensy 3.x/4.x.

8. Originally Posted by Nominal Animal Floats (Binary32) have 23 bits of precision, but a 254-bit range.
32-bit integers have 32 bits of precision and a 32-bit range, of course.
If the absolute maximum signal range is known, then 32-bit integer (fixed-point) arithmetic will give you better precision.
That is my reasoning too, so the decision to use floating point for ADC data (where the maximum signal range is known) keeps being pushed into the future. Also, in the past the FPU could consume more power than fixed-point arithmetic. That may have changed, but do any measurements exist?

9. (I haven't found any, and can't even find instruction timings for the ARMv7 core or the FPv5 FPU used on the i.MX RT1062 in Teensy 4.0 and 4.1.
As far as I know, one would have to microbenchmark the various approaches oneself to find out. I haven't done that.
Others on this forum might know more.)

10. Originally Posted by PieterP I'm sure you could use more fractional bits in the fixed point representation to get a more accurate result, but that requires more shifting (which is slower, especially on AVR) and larger integer types. Besides, the second implementation is accurate enough for 99% of smoothing applications, occasionally being one bit off on a 10-bit ADC value is usually not an issue.
Thanks for a really useful discussion about filtering. My application is heavily filtering a fairly slowly drifting ADC signal as input to a threshold crossing detector. The filter input is almost constant over periods comparable to the time constant of the filter, but has some noise superimposed, which is very unwelcome since the threshold needs to be pretty close to the slowly drifting signal. The noise is mostly 1-bit.

I have been using the C++ implementation from Pieter P’s page, but I’ve found that it fails to attenuate the 1-bit noise as soon as the filter settles to a steady value. My naive explanation is that the steady (DC) value (for my signal) will always be close to x + 0.5, so even attenuated noise will likely push the output to either x or x+1. However, 2-bit noise (i.e. input x or x+2) is correctly filtered to a steady value of x+1.

So a possible solution would be to double the signal, filter, and halve it again. Going further, I'd suggest using any "spare" bits in the implementation to scale up the input as far as possible (within reason). Hence my code is

Code:
```class EMA {
    static const unsigned int P = 4;  // Prescale shift: K + P + <input-bit-range> <= 32

  public:
    // Constructor - allow initialisation
    EMA(unsigned int shift, unsigned int initial = 0)
        : K(shift),
          state(((unsigned long)initial << (K + P)) - ((unsigned long)initial << P)),
          half(1UL << (K - 1)) {}  // 1UL: avoid 16-bit int overflow on AVR for K > 15

    // Update the filter with the given input and return the filtered output.
    unsigned int operator()(unsigned int input) {
        state += (unsigned long)input << P;
        unsigned long output = state >> K;  // truncating (no +half rounding)
        state -= output;
        return output >> P;
    }

  private:
    unsigned int K;          // Filter shift
    unsigned long state = 0;
    unsigned long half;      // Fixed-point representation of one half (currently unused)
};```
where <input-bit-range> is 10, state is 32-bit, K=12 (for the time constant I want) and P could therefore be up to 10 (but maybe there's no point in going that high).

(BTW I got myself in a great muddle using templates and multiple instances, so K is a constructor parameter instead of a template parameter.)

I'd welcome any comments - I haven't thought very much about the speed, since it's fast enough for my purposes.

11. If you want to go the buffer route, Circular_Buffer supports median(), average(), variance(), and deviation().

12. Originally Posted by RogerL Thanks for a really useful discussion about filtering. My application is heavily filtering a fairly slowly drifting ADC signal as input to a threshold crossing detector. The filter input is almost constant over periods comparable to the time constant of the filter, but has some noise superimposed, which is very unwelcome since the threshold needs to be pretty close to the slowly drifting signal. The noise is mostly 1-bit.

I have been using the C++ implementation from Pieter P's page, but I've found that it fails to attenuate the 1-bit noise as soon as the filter settles to a steady value. My naive explanation is that the steady (DC) value (for my signal) will always be close to x + 0.5, so even attenuated noise will likely push the output to either x or x+1.
The output of the filter should be float or at least fixed point, not integer. The correct solution to your problem is to add hysteresis to your thresholding, I think. The steady state ought to be close to x + 0.5 in the situation you describe, not quantized.

13. I think this is an inherent problem with quantization, not necessarily because of the implementation of the filter (although different rounding/quantization methods might reveal it more easily than others, given that the filter inputs are quantized as well).

Consider an ADC that measures a signal with a mean of 2.5V plus sinusoidal noise of amplitude 2. For the sake of argument, we'll assume that the imaginary ADC has infinite precision. If you round this infinite-precision signal to the nearest integer, you get a stepped waveform. Now apply an (equally imaginary) infinite-precision (but causal) low-pass filter to the infinite-precision ADC measurements, in the hope of filtering out the sinusoidal noise. If you make the cutoff frequency of your filter low enough, or the order high enough, you can reduce the amplitude of the noise to an arbitrarily low level.
However, given that the DC component of 2.5V sits right at the boundary between rounding up and rounding down, when you quantize the filtered signal by rounding it to the nearest integer, you still get a square wave with amplitude 0.5 (1-bit noise). (Plot: blue, original signal; orange, low-pass filtered signal; green, quantized low-pass filtered signal.)
Again, this is regardless of how good your filter is.

AFAIK, the only real solution is to use hysteresis, as proposed by MarkT.
I usually shift up my input signal by an additional P bits before applying the filter (filling up all available bits of the integer type for maximum accuracy as you suggest), and then afterwards I remove those P bits again, but with hysteresis. You can find my implementation here:
GitHub: Hysteresis.hpp

I would also encourage you to look into the rounding again: by using `state >> K` instead of `(state + half) >> K`, it becomes impossible to reach the zero state again, as shown in this figure from a previous post:
Originally Posted by PieterP the problem with truncation is that it makes it impossible to reach zero again (assuming non-negative inputs).

Using a rectangular input from 0 to 16 and back to 0, for instance (plot omitted).

14. > an inherent problem with quantization

Yes, but you can pre-shift the data to the left, reducing quantization error of the filtered result.

15. Originally Posted by jonr Yes, but you can pre-shift the data to the left, reducing quantization error of the filtered result.
But that doesn't solve the issue: you can shift it to the left as much as you like, but in the end you have to represent everything in a finite number of bits, and no amount of filtering or shifting will prevent the last bit (or any bit, really, through carries) from flickering back and forth when the value is close to the middle of a quantization interval. You have to sacrifice some bits by incorporating hysteresis into your quantization step.

If you shift to the left enough, the least significant bit might become less and less significant, but it's still noisy.

For example, if you want to take a 12-bit ADC reading, filter it, and output the result to an 8-bit DAC, shifting to the left doesn't help, because eventually you have to truncate everything except the 8 most significant bits, resulting again in 1-bit noise in the 8-bit output, no matter how accurate all intermediate arithmetic and filtering was.

16. If I read it correctly, @RogerL wants to detect the time at which a signal crosses a threshold.
As the signal is corrupted by noise, this threshold crossing may occur multiple times in the vicinity of the threshold.
This problem will persist, independent of filtering, until the noise amplitude is less than 1 bit (which is 'very difficult' to achieve), so I would approach the issue differently:

In general, we can state that initially (early on) there will be a few threshold crossings from below, and the crossings will end with a few threshold crossings from above.
Ideally, the 'true' threshold crossing is somewhere in the middle.

The most brutal approach is to estimate the threshold crossing as the midpoint of the first and last crossings (valid if the statistics are symmetric);
a more elegant solution would be to describe the threshold crossings by a statistic and find the median.
(I leave the algorithm to the reader.)

Obviously, any filtering improves the performance. In particular, in the absence of signal there should be no threshold crossings.
In signal-processing terms: the threshold should be high enough that there are no false alarms.
This is where filtering comes into play: it should suppress the noise so that no threshold crossing occurs in the absence of signal.

17. Thanks to all for some very useful comments on my post. Trusting not to try your patience, I thought I might summarise my situation for the benefit of others who may pass this way while looking for filtering code.

1. My requirement is for a heavily filtered ADC value (to be used for calculation of a threshold to detect small brief increments). A prime consideration is low noise in the filtered signal, but this is very specific to my application. If you are interested in audio filtering (for example), you might be much more interested in the speed of the filter.

2. An EMA filter is easy to implement, and is mathematically equivalent to a simple resistor-capacitor (RC) low pass filter, with the time constant set by the filter parameter. There is a good code example at https://tttapa.github.io/Pages/Mathe...mentation.html

3. Usually, the EMA filter is programmed with integer types, for speed of execution. Hence the input and output have limited resolution (unlike the almost infinite resolution of an analog RC circuit).

4. Because of quantisation, an integer EMA filter has one characteristic that differs from an analog RC filter. However large the filter parameter (K), the EMA filter will not attenuate 1-bit noise in the input when it is tracking a slowly changing input signal. This is because the true filtered value is between two adjacent integer values, and the output will flick between the two. This is essentially the same reason that values from an ADC have 1-bit noise in the first place.

5. In order to remove the noise, or most of it, a possible solution is to shift up the input by a few bits and then filter it. This allows the average to take a value between the input integers. The result will then be fairly smooth and accurate, but will have to be treated as a fixed-point value, converted to a float, or used in a context where the gain of the filter can be > 1.

5a. Arduino family processors (mostly) have a setting that left-adjusts the ADC value, equivalent to a left shift of 6 on a 10-bit ADC. This would save the explicit shift, but the standard Arduino analogRead clears this control bit (ADLAR) so you would have to roll your own analogRead.

6. If the requirement is for a smooth but not necessarily highly accurate value (as long as it tracks the slowly changing input with a small or fixed offset), then the shifted up filter result can be simply shifted down again. This suits my case, and my code is in a previous post in this thread (with P = the pre-scale shift up and then down).

The plot shows the output tracking the noise after it settles (P=0), or clipping the noise after it settles (P=2, but really any P>0).
