Filter cutoff frequency and keyboard tracking

emmanuel63

Hello,

I want to implement keyboard tracking for the cutoff frequency of a LP filter. I just can't figure out the maths behind it...

Say I play an A4 = 440 Hz. My filter is set to Fc = 1000 Hz. A certain number of harmonics are cut, which is what I want.
But if I play an A6 = 1760 Hz, the fundamental is almost completely cut!
And if I play an A2 = 110 Hz, the sound keeps many harmonics and sounds too bright...

I have been struggling with the maths, but I can't get any good result.
How do you implement keyboard tracking?

Emmanuel
 
Need a bit more context to fully understand, and perhaps answer your question.
If I understand correctly, keyboard tracking means that the LP-filter cutoff frequency changes with the frequency of the note played.
E.g. note 440 Hz > Fc = 1000 Hz, note 1760 Hz > Fc = 4000 Hz, note 110 Hz > Fc = 440 Hz.
Searching for keyboard tracking on the web, I believe it's also called "key tracking".

Now, how did you implement the LP filter? Using the audio library? Or?
And for note generation, is it MIDI? Or MIDI synthesized to analog?

Paul
 
E.g. note 440 Hz > Fc = 1000 Hz, note 1760 Hz > Fc = 4000 Hz, note 110 Hz > Fc = 440 Hz.

Yes, this is exactly what I'm trying to do. The goal is to get the same perceptual effect whatever note is played.
I do use the audio library for this project. A keyboard sends MIDI note numbers. The note frequency is computed according to octave selection, tune and pitch modulation. The oscillators are updated with the note frequency.
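For reference, the MIDI note to frequency step uses the usual equal-temperament mapping, with A4 = MIDI note 69 = 440 Hz:

note_freq = 440 * 2^((note - 69) / 12)

so note 69 gives 440 Hz and note 81 (one octave up) gives 880 Hz, before octave selection, tune and pitch modulation are applied.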
 
Here is an example of what I have tried:

Code:
filter1[myActiveVoice]->frequency(osc1_frequency[myVoice] + osc1_frequency[myVoice]*Fc_ratio);

Fc_ratio is controlled by a MIDI controller. I tried different ranges, but it doesn't give an even effect.
 
I'm not sure I, or other forum readers, will understand the issue from this single line of code.
Is it possible to share your complete code?

Paul
 
The complete code is very long...
I will send a more detailed version tomorrow. Thanks for your help, I appreciate it.
 
Good morning! (from France!)

Well, the code is very long because of the polyphony management of my synth.
I think we can focus on the core equation that links the note frequency to the filter cutoff frequency.

Say we have:
- a LP filter, defined by its cutoff frequency "centered_FC"
- the played note, defined by its frequency "note_freq"
- the adapted filter cutoff frequency "new_FC"
- the amount of key tracking "track", which determines how much "centered_FC" will be shifted

We must find a function to compute "new_FC" from "centered_FC" and "note_freq". Every frequency is in hertz.


Here is my "best" solution:

new_FC = centered_FC * (track) + note_freq * (1 - track); with : track: 0 to 1


It is OK, but far from perfect.
Emmanuel
 
Bonjour,

Working through your best solution: if track = 0, then new_FC = note_freq. If track = 1, then new_FC = centered_FC. I don't think that is what you want.
Isn't the formula just new_FC = note_freq * track, where track >= 2?
Looking at note 440 Hz > Fc = 1000 Hz above, then track is 1000/440 = ~2.27.

Paul
 
Bonjour et merci!

This formula:
new_FC = note_freq * track, where track = n
unfortunately doesn't work. It cuts low-pitched notes too much:

with track = 4:
for C = 162 Hz -> new_FC = 648 Hz: a lot of harmonics are cut, and the sound is rather dull.
for A = 880 Hz -> new_FC = 3520 Hz: not "so many" harmonics are cut, and the sound is bright.

I think it has something to do with this kind of maths (from the audio library StateVariableFilter doc):
[image: filter formula from the StateVariableFilter documentation]
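If that is the formula for the filter's control-signal input, maybe the library can do the exponential scaling itself? A rough sketch of what I have in mind, assuming the usual audio library objects (AudioFilterStateVariable, AudioSynthWaveformDc) and that the control input sweeps the corner frequency by signal * octaves octaves:

Code:
#include <Audio.h>

AudioSynthWaveformDc     trackDc;  // slowly-changing control signal
AudioFilterStateVariable filter1;
// Input 1 of the state variable filter is its frequency control input.
AudioConnection          patchCord1(trackDc, 0, filter1, 1);

const float CENTER_FC = 1000.0f;  // cutoff when the control signal is 0
const float OCTAVES   = 5.0f;     // +/-1.0 on the control sweeps +/-5 octaves

void setupFilter() {
  filter1.frequency(CENTER_FC);
  filter1.octaveControl(OCTAVES);
}

// Per note: shift the cutoff by the note's distance from A4 in octaves,
// scaled by the key tracking amount (0 = none, 1 = full tracking).
void setNoteTracking(float note_freq, float track) {
  float octaveShift = log2f(note_freq / 440.0f);
  trackDc.amplitude(track * octaveShift / OCTAVES);
}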
 
I don't think you want the signal amplitude to be part of the equation. I can imagine that will lead to strange effects.
Probably it's just a matter of playing with the track factor until you find a satisfying formula.
I did some quick googling on "key tracking filter formula" - you may find an answer when digging some more.

Paul
 
new_FC = centered_FC * (track) + note_freq * (1 - track); with : track: 0 to 1

You are doing the calculations in the linear frequency domain: converting the MIDI note to a frequency and then adding a scaled offset. That will not work, because the perception of pitch is logarithmic, not linear.

Instead, convert your example filter cutoff frequency (here 1000 Hz) into a MIDI note and then subtract the MIDI note of your oscillator frequency. That gives you an offset in semitones.

You can then add that offset to any oscillator note, convert that from MIDI note back to frequency, and there is your result.

(You will probably want to use floats for this, not integer MIDI notes, to avoid sudden jumps.)
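
A minimal sketch of that semitone-offset approach; the helper names (freqToNote, noteToFreq, keyTrackedCutoff) are just illustrations, not library functions:

Code:
#include <math.h>

// Standard MIDI conversions, with A4 = MIDI note 69 = 440 Hz.
float freqToNote(float freq) {
  return 69.0f + 12.0f * log2f(freq / 440.0f);
}

float noteToFreq(float note) {
  return 440.0f * powf(2.0f, (note - 69.0f) / 12.0f);
}

// centeredFC was chosen while playing referenceFreq (e.g. 1000 Hz at A4 = 440 Hz).
// track = 0 leaves the cutoff fixed; track = 1 tracks the keyboard fully.
float keyTrackedCutoff(float noteFreq, float centeredFC,
                       float referenceFreq, float track) {
  // Distance (in semitones) between the played note and the reference,
  // scaled by the tracking amount, then applied to the cutoff note.
  float semitoneShift = freqToNote(noteFreq) - freqToNote(referenceFreq);
  return noteToFreq(freqToNote(centeredFC) + track * semitoneShift);
}

As a check: with centeredFC = 1000, referenceFreq = 440 and track = 1, playing A6 = 1760 Hz gives a shift of 24 semitones, so the cutoff becomes 1000 * 2^2 = 4000 Hz, which matches the 440 -> 1000, 1760 -> 4000 example earlier in the thread. In your voice loop that would be something like filter1[myVoice]->frequency(keyTrackedCutoff(osc1_frequency[myVoice], 1000.0f, 440.0f, track));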
 
I fear timbre varies with frequency even for identical harmonic profiles.

Human hearing is very adept at perceiving the filter envelope separately from the excitation note(s), as this is the basis of vowel recognition in language.

Human hearing is also much more sensitive to a narrow range of frequencies around 300 to 2000 Hz, just to complicate things.

I think this means you cannot keep the timbre the same for different notes over more than a short range of notes/frequencies. If the filter tracks the note, the brain perceives the different filter envelopes as having different quality; if the filter doesn't track, you get the problem of very different harmonic counts, which affects the timbre very strongly.

This is why sample sets are used for realistic synthesis of real instrument voices. You may think all the octaves on a piano must have the same 'pianolike' timbre, but they vary significantly; we just group them all as "piano sounds". If you use a single sample for every note (suitably sped up or slowed down), it doesn't sound like a real piano...

I think what I'm saying is there's no simple correct answer to how to do synthesis, but many possible ways to explore.
 