Teensy Hearing Aid

chipaudette

Well-known member
Hi All,

I started a new project to see what could be done to make an open-source hearing aid. I'm not attempting to make it small, or attractive, or long-lived. Maybe those qualities will come in time. My goal is to make a platform where I (and maybe others) can try out their own hearing aid algorithms.

After playing with the Teensy (especially the fast new Teensy 3.6) I chose to base my initial device on Teensy hardware...

TeensyHearingAidLead2.png

If you're interested in more details, or if you'd like to see a quick demo (video), you can check out my post at: http://openaudio.blogspot.com/2016/11/a-teensy-hearing-aid.html

I'm not aware of any other attempts at making an open-source hearing aid. Are any of you familiar with any that got anywhere?

Chip
 
Since this is a hearing aid, understanding how the different gain settings work is important. So, I started by quantifying the effect of the volume knob on the Teensy Audio Board. Here's the quick summary:

HeadphoneLevel.png

So, for my full-scale sine wave, the output starts to saturate at a setting of about 0.85. The saturation occurs because I'm asking for a signal level that is bigger than the power rails supplying the SGTL5000, so it's understandable. I'm going to keep my volume setting at 0.8 or below.

If you're interested in a bit more detail: http://openaudio.blogspot.com/2016/11/teensy-audio-board-headphone-level.html

Chip
 
Cool project! Maybe it will shrink one day and be the sister or brother of Google Glass..?
 
Yeah, I'm definitely looking to make it smaller. First, I'm heading towards integrating everything into a daughter card for the Teensy. Then, maybe I'll integrate the microcontroller to put it all into one package. It still probably won't fit into something as svelte as Google Glass, but it could probably fit behind the ear. That'd be cool (to me, at least).

Chip
 
Hearing aids are not just amps; they apply frequency-dependent gain in the ranges where people are hard of hearing.
An overall amp is no good.
In my teen years, I would listen to ROCK on a stereo at almost full volume. As my parents would say, "You're going to go deaf." Well, yes... but it's the higher frequencies that I have lost.
My lows are OK, so just an amplifier that boosts everything equally is not so good.
The way I see it, the Teensy would have to adjust a notch filter to reduce the lows and allow the highs to pass, in my case. The more notch filters the device uses, the better.

Or equalizers...
The other thing is tinnitus: I have a constant noise at the high end that is there all the time. I think of it as a puts-me-to-sleep type of noise. That is another function that would need a unique feature:
finding that noise and blanking it.
But this being the first step is cool. My hearing aids cost close to $1,000 each. Quite the payback for loud music.
 

Yes, I'm definitely aware that gain (loudness) alone is not helpful. If gain were all that's needed, you could simply use an analog amplifier. I chose the Teensy 3.6 because it has a good bit of computational power that can be put to use on signal processing aimed at improving hearing. Frequency shaping and dynamic range compression are the first simple steps, but after that you can go into all sorts of frequency-domain techniques.

For algorithms, I've been working through the book "Digital Hearing Aids" by James Kates. It's not the newest book around, but I like that it was written when most hearing aids were just making the jump to digital. The signal processing techniques that he describes are therefore modest, which makes them simple enough for me to understand. After mastering the basics, maybe I'll have the ability to understand the modern approaches.

Sorry about your hearing loss and tinnitus. I know a lot of folks in that position. I'll probably be there too, soon enough. All of this has given me a good amount of motivation to help make some progress. This effort with my Teensy Hearing Aid is just one cog in a larger effort to make open-source hearing technologies. My work is attempting to be just one teensy (ha!) contribution.

Chip
 
Starting down the path of adding dynamic range compression, beginning with a little discussion of why the heck you need to do such a thing...written while killing time in an overly-loud airport.

http://openaudio.blogspot.com/2017/01/the-need-for-dynamic-range-compression.html


Chip

And I've now implemented my Dynamic Range Compressor as part of my floating-point extension of the Teensy Audio Library. If you're interested, you can check it out here:

http://openaudio.blogspot.com/2017/01/basic-dynamic-range-compressor.html

CompressorSignalFlow.png

Chip
 
I have hearing damage similar to Wayne's description. Seems to me that a key part of this project is going to be audiometrics--determining exactly what frequencies need boosting by how much.

For me, tinnitus sounds like a steam radiator hissing in my ears 24/7, with occasional sine-like tones (though I'm unable to identify the exact pitch) that last for a few seconds and then fade out. I don't see help for tinnitus coming from this source, unfortunately, since it seems to be neurologically generated. I think of it as analogous to phantom limb syndrome for dead auditory nerves.

--Michael
 
Bummer to hear about the tinnitus. For those whose tinnitus is primarily tonal, there has been some progress in the research world on identifying the source of the tinnitus (in the brain) and on identifying methods for breaking up the tinnitus. But, I haven't heard much about progress on non-tonal tinnitus.

If you're curious about the tonal tinnitus stuff, I heard a researcher at a hearing conference present the results of this paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4629059/. It discusses how tonal tinnitus has been found to be correlated with unexpected, sustained, oscillatory brainwave patterns in a specific region of the brain. It appears to be an objective physiologic marker for tinnitus, which is the advance that everyone has been waiting for.

The real point of the paper, though, is to discuss their refinement to a treatment methodology for tinnitus based on this new physiologic understanding. This new treatment approach seeks to break up those oscillatory brain wave patterns. The hypothesis is that, if you can stop the oscillation, you won't hear the tinnitus anymore. Cool idea.

To break up the oscillation, they are trying to stimulate that region of the brain. You can do it electrically, but who likes pushing electricity into their head? Since that region of the brain is associated with sound processing, why not stimulate that region of the brain with sound!?! So, the treatment idea is to break up those brainwave oscillations via carefully-designed patterns of tone bursts. To show what I mean, here's a good figure from the paper:

BMRI2015-569052.001.jpg

By playing these tone bursts over-and-over in the patient's ear, they're hoping that the unsteady stimulation that dances around the tinnitus frequency acts to break up the oscillation in the troublesome region of the patient's brain. In a small human study, they found this treatment to be safe and effective, which is great. But, it was a small study. The question is whether this simple treatment approach will show benefit when they move to a larger test population.

Of course, this is for tonal tinnitus, which is probably not your kind of tinnitus.

Chip
 
Interesting. I hadn't heard about this progress with tonal. In my own case, the tonal component occurs rarely, is of brief duration when it does occur, and so doesn't bother me. (And I've grown accustomed to the non-tonal.)

Your dynamic range compression is interesting to me for related reasons. In combination with my diminished high-end acuity, the trend in recent years to mix movie/tv audio such that the explosions and car crashes are viscerally loud has made it difficult for me to set the volume and/or EQ in a way that makes dialog consistently clear. Boosting the frequencies around 1000-2000 Hz helps some, but I've often wished that compression was built in.

When I get a chance (a few projects are ahead of this) I want to do some experimenting.

--Michael
 
Your discussion of combining EQ with compression is interesting and very relevant.

My current code is quite simple in that it is only a single-band compressor. This means that all of the audio frequencies are compressed (or not) together. This is in contrast to most compressors used in hearing aids today, which are multi-band compressors. A multi-band compressor breaks the audio into different frequency bands (say, low, middle, and high frequencies) and applies compression to each band independently. This works better for some folks...but, as with everything in hearing, other folks prefer the sound of the older ways. In the future, I'm hoping to extend my code to do multi-band compression.

Mitigating the effects of broken hearing is hard.

Chip
 
Looking forward to following your progress.

Single band is what I'll need for the experiments I have in mind, at least at first--simply a box that sits between amplifier and source (computer's analog out in my workroom and Roku with spdif out in living room). I imagine that the width of the band can be changed using the audio lib's filter capability.

But I don't want to get into this too deeply yet, before I'm ready to follow through on it.

--Michael
 
Really good work, Chip! I have tested the compressor for use with instruments and it works very well!
But now I'm in need of a level detector (or envelope follower) for the audio signal,
and you have made a very good one in the compressor. So I tried a small tweak in your code to get the audio_level_dB_block out of the module instead of the compressed audio signal:

Code:
 void update(void) {
      //Serial.println("AudioEffectGain_F32: updating.");  //for debugging.
      audio_block_f32_t *audio_block = AudioStream_F32::receiveWritable_f32();
      if (!audio_block) return;

      //apply a high-pass filter to get rid of the DC offset
      if (use_HP_prefilter) arm_biquad_cascade_df1_f32(&hp_filt_struct, audio_block->data, audio_block->data, audio_block->length);
      
      //apply the pre-gain...a negative gain value will disable
      if (pre_gain > 0.0f) arm_scale_f32(audio_block->data, pre_gain, audio_block->data, audio_block->length); //use ARM DSP for speed!

      //calculate the level of the audio (ie, calculate a smoothed version of the signal power)
      audio_block_f32_t *audio_level_dB_block = AudioStream_F32::allocate_f32();
      calcAudioLevel_dB(audio_block, audio_level_dB_block); //returns through audio_level_dB_block

      //compute the desired gain based on the observed audio level
      audio_block_f32_t *gain_block = AudioStream_F32::allocate_f32();
      calcGain(audio_level_dB_block, gain_block);  //returns through gain_block

      //apply the desired gain...store the processed audio back into audio_block
      arm_mult_f32(audio_block->data, gain_block->data, audio_block->data, audio_block->length);

      //transmit the block and release memory
      AudioStream_F32::transmit(audio_level_dB_block);
      AudioStream_F32::release(audio_block);
      AudioStream_F32::release(gain_block);
      AudioStream_F32::release(audio_level_dB_block);
    }

Is there any obvious reason that this should not work? I will use the signal strength to control a state variable filter.
 
After working with my crazy breadboard version of my Teensy Hearing Aid, I decided to make a custom PCB. It still uses a Teensy 3.6, but it brings together all of the other components onto one board...

Parts of Tympan.png

It's so much more robust than my breadboard version. If you're interested, a few more details (as well as links to schematics and whatnot) are here: http://openaudio.blogspot.com/2017/03/unifying-electronics-to-make-tympan.html

This sure is fun!

Chip
 
That's looking good! I didn't read enough to see what part the Bluetooth plays? Other than that, I suppose the T_3.6 is the most expensive part by far? (Saying only that the rest is simple components.)
 
The Bluetooth isn't much used at the moment. The idea was that the Bluetooth link would allow me to control the settings on the device without having to plug in USB. I could simply open up my phone, shoot a few messages to the device, and its settings would change.

Alternatively, I figured that I could use the Bluetooth link to send data out from the hearing aid. I could log that data on a PC or phone in order to see how the device responds to certain sound environments, or I could use it to monitor the sound environment itself. I could use the device as a wireless sound level meter or something.

Or, less scientific and more silly, one could imagine having a link to the phone that enables you to play an audio version of Pokemon Go. The hearing aid continues to process the ambient natural audio but, upon Bluetooth commands from the phone based on your location (or whatever), would inject additional sounds based on whatever game scenario you're playing.

So, yeah, the Bluetooth is there for connectivity--for basic user control or for future advanced interactions. It currently has nothing to do with making hearing better.

Chip
 
Good - I didn't miss that part :) Communication and control/feedback sounds good. I was wondering if perhaps there was a tie to Bluetooth earphones I didn't see.

The intermediate result looks much better than the post #1 mock-up.
 
I have a question...
I am trying to extend chipaudette's compressor class with an option to add another input and use it as a sidechain compressor, where one of the inputs is analyzed and the other input gets compressed based on that analysis.
But the program freezes when I try to read the other input:
Code:
	  audio_block_sc = AudioStream_F32::receiveReadOnly_f32(1);
I can put this
Code:
	  audio_block_sc = AudioStream_F32::receiveReadOnly_f32(0);
and the program will run, but it obviously doesn't do what it should.

I have tried a lot of different methods and variants of receiveReadOnly, made new audioStreams etc.

Does anyone have an idea what I am missing?

Code:
#ifndef _AudioEffectCompressor2_F32
#define _AudioEffectCompressor2_F32

#include <arm_math.h> //ARM DSP extensions.  https://www.keil.com/pack/doc/CMSIS/DSP/html/index.html
#include <AudioStream_F32.h>

class AudioEffectCompressor2_F32 : public AudioStream_F32
{
  //GUI: inputs:2, outputs:1  //this line used for automatic generation of GUI node
  public:
    //constructor
    AudioEffectCompressor2_F32(void) : AudioStream_F32(2, inputQueueArray_f32 ) {
      setThresh_dBFS(-20.0f);     //set the default value for the threshold for compression
      setCompressionRatio(5.0f);  //set the default compression ratio
      setAttack_sec(0.005f, AUDIO_SAMPLE_RATE);  //default to this value
      setRelease_sec(0.200f, AUDIO_SAMPLE_RATE); //default to this value
      setHPFilterCoeff();  enableHPFilter(true);  //enable the HP filter to remove any DC offset from the audio
      resetStates();	setSideChain(0);
    };

    //here's the method that does all the work
    void update(void) {
      //Serial.println("AudioEffectGain_F32: updating.");  //for debugging.
	  audio_block_f32_t *audio_block, *audio_block_sc;
      audio_block = AudioStream_F32::receiveWritable_f32(0);
	  if (!audio_block) return;
	  
	  audio_block_sc = AudioStream_F32::receiveReadOnly_f32(1);	  
      /*if (!audio_block_sc) {
		release(audio_block);
		return;
		}
		*/
      //apply a high-pass filter to get rid of the DC offset
      if (use_HP_prefilter) arm_biquad_cascade_df1_f32(&hp_filt_struct, audio_block->data, audio_block->data, audio_block->length);
  
	  //apply the pre-gain...a negative gain value will disable
      //if (pre_gain > 0.0f) arm_scale_f32(audio_block->data, pre_gain, audio_block->data, audio_block->length); //use ARM DSP for speed!

      //calculate the level of the audio (ie, calculate a smoothed version of the signal power)
      audio_block_f32_t *audio_level_dB_block = AudioStream_F32::allocate_f32();
      calcAudioLevel_dB(audio_block, audio_level_dB_block); //returns through audio_level_dB_block

      //compute the desired gain based on the observed audio level
      audio_block_f32_t *gain_block = AudioStream_F32::allocate_f32();
      calcGain(audio_level_dB_block, gain_block);  //returns through gain_block
	 
      //apply the desired gain...store the processed audio back into audio_block
	  //if sideChain is activated it affects the other input with the calculated gain.
	   if (!sideChain) arm_mult_f32(audio_block->data, gain_block->data, audio_block->data, audio_block->length);
	   else arm_mult_f32(audio_block_sc->data, gain_block->data, audio_block->data, audio_block->length);
	  
	  //arm_mult_f32(audio_block->data, gain_block->data, audio_block->data, audio_block->length);

      //transmit the block and release memory
      AudioStream_F32::transmit(audio_block);
      AudioStream_F32::release(audio_block_sc);	  
      AudioStream_F32::release(audio_block);		  
      AudioStream_F32::release(gain_block);
      AudioStream_F32::release(audio_level_dB_block);	  
    }
 
Add this as a private variable to your class:
Code:
audio_block_t *inputQueueArray_i16[2];

Edit: Looking at the float fork of AudioConnection, I'm not sure if my suggestion is helpful. It would be how you'd fix it in the standard library.
 
macaba nailed it (except for the data type). The problem is that I defined my class to only accept a single channel of data. The class definition needs to be tweaked to permit the second channel of input. Sorry that I didn't do that before. So, taking macaba's suggestion (but correcting the data type), you should modify the private section of AudioEffectCompressor to read:

Code:
  private:
    //state-related variables
    audio_block_f32_t *inputQueueArray_f32[2]; //expand to allow *two* channels of input

This will allow two float32_t audio blocks to be passed as inputs. In the update() function of the class, you can access the two inputs using the syntax that you've already shown....

Code:
audio_block_f32_t *block1 = AudioStream_F32::receiveReadOnly_f32(0); //first input
audio_block_f32_t *block2 = AudioStream_F32::receiveReadOnly_f32(1); //second input

Chip
 