MIDI notes to trigger Audio Board synth?


Synthetech

Very new to Teensy; I will soon order an Audio Board with a Teensy 3.5.

I have looked and searched, but can't seem to find any examples of a simple MIDI-triggered
"sound module" made using the Audio Board and Audio Library.

I have seen a few examples of direct switch triggering of synths made with the Audio Board, but not one where you simply attach a MIDI controller keyboard like an M-Audio Oxygen 25.

Can anyone help me understand how to get a MIDI Note On message to trigger the properly pitched note in a synth built with the Audio Board and Library?

I think I can understand how to make a serial port become a MIDI interface, read the bytes, and figure out what's a note command or a CC command and get the values.

Getting the MIDI notes to proper pitch is my first hurdle. I think I can figure out how to get CCs to adjust variables for other features like LFOs, waveform selection, effects parameters, etc.

I will also be interested in how to create a portamento effect so notes will "glide" from one to another.

Thanks!
 
Can anyone help me understand how to get a MIDI Note On message to trigger the properly pitched note in a synth built with the Audio Board and Library?
The conversion from MIDI note to Hz is:

F(x) = 440 * (2 ^ ((x - 69) / 12))

440 Hz is the value for note = 69, so x must be offset by 69 for the twelfth-root-of-two calculation to work.

The twelfth root of two is the ratio between the frequencies of adjacent semitones.

Your extracted value from the MIDI note needs to be converted to Hz before being sent to an audio object.



Portamento in old-school modular hardware is achieved by low-pass filtering the pitch value (which is a 1 V/octave control voltage) with a slew limiter, so the signal takes time to approach the destination value.

You could digitally filter the frequency value you feed your oscillator... but you might want to leave that out to start.
 
If by synth you mean playing a note with little regard for timbre, you'll use the Audio library objects and set the .frequency attribute of a sine-wave object. That object, in turn, is connected to some output: I2S via the Audio Shield and on through its headphone jack, or the DAC pin on the Teensy and a piezo speaker.

Have you viewed the Audio Tutorial?
https://www.pjrc.com/teensy/td_libs_Audio.html
Or visited the Audio GUI Tool?
https://www.pjrc.com/teensy/gui/

If you're looking for a pre-made, multi-instrument synth on a chip, Teensy's not it, but it can drive one. A chip from the Finnish company VLSI is used in breakout boards by Adafruit and SparkFun: the VS1053.
I got the SparkFun model working well in my Arduino days (at 5V on an Uno). I hit some flakiness when I drove the 3.3V breakout from Adafruit with a Teensy. I need to get back to that.
 
Thank you oddson. That's what I needed to understand.
I guess if I had dug into the example code from the workshop I would have seen the frequency used for each audio object.

Davidelvig, I simply want to make a single-voice synth, not multitimbral.
I want to tinker around with an analog type of synth, and possibly wavetable in the future if it is developed.
Hopefully I can make a "stacked" sawtooth voice like a Roland JP-8000: take ALL the polyphonic notes, stack them together in unison, then detune to fatten up.
It would be a Mono mode patch. But I'd like to switch back to a polyphonic patch too.
Guess I will know more as I dig into it.


Thanks for the quick replies guys.. very helpful.
I ordered a Teensy 3.5 and the Audio Board yesterday.
They should be here by this weekend.
 
Well, I got my Teensy 3.5 and Audio Board last Friday (amazing 2-day delivery!).
I managed to get it on a breadboard and have had a chance to tinker around with the Workshop Examples to get WAV files playing and the simple Oscillator synth working.

Now I am eager to dive in and get:

A MIDI interface working:
I think making a simple optoisolator interface on a serial port would be best, so I can use the USB port to debug and watch data on the serial monitor.


I am looking for a solution to get raw MIDI data and then use it to trigger Audio Object waveforms. That seems to be my biggest hurdle.
I think I can figure out how to get the bytes, then use a state machine to extract the MIDI note data (Note On/Off, note number, and velocity), but it's using those notes to trigger the voices on and off that stumps me.



Here is a nice starter synth I saw on YouTube:

https://youtu.be/KbcNqarBTsI

And here is a link to the code:

https://github.com/otem/teensypolysynth/blob/master/teensypolysynth.ino

and a copy here:

Code:
#include <Audio.h>
#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SerialFlash.h>

// GUItool: begin automatically generated code
AudioSynthWaveformDc     lfoenvelope;          //xy=589.047534942627,966.6665487289429
AudioSynthWaveform       lfo;            //xy=677.4761581420898,1172.523769378662
AudioMixer4              mixer1;         //xy=720.9047355651855,1027.666706085205
AudioAnalyzePeak         peak1;          //xy=949.0476112365723,890.8095207214355
AudioSynthWaveform       voice8b;        //xy=1185.6190299987793,1941.6666355133057
AudioSynthNoiseWhite     voice8n;        //xy=1185.6190299987793,1976.6666355133057
AudioSynthWaveform       voice8a;        //xy=1187.6190299987793,1903.6666355133057
AudioSynthWaveform       voice4a;        //xy=1206.6190299987793,925.6666355133057
AudioSynthWaveform       voice4b;        //xy=1207.6190299987793,963.6666355133057
AudioSynthNoiseWhite     voice4n;        //xy=1207.6190299987793,998.6666355133057
AudioSynthWaveform       voice5b;        //xy=1208.6190299987793,1219.6666355133057
AudioSynthNoiseWhite     voice5n;        //xy=1208.6190299987793,1254.6666355133057
AudioSynthWaveform       voice5a;        //xy=1213.6190299987793,1176.6666355133057
AudioSynthWaveform       voice7b;        //xy=1217.6190299987793,1710.6666355133057
AudioSynthNoiseWhite     voice7n;        //xy=1217.6190299987793,1745.6666355133057
AudioSynthWaveform       voice6b;        //xy=1220.6190299987793,1473.6666355133057
AudioSynthNoiseWhite     voice6n;        //xy=1220.6190299987793,1508.6666355133057
AudioSynthWaveform       voice6a;        //xy=1222.6190299987793,1435.6666355133057
AudioSynthWaveform       voice7a;        //xy=1222.6190299987793,1667.6666355133057
AudioSynthWaveform       voice3b;        //xy=1228.6190299987793,715.6666355133057
AudioSynthNoiseWhite     voice3n;        //xy=1228.6190299987793,750.6666355133057
AudioSynthWaveform       voice3a;        //xy=1233.6190299987793,672.6666355133057
AudioSynthWaveform       voice1b;        //xy=1258.6190299987793,249.66663551330566
AudioSynthNoiseWhite     voice1n;        //xy=1261.6190299987793,293.66663551330566
AudioSynthWaveform       voice2b;        //xy=1261.6190299987793,483.66663551330566
AudioSynthNoiseWhite     voice2n;        //xy=1261.6190299987793,518.6666355133057
AudioSynthWaveform       voice1a;        //xy=1263.6190299987793,206.66663551330566
AudioSynthWaveform       voice2a;        //xy=1263.6190299987793,445.66663551330566
AudioSynthWaveformDc     voice8filterenv; //xy=1313.6190299987793,2087.6666355133057
AudioSynthWaveformDc     voice8env;      //xy=1327.6190299987793,2026.6666355133057
AudioMixer4              voice8mix;      //xy=1330.6190299987793,1961.6666355133057
AudioSynthWaveformDc     voice4filterenv; //xy=1335.6190299987793,1109.6666355133057
AudioSynthWaveformDc     voice5filterenv; //xy=1336.6190299987793,1365.6666355133057
AudioSynthWaveformDc     voice7filterenv; //xy=1345.6190299987793,1856.6666355133057
AudioSynthWaveformDc     voice4env;      //xy=1349.6190299987793,1048.6666355133057
AudioSynthWaveformDc     voice6filterenv; //xy=1348.6190299987793,1619.6666355133057
AudioSynthWaveformDc     voice5env;      //xy=1350.6190299987793,1304.6666355133057
AudioMixer4              voice4mix;      //xy=1352.6190299987793,983.6666355133057
AudioMixer4              voice5mix;      //xy=1353.6190299987793,1239.6666355133057
AudioSynthWaveformDc     voice3filterenv; //xy=1356.6190299987793,861.6666355133057
AudioSynthWaveformDc     voice7env;      //xy=1359.6190299987793,1795.6666355133057
AudioSynthWaveformDc     voice6env;      //xy=1362.6190299987793,1558.6666355133057
AudioMixer4              voice7mix;      //xy=1362.6190299987793,1730.6666355133057
AudioMixer4              voice6mix;      //xy=1365.6190299987793,1493.6666355133057
AudioSynthWaveformDc     voice3env;      //xy=1370.6190299987793,800.6666355133057
AudioMixer4              voice3mix;      //xy=1373.6190299987793,735.6666355133057
AudioSynthWaveformDc     voice1filterenv; //xy=1387.6190299987793,385.66663551330566
AudioSynthWaveformDc     voice2filterenv; //xy=1389.6190299987793,629.6666355133057
AudioMixer4              voice1mix;      //xy=1403.6190299987793,269.66663551330566
AudioSynthWaveformDc     voice2env;      //xy=1403.6190299987793,568.6666355133057
AudioSynthWaveformDc     voice1env;      //xy=1404.6190299987793,334.66663551330566
AudioMixer4              voice2mix;      //xy=1406.6190299987793,503.66663551330566
AudioEffectMultiply      voice8multiply; //xy=1494.6190299987793,1999.6666355133057
AudioMixer4              voice8filtermodmixer; //xy=1504.6190299987793,2115.6666355133057
AudioEffectMultiply      voice4multiply; //xy=1516.6190299987793,1021.6666355133057
AudioEffectMultiply      voice5multiply; //xy=1517.6190299987793,1277.6666355133057
AudioMixer4              voice4filtermodmixer; //xy=1526.6190299987793,1137.6666355133057
AudioEffectMultiply      voice7multiply; //xy=1526.6190299987793,1768.6666355133057
AudioEffectMultiply      voice6multiply; //xy=1529.6190299987793,1531.6666355133057
AudioMixer4              voice5filtermodmixer; //xy=1534.6190299987793,1387.6666355133057
AudioEffectMultiply      voice3multiply; //xy=1537.6190299987793,773.6666355133057
AudioMixer4              voice6filtermodmixer; //xy=1539.6190299987793,1647.6666355133057
AudioMixer4              voice7filtermodmixer; //xy=1543.6190299987793,1878.6666355133057
AudioMixer4              voice3filtermodmixer; //xy=1554.6190299987793,883.6666355133057
AudioEffectMultiply      voice1multiply; //xy=1567.6190299987793,307.66663551330566
AudioEffectMultiply      voice2multiply; //xy=1570.6190299987793,541.6666355133057
AudioMixer4              voice2filtermodmixer; //xy=1580.6190299987793,657.6666355133057
AudioMixer4              voice1filtermodmixer; //xy=1584.6190299987793,417.66663551330566
AudioFilterStateVariable voice8filter;   //xy=1677.6190299987793,2022.6666355133057
AudioFilterStateVariable voice5filter;   //xy=1697.6190299987793,1321.6666355133057
AudioFilterStateVariable voice4filter;   //xy=1699.6190299987793,1044.6666355133057
AudioFilterStateVariable voice7filter;   //xy=1706.6190299987793,1812.6666355133057
AudioFilterStateVariable voice6filter;   //xy=1712.6190299987793,1554.6666355133057
AudioFilterStateVariable voice3filter;   //xy=1717.6190299987793,817.6666355133057
AudioFilterStateVariable voice2filter;   //xy=1753.6190299987793,564.6666355133057
AudioFilterStateVariable voice1filter;   //xy=1770.6190299987793,359.66663551330566
AudioMixer4              last4premix;    //xy=2177.6190299987793,1294.6666355133057
AudioMixer4              first4premix;   //xy=2178.6190299987793,1210.6666355133057
AudioFilterStateVariable delayFilter;    //xy=2627.6190299987793,1404.6666355133057
AudioMixer4              mainOutMixer;   //xy=2698.6190299987793,1287.6666355133057
AudioEffectDelay         delay1;         //xy=2756.6190299987793,1599.6666355133057
AudioOutputI2S           i2s1;           //xy=2924.6190299987793,1285.6666355133057
AudioConnection          patchCord1(lfoenvelope, 0, mixer1, 0);
AudioConnection          patchCord2(lfo, 0, voice1filtermodmixer, 1);
AudioConnection          patchCord3(lfo, 0, voice2filtermodmixer, 1);
AudioConnection          patchCord4(lfo, 0, voice3filtermodmixer, 1);
AudioConnection          patchCord5(lfo, 0, voice4filtermodmixer, 1);
AudioConnection          patchCord6(lfo, 0, voice5filtermodmixer, 1);
AudioConnection          patchCord7(lfo, 0, voice6filtermodmixer, 1);
AudioConnection          patchCord8(lfo, 0, voice7filtermodmixer, 1);
AudioConnection          patchCord9(lfo, 0, voice8filtermodmixer, 1);
AudioConnection          patchCord10(lfo, 0, mixer1, 1);
AudioConnection          patchCord11(mixer1, peak1);
AudioConnection          patchCord12(voice8b, 0, voice8mix, 1);
AudioConnection          patchCord13(voice8n, 0, voice8mix, 2);
AudioConnection          patchCord14(voice8a, 0, voice8mix, 0);
AudioConnection          patchCord15(voice4a, 0, voice4mix, 0);
AudioConnection          patchCord16(voice4b, 0, voice4mix, 1);
AudioConnection          patchCord17(voice4n, 0, voice4mix, 2);
AudioConnection          patchCord18(voice5b, 0, voice5mix, 1);
AudioConnection          patchCord19(voice5n, 0, voice5mix, 2);
AudioConnection          patchCord20(voice5a, 0, voice5mix, 0);
AudioConnection          patchCord21(voice7b, 0, voice7mix, 1);
AudioConnection          patchCord22(voice7n, 0, voice7mix, 2);
AudioConnection          patchCord23(voice6b, 0, voice6mix, 1);
AudioConnection          patchCord24(voice6n, 0, voice6mix, 2);
AudioConnection          patchCord25(voice6a, 0, voice6mix, 0);
AudioConnection          patchCord26(voice7a, 0, voice7mix, 0);
AudioConnection          patchCord27(voice3b, 0, voice3mix, 1);
AudioConnection          patchCord28(voice3n, 0, voice3mix, 2);
AudioConnection          patchCord29(voice3a, 0, voice3mix, 0);
AudioConnection          patchCord30(voice1b, 0, voice1mix, 1);
AudioConnection          patchCord31(voice1n, 0, voice1mix, 2);
AudioConnection          patchCord32(voice2b, 0, voice2mix, 1);
AudioConnection          patchCord33(voice2n, 0, voice2mix, 3);
AudioConnection          patchCord34(voice1a, 0, voice1mix, 0);
AudioConnection          patchCord35(voice2a, 0, voice2mix, 0);
AudioConnection          patchCord36(voice8filterenv, 0, voice8filtermodmixer, 0);
AudioConnection          patchCord37(voice8env, 0, voice8multiply, 1);
AudioConnection          patchCord38(voice8mix, 0, voice8multiply, 0);
AudioConnection          patchCord39(voice4filterenv, 0, voice4filtermodmixer, 0);
AudioConnection          patchCord40(voice5filterenv, 0, voice5filtermodmixer, 0);
AudioConnection          patchCord41(voice7filterenv, 0, voice7filtermodmixer, 0);
AudioConnection          patchCord42(voice4env, 0, voice4multiply, 1);
AudioConnection          patchCord43(voice6filterenv, 0, voice6filtermodmixer, 0);
AudioConnection          patchCord44(voice5env, 0, voice5multiply, 1);
AudioConnection          patchCord45(voice4mix, 0, voice4multiply, 0);
AudioConnection          patchCord46(voice5mix, 0, voice5multiply, 0);
AudioConnection          patchCord47(voice3filterenv, 0, voice3filtermodmixer, 0);
AudioConnection          patchCord48(voice7env, 0, voice7multiply, 1);
AudioConnection          patchCord49(voice6env, 0, voice6multiply, 1);
AudioConnection          patchCord50(voice7mix, 0, voice7multiply, 0);
AudioConnection          patchCord51(voice6mix, 0, voice6multiply, 0);
AudioConnection          patchCord52(voice3env, 0, voice3multiply, 1);
AudioConnection          patchCord53(voice3mix, 0, voice3multiply, 0);
AudioConnection          patchCord54(voice1filterenv, 0, voice1filtermodmixer, 0);
AudioConnection          patchCord55(voice2filterenv, 0, voice2filtermodmixer, 0);
AudioConnection          patchCord56(voice1mix, 0, voice1multiply, 0);
AudioConnection          patchCord57(voice2env, 0, voice2multiply, 1);
AudioConnection          patchCord58(voice1env, 0, voice1multiply, 1);
AudioConnection          patchCord59(voice2mix, 0, voice2multiply, 0);
AudioConnection          patchCord60(voice8multiply, 0, voice8filter, 0);
AudioConnection          patchCord61(voice8filtermodmixer, 0, voice8filter, 1);
AudioConnection          patchCord62(voice4multiply, 0, voice4filter, 0);
AudioConnection          patchCord63(voice5multiply, 0, voice5filter, 0);
AudioConnection          patchCord64(voice4filtermodmixer, 0, voice4filter, 1);
AudioConnection          patchCord65(voice7multiply, 0, voice7filter, 0);
AudioConnection          patchCord66(voice6multiply, 0, voice6filter, 0);
AudioConnection          patchCord67(voice5filtermodmixer, 0, voice5filter, 1);
AudioConnection          patchCord68(voice3multiply, 0, voice3filter, 0);
AudioConnection          patchCord69(voice6filtermodmixer, 0, voice6filter, 1);
AudioConnection          patchCord70(voice7filtermodmixer, 0, voice7filter, 1);
AudioConnection          patchCord71(voice3filtermodmixer, 0, voice3filter, 1);
AudioConnection          patchCord72(voice1multiply, 0, voice1filter, 0);
AudioConnection          patchCord73(voice2multiply, 0, voice2filter, 0);
AudioConnection          patchCord74(voice2filtermodmixer, 0, voice2filter, 1);
AudioConnection          patchCord75(voice1filtermodmixer, 0, voice1filter, 1);
AudioConnection          patchCord76(voice8filter, 0, last4premix, 3);
AudioConnection          patchCord77(voice5filter, 0, last4premix, 0);
AudioConnection          patchCord78(voice4filter, 0, first4premix, 3);
AudioConnection          patchCord79(voice7filter, 0, last4premix, 2);
AudioConnection          patchCord80(voice6filter, 0, last4premix, 1);
AudioConnection          patchCord81(voice3filter, 0, first4premix, 2);
AudioConnection          patchCord82(voice2filter, 0, first4premix, 1);
AudioConnection          patchCord83(voice1filter, 0, first4premix, 0);
AudioConnection          patchCord84(last4premix, 0, mainOutMixer, 1);
AudioConnection          patchCord85(first4premix, 0, mainOutMixer, 0);
AudioConnection          patchCord86(delayFilter, 0, mainOutMixer, 3);
AudioConnection          patchCord87(mainOutMixer, 0, i2s1, 0);
AudioConnection          patchCord88(mainOutMixer, 0, i2s1, 1);
AudioConnection          patchCord89(mainOutMixer, delay1);
AudioConnection          patchCord90(delay1, 0, delayFilter, 0);
AudioControlSGTL5000     sgtl5000_1;     //xy=2661.6190299987793,1054.6666355133057
// GUItool: end automatically generated code







#include <Bounce.h>
//Mux control pins
int s0 = 27;
int s1 = 26;
int s2 = 25;
int s3 = 24;
//Mux in "SIG" pin
int SIG_pin = 28;



//Buttons
int notePins[8] = {0,1,2,8,16,17,20,21};

Bounce noteBounce[] = {
  Bounce(0,10),
  Bounce(1,10),
  Bounce(2,10),
  Bounce(8,10),
  Bounce(16,10),
  Bounce(17,10),
  Bounce(20,10),
  Bounce(21,10),

};

int colorIndex;
int keyIndex;
float noteFreq[7][8] = {

  //5       1       6      2      7      3      8     4  
  {329.63,220.00,369.99,246.94,415.30,277.18,440.00,293.66},
  {369.99,246.94,415.30,277.18,466.16,311.13,493.88,329.63},
  {392.00,261.63,440.00,293.66,493.88,329.63,523.25,349.23},
  {440.00,293.66,493.88,329.63,554.37,369.99,587.33,392.00},
  {493.88,329.63,554.37,369.99,622.25,415.30,659.25,440.00},
  {523.25,349.23,587.33,392.00,659.25,440.00,698.46,466.16},
  {587.33,392.00,659.25,440.00,739.99,493.88,783.99,523.25},
};

int btnState[8];
int prevBtnState[8];


//Analog Inputs
float analogValues[16];
float analogValuesLag[16];

int extraAnalogPins[5] = {A13,A18,A19,A20,A12};
float extraAnalogValues[5];
float extraAnalogValuesLag[5];
int changeThresh;
int extraChangeThresh;

//LEDS
int red = 3;
int green = 4;
int blue = 5;
int redLevel;
int greenLevel;
int blueLevel;
int redLevelArray[7] = {   182, 255, 0,   248, 0,   240,  255};
int greenLevelArray[7] = { 246, 0,   133, 159, 230, 0,    0};
int blueLevelArray[7] = {  41,  129, 252, 0,   255, 180, 40};


//EnvSwitch
int EnvSwitchPin = 32;
int envelopeFilter;

float tempPulseWidth;
float tempPeak;
float tempRMS;


//synth
float mainVolume;
int tempLineOutLevel;
float vcoOneLevel;
float vcoTwoLevel;
int vcoOneOct;
int vcoTwoOct;
int octArray[6] = {1,1,2,4,8,16};
float deTune;
int waveShapeOneIndex;
int waveShapeTwoIndex;
int lfoWaveShapeIndex;
int octOneIndex;
int octTwoIndex;
//WaveShapes
short waveShapes[4] = {
  WAVEFORM_SINE,
  WAVEFORM_SAWTOOTH,
  WAVEFORM_SQUARE,
  WAVEFORM_PULSE,
};
bool voiceBPulse;
float tempDetuneMod;
float deTuneLfo;
//LFO WaveShapes
short lfoWaveShapes[5] = {
  WAVEFORM_SINE,
  WAVEFORM_SAWTOOTH,
  WAVEFORM_SAWTOOTH_REVERSE,
  WAVEFORM_SQUARE,
  WAVEFORM_SAMPLE_HOLD,
};
//ADSR
int attackTime;
int decayTime;
float sustainLevel;
int releaseTime;
//Filter ADSR
int attackTimeFilter;
int decayTimeFilter;
float sustainLevelFilter;
int releaseTimeFilter;
//LFO ADSR
// int attackTimeLFO;
// int decayTimeLFO;
// float sustainLevelLFO;
// int releaseTimeLFO;

//Note Timing
bool noteTrigFlag[8];
unsigned long attackWait[8];

bool firstRunRead;




void setup() {
  AudioMemory(160);
  Serial.begin(115200);
  sgtl5000_1.enable();
  sgtl5000_1.volume(.7);

  //led Startup
  pinMode(red, OUTPUT);
  pinMode(green, OUTPUT);
  pinMode(blue, OUTPUT);
  analogWrite(red, 200);
  delay(300);
  analogWrite(red, 0);
  analogWrite(green, 255);
  analogWrite(blue, 0);
  delay(300);
  analogWrite(red, 0);
  analogWrite(green, 0);
  analogWrite(blue, 255);  
  delay(300);
  analogWrite(red, 0);
  analogWrite(green, 0);
  analogWrite(blue, 0); 
  redLevel = 182;
  greenLevel = 246;
  blueLevel = 41;

  //MUX
  pinMode(s0, OUTPUT); 
  pinMode(s1, OUTPUT); 
  pinMode(s2, OUTPUT); 
  pinMode(s3, OUTPUT); 
  pinMode(SIG_pin, INPUT);

  digitalWrite(s0, LOW);
  digitalWrite(s1, LOW);
  digitalWrite(s2, LOW);
  digitalWrite(s3, LOW);


  //switch
  pinMode(EnvSwitchPin, INPUT_PULLUP);
  colorIndex = 0;
  keyIndex = 0;

  //mix
  first4premix.gain(0, .25);
  first4premix.gain(1, .25);
  first4premix.gain(2, .25);
  first4premix.gain(3, .25);
  last4premix.gain(0, .25);
  last4premix.gain(1, .25);
  last4premix.gain(2, .25);
  last4premix.gain(3, .25);

  //Voice 1
  voice1a.begin(.3,440,WAVEFORM_SQUARE);
  voice1b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 2
  voice2a.begin(.3,440,WAVEFORM_SQUARE);
  voice2b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 3
  voice3a.begin(.3,440,WAVEFORM_SQUARE);
  voice3b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 4
  voice4a.begin(.3,440,WAVEFORM_SQUARE);
  voice4b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 5
  voice5a.begin(.3,440,WAVEFORM_SQUARE);
  voice5b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 6
  voice6a.begin(.3,440,WAVEFORM_SQUARE);
  voice6b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 7
  voice7a.begin(.3,440,WAVEFORM_SQUARE);
  voice7b.begin(.3,440,WAVEFORM_SAWTOOTH);
  //Voice 8
  voice8a.begin(.3,440,WAVEFORM_SQUARE);
  voice8b.begin(.3,440,WAVEFORM_SAWTOOTH);

  delayFilter.frequency(3000);
  delayFilter.resonance(1);
  delay1.delay(0,0);
  mainOutMixer.gain(3,0);

  //LFO
  lfo.begin(1,3,WAVEFORM_SINE);

  vcoOneOct = 1;
  vcoTwoOct = 1;
  deTune = 1;
  mainOutMixer.gain(0,.5);
  lfoenvelope.amplitude(1);
  voiceBPulse = false;

  firstRunRead = true;


  pinMode(A18, INPUT);
  pinMode(A19, INPUT);
  pinMode(A20, INPUT);

  pinMode(32, INPUT_PULLUP);
  pinMode(33, OUTPUT);

  for(int i=0; i<8; i++){
    pinMode(notePins[i], INPUT_PULLUP);
    digitalWrite(notePins[i], HIGH);
    //audio startup
    if(i < 4){
      voice1env.amplitude(.5,1);
      voice1a.frequency(noteFreq[0][i]);
      voice1b.frequency(noteFreq[0][i]+3);
      delay(200);
      voice1env.amplitude(0,0);
    }
  }
}

float mapfloat(float x, float in_min, float in_max, float out_min, float out_max){
  return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
int getSmooth(int pin){
  int vals[5]; //array that stores 5 readings.
  for(int i = 0; i < 5; i++){
    vals[i] = analogRead(pin); //takes 5 readings.
  }
  float smooth = (vals[0] + vals[1] + vals[2] + vals[3] + vals[4]) / 5;
  return smooth;
}

int readMux(int channel){
  int controlPin[] = {s0, s1, s2, s3};

  int muxChannel[16][4]={
    {0,0,0,0}, //channel 0
    {1,0,0,0}, //channel 1
    {0,1,0,0}, //channel 2
    {1,1,0,0}, //channel 3
    {0,0,1,0}, //channel 4
    {1,0,1,0}, //channel 5
    {0,1,1,0}, //channel 6
    {1,1,1,0}, //channel 7
    {0,0,0,1}, //channel 8
    {1,0,0,1}, //channel 9
    {0,1,0,1}, //channel 10
    {1,1,0,1}, //channel 11
    {0,0,1,1}, //channel 12
    {1,0,1,1}, //channel 13
    {0,1,1,1}, //channel 14
    {1,1,1,1}  //channel 15
  };

  //loop through the 4 sig
  for(int i = 0; i < 4; i ++){
    digitalWrite(controlPin[i], muxChannel[channel][i]);
  }

  int val = getSmooth(SIG_pin);

  //return the value
  return val;
}

void loop() {

  //Volume
  mainVolume = analogRead(A1);
  mainVolume = mainVolume/1023;
  sgtl5000_1.volume(mainVolume);
  tempLineOutLevel = analogRead(A1);
  tempLineOutLevel = map(tempLineOutLevel, 0, 1023, 31, 13);
  sgtl5000_1.lineOutLevel(tempLineOutLevel);

  //envSwitch
  envelopeFilter = digitalRead(EnvSwitchPin);
  if(envelopeFilter == LOW){
      digitalWrite(33, HIGH);
  }else{
    digitalWrite(33, LOW);
  }
  //notes
  for(int i=0; i<8; i++){
    if(i == 0){
      voice1a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice1b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 1){
      voice2a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice2b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 2){
      voice3a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice3b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 3){
      voice4a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice4b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 4){
      voice5a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice5b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 5){
      voice6a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice6b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 6){
      voice7a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice7b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    if(i == 7){
      voice8a.frequency((noteFreq[keyIndex][i]/4)*vcoOneOct);
      voice8b.frequency(((noteFreq[keyIndex][i]/4*vcoTwoOct) * deTune) * deTuneLfo);
    }
    
    btnState[i] = digitalRead(notePins[i]);
    if (noteBounce[i].update()){
      Serial.println(noteFreq[keyIndex][i]);
      if(i == 0){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice1env.amplitude(1,attackTime);
          voice1filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice1env.amplitude(0,releaseTime);
          voice1filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 1){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice2env.amplitude(1,attackTime);
          voice2filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice2env.amplitude(0,releaseTime);
          voice2filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 2){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice3env.amplitude(1,attackTime);
          voice3filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice3env.amplitude(0,releaseTime);
          voice3filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 3){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice4env.amplitude(1,attackTime);
          voice4filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice4env.amplitude(0,releaseTime);
          voice4filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 4){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice5env.amplitude(1,attackTime);
          voice5filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice5env.amplitude(0,releaseTime);
          voice5filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 5){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice6env.amplitude(1,attackTime);
          voice6filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice6env.amplitude(0,releaseTime);
          voice6filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 6){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice7env.amplitude(1,attackTime);
          voice7filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice7env.amplitude(0,releaseTime);
          voice7filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
      if(i == 7){
        if (btnState[i] == LOW && prevBtnState[i] == HIGH){
          voice8env.amplitude(1,attackTime);
          voice8filterenv.amplitude(1,attackTimeFilter);
          noteTrigFlag[i] = true;
          attackWait[i] = millis();
        }else{
          noteTrigFlag[i] = false;
          voice8env.amplitude(0,releaseTime);
          voice8filterenv.amplitude(-1, releaseTimeFilter);
        }
      }
    }
    if(btnState[i] == LOW){
      if(i == 0){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice1env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice1filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      }
      if(i == 1){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice2env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice2filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      }
      if(i == 2){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice3env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice3filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      }   
      if(i == 3){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice4env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice4filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      }  
      if(i == 4){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice5env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice5filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      }  
      if(i == 5){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice6env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice6filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      } 
      if(i == 6){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice7env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice7filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      } 
      if(i == 7){
        if(millis() - attackWait[i] > attackTime && noteTrigFlag[i]){
          voice8env.amplitude(sustainLevel,decayTime);
        }
        if(millis() - attackWait[i] > attackTimeFilter && noteTrigFlag[i]){
          voice8filterenv.amplitude(sustainLevelFilter,decayTimeFilter);
        }
      } 
    }
    prevBtnState[i] = btnState[i];
  }

  //knobs
  for(int i = 0; i < 16; i ++){
    analogValues[i] = readMux(i);

    if(i == 8 || i == 2){
      changeThresh = 250;
    }else if(i == 14)
      changeThresh = 200;
    else{
      changeThresh = 5;
    }
    if (abs(analogValues[i] - analogValuesLag[i]) > changeThresh || firstRunRead){
      //vcoOne
      if(i == 0){
        //oct
        octOneIndex = (analogValues[i]/204)+1;
        if(octOneIndex < 6){
          vcoOneOct = octArray[octOneIndex];
        }
      }
      if(i == 8){
        //shape
        waveShapeOneIndex = analogValues[i]/255;
        if(waveShapeOneIndex < 4){
          voice1a.begin(waveShapes[waveShapeOneIndex]);
          voice2a.begin(waveShapes[waveShapeOneIndex]);
          voice3a.begin(waveShapes[waveShapeOneIndex]);
          voice4a.begin(waveShapes[waveShapeOneIndex]);
          voice5a.begin(waveShapes[waveShapeOneIndex]);
          voice6a.begin(waveShapes[waveShapeOneIndex]);
          voice7a.begin(waveShapes[waveShapeOneIndex]);
          voice8a.begin(waveShapes[waveShapeOneIndex]);
        }
      }
      if(i == 4){
        //mix
        vcoOneLevel = (analogValues[i])/1023;
        vcoTwoLevel = 1 - (analogValues[i])/1023;
        voice1mix.gain(1,vcoOneLevel);
        voice1mix.gain(0,vcoTwoLevel);
        voice2mix.gain(1,vcoOneLevel);
        voice2mix.gain(0,vcoTwoLevel);
        voice3mix.gain(1,vcoOneLevel);
        voice3mix.gain(0,vcoTwoLevel);  
        voice4mix.gain(1,vcoOneLevel); 
        voice4mix.gain(0,vcoTwoLevel); 
        voice5mix.gain(1,vcoOneLevel); 
        voice5mix.gain(0,vcoTwoLevel); 
        voice6mix.gain(1,vcoOneLevel); 
        voice6mix.gain(0,vcoTwoLevel); 
        voice7mix.gain(1,vcoOneLevel); 
        voice7mix.gain(0,vcoTwoLevel); 
        voice8mix.gain(1,vcoOneLevel); 
        voice8mix.gain(0,vcoTwoLevel); 
      }
      //vcoTwo
      if(i == 12){
        //oct
        octTwoIndex = (analogValues[i]/204)+1;
        if(octTwoIndex < 6){
          vcoTwoOct = octArray[octTwoIndex];
        }

      }
      if(i == 2){
        //shape
        waveShapeTwoIndex = analogValues[i]/255;
        if(waveShapeTwoIndex < 4){
          if(waveShapeTwoIndex == 3){
            voiceBPulse = true;
          }else{
            voiceBPulse = false;
          }          
          voice1b.begin(waveShapes[waveShapeTwoIndex]);
          voice2b.begin(waveShapes[waveShapeTwoIndex]);
          voice3b.begin(waveShapes[waveShapeTwoIndex]);
          voice4b.begin(waveShapes[waveShapeTwoIndex]);
          voice5b.begin(waveShapes[waveShapeTwoIndex]);
          voice6b.begin(waveShapes[waveShapeTwoIndex]);
          voice7b.begin(waveShapes[waveShapeTwoIndex]);
          voice8b.begin(waveShapes[waveShapeTwoIndex]);
        }
      }
      if(i == 10){
        //detune
        deTune = analogValues[i];
        deTune = mapfloat(deTune, 0, 1023, .875, 1.125);
      }
      //LFO
      if(i == 6){
        //freq
        lfo.frequency(analogValues[i]/50);
      }
      if(i == 14){
        //shape
        lfoWaveShapeIndex = analogValues[i]/204.6;
        if(lfoWaveShapeIndex < 5){
          lfo.begin(lfoWaveShapes[lfoWaveShapeIndex]);
          Serial.println(lfoWaveShapeIndex);
        }
      }
      //noise
      if(i == 1){
        voice1n.amplitude(analogValues[i]/3096);
        voice2n.amplitude(analogValues[i]/3096);
        voice3n.amplitude(analogValues[i]/3096);
        voice4n.amplitude(analogValues[i]/3096);
        voice5n.amplitude(analogValues[i]/3096);
        voice6n.amplitude(analogValues[i]/3096);
        voice7n.amplitude(analogValues[i]/3096);
        voice8n.amplitude(analogValues[i]/3096);
      }
      //Filter
      if(i == 9){
        //frequency
        voice1filter.frequency(analogValues[i]*10);
        voice2filter.frequency(analogValues[i]*10);
        voice3filter.frequency(analogValues[i]*10);
        voice4filter.frequency(analogValues[i]*10);
        voice5filter.frequency(analogValues[i]*10);
        voice6filter.frequency(analogValues[i]*10);
        voice7filter.frequency(analogValues[i]*10);
        voice8filter.frequency(analogValues[i]*10);
      }
      if(i == 5){
        //resonance
        voice1filter.resonance((analogValues[i]/204)+.9);
        voice2filter.resonance((analogValues[i]/204)+.9);
        voice3filter.resonance((analogValues[i]/204)+.9);
        voice4filter.resonance((analogValues[i]/204)+.9);
        voice5filter.resonance((analogValues[i]/204)+.9);
        voice6filter.resonance((analogValues[i]/204)+.9);
        voice7filter.resonance((analogValues[i]/204)+.9);
        voice8filter.resonance((analogValues[i]/204)+.9);
      }
      if(i == 13){
        //lfo Mod
        voice1filtermodmixer.gain(1, analogValues[i]/1023);
        voice2filtermodmixer.gain(1, analogValues[i]/1023);
        voice3filtermodmixer.gain(1, analogValues[i]/1023);
        voice4filtermodmixer.gain(1, analogValues[i]/1023);
        voice5filtermodmixer.gain(1, analogValues[i]/1023);
        voice6filtermodmixer.gain(1, analogValues[i]/1023);
        voice7filtermodmixer.gain(1, analogValues[i]/1023);
        voice8filtermodmixer.gain(1, analogValues[i]/1023);
      }
      if(i == 3){
        //env Mod
        voice1filtermodmixer.gain(0, analogValues[i]/1023);
        voice2filtermodmixer.gain(0, analogValues[i]/1023);
        voice3filtermodmixer.gain(0, analogValues[i]/1023);
        voice4filtermodmixer.gain(0, analogValues[i]/1023);
        voice5filtermodmixer.gain(0, analogValues[i]/1023);
        voice6filtermodmixer.gain(0, analogValues[i]/1023);
        voice7filtermodmixer.gain(0, analogValues[i]/1023);
        voice8filtermodmixer.gain(0, analogValues[i]/1023);
      }
      //delay
      if(i == 11){
        //time
        delay1.delay(0, analogValues[i]/2.4);
      }
      if(i == 7){
        //feedback
        mainOutMixer.gain(3,analogValues[i]/1023);
      }
      //pulseWidth
      if(i == 15){
        tempPulseWidth = 1 - (analogValues[i]/1023);
        tempDetuneMod = analogValues[i]/2046;

      }
      analogValuesLag[i] = analogValues[i];   
    }      
  }
  //ExtraAnalogIn
  for(int i=0; i<5; i++){
    extraAnalogValues[i] = getSmooth(extraAnalogPins[i]);
    if(i == 0){
      extraChangeThresh = 144;
    }else{
      extraChangeThresh = 1;
    }
    if (abs(extraAnalogValues[i] - extraAnalogValuesLag[i]) > extraChangeThresh || firstRunRead){
      if(i == 0){
        //key
        colorIndex = extraAnalogValues[i]/146;
        if(colorIndex < 7){
          keyIndex = colorIndex;
          redLevel = redLevelArray[colorIndex];
          blueLevel = blueLevelArray[colorIndex];
          greenLevel = greenLevelArray[colorIndex];
        }
      }
      if(i == 1){
        //attack
        if(firstRunRead){
          attackTimeFilter = extraAnalogValues[i]*2;
          attackTime = extraAnalogValues[i]*2;
        }
        if(envelopeFilter == LOW){
          attackTimeFilter = extraAnalogValues[i]*2;
        }else{
          attackTime = extraAnalogValues[i]*2;
        }
      }
      if(i == 2){
        //decay
        if(firstRunRead){
          decayTimeFilter = extraAnalogValues[i];
          decayTime = extraAnalogValues[i];
        }
        if(envelopeFilter == LOW){
          decayTimeFilter = extraAnalogValues[i];
        }else{
          decayTime = extraAnalogValues[i];
        }
      }
      if(i == 3){
        //sustain
        if(firstRunRead){
          sustainLevelFilter = extraAnalogValues[i];
          sustainLevelFilter = mapfloat(sustainLevelFilter, 0, 1023, -1, 1);
          sustainLevel = extraAnalogValues[i]/1023;
        }
        if(envelopeFilter == LOW){
          sustainLevelFilter = extraAnalogValues[i];
          sustainLevelFilter = mapfloat(sustainLevelFilter, 0, 1023, -1, 1);
        }else{
          sustainLevel = extraAnalogValues[i]/1023;
        }
      }
      if(i == 4){
        //release
        if(firstRunRead){
          releaseTimeFilter = extraAnalogValues[i]*2;
          releaseTime = extraAnalogValues[i]*2;
        }
        if(envelopeFilter == LOW){
          releaseTimeFilter = extraAnalogValues[i]*2;
        }else{
          releaseTime = extraAnalogValues[i]*2;
        }
      }
      extraAnalogValuesLag[i] = extraAnalogValues[i];
    }
  }

  //LFO Peak
  if(peak1.available()){
    tempPeak = peak1.read();
  }
  analogWrite(blue, blueLevel*tempPeak);
  analogWrite(green, greenLevel*tempPeak);
  analogWrite(red, redLevel*tempPeak);
  voice1a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice2a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice3a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice4a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice5a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice6a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice7a.pulseWidth((tempPeak/2) + tempPulseWidth);
  voice8a.pulseWidth((tempPeak/2) + tempPulseWidth);

  if(voiceBPulse){
    voice1b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice2b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice3b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice4b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice5b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice6b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice7b.pulseWidth((tempPeak/2) + tempPulseWidth);
    voice8b.pulseWidth((tempPeak/2) + tempPulseWidth);
  }else{
    deTuneLfo = ((tempPeak) * tempDetuneMod + 1);
    //Serial.println(deTuneLfo);
  }
  firstRunRead = false;
}


I can pretty much follow the code and how the 8 buttons trigger 8 pairs of Waveform audio objects along with their connected modulators, filters, mixers, etc.

But how can I convert that so each incoming MIDI note (from the serial port) is assigned to a pair of waveforms, the next note to the next pair, and so on?
Then, when max polyphony is reached (say 8 notes), drop the oldest note and start the replacement note?
Or say notes 1, 2, 3, 4 and 5 are playing, then notes 2, 4 and 5 are released and three new notes are played while notes 1 and 3 stay on?

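The behavior described above (reuse freed voices first, steal the oldest note when all eight are busy) can be sketched as a small allocator, independent of any audio objects. The names here (`VoiceAllocator`, `noteOn`, `noteOff`) are mine for illustration, not Teensy Audio Library API:

```cpp
#include <cstdint>

// Minimal oldest-note-stealing voice allocator (hypothetical names).
// Each voice remembers which MIDI note it holds and when it started,
// so the oldest sounding note can be stolen when all voices are busy.
// Note value 0 is used as the "free" sentinel here.
struct VoiceAllocator {
    static const int kVoices = 8;
    uint8_t  note[kVoices] = {0};   // 0 = voice is free
    uint32_t when[kVoices] = {0};   // allocation timestamp
    uint32_t clock = 0;

    // Returns the voice index to (re)trigger for a note-on.
    int noteOn(uint8_t n) {
        clock++;
        for (int i = 0; i < kVoices; i++)          // prefer a free voice
            if (note[i] == 0) { note[i] = n; when[i] = clock; return i; }
        int oldest = 0;                            // otherwise steal the oldest
        for (int i = 1; i < kVoices; i++)
            if (when[i] < when[oldest]) oldest = i;
        note[oldest] = n; when[oldest] = clock;
        return oldest;
    }

    // Returns the voice playing this note (so it can be released), or -1.
    int noteOff(uint8_t n) {
        for (int i = 0; i < kVoices; i++)
            if (note[i] == n) { note[i] = 0; return i; }
        return -1;
    }
};
```

The returned index would then select which pair of waveform objects and which envelope to retune and retrigger.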


I ran into some of this code that was used in this STM32 project

http://mutable-instruments.net/forum/discussion/6005/the-dsp-g1-analog-modeling-synth-source-code/p1

With the author's permission, I have extracted only the code I want to discuss, to help create a method for triggering voices on the Teensy synth.
I believe these are the main routines that extract the MIDI data and store it in an array used to trigger voices, but exactly how it all works is beyond my understanding right now.

Code:
uint32_t FREQ[15]={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}; //OSC pitches
volatile uint8_t TRIG=1;      //MIDItrig 1=note ON
volatile uint8_t state0=0;    //needs to be reset on a new trig
volatile uint8_t state1=0;    //needs to be reset on a new trig

const uint32_t NOTES[12]={208065>>2,220472>>2,233516>>2,247514>>2,262149>>2,277738>>2,294281>>2,311779>>2,330390>>2,349956>>2,370794>>2,392746>>2};

volatile uint8_t MIDISTATE=0;
volatile uint8_t MIDIRUNNINGSTATUS=0;
volatile uint8_t MIDINOTE;
volatile uint8_t MIDIVEL;
uint8_t OSCNOTES[5];

//-------------- Get the base frequency for the MIDI note ---------------
uint32_t MIDI2FREQ(uint8_t note) {
  uint8_t key=note%12;
  if (note<36) return (NOTES[key]>>(1+(35-note)/12));
  if (note>47) return (NOTES[key]<<((note-36)/12));
  return NOTES[key];
}

//-------------------- 15 DCO block ------------------------------------------
DCO=0;
for (i=0;i<15;i++) {
  DCOPH[i] += FREQ[i];                  //Add freq to phase accumulators
  DCO += waveform[(DCOPH[i]>>15)&255];  //Add DCOs to output
}
DCO = DCO<<4;

void handleMIDINOTE(uint8_t status,uint8_t note,uint8_t vel) {
  uint8_t i;
  //uint8_t trigflag=0;
  uint32_t freq;
  if ((!vel)&&(status==0x90)) status=0x80;
  if (status==0x80) {
    for (i=0;i<5;i++) {
      if (OSCNOTES[i]==note) {
        if (!RELEASE0) {
          FREQ[i*3]=0;
          FREQ[i*3+1]=0;
          FREQ[i*3+2]=0;
        }
        OSCNOTES[i]=0;
      }
      //trigflag+=OSCNOTES[i];
    }
    if (!(OSCNOTES[0]|OSCNOTES[1]|OSCNOTES[2]|OSCNOTES[3]|OSCNOTES[4])) TRIG=0;
    return;
  }

  if (status==0x90) {
    if ((!TRIG)&&(RELEASE0)) {
      for (i=0;i<14;i++) {
        FREQ[i]=0;
      }
    }
    i=0;
    while (i<5) {
      if (!OSCNOTES[i]) {
        freq=MIDI2FREQ(note);
        FREQ[i*3]=freq;
        FREQ[i*3+1]=FREQ[i*3]+((FREQ[i*3]/50)*DETUNE/127)+((FREQ[i*3]/2)*RANGE/32);
        FREQ[i*3+2]=FREQ[i*3]-((FREQ[i*3]/50)*DETUNE/127)+((FREQ[i*3]/2)*RANGE/32);
        OSCNOTES[i]=note;
        if (!TRIG) {
          TRIG=1;
          state0=0;
          state1=0;
        }
        return;
      }
      i++;
    }
  }
}

void UART0_IRQHandler(void) {
  uint8_t MIDIRX;
  while (!(LPC_USART0->STAT & UART_STATUS_TXRDY));
  MIDIRX = LPC_USART0->RXDATA;

  /*
  Handling "Running status"
  1. Buffer is cleared (ie, set to 0) at power up.
  2. Buffer stores the status when a Voice Category Status (ie, 0x80 to 0xEF) is received.
  3. Buffer is cleared when a System Common Category Status (ie, 0xF0 to 0xF7) is received.
  4. Nothing is done to the buffer when a RealTime Category message is received.
  5. Any data bytes are ignored when the buffer is 0.
  */

  if ((MIDIRX>0xBF)&&(MIDIRX<0xF8)) {
    MIDIRUNNINGSTATUS=0;
    MIDISTATE=0;
    return;
  }

  if (MIDIRX>0xF7) return;

  if (MIDIRX & 0x80) {
    MIDIRUNNINGSTATUS=MIDIRX;
    MIDISTATE=1;
    return;
  }

  if (MIDIRX < 0x80) {
    if (!MIDIRUNNINGSTATUS) return;
    if (MIDISTATE==1) {
      MIDINOTE=MIDIRX;
      MIDISTATE++;
      return;
    }
    if (MIDISTATE==2) {
      MIDIVEL=MIDIRX;
      MIDISTATE=1;
      if ((MIDIRUNNINGSTATUS==0x80)||(MIDIRUNNINGSTATUS==0x90)) handleMIDINOTE(MIDIRUNNINGSTATUS,MIDINOTE,MIDIVEL);
      if (MIDIRUNNINGSTATUS==0xB0) handleMIDICC(MIDINOTE,MIDIVEL);
      return;
    }
  }

  return;
}




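For what it's worth, the running-status rules quoted in that comment boil down to a byte-at-a-time state machine. Here is a hardware-independent re-sketch in plain C++ (`MidiParser` and `MidiEvent` are made-up names for illustration; on a Teensy you would feed it bytes from a serial port instead of a UART register):

```cpp
#include <cstdint>
#include <vector>

// One complete parsed channel message.
struct MidiEvent { uint8_t status, data1, data2; };

// Byte-at-a-time MIDI parser implementing the running-status rules
// from the comment above. Like the original, it clears the running
// status for 0xC0-0xF7 (so Program Change and Pitch Bend are dropped)
// and only dispatches channel-1 note and CC messages.
struct MidiParser {
    uint8_t runningStatus = 0;   // last voice-category status byte
    uint8_t state = 0;           // 0 = idle, 1 = expect data1, 2 = expect data2
    uint8_t data1 = 0;
    std::vector<MidiEvent> events;

    void feed(uint8_t b) {
        if (b > 0xBF && b < 0xF8) {            // clear running status
            runningStatus = 0; state = 0; return;
        }
        if (b > 0xF7) return;                  // realtime bytes: ignore
        if (b & 0x80) {                        // new voice-category status
            runningStatus = b; state = 1; return;
        }
        if (!runningStatus) return;            // data byte with no status
        if (state == 1) { data1 = b; state = 2; return; }
        if (state == 2) {
            state = 1;                         // ready for running-status data
            if (runningStatus == 0x80 || runningStatus == 0x90 ||
                runningStatus == 0xB0)
                events.push_back({runningStatus, data1, b});
        }
    }
};
```

Feeding it `90 3C 40 3E 40` produces two note-ons even though the second message omits its status byte, which is exactly the running-status case the ISR above handles.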
I tried to bring as much info as possible to this post for a solution. Hopefully some of you out there can see what I'm trying to do and can offer advice, help and or a push in the right direction on how to get all this accomplished.

I think once I understand how to apply the MIDI data/notes to the voices, I can be on my way to designing my own custom synths. Maybe I'll continue this thread to leave a path for others to follow and make their own MIDI-triggered custom synths with the Teensy Audio Library, since I have not managed to find this specific subject covered here or anywhere else yet.

Thanks in advance!
/Blaine
 
You can use the one USB cable/port for multiple purposes at once. In the Arduino Tools > USB Type menu, you can select Serial plus MIDI at the same time, and even add USB Audio.

I use the triple option now, and all is pretty automatic..

I use the Serial to send debug data to the serial monitor (e.g. Serial.printf()), Audio to send analog audio output to Audacity on my Mac, and MIDI to a midi capture app on the Mac (or MidiOx on Windows). I've not done midi into the Teensy, though there may be sample code for that in the forum or Teensyduino examples.

At the same time, I play audio out of the Audio Shield's output jack via its I2S output.

I'd recommend going piece by piece... adding only one new element at a time. Then wrap that up and work on the next thing.
- Send a midi message using USBMidi (I found a good Mac capture tool -"snoize")
- Capture and display-to-serial an inbound midi message (if you have a source of those)
- Add a single button and send "pushed" to the serial monitor when the button is pressed

With lots of pieces at once, debugging will be frustrating.

Good luck!
 
Thanks for the info and advice David.

I saw after I posted that the USB on my Teensy could do both MIDI and serial monitoring... The USB audio is interesting... Does that mean it may show up in ASIO for DAWs like Ableton Live or FL Studio? That would be really convenient!

I'm confident in getting the MIDI data into the Teensy and parsing it out for the notesOn, notesOff data.. It's that Voice Allocation method that has me hung up right now.

I found this bit of code that may shed light into the methods I'm after-

https://github.com/grame-cncm/faust/blob/master/architecture/faust/dsp/poly-dsp.h


I know the Faust code I linked to is a lot to look through, but I'm sure the answer is in there.

Seems the trick is to tag each voice with a note on... Perhaps just the note number, since you won't play the same note twice at one time.
Then trigger the voice's envelope using a gate value to track the note's duration.
If desired, use the velocity data to adjust the overall gain of the voice.


Ugh! My brain hurts after spending most of the night researching how to code this.
Most of my evening went to just finding the proper term for the technique, which is "voice allocation".
 
Well, after a couple of days poring over what was probably very simple code for seasoned programmers, I figured out how to allocate notes to a limited number of "voices".


Anyway, here's what I did...

Code:
// USB MIDI note allocation to voice number method
// for future synth sketch
// contributed by Blaine Perkins

// LED used as a MIDI note on/off activity indicator
int ledPin = 13;

// array of the notes assigned to each of eight voices (0 = voice is free)
int voice[8];

// tracks how many voices are active;
// when the limit is reached, "Voice Limit Exceeded!" is printed
int noteCount = 0;

// Allocate a new MIDI Note On to an available voice number.
// Scans for the first zero-valued element, then assigns the note number to it.
void MIDInoteOn(int note, int vel)
{
  if (noteCount == 8) {
    Serial.println("Voice Limit Exceeded!");
    return;  // all voices busy - don't allocate
  }

  for (int i = 0; i < 8; i++) {
    if (!voice[i]) {
      voice[i] = note;
      Serial.print("New Note #  ");
      Serial.println(note);
      Serial.println(vel);
      Serial.print("Assigned to Voice #  ");
      Serial.println(i + 1);

      // blink for debugging (note: delay() blocks MIDI processing)
      digitalWrite(ledPin, HIGH);
      delay(100);
      digitalWrite(ledPin, LOW);
      noteCount++;
      return;  // voice assigned - stop scanning
    }
  }
}
 



// Function called when a Note Off msg is received. Scans the voice array to
// locate the note to be shut off and returns that element to zero value,
// which frees up the voice for a new note.
void MIDInoteOff(int note, int vel)
{
  for (int i = 0; i < 8; i++) {
    if (voice[i] == note) {
      voice[i] = 0;
      Serial.print("-note off # ");
      Serial.println(note);
      Serial.print("Shut Off Voice # ");
      Serial.println(i + 1);
      noteCount--;
      if (noteCount < 0) noteCount = 0;

      digitalWrite(ledPin, HIGH);
      delay(100);
      digitalWrite(ledPin, LOW);
      return;  // note found - stop scanning
    }
  }
}


void OnNoteOn(byte channel, byte note, byte velocity)
{
  MIDInoteOn(note, velocity);
}

void OnNoteOff(byte channel, byte note, byte velocity)
{
  MIDInoteOff(note, velocity);  // prints the note-off info and frees the voice
}

void setup()
{
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  usbMIDI.setHandleNoteOff(OnNoteOff);
  usbMIDI.setHandleNoteOn(OnNoteOn);

}

void loop()
{
  usbMIDI.read();
}


As suggested, I used the Serial monitor AND the built-in USB MIDI port.
It seems to work well: when I press a key on my MIDI keyboard, the monitor prints which voice was assigned which note.
Basically it tracks everything as it should, and if the set limit is exceeded, it says so and won't allocate any more voices until one is freed up.

I'm a total newb at coding, so there may be better ways to do this. If anyone has suggestions, I'm open to any ideas for improvement.


Next I may need to figure out how to do "note stealing", so an extra note hit will steal the oldest note.
Even more complex is a stealer that monitors whether a voice's envelope is shutting down its amp, meaning it's not making any sound and can be released for new notes.
 
I've been away for a bit.
Looks like good progress, Synthetech.
Have you implemented some sort of sound synthesis on the Teensy to respond to the incoming MIDI messages?
 
I'm a total newb at coding....
Congrats on your progress... it's particularly impressive if you know how many MIDI and audio projects go off the rails when newcomers take on too many things at once...

You clearly avoided that and followed measured logical steps...

I look forward to trying this out... I had some thoughts on voice stealing, but I'll see what you've done so far first.

Even more complex is a stealer that monitors whether a voice's envelope is shutting down its amp, meaning it's not making any sound and can be released for new notes.
Yes... you need a variable or property you can test to see whether the voice is in the release phase of its envelope generator.
 