Audio System Design Tool - reloaded


fms1961

Well-known member
As mentioned some time ago, I'm opening this thread for further discussion of the ongoing development of the "Audio System Design Tool".

After the last merge, I "killed" all my changes - but thanks to good development and versioning tools, I was able to restore them.

What's going on? A few topics, in short:

1. The node definition has to be integrated as a real JavaScript object, because some properties are not "transportable" via stringified JSON.
2. The node definition has been extended: definition of value ranges, validation callbacks, editor component descriptions, source generation rules ...
3. With this, the per-node editor dialog template is gone. The dialogs will be built on the fly following the descriptions in the node definition object.
4. A new "loop" node holds arbitrary code to put into the "loop" section of the resulting source code (a first step towards generating a complete sketch).

As this version is unstable, I won't create a pull request yet - for testing, clone my fork at https://github.com/mamuesp/Audio, change to the directory "gui" and open the file "index.html" directly in the browser - this should work fine.

The next steps will be:

1. complete the node definitions
2. complete the export function
3. expand the import function (which may then also interpret the parameter settings of the components)
4. implement the "loop" node

Some dialog screenshots:

Screenshot 2016-01-08 22.37.23.png

Screenshot 2016-01-08 22.37.33.png

Here are some examples of the expanded node description (the node definitions are not finished yet!). If questions arise, please feel free to ask.

(Is there any possibility to limit the height of an HTML box?)
HTML:
<script type="text/javascript">
var nodes = {
    "AudioInputI2S":                   {
        "type": "AudioInputI2S",
        "data": {
            "defaults":  {
                "name": {
                    "value": "i2s"
                }
            },
            "shortName": "i2s",
            "inputs":    0,
            "outputs":   2,
            "category":  "input-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioInputAnalog":                {
        "type": "AudioInputAnalog",
        "data": {
            "defaults":  {
                "name": {
                    "value": "adc"
                },
                "pin":  {
                    "value": "A2",
                    "call":  "(###pin###)",
                    "input": "text",
                    "label": "Pin"
                }
            },
            "shortName": "adc",
            "inputs":    0,
            "outputs":   1,
            "category":  "input-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioInputI2Sslave":              {
        "type": "AudioInputI2Sslave",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "i2ss",
            "inputs":    0,
            "outputs":   2,
            "category":  "input-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioOutputI2S":                  {
        "type": "AudioOutputI2S",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "i2s",
            "inputs":    2,
            "outputs":   0,
            "category":  "output-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioOutputSPDIF":                {
        "type": "AudioOutputSPDIF",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "spdif",
            "inputs":    2,
            "outputs":   0,
            "category":  "output-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioOutputAnalog":               {
        "type": "AudioOutputAnalog",
        "data": {
            "defaults":  {
                "name": {
                    "value": "dac"
                }
            },
            "shortName": "dac",
            "inputs":    1,
            "outputs":   0,
            "category":  "output-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioOutputPWM":                  {
        "type": "AudioOutputPWM",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "pwm",
            "inputs":    1,
            "outputs":   0,
            "category":  "output-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioOutputI2Sslave":             {
        "type": "AudioOutputI2Sslave",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "i2ss",
            "inputs":    2,
            "outputs":   0,
            "category":  "output-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioMixer4":                     {
        "type": "AudioMixer4",
        "data": {
            "defaults":  {
                "name":  {
                    "value": "mixer"
                },
                "gain0": {
                    "value":    0.25,
                    "call":     ".gain(0, ###gain0###)",
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Gain chan. 0"
                },
                "gain1": {
                    "value":    0.25,
                    "call":     ".gain(1, ###gain1###)",
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Gain chan. 1"
                },
                "gain2": {
                    "value":    0.25,
                    "call":     ".gain(2, ###gain2###)",
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Gain chan. 2"
                },
                "gain3": {
                    "value":    0.25,
                    "call":     ".gain(3, ###gain3###)",
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Gain chan. 3"
                }
            },
            "shortName": "mixer",
            "inputs":    4,
            "outputs":   1,
            "category":  "mixer-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioPlayMemory":                 {
        "type": "AudioPlayMemory",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "playMem",
            "inputs":    0,
            "outputs":   1,
            "category":  "play-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioPlaySdWav":                  {
        "type": "AudioPlaySdWav",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "playSdWav",
            "inputs":    0,
            "outputs":   2,
            "category":  "play-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioPlaySdRaw":                  {
        "type": "AudioPlaySdRaw",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "playSdRaw",
            "inputs":    0,
            "outputs":   1,
            "category":  "play-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioPlaySerialflashRaw":         {
        "type": "AudioPlaySerialflashRaw",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "playFlashRaw",
            "inputs":    0,
            "outputs":   1,
            "category":  "play-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioPlayQueue":                  {
        "type": "AudioPlayQueue",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "queue",
            "inputs":    0,
            "outputs":   1,
            "category":  "play-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioRecordQueue":                {
        "type": "AudioRecordQueue",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "queue",
            "inputs":    1,
            "outputs":   0,
            "category":  "record-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthWaveformSine":          {
        "type": "AudioSynthWaveformSine",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "sine"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                },
                "frequency": {
                    "value":    "1000",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      22000,
                    "input":    "text",
                    "label":    "Frequency"
                },
                "phase":     {
                    "value":    "0",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      360,
                    "input":    "text",
                    "label":    "Phase"
                }
            },
            "shortName": "sine",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthWaveformSineModulated": {
        "type": "AudioSynthWaveformSineModulated",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "sine_fm"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                },
                "frequency": {
                    "value":    "1000",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      22000,
                    "input":    "text",
                    "label":    "Frequency"
                },
                "phase":     {
                    "value":    "0",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      360,
                    "input":    "text",
                    "label":    "Phase"
                }
            },
            "shortName": "sine_fm",
            "inputs":    1,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthWaveform":              {
        "type": "AudioSynthWaveform",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "new"
                },
                "waveform":  {
                    "value":    0,
                    "data":     ["WAVEFORM_SINE", "WAVEFORM_SAWTOOTH", "WAVEFORM_SAWTOOTH_REVERSE", "WAVEFORM_SQUARE", "WAVEFORM_TRIANGLE", "WAVEFORM_ARBITRARY", "WAVEFORM_PULSE", "WAVEFORM_SAMPLE_HOLD"],
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      7,
                    "input":    "select",
                    "label":    "Waveform"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                },
                "frequency": {
                    "value":    "1000",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      22000,
                    "input":    "text",
                    "label":    "Frequency"
                },
                "phase":     {
                    "value":    "0",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      360,
                    "input":    "text",
                    "label":    "Phase"
                }
            },
            "shortName": "waveform",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthToneSweep":             {
        "type": "AudioSynthToneSweep",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "tonesweep",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthWaveformDc":            {
        "type": "AudioSynthWaveformDc",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "new"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                },
                "period":    {
                    "value":    "10",
                    "call":     "insert",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      100000,
                    "input":    "text",
                    "label":    "Period"
                }
            },
            "shortName": "dc",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthNoiseWhite":            {
        "type": "AudioSynthNoiseWhite",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "new"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                }
            },
            "shortName": "noise",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSynthNoisePink":             {
        "type": "AudioSynthNoisePink",
        "data": {
            "defaults":  {
                "name":      {
                    "value": "new"
                },
                "amplitude": {
                    "value":    "0.5",
                    "validate": RED.nodes.isValidRange,
                    "min":      0,
                    "max":      1,
                    "input":    "text",
                    "label":    "Amplitude"
                }
            },
            "shortName": "pink",
            "inputs":    0,
            "outputs":   1,
            "category":  "synth-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectFade":                 {
        "type": "AudioEffectFade",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "fade",
            "inputs":    1,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectChorus":               {
        "type": "AudioEffectChorus",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "chorus",
            "inputs":    1,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectFlange":               {
        "type": "AudioEffectFlange",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "flange",
            "inputs":    1,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectEnvelope":             {
        "type": "AudioEffectEnvelope",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "envelope",
            "inputs":    1,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectMultiply":             {
        "type": "AudioEffectMultiply",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "multiply",
            "inputs":    2,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectDelay":                {
        "type": "AudioEffectDelay",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "delay",
            "inputs":    1,
            "outputs":   8,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectDelayExternal":        {
        "type": "AudioEffectDelayExternal",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "delayExt",
            "inputs":    1,
            "outputs":   8,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioEffectBitcrusher":           {
        "type": "AudioEffectBitcrusher",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "bitcrusher",
            "inputs":    1,
            "outputs":   1,
            "category":  "effect-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioFilterBiquad":               {
        "type": "AudioFilterBiquad",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "biquad",
            "inputs":    1,
            "outputs":   1,
            "category":  "filter-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioFilterFIR":                  {
        "type": "AudioFilterFIR",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "fir",
            "inputs":    1,
            "outputs":   1,
            "category":  "filter-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioFilterStateVariable":        {
        "type": "AudioFilterStateVariable",
        "data": {
            "defaults":  {
                "name":          {
                    "value": "new"
                },
                "frequency":     {
                    "value": 0
                },
                "resonance":     {
                    "value": 0
                },
                "octaveControl": {
                    "value": 0
                }
            },
            "shortName": "filter",
            "inputs":    2,
            "outputs":   3,
            "category":  "filter-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioAnalyzePeak":                {
        "type": "AudioAnalyzePeak",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "peak",
            "inputs":    1,
            "outputs":   0,
            "category":  "analyze-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioAnalyzeFFT256":              {
        "type": "AudioAnalyzeFFT256",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "fft256",
            "inputs":    1,
            "outputs":   0,
            "category":  "analyze-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioAnalyzeFFT1024":             {
        "type": "AudioAnalyzeFFT1024",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "fft1024",
            "inputs":    1,
            "outputs":   0,
            "category":  "analyze-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioAnalyzeToneDetect":          {
        "type": "AudioAnalyzeToneDetect",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "tone",
            "inputs":    1,
            "outputs":   0,
            "category":  "analyze-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioAnalyzePrint":               {
        "type": "AudioAnalyzePrint",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "print",
            "inputs":    1,
            "outputs":   0,
            "category":  "analyze-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioControlSGTL5000":            {
        "type": "AudioControlSGTL5000",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "sgtl5000",
            "inputs":    0,
            "outputs":   0,
            "category":  "control-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioControlAK4558":              {
        "type": "AudioControlAK4558",
        "data": {
            "defaults":  {
                "name":        {
                    "value": "new"
                },
                "volume":      {
                    "value":    0.25,
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Volume"
                },
                "volumeLeft":  {
                    "value":    0.25,
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Left volume"
                },
                "volumeRight": {
                    "value":    0.25,
                    "validate": RED.nodes.isValidRange,
                    "min":      0.0,
                    "max":      1.0,
                    "input":    "text",
                    "label":    "Right volume"
                },
                "inputSelect": {
                    "value": "n.a.",
                    "input": "display",
                    "label": "Input select"
                }
            },
            "shortName": "ak4558",
            "inputs":    0,
            "outputs":   0,
            "category":  "control-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioControlWM8731":              {
        "type": "AudioControlWM8731",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "wm8731",
            "inputs":    0,
            "outputs":   0,
            "category":  "control-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioControlWM8731master":        {
        "type": "AudioControlWM8731master",
        "data": {
            "defaults":  {"name": {"value": "new"}},
            "shortName": "wm8731m",
            "inputs":    0,
            "outputs":   0,
            "category":  "control-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    },
    "AudioSourceLoopContainer":        {
        "type": "AudioSourceLoopContainer",
        "data": {
            "defaults":  {
                "name": {
                    "value": "loop"
                },
                "code": {
                    "value": "void loop() {\n\n\n}\n"
                }
            },
            "shortName": "loop",
            "inputs":    0,
            "outputs":   0,
            "category":  "source-function",
            "color":     "#E6E0F8",
            "icon":      "arrow-in.png"
        }
    }
};
</script>
 
Very cool. So the Export function will also set up the .begin() functions for each node?
David
 
1. The node definition has to be integrated as a real JavaScript object, because some properties are not "transportable" via stringified JSON.
2. The node definition has been extended: definition of value ranges, validation callbacks, editor component descriptions, source generation rules ...

Switching from JSON to Javascript looks great.

Is it possible to structure this so each node's chunk of Javascript appears together with the script-wrapped HTML? Maybe each Javascript chunk could append to the array?

As we add more capabilities to the library, it's really nice to have all the definitions for each node (each object in the library) together as one big block of text that can be copied, pasted and edited when new features are added. Over the next year, I believe we're going to add a *lot* more to the library... more effects, more types of analysis, support for control of many more codec chips and other hardware, wavetable and algorithmic synthesis, playing of other file formats, more types of input & output like USB, network streaming, dual channel I2S (quad channel audio), and high-res options for the built-in ADCs & DAC. Even if the format isn't pretty, it's really convenient for me and anyone contributing to the library when all the definitions are together in 1 location.
 
Is it possible to structure this so each node's chunk of Javascript appears together with the script-wrapped HTML? Maybe each Javascript chunk could append to the array?
Let's say it's no problem to add the help data as a property to the node while initializing the nodes at start, so you can access this data directly without looking for a script chunk. But the static data should be left alone as it is, because if you integrate all the HTML code, the node definitions won't be maintainable any more. So we could add a property to the node, e.g. "helptext", perhaps with a URL, and "park" the help pages in hidden <a> tags; the property then holds a URL which works locally and, if needed, with external help files as well. In terms of structure and maintainability I would even prefer to keep the help data in single files, but as AJAX won't work locally, that option is out. Since we load all the data at the beginning anyway, we can extract the node definitions into a single JavaScript file - a central point of hassle-free configuration: clear structure, handled by language-aware editors, syntax highlighting etc. - this will help to keep track of the source. So the help code stays in the index.html file (where it belongs), and the "nodes" object will be held in a "node-definitions.js" file, because now it's plain JavaScript.
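Just to illustrate the "helptext" idea - a minimal sketch, assuming jQuery is available in the editor; the property name and the lookup function are assumptions, not existing code:

Code:
// Hypothetical sketch: each node definition carries a "helptext" reference,
// either a local anchor parked in index.html or an external URL.
function getHelpHtml(def) {
    var ref = def.data.helptext || "#help-" + def.type;    // e.g. "#help-AudioMixer4"
    if (ref.charAt(0) === "#") {
        return $(ref).html();                              // hidden element in index.html
    }
    return '<a href="' + ref + '" target="_blank">External help</a>';
}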

The dialog handling must be restructured a little - some functions are defined in "view.js", others in "editor.js" - the latter will be the dialog "home", in my opinion. We also have to change the way the source code is generated. At the moment we have to parse and compare a lot of specific values to create correct source code output - but when defined as a property function in the node, we can produce very complex source code constructs without any "investigation" into how e.g. the function to set the parameters is called. This is a real "must be", given your mention that there will be a lot more node types and functions ...

Edit: e.g. the default output will be
- as declaration: <type of node> <unique name>(<parameters>);
- as setup call (to initialize parameters): <unique name>.<parameter name>(<value>);
- all other results will be described in the "getSource" method of the nodes (see the sketch below).
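To make this concrete, here is a minimal sketch of how such default output could be produced from the "call" templates in the node definitions above (the function name generateNodeSource is hypothetical; it only shows the substitution idea):

Code:
// Hypothetical sketch: produce the default declaration and setup calls for
// one placed node, using the "call" templates from its node definition.
function generateNodeSource(node, def) {
    var defaults = def.data.defaults;
    var ctorArgs = "";
    var lines = [];
    Object.keys(defaults).forEach(function (prop) {
        var p = defaults[prop];
        if (!p.call) return;
        var value = (node[prop] !== undefined) ? node[prop] : p.value;
        var filled = p.call.replace("###" + prop + "###", value);
        if (p.call.charAt(0) === "(") {
            ctorArgs = filled;                       // constructor style, e.g. "(###pin###)"
        } else {
            lines.push(node.name + filled + ";");    // method style, e.g. ".gain(0, ###gain0###)"
        }
    });
    // declaration first: <type of node> <unique name>(<parameters>);
    lines.unshift(def.type + " " + node.name + ctorArgs + ";");
    return lines.join("\n");
}

Fed with the AudioInputAnalog definition above, this would emit "AudioInputAnalog adc1(A2);"; for the mixer it would add lines like "mixer1.gain(0, 0.25);".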

And another point: in some comment fragments I stumbled upon the suggestion that the "workspace" related functionality should be removed. Am I right with this assumption? If yes, I will put this point on my agenda as well.
 
So the help code stays in the index.html file (where it belongs), and the "nodes" object will be held in a "node-definitions.js" file, because now it's plain JavaScript.

I'm guessing you're looking at this from a web development best practices point of view. Get the script code out of the .html and into a .js file, because that's how things are supposed to be done. I can appreciate there are a number of good reasons why that's normally the preferred way to do things.

However, a major goal of the design tool is facilitating development of the audio library. Placing the stuff that has to be edited when new objects are added to the C++ code in 2 or more places is pretty undesirable. I really want to keep all the defs in one file, and ideally all the text for each object/node should be together in one easily copied block of text. A set of script tags is fine. I believe we had 3 of them together in groups before the recent round of changes.

The point is the design should make contributing new objects easier, for knowledgeable C++ programmers. It's safe to assume they can copy/paste/modify pretty much any reasonable format. But it's also safe to assume would-be contributors will not want to learn the structure of the Javascript code. It's also safe to assume requiring edits in 2 places (even in the same file) will lead to pull requests with only 1 of them, as we recently saw with the new AK4558 codec.

I know a single large .js script is much cleaner from a Javascript developer perspective. For the tiny code fragments we're expecting non-Javascript programmers to contribute, 50+ tiny scripts that execute on page load to append data to arrays are kinda ugly. But only having to copy/paste/edit 1 block of text in 1 file will really help people contribute to the library. It'll also make things simpler for me. Please consider that when I've been working on the C++ code, I'm thinking about DSP algorithms and hardware details and scaling between fixed point numerical formats and all sorts of other low-level stuff that's pretty much the opposite of how I think while coding Javascript.

In all projects, documentation is the least fun part, so let's make it as painless as possible for contributors to add documentation!


The dialog handling must be re-structured a little - some functions are defined in "view.js" others in "editor.js"
....
that the "workspace" related funcitonality should be removed? Am I right with this assumption?

Restructuring the Javascript code itself is probably a good idea. There's a lot of leftover cruft.
 
Paul, I understand that you have concerns, and even though I take another look at the project as a professional software developer (e.g. the MVC pattern is not really followed here), I think we should (and will) find a solution that satisfies us both. And it's not a JavaScript thing, but a general software development topic.

First of all - perhaps I didn't express it clearly enough - I see the JavaScript code of the node descriptions in one single (!!!) file, and the help data may as well stay in the index.html file. But as I use programming editors and tools, and as the JavaScript code in the node definitions becomes more and more complex, I need to be able to handle this data as a JavaScript file - for editing with syntax highlighting, automatic linking of function definitions and calls, error searching, debugging - functionality which won't work as well in an .html file as in a .js file. So this is a real must-have. But we can meet in the middle: before a commit I can re-insert the changed JavaScript code into the index.html file, so you will again find all the code for the node definitions and the help data in one file.

Even if we move on in this way, rest assured that I'm certain the "one file fits all" philosophy will confuse beginners more than a clear, structured layout of the data. But I have no problem letting two opinions stand face to face; this is no reason for me to lower my contribution to this great tool.

Another point: what do you think about the idea of a "node editor" for the node definitions? And, perhaps, a node "extractor" which interprets the .CPP files and generates a basic node model from them? With such tools the file discussion would become obsolete very quickly. Perhaps - when the tool has reached a state of good usability - we can look into these thoughts.

As an example, I attach the definitions of two nodes, the mixer and the waveform. There you see the complexity of the definitions, because we need information for every settable parameter: how to edit it, how to set it, how to validate its values and so on.

Code:
"AudioMixer4":                     {
	"type": "AudioMixer4",
	"data": {
		"defaults":  {
			"name":  {
				"value": "mixer"
			},
			"getSource": {
				"value": RED.generators.getSrcIndexed
			},
			"gain": {
				"value":    [0.25, 0.25, 0.25, 0.25],
				"validate": RED.nodes.isValidRange,
				"min":      0.0,
				"max":      1.0,
				"input":    "text-array",
				"label":    "Gain chan."
			},
		},
		"shortName": "mixer",
		"inputs":    4,
		"outputs":   1,
		"category":  "mixer-function",
		"color":     "#E6E0F8",
		"icon":      "arrow-in.png"
	}
},
"AudioSynthWaveform":              {
	"type": "AudioSynthWaveform",
	"data": {
		"defaults":  {
			"name":      {
				"value": "new"
			},
			"waveform":  {
				"value":    0,
				"data":     ["WAVEFORM_SINE", "WAVEFORM_SAWTOOTH", "WAVEFORM_SAWTOOTH_REVERSE", "WAVEFORM_SQUARE", "WAVEFORM_TRIANGLE", "WAVEFORM_ARBITRARY", "WAVEFORM_PULSE", "WAVEFORM_SAMPLE_HOLD"],
				"validate": RED.nodes.isValidRange,
				"min":      0,
				"max":      7,
				"input":    "select",
				"label":    "Waveform"
			},
			"amplitude": {
				"value":    "0.5",
				"validate": RED.nodes.isValidRange,
				"min":      0,
				"max":      1,
				"input":    "text",
				"label":    "Amplitude"
			},
			"frequency": {
				"value":    "1000",
				"validate": RED.nodes.isValidRange,
				"min":      0,
				"max":      22000,
				"input":    "text",
				"label":    "Frequency"
			},
			"phase":     {
				"value":    "0",
				"validate": RED.nodes.isValidRange,
				"min":      0,
				"max":      360,
				"input":    "text",
				"label":    "Phase"
			}
		},
		"shortName": "waveform",
		"inputs":    0,
		"outputs":   1,
		"category":  "synth-function",
		"color":     "#E6E0F8",
		"icon":      "arrow-in.png"
	}
},
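The RED.generators.getSrcIndexed referenced above doesn't exist yet; just to make the idea concrete, a minimal sketch of what such an indexed generator could look like (names and signature are assumptions):

Code:
// Hypothetical sketch of an "indexed" source generator: a parameter whose
// value is an array (like "gain" above) expands into one setup call per
// channel, e.g. mixer1.gain(0, 0.25); ... mixer1.gain(3, 0.25);
RED.generators = RED.generators || {};
RED.generators.getSrcIndexed = function (node, prop, paramDef) {
    var values = node[prop] || paramDef.value;   // e.g. [0.25, 0.25, 0.25, 0.25]
    var lines = [];
    for (var i = 0; i < values.length; i++) {
        lines.push(node.name + "." + prop + "(" + i + ", " + values[i] + ");");
    }
    return lines.join("\n");
};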
 
If you're willing to work towards a specific feature, my most desired thing for the design tool would be issue #59.

https://github.com/PaulStoffregen/Audio/issues/59

As the library grows to support more hardware, we're really going to need this to guide novice users to create designs using compatible objects.
This is a matter of some hours - we add two (optional) fields to the node definition: "excludes", where we enter all node types which must not be used in the same flow as the current one, and "expects", where we list all node types which must be present for the current node to operate. The described hardware requirements could be implemented as well.

So it would be helpful if someone could generate a list with all exclusions and expectations; then I can implement this functionality very quickly.
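A minimal sketch of how such a check might look (the "excludes"/"expects" fields and the function name are assumptions, not existing code):

Code:
// Hypothetical flow check: report node types that conflict with or are
// missing for the nodes currently placed on the workspace.
function checkFlowCompatibility(placedNodes, nodeDefs) {
    var presentTypes = placedNodes.map(function (n) { return n.type; });
    var problems = [];
    placedNodes.forEach(function (n) {
        var data = nodeDefs[n.type].data;
        (data.excludes || []).forEach(function (t) {
            if (presentTypes.indexOf(t) !== -1) {
                problems.push(n.type + " conflicts with " + t);
            }
        });
        (data.expects || []).forEach(function (t) {
            if (presentTypes.indexOf(t) === -1) {
                problems.push(n.type + " expects a " + t + " node in the flow");
            }
        });
    });
    return problems;
}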
 
... and as I'm a lazy guy, I'll just write a node.js tool which parses the .cpp classes and generates a node-definition skeleton - this will cause less trouble and is less error-prone than doing it by hand ... when it's usable, I will let you know ... ;-)
 
To explain some changes in the needed HTML code for dialog and help purposes:

In the future only generic data templates are necessary - not one template per node, but one template per element type (like text input, checkbox, dropdown selection). This decreases the amount of static data and the maintenance effort. The dialogs for the node parameters are controlled by the node definitions: every parameter listed under "defaults" will be editable (if not tagged "read-only"), and the field type is described by the "input" entry in the object which describes the parameter. So the dialogs are generated on the fly.

Let's look at an example:

Screenshot 2016-01-12 11.21.20.png

For the moment, three input types are used:

"AudioDefault" is the "dialog base" for all node dialogs, here the form-row is defined and the name might be edited - this parameter is common to all nodes. Under "AudioDefaultParameter" is the HTML snippet found which provides a text input to change the parameter's value. The snippet under "AudioSelectParameter" holds the code for an dropdown selector from which one can choose. The selectable values are declared in the node definitions.

HTML:
<script type="text/x-red" data-template-name="AudioDefault">
    <div class="form-row">
        <label for="node-input-name"><i class="fa fa-tag"></i> Name</label>
        <input type="text" id="node-input-name" placeholder="Name">
    </div>
</script>

<script type="text/x-red" data-template-name="AudioDefaultParameter">
    <label for="node-input-###prop###"><i class="fa fa-tag"></i> ###label###</label>
    <input type="text" id="node-input-###prop###" placeholder="Enter value here">
</script>

<script type="text/x-red" data-template-name="AudioSelectParameter">
    <label for="node-input-###prop###"><i class="fa fa-tag"></i> ###label###</label>
    <select id="node-input-###prop###">###OPTIONS###</select>
</script>
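Roughly, the dialog builder fetches the matching template for each parameter and substitutes the ###...### placeholders. A minimal sketch, assuming jQuery is available in the editor (the helper name buildParamRow is hypothetical):

Code:
// Hypothetical sketch: build a dialog row for one parameter from the
// generic templates above, by substituting the ###...### placeholders.
function buildParamRow(prop, paramDef) {
    var tplName = (paramDef.input === "select")
        ? "AudioSelectParameter" : "AudioDefaultParameter";
    var html = $('script[data-template-name="' + tplName + '"]').html();
    html = html.replace(/###prop###/g, prop)
               .replace(/###label###/g, paramDef.label || prop);
    if (paramDef.input === "select") {
        var options = (paramDef.data || []).map(function (name, i) {
            return '<option value="' + i + '">' + name + '</option>';
        }).join("");
        html = html.replace("###OPTIONS###", options);
    }
    return html;
}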
 
That format looks good.

Normally I'm reluctant to make things more complicated, but there are a couple of other things to consider....

How to generate the AudioMemory number is the main issue. The first line in setup() needs to be AudioMemory(number). Figuring out what number is required will be tricky. Too much wastes resources. The library never takes advantage of extra memory. It only uses what's needed. But if the memory isn't enough, the nodes that can't get memory simply fail. Most are designed to go silent, but some might fail very badly.....

Conceptually, memory can be used within the nodes, and each patchcord might use 1 memory, but typically patchcords use 0. It all depends on how things are connected.

Most nodes never store any memory internally, but some do. The output nodes use a fixed amount, usually 1, 2 or 4. The delay node uses a variable amount, depending on the configured delay length. The FFT objects use a fixed amount. Eventually I'm going to rewrite the flange and chorus to use the audio memory rather than external buffers. Someday we'll get comb and allpass filters, which will also use the memory. All of these use either a fixed amount, or amounts that can be calculated from the settings.

The queue objects also use memory. Unlike all the others where the memory usage is fixed or a function of settings, the amount of memory the queue objects allocate depends on the timing of other code in the Arduino sketch, which is impossible to accurately predict. When a sketch is using a queue object to receive audio data, and it's doing something like writing to the SD card (the Recorder example), usually the SD card will complete writing quickly. But sometimes the card can have very substantial latency, perhaps when the SD library needs to update the FAT filesystem tables, or perhaps when its internal wear leveling or other internal management needs access to the media. During those unpredictable times, the queue objects internally store the audio memory. This allows for simple Arduino programming with high-latency Arduino libraries, but it means the queue objects have memory usage that can't be predicted. Perhaps the GUI should allow the user to specify an anticipated worst-case delay for their code (with a reasonable default), which could be used to calculate the amount those objects would add to the memory estimate?
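For the queue estimate, a back-of-the-envelope calculation might be enough - a sketch, assuming the library's usual 128-sample blocks at ~44.1 kHz (about 2.9 ms of audio per block):

Code:
// Hypothetical sketch: blocks a queue object might hold during a worst-case
// sketch latency, assuming ~2.9 ms of audio per block (128 samples @ 44.1 kHz).
function queueMemoryEstimate(worstCaseLatencyMs) {
    var msPerBlock = 128 / 44100 * 1000;          // ≈ 2.9 ms
    return Math.ceil(worstCaseLatencyMs / msPerBlock);
}
// e.g. a default worst case of 100 ms would add about 35 blocks per queue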

The other issue to consider is configuring nodes in their constructors. Today only 2 nodes exist with this feature: AudioInputAnalog and AudioEffectDelayExternal. These settings can't be configured by functions in setup(). Hopefully the GUI can support these? We probably don't have any support for these with export and import, but probably should. Even if this isn't implemented, it's at least something to know exists.
 
I'm probably going to have to come up with an algorithm for whether a patchcord uses a memory block. But it's going to be complicated. Here goes.....

The main dependency is the order of the objects/nodes in the exported code. Patchcords almost always connect downward, to an object later in the list. Any upward connection, or a connection to the same object will consume 1 memory. That's the only simple case.

Downward connections consume memory temporarily. To find the memory usage, code needs to iterate through the nodes, in their export order. Before starting, set the memory usage of each downward patchcord to zero, and set a max memory variable to zero. Then, iterating at each node, set the downward patchcords connected to its outputs to 1, and the downward patchcords arriving at its inputs to 0. Then add up the sum of all downward patchcords, and update the maximum if greater. Repeat for all nodes. Then at the end, add the maximum found while iterating to the number of upward patchcords for a reasonable estimate of the memory all the patchcords will consume. Add that to the memory consumed inside the nodes, for the total to allocate in AudioMemory().
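A minimal sketch of that iteration (the connection representation and the per-node internalMemory field are assumptions; the node definitions would have to supply the latter):

Code:
// Hypothetical sketch of the patchcord estimate described above.
// nodes: array in export order; cords: [{src: nodeIndex, dst: nodeIndex}, ...]
function estimateAudioMemory(nodes, cords) {
    var upward = cords.filter(function (c) { return c.dst <= c.src; }).length;
    var down   = cords.filter(function (c) { return c.dst > c.src; });
    var usage  = down.map(function () { return 0; });
    var maxInFlight = 0;
    nodes.forEach(function (node, i) {
        down.forEach(function (c, k) {
            if (c.src === i) usage[k] = 1;        // leaving this node's outputs
            if (c.dst === i) usage[k] = 0;        // arriving at this node's inputs
        });
        var inFlight = usage.reduce(function (a, b) { return a + b; }, 0);
        if (inFlight > maxInFlight) maxInFlight = inFlight;
    });
    var internal = nodes.reduce(function (sum, n) {
        return sum + (n.internalMemory || 0);     // per-node fixed/derived usage
    }, 0);
    return maxInFlight + upward + internal;       // value for AudioMemory()
}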

However, this will over-estimate the memory in certain cases where the output of 1 node connects with multiple patchcords to the inputs of several others. The library uses a shared copy-on-write memory management scheme for these cases. Nodes can choose to receive input as read-only or writable. When an input is read-only, the same physical memory is used, so 2 or more patchcords use only 1 memory. That algorithm would neglect this optimization. Knowing whether an input is writable or read-only isn't always simple. Some nodes, notably the mixer, will receive one input as writable and the others as read-only. Which input is writable varies at runtime, depending on whether the connected node actually transmitted any data. The synthesis nodes only transmit when actually generating sound, and most others that pass data through avoid transmitting when they're not receiving. Many nodes detect silent cases and avoid transmitting data. These runtime decisions make reliably predicting which inputs will be read-only on the mixer pretty much impossible. But a conservative estimate which neglects this complicated case is probably fine.
 
If we wanted to refine the memory estimate for the shared copy-on-write optimization, we'd probably need the node definitions to specify the inputs as read-only or writable/unknown. Many of the nodes are always read-only. So while iterating, if a node's output has more than 1 patchcord connected, then another iterative algorithm could be used to detect the case when some of them should not be assumed to consume 1 memory. Rather than try to define that algorithm now, perhaps it's enough to just know that if we ever want to do this, we'll need the node definitions to specify which inputs are known to always be read-only. Maybe that ought to be put into the node defs?
 
Sounds reasonable ... I believe the node definitions will be the core of the Audio GUI, so all parameters that matter should be configured there. I will see if I can "pour" your memory handling into reasonable code.

For the other parameters I propose that you perhaps prepare a node skeleton with all the parameters you find worth mentioning (meaning those which should be configured in the node definitions).
 
A running version of a scanner which parses the Audio/<...>.h files can be found on GitHub. It's a node.js module and easy to install and use. I would be glad if this could help to extract all the information which is already in the source files, to lower the effort and to avoid errors.
 
I haven't had time to look at this. And to be honest, this partially automates a task I'm happy to do and even prefer to do manually.
 
I haven't had time to look at this. And to be honest, this partially automates a task I'm happy to do and even prefer to do manually.
Perhaps I should not have read the ToDo comments in the code - because there the idea of extracting the node structures from the cpp code was mentioned. My idea was to extend the cpp files with the missing data so that everything needed for the nodes could be extracted automatically - after a change, the file would be created by a "bot" and no one would need to touch it. Now that you have made your preferences clear, I see that I assumed wrongly again. But no problem, let's move on.
 
Perhaps I should not have read the ToDo comments in the code - because there the idea of extracting the node structures from the cpp code was mentioned. My idea was to extend the cpp files with the missing data so that everything needed for the nodes could be extracted automatically - after a change, the file would be created by a "bot" and no one would need to touch it. Now that you have made your preferences clear, I see that I assumed wrongly again. But no problem, let's move on.

It would be cool if the Audio System Design Tool could import "unknown" objects from the source code. The most important information is there: the number of inputs and outputs (indirectly from "AudioConnection"), and the name.

That would be a great help for all who use their own objects (I have a growing number ...).
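As a rough idea of what such an import could do - a sketch only, assuming the usual library pattern where a class derives from AudioStream and passes its input count to the AudioStream(n, ...) base constructor; the output count would still need a separate heuristic (e.g. scanning the transmit() calls in the .cpp):

Code:
// Hypothetical sketch: pull class name and input count out of a header file.
function scanHeader(headerText) {
    var found = [];
    var classRe = /class\s+(\w+)\s*:\s*public\s+AudioStream\b/g;
    var m;
    while ((m = classRe.exec(headerText)) !== null) {
        var name = m[1];
        // look for "AudioStream(<n>, ..." after the class declaration
        var rest = headerText.slice(m.index);
        var ctor = /AudioStream\s*\(\s*(\d+)\s*,/.exec(rest);
        found.push({
            type:   name,
            inputs: ctor ? parseInt(ctor[1], 10) : 0
            // outputs: needs a heuristic, e.g. counting transmit(block, ch) channels
        });
    }
    return found;
}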
 
Oops, sorry about that TODO comment. There are probably many more old comments in the code. Please disregard them!

Let's discuss features here on the forum and/or in the issue tracker.

If you're willing to work on it, the GUI feature I'd really like to see most is issue 59.

The really valuable thing about the design tool is guiding designs to only have sensible connections. The library is large with so many features, and growing. This recent case is an example of why we need hardware usage metadata and compatibility checking. Right now, it would allow placing two I2S outputs, because it can't check if they need exclusive hardware access. As we add more features and more ways to interface with hardware, the need to give people advice and prevent designs that can't work due to known conflicts will only increase.
 