So I'm kicking around a project idea that involves taking multiple (probably 4 or 8 to start) line-level unbalanced mono audio inputs and multicasting them to many smartphones simultaneously over WiFi. The device would create its own 802.11n hotspot; a smartphone would join that hotspot and use a custom app to receive the streams in real time and mix them on the phone. I'm hoping to achieve sub-15-20 ms latency, but I just don't know how feasible that is. Hardware-wise, for an initial proof of concept I was thinking a Teensy 3 might serve as the core of the device, reading the inputs from the ADCs via I2S, muxing them together, and sending them to the WiFi SoC (possibly a Broadcom BCM4718) as IP multicast packets. Although with the BCM4718, I might be able to use it as the core by itself, since according to the product brief it supports I2S.
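To make the mux-and-multicast step concrete, here's a rough sketch of what I have in mind, written as desktop Python just to illustrate the packet format. The group address, port, frame size, and one-word sequence header are all my own placeholder assumptions, not anything from a spec:

```python
# Sketch: interleave one frame of N mono channels into a single UDP
# packet and multicast it. Placeholder group/port/frame-size values.
import socket
import struct

MCAST_GROUP = "239.0.0.1"    # assumed administratively-scoped group
MCAST_PORT = 5004
CHANNELS = 8
SAMPLES_PER_PACKET = 64      # 64 samples @ 48 kHz ~= 1.33 ms per packet

def pack_frame(seq, channel_samples):
    """Interleave CHANNELS lists of SAMPLES_PER_PACKET signed 16-bit
    samples and prepend a 32-bit sequence number so clients can detect
    dropped or reordered packets."""
    interleaved = [channel_samples[ch][i]
                   for i in range(SAMPLES_PER_PACKET)
                   for ch in range(CHANNELS)]
    return (struct.pack("!I", seq)
            + struct.pack("!%dh" % len(interleaved), *interleaved))

def send_frames(frames):
    """Send each packed frame to the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for seq, frame in enumerate(frames):
        sock.sendto(pack_frame(seq, frame), (MCAST_GROUP, MCAST_PORT))
```

With those numbers each packet is 4 + 8 × 64 × 2 = 1028 bytes of payload, which comfortably fits a single 802.11 frame, so one packet per ~1.33 ms of audio.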
I'd prefer to do this with uncompressed audio, but I don't know how feasible that is from a bandwidth standpoint. However, my research suggests that compressed codecs would introduce an undesirable amount of latency. I was hoping to avoid delving into a DSP, and unless I'm missing something I don't see why I'd need one, since I'm not trying to do any operations on the audio data on the device itself; the smartphones would handle that on the client side. But I don't know enough at this point to tell whether that's a key component or not.
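For what it's worth, here's the back-of-the-envelope bandwidth math I've been using, assuming 48 kHz / 16-bit PCM (those format numbers are my guess, not a given):

```python
# Raw PCM bandwidth and per-packet buffering for the assumed format.
channels = 8
sample_rate = 48_000          # Hz (assumed)
bits_per_sample = 16          # (assumed)

audio_bps = channels * sample_rate * bits_per_sample
print(audio_bps / 1e6)        # 6.144 Mbit/s of raw PCM, before IP/UDP overhead

# Buffering latency contributed by packetization, at 64 samples/packet:
samples_per_packet = 64
packet_interval_ms = samples_per_packet / sample_rate * 1000
print(round(packet_interval_ms, 2))   # ~1.33 ms of audio per packet
```

So ~6.1 Mbit/s of raw audio seems well under 802.11n's nominal rates, though I gather multicast frames are often sent at a much lower basic rate, which may be the real constraint here.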
Can anyone provide some insight into whether this is even possible? Right now it's just a pipe dream, and it seems like the bandwidth and latency issues would be a beast to overcome.