Audio Library for Linear Timecode (LTC)?

jkoffman

Well-known member
Hi all,

I'm wondering if anyone has tried using the audio library to read and/or generate linear timecode (LTC). There's more info about LTC here: https://en.wikipedia.org/wiki/Linear_timecode, but the basic idea is that it's a modulated audio signal that encodes time information onto an audio track. It's used extensively in the film and video world, as well as to synchronize devices in live entertainment.

There is an apparently well-featured library called libltc (https://github.com/x42/libltc), but I am not sure I am skilled enough to port it to the Teensy entirely on my own.

There is a person working on this for regular Arduino, but he's hit some processing capability roadblocks: https://hackaday.io/project/7694-arduino-timecode-smpte-ltc-reader-generator-shield

Since the signal is just audio frequency, it seems that it might be possible to use the audio library to both generate and read it. I'm curious if anyone has tried it or started looking into it before. I have a long-standing project I'd love to try, and while it isn't urgent, it would be interesting to get it going.

Any info or ideas welcome!
 
It has been a while since you posted this, but I'm now on to the same thing and was wondering if you ever pursued this project?
I just received the DS3231 I2C modules I ordered and hooked them up to my Teensy 3.2; so far so good.

Next steps:
- Condition the audio signal (add a DC offset), maybe scale it, maybe protect against overvoltage on the TC input. I guess most signals will be either -10 dB or +4 dB line level, but I have read that other TC generators accept up to +/- 5 V and I wonder if that exists in the "wild". Also, figure out what voltage level actually damages the ADC, because the SMPTE signal probably survives some clipping.
- Figure out how to use either the audio library or the ADC library to sample the input (the libltc manual mentions that 8-bit / 16 kHz should do fine, so I think there is no need for an external audio codec) -- see the sketch after this list for one possible starting point.
- Get libltc running on the Teensy, figure out how to make libltc create audio in real time, and how to make use of the sub-second square wave the DS3231 can output to make things more precise.
- Later: maybe add a simple GPS module to use its time for RTC conditioning when used as master.
- Much, much later: maybe port all of this to the ESP32 to add multi-device syncing via Bluetooth or WiFi.
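
For the "sample the input" step, I am imagining something like this completely untested sketch on a Teensy 3.2. Pin A2, the memory numbers, and the decoder hook are just placeholders, and note that AudioInputAnalog samples at the library's fixed ~44.1 kHz rate rather than 16 kHz:

#include <Audio.h>

// Route the on-chip ADC into the Audio library and hand raw 128-sample
// blocks to the main loop, where they could be fed to an LTC decoder.
// The LTC input must be AC-coupled and biased to the ADC's mid-range.
AudioInputAnalog  adc1(A2);
AudioRecordQueue  queue1;
AudioConnection   patchCord1(adc1, queue1);

void setup() {
  AudioMemory(12);          // a dozen 128-sample blocks is plenty here
  queue1.begin();
}

void loop() {
  if (queue1.available() > 0) {
    int16_t *buf = queue1.readBuffer();   // 128 signed 16-bit samples
    // TODO: feed these samples to the LTC decoder
    queue1.freeBuffer();
  }
}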
 
Hi there,

Sadly I never really went anywhere with this project. I was waiting for the other gent to make more headway with libltc, then it all kind of fell off my radar. It looks like he may have stalled out as well.

Are you hoping to only generate LTC, or are you hoping to read as well? For reading and low accuracy generating I was considering just using the Teensy audio shield as an easy way to get audio in and out of the module. Personally, I'm less worried about making the audio signal correct (at least at the start) and more worried about generating and reading LTC successfully.

Keep us posted on how it goes! If I manage to make my way back to this project I will definitely post here. If you're interested in collaborating, let me know!
 
Link for related libltc info:

https://forum.pjrc.com/threads/5108...for-Teensy-3-2?p=175844&viewfull=1#post175844

I looked briefly at libltc. Unfortunately it's all GPLv3 which means none of it can be used directly in the MIT licensed Teensy Audio library, at least not without encumbering the library with GPL requirements.

Of course you can do this yourself, and any GPL requirements are your own responsibility. Just please do not attempt to contribute any code to the library (like github pull requests) which is copied from GPL sources.
 
Link for related libltc info:

https://forum.pjrc.com/threads/5108...for-Teensy-3-2?p=175844&viewfull=1#post175844

I looked briefly at libltc. Unfortunately it's all GPLv3 which means none of it can be used directly in the MIT licensed Teensy Audio library, at least not without encumbering the library with GPL requirements.

Of course you can do this yourself, and any GPL requirements are your own responsibility. Just please do not attempt to contribute any code to the library (like github pull requests) which is copied from GPL sources.

I am not sure what to do with this info. Does it have any consequences for a non-commercial project? Or is it more a hint that, to use libltc effectively, it would have to be incorporated into the audio lib?
 
Hey Evgeny, did you ever make any progress with this project? I’m interested in the same thing but looks like it’s not a simple problem!
 
Hi Tom, for a skilled programmer this would be an easy task, I'm sure. I'm a total noob though, and for me the learning curve was too high. I also must admit I didn't find the Teensy community either helpful or encouraging for a newbie.

There is another technical consideration: I believe the ESP8266 and/or ESP32 is much better suited for this type of project, since you could sync over WiFi with other devices. If you keep working on this, let me know; I have all the parts lying around and would be interested in picking it up again. For practical purposes I have Tentacle Sync devices for now, and they at least show how to do it right :)
 
The following is just my humble opinion, should you want to pursue porting the code to any embedded platform, Teensy or otherwise.

There is some usage of calloc and free (https://github.com/x42/libltc/blob/master/src/ltc.c) which can be problematic in an embedded environment. If you make an oopsy regarding how memory is being used, you'll just lock the Teensy up. If you were developing on a computer, the operating system would protect you from memory oopsies, and your program would crash, but the computer would at least keep running. These memory problems can be intimidating or annoying to work with, but it's not the end of the world. It might be worth changing the program to use static allocation just to avoid memory problems.
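
Just to illustrate the idea (this is not libltc's actual API -- the buffer name and size below are made up), the change is essentially from a heap buffer to one sized at compile time:

#include <stdint.h>

/* Heap version, roughly what ltc.c does internally:
 *   uint8_t *buf = calloc(bufsize, 1);  ...  free(buf);
 * Static version for a microcontroller port: the buffer exists for the
 * whole program, so nothing can leak or fragment. 1920 samples covers
 * one 25 fps frame at 48 kHz (48000 / 25); size it for your worst case. */
#define LTC_FRAME_BUF_LEN 1920
static uint8_t ltc_frame_buf[LTC_FRAME_BUF_LEN];

The tradeoff is that the maximum sample rate and minimum frame rate have to be decided up front.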

I'd first develop a program which encodes to a WAV file and writes it to an SD card, sort of similar to how the example code works (http://x42.github.io/libltc/example_encode_8c-example.html).

Next I'd make a program that decodes that wav file off the SD card, and prints the decoded info to serial.

If you can get those 2 steps working, then you can wrap the encoder and decoder into Audio objects to make them work in the context of a project that uses the Audio library. At that point you can connect whatever ADC, DAC, or codecs that are supported.
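
For anyone picturing that last step: the usual pattern (sketched from memory and untested; everything except the AudioStream API itself is a made-up name) is a small AudioStream subclass whose update() hands each 128-sample block to the decoder:

#include <AudioStream.h>

// Hypothetical wrapper object: receives audio blocks from a patch cord and
// passes the samples to an LTC decoder (decoder calls omitted -- only the
// Audio library plumbing is shown here).
class AudioLTCDecoder : public AudioStream {
public:
  AudioLTCDecoder() : AudioStream(1, inputQueueArray) {}

  virtual void update(void) {
    audio_block_t *block = receiveReadOnly(0);    // one 128-sample block
    if (!block) return;
    // feed block->data[0 .. AUDIO_BLOCK_SAMPLES-1] to the LTC decoder here
    release(block);
  }

private:
  audio_block_t *inputQueueArray[1];
};

An encoder object would be the mirror image: allocate() a block, fill block->data with generated samples, then transmit() and release() it.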

I consider myself pretty well versed, and I had never heard of LTC until just now. I don't have any devices that speak this protocol. So it would be kind of silly for me to try porting the code, as I wouldn't be able to verify it was working in the real world. But just briefly looking at the code, I think it should be possible. Anyway, that's just my 2 cents.
 
As far as the GPL issue, as Paul noted: "GPLv3 which means none of it can be used directly in the MIT licensed Teensy Audio library"

That is just saying that unless the code were offered under a compatible license, it could not go into the PJRC codebase for Teensy, no matter how wonderful, perfect, and working it was, without adding restrictions that conflict with the MIT license on all the other code installed by the Teensyduino installer.

Paul, as time allows, will incorporate outside libraries or work on them to facilitate their use with Teensy, but in this case any effort on that code would be wasted because of the restrictive license noted above.

And as far as help from the forum in general: that only works insofar as somebody on the forum has knowledge of, or interest in, the matter at hand. Paul and Robin are the only PJRC employees on the forum to date; everyone else is here voluntarily, for the fun of it.
 
Evgeny: I'm also not a very experienced programmer, so I might not get much further than you did. However, I do feel like I've gotten some good support from the Teensy community in the past, and I'm sure others in the future would appreciate any attempt to port LTC to an embedded environment. Perhaps you are running into the problem of asking too generalized a question, which often results in either no answer, or such a general response that it doesn't help you move forward.

As far as using WiFi to sync, I don't think that's necessary, at least for what I'd like to do. I want a GPS chip to set the time accurately and then generate LTC from the GPS time. I've read GPS time is accurate to around 100 ns once receiver latency is taken into account; since a single frame at 29.97 fps lasts about 33 ms, that should be plenty accurate for LTC. As long as each device was in sync with GPS time, the devices wouldn't need to talk to each other. Am I missing something?


wcalvert: Thanks so much for your help and direction. Since we aren't storing very much data, static memory allocation seems to make a lot of sense. I need to get an SD card adapter for my Teensy before I can move forward with attempting to encode to a WAV file.

To give you some background: LTC (linear timecode) is a method of encoding standard time/frame data (the US uses SMPTE) onto the audio track of a camera or audio recorder. While it's a leftover from an earlier era of videotape-based cameras and manual editing between tapes, it's actually still really helpful. For example, say you have 10 cameras and an audio recorder filming a concert. Some of them might be fancy professional cameras that will receive a SMPTE sync signal and record it locked to the video and audio they're capturing. Then you have some consumer-grade cameras that might have an audio input, but no sync input, so no way to synchronize with other equipment. If you can feed those cameras an audio signal on one channel that contains the synced timecode (LTC over audio), then when you go to edit your video together later, you can use software to extract this accurate timecode from the audio channel, set the timecode metadata for the clip, and you are back in sync. This avoids the tedium of manually re-syncing several cameras to each other, or to an external audio recorder. There are commercial units that do this currently; they cost around $600 for a pair.
 
defragster: does that mean that porting libltc (which is licensed under the GNU GPL) to Teensy would not work?
Or that it would work, but can't be included in Teensy's codebase, so we likely won't get much support or help on this?
 
The licensing concern means that someone would need to do the port and maintain it in a codebase separate from the core Teensy / Arduino implementation, while respecting the original license. That's a lot of work for what seems like a pretty niche use case.
 
but can't be included in Teensy's codebase, so we likely won't get much support or help on this?

Definitely this, at least not help with code which has an incompatible open source license.

I do believe this is an interesting application. Right now there's no chance I'll work on something like this, since so much more is needed for Teensy 4.0 right now. But if I do look at it someday, where would I even look for equipment which can create the signal, and for ways to verify it's being properly decoded?
 
Paul, I have access to gear that can receive and transmit these signals, and I can bring it to pdxdorkbot for you to check out.
 
I feel like this could be built off of the waveform object, since it is just a square wave, but I don't know enough about the audio library to do it myself. If it's any help, here is the format of an LTC frame, what it's supposed to look like, and various notes on it. I do have some equipment that can generate and read LTC, so I can help with any kind of testing if need be, but I don't have a use for it myself at this time.

[Attachment: screenshot of the LTC frame format]

As far as timing goes, it uses bi-phase mark code, so it would be clocked between 960 Hz at 24 frames per second and 1200 Hz at 30 frames per second. For anyone unfamiliar with bi-phase mark code, it's a way of transmitting clock and data signals over a single wire, so here is a visual of what the data looks like before and after encoding.

[Attachment: diagram of bi-phase mark coding]
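
To spell the rule out in code (just a sketch of the bi-phase mark rule, untested and with made-up names, not a complete LTC generator; for real LTC, samples_per_bit would be sample_rate / (frame_rate * 80), since a frame holds 80 bits):

#include <stdint.h>
#include <stddef.h>

// Render one bit as bi-phase mark code into a sample buffer.
// Returns the new output level so the caller keeps phase continuity.
static int16_t bmc_encode_bit(int bit, int16_t level,
                              int16_t *out, size_t samples_per_bit) {
  level = -level;                           // transition at every bit boundary
  const size_t half = samples_per_bit / 2;
  for (size_t i = 0; i < samples_per_bit; i++) {
    if (bit && i == half) level = -level;   // extra mid-bit transition = '1'
    out[i] = level;                         // e.g. pass level as +/-10000
  }
  return level;
}

Called 80 times per frame with the frame's bits, that produces exactly the kind of waveform shown in the diagram above.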
 
I didn't realize how much you can do with the Teensy audio library. So much to learn!

Would the built-in ADC and DAC work for sampling and producing this LTC signal? Or would I need a separate audio chip for this?

On the RX side, could I use fft512 to sense whether the LTC signal is high or low at a given moment?

On the TX side, how can I trigger square waves to start and stop with high enough precision to produce the LTC signal?

I should probably do some more research before I start throwing around these types of questions, but I figured some early advice might help me avoid some of the bone-headed dead ends I'm sure to encounter along the way. I just lack a frame of reference to understand whether this is the kind of thing one should do with a Teensy, or whether it requires some other hardware on either end.

I did come across this reddit thread https://www.reddit.com/r/electronic...e_help_realise_a_smpte_timecode_reader_using/ where one reply mentioned an analog circuit for reading SMPTE timecode from a 20-year-old BYTE magazine article: https://imgur.com/MemvkPD I wonder if putting this circuit ahead of the Teensy would help simplify things.

I also started researching a hardware solution and came across this: http://www.geocities.ws/mart_in_medina/LTC1601cct.pdf which just seems like a hardware reader of the signal, but might be a good solution to the RX side of the problem.
 
I may as well continue posting my research in case someone else wants to follow the breadcrumbs.

Here are a few arduino examples of SMPTE/LTC readers and generators-
https://forum.arduino.cc/index.php?topic=126677.0
https://forum.arduino.cc/index.php/topic,8237.0.html
https://web.archive.org/web/2012030.../avr-smpte-ltc-audio-time-code-generator.html

Here is another option that looks interesting:
https://babynetslate.wordpress.com/2012/08/13/reference-grade-ebu-time-code-generator/

Or for a full hardware solution- these chips are on ebay for $15
https://www.idt.com/us/en/document/dst/2008b-datasheet
 
I do believe this is an interesting application. Right now there's no chance I'll work on something like this, since so much more is needed for Teensy 4.0 right now. But if I do look at it someday, where would I even look for equipment which can create the signal, and for ways to verify it's being properly decoded?

Since it's just an audio signal, there are PC applications that can generate and decode it. A program called Reaper (a multitrack digital audio editor) can easily generate the audio waveform. It's free if you aren't making money off of it. Or if you want help, I can generate examples for you.

Reading is a bit more complicated, but there are several computer applications that can do it. What OS are you using?

As an aside, I have a personal interest in this (as you might have guessed since I started the thread...years ago). To date I have just not had the time to work on this project, but I still hope to someday...if anyone else makes progress, I am very interested in what you come up with! My ultimate goal would be first to decode a live LTC stream, and second to be able to create an arbitrary LTC stream.
 
Just came across this interesting topic. Could someone upload a WAV file with the LTC code?
Thank you.
Maybe I'll do something with it.
 
Thank you. Is there a simple PC tool to decode the code, too? Preferably from a sound input.
I need this for testing.
I think I'll add LTC encode and decode to the audio library.
 
I think I'll add LTC encode and decode to the audio library.

Before I do something meaningless:
The audio library uses 128 samples per block by default (though it's possible to use shorter blocks). At 44.1 kHz this means 128 / 44100 Hz ≈ 2.9 ms per block. The output will therefore be inaccurate by at least this amount (possibly twice as much). Can you live with that?
If not, the audio library may not be the optimal place for it.
 
The extra processing power of a Teensy 4.0 should be able to handle the shorter blocks if anything.
 