Puzzling PWM Problems

I read it. It's gibberish nonsense to cover up the fact that it doesn't know how to proceed. You don't need a logic analyzer or a scope to know how I2C works. Regardless, it has nothing to do with the fact that that state machine isn't anywhere near complete.

Here's a metaphor to explain what the AI has given you:
View attachment 38922

If you already knew how to implement the parts of the code that the AI didn't write, you wouldn't need it to write the parts that it did.
At a social function a few weeks ago, I had occasion to ask a senior software engineer at NVidia about the impact of Claude Code and other AI systems on their coding practices. His response was that the agentic AI coders could generate reasonably good code, but the results almost always required significant testing and some modifications. The junior engineers he worked with too often submitted solutions with inadequate testing and review.


A question I didn't think to ask is "If Claude Code gives you 5000 lines of code to solve a complex problem in your server farm network, how can you be sure you are not infringing on any patents?" I think most companies use a 'black box' approach where the coders have to start with only proprietary code or known open-source code. It seems that asking an external AI for help shoots a lot of holes in the black box. Companies like NVidia probably have proprietary models with carefully-screened training data. Independent developers may have to be very careful when using AI-generated code in commercial products.
 
I'm sorry I left this thread hanging for so long. I encountered most of the problems noted in this post, considered my options, and discovered that exact starting phase sync was not an issue for the demonstration data that I needed. I reverted to simply calling analogWriteFrequency() in setup() and analogWrite() once for each channel with parameters that gave the desired pulse widths and repetition intervals. I looked at the comparison registers to make sure that quantization effects were not an issue, and moved on to the more complex parts of the simulation.
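For reference, that quantization check can be sketched in plain C++. This assumes the PWM timer is clocked at the 150 MHz peripheral clock and ignores the prescaler; the helper names are mine, not the Teensyduino API:

```cpp
#include <cassert>
#include <cmath>

// Assumed Teensy 4.x peripheral clock feeding the PWM timers.
constexpr double kTimerClockHz = 150e6;

// Integer period count the timer would actually load for a requested frequency.
long periodCounts(double requestedHz) {
    return (long)std::lround(kTimerClockHz / requestedHz);
}

// The PWM frequency that integer period actually produces.
double actualHz(double requestedHz) {
    return kTimerClockHz / (double)periodCounts(requestedHz);
}

// Quantization error of the produced frequency, in parts per million.
double errorPpm(double requestedHz) {
    return (actualHz(requestedHz) - requestedHz) / requestedHz * 1e6;
}
```

A 1 kHz request divides 150 MHz exactly (150000 counts, 0 ppm error), which is the kind of result that let me move on without worrying about quantization.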

The more complex issues involve the apparent shift in PRI due to Doppler effects as the receiving satellites pass by the emitting radar and the intermittent reception of the pulses as the rotating radar beam passes by the satellites. This part of the simulation uses known satellite velocities, the relative position of the satellites and radars, the rotation rate of the radar antennas, the beamwidth of the radars, and the spacing of the satellites along their orbital path.
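As a rough sketch of those two effects (non-relativistic Doppler, with my own sign convention that positive radial velocity means the satellite is closing on the radar; the numbers in the usage note are illustrative, not the actual system parameters):

```cpp
#include <cassert>
#include <cmath>

constexpr double kC = 299792458.0;  // speed of light, m/s

// Apparent pulse repetition interval seen by a receiver closing on the
// emitter at radial velocity vr (m/s): the pulses arrive slightly compressed.
double apparentPri(double truePriSec, double vrMps) {
    return truePriSec * (1.0 - vrMps / kC);
}

// How long one sweep of the rotating beam illuminates the satellite:
// dwell = beamwidth / angular rate, where rpm * 6 gives degrees per second.
double dwellSec(double beamwidthDeg, double antennaRpm) {
    return beamwidthDeg / (antennaRpm * 6.0);
}
```

For example, a LEO satellite closing at 7.5 km/s shifts a 1 ms PRI by about 25 ns (25 ppm), and a 2-degree beam rotating at 6 rpm illuminates the satellite for roughly 56 ms per scan.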

I suspect it will take at least five pages of text, three or four drawings, and a few oscilloscope simulations to explain the challenges faced by the engineers who designed the system in 1970 and the operators who collected vital ocean surveillance data for the next five years.

Lest you forget, the engineers who designed the data collection systems were probably limited to measuring timing differences with about 1 microsecond resolution. They achieved PRI resolutions of better than 5 PPM with lots of oversampling and averaging in best-case conditions.
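The oversampling trick can be illustrated with a toy model (my own construction, not the historical hardware): timestamp pulses with a 1 microsecond clock, then divide the span of N intervals by N, so the one-count quantization error is divided by N as well.

```cpp
#include <cassert>
#include <cmath>

// Toy model: pulses at exact multiples of truePriSec are timestamped by a
// counter that ticks every tickSec (quantization modeled as floor).
// Estimating the PRI from the span of n intervals shrinks the worst-case
// quantization error from one tick to about tick/n.
double estimatePri(double truePriSec, double tickSec, int n) {
    long long t0 = 0;                                         // first pulse at t = 0
    long long tn = (long long)std::floor(n * truePriSec / tickSec);
    return (double)(tn - t0) * tickSec / (double)n;
}
```

Averaging 1000 intervals with a 1 microsecond clock gives roughly 1 ns effective resolution, i.e. about 1 ppm on a 1 ms PRI, which is in the same ballpark as the 5 ppm best-case figure above.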

Today, we can achieve 6.7-nanosecond timing resolution with the Teensy hardware timers clocked by the 150 MHz peripheral clock. (That will probably need some software to account for rollover after about 437 microseconds, to handle counts past 65536.)
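A common way to handle that rollover looks something like this. It's a generic sketch, not code from any Teensy library, and it assumes the software reads the 16-bit counter at least once per ~437 microsecond wrap period:

```cpp
#include <cassert>
#include <cstdint>

// Extends a free-running 16-bit hardware counter to 32 bits in software.
// Works as long as extend() is called at least once per wrap (~437 us at
// 150 MHz), so a wrap shows up as the raw value going "backwards".
class CounterExtender {
public:
    uint32_t extend(uint16_t raw) {
        if (raw < last_) high_ += 0x10000u;  // counter wrapped since last read
        last_ = raw;
        return high_ | raw;
    }
private:
    uint16_t last_ = 0;
    uint32_t high_ = 0;
};
```

The same idea scales to a 64-bit extension if captures need to span more than the ~28 seconds a 32-bit count covers at 150 MHz.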

I'll post some code and scope movies after I get back from my vacation cruise to Hawaii and back. There was no room for a T4.1 or oscilloscope in my one carry-on suitcase! ;-) I did bring the laptop, and the ship has Starlink internet, so I can work on the text mentioned above during the 12 sea days on the 16-day cruise.
It sounds like a really interesting project! Enjoy your trip! :)
 
I don't have much more to say to you; the comments you lay out are just plain strange. You seem defensive and scared that the AI will take your job. Don't be scared, embrace it instead :)

I have a couple of friends who won gold at the ICPC two years in a row, 2004-2005, and since December they don't program anything by hand; they just go through PRs and prompt when things need to be changed. They are absolute world-class developers, and one of them also created apps like uTorrent and other widely used, complex applications of the past.

Anyway, you do you. Continue to say that AI is useless for embedded systems if it makes you happy; more and more of the rest of the world knows that's untrue, and for every month you don't start to embrace it you will fall further behind.

I also must have misunderstood what you wanted to build, then, if you can actually do it knowing I2C alone. Wasn't the whole purpose to implement it with FlexIO? That turns it into analog and timing problems. Where will you get that data without looking at an analyzer or scope?
 
Important datapoint on AI:
https://www.nature.com/articles/s41598-025-24658-5

It's a tool in your tool bag - use it wisely.
I agree that it is a tool in the tool bag, but there is no doubt that it gets used more and more, and that its capability has skyrocketed in the last few months.
The article is from the 10th of November last year, so GPT was on 5 (5.1, 5.2, and 5.3 Codex have arrived since then) and Claude Opus was on 4.1 (4.5 and 4.6 have arrived since then).

It is really three months of night-and-day performance difference. That may sound a bit exaggerated, but try googling the term below and look at any of the videos comparing the models with real code examples. The last 3-4 months have seen an absolutely crazy shift in AI capability. Google this if you are interested in the difference:

claude opus 4.1 vs 4.6
 
I read the referenced article. My takeaways were:
1. As of last fall, the LLM models used often failed to generate correct code even after as many as 100 test-and-adjust iterations.
2. The test runs were done in Python. This means their results are hard to apply to the C++ programs we use for Teensy programming.
3. The goal of the testing was to measure energy efficiency.

As a retired developer writing code to provide illustrations, or to record data for personal use, I am more concerned with the dollar cost of using advanced AI coders. My annual budget for software is about $500, split roughly equally between Office 365 (Word, Excel, Copilot, and cloud backup) and a MATLAB Home license (analyzing data from Teensys, graphing, and mapping). Can I afford Claude Opus 4.6?

My dream AI Coding project would look something like this:

"Given the attached TeensyDuino 1.60 source code and USB association specifications, generate a new set of USB libraries which add an API to allow the straightforward addition of new USB devices"

My first test cases would be:
1. The addition of a USB Test and Measurement Class device driver using the T4.1 USB host port. This would allow high-speed transfer of data from multiple Teensy boards to a host controller for storage.
2. A USB Video Class driver using bulk-mode data transfer for the FLIR BOSON thermal imaging camera.
3. A USB Video Class driver using Isochronous transfers for image capture from inexpensive USB cameras.

It took me about 6 months of part-time coding to get working results for tests 1 and 2. The structure of the existing USB libraries makes the addition of new device types very difficult, and it required me to edit and recompile the existing USB library source--after weeks of studying the reference manual register descriptions to figure out exactly what the core code was doing.

AFAIK, no one has released code to solve test #3.

I think it took about two years of work by a group of talented part-time developers to get the MTP driver integrated into TeensyDuino. While MTP transfers are vital to many of my data collection programs, TeensyDuino still shows MTP as "experimental". I think the 'experiments' have produced enough data to show that the MTP code is stable and efficient and can be considered a device driver, not an experiment!

I look forward to the time when the TeensyDuino USB Type menu is not a list of pre-selected driver combinations, but a menu that has check boxes for the drivers your project requires and allows the addition of new check boxes as new drivers become available.
 