Call to arms | Teensy + SDRAM = true

Wow, even with the flawed test code, sure looks like adding a 10pF to 15pF capacitor greatly improves the timing margin compared to just using the pin's self capacitance. Of course, the corrected test code should be used before we really draw any final conclusions. Still, it's pretty interesting to see... especially given NXP's answer to this question (to not add any capacitor) which @BriComp found and reposted in msg #49.
 
More tests with the 10pF cap and the latest sketch; the results are quite different on each restart.
Cap Value   Speed        Read fails
10 pF       240 MHz      0
10 pF       254.12 MHz   449297
10 pF       254.12 MHz   426547
10 pF       254.12 MHz   395068
10 pF       254.12 MHz   392256
10 pF       254.12 MHz   417701

Those 5 results at 254 MHz look pretty consistent to me!

@defragster - could I talk you into doing a quick experiment? Not for the code published on GitHub, just as a quick one-off test run, could you add a uint64_t counter which increments on every read? The idea is to get a count of the total number of reads this test performs.

Numbers like 392256 and 449297 look very different when printed as integers. But really what we're dealing with here is the percentage of reads that failed. To turn these integers into percentages for the sake of better comparison, we need to know the total number of reads the test performs.
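For illustration, such a counter could be as simple as the fragment below, meant to be dropped into the existing sketch; the names (totalReads, readErrors, checkWord) are hypothetical and not taken from the published test code.
Code:
// Illustration only: count every read so failures can be reported as a percentage.
// Variable and function names are hypothetical, not from the published sketch.
#include <Arduino.h>

uint64_t totalReads = 0;   // incremented on every read the test performs
uint64_t readErrors = 0;   // incremented whenever a read does not match the expected value

inline void checkWord(volatile uint32_t *addr, uint32_t expected) {
  totalReads++;
  if (*addr != expected) readErrors++;
}

void printResult() {
  double pct = totalReads ? 100.0 * (double)readErrors / (double)totalReads : 0.0;
  Serial.printf("Test result: %llu read errors out of %llu reads (%.4f%%)\n",
                (unsigned long long)readErrors, (unsigned long long)totalReads, pct);
}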
 

Over 57 tests with 3 ReReads :: Extra info: ran for 84.27 seconds {total reads 1,434,451,968}

For the longer 100 ReReads pass it would be about 47,815,065,600.
Code:
EXTMEM Memory Test, 32 Mbyte   SDRAM speed 205.71 Mhz F_CPU_ACTUAL 600 Mhz begin@ 80000000  end@ 82000000

  --- START 57 test patterns ------ with 3 reReads ... wait ...
#############............................................
Test result: 0 read errors

Extra info: ran for 82.87 seconds {total reads 1434451968}

  --- START 57 test patterns ------ with 100 reReads ... wait ...
#############............................................
Test result: 0 read errors

Extra info: ran for 2384.05 seconds {total reads 47815065600}
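
Those totals are consistent with reading the full 32 MByte as 32-bit words once per test pattern per reRead: 8,388,608 words x 57 patterns x 3 reads = 1,434,451,968, and x 100 reads = 47,815,065,600. A quick host-side check of that arithmetic (plain C++, not part of the sketch):
Code:
// Back-of-envelope check of the reported read totals (runs on a PC, not the Teensy).
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t words    = 32ull * 1024 * 1024 / 4;  // 32 MByte of SDRAM as 32-bit words = 8,388,608
  const uint64_t patterns = 57;                       // test patterns per pass
  std::printf("3 reReads:   %llu\n", (unsigned long long)(words * patterns * 3));   // 1,434,451,968
  std::printf("100 reReads: %llu\n", (unsigned long long)(words * patterns * 100)); // 47,815,065,600
  return 0;
}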
 
Good to know.

I would still like to see this test have zero configuration. When someone says 449297 read errors, how do we know if those were from 1.4 or 47.8 billion reads?
 
Not following? This was the total read count for a one-off run - not to be 'incorporated' in any other fashion?

No configuration once compiled. When the sketch starts it does a 90-second run with only 3 ReReads of all tests, then shows a Test result.
EXTMEM Memory Test, 32 Mbyte SDRAM speed 196.36 Mhz F_CPU_ACTUAL 600 Mhz begin@ 80000000 end@ 82000000

--- START 57 test patterns ------ with 3 reReads ... wait ...
#############............................................
Test result: 0 read errors

Extra info: ran for 82.87 seconds {total reads 1434451968}
When that completes it then runs the 40 minute 100 ReRead test pass without 'configuration' or user interaction.
If that longer 100 ReRead is allowed to complete without interruption it presents a Test result.
--- START 57 test patterns ------ with 100 reReads ... wait ...
#############............................................
Test result: 0 read errors

Extra info: ran for 2384.05 seconds {total reads 47815065600}

So, after watching for 90 seconds at power-up, the user can decide to walk away for 40 minutes, or abort on failures from the short test and swap the capacitor or change the test in some other way.

If a user gets 'X' or 1,000 read errors with 3 ReReads, it will likely be about 33 times worse with 100. 3 ReReads was never expected to be conclusive when some errors are present. In a prior post some hundred(s) of ReReads were requested, and 100 already takes 40 minutes. After the above process completes, user input over USB ('k' + Enter) can invoke a 1,000 ReRead test that takes about 10 times 40 minutes. Those numbers are easy to change.
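
In outline, that flow looks roughly like the sketch below - not the actual code, with runAllPatterns() as a hypothetical, stubbed stand-in for the existing pattern-test routine.
Code:
// Rough outline of the flow described above; runAllPatterns() is a hypothetical stand-in
// for the existing 57-pattern test code and is stubbed out here.
#include <Arduino.h>

void runAllPatterns(int reReads) { /* placeholder for the 57-pattern test */ }

void setup() {
  // ... sdram.begin() at the build-selected speed, banner printing ...
  runAllPatterns(3);     // ~90 second quick pass, prints a Test result
  runAllPatterns(100);   // ~40 minute pass, runs unattended, prints a Test result
}

void loop() {
  // After both passes, user input over USB can start the ~10x-longer 1,000 ReRead pass.
  if (Serial.available()) {
    char chIn = Serial.read();
    if ('k' == chIn || 'K' == chIn) runAllPatterns(1000);
  }
}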
 
Not following? ....

No configuration once compiled.

Still way too much configuration!

Look at msg #669. This is typically how we will hear someone report their results (config not even mentioned). How do we compare against other people's results?

The problem with ANY configuration is people will tell the final number, but they're unlikely to also clearly state (or say at all) how they configured the test. This is the reason why widely used benchmark tests like Coremark have zero configuration.
 
Wow, even with the flawed test code, sure looks like adding a 10pF to 15pF capacitor greatly improves the timing margin compared to just using the pin's self capacitance. Of course, the corrected test code should be used before we really draw any final conclusions. Still, it's pretty interesting to see... especially given NXP's answer to this question (to not add any capacitor) which @BriComp found and reposted in msg #49.
I did use the corrected test code, although only for the 10 pF cap, and the results were not much different from the non-corrected test code.

I can test more, just tell me what to test.
 
Building for a chosen speed is an undesired configuration element? Should it have a fixed speed setting for the SDRAM test?
Still not following - that would limit the use of the test, which at this point is finding an unknown upper limit/CAP combination.
At this point (as of last Saturday ...) this is a test of a user-selected ('build'-time) SDRAM access speed against a not-yet-specified CAP.
p#669 seems to document 6 different executions: one at 240 MHz with no errors, then five repeats at 254.12 MHz with similar error counts.

> SDRAM speed is a non-specific and yet unknown variable for testing
> CAP installed is a non-specific and yet unknown variable
> IDE Build speed is a variable (just like with Coremark)
Only the user knows the CAP value, but the Test result could include the F_CPU and SDRAM MHz - currently printed as open text in setup() - and in a version of the 'Extra info' line on this machine.

Unless setup() pauses with a UI to ask the USER to provide the desired frequency before sdram.begin( 32, ???, 1 ).

If a fully automated test were desired, a FIXED range and INCREMENT could be coded so it runs from X=200? MHz to Y=300? MHz in increments of Z=5? MHz. That is a test design first suggested in the preceding sentence. It might work with a second call to sdram.begin(), or it may require EEPROM state storage and forced restarts?
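
A rough sketch of what such an automated sweep could look like, assuming sdram.begin() really can be called repeatedly to re-clock (which, as noted, is not established), with the header/class names assumed from the SDRAM_t4 repo and runPatternTests() purely hypothetical:
Code:
// Hypothetical auto-sweep: NOT the published test, just an illustration of the idea above.
// If sdram.begin() cannot re-clock on a second call, the sweep state would instead need
// EEPROM storage plus a forced restart for each step.
#include <Arduino.h>
#include "SDRAM_t4.h"            // header/class name assumed from the SDRAM_t4 repo

SDRAM_t4 sdram;

// Hypothetical stand-in: run the 57 patterns, return the read-error count.
uint32_t runPatternTests() { /* placeholder */ return 0; }

void setup() {
  Serial.begin(115200);
  while (!Serial && millis() < 4000) {}              // give the serial monitor a moment

  for (uint32_t mhz = 200; mhz <= 300; mhz += 5) {   // X=200, Y=300, Z=5 from the post
    sdram.begin(32, mhz, 1);                         // 32 MByte at the requested clock (arguments per the thread)
    Serial.printf("%lu MHz: %lu read errors\n", (unsigned long)mhz,
                  (unsigned long)runPatternTests());
  }
}

void loop() {}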
 
Sorry, I should be more specific.

I want you to delete variables "readRepeat" and "readFixed" and hard-code the test as if readRepeat = 100 and readFixed = 1.

I especially want this code deleted!

Code:
  while (Serial.available()) {  // send usb TO REPEAT TEST
    chIn = Serial.read();
    if ( '1' == chIn ) readRepeat = 100;
    if ( 'K' == chIn ) readRepeat = 1000;
    if ( 's' == chIn ) readRepeat = 3; // Fast test
    inputSer = true;
  }

I want you to immediately run the test once after sdram.begin(). All the code currently in loop() should be in setup(). Delete everything from loop(). The test runs just once without user prompting, then loop() does nothing.

Please keep the "speed" variable at the top, but make the default 254.

Hopefully this is clearer?
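
Putting those requests together, the restructured sketch might be outlined roughly as below - a sketch only, assuming the SDRAM_t4 header/class names, with runAllPatternTests() as a stubbed stand-in for the existing test code.
Code:
// Outline of the requested structure (illustrative; runAllPatternTests() is a stubbed
// stand-in for the existing test code, hard-coded to 100 reReads).
#include <Arduino.h>
#include "SDRAM_t4.h"          // header/class name assumed from the SDRAM_t4 repo

float speed = 254;             // MHz - kept at the top, default changed to 254 as requested (type assumed)
SDRAM_t4 sdram;

void runAllPatternTests() { /* placeholder: the former loop() body, 100 reReads fixed */ }

void setup() {
  Serial.begin(115200);
  while (!Serial && millis() < 4000) {}
  sdram.begin(32, speed, 1);   // init SDRAM at the chosen clock (arguments per the thread)
  runAllPatternTests();        // run the full test exactly once, no user prompting
}

void loop() {
  // intentionally empty - nothing runs after the single pass in setup()
}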
 
Also nice to have, but not essential, would be a message printed before the test starts to advise the user approximately how long the test needs to run. This can be just a single number for the time taken if using the default 254 MHz speed. If the test takes more than 1 minute, best to round up to the next whole minute rather than specify seconds.
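
One way to print such a message, shown only as a sketch - EST_SECONDS is a made-up constant that would be measured once at the default 254 MHz speed:
Code:
// Illustration only: print an estimated run time, rounded up to the next whole minute.
#include <Arduino.h>

const uint32_t EST_SECONDS = 300;   // hypothetical: total run time measured once at the default 254 MHz

void printEstimate() {
  if (EST_SECONDS <= 60) {
    Serial.printf("    This test takes approximately %lu seconds to complete.\n",
                  (unsigned long)EST_SECONDS);
  } else {
    uint32_t minutes = (EST_SECONDS + 59) / 60;   // round up rather than report odd seconds
    Serial.printf("    This test takes approximately %lu minutes to complete.\n",
                  (unsigned long)minutes);
  }
}

void setup() {
  Serial.begin(115200);
  while (!Serial && millis() < 4000) {}
  printEstimate();   // would be called just before starting the test
}

void loop() {}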
 
Much clearer and more specific - but premature - and costing me sleep. As noted, these are trivial changes (perhaps for a unique SDRAMmark.ino when testing leaves the ALPHA stage) - though the misnamed readFixed will for now go to 0==false as:
> bool quickTest = false; // Do the 57 tests with FEW_REREADS before TYPICAL_REREADS

The test was written with early observations here in mind (errors in the faster FIXED pattern were common, and are now few, which is why readFixed existed) and with no expectation that speeds might go so high, or that such a suitable CAP would be found.
First real use was days ago and got off to a bad start with the library DQS pin misused.
But it performs a FIXED execution and provides two points of feedback {1.5 min and 40 min} while failure is still likely at this point.
If the shorter 3-read test shows results proportionate to the 40-minute 100-read test, then the longer test can be shortened, as the errors come early rather than from extended re-reading - which is why the 'overnight' 1,000-read test is available at this time, in addition to the hundreds of reads your expectations noted - assumed to be 'per test value'.

Hopefully that testing can pause until the Mouser hardware arrives here in some 2+ days, and any apparent confusion can be dealt with then.
Also nice to have, but not essential, would be a message printed before the test starts to advise the user approximately how long the test needs to run. This can be just a single number for the time taken if using the default 254 MHz speed. If the test takes more than 1 minute, best to round up to the next whole minute rather than specify seconds.
Testing is nowhere near a minute. Even without the preceding short 1.5-minute 'SNIFF' {readFixed==quickTest} test, the 100-ReRead test takes approximately 40 minutes (thus the progress bar and ~40-second updates). Recalculating the expected pseudo-random value on every read, combined with reads being 3X slower than writes, is not FAST with 100 rereads over 57 tests, 44 of which carry the pseudo-random overhead.
 
Ok, then let's go with 25 reads for a total test time of approx 10 minutes.

Usually I'm hesitant to edit and commit untested code, but maybe in this situation that would be simpler than going back and forth like this?
 
UPS says Mouser delivery: Tomorrow, Wednesday 1/31 - usually late in the day ...

Back and forth at this point is counterproductive - lost cycles on both ends. Board count is low, and not even half of those are worried about any CAP for the current state of use. The evolution of the base memory test was an offering to investigate initial CAP selection testing. When that is done, as noted, a Coremark-like version is a trivial paring-down of the code.
 
If anyone can run this on the real hardware, a few quick questions...

1: Result now prints total and percentage. Did I get this right?

2: At start 5 minutes is estimated. Is the actual run time really about 5 minutes?

3: Did I mess anything up? (no hardware here for actual running)
 
I can test this in about 20 hours.
 
Set to a testable speed and ran - the elapsed time disappeared - my last build here did that too? The number of ReReads is not shown in any fashion.
Code:
Test capacitor effect effect on SDRAM read timing margin
Clock set 205.71 MHz

    SDRAM hardware initialized.

    This test takes approximately 5 minutes to complete.
    Progress:: '#'=fixed pattern, '.'=PsuedoRand patterns, and 'F' shows Failed test pattern
    If built with DUAL Serial second SerMon will show details.


  --- START 57 test patterns ... wait ...
#############............................................
Test result: 0 read errors (0.0000%)

Extra info: ran for 0.00 seconds



Compile Time:: ...\Documents\GitHub\SDRAM_t4\examples\CapReadSDRAM\CapReadSDRAM.ino Jan 30 2024 14:49:03
EXTMEM Memory Test, 32 Mbyte   SDRAM speed 205.71 Mhz F_CPU_ACTUAL 600 Mhz begin@ 80000000  end@ 82000000
 
Quick reminder, this test is supposed to be run at an overclock speed where at least some read errors occur. CapReadSDRAM is meaningless when run at a slower speed with 0 read errors.
No doubt - but no caps here until tomorrow - then we'll know what that expected range is if @Dogbone06 and I get together on testing. Europe time had me up late for his start ... then it went on the forum before I got back ...
 