
Thread: Problem with TeensyThreads and SD.h playing together

  1. #1
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20

    Problem with TeensyThreads and SD.h playing together

    Using TeensyThreads to alleviate SD write latency issues works very well when it works, but only within a sweet spot of write block sizes of 256-1024 characters. If 2048 or higher is selected, SD.h won't write at all. If 128 or smaller is selected, SD.h writes, but sporadically and with errors. Without using threads, blocks as small as 64 and at least as large as 32k can be written.

    The intended project captures serial data on a Teensy 3.5, processes it, and logs it to an SD card. This works for the most part but SD write latency is a problem as it blocks capture of incoming data. Serial buffering can compensate for this up to a point but occasionally the write latency is extreme, the buffers overflow and data is corrupted. Using TeensyThreads is an attempt to isolate the write operation as a separate thread so the input capture and processing can continue during SD write blocking. As mentioned this works quite well with a block size of 1024 and a slice time of 1ms but fails completely with larger block sizes. It would seem that using threads should work with any block size that works without threads, although performance may differ.

    To test the feasibility of applying threads to this project a simulator was created that just generates random data that approximates actual project data in size and sample rate and writes to the SD card. The following code is the simulator. In particular, the use of threads can be turned on or off with a switch and the block size can be specified. With this code and no threads, block sizes will work from 64 up, exhibiting the latency issues the threading is intended to prevent. With threading enabled only block sizes of 256, 512, or 1024 will work, the sweet spot.

    Actually the project seems so far to be working fine with settings of 1024 and 1ms. I am raising this as a question to see if perhaps there is something that needs to be tweaked in the libraries to allow them to work together better, or if perhaps there is a way to code this better to permit a wider range of options?

    If you would like to try this for yourself, I would suggest using a block length of 2048 with Threads=false for 30 sec. Ideally this should result in zero total and max latency recorded in columns 4 and 5, and about 4100 lines, give or take depending on the random numbers. If there is latency and significantly fewer than 4100 lines, this is the problem threading is intended to avoid. Now, changing only Threads=true, run it again. On my card and T3.5 it will fail immediately as the SD won't write. Now run it again with a block size of 1024 to demonstrate that the threading works with the block size in the sweet spot and greatly reduces the SD latency issue.

    Code:
    #include <SD.h>
    #include <TeensyThreads.h>
    
    //  Tuneable parameters
    unsigned int blockLength = 2048; // Number of characters to log in a block
    unsigned int frameRate = 7;       // Rate new lines are generated (millisec) In actual use this could be several values from 6.7 to 22.
    int lineLength = 45;              // Number of values in a line. In actual use this is about 25 or 45 depending if one or two inputs are logged.
    unsigned int runTime = 30;       // Runtime for the test trial (Sec). In actual use logging must succeed at one hour.
    bool Threads = false;             // Do or don't use a threaded solution.
    byte slice = 1;                  // Slice time for a threaded solution only (millisec).
    
    // Misc other variables
    String dataString = "Count,Millis,Latency,Total Latency, Max Latency,String Length,A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X,Y,Z,AA,BB,CC,DD,EE,FF,GG,HH,II,JJ,KK,LL,MM,NN,OO,PP,QQ,RR,SS,TT,UU,VV,WW,XX,YY,ZZ";
    elapsedMillis frameTime = 0;
    int count = 1;
    double lastLine = millis();
    const int chipSelect = BUILTIN_SDCARD;    // Set SD pin for this card
    File dataFile;                            // Define file for SD write
    char fileName[12] = "TestLog.csv";
    
    void setup() {
      Serial.begin(9600);
      while (!Serial);
    
      if (!SD.begin(chipSelect)) End("Could not start SD: ");  // Start SD driver (and check it succeeded)
      SD.remove(fileName);                       // If the filename exists, erase it
      dataFile = SD.open(fileName, FILE_WRITE);  // Open the data file and leave open
      delay(10);
      if (!dataFile) End("Could not open file: ");
      lastLine = millis() - frameRate;
    
      if (Threads) {
        threads.setSliceMillis(slice);
        threads.addThread(writeLog);
      }
    }
    
    void loop() {
    
      if (frameTime >= frameRate) {           // Generate a new line at the prescribed frame rate
        if (Threads) threads.suspend(1);      // Pause the writer thread while the line is appended
        dataString += "\n";
        dataString += count++;
        dataString += ",";
        dataString += millis();
        dataString += ",";
        int Latency = millis() - lastLine - frameRate;
        static int totLatency = 0;
        static int maxLatency = 0;
        totLatency += Latency;
        if (Latency > maxLatency) maxLatency = Latency;
        dataString += Latency;
        dataString += ",";
        dataString += totLatency;
        dataString += ",";
        dataString += maxLatency;
        lastLine = millis();
        dataString += ",";
        dataString += dataString.length();
        for (int i = 0; i < lineLength; i++) {
          dataString += ",";
          dataString += random(175, 1875);
        }
        frameTime = 0;
        if (Threads) threads.restart(1);
      }
    
      // If not using threads, this section does the write to the SD card when a new block of text is full
      if (!Threads) {
        if (dataString.length() > blockLength) {
          Serial.print(dataString.substring(0, blockLength));
          if (!dataFile.print(dataString.substring(0, blockLength))) End("SD write failed: ");
          dataString.remove(0, blockLength);
          dataFile.flush();
        }
      }
    
      if (millis() > runTime * 1000) End("Run complete: ");
    
      if (Threads && (frameTime < frameRate / 2)) threads.yield();  // (Only for threads) If no new frame is expected soon, yield some time to the SD process.
    }
    
    // Shut off the logging when the prescribed time expires or an error occurs.
    void End(String message) {
      threads.stop();
      Serial.print("\n\n");
      Serial.print(message);
      Serial.print(millis() / 1000);
      Serial.println(" Seconds");
      while (1);
    }
      
    // If threads are used, this section writes the data to the SD card when a text block is complete
    void writeLog() {
      while (1) {
        if (dataString.length() > blockLength) {
          Serial.print(dataString.substring(0, blockLength));
          if (!dataFile.print(dataString.substring(0, blockLength))) End("SD write failed: ");
          dataString.remove(0, blockLength);
          dataFile.flush();
        }
        threads.yield();
      }
    }
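
    A side note on the shared buffer: loop() and writeLog() both touch the global dataString, and the suspend/restart calls only guard one direction. The hazard, and the usual mutex fix, can be sketched on a desktop with std::mutex (a conceptual analogy only, not Teensy code; all names here are made up):

```cpp
#include <algorithm>
#include <mutex>
#include <string>

// Shared log buffer, analogous to the sketch's global dataString.
static std::string logBuffer;
static std::mutex bufMutex;

// Producer side (loop() in the sketch): append one line under the lock.
void appendLine(const std::string& line) {
    std::lock_guard<std::mutex> guard(bufMutex);
    logBuffer += line;
}

// Consumer side (writeLog() in the sketch): atomically take up to
// blockLength characters. The substring+remove pair must be one
// critical section; otherwise the producer may be appending while the
// consumer erases, and the string's internal buffer can be reallocated
// mid-operation.
std::string takeBlock(std::size_t blockLength) {
    std::lock_guard<std::mutex> guard(bufMutex);
    const std::size_t n = std::min(blockLength, logBuffer.size());
    std::string block = logBuffer.substr(0, n);
    logBuffer.erase(0, n);
    return block;
}
```

    TeensyThreads offers its own locking primitives for the same purpose on the Teensy side (see the library's locking documentation).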

  2. #2
    Senior Member+ defragster's Avatar
    Join Date
    Feb 2015
    Posts
    15,098
    Run data may only be semi-representative, given it is using a T_4.1 with the SD card already plugged in.

    This is IDE 1.8.15 and TD 1.54 with the newer faster SdFat code supporting the SD code. Is TD 1.54 in use there? Usage of the underlying SdFat boosts writes some 5 or more times over TD 1.53 and prior SD.h code.

    The other suggestion is that the slower T_3.5 may just be a bit behind the curve keeping up. What F_CPU is in use, the default 120 MHz or OC'd?

    As posted using T_4.1:
    Code:
    ...
    4095,29951,0,70,5,1310,965,1383,1403,1428,388,789,1566,1723,1352,1222,1602,179,869,1342,442,1310,923,1658,1338,813,418,1044,1312,568,704,686,630,1031,447,466,461,1138,979,556,742,1557,1176,1786,1421,945,906,1781,263,600,1643
    4096,29958,0,70,5,1535,215,396,1629,1404,782,1122,928,1592,1103,272,239,1238,1836,281,1583,980,1748,1286,1823,717,424,1591,1178,1487,730,355,622,416,483,280,1607,1240,1497,318,1613,731,1460,1587,1163,1573,1184,572,1779,1513,687
    4097,29965,0,70,5,1763,498,1650,1545,1299,545,1235,542,629,295,824,1219,923,370,1167,1254,1156,1759,943,1211,1137,427,599,1700,1240,1448,1832,548,1768,1520,1029,1361,1197,1255,365,629,302,347,1814,843,912,954,397,335,1680,304
    4098,29972,0,70,5,1989,875,1766,662,678,877,1486,774,313,343,348,852,1125,459
    
    Run complete: 30 Seconds
    Same T_4.1 TThreads == true with same 2048 buffer:
    Code:
    ...
    4101,29941,0,0,0,1230,910,746,1848,1718,988,703,789,912,626,852,1824,187,502,576,255,1085,1441,1001,857,1305,268,937,1129,927,687,1070,580,760,1130,611,1228,1561,1263,299,186,1589,1837,1047,1327,564,1601,927,681,976,687
    4102,29948,0,0,0,1450,1535,285,1488,1623,1758,1354,1241,177,1302,681,1797,1713,680,960,1389,1107,986,1224,886,298,1764,1561,1031,1849,1657,585,1693,521,818,1204,1613,197,291,711,1704,442,397,1828,1617,1062,427,871,439,263,1261
    4103,29955,0,0,0,1677,1685,902,348,1385,214,1510,541,1031,374,1743,1334,669,889,1718,1036,1200,1830,1654,596,1175,967,1686,1484,764,1627,543,208,528,380,411,1295,268,1067,1808,467,1086,1490,1823,709,591,241,1667,1779,1412,880
    4104,29962,0,0,0,1903,1125,1047,737,1044,1383,1066,1656,1056,955,946,1152,653,195,275,1022,1015,1626,933,1191,1729,1453,1833,433,793,497,391,1447,1513,424,1751,11
    
    Run complete: 30 Seconds
    And again on T_4.1 using 4096:
    Code:
    // Tuneable parameters
    unsigned int blockLength = 4096; // 2048; // Number of characters to log in a block
    unsigned int frameRate = 7;      // Rate new lines are generated (millisec) In actual use this could be several values from 6.7 to 22.
    int lineLength = 45;             // Number of values in a line. In actual use this is about 25 or 45 depending if one or two inputs are logged.
    unsigned int runTime = 30;       // Runtime for the test trial (Sec). In actual use logging must succeed at one hour.
    bool Threads = true; //false;    // Do or don't use a threaded solution.
    byte slice = 1;
    Code:
    ...
    4086,29888,0,0,0,3281,1216,1229,377,1668,1528,1519,865,238,1836,838,1636,661,1722,1225,460,1332,1659,1271,1111,612,1168,1775,1641,700,1052,1208,1242,995,1842,319,1763,1173,782,1465,1582,1598,1195,1393,473,896,1415,878,1523,388,230
    4087,29895,0,0,0,3512,1197,1527,1201,1670,1426,593,1630,1568,1748,1410,256,926,723,1296,1348,476,1077,263,1460,498,463,270,1076,1707,377,1686,338,440,751,674,1058,1176,203,525,737,355,209,225,1339,1297,1310,1848,696,789,747
    4088,29902,0,0,0,3736,618,1395,1025,1769,931,420,910,448,845,409,191,1098,1320,253,1623,825,873,1187,1177,1818,1756,534,1172,1140,974,195,533,1410,684,264,1132,1745,421,1189,1508,418,1522,694,647,702,1592,1851,751,1652,637
    4089,29909,0,0,0,3959,1669,937,1213,743,615,568,482,202,1068,1022,644,1386,1246,597,687,758,915,1589,1740,971,1562,1874,848,1239,966,1111,1833,377,1560,89
    
    Run complete: 30 Seconds

  3. #3
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    I'm using IDE 1.8.13, TD 1.54, and the default T3.5 120 MHz.

    Interesting that block sizes that won't work on my T3.5 worked fine on a T4.1. Even your result without threads looks pretty good, although that can vary quite a bit from one run to another depending, it seems, on luck with the SD directory.

    Probably you are right and it just needs to be kept down to block length = 1024 so the T3.5 can keep up somehow. Honestly as long as this continues to work well it's fine with my project. Logically it would seem a 4096 block would be more efficient but it really doesn't seem to affect the resulting file noticeably.

    Thanks for testing this.

  4. #4
    Senior Member+ defragster's Avatar
    Join Date
    Feb 2015
    Posts
    15,098
    Nice TD 1.54 in use. Might help to bump T_3.5 F_CPU ...

    It might also be the SD card in use where the AData one here is happy to cooperate at good speed.

    Never worked much with T_3.5 ... at least not recently. The T_3.6 was the core Beta and 3.5 at the end ... and not needing 5V tolerance and having 3.6's with USB_Host made it easy to neglect. Then came the T_4.0 and 4.1 betas and even the T_3.6 got neglected ...

  5. #5
    Senior Member vjmuzik's Avatar
    Join Date
    Apr 2017
    Posts
    822
    If I had to hazard a guess, it may be that larger block sizes take longer than the 1ms time slice allows, so the process gets interrupted when it switches back to the other thread and doesn't recover when going back to the SD thread. You should try adding the thread locking described here: https://github.com/ftrias/TeensyThreads#locking and see if that helps. I know that at least when I was working on my USB Ethernet and NativeEthernet in a multithreaded way, the locks were required when doing this, otherwise it would get interrupted and fail. Your mileage may vary though, since the FNET library I'm using is already designed to support multithreading if you provide it with the couple of required functions it needs.

  6. #6
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    The result is the same with longer slices. My T3.5 fails to write @2048 even if the slice time is increased to 100ms.

    The threaded process works with a block length of 1024 on three different cards; two different models of newish 32GB cards and one very old 32 MB card. In all cases they fail to write at all at 2048. Given defragster's results on a 4.1 it would seem the common element is the T3.5.

    I admit I really don't understand the thread locking being described but if it locks the process to the SD operation until it completes it would seem to defeat the purpose of using threads in the first place. If the context can't skip out of the SD blocking and service the incoming data it's back to the latency and data corruption problems.
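
    For what it's worth, a lock does not have to be held for the whole SD write. A common pattern is to hold it only long enough to swap the full buffer for an empty one, so capture is blocked for microseconds while the slow write runs on a private copy with no lock held. A desktop sketch of that idea (std::mutex standing in for the library's lock; names made up, not Teensy code):

```cpp
#include <mutex>
#include <string>
#include <utility>

static std::string captureBuf;  // filled by the capture side
static std::mutex swapMutex;

// Capture side: quick append under the lock.
void capture(const std::string& line) {
    std::lock_guard<std::mutex> guard(swapMutex);
    captureBuf += line;
}

// Writer side: swap the full buffer out under the lock, then the caller
// performs the slow "SD write" on the private copy with no lock held.
std::string drainForWrite() {
    std::string full;
    {
        std::lock_guard<std::mutex> guard(swapMutex);
        std::swap(full, captureBuf);  // O(1); the lock is held only briefly
    }
    return full;
}
```

    With this shape, locking does not defeat the purpose of threading: the capture thread never waits on the card, only on a pointer swap.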
    Last edited by choochoo22; 07-22-2021 at 05:48 AM.

  7. #7
    Senior Member+ defragster's Avatar
    Join Date
    Feb 2015
    Posts
    15,098
    Wasn't sure what was happening as far as timing and T_4.1 was at hand.

    Just for fun with :: unsigned int blockLength = 8*4096; // 2048; // Number of characters to log in a block

    Code:
    4071,29765,0,0,0,32211,346,1852,697,993,1139,810,1733,451,1470,662,607,413,184,1768,768,1504,1769,316,1761,1326,1549,1006,842,1757,878,1595,411,911,1842,1005,1109,1717,1751,736,828,1589,1420,724,1017,436,647,1119,913,587,1363
    4072,29772,0,0,0,32437,258,371,178,1676,536,236,264,1416,882,1689,1687,1216,681,211,860,235,1740,932,582,278,1048,560,1859,1739,1255,1116,1480,285,756,1126,752,560,1085,912,899,180,281,945,1337,295,705,193,694,1745,869
    4073,29779,0,0,0,32656,1239,401,594,1764,1186,514,1214,369,1109,970,450,874,1140,915,1847,1868,1577,1449,1763,1331,416,760,405,10
    
    Run complete: 30 Seconds
    And a second run:
    Code:
    4071,29677,0,1,1,32139,346,1852,697,993,1139,810,1733,451,1470,662,607,413,184,1768,768,1504,1769,316,1761,1326,1549,1006,842,1757,878,1595,411,911,1842,1005,1109,1717,1751,736,828,1589,1420,724,1017,436,647,1119,913,587,1363
    4072,29684,0,1,1,32365,258,371,178,1676,536,236,264,1416,882,1689,1687,1216,681,211,860,235,1740,932,582,278,1048,560,1859,1739,1255,1116,1480,285,756,1126,752,560,1085,912,899,180,281,945,1337,295,705,193,694,1745,869
    4073,29691,0,1,1,32584,1239,401,594,1764,1186,514,1214,369,1109,970,450,874,1140,915,1847,1868,1577,1449,1763,1331,416,760,405,1097,1816,1349,260,746,382,487,540,703,1029,369,901,365,876,568,1765,1027,
    
    Run complete: 30 Seconds
    So maybe it is the SD card speed? Pulled out a T_3.5 - FAILED on start at 32KB

    But stepping down to 4096 byte buffer on T_3.5 at 120 MHz : unsigned int blockLength = 4096;
    Code:
    4212,29947,0,1,1,3617,1038,1786,961,867,1734,1321,681,1732,535,1736,837,1739,725,381,1522,697,176,1029,672,215,1760,302,1827,1314,326,1060,1013,323,861,219,1717,1519,821,1213,1592,1048,262,668,877,419,1357,594,771,1083,1620
    4213,29954,0,1,1,3841,1175,1724,509,1396,1633,1609,845,1753,1381,659,520,1248,1161,676,1075,746,1156,1769,575,184,1533,1813,1761,613,851,1669,558,1027,239,925,439,613,647,770,443,896,381,1597,1345,383,216,1718,1714,1640,586
    4214,29961,0,1,1,4065,472,725,1231,1746,202,1870
    
    Run complete: 30 Seconds
    T_3.5 Going to : blockLength = 2048; { and 8192
    Code:
    SD write failed: 0 Seconds
    At 144 MHz 1024 and 4096 work but 2048 and 8192 fail.

    at 4096 and 144 MHz:
    Code:
    4210,29945,0,14,13,3642,1272,1594,485,1241,507,205,727,450,485,1060,637,821,1506,778,1287,349,276,1705,215,449,1235,1660,1222,611,1458,256,1757,962,1811,203,972,1614,631,1258,1184,206,1027,1478,687,925,432,1373,1586,1365,355
    4211,29952,0,14,13,3867,1069,1502,1524,901,1105,1552,1117,787,1097,1450,1367,823,656,1282,1872,245,1829,1658,1593,701,1463,326,1085,1253,1219,829,322,996,180,1784,1716,489,1021,1471,1736,1112,622,232,1509,1644,559,1406,667,768,408
    4212,29959,0,14,1
    
    Run complete: 30 Seconds
    and again :
    Code:
    4221,29955,0,10,9,2551,1787,1712,800,783,1708,1544,1235,1471,1753,756,1545,985,293,427,960,1239,1015,392,1442,657,1202,522,1407,258,1668,1149,1230,874,923,401,285,1067,465,1778,1529,793,383,1097,472,1221,1437,383,946,771,297
    4222,29962,0,10,9,2776,1155,657,1098,1282,734,1855,805,1467,969,1424,536,1197,1421,1490,1188,1771,1549,811,253,1444,686,842,1507,983,503,1151,1747,308,1514,1015,831,999,347,1210,259,972,
    
    Run complete: 30 Seconds

  8. #8
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    Based on those results it would seem the "sweet spot" for a 3.5 may be a combination of block length and the particular card. I've tried 1k, 2k, 4k, 8k, 16k and 32k and all of them write with the threading off (with other issues) but none above 1k have worked with threading on.

  9. #9
    Senior Member
    Join Date
    Jul 2014
    Posts
    3,318
    What happens if you combine all the file operations in the same thread, instead of splitting them between setup() and the writeLog thread?

  10. #10
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    I tried moving the file creation statements from setup() into writeLog() like this, so that all file operations are in the writeLog() thread. The result is that the file fails to open every time.

    Code:
      static bool startSwitch = true;
      if (startSwitch) {                           // Only do this once
        SD.begin(chipSelect);                      // Start SD driver
        SD.remove(fileName);                       // If the filename exists, erase it
        dataFile = SD.open(fileName, FILE_WRITE);  // Open the data file and leave open
        threads.delay(10);
        if (!dataFile) End("Could not open file: ");
        startSwitch = false;
      }
    It seems that this would likely cause problems even if it did work. While writeLog() was busy creating the file, loop() would be processing log data that wasn't being written. It seems better to get the file ready to write in setup() before input data is processed. Since those operations are never executed again, there shouldn't be a problem with that.

    In further test runs 4096 works sometimes. Unfortunately, settings that usually work also sometimes fail to write after running successfully for a few seconds. It seems to be worse with the random number replaced by a fixed 4-digit value, which does make the data a little longer.

    The inconsistency concerns me.

  11. #11
    Senior Member
    Join Date
    Jul 2014
    Posts
    3,318
    The issue is that you need to close the file at the end.
    Using your code from the original post, I did:
    Code:
    void End(String message) {
      dataFile.close();
      threads.stop();
      Serial.print("\n\n");
      Serial.print(message);
      Serial.print(millis() / 1000);
      Serial.println(" Seconds");
      while (1);
    }

  12. #12
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    The failures that are occurring happen before the logging reaches timeout. Either the SD writing fails to start in the first place or stops inexplicably in mid log. When an error has already occurred and caused the code to go to End(), nothing can be done there to cause the error to un-happen.

  13. #13
    Senior Member
    Join Date
    Jul 2014
    Posts
    3,318
    I had no error on the test program.
    Code:
    Run complete: 30 Seconds
    But in general, files that are not closed properly may show up as zero size. If you inspect them with a hex editor you will see that the data are there.
    So, if doing long-term logging, always close the file regularly. But this is common knowledge.
    Anyhow, your program has no dataFile.close(), so it will always show size 0.

  14. #14
    Senior Member+ defragster's Avatar
    Join Date
    Feb 2015
    Posts
    15,098
    Check out the included example: ...\hardware\teensy\avr\libraries\SD\examples\SdFat_Usage\SdFat_Usage.ino

    Maybe the right size in: if (myfile.preAllocate(40*1024*1024)) {

    would work in setup(), before logging, to help the SD write process complete in a more timely fashion.
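
    The idea behind preAllocate, reserving the file's space up front so the filesystem isn't allocating clusters mid-run, can be tried on a desktop too, where the analogous POSIX call is posix_fallocate. A rough sketch of the same idea (path and size are arbitrary; this is not Teensy code):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <cstdio>

// Reserve space for a log file up front, analogous in spirit to SdFat's
// file.preAllocate(40*1024*1024) on Teensy. Returns the resulting file
// size in bytes, or -1 on failure.
long preallocateLog(const char* path, long bytes) {
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return -1;
    long size = -1;
    // posix_fallocate extends the file and reserves its blocks so later
    // writes need not allocate; it returns 0 on success.
    if (posix_fallocate(fd, 0, bytes) == 0) {
        struct stat st;
        if (fstat(fd, &st) == 0) size = (long)st.st_size;
    }
    close(fd);
    return size;
}
```

    On a FAT-formatted SD card the win is bigger than on a desktop filesystem, because cluster allocation there can stall the card for many milliseconds.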

  15. #15
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    Quote Originally Posted by WMXZ View Post
    I had no error on the test program.
    ...
    Anyhow, your program has no dataFile.close(), so it will always show size 0.
    If you ran the posted code as-is, threading is off so the alleged problem with SD.h and TeensyThreads would not occur. You need to set Threads = true and experiment with different block sizes. Whether errors occur depends somewhat on the settings, the card, and the Teensy model, apparently, and sometimes on luck. Look at defragster's results. Any given hardware may work with some block sizes and not others. My testing also has turned up several instances where settings that seem to work most of the time eventually don't, even with nothing changed, just luck.

    It has not been my experience that leaving the file open results in zero file size. The flush command has pretty much the same effect as close without actually closing the file (and thereby requiring re-opening).

    Also, there is a reason for not closing the file. One of the use cases for the project is open-ended. That is, the run time is not known in advance and logging is terminated only by power being cut. This does not seem to cause any file issues. File size is reported correctly and the logged data is readable with a spreadsheet or other programs, up to the point of the last flush anyway and that's fine. Conceivably this could cause file and possibly card corruption but in practice it hasn't been an issue.
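
    The flush-without-close behavior described above has a direct analogue in C stdio: after fflush() the data is handed to the OS and is visible to an independent reader while the file stays open for further writes. A small desktop sketch (the path is arbitrary; not Teensy code):

```cpp
#include <cstdio>
#include <string>

// Write a line and flush without closing; a second handle can then see
// the data, analogous to SD flush() making the data readable without
// close(). Returns what the independent reader saw.
std::string writeFlushRead(const char* path) {
    FILE* out = std::fopen(path, "w");
    if (!out) return "";
    std::fputs("logged line\n", out);
    std::fflush(out);                 // push to the OS; file stays open

    FILE* in = std::fopen(path, "r"); // independent reader sees flushed data
    char buf[64] = {0};
    if (in) { std::fgets(buf, sizeof buf, in); std::fclose(in); }
    std::fclose(out);
    return std::string(buf);
}
```

    Whether the FAT directory entry (and thus the reported size) is also updated on flush is the point under debate in this thread; the data itself is committed either way.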

  16. #16
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    Quote Originally Posted by defragster View Post
    Check out the included example: ...\hardware\teensy\avr\libraries\SD\examples\SdFat_Usage\SdFat_Usage.ino

    Maybe the right size in: if (myfile.preAllocate(40*1024*1024)) {

    would work in setup(), before logging, to help the SD write process complete in a more timely fashion.
    It occurred to me some time ago that reserving some space in advance might alleviate some of the problems but I didn't know how to do it. Thanks for that, I'll experiment with it.

  17. #17
    Senior Member+ defragster's Avatar
    Join Date
    Feb 2015
    Posts
    15,098
    Quote Originally Posted by choochoo22 View Post
    It occurred to me some time ago that reserving some space in advance might alleviate some of the problems but I didn't know how to do it. Thanks for that, I'll experiment with it.
    That is a new feature AFAIK brought in with the change to SdFat inclusion and indeed @PaulStoffregen did well to exemplify its usage.

    Hope it helps.

  18. #18
    Senior Member
    Join Date
    Jul 2014
    Posts
    3,318
    The size information is only written to the file's directory entry when you close the file.
    This does not mean the data are not written to the file (check with a hex editor).
    Allocating huge file space will record the allocated space as the file size, which will be as wrong as a zero file size is.
    I bet, if you open a zero-size file with a low-level program, you will be able to read the content.
    To ensure realistic file sizes you can always close and re-open with append at regular intervals (say once every minute/hour/day, depending on the expected granularity).
    I would finally argue that open-ended files (a single file for the whole application) are close to bad practice, and not only for the reason of your problems.
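
    The close-and-reopen-at-intervals suggestion can be sketched in portable C++. On the SD card it is the close that updates the directory entry; reopening in append mode carries on where the log left off (function name and path are made up; not Teensy code):

```cpp
#include <cstdio>
#include <string>

// Append one record, closing the file afterwards. On FAT/SD media the
// close is what refreshes the directory entry; opening with "w" on the
// first call of a run truncates any old log, while "a" preserves it on
// each subsequent reopen.
bool appendWithReopen(const char* path, const std::string& line, bool firstOpen) {
    FILE* f = std::fopen(path, firstOpen ? "w" : "a");
    if (!f) return false;
    std::fputs(line.c_str(), f);
    return std::fclose(f) == 0;
}
```

    The interval between closes trades metadata freshness against the extra latency of the open/close cycle, per the minute/hour/day granularity suggested above.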

  19. #19
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    Quote Originally Posted by WMXZ View Post
    The size information is only written to file directory entry when you close the file....
    I'm sorry, I don't mean to be argumentative. I know you are trying to help and know much, much more about all this than I do, but this point is simply not correct. I've been running previous versions of this application since last fall, as have others. In the past few days I've run dozens of test runs of the posted simulator and the primary application. None of them have ever included a .close statement, only .flush. All runs that are otherwise successful record the file size in the directory. Without actually counting the bytes in a 70MB file, the sizes reported all seem plausibly correct.

    It is true that without the .flush statement the file size will be reported as zero. I lack the means or knowledge of examining the content of a reported zero size file but it doesn't suit my needs in any case. You don't need a hex editor or other exotic tools. The .txt and .csv files written with the flush statement can be easily read with common applications like Notepad, Excel, etc. Please try it and verify this for yourself. You can just run the simulator code as posted and if it runs without a problem, as it did for you the last time, you can check the directory entry and read the resulting file with Notepad. In fact, if you still have it on your card, just open the file generated the last time you ran it.

    As for the effects of pre-allocating, I don't know as I haven't tried that yet.
    Last edited by choochoo22; 07-24-2021 at 07:59 AM.

  20. #20
    Senior Member
    Join Date
    Jul 2014
    Posts
    3,318
    @choochoo22
    OK, I wrote a test program and I see that you are correct: flush() does give the same file size as close().

    I was rerunning code in OP with Thread=true and got no error

    Code:
    Run complete: 30 Seconds
          933888 TestLog.csv
    not sure if filesize is correct
    Last edited by WMXZ; 07-24-2021 at 08:58 AM.

  21. #21
    Member
    Join Date
    Mar 2021
    Location
    Oceanside, CA
    Posts
    20
    Quote Originally Posted by defragster View Post
    Check out the included example: ...\hardware\teensy\avr\libraries\SD\examples\SdFat_Usage\SdFat_Usage.ino

    Maybe the right size in: if (myfile.preAllocate(40*1024*1024)) {

    would work in setup(), before logging, to help the SD write process complete in a more timely fashion.
    Thanks again for that example, it was really useful. After a good deal of experimenting and floundering, here is what I observed:

    • Relative to the problem mentioned in the OP, these new SD features seem to have no effect; using them with TeensyThreads is still problematic.
    • There are five different initialization methods illustrated in the example; the first three don't seem to work as written with my T3.5.
    • This method works, but somewhat worse than the simple SD initialization in my OP:
      Code:
        // Access the built in SD card on Teensy 3.5, 3.6, 4.1 using DMA (maybe faster)
        //ok = SD.sdfs.begin(SdioConfig(DMA_SDIO));
    • This initialization method hugely reduces write times and latency on my T3.5. For example: doing a pre-erase of the 80MB file with the DMA method above took about 2 min; doing the same with this FIFO method took about 6 sec, so about 20 times faster.
      Code:
        // Access the built in SD card on Teensy 3.5, 3.6, 4.1 using FIFO
        //ok = SD.sdfs.begin(SdioConfig(FIFO_SDIO));
    • Using pre-allocation seems to improve performance somewhat.
    • Pre-erasing the allocated space with zeros has no discernible effect. It doesn't seem logical that it should, but it was recommended in other threads. It does have some benefit in making it easier to manually edit the file later if it is closed without truncate(), but it's too time consuming to be worthwhile.
    • Eliminating the flush() with each write seems to help reduce SD latency but means the file must be closed. This causes some operational issues in my project, but it looks like it can be accommodated, and performance is still very good even with the flush().

    In the end I've applied the FIFO initialization to my project along with pre-allocation and eliminating the flush() on each write when possible. The latency in this solution is vastly improved from earlier project versions without the complication and inconsistency experienced with TeensyThreads and pre-erasing is not needed.

    Even though the OP concern was not addressed, the more important objective (to me) of reducing the impact of write latency in my project has been met.
