SD-card writing making program slow? Why?


Rakeh

A few days ago, I posted about how I want to get data from an IMU sensor and store the values every 3 seconds on the built-in SD card of a Teensy 3.6. After resolving a few issues that were the first priority, I am now faced with a new one that is a real problem for my task.

I am writing the data to a new file on the SD card each time, and after a while my system's response gets slow. From the code posted below you can see that for now I am writing files with a limit of 999. Initially everything works fine, but when the counter reaches around 170 or 180, it takes about 2 seconds to write a file to the SD card, independent of the other delays that are present, and beyond the 280-300 mark a file takes 5-7 seconds to write, which is far too long. My task is highly repetitive: collect data for 3 seconds and store it, again and again, so I worry that by the time I reach 700 or 800 files a single write might take a very long time.

I would like some suggestions: is the system getting slow, with write times increasing, because of the FAT file system, because the Teensy 3.6 itself is slow (which I doubt), because of the SD/SPI library, or because of some fault in my code that makes the write time grow as the loop count increases? I am really confused about it, so any thoughts on how to overcome the issue would be appreciated.


Thank you
-Rakeh

List of included libraries, just for reference:

#include <Wire.h>
#include <SPI.h>
#include <SD.h>
#include <SD_t3.h>
#include <TimeLib.h>
#include "BN55.h"
Code:
void LogFunc()
{
  time_t t = processSyncMessage();
  if (t != 0)
  {
    Teensy3Clock.set(t);
    setTime(t);
  }
  String Year = year();
  String Mon  = month();
  String Day  = day();
  String Hour = hour();
  String Min  = minute();
  String Sec  = second();

  delay(1);
  delay((LOG_INTERVAL - 1) - (millis() % LOG_INTERVAL));

  if ((millis() - syncTime) < SYNC_INTERVAL) return;
  syncTime = millis();

  // blink LED to show we are syncing data to the card & updating FAT!
  logfile.flush();  // physically save any bytes written to the file to the SD card
  {
    unsigned long currentmillis = millis();
    if (currentmillis - previousmillis > interval)
    {
      previousmillis = currentmillis;  // remember when the LED was last toggled
      if (ledState == LOW)
        ledState = HIGH;
      else
        ledState = LOW;
      digitalWrite(ledPin, ledState);
    }
  }

  // Find the first unused LogNNN.txt name (000-999).
  char filename[] = "Log000.txt";
  for (uint16_t i = 0; i < 1000; i++)
  {
    filename[3] = i / 100 + '0';
    filename[4] = (i / 10) % 10 + '0';
    filename[5] = i % 10 + '0';
    if (!SD.exists(filename))
    {
      logfile = SD.open(filename, FILE_WRITE);
      Serial.print("Logging to: ");
      Serial.println(filename);
      break;  // leave the loop!
    }
  }
  if (!logfile)
  {
    Serial.println("Error: log file could not be created");
  }

  logfile.println("No.\tDate\t\tTime\t\tAcc(x)\tAcc(y)\tAcc(z)\t\tGyro(x)\tGyro(y)\tGyro(z)");
  for (int k = 1; k < 6; k++)
  {
    mySensor.readAccel();
    String Accx = (mySensor.accel.x);
    String Accy = (mySensor.accel.y);
    String Accz = (mySensor.accel.z);

    mySensor.readGyro();
    String Gyrox = (mySensor.gyro.x);
    String Gyroy = (mySensor.gyro.y);
    String Gyroz = (mySensor.gyro.z);

    String No = k;
    String dataline = (No + "\t" + Year + "-" + Mon + "-" + Day + "\t" + Hour + ":" + Min + ":" + Sec + "\t\t" + Accx + "\t" + Accy + "\t" + Accz + "\t\t" + Gyrox + "\t" + Gyroy + "\t" + Gyroz);
    logfile.println(dataline);
    delay(400);
  }
}
 
This will take some time - it adds at least 2 seconds to the logging time:
Code:
      for(int k=1;k<6;k++)
      {
// …
        delay(400);
      }

It would be best to find a way to call this code every 400 ms, if that is what is needed, remove the delay(400), and do other things while waiting.
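
For reference, a minimal sketch of that non-blocking pattern (the standard elapsed-millis check; the readAndLogSample() helper is hypothetical and stands in for the IMU read and println):
Code:
unsigned long lastSampleMs = 0;
const unsigned long SAMPLE_INTERVAL_MS = 400;

void loop() {
  unsigned long now = millis();
  if (now - lastSampleMs >= SAMPLE_INTERVAL_MS) {
    lastSampleMs = now;      // schedule the next sample 400 ms from now
    readAndLogSample();      // hypothetical helper: read IMU, write one line
  }
  // other work (serial, LED blink, etc.) keeps running between samples
}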


There are threads about fast data logging - it is best to write data in blocks of 512 bytes to match the SD card's sector size.
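
A rough sketch of that buffering idea, assuming a global File named logfile as in the posted code (buffer and function names are illustrative; any partial block left at the end would still need a final logfile.write(buf, bufLen) before closing):
Code:
const size_t BUF_SIZE = 512;       // one SD sector
char buf[BUF_SIZE];
size_t bufLen = 0;

// Accumulate text and write to the card only in full 512-byte blocks.
void bufferLine(const char *line) {
  for (size_t i = 0; line[i] != '\0'; i++) {
    buf[bufLen++] = line[i];
    if (bufLen == BUF_SIZE) {      // buffer full: write exactly one sector
      logfile.write((const uint8_t *)buf, BUF_SIZE);
      bufLen = 0;
    }
  }
}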
 
I'll try to remove the delay() and use a simple non-blocking way of timing it instead.

Can you point me to a link or blog about how to log data quickly? I searched and only found posts saying that the delay appears and is normal as the number of files increases, which was no help at all. So if you can point me to a link or thread, that would be helpful.


Thank you
-Rakeh
 
By writing to a lot of different files you bring the dynamics of that into play. FAT file systems have only so many entries in the initial directory table and have to allocate more space after some number of files. That takes a little time.

But I don't see where you close any of the files that you opened. File system code usually has some limit on the maximum number of open files, and exceeding that number causes trouble. The limit depends on the specifics of the file system library.
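
For illustration, one way to restructure each logging cycle so the handle is always released, assuming filename and dataline as in the posted code (a sketch, not a drop-in fix):
Code:
// Open, write, and close within one logging cycle so no handle leaks.
File logfile = SD.open(filename, FILE_WRITE);
if (logfile) {
  logfile.println(dataline);  // the header and the five sample lines go here
  logfile.close();            // release the handle and commit the directory entry
} else {
  Serial.println("Error: log file could not be opened");
}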
 
Thank you for your suggestion @UhClem.

I looked at my code and saw that I had forgotten to close the files; you spotted my error exactly. After adding a logfile.close() call, the performance is still not as good as I expected, but now it takes around 1 second to write once the file count goes beyond the 240 mark or so.

It has improved, but the problem is not gone: there is still some delay in writing the files.

I have run into another problem this time: after a certain number of write operations, it stops writing to new files.

In my 1000-file loop, it writes 124 files correctly, but from file 125 on it keeps rewriting the results to that same file 125 unless I reset the Teensy by pushing the button. Then the same thing happens again at 253, where it starts rewriting files, and again at 381 (each failure point is 128 files after the previous one). I don't know what is behind this behavior. If anyone has seen this kind of issue, please let me know.



Thank you
-Rakeh
 
I wondered about file closure and access, but the initial question was about speed, so I focused first on removing that delay(400).

As for the files: a quick guess would be a limit on the number of files in the root directory? Perhaps start a new subdirectory for each 'hundred' set of files?

That should be easy to test, and with 10 directories of 100 files each it shouldn't add new problems - it might work?
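
A rough sketch of that layout, using SD.mkdir() and nested paths from the stock SD library (the Log0/...Log9/ naming scheme and the running index i are illustrative assumptions):
Code:
// Place file i (0-999) in directory Log0/ .. Log9/, 100 files per directory.
char path[24];
unsigned dir = i / 100;
snprintf(path, sizeof(path), "Log%u", dir);
if (!SD.exists(path)) {
  SD.mkdir(path);                  // create the per-hundred directory once
}
snprintf(path, sizeof(path), "Log%u/Log%03u.txt", dir, (unsigned)i);
logfile = SD.open(path, FILE_WRITE);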
 