SdFs - a New SD Library for FAT16/FAT32/exFAT

Status
Not open for further replies.
Yes, I downloaded it again for this test.

I agree, good selection of uSD is key.

What card are you using? There is another possibility, a counterfeit card.

There are some merchants where counterfeit cards are common. I even have a few from Amazon.

Often it is so blatant that you can tell from running the SdInfo example.

This is what a real Samsung Pro Select looks like.

Code:
Type any character to start
init time: 7 ms

Card type: SDXC

Manufacturer ID: 0X1B
OEM ID: SM
Product: 00000
Version: 1.0
Serial number: 0X2A5DED9C
Manufacturing date: 7/2010

cardSize: 64021.86 MB (MB = 1,000,000 bytes)
flashEraseSize: 128 blocks
eraseSingleBlock: true

OCR: 0XC0FF8000

SD Partition Table
part,boot,bgnCHS[3],type,endCHS[3],start,length
1,0X0,0XA,0X9,0X2,0X7,0XFE,0XFF,0XFF,32768,125009920
2,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
3,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
4,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0

Scanning FAT, please wait.

Volume is exFAT
sectorsPerCluster: 256
clusterCount:      488192
freeClusterCount:  421892
fatStartSector:    49152
dataStartSector:   65536

The Manufacturer ID and OEM ID should be the same. Also the sizes and locations should be the same.

Here is the diff between the output for two different Samsung Pro Select 64GB cards.

Code:
10c10
< Serial number: 0X365DDB9C
---
> Serial number: 0X2A5DED9C
31c31
< freeClusterCount:  480120
---
> freeClusterCount:  421892

I didn't reformat them, so the free count varies. Notice that even some parts of the serial number are the same: the trailing byte is 9C and there is a byte with 5D.

The best trick is to take a low-end Samsung card and "upgrade" it by printing a new livery.

Here is the diff between a Samsung EVO and a Pro.
Code:
2c2
< init time: 7 ms
---
> init time: 9 ms
10c10
< Serial number: 0X365DDB9C
---
> Serial number: 0X7853CE88
13c13
< cardSize: 64021.86 MB (MB = 1,000,000 bytes)
---
> cardSize: 64087.92 MB (MB = 1,000,000 bytes)
21c21
< 1,0X0,0XA,0X9,0X2,0X7,0XFE,0XFF,0XFF,32768,125009920
---
> 1,0X0,0XA,0X9,0X2,0X7,0XFE,0XFF,0XFF,32768,125138944
30,31c30,31
< clusterCount:      488192
< freeClusterCount:  480120
---
> clusterCount:      488696
> freeClusterCount:  488375

That changes a $22 card into one costing twice as much. Or buy an older Evo 64GB card for $9.99 and turn it into a $60 Pro+.
 
What card are you using?

SanDisk

Code:
Assuming an SDIO interface.

type any character to start
init time: 155 ms

Card type: SDHC

Manufacturer ID: 0X3
OEM ID: SD
Product: SU32G
Version: 8.0
Serial number: 0X75BE5A20
Manufacturing date: 7/2013

cardSize: 31914.98 MB (MB = 1,000,000 bytes)
flashEraseSize: 128 blocks
eraseSingleBlock: true

OCR: 0XC0FF8000

SD Partition Table
part,boot,bgnCHS[3],type,endCHS[3],start,length
1,0X0,0X82,0X3,0X0,0X7,0XFE,0XFF,0XFF,8192,62325760
2,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
3,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
4,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0

Scanning FAT, please wait.

Volume is exFAT
sectorsPerCluster: 64
clusterCount:      973680
freeClusterCount:  973625
fatStartSector:    10240
dataStartSector:   18432

type any character to start
 
I only have a few cards with this product code SUxxG, where xx is the size in GB. They are all class 4, low-performance cards.

Here is the closest to your card. It's formatted FAT32.

Code:
type any character to start
init time: 111 ms

Card type: SDHC

Manufacturer ID: 0X3
OEM ID: SD
Product: SU32G
Version: 8.0
Serial number: 0X45773607
Manufacturing date: 5/2013

cardSize: 31914.98 MB (MB = 1,000,000 bytes)
flashEraseSize: 128 blocks
eraseSingleBlock: true

OCR: 0XC0FF8000

SD Partition Table
part,boot,bgnCHS[3],type,endCHS[3],start,length
1,0X0,0X82,0X3,0X0,0XC,0XFE,0XFF,0XFF,8192,62325760
2,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
3,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
4,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0

Scanning FAT, please wait.

Volume is FAT32
sectorsPerCluster: 64
clusterCount:      973584
freeClusterCount:  973517
fatStartSector:    9362
dataStartSector:   24576
 
Bill,
how do I truncate a file when closing?
I note that the final file size is also the pre-allocated size.

something like this?
Code:
file.truncate(file.curPosition());
 
Bill,
how do I truncate a file when closing?
I note that the final file size is also the pre-allocated size.

Before closing the file, call file.truncate(). There are two versions. The current html is garbled, it should say this:
bool ExFatFile::truncate ( )

Truncate a file at the current file position.

Returns
The value true is returned for success and the value false is returned for failure.

bool ExFatFile::truncate ( uint64_t length )
inline

Truncate a file to a specified length. The current file position will be set to end of file.

Parameters
[in] length The desired length for the file.

Returns
The value true is returned for success and the value false is returned for failure.

If you do not truncate a file and use Windows, exFAT file size is handled in a way I didn't expect. exFAT directory entries have two lengths, data_length and valid_length. data_length is the amount of space allocated and valid_length is the amount of data actually written.

Windows will set valid length to data length. Or at least show the file size to be data_length. I need to dump a directory after having Windows repair an exFAT volume.

I have implemented exFAT so you can close the file and reopen it with these two lengths maintained.

I like this exFAT Overview. See "Stream Extension Directory Entry".

I return EOF at ValidDataLength. Why would you want to read zeros?

I guess I should do this:

ValidDataLength determines how much actual data has been written to the file. The implementation shall update this field as data is written. The data beyond the valid data length is undefined and the implementation shall return zeros.

I don't allow you to truncate a file after valid_length but before data_length.
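The spec behavior quoted above can be modeled in a few lines (a toy stand-alone model, not SdFs code, which instead returns EOF at validLength): reads below validLength return real data, reads between validLength and dataLength return zeros, and reads at or past dataLength are EOF.

```cpp
#include <cstdint>
#include <vector>

// Toy model of an exFAT Stream Extension entry's two lengths
// (spec behavior: data past validLength reads as zeros).
struct ExFatStream {
  std::vector<uint8_t> written;  // bytes actually written
  uint64_t dataLength;           // space allocated (pre-allocation)

  uint64_t validLength() const { return written.size(); }

  // Read one byte at pos; returns -1 at or after dataLength (EOF).
  int readByte(uint64_t pos) const {
    if (pos >= dataLength) return -1;    // past allocation: EOF
    if (pos >= validLength()) return 0;  // allocated, never written
    return written[pos];                 // real data
  }
};
```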
 
I'm confused (so what else is new?) - does exFAT only work on >32GB SD cards? I tried Teensy36FifoLogger.ino on my Sandisk Extreme 32GB, no joy, and SDFormatter insists on formatting it with FAT32.
 
I'm confused (so what else is new?) - does exFAT only work on >32GB SD cards? I tried Teensy36FifoLogger.ino on my Sandisk Extreme 32GB, no joy, and SDFormatter insists on formatting it with FAT32.
You can use exFAT on smaller cards.

I included an example, ExFatFormatter.ino, that will format smaller cards exFAT.

I have tried to choose a file system layout that should work well.

Don't expect optimal performance since cards 32GB and smaller are designed to have optimal performance with a specific FAT32 (larger than 2GB through 32GB) or FAT16/FAT12 (2GB or less) file system.

The SdFormatter.ino example has no options since it tries to match the SD Association's SD Memory Card Formatter.
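The size thresholds described above can be sketched as a simple mapping (SDSC through 2 GB gets FAT12/FAT16, SDHC through 32 GB gets FAT32, SDXC beyond that gets exFAT; boundaries as stated in the post, lumping FAT12 and FAT16 together):

```cpp
#include <cstdint>

// SD Association default file system for a given card capacity.
enum class SdFormat { Fat12or16, Fat32, ExFat };

SdFormat defaultFormat(uint64_t capacityBytes) {
  const uint64_t GiB = 1024ULL * 1024 * 1024;
  if (capacityBytes <= 2 * GiB) return SdFormat::Fat12or16;  // SDSC
  if (capacityBytes <= 32 * GiB) return SdFormat::Fat32;     // SDHC
  return SdFormat::ExFat;                                    // SDXC
}
```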
 
preAllocate

I note that FsFile does not have preAllocate.
any workaround for FS agnostic programs?
 
I note that FsFile does not have preAllocate.
any workaround for FS agnostic programs?

I have not decided how to handle features like preAllocate where the FAT16/FAT32 version is totally different than the exFAT version.

Here is the FAT version:
bool FatFile::preAllocate ( uint32_t length )

Allocate clusters to an empty file.

The file will contain uninitialized data.

Parameters
[in] length size of the file in bytes.

Returns
true for success else false.

It's not too useful since there is no valid length, and the FAT will be accessed since there is no contiguous-file attribute. If you write a partial sector it must be a rewrite, which kills performance.

It would be easy to add preAllocate(). Any thoughts?
 
for the record, using just exFAT-formatted Sandisk Extreme 32GB
Code:
Assuming an SDIO interface.

type any character to start
init time: 18 ms

Card type: SDHC

Manufacturer ID: 0X3
OEM ID: SD
Product: SP32G
Version: 8.0
Serial number: 0X912A13DA
Manufacturing date: 6/2011

cardSize: 31914.98 MB (MB = 1,000,000 bytes)
flashEraseSize: 128 blocks
eraseSingleBlock: true

OCR: 0XC0FF8000

SD Partition Table
part,boot,bgnCHS[3],type,endCHS[3],start,length
1,0X0,0X1,0X1,0X0,0X7,0XFE,0XFF,0XFF,16384,62317568
2,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
3,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0
4,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0X0,0,0

Scanning FAT, please wait.

Volume is exFAT
sectorsPerCluster: 256
clusterCount:      243364
freeClusterCount:  243361
fatStartSector:    24576
dataStartSector:   32768

type any character to start

manufactured in 2011? sounds old...

now Teensy36FifoLogger reports:
Code:
Type any character to begin

FIFO_DIM = 6
FreeStack: 59587
Type any character to stop

6 maxFifoCount
8913420288 bytes
870.45 seconds
10.24 MB/sec
1326687109 yieldCalls

Type any character to run test again
looks like progress - I will try to incorporate SdFs into my code, but it may take a while given my programming skills...
 
Windows will set valid length to data length. Or at least show the file size to be data_length. I need to dump a directory after having Windows repair an exFAT volume.

I did a number of tests on Windows 10. If you scan a volume for errors that has files with large pre-allocations, there is no error or repair. The volume still works fine with SdFs; validDataLength and dataLength are maintained by Windows 10.

Various Windows programs do strange things. Some die thinking they have been handed an 8GB file.

If the pre-allocated size is smaller, programs like Notepad read the entire file and get zeros for the part beyond validDataLength. Notepad "repairs" the file by replacing the zeros with blanks, then writes a file that extends validDataLength to dataLength. Actually, Notepad adds a huge number of blank lines.

I can open a file with PSPad in hex mode and all data after validDataLength appears as zeros. If I close it without changing anything, the file is not modified and works as expected on SdFs.
 
I have not decided how to handle features like preAllocate where the FAT16/FAT32 version is totally different than the exFAT version.

It's not too useful since there is no valid length, and the FAT will be accessed since there is no contiguous-file attribute. If you write a partial sector it must be a rewrite, which kills performance.

It would be easy to add preAllocate(). Any thoughts?

I would suggest a FS agnostic API. Implementation may depend on FS (with/without real allocation).

BTW, I always thought that pre-allocation is more important for FAT32 than for exFAT, but I may be wrong.
 
I would suggest a FS agnostic API. Implementation may depend on FS (with/without real allocation).

BTW, I always thought that pre-allocation is more important for FAT32 than for exFAT, but I may be wrong.

Pre-allocation only works well on FAT32 if you maintain a fake valid length and a local indication that the file is contiguous. I think FatFS may do that, but only partially. There are problems with this sort-of-exFAT treatment of FAT16/FAT32.

You can't set file length shorter than the first byte of the last cluster for FAT16/FAT32. I tried that in 2008 with my first FAT16 implementation. Windows calls it a corrupt file and removes the extra clusters.

I am leaning toward implementing preAllocate() for FsFile where the FAT32 version will make a file with undefined data. The exFAT version will take advantage of validDataLength/dataLength.

I am probably worrying too much. Few users will use preAllocate(). SdFat has a createContiguous() function which is like open() followed by preAllocate(). It has only been used by users that want to do raw writes to a file or pre-erase the file and seek to locations.

You can use sd.fatType() to discover whether a volume is FAT16, FAT32, or exFAT. I need to fix the documentation for fatType(). It returns 12, 16, 32 for FAT12, FAT16, and FAT32. It returns 64 for exFAT. I need to define FAT_TYPE symbols.
 
FsDateTime callback

Bill,
another issue:
with ExFile I seem to be unable to get file time stamp working
I added the FsDateTime callback.
Should it work? if yes, then I have a bug in my callback implementation (I try to use RTC_TSR and convert the seconds to struct_tm before calling your FS_DATE/TIME functions)

Edit: Solved? I had the year as an offset from 1970, so FS_DATE returned zero. Corrected now.
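The zero result is easy to reproduce: FAT directory timestamps store the year as an offset from 1980, so a 1970-relative year like 47 is out of range. A sketch of the packing (same bit layout as SdFat's FAT_DATE/FAT_TIME macros, with a range guard added here to show the failure mode):

```cpp
#include <cstdint>

// FAT timestamp packing: the year field is an offset from 1980.
// The guard for year < 1980 is added here to illustrate the bug;
// check SdFat's own FAT_DATE/FAT_TIME macros for the real layout.
uint16_t fatDate(uint16_t year, uint8_t month, uint8_t day) {
  if (year < 1980) return 0;  // e.g. a 1970-relative "year" like 47
  return (uint16_t)((year - 1980) << 9 | month << 5 | day);
}

uint16_t fatTime(uint8_t hour, uint8_t minute, uint8_t second) {
  return (uint16_t)(hour << 11 | minute << 5 | second >> 1);
}
```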
 
Bill,
another issue:
with ExFile I seem to be unable to get file time stamp working
I added the FsDateTime callback.
Should it work? if yes, then I have a bug in my callback implementation (I try to use RTC_TSR and convert the seconds to struct_tm before calling your FS_DATE/TIME functions)

Try the TeensyRtcTimestamp.ino example. I ran it with exFAT and this was the output.
Code:
Type any character to begin
DateTime::now 2017-08-30 12:41:44
2017-08-30 12:41           35 RtcTest.txt
Done

On Windows:
Code:
Size            35 bytes
Date created    8/30/2017 12:41 PM
Date modified   8/30/2017 12:41 PM
 
I added FsFile::preAllocate().
bool FsFile::preAllocate ( uint64_t length )
inline

Allocate contiguous clusters to an empty file.

The file must be empty with no clusters allocated.

The file will contain uninitialized data for FAT16/FAT32 files. exFAT files will have zero validLength and dataLength will equal the requested length.

Parameters
[in] length size of the file in bytes.

Returns
true for success else false.

I also corrected a few errors in the documentation and defined symbols for file system types.
 
I added FsFile::preAllocate().


I also corrected a few errors in the documentation and defined symbols for file system types.

Works,
can now compile and run
Code:
SdFs sd;
FsFile file;
with exFat formatted uSD AND file preallocation
 
I decided to try optimizing FAT16/FAT32 for contiguous files. I realized that I had done most of the work with exFAT.

After more tests, I will post it to GitHub.

Here is the difference for the bench example. Note that average performance doesn't improve much but max latency is better for write and read.

This is on an Uno.

Old way FAT32:
Code:
write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
692.00,17120,720,733
693.53,11536,720,731

read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
666.80,2012,756,761
666.71,2012,756,761

New way FAT32 with preAllocate():

Code:
write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
697.74,1644,720,728
697.64,1640,720,728

read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
667.11,1000,756,761
667.29,1216,756,761

The biggest advantage may be that you can do an SD busy test with FAT32. This allows simple AVR loggers with a small FIFO to run at several hundred samples/second. You know how long it will take to write a sector when the card is not busy, since no extra I/O will be done to the FAT.
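The small-FIFO idea can be sketched as a plain ring buffer (a stand-alone toy model, not the actual logger code): the sampler pushes records at a fixed rate, and the loop pops one to write whenever the busy test says the card can take a sector without stalling.

```cpp
#include <cstddef>

// Minimal FIFO of fixed-size records: the sampler pushes, the
// loop pops a record whenever the SD card reports not-busy.
template <typename T, size_t DIM>
struct RecordFifo {
  T buf[DIM];
  size_t head = 0, tail = 0, count = 0, overruns = 0;

  bool push(const T& rec) {
    if (count == DIM) { overruns++; return false; }  // sample lost
    buf[head] = rec;
    head = (head + 1) % DIM;
    count++;
    return true;
  }

  bool pop(T& rec) {
    if (count == 0) return false;
    rec = buf[tail];
    tail = (tail + 1) % DIM;
    count--;
    return true;
  }
};
```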

Here is the result with the CardBusyTest with FAT32 on an Uno.
Code:
Starting write of 10 MiB.
minMicros: 716
maxMicros: 736
15.07 Seconds
695.67 KB/sec

Here is the result for FAT32 using SPI on Teensy 3.6:
Code:
Starting write of 100 MiB.
minMicros: 157
maxMicros: 160
32.72 Seconds
3204.40 KB/sec

Edit: Even the Teensy 3.6 DMA SDIO logger works with FAT32:
Code:
3 maxFifoCount
1633484800 bytes
159.52 seconds
10.24 MB/sec
392708121 yieldCalls
Amazing, only 96 KiB of FIFO buffer. It's one of my best 32GB uSDs.
 
I posted the version of SdFs with two optimizations for FAT16/FAT32 pre-allocated files.

The first avoids access to the FAT table when a file is known to be contiguous.

The second adds a pre-allocate mode which treats all sectors beyond the current file position as containing undefined data. This allows the normal read-update-write operation to be avoided for writes that use the cache. The pre-allocate mode is canceled if the file is re-positioned with a seek or rewind.

The read-update-write operation kills performance with modern SD cards.
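The cost difference can be seen in a toy sector-cache model (assumed behavior, not SdFs internals): in normal mode a partial-sector write must first fetch the sector from the card, while in pre-allocate mode sectors beyond the file position hold undefined data, so the fetch is skipped.

```cpp
#include <cstdint>

// Count card reads needed to service partial-sector writes.
struct SectorCacheModel {
  bool preAllocateMode;    // sectors past file position are undefined
  uint32_t cardReads = 0;

  void writePartialSector(bool beyondFilePosition) {
    // Normal mode must read-update-write: fetch the sector, merge
    // the new bytes, and write it back later. Pre-allocate mode
    // skips the fetch for sectors whose old contents are undefined.
    if (!(preAllocateMode && beyondFilePosition)) cardReads++;
  }
};
```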

Here is the Teensy 3.6 improvement for a pre-allocated FAT32 file with 64 byte writes. I used a high end Samsung Pro 32GB card formatted FAT32.

Before:
Code:
FILE_SIZE_MB = 5
BUF_SIZE = 64 bytes

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
449.28,17130,1,142
438.83,17465,1,145

After:
Code:
FILE_SIZE_MB = 5
BUF_SIZE = 64 bytes

write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
12690.36,71,1,4
12658.23,73,1,4
 
I tried the example "ExFatLogger" with these results

Code:
Log time: 17.73 Seconds
File size: 4058 bytes
totalOverrun: 0
maxFifoCount: 1
maxLogMicros: 16
maxWriteMicros: 160
Log interval: 10000 micros
maxDelta: 0 micros

So "maxWriteMicros = 160" is the time it takes to write the values to the SD card, and "maxLogMicros = 16" is the time it takes to write the values into the adc array, correct?
 
I tried the example "ExFatLogger" with these results

Code:
Log time: 17.73 Seconds
File size: 4058 bytes
totalOverrun: 0
maxFifoCount: 1
maxLogMicros: 16
maxWriteMicros: 160
Log interval: 10000 micros
maxDelta: 0 micros

So "maxWriteMicros = 160" is the time it takes to write the values to the SD card, and "maxLogMicros = 16" is the time it takes to write the values into the adc array, correct?

16 µs is the time to read the ADC values. It took 16 µs to execute the for loop in this function.

Code:
void logRecord(data_t* data, uint16_t overrun) {
  if (overrun) {
    // Add one since this record has no adc data. Could add overrun field.
    overrun++;
    data->adc[0] = 0X8000 | overrun;
  } else {
    for (size_t i = 0; i < ADC_COUNT; i++) {
      data->adc[i] = analogRead(i);
    }
  }
}

Here is the call with the timing.
Code:
      uint32_t m = micros();
      logRecord(fifoData + fifoHead, overrun);
      m = micros() - m;
      if (m > maxLogMicros) {
        maxLogMicros = m;
      }

The 160 µs is the maximum time to write a 512 byte sector to the SD. Here is the code.
Code:
      uint32_t usec = micros();
      if (nb != binFile.write(&fifoData[fifoTail], nb)) {
        error("write binFile failed");
      }
      usec = micros() - usec;
      if (usec > maxWriteMicros) {
        maxWriteMicros = usec;
      }

I probably need better names for the variables maxLogMicros and maxWriteMicros.
 
Thanks for the detailed explanation. So my own project will have an additional "time burden" of 160 µs + 16 µs = ~180 µs if I want to add a datalogger (amazing :D)?
Can the "maxLogMicros" time be reduced depending on the type of SD card (Speed Class)?
 
Thanks for the detailed explanation. So my own project will have an additional "time burden" of 160 µs + 16 µs = ~180 µs if I want to add a datalogger (amazing :D)?
Can the "maxLogMicros" time be reduced depending on the type of SD card (Speed Class)?

maxLogMicros is the time to read sensors and does not depend on the SD card.

maxWriteMicros is the time to write one 512 byte sector to the card. About 140 µs is required for the SPI transfer with a 30 MHz SPI clock. You will gain very little with another SD card. The program only writes to the SD when it is not busy which masks card quality.

The maximum sample rate is limited by the sum of the two times. I suspect you could set the sample interval to 200 µs for a sample rate of 5,000 samples per second.
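The arithmetic behind that estimate is just the reciprocal of the interval, after checking the interval covers both worst-case times:

```cpp
#include <cstdint>

// The sample interval must cover the worst-case sensor read plus
// the worst-case sector write; the rate is its reciprocal.
uint32_t samplesPerSecond(uint32_t intervalMicros) {
  return 1000000UL / intervalMicros;
}
```

With maxLogMicros = 16 and maxWriteMicros = 160, the 176 µs of worst-case work fits in a 200 µs interval with margin, giving 5,000 samples per second.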

ExFatLogger is a simple logger that doesn't use interrupt routines or DMA, so its maximum rate is limited.

Here is a Teensy 3.6 ADC logger that will record one analog pin at over a million samples per second. It over-clocks the ADC by a factor of 2.5 to achieve up to three million samples per second.
 