FatFs significant slowdown in directories with many files

I have a data logging system running on an STM32F7 which stores data to an SD card using ChaN's FatFs: http://elm-chan.org/fsw/ff/00index_e.html

Each new set of data is stored in a separate file within a directory. During post-processing on the device, each file is read and then deleted. After testing the open, read, delete sequence in a directory with 5000 files, I found that the further through the directory I scanned, the slower it got.

At the beginning this loop took around 100-200 ms per file; 2000 files in, it now takes about 700 ms. Is there a quicker way of storing, reading, and deleting the data, or of configuring FatFs?

edit: Sorry, I should have specified: I am using FAT32 as the FAT file system.

f_opendir(&directory, "log");
while(1) {
    f_readdir(&directory, &fInfo);
    if(fInfo.fname[0] == 0) {
        break; //end of the directory
    }
    if(fInfo.fname[0] == '.') {
        continue; //ignore the dot entries
    }
    if(fInfo.fattrib & AM_DIR) {
        continue; //it's a directory (shouldn't be here), ignore it
    }
    sprintf(path, "log/%s", fInfo.fname);
    f_open(&file, path, FA_READ);
    f_read(&file, rBuf, btr, &br);

    //process data...

    f_close(&file); //close before deleting
    f_unlink(path); //delete after processing
}
f_closedir(&directory);

1 Answer

You can keep the directory chains shorter by splitting your files across more than one directory (simply create a new subdirectory for every 500 files or so). This can make access to a specific file quite a bit faster, as the chains to walk become shorter on average. (This assumes that you are not searching for files with a specific name, but rather processing files in the order they were created; in that case the search algorithm can be pretty straightforward.)
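A minimal sketch of this bucketing scheme, assuming the application keeps a monotonically increasing file counter (the `make_log_path` helper, the bucket size of 500, and the naming pattern are all illustrative choices, not part of FatFs):

```c
#include <stdio.h>

/* Map a running log-file index to a path inside a numbered
 * subdirectory, so no single directory ever holds more than
 * FILES_PER_DIR entries and FAT directory chains stay short. */
#define FILES_PER_DIR 500

static void make_log_path(char *path, size_t len, unsigned log_index)
{
    unsigned bucket = log_index / FILES_PER_DIR;
    /* e.g. log_index 3621 -> "log/0007/0003621.dat" */
    snprintf(path, len, "log/%04u/%07u.dat", bucket, log_index);
}
```

When writing, the logger would call `f_mkdir()` for each new bucket directory before creating the first file in it; during post-processing, the reader iterates bucket by bucket and can remove each emptied subdirectory with `f_unlink()` once its files are gone.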

Other than that, there is not much hope of making a simple FAT file system any faster; this is a fundamental limitation of the old FAT design, whose directories are unsorted linear lists that must be scanned entry by entry.
