I am considering using a FAT file system for an embedded data logging application. The logger will only create one file to which it continually appends 40 bytes of data every minute. After a couple of years of use this would be over one million write cycles. MY QUESTION IS: Does a FAT system change the File Allocation Table every time a file is appended? How does it keep track of where the end of the file is? Does it just put an EndOfFile marker at the end, or does it store the length in the FAT table? If it does change the FAT table every time I do a write, I would wear out the FLASH memory in just a couple of years. Is a FAT system the right thing to use for this application?

My other thought is that I could just store the raw data bytes in the memory card and put an EndOfFile marker at the end of my data every time I do a write. This is less desirable though because it means the only way of getting data out of the logger is through serial transfers and not via a PC and a card reader.

+2  A: 

No, a flash file system driver is explicitly designed to minimize the wear and spread it across the memory cells, taking advantage of flash's near-zero seek time. Your data rates are low, so it's going to last a long time. Specifying a yearly replacement of the media is a simple way to minimize the risk.

Hans Passant
Probably a good idea.
Jordan S
+3  A: 

FAT updates the directory table when you modify the file (at least it will if you close the file; I'm not sure what happens if you don't). It's not just the file size, it's also the last-modified date:

http://en.wikipedia.org/wiki/File_Allocation_Table#Directory_table
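
For reference, the 32-byte short directory entry looks roughly like this (field offsets per the FAT specification); appending data and then closing the file rewrites at least the write time/date and file-size fields, which means the sector holding this entry gets rewritten:

    #include <stdint.h>

    /* Rough sketch of a 32-byte FAT short directory entry (offsets per the
     * FAT specification). Appending to a file and closing it updates at
     * least the last-write time/date and the file size, so the directory
     * sector containing this entry is rewritten. */
    typedef struct __attribute__((packed)) {
        uint8_t  name[11];        /* 8.3 short name                          */
        uint8_t  attr;            /* attribute flags                         */
        uint8_t  nt_reserved;     /* reserved                                */
        uint8_t  create_time_ms;  /* creation time, 10 ms units              */
        uint16_t create_time;     /* creation time                           */
        uint16_t create_date;     /* creation date                           */
        uint16_t access_date;     /* last access date                        */
        uint16_t cluster_high;    /* high word of first cluster (FAT32)      */
        uint16_t write_time;      /* last write time  <- updated on append   */
        uint16_t write_date;      /* last write date  <- updated on append   */
        uint16_t cluster_low;     /* low word of first cluster               */
        uint32_t file_size;       /* file size in bytes <- updated on append */
    } fat_dir_entry_t;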

If your flash controller doesn't do transparent wear levelling, and your flash driver doesn't relocate things in an effort to level wear, then I guess you could cause wear. Consult your manual, but if you're using consumer hardware I would have thought that everything has wear-levelling somewhere.

On the plus side, if the event you're worried about only occurs every minute, then you should be able to speed that up considerably in a test to see whether 2 years worth of log entries really does trash your actual hardware. Might even be faster than trying to find the relevant manufacturer docs...
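
A minimal sketch of such a test, where fs_append_record() stands in for whatever open/append/close sequence your FAT library actually provides (it is not a real API):

    /* Accelerated wear test: replay roughly 2 years of one-minute appends
     * (about 1.05 million records) as fast as the card will accept them.
     * fs_append_record() is a placeholder, not a real library call. */
    #include <stdint.h>

    #define RECORD_SIZE   40
    #define MINUTES_2YRS  (2UL * 365UL * 24UL * 60UL)   /* 1,051,200 */

    extern int fs_append_record(const char *path,
                                const uint8_t *data, uint32_t len);

    int wear_test(void)
    {
        uint8_t record[RECORD_SIZE] = {0};

        for (uint32_t i = 0; i < MINUTES_2YRS; i++) {
            record[0] = (uint8_t)i;              /* vary the payload slightly */
            if (fs_append_record("LOG.BIN", record, RECORD_SIZE) != 0)
                return -1;                       /* card or filesystem error  */
        }
        return 0;
    }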

Steve Jessop
I will be using the Microchip MMC library. Not sure if it has wear leveling or not. I will try your idea though.
Jordan S
@Steve: You said, "if your flash controller doesn't do transparent wear leveling..." You are referring to the controller inside the SD card, right? How do I know what that controller does? I haven't seen that information available anywhere. And wouldn't it be different for each SD card manufacturer?
Jordan S
Well, I was talking about flash generically. If you're using an SD card (or other removable media), as opposed to some specific bit of flash soldered to your embedded device, then yes that would be the controller inside the media. You don't have much control over the details if your device accepts any old SD card. I think the SD spec does mandate some wear-leveling and block replacement, but I don't know how good the lowest common denominator is.
Steve Jessop
A: 

If your only operation is appending to one file it may be simpler to forgo a filesystem and use the flash device as a data tape. You have to take into account the type of flash and its block size, though.
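
A minimal sketch of the idea, assuming a hypothetical sector-oriented flash driver (flash_read_sector/flash_write_sector, not a real API) and one 40-byte record per 512-byte sector:

    /* Treat the flash as a data tape: one 40-byte record per 512-byte sector,
     * with the end of data found by scanning for the first erased (all-0xFF)
     * sector. The driver calls are hypothetical; block/page size handling
     * depends on your actual device. */
    #include <stdint.h>
    #include <string.h>

    #define SECTOR_SIZE  512
    #define RECORD_SIZE  40

    extern int flash_read_sector(uint32_t sector, uint8_t *buf);
    extern int flash_write_sector(uint32_t sector, const uint8_t *buf);
    extern uint32_t flash_sector_count(void);

    static int sector_is_erased(const uint8_t *buf)
    {
        for (int i = 0; i < SECTOR_SIZE; i++)
            if (buf[i] != 0xFF)
                return 0;
        return 1;
    }

    /* Find the first erased sector (linear scan; a binary search also works). */
    uint32_t tape_find_end(void)
    {
        uint8_t buf[SECTOR_SIZE];
        for (uint32_t s = 0; s < flash_sector_count(); s++) {
            if (flash_read_sector(s, buf) == 0 && sector_is_erased(buf))
                return s;
        }
        return flash_sector_count();     /* tape is full */
    }

    int tape_append(uint32_t sector, const uint8_t record[RECORD_SIZE])
    {
        uint8_t buf[SECTOR_SIZE];
        memset(buf, 0xFF, SECTOR_SIZE);  /* keep unused bytes erased */
        memcpy(buf, record, RECORD_SIZE);
        return flash_write_sector(sector, buf);
    }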

nategoose
A: 

Large flash chips are divided into sub-pages that are a power-of-two multiple of 264 (256+8) bytes in size, pages that are a power-of-two multiple of that, and blocks which are a power-of-two multiple of that. A blank page will read as all FFs. One can write at most a page at a time; the smallest unit one can write is a sub-page. Once a sub-page is written, it may not be rewritten until the entire block containing it is erased. Note that on smaller flash chips it's possible to write the bytes of a page individually, provided one only writes to blank bytes, but on many larger chips that is not possible. I think in present-generation chips the sub-page size is 528 bytes, the page size is 2048+64 bytes, and the block size is 128K+4096 bytes.

An MMC, SD, CompactFlash, or other such card (basically anything other than SmartMedia) combines a flash chip with a processor to handle PC-style sector writes. Essentially what happens is that when a sector is written, the controller locates a blank page, writes a new version of that sector there along with up to 16 bytes of 'header' information indicating what sector it is, etc. The controller then keeps a map of where all the different pages of information are located.
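
Conceptually, something like this (a simplification of what the controller firmware does, not an actual implementation):

    /* Conceptual sketch of the controller's logical-to-physical remapping:
     * rewriting a logical sector goes to a fresh physical page and only the
     * map entry changes. Toy sizes, hypothetical low-level call. */
    #include <stdint.h>

    #define NUM_LOGICAL_SECTORS  4096u

    static uint32_t l2p_map[NUM_LOGICAL_SECTORS]; /* logical sector -> physical page */
    static uint32_t next_free_page;               /* naive "locate a blank page"     */

    /* Hypothetical low-level call: program one page, storing the logical
     * sector number in the page's spare/header area. */
    extern int nand_program_page(uint32_t page, const uint8_t *data,
                                 uint32_t logical_sector);

    int write_logical_sector(uint32_t lsec, const uint8_t *data)
    {
        uint32_t page = next_free_page++;         /* pick a blank page */
        if (nand_program_page(page, data, lsec) != 0)
            return -1;
        l2p_map[lsec] = page;   /* the old copy becomes stale and is reclaimed
                                   later, when its block is erased */
        return 0;
    }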

A SmartMedia card exposes the flash interface directly, and relies upon the camera, card reader, or other device using it to perform such data management according to standard methods.

Note that keeping track of the whereabouts of all 4,000,000 pages on a 2 gig card would require either having 12-16 megs of RAM, or else using 12-16 meg of flash as a secondary lookup table. Using the latter approach would mean that every write to a flash page would also require a write to the lookup table. I wouldn't be at all surprised if slower flash devices use such an approach (so as to only have to track the whereabouts of about 16,000 'indirect' pages).
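
Back-of-envelope, assuming 512-byte logical sectors (illustrative numbers only):

    /* The arithmetic behind the 12-16 MB figure: a 2 GB card holds about
     * 4 million 512-byte sectors, and each map entry needs 3-4 bytes to
     * hold a page index. */
    #define CARD_BYTES    (2ULL * 1024 * 1024 * 1024)   /* 2 GB card         */
    #define SECTOR_BYTES  512ULL
    #define SECTOR_COUNT  (CARD_BYTES / SECTOR_BYTES)   /* ~4,194,304        */
    /* SECTOR_COUNT * 3 bytes/entry ~= 12 MB; * 4 bytes/entry ~= 16 MB.      */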

In any case, the most important observation is that flash write times are not predictable, but you shouldn't normally have to worry about flash wear.

supercat
I am not worried about the actual data being corrupted; what I worry about is the File Allocation Table itself. If I am modifying a file many, many times, and the Allocation Table is getting updated every time, then the FAT table could get worn out, and I would think that could be a big problem.
Jordan S
Is it possible that the controller inside the SD card could actually move the FAT table around also? I thought I read somewhere that the FAT table has to start at the very first memory address on the card.
Jordan S
One other thing, if a flash byte contains FF and you erase it but don't write anything to it (keep it empty), does that count as one cycle?
Jordan S
@Jordan S: the flash controller can present a different address to the filesystem or block driver ("logical"), from what it uses internally ("physical"). So for the FAT to be at the "start of the disk" from the FAT driver's POV, that just means it must have logical address 0. The controller can "move" it just by assigning a different physical block to that logical address, which is why it needs the lookup tables supercat describes.
Steve Jessop
@Jordan S: Any time a particular logical sector is changed, the entire sector will be copied to a new area of flash and the sector map will be updated to reflect this. The heaviest wear to a single block from writing a single sector one million times would likely be less severe than the heaviest single-block wear from writing a million sectors once each in random order.
supercat
@Jordan S: As for counting flash wear, maximizing endurance requires that all bits of a flash block be programmed to "0" before the block is erased. The flash-chip hardware will do this automatically, but it means (1) writing one or even zero bytes of a flash block and erasing it causes the same wear as writing all the bytes; (2) if a block-erase cycle is interrupted by a power failure, the block may contain an arbitrary mix of ones and zeroes, regardless of what it contained prior to an erase cycle.
supercat
A: 

Did you check what happens to the FAT file system consistency in case of a power failure or reset of your device?

When your device experiences such a failure, you should lose at most the log entry you were writing at that moment. Older entries must stay valid.

No, FAT is not the right thing if you need to read back the data.

You should further consider what happens if the flash memory fills up with data. How do you get space for new data? You need to define the requirements for this case.
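
One possible policy is to treat the log as a ring and overwrite the oldest records when the medium is full. A rough sketch with hypothetical helpers (log_region_count, log_region_erase, log_region_append are not a real API):

    /* Wrap-around logging policy: when the current region is full, move to
     * the next one, erase it, and keep appending. Sketch only. */
    #include <stdint.h>

    extern uint32_t log_region_count(void);
    extern int log_region_erase(uint32_t region);
    /* Returns 0 on success, nonzero if the record no longer fits. */
    extern int log_region_append(uint32_t region, const uint8_t *rec, uint32_t len);

    static uint32_t current_region;

    int log_append_wrapping(const uint8_t *rec, uint32_t len)
    {
        if (log_region_append(current_region, rec, len) == 0)
            return 0;                               /* fit in current region */

        /* Current region full: advance and reclaim the oldest data. */
        current_region = (current_region + 1) % log_region_count();
        if (log_region_erase(current_region) != 0)
            return -1;
        return log_region_append(current_region, rec, len);
    }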

harper
@harper: I am not sure what will happen if power is lost during a write cycle. I will test it when I get it up and running. Why do you say FAT is not the right thing if you need to read back the data? Do you mean because the FAT tables are easy to corrupt? I am not really concerned with running out of memory because with 2GB I should be able to store over 100 years of continuous running data. If they haven't cleared the log file after that long, that means they are probably not using it and won't mind if it overflows. Also, most circuit components aren't even guaranteed to last that long.
Jordan S