views: 240
answers: 7

I'm developing a PHP project on a Linux platform. Are there any disadvantages to putting several thousand images (files) in one directory? This is a closed set which won't grow. The alternative would be to separate these files using a directory structure based on some ID (this way there would be, let's say, only 100 images in one directory).

I ask this question because I often see such separation when I look at image URLs on different sites. You can see that the directory separation is done in such a way that no more than a few hundred images are in one directory.

What would I gain by not putting several thousand files (of a non-growing set) in one directory, but separating them into groups of e.g. 100? Is it worth complicating things?
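For concreteness, here is a rough sketch of the kind of ID-based split I have in mind (the base path and function name below are just placeholders):

    <?php
    // Rough sketch of the ID-based split; the base path and function name
    // are placeholders, not existing code.
    function image_dir_for($id)
    {
        // With sequential IDs, integer division gives buckets of about
        // 100 files each: IDs 0-99 -> "0", 100-199 -> "1", and so on.
        return '/var/www/images/' . (int) ($id / 100);
    }

    echo image_dir_for(1234) . '/photo_1234.jpg';
    // -> /var/www/images/12/photo_1234.jpg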

UPDATE:

  • There won't be any programmatic iteration over files in a directory (just direct access to an image by its filename)
  • I want to emphasize that the image set is closed. It's less than 5000 images, and that is it.
  • There is no logical categorization of these images
  • Human access/browse is not required
  • Images have unique filenames
  • OS: Debian/Linux 2.6.26-2-686, Filesystem: ext3

VALUABLE INFORMATION FROM THE ANSWERS:

Why separate many files into different directories:

  • "32k files limit per directory when using ext3 over nfs"
  • performance reasons (access speed) [but for several thousand files it is difficult to say whether it's worth it without measuring]
A: 

The only reason I could imagine it being detrimental is when iterating over the directory. More files means more iterations. But that's basically all I can think of from a programming perspective.

Gordon
+1  A: 

I think there are two aspects to this question:

  1. Does the Linux file system that you're using efficiently support directories with thousands of files? I'm not an expert, but I think the newer file systems won't have problems.

  2. Are there performance issues with specific PHP functions? I think direct access to files should be okay, but if you're doing directory listings then you might eventually run into time or memory problems (a short sketch follows).
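As a rough illustration of the difference (the paths below are placeholders, not from the question):

    <?php
    // Direct access by a known filename: the cost does not involve
    // listing the directory at all.
    $file = '/var/www/images/photo_1234.jpg'; // placeholder path
    if (is_file($file)) {
        readfile($file);
    }

    // A full listing: scandir() builds an array of every entry, so time
    // and memory grow with the number of files in the directory.
    $entries = scandir('/var/www/images');
    echo count($entries) - 2, " files\n"; // minus "." and ".."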

GSP
+4  A: 

In addition to faster file access by separating images into subdirectories, you also dramatically extend the number of files you can track before hitting the natural limits of the filesystem.

A simple approach is to md5() the file name, then use the first n characters as the directory name (e.g., substr(md5($filename), 0, 2)). This ensures a reasonably even distribution (vs. taking the first n characters of the raw filename).
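A minimal sketch of that scheme (the function name and base path are only illustrative):

    <?php
    // Minimal sketch of hash-based subdirectories; names are illustrative.
    function hashed_path($base_dir, $filename, $levels = 2)
    {
        $hash = md5($filename);
        $parts = array();
        // Use successive pairs of hex characters as nested directories,
        // e.g. a hash starting with "abcd..." gives "ab/cd".
        for ($i = 0; $i < $levels; $i++) {
            $parts[] = substr($hash, $i * 2, 2);
        }
        return $base_dir . '/' . implode('/', $parts) . '/' . $filename;
    }

    $path = hashed_path('/var/www/images', 'photo_1234.jpg');
    // e.g. /var/www/images/ab/cd/photo_1234.jpg (directories depend on the hash)

    // The target directory has to exist before writing:
    $dir = dirname($path);
    if (!is_dir($dir)) {
        mkdir($dir, 0755, true); // recursive creation
    }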

MightyE
More than one level can be useful, going into further levels of subdirectories. For example: ./12/34/56/78/1234567890abc.jpg.
Alister Bulman
OK, so md5 would be a general approach. In my case I already have a unique ID, because every image is associated with exactly one database row (which has its primary key, of course). I think it is a typical scenario.
JohnM2
It is worth considering that these numbers may not be as evenly distributed as md5 hashes would be.
Justin Smith
A: 

Several thousand images are still okay. When you access a directory, the operating system reads the listing of its files in blocks of 4K. If you have a flat directory structure, it may take time to read the whole file listing if there are many files in it (e.g. a hundred thousand).

codeholic
+1  A: 

There is no reason to split those files into multiple directories if you don't expect any filename conflicts and if you don't need to iterate over those images at any point.

But still, if you can think of a meaningful categorization, it's not a bad idea to sort the images a bit, even if it is just for maintenance reasons.

poke
A: 

If changing the filesystem is an option, I'd recommend moving wherever you store all the images to a ReiserFS filesystem. It is excellent at fast storage/access of lots of small files.

If not, MightyE's response of breaking them into folders is most logical and will improve access times by a considerable margin.

Xorlev
+1  A: 

Usually the reason for such splitting is file system performance. For a closed set of 5000 files I am not sure it's worth the hassle. I suggest that you try the simple approach of putting all the files in one directory, but keep an eye on the actual time it takes to access the files.

If you see that it's not fast enough for your needs, you can split them like you suggested.

I had to split files myself for performance reasons. In addition, I bumped into a 32k files limit per directory when using ext3 over NFS (not sure if it's a limit of NFS or ext3), so that's another reason to split into multiple directories. In any case, try with a single dir and only split if you see it's not fast enough.
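Something like this crude timing check is enough to keep an eye on it (the filenames below are placeholders):

    <?php
    // Crude timing check for direct access; filenames are placeholders.
    $dir = '/var/www/images';
    $samples = array('photo_0001.jpg', 'photo_2500.jpg', 'photo_4999.jpg');

    $start = microtime(true);
    foreach ($samples as $name) {
        file_get_contents($dir . '/' . $name); // read each file once
    }
    $ms = (microtime(true) - $start) * 1000;
    printf("read %d files in %.2f ms\n", count($samples), $ms);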

Omry