I'm in the process of implementing caching for my project. While looking at cache directory structures, I've seen many examples like:
- cache
- cache/a
- cache/a/a
- cache/a/...
- cache/a/z
- cache/...
- cache/z
You get the idea. Another example is for storing files: say our file is named IMG_PARTY.JPG; a common way is to store it at a path like:
files/i/m/IMG_PARTY.JPG
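For concreteness, here's a minimal Python sketch of how I assume such a path is derived (the function name and lowercasing are my own guesses, not from any particular library):

```python
import os

def sharded_path(root: str, filename: str) -> str:
    """Build a path like files/i/m/IMG_PARTY.JPG from the first two
    characters of the (lowercased) file name. My own assumption of the scheme."""
    name = filename.lower()
    return os.path.join(root, name[0], name[1], filename)

print(sharded_path("files", "IMG_PARTY.JPG"))  # files/i/m/IMG_PARTY.JPG on *nix
```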
Some thoughts come to mind, but I'd like to know the real reasons for this.
- Filesystems doing linear lookups find files faster when there are fewer of them in a directory, and a structure like this spreads the files thinly. (Wild guess; see the sketch after this list.)
- To avoid tripping up *nix utilities like rm, which take a finite number of arguments; deleting a large number of files at once tends to be awkward (having to pass them through find, xargs, etc.).
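To sanity-check the first guess, here's a rough simulation. Sharding on a hash of the name is my own variant (the examples above shard on the name's first letters); the point is just that two levels keep each directory small:

```python
import hashlib
from collections import Counter

def bucket(name: str) -> str:
    """Return a two-level shard like 'a/f' from the file name's MD5 digest."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return f"{digest[0]}/{digest[1]}"

# Simulate a million cache entries and count how full each directory gets.
counts = Counter(bucket(f"entry_{i}.jpg") for i in range(1_000_000))
print(len(counts), "directories")                  # 256 (16 * 16 hex buckets)
print(max(counts.values()), "entries in the fullest one")  # roughly 1_000_000 / 256 ≈ 4000
```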
What's the real reason? Can you suggest a cache directory structure that you find to be good and tell me why it's good?
Thanks!