The C routines opendir(), readdir(), and closedir() provide a way for me to traverse a directory structure. However, the dirent structures returned by readdir() do not seem to provide a useful way for me to obtain the DIR pointers I would need to recurse into a directory's subdirectories.

Of course, they give me the names of the files, so I could either append each name to the directory path and stat() and opendir() them, or I could change the current working directory of the process via chdir() and roll it back via chdir("..").
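A rough sketch of the second (chdir-based) approach, so the trade-off is concrete. The function name walk() and its file-counting return value are illustrative, not part of any library:

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch: chdir() into each subdirectory so every opendir()/lstat()
 * call sees only a short relative name, then chdir("..") to roll back.
 * Not safe with multiple threads, since the cwd is per-process.
 * Returns the number of non-directory entries seen. */
static int walk(void)
{
    DIR *dir = opendir(".");
    if (dir == NULL)
        return 0;

    int files = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;

        struct stat sb;
        if (lstat(ent->d_name, &sb) == 0 && S_ISDIR(sb.st_mode)) {
            if (chdir(ent->d_name) == 0) {
                files += walk();   /* recurse with a short relative path */
                chdir("..");       /* roll back the working directory */
            }
        } else {
            files++;               /* a regular file (or other entry) */
        }
    }
    closedir(dir);
    return files;
}
```

Note that the open DIR handle keeps working across the chdir() calls, since readdir() operates on the underlying descriptor, not on the current directory.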

The problem with the first approach is that if the directory path is long enough, the cost of passing a string containing it to opendir() will outweigh the cost of opening the directory itself. To put it more theoretically, the complexity could grow beyond linear time (in the total character count of the (relative) filenames in the directory tree).

The second approach has a problem too. Since each process has a single current working directory, all but one thread would have to block in a multithreaded application. Also, I don't know whether the current working directory is merely a convenience (i.e., whether the relative path is appended to it prior to a filesystem query). If it is, this approach would be inefficient too.

I am open to alternatives to these functions. So, how can one traverse a UNIX directory tree efficiently (in time linear in the total character count of the files under it)?

+1  A: 

The way to use opendir/readdir/closedir is to make the function recursive! Have a look at the snippet here on Dreamincode.net.
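A minimal sketch of that recursive pattern, building each child's full path before recursing; the function name walk() and the returned file count are illustrative, not from the linked snippet:

```c
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>

/* Recursive traversal using only opendir()/readdir()/closedir().
 * Returns the number of non-directory entries under `path`. */
static int walk(const char *path)
{
    DIR *dir = opendir(path);
    if (dir == NULL)
        return 0;

    int files = 0;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;

        /* Build "path/name" -- this copy is what makes the cost
         * proportional to the length of the path. */
        char *child = malloc(strlen(path) + strlen(ent->d_name) + 2);
        sprintf(child, "%s/%s", path, ent->d_name);

        struct stat sb;
        if (lstat(child, &sb) == 0 && S_ISDIR(sb.st_mode))
            files += walk(child);   /* recurse into the subdirectory */
        else
            files++;                /* a regular file (or other entry) */

        free(child);
    }
    closedir(dir);
    return files;
}
```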

Hope this helps, Best regards, Tom.

tommieb75
+1  A: 

You seem to be missing one basic point: directory traversal involves reading data from the disk. Even when/if that data is in the cache, you end up going through a fair amount of code to get it from the cache into your process. Paths are also generally pretty short -- any more than a couple hundred bytes is pretty unusual. Together these mean that you can pretty reasonably build up strings for all the paths you need without any real problem. The time spent building the strings is still pretty minor compared to the time to read data from the disk. That means you can normally ignore the time spent on string manipulation, and work exclusively at optimizing disk usage.

My own experience has been that for most directory traversal a breadth-first search is usually preferable -- as you're traversing the current directory, put the full paths to all sub-directories in something like a priority queue. When you're finished traversing the current directory, pull the first item from the queue and traverse it, continuing until the queue is empty. This generally improves cache locality, so it reduces the amount of time spent reading the disk. Depending on the system (disk speed vs. CPU speed, total memory available, etc.) it's nearly always at least as fast as a depth-first traversal, and can easily be up to twice as fast (or so).

Jerry Coffin
Why use a priority queue and not something simpler like a FIFO queue? What do you use as the priority attribute?
Andrew O'Reilly
@Andrew: Good question. A FIFO will work perfectly well. A PQ simply makes it easy to produce results in order sorted by name, which the user generally prefers (certainly, I prefer it when I'm using it...)
Jerry Coffin
@Jerry: thanks, that makes sense, I hadn't considered the output format.
Andrew O'Reilly
+3  A: 

Have you tried ftw() aka File Tree Walk ?

Snippet from man 3 ftw:

int ftw(const char *dir, int (*fn)(const char *file, const struct stat *sb, int flag), int nopenfd);

ftw() walks through the directory tree starting from the indicated directory dir. For each entry found in the tree, it calls fn() with the full pathname of the entry, a pointer to the stat(2) structure for the entry, and an int flag.
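A minimal usage sketch of that prototype; the names walk_tree() and print_entry() are invented here:

```c
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

/* Callback invoked by ftw() for every entry in the tree.
 * Returning nonzero would stop the walk. */
static int print_entry(const char *file, const struct stat *sb, int flag)
{
    (void)sb;
    printf("%s%s\n", file, flag == FTW_D ? "/" : "");
    return 0;
}

/* Walk the tree rooted at `root`, letting ftw() keep up to 10
 * directory file descriptors open at a time. Returns 0 on success. */
static int walk_tree(const char *root)
{
    return ftw(root, print_entry, 10);
}
```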

SiegeX
And `nftw()` sometimes - there's a subtle difference between the two, but I'd have to go manual bashing to find it... http://www.opengroup.org/onlinepubs/9699919799/functions/nftw.html ("The nftw() function shall recursively descend the directory hierarchy rooted in path. The nftw() function has a similar effect to ftw() except that it takes an additional argument flags...").
Jonathan Leffler
Thanks for reminding me of `nftw()`. I do remember using that over `ftw()` because the former allows you to pass a flag to tell it not to recurse over symlinks (among other things).
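A similar sketch with nftw(), using the FTW_PHYS flag so symbolic links are reported rather than followed; the names walk_tree() and print_entry() are again invented here:

```c
#define _XOPEN_SOURCE 500  /* for nftw() */
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

/* nftw() passes an extra struct FTW giving the depth of the entry
 * (level) and the offset of its basename within the path (base). */
static int print_entry(const char *path, const struct stat *sb,
                       int typeflag, struct FTW *ftwbuf)
{
    (void)sb;
    (void)typeflag;
    /* Indent by depth and print only the basename. */
    printf("%*s%s\n", ftwbuf->level * 2, "", path + ftwbuf->base);
    return 0;
}

/* FTW_PHYS: report symbolic links themselves instead of following
 * them. Returns 0 on success. */
static int walk_tree(const char *root)
{
    return nftw(root, print_entry, 10, FTW_PHYS);
}
```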
SiegeX
A: 

SiegeX: Thank you. If ftw() does not use opendir/readdir/closedir internally the way I was doing, this may be the solution. nftw() is much better, because one can control the way it descends into the directory tree.

Vanessa: There is a limit on the length of a file name, but no limit on the length of a path, as far as I know. For instance, try running this (say, 5 times) in a fresh directory:

$ mkdir $(for x in {1..200}; do echo -n a; done) && cd a*

You will end up with a full path of at least 1000 characters. At least I did, using ReiserFS.

When you say I should make my function recursive, I don't see why that would help me keep the strings small. Could you show an example?

Jerry: In practice, I know concatenating the strings would work fine. My question is about the worst case. One could define a directory with, say, a 5 GB long path and put 1000 files into it. Appending would not be that good then.

Luís Fernando S. X. Silveira
Please use the **add comment** button to reply to answers individually, rather than adding your own **answer** (except when you do answer your own question).
mctylr
Note that people need reputation to add comments but not to add answers.
Juho Östman