I'm conflicted between a "read once, keep the tree in memory with pointers to files" approach and a "read when necessary" approach. The latter is of course much easier (no additional classes needed to store the whole directory structure), but I suspect it is slower. I'm trying to list the filenames and relative paths (so the compiler can do with them what it needs to).

A little clarification: I'm writing a simple build system that reads a project file, checks that all files are present, and runs some compile steps. The file tree is static, so the first option doesn't need to be very dynamic and only needs to be built once every time the program is run. Thanks

+1  A: 

You can safely assume that the operating system will cache the directory contents anyway, so that access through file system APIs will come down to memory operations.

So the answer to your question "is it faster?" is likely "No, not measurably".

OTOH, consider that a directory's contents can change over time, even within a very short time. Thus, reading directory contents eagerly or lazily is not so much a question of speed as of semantics. Depending on what you are doing, you may find that you must (or must not) read the entire directory up front.

Ingo
Didn't know about the caching thing, thanks for the info. – rubenvb