
I'm writing a file-search utility that indexes a drive and lets you find files based on different criteria. To make it as fast as possible I want to keep the index in memory (which also allows me to use PLINQ on it).

In order to keep the memory requirement down, I'm thinking of putting all directory names into a lookup table (each entry with a pointer to its parent node) so that each name is only stored once; likewise, extensions could go into another lookup table.
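Roughly what I have in mind (just a sketch; all names here are placeholders, not real code I have yet):

```csharp
using System.Collections.Generic;

class FileIndex
{
    // One entry per unique directory; ParentId points into this same list,
    // -1 meaning "no parent" (drive root).
    record struct DirEntry(string Name, int ParentId);

    readonly List<DirEntry> _dirs = new();
    readonly Dictionary<(string, int), int> _dirLookup = new();

    // Extensions interned the same way: stored once, referenced by index.
    readonly List<string> _exts = new();
    readonly Dictionary<string, int> _extLookup = new();

    public int InternDir(string name, int parentId)
    {
        var key = (name, parentId);
        if (!_dirLookup.TryGetValue(key, out int id))
        {
            id = _dirs.Count;
            _dirs.Add(new DirEntry(name, parentId));
            _dirLookup[key] = id;
        }
        return id;
    }

    public int InternExt(string ext)
    {
        if (!_extLookup.TryGetValue(ext, out int id))
        {
            id = _exts.Count;
            _exts.Add(ext);
            _extLookup[ext] = id;
        }
        return id;
    }

    // Rebuild a full path on demand by walking the parent pointers,
    // so file entries only need to store a directory index.
    public string FullPath(int dirId)
    {
        var parts = new Stack<string>();
        while (dirId != -1)
        {
            parts.Push(_dirs[dirId].Name);
            dirId = _dirs[dirId].ParentId;
        }
        return string.Join("\\", parts);
    }
}
```

Then each file entry would just be a name plus two small integers (directory id and extension id) instead of a full path string.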

So my questions are: is this a good approach, or is there a better way? Is it feasible to keep the index of a large drive in memory like this?

To keep the index in sync I was thinking of using FileSystemWatcher. Depending on how large the index is, another approach might be to serialize it to the hard drive, but that kind of defeats the purpose if it takes too long to load when you need it.
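For the watcher part I'd expect something along these lines (the `C:\` path and the handler bodies are placeholders; I know buffer overflows and missed events would still need handling):

```csharp
using System.IO;

var watcher = new FileSystemWatcher(@"C:\")
{
    IncludeSubdirectories = true,
    // Only care about names changing, not attributes or timestamps.
    NotifyFilter = NotifyFilters.FileName | NotifyFilters.DirectoryName
};

watcher.Created += (s, e) => { /* add e.FullPath to the index */ };
watcher.Deleted += (s, e) => { /* remove e.FullPath from the index */ };
watcher.Renamed += (s, e) => { /* update e.OldFullPath -> e.FullPath */ };

watcher.EnableRaisingEvents = true;
```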