Hi. I'm developing an application targeted at desktop systems that may have as little as 256 MB of RAM (Windows 2000 and up). The application works with a large file (>256 MB) containing fixed-size records of about 160 bytes each. It runs a rather lengthy process that, over time, randomly accesses about 90% of the file for both reading and writing. Any given record's write will occur no more than 1,000 record accesses after the read of that same record (I can tune this value).
I have two obvious options for this process: regular I/O (ReadFile, WriteFile) and memory mapping (CreateFileMapping, MapViewOfFile). The latter should be much more efficient on systems with enough memory, but on low-memory systems it will push most of the other applications' memory out to the page file, which for my application is a no-no. Is there a way to keep the process from eating up all the memory, e.g. by forcing the flushing of memory pages I'm no longer accessing? If this is not possible, then I'll have to fall back on regular I/O; I would have liked to use overlapped I/O for the writing part (since access is so random), but the documentation says writes of less than 64 KB are always served synchronously.
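To make the memory-mapping route concrete, this is roughly what I had in mind (an untested sketch; the helper names OpenMapping/EvictRecords and the RECORD_SIZE constant are just placeholders): map the whole file once, and whenever a range of records is finished with, write its dirty pages back with FlushViewOfFile and then call VirtualUnlock on the same range. VirtualUnlock on pages that were never locked returns FALSE with ERROR_NOT_LOCKED, but as far as I can tell from the documentation it still removes those pages from the process working set, which is exactly the "flush pages I'm no longer accessing" behavior I'm after.

    /* Untested sketch: map the file once, then flush and evict the pages
     * of record ranges that are no longer needed so they stop competing
     * with other applications for physical memory. */
    #include <windows.h>

    #define RECORD_SIZE 160   /* fixed record size, per the description above */

    static BYTE  *g_view;     /* base address returned by MapViewOfFile */
    static SIZE_T g_pageSize;

    BOOL OpenMapping(const wchar_t *path, HANDLE *phFile, HANDLE *phMap)
    {
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        g_pageSize = si.dwPageSize;

        *phFile = CreateFileW(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (*phFile == INVALID_HANDLE_VALUE)
            return FALSE;

        /* Map the whole file read/write (0,0 = use the file's current size). */
        *phMap = CreateFileMappingW(*phFile, NULL, PAGE_READWRITE, 0, 0, NULL);
        if (*phMap == NULL)
            return FALSE;

        g_view = (BYTE *)MapViewOfFile(*phMap, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, 0);
        return g_view != NULL;
    }

    /* Evict the pages covering records [first, first+count) once they are
     * no longer needed: write dirty pages to disk, then trim them from the
     * working set so they do not push other applications' pages out. */
    void EvictRecords(SIZE_T first, SIZE_T count)
    {
        SIZE_T start = first * RECORD_SIZE;
        SIZE_T end   = (first + count) * RECORD_SIZE;

        /* Round the byte range out to page boundaries. */
        SIZE_T pageStart = start - (start % g_pageSize);
        SIZE_T pageEnd   = ((end + g_pageSize - 1) / g_pageSize) * g_pageSize;

        FlushViewOfFile(g_view + pageStart, pageEnd - pageStart);

        /* Expected to "fail" with ERROR_NOT_LOCKED, but the pages are
         * still removed from the process working set. */
        VirtualUnlock(g_view + pageStart, pageEnd - pageStart);
    }

One caveat I'm aware of: a single full-file view needs a contiguous block of address space at least as large as the file, which is about address space rather than RAM, but if that turns out to be a problem I could switch to a sliding window of smaller views (aligned to the allocation granularity) instead.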
Any ideas for improving the I/O are welcome.