We are currently thinking of building a cache system to hold data pulled out of an SQL database and make it available to a couple of other applications (website, web service, etc.). We imagine the cache running as a Windows service, basically consisting of a smart dictionary that holds the cache entries. My question is: is there a limit to the working set of the application (it will be running under Windows Server 2003)? Or is the amount of physical memory the limit?
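For illustration, the kind of "smart dictionary" we have in mind is roughly this (just a sketch in C#; the class and member names are made up, and the real thing would also need eviction, size limits, and so on):

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of the cache service's core: a thread-safe
    // dictionary of entries with a crude time-to-live check.
    public sealed class SmartCache
    {
        private sealed class CacheEntry
        {
            public object Value;
            public DateTime ExpiresUtc;
        }

        private readonly Dictionary<string, CacheEntry> _entries =
            new Dictionary<string, CacheEntry>();
        private readonly object _sync = new object();

        public void Put(string key, object value, TimeSpan timeToLive)
        {
            lock (_sync)
            {
                _entries[key] = new CacheEntry
                {
                    Value = value,
                    ExpiresUtc = DateTime.UtcNow + timeToLive
                };
            }
        }

        public bool TryGet(string key, out object value)
        {
            lock (_sync)
            {
                CacheEntry entry;
                if (_entries.TryGetValue(key, out entry) &&
                    entry.ExpiresUtc > DateTime.UtcNow)
                {
                    value = entry.Value;
                    return true;
                }
                _entries.Remove(key); // expired or never present
                value = null;
                return false;
            }
        }
    }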
As with any other Windows program, you're limited by address space, not physical memory. On 32-bit you get 2 GB of user-mode address space per process; on x64 you get 8 TB.
If you don't have that much physical memory, it will start to page.
32-bit or 64-bit? On 32-bit a process gets 2 GB; on 64-bit the limit is 1 TB (Enterprise Edition of Windows Server 2003).
However, the maximum size of a single CLR object is 2 GB, even on 64-bit.
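A quick way to see that limit for yourself (a sketch; this reflects .NET 2.0/3.5-era behaviour, and later runtimes can relax the limit for arrays via a config setting):

    using System;

    class ObjectSizeLimitDemo
    {
        static void Main()
        {
            try
            {
                // ~2.4 GB in one array: even on x64 with plenty of RAM this
                // throws, because a single CLR object may not exceed 2 GB.
                int[] tooBig = new int[600 * 1000 * 1000]; // 600M ints * 4 bytes
                Console.WriteLine(tooBig.Length);
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Hit the single-object 2 GB limit.");
            }
        }
    }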
On 32-bit Windows you can get a bit more address space by booting Windows with the /3GB switch and marking your app as "large address aware".
Matthias,
Not actually an answer to the direct question, but another way of approaching this problem that sidesteps some of the big pitfalls that make caching solutions such a headache. (Sorry, I don't have any recommended reading on the matter.)
We implemented this in a previous project, and it did create other problems.
For offline access, can you use SQL Server Express on the desktops to create a mirror of your database (or just the part you need to cache)? Then all you need to do is switch which database your application points to. You can even use it to store diffs and replay them against the server, although that has its own problems. You can alter the permissions on the local copy to make it read-only if that's how it should be.
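For what it's worth, the "switch" can be as small as choosing a different connection string; a rough sketch (the connection string names here are invented):

    using System.Configuration;
    using System.Data.SqlClient;

    static class Db
    {
        // Hypothetical helper: point the app at the central server when
        // online, or at the local SQL Server Express mirror when offline.
        public static SqlConnection Open(bool offline)
        {
            string name = offline ? "LocalMirror" : "CentralServer";
            string cs = ConfigurationManager.ConnectionStrings[name].ConnectionString;
            var conn = new SqlConnection(cs);
            conn.Open();
            return conn;
        }
    }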
The dictionaries you are thinking of creating sound remarkably like SQL indexes. I would rely on SQL Server to do that job for you if you can architect it that way. Why reinvent that wheel? If you do, you will have to think carefully about cache expiration and memory management, particularly for a Windows service.
Good luck,
Sam
I have recently been doing extensive profiling around memory limits in .NET in a 32-bit process. We are all bombarded by the idea that we can allocate up to 2 GB (2^31 bytes) in a .NET application, but unfortunately that is not true :(. The process has that much address space and the operating system does a great job managing it for us; however, .NET itself seems to have its own overhead, which accounts for approximately 600-800 MB in typical real-world applications that push the memory limit. This means that as soon as you allocate an array of integers that takes about 1.4 GB, you should expect to see an OutOfMemoryException.
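A crude way to see where the ceiling really is in a 32-bit process (just a sketch; the exact number varies with fragmentation and whatever else your app is holding onto):

    using System;
    using System.Collections.Generic;

    class ThirtyTwoBitCeiling
    {
        static void Main()
        {
            var blocks = new List<byte[]>();
            long total = 0;
            try
            {
                // Grab memory in 64 MB chunks until the CLR gives up; in a
                // typical 32-bit process this fails well short of the
                // theoretical 2 GB of address space.
                while (true)
                {
                    blocks.Add(new byte[64 * 1024 * 1024]);
                    total += 64L * 1024 * 1024;
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("Gave out at ~{0} MB", total / (1024 * 1024));
            }
        }
    }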
Obviously on 64-bit this limit is hit much later (let's chat again in 5 years :)), but the overall size of everything in memory also grows (I am finding roughly 1.7 to 2 times) because of the larger word and pointer size.
What I know for sure is that the operating system's virtual memory definitely does NOT give you virtually endless allocation space within one process. It is only there so that the full 2 GB address space is available to each of the (many) applications running at one time.
I hope this insight helps somewhat.
The memory limits table on MSDN ("Memory Limits for Windows Releases") is the most precise answer to your query. Note that the IMAGE_FILE_LARGE_ADDRESS_AWARE flag cannot be set directly from the managed compiler, though fortunately it can be set post-build via the editbin utility. 4GT refers to the /3GB switch.
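In practice that means a post-build step along these lines, run from a Visual Studio command prompt (the executable name is just an example):

    editbin /LARGEADDRESSAWARE MyCacheService.exe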