I have compared the performance of System.Runtime.Caching (new in .NET 4.0) with the Enterprise Library Caching Block, and to my surprise the .NET 4.0 cache performs terribly in comparison when fetching large data collections from cache items.
Enterprise Library fetches 100 objects in about 0.15 ms and 10,000 objects in about 0.25 ms. This is fast, and natural for an in-process cache, because no data actually needs to be copied (only references).
The .NET 4.0 cache fetches 100 objects in about 25 ms and 10,000 objects in about 1500 ms! That is terribly slow in comparison, and it makes me suspect the caching is done out of process.
Am I missing some configuration option, for example to enable in-process caching, or is the Enterprise Library Caching Block really this much faster?
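One way I could imagine ruling out the out-of-process theory is to check whether the instance coming back from the cache is the very same object that went in; an in-process cache should hand back the original reference, not a copy. A minimal sketch along those lines (the key and object here are placeholders, not from my real code):

using System;
using System.Runtime.Caching;

public void CheckReferenceIdentity()
{
    MemoryCache cache = new MemoryCache("referenceCheck");
    object original = new object();

    // Store the object with a default (no-expiration) policy.
    cache.Add("testKey", original, new CacheItemPolicy());

    // An in-process cache should return the original instance, not a copy.
    object fetched = cache.Get("testKey");
    Response.Write(Object.ReferenceEquals(original, fetched)); // expected: True

    cache.Dispose();
}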
Update
Here's my benchmark:
First, I load the data from the database to the cache (separate from the benchmark).
I use a timer around the get methods to measure the time in milliseconds:
Enterprise Library Caching
Microsoft.Practices.EnterpriseLibrary.Caching.ICacheManager _cache;

public void InitCache()
{
    // Resolve the cache manager named "myCacheName" from configuration.
    _cache = CacheFactory.GetCacheManager("myCacheName");
}

public void Benchmark()
{
    HighPerformanceTimer timer = new HighPerformanceTimer();

    timer.Start();
    myObject o = (myObject)_cache.GetData(myCacheKey);
    timer.Stop();

    Response.Write(timer.GetAsStringInMilliseconds());
}
.NET 4.0 Caching
System.Runtime.Caching.MemoryCache _cache;

public void InitCache()
{
    // Create a named in-memory cache instance.
    _cache = new MemoryCache("myCacheName");
}

public void Benchmark()
{
    HighPerformanceTimer timer = new HighPerformanceTimer();

    timer.Start();
    myObject o = (myObject)_cache.Get(myCacheKey);
    timer.Stop();

    Response.Write(timer.GetAsStringInMilliseconds());
}
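For completeness, the same fetch can also be made against the shared default instance rather than a named one. MemoryCache.Default is part of the same API and, as far as I know, is not supposed to behave differently, but it is an easy variation to rule out:

// Variant using the process-wide default cache instead of a named instance.
System.Runtime.Caching.ObjectCache _defaultCache = System.Runtime.Caching.MemoryCache.Default;

public void BenchmarkDefault()
{
    HighPerformanceTimer timer = new HighPerformanceTimer();

    timer.Start();
    myObject o = (myObject)_defaultCache.Get(myCacheKey);
    timer.Stop();

    Response.Write(timer.GetAsStringInMilliseconds());
}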
The benchmark is executed 1000 times and the times are averaged to make the measurement reliable. The timer is a custom class I use; any timer that can measure milliseconds should do.
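For anyone who wants to reproduce the measurement without my custom timer class, a rough sketch of the averaging loop using the standard System.Diagnostics.Stopwatch (shown for the .NET 4.0 cache; the Enterprise Library version would call GetData instead, and _cache and myCacheKey are the same placeholders as above):

public void BenchmarkAverage()
{
    const int iterations = 1000;
    System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch.StartNew();

    for (int i = 0; i < iterations; i++)
    {
        // Repeated fetches of the same item; only the get call is timed.
        myObject o = (myObject)_cache.Get(myCacheKey);
    }

    stopwatch.Stop();

    // Average time per fetch in milliseconds.
    double averageMs = stopwatch.Elapsed.TotalMilliseconds / iterations;
    Response.Write(averageMs.ToString("F3"));
}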
The interesting thing is that "myObject" holds numerous references to other objects. If serialization were involved I could understand why performance differs for such an object (as in distributed caching), but these are both in-process caches that, in theory, should not differ much at all.