I'm using Linq-To-SQL for a project with around 75 tables. We have to keep a cache of entire tables that we pull down, because the entities are all interrelated and pulling them on demand takes far too long. So, to track all of these entities from all of these tables, we have a single class responsible for maintaining the in-memory table references. This Cache object has a separate property for each of the 75 table references, and each reference caches its table on demand. For example:
private EntityTableReference _reference;

public EntityTableReference EntityTableReference
{
    get
    {
        // Caches all entities from the table
        return _reference ?? (_reference = new EntityTableReference(this));
    }
}
Now, I've seen a lot of guides saying that this really goes against the principles of OO. The Cache object doesn't do anything itself; it just provides a common object to pass around, so that we can send a single reference to the Cache object in our function calls rather than a reference to every table that the function needs to access. This has been working really well for us, and I don't see any downsides in terms of maintainability, readability, speed, etc.
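To make the question concrete, here is a hypothetical, trimmed-down sketch of the pattern. The table-reference and row types below (OrderTableReference, GetById, Rate, and so on) are stand-ins, not our real classes, and the real Cache has around 75 of these properties. The point is just that a method takes one Cache parameter instead of a separate parameter for every table it touches:

using System.Collections.Generic;

// Hypothetical, simplified stand-ins for our real row types.
public class Entity { public int Id; public decimal Rate; }
public class Order  { public int Id; public int EntityId; public decimal Amount; }

// Stand-ins for our real table wrappers; in the real code the constructor
// pulls the entire table down via Linq-To-SQL.
public class EntityTableReference
{
    private readonly Dictionary<int, Entity> _rows = new Dictionary<int, Entity>();
    public EntityTableReference(Cache cache) { /* load whole table here */ }
    public Entity GetById(int id) { return _rows[id]; }
}

public class OrderTableReference
{
    private readonly Dictionary<int, Order> _rows = new Dictionary<int, Order>();
    public OrderTableReference(Cache cache) { /* load whole table here */ }
    public Order GetById(int id) { return _rows[id]; }
}

public class Cache
{
    private EntityTableReference _entities;
    private OrderTableReference _orders;

    // One lazily-initialized property per table, same shape as the snippet above.
    public EntityTableReference Entities
    {
        get { return _entities ?? (_entities = new EntityTableReference(this)); }
    }

    public OrderTableReference Orders
    {
        get { return _orders ?? (_orders = new OrderTableReference(this)); }
    }
}

public static class BillingCalculations
{
    // Callers take the single Cache rather than a parameter for each table they need.
    public static decimal CalculateTotal(Cache cache, int orderId)
    {
        Order order = cache.Orders.GetById(orderId);
        Entity entity = cache.Entities.GetById(order.EntityId);
        return order.Amount * entity.Rate;
    }
}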
Are there any criticisms of this sort of design decision? Is this a case where breaking the rules is OK because we've evaluated the advantages and disadvantages, or am I missing something here and digging myself into a hole?