I would like to study the subject of complete data design patterns in more depth: specifically, the different mixes of technologies used to store, process, cache, and retrieve data. In other words, how the many components of large systems like Facebook fit together.
To my knowledge, we have RDBMS and NoSQL flavors of databases. However, many more technologies (outside of permanent data storage) are critical to real-world use of the data, such as memcached. Yet I can't find much on the overarching design patterns that should be in use to make the most of all of these components together.
Does anyone have links to articles about whole-package design patterns that can be built from different mixes of database system components?
This is not a question about DB-specific best practices like database normalization, nor about how best to use any one technology.
What design patterns can be used to mix disparate technologies correctly, leveraging each one's strengths, to design complete and efficient systems? From caching, to CRUD, to scaling, to data integrity.
For example, on small shared hosts I can run things like blogs off SQLite, since the workload is almost all reads and hardly any writes. Other projects sit on a low-end VPS, where MySQL plus the APC cache (it is only one server, after all) gives amazing performance under heavy read/write load. With more than one VPS, memcached is the champ!
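To make the memcached case concrete, here is a minimal cache-aside sketch in Python, assuming a memcached instance on localhost:11211 and the pymemcache library; fetch_user_from_db and the key scheme are hypothetical placeholders for the real database query.

```python
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))
TTL_SECONDS = 300  # expire entries so stale data self-heals

def fetch_user_from_db(user_id):
    # Placeholder for the real SQL query against MySQL/PostgreSQL.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)            # 1. try the cache first
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)  # 2. on a miss, hit the database
    cache.set(key, json.dumps(user), expire=TTL_SECONDS)  # 3. repopulate the cache
    return user
```

The TTL is the simplest integrity knob here: short enough that stale entries age out, long enough that the database is shielded from the read load.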
I am also a fan of MongoDB and PostgreSQL. However, MongoDB does not limit its own RAM usage, so it really should have a dedicated server. Nevertheless, storing large objects in MongoDB and keeping the rest of the important data in PostgreSQL is a win-win.
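A rough sketch of that split, assuming pymongo and psycopg2; the connection strings, database, table, and collection names are hypothetical, and pooling/error handling are omitted:

```python
import psycopg2
from pymongo import MongoClient

mongo = MongoClient("mongodb://localhost:27017")
pg = psycopg2.connect("dbname=app user=app")

def save_report(title, big_payload):
    # 1. Store the bulky object where MongoDB is comfortable.
    doc_id = mongo.app.reports.insert_one({"payload": big_payload}).inserted_id
    # 2. Keep the important relational facts in PostgreSQL, with a pointer
    #    to the Mongo document. The two writes are not atomic, so the Mongo
    #    insert goes first: an orphaned blob is cheaper than a dangling row.
    with pg, pg.cursor() as cur:
        cur.execute(
            "INSERT INTO reports (title, mongo_id) VALUES (%s, %s) RETURNING id",
            (title, str(doc_id)),
        )
        return cur.fetchone()[0]
```

The write ordering is the interesting design choice: with no transaction spanning both stores, you pick the failure mode you can tolerate.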
However, these are all very basic design choices. Large-scale applications are designed with much more abstraction, to promote scaling and eliminate single points of failure.