Several colleagues and I are faced with an architectural decision that has serious performance implications: our product includes a UI-driven schema builder that lets non-programmers define their own data types for a web app. Currently it generates properly normalized schemas behind the scenes, and it includes some complex logic to alter the schema and migrate legacy data automagically when the admins change the data types.
The normalized schemas have hit performance bottlenecks in the past, and a major refactoring has been scheduled. One group of developers wants to store every property of the data types in a separate table, so that changes to the data types never require altering the schema. (A single property could be turned into a 1:n relationship, for example, just by changing application logic.)
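As I understand the proposal (this is my interpretation, not their actual design), it amounts to a generic entity/attribute/value-style layout where property values live as rows rather than columns. A minimal sketch in Python with sqlite3, with table and column names of my own invention:

```python
# Sketch of the proposed "properties as rows" layout (my reading, not their code).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per user-defined data type ("Customer", "Invoice", ...)
    CREATE TABLE entity_type (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );

    -- One row per property definition on a data type
    CREATE TABLE property_def (
        id             INTEGER PRIMARY KEY,
        entity_type_id INTEGER NOT NULL REFERENCES entity_type(id),
        name           TEXT NOT NULL
    );

    -- One row per instance of a data type
    CREATE TABLE entity (
        id             INTEGER PRIMARY KEY,
        entity_type_id INTEGER NOT NULL REFERENCES entity_type(id)
    );

    -- One row per property *value*; a property becomes 1:n simply by
    -- allowing more than one row per (entity, property) pair, with no
    -- ALTER TABLE involved.
    CREATE TABLE property_value (
        entity_id       INTEGER NOT NULL REFERENCES entity(id),
        property_def_id INTEGER NOT NULL REFERENCES property_def(id),
        value           TEXT
    );
""")
```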
Because early benchmarking indicates this will exact a huge performance penalty, they have built a caching layer in application code that maintains denormalized versions of each data type (roughly along the lines of the sketch at the end of this post). While it does speed up the queries, I'm skeptical about the complexity the application layer will be taking on, and I'm hoping for feedback: am I being pessimistic? Have others deployed this type of solution successfully? Should I stick to my guns, or is moving the complexity from "Schema modification tools" to "Schema mirroring tools" a good thing?
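For concreteness, here is roughly what I understand the caching layer to be doing, building on the toy schema above. This is a simplified sketch with names I made up, not their implementation: it materializes a flat record per entity on first read and invalidates it whenever that entity is written.

```python
# Simplified sketch of the application-level denormalization cache (my wording).
class DenormalizedCache:
    def __init__(self, conn):
        self.conn = conn
        self._records = {}  # entity_id -> {property_name: [values]}

    def get(self, entity_id):
        # Read-through: build the flat record from the normalized rows on a miss.
        if entity_id not in self._records:
            rows = self.conn.execute(
                """SELECT d.name, v.value
                   FROM property_value v
                   JOIN property_def d ON d.id = v.property_def_id
                   WHERE v.entity_id = ?""",
                (entity_id,),
            )
            record = {}
            for name, value in rows:
                record.setdefault(name, []).append(value)
            self._records[entity_id] = record
        return self._records[entity_id]

    def set_value(self, entity_id, property_def_id, value):
        # Writes go through the normalized store, then drop the cached copy.
        self.conn.execute(
            "INSERT INTO property_value (entity_id, property_def_id, value) "
            "VALUES (?, ?, ?)",
            (entity_id, property_def_id, value),
        )
        self._records.pop(entity_id, None)  # invalidate
```

Reads do come back as one flat structure, but every write path now has to know about the cache, which is the complexity that worries me.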