Let me creatively misinterpret this as, "What are the constituents of good RDBMS (i.e. the software that manages a database in a relational manner) design?" In no particular order:
Independence of logical and physical layout. I should never have to duplicate an attribute or join two relations in my logical layout in order to speed up queries; I should just tell the physical layer to do that, and have it ensure, transparently to me, that the data never gets out of sync. The only time I should hear about it is when a change to the logical schema is incompatible with the existing physical schema, prompting me to make the appropriate changes. Thus as an admin, I can turn a row-oriented storage format into a column-oriented one or vice versa, or even maintain both, and my clients see nothing but faster queries. (The same goes for optimization hints, by the way: those should be entirely separate from the logical query.) As a previous poster mentioned, the physical design and changes should be automated to some degree, based on query history or the like.
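To make the point concrete, here's a toy sketch (all names hypothetical, not any real engine's API) of what that separation looks like: clients query through one logical interface, and the admin flips the physical layout underneath without any query changing.

```python
class RowStore:
    """Physical layout: one tuple per row."""
    def __init__(self, rows):
        self.rows = list(rows)

    def scan(self):
        return iter(self.rows)


class ColumnStore:
    """Physical layout: one list per attribute."""
    def __init__(self, rows):
        rows = list(rows)
        self.names = list(rows[0]) if rows else []
        self.cols = {n: [r[n] for r in rows] for n in self.names}

    def scan(self):
        n_rows = len(next(iter(self.cols.values()), []))
        for i in range(n_rows):
            yield {n: self.cols[n][i] for n in self.names}


class Relation:
    """Logical relation: queries never mention the storage format."""
    def __init__(self, rows, storage=RowStore):
        self._rows = list(rows)
        self._store = storage(self._rows)

    def repack(self, storage):
        # Physical change only -- invisible to every client query.
        self._store = storage(self._rows)

    def where(self, pred):
        return [t for t in self._store.scan() if pred(t)]


emp = Relation([{"name": "ada", "dept": 1}, {"name": "bob", "dept": 2}])
query = lambda: emp.where(lambda t: t["dept"] == 1)  # the "logical query"

before = query()
emp.repack(ColumnStore)   # admin turns row-oriented into column-oriented
after = query()           # same query, same answer, nothing rewritten
```

The point is only that `repack` lives entirely on the physical side; a client holding `query` never learns it happened.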
Abolishment of "primary keys." A candidate key is a candidate key, and should not have any logical priority over another. If you want one set of attributes indexed, but not another, that goes in the physical specification, not the logical.
A relational query language that is, more or less, the relational algebra.
Well beyond the various bits of small cruft one expects to accumulate here and there (and I'm not even counting the physical storage hints in this), SQL has some very basic things wrong with it that make it non-relational, and very hard to use if you're trying to use a relational model. For years now, my standard technique for formulating complex queries has been to spend twenty minutes specifying it in some fairly pure form of the relational algebra, and then forty minutes trying to translate that into SQL.
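For contrast, the algebra itself is tiny: a handful of composable operators over sets of tuples. A toy sketch in Python (relations as frozensets of attribute/value pairs; these function names are mine, not from any standard):

```python
def rel(*rows):
    """Build a relation: a set of tuples, each a frozenset of (attr, value)."""
    return {frozenset(r.items()) for r in rows}

def select(r, pred):
    """sigma: keep tuples satisfying the predicate."""
    return {t for t in r if pred(dict(t))}

def project(r, attrs):
    """pi: keep only the named attributes (set semantics for free)."""
    return {frozenset((a, v) for a, v in t if a in attrs) for t in r}

def njoin(r, s):
    """Natural join: combine tuples that agree on all shared attributes."""
    out = set()
    for t in r:
        for u in s:
            dt, du = dict(t), dict(u)
            if all(dt[a] == du[a] for a in dt.keys() & du.keys()):
                out.add(frozenset({**dt, **du}.items()))
    return out

emp = rel({"name": "ada", "dept": 1}, {"name": "bob", "dept": 2})
dep = rel({"dept": 1, "dname": "research"}, {"dept": 2, "dname": "sales"})

# pi_name( sigma_{dname='research'}( emp |x| dep ) )
q = project(select(njoin(emp, dep), lambda t: t["dname"] == "research"),
            {"name"})
# q == rel({"name": "ada"})
```

Note how the query reads exactly like the algebraic expression in the comment; the twenty-minutes-algebra, forty-minutes-SQL translation step disappears.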
Real types, specified as easily as real types in my programming language of choice. If I have complex numbers, they should appear as a single attribute of a complex-number type in a relation, not as two separate attributes for the real and imaginary parts. And functions should be defined for types, not defined for certain types and also-does-god-knows-what for different types that come along. Yes, I'm talking about abolishing NULL, though not at all about losing the capability for it. I don't mind creating a type of "all signed integers expressible in 32 bits, plus 'unknown' and 'out of range.'" But when I call the "average" function on a set of those values, I don't want it handing me a number and snickering behind its back saying, "Ha ha! I wonder if what I just made up is similar to the answer he'd come up with?" I want it to say, "I know average for integers; for what you've got, you'd better tell me what you think it means."
(An example: "sum(3, 2, unknown)" is not "5", nor is it "unknown". It's "at least 5.")
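A sketch of what a type-honest aggregate could look like: integers extended with an explicit "unknown" marker, and a sum that reports what it actually knows instead of silently dropping the unknowns. Every name here is made up for illustration; the "at least" phrasing assumes, as in the example above, that the unknown quantities are non-negative.

```python
UNKNOWN = object()  # a distinguished "unknown" value, not a NULL

def honest_sum(values):
    """Sum over ints-plus-unknown: never invent a plain number."""
    known = [v for v in values if v is not UNKNOWN]
    missing = len(values) - len(known)
    total = sum(known)
    if missing == 0:
        return f"exactly {total}"
    # Assumes unknowns are non-negative, as in the sum(3, 2, unknown) example.
    return f"at least {total}"

low = honest_sum([3, 2, UNKNOWN])   # -> "at least 5"
exact = honest_sum([3, 2])          # -> "exactly 5"
```

Contrast with SQL, where SUM quietly skips NULLs and hands back a bare 5.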
Get rid of or fix these "ORM" layers. Spending a lot of CPU cycles to have a relational DBMS instead of a hierarchical or network DBMS, only to spend yet more CPU cycles making that relational DBMS look like a hierarchical or network DBMS again, is not only a waste of CPU; it also means errors in one conversion get multiplied in the other. Give me direct relational queries in my language. (I'm not a big MS fan, but I give them props for LINQ, which is a step in this direction.)
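LINQ's point, roughly, is that the query lives in the host language and stays relational, with no object graph reconstructed in between. Python comprehensions already give that flavor (toy data and a made-up schema, just to show the shape):

```python
employees = [
    {"name": "ada", "dept_id": 1},
    {"name": "bob", "dept_id": 2},
]
departments = [
    {"id": 1, "dname": "research"},
    {"id": 2, "dname": "sales"},
]

# Join, restrict, and project written directly in the language --
# no ORM rebuilding a hierarchy of objects in the middle.
names = [e["name"]
         for e in employees
         for d in departments
         if e["dept_id"] == d["id"] and d["dname"] == "research"]
# names == ["ada"]
```

The comprehension is the query; there is no second data model for conversion errors to hide in.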
This question hit a sore point, so there's my rant.