Assertion: the performance of SQL databases degrades when the volume of data becomes very large (say, tens or hundreds of terabytes). This means that certain database design patterns which are reasonable for most small-to-medium-sized databases break down as the database grows. As a rather general example, there is a trend away from fully normalized (say, BCNF) data models, because the joins they require would hurt performance too heavily. See also this question
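To make the normalization trade-off concrete, here is a minimal sketch (toy schema and names are my own invention, using SQLite purely for illustration): in the normalized design, the query must join two tables, while a denormalized copy of the customer name answers the same question with a single table scan. On tiny data both are instant; the claim above is that at tens of terabytes the join side becomes the bottleneck.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized (BCNF-ish): the customer name lives only in `customers`.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")
cur.execute("INSERT INTO orders VALUES (10, 1, 9.99), (11, 2, 5.00)")

# The SELECT must join; at very large scale this join is the costly part.
rows = cur.execute("""
    SELECT c.name, o.total
    FROM orders AS o JOIN customers AS c ON c.id = o.customer_id
""").fetchall()

# Denormalized alternative: duplicate the name into the orders table,
# trading storage and update complexity for a join-free read path.
cur.execute("CREATE TABLE orders_wide (id INTEGER PRIMARY KEY, customer_name TEXT, total REAL)")
cur.execute("INSERT INTO orders_wide VALUES (10, 'Alice', 9.99), (11, 'Bob', 5.00)")
rows_wide = cur.execute("SELECT customer_name, total FROM orders_wide").fetchall()

# Both designs return the same data for this query.
assert sorted(rows) == sorted(rows_wide)
```

The cost of the denormalized version is, of course, redundancy: updating a customer's name now touches every matching row in `orders_wide`, which is exactly the anomaly normalization exists to prevent.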
My question is this: do you know of any database patterns which, although reasonable in a typical database, break down performance-wise in huge databases, particularly for SELECT queries? Are there alternative strategies that accomplish the same thing (data-wise) without these performance issues?