If your database is set up correctly to collect statistics on the table contents, then your query optimiser should easily be able to figure out the cardinality of each subclause and process first those that reduce the dataset the most — that is, the most selective predicates, which are typically the ones on high-cardinality columns. Cardinality can be thought of as the number of unique values in a column.
For example, if there are a million different TypeContent values but isPublished is only ever 0 or 1, the query optimiser should process the TypeContent clause first.
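To make that concrete, here is a sketch of the kind of query in question. The table name (Articles) and the literal values are assumptions for illustration; only the TypeContent and isPublished columns come from the example above.

```sql
-- Hypothetical table: Articles(TypeContent, isPublished, ...)
SELECT *
FROM Articles
WHERE TypeContent = 'blog-post'  -- ~1,000,000 distinct values: very selective
  AND isPublished = 1;           -- only 2 possible values: barely selective
```

With up-to-date statistics, the optimiser can see that filtering on TypeContent first leaves far fewer rows to check against isPublished, regardless of the order the clauses appear in the WHERE.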
But this is why database tuning is not a set-and-forget operation. The performance of your database depends on both the schema and the data held in the table. That means out-of-date or incorrect statistics are actually worse than no statistics at all: if there are no statistics, at least the optimiser won't be misled by them.
If the data properties change regularly, you should tune regularly. See the SQL Server 2008 documentation on statistics for how to create and maintain them.
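As a starting point, these are the standard SQL Server commands for inspecting and refreshing statistics. The table name Articles is a placeholder; the catalog view, function, and procedures themselves (sys.stats, STATS_DATE, UPDATE STATISTICS, sp_updatestats) are part of SQL Server.

```sql
-- Check when each statistics object on a table was last updated
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('Articles');  -- hypothetical table name

-- Refresh statistics for one table, scanning every row
UPDATE STATISTICS Articles WITH FULLSCAN;

-- Or refresh out-of-date statistics across the whole database
EXEC sp_updatestats;
```

Scheduling something like the last two as a maintenance job, timed to match how quickly the data changes, is the "tune regularly" part in practice.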