One problem with a clustered index on a large table is that the index effectively is the table: the leaf level of a clustered index contains the data rows themselves, so there is no separate, smaller structure. A nonclustered index stores only the index key columns plus a row locator (the clustering key, or a row ID on a heap), so it is far more likely to fit in RAM. If your searches scan the clustered index and the table is large, you could easily be slowing things down. On the other hand, if the clustered index is on a date column and your searches are all for recent dates, it may not hurt search performance, since you never touch most of it.
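To put rough numbers on that, here's a back-of-envelope sketch. All the row counts and byte sizes below are made-up assumptions for illustration, not measurements from any real table:

```python
# Hypothetical sizes: the clustered index IS the table, while a nonclustered
# index holds only the key plus a row locator, so it's a small fraction of
# the table's size and can plausibly stay resident in RAM.
ROWS          = 100_000_000  # assumed row count
ROW_BYTES     = 200          # assumed average row size, all columns
KEY_BYTES     = 8            # e.g. a BIGINT search key
LOCATOR_BYTES = 8            # clustering key or RID pointing back at the row

GIB = 1024 ** 3
clustered_gib    = ROWS * ROW_BYTES / GIB                    # whole table
nonclustered_gib = ROWS * (KEY_BYTES + LOCATOR_BYTES) / GIB  # key + locator

# Roughly 18.6 GiB for the clustered index vs 1.5 GiB for the nonclustered one.
print(f"clustered ~{clustered_gib:.1f} GiB, nonclustered ~{nonclustered_gib:.1f} GiB")
```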
I disagree with the posters claiming a clustered index will reduce data fragmentation; it can increase it. On a heap, fragmentation comes mainly from deletes. As you add rows to a clustered table, or update a column that is part of the clustering key, SQL Server has to keep the rows in key order, which means splitting existing data pages, and page splits increase fragmentation. That's why everyone recommends picking a clustering key that a) doesn't change often, if ever, and b) always increments.
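The page-split effect can be sketched with a toy model: fixed-capacity sorted pages where every split allocates the next physical page number. This is not how SQL Server actually manages pages, but it shows why ever-increasing keys leave pages physically in order while random keys scatter them all over the file:

```python
import bisect
import random

PAGE_CAPACITY = 64  # keys per page in this toy model

def build(keys):
    """Insert keys into sorted fixed-capacity pages, splitting full pages
    50/50. Returns the physical allocation id of each page in logical order."""
    keys = iter(keys)
    pages = [[next(keys)]]  # pages in logical (key) order
    phys = [0]              # physical id = order in which the page was allocated
    next_id = 1
    for key in keys:
        # Find the page whose key range should contain this key.
        i = max(bisect.bisect_right([p[0] for p in pages], key) - 1, 0)
        page = pages[i]
        bisect.insort(page, key)
        if len(page) > PAGE_CAPACITY:       # page split
            mid = len(page) // 2
            pages[i:i + 1] = [page[:mid], page[mid:]]
            phys[i:i + 1] = [phys[i], next_id]
            next_id += 1
    return phys

def fragmentation(phys):
    """Fraction of logically adjacent pages that are NOT physically adjacent."""
    return sum(b != a + 1 for a, b in zip(phys, phys[1:])) / (len(phys) - 1)

random.seed(0)
n = 10_000
asc = fragmentation(build(range(n)))                    # ever-increasing key
rnd = fragmentation(build(random.sample(range(n), n)))  # random key order
print(f"ascending: {asc:.2f}  random: {rnd:.2f}")       # ascending stays at 0.00
```

With ascending keys, only the last page ever splits, so pages come out in allocation order; with random keys, splits happen mid-table and logical neighbors end up far apart on disk.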
I find a clustered index is useful on large tables when your queries often need to return multiple "related" rows. You can choose the clustering key so that related rows are stored consecutively, making them cheaper for SQL Server to retrieve. I wouldn't cluster a large table just to get another index for searching.
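You can see this effect in SQLite, whose `WITHOUT ROWID` tables store rows in primary-key order much like a clustered index; the table and column names here are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Clustering on (customer_id, order_date): one customer's orders are stored
# consecutively, so fetching them all is a single contiguous range scan.
conn.execute("""
    CREATE TABLE orders (
        customer_id INTEGER,
        order_date  TEXT,
        amount      REAL,
        PRIMARY KEY (customer_id, order_date)
    ) WITHOUT ROWID
""")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchall()
detail = plan[0][3]  # e.g. "SEARCH orders USING PRIMARY KEY (customer_id=?)"
print(detail)
```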
The one advantage clustering does have, like a covering index, is that the index contains the data the query is trying to return. There is no extra lookup from the index back to the table to fetch the row.
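Again using SQLite as a stand-in (names invented), you can watch the planner report a covering index when the index alone satisfies the query, versus an index search that still has to visit the table for each match:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (k INTEGER, v TEXT, extra TEXT)")
conn.execute("CREATE INDEX idx_k_v ON t (k, v)")

# The index holds both k and v, so this query never touches the table.
covered = conn.execute(
    "EXPLAIN QUERY PLAN SELECT v FROM t WHERE k = 1").fetchall()[0][3]
# 'extra' is not in the index, so this one needs a table lookup per row.
uncovered = conn.execute(
    "EXPLAIN QUERY PLAN SELECT extra FROM t WHERE k = 1").fetchall()[0][3]
print(covered)    # plan mentions a COVERING INDEX
print(uncovered)  # plain index search
```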
In the end, you have to get the profiler out, and run some tests.
Am I getting this right or missing something?