duncan's link has a good set of tips. Here are a few more:
If you do not need to query against totally up-to-date data (e.g. if data as of the last hour or close of business yesterday is acceptable), consider building a separate data mart for the analytics. This allows you to optimise it specifically for fast analytic queries. A minimal sketch of a periodic refresh is below.
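For illustration only, here is a rough sketch of a nightly refresh into such a mart. All object names (the `oltp` source database, `mart.FactSales`, the dimension tables and their keys) are hypothetical placeholders, not anything from your schema:

```sql
-- Hypothetical nightly load from the OLTP database into the analytics mart.
-- Table and column names are placeholders; adapt to your own schema.
INSERT INTO mart.FactSales (OrderDateKey, ProductKey, CustomerKey, Quantity, Amount)
SELECT d.DateKey, p.ProductKey, c.CustomerKey, o.Quantity, o.Amount
FROM oltp.Sales.Orders      AS o
JOIN mart.DimDate           AS d ON d.CalendarDate       = CAST(o.OrderDate AS date)
JOIN mart.DimProduct        AS p ON p.ProductNaturalKey  = o.ProductID
JOIN mart.DimCustomer       AS c ON c.CustomerNaturalKey = o.CustomerID
WHERE o.OrderDate >= DATEADD(DAY, -1, CAST(SYSUTCDATETIME() AS date))  -- load yesterday's rows
  AND o.OrderDate <  CAST(SYSUTCDATETIME() AS date);                   -- exclude today's in-flight data
```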
The SQL Server query optimiser has a star join optimisation. If it recognises this type of query, it can work out the slice of data you want by filtering on the dimension tables before it touches the fact table. This reduces the amount of I/O needed for the query.
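The shape of query this applies to looks something like the sketch below: selective predicates on small dimension tables, joined to a large fact table on surrogate keys. The table and column names (`FactSales`, `DimDate`, `DimProduct`) are hypothetical:

```sql
-- Star-schema query: the selective filters sit on the dimension tables,
-- which the optimiser can use to restrict the rows read from the fact table.
SELECT p.Category,
       SUM(f.Amount) AS TotalSales
FROM mart.FactSales  AS f
JOIN mart.DimDate    AS d ON d.DateKey    = f.OrderDateKey
JOIN mart.DimProduct AS p ON p.ProductKey = f.ProductKey
WHERE d.CalendarYear  = 2008
  AND d.CalendarMonth = 6
  AND p.Category      = 'Bikes'
GROUP BY p.Category;
```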
For VLDB applications involving large table scans, consider direct-attached storage with as many controllers as possible rather than a SAN; you can get more bandwidth more cheaply that way. However, if your data set is less than roughly 1TB it probably won't make a great deal of difference.
A 64-bit server with lots of RAM is good for caching if your query accesses have locality of reference. However, a table scan has no locality of reference, so once the table gets significantly bigger than the RAM on your server, extra memory doesn't help much.
If you partition your fact tables, consider putting each partition on a separate disk array - or at least a separate SAS or SCSI channel if you have SAS arrays with port replication. Note that this will only make a difference if you routinely run queries across multiple partitions; a sketch of such a layout follows.
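As a rough sketch of what that layout looks like in T-SQL, assuming monthly partitions and one filegroup per partition (the filegroup and table names such as `FG_2008_01` and `mart.FactSales` are made up for the example); each filegroup can then be placed on its own array or channel:

```sql
-- Partition function: monthly boundaries (RANGE RIGHT puts each boundary
-- date in the partition to its right).
CREATE PARTITION FUNCTION pfSalesByMonth (date)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2008-02-01', '2008-03-01');

-- Partition scheme: one filegroup per partition, so each can live on a
-- separate disk array or SAS/SCSI channel.
CREATE PARTITION SCHEME psSalesByMonth
AS PARTITION pfSalesByMonth
TO (FG_Archive, FG_2008_01, FG_2008_02, FG_2008_03);

-- Fact table created on the partition scheme; rows are routed to a
-- filegroup based on OrderDate.
CREATE TABLE mart.FactSales
(
    OrderDate  date  NOT NULL,
    ProductKey int   NOT NULL,
    Amount     money NOT NULL
)
ON psSalesByMonth (OrderDate);
```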