I also have a very large table in SQL Server (2008 R2 Developer Edition) that has performance problems.
I was wondering whether another DBMS would handle large tables better. I'm only considering the following systems: SQL Server 2008, MySQL, and PostgreSQL 9.0.
Or, as the referenced question above alludes to, are table size and performance mainly a function of indexes and caching?
Also, would greater normalization improve performance, or hinder it?
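To make the normalization question concrete, here is the kind of schema trade-off I have in mind (table and column names are made up for illustration, not my actual schema):

```sql
-- Denormalized: the ticker string is repeated on every price row
-- (all names here are hypothetical)
CREATE TABLE dbo.PricesFlat (
    Ticker    varchar(10)   NOT NULL,
    TradeDate date          NOT NULL,
    [Close]   decimal(19,4) NOT NULL
);

-- Normalized: the ticker is stored once, and price rows carry a narrow key
CREATE TABLE dbo.Securities (
    SecurityId int IDENTITY(1,1) PRIMARY KEY,
    Ticker     varchar(10) NOT NULL UNIQUE
);

CREATE TABLE dbo.Prices (
    SecurityId int  NOT NULL REFERENCES dbo.Securities (SecurityId),
    TradeDate  date NOT NULL,
    [Close]    decimal(19,4) NOT NULL,
    PRIMARY KEY (SecurityId, TradeDate)
);
```

My guess is that the narrower rows in the normalized version help scan-heavy reads, while the extra join costs something on every query, but I'd appreciate confirmation either way.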
Edit:
One of the comments below claims I was vague, so here are the details. I have over 20 million rows (20 years of stock data and 2 years of options data), and I am trying to improve performance by an order of magnitude. I only care about read/calculation performance; write performance doesn't matter, because the only writes happen during data refreshes, and those use BulkCopy.
I already have some indexes, but I'm hoping I've set them up wrong, because I need a dramatic speedup. I also need to start looking at my queries.
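For example, one thing I now understand I should check is whether the indexes actually cover the hot queries, along these lines (hypothetical names again):

```sql
-- Hypothetical covering index: lets a query that scans a date range across
-- all securities read only the index, never the (wider) base table
CREATE NONCLUSTERED INDEX IX_Prices_Date
ON dbo.Prices (TradeDate)
INCLUDE (SecurityId, [Close], Volume);
```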
The comments and answers here have already helped me understand how to start profiling my database. I'm a programmer, not a DBA (so Marco's book recommendation is perfect); I don't have much database experience, and I've never profiled a database before. I will try these suggestions and report back if necessary. Thank you!
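In case it helps anyone following along, here is where I plan to start, based on those suggestions (a sketch; I haven't verified it against my server yet):

```sql
-- Find the most expensive statements by logical reads, since the workload
-- is read-only; pulls the statement text out of the cached query stats
SELECT TOP 10
    qs.total_logical_reads,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```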