I'm working on a project that will have a single table holding lots and lots of rows on either a SQL Server or SQL Azure installation. I'm trying to estimate how many rows I can store per GB. Is it a matter of simply adding up the memory size of the individual column data types? Is there other overhead to consider?
A:
It is not just a matter of summing the column data-type sizes: every row also carries a row header, a null bitmap, and a variable-length column offset array, and every 8 KB page carries its own 96-byte header plus a 2-byte row-offset slot per row. The Books Online estimation topics walk through all of this:

- Estimating the Size of a Table (SQL 2000)
- Estimating the Size of a Clustered Index (SQL 2005/2008)
- Estimating the Size of a Nonclustered Index (SQL 2005/2008)
- Estimating the Size of a Heap (SQL 2005/2008) (see the sketch after this list)
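
As a rough illustration, here is a minimal sketch applying the heap formula from that last topic. The column counts and sizes are made-up assumptions, not numbers from your schema, and a real table will lose additional space to free space per page and fragmentation:

```sql
-- Rows-per-GB estimate for a heap, following the
-- "Estimating the Size of a Heap" formula from Books Online.
-- All column counts and sizes below are illustrative assumptions.
DECLARE @num_cols        int = 6;    -- total number of columns
DECLARE @fixed_data_size int = 40;   -- total bytes of fixed-length columns
DECLARE @num_var_cols    int = 2;    -- number of variable-length columns
DECLARE @max_var_size    int = 100;  -- expected bytes of variable-length data per row

DECLARE @null_bitmap int = 2 + ((@num_cols + 7) / 8);               -- null bitmap
DECLARE @var_data    int = 2 + (@num_var_cols * 2) + @max_var_size; -- offset array + data
DECLARE @row_size    int = @fixed_data_size + @var_data + @null_bitmap + 4; -- +4 row header
-- Add 14 bytes to @row_size if row versioning is in use (see below).

DECLARE @rows_per_page int = 8096 / (@row_size + 2); -- 8096 usable bytes, 2-byte slot per row

SELECT @row_size      AS est_row_bytes,
       @rows_per_page AS rows_per_page,
       @rows_per_page * (1073741824 / 8192) AS est_rows_per_gb; -- 131072 pages per GB
```

With these assumed sizes the estimate works out to a 153-byte row, 52 rows per page, and roughly 6.8 million rows per GB; substitute your own column sizes to get a figure for your table. The clustered and nonclustered index formulas differ slightly (non-leaf levels add some overhead), which is why the topics above are split per structure.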
If you use Row Versioning, add 14 bytes per row; see Row Versioning Resource Usage.
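To continue the illustrative numbers from the sketch above: versioning raises the 153-byte row to 167 bytes, dropping rows per page from 52 to 47 (8096 / 169), roughly a 10% hit on rows per GB.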
With SQL 2008 you should consider Page Compression.
With SQL 2008 R2 you should also consider Unicode Compression.
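
Compression ratios depend entirely on your data, so rather than guessing you can load a representative sample and ask the engine directly. A minimal sketch, where dbo.MyBigTable is a hypothetical stand-in for your table (the procedure exists on SQL Server 2008+, but not, as far as I know, on SQL Azure):

```sql
-- Estimate how much space PAGE compression would save on an existing table.
-- dbo.MyBigTable is a hypothetical name; replace it with your own table.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'MyBigTable',
    @index_id         = NULL,   -- NULL = all indexes on the table
    @partition_number = NULL,   -- NULL = all partitions
    @data_compression = 'PAGE'; -- or 'ROW' / 'NONE' for comparison
```

Compare the current and estimated sizes it returns before committing to compression, since the CPU cost of PAGE compression is only worth paying when the savings are substantial.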
Remus Rusanu, 2010-04-19 20:28:25