views:

4528

answers:

9

I develop software that stores a lot of data in one of its database tables (SQL Server version 8, 9, or 10, i.e. SQL Server 2000, 2005, or 2008). About 100,000 records are inserted into that table per day, which comes to about 36 million records per year. Fearing that I would lose performance, I decided to create a new table every day (a table with the current date in its name) to lower the number of records per table.

Could you please tell me whether this was a good idea? Is there a record limit for SQL Server tables? Or do you know roughly how many records can be stored in a table before performance degrades significantly?

Thanks for your answers in advance.

+4  A: 

I don't know MSSQL specifically, but 36 million rows is not large for an enterprise database (100,000 rows sounds like a configuration table to me :-). I'm not a big fan of some of MS's software, but this isn't Access: their enterprise DBMS can presumably handle pretty substantial database sizes.

I suspect a day may be too fine a resolution to divide it up by, if indeed it needs dividing at all.

paxdiablo
+3  A: 

I do not know of a row limit, but I know of tables with more than 170 million rows. You can speed things up using partitioned tables (SQL Server 2005+) or views that union multiple tables.
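A minimal sketch of what monthly table partitioning might look like (SQL Server 2005+ Enterprise; all table, column, and partition names here are hypothetical):

```sql
-- Hypothetical example: route rows into monthly partitions by date.
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

-- Map every partition to PRIMARY here; each could instead go to its
-- own filegroup on a separate disk.
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

-- The table stays one logical object; queries that filter on
-- EventDate only touch the relevant partitions.
CREATE TABLE dbo.EventLog (
    EventDate datetime NOT NULL,
    Payload   varchar(100) NULL
) ON psMonthly (EventDate);
```

This keeps a single table name for applications while giving you the per-period storage split the question was after.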

Sascha
+1 for partitioned tables.
Stanislav Kniazev
+3  A: 

It's hard to give a generic answer to this. It really depends on a number of factors:

  • how large each row is
  • what kind of data you store (strings, blobs, numbers)
  • what you do with your data (keep it as an archive, query it regularly)
  • whether you have indexes on the table, and how many
  • what your server's specs are

etc.

As answered elsewhere here, 100,000 rows a day, and thus a table per day, is overkill - I'd suggest monthly or weekly tables, perhaps even quarterly. The more tables you have, the bigger a maintenance/query nightmare it will become.

Rashack
A: 

You can keep populating the table as long as you have enough disk space. For better performance you can try migrating to SQL Server 2005 and then partitioning the table, putting the partitions on different disks (a RAID configuration could really help here). Note that partitioning is only available in the Enterprise edition of SQL Server 2005. You can see a partitioning example at this link: http://technet.microsoft.com/en-us/magazine/cc162478.aspx

You can also create views over the most-used portion of the data; that is another possible solution.
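On pre-2005 versions (where real partitioning isn't available), the view approach usually means a partitioned view: per-period tables unioned behind a single name. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical example: monthly tables behind one view.
-- The CHECK constraints on the date column let the optimizer
-- skip tables whose range cannot match a query's filter.
CREATE TABLE dbo.Events_2010_01 (
    EventDate datetime NOT NULL
        CHECK (EventDate >= '2010-01-01' AND EventDate < '2010-02-01'),
    Payload   varchar(100) NULL
);

CREATE TABLE dbo.Events_2010_02 (
    EventDate datetime NOT NULL
        CHECK (EventDate >= '2010-02-01' AND EventDate < '2010-03-01'),
    Payload   varchar(100) NULL
);
GO

CREATE VIEW dbo.Events AS
SELECT EventDate, Payload FROM dbo.Events_2010_01
UNION ALL
SELECT EventDate, Payload FROM dbo.Events_2010_02;
```

Queries go against dbo.Events, so applications don't need to know about the per-month split.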

Hope this helped...

+1  A: 

It depends, but I would say it is better to keep everything in one table for the sake of simplicity.

100,000 rows a day is not really an enormous amount (depending on your server hardware). I have personally seen MSSQL handle up to 100M rows in a single table without any problems. As long as you keep your indexes in order, it should be fine. The key is to have plenty of memory so that indexes don't have to be swapped out to disk.
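For a large append-only table, "keeping your indexes in order" mostly comes down to choosing an insert-friendly clustered key and doing periodic maintenance. A sketch, assuming a hypothetical dbo.Readings table with a ReadingDate column (SQL Server 2005+ syntax):

```sql
-- Hypothetical example: clustering on the ever-increasing date column
-- means new rows are appended at the end, avoiding page splits.
CREATE CLUSTERED INDEX IX_Readings_Date
    ON dbo.Readings (ReadingDate);

-- Periodic maintenance: REORGANIZE is an online, lightweight
-- defragmentation; REBUILD is more thorough but heavier.
ALTER INDEX IX_Readings_Date ON dbo.Readings REORGANIZE;
```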

On the other hand, it depends on how you are using the data. If you run lots of queries, and it's unlikely that data spanning multiple days will be needed together (so you won't need to join the tables), it will be faster to separate it out into multiple tables. This is often done in applications such as industrial process control, where you might be reading the values of, say, 50,000 instruments every 10 seconds. In that case speed is extremely important, but simplicity is not.

Nathan Reed
A: 

This finds the user table with the most rows in the current database (the last column is log10 of the row count, i.e. its order of magnitude):

SELECT TOP 1
    sysobjects.[name],
    MAX(sysindexes.[rows]) AS TableRows,
    CAST(CASE MAX(sysindexes.[rows])
             WHEN 0 THEN 0
             ELSE LOG10(MAX(sysindexes.[rows]))
         END AS NUMERIC(5,2)) AS L10_TableRows
FROM sysindexes
INNER JOIN sysobjects ON sysindexes.[id] = sysobjects.[id]
WHERE sysobjects.xtype = 'U'
GROUP BY sysobjects.[name]
ORDER BY MAX(rows) DESC

ravi
+3  A: 

These are some of the maximum capacity specifications for SQL Server 2008 R2:

  • Database size: 524,272 terabytes
  • Databases per instance of SQL Server: 32,767
  • Filegroups per database: 32,767
  • Files per database: 32,767
  • File size (data): 16 terabytes
  • File size (log): 2 terabytes
  • Rows per table: limited by available storage
  • Tables per database: limited by the number of objects in a database

Malak
A: 

We have tables in SQL Server 2005 and 2008 with over 1 billion rows (30 million added daily). I can't imagine going down the rat's nest of splitting that out into a new table each day.

Much cheaper to add the appropriate disk space (which you need anyway) and RAM.

Chris Lively
A: 

We once overflowed an integer primary key on a table (a signed 32-bit int tops out at 2,147,483,647, roughly 2.1 billion rows). If there's a row limit, you're not likely to ever hit it at a mere 36 million rows per year.

Mark