Currently I am designing a database for use in our company. We are using SQL Server 2008. The database will hold data gathered from several customers. The goal of the database is to acquire aggregate benchmark numbers over several customers.

Recently, I have become worried that one table in particular will be getting very big. Each customer has approximately 20,000,000 rows of data, and there will soon be 30 customers in the database (if not more). A lot of queries will be run against this table. I am already noticing performance issues and users being temporarily locked out.

My question: will we be able to handle this table in the future, or is it better to split it up into smaller tables, one per customer?


Update: It has now been about half a year since we first created the tables. Following the advice below, I created a handful of huge tables. Since then, I have been experimenting with indexes and settled on a clustered index on the first two columns (Hospital code and Department code), the columns on which we would have partitioned the table had we had Enterprise Edition. This setup worked fine until recently; as Galwegian predicted, performance issues are springing up. Rebuilding an index takes ages, users lock each other out, queries frequently take longer than they should, and for most queries it pays off to first copy the relevant part of the data into a temp table, create indexes on the temp table, and run the query there. This is not how it should be. Therefore, we are considering buying Enterprise Edition for its partitioned tables. If the purchase cannot go through, I plan to use a workaround to accomplish partitioning in Standard Edition.
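
For reference, a minimal T-SQL sketch of the setup described above. The table and column names are hypothetical, and the view illustrates the classic Standard Edition workaround: a partitioned view over per-segment tables with CHECK constraints.

    -- Hypothetical names; the clustered index mirrors the intended
    -- partitioning key (hospital first, then department).
    CREATE CLUSTERED INDEX IX_Benchmark_HospitalDept
        ON dbo.BenchmarkData (HospitalCode, DepartmentCode);

    -- Standard Edition workaround: split the data into per-range tables
    -- with CHECK constraints on HospitalCode, then expose them as one
    -- partitioned view. The optimizer skips tables whose constraint
    -- excludes the queried hospital.
    CREATE VIEW dbo.BenchmarkAll
    AS
    SELECT * FROM dbo.BenchmarkData_HospitalA  -- CHECK (HospitalCode = 'A')
    UNION ALL
    SELECT * FROM dbo.BenchmarkData_HospitalB; -- CHECK (HospitalCode = 'B')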

A: 

One table, then worry about performance. That is, assuming you are collecting the exact same information for each customer. That way, if you have to add/remove/modify a column, you are only doing it in one place.

brien
+3  A: 

Splitting tables for performance reasons is called sharding. Also, a database schema can be more or less normalized. A normalized schema has separate tables with relations between them, and data is not duplicated.

Sjoerd
Is my nomenclature off? I call splitting tables partitioning. I call sharding the physical or logical separation of data sets for particular purposes, no?
Xepoch
+3  A: 

I am assuming you have your database properly normalized. It shouldn't be a problem to deal with the data volume you refer to on a single table in SQL Server; what I think you need to do is review your indexes.
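
One way to review them on SQL Server 2008 is to query the missing-index DMVs. A rough sketch (the "benefit" figure is only a heuristic, and the suggestions should be sanity-checked before creating anything):

    -- Rank the index suggestions the optimizer has recorded since the
    -- last service restart; treat the results as hints, not gospel.
    SELECT TOP (10)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks * s.avg_user_impact AS rough_benefit
    FROM sys.dm_db_missing_index_details d
    JOIN sys.dm_db_missing_index_groups g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats s
         ON s.group_handle = g.index_group_handle
    ORDER BY rough_benefit DESC;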

Otávio Décio
I have my data normalized, however the table I am referring to is completely denormalized, since it will be queried a lot and will not often change.
littlegreen
If you are not updating the table then I wonder why you are having users being locked out.
Otávio Décio
Probably because we are still in a design phase where we are bulk loading data into the database quite often. But I get your point, the locking problem will disappear in a production situation. Thanks!
littlegreen
+3  A: 

Datawarehouses are supposed to be big (the clue is in the name). Twenty million rows is about medium by warehousing standards, although six hundred million can be considered large.

The thing to bear in mind is that such large tables have a different physics, like black holes. So tuning them takes a different set of techniques. The other thing is, users of a datawarehouse must understand that they are dealing with huge amounts of data, and so they must not expect sub-second response (or indeed sub-minute) for every query.

Partitioning can be useful, especially if you have a clear demarcation such as, in your case, CUSTOMER. But be aware that partitioning can degrade the performance of queries which cut across the grain of the partitioning key, so it is not a silver bullet.

APC
+1 for black holes ;)
littlegreen
+5  A: 

Start out with one large table, and then apply 2008's table partitioning capabilities where appropriate, if performance becomes an issue.
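
Roughly, the 2008 setup looks like this (Enterprise Edition only; the table name, function names, and boundary values below are invented for illustration):

    -- Partition by hospital code; RANGE RIGHT means each boundary
    -- value starts a new partition.
    CREATE PARTITION FUNCTION pfHospital (char(4))
        AS RANGE RIGHT FOR VALUES ('H001', 'H002', 'H003');

    -- Map every partition to PRIMARY for simplicity; production setups
    -- often spread partitions across filegroups.
    CREATE PARTITION SCHEME psHospital
        AS PARTITION pfHospital ALL TO ([PRIMARY]);

    -- Create (or rebuild) the big table on the scheme.
    CREATE TABLE dbo.BenchmarkData
    (
        HospitalCode   char(4)        NOT NULL,
        DepartmentCode char(4)        NOT NULL,
        MeasureValue   decimal(18, 4) NOT NULL
    ) ON psHospital (HospitalCode);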

Galwegian
If I have to give points to someone... this answer is concise, and the table partitioning hint led me to a lot of specific SQL Server 2008 info that I can use. So thanks Galwegian, and everyone else as well!
littlegreen
A: 

If you're on MS SQL server and you want to keep the single table, table partitioning could be one solution.

kragan
+2  A: 

Since you've tagged your question as 'datawarehouse' as well, I assume you know some things about the subject. Depending on your goals you could go for a star schema (a multidimensional model with fact and dimension tables). Store all fast-changing data in one table (per subject) and the slow-changing data in separate dimension/'snowflake' tables.
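
To make the fact/dimension split concrete, a minimal sketch with invented names:

    -- Slow-changing descriptive data lives in dimension tables...
    CREATE TABLE dbo.DimHospital
    (
        HospitalKey  int IDENTITY PRIMARY KEY,
        HospitalCode char(4)       NOT NULL,
        HospitalName nvarchar(100) NOT NULL
    );

    -- ...while the fast-changing measurements go into one fact table
    -- per subject, keyed to the dimensions.
    CREATE TABLE dbo.FactBenchmark
    (
        HospitalKey  int            NOT NULL REFERENCES dbo.DimHospital,
        DateKey      int            NOT NULL,
        MeasureValue decimal(18, 4) NOT NULL
    );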

Another option is the Data Vault method by Dan Linstedt, which is a bit more complex but provides you with full flexibility.

http://danlinstedt.com/category/datavault/

Ben Fransen
Hehe.. I wish I knew even more about data warehousing. You aren't by any chance looking for a job, are you? :)
littlegreen
A: 

Keep one table - 20M rows isn't huge, and customers aren't exactly the kind of table that you can easily 'archive off'. The aggravation of searching multiple tables to find a customer isn't worth the effort (SQL is likely to be much more efficient at B-tree searching than your own invention is).

You will need to look into the performance and locking issues, however - these will prevent your db from scaling.

nonnb
A: 

You can also create supplemental tables that hold pre-calculated summaries of historical data, if there are common queries.
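
One way to do that in SQL Server is an indexed view, which the engine then maintains automatically. A sketch with hypothetical names (note the SCHEMABINDING and COUNT_BIG(*) requirements for indexed views):

    -- Materialize a per-hospital aggregate; SQL Server keeps it in
    -- sync as the base table changes.
    CREATE VIEW dbo.vHospitalTotals
    WITH SCHEMABINDING
    AS
    SELECT HospitalCode,
           SUM(MeasureValue) AS TotalValue,
           COUNT_BIG(*)      AS RowCnt  -- required in indexed views
    FROM dbo.BenchmarkData
    GROUP BY HospitalCode;
    GO
    CREATE UNIQUE CLUSTERED INDEX IX_vHospitalTotals
        ON dbo.vHospitalTotals (HospitalCode);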

Jacob G
+4  A: 

Partitioning is definitely something to look into. I had a database with two sharded tables. Each table contained around 30-35 million records. I have since merged these into one large table and assigned some good indexes. So far, I've not had to partition this table as it's working a treat, but I'm keeping partitioning in mind. One thing that I have noticed, compared to when the data was sharded, is the data import. It is now slower, but I can live with that as the import tool can be re-written ;o)
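
If the slow part of an import like that is index maintenance, one common mitigation (sketched with hypothetical names, not verified against this particular setup) is to disable nonclustered indexes before the load and rebuild them afterwards:

    -- Disable a nonclustered index so the bulk load doesn't maintain it...
    ALTER INDEX IX_BenchmarkData_Dept ON dbo.BenchmarkData DISABLE;

    -- ...run the load (BULK INSERT, bcp, SSIS, etc.)...

    -- ...then rebuild, which also refreshes the index's statistics.
    ALTER INDEX IX_BenchmarkData_Dept ON dbo.BenchmarkData REBUILD;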

Ardman
A: 

One table and use table partitioning.

I think the advice to use NOLOCK is unjustified based on the information given. NOLOCK means you will get inaccurate and unreliable results from your queries (dirty and phantom reads). Before using NOLOCK you need to be sure that's not going to be a problem for your customers.
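
For contrast, a sketch of the two approaches (database and table names invented): the hint trades accuracy for concurrency, while row versioning keeps readers consistent without blocking writers.

    -- NOLOCK / READ UNCOMMITTED: readers don't block, but can see
    -- dirty (uncommitted, possibly rolled-back) rows.
    SELECT COUNT(*) FROM dbo.BenchmarkData WITH (NOLOCK);

    -- Alternative: row versioning at the database level. Readers get
    -- a consistent committed snapshot instead of blocking or reading
    -- dirty data.
    ALTER DATABASE BenchmarkDB SET READ_COMMITTED_SNAPSHOT ON;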

dportas
Dirty Reads Yes - It won't affect Phantoms though as these occur under the default isolation level as well.
Martin Smith
+2  A: 

In a properly designed database, that is not a huge amount of records and SQL Server should handle it with ease.

A partitioned single table is usually the best way to go. Trying to maintain separate individual customer tables is very costly in terms of time and effort and far more prone to errors.

Also examine your current queries if you are experiencing performance issues. If you don't have proper indexing (did you, for instance, index the foreign key fields?), queries will be slow. If your queries are not sargable, they will be slow. If you used correlated subqueries or cursors, they will be slow. Are you returning more data than is strictly needed? If you have SELECT * anywhere in your production code, get rid of it and return only the fields you need. If you used views that call views that call views, or if you used an EAV table, you will have performance issues at this level. If you allowed a framework to autogenerate SQL code, you may well have badly performing queries. Remember, Profiler is your friend.

Of course you could also have a hardware issue; you need a pretty good-sized dedicated server for that number of records. It won't work to run this on your web server or a small box.
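
To illustrate the sargability point with a made-up example (the MeasureDate column is hypothetical): wrapping the indexed column in a function prevents an index seek, while putting a plain range on the bare column allows one.

    -- Non-sargable: the function on the column forces a scan.
    SELECT *
    FROM dbo.BenchmarkData
    WHERE YEAR(MeasureDate) = 2010;

    -- Sargable rewrite: a range on the bare column can use an index,
    -- and only the needed columns are returned.
    SELECT HospitalCode, MeasureValue
    FROM dbo.BenchmarkData
    WHERE MeasureDate >= '20100101' AND MeasureDate < '20110101';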

I suggest you hire a professional DBA with performance tuning experience. It is quite complex stuff. Databases designed by application programmers are often bad performers once they get a real number of users and records. A database MUST be designed with data integrity, performance and security in mind. If you didn't do that, the chances of having them are slim indeed.

HLGEM
I am not using a framework, I am using indexes, and we do have a kickass server. However, it is true that I am a newbie at the subject, and we are looking for a professional DBA to add to the team. I am also not yet using Profiler, so thanks for that tip.
littlegreen
+1  A: 

Is this a single flat table (no particular model)? Typically in data warehouses, you either have a normalized data model (third normal form at least - usually in an entity-relationship model) or you have dimensional data (Kimball method or variations - usually fact tables with associated dimension tables in a set of stars).

In both cases, indexes play a large part, and partitioning can also play a part in getting queries over very large data sets to perform (although partitioning is not usually about performance but about maintenance - being able to add and drop partitions quickly). But it really depends on the order of aggregation and the types of queries.
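
The maintenance benefit usually means partition switching: moving a whole partition in or out as a metadata-only operation. Roughly, reusing the hypothetical names from the sketches above:

    -- Slide an old partition out of the big table into an identically
    -- structured staging table; near-instant regardless of row count.
    ALTER TABLE dbo.BenchmarkData
        SWITCH PARTITION 1 TO dbo.BenchmarkData_Archive;

    -- Extend the partition function so a new hospital code gets its
    -- own partition.
    ALTER PARTITION SCHEME psHospital NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pfHospital() SPLIT RANGE ('H004');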

Cade Roux