views: 54
answers: 1

Our in-house system is built on SQL Server 2008 with a 40-table 6NF schema. Most of the tables have foreign keys to 3 other tables; a key few reference as many as 7. The system will ultimately support hundreds of employees working with tens of thousands of customers and store hundreds of thousands of transactional records -- prime-time access should peak at around 1000 rows per second.

Is there any reason to think that this depth of RDBMS inter-relation would overburden a system built using modern hardware with ample RAM? I'm attempting to evaluate whether we need to adjust our design or project direction/goals before we approach the final development phase (in a couple of months).

+3  A: 

In SQL Server terms, what you describe is a smallish database. With correct design, SQL Server can handle terabytes of data.

That is not a guarantee that your current design will perform well, though. There are many ways to write poorly performing T-SQL and many bad database design choices to be made.
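
To give one concrete illustration of the kind of T-SQL pitfall meant here (the table and column names below are invented, not from the question): wrapping an indexed column in a function makes the predicate non-SARGable and forces a scan, whereas a plain range predicate on the same column lets the optimizer use an index seek.

    -- Hypothetical table and columns, purely for illustration.
    -- Non-SARGable: the function around TransactionDate prevents an index seek.
    SELECT CustomerId, TotalDue
    FROM dbo.CustomerTransaction
    WHERE YEAR(TransactionDate) = 2010;

    -- SARGable rewrite: a range on the bare column can seek an index on TransactionDate.
    SELECT CustomerId, TotalDue
    FROM dbo.CustomerTransaction
    WHERE TransactionDate >= '20100101'
      AND TransactionDate <  '20110101';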

If I were you, I would load test data to twice the size you expect the tables to reach and then start testing your code against it. Load testing might also be a good idea. It is far easier to fix database performance problems before they go to production. Far, far easier!
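
As a minimal sketch of how that might look (the table, columns, and row counts here are assumptions chosen for illustration, not taken from the question), a numbers-style CTE built from a cross join of system catalog views can bulk a table out to roughly twice its expected production size on SQL Server 2008:

    -- Generate ~200,000 synthetic rows into a hypothetical transaction table,
    -- using a cross join of sys.all_objects as a cheap row source.
    ;WITH Numbers AS (
        SELECT TOP (200000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.CustomerTransaction (CustomerId, TransactionDate, TotalDue)
    SELECT CAST(n % 50000 AS INT) + 1,                      -- spread rows across ~50,000 customers
           DATEADD(DAY, -CAST(n % 730 AS INT), GETDATE()),  -- dates over roughly the last two years
           CAST((n % 1000) + 0.99 AS DECIMAL(10, 2))        -- arbitrary monetary amounts
    FROM Numbers;

Once the tables are loaded, run your normal queries and reports against the inflated data and check the execution plans for scans, key lookups, and missing indexes before anything ships.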

HLGEM
thanks HL, I'll certainly try to do that before we go live -- good advice
Hardryv