I have a background process that is constantly inserting into a table in real-time. It could be thousands of inserts an hour (with a few updates). This same table is used by the web application to get some data needed to display to the user.

Currently, whenever the background process runs its inserts/updates, the web app seems blocked and cannot access the database.

I'm wondering what a good solution could be. Should I insert into a separate table and then periodically copy it over to a read-only table that the web app queries? Is there another method?

Any help and suggestions are greatly appreciated!

+2  A: 

Some things to look at would be:

1) Are your inserts/updates or your select queries part of transactions? If so, make sure you've set the most favorable transaction isolation level for your particular situation.

2) Do you have too many indexes on this giant table? Indexes speed up selects but slow down inserts and updates. Make sure you have only the indexes you absolutely need.

3) Related to #2: are you missing indexes that would speed up your selects? Are your select statements doing full table scans? Have a look at the query execution plan to see what's actually happening; Query Analyzer can help with that, and the diagnostic sketch after this list shows one way to check from a query window.

4) Consider replicating your giant table out to one or more read-only database servers that handle the select traffic, while the read/write master (i.e. your existing table) handles mainly the insert/update traffic.

5) Do you have triggers that are firing on your insert/update operations that may be causing performance or locking issues?

6) Is your application experiencing deadlocked transactions? Have a look at this MSDN article on the subject and see if it helps. Also, make sure you understand the dining philosophers problem so you can avoid deadlocked transactions.
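For points 2, 3, and 5, a quick diagnostic pass from a query window might look like the sketch below. This is T-SQL assuming SQL Server 2005 or later; the table name dbo.Readings and the ReadingTime column are hypothetical stand-ins for your own schema.

    -- List the indexes currently defined on the table (points 2 and 3).
    EXEC sp_helpindex 'dbo.Readings';

    -- Show the actual execution plan for a representative select (point 3)
    -- and look for full table or clustered index scans.
    SET STATISTICS PROFILE ON;
    SELECT TOP 100 * FROM dbo.Readings ORDER BY ReadingTime DESC;
    SET STATISTICS PROFILE OFF;

    -- List any triggers defined on the table (point 5).
    SELECT name, is_disabled
    FROM sys.triggers
    WHERE parent_id = OBJECT_ID('dbo.Readings');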

Asaph
+1  A: 

If you're on Enterprise Edition, the ideal solution would be to use sliding window partitioning. Essentially, you create an identically structured staging table on the same filegroup, load the data into it, and then SWITCH it in as a partition of your production table. The actual operation is a metadata-only change, so it happens more or less instantaneously. Have a look here for more info: http://technet.microsoft.com/en-us/library/ms191160.aspx
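A minimal sketch of what the switch itself looks like, assuming hypothetical tables dbo.Readings (the partitioned production table) and dbo.Readings_Staging (same columns, same indexes, same filegroup as the target partition, plus a check constraint matching that partition's boundary):

    -- Load the staging table while readers keep querying dbo.Readings.
    -- (Bulk insert/load step omitted here.)

    -- Switch the fully loaded staging table in as partition 2 of the
    -- production table. This is a metadata-only operation, so it
    -- completes almost instantly regardless of row count.
    ALTER TABLE dbo.Readings_Staging
        SWITCH TO dbo.Readings PARTITION 2;

The partition number and the staging table's check constraint are the parts that take care to get right; the TechNet article above walks through the full sliding window pattern.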

Aaron Alton
A: 

If you just need to read data from the table in the web application, you might want to try using READ_UNCOMMITTED isolation level for the query. It doesn't take any locks and reads uncommitted data. I wouldn't recommend doing this for something like a banking application, but most typical business web apps probably wouldn't have a problem with it. Make sure to set the isolation level back to what you usually use (READ COMMITTED?) after the query.
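A minimal sketch of both ways to do this, assuming a hypothetical dbo.Readings table:

    -- Option 1: change the isolation level for the connection, then
    -- restore it after the query.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT TOP 50 * FROM dbo.Readings ORDER BY ReadingTime DESC;
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- back to the default

    -- Option 2: scope the dirty read to a single table with a hint,
    -- leaving the connection's isolation level untouched.
    SELECT TOP 50 * FROM dbo.Readings WITH (NOLOCK)
    ORDER BY ReadingTime DESC;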

If READ_UNCOMMITTED isn't acceptable for you, familiarize yourself with the row versioning-based isolation levels. For instance, snapshot isolation takes an optimistic approach to data modification, reducing the need for pessimistic locking of rows.
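A minimal sketch of enabling and using snapshot isolation (SQL Server 2005 or later), with MyAppDb as a hypothetical database name:

    -- One-time database setting; row versions are kept in tempdb.
    ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Readers opt in per session and see a consistent, committed
    -- snapshot of the data without blocking writers or being blocked.
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Readings;
    COMMIT;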

Kaitsu
+2  A: 

Specifically for SQL Server, you might want to alter your database to turn READ_COMMITTED_SNAPSHOT on.

This prevents your select queries from blocking behind pending inserts/updates: under READ_COMMITTED_SNAPSHOT, readers see the last committed version of a row instead of waiting for writers' locks to be released.
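A minimal sketch, with MyAppDb as a hypothetical database name. The setting needs exclusive access to the database to take effect; WITH ROLLBACK IMMEDIATE disconnects other sessions so the change can proceed:

    ALTER DATABASE MyAppDb
        SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;

Once it's on, ordinary READ COMMITTED selects read the last committed row versions from tempdb instead of waiting on writers' locks, with no changes needed to the application's queries.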

Wolfgang