Hello

In our current project we are interfacing with a third-party data provider. They need to insert data into one of our tables. These inserts can be frequent: every 1, 5, or 30 minutes, depending on the amount of new data they have to provide. They use the read committed isolation level. On our end we have a Windows service that calls a web service every 2 minutes to check whether there is new data in this table. Our isolation level is repeatable read. We retrieve the new records and update a column on those rows.

Now the problem is that sometimes this third-party provider needs to insert a lot of data, let's say 5000 records. They do this in transactions (5 rows per transaction), but they don't close the connection: they run one transaction and then the next until all records are inserted. This causes issues for our process; we receive a timeout.

If this goes on for a long time the database becomes completely unstable. For instance, they may have stopped, but the table somehow still stays unavailable. When I try to do a select on the table, I get several records, but at a certain moment I don't get any response anymore. It just says "retrieving data" and nothing comes back until I get a timeout exception.

The only solution is to restart the database, and then I can see the other records.

How can we solve this? What is the ideal isolation level in this scenario?

+1  A: 

Have you considered Snapshot isolation? From an architectural perspective it would be better if you created an integration mechanism for the 3rd party and handled the inserts yourself; perhaps get them to send you updates via a message queue?
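
Note that snapshot isolation has to be enabled at the database level before any session can use it. Something along these lines (substitute your own database name for YourDatabase):

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- optionally make plain READ COMMITTED use row versioning too,
-- so your readers no longer block behind the provider's inserts
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- then, in the polling service's session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
    -- read the new rows and update your status column here
COMMIT;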

RichardOD
Haven't worked with this level. Need to investigate.
Sven
+1  A: 

Perhaps you can create a staging table that the third-party data provider can write into. You can then write your own routine to pull data from there into your actual table, using a method of your choosing that works better for your application.
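
As a rough illustration (the table and column names below are only placeholders, not taken from your schema), the staging table can simply mirror the columns the provider sends plus a processed flag:

CREATE TABLE dbo.ProviderStaging
(
    StagingId    int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    -- ...whatever data columns the provider actually sends...
    ProviderKey  int            NOT NULL,
    Payload      nvarchar(4000) NULL,
    InsertedAt   datetime       NOT NULL DEFAULT (GETDATE()),
    Processed    bit            NOT NULL DEFAULT (0)
);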

EDIT based on OP's comment

in your question you say

Now the problem is that sometimes this third party provider needs to insert a lot of data, let's say 5000 records. They do this per transaction (5rows per transaction), but they don't close the connection. They do one transaction and then the next untill all records are inserted. This caused issues for our process, we receive a timeout.

Which indicates that you have identified the problem as how the third-party data provider inserts the data, and implies that you are unable to change it. My suggestion to have them insert into a staging table frees your application from any locking and/or blocking caused by the third-party data provider. You would then be free to bring the data into your application using any isolation level, any number of rows at a time, at any time or at a specific time, etc.
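
For example, a move routine along these lines (again, table and column names are placeholders) could run on your own schedule, copy a limited batch per pass, and mark the rows processed, all inside a short transaction you control:

DECLARE @batch TABLE (StagingId int PRIMARY KEY);

BEGIN TRAN;

    -- pick up to 500 unprocessed rows and lock them for this pass,
    -- skipping any rows the provider is still inserting
    INSERT INTO @batch (StagingId)
        SELECT TOP (500) StagingId
            FROM dbo.ProviderStaging WITH (UPDLOCK, READPAST)
            WHERE Processed = 0
            ORDER BY StagingId;

    -- copy them into the real table
    INSERT INTO dbo.YourRealTable (ProviderKey, Payload)
        SELECT s.ProviderKey, s.Payload
            FROM dbo.ProviderStaging s
            JOIN @batch b ON b.StagingId = s.StagingId;

    -- flag them so the next pass skips them
    UPDATE s
        SET Processed = 1
        FROM dbo.ProviderStaging s
        JOIN @batch b ON b.StagingId = s.StagingId;

COMMIT;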

However, I'm puzzled how inserting new rows into a table causes your application to lock up. If it is locking/blocking, why would your application be reading those new rows while they are being inserted anyway? Are you inserting into a clustered index with rows going into the middle of the table? Is your application doing table-scan selects inside a transaction? There must be something else going on.

When your system starts to die, try this; it will show you what is causing the block:

;WITH Blockers AS
(
    SELECT
        r.session_id AS spid
        ,r.cpu_time, r.reads, r.writes, r.logical_reads
        ,r.blocking_session_id AS BlockingSPID   -- the session holding the lock
        ,LEFT(OBJECT_NAME(st.objectid, st.dbid), 50) AS ShortObjectName
        ,LEFT(DB_NAME(r.database_id), 50) AS DatabaseName
        ,s.program_name
        ,s.login_name
        ,OBJECT_NAME(st.objectid, st.dbid) AS ObjectName
        -- the statement currently executing within the batch
        ,SUBSTRING(st.text, (r.statement_start_offset/2) + 1,
                   ((CASE r.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE r.statement_end_offset
                     END - r.statement_start_offset)/2) + 1
                  ) AS SQLText
        FROM sys.dm_exec_requests r
            JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
            CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
        --WHERE r.session_id > 50   -- uncomment to hide system sessions
)
SELECT Blockers.* FROM Blockers
KM
Will this not cause the same issue, but then while transferring the data from one table to the other?
Sven
To be honest, I'm also not sure where the issue really is. They sometimes insert massive amounts of data, and I have that process running every 2 minutes. The problem is (I think) that they don't free up the table, locks are kept, I don't know. I was inspired by your idea and was thinking the following: what if I give them a table of their own and I create my own table? On their table, I'll put a database trigger that copies the data over. Will the insert done by a database trigger block my access to that second table? What isolation level does a trigger use? I'll use this script tomorrow!
Sven
If you use a trigger you will see no change: the trigger will run within the transaction that the third-party data provider is using and will cause the same locking/blocking issues. If you create a staging table, you need to schedule a job to start the process so it runs within a different execution scope and is completely under your control; see the sketch below.
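
As an illustration only (the job name, procedure name, and schedule are made up for the example, and the batch-move logic above is assumed to be wrapped in a stored procedure), a SQL Server Agent job could run the move every couple of minutes:

EXEC msdb.dbo.sp_add_job
    @job_name = N'Move provider staging rows';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Move provider staging rows',
    @step_name = N'Run move procedure',
    @subsystem = N'TSQL',
    @database_name = N'YourDatabase',
    @command = N'EXEC dbo.MoveProviderStaging;';

EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Move provider staging rows',
    @name = N'Every 2 minutes',
    @freq_type = 4,                -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,         -- units of minutes
    @freq_subday_interval = 2;     -- every 2 minutes

EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Move provider staging rows';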
KM