We have a simple transactional table that only ever takes new records; rows are inserted continuously and are never updated or deleted. The volume of inserts is expected to be quite high, and there will also be periodic batch imports of transactions (>1000 rows) that may take several seconds to complete.
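For context, a batch import looks roughly like the sketch below (TRANSACTIONS and STAGING_IMPORT are illustrative names, not our real schema):

-- Illustrative batch import: one transaction that holds its locks
-- for several seconds while >1000 rows go in
begin tran;
insert into TRANSACTIONS (AMOUNT)
select AMOUNT
from STAGING_IMPORT; -- hypothetical staging table
commit;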
From this data we then run a set of SELECT statements, grouping on different columns, to return the required values.
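A representative reporting query (again with illustrative names):

-- Hypothetical aggregate over the transaction data
select ACCOUNT_ID, SUM(AMOUNT) as TOTAL
from TRANSACTIONS
group by ACCOUNT_ID;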
From our initial testing we have found a bottleneck: SQL Server blocks our SELECTs while an INSERT transaction is still in progress.
Below is a simple example that can be run to illustrate the problem.
-- Simple DB table
create table LOCK_TEST (
    LOCK_TEST_ID int identity,
    AMOUNT int
);
-- Run this in query window 1
begin tran;
insert into LOCK_TEST (AMOUNT) values (1);
WAITFOR DELAY '00:00:15'; -- 15 second delay
insert into LOCK_TEST (AMOUNT) values (1);
commit;
-- In Query 2 run this in parallel
select SUM(AMOUNT)
from LOCK_TEST;
I would expect Query 2 to return straight away with NULL (the table starts empty) until Query 1 commits, and then to show 2. We never want to see 1 returned from the second query.
The answers we have looked at suggest WITH (NOLOCK) on the SELECT statement. But that violates the transactional boundaries: the returned information may be financial in nature, and we do not want any uncommitted details appearing in our queries.
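For completeness, this is the variant we rejected; run during the 15-second window it returns immediately, but it performs a dirty read and shows the uncommitted 1:

-- Returns straight away, but reads uncommitted data:
-- shows 1 while Query 1 is still inside its transaction
select SUM(AMOUNT)
from LOCK_TEST with (NOLOCK);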
My problem seems to be on the INSERT side...
Why does the INSERT block the SELECT statement even though it's not modifying any existing data?
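You can watch the blocking from a third window while Query 2 is stalled; the SELECT sits on a shared-lock wait (LCK_M_S) behind the inserting session:

-- Run in a third window while Query 2 is blocked
select session_id, blocking_session_id, wait_type, wait_resource
from sys.dm_exec_requests
where blocking_session_id <> 0;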
Bonus points question: is this a "feature" of SQL Server, or would we find it on other database flavours too?
UPDATE: I have now had time to find a local Oracle database and run the same simple test. The test passes as I would expect: I can run the query as often as I want, and it returns NULL until the first transaction commits, then returns 2.
Is there a way to make SQL Server behave like this, or do we need to move to Oracle?
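For reference, the closest SQL Server equivalent we are aware of is the READ_COMMITTED_SNAPSHOT database option, which uses row versioning so readers see the last committed data instead of waiting on writers' locks; a minimal sketch, assuming a database named TestDb:

-- Make READ COMMITTED use row versioning; readers then see the
-- last committed values instead of blocking on open transactions.
-- Requires exclusive access; ROLLBACK IMMEDIATE disconnects others.
alter database TestDb
set READ_COMMITTED_SNAPSHOT on
with ROLLBACK IMMEDIATE;

With that option enabled, the repro above behaves like the Oracle test: Query 2 returns NULL straight away during the delay and 2 after the commit.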