Periodically, users running reports are blocking users doing CRUD operations and causing time-outs. I'd like to create duplicate copies of the current tables for the report users.

I was thinking of creating a job that backs up my app's database and restores it to a reporting database on the same server, so that users running reports are separated from those doing CRUD ops. The job will run every 10 minutes or so. Initial tests show the whole process takes about 30 seconds. Disk space is not an issue.
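For reference, a minimal sketch of what such a job might run (database names `AppDB`/`ReportDB`, logical file names, and paths are all hypothetical placeholders for your setup):

```sql
-- COPY_ONLY keeps this out of the normal backup chain so it
-- doesn't interfere with your regular backup/restore strategy.
BACKUP DATABASE AppDB
    TO DISK = N'D:\Backups\AppDB_reporting.bak'
    WITH INIT, COPY_ONLY;

-- Drop report users' connections so the restore can take the database over.
ALTER DATABASE ReportDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE ReportDB
    FROM DISK = N'D:\Backups\AppDB_reporting.bak'
    WITH REPLACE,
         MOVE 'AppDB'     TO N'D:\Data\ReportDB.mdf',
         MOVE 'AppDB_log' TO N'D:\Data\ReportDB_log.ldf';

ALTER DATABASE ReportDB SET MULTI_USER;
```

Note that the `SET SINGLE_USER WITH ROLLBACK IMMEDIATE` step kills any report queries that are still running, which is part of the downtime concern discussed below.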

Is this a good/bad idea? What pitfalls should I watch out for? Is there a better way to do this?

+2  A: 

This sounds like a good idea. The only concern I'd have is whether you need to update every 10 minutes. Running the update that often could also slow things down. Usually these refreshes are done overnight (to have the least impact on others), or if during the day, at only a few fixed points (say 10 am, 1 pm, and 4 pm).

FrustratedWithFormsDesigner
Running a 30-second backup every 10 minutes amounts to almost 5% downtime...
ck
Who will be impacted during the backup/restore? Is that just the report users during the restore part?
Gern Blandston
Also, what if I restore the db to a staging db, drop the reporting db when it's done and rename the staging db to the reporting db?
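The staging-and-rename idea from the comment above could look something like this (names and paths again hypothetical); the swap itself is quick, so report users are only locked out briefly rather than for the whole restore:

```sql
-- Restore into a staging database while ReportDB stays online.
RESTORE DATABASE ReportDB_Staging
    FROM DISK = N'D:\Backups\AppDB_reporting.bak'
    WITH MOVE 'AppDB'     TO N'D:\Data\ReportDB_Staging.mdf',
         MOVE 'AppDB_log' TO N'D:\Data\ReportDB_Staging_log.ldf';

-- Briefly kick out report users, then swap the databases.
ALTER DATABASE ReportDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE ReportDB;
ALTER DATABASE ReportDB_Staging MODIFY NAME = ReportDB;
ALTER DATABASE ReportDB SET MULTI_USER;
```

One caveat with the rename approach: anything that references the database by internal ID, or permissions granted at the database level, needs to be re-checked after each swap.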
Gern Blandston
@Gern Blandston: I think users of both databases could potentially be affected by a backup/restore. I don't know enough about SQL Server to know whether it can protect one group, but I would be pessimistic and assume it can't (until proven otherwise).
FrustratedWithFormsDesigner
+1  A: 

In most enterprise-class applications, the transaction database is kept separate from the reporting database. The transaction system is tuned for OLTP, while the reporting database may be denormalised to suit the reporting scenarios. So separating them is a natural suggestion.

Kangkan
+1  A: 

Be careful with doing frequent backups -- that could lead to a lot of downtime!

Common Solutions

It is indeed a common practice to create a separate instance just for reporting.

Some folks even go a step further and put reporting on a separate physical machine or cluster to further isolate that part of the load.

Both of those can be handled with replication (which avoids the downtime problem). Or you could just do a nightly backup and report against that.

The high-end approach to this is data warehousing, where you transform the reporting database into a read-optimized repository that's more efficient to report against. That tends to be very time-consuming to implement, so it's not the quick fix you're looking for.

Final Thoughts

One last thing: I've seen some shops on the cusp of this problem try to avoid dealing with it. Here's the takeaway: reporting tends to spike at certain times of the month or year, so if you're normally on the verge of killing your database server, the last week of the month might push you over the edge!

This question is very similar: http://stackoverflow.com/questions/190512/sql-server-separate-database-for-reports

Brian MacKay
+2  A: 

Before you do a forklift upgrade, you might see if putting

...from sometable WITH (NOLOCK)

on your reporting queries mitigates the problem. It may at least buy you some time to figure out what is optimal.
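To make the hint concrete (table and column names here are hypothetical), NOLOCK can be applied per table, or the same effect can be set session-wide, which is handy when a report joins many tables:

```sql
-- Per-table hint, as in the answer above:
SELECT OrderID, Total
FROM Orders WITH (NOLOCK);

-- Equivalent session-wide setting for a whole report batch:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Total
FROM Orders;
```

Both forms read uncommitted data, so the dirty-read caveat raised in the comments below applies equally to either.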

Steven A. Lowe
But remember you will be getting dirty reads that way, this could be a problem in reporting.
HLGEM
@HLGEM - true, but the data is 10 minutes old on the current plan anyway! We have had similar issues in the past running very large queries against an enterprise-size database, and NOLOCK made a *HUGE* difference. PK :-)
Paul Kohler
Just pointing out that it can make a difference in the results. If you can live with dirty reads, that's one thing, but the poster needed to know that's what he would get.
HLGEM