views:

798

answers:

2

At the business I work for, we are discussing methods to reduce the read load on our primary database.

One option that has been suggested is to have live one-way replication from our primary database to a slave database. Applications would then read from the slave database and write directly to the primary database. So...

  • Application Reads From Slave
  • Application Writes to Primary
  • Primary Updates Slave Automatically

What are the major pros and cons for this method?
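To make the flow concrete, here is a minimal sketch of that read/write split using sqlite3 connections as stand-ins for the primary and slave servers (the class and table names are illustrative, not part of the question):

```python
import sqlite3


class ReadWriteRouter:
    """Sends writes to the primary and reads to the slave.

    The connections are plain DB-API connections; sqlite3 is used
    here only as a stand-in for two real database servers.
    """

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def write(self, sql, params=()):
        # All writes go directly to the primary.
        self.primary.execute(sql, params)
        self.primary.commit()

    def read(self, sql, params=()):
        # All reads are served from the slave copy.
        return self.replica.execute(sql, params).fetchall()
```

Note that in this sketch nothing moves data from primary to slave; in the real setup that is the replication mechanism's job, and the gap between a write landing on the primary and appearing on the slave is exactly the replication-lag issue discussed in the answers below.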

+2  A: 

A few cons:

  • Two points of failure instead of one
  • Application logic will have to account for replication lag: something written to the primary won't be readable from the secondary immediately
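One common workaround for the lag problem is "read your own writes" routing: for a short grace period after a session writes, its reads are served from the primary instead of the secondary. A minimal sketch, assuming DB-API connections (sqlite3 as a stand-in) and a single-session router; a real implementation would track recency per session or per key:

```python
import sqlite3
import time


class LagAwareRouter:
    """Routes reads to the primary for stale_window seconds after a
    write, so the session sees its own writes despite replication lag.
    Otherwise reads go to the replica. Names are illustrative.
    """

    def __init__(self, primary, replica, stale_window=2.0):
        self.primary = primary
        self.replica = replica
        self.stale_window = stale_window
        self._last_write = float("-inf")  # no writes yet

    def write(self, sql, params=()):
        self.primary.execute(sql, params)
        self.primary.commit()
        self._last_write = time.monotonic()

    def read(self, sql, params=()):
        # If we wrote recently, the replica may not have caught up,
        # so serve this read from the primary.
        recent = time.monotonic() - self._last_write < self.stale_window
        conn = self.primary if recent else self.replica
        return conn.execute(sql, params).fetchall()
```

The trade-off is extra read load on the primary during the grace window, which partly works against the goal of offloading reads.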

A strategy I have used is to send key reporting data to a secondary database nightly, de-normalizing it on the way, so that beefy queries can run on that database instead of locking up tables and stealing resources from the OLTP server. I'm not using any formal data warehousing or replication tools; rather, I identify problem queries that are OK without up-to-the-minute data and create data structures on the secondary server specifically for those queries.
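That nightly refresh can be sketched as a small ETL step that rebuilds a denormalized table on the secondary from a join on the OLTP side (a sketch only, using sqlite3 as a stand-in; the table and column names are hypothetical, not from the answer):

```python
import sqlite3


def nightly_refresh(oltp, reporting):
    """Rebuild a denormalized reporting table from OLTP data.

    Runs the expensive join/aggregate once per night against the
    OLTP server and stores the flattened result on the reporting
    server, so ad-hoc reports never touch the OLTP tables.
    """
    reporting.execute("DROP TABLE IF EXISTS daily_sales")
    reporting.execute(
        "CREATE TABLE daily_sales (day TEXT, product TEXT, total REAL)")
    rows = oltp.execute(
        """SELECT o.day, p.name, SUM(o.amount)
           FROM orders o JOIN products p ON p.id = o.product_id
           GROUP BY o.day, p.name""").fetchall()
    reporting.executemany(
        "INSERT INTO daily_sales VALUES (?, ?, ?)", rows)
    reporting.commit()
```

Reporting queries then hit `daily_sales` directly, with no joins and no locks on the OLTP tables, at the cost of data being up to a day stale.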

There are definitely pros to the "replicate everything" approach:

  • You can run any ad-hoc query on the secondary, since it has all of your data
  • If your primary server dies, you can re-purpose the secondary quickly to take over
Eric Z Beard
+1  A: 

We use one-way replication, but not from the same application. Our applications read from and write to the master database, the data gets synchronized to the replica database, and the reporting tools use this replica.

We don't want our application to read from a different database, so in this scenario I would suggest using file groups and partitioning on the master database. Using file groups (especially on different drives) and partitioning of files and indexes can help performance a lot.

Biri