views: 126
answers: 6

Hi everyone,

I am looking at upgrading a realtime ASP.NET C# program that takes very frequently updated data and moves it from one database to another.

Currently I'm using a middle-man app that pulls from one database and inserts into the other using SqlBulkCopy.

Is it better to have the source DB server write a flat file and have the middle-man app collect from that flat file?

Or SQL triggers? Or DTS?

Any advice would be great!

Thanks

+2  A: 

It really depends on the amount of data you are passing and the number, if any, of transformations you are making. DTS/SSIS is the most scalable and feature-rich of the options you listed.

What kind of load do you expect? Are you doing complex transformations? If you have the resources to use SSIS, I would recommend it, as it will scale as high as you want to go.

Dustin Laine
I would ideally go with DTS/SSIS too, especially if the databases are SQL Server. This is mostly because, in my experience, the logic around the data transfer has the potential to become complicated over time, and SSIS is built for that purpose.
InSane
I'm moving approx. 250 rows, 10 columns, mostly varchars and datetimes, every 60 sec. SSIS, you think? I'll investigate. Thanks!
Chris M
In that case, with such small numbers I would just write a SQL query to do the work and then run it as a SQL job every 60 seconds.
Dustin Laine
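For a load that small, Dustin's scheduled-query approach might look something like the T-SQL below, run by a SQL Server Agent job every minute. This is only a sketch: the linked server, table, and column names are all hypothetical, and it assumes some column (here `UpdatedAt`) can identify rows not yet copied.

```sql
-- Sketch only: [SourceServer] is a hypothetical linked server, and
-- UpdatedAt a hypothetical column used to find rows not yet copied.
INSERT INTO dbo.TargetTable (Col1, Col2, UpdatedAt)
SELECT s.Col1, s.Col2, s.UpdatedAt
FROM [SourceServer].SourceDb.dbo.SourceTable AS s
WHERE s.UpdatedAt > (SELECT ISNULL(MAX(t.UpdatedAt), '19000101')
                     FROM dbo.TargetTable AS t);
```

At roughly 250 rows per minute, this keeps everything inside SQL Server with no middle-man app at all.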
+1  A: 

Have you considered SQL Server Replication or SSIS?

Winston Smith
Considered yes, investigated properly no. Thanks for the tip, I'll look into those technologies today.
Chris M
+1  A: 

Reading and writing to a flat file is more efficient. But if you want to use index searches and other SQL features, it's better to use SQL.

Trurl
A flat file? That simply can't be any good, can it? What about locking issues? Why write it to disk and then have to read it again, when the entire situation is within the scope of the problem (i.e. he has complete control)? How is it more efficient?
Kieren Johnstone
Currently there are no indexes or SQL features in use; it's literally collect x rows for x period, then dump them on the new server. If SQL Server can use a stored proc to write to a flat file, then I could collect the file and dump it on the new box?
Chris M
I think you should give more information. Collecting for x period means index usage. Why would you need a SQL procedure in the flat-file case?
Trurl
@Kieren: in a project I worked on we had to insert several thousand lines into a table every minute. We tried several ways to do it: write each line to the database as it came in, gather all lines in a cache and then issue one big INSERT with all values, or write the values to a flat file and then import the CSV in one SQL command. The last one, using flat files, was *way* faster (in our case more than two times faster). There were also other issues like the size of the table, number of indexes, etc., but inserting from a flat file was faster in every case. Tried with MySQL and PostgreSQL.
kriss
I suppose I forgot that some DBMSs can read straight from CSV files etc - very good points! I retract the shockedness of my comment!
Kieren Johnstone
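For reference, the flat-file import kriss describes is a single statement on SQL Server as well. A minimal sketch, assuming a hypothetical CSV path and staging table:

```sql
-- Sketch only: the file path and staging table are hypothetical.
BULK INSERT dbo.StagingTable
FROM 'C:\feeds\rows.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```

MySQL's LOAD DATA INFILE and PostgreSQL's COPY play the same role, which is presumably why kriss saw the speedup on both.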
+2  A: 

As far as I know there are three different "standard" ways of doing this with SQL Server:

  • Replication. Normally used for keeping tables across databases synchronized, with a publisher and subscribers.
  • SSIS (formerly DTS). Can be automated with a scheduled job.
  • Service Broker. New kid on the block.

Chances are good that any of these would perform better than writing out to a flat file and reading it back, but the only way to tell for sure (in your environment) is to test and time the different approaches.

Oded
A: 

How about a shared cache?

jgauffin
A: 

If you like writing C# and dislike SSIS, you could look at Rhino ETL as an alternative. I've found it to perform very quickly.

The flat file does seem unnecessary, as you are writing to disk when you don't need to.

John Nolan