I have a data-logging application (C#/.NET) that logs data to a SQLite database. This database is written to constantly while the application is running. The database can also be archived and a new one created once the SQLite database reaches a predefined size.

I'm writing a web application for reporting on the data. My web setup is C#/.NET with SQL Server. Clients will be able to view, online, the data gathered by their own instance of my application.

For test purposes, I've written a rough-and-ready application that reads from the SQLite DB and injects the data into SQL Server using plain SQL; I run it once to populate the online SQL Server DB with data to test against.

My application is written in C# and is modular, so I could add a process that periodically checks the SQLite DB and transfers new data in batches to my SQL Server; a rough sketch of that idea is below.
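Roughly, the periodic transfer step I have in mind looks something like the following. The table, column names and client id are only placeholders, and it assumes the System.Data.SQLite and System.Data.SqlClient providers:

    using System;
    using System.Data.SqlClient;
    using System.Data.SQLite;

    class BatchTransfer
    {
        // Copies any rows newer than the last transferred Id from the local
        // SQLite log into the central SQL Server table, and returns the new
        // high-water mark.
        public static long TransferNewRows(string sqlitePath, string sqlServerConn, long lastId, int clientId)
        {
            using (var source = new SQLiteConnection("Data Source=" + sqlitePath))
            using (var target = new SqlConnection(sqlServerConn))
            {
                source.Open();
                target.Open();

                var select = new SQLiteCommand(
                    "SELECT Id, LoggedAt, Value FROM LogEntries WHERE Id > @lastId ORDER BY Id", source);
                select.Parameters.AddWithValue("@lastId", lastId);

                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var insert = new SqlCommand(
                            "INSERT INTO LogEntries (ClientId, SourceId, LoggedAt, Value) " +
                            "VALUES (@clientId, @sourceId, @loggedAt, @value)", target);
                        insert.Parameters.AddWithValue("@clientId", clientId);
                        insert.Parameters.AddWithValue("@sourceId", reader.GetInt64(0));
                        insert.Parameters.AddWithValue("@loggedAt", reader.GetDateTime(1));
                        insert.Parameters.AddWithValue("@value", reader.GetDouble(2));
                        insert.ExecuteNonQuery();

                        lastId = reader.GetInt64(0);
                    }
                }
            }
            return lastId; // persist this between runs so only new rows are copied
        }
    }

The returned high-water mark would be persisted between runs so the process only ever copies new rows.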

My question is: if I wanted to continually synchronise the client-side SQLite database(s) with my server while the application is datalogging, what would be the best way of going about this?

Is there any technology/strategy I should be looking into employing here? Any recommended techniques?

A: 

You might want to investigate the concept of log shipping.

Tori Marrama
Not sure this is feasible since I'm working from a SQLite database to SQL Server; also, I'd like to update the SQL Server side constantly - not in real time, but maybe every few minutes. Food for thought though; it's got me thinking about how to solve another issue I have.
Mattl
A: 

You could take a look at the Microsoft Sync Framework. How complex is the schema that you're looking to sync up, and is it only one-way or does data need to come back down?

As a simple solution, I'd look at exporting the data in some delimited format and then using bcp/BULK INSERT to pull it into your central server.
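For example, once a delimited export file is on (or reachable from) the server, the load could be as simple as the sketch below; the staging table name and file path are just placeholders:

    using System.Data.SqlClient;

    class BulkLoader
    {
        // Pulls a tab-delimited export file into a staging table on the central
        // server. The file path must be readable from the SQL Server machine
        // (or run bcp.exe from the client instead).
        public static void Load(string sqlServerConn, string exportFilePath)
        {
            using (var conn = new SqlConnection(sqlServerConn))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "BULK INSERT dbo.LogEntries_Staging " +
                    "FROM '" + exportFilePath.Replace("'", "''") + "' " +
                    "WITH (FIELDTERMINATOR = '\\t', ROWTERMINATOR = '\\n')", conn);
                cmd.ExecuteNonQuery();
            }
        }
    }

Loading into a staging table first lets you de-duplicate or transform the rows before merging them into the reporting tables.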

Chris W
Thanks for this; the data transfer will be one-way. I'll use something like bcp to start with and look into the Sync Framework as a longer-term solution.
Mattl
A: 

Several options come to mind. You can add a timestamp column to each table that you want to copy from and then select only the rows written after the last transfer. This is fast and still works if you archive the database and start with an empty one.
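On the SQLite side that could look something like this (the table and column names are placeholders):

    using System;
    using System.Data.SQLite;

    class TimestampPoller
    {
        // Reads the rows logged after the last successful transfer, based on a
        // LoggedAt timestamp column on the source table.
        public static SQLiteDataReader ReadRowsSince(SQLiteConnection source, DateTime lastTransfer)
        {
            var cmd = new SQLiteCommand(
                "SELECT Id, LoggedAt, Value FROM LogEntries " +
                "WHERE LoggedAt > @lastTransfer ORDER BY LoggedAt", source);
            cmd.Parameters.AddWithValue("@lastTransfer", lastTransfer);
            // The caller copies these rows to SQL Server and then records
            // MAX(LoggedAt) as the new watermark for the next poll.
            return cmd.ExecuteReader();
        }
    }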

You can also journal your updates for each table as XML strings describing the changes and store them in a separate table that is treated as a queue.
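For instance (again, the names are placeholders), each logged row could also be journalled into a queue table that the upload process drains in order:

    using System;
    using System.Data.SQLite;
    using System.Xml.Linq;

    class ChangeJournal
    {
        // Writes an XML description of a change into a queue table; the uploader
        // reads these rows in order, pushes them to SQL Server, then deletes them.
        public static void JournalInsert(SQLiteConnection conn, long id, DateTime loggedAt, double value)
        {
            string xml = new XElement("insert",
                new XAttribute("table", "LogEntries"),
                new XElement("Id", id),
                new XElement("LoggedAt", loggedAt.ToString("o")),
                new XElement("Value", value)).ToString();

            var cmd = new SQLiteCommand(
                "INSERT INTO ChangeQueue (CreatedAt, Payload) VALUES (@createdAt, @payload)", conn);
            cmd.Parameters.AddWithValue("@createdAt", DateTime.UtcNow);
            cmd.Parameters.AddWithValue("@payload", xml);
            cmd.ExecuteNonQuery();
        }
    }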

ebpower