From what I have seen, SQL Server 2008 no longer offers a simple way to replicate over the internet. On top of that, merge replication installs a large number of triggers and views on the database and adds an extra column named "rowguid" to the published tables.
Disk space is cheap these days compared to processing power, so I decided to write my own generic replication for SQL Server. It requires the following steps.
This assumes that every table has a single-column primary key (either bigint or nvarchar(50)).
Set up a table, SyncItems, with the following columns:
SyncItemID (primary key, auto-number)
SyncTable (nvarchar(50), name of the modified table)
RowID (bigint)
RowIDStr (nvarchar(50))
RowOp (nvarchar(10): 'Inserted', 'Updated', 'Deleted')
RowTime (datetime, default current_timestamp)
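Roughly, the table would look like this (a minimal sketch; the exact sizes and identity seed are just my assumptions):

    CREATE TABLE SyncItems (
        SyncItemID  bigint IDENTITY(1,1) PRIMARY KEY,        -- auto-number key
        SyncTable   nvarchar(50) NOT NULL,                   -- name of the modified table
        RowID       bigint       NULL,                       -- PK value for bigint-keyed tables
        RowIDStr    nvarchar(50) NULL,                       -- PK value for nvarchar-keyed tables
        RowOp       nvarchar(10) NOT NULL,                   -- 'Inserted', 'Updated' or 'Deleted'
        RowTime     datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP
    );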
Then I can set up three triggers (INSERT, UPDATE, DELETE) on each table in the database, each of which inserts one row into SyncItems recording the DML operation.
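For example, the INSERT trigger on a table Customers with a bigint primary key CustomerID (both names are only placeholders for illustration) would look roughly like this:

    -- Records one SyncItems row per inserted row.
    -- 'Customers' and 'CustomerID' are placeholder names.
    CREATE TRIGGER trg_Customers_Insert
    ON Customers
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO SyncItems (SyncTable, RowID, RowOp, RowTime)
        SELECT 'Customers', i.CustomerID, 'Inserted', CURRENT_TIMESTAMP
        FROM inserted AS i;
    END;

The UPDATE and DELETE triggers are the same idea, except the DELETE trigger reads the key from the deleted pseudo-table and they write 'Updated' or 'Deleted' as the RowOp.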
Each replicator then only has to scan SyncItems, load the modified rows from the source tables, and apply them at the destination.
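The replicator side is basically a polling query over SyncItems, something along these lines (a sketch; the high-water mark variable is my own naming):

    -- @LastSyncItemID is the highest SyncItemID the replicator has already applied.
    DECLARE @LastSyncItemID bigint = 0;

    -- Pick up all changes since the last run, oldest first.
    SELECT SyncItemID, SyncTable, RowID, RowIDStr, RowOp, RowTime
    FROM SyncItems
    WHERE SyncItemID > @LastSyncItemID
    ORDER BY SyncItemID;

For each returned row the agent fetches the current source row by its primary key and inserts, updates, or deletes it at the destination.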
Questions:
I have already set the database recovery model to Full. Is there an alternative where I can read the DML history (for example from the transaction log) and replicate from that, rather than setting up triggers?
How long is the transaction log history kept? With the trigger-based custom replication, I can delete older records and truncate the SyncItems table accordingly.
If I could read the transaction logs and replicate manually, that would be great, because then we could even replicate DDL changes. I searched for various stored procedures and did plenty of Googling but didn't find any hints.
Is there any way to read the replication publisher's logs, so that instead of using SQL Server's own replication we could write our own replication agent that replicates the data over the internet?
(I have tried web synchronization and various other options, but our data is huge and web sync is not practical for us.)