
I have an application that modifies 5 identical XML files, each located on a different network share. I am aware that this is needlessly redundant, but "it must be so."

Every time this application runs, exactly one element (no more, no less) will be added/removed/modified.

Currently, the application opens each XML file, adds/removes/modifies the element at the appropriate node, and saves the file, or throws an error if it cannot (unable to access the network share, timeout, etc.).
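For context, each per-file update amounts to something like the sketch below (the "Items" node and the new element are placeholders for whatever the application actually edits):

using System.Xml.Linq;

// Roughly what each per-file update does today. NetworkPaths, the "Items"
// node and the new element are stand-ins for the application's real logic.
foreach (var path in NetworkPaths)
{
    var doc = XDocument.Load(path);                      // may throw: share unreachable, timeout, ...
    doc.Root.Element("Items").Add(new XElement("Item")); // the single add/remove/modify
    doc.Save(path);
}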

How do I make this atomic?

My initial thought was to do something like:

bool isAtomic = true;

foreach (var path in NetworkPaths)
{
    if (!File.Exists(path))
    {
        isAtomic = false;
    }
}

if (isAtomic)
{
    // Do things
}

But I can see that only going so far. Is there another way to do this, or a direction I can be pointed in?

+2  A: 

Unfortunately, making this truly "atomic" isn't really possible. My best advice would be to wrap up your own form of transaction for this, so you can at least undo the changes.

I'd do something like this: check each file - if one doesn't exist, throw.

Back up each file - save the state needed to undo the change, or keep a copy in memory if the files aren't huge. If you can't, throw.

Make your edits, then save the files. If you get a failure here, try to restore each file from its backup. You'll need some error handling here so you don't throw until all of the backups have been restored. After restoring, throw your exception.

At least this way, you'll be less likely to end up with a change in just one file. Hopefully, if you can modify a file, you'll also be able to restore it from your backup / undo your modification.
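Something like this sketch, assuming a NetworkPaths collection and an ApplyEdit callback for the single change (both names are made up here):

using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Linq;

// NetworkPaths and ApplyEdit are hypothetical placeholders for the
// application's path list and its single-element edit.
void SaveAllOrRestore(IEnumerable<string> networkPaths, Action<XDocument> applyEdit)
{
    var backups = new Dictionary<string, string>();

    // Backup phase: read every file before touching anything; throw if one is missing.
    foreach (var path in networkPaths)
    {
        if (!File.Exists(path))
            throw new FileNotFoundException("Missing file", path);
        backups[path] = File.ReadAllText(path);
    }

    try
    {
        // Edit phase: apply the single change and save each file.
        foreach (var path in backups.Keys)
        {
            var doc = XDocument.Parse(backups[path]);
            applyEdit(doc);
            doc.Save(path);
        }
    }
    catch
    {
        // Restore phase: attempt every restore before rethrowing.
        foreach (var pair in backups)
        {
            try { File.WriteAllText(pair.Key, pair.Value); }
            catch { /* log and keep restoring the rest */ }
        }
        throw;
    }
}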

Reed Copsey
+1  A: 

I suggest the following solution.

  • Try opening all files with a write lock. If one or more fail, abort.
  • Modify and flush all files. If one or more fail, roll the already modified ones back and flush them again.
  • Close all files.

If the rollback fails ... well ... try again, and try again, and try again ... and give up in an inconsistent state.
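A minimal sketch of that sequence, assuming a hypothetical WriteModifiedXml helper that rewrites a stream with the changed document:

using System.Collections.Generic;
using System.IO;

// Sketch of the lock-then-modify sequence above. NetworkPaths and
// WriteModifiedXml are hypothetical placeholders.
var streams = new List<FileStream>();
try
{
    // Acquire exclusive access to every file up front; any failure aborts here.
    foreach (var path in NetworkPaths)
        streams.Add(new FileStream(path, FileMode.Open,
                                   FileAccess.ReadWrite, FileShare.None));

    foreach (var stream in streams)
    {
        WriteModifiedXml(stream); // apply the single change
        stream.Flush();
    }
}
catch
{
    // Roll the already modified streams back here and flush again,
    // retrying as described above (omitted for brevity).
    throw;
}
finally
{
    foreach (var stream in streams)
        stream.Dispose();         // closing the streams releases the locks
}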

If you have control over all processes writing these files, you could implement a simple locking mechanism using a lock file. You could even perform write-ahead logging and record the planned change in the lock file. If your process crashes, the next one attempting to modify the files would detect the incomplete operation and could complete it before doing its own modification.
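For illustration, one way such a lock file could work - the path, record format, and recovery detail here are assumptions, not part of the answer:

using System.IO;

// Cooperative lock file with a write-ahead record. The path and record
// format are made-up examples; all writers must agree on both.
const string LockFilePath = @"\\share1\data\edit.lock";

// FileMode.CreateNew fails if the lock file already exists, so only one
// writer at a time can hold the lock. A writer that finds a stale lock
// file can read the record and finish the interrupted operation first.
using (var lockStream = new FileStream(LockFilePath, FileMode.CreateNew,
                                       FileAccess.Write, FileShare.None))
using (var writer = new StreamWriter(lockStream))
{
    // Record the planned change before touching any of the XML files.
    writer.WriteLine("ADD /catalog/items/item[@id='42']");
    writer.Flush();

    // ... modify the five XML files here ...
}

// Delete the lock file only after every file was written successfully.
File.Delete(LockFilePath);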

Daniel Brückner