Hey, so I'm having an issue with writing out to an XML file. It works fine for single requests via the browser, but when I use something like Charles to perform 5-10 repeated requests concurrently, several of them fail. The trace simply shows a 500 error with no content inside; basically I think they start timing out waiting for write access or something...

This method is inside my repository class. I have also attempted to make the repository instance a singleton, but that doesn't appear to make any difference..

Any help would be much appreciated. Cheers

    public void Add(Request request) {
        try {
            XDocument requests;
            // Load the current file; the using block disposes the reader
            // (an explicit Close() is redundant) before the save below.
            using (XmlReader xmlReader = XmlReader.Create(_requestsFilePath)) {
                requests = XDocument.Load(xmlReader);
            }
            XElement xmlRequest = new XElement("request",
                        new XElement("code", request.code),
                        new XElement("date", request.date),
                        new XElement("email", new XCData(request.email)),
                        new XElement("name", new XCData(request.name)),
                        new XElement("recieveOffers", request.recieveOffers)
                    );
            requests.Root.Element("requests").Add(xmlRequest);
            requests.Save(_requestsFilePath);
        } catch (Exception ex) {
            HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
        }
    }
A: 

There are a couple of problems with your design here. First off, in concurrent contexts it is fine for multiple threads to read from the same source at the same time, but when a write occurs, you need to stop all other reading and writing from happening; otherwise you run into race conditions.
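To illustrate those semantics (a minimal sketch, not code from the question; the class shape and the `_fileLock` field name are assumptions), a `ReaderWriterLockSlim` from `System.Threading` gives exactly this behavior within one worker process: any number of concurrent readers, but each write is exclusive:

    using System.Threading;
    using System.Xml.Linq;

    public class RequestsRepository {
        // Static, so every repository instance contends on the same lock.
        private static readonly ReaderWriterLockSlim _fileLock = new ReaderWriterLockSlim();
        private readonly string _requestsFilePath;

        public RequestsRepository(string requestsFilePath) {
            _requestsFilePath = requestsFilePath;
        }

        public XDocument LoadAll() {
            _fileLock.EnterReadLock(); // any number of readers may hold this at once
            try {
                return XDocument.Load(_requestsFilePath);
            } finally {
                _fileLock.ExitReadLock();
            }
        }

        public void Add(XElement xmlRequest) {
            _fileLock.EnterWriteLock(); // blocks all readers and other writers
            try {
                XDocument requests = XDocument.Load(_requestsFilePath);
                requests.Root.Element("requests").Add(xmlRequest);
                requests.Save(_requestsFilePath);
            } finally {
                _fileLock.ExitWriteLock();
            }
        }
    }

Note this only serializes access within a single process; if multiple IIS worker processes share the file, you would need a cross-process primitive such as a named Mutex instead.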

The first thing you should change is getting rid of that catch-all Exception handler, because it is preventing you from seeing the full error. The problem could be an out-of-memory error because you capped out your RAM while loading the same data set 5-10 times. Until you know what the error actually is, you won't be able to address it correctly, and guessing won't help.
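For instance (a sketch of the idea rather than a drop-in fix): keep the trace line if it is useful, but rethrow, so the real exception type and stack trace surface in the failed response or event log instead of being swallowed:

    public void Add(Request request) {
        try {
            // ... load, modify, and save the XML exactly as before ...
        } catch (Exception ex) {
            // Concatenating ex calls ToString(): type, message, and stack trace.
            HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
            throw; // rethrow so the request fails loudly and the real error is visible
        }
    }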

And if Stack Overflow stops timing out on me, I will copy and paste my answer here!

NickLarsen
Well, Stack Overflow won't let me post my answer; not sure why. The end result of my post was that you need to remove the exception handler and get the real error. There are a thousand ways to make this code better, but we're all shooting blind without knowing what your actual problem is.
NickLarsen
Hey thanks, I've realised IIS7 wasn't actually capturing the error when attached to, which is a first, so I'm using Cassini rather than IIS7 now. The actual error is: "The process cannot access the file '...\requests.xml' because it is being used by another process." I have attempted to use lock (with the save code inside) with the same issue. Any good code design tips for this kind of concurrent file access? I have looked into other answers (i.e. mutex etc.) but have not been able to get them to solve this issue.
Mark
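A likely reason that lock attempt didn't help: C#'s lock only excludes threads that acquire the same lock object, and only for the statements inside the block. An instance-level lock object (one per repository), or a lock that wraps the Save but not the Load, still lets two requests touch the file at once. A minimal sketch with a single static lock covering the whole load-modify-save cycle (the `_sync` field name is illustrative, not from the thread):

    // Static: one lock object shared by every repository instance.
    private static readonly object _sync = new object();

    public void Add(Request request) {
        lock (_sync) { // must cover the whole load-modify-save cycle
            XDocument requests = XDocument.Load(_requestsFilePath);
            requests.Root.Element("requests").Add(
                new XElement("request",
                    new XElement("code", request.code),
                    new XElement("date", request.date),
                    new XElement("email", new XCData(request.email)),
                    new XElement("name", new XCData(request.name)),
                    new XElement("recieveOffers", request.recieveOffers)));
            requests.Save(_requestsFilePath);
        }
    }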
You need some mechanism to ensure that only a single thread is accessing the physical file at a time. My suggestion would be to load this file up into memory. You can make concurrent changes in memory all day long and, after every few requests, or seconds, or whatever, persist the data back to disk. If you require more frequent disk writes, you can adjust that easily with this sort of setup. If you absolutely cannot keep it in memory, then I would create a "write queue": add items to it when calling this method in your repository, then write the entire queue whenever you choose (see the sketch after this comment).
NickLarsen
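A minimal sketch of that write-queue idea (all class and field names are illustrative; it assumes .NET 4's ConcurrentQueue and that the repository is registered as a singleton, so one queue and one flush timer exist per process):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Xml.Linq;

    public class QueuedRequestsRepository : IDisposable {
        private readonly ConcurrentQueue<XElement> _pending = new ConcurrentQueue<XElement>();
        private readonly object _flushSync = new object();
        private readonly string _requestsFilePath;
        private readonly Timer _flushTimer;

        public QueuedRequestsRepository(string requestsFilePath) {
            _requestsFilePath = requestsFilePath;
            // Persist the queue every five seconds; tune the interval to taste.
            _flushTimer = new Timer(_ => Flush(), null,
                TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
        }

        public void Add(XElement xmlRequest) {
            // Request threads only enqueue; they never open the file.
            _pending.Enqueue(xmlRequest);
        }

        private void Flush() {
            if (_pending.IsEmpty) return;
            lock (_flushSync) { // timer callbacks can overlap; serialize the flush
                XDocument requests = XDocument.Load(_requestsFilePath);
                XElement container = requests.Root.Element("requests");
                XElement item;
                while (_pending.TryDequeue(out item))
                    container.Add(item);
                requests.Save(_requestsFilePath);
            }
        }

        public void Dispose() {
            _flushTimer.Dispose();
            Flush(); // persist anything still queued on shutdown
        }
    }

The request threads never touch the file; only the timer callback writes, so the "file in use by another process" contention disappears within the process.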
And if you need to verify persistence before returning a response to your users, and neither of the previous two methods is possible, an XML file on disk might not be your best data store.
NickLarsen