I am designing the architecture for a set of WCF services. Because of how these services are deployed (remotely, onto a number of unmanageable systems on client sites), we cannot afford the administrative overhead of database servers, so the data store has to be file based (we are leaning quite heavily toward XML for the file format).
Once again, the nature of the services means that there is a potential for concurrency issues within individual files, and I am trying to come up with a system that will behave correctly in all instances and avoid attempting to read data when there is a write operation pending.
My current thinking is to take one of two possible routes.
1 - locking files.
This would operate in the following way: all file operations would go through a locking mechanism. Reads would check that the required file is not currently locked before requesting data; if it is locked, the service would sleep for a random number of milliseconds (within an as-yet undetermined range) and then retry. Write operations would set the lock, commit the data and then unlock the file. A rough sketch of what I mean is below.
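To make that concrete, here is a minimal sketch of option 1 in C#, assuming the "lock" is simply the operating-system file lock taken by opening the file with FileShare.None (rather than a separate lock file), and assuming a placeholder retry range of 50-250 ms. The class and method names are just illustrative.

    using System;
    using System.IO;
    using System.Threading;

    public static class LockedXmlStore
    {
        private static readonly Random Rng = new Random();

        // Open the file exclusively, backing off for a random interval and
        // retrying if another service instance currently holds it.
        private static FileStream OpenExclusive(string path, FileAccess access, int maxAttempts = 20)
        {
            for (int attempt = 0; attempt < maxAttempts; attempt++)
            {
                try
                {
                    return new FileStream(path, FileMode.OpenOrCreate, access, FileShare.None);
                }
                catch (IOException)
                {
                    // File is locked elsewhere; sleep for a random interval, then retry.
                    int delay;
                    lock (Rng) { delay = Rng.Next(50, 250); }  // range still undecided
                    Thread.Sleep(delay);
                }
            }
            throw new TimeoutException("Could not acquire lock on " + path);
        }

        public static string Read(string path)
        {
            using (var stream = OpenExclusive(path, FileAccess.Read))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }

        public static void Write(string path, string xml)
        {
            using (var stream = OpenExclusive(path, FileAccess.ReadWrite))
            using (var writer = new StreamWriter(stream))
            {
                stream.SetLength(0); // replace the previous contents in full
                writer.Write(xml);
            }
        }
    }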
2 - additional program in the background to provide data to the services.
This version would have a secondary application running in the background, exposing various public static methods callable by the services. The background app would be solely responsible for maintaining an in-memory representation of the data, providing that data to the services, and keeping the file copies in sync with the in-memory objects. In this respect it would behave as if it were a transactionalised database server. A sketch of the kind of broker I have in mind follows.
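Roughly what I have in mind for that broker, again in C#; the class and method names are placeholders, and the write-through-on-every-save policy is just one possible choice:

    using System.Collections.Generic;
    using System.Threading;
    using System.Xml.Linq;

    public static class DataBroker
    {
        private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();
        private static readonly Dictionary<string, XDocument> Cache =
            new Dictionary<string, XDocument>();

        // Return the in-memory copy of a document, loading it from disk on first use.
        // Callers should treat the returned document as read-only and go through
        // Save() to change it.
        public static XDocument Load(string path)
        {
            Lock.EnterUpgradeableReadLock();
            try
            {
                XDocument doc;
                if (Cache.TryGetValue(path, out doc))
                    return doc;

                Lock.EnterWriteLock();
                try
                {
                    doc = XDocument.Load(path);
                    Cache[path] = doc;
                    return doc;
                }
                finally { Lock.ExitWriteLock(); }
            }
            finally { Lock.ExitUpgradeableReadLock(); }
        }

        // Replace the in-memory copy and write it straight through to disk,
        // so the file never lags behind the cache.
        public static void Save(string path, XDocument doc)
        {
            Lock.EnterWriteLock();
            try
            {
                Cache[path] = doc;
                doc.Save(path);
            }
            finally { Lock.ExitWriteLock(); }
        }
    }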
Of these two approaches (or any others that would achieve the same goal), which would provide the best performance with the least chance of concurrency conflicts? The simplicity of option 1 means I'm more in favour of it, but I am worried that performance may suffer as a result of the "sleep" operations.
TIA
Marc.