Without specifics it's hard to say where your slow spot is. Are you measuring where the time is being spent in your application?
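If you aren't already, a `Stopwatch` around each stage is a cheap way to find out. A minimal sketch, where `LoadInput` and `ParseInput` are hypothetical stand-ins for whatever your pipeline actually does:

```csharp
using System;
using System.Diagnostics;

class TimingSketch
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();

        LoadInput();   // hypothetical stage: however you acquire the XML
        Console.WriteLine($"load:  {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        ParseInput();  // hypothetical stage: however you process it
        Console.WriteLine($"parse: {sw.ElapsedMilliseconds} ms");
    }

    static void LoadInput()  { /* ... */ }
    static void ParseInput() { /* ... */ }
}
```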
I find that, many times, a large data set is being materialized unnecessarily, which eats memory and can kill performance. That could happen in your scenario if you receive the XML input and store it all in an XmlDocument before you start parsing through the data; if the XmlDocument is large, that alone will kill you.
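For concreteness, this is the pattern I mean; the file name and the `//record` XPath are placeholders for your own input and structure:

```csharp
using System;
using System.Xml;

// Materialize-everything pattern: XmlDocument.Load builds the entire
// tree in memory before your own code sees the first element.
var doc = new XmlDocument();
doc.Load("input.xml");  // placeholder path

// Only now does processing start, with the whole document resident.
foreach (XmlNode node in doc.SelectNodes("//record"))
{
    Console.WriteLine(node.InnerText);
}
```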
If possible, aim to process the data incrementally by reading it with an XmlReader. Some datasets are amenable to this approach: the processing doesn't need much surrounding context and proceeds linearly through the data. In that case you'll see a huge improvement over sucking everything down into an XmlDocument first.
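A sketch of the streaming version, assuming each `<record>` element carries simple text content (again, the names are placeholders):

```csharp
using System;
using System.Xml;

// Streaming pattern: XmlReader pulls one node at a time, so memory use
// stays roughly flat no matter how large the input is.
using (var reader = XmlReader.Create("input.xml"))  // placeholder path
{
    while (reader.Read())
    {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
        {
            // Consumes the <record> element and advances past it;
            // this assumes the element contains only text.
            string value = reader.ReadElementContentAsString();
            Console.WriteLine(value);
        }
    }
}
```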
However, other datasets are structured such that you really do need the whole document in memory before you can continue.
Again, it's hard to say if there is a better way without understanding how the input data is structured.