I have 16 large XML files. When I say large, I am talking gigabytes: one of the files is over 8 GB, and several of them are over 1 GB. These are given to me by an external provider.

I am trying to import the XML into a database so that I can shred it into tables. Currently, I stream 10,000 records at a time out of the file into memory and insert the blob. I use SSIS with a script task to do this. This is actually VERY fast for all files, except the 8 GB file.

I cannot load the entire file into an XML document; I can't stress this enough. That was iteration 1, and the files are so huge that the system just locks up trying to deal with them, the 8 GB one in particular.

I ran my current "file splitter" and it spent 7 hours importing the XML data without finishing; it had only imported 363 blocks of 10,000 records out of the 8 GB file.

FYI, here is how I am currently streaming my files into memory (10,000 records at a time). I found the code at http://blogs.msdn.com/b/xmlteam/archive/2007/03/24/streaming-with-linq-to-xml-part-2.aspx

private static IEnumerable<XElement> SimpleStreamAxis(string fileName, string matchName)
{
    using (FileStream stream = File.OpenRead(fileName))
    {
        using (XmlReader reader = XmlReader.Create(stream, new XmlReaderSettings() { ProhibitDtd = false }))
        {
            reader.MoveToContent();
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element:
                        if (reader.Name == matchName)
                        {
                            XElement el = XElement.ReadFrom(reader) as XElement;
                            if (el != null)
                                yield return el;
                        }
                        break;
                }
            }
        }
    }
}

So, it works fine on all the files except the 8 GB one: as it has to stream further and further into the file, it takes longer and longer.

What I would like to do is split the file into smaller chunks, but the splitter needs to be fast. Then the streamer and the rest of the process can run more quickly. What is the best way to go about splitting the files? Ideally I'd split it myself in code in SSIS.

EDIT:

Here's the code that actually pages out my data using the streaming methodology.

connection = (SqlConnection)cm.AcquireConnection(null);

int maximumCount = Convert.ToInt32(Dts.Variables["MaximumProductsPerFile"].Value);
int minMBSize = Convert.ToInt32(Dts.Variables["MinimumMBSize"].Value);
int maxMBSize = Convert.ToInt32(Dts.Variables["MaximumMBSize"].Value);

string fileName = Dts.Variables["XmlFileName"].Value.ToString();

FileInfo info = new FileInfo(fileName);

long fileMBSize = info.Length / 1048576; //1024 * 1024 bytes in a MB

if (minMBSize <= fileMBSize && maxMBSize >= fileMBSize)
{
    int pageSize = 10000;     //do 10,000 products at one time

    if (maximumCount != 0)
        pageSize = maximumCount;

    var page = (from p in SimpleStreamAxis(fileName, "product") select p).Take(pageSize);
    int current = 0;

    while (page.Count() > 0)
    {
        XElement xml = new XElement("catalog",
            from p in page
            select p);

        SubmitXml(connection, fileName, xml.ToString());

        //if the maximum count is set, only load the maximum (in one page)
        if (maximumCount != 0)
            break;

        current++;
        page = (from p in SimpleStreamAxis(fileName, "product") select p).Skip(current * pageSize).Take(pageSize);
    }
}
A: 

Take a look at this project, which splits XML files into smaller ones to tackle your issue:

Split large XML files into small files: http://www.codeproject.com/KB/XML/SplitLargeXMLintoSmallFil.aspx

atconway
I've already looked at that and ruled it out. The source code is not supplied in the download, and the code displayed is not complete (it calls off to methods that are not supplied). I'm also not sure of the speed advantage of that method over how I'm already splitting the XML.
Josh
+2  A: 

You're going to want a SAXReader for handling large XML files.

peterJ
+1  A: 

Have you looked into using a SAX parser? There isn't one distributed by Microsoft, but there are a handful of examples on the web. With a SAX parser you essentially read the file as a stream, and events fire that you can listen for, rather than loading the whole thing into an in-memory DOM, which you obviously can't do here. I don't know much about using SAX parsers, so I don't have a specific recommendation, but a lot of Java folks have done XML this way for years.
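
Purely as an illustration of the event-driven idea (this wrapper is my own sketch, not an existing library class), you can fake SAX-style callbacks in C# on top of XmlReader, which already does the forward-only streaming:

// Illustration only: XmlReader streams forward through the file and we raise a
// callback for each element, so nothing close to the whole document is ever in memory.
// Requires: using System; using System.Xml;
public static void ParseSaxStyle(string fileName, Action<string, XmlReader> onStartElement)
{
    using (XmlReader reader = XmlReader.Create(fileName))
    {
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element)
                onStartElement(reader.Name, reader);
        }
    }
}

// e.g. ParseSaxStyle(fileName, (name, r) => { if (name == "product") { /* handle one product */ } });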

mattmc3
XmlReader, as used in the code in the question, is similar to a SAX parser, only it works by pull rather than push. It wouldn't make any difference to this particular problem.
Simon Steele
+5  A: 

It looks like you are re-reading the XML file over and over again at each step: every time you use the from p in SimpleStreamAxis(...) bit you re-open the file and scan into it from the beginning, so each successive page has to stream past everything that came before it, and the total work grows roughly quadratically. On top of that, calling Count() walks the full page each time.

Try something like this:

var full = (from p in SimpleStreamAxis(fileName, "product") select p);
int current = 0;

while (full.Any())
{
    var page = full.Take(pageSize);

    XElement xml = new XElement("catalog",
    from p in page
    select p);

    SubmitXml(connection, fileName, xml.ToString());

    //if the maximum count is set, only load the maximum (in one page)
    if (maximumCount != 0)
        break;

    current++;
    full = full.Skip(pageSize);
}

Note this is untested, but you should hopefully get the idea. You need to avoid enumerating through the file more than once; operations like Count() and Take/Skip are going to take a very long time on an 8 GB XML file.

Update: I think the above will still iterate through the file more times than we want; you need something a bit more predictable, like this:

var full = (from p in SimpleStreamAxis(fileName, "product") select p);

XElement xml = new XElement("catalog");
int pageIndex = 0;

foreach (var element in full)
{
    xml.Add(element);

    pageIndex++;
    if (pageIndex == pageSize)
    {
        SubmitXml(connection, fileName, xml.ToString());
        xml = new XElement("catalog");
        pageIndex = 0;

        //if the maximum count is set, only load the maximum (in one page)
        if (maximumCount != 0)
            break;
    }
}

// Submit the remainder
if (xml.Elements().Any())
{
    SubmitXml(connection, fileName, xml.ToString());
}
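
For what it's worth, the same single-enumeration idea can be pulled out into a small helper so the submit loop stays trivial. This is only a rough, untested sketch (the Pages name is mine); it reuses the "product"/"catalog" element names and SubmitXml from your code:

// Rough sketch: walk the element stream exactly once and yield pages of up to
// pageSize elements, each already wrapped in a <catalog> root.
// Requires: using System.Collections.Generic; using System.Xml.Linq;
static IEnumerable<XElement> Pages(IEnumerable<XElement> elements, int pageSize)
{
    var buffer = new List<XElement>(pageSize);
    foreach (var element in elements)
    {
        buffer.Add(element);
        if (buffer.Count == pageSize)
        {
            yield return new XElement("catalog", buffer);
            buffer.Clear();
        }
    }
    if (buffer.Count > 0)
        yield return new XElement("catalog", buffer);   // the remainder
}

// Usage:
// foreach (var page in Pages(SimpleStreamAxis(fileName, "product"), pageSize))
//     SubmitXml(connection, fileName, page.ToString());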
Simon Steele
+1  A: 

If you're using MS SQL Server, use XML Bulk Load for exactly this.
Knowledgebase Article
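
The call is roughly shaped like this; treat it strictly as a sketch, because the ProgID, property names, connection string, and file paths below are assumptions to check against the KB article and whichever SQLXML version you have installed:

// Sketch only: drive the SQLXML 4.0 Bulk Load COM component from C# via late binding.
// The ProgID and member names are assumptions; verify them against the KB article.
// Requires .NET 4 for 'dynamic'; otherwise use reflection or a COM interop reference.
using System;

class XmlBulkLoadSketch
{
    static void Main()
    {
        Type bulkLoadType = Type.GetTypeFromProgID("SQLXMLBulkLoad.SQLXMLBulkLoad.4.0");
        dynamic bulkLoad = Activator.CreateInstance(bulkLoadType);

        bulkLoad.ConnectionString =
            "provider=SQLOLEDB;data source=(local);database=MyDb;integrated security=SSPI"; // hypothetical
        bulkLoad.ErrorLogFile = @"C:\temp\bulkload.err";                                    // hypothetical path

        // A mapping schema (XSD) tells Bulk Load how to shred the XML into relational tables.
        bulkLoad.Execute(@"C:\temp\catalog-mapping.xsd", @"C:\temp\catalog.xml");           // hypothetical paths
    }
}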

bowenl2