I have some HTML files created by a FileMaker export. Each file is essentially one huge HTML table. I want to iterate over the table rows and insert them into a database. I have tried to do this with HTMLParser as follows:
import org.htmlparser.Parser;
import org.htmlparser.filters.TagNameFilter;
import org.htmlparser.util.NodeList;

String inputHTML = readFile("filemakerExport.htm", "UTF-8");
Parser parser = new Parser();
parser.setInputHTML(inputHTML);
parser.setEncoding("UTF-8");
// parse(null) builds the complete node tree for the whole document in memory
NodeList nl = parser.parse(null);
NodeList trs = nl.extractAllNodesThatMatch(new TagNameFilter("tr"), true);
for (int i = 0; i < trs.size(); i++) {
    NodeList nodes = trs.elementAt(i).getChildren();
    NodeList tds = nodes.extractAllNodesThatMatch(new TagNameFilter("td"), true);
    // Do stuff with tds
}
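For what it's worth, the row handling itself is not where it fails (the stack trace below shows the error already happens inside parse()). That part boils down to reading each cell's text and doing a plain JDBC insert, roughly like this, with placeholder table/column names and connection standing for an already open java.sql.Connection:

// Inside the loop body; "records" and its columns are placeholder names
String[] row = new String[tds.size()];
for (int j = 0; j < tds.size(); j++) {
    row[j] = tds.elementAt(j).toPlainTextString().trim();
}
PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO records (col_a, col_b, col_c) VALUES (?, ?, ?)");
for (int j = 0; j < 3 && j < row.length; j++) {
    ps.setString(j + 1, row[j]);
}
ps.executeUpdate();
ps.close();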
The above code works for files under 1 MB. Unfortunately, I have a 4.8 MB HTML file and I get an out-of-memory error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.htmlparser.lexer.Lexer.parseTag(Lexer.java:1002)
at org.htmlparser.lexer.Lexer.nextNode(Lexer.java:369)
at org.htmlparser.scanners.CompositeTagScanner.scan(CompositeTagScanner.java:111)
at org.htmlparser.util.IteratorImpl.nextNode(IteratorImpl.java:92)
at org.htmlparser.Parser.parse(Parser.java:701)
at Tools.main(Tools.java:33)
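I could presumably work around it for now by giving the JVM a bigger heap when launching, e.g.

java -Xmx512m Tools

but that only treats the symptom rather than the underlying problem.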
Is there a more memory-efficient way to solve this with HTMLParser (I am completely new to the library), or should I switch to a different library or approach?
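One idea I have been considering (untested, and I may well be misreading the API) is to skip tree building entirely and drive the Lexer directly as a flat stream of tags: collect cell text between <td> and </td>, and flush a row at each </tr>. A rough sketch, where handleRow is a hypothetical method that would do the actual database insert:

import java.util.ArrayList;
import java.util.List;
import org.htmlparser.Node;
import org.htmlparser.Tag;
import org.htmlparser.Text;
import org.htmlparser.lexer.Lexer;
import org.htmlparser.lexer.Page;
import org.htmlparser.util.ParserException;

static void streamRows(String inputHTML) throws ParserException {
    // Flat token stream: no composite tag scanning, so no full tree in memory
    Lexer lexer = new Lexer(new Page(inputHTML));
    List<String> cells = new ArrayList<String>();
    StringBuilder current = null;

    for (Node node = lexer.nextNode(); node != null; node = lexer.nextNode()) {
        if (node instanceof Tag) {
            Tag tag = (Tag) node;
            String name = tag.getTagName();            // upper case, without the leading '/'
            if ("TD".equals(name)) {
                if (tag.isEndTag()) {
                    if (current != null) {
                        cells.add(current.toString().trim());   // </td>: cell finished
                    }
                    current = null;
                } else {
                    current = new StringBuilder();     // <td>: start collecting text
                }
            } else if ("TR".equals(name) && tag.isEndTag()) {
                handleRow(cells);                      // hypothetical: insert the row into the database
                cells.clear();                         // </tr>: row finished
            }
        } else if (node instanceof Text && current != null) {
            current.append(node.getText());            // accumulate text inside the current cell
        }
    }
}

This still reads the whole file into a String, but my (possibly mistaken) understanding is that the heap is being exhausted by the per-node objects of the tree rather than by the raw 4.8 MB of text.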