views: 998
answers: 6

I need to extract information from an unstructured web page in Android. The information I want is embedded in a table that doesn't have an id.

<table> 
<tr><td>Description</td><td></td><td>I want this field next to the description cell</td></tr> 
</table>

Should I use

  • Pattern matching?
  • A BufferedReader to extract the information?

Or is there a faster way to get that information?

A: 

Why don't you create a script that does the scraping with cURL and a simple HTML DOM parser, and just grab the value you need from that page? Those particular tools work with PHP, but similar tools exist for any language you need.

Oren
A: 

One way of doing this is to put the HTML into a String and then manually search and parse through the String. If you know that the tags will come in a specific order, then you should be able to crawl through it and find the data. This however is kinda sloppy, so it's a question of: do you want it to work now, or work well?

int position = html.indexOf("<table>");  // html being the String holding the HTML code
int cell = html.indexOf("<td>", html.indexOf("<td>", position) + 4);  // second <td> after the table start; chain more lookups to go further
String field = html.substring(cell + 4, html.indexOf("</td>", cell));

Like I said... really sloppy. But if you're only doing this once and you need it to work, this just might do the trick.

mtmurdock
A: 

Why don't you just write

int start = data.indexOf("Description");

After that, take the required substring.
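
Continuing from start above, a minimal sketch of that second step, assuming data holds the raw page source and the cells appear exactly as in the question's snippet:

int emptyCell = data.indexOf("<td>", start);             // opening tag of the empty middle cell
int targetCell = data.indexOf("<td>", emptyCell + 4);    // opening tag of the cell next to the description
String value = data.substring(targetCell + 4, data.indexOf("</td>", targetCell));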

Fedor
+1  A: 

The fastest way will be parsing the specific information yourself. You seem to know the HTML structure precisely beforehand. The BufferedReader, String and StringBuilder methods should suffice. Here's a kickoff example which displays the first paragraph of your own question:

public static void main(String... args) throws Exception {
    URL url = new URL("http://stackoverflow.com/questions/2971155");
    BufferedReader reader = null;
    StringBuilder builder = new StringBuilder();
    try {
        reader = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
        for (String line; (line = reader.readLine()) != null;) {
            builder.append(line.trim());
        }
    } finally {
        if (reader != null) try { reader.close(); } catch (IOException logOrIgnore) {}
    }

    String start = "<div class=\"post-text\"><p>";
    String end = "</p>";
    String part = builder.substring(builder.indexOf(start) + start.length());
    String question = part.substring(0, part.indexOf(end));
    System.out.println(question);
}

Parsing is in practically all cases faster than pattern matching. Pattern matching is easier, but there is a certain risk that it may yield unexpected results, certainly when using complex regex patterns.
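
For comparison, a pattern matching sketch for the snippet in the question could look like the following (deliberately fragile: any extra attribute or whitespace in the real markup would break it):

Pattern pattern = Pattern.compile("<td>Description</td>\\s*<td></td>\\s*<td>(.*?)</td>");
Matcher matcher = pattern.matcher(html);   // html holds the downloaded page
if (matcher.find()) {
    System.out.println(matcher.group(1));  // the cell next to the description cell
}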

You can also consider using a more flexible 3rd party HTML parser instead of writing one yourself. It will not be as fast as parsing it yourself with the structure known beforehand, but it will be more concise and flexible. With decent HTML parsers the difference in speed is pretty negligible. I strongly recommend Jsoup for this. It supports jQuery-like CSS selectors. Extracting the first paragraph of your question would then be as easy as:

public static void main(String... args) throws Exception {
    URL url = new URL("http://stackoverflow.com/questions/2971155");
    Document document = Jsoup.parse(url, 3000);
    String question = document.select("#question .post-text p").first().text();
    System.out.println(question);
}
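
For the table snippet shown in the question, a rough sketch using Jsoup's :containsOwn selector and sibling navigation might look like this (it assumes that exact markup, with plain text in the cells):

public static void main(String... args) throws Exception {
    String html = "<table><tr><td>Description</td><td></td>"
        + "<td>I want this field next to the description cell</td></tr></table>";
    Document document = Jsoup.parse(html);
    Element descriptionCell = document.select("td:containsOwn(Description)").first();
    String value = descriptionCell.nextElementSibling().nextElementSibling().text();
    System.out.println(value);
}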

It's unclear what web page you're talking about, so I can't give a more detailed example of how you could select the specific information from the specific page using Jsoup. If you still can't figure it out on your own using Jsoup and CSS selectors, then feel free to post the URL in a comment and I'll suggest how to do it.

BalusC
jsoup has a dependency on the Apache Commons Lang library
Josef
@Josef: I fail to see how that's a valid reason for the downvote.
BalusC
A: 

When you scrape an HTML web page, there are two things you can do. One is to use a regex; the other is to use an HTML parser.

Using a regex is not preferable to everyone, because it can easily lead to logic errors at runtime.

Using an HTML parser is more complicated to do; you cannot be sure the proper output will come, and in my experience it can also throw runtime exceptions.

So it's better to turn the response of the URL into XML and do XML parsing, which is very easy and effective.
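
If the response really is well-formed XML, a pull parser sketch along these lines would collect the cell texts (a sketch under that assumption only; turning arbitrary HTML into valid XML is the hard part and is not shown here):

XmlPullParser parser = XmlPullParserFactory.newInstance().newPullParser();
parser.setInput(new StringReader(xml));            // xml holds the well-formed table fragment
List<String> cells = new ArrayList<String>();
for (int event = parser.getEventType(); event != XmlPullParser.END_DOCUMENT; event = parser.next()) {
    if (event == XmlPullParser.START_TAG && "td".equals(parser.getName())) {
        cells.add(parser.nextText());              // text content of each <td>
    }
}
// cells.get(0) is "Description", cells.get(2) the value next to it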

Praveen Chandrasekaran
+3  A: 

I think in this case it makes no sense to look for a fast way to extract the information, as there is virtually no performance difference between the methods already suggested in the other answers when you compare it to the time it takes to download the HTML.

So assuming that by fastest you mean most convenient, readable and maintainable code, I suggest you use a DocumentBuilder to parse the relevant HTML and extract data using XPathExpressions:

Document doc = DocumentBuilderFactory.newInstance()
  .newDocumentBuilder().parse(new InputSource(new StringReader(html)));

XPathExpression xpath = XPathFactory.newInstance()
  .newXPath().compile("//td[text()=\"Description\"]/following-sibling::td[2]");

String result = (String) xpath.evaluate(doc, XPathConstants.STRING);

If you happen to retrieve invalid HTML, I recommend isolating the relevant portion (e.g. using substring(indexOf("<table")..)) and, if necessary, correcting remaining HTML errors with String operations before parsing. If this gets too complex however (i.e. very bad HTML), just go with the hacky pattern matching approach suggested in other answers.
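
A minimal sketch of that isolation step (it assumes the page contains only the one table of interest):

int tableStart = html.indexOf("<table");
int tableEnd = html.indexOf("</table>", tableStart) + "</table>".length();
String tableHtml = html.substring(tableStart, tableEnd);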

Remarks

  • XPath is available since API Level 8 (Android 2.2). If you develop for lower API levels, you can use DOM methods and conditionals to navigate to the node you want to extract; see the sketch below.
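
A rough sketch of that pre-API-8 navigation, reusing the Document from above and avoiding XPath (it assumes both the "Description" cell and the target cell contain plain text only):

NodeList cells = doc.getElementsByTagName("td");
String result = null;
for (int i = 0; i < cells.getLength(); i++) {
    Node text = cells.item(i).getFirstChild();
    if (text != null && "Description".equals(text.getNodeValue().trim()) && i + 2 < cells.getLength()) {
        result = cells.item(i + 2).getFirstChild().getNodeValue();  // the cell two siblings further
        break;
    }
}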
Josef