views: 329

answers: 5

When there is no web service API available, your only option might be to screen scrape, but how do you do it in C#?

How would you go about doing it?

+2  A: 

The term you're looking for is actually called Screen Scraping.

One thing you have to consider about scraping web sites is that they are beyond your control and can change frequently and significantly. If you do go with scraping, that fact of change ought to be part of your overall strategy; e.g., you will need to update your code sooner or later to deal with a "moving target."

Here are a couple of C# links to get you started:

http://www.cambiaresearch.com/c4/3ee4f5fc-0545-4360-9bc7-5824f840a28c/How-to-scrape-or-download-a-webpage-using-csharp.aspx

http://mhinze.com/archive/screen-scraping-tutorial-using-c-net/

Paul Sasik
+5  A: 

Use the Html Agility Pack. It handles poorly formed and malformed HTML, and it lets you query with XPath, making it very easy to find the data you're looking for. DON'T write a parser by hand and DON'T use regular expressions; both are just too clumsy.
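For example, a minimal sketch with the Html Agility Pack (the URL and XPath expression below are placeholders you would replace with your own):

    using System;
    using HtmlAgilityPack;

    class AgilityPackExample
    {
        static void Main()
        {
            // Load the page directly from the web; the URL is a placeholder.
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("http://example.com/products");

            // Query with XPath; SelectNodes returns null when nothing matches.
            var nodes = doc.DocumentNode.SelectNodes("//div[@class='product']/h2");
            if (nodes != null)
            {
                foreach (HtmlNode node in nodes)
                {
                    Console.WriteLine(node.InnerText.Trim());
                }
            }
        }
    }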

Matt Olenik
Don't use RegEx indeed. http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags/1732454#1732454
Jeff Yates
+3  A: 

Matt and Paul's answers are correct. "Screen scraping" by parsing the HTML from a website is usually a bad idea because:

  1. Parsing HTML can be difficult, especially if it's malformed. If you're scraping a very, very simple page then regular expressions might work. Otherwise, use a parsing framework like the HTML Agility Pack.

  2. Websites are a moving target. You'll need to update your code each time the source website changes their markup structure.

  3. Screen scraping doesn't play well with JavaScript. If the target website uses any sort of dynamic script to manipulate the webpage, you're going to have a very hard time scraping it. It's easy to grab the HTTP response; it's a lot harder to scrape what the browser displays in response to the client-side script contained in that response.

If screen scraping is the only option, here are some keys to success:

  1. Make it as easy as possible to change the patterns you look for. If possible, store the patterns as text files or in a resource file somewhere. Make it very easy for other developers (or yourself in 3 months) to understand what markup you expect to find.

  2. Validate input and throw meaningful exceptions. In your parsing code, take care to make your exceptions very helpful. The target site will change on you, and when that happens you want your error messages to tell you not only what part of the code failed, but why it failed. Mention both the pattern you're looking for AND the text you're comparing against. (A sketch of such an error message follows this list.)

  3. Write lots of automated tests. You want it to be very easy to run your scraper in a non-destructive fashion because you will be doing a lot of iterative development to get the patterns right. Automate as much testing as you can; it will pay off in the long run.

  4. Consider a browser automation tool like WatiN. If you require complex interactions with the target website, it might be easier to write your scraper from the point of view of the browser itself, rather than mucking with the HTTP requests and responses by hand.
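On point 2, here is a minimal sketch of a failure message that names both the pattern and the text it was compared against; the helper and the marker string are purely illustrative:

    using System;

    static class ScrapeHelpers
    {
        // Illustrative helper: find a marker string in the scraped HTML, or fail loudly.
        public static int FindMarkerOrThrow(string html, string marker)
        {
            int index = html.IndexOf(marker, StringComparison.Ordinal);
            if (index < 0)
            {
                // Report both what we looked for and a snippet of what we compared
                // against, so the error is diagnosable when the target site changes.
                string snippet = html.Length > 200 ? html.Substring(0, 200) + "..." : html;
                throw new InvalidOperationException(string.Format(
                    "Expected marker '{0}' was not found. Scraped text began with: '{1}'",
                    marker, snippet));
            }
            return index;
        }
    }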

As for how to screen scrape in C#, you can either use WatiN (see above) and scrape the resulting document using its DOM, or you can use the WebClient class [see MSDN or Google] to get at the raw HTTP response, including the HTML content, and then use some sort of text-based analysis to extract the data you want.
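A minimal sketch of the WebClient route (the URL is a placeholder, and real code would add error handling):

    using System;
    using System.Net;

    class WebClientExample
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Grab the raw HTML of the page; the URL is a placeholder.
                string html = client.DownloadString("http://example.com/somepage");

                // Hand the text off to whatever analysis you choose
                // (Html Agility Pack, marker searches, etc.).
                Console.WriteLine(html.Length);
            }
        }
    }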

Seth Petry-Johnson
DOM would be best, but WatiN looks rather interesting.
K001
+1  A: 

You might want to check out Dapper.Net (the open-source project, not the commercial advertising one) if you want to "outsource" the screen-scraping problem.

You can XMLize content from any site using a wizard found here.

PeanutPower
A: 

Just one thing to note: a few people have mentioned pulling down the website as XML and then using XPath to iterate through the nodes. It's probably important to make sure you are working with a site that has been developed in XHTML, so that the HTML represents a well-formed XML document.
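A minimal sketch of that approach, which assumes the page really is well-formed XHTML (the URL and XPath are placeholders; DOCTYPEs and HTML entities may need extra handling):

    using System;
    using System.Net;
    using System.Xml;

    class XhtmlExample
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                string xhtml = client.DownloadString("http://example.com/page.xhtml");

                // LoadXml throws an XmlException if the markup is not well-formed XML.
                var doc = new XmlDocument();
                doc.LoadXml(xhtml);

                // XHTML elements live in a namespace, so match on local-name()
                // rather than a bare element name.
                foreach (XmlNode node in doc.SelectNodes("//*[local-name()='h2']"))
                {
                    Console.WriteLine(node.InnerText);
                }
            }
        }
    }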

Brian Scott