views: 45
answers: 7

Hi,

I want to crawl other companies' websites, say car listing sites, and extract read-only information into my local database. Then I want to be able to display this collected information on my website. Purely from a technology perspective, is there a .NET tool, program, etc. already out there that is generic enough for my purpose, or do I have to write it from scratch?

To do it effectively, I may need a WCF job that just mines data on a constant basis and refreshes the database, which then provides data to the website.

Also, is there a way to mask my calls to those websites? Would I create a "traffic burden" for my target websites? Would it impact their functionality if I am just harmlessly crawling them?

How do I make my requests look "human" instead of coming from a crawler?

Are there code examples out there on how to use a library that parses the DOM tree?

Can I send a request to a specific site and get a response as a DOM using the WebBrowser control?

A: 
  1. No, there is no generic solution. You need to learn the appropriate technologies.
  2. You can "hide" only if you direct the traffic through an HTTP proxy (see the sketch after this list).
  3. "Traffic burden", or as it is really called, traffic load, depends on the percentage your HTTP requests make up of the site's overall traffic; for household names you can safely assume your traffic load will be nearly zero.
  4. It most likely won't impact them; they were designed to serve requests.
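A minimal sketch of item 2, assuming System.Net on the full .NET Framework; the proxy address and target URL are placeholders:

    using System;
    using System.IO;
    using System.Net;

    class ProxyFetch
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/cars");   // placeholder URL

            // Route the request through an HTTP proxy so the target site sees the proxy's IP.
            request.Proxy = new WebProxy("http://127.0.0.1:8080");   // placeholder proxy address

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd().Length);
            }
        }
    }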
Jas
-1 for suggesting the use of regular expressions to parse HTML: http://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags
Kirk Woll
Kirk, it wasn't meant to be taken literally. I could just as well have written something like "learn the HTTP protocol; understand sockets". My point was that there is no magic bullet. Sheesh.
Jas
A: 

I don't know how you'd affect a target site, but one nifty way to generate human-looking traffic is the WinForms WebBrowser control. I've used it a couple of times to grab things from Wikipedia, because my usual approach of using HttpWebRequest to perform an HTTP GET tripped a non-human filter there and I got blocked.
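A minimal sketch of that approach, assuming a project that references System.Windows.Forms; the URL is a placeholder. The control loads the page in an embedded IE instance and hands back a parsed DOM:

    using System;
    using System.Windows.Forms;

    class BrowserScrape
    {
        [STAThread]   // the WebBrowser control requires a single-threaded apartment
        static void Main()
        {
            var browser = new WebBrowser { ScriptErrorsSuppressed = true };
            browser.Navigate("http://example.com/cars");   // placeholder URL

            // Pump Windows messages until the page has finished loading.
            while (browser.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();

            // browser.Document is a parsed DOM, not just the raw markup.
            foreach (HtmlElement link in browser.Document.GetElementsByTagName("a"))
                Console.WriteLine(link.GetAttribute("href"));
        }
    }

Because the page goes through a real browser engine, the headers, cookies, and script execution look like ordinary IE traffic, which is presumably why it slipped past the filter.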

Gabriel
Can you elaborate on "human-looking traffic is the WinForms browser control"? I am lost...
dotnet-practitioner
To be specific, if you just want the equivalent of what you see in view-source when you load a web page, you can use System.Net.HttpWebRequest/Response, but some sites know that "real" browsers add additional stuff to their request headers (among other things). Wikipedia tolerated only a few requests like this before blocking me. But when I used the WebBrowser control, I was essentially driving an instance of IE programmatically, so any kind of detection would have to be based on less deterministic, more qualitative metrics (or so I think). Any clearer?
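For the plain HttpWebRequest route, a hedged sketch of sending the kind of headers a "real" browser adds; the User-Agent string and URL are only examples, and this alone may still not get past a bot filter:

    using System;
    using System.IO;
    using System.Net;

    class HeaderedFetch
    {
        static void Main()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://example.com/cars");   // placeholder URL

            // Headers a typical browser sends; without them some sites flag the request as a bot.
            request.UserAgent = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)";   // example UA
            request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
            request.Headers["Accept-Language"] = "en-US,en;q=0.5";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string html = reader.ReadToEnd();   // the same markup you would see in view-source
                Console.WriteLine(html.Length);
            }
        }
    }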
Gabriel
A: 

How to Write a Web Crawler in C#.

Robert Greiner
+1  A: 

Use HtmlAgilityPack to parse the HTML. Then use a Windows Service (not WCF) to run the long-running process.
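A minimal sketch of the parsing half, assuming the HtmlAgilityPack library is referenced; the URL and XPath selector are placeholders for whatever listings you are after:

    using System;
    using HtmlAgilityPack;

    class AgilityScrape
    {
        static void Main()
        {
            // HtmlWeb fetches the page and returns it as a parsed DOM.
            var doc = new HtmlWeb().Load("http://example.com/cars");   // placeholder URL

            // XPath query against the DOM tree -- placeholder selector.
            var nodes = doc.DocumentNode.SelectNodes("//div[@class='listing']//a");
            if (nodes == null) return;   // SelectNodes returns null when nothing matches

            foreach (HtmlNode node in nodes)
                Console.WriteLine("{0} -> {1}",
                    node.InnerText.Trim(),
                    node.GetAttributeValue("href", string.Empty));
        }
    }

The Windows Service part would just run this kind of fetch-and-parse loop on a timer and write the results into your database.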

Kirk Woll
A: 

As far as affecting the target site, it totally depends on the site. If you crawl Stack Overflow enough times, fast enough, they'll ban your IP. If you do the same to Google, they'll start asking you to answer CAPTCHAs. Most sites have rate limiters, so you can only make a request so often.

As far as scraping the data out of the page, never use regular expressions; it's been said over and over. You should either use a library that parses the DOM tree or roll your own if you want. At a previous startup of mine, the way we approached the issue was to write an intermediary template language that told our scraper where the data was on the page, so that we knew what data, and what type of data, we were extracting (see the sketch below). The hard part, you'll find, is constantly changing and varying data. Once you have the parser working, it takes constant work to keep it working, even on the same site.
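A rough sketch of that template idea, using a DOM parser such as HtmlAgilityPack (mentioned in another answer); the field names and XPath selectors are placeholders, and they are exactly the part that has to be maintained when a site's markup drifts:

    using System;
    using System.Collections.Generic;
    using HtmlAgilityPack;

    class TemplateScrape
    {
        // Per-site "template": field name -> XPath selector (placeholders).
        static readonly Dictionary<string, string> CarTemplate = new Dictionary<string, string>
        {
            { "Make",  "//span[@class='make']"  },
            { "Model", "//span[@class='model']" },
            { "Price", "//span[@class='price']" },
        };

        static void Main()
        {
            var doc = new HtmlWeb().Load("http://example.com/cars/123");   // placeholder URL

            foreach (var field in CarTemplate)
            {
                HtmlNode node = doc.DocumentNode.SelectSingleNode(field.Value);
                Console.WriteLine("{0}: {1}", field.Key,
                    node == null ? "(not found)" : node.InnerText.Trim());
            }
        }
    }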

whatWhat
Normally the website layout does not change much, so I don't understand what you mean by "Once you have the parser working, it takes constant work to keep it working, even on the same site."
dotnet-practitioner
A: 

I use a fantastically flexible tool, Visual Web Ripper. Output to Excel, SQL, or text; input from the same.

Brad
A: 

There is no generic tool that will extract the data from the Web for you. This is not a trivial operation. In general, crawling the pages is not that difficult, but stripping/extracting the content you need is, and that part has to be customized for every website.

We use professional tools dedicated to this; they are designed to feed the crawler with instructions about which areas within the web page contain the data you need.

I have also seen Perl scripts designed to extract data from specific web pages. They can be highly effective, depending on the site you parse.

If you hit a site too frequently, you will be banned (at least temporarily).
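One simple way to avoid hitting a site too frequently is a fixed politeness delay between requests; a sketch with placeholder URLs and an arbitrary five-second interval:

    using System;
    using System.Net;
    using System.Threading;

    class PoliteCrawler
    {
        static void Main()
        {
            string[] urls =
            {
                "http://example.com/cars?page=1",   // placeholder URLs
                "http://example.com/cars?page=2",
            };

            using (var client = new WebClient())
            {
                foreach (string url in urls)
                {
                    string html = client.DownloadString(url);
                    Console.WriteLine("{0}: {1} chars", url, html.Length);

                    // Crude politeness: pause so the site never sees a burst of requests.
                    Thread.Sleep(TimeSpan.FromSeconds(5));
                }
            }
        }
    }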

To mask your IP, you can try http://proxify.com/

SKG