views: 544
answers: 5

Let me rephrase the question...

Here's the scenario: As an insurance agent, I am constantly working with multiple insurance websites. For each website I need to log in and pull up a client. I am looking to automate this process.

I currently have a solution built for iMacros but that requires a download/installation.

I'm looking for a solution using the .NET framework that will allow the user to provide their login credentials and information about a client, so that I can automate this process for them.

This will involve knowledge of each specific website, which is fine; I will have all of that information.

I would like this process to happen in the background and then present the website to the user once the action is performed.

+2  A: 

I've done this in the past using the WebBrowser control inside a WinForms app that I execute on the server. The WebBrowser control will allow you to access the HTML elements on the page, input information, click buttons/links, etc. It should allow you to accomplish your goal.
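
A minimal sketch of that approach, assuming placeholder element IDs ("username", "password", "loginButton") and a placeholder URL; each real site will need its own selectors:

    using System;
    using System.Windows.Forms;

    public class LoginAutomator
    {
        // Note: the WebBrowser control needs an STA thread with a message loop
        // (e.g. Application.Run) for DocumentCompleted to fire.
        public void Run()
        {
            var browser = new WebBrowser();
            browser.DocumentCompleted += (sender, e) =>
            {
                // Fill the form and submit once the page has loaded; the element
                // IDs here are placeholders for whatever each site actually uses.
                HtmlDocument doc = browser.Document;
                doc.GetElementById("username").SetAttribute("value", "agentLogin");
                doc.GetElementById("password").SetAttribute("value", "secret");
                doc.GetElementById("loginButton").InvokeMember("click");
            };
            browser.Navigate("https://example-insurer.com/login");
        }
    }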

There are also ways to do this without the WebBrowser control; look at the HTML Agility Pack.
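
As a rough illustration (the URL is a placeholder), the HTML Agility Pack can parse the login page and tell you which form fields you would need to post; actually submitting them is then a job for an HTTP client:

    using System;
    using HtmlAgilityPack;

    class FormInspector
    {
        static void Main()
        {
            // Download and parse the login page (no browser involved).
            var web = new HtmlWeb();
            HtmlDocument doc = web.Load("https://example-insurer.com/login");

            // List the input fields of the first form so we know what to post.
            var inputs = doc.DocumentNode.SelectNodes("//form[1]//input");
            if (inputs == null) return;
            foreach (HtmlNode input in inputs)
            {
                Console.WriteLine("{0} = {1}",
                    input.GetAttributeValue("name", "(unnamed)"),
                    input.GetAttributeValue("value", ""));
            }
        }
    }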

Jason Miesionczek
This will not be a WinForms application.
mstrickland
In light of your updated question, I still recommend the HTML Agility Pack to do what you're looking to accomplish.
Jason Miesionczek
+5  A: 

You could try the following tools:

They are automated testing tools/frameworks that allow you to write automated tests from a UI perspective and verify the results.

The Matt
Looking for something a little more proven with examples.
mstrickland
What do you mean "more proven"? There's plenty of examples for each of those frameworks on their respective pages.
The Matt
@mstrickland: SL is too young and still struggling to establish "proven" ways to do its nominal stuff for there to possibly exist "proven" ways to do something as esoteric as this.
AnthonyWJones
A: 

I'm assuming that you are talking about filling and submitting a form or forms using a bot of some sort, then scraping the response to display to the user.

Use HttpWebRequest to create a form post containing the relevant form fields and data from your model, and submit the request. Retrieve and analyse the response, storing any cookies, as you will need to resubmit them on the next request. Formulate the next request based on the results of the first (remembering to attach the cookies as necessary) and submit it. Retrieve the response and either display it, or parse and then display it, depending on what you are hoping to achieve.
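
A hedged sketch of that flow, assuming a plain application/x-www-form-urlencoded login post to a placeholder URL with placeholder field names; real sites will add hidden fields, tokens, and redirects that you need to handle per site:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class FormPoster
    {
        static void Main()
        {
            // One cookie container reused across requests keeps the session alive.
            var cookies = new CookieContainer();

            var request = (HttpWebRequest)WebRequest.Create("https://example-insurer.com/login");
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.CookieContainer = cookies;

            byte[] body = Encoding.UTF8.GetBytes("username=agentLogin&password=secret");
            request.ContentLength = body.Length;
            using (Stream s = request.GetRequestStream())
                s.Write(body, 0, body.Length);

            // The response feeds any session cookies back into 'cookies' automatically.
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string html = reader.ReadToEnd();
                // Parse 'html' to decide what the next request (the client lookup)
                // looks like, then issue it with the same CookieContainer attached.
            }
        }
    }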

You say this is not a client app, so I will assume a web app. The downside is that once you start proxying requests for the user, you will always have to proxy those requests: there is no way for you to transfer any session cookies from the target site to the user, and no (simple/easy/logical) way for the user to log in to the target site and then transfer the cookie to you.

Usually, when trying to do this sort of integration, people will use some form of published API for interacting with the companies/systems in question, as such APIs are designed for the type of interactions you are referring to.

Neal
+1  A: 

Use WatiN. It's an open-source .NET library to automate IE and Firefox. It's a lot easier than manipulating raw HTTP requests or hacking the WebBrowser control to do what you want, and you can run it from a console app or service, since you mentioned this wouldn't be a WinForms app.

You can also make the browser window invisible if needed, since you mentioned only showing it to the user at a certain point.
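
A minimal sketch with WatiN, assuming placeholder URL, field names, and button text; the exact name of the setting that hides the IE window can differ between WatiN versions:

    using System;
    using WatiN.Core;

    class WatinLogin
    {
        [STAThread]  // WatiN requires a single-threaded apartment
        static void Main()
        {
            // Keep the automated IE window hidden while the login runs
            // (setting name as in WatiN 1.x/2.x; check your version's docs).
            Settings.MakeNewIeInstanceVisible = false;

            var browser = new IE("https://example-insurer.com/login");
            browser.TextField(Find.ByName("username")).TypeText("agentLogin");
            browser.TextField(Find.ByName("password")).TypeText("secret");
            browser.Button(Find.ByValue("Log in")).Click();

            // ...navigate to the client record here, then either show the window
            // to the user or browser.Close() it when you are done.
        }
    }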

Adam Neal
+1 for Watin. Great for testing web apps and will work like a charm!
Michael
A: 

It is not clear to me what difficulty you are trying to communicate when you wrote:

I currently have a solution built for iMacros but that requires a download/installation.

I think there are some requirements here about which you are not being explicit. You will certainly need to "download/install" your .NET program on your clients' machines, so what's the difference?

Anyway, Crowbar seems promising:

Crowbar is a web scraping environment based on the use of a server-side headless mozilla-based browser.

Its purpose is to allow running javascript scrapers against a DOM to automate web sites scraping but avoiding all the syntax normalization issues.

For people not familiar with this terminology: "javascript scrapers" here means something like an iMacros macro, used to extract information from a web site (in the end it is a JavaScript program; what purpose you use it for does not, I think, make a difference).

Design

Crowbar is implemented as a (rather simple, in fact) XULRunner application that provides an HTTP RESTful web service implemented in javascript (basically turning a web browser into a web server!) that you can use to 'remotely control' the browser.
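
As a hedged illustration from the .NET side, assuming Crowbar's default arrangement of listening on localhost port 10000 and accepting the target page as a url query parameter (check the Crowbar documentation for the exact endpoint and parameters):

    using System;
    using System.Net;

    class CrowbarClient
    {
        static void Main()
        {
            // Ask the headless browser to fetch and render the page, then
            // return the resulting DOM as HTML over its REST interface.
            // Endpoint, port and the 'delay' parameter are assumptions here.
            string target = Uri.EscapeDataString("https://example-insurer.com/login");
            string crowbarUrl = "http://127.0.0.1:10000/?url=" + target + "&delay=3000";

            using (var client = new WebClient())
            {
                string renderedHtml = client.DownloadString(crowbarUrl);
                Console.WriteLine(renderedHtml.Length + " bytes of rendered DOM received");
            }
        }
    }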

I don't know if this headless browser can be extended with add-ons like a normal Firefox installation. If it can, you could even consider using your iMacros macros (or CoScripter) with appropriate packaging.

The more I think about this, the more I feel that this is a convoluted solution for what you wrote you want to achieve. So please clarify.

MaD70