I am learning Haskell after years of OOP.
I am writing a dumb web spider with a few functions and some state.
I am not sure how to do it right in the FP world.
In OOP world this spider could be designed like this (by usage):
Browser b = new Browser()
b.goto("http://www.google.com/")
String firstLink = b.getLinks()[0]
b.goto(firstLink)
print(b.getHtml())
This code loads http://www.google.com/, then "clicks" the first link, loads the content of the second page, and prints it.
class Browser {
goto(url: String) : void // loads HTML from given URL, blocking
getUrl() : String // returns current URL
getHtml() : String // returns current HTML
getLinks(): [String] // parses current HTML and returns a list of available links (URLs)
private _currentUrl:String
private _currentHtml:String
}
It's possible to have two or more "browsers" at once, each with its own separate state:
Browser b1 = new Browser()
Browser b2 = new Browser()
b1.goto("http://www.google.com/")
b2.goto("http://www.stackoverflow.com/")
print(b1.getHtml())
print(b2.getHtml())
QUESTION: how would you design such a thing in Haskell from scratch (a Browser-like API with the possibility of having several independent instances)? Please give a code snippet.
NOTE: For simplicity, skip the details of the getLinks() function (it's trivial and not interesting here).
Also let’s assume there is an API function
getUrlContents :: String -> IO String
that opens HTTP connection and returns an HTML for given URL.
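For reference, here is one minimal sketch of what I mean: each browser instance holds its state in an IORef, so several instances stay independent. getUrlContents is stubbed here (it returns a fake page) just to keep the snippet self-contained; assume the real one does the HTTP call.

```haskell
import Data.IORef

-- Stub for the assumed API function; a real one would do an HTTP request.
getUrlContents :: String -> IO String
getUrlContents url = return ("<html>contents of " ++ url ++ "</html>")

-- The mutable state hidden inside a browser instance.
data BrowserState = BrowserState
  { currentUrl  :: String
  , currentHtml :: String
  }

newtype Browser = Browser (IORef BrowserState)

newBrowser :: IO Browser
newBrowser = Browser <$> newIORef (BrowserState "" "")

goto :: Browser -> String -> IO ()
goto (Browser ref) url = do
  html <- getUrlContents url
  writeIORef ref (BrowserState url html)

getUrl :: Browser -> IO String
getUrl (Browser ref) = currentUrl <$> readIORef ref

getHtml :: Browser -> IO String
getHtml (Browser ref) = currentHtml <$> readIORef ref

main :: IO ()
main = do
  b1 <- newBrowser
  b2 <- newBrowser
  goto b1 "http://www.google.com/"
  goto b2 "http://www.stackoverflow.com/"
  putStrLn =<< getHtml b1
  putStrLn =<< getHtml b2
```

This mirrors the OOP usage closely, but it leans on IO everywhere; I suspect there are more idiomatic designs, which is what I'm asking about.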
UPDATE: why have state at all (or maybe not)?
The API could have more functions, not just a single "load and parse the results" call.
I didn't add them to avoid complexity.
It could also take care of the HTTP Referer header and cookies, sending them with each request to emulate real browser behavior.
Consider the following scenario:
- Open http://www.google.com/
- Type "haskell" into first input area
- Click button "Google Search"
- Click link "2"
- Click link "3"
- Print HTML of current page (google results page 3 for "haskell")
With a scenario like this in hand, I as a developer would like to translate it into code as directly as possible:
Browser b = new Browser()
b.goto("http://www.google.com/")
b.typeIntoInput(0, "haskell")
b.clickButton("Google Search") // b.goto(b.findButton("Google Search"))
b.clickLink("2") // b.goto(b.findLink("2"))
b.clickLink("3")
print(b.getHtml())
The goal of this scenario is to get HTML of the last page after a set of operations. Another less visible goal is to keep code compact.
If Browser has a state, it can send HTTP Referer header and cookies while hiding all mechanics inside itself and giving nice API.
If Browser has no state, the developer will likely have to pass the current URL/HTML/cookies around explicitly -- and this adds noise to the scenario code.
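One direction I've been considering is threading the state implicitly with StateT from the mtl package, so the scenario code stays compact. This is just a sketch under assumptions: getUrlContents is stubbed, and clickLink/findLink are hypothetical placeholders (findLink would really parse currentHtml).

```haskell
import Control.Monad.State  -- from the mtl package

-- Stub for the assumed API function.
getUrlContents :: String -> IO String
getUrlContents url = return ("<html>" ++ url ++ "</html>")

-- State threaded implicitly; could also carry cookies, Referer, etc.
data BrowserState = BrowserState
  { currentUrl  :: String
  , currentHtml :: String
  , cookies     :: [(String, String)]
  }

type BrowserM = StateT BrowserState IO

goto :: String -> BrowserM ()
goto url = do
  html <- liftIO (getUrlContents url)  -- real version would send cookies/Referer
  modify (\s -> s { currentUrl = url, currentHtml = html })

getHtml :: BrowserM String
getHtml = gets currentHtml

-- Hypothetical helper; findLink is a stub standing in for real HTML parsing.
clickLink :: String -> BrowserM ()
clickLink name = do
  url <- findLink name
  goto url
  where
    findLink = return  -- stub: would search currentHtml for the link

scenario :: BrowserM String
scenario = do
  goto "http://www.google.com/"
  clickLink "2"
  clickLink "3"
  getHtml

main :: IO ()
main = do
  html <- evalStateT scenario (BrowserState "" "" [])
  putStrLn html
```

The scenario body reads almost like the OOP version, and two independent "browsers" are just two separate evalStateT runs. I'm not sure if this is the idiomatic design, which is why I'm asking.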
NOTE: I guess there are libraries out there for scraping HTML in Haskell, but my intention was not to scrape HTML; it was to learn how such "black-boxed" things can be designed properly in Haskell.