views:

361

answers:

4

I am learning Haskell after years of OOP.

I am writing a simple web spider with a few functions and some state.
I am not sure how to do it right in the FP world.

In the OOP world, this spider could be designed like this (shown by usage):

Browser b = new Browser()
b.goto("http://www.google.com/")

String firstLink = b.getLinks()[0]

b.goto(firstLink)
print(b.getHtml())

This code loads http://www.google.com/, then "clicks" the first link, loads the content of the second page, and then prints that content.

class Browser {
   goto(url: String) : void // loads HTML from given URL, blocking
   getUrl() : String // returns current URL
   getHtml() : String // returns current HTML
   getLinks(): [String] // parses current HTML and returns a list of available links (URLs)

   private _currentUrl:String
   private _currentHtml:String
}

It's possible to have 2 or more "browsers" at once, each with its own separate state:

Browser b1 = new Browser()
Browser b2 = new Browser()

b1.goto("http://www.google.com/")
b2.goto("http://www.stackoverflow.com/")

print(b1.getHtml())
print(b2.getHtml())

QUESTION: show how would you design such a thing in Haskell from scratch (a Browser-like API with the possibility of having several independent instances)? Please, give a code snippet.

NOTE: For simplicity, skip the details of the getLinks() function (it's trivial and not interesting).
Also let's assume there is an API function

getUrlContents :: String -> IO String

that opens an HTTP connection and returns the HTML for a given URL.
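For concreteness, here is one way such a function might be sketched using the `HTTP` package's Network.HTTP module (an assumption: that package is installed; error handling and redirects are ignored):

```haskell
import Network.HTTP (getRequest, getResponseBody, simpleHTTP)

-- Fetch the body of the page at the given URL as a String.
-- simpleHTTP performs the request; getResponseBody extracts the body.
getUrlContents :: String -> IO String
getUrlContents url = simpleHTTP (getRequest url) >>= getResponseBody
```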


UPDATE: why have state (or maybe not)?

The API can have more functions, not just single "load-and-parse results".
I didn't add them to avoid complexity.

Also it could care about HTTP Referer header and cookies by sending them with each request in order to emulate real browser behavior.

Consider the following scenario:

  1. Open http://www.google.com/
  2. Type "haskell" into first input area
  3. Click button "Google Search"
  4. Click link "2"
  5. Click link "3"
  6. Print HTML of current page (google results page 3 for "haskell")

With a scenario like this in hand, I as a developer would like to translate it into code as closely as possible:

Browser b = new Browser()
b.goto("http://www.google.com/")
b.typeIntoInput(0, "haskell")
b.clickButton("Google Search") // b.goto(b.findButton("Google Search"))
b.clickLink("2") // b.goto(b.findLink("2"))
b.clickLink("3")
print(b.getHtml())

The goal of this scenario is to get HTML of the last page after a set of operations. Another less visible goal is to keep code compact.

If Browser has a state, it can send HTTP Referer header and cookies while hiding all mechanics inside itself and giving nice API.

If Browser has no state, the developer is likely to pass around all current URL/HTML/Cookies -- and this adds noise to scenario code.

NOTE: I guess there are libraries out there for scraping HTML in Haskell, but my intention was not to scrape HTML, but to learn how these "black-boxed" things can be designed properly in Haskell.

+2  A: 

Don't try to replicate too much object-orientation.

Just define a simple Browser type that holds the current URL (in an IORef, for the sake of mutability) and some IO functions to provide access and modification functionality.

A sample program would look like this:

import Control.Monad

main = do
   b1 <- makeBrowser "google.com"
   b2 <- makeBrowser "stackoverflow.com"

   links <- getLinks b1

   b1 `navigateTo` (head links)

   print =<< getHtml b1
   print =<< getHtml b2

Note that if you define a helper function like o # f = f o, you'll have a more object-like syntax (e.g. b1#getLinks).
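That helper can be written down directly; this is just flipped function application:

```haskell
-- Flipped function application: "object # method" reads OO-style.
(#) :: a -> (a -> b) -> b
o # f = f o
infixl 1 #

-- e.g. "hello" # length  ==  5
```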

Complete type definitions:

data Browser = Browser { currentUrl :: IORef String }

makeBrowser  :: String -> IO Browser

navigateTo   :: Browser -> String -> IO ()
getUrl       :: Browser -> IO String
getHtml      :: Browser -> IO String
getLinks     :: Browser -> IO [String]
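The declarations above might be filled in as follows. This is only a sketch: getUrlContents is stubbed as a placeholder (the question assumes it does the real HTTP work), and getLinks is left trivial as the question requests:

```haskell
import Data.IORef

-- Placeholder for the network primitive assumed by the question.
getUrlContents :: String -> IO String
getUrlContents url = return ("<html>contents of " ++ url ++ "</html>")

data Browser = Browser { currentUrl :: IORef String }

makeBrowser :: String -> IO Browser
makeBrowser url = fmap Browser (newIORef url)

-- Mutate the browser's current URL in place.
navigateTo :: Browser -> String -> IO ()
navigateTo b url = writeIORef (currentUrl b) url

getUrl :: Browser -> IO String
getUrl b = readIORef (currentUrl b)

-- Re-fetch the page at the current URL (a real version might cache it).
getHtml :: Browser -> IO String
getHtml b = getUrl b >>= getUrlContents

-- Deliberately trivial, as in the question.
getLinks :: Browser -> IO [String]
getLinks _ = return []
```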
Dario
Why are you trying to make browser "objects" and mimic the object-oriented design/interface/syntax? Wouldn't a simple additional `getLinks :: String -> String -> [String]` be all that is needed?
sth
IMHO, even you are trying to replicate OOP too much. For this task, the only remotely possible benefit of mutability is caching the HTML and links list, which your answer doesn't do. And even there it isn't needed.
Alexey Romanov
+3  A: 

The getUrlContents function already does what goto() and getHtml() would do; the only thing missing is a function that extracts links from the downloaded page. It could take a string (the HTML of a page) and a URL (to resolve relative links) and extract all links from that page:

getLinks :: String -> String -> [String]

From these two functions you can easily build other functions that do the spidering. For example, "get the first linked page" could look like this:

getFirstLinked :: String -> IO String
getFirstLinked url =
   do page <- getUrlContents url
      getUrlContents (head (getLinks page url))

A simple function to download everything linked from a URL could be:

allPages :: String -> IO [String]
allPages url =
   do page <- getUrlContents url
      otherpages <- mapM getUrlContents (getLinks page url)
      return (page : otherpages)

(Note that this, for example, will follow cycles in the links endlessly; a function for real use should take care of such cases.)
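One way to take care of cycles is to thread a set of already-visited URLs through the traversal. A sketch, parametrised over the fetcher and link extractor (so the core logic needs no network; the names are mine, not from the answer):

```haskell
import Data.List (nub)
import qualified Data.Set as Set

-- Breadth-first crawl from a start URL, never visiting a URL twice.
-- fetch plays the role of getUrlContents; links that of getLinks.
allPagesNoCycles :: (String -> IO String)           -- page fetcher
                 -> (String -> String -> [String])  -- link extractor
                 -> String -> IO [String]
allPagesNoCycles fetch links start = go (Set.singleton start) [start]
  where
    go _       []         = return []
    go visited (url:rest) = do
      page <- fetch url
      -- keep only links we have not queued before
      let new = nub [ l | l <- links page url
                        , not (l `Set.member` visited) ]
      pages <- go (foldr Set.insert visited new) (rest ++ new)
      return (page : pages)
```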

The only "state" used by these functions is the URL, and it is just given to the relevant functions as a parameter.

If there were more information that all the browsing functions need, you could create a new type to group it all together:

data BrowseInfo = BrowseInfo
     { getUrl     :: String
     , getProxy   :: ProxyInfo
     , getMaxSize :: Int
     }

Functions that use this information could then simply take a parameter of this type and use the contained information. There is no problem in having many instances of these objects and using them simultaneously; every function will just use the object that it is given as a parameter.
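For instance, a fetching function might consult the record like this (a sketch: ProxyInfo's shape and the fetchVia helper are hypothetical placeholders, and the record is repeated here so the snippet is self-contained):

```haskell
-- Hypothetical proxy description; the answer leaves ProxyInfo undefined.
data ProxyInfo = NoProxy | Proxy String Int

data BrowseInfo = BrowseInfo
     { getUrl     :: String
     , getProxy   :: ProxyInfo
     , getMaxSize :: Int
     }

-- Fetch the configured URL through the configured proxy and
-- truncate the result to the configured maximum size.
fetchLimited :: BrowseInfo -> IO String
fetchLimited info = do
  page <- fetchVia (getProxy info) (getUrl info)
  return (take (getMaxSize info) page)
  where
    -- placeholder; a real version would do the HTTP request
    fetchVia _proxy url = return ("<html>" ++ url ++ "</html>")
```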

sth
+2  A: 

show how would you design such a thing in Haskell from scratch (Browser-like API with possibility to have several independent instances)? Please, give a code snippet.

I would use one (Haskell) thread per browser instance, have all threads running in the State monad with a record type of whatever resources they need, and have results communicated back to the main thread over a channel.

Add more concurrency! That's the FP way.

If I recall correctly, there's a design here for gangs of link-checking threads communicating over channels:

Also, make sure not to use Strings, but Text or ByteStrings -- they'll be much faster.
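A minimal sketch of that shape (simplified: no State monad, and the fetcher is a placeholder stub): one worker thread per URL, each pushing its (url, html) result onto a shared Chan that the main thread drains.

```haskell
import Control.Concurrent (forkIO, newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM)

-- Placeholder fetcher; a real one would perform the HTTP request.
fetchPage :: String -> IO String
fetchPage url = return ("<html>" ++ url ++ "</html>")

-- Fork one worker per URL; each reports (url, html) on the channel,
-- and the main thread collects exactly one result per URL.
spider :: [String] -> IO [(String, String)]
spider urls = do
  results <- newChan
  forM_ urls $ \url -> forkIO $ do
    html <- fetchPage url
    writeChan results (url, html)
  replicateM (length urls) (readChan results)
```

Results arrive in completion order, not request order, which is usually fine for a spider.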

Don Stewart
+8  A: 

As you describe the problem, there is no need for state at all:

data Browser = Browser { getUrl :: String, getHtml :: String, getLinks :: [String]} 

getLinksFromHtml :: String -> [String] -- use Text.HTML.TagSoup, it should be lazy

goto :: String -> IO Browser
goto url = do
             -- assume getUrlContents is lazy, like hGetContents
             html <- getUrlContents url 
             let links = getLinksFromHtml html
             return (Browser url html links)

It's possible to have 2 or more "browsers" at once, each with its own separate state:

You obviously can have as many as you want, and they can't interfere with each other.

Now the equivalent of your snippets. First:

htmlFromGoogle'sFirstLink = do
                              b <- goto "http://www.google.com"
                              let firstLink = head (getLinks b)
                              b2 <- goto firstLink -- note that a new browser is returned
                              putStr (getHtml b2)

And second:

twoBrowsers = do
                b1 <- goto "http://www.google.com"
                b2 <- goto "http://www.stackoverflow.com/"
                putStr (getHtml b1)
                putStr (getHtml b2)

UPDATE (reply to your update):

If Browser has a state, it can send HTTP Referer header and cookies while hiding all mechanics inside itself and giving nice API.

Still no need for state; goto can just take a Browser argument. First, we'll need to extend the type:

data Browser = Browser { getUrl :: String, getHtml :: String, getLinks :: [String], 
                         getCookies :: Map String String } -- keys are URLs, values are cookie strings

getUrlContents :: String -> String -> String -> IO String
getUrlContents url referrer cookies = ...

goto :: String -> Browser -> IO Browser
goto url browser = let
                     referrer = getUrl browser
                     -- findWithDefault (from Data.Map) is total, unlike (!),
                     -- so URLs we have no cookies for yet just get ""
                     cookies = findWithDefault "" url (getCookies browser)
                   in
                   do
                     html <- getUrlContents url referrer cookies
                     let links = getLinksFromHtml html
                     -- carry the cookie jar along; a real version would
                     -- also update it from the response
                     return (Browser url html links (getCookies browser))

newBrowser :: Browser
newBrowser = Browser "" "" [] empty -- empty is Data.Map.empty

If Browser has no state, the developer is likely to pass around all current URL/HTML/Cookies -- and this adds noise to scenario code.

No, you just pass values of type Browser around. For your example,

useGoogle :: IO ()
useGoogle = do
              b <- goto "http://www.google.com/" newBrowser
              let b2 = typeIntoInput 0 "haskell" b
              b3 <- clickButton "Google Search" b2
              ...
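The helpers assumed in this scenario would have signatures along these lines (the names come from the scenario; the split between pure and IO is my reading of the snippet above):

```haskell
-- typeIntoInput only edits the Browser's local form state, so it is pure;
-- clicking triggers a new HTTP request, so those return IO Browser.
typeIntoInput :: Int -> String -> Browser -> Browser
clickButton   :: String -> Browser -> IO Browser
clickLink     :: String -> Browser -> IO Browser
```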

Or you can get rid of those variables:

(>>~) = flip liftM -- use for binding pure functions (liftM is in Control.Monad; flip fmap works too)

useGoogle = goto "http://www.google.com/" newBrowser >>~
            typeIntoInput 0 "haskell" >>=
            clickButton "Google Search" >>=
            clickLink "2" >>=
            clickLink "3" >>~
            getHtml >>=
            putStr

Does this look good enough? Note that Browser is still immutable.

Alexey Romanov
Brilliant. ....
oshyshko
Note that the BrowserAction monad already exists: http://hackage.haskell.org/packages/archive/HTTP/4000.0.8/doc/html/Network-Browser.html
jrockway
Also note that `flip mapM` is called `forM` -- but `mapM` works on lists; for binding a pure function here you want `flip liftM` (i.e. a flipped `fmap`).
BMeph