I am a student interested in developing a search engine that indexes pages from my country. I have been researching algorithms to use for some time now, and I have identified HITS and PageRank as the best out there. I have decided to go with PageRank since it is more stable than the HITS algorithm (or so I have read).

I have found countless articles and academic papers related to PageRank, but my problem is that I don't understand most of the mathematical symbols that form the algorithm in these papers. Specifically, I don't understand how the Google Matrix (the irreducible, stochastic matrix) is calculated.

My understanding is based on these two articles:

Could someone provide a basic explanation (examples would be nice) with fewer mathematical symbols?

Thanks in advance.

+4  A: 

This is the paper that you need: http://infolab.stanford.edu/~backrub/google.html (If you do not recognise the names of the authors, you will find more information about them here: http://www.google.com/corporate/execs.html).

The symbols used in the paper are described there in lay English.

Thanks for making me google this.

Alterlife
+12  A: 

The formal definition of PageRank, as given on page 4 of the cited document, is expressed in the mathematical equation with the funny "E" symbol (it is in fact the capital Greek letter Sigma. Sigma is the Greek "S", which here stands for Summation).

In a nutshell this formula says that to calculate the PageRank of page X...

   For all the backlinks to this page  (= all the pages that link to X)
   you need to calculate a value that is
         The PageRank of the page that links to X    [R'(v)]
         divided by
         the number of links found on that linking page.    [Nv]
         to which you add
           some "source of rank",  [E(u)] normalized by c
             (we'll get to the purpose of that later.)

     And you need to take the sum of all these values [the Sigma thing]
     and finally, multiply it by a constant   [c]
        (this constant is just to keep the range of PageRank manageable)
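
For reference, the equation those lines paraphrase can be written out as follows, using the paper's notation (B_u is the set of pages that link to page u):

    R'(u) = c \sum_{v \in B_u} \frac{R'(v)}{N_v} + c \, E(u)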

The key idea behind this formula is that all the web pages that link to a given page X are adding to its "worth". By linking to a page, they are "voting" in favor of it. However, this "vote" carries more or less weight, depending on two factors:

  • The popularity of the page that links to X [R'(v)]
  • Whether the page that links to X also links to many other pages. [Nv]

These two factors reflect very intuitive ideas:

  • It's generally better to get a letter of recommendation from a recognized expert in the field than from an unknown person.
  • Regardless of who gives the recommendation, by also giving recommendations to other people, they are diminishing the value of their recommendation to you.

As you notice, this formula makes use of a somewhat circular reference: to know the PageRank of X, you need to know the PageRank of all the pages linking to X. So how do you figure out these PageRank values?... That's where the issue of convergence, explained in the next section of the document, kicks in.

Essentially, by starting with some "random" (or, preferably, "decent guess") values of PageRank for all pages, and by recalculating the PageRank with the formula above, the newly calculated values get "better" as you iterate this process a few times. The values converge, i.e. they each get closer and closer to the actual/theoretical value. Therefore, by iterating a sufficient number of times, we reach a point where additional iterations would not add any practical precision to the values provided by the last iteration.
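
To make that iteration concrete, here is a minimal Python sketch. It uses the common damping-factor form, in which the "source of rank" E(u) is simply spread uniformly over all pages; the 0.85 value and the handling of pages with no outgoing links are illustrative choices, not something prescribed by the paper:

    # Minimal iterative PageRank sketch (illustrative only, not Google's code).
    # links maps each page to the list of pages it links to.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}               # the initial "decent guess"
        for _ in range(iterations):                      # repeat until values settle
            new_rank = {p: (1.0 - damping) / n for p in pages}  # uniform "source of rank"
            for v, outlinks in links.items():
                if outlinks:
                    share = rank[v] / len(outlinks)      # R'(v) / Nv
                    for u in outlinks:
                        new_rank[u] += damping * share
                else:
                    for u in pages:                      # dangling page: spread evenly
                        new_rank[u] += damping * rank[v] / n
            rank = new_rank
        return rank

    # Toy example: A and C both link to B, and B links back to A.
    print(pagerank({"A": ["B"], "B": ["A"], "C": ["B"]}))

Running it on the toy graph shows B accumulating the most rank, since both A and C "vote" for it.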

Now... that is nice and dandy, in theory. The trick is to convert this algorithm into something equivalent that can be computed more quickly. There are several papers that describe how this, and similar tasks, can be done. I don't have such references off-hand, but will add them later. Beware: they will involve a healthy dose of linear algebra.

EDIT: as promised, here are a few links regarding algorithms to calculate PageRank: "Efficient Computation of PageRank" (Haveliwala, 1999) /// "Exploiting the Block Structure of the Web for Computing PageRank" (Kamvar et al., 2003) /// "A Fast Two-Stage Algorithm for Computing PageRank" (Lee et al., 2002)

Although many of the authors of the links provided above are from Stanford, it doesn't take long to realize that the quest for efficient PageRank-like calculation is a hot field of research. I realize this material goes beyond the scope of the OP, but it is important to hint at the fact that the basic algorithm isn't practical for large web graphs.

To finish with a very accessible text (yet one with many links to in-depth info), I'd like to mention Wikipedia's excellent PageRank article.

If you're serious about this kind of thing, you may consider an introductory/refresher class in maths, particularly linear algebra, as well as a computer science class that deals with graphs in general. BTW, great suggestion from Michael Dorfman, in this post, for the OCW videos of MIT's 18.06 lectures.

I hope this helps a bit...

mjv
Thank you for this. I'll take your advice.
codedhands
+6  A: 

If you are serious about developing an algorithm for a search engine, I'd seriously recommend you take a Linear Algebra course. In the absence of an in-person course, the MIT OCW course by Gilbert Strang is quite good (video lectures at http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/).

A class like this would certainly allow you to understand the mathematical symbols in the document you provided -- there's nothing in that paper that wouldn't be covered in a first-year Linear Algebra course.

I know this isn't the answer you are looking for, but it's really the best option for you. Having someone try to explain the individual symbols or algorithms to you when you don't have a good grasp of the basic concepts isn't a very good use of anybody's time.

Michael Dorfman
Thanks a lot. I appreciate it.
codedhands
+3  A: 

You might also want to read the introductory tutorial on the mathematics behind the construction of the PageRank matrix, written by David Austin, entitled How Google Finds Your Needle in the Web's Haystack; it starts with a simple example and builds up to the full definition.

Jeff Kubina
+2  A: 

"The $25,000,000,000 Eigenvector: The Linear Algebra Behind Google". from Rose-Hulman is a bit out of date, because now Page Rank is the $491B linear algebra problem. I think the paper is very well written.

"Programming Collective Intelligence" has a nice discussion of Page Rank as well.

duffymo
+2  A: 

Duffymo posted the best reference, in my opinion. I studied the PageRank algorithm in my senior undergrad year. PageRank does the following:

  1. Define the set of current web pages as the states of a finite Markov chain.
  2. Define the probability of transitioning from site u to site v, where there is an outgoing link from u to v, to be

    1/u_{n}, where u_{n} is the number of outgoing links from u.

  3. Assume the Markov chain defined above is irreducible (this can be enforced with only a slight degradation of the results).

  4. It can be shown that every finite, irreducible Markov chain has a stationary distribution. Define the PageRank to be this stationary distribution, that is to say, the vector that holds the probability of a random particle ending up at each given site as the number of state transitions goes to infinity.

Google uses a slight variation on the power method to find the stationary distribution (the power method finds the dominant eigenvalue and its eigenvector). Other than that, there is nothing to it. It's rather simple and elegant, and probably one of the simplest applications of Markov chains I can think of, but it is worth a lot of money!
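
As a rough, self-contained illustration of those four steps (the 0.85 damping value and the uniform "teleportation" used to force irreducibility are the standard textbook choices, not necessarily exactly what Google does), here is a power-method sketch in Python using numpy:

    # Rough sketch of the Markov-chain view: build the row-stochastic transition
    # matrix from the link structure, mix in a uniform "teleportation" term to
    # force irreducibility, then run the power method to get the stationary vector.
    import numpy as np

    def google_matrix(adjacency, damping=0.85):
        n = len(adjacency)
        P = np.zeros((n, n))
        for u in range(n):
            outlinks = np.flatnonzero(adjacency[u])
            if len(outlinks):
                P[u, outlinks] = 1.0 / len(outlinks)   # step 2: 1/u_n per outgoing link
            else:
                P[u, :] = 1.0 / n                      # dangling node: jump anywhere
        return damping * P + (1.0 - damping) / n       # step 3: now irreducible

    def stationary_distribution(G, iterations=100):
        pi = np.full(G.shape[0], 1.0 / G.shape[0])     # start from the uniform guess
        for _ in range(iterations):                    # power method: pi <- pi * G
            pi = pi @ G
            pi /= pi.sum()
        return pi                                      # step 4: this is the PageRank

    # Toy web: 0 -> 1, 1 -> 0 and 2, 2 -> 0
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [1, 0, 0]])
    print(stationary_distribution(google_matrix(A)))

The mixing step is what produces the irreducible, stochastic "Google matrix" the original question asked about; the power iteration then converges to its stationary distribution, i.e. the PageRank vector.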

So all the PageRank algorithm does is take the topology of the web into account as an indication of whether a website should be important. The more incoming links a site has, the greater the probability of a random particle spending its time at that site over an infinite amount of time.

ldog
Thank you gmath
codedhands