views: 81
answers: 2
I'm experimenting a bit with textual comparison/basic plagiarism detection, and want to try this on a website-to-website basis. However, I'm a bit stuck on finding a proper way to process the text.

How would you process and compare the content of two websites for plagiarism?

I'm thinking something like this pseudo-code:

// extract text
foreach website in websites
  crawl website - store structure so pages are only scanned once
  extract text blocks from all pages - store these in a list

// compare
foreach text in website1.textlist
  compare with all text in website2.textlist

I realize that this solution could very quickly accumulate a lot of data, so it might only be possible to make it work with very small websites.

I haven't decided on the actual text comparison algorithm yet, but right now I'm more interested in getting the actual process algorithm working first.

I'm thinking it would be a good idea to extract all text as individual text pieces (from paragraphs, tables, headers and so on), as text can move around on pages.

I'm implementing this in C# (maybe ASP.NET).
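To make the pseudo-code a bit more concrete, here is a rough C# sketch of the pipeline. The crawl is reduced to a fixed list of URLs, the regex-based tag stripping is deliberately naive (a real HTML parser would do better), and the equality check is only a placeholder for whatever comparison algorithm I end up with:

using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

class PlagiarismScan
{
    // Naive text-block extraction: download the page, strip tags with a regex,
    // and split on longer runs of whitespace. A real implementation would use
    // an HTML parser and a crawler that follows links and skips visited pages.
    static List<string> ExtractTextBlocks(string url)
    {
        string html;
        using (var client = new WebClient())
            html = client.DownloadString(url);

        string text = Regex.Replace(html, "<[^>]+>", " ");
        var blocks = new List<string>();
        foreach (string block in Regex.Split(text, @"\s{2,}"))
        {
            string trimmed = block.Trim();
            if (trimmed.Length > 40)            // skip tiny fragments like "Home"
                blocks.Add(trimmed);
        }
        return blocks;
    }

    static void Main()
    {
        // Example URLs; in practice these would come from the crawler.
        List<string> site1 = ExtractTextBlocks("http://www.example.com/");
        List<string> site2 = ExtractTextBlocks("http://www.example.org/");

        foreach (string a in site1)
            foreach (string b in site2)
                if (a == b)                     // placeholder for a real similarity measure
                    Console.WriteLine("Possible copy: " + a);
    }
}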

I'm very interested in any input or advice you might have, so please shoot! :)

+1  A: 

You're probably going to be more interested in fragment detection. For example, lots of pages will have the word "home" on them and you don't care. But it's fairly unlikely that very many pages will have exactly the same words across the entire page. So you probably want to compare and report on pages that have exact matches of length 4, 5, 6, 7, 8, etc. words, with counts for each length. Assign a score, weight the match lengths, and if you exceed your "magic number", report the suspected xeroxers.
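Something along these lines, as a rough sketch of that idea (the 4-8 word range, the weighting and any threshold are placeholders to tune):

using System;
using System.Collections.Generic;

static class FragmentMatch
{
    // All runs of exactly n consecutive words from the text, lower-cased.
    static HashSet<string> WordRuns(string text, int n)
    {
        string[] words = text.Split(new[] { ' ', '\t', '\r', '\n' },
                                    StringSplitOptions.RemoveEmptyEntries);
        var runs = new HashSet<string>();
        for (int i = 0; i + n <= words.Length; i++)
            runs.Add(string.Join(" ", words, i, n).ToLowerInvariant());
        return runs;
    }

    // Count shared word runs of length 4..8 and weight longer runs more heavily.
    public static double Score(string pageA, string pageB)
    {
        double score = 0;
        for (int n = 4; n <= 8; n++)
        {
            HashSet<string> shared = WordRuns(pageA, n);
            shared.IntersectWith(WordRuns(pageB, n));
            score += shared.Count * n;
        }
        return score;
    }
}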

For C#, you can use the WebBrowser control to get a page and fairly easily get its text. Sorry, no code sample handy to copy/paste, but MSDN usually has pretty good samples.
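A minimal sketch of the fetching step using HttpWebRequest, the route suggested in the comment below (the user-agent string is only an example):

using System;
using System.IO;
using System.Net;

class PageFetcher
{
    // Download a page's raw HTML; reducing it to plain text is a separate step.
    public static string GetHtml(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(new Uri(url));
        request.UserAgent = "PlagiarismScanner/0.1";   // identify your crawler politely
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }
}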

No Refunds No Returns
+1 Thanks for your comments and advice. Your word-count solution could be a lighter alternative in the case of massive amounts of text. I think you mean HttpWebRequest.Create(Uri) for creating a web request, but that part is already working pretty well.
Sune Rievers
As text tends to move around (in my experience at least), I will base the comparison on text fragments instead of whole pages.
Sune Rievers
+2  A: 

My approach to this problem would be to google for specific, fairly unique blocks of text whose copyright you are trying to protect.

Having said that, if you want to build your own solution, here are some comments:

  • Respect robots.txt. If they have marked the site as do-not-crawl, chances are they are not trying to profit from your content anyway.
  • You will need to refresh the site structure you have stored from time to time, as websites change.
  • You will need to properly separate text from HTML tags and JavaScript.
  • You will essentially need to do a full text search in the entire text of the page (with tags/script removed) for the text you wish to protect. There are good, published algorithms for this; a simple stand-in is sketched below.
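As a very simple stand-in for those published algorithms (Rabin-Karp or Boyer-Moore would be the classic choices), assuming the page text has already had tags and script stripped:

using System;
using System.Text.RegularExpressions;

static class TextSearch
{
    // Collapse whitespace and case so minor formatting differences don't hide a match.
    static string Normalize(string s)
    {
        return Regex.Replace(s, @"\s+", " ").Trim().ToLowerInvariant();
    }

    // True if the protected block appears verbatim (after normalization) in the page text.
    public static bool ContainsBlock(string pageText, string protectedBlock)
    {
        return Normalize(pageText).IndexOf(Normalize(protectedBlock),
                                           StringComparison.Ordinal) >= 0;
    }
}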
Eric J.
+1 Thanks for the advice. I will respect robots.txt (or at least have an option to turn that on and off). I'm using the HtmlAgilityPack to clean and parse the HTML and to extract text from the tags, which makes that part very easy. For the actual comparison I'm thinking more along the lines of Normalized Compression Distance, though I haven't thoroughly examined the algorithm yet.
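For reference, a rough sketch of what I have in mind, using GZip as the compressor (the formula is the standard NCD definition; the choice of GZip is just an assumption):

using System;
using System.IO;
using System.IO.Compression;
using System.Text;

static class Ncd
{
    // Compressed size of a string, with GZip standing in for the compressor C().
    static int CompressedSize(string s)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(s);
        using (var buffer = new MemoryStream())
        {
            using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                gzip.Write(bytes, 0, bytes.Length);
            return buffer.ToArray().Length;
        }
    }

    // NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y));
    // values near 0 mean the two texts compress almost as one, i.e. are very similar.
    public static double Distance(string x, string y)
    {
        int cx = CompressedSize(x);
        int cy = CompressedSize(y);
        int cxy = CompressedSize(x + y);
        return (cxy - Math.Min(cx, cy)) / (double)Math.Max(cx, cy);
    }
}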
Sune Rievers
It's not really the answer I was looking for, but since you've gotten the most votes and your answer is helpful, I will accept it as the answer. Thanks for your comments :)
Sune Rievers