views: 222

answers: 3

Hello there,

I just want to know your opinion about how to fingerprint/verify HTML/link structure.

The problem I want to solve is: fingerprint, for example, 10 different sites (HTML pages). After some time I want to be able to verify them: if a site has changed or its links have changed, verification fails; otherwise it succeeds. My basic idea is to analyze the link structure by splitting it up in some way, building some kind of tree, and generating some kind of code from that tree. But I'm still in the brainstorming stage, where I need to discuss this with someone and hear other ideas.

So any ideas, algorithms, and suggestions would be useful.

+1  A: 

You could always hash the raw HTML of the site and compare the hashes. I believe sites can maintain a "last edited" date, but I'm not sure whether it is always updated.

Edit: My mistake, this is simply a way to compare the website to a previous version, but not really fingerprint it in the way you mean.
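To make this concrete, here is a minimal sketch in Python (standard library only; the URL is a placeholder) of hashing a page's raw HTML and comparing the digest later:

    import hashlib
    import urllib.request

    def page_fingerprint(url: str) -> str:
        """Fetch the raw HTML and return its SHA-256 hex digest."""
        with urllib.request.urlopen(url) as response:
            html = response.read()
        return hashlib.sha256(html).hexdigest()

    # Store the digest now; any byte-level change to the page flips it later.
    baseline = page_fingerprint("http://example.com/")
    # ... some time later ...
    changed = page_fingerprint("http://example.com/") != baseline

Note that this is sensitive to every byte of the page, so any dynamic content at all (dates, ads, session tokens) will cause a mismatch.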

Matt Boehm
+1  A: 

Just throwing this out there:

Why don't you crawl the site, putting all the links into an XML document that represents a map of the site?

Create an MD5 checksum on that file and store it. Then, any time in the future you could recrawl, recreate the XML, redo the checksum and compare it to your earlier checksum.

If they don't match, the link structure has changed - although you won't necessarily know where.
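A rough sketch of this in Python, using only the standard library and covering just a single page rather than a full crawl (a real crawler would follow links across the whole site; the URL below is a placeholder):

    import hashlib
    import urllib.request
    import xml.etree.ElementTree as ET
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        """Collect the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def link_map_checksum(url: str) -> str:
        """Fetch a page, dump its links into an XML 'site map', return its MD5."""
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")

        collector = LinkCollector()
        collector.feed(html)

        root = ET.Element("sitemap", url=url)
        for href in collector.links:  # kept in document order, so reordering also counts as a change
            ET.SubElement(root, "link", href=href)

        return hashlib.md5(ET.tostring(root)).hexdigest()

    # Store the checksum now; recompute later and compare to detect link changes.

Because only the links go into the XML, ordinary text edits on the page won't break verification, while added, removed, or reordered links will.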

nikmd23
A: 

Whatever data or structure you intend to hash, summarize, or otherwise fingerprint, be sure to account for the various forms of noise found on many of the websites "out there" (see the sketch after this list).

Examples of such noise or random content are:

  • Company stock-value ticker
  • Weather conditions for whatever city they are in
  • Several pages have a current ("now") date-time somewhere in their footers or headers
  • Advertisement content (more and more, these are made to look native to the site to defeat ad blockers in web browsers)
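A minimal sketch in Python of stripping such volatile content before hashing; the patterns below (scripts, HTML comments, ISO-style date-times) are illustrative guesses and would need tuning for each site:

    import re

    NOISE_PATTERNS = [
        re.compile(r"<script\b.*?</script>", re.DOTALL | re.IGNORECASE),  # inline scripts (tickers, ads)
        re.compile(r"<!--.*?-->", re.DOTALL),                             # comments, often cache timestamps
        re.compile(r"\b\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}(:\d{2})?\b"),     # date-times in headers/footers
    ]

    def normalize_html(html: str) -> str:
        """Strip volatile content so only the stable part of the page gets hashed."""
        for pattern in NOISE_PATTERNS:
            html = pattern.sub("", html)
        # Collapse whitespace so reformatting alone doesn't change the fingerprint.
        return re.sub(r"\s+", " ", html).strip()

Run the page through something like normalize_html() before computing whatever hash or link-structure fingerprint you settle on.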
mjv