Hi All,

We have a requirement to cache web pages as accurately as possible, so that we can go back and view a version of a page at any previous point in time. We'd like to be able to view the page as it really was - with the right CSS, JavaScript, images, etc.

Are there any open-source libraries (any language) that will fetch a page, download all externally-linked assets and rewrite the links so that they point to the locally-cached assets?

Or is this a case of rolling our own?

Thanks

Edit: I realise that without DOM rendering we won't capture dynamically generated links etc., so this is never going to be 100% accurate. However, for the time being we can probably live without that.
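
For the record, the roll-our-own approach we have in mind looks roughly like the sketch below (Python, assuming the requests and beautifulsoup4 packages; the hash-based asset naming is just illustrative):

    # Minimal sketch of a roll-your-own snapshot: fetch a page, download the
    # assets it links to, and rewrite the references to point at local copies.
    # Assumes the requests and beautifulsoup4 packages; the hash-based file
    # naming is illustrative only.
    import hashlib
    import os
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def snapshot(url, out_dir):
        os.makedirs(out_dir, exist_ok=True)
        soup = BeautifulSoup(requests.get(url).text, "html.parser")

        # Tags whose attributes reference assets we want to capture.
        for tag_name, attr in [("img", "src"), ("script", "src"), ("link", "href")]:
            for tag in soup.find_all(tag_name):
                ref = tag.get(attr)
                if not ref:
                    continue
                asset_url = urljoin(url, ref)
                local_name = hashlib.sha1(asset_url.encode()).hexdigest()
                try:
                    data = requests.get(asset_url).content
                except requests.RequestException:
                    continue  # keep the original link if the asset is unreachable
                with open(os.path.join(out_dir, local_name), "wb") as f:
                    f.write(data)
                tag[attr] = local_name  # rewrite the link to the cached copy

        with open(os.path.join(out_dir, "index.html"), "w", encoding="utf-8") as f:
            f.write(str(soup))

    snapshot("http://example.com/", "/tmp/snapshot")

Assets referenced from inside CSS (url(...)) and anything requested by JavaScript at runtime would still need extra handling, which is the limitation mentioned above.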

+1  A: 

Why not apply a base href to the pages, replace internal absolute links with root-relative ones, and keep the structure?
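
A rough sketch of the idea in Python with BeautifulSoup (the site and archive URLs are placeholders):

    # Rough illustration of the base-href idea: add a <base> tag pointing at
    # the archived copy and turn internal absolute links into relative ones so
    # they resolve against it. Assumes BeautifulSoup; the site and archive
    # URLs are placeholders.
    from bs4 import BeautifulSoup

    def rebase(html, site_root, archive_root):
        soup = BeautifulSoup(html, "html.parser")

        # Relative URLs in the page will now resolve against the archive copy.
        if soup.head:
            soup.head.insert(0, soup.new_tag("base", href=archive_root))

        # Strip the live host from internal absolute links, leaving a path
        # relative to the <base> above.
        for a in soup.find_all("a", href=True):
            if a["href"].startswith(site_root):
                a["href"] = a["href"][len(site_root):].lstrip("/")
        return str(soup)

    print(rebase('<html><head></head><body><a href="http://www.example.com/x">x</a></body></html>',
                 "http://www.example.com", "http://archive.local/site1/"))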

Joe Hopfgartner
+1  A: 

I suggest HTTrack: http://www.httrack.com/

Because the software is free, open source, and has a command-line interface, I believe you can integrate or customize it to your needs smoothly.

Supported platforms: Windows 2000/XP/Vista/Seven and Linux/Unix/BSD (Debian, Ubuntu, Gentoo, RPM packages for Mandriva & RedHat, OSX via MacPorts, Fedora and FreeBSD i386 packages).

See the description:

"HTTrack allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer.

It arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online.

It can also update an existing mirrored site, and resume interrupted downloads."
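
For example, here is a sketch of driving the httrack command line from Python (the URL, output directory and domain filter are placeholders; only basic documented options are used):

    # Sketch of driving the HTTrack command line from Python to mirror a site
    # into a local directory. The URL, output path and domain filter below are
    # placeholders and would need to match the real site.
    import subprocess

    def mirror(url, out_dir):
        subprocess.run(
            [
                "httrack", url,
                "-O", out_dir,       # output (mirror) directory
                "+*.example.com/*",  # filter: stay within the site's domain
                "-v",                # verbose output
            ],
            check=True,
        )

    mirror("http://www.example.com/", "/tmp/mirror")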

Paulocoghi
A: 

You could use the MHT/MHTML format to save the page as a single, unified document.

Wikipedia description: http://en.wikipedia.org/wiki/MHTML

A quick search will reveal some sources of code to do this.
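
Since MHTML is essentially a MIME multipart/related document, you can also assemble one with standard MIME tooling; here's a rough Python sketch using the standard library's email package (the HTML, image bytes and Content-Location URLs are placeholders):

    # Rough sketch of assembling an MHTML file (a MIME multipart/related
    # document) from a page and one asset, using only the standard library.
    # The HTML, image bytes and Content-Location URLs are placeholders.
    from email.mime.image import MIMEImage
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    mhtml = MIMEMultipart("related")
    mhtml["Subject"] = "Snapshot of http://www.example.com/"

    html_part = MIMEText("<html><body><img src='logo.png'></body></html>", "html")
    html_part.add_header("Content-Location", "http://www.example.com/")
    mhtml.attach(html_part)

    img_part = MIMEImage(b"\x89PNG...", _subtype="png")  # real image bytes go here
    img_part.add_header("Content-Location", "http://www.example.com/logo.png")
    mhtml.attach(img_part)

    with open("snapshot.mht", "w") as f:
        f.write(mhtml.as_string())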

Mark Schultheiss