Is it possible to download the contents of a website (a set of HTML pages) straight to memory, without writing to disk? I have a cluster of machines with 24 GB of RAM each, but I'm limited by a disk quota to several hundred MB. I was thinking of redirecting the output of the wget command, for example, to some kind of in-memory structure without storing the contents on disk. The other option is to write my own version of wget, but maybe there is a simpler way to do it with pipes.

Also, what would be the best way to run this download in parallel? (The cluster has more than 20 nodes.) I can't use the file system in this case.

Thanks

+4  A: 

Are you root? You could just use a tmpfs.
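
For example, on most Linux systems /dev/shm is already a tmpfs mount (see the comments below), so files written there live in RAM rather than on disk. A minimal Perl sketch of downloading straight into it; the URL and filename are placeholders:

use strict;
use warnings;
use LWP::Simple;

# /dev/shm is tmpfs-backed, so this write consumes RAM, not disk quota.
my $url  = 'http://www.example.com/';
my $html = get($url);
die "Couldn't fetch $url\n" unless defined $html;

open my $fh, '>', '/dev/shm/page.html' or die "open failed: $!";
print {$fh} $html;
close $fh or die "close failed: $!";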

Re your edit: you're not CPU-bound, so you don't need to use every machine. You can use xargs -n SOME_NUMBER to split your list of root URLs, assuming there are several.
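
If you would rather drive the parallelism from Perl on a single node, here is a sketch using the CPAN module Parallel::ForkManager (my suggestion, not part of this answer; the URL list is a placeholder):

use strict;
use warnings;
use LWP::Simple;
use Parallel::ForkManager;

my @urls = map { "http://www.example.com/page$_.html" } 1 .. 100;

# Fetch up to 8 pages concurrently; each child handles one URL and
# must process the content itself (children don't share memory with
# the parent).
my $pm = Parallel::ForkManager->new(8);
for my $url (@urls) {
    $pm->start and next;    # fork; the parent moves on to the next URL
    my $content = get($url);
    warn "Failed to fetch $url\n" unless defined $content;
    # ... process $content in memory here ...
    $pm->finish;            # the child exits
}
$pm->wait_all_children;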

But if you are keen on sharing memory, you can set up a memcached cluster and mount it on every machine with memcachefs.

Tobu
It so happens that Linux has a `tmpfs` mounted at `/dev/shm`, accessible for everyone (not just root). Not that you *should* abuse it for this purpose, but... ;-)
ephemient
Depending on your setup and your distro, you may or may not already have a tmpfs mounted at /dev/shm.
davr
@ephemient You are a bad person. (incidentally, /var/lock also works)
Tobu
@davr: Any Linux distribution using Glibc≥2.2 and that wishes to be POSIX.1-compliant has `/dev/shm` mounted; Glibc implements POSIX shared memory (`shm_open`) via files in that directory.
ephemient
+10  A: 

See wget's download options:

‘-O file’

‘--output-document=file’

The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If ‘-’ is used as file, documents will be printed to standard output, disabling link conversion. (Use ‘./-’ to print to a file literally named ‘-’.)

If you want to read the files into a Perl program, you can invoke wget using backticks.
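
For example (the URL is a placeholder; -q silences wget's progress output and -O - sends the document to standard output):

my $url  = 'http://www.example.com/';
my $html = `wget -q -O - '$url'`;    # capture wget's stdout in memory
die "wget failed for $url\n" if $?;  # $? holds wget's exit status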

Depending on what you really need to do, you might be able to get by just using LWP::Simple's get.

use LWP::Simple;
my $content = get("http://www.example.com/");
die "Couldn't get it!" unless defined $content;

Update: I had no idea you could implement your own file system in Perl using FUSE and the Fuse.pm module. See also Fuse::InMemory.

Sinan Ünür
+5  A: 

If you a) are already using Perl, b) want to download HTML, and c) want to parse it, then I always recommend LWP and HTML::TreeBuilder.
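
As a minimal sketch (the URL is a placeholder), fetching a page into memory and listing its links:

use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder;

my $html = get('http://www.example.com/');
die "Couldn't get it!\n" unless defined $html;

# Parse the in-memory string; no temporary file is involved.
my $tree = HTML::TreeBuilder->new_from_content($html);
for my $a ($tree->look_down(_tag => 'a')) {
    my $href = $a->attr('href');
    print "$href\n" if defined $href;
}
$tree->delete;    # free the parse tree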

Leonardo Herrera
+2  A: 

wget <url> -O -

will write the contents of a URL to standard output, which can then be captured in memory.

mobrule