views: 356

answers: 3
tar|gzip is wonderful, except that the resulting files can get too big, and transferring them over the network gets complicated. DOS-era archivers were routinely used to create multipart archives, one per floppy, but gzip doesn't seem to have such an option (presumably because of the Unix streaming philosophy).

So what's the easiest and most robust way of doing this under Linux (obviously with part sizes around 2GB, not 1.44MB)?

+3  A: 

You could split it into pieces with /usr/bin/split (using the "-b" option); see 'man split'.

dfa
And then join them with cat, the tool that was actually created for concatenating, though that's not obvious since most people first meet it as a way to display files on the console.
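
A minimal sketch combining both suggestions (the archive and piece names are placeholders, and the ~2GB piece size is an arbitrary choice):

    # pack and split the compressed stream into ~2GB pieces
    tar czf - /path/to/data | split -b 2000m - backup.tar.gz.part.

    # later: rejoin the pieces (the glob sorts them in order) and unpack
    cat backup.tar.gz.part.* | tar xzf -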
+1  A: 

The typical Unix solution would be "split -b", but this isn't very robust: if any of the pieces is damaged or lost, you lose everything from that point on.

You could use split in conjunction with bzip2; its block format means a damaged archive can often be partially recovered (see the sketch below).
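
A sketch of that approach, assuming GNU split and the bzip2recover tool that ships with bzip2 (file names are placeholders):

    # compress with bzip2, then split into ~2GB pieces
    tar cjf backup.tar.bz2 /path/to/data
    split -b 2000m backup.tar.bz2 backup.tar.bz2.part.

    # if the rejoined archive turns out to be damaged, salvage the intact blocks
    cat backup.tar.bz2.part.* > backup.tar.bz2
    bzip2recover backup.tar.bz2    # writes rec*.bz2 files, one per intact block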

A much safer way would be to use parchive (PAR2 more specifically). It creates additional parity files, RAID-style, that can recover damaged or missing sections of your files. For more info, look at QuickPar, parchive.sf.net, or the par2 package in Linux distributions...
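
A rough sketch with the par2 command-line tool (the 10% redundancy level and file names are arbitrary choices):

    # create recovery data covering all the archive pieces
    par2 create -r10 backup.par2 backup.tar.gz.part.*

    # later: check the pieces, and repair them if some are damaged or missing
    par2 verify backup.par2
    par2 repair backup.par2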

Eric Darchis
+2  A: 

I don't bother using gzip for archiving any more, just for unpacking archives from people who haven't yet been converted :-)

7zip has insanely good compression (although I haven't put it head-to-head in all scenarios), and it also supports creating volumes, which answers your specific question.
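
A quick sketch with the 7z command-line tool (the archive name and the 2GB volume size are placeholders):

    # create a .7z archive split into 2GB volumes (backup.7z.001, .002, ...)
    7z a -v2g backup.7z /path/to/data

    # extract: point 7z at the first volume, it picks up the rest automatically
    7z x backup.7z.001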

paxdiablo