views: 549
answers: 4

Hi,

I have a really big log file (9GB -- I know I need to fix that) on my box. I need to split it into chunks so I can upload it to Amazon S3 for backup. S3 has a max file size of 5GB, so I would like to split this into several chunks and then upload each one.

Here is the catch: I only have 5GB free on my server, so I can't just do a simple Unix split. Here is what I want to do:

  1. Grab the first 4GB of the log file and write it out to a separate file (call it segment1).
  2. Upload segment1 to S3.
  3. rm segment1 to free up space.
  4. Grab the middle 4GB of the log file and upload it to S3. Clean up as before.
  5. Grab the remaining 1GB and upload it to S3.

I can't find the right Unix command to split with an offset. split only does things in equal chunks, and csplit doesn't seem to have what I need either. Any recommendations?

+2  A: 

One solution is to compress it first. A textual log file should easily go from 9G to well below 5G. Then you delete the original, giving you 9G of free space.

Then you pipe that compressed file through split so as not to use up more disk space. What you'll end up with is the compressed file and the three files for upload.

Upload them, then delete them and uncompress the original log.
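
A rough sketch of that (assuming GNU gzip and split, with 4GB chunks just to stay under the S3 limit):

gzip -9 biglogfile                    # gzip removes the original and leaves biglogfile.gz
split --bytes=4G biglogfile.gz part.  # part.aa, part.ab, ... are the chunks to upload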

=====

I was halfway through that solution when I realized what claptrap it was :-)

A better solution is to just count the lines (say 3 million) and use an awk script to extract and send the individual parts:

awk 'NR==1,NR==1000000 {print}' biglogfile > bit1
# send and delete bit1
awk 'NR==1000001,NR==2000000 {print}' biglogfile > bit2
# send and delete bit2
awk 'NR==2000001,NR==3000000 {print}' biglogfile > bit3
# send and delete bit3

And, of course, this can be done with any of the standard text-processing tools in Unix: perl, python, awk, or a head/tail combo. It depends on what you're comfortable with.
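
For example, the same line ranges with a head/tail combo (just a sketch, using the 1,000,000-line boundaries from above):

head -n 1000000 biglogfile > bit1
head -n 2000000 biglogfile | tail -n 1000000 > bit2
tail -n +2000001 biglogfile > bit3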

paxdiablo
I don't know why I didn't think of compressing the file. It went down to 622M and it was small enough to upload.
Ish
That's a good solution, Ish. It looks like I could have just shut my trap after the first sentence :-)
paxdiablo
+1  A: 

First, gzip -9 your log file.

Then, write a small shell script to use dd:

#!/usr/bin/env sh

chunk_size=$((2048 * 1048576))      # 2GB chunks, expressed in bytes
input_file=$1

len=$(stat -c '%s' "$input_file")   # file size in bytes (GNU stat)
chunks=$((len / chunk_size + 1))

i=0
while [ "$i" -lt "$chunks" ]
do
  dd if="$input_file" skip="$i" of="$input_file.part" count=1 bs="$chunk_size"
  scp "$input_file.part" servername:path/"$input_file.part.$i"
  i=$((i + 1))
done

I just plopped this in off the top of my head, so I don't know if it will work without modification, but something very similar to this is what you need.
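
As a usage sketch (assuming the script above is saved as split_upload.sh -- a name I'm making up -- and that you gzipped the log first):

sh split_upload.sh biglogfile.gz

Note that dd allocates a bs-sized buffer, so a 2GB bs means roughly 2GB of memory for the copy; a smaller bs with larger skip/count values (as in the next answer) is lighter on memory.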

Ben Collins
Thanks, I didn't know about the dd command; that's useful.
Ish
+1  A: 

You can use dd. You will need to specify bs (the block/buffer size), skip (the number of blocks to skip), and count (the number of blocks to copy) for each chunk.

So using a buffer size of 10Meg, you would do:

# For the first 4Gig
dd if=myfile.log bs=10M skip=0 count=400 of=part1.logbit
<upload part1.logbit and remove it>
# For the second 4Gig
dd if=myfile.log bs=10M skip=400 count=400 of=part2.logbit
...

You might also benefit from compressing the data you are going to transfer:

dd if=myfile.log bs=10M skip=800 count=400 | gzip -c > part3.logbit.gz

There may be more friendly methods.

dd has some real shortcomings. If you use a small buffer size, it runs much more slowly. But you can only skip/seek in the file by multiples of bs, so if you want to start reading data from a prime offset you're in a real fiddle. Anyway, I digress.
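
That said, if your dd is a recent GNU coreutils build, there are byte-granularity flags (a GNU extension; check your man page) that work around the multiples-of-bs limit, e.g. to start at an arbitrary byte offset:

# skip and count are interpreted as bytes here, not blocks
dd if=myfile.log of=odd.logbit bs=10M iflag=skip_bytes,count_bytes skip=1234567 count=4294967296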

joeytwiddle
A: 

Coreutils split creates equal-sized output sections, except for the last one.

split --bytes=4G bigfile chunks
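
Newer versions of GNU split also have a --filter option that hands each chunk to a command on stdin instead of writing it to disk, which would get around the free-space problem -- e.g. streaming each chunk straight to another box (servername and the backups/ path here are placeholders):

split --bytes=4G --filter='ssh servername "cat > backups/$FILE"' bigfile chunks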
Lachlan Roche
He said in his question that he can't just run split because of limited disk space on his server.
Ben Collins