I am dumping my MySQL database with the script below. My site isn't used often, so the database is unchanged for days; the only difference between dumps is the last line, which is the dump date. Each dump is about 400 kB uncompressed and 107 kB as a .sql.gz file. I decided to compress them as a solid archive with 7z and with RAR, and in both cases I get about 950 kB for 32 files. I feel I should get better compression. How?

#!/bin/bash
# Dump all MySQL databases into a per-year/per-month directory,
# one gzipped dump per run, named by date and hour.
cd /home/mybackup/mysqldumps || exit 1
y=$(date +%Y)
m=$(date +%m)
d=$(date +%d)
h=$(date +%H)
mkdir -p "$y/$m"    # -p: don't fail if the directory already exists
cd "$y/$m" || exit 1
mysqldump --all-databases --single-transaction --flush-logs | gzip > "$y $m $d $h.sql.gz"
chmod 400 "$y $m $d $h.sql.gz"    # owner read-only
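
Roughly, the solid-archive step looks like this (a sketch only; the archive names here are made up and the exact options I used may have differed):

7z a -ms=on dumps.7z /home/mybackup/mysqldumps    # -ms=on: solid mode (the default for .7z)
rar a -s dumps.rar /home/mybackup/mysqldumps      # -s: create a solid RAR archive
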
A: 

In this day and age 950 kB is a tiny amount of storage space. With a simple grandfather-father-son backup rotation you're looking at about 22 MB for a year's worth of backups, or five or six MP3 files, for comparison.

Even if you are on dial-up (or GPRS/1xRTT in a pinch), this is still a manageable amount of data to transfer.

julesallen
The size isn't important. It's just numbers to show it isn't compressing well. It just feels weird.
acidzombie24
The problem isn't size. The problem is "I'm doing it wrong".
acidzombie24
Isn't that why we're all reading Stack Overflow? I'm a big fan of Amazon EC2, and my backup scheme is to take entire file system snapshots, which cost a few cents a month to keep around. Lazy and highly effective. If you don't want to (or can't) switch infrastructure, look at something like https://www.jungledisk.com/business/server/features/ which uses S3 for storage. Very nice.
julesallen
A: 

Uncompress all the .sql.gz files back to plain .sql files, then compress the folder. The result was 88 kB, versus 950 kB when archiving the .sql.gz files. That's a huge saving. Already-gzipped files look like random data to the outer archiver, so it can't exploit the fact that the dumps are nearly identical; with plain .sql files, the solid archive can.
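
A minimal sketch of that, assuming the dumps are under /home/mybackup/mysqldumps as in the question and that it's fine to decompress them in place:

cd /home/mybackup
gunzip -r mysqldumps                     # recursively turn every .sql.gz back into a plain .sql
7z a -ms=on mysqldumps.7z mysqldumps     # solid archive: near-identical dumps compress together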

acidzombie24