When I dump a Sybase database, it doesn't seem to matter whether there's data in the tables or not: the file size is the same. I've been told that this is down to the fact that my dump file is binary and not logical, so the size of the dump file is based on the allocated size of the database. I know that Oracle can use logical dump files, but can I get Sybase to do something similar, or are there any other sneaky ways of getting the dump file size down?

A: 

Since somewhere around version 12, you have been able to perform compressed dumps in ASE.

The syntax is: dump database database_name to file_name [with compression=compress_level]

compress_level ranges from 0 to 9: 0 is no compression and 9 is the most. The more you compress, the higher the CPU usage while the dump is running. You just need to perform a little testing to find an appropriate balance of size versus performance.

No special commands are needed to load the dump.
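For example, a dump and its matching load might look like this (the database name pubs2 and the dump path are placeholders, not from the thread):

dump database pubs2 to "/sybase_dumps/pubs2.dmp" with compression = 5

load database pubs2 from "/sybase_dumps/pubs2.dmp"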

AdamH
This works nicely, with the bonus that truncating tables I don't need in the dump now has an effect, as compressing the dump squeezes out the space previously taken up by the empty tables! Thanks Adam!
ninesided
+2  A: 

If you're already using compress_level 9 and still want more compression, it's possible to re-compress the file with bzip2.

If you simply bzip2 the compressed file, you will get ~10% improvement. If you uncompress and re-compress, you may see improvements in the 30% range. Bear in mind, though, that you'll have to uncompress and/or re-gzip the file again in order for Sybase to load it.

gunzip -c pubs_1.dmp | bzip2 > pubs.dmp.bz2
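To get the file back into a form Sybase can load, the same pipeline runs the other way (file names as in the example above):

bunzip2 -c pubs.dmp.bz2 | gzip > pubs_1.dmp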
brianegge
A: 

Whilst the link above ("The syntax is") is obviously correct, as it points to the Sybase documentation, the comments are misleading.

The syntax in simple format is:

dump database {database_name} to "compress::{#compression_level}::{stripe_device}"
go

e.g. dump database mydb to "compress::1::/sybase_dumps/mydb_17022009"

In terms of loading the database dump, the compress:: option needs to be given again.

e.g. load database mydb from "compress::/sybase_dumps/mydb_17022009"

Notice that neither the compression level nor the extra separating colons that follow it are required on the load.

The testing to find your balance is a good point; remember that the higher you go, the longer you should expect the dump to take. I find 1-3 is more than ample; I have never gone above 6, as the diminishing returns are not worth it.
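If you want to compare levels empirically, a quick-and-dirty shell loop along these lines works (the server and database names are placeholders; isql will prompt for the password since -P is omitted):

for lvl in 1 3 6; do
  printf 'dump database mydb to "compress::%s::/sybase_dumps/mydb_test_%s"\ngo\n' $lvl $lvl > /tmp/dump_test.sql
  time isql -Usa -SMYSERVER -i /tmp/dump_test.sql   # time the dump at this level
  ls -lh /sybase_dumps/mydb_test_$lvl               # compare the resulting file sizes
done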

If I were desperate, I would bzip2 the file as described above {point earned}. If this were a production host, I would ship the file to another host and do it there. The resource hit can be considerable.
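Shipping it off-box is as simple as (backuphost is a placeholder):

scp /sybase_dumps/mydb_17022009 backuphost:/tmp/ && ssh backuphost bzip2 /tmp/mydb_17022009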

Vincent
Both methods work. The documentation indicates that "with compression=compress_level" is the newer and preferred syntax: "The older compress:: option works only with local archives; you cannot use the servername option. To compress to a remote machine, you must use the preferred compress=compress_level option."
AdamH