I have a large number of small files with sequential filenames, and I want to create a single file out of them. What is the fastest way to do this?
e.g.
1.tgz.1 1.tgz.2 1.tgz.3 =========> 1.tgz
You could concatenate the files from the shell.
In Windows (/b for binary mode):
copy /b 1.tgz.1 + 1.tgz.2 + 1.tgz.3 1.tgz
In Unix/Linux:
cat 1.tgz.1 1.tgz.2 1.tgz.3 > 1.tgz
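If you want to sanity-check the reassembled archive before deleting the pieces, one quick test (assuming the original really was a gzipped tarball, as the .tgz name suggests) is:

tar -tzf 1.tgz > /dev/null && echo OK

tar -t only lists the contents, so this reads the whole archive without extracting anything.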
This is bash (your shell may vary):
for n in *.tgz.* ; do cat "$n" >> "${n/tgz.*/tgz}" ; done
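One caveat: the glob expands lexically, so with ten or more pieces 1.tgz.10 is appended before 1.tgz.2. A minimal sketch of one way around that, assuming GNU sort (its -V flag does version/numeric-aware ordering):

printf '%s\n' 1.tgz.* | sort -V | xargs cat > 1.tgz

This is safe here because the generated names contain no whitespace; xargs would need more care otherwise.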
If it's a large number of small files, you don't want to be messing around with a huge number of arguments.
Since most UNIX shells expand wildcards alphabetically (so 1.tgz.10 would come before 1.tgz.2), you should use:
cat 1.tgz.? 1.tgz.?? 1.tgz.??? >1.tgz
That's assuming there are between 100 and 999 files inclusive; adjust the arguments to handle more or fewer (e.g., add 1.tgz.???? if there are between 1,000 and 9,999 inclusive). You're not going to get better performance than that, since the bottleneck is disk speed, which is always going to be slower than the code running on the CPU.
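If you want to convince yourself of the ordering problem, here's a quick demo you can run in a scratch directory with twelve hypothetical pieces:

touch 1.tgz.{1..12}     # create twelve empty dummy pieces
echo 1.tgz.*            # lexical order: 1.tgz.1 1.tgz.10 1.tgz.11 1.tgz.12 1.tgz.2 ...
echo 1.tgz.? 1.tgz.??   # grouped by digit count: 1.tgz.1 ... 1.tgz.9 1.tgz.10 1.tgz.11 1.tgz.12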
The only other possibility I can think of is nice to bump up the priority (see man nice for details). This will get the process more CPU time, though raising priority takes a negative niceness value, which requires root. And again, if you're bound by disk I/O, that won't help much.
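For completeness, a sketch of both knobs; ionice is not mentioned above but is the I/O-side counterpart to nice on Linux (part of util-linux), so treat these as optional tuning rather than a guaranteed speedup:

sudo nice -n -10 cat 1.tgz.? 1.tgz.?? 1.tgz.??? > 1.tgz     # more CPU time (needs root)
ionice -c 2 -n 0 cat 1.tgz.? 1.tgz.?? 1.tgz.??? > 1.tgz     # higher best-effort I/O priority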