I'm trying to locate files with the same name and delete all of the smaller copies, leaving only the largest. For example: test.jpg = 2 KB, test.jpg = 9 KB, test.jpg = 5 KB. The 2 KB and 5 KB files would get deleted, leaving just the 9 KB one. I've tried a couple of GUI programs for this and they were no help, since you had to delete everything manually after they found the copies (not so good when there are around 400,000 dupes!). Is there a script out there that can do this that anybody knows of?

A: 

This Perl script finds all files under the current directory. It then puts them into a hash where each file's basename is the key and the value is a list of (size, full path) pairs. It then iterates over the basenames, sorts each group of duplicates by size and removes all but the largest one.

The actual /bin/rm is commented out. Make sure this does what you want before you do it for real.

Real perl hackers: if I'm doing something naive/dumb here I'd love to learn about that.

#!/usr/bin/perl -w
use File::Basename;
use strict;

my @files = `/usr/bin/find -type f`;
my %stats;

# each hash key is the simple basename of the files
# each hash value is a 2 element array of (size, fullpath)
foreach my $file (@files)
{
    chomp($file);
    # quote the name so spaces don't break the ls call;
    # ls -s reports the size in blocks, which is enough to rank copies
    my $result = `/bin/ls -s "$file"`;
    chomp($result);
    if($result =~ /^\s*(\d+)\s+(.*)/)
    {   
        my ($basefile, $dir, $suffix) = fileparse($file);
        push(@{$stats{$basefile}}, [$1, $2]);
    }
    else
    {   
        print STDERR "Unexpected ls output: $result\n";
    }
}

foreach my $file (keys %stats)
{
    # sort from smallest to largest
    my @sorted = sort {$a->[0] <=> $b->[0]} @{$stats{$file}};

    # remove the biggest one
    pop(@sorted);

    # for each one that's left remove it (use at your own risk!)
    foreach my $path (@sorted)
    {   
        # system("/bin/rm", $path->[1]);   # uncomment to actually delete
        print "/bin/rm $path->[1]\n";
    }
}
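
If you'd rather not shell out to find and ls at all, the same basename-to-(size, path) grouping can be sketched with the core File::Find module and Perl's built-in -s file-size operator, which is byte-accurate and indifferent to odd characters in file names. This is just a sketch along the same lines, not a drop-in replacement:

#!/usr/bin/perl -w
use strict;
use File::Find;

# sketch only: same idea as above, using core modules and byte sizes
my %stats;

find(sub {
    return unless -f $_;
    # key on the basename, remember (size in bytes, full path)
    push(@{$stats{$_}}, [-s _, $File::Find::name]);
}, '.');

foreach my $name (keys %stats)
{
    # smallest to largest within each group of same-named files
    my @sorted = sort {$a->[0] <=> $b->[0]} @{$stats{$name}};

    # keep the largest, report (or delete) the rest
    pop(@sorted);
    foreach my $entry (@sorted)
    {
        # unlink($entry->[1]);    # uncomment to actually delete
        print "would remove $entry->[1] ($entry->[0] bytes)\n";
    }
}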
nall
+1  A: 

This finds all files and prints their names, sizes and name-with-path. Then it sorts them by name, then by size (descending), then by path. The awk script passes through all but the first (largest) of each name group, and xargs hands them off to echo (remove the echo to make rm take action). This should work on files with spaces in their names, but not on names containing newlines or tabs.

find -type f -printf "%f\t%s\t%p\n" |
    sort -t $'\t' -k 1,1 -k 2,2rn -k 3,3 |
    awk -F'\t' '{if ( $1 == prevfile) printf "%s\0", $3; prevfile = $1}' |
    xargs -0 -I{} echo rm \{\}

In this directory structure (produced by tree -s), all files named "file" would be deleted except for test/dir/dir/file which is the largest at 50 bytes.

test
|-- [    26]  file
`-- [  4096]  dir
    |-- [    34]  file
    `-- [  4096]  dir
        |-- [    50]  file
        `-- [  4096]  test
            `-- [  4096]  dir
                `-- [    20]  file
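
Assuming the pipeline is run (with the echo still in place) from the directory that contains test, the dry run would print something like:

rm ./test/dir/file
rm ./test/file
rm ./test/dir/dir/test/dir/file

that is, every file named "file" except the 50-byte test/dir/dir/file.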
Dennis Williamson
We really need to add --head/--tail or equivalent to sort. It would be both functionally and algorithmically beneficial.
pixelbeat
Yes, I think I agree. It already has `-u` for unique so it would seem to be a good fit.
Dennis Williamson