
Hi all, I have a file like this:

term1 term2
term3 term4
term2 term1
term5 term3
..... .....

What I need to do is remove duplicates regardless of the order of the terms, so that:

term1 term2

and

term2 term1

count as duplicates to me. It is a really long file, so I'm not sure what would be fastest. Does anyone have an idea on how to do this? awk perhaps?
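For concreteness, the general approach (normalize each pair to a canonical order, keep only the first occurrence) can be sketched in Python; the function name and sample data here are made up for illustration, assuming whitespace-separated pairs:

```python
def dedupe_pairs(lines):
    """Keep the first occurrence of each unordered pair of terms."""
    seen = set()
    kept = []
    for line in lines:
        # "term2 term1" and "term1 term2" both map to ("term1", "term2")
        key = tuple(sorted(line.split()))
        if key not in seen:
            seen.add(key)
            kept.append(line)
    return kept

pairs = ["term1 term2", "term3 term4", "term2 term1", "term5 term3"]
print("\n".join(dedupe_pairs(pairs)))
# -> term1 term2
#    term3 term4
#    term5 term3
```

This preserves the original line order, which some of the answers below trade away for speed.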

+1  A: 

Reordering the words on each line and then sorting is easy with Perl.

./scriptbelow.pl < datafile.txt | uniq

#!/usr/bin/perl
use strict;
use warnings;

# Canonicalize every line, then sort so duplicates become adjacent
# for the uniq in the pipeline above.
foreach my $line (sort map { reorder($_) } <>) {
    print $line;
}

# Put the words of one line into sorted order.
sub reorder {
    my ($line) = @_;
    return join(' ', sort { $a cmp $b } split(/\s+/, $line)) . "\n";
}
h0tw1r3
+1  A: 

In Perl:

while (my $t = <>) {
    my @ts = sort split(/\s+/, $t);   # canonical word order
    my $t1 = join(" ", @ts);          # key for the %done hash
    print $t unless exists $done{$t1};
    $done{$t1}++;
}

Or:

perl -ne 'print join(" ", sort split) . "\n";' yourfile | sort | uniq

I'm not sure which one performs better for huge files. The first one builds a potentially huge Perl hash in memory; the second one invokes an external "sort", which can spill to temporary files on disk.

leonbloy
+1  A: 

To preserve original ordering, a simple (but not necessarily fast and/or memory-efficient) solution in awk:

awk '!seen[$1 " " $2] && !seen[$2 " " $1] { seen[$1 " " $2] = 1; print }' file

Edit: Sorting alternative in ruby:

ruby -n -e 'puts $_.split.sort.join(" ")' | sort | uniq
Arkku
+1  A: 

If the file is very, very long, maybe you should consider writing your program in C/C++. I think this would be the fastest solution, especially if you have to re-process the whole file for each line you read. Processing with bash functions gets very slow with big files and repetitive operations.

Then he would spend his time doing low-level stuff: memory manipulation, etc. Tools like awk, Perl and Python are perfectly capable of handling large files.
ghostdog74
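As a rough illustration of that point, a streaming sketch in Python (not from the original thread; the `dedupe_pair_stream` name and sample data are made up) processes one line at a time, so memory grows with the number of unique pairs rather than the file size:

```python
import io

def dedupe_pair_stream(src, dst):
    """Write only the first occurrence of each unordered pair from src to dst.

    Lines are read one at a time, so memory use is proportional to the
    number of distinct pairs, not to the length of the input.
    """
    seen = set()
    for line in src:
        key = tuple(sorted(line.split()))  # order-insensitive key
        if key not in seen:
            seen.add(key)
            dst.write(line)

# With real files this would be:
#   dedupe_pair_stream(open("in.txt"), open("out.txt", "w"))
sample = io.StringIO("term1 term2\nterm3 term4\nterm2 term1\nterm5 term3\n")
out = io.StringIO()
dedupe_pair_stream(sample, out)
print(out.getvalue(), end="")
```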
+1  A: 

If you want to remove both "term1 term2" and "term2 term1":

join -v 1 -v 2 -1 1 -2 2 <(sort input_file) <(sort -k 2 input_file) | uniq
Dennis Williamson
+1  A: 
awk '($2 FS $1 in _) {                    # the reversed pair was seen before
    delete _[$1 FS $2]; delete _[$2 FS $1]
    next
}
{ _[$1 FS $2] }                           # remember this pair
END { for (i in _) print i }' file

output

$ cat file
term1 term2
term3 term4
term2 term1
term5 term3
term3 term5
term6 term7

$ ./shell.sh
term6 term7
term3 term4
ghostdog74
+1  A: 

The way I would do it (if you don't need to keep the pairs together and just want the unique terms) is:

sed 's/ /\n/g' test.txt | sort -u

Here's what the output looks like (ignore my funky prompt):

[~]
==> cat test.txt
term1 term2
term3 term4
term2 term1
term5 term3
[~]
==> sed 's/ /\n/g' test.txt | sort -u
term1
term2
term3
term4
term5
DevNull