I have a couple of text files (A.txt and B.txt) which look like this (each might have ~10000 rows):
processa,id1=123,id2=5321
processa,id1=432,id2=3721
processa,id1=3,id2=521
processb,id1=9822,id2=521
processa,id1=213,id2=1
processc,id1=822,id2=521
I need to check that every row in A.txt is also present in B.txt (B.txt might have more rows; that is okay).
The rows can be in any order in the two files, so my plan is to sort both files in some consistent order in O(n log n) and then match each line of A.txt against the next lines of B.txt in a single O(n) pass. I could build a hash instead, but the files are big and this comparison happens only once before the files are regenerated, so I don't think that is a good idea.
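To sketch the matching step concretely (the names @a and @b are just placeholders for the chomped, sorted rows of A.txt and B.txt), I was thinking of something like:

# assumes @a and @b already hold the sorted, chomped rows of the two files
my $j = 0;
for my $row (@a) {
    $j++ while $j < @b && $b[$j] lt $row;    # advance through B until we reach or pass $row
    print "missing from B.txt: $row\n" if $j >= @b || $b[$j] ne $row;
}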
What is the best way to sort the files in Perl? Any ordering would do; it just needs to be the same ordering for both files.
For example, in dictionary ordering, this would be
processa,id1=123,id2=5321
processa,id1=213,id2=1
processa,id1=3,id2=521
processa,id1=432,id2=3721
processb,id1=9822,id2=521
processc,id1=822,id2=521
As I mentioned before, any ordering is fine, as long as Perl can do it quickly.
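If I understand correctly, Perl's built-in sort compares strings lexicographically by default, which would give exactly the dictionary ordering above; a minimal sketch, with @rows standing in for the lines of one file:

my @sorted = sort @rows;    # default comparison is string-wise, i.e. dictionary order
print "$_\n" for @sorted;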
I want to do it from within Perl code, after opening the file like so:
open(FH, '<', 'A.txt') or die "Cannot open A.txt: $!";
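and then (just a sketch of what I have in mind) reading everything into an array and sorting it, doing the same for B.txt:

chomp(my @a = <FH>);    # slurp all rows of A.txt and strip the newlines
close FH;
@a = sort @a;           # default string sort, i.e. dictionary order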
Any comments, ideas, etc. would be helpful.