views:

43

answers:

4

I originally asked this question: http://stackoverflow.com/questions/4002115/regular-expression-in-gvim-to-remove-duplicate-domains-from-a-list

However, I realize I may be more likely to find a working solution if I "broaden my scope" in terms of what solution I'm willing to accept.

So, I'll rephrase my question & maybe I'll get a better solution...here goes:

I have a large list of URLs in a .txt file (I'm running Windows Vista 32bit) and I need to remove duplicate DOMAINS (and the entire corresponding URL to each duplicate) while leaving behind the first occurrence of each domain. There are roughly 6,000,000 URLs in this particular file, in the following format (the URLs obviously don't have a space in them, I just had to do that because I don't have enough posts here to post that many "live" URLs):

http://www.exampleurl.com/something.php
http://exampleurl.com/somethingelse.htm  
http://exampleurl2.com/another-url  
http://www.exampleurl2.com/a-url.htm  
http://exampleurl2.com/yet-another-url.html  
http://exampleurl.com/  
http://www.exampleurl3.com/here_is_a_url  
http://www.exampleurl5.com/something

Whatever the solution is, the output file (using the above as the input) should be this:

http://www.exampleurl.com/something.php  
http://exampleurl2.com/another-url  
http://www.exampleurl3.com/here_is_a_url  
http://www.exampleurl5.com/something

You'll notice there are no duplicate domains now, and that the first occurrence of each domain was left behind.

If anybody can help me out, whether it be using regular expressions or some program I'm not aware of, that would be great.

I'll say this though: I have NO experience using anything other than a Windows OS, so a solution entailing something other than a Windows program would take a little "baby stepping" so to speak (if anybody is kind enough to do so).

+1  A: 

For this particular situation I would not use a regex. URLs are a well-defined format and there is an easy-to-use parser for that format in the BCL: the Uri type. It can be used to parse each line and pull out the domain information you seek.

Here is a quick example

public List<string> GetUrlWithUniqueDomain(string file) {
  var list = new List<string>();      // URLs to keep: the first one seen for each domain
  var found = new HashSet<string>();  // hosts we have already seen
  using ( var reader = new StreamReader(file) ) {
    var line = reader.ReadLine();
    while (line != null) {
      Uri uri;
      // keep the line only if it parses as an absolute URI and its host is new
      if ( Uri.TryCreate(line, UriKind.Absolute, out uri) && found.Add(uri.Host) ) {
        list.Add(line);
      }
      line = reader.ReadLine();
    }
  }
  return list;
}
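
Two things worth noting about this approach: HashSet<string>.Add returns false when the host is already in the set, so only the first URL for each domain is kept, and because the file is streamed line by line, memory grows with the number of distinct domains rather than with all six million URLs. One caveat: Uri.Host keeps the leading "www.", so www.exampleurl.com and exampleurl.com count as different domains here; stripping a leading "www." from uri.Host before adding it to the set would match the sample output in the question.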
JaredPar
A: 
  1. Find a unix box if you don't have one, or get cygwin
  2. Use tr to convert '.' to TAB for convenience.
  3. Use sort(1) to sort the lines by the domain-name part. This might be made a little easier by writing an awk program to normalize the www part.

And there you go, you have the dups together. Then perhaps use uniq(1) to find the duplicates; a rough sketch of the idea follows below.

(Extra credit: why can't a regular expression alone do this? Computer science students should think about the pumping lemmas.)
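
To make that concrete, here is a rough sketch (an editor's addition, so treat it as a starting point rather than a tested command): it collapses the steps into the single awk program hinted at in step 3, run under cygwin's gawk, with urls.txt and unique.txt as placeholder file names. Splitting each line on "/" makes the third field the host; a leading "www." is stripped, and a line is printed only the first time its host appears, which also keeps the URLs in their original order.

awk -F/ '{ host = $3; sub(/^www\./, "", host); if (!seen[host]++) print }' urls.txt > unique.txt

If you would rather follow the sort/uniq route literally, you could instead print the normalized host in front of each line, sort on that first column, and let uniq (or sort -u) drop the repeated domains.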

Charlie Martin
+2  A: 

Regular expressions in Python; this is very raw and does not handle subdomains. The basic concept is to use a dictionary: the key is the domain name, and the value is overwritten whenever the key already exists (so the last URL seen for each domain is the one that survives).

import re

pattern = re.compile(r'(http://?)(w*)(\.*)(\w*)(\.)(\w*)')
urlsFile = open("urlsin.txt", "r")
outFile = open("outurls.txt", "w")
urlsDict = {}

for linein in urlsFile:                # iterate the file directly so all 6 million lines aren't held in memory
    match = pattern.search(linein)
    if match is None:                  # skip blank or malformed lines instead of crashing
        continue
    url = match.groups()
    domain = url[3]                    # the fourth group is the domain name without the TLD
    urlsDict[domain] = linein          # a later URL for the same domain overwrites the earlier one

outFile.write("".join(urlsDict.values()))

urlsFile.close()
outFile.close()

You can extend it to filter out subdomains, but the basic idea is there, I think. And 6 million URLs might take quite a while in Python...
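
Here is a rough sketch of that extension (an editor's addition, not part of the original answer): it uses urlparse from the standard library instead of the hand-rolled regex, folds a leading "www." into the bare domain, and keeps the first URL seen for each domain, which is what the question asks for. The import shown is for Python 3; on Python 2 it would be from urlparse import urlparse. The file names simply reuse the ones above.

from urllib.parse import urlparse    # Python 2: from urlparse import urlparse

seen = set()
with open("urlsin.txt") as infile, open("outurls.txt", "w") as outfile:
    for line in infile:
        host = urlparse(line.strip()).netloc.lower()
        if host.startswith("www."):
            host = host[4:]              # treat www.example.com and example.com as one domain
        if host and host not in seen:    # keep only the first URL for each domain
            seen.add(host)
            outfile.write(line)

Folding arbitrary subdomains (blog.example.com versus example.com) correctly would need something like the public-suffix list, which is beyond a quick sketch.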

Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. --Jamie Zawinski, in comp.emacs.xemacs

Soulseekah
+1  A: 

I would use a combination of Perl and regexps. My first version is:

   use warnings;
   use strict;

   my %seen;
   while (<>) {
       if ( m{ // ( .*? ) / }x ) {
           my $dom = $1;
           # print the whole line, but only the first time this domain is seen
           print unless $seen{$dom}++;
           # print "$dom\n";           # (debugging aid: show the extracted domain)
       }
       else {
           print "Unrecognised line: $_";
       }
   }

But this treats www.exampleurl.com and exampleurl.com as different. My 2nd version has

if ( m{ // (?:www\.)? ( .*? ) / }x )

which ignores a leading "www.". You could probably refine the regexp a bit, but that is left to the reader.

Finally, you could comment the regexp a bit (the /x modifier allows this). It rather depends on who is going to be reading it - it could be regarded as too verbose.

           if ( m{
               //          # match double slash
               (?:www\.)?  # ignore www
               (           # start capture
                 .*?       # anything but not greedy
               )           # end capture
               /           # match /
               }x ) {

I use m{} rather than // as the delimiter, to avoid having to escape the slashes (/\/\/).
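
For what it's worth, running this on Windows needs a Perl installation (Strawberry Perl or ActivePerl, for example). Assuming the script is saved as, say, dedup.pl (a placeholder name), perl dedup.pl urls.txt > unique.txt reads the URLs from urls.txt via the <> operator and writes the kept lines to unique.txt.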

justintime