I am trying to extract all links that have /thumbs/ in them, inside double quotes. Actually, I only need to use the image's src. I don't know if the images will end with jpg or if there will be case-sensitivity problems, etc. I really only care about the full link.

m = Regex.Match(page, @"""(.+?/thumbs/.+?)""");
//...
var thumbUrl = m.Groups[1].Value;

My full code

    var page = DownloadWebPage(url);

    // Read the element count out of the "Elements (N)" text on the page.
    var reg = new Regex(@"Elements\s+\((.*)\)", RegexOptions.Multiline);
    var m = reg.Match(page);
    var szEleCount = m.Groups[1].Value;
    int eleCount = int.Parse(szEleCount);

    // Walk every quoted string that contains /thumbs/.
    m = Regex.Match(page, @"""(.+?/thumbs/.+?)""");
    while (m.Success)
    {
        var thumbUrl = m.Groups[1].Value;
        // I break here to see the problem
        m = m.NextMatch();
    }

thumbUrl looks like

center\"> ... lot of text, no /thumbs/ ... src=\"http://images.fdhkdhfkd.com/thumbs/dfljdkl/22350.jpg

+1  A: 

The problem is that `.+?` also consumes `"` characters, so the match is not confined to a single src attribute. Use this instead:

m = Regex.Match(page, @"""([^""]+/thumbs/[^""]+)""");
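
For example, as a drop-in for the Regex.Match call in the question (RegexOptions.IgnoreCase is an assumption on my part, added only because the question mentions possible case-sensitivity problems):

    // Sketch: the same call as in the question, with the fixed pattern.
    // IgnoreCase is assumed because the question worries about case sensitivity.
    m = Regex.Match(page, @"""([^""]+/thumbs/[^""]+)""", RegexOptions.IgnoreCase);
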
reko_t
I was tempted to mark yours as correct.
acidzombie24
so very tempted...
acidzombie24
+3  A: 

Non-greedy regular expressions can be slow because the engine has to do a lot of backtracking.

This one uses only greedy expressions:

@"""([^""]*/thumbs/[^""]*)"""

Instead of matching the smallest possible amount of anything, it matches as many non-double-quote characters as it can.
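
As a sketch (assuming the page text is already in a string called page, as in the question), the greedy pattern can be used with Regex.Matches to collect every thumbnail URL in one pass:

    // Sketch only: gathers all /thumbs/ URLs with the greedy pattern above.
    // Requires using System.Linq; and using System.Text.RegularExpressions;
    var thumbUrls = Regex.Matches(page, @"""([^""]*/thumbs/[^""]*)""")
                         .Cast<Match>()
                         .Select(match => match.Groups[1].Value)
                         .ToList();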

Andomar
+3  A: 

If you're parsing (X)HTML, consider using a proper parser.

See: http://stackoverflow.com/questions/56107/what-is-the-best-way-to-parse-html-in-c for some C# examples of how to do so.
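
For instance, with the Html Agility Pack (one of the libraries suggested in that question), pulling the thumbnail src attributes could look roughly like the sketch below; the /thumbs/ filter and the page variable are carried over from the question, and the snippet is untested.

    // Rough sketch using Html Agility Pack; not tested against the OP's page.
    var doc = new HtmlAgilityPack.HtmlDocument();
    doc.LoadHtml(page);
    var imgNodes = doc.DocumentNode.SelectNodes("//img[contains(@src, '/thumbs/')]");
    if (imgNodes != null) // SelectNodes returns null when nothing matches
    {
        foreach (var img in imgNodes)
        {
            var thumbUrl = img.GetAttributeValue("src", string.Empty);
            // use thumbUrl ...
        }
    }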

Bart Kiers
A: 

Usually, when you reuse a regular expression, you make it a static field and specify the RegexOptions.Compiled option:

static Regex template = new Regex(@"""(.+?/thumbs/.+?)""", RegexOptions.Compiled | RegexOptions.Multiline);
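
In context, that might look something like the sketch below (the class and method names are only illustrative):

    // Illustrative sketch: the compiled regex is built once and reused.
    // Requires using System.Collections.Generic; and using System.Text.RegularExpressions;
    class ThumbScraper
    {
        static readonly Regex template = new Regex(
            @"""(.+?/thumbs/.+?)""",
            RegexOptions.Compiled | RegexOptions.Multiline);

        public static IEnumerable<string> GetThumbUrls(string page)
        {
            for (var m = template.Match(page); m.Success; m = m.NextMatch())
                yield return m.Groups[1].Value;
        }
    }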
Konstantin Spirin
A certain number of Regex objects get cached automatically, so storing one in a field won't necessarily have any effect on performance. Also, `Compiled` regexes may be faster, but they're much more expensive to create; whether you should use that option depends on how you're using the regex. http://blogs.msdn.com/bclteam/archive/2004/11/12/256783.aspx
Alan Moore
+3  A: 

The way a reluctant (non-greedy) quantifier works is, once it starts to match, it stops at the first opportunity. What you're trying to do is match the minimum amount of text that meets your criteria, which isn't the same thing; you still have to make sure it doesn't start matching before you want it to. As others have suggested, you can do that by replacing the .+? in your regex with something that doesn't match quotes, like [^""]+.

But that still leaves you with a performance problem. In your example, the regex starts matching when it sees the quote in center">; when it reaches the quote at src=" (assuming you've changed the .+? to [^""]+) it will abort that match attempt and move on. The next attempt, starting at the quote in src=", will succeed. So you're getting the right result now, but you're still wasting a lot of time on that first, failed match attempt.

The key to writing fast regexes is to make sure that, if a match attempt is going to fail, it fails as quickly as possible. For example, I think it's safe to assume you don't want any angle brackets between the " and /thumbs/, so add them to the set of characters you don't want to match: [^""<>]+. Now, any match attempt starting at the quote in center"> will be aborted at the very next position.

There are other things you can do to further optimize the regex, involving atomic groups and negative lookaheads, but this will probably be as fast as you need:

@"""([^""<>]+/thumbs/[^""<>]+)"""
Alan Moore