views: 59
answers: 2
I'm trying to run a Unix regexp on every log file in a 1.12 GB directory, replacing each matched pattern with ''. A test run on a 4 MB file took about 10 minutes, but worked. Obviously something is degrading performance by several orders of magnitude.

UPDATE: I am noticing that searching for ^(155[0-2]).*$ takes ~7 seconds in a 5.6 MB file with 77 matches. Adding the negative lookahead assertion, (?!, so that the regexp becomes ^(?!155[0-2]).*$, causes it to take at least 5-10 minutes; granted, there will be thousands and thousands of matches.

Should the negative lookahead assertion be extremely detrimental to performance when there are many matches?

A: 

If you can get rid of that .* at the beginning, it would help. What can come before it, just whitespace? If so, try:

^(?!\s*155[0-2][0-9]{4}\s).*$

If it really can be anything, try making it non-greedy:

^(?!.*?155[0-2][0-9]{4}\s).*$

Note: in both examples, I removed the second .*, since the third one would match the same thing as well.
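A quick sketch of how the two suggested patterns behave (the sample log lines below are hypothetical; the thread never shows the actual log format):

```python
import re

# Hypothetical sample lines; the real log format isn't shown in the thread.
lines = [
    "15511234 first entry",   # timestamp at start: lookahead matches, line NOT selected
    "  15501234 indented",    # leading whitespace before the timestamp
    "9990 something else",    # no timestamp: lookahead fails, whole line selected
]

anchored  = re.compile(r"^(?!\s*155[0-2][0-9]{4}\s).*$")
nongreedy = re.compile(r"^(?!.*?155[0-2][0-9]{4}\s).*$")

# On lines where only whitespace precedes the timestamp, both patterns
# select the same lines: those without a 155x timestamp.
for pat in (anchored, nongreedy):
    print([ln for ln in lines if pat.match(ln)])
```

Note the two are only interchangeable when nothing but whitespace can precede the timestamp; the non-greedy version also rejects lines with the timestamp in the middle.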

It helps to think about what the regex engine will actually be doing.

  1. Match ^ (beginning of line). No problem.
  2. Try to match the negative look-ahead assertion.
  3. Grab as much as possible with .*. This means it grabs the entire line.
  4. Is the next character a 1? If not, make the .* give back one character and repeat until it finds a 1 (or exhausts the line).

You can see that this means for every line that doesn't match, it will backtrack through the entire line. Now, if you just use \s* at the beginning, then it will only grab whitespace, not the entire line. If it really can be anything, .*? will be faster on lines that do match the 155 pattern, and about the same on lines that don't. (On lines that don't match, the engine keeps growing the .*? until it has consumed the whole line.)
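The backtracking steps above can be sketched with a rough timing comparison (the numbers are machine-dependent and illustrative, not from the thread; the long line of x's is a stand-in for a log line with no timestamp):

```python
import re
import timeit

# Worst case for a lookahead that starts with .*: a long line containing no
# timestamp at all. The engine grabs the whole line inside the lookahead,
# then gives characters back one at a time looking for "155".
line = "x" * 100_000

dotstar_look    = re.compile(r"^(?!.*155[0-2]).*$")
whitespace_look = re.compile(r"^(?!\s*155[0-2]).*$")

# Both patterns match this line: the inner pattern never matches, so the
# negative assertion holds and the trailing .*$ consumes everything.
assert dotstar_look.match(line) and whitespace_look.match(line)

# But \s* rejects the lookahead immediately (the first char isn't
# whitespace or a 1), while .* walks the entire line first.
t_dot = timeit.timeit(lambda: dotstar_look.match(line), number=100)
t_ws  = timeit.timeit(lambda: whitespace_look.match(line), number=100)
print(f".* lookahead: {t_dot:.4f}s, \\s* lookahead: {t_ws:.4f}s")
```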

Kip
Interesting... in addition to your suggestion, I think I can (based on the format of my log files) remove [0-9]{4}\s from the grouping. Does that help much? Currently trying ^(?!\s*155[0-2]).*$, but it's still extremely slow. :(
John Sullivan
@JohnSullivan: probably not much, since those are fixed lengths. My guess is that the ultimate cause of the slowness is that the program is loading the entire log file into memory, then running the regex, then writing the file. (As opposed to processing one line at a time.)
Kip
It appears the presence of ?!, the negative lookahead operator, is murdering performance (7 seconds to 10 minutes on 5 MB, albeit 70 matches vs. 20,000). Based on that, do you think we should blame the program, not the regex?
John Sullivan
By the way Kip, thanks for all the help and information.
John Sullivan
@JohnSullivan: sure, no problem. unfortunately i don't know quite enough about the internals of regex engines, nor about the particular program you're using, to help you further. sorry :-/
Kip
A: 

Basically: the regex implementation you are using is non-linear and can only handle a subset of the regular-expression language with any efficiency. See my question about a regex implementation that can handle machine-generated regexes efficiently for more background.

If you can select another implementation, you're in luck; back when I was looking, these were scarce. Two reasonable options are RE2 and TRE, but both are libraries, not standalone executables.

Another option you have is to use the Unix utility (grep?) you've used in the past; grep certainly has a Windows port, as do many other Unix utilities.
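With grep, the lookahead can be avoided entirely by inverting the sense of the match: select the timestamped lines instead of deleting the rest (file names below are placeholders; the thread doesn't name the actual logs):

```shell
# Placeholder input; the real logs aren't shown in the thread.
printf '15501 keep\n9999 drop\n15521 keep\n' > input.log

# Keep only the 1550-1552 lines: a plain anchored match, no lookahead.
grep '^155[0-2]' input.log > filtered.log
cat filtered.log

# The inverse (delete those lines instead) would be: grep -v '^155[0-2]'
```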

Eamon Nerbonne