Let's start with the obligatory request to see what you have tried.
Scalability is a big issue when dealing with log files because they can get very big. The extended format is more compact than the standard log format, but you still have to be aware of the potential for consuming mass quantities of RAM if you slurp the whole file at once.
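If RAM is a concern, a rough sketch like this reads the file a line at a time instead of slurping it (access.log is just a stand-in name):

File.foreach('access.log') do |line|
  # Skip the #Version/#Date/#Fields header lines.
  next if line.start_with?('#')
  timestamp, action, filepath = line.strip.split(' ', 3)
  # ... work with the three fields here ...
end

File.foreach only holds the current line in memory, so memory use stays flat no matter how big the log gets.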
You can use regular expressions or simple substring extracts. Substring extracts are faster but lack the cool-factor.
require 'benchmark'
TIME_REGEX = /(\d\d:\d\d:\d\d)/
ACTION_REGEX = /(\w+)/
FILEPATH_REGEX = /(\S+)/
ary = %(#Version: 1.0
#Date: 12-Jan-1996 00:00:00
#Fields: time cs-method cs-uri
00:34:23 GET /foo/bar.html
12:21:16 GET /foo/bar.html
12:45:52 GET /foo/bar.html
12:57:34 GET /foo/bar.html
).split(/\n+/)
n = 50000
Benchmark.bm(6) do |x|
  x.report('regex') do
    n.times do
      ary.each do |l|
        next if l[/^#/]
        l.strip!
        # The three lines below differ only in how the match is made. The
        # live one adds the /o option, which interpolates the #{ } patterns
        # only once instead of on every pass through the loop.
        # l[/^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/ix]
        # l =~ /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/ix
        l =~ /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/iox
        timestamp, action, filepath = $1, $2, $3
      end
    end
  end
  x.report('substr') do
    n.times do
      ary.each do |l|
        next if l[/^#/]
        l.strip!
        timestamp = l[0, 8]       # "00:34:23"
        action    = l[9, 3]       # hard-wired to three-letter methods like "GET"
        filepath  = l[13 .. -1]   # the rest of the line, starting at the "/"
      end
    end
  end
end
# >> user system total real
# >> regex 1.220000 0.000000 1.220000 ( 1.235210)
# >> substr 0.800000 0.010000 0.810000 ( 0.804276)
Try running the different regular expressions to see how subtle changes can make a big difference in run-time; in particular, the /o option on the uncommented line tells Ruby to do the #{ } interpolation only once, rather than every time through the loop.
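An equivalent trick, sketched here with a made-up LINE_REGEX constant, is to build the interpolated pattern once up front and reuse it, so you don't have to rely on /o at all:

LINE_REGEX = /^ #{ TIME_REGEX } \s #{ ACTION_REGEX } \s #{ FILEPATH_REGEX } $/ix

ary.each do |l|
  next if l[/^#/]
  l.strip!
  if md = LINE_REGEX.match(l)
    timestamp, action, filepath = md.captures
  end
end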
In both the regex and substring versions of the benchmark code, the ary.each do loops are the pieces you can extract as the basis for what you are looking for.
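As a rough sketch of that extraction, using a hypothetical parse_line helper loosely based on the substring branch:

def parse_line(line)
  return nil if line.start_with?('#')   # header lines carry no fields
  line = line.strip
  timestamp = line[0, 8]
  action, filepath = line[9 .. -1].split(' ', 2)
  [timestamp, action, filepath]
end

fields = ary.map { |l| parse_line(l) }.compact
# => [["00:34:23", "GET", "/foo/bar.html"], ["12:21:16", "GET", "/foo/bar.html"], ...]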