You have to examine every line in the range you want (to tell whether it's in that range), so I'm guessing you mean you don't want to examine every line in the file. At a bare minimum, you will have to look at every line up to and including the first one outside your range (I'm assuming the lines are in date/time order).
This is a fairly simple pattern:
    state = preprint
    for every line in file:
        if line.date > enddate:
            exit for loop
        if line.date >= startdate:
            state = print
        if state == print:
            print line
You can write this in awk, Perl, Python, even COBOL if you must, but the logic is always the same.
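For instance, a minimal Python sketch of that loop, assuming each log line starts with a sortable timestamp like 2009/01/01-15:22 (adjust the slicing and format to whatever your log actually uses):

```python
import sys

def print_range(lines, startdate, enddate):
    """Print lines whose leading timestamp falls in [startdate, enddate].

    Assumes lines are sorted by timestamp and each line starts with a
    timestamp that compares correctly as a string, e.g. 2009/01/01-15:22.
    """
    for line in lines:
        stamp = line[:16]          # length of "2009/01/01-15:22"
        if stamp > enddate:        # past the range: stop, skip the rest
            break
        if stamp >= startdate:     # inside the range: print
            print(line, end="")

# usage: python printrange.py '2009/01/01-15:22' '2009/01/05-09:07' < logfile
if __name__ == "__main__" and len(sys.argv) == 3:
    print_range(sys.stdin, sys.argv[1], sys.argv[2])
```

The string comparison works here only because the timestamp format sorts lexicographically; for anything fancier you'd parse the dates properly.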
Locating the line numbers first (with, say, grep) and then blindly printing that line range won't help, since grep also has to look at the lines — in fact all of them, not just up to the first one outside the range, and most likely twice (once for the first line and once for the last).
If this is something you're going to do quite often, you may want to consider shifting the effort from 'every time you do it' to 'once, when the file is stabilized'. An example would be to load up the log file lines into a database, indexed by the date/time.
That takes a while to set up but will make your queries a lot faster. I'm not necessarily advocating a database; you could probably achieve the same effect by splitting the log files into hourly logs like this:
    2009/
        01/
            01/
                0000.log
                0100.log
                 : :
                2300.log
            02/
             : :
Then, for a given time range, you know exactly where to start and stop looking. The range 2009/01/01-15:22 through 2009/01/05-09:07 would result in:
- some (the last bit) of the file 2009/01/01/1500.log
- all of the files 2009/01/01/1[6-9]*.log
- all of the files 2009/01/01/2*.log
- all of the files 2009/01/0[2-4]/*.log
- all of the files 2009/01/05/0[0-8]*.log
- some (the first bit) of the file 2009/01/05/0900.log
Of course, I'd write a script to return those lines rather than trying to work them out manually each time.
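As a sketch of part of such a script, here's a Python helper that maps a time range onto the hourly files above. The directory layout and file naming are just the convention from this answer, not a standard; a real script would additionally scan the first and last file with the date-range check for the exact boundaries.

```python
from datetime import datetime, timedelta

def hourly_log_files(start, end, fmt="%Y/%m/%d-%H:%M"):
    """Return the list of hourly log files covering [start, end].

    Assumes the YYYY/MM/DD/HH00.log layout described above.
    """
    t = datetime.strptime(start, fmt).replace(minute=0)  # floor to the hour
    stop = datetime.strptime(end, fmt)
    files = []
    while t <= stop:
        files.append(t.strftime("%Y/%m/%d/%H00.log"))
        t += timedelta(hours=1)
    return files
```

For the example range above, hourly_log_files("2009/01/01-15:22", "2009/01/05-09:07") starts at 2009/01/01/1500.log and ends at 2009/01/05/0900.log.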