I'm trying to write a piece of code that reads a file line by line and stores each line, up to a certain amount of input data. I want to guard against a malicious end-user putting something like a gig of data on one line, in addition to guarding against sucking in an abnormally large file overall. Doing $str = <FILE> will still read in a whole line, and that line could be very long and blow up my memory.
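Roughly the pattern I have in mind (the file name and the limit are just placeholders for this question): the overall cap is easy to enforce, but each read from the handle is still unbounded.

    use strict;
    use warnings;

    open my $fh, '<', 'input.txt' or die "open failed: $!";

    my $MAX_TOTAL = 64 * 1024;        # overall cap on stored input
    my @lines;
    my $total = 0;

    while (my $line = <$fh>) {        # <-- no per-line limit here
        $total += length $line;
        last if $total > $MAX_TOTAL;  # too late if this one line was already huge
        push @lines, $line;
    }
    close $fh;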
fgets lets me do this by letting me specify a number of bytes to read during each call, essentially letting me split one long line into chunks of my max length. Is there a similar way to do this in Perl? I saw something about sv_gets, but I'm not sure how to use it (though I only did a cursory Google search).
The goal of this exercise is to avoid having to do additional parsing / buffering after reading data. fgets stops after N bytes or when a newline is reached.
EDIT: I think I confused some people. I want to read X lines, each with a max length of Y. I don't want to read more than Z bytes total, and I would prefer not to read all Z bytes at once. I guess I could just do that and split the lines afterwards, but I'm wondering if there's some other way. If that's the best way, then using the read function and parsing the lines manually is my easiest bet; a rough sketch of that follows.
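Here is roughly what I mean by that fallback, in case it clarifies the question: read in bounded chunks with read() and carve lines out of a buffer myself, fgets-style (overlong lines get split into Y-byte pieces). All the limits, the file name, and the loop label are placeholders I made up, and this is untested.

    use strict;
    use warnings;

    # All limits and the file name are made-up placeholders for this question.
    my $MAX_LINES    = 100;          # X: lines to keep
    my $MAX_LINE_LEN = 1024;         # Y: longest piece of any one line to keep
    my $MAX_TOTAL    = 64 * 1024;    # Z: total bytes ever read from the file

    open my $fh, '<', 'input.txt' or die "open failed: $!";
    binmode $fh;                     # count raw bytes, not characters

    my @lines;
    my $buf   = '';
    my $total = 0;

    READ: while ($total < $MAX_TOTAL && @lines < $MAX_LINES) {
        # Never ask for more than Y bytes at a time, or more than Z overall.
        my $want = $MAX_TOTAL - $total;
        $want = $MAX_LINE_LEN if $want > $MAX_LINE_LEN;

        my $got = read($fh, my $chunk, $want);
        die "read failed: $!" unless defined $got;
        last if $got == 0;                         # EOF
        $total += $got;
        $buf   .= $chunk;

        # Carve the buffer into "lines": a real line if a newline shows up
        # within Y bytes, otherwise a Y-byte slice of an overlong line.
        while (@lines < $MAX_LINES) {
            my $pos = index($buf, "\n");
            if ($pos >= 0 && $pos <= $MAX_LINE_LEN) {
                push @lines, substr($buf, 0, $pos);
                substr($buf, 0, $pos + 1) = '';    # drop the line and its newline
            }
            elsif (length($buf) >= $MAX_LINE_LEN) {
                push @lines, substr($buf, 0, $MAX_LINE_LEN, '');
            }
            else {
                next READ;                         # need more data first
            }
        }
    }

    # Whatever is left over is a short, unterminated last line.
    push @lines, $buf if length($buf) && @lines < $MAX_LINES;
    close $fh;

That works, but it feels like a lot of hand-rolled buffering for something fgets gives me in one call, which is why I'm asking whether Perl has a built-in way to do it.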
Thanks.