
Hello,

I have a script that, each time it is called, gets the first line of a file. Each line is known to be exactly the same length (32 alphanumeric chars) and terminates with "\r\n". After getting the first line, the script removes it.

Now I do in this way:

$contents = file_get_contents($file);
$first_line = substr($contents, 0, 32);
file_put_contents($file, substr($contents, 32 + 2)); // +2 because we also remove the \r\n

Obviously it works, but I was wondering if there could be a smarter (or more efficient) way to do this?

In my simple solution I basically read and rewrite the whole file just to get and remove the first line.

Thanks!

+2  A: 

Here's one way:

$contents = file($file, FILE_IGNORE_NEW_LINES);
$first_line = array_shift($contents);
file_put_contents($file, implode("\r\n", $contents));

There are countless other ways to do this, but all of them involve separating the first line somehow and saving the rest. You cannot avoid rewriting the whole file. An alternative take:

list($first_line, $contents) = explode("\r\n", file_get_contents($file), 2);
file_put_contents($file, $contents); // $contents is already a string after explode() with a limit, so no implode() is needed
Tatu Ulmanen
Your first example will generate redundant newlines. Without the `FILE_IGNORE_NEW_LINES` flag for `file()` you don't need to `implode()` the lines with newlines again.
fireeyedboy
@fireeyedboy, true that, fixed.
Tatu Ulmanen
@Tatu Ulman: +1 very interesting code, thanks! I never used the file() function before.
Marco Demajo
+3  A: 

There is no more efficient way to do this than rewriting the file.

Byron Whitlock
A: 

You could use the file() function.

Gets the first line

$content = file('myfile.txt');
echo $content[0];  
JeremySpouken
+1  A: 

You could store positional info into the file itself. For example, the first 8 bytes of the file could store an integer. This integer is the byte offset of the first real line in the file.

So, you never delete lines anymore. Instead, deleting a line means altering the start position. fseek() to it and then read lines as normal.

The file will grow big eventually. You could periodically clean up the orphaned lines to reduce the file size.
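
Roughly, a sketch of that layout could look like this (the 8-byte zero-padded header and the pop_first_line() helper are illustrative assumptions, not a finished implementation; the file would have to be created with "00000000" in front of the data lines):

function pop_first_line($file) {
    $fp = fopen($file, 'r+');
    $offset = (int) fread($fp, 8);           // first 8 bytes hold the byte offset of the first live line
    fseek($fp, 8 + $offset);
    $line = fgets($fp);                      // read the first line that hasn't been "deleted" yet
    if ($line !== false) {
        fseek($fp, 0);                       // "delete" it by moving the stored start position past it
        fwrite($fp, str_pad($offset + strlen($line), 8, '0', STR_PAD_LEFT));
    }
    fclose($fp);
    return $line === false ? false : rtrim($line, "\r\n");
}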

But seriously, just use a database and don't do stuff like this.

chris
+1  A: 

I wouldn't usually recommend opening up a shell for this sort of thing, but if you're doing this infrequently on really large files, there's probably something to be said for:

$lines = `wc -l myfile` - 1;        // wc -l prints "<count> myfile"; PHP's loose numeric conversion keeps just the count
`tail -n $lines myfile > newfile`;  // write everything except the first line to newfile

It's simple, and it doesn't involve reading the whole file into memory.

I wouldn't recommend this for small files, or extremely frequent use though. The overhead's too high.
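
For completeness, a variant along the same lines (the head/tail -n +2 calls here are an illustrative assumption, not the snippet above) that also captures the first line and avoids parsing the wc output:

$first_line = rtrim(shell_exec('head -n 1 ' . escapeshellarg($file)), "\r\n");               // grab line 1 without loading the file into PHP
shell_exec('tail -n +2 ' . escapeshellarg($file) . ' > ' . escapeshellarg($file . '.tmp'));  // everything from line 2 onward
rename($file . '.tmp', $file);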

Frank Farmer
This isn't efficient in any way, and the code isn't portable either.
anijhaw
For, say, a 3 gigabyte file, this'll be a lot more efficient than most of the answers posted here. Most of the posted answers will die with out of memory errors on large files. You're right that it isn't portable though. There's a very specific set of circumstances in which this solution would be useful/acceptable.
Frank Farmer
A: 

You can iterate over the file instead of reading it all into memory:

$handle = fopen("file", "r");
$first = fgets($handle,2048); #get first line.
$outfile="temp";
$o = fopen($outfile,"w");
while (!feof($handle)) {
    $buffer = fgets($handle,2048);
    fwrite($o,$buffer);
}
fclose($handle);
fclose($o);
rename($outfile,$file);
ghostdog74