I'm using a foreach loop to process a large set of items; unfortunately it's using a lot of memory (probably because it makes a copy of the array). Apparently there is a way to save some memory with the following code: $items = &$array;

Isn't it better to use for loops instead?

And is there a way to destroy each item as soon as it has been processed in a foreach loop?

e.g.

    $items = &$array;
    foreach ($items as $item)
    {
        dosomethingwithmy($item);
        destroy($item);
    }

I'm just looking for the best way to process a lot of items without running out of resources.

+3  A: 

Try a for loop:

$keys = array_keys($array);
// Iterate over the keys and take a reference to each element,
// so the array itself is never copied.
for ($i = 0, $n = count($keys); $i < $n; ++$i) {
    $item = &$array[$keys[$i]];
    dosomethingwithmy($item);
    destroy($item);
}
Gumbo
Please put the count outside the for loop; you will gain some performance.
Gabriel Sosa
@Gabriel Sosa: `count` is already called only once.
Gumbo
My script is using 50% less memory with this loop.
mnml
@mnml: That was predictable since `foreach` uses an internal copy of the array and thus doubles the memory usage for that array.
Gumbo
+1  A: 

Resource-wise, your code will be more efficient if you use a for loop instead of a foreach loop. Each iteration of a foreach loop copies the current element in memory, which costs both time and memory. Using for and accessing the current item by its index is a bit leaner and faster.
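
A minimal sketch of that index-based approach, assuming a numerically indexed array and a hypothetical process() function standing in for your own logic:

$n = count($array);              // count once, before the loop
for ($i = 0; $i < $n; ++$i) {
    process($array[$i]);         // work on the element in place, no per-element copy
    unset($array[$i]);           // free the item as soon as it has been handled
}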

Nicolas
+1  A: 

Use this:

reset($array);
while (list($key_d, $val_d) = each($array)) {
    dosomethingwithmy($val_d); // process the current item; each() advances the internal pointer
}

because foreach creates a copy of the array.

Haim Evgi
+1  A: 

If you are getting that large data set from a database, it can often help to consume the data set as soon as it comes from the database. For example, from the PHP mysql_fetch_array documentation:

$resource = mysql_query("query");
while ($row = mysql_fetch_array($resource, MYSQL_NUM)) {
    process($row);
}

This loop will not create an in-memory copy of the entire data set (at least not redundantly). A friend of mine sped up some of her query processing by 10x using this technique (her data sets are biological, so they can get quite large).

barkmadley