I am working on an image generation script in PHP and have gotten it working two ways. One way is slow but uses a limited amount of memory; the second is much faster but uses about 6x the memory. Neither version leaks memory, as far as I can tell.

In a limited benchmark, here is how they performed:

--------------------------------------------
METHOD  | TOTAL TIME | PEAK MEMORY |  IMAGES
--------------------------------------------
One     |     65.626 |     540,036 |     200
Two     |     20.207 |   3,269,600 |     200
--------------------------------------------

And here are the same numbers averaged per image (peak memory is for the whole run, so it doesn't change), if you don't want to do your own math:

--------------------------------------------
METHOD  | TOTAL TIME | PEAK MEMORY |  IMAGES
--------------------------------------------
One     |      0.328 |     540,036 |       1
Two     |      0.101 |   3,269,600 |       1
--------------------------------------------

Which method should I use and why?

I anticipate this being used by a high volume of users, with each user making 10-20 requests to this script during a normal visit.

I am leaning toward the faster method: although it uses more memory, it holds that memory for about a third of the time, which should reduce the number of concurrent requests.
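
A minimal harness for this kind of measurement might look like the sketch below; `generate_image()` is a placeholder for whichever method is being timed, and the loop count matches the 200-image runs above.

    <?php
    // Rough sketch of a timing / peak-memory harness.
    // generate_image() is a placeholder, assumed to return a GD image resource.
    $start = microtime(true);
    for ($i = 0; $i < 200; $i++) {
        $img = generate_image();
        imagedestroy($img);
    }
    printf(
        "total: %.3f s, peak memory: %s bytes\n",
        microtime(true) - $start,
        number_format(memory_get_peak_usage())
    );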

Update: I was able to refine the second option into a third method that cuts the memory usage almost in half. Running all three back to back, this is the new benchmark:

--------------------------------------------
METHOD  | TOTAL TIME | PEAK MEMORY |  IMAGES
--------------------------------------------
One     |     51.901 |     798,900 |     200
Two     |     12.039 |   3,269,600 |     200
Three   |     13.667 |   1,815,624 |     200
--------------------------------------------

Thanks for the direction and help!

+2  A: 

I'd go with the second. It's faster, which is always good for the end user. The RAM usage is high, though, so you have to weigh the price of additional RAM. It's easier (and cheaper) to add RAM to a server than it is to add a whole new box.

On the other hand, have you profiled both of these? I'm sure you can reach some kind of happy middle ground: something close to the speed of the second algorithm with the memory usage of the first. Run it through Xdebug before you make a decision.
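
For example, the Xdebug profiler can be switched on from php.ini and the resulting cachegrind files opened in KCachegrind or Webgrind. These are Xdebug 2 style settings; the output directory is just an example:

    ; php.ini -- enable the Xdebug profiler
    xdebug.profiler_enable = 1
    xdebug.profiler_output_dir = /tmp/xdebug
    ; or profile only requests that carry the XDEBUG_PROFILE trigger:
    ; xdebug.profiler_enable_trigger = 1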

ryeguy
@ryeguy The first method copies alpha information from each pixel in a mask image and draws it onto a newly created image. The second uses a preprocessed array of `x,y,alpha` values (~7,000 of them) and draws the new image from that. There aren't a whole lot of ways to optimize loading the array, which is what uses up the memory. I am trying to avoid using ImageMagick on the server (or any of its variants).
Doug Neiner
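
For readers less familiar with GD, a rough sketch of the two approaches described in the comment above might look like this. The file names, color values, and array layout are invented for illustration; this is not the actual script.

    <?php
    // Method one: read the alpha of every pixel in a mask image and redraw it.
    // Assumes the mask is a truecolor PNG with an alpha channel.
    function method_one($maskPath) {
        $mask = imagecreatefrompng($maskPath);
        $w = imagesx($mask);
        $h = imagesy($mask);

        $out = imagecreatetruecolor($w, $h);
        imagealphablending($out, false);
        imagesavealpha($out, true);

        for ($y = 0; $y < $h; $y++) {
            for ($x = 0; $x < $w; $x++) {
                // Alpha lives in bits 24-30 of the truecolor value (0-127).
                $alpha = (imagecolorat($mask, $x, $y) & 0x7F000000) >> 24;
                $color = imagecolorallocatealpha($out, 0, 0, 0, $alpha);
                imagesetpixel($out, $x, $y, $color);
            }
        }

        imagedestroy($mask);
        return $out;
    }

    // Method two: skip the mask at runtime and draw from a preprocessed
    // array of x,y,alpha triples (~7,000 of them) loaded from disk.
    function method_two($pixelDataPath, $w, $h) {
        // However the preprocessed data is actually stored; serialize() is assumed here.
        $pixels = unserialize(file_get_contents($pixelDataPath));

        $out = imagecreatetruecolor($w, $h);
        imagealphablending($out, false);
        imagesavealpha($out, true);

        foreach ($pixels as $p) {
            list($x, $y, $alpha) = $p;
            $color = imagecolorallocatealpha($out, 0, 0, 0, $alpha);
            imagesetpixel($out, $x, $y, $color);
        }

        return $out;
    }
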
+2  A: 

I would think it depends on how much memory you have available. Assuming those numbers are in bytes, even the "more memory" version is still "only" using about 3 MB. The more-memory version will continue to be faster until the number of outstanding requests pushes the system into swapping, at which point it will be dramatically slower.
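
To put rough numbers on that: at ~3.27 MB of peak memory per request, 100 simultaneous requests of the fast method would need on the order of 327 MB, versus roughly 54 MB for the slow method at ~0.54 MB each.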

Mark Bessey
A: 

I read your comment about how you're doing the fast version; you could use something like memcached so you only have to keep that array in memory once. Then the extra memory usage hardly matters.
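
A minimal sketch of that idea, assuming the PECL `memcached` extension; the key name, server address, and `build_pixel_array()` helper are placeholders:

    <?php
    $cache = new Memcached();
    $cache->addServer('127.0.0.1', 11211);

    $pixels = $cache->get('mask_pixel_data');
    if ($pixels === false) {
        // Build the x,y,alpha array once (placeholder helper) and cache it.
        $pixels = build_pixel_array('mask.png');
        $cache->set('mask_pixel_data', $pixels);
    }

    // ...draw the image from $pixels as in the fast method...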

Brendan Long