Hey guys, I was working on some particle systems and this was the only way I could figure out to set up the arrays:

if (vertices) {
    free(vertices);
}
if (textures) {
    free(textures);
}
vertices = malloc(sizeof(point3D) * 4 * [particles count]);
textures = malloc(sizeof(point2D) * 4 * [particles count]);

The particles constantly change, so a new array size needs to be created constantly, at about 60 fps. Is this a bad way of doing things? Could this cause my app to slow down or cause memory thrashing? When I run it under Instruments it doesn't look too bad, but that's running in the simulator on my Mac. Is this OK, or is there another way I could be doing this?

**EDIT:** OK, I went in and rewrote the system so that it estimates the maximum number of particles at creation, then uses that to allocate the arrays. This system has the trade-off that it often overestimates the memory needed by quite a bit, but it only calls malloc once. I figured that was not that big of a trade-off, since a realistic maximum overestimate would be something like 100 floats, which is not too bad. Thanks for the help, guys.
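The allocate-once approach described in the edit could look something like this sketch. The struct and function names here are illustrative, not from the original code; only the point3D/point2D types and the 4-vertices-per-particle layout come from the question:

```c
#include <stdlib.h>

/* Illustrative stand-ins for the question's vertex/texture types. */
typedef struct { float x, y, z; } point3D;
typedef struct { float u, v; } point2D;

typedef struct {
    point3D *vertices;
    point2D *textures;
    size_t   capacity;   /* max particles the buffers can hold */
} ParticleBuffers;

/* Allocate once, up front, for the estimated maximum particle count.
   Overestimating wastes a little memory but avoids per-frame malloc/free. */
int buffers_init(ParticleBuffers *b, size_t maxParticles)
{
    b->vertices = malloc(sizeof(point3D) * 4 * maxParticles);
    b->textures = malloc(sizeof(point2D) * 4 * maxParticles);
    b->capacity = maxParticles;
    return b->vertices != NULL && b->textures != NULL;
}

void buffers_free(ParticleBuffers *b)
{
    free(b->vertices);
    free(b->textures);
    b->vertices = NULL;
    b->textures = NULL;
    b->capacity = 0;
}
```

Each frame then just writes into the first `[particles count]` slots of the existing buffers instead of reallocating them.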

A: 

In that case, you'd probably want to keep that memory around and just reassign it.

Matt Williamson
The problem is that the array needs to change size because of how often particles are added and taken away...
Justin Meiners
Can't you size an array based on the max estimated size and keep that around?
pgb
I could; it would be difficult to estimate, but maybe. Are you saying that I should avoid doing it this way? Because if this is just awful, I'll make something else work.
Justin Meiners
@Justin: Even if there really is no good way to estimate an absolute maximum, you can still do better than continuously reallocing - you can start out with some estimated reasonable size, then if you ever need more, double the size. That way the size can tick up and down a lot without causing any allocations. (Of course, for all I know, your current setup may not actually cause a performance problem.)
Jefromi
Keep track of the last array count; if the new count is bigger than the last malloc, do another, otherwise reuse the buffer. It's not going to hurt much to have a piece of memory that's a bit too big.
Matt Williamson
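The doubling strategy from the comments above can be sketched as a small growth helper. Everything here (names, the initial capacity of 16) is illustrative; the point is that capacity only ever grows, and each growth at least doubles it, so the number of reallocations is logarithmic in the peak particle count:

```c
#include <stdlib.h>

typedef struct {
    void  *data;
    size_t elemSize;
    size_t capacity;  /* elements currently allocated */
} GrowBuffer;

/* Ensure room for `needed` elements, doubling capacity as required.
   Returns 0 on allocation failure, 1 otherwise. */
int grow_reserve(GrowBuffer *b, size_t needed)
{
    if (needed <= b->capacity)
        return 1;                        /* already big enough: no allocation */

    size_t newCap = b->capacity ? b->capacity : 16;
    while (newCap < needed)
        newCap *= 2;

    void *p = realloc(b->data, b->elemSize * newCap);
    if (!p)
        return 0;
    b->data = p;
    b->capacity = newCap;
    return 1;
}
```

With this, a particle count that fluctuates between frames causes no allocations at all once the buffer has grown to cover the peak.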
+3  A: 

Try starting with an estimated particle count and malloc-ing an array of that size. Then, if your particle count needs to increase, use realloc to re-size the existing buffer. That way, you minimize the amount of allocate/free operations that you are doing.

If you want to make sure that you don't waste memory, you can also keep a record of the last 100 (or so) particle counts. If the max particle count out of that set is less than (let's say) 75% of your current buffer size, then resize the buffer down to fit that smaller particle count.
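The shrink heuristic described above might be sketched like this. The names, the history length of 100, and the 75% threshold are taken from the answer's description but are otherwise illustrative:

```c
#include <stdlib.h>

#define HISTORY 100   /* how many recent frame counts to remember */

/* Ring buffer of recent per-frame particle counts. */
typedef struct {
    size_t counts[HISTORY];
    size_t index;
    size_t filled;
} CountHistory;

void history_record(CountHistory *h, size_t count)
{
    h->counts[h->index] = count;
    h->index = (h->index + 1) % HISTORY;
    if (h->filled < HISTORY)
        h->filled++;
}

size_t history_peak(const CountHistory *h)
{
    size_t peak = 0;
    for (size_t i = 0; i < h->filled; i++)
        if (h->counts[i] > peak)
            peak = h->counts[i];
    return peak;
}

/* Returns the capacity the buffer should shrink down to, or 0 if the
   recent peak is not comfortably below 75% of the current capacity. */
size_t shrink_target(const CountHistory *h, size_t capacity)
{
    size_t peak = history_peak(h);
    if (h->filled == HISTORY && peak < (capacity * 3) / 4)
        return peak;
    return 0;
}
```

When shrink_target returns nonzero, a single realloc down to that size reclaims the unused memory without causing churn on ordinary frames.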

bta
Ok I will work on this implementation.
Justin Meiners
+1  A: 

You don't need to remalloc unless the number of particles increases (or you handled a memory warning in the interim). Just keep the last malloc'd size around for comparison.

hotpaw2
+3  A: 

Allocating memory is fast relative to some things and slow relative to others. The average Objective-C program does a lot more than 60 allocations per second. For allocations of a few million bytes, malloc+free should take less than a thousandth of a second. Compared to arithmetic operations, that's slow. But compared to other things, it's fast.

Whether it's fast enough in your case is a question for testing. It's certainly possible to do 60 Hz memory allocations on the iPhone — the processor runs at 600 MHz.
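Since this is a question for testing, a rough micro-benchmark is easy to write. This sketch averages the cost of a malloc+free pair over many iterations (clock() is coarse, so a single pair can't be timed directly); the sizes and iteration counts are illustrative:

```c
#include <stdlib.h>
#include <time.h>

/* Average wall-clock cost, in seconds, of one malloc+free pair of the
   given size, measured over `iterations` repetitions. */
double time_malloc_free(size_t size, int iterations)
{
    clock_t start = clock();
    for (int i = 0; i < iterations; i++) {
        char *p = malloc(size);
        if (p)
            p[0] = 0;    /* touch the block so the allocation isn't elided */
        free(p);
    }
    clock_t end = clock();
    return (double)(end - start) / CLOCKS_PER_SEC / iterations;
}
```

Timing something like 10,000 allocations of a few-thousand-float buffer gives a per-pair cost you can compare directly against a 60 fps frame budget of about 16 ms.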

This certainly does seem like a good candidate for reusing the memory, though. Keep track of the size of the pool and allocate more if you need more. Not allocating memory is always faster than allocating it.

Chuck
+1  A: 

As hotpaw2 mentioned, if you need to optimise, you could perhaps do so by only allocating when you need more space, i.e.:

particleCount = [particles count];
if (particleCount > allocatedParticleCount) {
  if (vertices) {
    free(vertices);
  }
  if (textures) {
    free(textures);
  }
  vertices = malloc(sizeof(point3D) * 4 * particleCount);
  textures = malloc(sizeof(point2D) * 4 * particleCount);
  allocatedParticleCount = particleCount;
}

...having initialised allocatedParticleCount to 0 on instantiation of your object.

P.S. Don't forget to free these buffers when your object is destroyed. Consider using an .mm file and C++/Boost's shared_array for both vertices and textures; you would then not need the above free statements either.

Christopher Hunt
Lol, .mm. Thanks for the pointers though.
Justin Meiners
I've had problems using Boost shared pointers and the like inside Objective-C++ classes. I've had to wrap the Boost shared pointers in a C++ class that I new when initing and delete when deallocing an Objective-C++ class.
No one in particular
Interesting....
Justin Meiners
I've not had any problems using boost shared pointers inside Obj-C++ classes implemented using .mm files. Be aware though that your .h file should declare C++ types within a "#ifdef __cplusplus" block. It is entirely plausible for your Obj-C++ class declaration to be imported by an Obj-C file (.m).
Christopher Hunt
@No one in particular i've written a lot of objc++ code, it has behaved as it should for me *shrug*
Justin
+1  A: 

I'll add another answer that's more direct to the point of the original question. Most of the answers prior to this one (including my own) are very likely premature optimizations.

I have iPhone apps that do many thousands of mallocs and frees per second, and they don't even show up in a profile of the app.

So the answer to the original question is no.

hotpaw2