I'm taking a day off from banging my head against memory management and OpenGL ES to try to boost efficiency. Part of the app we're working on accesses vast tables of data. Most of this data was supplied to us in the form of switch statements.

In all, I have four switch statements, each with 2,000+ cases and expected to grow. How poor is the access time for these going to be? Is it worth looking for lower-hanging optimisation fruit for now, or is this a big no-no for Objective-C compilers?

+1  A: 

Switch statements are in general quite fast, since they only do integer comparisons. For a large, dense set of case values the compiler will usually emit a jump table, so dispatch takes roughly constant time no matter how many cases there are; sparse case values may instead compile to a binary search over the labels, which is still only O(log n) comparisons.
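As a rough illustration (a minimal sketch with made-up values; the real tables obviously have thousands of entries), a dense switch like this is the kind of thing clang/gcc will typically turn into a jump table:

    #include <stdio.h>

    /* Dense, contiguous case labels: compilers typically lower this to a
       jump table, so lookup cost doesn't grow with the number of cases. */
    static int lookup(int key) {
        switch (key) {
            case 0:  return 17;
            case 1:  return 42;
            case 2:  return 99;
            /* ...thousands more cases in the real code... */
            default: return -1;
        }
    }

    int main(void) {
        printf("%d\n", lookup(1)); /* prints 42 */
        return 0;
    }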

If you really want to micro-optimise, certain types of data can be stored in C-arrays for extremely fast lookup using pointer arithmetic. This is something that you should only ever look into if you really need the extra speed - pointer arithmetic involves a lot of potential bugs, many of which can be quite hard to debug.
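Here's a minimal sketch of that idea (the table contents and the lookup() name are hypothetical); note the bounds check, which guards against exactly the kind of off-the-end bug mentioned above:

    #include <stdio.h>

    /* Hypothetical table: entry i holds the value the switch would have
       returned for case i. */
    static const int kTable[] = { 17, 42, 99 /* ...2000+ entries... */ };
    enum { kTableSize = sizeof(kTable) / sizeof(kTable[0]) };

    static int lookup(int key) {
        if (key < 0 || key >= kTableSize)
            return -1;          /* out of range: the switch's default case */
        return kTable[key];     /* a single indexed load */
    }

    int main(void) {
        printf("%d\n", lookup(2)); /* prints 99 */
        return 0;
    }

This only works when the case values map cleanly onto array indices; for sparse keys you'd want a hash table (e.g. an NSDictionary, or a C hash) instead.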

The real question is: have you done any profiling? Shark is a very effective tool when it comes to time profiling of iOS apps - use it, and see how much execution time is being spent on your switch case code. If it's less than 5-10%, there's probably no point in even considering optimisation.
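If you want a quick number before firing up Shark, a micro-benchmark along these lines will tell you what a single lookup costs. This is just a sketch: mach_absolute_time() is available on iOS and Mac OS X, the lookup() stub stands in for the real switch, and the 100,000 iteration count is arbitrary:

    #include <stdio.h>
    #include <mach/mach_time.h>

    /* Stand-in for the real routine wrapping the big switch (hypothetical). */
    static int lookup(int key) {
        switch (key) {
            case 0:  return 17;
            case 1:  return 42;
            default: return -1;
        }
    }

    int main(void) {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);

        volatile int sink = 0;  /* stops the compiler optimising the loop away */
        uint64_t start = mach_absolute_time();
        for (int i = 0; i < 100000; i++)
            sink = lookup(i % 2000);
        uint64_t elapsed = mach_absolute_time() - start;
        (void)sink;

        /* Convert mach ticks to nanoseconds. */
        uint64_t ns = elapsed * tb.numer / tb.denom;
        printf("100000 lookups took %llu ns (%.2f ns each)\n",
               (unsigned long long)ns, ns / 100000.0);
        return 0;
    }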

Nick Forge
It seems to be less than 10%, around 7% I think, but we call it 100,000 times for each artifact the app produces (and we're producing these near-constantly, one after another). It doesn't /seem/ to be a bottleneck, but I feel it's messy enough to become one.
mtc06
As others have said, profile the application first. And don't refactor the thing just for refactoring's sake. Refactor it because you have a compelling reason to be in the code, such as fixing a defect. Although the premise of refactoring is to change the code without changing its behavior, every refactoring has the potential to introduce a defect. I'd wait until it became *necessary* to refactor it. If it ain't broke, don't fix it.
Mike Hofer