views:

161

answers:

2

Let's say I'm going to save 100 floating-point numbers in a list by running a single script; that will most probably take some memory to process. If this code executes every time as a requirement of an application, there will be performance hits, so my question is how to keep it efficient in order to gain performance.

Mock-up code:

import random

def generate_lglt():
    float1, float2 = 27.2423423, 12.2323245
    lonlats = []
    # append five random values per iteration, 500 floats in total
    for val in range(100, 0, -1):
        lonlats.append(random.uniform(float1, float2))
        lonlats.append(random.uniform(float1, float2))
        lonlats.append(random.uniform(float1, float2))
        lonlats.append(random.uniform(float1, float2))
        lonlats.append(random.uniform(float1, float2))
    print lonlats

Thanks.

+2  A: 

If generate_lglt() is going to be called many times, you may want to avoid regenerating the same range(100, 0, -1) list on every call. You could cache that generated range somewhere and reuse it over and over again.
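A minimal sketch of that idea, assuming the function from the question (the module-level name _INDEXES is only illustrative, and the loop body is condensed to a single append):

import random

_INDEXES = range(100, 0, -1)   # built once at import time, reused on every call

def generate_lglt():
    float1, float2 = 27.2423423, 12.2323245
    lonlats = []
    for val in _INDEXES:       # no new list is constructed on each call
        lonlats.append(random.uniform(float1, float2))
    return lonlats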

Also, if you might exit the for loop early without completing every iteration, use xrange instead of range; xrange generates values lazily rather than building the whole list up front.
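For example, a Python 2 sketch of a countdown loop that may exit early (the break condition is hypothetical):

# xrange yields 100, 99, ..., 1 one value at a time instead of building
# the full list the way range() does, so breaking out early never pays
# for values that were never visited.
for val in xrange(100, 0, -1):
    if val == 50:   # hypothetical early-exit condition
        break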

Redbeard 0x0A
+8  A: 

Bottlenecks occur in unexpected places, so never optimize code just because you think it might be the right code to improve. What you need to do is:

  1. Write your program so that it runs completely.
  2. Develop tests to make sure your program is correct.
  3. Decide whether your program is too slow.
    • There is a good chance you will quit at this step.
  4. Develop performance tests that run your program realistically.
  5. Profile the code in its realistic performance tests using the cProfile module (see the sketch after this list).
  6. Figure out what algorithmic improvements can improve your code's performance.
    • This is usually the way to improve speed the most.
  7. If you are using the best algorithm for the job, perform micro-optimizations.
    • Rewriting critical parts in C (possibly using Cython) is often more effective than in-Python micro-optimizations.
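A minimal sketch of step 5, assuming the question's generate_lglt() lives in an importable module (the name lglt_module is hypothetical):

import cProfile
from lglt_module import generate_lglt   # hypothetical module holding the question's function

# Run one realistic call under the profiler; the report lists per-function
# call counts and timings sorted by cumulative time, pointing at the real hot spots.
cProfile.run('generate_lglt()', sort='cumulative')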
Mike Graham