views:

220

answers:

4

I want to speed up my code compilation. I have searched the internet and heard that Psyco is a very good tool for improving speed, but I could not find a site to download it from.

I have not installed any additional libraries or modules in my Python so far. Can a Psyco user tell me where to download Psyco and describe its installation and usage procedure? I use Windows Vista and Python 2.6; does Psyco work with these?

+3  A: 

Psyco does not speed up compilation (in fact, it would slow it down). However, if compilation speed is really your problem in Python, there is something seriously wrong with your code.

If you are trying to improve runtime performance, Psyco does work, but only on 32-bit x86 systems; its latest release supports Python up to 2.6. The latest version is the first Google result for Psyco: http://psyco.sourceforge.net/
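
For reference, getting started with Psyco is typically just two lines at the top of your program (a minimal sketch based on the project's documented usage):

    import psyco
    psyco.full()  # ask Psyco to JIT-compile as much of the program as possible

    # ... the rest of the program runs unchanged, with hot code compiled ...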

Psyco is no longer an "interesting" project now that Unladen Swallow is slated for merging into Python 3.x, and most developer attention is divided between that and PyPy.

There are other ways of improving performance as well, including (but not limited to) Cython and Shed Skin.

Yann Ramin
+10  A: 

I suggest not relying on these tools; in any case, Psyco is being superseded by the newer Python implementations such as PyPy and Unladen Swallow. To get a speedup "for free" you can use Cython or Shed Skin. In my opinion, though, this is not the right way to speed up code.

If you are looking for speed here are some hints:

  1. Profiling
  2. Profiling
  3. Profiling

You should use the cProfile module and find the bottlenecks, then proceed with the optimization.
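
For example, a minimal profiling sketch (the hypothetical `slow_function` stands in for your own code):

    import cProfile

    def slow_function():
        # stand-in for the code you actually want to measure
        return sum(i * i for i in range(10 ** 6))

    # sort the report by cumulative time so the bottlenecks appear first
    cProfile.run("slow_function()", sort="cumulative")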

If optimizing in pure Python isn't enough, rewrite the relevant parts in Cython and you should be fine.

pygabriel
+1 for profiling.
Peter Recore
+1 for profiling.
intuited
+1 for profiling.
Greg Hewgill
+2  A: 

So it seems you don't want to speed up compilation, but rather execution.

If that is the case, my mantra is "do less." Save off results and keep them around; don't re-read the same file(s) over and over again. Read a lot of data out of the file at once and work with it.

On files specifically, your performance will be pretty miserable if you're reading a little bit of data out of each file and switching between a number of files while doing it. Just read each file in its entirety, one at a time, and then work with it.
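
For example, a minimal sketch of that approach (`data.txt` is a hypothetical file name):

    def load_rows(path):
        # read the whole file in one pass and keep the parsed rows around
        with open(path) as f:
            return [line.split() for line in f]

    rows = load_rows("data.txt")
    # reuse `rows` as often as needed; no further disk I/O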

dash-tom-bang
I have a file with data in the format `3.343445 13.54564 14.345535 12.453454 1` and so on, up to 1000 lines, and I am given a number such as a=2.44443. For the given file I need to find the row number of the value that is closest to the given number "a". How can I do this? At present I load the whole file into a list, compare each element, and find the closest one. Is there any better, faster method?
kaushik
You could create a new list of data pairs: `my_list = [(abs(n - a), n) for n in file_list]`, then sort that and pick out the first element.
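
A runnable sketch of that idea (with a hypothetical `data.txt`; it tracks the best row directly instead of sorting, since only the closest match is needed):

    a = 2.44443

    best_row, best_dist = None, float("inf")
    with open("data.txt") as f:
        # row numbers start at 1 to match "row number" in the question
        for row_number, line in enumerate(f, 1):
            for value in map(float, line.split()):
                dist = abs(value - a)
                if dist < best_dist:
                    best_row, best_dist = row_number, dist

    print(best_row)  # the row containing the value closest to a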
dash-tom-bang
+1  A: 
  • Use the appropriate data structures. If you find yourself doing a lot of

        if element in my_list:
            ...
        # or
        my_list.index(element)

    then you might be better off with sets and dictionaries (see the sketch after this list).

  • Don't create a list only to iterate over it; use generators or the itertools module.
  • Read Python Performance Tips
  • As already mentioned, do profiling.
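
A quick sketch of the first two points (a minimal, illustrative example; absolute timings will vary by machine):

    import timeit

    setup = "items = list(range(10000)); s = set(items)"
    # membership test: O(n) scan of a list vs. O(1) hash lookup in a set
    print(timeit.timeit("9999 in items", setup=setup, number=1000))
    print(timeit.timeit("9999 in s", setup=setup, number=1000))

    # generator expression: sums a million squares without building a list
    total = sum(x * x for x in range(10 ** 6))
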
Felix Kling