views: 367

answers: 6

Is there a reference table that shows the execution cost of every PHP function?

I know that execution time depends on many factors and that it's impossible to determine a single value, but I'm looking for an 'ideological' table.

For example,

is_dir() = cost 3
is_file() = cost 2

(take it JUST as an example ;)

If I remember correctly, for C there is a table with the CPU cycles needed for every operation...

EDIT: I read all your comments, and so there is no table for my needs.

Anyway, I know that

is_dir('/'); // is faster than
is_dir('/a/very/long/path/to/check/');

But I have some trouble accepting this situation (which, if I understood your words correctly, is possible):

$a = '/';
$b = '/a/very/long/path/to/check/';
is_dir($a);  // execution time 0.003
is_file($a); // execution time 0.005
// so suppose is_dir() is faster than is_file()
// (still an example; function names and numbers are random ;)
is_dir($b);  // execution time 0.013
is_file($b); // execution time 0.009
// wow, now is_file() is faster...?
A: 

You can't "instruction count" like that for PHP. Even with C it would depend on the libc implementation.

Draemon
I don't know how to do that, but it seems strange that nothing like this exists... I mean, is a for() loop faster than a while()? This kind of comparison would be great imho
DaNieL
for() vs while() depends on the compiler (and has nothing to do with libc) and optimisation options and architecture. In practice, they're going to produce identical native code.
Draemon
+1  A: 

I'm not aware of such a table.

The cost of a function will vary a lot depending on the input.

A 1000-item array being processed by array_unique() will take longer than the same function with a 5-item input array.

If you're interested in tuning your application, you could use microtime() to time the execution of each function call.

It would look something like this:

$starttime = microtime(true); // true makes microtime() return a float
is_dir('/var/foo');
$stoptime = microtime(true);
echo "is_dir cost: " . ($stoptime - $starttime);
andrew
I didn't consider the input... So, for example, could I find a situation where is_dir($a) is faster than is_file($a), but is_dir($b) is slower than is_file($b)? P.s.: is_dir and is_file are just the first two functions that came to my mind; it's only an example
DaNieL
+4  A: 

Such a chart would not be helpful or accurate. In your example, the timing of the function would depend very much on the details of the local file system and would quickly become meaningless.

The only true way to measure performance is to build your code, then time it. Is for faster than while? One may be better in one situation and the other faster in another, depending on all kinds of factors specific to the code within.

Time your code, using a profiler, and get some meaningful data that is accurate for your project in your server environment.
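
For example, if you have the Xdebug 2 extension installed (an assumption; any PHP profiler will do), you can profile a whole run from the command line and open the result in a viewer such as KCachegrind:

php -d xdebug.profiler_enable=1 -d xdebug.profiler_output_dir=/tmp myscript.php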

rikh
So you confirm the situation I described in my comment on andrew's answer.
DaNieL
Even if is_file() is faster than is_dir() 80% of the time, it is still meaningless to say it is faster, since it is only faster sometimes, under some conditions. Another bit of code might hit a case where it is slower 100% of the time. The point is that the chart would give you a false sense of what is faster and what is slower. The only true answer is to measure the performance in your specific case and go from there.
rikh
I'd want to add that `is_file()` is not the same thing as `is_dir()`. You don't choose which function to use by how long it takes, but depending on *what you need to do*. There are rarely two functions that do exactly the same thing and are interchangeable, so the question only makes sense for the few cases where two functions produce the same output given the same input. That's very case-by-case though and hardly generalizable.
deceze
Yes, I know the difference between is_dir and is_file; those two are just the first similar functions that came to my mind ;)
DaNieL
A: 

I don't think there is such a thing as a function cost table, because the same function can have very different costs depending on the context.

For example (but just take it as an example!):

is_dir('/'); // will be very fast
is_dir('/symbolic_link/symbolic_link_on_a_remote_fs/'); // will be way slower

That being said, evaluating function call costs can be a very good approach to optimization.

Working with a profiler can help you realize that some specific function calls in your code are slower than you'd like them to be, and re-thinking/re-writing these parts could speed things up a lot.
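
A small helper along these lines (just a sketch; the name time_call is made up) makes it easy to compare candidate calls on your own inputs:

function time_call($fn, array $args = array(), $iterations = 1000)
{
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        call_user_func_array($fn, $args);
    }
    return (microtime(true) - $start) / $iterations; // average seconds per call
}

echo time_call('is_dir', array('/')), "\n";
echo time_call('is_file', array('/')), "\n";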

Nicolas
+1  A: 

I don't think such a table could exist anyhow.

First of all, time greatly depends on the input. But for many functions that's not such a problem: we can state their algorithmic complexity, O(n^2), O(n log n) and so on, and we have a great mathematical apparatus for dealing with that.
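
For example, in_array() has to scan a plain list, which is O(n), while isset() on flipped keys is a hash lookup, roughly O(1). A quick sketch (array size arbitrary):

$n = 100000;
$haystack = range(1, $n);          // plain list: in_array() scans it
$lookup   = array_flip($haystack); // values become keys: isset() does a hash lookup

$start = microtime(true);
in_array($n, $haystack);           // worst case: the target is the last element
echo "in_array: ", microtime(true) - $start, "\n";

$start = microtime(true);
isset($lookup[$n]);
echo "isset: ", microtime(true) - $start, "\n";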

The more important thing is the environment. You cannot predict how long a function will take, because you do not know the whole environment. file_exists() on an NFS volume will not be as fast as on a local ext3 volume. You cannot predict how long a request to a MySQL server will take, and so on.

So what you should do is measure, measure and measure. Rely only on your own measurements and your own environment. It's the only way.

dotsid
A: 

The closest you'll get is likely opcode analysis via the Vulcan Logic Dumper (author's webpage/project homepage link), which used to be called the Vulcan Logic Disassembler (PECL link). Sara Golemon has some nice info about this stuff on her blog.
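
With VLD installed (an assumption), you can dump a script's opcodes from the command line without executing it:

php -d vld.active=1 -d vld.execute=0 myscript.php

That shows what the script compiles to, though not what each opcode costs at runtime.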

ken