Hi!

I have done some profiling on a site and found that strtolower calls took unexpectedly long time.

The context is


function __autoload($class_name) {  
  require_once('app/model/' . strtolower($class_name) . '.php');  
}

And the result is
0.0092 -> __autoload() C:\xxx\config.php:0
0.0093 -> strtolower() C:\xxx\config.php:77
0.0101 -> require-once(C:\xxx.php) C:\xxx\config.php:77
I've seen this on several places in the trace file.

I then tried the function in the following context


for ($i = 0; $i < 100; $i++) {  
  strtolower('SomeStRIng' . $i);  
}

And the result was
0.0026 -> strtolower() C:\xxx\index.php:53
0.0027 -> strtolower() C:\xxx\index.php:53
0.0027 -> strtolower() C:\xxx\index.php:53
0.0027 -> strtolower() C:\xxx\index.php:53

There is a notable difference between the two. It's no biggie overall, of course, but I'm still confused.

A: 

Which profiler did you use? XDebug?

I would suspect it's a problem with the profiler, as you are showing quite a significant difference. See if this profiles any differently...

function __autoload($class_name) {  
  $file=strtolower($class_name);
  require_once('app/model/' . $file . '.php');  
}
Paul Dixon
Using Xdebug. Doing it your way made it drop to about half, 0.0003-0.0005. I don't know how the parser works. Might it be a bit more loaded by the fact that there is more to parse and keep in memory?
Anders
Wha? Did introducing a variable make the code faster?
Raveren
Raveren: I guess it needs a little more scientific benchmarking than my test. It was consistent for all the strtolower calls though, so I guess it somehow made a difference.
Anders
+2  A: 

You're running far too small tests, on far too little data. You'll never get consistent data, as other system factors (like CPU speed/load) will take a far greater toll.

Your first test is disk-bound. Lowering the case of a (hopefully reasonably) short string is essentially instantaneous, or at least measured in microseconds. Hitting the disk to locate/load/parse a file will take on the order of milliseconds. You're trying to detect a difference in something where the part you're not concerned about takes 1000 times longer, i.e. the strtolower overhead is a rounding error.

Your second test, while being purely cpu/memory bound, is also too short to be practical. You can't be sure that doing 100 string concatenations (and the associated memory allocation) won't overwhelm the actual lowercasing. A better test would be to prebuild a series of mixed-case strings (a few hundred or thousand of them), then loop over that array repeatedly and strtolower each one in sequence. That way you've eliminated as much overhead/irrelevant code path as possible, and should hopefully get more consistent data.
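A minimal sketch of that approach (string counts, pass counts, and variable names are my own choices, not from the original post): build the mixed-case strings once, outside the timed region, then time only the strtolower calls with microtime.

```php
<?php
// Prebuild a few thousand mixed-case strings so that
// concatenation/allocation happens outside the timed loop.
$strings = [];
for ($i = 0; $i < 1000; $i++) {
    $strings[] = 'SomeStRIng' . $i;
}

$passes = 100;
$start = microtime(true);
for ($pass = 0; $pass < $passes; $pass++) {
    foreach ($strings as $s) {
        strtolower($s);   // the only work inside the timed region
    }
}
$elapsed = microtime(true) - $start;

// 1000 strings x 100 passes = 100,000 calls total;
// report the average cost per call in microseconds.
$calls = count($strings) * $passes;
printf("%.3f us per strtolower() call\n", $elapsed / $calls * 1e6);
```

Averaging over 100,000 calls smooths out scheduler noise far better than the original 100-iteration loop, which is the point Marc B is making.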

Marc B
Many thanks. I'm aware that I didn't make a scientific test at all, but I saw that the difference was consistent. I'll try to grip what you said in the post above and make sense of it.
Anders