Performance of FOR vs FOREACH in PHP
First of all, I understand in 90% of applications the performance difference is completely irrelevant, but I just need to know which is the faster construct. That and...
The information currently available on them on the net is confusing. A lot of people say foreach is bad, but technically it should be faster, since it's supposed to simplify writing an array traversal using iterators. Iterators, which are again supposed to be faster, but in PHP are also apparently dead slow (or is this not a PHP thing?). I'm talking about the array functions: next(), prev(), reset(), etc. Well, if they are even functions and not one of those PHP language features that only look like functions.
To narrow this down a little: I'm not interested in traversing arrays in steps of anything more than 1 (no negative steps either, i.e. reverse iteration). I'm also not interested in traversal to and from arbitrary points, just 0 to length. I also don't see manipulating arrays with more than 1000 keys happening on a regular basis, but I do see an array being traversed multiple times in the logic of an application! As for operations, it's largely just string manipulation and echo'ing.
Here are a few reference sites:
http://www.phpbench.com/
http://www.php.lt/benchmark/phpbench.php
What I hear everywhere:
- foreach is slow, and thus for / while is faster
- foreach copies the array it iterates over; to make it faster you need to use references
- a loop of the form
  $key = array_keys($aHash);
  $size = sizeOf($key);
  for ($i = 0; $i < $size; $i++)
  is faster than a foreach
Here's my problem. I wrote this test script: http://pastebin.com/1ZgK07US and no matter how many times I run the script, I get something like this:
foreach 1.1438131332397
foreach (using reference) 1.2919359207153
for 1.4262869358063
foreach (hash table) 1.5696921348572
for (hash table) 2.4778981208801
In short:
- foreach is faster than foreach with reference
- foreach is faster than for
- foreach is faster than for, for a hash table

Can someone explain?
PHP Version 5.3.0
Edit: Answer
With help from people here I was able to piece together the answers to all the questions. I'll summarize them here:
Thank you everyone who tried to help.
I'll likely stick to foreach (the non-reference version) for any simple traversal.
My personal opinion is to use what makes sense in the context. Personally, I almost never use for for array traversal. I use it for other types of iteration, but foreach is just too easy... The time difference is going to be minimal in most cases.
The big thing to watch for is:
for ($i = 0; $i < count($array); $i++) {
That's an expensive loop, since it calls count on every single iteration. So long as you're not doing that, I don't think it really matters...
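A minimal sketch of the usual fix, hoisting the count out of the loop condition (the variable names simply follow the snippet above):
$size = count($array);              // evaluated once, before the loop
for ($i = 0; $i < $size; $i++) {
    // each iteration now only pays for the comparison and the increment
}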
As for the reference making a difference, PHP uses copy-on-write, so if you don't write to the array, there will be relatively little overhead while looping. However, if you start modifying the array within the loop, that's where you'll start seeing differences between them (since one will need to copy the entire array, and the reference can just modify it inline)...
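If you want to see copy-on-write in action, here is a rough sketch using memory_get_usage(); the array size is arbitrary and the exact byte counts will vary by PHP version and platform:
$original = range(0, 99999);
$before      = memory_get_usage();
$copy        = $original;           // no data is duplicated yet; both variables share the same storage
$afterAssign = memory_get_usage();
$copy[0]     = -1;                  // the first write triggers the actual copy
$afterWrite  = memory_get_usage();
echo "assignment added ", $afterAssign - $before, " bytes\n";
echo "first write added ", $afterWrite - $afterAssign, " bytes\n";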
As for the iterators, foreach is equivalent to:
$it->rewind();
while ($it->valid()) {
$key = $it->key(); // If using the $key => $value syntax
$value = $it->current();
// Contents of loop in here
$it->next();
}
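For example (just a sketch), iterating an SPL ArrayIterator with foreach goes through exactly those method calls under the hood:
$it = new ArrayIterator(array('a' => 1, 'b' => 2, 'c' => 3));
foreach ($it as $key => $value) {   // internally: rewind(), valid(), current(), key(), next()
    echo $key, " => ", $value, "\n";
}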
As far as there being faster ways to iterate, it really depends on the problem. But I really need to ask: why? I understand wanting to make things more efficient, but I think you're wasting your time on a micro-optimization. Remember, Premature Optimization Is The Root Of All Evil
...
Edit: Based upon the comment, I decided to do a quick benchmark run...
$a = array();
for ($i = 0; $i < 10000; $i++) {
$a[] = $i;
}
$start = microtime(true);
foreach ($a as $k => $v) {
$a[$k] = $v + 1;
}
echo "Completed in ", microtime(true) - $start, " Secondsn";
$start = microtime(true);
foreach ($a as $k => &$v) {
$v = $v + 1;
}
echo "Completed in ", microtime(true) - $start, " Secondsn";
$start = microtime(true);
foreach ($a as $k => $v) {}
echo "Completed in ", microtime(true) - $start, " Secondsn";
$start = microtime(true);
foreach ($a as $k => &$v) {}
echo "Completed in ", microtime(true) - $start, " Secondsn";
And the results:
Completed in 0.0073502063751221 Seconds
Completed in 0.0019769668579102 Seconds
Completed in 0.0011849403381348 Seconds
Completed in 0.00111985206604 Seconds
So if you're modifying the array in the loop, it's several times faster to use references...
And the overhead for just the reference is actually less than copying the array (this is on 5.3.2)... So it appears (on 5.3.2 at least) as if references are significantly faster...
I'm not sure this is so surprising. Most people who code in PHP are not well versed in what PHP is actually doing at the bare metal. I'll state a few things, which will be true most of the time:
If you're not modifying the variable, by-value is faster in PHP. This is because it's reference counted anyway, and by-value gives it less to do. It knows that the second you modify that ZVAL (PHP's internal data structure for most types), it will have to break it off in a straightforward way (copy it and forget about the other ZVAL). But you never modify it, so it doesn't matter. References make that more complicated, with more bookkeeping it has to do to know what to do when you modify the variable. So if you're read-only, paradoxically it's better not to point that out with the &. I know it's counterintuitive, but it's also true.
Foreach isn't slow. And for simple iteration, the condition it's testing against — "am I at the end of this array" — is done using native code, not PHP opcodes. Even if it's APC cached opcodes, it's still slower than a bunch of native operations done at the bare metal.
Using a for loop like "for ($i=0; $i < count($x); $i++)" is slow because of the count(), and because PHP (like pretty much any interpreted language) cannot evaluate at parse time whether anything modifies the array. This prevents it from evaluating the count once.
But even once you fix it with "$c=count($x); for ($i=0; $i<$c; $i++)", the $i < $c is a bunch of Zend opcodes at best, as is the $i++. In the course of 100,000 iterations, this can matter. Foreach knows at the native level what to do: no PHP opcodes are needed to test the "am I at the end of this array" condition.
What about the old school "while(list(" stuff? Well, using each(), current(), etc. involves at least one function call per iteration, which isn't slow but isn't free either. Yes, those are PHP opcodes again! So while + list + each has its costs as well.
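For reference, that old-school pattern looks roughly like this, assuming $aHash is an associative array like the one in the question (note that each() was deprecated in PHP 7.2 and removed in 8.0, so it only matters for the PHP 5.x era discussed here):
reset($aHash);                                  // rewind the internal array pointer
while (list($key, $value) = each($aHash)) {     // each() returns the current pair and advances the pointer
    // every iteration pays for the each() call plus the list() assignment
}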
For these reasons foreach is understandably the best option for simple iteration.
And don't forget, it's also the easiest to read, so it's win-win.
One thing to watch out for in benchmarks (especially phpbench.com) is that even though the numbers are sound, the tests are not. A lot of the tests on phpbench.com do things that are trivial and abuse PHP's ability to cache array lookups, which skews the benchmarks, or in the case of iterating over an array, don't actually test it in real-world cases (no one writes empty for loops). I've done my own benchmarks that I've found are fairly reflective of real-world results, and they always show the language's native iterating syntax, foreach, coming out on top (surprise, surprise).
//make a nicely random array
$aHash1 = range( 0, 999999 );
$aHash2 = range( 0, 999999 );
shuffle( $aHash1 );
shuffle( $aHash2 );
$aHash = array_combine( $aHash1, $aHash2 );
$start1 = microtime(true);
foreach($aHash as $key=>$val) $aHash[$key]++;
$end1 = microtime(true);
$start2 = microtime(true);
while(list($key) = each($aHash)) $aHash[$key]++;
$end2 = microtime(true);
$start3 = microtime(true);
$key = array_keys($aHash);
$size = sizeOf($key);
for ($i=0; $i<$size; $i++) $aHash[$key[$i]]++;
$end3 = microtime(true);
$start4 = microtime(true);
foreach($aHash as &$val) $val++;
$end4 = microtime(true);
echo "foreach ".($end1 - $start1)."n"; //foreach 0.947947025299
echo "while ".($end2 - $start2)."n"; //while 0.847212076187
echo "for ".($end3 - $start3)."n"; //for 0.439476966858
echo "foreach ref ".($end4 - $start4)."n"; //foreach ref 0.0886030197144
//For these tests we MUST do an array lookup,
//since that is normally the *point* of iteration
//I'm also calling noop on it so that PHP doesn't
//optimize out the lookup.
function noop( $value ) {}
//Create an array of increasing indexes, w/ random values
$bHash = range( 0, 999999 );
shuffle( $bHash );
$bstart1 = microtime(true);
for($i = 0; $i < 1000000; ++$i) noop( $bHash[$i] );
$bend1 = microtime(true);
$bstart2 = microtime(true);
$i = 0; while($i < 1000000) { noop( $bHash[$i] ); ++$i; }
$bend2 = microtime(true);
$bstart3 = microtime(true);
foreach( $bHash as $value ) { noop( $value ); }
$bend3 = microtime(true);
echo "for ".($bend1 - $bstart1)."n"; //for 0.397135972977
echo "while ".($bend2 - $bstart2)."n"; //while 0.364789962769
echo "foreach ".($bend3 - $bstart3)."n"; //foreach 0.346374034882