Which result of these two tests is more reliable and accurate?

Hi guys,

I’m trying to improve my JavaScript application’s performance. The for loop is the most critical part of the app: it is constantly being filled with streamed Float32Array and Uint8Array data. I’m considering these two tests, and their results are different. The guy at Oracle says the reverse while loop is fastest:

Here are his test results: https://blogs.oracle.com/greimer/entry/best_way_to_code_a

But the test on this page shows that the for loop with a cached length wins:


So which one do you think is best?

Thank you,

I just ran the tests from that second link in my browser, and the reverse while loop version was slightly faster than the for loop. The results were far too close to have any meaning, though.

The speed at which the code runs can differ between browsers, so a browser that is slightly faster with one piece of code may be a lot slower with it than other browsers are.

At one point I ran a comparison of the table-specific DOM calls against the generic DOM calls, and as I had expected, the specific calls were slightly faster in most browsers. There were two results I didn’t expect, though: one browser where the specific calls ran in less than 1/5 the time of the more generic calls, and one where the specific calls were slightly slower than the generic ones.

From the testing that I have done with loops, it doesn’t make any real difference whether you use a for loop or a while loop. It also doesn’t make any difference whether you process forward or backward through the data. What does make a difference is the number of dot lookups that appear in variable references inside the loop. In the Andrew Hedges code, both this.length and this[i] appear in the for loop, while only this[i] appears in the while loop. Therefore the while loop will be slightly faster.
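To illustrate the dot-lookup point in isolation, here is a small sketch (the object and function names are just made up for the example): the first version looks up two properties through the dot on every pass, while the second copies them into local variables first, so each dot lookup happens only once.

```javascript
// Hypothetical object standing in for any array-like structure.
var obj = { length: 5, data: [10, 20, 30, 40, 50] };

// obj.length and obj.data are each looked up on every iteration.
function sumUncached() {
   var sum = 0;
   for (var i = 0; i < obj.length; i++) {
      sum += obj.data[i];
   }
   return sum;
}

// The dot lookups are done once, before the loop starts.
function sumCached() {
   var data = obj.data;
   var sum = 0;
   for (var i = 0, ii = data.length; i < ii; i++) {
      sum += data[i];
   }
   return sum;
}
```

Both return the same result; the only difference is how many property lookups run inside the loop body.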

Move the this.length reference outside the for loop and the times for both should become too close to measure. The simplest way to avoid looking up the length on each pass through the loop would be to change the code like this:

Array.prototype.in_array_plus =
         function (search_term) {
   for (var i = 0, ii = this.length; i < ii; i++) {
      if (this[i] === search_term) {
         return true;
      }
   }
   return false;
};

I would expect that to take so close to the same time to run as the while loop that you’d never be able to measure the difference given that all of the statements that differ between the two versions are now outside the loop and only run once.
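For comparison, a reverse while-loop version of the same lookup might look like this (the method name in_array_while is just for the example): the length is read once when i is initialised, and i-- both tests and decrements the counter, so the loop body contains only the this[i] lookup.

```javascript
// Reverse while-loop version: the length dot lookup happens only once.
Array.prototype.in_array_while = function (search_term) {
   var i = this.length;
   while (i--) {
      if (this[i] === search_term) {
         return true;
      }
   }
   return false;
};
```

With the length cached in both versions, the for and while variants should run in essentially the same time.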

With Greg Reimer’s tests, the number of loop iterations was too small to accurately reflect the differences between the approaches. With so few iterations the time differences are as big as he says - but hardly worth worrying about. If the loops were run over a million entries instead of a thousand, the differences between the run times of some of those tests would be unchanged (the same gap between them, just with bigger times overall), while the less efficient loops that may be showing as more efficient in those results could take a lot longer - possibly seconds longer. For example, the first four results listed show caching the length as being slightly faster; running more iterations would show it as a lot faster, but the difference between the for loop and while loop versions would not noticeably change.

If you are going to be processing loops with very high numbers of iterations then it would be best to do your own tests with data as close to what you actually expect as possible.
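A minimal sketch of that kind of test, using a Float32Array like the one described in the question (the helper name timeLoop and the iteration counts are just assumptions for the example - substitute your own data and sizes):

```javascript
// Time how long it takes to run fn() the given number of times, in milliseconds.
function timeLoop(fn, repeats) {
   var start = Date.now();
   for (var r = 0; r < repeats; r++) {
      fn();
   }
   return Date.now() - start;
}

// Test data resembling the streamed Float32Array described in the question.
var data = new Float32Array(1000000);
for (var i = 0; i < data.length; i++) {
   data[i] = Math.random();
}

// for loop with the length cached in a local variable.
function forCached() {
   var sum = 0;
   for (var i = 0, ii = data.length; i < ii; i++) {
      sum += data[i];
   }
   return sum;
}

// while loop processing the data in reverse.
function whileReverse() {
   var sum = 0;
   var i = data.length;
   while (i--) {
      sum += data[i];
   }
   return sum;
}

var tFor = timeLoop(forCached, 10);
var tWhile = timeLoop(whileReverse, 10);
console.log('for (cached length): ' + tFor + 'ms');
console.log('while (reverse):     ' + tWhile + 'ms');
```

Run it several times in each browser you care about; single runs are noisy, and the ranking can change from one browser to the next.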

Thanks Stephen,

That was a remarkable test result and helped me gain even more confidence.