Creating Accurate Timers in JavaScript
Asynchronous timers are the cornerstone of all time-based processes in JavaScript: from obvious things like clocks and stopwatches, to visual effects and animation, to the synchronised delays that are vital to the usability of dropdown menus.
But the problem with JavaScript timers is that they’re not very accurate. We couldn’t make a stopwatch just by incrementing a counter on each iteration, because it wouldn’t stay in time:
var time = 0,
    elapsed = '0.0';

window.setInterval(function()
{
    // Assume exactly 100ms have passed since the last tick
    time += 100;

    // Format the elapsed time to one decimal place
    elapsed = Math.floor(time / 100) / 10;
    if(Math.round(elapsed) == elapsed) { elapsed += '.0'; }

    document.title = elapsed;

}, 100);
Web browsers, like all applications, take turns for slices of CPU time, and how long they have to wait will vary with the load. This is what causes the latency in asynchronous timers: a 200ms timer may actually take 202ms or 204ms to fire, and that discrepancy will gradually send the stopwatch out of time.
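You can watch this happen with a minimal sketch (not one of the original examples; it assumes console.log for output) that records how late each tick of a 200ms setInterval actually fires:
var expected = new Date().getTime() + 200;

window.setInterval(function()
{
    // Positive drift means this tick fired later than scheduled
    var drift = new Date().getTime() - expected;
    console.log('drift: ' + drift + 'ms');

    expected += 200;

}, 200);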
The solution in this case is not to rely on the speed of the timer at all, but rather to query the system time afresh on each iteration, and derive the output from that:
var start = new Date().getTime(),
    elapsed = '0.0';

window.setInterval(function()
{
    // Measure real elapsed time from the system clock,
    // rather than trusting the interval to be exact
    var time = new Date().getTime() - start;

    elapsed = Math.floor(time / 100) / 10;
    if(Math.round(elapsed) == elapsed) { elapsed += '.0'; }

    document.title = elapsed;

}, 100);
But what about less literal applications, like animation — can we use the same approach to make them just as accurate?
Can’t beat the system
Recently I’ve been working on some visual transitions, and I found myself pondering exactly that question: if a user specifies a 5-second animation, what can we do to make that animation really last 5 seconds, and not 5.1 or 5.2? It may be a small difference, but small differences add up. And anyway, making things better is an end in itself!
So to cut to the chase: we can indeed use the system clock to compensate for timer inaccuracy. If we run an animation as a series of setTimeout calls, each instance calling the next, then all we have to do to keep it accurate is work out exactly how inaccurate it is, and subtract that difference from the next iteration:
var start = new Date().getTime(),
    time = 0,
    elapsed = '0.0';

function instance()
{
    time += 100;

    elapsed = Math.floor(time / 100) / 10;
    if(Math.round(elapsed) == elapsed) { elapsed += '.0'; }

    document.title = elapsed;

    // Work out how far the real clock has drifted ahead of the
    // expected time, and shorten the next timeout to compensate
    // (a negative delay is treated as zero, firing as soon as possible)
    var diff = (new Date().getTime() - start) - time;
    window.setTimeout(instance, (100 - diff));
}

window.setTimeout(instance, 100);
To see the practical effect of all this, I’ve prepared a demo that illustrates the difference between normal and adjusted timers — Self-adjusting timer examples.
The demo has three examples:
- The first example (left) is just an ordinary timer implemented with setInterval, which shows the cumulative difference between the elapsed time by iteration count and the actual time by the system clock;
- The second example (center) ramps up the amount of work the browser is doing on each iteration, to show how more work means more latency, and therefore much greater inaccuracy (a sketch of this kind of artificial load follows the list);
- The third example (right) does just as much work as the second, but now uses the self-adjusting technique, to illustrate what a profound difference it makes to the overall accuracy.
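The demo’s own source isn’t reproduced here, but escalating busy-work of the kind the second and third examples create can be simulated with a loop like this (a hypothetical sketch, not the demo’s actual code):
// Hypothetical busy-work: burn slightly more CPU time on every
// tick, so each iteration of the timer fires a little later
var iterations = 0;

window.setInterval(function()
{
    iterations++;

    // Busy-wait for one extra millisecond per iteration
    var until = new Date().getTime() + iterations;
    while(new Date().getTime() < until) { /* burn time */ }

}, 100);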
The great thing about this approach is that it really doesn’t matter how inaccurate the timer would otherwise be: the adjustments will always keep it in time. A small amount of constant latency is easily compensated for, but equally, a sudden large spike of latency caused by a surge of processor use (like starting an application) is just as easily moderated. And we can see from the unadjusted examples how, even though one iteration’s worth of inaccuracy is quite small, the cumulative effect can be incredibly large.
Of course even the adjusted timer can’t compensate 100% — it adjusts the speed of the next iteration, and so can’t compensate for the latency of the last iteration. But still, whatever this difference amounts to, it will be tiny compared with the cumulative effect of that discrepancy multiplied by hundreds or thousands of instances.
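To put a number on that residual error, the timer itself can report how far its final tick landed from the ideal total. Here’s a minimal sketch along the lines of the code above, with a hypothetical 5-second target (not taken from the demo):
var start = new Date().getTime(),
    time = 0,
    total = 5000; // hypothetical 5-second target

function instance()
{
    time += 100;

    if(time >= total)
    {
        // Whatever prints here is at most one iteration's latency,
        // not the sum of every iteration's latency
        console.log('off by ' +
            ((new Date().getTime() - start) - time) + 'ms');
        return;
    }

    var diff = (new Date().getTime() - start) - time;
    window.setTimeout(instance, (100 - diff));
}

window.setTimeout(instance, 100);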
Go with the flow
I said earlier that I had this idea while working on animation (for an update to my popular Image transitions), and the following is the abstraction I eventually came up with. It implements a self-adjusting timer, calculates the speed and steps from the input length and resolution, and provides callbacks for on-instance and on-complete:
function doTimer(length, resolution, oninstance, oncomplete)
{
    // eg. length = 5000 (ms) at resolution = 20 (frames per second)
    // gives steps = 100 and speed = 50 (ms per step)
    var steps = (length / 100) * (resolution / 10),
        speed = length / steps,
        count = 0,
        start = new Date().getTime();

    function instance()
    {
        if(count++ == steps)
        {
            oncomplete(steps, count);
        }
        else
        {
            oninstance(steps, count);

            // Subtract the accumulated drift from the next timeout
            var diff = (new Date().getTime() - start) - (count * speed);
            window.setTimeout(instance, (speed - diff));
        }
    }

    window.setTimeout(instance, speed);
}
Here’s a simplified example of its use, which does an opacity fade on an image (using standard syntax only) over 5 seconds at 20 frames/second, so doTimer runs 100 steps of 50ms each (Self-adjusting opacity fade):
var img = document.getElementById('image');

var opacity = 1;
img.style.opacity = opacity;

doTimer(5000, 20, function(steps)
{
    // Reduce the opacity by an equal amount on each step
    opacity = opacity - (1 / steps);
    img.style.opacity = opacity;
},
function()
{
    // Snap to exactly zero when the timer completes
    img.style.opacity = 0;
});
And there you have it — self-adjusting timers make for better animation, giving you confidence that when you specify a 5-second effect, you’ll get one that lasts 5 seconds!
Thumbnail credit: Yukon White Light