Faster page loading: asynchronous calls and tricks of perception

Anyone loading a bunch of CSS or JavaScript via <link> and <script> tags on a web page knows about “blocking”: the browser stops loading the page because it has to go fetch (and read/parse) an external file. If you have a lot of these, or even if you don’t and the user has a slow connection, you have a slow page. Users hateses the slow pages, hateses them.

At the 2010 Fronteers Conference, Stoyan Stefanov gave a talk on Progressive Downloads and Rendering, where he listed some tricks for getting around blocking by JavaScript or other external files to speed up page loading. One trick was adding a <script> tag near the bottom of the body (so after the important stuff has loaded) which dynamically adds another <script> tag to the <head> and runs it. While that file is being fetched, the rest of the page can continue to load. This is a bit asynchronous, isn’t it (similar to how a page keeps rendering content while images are still downloading)?
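A minimal sketch of that trick (the function name and file path are mine, not Stoyan’s): a <script> element created from JavaScript and appended to the <head> downloads without blocking the HTML parser.

```javascript
// Dynamically inject a <script> tag so the file downloads without
// blocking the rest of the page from loading.
function loadScriptAsync(src, callback) {
  var script = document.createElement("script");
  script.src = src;
  if (callback) {
    script.onload = callback; // fires once the file has been fetched and executed
  }
  document.getElementsByTagName("head")[0].appendChild(script);
  return script;
}

// Usage, e.g. in a <script> block near the bottom of <body>:
// loadScriptAsync("/js/widgets.js", function () { /* safe to use the file now */ });
```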

As a follow-up to his Higher Order JavaScript article (see the SitePoint thread about it), Piers Cawley has gone further with Asynchronous Streams, where he uses jQuery (as an example) to load external files asynchronously to avoid blocking the loading of the HTML document.

In my web development career I haven’t worried about blocking, but plenty of folks around here are loading ginormous files, and lots of them, for large sites. As developers, what do you do to get around slow page loads? Have you done anything like this asynchronous calling of the external files?


Steve Souders wrote an article about a very similar method not long ago called Render first. JS second, where he promotes the use of a small JS library called LABjs, which can be used to load and execute other scripts asynchronously (with the option to wait for dependencies to execute before continuing down the chain).
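For a feel of the API, a LABjs chain looks roughly like this (a sketch from memory of the $LAB.script().wait() chaining style; the file names are hypothetical, so check the LABjs docs for the current API):

```javascript
// Hypothetical page setup using LABjs-style chaining: scripts download in
// parallel, but .wait() preserves execution order where there are dependencies.
function loadPageScripts() {
  return $LAB
    .script("jquery.min.js").wait()   // jQuery must execute before the plugin
    .script("jquery.plugin.js")
    .script("analytics.js");          // no dependency, runs whenever it arrives
}
```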

I haven’t used it on any big sites yet, but I’m playing around with it to see where and when I can get the most benefit out of it.

Another obvious thing that I personally do is make sure that CSS sprites are used where appropriate to minimize HTTP requests. I figure 1 x 50kb is better than 50 x 1kb.

Another thing, more on the coding side of things: I try to make sure my JS execution is optimized.
(Because who wants to see a loop like this when using jQuery?)

for (var i = 0; i < someArray.length; i++) {
  $("#some_selector").append("<p>" + someArray[i] + "</p>");
}

This is of course a simplified example, but when you’re talking large loops and DOM manipulation, things tend to bog down quite quickly.

(For those of you watching at home, the optimization to the above snippet would be:)

var someArrLen = someArray.length; // cache the array length so the JS interpreter doesn't need to look it up every iteration
var theHTML = [];
for (var i = 0; i < someArrLen; i++) {
  theHTML.push("<p>" + someArray[i] + "</p>");
}
$("#someSelector").append(theHTML.join("")); // only 1 DOM manipulation required

(Also note that I’m using an array rather than appending to a string. Your mileage may vary performance-wise depending on the browser; see Craig Buckler’s article on High-Performance String Concatenation in JavaScript.)
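To make the comparison concrete, here’s a self-contained sketch of the two approaches (the helper names are mine). Both produce identical markup; which one is faster depends on the browser’s engine, which is Buckler’s point:

```javascript
// Build markup by repeated string concatenation.
function buildWithConcat(items) {
  var html = "";
  for (var i = 0, len = items.length; i < len; i++) {
    html += "<p>" + items[i] + "</p>";
  }
  return html;
}

// Build markup by pushing fragments into an array and joining once.
function buildWithJoin(items) {
  var parts = [];
  for (var i = 0, len = items.length; i < len; i++) {
    parts.push("<p>" + items[i] + "</p>");
  }
  return parts.join("");
}

// Both return "<p>a</p><p>b</p>" for ["a", "b"].
```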

Caching the array length is a micro-optimisation that doesn’t have much of a benefit when dealing with small arrays. Despite that, it can still be a good practice.

Regarding the DOM manipulation, are you aware of the speed differences when using a document fragment instead of working with strings?

Paul, you can’t spell micro-optimization without optimization :wink:

jQuery uses DocumentFragments when it does DOM manipulation :slight_smile:
(I’m a real jQuery whore and I suppose I kind of expect those optimizations to be made at the library level so I don’t have to worry about them too much.)

Having said that, I see your point, and it’s definitely worth noting that DocumentFragments are an excellent way to increase performance. (I just read an article which has a few numbers to support that.)
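For anyone who hasn’t used them, here’s a plain-DOM sketch of the fragment approach (no jQuery; the function name is mine): the nodes are built off-document and attached with a single appendChild, so the live DOM is only touched once.

```javascript
// Build all the <p> nodes inside a detached DocumentFragment, then attach
// them to the live DOM in one operation (one insertion, one reflow).
function appendParagraphs(container, items) {
  var frag = document.createDocumentFragment();
  for (var i = 0, len = items.length; i < len; i++) {
    var p = document.createElement("p");
    p.appendChild(document.createTextNode(items[i]));
    frag.appendChild(p);
  }
  container.appendChild(frag);
}
```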

What I mean by that is that there is a cost-benefit trade-off when performing optimizations.

Some optimizations can result in a 1% or 2% speed improvement, while other optimizations can result in a 300% or 500% speed improvement.

The place to focus your efforts first is on the big numbers. Premature optimization can be more of a hindrance than a help.
The only reason to apply small optimizations is when they can be easily adopted as good-practice techniques in your code.

With PHP code, for example, you should not spend the whole day going through code changing double quotes to single quotes. While there is a minor improvement in doing so, it’s not worth the time it takes, especially when it comes time to explain that to whoever pays you.

However, when writing your code there’s no trade-off in using single quotes instead of double quotes as you write.

This is ironic, considering the use of jQuery and how resource-intensive it is. If you’re worried about a little thing like the cost of looking up the length of an array, then you shouldn’t be using jQuery.

I know Asynchronous Streams is buggy and extremely unreliable in IE6, especially when you start stacking them up one on top of another. Moving all JavaScript to the end of the body is what I have begun doing (well, been forced to). If the JavaScript is written correctly, completely separate from the HTML, it shouldn’t be an issue to move it from the head to the end of the body. Also, make sure you compress everything. Beyond that, if it’s possible to place everything in a single file for output, that will speed things up. I wouldn’t recommend developing the actual, separate scripts all in one file, but rather combining them, compressed, into a single file once they’re completed.

Speaking of which, the High-Performance JavaScript video presentation is well worth a good viewing.