Seven Mistakes That Make Websites Slow

With the holidays just around the corner, companies are ramping up SEM spending, paying close attention to SEO and revamping landing pages. Yet, all of this time, effort and money needed to maximize holiday sales could be in vain if increased site traffic over the holidays causes a website to slow down or even go down.

It’s no secret that performance matters to users. Site speed directly affects bounce rates, conversion rates, revenue, user satisfaction, SEO (explicitly, as a signal in Google’s search rankings, and indirectly through site popularity) and virtually every other business metric worth tracking. Users leave slow sites, and many of them won’t come back.

Not so long ago, eight seconds was cited as a tipping point beyond which users would abandon a website. Then it was six seconds. Then four. Now, the rule of thumb is two seconds. The bar is high, and it’s rising all the time.

Small performance changes can have a big impact

User patience is not linear. Almost no one abandons a site for being too slow within the first second. Beyond that first second, though, absent some feedback (such as the browser title bar showing the page title), users start to bounce at an accelerating rate. By three or four seconds a typical site might lose half its potential visitors. Of course, the specific thresholds vary by website, user action, intent and other factors … but the principle remains the same.

The bottleneck

Quick quiz: what percentage of the time a user spends waiting for your page to load is spent after the HTML comes back to their browser? If you are not in the web development or performance optimization community, the answer might surprise or even astonish you. It is typically over 90%. Most of the time users spend waiting on your website is spent after the HTML page has been retrieved by their browser. Why is this so?

Fetching the HTML is just the beginning

A serious analysis of how browsers work is way out of scope here, but in a nutshell, browsers parse your page’s HTML, sequentially discovering its assets (such as scripts, stylesheets, and images), requesting and then either parsing and executing them or displaying them as appropriate.

But these assets are not simply fetched all at once. Instead, the browser opens a limited number of connections to the server(s) referenced by the page. There is overhead involved in establishing TCP and HTTP connections, and some unavoidable latency in pushing the request and response bytes back and forth across the network.

So, in general, round trips between the browser and server are expensive. The structure of your HTML markup, along with the number and ordering of its assets, is an absolutely critical factor in performance.

Before we head into the heart of holiday site traffic, take a few moments to consider seven of the biggest mistakes web developers and site owners make relating to website performance, and suggestions for how they can be avoided or corrected.

1. Too Many HTTP Requests

This is the single biggest contributor to performance problems in most web pages. Many of the most effective web performance optimization (WPO) techniques relate to solving this problem, albeit in very different ways. Here are a few solutions:

Concatenate scripts and stylesheets

Simply concatenate (combine) multiple script files into one. Ditto for stylesheets: merge the contents of your .css files into one combined stylesheet. There are maintenance costs to doing this manually, but automated solutions abound.
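
In its simplest form, this is a one-line build step. A minimal sketch (the file names and contents are illustrative):

```shell
# Sketch of a trivial concatenation build step.
mkdir -p src build
printf 'var a = 1;\n' > src/plugins.js
printf 'var b = 2;\n' > src/site.js

# One combined file means one HTTP request instead of two.
cat src/plugins.js src/site.js > build/bundle.js
```

Real build tools add minification and dependency ordering on top of this, but the core idea is exactly this simple.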

Combine images with sprites

CSS spriting has become a mainstream technique. The idea is to put many common images (for example, all the graphics for your site’s templates, themes or navigation) into a single large image file. Then you use CSS to precisely position and selectively display just the appropriate portion of the sprite image in each place where an image is needed. So instead of dozens of images, you have just one.

Be forewarned, the maintenance overhead for this technique can be substantial since even minor edits typically require updates to images and CSS, and even to the HTML. Fortunately, tools for automating CSS spriting have sprung up to help address this maintenance burden, like SpriteMe, Compass and Yottaa.
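
As a sketch, suppose a hypothetical nav-sprite.png holds a row of 16×16 icons; each class then exposes one slice of it:

```css
/* Sketch: one sprite file, many icons (file name, class names
   and offsets are illustrative) */
.icon        { background: url(/img/nav-sprite.png) no-repeat;
               width: 16px; height: 16px; display: inline-block; }
.icon-home   { background-position: 0 0; }        /* first 16px slice  */
.icon-cart   { background-position: -16px 0; }    /* second 16px slice */
.icon-search { background-position: -32px 0; }    /* third 16px slice  */
```

Three icons, one HTTP request. You can see why the offsets become a maintenance chore once designers start editing the sprite.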

Use fewer images

Too many images on a page is an endemic problem, approximately as old as the <img> tag itself. Solutions fall into two buckets. One is technical: replace image files with CSS (for example, for background colors, borders, buttons, hover effects and styled text), or even inline smaller images using “data URIs”.
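
The data URI trick embeds the image bytes directly in the stylesheet, so no separate request is made. A sketch (the class name is hypothetical and the base64 payload is truncated for brevity):

```css
/* Sketch: a tiny decorative image inlined as a data URI,
   costing zero extra HTTP requests */
.list-bullet {
  background-image: url("data:image/png;base64,iVBORw0...");
}
```

This only pays off for small images; base64 encoding inflates the byte size by roughly a third, so large images are better fetched normally and cached.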

You can also consider pagination in cases where the images are essential to the page’s purpose, for example an eCommerce catalog.

The other solution may be tougher: work with your site’s designers and product owners to create simpler pages that don’t rely on as many images.

2. Minimal Client-side Processing

Many sites fail to leverage the capabilities of the client, instead pushing all the work to the server. One simple example is form validation. Posting form data to the server, validating it there, and sending back an error message (let alone a whole page) is incredibly inefficient.

Validate on the client

Instead, validate the user’s input from within the page, right where the input is happening. For security reasons, web applications should always also validate on the server side; Rule #1 of web security is that user input cannot be trusted. So, validate on the client for performance and UX reasons, and validate on the server for security reasons.
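
A minimal sketch of the client-side half (the field names and rules are illustrative; remember the server must repeat these checks):

```javascript
// Sketch: validate form input in the browser before submitting.
// The same rules MUST also run on the server -- user input can't be trusted.
function validateOrderForm(fields) {
  var errors = [];
  if (!fields.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
    errors.push("Please enter a valid email address.");
  }
  if (!/^\d+$/.test(String(fields.quantity || ""))) {
    errors.push("Quantity must be a whole number.");
  }
  return errors; // an empty array means the form may be submitted
}
```

Wired up to a form’s submit handler, this gives the user instant feedback with zero round trips to the server.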

Use web standards and MVC separation

Using web standards is critical for creating maintainable, accessible, future-proof websites. A great side effect is it’s also the best foundation for maximizing performance. Use of modern web standards encourages the separation of content (HTML), styling (CSS), and behavior (JavaScript).

Put another way, the venerable “MVC” [Model/View/Controller] design pattern is at play in the client tier of your website’s code.

Think of the HTML (really, the DOM) as the model, the CSS as the view, and the JavaScript as the controller. This separation tends to make code more efficient and maintainable, and makes many optimization techniques much more practical to apply.
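
In markup terms, the separation looks something like this (file names are illustrative):

```html
<!-- Sketch: content, styling and behavior live in separate layers -->
<link rel="stylesheet" href="/css/site.css">  <!-- view: all styling in CSS -->

<button id="buy-button">Buy now</button>      <!-- model: clean semantic HTML,
                                                   no inline styles or onclick -->

<script src="/js/site.js"></script>           <!-- controller: behavior attached
                                                   to #buy-button from JS -->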

Push presentation code into the client tier

In addition to the form validation example noted earlier, many other scenarios call for doing more in the client. Charts and graphs — any sort of pretty-looking data visualization — used to be the sole province of the server.

No more. Now, it often makes more sense to push just the raw data from the server to the client (for example, in JSON format), and use JavaScript and CSS to create pretty graphs, charts and visualizations right in the browser. This way many user interactions can avoid hitting the server at all.

And, by only pushing the data, you save on server CPU, shorten wait time, and leverage the underutilized resources available to each client. There are many great tools for dataviz out there, including Processing, D3 and Flot.
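
A sketch of the pattern (the payload and field names are illustrative): the server ships compact JSON, and the client reshapes it for whatever charting library you use.

```javascript
// Sketch: the server sends raw data; the client derives the chart input.
// A JSON payload like this is far smaller than server-rendered chart markup.
var salesJson = '[{"day":"Mon","orders":120},{"day":"Tue","orders":95}]';

var sales = JSON.parse(salesJson);

// Reshape into [label, value] pairs -- the form many charting
// libraries (e.g. Flot) accept.
var points = sales.map(function (row) {
  return [row.day, row.orders];
});
```

From here, re-sorting, filtering or re-plotting the data is pure client-side work and never touches the server.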

Leverage Ajax techniques

By only requiring small parts of the page to change in response to user actions, you make your site or web application much more responsive and efficient. There are different valid approaches (for example, fetching HTML vs. script vs. data). But don’t refresh the whole page if you don’t need to! If you’re late to the Ajax party, this book is a few years old but is still a great overview.

3. Low Number of Parallel Requests

Fetch a script, parse and execute it, then fetch another one. Rinse and repeat. Then download a few images from the same server, using up all the available connections. Then once they’re downloaded, fetch some more. Sound efficient? It’s not. The bandwidth of the user’s connection is not the constraining factor most of the time; rather, it’s inefficient markup that fails to account for browser behavior.

There are things you can do to your HTML to allow virtually any browser to make many requests at once, which has a huge impact on latency.

Use browser-appropriate domain sharding

Some old but still-popular browsers like IE 7 benefit greatly from “domain sharding”, the practice of using a different host name alias for the same web server, in order to circumvent tiny per-server simultaneous connection limits. Using img1.foo.com and img2.foo.com to point to the same place has an extra DNS lookup cost, but lets you effectively double the number of parallel downloads. Note it’s important not to apply this technique to newer browsers that support lots of parallel connections, because then you incur the DNS cost without any benefit. WPO guru Steve Souders does a nice job explaining the tradeoffs here.
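
One wrinkle worth sketching: the shard assignment should be deterministic, so the same asset always maps to the same hostname and browser caches stay warm. A hypothetical helper (hostnames and hash are illustrative):

```javascript
// Sketch: deterministic domain sharding. Hashing the asset path means
// /img/shoe.png always resolves to the same shard, so it is never
// cached twice under two different hostnames.
var SHARDS = ["img1.example.com", "img2.example.com"];

function shardedHost(path) {
  var hash = 0;
  for (var i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) % SHARDS.length;
  }
  return SHARDS[hash];
}
```

Your templating layer would call this when emitting image URLs, ideally only for the user agents that actually benefit.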

Use intelligent script loading

There has been an explosion in script loaders, which help with minimizing the performance impact of multiple scripts. There are cases where it’s not convenient or practical to concatenate certain files, and intelligent loading of scripts can go a long way towards mitigating the cost of non-concatenated script files.

These loaders typically load scripts asynchronously (to bypass the problem of their blocking behavior) and can also preserve order of execution without requiring sequential download. Serving concatenated, asynchronous JavaScript is still generally the best (and simplest) approach, but leveraging a good script loader can be a real difference-maker if you don’t or can’t concatenate your JavaScript (and convert it to async). Here’s a list of script loaders.

Leverage Keep-Alive

This one’s simple: make sure your web server doesn’t override the default behavior for HTTP 1.1, which is to reuse the same TCP connection for multiple HTTP request/response cycles. There are exceptions (for example, for specialized image servers), but for your average site it’s a no-brainer: use it and your pages will be faster.
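
For Apache httpd, the relevant directives look something like this (Keep-Alive is on by default; the values here are illustrative, so tune them for your traffic):

```apache
# Sketch: ensure persistent connections stay enabled
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5
```

The main thing to check is that no config layer (or a well-meaning "optimization") has set `KeepAlive Off`.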

4. Failure to Leverage Browser Cache / Local Storage

Someone once quipped, “There are two hard problems in computer science: cache invalidation, naming things, and off-by-one errors” (ha). Seriously, though: the fastest way to load an asset is from a local cache. Failing to make use of what the browser already has is a common but solvable problem.

Use the right headers

Setting far-future cache headers for static assets — especially the ones referenced in more than one page — is a great way to improve performance. Since explicit invalidation of client caches is not possible, the way to handle updates to cached content is filename revving (renaming the asset and updating the references to it).

This is another technique that has high maintenance costs if you do it manually, but automation (for example, via build scripts) makes it a snap. Use the “Expires” header for this approach. For frequently updated content, use “Last-Modified” response headers, in order to trigger conditional “If-Modified-Since” requests from the browser. Conditional requests are obviously slower than a local cache lookup, but are still much better than a full round-trip.
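
As a sketch, the far-future response for a revved static asset (say, a hypothetical /css/site.20111121.css) might carry headers like these; the dates and max-age are illustrative:

```http
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=31536000
Expires: Thu, 31 Dec 2037 23:55:55 GMT
```

With headers like this, returning visitors load the file straight from the browser cache; when the file changes, you rename it and the new name forces a fresh fetch.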

Here’s a great HTTP cache overview.

Leverage local storage

A newer weapon in the WPO arsenal is HTML5’s local storage. For browsers that support it, it allows much, much more data to be explicitly stored on the client than cookies allow, and unlike cookies it doesn’t weigh down each request.
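
A sketch of the idea: cache an expensive-to-fetch payload in localStorage so repeat lookups never leave the client. The wrapper and fallback here are hypothetical (the fallback keeps the code working where localStorage is unavailable):

```javascript
// Sketch: use HTML5 localStorage as a client-side cache, with an
// in-memory fallback for environments that lack it.
var store = (typeof localStorage !== "undefined") ? localStorage : (function () {
  var mem = {};
  return {
    getItem: function (k) { return mem.hasOwnProperty(k) ? mem[k] : null; },
    setItem: function (k, v) { mem[k] = String(v); }
  };
})();

function cachedFetch(key, fetchFn) {
  var hit = store.getItem(key);
  if (hit !== null) return JSON.parse(hit); // served locally, no round trip
  var fresh = fetchFn();                    // e.g. an Ajax call in real code
  store.setItem(key, JSON.stringify(fresh));
  return fresh;
}
```

Unlike HTTP caching, this is fully under your control: you decide what to store, when, and when to throw it away.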

5. Third-party widgets

Third-party widgets are the bane of every performance-conscious site operator’s life.

Avoid third-party widgets!

Don’t use them if you can help it. A couple of social media plugins and an analytics integration may be genuinely necessary, but beyond those, avoid third-party widgets like the plague.

Use async implementations

Try to use widgets that provide asynchronous implementations, so that their inevitably poor performance stays contained to the widget itself rather than dragging down your entire UX with it.
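
In the simplest case this is just the `async` attribute on the widget’s script tag (the provider URL here is illustrative):

```html
<!-- Sketch: load a third-party widget asynchronously so a slow provider
     can't block your page from rendering -->
<script async src="http://widgets.example.com/share-buttons.js"></script>
```

Many widget vendors also publish their own async embed snippets; prefer those over the blocking versions.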

Measure performance (and stop using the slow ones)!

Watch them carefully and either insist on an SLA, switch widget providers or find a way to do without the widget. (This point about measurement applies to all aspects of performance. The things you measure have a funny tendency to improve, and you can’t optimize what you don’t measure.)

6. Too Many Bytes

Like “Too Many HTTP Requests”, this is a high-level problem addressed by many different WPO techniques. There are lots of ways to make responses (and even requests) smaller:

Compression

One obvious but important solution is to introduce compression (a la gzip). The slight performance penalty for decompression on the client is typically dwarfed by the reduced latency, with fewer bytes going across the wire. On the server side, pre-compressing static resources helps reduce CPU overhead. Server-side solutions like Apache’s mod_deflate make it trivial to compress dynamic content and to ensure compressed content is only sent to clients that can handle it (as indicated by a request header like “Accept-Encoding: gzip, deflate”).

One gotcha to watch out for is compression combined with caching: make sure to use a “Vary: Accept-Encoding” header so caches respond with content appropriate for the request.
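A sketch of both pieces for Apache httpd, using mod_deflate for compression and mod_headers for the Vary header (the MIME-type list is illustrative):

```apache
# Sketch: compress text responses...
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# ...and tell intermediate caches the response varies by encoding
Header append Vary Accept-Encoding
```

Without the Vary header, a cache could serve a gzipped response to a client that never advertised `Accept-Encoding: gzip`.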

Other techniques include:

  • More ruthless content editing
  • Image optimization (a la http://www.smushit.com/ysmush.it/)
  • JavaScript and CSS refactoring and minification
  • Client-tier code reuse
  • Pagination
  • Ajax
  • Cookie-less domains (for images and other static assets)

7. Failure to Use a Global Network

One very common mistake is to ignore geography. If your site is hosted in a NYC data center, there is a huge difference in latency for users in Boston versus users in California (let alone Asia). Serving content from the edge is the role of the traditional CDN. Using a cloud provider to distribute your content to even more locations so users always pull from a server near them is even better.

Cutting-edge Web Performance Optimization services like Yottaa, which can route requests across multiple cloud providers and distribute your pages and their contents to users all across the globe, represent the next generation of solutions for optimizing for a geographically diverse audience.

Performance matters, especially around the holidays. Before hitting Black Friday, Cyber Monday and beyond, measure your site’s performance and then improve it.

Whether you do it by hand, do it at build time, do it at deploy time, do it on your server at request time, or do it in the cloud with your favorite web performance optimization service … just do it!

Further Reading and Other Resources:

Firebug
YSlow
WPT
dynaTrace
Yottaa
High Performance Web Sites
Even Faster Web Sites
Book of Speed
Web2.0 Expo
Velocity Conference


  • http://www.credoinfotech.com/ Credo Infotech

    Great tips…

    Yes, I agree with you. These are definitely costly mistakes that will make a website slow. Slow page loads are among the worst disasters a website can experience. Most people won’t wait more than a few seconds for a site to load; they will simply exit. Over time, slow loading will also kill the website in the search engines.

    I’d also like to add poorly designed websites to the list. Poorly designed, as in horrible color schemes, can also present a major disaster.

  • alexander farkas

    Sorry, but combining JS files into a single script is in most cases not the best for performance. It can even be the worst. How scripts are loaded by the browser has changed dramatically in the last 3 years (beginning with IE8). They load in parallel, for example.

    It really depends on the file size. If your JS files are greater than 40kb, you should split them; if they are smaller than 15kb, you should combine them. Telling people to merge all files into one huge file is really bad advice.

    Don’t get me wrong, concatenating scripts is still best practice, but combining everything into one file does not take into account that scripts should be loaded in parallel, and modern browsers already do this automatically without a script loader.

    • http://www.yottaa.com/ Chris Weekly

      [Disclaimer: I am Yottaa's Director of Product Management]
      Hi Alexander,
      No apology needed, thanks for weighing in. You’re correct that — as is often the case with optimization (or anything complex) — the messy truth with all its caveats and exceptions can sometimes be more complicated than easily fits into a bullet point. For some browsers, some of the time, exercising restraint in how far you take concatenation can improve performance. But in our experience, most of the time, concatenating scripts within reason will improve performance versus not concatenating them. Modern browsers do much better than older ones, sure — but market share for older browsers (as you’re well aware, I’m sure) remains stubbornly high. This is one of the most compelling arguments in favor of automating web performance optimization (e.g. with Yottaa Site Speed Optimizer) — the burden of maintaining browser-specific optimizations like this becomes someone else’s problem.
      Anyway thanks for your comment!
      Best Regards,
      Chris

  • http://youmakethewebsite.com.au/ MakingWebsites

    Nice stuff, people forget how important speed is and just concentrate on something that looks great.

    I’m in a constant battle myself re: social media plugins. I’ve currently removed all plugins (just have static links at the moment) and am reevaluating my strategy on their use. For example, the standard +1 badge uses JS of around 50kb!

  • http://www.delfin.com.ve Delfín

    Cufon can make a website really slow if it not used properly.
    It should be used for certain text, not all.

    Delfín
    @delfinfb

  • Arnie Keller

    “The other solution may be tougher: work with your site’s designers and product owners to create simpler pages that don’t rely on as many images.”

    Don’t forget about optimizing images. It’s amazing how much you can reduce their file size and still have an entirely acceptable image.

    • http://www.yottaa.com/ Chris Weekly

      Absolutely right, Arnie.
      This was mentioned in passing “Image optimization (a la http://www.smushit.com/ysmush.it/)” in the article too but bears repeating.
      Cheers

  • http://www.reactivedesigns.net Jeff Boulton

    I use a couple of tools which help me optimize my code – Page Speed and YSlow in Firebug help me with the above points. I optimize one item, then run the tools and check the feedback – it’s definitely a process which takes time.

  • John Manoah

    Thanks … it was very informative and something that is need of the hour.

  • Dan

    A nice simple infographic to demonstrate your point about how long people wait these days:

    http://www.shiftscape.com/website/the-need-for-speed/

    • Dan

      Hi,

      It’s an interesting point in one of the comments re being “browser specific”. I also have a nice article on how to speed up the client side browser for Firefox. It involves setting FF to use “pipe-lining” which enables more than the default amount of connections and can load page content in parallel giving a faster experience. I think IE has a default max connections of about 2 (might be wrong there)

      http://www.shiftscape.com/internet/the-quick-brown-fox/

  • http://www.e-interesy.pl E-commerce

    It’s true that Cufon can really make website slow. I’ve used one 3 MB (!) .js file with font data.