Seven Mistakes That Make Websites Slow


With the holidays just around the corner, companies are ramping up SEM spending, paying close attention to SEO and revamping landing pages. Yet all the time, effort and money spent maximizing holiday sales could be in vain if increased traffic over the holidays causes a website to slow down, or even go down.

It’s no secret that performance matters to users. Site speed directly affects bounce rates, conversion rates, revenue, user satisfaction, SEO (explicitly, as a ranking signal in Google’s algorithm, and indirectly, through site popularity) and virtually every other business metric worth tracking. Users leave slow sites, and many of them won’t come back.

Not so long ago, eight seconds was cited as a tipping point beyond which users would abandon a website. Then it was six seconds. Then four. Now, the rule of thumb is two seconds. The bar is high, and it’s rising all the time.

Small performance changes can have a big impact

User patience is not linear. Almost no one abandons a site for being too slow within the first second. But beyond that first second, absent some feedback (such as the browser title bar showing the page title), users start to bounce at an accelerating rate. By three or four seconds, a typical site might lose half its potential visitors. Of course, the specific thresholds vary by website, user action, intent and other factors, but the principle remains the same.

The bottleneck

Quick quiz: what percentage of the time a user spends waiting for your page to load is spent after the HTML comes back to their browser? If you are not in the web development or performance optimization community, the answer might surprise or even astonish you: it is typically over 90%. Most of the time users spend waiting on your website is spent after the HTML page has been retrieved by their browser. Why is this so?

Fetching the HTML is just the beginning

A serious analysis of how browsers work is way out of scope here, but in a nutshell, browsers parse your page’s HTML, sequentially discovering its assets (such as scripts, stylesheets, and images), requesting and then either parsing and executing them or displaying them as appropriate.

But these assets are not simply fetched all at once. Instead, the browser opens a limited number of connections to the server(s) referenced by the page. There is overhead involved in establishing TCP and HTTP connections, and some unavoidable latency in pushing the request and response bytes back and forth across the network.

So, in general, round trips between the browser and server are expensive. The structure of the HTML markup, and the number and ordering of the assets it references, are absolutely critical factors in page performance.
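
You can check the 90% claim against your own pages with the browser’s Navigation Timing API. Here’s a rough sketch that compares the time spent fetching the HTML with the total load time:

```javascript
// Rough sketch using the Navigation Timing API: how much of the total
// load time happens *after* the HTML response has arrived?
window.addEventListener('load', function () {
  setTimeout(function () { // let loadEventEnd settle before reading it
    var t = performance.timing;
    var total = t.loadEventEnd - t.navigationStart;   // full page load
    var htmlDone = t.responseEnd - t.navigationStart; // HTML retrieved
    console.log('Share of wait after HTML arrived: ' +
      Math.round(100 * (total - htmlDone) / total) + '%');
  }, 0);
});
```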

Before we head into the heart of holiday site traffic, take a few moments to consider seven of the biggest mistakes web developers and site owners make relating to website performance, and suggestions for how they can be avoided or corrected.

1. Too Many HTTP Requests

This is the single biggest contributor to performance problems in most web pages. Many of the most effective web performance optimization (WPO) techniques relate to solving this problem, albeit in very different ways. Here are a few solutions:

Concatenate scripts and stylesheets

Simply concatenate (combine) multiple script files into one. Ditto for stylesheets: include the contents of subsequent .css files in one combined stylesheet. There are maintenance costs for doing this manually, but automated solutions abound.
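
As a minimal build-time sketch (Node.js, with made-up file names), concatenation can be as simple as:

```javascript
// Build-time concatenation sketch (hypothetical file names): three
// script requests become one.
const fs = require('fs');

const sources = ['js/lib.js', 'js/menu.js', 'js/app.js'];
const combined = sources
  .map((file) => fs.readFileSync(file, 'utf8'))
  .join(';\n'); // the semicolon guards against files missing a trailing one
fs.writeFileSync('js/all.js', combined);
// The page then references a single js/all.js instead of three files.
```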

Combine images with sprites

CSS spriting has become a mainstream technique. The idea is to put many common images (for example, all the graphics for your site’s templates, themes or navigation) into a single large image file. Then you use CSS to precisely position and selectively display just the appropriate portion of the sprite image in each place where an image is needed. So instead of dozens of images, you have just one.

Be forewarned, the maintenance overhead for this technique can be substantial since even minor edits typically require updates to images and CSS, and even to the HTML. Fortunately, tools for automating CSS spriting have sprung up to help address this maintenance burden, like SpriteMe, Compass and Yottaa.
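
The CSS side of the technique looks something like the sketch below (class names and pixel offsets are invented, and the CSS is injected via JavaScript only to keep the example self-contained; in practice these rules live in your stylesheet):

```javascript
// Sprite sketch: every icon shares one downloaded image; the CSS just
// shows a different 16x16 window into it.
var spriteCss =
  '.icon      { background: url(/img/sprite.png) no-repeat; ' +
  '             width: 16px; height: 16px; }' +
  '.icon-cart { background-position: 0 0; }' +
  '.icon-user { background-position: -16px 0; }';
var style = document.createElement('style');
style.textContent = spriteCss;
document.head.appendChild(style);
```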

Use fewer images

Using too many images in a page is an endemic problem roughly as old as the <img> tag itself. Solutions fall into two buckets. One is technical: replace image files with CSS (for example, for background colors, borders, buttons, hover effects and styled text), or even inline smaller images using “data URIs”.
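
To make the data URI idea concrete, here’s a sketch; a real build step would bake the base64 string into your CSS or HTML, but a 1×1 canvas lets the example generate one on the fly:

```javascript
// Data URI sketch: a tiny image encoded directly into the page, costing
// zero extra HTTP requests. Generated from a canvas here just so the
// example is self-contained.
var canvas = document.createElement('canvas');
canvas.width = canvas.height = 1;
var uri = canvas.toDataURL('image/png'); // "data:image/png;base64,iVBOR..."

var img = document.createElement('img');
img.src = uri; // no extra request for this image
document.body.appendChild(img);
```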

You can also consider pagination in cases where the images are essential to the page’s purpose, for example an eCommerce catalog.

The other solution may be tougher: work with your site’s designers and product owners to create simpler pages that don’t rely on as many images.

2. Minimal Client-Side Processing

Many sites fail to leverage the capabilities of the client, instead pushing all the work to the server. One simple example is form validation. Posting form data to the server, validating it there, and sending back an error message (let alone a whole page) is incredibly inefficient.

Validate on the client

Instead, validate the user’s input from within the page, right where the input is happening. For security reasons, web applications should always also validate on the server side; Rule #1 of web security is that user input cannot be trusted. So, validate on the client for performance and UX reasons, and validate on the server for security reasons.
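
A minimal sketch of the client-side half (the form and field names are made up, and the same check must still run on the server):

```javascript
// Client-side validation sketch: catch an obviously bad email address
// before it costs a round trip. Server-side validation still happens.
document.querySelector('#signup').addEventListener('submit', function (e) {
  var email = document.querySelector('#email').value;
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    e.preventDefault(); // don't post to the server
    document.querySelector('#email-error').textContent =
      'Please enter a valid email address.';
  }
});
```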

Use web standards and MVC separation

Using web standards is critical for creating maintainable, accessible, future-proof websites. A great side effect is it’s also the best foundation for maximizing performance. Use of modern web standards encourages the separation of content (HTML), styling (CSS), and behavior (JavaScript).

Put another way, the venerable “MVC” [Model/View/Controller] design pattern is at play in the client tier of your website’s code.

Think of the HTML (really, the DOM) as the model, the CSS as the view, and the JavaScript as the controller. This separation tends to make code more efficient and maintainable, and makes many optimization techniques much more practical to apply.
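
In practice, that separation is less a framework than a habit. A sketch, with invented selector and class names:

```javascript
// "Controller" kept out of the markup: no onclick attributes scattered
// through the HTML. Behavior is attached here, and the CSS "view"
// decides what .open looks like. (NodeList.forEach needs a modern browser.)
document.querySelectorAll('.expandable').forEach(function (el) {
  el.addEventListener('click', function () {
    el.classList.toggle('open');
  });
});
```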

Push presentation code into the client tier

In addition to the form validation example noted earlier, many other scenarios call for doing more in the client. Charts and graphs — any sort of pretty-looking data visualization — used to be the sole province of the server.

No more. Now, it often makes more sense to push just the raw data from the server to the client (for example, in JSON format), and use JavaScript and CSS to create pretty graphs, charts and visualizations right in the browser. This way many user interactions can avoid hitting the server at all.

And, by only pushing the data, you save on server CPU, shorten wait time, and leverage the underutilized resources available to each client. There are many great tools for dataviz out there, including Processing, D3 and Flot.
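
A sketch of the pattern (the endpoint and the crude div-based chart are invented; a real project would use one of the libraries above):

```javascript
// Client-side rendering sketch: the server sends only raw JSON and the
// browser draws a (very) crude bar chart from it.
fetch('/api/sales') // hypothetical endpoint returning [{label, value}, ...]
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var chart = document.querySelector('#chart');
    data.forEach(function (point) {
      var bar = document.createElement('div');
      bar.style.width = point.value + 'px';
      bar.style.height = '12px';
      bar.style.margin = '2px 0';
      bar.style.background = '#4a90d9';
      bar.title = point.label + ': ' + point.value;
      chart.appendChild(bar);
    });
  });
```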

Leverage Ajax techniques

By only requiring small parts of the page to change in response to user actions, you make your site or web application much more responsive and efficient. There are different valid approaches (for example, fetching HTML vs script vs data). But don’t refresh the whole page if you don’t need to! If you’re late to the Ajax party, this book is a few years old but is still a great overview.
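
The core of the technique fits in a few lines (the endpoint and element names here are invented):

```javascript
// Ajax sketch: refresh one region of the page instead of reloading
// the whole document.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/cart/summary', true);
xhr.onload = function () {
  if (xhr.status === 200) {
    document.querySelector('#cart-summary').innerHTML = xhr.responseText;
  }
};
xhr.send();
```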

3. Low Number of Parallel Requests

Fetch a script, parse and execute it, then fetch another one. Rinse and repeat. Then download a few images from the same server, using up all the available connections. Then once they’re downloaded, fetch some more. Sound efficient? It’s not. The bandwidth of the user’s connection is not the constraining factor most of the time; rather, it’s inefficient markup that fails to account for browser behavior.

There are things you can do to your HTML to allow virtually any browser to make many requests at once, which has a huge impact on latency.

Use browser-appropriate domain sharding

Some old but still-popular browsers like IE 7 benefit greatly from “domain sharding”, the practice of using a different host name alias for the same web server, in order to circumvent tiny per-server simultaneous connection limits. Using img1.foo.com and img2.foo.com to point to the same place has an extra DNS lookup cost, but lets you effectively double the number of parallel downloads. Note it’s important not to apply this technique to newer browsers that support lots of parallel connections, because then you incur the DNS cost without any benefit. WPO guru Steve Souders does a nice job explaining the tradeoffs here.
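
As a sketch, sharding can be as simple as alternating host aliases when you emit image URLs (the host names are invented):

```javascript
// Domain sharding sketch: both aliases point at the same server, but an
// older browser treats them as two hosts and opens twice the connections.
function shardedUrl(path, index) {
  var host = 'img' + ((index % 2) + 1) + '.example.com'; // img1 / img2
  return 'http://' + host + path;
}
// shardedUrl('/products/123.jpg', 0) -> "http://img1.example.com/products/123.jpg"
// shardedUrl('/products/456.jpg', 1) -> "http://img2.example.com/products/456.jpg"
```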

Use intelligent script loading

There has been an explosion in script loaders, which help with minimizing the performance impact of multiple scripts. There are cases where it’s not convenient or practical to concatenate certain files, and intelligent loading of scripts can go a long way towards mitigating the cost of non-concatenated script files.

These loaders typically load scripts asynchronously (to bypass the problem of their blocking behavior) and can also preserve order of execution without requiring sequential download. Serving concatenated, asynchronous JavaScript is still generally the best (and simplest) approach, but leveraging a good script loader can be a real difference-maker if you don’t or can’t concatenate your JavaScript (and convert it to async). Here’s a list of script loaders.
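
At their core, these loaders build on one simple idea; here’s a stripped-down sketch of it (real loaders add ordering guarantees, error handling and more):

```javascript
// Async script loading sketch: an injected script element downloads
// without blocking HTML parsing.
function loadScript(src, onload) {
  var s = document.createElement('script');
  s.src = src;
  s.async = true;
  s.onload = onload;
  document.head.appendChild(s);
}

loadScript('/js/widgets.js', function () {
  // safe to use whatever widgets.js defines
});
```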

Leverage Keep-Alive

This one’s simple: make sure your web server doesn’t override the default behavior for HTTP 1.1, which is to reuse the same TCP connection for multiple HTTP request/response cycles. There are exceptions (for example for specialized image servers), but for your average site it’s a no-brainer: use it and your pages will be faster.
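
In Apache, that means leaving the KeepAlive directive on (its default). In Node.js terms, the same idea looks like this sketch (the timeout value is purely illustrative):

```javascript
// Keep-Alive sketch: Node's HTTP server reuses connections by default
// under HTTP/1.1; the main mistake is disabling that. Here we just tune
// how long an idle connection is held open for reuse.
var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('hello');
});
server.keepAliveTimeout = 5000; // milliseconds; illustrative value
server.listen(8080);
```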

4. Failure to Leverage Browser Cache / Local Storage

Someone said, “There are two hard problems in computer science: cache invalidation, naming things, and off-by-one errors” (ha). Seriously, though: the fastest way to load an asset is from a local cache. Failing to make use of what the browser already has is a common but solvable problem.

Use the right headers

Setting far-future cache headers for static assets — especially the ones referenced in more than one page — is a great way to improve performance. Since explicit invalidation of client caches is not possible, the way to handle updates to cached content is filename revving (renaming the asset and updating the references to it).

This is another technique that has high maintenance costs if you do it manually, but automation (for example, via build scripts) makes it a snap. Use the “Expires” header for this approach. For frequently updated content, use “Last-Modified” response headers in order to trigger conditional “If-Modified-Since” requests from the browser. Conditional requests are obviously slower than a local cache lookup, but are still much better than a full round trip.
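
A sketch of the server side (Node.js, with an invented revved filename scheme):

```javascript
// Far-future caching sketch: because /js/app.v42.js gets a new name on
// every release, it's safe to tell browsers to cache it for a year.
var http = require('http');

http.createServer(function (req, res) {
  if (/\.(js|css|png|gif|jpg)$/.test(req.url)) {
    res.setHeader('Cache-Control', 'public, max-age=31536000'); // one year
    res.setHeader('Expires',
      new Date(Date.now() + 31536000 * 1000).toUTCString());
  }
  res.end('...asset bytes would go here...');
}).listen(8080);
```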

Here’s a great HTTP cache overview.

Leverage local storage

A newer weapon in the WPO arsenal is HTML5’s local storage. For browsers that support it, it allows far more data to be stored explicitly on the client than cookies do, and unlike cookies, it doesn’t weigh down each request.
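
A sketch of local storage as a client-side cache (the key, the endpoint and the render() function are all invented):

```javascript
// localStorage sketch: serve from local storage instantly when we can,
// fall back to the network (and populate the cache) when we can't.
var cached = localStorage.getItem('product-list');
if (cached) {
  render(JSON.parse(cached)); // render() is assumed to exist elsewhere
} else {
  fetch('/api/products')
    .then(function (res) { return res.json(); })
    .then(function (data) {
      localStorage.setItem('product-list', JSON.stringify(data));
      render(data);
    });
}
```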

5. Third-Party Widgets

Third-party widgets are the bane of every performance-conscious site operator’s life.

Avoid third-party widgets!

Don’t use them if you can help it. A couple of social media plugins and an analytics integration may be unavoidable, but beyond that, avoid them like the plague.

Use async implementations

Try to use widgets that provide asynchronous implementations, so their inevitably poor performance affects only the widget itself rather than dragging your entire UX down with it.
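
This is the same async-injection idea as the script loaders above; most widget vendors offer an embed snippet along these lines (the URL here is invented):

```javascript
// Async widget embed sketch: if widgets.example.com is slow or down,
// only the widget suffers, not the rest of the page.
(function () {
  var s = document.createElement('script');
  s.src = '//widgets.example.com/embed.js';
  s.async = true;
  document.head.appendChild(s);
})();
```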

Measure performance (and stop using the slow ones)!

Watch them carefully and either insist on an SLA, switch widget providers or find a way to do without the widget. (This point about measurement applies to all aspects of performance. The things you measure have a funny tendency to improve, and you can’t optimize what you don’t measure.)
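
The browser can help with the measuring. A sketch using the Resource Timing API (the one-second threshold is arbitrary):

```javascript
// Resource Timing sketch: flag any asset, third-party widgets included,
// that took more than a second to fetch.
performance.getEntriesByType('resource').forEach(function (entry) {
  if (entry.duration > 1000) {
    console.warn('Slow asset: ' + entry.name +
      ' (' + Math.round(entry.duration) + 'ms)');
  }
});
```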

6. Too Many Bytes

Like “Too Many HTTP Requests”, this is a high-level problem addressed by many different WPO techniques. There are lots of ways to make responses (and even requests) smaller:

Compression

One obvious but important solution is to introduce compression (a la gzip). The slight performance penalty for decompression on the client is typically dwarfed by the reduced latency, with fewer bytes going across the wire. On the server side, pre-compressing static resources helps reduce CPU overhead. Server-side solutions like Apache’s mod_deflate make it trivial to compress dynamic content and to ensure compressed content is only sent to clients that can handle it (as indicated by a request header like “Accept-Encoding: gzip, deflate”).

One gotcha to watch out for is compression combined with caching: make sure to use a “Vary: Accept-Encoding” header so caches respond with content appropriate for the request.
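
Here’s a hand-rolled sketch of both points in Node.js (in production you’d lean on mod_deflate or similar rather than rolling your own):

```javascript
// Gzip sketch: compress only when the client advertises support, and
// send Vary so caches key responses on Accept-Encoding.
var http = require('http');
var zlib = require('zlib');

http.createServer(function (req, res) {
  var body = '<html>...imagine a large response here...</html>';
  res.setHeader('Vary', 'Accept-Encoding');
  if (/\bgzip\b/.test(req.headers['accept-encoding'] || '')) {
    res.setHeader('Content-Encoding', 'gzip');
    zlib.gzip(body, function (err, compressed) {
      if (err) {
        res.removeHeader('Content-Encoding');
        res.end(body); // fall back to the uncompressed response
      } else {
        res.end(compressed);
      }
    });
  } else {
    res.end(body);
  }
}).listen(8080);
```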

Other techniques include:

  • More ruthless content editing
  • Image optimization (a la http://www.smushit.com/ysmush.it/)
  • JavaScript and CSS refactoring and minification
  • Client-tier code reuse
  • Pagination
  • Ajax
  • Cookie-less domains (for images and other static assets)

7. Failure to Use a Global Network

One very common mistake is to ignore geography. If your site is hosted in a NYC data center, there is a huge difference in latency for users in Boston versus users in California (let alone Asia). Serving content from the edge is the role of the traditional CDN. Using a cloud provider to distribute your content to even more locations so users always pull from a server near them is even better.

Cutting-edge Web Performance Optimization services like Yottaa, which can route requests across multiple cloud providers and distribute your pages and their contents to users all across the globe, represent the next generation of solutions for optimizing for a geographically diverse audience.

Performance matters, especially around the holidays. Before hitting Black Friday, Cyber Monday and beyond, measure your site’s performance and then improve it.

Whether you do it by hand, do it at build time, do it at deploy time, do it on your server at request time, or do it in the cloud with your favorite web performance optimization service … just do it!

Further Reading and Other Resources:

  • Firebug
  • YSlow
  • WPT
  • dynaTrace
  • Yottaa
  • High Performance Web Sites
  • Even Faster Web Sites
  • Book of Speed
  • Web 2.0 Expo
  • Velocity Conference


Coach Wei is CEO of Yottaa, the Web Performance Company. Coach is a frequent writer and speaker on web performance technology, industry trends and the start-up ecosystem. Coach obtained his master's degree from MIT, holds six patents and maintains a blog on startups, web 2.0 and entrepreneurship.
