Building a High Performance Website


Building a fast website is hard.

Leading an initiative to improve site performance is even harder.

With the numerous techniques that can be used to increase site speed, project managers constantly struggle with both technical and project management questions. Do you focus on front-end or back-end optimization? What do you need to find out from your web development team before developing a strategy to improve website performance? Which optimizations should be done manually, and which should be done automatically through site performance services?

Having the best possible answers to these questions is absolutely crucial to the overall success of any initiative to improve website performance. Before choosing techniques that will increase site speed, project managers must first have a solid grasp of the factors that affect website performance, and analyze how these factors are affecting the site in question. This, coupled with an understanding of available site optimization techniques, will help project managers make the most well-informed site performance decisions.

Front End vs. Back End

There are behind-the-scenes choices that your in-house or contracted website developer makes, often invisibly, that will affect how easy or difficult it will be to improve your website’s performance in the long run. These “dark” decisions can be broken down into two distinct categories: front-end choices and back-end choices.

The front-end choices (and there are many choices to make here!) impact how your website is assembled by the end user’s browser once it reaches them.

The back-end choices tend to be more rigid and inflexible than the front-end choices. These relate to how you host and maintain your website, as well as to the same choices made by the many different 3rd party providers that are commonly integrated into a modern website (think social networking and advertising widgets).

You should become intimately aware of four specific metrics, because the solutions and techniques that you can apply from these metrics impact both the front-end and back-end choices.

Time to first byte

This is actually the combination of two separate measures: the time to find your server (a DNS lookup, with its own internal costs) and the time to connect to it. This cost is paid for every resource that your webpage needs in order to function properly.

A service called the Domain Name System (DNS) sits at the “lowest” level of the network stack that delivers the modern Internet as we know it. Without DNS, a device would not be able to find the resources needed to assemble your website. Not all DNS services are equal. Every device (and every browser on that device) may have different policies in place that impact where it gets its DNS information and how long it saves (caches) that information.

DNS resolution is amortized across many resource requests. It is frequently amortized across multiple browser sessions and even browser instances (e.g. Firefox versus Chrome or IE).

The other component is the time to connect: how long it takes to establish the HTTP connection to the backend servers (yours, and those of the 3rd party assets your site depends on). This is often a more expensive operation than DNS because it is inherently more complicated and subject to longer “code paths”.

Backend decisions impact how much work must be done to satisfy the request once a connection is established. Serving static resources (e.g. scripts, style sheets and images) is often significantly faster than executing code that applies a template and substitutes user-specific resources. C’est la vie.

Time to title

The time to title is the first indication to the end user that his or her browser has performed DNS lookups, connected to the primary domain, and that your web server has delivered the first resource (the HTML document) that defines the webpage.

The time to title records how long it took from initial request to updating the title bar in the browser with the page name. Many users may not notice this change. However, it does happen and it’s a very important data point.

Getting to this point is just the beginning of a very long chain of actions that are automated by your user’s browser.

The average website in 2011 has close to 50 resources (images, javascript files, CSS files, etc.). The browser will parse (analyze) the HTML document and build a mini computer program that it must execute in order to obtain all of the other resources that your webpage says must be present. Unfortunately, the current standards for the web have no notion that a resource is optional and can be ignored if it is difficult or costly to obtain.

The other 49 resources will be requested and processed immediately after the title is updated.
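To make that parsing step concrete, here is a deliberately simplified sketch (a real browser’s tokenizer is far more thorough) that counts the extra resources an HTML document references through the most common tags:

```javascript
// Simplified sketch: count the extra resources an HTML document references.
// Only spots the common tag/attribute pairs; a browser finds many more.
function countResources(html) {
  const patterns = [
    /<img\b[^>]*\bsrc=/gi,    // images
    /<script\b[^>]*\bsrc=/gi, // external javascript files
    /<link\b[^>]*\bhref=/gi,  // style sheets, icons, etc.
  ];
  return patterns.reduce((n, re) => n + (html.match(re) || []).length, 0);
}
```

Every match is another request the browser must schedule before the page is complete.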

Time to display

This is the point when your website MAY be usable. Simply put, this is the point at which the browser has started to render. Render is the fancy word that engineers use to describe the process of applying all of the specified style sheets and inline styles to the text, graphic resources, and HTML layout markup.

Time to display became more complicated because some browsers (particularly for mobile devices) adopted many techniques to “optimize” the layout for small screens. This takes time and each device maker is capable of applying different rules and heuristics.

If your site is text heavy, the time to display is typically that point at which your users can actually start reading the page.

If your site is graphics heavy, the time to display is typically that point at which your users start to see the layout … but the graphics may take time to render because of advanced imaging tricks such as progressive rendering.

Time to interactivity

This is the point when your website is fully interactive. The simplest way to tell the difference between “display” and “interactive” is to find a website that won’t allow you to scroll while it’s loading.

I’m from Boston and here, we are big sports fans. So a local newspaper’s sports page is a good example. These are highly templatized pages — the content and the advertisements change frequently — but the overall structure does not.

My favorite (worst) site for this is the Boston Herald (sorry, guys). This is a site that often loads quickly when I click on the latest story about Tom Brady, and I can usually read the first two paragraphs that are “above the fold”. However, I can’t scroll down. Why? Because the site’s pages use synchronous advertisements that are downloaded after the time to display but before full interactivity. I can’t tell you how many times I’ve found myself “stuck” and mildly frustrated by this behavior.

Managing Your Team

In order to bridge the gap between the delivery teams and the marketing and business teams — and accelerate the planning and implementation process — it’s best to have some common ground.

Asking the following 10 questions will go a long way toward determining which website performance techniques to use moving forward:

  1. Do we have compression turned on?
  2. How many resource requests do we make?
  3. How many 3rd party assets do we have?
  4. What will happen to our site if a 3rd party widget becomes inaccessible or very slow?
  5. Have we resized our images to reduce their file size?
  6. Have we encoded our images to allow progressive rendering?
  7. Does our host provide optimization services?
  8. Are we using a backend template system? If so, are we targeting mobile devices?
  9. Are we doing any asynchronous javascript requests?
  10. Have we combined and minified our javascript files?

Strategy

Establish a Baseline

Before embarking on the performance and optimization process, it’s critical to get a baseline in place so you can measure success and understand return on investment. It may seem unnecessary or unduly complicated. However, without this, you’ll never be able to answer the simplest questions that your boss will eventually ask you:

  • How much faster is our site?
  • How much did it cost us to get here?
  • How much will it cost us to maintain going forward?

One approach: you might deploy two distinct versions of the site on separate domains. For example, a sub-domain (e.g. old.domain.com is perfectly fine) could contain a copy of your website before optimizations are performed. To get an apples-to-apples comparison, you’d want to host both on equivalent hardware.

The other critical and implicit point of establishing a baseline: you are actually measuring and learning about changes over time. Failure to measure, when there are so many tools and services available, will be punished … by the users who eventually abandon your site because of its poor performance.

Manual vs. Automated Optimization

If you have a template system with highly dynamic content, you might want to look into optimizing the template system. This may require some special purpose contracting but the return on investment will be amortized across the thousands or millions of times the template system is used (for every page requested).

If you have an internal team and/or the resources and budget, first get the team up to speed on the state of the art. Buy them a copy of Steve Souders’ book High Performance Web Sites. Give the team a few days to read the book and do all of the exercises.

If you’re running an agile shop, you can make the training a two- or three-point story. Acceptance will be demonstrating a local copy of the many samples provided in the book.

If you have a graphics heavy website, you need to find a way to ensure that you’re delivering graphics at the best scale and quality. Delivering graphics at full resolution and full scale is a speed killer. If you’re a photo site, you likely already deal with this automatically as part of the upload process.

If you’re a catalog site, you may simply be dumping high-res images into a database that are dynamically assembled on demand by the template system. This will require planning some development stories to create an automated process that defines and creates semi-automated tasks to transform images by the min, max, and preferred resolution to validate that the database is not polluted with high res images on a periodic basis.

Deferring loading of resources is easy to do, but requires an acute understanding of the site’s workflow.

For example, if your website allows interactive chat with a customer support rep, you’ve got an embedded chat system and often dozens of images that could be used as emoticons.

There is no good reason to load dozens of resources a new user may not even use during his or her first visit to the site. There are numerous tricks to execute the deferred load.

One is to extract the markup defining the resources into a separate HTML file, then load that markup on a one-time basis with an AJAX call and side-effect the DOM. Another option is to load those resources in a deferred, hidden iframe. Regardless of the technique adopted, it requires some javascript programming to pull it off.
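The one-time deferred load can be sketched like this; the fetch and insert steps are passed in as callbacks (an assumption made here so the logic is self-contained — in a browser they would wrap an AJAX call and a DOM append):

```javascript
// Sketch: load a block of markup once, on first demand, and never again.
// fetchMarkup and insert are injected; in a browser they would wrap
// fetch()/XMLHttpRequest and a DOM insertion respectively.
function makeDeferredLoader(fetchMarkup, insert) {
  let loaded = false;
  return async function loadOnce() {
    if (loaded) return false;  // already done: skip the network entirely
    loaded = true;
    insert(await fetchMarkup());
    return true;
  };
}
```

Wire loadOnce() to the first user action that needs the resources (e.g. opening the chat panel), and first-time visitors who never touch it pay nothing.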

CSS spriting is a very powerful resource “reduction” technique. Simply put, CSS spriting combines many images into one bigger image that is loaded once and used many different times in many different ways.
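As an illustration, a small helper (the class naming scheme is ours) that generates the CSS rules for a horizontal sprite sheet, where each icon is a fixed-size tile selected by shifting the background position:

```javascript
// Sketch: emit CSS rules for a horizontal sprite sheet. Each icon is a
// fixed-size tile; a negative background offset selects the right one.
function spriteRules(imageUrl, iconNames, tileWidth, tileHeight) {
  return iconNames.map((name, i) =>
    `.icon-${name} { background: url(${imageUrl}) -${i * tileWidth}px 0; ` +
    `width: ${tileWidth}px; height: ${tileHeight}px; }`
  );
}
```

Twenty emoticons become one image request plus twenty tiny CSS rules instead of twenty separate downloads.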

Every time you remove a resource request to your backend servers, it directly increases the speed of the webpage. This action improves the user experience and improves the scalability and throughput of your backend systems.

Third party assets are the hardest to deal with in many ways. You want your site to be modern and connected to social media, but once you do this you are completely at the provider’s mercy. These widgets are trivial to integrate into the markup using a technique called script injection. Google has used this same technique for over 10 years with “ad injection.”

If these injectable scripts are loaded synchronously (and they invariably are), it can lead to complete page load failure. This happens because each script is side-effecting the global state of the javascript engine for the HTML document that attaches to these scripts.

The simplest way to think about this is that an HTML page is actually a software program no different in many ways than the C and C++ code that was used to implement the browser. The browser is a software program that executes other programs. Modern browsers isolate each of the programs that they are executing in order to ensure that one bad program doesn’t impact the performance of another program. However, one key aspect of executing the HTML program is that order matters. Since order matters, the browser must wait until a script injection completes before executing the next one. A browser may load the to-be-executed script in the background, but it must wait until the previous script fully loads (or fails to load) before moving down the sequence.

Doing the extra work to manually (and asynchronously) side-effect the page can have dramatic benefits in terms of time to display and time to interactivity. However, it does require special purpose programming skills and can become brittle over time. Invest carefully and wisely.
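The core of the manual approach is small; a sketch of asynchronous script injection (the document object is passed in here only so the helper can be exercised outside a browser):

```javascript
// Sketch: inject a 3rd party script asynchronously so it cannot block
// the rest of the page. `doc` is the browser's document object, passed
// in as a parameter purely for testability.
function injectScriptAsync(doc, src) {
  const tag = doc.createElement("script");
  tag.src = src;
  tag.async = true; // execute whenever ready; don't block HTML parsing
  doc.head.appendChild(tag);
  return tag;
}
```

With the async flag set, a slow or failed widget download no longer stalls the sequence of scripts behind it.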

One trick we use is to provide the image of the widget as “scaffolding,” and to dynamically inject the final rendering asynchronously.

This is the technique that AJAX-oriented advertising is adopting: using iframes and advanced CSS ordering tricks to load advertisements in the background and swap them in ONLY after they have successfully loaded. These are all variations on the JSONP protocol (http://en.wikipedia.org/wiki/JSONP).
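The JSONP pattern itself fits in a few lines; a sketch (the doc and global objects are parameters only so the flow can be shown outside a browser):

```javascript
// Sketch: JSONP. The page registers a one-shot global callback, then
// injects a script whose response is code that calls it with the data.
function jsonpRequest(doc, global, url, onData) {
  const cbName = "jsonp_cb_" + Math.random().toString(36).slice(2);
  global[cbName] = (data) => {
    delete global[cbName]; // one-shot: clean up the global namespace
    onData(data);
  };
  const tag = doc.createElement("script");
  tag.src = url + (url.includes("?") ? "&" : "?") + "callback=" + cbName;
  doc.head.appendChild(tag);
  return cbName;
}
```

The server wraps its JSON payload in a call to the named callback, so the data arrives asynchronously without blocking the page.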

JSONP and related techniques are critical in the manual optimization and dynamic, asynchronous rendering space. Make sure your team is familiar with these approaches and applies them to your website.

Automated Optimization

Many of the above optimizations can be dynamically performed with no code change to your site.

How can this be? And why would you do it this way?

Obviously, this is an ideal scenario: you don’t have to manually modify your site; you don’t have to become an expert on DNS and CDN providers; you don’t have to manually code your site to deal with a particular CDN; you don’t have to turn external resources into inline data; you don’t have to combine images into sprites; and you don’t have to manually analyze your image resolution and create alternative size and resolution images.

The reason automated optimization can be delivered as a service is the very nature of an HTML “program”. The HTML document resources are sequential blocks of programming data (and code). Your browser must retrieve and assemble these blocks. Furthermore, your browser works less, consumes less bandwidth, and introduces fewer points of failure when 50 resources are turned into 20 resources that deliver the same “data + code” to be parsed and executed.

Of course, as always, there is a trade-off to be considered here. The trade-off is between getting the latest and greatest performance tricks automatically applied to your site … and the speed at which your team can do this manually for every page on your site.

If you have outsourced website development and aren’t running your own infrastructure (backend servers), you may have limited options available to you, because your website hosting service may disable compression on the common entry-level plans. They do this because compression requires backend computation, and backend computation is a shared resource across the many other sites hosted on the same machine.

The simple option here is to upgrade to a more expensive hosting plan. Ask your hosting provider for help here.

If you use a complete virtual machine or dedicated server, you still may have compression turned off because it is typically disabled by default. You may not have access to the configuration files. Again, ask your host for help here.

Helpful Resources

The techniques for building a high-performing website are well known, and there are many books and free tools available to help you understand them.

Phil Stanhope

For over 20 years, Phil has focused on distributed systems. His experiences cover the spectrum from desktop, client/server, messaging oriented middleware and integration, mobile, internet infrastructure, and SaaS. He is Vice President of Operations at Yottaa.
