HTTP/2: Background, Performance Benefits and Implementations



On top of the physical infrastructure of the internet sits the Internet Protocol (IP), which, together with transport-layer protocols such as TCP that run on top of it, makes up the TCP/IP suite. It’s the fabric underlying most of our internet communications.

A higher-level protocol layer that we use on top of this is the application layer. On this level, various applications use different protocols to connect and transfer information. We have SMTP, POP3, and IMAP for sending and receiving email, IRC and XMPP for chatting, SSH for remote server access, and so on.

The best-known protocol among these, which has become synonymous with the use of the internet, is HTTP (Hypertext Transfer Protocol). This is what we use to access websites every day. It was devised by Tim Berners-Lee at CERN as early as 1989. The specification for version 1.0 was released in 1996 (RFC 1945), and for 1.1 in 1999 (RFC 2616).

The HTTP specification is maintained by the IETF’s HTTP Working Group.

The first generation of this protocol — versions 1 and 1.1 — dominated the web up until 2015, when HTTP/2 was released and the industry — web servers and browser vendors — started adopting it.


HTTP is a stateless protocol, based on a request-response structure, which means that the client makes requests to the server, and these requests are atomic: any single request isn’t aware of the previous requests. (This is why we use cookies — to bridge the gap between multiple requests in one user session, for example, to be able to serve an authenticated version of the website to logged-in users.)

Transfers are typically initiated by the client — meaning the user’s browser — and the servers usually just respond to these requests.

We could say that the current state of HTTP is pretty “dumb”, or rather, low-level, with lots of “help” that needs to be given to browsers and servers on how to communicate efficiently. Changes in this arena are not simple to introduce, because so many existing websites depend on backward compatibility. Anything done to improve the protocol has to happen in a seamless way that won’t disrupt the internet.

In many ways, the current model has become a bottleneck with its strict request-response, atomic, synchronous structure, and progress has mostly taken the form of hacks, often spearheaded by industry leaders like Google and Facebook. The usual scenario, which is being improved on in various ways, is for the visitor to request a web page, and when their browser receives it from the server, it parses the HTML and finds other resources necessary to render the page, like CSS, images, and JavaScript. As it encounters these resource links, it stops loading everything else and requests the specified resources from the server. It doesn’t move a millimeter until it receives each resource. Then it requests another, and so on.

Average number of requests in the world's top websites

The number of requests needed to load the world’s biggest websites often runs into the hundreds.

This includes a lot of waiting, and a lot of round trips during which our visitor sees only a white screen or a half-rendered website. These are wasted seconds. A lot of available bandwidth is just sitting there unused during these request cycles.

CDNs can alleviate a lot of these problems, but even they are nothing but hacks.

As Daniel Stenberg of Mozilla (one of the people working on HTTP/2 standardization) has pointed out, the first version of the protocol has a hard time fully leveraging the capacity of the underlying transport layer, TCP. Anyone who has worked on optimizing website loading speeds knows this often requires some creativity, to put it mildly.

Over time, internet bandwidth has drastically increased, but HTTP/1.1-era infrastructure didn’t utilize it fully. It still struggled with issues like HTTP pipelining — pushing more resources over the same TCP connection. Client-side support in browsers has been dragging the most, with Firefox and Chrome shipping pipelining disabled by default, or not supporting it at all (IE never enabled it, and Firefox removed it entirely in version 54). This means that even small resources require opening a new TCP connection, with all the overhead that goes with it: TCP handshakes, DNS lookups, latency. And due to head-of-line blocking, a slow resource at the front of the queue blocks every resource behind it from loading.

HTTP pipelining

A synchronous, non-pipelined connection vs a pipelined one, showing possible savings in load time.

The optimization sorcery web developers have to resort to under the HTTP/1 model includes image sprites, CSS and JavaScript concatenation, sharding (distributing visitors’ requests for resources over more than one domain or subdomain), and so on.
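Concatenation, for instance, is conceptually just gluing files together so the browser fetches one resource instead of several. A minimal sketch, with placeholder files standing in for a real project’s scripts:

```shell
# Two separate scripts would cost two requests under HTTP/1.x...
printf 'console.log("plugin a");\n' > plugin-a.js
printf 'console.log("plugin b");\n' > plugin-b.js

# ...so build tools commonly concatenate them into a single bundle,
# served as one file over one request.
cat plugin-a.js plugin-b.js > bundle.js
```

Real build pipelines add minification and cache-busting on top, but the request-saving principle is the same.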

An improvement was overdue, and it had to solve these issues in a seamless, backward-compatible way so as not to interrupt the workings of the existing web.


In 2009, Google announced a project that would become a draft proposal for a new-generation protocol: SPDY (pronounced speedy). Google added support to Chrome and pushed SPDY to all of its web services in subsequent years. Twitter followed, then server vendors like Apache and nginx added support, then Node.js, and later Facebook and most CDN providers.

SPDY introduced multiplexing — sending multiple resources in parallel over a single TCP connection. Connections are encrypted by default, and data is compressed. Preliminary tests reported in the SPDY white paper, performed on the top 25 sites, showed speed improvements from 27% to over 60%.

After it proved itself in production, SPDY version 3 became the basis for the first draft of HTTP/2, produced by the Hypertext Transfer Protocol working group (httpbis); HTTP/2 was published as RFC 7540 in 2015.

HTTP/2 aims to address the issues ailing the first version of the protocol — latency issues in particular — by:

- compressing HTTP headers
- implementing server push
- multiplexing requests over a single TCP connection

It also aims to solve head-of-line blocking. The data it transfers is in binary format, improving its efficiency, and it requires encryption by default (or at least, this is a requirement imposed by major browsers).

Header compression is performed with the HPACK algorithm, which addresses the vulnerability in SPDY’s compression scheme (the CRIME attack) and reduces web request sizes by half.

Server push is one of the features that aims to eliminate wasted waiting time, by serving resources to the visitor’s browser before the browser requests them. This reduces the round trip time, which is a big bottleneck in website optimization.
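On servers that implement push, it is often configured declaratively. A hypothetical Apache fragment (using mod_http2’s H2PushResource directive; the file paths are placeholders) might look like this:

```apache
# When a client requests /index.html over HTTP/2, push the stylesheet
# and the script along with it, before the browser even parses the HTML.
<Location "/index.html">
    H2PushResource /css/style.css
    H2PushResource /js/app.js
</Location>
```

Pushing resources the client already has cached wastes bandwidth, so push is best reserved for assets the page is certain to need on first visit.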

Due to all these improvements, the difference in loading time that HTTP/2 brings to the table can be seen in online demo pages that serve the same resource-heavy page over both protocols.

Savings in loading time become more apparent the more resources a website has.

How to See If a Website Is Serving Resources over HTTP/2

In major browsers, like Firefox or Chrome, we can check a website’s support for the HTTP/2 protocol in the inspector tool, by opening the Network tab and right-clicking the column header row above the resources list. There we can enable the Protocol column.
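If our curl build was compiled with HTTP/2 support, we can also check from the command line; the -w format string below prints the HTTP version that was actually negotiated (the domain is a placeholder):

```shell
# Prints "2" if the server negotiated HTTP/2, "1.1" otherwise.
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com
```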

enabling protocol inspection in browser's inspection tool

Another way is to install a little JavaScript-based tool that allows us to inspect HTTP/2 support through the command line (assuming we have Node.js and npm installed):

npm install -g is-http2-cli

After installation, we should be able to use it like this:


✓ HTTP/2 supported by
Supported protocols: grpc-exp h2 http/1.1


At the time of writing, all major browsers support HTTP/2, although they require that all HTTP/2 traffic be encrypted — something the HTTP/2 specification itself doesn’t mandate.


Apache 2.4 supports it with its mod_http2 module, which is now considered production-ready. Apache needs to be built with it by adding the --enable-http2 argument to the ./configure command. We also need at least version 1.2.1 of the libnghttp2 library installed. If the system has trouble finding it, we can provide its path to ./configure by adding --with-nghttp2=<path>.

The next step would be to load the module by adding the directive to Apache’s configuration:

LoadModule http2_module modules/

Then, we would add Protocols h2 h2c http/1.1 to our virtual host block and reload the server. Apache’s documentation warns us of the caveats of enabling HTTP/2:

Enabling HTTP/2 on your Apache Server has impact on the resource consumption and if you have a busy site, you may need to consider carefully the implications.

The first noticeable thing after enabling HTTP/2 is that your server processes will start additional threads. The reason for this is that HTTP/2 gives all requests that it receives to its own Worker threads for processing, collects the results and streams them out to the client.

You can read more about the Apache configuration here.
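Putting the pieces together, a minimal SSL virtual host with HTTP/2 enabled might look like the following sketch (the domain and certificate paths are placeholders to adapt to your setup):

```apache
<VirtualHost *:443>
    ServerName example.com

    # Negotiate HTTP/2 first, falling back to HTTP/1.1 for older clients
    Protocols h2 http/1.1

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key
</VirtualHost>
```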

nginx has supported HTTP/2 since version 1.9.5, and we enable it by simply adding the http2 argument to our virtual host specification:

server {
    listen 443 ssl http2 default_server;

    ssl_certificate     server.crt;
    ssl_certificate_key server.key;
}
Then reload nginx.
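On a systemd-based distribution, the reload step typically looks like the following; running nginx -t first validates the configuration so that a typo doesn’t take the server down:

```shell
sudo nginx -t                  # check configuration syntax
sudo systemctl reload nginx    # apply the change without dropping connections
```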

Unfortunately, at the time of writing server push is not officially implemented in nginx, but it has been added to the development roadmap and is scheduled for release next year. For the more adventurous, there is an unofficial nginx module that adds support for HTTP/2 server push.

LiteSpeed and OpenLiteSpeed also boast support for HTTP/2.

One caveat before activating HTTP/2 on the server side is to make sure we have SSL support. This means that all the virtual host snippets mentioned above — for Apache and for nginx — need to go into the SSL-enabled virtual host blocks listening on port 443. Once we have Apache or nginx installed and regular virtual hosts configured, obtaining a Let’s Encrypt SSL certificate and installing it on any of the major Linux distributions should be a matter of just a couple of commands. Certbot is a command-line tool that automates the whole process.
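As a sketch, on a Debian-based system running nginx the Certbot flow might look like this (the package names and domain are assumptions that vary by distribution and setup):

```shell
# Install Certbot and its nginx plugin
sudo apt install certbot python3-certbot-nginx

# Obtain a certificate and let Certbot edit the nginx config in place
sudo certbot --nginx -d example.com

# Verify that automatic renewal is wired up correctly
sudo certbot renew --dry-run
```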


In this article, I’ve provided a bird’s-eye overview of HTTP/2, the new and evolving specification of a second-generation web protocol.

The full list of implementations of the new generation of HTTP can be found here.

For the less tech-savvy, perhaps the shortest path to transitioning to this new protocol is simply to add a CDN to the web stack, as CDNs were among the earliest adopters of HTTP/2.

Frequently Asked Questions (FAQs) about HTTP/2

What are the main differences between HTTP/1.1 and HTTP/2?

HTTP/2 is an upgrade from HTTP/1.1, designed to improve the speed and efficiency of data transfer between a client and a server. The main differences include binary framing, multiplexing, server push, and header compression. Binary framing allows for more efficient parsing of messages. Multiplexing enables multiple requests and responses to be sent simultaneously, reducing latency. Server push allows the server to send resources to the client before they are requested, improving load times. Header compression reduces overhead by compressing request and response headers.

How does HTTP/2 improve website performance?

HTTP/2 improves website performance in several ways. It reduces latency through multiplexing and server push, which allows for faster loading of web pages. It also reduces the amount of data transferred through header compression. Additionally, HTTP/2 is binary, not textual, making it more efficient to parse, more compact, and less error-prone.

Is HTTP/2 backward compatible with HTTP/1.1?

Yes, HTTP/2 is designed to be backward compatible with HTTP/1.1. This means that clients and servers that support HTTP/2 can still communicate with those that only support HTTP/1.1. However, they won’t be able to take advantage of the performance improvements offered by HTTP/2.

How can I implement HTTP/2 on my website?

Implementing HTTP/2 on your website typically involves upgrading your server software or hosting environment to a version that supports HTTP/2. You may also need to configure your server to enable HTTP/2 features. It’s important to note that HTTP/2 requires an SSL certificate, so your website needs to be served over HTTPS.

What are the potential drawbacks of HTTP/2?

While HTTP/2 offers many performance benefits, there are potential drawbacks. For instance, the use of a single TCP connection can lead to head-of-line blocking. Also, the increased complexity of HTTP/2 can make it more difficult to debug and troubleshoot issues. Lastly, not all browsers and servers support HTTP/2, so backward compatibility with HTTP/1.1 is still necessary.

Does HTTP/2 replace HTTP/1.1?

HTTP/2 does not replace HTTP/1.1 but rather builds upon it. While HTTP/2 offers significant performance improvements, HTTP/1.1 is still widely used and supported. Many servers and clients support both protocols and can switch between them as needed.

Is HTTP/2 faster than HTTP/1.1?

Yes, HTTP/2 is generally faster than HTTP/1.1 due to features like multiplexing, server push, and header compression. However, the actual performance gain can vary depending on the specific use case and implementation.

Does HTTP/2 require encryption?

While the HTTP/2 specification does not require encryption, in practice, most implementations do. This is because major browsers like Chrome and Firefox only support HTTP/2 over HTTPS, which requires an SSL certificate.

What is server push in HTTP/2?

Server push is a feature in HTTP/2 that allows a server to send resources to a client before the client requests them. This can improve page load times by reducing the number of round-trip times needed to fetch all resources.

How does HTTP/2 handle multiple requests?

HTTP/2 handles multiple requests through a feature called multiplexing. This allows multiple requests and responses to be in flight at the same time, reducing latency and improving performance.

Tonino Jankov

Tonino is a web developer and IT consultant who's dived through open-source code for over a decade. He's also a crypto enthusiast, Linux fan, and moderate libertarian.
