A Faster, More Secure Internet with HTTP/2

Whilst both awareness and adoption of HTTP/2 (from hosting providers and platforms) are still far from ubiquitous, it will change many standard assumptions and alter current front-end best practices.

As many of these changes revolve around security and speed, it's a topic of real relevance and importance to any professional who makes their living from the web.

What's even better is that end users don't have to make any effort to receive its benefits. Most modern browsers support it, and whilst some platform-as-a-service offerings don't yet support it, most have it on their roadmaps, and there are some relatively cost-effective ways around this temporary limitation.

So what's all the fuss about...

It's fast!

Because there is no blocking

One limitation of HTTP/1.1 is that browsers will wait for a response to complete before making another request on that connection. This is sometimes referred to as head-of-line blocking. For every web domain, most browsers will support no more than 6 concurrent connections.

HTTP/2 uses a single network (TCP) connection per web domain and allows multiple requests and responses to be multiplexed over it. This means that, provided we request the resources we need (or will need) in a timely manner, our web applications will make full use of the available bandwidth without repeatedly paying the overhead of establishing new connections to the same web domain.
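
To make this concrete, here is a minimal sketch of an HTTP/2 server using Node.js's built-in http2 module (available since Node 8.4); the key/certificate file names are assumptions for illustration, since browsers only speak HTTP/2 over TLS:

// Minimal HTTP/2 server sketch (Node.js built-in http2 module).
// The key/cert file names below are assumed placeholders.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost-privkey.pem'),
  cert: fs.readFileSync('localhost-cert.pem'),
});

// Every request from a given browser arrives as a stream over the same TCP
// connection, so many assets can be in flight at once without extra handshakes.
server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end(`You requested ${headers[':path']}`);
});

server.listen(8443);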

Akamai has a great HTTP/2 demo that benchmarks a ton of images loaded over HTTP/1.1 vs HTTP/2. When inspecting the traffic in the Chrome DevTools, it's really quite interesting to observe the visual difference in the waterfall section of the network tab.

HTTP/1.1:
[Screenshot: HTTP/1.1 network waterfall]

And then HTTP/2:
[Screenshot: HTTP/2 network waterfall] NB: I couldn't actually fit all the files being downloaded at once onto one screen!

Followed by observing the results of the test:
[Screenshot: the race results]

A considerable win for HTTP/2! One thing to note is that because many more requests are in flight at once, the response times of individual assets tend to even out. This means our HTML page may take slightly longer to reach the user's browser, though the time until the user actually sees our content fully rendered will be significantly reduced.

The headers are compressed

Whilst we have been able to compress the bodies of requests with Gzip for a long time, this was not the case for headers.

HTTP/2 uses HPACK, a compression format designed specifically for representing HTTP header fields efficiently.

Many web platforms, frameworks, and CMSs add cookies and all manner of goodness to request headers, so this is a pretty big win.

It makes things more secure

None of the major browsers support HTTP/2 over an insecure (non-SSL) connection. Whilst organisations such as Let's Encrypt (a free, automated, and open certificate authority) and services such as Cloudflare (a CDN/network optimization service with extras like the ability to manage your DNS) offer a means of acquiring an SSL certificate for free, this is not universally known, and the fact that encryption is effectively enforced could be a great thing for the internet. After all, if you have to do it and it costs nothing, then surely it's a win for everyone?

How will this change things?

So, like all change, this won't come without disruption, and much of the conventional wisdom on front-end web optimization will be altered.

User-centric web optimizations

Previously, to avoid head-of-line blocking, it was normal to put as much as possible into one file. Module bundlers such as Webpack can even be configured to convert images into text (data URIs) so they can be included in the same bundle. Whilst one bundle is great in terms of offering the best download speed over an HTTP/1.1 connection, it does mean that a small change in our code or dependencies will require the user to download all of our code again.
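
As a rough sketch of that single-bundle approach, a Webpack configuration along these lines (url-loader, with an arbitrary example size limit) inlines small images as data URIs so they ship inside the one bundle; the file names and paths here are illustrative assumptions:

// webpack.config.js - inline small images as data URIs (webpack 2-4 style).
module.exports = {
  entry: './src/index.js',            // assumed entry point
  output: { filename: 'bundle.js' },
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)$/,
        // Files under ~8 KB are converted to data URIs and bundled inline.
        use: [{ loader: 'url-loader', options: { limit: 8192 } }],
      },
    ],
  },
};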

As head-of-line blocking is no longer one of our primary concerns, one thing we can start doing now is creating smaller bundles based on logical groupings, e.g. libraries or features.

JSPM offers a really cool bundle arithmetic feature that makes it trivial to create bundles based on a slice of your functionality. Other module bundlers can achieve the same thing with manual configuration.
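
As one illustration of splitting by feature (not JSPM's bundle arithmetic itself, but the equivalent idea using a dynamic import that bundlers such as Webpack turn into a separate chunk), where the module path and element ID are assumptions:

// Load the date-handling feature only when the user first needs it; the
// bundler emits it as its own smaller bundle rather than part of the core one.
document.querySelector('#open-date-picker').addEventListener('click', () => {
  import('./features/date-picker.js')        // illustrative module path
    .then(({ initDatePicker }) => initDatePicker())
    .catch((err) => console.error('Could not load the date-picker bundle', err));
});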

Server push

Another interesting scenario is that once we are no longer obsessed with everything being in one file, we will want a means of getting files to the browser before they are actually needed. For example, my application relies on a library for handling dates but won't need it initially, only once the user starts interacting with the UI, so we may want it to travel with the markup of the initial page.

Server-initiated (explicit server push)

A server can actually initiate the transmission of data without a request from the client via an HTTP/2 PUSH_PROMISE.
The client has the option to stop the delivery by sending the server an RST_STREAM frame. (Frames are the blocks of data that HTTP/2 requests, responses, and server pushes are built on).
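
Continuing the earlier Node.js sketch, explicit server push might look something like this (the pushed script path and its contents are purely illustrative):

// When the page is requested, open a pushed stream (a PUSH_PROMISE) for a
// script the client has not asked for yet, then respond with the page itself.
server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    stream.pushStream({ ':path': '/my-script-you-will-need.js' }, (err, pushStream) => {
      if (err) return; // bail out if push is disabled or refused by the client
      pushStream.respond({ ':status': 200, 'content-type': 'application/javascript' });
      pushStream.end('console.log("pushed before it was requested");');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<script src="/my-script-you-will-need.js"></script>');
  }
});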

Hint-based - Link preload

One way this is currently being implemented is a hint-based approach that involves sending something like

Link: </my-script-you-will-need.js>; rel=preload; as=script

in the response headers (the same hint is also available as an attribute on the HTML link tag) to cause the server to initiate a server push. Cloudflare has a great article about how their implementation of HTTP/2 uses link preload headers to initiate server push.
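
A minimal sketch of emitting that hint from a plain Node.js handler, so that a push-aware front end (such as Cloudflare's) can turn it into a server push, might look like this (the script path is just the illustrative name from the example above):

// Attach the preload hint to the HTML response; a push-capable proxy or
// server in front of this app can use it to push the script proactively.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Link', '</my-script-you-will-need.js>; rel=preload; as=script');
  res.setHeader('Content-Type', 'text/html');
  res.end('<script src="/my-script-you-will-need.js"></script>');
}).listen(8080);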

It should be noted that this feature is still at working draft status.

1. Service workers

A service worker can be used to cache assets locally and then intercept subsequent requests, first checking whether the asset already exists in the cache on the user's device. Support for this is growing but is limited (at the time of writing) to Firefox and Chrome.
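
A minimal service worker sketch along those lines might look as follows (the cache name and asset paths are assumptions for illustration):

// sw.js - pre-cache a few assets at install time, then answer later requests
// from the local cache first, falling back to the network.
const CACHE_NAME = 'app-shell-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/bundle-core.js', '/styles.css'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});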

2. Cache digest

This is a means of allowing clients to inform the server of their cache's contents via a hash-based mechanism. Servers can then make an informed decision about what to push to clients rather than pushing everything every time. Like a lot of the newer aspects of HTTP/2, this feature is still not finalized.

So what can I use?

Implementing server push today is not without its challenges:

  1. Best practices around what to push, and when, are still emerging.
  2. The cache digest specification is not yet guaranteed to be stable.
  3. Many of the major web platforms are yet to implement it.
  4. It may be a little while before service workers are fully adopted.

However, the removal of head-of-line blocking that connection multiplexing brings, together with header compression, means there are several tangible benefits available with very little implementation effort.

But doesn't SSL make things slower and more expensive?

One consideration in the past was that SSL represented a cost, both in terms of money (i.e. buying an SSL certificate) and, when working at a larger scale, in terms of performance, as part of our infrastructure has to take on the burden of decrypting each connection.

SSL is free

In the majority of cases, an SSL certificate can be obtained without cost. Also, for organizations such as digital agencies that may host many sites behind the same IP address, SNI (an extension to the SSL/TLS protocol) means that, provided your web server supports it (and most do), you no longer need an IP address per SSL certificate, which would otherwise drive up hosting costs.

SSL termination is getting easier

SSL certificates are able to use ECDSA as their digital signature algorithm. Thanks to its smaller key size, ECDSA puts significantly less load on hardware when performing SSL termination, which translates into a lower performance cost than RSA (which most older certificates use).

It is also one of the factors that allows companies such as Cloudflare to offer this service for free (apart from the fact that they must be nice folks).

SSL can make you money

This is a pretty well-known fact, but since 2014 Google has given preference to sites served over HTTPS. Whilst this isn't going to let you game the system, in any situation where rankings are tight it could give a site the boost that plays an important part in driving traffic to it.

SSL termination happens less than you might think (in the new world)

So, as mentioned earlier, because the connection is multiplexed (one connection per origin satisfies the requests for multiple assets), the frequency with which our website/infrastructure has to terminate SSL is reduced.

Final thoughts

So HTTP/2 is going to change and challenge many things for web developers and web applications. It comes with many benefits:

  1. It's faster
  2. It's more secure (as it's HTTPS by default)
  3. It can make a site more visible (given Google's preference for HTTPS)

Whilst reaping its benefits now requires a little creative planning at the infrastructure level (e.g. if you're using a cloud platform that doesn't support HTTP/2, you could put Cloudflare's service "in between" your app and the rest of the web to optimise the delivery of static content over HTTP/2), it will certainly offer up new ways of optimising the front-end.

At the end of the day, the success of any good web application and the internet as a whole is based on convenience, trust and the quality of the content or services that we produce. SSL gives us confidence that we are receiving content from whom we think we are, something essential in this formula.

The direction HTTP/2 sends us in certainly disrupts some previous ideas, but it should take web development in a direction we should all want to go for the sake of our end users. That is, after all, what it's all about in the end. Not just using the latest tech, unfortunately ;-).

Andrew de Rozario

Self/community-educated developer who loves all things web, JavaScript, and .NET related. Passionate about sharing knowledge, teaching, and eating, though maybe not in that order.
