Digital Web recently published an article about “Client Side Load Balancing for Web 2.0 Applications”. I wanted to take a moment to explain why I think this load balancing technique is a bad idea. But first, here’s the concept in brief:
- Your web site is deployed in an identical fashion across a number of web servers.
- Your customer’s browser retrieves a list of web app servers from your server, say in XML format.
- The browser then “randomly selects servers to call until one responds”, and “has a preset timeout for each call. If the call takes greater than the preset time, the client randomly selects another server until it finds one that responds”.
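Sketched in plain JavaScript, the described flow might look something like this. To be clear, this is my own illustrative sketch, not code from the article: the function names, the injectable `attempt` call, and the timeout value are all assumptions.

```javascript
// Shuffle a copy of the server list (Fisher-Yates).
function shuffle(list) {
  const a = list.slice();
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Race a request against the "preset time" the article mentions.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)),
  ]);
}

// Try servers in random order until one responds within the preset time.
// `attempt` stands in for a fetch()-style call in a real browser client.
async function callAnyServer(servers, attempt, timeoutMs) {
  for (const host of shuffle(servers)) {
    try {
      return await withTimeout(attempt(host), timeoutMs);
    } catch (err) {
      // Timeout or network error: the client cannot tell whether the
      // server is down or the network is merely congested. It just
      // gives up on this host and moves on to the next one.
    }
  }
  throw new Error("no server responded");
}
```

Note that even in this sketch, the list of servers still has to come from somewhere in the first place, and the `catch` block has no way to distinguish an outage from a slow link.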
No matter how long and hard I think about this concept, I can’t convince myself that it even sounds good in theory. Here’s why:
- We still have a single point of failure. What happens if our web application is not able to retrieve a valid list of servers?
- Correctly failing over is difficult and ungraceful. How much “preset time” should the client allow before trying another server? Is this waiting period acceptable to our customers? Can the application accurately tell the difference between a server being offline and plain old network congestion?
- Now that our client-side code contains this load balancing logic, we have a lot more testing to do to ensure it works in every browser on every platform.
- Load is distributed amongst the servers in a completely random way. The browser has no way of knowing that it’s just sent a request to a server that’s already busy.
Server-side hardware load balancing offers many advantages over this client-side method that are too great to ignore. Some other things you should consider are:
- A hardware load balancer is able to distribute requests to the servers with the least load. It can also quickly detect an outage in your web cluster and direct traffic as appropriate. Why make our customers wait while their browser detects this outage for us, and then decides what action is appropriate? The quality of this detection is application (read: developer) dependent and hardly guaranteed.
- Eliminate single points of failure. A redundant load balancer setup greatly reduces the chance of an outage. There are reliable ways of doing this that won’t put a hole in your pocket.
- You can deploy updates to your site in a way that won’t confuse your customers: use the load balancer to hide servers from them until your updates are deployed and tested.
- The load balancer can also act as a caching layer for static resources. This reduces the load on your web servers and delivers content to your customers faster!
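For what it’s worth, most of the behaviour above falls out of a few lines of balancer configuration. Here’s a hypothetical sketch using nginx (a software balancer rather than a hardware appliance, but the ideas carry over); the hostnames and cache zone are assumptions:

```nginx
upstream app_cluster {
    least_conn;                      # send requests to the least-busy server
    server app1.example.com max_fails=3 fail_timeout=10s;
    server app2.example.com max_fails=3 fail_timeout=10s;
    server app3.example.com down;    # hidden while updates are deployed and tested
}

server {
    listen 80;

    location / {
        proxy_pass http://app_cluster;
    }

    # Cache static resources at the balancer, off the app servers.
    location /static/ {
        proxy_pass http://app_cluster;
        proxy_cache static_cache;    # cache zone assumed defined elsewhere
        expires 1h;
    }
}
```

The customer never sees any of this happen: failed servers are retired from rotation after a few failures, and the `down` flag hides a server mid-deploy.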
Lastly, I think if you have gone to the trouble of building your application so that it runs across multiple nodes and you’ve signed the cheques for 3 or 4 (or more) servers, it doesn’t seem like much of a stretch to put at least one load balancer in front of the whole lot (2, if you can afford and manage it). Implementing client side load balancing after you’ve come this far seems like a blemish on all your hard work.
My next post will explore scaling and load balancing a little more. Until then… :)
Lucas has been building the web since 1996. His experience covers Content Management, Online Learning, Documentation Management, Product Ordering and e-commerce systems across Linux (Debian/Red Hat/CentOS), Mac OS, FreeBSD and Microsoft platforms.