In the past few years, the “cloud” has become a major buzzword. It’s gotten to the point where, when people hear the word “cloud,” they roll their eyes. There are many factors that contribute to this stigma.
Oftentimes, people who are new to cloud technologies, but familiar with traditional hosting, will say, “The cloud is just the ‘Internet,’ right? It’s just another place to host your website.”
Cloud Methodology
On the surface, this is true. Cloud services use much of the same hardware that traditional hosting companies use: servers, routers, and switches. The difference between cloud services and traditional hosting is not really in the hardware; it’s in the methodology. It’s about using the hardware in ways that provide improved reliability, scalability, and performance.
Before cloud computing started to explode, dedicated hosting was just about the best thing out there for running websites and web applications. Dedicated hardware has many benefits, especially when it comes to performance. For most use cases, dedicated hardware is also reasonably scalable. You can really beef up a server with tons of memory, storage, and processing power. The vast majority of websites and web applications would never need as much power as the most powerful server available. But at the end of the day, dedicated hardware does have limits on how much it can scale up.
While dedicated hardware is great for performance, it does have a weakness: reliability. Even with the best hardware (e.g., redundant storage and redundant power supplies), you still have a single point of failure. There are ways to try to get around this, but they are complicated and expensive.
As more and more services and applications (like smartphone apps) place heavier demands on internet-connected resources, the need for scalability and reliability has grown at a rapid pace. This, in turn, has created a marketplace where people are willing to pay for services that manage all the complexity of setting up and maintaining ultra-reliable hardware.
Redundancy
In the dedicated server era, the focus was on making hardware more reliable. Hardware developers took the most common points of failure (e.g., power supplies and hard drives) and started doubling and tripling them up (that’s an oversimplification) so that if one component failed, the machine could keep running. This was actually a really exciting time, and everyone was obsessed with the idea of creating a server that would never go down.
But over time, people who were trying to create unlimited scalability and reliability realized that ultra-redundant hardware could never be the solution. No machine can be built that will never fail. So they decided that the best solution was to go in the opposite direction: build systems that are designed for failure. Sounds crazy, right?
Cloud Reliability
You might be wondering how building something to fail can result in unlimited reliability. The answer is actually quite simple. The laws of physics dictate that, eventually, every piece of computer hardware will fail. Nothing lasts forever. So instead of trying to fight against nature, people who design hardware and software systems decided to transform the way they put components together. In the cloud, components are assembled in such a way that any number of them can fail and the application will continue to run. This isn’t really a new idea. For decades, airplanes have been designed to finish a flight even if an engine completely fails. Even our own bodies are built around the concept that billions of cells die every day. This happens constantly inside each of us and, for most of our lives, we hardly notice.
The exact methods used to create a hardware platform that is designed to handle failure are too complicated to explain in a short article. But it’s important to at least know what types of failures cloud platforms are designed to handle:
- Failure of individual hardware components (e.g., failing hard drives, power supplies, memory chips, processors)
- Connectivity failure within a datacenter (e.g., failing routers, switches, load balancers)
- Loss of an entire datacenter (e.g., earthquake, tornado, terrorist attack)
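To make the idea concrete, here is a minimal sketch of what designing for failure looks like from an application’s point of view. It’s written in Python with hypothetical replica URLs (the article names no specific tools or endpoints): rather than trusting any single server to stay up, the code assumes each one can fail and simply moves on to a copy in another datacenter.

```python
import urllib.request

# Hypothetical replicas of the same service, hosted in different datacenters.
# The design assumes any of them can be down at any moment.
REPLICAS = [
    "https://us-east.example.com/status",
    "https://us-west.example.com/status",
    "https://eu-central.example.com/status",
]

def fetch_with_failover(urls=REPLICAS, timeout=2.0):
    """Try each replica in turn; one healthy replica is enough to serve the request."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()  # first healthy replica wins
        except OSError:
            continue  # this replica failed -- expected, so try the next one
    raise RuntimeError("all replicas unavailable")
```

Real cloud platforms push this same logic down into load balancers, DNS failover, and replicated storage, but the mindset is identical: no single component is ever trusted to stay up.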
In the last five years, cloud providers have made huge investments in infrastructure, systems, and processes that allow them to sell these complex systems as a commodity. Many of the actual components of their offerings are not new. For example, one of the central building blocks of every cloud service is server virtualization. Server virtualization has been around for a long time, and, on its own, it’s really not that special. It’s the robust combination of these building blocks and services that makes cloud computing such a great solution.
Cloud Demands Planning
One misconception that people have when they first start investigating cloud offerings is the idea that cloud services allow any application or website to move to the cloud and enjoy unlimited scalability and reliability. This is simply not the case. In order for a website or application to be truly scalable, it must be structured from the ground up to utilize all of the features that make the cloud work. And in many instances, this is easier said than done.
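For example, an application that keeps user sessions in one server’s memory can never be spread across interchangeable machines, because every request has to come back to that same server. Below is a minimal sketch of the cloud-friendly alternative, with session state moved into a shared store. I’m using Redis via the redis-py client purely as an illustration (an assumption on my part; the article doesn’t prescribe any particular tool), and the host name is hypothetical.

```python
import json
import uuid

import redis  # assumed dependency: pip install redis

# Session state lives in a shared store, not in any one web server's memory,
# so any server can handle any request, and servers can fail or be replaced freely.
store = redis.Redis(host="sessions.example.internal", port=6379)  # hypothetical host

def create_session(user_id):
    """Create a session that any web server in the fleet can later read."""
    session_id = uuid.uuid4().hex
    # Expire after an hour so abandoned sessions clean themselves up.
    store.setex("session:" + session_id, 3600, json.dumps({"user": user_id}))
    return session_id

def load_session(session_id):
    """Return the session data, or None if it expired or never existed."""
    raw = store.get("session:" + session_id)
    return json.loads(raw) if raw else None
```

Once state is externalized like this, adding a tenth web server, or losing three of them, becomes a non-event, and that is exactly the property the cloud’s scalability and reliability features depend on.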
This misconception is largely due to the way cloud providers market their services, and it points to one of the few valid criticisms of cloud computing: most providers advertise that ultra-reliable systems take little work to implement. The truth is that it requires a very clear understanding of all the components that make up the cloud, and a great deal of dedication and hard work to implement a good plan. Still, while the effort is significant, it is far less effort and expense than it would take to get the same results with traditional hosting.
I recently read a comment by someone who said that cloud services will not overtake traditional hosting. I believe this idea comes from a misunderstanding of how the cloud works and why people move to the cloud. It reminds me of when I got my first ISDN line at my home back in the ’90s. It cost me a significant amount of money to install and hundreds of dollars each month for service. Many people told me that they would never need that kind of speed and that most people, especially home users, would always be fine using dial-up. Sure, there are still a certain number of people on dial-up in this world. But generally speaking, high-speed internet has taken over, and providers continue to compete to offer a better service. I’m positive that every single person who questioned my early push into high-speed now has high-speed internet in their own home.
The same concept applies to cloud computing. Traditional hosting is limited. Sure, most websites don’t need unlimited scalability. And there may even be some people out there who don’t care if their website goes down occasionally. But in most cases, people want, and are willing to pay for, better technology that delivers greater reliability and performance.
In many cases, people won’t buy these services directly. But more and more, service providers (including traditional hosts) will find ways to utilize cloud computing to create services that can scale without limit and never go down.
Cloud computing is nowhere near perfect, and I’m excited to see where it leads in the coming years. But one thing is for certain—I have much more fun playing with cloud technologies than I ever did on a traditional host. And my business makes far more money pushing the limits in the cloud.
David lives in the woods outside a small town in North Carolina with his wife and children. During the day he ventures into town, where he envisions, launches, and grows web and mobile products.