A couple of months ago, a question was posed on LinkedIn: “What is your definition of Cloud?” followed immediately by “What is your definition of Virtualization?” It prompted me to respond with a carefully articulated response that made me realize just how fuzzy these terms continue to be.
The United States National Institute of Standards and Technology (NIST) publishes a very good explanation of the critical aspects of what makes “cloud computing”, well, “cloud computing”. But even a term such as “virtualization” is not as cut and dried as many in the industry presume.
What is Virtualization?
There is virtualization, to one degree or another, at nearly every level of the stack, from the hardware to the application and, one can argue, even the data. So let’s start with what virtualization means and what, in that context, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) mean.
Virtualization is simply the abstraction of a solution from the infrastructure required to support it. It can be internal or external and can apply at any level of the computing stack, from the hardware to the application itself. IaaS, PaaS, and SaaS are, in fact, just that: virtualization solutions at different levels of the stack. IaaS, for example, virtualizes the hardware away from the operating system, while SaaS virtualizes the application away from the operating system.
So that leaves the nagging question: “What makes ‘as a Service’ different from existing virtualization solutions?” That is the heart of the topic here today: what is “X as a Service”, where X is whatever is being virtualized? The good news is that it’s not really a new technology – it’s more an exercise in branding – but it helps to have an understanding of computer and virtualization history.
Historically, servers (especially Windows servers) had issues with conflicting software, compounded by multiple users contending for the same physical resources. Because of this, and perhaps other issues, the industry settled on the best practice of having each server do one, and only one, thing. As hardware progressed and software became more efficient, these physical machines became more and more underutilized.
At this point, a group of bright, forward-thinking people looked at the problem and found a solution – and VMware was born. Virtualization itself was nothing new: IBM had developed virtualization for its mainframe computers, and some of the VMware team had worked with IBM’s virtualization products for many years. The trick was getting it to work on x86 and x64 based “Wintel” machines.
Fast forward to today, when many (maybe most) Intel-based servers run VMware or other virtualization solutions. Like the mainframe, many machines have come much closer to full utilization thanks to virtualization. Clustering, full online redundancy, hot swapping, and running backups are all possible without affecting the services the server provides in the first place. This leads to near zero-loss disaster recovery times, not to mention a host of other options available only because these services and platforms have been virtualized. So why not consolidate further for even better resource utilization? Enter IaaS.
Infrastructure as a Service is the next evolution in hardware virtualization. Now that entire datacenters can be kept virtual, why not move them onto “Big Iron” systems and make them even cheaper to maintain on a per-server basis? Systems like HP’s SuperDome, Dell’s v200, or IBM’s p-Series and even z-Series machines can house a huge pool of resources that can then be redistributed among the various virtual machines within the system. Blade servers and other innovations have created similarly high-density computing platforms that can reduce costs on a per-chassis basis. Technical aspects aside, the point is: why not use bigger, better machines that minimize costs and overhead for your datacenter? The single biggest reason is up-front cost.
Infrastructure as a Service is a service provided by another entity (be it a group within your company or a third party) that supplies the hardware and hardware support while you maintain the virtual datacenter on top of it. In other words, they provide the infrastructure as a virtualized service, and datacenter operations carries on as it always did, taking care of the server operating systems and the applications they support. There are many advantages to this route, which I will cover along with the disadvantages in another article, but one big one is that the less hardware there is to maintain, the less there is to go wrong and the lower the headcount required to maintain it.
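The division of labor above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the IaaS model – the provider owns the physical capacity, the consumer only ever sees virtual machines – and every class and method name here is invented for the example, not any real provider’s SDK:

```python
# Hypothetical sketch of the IaaS model: the provider owns and maintains the
# physical hosts; the consumer requests VMs and manages everything from the
# operating system up. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    cpus: int
    ram_gb: int

@dataclass
class IaasProvider:
    """Stands in for the entity that owns the physical hardware."""
    capacity_cpus: int                      # total physical CPUs available
    vms: list = field(default_factory=list)

    def provision(self, name: str, cpus: int, ram_gb: int) -> VirtualMachine:
        # The provider enforces physical limits; the consumer never sees them.
        used = sum(vm.cpus for vm in self.vms)
        if used + cpus > self.capacity_cpus:
            raise RuntimeError("insufficient physical capacity")
        vm = VirtualMachine(name, cpus, ram_gb)
        self.vms.append(vm)
        return vm

# The consumer's view: ask for a VM, get a VM, never touch the hardware.
provider = IaasProvider(capacity_cpus=64)
web = provider.provision("web-01", cpus=4, ram_gb=16)
```

The point of the sketch is the boundary: everything inside `IaasProvider` is the provider’s problem, and everything from `web` upward is yours.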
Platform as a Service is an interesting but critical concept. It is basically an application platform, not unlike WebSphere, GlassFish, or a host of other application server platforms. It virtualizes, by design, the application’s dependencies and serves them as a single “platform” on which any compatible software can run. Often referred to as an “application fabric”, this platform is not really used directly by users or maintained by infrastructure teams, per se, but is rather a sort of “middleware” that developers target their programs against.
By virtualizing the application fabric, a set of standardized programming “stubs”, called APIs, can be used independent of which fabric sits underneath. Giving programmers these generic stubs to program against, on their own time and without the overhead of setting up the environment, is a great benefit to them, and it results in better, more consistent programs for the end users who pay for them.
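The stub idea can be made concrete with a short sketch. Here a hypothetical `KeyValueStore` interface plays the role of the standardized API: application code targets the stub, and any fabric that implements it can be swapped in underneath. The interface and the in-memory backend are both invented for illustration, not a real platform’s API:

```python
# Hypothetical sketch of programming against PaaS-style "stubs": the
# application depends only on the KeyValueStore protocol, never on a
# particular fabric's implementation.
from typing import Protocol

class KeyValueStore(Protocol):
    """The standardized stub (API) every fabric must provide."""
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class InMemoryStore:
    """A local stand-in backend; a real fabric would supply its own."""
    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str:
        return self._data[key]

def save_profile(store: KeyValueStore, user: str, bio: str) -> None:
    # Application code targets the stub, not any particular backend,
    # so it runs unchanged on whichever fabric is plugged in.
    store.put(f"profile:{user}", bio)

store = InMemoryStore()
save_profile(store, "alice", "cloud enthusiast")
```

Swapping `InMemoryStore` for a fabric-backed implementation would require no change to `save_profile` – which is exactly the consistency benefit described above.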
In the end, Platform as a Service is the epitome of “cloud computing”. In its ideal situation, it provides a structure where software can run and be used on any platform irrespective of the operating system the user prefers. Reality falls somewhat short of that, of course, but the industry continues to move closer to that ideal each day.
Software as a Service is an easy extension to make from the Platform as a Service idea. Here, however, the service is more customer facing. Programs such as PeopleSoft’s human resource management software, Salesforce.com’s contact management software, and even the Hotmail email service are common SaaS offerings under the definitions proposed above (and indeed under most definitions used in the industry). They are software that is developed, managed, and maintained by the company that owns it and provided as a service to the consuming users.
Software as a Service provides the most compelling case for both developers and users. Developers find it easier to maintain and support their codebase while minimizing piracy through controlled access to the software. Moreover, it is much easier for them to generate a steady income stream, as the model is more conducive to subscription-based pricing. Users gain the simplicity of a system that works without constant tweaking just to keep it running, and they often appreciate the ability to store and access data from anywhere without significant headache (most of the time).
Each of the definitions above is an excellent example of virtualization, but I have left out the key component that differentiates them as “X as a Service” offerings – the one feature that distinguishes XaaS from any other form of virtualization. That one piece is self-service provisioning.
Without self-service provisioning, any of these virtualization solutions would require a larger, sometimes much larger, workforce to service requests manually. Besides the cost overhead of those workers, human error would creep in and need to be tracked and fixed without a permanent patch, further driving up costs. And worst of all, it would slow provisioning down to days, if not weeks. Imagine how many people it would take to set up email accounts for each of Yahoo’s estimated 310M account holders, and how long that would take to implement. Now realize that is only one of thousands of SaaS providers…
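A quick back-of-envelope calculation makes the scale argument concrete. The 310M figure comes from the text above; the per-admin rate and working-days-per-year figures are assumptions chosen purely for illustration:

```python
# Back-of-envelope: manual account provisioning at Yahoo-mail scale.
# Assumptions (illustrative only): one administrator can manually set up
# 100 accounts per working day, and a work year has ~250 working days.
accounts = 310_000_000              # estimated account holders (from the text)
accounts_per_admin_per_day = 100    # assumed manual rate

person_days = accounts // accounts_per_admin_per_day
# 3,100,000 person-days of manual provisioning work.

years_with_1000_admins = person_days / 1000 / 250
# Even a staff of 1,000 full-time admins would need over a decade.
```

With these (generous) assumptions the backlog works out to 3.1 million person-days, or roughly 12 years for a thousand dedicated administrators – and that is one provider, for one sign-up task. Self-service provisioning replaces that entire workforce with an automated request path.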
As you can see, none of this is revolutionary, though some implementations are quite exciting. The fact that companies increasingly find value in this concept, rather than sticking with only their own internal datacenters, is also quite telling. While the concepts are not new or groundbreaking, the terms used for them may be. So what do you think? Are these terms and concepts new to you? Do you disagree with the definitions set forth? Let me know by posting a comment below…