Episode 82 of The SitePoint Podcast is now available! This week Kevin Yank (@sentience) chats with Jeff Barr (@jeffbarr) and Lucas Chan (@geekylucas) about cloud hosting with Amazon Web Services. Jeff is the author of SitePoint’s latest book, Host Your Web Site in the Cloud. Lucas is the lead systems administrator at SitePoint, 99designs, and Flippa.
Listen in your Browser
Play this episode directly in your browser! Just click the orange “play” button below:
Download this Episode
You can also download this episode as a standalone MP3 file. Here’s the link:
- SitePoint Podcast #82: Cloud Hosting with Jeff Barr and Lucas Chan (MP3, 1:46:54, 97.9MB)
Subscribe to the Podcast
Kevin: October 8th, 2010. Two experts in cloud computing team up to demystify the details of hosting your website in the Cloud; I’m Kevin Yank and this is the SitePoint Podcast #82: Cloud Hosting with Jeff Barr and Lucas Chan!
Hi there, and welcome to another SitePoint Podcast. I don’t want to take too long with this intro because we have a lot to get through. This is probably the longest podcast we’ve ever put out at SitePoint, by nearly an hour. What you’re about to hear is the audio from an online seminar that we held in support of the release of our latest book, Host Your Web Site in the Cloud, by Jeff Barr. This was a really successful event, and if you want to see the slides that go with the audio, we have a full recording with those slides on the site; I’ll be sure to link to that in the show notes. But I think the presentation works just fine as audio, because everything that’s mentioned in the slides is also mentioned by the presenters. So rather than sitting in front of your computer for two hours while you listen to this, plan to get some housework done, do some gardening, or even go for a walk, and settle in for two hours of expertise from Jeff Barr of Amazon and our very own Lucas Chan, who runs the servers here at SitePoint, 99designs, and Flippa. I’ll be herding the cats and throwing out the questions, but most of the time you’ll hear them talking, and it’s really good stuff. So maybe plan to take a break about halfway through this one, but I hope you enjoy it.
Hi there everyone, and welcome to this special webinar from sitepoint.com, brought to you in collaboration with Amazon to celebrate the launch of our new book, Host Your Web Site in the Cloud: Amazon Web Services Made Easy. We’re very privileged to have the author of that book, Jeff Barr, with us. Hi there, Jeff.
Kevin: Jeff joined Amazon in 2002 when he decided the smart bet was on the Cloud going forward and that Amazon was going to be a big player if not the player in that space. In his role as the Amazon Web Services senior evangelist Jeff speaks to developers at conferences and user groups and webinars like this one all around the world. We’re also joined by Lucas Chan who is one of our, in fact our lead sys admin here at SitePoint; hi, Lucas.
Lucas: Hi everyone.
Kevin: Lucas joined SitePoint in 2002 as a web dev, but he’s since gone on to lead the move to cloud hosting here at SitePoint. These days he’s the one most responsible for SitePoint’s server infrastructure, most of which is now hosted virtually on Amazon Web Services. So what we’re going to do for the next hour is talk a bit about cloud hosting and just what this book covers, but our biggest mission here is to give you guys and gals sitting in the webinar an opportunity to ask, and get answers for, all of your questions about cloud hosting and cloud computing; we’ve got the man himself here and we want to make sure you get to take full advantage. So, we’ve got roughly half an hour of material where we’re going to take an informal tour through this book and chat about some of the different issues it discusses, but there is a question feature here in the webinar, so please, as soon as you’ve got something to ask, or even if you came to the webinar armed with questions you wanted answered, feel free to put them in at any time. We’ve got Louis Simoneau, one of our technical editors here at SitePoint, who will be going through all those questions and flagging them so that I can ask them of our two experts on the panel.
So, without further ado I think we’ll just dive in. I’ll hand over to you, Jeff, as the author of this book to get us started, so maybe why don’t you start us off telling us a little bit about your journey that brought you to writing this book and what does this book mean to you.
Jeff: Okay, so first of all, thank you all for coming; I’m really happy to be here speaking to you virtually, and I hope this turns out to be a worthwhile use of your time. So, let’s see, what does the book mean to me? I don’t think I ever started out to be a writer, and when I was younger, actually going all the way back to high school, I remember several teachers telling me that I seemed like a fairly smart guy but that my writing skills were very, very poor, and their advice to me, several times, was: you could probably do pretty well in life if you could only learn how to write. So I look at this book and I pick it up and say, I think I actually got that figured out. I’ve flipped through the pages and read it, and I’m pretty happy with the quality of the text, the quality of the content, and the overall flow.
So, as far as how I got here: I’m based up here in Sammamish, Washington, which is basically the next city over past Redmond, home of Microsoft, and in my earlier career I spent some time in the ’80s and ’90s at a number of startups and small companies. In 1997 my family and I moved across the country from Maryland out to Washington so I could work at Microsoft, and while I was there I spent some time on the Visual Basic 6 release, kind of remembered as the last great Visual Basic release, and then a couple of years on visualstudio.net. After that I left Microsoft; I wanted to do something a little bit more exciting, and I was a little bit frustrated with the very, very long product cycles there, so I got connected up with a friend of mine who’d made some money in the dot com boom, and he wanted me to help him investigate and invest in companies. So for a couple of years I got to help him evaluate companies, the technology, the markets, and the people, and give him recommendations on whether he should invest or not.
So, at that time a lot of the companies we looked at, back in 2000 and 2001, were talking about XML and SOAP and web services, and often, if you remember, UDDI and things like that that are best forgotten. We’d talk a lot about this idea of being able to make web service calls across the Internet and activate remote functionality, and we’d talk and talk and talk about that, and quite honestly, way back at that point, the audiences were kind of skeptical. When we’d go to potential investors and say, look, this is something we think is the future, the ability to build applications that effectively take advantage of programmable websites, the idea that websites themselves will have APIs and you can build programs against those APIs, people would look at that and shake their heads fairly skeptically, and I don’t think they really grasped what I had maybe seen a little bit earlier: the power of APIs and the power of platforms. So the very, very first release from Amazon I saw was a little note on the side of my account. I logged in one day, a little box popped up and said, “Amazon now has XML,” and I immediately thought, this is pretty neat. I investigated, and through a couple of different steps found myself working on Amazon Web Services by the summer of 2002. So it’s been a pretty amazing journey having been there the last eight years; we’ve gone from that initial service, the product catalog access, into more of the infrastructure services, which is what I focused on in the book, and I’ve been part of this organization and watched it grow from some great ideas and some great people into an organization with a large number of services and great customers and great applications. It’s been pretty exciting so far.
Kevin: So let’s move right along to the book here. The book’s got eleven chapters in it, and we’re going to use that as a rough guide for the presentation today. Lucas will be chiming in on each chapter to give a bit of practical perspective about the related things that are going on for us here at SitePoint, because one of the reasons we’re most excited about publishing this book in collaboration with Amazon is that the move to cloud hosting has been such a transformative experience for us. So the book is Host Your Web Site in the Cloud, and the first chapter is “Welcome to Cloud Computing.” Cloud computing is one of those terms that, I know for me, scared me off the first time I encountered it; it seemed like a very computer-science-y thing, like I must be doing work for NASA in order for it to apply to me. Why don’t you talk a bit about cloud computing and how cloud hosting fits into that, Jeff?
Jeff: Certainly. So, I did really try in the book to start out from the very, very beginning and just take people up the ramp, so that even if you didn’t have an appreciation for cloud computing before, you really would understand the concept: the idea that, basically, from your desktop you can make connections across the Internet to remote services, and go from the model of static servers and static resources that you pay a fixed amount of money for every month, whether you have way too much or way too little, to a much more dynamic and flexible model where you can scale up and scale down as needed. So one of the things I talk about a bit in the first chapter is this phrase that I think I invented called “the success disaster.” The success disaster means you had a great idea, you did an implementation, you set up the hosting, you put it online, and you kind of plotted out the growth and thought you had plenty of time to deal with success and with scaling sometime later. But you put it online, and instead of the few friends visiting, the whole world shows up at your front door, and in today’s kind of attention-flash economy you generally only get one or two chances to make a good impression; if that first day or that first week doesn’t deliver people a high quality experience, then their attention is going to move on to the next bright and shiny thing. So this success disaster is the day you thought would be awesome, because you got so many great users, but they didn’t actually get the quality of service that you wanted to provide them.
So I also wanted to talk a lot about this idea of the programmable infrastructure. It used to be that when you needed new servers and storage you would call up your systems person or your hosting company and negotiate back and forth over delivery times and prices; in the new model of the Cloud, you simply make a web service call that says I need another server, I need a disk drive, I need some storage, and the Cloud configures itself and responds in a matter of seconds with the identifiers for those new resources.
Kevin: Right. So, Lucas, how much of this played into our decision to move to cloud hosting; were we experiencing a success disaster of our own?
Lucas: Yeah, all of it plays into it really, especially what Jeff was saying about turning people away at the door. Around 2006/2007 we would have occasions where we’d be on the front page of Digg or Slashdot or something like that, and our servers were just completely caving in under the load. So at that time, running out of capacity coincided with a massive development effort for SitePoint Contests and then Marketplace, and looking at the Cloud became really attractive to us, especially given the ease and the speed with which we could bring servers online with some highly experimental configurations. Yeah, the timing was really good for us when we discovered Amazon Web Services.
Kevin: These massive waves of success, Jeff, it’s really quite — I mean that is part of what is considered a successful launch on the Web these days, it’s kind of like if you don’t get that you’re not going to get the critical mass you need for so many of the businesses that are going live. And so if you don’t plan for that you’re really setting yourself up for failure, aren’t you?
Jeff: I agree. And quite often I think what happens is that people think they’ve planned, they’ve done some modeling, and they believe they’ve arranged for enough capacity, but then the site gets used in unanticipated ways: maybe a somewhat inefficient feature that you thought was peripheral to the overall site turns out to be the thing that everybody actually focuses on. So maybe you built this little side part and said, well, nobody’s going to use it, so I won’t index the tables or optimize the code, and then it suddenly turns out that’s the part everybody likes. In the old model it’s back to the drawing board, re-architect, re-implement, whereas with the Cloud model you say, okay, that’s alright, I’ll just spin up some more resources and deal with the inefficiency by simply paying a little bit more money for a short amount of time; quite literally you’re buying some time to go back and do a more optimal implementation.
Kevin: Alright. So if you’re ready, Jeff, we can move on to Chapter Two.
Jeff: Okay, so Chapter Two is really where I start by describing each of the different Amazon Web Services. I was really careful to make sure that I addressed not just the technology but the business side of the Cloud. When I go out and speak to audiences I make it really clear that the Cloud is not simply a new technology to be aware of; you need to pay attention to the business side, and you need to understand that you’re paying for your resources on a pay-as-you-go basis. So I want to make sure that people understand that each different resource, such as processing and storage and bandwidth and messaging, has a unit price that you pay as you consume it. Generally during development you’re not using a whole lot of those resources, so you’re not going to be paying a whole lot; you can develop your new application or your new site on a very economical basis, and you don’t even need to scale up to a large-sized server, or instance as we call them, until you’re actually up and running. Our smallest instance is called a Micro, which is even smaller than the smallest one documented in the book, and the Micro instances actually rent for two cents per hour, so I figure that most people spend more at their favorite coffee shop or soft drink machine than they would on servers at those kinds of prices.
Kevin: We talk about being ready for the wave of success, but at the same time the low cost of your minimal server profile means that you have a lot of runway to wait for that success to come along I suppose.
Jeff: That’s totally true, and it’s either runway until it succeeds, or it means you can run some more experiments and try things out without having to worry about wasting resources in case your experiments don’t pan out. Generally we know that the more things we try, the better our chances of success. So a cloud-based model really gives you that freedom to experiment and test things out without having to go back to your boss and say, well, we just spent $100,000.00 on new servers but it turns out that nobody actually needed them.
Kevin: Lucas, for a long time we maintained a bit of legacy hosting; we had accounts at places like DreamHost and so on, and I think we still do. But for you, is there any experiment too small to go to the trouble of putting it on cloud hosting, or is that your default solution for everything now?
Lucas: The new instance size that Amazon released the other day was the Micro size, is that right Jeff?
Jeff: That’s right.
Lucas: Yeah, so prior to that we might possibly have considered putting a really tiny project, a really non-critical site, onto a shared hosting environment or something like that, but Amazon’s got it pretty much all covered now, I’d say. And it’s a really nice situation to be in where we’re not having to predict our hardware needs for the next two or three years, go through the process of negotiating contracts for those servers, and worry about how they’re going to be maintained and managed, or how it’s all going to work if we choose to colocate somewhere.
Kevin: I do remember the last time we had a big meeting like that, Lucas, and it was early in the days of Amazon Web Services, and it was kind of like I think us keen developers really did want to make the leap to the Cloud but we had to do our due diligence and show how much it was going to cost to go the other way, and I think you and Lachlan Donald at the time put together a big huge plan of what a server cluster would look like that could handle a reasonable amount of load going forward, and the numbers were pretty convincing, weren’t they?
Lucas: You know what, the numbers were convincing but the actual outcome was nothing like what we predicted, and if you talk to management at SitePoint now they’ll tell you that we’re spending more money on hosting than we really had predicted back then. But really you have to consider the fact that we have launched so many more applications into the Cloud than we had ever really considered possible back then as well, so it’s really just given us so much more opportunity to grow at a rapid rate.
Kevin: Cool. So, Chapter Three, Jeff, is Tooling Up. So what kind of tools are you talking about there?
Jeff: So, the idea in Chapter Three is basically to make sure that people have their local environment set up: that they’ve got their PC, that they have the right version of PHP installed, that they have the CloudFusion toolkit, and that they’re also aware of the visual tools, things like the AWS Management Console and various third-party tools they can use. I also want to make sure that they know how to go to the AWS site, create their account, and get the keys that they use to identify and sign requests, and understand all these kind of nitty-gritty things that are sometimes just a speed bump between seeing the code in the book and saying, this looks awesome, I’d love to give it a try; I wanted to make sure all those steps were carefully spelled out. I also wanted to point out a few things, like the fact that it’s really handy when you’re building web apps to have control of some DNS, and to have a couple of domain names lying around that you can use for experimentation when you want to put things online and see how they work.
Kevin: I know when we first got started with AWS, and EC2 specifically, it was a few years ago and it was very much a command-line, nitty-gritty experience. Lucas, do you want to talk a bit about that and how we managed it at the time, and then maybe, Jeff, you can bounce back with how that’s changing?
Lucas: Yeah, for sure. So, Kev, as you mentioned, initially Amazon just provided some command line tools for accessing their APIs, and we thought we could do a little bit better than that. We built a little web app for managing the instances that were running our sites, but it was kind of hard; we spent a lot of time figuring that out. These days it’s so much easier with the AWS Console that Amazon provides, and other third-party tools like ElasticFox, and we even use some third-party cloud management services like RightScale and CloudKick now. So it’s really easy to manage this stuff now. And, you know, we had a couple of disasters writing our own tools, shutting down instances accidentally and all that kind of stuff, but those are war stories for another time.
Kevin: Jeff, is it fair to say that Amazon initially wanted to step aside and allow third parties to control the tool story and they just wanted to focus on building really solid pipes?
Jeff: I don’t think that was really the thinking. Generally what happens is that as we build new services, we build the service, we layer the APIs on top of the internal implementation, and then we build command line tools to exercise those APIs, and that’s the first level that we can ship. It turns out that putting visual tools together is generally a lot more work: you have to build visual representations of the services, get UIs figured out, do usability testing, and so forth. So in order not to delay the release of the services, we generally put the service and the command line tools out first, and then have a separate timeline for the visual management tools. But we certainly do respect and appreciate the third-party management tools and do a lot to help them out, and I think it’s worth pointing out that when we build our tools we build on top of the same APIs that the third parties do.
Kevin: We have a question that came through from Rob Morrow who asks “So what does it really cost?” And you sort of went through the rough breakdown of the things that Amazon charges you for, but I suppose can you give our listeners a rough idea of at the minimum running a single instance Micro server if you were just — you needed a server to develop on and experiment with, what are we talking about in cost here?
Jeff: Oh, let’s see, I’m really bad at doing math in my head, so the Micro servers are two cents per hour so that’s basically about fifty cents a day, so we’re talking about $15.00 per month for full-time access.
Jeff: Now, there are separate charges for bandwidth in and out, currently fifteen cents per gigabyte, so with a server up and running plus some separate storage we’re probably looking at less than $20.00 per month for a kind of minimal server. Now, if you’re going to run for an extended amount of time, or if you can predict your usage, we have this model called Reserved Instances, where you pay up front to reserve capacity; once you do that your hourly rate is lower, and the net price, when you figure out the cost on a 100% or 50% usage basis, works out to an even lower price per hour.
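To make Jeff’s back-of-the-envelope figures concrete, here is a quick sketch using the 2010-era prices quoted in this conversation (two cents per instance-hour for a Micro, fifteen cents per gigabyte of bandwidth). These are historical numbers, not current AWS pricing, and the calculation deliberately ignores storage and per-request fees:

```python
# Rough monthly cost for one always-on EC2 instance, using the
# 2010-era prices quoted in the conversation above.
MICRO_HOURLY = 0.02        # USD per instance-hour (Micro, 2010 price)
BANDWIDTH_PER_GB = 0.15    # USD per GB transferred (2010 price)
HOURS_PER_MONTH = 24 * 30  # EC2 bills per clock hour

def monthly_cost(hourly_rate, gb_transferred=0.0, hours=HOURS_PER_MONTH):
    """Instance-hours plus bandwidth; ignores storage and request fees."""
    return hourly_rate * hours + BANDWIDTH_PER_GB * gb_transferred

print(round(monthly_cost(MICRO_HOURLY), 2))                     # 14.4
print(round(monthly_cost(MICRO_HOURLY, gb_transferred=20), 2))  # 17.4
```

So a Micro instance alone is about $14.40 a month of clock time, and even with 20 GB of traffic it stays under the "less than $20.00 per month" figure Jeff mentions.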
Kevin: Alright. Thanks for the question, Rob; if you have any more specific cost questions please feel free to post a follow-up question.
Kevin: So at this point in the book we get into some of the actual specific services and what they can do for you, and the first one you look at is S3, the Simple Storage Service.
Jeff: Right. So S3 was one of our earlier services, and I think when we rolled this one out it really got people’s attention; they said, hmm, this is just not something we would have ever expected Amazon to do. We really have two services that fit together like a hand in a glove: we have S3, which is storage for objects, and we have CloudFront, which is content distribution, with a number of nodes around the world positioned very, very close to your users so that they can get the content very quickly with low latency. So here I basically explain S3 and CloudFront and go through all the basics: creating a collection, which is called a bucket, listing the contents of your buckets, uploading files to a bucket, doing some fun stuff like pulling graphics down from the bucket and using some simple graphics processing to create thumbnail-sized images and store them back, and then using CloudFront for distribution. One thing I found really useful as a developer is that when I have a principal task to be done I definitely work toward doing that task, but I’m not afraid to take one step to the side and explore something else that’s interesting. So as I was doing the programming for S3 I said, let me just learn a little bit about graphics processing, which is how I got into the thumbnailing.
Kevin: Lucas, what are some of the things we use S3 for?
Lucas: One good example of S3 usage is 99designs, which was formerly SitePoint Contests. We store contest submissions, so logos and web designs submitted by the design community there, in S3, and then additionally we store thumbnails and resized versions of those submissions. As of this morning we’re at around 5.9 million objects stored in the 99designs S3 account, so certainly we’re talking about large scale and really reliable backend storage. Having said that, we tend not to have our users fetch objects directly from S3, even though that’s certainly possible, just because we like to have some more fine-grained control over how things are cached and so on.
Kevin: What’s fascinating to me about S3 is that it’s almost a return to the origins of web infrastructure. When the Web was first conceived, and when servers like Apache were first being written, they were mainly architected for serving static resources: HTML files, and in some cases image files. The web browser would ask for a URL, the web server would go and fetch the file and send it to the browser, and it didn’t have to be any more complicated than that. Over the years I feel like web servers have evolved into this whole scripting, programmatic, dynamically generated pages type of thing, but S3 is kind of saying, hang on, it’s still important to serve static objects, and the more these systems are architected for dynamic generation of pages, the less optimal they are and the more money you’re spending just to serve these static things; so why not architect something that is specifically designed for static content?
Jeff: Exactly. And also as you start getting into more and more objects and larger amounts of data, the amount of effort you’d have to expend on your own to build a storage infrastructure and a backup infrastructure and built-in redundancy and so forth, that gets pretty complicated pretty quickly, and when you start talking about nearly six million objects, I don’t even know how much data that represents, but you suddenly get to the point where you say this doesn’t start to fit on a single disk and my backups become very, very difficult and you suddenly get this very, very uncomfortable situation on sites that grow quickly where the data is coming in more quickly than you might know what to do with it. So being able to send it up to the Cloud, not have to worry about disk full errors and those kinds of things is really comforting and simplifies the whole application building process.
Kevin: We have an interesting question about S3 from Giovanni Castillo, he says, “I have a system which is protected with usernames and passwords,” and he uploads documents to S3 and then other users can view or download the documents from the system, but he doesn’t want to put those as public documents. I think this is a good opportunity to talk about some of the flexibility of S3. How can you serve stuff from S3 without making it completely public?
Jeff: Sure. So, there’s a complete ACL, or access control list, model for S3, and there are multiple ways you can protect your content. One thing you can do is set up the system so that each time you render a page for a particular user you generate a unique time-limited URL that is only valid for a very, very small amount of time; when you deliver that content to the user, even if they copy the URL and hand it to someone else, it would expire and wouldn’t be valid after a certain amount of time. We also have a relatively new service, actually newer than I was able to cover in the book, called IAM, Identity and Access Management. With IAM you have very, very fine-grained control of access to objects, to individual S3 buckets and collections of S3 objects, and even to individual S3 APIs. You can put in protections that say these particular users have the ability to call some APIs, and these other users have the ability to use only a different subset of those APIs, so there’s a lot of fine-grained control available with S3.
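As a sketch of how those time-limited URLs work: with S3’s legacy query-string authentication (signature version 2, the scheme in use around the time of this conversation), you sign the expiry time and object path with your secret key, and S3 refuses the request once the Expires timestamp has passed. The bucket, key, and credentials below are made-up placeholders, and modern SDKs generate these URLs for you (now with signature version 4), but the principle is the same:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def presigned_url(bucket, key, access_key, secret_key, expires_in=300):
    """Build a time-limited S3 GET URL in the legacy signature-v2 style.

    Anyone holding the URL can fetch the object until the Expires
    timestamp, after which S3 rejects the request."""
    expires = int(time.time()) + expires_in
    # String-to-sign: verb, (empty) Content-MD5 and Content-Type,
    # expiry timestamp, and the canonicalized resource path.
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, signature))

# Hypothetical bucket, key, and credentials for illustration only.
url = presigned_url("example-bucket", "report.pdf", "AKIAEXAMPLE", "not-a-real-secret")
print(url)
```

Because the signature covers the expiry time and the object path, tampering with either invalidates the URL, which is what makes it safe to hand to an end user’s browser.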
Kevin: Cool. So let’s talk about EC2 next, and for me EC2 is the heart of Amazon Web Services. Does it seem that way down your way Jeff?
Jeff: I think so. Despite having all these wonderful services that store data, move it in and out, and handle messaging, you need to do some processing, so you’re almost always going to need some EC2 at the center of things. In this chapter I talk about things like the programmable data center, the idea that you do your resource allocation through APIs. I cover many of the important aspects of EC2, and then I try to get practical very quickly: I show you how to launch an instance with the AWS Management Console, how to use PuTTY to actually SSH in to your instance, and then, once we have that basic instance, we do things like assigning an Elastic IP address, so it’s got a permanent IP that you can remap to different instances as you scale up or scale down; Elastic Block Storage, which is basically virtual disk volumes that you allocate in the Cloud and then attach to your instances; creating custom machine images; and then using the APIs to do some of the dynamic processing that I think is so exciting.
Kevin: Great. And, Lucas, was it EC2 that really attracted us to Amazon Web Services and then we discovered everything else going out from there?
Lucas: Yeah, that’s definitely fair to say. One of the things I recall we really wanted to do was rapidly deploy a PHP5 code base for our customers to check out as a beta for SitePoint Contests, and we were really able to do that super quickly with EC2. These days we’re using a combination of small, large, and high-CPU instances, and it works really well for us in that we can target different instance sizes for various apps depending on whether they’re CPU intensive or not. Jeff talks a bit about creating custom AMIs, which are machine images; we tend not to do that too much at SitePoint, because we’ve found it a lot more convenient to customize these machines when they boot, using user data that’s passed into the machine as it goes through its boot process. The other thing, and I alluded to this when talking about S3, is that we tend not to serve too much data directly from S3; we serve most of our data directly from our EC2 instances, and they’re really good for that. And, yeah, that pretty much covers our EC2 usage, I’d say.
Kevin: Okay. We’ve got a couple of questions that have come through about some of the acronyms being tossed around here. EC2 stands for Elastic Compute Cloud. Maybe, Jeff, take a quick step back and describe, at its heart, what is the EC2 service?
Jeff: Sure. So, EC2 is essentially the ability to obtain servers running Linux or Windows or Solaris on an hourly basis. You allocate these servers from the Cloud, and when you make that request you specify a couple of different things, such as the instance type; there are actually 10 different instance types, with different amounts of memory, all the way from the Micro instances up to some very, very large instances with up to 70 gigabytes of memory. You specify the instance type, whether you want a 32- or 64-bit machine, where in the world you’d like to run, since we currently have regions, or data center clusters, in four different parts of the world, and a few other parameters. When you launch these servers, along with that launch request you point to something called an AMI, an Amazon Machine Image. The AMI is effectively the same as your C: drive, or the root disk of your Linux system: it has your operating system, your pre-built applications, and whatever static data you’d like to include. We give you a long list of pre-built AMIs, and I also show you in the book how to build your own. So you launch your EC2 instance, or instances, and I say instances because we have some customers that launch hundreds or even multiple thousands of instances for their applications. You’re billed for the usage on an hour-by-hour basis; we actually bill you for clock time, so if you started an instance up and used it for 24 hours you’ll pay 24 times the per-hour price for that instance.
Kevin: Cool. Okay, I think we can move on from EC2, I’m sure we’ll have more to say about it, and a lot of the topics we have coming forward tie into it, but let’s move to Chapter Six which is all about SQS. So what’s SQS?
Jeff: Okay, so SQS is short for Simple Queue Service, and when we describe something as “simple” at Amazon we don’t mean that it’s just for little baby applications or toy applications; we mean that the programming model, the mental model, and the interface are designed to be very clean, very straightforward, and something that you can understand, assimilate, and start to use very, very quickly. So in this chapter I go through a number of different ways that you would use SQS. SQS is a first in, first out message queue, and you can use these queues to build a very flexible processing system. For example, you could build what you think of as an application pipeline, where you go from process one to process two to process three, and you can glue those steps together by using a queue in between each processing step. So if some of the steps are very lightweight and take a short amount of time while some of the further pipeline steps are heavyweight and might take a long or variable amount of time, you can use the message queues as buffers between the different pipeline steps. You can also build your application such that if one of the pipeline steps is getting behind and the queue is backing up, you simply launch another instance of the service, and you then have two of the same applications pulling from the queue, doing the processing, and then sending the results along the way.
So, I actually had a ton of fun writing this chapter, I hesitate to pick a favorite chapter but if I had to I would probably say that it would be Chapter Six, and I built this really neat processing pipeline where there was a little queue and I would just drop some URLs to web pages in that queue, and I would then take those URLs through a couple of steps where I would fetch the HTML of the page, I would parse it, I would identify the image URLs in the page, fetch each of those, collect up all the images for a single page, convert them down to little thumbnails, and then make kind of a representative page that had up to 16 thumbnails from the page. So it was really neat just to build this application on a step at a time basis where I kind of identified the general thing that I wanted to do, and then I just built it one pipeline stage at a time, and I would load up a queue, run the application, test it, get the output into the next queue and then design and build the next step along the processing pipeline.
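The pipeline Jeff describes can be sketched in miniature. This is an illustration only, in Python, with in-memory queues standing in for SQS and trivial lambdas standing in for the real fetch and thumbnail steps (the book’s own examples are in PHP):

```python
from queue import Queue

def run_stage(inbox, outbox, work):
    # Drain the upstream queue, process each message, pass the result downstream.
    while not inbox.empty():
        outbox.put(work(inbox.get()))

# Three queues glue two processing stages together, one queue between each step.
fetch_q, parse_q, done_q = Queue(), Queue(), Queue()
fetch_q.put("http://example.com/page.html")

run_stage(fetch_q, parse_q, lambda url: f"<html>{url}</html>")  # stand-in for the HTML fetch
run_stage(parse_q, done_q, lambda html: html.count("html"))     # stand-in for parsing out images
```

Because each stage talks only to its queues, a stage that falls behind can be scaled by pointing a second copy of the same worker at the same inbox, exactly as Jeff describes.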
Kevin: Yeah, I have to admit SQS was the one that when it was announced as I guess a regular web developer this was one of the services that I looked at it and went, hmm, I can’t really see how this would be useful to me, it’s nice that they made it but this is one of the pieces that felt like I needed to be working at NASA in order for this to benefit me. But, Lucas, you guys down at 99designs pretty quickly found a use for SQS, didn’t you?
Lucas: Yeah, we love message queues; we’re always talking about them. Really the big benefit with adopting queues is that we don’t really need to make our users wait for stuff anymore. So, you know, using image resizing as an example, when a user uploads an image to our site we don’t make them wait while the PHP script resizes the image and then redirects them back to like a confirmation or a success page. All we do is we dump a task into the message queue and the user can go off and continue navigating around our site, and in the background we have some EC2 instances that are monitoring those queues and performing these tasks like image resizing or logging and maintenance operations, stuff like that, so really the key is we don’t like to make our users wait for stuff, and whenever possible we’ll ship a task off to a queue and just let one of our worker machines deal with it. So, other examples of where we use queues are for things like cron tasks and transaction processing and other data processing and reprocessing operations that we just don’t really want our application servers to deal with. And so if we ever have a situation where the queues are getting really busy or are backing up all we need to do is light up a new EC2 instance which is programmed to monitor those queues and perform the tasks that we put on them.
Kevin: So it feels to me like SQS is kind of a good second thing to look at once you have moved your site onto a cloud hosting platform. If you’re building websites on a shared hosting environment at the moment this is probably feeling like another world to you, but once you move stuff onto cloud hosting you start asking: okay, maybe I’m saving a bit of money, or maybe I’m spending around the same amount but getting a lot more flexibility, but what else? What other benefits are there for me? If you look into SQS, this is one of the things you get on a cloud hosting infrastructure that you don’t necessarily get on a more traditional one, and it can really impact the experience you give to your users. If you realize you don’t have to do all the processing that comes out of a particular browser request before you send the next page back, what does that mean? How can that change and improve the user experience? Especially with sites that are doing Ajax tricks to feel more like applications, this sort of flexibility becomes really vital.
Jeff: Exactly. And this is one of those; I think I used in the book the phrase “thinking cloud” a couple of times. And to me thinking cloud is kind of one of these gradual changes and thought process where you go from the traditional model where you have a fixed amount of resources and you think mostly in terms of synchronous processing to a cloud-based model where you think in terms of variable resources and you think in terms of asynchronous processing and the ability to be a lot more dynamic, a lot more adaptive. So in this case we can be adaptive to workloads, we can be adaptive to the fact that sometimes it might take a longer time than other times to do the page fetching or the rendering, and our site is not going to get stuck or overloaded because some of these processes take too long, because any of these variable impact processes are just in queues and the queues will expand to whatever size is necessary and the work will just take place at whatever speed it’s able to do so.
Kevin: I want to get on to the next chapter quickly, but we do have one question from Matt S. who asks, “Are the messages in SQS durable?” And for the benefit of our listeners you should probably expand on what that question means before you answer it.
Jeff: Okay, yes, so the messages are durable, and when you post a message to SQS it’s stored redundantly and it’s then there for a configurable amount of time. Basically you put it there and it’s available for several days until it expires from the queue. Now, when you pull a message off the queue to process it there’s a very, very interesting process that happens. When you take that message and are ready to start using it, it actually is not deleted from the queue, it’s instead just temporarily made invisible to any other applications that might be trying to pull from within the queue, so that gives you a time window during which you would actually process the message and then pass it along to whatever is supposed to happen next. If you were to pull that message from the queue and for some reason your application fell over and died for whatever random reason, after that time limit expires on the message, the message will reappear in the queue and another application can pick it up and continue on. So this is also another element of building reliability into a system so that you can’t accidentally have an application die while it’s in the middle of processing a message and have that message simply be lost.
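The visibility window Jeff describes can be modeled in a few lines. This is a toy Python simulation of the mechanism only, not the real SQS API (the class and method names are made up):

```python
class VisibilityQueue:
    """Toy model of SQS's visibility timeout; not the real SQS API."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.messages = {}   # message id -> (body, time it becomes visible again)
        self.next_id = 0

    def send(self, body):
        self.messages[self.next_id] = (body, 0.0)
        self.next_id += 1

    def receive(self, now):
        # Hand out the first visible message and hide it until the timeout expires.
        for mid, (body, visible_at) in self.messages.items():
            if now >= visible_at:
                self.messages[mid] = (body, now + self.timeout)
                return mid, body
        return None

    def delete(self, mid):
        # Called only after processing succeeds; until then the message survives.
        self.messages.pop(mid)
```

The key property: a message only leaves the queue on an explicit delete, so a worker that crashes mid-task can never silently lose work; the message simply reappears once its timeout lapses.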
Kevin: Alright. Let’s motor on to Chapter Seven which is about monitoring, scaling, and elastic load balancing. This for me is the stuff that I was really excited about going into Amazon Web Services because it solves so many of the problems that are so much work on more traditional infrastructures.
Jeff: That’s right. So in this chapter I really build from the idea of a single server, a single host, into the idea of scaling, so you can have several parallel servers all running the same code; when you have web traffic coming in you can use the load balancer to scatter that traffic to any one of the available servers. You can then watch all the servers using our service called CloudWatch, and use auto scaling to say: okay, if the load on the set of servers that’s running is too high, simply launch another server, put it into the load balancer, and further distribute the traffic across those servers. So when you start thinking about how do I deal with large amounts of traffic, how do I build a site that can scale up and scale down so that I’m only using an adequate number of resources, not too much, you start thinking about auto scaling and load balancing to implement that. In traditional hosting this is very, very complex, to the point where you might need to bring in outsiders to help you set it up and configure it, but with EC2 you either go to the command line or go to the console, and with a couple of clicks you can set up automatic scaling and load balancing; it’s much easier and more straightforward. Another thing I did in this chapter is I wanted to demonstrate how you can test to make sure that something scales, so I used an open source application called Apache JMeter: I set up my application for automatic scaling, used JMeter to send a variable load to that system, and then observed that the automatic scaling took place as I intended.
Kevin: Cool. Lucas, do you want to maybe show off a bit of the monitoring that we do at SitePoint and 99designs and how that works for us?
Lucas: Yeah, you guys can see my screen now?
Lucas: Okay. What we’re looking at currently is just a visualization of the load on the application servers that are running 99designs, and this is for the last 24 hours, so it’s nearly 10:00 o’clock in the morning now and the process I’d normally go through when I get into the office is to load up this interface here—this is running on RightScale—and check out what the application servers were doing overnight.
Kevin: So RightScale, for those who aren’t familiar with it, it’s a third party service that is used to monitor and auto scale a collection of EC2 servers.
Lucas: Yes, that’s right; that’s effectively what the AWS Console does as well. One of the other features of RightScale is that it gives us the ability to construct what are called server templates, which are basically a collection of scripts or instructions that a server runs when it boots, so they have a really nice interface for managing server configuration and server boot processes, and that is version controlled as well. I won’t show that off today, but what I really wanted to show is the type of monitoring that we can get, and we can go back over the last year or month or quarter. If we have a look (I’ll just switch to this other tab), this is a look at the database layer for 99designs over the last month, and this top graph is the CPU usage for our database master. You can see that the CPU usage has been growing over the past few weeks, so the process I might go through here is to work with the developers, maybe go through their source control commit logs, and see what was released around week 36 that started this increase in CPU usage. So having tools like this is really, really handy, and as I said, the Amazon console does provide this; we’re currently using RightScale mostly, and we’ve also experimented with another service called CloudKick. So, that about covers the monitoring that we use.
Kevin: Okay. We have a question that’s come through that I think you could answer, Lucas: “How does this auto scaling work with databases? For example, if you have a lot of users accessing a database and adding posts, like on a forum, does the database then become a choke point?” This is from Justin Herrera.
Lucas: Yeah, that’s a good question. And the answer is often quite different depending on the usage profile of your database. So, for example, a forum is something that would typically be read heavy, or SELECT query heavy, so your options for scaling out that layer are to use a master and slave configuration where you’re sending your read queries to the slave, shielding your master in effect and leaving it to deal with the UPDATEs. Going on from that, you can then go to a configuration where you have more than one slave, so you have multiple slaves replicating from this master DB server, and then load balance your SELECT queries across those slaves. There does come a point, I suppose, where you need to deal with load on the master database, and that can be solved in a couple of ways: you can either scale vertically and switch to a high-CPU master or something that has increased I/O capacity, or you can look at partitioning your master DB so that you have your DB tables split up across two masters, and go from there.
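The master/slave split Lucas outlines usually lives in the application’s DB library. Here is a minimal sketch of that routing decision, in Python with placeholder connection names; it is not any real library’s API:

```python
class QueryRouter:
    """Sketch of read/write splitting in an app's DB layer.
    The class and connection names here are illustrative, not a real library."""
    def __init__(self, master, slaves):
        self.master, self.slaves = master, slaves
        self.i = 0

    def pick(self, sql):
        if sql.lstrip().upper().startswith("SELECT"):
            # Round-robin the reads across the replicating slaves.
            self.i = (self.i + 1) % len(self.slaves)
            return self.slaves[self.i]
        return self.master   # UPDATEs, INSERTs, and DELETEs always hit the master
```

Adding read capacity then becomes a one-line change: append another slave to the list, exactly the “multiple slave” configuration Lucas describes.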
Kevin: Hmm-mm. At this point for sites on the scale of SitePoint and 99designs is it fair to say, Lucas, that database reads have at times become a choke point that we have had to scale for but database writes have never gotten to that point for us yet?
Lucas: Yeah, that’s mostly accurate I’d say. So, for example, flippa.com, which is pretty read heavy, recently we scaled vertically for the DB slave, so instead of adding another DB reader and going for a multiple reader configuration or multiple slave configuration we opted to scale vertically in that respect. However, there are times when, and I think, Kev, you might recall dealing with session updates—
Kevin: Oh, yeah!
Lucas: —sent to a DB master, and we were able to get around that using MySQL memory tables. But there are times when unexpectedly you’ll see a lot of UPDATE queries coming in, and there are various ways of dealing with that; it’s really a case by case basis, I’d say.
Kevin: Cool. We had another question from Matt S. come in, and this one’s about scaling. He asks, “Is this configurable to replicate content/settings from a primary node to the farm automatically?” Jeff, do you want to talk roughly about strategies for that sort of thing?
Jeff: Sure. That’s a question I could probably write half a book on; there are so many different ways you can go about it. One way is to use Amazon Simple DB, which we’ll talk about next, to store configurations and settings. Another way is that when you’re setting up your EC2 instances you can also create virtual disk volumes called EBS, Elastic Block Storage, volumes. After you create these you can go to the console and simply click to turn them into snapshots. Then, every time you launch a new instance, you can say: also create some disk volumes from these particular snapshots. So if you have reference data or read-only data you can simply create new copies of the virtual disks each time you launch additional instances.
Kevin: Yep. It’s a really flexible system. I guess the only definitive answer you can give to something like that is you have a lot of options and there’s a lot of people out there willing to give you advice for it.
Jeff: That sounds about right.
Kevin: So let’s motor along to Chapter Eight, and Amazon Simple DB, which I have to admit I don’t have any experience with this particular service. I don’t know if we’re using it anywhere at SitePoint. Lucas?
Lucas: Yeah, we don’t have much experience with it, but we did quite optimistically use it when we were trying to write essentially a clone of Google Analytics. It was a crazy idea and it was kind of fun, and we don’t use it anymore, but certainly Simple DB was the right choice for something like that, from our point of view.
Kevin: Alright, so take us through Simple DB, Jeff?
Jeff: So, Simple DB is designed to be a very clean, very straightforward way for you to store large amounts of data without having to worry about a lot of the details. You simply create a storage structure, which we call a domain, and then you start storing data; each row in that domain is a set of attribute-value pairs. And the neat thing about Simple DB is you don’t need to have the same set of attribute names in each of the rows of your database, so you have a lot of flexibility to change your data structures as your application changes. With a traditional relational database you need to spend some time upfront designing your schema, and if you populate your tables with a couple hundred thousand or a couple million rows and then have a great idea and say, okay, let’s now update the database to add a new column, that can actually take your database offline for hours and hours. The Simple DB model avoids that by letting you have this flexible storage structure that’s dynamic and more adaptable to your needs. Now, it is what’s called a NoSQL, or non-relational, database, so you’re always focusing on a single domain; it has a great SELECT model but it doesn’t have a join model, so you’re always pulling data from a single domain. In the book I give an example of processing RSS feeds, and I take advantage of the flexible schemas because different RSS feeds have different amounts of detail in the header information, and I don’t need to store anything at all for the feeds which don’t have that extra information included.
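The schemaless model Jeff describes can be illustrated with a toy domain. This Python sketch mimics the attribute-value behavior only; it is not the Simple DB API, and the feed attributes are made up:

```python
class Domain:
    """Toy model of a Simple DB domain: each item is a free-form set of
    attribute-value pairs, so rows need not share a schema."""
    def __init__(self):
        self.items = {}

    def put(self, name, **attrs):
        self.items.setdefault(name, {}).update(attrs)

    def select(self, attr, value):
        # Any attribute is queryable; items missing it simply don't match.
        return [n for n, a in self.items.items() if a.get(attr) == value]
```

As in Jeff’s RSS example, an item that lacks an attribute (say, a feed with no language header) stores nothing for it and just drops out of queries on that attribute.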
Kevin: I know there’s some excitement about the NOSQL—I guess you could almost call it a movement—among developers here at SitePoint. Lucas, what’s your take on that?
Lucas: Really it’s whatever’s the right tool for the job basically, yeah.
Kevin: So there are particular types of applications where giving up database joins buys you, what, performance, flexibility; what do you get in return?
Lucas: Yeah, performance, flexibility, and also ability to deal with lots of data basically. Yeah, basically there comes a point where a relational DB is just going to start falling over when you’re talking about gigabytes and terabytes of data.
Kevin: Cool. Is there anything else you wanted to say about Simple DB, Jeff?
Jeff: I really like it because it takes away some of the planning for simple applications; you literally just create the domain and start putting data into it. You certainly have to give a little bit of thought to organization, but not a ton of thought, and you don’t have to worry about being locked into a structure and thinking, well, I need to go back later and alter the structure; you have this runtime flexibility to make changes as needed. So it un-constrains your thinking and lets you be a little more flexible about ideas you might have for ways to innovate and extend your applications.
Kevin: It’s interesting to me how each of these services has a different sort of, I guess in the traditional world there’s a very clear line between what is hosting and server stuff that your sys admin does, and what is developer stuff that your developers are working on. And a lot of these services really blend that and create gray areas between that black and white world. And this one feels like one that’s kind of right down the middle, that on the one hand it may really simplify the job of your sys admin because they don’t have to maintain, say, a MySQL database server that can deal with ginormous amounts of data, but on the programming end it really impacts what you’re doing as well; it would change the way you write your applications if you were planning to use a database like Simple DB instead of something like MySQL.
Jeff: That’s right. And to me this is reflection of what I see happening in startups and smaller organizations all the time is that in a larger organization you have developers, you have operations staff, you have database experts, you have all these different specialties and each person can contribute their unique knowledge to building an application. In the smaller organizations, especially in the startups, you simply don’t have all those different people present; quite often one person has to be able to be a jack of all trades and has to be able to build, architect, deploy, operate, monitor, and maintain. And so we definitely have blurred the development versus the operations, which I think there’s now this phrase called “dev ops” that really encapsulates the idea that the developer is often an operator.
Kevin: Yeah, we actually got a question a little while ago from Matt S. asking if you need to be a network or server tech to work with this sort of cloud hosting. And, yeah, I think you’re right that this is the kind of hosting that developers can get interested in. It turns your hosting into an interesting development tool, rather than something you need to hire someone to take care of so your developers don’t have to worry about it.
Jeff: I think that’s very accurate. And once you start to think about the fact that your infrastructure itself is programmable, and you think I can make a function call to create a virtual disk, or I can make a function call to create an entirely new server it really shapes your thinking in a way that says instead of fitting my app to my infrastructure I can have my app actually affect and change and improve my infrastructure as needed.
Kevin: So Chapter Nine, RDS.
Jeff: Okay. So, RDS is one of the newest services, and it’s short for Relational Database Service. The idea here is we’ve taken the open source MySQL database and we’ve put some wrappers kind of underneath and on top of it, if you will, to take care of a lot of the more complex issues of actually running a relational database. So you don’t have to worry about buying hardware, installing an OS, installing MySQL, dealing with MySQL patches, dealing with backups, dealing with scaling, dealing with running out of disk space; all those things are simply built into the service, and you can create an entire fully operational MySQL database with a couple of clicks in the console. Once you have that up and running you can easily click again and do snapshot backups, you can scale the amount of processing power you have either up or down, you can add additional storage, and you can create new database instances from your snapshot backups. So all the not-fun, nitty gritty, pain in the neck kind of stuff you need to do to run a relational database has been really simplified; it’s just a couple of clicks in the console.
Kevin: So this isn’t an either/or choice between Simple DB and RDS, in a given application you may want to use Simple DB for one type of data and RDS for another, and you could conceivably be using both services side by side.
Jeff: That’s right. And to me one of the powerful attributes of AWS is simply the fact that you do have all this choice. And we’ve talked about a whole host of different services today, but you might choose just to simply bring your existing application and only use EC2 and look at all the rest and say I don’t need any of these right now, but then maybe get your application up and running, and then as you kind of think about how you’re going to evolve it and extend it and move it forward you can say, okay, I’m going to just kind of one step at a time, one service at a time start to adopt different aspects of it.
Kevin: So Lucas was talking earlier about how it is relatively straightforward to scale MySQL running as a collection of instances on the EC2 service, the way we do it. It’s relatively easy to scale if you just need more capacity for SELECT queries, read-only queries, but it starts to get more complicated once you need to scale on the write side, at the UPDATE and DELETE sort of query level. So when you get to that point, would that be a natural point to say, okay, we’re going to stop managing our own MySQL instances and look at something like RDS?
Jeff: It’s probably not a good idea to wait until you’re in a panic situation (laughs). I’m more of a planner, and whenever I get surprised by stuff I mentally slap myself and say this is something I should have anticipated. But with RDS you actually have a choice of five different instance sizes that range from 1.7 gigs of RAM up to 68 gigs of RAM. So if you start with something small, or a little larger than the small, you’ve got a lot of headroom to scale up with additional processing power as your needs change. When you can see even the largest one we have to offer reaching its limits, you need to start thinking about adding caching, and maybe about sharding the database in various ways, and those are deep architectural decisions; they’re not things you want to have to patch in in an evening when things suddenly get busy.
Kevin: What’s exciting to me is this book does get into this level of stuff, these choices you’re going to be making not on day one of your cloud hosting adventure but maybe after year one or maybe even after year two, these are the things you’ll be looking at, and yet this book covers it, it gives you a complete picture of the adventure you may be going on if you’re thinking of diving in to cloud hosting now.
Jeff: That was the idea. I wanted to make it easy for people to get started but then be something that hopefully will be on their shelf for a reference for a couple years to come.
Kevin: Cool. Just for the benefit of our listeners, yes, we are running a bit long, it’s five past ten now. We were planning to go for an hour, but we’re having a good time here and no one has to really run off, so please do stick with us; I’d say we’ve got another 15, 20 minutes, we’ve got about two more chapters to get through, and I want to definitely make sure we have time to answer all of your questions. Speaking of which, we do have a couple of questions on this database layer stuff. We’ve answered some of these a little obliquely, but I want to make sure we get direct answers. Peter Gotkindt asks, “Is Simple DB fast compared to MySQL?”
Jeff: Ah, so we always love to give that answer, ‘it depends’, and it really, really depends in this particular situation. One of the things that Simple DB does for you automatically is that when you store data you don’t have to make choices, like you do in a relational database, about which columns you’d like to index. Simple DB simply indexes all of the columns, and there’s a little bit of a cost at write time—I’m talking processing cost—to do that extra indexing, but when you do a query you don’t have to think, well, did I remember to index this particular field or not. So Simple DB is tuned to be very, very responsive in a lot of different cases. On the relational database side, and this is both a feature and an issue for any relational database, you have a lot of fine-grained control of the various parameters the database is running under: you can choose what to index and what kind of indexes, you can often choose table types, and you can choose a number of parameters for buffer sizes and for different caching, flushing, and indexing options. So, I’d say Simple DB is going to give you straightforward performance without a lot of tuning. RDS is going to give you good performance when you start up (I guess there’s really no out of the box when you’re in the Cloud), and then, if your workload is very special in some particular area, you can go into the RDS parameter groups and start fine tuning memory allocations and the like.
Kevin: Cool. Jason Mays asks, “So does this mean we have to rewrite our web applications to use several databases, or does Amazon support something like Xeround?” I’m not familiar with Xeround; Lucas, is that something you know about?
Lucas: I’m not familiar with it either, but I think what he’s getting at is, I presume that product is a bit like MySQL Proxy or something like that, where it analyzes your queries and determines which server they should be sent to.
Kevin: Alright, maybe you could talk about how when we first moved into Amazon what did that require us to do in our applications to access databases, what changes if any did we have to make?
Lucas: Yeah, so I mean at the time it really forced us to think a lot about different types of failure scenarios and modify our applications to deal with those. Specifically to do with the database layer, we had to modify our DB library so that DELETEs and other writes were sent to a master server and reads were sent to a slave, if that’s what we were wanting to do. I don’t think that specific change to our apps was huge, though, and there are options; if you’re running some open source stuff like vBulletin or WordPress, I think they have some plugins and various other modifications you can make that are pretty easy and will deal with this stuff. Yeah, I think that kind of covers it.
Kevin: Cool. Jeff did you want to expand on that answer at all?
Jeff: Sure. One other thing to consider is that when you’re launching EC2 instances, those are just plain old Windows or Linux servers when you get down to it, and if you don’t want to run MySQL, if there’s another database, either open source or commercial, you can go ahead and simply install and run that on your EC2 instance. We actually do have relationships with a number of the commercial database suppliers: we’ve got DB2, we’ve got Postgres, and we just announced a relationship with Oracle where they’ll actually let you launch Oracle database instances in the Cloud. So there are a number of different things in that style, where you can use RDS and get everything fully managed for you, or you can simply bring your existing code into the Cloud and run it there.
Kevin: Okay, so that sort of answers this last question from Giovanni Castillo, who says, “But I understand that I can access MySQL server in my application in other ways besides RDS; can I/should I?” So that’s once again one of those ‘it depends’ questions, but you were saying, Jeff, that if you want to simplify your migration into the Cloud then in most cases you can replicate whatever you’ve got working in your traditional infrastructure in the Cloud, and once that’s up and running you can start to think about whether you want to redesign things to take advantage of some of the unique benefits you get with cloud hosting.
Jeff: That’s certainly what I would do. I’m a big advocate of changing one thing at a time. And make that one change, make sure you’re happy with it, make sure you fully understand how it works and what its characteristics are, and then the next day or the next week make that second change. But when you have too many variables it can just get overwhelming as far as trying to master too many different new things at once.
Kevin: Okay, great. So, I think we’re ready to move on to Chapter Ten which is Advanced AWS. I love me a good advanced chapter, but speaking for myself, a lot of the stuff we’ve already covered is feeling pretty advanced if what you’re hosting your site on right now is a shared hosting account. So what is more advanced than what we’ve been seeing already?
Jeff: So, the alternate title for this would be ‘This is the stuff I had hoped to cover in some of the earlier chapters but didn’t have room for, and it was too cool to leave out of the book’, but that didn’t fit on the line (laughter). So, a number of people have asked me how they can get a more detailed picture of how they’re consuming AWS resources, so I have an example that shows how you can download our CSV records of all your service usage, store those in Simple DB, and then I also kind of show you how to use Simple DB queries to process that data in different ways. I show the readers how to use Elastic Block Storage; to actually create RAID volumes on top of EBS for larger volumes or for additional throughput. There’s this really interesting idea called Instance Metadata where you can pass data into your EC2 Instances when you start them, and you can also kind of interrogate your environment to find out where your server is running, what its IP address is, and I show you how to access that. And then the last thing I did, and I will confess to having a lot of fun with this one, is I wanted to talk once more about the whole programmable infrastructure model, and this time not creating infrastructure but actually querying the Cloud and saying “What exactly do I have running?” And so I show the readers how to build a little diagramming application, and what this does is it calls the Cloud and says give me a list of my EC2 instances, a second call says give me a list of all my EBS virtual disk volumes, and then a third call gives me a list of all the snapshots or backups I’ve taken from those volumes, and then with some very, very simple graphics operations and PHP’s GD library I simply draw out a diagram that says here are my servers, here are the disk volumes attached to each server, and here are all the snapshot backups we have of each of those volumes.
So it’s kind of a really simple but I think really, really cool kind of inventory application that just simply says we can ask the Cloud what my resources are and I can find out all I need to know about each one of those.
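The flow of that inventory application (three list calls, then grouping the results into a tree before drawing) can be sketched like this. It’s a toy Python version with made-up field names, not the real EC2 API or the book’s PHP:

```python
# Toy version of the inventory idea: given the three lists the cloud
# returns (instances, volumes, snapshots), group them into the tree the
# diagram would draw. The field names here are illustrative only.

def build_inventory(instances, volumes, snapshots):
    """Return {instance_id: {volume_id: [snapshot_ids]}}."""
    snaps_by_vol = {}
    for s in snapshots:
        snaps_by_vol.setdefault(s["volume_id"], []).append(s["id"])
    tree = {}
    for i in instances:
        tree[i["id"]] = {
            v["id"]: snaps_by_vol.get(v["id"], [])
            for v in volumes if v["instance_id"] == i["id"]
        }
    return tree
```

A real version would feed each level of this tree to a drawing library (GD in the book) rather than just returning a dict.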
Kevin: Yeah, neat. I guess it’s fun just to play with this stuff and go, oh, what can I make this do, you know; oh, look, there’s a report that gives me this, what can I do with it.
Jeff: I’ll confess we had a lot of fun putting this book together.
Kevin: Lucas did you have anything to say about any of that stuff? Are there any fun weekend or late night experiments you wanted to tell us about that you guys have done down there?
Lucas: Well, just touching on this stuff, Jeff was talking about instance metadata which we use or have used pretty extensively. I alluded to it before when talking about the fact that we tend to not use customized machine images too much and instead we tend to pass data into the instance so that when it boots it can configure itself to be an app server or a worker server or a proxy server or something like that. And that’s actually some really cool stuff to work on. We absolutely love EBS, or Elastic Block Storage, just because it gives us so much flexibility and allows us to basically clone a database server without interfering with its operation at all really. Yeah, look, there’s not really too much else I would say with this chapter, the stuff that really excites me is looking at — I kind of nerd-out when looking at all the CPU usage graphs and the load graphs and things like that, and I definitely agree with what Jeff was saying with making one change at a time because it’s pretty amazing to see the differences that subtle changes can make, especially when you’ve got a team of developers who are releasing code onto the servers on a daily basis, things can definitely get hairy at times.
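The “pass data in at launch” pattern Lucas describes can be sketched as follows. On EC2 an instance reads its user data from the metadata service at http://169.254.169.254/latest/user-data; here we just parse that text, and the key=value format and the role names are invented for illustration (in Python rather than the book’s PHP):

```python
# Sketch of configuring an instance from its launch-time user data.
# The key=value format and the role names below are assumptions made
# up for this example, not a real convention.

def parse_user_data(text):
    """Parse simple key=value user data into a dict."""
    config = {}
    for line in text.strip().splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            config[key.strip()] = value.strip()
    return config

def boot_role(config):
    """Decide what this instance should become when it boots."""
    return config.get("role", "app")  # default role is an assumption
```

A boot script would fetch the user data over HTTP, call something like `boot_role`, and then install and start the services that role needs.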
Kevin: Cool. So we’ve got one more chapter left, but I just wanted to warn our panelists, Louis informs me we’ve got quite a few sort of questions that he’s been holding onto so that we can tackle them at the end. So we won’t spend too much more time on this last chapter, and brace yourself guys for some wildcard questions coming soon. And for our listeners, yeah, question time is coming up and so if you’ve got a question that you’ve been holding onto now would be a good time to send it so we can put it in the queue for our panelists. So let’s talk about Chapter Eleven which is CloudList.
Jeff: Okay, so in this chapter I really wanted to have a complete application that would show the readers how to use several different services together to build an application. In the U.S. the Craigslist site is very, very popular, so I asked myself, could I make a very, very simple classified ad system that I could document and fully explain inside of a chapter? It turned out that because I had EC2 and I had S3 and Simple DB, I was able to make a simple yet fully functional classified ad system in about 500 lines of PHP. Like the rest of this it was just kind of fun to put together: designing what I wanted the data to look like, setting up a number of command line tools to create new states and new cities, and then command line tools to load data in; so before I even did any web work I had a set of command line tools that gave me the basic ability to edit and inspect the data. And then putting a web frontend on there was pretty simple, even though my web design skills are very, very poor, and floating divs and background colors are about as fancy as I know how to get.
Kevin: Cool. So let’s … So this was a PHP application that you put together. I want to address this question of what if you’re not a PHP developer, if you’re used to working in a language like Python or Ruby or even ASP.NET, what can you learn from this book.
Jeff: So I think one of the things that has always attracted me to PHP is it’s a fairly readable language. Things are spelled out, it’s not like some of the other languages like when I try to read Perl so much of the semantics are kind of buried beneath the surface and are invoked by kind of funny looking punctuation or kind of by implication that you’re just supposed to know what the rules are, I think PHP is a lot more obvious and a lot more straightforward, so I do think that developers that are used to any other scripting language shouldn’t have a ton of trouble reading and understanding what happens with PHP, and I suspect even if they knew no PHP at the start that they might be able to start writing some applications within just a couple of chapters.
Kevin: Yeah, cool, I agree. Alright, so let’s dive into some questions here. I’m going to ask Jeff and Lucas: please don’t be too polite; if you’ve got something to say, dive in. So, the first question is from way back at the beginning, from Jason Mays who asks, “Can you cap your monthly budget somehow? I don’t want to get a bill for a million pounds if I get Slashdotted.”
Jeff: Ah, so currently we don’t have that feature. It has been requested a couple times, and I do have a couple different thoughts in that area. First, you can login to your AWS account at any time and you can definitely see how much you’re spending on an hour-by-hour basis; you can always see what your bill is to date. The second is if you have some genuine content on your site that people are coming to look at then if you’re really running that site as a business then presumably you do have a business model that’s based on page views and people’s actions once they get to that page. So I think that the Cloud really says to you, it really says make sure that that business model exists, make sure that you understand what it actually costs you to deliver a page, make sure you understand effectively the amount of revenue you would get from delivering that page when you average it out over a large number of visitors, and then make sure that your revenue is in excess of your costs. Now this maybe seems a little bit too simplistic, but when you really think about web businesses we’re really thinking about scaling not just the technology we’re thinking about scaling the business as well. So you need to make sure that each individual page view is effectively self-sustaining, that you’re going to get on average enough business out of that page view to pay for the resources it consumes.
Lucas: Yeah, I would say turning people away at the door is a big mistake.
Kevin: Something we learned at SitePoint is that scalability and performance buy you traffic. We really didn’t know at SitePoint just how many people we were turning away with our old server infrastructure; all we knew was that sometimes our server got a little slow, but we had no real easy way of addressing that at the time. What we’ve learned after moving to the Cloud, where we can scale, is that at the times when our server used to get a little slow we now get a lot more traffic, because our site remains fast. Before, at the very times when we could have been making the most money, our server was in some cases holding us back, and not only holding us back but hiding from us the fact that we were turning those people away.
Jeff: That’s really, really interesting. I guess what we’ve all learned is that web visitors are very impatient these days and they simply want to show up and they want to get their pages within a very small number of milliseconds, and if they don’t then they’ll open up another tab and maybe they’ll come back to the original site, but the responsiveness is effectively a feature and you need to have a rapidly responding site to have a high-quality business.
Kevin: Alright, I’ve got a very specific question here. This is from David Castro who asks, “I consult for a widget developer who has a server component for their toolkit, the product is somewhat pricey so I suggested that they have a paid AMI, Amazon Machine Image, to pay for their product hour by hour; however, if they need to load a product key on the server hosted in the instance how would they keep a customer from copying that file to their own server and pirating the license?”
Jeff: Okay, deep question, so I should probably explain the idea of a paid AMI.
So, we talked a little bit today about this idea of the AMI, the Amazon Machine Image, that anybody can build, that they can package their own software inside of, and then store for reuse so they can create additional servers. Now after you’ve created one of these AMIs you can choose to either open it up widely so anybody that can find it can launch copies, or you can individually add users to it and effectively say this is the set of users that have launch permission on this AMI. Now, we have an extension to this model, we call it the paid AMI model, and with the paid AMI model you as a developer can create your machine image, you can put some of your own proprietary applications or value-added software in there, and then you can set your own price per hour for the machine image. Instead of your users paying the base two cents or eight cents per hour, or whatever the charge for your instance size is, you can say it costs them four cents, sixteen cents, whatever price you’d like to charge, and we then take care of all the accounting and billing so that when your customers find that AMI and launch it they’re charged the price that you establish, and we pay you the difference between our regular cost and the price you set. So that’s what’s called a paid AMI.
Kevin: So that’s cool. And I guess the question is then about copy protection; once someone has access to your paid AMI what keeps them from copying all that software off and running it on a free machine?
Jeff: Sure, so when you set up this AMI you are setting up a complete machine, and you can decide whether the users that run it even have the ability to log in. You might say it’s just a software as a service application, and the only right that they have is the ability to launch it; you don’t have to give them a Unix user account and a password to allow them to log in, so they might not even be able to get to the file system. You’re simply saying: run copies of this binary software, and you can’t get into the machine and see it in any way.
Kevin: Cool. Alright, we’ve got — this is a more general question from Egimonu, he says, “Can small firms who run normal day to day business services benefit from cloud hosting? Basically is it cost effective for them to use the services?” So I’m sure this is a question you get asked on a daily basis, is there a business that’s too small for cloud hosting?
Jeff: I don’t actually think that there is, especially with our new micro instances, we’re seeing that people are doing things like running micro sites on the Micro instances where the idea of a micro site is you pretty much have a website that consists of a single page and call to action at the bottom. So that to me is like the ultimate small business where you just put everything, you describe your product, you describe the benefits, you say this is how to actually purchase it, and it’s all on a single page. And the micro site model I think is really neat because you probably won’t make a living from a single micro site, but you have this ability to spawn off many, many different micro sites at very, very low cost, so that’s kind of like the ultimate small business where maybe you’re talking about fractions of a penny per hour to run these micro sites and these micro businesses. And I think that a small business is always going to be concerned about economy, so I think prices for the instances themselves are very, very competitive, but also the fact that you can turn them off when you don’t need them. So maybe you’re using some very, very powerful servers to do some analysis or some number crunching of some sort, but you can launch the servers in the morning when you come into the office, do your number crunching all day, and at the end of the day you can literally just suspend them and then pick up from where you left off the next morning.
Kevin: Cool. This is a great question from Matt S. who sent through a few questions today. He says, “How do some of these APIs work from my application’s point of view? Are there interfaces for several frameworks and languages like PHP, .NET, etc., or will I be making raw REST submissions to access these things?”
Jeff: Okay, Matt, so we do have toolkits available for a number of languages, I think we have toolkits available for all the languages that you have mentioned, and so in general you are going to be making calls to a framework or to a toolkit. In addition to the ones that we have built and supply ourselves, there are a number of third party developers that have developed toolkits for languages that haven’t gotten to the top of our priority list just yet. If you really, really want to you can make the raw REST or the raw SOAP calls, and we show you how to do that; it’s a little bit complicated because you have to do this thing called signing, where you take the secret key that we give you when you create your account and sign your request with it so that we can ensure that it’s actually coming from you and not from someone pretending to be you.
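For the curious, the signing Jeff mentions looks roughly like this. The sketch below is modeled loosely on the older AWS query-signing scheme (sort the parameters, build a canonical string, HMAC it with your secret key); it is simplified, in Python rather than the book’s PHP, and real SDKs do all of this for you:

```python
# Simplified illustration of AWS-style request signing: a deterministic
# canonical string is built from the request and HMAC-SHA256'd with the
# account's secret key. Details differ from the real spec.
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(params, secret_key, host="ec2.amazonaws.com", path="/"):
    """Return a base64 signature for a GET request's query parameters."""
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())  # sorted: order-independent
    )
    string_to_sign = "\n".join(["GET", host, path, canonical_query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()
```

Because the parameters are sorted before signing, two requests with the same parameters in a different order produce the same signature, which is what lets the server recompute and verify it.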
Kevin: David Bier asks, “The Micro instance is really useful, I’m thinking of putting my email server on a Micro instance, is this an ideal thing to run on a Micro instance as the load is low?”
Jeff: I think that should work out just fine. It’s going to, of course, I hate to keep saying it depends, but it really depends on your particular situation. The Micro servers are great for services that are idle most of the time and then process occasional bursts of activities. So the Micros can actually, they have an allocation of kind of steady state CPU, and then they can burst up to a higher speed for short amounts of time. So I think an email server probably—
Kevin: Cool, so it depends on how much email you get, David.
Jeff: Yeah, and maybe how much spam processing he does and those kinds of things.
Kevin: There are a couple of interrelated questions here. Ron P. asks, “Is there a concept of having ‘my own server’ to prevent the situation where, if I’m on a server that is shared with very busy sites, they will slow down my site?”
Jeff: Alright, two different answers to that question. So, we built AWS so that when you are sharing resources we’re always going to give you the amount that we’ve committed to giving you as far as CPU power and memory. This is very, very different from the VPS model, where it’s basically: take a machine and subdivide it until your users complain a bunch, and then back off a little tiny bit. In the AWS model we give you a committed amount of CPU power, memory, and so forth. Now if for other reasons you just say I really want my own physical machine, you can effectively do that by simply allocating the largest of the instance types.
Kevin: Cool. So the largest roughly corresponds to the biggest piece of infrastructure.
Jeff: Yeah, each of those is effectively a server all by itself. So the biggest one that we have, and this one has a very long name, is called a High Memory Quadruple Extra Large. It has eight virtual cores, it has 68 gigs of RAM, it has 1.7 terabytes of local disk, and all that for $2.40 per hour.
Kevin: Cool. Lucas, I want to hear your answer on this. “If I set up an EC2 instance with an installation of Apache and PHP to serve my web site, do I then have to worry about installing software updates, keeping up to date with the latest Apache version, OS patches, etc.?” What’s our strategy for that?
Lucas: Yeah, it’s just like running a server in a dedicated server environment; there are always security concerns, so I definitely recommend staying on top of those things. I’m pretty sure Jeff talks a little bit in the book about the distributed firewall and security groups that you can take advantage of. But, yeah, look, it’s just like running any server in a regular hosting environment, so that definitely should be at the top of your list of concerns.
Kevin: I know we use RightScale to manage and monitor our servers. One of the things they also provide is these ready-made, out-of-the-box images: a ready-to-go Apache server, a ready-to-go MySQL server, and so to some extent you can trust a third party service like that to maintain these relatively up-to-date and known-to-work images, and any time they release a new one you can just light up a new server using the new image and shut down the old server on the old image. So I guess as long as you’re following your third party provider’s blog you can let them worry about that to some extent.
Lucas: Yep, I would definitely agree with that.
Kevin: What would you suggest, Jeff, would you be actually applying OS updates to your virtual server instances or is there a better strategy?
Jeff: So there’s a couple different schools of thought and one of them holds that you actually never update running servers and instead you always simply create new golden AMIs with the newest patches, and you simply kill your old instances and launch fresh ones from the new AMIs. And that way you can basically use a script or you use a recipe tool such as Puppet or Chef to build your masters, and you always know that they’re reproducible and you can have full control of what’s on there.
Kevin: Alright. David Bier asks, “SNS is a new service that I can really see being used heavily, what are the projected features and will it be possible to pass simple information to a message?” I think he might be talking about SQS.
Jeff: No, we actually do have a brand new service called SNS that’s even newer than I was able to include in the book. The difficult thing about writing a book is that you’re taking a snapshot of a fixed point in time, and being that I work inside AWS I’m always aware that there’s something brand new and it’s always so regrettable when you know that as soon as you write ‘the end’ there’s going to be some brand new thing that pops up and you say, man, I sure wish that we could include that, but you kind of have to draw the line and say, okay, this is as far as we could go.
So, SNS is short for the Simple Notification Service and it’s basically a topic-based publish and subscribe model. So you can programmatically create topics, you can then have applications or email addresses subscribed to those topics, and you can then publish to those topics, and when you publish to a topic all of the recipients will be notified by email or via HTTP that a new message has been published for them.
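The shape of that topic model can be illustrated with a tiny in-memory sketch (Python, purely illustrative; real SNS delivers over HTTP or email rather than to local callbacks):

```python
# Minimal in-memory model of topic-based publish/subscribe: create a
# topic, subscribe callbacks to it, and every publish is delivered
# immediately to all subscribers. This mirrors the shape of the model,
# not any real SNS API.

class TopicService:
    def __init__(self):
        self.topics = {}

    def create_topic(self, name):
        self.topics.setdefault(name, [])

    def subscribe(self, name, callback):
        self.topics[name].append(callback)

    def publish(self, name, message):
        # Immediate fan-out to every subscriber of the topic.
        for deliver in self.topics[name]:
            deliver(message)
```

The key contrast with a queue is visible in `publish`: delivery happens at publish time, and nothing is stored for later retrieval.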
Kevin: So it’s kind of like PUSH SQS, right?
Jeff: So, SQS is a queue based model where there’s actually storage in there, whereas SNS is immediate delivery; as soon as you do the publication the deliveries will happen within a relatively small number of probably milliseconds.
Kevin: Sorry, I cut you off there, Lucas.
Lucas: Yeah, SNS is something that we considered, we’ve been considering at 99designs just as a means of I guess more reliably configuring our clusters when instances come online. So, for example, when an instance, like when a web app instance comes online it could send notifications basically to all the other web app nodes that it’s alive and that it exists, and that would allow us to reconfigure things like Memcache or Beanstalk where we’ve got app servers that are talking to each other.
Jeff: Sounds great.
Kevin: We’ve got a great question from Peter Gotkindt who asks, “How do you best time the shutdown of a server so that you can use as much of an hour of usage as possible but prevent paying for the 30 seconds of a new hour?” Does the billing really work in hour chunks?
Jeff: The billing definitely works in hour chunks, and I’ve never actually tried to do that myself even though I do have some personal AWS resources and I do pay for them just like any other user every month. I’ve never really tried to time it that finely to make sure I only had exactly one hour’s worth of usage. I suspect what you could do is you could call the EC2 API from within the instance, you’d get the launch time, you could then round it up to an hour and then figure out when is the optimal time just a few seconds before the top of the hour to shut down.
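Jeff’s suggestion, round the launch time up and shut down a little before the top of the billed hour, reduces to a small calculation. This Python sketch assumes epoch-second timestamps and a hypothetical 30-second safety margin:

```python
# Given an instance's launch time and the current time (both in epoch
# seconds), compute how long to wait so a shutdown lands a safety
# margin before the next whole billed hour ticks over. The margin is
# an arbitrary choice for illustration.

def seconds_until_shutdown(launch_epoch, now_epoch, margin=30):
    elapsed = now_epoch - launch_epoch
    into_hour = elapsed % 3600          # seconds into the current billed hour
    remaining = 3600 - into_hour - margin
    return max(0, remaining)            # 0 means shut down right away
```

A script inside the instance would look up its own launch time via the EC2 API, sleep for the returned number of seconds, and then initiate shutdown.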
Kevin: Cool. Let me just take a quick look through the questions because we’ve answered most of them and I see a couple of good wrap-up questions, but I want to make sure; just a couple of rapid fire ones here for you, Jeff. Any plans on providing Sparc Architecture for Solaris?
Jeff: Not as far as I know. That question comes up I’d say annually at best. I talk to a lot of users and I’d say maybe one user per year asks for Sparc, so that will be the request for 2010 and I’ll anxiously wait for the 2011 request (laughter).
Kevin: So the year that you’ve got nothing else to do that’s when you’ll do it?
Jeff: Seriously if someone really needs that feel free to send me some email with some more info about what you need and I’ll be happy to pass that along to the dev team. Let me actually put my email address here on the slide just in case anybody wants to track me down.
Kevin: Okay. Matt S. asks, “There have been some concerns in the sphere about data ownership in the Cloud, any thoughts or comments from SitePoint as well as Jeff on do you really own your data when it’s hosted on Amazon’s virtual servers?”
Jeff: Yeah, this comes up all the time and it’s one of those red herring kinds of questions. The fact is it’s your data and you totally own your data, we assert no ownership whatsoever of your data. You store it in the Cloud when you want, where you want, you decide when you want to delete it, you have full control not only of what you store but where you store it; you can choose to store it in the U.S. or in Asia or in Europe, and it’s your data, it’s fully under your control. And I just have to be really, really emphatic about that just because it’s one of those cloud myths that seems to just kind of go around from time to time that the Cloud somehow is like this great big monster that eats your data and refuses to give it back to you or something. Definitely not the case.
Kevin: Lucas I know sometimes we have strategies that worst case scenario Amazon disappears overnight we still have strategies that our data remains accessible to us, you want to talk about that?
Lucas: Yeah, for sure. Right off the bat what I would say is if you’re concerned about your data being out of your hands in some way that you feel like you don’t have much control of, you can always encrypt it before you store it there. So, some of the things that we do, for example, with our MySQL service we tend to have MySQL replicating out of the EC2 environment to a location offsite which is usually our office, so in the event that Amazon, God forbid, completely disappeared we’d obviously have copies of our code base locally, we would have an up-to-date copy of our MySQL DB, and we would be able to launch somewhere else if something were to happen. We also, there’s various strategies for if you’ve got data stored in S3 you can perhaps also be copying that data out of the Amazon environment into somewhere else so that if you need to you can launch somewhere else. But overall I’d say ownership is not really much of a concern from our point of view.
Kevin: Alright. Adam Burns says, “I’m in business operations, I’m not technical, and we’re just moving onto AWS. Today was a little over my head. Is the book a good starting point or is there a better alternative for non-techies?”
Jeff: Hmm, let’s see. I would hope that the first couple chapters of the book should get you started, and if it doesn’t I would love to hear from you and maybe there’s a chapter zero or chapter negative one that I need to write at some point to get going. But just like every other part of Amazon I love to hear from my users and from effectively my readers and my customers, so any feedback of things that I went over too quickly would be great to hear just to make sure that we’re making sure we get people from the beginning up to the point where they can be really productive.
Kevin: Yeah, I’d say you could buy the book, read the first couple of chapters and then hand the book over to your developers to take it from there. We’d love to hear some feedback on that. Let’s see, just want to make sure we tidy things up here. Let’s go to Ron P. who says, “Kind of a wrap-up question: a plain old website in the Cloud is a type of cloud computing, but it seems like the terminology should not be ‘cloud computing’ so much as ‘dynamic infrastructure hosting’ or some such thing; the only thing I’ve seen besides dynamic infrastructure is SQS and Simple DB. Sound about right?”
Jeff: So, I don’t think we really invented the term ‘cloud computing’ it seems to be the term that the industry has adopted for it. You can certainly contest the definition or the choice of the name, but the industry has kind of chosen to say this is what we like to call it.
Kevin: But fundamentally the difference between putting your site on a physical computer that you are renting versus putting it on a virtual server that you can make bigger or smaller or replicate or put to sleep as your business needs demand, I suppose it all just does come down to different flavors of technology, but there is a fundamental difference in approach in flexibility there.
Jeff: I would agree, and so we can kind of say that the Cloud gives you the option to get that flexibility at a later point. So, one thing that’s happened in this industry quite a bit is this phenomenon called ‘cloud washing’, where people have kind of taken these cloud stickers and stuck ‘cloud’ on a lot of things that were doing perfectly well before the cloud terminology came along and maybe didn’t really need to be relabeled as cloud. But it does kind of tell you the power of marketing, and how strongly people think this is the wave of the future, if everything needs to be cloud this and cloud that in order to get some attention.
Kevin: Alright, so I think we’ll wrap it up at this point; maybe Lucas and Jeff some parting thoughts.
Jeff: Let’s see, I really appreciate everybody sticking around for nearly two hours, this has been great. I really enjoyed writing the book, I had a ton of fun as I put it together, and I hope it’s as fun to read as it was to write, but I had just a ton of fun, kind of intense fun but fun nevertheless as I, myself, had to learn about the different parts of AWS and then think what is the most interesting and the most productive way to present this to the audience.
Lucas: Look, I think all I’d really want to say is just thanks everyone for coming along, and I’m always really happy to talk shop with this stuff, so you guys can just send me an email at lucas at sitepoint.com if you want to ask anymore questions; I’m always happy to have a chat about this stuff.
Jeff: Yep, same here, I’m happy to entertain further emails and follow-ups and so forth.
Kevin: On that front there is currently a post on the homepage of sitepoint.com about this session. Since this is the first online seminar that we’ve done, I think all of us at SitePoint would love to hear your thoughts on how this went. If you had any lingering questions you can post those there as well; just head to sitepoint.com and click the story on the front page about this webinar, leave us a comment, and let us know if you’d like us to do this again, and maybe even some topics that you’d like to hear from us about. (laughs)
There are a couple of questions that we didn’t get to, but I’m going to spend the next few minutes posting answers to those in the session. Aside from that, I’d like to add my thanks to everyone who is listening. This is Kevin Yank speaking to you from SitePoint Headquarters; thank you for joining us and we’ll see you next time. Bye, bye.
And thanks for listening to the SitePoint Podcast. If you have any thoughts or questions about today’s interview, please do get in touch. You can follow SitePoint on Twitter @sitepointdotcom, and you can find me on Twitter @sentience. Visit sitepoint.com/podcast to leave a comment on this show, and to subscribe to get every show automatically. We’ll be back next week with another news and commentary show with our usual panel of experts.
This episode of the SitePoint Podcast was produced by Karn Broad and I’m Kevin Yank. Bye for now!
Theme music by Mike Mella.
Thanks for listening! Feel free to let us know how we’re doing, or to continue the discussion, using the comments field below.