First Look at Platform.sh – a Development and Deployment SaaS

Not so long ago, many of us were satisfied handling deployment of our projects by uploading files via FTP to a web server. I was doing it myself until relatively recently and still do on occasion (don’t tell anyone!). At some point in the past few years, demand for the services and features offered by web applications rose, team sizes grew and rapid iteration became the norm. The old methods for deploying became unstable, unreliable and (generally) untrusted.

So was born a new wave of tools, services and workflows designed to simplify the process of deploying complex web applications, along with a plethora of accompanying commercial services. Generally, they offer an integrated toolset for version control, hosting, performance and security at a competitive price. Platform.sh is a newer player on the market, built by the team at Commerce Guys, who are better known for their Drupal eCommerce solutions. Initially, the service only supported Drupal-based hosting and deployment, but it has rapidly added support for Symfony, WordPress, Zend and ‘pure’ PHP, with Node.js, Python and Ruby coming soon.

It follows the microservice architecture concept and offers an increasing number of server, performance and profiling options that you can add to and remove from your application stack with ease.

I tend to find these services make far more sense with a simple example. I will use a Drupal platform, as it’s what I’m most familiar with. Platform.sh has a couple of requirements that vary for each platform. In Drupal’s case, they are:

  • An id_rsa public/private key pair
  • Git
  • Composer
  • The Platform.sh CLI
  • Drush

I won’t cover installing these here; more details can be found in the Platform.sh documentation.
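As a quick sanity check before going further, a small shell loop along these lines can confirm the tools are on your PATH. The tool names come from the list above; the `platform` binary name is an assumption based on a standard CLI install:

```shell
# Check that the tools the article's workflow relies on are available.
# Which of these are already installed will vary per machine.
for tool in git composer drush platform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing - see the documentation for install steps"
  fi
done
```

The loop never aborts; it simply reports each missing tool so you can install them in one pass.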

I had a couple of test platforms created for me by the team, and for the sake of this example, we can treat these as my workplace adding me to some new projects I need to work on. I can see these listed by issuing the platform project:list command inside my preferred working directory.

Platform list

Get a local copy of a platform by using the platform get ID command (the IDs are listed in the table we saw above).

This will download the relevant code base and perform some build tasks; any extra information you need to know is presented in the terminal window. Once this is complete, you will have the following folder structure:

Folder structure from build

The repository folder is your code base and here is where you make and commit changes. In Drupal’s case, this is where you will add modules, themes and libraries.

The build folder contains the builds of your project, that is, the combination of Drupal core plus any changes you make in the repository folder.

The shared folder contains your local settings and files/folders relevant only to your development copy.

Last is the www symlink, which will always reference the current build. This would be the DOCROOT of your vhost or equivalent file.
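For a local Apache setup, a minimal vhost pointing at that symlink might look like the sketch below. The paths and server name are placeholders for your own checkout, not values supplied by Platform.sh:

```apache
# Hypothetical local vhost; adjust the paths to your checkout location.
<VirtualHost *:80>
    ServerName d7.local
    # www always points at the latest build, so the vhost never needs updating
    DocumentRoot /var/www/my-project/www
    <Directory /var/www/my-project/www>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

Because every platform build flips the www symlink, the web server configuration stays untouched between builds.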

Getting your site up and running

Drupal still depends on having a database present to get started, so if we need one, we can grab the database from the platform we want by issuing:

platform drush sql-dump > d7.sql

Then we can import the database into our local machine and update the credentials in shared/settings.local.php accordingly.
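In Drupal 7, the credentials live in the standard $databases array. A sketch of what shared/settings.local.php might contain is below; the database name, user and host are placeholders for whatever your local MySQL setup uses:

```php
<?php

// Hypothetical local credentials; match these to the database
// you imported the d7.sql dump into.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'd7',
  'username' => 'root',
  'password' => '',
  'host'     => '127.0.0.1',
  'prefix'   => '',
);
```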

Voila! We’re up and working!

Let’s start developing

Let’s do something simple: add the Views and Features modules (along with CTools, which Views requires, and Devel for later use). Platform.sh uses Drush make files, so it’s a different process from what you might be used to. Open the project.make file and add the relevant entries. In our case, they are:

projects[ctools][version] = "1.6"
projects[ctools][subdir] = "contrib"

projects[views][version] = "3.7"
projects[views][subdir] = "contrib"

projects[features][version] = "2.3"
projects[features][subdir] = "contrib"

projects[devel][version] = "1.5"
projects[devel][subdir] = "contrib"

Here, we set which projects we want to include, their specific versions and which subfolder of the modules folder they should be placed into.

Rebuild the platform with platform build. You should notice the devel, ctools, features and views modules being downloaded, and we can confirm this with a quick visit to the modules page:

Modules list page in Drupal

You will notice that each time we issue the build command, a new version of our site is created in the builds folder. This is perfect for quickly reverting to an earlier version of our project in case something goes wrong.

Now, let’s take a typical Drupal development path: create a view and add it to a feature for sharing amongst our team. Enable all the modules we have just added and generate some dummy content with the Devel Generate feature, either through Drush or the module page.

Now, create a page view that shows all content on the site:

Add it to a feature:

Uncompress the archive you just created and add it to the modules folder inside repository. Commit and push this folder to version control. Now any other team member running the platform build command will receive all the updates they need to get straight to work.
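For those working from the command line, the enable/generate/export steps above can be sketched with Drush. These commands assume Drush is bootstrapped against your local site, and the feature machine name all_content is made up for the example:

```shell
# Sketch only: assumes Drush can bootstrap your local Drupal 7 site.
# Enable the modules added via the make file (plus Devel Generate).
drush -y pm-enable ctools views views_ui features devel devel_generate

# Create 20 dummy nodes so the view has content to display.
drush generate-content 20

# Export the view into a feature; 'all_content' is a hypothetical name.
drush features-export -y all_content
```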

You can then follow your normal processes for getting module, feature and theme changes applied to local sites, such as update hooks or profile-based development.

What else can Platform.sh do?

This simplifies the development process amongst teams, but what else does Platform.sh offer to make it more compelling than other similar options?

If you are an agency or freelancer that works on multiple project types, the broader CMS/framework/language support, all hosted in the same place with unified version control and backups, is a compelling reason to switch.

With regard to version control, Platform.sh provides a visual record of your Git commits and branches, which I always find useful for reviewing code and the status of a project. Beyond this, you can create snapshots of your project, including code and database, at any point.

When you are ready to push your site live, it’s simple to allocate DNS and domains all from the project configuration pages.

Performance, Profiling and other Goodies

By default, your projects have access to integration with Redis, Solr and EntityCache/AuthCache. It’s just a case of installing the relevant Drupal modules and pointing them at the built-in server details.
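As an example, pointing Drupal 7’s Redis module at a Redis service takes only a few lines of configuration. The snippet below is a sketch: the redis.internal hostname is an assumption (substitute whatever host your project’s environment exposes), and the module path assumes a contrib subfolder as set up earlier:

```php
<?php

// Hypothetical snippet for the Drupal 7 redis contrib module.
$conf['redis_client_interface'] = 'PhpRedis';
$conf['redis_client_host']      = 'redis.internal'; // assumed service hostname
$conf['cache_backends'][]       = 'sites/all/modules/contrib/redis/redis.autoload.inc';
$conf['cache_default_class']    = 'Redis_Cache';
// Keep the form cache in the database; forms don't tolerate cache eviction.
$conf['cache_class_cache_form'] = 'DrupalDatabaseCache';
```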

For profiling, Platform.sh has just added support for SensioLabs Blackfire: all you need to do is create an account, install the browser companion and add your credentials, and you’re good to go.

Backups, and the ability to restore from them, are included by default.

Team members can be allocated permissions at both the project and environment level, allowing for easy transitioning of team members across projects and the roles they undertake in each one.

Platform.sh offers some compelling features over its closest competition (Pantheon and Acquia), and pricing is competitive. The main decision to be made with all of these SaaS offerings is whether the restricted server access and ‘way of doing things’ is a help or a hindrance to your team and its workflows. I would love to know your experiences or thoughts in the comments below.


  1. Thanks Chris for the very interesting article and thanks SitePoint for publishing it.

    I find the service very interesting personally. I’d just like to point out what I consider chinks in the armour from a customer’s business perspective.

    The pricing structure starts off with a licensing cost structure and that goes a bit against the principles of the “pay for what you actually use” construct of cloud computing, which PaaS is a part of. Yes, for the platform and the work that goes into it, there should be a price and $10 per person per month is relatively fair.

    However, the usage of the platform should have much clearer pricing. How much is the database usage? How much is the application node usage? How much for bandwidth? etc. If this is all based on an environment price, then the customer is often paying for something they aren’t getting and, over the years, this simply works against the customer (they might as well own their own server stack). This kind of pricing is the old “rake ’em over the coals” attitude of enterprise business, which companies like Oracle have been known to take clear advantage of. So, I have to ask, who are their real customers? Their pricing, at first glance, seems reasonable, but there are other costs involved. These are sort of explained in one line (the smallest) on their pricing page and there is no way this can be “all of it”.

    As for the services offered: it is interesting they went with MariaDB. They say “massively scalable”. Well, MariaDB isn’t massively scalable. If they had said they offer MySQL Cluster, I might have considered that somewhat massively scalable. But that is questionable too. So, as a user, I’d actually be worried about scalability and how they handle it.

    But then they might say, well, we do have Solr and Redis and these are massively scalable. Then I would counter, uh. Nope. Redis is still far from massively scalable. It is still a single thread single process system, which means multicore architecture isn’t even taken advantage of. There is Redis Cluster and if they were to say that is what they offer, then I might agree with massive scalability with Redis. But, there is no mention of it being used.

    Solr is another story. It was built for horizontal scalability, like most NoSQL databases. But then, it is only a datastore to support a powerful search engine and not really a datastore for data persistence. So, from a database perspective, I am covered on search for scalability, but not really for the core database. And, what interests me most is, in general for a platform, the database costs are some of the highest of any services offered, yet there is no mention of extra costs for a growing database. From my experience, a GB of database storage in a PaaS like service should be around $20-$40 per GB. Yet, there is no mention of database costs at all.

    From a pricing standpoint, I am just not certain what is going on. I can have 10 developers, working on multiple dev environments. Ok. From what I understand, that would be 10 x $10 = $100 per month. That is considerably cheaper than needing server admins in my team to keep up infrastructure. But now I want to go to production. Now I must pay for a production environment. Ok. The basic one is $50 per month. But the largest environment is 7GB. Not exactly killing it on power. What kind of compute power is behind that 7GB too? And 7GB of what kind of memory?

    Again, from a business perspective, my costs as a customer aren’t clearly provided. Just as an example of cost transparency done right, check out Heroku. At first you might say Heroku is a lot less transparent, because there are so many things to account for. But actually, they are a lot more transparent, just like any PaaS should be.

    My intention is not to hack on Platform.sh, but to try and demonstrate missing clarity in their pricing structures. As a supporter of PaaS, I really hope Platform.sh can come through and be more transparent. I think they need to think a bit more about their license and environment cost structures and depict their prices better accordingly, so there are no surprises on either side. :smile:


  2. Many thanks for this really thorough feedback - you really hit the spot on most issues one might complain about when considering them.

  3. Scott, thanks for the feedback. As the product manager, I’m really really happy to hear it. The pricing structure is our first shot at getting pricing right. It’s tough business, and we’ll iterate on it, incorporating feedback from yourself and others.

    Regarding the developer licenses: we’re considering getting rid of them in the near future. Also, we’re intending to move to “per-service, pay what you use” pricing as soon as we can engineer the supporting bits around that. I think these two changes will address your pricing concerns pretty squarely.

    When we scale MySQL (MariaDB), we do so using Galera Cluster. It works quite nicely, and allows us to take your database from a few CPUs and GB of RAM to nearly 100 CPUs and 180 GB of RAM without ever taking your application offline. This is available to our Enterprise customers.

    Solr is scaled using SolrCloud. Redis is not scalable, as you say, but we provide high-availability Redis so that if one instance fails, another takes over (the cache data is assumed to be ephemeral and will be rebuilt).

    Hope that answers some of your questions!


    Robert Douglass

  4. Thanks Robert. Your answers are good ones. I appreciate the response too.

    I don’t really have a problem personally with the licensing, as long as that is the cost I am paying for the work put into the platform and the overall service and those costs are also not taken into account in the infrastructure. For instance, you might need the licenses, simply in order to control access. Your licenses are reasonably priced too, thus not a KO criteria IMHO. And then again, if you have a user access system, but don’t really care about how many users are on the system or rather you know the number of added users won’t be excessive in any damaging way, then the licensing might not be necessary and the costs for the platform’s development and maintenance can be spread across the infrastructure services.

    Galera Cluster isn’t too bad. I consider MySQL Cluster the better solution currently though. What makes it unattractive is Oracle lurking behind it. LOL!

    What intrigues me is why don’t you support MongoDB? The PHP driver is pretty good and the new driver PHongo is going to be pretty cool too, especially if their vision of PHP developer driven extensions comes together well. I hope it does. But certainly, if your goal is to provide Node.js support later on, then offering MongoDB support with it is a no-brainer. :wink:


  5. Scott - we actually support MongoDB. It’s not advertised anywhere as it’s in the “invisible beta” stage, but it’s deployed. If you wish to use it, a support ticket will get you the needed information. Same with Elasticsearch and PostgreSQL. As soon as we’ve put them through their paces, we’ll release documentation on these services.
