About Docker and PHP

I read the article “Setting Up a Modern PHP Development Environment with Docker” (https://www.sitepoint.com/docker-php-development-environment/) with interest, but it provided no way to post comments. Hence, only the opinions of the author were reflected in this tutorial.

I wanted to give an opposing point of view as a web developer with 26 years of experience. I will write it here in the hope that it will prompt either adding this text to the article, or at least adding some balance to the article.

I have been operating a local Windows 10 development webserver for many years, and most of my web programming has been in PHP. I use a WAMP environment (from Bitnami) with only one major issue: migrating to new versions of the WAMP components takes several hours of work. For this reason, I upgrade versions about once every three years. I have not found different component versions between local and remote webservers to be much of an issue, as each new version of PHP, etc., mostly introduces new features that I rarely use while rarely deprecating or eliminating features that I use.

I have no intention of ever using my local webserver for production use. But if I wanted to, the changes required would take under an hour to do, and would mostly consist of making the site available on the Internet. I have actually done this as an experiment and my websites worked fine when visited from other computers.

While I understand the philosophy of Docker as providing a virtual container for a website, so that the website will run anywhere without change, I would like to make the following points in opposition to this approach:

  • Docker must have considerable space and time overhead to provide its service. Even with low prices for secondary storage, keeping such overhead low helps with responsiveness, file copying speed, the freedom to store lots of videos on the server, and other perhaps more subtle issues. Docker must provide a “prune” command to help the user clean up from its free use of drive space. I see that the actual space and time overhead for Docker is not admitted in this article or others that advocate the use of Docker. Unfortunately, I’m too busy to install Docker and measure these for myself.

  • I don’t need what Docker provides: through years of experience I can easily create websites and web programs (including new website development tools) that are intrinsically portable between various kinds of webservers. I develop on Windows using several different web technologies and run on Linux (Centos) remotely, under WHM/cPanel, in a VPS. The idea that there are great differences between these environments calling for virtualization is not true, in my opinion.

  • I need my development websites to run differently from production websites, irrespective of OS. Principally, I need better security for production websites, so there must be less details in error reporting. All such differences are governed by the definition of an environment variable on the dev website that does not exist on the prod website. Simple.

  • Docker is certainly more efficient than traditional hardware virtualization products using a hypervisor, but this is not the central issue in website and/or database development. I can’t envision needing OS virtualization, ever, for my own website development work.
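
The environment-variable point above can be sketched roughly like this, assuming Apache's mod_env; the variable name APP_DEV and the paths are invented for illustration, and production simply omits the SetEnv line:

```apacheconf
# Hypothetical dev vhost: define the variable only here, not in production.
<VirtualHost *:80>
    ServerName mysite.test
    DocumentRoot "C:/sites/mysite"
    SetEnv APP_DEV 1
</VirtualHost>
```

PHP code can then branch on something like `getenv('APP_DEV') !== false` to decide how much error detail to display.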

Respectfully submitted,

David Spector

Hi FieryFrog,

Can I ask if you’ve ever actually tried Docker on a real project? It sounds like you haven’t, and it’s hard for me to take comments seriously from someone talking about something they’ve never used themselves. I didn’t really ‘get it’ until I tried it myself on a real project either.

How difficult would your website be to move to a different server? As you suggest, at least an hour of your time (ignoring DNS and things that aren’t on the server itself). That’s even worse if you have other stuff on the server that’s easy to forget about, cronjobs, etc. With docker it’s just a matter of copying and pasting the files from one server to the other and running docker-compose up.
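
A minimal docker-compose.yml of the sort being described might look like this; the image tags, port, and paths are illustrative assumptions, not taken from the article:

```yaml
# Illustrative compose file: copying this directory to another server and
# running `docker-compose up -d` there recreates the same stack.
version: "3"
services:
  web:
    image: nginx:1.18
    ports:
      - "8080:80"      # change the host port here if 8080 is taken
    volumes:
      - ./site:/usr/share/nginx/html:ro
  php:
    image: php:7.4-fpm
    volumes:
      - ./site:/var/www/html
```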

Docker must have considerable space and time overhead to provide its service.

Take a look at this IBM research on the topic: https://dominoweb.draco.res.ibm.com/reports/rc25482.pdf

The overhead is negligible in most cases. Port mapping can cause a tiny amount of network overhead, but that can be worked around if 100 microseconds is a problem.

I need my development websites to run differently from production websites, irrespective of OS. Principally, I need better security for production websites, so there must be less details in error reporting. All such differences are governed by the definition of an environment variable on the dev website that does not exist on the prod website. Simple.

That’s right, but the differences are tiny. In my experience there are a few config file changes which are easy to manage in the server block on nginx. You just have a server block for the website.dev development version and a server block for website.com with the production config.
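
A hypothetical pair of server blocks of the kind described, for the same site (hostnames, paths, and log settings are invented for illustration):

```nginx
# Development hostname: verbose logging.
server {
    server_name website.dev;
    root /srv/website;
    error_log /var/log/nginx/website.dev.log debug;   # chatty on dev
}
# Production hostname: terse logging.
server {
    server_name website.com;
    root /srv/website;
    error_log /var/log/nginx/website.com.log error;   # quiet on prod
}
```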

Being able to install a PHP extension on the development environment and push it with a commit is a lot easier than having to manually install the extension twice. And that’s if the extension is available on Windows.
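
For example, an extension can be baked into the image and committed alongside the code; `docker-php-ext-install` ships with the official `php` images (the tag and the extension chosen here are just examples):

```dockerfile
# Install the pdo_mysql extension once, in the image; every environment
# that builds from this Dockerfile gets it automatically.
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql
```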

I have not found different component versions between local and remote webservers to be much of an issue, as each new version of PHP, etc., mostly introduces new features that I rarely use while rarely deprecating or eliminating features that I use.

That’s great, but for those of us who work across a wide range of websites or do more regular updates, this can be an issue. Ten years or so ago, before I started using Docker, upgrading PHP on one of our servers was a horrible task because we were running around 70 small sites. It was always a bit of a gamble, because upgrading PHP upgraded it for every single site, and if we missed anything in local testing, we’d possibly have phone calls from multiple clients. With Docker, we can upgrade one site at a time, while we’re working on that particular site. Docker gives the equivalent of a different VPS for each site.

It’s very disingenuous of you to suggest this isn’t a problem just because you work at what sounds like a very small scale. And I’ll stress that even at small scale, Docker makes your life a lot simpler because it’s less configuration to write and you ensure dev/production parity.

I don’t need what Docker provides: through years of experience I can easily create websites and web programs (including new website development tools) that are intrinsically portable between various kinds of webservers. I develop on Windows using several different web technologies and run on Linux (Centos) remotely, under WHM/cPanel, in a VPS. The idea that there are great differences between these environments calling for virtualization is not true, in my opinion.

I can walk from London to Edinburgh, it doesn’t make it a particularly good way of getting there. Just because you can do something another way is not an argument against the different approach. Docker is intrinsically more portable because the server config is packaged with the website code. I don’t need to care what PHP version the new server is using, what PHP extensions are installed, whether it’s using NGINX or Apache, I can just run docker-compose up on one server and the site works exactly the same way as it does on any other. That’s just not possible in a traditional stack because there will be various server wide configuration differences.

David Spector
President,
Springtime Software

Given that your website https://www.springtimesoftware.com is using frames which have been bad practice since the late 90s, I am not surprised that you do not see the benefits in Docker.


Thank you for your comments. I can see that Docker is a very good technology for those running something like 70 websites. I was only providing a different point of view from and for someone running at very small scale, where I still doubt that Docker would save any time or work. Since you didn’t explain whether Docker is compatible with cPanel-managed websites, that question remains open for me. cPanel supports Apache, PHP, and MySQL (its Apache support includes creating its own automatic configuration files), so questions of compatibility between cPanel on the Linux remote production server and Windows on the local development server are important for someone working in my type of environment, as is the amount of disk space Docker needs for its installation, which you did not mention.

As to the primitiveness of my personal website, this is true. I have ignored this site since I first created it, which I believe was around 1994. I am currently developing new macro-based and Bootstrap 5-based website creation technology and when it is finished, my personal website will, I expect, be just the second website that I will bring into modern times.

I have no idea. I’ve never used cPanel; I’ve used Plesk a bit but never really saw the point in these tools for anyone who knows what they are doing. They make some things easier, but when something breaks, or you want to do anything outside their very limited scope, they make life very difficult. It looks like you can, though: https://www.unixy.net/docker-cpanel/ You could easily run cPanel inside Docker alongside anything else.

as is the amount of disk space Docker needs for its installation, which you did not mention.

~30mb

edit: Actually it will be a bit more than that because .rpm files are compressed.

You’d need to install NGINX/Apache and all its dependencies on the host machine anyway. With Docker you don’t need to install them on the host (my web server doesn’t have the php, nginx or mysql packages installed natively). These packages then get installed as docker images instead. Different distros might have slightly different packages but the Docker PHP image is not going to be significantly larger or smaller on disk than the native version.

The only overhead is the Docker package itself, which on CentOS is about 30mb (https://download.docker.com/linux/centos/7/x86_64/stable/Packages/) and 1mb for docker-compose.

If you are running Docker and all your sites are using the same PHP, MySQL, and NGINX containers, then the size on disk is the same as natively installing them, because you’re doing the same thing: installing the package and all its dependencies once. Docker shares resources among different containers built from the same image.

If your sites are using different PHP versions, then you need multiple PHP packages installed and will need more space, obviously. This is no different to installing multiple PHP versions natively on the server.
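
A sketch of the per-site isolation described, assuming two sites pinned to different official `php` image tags (the service names and paths are invented):

```yaml
# Two sites on one host, each pinned to its own PHP version.
services:
  legacy-site:
    image: php:5.6-fpm
    volumes:
      - ./legacy:/var/www/html
  modern-site:
    image: php:7.4-fpm
    volumes:
      - ./modern:/var/www/html
```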

For RAM, on my local machine Docker is using just over 50mb of memory with 12 containers running.

Thank you, TomB for taking the time to provide this information.

50 MB is not so bad for what it is doing. But your descriptions are making it clear that Docker is not compatible with cPanel/WHM. These are management tools that make it easier to set up website accounts, reseller accounts, email accounts, DNS Zone Records, IP addresses, and much, much more. Rarely is it necessary to make any edits to the Apache config files when one uses WHM. You simply enter the account name and domain name, and WHM creates all the virtual server entries in the configuration files.

Thousands of website developers use cPanel and rely on it instead of having to program the “bare metal” of Apache and the other underlying tools. I have to do both (use WHM on one server and Apache, etc., on the other). I don’t see how Docker could possibly help.

Anyone here used Docker with cPanel/WHM? Is it compatible? My intuition tells me it is not unless it is specifically programmed to interface with the cPanel API, which I doubt, or unless it virtualizes WHM and cPanel, which I also doubt, as they are very complex internally.

My point is that the article I pointed to above flatly recommends Docker for anyone using PHP. That is a gross generalization. Docker appears to be designed for really heavy-duty website maintenance, not for people like me, using WHM and cPanel on one server and WAMP on the other server and with just a handful of very low-traffic websites. Let’s wait and see if anyone replies to my question, so we can find out the truth, instead of speculating further.

While it does scale very well, it’s very useful even at the small scale. If you really can’t see the benefit of combining the server config with the application, I’m not really sure what else to say other than to ask you to actually try it yourself before dismissing it entirely. I wasn’t sure the first time I tried using it, because it requires thinking about things in a different way, but once I uploaded my first website using it, I never looked back.

That particular article actually suggests using it for your development machine and the points in the article stand whether you are using cPanel or setting things up yourself on the server. I’d argue that using cPanel for hosting multiple sites like this will cause headaches. As I said, I’ve used Plesk before and upgrading PHP was always a pain.

Let’s think what we need to do in each case to run the website on another machine.

With docker:

  1. Copy the files
  2. (Optionally) change docker-compose.yml to run the website on a free port
  3. Run docker-compose up

With WAMP:

  1. Copy the files
  2. Edit the nginx.conf or httpd.conf to set up the new vhost
  3. (Most likely) Add the path to php’s open_basedir directive
  4. Install any PHP extensions that the site needs
  5. Hope that the website works on the PHP version you are running
  6. Restart the server
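
Step 2 in the WAMP list above might look like this for Apache; the hostname and paths are illustrative:

```apacheconf
# New vhost added by hand to httpd.conf for each site copied over.
<VirtualHost *:80>
    ServerName newsite.test
    DocumentRoot "C:/sites/newsite"
    <Directory "C:/sites/newsite">
        Require all granted
    </Directory>
</VirtualHost>
```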

If you actually try this yourself a few times, you’ll wonder why you never looked at it sooner.

One of the advantages of Docker is that everything isn’t stored in a monolithic configuration file. You will need a reverse proxy (“when someone connects to hostname.com, connect them to this particular Docker instance”), but that really isn’t a particularly difficult file, and it can run in its own container as well.
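
A hypothetical reverse-proxy entry of the kind described, routing a hostname to one container’s published port (the hostname and port number are invented):

```nginx
server {
    server_name hostname.com;
    location / {
        proxy_pass http://127.0.0.1:8080;   # the container's mapped port
    }
}
```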

Thousands of people use homeopathic medicine, thousands of people claim to believe the world is flat. The number of people using something doesn’t really make any difference to anything.

I’m not saying cPanel is a bad tool; it’s great for people who don’t really understand the underlying technologies, and one of my old employers used to use Plesk on a couple of servers, but it doesn’t really do anything that isn’t easy enough to do in the relevant server config file. Editing httpd.conf or nginx.conf really isn’t difficult for anyone who writes code for a living.

If you need to give non-technical users access to the server management then you’ll need it or something similar. For more experienced server admins such tools tend to get in the way and limit options.

The only thing I’d prefer a gui for is managing mailbox accounts on postfix as that’s a pain to do but there are plenty of tools for that specifically if you need it.

Windows has finally recognized that Linux can easily be dual-booted. This makes developing websites so much easier that absolutely no Docker or cPanel tools are required!

I develop sites locally which are a mirror image of the online VPS running Ubuntu Server. The only tool used is a one-line rsync script to sync the local site up to the server.

This copying task is fast because only files that have changed are compressed and uploaded. If the files are large, then only the modified parts are sent.

Edit:

I’m actually interested in finding out whether this is true.

Unfortunately, googling this topic brings up mostly tutorials and comparisons of different panels.

I did find an interesting discussion on reddit: https://www.reddit.com/r/selfhosted/comments/50zbxw/do_you_guys_run_with_a_control_panel_for_your_web/

edit: A couple of other discussions:

https://www.spigotmc.org/threads/do-you-use-a-control-panel-if-so-which.2433/page-2

A small sample size but quite a lot of “just use ssh” replies… cPanel certainly wouldn’t be “most” in this sample as most people who said they used panels mentioned others entirely.

Probably also best to recognize there isn’t “a” way to do things that is best for everyone. Use cases, personal preference…

If you’re just popping up PHP once in a blue moon for a website a couple of pages deep, going through the whole circus of docker setup is… frankly beyond overkill. I can copy and paste a couple of .php files pretty easily. Don’t really need a massive environment copy to do it in.

By contrast, if you’re developing dozens of large websites daily for clients around the world… yeah, you probably need something that mimics environments and setups and can spot some if not all of the quirks that are going to come your way in live production.

(Incidentally, as a person that falls into the first category, I still use XAMPP - and it’s never gone wrong in any of the ways listed in the article. shrug)


Well, let me jump right in. I’m a developer with 6 years of experience, the last 2 of them in web dev…
I, like @FieryFrog, understand the ins and outs of Docker, and while it may be (and in 90% of cases is) the thing to go with when it comes to testing our app in various environments, I’m a huge opponent of using Docker on production-grade servers. Why? Many reasons…

  • RAM/CPU consumption: Docker containers (not the daemon, the containers) tend to consume more resources as they grow; the more a container grows, the more it consumes; this can quickly lead to overuse and slowness of the hardware; in extreme cases a container can hang the server;
  • Logs and log readability: containers send out tons of logs (I’ve seen containers that sent 20GB of logs daily); these logs are so verbose that it’s sometimes extremely hard to sieve out the exact piece you’re looking for… for best readability you have to use an external log aggregator/reader…
  • Docker has some architectural flaws that make it very open to malicious people; Docker is easily hackable;

General reply to several postings: very interesting topic, and it makes Docker sound very advantageous. I will read some of the links when I have time. However, it is a bit scary to consider dropping WHM and cPanel, which maintain so many different settings and data.

Since around 2000 or so, I have had to switch hosting companies four or five times (since I don’t want to maintain my own Internet-connected server) because they went out of business, because they were charging too much, or because their customer service was terrible. Each time I did research on the Web and chose the best company based on what people recommended or warned about.

Every single one of these four or five hosting companies either required or offered WHM and cPanel. Some offered Plesk or bare root access in addition. So, from the beginning of my Web work I have used WHM and cPanel and usually found them very helpful and occasionally very frustrating as well.

While I do not understand why nobody here at SitePoint seems to use WHM, I insist on waiting until someone joins the conversation who has actually used WHM. If they say that Docker replaces WHM easily, then I might believe it enough to try it. Otherwise, the risk to myself and my single customer, that all the websites and mailservers would be down for some time while I learn what I need to regain my current stability, is too great to accept.

again, use cases and needs.

Don’t need cPanel. Don’t need WHM. Not a professional selling services. As you’ll find, that describes the vast majority of people who come to these forums; mostly it’s just people working on single, little projects for personal sites (or doing coding courses; I’d guess a not-small percentage of visitors are doing assignments), who’re looking for a solution to a specific problem. WHM, cPanel, Docker, Vagrant… it’s all just… too much hassle for a tiny thing that people are doing in their spare time. Heck, I’d wager a majority of visitors are doing dev on the live server.


Just like applications running natively, memory leaks can occur. The only difference is that they are running in Docker instead of natively. I’ve never seen an issue with a containerized application that isn’t also present when running the application natively. And you wouldn’t expect to, since Docker doesn’t really do anything special itself.

If there is an issue with a program using too much ram it will exist regardless of whether you run the program in docker or not.

I have a server with a 75 day uptime running 6 websites, 8 game servers, syncthing, bitwarden_rs, all in docker and the memory usage is 1.54gb.

Normally applications send their logs to the journal, which also requires an external log reader (journalctl). The exact same information is logged in docker, no more, no less.

If you have an application that is creating large log files, you can set limits in Docker, but regardless, this isn’t really a Docker issue; it’s the underlying application that’s creating the logs.
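
The log limits mentioned can be set per container; max-size and max-file are documented options of Docker’s json-file logging driver, though the values and service name here are just examples:

```yaml
services:
  web:
    image: nginx:1.18
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate when a log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```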

I just checked and my entire /var/lib/docker/containers directory is 499mb. That’s with all the individual containers not just the logs and the server has been running for nearly 2 years.

Please provide a citation for this. Like anything, you can set things up in an insecure way. If you make the Docker daemon available on an exposed port, then yeah, anyone can run a container on your machine, but that’s not the default behaviour.

Have you actually tried Docker? In many ways Docker is actually easier than XAMPP.

See, that’s the point here… Docker (by architecture) should prevent this from happening by checking if the port in the config is open or closed… if open, Docker should refuse to work…

An architectural flaw. In fact, a serious security risk. One of many there…

I’m not saying that Docker is useless; all I’m saying is that there are many serious architectural flaws in Docker that make using it risky.

There are cases where this behaviour is useful, though; it’s only exposed if you manually configure it that way, and the Docker manual makes it clear that this isn’t a good idea unless you know what you’re doing.

For example, I’ve used this before to connect a Windows VM on the same machine to the docker daemon on my host.

I can set up Apache in an insecure way, I can set up MySQL in an insecure way, it’s not an “architectural flaw” in the application itself.

I’m not saying that Docker is useless; all I’m saying is that there are many serious architectural flaws in Docker that make using it risky.

And yet you have nothing to back up this claim. Docker is used by Google, Amazon, IBM and Microsoft in large architectures (they will use it alongside Kubernetes).

It’s odd that they’d use a technology that you’re saying is “insecure” without anything to back up your claims.

Of course it’s up to you how you configure your stack, but Docker could be wiser and, if not totally refuse to work, probably at least warn the admin…

Having said the above, I still think that in the way Docker’s devs handled this, they (willfully or not) created a serious security risk. Not everyone is advanced in server administration. There are newbies out there too…

You have to change the default settings. The manual page specifically says not to do this unless you know what you’re doing.

It’s like blaming mysql for letting you create a database with root access without a password.

It’s called stupidity…

As is exposing the Docker daemon. Just because you can do something does not make it an “architectural flaw” in an application.

I can run dd if=/dev/zero of=/dev/sda and wipe my system drive, it’s not dd's fault if the user does something stupid.

I have not, but if you can explain how to set up Docker in less than "Double-click the EXE, put your PHP files here, and go to http://localhost/yourfilename.php", then I’ll consider it easier. Otherwise it’s not for my development needs.
