Currently we have a test server and a live server. We make changes/updates on the test server, then once everything is QC’d and confirmed, we transfer the changes over to the live server.
The transfer is currently a manual process, in that we have to FTP all the updated files and copy over the required MySQL tables.
This process could easily lead to mistakes, if, say, you forgot to transfer over a particular file.
How do you guys manage this? Is this where version control comes into play?
For application files, I’m unlucky/lucky enough (however you look at it) not to be running 10+ servers, so I just zip/upload/unzip. Changeable folders are symlinked.
DB uses Doctrine as the ORM, which has migrations.
At my old job we used a push/sync method that was scheduled with cron. It was all custom scripts (I suspect you’re hoping for a software solution), but basically the script did this:
We built our files on the development server, then when finished and tested, we pushed them to the staging server.
The staging server was a test server, and helped us resolve any merge issues from new code and old code.
The script compared the last-modified timestamps of the files in the live folder and the stage folder, and pushed new files from the staging server to the live server, overwriting where necessary.
This was finicky, and had to be monitored constantly. Plus it didn’t help with the database at all. But it got the job done without FTP. SVN is a far better way to go when things get too big, or you could even just use virtualization and folder mapping instead of separate boxes. My main point is that there should be some staging middle ground, not just dev and live.
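The timestamp-compare push described above could be sketched roughly like this (a hypothetical reconstruction, not the actual script; paths are passed in as arguments):

```shell
# Sketch of a timestamp-based push: copy any file from the stage tree that is
# newer than its live counterpart. Relies on GNU cp's -u (update) flag.
push_newer() {
  stage=$1
  live=$2
  ( cd "$stage" || exit 1
    find . -type f | while IFS= read -r f; do
      # make sure the target directory exists before copying
      mkdir -p "$live/$(dirname "$f")"
      # -u only overwrites when the source file is newer than the destination
      cp -u "$f" "$live/$f"
    done )
}

# example: push_newer /var/www/stage /var/www/live
```

As the poster says, this kind of script is fragile: it trusts file mtimes, never deletes removed files, and says nothing about the database.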
Personally, I use subversion and the process goes like this:
From the website ‘root’ folder (eg /home/myuser/www/mysite/ or /var/www/mysite or however you have it) I have an exports folder. Within this folder I export the site from subversion, into an appropriately named folder for the revision/branch - eg r43 or r16-mybranch or whatever. Then, I have a symbolic link in the website root folder (same one ‘exports’ is in) which points to whichever revision I want served - eg ‘site -> exports/r43/’.
Of course, I have a shell script to manage all this and delete any unused directories in the exports folder after a rollout has been done.
If I have a test site or a staging/‘pre-live’ environment, I use two symbolic links - ‘live’ and ‘test’, say - to point to different exports in the exports folder (or the same one when you want to roll the latest tested changes to live).
Apache is set up to serve from these symbolic links - eg /home/myuser/www/mysite/site/httpdocs/ or /var/www/mysite/live/httpdocs/
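The rollout step above boils down to a couple of shell commands (a sketch; the revision name and paths are examples, and the real script would do an `svn export` where the `mkdir` is):

```shell
# Sketch of the export + symlink rollout. A temp dir stands in for the real
# site root (e.g. /var/www/mysite).
ROOT=$(mktemp -d)
REV=r44   # example revision/branch name

# in the real script this would be: svn export "$REPO_URL" "$ROOT/exports/$REV"
mkdir -p "$ROOT/exports/$REV/httpdocs"

# repoint the 'site' link; -n stops ln descending into the old link target,
# so the switch is effectively atomic and Apache never sees a half state
ln -sfn "exports/$REV" "$ROOT/site"
```

Rolling back is just repointing the link at the previous export.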
The database is still just handled ad hoc, though; I’m looking to automate it all soon.
Forgot to mention configs - I have an environments.xml config file for the site, which has 3 sections (for me, you can have more/less) - one for each environment - dev/test/live. I set an environment variable within the apache config depending on the environment, and my config class takes this and returns the correct config value depending on the environment. This way, I can have different config settings for each environment (anything from paths, urls etc to email recipients for automated emails - don’t want my test emails going to the same people as the live emails!), yet still only have to maintain 1 copy of the config file throughout the repository and all checkouts/exports.
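The Apache side of that might look something like this (a hypothetical fragment; the variable name, ports, and paths are examples - the config class that reads the variable isn’t shown):

```apache
# One vhost per environment, same codebase, different environment variable
<VirtualHost *:80>
    DocumentRoot /var/www/mysite/live/httpdocs
    SetEnv SITE_ENVIRONMENT live
</VirtualHost>

<VirtualHost *:8080>
    DocumentRoot /var/www/mysite/test/httpdocs
    SetEnv SITE_ENVIRONMENT test
</VirtualHost>
```

The config class then picks the matching dev/test/live section of environments.xml based on that variable, so one config file serves every environment.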
Phing has been really useful for me in cases where I only want to give certain customers certain files (for example, no one gets the tests; only premium customers get the premium modules; etc.).
It’ll run your tests, .zip and .tar stuff, etc., and unlike a shell script it runs anywhere PHP does. The downside is that it uses XML rather than PHP for its configuration, which doesn’t deal well with duplication in the build file itself.
You can think of it as a functional equivalent of Rails’ rake. Instead of writing .bat or .php scripts for common tasks like setting up and tearing down the database, you can keep them all in a Phing build file and automate things nicely.
The other nice thing is that Phing has a “depends” syntax. For example, you can say that pushing files live depends on the unit tests passing, and it will run your tests before letting you push anything live. You could also have a task depend on other things, like chmodding files or installing the database.
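A minimal sketch of that “depends” wiring in a Phing build file (the target names and the phpunit command are examples):

```xml
<project name="mysite" default="deploy">
    <!-- run the test suite; checkreturn fails the build on a non-zero exit -->
    <target name="test">
        <exec command="phpunit tests/" checkreturn="true"/>
    </target>

    <!-- deploy refuses to run until the test target has succeeded -->
    <target name="deploy" depends="test">
        <echo msg="Pushing files live..."/>
    </target>
</project>
```

Running `phing deploy` then always runs the tests first.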
I’d say it should be possible using Samba, or whatever folder sharing is called on Windows. Either way, it’s possible using Phing, and that’s a really nice system.
I too like rsync + some custom scripts. rsync works great for this, and you can find versions for Windows (though they do not handle permissions as well as native Windows programs do).
Most of the time, I usually just use git version control to deploy on my server. With git, you can add as many named remotes as you want and push to them at will. When I’m done with a code change and all my tests pass, I push to a dev/staging remote and test it. If it’s good, I push to the live remote. Works pretty well, and really fast.
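That multiple-remote setup can be sketched like this (local bare repos stand in for the servers; a real setup would use ssh URLs and a post-receive hook to check the code out on each box):

```shell
# Sketch of git deployment via named remotes: one remote per environment.
WORK=$(mktemp -d)
git init -q "$WORK/app"
cd "$WORK/app"
git config user.email you@example.com
git config user.name "You"
echo '<?php' > index.php
git add index.php
git commit -qm "initial"
git branch -M main

# bare repos standing in for the staging and live servers
git init -q --bare "$WORK/staging.git"
git init -q --bare "$WORK/live.git"
git remote add staging "$WORK/staging.git"
git remote add live "$WORK/live.git"

git push -q staging main   # test here first...
git push -q live main      # ...then promote to live
```

Since pushes only transfer the objects the remote is missing, promoting to live is fast.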
On the same topic, how do you guys deploy sites from Windows to Linux?
Ex: you develop on Windows, have your own staging server (some Linux box), and the client’s server (some other Linux box). Keep in mind that you must deal with a lot of clients, and that your main dev environment is Windows.