The basics of Continuous Delivery
Posted on February 13, 2017
In this article we will explore the requirements for a complete Continuous Delivery pipeline, as well as give some examples of tools that you may like to use.
The first part is Continuous Integration, often called by its abbreviation, CI.
CI is essentially a way of working on your codebase. Some development teams may work in completely separate branches, and CI still allows you to do that, but it requires that you merge your code back with every other team member's code multiple times a day, even when your features are not completely finished. A separate branch is used to store the latest code. Generally we use the "develop" branch for this.
The develop branch is updated as often as possible. However, developers are expected to commit only code that has at least been roughly tested by hand.
This way even incomplete features can be tested against each other constantly. That will make your deploys a lot easier, since all merge conflicts should have already been resolved beforehand.
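The merge-back routine above can be sketched as a small demo. Everything here is illustrative: the throwaway repository, the branch name "feature/search" and the commit messages are my own assumptions, not a prescribed workflow.

```shell
set -e
# Throwaway repository to show the daily routine: an unfinished feature
# branch is merged back into the shared "develop" branch.
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email ci@example.com; git config user.name CI
git checkout -qb develop
echo base > app.txt; git add app.txt; git commit -qm "initial"
git checkout -qb feature/search             # start an (unfinished) feature
echo "search stub" >> app.txt; git commit -qam "wip: search"
git checkout -q develop
git merge -q --no-ff feature/search -m "merge wip search into develop"
```

In a real team you would `git pull --rebase` develop first and push afterwards; the point is that the merge happens while the feature is still a work in progress.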
You will have to develop an automated deployment script: initially for the build/testing server, followed by nearly identical deploys to staging and production.
You'll want to make sure that the deploys to testing are as close as possible to the deploys to staging and production. Sometimes there are a few differences. For example, I do not copy the user uploaded files and images from production to a staging environment. Instead, I link those paths to production when testing images on staging. This saves a lot of bandwidth and time for each deploy, which can become costly if you run multiple deploys a day.
Also, my testing and staging environments use a daily database export from production, so they may be behind a few hours.
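The upload-linking trick is simple to demonstrate locally. The directories below stand in for the real production share and staging release; all paths and file names are made up for the demo.

```shell
set -e
# Instead of copying user uploads into staging, point staging's uploads
# path at the production share via a symlink.
prod=$(mktemp -d)/production_uploads
staging=$(mktemp -d)/staging_release
mkdir -p "$prod" "$staging"
echo "avatar" > "$prod/avatar.png"   # pretend this is a user-uploaded file
ln -s "$prod" "$staging/uploads"     # staging now reads production's files
cat "$staging/uploads/avatar.png"    # served from production, no copy made
```

On a real server the symlink target would be a mounted production share, and the daily database import would run separately, e.g. from cron.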
Note that you should have as few differences as possible between a testing and production deployment script. Every small difference between those servers may yield errors that cannot be caught on a testing server, resulting in a failed deployment to live.
Again, the overall deploy script will vary per project. Some might only need the latest code to be pulled, while others also require complex database updates and have some scripts that need to run on each release.
These scripts are crafted by hand. Generally they are constructed from a few shell scripts, which are added to a deploy server. I like to use Capistrano as an extra layer for deployments. It has a lot of functionality built in to make your deploys more dynamic and structured. One big feature is the ability to easily keep old releases around, along with a rollback scenario. Capistrano is written in Ruby, but is able to deploy any kind of software, no matter the language.
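The releases-plus-rollback pattern that Capistrano manages can be sketched in plain shell. This is a toy model, not Capistrano itself: each deploy gets a timestamped directory, "current" is a symlink, and rollback is just re-pointing that symlink. All paths and release names are illustrative.

```shell
set -e
app=$(mktemp -d)

deploy() {
  release="$app/releases/$1"
  mkdir -p "$release"
  echo "version $1" > "$release/app.txt"   # stand-in for a real build
  ln -sfn "$release" "$app/current"        # switch "current" to the new release
}

rollback() {
  # Re-point "current" at the second-newest release directory.
  prev=$(ls "$app/releases" | sort | tail -n 2 | head -n 1)
  ln -sfn "$app/releases/$prev" "$app/current"
}

deploy 20170213100000
deploy 20170213110000
cat "$app/current/app.txt"   # the newest release
rollback
cat "$app/current/app.txt"   # back on the previous release
```

Because the switch is a single symlink update, a broken release never has to be "undeployed"; the old directory is still there to point back to.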
You can automatically build and test your latest code by using a server that monitors the develop branch.
The requirements for 'building' your code vary from project to project. Some may only need the code from Git (or another version control system), while others also rely on setting up a database, Composer or npm packages, etc.
Once your code is built, you run automated tests that have been written specifically for your codebase. There are a few ways to automate your tests. A popular system is Behat, which allows you to write tests that are usable by your server, while being very readable by humans.
This provides a very accessible learning curve and gives your stakeholders a way to understand your tests, even if they do not have much programming experience.
It is suggested that developers write these tests together with their stakeholders, in order to make sure that the requirements are understood by the developer as well as the testing system.
Check out the following Behat test:
Feature: Listing command
  In order to change the structure of the folder I am currently in
  As a UNIX user
  I need to be able to see the currently available files and folders there

  Scenario: Listing two files in a directory
    Given I am in a directory "test"
    And I have a file named "foo"
    And I have a file named "bar"
    When I run "ls"
    Then I should get:
      """
      bar
      foo
      """
It's pretty readable, isn't it?
Besides these project specific tests, it is also recommended to add some form of automated code review. Checking your code for standards regularly can dramatically increase the quality and readability for other developers.
You should add these tests to your build server, but you can also add them as a Git hook on your development machine(s). This way you prevent yourself from committing ugly code. Checking your code before each commit saves you the time you would otherwise spend waiting for the build server after a commit. It also reduces the number of fix-up commits, which keeps your commit log a bit cleaner.
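Here is a small local demo of such a pre-commit hook. The "style check" is a deliberately silly one (it bans TODO markers) so the demo is self-contained; in practice you would call your real checker, such as phpcs, ESLint or RuboCop.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email dev@example.com; git config user.name Dev

# Install a pre-commit hook: refuse the commit if the staged diff
# introduces a TODO marker (a stand-in for a real style checker).
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
if git diff --cached | grep -q "TODO"; then
  echo "Style check failed: remove TODO markers before committing." >&2
  exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit

echo "clean code" > app.txt
git add app.txt
git commit -qm "clean commit"                 # passes the hook

echo "TODO fix later" >> app.txt
git add app.txt
git commit -qm "sloppy commit" || echo "hook blocked the commit"
```

The sloppy commit never reaches the repository, so the build server never has to reject it.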
When a commit passes all the automated tests, I like to merge it into another branch, making sure there is a branch that contains only "stable" code, at least in the eyes of the deploy server.
I call this branch "nightly". You authorise your build server to merge commits into the nightly branch automatically.
If the build server accepts the commit as nightly, it is then considered stable enough to be offered to another server.
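The build server's nightly step can be demonstrated locally too. The repository, commits and the `true` placeholder for the test suite are all assumptions for the demo; only the branch names follow the article.

```shell
set -e
# Demo: after the test suite passes, the build server fast-forwards
# "nightly" to the verified develop commit.
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email ci@example.com; git config user.name CI
git checkout -qb develop
echo v1 > app.txt; git add app.txt; git commit -qm "base"
git branch nightly                # nightly starts at the base commit
echo v2 >> app.txt; git commit -qam "tested feature"
true                              # stand-in for the real test suite
git checkout -q nightly
git merge -q --ff-only develop    # nightly only ever holds tested commits
```

Using `--ff-only` keeps nightly a strict subset of develop's history: the build server never creates commits of its own, it only advances a pointer.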
Now you decide your continuous delivery strategy:
- Have your code deployed to a staging server and test/check the result manually. This is called Continuous Delivery
- Deploy the code straight to production and skip manual testing. This is called Continuous Deployment
There are a lot of flavours of deploy servers. You can choose to host them yourself with Jenkins or have them hosted in the cloud using Travis CI.
There are also Bamboo and a CI feature in GitLab. These two can either be hosted yourself or in a managed environment from the software's publishers.
Finally you want to be able to start and kill servers at any point with the click of a button.
While deploy scripts handle operations on a working environment, we should have the ability to start up one or multiple empty servers, and provision them automatically to match our server setup.
Many use tools like Puppet, Ansible or Chef for provisioning.
Personally, I generally use only a few techniques to simulate provisioning at the virtual level.
On a new release, an extra Docker instance is started alongside the live one.
The deploy only runs on the newly started instance, which is not exposed to users until the deployment is complete.
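The idea behind running two instances side by side can be modelled without Docker at all. In this toy version the "live" and "new" instances are plain directories and "switching traffic" is re-pointing a symlink; with real containers you would start a second instance and repoint your proxy instead. Names and paths are made up.

```shell
set -e
app=$(mktemp -d)
mkdir "$app/blue" "$app/green"

echo "release 1" > "$app/blue/index.html"
ln -sfn "$app/blue" "$app/live"              # users currently see "blue"

echo "release 2" > "$app/green/index.html"   # the deploy runs on "green" only
# ...smoke tests would run against "green" here...
ln -sfn "$app/green" "$app/live"             # flip: users now see "green"
```

At no point does a user-visible path contain a half-deployed release, which is the whole appeal of the blue/green approach.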
Now where to start?
I'd suggest starting with writing deploy scripts. With Capistrano, you do not even need a dedicated build server to begin. Try writing out the steps you perform on each deploy, and 'translate' them into code.
Once that is complete, you can do the same for testing. Which tests do you run manually on each deploy? Write them out and translate them into Behat scenarios, or into any other testing framework.
Lastly, set up a CI server to manage your deployments, and make sure all developers know to merge back into your development branch at all times.