
Building and deploying applications

Image: Rubik's cube, by Kleiner, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7836106

The continuous integration toolchain has developed a lot over the last few years.

Our mission to build and deploy everything fully automatically and reproducibly has become more and more elegant to accomplish. But let me take you back into the ancient halls of our workflows.

Let’s take a TYPO3 CMS website. We have been building sites with this PHP-based content management system for ages, but let’s skip the dark woods of Subversion.

Back in the day

The TYPO3 CMS ships a so-called Core and is extensible with “Extensions”. Extensions are mostly customer-specific packages that solve a particular problem (e.g. show all employees on the website, do some sophisticated searches and so on).

These Extensions mostly target specific TYPO3 Core versions, such as 7.6 or 8.7. To keep the Extensions reusable and separate from the Core, their code lives in dedicated Git repositories. I assume Git is well known to developers in 2018.

So, to get back to the point: deploying Extensions was once done with Jenkins, a build server. When a developer pushed changes to the repository, a Jenkins listener started a job that mostly just copied the code from the Git repository, via ssh or rsync, to all servers that used the Extension.

This procedure worked for quite some time, but it had massive drawbacks. There was no defined state of a TYPO3 website. Getting new developers into the team caused pain, and everyone had a different state with different Extension versions and so on.

We tried to get rid of that using tools such as Vagrant and Chef, or shell scripts, but it never gave us a good feeling.

Rescue

As said before, everything has moved really fast over the last few years, and the main components that help us deliver fast, high-quality software are Composer, Git, Docker and GitLab.

Composer is the PHP tool to define application dependencies, provide autoloading and run scripts. Basically, every application (TYPO3, Symfony and so on) is initially built using the Composer manifest composer.json. Every TYPO3 Extension in use is defined in the composer.json file, and the exact version (commit hash) is written to a composer.lock file when the dependencies are installed or updated.
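As a rough illustration, a project manifest for such a site could look like this; the vendor and extension names are made up, only the typo3/cms-core package name is real:

```json
{
    "name": "acme/typo3-site",
    "description": "Illustrative example only, not a real project manifest",
    "require": {
        "typo3/cms-core": "^8.7",
        "acme/employee-directory": "^1.2"
    },
    "config": {
        "sort-packages": true
    }
}
```

Running composer install against this file resolves both packages and pins the exact commits in composer.lock, which is what makes the build reproducible.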

Git is used to store and version the source code. The main project, containing the composer.json, the composer.lock and some configuration files, is put into a Git repository.

Docker is the way to build completely independent and reproducible applications. Everything lives in the so-called container and has no dependencies on the server it runs on. The only thing a server needs to run a dockerized application is a running Docker daemon. To build the application and store it in a Docker image, we use GitLab CI.
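A simplified Dockerfile for such a PHP application might look like the following sketch; the base image, the chosen PHP extensions and the paths are assumptions, not our exact setup:

```dockerfile
# Sketch only: base image, extensions and paths are illustrative
FROM php:7.2-apache

# Operating-system level tools and common PHP extensions
RUN apt-get update \
 && apt-get install -y --no-install-recommends git unzip \
 && rm -rf /var/lib/apt/lists/* \
 && docker-php-ext-install mysqli opcache

# Install Composer and the dependencies pinned in composer.lock
COPY --from=composer:1 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Add the remaining project files (configuration, assets, ...)
COPY . .
```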

GitLab and GitLab CI are free tools for hosting and building (integrating) source code. Well, that’s quite an understatement, as they are much more, but I’ll cover that later.

In its original purpose, GitLab is used to store some of our Git repositories, mainly applications that have to be deployed. Libraries and components such as TYPO3 Extensions are hosted on GitHub, because every developer has a GitHub account and knows the workflow of forking and submitting pull requests. This is a highly collaborative approach for us.

So, while GitLab stores our Git repositories, it additionally does a lot more than that. When a developer pushes her changes to the GitLab repository, the GitLab CI runner is automatically triggered and starts building the whole application in an isolated environment from scratch. But what exactly does that mean?

The GitLab runner runs on a separate server, takes the contents of the Dockerfile and starts building a Docker image, i.e. it adds the required components on the operating-system level, such as the webserver, PHP, PHP extensions and other tools. When this step has succeeded, the next step gathers all application components defined in the composer.json, or more precisely the composer.lock file, and installs them.

After the Docker image has been built successfully, we take advantage of another great tool bundled in the GitLab environment: a Docker Registry. The Docker Registry is a repository for storing these Docker images. Images can be tagged (labeled), and labeling images serves a great purpose for us. Images built from the develop branch are always tagged with the develop tag and overwritten on each build. Releases are tagged twice. Technically, releases are “only” Git tags. So, when a commit is tagged, the built Docker image is tagged both with the tag the developer provided and additionally with a live tag, and that image will be deployed as the (you guessed it) live application.
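Expressed as a .gitlab-ci.yml, this build-and-tag logic could look roughly like the sketch below. The job names are illustrative; the $CI_* variables are GitLab’s predefined CI variables, and only the tagging scheme matches what is described above:

```yaml
# Sketch only: job names are illustrative
image: docker:stable
services:
  - docker:dind

stages:
  - build

before_script:
  - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"

build-develop:
  stage: build
  only:
    - develop
  script:
    # Rebuild and overwrite the develop tag on every push to develop
    - docker build -t "$CI_REGISTRY_IMAGE:develop" .
    - docker push "$CI_REGISTRY_IMAGE:develop"

build-release:
  stage: build
  only:
    - tags
  script:
    # A git tag produces two image tags: the tag itself and "live"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" "$CI_REGISTRY_IMAGE:live"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
    - docker push "$CI_REGISTRY_IMAGE:live"
```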

The only thing really missing in our workflow is a component such as Docker Swarm or Kubernetes that automatically takes these images and adds them to a cluster. We currently do this with shell scripts, but we are evaluating the next steps towards a completely automated continuous delivery environment.
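Until then, the deployment step is a shell script along these lines; the host, registry and container names below are placeholders, not our actual infrastructure:

```sh
#!/bin/sh
# Sketch only: host, registry and container names are placeholders
set -e

HOST="deploy@web01.example.com"
IMAGE="registry.example.com/acme/typo3-site:live"

# Pull the freshly tagged live image and restart the container on the target host
ssh "$HOST" sh -s <<EOF
set -e
docker pull $IMAGE
docker rm -f typo3-site 2>/dev/null || true
docker run -d --name typo3-site -p 80:80 $IMAGE
EOF
```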

Disclaimer: Everything in this article may sound quite easy, but it took quite some work and a shift in mindset to achieve our goals. Most importantly: it feels good, and with our last missing ingredient we are going to have a really stable and solid solution for our development (well, let’s say the word: “DevOps”) workflow.