The road towards the absolute CI/CD pipeline goes through building Docker images and deploying them to production. The code included in the images gets unit tested, both locally during development and after merging to the master branch, using Travis.
But Travis builds its own environment to run the tests in, which could differ from the environment of the Docker image. For example, Travis may be running tests in a Debian-based VM with libjpeg version X, while our to-be-deployed Docker image runs code on top of Alpine with libjpeg version Y.
To ensure that the image to be deployed to production is OK, we need to run the tests inside that Docker image. That's still possible with Travis, with only a few changes to .travis.yml:
Sudo is required
sudo: required
Start by requesting to run tests in a VM instead of in a container.
Request Docker service:
services:
- docker
The VM must run the Docker daemon.
Add TRAVIS_COMMIT to Dockerfile (Optional)
before_install:
- docker --version
- echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile
It's very useful to export the git SHA of HEAD as a Docker environment variable. This way you can always identify the code included in the image, even if you have the .git directory in .dockerignore to reduce the size of the image.
The resulting Docker image also gets tagged with the same SHA for easier identification.
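The effect of that before_install line can be sketched locally. A minimal example, using a made-up placeholder SHA (Travis sets TRAVIS_COMMIT for real builds):

```shell
# Simulate the before_install step: append the commit SHA to a Dockerfile.
TRAVIS_COMMIT="abc1234"                  # placeholder; set by Travis in real builds
printf 'FROM alpine:3.4\n' > Dockerfile  # minimal example Dockerfile
echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile
cat Dockerfile
# prints:
# FROM alpine:3.4
# ENV GIT_SHA abc1234
```

Once the image is built, the SHA can be read back from any container started from it, e.g. with `docker run --rm <image> printenv GIT_SHA`.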
Build the image
install:
- docker pull ${DOCKER_REPOSITORY}:last_successful_build || true
- docker pull ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} || true
- docker build -t ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} --pull=true .
Instead of pip-installing packages, override the install step to build the Docker image.
Start by pulling previously built images from the Docker Hub. Remember that Travis runs each job in an isolated VM, so there is no Docker cache. Pulling previously built images seeds the cache.
Travis' built-in cache functionality can also be used, but I find it more convenient to push to the Hub: production will later pull from there, and if debugging is needed I can pull the same image locally too.
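For comparison, the built-in cache approach would look roughly like this. This is only a sketch, assuming a docker_cache directory and a single saved image tarball:

```yaml
# Sketch: persist a saved image tarball between builds instead of pulling from the Hub.
cache:
  directories:
    - docker_cache
before_install:
  # Load the previous image into the Docker cache, if one was saved.
  - if [ -f docker_cache/image.tar ]; then docker load -i docker_cache/image.tar; fi
before_cache:
  # Save the freshly built image so the next job can seed its cache from it.
  - docker save -o docker_cache/image.tar ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT}
```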
Each docker pull is followed by || true, which translates to "if the Docker Hub doesn't have this repository or tag, that's OK, don't stop the build".
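The || true idiom can be seen in isolation in a minimal shell sketch:

```shell
# Travis runs build commands with fail-fast behavior, like `set -e`:
set -e
# a failing command would normally abort the build here...
false || true   # ...but `|| true` swallows the failure
echo "build continues with exit status $?"
# prints: build continues with exit status 0
```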
Finally, trigger a docker build. The --pull=true flag forces downloading the latest versions of the base images, i.e. the ones from the FROM instructions. For example, if an image is based on Debian, this flag will force Docker to download the latest version of the Debian image. Since the Docker cache has already been populated, this is not superfluous: if skipped, the new build could use an outdated base image which could have security vulnerabilities.
Run the tests
script:
- docker run -d --name mariadb -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=foo mariadb:10.0
- docker run ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} flake8 foo
- docker run --link mariadb:db -e CHECK_PORT=3306 -e CHECK_HOST=db giorgos/takis
- docker run --env-file .env --link mariadb:db ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} coverage run ./manage.py test
First start mariadb, which is needed for the Django tests to run. The -d flag forks it to the background, and the --name flag makes linking with other containers easier.
Then run the flake8 linter. Although it doesn't depend on mariadb, it runs after it to allow some time for the database to download and initialize before it gets hit with tests.
Travis needs about 12 seconds to get MariaDB ready, which is usually more than the time the linter takes to run. To wait for the database to become ready before running the tests, run Takis. Takis waits for the container named mariadb to open port 3306. I blogged about Takis in detail before.
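The idea behind Takis can be sketched with a plain bash loop. This is a hypothetical wait_for_port helper, not the actual Takis implementation, and it relies on bash's /dev/tcp pseudo-device:

```shell
# Poll a TCP port until it accepts connections or a timeout expires.
# In the Travis setup above, the equivalent target would be db:3306.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}"
  local i
  for i in $(seq "$tries"); do
    # bash's /dev/tcp attempts a TCP connection when opened
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}
```

Takis does the same thing in a standalone container, which keeps the wait logic out of the .travis.yml script section.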
Finally, run the tests, making sure that the database is linked using --link and that the environment variables needed for the application to initialize are set using --env-file.
Upload built images to the Hub
deploy:
- provider: script
script: bin/dockerhub.sh
on:
branch: master
repo: foo/bar
And dockerhub.sh:
#!/bin/bash -ex
docker login -e "$DOCKER_EMAIL" -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
docker push ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT}
docker tag -f ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} ${DOCKER_REPOSITORY}:last_successful_build
docker push ${DOCKER_REPOSITORY}:last_successful_build
The deploy step is used to run a script to tag images, log in to Docker Hub and finally push those tags. This step runs only on branch: master and not on pull requests.
Pull requests will not be able to push to Docker Hub anyway, because Travis does not expose encrypted environment variables to pull requests and therefore there will be no $DOCKER_PASSWORD. At the end of the day this is not a problem, because you don't want pull requests with arbitrary code to end up in your Docker image repository.
Set the environment variables
Set the environment variables needed to build, test and deploy in the env section:
env:
global:
# Docker
- DOCKER_REPOSITORY=example/foo
- DOCKER_EMAIL="foo@example.com"
- DOCKER_USERNAME="example"
# Django
- DEBUG=False
- DISABLE_SSL=True
- ALLOWED_HOSTS=*
- SECRET_KEY=foo
- DATABASE_URL=mysql://root@db/foo
- SITE_URL=http://localhost:8000
- CACHE_URL=dummy://
and save them to a .env file for docker run to access with --env-file:
before_script:
- env > .env
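The effect of env > .env can be seen locally. A minimal sketch, using one of the variables from the env section above:

```shell
# Dump the current environment into a file in KEY=value form,
# the format that `docker run --env-file` expects.
export SECRET_KEY=foo
env > .env
grep '^SECRET_KEY=' .env
# prints: SECRET_KEY=foo
```

Note that docker's --env-file format does no quote processing; each line is passed to the container literally as KEY=value.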
Variables with private data like DOCKER_PASSWORD can be added through Travis' web interface.
That's all!
Pull requests and merges to master are both tested against Docker images, and successful builds of master are pushed to the Hub, ready to be deployed directly to production.
You can find a real-life example of .travis.yml in the Snippets project.
Edit: In Aug 2016 Travis upgraded to Docker v1.12, which breaks caching as described here. I made a follow-up post to fix caching.