Useful links:
- Caching: https://docs.gitlab.com/ce/ci/caching/
- .gitlab-ci.yml: https://docs.gitlab.com/ce/ci/yaml/
- Examples of .gitlab-ci.yml files: https://docs.gitlab.com/ce/ci/examples/
- Available Variables: https://docs.gitlab.com/ce/ci/variables/
When you define your stages, all jobs of the same stage are executed in parallel.
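A minimal sketch of that behavior (the job names and scripts here are hypothetical): both `test` jobs start at the same time, once the `build` stage finishes.

```yaml
stages:
  - build
  - test

build_app:
  stage: build
  script:
    - echo "building"

# These two jobs are in the same stage, so they run in parallel:
test_unit:
  stage: test
  script:
    - echo "running unit tests"

test_lint:
  stage: test
  script:
    - echo "running linters"
```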
- Gitlab.com doesn't support interactive web terminals for now (last I checked: 2019/02/20); follow this issue for more details.
You have two options:
When you're templating/extending, keep in mind that it's better to avoid some simplified syntaxes, because when merging values, GitLab CI will not merge lists, for example.
Let's say you have something like:

```yaml
deploy:
  only:
    - master
```

Now you want to extend and add:

```yaml
only:
  # ...
  changes:
    - ./**/*.py
```

In order to avoid having to repeat the first bit in the extended form, use the expanded syntax from the beginning, like this:

```yaml
deploy:
  only:
    refs:
      - master
```

Then when you extend, you'll have the result you expect:

```yaml
deploy:
  only:
    refs:
      - master
```

+

```yaml
deploy:
  only:
    changes:
      - ./**/*.py
```

=

```yaml
deploy:
  only:
    refs:
      - master
    changes:
      - ./**/*.py
```

Run your jobs locally to avoid having to commit and push just to see if you're writing correct "CI code".
There are some limitations, but for basic checks, it's good enough.
So, install: https://docs.gitlab.com/runner/
And you'll be running something like:
```shell
gitlab-runner exec docker my_awesome_job
```
```yaml
job:
  script:
    - '[[ -z "$MY_PASSWORD" ]] && echo "You must set the variable: MY_PASSWORD" && exit 1;'
```

Of course, you have a built-in way of executing jobs only if a variable is equal to something:
This can be useful for testing, like in a Build > Test > Release Scenario.
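A minimal sketch of that built-in check, using `only:variables` (the job name, script and the `$RELEASE` variable are hypothetical):

```yaml
deploy_staging:
  script:
    - ./deploy.sh staging
  only:
    variables:
      # Run this job only when RELEASE is set to "staging"
      - $RELEASE == "staging"
```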
Let's see a complete example of how that would be:
```yaml
services:
  - docker:dind

stages:
  - Build
  - Test
  - Release

variables:
  DJANGO_IMAGE_TEST: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_REF_SLUG
  DJANGO_IMAGE: $CI_REGISTRY_IMAGE/django:$CI_COMMIT_SHA

django_build:
  image: docker:stable
  stage: Build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    # So we can use it as cache (`|| true` means that even if the pull fails, we'll still try to build)
    - docker pull $DJANGO_IMAGE_TEST || true
    # Using --cache-from we make sure that if nothing has changed here, we use what's cached
    # BUILD TEST IMAGE:
    - docker build --target=production --cache-from=$DJANGO_IMAGE_TEST -t $DJANGO_IMAGE_TEST .
    # Push so we can use it in subsequent jobs
    - docker push $DJANGO_IMAGE_TEST

django_test:
  image: $DJANGO_IMAGE_TEST
  stage: Test
  services:
    - postgres:11-alpine
  variables:
    POSTGRES_DB: postgres
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: ""
    POSTGRES_PORT: "5432"
    # Using the test settings, instead of actual production
    DJANGO_SETTINGS_MODULE: myapp.config.test
  script:
    # Install some packages to run tests, then execute pytest
    - pip install --user -r requirements/test.txt
    - pytest

django_release:
  image: docker:stable
  stage: Release
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $DJANGO_IMAGE_TEST
    - docker tag $DJANGO_IMAGE_TEST $DJANGO_IMAGE
    - docker push $DJANGO_IMAGE
```

Services are Docker containers with long-running services that you can access from your jobs.
For example, Postgres: https://docs.gitlab.com/ce/ci/services/postgres.html

- The host address will be available to connect at `postgres` (not `localhost`).
- The default database, username and password are the defaults from the official image.
- You can customize some things.
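As one example of customizing, the expanded `services` syntax lets you give the service an alias hostname (a sketch; the `db` alias is an arbitrary name I chose):

```yaml
services:
  - name: postgres:11-alpine
    alias: db

variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""

# Your app would now reach Postgres at host `db` instead of `postgres`.
```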
IMPORTANT:
You may want to run `export` in a job so you can see what variables GitLab CI will inject by default.
This can cause some weird behaviors: maybe you're expecting POSTGRES_PORT to be 5432, but if you export the variables you'll see that it's actually something like: tcp://172.17.0.3:5432.
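One way to inspect what's injected is a throwaway job like this (the job name is hypothetical):

```yaml
debug_env:
  script:
    # Print every environment variable the job actually sees
    - export
```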
So you probably want to define some variables, like:
```yaml
variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: ""
  POSTGRES_PORT: "5432"
```

```yaml
before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
```

```yaml
image: docker:stable

services:
  - docker:dind

stages:
  - Linters

test_docker_compose_files:
  stage: Linters
  script:
    # Download and install docker-compose
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validate the main Docker Compose file used in the development environment
    - docker-compose -f docker-compose.yml config
    # Validate the deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config
```

```yaml
deploy:
  image: docker:latest
  stage: Deployment
  script:
    # First, let's check if our variables exist:
    - '[[ -z "$MY_SECRET" ]] && echo "You must set the variable: MY_SECRET" && exit 1;'
    # Step two is to check if MY_SECRET is stored in Docker Secrets;
    # if not, we create one
    - docker secret inspect MY_SECRET || echo $MY_SECRET | docker secret create MY_SECRET -
    # And then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual
```

```yaml
deploy:
  image: docker:latest
  stage: Deployment
  script:
    - apk add --no-cache openssl
    - docker secret inspect MY_SECRET || openssl rand -base64 50 | docker secret create MY_SECRET -
    # And then we deploy to our swarm:
    - docker stack deploy --with-registry-auth -c deployment/docker-stack.yml my_stack
  when: manual
```

```yaml
validate_stack_files:
  stage: Validate
  image: docker:stable
  script:
    - wget https://github.com/docker/compose/releases/download/1.23.2/run.sh -O /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    # Validate the main Docker Compose file used in the development environment
    - docker-compose -f docker-compose.yml config
    # Validate the deployment docker stack files
    - docker-compose -f deployment/docker-stack.django.yml config
  only:
    changes:
      - docker-compose.*
      - deployment/docker-stack.*
```

```yaml
code_style:
  stage: Quality
  # It is simply the official Python image + Black
  image: douglasmiranda/black
  script:
    - black --check --diff my_project/
  only:
    changes:
      - ./**/*.py
  allow_failure: true
  when: on_success
```