In this guide, we will explore various scenarios for working with GitLab CI/CD pipelines and provide examples of the most commonly used options. The template library will be expanded as needed.
You can easily find out what CI/CD is and why it's needed with a quick internet search, and full documentation on configuring pipelines in GitLab is readily available. Here, I'll briefly describe how the system operates from a bird's-eye view:
- A developer submits a commit to the repository, creates a merge request through the website, or initiates a pipeline explicitly or implicitly in some other way.
- The pipeline configuration selects all tasks whose conditions allow them to run in the current context.
- Tasks are organized according to their respective stages.
- Stages are executed sequentially, while all tasks within a stage run in parallel.
- If a stage fails (i.e., at least one of its tasks fails), the pipeline usually stops.
- If all stages are completed successfully, the pipeline is considered to have passed successfully.
In summary, we have:
- A pipeline: a set of tasks organized into stages, where you can build, test, package code, deploy a ready build to a cloud service, and more.
- A stage: a unit of organizing a pipeline, containing one or more tasks.
- A task (job): a unit of work in the pipeline, consisting of a script (mandatory), launch conditions, settings for publishing/caching artifacts, and much more.
Consequently, setting up CI/CD comes down to creating a set of jobs that implement all the actions needed for building, testing, and publishing code and artifacts.
Templates
In this section, we will provide several ready-made templates that you can use as a foundation for writing your own pipeline.
Minimal Scenario
For small tasks consisting of a couple of jobs:
stages:
  - build

TASK_NAME:
  stage: build
  script:
    - ./build_script.sh
- stages: Describes the stages of our pipeline. In this example, there is only one stage.
- TASK_NAME: The name of our job.
- stage: The stage to which our job belongs.
- script: A set of scripts to execute.
Standard Build Cycle
Typically, the CI/CD process includes the following steps:
- Building the package.
- Testing.
- Delivery.
- Deployment.
You can use the following template as a basis for such a scenario:
stages:
  - build
  - test
  - delivery
  - deploy

build-job:
  stage: build
  script:
    - echo "Start build job"
    - build-script.sh

test-job:
  stage: test
  script:
    - echo "Start test job"
    - test-script.sh

delivery-job:
  stage: delivery
  script:
    - echo "Start delivery job"
    - delivery-script.sh

deploy-job:
  stage: deploy
  script:
    - echo "Start deploy job"
    - deploy-script.sh
Jobs
In this section, we will explore options that can be applied when defining a job. The general syntax is as follows:
<TASK_NAME>:
  <OPTION1>: ...
  <OPTION2>: ...
We will list commonly used options, and you can find the complete list in the official documentation.
Stage
This option specifies to which stage the job belongs. For example:
stages:
  - build
  - test

TASK_NAME_1:
  ...
  stage: build

TASK_NAME_2:
  ...
  stage: test
Stages are defined in the stages directive. There are two special stages that do not need to be listed in stages:
- .pre: runs before the main pipeline jobs.
- .post: runs at the end, after the main pipeline jobs have completed.
For example:
stages:
  - build
  - test

getVersion:
  stage: .pre
  script:
    - VERSION=$(cat VERSION_FILE)
    - echo "VERSION=${VERSION}" > variables.env
  artifacts:
    reports:
      dotenv: variables.env
In this example, before any of the main jobs start, we read the VERSION variable from a file and pass it on as a dotenv artifact, so it becomes an environment variable in the jobs that follow.
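For illustration, a downstream job could then use this variable directly in its commands; the job name and script below are assumptions, not part of the original example:
buildPackage:
  stage: build
  script:
    # VERSION is available here because getVersion exported it via the dotenv report
    - echo "Building version ${VERSION}"
    - tar -czf myprogram-${VERSION}.tar.gz ./src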
Image
Specifies the name of the Docker image to use if the job runs in a Docker container:
TASK_NAME:
  ...
  image: debian:11
Before_script
This option defines a list of commands that run before the script section and after artifacts are fetched:
TASK_NAME:
  ...
  before_script:
    - echo "Run before_script"
Script
The main section where the job's commands are executed is the script option. Let's explore it further.
Describing an array of commands: Simply list the commands that need to be executed sequentially in your job:
TASK_NAME:
  ...
  script:
    - command1
    - command2
Long commands split into multiple lines: You may need to execute commands as part of a script (with comparison operators, for example). In this case, a multiline format is more convenient. You can use different indicators:
Using |:
TASK_NAME:
  ...
  script:
    - |
      command_line1
      command_line2
Using >:
TASK_NAME:
  ...
  script:
    - >
      command_line1
      command_line1_continue

      command_line2
      command_line2_continue
With >, adjacent lines are folded into a single command, so a blank line is needed to separate the two commands.
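As a concrete sketch (the job name and commands are assumptions), the | style is convenient for shell constructs that span several lines, such as conditionals:
check-branch:
  stage: test
  script:
    - |
      # multi-line shell conditional kept readable with the literal block style
      if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
        echo "Running on the default branch"
      else
        echo "Running on branch $CI_COMMIT_BRANCH"
      fi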
After_script
A set of commands that run after the script section, even if the script fails:
TASK_NAME:
  ...
  script:
    ...
  after_script:
    - command1
    - command2
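For example, a hypothetical test job (names and paths are assumptions) might use after_script for cleanup that must happen even when the tests fail:
test-job:
  stage: test
  script:
    - ./run-tests.sh          # may fail
  after_script:
    # executed regardless of the result of the script section
    - rm -rf tmp/
    - echo "Cleanup finished"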
Artifacts
Artifacts are intermediate builds or files that can be passed from one stage to another.
You can specify which files or directories will be artifacts:
TASK_NAME:
  ...
  artifacts:
    paths:
      - ${PKG_NAME}.deb
      - ${PKG_NAME}.rpm
      - "*.txt"
      - configs/
In this example, the artifacts will include all files ending in .txt, the ${PKG_NAME}.deb and ${PKG_NAME}.rpm packages, and the configs directory. ${PKG_NAME} is a variable (more on variables below).
In other jobs that run afterward, you can use these artifacts by referencing them by name, for example:
TASK_NAME_2:
  ...
  script:
    - cat *.txt
    - yum -y localinstall ${PKG_NAME}.rpm
    - apt -y install ./${PKG_NAME}.deb
You can also pass system variables that you defined in a file:
TASK_NAME:
  ...
  script:
    - echo -e "VERSION=1.1.1" > variables.env
    ...
  artifacts:
    reports:
      dotenv: variables.env
In this example, we pass the variable VERSION with the value 1.1.1 through the variables.env file.
If necessary, you can exclude specific files by name or pattern:
TASK_NAME:
  ...
  artifacts:
    paths:
      ...
    exclude:
      - ./.git/**/
In this example, we exclude the contents of the .git directory, which typically holds repository metadata. Note that, unlike paths, exclude does not match directories recursively, so the objects to exclude must be specified explicitly (hence the **/ glob).
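A hedged sketch of such an explicit pattern (the paths are assumptions): to drop all object files from an artifact directory, a recursive glob can be used in exclude:
build-job:
  stage: build
  artifacts:
    paths:
      - build/
    exclude:
      # matches .o files in build/ and all of its subdirectories
      - build/**/*.o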
Extends
Allows you to separate a part of the script into a separate block and combine it with a job. To better understand this, let’s look at a specific example:
.my_extend:
  stage: build
  variables:
    USERNAME: my_user
  script:
    - extend script

TASK_NAME:
  extends: .my_extend
  variables:
    VERSION: 123
    PASSWORD: my_pwd
  script:
    - task script
In this case, our TASK_NAME job uses extends. As a result, the job will look like this:
TASK_NAME:
  stage: build
  variables:
    VERSION: 123
    PASSWORD: my_pwd
    USERNAME: my_user
  script:
    - task script
What happened:
- stage: build came from .my_extend.
- The variables were merged, so the job includes VERSION, PASSWORD, and USERNAME.
- The script is taken from the job itself (simple key values are not merged; the job's value takes precedence).
Environment Variables
Specifies environment variables that will be created for the job. Note that per-job variables are defined with the variables keyword (the environment keyword is reserved for describing deployment environments):
TASK_NAME:
  ...
  variables:
    RSYNC_PASSWORD: rpass
    EDITOR: vi
Release
Publishes a release on the GitLab portal for your project:
TASK_NAME:
  ...
  release:
    name: 'Release $CI_COMMIT_TAG'
    tag_name: '$CI_COMMIT_TAG'
    description: 'Created using the release-cli'
    assets:
      links:
        - name: "myprogram-${VERSION}"
          url: "https://gitlab.com/master.dmosk/project/-/jobs/${CI_JOB_ID}/artifacts/raw/myprogram-${VERSION}.tar.gz"
  rules:
    - if: $CI_COMMIT_TAG
Please note that we use the if rule (explained below).
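Note that the release keyword requires the release-cli tool to be available in the job's environment. A common approach, described in GitLab's documentation, is to run the job in the dedicated release-cli image; the sketch below is a minimal assumed example:
release-job:
  stage: deploy
  # image that ships with release-cli preinstalled
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Creating release for tag $CI_COMMIT_TAG"
  release:
    name: 'Release $CI_COMMIT_TAG'
    tag_name: '$CI_COMMIT_TAG'
    description: 'Created using the release-cli'
  rules:
    - if: $CI_COMMIT_TAG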
Rules and Constraints Directives
To control the behavior of job execution, you can use directives with rules and conditions. You can execute or skip jobs depending on certain conditions. Several useful directives facilitate this, which we will explore in this section.
Rules
Rules define the conditions under which a job can be executed. They are built from the following checks:
- if
- changes
- exists
- allow_failure
- variables
The if operator: allows you to check a condition, such as whether a variable is equal to a specific value:
TASK_NAME:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
In this example, the commit must be made to the default branch.
Changes: Checks whether changes have affected specific files, using the changes option. In this example, the job runs if the script.sql file has changed:
TASK_NAME:
  ...
  rules:
    - changes:
        - script.sql
Multiple Conditions: You can have multiple conditions for starting a job. Let’s explore some examples.
a) If the commit is made to the default branch AND the changes affect the script.sql file:
TASK_NAME:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - script.sql
b) If the commit is made to the default branch OR the changes affect the script.sql file:
TASK_NAME:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - changes:
        - script.sql
Checking File Existence: Determined using exists:
TASK_NAME:
  ...
  rules:
    - exists:
        - script.sql
The job will only execute if the script.sql file exists.
Allowing Job Failure: Defined with allow_failure:
TASK_NAME:
  ...
  rules:
    - allow_failure: true
In this example, the pipeline continues even if the TASK_NAME job fails.
Conditional Variable Assignment: You can conditionally assign variables using a combination of if and variables:
TASK_NAME:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      variables:
        DEPLOY_VARIABLE: "production"
    - if: '$CI_COMMIT_BRANCH =~ /demo/'
      variables:
        DEPLOY_VARIABLE: "demo"
When
The when directive determines when a job should run, for example on manual trigger or with a delay. Possible values include:
- on_success (default): runs if all previous jobs have succeeded or have allow_failure: true.
- manual: requires manual triggering from the GitLab CI/CD interface.
- always: always runs, regardless of previous results.
- on_failure: runs only if at least one previous job has failed.
- delayed: delays job execution; the delay is controlled with the start_in directive.
- never: never runs.
Let’s explore some examples:
1. Manual:
TASK_NAME:
  ...
  when: manual
The job won’t start until you manually trigger it in the GitLab CI/CD panel.
2. Always:
TASK_NAME:
  ...
  when: always
The job will always run. Useful, for instance, when generating a report regardless of build results.
3. On_failure:
TASK_NAME:
  ...
  when: on_failure
The job will run if there is a failure in previous stages. You can use this for sending notifications.
4. Delayed:
TASK_NAME:
  ...
  when: delayed
  start_in: 30 minutes
The job will be delayed by 30 minutes.
5. Never:
TASK_NAME:
  ...
  when: never
The job will never run.
6. Using with if:
TASK_NAME:
  ...
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: manual
In this example, the job is added to the pipeline only for commits to the default branch, and it must then be started manually.
Needs
Allows you to specify conditions for job execution based on the presence of specific artifacts or completed jobs. With rules of this type, you can control the order in which jobs are executed.
Let’s look at some examples.
- Artifacts: Controls whether the artifacts of a needed job are downloaded. It accepts true (default) or false and is used together with job. Using this configuration:
TASK_NAME:
  ...
  needs:
    - job: createBuild
      artifacts: false
…allows the job to start without downloading the artifacts of createBuild.
- Job: You can start a job only after another job has completed:
TASK_NAME:
  ...
  needs:
    - job: createBuild
In this example, the task will only start after the job named createBuild has finished.
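A broader hedged sketch (the job names are assumptions): because of needs, a job can start as soon as the jobs it depends on are finished, without waiting for the rest of its stage:
build-linux:
  stage: build
  script:
    - ./build.sh linux

build-windows:
  stage: build
  script:
    - ./build.sh windows

test-linux:
  stage: test
  needs:
    - job: build-linux
  script:
    # starts right after build-linux finishes, even if build-windows is still running
    - ./test.sh linux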
Variables
In this section, we will discuss user-defined variables that you can use in your pipeline scripts as well as some built-in variables that can modify the pipeline’s behavior.
User-Defined Variables
User-defined variables are set using the variables directive. You can define them globally for all jobs:
variables:
  PKG_VER: "1.1.1"
Or for a specific job:
TASK_NAME:
  ...
  variables:
    PKG_VER: "1.1.1"
You can then use your custom variable in your script by prefixing it with a dollar sign and enclosing it in curly braces, for example:
script:
- echo ${PKG_VER}
GitLab Variables
These variables help you control the build process:
- LOG_LEVEL: Sets the log level (debug, info, warn, error, fatal, panic). Has lower priority than the command-line arguments --debug and --log-level. Example: LOG_LEVEL: warning
- CI_DEBUG_TRACE: Enables or disables debug tracing. Takes the values true or false. Example: CI_DEBUG_TRACE: true
- CONCURRENT: Limits the number of jobs that can run simultaneously. Example: CONCURRENT: 5
- GIT_STRATEGY: Controls how files are fetched from the repository (clone, fetch, or none to skip fetching). Example: GIT_STRATEGY: none
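For example, a hypothetical notification job that does not need the repository contents at all could disable fetching (the job name and command are assumptions):
notify:
  stage: .post
  variables:
    # skip fetching the repository; this job only reports status
    GIT_STRATEGY: none
  script:
    - echo "Pipeline for $CI_COMMIT_SHORT_SHA finished"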
Additional Options
In this section, we will cover various options that were not covered in other sections.
- Workflow: Allows you to define common rules for the entire pipeline. Let’s look at an example from the official GitLab documentation:
workflow:
  rules:
    - if: $CI_COMMIT_TITLE =~ /-draft$/
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In this example, the pipeline:
- Won’t be triggered if the commit title ends with “-draft.”
- Will be triggered if the pipeline source is a merge request event.
- Will be triggered if changes are made to the default branch of the repository.
- Default Values: Defined in the default directive. Options set here apply to all jobs but can be overridden at the job level:
default:
  image: centos:7
  tags:
    - ci-test
In this example, we’ve defined an image (e.g., a Docker image) and tags (which may be required by some runners); a job can override them, as shown in the sketch below.
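A short assumed sketch of overriding the defaults at the job level (the job name and command are assumptions):
default:
  image: centos:7
  tags:
    - ci-test

build-job:
  stage: build
  image: debian:11      # overrides the default image for this job only
  script:
    - ./build.sh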
Import Configuration from Another YAML File: This can be useful for creating a shared part of a script that you want to apply to all pipelines, or for breaking a complex script into manageable parts. It is done using the include option, which supports several ways of loading files. Let’s explore them in more detail.
a) Local File Inclusion (local):
include:
- local: .gitlab/ci/build.yml
b) Template Collections (template):
include:
- template: Custom/.gitlab-ci.yml
In this example, we include the contents of the Custom/.gitlab-ci.yml file in our script.
c) External File Available via HTTP (remote):
include:
- remote: 'https://gitlab.dmosk.ru/main/build.yml'
d) Another Project:
include:
  - project: 'public-project'
    ref: main
    file: 'build.yml'
!reference tags: Allows you to describe a script and reuse it for different stages and tasks in your CI/CD. For example:
.apt-cache:
  script:
    - apt update
  after_script:
    - apt clean all

install:
  stage: install
  script:
    - !reference [.apt-cache, script]
    - apt install nginx

update:
  stage: update
  script:
    - !reference [.apt-cache, script]
    - apt upgrade
    - !reference [.apt-cache, after_script]
Let’s break down what’s happening in our example:
- We created a task called .apt-cache. The dot at the beginning of the name tells the system not to start this task automatically when the pipeline runs. The task consists of two sections, script and after_script (there can be more).
- We execute the install job. In one of its script lines, we call .apt-cache (only the commands from the script section).
- In the update job, we call .apt-cache twice: the first reference executes the commands from the script section, and the second those from the after_script section.
These are the fundamental concepts and options of GitLab CI/CD pipelines. You can use these directives and templates as a starting point for configuring your CI/CD pipelines efficiently. For more advanced use cases and additional options, refer to the official GitLab CI/CD documentation.