Azure DevOps Services
Jenkins, an open-source automation server, is traditionally installed by enterprises in their own data centers and managed on-premises. Many providers also offer managed Jenkins hosting.
Alternatively, Azure Pipelines is a cloud-native continuous integration service. It provides management of multistage pipelines and of build agents on virtual machines hosted in the cloud.
Azure Pipelines also offers a fully on-premises option with Azure DevOps Server, for customers who have compliance or security concerns that require them to keep their code and build within the enterprise data center.
In addition, Azure Pipelines supports hybrid cloud and on-premises models. Azure Pipelines can manage build and release orchestration and enables build agents, both in the cloud and installed on-premises.
This article provides a guide to translate a Jenkins pipeline configuration to Azure Pipelines. It includes information about moving container-based builds and selecting build agents, mapping environment variables, and how to handle success and failures of the build pipeline.
Configuration
The transition from a Jenkins declarative pipeline to an Azure Pipelines YAML configuration is a familiar one. The two are conceptually similar: both support "configuration as code" and let you check your configuration into your version control system. Unlike Jenkins, however, Azure Pipelines uses industry-standard YAML to configure the build pipeline.
The concepts in Jenkins and Azure Pipelines, and the way they're configured, are similar. A Jenkinsfile lists one or more stages of the build process, each of which contains one or more steps that are performed in order. For example, a "build" stage might run a task to install build-time dependencies and then perform a compilation step, while a "test" stage might invoke the test harness against the binaries that were produced in the build stage.
For example:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
}
This Jenkinsfile translates easily to an Azure Pipelines YAML configuration, with a job corresponding to each stage, and steps to perform in each job:
azure-pipelines.yml
jobs:
- job: Build
  steps:
  - script: npm install
  - script: npm run build
- job: Test
  steps:
  - script: npm test
Container-based builds
Using containers in your build pipeline allows you to build and test within a Docker image that has the exact dependencies your pipeline needs already configured. It saves you from having to include a build step that installs additional software or configures the environment. Both Jenkins and Azure Pipelines support container-based builds.
In addition, both Jenkins and Azure Pipelines allow you to share the build directory on the host agent with the container as a volume by using the -v flag to docker. This allows you to chain multiple build jobs together that use the same sources and write to the same output directory. This is especially useful when you use many different technologies in your stack; for example, you may want to build your backend using a .NET Core container and your frontend with a TypeScript container.
For example, to run a build in an Ubuntu 22.04 ("Jammy") container, then run tests in an Ubuntu 24.04 ("Noble") container:
Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Build') {
            agent {
                docker {
                    image 'ubuntu:jammy'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make'
            }
        }
        stage('Test') {
            agent {
                docker {
                    image 'ubuntu:noble'
                    args '-v $HOME:/build -w /build'
                }
            }
            steps {
                sh 'make test'
            }
        }
    }
}
Azure Pipelines provides container jobs to enable you to run your build within a container:
azure-pipelines.yml
resources:
  containers:
  - container: jammy
    image: ubuntu:jammy
  - container: noble
    image: ubuntu:noble

jobs:
- job: build
  container: jammy
  steps:
  - script: make
- job: test
  dependsOn: build
  container: noble
  steps:
  - script: make test
In addition, Azure Pipelines provides a Docker task that allows you to run, build, or push an image.
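For example, a minimal sketch of building and pushing an image with the Docker@2 task; the service connection and repository names here are hypothetical placeholders:

```yaml
steps:
- task: Docker@2
  inputs:
    command: buildAndPush
    containerRegistry: my-registry-connection  # hypothetical Docker registry service connection
    repository: my-app                         # hypothetical image repository name
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)
```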
Agent selection
Jenkins offers build agent selection using the agent option to ensure that your build pipeline - or a particular stage of the pipeline - runs on a particular build agent machine. Similarly, Azure Pipelines offers many options to configure where your build environment runs.
Hosted agent selection
Azure Pipelines offers cloud-hosted build agents for Linux, Windows, and macOS builds. To select the build environment, you use the vmImage keyword. For example, to select a macOS build:
pool:
  vmImage: macOS-latest
Additionally, you can specify a container and a Docker image for finer-grained control over how your build is run.
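As a sketch, the following runs a job's steps inside a container on a hosted Ubuntu agent; the node:20 image tag is illustrative, and any accessible Docker image works:

```yaml
pool:
  vmImage: ubuntu-latest
container: node:20  # illustrative image; steps run inside this container
steps:
- script: node --version
```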
On-premises agent selection
If you host your build agents on-premises, then you can define the build agent "capabilities" based on the architecture of the machine or the software that you've installed on it. For example, if you've set up an on-premises build agent with the java capability, then you can ensure that your job runs on it by using the demands keyword:
pool:
  demands: java
Environment variables
In Jenkins, you typically define environment variables for the entire pipeline. For example, to set two environment variables, CONFIGURATION=debug and PLATFORM=x64:
Jenkinsfile
pipeline {
    agent any
    environment {
        CONFIGURATION = 'debug'
        PLATFORM = 'x64'
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo $CONFIGURATION $PLATFORM'
            }
        }
    }
}
Similarly, in Azure Pipelines you can configure variables that are used both within the YAML configuration and are set as environment variables during job execution:
azure-pipelines.yml
variables:
  configuration: debug
  platform: x64
Additionally, in Azure Pipelines you can define variables that are set only during a particular job:
azure-pipelines.yml
jobs:
- job: debug_build
  variables:
    configuration: debug
  steps:
  - script: ./build.sh $(configuration)
- job: release_build
  variables:
    configuration: release
  steps:
  - script: ./build.sh $(configuration)
Predefined variables
Both Jenkins and Azure Pipelines set a number of environment variables to help you inspect and interact with the execution environment of the continuous integration system.
| Description | Jenkins | Azure Pipelines |
|---|---|---|
| A unique numeric identifier for the current build invocation. | BUILD_NUMBER | BUILD_BUILDNUMBER |
| A unique identifier (not necessarily numeric) for the current build invocation. | BUILD_ID | BUILD_BUILDID |
| The URL that displays the build logs. | BUILD_URL | This value isn't set as an environment variable in Azure Pipelines but you can derive it from other variables.¹ |
| The name of the machine that the current build runs on. | NODE_NAME | AGENT_NAME |
| The name of this project or build definition. | JOB_NAME | RELEASE_DEFINITIONNAME |
| A string for identification of the build; the build number is a good unique identifier. | BUILD_TAG | BUILD_BUILDNUMBER |
| A URL for the host executing the build. | JENKINS_URL | SYSTEM_TEAMFOUNDATIONCOLLECTIONURI |
| A unique identifier for the build executor or build agent that runs currently. | EXECUTOR_NUMBER | AGENT_NAME |
| The location of the checked out sources. | WORKSPACE | BUILD_SOURCESDIRECTORY |
| The Git Commit ID corresponding to the version of software being built. | GIT_COMMIT | BUILD_SOURCEVERSION |
| Path to the Git repository on GitHub, Azure Repos, or another repository provider. | GIT_URL | BUILD_REPOSITORY_URI |
| The Git branch being built. | GIT_BRANCH | BUILD_SOURCEBRANCH |
¹ To derive the URL that displays the build logs in Azure Pipelines, combine the following environment variables in this format:
${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI}/${SYSTEM_TEAMPROJECT}/_build/results?buildId=${BUILD_BUILDID}
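As a sketch, a pipeline step can assemble and print that URL from the predefined variables, following the format above:

```yaml
steps:
- script: echo "Build results: ${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI}/${SYSTEM_TEAMPROJECT}/_build/results?buildId=${BUILD_BUILDID}"
```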
Success and failure handling
Jenkins allows you to run commands when the build has finished, using the
post section of the pipeline. You can specify commands that run when the
build succeeds (using the success section), when the build fails (using
the failure section) or always (using the always section). For example:
Jenkinsfile
post {
always {
echo "The build has finished"
}
success {
echo "The build succeeded"
}
failure {
echo "The build failed"
}
}
Similarly, Azure Pipelines has a rich conditional execution framework that allows you to run a job, or steps of a job, based on many conditions including pipeline success or failure.
To emulate Jenkins post-build conditionals, you can define jobs
that run based on the always(), succeeded() or failed() conditions:
azure-pipelines.yml
jobs:
- job: always
  steps:
  - script: echo "The build has finished"
  condition: always()
- job: success
  steps:
  - script: echo "The build succeeded"
  condition: succeeded()
- job: failed
  steps:
  - script: echo "The build failed"
  condition: failed()
Note
Jenkins supports additional post conditions beyond always, success, and failure:
- changed: Runs only if the current Pipeline's run has a different completion status from its previous run.
- fixed: Runs only if the current run succeeds and the previous run failed or was unstable.
- unstable: Runs only if the current Pipeline's run has an "unstable" status (typically caused by test failures).
- cleanup: Runs after all other post conditions have been evaluated, regardless of the Pipeline's status.
In Azure Pipelines, you can achieve similar functionality using conditions with expressions like eq(variables['Agent.JobStatus'], 'SucceededWithIssues') for unstable builds.
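For example, a sketch of a step that runs only when the job completed with issues:

```yaml
steps:
- script: echo "The build succeeded with issues"
  condition: eq(variables['Agent.JobStatus'], 'SucceededWithIssues')
```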
In addition, you can combine other conditions, like the ability to run a task based on the success or failure of an individual task, environment variables, or the execution environment, to build a rich execution pipeline.
Credentials handling
Jenkins provides a credentials() helper within the environment directive
to securely inject credentials into your pipeline. Jenkins supports several
credential types, including secret text, username/password pairs, and secret files.
Jenkinsfile
pipeline {
    agent any
    environment {
        AWS_ACCESS_KEY_ID     = credentials('aws-access-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('aws-secret-access-key')
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'aws s3 ls'
            }
        }
    }
}
In Azure Pipelines, you can manage secrets using variable groups, Azure Key Vault integration, or by defining secret variables directly in your pipeline:
azure-pipelines.yml
variables:
- group: my-aws-credentials  # Variable group linked to Azure Key Vault or containing secrets

jobs:
- job: Deploy
  steps:
  - script: aws s3 ls
    env:
      AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
      AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
You can also reference secrets directly from Azure Key Vault by using the AzureKeyVault@2 task:
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-azure-subscription'
    KeyVaultName: 'my-key-vault'
    SecretsFilter: 'AWS-ACCESS-KEY-ID,AWS-SECRET-ACCESS-KEY'
- script: aws s3 ls
  env:
    AWS_ACCESS_KEY_ID: $(AWS-ACCESS-KEY-ID)
    AWS_SECRET_ACCESS_KEY: $(AWS-SECRET-ACCESS-KEY)