
I've been interested in converting our bespoke Jenkins integrations into a pipeline. However, I can't seem to figure out how to do it.

Can anyone help me with the Jenkins script that could do the following?

1---2---3-----------9---10
    |           |
    |---4-------|
    |           |
    |---5---6---|
        |       |
        |---7---|

1: Start pipeline
10: End pipeline
5: Build some files
   * needed by 6, 7,
   * needed as artifacts at the end
2, 3, 4, 6, 7: Have JUnit result files, which should be available at the
   end of the run (somewhere), even if one step failed

Is this even possible? Or should I just join after 3, 4, 5? Like this:

1---2---3-------6-------9---10
    |       |   |   |
    |---4---|   7---|
    |       |
    |---5---|
Bert Goethals

3 Answers


Based on the comments to my question and some basic testing, the following seems to work:
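A minimal scripted-pipeline sketch of the shape from the question, where 2 and 3 run first, then 4 and the 5→(6, 7) chain run in parallel, then everything joins at 9. The stage names, result paths, and stash name here are illustrative assumptions, not taken from the original answer:

    // Sketch only: echo/junit/stash bodies stand in for the real steps.
    node {
        stage('2') { echo 'step 2' }
        stage('3') {
            echo 'step 3'
            junit allowEmptyResults: true, testResults: 'results/3/*.xml'
        }
        parallel(
            'branch-4': {
                echo 'step 4'
                junit allowEmptyResults: true, testResults: 'results/4/*.xml'
            },
            'branch-5-6-7': {
                stage('5') {
                    echo 'build some files'
                    // Make the built files available to 6, 7 and the final stage
                    stash name: 'built-files', includes: 'build/**'
                }
                parallel(
                    'branch-6': { junit allowEmptyResults: true, testResults: 'results/6/*.xml' },
                    'branch-7': { junit allowEmptyResults: true, testResults: 'results/7/*.xml' }
                )
            }
        )
        stage('9') {
            unstash 'built-files'
            archiveArtifacts artifacts: 'build/**'
        }
    }

With `allowEmptyResults: true` the `junit` step records whatever result files exist in each branch, so the reports survive even when one branch fails.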

Bert Goethals

I had a similar situation in which I wanted to nest parallel job threads inside another parallel block. This code worked for me:

def performDeploymentStages(String node, String app) {
    stage("build") {
        echo "Building the app [${app}] on node [${node}]"
    }
    stage("deploy") {
        echo "Deploying the app [${app}] on node [${node}]"
    }
    stage("test") {
        echo "Testing the app [${app}] on node [${node}]"
    }
}

pipeline {
    agent {
        label 'master'
    }
    parameters {
        string(name: 'NODES', defaultValue: '1,2,3', description: 'Nodes to build, deploy and test')
        choice(name: 'ENV', choices: 'qa', description: 'Environment')
        string(name: 'APPS', defaultValue: 'app01,app02', description: 'App names')
    }

    stages {
        stage('parallel stage') {
            steps {
                script {
                    def nodes = [:]
                    for (node in params.NODES.tokenize(',')) {
                        def nodeName = node // capture for the closures below
                        def apps = [:]
                        for (app in params.APPS.tokenize(',')) {
                            def appName = app // capture for the closure below
                            // Store a closure; calling the method here would run it serially
                            apps["${nodeName}-${appName}"] = {
                                performDeploymentStages(nodeName, appName)
                            }
                        }
                        // Each node entry runs its own apps in parallel
                        nodes[nodeName] = { parallel apps }
                    }
                    parallel nodes
                }
            }
        }
    }
}

To fully benefit from the parallel runs, remember to assign enough executors.

biniosuaf

Working off of the existing answers and the Jenkins example documentation I came up with the following nested parallel solution. The Stage View (2.19) plugin and Blue Ocean (1.24.5) do not display them as you would hope, so I used the Groovy Postbuild (2.5) plugin (whitelisted) to show a result summary on the build page of the Classic View (the Badge plugin may work as well).

Current environment is using Jenkins LTS 2.277.2, a main "trigger" job, and 4 downstream jobs that build the pieces of my application, all of which are Multibranch pipelines.

I added a method to trigger my downstream Multibranch Pipeline jobs, since they all do the same thing. This is at the end of my Jenkinsfile.

// In case you want to work on the results of the downstream jobs
buildResults = []

// Common code to trigger a downstream job
// Outputs a URL that you can follow in Blue Ocean and Classic View
def performTriggerJob(String jobID) {
    // We need to wrap what we return in a Groovy closure, or else it's invoked
    // when this method is called, not when we pass it to parallel.
    // To do this, you need to wrap the code below in { }, and either return
    // that explicitly, or use { -> } syntax.
    return {
        GIT_BRANCH_FIXUP = env.BRANCH_NAME.replace("/", "%2F")
        // propagate: false so we can work after it returns. Try/catch will also work
        res = build(job: "../${jobID}/${GIT_BRANCH_FIXUP}", propagate: false)
        echo "${jobID}: ${res.getResult()} - ${res.buildVariables.get('RUN_DISPLAY_URL')}"
        // Add to buildResults
        buildResults << res

        text = "Downstream Run Result: <a href=\"${res.buildVariables.get('RUN_DISPLAY_URL')}\">${jobID} - ${res.getResult()}</a>"

        if (res.getResult().equals("SUCCESS")) {
            manager.createSummary("green.gif").appendText(text)
        } else {
            manager.createSummary("red.png").appendText(text)
            error 'JOB FAILED' // this fails the stage
        }
    }
}

Then in the main pipeline of the main Triggering job, I use script blocks to start the nested parallel jobs.

pipeline {
    agent any
    stages {
        stage('Build Nested parallel') {
            steps {
                script {
                    // Runs all these builds in parallel, the 2 nested Jobs
                    // have a requirement on the cmake build, but can also run in
                    // parallel after that finishes. Declarative cannot nest, but
                    // Scripted can.
                    // This is a map of 'name': action
                    // performTriggerJob returns a Closure which is why this is just a method "call"
                    // as Groovy will invoke .call() on parallel 'actions' automatically
                    def builds = [
                        "subJob1": performTriggerJob('subJob1'),
                        "subJob2": performTriggerJob('subJob2'),
                        "subJob3": {
                            script {
                                // Since we have more work to do than just trigger a job,
                                // we have to .call() this Closure directly in the script block to get it to run
                                performTriggerJob('subJob3').call()

                                // You can also pass a map directly into the parallel
                                parallel (
                                    'nestedJob1': performTriggerJob('nestedJob1'),
                                    'nestedJob2': performTriggerJob('nestedJob2')
                                )
                            }
                        }
                    ]
                    parallel builds
                }
            }
        }
        // Other stages
    }
}

Since performTriggerJob adds the results of the jobs to the variable buildResults we can do something with all the results once everything is done. I copy all artifacts from the last successful build and then zip them up so they're available on the trigger job.

buildResults is a list of (what I believe are) RunWrapper objects, so the methods in the RunWrapper Javadoc are what you can call on these objects.

// Other stages
stage("Post steps") {
    steps {
        script {
            for (finishedBuildObj in buildResults) {
                // Obj to use. Could be this build or the last successful build
                buildObj    = finishedBuildObj
                projectName = buildObj.getFullProjectName()
                if (buildObj.getResult() != "SUCCESS") {
                    buildObj = buildObj.getPreviousSuccessfulBuild()
                }
                copyArtifacts(
                    projectName: projectName,
                    // Allow this build to continue even if no build is found matching the "Which build" condition,
                    // the build's workspace does not exist or is inaccessible, or no artifacts are found matching the specified pattern.
                    // By default this build step fails the build if no artifacts are copied.
                    optional: true,
                    // Ignore the directory structure of the artifacts in the source project and copy all matching artifacts directly into the specified target directory.
                    // By default the artifacts are copied in the same directory structure as the source project.
                    flatten: true,
                    // Target base directory for copy, or leave blank to use the workspace.
                    // Directory (and parent directories, if any) will be created if needed.
                    // May contain references to build parameters like $PARAM.
                    target: 'allArtifacts',
                    fingerprintArtifacts: true,
                    // How to select the build to copy artifacts from, such as latest successful or stable build, or latest "keep forever" build. Other plugins may provide additional selections.
                    // The build number of the selected build will be recorded in the environment for later build steps to reference. For details, see the help of "Result variable suffix" in "Advanced" section.
                    selector: specific(buildObj.getNumber().toString())
                )
            }
        }
    }
}
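The zip step mentioned above isn't shown in the stage; assuming the Pipeline Utility Steps plugin is installed, it could look something like this, added after the loop:

    // Hypothetical follow-up: bundle everything copied into allArtifacts/
    // and archive the zip on the trigger job itself.
    // zip comes from the Pipeline Utility Steps plugin.
    zip zipFile: 'allArtifacts.zip', dir: 'allArtifacts', archive: true

With archive: true the step archives the zip on this build, so all downstream artifacts are reachable from the trigger job's build page.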
KymikoLoco