Jenkins - Shared Libraries
Code reuse is one of the most essential techniques for a CI/CD platform. Jenkins supports it through shared pipeline libraries. Shared libraries are composed of code, with simple or complex logic, stored in a source code repository that is automatically downloaded within the pipeline and made available as pipeline functions. They encourage standard ways to invoke common functionality and provide building blocks for more complex operations.
Untrusted vs. Trusted Shared Libraries
Untrusted Shared Library:
An untrusted library is code that is restricted in what it can invoke and use. It is not allowed the same freedom to call methods in a pipeline as a trusted library, and it cannot access the larger set of Jenkins internal objects that trusted code can. Untrusted code runs in the Groovy Sandbox, which has a predefined list of methods that are considered safe to call. When running in the Sandbox, Jenkins internally monitors whether the library code attempts to call any methods that are not on the safe list. If it does, the code is stopped and approval must be granted by an administrator.
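As an illustration, a library method like the following (a hypothetical sketch, not one of this article's examples) would be blocked in the Sandbox, because reading Jenkins internals such as Jenkins.instance is not on the safe list; an administrator would have to approve the rejected signatures under Manage Jenkins --> In-process Script Approval before it could run as untrusted code:

// src/org/demo/adminUtils.groovy (hypothetical untrusted library code)
package org.demo

def countAgents() {
    // Jenkins.instance and getNodes() are not sandbox-safe calls,
    // so this line is rejected until an administrator approves it
    return jenkins.model.Jenkins.instance.nodes.size()
}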
Trusted Shared Library:
Trusted libraries are ones that can call and use any methods in Java, the Jenkins API, Jenkins plugins, the Groovy language, etc. Because trusted libraries have such wide latitude in what they can call and use, it is important that access to add or change code in them is managed and restricted. Making updates to trusted libraries should require an appropriate level of source control access and verification. For these reasons, code that needs privileges should always be contained in a trusted library where there is oversight.
Internal vs. External Libraries
Internal Libraries:
This is an older method of managing libraries. Jenkins 2.0 includes an internal Git repository that can be leveraged to store internal libraries; it is best suited for testing purposes. Any content put in this library is trusted for all scripts, but anyone pushing to it has to have the appropriate administrative permissions.
The internal Git repository has a specific name: workflowLibs.git
External Libraries:
External libraries are defined and maintained outside of Jenkins (usually stored in source control). Global pipeline libraries are defined under Manage Jenkins --> Configure System.
Shared libraries can be defined at various places within Jenkins. The key difference between these locations is whether the library code is treated as trusted or untrusted.
Loading shared libraries in pipeline using @Library annotation
In object-oriented languages like Java, an annotation is metadata that can be put in the code to augment (or “annotate”) other code. Use the @Library annotation in your pipeline script to load a library. The name of the library to load, and optionally a version, are specified as arguments. Here’s the basic syntax:
@Library('<libname>[@<version>]')_ [<import statement>]
Some key points to be noted about the syntax:
- The library name is required.
- The version should be preceded by the @ sign.
- The version can be a tag, branch name, or other specification of a revision in the source code repository.
- Specific subsets of methods can be imported by including an import statement at the end of the annotation or on the next line.
- An import statement is not required. If one is not specified, all methods will be imported.
- If no import statement is specified, then an underscore (_) must be placed at the end of the annotation, directly after the closing parenthesis. (This is required since an annotation needs something to annotate by definition. In this case, the _ is simply serving as a placeholder.)
- Multiple library names (with respective versions if desired) can be specified in the same annotation. Just separate them with commas.
// Load the default version of a library
@Library('sharedLib')_

// Override the default version and load a specific version of a library
@Library('sharedLib@2.0')_

// Accessing multiple libraries with one statement
@Library(['sharedLib', 'utilitiesLib@master'])_

// Annotation with import
@Library('sharedLib@2.0') import static org.demo.Utilities.*
Structure of Shared Library Code (Best Practices):
The shared libraries feature has a predefined folder structure it expects. At the highest level, a shared library folder has three subfolders in it: src, vars, and resources.
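For illustration, a minimal library repository following this layout might look like the tree below (the specific file names are illustrative, chosen to match the examples used later in this article):

(root of shared library repository)
├── src/                      # Groovy classes in a Java-style package layout
│   └── org/servana/buildUtils.groovy
├── vars/                     # global variables (custom pipeline steps)
│   ├── timedCommand.groovy
│   └── timedCommand.txt      # optional help text (HTML or Markdown)
└── resources/                # non-Groovy files, loaded with libraryResource
    └── org/servana/defaults.json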
src folder:
This folder is intended to be set up with Groovy files in the standard Java directory structure (e.g., src/org/servana/utils.groovy). It is added to the classpath when pipelines are executed.
Any Groovy code is valid to use here. However, in most cases you will probably want to invoke some kind of pipeline processing using actual pipeline steps. There are several options for how to implement the step calls within the library and, correspondingly, how to invoke them from the script.
Example-1:
A simple Groovy method, not enclosed in a class.
// org.servana.buildUtils
package org.servana

def timedGradleBuild(tasks) {
    timestamps {
        sh "${tool 'gradle3.5'}/bin/gradle ${tasks}"
    }
}
The above method can be invoked within a pipeline as follows:
node {
    def utils = new org.servana.buildUtils()
    git "git@github.com:servana/demo.git"
    utils.timedGradleBuild("clean build")
}
Example-2:
Create an enclosing class (to facilitate things like defining a superclass). The class can also get access to all of the DSL steps by being passed the steps object, either in a constructor or in a method of the class:
// org.servana.buildUtils
package org.servana

class buildUtils implements Serializable {
    def steps

    // constructor
    buildUtils(steps) {
        this.steps = steps
    }

    def timedGradleBuild(tasks) {
        def gradleHome = steps.tool 'gradle3.2'
        steps.timestamps {
            steps.sh "${gradleHome}/bin/gradle ${tasks}"
        }
    }
}
The tool step in the above code (steps.tool) again references the installed version of the Gradle tool that was configured in the Global Tool Configuration. It returns the path associated with the tool of that name. This is a cleaner way to do it than the approach in Example-1. Since we are enclosing this in a class, the class must implement Serializable to support saving the state of the build if the pipeline or Jenkins is stopped or restarted.
The above class can be invoked in a pipeline as follows:
@Library('bldtools')
import org.servana.buildUtils

node {
    def bldtools = new buildUtils(steps)
    git "git@github.com:servana/demo.git"
    bldtools.timedGradleBuild 'clean build'
}
Example-3:
Other items, like environment variables, can be passed in the same way as the steps. The following code shows how to pass in the env object and use it in the library code:
// org.servana.buildUtils
package org.servana

class buildUtils implements Serializable {
    def env
    def steps

    // constructor
    buildUtils(env, steps) {
        this.env = env
        this.steps = steps
    }

    def timedGradleBuild(tasks) {
        def gradleHome = steps.tool 'gradle3.2'
        steps.sh "echo Building for ${env.BUILD_TAG}"
        steps.timestamps {
            steps.sh "${gradleHome}/bin/gradle ${tasks}"
        }
    }
}
The above class can be invoked in a pipeline as follows:
@Library('bldtools')
import org.servana.buildUtils

node {
    git "git@github.com:servana/demo.git"
    def bldtools = new buildUtils(env, steps)
    bldtools.timedGradleBuild 'clean build'
}
vars folder:
This area is for hosting scripts that define variables and associated methods that can be accessed in the pipeline. The basename of each script (the filename without the extension) should be a valid Groovy identifier. You can also have a .txt file that contains help or other documentation for the variable; this documentation file can be HTML or Markdown.
You are allowed to define any methods that can be referred to in the pipeline.
// vars/timedCommand.groovy
def setCommand(commandToRun) {
    cmd = commandToRun
}

def getCommand() {
    cmd
}

def runCommand() {
    timestamps {
        cmdOut = sh(script: "${cmd}", returnStdout: true).trim()
    }
}

def getOutput() {
    cmdOut
}
cmd and cmdOut above are not fields; they are objects created on demand. The timedCommand object can now be used as follows in a pipeline script:
node {
    timedCommand.setCommand('ls -la')
    echo timedCommand.getCommand()
    timedCommand.runCommand()
    echo timedCommand.getOutput()
}
Using global variables like steps:
It is easy to create global variable definitions that act like steps in pipeline scripts. That is, they can be called like regular pipeline steps. The trick to this is to define the call method in the global variable’s definition. Any valid pipeline DSL code can be in the body of the call method.
// vars/timedCommand2.groovy
def call(String cmd) {
    timestamps {
        cmdOutput = sh(script: "${cmd}", returnStdout: true).trim()
    }
    echo cmdOutput
}
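Assuming the library containing vars/timedCommand2.groovy is configured as a global library named sharedLib (the library name here is only an assumption for illustration), the global variable can then be invoked just like a built-in step:

@Library('sharedLib')_
node {
    // invokes the call() method defined in vars/timedCommand2.groovy
    timedCommand2 'ls -la'
}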
resources folder:
Non-Groovy files can be stored in this directory. They can be loaded via the libraryResource step in an external library.
This is intended for allowing your external libraries to load up any additional non-Groovy files they may need. An example could be a datafile of some kind, such as an XML or JSON file, or any other file that the library needs to use. The file is loaded as a string.
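As a minimal sketch (the step name and resource file below are hypothetical), a global variable in the same library could load a bundled JSON file like this:

// vars/loadDefaults.groovy (hypothetical example)
def call() {
    // libraryResource returns the content of resources/org/servana/defaults.json as a String
    def text = libraryResource 'org/servana/defaults.json'
    writeFile file: 'defaults.json', text: text
    echo "Loaded ${text.length()} characters of default configuration"
}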
Using Third-Party Libraries:
Shared libraries can also make use of third-party libraries using the @Grab annotation. The @Grab annotation is provided through the Grape dependency manager that is built into Groovy. It allows you to pull in any dependency from a Maven repository, such as Maven Central. This can be done from trusted libraries, but it does not work in the Groovy Sandbox.
Below is an example function using @Grab to pull in an Apache Commons dependency. Similar to the other examples above, the code uses a stopwatch to time how long execution of a command takes. The routine is written entirely in Groovy code (as noted earlier, libraries have access to all Groovy constructs):
// vars/timedCommands1.groovy
@Grab('org.apache.commons:commons-lang3:3.4')
import org.apache.commons.lang3.time.StopWatch

def call(String cmdToRun) {
    def sw = new StopWatch()
    sw.start()
    def proc = cmdToRun.execute()
    proc.waitFor()
    sw.stop()
    println("The process took ${(sw.getTime() / 1000).toString()} seconds.\n")
}
The code could be invoked like this from a pipeline script:
node {
    timedCommands1("sleep 10")
}
Loading Code Directly
It is possible to load code directly via the load step. This is similar to shared library code in terms of syntax. It differs in that the code is not pulled from source control. To load code directly, you just need to have your function stored in a location that is accessible to the pipeline.
Example:
def call(String cmd, String logFilePath) {
    timestamps {
        cmdOutput = sh(script: "${cmd}", returnStdout: true).trim()
    }
    echo cmdOutput
    writeFile file: "${logFilePath}", text: "${cmdOutput}"
}
return this
Pipeline Usage:
node {
    def useProc = load '/home/servana/timedCommand.groovy'
    useProc 'ls -la', 'command.log'
}