This document provides guidelines for a correct and optimal implementation of the SCM API.
This document is structured as follows:
- The first section is an overview of all the elements that contribute to a correct, performant and optimal implementation of the SCM API.
- The subsequent sections consider each of the API extension points in turn, in the recommended order of implementation. If you stop at the end of any one section, you should have something that works, but it may not be optimal.
To help users discover plugins, please try to stick to the following naming guidelines:
- If your plugin provides an implementation of SCMSource, SCMNavigator or SCM, it is considered an "scm" plugin. Use an artifactId that reflects the SCM system you are providing for, e.g. git, subversion, mercurial, etc. If there is already an existing plugin with that name, the preference is to append -scm to the artifactId. Similarly, the name should be Git, Subversion, Mercurial, etc.; if there is a name conflict with an existing plugin, the preference is to append ` SCM` to the name.
  Note: sadly this practice was not followed for some plugins, and consequently we now have github-branch-source (should have been github-scm) and bitbucket-branch-source (should have been bitbucket-scm) because of the previously existing github and bitbucket plugins respectively.
- If your plugin provides implementations of SCM___Trait that behave like a filter (i.e. you are annotating it with @Selection), it is considered an "scm-filter" plugin. (A sketch of such a filter trait appears after this list.)
  - If the filter is generic (all the best filter implementations are generic), use an artifactId that starts with scm-filter-. The name should end with ` SCM Filter`. For example, a filter that selects tags based on their age might use an artifactId of scm-filter-tag-age and a plugin name of Tag Age SCM Filter.
  - If the filter is specific to a particular "scm" plugin, use an artifactId that starts with that plugin name followed by -scm-filter, obviously without doubling up to -scm-scm- and ignoring cases where the "scm" plugin made mistakes in its naming. The name should start with the SCM name and end with ` SCM Filter`. For example, a filter that selects GitHub pull requests based on their PR labels might use an artifactId of github-scm-filter-pr-labels and a plugin name of GitHub PR Label SCM Filter.
- If your plugin provides implementations of SCM___Trait that do not behave like a filter, or provides a mix of filters and non-filters, it is considered an "scm-trait" plugin.
  - If the trait is generic (all the best trait implementations are generic), use an artifactId that starts with scm-trait-. The name should end with ` SCM Behaviour`.
  - If the trait is specific to a particular "scm" plugin, use an artifactId that starts with that plugin name followed by -scm-trait, obviously without doubling up to -scm-scm- and ignoring cases where the "scm" plugin made mistakes in its naming. The name should start with the SCM name and end with ` SCM Behaviour`.
- If your consumer plugin does not fit the above criteria, please file a PR against this text to initiate a discussion on the best way to name your particular use-case.
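To make the filter guidance concrete, here is a minimal sketch of what the hypothetical scm-filter-tag-age example above might look like as a trait. The maxAgeDays configuration is an illustrative assumption; the trait, prefilter and @Selection wiring use the jenkins.scm.api.trait and jenkins.scm.impl.trait types, and imports are omitted for brevity, as in the other examples in this document:

public class TagAgeSCMFilterTrait extends SCMSourceTrait {
    /** Maximum age, in days, of tags that should be retained (hypothetical configuration). */
    private final int maxAgeDays;

    @DataBoundConstructor
    public TagAgeSCMFilterTrait(int maxAgeDays) {
        this.maxAgeDays = maxAgeDays;
    }

    public int getMaxAgeDays() {
        return maxAgeDays;
    }

    @Override
    protected void decorateContext(SCMSourceContext<?, ?> context) {
        // a prefilter can exclude heads without consulting the backing SCM
        context.withPrefilter(new SCMHeadPrefilter() {
            @Override
            public boolean isExcluded(@NonNull SCMSource source, @NonNull SCMHead head) {
                if (head instanceof TagSCMHead) {
                    long age = System.currentTimeMillis() - ((TagSCMHead) head).getTimestamp();
                    return age > TimeUnit.DAYS.toMillis(maxAgeDays);
                }
                return false; // only tag-like heads are filtered by this trait
            }
        });
    }

    @Selection
    @Extension
    public static class DescriptorImpl extends SCMSourceTraitDescriptor {
        @Override
        public String getDisplayName() {
            return "Filter tags by age"; // better: a localized Messages entry
        }
    }
}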
The SCM API consists of a number of extension points:
- The original hudson.scm.SCM extension point is responsible for:
  - performing checkouts from source control into a FilePath
  - calculating the changelogs between revisions
  - constructing the changelog parser
- The jenkins.scm.api.SCMSource extension point is responsible for:
  - identifying heads / branches that are available
  - tracking the current revisions of heads
  - constructing hudson.scm.SCM instances for a given head and revision pair
- The jenkins.scm.api.SCMNavigator extension point is responsible for:
  - enumerating potential jenkins.scm.api.SCMSource instances
Some examples of how these concepts may be mapped into well known source control systems:
- Git has a fairly direct mapping of concepts:
  - jenkins.scm.api.SCMHead would represent the branches and tags in an individual Git repository.
  - jenkins.scm.api.SCMRevision would represent the Git commit hash of each individual commit. Because Git commits are immutable, each SCMRevision is deterministic: if you ask to check out the same hash you will always get the same contents. Some Git based source control systems - such as GitHub - have the concept of a change/pull request. The SCMRevision of a GitHub pull request would actually be a composite revision consisting of both the pull request head revision and the base branch head revision.
  - jenkins.scm.api.SCMSource would represent a single Git repository.
  - jenkins.scm.api.SCMNavigator would represent a collection of Git repositories - most likely all on the same server. Another example of a collection of Git repositories could be a GitHub organization.
- Subversion can have a fairly direct mapping of concepts analogous to the Git mapping above. Alternatively, some organizations will use a single Subversion repository to hold multiple projects, under which each project would have the familiar trunk/branches/tags structure. If we consider this style of Subversion use then:
  - jenkins.scm.api.SCMHead would be mapped against a specific project and be one of: trunk, branches/name or tags/name.
  - jenkins.scm.api.SCMRevision would be mapped to the global Subversion revision number at which the specific SCMHead was last modified. Each SCMRevision is deterministic: if you ask to check out the trunk/branch/tag at the same revision you will always get the same contents, e.g. svn co http://svn.example.com/project/trunk@1234
  - jenkins.scm.api.SCMSource would represent a single project within the Subversion repository.
  - jenkins.scm.api.SCMNavigator would represent the collection of projects in the root of the Subversion repository.
- CVS: one mapping of the CVS concepts would be:
  - jenkins.scm.api.SCMHead would represent a HEAD / branch / tag in a module.
  - jenkins.scm.api.SCMRevision would represent a timestamp (for its jenkins.scm.api.SCMHead). Sadly the timestamp is not deterministic, as an immediate checkout may miss some files that are still being committed to the repository.
    Note: an alternative deterministic jenkins.scm.api.SCMRevision would be a list of all the files and their individual revisions, but this would be prohibitive in terms of storage cost and is non-trivial for a user to replicate from their own workspace, so the timestamp would be preferred. CVS timestamps are technically deterministic to within 1 second resolution; the issue is that during the current second a checkout can pick up different files that are in the process of being committed. Ignoring the problems of time synchronization in a distributed system, we could assume that any timestamp more than 1 second in the past should be deterministic, in order to allow consuming plugins to make optimizations that are only possible with deterministic revisions. In reality, we do not know the time difference between the CVS server's clock and the Jenkins master's clock, so we would probably need to use a larger time difference between the timestamp and the Jenkins master's clock in order to safely assume that the timestamp is more than one second in the past for the CVS server and has thus become deterministic.
  - jenkins.scm.api.SCMSource would represent an individual module on the CVS server.
  - jenkins.scm.api.SCMNavigator would represent the collection of modules available from a single CVS server.
Implementers are free to map the concepts to their own SCM system as they see fit, but the recommendation is to try to keep close to the principles of mapping outlined in the above examples.
The concepts we have covered so far determine how Jenkins plugins can drive interactions with the SCM system. While a Jenkins driven interaction with an SCM is sufficient for enabling advanced SCM functionality such as that provided by the Branch API plugin, it does not lend itself to a good user experience, as Jenkins would be required to continually poll the backing SCM to establish whether there are any changes. In order to minimize the load on Jenkins and the SCM system, as well as to minimize the amount of time between a change being committed to the SCM system and Jenkins responding to the change, it is necessary to implement the eventing portions of the SCM API.
There are currently three classes of events:
- jenkins.scm.api.SCMHeadEvent represents an event concerning a jenkins.scm.api.SCMHead, such as:
  - the creation of a new jenkins.scm.api.SCMHead within a specific jenkins.scm.api.SCMSource,
  - a change in revision of a jenkins.scm.api.SCMHead,
  - a change in metadata about a specific jenkins.scm.api.SCMHead, and
  - the removal of an existing jenkins.scm.api.SCMHead from a jenkins.scm.api.SCMSource.
- jenkins.scm.api.SCMSourceEvent represents an event concerning a jenkins.scm.api.SCMSource, such as:
  - the creation of a new jenkins.scm.api.SCMSource within a specific jenkins.scm.api.SCMNavigator,
  - a change in metadata about a specific jenkins.scm.api.SCMSource, and
  - the removal of an existing jenkins.scm.api.SCMSource from a jenkins.scm.api.SCMNavigator.
- jenkins.scm.api.SCMNavigatorEvent represents an event concerning a jenkins.scm.api.SCMNavigator, such as:
  - the creation of a new jenkins.scm.api.SCMNavigator
    Note: there is currently no envisioned use case for this event, as it would likely require a containing context for the jenkins.scm.api.SCMNavigator instances.
  - a change in metadata about a specific jenkins.scm.api.SCMNavigator, and
  - the removal of an existing jenkins.scm.api.SCMNavigator.
Not every event is required to be provided by the backing SCM system. The primary events ensure that Jenkins responds promptly to activity in the backing source control system. They are, in order of priority:
- jenkins.scm.api.SCMHeadEvent of type UPDATED, representing the change of revision of a specific head. When this event is implemented, it removes the need to continually poll for revision changes, and builds can be triggered as soon as the event is received, which benefits user responsiveness.
- jenkins.scm.api.SCMHeadEvent of type CREATED, representing the creation of a new head. When this event is implemented, it removes the need to continually poll the jenkins.scm.api.SCMSource to identify untracked jenkins.scm.api.SCMHead instances.
- jenkins.scm.api.SCMSourceEvent of type CREATED, representing the creation of a new source. When this event is implemented, it removes the need to continually poll the jenkins.scm.api.SCMNavigator to identify untracked jenkins.scm.api.SCMSource instances.
The secondary events ensure that state changes in the source control system are reflected promptly within Jenkins. These secondary events will not trigger builds. They are, in order of priority:
- jenkins.scm.api.SCMHeadEvent of type REMOVED, representing the removal of a specific head. When this event is implemented, it means that Jenkins can "deactivate" any resources (i.e. jobs) that are dedicated to tracking that head.
  Note: it is likely that the resources (i.e. jobs) cannot be removed until Jenkins performs a full scan, as the SCM API is designed for the use case where you have multiple sources attached to the same owner, and the reason for removal from one source may be a move to another source. Without a full scan of all sources, the priority claims of multiple sources cannot be determined.
- jenkins.scm.api.SCMSourceEvent of type REMOVED, representing the removal of a specific source. When this event is implemented, it means that Jenkins can "deactivate" any resources (i.e. jobs) that are dedicated to tracking that source.
The tertiary events relate to metadata updates, such as URLs, display names or descriptions about the various resources being tracked. The kind of tertiary information that these events represent may not be available for all source control systems. In cases where the source control system provides an API to store such metadata, it may be the case that there are no events generated when the metadata is modified. The tertiary events are, in order of priority:
- jenkins.scm.api.SCMHeadEvent of type UPDATED, representing the change of metadata for a specific head, such as the description of a branch / change request.
- jenkins.scm.api.SCMSourceEvent of type UPDATED, representing the change of metadata for a specific source, such as:
  - the description of the source
  - the display name of the source
  - the information URL of the source
  - the avatar of the source
- jenkins.scm.api.SCMNavigatorEvent of type UPDATED, representing the change of metadata for a collection of sources as an aggregate, such as:
  - the description of the collection
  - the display name of the collection
  - the information URL of the collection
  - the avatar of the collection
Implementations are free to use the event system to publish other events as appropriate, provided the type of event is logically mapped.
The next step in implementing the SCM API is to allow consuming plugins to perform deeper identification of interesting jenkins.scm.api.SCMHead instances.
Consuming plugins may not be interested in every single jenkins.scm.api.SCMHead.
For example:
- the Pipeline Multibranch Plugin is only interested in jenkins.scm.api.SCMHead instances that have a Jenkinsfile in the root of the checkout.
- the Literate Plugin is only interested in jenkins.scm.api.SCMHead instances that have a marker file (configurable, with the default being .cloudbees.md) in the root of the checkout.
Each SCM API consuming plugin defines the criteria by implementing jenkins.scm.api.SCMSourceCriteria.
Each jenkins.scm.api.SCMSourceOwner can specify the criteria for the jenkins.scm.api.SCMSource instances that it owns.
When a jenkins.scm.api.SCMSource has been supplied with a jenkins.scm.api.SCMSourceCriteria, it will need to provide a jenkins.scm.api.SCMProbe when identifying potential jenkins.scm.api.SCMHead instances.
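To illustrate the consuming side, here is a minimal sketch of a criteria that only accepts heads containing a Jenkinsfile in the root of the checkout, roughly what the Pipeline Multibranch Plugin does. The class name is hypothetical; the Probe and SCMProbeStat types come from jenkins.scm.api, and imports are omitted for brevity:

public class JenkinsfileSCMSourceCriteria implements SCMSourceCriteria {
    private static final long serialVersionUID = 1L;

    @Override
    public boolean isHead(@NonNull Probe probe, @NonNull TaskListener listener) throws IOException {
        // the probe lets us check for the marker file without a full checkout
        SCMProbeStat stat = probe.stat("Jenkinsfile");
        if (stat.getType() == SCMFile.Type.REGULAR_FILE) {
            return true;
        }
        listener.getLogger().println("No Jenkinsfile found, skipping this head");
        return false;
    }

    // criteria instances are compared by the consuming plugin, so also implement
    // equals() and hashCode() in a real implementation
}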
Note
|
Implementations of When a consuming plugin is processing a
Thus every SCM API consuming plugin that listens for a
Consumers can safely ignore whether a specific event is trusted or not.
To illustrate why consumers do not need to know about the trust state of an event, consider how a consumer responds to a
|
Consumers of the SCM API may want more advanced criteria to check the contents of specific files in the head / branch. Additionally, in some cases consumers of the SCM API may want to inspect specific files in the source control system in order to determine how to process that head / branch. For example,
- when the Pipeline Multibranch Plugin needs to build a specific revision of a specific branch, it first needs to parse the Jenkinsfile in order to determine the build plan.
- when the Literate Plugin needs to build a specific revision of a specific branch, it first needs to parse the README.md in order to determine the matrix of execution environments against which to build.
Consumers of the SCM API cannot assume that every SCM API implementation has the ability to perform deep inspection of specific files at specific revisions, and thus they must be able to fall back to performing a full check-out.
SCM API implementations indicate their support for deep inspection both by returning a non-null value from jenkins.scm.api.SCMProbe.getRoot() and/or by implementing the jenkins.scm.api.SCMFileSystem.Builder extension point.
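From the consuming side, the typical pattern is to ask for an SCMFileSystem and fall back to a full checkout when none is available. A minimal sketch, assuming a hypothetical consumer that wants to read a Jenkinsfile (imports omitted):

String readJenkinsfile(SCMSource source, SCMHead head, SCMRevision revision)
        throws IOException, InterruptedException {
    SCMFileSystem fs = SCMFileSystem.of(source, head, revision);
    if (fs == null) {
        // no deep inspection support: the consumer must fall back to
        // source.build(head, revision) and a full checkout into a workspace
        return null;
    }
    try {
        SCMFile file = fs.getRoot().child("Jenkinsfile");
        // read the file contents without ever checking out the branch
        return file.exists() ? file.contentAsString() : null;
    } finally {
        fs.close();
    }
}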
The final areas of the SCM API of interest to implementers are categorization and branding. Both of these areas can be considered completely optional. As they provide for a significantly richer user experience for the end user, it is recommended to implement these areas of the SCM API.
The jenkins.scm.api.SCMHead instances can represent a number of different things:
- mainline development branches
- side feature branches
- tags or snapshots of branches at fixed revisions
- change requests to branches
- etc.
Each source control system will have its own idiomatic terminology for each of these concepts. For example:
- GitHub uses the term "Pull Request" to refer to a change request
- Gerrit uses the term "Change" to refer to a change request
- Perforce uses the term "Change Review" to refer to a change request
- Git and Subversion use the term "Tag" to refer to a tag
- Accurev uses the term "Snapshot" to refer to a tag
Each jenkins.scm.api.SCMSourceDescriptor should provide the concrete instances of jenkins.scm.api.SCMHeadCategory that are potentially generated by its jenkins.scm.api.SCMSource instances.
Then each jenkins.scm.api.SCMSource instance can filter down that list to the actual categories that may be returned by that specific source.
For example, a GitHub source may return "Branches", "Pull Requests" and "Tags" but the user may have configured their specific source for a specific project to only build "Branches" and "Tags".
In an analogous way, the jenkins.scm.api.SCMSource instances themselves may have different terminology for each of the different source control systems:
- GitHub uses the term "Repository" to refer to primary repositories
- GitHub uses the term "Fork" to refer to forks of the primary repositories
- Accurev uses the term "Depot" to refer to repositories (using the term "repository" to refer to the collection of "depots")
- One way of mapping CVS concepts to the SCM API might use the term "Module" for jenkins.scm.api.SCMSource instances.
In general, it is anticipated that most jenkins.scm.api.SCMNavigatorDescriptor instances will only ever return a single jenkins.scm.impl.UncategorizedSCMSourceCategory instance using the concept name that users expect.
Thus,
- An AccurevSCMNavigator.DescriptorImpl would have:

public class AccurevSCMNavigator extends SCMNavigator {
    // ...
    @Extension
    public static class DescriptorImpl extends SCMNavigatorDescriptor {
        // ...
        protected SCMSourceCategory[] createCategories() {
            return new SCMSourceCategory[]{
                new UncategorizedSCMSourceCategory(Messages._AccurevSCMNavigator_DepotSourceCategory())
            };
        }
    }
}
- A CVSSCMNavigator.DescriptorImpl would have:

public class CVSSCMNavigator extends SCMNavigator {
    // ...
    @Extension
    public static class DescriptorImpl extends SCMNavigatorDescriptor {
        // ...
        protected SCMSourceCategory[] createCategories() {
            return new SCMSourceCategory[]{
                new UncategorizedSCMSourceCategory(Messages._CVSSCMNavigator_ModuleSourceCategory())
            };
        }
    }
}
The implementers of a GitHub SCM API would need to decide whether the forks should be listed as additional heads / branches of the primary repository or whether they should be listed as a separate category of sources.
When defining custom categorization, we also need to pay attention to the getPronoun() methods of:
- jenkins.scm.api.SCMHead
- jenkins.scm.api.SCMSource (which will fall through to jenkins.scm.api.SCMSourceDescriptor)
- jenkins.scm.api.SCMNavigator (which will fall through to jenkins.scm.api.SCMNavigatorDescriptor)
For example, with the Accurev source control system we might have:
public class AccurevSCMNavigator extends SCMNavigator {
// ...
@Extension
public static class DescriptorImpl extends SCMNavigatorDescriptor {
// ...
public String getPronoun() {
return "Repository"; // Better: Messages.AccurevSCMNavigator_RepositoryPronoun();
}
protected SCMSourceCategory[] createCategories() {
return new SCMSourceCategory[]{
new UncategorizedSCMSourceCategory(
new NonLocalizable("Depots")
// Better: Messages._AccurevSCMNavigator_DepotSourceCategory()
)
};
}
}
}
public class AccurevSCMSource extends SCMSource {
private boolean buildTags;
// ...
protected boolean isCategoryEnabled(@NonNull SCMHeadCategory category) {
if (category instanceof TagSCMHeadCategory) {
return buildTags;
}
return true;
}
@Extension
public static class DescriptorImpl extends SCMSourceDescriptor {
// ...
public String getPronoun() {
return "Depot"; // Better: Messages.AccurevSCMSource_RepositoryPronoun();
}
protected SCMHeadCategory[] createCategories() {
return new SCMHeadCategory[]{
new UncategorizedSCMHeadCategory(
new NonLocalizable("Streams")
// Better: Messages._AccurevSCMSource_StreamHeadCategory()
),
new TagSCMHeadCategory(
new NonLocalizable("Snapshots")
// Better: Messages._AccurevSCMSource_SnapshotHeadCategory()
)
};
}
}
}
public class AccurevSCMHead extends SCMHead {
// ...
public String getPronoun() {
return "Stream"; //: Better with localization
}
}
public class AccurevSnapshotSCMHead extends SCMHead implements TagSCMHead {
// ...
public String getPronoun() {
return "Snapshot"; //: Better with localization
}
}
The above represents the terminology and categorization that is appropriate for the Accurev source control system.
Note
|
When implementing categorization it is recommended to reuse an existing categorization class (with the terminology injected) rather than create a new categorization. New categorizations should be added to the scm-api plugin by pull requests as this allows similar categories to be grouped. |
Branding controls the visual icons that are used to represent the jenkins.scm.api.SCMSource and jenkins.scm.api.SCMNavigator instances.
Branding is determined by the getIconClassName() methods of the jenkins.scm.api.SCMSourceDescriptor and jenkins.scm.api.SCMNavigatorDescriptor.
Where these methods return non-null the corresponding icons will be used by consumers of the SCM API as the final fall-back icons.
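For example, a minimal sketch on a source descriptor might look like the following; the icon-my-scm-repository class name is a hypothetical name that would need to be registered with the IconSet, as shown in the SCMNavigator example later in this document:

public static class DescriptorImpl extends SCMSourceDescriptor {
    // ...
    @Override
    public String getIconClassName() {
        // returning null would leave consumers to fall back to their own generic icons
        return "icon-my-scm-repository";
    }
}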
The hudson.scm.SCM API has been subject to significant evolution. Modern implementations should focus on implementing the following methods:
public class MySCM extends SCM {
/*
* all configuration fields should be private
* mandatory fields should be final
* non-mandatory fields should be non-final
*/
@DataBoundConstructor
public MySCM(/*mandatory configuration*/) {
// ...
}
// for easier interop with SCMSource
public MySCM(MySCMSource config) {
// copy the configuration from the SCMSource
}
// Getters for all the configuration fields
// use @DataBoundSetter to inject the non-mandatory configuration elements
// as this will simplify the usage from pipeline
@Override
public boolean supportsPolling() {
return true; // hopefully you do
}
@Override
public boolean requiresWorkspaceForPolling() {
return false; // hopefully you don't
}
// for easier interop with SCMSource
public void setSCMHead(@NonNull SCMHead head, @CheckForNull SCMRevision revision) {
// configure to checkout the specified head at the specific revision
// if passed implementations that do not come from a MySCMSource then silently ignore
}
@Override
public PollingResult compareRemoteRevisionWith(@Nonnull Job<?, ?> project, @Nullable Launcher launcher,
@Nullable FilePath workspace, @Nonnull TaskListener listener,
@Nonnull SCMRevisionState baseline)
throws IOException, InterruptedException {
if (baseline instanceof MySCMRevisionState) {
//
// get current revision in SCM
// if your implementation of requiresWorkspaceForPolling() returns true then the
// workspace and launcher parameters should be non-null and point to a
// workspace and node to use for the comparison
// NOTE: requiring a workspace for polling is a really bad user experience
// as obtaining a workspace may require the provisioning of build resources
// from the Cloud API just to determine that there are no changes to build
//
if (((MySCMRevisionState) baseline).getRevision().equals(currentRevision)) { // currentRevision obtained above
return PollingResult.NO_CHANGES;
} else {
return PollingResult.SIGNIFICANT;
}
} else {
return PollingResult.BUILD_NOW;
}
}
@Override
public void checkout(@Nonnull Run<?, ?> build, @Nonnull Launcher launcher, @Nonnull FilePath workspace,
@Nonnull TaskListener listener, @CheckForNull File changelogFile,
@CheckForNull SCMRevisionState baseline) throws IOException, InterruptedException {
// do the checkout in the remote workspace using the supplied launcher
// output from the checkout should be streamed to the listener
// write the changelog to the changelog file (assuming it is non-null)
// the changelog should be from the supplied baseline to the revision checked out
// finally attach the revision state to the build's actions.
build.addAction(new MySCMRevisionState(/*whatever you need*/));
}
@Override
public ChangeLogParser createChangeLogParser() {
return new MyChangeLogParser();
}
@Symbol("my")
@Extension
public static class DescriptorImpl extends SCMDescriptor<MySCM> {
public DescriptorImpl() {
super(MySCMRepositoryBrowser.class);
}
// ...
}
}
Note
|
To simplify the implementation of the If the configuration for the |
The hudson.scm.SCM implementation will also need a Stapler view for config.
You will also need to provide implementations of SCMRevisionState and ChangeLogParser.
You do not need to provide a concrete implementation of RepositoryBrowser, but you must at least provide an abstract base class with the appropriate methods for generating links from change log entries.
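A minimal sketch of such an abstract base class, assuming the MySCMChangeLogEntry type from the changelog example below; the getDiffLink method is a hypothetical extra hook that concrete browsers might provide:

public abstract class MySCMRepositoryBrowser extends RepositoryBrowser<MySCMChangeLogEntry> {
    private static final long serialVersionUID = 1L;

    /**
     * Link to the page showing the details of the given change, or null if unsupported.
     */
    @Override
    public abstract URL getChangeSetLink(MySCMChangeLogEntry entry) throws IOException;

    /**
     * Hypothetical extra hook: link to the diff of an individual file within the change.
     */
    public URL getDiffLink(MySCMChangeLogEntry entry, String path) throws IOException {
        return null;
    }
}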
For simpler integration with jenkins.scm.api.SCMSource and the new SCM API, it is recommended to use a SCMRevisionState implementation that effectively defers to your implementation of SCMRevision:
public class MySCMRevisionState extends SCMRevisionState implements Serializable {
private static final long serialVersionUID = 1L;
@NonNull
private final MySCMRevision revision;
public MySCMRevisionState(@NonNull MySCMRevision revision) {
this.revision = revision;
}
public MySCMRevision getRevision() {
return revision;
}
}
Most SCM implementations will just capture the output of an externally launched command and write that to the change log file (e.g. the equivalent of git log rev1..rev2 > file).
This has the advantage of being easy for users to compare with their own locally launched commands, but it requires that the change log parser be able to parse historical change log files.
The easiest format for the change log on disk is just to serialize the list of log entries using XStream.
You still have to write a parser for the native tool's change log output, but as you evolve the native command used to capture the change logs, you can use the XStream data model evolution to ensure that older changelogs can be parsed by newer implementations (e.g. if we changed from using, say, git log --format=oneline rev1..rev2 to git log --format=fuller rev1..rev2).
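The writing side inside checkout then reduces to parsing the native tool output into entry objects and serializing them. A sketch, assuming the MySCMChangeLogEntry type shown below and a hypothetical parseNativeToolOutput helper:

// inside MySCM.checkout(...), once the native tool output has been parsed
// into a list of MySCMChangeLogEntry objects:
if (changelogFile != null) {
    List<MySCMChangeLogEntry> entries = parseNativeToolOutput(/* native tool output */);
    try (OutputStream os = new FileOutputStream(changelogFile)) {
        // serialize the entries so the parser below can read them back
        Items.XSTREAM2.toXML(entries, os);
    }
}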
If the XStream on-disk format is used, then the change log parser can become relatively trivial:
public class MySCMChangeLogParser extends ChangeLogParser {
@Override
public ChangeLogSet<? extends ChangeLogSet.Entry> parse(Run build,
RepositoryBrowser<?> browser,
File changelogFile)
throws IOException, SAXException {
List<MySCMChangeLogEntry> entries =
(List<MySCMChangeLogEntry>) Items.XSTREAM2.fromXML(changelogFile);
return new MySCMChangeLogSet(build, browser, entries);
}
}
public class MySCMChangeLogEntry extends ChangeLogSet.Entry {
// ...
/*package*/ void setParent(MySCMChangeLogSet parent) {
super.setParent(parent);
}
}
public class MySCMChangeLogSet extends ChangeLogSet<MySCMChangeLogEntry> {
private final List<MySCMChangeLogEntry> entries;
public MySCMChangeLogSet(Run<?, ?> build,
RepositoryBrowser<?> browser,
List<MySCMChangeLogEntry> entries) {
super(build, browser);
this.entries = new ArrayList<>(entries);
// contract of ChangeLogSet.Entry is that parent must be set before
// ChangeLogSet is exposed
for (MySCMChangeLogEntry entry: this.entries) {
entry.setParent(this);
}
}
@Override
public boolean isEmptySet() {
return entries.isEmpty();
}
public Iterator<MySCMChangeLogEntry> iterator() {
return entries.iterator();
}
}
The ChangeLogSet implementation will also need Stapler views for index and digest.
When rendering the entries, the repository browser should be used to render links.
You should assume that any RepositoryBrowser you are provided is an implementation of the base class you specified in your SCMDescriptor.
The jenkins.scm.api.SCMSource API has been subject to some evolution. The following are the recommended methods to implement:
public class MySCMSource extends SCMSource {
/*
* all configuration fields should be private
* mandatory fields should be final
* non-mandatory fields should be non-final
*/
/**
* Using traits is not required but it does make your implementation easier for others to extend.
*/
@NonNull
private List<SCMSourceTrait> traits = new ArrayList<>();
@DataBoundConstructor
public MySCMSource(String id, /*mandatory configuration*/) {
super(id); /* see note on ids*/
}
public MySCMSource(String id, MySCMNavigator config, String name) {
super(id); /* see note on ids*/
}
// Getters for all the configuration fields
@Override
@NonNull
public List<SCMSourceTrait> getTraits() {
return Collections.unmodifiableList(traits);
}
@Override
@DataBoundSetter
public void setTraits(@CheckForNull List<SCMSourceTrait> traits) {
this.traits = new ArrayList<>(Util.fixNull(traits));
}
// use @DataBoundSetter to inject the non-mandatory configuration elements
// as this will simplify the usage from pipeline
@Override
protected void retrieve(@CheckForNull SCMSourceCriteria criteria,
@NonNull SCMHeadObserver observer,
@CheckForNull SCMHeadEvent<?> event,
@NonNull TaskListener listener)
throws IOException, InterruptedException {
try (MySCMSourceRequest request = new MySCMSourceContext(criteria, observer, ...)
.withTraits(traits)
.newRequest(this, listener)) {
// When you implement event support, if you have events that can be trusted
// you may want to use the payloads of those events to avoid extra network
// calls for identifying the observed heads
Iterable<...> candidates = null;
Set<SCMHead> includes = observer.getIncludes();
if (includes != null) {
// at least optimize for the case where the includes is one and only one
if (includes.size() == 1 && includes.iterator().next() instanceof MySCMHead) {
candidates = getSpecificCandidateFromSourceControl();
}
}
if (candidates == null) {
candidates = getAllCandidatesFromSourceControl();
}
for (candidate : candidates) {
// there are other signatures for the process method depending on whether you need another
// round-trip call to the source control server in order to instantiate the MySCMRevision
// object. This example assumes that the revision can be instantiated without requiring
// an additional round-trip.
if (request.process(
new MySCMHead(...),
(RevisionLambda) (head) -> { return new MySCMRevision(head, ...) },
(head, revision) -> { return createProbe(head, revision) }
)) {
// the retrieve was only looking for some of the heads and has found enough
// do not waste further time looking at the other heads
return;
}
}
}
}
@NonNull
@Override
protected SCMProbe createProbe(@NonNull final SCMHead head, @CheckForNull final SCMRevision revision)
throws IOException {
/* see note on SCMProbe */
// assuming we have a suitable implementation of SCMFileSystem
return newProbe(head, revision);
}
@NonNull
@Override
public SCM build(@NonNull SCMHead head, @CheckForNull SCMRevision revision) {
return new MySCMBuilder(this, head, revision).withTraits(traits).build();
}
@NonNull
@Override
protected List<Action> retrieveActions(@CheckForNull SCMSourceEvent event,
@NonNull TaskListener listener)
throws IOException, InterruptedException {
List<Action> result = new ArrayList<>();
// if your SCM provides support for metadata at the "SCMSource" level
// then you probably want to return at least a `jenkins.branch.MetadataAction`
// from this method. The listener can be used to log the interactions
// with the backing source control system.
//
// When you implement event support, if you have events that can be trusted
// you may want to use the payloads of those events when populating the
// actions (if that will avoid extra network calls and give the same result)
return result;
}
@NonNull
@Override
protected List<Action> retrieveActions(@NonNull SCMHead head,
@CheckForNull SCMHeadEvent event,
@NonNull TaskListener listener)
throws IOException, InterruptedException {
List<Action> result = new ArrayList<>();
// if your SCM provides support for metadata at the "SCMHead" level
// then you probably want to return the correct metadata actions
// from this method. The listener can be used to log the interactions
// with the backing source control system.
//
// When you implement event support, if you have events that can be trusted
// you may want to use the payloads of those events when populating the
// actions (if that will avoid extra network calls and give the same result)
return result;
}
@NonNull
@Override
protected List<Action> retrieveActions(@NonNull SCMRevision revision,
@CheckForNull SCMHeadEvent event,
@NonNull TaskListener listener)
throws IOException, InterruptedException {
List<Action> result = new ArrayList<>();
// if your SCM provides support for metadata at the "SCMRevision" level
// then you probably want to return the correct metadata actions
// from this method. The listener can be used to log the interactions
// with the backing source control system.
//
// When you implement event support, if you have events that can be trusted
// you may want to use the payloads of those events when populating the
// actions (if that will avoid extra network calls and give the same result)
return result;
}
// This method is only required if you have more than one category and
// it is user configurable whether any specific source may opt in/out of
// specific categories
@Override
protected boolean isCategoryEnabled(@NonNull SCMHeadCategory category) {
if (category instanceof ChangeRequestSCMHeadCategory) {
return includeChangeRequests;
}
if (category instanceof TagSCMHeadCategory) {
return includeTags;
}
return true;
}
@Symbol("my")
@Extension
public static class DescriptorImpl extends SCMSourceDescriptor {
@Nonnull
@Override
public String getDisplayName() {
return "My SCM";
}
// This method is only required if you need more than one category
// or if the categories need to use idiomatic names specific to
// your source control system.
@NonNull
@Override
protected SCMHeadCategory[] createCategories() {
return new SCMHeadCategory[]{
UncategorizedSCMHeadCategory.INSTANCE,
ChangeRequestSCMHeadCategory.DEFAULT,
TagSCMHeadCategory.DEFAULT
};
}
// need to implement this as the default filtering of form binding will not be specific enough
public List<SCMSourceTraitDescriptor> getTraitsDescriptors() {
return SCMSourceTrait._for(this, MySCMSourceContext.class, MySCMBuilder.class);
}
@Override
@NonNull
public List<SCMSourceTrait> getTraitsDefaults() {
return Collections.<SCMSourceTrait>singletonList(new MySCMDiscoverChangeRequests());
}
}
}
// we need a context because we are using traits
public class MySCMSourceContext extends SCMSourceContext<MySCMSourceContext, MySCMSourceRequest> {
// store configuration that can be modified by traits
// for example, there may be different types of SCMHead instances that can be discovered
// in which case you would define discovery traits for the different types
// then those discovery traits would decorate this context to turn on the discovery.
// example: we have a discovery trait that will ignore branches that have been filed as a change request
// because they will also be discovered as the change request and there is no point discovering
// them twice
private boolean needsChangeRequests;
// can include additional mandatory parameters
public MySCMSourceContext(SCMSourceCriteria criteria, SCMHeadObserver observer) {
super(criteria, observer);
}
// follow the builder pattern for "getters" and "setters" and use final liberally
// i.e. getter methods are *just the field name*
// setter methods return this for method chaining and are named to be readable;
public final boolean needsChangeRequests() { return needsChangeRequests; }
// in some cases your "setters" logic may be inclusive, in this example, once one trait
// declares that it needs to know the details of all the change requests, we have to get
// those details, even if the other traits do not need the information. Hence this
// "setter" uses inclusive OR logic.
@NonNull
public final MySCMSourceContext wantChangeRequests() { needsChangeRequests = true; return this; }
@NonNull
@Override
public MySCMSourceRequest newRequest(@NonNull SCMSource source, @CheckForNull TaskListener listener) {
return new MySCMSourceRequest(source, this, listener);
}
}
// we need a request because we are using traits
// the request provides utility methods that make processing easier and less error prone
public class MySCMSourceRequest extends SCMSourceRequest {
private final boolean fetchChangeRequests;
MySCMSourceRequest(SCMSource source, MySCMSourceContext context, TaskListener listener) {
super(source, context, listener);
// copy the relevant details from the context into the request
this.fetchChangeRequests = context.needsChangeRequests();
}
public boolean isFetchChangeRequests() {
return fetchChangeRequests;
}
}
// we need a SCMBuilder because we are using traits
public class MySCMBuilder extends SCMBuilder<MySCMBuilder,MySCM> {
// include any fields needed by traits to decorate the resulting MySCM
private final MySCMSource source;
public MySCMBuilder(@NonNull MySCMSource source, @NonNull SCMHead head,
@CheckForNull SCMRevision revision) {
super(MySCM.class, head, revision);
this.source = source;
}
// provide builder-style getters and setters for fields
@NonNull
@Override
public MySCM build() {
MySCM result = new MySCM(source);
result.setSCMHead(head(), revision());
// apply the decorations from the fields
return result;
}
}
Note
|
SCMSource IDs
The SCMSource’s IDs are used to help track the SCMSource that a SCMHead instance originated from. If - and only if - you are certain that you can construct a definitive ID from the configuration details of your SCMSource then implementations are encouraged to use a computed ID. When instantiating an In all other cases, implementations should use the default generated ID mechanism when the ID supplied to the constructor is An example of how a generated ID could be definitively constructed would be:
If users add the same source with the same configuration options twice to the same owner, with the above ID generation scheme, it should not matter as both sources would be idempotent. By starting with the server URL and then appending the name of the source we might be able to more quickly route events. The observant reader will spot the issue above, namely that we need to start from an URL that is definitive.
Most SCM systems can be accessed via multiple URLs.
For example, GitHub can be accessed at both |
Note
|
SCMProbe: implement custom or leverage SCMFileSystem
The above example uses the default implementation of If your source control system cannot support an implementation of If your source control system cannot support even the |
The jenkins.scm.api.SCMSource implementation will also need a Stapler view for config-detail.
You will need to have implemented your own SCMHead and SCMRevision subclasses.
Note
|
So you didn’t implement your own
SCMHead and SCMRevision subclasses and now you want toYou can register a |
- For regular branch like things, you will want to extend from SCMHead directly.

public class MySCMHead extends SCMHead {
    private static final long serialVersionUID = 1L;

    public MySCMHead(@NonNull String name) {
        super(name);
    }

    @Override // overriding to illustrate, by default returns SCMHeadOrigin.DEFAULT
    @NonNull
    public SCMHeadOrigin getOrigin() {
        // if branch like things can come from different places (i.e. branches from forks)
        return SCMHeadOrigin.DEFAULT;
    }
}
- When the backing object in source control is more like a tag, add in the TagSCMHead mixin interface to identify that the head is a tag (note that TagSCMHead also requires a timestamp).

public class MyTagSCMHead extends MySCMHead implements TagSCMHead {
    private static final long serialVersionUID = 1L;
    private final long timestamp;

    public MyTagSCMHead(@NonNull String name, long timestamp) {
        super(name);
        this.timestamp = timestamp;
    }

    @Override
    public long getTimestamp() {
        // TagSCMHead requires the (approximate) timestamp of the tag
        return timestamp;
    }

    @Override // overriding to illustrate, by default returns SCMHeadOrigin.DEFAULT
    @NonNull
    public SCMHeadOrigin getOrigin() {
        // if tag like things can come from different places (i.e. tags from forks)
        return SCMHeadOrigin.DEFAULT;
    }
}
Tip
|
Both tags and regular branches can normally use the same SCMRevision implementation:
public class MySCMRevision extends SCMRevision {
private static final long serialVersionUID = 1L;
private final String hash;
public MySCMRevision(@NonNull MySCMHead head, String hash) {
super(head);
this.hash = hash;
}
public String getHash() {
return hash;
}
// critical to implement equals and hashCode
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
MySCMRevision that = (MySCMRevision) o;
return hash.equals(that.hash);
}
@Override
public int hashCode() {
return hash.hashCode();
}
// very helpful for users to implement toString
@Override
public String toString() {
return hash;
}
} |
- Change request like things are special. For one, the actual strategy used to determine what to build can be different from a regular head: the change request may be built against the original baseline revision, or it may be built against the current revision of the original baseline branch. You should consider whether it makes sense for change request like things to extend the same base class you used for branch and tag like things, or whether you should extend from SCMHead directly. In either case you should implement the ChangeRequestSCMHead2 mix-in interface. Another important concern with change request like things is that the change request can originate from untrusted users. Implementers should always make it configurable whether change request like things will be excluded from the SCMSource, and should also, where possible, differentiate between trusted and untrusted users.

public class MyChangeRequestSCMHead extends SCMHead implements ChangeRequestSCMHead2 {
    private static final long serialVersionUID = 1L;
    private final String id;
    private final MySCMHead target;

    public MyChangeRequestSCMHead(String id, MySCMHead target) {
        // because My SCM calls Change Requests Change/### where ### is the change ID
        super("Change/" + id);
        this.id = id;
        this.target = target;
    }

    public String getId() {
        return id;
    }

    public SCMHead getTarget() {
        return target;
    }

    @NonNull
    public ChangeRequestCheckoutStrategy getCheckoutStrategy() {
        // because My SCM checks out change requests by merging the two heads, the effective revision will
        // always depend on the revision of the target and the revision of the change request, so we
        // return MERGE.
        return ChangeRequestCheckoutStrategy.MERGE;
    }

    @NonNull
    public String getOriginName() {
        // My SCM does not create change requests from branches, rather you request a new change request
        // and commit your changes to that. Hence, unlike GitHub, Bitbucket, etc. there is no concept
        // of a different name when considered as a branch, so we just pass through getName()
        return getName();
    }

    @Override // overriding to illustrate, by default returns SCMHeadOrigin.DEFAULT
    @NonNull
    public SCMHeadOrigin getOrigin() {
        // My SCM is a centralized source control system so there is only ever one origin.
        // If My SCM allowed users to "fork" the repository and have change requests originate from
        // forks then we might return `new SCMHeadOrigin.Fork(name)`.
        // If My SCM was a distributed source control system with some sort of automatic discovery
        // mechanism (akin to peer discovery in Bittorrent say) then we might create our own
        // subclass of SCMHeadOrigin to represent those peers, as the simple "name" of a Fork
        // would not be sufficient to uniquely identify the origin.
        return SCMHeadOrigin.DEFAULT;
    }
}

public class MyChangeRequestSCMRevision extends ChangeRequestSCMRevision<MySCMHead> {
    private static final long serialVersionUID = 1L;
    private final String change;

    public MyChangeRequestSCMRevision(@NonNull MyChangeRequestSCMHead head,
                                      @NonNull MySCMRevision target,
                                      @NonNull String change) {
        super(head, target);
        this.change = change;
    }

    /**
     * The commit hash of the head of the change request branch.
     */
    public String getChange() {
        return change;
    }

    @Override
    public boolean equivalent(ChangeRequestSCMRevision<?> o) {
        if (!(o instanceof MyChangeRequestSCMRevision)) {
            return false;
        }
        MyChangeRequestSCMRevision other = (MyChangeRequestSCMRevision) o;
        return getHead().equals(other.getHead()) && change.equals(other.change);
    }

    @Override
    protected int _hashCode() {
        return change.hashCode();
    }

    @Override
    public String toString() {
        return getTarget().getHash() + "+" + change;
    }
}
To enable consumers to establish the relationship between revisions and heads, you should implement the SCMSource.parentRevisions(head, revision, listener) and SCMSource.parentHeads(head, listener) methods.
These two methods are not strictly required, but when implemented they enable consumers to identify relationships between different branches, e.g. if the consumer wants to build a more complete changelog history tracking through the different branches.
The jenkins.scm.api.SCMNavigator API has not been subject to much evolution, and consequently the methods to implement are relatively obvious:
public class MySCMNavigator extends SCMNavigator {
/*
* all configuration fields should be private
* mandatory fields should be final
* non-mandatory fields should be non-final
*/
/**
* Using traits is not required but it does make your implementation easier for others to extend.
* Using traits also reduces duplicate configuration between your SCMSource and your SCMNavigator
* as you can provide the required traits
*/
@NonNull
private List<SCMTrait<?>> traits = new ArrayList<>();
@DataBoundConstructor
public MySCMNavigator(/*mandatory configuration*/) {
// ...
}
@Override
@NonNull
protected String id() {
// Generate the ID of the thing being navigated.
// Typically this will, at a minimum consist of the URL of the remote server
// For GitHub it would probably also include the GitHub Organization being navigated
// For BitBucket it could include the owning team as well as the project (if navigation is scoped to
// a single project within a team) or just the owning team (if navigation is scoped to all repositories
// in a team)
//
// See the Javadoc for more details.
// ...
}
// Getters for all the configuration fields
@Override
@NonNull
public List<SCMTrait<?>> getTraits() {
return Collections.unmodifiableList(traits);
}
@Override
@DataBoundSetter
public void setTraits(@CheckForNull List<SCMTrait<?>> traits) {
this.traits = new ArrayList<>(Util.fixNull(traits));
}
// use @DataBoundSetter to inject the non-mandatory configuration elements
// as this will simplify the usage from pipeline
@Override
public void visitSources(@NonNull SCMSourceObserver observer) throws IOException, InterruptedException {
try (MySCMNavigatorRequest request = new MySCMNavigatorContext()
.withTraits(traits)
.newRequest(this, observer)) {
Iterable<...> candidates = null;
Set<String> includes = observer.getIncludes();
if (includes != null) {
// at least optimize for the case where the includes is one and only one
if (includes.size() == 1 && includes.iterator().next() instanceof MySCMHead) {
candidates = getSpecificCandidateFromSourceControl();
}
}
if (candidates == null) {
candidates = getAllCandidatesFromSourceControl();
}
for (String name : candidates) {
if (request.process(name, (SourceLambda) (projectName) -> {
// it is *critical* that we assign each observed SCMSource a reproducible id.
// the id will be used to correlate the SCMHead back with the SCMSource from which
// it came. If we do not use a reproducible ID then repeated observations of the
// same navigator will return "different" sources and consequently the SCMHead
// instances discovered previously will be picked up as orphans that have been
// taken over by a new source... which could end up triggering a new build.
//
// At a minimum you could use the name as the ID, but better is at least to include
// the URL of the server that the navigator is navigating
String id = "... some stuff based on configuration of navigator ..." + name;
return new MySCMSourceBuilder(name).withId(id).withRequest(request).build();
}, (AttributeLambda) null)) {
// the observer has seen enough and doesn't want to see any more
return;
}
}
}
}
@NonNull
@Override
public List<Action> retrieveActions(@NonNull SCMNavigatorOwner owner,
@CheckForNull SCMNavigatorEvent event,
@NonNull TaskListener listener)
throws IOException, InterruptedException {
List<Action> result = new ArrayList<>();
// if your SCM provides support for metadata at the "SCMNavigator" level
// then you probably want to return at least a `jenkins.branch.MetadataAction`
// from this method. The listener can be used to log the interactions
// with the backing source control system.
//
// When you implement event support, if you have events that can be trusted
// you may want to use the payloads of those events when populating the
// actions (if that will avoid extra network calls and give the same result)
return result;
}
@Symbol("my")
@Extension
public static class DescriptorImpl extends SCMNavigatorDescriptor {
@Nonnull
@Override
public String getDisplayName() {
return "My SCM Team";
}
@Override
public SCMNavigator newInstance(@CheckForNull String name) {
// if you can guess a fully configured MySCMNavigator instance
// from just the name, e.g. a GitHub navigator could guess that
// the name was the name of a GitHub organization (assuming it does
// not need to worry about GitHub Enterprise servers or assuming
// that the descriptor allows configuring the default server as
// a global configuration) then return one here, otherwise...
return null;
}
// This method is only required if you need more than one category
// or if the categories need to use idiomatic names specific to
// your source control system.
@NonNull
@Override
protected SCMSourceCategory[] createCategories() {
return new SCMSourceCategory[]{
new UncategorizedSCMSourceCategory(
// better would be Messages.MySCMNavigator_TeamsCategory()
new NonLocalizable("Teams") // because My SCM uses the term "teams" for a collection of repositories.
)
};
}
// optional branding of the icon
public String getIconClassName() {
return "icon-my-scm-team";
}
// register the icons as we have implemented optional branding
static {
IconSet.icons.addIcon(
new Icon("icon-my-scm-team icon-sm",
"plugin/my-scm/images/16x16/team.png",
Icon.ICON_SMALL_STYLE));
IconSet.icons.addIcon(
new Icon("icon-my-scm-team icon-md",
"plugin/my-scm/images/24x24/team.png",
Icon.ICON_MEDIUM_STYLE));
IconSet.icons.addIcon(
new Icon("icon-my-scm-team icon-lg",
"plugin/my-scm/images/32x32/team.png",
Icon.ICON_LARGE_STYLE));
IconSet.icons.addIcon(
new Icon("icon-my-scm-team icon-xlg",
"plugin/my-scm/images/48x48/team.png",
Icon.ICON_XLARGE_STYLE));
}
}
}
// we need a source builder because we are using traits
public class MySCMSourceBuilder extends SCMSourceBuilder<MySCMSourceBuilder, MySCMSource> {
private String id;
// store the required configuration here
// there may be other mandatory parameters that you may want to capture here
// such as the SCM server URL
public MySCMSourceBuilder(String name) {
super(MySCMSource.class, name);
}
@NonNull
public MySCMSourceBuilder withId(String id) {
this.id = id;
return this;
}
@NonNull
@Override
public MySCMSource build() {
return new MySCMSource(id, ...);
}
}
The jenkins.scm.api.SCMNavigator implementation will also need a Stapler view for config.
At this point you should now have a full implementation of the SCM API that works for polling.
To test this implementation, set up an organization / team / whatever the correct terminology is for the thing you are representing with SCMNavigator.
Within this, set up more than one repository / project / whatever the correct terminology is for the thing you are representing with SCMSource.
Within these repositories, create some dummy branches with a basic Jenkinsfile in the root.
Also have some branches that do not have a Jenkinsfile in the root.
Ensure you have at least one repository with content but without a Jenkinsfile in any branch / tag / change request.
Tag some of the branches.
If your source control system has the concept of change requests, create some change requests.
Install the Pipeline Multibranch Plugin and your plugin into your test instance.
- If your SCMNavigatorDescriptor.newInstance(name) method does not return null, verify that the new item screen has a specific organization folder type corresponding to your SCMNavigator.
- Create an organization folder for your SCMNavigator. It should not matter whether you use the name based inference from a specific organization folder type or create a generic organization folder and add your SCMNavigator to the configuration.
- Verify that all the repositories containing at least one branch with a Jenkinsfile have had multibranch projects created for them.
- Verify that the repository that does not contain any Jenkinsfile has not had a multibranch project created for it (unless you did not implement SCMProbe or SCMFileSystem.Builder).
- Pick one of the multibranch projects. Verify that the branches / tags / change requests that contain a Jenkinsfile have been created and categorized correctly.
- Commit a change to one of the branches. Trigger a rescan of the organization. Verify that the only build activity is the organization scan, the repository scans for each individual repository, and then the branch build for the changed branch. This checks that your revision equality has been implemented correctly, relying on the Branch API to request builds when scanning identifies changed revisions for individual SCMHead instances.
You could perform additional testing, doing things like adding new branches / tags / change requests, updating branches, merging change requests, deleting branches, etc., but as the implementation we have at this point only performs polling, if the above tests work then everything should work when polling.
From the testing and the requirement to trigger a scan in order to see the changes, you should now have an appreciation of why event support is important to users.
The first part of implementing event support is to determine how events will be fed into Jenkins. There are a number of techniques that can be used. The two most common techniques are:
- Webhook
  The webhook technique typically involves setting up a RootAction that can receive a payload from the source control system. For this technique to work, the source control system must be able to establish a connection to the Jenkins server. This can be problematic where, for example, the Jenkins server is on an internal-only network and the source control system is an externally hosted service (e.g. GitHub).
- Messaging service
  The messaging service technique uses a broker which can be reached by both the Jenkins server and the source control system. The source control system sends its event payloads to the broker. The Jenkins server periodically connects (or in some cases uses a persistent connection) to the broker to receive the payloads.
The webhook technique is the simpler to implement and is generally sufficient for most Jenkins users. For users where the webhook technique is not sufficient, it is usually relatively easy to build a generic messaging service on top of the webhook, for example the SCM SQS Plugin.
The basic starting point for a webhook is an UnprotectedRootAction:
@Extension
public class MySCMWebHook implements UnprotectedRootAction {
public static final String URL_NAME = "my-scm-hook";
public static final String ENDPOINT = "notify";
@Override
public String getIconFileName() {
return null;
}
@Override
public String getDisplayName() {
return null;
}
@Override
public String getUrlName() {
return URL_NAME;
}
@RequirePOST
public HttpResponse doNotify(StaplerRequest2 req) {
// check if the event payload at least provides some proof of origin
// this may be a query parameter or a HTTP header
// if the proof of origin is missing, drop the event on the floor and return
// extract the payload from the request
// parse the payload
/* PSEUDOCODE
for (event : payload) {
switch (eventType) {
case HEAD:
SCMHeadEvent.fireNow(new MySCMHeadEvent(eventType, payload, SCMEvent.originOf(req)));
break;
case SOURCE:
SCMSourceEvent.fireNow(new MySCMSourceEvent(eventType, payload, SCMEvent.originOf(req)));
break;
case NAVIGATOR:
SCMNavigatorEvent.fireNow(new MySCMNavigatorEvent(eventType, payload, SCMEvent.originOf(req)));
break;
}
}
*/
return HttpResponses.ok();
}
@Extension
public static class CrumbExclusionImpl extends CrumbExclusion {
public boolean process(HttpServletRequest req, HttpServletResponse resp, FilterChain chain) throws IOException, ServletException {
String pathInfo = req.getPathInfo();
if (pathInfo != null && pathInfo.equals("/" + URL_NAME + "/" + ENDPOINT)) {
chain.doFilter(req, resp);
return true;
} else {
return false;
}
}
}
}
You don’t have to worry about triggering the build, you just have to fire the event. If your event is implemented correctly, then the event subsystem will match the event to the corresponding SCMSource instances and let them do what they need to do. In the case of multibranch, for example, they will initiate a check of the latest revision and then build if the revision has changed.
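For reference, here is a minimal sketch of what an SCMHeadEvent implementation might look like. The MySCMEventPayload class (parsed from the webhook body) and the getServerUrl()/getRepository() accessors on the navigator and source are hypothetical; the overridden methods are the abstract methods of jenkins.scm.api.SCMHeadEvent:

public class MySCMHeadEvent extends SCMHeadEvent<MySCMEventPayload> {

    public MySCMHeadEvent(Type type, MySCMEventPayload payload, String origin) {
        super(type, payload, origin);
    }

    @Override
    public boolean isMatch(@NonNull SCMNavigator navigator) {
        // true if this event could affect sources discovered by the navigator,
        // e.g. the navigator points at the same server as the payload
        return navigator instanceof MySCMNavigator
                && ((MySCMNavigator) navigator).getServerUrl().equals(getPayload().getServerUrl());
    }

    @NonNull
    @Override
    public String getSourceName() {
        // the name of the repository / project that the event belongs to
        return getPayload().getRepositoryName();
    }

    @NonNull
    @Override
    public Map<SCMHead, SCMRevision> heads(@NonNull SCMSource source) {
        if (!(source instanceof MySCMSource)
                || !((MySCMSource) source).getRepository().equals(getPayload().getRepositoryName())) {
            return Collections.emptyMap();
        }
        MySCMHead head = new MySCMHead(getPayload().getBranchName());
        // a null revision means "something changed, go and find out what"
        return Collections.<SCMHead, SCMRevision>singletonMap(head,
                getPayload().getHash() == null ? null : new MySCMRevision(head, getPayload().getHash()));
    }

    @Override
    public boolean isMatch(@NonNull SCM scm) {
        // true if a legacy SCM configuration would be affected by this event
        return scm instanceof MySCM; // plus matching the repository, omitted for brevity
    }
}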
There are some common concerns that you should be aware of when writing a webhook:
-
The webhook normally needs to be an
UnprotectedRootAction
because it can be tricky to configure the source control system to integrate with whateverSecurityRealm
the user’s Jenkins has been configured to use. For example, if a Jenkins is configured to use OAuth or some other single sign-on technology, you would need to configure a Jenkins API token for a user and then provide that API token to the source control system.
The webhook normally needs to have an exception for the crumb based CSRF protections (as shown in the above example).
-
The webhook should not blindly process all events, rather it should look for some proof of origin.
-
Proof of origin can be as simple as a token generated by Jenkins (or configured by the user in the Jenkins Global configuration) that must be supplied with the POST request either as an HTTP header or as a query or form parameter (see the sketch after this list).
NoteSimple proofs of origin such as these can be captured by intermediate network elements where the path between the event source and the webhook is unencrypted.
If the event source is not performing server certificate validation, then the proof of origin may be captured by a man-in-the-middle attack.
Simple proofs of origin are not a protection from malicious agents, rather a protection from misconfigured event sources.
-
More complex proofs of origin may not be possible without having dedicated support for the Jenkins webhook built into the source control system.
-
Once you have a webhook in place, the source control system needs to be configured to send events to the webhook.
-
The simplest implementation is none at all. Document the webhook URL and how to configure the source control system to send events to the webhook URL.
-
The best user experience is where the webhook URL is auto-registered by Jenkins.
Note
|
Even if you implement auto-registration of the webhook, not all users will be prepared to grant Jenkins the permission to manage the destination webhooks of a source control system. Such users will need to manually register the webhook URL, so it is important that you document the webhook URL and how to configure the source control system to send events to Jenkins. |
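Returning to the proof-of-origin concern above, a minimal sketch of such a check at the start of doNotify might compare a token configured in the Jenkins global configuration against a value supplied by the source control system as an HTTP header. The header name and the MySCMGlobalConfiguration class are illustrative assumptions; use whatever your source control system and your plugin actually provide:
@RequirePOST
public HttpResponse doNotify(StaplerRequest2 req) {
    // hypothetical header name; use whatever your source control system can be configured to send
    String supplied = req.getHeader("X-MySCM-Token");
    // MySCMGlobalConfiguration is an assumed GlobalConfiguration holding the token the user configured
    String expected = MySCMGlobalConfiguration.get().getWebhookToken();
    if (expected == null || supplied == null
            || !MessageDigest.isEqual(
                    expected.getBytes(StandardCharsets.UTF_8),
                    supplied.getBytes(StandardCharsets.UTF_8))) {
        // no proof of origin: drop the event on the floor without leaking why the check failed
        return HttpResponses.ok();
    }
    // ... continue with payload extraction and event dispatch as shown earlier ...
    return HttpResponses.ok();
}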
Auto-registration of webhooks is performed in different methods depending on the scope of the webhook.
-
SCMNavigator.afterSave(owner)
-
SCMSource.afterSave()
-
SCM2.afterSave(job)
In the above methods you need to register, using your SCM provider's API, the webhook URL on which Jenkins will listen for events.
To compute the hook URL you can create a method such as the following:
public String getHookUrl() {
    JenkinsLocationConfiguration locationConfiguration = JenkinsLocationConfiguration.get();
    String rootUrl = locationConfiguration.getUrl();
    if (StringUtils.isBlank(rootUrl) || rootUrl.startsWith("http://localhost:")) {
        return "";
    }
    // JenkinsLocationConfiguration.getUrl() normally ends with a trailing slash, so avoid doubling it
    return (rootUrl.endsWith("/") ? rootUrl : rootUrl + "/") + MySCMWebHook.URL_NAME + "/" + MySCMWebHook.ENDPOINT;
}
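For example, a source-scoped registration in MySCMSource.afterSave() might look something like the following sketch. MySCMClient, connect and registerWebhook are placeholders for whatever client API your source control system actually provides, and getCredentialsId() / LOGGER are assumed to exist on MySCMSource:
@Override
public void afterSave() {
    String hookUrl = getHookUrl();
    if (hookUrl.isEmpty()) {
        // the Jenkins root URL is not configured (or is localhost), so there is nothing useful to register
        return;
    }
    try {
        // MySCMClient and registerWebhook are illustrative placeholders for your provider's API
        MySCMClient client = MySCMClient.connect(getServer(), getCredentialsId());
        client.registerWebhook(getTeam(), getRepository(), hookUrl);
    } catch (IOException e) {
        LOGGER.log(Level.WARNING, "Could not register webhook for " + getTeam() + "/" + getRepository(), e);
    }
}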
Note
|
Existing implementations of the SCM API have often exposed their webhook endpoints through their own dedicated Trigger implementations. This leads to a proliferation of triggers for multiple source control systems and consequently confuses users and leads to a bad user experience. Switching those implementations to use the SCM API event subsystem avoids this proliferation. Additionally, unless a source control system can guarantee delivery of events, in order to ensure that events are not lost, users will need to configure Poll SCM in any case (even if only with an infrequent polling schedule). |
Tip
|
If you are implementing auto-registration of webhooks, keep a local database of what hooks have been attempted to be registered. This will allow for:
The database should also either:
|
At this point we need to look into implementing the events themselves.
The most important event is the SCMHeadEvent
for an updated revision.
This is also potentially the most difficult event to implement.
The easiest case is where there is a 1:1 mapping between events in the source control system and events in the SCM API.
For example, if the "MySCM" source control system always sends JSON event payloads, and the payload for an updated branch looked something like:
{
"event":"branch-update",
"server":"https://myscm.example.com:443/",
"team":"project-odd",
"repository":"webapp",
"branch":"feature-23",
"revision":"af536372"
//...
}
The webhook receiver would start by parsing the payload and then creating the appropriate event object from it:
JsonNode json = new ObjectMapper().readTree(payload);
String event = json.path("event").asText();
if ("branch-create".equals(event)) {
    SCMHeadEvent.fireNow(new MyBranchSCMHeadEvent(SCMEvent.Type.CREATED, json, SCMEvent.originOf(req)));
} else if ("branch-update".equals(event)) {
    SCMHeadEvent.fireNow(new MyBranchSCMHeadEvent(SCMEvent.Type.UPDATED, json, SCMEvent.originOf(req)));
} // else etc
Because each event from the source control system has a 1:1 correspondence with the events in the SCM API the implementation of each event can be fairly straightforward.
You can define an abstract base class, AbstractMySCMHeadEvent, that handles the common matching logic:
// in this example all MySCM webhook payloads are Jackson JsonNode objects
public abstract class AbstractMySCMHeadEvent extends SCMHeadEvent<JsonNode> {

    public AbstractMySCMHeadEvent(Type type, JsonNode payload, String origin) {
        super(type, payload, origin);
    }

    @Override
    public boolean isMatch(@NonNull SCMNavigator navigator) {
        return navigator instanceof MySCMNavigator && isMatch((MySCMNavigator) navigator);
    }

    // Implement this method in your concrete event class to match the event against your MySCMNavigator
    public abstract boolean isMatch(@NonNull MySCMNavigator navigator);

    @Override
    public boolean isMatch(@NonNull SCMSource source) {
        return source instanceof MySCMSource && isMatch((MySCMSource) source);
    }

    // Implement this method in your concrete event class to match the event against your MySCMSource
    public abstract boolean isMatch(@NonNull MySCMSource source);

    @NonNull
    @Override
    public final Map<SCMHead, SCMRevision> heads(@NonNull SCMSource source) {
        if (source instanceof MySCMSource) {
            return headsFor((MySCMSource) source);
        }
        return Collections.emptyMap();
    }

    // Implement this method in your concrete event class to return the heads (and revisions) affected by the event
    @NonNull
    protected abstract Map<SCMHead, SCMRevision> headsFor(MySCMSource source);

    @NonNull
    @Override
    public String getSourceName() {
        return getPayload().path("repository").asText();
    }

    // This method matches the event against plain (non-multibranch) jobs, e.g. freestyle projects using MySCM.
    // If you are implementing a "branch-source" only plugin then you might simply return false in this method.
    @Override
    public boolean isMatch(@NonNull SCM scm) {
        if (scm instanceof MySCM) {
            MySCM mySCM = (MySCM) scm;
            return mySCM.getServer().equals(getPayload().path("server").asText())
                    && mySCM.getTeam().equals(getPayload().path("team").asText())
                    && mySCM.getRepository().equals(getPayload().path("repository").asText())
                    && mySCM.getBranch().equals(getPayload().path("branch").asText());
        }
        return false;
    }
}
Now, extending AbstractMySCMHeadEvent, you can implement concrete SCMEvent classes such as MySCMPushSCMEvent, MySCMPullRequestSCMEvent, MySCMTagSCMEvent, etc.
Sample implementation of MySCMPushSCMEvent:
public class MySCMPushSCMEvent extends AbstractMySCMHeadEvent {

    public MySCMPushSCMEvent(JsonNode payload, String origin) {
        super(typeOf(payload), payload, origin);
    }

    private static Type typeOf(JsonNode payload) {
        if (/* check whether the payload indicates the branch was created */) {
            return Type.CREATED;
        }
        if (/* check whether the payload indicates the branch was removed */) {
            return Type.REMOVED;
        }
        return Type.UPDATED;
    }

    @Override
    public String description() {
        // we have no context, so have to give a full description
        // without a context we may give a name that is less relevant to the user, this is especially
        // the case when dealing with events for branches that are also change requests
        return String.format("MySCM update notification for branch %s of team %s repository %s on server %s",
                getPayload().path("branch").asText(),
                getPayload().path("team").asText(),
                getPayload().path("repository").asText(),
                getPayload().path("server").asText()
        );
    }

    // Feel free to modify this method to uniquely identify your SCMNavigator
    @Override
    public boolean isMatch(@NonNull MySCMNavigator navigator) {
        return navigator.getServer().equals(getPayload().path("server").asText())
                && navigator.getTeam().equals(getPayload().path("team").asText());
    }

    // Feel free to modify this method to uniquely identify your SCMSource
    @Override
    public boolean isMatch(@NonNull MySCMSource source) {
        return source.getServer().equals(getPayload().path("server").asText())
                && source.getTeam().equals(getPayload().path("team").asText())
                && source.getRepository().equals(getPayload().path("repository").asText());
    }

    @Override
    public String descriptionFor(@NonNull SCMNavigator navigator) {
        // we have context, so we can give an abbreviated description here
        // also pay attention as in the context the reported name of the branch may be different
        // for example if the branch is also part of a change request we may have to name it differently
        // depending on how the source has been configured to name branches that are part of a change request
        return String.format("MySCM update notification for branch %s of repository %s",
                getPayload().path("branch").asText(),
                // we know the navigator is a match so team and server can be assumed in the context of the navigator
                getPayload().path("repository").asText()
        );
    }

    @Override
    public String descriptionFor(@NonNull SCMSource source) {
        // we have context, so we can give an abbreviated description here
        // we know the source is a match so server, team and repository can be assumed in the context of the source
        return String.format("MySCM update notification for branch %s",
                getPayload().path("branch").asText()
        );
    }

    @NonNull
    @Override
    protected Map<SCMHead, SCMRevision> headsFor(MySCMSource source) {
        if (!source.getServer().equals(getPayload().path("server").asText())
                || !source.getTeam().equals(getPayload().path("team").asText())
                || !source.getRepository().equals(getPayload().path("repository").asText())) {
            return Collections.emptyMap();
        }
        MySCMSourceContext context = new MySCMSourceContext(null, SCMHeadObserver.none())
                .withTraits(source.getTraits());
        if (/* some condition determined by the configured traits */) {
            // the configured traits are saying this event is ignored for this source
            return Collections.emptyMap();
        }
        MySCMHead head = new MySCMHead(getPayload().path("branch").asText(), false);
        // the configuration of the context may also modify how we return the heads
        // for example there could be traits to control whether to build the
        // merge commit of a change request or the head commit of a change request (or even both)
        // so the returned value may need to be customized based on the context
        return Collections.<SCMHead, SCMRevision>singletonMap(
                head, new MySCMRevision(head, getPayload().path("revision").asText())
        );
    }
}
A real-world example of a change request event, GitLabMergeRequestSCMEvent (from the GitLab Branch Source Plugin):
public class GitLabMergeRequestSCMEvent extends AbstractGitLabSCMHeadEvent<MergeRequestEvent> {

    public GitLabMergeRequestSCMEvent(MergeRequestEvent mrEvent, String origin) {
        super(typeOf(mrEvent), mrEvent, origin);
    }

    private static Type typeOf(MergeRequestEvent mrEvent) {
        switch (mrEvent.getObjectAttributes().getState()) {
            case "opened":
                return Type.CREATED;
            case "closed":
                return Type.REMOVED;
            case "reopened":
            default:
                return Type.UPDATED;
        }
    }

    @Override
    public String descriptionFor(@NonNull SCMNavigator navigator) {
        String state = getPayload().getObjectAttributes().getState();
        if (state != null) {
            switch (state) {
                case "opened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " opened in project " + getPayload().getProject().getName();
                case "reopened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " reopened in project " + getPayload().getProject().getName();
                case "closed":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " closed in project " + getPayload().getProject().getName();
            }
        }
        return "Merge request !" + getPayload().getObjectAttributes().getIid()
                + " event in project " + getPayload().getProject().getName();
    }

    @Override
    public boolean isMatch(@NonNull GitLabSCMNavigator navigator) {
        return navigator.getNavigatorProjects().contains(getPayload().getProject().getPathWithNamespace());
    }

    @Override
    public boolean isMatch(@NonNull GitLabSCMSource source) {
        return getPayload().getObjectAttributes().getTargetProjectId().equals(source.getProjectId());
    }

    @NonNull
    @Override
    public String getSourceName() {
        return getPayload().getProject().getPathWithNamespace();
    }

    @Override
    public String descriptionFor(@NonNull SCMSource source) {
        String state = getPayload().getObjectAttributes().getState();
        if (state != null) {
            switch (state) {
                case "opened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid() + " opened";
                case "reopened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid() + " reopened";
                case "closed":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid() + " closed";
            }
        }
        return "Merge request !" + getPayload().getObjectAttributes().getIid() + " event";
    }

    @Override
    public String description() {
        String state = getPayload().getObjectAttributes().getState();
        if (state != null) {
            switch (state) {
                case "opened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " opened in project " + getPayload().getProject().getPathWithNamespace();
                case "reopened":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " reopened in project " + getPayload().getProject().getPathWithNamespace();
                case "closed":
                    return "Merge request !" + getPayload().getObjectAttributes().getIid()
                            + " closed in project " + getPayload().getProject().getPathWithNamespace();
            }
        }
        return "Merge request !" + getPayload().getObjectAttributes().getIid() + " event";
    }

    @NonNull
    @Override
    public Map<SCMHead, SCMRevision> headsFor(GitLabSCMSource source) {
        Map<SCMHead, SCMRevision> result = new HashMap<>();
        try (GitLabSCMSourceRequest request = new GitLabSCMSourceContext(null, SCMHeadObserver.none())
                .withTraits(source.getTraits())
                .newRequest(source, null)) {
            MergeRequestEvent.ObjectAttributes m = getPayload().getObjectAttributes();
            Map<Boolean, Set<ChangeRequestCheckoutStrategy>> strategies = request.getMRStrategies();
            boolean fork = !getPayload().getObjectAttributes().getSourceProjectId()
                    .equals(getPayload().getObjectAttributes().getTargetProjectId());
            String originOwner = getPayload().getUser().getUsername();
            String originProjectPath = m.getSource().getPathWithNamespace();
            for (ChangeRequestCheckoutStrategy strategy : strategies.get(fork)) {
                MergeRequestSCMHead h = new MergeRequestSCMHead(
                        "MR-" + m.getIid() + (strategies.size() > 1
                                ? "-" + strategy.name().toLowerCase(Locale.ENGLISH)
                                : ""),
                        m.getIid(),
                        new BranchSCMHead(m.getTargetBranch()),
                        ChangeRequestCheckoutStrategy.MERGE,
                        fork
                                ? new SCMHeadOrigin.Fork(originProjectPath)
                                : SCMHeadOrigin.DEFAULT,
                        originOwner,
                        originProjectPath,
                        m.getSourceBranch()
                );
                result.put(h, m.getState().equals("closed")
                        ? null
                        : new MergeRequestSCMRevision(
                                h,
                                new BranchSCMRevision(h.getTarget(), "HEAD"),
                                new BranchSCMRevision(
                                        new BranchSCMHead(h.getOriginName()),
                                        m.getLastCommit().getId()
                                )
                        ));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return result;
    }
}
The important thing here is to ensure that these methods return as fast as possible when they know there is no match.
When there is not a good mapping between source control events and the events of the SCM API, it will be necessary to disentangle the events. For example, if "MySCM" worked more like Git, where a single "git push" can update multiple branches and create multiple tags, we might have an event payload that looks something more like:
{
"event":"push",
"server":"https://myscm.example.com:443/",
"team":"project-odd",
"repository":"webapp",
"branches":{
"feature-23":"af536372",
"feature-26":"6712edf2",
"master":"b8a6d7c2"
},
"tags":{
"1.0":"b8a6d7c2"
}
//...
}
There are two ways we can map this type of event payload into the SCM API’s event model:
-
We could separate this event into multiple events, each of which will have to be matched against all the listeners. Each source would then check their interest against the four events, for
feature-23
,feature-26
,master
and1.0
. -
We could issue this as a single event that returns the appropriate heads for each source (see the sketch after the note below). A source that is interested in features and master but not tags would get the
feature-23
,feature-26
andmaster
heads fromSCMHeadEvent.heads(source)
while a source that is interested in master and tags but not features would getmaster
and1.0
heads fromSCMHeadEvent.heads(source)
.
Note
|
The first option requires the least code and is conceptually easier to understand. The second option allows for significantly reducing the number of requests that are required to be made against the source control system. Additionally, when making requests against the source control system, an event scoped cache could be stored within the event object as it is likely that multiple interested parties will be making essentially the exact same checks. With source control systems that have a public service offering, e.g. GitHub, there will typically be API rate limits. When there are API rate limits, reducing the number of API calls will become a priority. |
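To illustrate the second option, the headsFor(MySCMSource) override of a hypothetical MySCMMultiPushSCMEvent could walk the branches and tags objects of the payload above and only return the heads that the source's configured traits are interested in. The wantBranches() / wantTags() methods on MySCMSourceContext and the boolean tag flag on MySCMHead are assumptions made for the purpose of the sketch:
@NonNull
@Override
protected Map<SCMHead, SCMRevision> headsFor(MySCMSource source) {
    Map<SCMHead, SCMRevision> result = new HashMap<>();
    MySCMSourceContext context = new MySCMSourceContext(null, SCMHeadObserver.none())
            .withTraits(source.getTraits());
    if (context.wantBranches()) { // assumed trait-driven flag on the context
        Iterator<Map.Entry<String, JsonNode>> branches = getPayload().path("branches").fields();
        while (branches.hasNext()) {
            Map.Entry<String, JsonNode> branch = branches.next();
            MySCMHead head = new MySCMHead(branch.getKey(), false);
            result.put(head, new MySCMRevision(head, branch.getValue().asText()));
        }
    }
    if (context.wantTags()) { // assumed trait-driven flag on the context
        Iterator<Map.Entry<String, JsonNode>> tags = getPayload().path("tags").fields();
        while (tags.hasNext()) {
            Map.Entry<String, JsonNode> tag = tags.next();
            MySCMHead head = new MySCMHead(tag.getKey(), true); // assuming the boolean marks the head as a tag
            result.put(head, new MySCMRevision(head, tag.getValue().asText()));
        }
    }
    return result;
}
A source whose traits only discover branches would then receive just the three branch heads from the payload, while a source that also discovers tags would additionally receive the 1.0 tag.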
If you only implement support for some events, please make best effort to ensure that the first release of your plugin has support for the following three events:
-
jenkins.scm.api.SCMHeadEvent
of typeUPDATED
representing the change of a revision in a specific head. -
jenkins.scm.api.SCMHeadEvent
of typeCREATED
representing the creation of a new head. -
jenkins.scm.api.SCMSourceEvent
of typeCREATED
representing the creation of a new source.
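The SCMSourceEvent has not been illustrated yet; a minimal sketch for the MySCM JSON payloads used earlier (fired with Type.CREATED when a new repository appears) might look like the following. The payload field names mirror the earlier examples:
public class MySCMSourceEvent extends SCMSourceEvent<JsonNode> {

    public MySCMSourceEvent(Type type, JsonNode payload, String origin) {
        super(type, payload, origin);
    }

    @Override
    public boolean isMatch(@NonNull SCMNavigator navigator) {
        if (!(navigator instanceof MySCMNavigator)) {
            return false;
        }
        MySCMNavigator n = (MySCMNavigator) navigator;
        return n.getServer().equals(getPayload().path("server").asText())
                && n.getTeam().equals(getPayload().path("team").asText());
    }

    @Override
    public boolean isMatch(@NonNull SCMSource source) {
        if (!(source instanceof MySCMSource)) {
            return false;
        }
        MySCMSource s = (MySCMSource) source;
        return s.getServer().equals(getPayload().path("server").asText())
                && s.getTeam().equals(getPayload().path("team").asText())
                && s.getRepository().equals(getPayload().path("repository").asText());
    }

    @NonNull
    @Override
    public String getSourceName() {
        return getPayload().path("repository").asText();
    }
}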
Useful, but non-essential events are:
-
jenkins.scm.api.SCMHeadEvent
of typeREMOVED
representing the removal of a specific head.
jenkins.scm.api.SCMSourceEvent
of typeREMOVED
representing the removal of a specific source.
These events will be used to track heads that no longer exist and sources that are no longer relevant. However, as a full (non-event driven) scan would be required to confirm that the head / source has actually been removed rather than moved between sources / navigators, their non-implementation will have minimal impact.
Finally, the metadata update events are just polish to show a professionally implemented plugin. Not every source control system will be able to store customized metadata, so these events may not even be relevant for some source control systems.
-
jenkins.scm.api.SCMHeadEvent
of typeUPDATED
representing the change of metadata for a specific head. -
jenkins.scm.api.SCMSourceEvent
of typeUPDATED
representing the change of metadata for a specific source. -
jenkins.scm.api.SCMNavigatorEvent
of typeUPDATED
representing the change of metadata for a collection of sources as an aggregate.
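As with the other events, firing a metadata update is just a matter of translating the payload in the webhook. For example, assuming the source control system sends a repository-update payload when a repository's description or avatar changes (the event name is an assumption), the webhook could fire an SCMSourceEvent of type UPDATED using a MySCMSourceEvent implemented along the same lines as the head events above:
// in doNotify(), after parsing the payload into "json"
if ("repository-update".equals(json.path("event").asText())) {
    // metadata-only change: no new heads, just let owners refresh the source's metadata
    SCMSourceEvent.fireNow(new MySCMSourceEvent(SCMEvent.Type.UPDATED, json, SCMEvent.originOf(req)));
}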
We can reuse the previous test environment:
-
Update a file in one of the branches with a
Jenkinsfile
.Verify that the event support for an updated revision of an existing branch results in that branch being triggered without either a full reindex of the multibranch project or a full scan of the organization folder.
-
Create a new branch from a branch that already has a
Jenkinsfile
Verify that the event support for a new branch results in that branch being discovered and a project created for it without either a full reindex of the multibranch project or a full scan of the organization folder.
-
(If technically possible) Create a new repository with initial content that already has a branch with a
Jenkinsfile
. For example, in GitHub you could clone an existing repository into the user / team. Verify that the event support for a new repository results in that repository being indexed, the branch with the
Jenkinsfile
being discovered and consequently both the multibranch project and the branch project being created without a full scan of the organization folder. -
Add a
Jenkinsfile
to a branch in a repository that does not have any branches with aJenkinsfile
.Verify that the event support for an updated revision of an existing branch where there is no multibranch project for the repository (and consequently no branch project for the branch) results in both the multibranch project and the branch project being created without a full scan of the organization folder.
-
Remove a branch that has a
Jenkinsfile
. Verify that the event support for removal of a branch results in that branch project being disabled until the next full index of the multibranch project (or longer if the multibranch project has an orphaned item strategy that retains branches for a period of time after the branch is "dead").
-
Remove a repository that has at least one branch with a
Jenkinsfile
.Verify that the event support for removal of a repository results in all the branch projects being disabled until the next full index of the multibranch project (or longer if the multibranch project has an orphaned item strategy that retains branches for a period of time after the branch is "dead") and that the multibranch project itself is disabled until the next full scan of the organization folder (or longer depending on the organization folder’s orphaned item strategy).
If you have implemented tag support, repeat the above tests for tags where those tests make sense.
(Some source control systems may be exceedingly strict on tag like objects.
For example, Accurev will not permit the deletion of snapshots or the modification of snapshots in any way.
So in the case of Accurev, it would not be possible to test adding a Jenkinsfile
to a snapshot.
For Accurev, it may make sense to test hiding a snapshot and unhiding a snapshot given that hiding a snapshot is the closest equivalent to deleting a tag)
If you have implemented change request support, repeat the above tests for change requests where those tests make sense.
If your source control system has support for metadata attached to SCMHead
/ SCMSource
/ SCMNavigator
concepts:
-
Test that updating the metadata for a branch / tag / change request results in the corresponding update to the metadata for that project without triggering a build of the project or a full reindex of the multibranch project.
For example, changing the title of a change request results in the description of the change request’s branch project being updated.
-
Test that updating the metadata for a repository results in the corresponding update to the metadata for the multibranch project without triggering a full reindex of the multibranch project or a full scan of the organization folder.
For example, changing the description of a repository results in the description of the multibranch project being updated.
-
Test that updating the metadata for a collection of repositories results in the corresponding update to the metadata for the organization folder without triggering a full scan of the organization folder.
For example, changing the avatar of an organization results in the avatar of the organization folder being updated.
-
Verify that all the repositories containing at least one branch with a
Jenkinsfile
have had multibranch projects created for them.