Hapi pipelines plugin for the Screwdriver API
```javascript
const Hapi = require('@hapi/hapi');
const pipelinesPlugin = require('./');

const server = Hapi.server({ port: 3000 });

const init = async () => {
    await server.register({
        plugin: pipelinesPlugin,
        options: {}
    });

    await server.start();
    console.log('Server running at:', server.info.uri);
};

init();
```
`page`, `count`, `sort`, `sortBy`, `search`, and `configPipelineId` are optional.

`search` will search for a pipeline with a name containing the search keyword in the `scmRepo` field.
`GET /pipelines?page={pageNumber}&count={countNumber}&configPipelineId={configPipelineId}&search={search}`

Need to have array format for `ids` to only return pipelines with matching ids:

`GET /pipelines?search={search}&ids[]=12345&ids[]=55555`
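Since `ids` must be passed in array format, a client can build such a query with repeated `ids[]` keys. A minimal sketch using Node's `URLSearchParams` (the helper name is illustrative; only the path and parameter names come from the route above):

```javascript
// Build a /pipelines query that filters by a search keyword and a list of
// pipeline ids. Each id becomes its own `ids[]` entry, which is the array
// format the endpoint expects.
function buildPipelinesQuery(search, ids) {
    const params = new URLSearchParams({ search });

    ids.forEach((id) => params.append('ids[]', String(id)));

    return `/pipelines?${params.toString()}`;
}

console.log(buildPipelinesQuery('data-model', [12345, 55555]));
// /pipelines?search=data-model&ids%5B%5D=12345&ids%5B%5D=55555
```

Note that `URLSearchParams` percent-encodes the brackets (`%5B%5D`); once decoded, the query is identical to the literal `ids[]=…` form shown above.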
`GET /pipelines/{id}`
Create a pipeline and create a job called `main`.

`POST /pipelines`

Arguments

- `checkoutUrl` - Source code URL for the application. For a git-based repository, it is typically the SSH endpoint and the branch name, separated by an octothorpe. Must be unique.
- `rootDir` - *Optional* Root directory where the source code lives. Defaults to empty string.

Example payload:

```json
{
    "checkoutUrl": "git@github.com:screwdriver-cd/data-model.git#master",
    "rootDir": "src/app/component"
}
```
You can update the `checkoutUrl` of a pipeline.

`PUT /pipelines/{id}`

Arguments

- `checkoutUrl` - Source code URL for the application. For a git-based repository, it is typically the SSH endpoint and the branch name, separated by an octothorpe. Must be unique.
- `rootDir` - *Optional* Root directory where the source code lives. Defaults to empty string.

Example payload:

```json
{
    "checkoutUrl": "git@github.com:screwdriver-cd/data-model.git#master",
    "rootDir": "src/app/component"
}
```
`DELETE /pipelines/{id}`
- Synchronize the pipeline by looking up latest `screwdriver.yaml`
- Create, update, or disable jobs if necessary
- Store/update the pipeline `workflowGraph`

`POST /pipelines/{id}/sync`

- Synchronize webhooks for the pipeline
- Add or update webhooks if necessary

`POST /pipelines/{id}/sync/webhooks`

- Synchronize pull requests for the pipeline
- Add or update pull request jobs if necessary

`POST /pipelines/{id}/sync/pullrequests`
Query Params:

- `page` - *Optional* Specific page of the set to return
- `count` - *Optional* Number of items per page
- `sort` - *Optional* Sort rangekey by `ascending` or `descending` (default `descending`)
- `sortBy` - *Optional* Field to sort by
- `type` - *Optional* Get pipeline or pr events (default `pipeline`)
- `prNum` - *Optional* Return only PR events of specified PR number
- `sha` - *Optional* Search `sha` and `configPipelineSha` for events
- `groupEventId` - *Optional* Return only events with a specified groupEventId
- `id` - *Optional* Fetch specific event ID; alternatively can use greater than (`gt:`) or less than (`lt:`) prefix

`GET /pipelines/{id}/events?page={pageNumber}&count={countNumber}&sort={sort}&type={type}&prNum={prNumber}&sha={sha}`

`GET /pipelines/{id}/events?id=gt:{eventId}&count={countNumber}` (greater than eventId)

`GET /pipelines/{id}/events?id=lt:{eventId}&count={countNumber}&sort=ascending` (less than eventId)
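The `gt:`/`lt:` prefixes make cursor-style paging through events possible. A sketch of building such URLs (the helper and its defaults are illustrative, not part of the API):

```javascript
// Build an event-listing path, optionally anchored at an event-id cursor.
// `direction` picks the `gt:` (events after the id) or `lt:` (events before
// the id) prefix described above; omit `cursor` for the first page.
function buildEventsQuery(pipelineId, { cursor, direction = 'lt', count = 50 } = {}) {
    const params = new URLSearchParams({ count: String(count) });

    if (cursor !== undefined) {
        params.append('id', `${direction}:${cursor}`);
    }

    return `/pipelines/${pipelineId}/events?${params.toString()}`;
}

console.log(buildEventsQuery(123, { cursor: 9000, count: 10 }));
// /pipelines/123/events?count=10&id=lt%3A9000
```

The colon in the prefix is percent-encoded (`%3A`) by `URLSearchParams`, which decodes back to the `id=lt:{eventId}` form shown above.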
`page`, `count`, `sort`, `latest`, `sortBy`, `fetchSteps`, `readOnly`, and `groupEventId` are optional.

When `latest=true` and `groupEventId` is set, only the latest builds in the pipeline for that `groupEventId` will be returned. The `latest` parameter must be used in conjunction with `groupEventId`.

`GET /pipelines/{id}/builds?page={pageNumber}&count={countNumber}&sort={sort}&latest=true&groupEventId={groupEventId}&sortBy={sortBy}&fetchSteps=false&readOnly=false`
`archived` is optional and has a default value of `false`, which makes the endpoint not return archived jobs (e.g. closed pull requests).

Arguments:

- `archived` - *Optional* Has a default value of `false`, which makes the endpoint not return archived jobs (e.g. closed pull requests)
- `type` - *Optional* Can be set to `pr` or `pipeline` to only return PR jobs or non-PR jobs
- `jobName` - *Optional* Can be set to return only a single job

`GET /pipelines/{id}/jobs?archived={boolean}&type={type}&jobName={jobName}`
`GET /pipelines/{id}/admin`

`GET /pipelines/{id}/triggers`
`page`, `count`, `sort`, `sortBy`, and `name` are optional.

`GET /pipelines/{id}/stages?page={pageNumber}&count={countNumber}&sort={sort}&name={stageName}`
`GET /pipelines/{id}/secrets`

`GET /pipelines/{id}/metrics`

`GET /pipelines/{id}/metrics?startTime=2019-02-01T12:00:00.000Z`

`GET /pipelines/{id}/metrics?aggregateInterval=week`

Need to have array format for `downtimeJobs` and `downtimeStatuses`:

`GET /pipelines/{id}/metrics?downtimeJobs[]=123&downtimeJobs[]=456&downtimeStatuses[]=ABORTED`
- Start all child pipelines belonging to this config pipeline at once

`POST /pipelines/{id}/startall`
`POST /pipelines/{id}/token`

`GET /pipelines/{id}/tokens`

`PUT /pipelines/{pipelineId}/tokens/{tokenId}`

`PUT /pipelines/{pipelineId}/tokens/{tokenId}/refresh`

`DELETE /pipelines/{pipelineId}/tokens/{tokenId}`

`DELETE /pipelines/{pipelineId}/tokens`

`GET /pipelines/{id}/jobs/{jobName}/latestBuild`

Can search by build status:

`GET /pipelines/{id}/jobs/{jobName}/latestBuild?status=SUCCESS`
`DELETE /pipelines/{id}/caches?scope={scope}&cacheId={id}`

Path Params:

- `id` - The id of the pipeline

Query Params:

- `scope` - Scope of the cache; supported values are `pipelines`, `jobs`, and `events`
- `cacheId` - The id of the cache - pipelineId/jobId/eventId
```yaml
ecosystem:
    store: 'https://store.screwdriver.cd'
    queue: 'https://queue.screwdriver.cd'
    cache:
        strategy: 's3'
```

Route requests to the queue service API if strategy is `disk` and to the store API if strategy is `s3`.
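That routing rule can be sketched as a small helper that picks the base API from the ecosystem config (a sketch, assuming the config shape shown above; the function name is illustrative):

```javascript
// Pick which API should receive cache requests based on the configured
// cache strategy: 'disk' caches are served by the queue service, 's3'
// caches by the store, as described above.
function cacheApiFor(ecosystem) {
    const { strategy } = ecosystem.cache;

    if (strategy === 'disk') return ecosystem.queue;
    if (strategy === 's3') return ecosystem.store;
    throw new Error(`Unknown cache strategy: ${strategy}`);
}

const ecosystem = {
    store: 'https://store.screwdriver.cd',
    queue: 'https://queue.screwdriver.cd',
    cache: { strategy: 's3' }
};

console.log(cacheApiFor(ecosystem)); // https://store.screwdriver.cd
```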
`POST /pipelines/{id}/openPr`
The server supplies factories to plugins in the form of server settings:

```javascript
// handler pipelinePlugin.js
handler: async (request, h) => {
    const factory = request.server.app.pipelineFactory;
    // ...
}
```
`GET /pipeline/templates`

Can use additional options for sorting, pagination and count:

`GET /pipeline/templates?sort=ascending&sortBy=name&page=1&count=50`

`GET /pipeline/templates/{namespace}/{name}/versions`

Can use additional options for sorting, pagination and count:

`GET /pipeline/templates/{namespace}/{name}/versions?sort=ascending&page=1&count=50`
Creating a template will store the template meta (`name`, `namespace`, `maintainer`, `latestVersion`, `trustedSinceVersion`, `pipelineId`) and template version (`description`, `version`, `config`, `createTime`, `templateId`) into the datastore.

`version` will be auto-bumped. For example, if the template already exists at version `1.0.0` and the version passed in is `1.0.0`, the newly created template will be version `1.0.1`.
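A rough sketch of that auto-bump rule (the real API's version handling may cover more cases, such as partial versions; the helper name is illustrative):

```javascript
// Given an existing 'major.minor.patch' version, return the next patch
// version, mirroring the auto-bump example above (1.0.0 -> 1.0.1).
function bumpPatch(version) {
    const [major, minor, patch] = version.split('.').map(Number);

    return `${major}.${minor}.${patch + 1}`;
}

console.log(bumpPatch('1.0.0')); // 1.0.1
```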
`POST /pipeline/template`

Arguments: `name`, `namespace`, `version`, `description`, `maintainer`, `config`

- `name` - Name of the template
- `namespace` - Namespace of the template
- `version` - Version of the template
- `description` - Description of the template
- `maintainer` - Maintainer of the template
- `config` - Config of the template. This field is an object that includes `steps`, `image`, and optional `secrets`, `environments`. Similar to what's inside the `pipeline`.

Example payload:

```json
{
    "name": "example-template",
    "namespace": "my-namespace",
    "version": "1.3.1",
    "description": "An example template",
    "maintainer": "[email protected]",
    "config": {
        "steps": [{
            "echo": "echo hello"
        }]
    }
}
```
Validate a pipeline template and return a JSON containing the boolean property `valid` indicating whether the template is valid.

`POST /pipeline/template/validate`

Arguments: `name`, `namespace`, `version`, `description`, `maintainer`, `config`

- `name` - Name of the template
- `namespace` - Namespace of the template
- `version` - Version of the template
- `description` - Description of the template
- `maintainer` - Maintainer of the template
- `config` - Config of the template. This field is an object that includes `steps`, `image`, and optional `secrets`, `environments`. Similar to what's inside the `pipeline`.

Example payload:

```json
{
    "name": "example-template",
    "namespace": "my-namespace",
    "version": "1.3.1",
    "description": "An example template",
    "maintainer": "[email protected]",
    "config": {
        "steps": [{
            "echo": "echo hello"
        }]
    }
}
```
`GET /pipeline/template/{namespace}/{name}`

`GET /pipeline/template/{id}`

`GET /pipeline/template/{namespace}/{name}/{versionOrTag}`

Template tags allow fetching a template version by tag. For example, tag version `1.1.0` of a template as `stable`.

`GET /pipeline/templates/{namespace}/{name}/tags`

Can use additional options for sorting, pagination and count:

`GET /pipeline/templates/{namespace}/{name}/tags?sort=ascending&sortBy=name&page=1&count=50`
If the template tag already exists, it will update the tag with the new version. If the template tag doesn't exist yet, this endpoint will create the tag.

Note: This endpoint is only accessible in `build` scope and the permission is tied to the pipeline that creates the template.

`PUT /templates/{templateName}/tags/{tagName}` with the following payload:

- `version` - Exact version of the template (ex: `1.1.0`)
Deleting a pipeline template will delete the template and all of its associated tags and versions.

`DELETE /pipeline/templates/{namespace}/{name}`

- `name` - Name of the template

Delete the template version and all of its associated tags. If the deleted version was the latest version, the API will set the `latestVersion` attribute of the templateMeta to the previous version.

`DELETE /pipeline/templates/{namespace}/{name}/versions/{version}`

Arguments: `namespace`, `name`, `version`

- `namespace` - Namespace of the template
- `name` - Name of the template
- `version` - Version of the template
Delete the template tag. This does not delete the template itself.

Note: This endpoint is only accessible in `build` scope and the permission is tied to the pipeline that creates the template.

`DELETE /pipeline/templates/{namespace}/{name}/tags/{tag}`

Arguments: `namespace`, `name`, `tag`

- `namespace` - Namespace of the template
- `name` - Name of the template
- `tag` - Tag name of the template
Update a pipeline template's trusted property.

`PUT /pipeline/templates/{namespace}/{name}/trusted`

Arguments: `namespace`, `name`

- `namespace` - Namespace of the template
- `name` - Name of the template

Example payload:

```json
{
    "trusted": true
}
```