Settings

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

This page describes the configurable settings for Atomist. Enabling any of these settings instructs Atomist to carry out an action whenever a specific Git event occurs. These features require that you install the Atomist GitHub app in your GitHub organization.

To view and manage these settings, go to the settings page on the Atomist website.

New image vulnerabilities

Extract a software bill of materials from container images, and match packages with data from vulnerability advisories. Identify when new vulnerabilities are introduced, and display them as a GitHub status check on the pull request that introduces them.

Base image tags

Pin base image tags to digests in Dockerfiles, and check for supported tags on Docker Official Images. Atomist automatically creates a pull request that pins the Dockerfile to the latest digest for the base image tag in use.

Get started

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

To get started with Atomist, you’ll need to:

  • Connect Atomist with your container registry
  • Link your container images with their Git source

Before you can begin the setup, you need a Docker ID. If you don’t already have one, you can register here.

Connect container registry

This section describes how to integrate Atomist with your container registry. Follow the instructions applicable to the type of container registry you use. After completing this setup, Atomist has read-only access to your registry and is notified whenever images are pushed or deleted.

Using Docker Hub? 🐳

If you are using Docker Hub as your container registry, you can skip this step and go straight to linking images to Git source. Atomist integrates seamlessly with your Docker Hub organizations.


Amazon Elastic Container Registry

When setting up an Amazon Elastic Container Registry (ECR) integration with Atomist, the following AWS resources are required:

  • A read-only Identity and Access Management (IAM) role, so that Atomist can access the container registry
  • Amazon EventBridge, to notify Atomist of pushed and deleted images

This procedure uses a pre-defined CloudFormation template to create the necessary IAM role and Amazon EventBridge resources. The template protects you from confused deputy attacks by ensuring a unique ExternalId, along with the appropriate condition on the IAM role statement.
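For context, this protection boils down to a condition on the IAM role's trust policy, roughly like the following sketch. The account ID and ExternalId values are placeholders; the actual template fills in the real values:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<atomist-account-id>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<unique-external-id>" }
      }
    }
  ]
}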

  1. Go to https://dso.docker.com and sign in using your Docker ID credentials.
  2. Navigate to the Integrations tab and select Configure next to the Elastic Container Registry integration.
  3. Fill out all the fields, except Trusted Role ARN. The trusted role identity is known only after applying the CloudFormation template.

    Choose basic auth credentials to protect the endpoint that AWS uses to notify Atomist. The URL and the basic auth credentials are parameters to the CloudFormation template.

  4. Now create the CloudFormation stack. Before creating the stack, AWS asks you to enter three parameters.

    • Url: the API endpoint copied from Atomist.
    • Username and Password: basic authentication credentials for the endpoint. These must match what you entered in the Atomist workspace.

    Use the following Launch Stack buttons to start reviewing the details in your AWS account.

    Note

    Before creating the stack, AWS asks you to acknowledge that creating this stack requires a capability. The stack creates a role that grants Atomist read-only access to ECR resources.


    Region ecr-integration.template
    us-east-1 Launch Stack
    us-east-2 Launch Stack
    us-west-1 Launch Stack
    us-west-2 Launch Stack
    eu-west-1 Launch Stack
    eu-west-2 Launch Stack
    eu-west-3 Launch Stack
    eu-central-1 Launch Stack
    ca-central-1 Launch Stack
    ap-southeast-2 Launch Stack
  5. After creating the stack, copy the Value for the AssumeRoleArn key from the Outputs tab in AWS.

    AWS stack creation output

  6. Paste the copied AssumeRoleArn value into the Trusted Role ARN field on the Atomist configuration page.

  7. Select Save Configuration.

    Atomist tests the connection with your ECR registry. A green check mark displays beside the integration if a successful connection is made.

    integration list showing a successful ECR integration

GitHub Container Registry

To integrate Atomist with GitHub Container Registry, connect your GitHub account, and enter a personal access token for Atomist to use when pulling container images.

  1. Go to https://dso.docker.com and sign in using your Docker ID credentials.
  2. Connect your GitHub account as instructed in the GitHub app page.
  3. Open the Integrations tab, and select Configure next to the GitHub Container Registry in the list.
  4. Fill out the fields and select Save Configuration.

    Atomist requires the Personal access token for connecting images to private repositories. The token must have the read:packages scope.

    Leave the Personal access token field blank if you only want to index images in public repositories.

Google Container Registry and Artifact Registry

Setting up an Atomist integration with Google Container Registry and Google Artifact Registry involves:

  • Creating a service account and granting it a read-only access role.
  • Creating a PubSub subscription on the gcr topic to watch for activity in the registry.

Completing the following procedure requires administrator permissions in the project.

  1. Set the following environment variables. You will use them in the next steps when configuring the Google Cloud resources, using the gcloud CLI.

    export SERVICE_ACCOUNT_ID="atomist-integration" # can be anything you like
    export PROJECT_ID="YOUR_GCP_PROJECT_ID"
    
  2. Create the service account.

    gcloud iam service-accounts create ${SERVICE_ACCOUNT_ID} \
        --project ${PROJECT_ID} \
        --description="Atomist Integration Service Account" \
        --display-name="Atomist Integration"
    
  3. Grant the service account read-only access to the artifact registry.

    The role name differs depending on whether you use Artifact Registry or Container Registry:

    • roles/artifactregistry.reader for Google Artifact Registry
    • roles/storage.objectViewer for Google Container Registry

    gcloud projects add-iam-policy-binding ${PROJECT_ID} \
        --project ${PROJECT_ID} \
        --member="serviceAccount:${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
        --role="roles/artifactregistry.reader" # change this if you use GCR
    
  4. Grant Atomist access to the service account.

    gcloud iam service-accounts add-iam-policy-binding "${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com" \
        --project ${PROJECT_ID} \
        --member="serviceAccount:atomist-bot@atomist.iam.gserviceaccount.com" \
        --role="roles/iam.serviceAccountTokenCreator"
    
  5. Go to dso.docker.com and sign in with your Docker ID credentials.
  6. Navigate to the Integrations tab and select Configure next to the Google Artifact Registry integration.
  7. Fill out the following fields:

    • Project ID: the PROJECT_ID used in the earlier steps.
    • Service Account: the email address of the service account created in step 2.
  8. Select Save Configuration. Atomist will test the connection. Green check marks indicate a successful connection.

    GCP configuration successful

    Next, create a new PubSub subscription on the gcr topic in the registry. This subscription notifies Atomist about new or deleted images in the registry.

  9. Copy the URL in the Events Webhook field to your clipboard. This will be the PUSH_ENDPOINT_URI for the PubSub subscription.

  10. Define the following three variable values, in addition to the PROJECT_ID and SERVICE_ACCOUNT_ID from earlier:

    • PUSH_ENDPOINT_URI: the webhook URL copied from the Atomist workspace.
    • SERVICE_ACCOUNT_EMAIL: the service account address; a combination of the service account ID and project ID.
    • SUBSCRIPTION: the name of the PubSub subscription (can be anything).

    PUSH_ENDPOINT_URI={COPY_THIS_FROM_ATOMIST}
    SERVICE_ACCOUNT_EMAIL="${SERVICE_ACCOUNT_ID}@${PROJECT_ID}.iam.gserviceaccount.com"
    SUBSCRIPTION="atomist-integration-subscription"
    
  11. Create the PubSub subscription for the gcr topic.

    gcloud pubsub subscriptions create ${SUBSCRIPTION} \
      --topic='gcr' \
      --push-auth-token-audience='atomist' \
      --push-auth-service-account="${SERVICE_ACCOUNT_EMAIL}" \
      --push-endpoint="${PUSH_ENDPOINT_URI}"
    

When the first image push is successfully detected, a green check mark on the integration page will indicate that the integration works.

JFrog Artifactory

Atomist can index images in a JFrog Artifactory repository by means of a monitoring agent.

The agent scans the configured repositories at regular intervals and sends the metadata of newly discovered images to the Atomist data plane.

In the following example, https://hal9000.docker.com is a private registry only visible on an internal network.

docker run -ti atomist/docker-registry-broker:latest \
  index-image remote \
  --workspace AQ1K5FIKA \
  --api-key team::6016307E4DF885EAE0579AACC71D3507BB38E1855903850CF5D0D91C5C8C6DC0 \
  --artifactory-url https://hal9000.docker.com \
  --artifactory-repository atomist-docker-local \
  --container-registry-host atomist-docker-local.hal9000.docker.com \
  --username admin \
  --password password
Parameter Description
workspace ID of your Atomist workspace.
api-key Atomist API key.
artifactory-url Base URL of the Artifactory instance. Must not contain trailing slashes.
artifactory-repository The name of the container registry to watch.
container-registry-host The hostname associated with the Artifactory repository containing images, if different from artifactory-url .
username Username for HTTP basic authentication with Artifactory.
password Password for HTTP basic authentication with Artifactory.

Link images to Git source

Knowing the source repository of an image is a prerequisite for Atomist to interact with the Git repository. For Atomist to be able to link scanned images back to a Git repository, you must annotate the image at build time.

The image labels that Atomist requires are:

Label Value
org.opencontainers.image.revision The commit revision that the image is built for.
com.docker.image.source.entrypoint Path to the Dockerfile, relative to project root.

For more information about pre-defined OCI annotations, see the specification document on GitHub.

You can add these labels to images using the built-in Git provenance feature of Buildx, or set them using the --label CLI argument.

Add labels using Docker Buildx

Beta

Git provenance labels in Buildx are a Beta feature.

To add the image labels using Docker Buildx, set the environment variable BUILDX_GIT_LABELS=1. Buildx then creates the labels automatically when building the image.

export BUILDX_GIT_LABELS=1
docker buildx build . -f docker/Dockerfile

Add labels using the label CLI argument

Assign image labels using the --label argument for docker build.

docker build . -f docker/Dockerfile -t $IMAGE_NAME \
    --label "org.opencontainers.image.revision=10ac8f8bdaa343677f2f394f9615e521188d736a" \
    --label "com.docker.image.source.entrypoint=docker/Dockerfile"

Images built in a CI/CD environment can leverage the built-in environment variables when setting the Git revision label:

Build tool Environment variable
GitHub Actions ${{ github.sha }}
GitHub Actions, pull requests ${{ github.event.pull_request.head.sha }}
GitLab CI/CD $CI_COMMIT_SHA
Docker Hub automated builds $SOURCE_COMMIT
Google Cloud Build $COMMIT_SHA
AWS CodeBuild $CODEBUILD_RESOLVED_SOURCE_VERSION
Manually $(git rev-parse HEAD)

Consult the documentation for your CI/CD platform to learn which variables to use.
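For example, when building outside a CI/CD system, you could resolve the revision manually with git rev-parse. This is a minimal sketch; the image name and Dockerfile path are illustrative:

docker build . -f docker/Dockerfile -t $IMAGE_NAME \
    --label "org.opencontainers.image.revision=$(git rev-parse HEAD)" \
    --label "com.docker.image.source.entrypoint=docker/Dockerfile"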

Where to go next

Atomist is now tracking the bill of materials, packages, and vulnerabilities for your images! You can view your image scan results on the images overview page.

Teams use Atomist to protect downstream workloads from new vulnerabilities, and to track and remediate new vulnerabilities that affect existing workloads. The following sections describe how to integrate and configure Atomist further, for example to gain visibility into container workload systems like Kubernetes.

  • Connect Atomist with your GitHub repositories by installing the Atomist app for your GitHub organization.
  • Manage which Atomist features you use in settings.
  • Learn about deployment tracking and how Atomist can help watch your deployed containers.
  • Atomist watches for new advisories from public sources, but you can also add your own internal advisories for more information.

Introduction to Atomist

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

Atomist is a data and automation platform for managing the software supply chain. It extracts metadata from container images, evaluates the data, and helps you understand the state of the image.

Integrating Atomist into your systems and repositories grants you essential information about the images you build, and the containers running in production. Beyond collecting and visualizing information, Atomist can help you further by giving you recommendations, notifications, validation, and more.

Example capabilities made possible with Atomist are:

  • Stay up to date with advisory databases without having to re-analyze your images.
  • Automatically open pull requests to update base images for improved product security.
  • Check that your applications don’t contain secrets, such as a password or API token, before they get deployed.
  • Dissect Dockerfiles and see where vulnerabilities come from, line by line.

How it works

Atomist monitors your container registry for new images. When it finds a new image, it analyzes and extracts metadata about the image contents and any base images used. The metadata is uploaded to an isolated partition in the Atomist data plane where it’s securely stored.

The Atomist data plane is a combination of metadata and a large knowledge graph of public software and vulnerability data. Atomist determines the state of your container by overlaying the image metadata with the knowledge graph.

What’s next?

Head over to the try atomist page for instructions on how to run Atomist, locally and with no strings attached.


Color output controls

BuildKit and Buildx support modifying the colors used for terminal output. You can set the environment variable BUILDKIT_COLORS to something like run=123,20,245:error=yellow:cancel=blue:warning=white to specify the colors you would like to use:

Progress output custom colors

Setting NO_COLOR to anything will disable any colorized output as recommended by no-color.org:

Progress output no color
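For example, you could enable custom colors, or disable color entirely for a single build, like this (a minimal sketch; pick whichever colors you prefer):

export BUILDKIT_COLORS="run=123,20,245:error=yellow:cancel=blue:warning=white"
docker buildx build .

NO_COLOR=1 docker buildx build .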

Note

Parsing errors will be reported but ignored. This will result in default color values being used where needed.

See also the list of pre-defined colors.


Configure BuildKit

If you create a docker-container or kubernetes builder with Buildx, you can apply a custom BuildKit configuration by passing the --config flag to the docker buildx create command.

Registry mirror

You can define a registry mirror to use for your builds. Doing so redirects BuildKit to pull images from a different hostname. The following steps show how to define a mirror for docker.io (Docker Hub) that points to mirror.gcr.io.

  1. Create a TOML file at /etc/buildkitd.toml with the following content:

    debug = true
    [registry."docker.io"]
      mirrors = ["mirror.gcr.io"]
    

    Note

    debug = true turns on debug requests in the BuildKit daemon, which logs a message that shows when a mirror is being used.

  2. Create a docker-container builder that uses this BuildKit configuration:

    $ docker buildx create --use --bootstrap \
      --name mybuilder \
      --driver docker-container \
      --config /etc/buildkitd.toml
    
  3. Build an image:

    docker buildx build --load . -f - <<EOF
    FROM alpine
    RUN echo "hello world"
    EOF
    

The BuildKit logs for this builder now show that it uses the GCR mirror. You can tell because the response messages include the x-goog-* HTTP headers.

$ docker logs buildx_buildkit_mybuilder0
...
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.container.image.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1469 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"774380abda8f4eae9a149e5d5d3efc83\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:57 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788077652182 response.header.x-goog-hash="crc32c=V3DSrg==" response.header.x-goog-hash.1="md5=d0OAq9qPTq6aFJ5dXT78gw==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1469 response.header.x-guploader-uploadid=ADPycduqQipVAXc3tzXmTzKQ2gTT6CV736B2J628smtD1iDytEyiYCgvvdD8zz9BT1J1sASUq9pW_ctUyC4B-v2jvhIxnZTlKg response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=760 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1471 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:35:13 GMT" response.header.etag="\"35d688bd15327daafcdb4d4395e616a8\"" response.header.expires="Sun, 06 Feb 2022 18:35:13 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:12 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788032100793 response.header.x-goog-hash="crc32c=aWgRjA==" response.header.x-goog-hash.1="md5=NdaIvRUyfar8201DleYWqA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1471 response.header.x-guploader-uploadid=ADPycdtR-gJYwC7yHquIkJWFFG8FovDySvtmRnZBqlO3yVDanBXh_VqKYt400yhuf0XbQ3ZMB9IZV2vlcyHezn_Pu3a1SMMtiw response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.image.rootfs.diff.tar.gzip, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=2818413 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"1d55e7be5a77c4a908ad11bc33ebea1c\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:06 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788026431708 response.header.x-goog-hash="crc32c=ZojF+g==" response.header.x-goog-hash.1="md5=HVXnvlp3xKkIrRG8M+vqHA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=2818413 response.header.x-guploader-uploadid=ADPycdsebqxiTBJqZ0bv9zBigjFxgQydD2ESZSkKchpE0ILlN9Ibko3C5r4fJTJ4UR9ddp-UBd-2v_4eRpZ8Yo2llW_j4k8WhQ response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
...

Setting registry certificates

If you specify registry certificates in the BuildKit configuration, the daemon copies the files into the container under /etc/buildkit/certs . The following steps show adding a self-signed registry certificate to the BuildKit configuration.

  1. Add the following configuration to /etc/buildkitd.toml :

    # /etc/buildkitd.toml
    debug = true
    [registry."myregistry.com"]
      ca=["/etc/certs/myregistry.pem"]
      [[registry."myregistry.com".keypair]]
        key="/etc/certs/myregistry_key.pem"
        cert="/etc/certs/myregistry_cert.pem"
    

    This tells the builder to push images to the myregistry.com registry using the certificates in the specified location (/etc/certs).

  2. Create a docker-container builder that uses this configuration:

    $ docker buildx create --use --bootstrap \
      --name mybuilder \
      --driver docker-container \
      --config /etc/buildkitd.toml
    
  3. Inspect the builder’s configuration file (/etc/buildkit/buildkitd.toml). It shows that the certificates are now configured in the builder.

    $ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
    
    debug = true
    
    [registry]
    
      [registry."myregistry.com"]
        ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]
    
        [[registry."myregistry.com".keypair]]
          cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
          key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
    
  4. Verify that the certificates are inside the container:

    $ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
    myregistry.pem    myregistry_cert.pem   myregistry_key.pem
    

Now you can push to the registry using this builder, and it will authenticate using the certificates:

$ docker buildx build --push --tag myregistry.com/myimage:latest .

CNI networking

CNI networking for builders can be useful for dealing with network port contention during concurrent builds. CNI is not yet available in the default BuildKit image, but you can create your own image that includes CNI support.

The following Dockerfile example shows a custom BuildKit image with CNI support. It uses the CNI config for integration tests in BuildKit as an example. Feel free to include your own CNI configuration.

# syntax=docker/dockerfile:1

ARG BUILDKIT_VERSION=v{{ site.buildkit_version }}
ARG CNI_VERSION=v1.0.1

FROM --platform=$BUILDPLATFORM alpine AS cni-plugins
RUN apk add --no-cache curl
ARG CNI_VERSION
ARG TARGETOS
ARG TARGETARCH
WORKDIR /opt/cni/bin
RUN curl -Ls https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-$TARGETOS-$TARGETARCH-$CNI_VERSION.tgz | tar xzv

FROM moby/buildkit:${BUILDKIT_VERSION}
ARG BUILDKIT_VERSION
RUN apk add --no-cache iptables
COPY --from=cni-plugins /opt/cni/bin /opt/cni/bin
ADD https://raw.githubusercontent.com/moby/buildkit/${BUILDKIT_VERSION}/hack/fixtures/cni.json /etc/buildkit/cni.json

Now you can build this image, and create a builder instance from it using the --driver-opt image option:

$ docker buildx build --tag buildkit-cni:local --load .
$ docker buildx create --use --bootstrap \
  --name mybuilder \
  --driver docker-container \
  --driver-opt "image=buildkit-cni:local" \
  --buildkitd-flags "--oci-worker-net=cni"

Resource limiting

Max parallelism

You can limit the parallelism of the BuildKit solver, which is particularly useful for low-powered machines, using a BuildKit configuration while creating a builder with the --config flag.

# /etc/buildkitd.toml
[worker.oci]
  max-parallelism = 4

Now you can create a docker-container builder that will use this BuildKit configuration to limit parallelism.

$ docker buildx create --use \
  --name mybuilder \
  --driver docker-container \
  --config /etc/buildkitd.toml

TCP connection limit

TCP connections are limited to 4 simultaneous connections per registry for pulling and pushing images, plus one additional connection dedicated to metadata requests. This connection limit prevents your build from getting stuck while pulling images. The dedicated metadata connection helps reduce the overall build time.

More information: moby/buildkit#2259


Track deployments

Note

Atomist is currently in Early Access. Features and APIs are subject to change.

By integrating Atomist with a runtime environment, you can track vulnerabilities for deployed containers. This gives you context on whether your security debt is increasing or decreasing.

There are several options for how you could implement deployment tracking:

  • Invoking the API directly
  • Adding it as a step in your continuous deployment pipeline
  • Creating Kubernetes admission controllers

API

Each Atomist workspace exposes an API endpoint. Submitting a POST request to the endpoint updates Atomist about what image you are running in your environments. This lets you compare data for images you build against images of containers running in staging or production.

You can find the API endpoint URL on the Integrations page. Using this API requires an API key.

The most straightforward use is to post to this endpoint using a webhook. When deploying a new image, submit an automated POST request (using curl, for example) as part of your deployment pipeline.

$ curl <api-endpoint-url> \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-token>" \
  -d '{"image": {"url": "<image-url>@<sha256-digest>"}}'

Parameters

The API supports the following parameters in the request body:

{
  "image": {
    "url": "string",
    "name": "string"
  },
  "environment": {
    "name": "string"
  },
  "platform": {
    "os": "string",
    "architecture": "string",
    "variant": "string"
  }
}
Parameter Mandatory Default Description
image.url Yes  Fully qualified reference name of the image, plus version (digest). You must specify the image version by digest.
image.name No  Optional identifier. If you deploy many containers from the same image in any one environment, each instance must have a unique name.
environment.name No deployed Use custom environment names to track different image versions in environments, like staging and production
platform.os No linux Image operating system.
platform.architecture No amd64 Instruction set architecture.
platform.variant No  Optional variant label.
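
As an illustration, a request that also sets the optional fields might look like the following sketch; the endpoint, token, image reference, and environment name are placeholders:

$ curl <api-endpoint-url> \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <api-token>" \
  -d '{
        "image": {
          "url": "registry.example.com/app/web@sha256:<digest>",
          "name": "web-1"
        },
        "environment": { "name": "production" },
        "platform": { "os": "linux", "architecture": "arm64" }
      }'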