Docker images can support multiple platforms, which means that a single image may contain variants for different architectures, and sometimes for different operating systems, such as Windows.
When running an image with multi-platform support, Docker automatically selects the image that matches your OS and architecture.
Most of the Docker Official Images on Docker Hub provide a variety of architectures.
For example, the busybox image supports amd64, arm32v5, arm32v6, arm32v7, arm64v8, i386, ppc64le, and s390x. When running this image on an x86_64/amd64 machine, the amd64 variant is pulled and run.
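You can also request a specific variant explicitly with the --platform flag. For example, on a machine with emulation support such as Docker Desktop, the following is expected to pull and run the arm64 variant (a sketch; the output depends on your setup):
$ docker run --rm --platform linux/arm64 busybox uname -m
aarch64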
Docker is now making it easier than ever to develop containers on, and for, Arm servers and devices. Using the standard Docker tooling and processes, you can build, push, pull, and run images seamlessly on different compute architectures. In most cases, you don't have to make any changes to Dockerfiles or source code to start building for Arm.
BuildKit with Buildx is designed to handle building for multiple platforms, not only for the architecture and operating system of the machine invoking the build.
When you invoke a build, you can set the --platform flag to specify the target platform for the build output (for example, linux/amd64, linux/arm64, or darwin/amd64).
When the current builder instance is backed by the docker-container driver, you can specify multiple platforms together. In this case, it builds a manifest list which contains images for all specified architectures. When you use this image in docker run or docker service, Docker picks the correct image based on the node's platform.
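For example, with a docker-container builder active, a single invocation can produce a two-platform manifest list (the image name here is a placeholder):
$ docker buildx build --platform linux/amd64,linux/arm64 -t <username>/demo:latest --push .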
You can build multi-platform images using three different strategies that are supported by Buildx and Dockerfiles:
QEMU is the easiest way to get started if your node already supports it (for example, if you are using Docker Desktop). It requires no changes to your Dockerfile, and BuildKit automatically detects the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture, it automatically loads it through a binary registered in the binfmt_misc handler.
For QEMU binaries registered with binfmt_misc on the host OS to work transparently inside containers, they must be statically compiled and registered with the fix_binary flag. This requires a kernel >= 4.8 and binfmt-support >= 2.1.7. You can check for proper registration by checking whether F is among the flags in /proc/sys/fs/binfmt_misc/qemu-*. While Docker Desktop comes preconfigured with binfmt_misc support for additional platforms, other installations likely need to set it up using the tonistiigi/binfmt image:
$ docker run --privileged --rm tonistiigi/binfmt --install all
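To verify the registration, inspect one of the qemu entries under /proc/sys/fs/binfmt_misc; the interpreter path varies by installation, but the flags line should include F, for example:
$ cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64
flags: OCF
...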
Using multiple native nodes provides better support for more complicated cases that are not handled by QEMU, and generally has better performance. You can add additional nodes to the builder instance using the --append flag. Assuming contexts node-amd64 and node-arm64 exist in docker context ls:
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with --platform using the native architecture of the build node. Build arguments like BUILDPLATFORM and TARGETPLATFORM are automatically available inside your Dockerfile and can be leveraged by the processes running as part of your build.
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
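Building on the same pattern, cross-compilation typically uses the automatic TARGETOS and TARGETARCH build arguments to produce a binary for the requested platform while running natively on the build node. A minimal sketch, assuming a main.go with a main package sits in the build context:
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
COPY main.go ./
# Cross-compile for the target platform; this RUN executes on the build node's native architecture
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app main.go
FROM alpine
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]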
Run the docker buildx ls command to list the existing builders:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
default * docker
default default running 20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
This displays the default builtin driver, which uses the BuildKit server components built directly into the Docker Engine, also known as the docker driver.
Create a new builder using the docker-container driver, which gives you access to more complex features like multi-platform builds and the more advanced cache exporters, which are currently unsupported in the default docker driver:
$ docker buildx create --name mybuilder --driver docker-container --bootstrap
mybuilder
Switch to the new builder:
$ docker buildx use mybuilder
Note
Alternatively, run docker buildx create --name mybuilder --driver docker-container --bootstrap --use to create a new builder and switch to it using a single command.
And inspect it:
$ docker buildx inspect
Name: mybuilder
Driver: docker-container
Nodes:
Name: mybuilder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.10.4
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
Now listing the existing builders again, we can see our new builder is registered:
$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
mybuilder docker-container
mybuilder0 unix:///var/run/docker.sock running v0.10.4 linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
default * docker
default default running 20.10.17 linux/amd64, linux/arm64, linux/arm/v7, linux/arm/v6
Test the workflow to ensure you can build, push, and run multi-platform images. Create a simple example Dockerfile, build a couple of image variants, and push them to Docker Hub.
The following example uses a single Dockerfile to build an Alpine image with cURL installed for multiple architectures:
# syntax=docker/dockerfile:1
FROM alpine:3.16
RUN apk add curl
Build the Dockerfile with buildx, passing the list of architectures to build for:
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t <username>/<image>:latest --push .
...
#16 exporting to image
#16 exporting layers
#16 exporting layers 0.5s done
#16 exporting manifest sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c 0.0s done
#16 exporting config sha256:a26f329a501da9e07dd9cffd9623e49229c3bb67939775f936a0eb3059a3d045 0.0s done
#16 exporting manifest sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf 0.0s done
#16 exporting config sha256:9fcc6de03066ac1482b830d5dd7395da781bb69fe8f9873e7f9b456d29a9517c 0.0s done
#16 exporting manifest sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116 0.0s done
#16 exporting config sha256:92cbd688027227473d76e705c32f2abc18569c5cfabd00addd2071e91473b2e4 0.0s done
#16 exporting manifest list sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 0.0s done
#16 ...
#17 [auth] <username>/<image>:pull,push token for registry-1.docker.io
#17 DONE 0.0s
#16 exporting to image
#16 pushing layers
#16 pushing layers 3.6s done
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
#16 pushing manifest for docker.io/<username>/<image>:latest@sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48 1.4s done
#16 DONE 5.6s
Note
- <username> must be a valid Docker ID and <image> a valid repository on Docker Hub.
- The --platform flag informs buildx to create Linux images for AMD 64-bit, Arm 64-bit, and Armv7 architectures.
- The --push flag generates a multi-arch manifest and pushes all the images to Docker Hub.
Inspect the image using the docker buildx imagetools command:
$ docker buildx imagetools inspect <username>/<image>:latest
Name: docker.io/<username>/<image>:latest
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest: sha256:f3b552e65508d9203b46db507bb121f1b644e53a22f851185d8e53d873417c48
Manifests:
Name: docker.io/<username>/<image>:latest@sha256:71d7ecf3cd12d9a99e73ef448bf63ae12751fe3a436a007cb0969f0dc4184c8c
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/amd64
Name: docker.io/<username>/<image>:latest@sha256:5ba4ceea65579fdd1181dfa103cc437d8e19d87239683cf5040e633211387ccf
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm64
Name: docker.io/<username>/<image>:latest@sha256:29666fb23261b1f77ca284b69f9212d69fe5b517392dbdd4870391b7defcc116
MediaType: application/vnd.docker.distribution.manifest.v2+json
Platform: linux/arm/v7
The image is now available on Docker Hub with the tag <username>/<image>:latest.
You can use this image to run a container on Intel laptops, Amazon EC2 Graviton instances, Raspberry Pis, and other architectures. Docker pulls the correct image for the current architecture, so Raspberry Pis run the 32-bit Arm version and EC2 Graviton instances run 64-bit Arm.
The digest identifies a fully qualified image variant. You can also run images targeted for a different architecture on Docker Desktop. For example, when you run the following on macOS:
$ docker run --rm docker.io/<username>/<image>:latest@sha256:2b77acdfea5dc5baa489ffab2a0b4a387666d1d526490e31845eb64e3e73ed20 uname -m
aarch64
$ docker run --rm docker.io/<username>/<image>:latest@sha256:723c22f366ae44e419d12706453a544ae92711ae52f510e226f6467d8228d191 uname -m
armv7l
In the above example, uname -m returns aarch64 and armv7l as expected, even when running the commands on a native macOS or Windows developer machine.
Docker Desktop provides binfmt_misc multi-architecture support, which means you can run containers for different Linux architectures such as arm, mips, ppc64le, and even s390x.
This does not require any special configuration in the container itself, as it uses qemu-static from the Docker for Mac VM. Because of this, you can run containers for other architectures, like the arm32v7 or ppc64le variants of the busybox image.
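For example, pulling the Arm variant explicitly by its architecture-specific name should report the emulated architecture (assuming the arm32v7/busybox image is available for your setup):
$ docker run --rm arm32v7/busybox uname -m
armv7l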
Multi-stage builds are useful to anyone who has struggled to optimize Dockerfiles while keeping them easy to read and maintain.
Acknowledgment
Special thanks to Alex Ellis for granting permission to use his blog post Builder pattern vs. Multi-stage builds in Docker as the basis of the examples below.
One of the most challenging things about building images is keeping the image size down. Each RUN, COPY, and ADD instruction in the Dockerfile adds a layer to the image, and you need to remember to clean up any artifacts you don't need before moving on to the next layer. To write a really efficient Dockerfile, you have traditionally needed to employ shell tricks and other logic to keep the layers as small as possible and to ensure that each layer has the artifacts it needs from the previous layer and nothing else.
It was actually very common to have one Dockerfile to use for development (which contained everything needed to build your application), and a slimmed-down one to use for production, which only contained your application and exactly what was needed to run it. This has been referred to as the "builder pattern". Maintaining two Dockerfiles is not ideal.
Here's an example of a build.Dockerfile and Dockerfile which adhere to the builder pattern above:
build.Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
COPY app.go ./
RUN go get -d -v golang.org/x/net/html \
&& CGO_ENABLED=0 go build -a -installsuffix cgo -o app .
Notice that this example also artificially compresses two RUN commands together using the Bash && operator, to avoid creating an additional layer in the image. This is failure-prone and hard to maintain. It's easy to insert another command and forget to continue the line using the \ character, for example.
Dockerfile:
# syntax=docker/dockerfile:1
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY app ./
CMD ["./app"]
build.sh:
#!/bin/sh
echo Building alexellis2/href-counter:build
docker build -t alexellis2/href-counter:build . -f build.Dockerfile
docker container create --name extract alexellis2/href-counter:build
docker container cp extract:/go/src/github.com/alexellis/href-counter/app ./app
docker container rm -f extract
echo Building alexellis2/href-counter:latest
docker build --no-cache -t alexellis2/href-counter:latest .
rm ./app
When you run the build.sh script, it needs to build the first image, create a container from it to copy the artifact out, then build the second image. Both images take up room on your system, and you still have the app artifact on your local disk as well.
Multi-stage builds vastly simplify this situation!
With multi-stage builds, you use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, let's adapt the Dockerfile from the previous section to use multi-stage builds.
# syntax=docker/dockerfile:1
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
You only need the single Dockerfile. You don't need a separate build script, either. Just run docker build:
$ docker build -t alexellis2/href-counter:latest .
The end result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images, and you don't need to extract any artifacts to your local system at all.
How does it work? The second FROM instruction starts a new build stage with the alpine:latest image as its base. The COPY --from=0 line copies just the built artifact from the previous stage into this new stage. The Go SDK and any intermediate artifacts are left behind, and not saved in the final image.
By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding AS <NAME> to the FROM instruction. This example improves the previous one by naming the stages and using the name in the COPY instruction. This means that even if the instructions in your Dockerfile are re-ordered later, the COPY doesn't break.
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
When you build your image, you don't necessarily need to build the entire Dockerfile including every stage. You can specify a target build stage. The following command assumes you are using the previous Dockerfile but stops at the stage named builder:
$ docker build --target builder -t alexellis2/href-counter:latest .
A few scenarios where this might be very powerful are:
- A debug stage with all debugging symbols or tools enabled, and a lean production stage (see the sketch below)
- A testing stage in which your app gets populated with test data, while building for production using a different stage which uses real data
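A minimal sketch of the first scenario, reusing the href-counter example above; the stage names and the strace debugging tool are illustrative assumptions, not part of the original example:
# syntax=docker/dockerfile:1
FROM golang:1.16 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 go build -a -installsuffix cgo -o app .

# Debug stage: keeps a shell and tracing tools for troubleshooting
FROM alpine:latest AS debug
RUN apk --no-cache add ca-certificates strace
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]

# Lean production stage: only the binary and certificates
FROM alpine:latest AS production
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
Build either variant by selecting the target:
$ docker build --target debug -t alexellis2/href-counter:debug .
$ docker build --target production -t alexellis2/href-counter:latest .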
When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID. The Docker client pulls the image if necessary and copies the artifact from there. The syntax is:
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf
You can pick up where a previous stage left off by referring to it when using the FROM directive. For example:
# syntax=docker/dockerfile:1
FROM alpine:latest AS builder
RUN apk --no-cache add build-base
FROM builder AS build1
COPY source1.cpp source.cpp
RUN g++ -o /binary source.cpp
FROM builder AS build2
COPY source2.cpp source.cpp
RUN g++ -o /binary source.cpp
Multi-stage build syntax was introduced in Docker Engine 17.05.
The legacy Docker Engine builder processes all stages of a Dockerfile leading up to the selected --target. It will build a stage even if the selected target doesn't depend on that stage. BuildKit only builds the stages that the target stage depends on.
For example, given the following Dockerfile:
# syntax=docker/dockerfile:1
FROM ubuntu AS base
RUN echo "base"
FROM base AS stage1
RUN echo "stage1"
FROM base AS stage2
RUN echo "stage2"
With BuildKit enabled, building the stage2 target in this Dockerfile means only base and stage2 are processed. There is no dependency on stage1, so it's skipped.
$ DOCKER_BUILDKIT=1 docker build --no-cache -f Dockerfile --target stage2 .
[+] Building 0.4s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 36B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.0s
=> CACHED [base 1/2] FROM docker.io/library/ubuntu 0.0s
=> [base 2/2] RUN echo "base" 0.1s
=> [stage2 1/1] RUN echo "stage2" 0.2s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f55003b607cef37614f607f0728e6fd4d113a4bf7ef12210da338c716f2cfd15 0.0s
On the other hand, building the same target without BuildKit results in all stages being processed:
$ DOCKER_BUILDKIT=0 docker build --no-cache -f Dockerfile --target stage2 .
Sending build context to Docker daemon 219.1kB
Step 1/6 : FROM ubuntu AS base
---> a7870fd478f4
Step 2/6 : RUN echo "base"
---> Running in e850d0e42eca
base
Removing intermediate container e850d0e42eca
---> d9f69f23cac8
Step 3/6 : FROM base AS stage1
---> d9f69f23cac8
Step 4/6 : RUN echo "stage1"
---> Running in 758ba6c1a9a3
stage1
Removing intermediate container 758ba6c1a9a3
---> 396baa55b8c3
Step 5/6 : FROM base AS stage2
---> d9f69f23cac8
Step 6/6 : RUN echo "stage2"
---> Running in bbc025b93175
stage2
Removing intermediate container bbc025b93175
---> 09fc3770a9c4
Successfully built 09fc3770a9c4
stage1 gets executed when BuildKit is disabled, even if stage2 does not depend on it.
It all starts with a Dockerfile.
Docker builds images by reading the instructions from a Dockerfile, a text file containing instructions that adhere to a specific format needed to assemble your application into a container image. You can find the full specification in the Dockerfile reference.
Here are the most common types of instructions:
| Instruction | Description |
|---|---|
| FROM <image> | Defines a base for your image. |
| RUN <command> | Executes any commands in a new layer on top of the current image and commits the result. RUN also has a shell form for running commands. |
| WORKDIR <directory> | Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile. |
| COPY <src> <dest> | Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>. |
| CMD <command> | Lets you define the default program that is run once you start the container based on this image. Each Dockerfile only has one CMD, and only the last CMD instance is respected when multiple exist. |
Dockerfiles are crucial inputs for image builds and can facilitate automated, multi-layer image builds based on your unique configurations. Dockerfiles can start simple and grow with your needs and support images that require complex instructions. For all the possible instructions, see the Dockerfile reference.
The default filename to use for a Dockerfile is Dockerfile, without a file extension. Using the default name allows you to run the docker build command without having to specify additional command flags.
Some projects may need distinct Dockerfiles for specific purposes. A common convention is to name these <something>.Dockerfile. Such Dockerfiles can then be used through the --file (or -f shorthand) option on the docker build command.
Refer to the "Specify a Dockerfile" section in the docker build reference to learn about the --file option.
Note
We recommend using the default (Dockerfile) for your project's primary Dockerfile.
Docker images consist of read-only layers, each resulting from an instruction in the Dockerfile. Layers are stacked sequentially, and each one is a delta representing the changes applied to the previous layer.
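To see these layers for any local image, you can run the docker history command; each row of its output corresponds to a layer, with metadata-only instructions reporting a size of 0B:
$ docker history alpine:3.16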
Here's a simple Dockerfile example to get you started with building images. We'll take a simple "Hello World" Python Flask application, and bundle it into a Docker image that you can test locally or deploy anywhere!
Let's say we have a hello.py file with the following content:
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"
Don't worry about understanding the full example if you're not familiar with Python; it's just a simple web server that will contain a single page that says "Hello World".
Note
If you test the example, make sure to copy over the indentation as well! For more information about this sample Flask application, check the Flask Quickstart page.
Here's the Dockerfile that will be used to create an image for our application:
# syntax=docker/dockerfile:1
FROM ubuntu:22.04
# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
RUN pip install flask==2.1.*
# install app
COPY hello.py /
# final configuration
ENV FLASK_APP=hello
EXPOSE 8000
CMD flask run --host 0.0.0.0 --port 8000
The first line to add to a Dockerfile is a # syntax parser directive. While optional, this directive instructs the Docker builder what syntax to use when parsing the Dockerfile, and allows older Docker versions with BuildKit enabled to use a specific Dockerfile frontend before starting the build. Parser directives must appear before any other comment, whitespace, or Dockerfile instruction in your Dockerfile, and should be the first line in Dockerfiles.
# syntax=docker/dockerfile:1
Note
We recommend using docker/dockerfile:1, which always points to the latest release of the version 1 syntax. BuildKit automatically checks for updates of the syntax before building, making sure you are using the most current version.
Next we define the first instruction:
FROM ubuntu:22.04
Here the FROM instruction sets our base image to the 22.04 release of Ubuntu. All following instructions are executed on this base image, in this case, an Ubuntu environment. The notation ubuntu:22.04 follows the name:tag standard for naming Docker images. When you build your image, you use this notation to name your images, and you can use it to specify any existing Docker image. There are many public images you can leverage in your projects. Explore Docker Hub to find out.
# install app dependencies
RUN apt-get update && apt-get install -y python3 python3-pip
This RUN instruction executes a shell command in the build context. In this example, our context is a full Ubuntu operating system, so we have access to its package manager, apt. The provided commands update our package lists and then, after that succeeds, install python3 and pip, the package manager for Python.
Also note the # install app dependencies line. This is a comment. Comments in Dockerfiles begin with the # symbol. As your Dockerfile evolves, comments can be instrumental in documenting how your Dockerfile works for any future readers and editors of the file.
Note
Starting your Dockerfile with # like a regular comment is treated as a directive when you are using BuildKit (the default); otherwise it is ignored.
RUN pip install flask==2.1.*
This second RUN instruction requires that we've installed pip in the layer before. After applying the previous instruction, we can use the pip command to install the Flask web framework. This is the framework we've used to write our basic "Hello World" application above, so to run it in Docker, we'll need to make sure it's installed.
COPY hello.py /
Now we use the COPY instruction to copy our hello.py file from the local build context into the root directory of our image. Once executed, we'll end up with a file called /hello.py inside the image.
ENV FLASK_APP=hello
This ENV instruction sets a Linux environment variable we'll need later. This is a Flask-specific variable that configures the command later used to run our hello.py application. Without this, Flask wouldn't know where to find our application in order to run it.
EXPOSE 8000
This EXPOSE instruction marks that our final image has a service listening on port 8000. This isn't required, but it is a good practice, as users and tools can use this information to understand what your image does.
CMD flask run --host 0.0.0.0 --port 8000
Finally, the CMD instruction sets the command that is run when the user starts a container based on this image. In this case, we'll start the Flask development server listening on all addresses on port 8000.
To test our Dockerfile, we'll first build it using the docker build command:
$ docker build -t test:latest .
Here the -t test:latest option specifies the name (required) and tag (optional) of the image we're building. The trailing . specifies the build context as the current directory. In this example, this is where build expects to find the Dockerfile and the local files the Dockerfile needs to access, in this case your Python application.
So, given the build command issued and how the build context works, your Dockerfile and Python app need to be in the same directory.
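For this example, the build context therefore looks like this:
.
├── Dockerfile
└── hello.py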
Now run your newly built image:
$ docker run -p 8000:8000 test:latest
From your computer, open a browser and navigate to http://localhost:8000.
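Alternatively, test it from a terminal (assuming curl is installed); the application should respond with its greeting:
$ curl http://localhost:8000
Hello World!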
Note
You can also build and run using Play with Docker that provides you with a temporary Docker instance in the cloud.
If you are interested in examples in other languages, such as Go, check out our language-specific guides in the Guides section.
BuildKit and Buildx have support for modifying the colors that are used to output information to the terminal. You can set the environment variable BUILDKIT_COLORS to something like run=123,20,245:error=yellow:cancel=blue:warning=white to set the colors that you would like to use.
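For example, in a POSIX shell:
$ export BUILDKIT_COLORS="run=123,20,245:error=yellow:cancel=blue:warning=white"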
Setting NO_COLOR to anything disables colorized output, as recommended by no-color.org.
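For example:
$ export NO_COLOR=1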
Note
Parsing errors will be reported but ignored. This will result in default color values being used where needed.
See also the list of pre-defined colors.
If you create a docker-container or kubernetes builder with Buildx, you can apply a custom BuildKit configuration by passing the --config flag to the docker buildx create command.
You can define a registry mirror to use for your builds. Doing so redirects BuildKit to pull images from a different hostname. The following steps exemplify defining a mirror for docker.io (Docker Hub) to mirror.gcr.io.
Create a TOML file at /etc/buildkitd.toml with the following content:
debug = true
[registry."docker.io"]
mirrors = ["mirror.gcr.io"]
Note
debug = true turns on debug requests in the BuildKit daemon, which logs a message that shows when a mirror is being used.
Create a docker-container builder that uses this BuildKit configuration:
$ docker buildx create --use --bootstrap \
--name mybuilder \
--driver docker-container \
--config /etc/buildkitd.toml
Build an image:
$ docker buildx build --load . -f - <<EOF
FROM alpine
RUN echo "hello world"
EOF
The BuildKit logs for this builder now show that it uses the GCR mirror. You can tell by the fact that the response messages include the x-goog-* HTTP headers.
$ docker logs buildx_buildkit_mybuilder0
...
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.container.image.v1+json, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1469 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"774380abda8f4eae9a149e5d5d3efc83\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:57 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788077652182 response.header.x-goog-hash="crc32c=V3DSrg==" response.header.x-goog-hash.1="md5=d0OAq9qPTq6aFJ5dXT78gw==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1469 response.header.x-guploader-uploadid=ADPycduqQipVAXc3tzXmTzKQ2gTT6CV736B2J628smtD1iDytEyiYCgvvdD8zz9BT1J1sASUq9pW_ctUyC4B-v2jvhIxnZTlKg response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=760 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=1471 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:35:13 GMT" response.header.etag="\"35d688bd15327daafcdb4d4395e616a8\"" response.header.expires="Sun, 06 Feb 2022 18:35:13 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:12 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788032100793 response.header.x-goog-hash="crc32c=aWgRjA==" response.header.x-goog-hash.1="md5=NdaIvRUyfar8201DleYWqA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=1471 response.header.x-guploader-uploadid=ADPycdtR-gJYwC7yHquIkJWFFG8FovDySvtmRnZBqlO3yVDanBXh_VqKYt400yhuf0XbQ3ZMB9IZV2vlcyHezn_Pu3a1SMMtiw response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg=fetch spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="do request" request.header.accept="application/vnd.docker.image.rootfs.diff.tar.gzip, */*" request.header.user-agent=containerd/1.5.8+unknown request.method=GET spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
time="2022-02-06T17:47:48Z" level=debug msg="fetch response received" response.header.accept-ranges=bytes response.header.age=1356 response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"" response.header.cache-control="public, max-age=3600" response.header.content-length=2818413 response.header.content-type=application/octet-stream response.header.date="Sun, 06 Feb 2022 17:25:17 GMT" response.header.etag="\"1d55e7be5a77c4a908ad11bc33ebea1c\"" response.header.expires="Sun, 06 Feb 2022 18:25:17 GMT" response.header.last-modified="Wed, 24 Nov 2021 21:07:06 GMT" response.header.server=UploadServer response.header.x-goog-generation=1637788026431708 response.header.x-goog-hash="crc32c=ZojF+g==" response.header.x-goog-hash.1="md5=HVXnvlp3xKkIrRG8M+vqHA==" response.header.x-goog-metageneration=1 response.header.x-goog-storage-class=STANDARD response.header.x-goog-stored-content-encoding=identity response.header.x-goog-stored-content-length=2818413 response.header.x-guploader-uploadid=ADPycdsebqxiTBJqZ0bv9zBigjFxgQydD2ESZSkKchpE0ILlN9Ibko3C5r4fJTJ4UR9ddp-UBd-2v_4eRpZ8Yo2llW_j4k8WhQ response.status="200 OK" spanID=9460e5b6e64cec91 traceID=b162d3040ddf86d6614e79c66a01a577
...
If you specify registry certificates in the BuildKit configuration, the daemon copies the files into the container under /etc/buildkit/certs. The following steps show adding a self-signed registry certificate to the BuildKit configuration.
Add the following configuration to /etc/buildkitd.toml:
# /etc/buildkitd.toml
debug = true
[registry."myregistry.com"]
ca=["/etc/certs/myregistry.pem"]
[[registry."myregistry.com".keypair]]
key="/etc/certs/myregistry_key.pem"
cert="/etc/certs/myregistry_cert.pem"
This tells the builder to push images to the myregistry.com registry using the certificates in the specified location (/etc/certs).
Create a docker-container builder that uses this configuration:
$ docker buildx create --use --bootstrap \
--name mybuilder \
--driver docker-container \
--config /etc/buildkitd.toml
Inspect the builder's configuration file (/etc/buildkit/buildkitd.toml); it shows that the certificate configuration is now configured in the builder.
$ docker exec -it buildx_buildkit_mybuilder0 cat /etc/buildkit/buildkitd.toml
debug = true
[registry]
[registry."myregistry.com"]
ca = ["/etc/buildkit/certs/myregistry.com/myregistry.pem"]
[[registry."myregistry.com".keypair]]
cert = "/etc/buildkit/certs/myregistry.com/myregistry_cert.pem"
key = "/etc/buildkit/certs/myregistry.com/myregistry_key.pem"
Verify that the certificates are inside the container:
$ docker exec -it buildx_buildkit_mybuilder0 ls /etc/buildkit/certs/myregistry.com/
myregistry.pem myregistry_cert.pem myregistry_key.pem
Now you can push to the registry using this builder, and it will authenticate using the certificates:
$ docker buildx build --push --tag myregistry.com/myimage:latest .
CNI networking for builders can be useful for dealing with network port contention during concurrent builds. CNI is not yet available in the default BuildKit image, but you can create your own image that includes CNI support.
The following Dockerfile example shows a custom BuildKit image with CNI support. It uses the CNI config for integration tests in BuildKit as an example. Feel free to include your own CNI configuration.
# syntax=docker/dockerfile:1
# Set to the BuildKit release you want to build on; the version below is an example
ARG BUILDKIT_VERSION=v0.12.0
ARG CNI_VERSION=v1.0.1
FROM --platform=$BUILDPLATFORM alpine AS cni-plugins
RUN apk add --no-cache curl
ARG CNI_VERSION
ARG TARGETOS
ARG TARGETARCH
WORKDIR /opt/cni/bin
RUN curl -Ls https://github.com/containernetworking/plugins/releases/download/$CNI_VERSION/cni-plugins-$TARGETOS-$TARGETARCH-$CNI_VERSION.tgz | tar xzv
FROM moby/buildkit:${BUILDKIT_VERSION}
ARG BUILDKIT_VERSION
RUN apk add --no-cache iptables
COPY --from=cni-plugins /opt/cni/bin /opt/cni/bin
ADD https://raw.githubusercontent.com/moby/buildkit/${BUILDKIT_VERSION}/hack/fixtures/cni.json /etc/buildkit/cni.json
Now you can build this image and create a builder instance from it using the --driver-opt image option:
$ docker buildx build --tag buildkit-cni:local --load .
$ docker buildx create --use --bootstrap \
--name mybuilder \
--driver docker-container \
--driver-opt "image=buildkit-cni:local" \
--buildkitd-flags "--oci-worker-net=cni"
You can limit the parallelism of the BuildKit solver, which is particularly useful for low-powered machines, using a BuildKit configuration while creating a builder with the --config flag.
# /etc/buildkitd.toml
[worker.oci]
max-parallelism = 4
Now you can create a docker-container builder that will use this BuildKit configuration to limit parallelism:
$ docker buildx create --use \
--name mybuilder \
--driver docker-container \
--config /etc/buildkitd.toml
TCP connections are limited to 4 simultaneous connections per registry for pulling and pushing images, plus one additional connection dedicated to metadata requests. This connection limit prevents your build from getting stuck while pulling images. The dedicated metadata connection helps reduce the overall build time.
More information: moby/buildkit#2259
BuildKit supports loading frontends dynamically from container images. To use an external Dockerfile frontend, the first line of your Dockerfile needs to set the syntax directive pointing to the specific image you want to use:
# syntax=[remote image reference]
For example:
# syntax=docker/dockerfile:1
# syntax=docker.io/docker/dockerfile:1
# syntax=example.com/user/repo:tag@sha256:abcdef...
This defines the location of the Dockerfile syntax that is used to build the Dockerfile. The BuildKit backend allows you to seamlessly use external implementations that are distributed as Docker images and execute inside a container sandbox environment.
Custom Dockerfile implementations allow you to pick up bugfixes and new syntax features without updating your Docker installation, and to make sure all users are building with the same implementation.
Note
BuildKit also ships with a built-in Dockerfile frontend, but it's recommended to use an external image to make sure that all users use the same version on the builder and to pick up bugfixes automatically, without waiting for a new version of BuildKit or Docker Engine.
Docker distributes official versions of the images that can be used for building Dockerfiles under the docker/dockerfile repository on Docker Hub. There are two channels where new images are released: stable and labs.
The stable channel follows semantic versioning. For example:
- docker/dockerfile:1 - kept updated with the latest 1.x.x minor and patch release.
- docker/dockerfile:1.2 - kept updated with the latest 1.2.x patch release, and stops receiving updates once version 1.3.0 is released.
- docker/dockerfile:1.2.1 - immutable: never updated.
We recommend using docker/dockerfile:1, which always points to the latest stable release of the version 1 syntax, and receives both "minor" and "patch" updates for the version 1 release cycle. BuildKit automatically checks for updates of the syntax when performing a build, making sure you are using the most current version.
If a specific version is used, such as 1.2 or 1.2.1, the Dockerfile needs to be updated manually to continue receiving bugfixes and new features. Old versions of the Dockerfile remain compatible with the new versions of the builder.
The labs channel provides early access to Dockerfile features that are not yet available in the stable channel. labs images are released at the same time as stable releases, and follow the same version pattern, but use the -labs suffix, for example:
- docker/dockerfile:labs - latest release on the labs channel.
- docker/dockerfile:1-labs - same as dockerfile:1, with experimental features enabled.
- docker/dockerfile:1.2-labs - same as dockerfile:1.2, with experimental features enabled.
- docker/dockerfile:1.2.1-labs - immutable, never updated; same as dockerfile:1.2.1, with experimental features enabled.
Choose a channel that best fits your needs. If you want to benefit from new features, use the labs channel. Images in the labs channel contain all the features in the stable channel, plus early access features. Stable features in the labs channel follow semantic versioning, but early access features don't, and newer releases may not be backwards compatible. Pin the version to avoid having to deal with breaking changes.
For documentation on "labs" features, master builds, and nightly feature releases, refer to the description in the BuildKit source repository on GitHub.
For a full list of available images, visit the docker/dockerfile repository on Docker Hub, and the docker/dockerfile-upstream repository on Docker Hub for development builds.