This chart is based on stable/minio version 0.4.3, and inherits most settings from there.
Design Choices
Design choices related to the upstream chart can be found in the project’s README.
GitLab chose to alter that chart in order to simplify configuration of the secrets, and to remove all use of secrets in environment variables. GitLab added initContainers to control the population of secrets into the config.json, and a chart-wide enabled flag.
This chart makes use of only one secret:
- global.minio.credentials.secret: A global secret containing the accesskey and secretkey values that will be used for authentication to the bucket(s).
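As a sketch, assuming you have already created a Secret (the name gitlab-minio-credentials below is a placeholder) containing accesskey and secretkey entries, it is referenced like this:

```yaml
global:
  minio:
    credentials:
      # Name of an existing Kubernetes Secret with accesskey and secretkey entries.
      secret: gitlab-minio-credentials
```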
Configuration
We will describe all the major sections of the configuration below. When configuring from the parent chart, these values are nested under the minio key.
The way we’ve chosen to implement compartmentalized sub-charts includes the ability to disable the components that you may not want in a given deployment. For this reason, the first setting you should decide on is enabled.
By default, MinIO is enabled out of the box, but it is not recommended for production use. When you are ready to disable it, run --set global.minio.enabled=false.
Configure the initContainer
While rarely altered, the initContainer behaviors can be changed via the following items:
The initContainer image settings are just as with a normal image configuration. By default, chart-local values are left empty, and the global settings global.busybox.image.repository and global.busybox.image.tag will be used to populate the initContainer image. If chart-local values are specified, they are used instead of the global settings’ values.
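A minimal sketch of overriding the initContainer image, assuming the chart-local settings live under init.image as in other GitLab sub-charts (the repository and tag shown are placeholders):

```yaml
minio:
  init:
    image:
      repository: registry.example.com/busybox   # placeholder; global.busybox.image.repository is used when this is empty
      tag: latest
```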
initContainer script
The initContainer is passed the following items:
- The secret containing authentication items, usually accesskey and secretkey, mounted in /config.
- The ConfigMap containing the config.json template, and configure containing a script to be executed with sh, mounted in /config.
- An emptyDir mounted at /minio that will be passed to the daemon’s container.
The initContainer is expected to populate /minio/config.json with a completed configuration, using the /config/configure script. When the minio-config container has completed that task, the /minio directory will be passed to the minio container, and used to provide the config.json to the MinIO server.
Configuring the Ingress
These settings control the MinIO Ingress.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| apiVersion | String | | Value to use in the apiVersion field. |
| annotations | String | | This field is an exact match to the standard annotations for Kubernetes Ingress. |
| enabled | Boolean | false | Setting that controls whether to create Ingress objects for services that support them. When false, the global.ingress.enabled setting is used. |
| configureCertmanager | Boolean | | Toggles the Ingress annotation cert-manager.io/issuer. For more information see the TLS requirement for GitLab Pages. |
| tls.enabled | Boolean | true | When set to false, you disable TLS for MinIO. This is mainly useful when you cannot use TLS termination at Ingress level, like when you have a TLS-terminating proxy before the Ingress Controller. |
| tls.secretName | String | | The name of the Kubernetes TLS Secret that contains a valid certificate and key for the MinIO URL. When not set, global.ingress.tls.secretName is used instead. |
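For example, a hedged sketch of enabling the MinIO Ingress with a pre-existing TLS Secret (the annotation and Secret name are placeholders, not chart defaults):

```yaml
minio:
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: "0"   # example annotation only
    tls:
      enabled: true
      secretName: minio-example-tls   # existing Kubernetes TLS Secret
```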
Configuring the image
The image, imageTag and imagePullPolicy defaults are documented upstream.
Persistence
This chart provisions a PersistentVolumeClaim and mounts a corresponding persistent volume to the default location /export. You’ll need physical storage available in the Kubernetes cluster for this to work. If you’d rather use emptyDir, disable the PersistentVolumeClaim by setting persistence.enabled: false.
The behaviors for persistence are documented upstream.
When volumeName is provided, the PersistentVolumeClaim will use the provided PersistentVolume by name, in place of creating a PersistentVolume dynamically. This overrides the upstream behavior.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| matchLabels | Map | true | Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation. |
| matchExpressions | Array | | Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation. |
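A sketch of binding MinIO to a pre-existing volume, assuming the selector fields sit under persistence alongside volumeName (the names and labels are placeholders):

```yaml
minio:
  persistence:
    enabled: true
    volumeName: minio-pv        # pre-existing PersistentVolume to bind
    matchLabels:
      app: minio-storage        # example label selector for the volume
```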
defaultBuckets
defaultBuckets provides a mechanism to automatically create buckets on the MinIO pod at installation. This property contains an array of items, each with up to three properties: name, policy, and purge.
name
The name of the bucket that is created. The provided value should conform to AWS bucket naming rules, meaning that it should be compliant with DNS and contain only the characters a-z, 0-9, and - (hyphen) in strings between 3 and 63 characters in length. The name property is required for all entries.
policy
The value of policy controls the access policy of the bucket on MinIO. The policy property is not required, and the default value is none. In regards to anonymous access, possible values are: none (no anonymous access), download (anonymous read-only access), upload (anonymous write-only access) or public (anonymous read/write access).
purge
The purge property is provided as a means to cause any existing bucket to be removed with force at installation time. This only comes into play when using a pre-existing PersistentVolume for the volumeName property of persistence. If you make use of a dynamically created PersistentVolume, this will have no valuable effect as it only happens at chart installation and there will be no data in the PersistentVolume that was just created. This property is not required, but you may specify it with a value of true to cause a bucket to be purged with force (mc rm -r --force).
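For illustration, a defaultBuckets list could look like the following (the bucket names are placeholders; only name is required):

```yaml
minio:
  defaultBuckets:
    - name: registry-storage          # required; DNS-compliant bucket name
    - name: runner-cache
      policy: none                    # none | download | upload | public
      purge: true                     # force-remove an existing bucket at install time
```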
Security Context
These options allow control over which user and/or group is used to start the pod. For in-depth information about security context, please refer to the official Kubernetes documentation.
Service Type and Port
These are documented upstream, and the key summary is:

```yaml
## Expose the MinIO service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
serviceType: LoadBalancer
servicePort: 9000
```

The chart does not expect to be of the type: NodePort, so do not set it as such.
Upstream items
The upstream documentation for the following also applies completely to this chart:
- resources
- nodeSelector
- minioConfig
Further explanation of the minioConfig settings can be found in the MinIO notify documentation. This includes details on publishing notifications when Bucket Objects are accessed or changed.
Our NGINX fork
Our fork of the NGINX chart was pulled from GitHub.
Adjustments to the NGINX fork
The following adjustments were made to the NGINX fork:
- tcp-configmap.yaml: is optional, depending on the new tcpExternalConfig setting
- Ability to use a templated TCP ConfigMap name from another chart:
  - controller-configmap-tcp.yaml: .metadata.name is a template, ingress-nginx.tcp-configmap
  - controller-deployment.yaml: .spec.template.spec.containers[0].args uses the ingress-nginx.tcp-configmap template for the ConfigMap name
  - The GitLab chart overrides ingress-nginx.tcp-configmap so that gitlab/gitlab-org/charts/gitlab-shell can configure its TCP service
- Ability to use a templated Ingress name based on the release name
- Replace controller.service.loadBalancerIP with global.hosts.externalIP
- Added support to add common labels through the common.labels configuration option
- controller-deployment.yaml: add podlabels and global.pod.labels to .spec.template.metadata.labels
- default-backend-deployment.yaml: add podlabels and global.pod.labels to .spec.template.metadata.labels
- Disable NGINX’s default nodeSelectors.
- Added support for PDB maxUnavailable.
- Removed NGINX’s isControllerTagValid helper in charts/nginx-ingress/templates/_helpers.tpl. The check had not been updated since it was implemented in 2020. As part of #3383, we need to refer to a tag that will contain ubi, meaning that the semverCompare would not work as expected anyway.
- Added support for the autoscaling/v2beta2 and autoscaling/v2 APIs in HPAs, and extended the HPA settings to support memory and custom metrics, as well as behavior configuration.
- Added conditional support for the API version of PodDisruptionBudget.
- Added the following booleans to enable/disable GitLab Shell (SSH access) independently for the external and internal (if enabled with controller.service.internal.enabled) services, following the existing chart pattern of controller.service.enableHttp(s) (see the sketch after this list):
  - controller.service.enableShell
  - controller.service.internal.enableShell
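A hedged sketch of those Shell toggles (the values shown are illustrative, not chart defaults):

```yaml
nginx-ingress:
  controller:
    service:
      enableShell: true        # expose SSH (GitLab Shell) on the external service
      internal:
        enabled: true
        enableShell: false     # keep SSH off the internal service
```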
We provide a complete NGINX deployment to be used as an Ingress Controller, because not all Kubernetes providers natively support the NGINX Ingress; bundling it ensures compatibility.
Our fork of the NGINX chart was pulled from GitHub. See Our NGINX fork for details on what was modified in our fork.
The version of the NGINX Ingress Helm chart bundled with the GitLab Helm charts has been updated to support Kubernetes 1.22. As a result, the GitLab Helm chart can no longer support Kubernetes versions prior to 1.19.
Configuring NGINX
See the NGINX chart documentation for configuration details.
Global settings
We share some common global settings among our charts. See the Globals Documentation for common configuration options, such as GitLab and Registry hostnames.
Configure hosts using the Global settings
The hostnames for the GitLab Server and the Registry Server can be configured using our Global settings chart.
Using the Container Registry
The registry sub-chart provides the Registry component to a complete cloud-native GitLab deployment on Kubernetes. This sub-chart makes use of the upstream registry container containing Docker Distribution. This chart is composed of 3 primary parts: Service, Deployment, and ConfigMap.
All configuration is handled according to the official Registry configuration documentation, using /etc/docker/registry/config.yml variables provided to the Deployment, populated from the ConfigMap. The ConfigMap overrides the upstream defaults, but is based on them. See below for more details:
- distribution/cmd/registry/config-example.yml
- distribution-library-image/config-example.yml
Design Choices
A Kubernetes Deployment was chosen as the deployment method for this chart to allow for simple scaling of instances, while allowing for rolling updates.
This chart makes use of two required secrets and one optional:
Required
- global.registry.certificate.secret: A global secret that will contain the public certificate bundle to verify the authentication tokens provided by the associated GitLab instance(s). See documentation on using GitLab as an auth endpoint.
- global.registry.httpSecret.secret: A global secret that will contain the shared secret between registry pods.
Optional
- profiling.stackdriver.credentials.secret: If Stackdriver profiling is enabled and you need to provide explicit service account credentials, then the value in this secret (in the credentials key by default) is the GCP service account JSON credentials. If you are using GKE and are providing service accounts to your workloads using Workload Identity (or node service accounts, although this is not recommended), then this secret is not required and should not be supplied. In either case, the service account requires the role roles/cloudprofiler.agent or equivalent manual permissions. A sketch of referencing this secret follows this list.
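If you do need to supply explicit credentials, a minimal sketch of referencing them looks like this (the Secret is assumed to already exist and to hold the GCP service account JSON under its credentials key):

```yaml
registry:
  profiling:
    stackdriver:
      enabled: true
      credentials:
        secret: gitlab-registry-profiling-creds   # pre-created Secret with the service account JSON
        key: credentials
```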
Configuration
We will describe all the major sections of the configuration below. When configuring from the parent chart, these values are nested under the registry key.
If you chose to deploy this chart as a standalone, remove the registry at the top level.
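For example, a minimal values fragment from the parent chart (the annotation shown is purely illustrative):

```yaml
registry:
  enabled: true
  annotations:
    example.com/team: container-registry   # illustrative Pod annotation
```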
Installation parameters
Parameter
Default
Description
annotations
Pod annotations
podLabels
Supplemental Pod labels. Will not be used for selectors.
common.labels
Supplemental labels that are applied to all objects created by this chart.
authAutoRedirect
true
Auth auto-redirect (must be true for Windows clients to work)
authEndpoint
global.hosts.gitlab.name
Auth endpoint (only host and port)
certificate.secret
gitlab-registry
JWT certificate
compatibility
Configuration of compatibility settings
debug.addr.port
5001
Debug port
debug.tls.enabled
false
Enable TLS for the debug port for the registry. Impacts liveness and readiness probes, as well as the metrics endpoint (if enabled)
debug.tls.secretName
The name of the Kubernetes TLS Secret that contains a valid certificate and key for the registry debug endpoint. When not set and
debug.tls.enabled=true
- the debug TLS configuration will default to the registry’s TLS certificate.
debug.prometheus.enabled
false
DEPRECATED
Use
metrics.enabled
debug.prometheus.path
""
DEPRECATED
Use
metrics.path
metrics.enabled
false
If a metrics endpoint should be made available for scraping
metrics.path
/metrics
Metrics endpoint path
metrics.serviceMonitor.enabled
false
If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the
prometheus.io
scrape annotations
metrics.serviceMonitor.additionalLabels
{}
Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig
{}
Additional endpoint configuration for the ServiceMonitor
deployment.terminationGracePeriodSeconds
30
Optional duration in seconds the pod needs to terminate gracefully.
deployment.strategy
{}
Allows one to configure the update strategy utilized by the deployment
draintimeout
'0'
Amount of time to wait for HTTP connections to drain after receiving a SIGTERM signal (e.g.
'10s'
)
relativeurls
false
Enable the registry to return relative URLs in Location headers.
enabled
true
Enable registry flag
hpa.behavior
{scaleDown: {stabilizationWindowSeconds: 300 }}
Behavior contains the specifications for up- and downscaling behavior (requires
autoscaling/v2beta2
or higher)
hpa.customMetrics
[]
Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in
targetAverageUtilization
)
hpa.cpu.targetType
Utilization
Set the autoscaling CPU target type, must be either
Utilization
or
AverageValue
hpa.cpu.targetAverageValue
Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization
75
Set the autoscaling CPU target utilization
hpa.memory.targetType
Set the autoscaling memory target type, must be either
Utilization
or
AverageValue
hpa.memory.targetAverageValue
Set the autoscaling memory target value
hpa.memory.targetAverageUtilization
Set the autoscaling memory target utilization
hpa.minReplicas
2
Minimum number of replicas
hpa.maxReplicas
10
Maximum number of replicas
httpSecret
Https secret
extraEnvFrom
List of extra environment variables from other data sources to expose
Only list which uploads will be purged without deleting
priorityClassName
Priority class assigned to pods.
reporting.sentry.enabled
false
Enable reporting using Sentry
reporting.sentry.dsn
The Sentry DSN (Data Source Name)
reporting.sentry.environment
The Sentry environment
profiling.stackdriver.enabled
false
Enable continuous profiling using Stackdriver
profiling.stackdriver.credentials.secret
gitlab-registry-profiling-creds
Name of the secret containing credentials
profiling.stackdriver.credentials.key
credentials
Secret key in which the credentials are stored
profiling.stackdriver.service
RELEASE-registry
(templated Service name)
Name of the Stackdriver service to record profiles under
profiling.stackdriver.projectid
GCP project where running
GCP project to report profiles to
database.enabled
false
Enable metadata database. This is an experimental feature and must not be used in production environments.
database.host
global.psql.host
The database server hostname.
database.port
global.psql.port
The database server port.
database.user
The database username.
database.password.secret
RELEASE-registry-database-password
Name of the secret containing the database password.
database.password.key
password
Secret key in which the database password is stored.
database.name
The database name.
database.sslmode
The SSL mode. Can be one of
disable
,
allow
,
prefer
,
require
,
verify-ca
or
verify-full
.
database.ssl.secret
global.psql.ssl.secret
A secret containing client certificate, key and certificate authority. Defaults to the main PostgreSQL SSL secret.
database.ssl.clientCertificate
global.psql.ssl.clientCertificate
The key inside the secret referring the client certificate.
database.ssl.clientKey
global.psql.ssl.clientKey
The key inside the secret referring the client key.
database.ssl.serverCA
global.psql.ssl.serverCA
The key inside the secret referring the certificate authority (CA).
database.connecttimeout
0
Maximum time to wait for a connection. Zero or not specified means waiting indefinitely.
database.draintimeout
0
Maximum time to wait to drain all connections on shutdown. Zero or not specified means waiting indefinitely.
database.preparedstatements
false
Enable prepared statements. Disabled by default for compatibility with PgBouncer.
database.pool.maxidle
0
The maximum number of connections in the idle connection pool. If
maxopen
is less than
maxidle
, then
maxidle
is reduced to match the
maxopen
limit. Zero or not specified means no idle connections.
database.pool.maxopen
0
The maximum number of open connections to the database. If
maxopen
is less than
maxidle
, then
maxidle
is reduced to match the
maxopen
limit. Zero or not specified means unlimited open connections.
database.pool.maxlifetime
0
The maximum amount of time a connection may be reused. Expired connections may be closed lazily before reuse. Zero or not specified means unlimited reuse.
database.pool.maxidletime
0
The maximum amount of time a connection may be idle. Expired connections may be closed lazily before reuse. Zero or not specified means unlimited duration.
database.migrations.enabled
true
Enable the migrations job to automatically run migrations upon initial deployment and upgrades of the Chart. Note that migrations can also be run manually from within any running Registry pods.
database.migrations.activeDeadlineSeconds
3600
Set the activeDeadlineSeconds on the migrations job.
database.migrations.backoffLimit
6
Set the backoffLimit on the migrations job.
gc.disabled
true
When set to
true
, the online GC workers are disabled.
gc.maxbackoff
24h
The maximum exponential backoff duration used to sleep between worker runs when an error occurs. Also applied when there are no tasks to be processed unless
gc.noidlebackoff
is
true
. Please note that this is not the absolute maximum, as a randomized jitter factor of up to 33% is always added.
gc.noidlebackoff
false
When set to
true
, disables exponential backoffs between worker runs when there are no tasks to be processed.
gc.transactiontimeout
10s
The database transaction timeout for each worker run. Each worker starts a database transaction at the start. The worker run is canceled if this timeout is exceeded to avoid stalled or long-running transactions.
gc.blobs.disabled
false
When set to
true
, the GC worker for blobs is disabled.
gc.blobs.interval
5s
The initial sleep interval between each worker run.
gc.blobs.storagetimeout
5s
The timeout for storage operations. Used to limit the duration of requests to delete dangling blobs on the storage backend.
gc.manifests.disabled
false
When set to
true
, the GC worker for manifests is disabled.
gc.manifests.interval
5s
The initial sleep interval between each worker run.
gc.reviewafter
24h
The minimum amount of time after which the garbage collector should pick up a record for review.
-1
means no wait.
migration.enabled
false
When set to
true
, migration mode is enabled. New repositories will be added to the database, while existing repositories will continue to use the filesystem. This is an experimental feature and must not be used in production environments.
migration.disablemirrorfs
false
When set to
true
, the registry does not write metadata to the filesystem. Must be used in combination with the metadata database. This is an experimental feature and must not be used in production environments.
migration.rootdirectory
Allows repositories that have been migrated to the database to use separate storage paths. Using a distinct root directory from the main storage driver configuration allows online migrations. This is an experimental feature and must not be used in production environments.
migration.importtimeout
5m
The maximum duration that an import job may take to complete before it is aborted. This is an experimental feature and must not be used in production environments.
migration.preimporttimeout
1h
The maximum duration that a pre import job may take to complete before it is aborted. This is an experimental feature and must not be used in production environments.
migration.tagconcurrency
1
This parameter determines the number of concurrent tag details requests to the filesystem backend. This can greatly reduce the time spent importing a repository after a successful pre import has completed. Pre import is not affected by this parameter. This is an experimental feature and must not be used in production environments.
migration.maxconcurrentimports
1
This parameter determines the maximum number of concurrent imports allowed per instance of the registry. This can help reduce the number of resources that the registry needs when the migration mode is enabled. This is an experimental feature and must not be used in production environments.
migration.importnotification.enabled
false
When set to
true
, the import notification feature will be enabled. This requires the following parameters to be configured. This is an experimental feature and must not be used in production environments.
The URL endpoint where the notification will be sent to. Required when
importnotification
is enabled. Must be a valid URL, including scheme. A placeholder can be defined as
{path}
to add the repository path in the URL.
migration.importnotification.timeout
5s
A value for the HTTP timeout for the import notification. This is an experimental feature and must not be used in production environments.
migration.importnotification.secret
''
This will be automatically created if
not provided, when the
shared-secrets
feature is enabled. This is an experimental feature and must not be used in production environments.
securityContext.fsGroup
1000
Group ID under which the pod should be started
securityContext.runAsUser
1000
User ID under which the pod should be started
serviceLabels
{}
Supplemental service labels
tokenService
container_registry
JWT token service
tokenIssuer
gitlab-issuer
JWT token issuer
tolerations
[]
Toleration labels for pod assignment
middleware.storage
Configuration layer for middleware storage (s3 for instance)
redis.cache.enabled
false
When set to
true
, the Redis cache is enabled. This feature is dependent on the metadata database being enabled. Repository metadata will be cached on the configured Redis instance.
redis.cache.host
<Redis URL>
The hostname of the Redis instance. If empty, the value will be filled as
global.redis.host:global.redis.port
.
redis.cache.port
6379
The port of the Redis instance.
redis.cache.sentinels
[]
List sentinels with host and port.
redis.cache.mainname
The main server name. Only applicable for Sentinel.
redis.cache.password.enabled
false
Indicates whether the Redis cache used by the Registry is password protected.
redis.cache.password.secret
gitlab-redis-secret
Name of the secret containing the Redis password. This will be automatically created if not provided, when the
shared-secrets
feature is enabled.
redis.cache.password.key
redis-password
Secret key in which the Redis password is stored.
redis.cache.db
0
The name of the database to use for each connection.
redis.cache.dialtimeout
0s
The timeout for connecting to the Redis instance. Defaults to no timeout.
redis.cache.readtimeout
0s
The timeout for reading from the Redis instance. Defaults to no timeout.
redis.cache.writetimeout
0s
The timeout for writing to the Redis instance. Defaults to no timeout.
redis.cache.tls.enabled
false
Set to
true
to enable TLS.
redis.cache.tls.insecure
false
Set to
true
to disable server name verification when connecting over TLS.
redis.cache.pool.size
10
The maximum number of socket connections. Default is 10 connections.
redis.cache.pool.maxlifetime
1h
The connection age at which client retires a connection. Default is to not close aged connections.
redis.cache.pool.idletimeout
300s
How long to wait before closing inactive connections.
Chart configuration examples
pullSecrets
pullSecrets allows you to authenticate to a private registry to pull images for a pod. Additional details about private registries and their authentication methods can be found in the Kubernetes documentation.
The way we’ve chosen to implement compartmentalized sub-charts includes the ability to disable the components that you may not want in a given deployment. For this reason, the first setting you should decide on is enabled.
By default, Registry is enabled out of the box. Should you wish to disable it, set enabled: false.
Configuring the image
This section details the settings for the container image used by this sub-chart’s Deployment. You can change the included version of the Registry and the pullPolicy.
Default settings:
tag: 'v3.63.0-gitlab'
pullPolicy: 'IfNotPresent'
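For instance, to pin a specific image (the repository value here is an assumption; check the chart’s values.yaml for the actual default):

```yaml
registry:
  image:
    repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-container-registry
    tag: 'v3.63.0-gitlab'
    pullPolicy: IfNotPresent
```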
Configuring the service
This section controls the name and type of the Service. These settings will be populated by values.yaml.
By default, the Service is configured as:
| Name | Type | Default | Description |
|------|------|---------|-------------|
| name | String | registry | Configures the name of the service |
| type | String | ClusterIP | Configures the type of the service |
| externalPort | Int | 5000 | Port exposed by the Service |
| internalPort | Int | 5000 | Port utilized by the Pod to accept requests from the service |
| clusterIP | String | null | Allows one to configure a custom Cluster IP as necessary |
| loadBalancerIP | String | null | Allows one to configure a custom LoadBalancer IP address as necessary |
Configuring the ingress
This section controls the registry Ingress.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| apiVersion | String | | Value to use in the apiVersion field. |
| annotations | String | | This field is an exact match to the standard annotations for Kubernetes Ingress. |
| configureCertmanager | Boolean | | Toggles the Ingress annotation cert-manager.io/issuer. For more information see the TLS requirement for GitLab Pages. |
| enabled | Boolean | false | Setting that controls whether to create Ingress objects for services that support them. When false, the global.ingress.enabled setting is used. |
| tls.enabled | Boolean | true | When set to false, you disable TLS for the Registry subchart. This is mainly useful for cases in which you cannot use TLS termination at ingress level, like when you have a TLS-terminating proxy before the Ingress Controller. |
| tls.secretName | String | | The name of the Kubernetes TLS Secret that contains a valid certificate and key for the registry URL. When not set, global.ingress.tls.secretName is used instead. Defaults to not being set. |
Configuring TLS
Container Registry supports TLS which secures its communication with other components, including nginx-ingress.
Prerequisites to configure TLS:
- The TLS certificate must include the Registry Service host name (for example, RELEASE-registry.default.svc) in the Common Name (CN) or Subject Alternate Name (SAN).
After the TLS certificate generates:
1. Create a Kubernetes TLS Secret.
2. Create another Secret that only contains the CA certificate of the TLS certificate with the ca.crt key.
To enable TLS:
1. Set registry.tls.enabled to true.
2. Set global.hosts.registry.protocol to https.
3. Pass the Secret names to registry.tls.secretName and global.certificates.customCAs accordingly.
When registry.tls.verify is true, you must pass the CA certificate Secret name to registry.tls.caSecretName. This is necessary for self-signed certificates and custom Certificate Authorities. This Secret is used by NGINX to verify the TLS certificate of the Registry.
The Registry debug port also supports TLS. The debug port is used for the Kubernetes liveness and readiness checks, as well as exposing a /metrics endpoint for Prometheus (if enabled). TLS can be enabled for it by setting registry.debug.tls.enabled to true. A Kubernetes TLS Secret dedicated to the debug port’s TLS configuration can be provided in registry.debug.tls.secretName. If a dedicated secret is not specified, the debug configuration will fall back to sharing registry.tls.secretName with the registry’s regular TLS configuration.
For Prometheus to scrape the /metrics/ endpoint using https, additional configuration is required for the certificate’s CommonName attribute or a SubjectAlternativeName entry. See Configuring Prometheus to scrape TLS-enabled endpoints for those requirements.
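Putting those steps together, a hedged values sketch might look like the following (the Secret names registry-example-tls and registry-example-tls-ca are placeholders that you must create beforehand):

```yaml
global:
  hosts:
    registry:
      protocol: https
  certificates:
    customCAs:
      - secret: registry-example-tls-ca   # Secret containing ca.crt
registry:
  tls:
    enabled: true
    secretName: registry-example-tls      # Kubernetes TLS Secret (tls.crt / tls.key)
    # caSecretName: registry-example-tls-ca  # required when registry.tls.verify is true
  debug:
    tls:
      enabled: true                        # falls back to registry.tls.secretName if no dedicated secret is set
```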
Configuring the networkpolicy
This section controls the registry NetworkPolicy. This configuration is optional and is used to limit egress and Ingress of the registry to specific endpoints.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| enabled | Boolean | false | This setting enables the NetworkPolicy for registry |
| ingress.enabled | Boolean | false | When set to true, the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified. |
| ingress.rules | Array | [] | Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| egress.enabled | Boolean | false | When set to true, the Egress network policy will be activated. This will block all egress connections unless rules are specified. |
| egress.rules | Array | [] | Rules for the egress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
Example policy for preventing connections to all internal endpoints
The Registry service normally requires egress connections to object storage, Ingress connections from Docker clients, and kube-dns for DNS lookups. This adds the following network restrictions to the Registry service:
- All egress requests to the local network on 10.0.0.0/8 port 53 are allowed (for kubeDNS)
- Other egress requests to the local network on 10.0.0.0/8 are restricted
- Egress requests outside of the 10.0.0.0/8 are allowed
Note that the registry service requires outbound connectivity to the public internet for images on external object storage.

```yaml
networkpolicy:
  enabled: true
  egress:
    enabled: true
    # The following rules enable traffic to all external
    # endpoints, except the local
    # network (except DNS requests)
    rules:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53
            protocol: UDP
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 10.0.0.0/8
```
Defining the Registry Configuration
The following properties of this chart pertain to the configuration of the underlying registry container. Only the most critical values for integration with GitLab are exposed. For this integration, we make use of the auth.token.x settings of Docker Distribution, controlling authentication to the registry via JWT authentication tokens.
httpSecret
Field httpSecret is a map that contains two items: secret and key.
The content of the key this references correlates to the http.secret value of registry. This value should be populated with a cryptographically generated random string.
The shared-secrets job will automatically create this secret if not provided. It will be filled with a securely generated 128-character alphanumeric string that is base64 encoded.
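To provide this secret yourself rather than relying on the shared-secrets job, a sketch of the values reference (the Secret name is a placeholder; it must hold the random string under the named key):

```yaml
registry:
  httpSecret:
    secret: gitlab-registry-httpsecret   # existing Secret containing the random string
    key: secret                          # key within that Secret
```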
Notification Secret is utilized for calling back to the GitLab application in various ways, such as for Geo to help manage syncing Container Registry data between primary and secondary sites. It is also used to send import notifications if the migration is enabled and the endpoint is configured.
The notificationSecret secret object will be automatically created if not provided, when the shared-secrets feature is enabled.

```yaml
global:
  # To provide your own secret
  registry:
    notificationSecret:
      secret: gitlab-registry-notification
      key: secret

  # If utilising Geo, and wishing to sync the container registry
  geo:
    registry:
      replication:
        enabled: true
        primaryApiUrl: <URL to primary registry>
```

Ensure the secret value is set to the name of the secret created above.
Redis cache Secret
The Redis cache Secret is used when global.redis.password.enabled is set to true.
When the shared-secrets feature is enabled, the gitlab-redis-secret secret object is automatically created if not provided.
To create this secret manually, see the Redis password instructions.
authEndpoint
The authEndpoint field is a string, providing the URL to the GitLab instance(s) that the registry will authenticate to.
The value should include the protocol and hostname only. The chart template will automatically append the necessary request path. The resulting value will be populated to auth.token.realm inside the container. For example:

```yaml
authEndpoint: "https://gitlab.example.com"
```

By default this field is populated with the GitLab hostname configuration set by the Global Settings.
certificate
The certificate field is a map containing two items: secret and key.
secret is a string containing the name of the Kubernetes Secret that houses the certificate bundle to be used to verify the tokens created by the GitLab instance(s).
key is the name of the key in the Secret which houses the certificate bundle that will be provided to the registry container as auth.token.rootcertbundle.
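For example (both names below are placeholders; use the Secret and key that actually hold your token-signing certificate bundle):

```yaml
registry:
  certificate:
    secret: gitlab-registry     # Kubernetes Secret with the certificate bundle
    key: registry-auth.crt      # key inside the Secret, provided as auth.token.rootcertbundle
```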
The compatibility field is a map relating directly to the configuration file’s compatibility section.
Default contents:

```yaml
compatibility:
  schema1:
    enabled: false
```

readiness and liveness probe
By default there is a readiness and liveness probe configured to check /debug/health on port 5001, which is the debug port.
schema1
The schema1 section controls the compatibility of the service with version 1 of the Docker manifest schema. This setting is provided as a means of supporting Docker clients earlier than 1.10, after which schema v2 is used by default.
If you must support older versions of Docker clients, you can do so by setting registry.compatibility.schema1.enabled: true.
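In values form, that is:

```yaml
registry:
  compatibility:
    schema1:
      enabled: true   # only needed for Docker clients older than 1.10
```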
validation
The validation field is a map that controls the Docker image validation process in the registry. When image validation is enabled, the registry rejects Windows images with foreign layers, unless the manifests.urls.allow field within the validation stanza is explicitly set to allow those layer URLs.
Validation only happens during manifest push, so images already present in the registry are not affected by changes to the values in this section.
The image validation is turned off by default. To enable image validation you need to explicitly set registry.validation.disabled: false.
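That is:

```yaml
registry:
  validation:
    disabled: false   # image validation is off by default; false here turns it on
```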
manifests
The manifests field allows configuration of validation policies particular to manifests.
The urls section contains both allow and deny fields. For manifest layers which contain URLs to pass validation, that layer must match one of the regular expressions in the allow field, while not matching any regular expression in the deny field.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| referencelimit | Int | 0 | The maximum number of references, such as layers, image configurations, and other manifests, that a single manifest may have. When set to 0 (default) this validation is disabled. |
| payloadsizelimit | Int | 0 | The maximum data size in bytes of manifest payloads. When set to 0 (default) this validation is disabled. |
| urls.allow | Array | [] | List of regular expressions that enables URLs in the layers of manifests. When left empty (default), layers with any URLs will be rejected. |
| urls.deny | Array | [] | List of regular expressions that restricts the URLs in the layers of manifests. When left empty (default), no layer with URLs which passed the urls.allow list will be rejected. |
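A sketch combining these settings (the regular expression is purely illustrative):

```yaml
registry:
  validation:
    disabled: false
    manifests:
      referencelimit: 0        # 0 disables the reference count check
      payloadsizelimit: 0      # 0 disables the payload size check
      urls:
        allow:
          - ^https://cdn\.example\.com/   # permit layer URLs on this host only
        deny: []
```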
notifications
The notifications field is used to configure Registry notifications. It has an empty hash as its default value.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| endpoints | Array | [] | List of items where each item corresponds to an endpoint |

The hpa field is an object, controlling the number of registry instances to create as a part of the set. This defaults to a minReplicas value of 2, a maxReplicas value of 10, and configures the cpu.targetAverageUtilization to 75%.
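Expressed as values, these defaults look like the following (the commented memory target is an optional illustration):

```yaml
registry:
  hpa:
    minReplicas: 2
    maxReplicas: 10
    cpu:
      targetType: Utilization
      targetAverageUtilization: 75
    # memory:
    #   targetType: AverageValue
    #   targetAverageValue: 1G
```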
storage
```yaml
storage:
  secret:
  key: config
  extraKey:
```

The storage field is a reference to a Kubernetes Secret and associated key. The content of this secret is taken directly from Registry Configuration: storage. Please refer to that documentation for more details.
Examples for AWS S3 and Google GCS drivers can be found in examples/objectstorage:
- registry.s3.yaml
- registry.gcs.yaml
For S3, make sure you give the correct permissions for registry storage. For more information about storage configuration, see Container Registry storage driver in the administration documentation.
Place the contents of the storage block into the secret, and provide the following as items to the storage map:
- secret: name of the Kubernetes Secret housing the YAML block.
- key: name of the key in the secret to use. Defaults to config.
- extraKey: (optional) name of an extra key in the secret, which will be mounted to /etc/docker/registry/storage/${extraKey} within the container. This can be used to provide the keyfile for the gcs driver.

```shell
# Example using S3
kubectl create secret generic registry-storage \
    --from-file=config=registry-storage.yaml

# Example using GCS with JSON key
# - Note: `registry.storage.extraKey=gcs.json`
kubectl create secret generic registry-storage \
    --from-file=config=registry-storage.yaml \
    --from-file=gcs.json=example-project-382839-gcs-bucket.json
```
You can disable the redirect for the storage driver, ensuring that all traffic flows through the Registry service instead of redirecting to another backend, as shown in the sketch below.
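A hedged sketch of the Secret contents (not chart values) for an S3 driver with the redirect disabled; the bucket, region, and credentials are placeholders:

```yaml
s3:
  bucket: example-registry-bucket
  region: us-east-1
  accesskey: AKIAEXAMPLE          # placeholder credentials
  secretkey: exampleSecretKey
  v4auth: true
redirect:
  disable: true                   # keep all traffic flowing through the Registry service
```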
If you chose to use the filesystem driver:
- You will need to provide persistent volumes for this data.
- hpa.minReplicas should be set to 1
- hpa.maxReplicas should be set to 1
For the sake of resiliency and simplicity, it is recommended to make use of an external service, such as s3, gcs, azure, or other compatible Object Storage.
The chart will populate delete.enabled: true into this configuration by default if not specified by the user. This keeps expected behavior in line with the default use of MinIO, as well as the Omnibus GitLab. Any user-provided value will supersede this default.
middleware.storage
Configuration of middleware.storage follows the upstream convention. Configuration is fairly generic and follows a similar pattern:

```yaml
middleware:
  # See https://gitlab.com/gitlab-org/container-registry/-/blob/master/docs/configuration.md#middleware
  storage:
    - name: cloudfront
      options:
        baseurl: https://abcdefghijklmn.cloudfront.net/
        # `privatekey` is auto-populated with the content from the privatekey Secret.
        privatekeySecret:
          secret: cloudfront-secret-name
          # "key" value is going to be used to generate file name for PEM storage:
          # /etc/docker/registry/middleware.storage/<index>/<key>
          key: private-key-ABC.pem
        keypairid: ABCEDFGHIJKLMNOPQRST
```

Within the above code, options.privatekeySecret is a generic Kubernetes secret whose contents correspond to the PEM file contents. The privatekey field used upstream is auto-populated by the chart from the privatekey Secret and will be ignored if specified.
keypairid variants
Various vendors use different field names for the same construct:

| Vendor | Field name |
|--------|------------|
| Google CDN | keyname |
| CloudFront | keypairid |

Only configuration of the middleware.storage section is supported at this time.
debug
The debug port is enabled by default and is used for the liveness/readiness probe. Additionally, Prometheus metrics can be enabled via the metrics values.

```yaml
debug:
  addr:
    port: 5001

metrics:
  enabled: true
```

health
The health property is optional, and contains preferences for a periodic health check on the storage driver’s backend storage. For more details, see Docker’s configuration documentation.
If the Registry database is enabled, Registry will use its own database to track its state. Follow the steps below to manually create the database and role. These instructions assume you are using the bundled PostgreSQL server. If you are using your own server, there will be some variation in how you connect. This is an experimental feature and must not be used in production.
The redis.cache property is optional and provides options related to the Redis cache. To use redis.cache with the registry, the metadata database must be enabled.
The redis.cache can use the global.redis.sentinels configuration. Local values can be provided and will take precedence over the global values. For example:
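A hedged sketch of such local overrides (the hostnames are placeholders; the metadata database must be enabled for the cache):

```yaml
registry:
  database:
    enabled: true                  # required for the Redis cache
  redis:
    cache:
      enabled: true
      mainname: gitlab-redis-cache
      sentinels:
        - host: sentinel1.example.com
          port: 26379
        - host: sentinel2.example.com
          port: 26379
```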
The Docker Registry will build up extraneous data over time which can be freed using garbage collection. As of now, there is no fully automated or scheduled way to run the garbage collection with this chart.
Manual Garbage Collection
Manual garbage collection requires the registry to be in read-only mode first. Let’s assume that you’ve already installed the GitLab chart by using Helm, named it mygitlab, and installed it in the namespace gitlabns. Replace these values in the commands below according to your actual configuration.

```shell
# Because of https://github.com/helm/helm/issues/2948 we can't rely on --reuse-values, so let's get our current config.
helm get values mygitlab > mygitlab.yml

# Upgrade Helm installation and configure the registry to be read-only.
# The --wait parameter makes Helm wait until all resources are in ready state, so we are safe to continue.
helm upgrade mygitlab gitlab/gitlab -f mygitlab.yml --set registry.maintenance.readOnly.enabled=true --wait

# Our registry is in r/o mode now, so let's get the name of one of the registry Pods.
# Note down the Pod name and replace the '<registry-pod>' placeholder below with that value.
# Replace the single quotes to double quotes (' => ") if you are using this with Windows' cmd.exe.
kubectl get pods -n gitlabns -l app=registry -o jsonpath='{.items[0].metadata.name}'

# Run the actual garbage collection. Check the registry's manual if you really want the '-m' parameter.
kubectl exec -n gitlabns <registry-pod> -- /bin/registry garbage-collect -m /etc/docker/registry/config.yml

# Reset registry back to original state.
helm upgrade mygitlab gitlab/gitlab -f mygitlab.yml --wait

# All done :)
```
Running administrative commands against the Container Registry
The administrative commands can be run against the Container Registry only from a Registry pod, where both the registry binary and the necessary configuration are available. Issue #2629 is open to discuss how to provide this functionality from the toolbox pod.
To run administrative commands:
Connect to a Registry pod:

```shell
kubectl exec -it <registry-pod> -- bash
```

Once inside the Registry pod, the registry binary is available in PATH and can be used directly. The configuration file is available at /etc/docker/registry/config.yml. The following example checks the status of the database migration:

```shell
registry database migrate status /etc/docker/registry/config.yml
```

For further details and other available commands, refer to the relevant documentation.
The shared-secrets job is responsible for provisioning a variety of secrets used across the installation, unless otherwise manually specified. This includes:
Initial root password
Self-signed TLS certificates for all public services: GitLab, MinIO, and Registry
Registry authentication certificates
MinIO, Registry, GitLab Shell, and Gitaly secrets
Redis and PostgreSQL passwords
SSH host keys
GitLab Rails secret for encrypted credentials
Installation command line options
The table below contains all the possible configurations that can be supplied to the helm install command using the --set flag:

| Parameter | Default | Description |
|-----------|---------|-------------|
| enabled | true | See below |
| env | production | Rails environment |
| podLabels | | Supplemental Pod labels. Will not be used for selectors. |

Some users may wish to explicitly disable the functionality provided by this job. To do this, we have provided the enabled flag as a boolean, defaulting to true.
To disable the job, pass --set shared-secrets.enabled=false, or pass the following in a YAML file via the -f flag to helm:

```yaml
shared-secrets:
  enabled: false
```

If you disable this job, you must manually create all secrets, and provide all necessary secret content. See installation/secrets for further details.
This guide contains instructions for when and how to generate a changelog entry
file, as well as information and history about our changelog process.
Overview
Each bullet point, or entry, in our CHANGELOG.md file is generated from the subject line of a Git commit. Commits are included when they contain the Changelog Git trailer. When generating the changelog, author and merge request details are added automatically.
The Changelog trailer accepts the following values:
added
fixed
changed
deprecated
removed
security
performance
other
An example of a Git commit to include in the changelog is the following:
```plaintext
Update git vendor to gitlab

Now that we are using gitaly to compile git, the git version isn't known from the manifest, instead we are getting the gitaly version. Update our vendor field to be `gitlab` to avoid cve matching old versions.

Changelog: changed
```

GitLab automatically links the merge request to the commit when generating the changelog. If you want to override the merge request to link to, you can specify an alternative merge request using the MR trailer:

```plaintext
Update git vendor to gitlab

Now that we are using gitaly to compile git, the git version isn't known from the manifest, instead we are getting the gitaly version. Update our vendor field to be `gitlab` to avoid cve matching old versions.
```

The value must be the full URL of the merge request.
What warrants a changelog entry?
- Any user-facing change should have a changelog entry. Example: “GitLab now uses system fonts for all text.”
- A fix for a regression introduced and then fixed in the same release (i.e., fixing a bug introduced during a monthly release candidate) should not have a changelog entry.
- Any developer-facing change (e.g., refactoring, technical debt remediation, test suite changes) should not have a changelog entry. Example: “Reduce database records created during Cycle Analytics model spec.”
- Any contribution from a community member, no matter how small, may have a changelog entry regardless of these guidelines if the contributor wants one. Example: “Fixed a typo on the search results page. (Jane Smith)”
Writing good changelog entries
A good changelog entry should be descriptive and concise. It should explain the change to a reader who has zero context about the change. If you have trouble making it both concise and descriptive, err on the side of descriptive.
Bad: Go to a project order.
Good: Show a user’s starred projects at the top of the “Go to project” dropdown.
The first example provides no context of where the change was made, or why, or how it benefits the user.
Bad: Copy (some text) to clipboard.
Good: Update the “Copy to clipboard” tooltip to indicate what’s being copied.
Again, the first example is too vague and provides no context.
Bad: Fixes and Improves CSS and HTML problems in mini pipeline graph and builds dropdown.
Good: Fix tooltips and hover states in mini pipeline graph and builds dropdown.
The first example is too focused on implementation details. The user doesn’t care that we changed CSS and HTML, they care about the end result of those changes.
Bad: Strip out nils in the Array of Commit objects returned from find_commits_by_message_with_elastic
The first example focuses on how we fixed something, not on what it fixes. The rewritten version clearly describes the end benefit to the user (fewer 500 errors), and when (searching commits with Elasticsearch).
Use your best judgement and try to put yourself in the mindset of someone reading the compiled changelog. Does this entry add value? Does it offer context about where and why the change was made?
How to generate a changelog entry
Git trailers are added when committing your changes. This can be done using your text editor of choice. Adding the trailer to an existing commit requires either amending the commit (if it’s the most recent one), or an interactive rebase using git rebase -i.
To update the last commit, run the following:
git commit --amend
You can then add the Changelog trailer to the commit message. If you had already pushed prior commits to your remote branch, you have to force push the new commit:
git push -f origin your-branch-name
To edit older (or multiple) commits, use git rebase -i HEAD~N where N is the last N number of commits to rebase. Let's say you have 3 commits on your branch: A, B, and C. If you want to update commit B, you need to run:
git rebase -i HEAD~2
This starts an interactive rebase session for the last two commits. When started, Git presents you with a text editor with contents along the lines of the following:

```plaintext
pick B Subject of commit B
pick C Subject of commit C
```
To update commit B, change the word pick to reword, then save and quit the editor. Once closed, Git presents you with a new text editor instance to edit the commit message of commit B. Add the trailer, then save and quit the editor. If all went well, commit B is now updated.
For more information about interactive rebases, take a look at the Git documentation.
History and Reasoning
This method was adopted from the primary GitLab codebase, as we found the workflow to be appealing and familiar.