```yaml
## Allow to overwrite under which User and Group we're running.
securityContext:
  runAsUser: 1000
  fsGroup: 1000

## Enable deployment to use a serviceAccount
serviceAccount:
  enabled: false
  create: false
  annotations: {}
  ## Name to be used for serviceAccount, otherwise defaults to chart fullname
  # name:
```
| Parameter | Description | Default |
|-----------|-------------|---------|
| `deployment.strategy` | Allows one to configure the update strategy utilized by the deployment | `{}` |
| `enabled` | Mailroom enablement flag | `true` |
| `hpa.behavior` | Behavior contains the specifications for up- and downscaling behavior (requires `autoscaling/v2beta2` or higher) | `{scaleDown: {stabilizationWindowSeconds: 300 }}` |
| `hpa.customMetrics` | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in `targetAverageUtilization`) | `[]` |
| `hpa.cpu.targetType` | Set the autoscaling CPU target type, must be either `Utilization` or `AverageValue` | `Utilization` |
| `hpa.cpu.targetAverageValue` | Set the autoscaling CPU target value | |
| `hpa.cpu.targetAverageUtilization` | Set the autoscaling CPU target utilization | `75` |
| `hpa.memory.targetType` | Set the autoscaling memory target type, must be either `Utilization` or `AverageValue` | |
| `hpa.memory.targetAverageValue` | Set the autoscaling memory target value | |
| `hpa.memory.targetAverageUtilization` | Set the autoscaling memory target utilization | |
| `hpa.maxReplicas` | Maximum number of replicas | `2` |
| `hpa.minReplicas` | Minimum number of replicas | `1` |
| `image.pullPolicy` | Mailroom image pull policy | `IfNotPresent` |
| `extraEnvFrom` | List of extra environment variables from other data sources to expose | |
| `common.labels` | Supplemental labels that are applied to all objects created by this chart. | `{}` |
| `resources` | Mailroom resource requirements | `{ requests: { cpu: 50m, memory: 150M }}` |
| `networkpolicy.annotations` | Annotations to add to the NetworkPolicy | `{}` |
| `networkpolicy.egress.enabled` | Flag to enable egress rules of NetworkPolicy | `false` |
| `networkpolicy.egress.rules` | Define a list of egress rules for NetworkPolicy | `[]` |
| `networkpolicy.enabled` | Flag for using NetworkPolicy | `false` |
| `networkpolicy.ingress.enabled` | Flag to enable ingress rules of NetworkPolicy | `false` |
| `networkpolicy.ingress.rules` | Define a list of ingress rules for NetworkPolicy | `[]` |
| `securityContext.fsGroup` | Group ID under which the pod should be started | `1000` |
| `securityContext.runAsUser` | User ID under which the pod should be started | `1000` |
| `serviceAccount.annotations` | Annotations for ServiceAccount | `{}` |
| `serviceAccount.enabled` | Flag for using ServiceAccount | `false` |
| `serviceAccount.create` | Flag for creating a ServiceAccount | `false` |
| `serviceAccount.name` | Name of ServiceAccount to use | |
| `tolerations` | Tolerations to add to the Mailroom | |
| `priorityClassName` | Priority class assigned to pods. | |
Incoming email

By default, incoming email is disabled. There are two methods for reading incoming email:

- IMAP
- Microsoft Graph

First, enable it by setting the common settings. Then configure the IMAP settings or Microsoft Graph settings.

These methods can be configured in `values.yaml`. See the following examples:

- Incoming email with IMAP
- Incoming email with Microsoft Graph
IMAP

To enable incoming email for IMAP, provide details of your IMAP server and access credentials using the `global.appConfig.incomingEmail` settings.

In addition, the requirements for the IMAP email account should be reviewed to ensure that the targeted IMAP account can be used by GitLab for receiving email. Several common email services are also documented on the same page to aid in setting up incoming email.

The IMAP password will still need to be created as a Kubernetes Secret as described in the secrets guide.
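A minimal sketch of the IMAP settings follows; the server, mailbox address, and Secret name are placeholders, so confirm the full list of `incomingEmail` keys against the global chart documentation:

```yaml
global:
  appConfig:
    incomingEmail:
      enabled: true
      address: "incoming+%{key}@example.com"    # placeholder sub-addressed mailbox
      host: "imap.example.com"                  # placeholder IMAP server
      port: 993
      ssl: true
      user: "incoming@example.com"
      password:
        secret: gitlab-incoming-email-password  # placeholder Secret holding the IMAP password
        key: password
```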
Microsoft Graph
See the GitLab documentation on creating an Azure Active Directory application.
Provide the tenant ID, client ID, and client secret. You can find details for these settings in the command line options.
Create a Kubernetes secret containing the client secret as described in the secrets guide.
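A corresponding sketch for Microsoft Graph is shown below; the tenant/client values and Secret name are placeholders, and the key names (`inboxMethod`, `tenantId`, `clientId`, `clientSecret`) should be verified against the command line options referenced above:

```yaml
global:
  appConfig:
    incomingEmail:
      enabled: true
      address: "incoming+%{key}@example.com"         # placeholder
      inboxMethod: microsoft_graph
      tenantId: "<your tenant ID>"
      clientId: "<your client ID>"
      clientSecret:
        secret: gitlab-incoming-email-client-secret  # placeholder Secret holding the client secret
        key: secret
```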
Reply-by-email

To use the reply-by-email feature, where users can reply to notification emails to comment on issues and MRs, you need to configure both outgoing email and incoming email settings.

Service Desk email

By default, the Service Desk email is disabled. As with incoming email, enable it by setting the common settings. Then configure the IMAP settings or Microsoft Graph settings.

These options can also be configured in `values.yaml`. See the following examples:

- Service Desk with IMAP
- Service Desk with Microsoft Graph

Service Desk email requires that Incoming email be configured.
IMAP

Provide details of your IMAP server and access credentials using the `global.appConfig.serviceDeskEmail` settings. You can find details for these settings in the command line options.

Create a Kubernetes secret containing the IMAP password as described in the secrets guide.
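The shape mirrors the incoming email sketch above, nested under `serviceDeskEmail` instead (server, mailbox address, and Secret name are placeholders):

```yaml
global:
  appConfig:
    serviceDeskEmail:
      enabled: true
      address: "contact+%{key}@example.com"          # placeholder
      host: "imap.example.com"                       # placeholder IMAP server
      port: 993
      ssl: true
      user: "contact@example.com"
      password:
        secret: gitlab-service-desk-email-password   # placeholder Secret
        key: password
```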
Microsoft Graph

See the GitLab documentation on creating an Azure Active Directory application. Provide the tenant ID, client ID, and client secret using the `global.appConfig.serviceDeskEmail` settings. You can find details for these settings in the command line options.

You will also have to create a Kubernetes secret containing the client secret as described in the secrets guide.
Using the GitLab-Migrations chart
The `migrations` sub-chart provides a single migration Job that handles seeding/migrating the GitLab database. The chart runs using the GitLab Rails codebase.

After migrating, this Job also edits the application settings in the database to turn off writes to the authorized keys file. In the charts, we only support use of the GitLab Authorized Keys API with the SSH `AuthorizedKeysCommand`, instead of writing to an authorized keys file.
Requirements
This chart depends on Redis and PostgreSQL, either as part of the complete GitLab chart or provided as external services reachable from the Kubernetes cluster this chart is deployed onto.
Design Choices
The `migrations` sub-chart creates a new migrations Job each time the chart is deployed. In order to prevent job name collisions, we append the chart revision, and a random alpha-numeric value, to the Job name each time it is created. The purpose of the random text is described further in this section.

For now we also have the Jobs remain as objects in the cluster after they complete, so that we can observe the migration logs. Currently this means these Jobs persist even after a `helm uninstall`. This is one of the reasons why we append random text to the Job name, so that future deployments using the same release name don't cause conflicts. Once we have some form of log-shipping in place, we can revisit the persistence of these objects.

The container used in this chart has some additional optimizations that we are not currently using in this chart, mainly the ability to quickly skip running migrations if they are already up to date, without needing to boot up the Rails application to check. This optimization requires us to persist the migration status, which we are not doing with this chart at the moment. In the future, we will introduce storage support for the migration status to this chart.
Configuration
The `migrations` chart is configured in two parts: external services, and chart settings.

Installation command line options

The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
| Parameter | Description | Default |
|-----------|-------------|---------|
| `common.labels` | Supplemental labels that are applied to all objects created by this chart. | |
By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can instead use the Community Edition. Learn more about the difference between the two.

In order to use the Community Edition, set `image.repository` to `registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ce`.
Redis

host

The hostname of the Redis server with the database to use. This can be omitted in lieu of `serviceName`. If using Redis Sentinels, the `host` attribute needs to be set to the cluster name as specified in the `sentinel.conf`.

serviceName

The name of the `service` which is operating the Redis database. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Redis as a part of the overall GitLab chart. This will default to `redis`.

port

The port on which to connect to the Redis server. Defaults to `6379`.

password

The `password` attribute for Redis has two sub keys:

- `secret` defines the name of the Kubernetes `Secret` to pull from
- `key` defines the name of the key in the above secret that contains the password.

sentinels

The `sentinels` attribute allows for a connection to a Redis HA cluster. The sub keys describe each Sentinel connection:

- `host` defines the hostname for the Sentinel service
- `port` defines the port number to reach the Sentinel service, defaults to `26379`
Note: The current Redis Sentinel support only works with Sentinels that have been deployed separately from the GitLab chart. As a result, the Redis deployment through the GitLab chart should be disabled with `redis.install=false`. The Secret containing the Redis password will need to be manually created before deploying the GitLab chart.
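Pulling those attributes together, an external Redis with Sentinels might look like the sketch below. Hostnames and the Secret name are placeholders, and when deploying the umbrella chart these values usually live under `global.redis` instead of a chart-local `redis:` block:

```yaml
redis:
  host: gitlab-redis              # Sentinel cluster name from sentinel.conf (placeholder)
  port: 6379
  password:
    secret: gitlab-redis-secret   # placeholder Secret
    key: redis-password
  sentinels:
    - host: sentinel1.example.com
      port: 26379
    - host: sentinel2.example.com
      port: 26379
```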
PostgreSQL

host

The hostname of the PostgreSQL server with the database to use. This can be omitted if `postgresql.install=true` (default non-production).

serviceName

The name of the service which is operating the PostgreSQL database. If this is present, and `host` is not, the chart will template the hostname of the service in place of the `host` value.

port

The port on which to connect to the PostgreSQL server. Defaults to `5432`.

database

The name of the database to use on the PostgreSQL server. This defaults to `gitlabhq_production`.

preparedStatements

If prepared statements should be used when communicating with the PostgreSQL server. Defaults to `false`.

username

The username with which to authenticate to the database. This defaults to `gitlab`.

password

The `password` attribute for PostgreSQL has two sub keys:

- `secret` defines the name of the Kubernetes `Secret` to pull from
- `key` defines the name of the key in the above secret that contains the password.
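A corresponding sketch for an external PostgreSQL server follows; the hostname and Secret name are placeholders, and when deploying the umbrella chart these values usually live under `global.psql`:

```yaml
psql:
  host: postgresql.example.com        # placeholder external server
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres-password  # placeholder Secret
    key: psql-password
```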
The Praefect chart is still under development. The alpha version is not yet suitable for production use. Upgrades may require significant manual intervention.
See our Praefect GA release Epic for more information.
The Praefect chart is used to manage a Gitaly cluster inside a GitLab installation deployed with the Helm charts.
Known limitations and issues

- The database has to be manually created.
- The cluster size is fixed: Gitaly Cluster does not currently support autoscaling.
- Using a Praefect instance in the cluster to manage Gitaly instances outside the cluster is not supported.
- Upgrades to version 4.8 of the chart (GitLab 13.8) will encounter an issue that makes it appear that repository data is lost. Data is not lost, but requires manual intervention.
Requirements

This chart consumes the Gitaly chart. Settings from `global.gitaly` are used to configure the instances created by this chart. Documentation of these settings can be found in the Gitaly chart documentation.

Important: `global.gitaly.tls` is independent of `global.praefect.tls`. They are configured separately.

By default, this chart will create 3 Gitaly Replicas.
Configuration

The chart is disabled by default. To enable it as part of a chart deploy, set `global.praefect.enabled=true`.

Replicas

The default number of replicas to deploy is 3. This can be changed by setting `global.praefect.virtualStorages[].gitalyReplicas` with the desired number of replicas. For example:
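(A sketch; the storage name and counts are illustrative.)

```yaml
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1   # assumption: commonly set alongside gitalyReplicas
```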
Group-level wikis cannot be moved using the API at this time.
When migrating from standalone Gitaly instances to a Praefect setup, `global.praefect.replaceInternalGitaly` can be set to `false`. This ensures that the existing Gitaly instances are preserved while the new Praefect-managed Gitaly instances are created.

When migrating to Praefect, none of Praefect's virtual storages can be named `default`. This is because there must be at least one storage named `default` at all times, and therefore the name is already taken by the non-Praefect configuration.
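A sketch of that intermediate configuration is shown below; the virtual storage name and counts are illustrative:

```yaml
global:
  praefect:
    enabled: true
    replaceInternalGitaly: false
    virtualStorages:
      - name: virtualStorage2
        gitalyReplicas: 3
        maxUnavailable: 1
```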
The instructions to migrate to Gitaly Cluster can then be followed to move data from the `default` storage to `virtualStorage2`. If additional storages were defined under `global.gitaly.internal.names`, be sure to migrate repositories from those storages as well.

After the repositories have been migrated to `virtualStorage2`, `replaceInternalGitaly` can be set back to `true` if a storage named `default` is added in the Praefect configuration.

The instructions to migrate to Gitaly Cluster can be followed again to move data from `virtualStorage2` to the newly-added `default` storage, if desired.

Finally, see the repository storage paths documentation to configure where new repositories are stored.
Creating the database

Praefect uses its own database to track its state. This has to be manually created in order for Praefect to be functional.

These instructions assume you are using the bundled PostgreSQL server. If you are using your own server, there will be some variation in how you connect.

1. Log into your database instance:

   ```shell
   kubectl exec -it $(kubectl get pods -l app=postgresql -o custom-columns=NAME:.metadata.name --no-headers) -- bash
   ```

1. By default, the `shared-secrets` Job will generate a secret for you. Fetch the password:

   ```shell
   kubectl get secret RELEASE_NAME-praefect-dbsecret -o jsonpath="{.data.secret}" | base64 --decode
   ```

1. Set the password in the `psql` prompt:

   ```shell
   \password praefect
   ```

1. Create the database:

   ```sql
   CREATE DATABASE praefect WITH OWNER praefect;
   ```
Running Praefect over TLS

Praefect supports communicating with client and Gitaly nodes over TLS. This is controlled by the settings `global.praefect.tls.enabled` and `global.praefect.tls.secretName`.

To run Praefect over TLS follow these steps:

1. The Helm chart expects a certificate to be provided for communicating over TLS with Praefect. This certificate should apply to all the Praefect nodes that are present. Hence all hostnames of each of these nodes should be added as a Subject Alternate Name (SAN) to the certificate, or alternatively, you can use wildcards.

   To know the hostnames to use, check the `/srv/gitlab/config/gitlab.yml` file in the Toolbox Pod and check the various `gitaly_address` fields specified under the `repositories.storages` key within it.

   A basic script for generating custom signed certificates for internal Praefect Pods can be found in this repository. Users can use or refer to that script to generate certificates with proper SAN attributes.

1. Create a TLS Secret using the certificate created.
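For instance, assuming a certificate/key pair generated for the Praefect hostnames (file and Secret names below are illustrative):

```shell
kubectl create secret tls gitlab-praefect-tls --cert=praefect.crt --key=praefect.key
```

Then point the chart at that Secret:

```yaml
global:
  praefect:
    tls:
      enabled: true
      secretName: gitlab-praefect-tls
```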
The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
|-----------|---------|-------------|
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `failover.enabled` | `true` | Whether Praefect should perform failover on node failure |
| `failover.readonlyAfter` | `false` | Whether the nodes should be in read-only mode after failover |
| `autoMigrate` | `true` | Automatically run migrations on startup |
| `electionStrategy` | `sql` | See election strategy |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly` | The default image repository to use. Praefect is bundled as part of the Gitaly image |
| `podLabels` | `{}` | Supplemental Pod labels. Will not be used for selectors. |
| `ntpHost` | `pool.ntp.org` | Configure the NTP server Praefect should ask for the current time. |
| `service.name` | `praefect` | The name of the service to create |
| `service.type` | `ClusterIP` | The type of service to create |
| `service.internalPort` | `8075` | The internal port number that the Praefect pod will be listening on |
| `service.externalPort` | `8075` | The port number the Praefect service should expose in the cluster |
| `init.resources` | | |
| `init.image` | | |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `logging.level` | | Log level |
| `logging.format` | `json` | Log format |
| `logging.sentryDsn` | | Sentry DSN URL - Exceptions from Go server |
| `logging.rubySentryDsn` | | Sentry DSN URL - Exceptions from `gitaly-ruby` |
| `logging.sentryEnvironment` | | Sentry environment to be used for logging |
| `metrics.enabled` | `true` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9236` | Metrics endpoint port |
| `metrics.separate_database_metrics` | `true` | If `true`, metrics scrapes will not perform database queries; setting to `false` may cause performance problems |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping; note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `serviceLabels` | `{}` | Supplemental service labels |
| `statefulset.strategy` | `{}` | Allows one to configure the update strategy utilized by the statefulset |
The `sidekiq` sub-chart provides configurable deployment of Sidekiq workers, explicitly designed to provide separation of queues across multiple `Deployment`s with individual scalability and configuration.

While this chart provides a default `pods:` declaration, if you provide an empty definition, you will have no workers.
Requirements
This chart depends on access to Redis, PostgreSQL, and Gitaly services, either as
part of the complete GitLab chart or provided as external services reachable from
the Kubernetes cluster this chart is deployed onto.
Design Choices
This chart creates multiple `Deployment`s and associated `ConfigMap`s. It was decided that it would be clearer to make use of `ConfigMap` behaviours instead of using `environment` attributes or additional arguments to the `command` for the containers, in order to avoid any concerns about command length. This choice results in a large number of `ConfigMap`s, but provides very clear definitions of what each pod should be doing.
Configuration
The `sidekiq` chart is configured in three parts: chart-wide external services, chart-wide defaults, and per-pod definitions.

Installation command line options

The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags:
| Parameter | Default | Description |
|-----------|---------|-------------|
| `annotations` | | Pod annotations |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `common.labels` | | Supplemental labels that are applied to all objects created by this chart. |
| `concurrency` | `20` | Sidekiq default concurrency |
| `deployment.strategy` | `{}` | Allows one to configure the update strategy utilized by the deployment |
| `deployment.terminationGracePeriodSeconds` | `30` | Optional duration in seconds the pod needs to terminate gracefully. |
| `enabled` | `true` | Sidekiq enabled flag |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | String template of extra volume mounts to configure |
| `extraVolumes` | | String template of extra volumes to configure |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `gitaly.serviceName` | `gitaly` | Gitaly service name |
| `health_checks.port` | `3808` | Health check server port |
| `hpa.behavior` | `{scaleDown: {stabilizationWindowSeconds: 300 }}` | Behavior contains the specifications for up- and downscaling behavior (requires `autoscaling/v2beta2` or higher) |
| `hpa.customMetrics` | `[]` | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in `targetAverageUtilization`) |
| `hpa.cpu.targetType` | `AverageValue` | Set the autoscaling CPU target type, must be either `Utilization` or `AverageValue` |
| `hpa.cpu.targetAverageValue` | `350m` | Set the autoscaling CPU target value |
| `hpa.cpu.targetAverageUtilization` | | Set the autoscaling CPU target utilization |
| `hpa.memory.targetType` | | Set the autoscaling memory target type, must be either `Utilization` or `AverageValue` |
| `metrics.enabled` | | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `3807` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.log_enabled` | `false` | Enables or disables metrics server logs written to `sidekiq_exporter.log` |
| `metrics.podMonitor.enabled` | `false` | If a PodMonitor should be created to enable Prometheus Operator to manage the metrics scraping |
| `metrics.podMonitor.additionalLabels` | `{}` | Additional labels to add to the PodMonitor |
| `metrics.podMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the PodMonitor |
| `metrics.annotations` | `DEPRECATED` | Set explicit metrics annotations. Replaced by template content. |
| `metrics.tls.enabled` | `false` | TLS enabled for the `metrics/sidekiq_exporter` endpoint |
| `metrics.tls.secretName` | `{Release.Name}-sidekiq-metrics-tls` | Secret for the `metrics/sidekiq_exporter` endpoint TLS cert and key |
| `psql.password.key` | `psql-password` | Key to psql password in psql secret |
| `psql.password.secret` | `gitlab-postgres` | psql password secret |
| `psql.port` | | Set PostgreSQL server port. Takes precedence over `global.psql.port` |
| `redis.serviceName` | `redis` | Redis service name |
| `resources.requests.cpu` | `900m` | Sidekiq minimum needed CPU |
| `resources.requests.memory` | `2G` | Sidekiq minimum needed memory |
| `resources.limits.memory` | | Sidekiq maximum allowed memory |
| `timeout` | `25` | Sidekiq job timeout |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `memoryKiller.daemonMode` | `true` | If `false`, uses the legacy memory killer mode |
| `memoryKiller.maxRss` | `2000000` | Maximum RSS before delayed shutdown triggered, expressed in kilobytes |
| `memoryKiller.graceTime` | `900` | Time to wait before a triggered shutdown, expressed in seconds |
| `memoryKiller.shutdownWait` | `30` | Amount of time after triggered shutdown for existing jobs to finish, expressed in seconds |
| `memoryKiller.hardLimitRss` | | Maximum RSS before immediate shutdown triggered, expressed in kilobytes, in daemon mode |
| `memoryKiller.checkInterval` | `3` | Amount of time between memory checks |
| `livenessProbe.initialDelaySeconds` | `20` | Delay before liveness probe is initiated |
| `livenessProbe.periodSeconds` | `60` | How often to perform the liveness probe |
| `livenessProbe.timeoutSeconds` | `30` | When the liveness probe times out |
| `livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `readinessProbe.initialDelaySeconds` | `0` | Delay before readiness probe is initiated |
| `readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `readinessProbe.timeoutSeconds` | `2` | When the readiness probe times out |
| `readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `priorityClassName` | `""` | Allow configuring pods `priorityClassName`, this is used to control pod priority in case of eviction |
Chart configuration examples

resources

`resources` allows you to configure the minimum and maximum amount of resources (memory and CPU) a Sidekiq pod can consume.

Sidekiq pod workloads vary greatly between deployments. Generally speaking, it is understood that each Sidekiq process consumes approximately 1 vCPU and 2 GB of memory. Vertical scaling should generally align to this `1:2` ratio of `vCPU:Memory`.
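For example, a chart-wide block sized close to that ratio might look like the sketch below; the request values match the documented defaults above, while the memory limit is purely illustrative:

```yaml
resources:
  requests:
    cpu: 900m
    memory: 2G
  limits:
    memory: 4G   # illustrative; tune to your workload
```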
By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can use the Community Edition instead. Learn more about the differences between the two.

In order to use the Community Edition, set `image.repository` to `registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce`.
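In `values.yaml` form, that setting looks like:

```yaml
image:
  repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce
```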
External Services

This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances as the Webservice chart. The values of external services will be populated into a `ConfigMap` that is shared across all Sidekiq pods.
Redis

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `host` | String | | The hostname of the Redis server with the database to use. This can be omitted in lieu of `serviceName`. If using Redis Sentinels, the `host` attribute needs to be set to the cluster name as specified in the `sentinel.conf`. |
| `password.key` | String | | The `password.key` attribute for Redis defines the name of the key in the secret (below) that contains the password. |
| `password.secret` | String | | The `password.secret` attribute for Redis defines the name of the Kubernetes `Secret` to pull from. |
| `port` | Integer | `6379` | The port on which to connect to the Redis server. |
| `serviceName` | String | `redis` | The name of the `service` which is operating the Redis database. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Redis as a part of the overall GitLab chart. |
| `sentinels.[].host` | String | | The hostname of Redis Sentinel server for a Redis HA setup. |
| `sentinels.[].port` | Integer | `26379` | The port on which to connect to the Redis Sentinel server. |
The current Redis Sentinel support only works with Sentinels that have been deployed separately from the GitLab chart. As a result, the Redis deployment through the GitLab chart should be disabled with `redis.install=false`. The Secret containing the Redis password needs to be manually created before deploying the GitLab chart.
PostgreSQL

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `host` | String | | The hostname of the PostgreSQL server with the database to use. This can be omitted if `postgresql.install=true` (default non-production). |
| `serviceName` | String | | The name of the `service` which is operating the PostgreSQL database. If this is present, and `host` is not, the chart will template the hostname of the service in place of the `host` value. |
| `database` | String | `gitlabhq_production` | The name of the database to use on the PostgreSQL server. |
| `password.key` | String | | The `password.key` attribute for PostgreSQL defines the name of the key in the secret (below) that contains the password. |
| `password.secret` | String | | The `password.secret` attribute for PostgreSQL defines the name of the Kubernetes `Secret` to pull from. |
| `port` | Integer | `5432` | The port on which to connect to the PostgreSQL server. |
| `username` | String | `gitlab` | The username with which to authenticate to the database. |
| `preparedStatements` | Boolean | `false` | If prepared statements should be used when communicating with the PostgreSQL server. |
Gitaly

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `host` | String | | The hostname of the Gitaly server to use. This can be omitted in lieu of `serviceName`. |
| `serviceName` | String | `gitaly` | The name of the `service` which is operating the Gitaly server. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Gitaly as a part of the overall GitLab chart. |
| `port` | Integer | `8075` | The port on which to connect to the Gitaly server. |
| `authToken.key` | String | | The name of the key in the secret below that contains the authToken. |
| `authToken.secret` | String | | The name of the Kubernetes `Secret` to pull from. |
Metrics

By default, a Prometheus metrics exporter is enabled per pod. Metrics are only available when GitLab Prometheus metrics are enabled in the Admin area. The exporter exposes a `/metrics` endpoint on port `3807`. When metrics are enabled, annotations are added to each pod allowing a Prometheus server to discover and scrape the exposed metrics.

Chart-wide defaults

The following values will be used chart-wide, in the event that a value is not presented on a per-pod basis.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| `concurrency` | Integer | `25` | The number of tasks to process simultaneously. |
| `timeout` | Integer | `4` | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. |
| `memoryKiller.checkInterval` | Integer | `3` | Amount of time in seconds between memory checks |
| `memoryKiller.maxRss` | Integer | `2000000` | Maximum RSS before delayed shutdown triggered, expressed in kilobytes |
| `memoryKiller.graceTime` | Integer | `900` | Time to wait before a triggered shutdown, expressed in seconds |
| `memoryKiller.shutdownWait` | Integer | `30` | Amount of time after triggered shutdown for existing jobs to finish, expressed in seconds |
| `minReplicas` | Integer | `2` | Minimum number of replicas |
| `maxReplicas` | Integer | `10` | Maximum number of replicas |
| `maxUnavailable` | Integer | `1` | Limit of maximum number of Pods to be unavailable |

Detailed documentation of the Sidekiq memory killer is available in the Omnibus documentation.
Per-pod Settings

The `pods` declaration provides for the declaration of all attributes for a worker pod. These will be templated to `Deployment`s, with individual `ConfigMap`s for their Sidekiq instances.

The settings default to including a single pod that is set up to monitor all queues. Making changes to the pods section will overwrite the default pod with a different pod configuration. It will not add a new pod in addition to the default.
| Name | Type | Default | Description |
|------|------|---------|-------------|
| `concurrency` | Integer | | The number of tasks to process simultaneously. If not provided, it will be pulled from the chart-wide default. |
| `name` | String | | Used to name the `Deployment` and `ConfigMap` for this pod. It should be kept short, and should not be duplicated between any two entries. |
| `queues` | String | | See below. |
| `negateQueues` | String | | See below. |
| `queueSelector` | Boolean | `false` | Use the queue selector. |
| `timeout` | Integer | | The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. If not provided, it will be pulled from the chart-wide default. This value must be less than `terminationGracePeriodSeconds`. |
| `resources` | | | Each pod can present its own `resources` requirements, which will be added to the `Deployment` created for it, if present. These match the Kubernetes documentation. |
| `nodeSelector` | | | Each pod can be configured with a `nodeSelector` attribute, which will be added to the `Deployment` created for it, if present. These definitions match the Kubernetes documentation. |
| `memoryKiller.checkInterval` | Integer | `3` | Amount of time between memory checks |
| `memoryKiller.maxRss` | Integer | `2000000` | Overrides the maximum RSS for a given pod. |
| `memoryKiller.graceTime` | Integer | `900` | Overrides the time to wait before a triggered shutdown for a given Pod |
| `memoryKiller.shutdownWait` | Integer | `30` | Overrides the amount of time after triggered shutdown for existing jobs to finish for a given Pod |
| `minReplicas` | Integer | `2` | Minimum number of replicas |
| `maxReplicas` | Integer | `10` | Maximum number of replicas |
| `maxUnavailable` | Integer | `1` | Limit of maximum number of Pods to be unavailable |
| `podLabels` | Map | `{}` | Supplemental Pod labels. Will not be used for selectors. |
| `strategy` | | `{}` | Allows one to configure the update strategy utilized by the deployment |
| `extraVolumes` | String | | Configures extra volumes for the given pod. |
| `extraVolumeMounts` | String | | Configures extra volume mounts for the given pod. |
| `priorityClassName` | String | `""` | Allow configuring pods `priorityClassName`, this is used to control pod priority in case of eviction |
| `hpa.customMetrics` | Array | `[]` | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in `targetAverageUtilization`) |
| `hpa.cpu.targetType` | String | `AverageValue` | Overrides the autoscaling CPU target type, must be either `Utilization` or `AverageValue` |
| `hpa.cpu.targetAverageValue` | String | `350m` | Overrides the autoscaling CPU target value |
| `hpa.cpu.targetAverageUtilization` | Integer | | Overrides the autoscaling CPU target utilization |
| `hpa.memory.targetType` | String | | Overrides the autoscaling memory target type, must be either `Utilization` or `AverageValue` |
| `hpa.memory.targetAverageValue` | String | | Overrides the autoscaling memory target value |
| `hpa.memory.targetAverageUtilization` | Integer | | Overrides the autoscaling memory target utilization |
| `hpa.targetAverageValue` | String | `DEPRECATED` | Overrides the autoscaling CPU target value |
| `extraEnv` | Map | | List of extra environment variables to expose. The chart-wide value is merged into this, with values from the pod taking precedence |
| `extraEnvFrom` | Map | | List of extra environment variables from other data sources to expose |
| `terminationGracePeriodSeconds` | Integer | `30` | Optional duration in seconds the pod needs to terminate gracefully. |
queues

The `queues` value is a string containing a comma-separated list of queues to be processed. By default, it is not set, meaning that all queues will be processed.

The string should not contain spaces: `merge,post_receive,process_commit` will work, but `merge, post_receive, process_commit` will not.

Any queue to which jobs are added but are not represented as a part of at least one pod item will not be processed. For a complete list of all queues, see these files in the GitLab source:

- `app/workers/all_queues.yml`
- `ee/app/workers/all_queues.yml`
negateQueues

`negateQueues` is in the same format as `queues`, but it represents queues to be ignored rather than processed.

The string should not contain spaces: `merge,post_receive,process_commit` will work, but `merge, post_receive, process_commit` will not.

This is useful if you have a pod processing important queues, and another pod processing other queues: they can use the same list of queues, with one being in `queues` and the other being in `negateQueues`.

`negateQueues` should not be provided alongside `queues`, as it will have no effect.

Example pod entry
```yaml
pods:
  - name: immediate
    concurrency: 10
    minReplicas: 2    # defaults to inherited value
    maxReplicas: 10   # defaults to inherited value
    maxUnavailable: 5 # defaults to inherited value
    queues: merge,post_receive,process_commit
    extraVolumeMounts: |
      - name: example-volume-mount
        mountPath: /etc/example
    extraVolumes: |
      - name: example-volume
        persistentVolumeClaim:
          claimName: example-pvc
    resources:
      limits:
        cpu: 800m
        memory: 2Gi
    hpa:
      cpu:
        targetType: Value
        targetAverageValue: 350m
```
Configuring the networkpolicy

This section controls the NetworkPolicy. This configuration is optional and is used to limit Egress and Ingress of the Pods to specific endpoints.

| Name | Type | Default | Description |
|------|------|---------|-------------|
| `enabled` | Boolean | `false` | This setting enables the network policy |
| `ingress.enabled` | Boolean | `false` | When set to `true`, the `Ingress` network policy will be activated. This will block all Ingress connections unless rules are specified. |
| `ingress.rules` | Array | `[]` | Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| `egress.enabled` | Boolean | `false` | When set to `true`, the `Egress` network policy will be activated. This will block all egress connections unless rules are specified. |
| `egress.rules` | Array | `[]` | Rules for the egress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
Example Network Policy

The Sidekiq service requires Ingress connections for only the Prometheus exporter if enabled, and normally requires Egress connections to various places. This example adds the following network policy:

- All Ingress requests from the network on TCP `10.0.0.0/8` port 3807 are allowed for metrics exporting
- All Egress requests to the network on UDP `10.0.0.0/8` port 53 are allowed for DNS
- All Egress requests to the network on TCP `10.0.0.0/8` port 5432 are allowed for PostgreSQL
- All Egress requests to the network on TCP `10.0.0.0/8` port 6379 are allowed for Redis
- Other Egress requests to the local network on `10.0.0.0/8` are restricted
- Egress requests outside of the `10.0.0.0/8` are allowed

Note the example provided is only an example and may not be complete.

Note that the Sidekiq service requires outbound connectivity to the public internet for images on external object storage.
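A sketch of chart values matching the list above is shown next; the CIDR and ports are taken from that list, and the rule bodies are standard Kubernetes NetworkPolicy rule entries, so treat this as a starting point rather than a complete policy:

```yaml
networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      # Allow metrics scraping from the local network
      - from:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 3807
            protocol: TCP
  egress:
    enabled: true
    rules:
      # Allow DNS, PostgreSQL, and Redis within 10.0.0.0/8
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53
            protocol: UDP
          - port: 5432
            protocol: TCP
          - port: 6379
            protocol: TCP
      # Allow all other egress outside the local network
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 10.0.0.0/8
```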
The `spamcheck` sub-chart provides a deployment of Spamcheck, which is an anti-spam engine developed by GitLab originally to combat the rising amount of spam in GitLab.com, and later made public to be used in self-managed GitLab instances.

Requirements

This chart depends on access to the GitLab API.

Configuration

Enable Spamcheck

`spamcheck` is disabled by default. To enable it on your GitLab instance, set the Helm property `global.spamcheck.enabled` to `true`, for example:
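One way to do that is through `values.yaml`:

```yaml
global:
  spamcheck:
    enabled: true
```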
1. On the left sidebar, select **Settings > Reporting**.
1. Expand **Spam and Anti-bot Protection**.
1. Update the Spam Check settings:
   1. Check the **Enable Spam Check via external API endpoint** checkbox.
   1. For URL of the external Spam Check endpoint use `grpc://gitlab-spamcheck.default.svc:8001`, where `default` is replaced with the Kubernetes namespace where GitLab is deployed.
   1. Leave **Spam Check API key** blank.
1. Select **Save changes**.
Installation command line options
The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
|-----------|---------|-------------|
| `annotations` | `{}` | Pod annotations |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `deployment.livenessProbe.initialDelaySeconds` | `20` | Delay before liveness probe is initiated |
| `deployment.livenessProbe.periodSeconds` | `60` | How often to perform the liveness probe |
| `deployment.livenessProbe.timeoutSeconds` | `30` | When the liveness probe times out |
| `deployment.livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `deployment.livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `deployment.readinessProbe.initialDelaySeconds` | `0` | Delay before readiness probe is initiated |
| `deployment.readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `deployment.readinessProbe.timeoutSeconds` | `2` | When the readiness probe times out |
| `deployment.readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `deployment.readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| `deployment.strategy` | `{}` | Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used. |
| `hpa.behavior` | `{scaleDown: {stabilizationWindowSeconds: 300 }}` | Behavior contains the specifications for up- and downscaling behavior (requires `autoscaling/v2beta2` or higher) |
| `hpa.customMetrics` | `[]` | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in `targetAverageUtilization`) |
| `hpa.cpu.targetType` | `AverageValue` | Set the autoscaling CPU target type, must be either `Utilization` or `AverageValue` |
| `hpa.cpu.targetAverageValue` | `100m` | Set the autoscaling CPU target value |
| `hpa.cpu.targetAverageUtilization` | | Set the autoscaling CPU target utilization |
| `hpa.memory.targetType` | | Set the autoscaling memory target type, must be either `Utilization` or `AverageValue` |
`resources` allows you to configure the minimum and maximum amount of resources (memory and CPU) a Spamcheck pod can consume. For example:

```yaml
resources:
  requests:
    memory: 100M
    cpu: 100m
```
livenessProbe/readinessProbe

`deployment.livenessProbe` and `deployment.readinessProbe` provide a mechanism to help control the termination of Spamcheck Pods in certain scenarios, such as when a container is in a broken state.
The Toolbox Pod is used to execute periodic housekeeping tasks within the GitLab application. These tasks include backups, Sidekiq maintenance, and Rake tasks.

Configuration

The following configuration settings are the default settings provided by the Toolbox chart:
- `extraEnvFrom`: List of extra environment variables from other data sources to expose
Configuring backups

Information concerning configuring backups can be found in the backup and restore documentation. Additional information about the technical implementation of how the backups are performed can be found in the backup and restore architecture documentation.
Persistence configuration

The persistent stores for backups and restorations are configured separately. Please review the following considerations when configuring GitLab for backup and restore operations.

Backups use the `backups.cron.persistence.*` properties and restorations use the `persistence.*` properties. Further descriptions concerning the configuration of a persistence store will use just the final property key (e.g. `.enabled` or `.size`) and the appropriate prefix will need to be added.

The persistence stores are disabled by default, thus `.enabled` needs to be set to `true` for a backup or restoration of any appreciable size. In addition, either `.storageClass` needs to be specified for a PersistentVolume to be created by Kubernetes, or a PersistentVolume needs to be manually created. If `.storageClass` is specified as '-', then the PersistentVolume will be created using the default StorageClass as specified in the Kubernetes cluster.

If the PersistentVolume is created manually, then the volume can be specified using the `.volumeName` property or by using the selector `.matchLabels` / `.matchExpressions` properties.

In most cases the default value of `.accessMode` will provide adequate controls for only Toolbox accessing the PersistentVolumes. Please consult the documentation for the CSI driver installed in the Kubernetes cluster to ensure that the setting is correct.
Backup considerations

A backup operation needs an amount of disk space to hold the individual components that are being backed up before they are written to the backup object store. The amount of disk space depends on the following factors:

- Number of projects and the amount of data stored under each project
- Size of the PostgreSQL database (issues, MRs, etc.)
- Size of each object store backend

Once the rough size has been determined, the `backups.cron.persistence.size` property can be set so that backups can commence.
Restore considerations

During the restoration of a backup, the backup needs to be extracted to disk before the files are replaced on the running instance. The size of this restoration disk space is controlled by the `persistence.size` property. Be mindful that as the size of the GitLab installation grows, the size of the restoration disk space also needs to grow accordingly. In most cases the size of the restoration disk space should be the same size as the backup disk space.
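A sketch of both persistence blocks follows; the sizes and the storage class are illustrative and should be adjusted to your installation (when configured through the parent chart these values are typically nested under the Toolbox sub-chart's key):

```yaml
# Backup staging area
backups:
  cron:
    persistence:
      enabled: true
      storageClass: standard   # illustrative; '-' selects the cluster default StorageClass
      size: 50Gi
# Restore staging area
persistence:
  enabled: true
  storageClass: standard
  size: 50Gi
```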
Toolbox included tools

The Toolbox container contains useful GitLab tools such as the Rails console, Rake tasks, etc. These commands allow one to check the status of the database migrations, execute Rake tasks for administrative tasks, and interact with the Rails console:

```shell
# locate the Toolbox pod
kubectl get pods -l app=toolbox

# Launch a shell inside the pod
kubectl exec -it <Toolbox pod name> -- bash

# open Rails console
gitlab-rails console -e production
```