The
migrations
sub-chart provides a single migration Job that handles seeding/migrating the GitLab database. The chart runs using the GitLab Rails codebase.
After migrating, this Job also edits the application settings in the database to turn off writes to authorized keys file. In the charts we are only supporting use of the GitLab Authorized Keys API with the SSH
AuthorizedKeysCommand
instead of support for writing to an authorized keys file.
Requirements
This chart depends on Redis and PostgreSQL, either as part of the complete GitLab chart or provided as external services reachable from the Kubernetes cluster this chart is deployed onto.
Design Choices
The
migrations
chart creates a new migrations Job each time the chart is deployed. To prevent Job name collisions, we append the chart revision and a random alphanumeric value to the Job name each time it is created. The purpose of the random text is described further in this section.
For now we also have the jobs remain as objects in the cluster after they complete. This is so we can observe the migration logs. Currently this means these Jobs persist even after a
helm uninstall
. This is one of the reasons why we append random text to the Job name, so that future deployments using the same release name don’t cause conflicts. Once we have some form of log-shipping in place, we can revisit the persistence of these objects.
The container used in this chart has some additional optimizations that we are not currently using, mainly the ability to quickly skip running migrations if they are already up to date, without needing to boot the Rails application to check. This optimization requires us to persist the migration status, which we are not doing with this chart at the moment. In the future we will introduce storage support for the migration status to this chart.
Configuration
The
migrations
chart is configured in two parts: external services, and chart settings.
Installation command line options
The table below contains all the possible chart configurations that can be supplied to the
helm install
command using the
--set
flags.
Parameter
Description
Default
common.labels
Supplemental labels that are applied to all objects created by this chart.
By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you can instead use the Community Edition. Learn more about the difference between the two.
In order to use the Community Edition, set
image.repository
to
registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ce
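For example, a minimal hedged values sketch at the sub-chart scope (under the umbrella chart this key is typically nested under gitlab.migrations):

# Hedged sketch: switch the migrations image to the Community Edition.
image:
  repository: registry.gitlab.com/gitlab-org/build/cng/gitlab-toolbox-ce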
host
The hostname of the Redis server with the database to use. This can be omitted in lieu of
serviceName
. If using Redis Sentinels, the
host
attribute needs to be set to the cluster name as specified in the
sentinel.conf
.
serviceName
The name of the
service
which is operating the Redis database. If this is present, and
host
is not, the chart will template the hostname of the service (and current
.Release.Name
) in place of the
host
value. This is convenient when using Redis as a part of the overall GitLab chart. This will default to
redis
port
The port on which to connect to the Redis server. Defaults to
6379
.
password
The
password
attribute for Redis has two sub keys:
secret
defines the name of the Kubernetes
Secret
to pull from
key
defines the name of the key in the above secret that contains the password.
sentinels
The
sentinels
attribute allows for a connection to a Redis HA cluster.
The sub keys describe each Sentinel connection.
host
defines the hostname for the Sentinel service
port
defines the port number to reach the Sentinel service, defaults to
26379
Note:
The current Redis Sentinel support only supports Sentinels that have
been deployed separately from the GitLab chart. As a result, the Redis
deployment through the GitLab chart should be disabled with
redis.install=false
.
The Secret containing the Redis password will need to be manually created
before deploying the GitLab chart.
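For illustration, a hedged values sketch for pointing this chart at an external Redis deployment with Sentinels; the hostnames and Secret name below are placeholders, and the bundled Redis should be disabled with redis.install=false as noted above:

redis:
  host: gitlab-redis            # cluster (master) name from sentinel.conf
  password:
    secret: gitlab-redis-secret # must be created before deploying
    key: redis-password
  sentinels:
    - host: sentinel1.example.com
      port: 26379
    - host: sentinel2.example.com
      port: 26379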
host
The hostname of the PostgreSQL server with the database to use. This can be omitted if
postgresql.install=true
(default non-production).
serviceName
The name of the service which is operating the PostgreSQL database. If this is present, and
host
is not, the chart will template the hostname of the service in place of the
host
value.
port
The port on which to connect to the PostgreSQL server. Defaults to
5432
.
database
The name of the database to use on the PostgreSQL server. This defaults to
gitlabhq_production
.
preparedStatements
If prepared statements should be used when communicating with the PostgreSQL server. Defaults to
false
.
username
The username with which to authenticate to the database. This defaults to
gitlab
password
The
password
attribute for PostgreSQL has two sub keys:
secret
defines the name of the Kubernetes
Secret
to pull from
key
defines the name of the key in the above secret that contains the password.
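For example, a hedged sketch of the external PostgreSQL settings described above; the hostname and Secret name are placeholders:

psql:
  host: postgresql.example.com
  port: 5432
  database: gitlabhq_production
  username: gitlab
  preparedStatements: false
  password:
    secret: gitlab-postgres-password
    key: psql-password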
Using the Praefect chart (alpha)
The Praefect chart is still under development. The alpha version is not yet suitable for production use. Upgrades may require significant manual intervention.
See our Praefect GA release Epic for more information.
The Praefect chart is used to manage a Gitaly cluster inside a GitLab installation deployed with the Helm charts.
Known limitations and issues
The database has to be manually created.
The cluster size is fixed: Gitaly Cluster does not currently support autoscaling.
Using a Praefect instance in the cluster to manage Gitaly instances outside the cluster is not supported.
Upgrades to version 4.8 of the chart (GitLab 13.8) will encounter an issue that makes it
appear
that repository data is lost. Data is not lost, but requires manual intervention.
Requirements
This chart consumes the Gitaly chart. Settings from
global.gitaly
are used to configure the instances created by this chart. Documentation of these settings can be found in Gitaly chart documentation.
Important
:
global.gitaly.tls
is independent of
global.praefect.tls
. They are configured separately.
By default, this chart will create 3 Gitaly Replicas.
Configuration
The chart is disabled by default. To enable it as part of a chart deploy set
global.praefect.enabled=true
.
Replicas
The default number of replicas to deploy is 3. This can be changed by setting
global.praefect.virtualStorages[].gitalyReplicas
with the desired number of replicas. For example:
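# Hedged sketch: the storage name and replica count are illustrative.
global:
  praefect:
    enabled: true
    virtualStorages:
      - name: default
        gitalyReplicas: 4
        maxUnavailable: 1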
Group-level wikis cannot be moved using the API at this time.
When migrating from standalone Gitaly instances to a Praefect setup,
global.praefect.replaceInternalGitaly
can be set to
false
.
This ensures that the existing Gitaly instances are preserved while the new Praefect-managed Gitaly instances are created.
When migrating to Praefect, none of Praefect’s virtual storages can be named
default
.
This is because there must be at least one storage named
default
at all times,
therefore the name is already taken by the non-Praefect configuration.
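A hedged sketch of such a migration configuration (the virtual storage name and replica counts are illustrative):

global:
  praefect:
    enabled: true
    replaceInternalGitaly: false
    virtualStorages:
      - name: virtualStorage2
        gitalyReplicas: 5
        maxUnavailable: 1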
The instructions to migrate to Gitaly Cluster
can then be followed to move data from the
default
storage to
virtualStorage2
. If additional storages
were defined under
global.gitaly.internal.names
, be sure to migrate repositories from those storages as well.
After the repositories have been migrated to
virtualStorage2
,
replaceInternalGitaly
can be set back to
true
if a storage named
default
is added in the Praefect configuration.
The instructions to migrate to Gitaly Cluster
can be followed again to move data from
virtualStorage2
to the newly-added
default
storage if desired.
Finally, see the repository storage paths documentation
to configure where new repositories are stored.
Creating the database
Praefect uses its own database to track its state. This has to be manually created in order for Praefect to be functional.
These instructions assume you are using the bundled PostgreSQL server. If you are using your own server,
there will be some variation in how you connect.
Log into your database instance:
kubectl exec -it $(kubectl get pods -l app=postgresql -o custom-columns=NAME:.metadata.name --no-headers) -- bash
By default, the
shared-secrets
Job will generate a secret for you.
Fetch the password:
kubectl get secret RELEASE_NAME-praefect-dbsecret -o jsonpath="{.data.secret}" | base64 --decode
Set the password in the
psql
prompt:
\password praefect
Create the database:
CREATE DATABASE praefect WITH OWNER praefect;
Running Praefect over TLS
Praefect supports communicating with client and Gitaly nodes over TLS. This is
controlled by the settings
global.praefect.tls.enabled
and
global.praefect.tls.secretName
.
To run Praefect over TLS follow these steps:
The Helm chart expects a certificate to be provided for communicating over
TLS with Praefect. This certificate should apply to all the Praefect nodes that
are present. Hence all hostnames of each of these nodes should be added as a
Subject Alternate Name (SAN) to the certificate or alternatively, you can use wildcards.
To know the hostnames to use, check the
/srv/gitlab/config/gitlab.yml
file in the Toolbox Pod and look at the various
gitaly_address
fields specified under the
repositories.storages
key within it.
A basic script for generating custom signed certificates for internal Praefect Pods
can be found in this repository.
Users can use or refer to that script to generate certificates with proper SAN attributes.
Create a TLS Secret using the certificate created.
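Then enable TLS for Praefect by referencing that Secret; a hedged sketch, where praefect-tls is a placeholder Secret name:

global:
  praefect:
    enabled: true
    tls:
      enabled: true
      secretName: praefect-tls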
The table below contains all the possible chart configurations that can be supplied to
the
helm install
command using the
--set
flags.
Parameter
Default
Description
common.labels
{}
Supplemental labels that are applied to all objects created by this chart.
failover.enabled
true
Whether Praefect should perform failover on node failure
failover.readonlyAfter
false
Whether the nodes should be in read-only mode after failover
autoMigrate
true
Automatically run migrations on startup
electionStrategy
sql
See election strategy
image.repository
registry.gitlab.com/gitlab-org/build/cng/gitaly
The default image repository to use. Praefect is bundled as part of the Gitaly image
podLabels
{}
Supplemental Pod labels. Will not be used for selectors.
ntpHost
pool.ntp.org
Configure the NTP server Praefect should ask for the current time.
service.name
praefect
The name of the service to create
service.type
ClusterIP
The type of service to create
service.internalPort
8075
The internal port number that the Praefect pod will be listening on
service.externalPort
8075
The port number the Praefect service should expose in the cluster
init.resources
init.image
extraEnvFrom
List of extra environment variables from other data sources to expose
logging.level
Log level
logging.format
json
Log format
logging.sentryDsn
Sentry DSN URL - Exceptions from Go server
logging.rubySentryDsn
Sentry DSN URL - Exceptions from
gitaly-ruby
logging.sentryEnvironment
Sentry environment to be used for logging
metrics.enabled
true
If a metrics endpoint should be made available for scraping
metrics.port
9236
Metrics endpoint port
metrics.separate_database_metrics
true
If true, metrics scrapes will not perform database queries; setting to false may cause performance problems
metrics.path
/metrics
Metrics endpoint path
metrics.serviceMonitor.enabled
false
If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the
prometheus.io
scrape annotations
metrics.serviceMonitor.additionalLabels
{}
Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig
{}
Additional endpoint configuration for the ServiceMonitor
securityContext.runAsUser
1000
securityContext.fsGroup
1000
serviceLabels
{}
Supplemental service labels
statefulset.strategy
{}
Allows one to configure the update strategy utilized by the statefulset
The
sidekiq
sub-chart provides configurable deployment of Sidekiq workers, explicitly
designed to provide separation of queues across multiple
Deployment
s with individual
scalability and configuration.
While this chart provides a default
pods:
declaration, if you provide an empty definition,
you will have
no
workers.
Requirements
This chart depends on access to Redis, PostgreSQL, and Gitaly services, either as
part of the complete GitLab chart or provided as external services reachable from
the Kubernetes cluster this chart is deployed onto.
Design Choices
This chart creates multiple
Deployment
s and associated
ConfigMap
s. It was decided
that it would be clearer to make use of
ConfigMap
behaviours instead of using
environment
attributes or additional arguments to the
command
for the containers, in order to
avoid any concerns about command length. This choice results in a large number of
ConfigMap
s, but provides very clear definitions of what each pod should be doing.
Configuration
The
sidekiq
chart is configured in three parts: chart-wide external services,
chart-wide defaults, and per-pod definitions.
Installation command line options
The table below contains all the possible chart configurations that can be supplied
to the
helm install
command using the
--set
flags:
Parameter
Default
Description
annotations
Pod annotations
podLabels
Supplemental Pod labels. Will not be used for selectors.
common.labels
Supplemental labels that are applied to all objects created by this chart.
concurrency
20
Sidekiq default concurrency
deployment.strategy
{}
Allows one to configure the update strategy utilized by the deployment
deployment.terminationGracePeriodSeconds
30
Optional duration in seconds the pod needs to terminate gracefully.
enabled
true
Sidekiq enabled flag
extraContainers
List of extra containers to include
extraInitContainers
List of extra init containers to include
extraVolumeMounts
String template of extra volume mounts to configure
extraVolumes
String template of extra volumes to configure
extraEnv
List of extra environment variables to expose
extraEnvFrom
List of extra environment variables from other data sources to expose
gitaly.serviceName
gitaly
Gitaly service name
health_checks.port
3808
Health check server port
hpa.behavior
{scaleDown: {stabilizationWindowSeconds: 300 }}
Behavior contains the specifications for up- and downscaling behavior (requires
autoscaling/v2beta2
or higher)
hpa.customMetrics
[]
Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in
targetAverageUtilization
)
hpa.cpu.targetType
AverageValue
Set the autoscaling CPU target type, must be either
Utilization
or
AverageValue
hpa.cpu.targetAverageValue
350m
Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization
Set the autoscaling CPU target utilization
hpa.memory.targetType
Set the autoscaling memory target type, must be either
Utilization
or
AverageValue
If a metrics endpoint should be made available for scraping
metrics.port
3807
Metrics endpoint port
metrics.path
/metrics
Metrics endpoint path
metrics.log_enabled
false
Enables or disables metrics server logs written to
sidekiq_exporter.log
metrics.podMonitor.enabled
false
If a PodMonitor should be created to enable Prometheus Operator to manage the metrics scraping
metrics.podMonitor.additionalLabels
{}
Additional labels to add to the PodMonitor
metrics.podMonitor.endpointConfig
{}
Additional endpoint configuration for the PodMonitor
metrics.annotations
DEPRECATED
Set explicit metrics annotations. Replaced by template content.
metrics.tls.enabled
false
TLS enabled for the metrics/sidekiq_exporter endpoint
metrics.tls.secretName
{Release.Name}-sidekiq-metrics-tls
Secret for the metrics/sidekiq_exporter endpoint TLS cert and key
psql.password.key
psql-password
key to psql password in psql secret
psql.password.secret
gitlab-postgres
psql password secret
psql.port
Set PostgreSQL server port. Takes precedence over
global.psql.port
redis.serviceName
redis
Redis service name
resources.requests.cpu
900m
Sidekiq minimum needed CPU
resources.requests.memory
2G
Sidekiq minimum needed memory
resources.limits.memory
Sidekiq maximum allowed memory
timeout
25
Sidekiq job timeout
tolerations
[]
Toleration labels for pod assignment
memoryKiller.daemonMode
true
If
false
, uses the legacy memory killer mode
memoryKiller.maxRss
2000000
Maximum RSS before delayed shutdown triggered expressed in kilobytes
memoryKiller.graceTime
900
Time to wait before a triggered shutdown expressed in seconds
memoryKiller.shutdownWait
30
Amount of time after triggered shutdown for existing jobs to finish expressed in seconds
memoryKiller.hardLimitRss
Maximum RSS before immediate shutdown triggered, expressed in kilobytes, in daemon mode
memoryKiller.checkInterval
3
Amount of time between memory checks
livenessProbe.initialDelaySeconds
20
Delay before liveness probe is initiated
livenessProbe.periodSeconds
60
How often to perform the liveness probe
livenessProbe.timeoutSeconds
30
When the liveness probe times out
livenessProbe.successThreshold
1
Minimum consecutive successes for the liveness probe to be considered successful after having failed
livenessProbe.failureThreshold
3
Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
readinessProbe.initialDelaySeconds
0
Delay before readiness probe is initiated
readinessProbe.periodSeconds
10
How often to perform the readiness probe
readinessProbe.timeoutSeconds
2
When the readiness probe times out
readinessProbe.successThreshold
1
Minimum consecutive successes for the readiness probe to be considered successful after having failed
readinessProbe.failureThreshold
3
Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
securityContext.fsGroup
1000
Group ID under which the pod should be started
securityContext.runAsUser
1000
User ID under which the pod should be started
priorityClassName
""
Allow configuring pods
priorityClassName
, this is used to control pod priority in case of eviction
Chart configuration examples
resources
resources
allows you to configure the minimum and maximum amount of resources (memory and CPU) a Sidekiq
pod can consume.
Sidekiq pod workloads vary greatly between deployments. Generally speaking, it is understood that each Sidekiq
process consumes approximately 1 vCPU and 2 GB of memory. Vertical scaling should generally align to this
1:2
ratio of
vCPU:Memory
.
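For example, a hedged sketch that mirrors the chart's request defaults and adds an illustrative memory limit:

resources:
  requests:
    cpu: 900m
    memory: 2G
  limits:
    memory: 4G   # illustrative; size to roughly 2 GB per Sidekiq process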
By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you
can use the Community Edition instead. Learn more about the
differences between the two.
In order to use the Community Edition, set
image.repository
to
registry.gitlab.com/gitlab-org/build/cng/gitlab-sidekiq-ce
.
External Services
This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances
as the Webservice chart. The values of external services will be populated into a
ConfigMap
that is shared across all Sidekiq pods.
host
String
The hostname of the Redis server with the database to use. This can be omitted in lieu of
serviceName
. If using Redis Sentinels, the
host
attribute needs to be set to the cluster name as specified in the
sentinel.conf
.
password.key
String
The
password.key
attribute for Redis defines the name of the key in the secret (below) that contains the password.
password.secret
String
The
password.secret
attribute for Redis defines the name of the Kubernetes
Secret
to pull from.
port
Integer
6379
The port on which to connect to the Redis server.
serviceName
String
redis
The name of the
service
which is operating the Redis database. If this is present, and
host
is not, the chart will template the hostname of the service (and current
.Release.Name
) in place of the
host
value. This is convenient when using Redis as a part of the overall GitLab chart.
sentinels.[].host
String
The hostname of Redis Sentinel server for a Redis HA setup.
sentinels.[].port
Integer
26379
The port on which to connect to the Redis Sentinel server.
The current Redis Sentinel support only supports Sentinels that have
been deployed separately from the GitLab chart. As a result, the Redis
deployment through the GitLab chart should be disabled with
redis.install=false
.
The Secret containing the Redis password needs to be manually created
before deploying the GitLab chart.
host
String
The hostname of the PostgreSQL server with the database to use. This can be omitted if
postgresql.install=true
(default non-production).
serviceName
String
The name of the
service
which is operating the PostgreSQL database. If this is present, and
host
is not, the chart will template the hostname of the service in place of the
host
value.
database
String
gitlabhq_production
The name of the database to use on the PostgreSQL server.
password.key
String
The
password.key
attribute for PostgreSQL defines the name of the key in the secret (below) that contains the password.
password.secret
String
The
password.secret
attribute for PostgreSQL defines the name of the Kubernetes
Secret
to pull from.
port
Integer
5432
The port on which to connect to the PostgreSQL server.
username
String
gitlab
The username with which to authenticate to the database.
preparedStatements
Boolean
false
If prepared statements should be used when communicating with the PostgreSQL server.
host
String
The hostname of the Gitaly server to use. This can be omitted in lieu of
serviceName
.
serviceName
String
gitaly
The name of the
service
which is operating the Gitaly server. If this is present, and
host
is not, the chart will template the hostname of the service (and current
.Release.Name
) in place of the
host
value. This is convenient when using Gitaly as a part of the overall GitLab chart.
port
Integer
8075
The port on which to connect to the Gitaly server.
authToken.key
String
The name of the key in the secret below that contains the authToken.
authToken.secret
String
The name of the Kubernetes
Secret
to pull from.
Metrics
By default, a Prometheus metrics exporter is enabled per pod. Metrics are only available
when GitLab Prometheus metrics
are enabled in the Admin area. The exporter exposes a
/metrics
endpoint on port
3807
. When metrics are enabled, annotations are added to each pod allowing a Prometheus
server to discover and scrape the exposed metrics.
Chart-wide defaults
The following values will be used chart-wide, in the event that a value is not presented
on a per-pod basis.
Name
Type
Default
Description
concurrency
Integer
25
The number of tasks to process simultaneously.
timeout
Integer
4
The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes.
memoryKiller.checkInterval
Integer
3
Amount of time in seconds between memory checks
memoryKiller.maxRss
Integer
2000000
Maximum RSS before delayed shutdown triggered expressed in kilobytes
memoryKiller.graceTime
Integer
900
Time to wait before a triggered shutdown expressed in seconds
memoryKiller.shutdownWait
Integer
30
Amount of time after triggered shutdown for existing jobs to finish expressed in seconds
minReplicas
Integer
2
Minimum number of replicas
maxReplicas
Integer
10
Maximum number of replicas
maxUnavailable
Integer
1
Limit of maximum number of Pods to be unavailable
Detailed documentation of the Sidekiq memory killer is available
in the Omnibus documentation.
Per-pod Settings
The
pods
declaration provides for the declaration of all attributes for a worker
pod. These will be templated to
Deployment
s, with individual
ConfigMap
s for their
Sidekiq instances.
The settings default to including a single pod that is set up to monitor
all queues. Making changes to the pods section will
overwrite the default pod
with
a different pod configuration. It will not add a new pod in addition to the default.
Name
Type
Default
Description
concurrency
Integer
The number of tasks to process simultaneously. If not provided, it will be pulled from the chart-wide default.
name
String
Used to name the
Deployment
and
ConfigMap
for this pod. It should be kept short, and should not be duplicated between any two entries.
queues
String
See below.
negateQueues
String
See below.
queueSelector
Boolean
false
Use the queue selector.
timeout
Integer
The Sidekiq shutdown timeout. The number of seconds after Sidekiq gets the TERM signal before it forcefully shuts down its processes. If not provided, it will be pulled from the chart-wide default. This value
must
be less than
terminationGracePeriodSeconds
.
resources
Each pod can present its own
resources
requirements, which will be added to the
Deployment
created for it, if present. These match the Kubernetes documentation.
nodeSelector
Each pod can be configured with a
nodeSelector
attribute, which will be added to the
Deployment
created for it, if present. These definitions match the Kubernetes documentation.
memoryKiller.checkInterval
Integer
3
Amount of time between memory checks
memoryKiller.maxRss
Integer
2000000
Overrides the maximum RSS for a given pod.
memoryKiller.graceTime
Integer
900
Overrides the time to wait before a triggered shutdown for a given Pod
memoryKiller.shutdownWait
Integer
30
Overrides the amount of time after triggered shutdown for existing jobs to finish for a given Pod
minReplicas
Integer
2
Minimum number of replicas
maxReplicas
Integer
10
Maximum number of replicas
maxUnavailable
Integer
1
Limit of maximum number of Pods to be unavailable
podLabels
Map
{}
Supplemental Pod labels. Will not be used for selectors.
strategy
{}
Allows one to configure the update strategy utilized by the deployment
extraVolumes
String
Configures extra volumes for the given pod.
extraVolumeMounts
String
Configures extra volume mounts for the given pod.
priorityClassName
String
""
Allow configuring pods
priorityClassName
, this is used to control pod priority in case of eviction
hpa.customMetrics
Array
[]
Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in
targetAverageUtilization
)
hpa.cpu.targetType
String
AverageValue
Overrides the autoscaling CPU target type, must be either
Utilization
or
AverageValue
hpa.cpu.targetAverageValue
String
350m
Overrides the autoscaling CPU target value
hpa.cpu.targetAverageUtilization
Integer
Overrides the autoscaling CPU target utilization
hpa.memory.targetType
String
Overrides the autoscaling memory target type, must be either
Utilization
or
AverageValue
hpa.memory.targetAverageValue
String
Overrides the autoscaling memory target value
hpa.memory.targetAverageUtilization
Integer
Overrides the autoscaling memory target utilization
hpa.targetAverageValue
String
DEPRECATED
Overrides the autoscaling CPU target value
extraEnv
Map
List of extra environment variables to expose. The chart-wide value is merged into this, with values from the pod taking precedence
extraEnvFrom
Map
List of extra environment variables from other data sources to expose
terminationGracePeriodSeconds
Integer
30
Optional duration in seconds the pod needs to terminate gracefully.
queues
The
queues
value is a string containing a comma-separated list of queues to be
processed. By default, it is not set, meaning that all queues will be processed.
The string should not contain spaces:
merge,post_receive,process_commit
will
work, but
merge, post_receive, process_commit
will not.
Any queue to which jobs are added but that is not represented as a part of at least
one pod item
will not be processed
. For a complete list of all queues, see
these files in the GitLab source:
app/workers/all_queues.yml
ee/app/workers/all_queues.yml
negateQueues
negateQueues
is in the same format as
queues
, but it represents
queues to be ignored rather than processed.
The string should not contain spaces:
merge,post_receive,process_commit
will
work, but
merge, post_receive, process_commit
will not.
This is useful if you have a pod processing important queues, and another pod
processing other queues: they can use the same list of queues, with one being in
queues
and the other being in
negateQueues
.
negateQueues
should not
be provided alongside
queues
, as it will have no effect.
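For example, a hedged sketch of two pods sharing one queue list, one processing it and the other ignoring it (pod names and queues are illustrative):

pods:
  - name: important
    queues: merge,post_receive,process_commit
  - name: catchall
    negateQueues: merge,post_receive,process_commit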
Example
pod
entry
pods:
  - name: immediate
    concurrency: 10
    minReplicas: 2    # defaults to inherited value
    maxReplicas: 10   # defaults to inherited value
    maxUnavailable: 5 # defaults to inherited value
    queues: merge,post_receive,process_commit
    extraVolumeMounts: |
      - name: example-volume-mount
        mountPath: /etc/example
    extraVolumes: |
      - name: example-volume
        persistentVolumeClaim:
          claimName: example-pvc
    resources:
      limits:
        cpu: 800m
        memory: 2Gi
    hpa:
      cpu:
        targetType: Value
        targetAverageValue: 350m
Configuring the
networkpolicy
This section controls the
NetworkPolicy.
This configuration is optional and is used to limit Egress and Ingress of the
Pods to specific endpoints.
Name
Type
Default
Description
enabled
Boolean
false
This setting enables the network policy
ingress.enabled
Boolean
false
When set to
true
, the
Ingress
network policy will be activated. This will block all Ingress connections unless rules are specified.
ingress.rules
Array
[]
Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
egress.enabled
Boolean
false
When set to
true
, the
Egress
network policy will be activated. This will block all egress connections unless rules are specified.
egress.rules
Array
[]
Rules for the Egress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
Example Network Policy
The Sidekiq service requires Ingress connections for only the Prometheus
exporter if enabled, and normally requires Egress connections to various
places. This example adds the following network policy:
All Ingress requests from the network on TCP
10.0.0.0/8
port 3807 are allowed for metrics exporting
All Egress requests to the network on UDP
10.0.0.0/8
port 53 are allowed for DNS
All Egress requests to the network on TCP
10.0.0.0/8
port 5432 are allowed for PostgreSQL
All Egress requests to the network on TCP
10.0.0.0/8
port 6379 are allowed for Redis
Other Egress requests to the local network on
10.0.0.0/8
are restricted
Egress requests outside of the
10.0.0.0/8
are allowed
Note the example provided is only an example and may not be complete
Note that the Sidekiq service requires outbound connectivity to the public
internet for images on external object storage
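A hedged sketch of chart values implementing the policy described above (the CIDRs and ports mirror the example; adjust them for your network):

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 3807        # metrics exporter
  egress:
    enabled: true
    rules:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53          # DNS
            protocol: UDP
          - port: 5432        # PostgreSQL
            protocol: TCP
          - port: 6379        # Redis
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 10.0.0.0/8  # restrict other egress to the local network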
The
spamcheck
sub-chart provides a deployment of Spamcheck, an anti-spam engine developed by GitLab, originally to combat the rising amount of spam in GitLab.com and later made public for use in self-managed GitLab instances.
Requirements
This chart depends on access to the GitLab API.
Configuration
Enable Spamcheck
spamcheck
is disabled by default. To enable it on your GitLab instance, set the Helm property
global.spamcheck.enabled
to
true
, for example:
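# Hedged sketch: enable the Spamcheck sub-chart via the umbrella chart values.
global:
  spamcheck:
    enabled: true

Once deployed, configure GitLab to use the Spamcheck endpoint in the Admin Area: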
On the left sidebar, select
Settings
>
Reporting
.
Expand
Spam and Anti-bot Protection
.
Update the Spam Check settings:
Check the
Enable Spam Check via external API endpoint
checkbox
For URL of the external Spam Check endpoint use
grpc://gitlab-spamcheck.default.svc:8001
, where
default
is replaced with the Kubernetes namespace where GitLab is deployed.
Leave
Spam Check API key
blank.
Select
Save changes
.
Installation command line options
The table below contains all the possible chart configurations that can be supplied to the
helm install
command using the
--set
flags.
Parameter
Default
Description
annotations
{}
Pod annotations
common.labels
{}
Supplemental labels that are applied to all objects created by this chart.
deployment.livenessProbe.initialDelaySeconds
20
Delay before liveness probe is initiated
deployment.livenessProbe.periodSeconds
60
How often to perform the liveness probe
deployment.livenessProbe.timeoutSeconds
30
When the liveness probe times out
deployment.livenessProbe.successThreshold
1
Minimum consecutive successes for the liveness probe to be considered successful after having failed
deployment.livenessProbe.failureThreshold
3
Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
deployment.readinessProbe.initialDelaySeconds
0
Delay before readiness probe is initiated
deployment.readinessProbe.periodSeconds
10
How often to perform the readiness probe
deployment.readinessProbe.timeoutSeconds
2
When the readiness probe times out
deployment.readinessProbe.successThreshold
1
Minimum consecutive successes for the readiness probe to be considered successful after having failed
deployment.readinessProbe.failureThreshold
3
Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
deployment.strategy
{}
Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used.
hpa.behavior
{scaleDown: {stabilizationWindowSeconds: 300 }}
Behavior contains the specifications for up- and downscaling behavior (requires
autoscaling/v2beta2
or higher)
hpa.customMetrics
[]
Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in
targetAverageUtilization
)
hpa.cpu.targetType
AverageValue
Set the autoscaling CPU target type, must be either
Utilization
or
AverageValue
hpa.cpu.targetAverageValue
100m
Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization
Set the autoscaling CPU target utilization
hpa.memory.targetType
Set the autoscaling memory target type, must be either
Utilization
or
AverageValue
resources
allows you to configure the minimum and maximum amount of resources (memory and CPU) a Spamcheck pod can consume.
For example:
resources:
  requests:
    memory: 100M
    cpu: 100m
livenessProbe/readinessProbe
deployment.livenessProbe
and
deployment.readinessProbe
provide a mechanism to help control the termination of Spamcheck Pods in certain scenarios,
such as when a container is in a broken state.
The Toolbox Pod is used to execute periodic housekeeping tasks within
the GitLab application. These tasks include backups, Sidekiq maintenance,
and Rake tasks.
Configuration
The following configuration settings are the default settings provided by the
Toolbox chart:
extraEnvFrom
List of extra environment variables from other data sources to expose
Configuring backups
Information concerning configuring backups can be found in the
backup and restore documentation. Additional
information about the technical implementation of how the backups are
performed can be found in the
backup and restore architecture documentation.
Persistence configuration
The persistent stores for backups and restorations are configured separately.
Please review the following considerations when configuring GitLab for
backup and restore operations.
Backups use the
backups.cron.persistence.*
properties and restorations
use the
persistence.*
properties. Further descriptions concerning the
configuration of a persistence store will use just the final property key
(e.g.
.enabled
or
.size
) and the appropriate prefix will need to be
added.
The persistence stores are disabled by default, thus
.enabled
needs to
be set to
true
for a backup or restoration of any appreciable size.
In addition, either
.storageClass
needs to be specified for a PersistentVolume
to be created by Kubernetes or a PersistentVolume needs to be manually created.
If
.storageClass
is specified as ‘-‘, then the PersistentVolume will be
created using the default StorageClass
as specified in the Kubernetes cluster.
If the PersistentVolume is created manually, then the volume can be specified
using the
.volumeName
property or by using the selector
.matchLabels
/
.matchExpressions
properties.
In most cases the default value of
.accessMode
will provide adequate
controls for only Toolbox accessing the PersistentVolumes. Please consult
the documentation for the CSI driver installed in the Kubernetes cluster to
ensure that the setting is correct.
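For example, a hedged sketch enabling both persistence stores at the sub-chart scope (sizes are illustrative; under the umbrella chart these keys are typically nested under gitlab.toolbox):

persistence:
  enabled: true
  size: 50Gi
  storageClass: '-'   # '-' uses the cluster's default StorageClass, as described above
backups:
  cron:
    persistence:
      enabled: true
      size: 50Gi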
Backup considerations
A backup operation needs an amount of disk space to hold the individual
components that are being backed up before they are written to the backup
object store. The amount of disk space depends on the following factors:
Number of projects and the amount of data stored under each project
Size of the PostgreSQL database (issues, MRs, etc.)
Size of each object store backend
Once the rough size has been determined, the
backups.cron.persistence.size
property can be set so that backups can commence.
Restore considerations
During the restoration of a backup, the backup needs to be extracted to disk
before the files are replaced on the running instance. The size of this
restoration disk space is controlled by the
persistence.size
property. Be
mindful that as the size of the GitLab installation grows the size of the
restoration disk space also needs to grow accordingly. In most cases the
size of the restoration disk space should be the same size as the backup
disk space.
Toolbox included tools
The Toolbox container contains useful GitLab tools such as Rails console,
Rake tasks, etc. These commands allow one to check the status of the database
migrations, execute Rake tasks for administrative tasks, and interact with
the Rails console:
# locate the Toolbox pod
kubectl get pods -l app=toolbox

# launch a shell inside the pod
kubectl exec -it <Toolbox pod name> -- bash

# open the Rails console
gitlab-rails console -e production
The
webservice
sub-chart provides the GitLab Rails webserver with two Webservice workers
per pod, which is the minimum necessary for a single pod to be able to serve any web request in GitLab.
The pods of this chart make use of two containers:
gitlab-workhorse
and
webservice
.
GitLab Workhorse listens on
port
8181
, and should
always
be the destination for inbound traffic to the pod.
The
webservice
houses the GitLab Rails codebase,
listens on
8080
, and is accessible for metrics collection purposes.
webservice
should never receive normal traffic directly.
Requirements
This chart depends on Redis, PostgreSQL, Gitaly, and Registry services, either as
part of the complete GitLab chart or provided as external services reachable from
the Kubernetes cluster this chart is deployed onto.
Configuration
The
webservice
chart is configured as follows: Global settings,
Deployments settings, Ingress settings, External services, and
Chart settings.
Installation command line options
The table below contains all the possible chart configurations that can be supplied
to the
helm install
command using the
--set
flags.
Parameter
Default
Description
annotations
Pod annotations
podLabels
Supplemental Pod labels. Will not be used for selectors.
common.labels
Supplemental labels that are applied to all objects created by this chart.
deployment.terminationGracePeriodSeconds
30
Seconds that Kubernetes will wait for a pod to exit, note this must be longer than
shutdown.blackoutSeconds
deployment.livenessProbe.initialDelaySeconds
20
Delay before liveness probe is initiated
deployment.livenessProbe.periodSeconds
60
How often to perform the liveness probe
deployment.livenessProbe.timeoutSeconds
30
When the liveness probe times out
deployment.livenessProbe.successThreshold
1
Minimum consecutive successes for the liveness probe to be considered successful after having failed
deployment.livenessProbe.failureThreshold
3
Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
deployment.readinessProbe.initialDelaySeconds
0
Delay before readiness probe is initiated
deployment.readinessProbe.periodSeconds
10
How often to perform the readiness probe
deployment.readinessProbe.timeoutSeconds
2
When the readiness probe times out
deployment.readinessProbe.successThreshold
1
Minimum consecutive successes for the readiness probe to be considered successful after having failed
deployment.readinessProbe.failureThreshold
3
Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
deployment.strategy
{}
Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used.
enabled
true
Webservice enabled flag
extraContainers
List of extra containers to include
extraInitContainers
List of extra init containers to include
extras.google_analytics_id
nil
Google Analytics ID for frontend
extraVolumeMounts
List of extra volumes mounts to do
extraVolumes
List of extra volumes to create
extraEnv
List of extra environment variables to expose
extraEnvFrom
List of extra environment variables from other data sources to expose
Behavior contains the specifications for up- and downscaling behavior (requires
autoscaling/v2beta2
or higher)
hpa.customMetrics
[]
Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in
targetAverageUtilization
)
hpa.cpu.targetType
AverageValue
Set the autoscaling CPU target type, must be either
Utilization
or
AverageValue
hpa.cpu.targetAverageValue
1
Set the autoscaling CPU target value
hpa.cpu.targetAverageUtilization
Set the autoscaling CPU target utilization
hpa.memory.targetType
Set the autoscaling memory target type, must be either
Utilization
or
AverageValue
hpa.memory.targetAverageValue
Set the autoscaling memory target value
hpa.memory.targetAverageUtilization
Set the autoscaling memory target utilization
hpa.targetAverageValue
DEPRECATED
Set the autoscaling CPU target value
sshHostKeys.mount
false
Whether to mount the GitLab Shell secret containing the public SSH keys.
If a metrics endpoint should be made available for scraping
metrics.port
8083
Metrics endpoint port
metrics.path
/metrics
Metrics endpoint path
metrics.serviceMonitor.enabled
false
If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping, note that enabling this removes the
prometheus.io
scrape annotations
metrics.serviceMonitor.additionalLabels
{}
Additional labels to add to the ServiceMonitor
metrics.serviceMonitor.endpointConfig
{}
Additional endpoint configuration for the ServiceMonitor
metrics.annotations
DEPRECATED
Set explicit metrics annotations. Replaced by template content.
metrics.tls.enabled
false
TLS enabled for the metrics/web_exporter endpoint
metrics.tls.secretName
{Release.Name}-webservice-metrics-tls
Secret for the metrics/web_exporter endpoint TLS cert and key
minio.bucket
git-lfs
Name of storage bucket, when using MinIO
minio.port
9000
Port for MinIO service
minio.serviceName
minio-svc
Name of MinIO service
monitoring.ipWhitelist
[0.0.0.0/0]
List of IPs to whitelist for the monitoring endpoints
monitoring.exporter.enabled
false
Enable webserver to expose Prometheus metrics, this is overridden by
metrics.enabled
if the metrics port is set to the monitoring exporter port
monitoring.exporter.port
8083
Port number to use for the metrics exporter
psql.password.key
psql-password
Key to psql password in psql secret
psql.password.secret
gitlab-postgres
psql secret name
psql.port
Set PostgreSQL server port. Takes precedence over
global.psql.port
puma.disableWorkerKiller
true
Disables Puma worker memory killer
puma.workerMaxMemory
The maximum memory (in megabytes) for the Puma worker killer
puma.threads.min
4
The minimum amount of Puma threads
puma.threads.max
4
The maximum amount of Puma threads
rack_attack.git_basic_auth
{}
See GitLab documentation for details
redis.serviceName
redis
Redis service name
registry.api.port
5000
Registry port
registry.api.protocol
http
Registry protocol
registry.api.serviceName
registry
Registry service name
registry.enabled
true
Add/Remove registry link in all projects menu
registry.tokenIssuer
gitlab-issuer
Registry token issuer
replicaCount
1
Webservice number of replicas
resources.requests.cpu
300m
Webservice minimum CPU
resources.requests.memory
1.5G
Webservice minimum memory
service.externalPort
8080
Webservice exposed port
securityContext.fsGroup
1000
Group ID under which the pod should be started
securityContext.runAsUser
1000
User ID under which the pod should be started
serviceLabels
{}
Supplemental service labels
service.internalPort
8080
Webservice internal port
service.type
ClusterIP
Webservice service type
service.workhorseExternalPort
8181
Workhorse exposed port
service.workhorseInternalPort
8181
Workhorse internal port
service.loadBalancerIP
IP address to assign to LoadBalancer (if supported by cloud provider)
service.loadBalancerSourceRanges
List of IP CIDRs allowed access to LoadBalancer (if supported) Required for service.type = LoadBalancer
shell.authToken.key
secret
Key to shell token in shell secret
shell.authToken.secret
{Release.Name}-gitlab-shell-secret
Shell token secret
shell.port
nil
Port number to use in SSH URLs generated by UI
shutdown.blackoutSeconds
10
Number of seconds to keep Webservice running after receiving shutdown, note this must be shorter than
deployment.terminationGracePeriodSeconds
tls.enabled
false
Webservice TLS enabled
tls.secretName
{Release.Name}-webservice-tls
Webservice TLS secrets.
secretName
must point to a Kubernetes TLS secret.
tolerations
[]
Toleration labels for pod assignment
trusted_proxies
[]
See GitLab documentation for details
workhorse.logFormat
json
Logging format. Valid formats:
json
,
structured
,
text
workerProcesses
2
Webservice number of workers
workhorse.keywatcher
true
Subscribe workhorse to Redis. This is
required
by any deployment servicing requests to
/api/*
, but can be safely disabled for other deployments
workhorse.shutdownTimeout
global.webservice.workerTimeout + 1
(seconds)
Time to wait for all Web requests to clear from Workhorse. Examples:
1min
,
65s
.
workhorse.trustedCIDRsForPropagation
A list of CIDR blocks that can be trusted for propagating a correlation ID. The
-propagateCorrelationID
option must also be used in
workhorse.extraArgs
for this to work. See the Workhorse documentation for more details.
workhorse.trustedCIDRsForXForwardedFor
A list of CIDR blocks that can be used to resolve the actual client IP via the
X-Forwarded-For
HTTP header. This is used with
workhorse.trustedCIDRsForPropagation
. See the Workhorse documentation for more details.
workhorse.livenessProbe.initialDelaySeconds
20
Delay before liveness probe is initiated
workhorse.livenessProbe.periodSeconds
60
How often to perform the liveness probe
workhorse.livenessProbe.timeoutSeconds
30
When the liveness probe times out
workhorse.livenessProbe.successThreshold
1
Minimum consecutive successes for the liveness probe to be considered successful after having failed
workhorse.livenessProbe.failureThreshold
3
Minimum consecutive failures for the liveness probe to be considered failed after having succeeded
workhorse.monitoring.exporter.enabled
false
Enable workhorse to expose Prometheus metrics, this is overridden by
workhorse.metrics.enabled
workhorse.monitoring.exporter.port
9229
Port number to use for workhorse Prometheus metrics
workhorse.monitoring.exporter.tls.enabled
false
When set to
true
, enables TLS on metrics endpoint. It requires TLS to be enabled for Workhorse.
workhorse.metrics.enabled
true
If a workhorse metrics endpoint should be made available for scraping
workhorse.metrics.port
8083
Workhorse metrics endpoint port
workhorse.metrics.path
/metrics
Workhorse metrics endpoint path
workhorse.metrics.serviceMonitor.enabled
false
If a ServiceMonitor should be created to enable Prometheus Operator to manage the Workhorse metrics scraping
workhorse.metrics.serviceMonitor.additionalLabels
{}
Additional labels to add to the Workhorse ServiceMonitor
workhorse.metrics.serviceMonitor.endpointConfig
{}
Additional endpoint configuration for the Workhorse ServiceMonitor
workhorse.readinessProbe.initialDelaySeconds
0
Delay before readiness probe is initiated
workhorse.readinessProbe.periodSeconds
10
How often to perform the readiness probe
workhorse.readinessProbe.timeoutSeconds
2
When the readiness probe times out
workhorse.readinessProbe.successThreshold
1
Minimum consecutive successes for the readiness probe to be considered successful after having failed
workhorse.readinessProbe.failureThreshold
3
Minimum consecutive failures for the readiness probe to be considered failed after having succeeded
workhorse.imageScaler.maxProcs
2
The maximum number of image scaling processes that may run concurrently
workhorse.imageScaler.maxFileSizeBytes
250000
The maximum file size in bytes for images to be processed by the scaler
workhorse.tls.verify
true
When set to
true
forces NGINX Ingress to verify the TLS certificate of Workhorse. For custom CA you need to set
workhorse.tls.caSecretName
as well. Must be set to
false
for self-signed certificates.
workhorse.tls.secretName
{Release.Name}-workhorse-tls
The name of the TLS Secret that contains the TLS key and certificate pair. This is required when Workhorse TLS is enabled.
workhorse.tls.caSecretName
The name of the Secret that contains the CA certificate. This
is not
a TLS Secret, and must have only
ca.crt
key. This is used for TLS verification by NGINX.
webServer
puma
Selects the web server (Webservice/Puma) to be used for request handling
priorityClassName
""
Allow configuring pods
priorityClassName
, this is used to control pod priority in case of eviction
Chart configuration examples
extraEnv
extraEnv
allows you to expose additional environment variables in all containers in the pods.
deployment.strategy
allows you to change the deployment update strategy. It defines how the pods will be recreated when the deployment is updated. When not provided, the cluster default is used.
For example, if you don’t want to create extra pods when the rolling update starts and change max unavailable pods to 50%:
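# Hedged sketch matching the description above.
deployment:
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 50%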
You can also change the type of update strategy to
Recreate
, but be careful as it will kill all pods before scheduling new ones, and the web UI will be unavailable until the new pods are started. In this case, you don’t need to define
rollingUpdate
, only
type
:
deployment:
  strategy:
    type: Recreate
For more details, see the Kubernetes documentation.
TLS
A Webservice pod runs two containers:
gitlab-workhorse
webservice
gitlab-workhorse
Workhorse supports TLS for both web and metrics endpoints. This will secure the
communication between Workhorse and other components, in particular
nginx-ingress
,
gitlab-shell
, and
gitaly
. The TLS certificate should include the Workhorse
Service host name (e.g.
RELEASE-webservice-default.default.svc
) in the Common
Name (CN) or Subject Alternate Name (SAN).
Note that multiple deployments of Webservice can exist,
so you need to prepare the TLS certificate for different service names. This
can be achieved by either multiple SANs or a wildcard certificate.
Once the TLS certificate is generated, create a Kubernetes TLS Secret for it. You also need to create
another Secret that only contains the CA certificate of the TLS certificate
with
ca.crt
key.
The TLS can be enabled for
gitlab-workhorse
container by setting
global.workhorse.tls.enabled
to
true
. You can pass custom Secret names to
gitlab.webservice.workhorse.tls.secretName
and
global.certificates.customCAs
accordingly.
When
gitlab.webservice.workhorse.tls.verify
is
true
(it is by default), you
also need to pass the CA certificate Secret name to
gitlab.webservice.workhorse.tls.caSecretName
.
This is necessary for self-signed certificates and custom CA. This Secret is used
by NGINX to verify the TLS certificate of Workhorse.
TLS can be enabled on metrics endpoints for
gitlab-workhorse
container by setting
gitlab.webservice.workhorse.monitoring.tls.enabled
to
true
. Note that TLS on
metrics endpoint is only available when TLS is enabled for Workhorse. The metrics
listener uses the same TLS certificate that is specified by
gitlab.webservice.workhorse.tls.secretName
.
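Putting these together, a hedged sketch of umbrella-chart values for Workhorse TLS; the Secret names are placeholders and must be created beforehand:

global:
  workhorse:
    tls:
      enabled: true
  certificates:
    customCAs:
      - secret: gitlab-workhorse-ca      # Secret holding only the ca.crt key
gitlab:
  webservice:
    workhorse:
      tls:
        verify: true
        secretName: gitlab-workhorse-tls # Kubernetes TLS Secret (tls.crt/tls.key)
        caSecretName: gitlab-workhorse-ca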
webservice
The primary use case for enabling TLS is to provide encryption via HTTPS
for scraping Prometheus metrics.
For this reason, the TLS certificate should include the Webservice
hostname (ex:
RELEASE-webservice-default.default.svc
) in the Common
Name (CN) or Subject Alternate Name (SAN).
The Prometheus server bundled with the chart does not yet
support scraping of HTTPS endpoints.
TLS can be enabled on the
webservice
container with the setting
gitlab.webservice.tls.enabled
:
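# Hedged sketch: the Secret name is a placeholder; it defaults to
# {Release.Name}-webservice-tls when omitted.
gitlab:
  webservice:
    tls:
      enabled: true
      secretName: gitlab-webservice-tls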
By default, the Helm charts use the Enterprise Edition of GitLab. If desired, you
can use the Community Edition instead. Learn more about the
differences between the two.
In order to use the Community Edition, set
image.repository
to
registry.gitlab.com/gitlab-org/build/cng/gitlab-webservice-ce
and
workhorse.image
to
registry.gitlab.com/gitlab-org/build/cng/gitlab-workhorse-ce
.
Global settings
We share some common global settings among our charts. See the Globals Documentation
for common configuration options, such as GitLab and Registry hostnames.
Deployments settings
This chart has the ability to create multiple Deployment objects and their related
resources. This feature allows requests to the GitLab application to be distributed between multiple sets of Pods using path based routing.
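A hedged sketch of such a deployments map at the chart scope (under the umbrella chart these keys nest beneath gitlab.webservice); the entry names and paths are illustrative:

deployments:
  default:
    ingress:
      path: /
  api:
    ingress:
      path: /api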
The keys of this Map (
default
in this example) are the “name” for each.
default
will have a Deployment, Service, HorizontalPodAutoscaler, PodDisruptionBudget, and
optional Ingress created with
RELEASE-webservice-default
.
Any property not provided will inherit from the
gitlab-webservice
chart defaults.
Each
deployments
entry will inherit from chart-wide Ingress settings. Any value presented here will override those provided there. Outside of
path
, all settings are identical to those.
The
path
property is directly populated into the Ingress’s
path
property, and allows one to control URI paths which are directed to each service. In the example above,
default
acts as the catch-all path, and
api
receives all traffic under
/api
.
You can disable a given Deployment from having an associated Ingress resource created by setting
path
to empty. See below, where
internal-api
will never receive external traffic.
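A hedged sketch, extending the map above:

deployments:
  default:
    ingress:
      path: /
  internal-api:
    ingress:
      path:   # left empty: no Ingress is created for this Deployment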
These annotations will be used for every Ingress. For example:
ingress.annotations."nginx\.ingress\.kubernetes\.io/enable-access-log"=true
.
ingress.configureCertmanager
Boolean
Toggles Ingress annotation
cert-manager.io/issuer
. For more information see the TLS requirement for GitLab Pages.
ingress.enabled
Boolean
false
Setting that controls whether to create Ingress objects for services that support them. When
false
, the
global.ingress.enabled
setting value is used.
ingress.proxyBodySize
String
512m
See Below.
ingress.tls.enabled
Boolean
true
When set to
false
, you disable TLS for GitLab Webservice. This is mainly useful for cases in which you cannot use TLS termination at Ingress-level, like when you have a TLS-terminating proxy before the Ingress Controller.
ingress.tls.secretName
String
(empty)
The name of the Kubernetes TLS Secret that contains a valid certificate and key for the GitLab URL. When not set, the
global.ingress.tls.secretName
value is used instead.
ingress.tls.smartcardSecretName
String
(empty)
The name of the Kubernetes TLS Secret that contains a valid certificate and key for the GitLab smartcard URL if enabled. When not set, the
global.ingress.tls.secretName
value is used instead.
annotations
annotations
is used to set annotations on the Webservice Ingress.
We set one annotation by default:
nginx.ingress.kubernetes.io/service-upstream: "true"
.
This helps balance traffic to the Webservice pods more evenly by telling NGINX to directly
contact the Service itself as the upstream. For more information, see the
NGINX docs.
proxyBodySize
is used to set the NGINX proxy maximum body size. This is commonly
required to allow a larger Docker image than the default.
It is equivalent to the
nginx['client_max_body_size']
configuration in an
Omnibus installation.
As an alternative option,
you can set the body size with either of the following two parameters too:
Each pod spawns an amount of workers equal to
workerProcesses
, each of which uses
some baseline amount of memory. We recommend:
A minimum of 1.25GB per worker (
requests.memory
)
A maximum of 1.5GB per worker, plus 1GB for the primary (
limits.memory
)
Note that required resources are dependent on the workload generated by users
and may change in the future based on changes or upgrades in the GitLab application.
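For example, a hedged sketch for the default of two workers per pod, following the guidance above:

workerProcesses: 2
resources:
  requests:
    memory: 2.5G   # 2 workers x 1.25 GB
  limits:
    memory: 4G     # 2 workers x 1.5 GB, plus 1 GB for the primary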
api.host
String
The hostname of the Registry server to use. This can be omitted in lieu of
api.serviceName
.
api.port
Integer
5000
The port on which to connect to the Registry API.
api.protocol
String
The protocol Webservice should use to reach the Registry API.
api.serviceName
String
registry
The name of the
service
which is operating the Registry server. If this is present, and
api.host
is not, the chart will template the hostname of the service (and current
.Release.Name
) in place of the
api.host
value. This is convenient when using Registry as a part of the overall GitLab chart.
certificate.key
String
The name of the
key
in the
Secret
which houses the certificate bundle that will be provided to the registry container as
auth.token.rootcertbundle
.
certificate.secret
String
The name of the Kubernetes Secret that houses the certificate bundle to be used to verify the tokens created by the GitLab instance(s).
host
String
The external hostname to use for providing Docker commands to users in the GitLab UI. Falls back to the value set in the
registry.hostname
template, which determines the registry hostname based on the values set in
global.hosts
. See the Globals Documentation for more information.
port
Integer
The external port used in the hostname. Using port
80
or
443
will result in the URLs being formed with
http
/
https
. Other ports will all use
http
and append the port to the end of hostname, for example
http://registry.example.com:8443
.
tokenIssuer
String
gitlab-issuer
The name of the auth token issuer. This must match the name used in the Registry’s configuration, as it is incorporated into the token when it is sent. The default of
gitlab-issuer
is the same default we use in the Registry chart.
Chart settings
The following values are used to configure the Webservice Pods.
Name
Type
Default
Description
replicaCount
Integer
1
The number of Webservice instances to create in the deployment.
workerProcesses
Integer
2
The number of Webservice workers to run per pod. You must have at least
2
workers available in your cluster in order for GitLab to function properly. Note that increasing the
workerProcesses
will increase the memory required by approximately
400MB
per worker, so you should update the pod
resources
accordingly.
Metrics
Metrics can be enabled with the
metrics.enabled
value and use the GitLab
monitoring exporter to expose a metrics port. Pods are either given Prometheus
annotations or if
metrics.serviceMonitor.enabled
is
true
a Prometheus
Operator ServiceMonitor is created. Metrics can alternatively be scraped from
the
/-/metrics
endpoint, but this requires GitLab Prometheus metrics
to be enabled in the Admin area. The GitLab Workhorse metrics can also be
exposed via
workhorse.metrics.enabled
but these can’t be collected using the
Prometheus annotations so either require
workhorse.metrics.serviceMonitor.enabled
to be
true
or external Prometheus
configuration.
GitLab Shell
GitLab Shell uses an Auth Token in its communication with Webservice. Share the token
with GitLab Shell and Webservice using a shared Secret.
authToken.key
String
Defines the name of the key in the secret (below) that contains the authToken.
authToken.secret
String
Defines the name of the Kubernetes
Secret
to pull from.
port
Integer
22
The port number to use in the generation of SSH URLs within the GitLab UI. Controlled by
global.shell.port
.
WebServer options
The current version of the chart supports the Puma web server.
Puma unique options:
Name
Type
Default
Description
puma.workerMaxMemory
Integer
The maximum memory (in megabytes) for the Puma worker killer
puma.threads.min
Integer
4
The minimum amount of Puma threads
puma.threads.max
Integer
4
The maximum amount of Puma threads
Configuring the
networkpolicy
This section controls the
NetworkPolicy.
This configuration is optional and is used to limit Egress and Ingress of the
Pods to specific endpoints.
Name
Type
Default
Description
enabled
Boolean
false
This setting enables the
NetworkPolicy
ingress.enabled
Boolean
false
When set to
true
, the
Ingress
network policy will be activated. This will block all Ingress connections unless rules are specified.
ingress.rules
Array
[]
Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
egress.enabled
Boolean
false
When set to
true
, the
Egress
network policy will be activated. This will block all egress connections unless rules are specified.
egress.rules
Array
[]
Rules for the Egress policy; for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below
Example Network Policy
The webservice service requires Ingress connections for only the Prometheus
exporter if enabled and traffic coming from the NGINX Ingress, and normally
requires Egress connections to various places. This example adds the following
network policy:
All Ingress requests from the network on TCP
10.0.0.0/8
port 8080 are allowed for metrics exporting and NGINX Ingress
All Egress requests to the network on UDP
10.0.0.0/8
port 53 are allowed for DNS
All Egress requests to the network on TCP
10.0.0.0/8
port 5432 are allowed for PostgreSQL
All Egress requests to the network on TCP
10.0.0.0/8
port 6379 are allowed for Redis
All Egress requests to the network on TCP
10.0.0.0/8
port 8075 are allowed for Gitaly
Other Egress requests to the local network on
10.0.0.0/8
are restricted
Egress requests outside of the
10.0.0.0/8
are allowed
Note the example provided is only an example and may not be complete
Note that the Webservice requires outbound connectivity to the public internet
for images on external object storage
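A hedged sketch of chart values implementing the policy described above (the CIDRs and ports mirror the example; adjust them for your environment):

networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      - from:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 8080        # metrics exporting and NGINX Ingress traffic
  egress:
    enabled: true
    rules:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53          # DNS
            protocol: UDP
          - port: 5432        # PostgreSQL
            protocol: TCP
          - port: 6379        # Redis
            protocol: TCP
          - port: 8075        # Gitaly
            protocol: TCP
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 10.0.0.0/8  # restrict other egress to the local network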
If the
service.type
is set to
LoadBalancer
, you can optionally specify
service.loadBalancerIP
to create
the
LoadBalancer
with a user-specified IP (if your cloud provider supports it).
When the
service.type
is set to
LoadBalancer
you must also set
service.loadBalancerSourceRanges
to restrict
the CIDR ranges that can access the
LoadBalancer
(if your cloud provider supports it).
This is currently required due to an issue where metric ports are exposed.
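A hedged sketch at the chart scope (under the umbrella chart these keys nest beneath gitlab.webservice); the IP and CIDR are placeholders:

service:
  type: LoadBalancer
  loadBalancerIP: 198.51.100.10
  loadBalancerSourceRanges:
    - 203.0.113.0/24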
Additional information about the
LoadBalancer
service type can be found in
the Kubernetes documentation