# Using certmanager-issuer for CertManager Issuer creation
This chart is a helper for Jetstack’s CertManager Helm chart.
It automatically provisions an Issuer object, used by CertManager when requesting TLS certificates for
GitLab Ingresses.
## Configuration
We describe all the major sections of the configuration below. When configuring
from the parent chart, these values are:
```yaml
certmanager-issuer:
  # Configure an ACME Issuer in cert-manager. Only used if
  # global.ingress.configureCertmanager is true.
  server: https://acme-v02.api.letsencrypt.org/directory

  # Provide an email to associate with your TLS certificates
  # email:

  rbac:
    create: true

  resources:
    requests:
      cpu: 50m

  # Priority class assigned to pods
  priorityClassName: ""

  common:
    labels: {}
```
## Installation parameters
This table contains all the possible chart configurations that can be supplied
to the `helm install` command using the `--set` flags:
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `server` | `https://acme-v02.api.letsencrypt.org/directory` | Let's Encrypt server for use with the ACME CertManager Issuer. |
| `email` | | You must provide an email to associate with your TLS certificates. Let's Encrypt uses this address to contact you about expiring certificates, and issues related to your account. |
| `rbac.create` | `true` | When `true`, creates RBAC-related resources to allow for manipulation of CertManager Issuer objects. |
| `resources.requests.cpu` | `50m` | Requested CPU resources for the Issuer creation Job. |
| `common.labels` | `{}` | Common labels to apply to the ServiceAccount, Job, ConfigMap, and Issuer. |
| `priorityClassName` | `""` | Priority class assigned to pods. |
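For example, a sketch of supplying the email at install time (the release name
and chart reference here are illustrative):

```shell
# Illustrative release name and chart reference; adjust to your deployment.
helm install gitlab gitlab/gitlab \
  --set certmanager-issuer.email=admin@example.com
```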
# Using the GitLab-Gitaly chart
The `gitaly` sub-chart provides a configurable deployment of Gitaly Servers.
## Requirements
This chart depends on access to the Workhorse service, either as part of the
complete GitLab chart or provided as an external service reachable from the Kubernetes
cluster this chart is deployed onto.
## Design Choices
The Gitaly container used in this chart also contains the GitLab Shell codebase in
order to perform the actions on the Git repositories that have not yet been ported into Gitaly.
The Gitaly container includes a copy of the GitLab Shell container within it, and
as a result we also need to configure GitLab Shell within this chart.
## Configuration
The `gitaly` chart is configured in two parts: external services, and chart
settings.

Gitaly is by default deployed as a component when deploying the GitLab
chart. If deploying Gitaly separately, `global.gitaly.enabled` needs to
be set to `false` and additional configuration will need to be performed
as described in the external Gitaly documentation.
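For instance, a minimal values snippet that disables the bundled Gitaly (the
external connection details described in the external Gitaly documentation
still need to be supplied separately):

```yaml
# Disable the in-cluster Gitaly StatefulSet; external Gitaly
# connection details must then be configured as documented.
global:
  gitaly:
    enabled: false
```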
## Installation command line options
The table below contains all the possible chart configurations that can be
supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `annotations` | | Pod annotations |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `external[].hostname` | `- ""` | hostname of external node |
| `external[].name` | `- ""` | name of external node storage |
| `external[].port` | `- ""` | port of external node |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | List of extra volume mounts to add |
| `extraVolumes` | | List of extra volumes to create |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `gitaly.serviceName` | | The name of the generated Gitaly service. Overrides `global.gitaly.serviceName`, and defaults to `<RELEASE-NAME>-gitaly` |
| `image.pullPolicy` | `Always` | Gitaly image pull policy |
| `image.pullSecrets` | | Secrets for the image repository |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly` | Gitaly image repository |
| `image.tag` | `master` | Gitaly image tag |
| `init.image.repository` | | initContainer image |
| `init.image.tag` | | initContainer image tag |
| `internal.names[]` | `- default` | Ordered names of StatefulSet storages |
| `serviceLabels` | `{}` | Supplemental service labels |
| `service.externalPort` | `8075` | Gitaly service exposed port |
| `service.internalPort` | `8075` | Gitaly internal port |
| `service.name` | `gitaly` | The name of the Service port that Gitaly is behind in the Service object. |
| `service.type` | `ClusterIP` | Gitaly service type |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.fsGroupChangePolicy` | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `persistence.accessMode` | `ReadWriteOnce` | Gitaly persistence access mode |
| `persistence.annotations` | | Gitaly persistence annotations |
| `persistence.enabled` | `true` | Gitaly enable persistence flag |
| `persistence.matchExpressions` | | Label-expression matches to bind |
| `persistence.matchLabels` | | Label-value matches to bind |
| `persistence.size` | `50Gi` | Gitaly persistence volume size |
| `persistence.storageClass` | | `storageClassName` for provisioning |
| `persistence.subPath` | | Gitaly persistence volume mount path |
| `priorityClassName` | | Gitaly StatefulSet priorityClassName |
| `logging.level` | | Log level |
| `logging.format` | `json` | Log format |
| `logging.sentryDsn` | | Sentry DSN URL - Exceptions from Go server |
| `logging.rubySentryDsn` | | Sentry DSN URL - Exceptions from `gitaly-ruby` |
| `logging.sentryEnvironment` | | Sentry environment to be used for logging |
| `ruby.maxRss` | | Gitaly-Ruby resident set size (RSS) that triggers a memory restart (bytes) |
| `ruby.gracefulRestartTimeout` | | Graceful period before a force restart after exceeding Max RSS |
| `ruby.restartDelay` | | Time that Gitaly-Ruby memory must remain high before a restart (seconds) |
| `ruby.numWorkers` | | Number of Gitaly-Ruby worker processes |
| `shell.concurrency[]` | | Concurrency of each RPC endpoint. Specified using keys `rpc` and `maxPerRepo` |
| `packObjectsCache.enabled` | `false` | Enable the Gitaly pack-objects cache |
| `packObjectsCache.dir` | `/home/git/repositories/+gitaly/PackObjectsCache` | Directory where cache files get stored |
| `packObjectsCache.max_age` | `5m` | Cache entries lifespan |
| `git.catFileCacheSize` | | Cache size used by Git cat-file process |
| `git.config[]` | `[]` | Git configuration that Gitaly should set when spawning Git commands |
| `prometheus.grpcLatencyBuckets` | | Buckets corresponding to histogram latencies on GRPC method calls to be recorded by Gitaly. A string form of the array (for example, `"[1.0, 1.5, 2.0]"`) is required as input |
| `statefulset.strategy` | `{}` | Allows one to configure the update strategy utilized by the StatefulSet |
| `statefulset.livenessProbe.initialDelaySeconds` | `30` | Delay before liveness probe is initiated |
| `statefulset.livenessProbe.periodSeconds` | `10` | How often to perform the liveness probe |
| `statefulset.livenessProbe.timeoutSeconds` | `3` | When the liveness probe times out |
| `statefulset.livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `statefulset.livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `statefulset.readinessProbe.initialDelaySeconds` | `10` | Delay before readiness probe is initiated |
| `statefulset.readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `statefulset.readinessProbe.timeoutSeconds` | `3` | When the readiness probe times out |
| `statefulset.readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `statefulset.readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| `metrics.enabled` | `false` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9236` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `metrics.metricsPort` | | *DEPRECATED:* Use `metrics.port` |
## Chart configuration examples
### extraEnv

`extraEnv` allows you to expose additional environment variables in all
containers in the pods.
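A sketch of the expected format (the variable names and values here are
illustrative):

```yaml
extraEnv:
  # Each key/value pair becomes an environment variable in every container.
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
```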
### priorityClassName

`priorityClassName` allows you to assign a PriorityClass to the Gitaly pods.

Below is an example use of `priorityClassName`:

```yaml
priorityClassName: persistence-enabled
```
### git.config

`git.config` allows you to add configuration to all Git commands spawned by
Gitaly. Accepts configuration as documented in `git-config(1)` in
`key` / `value` pairs, as shown below.
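A sketch of the expected shape (the specific key and value here are
illustrative):

```yaml
git:
  config:
    # Each entry sets one git-config(1) key for spawned Git commands.
    - key: "pack.threads"
      value: 4
```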
### Altering security contexts

Gitaly `StatefulSet` performance may suffer when repositories have large
amounts of files. Mitigate the issue by changing or fully deleting the
settings for the `securityContext`. The example syntax below eliminates the
`securityContext` setting entirely.
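A sketch, assuming the chart accepts empty-string overrides to drop these
fields from the rendered manifest:

```yaml
gitlab:
  gitaly:
    securityContext:
      # Empty strings remove these values from the rendered manifest.
      runAsUser: ""
      fsGroup: ""
```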
Setting `securityContext: {}` or `securityContext:` does not work due
to the way Helm merges default values with user provided configuration.

Starting from Kubernetes 1.23 you can instead set the `fsGroupChangePolicy`
to `OnRootMismatch` to mitigate the issue.
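A minimal sketch of that setting:

```yaml
gitlab:
  gitaly:
    securityContext:
      # Only change ownership when the volume root does not already
      # match the configured fsGroup.
      fsGroupChangePolicy: OnRootMismatch
```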
## External Services

### Workhorse

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `host` | String | | The hostname of the Workhorse server. This can be omitted in lieu of `serviceName`. |
| `port` | Integer | `8181` | The port on which to connect to the Workhorse server. |
| `serviceName` | String | `webservice` | The name of the `service` which is operating the Workhorse server. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Workhorse as a part of the overall GitLab chart. |
## Chart settings
The following values are used to configure the Gitaly Pods.

Gitaly uses an Auth Token to authenticate with the Workhorse and Sidekiq
services. The Auth Token secret and key are sourced from the
`global.gitaly.authToken` value. Additionally, the Gitaly container has a copy
of GitLab Shell, which has some configuration that can be set. The Shell
`authToken` is sourced from the `global.shell.authToken` values.
## Git Repository Persistence
This chart provisions a PersistentVolumeClaim and mounts a corresponding
persistent volume for the Git repository data. You'll need physical storage
available in the Kubernetes cluster for this to work. If you'd rather use
emptyDir, disable the PersistentVolumeClaim with `persistence.enabled: false`.

The persistence settings for Gitaly are used in a `volumeClaimTemplate`
that should be valid for all your Gitaly pods. You should *not* include
settings that are meant to reference a single specific volume (such as
`volumeName`). If you want to reference a specific volume, you need to
manually create the PersistentVolumeClaim.

You can't change these settings once you've deployed: in a StatefulSet, the
`volumeClaimTemplate` is immutable.
| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `accessMode` | String | `ReadWriteOnce` | Sets the accessMode requested in the PersistentVolumeClaim. See Kubernetes Access Modes Documentation for details. |
| `enabled` | Boolean | `true` | Sets whether or not to use a PersistentVolumeClaim for the repository data. If `false`, an emptyDir volume is used. |
| `matchExpressions` | Array | | Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `matchLabels` | Map | | Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `size` | String | `50Gi` | The minimum volume size to request for the data persistence. |
| `storageClass` | String | | Sets the storageClassName on the Volume Claim for dynamic provisioning. When unset or null, the default provisioner will be used. If set to a hyphen, dynamic provisioning is disabled. |
| `subPath` | String | | Sets the path within the volume to mount, rather than the volume root. The root is used if the subPath is empty. |
| `annotations` | Map | | Sets the annotations on the Volume Claim for dynamic provisioning. See Kubernetes Annotations Documentation for details. |
## Running Gitaly over TLS
This section refers to Gitaly being run inside the cluster using
the Helm charts. If you are using an external Gitaly instance and want to use
TLS for communicating with it, refer to the external Gitaly documentation.

Gitaly supports communicating with other components over TLS. This is
controlled by the settings `global.gitaly.tls.enabled` and
`global.gitaly.tls.secretName`.
Follow the steps to run Gitaly over TLS:

1. The Helm chart expects a certificate to be provided for communicating over
   TLS with Gitaly. This certificate should apply to all the Gitaly nodes that
   are present. Hence, all hostnames of each of these Gitaly nodes should be
   added as a Subject Alternate Name (SAN) to the certificate. To know the
   hostnames to use, check the `/srv/gitlab/config/gitlab.yml` file in the
   Toolbox pod and check the various `gitaly_address` fields specified under
   the `repositories.storages` key within it.
1. A basic script for generating custom signed certificates for
   internal Gitaly pods can be found in this repository. Users can use or
   refer to that script to generate certificates with proper SAN attributes.
1. Create a Kubernetes TLS secret using the certificate created (see the
   sketch after this list).
1. Redeploy the Helm chart by passing `--set global.gitaly.tls.enabled=true`.
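A sketch of the secret-creation step, assuming the certificate and key were
written to `gitaly.crt` and `gitaly.key`, and that the secret name matches
your `global.gitaly.tls.secretName` setting:

```shell
# File names and secret name are placeholders; adjust to your setup.
kubectl create secret tls gitaly-server-tls \
  --cert=gitaly.crt --key=gitaly.key
```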
## Global server hooks
The Gitaly StatefulSet has support for Global server hooks. The hook scripts
run on the Gitaly pod, and are therefore limited to the tools available in the
Gitaly container.

The hooks are populated using ConfigMaps, and can be used by setting the
following values as appropriate:

- `global.gitaly.hooks.preReceive.configmap`
- `global.gitaly.hooks.postReceive.configmap`
- `global.gitaly.hooks.update.configmap`

To populate the ConfigMap, you can point `kubectl` to a directory of scripts:
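A sketch, with placeholder names for the ConfigMap and the script directory:

```shell
# MAP_NAME and the directory path are placeholders.
kubectl create configmap MAP_NAME --from-file /PATH/TO/SCRIPT/DIR
```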
# Using the gitlab-exporter chart

The `gitlab-exporter` sub-chart provides Prometheus metrics for GitLab
application-specific data. It talks to PostgreSQL directly to perform
queries to retrieve data for CI builds, pull mirrors, etc. In addition,
it uses the Sidekiq API, which talks to Redis to gather different
metrics around the state of the Sidekiq queues (e.g. number of jobs).
## Requirements
This chart depends on Redis and PostgreSQL services, either as part of
the complete GitLab chart or provided as external services reachable
from the Kubernetes cluster on which this chart is deployed.
## Configuration

The `gitlab-exporter` chart is configured as follows:
Global settings and Chart settings.
## Installation command line options

The table below contains all the possible chart configurations that can be
supplied to the `helm install` command using the `--set` flags.
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `annotations` | | Pod annotations |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `deployment.strategy` | `{}` | Allows one to configure the update strategy utilized by the deployment |
| `enabled` | `true` | GitLab Exporter enabled flag |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | List of extra volume mounts to add |
| `extraVolumes` | | List of extra volumes to create |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `metrics.enabled` | `true` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9168` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `metrics.annotations` | | *DEPRECATED:* Set explicit metrics annotations. Replaced by template content. |
| `priorityClassName` | | Priority class assigned to pods. |
| `resources.requests.cpu` | `75m` | GitLab Exporter minimum CPU |
| `resources.requests.memory` | `100M` | GitLab Exporter minimum memory |
| `serviceLabels` | `{}` | Supplemental service labels |
| `service.externalPort` | `9168` | GitLab Exporter exposed port |
| `service.internalPort` | `9168` | GitLab Exporter internal port |
| `service.name` | `gitlab-exporter` | GitLab Exporter service name |
| `service.type` | `ClusterIP` | GitLab Exporter service type |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `psql.port` | | Set PostgreSQL server port. Takes precedence over `global.psql.port` |
## Chart configuration examples

### image.pullSecrets

`image.pullSecrets` allows you to configure the pull secrets for the pod's
container.
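A sketch of the expected format, with a placeholder Secret name:

```yaml
image:
  pullSecrets:
    # Name of an existing image pull Secret in the release namespace.
    - name: my-pull-secret
```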
### extraEnv

`extraEnv` allows you to expose additional environment variables in all
containers in the pods.
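A sketch of the expected format (keys and values are illustrative):

```yaml
extraEnv:
  # Each key/value pair becomes an environment variable in every container.
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
```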
## Global settings

We share some common global settings among our charts. See the Globals
Documentation for common configuration options, such as GitLab and Registry
hostnames.
## Chart settings
The following values are used to configure the GitLab Exporter pod.
### metrics.enabled

By default, the pod exposes a metrics endpoint at `/metrics`. When
metrics are enabled, annotations are added to each pod allowing a
Prometheus server to discover and scrape the exposed metrics.
# Using the gitlab-grafana chart

The `gitlab-grafana` subchart adapts the `grafana/grafana`
chart to operate correctly with the same level of configuration as the Omnibus
GitLab install. In addition, the installation of Grafana allows additional
dashboards to be installed by the end user and be incorporated with the
GitLab supplied dashboards.
## Requirements
This chart depends on the `grafana/grafana` chart which is usually installed
by the `GitLab` meta chart. In addition, Kubernetes Ingress support is
needed to properly route the Grafana requests using the `/-/grafana` path.
## Design Choices
Because of Helm limitations it is not possible to configure the Grafana
chart with knowledge of a dynamic name for the initial password Secret.
As a result a statically named Secret is created to contain the initial
password. This Secret is named `gitlab-grafana-initial-password`.

The same issue exists for the ConfigMap that contains the script that
is used to inject the initial password into the Grafana container. That
ConfigMap is named `gitlab-grafana-import-secret`.

Both the initial password Secret and the import script ConfigMap are
mounted into the Grafana container (the Secret in `/tmp/initial` and the
ConfigMap in `/tmp/scripts`). The container command line is augmented to use
both of these objects to securely expose the initial password to the
Grafana server. Modification of the container command line will
generally prevent the initial password from being injected into the
Grafana server environment.
## Configuration

There are no required settings; it should work out of the box if you deploy
all of the charts together. The administrator credentials are created by
the `shared-secrets` Job and the administrator username is set to `root`.
The password for Grafana's `root` user can be extracted with the following
command:

```shell
kubectl get secret gitlab-grafana-initial-password -ojsonpath='{.data.password}' | base64 --decode ; echo
```
## Installation command line options
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `ingress.apiVersion` | | Value to use in the `apiVersion` field. |
| `ingress.tls` | `{}` | Hash of Ingress TLS settings if GitLab cert manager is not installed |
| `ingress.annotations` | `{}` | Additional annotations to add to Grafana Ingress resource |
## Dashboard Support
Grafana dashboards are automatically discovered from the ConfigMaps in
the deployed namespace. If a ConfigMap has been created with the
`gitlab_grafana_dashboard` label set to `true`, then the JSON encoded
dashboard in the ConfigMap will be imported into Grafana. This import happens
once (when Grafana is restarted) and any changes to the dashboard will not be
written back to the ConfigMap.

There are currently no dashboards created when the chart is installed. Any
user created dashboards can be imported by creating a ConfigMap using the
`gitlab_grafana_dashboard` label and managing the ConfigMap themselves.
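For example, a hypothetical dashboard stored in `my-dashboard.json` could be
imported with something like the following (the ConfigMap and file names are
placeholders):

```shell
# Create a ConfigMap from the dashboard JSON, then add the discovery label.
kubectl create configmap my-dashboard --from-file=my-dashboard.json
kubectl label configmap my-dashboard gitlab_grafana_dashboard=true
```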
## Datasource support
Datasources may be created in the same manner as the dashboards by adding
the `gitlab_grafana_datasource` label. This chart will add a ConfigMap
to direct Grafana to use the embedded Prometheus metrics.
# Using the gitlab-pages chart

The `gitlab-pages` subchart provides a daemon for serving static websites from
GitLab projects.
## Requirements
This chart depends on access to the Workhorse services, either as part of the
complete GitLab chart or provided as an external service reachable from the Kubernetes
cluster this chart is deployed onto.
## Configuration

The `gitlab-pages` chart is configured as follows:
Global settings and Chart settings.
## Global Settings
We share some common global settings among our charts. See the
Globals Documentation for details.
## Chart settings

The tables in the following two sections contain all the possible chart
configurations that can be supplied to the `helm install` command using the
`--set` flags.
### General settings
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `annotations` | | Pod annotations |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `deployment.strategy` | `{}` | Allows one to configure the update strategy used by the deployment. When not provided, the cluster default is used. |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `hpa.behavior` | `{scaleDown: {stabilizationWindowSeconds: 300 }}` | Behavior contains the specifications for up- and downscaling behavior (requires `autoscaling/v2beta2` or higher) |
| `hpa.customMetrics` | `[]` | Custom metrics contains the specifications for which to use to calculate the desired replica count (overrides the default use of Average CPU Utilization configured in `targetAverageUtilization`) |
| `hpa.cpu.targetType` | `AverageValue` | Set the autoscaling CPU target type, must be either `Utilization` or `AverageValue` |
| `hpa.cpu.targetAverageValue` | `100m` | Set the autoscaling CPU target value |
| `hpa.cpu.targetAverageUtilization` | | Set the autoscaling CPU target utilization |
| `hpa.memory.targetType` | | Set the autoscaling memory target type, must be either `Utilization` or `AverageValue` |
| `metrics.enabled` | `false` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9235` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `metrics.annotations` | | *DEPRECATED:* Set explicit metrics annotations. Replaced by template content. |
| `metrics.tls.enabled` | `false` | TLS enabled for the metrics endpoint |
| `metrics.tls.secretName` | `{Release.Name}-pages-metrics-tls` | Secret for the metrics endpoint TLS cert and key |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `resources.requests.cpu` | `75m` | GitLab Pages minimum CPU |
| `resources.requests.memory` | `100M` | GitLab Pages minimum memory |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `service.externalPort` | `8090` | GitLab Pages exposed port |
| `service.internalPort` | `8090` | GitLab Pages internal port |
| `service.name` | `gitlab-pages` | GitLab Pages service name |
| `service.customDomains.type` | `LoadBalancer` | Type of service created for handling custom domains |
| `service.customDomains.internalHttpsPort` | `8091` | Port where the Pages daemon listens for HTTPS requests |
| `service.customDomains.nodePort.http` | | Node Port to be opened for HTTP connections. Valid only if `service.customDomains.type` is `NodePort` |
| `service.customDomains.nodePort.https` | | Node Port to be opened for HTTPS connections. Valid only if `service.customDomains.type` is `NodePort` |
| `service.sessionAffinity` | `None` | Type of the session affinity. Must be either `ClientIP` or `None` (this only makes sense for traffic originating from within the cluster) |
| `service.sessionAffinityConfig` | | Session affinity config. If `service.sessionAffinity` == `ClientIP` the default session sticky time is 3 hours (10800) |
| `serviceLabels` | `{}` | Supplemental service labels |
| `tolerations` | `[]` | Toleration labels for pod assignment |
### Pages specific settings
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| `artifactsServerTimeout` | `10` | Timeout (in seconds) for a proxied request to the artifacts server |
| `artifactsServerUrl` | | API URL to proxy artifact requests to |
| `extraVolumeMounts` | | List of extra volume mounts to add |
| `extraVolumes` | | List of extra volumes to create |
| `gitlabCache.cleanup` | `int` | See: Pages Global Settings |
| `gitlabCache.expiry` | `int` | See: Pages Global Settings |
| `gitlabCache.refresh` | `int` | See: Pages Global Settings |
| `gitlabClientHttpTimeout` | | GitLab API HTTP client connection timeout in seconds |
| `gitlabClientJwtExpiry` | | JWT Token expiry time in seconds |
| `gitlabRetrieval.interval` | `int` | See: Pages Global Settings |
| `gitlabRetrieval.retries` | `int` | See: Pages Global Settings |
| `gitlabRetrieval.timeout` | `int` | See: Pages Global Settings |
| `gitlabServer` | | GitLab server FQDN |
| `headers` | `[]` | Specify any additional HTTP headers that should be sent to the client with each response. Multiple headers can be given as an array, header and value as one string, for example `['my-header: myvalue', 'my-other-header: my-other-value']` |
| `insecureCiphers` | `false` | Use default list of cipher suites, may contain insecure ones like 3DES and RC4 |
| `internalGitlabServer` | | Internal GitLab server used for API requests |
| `logFormat` | `json` | Log output format |
| `logVerbose` | `false` | Verbose logging |
| `maxConnections` | | Limit on the number of concurrent connections to the HTTP, HTTPS or proxy listeners |
| `maxURILength` | | Limit the length of URI, 0 for unlimited. |
| `propagateCorrelationId` | | Reuse existing Correlation-ID from the incoming request header `X-Request-ID` if present |
| `redirectHttp` | `false` | Redirect pages from HTTP to HTTPS |
| `sentry.enabled` | `false` | Enable Sentry reporting |
| `sentry.dsn` | | The address for sending Sentry crash reporting to |
| `sentry.environment` | | The environment for Sentry crash reporting |
| `serverShutdownTimeout` | `30s` | GitLab Pages server shutdown timeout in seconds |
| `statusUri` | | The URL path for a status page |
| `tls.minVersion` | | Specifies the minimum SSL/TLS version |
| `tls.maxVersion` | | Specifies the maximum SSL/TLS version |
| `useHTTPProxy` | `false` | Use this option when GitLab Pages is behind a Reverse Proxy. |
| `useProxyV2` | `false` | Force HTTPS request to utilize the PROXYv2 protocol. |
| `zipCache.cleanup` | `int` | See: Zip Serving and Cache Configuration |
| `zipCache.expiration` | `int` | See: Zip Serving and Cache Configuration |
| `zipCache.refresh` | `int` | See: Zip Serving and Cache Configuration |
| `zipOpenTimeout` | `int` | See: Zip Serving and Cache Configuration |
| `zipHTTPClientTimeout` | `int` | See: Zip Serving and Cache Configuration |
| `rateLimitSourceIP` | | See: GitLab Pages rate-limits. To enable rate-limiting use `extraEnv=["FF_ENFORCE_IP_RATE_LIMITS=true"]` |
| `rateLimitSourceIPBurst` | | See: GitLab Pages rate-limits |
| `rateLimitDomain` | | See: GitLab Pages rate-limits. To enable rate-limiting use `extraEnv=["FF_ENFORCE_DOMAIN_RATE_LIMITS=true"]` |
| `rateLimitDomainBurst` | | See: GitLab Pages rate-limits |
| `rateLimitTLSSourceIP` | | See: GitLab Pages rate-limits. To enable rate-limiting use `extraEnv=["FF_ENFORCE_IP_TLS_RATE_LIMITS=true"]` |
| `rateLimitTLSSourceIPBurst` | | See: GitLab Pages rate-limits |
| `rateLimitTLSDomain` | | See: GitLab Pages rate-limits. To enable rate-limiting use `extraEnv=["FF_ENFORCE_DOMAIN_TLS_RATE_LIMITS=true"]` |
| `rateLimitTLSDomainBurst` | | See: GitLab Pages rate-limits |
| `serverReadTimeout` | `5s` | See: GitLab Pages global settings |
| `serverReadHeaderTimeout` | `1s` | See: GitLab Pages global settings |
| `serverWriteTimeout` | `5m` | See: GitLab Pages global settings |
| `serverKeepAlive` | `15s` | See: GitLab Pages global settings |
| `authCookieSessionTimeout` | `10m` | See: GitLab Pages global settings |
## Configuring the `ingress`

This section controls the GitLab Pages Ingress.
| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `apiVersion` | String | | Value to use in the `apiVersion` field. |
| `annotations` | String | | This field is an exact match to the standard `annotations` for Kubernetes Ingress. |
| `configureCertmanager` | Boolean | `false` | Toggles Ingress annotation `cert-manager.io/issuer`. The acquisition of a TLS certificate for GitLab Pages via cert-manager is disabled because a wildcard certificate acquisition requires a cert-manager Issuer with a DNS01 solver, and the Issuer deployed by this chart only provides a HTTP01 solver. For more information see the TLS requirement for GitLab Pages. |
| `enabled` | Boolean | | Setting that controls whether to create Ingress objects for services that support them. When not set, the `global.ingress.enabled` setting is used. |
| `tls.enabled` | Boolean | | When set to `false`, you disable TLS for the Pages subchart. This is mainly useful for cases in which you cannot use TLS termination at `ingress`-level, like when you have a TLS-terminating proxy before the Ingress Controller. |
| `tls.secretName` | String | | The name of the Kubernetes TLS Secret that contains a valid certificate and key for the pages URL. When not set, the `global.ingress.tls.secretName` is used instead. Defaults to not being set. |
## Chart configuration examples

### extraVolumes

`extraVolumes` allows you to configure extra volumes chart-wide.
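A sketch of the expected shape, assuming the chart accepts the volume list as
a string template (as other GitLab subcharts do); the volume name and claim
name are placeholders:

```yaml
extraVolumes: |
  # Placeholder volume backed by a pre-existing PersistentVolumeClaim.
  - name: example-volume
    persistentVolumeClaim:
      claimName: example-pvc
```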
## Configuring the NetworkPolicy

This section controls the NetworkPolicy.
This configuration is optional and is used to limit Egress and Ingress of the
Pods to specific endpoints.
| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| `enabled` | Boolean | `false` | This setting enables the NetworkPolicy |
| `ingress.enabled` | Boolean | `false` | When set to `true`, the Ingress network policy will be activated. This will block all Ingress connections unless rules are specified. |
| `ingress.rules` | Array | `[]` | Rules for the Ingress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
| `egress.enabled` | Boolean | `false` | When set to `true`, the Egress network policy will be activated. This will block all Egress connections unless rules are specified. |
| `egress.rules` | Array | `[]` | Rules for the Egress policy, for details see https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource and the example below |
### Example Network Policy

The `gitlab-pages` service requires Ingress connections for ports 80 and 443,
and Egress connections to the default Workhorse port 8181. This example adds
the following network policy:

- All Ingress requests from the network on TCP `0.0.0.0/0` port 80 and 443 are allowed
- All Egress requests to the network on UDP `10.0.0.0/8` port 53 are allowed for DNS
- All Egress requests to the network on TCP `10.0.0.0/8` port 8181 are allowed for Workhorse

Note the example provided is only an example and may not be complete.
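A sketch of such a policy, assuming the subchart's `networkpolicy` values
accept standard Kubernetes NetworkPolicy rule syntax as described in the table
above:

```yaml
networkpolicy:
  enabled: true
  ingress:
    enabled: true
    rules:
      # Allow HTTP/HTTPS from anywhere.
      - from:
          - ipBlock:
              cidr: 0.0.0.0/0
        ports:
          - port: 80
          - port: 443
  egress:
    enabled: true
    rules:
      # Allow DNS lookups inside the cluster network.
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 53
            protocol: UDP
      # Allow traffic to the default Workhorse port.
      - to:
          - ipBlock:
              cidr: 10.0.0.0/8
        ports:
          - port: 8181
            protocol: TCP
```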
# Using the GitLab Runner chart

The GitLab Runner subchart provides a GitLab Runner for running CI jobs. It is
enabled by default and should work out of the box with support for caching
using S3-compatible object storage.
## Requirements
This chart depends on the shared-secrets Job to populate its
`registrationToken` for automatic registration. If you intend to run this
chart as a stand-alone chart with an existing GitLab instance then you will
need to manually set the `registrationToken` in the `gitlab-runner` secret to
be equal to that displayed by the running GitLab instance.
## Configuration

There are no required settings; it should work out of the box if you deploy
all of the charts together.
## Deploying a stand-alone runner

By default we infer the `gitlabUrl`, and automatically generate a registration
token through the `migrations` chart. This behavior will not work if you
intend to deploy it with a running GitLab instance.

In this case you will need to set the `gitlabUrl` value to be the URL of the
running GitLab instance. You will also need to manually create the
`gitlab-runner` secret and fill it with the `registrationToken` provided by
the running GitLab.
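A sketch of creating that secret, assuming the upstream chart's expected key
names and using a placeholder token:

```shell
# REGISTRATION_TOKEN is a placeholder for the token shown by your
# GitLab instance; the key names follow the upstream runner chart.
kubectl create secret generic gitlab-runner \
  --from-literal=runner-registration-token=REGISTRATION_TOKEN \
  --from-literal=runner-token=""
```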
## Using Docker-in-Docker

In order to run Docker-in-Docker, the runner container needs to be privileged
to have access to the needed capabilities. To enable it set the `privileged`
value to `true`. See the upstream documentation in regards to why this does
not default to `true`.
### Security concerns
Privileged containers have extended capabilities, for example they can mount arbitrary files from the host they run on. Make sure to run the container in an isolated environment such that nothing important runs beside it.
## Installation command line options
| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `gitlab-runner.image` | Runner image | `gitlab/gitlab-runner:alpine-v10.5.0` |
| `gitlab-runner.gitlabUrl` | URL that the Runner uses to register to GitLab Server | GitLab external URL |
| `gitlab-runner.install` | Install the `gitlab-runner` chart | `true` |
| `gitlab-runner.imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `gitlab-runner.init.image.repository` | initContainer image | |
| `gitlab-runner.init.image.tag` | initContainer image tag | |
| `gitlab-runner.pullSecrets` | Secrets for the image repository | |
| `gitlab-runner.unregisterRunners` | Unregister all runners before termination | `true` |
| `gitlab-runner.concurrent` | Number of concurrent jobs | `20` |
| `gitlab-runner.checkInterval` | Polling interval | `30s` |
| `gitlab-runner.rbac.create` | Whether to create RBAC service account | `true` |
| `gitlab-runner.rbac.clusterWideAccess` | Deploy containers of jobs cluster-wide | `false` |
| `gitlab-runner.rbac.serviceAccountName` | Name of the RBAC service account to create | `default` |
| `gitlab-runner.runners.privileged` | Run in privileged mode, needed for `dind` | `false` |
| `gitlab-runner.runners.cache.secretName` | Secret to access key and secret key from | `gitlab-minio` |
| `gitlab-runner.runners.config` | Runner configuration as string | See below |
| `gitlab-runner.resources.limits.cpu` | Runner CPU limit | |
| `gitlab-runner.resources.limits.memory` | Runner memory limit | |
| `gitlab-runner.resources.requests.cpu` | Runner requested CPU | |
| `gitlab-runner.resources.requests.memory` | Runner requested memory | |
## Default runner configuration

The default runner configuration used in the GitLab chart has been customized
to use the included MinIO for cache by default. If you are setting the runner
`config` value, you will need to also configure your own cache configuration.
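A sketch of overriding `gitlab-runner.runners.config` with a custom cache
section. The executor image, bucket, and MinIO address are placeholders, and
the TOML layout follows the upstream GitLab Runner configuration format:

```yaml
gitlab-runner:
  runners:
    config: |
      [[runners]]
        [runners.kubernetes]
          # Placeholder default job image.
          image = "ubuntu:22.04"
        [runners.cache]
          Type = "s3"
          Path = "gitlab-runner"
          Shared = true
          [runners.cache.s3]
            # Placeholder S3-compatible endpoint and bucket.
            ServerAddress = "minio.example.com"
            BucketName = "runner-cache"
            BucketLocation = "us-east-1"
            Insecure = false
```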