All of our containers include predefined resource request values. By default, we
have not put resource limits into place. If your nodes do not have excess memory
capacity, one option is to apply memory limits, though adding more memory (or nodes)
would be preferable. (You want to avoid running out of memory on any of your
Kubernetes nodes, as the Linux kernel's out-of-memory manager may terminate essential Kubernetes processes.)
To determine our default request values, we run the application, generate various
levels of load for each service, monitor the service, and then choose what we consider
the best default value.
We will measure:
Idle Load
- No default should be below these values, but an idle process isn't useful, so typically we will not set a default based on this value.
Minimal Load
- The values required to do the most basic useful amount of work. For CPU, this will typically be used as the default, but memory requests come with the risk of the kernel reaping processes, so we will avoid using this as a memory default.
Average Loads
- What is considered average is highly dependent on the installation. For our defaults we will attempt to take a few measurements at what we consider reasonable loads (we will list the loads used). If the service has a pod autoscaler, we will typically try to set the scaling target value, as well as the default memory requests, based on these.
Stressful Task
- Measure the usage of the most stressful task the service should perform (not necessarily under load). When applying resource limits, try to set the limit above this and the average load values.
Heavy Load
- Try to come up with a stress test for the service, then measure the resource usage required to perform it. We currently don't use these values for any defaults, but users will likely want to set resource limits somewhere between the average loads/stressful task and this value.
GitLab Shell
Load was tested using a bash loop calling nohup git clone <project> <random-path-name> in order to have some concurrency.
In future tests we will try to include sustained concurrent load, to better match the types of tests we have done for the other services.
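As a sketch, the load loop looked roughly like the following (the clone count and path naming are illustrative, not the exact script we used):

```shell
# Illustrative load generator: start several clones in the background so they overlap.
for i in $(seq 1 20); do
  nohup git clone <project> "clone-${i}-${RANDOM}" >/dev/null 2>&1 &
done
wait
```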
Idle values:
- 0 tasks, 2 pods: cpu: 0, memory: 5M
Minimal Load:
- 1 task (one empty clone), 2 pods: cpu: 0, memory: 5M
Average Loads:
- 5 concurrent clones, 2 pods: cpu: 100m, memory: 5M
- 20 concurrent clones, 2 pods: cpu: 80m, memory: 6M
Stressful Task:
- SSH clone the Linux kernel (17MB/s): cpu: 280m, memory: 17M
- SSH push the Linux kernel (2MB/s): cpu: 140m, memory: 13M
- Upload connection speed was likely a factor during our tests.
Heavy Load:
- 100 concurrent clones, 4 pods: cpu: 110m, memory: 7M
Default Requests:
- cpu: 0 (from minimal load)
- memory: 6M (from average load)
- target CPU average: 100m (from average loads)
Recommended Limits:
- cpu: > 300m (greater than stressful task)
- memory: > 20M (greater than stressful task)
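As an illustration, applying the default requests and the recommended limits above through chart values could look like this (the key path follows gitlab.gitlab-shell.resources as referenced below; the numbers are the measurements above and may not suit your installation):

```yaml
gitlab:
  gitlab-shell:
    resources:
      requests:
        cpu: 0        # from minimal load
        memory: 6M    # from average load
      limits:
        cpu: 300m     # above the stressful task measurement
        memory: 20M   # above the stressful task measurement
```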
Check the troubleshooting documentation
for details on what might happen if
gitlab.gitlab-shell.resources.limits.memory
is set too low.
Webservice
Webservice resources were analyzed during testing with the
10k reference architecture.
Notes can be found in the Webservice resources documentation.
Sidekiq
Sidekiq resources were analyzed during testing with the
10k reference architecture.
Notes can be found in the Sidekiq resources documentation.
KAS
Until we learn more about our users' needs, we expect that our users will be using KAS in the following way.
Idle values:
- 0 agents connected, 2 pods: cpu: 10m, memory: 55M
Minimal Load:
- 1 agent connected, 2 pods: cpu: 10m, memory: 55M
Average Load: 1 agent is connected to the cluster.
- 5 agents connected, 2 pods: cpu: 10m, memory: 65M
Stressful Task:
- 20 agents connected, 2 pods: cpu: 30m, memory: 95M
Heavy Load:
- 50 agents connected, 2 pods: cpu: 40m, memory: 150M
Extra Heavy Load:
- 200 agents connected, 2 pods: cpu: 50m, memory: 315M
The KAS resource defaults set by this chart are more than enough to handle even the 50 agents scenario. If you are planning to reach what we consider an Extra Heavy Load, then you should consider tweaking the defaults to scale up.
Defaults: 2 pods, each with cpu: 100m and memory: 100M.
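If you do expect an Extra Heavy Load, one way to scale up is to raise the KAS requests in your values, for example (illustrative numbers, assuming the standard gitlab.kas.resources keys):

```yaml
gitlab:
  kas:
    resources:
      requests:
        cpu: 100m
        memory: 350M
```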
For more information on how these numbers were calculated, see the
issue discussion.
Backing up a GitLab installation
GitLab backups are taken by running the
backup-utility
command in the Toolbox pod provided in the chart. Backups can also be automated by enabling the Cron based backup functionality of this chart.
Before running the backup for the first time, you should ensure the Toolbox is properly configured for access to object storage.
Follow these steps for backing up a GitLab Helm chart based installation:
Create the backup
Ensure the Toolbox pod is running by executing the following command:
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
Run the backup utility
kubectl exec <Toolbox pod name> -it -- backup-utility
Visit the
gitlab-backups
bucket in the object storage service and ensure a tarball has been added. It will be named in
<timestamp>_<version>_gitlab_backup.tar
format.
This tarball is required for restoration.
Cron based backup
The Kubernetes CronJob created by the Helm chart
sets the
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
annotation on the jobTemplate. Some Kubernetes environments, such as
GKE Autopilot, don’t allow this annotation to be set and will not create
Job Pods for the backup.
Cron based backups can be enabled in this chart to happen at regular intervals as defined by the Kubernetes schedule.
You need to set the following parameters:
- gitlab.toolbox.backups.cron.enabled: Set to true to enable cron based backups.
- gitlab.toolbox.backups.cron.schedule: Set as per the Kubernetes schedule docs.
- gitlab.toolbox.backups.cron.extraArgs: Optionally set extra arguments for backup-utility (like --skip db).
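For example, enabling a daily 01:00 backup that skips the database might look like this (release and repository names, and the schedule, are illustrative):

```shell
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set gitlab.toolbox.backups.cron.enabled=true \
  --set gitlab.toolbox.backups.cron.schedule="0 1 * * *" \
  --set gitlab.toolbox.backups.cron.extraArgs="--skip db"
```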
Backup utility extra arguments
The backup utility can take some extra arguments. See what those are with:
kubectl exec <Toolbox pod name> -it -- backup-utility --help
Backup the secrets
You also need to save a copy of the rails secrets as these are not included in the backup as a security precaution. We recommend keeping your full backup that includes the database separate from the copy of the secrets.
Find the object name for the rails secrets
kubectl get secrets | grep rails-secret
Save a copy of the rails secrets
kubectl get secrets <rails-secret-name> -ojsonpath="{.data['secrets\.yml']}" | base64 --decode > gitlab-secrets.yaml
Store
gitlab-secrets.yaml
in a secure location. You need it to restore your backups.
GitLab Helm chart provides a utility pod from the Toolbox sub-chart that acts as an interface for the purpose of backing up and restoring GitLab instances. It is equipped with a
backup-utility
executable which interacts with other necessary pods for this task.
Technical details for how the utility works can be found in the architecture documentation.
Prerequisites
Backup and Restore procedures described here have only been tested with S3 compatible APIs. Support for other object storage services, like Google Cloud Storage, will be tested in future revisions.
During restoration, the backup tarball needs to be extracted to disk. This means the Toolbox pod should have disk of necessary size available.
This chart relies on the use of object storage for artifacts, uploads, packages, registry, and lfs objects, and does not currently migrate these for you during restore. If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage before taking the backup. See issue 646.
Backup and Restoring procedures
Backing up a GitLab installation
Restoring a GitLab installation
Object storage
We provide a MinIO instance out of the box when using this chart unless an external object storage is specified. The Toolbox connects to the included MinIO by default, unless specific settings are given. The Toolbox can also be configured to back up to Amazon S3 or Google Cloud Storage (GCS).
Backups to S3
The Toolbox uses s3cmd to connect to object storage. To configure connectivity to external object storage, specify gitlab.toolbox.backups.objectStorage.config.secret, which points to a Kubernetes secret containing a .s3cfg file. If the key within that secret differs from the default of config, specify it with gitlab.toolbox.backups.objectStorage.config.key; this points to the key containing the contents of the .s3cfg file.
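For example, assuming you already have a working .s3cfg locally, the secret could be created like this (the secret name storage-config is only an example and must match gitlab.toolbox.backups.objectStorage.config.secret):

```shell
kubectl create secret generic storage-config --from-file=config=.s3cfg
```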
To backup to GCS you must set
gitlab.toolbox.backups.objectStorage.backend
to
gcs
. This ensures that the Toolbox uses the
gsutil
CLI when storing and retrieving
objects. Additionally you must set
gitlab.toolbox.backups.objectStorage.config.gcpProject
to the project ID of the GCP project that contains your storage buckets.
You must create a Kubernetes secret with the contents of an active service account JSON key where the service account has the
storage.admin
role for the buckets
you will use for backup. Below is an example of using gcloud and kubectl to create the secret.
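The original example is not reproduced here; a sketch of the approach, assuming an existing service account with the storage.admin role and using storage-config as the secret name, is:

```shell
# Create a JSON key for the service account, then store it in a secret under the key
# expected by gitlab.toolbox.backups.objectStorage.config.key (default: config).
export PROJECT_ID=$(gcloud config get-value project)
gcloud iam service-accounts keys create storage.config \
  --iam-account=<service-account-name>@${PROJECT_ID}.iam.gserviceaccount.com
kubectl create secret generic storage-config --from-file=config=storage.config
```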
As the backups are assembled locally outside of the object storage target, temporary disk space is needed. The required space might exceed the size of the actual backup archive.
The default configuration will use the Toolbox pod's file system to store the temporary data. If you find the pod being evicted due to low resources, you should attach a persistent volume to the pod to hold the temporary data.
On GKE, add the following settings to your Helm command:
--set gitlab.toolbox.persistence.enabled=true
If your backups are being run as part of the included backup cron job, then you will want to enable persistence for the cron job as well:
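The original flag is not captured here; assuming the cron job exposes a matching persistence setting, it would be along the lines of:

```shell
--set gitlab.toolbox.backups.cron.persistence.enabled=true
```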
For other providers, you may need to create a persistent volume. See our Storage documentation for possible examples on how to do this.
“Bucket not found” errors
If you see
Bucket not found
errors during backups, check the
credentials are configured for your bucket.
The command depends on the cloud service provider:
For AWS S3, the credentials are stored on the toolbox pod in
~/.s3cfg
. Run:
s3cmd ls
For GCP GCS, run:
gsutil ls
You should see a list of available buckets.
“AccessDeniedException: 403” errors in GCP
An error like
[Error] AccessDeniedException: 403 <GCP Account> does not have storage.objects.list access to the Google Cloud Storage bucket.
usually happens during a backup or restore of a GitLab instance, because of missing permissions.
The backup and restore operations use all buckets in the environment, so
confirm that all buckets in your environment have been created, and that the GCP account can access (list, read, and write) all buckets:
Find your toolbox pod:
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
Get all buckets in the pod’s environment. Replace
<toolbox-pod-name>
with your actual toolbox pod name, but leave
"BUCKET_NAME"
as it is:
kubectl describe pod <toolbox-pod-name> | grep "BUCKET_NAME"
Confirm that you have access to every bucket in the environment:
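For example, on GCS you can list each bucket from inside the Toolbox pod (replace the bucket name with each value returned by the previous step):

```shell
kubectl exec <toolbox-pod-name> -it -- gsutil ls gs://<bucket-name>
```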
To obtain a backup tarball of an existing GitLab instance that used other installation methods like an Omnibus GitLab
package or Omnibus GitLab Helm chart, follow the instructions
given in the documentation.
If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage
before taking the backup. See issue 646.
It is recommended that you restore a backup to the same version of GitLab on which it was created.
GitLab backup restores are taken by running the
backup-utility
command on the Toolbox pod provided in the chart.
Before running the restore for the first time, you should ensure the Toolbox is properly configured for access to object storage.
The backup utility provided by the GitLab Helm chart supports restoring a tarball from any of the following locations:
- The gitlab-backups bucket in the object storage service associated to the instance. This is the default scenario.
- A public URL that can be accessed from the pod.
- A local file that you can copy to the Toolbox pod using kubectl cp.
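For the local-file case, you can copy the tarball into the Toolbox pod first, for example (the /tmp destination is only an example of a writable path; adjust as needed):

```shell
kubectl cp <timestamp>_<version>_gitlab_backup.tar <Toolbox pod name>:/tmp/<timestamp>_<version>_gitlab_backup.tar
```

You can then restore from it using a file:// URL as described below.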
Restoring the secrets
Restore the rails secrets
The GitLab chart expects rails secrets to be provided as a Kubernetes Secret with content in YAML. If you are restoring the rails secret from an Omnibus GitLab instance, secrets are stored in JSON format in the
/etc/gitlab/gitlab-secrets.json
file. To convert the file and create the secret in YAML format:
Copy the file
/etc/gitlab/gitlab-secrets.json
to the workstation where you run
kubectl
commands.
Install the yq tool (version 4.21.1 or later) on your workstation.
Run the following command to convert your
gitlab-secrets.json
to YAML format:
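The conversion command is not captured in this copy; a sketch using yq v4, assuming the values you need live under the gitlab_rails key of the JSON file, is:

```shell
# Wrap the Omnibus rails secrets under a top-level `production:` key and emit YAML.
yq eval --prettyPrint '{"production": .gitlab_rails}' gitlab-secrets.json > gitlab-secrets.yaml
```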
Make sure you have a running GitLab instance by deploying the charts. Ensure the Toolbox pod is enabled and running by executing the following command
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
Get the tarball ready in any of the above locations. Make sure it is named in the
<timestamp>_<version>_gitlab_backup.tar
format.
Run the backup utility to restore the tarball
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <timestamp>_<version>
Here,
<timestamp>_<version>
is from the name of the tarball stored in
gitlab-backups
bucket. In case you want to provide a public URL, use the following command
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f <URL>
You can provide a local path as a URL as long as it’s in the format:
file:///<path>
This process will take time depending on the size of the tarball.
The restoration process will erase the existing contents of database, move existing repositories to temporary locations and extract the contents of the tarball. Repositories will be moved to their corresponding locations on the disk and other data, like artifacts, uploads, LFS etc. will be uploaded to corresponding buckets in Object Storage.
During restoration, the backup tarball needs to be extracted to disk.
This means the Toolbox pod should have disk of necessary size available.
For more details and configuration please see the Toolbox documentation.
Restore the runner registration token
After restoring, the included runner will not be able to register to the instance because it no longer has the correct registration token.
Follow these troubleshooting steps to get it updated.
Enable Kubernetes related settings
If the restored backup was not from an existing installation of the chart, you will also need to enable some Kubernetes specific features after the restore, such as incremental CI job logging.
Find your Toolbox pod by executing the following command
kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
Run the instance setup script to enable the necessary features
kubectl exec <Toolbox pod name> -it -- gitlab-rails runner -e production /scripts/custom-instance-setup
Restart the pods
In order to use the new changes, the Webservice and Sidekiq pods need to be restarted. The safest way to restart those pods is to run:
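The restart commands are not captured in this copy; one approach, assuming the chart's default component labels and your release name, is to delete the pods and let their Deployments recreate them:

```shell
kubectl delete pods -lrelease=RELEASE_NAME,app=webservice
kubectl delete pods -lrelease=RELEASE_NAME,app=sidekiq
```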
The restoration process does not update the
gitlab-initial-root-password
secret with the value from backup. For logging in as
root
, use the original password included in the backup. In the case that the password is no longer accessible, follow the steps below to reset it.
Attach to the Webservice pod by executing the command
kubectl exec <Webservice pod name> -it -- bash
Run the following command to reset the password of the root user. Replace #{password} with a password of your choice:
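The command itself is missing from this copy; a sketch using the Rails runner from inside the Webservice pod shell (assuming the gitlab-rails wrapper is available there) is:

```shell
gitlab-rails runner "user = User.find_by_username('root'); user.password = '#{password}'; user.password_confirmation = '#{password}'; user.save!"
```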
Using certmanager-issuer for CertManager Issuer creation
This chart is a helper for Jetstack’s CertManager Helm chart.
It automatically provisions an Issuer object, used by CertManager when requesting TLS certificates for
GitLab Ingresses.
Configuration
We describe all the major sections of the configuration below. When configuring
from the parent chart, these values are:
certmanager-issuer:
  # Configure an ACME Issuer in cert-manager. Only used if global.ingress.configureCertmanager is true.
  server: https://acme-v02.api.letsencrypt.org/directory

  # Provide an email to associate with your TLS certificates
  # email:

  rbac:
    create: true

  resources:
    requests:
      cpu: 50m

  # Priority class assigned to pods
  priorityClassName: ""

  common:
    labels: {}
Installation parameters
This table contains all the possible chart configurations that can be supplied to the helm install command using the --set flags:
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| server | https://acme-v02.api.letsencrypt.org/directory | Let's Encrypt server for use with the ACME CertManager Issuer. |
| email | | You must provide an email to associate with your TLS certificates. Let's Encrypt uses this address to contact you about expiring certificates, and issues related to your account. |
| rbac.create | true | When true, creates RBAC-related resources to allow for manipulation of CertManager Issuer objects. |
| resources.requests.cpu | 50m | Requested CPU resources for the Issuer creation Job. |
| common.labels | | Common labels to apply to the ServiceAccount, Job, ConfigMap, and Issuer. |
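For example, supplying the required email when installing from the parent chart (release and chart names are illustrative):

```shell
helm upgrade --install gitlab gitlab/gitlab \
  --set certmanager-issuer.email=you@example.com
```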
The
gitaly
sub-chart provides a configurable deployment of Gitaly Servers.
Requirements
This chart depends on access to the Workhorse service, either as part of the
complete GitLab chart or provided as an external service reachable from the Kubernetes
cluster this chart is deployed onto.
Design Choices
The Gitaly container used in this chart also contains the GitLab Shell codebase in
order to perform the actions on the Git repositories that have not yet been ported into Gitaly.
The Gitaly container includes a copy of the GitLab Shell container within it, and
as a result we also need to configure GitLab Shell within this chart.
Configuration
The
gitaly
chart is configured in two parts: external services,
and chart settings.
Gitaly is by default deployed as a component when deploying the GitLab
chart. If deploying Gitaly separately,
global.gitaly.enabled
needs to
be set to
false
and additional configuration will need to be performed
as described in the external Gitaly documentation.
Installation command line options
The table below contains all the possible chart configurations that can be supplied to the helm install command using the --set flags.
| Parameter | Default | Description |
| --------- | ------- | ----------- |
| annotations | | Pod annotations |
| common.labels | {} | Supplemental labels that are applied to all objects created by this chart. |
| podLabels | | Supplemental Pod labels. Will not be used for selectors. |
| external[].hostname | - "" | hostname of external node |
| external[].name | - "" | name of external node storage |
| external[].port | - "" | port of external node |
| extraContainers | | List of extra containers to include |
| extraInitContainers | | List of extra init containers to include |
| extraVolumeMounts | | List of extra volume mounts to add |
| extraVolumes | | List of extra volumes to create |
| extraEnv | | List of extra environment variables to expose |
| extraEnvFrom | | List of extra environment variables from other data sources to expose |
| gitaly.serviceName | | The name of the generated Gitaly service. Overrides global.gitaly.serviceName, and defaults to <RELEASE-NAME>-gitaly |
| image.pullPolicy | Always | Gitaly image pull policy |
| image.pullSecrets | | Secrets for the image repository |
| image.repository | registry.com/gitlab-org/build/cng/gitaly | Gitaly image repository |
| image.tag | master | Gitaly image tag |
| init.image.repository | | initContainer image |
| init.image.tag | | initContainer image tag |
| internal.names[] | - default | Ordered names of StatefulSet storages |
| serviceLabels | {} | Supplemental service labels |
| service.externalPort | 8075 | Gitaly service exposed port |
| service.internalPort | 8075 | Gitaly internal port |
| service.name | gitaly | The name of the Service port that Gitaly is behind in the Service object. |
| service.type | ClusterIP | Gitaly service type |
| securityContext.fsGroup | 1000 | Group ID under which the pod should be started |
| securityContext.fsGroupChangePolicy | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| securityContext.runAsUser | 1000 | User ID under which the pod should be started |
| tolerations | [] | Toleration labels for pod assignment |
| persistence.accessMode | ReadWriteOnce | Gitaly persistence access mode |
| persistence.annotations | | Gitaly persistence annotations |
| persistence.enabled | true | Gitaly enable persistence flag |
| persistence.matchExpressions | | Label-expression matches to bind |
| persistence.matchLabels | | Label-value matches to bind |
| persistence.size | 50Gi | Gitaly persistence volume size |
| persistence.storageClass | | storageClassName for provisioning |
| persistence.subPath | | Gitaly persistence volume mount path |
| priorityClassName | | Gitaly StatefulSet priorityClassName |
| logging.level | | Log level |
| logging.format | json | Log format |
| logging.sentryDsn | | Sentry DSN URL - Exceptions from Go server |
| logging.rubySentryDsn | | Sentry DSN URL - Exceptions from gitaly-ruby |
| logging.sentryEnvironment | | Sentry environment to be used for logging |
| ruby.maxRss | | Gitaly-Ruby resident set size (RSS) that triggers a memory restart (bytes) |
| ruby.gracefulRestartTimeout | | Graceful period before a force restart after exceeding Max RSS |
| ruby.restartDelay | | Time that Gitaly-Ruby memory must remain high before a restart (seconds) |
| ruby.numWorkers | | Number of Gitaly-Ruby worker processes |
| shell.concurrency[] | | Concurrency of each RPC endpoint. Specified using keys rpc and maxPerRepo |
| packObjectsCache.enabled | false | Enable the Gitaly pack-objects cache |
| packObjectsCache.dir | /home/git/repositories/+gitaly/PackObjectsCache | Directory where cache files get stored |
| packObjectsCache.max_age | 5m | Cache entries lifespan |
| git.catFileCacheSize | | Cache size used by Git cat-file process |
| git.config[] | [] | Git configuration that Gitaly should set when spawning Git commands |
| prometheus.grpcLatencyBuckets | | Buckets corresponding to histogram latencies on GRPC method calls to be recorded by Gitaly. A string form of the array (for example, "[1.0, 1.5, 2.0]") is required as input |
| statefulset.strategy | {} | Allows one to configure the update strategy utilized by the StatefulSet |
| statefulset.livenessProbe.initialDelaySeconds | 30 | Delay before liveness probe is initiated |
| statefulset.livenessProbe.periodSeconds | 10 | How often to perform the liveness probe |
| statefulset.livenessProbe.timeoutSeconds | 3 | When the liveness probe times out |
| statefulset.livenessProbe.successThreshold | 1 | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| statefulset.livenessProbe.failureThreshold | 3 | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| statefulset.readinessProbe.initialDelaySeconds | 10 | Delay before readiness probe is initiated |
| statefulset.readinessProbe.periodSeconds | 10 | How often to perform the readiness probe |
| statefulset.readinessProbe.timeoutSeconds | 3 | When the readiness probe times out |
| statefulset.readinessProbe.successThreshold | 1 | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| statefulset.readinessProbe.failureThreshold | 3 | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| metrics.enabled | false | If a metrics endpoint should be made available for scraping |
| metrics.port | 9236 | Metrics endpoint port |
| metrics.path | /metrics | Metrics endpoint path |
| metrics.serviceMonitor.enabled | false | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the prometheus.io scrape annotations |
| metrics.serviceMonitor.additionalLabels | {} | Additional labels to add to the ServiceMonitor |
| metrics.serviceMonitor.endpointConfig | {} | Additional endpoint configuration for the ServiceMonitor |
| metrics.metricsPort | | DEPRECATED: Use metrics.port |
Chart configuration examples
extraEnv
extraEnv
allows you to expose additional environment variables in all containers in the pods.
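The example block is not captured in this copy; a minimal sketch (the variable names are placeholders):

```yaml
extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
```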
priorityClassName
allows you to assign a PriorityClass
to the Gitaly pods.
Below is an example use of priorityClassName:
priorityClassName: persistence-enabled
git.config
git.config
allows you to add configuration to all Git commands spawned by
Gitaly. Accepts configuration as documented in
git-config(1)
in
key
/
value
pairs, as shown below.
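The example block is not captured in this copy; a sketch, with pack.threads chosen purely for illustration:

```yaml
git:
  config:
    - key: "pack.threads"
      value: "4"
```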
Gitaly
StatefulSet
performance may suffer when repositories have large
amounts of files.
Mitigate the issue by changing or fully deleting the settings for the
securityContext
.
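The referenced example is not captured in this copy; one form consistent with the surrounding text, when configuring through the parent chart, sets the fields to empty strings so the rendered StatefulSet omits them:

```yaml
gitlab:
  gitaly:
    securityContext:
      fsGroup: ""
      runAsUser: ""
```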
The example syntax eliminates the
securityContext
setting entirely.
Setting
securityContext: {}
or
securityContext:
does not work due
to the way Helm merges default values with user provided configuration.
Starting from Kubernetes 1.23 you can instead set the
fsGroupChangePolicy
to
OnRootMismatch
to mitigate the issue.
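In values form, that alternative could look like this (again via the parent chart):

```yaml
gitlab:
  gitaly:
    securityContext:
      fsGroupChangePolicy: "OnRootMismatch"
```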
The following settings configure how the chart connects to the external Workhorse service:

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| host | String | | The hostname of the Workhorse server. This can be omitted in lieu of serviceName. |
| port | Integer | 8181 | The port on which to connect to the Workhorse server. |
| serviceName | String | webservice | The name of the service which is operating the Workhorse server. If this is present, and host is not, the chart will template the hostname of the service (and current .Release.Name) in place of the host value. This is convenient when using Workhorse as a part of the overall GitLab chart. |
Chart settings
The following values are used to configure the Gitaly Pods.
Gitaly uses an Auth Token to authenticate with the Workhorse and Sidekiq
services. The Auth Token secret and key are sourced from the
global.gitaly.authToken
value. Additionally, the Gitaly container has a copy of GitLab Shell, which has some configuration
that can be set. The Shell authToken is sourced from the
global.shell.authToken
values.
Git Repository Persistence
This chart provisions a PersistentVolumeClaim and mounts a corresponding persistent
volume for the Git repository data. You’ll need physical storage available in the
Kubernetes cluster for this to work. If you’d rather use emptyDir, disable PersistentVolumeClaim
with:
persistence.enabled: false
.
The persistence settings for Gitaly are used in a volumeClaimTemplate
that should be valid for all your Gitaly pods. You should
not
include settings
that are meant to reference a single specific volume (such as
volumeName
). If you want
to reference a specific volume, you need to manually create the PersistentVolumeClaim.
You can’t change these through our settings once you’ve deployed. In StatefulSet
the
VolumeClaimTemplate
is immutable.
The persistence settings are:

| Name | Type | Default | Description |
| ---- | ---- | ------- | ----------- |
| accessMode | String | ReadWriteOnce | Sets the accessMode requested in the PersistentVolumeClaim. See Kubernetes Access Modes Documentation for details. |
| enabled | Boolean | true | Sets whether or not to use a PersistentVolumeClaim for the repository data. If false, an emptyDir volume is used. |
| matchExpressions | Array | | Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation. |
| matchLabels | Map | | Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim selector section. See the volumes documentation. |
| size | String | 50Gi | The minimum volume size to request for the data persistence. |
| storageClass | String | | Sets the storageClassName on the Volume Claim for dynamic provisioning. When unset or null, the default provisioner will be used. If set to a hyphen, dynamic provisioning is disabled. |
| subPath | String | | Sets the path within the volume to mount, rather than the volume root. The root is used if the subPath is empty. |
| annotations | Map | | Sets the annotations on the Volume Claim for dynamic provisioning. See Kubernetes Annotations Documentation for details. |
Running Gitaly over TLS
This section refers to Gitaly being run inside the cluster using the Helm charts. If you are using an external Gitaly instance and want to use TLS for communicating with it, refer to the external Gitaly documentation.
Gitaly supports communicating with other components over TLS. This is controlled
by the settings
global.gitaly.tls.enabled
and
global.gitaly.tls.secretName
.
Follow the steps to run Gitaly over TLS:
The Helm chart expects a certificate to be provided for communicating over
TLS with Gitaly. This certificate should apply to all the Gitaly nodes that
are present. Hence all hostnames of each of these Gitaly nodes should be
added as a Subject Alternate Name (SAN) to the certificate.
To know the hostnames to use, check the /srv/gitlab/config/gitlab.yml file in the Toolbox pod and look at the various gitaly_address fields specified under the repositories.storages key within it.
A basic script for generating custom signed certificates for internal Gitaly pods can be found in this repository. Users can use or refer to that script to generate certificates with proper SAN attributes.
Create a k8s TLS secret using the certificate created.
Redeploy the Helm chart by passing
--set global.gitaly.tls.enabled=true
.
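For example, assuming the certificate and key produced in the previous steps are in gitaly.crt and gitaly.key, and using gitaly-server-tls as the secret name:

```shell
kubectl create secret tls gitaly-server-tls --cert=gitaly.crt --key=gitaly.key
helm upgrade gitlab gitlab/gitlab \
  --reuse-values \
  --set global.gitaly.tls.enabled=true \
  --set global.gitaly.tls.secretName=gitaly-server-tls
```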
Global server hooks
The Gitaly StatefulSet has support for Global server hooks. The hook scripts run on the Gitaly pod, and are therefore limited to the tools available in the Gitaly container.
The hooks are populated using ConfigMaps, and can be used by setting the following values as appropriate:
global.gitaly.hooks.preReceive.configmap
global.gitaly.hooks.postReceive.configmap
global.gitaly.hooks.update.configmap
To populate the ConfigMap, you can point
kubectl
to a directory of scripts:
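For example (the ConfigMap name and directory are placeholders; the ConfigMap name must then be supplied via the corresponding global.gitaly.hooks.*.configmap value):

```shell
kubectl create configmap pre-receive-hooks --from-file=/path/to/scripts/
```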