Architecture of Cloud native GitLab Helm charts
Documentation Organization:
- Goals
- Architecture
- Design Decisions
- Resource Usage
All of our containers include predefined resource request values. By default we
have not put resource limits into place. If your nodes do not have excess memory
capacity, one option is to apply memory limits, though adding more memory (or nodes)
would be preferable. (You want to avoid running out of memory on any of your
Kubernetes nodes, as the Linux kernel's out-of-memory killer may end essential Kubernetes processes.)
To determine our default request values, we run the application and generate
various levels of load for each service. We monitor the service and make a call
on what we think is the best default value.
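Usage during such runs can be observed with the Kubernetes metrics API; a minimal sketch, assuming the metrics-server is installed in the cluster and `RELEASE_NAME` is your Helm release name:

```shell
# Show per-container CPU and memory usage for all pods in the release.
kubectl top pods -lrelease=RELEASE_NAME --containers
```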
We will measure:
- Idle Load - No default should be below these values, but an idle process isn't useful, so typically we will not set a default based on this value.
- Minimal Load - The values required to do the most basic useful amount of work. Typically, for CPU, this will be used as the default, but memory requests come with the risk of the kernel reaping processes, so we will avoid using this as a memory default.
- Average Loads - What is considered average is highly dependent on the installation. For our defaults we will attempt to take a few measurements at what we consider reasonable loads (we will list the loads used). If the service has a pod autoscaler, we will typically try to set the scaling target value, as well as the default memory requests, based on these.
- Stressful Task - Measure the usage of the most stressful task the service should perform (not necessarily under load). When applying resource limits, try to set the limit above this and the average load values.
- Heavy Load - Try to come up with a stress test for the service, then measure the resource usage required to do it. We currently don't use these values for any defaults, but users will likely want to set resource limits somewhere between the average loads/stress task and this value.
Load on GitLab Shell was tested using a bash loop calling `nohup git clone <project> <random-path-name>` in order to have some concurrency.
In future tests we will try to include sustained concurrent load, to better match the types of tests we have done for the other services.
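A loop along these lines produces that kind of concurrency; this is an illustrative sketch only, and the project URL, clone count, and target paths are placeholders rather than the values used in the original tests:

```shell
# Fire off a batch of background clones of the same project to create
# concurrent GitLab Shell (SSH) traffic. All values are placeholders.
PROJECT_URL="git@gitlab.example.com:group/project.git"
for i in $(seq 1 20); do
  nohup git clone "${PROJECT_URL}" "clone-${i}" >/dev/null 2>&1 &
done
wait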
| Measurement | CPU | Memory |
|---|---|---|
| Idle | - | 5M |
| Minimal load | - | 5M |
| Average load | 100m | 5M |
| Average load | 80m | 6M |
| Stressful task | 280m | 17M |
| Stressful task | 140m | 13M |
| Heavy load | 110m | 7M |
| Default request | 100m (from average loads) | 6M (from average load) |
| Recommended limit | > 300m (greater than stress task) | > 20M (greater than stress task) |
Check the troubleshooting documentation for details on what might happen if `gitlab.gitlab-shell.resources.limits.memory` is set too low.
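If you do decide to apply a limit, it can be supplied through the chart values; a hedged example with a placeholder value chosen to sit above the stressful-task and average-load measurements:

```shell
# Apply a memory limit to GitLab Shell pods (value is a placeholder; pick one
# above your own observed average-load and stressful-task usage).
helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.gitlab-shell.resources.limits.memory=64M
```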
Webservice resources were analyzed during testing with the 10k reference architecture. Notes can be found in the Webservice resources documentation.
Sidekiq resources were analyzed during testing with the 10k reference architecture. Notes can be found in the Sidekiq resources documentation.
Until we learn more about our users' needs, we expect that our users will be using KAS in the following ways, ordered from the lightest to the heaviest expected load:

| CPU | Memory |
|---|---|
| 10m | 55M |
| 10m | 55M |
| 10m | 65M |
| 30m | 95M |
| 40m | 150M |
| 50m | 315M |
The KAS resource defaults set by this chart are more than enough to handle even the 50 agents scenario. If you are planning to reach what we consider an Extra Heavy Load, then you should consider tweaking the defaults (`100m` CPU, `100M` memory) to scale up.
For more information on how these numbers were calculated, see the issue discussion.
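Scaling up the KAS requests can be done through the chart values; a sketch assuming the standard sub-chart `resources` settings under `gitlab.kas`, with placeholder values:

```shell
# Raise the KAS resource requests above the chart defaults
# (values are placeholders; size them for your agent count).
helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.kas.resources.requests.cpu=200m \
  --set gitlab.kas.resources.requests.memory=350M
```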
GitLab backups are taken by running the `backup-utility` command in the Toolbox pod provided in the chart. Backups can also be automated by enabling the Cron based backup functionality of this chart.
Before running the backup for the first time, you should ensure the Toolbox is properly configured for access to object storage.
Follow these steps for backing up a GitLab Helm chart based installation:
1. Ensure the Toolbox pod is running, by executing the following command:
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
2. Run the backup utility:
   kubectl exec <Toolbox pod name> -it -- backup-utility
3. Visit the `gitlab-backups` bucket in the object storage service and ensure a tarball has been added. It will be named in `<timestamp>_<version>_gitlab_backup.tar` format.
This tarball is required for restoration.
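The bucket contents can also be checked from inside the Toolbox pod; a small sketch, assuming the default `gitlab-backups` bucket name:

```shell
# List backup tarballs in the configured backups bucket from the Toolbox pod.
kubectl exec <Toolbox pod name> -it -- s3cmd ls s3://gitlab-backups/
```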
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
Cron based backups can be enabled in this chart to happen at regular intervals as defined by the Kubernetes schedule. You need to set the following parameters:
- `gitlab.toolbox.backups.cron.enabled`: Set to true to enable cron based backups.
- `gitlab.toolbox.backups.cron.schedule`: Set as per the Kubernetes schedule docs.
- `gitlab.toolbox.backups.cron.extraArgs`: Optionally set extra arguments for `backup-utility` (like `--skip db`).
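Putting those parameters together, a hedged example of a daily schedule; the schedule and the skipped component are placeholders to adjust for your installation:

```shell
# Enable a scheduled backup at 02:00 every day, skipping the registry data.
helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.toolbox.backups.cron.enabled=true \
  --set gitlab.toolbox.backups.cron.schedule="0 2 * * *" \
  --set gitlab.toolbox.backups.cron.extraArgs="--skip registry"
```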
The backup utility can take some extra arguments. See what those are with:
kubectl exec <Toolbox pod name> -it -- backup-utility --help
You also need to save a copy of the rails secrets as these are not included in the backup as a security precaution. We recommend keeping your full backup that includes the database separate from the copy of the secrets.
1. Find the object name for the rails secrets:
   kubectl get secrets | grep rails-secret
2. Save a copy of the rails secrets:
   kubectl get secrets <rails-secret-name> -o jsonpath="{.data['secrets\.yml']}" | base64 --decode > gitlab-secrets.yaml
3. Store `gitlab-secrets.yaml` in a secure location. You need it to restore your backups.
The GitLab Helm chart provides a utility pod from the Toolbox sub-chart that acts as an interface for backing up and restoring GitLab instances. It is equipped with a `backup-utility` executable which interacts with other necessary pods for this task. Technical details for how the utility works can be found in the architecture documentation.
Backup and Restore procedures described here have only been tested with S3 compatible APIs. Support for other object storage services, like Google Cloud Storage, will be tested in future revisions.
During restoration, the backup tarball needs to be extracted to disk. This means the Toolbox pod should have enough disk space available to hold the extracted backup.
This chart relies on the use of object storage for `artifacts`, `uploads`, `packages`, `registry` and `lfs` objects, and does not currently migrate these for you during restore. If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage before taking the backup. See issue 646.
We provide a MinIO instance out of the box when using this chart unless an external object storage is specified. The Toolbox connects to the included MinIO by default, unless specific settings are given. The Toolbox can also be configured to back up to Amazon S3 or Google Cloud Storage (GCS).
The Toolbox uses `s3cmd` to connect to object storage. In order to configure connectivity to external object storage, `gitlab.toolbox.backups.objectStorage.config.secret` should be specified, which points to a Kubernetes secret containing a `.s3cfg` file. `gitlab.toolbox.backups.objectStorage.config.key` should be specified if different from the default of `config`. This points to the key containing the contents of a `.s3cfg` file.
It should look like this:
helm install gitlab gitlab/gitlab \
  --set gitlab.toolbox.backups.objectStorage.config.secret=my-s3cfg \
  --set gitlab.toolbox.backups.objectStorage.config.key=config
The `s3cmd` `.s3cfg` file documentation can be found here.
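A secret of that shape can be created along these lines; this is a minimal sketch, assuming S3-style credentials, and the access key, secret key, and region values are placeholders:

```shell
# Build a minimal s3cmd configuration file (placeholder credentials).
cat > .s3cfg <<EOF
[default]
access_key = <AWS_ACCESS_KEY_ID>
secret_key = <AWS_SECRET_ACCESS_KEY>
bucket_location = us-east-1
EOF

# Store it in the secret referenced by gitlab.toolbox.backups.objectStorage.config.secret,
# under the default key name "config".
kubectl create secret generic my-s3cfg --from-file=config=.s3cfg
```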
In addition, two bucket locations need to be configured, one for storing the backups, and one temporary bucket that is used when restoring a backup.
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
To back up to GCS you must set `gitlab.toolbox.backups.objectStorage.backend` to `gcs`. This ensures that the Toolbox uses the `gsutil` CLI when storing and retrieving objects. Additionally, you must set `gitlab.toolbox.backups.objectStorage.config.gcpProject` to the project ID of the GCP project that contains your storage buckets.
You must create a Kubernetes secret with the contents of an active service account JSON key, where the service account has the `storage.admin` role for the buckets you will use for backup. Below is an example of using `gcloud` and `kubectl` to create the secret.
# Use the currently configured gcloud project
export PROJECT_ID=$(gcloud config get-value project)
# Create a dedicated service account for backups
gcloud iam service-accounts create gitlab-gcs --display-name "Gitlab Cloud Storage"
# Grant the service account the storage.admin role on the project
gcloud projects add-iam-policy-binding --role roles/storage.admin ${PROJECT_ID} --member=serviceAccount:gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com
# Create a JSON key for the service account, saved as storage.config
gcloud iam service-accounts keys create --iam-account gitlab-gcs@${PROJECT_ID}.iam.gserviceaccount.com storage.config
# Store the key in a Kubernetes secret under the key "config"
kubectl create secret generic storage-config --from-file=config=storage.config
Configure your Helm chart as follows to use the service account key to authenticate to GCS for backups:
helm install gitlab gitlab/gitlab \
--set gitlab.toolbox.backups.objectStorage.config.secret=storage-config \
--set gitlab.toolbox.backups.objectStorage.config.key=config \
--set gitlab.toolbox.backups.objectStorage.config.gcpProject=my-gcp-project-id \
--set gitlab.toolbox.backups.objectStorage.backend=gcs
In addition, two bucket locations need to be configured, one for storing the backups, and one temporary bucket that is used when restoring a backup.
--set global.appConfig.backups.bucket=gitlab-backup-storage
--set global.appConfig.backups.tmpBucket=gitlab-tmp-storage
As the backups are assembled locally outside of the object storage target, temporary disk space is needed. The required space might exceed the size of the actual backup archive.
The default configuration uses the Toolbox pod's file system to store the temporary data. If you find the pod being evicted due to low resources, you should attach a persistent volume to the pod to hold the temporary data.
On GKE, add the following settings to your Helm command:
--set gitlab.toolbox.persistence.enabled=true
If your backups are being run as part of the included backup cron job, then you will want to enable persistence for the cron job as well:
--set gitlab.toolbox.backups.cron.persistence.enabled=true
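If the default volume is too small for your instance, the volume size can be raised at the same time; a hedged sketch, assuming the sub-chart exposes the usual `persistence.size` settings (verify the key names against your chart version), with a placeholder size:

```shell
# Enable persistence for the Toolbox and the backup CronJob, and request a
# larger volume for staging the backup data (size is a placeholder).
helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.toolbox.persistence.enabled=true \
  --set gitlab.toolbox.persistence.size=50Gi \
  --set gitlab.toolbox.backups.cron.persistence.enabled=true \
  --set gitlab.toolbox.backups.cron.persistence.size=50Gi
```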
For other providers, you may need to create a persistent volume. See our Storage documentation for possible examples on how to do this.
If you see `Bucket not found` errors during backups, check that the credentials are configured for your bucket. The command depends on the cloud service provider:
- For AWS S3, the credentials are stored on the Toolbox pod in `~/.s3cfg`. Run:
  s3cmd ls
- For GCP GCS, run:
  gsutil ls
You should see a list of available buckets.
An error like `[Error] AccessDeniedException: 403 <GCP Account> does not have storage.objects.list access to the Google Cloud Storage bucket.` usually happens during a backup or restore of a GitLab instance, because of missing permissions.
The backup and restore operations use all buckets in the environment, so confirm that all buckets in your environment have been created, and that the GCP account can access (list, read, and write) all buckets:
1. Find your Toolbox pod:
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
2. Get all buckets in the pod's environment. Replace `<toolbox-pod-name>` with your actual Toolbox pod name, but leave `"BUCKET_NAME"` as it is:
   kubectl describe pod <toolbox-pod-name> | grep "BUCKET_NAME"
3. Confirm that you have access to every bucket in the environment:
# List
gsutil ls gs://<bucket-to-validate>/
# Read
gsutil cp gs://<bucket-to-validate>/<object-to-get> <save-to-location>
# Write
gsutil cp -n <local-file> gs://<bucket-to-validate>/
To obtain a backup tarball of an existing GitLab instance that used other installation methods like an Omnibus GitLab package or the Omnibus GitLab Helm chart, follow the instructions given in the documentation.
If you are restoring a backup taken from another instance, you must migrate your existing instance to using object storage before taking the backup. See issue 646.
It is recommended that you restore a backup to the same version of GitLab on which it was created.
GitLab backup restores are taken by running the `backup-utility` command on the Toolbox pod provided in the chart.
Before running the restore for the first time, you should ensure the Toolbox is properly configured for access to object storage.
The backup utility provided by the GitLab Helm chart supports restoring a tarball from any of the following locations:
- The `gitlab-backups` bucket in the object storage service associated with the instance. This is the default scenario.
- A public URL reachable from the pod.
- A local file copied into the Toolbox pod, for example with `kubectl cp`.
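For the local-file case, a minimal sketch of copying a tarball into the pod and restoring it with the `file://` form described later; the `/tmp` destination path is an assumption, so use any writable path with enough free space:

```shell
# Copy a locally stored backup tarball into the Toolbox pod (path is illustrative).
kubectl cp <timestamp>_<version>_gitlab_backup.tar <Toolbox pod name>:/tmp/

# Restore from the copied file using a file:// URL.
kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f file:///tmp/<timestamp>_<version>_gitlab_backup.tar
```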
The GitLab chart expects rails secrets to be provided as a Kubernetes Secret with content in YAML. If you are restoring the rails secret from an Omnibus GitLab instance, secrets are stored in JSON format in the `/etc/gitlab/gitlab-secrets.json` file. To convert the file and create the secret in YAML format:
1. Copy the file `/etc/gitlab/gitlab-secrets.json` to the workstation where you run `kubectl` commands.
2. Install the yq tool (version 4.21.1 or later) on your workstation.
3. Run the following command to convert your `gitlab-secrets.json` to YAML format:
   yq -P '{"production": .gitlab_rails}' gitlab-secrets.json >> gitlab-secrets.yaml
4. Check that the new `gitlab-secrets.yaml` file has the following contents:
production:
  db_key_base: <your key base value>
  secret_key_base: <your secret key base value>
  otp_key_base: <your otp key base value>
  openid_connect_signing_key: <your openid signing key>
  ci_jwt_signing_key: <your ci jwt signing key>
To restore the rails secrets from a YAML file:
1. Find the object name for the rails secrets:
   kubectl get secrets | grep rails-secret
2. Delete the existing secret:
   kubectl delete secret <rails-secret-name>
3. Create the new secret using the same name as the old, passing in your local YAML file:
   kubectl create secret generic <rails-secret-name> --from-file=secrets.yml=gitlab-secrets.yaml
4. In order to use the new secrets, the Webservice, Sidekiq and Toolbox pods need to be restarted. The safest way to restart those pods is to run:
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
kubectl delete pods -lapp=toolbox,release=<helm release name>
The steps for restoring a GitLab installation are:
1. Make sure you have a running GitLab instance by deploying the charts. Ensure the Toolbox pod is enabled and running by executing the following command:
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
2. Ensure the backup tarball is available in the `gitlab-backups` bucket of the object storage service. It is named in `<timestamp>_<version>_gitlab_backup.tar` format.
3. Run the backup utility to restore the tarball:
   kubectl exec <Toolbox pod name> -it -- backup-utility --restore -t <timestamp>_<version>
   Here, `<timestamp>_<version>` is from the name of the tarball stored in the `gitlab-backups` bucket. In case you want to provide a public URL, use the following command:
   kubectl exec <Toolbox pod name> -it -- backup-utility --restore -f <URL>
   You can provide a local path as a URL as long as it's in the format: `file:///<path>`
After restoring, the included runner will not be able to register to the instance because it no longer has the correct registration token. Follow these troubleshooting steps to get it updated.
If the restored backup was not from an existing installation of the chart, you will also need to enable some Kubernetes specific features after the restore, such as incremental CI job logging.
1. Find your Toolbox pod by executing the following command:
   kubectl get pods -lrelease=RELEASE_NAME,app=toolbox
2. Run the instance setup script to enable the necessary features:
   kubectl exec <Toolbox pod name> -it -- gitlab-rails runner -e production /scripts/custom-instance-setup
3. In order to use the new changes, the Webservice and Sidekiq pods need to be restarted. The safest way to restart those pods is to run:
kubectl delete pods -lapp=sidekiq,release=<helm release name>
kubectl delete pods -lapp=webservice,release=<helm release name>
The restoration process does not update the `gitlab-initial-root-password` secret with the value from the backup. For logging in as `root`, use the original password included in the backup. In the case that the password is no longer accessible, follow the steps below to reset it:
1. Attach to the Webservice pod by executing the command:
   kubectl exec <Webservice pod name> -it -- bash
2. Run the following command to reset the password of the `root` user. Replace `#{password}` with a password of your choice:
/srv/gitlab/bin/rails runner "user = User.first; user.password='#{password}'; user.password_confirmation='#{password}'; user.save!"
This chart is a helper for Jetstack's CertManager Helm chart. It automatically provisions an Issuer object, used by CertManager when requesting TLS certificates for GitLab Ingresses.
We describe all the major sections of the configuration below. When configuring from the parent chart, these values are:
certmanager-issuer:
  # Configure an ACME Issuer in cert-manager. Only used if global.ingress.configureCertmanager is true.
  server: https://acme-v02.api.letsencrypt.org/directory
  # Provide an email to associate with your TLS certificates
  # email:
  rbac:
    create: true
  resources:
    requests:
      cpu: 50m
  # Priority class assigned to pods
  priorityClassName: ""
  common:
    labels: {}
This table contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags:
| Parameter | Default | Description |
|---|---|---|
| `server` | `https://acme-v02.api.letsencrypt.org/directory` | Let's Encrypt server for use with the ACME CertManager Issuer. |
| `email` | | You must provide an email to associate with your TLS certificates. Let's Encrypt uses this address to contact you about expiring certificates, and issues related to your account. |
| `rbac.create` | `true` | When `true`, creates RBAC-related resources to allow for manipulation of CertManager Issuer objects. |
| `resources.requests.cpu` | `50m` | Requested CPU resources for the Issuer creation Job. |
| `common.labels` | `{}` | Common labels to apply to the ServiceAccount, Job, ConfigMap, and Issuer. |
| `priorityClassName` | `""` | Priority class assigned to pods. |
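Since the email must be provided, a minimal sketch of supplying it at install time, with a placeholder address:

```shell
# Provide the ACME registration email used by the bundled Issuer
# (the address shown is a placeholder).
helm upgrade --install gitlab gitlab/gitlab \
  --set certmanager-issuer.email=admin@example.com
```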