# Using the GitLab-Gitaly chart
The `gitaly` sub-chart provides a configurable deployment of Gitaly Servers.
## Requirements
This chart depends on access to the Workhorse service, either as part of the
complete GitLab chart or provided as an external service reachable from the Kubernetes
cluster this chart is deployed onto.
## Design Choices
The Gitaly container used in this chart also contains the GitLab Shell codebase, in order to perform the actions on the Git repositories that have not yet been ported into Gitaly. Because the Gitaly container includes a copy of GitLab Shell, GitLab Shell must also be configured within this chart.
## Configuration
The `gitaly` chart is configured in two parts: external services, and chart settings.

Gitaly is by default deployed as a component when deploying the GitLab chart. If deploying Gitaly separately, `global.gitaly.enabled` needs to be set to `false` and additional configuration will need to be performed as described in the external Gitaly documentation.
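As a minimal sketch of what that external configuration involves, the bundled Gitaly is disabled and the external node is declared under `global.gitaly.external` (the hostname below is a placeholder):

```yaml
global:
  gitaly:
    enabled: false
    external:
      - name: default                 # storage name as known to GitLab
        hostname: gitaly.example.com  # placeholder: your external Gitaly host
        port: 8075
```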
### Installation command line options
The table below contains all the possible chart configurations that can be supplied to the `helm install` command using the `--set` flags.
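For instance, assuming Gitaly is deployed as part of the full GitLab chart under the `gitlab.gitaly` key (release name and values below are placeholders), individual settings can be overridden like so:

```shell
helm upgrade --install gitlab gitlab/gitlab \
  --set gitlab.gitaly.persistence.size=100Gi \
  --set gitlab.gitaly.tolerations[0].key=node_label
```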
| Parameter | Default | Description |
|---|---|---|
| `annotations` | | Pod annotations |
| `common.labels` | `{}` | Supplemental labels that are applied to all objects created by this chart. |
| `podLabels` | | Supplemental Pod labels. Will not be used for selectors. |
| `external[].hostname` | `- ""` | hostname of external node |
| `external[].name` | `- ""` | name of external node storage |
| `external[].port` | `- ""` | port of external node |
| `extraContainers` | | List of extra containers to include |
| `extraInitContainers` | | List of extra init containers to include |
| `extraVolumeMounts` | | List of extra volumes mounts to do |
| `extraVolumes` | | List of extra volumes to create |
| `extraEnv` | | List of extra environment variables to expose |
| `extraEnvFrom` | | List of extra environment variables from other data sources to expose |
| `gitaly.serviceName` | | The name of the generated Gitaly service. Overrides `global.gitaly.serviceName`, and defaults to `<RELEASE-NAME>-gitaly` |
| `image.pullPolicy` | `Always` | Gitaly image pull policy |
| `image.pullSecrets` | | Secrets for the image repository |
| `image.repository` | `registry.gitlab.com/gitlab-org/build/cng/gitaly` | Gitaly image repository |
| `image.tag` | `master` | Gitaly image tag |
| `init.image.repository` | | initContainer image |
| `init.image.tag` | | initContainer image tag |
| `internal.names[]` | `- default` | Ordered names of StatefulSet storages |
| `serviceLabels` | `{}` | Supplemental service labels |
| `service.externalPort` | `8075` | Gitaly service exposed port |
| `service.internalPort` | `8075` | Gitaly internal port |
| `service.name` | `gitaly` | The name of the Service port that Gitaly is behind in the Service object. |
| `service.type` | `ClusterIP` | Gitaly service type |
| `securityContext.fsGroup` | `1000` | Group ID under which the pod should be started |
| `securityContext.fsGroupChangePolicy` | | Policy for changing ownership and permission of the volume (requires Kubernetes 1.23) |
| `securityContext.runAsUser` | `1000` | User ID under which the pod should be started |
| `tolerations` | `[]` | Toleration labels for pod assignment |
| `persistence.accessMode` | `ReadWriteOnce` | Gitaly persistence access mode |
| `persistence.annotations` | | Gitaly persistence annotations |
| `persistence.enabled` | `true` | Gitaly enable persistence flag |
| `persistence.matchExpressions` | | Label-expression matches to bind |
| `persistence.matchLabels` | | Label-value matches to bind |
| `persistence.size` | `50Gi` | Gitaly persistence volume size |
| `persistence.storageClass` | | storageClassName for provisioning |
| `persistence.subPath` | | Gitaly persistence volume mount path |
| `priorityClassName` | | Gitaly StatefulSet priorityClassName |
| `logging.level` | | Log level |
| `logging.format` | `json` | Log format |
| `logging.sentryDsn` | | Sentry DSN URL - Exceptions from Go server |
| `logging.rubySentryDsn` | | Sentry DSN URL - Exceptions from `gitaly-ruby` |
| `logging.sentryEnvironment` | | Sentry environment to be used for logging |
| `ruby.maxRss` | | Gitaly-Ruby resident set size (RSS) that triggers a memory restart (bytes) |
| `ruby.gracefulRestartTimeout` | | Graceful period before a force restart after exceeding Max RSS |
| `ruby.restartDelay` | | Time that Gitaly-Ruby memory must remain high before a restart (seconds) |
| `ruby.numWorkers` | | Number of Gitaly-Ruby worker processes |
| `shell.concurrency[]` | | Concurrency of each RPC endpoint. Specified using keys `rpc` and `maxPerRepo` |
| `packObjectsCache.enabled` | `false` | Enable the Gitaly pack-objects cache |
| `packObjectsCache.dir` | `/home/git/repositories/+gitaly/PackObjectsCache` | Directory where cache files get stored |
| `packObjectsCache.max_age` | `5m` | Cache entries lifespan |
| `git.catFileCacheSize` | | Cache size used by Git cat-file process |
| `git.config[]` | `[]` | Git configuration that Gitaly should set when spawning Git commands |
| `prometheus.grpcLatencyBuckets` | | Buckets corresponding to histogram latencies on GRPC method calls to be recorded by Gitaly. A string form of the array (for example, `"[1.0, 1.5, 2.0]"`) is required as input |
| `statefulset.strategy` | `{}` | Allows one to configure the update strategy utilized by the StatefulSet |
| `statefulset.livenessProbe.initialDelaySeconds` | `30` | Delay before liveness probe is initiated |
| `statefulset.livenessProbe.periodSeconds` | `10` | How often to perform the liveness probe |
| `statefulset.livenessProbe.timeoutSeconds` | `3` | When the liveness probe times out |
| `statefulset.livenessProbe.successThreshold` | `1` | Minimum consecutive successes for the liveness probe to be considered successful after having failed |
| `statefulset.livenessProbe.failureThreshold` | `3` | Minimum consecutive failures for the liveness probe to be considered failed after having succeeded |
| `statefulset.readinessProbe.initialDelaySeconds` | `10` | Delay before readiness probe is initiated |
| `statefulset.readinessProbe.periodSeconds` | `10` | How often to perform the readiness probe |
| `statefulset.readinessProbe.timeoutSeconds` | `3` | When the readiness probe times out |
| `statefulset.readinessProbe.successThreshold` | `1` | Minimum consecutive successes for the readiness probe to be considered successful after having failed |
| `statefulset.readinessProbe.failureThreshold` | `3` | Minimum consecutive failures for the readiness probe to be considered failed after having succeeded |
| `metrics.enabled` | `false` | If a metrics endpoint should be made available for scraping |
| `metrics.port` | `9236` | Metrics endpoint port |
| `metrics.path` | `/metrics` | Metrics endpoint path |
| `metrics.serviceMonitor.enabled` | `false` | If a ServiceMonitor should be created to enable Prometheus Operator to manage the metrics scraping. Note that enabling this removes the `prometheus.io` scrape annotations |
| `metrics.serviceMonitor.additionalLabels` | `{}` | Additional labels to add to the ServiceMonitor |
| `metrics.serviceMonitor.endpointConfig` | `{}` | Additional endpoint configuration for the ServiceMonitor |
| `metrics.metricsPort` | | DEPRECATED: Use `metrics.port` |
## Chart configuration examples
### extraEnv

`extraEnv` allows you to expose additional environment variables in all containers in the pods.

Below is an example use of `extraEnv`:
```yaml
extraEnv:
  SOME_KEY: some_value
  SOME_OTHER_KEY: some_other_value
```
When the container is started, you can confirm that the environment variables are exposed:

```shell
env | grep SOME
SOME_KEY=some_value
SOME_OTHER_KEY=some_other_value
```
### extraEnvFrom

`extraEnvFrom` allows you to expose additional environment variables from other data sources in all containers in the pods.

Below is an example use of `extraEnvFrom`:
```yaml
extraEnvFrom:
  MY_NODE_NAME:
    fieldRef:
      fieldPath: spec.nodeName
  MY_CPU_REQUEST:
    resourceFieldRef:
      containerName: test-container
      resource: requests.cpu
  SECRET_THING:
    secretKeyRef:
      name: special-secret
      key: special_token
      # optional: boolean
  CONFIG_STRING:
    configMapKeyRef:
      name: useful-config
      key: some-string
      # optional: boolean
```
### image.pullSecrets

`pullSecrets` allows you to authenticate to a private registry to pull images for a pod.

Additional details about private registries and their authentication methods can be found in the Kubernetes documentation.

Below is an example use of `pullSecrets`:
```yaml
image:
  repository: my.gitaly.repository
  tag: latest
  pullPolicy: Always
  pullSecrets:
    - name: my-secret-name
    - name: my-secondary-secret-name
```
### tolerations

`tolerations` allow you to schedule pods on tainted worker nodes.

Below is an example use of `tolerations`:
```yaml
tolerations:
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  - key: "node_label"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
```
### annotations

`annotations` allows you to add annotations to the Gitaly pods.

Below is an example use of `annotations`:
```yaml
annotations:
  kubernetes.io/example-annotation: annotation-value
```
### priorityClassName

`priorityClassName` allows you to assign a PriorityClass to the Gitaly pods.

Below is an example use of `priorityClassName`:

```yaml
priorityClassName: persistence-enabled
```
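For this to take effect, a PriorityClass with the referenced name must already exist in the cluster. A minimal sketch of such an object (the name `persistence-enabled` matches the example above; the priority value is illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: persistence-enabled  # must match the priorityClassName value
value: 1000                  # illustrative priority; higher schedules first
globalDefault: false
description: "Priority class used by the Gitaly pods"
```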
### git.config

`git.config` allows you to add configuration to all Git commands spawned by Gitaly. Accepts configuration as documented in `git-config(1)` in `key` / `value` pairs, as shown below.
```yaml
git:
  config:
    - key: "pack.threads"
      value: 4
    - key: "fsck.missingSpaceBeforeDate"
      value: ignore
```
### Altering security contexts

Gitaly `StatefulSet` performance may suffer when repositories have large amounts of files. Mitigate the issue by changing or fully deleting the settings for the `securityContext`:

```yaml
gitlab:
  gitaly:
    securityContext:
      fsGroup: ""
      runAsUser: ""
```

The example syntax above eliminates the `securityContext` setting entirely.
Setting `securityContext: {}` or `securityContext:` does not work due to the way Helm merges default values with user-provided configuration.

Starting from Kubernetes 1.23, you can instead set the `fsGroupChangePolicy` to `OnRootMismatch` to mitigate the issue:
```yaml
gitlab:
  gitaly:
    securityContext:
      fsGroupChangePolicy: "OnRootMismatch"
```
From the documentation,
this setting “could help shorten the time it takes to change ownership and permission of a volume.”
## External Services

This chart should be attached to the Workhorse service.

### Workhorse

```yaml
workhorse:
  host: workhorse.example.com
  serviceName: webservice
  port: 8181
```
| Name | Type | Default | Description |
|---|---|---|---|
| `host` | String | | The hostname of the Workhorse server. This can be omitted in lieu of `serviceName`. |
| `port` | Integer | `8181` | The port on which to connect to the Workhorse server. |
| `serviceName` | String | `webservice` | The name of the `service` which is operating the Workhorse server. If this is present, and `host` is not, the chart will template the hostname of the service (and current `.Release.Name`) in place of the `host` value. This is convenient when using Workhorse as a part of the overall GitLab chart. |
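The two modes described above can be sketched side by side; with `host` omitted, the chart templates a service hostname from `serviceName` and the release name, while an explicit `host` bypasses that templating entirely:

```yaml
# Rely on service-name templating (host omitted):
workhorse:
  serviceName: webservice
  port: 8181

# Or pin an explicit host, bypassing the templating:
# workhorse:
#   host: workhorse.example.com
#   port: 8181
```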
## Chart settings

The following values are used to configure the Gitaly Pods.

Gitaly uses an Auth Token to authenticate with other services. The Auth Token secret and key are sourced from the `global.gitaly.authToken` value. Additionally, the Gitaly container has a copy of GitLab Shell, which has some configuration that can be set. The Shell authToken is sourced from the `global.shell.authToken` values.
### Git Repository Persistence

This chart provisions a PersistentVolumeClaim and mounts a corresponding persistent volume for the Git repository data. You'll need physical storage available in the Kubernetes cluster for this to work. If you'd rather use emptyDir, disable PersistentVolumeClaim with `persistence.enabled: false`.
The persistence settings are used in a `volumeClaimTemplate` that should be valid for all your Gitaly pods. You should not include settings that are meant to reference a single specific volume (such as `volumeName`). If you want to reference a specific volume, you need to manually create the PersistentVolumeClaim.

Note that these settings cannot be changed once deployed: in a StatefulSet, the `VolumeClaimTemplate` is immutable.
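If you do need to bind to one specific volume, a manually created claim could look like the following sketch. The claim name and volume name are placeholders; the claim must carry the name the StatefulSet expects for its volume claim (typically `repo-data-<pod-name>`), so verify against the generated objects before relying on this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repo-data-gitlab-gitaly-0  # placeholder: must match the StatefulSet's expected claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  volumeName: my-existing-pv       # placeholder: the specific PersistentVolume to bind
```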
```yaml
persistence:
  enabled: true
  storageClass: standard
  accessMode: ReadWriteOnce
  size: 50Gi
  matchLabels: {}
  matchExpressions: []
  subPath: "/data"
  annotations: {}
```
| Name | Type | Default | Description |
|---|---|---|---|
| `accessMode` | String | `ReadWriteOnce` | Sets the accessMode requested in the PersistentVolumeClaim. See Kubernetes Access Modes Documentation for details. |
| `enabled` | Boolean | `true` | Sets whether or not to use a PersistentVolumeClaim for the repository data. If `false`, an emptyDir volume is used. |
| `matchExpressions` | Array | | Accepts an array of label condition objects to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `matchLabels` | Map | | Accepts a Map of label names and label values to match against when choosing a volume to bind. This is used in the PersistentVolumeClaim `selector` section. See the volumes documentation. |
| `size` | String | `50Gi` | The minimum volume size to request for the data persistence. |
| `storageClass` | String | | Sets the storageClassName on the Volume Claim for dynamic provisioning. When unset or null, the default provisioner will be used. If set to a hyphen, dynamic provisioning is disabled. |
| `subPath` | String | | Sets the path within the volume to mount, rather than the volume root. The root is used if the subPath is empty. |
| `annotations` | Map | | Sets the annotations on the Volume Claim for dynamic provisioning. See Kubernetes Annotations Documentation for details. |
### Running Gitaly over TLS

This section refers to Gitaly being run inside the cluster using the Helm charts. If you are using an external Gitaly instance and want to use TLS for communicating with it, refer to the external Gitaly documentation.

Gitaly supports communicating with other components over TLS. This is controlled by the settings `global.gitaly.tls.enabled` and `global.gitaly.tls.secretName`.
Follow the steps to run Gitaly over TLS:

1. The Helm chart expects a certificate to be provided for communicating over TLS with Gitaly. This certificate should apply to all the Gitaly nodes that are present. Hence, all hostnames of each of these Gitaly nodes should be added as a Subject Alternate Name (SAN) to the certificate.

   To know the hostnames to use, check the `/srv/gitlab/config/gitlab.yml` file in the Toolbox pod and check the various `gitaly_address` fields specified under the `repositories.storages` key within it.

   ```shell
   kubectl exec -it <Toolbox pod> -- grep gitaly_address /srv/gitlab/config/gitlab.yml
   ```

   A script to generate certificates for the internal Gitaly pods can be found in this repository. Users can use or refer to that script to generate certificates with proper SAN attributes.

1. Create a Kubernetes TLS secret using the certificate created:

   ```shell
   kubectl create secret tls gitaly-server-tls --cert=gitaly.crt --key=gitaly.key
   ```

1. Redeploy the Helm chart by passing `--set global.gitaly.tls.enabled=true`.
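The certificate in step 1 can be produced in many ways; as one sketch, `openssl` (1.1.1 or later, for `-addext`) can generate a self-signed certificate carrying the required SANs. The hostnames below are placeholders for your actual Gitaly pod hostnames:

```shell
# Generate a self-signed certificate whose SANs cover two (placeholder) Gitaly pod hostnames
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout gitaly.key -out gitaly.crt \
  -subj "/CN=gitlab-gitaly-0.gitlab-gitaly.default.svc" \
  -addext "subjectAltName=DNS:gitlab-gitaly-0.gitlab-gitaly.default.svc,DNS:gitlab-gitaly-1.gitlab-gitaly.default.svc"
```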
### Global server hooks

The Gitaly StatefulSet has support for Global server hooks. The hook scripts run on the Gitaly pod, and are therefore limited to the tools available in the Gitaly container.

The hooks are populated using ConfigMaps, and can be used by setting the following values as appropriate:

- `global.gitaly.hooks.preReceive.configmap`
- `global.gitaly.hooks.postReceive.configmap`
- `global.gitaly.hooks.update.configmap`

To populate the ConfigMap, you can point `kubectl` to a directory of scripts:

```shell
kubectl create configmap MAP_NAME --from-file /PATH/TO/SCRIPT/DIR
```
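Wiring a created ConfigMap into the chart then amounts to referencing its name in the values, along the lines of this sketch (the ConfigMap name `pre-receive-hooks` is a placeholder):

```yaml
global:
  gitaly:
    hooks:
      preReceive:
        configmap: pre-receive-hooks  # placeholder: the ConfigMap created with kubectl above
```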