How to deploy Gitea on Kubernetes with Helm

Gitea is a simple, hyper-focused Git management solution. Here is how you can install it.

As I've been building out a self-hosted infrastructure, I was looking for a fast, well-maintained version control hosting system, and Gitea quickly came up as a great solution. Gitea is a fork of the Gogs project, but it is developed much more actively. It provides many of the features that the big Git hosting services, like GitHub and GitLab, offer. Gitea is particularly well suited to self-hosting, though, because it is hyper-focused on Git features alone, whereas GitHub and GitLab also provide pipelines, hosting, and other features. Here we will deploy Gitea the easy way: on Kubernetes.

Helm chart

Gitea provides an easy-to-use Helm chart for deployment on Kubernetes.
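
Before writing any values, add the chart repository so Helm can find the chart. At the time of writing the charts were served from dl.gitea.io; check the chart's README in case that URL has moved:

# Register the Gitea chart repository and refresh the local index
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update

The following are the custom values that I used for my deployment.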

# Default values for gitea.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

clusterDomain: cluster.local

image:
  repository: gitea/gitea
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""
  pullPolicy: Always
  rootless: false # only possible when running 1.14 or later

imagePullSecrets: []

# Security context is only usable with rootless image due to image design
podSecurityContext:
  fsGroup: 1000

containerSecurityContext: {}
#   allowPrivilegeEscalation: false
#   capabilities:
#     drop:
#       - ALL
#   # Add the SYS_CHROOT capability for root and rootless images if you intend to
#   # run pods on nodes that use the container runtime cri-o. Otherwise, you will
#   # get an error message from the SSH server that it is not possible to read from
#   # the repository.
#   # https://gitea.com/gitea/helm-chart/issues/161
#     add:
#       - SYS_CHROOT
#   privileged: false
#   readOnlyRootFilesystem: true
#   runAsGroup: 1000
#   runAsNonRoot: true
#   runAsUser: 1000

# DEPRECATED. The securityContext variable has been split in two:
# - containerSecurityContext
# - podSecurityContext.
securityContext: {}

service:
  http:
    type: ClusterIP
    port: 3000
    clusterIP: None
    #loadBalancerIP:
    #nodePort:
    #externalTrafficPolicy:
    #externalIPs:
    #ipFamilyPolicy:
    #ipFamilies:
    loadBalancerSourceRanges: []
    annotations: {}
  ssh:
    type: ClusterIP
    port: 22
    clusterIP: None
    #loadBalancerIP:
    #nodePort:
    #externalTrafficPolicy:
    #externalIPs:
    #ipFamilyPolicy:
    #ipFamilies:
    #hostPort:
    loadBalancerSourceRanges: []
    annotations: {}

ingress:
  enabled: true
  className: nginx
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: [root-url]
      paths:
        - path: /
          pathType: Prefix
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - git.example.com
  # Mostly for argocd or any other CI that uses `helm template | kubectl apply` or similar
  # If helm doesn't correctly detect your ingress API version you can set it here.
  # apiVersion: networking.k8s.io/v1

resources:
  {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

nodeSelector: {}

tolerations: []

affinity: {}

statefulset:
  env:
    []
    # - name: VARIABLE
    #   value: my-value
  terminationGracePeriodSeconds: 60
  labels: {}
  annotations: {}

persistence:
  enabled: true
  # existingClaim:
  size: 10Gi
  accessModes:
    - ReadWriteOnce
  labels: {}
  annotations: {}
  # storageClass:
  # subPath:

# additional volumes to add to the Gitea statefulset.
extraVolumes:
# - name: postgres-ssl-vol
#   secret:
#     secretName: gitea-postgres-ssl

# additional volumes to mount, both to the init container and to the main
# container. As an example, can be used to mount a client cert when connecting
# to an external Postgres server.
extraVolumeMounts:
# - name: postgres-ssl-vol
#   readOnly: true
#   mountPath: "/pg-ssl"

# bash shell script copied verbatim to the start of the init-container.
initPreScript: ""
#
# initPreScript: |
#   mkdir -p /data/git/.postgresql
#   cp /pg-ssl/* /data/git/.postgresql/
#   chown -R git:git /data/git/.postgresql/
#   chmod 400 /data/git/.postgresql/postgresql.key

# Configure commit/action signing prerequisites
signing:
  enabled: false
  gpgHome: /data/git/.gnupg

gitea:
  admin:
    #existingSecret: gitea-admin-secret
    username: [gitea-admin]
    password: [gitea-password]
    email: [gitea-email]

  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
      #  additionalLabels:
      #    prometheus-release: prom1

  ldap:
    []
    # - name: "LDAP 1"
    #  existingSecret:
    #  securityProtocol:
    #  host:
    #  port:
    #  userSearchBase:
    #  userFilter:
    #  adminFilter:
    #  emailAttribute:
    #  bindDn:
    #  bindPassword:
    #  usernameAttribute:
    #  publicSSHKeyAttribute:

  # Either specify inline `key` and `secret` or refer to them via `existingSecret`
  oauth:
    []

  config:
    #  APP_NAME: "Gitea: Git with a cup of tea"
    #  RUN_MODE: dev
    #
    #  server:
    #    SSH_PORT: 22
    #
    #  security:
    #    PASSWORD_COMPLEXITY: spec
    server:
      ROOT_URL: [root-url]
    service:
      DISABLE_REGISTRATION: true
      REQUIRE_SIGNIN_VIEW: true
      SHOW_REGISTRATION_BUTTON: false
    openid:
      ENABLE_OPENID_SIGNIN: false
      ENABLE_OPENID_SIGNUP: true
    oauth2_client:
      ENABLE_AUTO_REGISTRATION: true

  additionalConfigSources: []
  #   - secret:
  #       secretName: gitea-app-ini-oauth
  #   - configMap:
  #       name: gitea-app-ini-plaintext

  additionalConfigFromEnvs: []

  podAnnotations: {}

  # Modify the liveness probe for your needs or completely disable it by commenting out.
  livenessProbe:
    tcpSocket:
      port: http
    initialDelaySeconds: 200
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 10

  # Modify the readiness probe for your needs or completely disable it by commenting out.
  readinessProbe:
    tcpSocket:
      port: http
    initialDelaySeconds: 5
    timeoutSeconds: 1
    periodSeconds: 10
    successThreshold: 1
    failureThreshold: 3

  # # Uncomment the startup probe to enable and modify it for your needs.
  # startupProbe:
  #   tcpSocket:
  #     port: http
  #   initialDelaySeconds: 60
  #   timeoutSeconds: 1
  #   periodSeconds: 10
  #   successThreshold: 1
  #   failureThreshold: 10

memcached:
  enabled: true
  service:
    port: 11211

postgresql:
  enabled: true
  global:
    postgresql:
      postgresqlDatabase: gitea
      postgresqlUsername: [db-username]
      postgresqlPassword: [db-password]
      servicePort: 5432
  persistence:
    size: 10Gi

mysql:
  enabled: false
  root:
    password: gitea
  db:
    user: gitea
    password: gitea
    name: gitea
  service:
    port: 3306
  persistence:
    size: 10Gi

mariadb:
  enabled: false
  auth:
    database: gitea
    username: gitea
    password: gitea
    rootPassword: gitea
  primary:
    service:
      port: 3306
    persistence:
      size: 10Gi

# By default, removed or moved settings that still remain in a user defined values.yaml will cause Helm to fail running the install/update.
# Set it to false to skip this basic validation check.
checkDeprecation: true

Pay particular attention to the config section: its keys map directly to sections of Gitea's app.ini, so it is how you change the behaviour of your deployment. For example, I'm using it to disable registration on my instance.
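
With the values saved to a file, install the release. A minimal sketch, assuming the values above live in gitea-values.yaml and that you pick your own release and namespace names:

# Install the chart with the custom values into a dedicated namespace
helm install gitea gitea-charts/gitea \
  --namespace [namespace] \
  --create-namespace \
  -f gitea-values.yaml

# Watch the pods come up (Gitea, PostgreSQL, and memcached)
kubectl get pods -n [namespace] -w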

So it's deployed, right? Not quite. I'm using ingress-nginx, an ingress controller officially maintained by the Kubernetes project; I highly recommend it, because it makes a lot of things easier to manage. If you are using it as well, you also have to change the controller's tcp section so that SSH traffic is forwarded to Gitea. In the values file of your ingress-nginx deployment, near the bottom, find the tcp section and make the following adjustment.

# TCP service key:value pairs
# Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/exposing-tcp-udp-services.md
##
tcp:
  22: "[namespace]/[instance-name]-gitea-ssh:22"
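
Then re-apply the controller configuration so the change takes effect. A sketch, assuming the controller was installed with Helm as the release ingress-nginx in the ingress-nginx namespace, with its values kept in a hypothetical ingress-nginx-values.yaml:

# Upgrade the ingress-nginx release so TCP port 22 is forwarded to Gitea
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f ingress-nginx-values.yaml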

This will allow you to use SSH to pull and push changes to your repos.
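
Once the upgrade has rolled out, you can sanity-check the SSH path from your workstation. A quick sketch, with [root-url] standing in for your Gitea hostname and [user]/[repo] for any repository you can access:

# Gitea should answer with a short "successfully authenticated" greeting
ssh -T git@[root-url]

# Clone over SSH to verify the whole path end to end
git clone git@[root-url]:[user]/[repo].git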

Now you're done! 👍