Deploy Postgres on any Kubernetes using CloudNativePG

There are many ways to set up Postgres in Kubernetes, but not every method solves all of the following problems:

Backup data to object storage
On-demand backup
Schedule backup
Point-in-time recovery (PITR)

A solid way to solve these problems is the CloudNativePG operator, which manages PostgreSQL workloads on any supported Kubernetes cluster.

Prerequisite: a running Kubernetes cluster

Step-1
Install the CloudNativePG operator on your running Kubernetes cluster; the easiest way to deploy it is with Helm:

helm repo add cnpg https://cloudnative-pg.github.io/charts

helm upgrade --install cnpg \
  --namespace cnpg-system \
  --create-namespace \
  cnpg/cloudnative-pg

This installs the cnpg operator in the cnpg-system namespace of your Kubernetes cluster. To check whether the operator pod is running, use the command below:

kubectl get pods -l app.kubernetes.io/name=cloudnative-pg -n cnpg-system
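
If you script this step, you can also block until the operator deployment is ready; a small sketch that reuses the same label as the command above:

kubectl wait --for=condition=Available deployment -l app.kubernetes.io/name=cloudnative-pg -n cnpg-system --timeout=120s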

Step-2
The operator also registers a new Kubernetes custom resource called Cluster, representing a PostgreSQL cluster made up of a single primary and an optional number of replicas that coexist in a chosen Kubernetes namespace.
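
You can verify that the new resource type is available (a quick check; clusters.postgresql.cnpg.io is the CRD name the operator installs):

kubectl get crd clusters.postgresql.cnpg.io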

Once the operator is running, we can install Postgres in the Kubernetes cluster using the Cluster resource created by cnpg.
We use the manifest below, cluster.yaml, to create the Postgres cluster:

apiVersion: v1
data:
  password: VHhWZVE0bk44MlNTaVlIb3N3cU9VUlp2UURhTDRLcE5FbHNDRUVlOWJ3RHhNZDczS2NrSWVYelM1Y1U2TGlDMg==
  username: YXBw
kind: Secret
metadata:
  name: cluster-example-app-user
type: kubernetes.io/basic-auth
---
apiVersion: v1
data:
  password: dU4zaTFIaDBiWWJDYzRUeVZBYWNCaG1TemdxdHpxeG1PVmpBbjBRSUNoc0pyU211OVBZMmZ3MnE4RUtLTHBaOQ==
  username: cG9zdGdyZXM=
kind: Secret
metadata:
  name: cluster-example-superuser
type: kubernetes.io/basic-auth
---
apiVersion: v1
kind: Secret
metadata:
  name: backup-creds
data:
  ACCESS_KEY_ID: a2V5X2lk
  ACCESS_SECRET_KEY: c2VjcmV0X2tleQ==
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example-full
spec:
  description: "Example of cluster"
  imageName: ghcr.io/cloudnative-pg/postgresql:16.2
  instances: 3
  startDelay: 300
  stopDelay: 300
  primaryUpdateStrategy: unsupervised

  postgresql:
    parameters:
      shared_buffers: 256MB
      pg_stat_statements.max: '10000'
      pg_stat_statements.track: all
      auto_explain.log_min_duration: '10s'

  bootstrap:
    initdb:
      database: app
      owner: app
      secret:
        name: cluster-example-app-user

  enableSuperuserAccess: true
  superuserSecret:
    name: cluster-example-superuser

  storage:
    storageClass: standard
    size: 1Gi

  backup:
    barmanObjectStore:
      destinationPath: s3://cluster-example-full-backup/
      endpointURL: http://custom-endpoint:1234
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
        encryption: AES256
      data:
        compression: gzip
        encryption: AES256
        immediateCheckpoint: false
        jobs: 2
    retentionPolicy: "30d"

  resources:
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "1Gi"
      cpu: "2"

  affinity:
    enablePodAntiAffinity: true
    topologyKey: failure-domain.beta.kubernetes.io/zone

  nodeMaintenanceWindow:
    inProgress: false
    reusePVC: false

In the above manifest we create two basic-auth secrets: one for the owner of the initial application database and one for superuser access. You can read more about roles in Postgres here.

The third secret is used to access the object store; in this example we are using AWS S3.

The supported object stores can be found here.
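
For reference, the data values in these secrets are plain base64-encoded strings. Below is a sketch of creating equivalent secrets imperatively with kubectl; the password and key values are placeholders to replace with your own:

echo -n 'app' | base64   # prints YXBw, the username value in the first secret

kubectl create secret generic cluster-example-app-user \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=app \
  --from-literal=password='choose-a-strong-password'

kubectl create secret generic backup-creds \
  --from-literal=ACCESS_KEY_ID='your-access-key-id' \
  --from-literal=ACCESS_SECRET_KEY='your-secret-access-key'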

Now apply the manifest in your Kubernetes cluster, replacing namespace with the namespace you want the cluster in:

kubectl create -f cluster.yaml -n namespace

Now you can see the Postgres pods running in your Kubernetes cluster; with instances: 3 there will be three of them, one primary and two replicas:

kubectl get pods -n namespace

You can inspect the Postgres cluster resource with:

kubectl get cluster -n namespace
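
If you have the cnpg kubectl plugin installed (optional, distributed by the CloudNativePG project), it shows a much richer status, including the current primary and replication state:

kubectl cnpg status cluster-example-full -n namespace

CloudNativePG also creates services your applications can connect through: cluster-example-full-rw (read-write, always pointing at the primary) and cluster-example-full-ro (read-only, pointing at the replicas).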

In the next tutorial we will configure on-demand backups, scheduled backups, and recovery from existing data.
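
As a preview, both backup types are driven by small manifests in the same API group. A minimal sketch of an on-demand Backup and a ScheduledBackup targeting the cluster created above (resource names are illustrative):

apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: backup-example
spec:
  cluster:
    name: cluster-example-full
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: scheduled-backup-example
spec:
  schedule: "0 0 0 * * *"  # six-field cron format: at midnight every day
  cluster:
    name: cluster-example-full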