Kubernetes StatefulSets are used to deploy stateful applications inside your cluster. Each Pod in the StatefulSet can access local persistent volumes that stick to it even after it's rescheduled. This allows Pods to maintain individual state that's separate from their neighbors in the set.

Unfortunately these volumes come with a big limitation: Kubernetes doesn't provide a way to resize them from the StatefulSet object. The spec.resources.requests.storage property of the StatefulSet's volumeClaimTemplates field is immutable, preventing you from applying any capacity increases you require. This article will show you how to work around the problem.

Copy this YAML and save it to a file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: nginx
      port: 80
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  replicas: 3
  serviceName: nginx
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Use kubectl apply to submit the manifest to your cluster:

```
$ kubectl apply -f <your-file>
service/nginx created
statefulset.apps/nginx created
```

You’ll need astorage classand provisioner in your cluster to run this example. It creates a StatefulSet that runs three replicas of an NGINX web server.

While this isn’t representative of when StatefulSets should be used, it’s adequate as a demo of the volume problems you can face. A volume claim with 1 Gi of storage is mounted to NGINX’s data directory. Your web content could outgrow this relatively small allowance as your service scales. However trying to modify thevolumeClaimTemplates.spec.resources.requests.storagefield to10Giwill report the following error when you runkubectl apply:

```
service/nginx unchanged
The StatefulSet "nginx" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden
```

This occurs because almost all the fields of a StatefulSet’s manifest are immutable after creation.
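The rule the API server enforces can be mirrored in a few lines. This is a hedged sketch, not real Kubernetes source code: it checks whether a proposed spec change only touches the five mutable fields named in the error message.

```python
# The StatefulSet spec fields the API server allows you to change after
# creation, per the error message above.
MUTABLE_FIELDS = {
    "replicas",
    "template",
    "updateStrategy",
    "persistentVolumeClaimRetentionPolicy",
    "minReadySeconds",
}

def update_is_allowed(changed_fields: set) -> bool:
    """Return True if every changed spec field is one Kubernetes permits."""
    return changed_fields <= MUTABLE_FIELDS

print(update_is_allowed({"replicas"}))              # scaling is allowed
print(update_is_allowed({"volumeClaimTemplates"}))  # resizing claims is not
```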

Manually Resizing StatefulSet Volumes

you’re able to bypass the restriction by manually resizing the persistent volume claim (PVC). You’ll then need to recreate the StatefulSet to release and rebind the volume from your Pods. This will trigger the actual volume resize event.

First use Kubectl to find the PVCs associated with your StatefulSet:

```
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-ccb2c835-e2d3-4632-b8ba-4c8c142795e4   1Gi        RWO
data-nginx-1   Bound    pvc-1b0b27fe-3874-4ed5-91be-d8e552e515f2   1Gi        RWO
data-nginx-2   Bound    pvc-4b7790c2-3ae6-4e04-afee-a2e1bae4323b   1Gi        RWO
```

There are three PVCs because there are three replicas in the StatefulSet. Each Pod gets its own individual volume.
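These claim names aren't arbitrary: Kubernetes combines the volumeClaimTemplate name, the StatefulSet name, and each Pod's ordinal. A small sketch of how the data-nginx-N names above are derived:

```python
def pvc_names(template_name: str, statefulset_name: str, replicas: int) -> list:
    """StatefulSet PVCs are named <template>-<statefulset>-<ordinal>."""
    return [f"{template_name}-{statefulset_name}-{i}" for i in range(replicas)]

print(pvc_names("data", "nginx", 3))  # the three claims listed above
```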

Now use kubectl edit to adjust the capacity of each volume, starting with the first claim:

```
$ kubectl edit pvc data-nginx-0
```

The PVC's YAML manifest will appear in your editor. Find the spec.resources.requests.storage field and change it to your new desired capacity:

```yaml
# ...
spec:
  resources:
    requests:
      storage: 10Gi
# ...
```

Save and close the file. Kubectl should report that the change has been applied to your cluster.
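If you'd rather script the change than edit each claim interactively, kubectl patch accepts the same update. A minimal sketch that builds the patch body; the 10Gi target and data-nginx-0 claim name come from this example:

```python
import json

def pvc_resize_patch(new_size: str) -> str:
    """Build the merge patch that sets a PVC's requested storage."""
    return json.dumps({"spec": {"resources": {"requests": {"storage": new_size}}}})

# Apply it to each claim in turn, e.g.:
#   kubectl patch pvc data-nginx-0 --patch '<output of this script>'
print(pvc_resize_patch("10Gi"))
```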

Now repeat these steps for the StatefulSet’s remaining PVCs. Listing your cluster’s persistent volumes should then show the new size against each one:

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM
pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO            Delete           Bound    default/data-nginx-2
pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO            Delete           Bound    default/data-nginx-0
pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO            Delete           Bound    default/data-nginx-1
```

The PVCs display the new capacity too:

```
$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO
```

However, the extra capacity won't be usable inside your Pods yet. This is because the volume can't be resized while Pods are still using it.
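Note that expansion is one-way: Kubernetes rejects any attempt to shrink a PVC. Here's a quick guard you could run before editing a claim — a sketch that only handles the binary suffixes used in this article:

```python
# Convert the binary-suffix quantities used in this article (e.g. "1Gi")
# to bytes, then check a proposed resize is a genuine expansion.
UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def to_bytes(quantity: str) -> int:
    return int(quantity[:-2]) * UNITS[quantity[-2:]]

def is_valid_expansion(current: str, requested: str) -> bool:
    # Shrinking is forbidden, so the new request must be strictly larger.
    return to_bytes(requested) > to_bytes(current)

print(is_valid_expansion("1Gi", "10Gi"))  # a valid expansion
print(is_valid_expansion("10Gi", "1Gi"))  # shrinking would be rejected
```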

Recreating the StatefulSet

Complete the resize by releasing the volume claims from the StatefulSet that's holding them. Delete the StatefulSet but use the orphan cascading mechanism so its Pods remain in your cluster. This will help minimize downtime.

```
$ kubectl delete statefulset nginx --cascade=orphan
statefulset.apps "nginx" deleted
```

Next, edit your original YAML file to include the new volume size in the volumeClaimTemplates.spec.resources.requests.storage field. Then use kubectl apply to recreate the StatefulSet in your cluster:

The new StatefulSet will assume ownership of the previously orphaned Pods because they'll already meet its requirements. The volumes may get resized at this point, but in most cases you'll have to manually initiate a rollout that restarts your Pods:

```
$ kubectl rollout restart statefulset nginx
```

The rollout proceeds sequentially, targeting one Pod at a time. This ensures your service remains accessible throughout.
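With the default RollingUpdate strategy, a StatefulSet replaces its Pods from the highest ordinal down to the lowest. A sketch of that ordering, using the nginx example's name and replica count:

```python
def rollout_order(statefulset_name: str, replicas: int) -> list:
    """RollingUpdate restarts Pods from the highest ordinal to the lowest."""
    return [f"{statefulset_name}-{i}" for i in range(replicas - 1, -1, -1)]

print(rollout_order("nginx", 3))  # nginx-2 is replaced first, nginx-0 last
```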

Try connecting to one of your Pods to check the increased capacity is visible from within:

```
$ kubectl exec -it nginx-0 -- bash
root@nginx-0:/# df -h /usr/share/nginx/html
Filesystem                                                                Size  Used Avail Use% Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-33af452d-feff-429d-80cd-a45232e700c1  9.9G  4.5M  9.4G   1% /usr/share/nginx/html
```

The Pod’s reporting the expected 10 Gi of storage.

Summary

Kubernetes StatefulSets let you run stateful applications in Kubernetes with persistent storage volumes that are scoped to individual Pods. However, the flexibility this permits ends when you need to resize one of your volumes. This is a missing feature that currently requires several manual steps to be completed in sequence.

The Kubernetes maintainers are aware of the issue. There's an open feature request to develop a solution which should eventually let you initiate volume resizes by editing a StatefulSet's manifest. This will be much quicker and safer than the current situation.

One final caveat is that volume resizes are dependent on a storage driver that permits dynamic expansion. This feature only became generally available in Kubernetes v1.24 and not all drivers, Kubernetes distributions, and cloud platforms will support it. You can check whether yours does by running kubectl get sc and looking for true in the ALLOWVOLUMEEXPANSION column of the storage class you're using with your StatefulSets.
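That check can be scripted too. This sketch parses sample kubectl get sc table output; the class names and provisioner below are made up for illustration:

```python
# Report which storage classes in `kubectl get sc` output allow expansion.
# The sample table is illustrative, not from a real cluster.
SAMPLE = """\
NAME                 PROVISIONER          RECLAIMPOLICY   ALLOWVOLUMEEXPANSION
standard (default)   example.com/driver   Delete          true
legacy               example.com/driver   Delete          false
"""

def expandable_classes(table: str) -> list:
    rows = table.strip().splitlines()[1:]  # skip the header row
    return [row.split()[0] for row in rows if row.split()[-1] == "true"]

print(expandable_classes(SAMPLE))  # only "standard" allows expansion
```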