---
title: Various K8s ops to remember
tags:
  - k8s
  - learning-notes
  - guides
draft: false
date: 2025-12-14
---

### Migrating data between PVCs

Let's say I have a pod using a PVC that is backed by a PV with NFS as its storage backend. At some point I want to move this pod to a faster backend like Longhorn, without losing any data in the process. One option is to use a temporary Pod to replicate the data into the new PVC. Here are the steps, along with an example of migrating the storage of a Gitea Actions runner:
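
Before touching anything, it can help to confirm which PV currently backs the runner's claim and that it really is NFS. A minimal check, assuming the existing claim is called "gitea-runner-1-pvc" (the name mounted by the runner Deployment below):

```
# Show the existing claim and which PV it is bound to
kubectl get pvc gitea-runner-1-pvc -n dev-stuff

# Inspect the backing PV; the source section should show the NFS server and path
kubectl describe pv "$(kubectl get pvc gitea-runner-1-pvc -n dev-stuff -o jsonpath='{.spec.volumeName}')"
```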

1. create a new PV using the new storage backend + a new PVC that binds to it

The storage backend of the new PVC is Longhorn:
```
# pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-run-1-pvc
  namespace: dev-stuff
spec:
  storageClassName: longhorn-static
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Apply the manifest and check that the PVC is in the "Bound" state:
```
kubectl apply -f pvc.yaml

# OUTPUT:
persistentvolumeclaim/gitea-run-1-pvc created

kubectl get pvc -n dev-stuff

# OUTPUT:
gitea-run-1-pvc   Bound   pvc-17af2514-7307-4d74-b343-33c31607ad12   5Gi   RWO   longhorn-static   <unset>   56s
```
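
The PV name in that output is generated, so to double-check that the new claim really ended up on Longhorn, something along these lines should do (the grep is only there to trim the describe output):

```
# The PV bound to the new claim should report the longhorn-static storage class
kubectl describe pv pvc-17af2514-7307-4d74-b343-33c31607ad12 | grep -i -E 'StorageClass|Driver'
```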

2. scale down the Deployment to 0 replicas
```
kubectl scale deployment -n dev-stuff gitea-runner-1 --replicas=0

# OUTPUT:
deployment.apps/gitea-runner-1 scaled
```
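
Before attaching the old claim to another pod, it is worth making sure the runner pod has actually terminated. A quick check; the label selector here is an assumption, so adjust it to whatever labels the Deployment actually uses:

```
# Expect "No resources found" once the runner pod is gone
kubectl get pods -n dev-stuff -l app=gitea-runner-1
```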

3. create a temporary Pod that mounts both the old and the new PVC

The Pod is managed by a small one-replica Deployment here. The image of choice is "busybox", but any other image that has the basic Linux utilities available will do:
```
# temp.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: data-mover
  name: data-mover
  namespace: dev-stuff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-mover
  template:
    metadata:
      labels:
        app: data-mover
    spec:
      containers:
        - args:
            - -c
            - while true; do ping localhost; sleep 60; done
          command:
            - /bin/sh
          image: busybox:latest
          name: data-mover
          volumeMounts:
            - mountPath: /source
              name: source
            - mountPath: /destination
              name: destination
      restartPolicy: Always
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: gitea-runner-1-pvc
        - name: destination
          persistentVolumeClaim:
            claimName: gitea-run-1-pvc
```

Apply the manifest and check that the pod is running:
```
kubectl apply -f temp.yaml

kubectl get pods -n dev-stuff

# OUTPUT:
NAME                          READY   STATUS    RESTARTS   AGE
data-mover-5ff6cfcbfc-9cd8f   1/1     Running   0          31s
```
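
Before copying anything, it doesn't hurt to confirm that both claims are mounted where we expect them. A quick sanity check, using the pod name from the output above:

```
# Both mount points should show up, each backed by its own volume
kubectl exec -n dev-stuff data-mover-5ff6cfcbfc-9cd8f -- df -h /source /destination
```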

4. copy all the data across

Exec into the newly created pod and copy the contents of "/source" into "/destination":
```
kubectl exec -it -n dev-stuff data-mover-5ff6cfcbfc-9cd8f -- sh

# copy everything, including hidden files
cp -r /source/. /destination/
exit
```
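
To get some confidence that everything made it across, comparing the size and file count of the two trees is a cheap check (busybox ships du, find and wc). A sketch:

```
kubectl exec -n dev-stuff data-mover-5ff6cfcbfc-9cd8f -- sh -c \
  'du -s /source /destination; find /source -type f | wc -l; find /destination -type f | wc -l'
```

If the application cares about file ownership or permissions, "cp -a" instead of "cp -r" preserves them.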

5. remove the "data-mover" deployment
```
kubectl delete -f temp.yaml
```
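
A quick way to confirm the temporary Deployment is really gone before moving on (this also releases the volumes so they can be attached elsewhere):

```
# Should come back with a NotFound error once the Deployment is deleted
kubectl get deployment data-mover -n dev-stuff
```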

6. modify the Deployment to mount the new PVC
```
kubectl edit deployment -n dev-stuff gitea-runner-1

# Change the volume that references the PVC to use the new one:

volumes:
  - name: runner-data
    persistentVolumeClaim:
      claimName: gitea-run-1-pvc # This was previously "gitea-runner-1-pvc"

# Then save and close. The manifest should be applied with the new values.

# OUTPUT:
deployment.apps/gitea-runner-1 edited
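
If you'd rather avoid the interactive edit, a JSON patch does the same thing in one shot. This assumes the PVC volume is the first entry (index 0) in the Deployment's volumes list, so adjust the path if there are more volumes:

```
kubectl patch deployment gitea-runner-1 -n dev-stuff --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/persistentVolumeClaim/claimName", "value": "gitea-run-1-pvc"}]'
```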

7. scale up the Deployment and check that everything works as expected
```
kubectl scale deployment -n dev-stuff gitea-runner-1 --replicas=1

# OUTPUT:
deployment.apps/gitea-runner-1 scaled

kubectl get pods -n dev-stuff

# OUTPUT:
NAME                              READY   STATUS    RESTARTS   AGE
gitea-runner-1-754f74b9c4-vlqrf   1/1     Running   0          90s
```
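
Once the runner looks healthy, it's worth double-checking that the pod really mounts the new claim before cleaning up. Only then delete the old NFS-backed claim; this removes the old copy of the data (and possibly the PV, depending on its reclaim policy), so keep it around until you're sure:

```
# The output should list gitea-run-1-pvc among the pod's claims
kubectl get pod gitea-runner-1-754f74b9c4-vlqrf -n dev-stuff \
  -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}'

# Final cleanup, once everything checks out
kubectl delete pvc gitea-runner-1-pvc -n dev-stuff
```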