1st published item - fixes

VR 2025-06-22 23:17:26 +01:00
parent adbce7ceb6
commit 78a5a2b180


@@ -77,7 +77,7 @@ spec:
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
-group: cert-manager.ioService:
+group: cert-manager.io
```
Service:
```
@@ -128,7 +128,7 @@ spec:
# Mapping to the service (see kubectl get services -n nextcloud)
name: digital-garden-svc
port:
-number: 443```
+number: 443
+```
ConfigMap:
```
@@ -209,11 +209,12 @@ spec:
name: digital-garden-nginx-conf
```
-I needed the PVC to persists the data if the pod or the node crashes - it's hosted on an SSD attached to an NFS server that is exposed via a StorageClass to the cluster. The certificate is managed via [CertManager](https://cert-manager.io/) and is issued by Let'sEncrypt - always good to use TLS! The service simply ties the pod to the ingress, not much to say here. The ingress uses the Nginx admission controller and is configured with the Let'sEncrypt cert to enable TLS. The config map has a minimal Nginx config file that is mounted to the pod under "/etc/nginx/nginx.conf". Lastly, the deployment which ties it all together - not much to say, it's just one Nginx replica. Good practice says that I should add some resource limits and requests, but I'll leave that for later with the rest of the tech debt...
+I needed a PVC to persist the data if the pod/node crashed - this one is hosted on an SSD attached to an NFS server that is exposed via a StorageClass to the cluster. The certificate is managed via [CertManager](https://cert-manager.io/) and is issued by Let's Encrypt - always good to use TLS! The service simply ties the pod to the ingress, not much to say here. The ingress uses the Nginx admission controller and is configured with the Let's Encrypt cert to enable TLS. The config map has a minimal Nginx config file that is mounted to the pod under "/etc/nginx/nginx.conf". Lastly, the deployment which ties it all together - not much to say, it's just one Nginx replica.
+Good practice says that I should add some resource limits and requests, but I'll leave that for later with the rest of the tech debt...
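For context, mounting a single file from a ConfigMap over the default nginx.conf needs a subPath - a minimal sketch of the relevant deployment fragment, with the ConfigMap name taken from the snippets above; the volume name and the assumption that the ConfigMap key is "nginx.conf" are mine:
```
# Hypothetical deployment fragment - overlaying the default nginx.conf with
# the ConfigMap via subPath (assumes the ConfigMap key is "nginx.conf")
volumeMounts:
  - name: nginx-conf            # volume name is an assumption
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
volumes:
  - name: nginx-conf
    configMap:
      name: digital-garden-nginx-conf
```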
#### Notes repo and GitHub Actions workflow
-Nothing special here - I'm running a local Gitea instance, so I just created a new [repo](https://k3gtpi.jumpingcrab.com/vlad/vlads-notes) there and saved the K8s cluster config as a repo secret called "K8S_CONF" (use secrets instead of plain variables as the latter can be exposed in the action's logs)
+I'm running a local Gitea instance, so I just created a new [repo](https://k3gtpi.jumpingcrab.com/vlad/vlads-notes) and saved the K8s cluster config as a repo secret called "K8S_CONF" (use secrets instead of plain variables, as the latter can be exposed in the action's logs).
![[Pasted image 20250622224303.png]]
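For illustration, the secret might be wired into the workflow like this - a sketch only, where the step name and kubeconfig path are assumptions:
```
# Hypothetical step - writes the kubeconfig stored in the K8S_CONF secret
# to disk so kubectl can reach the cluster
- name: Configure kubectl
  run: |
    mkdir -p ~/.kube
    echo "${{ secrets.K8S_CONF }}" > ~/.kube/config
    chmod 600 ~/.kube/config
```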
@@ -279,5 +280,5 @@ jobs:
#### Troubleshooting
-Compared to Gitlab CI, I think Actions is simpler to use but it has it's quirks - the main difficulty I found was around sharing artifacts between jobs. The latest (v4) upload-artifact and download-artifact actions are not supported for some reason, so I had to rely on the deprecated v3 version.
+Compared to GitLab CI, I think Actions is simpler to use, but it has its quirks - the main difficulty I found was around sharing artifacts between jobs. The latest (v4) upload-artifact and download-artifact actions are not supported for some reason, so I had to rely on the deprecated v3 versions.
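A minimal sketch of the v3 artifact hand-off between the two jobs - the artifact name and output path are assumptions:
```
# Hypothetical hand-off of the built site between jobs using the v3 actions
# In the build-quartz job:
- uses: actions/upload-artifact@v3
  with:
    name: quartz-site   # assumed artifact name
    path: public        # assumed Quartz build output directory

# In the deploy job:
- uses: actions/download-artifact@v3
  with:
    name: quartz-site
    path: public
```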
Aside from that, I encountered some issues with the "kubectl cp" command, as it could not preserve the original file permissions when copying the Quartz files into the PVC - I had to copy them to a temp location and change their ownership to UID 1000 and GID 1000, as the NFS PVC did not allow files owned by root (UID 0, GID 0).
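A sketch of that workaround as a workflow step - the pod name, namespace, and paths are assumptions:
```
# Hypothetical workaround - copy to a temp dir in the pod, fix ownership,
# then move into the PVC-backed web root; pod name and paths are assumed
- name: Copy site into the PVC
  run: |
    kubectl cp public nextcloud/digital-garden-pod:/tmp/public
    kubectl exec -n nextcloud digital-garden-pod -- sh -c \
      "chown -R 1000:1000 /tmp/public && cp -a /tmp/public/. /usr/share/nginx/html/"
```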