---
title: Building this digital garden
tags:
- how-to
- cicd
- quartz
- github-actions
draft: false
date: 2025-06-22
---
### Why?
Well, in my work I deal with networks, systems, servers, services, tools, etc. Every time I get stuck with an issue, a broken config or anything really, a quick Google search leads me to some random person's web page that either gives me a solution or at least points me in the right direction.
So as a thank you to all those who spend their time and energy sharing their experience and knowledge, I decided to do the same and set up a place where I can write about what I do, learn, discover, break (and hopefully fix) and deal with.
### How?
I found out recently about this class of tools called [static site generators (SSGs)](https://en.wikipedia.org/wiki/Static_site_generator) - they can take text written in various formats and use it to generate static websites. Since I had wanted for some time to have a place to share my notes online, I felt SSGs were just the thing I needed, so I started looking around. There are many great options, but out of all of them [Quartz](https://quartz.jzhao.xyz/) felt like the one - built with TypeScript, super lightweight and comes with a clean and simple interface.
At the same time, I've been messing with K8s in my home lab for a few years and I also had to get familiar with GitHub Actions for a work project (I've mostly used GitLab CI/CD and custom scripts for CI stuff), so it all combined nicely into an opportunity to learn something new!
Anyway, let's get into it - the plan was to serve the site with Nginx, running as a pod in K8s. The content would be written in Markdown (edited with Obsidian - more on that later) and pushed to my local Gitea instance, where a GitHub Actions workflow would build the site files with Quartz and upload them to the Nginx pod's PVC.
Summarised, I had to:
- [ ] Set up an Nginx pod (PVC, cert, service, ingress, configmap and deployment)
- [ ] Create a notes repo and the GitHub Actions workflow for auto deployment
- [ ] Troubleshoot the whole thing until it works (always the fun bit)
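For context, the notes repo ends up laid out roughly like this (a sketch - only the bits that matter for the pipeline):
```
vlads-notes/
├── .gitea/
│   └── workflows/
│       └── publish.yaml   # CI workflow, covered further down
├── images/                # screenshots referenced from the notes
└── notes/                 # the Markdown notes themselves
```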
#### Nginx pod
Nothing fancy, manifests below and explanations after.
PVC:
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    namespace: my-stuff
    name: digital-garden-data-pvc
spec:
    storageClassName: nfs-client
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 10Gi
```
Certificate:
```
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: digital-garden-prod-tls
  namespace: my-stuff
spec:
  secretName: digital-garden-prod-tls
  # 90d
  duration: 2160h
  # 15d
  renewBefore: 360h
  subject:
    organizations:
      - RPI
  commonName: vlads-notes.jumpingcrab.com
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
    - client auth
  dnsNames:
    - vlads-notes.jumpingcrab.com
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    group: cert-manager.io
```
Service:
```
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: digital-garden
  name: digital-garden-svc
  namespace: my-stuff
spec:
  type: ClusterIP
  ports:
    - port: 443
      protocol: TCP
      targetPort: 80
  selector:
    app: digital-garden
```
Ingress:
```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: my-stuff
  name: digital-garden-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: 512m
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - vlads-notes.jumpingcrab.com
      # Name of the certificate (see kubectl get certificate -A)
      secretName: digital-garden-prod-tls
  rules:
    - host: vlads-notes.jumpingcrab.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # Mapping to the service (see kubectl get services -n my-stuff)
                name: digital-garden-svc
                port:
                  number: 443
```
ConfigMap:
```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: digital-garden-nginx-conf
  namespace: my-stuff
data:
  nginx.conf: |
    user nginx;
    worker_processes 3;
    error_log /var/log/nginx/error.log;
    events {
      worker_connections 10240;
    }
    http {
      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
      access_log /var/log/nginx/access.log main;
      limit_req_zone $binary_remote_addr zone=req_limit_per_ip:1m rate=1r/s;
      server {
        listen 80;
        server_name vlads-notes.jumpingcrab.com;
        root /www/data;
        index index.html;
        error_page 404 /404.html;
        server_tokens off;
        include mime.types;
        add_header X-Content-Type-Options nosniff;
        add_header X-Frame-Options DENY;
        add_header X-XSS-Protection "1; mode=block";
        add_header Referrer-Policy "no-referrer-when-downgrade";
        location ~ /\. {
          deny all;
          access_log off;
          log_not_found off;
        }
        location / {
          limit_req zone=req_limit_per_ip burst=5 nodelay;
          try_files $uri $uri.html $uri/ =404;
        }
      }
    }
```
Deployment:
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: digital-garden
  namespace: my-stuff
  labels:
    app: digital-garden
spec:
  replicas: 1
  selector:
    matchLabels:
      app: digital-garden
  template:
    metadata:
      labels:
        app: digital-garden
      name: digital-garden
    spec:
      containers:
        - name: digital-garden
          image: nginx:1.28.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: digital-garden-volume
              mountPath: /www/data
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
          resources: {}
      volumes:
        - name: digital-garden-volume
          persistentVolumeClaim:
            claimName: digital-garden-data-pvc
        - name: nginx-config
          configMap:
            name: digital-garden-nginx-conf
```
I needed a PVC to persist the data if the pod/node crashed - this one is hosted on an SSD attached to an NFS server that is exposed to the cluster via a StorageClass. The certificate is managed by [cert-manager](https://cert-manager.io/) and issued by Let's Encrypt - always good to use TLS! The service simply ties the pod to the ingress, not much to say here. The ingress uses the Nginx ingress controller and is configured with the Let's Encrypt cert to enable TLS. The ConfigMap holds a minimal Nginx config file that is mounted into the pod at "/etc/nginx/nginx.conf". Lastly, the deployment ties it all together - not much to say, it's just a single Nginx replica.
Good practice says I should add some resource requests and limits, but I'll leave that for later with the rest of the tech debt...
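For reference, applying and sanity-checking the lot looks something like this (a rough sketch - the manifest file names are just placeholders):
```
# Apply the manifests (file names are illustrative)
kubectl apply -f pvc.yaml -f certificate.yaml -f service.yaml \
  -f ingress.yaml -f configmap.yaml -f deployment.yaml

# Check that the PVC bound and the certificate got issued
kubectl get pvc -n my-stuff digital-garden-data-pvc
kubectl get certificate -n my-stuff digital-garden-prod-tls

# Confirm the pod is running and the ingress picked up the host
kubectl get pods -n my-stuff -l app=digital-garden
kubectl get ingress -n my-stuff digital-garden-ingress
```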
#### Notes repo and GitHub Actions workflow
I'm running a local Gitea instance, so I just created a new [repo](https://k3gtpi.jumpingcrab.com/vlad/vlads-notes) and saved the K8s cluster config as a repo secret called "K8S_CONF" (use secrets instead of plain variables, as the latter can be exposed in the action's logs).
![gitea_secret_setup](images/0000/gitea_secret_setup.png)
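If you need a self-contained kubeconfig to paste into the secret (certs and keys inlined rather than referenced as file paths), something like this should do it:
```
# Dump the current context as a single, portable kubeconfig
# (--flatten inlines the cert/key data, --raw stops kubectl redacting it)
kubectl config view --minify --flatten --raw
```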
I then created the ".gitea/workflows" directory structure and placed the workflow YAML file (publish.yaml) within. The workflow is split into two jobs:
- 1st job:
    - checks out the files from the repo
    - clones Quartz (from a repo clone hosted locally) into the working directory
    - copies the notes from the "./notes" directory into "./quartz-clone" and triggers the Quartz build
    - uploads the files created by Quartz as artifacts for the next job
- 2nd job:
    - downloads these artifacts
    - installs the kubectl client
    - copies the K8s config from the secret into the required path
    - gets the Nginx pod's name
    - deletes the old files from the Nginx root directory and copies in the new files built by Quartz
Contents:
```
name: Publish new notes
run-name: Build in Quartz and push to Nginx
on: [push]
jobs:
  build-quartz:
    runs-on: ubuntu-latest
    container:
      image: node:24.2
    steps:
      - name: Grab local files
        uses: actions/checkout@v4
      - name: Clone local copy of Quartz
        run: git clone https://k3gtpi.jumpingcrab.com/vlad/quartz-clone.git
      - name: Copy notes to content directory
        run: cp ./notes/* quartz-clone/content
      - name: Build Quartz
        run: cd quartz-clone && npm i && npx quartz create && npx quartz build
      - name: Upload artifact
        uses: actions/upload-artifact@v3
        with:
          name: content
          path: quartz-clone/public
  deploy:
    runs-on: ubuntu-latest
    needs: build-quartz
    steps:
      - name: Get artifacts
        uses: actions/download-artifact@v3
        with:
          name: content
          path: ./content
      - name: Install kubectl
        run: curl -LO https://dl.k8s.io/release/v1.33.0/bin/linux/arm64/kubectl && chmod +x kubectl && mv kubectl /usr/bin
      - name: Set up cluster access
        run: mkdir ~/.kube && echo "${{ secrets.K8S_CONF }}" > ~/.kube/config
      - name: Get target pod's name
        run: echo "TARGET_POD=$(kubectl get pods -n my-stuff -l app=digital-garden -o json | jq -r .items[0].metadata.name)" >> "$GITHUB_ENV"
      - name: Copy contents to pod temp folder (due to permission issues)
        run: kubectl cp content my-stuff/$TARGET_POD:/tmp
      - name: Change permissions and move files to WWW directory
        run: kubectl exec -i -n my-stuff $TARGET_POD -- bash -c "chown -R 1000:1000 /tmp/content && rm -rf /www/data/* && mv /tmp/content/* /www/data"
```
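Once the workflow goes green, a quick way to confirm the new build is actually being served:
```
# Should return a 200 and the security headers set in nginx.conf
curl -sI https://vlads-notes.jumpingcrab.com | head -n 10
```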
#### Troubleshooting
Compared to GitLab CI, I think Actions is simpler to use, but it has its quirks - the main difficulty I found was around sharing artifacts between jobs. The latest (v4) upload-artifact and download-artifact actions are not supported for some reason, so I had to rely on the deprecated v3 versions.
Aside from that, I ran into some issues with the "kubectl cp" command, as it does not preserve the original file ownership when copying the Quartz files into the PVC - I had to copy them to a temp location and change their ownership to UID 1000 and GID 1000, as the NFS-backed PVC did not allow files owned by root (UID 0, GID 0).
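If you hit something similar, checking what the copied files end up owned as inside the pod makes the problem obvious:
```
# Show numeric UID/GID of the files served by Nginx
kubectl exec -n my-stuff deploy/digital-garden -- ls -ln /www/data
```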