Building Shades

Goals

  • pages written in Markdown, deployed as a static site
  • pages pushed to git, rendered automatically without user interaction
  • low-cost solution with reasonable performance

Infrastructure

  • kubernetes cluster
  • object storage
  • domain
  • git
  • linux workstation

Steps

From Zero to First Post

  • install hugo binary
  • create a git repository and, if not public, create an application token for read access
  • create a bucket where your static files for the webpage are stored
  • install kubernetes manifests for
    • cronjob
      • renders static site using hugo
      • pushes to s3
    • nginx
      • serves the static website from s3 over http
    • ingress
      • handles tls termination at the edge
  • configure hugo
  • create a post
  • test locally served page

Technical Details

Hugo Stuff

  • install hugo binary: https://gohugo.io/installation/
    • e.g. pacman -S hugo
    • hugo version # verify the installation
  • create git repository
    • create application token
      • permissions: repository read
    • git clone <repo>
  • create a bucket
    • Hetzner Cloud
      • public access
      • no versioning
  • hugo new site garden --force
  • edit hugo.toml and look for a nice theme here: https://themes.gohugo.io/
    • set theme in hugo.toml like theme = 'example'
  • create a post with hugo new content content/posts/something-new.md
  • serve locally hugo server --buildDrafts
  • remember to change draft = true to draft = false in the front matter of the markdown file
  • commit to repository
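
Putting these steps together, a minimal sketch of the whole local workflow (the repository URL matches the GIT_REPO example below; LoveIt is the theme the cronjob image below is pinned for, though its submodule URL here is an assumption):

  # clone the repository created above
  git clone https://myrepo.domain/organisation/garden.git
  hugo new site garden --force   # scaffold into the existing checkout
  cd garden
  git submodule add https://github.com/dillonzq/LoveIt.git themes/LoveIt
  echo "theme = 'LoveIt'" >> hugo.toml
  hugo new content content/posts/something-new.md
  hugo server --buildDrafts      # preview at http://localhost:1313
  git add -A && git commit -m 'first post' && git push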

Kubernetes Stuff

Example for a Hetzner Cloud NBG1 bucket

In addition to the cronjob, you also need a serviceaccount named hugo-render and two secrets:

  • git-credentials with the keys username and token
  • objectstorage with the keys access-key and secret-key

Remember to adjust GIT_REPO and S3_BUCKET_NAME

If your bucket is not in NBG1 on Hetzner Cloud, also adjust S3_REGION and S3_ENDPOINT
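
A sketch of creating the serviceaccount and both secrets with kubectl (all literal values are placeholders):

  kubectl create serviceaccount hugo-render
  kubectl create secret generic git-credentials \
    --from-literal=username=<git-user> \
    --from-literal=token=<application-token>
  kubectl create secret generic objectstorage \
    --from-literal=access-key=<access-key> \
    --from-literal=secret-key=<secret-key>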

Cronjob
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hugo-site-deployer
spec:
  schedule: "*/15 * * * *"  # Runs every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: hugo-render
          volumes:
            - name: site-output
              emptyDir: {}
          initContainers:
          - name: hugo
            image: docker.io/hugomods/hugo:ci-0.143.1 #pinned to last supported version for the LoveIt theme
            command:
            - /bin/sh
            - -c
            - |
              # Fail fast if any step errors out
              set -e

              # Construct the HTTPS clone URL from the Git credentials
              git clone "https://${GIT_USERNAME}:${GIT_TOKEN}@${GIT_REPO}" /src
              cd /src

              # Build the Hugo site into the shared volume
              hugo --minify -d /output
            volumeMounts:
              - name: site-output
                mountPath: /output
            env:
            - name: GIT_REPO
              value: "myrepo.domain/organisation/garden.git"
            - name: GIT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: git-credentials
                  key: username
            - name: GIT_TOKEN
              valueFrom:
                secretKeyRef:
                  name: git-credentials
                  key: token
          containers:
          - name: s3-uploader
            image: minio/mc  # MinIO client image
            command:
              - /bin/sh
              - -c
              - |
                # Configure the MinIO client with the S3 endpoint and credentials
                mc alias set s3 "$S3_ENDPOINT" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"

                # Mirror the Hugo build output into the S3 bucket,
                # removing files that no longer exist locally
                mc mirror /output "s3/$S3_BUCKET_NAME" --overwrite --remove

            # Mount the shared volume to read the Hugo build output
            volumeMounts:
              - name: site-output
                mountPath: /output
            env:
              - name: AWS_ACCESS_KEY_ID
                valueFrom:
                  secretKeyRef:
                    name: objectstorage
                    key: access-key
              - name: AWS_SECRET_ACCESS_KEY
                valueFrom:
                  secretKeyRef:
                    name: objectstorage
                    key: secret-key
              - name: S3_BUCKET_NAME
                value: "the-infamous-example-bucket"
              - name: S3_ENDPOINT
                value: "https://nbg1.your-objectstorage.com"
              - name: S3_REGION
                value: "nbg1"
          restartPolicy: OnFailure
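
To test the pipeline without waiting for the schedule, the cronjob can be triggered by hand (the job name is arbitrary):

  kubectl create job --from=cronjob/hugo-site-deployer manual-deploy-1
  kubectl logs job/manual-deploy-1 -c hugo          # build log
  kubectl logs job/manual-deploy-1 -c s3-uploader   # upload log
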
Serving Static Sites from Object Storage
Nginx Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-static-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-static-proxy
  template:
    metadata:
      labels:
        app: nginx-static-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginx-cache
          mountPath: /var/cache/nginx
        - name: nginx-run
          mountPath: /run
        securityContext:
          runAsUser: 1000
          runAsGroup: 1000
          allowPrivilegeEscalation: false
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-proxy-config
      - name: nginx-cache
        emptyDir: {}
      - name: nginx-run
        emptyDir: {}
Nginx Configmap

Remember to insert the correct bucket endpoint for <bucketname>.<endpoint> (with the example values above: the-infamous-example-bucket.nbg1.your-objectstorage.com)

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }

    http {
      server {
        listen 8080;

        location / {
          # rewrite directory-style URLs to their index.html before proxying
          rewrite ^(.*/)$ $1index.html break;
          proxy_pass https://<bucketname>.<endpoint>;
        }
      }
    }
Nginx Service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx-static-proxy
spec:
  selector:
    app: nginx-static-proxy
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
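
Before wiring up the ingress, the proxy can be smoke-tested with a port-forward:

  kubectl port-forward svc/nginx-service 8080:8080 &
  curl -I http://localhost:8080/   # should return the rendered index.html
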
Expose via IngressRoute
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: garden
spec:
  entryPoints:
    - websecure
  routes:
  - match: "Host(`www.vaduzz.de`) && PathPrefix(`/`)"
    kind: Rule
    services:
    - name: nginx-service
      port: 8080
      scheme: http
  tls:
    certResolver: letsencrypt
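
Once DNS for www.vaduzz.de points at the ingress, a quick end-to-end check of routing and TLS termination:

  curl -I https://www.vaduzz.de/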

Optimization

Looking closer at my nginx config, I noticed that I had forgotten to add a cache, so here is an updated config. Keep in mind that on Talos, emptyDir volumes are backed by the host's filesystem, so I kept the cache at 1 GB max (max_size=1g).

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }

    http {
      proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=bucket_cache:10m max_size=1g inactive=48h use_temp_path=off;
      server {
        listen 8080;

        location / {
          proxy_cache bucket_cache;
          proxy_cache_key "$scheme://$host$request_uri"; 
          proxy_cache_valid 200 304 48h;
          proxy_cache_lock on;
          proxy_cache_revalidate on;
          expires 1y;
          # rewrite directory-style URLs to their index.html before proxying
          rewrite ^(.*/)$ $1index.html break;
          proxy_pass https://<bucketname>.<endpoint>;
        }
      }
    }
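
To confirm the cache actually fills up after a few requests, peek into the emptyDir backing it (note that each replica keeps its own cache):

  kubectl exec deploy/nginx-static-proxy -- ls /var/cache/nginx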

Does it work?

Using siege to run 50 concurrent workers against a page…
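
For example (hostname taken from the IngressRoute above; duration is arbitrary):

  siege -c 50 -t 1M https://www.vaduzz.de/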

A look into the request metrics at the ingress controller shows a slightly better Apdex score, so caching seems to have improved response times. [Grafana dashboard snippet: spikes in requests per second of up to 60; Apdex below 0.75 in the first case and below 0.85 in the second.]