
Post 6: Configuring Persistent Storage with Longhorn and NFS

In the last post, we got our k3s cluster up and running with MetalLB handing out external IPs. Now it's time to sort out persistent storage for our workloads. In this post, I'll walk through setting up Longhorn for application config storage and NFS for access to existing media files.


Goals

  • Enable dynamic storage provisioning with Longhorn
  • Connect to an existing NFS server for read/write media access
  • Verify PVC creation in Portainer

Step 1: Install Longhorn

First, ensure each of your k3s nodes has a dedicated secondary drive for Longhorn to use. In my case, each VM has a 50GB second disk.
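
Longhorn stores replica data under /var/lib/longhorn on each node by default, so one straightforward approach is to format the second disk and mount it at that path before installing. A minimal sketch, assuming the disk shows up as /dev/sdb (check with lsblk first):

# Run on each node; verify the device name with lsblk before formatting
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /var/lib/longhorn
echo '/dev/sdb /var/lib/longhorn ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo mount /var/lib/longhorn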

Install Longhorn using Helm:

helm repo add longhorn https://charts.longhorn.io
helm repo update

kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set persistence.defaultClass=true \
  --set persistence.defaultFsType=ext4
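
The Longhorn components can take a few minutes to come up. You can watch the rollout with:

kubectl get pods -n longhorn-system --watch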

Once everything is Running, access the UI with a port-forward:

kubectl port-forward svc/longhorn-frontend -n longhorn-system 8080:80

Visit http://localhost:8080 in your browser.
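
Before moving on, confirm that Longhorn registered itself as the default StorageClass (that's what the persistence.defaultClass=true flag was for):

kubectl get storageclass

The longhorn class should be marked (default).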


Step 2: Create Longhorn PVC for App Configs

Here’s a sample PVC YAML you can use to dynamically provision storage from Longhorn:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kavita-config
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: longhorn

Apply with:

kubectl apply -f kavita-config-pvc.yaml

This will show up in Portainer under Storage > PVCs.
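
You can also confirm it from the CLI; once Longhorn provisions the volume, the STATUS column should read Bound:

kubectl get pvc kavita-config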


Step 3: Create NFS Persistent Volume & Claim for Media

Since I already have media hosted on a TrueNAS server (10.0.0.3), I’ll set up a PersistentVolume and a matching PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/tank/data
    server: 10.0.0.3
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
  storageClassName: "" # keep the default Longhorn class from being applied
  volumeName: media-pv

Apply it:

kubectl apply -f media-pvc.yaml
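
Both objects should report a Bound status:

kubectl get pv media-pv
kubectl get pvc media-pvc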

Note: The 500Gi value is effectively informational; Kubernetes doesn't enforce capacity on NFS volumes, so it's only used to match the claim to the PV. Also note the storageClassName: "" on the PVC: because we made Longhorn the default StorageClass earlier, omitting the field would cause the default class to be applied, and the claim would never bind to our static NFS PV.
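
One prerequisite that's easy to miss: the kubelet mounts NFS volumes with the node's own NFS client, so every k3s node needs the client tools installed. On my Debian-based nodes that's the nfs-common package (use nfs-utils on RHEL-family distros):

# Run on every k3s node
sudo apt install -y nfs-common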


Step 4: Test the PVCs in a Pod

Here’s a quick pod spec to validate Longhorn and NFS PVCs:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-tester
spec:
  containers:
  - name: alpine
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /config
      name: config-vol
    - mountPath: /media
      name: media-vol
  volumes:
  - name: config-vol
    persistentVolumeClaim:
      claimName: kavita-config
  - name: media-vol
    persistentVolumeClaim:
      claimName: media-pvc
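
Apply the pod, then exercise both mounts from inside it (the test file name is arbitrary):

kubectl apply -f pvc-tester.yaml
kubectl wait --for=condition=Ready pod/pvc-tester --timeout=120s

# Write to the Longhorn volume, then list the NFS share
kubectl exec pvc-tester -- touch /config/write-test
kubectl exec pvc-tester -- ls /media

If the touch succeeds and you see your media listed, both storage backends are wired up correctly.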

Recap

  • Longhorn is installed and ready for dynamic provisioning
  • Media PVC is hooked up to our TrueNAS NFS share
  • PVCs can be visualized and tested via Portainer

In the next post, we’ll start deploying actual applications—beginning with Kavita as our self-hosted digital library.
