In this blog post, you’ll learn how to create and configure persistent storage in the Xelon Kubernetes Service.
Kubernetes offers essential features for managing containerized applications. This open-source system handles the orchestration of containers—such as those running in Docker environments—and ensures they are coordinated across multiple hosts. When needed, Kubernetes can automatically scale the number of active containers based on current load. If a container fails, the system restarts or replaces it to maintain application availability. Incoming network traffic is distributed evenly across containers, helping ensure performance and stability. Kubernetes also supports rolling updates of applications and provides simple rollback options in case of issues.
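To make these concepts concrete, here is a minimal Deployment manifest as a sketch (the name demo-app and the nginx image are placeholders, not part of the Xelon setup): Kubernetes keeps the requested number of replicas running, replaces failed pods, and performs a rolling update when the image changes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # placeholder name
spec:
  replicas: 3                   # Kubernetes keeps three pods running and replaces failed ones
  strategy:
    type: RollingUpdate         # pods are replaced gradually; kubectl rollout undo rolls back
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.27     # example image
          ports:
            - containerPort: 80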
While Kubernetes is already widely adopted by software developers and SaaS companies, it’s becoming increasingly relevant for managed service providers as well. However, building a stable, highly available control plane requires deep expertise in container orchestration and involves considerable effort for maintenance, security updates, and scaling. The Xelon Kubernetes Service offers a fully managed Kubernetes solution that can be resold under your own brand, thus reducing operational overhead while providing powerful infrastructure capabilities.
As part of our Kubernetes blog series, we explain here how to set up and run Persistent Storage in the Xelon Kubernetes Service.
By default, containers don't persist the data they produce. When a container is deleted, its data gets destroyed as well. Containerized applications that require data persistence therefore need a storage backend that isn't destroyed when the application's container terminates. That's why Xelon HQ lets you create a Persistent Storage – a service that stores your data outside the pods.
Within the Persistent Storage page, click the green Create Persistent Storage button. A wizard will open where you specify the Persistent Storage's name and its volume (from 5 to 1000 GB) and attach it to an organization and one of its devices.
Once everything is set, click Deploy Persistent Storage.
The new Persistent Storage will appear in the list of all Persistent Storages, where you can extend its volume, reattach it to another device, or delete it.
Prerequisites: an NFS server node that is reachable from your Kubernetes nodes.
Run the following commands on your NFS server node:
apt update && apt -y upgrade
apt install -y nfs-server
mkdir /data
cat << EOF >> /etc/exports
/data 192.168.12.0/24(rw,no_subtree_check,no_root_squash)
EOF
systemctl enable --now nfs-server
exportfs -ar
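To confirm the share is exported correctly, you can check it directly on the NFS node (192.168.12.0/24 is the example network range used above; adjust it to your environment):

# List the active exports on the NFS node; /data should appear for 192.168.12.0/24
exportfs -v
# Alternatively, query the NFS server from any host in the allowed network
# (requires the NFS client utilities, e.g. nfs-common, on that host)
showmount -e IP_OF_THE_NFS_NODE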
You can also deploy the NFS server in a clustered way with high availability support.
To provision Persistent Storage dynamically using a StorageClass, you need to install an NFS provisioner. We'll use the nfs-subdir-external-provisioner for that purpose. The following commands install everything we need using the Helm package manager:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --create-namespace \
  --namespace nfs-provisioner \
  --set nfs.server=IP_OF_THE_NFS_NODE \
  --set nfs.path=/data
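Once the chart is deployed, you can verify that the provisioner pod is running and that the chart's default StorageClass (nfs-client, which we reference in the manifest below) has been created:

# The provisioner pod should be in the Running state
kubectl get pods --namespace nfs-provisioner
# The chart creates a StorageClass named nfs-client by default
kubectl get storageclass nfs-client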
To create the PersistentVolumeClaim, use the following manifest:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
  labels:
    storage.k8s.io/name: nfs
    storage.k8s.io/created-by: mstoeckle
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi
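After saving the manifest (here as nfs-test-pvc.yaml, a file name chosen only for illustration), apply it and check that the claim gets bound:

kubectl apply -f nfs-test-pvc.yaml
kubectl get pvc nfs-test   # the STATUS column should change to Bound

A pod can then mount the claim like any other volume. The following is a minimal sketch (pod name and image are placeholders, not part of the Xelon setup):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod          # example name
spec:
  containers:
    - name: app
      image: nginx:1.27       # example image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # the NFS-backed volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-test   # references the PersistentVolumeClaim created above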
NFS storage has some characteristics of its own that you need to consider before using it as the storage backend for your workloads.
The Xelon Kubernetes Service supports software developers, SaaS companies and managed service providers in building and operating secure and highly available applications and microservices. You can find out more about our Kubernetes service here.