How to Share Storage Between Kubernetes Pods?

Sharing storage between Kubernetes pods is essential for applications that need to exchange data. Kubernetes gives us several ways to do this: Persistent Volumes, ConfigMaps, StatefulSets, EmptyDir volumes, and the Network File System (NFS). Each of these methods lets pods access shared storage, which improves how we manage data in our cloud-native applications.

In this article, we look at these approaches one by one. We start with Persistent Volumes for durable storage, then ConfigMaps for sharing configuration data, StatefulSets for applications that must keep their state, EmptyDir volumes for temporary storage, and finally NFS for larger shared storage needs. At the end, we answer some common questions about sharing storage in Kubernetes.

  • How can we share storage between Kubernetes pods?
  • Using Persistent Volumes to share storage between Kubernetes pods
  • How to use ConfigMaps for sharing configuration data in Kubernetes pods
  • Using StatefulSets for shared storage in Kubernetes
  • Sharing storage with EmptyDir volumes in Kubernetes
  • How to use NFS for shared storage across Kubernetes pods
  • Frequently Asked Questions

Using Persistent Volumes to Share Storage Between Kubernetes Pods

In Kubernetes, we can use Persistent Volumes (PVs) to share storage between pods. A PV is a piece of storage in the cluster, provisioned by an administrator or created dynamically through Storage Classes. Here is a simple way to set up Persistent Volumes for shared storage.

Step 1: Define a Persistent Volume

First, we need to create a YAML file called pv.yaml to define a Persistent Volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /path/to/nfs
    server: nfs-server.example.com

Step 2: Define a Persistent Volume Claim

Next, we create another YAML file called pvc.yaml for a Persistent Volume Claim (PVC):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Step 3: Deploy Pods Using the PVC

Now we will create a YAML file called pod.yaml for the pods that will use the shared storage:

apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
    - name: app-container
      image: my-app-image
      volumeMounts:
        - mountPath: /data
          name: shared-storage
  volumes:
    - name: shared-storage
      persistentVolumeClaim:
        claimName: shared-pvc
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
spec:
  containers:
    - name: app-container
      image: my-app-image
      volumeMounts:
        - mountPath: /data
          name: shared-storage
  volumes:
    - name: shared-storage
      persistentVolumeClaim:
        claimName: shared-pvc

Step 4: Apply the Configurations

We can run these commands to create the Persistent Volume, Persistent Volume Claim, and Pods:

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
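
After applying, we can confirm that the claim bound to the volume (a quick check, using the names above):

kubectl get pv shared-pv
kubectl get pvc shared-pvc

The STATUS column should show Bound for both once the claim matches the volume's capacity and access mode.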

Accessing Shared Storage

Now both pod-1 and pod-2 mount the same storage at /data. Any data one pod writes is visible to the other, because both pods reference the same Persistent Volume Claim.
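
We can verify this with a quick test that writes a file from pod-1 and reads it from pod-2. This assumes my-app-image (a placeholder image) ships a shell:

kubectl exec pod-1 -- sh -c 'echo hello > /data/test.txt'
kubectl exec pod-2 -- cat /data/test.txt

The second command should print hello, confirming that both pods see the same volume.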

If you want to know more about storage in Kubernetes, you can check What are Persistent Volumes and Persistent Volume Claims.

How to Use ConfigMaps for Sharing Configuration Data in Kubernetes Pods

ConfigMaps in Kubernetes keep configuration separate from image content, which makes our containerized applications more portable. They give us a simple way to share configuration data among many pods. Let's see how to create and use ConfigMaps for this.

Creating a ConfigMap

We can create a ConfigMap from literal values, files, or directories. Here is a simple example using literal values:

kubectl create configmap my-config --from-literal=app.name=myApp --from-literal=app.version=1.0
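
We can get the same keys from a file instead. With the --from-env-file flag, every KEY=VALUE line in the file becomes an entry in the ConfigMap (app.properties here is a hypothetical file):

# app.properties contains:
#   app.name=myApp
#   app.version=1.0
kubectl create configmap my-config --from-env-file=app.properties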

Using ConfigMap in Pods

To use the ConfigMap in a pod, we reference it in the pod specification. Here is an example that exposes ConfigMap keys as environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo
spec:
  containers:
  - name: my-container
    image: my-image
    env:
    - name: APP_NAME
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: app.name
    - name: APP_VERSION
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: app.version

Mounting ConfigMap as a Volume

We can also mount a ConfigMap as a volume. Here is another example:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: my-container
    image: my-image
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: my-config

Updating the ConfigMap

When we want to update a ConfigMap, we can use the kubectl edit command or apply a new config file. Here is an example:

kubectl edit configmap my-config

We can also write the updated values to a manifest file, for example configmap.yaml. Note that ConfigMap data values must be strings, so a numeric-looking value like 1.1 needs quotes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  app.name: myUpdatedApp
  app.version: "1.1"

And we apply it with:

kubectl apply -f configmap.yaml

Accessing ConfigMap Data

We can access ConfigMap data through environment variables or mounted files, so all containers in a pod, or across many pods, share the same configuration. One thing to keep in mind: environment variables are read once at container start, so a pod must be restarted to pick up an updated ConfigMap, while keys mounted as a volume are refreshed automatically by the kubelet after a short delay.
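
A quick way to check both paths from the examples above, assuming the pods are running and their images provide the usual shell utilities:

# read the ConfigMap through environment variables
kubectl exec configmap-demo -- printenv APP_NAME APP_VERSION

# read it through the mounted volume; each key appears as a file under the mountPath
kubectl exec configmap-volume-demo -- cat /etc/config/app.name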

For more information about managing application configuration in Kubernetes, we can check out Kubernetes ConfigMaps.

Using StatefulSets for Shared Storage in Kubernetes

StatefulSets in Kubernetes manage stateful applications. They give each replica stable storage and a stable network identity, which suits applications that must keep their data across restarts. We can combine StatefulSets with Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) for shared storage.

Simple Example of StatefulSet with Shared Storage

Here is a simple example of a StatefulSet that requests storage for each replica through a volume claim template:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: "my-service"
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp-image
        volumeMounts:
        - name: shared-storage
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: shared-storage
    spec:
      accessModes: [ "ReadWriteMany" ]
      resources:
        requests:
          storage: 5Gi

Important Points

  • VolumeClaimTemplates: Every pod in the StatefulSet gets its own PVC from this template. That means the volumes above are per-pod, not shared; to truly share one volume across all replicas, reference a single ReadWriteMany PVC in the pod template instead, as sketched after this list.
  • Access Modes: Setting accessModes to ReadWriteMany lets many pods read and write a volume at the same time, provided the underlying storage supports it.
  • Stable Network Identities: Each pod gets a predictable name built from the StatefulSet name and its ordinal index, which gives it a stable identity.
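
Here is a minimal sketch of the truly shared variant: we drop volumeClaimTemplates and point every replica at one pre-created ReadWriteMany claim (shared-pvc is a hypothetical claim like the one from the Persistent Volumes section):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-shared-statefulset
spec:
  serviceName: "my-service"
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp-image
        volumeMounts:
        - name: shared-storage
          mountPath: /data
      # a single claim shared by all replicas, instead of a per-pod volumeClaimTemplates entry
      volumes:
      - name: shared-storage
        persistentVolumeClaim:
          claimName: shared-pvc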

Storage Options

We can use different storage options that support ReadWriteMany access mode. Some of them are:

  • NFS: Network File System
  • GlusterFS: A scalable network filesystem
  • CephFS: A distributed filesystem in Ceph

How to Deploy

To use the above StatefulSet setup, we save it as statefulset.yaml. Then we run:

kubectl apply -f statefulset.yaml
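
Listing the claims afterwards shows the per-pod naming: each PVC is named after the claim template plus the pod name, which includes the ordinal index:

kubectl get pvc
# expect three claims: shared-storage-my-statefulset-0, -1, and -2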

This setup lets us run stateful applications whose data must survive across many pods and restarts. To learn more about Kubernetes StatefulSets, we can check how do I manage stateful applications with StatefulSets.

Sharing Storage with EmptyDir Volumes in Kubernetes

In Kubernetes, EmptyDir volumes let containers in the same pod share storage. Kubernetes creates an EmptyDir volume when a pod is assigned to a node, and the volume exists for as long as the pod runs on that node. All containers in the pod can read and write to it, which gives us temporary storage that disappears when the pod ends.

Configuration Example

Here is a simple example to show how we use EmptyDir volumes in a pod:

apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-pod
spec:
  containers:
  - name: app-container-1
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: shared-storage
  - name: app-container-2
    image: busybox
    command: ['sh', '-c', 'echo Hello from container 2 > /mnt/hello.txt && sleep 3600']
    volumeMounts:
    - mountPath: /mnt
      name: shared-storage
  volumes:
  - name: shared-storage
    emptyDir: {}

Key Points

  • Scope: We can only access EmptyDir volumes from containers in the same pod.
  • Lifecycle: The system creates the volume when the pod starts. It deletes it when the pod ends.
  • Use Cases: These volumes suit temporary needs like caching or scratch space for data processing; a memory-backed variant is sketched after this list.
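
For fast scratch space, an EmptyDir volume can be backed by the node's memory instead of disk. A minimal sketch (the pod and volume names are illustrative; the sizeLimit guards against the volume consuming too much memory):

apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-pod
spec:
  containers:
  - name: worker
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /scratch
      name: scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs on the node, cleared when the pod ends
      sizeLimit: 256Mi # the pod is evicted if usage grows past this limit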

Accessing Data

In the example above, the nginx container can read what the busybox container writes: both mount the same EmptyDir volume, at /usr/share/nginx/html in one container and at /mnt in the other, so hello.txt written to /mnt appears under nginx's web root (the sleep in the busybox command simply keeps that container, and so the pod, running). EmptyDir is therefore a good way to share short-lived data between containers in a pod without needing a permanent storage solution.
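
We can confirm this from outside the pod (assuming the example pod is running):

kubectl exec shared-emptydir-pod -c app-container-1 -- cat /usr/share/nginx/html/hello.txt
# prints: Hello from container 2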

For more details about managing volumes in Kubernetes, we can check what are Kubernetes volumes and how do I persist data.

How to Implement NFS for Shared Storage Across Kubernetes Pods

To use NFS (Network File System) for shared storage across Kubernetes pods, we set up an NFS server and then configure a Persistent Volume (PV) and a Persistent Volume Claim (PVC) in our cluster. Here are the steps.

Step 1: Set Up an NFS Server

We can run the NFS server on a separate machine or as a pod inside the Kubernetes cluster. Below is an example that runs one as a Kubernetes Deployment. Two caveats: the emptyDir backing the export here is ephemeral, so real data should live on a durable volume instead, and NFS server images generally need elevated privileges, so check the image's documentation for its exact requirements.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:latest
        ports:
        - containerPort: 2049
        volumeMounts:
        - mountPath: /nfsshare
          name: nfs-volume
      volumes:
      - name: nfs-volume
        emptyDir: {}

Step 2: Expose NFS Server via a Service

Next, we need to create a service to expose the NFS server:

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
  - port: 2049
    targetPort: 2049
  selector:
    app: nfs-server

Step 3: Create Persistent Volume (PV)

Now we define a Persistent Volume that points to the NFS server:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfsshare
    server: nfs-server
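
One caveat: the nfs.server field is resolved by the node, not inside the cluster's DNS, so a bare Service name like nfs-server may not resolve on every cluster. If mounting fails, substitute the Service's ClusterIP, which we can look up with:

kubectl get svc nfs-server -o jsonpath='{.spec.clusterIP}'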

Step 4: Create Persistent Volume Claim (PVC)

Next, we create a Persistent Volume Claim to ask for storage from the Persistent Volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Step 5: Mount the PVC in Pods

Now we can use this PVC in our pods to share storage:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image
        volumeMounts:
        - mountPath: /mnt/data
          name: nfs-storage
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: nfs-pvc

Accessing Shared Storage

Now both replicas of app-deployment mount the shared NFS storage at /mnt/data, so data written by one pod is immediately visible to the other.
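
As with the Persistent Volume example, we can verify by writing from one replica and reading from the other. The pod names below are placeholders taken from the kubectl get pods output, and the image is assumed to ship a shell:

kubectl get pods -l app=my-app
kubectl exec <first-pod-name> -- sh -c 'echo shared > /mnt/data/test.txt'
kubectl exec <second-pod-name> -- cat /mnt/data/test.txt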

For more information on Persistent Volumes and Persistent Volume Claims, you can check what are Persistent Volumes and Persistent Volume Claims.

Frequently Asked Questions

1. How do Kubernetes pods share storage?

Kubernetes pods can share storage in a few ways: through Persistent Volumes (PVs), ConfigMaps, and EmptyDir volumes. Persistent Volumes provide shared storage that outlives the pods themselves, which matters for applications like databases that need their data kept consistent and safe. To learn more about Kubernetes pods and how to set them up, you can read our article on what are Kubernetes pods and how do I work with them.

2. What are Persistent Volumes in Kubernetes?

Persistent Volumes (PVs) decouple storage from the pods that use it, so we can share storage between different pods while keeping the data safe. PVs work with many storage backends, such as NFS, AWS EBS, or Google Cloud Persistent Disk. If you want to know more about PVs and claims, please check our article on what are persistent volumes and persistent volume claims.

3. Can I use NFS for sharing storage in Kubernetes?

Yes, we can use NFS (Network File System) in Kubernetes to share storage, which lets many pods access the same data at the same time. To set it up, we usually create a Persistent Volume backed by an NFS export. For a full guide, see our article on how to implement NFS for shared storage across Kubernetes pods.

4. What is the difference between EmptyDir and Persistent Volumes?

EmptyDir is temporary storage for a Kubernetes pod: it is created when the pod starts and deleted when the pod goes away, so it suits data that does not need to outlive the pod. Persistent Volumes, by contrast, keep data across pod restarts and can be shared by many pods. To learn more about Kubernetes volumes, visit our article on what are Kubernetes volumes and how do I persist data.

5. How can I use ConfigMaps to share configuration data?

ConfigMaps in Kubernetes store configuration data as key-value pairs that many pods can share. We can mount a ConfigMap as a volume or expose it through environment variables, which makes it easy to update and manage application settings without changing code. For more details on ConfigMaps, check out our article on what are Kubernetes ConfigMaps and how do I use them.