This hands-on exercise guides you in creating a ConfigMap for application settings, a Secret for credentials, and a PersistentVolumeClaim for durable storage. You will then deploy a Pod that consumes all three, demonstrating how to manage a complete, stateful application component.
This practice requires an active Kubernetes cluster, such as one running via Minikube or Kind, with kubectl configured to interact with it.
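A quick way to confirm that kubectl can reach your cluster before proceeding:
kubectl cluster-info
kubectl get nodes
Both commands should return without errors and list at least one node.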
First, we will create the configuration objects that will be injected into our application Pod. We'll start with a ConfigMap for non-sensitive data and then a Secret for a mock API key.
Create the ConfigMap:
Use the kubectl create configmap command to generate a ConfigMap named app-config from a literal value. This ConfigMap will hold its data in a simple properties file format.
kubectl create configmap app-config \
--from-literal=app.properties='
greeting=Hello
log.level=INFO
'
Create the Secret:
Next, create a Secret named api-credentials. We will use the --from-literal flag again to provide a mock API key. Remember, while Kubernetes stores Secrets as base64 encoded strings, they are not encrypted by default in etcd and should be treated as sensitive.
kubectl create secret generic api-credentials \
--from-literal=API_KEY='abc-123-def-456'
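If you are curious how the value is stored, you can pull it back out and decode it yourself; this is only base64 decoding, not decryption:
kubectl get secret api-credentials -o jsonpath='{.data.API_KEY}' | base64 --decode
This prints abc-123-def-456, which illustrates why access to Secrets needs to be restricted.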
Verify Creation:
You can inspect the objects you just created using kubectl get and kubectl describe. For example, to view the ConfigMap:
kubectl get configmap app-config -o yaml
You will see the data you provided stored under the data field.
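The relevant portion of the output should look roughly like this (system metadata omitted, and the exact layout of the multi-line value can vary):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |
    greeting=Hello
    log.level=INFO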
With the configuration ready, the next step is to request storage. We accomplish this by creating a PersistentVolumeClaim (PVC). On most local development clusters and cloud providers, a default StorageClass is configured to dynamically provision a PersistentVolume (PV) that satisfies the claim.
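You can check which StorageClass will handle the claim by listing the classes in your cluster; the one marked (default) serves claims that do not name a class explicitly (on Minikube this is typically called standard):
kubectl get storageclass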
Define the PersistentVolumeClaim:
Create a file named pvc.yaml with the following content. This manifest requests a small, 1Gi volume with ReadWriteOnce access, meaning it can be mounted as read-write by a single node.
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Apply and Verify the PVC: Apply the manifest to your cluster.
kubectl apply -f pvc.yaml
Now, check the status of the PVC. It should quickly transition from Pending to Bound, indicating that a PersistentVolume has been successfully provisioned and bound to it.
kubectl get pvc my-data-claim
You should see output similar to this:
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-data-claim   Bound    pvc-a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6   1Gi        RWO            standard       15s
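If the claim stays Pending, your StorageClass may use the WaitForFirstConsumer volume binding mode, in which case the claim only binds once a Pod that uses it is scheduled (that happens in a later step). You can inspect the events on the claim for details:
kubectl describe pvc my-data-claim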
Now we will tie everything together. The following diagram shows the relationships between the Pod we are about to create and the configuration and storage resources it depends on.
The stateful-app Pod consumes the Secret as an environment variable, mounts the ConfigMap as a file, and mounts the PersistentVolumeClaim to provide a durable filesystem.
Define the Pod:
Create a file named stateful-pod.yaml. This manifest defines a Pod that:
Injects the api-credentials Secret as environment variables.
Defines two volumes: one for our ConfigMap and one for our PersistentVolumeClaim.
Mounts the ConfigMap volume at /etc/config, making the app.properties key available as a file.
Mounts the PersistentVolumeClaim volume at /data for persistent storage.
# stateful-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
  - name: app-container
    image: alpine
    # Keep the container running
    command: ["/bin/sh", "-c", "sleep 3600"]
    envFrom:
    - secretRef:
        name: api-credentials
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
    - name: data-volume
      mountPath: /data
  volumes:
  - name: config-volume
    configMap:
      name: app-config
  - name: data-volume
    persistentVolumeClaim:
      claimName: my-data-claim
We use a simple alpine image and a sleep command to keep the container running so we can inspect it.
Deploy the Pod: Apply the Pod manifest to your cluster.
kubectl apply -f stateful-pod.yaml
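Give the Pod a moment to pull the image and start, then confirm it reaches the Running state:
kubectl get pod stateful-app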
Let's confirm that our configuration and storage were correctly attached to the running Pod. We will use kubectl exec to run commands inside the container.
Verify Environment Variable from Secret: Execute a command inside the Pod to print its environment variables and filter for our API key.
kubectl exec stateful-app -- env | grep API_KEY
The output should be API_KEY=abc-123-def-456, confirming the Secret was injected.
Verify Mounted File from ConfigMap:
Check the contents of the file mounted from the ConfigMap.
kubectl exec stateful-app -- cat /etc/config/app.properties
This will display the content we defined earlier: greeting=Hello and log.level=INFO.
Test Data Persistence:
Now, write a file to the persistent storage volume mounted at /data.
kubectl exec stateful-app -- sh -c "echo 'Stateful data survives restarts' > /data/message.txt"
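If you want to confirm the write succeeded before deleting the Pod, read the file back:
kubectl exec stateful-app -- cat /data/message.txt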
To prove that the data is persistent, delete the Pod and recreate it. The PersistentVolumeClaim and the underlying PersistentVolume will not be affected.
kubectl delete pod stateful-app
kubectl apply -f stateful-pod.yaml
Wait for the new Pod to be in the Running state. You can check its status with kubectl get pod stateful-app. Once it is running, try to read the file you created earlier.
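Alternatively, you can block until the new Pod is ready instead of polling:
kubectl wait --for=condition=Ready pod/stateful-app --timeout=120s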
kubectl exec stateful-app -- cat /data/message.txt
The command will output Stateful data survives restarts. This confirms that the data written to the volume persisted even after the Pod was completely destroyed and replaced.
To keep your cluster clean, delete the resources created during this practice session.
kubectl delete pod stateful-app
kubectl delete pvc my-data-claim
kubectl delete secret api-credentials
kubectl delete configmap app-config
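Dynamically provisioned volumes typically use the Delete reclaim policy, so removing the PVC also removes the underlying PersistentVolume. You can confirm that nothing is left behind:
kubectl get pv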
You have now successfully managed both configuration and stateful data for an application, two fundamental skills for running production-grade workloads on Kubernetes.