# Fix: Kubernetes ConfigMap Changes Not Reflected in Running Pods
## Quick Answer

How to fix Kubernetes ConfigMap updates not reaching running pods: why pods don't see updated values, how to trigger restarts, how to use live volume mounts, and how to automate ConfigMap rollouts with Reloader.
## The Error

You update a ConfigMap, but running pods still use the old configuration:

```shell
kubectl edit configmap my-app-config
# Edit and save — but the running app still reads old values

kubectl exec my-pod -- cat /etc/config/app.properties
# Shows OLD values — not the updated ones
```

Or environment variables from the ConfigMap are not updated:

```shell
kubectl exec my-pod -- printenv DATABASE_URL
# Shows the old DATABASE_URL even after updating the ConfigMap
```

Or the application restarted inside the pod but still reads old values from the mounted file.
## Why This Happens

ConfigMap updates behave differently depending on how the ConfigMap data is consumed:

- **Environment variables** (`envFrom`/`env`) — injected at pod creation time. Updating the ConfigMap does not update environment variables in running pods; a pod restart is required.
- **Volume mounts** — ConfigMap data mounted as files is updated automatically by the kubelet, but with a delay (typically 1–2 minutes). The files on disk update, but the application must re-read them — most apps only read config files on startup.
- **Immutable ConfigMaps** — if `immutable: true` is set, the ConfigMap cannot be edited at all; you must create a new ConfigMap.
- **Kubelet sync period** — the kubelet's default `--sync-frequency` is 60 seconds, plus cache propagation time, so ConfigMap volume changes are not instant.
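For reference, the two consumption patterns look like this in a pod template. This is a minimal sketch; the ConfigMap name `my-app-config` and key `DATABASE_URL` are the example names used throughout this article:

```yaml
# Env-var consumption: values are snapshotted when the pod is created
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        envFrom:
        - configMapRef:
            name: my-app-config        # all keys become env vars
        env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: my-app-config      # a single key as a single env var
              key: DATABASE_URL
```

Anything consumed this way requires a restart to pick up changes; only volume mounts (Fix 2) update in place.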
## Fix 1: Restart Pods to Pick Up ConfigMap Changes

For environment-variable-based ConfigMaps, a pod restart is always required:

```shell
# Rolling restart — replaces pods one by one (zero downtime)
kubectl rollout restart deployment/my-app

# Watch the rollout
kubectl rollout status deployment/my-app

# Verify the new pod has the updated value
kubectl exec deployment/my-app -- printenv MY_CONFIG_VALUE
```

Restart a specific pod (it will be recreated by the Deployment):

```shell
kubectl delete pod my-app-abc123-xyz789
# Deployment controller creates a new pod with current ConfigMap values
```

Patch the Deployment to force a restart without changing the spec:

```shell
# Add/update a timestamp annotation — triggers a rolling restart
kubectl patch deployment my-app \
  -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'"}}}}}'
```

## Fix 2: Use Volume Mounts for Live Config Updates
Pods that consume ConfigMaps via volume mounts receive file updates automatically (within ~1–2 minutes), but the application must re-read the file.
ConfigMap as a volume:
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config     # Files appear here
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: my-app-config        # ConfigMap name
```

```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  app.properties: |
    database.url=jdbc:postgresql://postgres:5432/mydb
    cache.ttl=300
  feature-flags.json: |
    {"dark_mode": true, "beta_feature": false}
```

The application must watch for file changes:
```python
# Python — watch a config file for changes
import json
from pathlib import Path

CONFIG_PATH = Path('/etc/config/feature-flags.json')

def load_config():
    with open(CONFIG_PATH) as f:
        return json.load(f)

# Option A — reload on each request (simple, slight overhead)
def is_feature_enabled(feature: str) -> bool:
    config = load_config()
    return config.get(feature, False)

# Option B — poll the file's mtime and reload only when it changes
class ConfigWatcher:
    def __init__(self, path: Path):
        self.path = path
        self._config = load_config()
        self._mtime = path.stat().st_mtime

    def get(self, key, default=None):
        current_mtime = self.path.stat().st_mtime
        if current_mtime != self._mtime:
            self._config = load_config()
            self._mtime = current_mtime
        return self._config.get(key, default)

config = ConfigWatcher(CONFIG_PATH)
```

```javascript
// Node.js — watch for file changes with fs.watch
const fs = require('fs');
const path = '/etc/config/app.json';

let config = JSON.parse(fs.readFileSync(path, 'utf8'));

fs.watch(path, (event) => {
  if (event === 'change') {
    try {
      config = JSON.parse(fs.readFileSync(path, 'utf8'));
      console.log('Config reloaded');
    } catch (err) {
      console.error('Failed to reload config:', err);
    }
  }
});
```

Note: Kubernetes updates ConfigMap volumes atomically using symlinks. Each mounted file is a symlink into a directory that is swapped out wholesale on update, so `fs.watch` on the file itself may never fire. Watch the directory or use a polling approach instead.
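One robust polling approach is to track the `..data` symlink that the kubelet swaps atomically inside every ConfigMap volume mount: its target is a new timestamped directory after each update. A minimal sketch, assuming the mount path from the Deployment above; the function names are illustrative, not a library API:

```python
import os

def config_version(mount_path: str) -> str:
    """Return the target of the ..data symlink the kubelet maintains
    under a ConfigMap volume mount. It changes on every update."""
    return os.readlink(os.path.join(mount_path, "..data"))

def has_config_changed(mount_path: str, last_version: str) -> bool:
    """True if the ConfigMap volume was updated since last_version was read."""
    return config_version(mount_path) != last_version
```

Poll `config_version()` on a timer (e.g., every few seconds) and reload your config whenever the returned value differs from the one you last saw.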
## Fix 3: Automate Restarts with Reloader

Install Stakater Reloader — it watches for ConfigMap and Secret changes and automatically triggers rolling restarts:

```shell
# Install with Helm
helm repo add stakater https://stakater.github.io/stakater-charts
helm install reloader stakater/reloader -n kube-system
```

Annotate your Deployment to enable automatic restarts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # Restart when any ConfigMap or Secret referenced by this Deployment changes
    reloader.stakater.com/auto: "true"
    # Or restart only when specific ConfigMaps change
    configmap.reloader.stakater.com/reload: "my-app-config,my-other-config"
    # Or restart only when specific Secrets change
    secret.reloader.stakater.com/reload: "my-app-secret"
```

Now whenever `my-app-config` is updated, Reloader automatically triggers the equivalent of a `kubectl rollout restart` — no manual intervention needed.
## Fix 4: Trigger Restarts via CI/CD After ConfigMap Updates

In a CI/CD pipeline, always restart after updating a ConfigMap:

```yaml
# GitHub Actions example
- name: Update ConfigMap
  run: |
    kubectl create configmap my-app-config \
      --from-file=app.properties=./config/app.properties \
      --dry-run=client -o yaml | kubectl apply -f -

- name: Restart deployment to pick up new config
  run: |
    kubectl rollout restart deployment/my-app
    kubectl rollout status deployment/my-app --timeout=300s
```

Using kustomize for ConfigMap management:

```yaml
# kustomization.yaml
resources:
- deployment.yaml

configMapGenerator:
- name: my-app-config
  files:
  - config/app.properties
  options:
    disableNameSuffixHash: false  # Adds a hash suffix — forces pod restart when content changes
```

When `disableNameSuffixHash: false` (the default), kustomize generates a new ConfigMap name (e.g., `my-app-config-abc123`) whenever the content changes, and rewrites the Deployment to reference the new name. The changed pod template triggers an automatic rolling restart.
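If you deploy with Helm instead of kustomize, the analogous pattern (documented in Helm's chart development tips) is a checksum annotation on the pod template. A sketch, assuming your chart renders the ConfigMap from `templates/configmap.yaml`:

```yaml
# deployment.yaml (Helm template)
spec:
  template:
    metadata:
      annotations:
        # Hash of the rendered ConfigMap — changes whenever its content
        # changes, which changes the pod template and triggers a rollout
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```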
## Fix 5: Use Projected Volumes for Multiple Sources

Combine ConfigMaps and Secrets into a single mount point:

```yaml
spec:
  containers:
  - name: my-app
    volumeMounts:
    - name: combined-config
      mountPath: /etc/config
  volumes:
  - name: combined-config
    projected:
      sources:
      - configMap:
          name: app-config
      - secret:
          name: app-secrets
      - configMap:
          name: feature-flags
          items:
          - key: flags.json
            path: feature-flags.json  # Custom filename in the mount
```

## Fix 6: Verify ConfigMap Is Mounted Correctly
```shell
# Check the ConfigMap exists and has the right data
kubectl get configmap my-app-config -o yaml

# Check the actual file content inside the pod
kubectl exec my-pod -- cat /etc/config/app.properties

# Inspect the mounted files and the ..data symlink
# (Kubernetes uses symlinks for atomic updates)
kubectl exec my-pod -- ls -la /etc/config/
# Look for: ..data -> ..2024_01_15_10_30_00.12345 (timestamp changes on update)

# Send SIGHUP to PID 1 (triggers a graceful config reload in some apps;
# app-dependent — for debugging, not for production)
kubectl exec my-pod -- kill -HUP 1
```

Check the kubelet sync period on a node:

```shell
# SSH into a node and check the kubelet config
sudo cat /var/lib/kubelet/config.yaml | grep -i sync
# syncFrequency: 1m0s   <- default is 1 minute
```

## Fix 7: Use Immutable ConfigMaps for Safety
For config that should not change at runtime, mark the ConfigMap as immutable:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-v2
immutable: true  # Cannot be modified — must create a new ConfigMap
data:
  DATABASE_URL: "postgres://db:5432/mydb"
```

Immutable ConfigMaps provide:

- Protection against accidental changes
- Better kubelet performance (the kubelet doesn't watch immutable ConfigMaps for changes)
- Forced version control via naming (`config-v1`, `config-v2`)

To update an immutable ConfigMap, create a new one and point the Deployment at it:

```shell
kubectl create configmap my-app-config-v3 \
  --from-literal=DATABASE_URL=postgres://db-new:5432/mydb

kubectl set env deployment/my-app --from=configmap/my-app-config-v3
kubectl rollout status deployment/my-app
```

## Still Not Working?
Check the pod's actual environment variables against the ConfigMap:

```shell
# What the ConfigMap currently contains
kubectl get configmap my-app-config -o jsonpath='{.data}'

# What the running pod actually has
kubectl exec my-pod -- env | sort

# If they differ, the pod predates the ConfigMap update — restart it
kubectl rollout restart deployment/my-app
```

Check kubelet logs on the node for volume sync errors:

```shell
# Find which node the pod is on
kubectl get pod my-pod -o jsonpath='{.spec.nodeName}'

# SSH into the node and check kubelet logs
journalctl -u kubelet | grep -i "configmap\|volume\|sync" | tail -50
```

Verify RBAC allows the kubelet to read the ConfigMap. In locked-down clusters, the kubelet's service account may not have permission to read ConfigMaps in your namespace.
For related Kubernetes issues, see Fix: Kubernetes CrashLoopBackOff and Fix: Kubernetes Ingress Not Working.