Fix: kubectl apply error validating / is invalid
Quick Answer
How to fix kubectl apply errors like 'error validating', 'is invalid', and 'error when creating' caused by YAML syntax issues, deprecated APIs, missing fields, and more.
The Error
You run kubectl apply -f on a manifest and get one of these errors:
```
error: error validating "deployment.yaml": error validating data: ValidationError(Deployment.spec):
missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec
```

```
error: error validating "service.yaml": error validating data: [ValidationError(Service.spec.ports[0]):
unknown field "port" in io.k8s.api.core.v1.ServicePort]
```

```
The Deployment "my-app" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"my-app"}:
`selector` does not match template `labels`
```

```
error when creating "manifest.yaml": the server could not find the requested resource
```

These all mean Kubernetes rejected your manifest because something in the YAML is structurally wrong, references a deprecated or unknown API, or violates a cluster policy. The good news: every one of these has a straightforward fix once you know where to look.
Why This Happens
Kubernetes validates every manifest against its API schema before applying it. This validation happens in multiple stages:
- Client-side validation — kubectl checks the YAML structure against the API schema it knows about.
- Server-side validation — The API server checks the object against the current cluster’s schema, including CRDs and admission webhooks.
- Admission controllers — Mutating and validating webhooks can reject objects based on custom policies.
When any stage fails, you get a validation error. The most common root causes are:
- Bad YAML syntax — Indentation errors, tabs instead of spaces, or missing colons break parsing before validation even starts.
- Wrong apiVersion — APIs get deprecated and removed across Kubernetes versions. A manifest that worked on 1.24 may fail on 1.26.
- Missing required fields — Every resource type has mandatory fields. Omit one and the API server rejects the entire object.
- Immutable fields — Some fields cannot be changed after initial creation. Trying to update them triggers an “is invalid” error.
- Namespace mismatches — Applying a namespaced resource to a non-existent namespace, or applying a cluster-scoped resource with a namespace set.
- Quota violations — Resource requests exceeding namespace quotas or limit ranges.
- Unknown resource types — Applying a CR before the CRD is installed.
Let’s fix each one.
Fix 1: Validate YAML Syntax
YAML syntax errors are the most common cause of kubectl apply failures. The error messages can be cryptic because the YAML parser fails before Kubernetes even sees the object.
Check for tabs. YAML does not allow tabs for indentation. Only spaces are valid. If your editor inserted tabs, you will get errors like:

```
error converting YAML to JSON: yaml: line 8: found character that cannot start any token
```

Find and replace all tabs with spaces. In most editors, you can configure this globally. From the command line:

```bash
cat -A deployment.yaml | grep '\^I'
```

If you see ^I characters, those are tabs. Replace each with two spaces (then double-check the resulting indentation levels):

```bash
sed -i 's/\t/  /g' deployment.yaml
```

Check indentation levels. Every nested field must be indented consistently. A common mistake is misaligning containers under spec:
```yaml
# Wrong -- containers is at the wrong indentation level
spec:
  template:
    spec:
    containers:
      - name: my-app
        image: nginx
```

```yaml
# Correct
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: nginx
```

Check for missing colons. Every key-value pair needs a colon followed by a space:
```yaml
# Wrong -- missing colon after "metadata"
metadata
  name: my-app
```

```yaml
# Correct
metadata:
  name: my-app
```

Pro Tip: Use yamllint to catch syntax errors before they hit kubectl. Install it with pip install yamllint and run yamllint deployment.yaml. It catches tabs, indentation inconsistencies, trailing spaces, and other subtle issues that kubectl error messages make hard to diagnose.
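If you want to catch tab indentation in CI without installing extra tools, a few lines of stdlib Python do the job. This is a minimal sketch (the function name is ours, not a standard API):

```python
def find_tab_indentation(manifest_text):
    """Return (line_number, line) pairs whose indentation contains a tab."""
    offenders = []
    for lineno, line in enumerate(manifest_text.splitlines(), start=1):
        # Everything before the first non-whitespace character is indentation.
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            offenders.append((lineno, line))
    return offenders

bad = find_tab_indentation("spec:\n\tcontainers: []\n")
# A non-empty result means kubectl's YAML parser will reject the file.
```

Run it over every file in your manifests directory and fail the build if any offenders are found.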
For a deeper dive into YAML parsing errors, see YAML mapping values not allowed here.
Fix 2: Fix apiVersion Mismatch
Kubernetes deprecates and removes APIs on a regular schedule. If you copied a manifest from a blog post written in 2020, chances are the apiVersion is wrong for your cluster.
Common examples of removed APIs:
| Old apiVersion | Removed in | Replacement |
|---|---|---|
| extensions/v1beta1 (Deployment) | 1.16 | apps/v1 |
| extensions/v1beta1 (Ingress) | 1.22 | networking.k8s.io/v1 |
| rbac.authorization.k8s.io/v1beta1 | 1.22 | rbac.authorization.k8s.io/v1 |
| batch/v1beta1 (CronJob) | 1.25 | batch/v1 |
| policy/v1beta1 (PodSecurityPolicy) | 1.25 | Removed entirely |
| autoscaling/v2beta2 | 1.26 | autoscaling/v2 |
| flowcontrol.apiserver.k8s.io/v1beta2 | 1.29 | flowcontrol.apiserver.k8s.io/v1 |
Check which API versions your cluster supports:
```bash
kubectl api-resources | grep deployments
```

This shows the current valid group and version. You can also check for a specific group:

```bash
kubectl api-versions | grep apps
```

If you have many manifests to update, use kubectl-convert to migrate them automatically:

```bash
kubectl convert -f old-deployment.yaml --output-version apps/v1
```

Note: The kubectl convert plugin is not included by default. Download the kubectl-convert binary from the official Kubernetes release downloads and put it on your PATH.
When migrating from beta APIs to stable ones, be aware that the stable version often requires fields that the beta version made optional. For example, apps/v1 Deployments require spec.selector, while extensions/v1beta1 auto-generated it.
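To sweep a repository for stale apiVersions without touching a cluster, you can encode the table above in a small script. This is a sketch: the mapping below is a subset of the table, and the function name is ours:

```python
# Subset of the removed-API table above: (apiVersion, kind) -> replacement.
REPLACEMENTS = {
    ("extensions/v1beta1", "Deployment"): "apps/v1",
    ("extensions/v1beta1", "Ingress"): "networking.k8s.io/v1",
    ("batch/v1beta1", "CronJob"): "batch/v1",
    ("autoscaling/v2beta2", "HorizontalPodAutoscaler"): "autoscaling/v2",
}

def suggest_api_version(manifest):
    """Return the replacement apiVersion for a parsed manifest dict, or None."""
    key = (manifest.get("apiVersion"), manifest.get("kind"))
    return REPLACEMENTS.get(key)

print(suggest_api_version({"apiVersion": "extensions/v1beta1", "kind": "Deployment"}))
# → apps/v1
```

Parse each manifest (for example with PyYAML) and run it through the function; anything that returns a non-None value needs migration.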
Fix 3: Fix Missing Required Fields
The error message usually tells you exactly which field is missing:

```
error validating data: ValidationError(Deployment.spec): missing required field "selector"
```

Here are the fields that developers forget most often:
Deployment missing spec.selector:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app  # Must match template labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
```

Every key in selector.matchLabels must appear in template.metadata.labels with the same value. If they don’t match, you get:

```
Invalid value: ... `selector` does not match template `labels`
```

Pod missing containers:
```
missing required field "containers" in io.k8s.api.core.v1.PodSpec
```

Every Pod spec needs at least one container with a name and image:

```yaml
spec:
  containers:
    - name: app
      image: nginx:1.25
```

Service missing ports:
A Service of type ClusterIP, NodePort, or LoadBalancer requires at least one port definition. The targetPort field is also commonly confused with port:
```yaml
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # Port the Service listens on
      targetPort: 8080  # Port on the container
      protocol: TCP
```

If your Pod keeps crashing after fixing the manifest, check out CrashLoopBackOff troubleshooting for container-level debugging.
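The selector/labels relationship is easy to check before applying anything. A minimal sketch (the function name is ours; it assumes the Deployment manifest is already parsed into a dict, for example with PyYAML):

```python
def selector_matches_template(deployment):
    """True if every selector.matchLabels entry appears in the Pod template labels."""
    match_labels = deployment["spec"]["selector"].get("matchLabels", {})
    template_labels = deployment["spec"]["template"]["metadata"].get("labels", {})
    return all(template_labels.get(k) == v for k, v in match_labels.items())

dep = {
    "spec": {
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {"metadata": {"labels": {"app": "my-app"}}},
    }
}
print(selector_matches_template(dep))  # → True
```

A False result here means the API server would reject the Deployment with the selector/labels mismatch error shown above.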
Fix 4: Fix Immutable Field Errors on Update
Some fields cannot be changed once a resource is created. Trying to update them with kubectl apply gives:

```
The Deployment "my-app" is invalid: spec.selector: Invalid value: ...: field is immutable
```

Common immutable fields:

- spec.selector on Deployments, StatefulSets, and DaemonSets
- spec.clusterIP on Services
- spec.volumeName on PersistentVolumeClaims
- metadata.name and metadata.namespace (obviously)
- spec.nodeName on Pods
The fix: Delete and recreate the resource:

```bash
kubectl delete deployment my-app
kubectl apply -f deployment.yaml
```

Or use kubectl replace --force, which does the delete-and-create in one step:

```bash
kubectl replace --force -f deployment.yaml
```

Warning: Both approaches cause downtime. The old Pods are terminated before new ones are created. For Deployments, this is usually acceptable because the new ReplicaSet spins up quickly. For StatefulSets with persistent storage, be extra careful — verify your PersistentVolumeClaims have the correct reclaimPolicy so data is not deleted.
If you need to change the selector without downtime, the standard approach is to create a new Deployment with the updated selector, shift traffic to it, and then delete the old one.
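You can also catch an immutable-field change before it reaches the API server by diffing the live object against the new manifest. A sketch (the path list covers only the fields above, and both function names are ours):

```python
# Dotted paths of commonly immutable fields (from the list above).
IMMUTABLE_PATHS = ["spec.selector", "spec.clusterIP", "spec.volumeName"]

def _lookup(manifest, dotted_path):
    """Walk a dotted path through nested dicts; None if any segment is missing."""
    node = manifest
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def changed_immutable_fields(live, desired):
    """Return the immutable paths whose values differ between two manifests."""
    return [p for p in IMMUTABLE_PATHS if _lookup(live, p) != _lookup(desired, p)]
```

Fetch the live object with `kubectl get deployment my-app -o yaml`, parse both documents, and a non-empty result tells you the apply is going to fail with "field is immutable" before you run it.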
Fix 5: Fix Namespace Issues
Namespace-related errors come in several forms:

```
Error from server (NotFound): namespaces "staging" not found
Error from server (Forbidden): namespaces "kube-system" is forbidden
```

NotFound means the namespace doesn’t exist; Forbidden usually means RBAC or an admission policy blocks you from using it (see the RBAC section below). For NotFound, create the namespace first:

```bash
kubectl create namespace staging
kubectl apply -f deployment.yaml -n staging
```

Or include a Namespace manifest in your YAML and apply it before other resources:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: staging
# ...
```

Namespace in the manifest vs. the command line. If your manifest sets metadata.namespace: production but you run kubectl apply -f deployment.yaml -n staging, kubectl refuses to apply and reports that the namespace from the provided object does not match the namespace from the flag. This catches people off guard. Either set the namespace in the manifest or on the command line, not both, to avoid confusion.
Cluster-scoped resources with a namespace set. Resources like ClusterRole, ClusterRoleBinding, PersistentVolume, and Namespace itself are cluster-scoped. Setting metadata.namespace on them causes an error:
```yaml
# Wrong -- ClusterRole is cluster-scoped
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-role
  namespace: default  # Remove this line
```

Remove the namespace field from cluster-scoped resources.
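A quick pre-apply check for this mistake might look like the following sketch (the kind list is a small, non-exhaustive subset, and the function name is ours):

```python
# A few common cluster-scoped kinds (not exhaustive; check `kubectl api-resources --namespaced=false`).
CLUSTER_SCOPED_KINDS = {"ClusterRole", "ClusterRoleBinding", "PersistentVolume", "Namespace"}

def misplaced_namespace(manifest):
    """Return an error string if a cluster-scoped resource sets metadata.namespace."""
    kind = manifest.get("kind")
    namespace = manifest.get("metadata", {}).get("namespace")
    if kind in CLUSTER_SCOPED_KINDS and namespace:
        return f"{kind} is cluster-scoped; remove metadata.namespace ({namespace!r})"
    return None
```

For a real cluster, derive the kind set from `kubectl api-resources --namespaced=false` rather than hard-coding it.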
Fix 6: Fix Resource Quota and Limit Range Violations
If your namespace has ResourceQuota or LimitRange objects, your Pod spec must comply with them or the API server rejects the manifest:
```
Error from server (Forbidden): error when creating "deployment.yaml": pods "my-app-xyz" is forbidden:
exceeded quota: compute-resources, requested: requests.cpu=500m, used: requests.cpu=1800m, limited: requests.cpu=2
```

Check the quota:

```bash
kubectl describe resourcequota -n my-namespace
```

This shows the current usage and limits. Either reduce your resource requests or ask your cluster admin to increase the quota.
LimitRange violations look like this:
```
Error from server (Forbidden): pods "my-app" is forbidden: minimum cpu usage per Container is 100m, but request is 50m
```

Check the LimitRange:

```bash
kubectl describe limitrange -n my-namespace
```

Then adjust your container resource requests and limits to fall within the allowed range:
```yaml
resources:
  requests:
    cpu: "100m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
```

Common Mistake: When a LimitRange specifies default limits but you don’t set any resources on your container, the defaults are applied automatically. This can lead to unexpected OOM kills when the default memory limit is too low. Always set explicit resource requests and limits. For more on memory issues, see Pod OOMKilled troubleshooting.
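The quota arithmetic in the error above is simple: used plus requested must stay within the hard limit. A stdlib-only sketch (function names are ours) that handles the millicore notation:

```python
def cpu_millicores(quantity):
    """Convert a Kubernetes CPU quantity such as '500m' or '2' to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def fits_cpu_quota(requested, used, hard_limit):
    """True if adding `requested` on top of `used` stays within the quota."""
    return cpu_millicores(used) + cpu_millicores(requested) <= cpu_millicores(hard_limit)

# The numbers from the quota error above: 1800m used + 500m requested > 2 CPUs.
print(fits_cpu_quota("500m", "1800m", "2"))  # → False
```

This makes it easy to see how much headroom a namespace has before you resize requests or ask for a bigger quota.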
Fix 7: Fix CRD Not Installed
When you apply a custom resource and its CRD is missing, you get:
```
error: unable to recognize "my-resource.yaml": no matches for kind "MyCustomResource" in version "example.com/v1"
```

Or:

```
error when creating "my-resource.yaml": the server could not find the requested resource
```

Install the CRD first. CRDs must exist before you create instances of them. If you are installing an operator or controller, apply its CRDs first:

```bash
kubectl apply -f crds/
kubectl apply -f operator/
kubectl apply -f custom-resources/
```

Check if the CRD exists:

```bash
kubectl get crd | grep mycustomresource
```

If it is there but you are still getting errors, check the API group and version match between the CRD and your custom resource manifest. The spec.group and spec.versions[].name in the CRD must match the apiVersion in your CR.

After installing a CRD, it may take a few seconds for the API server to recognize the new type. If kubectl apply fails immediately after CRD creation, wait a moment and retry:

```bash
kubectl apply -f crds/ && sleep 5 && kubectl apply -f custom-resources/
```

If you are using Helm, ensure the CRDs are in the crds/ directory of the chart so they are installed before templates are rendered.
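The group/version match described above can be verified offline against the two manifests. A sketch (the function name is ours; it assumes both YAML documents are already parsed into dicts):

```python
def crd_serves_cr(crd, custom_resource):
    """True if the CRD's spec.group plus one of its served spec.versions[].name
    entries matches the custom resource's apiVersion."""
    group = crd["spec"]["group"]
    served = [v["name"] for v in crd["spec"]["versions"] if v.get("served", True)]
    return any(f"{group}/{name}" == custom_resource["apiVersion"] for name in served)

crd = {"spec": {"group": "example.com",
                "versions": [{"name": "v1", "served": True}]}}
cr = {"apiVersion": "example.com/v1", "kind": "MyCustomResource"}
print(crd_serves_cr(crd, cr))  # → True
```

A False result means the API server will answer "no matches for kind" even though the CRD is installed.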
Fix 8: Use --dry-run and kubectl diff for Debugging
Before applying a manifest to a live cluster, validate it locally:
Client-side dry run:
```bash
kubectl apply -f deployment.yaml --dry-run=client
```

This checks YAML syntax and basic schema validation against kubectl’s built-in schema. It does not contact the cluster, so it won’t catch server-side issues like quota violations or webhook rejections.

Server-side dry run:

```bash
kubectl apply -f deployment.yaml --dry-run=server
```

This sends the manifest to the API server for full validation, including admission webhooks and quota checks, but does not actually create or modify the resource. This is the most thorough pre-flight check available.

kubectl diff:

```bash
kubectl diff -f deployment.yaml
```

This shows exactly what would change if you applied the manifest. It is invaluable when updating existing resources because it highlights field-level differences. If a field shows up in the diff that you did not intend to change, you may be about to trigger an immutable field error or an unintended rollout.

Control validation explicitly:

```bash
kubectl apply -f deployment.yaml --validate=true
```

The --validate flag is enabled by default, but explicitly setting it ensures you see all validation errors. You can also use --validate=warn (Kubernetes 1.27+) to see validation warnings without rejecting the apply.
Pro Tip: Chain these together in a CI pipeline. Run kubectl apply --dry-run=server as a gate before deploying. This catches 90% of manifest issues before they hit production. Pair it with kubectl diff in a PR comment so reviewers can see exactly what changes are going to the cluster.
If your connection to the cluster is the issue rather than the manifest itself, see kubectl connection refused for troubleshooting API server connectivity.
Still Not Working?
If you have validated your YAML, confirmed the apiVersion, and checked all required fields but are still getting errors, look into these less common causes:
Admission Webhook Failures
Mutating or validating admission webhooks can reject manifests with custom error messages:
```
Error from server: error when creating "deployment.yaml": admission webhook "validate.example.com" denied the request: image must be from approved registry
```

Check which webhooks are configured:

```bash
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations
```

The webhook’s error message usually tells you what policy you are violating. Common webhook policies include requiring specific image registries, enforcing label conventions, or blocking privileged containers. Work with your cluster admin to understand and comply with the policies, or get an exemption if needed.
RBAC Permission Errors
If your user or service account lacks permission to create the resource, you get:
```
Error from server (Forbidden): deployments.apps "my-app" is forbidden: User "dev-user" cannot create resource "deployments" in API group "apps" in the namespace "production"
```

Check your permissions:

```bash
kubectl auth can-i create deployments -n production
kubectl auth can-i '*' '*' -n production
```

If the answer is “no,” you need a RoleBinding or ClusterRoleBinding granting the appropriate permissions to your user or service account.
Cluster Version Compatibility
If your manifests target a newer Kubernetes version than your cluster is running, features and fields may not be available:
```bash
kubectl version
```

Compare the server version against the API features you are using. Check the Kubernetes changelog for when specific features were introduced.
If your images are not pulling correctly after fixing the manifest, check ImagePullBackOff troubleshooting. If Pods are stuck in Pending after a successful apply, see Pod Pending fixes.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.