Fix: kubectl – The Connection to the Server Was Refused or Context Not Found
Quick Answer
How to fix kubectl errors like 'connection refused', 'context not found', or 'unable to connect to the server' when managing Kubernetes clusters.
The Error
You run a kubectl command and hit one of these errors:
error: context "my-cluster" does not exist

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Unable to connect to the server: dial tcp 192.168.1.100:6443: connect: connection refused

error: no context exists with the name: "arn:aws:eks:us-east-1:123456789:cluster/production"

Unable to connect to the server: getting credentials: exec: executable aws not found

All of these point to the same general problem: kubectl either cannot find the cluster configuration it needs, or the cluster it is trying to reach is not available. The specific message tells you which part of the chain is broken — the kubeconfig file, the context, the credentials, or the cluster itself.
Why This Happens
kubectl relies on a configuration file (called a kubeconfig) to know where the Kubernetes API server lives and how to authenticate with it. By default, it looks at ~/.kube/config. Inside that file, there are three key sections:
- Clusters — the API server addresses and their CA certificates.
- Users — credentials (tokens, client certificates, or exec-based auth plugins) for each cluster.
- Contexts — named combinations of a cluster, a user, and optionally a namespace. A context ties everything together.
When you run kubectl, it reads the current context from the kubeconfig and uses the associated cluster and user entries to connect. If any piece is missing, wrong, or expired, the command fails.
Here are the most common reasons:
- The kubeconfig file is missing or empty. kubectl falls back to localhost:8080, where nothing is listening, and you get a "connection refused" error.
- The KUBECONFIG environment variable points to a file that does not exist or to a file that does not contain the context you need.
- You switched machines or shells and the kubeconfig was never copied over or sourced.
- The cluster is not running. Minikube was stopped, your kind cluster was deleted after a Docker restart, or the remote cluster’s API server is down.
- Your credentials expired. Cloud provider tokens (AWS, GCP, Azure) have a limited lifespan, and once they expire, kubectl cannot authenticate.
- You deleted or renamed a context without updating the current-context field.
- Multiple kubeconfig files exist but only some are included in the KUBECONFIG variable, so kubectl cannot see all your clusters.
Understanding which part is broken determines the fix. The sections below walk through each scenario.
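Before trying any of the fixes, you can triage most of this chain by inspecting the kubeconfig file directly. Below is a minimal sketch that only greps the file, so it works even when kubectl itself cannot connect; it assumes the default path when KUBECONFIG is unset.

```shell
# Minimal triage sketch: inspect the kubeconfig file directly.
# Assumes the default path when KUBECONFIG is unset.
KCFG="${KUBECONFIG:-$HOME/.kube/config}"

if [ ! -f "$KCFG" ]; then
  echo "no kubeconfig at $KCFG (kubectl will fall back to localhost:8080)"
else
  # current-context as recorded in the file
  ctx=$(grep '^current-context:' "$KCFG" | awk '{print $2}')
  echo "current-context: ${ctx:-<unset>}"
  # flag a stale reference: current-context names a context not in the file
  if [ -n "$ctx" ] && ! grep -q "name: $ctx" "$KCFG"; then
    echo "stale: context '$ctx' is not defined in $KCFG"
  fi
fi
```

If this prints "no kubeconfig" you are in Fix 2 or Fix 5 territory; a "stale" warning points to Fix 4.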
Fix 1: Check Your Current Kubeconfig and Context
Start with the basics. See what kubectl thinks it is working with:
kubectl config view

This prints the entire kubeconfig (with sensitive values redacted). Look at the current-context field at the top. If it is empty or points to a context name that does not appear in the contexts list, that is your problem.
List all available contexts:
kubectl config get-contexts

The active context is marked with an asterisk (*). If you see the context you need in the list but it is not selected, switch to it:
kubectl config use-context my-cluster

If the context you need is not in the list at all, the kubeconfig does not contain it. You need to either add it manually or regenerate it from your cluster provider (see Fix 5 and Fix 6 below).
Verify the connection after switching:
kubectl cluster-info

If this returns the API server address without errors, you are connected.
Fix 2: Fix the KUBECONFIG Environment Variable
kubectl looks for configuration in this order:
- The --kubeconfig flag passed directly to the command.
- The KUBECONFIG environment variable.
- The default file at ~/.kube/config.
If KUBECONFIG is set but points to a non-existent file, kubectl silently falls back to an empty config and you get the localhost:8080 error. Check what it is set to:
echo $KUBECONFIG

If it is empty, kubectl is using the default path. Verify that file exists:
ls -la ~/.kube/config

If the file does not exist, you need to generate or copy one. If you know where your kubeconfig file is, point kubectl to it:
export KUBECONFIG=/home/user/.kube/my-cluster-config

Make it permanent by adding the export to your shell profile:
# For bash
echo 'export KUBECONFIG=/home/user/.kube/my-cluster-config' >> ~/.bashrc
source ~/.bashrc
# For zsh
echo 'export KUBECONFIG=/home/user/.kube/my-cluster-config' >> ~/.zshrc
source ~/.zshrc

If your KUBECONFIG is set but the file has a typo in the path or was accidentally deleted, either fix the path or unset the variable to fall back to the default:
unset KUBECONFIG

For related issues with environment variables not being picked up correctly, see Fix: Environment Variable Is Undefined.
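A quick way to catch this class of mistake is to walk each colon-separated path in KUBECONFIG and report the ones that do not exist, since kubectl will not warn you about them. A sketch, using two throwaway /tmp paths as demo values:

```shell
# Sketch: detect KUBECONFIG entries that point at missing files.
# The two /tmp paths are throwaway demo values, not real kubeconfigs.
KUBECONFIG="/tmp/demo-kubeconfig-a:/tmp/demo-kubeconfig-b"
touch /tmp/demo-kubeconfig-a     # this one exists
rm -f /tmp/demo-kubeconfig-b     # this one is deliberately missing

missing=""
IFS=':'
for path in $KUBECONFIG; do
  [ -f "$path" ] || missing="$missing $path"
done
unset IFS

echo "missing:$missing"   # kubectl would silently ignore these
```

Run this with your real KUBECONFIG value (remove the demo assignment) whenever the localhost:8080 error appears out of nowhere.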
Fix 3: Start Your Cluster (Minikube, kind, k3d)
If the kubeconfig and context look correct but the connection is refused, the cluster itself is probably not running. This is the single most common cause on development machines.
Minikube:
minikube status

If it shows Stopped or Nonexistent:
minikube start

Minikube automatically updates ~/.kube/config with the correct context when it starts. After starting, verify:
kubectl config use-context minikube
kubectl cluster-info

kind (Kubernetes in Docker):
kind clusters run as Docker containers. If Docker was restarted, the cluster containers are gone.
kind get clusters

If your cluster is not listed, recreate it:
kind create cluster --name my-cluster

If the cluster is listed but Docker shows the containers as stopped, delete and recreate:
kind delete cluster --name my-cluster
kind create cluster --name my-cluster

kind clusters do not persist across Docker daemon restarts. This is by design.
k3d:
k3d cluster list
k3d cluster start my-cluster

Docker Desktop Kubernetes:
- Open Docker Desktop.
- Go to Settings > Kubernetes.
- Make sure Enable Kubernetes is checked.
- If it is already enabled but the status indicator is red or orange, click Reset Kubernetes Cluster.
Then set the context:
kubectl config use-context docker-desktop

If your local cluster was recently working and suddenly stopped, a machine reboot or Docker update is almost always the cause. For other connection issues on localhost, see Fix: ERR_CONNECTION_REFUSED on localhost.
Fix 4: Fix a Wrong or Missing Context
The “context not found” error means the kubeconfig’s current-context field references a context name that does not exist in the file. This happens when you delete a context, rename it, or merge kubeconfig files that overwrote each other.
See what the current context is set to:
kubectl config current-context

If this prints a name that does not appear in kubectl config get-contexts, the reference is stale.
Set the current context to one that exists:
kubectl config use-context <valid-context-name>

If you need to manually create a context (for example, you have the cluster and user entries but no context linking them):
kubectl config set-context my-new-context \
--cluster=my-cluster \
--user=my-user \
--namespace=default
kubectl config use-context my-new-context

To see what clusters and users are defined in the kubeconfig (so you know what values to use):
kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'
kubectl config view -o jsonpath='{range .users[*]}{.name}{"\n"}{end}'

To remove a stale context that points to a cluster you no longer use:
kubectl config delete-context old-cluster-context

Fix 5: Refresh Cloud Provider Credentials
Cloud-managed Kubernetes clusters (EKS, GKE, AKS) use short-lived tokens for authentication. When these expire, kubectl returns Unauthorized or fails to execute the credential plugin. The fix is to regenerate the kubeconfig entry from the cloud provider CLI.
AWS EKS:
aws eks update-kubeconfig --region us-east-1 --name my-cluster

This updates ~/.kube/config with a fresh context, cluster, and user entry. If you get an error about missing AWS credentials, fix those first:
aws sts get-caller-identity

If that fails, you need to configure your AWS CLI session. Run aws configure or log in with SSO:
aws sso login --profile my-profile

Make sure the IAM principal (user or role) you are authenticated as has permission to access the EKS cluster. The cluster creator has admin access by default, but other users need to be added to the aws-auth ConfigMap. For general AWS credential problems, see Fix: SSH Connection Timed Out, which covers network-level debugging that applies to cloud connectivity.
Google Cloud GKE:
gcloud container clusters get-credentials my-cluster \
--region us-central1 \
--project my-project-id

If your gcloud session has expired:
gcloud auth login

For application-default credentials used by automation:
gcloud auth application-default login

Azure AKS:
az aks get-credentials --resource-group my-rg --name my-cluster

If your Azure session has expired:
az login

For Azure AD-integrated clusters, you may need to clear the cached token:
kubelogin remove-tokens
az aks get-credentials --resource-group my-rg --name my-cluster

After running any of these commands, verify the connection:
kubectl get nodes

Fix 6: Merge Multiple Kubeconfig Files
If you work with multiple clusters, you likely have multiple kubeconfig files. kubectl only sees the files listed in the KUBECONFIG variable (or the single default file). If a cluster’s config is in a separate file that is not included, kubectl cannot find its context.
Combine multiple kubeconfig files at runtime:
# Linux / macOS (colon-separated)
export KUBECONFIG=~/.kube/config:~/.kube/config-eks-prod:~/.kube/config-gke-staging
# Windows PowerShell (semicolon-separated)
$env:KUBECONFIG = "$HOME\.kube\config;$HOME\.kube\config-eks-prod"

With this set, kubectl config get-contexts shows contexts from all the listed files.
Merge all files into a single permanent kubeconfig:
# Back up the original first
cp ~/.kube/config ~/.kube/config.bak
# Merge
KUBECONFIG=~/.kube/config:~/.kube/config-eks-prod:~/.kube/config-gke-staging \
kubectl config view --flatten > ~/.kube/config-merged
# Replace the original
mv ~/.kube/config-merged ~/.kube/config

Verify the merge worked:
kubectl config get-contexts

You should now see all clusters listed. Switch to the one you need:
kubectl config use-context <context-name>

Be careful with merging. If two files define a context with the same name but different clusters, one will overwrite the other. Check for conflicts before merging by inspecting each file's context names.
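One way to check for such conflicts is to extract the context names from each file and look for names that appear more than once. A rough sketch; the two files here are fabricated minimal kubeconfigs, and the grep assumes the usual two-space indentation for context name entries:

```shell
# Sketch: flag context names defined in more than one kubeconfig file.
# Both demo files are fabricated; on merge, "prod" would be overwritten.
cat > /tmp/kc1.yaml <<'EOF'
contexts:
- context:
    cluster: cluster-a
    user: user-a
  name: prod
EOF
cat > /tmp/kc2.yaml <<'EOF'
contexts:
- context:
    cluster: cluster-b
    user: user-b
  name: prod
EOF

# "  name:" lines at two-space indent are the context names
dupes=$(grep -h '^  name: ' /tmp/kc1.yaml /tmp/kc2.yaml | sort | uniq -d)
echo "conflicting context names:$dupes"
```

Rename any conflicting context in one of the files (kubectl config rename-context works for this) before merging.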
Why this matters: The kubeconfig file contains credentials that give full access to your clusters. A corrupted merge, an accidental deletion, or a misconfigured KUBECONFIG variable can lock you out of production. Always back up ~/.kube/config before modifying it, and use kubectl config view to verify the result after any change.
Fix 7: The API Server Is Unreachable (Network Issues)
If the kubeconfig, context, and credentials are all correct but you still get connection refused or timeouts, the problem is between you and the API server.
Test basic connectivity:
# Get the API server address from your kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Test if the port is reachable
curl -k https://<server-address>/healthz

An ok response means the server is up and reachable. A timeout means something is blocking the connection.
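When probing by hand, it helps to split the server URL into host and port first. A sketch using sed; the address below is a fabricated example, and in practice you would take it from the kubectl config view command above:

```shell
# Sketch: extract host and port from an API server URL for manual probing.
# The URL is a fabricated example.
server="https://192.168.1.100:6443"

host=$(echo "$server" | sed -E 's#https?://([^:/]+).*#\1#')
port=$(echo "$server" | sed -E 's#.*:([0-9]+)$#\1#')

echo "host=$host port=$port"
# then probe it, e.g.: nc -zv -w 5 "$host" "$port"
```

A TCP-level probe with nc distinguishes "port unreachable" (firewall, VPN) from "reachable but TLS/auth fails" (credentials, proxy).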
Common network blockers:
Firewall or security group rules. Cloud clusters often restrict API server access to specific IP ranges. If your IP changed (e.g., you switched networks), you may be blocked. Check the cluster’s allowed CIDR ranges in your cloud provider console.
VPN not connected. Many production clusters are only reachable through a VPN. If you normally connect over a VPN and it is disconnected, the API server is unreachable.
Proxy settings. Corporate proxies intercept HTTPS traffic and can break the certificate chain. Exclude your cluster from proxy routing:
export NO_PROXY=$NO_PROXY,<cluster-ip>,<cluster-hostname>,.eks.amazonaws.com
export no_proxy=$no_proxy,<cluster-ip>,<cluster-hostname>,.eks.amazonaws.com

Private clusters. EKS, GKE, and AKS all support private API server endpoints that are only reachable from within the VPC. If your cluster has a private endpoint enabled and public access disabled, you must connect from within the cloud network (e.g., through a bastion host or VPN). For issues connecting to remote servers in general, see Fix: SSH Connection Timed Out.
DNS resolution failures. The cluster hostname may not resolve from your current network:
nslookup <cluster-hostname>

If it does not resolve, try using the IP address directly (check the cluster details in your cloud console) or fix your DNS configuration.
Fix 8: Kubeconfig File Is Corrupted or Has Invalid YAML
If the kubeconfig file has a syntax error, kubectl cannot parse it and shows confusing errors. This sometimes happens after a bad merge or manual edit.
Validate the kubeconfig:
kubectl config view

If this throws a YAML parse error, the file is malformed. Open it and look for common issues:
# Check for obvious YAML problems
cat ~/.kube/config

Common corruption patterns:
- Duplicate keys from a bad merge (two contexts: sections instead of a merged list).
- Broken base64 in certificate-authority-data or client-certificate-data fields (truncated or containing newlines).
- Tabs instead of spaces. YAML does not allow tabs for indentation.
- Missing quotes around values that contain special characters.
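Two of these patterns, tab indentation and duplicated sections, are easy to check for mechanically without a YAML parser. A sketch against a deliberately broken demo file:

```shell
# Sketch: cheap checks for tab indentation and duplicate top-level sections.
# The demo file is fabricated and intentionally broken.
printf 'contexts:\n\tname: bad-indent\ncontexts:\n' > /tmp/broken-kubeconfig

# YAML forbids tabs for indentation
tabs=$(grep -c "$(printf '\t')" /tmp/broken-kubeconfig)

# two "contexts:" sections usually means a bad merge
sections=$(grep -c '^contexts:' /tmp/broken-kubeconfig)

echo "tab lines: $tabs, contexts sections: $sections"
```

Point the same greps at your real ~/.kube/config; any tab line or a section count above 1 deserves a closer look.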
If the file is beyond repair, back it up and regenerate:
mv ~/.kube/config ~/.kube/config.broken

Then regenerate from your cluster provider (see Fix 5) or copy a known-good version from another machine.
If you encounter Docker Compose connectivity issues alongside Kubernetes problems (common when running both locally), see Fix: Docker Compose Up Errors.
Fix 9: kubectl Version Mismatch
kubectl has a version skew policy: it supports clusters within one minor version above or below its own version. If your kubectl is version 1.26 and the cluster is running 1.30, some features may not work and authentication mechanisms may have changed.
Check versions:
kubectl version --client
kubectl version

The second command also shows the server version (if the connection works). If there is a large gap, update kubectl:
# Using curl (Linux)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
# Using Homebrew (macOS)
brew upgrade kubectl
# Using Chocolatey (Windows)
choco upgrade kubernetes-cli

A version mismatch is rarely the sole cause of "context not found" errors, but it can cause subtle authentication failures, especially with newer exec-based credential plugins that older kubectl versions do not support.
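The skew policy can be checked mechanically by comparing minor versions. A sketch with fabricated version strings; in practice you would take the gitVersion values from `kubectl version -o json`:

```shell
# Sketch: compute minor-version skew between client and server.
# Version strings are fabricated examples.
client="v1.26.3"
server="v1.30.1"

cminor=$(echo "$client" | cut -d. -f2)
sminor=$(echo "$server" | cut -d. -f2)
skew=$((sminor - cminor))

echo "skew: $skew minor version(s)"
if [ "${skew#-}" -gt 1 ]; then
  echo "outside the supported +/-1 minor version range"
fi
```

Here the skew is 4, well outside the supported range, so this hypothetical client should be upgraded.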
Still Not Working?
Common Mistake: Setting KUBECONFIG to a file that does not exist. kubectl does not warn you — it silently falls back to an empty config and connects to localhost:8080, which gives a confusing "connection refused" error that has nothing to do with your actual cluster. Always verify the file exists with ls after setting the variable.
Multiple clusters and losing track of contexts
If you manage many clusters, consider using a context-switching tool to avoid mistakes:
# kubectx -- fast context switching
kubectx my-cluster
# Or use kubectl directly with the --context flag to avoid switching
kubectl get pods --context=my-cluster

You can also set the context per terminal window without affecting other sessions:
export KUBECONFIG=~/.kube/config
kubectl config use-context staging-cluster
# This only affects the current shell

The context exists but the cluster entry is missing
Sometimes a context references a cluster name that was removed from the kubeconfig. Check:
kubectl config view -o jsonpath='{.contexts[?(@.name=="my-context")].context.cluster}'

Then verify that cluster name exists:
kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'

If the cluster entry is gone, you need to re-add it. The easiest way is to regenerate the kubeconfig from your cloud provider (Fix 5) or re-create the local cluster (Fix 3).
kubectl works with sudo but not without
If sudo kubectl get nodes works but kubectl get nodes does not, the kubeconfig file permissions are the issue:
sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 600 ~/.kube/config

This makes the file readable only by your user, which is also a security best practice. Kubeconfig files contain credentials and should not be world-readable.
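You can verify the result by reading the file mode back. A sketch against a throwaway file; note that the `stat` flags differ between GNU (Linux) and BSD (macOS), hence the fallback:

```shell
# Sketch: confirm a kubeconfig is owner-only readable (mode 600).
# Uses a throwaway demo file rather than the real ~/.kube/config.
f=/tmp/demo-kubeconfig-perms
touch "$f"
chmod 600 "$f"

# GNU stat uses -c, BSD stat uses -f
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "mode: $mode"
```

Run the same check on ~/.kube/config; anything other than 600 (or 400) is worth tightening.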
Pods are stuck after fixing the connection
Once you can connect to the cluster again, you might notice pods in a bad state. If pods are in CrashLoopBackOff, the issue predates your connection problem — the pods were failing while you could not see them. See Fix: Kubernetes Pod CrashLoopBackOff for a thorough walkthrough of diagnosing and fixing crashing pods.
Clean slate: regenerate everything
If nothing else works and you are on a development machine, start fresh:
# Back up the old config
mv ~/.kube/config ~/.kube/config.old
# For minikube
minikube delete
minikube start
# For kind
kind delete cluster
kind create cluster
# For cloud clusters, regenerate from the provider CLI
aws eks update-kubeconfig --region us-east-1 --name my-cluster
# or
gcloud container clusters get-credentials my-cluster --region us-central1 --project my-project
# or
az aks get-credentials --resource-group my-rg --name my-cluster

Then verify:
kubectl config get-contexts
kubectl get nodes

This gives you a clean kubeconfig with only the clusters you explicitly added, eliminating any stale contexts, expired certificates, or corrupted entries.
Related: If your kubectl connects but you are getting connection refused errors from services inside the cluster, see Fix: The Connection to the Server localhost:8080 Was Refused (kubectl) for API server-specific troubleshooting.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Kubernetes ImagePullBackOff - Failed to Pull Image
How to fix the Kubernetes ImagePullBackOff and ErrImagePull errors when a pod fails to pull a container image from a registry.
Fix: Kubernetes Pod CrashLoopBackOff (Back-off restarting failed container)
How to fix the Kubernetes CrashLoopBackOff error when a pod repeatedly crashes and Kubernetes keeps restarting it with increasing back-off delays.
Fix: YAML 'mapping values are not allowed here' and Other YAML Syntax Errors
How to fix 'mapping values are not allowed here', 'could not find expected :', 'did not find expected key', and other YAML indentation and syntax errors in Docker Compose, Kubernetes manifests, GitHub Actions, and config files.
Fix: Docker Container Exited (137) OOMKilled / Killed Signal 9
How to fix Docker container 'Exited (137)', OOMKilled, and 'Killed' signal 9 errors caused by out-of-memory conditions in Docker, Docker Compose, and Kubernetes.