
Fix: Docker Build Cache Not Working - No Cache Being Used


Quick Answer

How to fix a Docker build cache that is not working (layers rebuilding every time despite no changes), covering layer ordering, .dockerignore, COPY invalidation, BuildKit cache mounts, and CI/CD cache strategies.

The Error

You run docker build and expect cached layers to speed things up. Instead, Docker rebuilds everything from scratch:

$ docker build -t myapp .
[+] Building 142.3s (14/14) FINISHED
 => [internal] load build definition from Dockerfile           0.0s
 => [internal] load .dockerignore                              0.0s
 => [1/9] FROM node:20-alpine@sha256:abc123...                 0.0s
 => [2/9] WORKDIR /app                                         0.1s
 => [3/9] COPY . .                                            0.4s
 => [4/9] RUN npm install                                     87.2s
 => [5/9] RUN npm run build                                   54.6s

Every single layer rebuilds. No CACHED tags anywhere. Your build takes minutes when it should take seconds.

Or you explicitly pass --no-cache to force a clean build, but stale cache artifacts still cause problems:

$ docker build --no-cache -t myapp .

The build completes, but the resulting image behaves as if old cached files are still present.

Why This Happens

Docker builds images in layers. Each instruction in your Dockerfile (RUN, COPY, ADD, etc.) creates a layer. Docker caches each layer and reuses it on subsequent builds if nothing has changed.

Here is the critical rule: if any layer’s cache is invalidated, every layer after it is also invalidated. Docker never skips ahead to reuse later cached layers; it rebuilds everything from the invalidation point downward.

Cache invalidation triggers include:

  • A file changed that was referenced by COPY or ADD
  • The instruction itself changed (different RUN command text)
  • The base image changed (new digest for FROM image)
  • Build arguments changed (ARG values differ from previous build)
  • Docker’s cache storage was pruned or the build context changed

The most common mistake is putting COPY . . early in the Dockerfile. This copies your entire project directory — including source files that change constantly. Every code change invalidates that layer and everything after it, destroying the cache for expensive steps like npm install or pip install.

Fix 1: Reorder Layers for Maximum Cache Hits

The single most impactful fix. Put instructions that change least frequently at the top, and instructions that change most frequently at the bottom.

Bad — cache busts on every code change:

FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build

Here, COPY . . includes your source code. Any file edit invalidates npm install, which then re-downloads every dependency from scratch.

Good — dependencies cached separately from source code:

FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

Now npm install only reruns when package.json or package-lock.json change. Source code changes only invalidate the COPY . . and RUN npm run build layers.

The same principle applies to Python:

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

And to Go:

FROM golang:1.22-alpine
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o main .

Pro Tip: If your project has multiple dependency files (e.g., package.json and yarn.lock, or go.mod and go.sum), copy all of them before the install step. Missing the lockfile means Docker sees a different set of files on each build and invalidates the cache anyway.
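
For a Yarn project, the tip above might look like this (a sketch; adjust the filenames and install command to your setup):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Copy every file that defines dependencies BEFORE installing, so the
# install layer is only invalidated when dependencies actually change.
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build
```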

Fix 2: Add a Proper .dockerignore

Without a .dockerignore, COPY . . sends your entire project directory to Docker — including node_modules, .git, build artifacts, logs, and temporary files. These change frequently and bloat the build context, which triggers unnecessary cache invalidation.

Create a .dockerignore in your project root:

.git
node_modules
dist
build
*.log
.env
.DS_Store
__pycache__
*.pyc
.pytest_cache
coverage
.next

The .git directory is especially important. It contains files that change on every commit, which means COPY . . sees a different context each time — even if your actual source code hasn’t changed.

Common Mistake: You have a .dockerignore file, but it is not in the build context root. If you run docker build -f path/to/Dockerfile ., the .dockerignore must be in . (the build context), not next to the Dockerfile. Docker looks for .dockerignore at the root of the build context, not relative to the Dockerfile location.

You can verify what is being sent to Docker by checking the build context size:

$ docker build -t myapp . 2>&1 | head -5
[+] Building 0.1s (2/2)
 => [internal] load build context                              0.0s
 => => transferring context: 2.34MB                            0.0s

If that “transferring context” number is unexpectedly large (hundreds of MB or more), your .dockerignore is missing entries. This also slows down builds and can cause disk space issues over time.
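
You can also approximate what docker build will send by creating the same tar stream locally and excluding your .dockerignore entries by hand (a rough sketch; tar’s --exclude matching is similar to, but not identical to, .dockerignore patterns):

```shell
# Rough estimate of the build context size: tar the directory while
# excluding a few common .dockerignore entries, then count the bytes.
# The number should be close to what BuildKit reports as
# "transferring context".
tar -cf - --exclude=./.git --exclude=./node_modules --exclude=./dist . | wc -c
```

If the local number is far larger than expected, run it again while adding one --exclude at a time to find the directory that is bloating the context.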

Fix 3: Fix COPY Instructions That Invalidate Cache

Docker determines cache validity for COPY by checksumming the files being copied. If any file in the source changes, the layer is invalidated.

Avoid wildcard copies when possible. Instead of:

COPY . .

Be specific about what you need:

COPY src/ ./src/
COPY public/ ./public/
COPY tsconfig.json ./

This limits cache invalidation to only the directories that actually changed.

Watch out for generated files. If your build process generates files in the source directory (like .next, dist, or __pycache__), these will differ between builds and invalidate COPY layers. Either add them to .dockerignore or restructure your build to generate them inside the container.

Timestamps do not affect Docker’s cache decision for COPY — Docker uses file content checksums. However, ADD with a remote URL does check timestamps and can behave differently. Stick to COPY unless you specifically need ADD’s tar extraction or URL fetching features.
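
One more behavioral difference worth knowing: ADD auto-extracts local tar archives, while COPY transfers them verbatim (a sketch using a hypothetical vendor.tar.gz):

```dockerfile
# ADD recognizes a local tar archive and extracts it into the target dir.
ADD vendor.tar.gz /opt/vendor/
# COPY transfers the file as-is; it arrives still compressed.
COPY vendor.tar.gz /opt/
```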

If you are copying files and encountering path errors, check out how to fix Docker COPY file not found errors for detailed troubleshooting.

Fix 4: Handle ARG and ENV Cache Invalidation

ARG values are part of the cache key. If an ARG value changes between builds, every layer that uses it — and every layer after it — is invalidated.

A common antipattern is passing a build timestamp or Git commit hash as an ARG:

ARG BUILD_TIME
ARG GIT_COMMIT
RUN echo "Built at $BUILD_TIME, commit $GIT_COMMIT"
COPY . .
RUN npm install
RUN npm run build

Built with:

$ docker build --build-arg BUILD_TIME=$(date +%s) --build-arg GIT_COMMIT=$(git rev-parse HEAD) -t myapp .

The BUILD_TIME changes on every build. The GIT_COMMIT changes on every commit. Both invalidate the cache from that point forward.

Fix: Move volatile ARG declarations as late as possible in the Dockerfile:

COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
ARG BUILD_TIME
ARG GIT_COMMIT
LABEL build_time=$BUILD_TIME
LABEL git_commit=$GIT_COMMIT

Now the ARG values only affect the LABEL instructions at the very end. The expensive npm install and npm run build layers remain cached.

Note: ENV instructions behave similarly. Changing an ENV value invalidates all subsequent layers. If you need environment variables at runtime but not during build, set them in your docker run command or docker-compose.yml instead of the Dockerfile.
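
For example, instead of baking a value in with ENV, you can supply it at runtime via Compose (a sketch; API_URL is a hypothetical variable):

```yaml
# docker-compose.yml: runtime-only environment values never touch the
# image's layer cache, because they are not part of any build instruction.
services:
  app:
    image: myapp
    environment:
      - API_URL=https://api.example.com
```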

Fix 5: Enable and Configure BuildKit

BuildKit is Docker’s modern build engine, and it handles caching more intelligently than the legacy builder. On Docker Engine 23.0 and later (and recent Docker Desktop releases), BuildKit is the default. On older versions, you may need to enable it explicitly:

export DOCKER_BUILDKIT=1
docker build -t myapp .

Or set it permanently in /etc/docker/daemon.json:

{
  "features": {
    "buildkit": true
  }
}

Restart Docker after changing the config:

sudo systemctl restart docker

BuildKit provides several cache improvements over the legacy builder:

  • Parallel layer building: Independent layers build simultaneously
  • Better cache matching: More intelligent invalidation logic
  • Cache mount support: Persist caches across builds (see next section)
  • Remote cache import/export: Share cache between machines

If you are running into memory issues during builds with BuildKit, your container may be hitting resource limits. Check Docker OOMKilled troubleshooting for guidance.

Fix 6: Use BuildKit Cache Mounts

BuildKit cache mounts let you persist package manager caches across builds without baking them into the image layer. This is one of the most underused Docker caching features.

For npm/yarn:

FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm install
COPY . .
RUN npm run build

For pip:

FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .

For apt:

FROM ubuntu:24.04
RUN --mount=type=cache,target=/var/cache/apt \
    --mount=type=cache,target=/var/lib/apt/lists \
    apt-get update && apt-get install -y curl git build-essential

For Go modules:

FROM golang:1.22-alpine
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
    go build -o main .

Cache mounts are not included in the final image. They persist on the build host between builds. Even if the RUN layer’s cache is invalidated, the package manager’s own cache (inside the mount) speeds up the re-download.
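
If multiple builds can run on the same host at once, BuildKit’s sharing option controls concurrent access to a cache mount (a sketch; sharing=locked serializes access, which apt’s lock-file-protected directories require):

```dockerfile
FROM ubuntu:24.04
# sharing=locked makes concurrent builds wait for the mount rather than
# corrupting apt's lock-protected cache directories.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt/lists,sharing=locked \
    apt-get update && apt-get install -y curl
```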

Note: Cache mounts require # syntax=docker/dockerfile:1 at the top of your Dockerfile on older Docker versions. On Docker 23.0+, they work without it.

Fix 7: Optimize Multi-Stage Build Caching

Multi-stage builds can have cache issues of their own. Each stage has its own cache chain, and changes in early stages ripple into later ones.

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./
CMD ["node", "dist/index.js"]

If the builder stage rebuilds, the COPY --from=builder in the production stage also invalidates. This is expected behavior.

To maximize multi-stage cache efficiency:

  1. Keep build dependencies in the builder stage only. Do not install dev tools in the production stage.
  2. Use specific COPY --from paths. Copy only the artifacts you need, not the entire filesystem.
  3. Consider separate stages for dependency installation:
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install --production

FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:20-alpine AS production
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]

This way, production dependencies (deps stage) are cached separately from the full build. If only source code changes, and production dependencies haven’t changed, Docker can reuse the deps stage from cache.

If your production image fails to find expected files after a multi-stage build, you may be dealing with a Docker image not found issue related to incorrect stage references.

Fix 8: Configure CI/CD Remote Cache

In CI/CD environments (GitHub Actions, GitLab CI, Jenkins), each build typically runs on a fresh machine with no local cache. Builds are slow every time. Remote cache solves this.

GitHub Actions with BuildKit cache:

- name: Build Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max
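
Note that type=gha requires a BuildKit builder that supports the GitHub Actions cache backend; in practice that means adding a setup step before the build (a sketch using docker/setup-buildx-action, the usual companion action):

```yaml
# The gha cache backend needs a docker-container BuildKit builder,
# which setup-buildx-action provides.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
```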

Registry-based cache (works with any CI):

docker buildx build \
  --cache-from type=registry,ref=myregistry.com/myapp:cache \
  --cache-to type=registry,ref=myregistry.com/myapp:cache,mode=max \
  -t myapp:latest \
  --push .

Inline cache (simplest option, embedded in the image):

docker buildx build \
  --cache-from type=registry,ref=myregistry.com/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t myregistry.com/myapp:latest \
  --push .

The mode=max option caches all layers, including intermediate layers from multi-stage builds. Without it, only the final stage layers are cached.

Why this matters: Without remote cache in CI/CD, you are rebuilding every dependency on every pipeline run. For large projects, this can add 5-15 minutes per build. Remote cache brings that back down to seconds for unchanged layers, which directly impacts deployment velocity and developer feedback loops.

Fix 9: Debug Cache Behavior

When cache is not working and you cannot figure out why, use these debugging techniques.

Check build output for CACHED tags:

docker build -t myapp . 2>&1 | grep -i cached

Cached layers show CACHED in BuildKit output:

 => CACHED [2/6] WORKDIR /app                                 0.0s
 => CACHED [3/6] COPY package.json package-lock.json ./        0.0s
 => CACHED [4/6] RUN npm install                               0.0s
 => [5/6] COPY . .                                            0.3s

In this example, layers 2-4 are cached, but layer 5 (COPY . .) is not — meaning a file in the build context changed.

Use --progress=plain for verbose output:

docker build --progress=plain -t myapp . 2>&1

This shows the full output of each step, including which layers were resolved from cache.
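
If you can see that a COPY layer missed the cache but cannot tell which file changed, snapshotting content hashes of the build context lets you diff two runs (a rough sketch; adjust the exclude paths to match your .dockerignore):

```shell
# Hash every file in the build context (excluding .git) into a manifest.
# Run once before and once after the unexpected cache miss, then diff
# the two manifests to find the file that changed.
find . -type f -not -path './.git/*' -print0 \
  | sort -z \
  | xargs -0 sha256sum > context-hashes.txt
```

Comparing two manifests with diff pinpoints the exact file whose checksum changed and invalidated the layer.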

Inspect the build cache:

docker builder prune --dry-run

This shows what is in the build cache without deleting anything. If the cache is empty, your layers have been pruned — likely by a docker system prune or by Docker’s automatic garbage collection.

Check Docker disk usage:

docker system df -v

Look at the “Build Cache” section. If it shows 0B, there is no cache available. If your system is running low on disk space, Docker may aggressively prune the build cache. See fixing Docker disk space issues for more details.

Fix 10: Clear Corrupted Cache

Sometimes the cache itself is the problem. Stale or corrupted cache entries can cause builds to use outdated layers or fail in unexpected ways.

Nuclear option — clear all build cache:

docker builder prune -af

Warning: This removes all build cache. Your next build will be a full rebuild.

Clear only old cache (keep recent entries):

docker builder prune --filter until=24h

This removes cache entries older than 24 hours.

Clear everything (images, containers, cache):

docker system prune -af

Warning: This removes all unused images, stopped containers, and build cache. Use this as a last resort. If Docker has permission issues preventing cleanup, check Docker socket permission troubleshooting.

After pruning, rebuild your image:

docker build -t myapp .

The first build after pruning will be slow (no cache). Subsequent builds will be fast again as the cache rebuilds.

Still Not Working?

If cache still is not behaving as expected after trying the fixes above:

  • Check your Docker version. Run docker version. Versions before 23.0 use the legacy builder by default, which has worse caching behavior. Upgrade if possible.
  • Check for .env file changes. If you use --env-file or copy .env into the image, any change to environment values invalidates the cache from that point.
  • Check for secret mounts. RUN --mount=type=secret layers are never cached by default in some BuildKit versions. Pin your BuildKit version to get consistent behavior.
  • Check if Docker Compose is overriding cache settings. docker compose build respects cache_from in docker-compose.yml, but no_cache: true in the service config forces full rebuilds every time.
  • Ensure consistent build context. If you run builds from different directories or with different -f flags, Docker treats them as different build contexts with separate caches.
  • Check volume mounts on Docker-in-Docker. If you run Docker builds inside a container (common in CI), the build cache lives inside that container. When the CI container is destroyed, the cache goes with it. Use remote cache (Fix 8) to solve this.
  • Inspect your Dockerfile for non-deterministic commands. Commands like RUN apt-get update or RUN curl https://... produce different output over time, but Docker does not know that: it caches based on the command text, not the output, so you can end up with stale results. There is no per-layer --no-cache flag; pin versions explicitly, or bust the cache for a specific layer by declaring a build argument just before it and passing a fresh value when you want a rebuild.
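
For the non-deterministic-command case in the last bullet, a common workaround is a cache-busting build argument placed directly above the layer you want to refresh (a sketch; CACHEBUST is an arbitrary name, not a Docker built-in):

```dockerfile
# Layers above this line keep their cache.
ARG CACHEBUST=default
# The ARG value becomes part of this layer's cache key because the
# command references it. Force a rebuild of this layer only with:
#   docker build --build-arg CACHEBUST=$(date +%s) -t myapp .
RUN echo "cache bust: $CACHEBUST" && apt-get update && apt-get install -y curl
```

Builds that do not pass the argument keep hitting the cache; passing a new value invalidates only this layer and the ones after it.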

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
