# Fix: Docker Multi-Stage Build COPY --from Failed

## Quick Answer

How to fix Docker multi-stage build errors — `COPY --from` stage not found, wrong stage name, artifacts not at the expected path, and BuildKit caching issues.
## The Error

A Docker multi-stage build fails when copying artifacts from a previous stage:

```dockerfile
COPY --from=builder /app/dist ./dist
```

```
failed to solve: failed to read dockerfile: failed to parse stage name "builder": invalid reference format
```

Or the stage exists but the file isn't there:

```
COPY failed: file not found in build context or excluded by .dockerignore: stat app/dist: file does not exist
```

Or a more subtle failure where the stage name isn't recognized:

```
failed to solve: failed to compute cache key: failed to calculate checksum of ref abc123::xyz456: "/app/dist": not found
```

Or the build succeeds but the final image is missing the expected files:

```shell
docker run myapp ls /app/dist
# ls: cannot access '/app/dist': No such file or directory
```

## Why This Happens
Multi-stage builds copy files from one stage to another using `COPY --from=<stage>`. Failures occur when:

- **Stage name typo or case mismatch** — `COPY --from=Builder` won't find a stage named `builder`. Stage names are case-sensitive.
- **Build step in the source stage failed silently** — if `npm run build` or `go build` exited with an error that wasn't caught, the output files don't exist when `COPY --from` runs.
- **Wrong file path in the source stage** — the build output lands in `/app/build` but you're copying from `/app/dist`.
- **`.dockerignore` excluding needed files** — files excluded from the build context can't be used as `COPY` sources within the same stage.
- **Stage index used incorrectly** — `COPY --from=0` refers to the first `FROM` stage by index, but indices shift when you add or reorder stages.
- **BuildKit cache serving stale artifacts** — Docker's layer cache may return a cached version of a stage that doesn't reflect recent changes to the build command.
- **Multi-platform builds with mismatched architectures** — cross-compiling for one architecture and then copying the binary into a base image of a different architecture causes silent failures.
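Several of these causes (name mismatches, index drift) can be spotted by listing the stages a Dockerfile defines next to the stages it references. A minimal sketch using standard Unix tools; `Dockerfile.example` here is a stand-in for your own file:

```shell
# Stand-in Dockerfile with a deliberate casing mismatch
cat > Dockerfile.example <<'EOF'
FROM node:20-alpine AS Builder
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EOF

# Stage names defined with "FROM ... AS <name>"
echo "defined:"
awk 'toupper($1) == "FROM" && toupper($3) == "AS" { print $4 }' Dockerfile.example

# Stage names referenced with "--from=<name>"
echo "referenced:"
grep -o -- '--from=[^ ]*' Dockerfile.example | cut -d= -f2

# Output:
# defined:
# Builder
# referenced:
# builder
```

Anything that appears under "referenced" but not under "defined" (here `builder` vs `Builder`) is a stage that `COPY --from` will fail to find.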
## Fix 1: Use Named Stages

Always name your build stages with `AS <name>`. Referencing stages by numeric index (`--from=0`) breaks when you add or reorder stages:

```dockerfile
# Bad — unnamed stage referenced by index, breaks when stages are reordered
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm ci && npm run build

FROM nginx:alpine
COPY --from=0 /app/dist /usr/share/nginx/html  # Fragile
```

```dockerfile
# Good — named stage, resilient to reordering
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
WORKDIR /usr/share/nginx/html
COPY --from=builder /app/dist .  # Clear and stable
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Check that stage names are consistent and lowercase:
```dockerfile
# Wrong — case mismatch
FROM node:20 AS Builder          # Defined as "Builder"
# ...
COPY --from=builder /app/dist .  # Looking for "builder" — NOT FOUND
```

```dockerfile
# Correct — consistent casing (lowercase recommended)
FROM node:20 AS builder
# ...
COPY --from=builder /app/dist .  # Matches ✓
```

## Fix 2: Verify the Build Output Path
If the `COPY --from` path doesn't match where the build actually writes its output, the copy either errors out or silently leaves the expected files out of the final image:

```shell
# Debug the builder stage — run it to check what's there
docker build --target builder -t myapp-builder .
docker run --rm myapp-builder find /app -type d | head -20
# /app
# /app/node_modules
# /app/build   ← output is here, not /app/dist
```

Match the `COPY --from` path to the actual output:
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Check where CRA puts output: /app/build, not /app/dist

FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html  # Corrected path
```

Common build output locations by tool:
| Tool | Default output |
|---|---|
| Create React App | `/app/build` |
| Vite | `/app/dist` |
| Next.js | `/app/.next` |
| Angular CLI | `/app/dist/<project-name>` |
| Go (`go build`) | `/app/<binary-name>` |
| Maven (`mvn package`) | `/app/target/<name>.jar` |
| Gradle (`./gradlew build`) | `/app/build/libs/<name>.jar` |
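If your tool isn't in the table, you can usually locate the output by looking for the most recently modified files after a build. A rough sketch (the `app/build` tree below is a simulated stand-in for real build output):

```shell
# Simulate a build that wrote output somewhere unexpected
mkdir -p app/build/static
touch app/build/index.html app/build/static/main.js

# List directories containing files modified in the last 5 minutes —
# these are where the build just wrote its output
find app -type f -mmin -5 | xargs -n1 dirname | sort -u
# app/build
# app/build/static
```

The same `find` can be run inside the builder stage (via `docker run --rm myapp-builder find /app ...`) to see where the containerized build wrote its files.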
## Fix 3: Check for Silent Build Failures

If the build command exits with a non-zero code, Docker stops the step and the output files don't exist. But sometimes the command exits 0 despite a failed build:

```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# If npm run build fails but still exits 0, no output is produced
RUN npm run build
```

Add explicit verification:
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Verify the output exists before the final stage tries to copy it
RUN test -d /app/dist || (echo "Build failed — /app/dist not found" && exit 1)
```

Or build the stage in isolation and inspect it:
```shell
# Build only the builder stage
docker build --target builder -t debug-builder .

# Check what files were produced
docker run --rm debug-builder ls -la /app/dist
# If this fails, the build output didn't land in /app/dist
```

Enable BuildKit verbose output to see each step:
```shell
DOCKER_BUILDKIT=1 docker build --progress=plain .
```

## Fix 4: Fix .dockerignore Excluding Source Files
If `.dockerignore` excludes files that your build step needs, the build fails inside the container even though the files exist on your machine:

```
# .dockerignore — overly aggressive
*
!package.json
!package-lock.json
# src/ is excluded — COPY . . won't include it, build fails
```

```
# .dockerignore — balanced approach
node_modules
.git
.env
*.log
dist
build
.next
```

**Pro Tip:** Use `COPY` selectively instead of `COPY . .` to control exactly what goes into each stage:
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app

# Layer 1: dependencies (cached separately)
COPY package*.json ./
RUN npm ci

# Layer 2: source code only
COPY src/ ./src/
COPY public/ ./public/
COPY tsconfig.json vite.config.ts ./
RUN npm run build
```

This approach also improves caching — source code changes don't invalidate the dependency installation layer.
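To sanity-check what `.dockerignore` leaves in the build context without running a build, you can approximate the filter with standard tools. This sketch treats each pattern as a fixed substring (Docker's real matcher is glob-based and also supports `*`, `**`, and `!` negations), and the `proj/` tree is a stand-in:

```shell
# Stand-in project tree and .dockerignore
mkdir -p proj/src proj/node_modules
touch proj/src/index.ts proj/package.json proj/node_modules/lib.js proj/.env
printf 'node_modules\n.env\n' > proj/.dockerignore

# Approximate the context: list files, dropping paths that contain any
# ignore pattern as a fixed substring (real matching is glob-based)
(cd proj && find . -type f | grep -v -F -f .dockerignore)
# src/index.ts and package.json survive; node_modules/ and .env are dropped
```

If a file your `COPY . .` step needs is missing from this listing, a `.dockerignore` pattern is the likely culprit.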
## Fix 5: Reference External Images with --from

`COPY --from` can copy from any Docker image, not just stages in the same Dockerfile. This is useful for pulling binaries from official images:

```dockerfile
# Copy the Go toolchain from an official image
FROM ubuntu:22.04
COPY --from=golang:1.22 /usr/local/go /usr/local/go
ENV PATH="/usr/local/go/bin:${PATH}"
RUN go version
```

```dockerfile
# Copy the compiled binary from a build stage, then use a minimal runtime image
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server ./cmd/server

FROM gcr.io/distroless/static-debian12 AS runtime
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]
```

**Common Mistake:** When copying a Go binary into a `distroless` or `alpine` image, make sure you compile with `CGO_ENABLED=0`. A binary built with CGO enabled dynamically links against a C library (glibc or musl) — it won't start in a runtime image that doesn't provide that library.
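The linkage mistake can be caught at build time instead of at `docker run`. A sketch, assuming the alpine builder stage above; `readelf` comes from binutils (installed here just for the check), and a dynamically linked ELF binary carries an `INTERP` program header while a static one does not:

```dockerfile
# In the builder stage, after go build:
RUN apk add --no-cache binutils \
 && if readelf -l server | grep -q INTERP; then \
        echo "server is dynamically linked; rebuild with CGO_ENABLED=0"; \
        exit 1; \
    fi
```

With `CGO_ENABLED=0` the check passes; without it, the build fails in the builder stage with a clear message instead of producing an image that crashes on start.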
## Fix 6: Fix BuildKit Cache Serving Stale Data

Docker's layer cache can serve a cached version of a build stage that no longer reflects your latest code. Force a fresh build:

```shell
# Bypass cache for the entire build
docker build --no-cache .

# Or invalidate cache only from a specific stage onward using a build argument
docker build --build-arg CACHE_BUST=$(date +%s) .
```

ARG-based cache invalidation:
```dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Add a build arg to bust cache at a specific point
ARG CACHE_BUST=1
RUN echo "Cache bust: $CACHE_BUST"
COPY . .
RUN npm run build
```

```shell
# Force re-run from the CACHE_BUST point
docker build --build-arg CACHE_BUST=$(date +%s) .
```

## Fix 7: Full Working Multi-Stage Dockerfile Examples
**Node.js (React/Vite) → Nginx:**

```dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
# npm ci already requires an exact lockfile match
# (--frozen-lockfile is a yarn/pnpm flag, not npm)
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM nginx:1.25-alpine AS runtime
COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

**Go application → Distroless:**
```dockerfile
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -trimpath -o server ./cmd/server

FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
USER nonroot:nonroot
EXPOSE 8080
ENTRYPOINT ["/server"]
```

**Java Spring Boot → JRE:**
```dockerfile
FROM maven:3.9-eclipse-temurin-21 AS builder
WORKDIR /app
COPY pom.xml ./
RUN mvn dependency:go-offline -q
COPY src ./src
RUN mvn package -DskipTests

FROM eclipse-temurin:21-jre-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

## Still Not Working?
List available stages in your Dockerfile:

```shell
grep -n "^FROM" Dockerfile
# 1: FROM node:20-alpine AS builder
# 2: FROM nginx:alpine
```

Run each stage individually to isolate the failure:
```shell
# Test stage 1
docker build --target builder -t stage1-test .
docker run --rm stage1-test ls /app/dist

# If stage 1 is fine, test the copy manually
docker run --rm stage1-test cat /app/dist/index.html
```

Check Docker and BuildKit versions — older versions have known multi-stage bugs:
```shell
docker version
# Ensure Docker Engine 18.09+ for BuildKit support
# Ensure Docker 20.10+ for full multi-stage stability

# Enable BuildKit explicitly
DOCKER_BUILDKIT=1 docker build .

# Or set in Docker daemon config
# /etc/docker/daemon.json: { "features": { "buildkit": true } }
```

For related Docker issues, see Fix: Docker COPY Failed — File Not Found and Fix: Docker Build ARG Not Available.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.