
Fix: Docker Compose Service failed to build / ERROR building

FixDevs

Quick Answer

How to fix Docker Compose Service failed to build errors caused by wrong Dockerfile paths, YAML syntax issues, build args, platform mismatches, and network failures.

The Error

You run docker compose up --build or docker compose build and get:

Service 'web' failed to build: error building at STEP ...

Or one of these variations:

ERROR: Service 'api' failed to build: Build failed
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0
=> ERROR [internal] load build definition from Dockerfile

The build refuses to complete. Your containers never start. This error has many root causes, from a wrong file path to a subtle YAML indentation mistake. Below are the eight most common fixes.

Why This Happens

Docker Compose reads your docker-compose.yml (or compose.yaml) file, then hands each service’s build instructions off to the Docker engine. The “Service failed to build” error is a catch-all that fires whenever any step in that chain breaks. Common triggers include:

  • The Dockerfile path or build context is wrong.
  • The YAML file has syntax or indentation errors.
  • Build arguments or environment variables are missing.
  • A multi-stage build references a stage that does not exist.
  • The target platform does not match the image architecture.
  • Network problems block package downloads during the build.
  • You are mixing Docker Compose v1 and v2 command syntax.
  • Volume mounts defined in Compose conflict with the build step.

Each fix below targets one of these causes. Work through them in order — the first three cover the vast majority of cases.
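Whichever cause applies, the first step is the same: get the full build log. BuildKit collapses output by default, so the line that names the real error is often hidden. Re-running with plain progress output prints every step verbatim:

```
docker compose build --progress=plain
```

The actual failure is usually a few lines above the final “failed to build” message.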

Fix 1: Fix the Dockerfile Path and Build Context

The most common reason the build fails is that Compose cannot find the Dockerfile. By default, Compose looks for a file named Dockerfile inside the directory you specify as the build context.

Check your docker-compose.yml:

services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile.prod

Here context is the directory Compose sends to the Docker daemon, and dockerfile is the path relative to that context. If your file is at ./app/docker/Dockerfile.prod, you need:

    build:
      context: ./app
      dockerfile: docker/Dockerfile.prod

Verify the file exists:

ls -la ./app/docker/Dockerfile.prod

If you use the short-form syntax, Compose expects a Dockerfile in the given directory:

services:
  web:
    build: ./app

This is equivalent to context: ./app with dockerfile: Dockerfile. If your file is named anything else, switch to the long-form syntax shown above.

Common Mistake: On case-sensitive file systems (Linux), Dockerfile and dockerfile are different files. Docker expects Dockerfile with a capital D by default. If your file is lowercase, either rename it or set dockerfile: dockerfile explicitly.

Also make sure your build context is not too large. A massive context can cause timeouts that surface as a generic build failure.
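A .dockerignore file in the context directory keeps the context small. A minimal sketch for a Node project (adjust the entries to your stack):

```
node_modules
.git
dist
*.log
```

You can gauge the effect by running du -sh ./app before and after adding it.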

Fix 2: Fix docker-compose.yml Syntax

YAML is sensitive to indentation, colons, and quoting. A single misplaced space can break the entire file.

Validate your file:

docker compose config

This parses the Compose file and prints the resolved configuration. If it prints an error, the problem is in your YAML.

Common syntax mistakes:

Wrong indentation under build:

# Broken — dockerfile is not nested under build
services:
  web:
    build:
    context: ./app
    dockerfile: Dockerfile
# Fixed — proper nesting
services:
  web:
    build:
      context: ./app
      dockerfile: Dockerfile

Mixing tabs and spaces. YAML requires spaces only. If your editor inserts a tab, the parser fails. Run this to check:

grep -P '\t' docker-compose.yml

If that returns any lines, replace the tabs with spaces.
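Once grep confirms tabs are present, sed can swap them for spaces. A sketch against a throwaway file (the /tmp path and two-space width are illustrative; point sed at your real docker-compose.yml once you have verified the substitution):

```shell
# Demo file with a stray tab on the second line:
printf 'services:\n\tweb:\n' > /tmp/compose-demo.yml

# Replace every tab with two spaces, keeping a .bak copy of the original:
sed -i.bak $'s/\t/  /g' /tmp/compose-demo.yml

# Confirm no tabs remain (prints the message only when grep finds none):
grep -q $'\t' /tmp/compose-demo.yml || echo "no tabs found"
```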

Using the deprecated version field. Docker Compose v2 ignores the version key, but if you have a malformed value it can still cause parsing issues. The safest approach is to remove it entirely:

# Remove this line — it is no longer needed
# version: "3.8"

services:
  web:
    build: ./app

Missing colon or extra colon. Every key needs exactly one colon followed by a space (or a newline for nested keys). Double-check lines you recently edited.

Fix 3: Fix Build Args and Environment Variables

If your Dockerfile uses ARG values that are not passed from Compose, the build can fail at the step that references them.

Dockerfile:

ARG NODE_VERSION
FROM node:${NODE_VERSION}-alpine

Compose file:

services:
  web:
    build:
      context: .
      args:
        NODE_VERSION: "20"

If NODE_VERSION is missing from the args block, the FROM line resolves to node:-alpine, which is not a valid image, and the build fails.

You can also pull build args from your environment using a .env file:

    build:
      context: .
      args:
        NODE_VERSION: ${NODE_VERSION}

Then in .env:

NODE_VERSION=20

Verify your args resolve correctly:

docker compose config | grep -A 5 "build"

This shows the fully resolved build configuration, including interpolated variables. If any value is blank, that is your problem.
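Compose's variable interpolation also supports defaults, which guards against a blank value when the variable is unset. A sketch (the fallback of 20 is an example):

```
    build:
      context: .
      args:
        NODE_VERSION: ${NODE_VERSION:-20}
```

With the :- form, Compose substitutes 20 whenever NODE_VERSION is unset or empty; a plain - covers only the unset case.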

Fix 4: Fix Multi-Stage Build References

Multi-stage builds let you reference earlier stages by name. If the name does not match, the build fails.

# Stage 1
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# Stage 2 — name must match exactly
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html

The --from=builder reference must match the AS builder label exactly. If you rename the stage to build but forget to update the COPY line, you get:

failed to solve: failed to compute cache key: "builder" not found

When Compose is involved, this becomes a generic “Service failed to build” error. Check every --from= and COPY --from= reference in your Dockerfile.
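A quick way to audit stage names is to grep out every definition and every reference and compare them by eye. A sketch against a throwaway Dockerfile (the stage names are examples; point the grep at your real Dockerfile):

```shell
# Throwaway Dockerfile reproducing the two-stage layout above:
cat > /tmp/Dockerfile.demo <<'EOF'
FROM node:20-alpine AS builder
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EOF

# Every name after --from= must match a name after AS:
grep -inE '( AS )|(--from=)' /tmp/Dockerfile.demo
```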

Pro Tip: You can also target a specific stage from Compose using the target key. This is useful when the same Dockerfile has both a development and production stage:

services:
  web:
    build:
      context: .
      target: builder

If target references a stage name that does not exist in the Dockerfile, the build fails immediately. Double-check the spelling.

Fix 5: Fix Platform Mismatch (linux/amd64 vs arm64)

If you are on an Apple Silicon Mac (M1/M2/M3/M4) or another ARM device, pulling or building images meant for linux/amd64 will fail or produce warnings:

WARNING: The requested image's platform (linux/amd64) does not match
the detected host platform (linux/arm64/v8)

Sometimes this manifests as a build error when a base image has no ARM variant. Fix it by setting the service-level platform in Compose, which applies to both image pulls and builds:

services:
  web:
    build:
      context: .
    platform: linux/amd64

Or build for multiple platforms using Docker Buildx:

docker buildx create --use
docker buildx bake --set '*.platform=linux/amd64'

If you need native ARM performance, find a base image that supports linux/arm64. Most official images (Node, Python, Nginx, PostgreSQL) now publish multi-arch manifests.

For cross-platform CI/CD pipelines, set the platform explicitly in both the Compose file and your CI config to avoid surprises when the build host architecture differs from development.
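To confirm which side of the mismatch you are on, check the host architecture first (the docker command in the comment additionally requires a running daemon):

```shell
# Print the host CPU architecture: arm64/aarch64 means Apple Silicon or
# another ARM machine, x86_64 means Intel/AMD:
uname -m

# With the daemon running, Docker's own view is available via:
#   docker version --format '{{.Server.Arch}}'
```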

Fix 6: Fix Network Issues During Build (DNS, Proxy)

If the build step runs apt-get update, npm install, pip install, or any command that fetches packages from the internet, a network failure kills the build:

E: Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/...
Could not resolve host: registry.npmjs.org

Compose reports this as a build failure. The root cause is network, not Docker.

Check DNS resolution inside the build:

docker run --rm alpine nslookup registry.npmjs.org

If that fails, Docker cannot resolve DNS. Add custom DNS to the Docker daemon config (/etc/docker/daemon.json on Linux, Docker Desktop settings on Mac/Windows):

{
  "dns": ["8.8.8.8", "8.8.4.4"]
}

Restart Docker after changing this:

sudo systemctl restart docker

Behind a corporate proxy, pass proxy settings as build args:

services:
  web:
    build:
      context: .
      args:
        HTTP_PROXY: http://proxy.corp.com:8080
        HTTPS_PROXY: http://proxy.corp.com:8080
        NO_PROXY: localhost,127.0.0.1

Docker also respects HTTP_PROXY and HTTPS_PROXY in the daemon config and in ~/.docker/config.json. Check all three locations if proxy issues persist.
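For the client side, ~/.docker/config.json accepts a proxies block that Docker injects into builds and containers automatically (the proxy host below is the same placeholder as above):

```
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.corp.com:8080",
      "httpsProxy": "http://proxy.corp.com:8080",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```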

If your Docker daemon is not running at all, you will see a different error — see how to fix Docker daemon not running.

Fix 7: Fix Docker Compose v1 vs v2 Command Differences

Docker Compose v1 (docker-compose, with a hyphen) and v2 (docker compose, as a Docker subcommand) behave differently in several ways. If you are following an older tutorial, the commands may not work on your system.

Key differences:

Feature           v1 (docker-compose)      v2 (docker compose)
Command           docker-compose build     docker compose build
Config file       docker-compose.yml       docker-compose.yml or compose.yaml
version field     Required                 Ignored (optional)
Build behavior    Legacy builder           BuildKit by default

Check which version you have:

docker compose version

If that fails, you might only have v1 installed:

docker-compose --version

Docker Compose v1 reached end-of-life in July 2023. If you are still on v1, upgrade:

# Remove v1
sudo apt remove docker-compose

# Install v2 plugin
sudo apt update
sudo apt install docker-compose-plugin

On Docker Desktop, Compose v2 is included by default. Make sure Docker Desktop is up to date.

Build failures that only happen with v2 are often caused by BuildKit differences. If you need to temporarily disable BuildKit to debug:

DOCKER_BUILDKIT=0 docker compose build

This falls back to the legacy builder, which produces more verbose output and can help isolate the problem. Once you find the issue, switch BuildKit back on — it is faster and more efficient.

Fix 8: Fix Volume Mount Errors in Compose

Volume definitions in your Compose file do not directly affect the build step — Docker builds happen from the Dockerfile and build context only. But a misconfigured volumes section can cause confusing errors at startup that look like build failures.

A common mistake is mounting a host directory that overwrites files created during the build:

services:
  web:
    build: ./app
    volumes:
      - ./app:/usr/src/app

The build installs dependencies into /usr/src/app/node_modules. Then the volume mount replaces /usr/src/app with your host directory, which may not have node_modules. The container starts and immediately crashes with “Module not found.”

Fix this with an anonymous volume to preserve node_modules:

services:
  web:
    build: ./app
    volumes:
      - ./app:/usr/src/app
      - /usr/src/app/node_modules

The second line creates an anonymous volume that prevents the host mount from overwriting node_modules.

Another issue: named volumes that do not exist. If you reference a volume in a service but forget to declare it at the top level, Compose fails:

services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data

# This top-level block is required
volumes:
  pgdata:

Without the top-level volumes declaration, Compose throws an error before any build starts.
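You can list the volumes Compose has resolved, and catch a missing top-level declaration before any build starts, with the config subcommand:

```
docker compose config --volumes
```

This prints one volume name per line; an undeclared volume surfaces as a parse error here instead of mid-run.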

If you run into permission denied errors when Docker tries to access volume mount paths, that is a separate issue related to user permissions on the Docker socket or the host filesystem.

Still Not Working?

If none of the fixes above solved your problem, check these less common causes:

Compose Profiles. If your service uses profiles, it will not build unless you activate that profile:

services:
  debug:
    build: ./debug-tools
    profiles:
      - debug

Build it with:

docker compose --profile debug build

Without the --profile flag, Compose skips the service entirely, which can be confusing if you expect it to build.

depends_on with Health Checks. If service A depends on service B with a health check condition, and B fails its health check, A never starts. This is not a build failure, but the error output can be misleading:

services:
  web:
    build: ./app
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

If db never becomes healthy, web stays in a waiting state. Check the health status:

docker compose ps
docker inspect --format='{{.State.Health.Status}}' <container_id>

Compose extends and include. Compose v2 supports include for splitting configs across files and extends for inheriting service definitions. If the referenced file or service does not exist, the build fails:

include:
  - path: ./monitoring/compose.yaml

services:
  web:
    extends:
      file: ./base-compose.yaml
      service: base-web
    build: ./app

Verify that all referenced files exist and that the service names match exactly.

Clear the build cache. Sometimes a corrupted cache causes repeated build failures. Force a clean build:

docker compose build --no-cache

Or remove all build cache:

docker builder prune -a

Check Docker resource limits. If Docker Desktop is configured with too little memory or disk space, builds fail silently. Open Docker Desktop settings and increase the memory limit to at least 4 GB. Check available disk:

docker system df

If disk usage is high, clean up:

docker system prune -a

If your Docker entrypoint is not found after a successful build, that is a runtime problem — check the entrypoint path and file permissions. For image not found errors, verify the image name and tag exist on the registry.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
