Fix: Go fatal error: all goroutines are asleep - deadlock!

FixDevs

Quick Answer

How to fix the Go fatal error "all goroutines are asleep - deadlock!" when it is caused by unbuffered channel operations with no counterpart, unclosed channels, WaitGroup misuse, mutex double-locking, or circular channel dependencies.

The Error

Your Go program crashes with:

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan send]:
main.main()
	/app/main.go:8 +0x50

Or variations:

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [chan receive]:
main.main()
	/app/main.go:10 +0x68

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [semacquire]:
sync.runtime_Semacquire(...)

Every goroutine in the program is blocked waiting for something (a channel operation, a mutex, a WaitGroup), and nothing can unblock any of them. Go’s runtime detects this condition and panics.

Why This Happens

Go detects deadlocks when all goroutines are blocked. The runtime checks if any goroutine can make progress. If none can, the program is deadlocked and cannot continue.

Common causes:

  • Sending to an unbuffered channel with no receiver. The sender blocks forever.
  • Receiving from a channel with no sender. The receiver blocks forever.
  • Forgetting to close a channel. A range loop over a channel blocks forever waiting for more values.
  • WaitGroup counter never reaches zero. wg.Wait() blocks because wg.Done() is never called.
  • Mutex double-lock. Locking a mutex that is already locked by the same goroutine.
  • Circular channel dependencies. Goroutine A waits on channel X, goroutine B waits on channel Y, and they need each other to proceed.

Fix 1: Fix Unbuffered Channel Sends

An unbuffered channel blocks the sender until a receiver is ready:

Broken — sending with no goroutine to receive:

func main() {
    ch := make(chan int)
    ch <- 42  // Deadlock! No goroutine is receiving
    fmt.Println(<-ch)
}

Fixed — receive in a goroutine:

func main() {
    ch := make(chan int)
    go func() {
        ch <- 42  // Send in a goroutine
    }()
    fmt.Println(<-ch)  // Receive in main
}

Fixed — use a buffered channel:

func main() {
    ch := make(chan int, 1)  // Buffer size 1
    ch <- 42                  // Does not block (buffer has space)
    fmt.Println(<-ch)         // 42
}

Pro Tip: Unbuffered channels (make(chan T)) require both a sender and a receiver to be ready simultaneously. Buffered channels (make(chan T, N)) allow up to N sends without a receiver. Use unbuffered channels for synchronization and buffered channels for decoupling.

Fix 2: Fix Channel Range Loops

range over a channel keeps receiving values until the channel is closed; if the sender never closes it, the loop blocks forever once the values run out:

Broken — channel never closed:

func main() {
    ch := make(chan int)

    go func() {
        for i := 0; i < 5; i++ {
            ch <- i
        }
        // Forgot to close(ch)!
    }()

    for v := range ch {  // Deadlock! range waits for close(ch) forever
        fmt.Println(v)
    }
}

Fixed — close the channel when done sending:

go func() {
    for i := 0; i < 5; i++ {
        ch <- i
    }
    close(ch)  // Signal that no more values will be sent
}()

for v := range ch {
    fmt.Println(v)  // Prints 0-4, then exits the loop
}

Fixed — use a known count instead of range:

for i := 0; i < 5; i++ {
    fmt.Println(<-ch)
}

Common Mistake: Closing a channel from the receiver side, or closing a channel multiple times. Only the sender should close a channel, and only close it once. Closing a closed channel causes a panic.

Fix 3: Fix WaitGroup Misuse

sync.WaitGroup deadlocks if Done() is never called:

Broken — Done() not called:

var wg sync.WaitGroup

for i := 0; i < 5; i++ {
    wg.Add(1)
    go func(n int) {
        // Forgot wg.Done()!
        fmt.Println(n)
    }(i)
}

wg.Wait()  // Deadlock! Counter never reaches 0

Fixed — use defer wg.Done():

for i := 0; i < 5; i++ {
    wg.Add(1)
    go func(n int) {
        defer wg.Done()  // Always called, even if the function panics
        fmt.Println(n)
    }(i)
}

wg.Wait()

Broken — Add() called inside the goroutine (race condition):

for i := 0; i < 5; i++ {
    go func(n int) {
        wg.Add(1)  // WRONG! Main goroutine might reach Wait() before Add()
        defer wg.Done()
        fmt.Println(n)
    }(i)
}

wg.Wait()  // Might return too early

Fixed — always call Add() before launching the goroutine:

for i := 0; i < 5; i++ {
    wg.Add(1)  // Add before starting the goroutine
    go func(n int) {
        defer wg.Done()
        fmt.Println(n)
    }(i)
}

wg.Wait()

Fix 4: Fix Select with Default

Use select to avoid blocking on channel operations:

ch := make(chan int)

// Blocking receive (might deadlock)
value := <-ch

// Non-blocking receive with select
select {
case value := <-ch:
    fmt.Println("Received:", value)
default:
    fmt.Println("No value available")
}

Timeout pattern:

select {
case value := <-ch:
    fmt.Println("Received:", value)
case <-time.After(5 * time.Second):
    fmt.Println("Timed out waiting for value")
}

Multiple channels:

select {
case msg := <-msgCh:
    handleMessage(msg)
case err := <-errCh:
    handleError(err)
case <-ctx.Done():
    fmt.Println("Context canceled")
    return
}

Fix 5: Fix Producer-Consumer Patterns

A common pattern that can deadlock if not implemented correctly:

Broken — producing before any consumer is running:

func main() {
    jobs := make(chan int)
    results := make(chan int)

    // Producer
    for i := 0; i < 5; i++ {
        jobs <- i  // Deadlock! No consumer running yet
    }
    close(jobs)

    // Consumer
    for j := range jobs {
        results <- j * 2
    }
}

Fixed — start consumer first, or use goroutines:

func main() {
    jobs := make(chan int, 10)   // Buffered
    results := make(chan int, 10)

    // Start consumer goroutine first
    go func() {
        for j := range jobs {
            results <- j * 2
        }
        close(results)
    }()

    // Producer
    for i := 0; i < 5; i++ {
        jobs <- i
    }
    close(jobs)

    // Collect results
    for r := range results {
        fmt.Println(r)
    }
}

Worker pool pattern:

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)
    var wg sync.WaitGroup

    // Start 3 workers
    for w := 0; w < 3; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := range jobs {
                results <- j * 2
            }
        }()
    }

    // Send jobs
    for i := 0; i < 10; i++ {
        jobs <- i
    }
    close(jobs)

    // Wait for workers and close results
    go func() {
        wg.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}

Fix 6: Fix Mutex Deadlocks

Go’s sync.Mutex is not reentrant — locking it twice from the same goroutine deadlocks:

Broken:

var mu sync.Mutex

func doWork() {
    mu.Lock()
    defer mu.Unlock()
    helper()  // Calls Lock() again — deadlock!
}

func helper() {
    mu.Lock()  // Deadlock! Already locked by doWork()
    defer mu.Unlock()
    // ...
}

Fixed — restructure to avoid nested locks:

func doWork() {
    mu.Lock()
    data := readData()
    mu.Unlock()

    result := processData(data)  // No lock held during processing

    mu.Lock()
    writeResult(result)
    mu.Unlock()
}

Fixed — use a lock-free inner function:

func doWork() {
    mu.Lock()
    defer mu.Unlock()
    helperLocked()  // Assumes lock is already held
}

func helperLocked() {
    // Does NOT lock — caller must hold the lock
    // Document this requirement in a comment
}

Fix 7: Fix Context Cancellation

Use context.Context for proper goroutine lifecycle management:

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    ch := make(chan string)

    go func() {
        result := longOperation()
        ch <- result
    }()

    select {
    case result := <-ch:
        fmt.Println("Result:", result)
    case <-ctx.Done():
        fmt.Println("Operation timed out:", ctx.Err())
    }
}

Pass context to goroutines:

func worker(ctx context.Context, ch chan<- int) {
    for i := 0; ; i++ {
        select {
        case <-ctx.Done():
            return  // Exit when context is canceled
        case ch <- i:
            time.Sleep(100 * time.Millisecond)
        }
    }
}

Fix 8: Use the Race Detector

While the race detector does not detect deadlocks directly, it catches data races that often accompany deadlock-prone code:

go run -race main.go
go test -race ./...

Debug with GOTRACEBACK:

GOTRACEBACK=all go run main.go
# Prints stack traces for every user goroutine on a crash, not just the one that triggered it

Use runtime.NumGoroutine() to monitor goroutine leaks:

fmt.Println("Goroutines:", runtime.NumGoroutine())

Still Not Working?

Note: Go only detects deadlocks when all goroutines are blocked. If even one goroutine is running (e.g., a time.Sleep loop, an HTTP server), Go will not detect the deadlock. The program hangs silently instead of panicking.

Use pprof to debug hanging programs:

import _ "net/http/pprof"

go func() {
    http.ListenAndServe(":6060", nil)
}()

// Visit http://localhost:6060/debug/pprof/goroutine?debug=2
// Shows all goroutine stacks

For Go index out of range panics, see Fix: Go panic: runtime error: index out of range. For Go type errors, see Fix: Go cannot use X as type Y. For Go module issues, see Fix: Go module not found.

FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
