
Fix: Elasticsearch index_not_found_exception (Index Does Not Exist)

FixDevs

Quick Answer

Elasticsearch returns a 404 index_not_found_exception when an operation targets an index or alias that does not exist in the cluster. To fix it: confirm which indices actually exist, create the missing index with explicit mappings (or let an index template handle it), and repair any alias that points nowhere. The fixes below cover each case.

The Error

An Elasticsearch query or index operation fails with:

{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index [my-index]",
        "index": "my-index"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index [my-index]",
    "index_uuid": "_na_",
    "index": "my-index",
    "status": 404
  },
  "status": 404
}

Or in application logs:

elasticsearch.exceptions.NotFoundError: NotFoundError(404, 'index_not_found_exception', 'no such index [logs-2026-03]')

Or when using the Elasticsearch JavaScript client:

ResponseError: index_not_found_exception: [index_not_found_exception] Reason: no such index [products]

Why This Happens

Elasticsearch throws index_not_found_exception when an operation references an index that does not exist in the cluster. Common causes:

  • The index was never created — automatic index creation is disabled, or the expected setup step was skipped.
  • The index name has a typo or wrong date suffix — time-based indices like logs-2026-03 differ from logs-2026.03.
  • The index was deleted — a rollover, ILM policy, or manual deletion removed it.
  • Wrong cluster or environment — connecting to a staging cluster that has different indices than production.
  • Alias does not exist — operations targeting an alias fail if the alias was never created or was deleted.
  • action.auto_create_index is disabled — Elasticsearch can auto-create indices on write, but this is often disabled in production for safety.
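The date-suffix mismatch is easy to introduce when index names are assembled ad hoc in several places. One guard is to derive the name in a single helper. A minimal sketch — `monthly_index` is a hypothetical function, not part of any client library:

```python
from datetime import date

def monthly_index(prefix: str, day: date, date_sep: str = "-") -> str:
    """Build a monthly index name, e.g. 'logs-2026-03'. Keeping this logic
    in one place prevents 'logs-2026-03' vs 'logs-2026.03' drift."""
    return f"{prefix}-{day.year:04d}{date_sep}{day.month:02d}"

print(monthly_index("logs", date(2026, 3, 15)))       # logs-2026-03
print(monthly_index("logs", date(2026, 3, 15), "."))  # logs-2026.03
```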

Fix 1: Verify the Index Exists

Before debugging further, confirm what indices currently exist in the cluster:

# List all indices with their status
curl -X GET "localhost:9200/_cat/indices?v&pretty"

# Or filter by pattern
curl -X GET "localhost:9200/_cat/indices/my-index*?v&pretty"

# Show aliases
curl -X GET "localhost:9200/_cat/aliases?v&pretty"

# Check if a specific index exists (returns 200 or 404)
curl -I "localhost:9200/my-index"

Using the Elasticsearch client (Python):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Check if index exists
if es.indices.exists(index="my-index"):
    print("Index exists")
else:
    print("Index does not exist")

# List all indices matching a pattern
indices = es.cat.indices(index="logs-*", h=["index", "health", "docs.count", "store.size"])
print(indices)

Using the Elasticsearch client (JavaScript):

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function checkIndex() {
  const exists = await client.indices.exists({ index: 'my-index' });
  console.log('Index exists:', exists); // true or false
}

Fix 2: Create the Index

If the index does not exist, create it with the appropriate mappings:

Simple index creation:

curl -X PUT "localhost:9200/my-index?pretty" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "title": { "type": "text" },
      "price": { "type": "float" },
      "created_at": { "type": "date" },
      "in_stock": { "type": "boolean" }
    }
  }
}'

Create index with alias (recommended for production):

curl -X PUT "localhost:9200/products-v1?pretty" -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "name": { "type": "text", "analyzer": "standard" },
      "price": { "type": "scaled_float", "scaling_factor": 100 },
      "category": { "type": "keyword" }
    }
  },
  "aliases": {
    "products": {}
  }
}'

Your application queries the products alias — you can later reindex to products-v2 and switch the alias without downtime.
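The version-behind-an-alias convention can be centralized in a small helper that names the physical index and wires the alias into the create body. A sketch assuming a `-vN` naming scheme; `versioned_index_body` is hypothetical:

```python
def versioned_index_body(alias: str, version: int, mappings: dict):
    """Return (physical index name, create body) with the alias attached
    at creation time, so callers always query the stable alias."""
    name = f"{alias}-v{version}"
    return name, {"mappings": mappings, "aliases": {alias: {}}}

name, body = versioned_index_body(
    "products", 1, {"properties": {"name": {"type": "text"}}}
)
print(name)                            # products-v1
print("products" in body["aliases"])   # True
```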

Python — create index with error handling:

from elasticsearch import Elasticsearch, BadRequestError

es = Elasticsearch("http://localhost:9200")

def create_index_if_not_exists(index_name: str, mappings: dict, settings: dict = None):
    if es.indices.exists(index=index_name):
        print(f"Index '{index_name}' already exists")
        return

    try:
        # Pass mappings/settings as keyword arguments — the 'body' parameter
        # is deprecated in elasticsearch-py 8.x
        es.indices.create(index=index_name, mappings=mappings, settings=settings)
        print(f"Created index '{index_name}'")
    except BadRequestError as e:
        print(f"Failed to create index: {e}")
        raise

create_index_if_not_exists(
    "products",
    mappings={
        "properties": {
            "name": {"type": "text"},
            "price": {"type": "float"},
        }
    }
)

Fix 3: Enable or Configure Auto-Index Creation

Elasticsearch can automatically create an index when a document is first indexed. This is enabled by default but is often disabled in production:

Check if auto-creation is enabled:

curl -X GET "localhost:9200/_cluster/settings?pretty"
# Look for: "action.auto_create_index"

Enable auto-creation (for development):

curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "action.auto_create_index": "true"
  }
}'

Allow auto-creation for specific index patterns only (production-safe):

# logs-* and metrics-* are auto-created; everything else (-*) is denied.
# (The comment stays outside the body — JSON does not allow comments.)
curl -X PUT "localhost:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "action.auto_create_index": "logs-*,metrics-*,-*"
  }
}'
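The pattern list is evaluated left to right and the first match wins: a bare or `+`-prefixed pattern allows auto-creation, a `-`-prefixed pattern denies it, and a name matching nothing is not auto-created. A simplified model of that evaluation — a sketch for intuition, not the actual implementation:

```python
from fnmatch import fnmatch

def auto_create_allowed(index: str, setting: str) -> bool:
    """Approximate how action.auto_create_index evaluates its pattern list."""
    if setting == "true":
        return True
    if setting == "false":
        return False
    for pattern in setting.split(","):
        pattern = pattern.strip()
        if pattern.startswith("-"):
            if fnmatch(index, pattern[1:]):
                return False  # first match wins: denied
        elif fnmatch(index, pattern.lstrip("+")):
            return True       # first match wins: allowed
    return False              # nothing matched: not auto-created

print(auto_create_allowed("logs-2026-03", "logs-*,metrics-*,-*"))  # True
print(auto_create_allowed("products", "logs-*,metrics-*,-*"))      # False
```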

Pro Tip: In production, avoid relying on auto-index creation. Explicitly creating indices with defined mappings prevents field type conflicts (a text field in one document and an integer in another causes a mapping exception that makes the index unusable). Use index templates to define mappings for automatically-created indices.
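The conflict the tip describes can be shown with a toy model of dynamic mapping: the first document to use a field locks in its type, and a later document with a different type is rejected. Heavily simplified — real Elasticsearch type inference and coercion rules are richer:

```python
def infer_type(value) -> str:
    """Toy version of dynamic field type inference."""
    if isinstance(value, bool):   # bool before int: True is an int in Python
        return "boolean"
    if isinstance(value, int):
        return "long"
    if isinstance(value, float):
        return "float"
    return "text"

mapping = {}  # field name -> locked-in type

def index_doc(doc: dict) -> None:
    for field, value in doc.items():
        inferred = infer_type(value)
        locked = mapping.setdefault(field, inferred)
        if locked != inferred:
            raise ValueError(
                f"mapper_parsing_exception: {field} is {locked}, got {inferred}"
            )

index_doc({"price": 10})            # 'price' is now locked to long
try:
    index_doc({"price": "cheap"})   # a text value no longer fits
except ValueError as e:
    print(e)
```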

Fix 4: Use Index Templates for Time-Based Indices

If your application creates time-based indices (e.g., logs-2026-03-15), use an index template so new indices get the correct mappings automatically:

# Create an index template for logs-* indices
curl -X PUT "localhost:9200/_index_template/logs-template?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "level": { "type": "keyword" },
        "message": { "type": "text" },
        "service": { "type": "keyword" }
      }
    }
  },
  "priority": 100
}'

Now any index matching logs-*, whether created automatically or manually, gets these settings and mappings.

Verify the template applies:

# Create a test index matching the pattern
curl -X PUT "localhost:9200/logs-2026-03-15?pretty"

# Check its mappings — should match the template
curl -X GET "localhost:9200/logs-2026-03-15/_mapping?pretty"
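When several templates match a new index, Elasticsearch applies the one with the highest priority, which is why the template above sets priority 100. A simplified model of that selection — the low-priority catch-all template here is hypothetical, added for contrast, and composable component templates are ignored:

```python
from fnmatch import fnmatch

templates = {
    "logs-template": {"index_patterns": ["logs-*"], "priority": 100},
    "catch-all": {"index_patterns": ["*"], "priority": 0},  # hypothetical
}

def winning_template(index: str):
    """Return the name of the highest-priority matching template, or None."""
    matches = [
        (body["priority"], name)
        for name, body in templates.items()
        if any(fnmatch(index, p) for p in body["index_patterns"])
    ]
    return max(matches)[1] if matches else None

print(winning_template("logs-2026-03-15"))  # logs-template
print(winning_template("metrics-2026-03"))  # catch-all
```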

Fix 5: Handle Missing Indices in Application Code

Instead of letting queries fail, check for the index before querying or create it on demand:

JavaScript — defensive query with index check:

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: process.env.ELASTICSEARCH_URL });

async function searchProducts(query) {
  const indexName = 'products';

  // Check and create index if missing
  const exists = await client.indices.exists({ index: indexName });
  if (!exists) {
    await createProductsIndex();
  }

  const result = await client.search({
    index: indexName,
    query: {
      match: { name: query },
    },
  });

  return result.hits.hits.map(hit => hit._source);
}

async function createProductsIndex() {
  await client.indices.create({
    index: 'products',
    mappings: {
      properties: {
        name: { type: 'text' },
        price: { type: 'float' },
        category: { type: 'keyword' },
      },
    },
  });
}

Python — catch the specific exception:

from elasticsearch import Elasticsearch, NotFoundError

es = Elasticsearch("http://localhost:9200")

def search_with_fallback(index: str, query: dict):
    try:
        return es.search(index=index, query=query)
    except NotFoundError as e:
        if "index_not_found_exception" in str(e):
            # Log the issue and return empty results instead of crashing
            print(f"Index '{index}' not found, returning empty results")
            return {"hits": {"hits": [], "total": {"value": 0}}}
        raise  # Re-raise other NotFoundErrors

Fix 6: Fix Index Alias Issues

If your application queries an alias that does not exist or points to a deleted index:

List all aliases:

curl -X GET "localhost:9200/_alias?pretty"
curl -X GET "localhost:9200/_cat/aliases?v&pretty"

Create or update an alias:

# Point 'products' alias to 'products-v1'
curl -X POST "localhost:9200/_aliases?pretty" -H 'Content-Type: application/json' -d'
{
  "actions": [
    { "add": { "index": "products-v1", "alias": "products" } }
  ]
}'

Switch alias from old index to new index (zero-downtime reindex):

curl -X POST "localhost:9200/_aliases?pretty" -H 'Content-Type: application/json' -d'
{
  "actions": [
    { "remove": { "index": "products-v1", "alias": "products" } },
    { "add": { "index": "products-v2", "alias": "products" } }
  ]
}'

Both actions execute atomically — there is no moment when the alias is unset.
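In application code, the swap payload can be built once and reused. A sketch — `alias_swap_actions` is a hypothetical helper; with the Python client the resulting list would be handed to the alias-update API:

```python
def alias_swap_actions(alias: str, old_index: str, new_index: str) -> list:
    """Both actions go in one _aliases request, so the alias is never unset."""
    return [
        {"remove": {"index": old_index, "alias": alias}},
        {"add": {"index": new_index, "alias": alias}},
    ]

actions = alias_swap_actions("products", "products-v1", "products-v2")
print(actions[0])  # {'remove': {'index': 'products-v1', 'alias': 'products'}}
```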

Fix 7: Use ILM (Index Lifecycle Management) to Prevent Index Loss

If indices are being deleted unexpectedly, check your ILM policy:

# Check what ILM policies exist
curl -X GET "localhost:9200/_ilm/policy?pretty"

# Check what policy an index is using
curl -X GET "localhost:9200/my-index/_ilm/explain?pretty"

Create a safe ILM policy that rolls over but retains data:

curl -X PUT "localhost:9200/_ilm/policy/logs-policy?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "10gb",
            "max_age": "7d"
          }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}'

This rolls over indices when they reach 10GB or 7 days old, compacts them after 7 days, and deletes them after 90 days.
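The hot-phase rollover trigger reads as a simple predicate: roll over when either threshold is crossed. A toy check for intuition, not how ILM is actually implemented:

```python
from datetime import timedelta

GB = 1024 ** 3

def should_rollover(primary_shard_bytes: int, age: timedelta,
                    max_bytes: int = 10 * GB,
                    max_age: timedelta = timedelta(days=7)) -> bool:
    """Mirror the policy above: 10gb primary shard size OR 7 days of age."""
    return primary_shard_bytes >= max_bytes or age >= max_age

print(should_rollover(11 * GB, timedelta(days=1)))  # True  (size threshold)
print(should_rollover(1 * GB, timedelta(days=8)))   # True  (age threshold)
print(should_rollover(1 * GB, timedelta(days=2)))   # False
```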

Still Not Working?

Check the Elasticsearch cluster health. A red cluster status means some primary shards are unassigned — indices may appear to exist but be inaccessible:

curl -X GET "localhost:9200/_cluster/health?pretty"
# "status": "red" means some shards are unavailable

Check if the index is closed. A closed index is inaccessible but still exists. _cat/indices shows it with status close:

curl -X GET "localhost:9200/_cat/indices?v"
# Look for status 'close'

# Open a closed index
curl -X POST "localhost:9200/my-index/_open"

Check for index blocks. Write or read blocks on an index cause operation failures that can look like “index not found”:

curl -X GET "localhost:9200/my-index/_settings?pretty"
# Look for: "index.blocks.write": "true" or "index.blocks.read": "true"

# Remove write block
curl -X PUT "localhost:9200/my-index/_settings" -H 'Content-Type: application/json' -d'
{ "index.blocks.write": null }'

Verify cluster connectivity. If you get index_not_found_exception for an index you know exists, you may be connecting to the wrong cluster node or environment. Check your connection URL and verify cluster name with curl localhost:9200.

For related search and database issues, see Fix: Elasticsearch Cluster Red Status and Fix: MongoDB Connect ECONNREFUSED.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
