> **Canonical reference:** https://docs.exa.ai/reference/search-api-guide-for-coding-agents
>
> If anything below looks outdated or contradicts real API behavior, fetch that URL — it is the source of truth for search types, parameters, and response shape. Report staleness back to the user.

---

# Exa API Setup Guide

## Your Configuration

| Setting | Value |
|---------|-------|
| Coding Tool | Not specified |
| Use Case | General |
| Search Type | Auto - Balanced relevance and speed (default) |
| Content | Compact |

**Project Description:** (Not provided)

---

## API Key Setup

### Environment Variable

```bash
export EXA_API_KEY="YOUR_API_KEY"
```

### .env File

```env
EXA_API_KEY=YOUR_API_KEY
```
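Either way, code should read the key from the environment rather than hard-coding it; a minimal Python sketch:

```python
import os

# Read the Exa API key from the environment (empty string if unset).
api_key = os.environ.get("EXA_API_KEY", "")
if not api_key:
    print("EXA_API_KEY is not set; export it or add it to your .env file")
```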

---

## 🔌 Exa MCP Server

Give your AI coding assistant real-time web search with Exa MCP.

**Remote MCP URL (with your API key):**

```
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY
```

**Tool enablement (optional):**
Add a `tools=` query param to the MCP URL.

Enable specific tools:
```
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,get_code_context_exa,people_search_exa
```

Enable all tools:
```
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,web_search_advanced_exa,get_code_context_exa,crawling_exa,company_research_exa,people_search_exa,deep_researcher_start,deep_researcher_check
```
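If you assemble this URL in code, building the query string once avoids typos; a hedged Python sketch (the `exaApiKey` and `tools` parameter names are taken from the examples above):

```python
from urllib.parse import urlencode

def build_mcp_url(api_key, tools=None):
    """Build the remote Exa MCP URL with an API key and an optional tool list."""
    params = {"exaApiKey": api_key}
    if tools:
        # tools= takes a comma-separated list; keep commas literal, as in the examples.
        params["tools"] = ",".join(tools)
    return "https://mcp.exa.ai/mcp?" + urlencode(params, safe=",")

url = build_mcp_url("YOUR_API_KEY", ["web_search_exa", "get_code_context_exa"])
```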

**Your API key:** create and manage keys at [dashboard.exa.ai/api-keys](https://dashboard.exa.ai/api-keys).

**Available tools (enabled by default):**
- `web_search_exa`
- `get_code_context_exa`
- `company_research_exa`

**Optional tools (enable via `tools=`):**
- `web_search_advanced_exa`
- `crawling_exa`
- `people_search_exa`
- `deep_researcher_start`
- `deep_researcher_check`

**Troubleshooting:** if tools don't appear, restart your MCP client after updating the config.

**JSON config (Cursor, Windsurf, etc.):**

```json
{
  "mcpServers": {
    "exa": {
      "url": "https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY"
    }
  }
}
```

**Claude Desktop:**
Exa is available as a built-in Claude Connector. Go to **Settings** (or **Customize**) → **Connectors**, search for **Exa**, and click **+** to add it. No config files needed.

📖 Full docs: [docs.exa.ai/reference/exa-mcp](https://docs.exa.ai/reference/exa-mcp)

---

## Quick Start

### cURL

```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "query": "your search query here",
  "type": "auto",
  "numResults": 10,
  "contents": {
    "highlights": {
      "maxCharacters": 4000
    }
  }
}'
```
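The same call can be made from Python with just the standard library; a sketch mirroring the curl body above (raw JSON uses camelCase keys, per the case conventions later in this guide) — the network call is shown but not executed here:

```python
import json
import urllib.request

def search_payload(query, num_results=10):
    # Mirrors the quick-start body: auto search with capped highlight excerpts.
    return {
        "query": query,
        "type": "auto",
        "numResults": num_results,
        "contents": {"highlights": {"maxCharacters": 4000}},
    }

def run_search(api_key, query):
    """POST the payload to /search and return the parsed JSON response."""
    req = urllib.request.Request(
        "https://api.exa.ai/search",
        data=json.dumps(search_payload(query)).encode("utf-8"),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```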

---

## Search Type Reference

| Type | Best For | Approx Latency | Depth |
|------|----------|----------------|-------|
| `auto` (your selection) | Most queries — balanced relevance and speed | ~1 second | Smart |
| `fast` | Latency-sensitive queries that still need good relevance | ~450 ms | Basic |
| `instant` | Chat, voice, autocomplete, quick lookups | ~250 ms | Basic |
| `deep-lite` | Cheaper synthesis when full deep search is overkill | 4 seconds | Deep |
| `deep` | Research, enrichment, thorough results | 4-15 seconds | Deep |
| `deep-reasoning` | Complex research, multi-step reasoning, hard synthesis tasks | 12-40 seconds | Deepest |

Latency numbers are ballpark — synthesis (`outputSchema`) and forced livecrawls (`contents.maxAgeHours: 0`) stack on top of the base `type`. See the latency notes in the canonical reference for details.

**Tip:** `type="auto"` works well for most queries. `outputSchema` works on every search type, so you can request structured, grounded output regardless of which type you pick.

---

## Structured Outputs (outputSchema)

`outputSchema` works on **every** search type. Pass a JSON schema and Exa returns the synthesized answer as structured JSON in `output.content`, with field-level citations in `output.grounding`. Deep variants (`deep-lite`, `deep`, `deep-reasoning`) give higher-quality synthesis for complex queries, but the response shape is the same.

**Schema controls:** `type`, `description`, `required`, `properties`, `items`. Max nesting depth 2, max total properties 10. Do NOT add citation or confidence fields to the schema — `/search` returns grounding data automatically.

```bash
curl -X POST 'https://api.exa.ai/search' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "query": "articles about GPUs",
  "type": "auto",
  "outputSchema": {
    "type": "object",
    "description": "Companies mentioned in articles",
    "required": ["companies"],
    "properties": {
      "companies": {
        "type": "array",
        "description": "List of companies mentioned",
        "items": {
          "type": "object",
          "required": ["name"],
          "properties": {
            "name": { "type": "string", "description": "Name of the company" },
            "description": { "type": "string", "description": "Short description of what the company does" }
          }
        }
      }
    }
  },
  "contents": {
    "highlights": { "maxCharacters": 4000 }
  }
}'
```

### Response Shape

Responses with `outputSchema` include:
- `output.content` — structured JSON matching your schema (or a string for `{"type": "text"}` schemas)
- `output.grounding` — array of `{field, citations, confidence}` entries with source URLs

```json
{
  "output": {
    "content": {
      "companies": [
        {"name": "Nvidia", "description": "GPU and AI chip manufacturer"},
        {"name": "AMD", "description": "Semiconductor company producing GPUs and CPUs"}
      ]
    },
    "grounding": [
      {
        "field": "companies[0].name",
        "citations": [{"url": "https://...", "title": "Source"}],
        "confidence": "high"
      }
    ]
  }
}
```
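Downstream code usually wants citations keyed by field path; a small sketch that indexes `output.grounding` from a response shaped like the example above (the sample URL is illustrative):

```python
def citations_by_field(response):
    """Map each grounded field path to its list of citation URLs."""
    grounding = response.get("output", {}).get("grounding", [])
    return {
        entry["field"]: [c["url"] for c in entry.get("citations", [])]
        for entry in grounding
    }

# A response shaped like the sample above.
sample = {
    "output": {
        "grounding": [
            {
                "field": "companies[0].name",
                "citations": [{"url": "https://example.com/gpus", "title": "Source"}],
                "confidence": "high",
            }
        ]
    }
}
```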

### When to Use Structured Outputs

- **Enrichment workflows** — extract specific fields (company info, people data, product details)
- **Data pipelines** — get structured data directly instead of parsing free text
- **Grounded answers** — prefer `outputSchema` on `/search` over the legacy `/answer` endpoint
- Prefer a deep variant (`deep-lite`/`deep`/`deep-reasoning`) when you need multi-step reasoning or synthesis across many sources

---

## Content Configuration

Content is controlled via the `contents` object on `/search` (or top-level fields on `/contents`). You can combine `text`, `highlights`, and `summary` in the same call β€” they are not mutually exclusive.

| Mode | Config | Best For |
|------|--------|----------|
| Text | `"text": {"maxCharacters": 20000}` | Full content extraction, RAG |
| Highlights (your selection) | `"highlights": {"maxCharacters": 4000}` | Token-efficient excerpts |
| Summary | `"summary": {"query": "your question"}` or `"summary": true` | LLM-written summary per result |

### Tuning knobs

- **`summary`** — pass `true` for a generic summary, or `{"query": "..."}` to bias the summary toward a specific question. Supports a `schema` field for per-result structured output. Summary has no `verbosity` setting — verbosity lives on `text` (below).
- **`text.verbosity`** — `"compact" | "full"` (default `"compact"`). Compact returns only the main content of the page, excluding navbars, banners, footers, etc.
- **`text.includeHtmlTags`** — boolean (default `false`). When `true`, preserves HTML structure (useful for code blocks, tables).
- **`text.maxCharacters`** — hard cap on extracted text length. Always set this to control token cost.
- **`highlights.maxCharacters`** — total character budget across all highlights per result.
- **`highlights.query`** — custom query to direct the LLM's selection of highlights (useful when highlights should focus on a different angle than the search query).

**Case conventions:** JavaScript SDK and raw JSON use camelCase (`maxCharacters`). Python SDK uses snake_case (`max_characters`) — this applies inside nested dicts too: `contents={"highlights": {"max_characters": 4000}}`.

**⚠️ Token usage:** `text: true` with no cap can blow up context. Prefer `highlights` with `maxCharacters`, or `text` with `maxCharacters`, for agent workflows.
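These knobs compose into a single `contents` object; a hedged sketch that builds a token-conscious config (the helper name is illustrative; the field names come from the list above):

```python
def compact_contents(highlight_budget=4000, text_cap=None):
    """Build a `contents` config that keeps token usage bounded."""
    contents = {"highlights": {"maxCharacters": highlight_budget}}
    if text_cap is not None:
        # Only request full text when explicitly capped, per the token-usage warning.
        contents["text"] = {"maxCharacters": text_cap, "verbosity": "compact"}
    return contents
```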

---

## Domain Filtering (Optional)

Usually not needed - Exa's neural search finds relevant results without domain restrictions.

**When to use:**
- Targeting specific authoritative sources
- Excluding low-quality domains from results

**Example:**

```json
{
  "includeDomains": ["arxiv.org", "github.com"],
  "excludeDomains": ["pinterest.com"]
}
```

**Note:** `includeDomains` and `excludeDomains` can be used together to include a broad domain while excluding specific subdomains (e.g., `"includeDomains": ["vercel.com"], "excludeDomains": ["community.vercel.com"]`).

---

## Content Freshness (maxAgeHours)

`maxAgeHours` sets the maximum acceptable age (in hours) for cached content. If the cached version is older than this threshold, Exa will livecrawl the page to get fresh content.

| Value | Behavior | Best For |
|-------|----------|----------|
| 24 | Use cache if less than 24 hours old, otherwise livecrawl | Daily-fresh content |
| 1 | Use cache if less than 1 hour old, otherwise livecrawl | Near real-time data |
| 0 | Always livecrawl (ignore cache entirely) | Real-time data where cached content is unusable |
| -1 | Never livecrawl (cache only) | Maximum speed, historical/static content |
| *(omit)* | Default behavior (livecrawl as fallback if no cache exists) | **Recommended** — balanced speed and freshness |

**When LiveCrawl Isn't Necessary:**
Cached data is sufficient for many queries, especially for historical topics or educational content. These subjects rarely change, so reliable cached results can provide accurate information quickly.

See [maxAgeHours docs](https://exa.ai/docs/reference/livecrawling-contents#maxAgeHours) for more details.
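The table above can be encoded as a small lookup if you choose freshness per query class; purely illustrative (the key names are my own, the values come from the table):

```python
def max_age_hours(freshness):
    """Map a freshness requirement to a maxAgeHours value (None means omit the field)."""
    return {
        "daily": 24,           # use cache if under 24 hours old
        "near_real_time": 1,   # use cache if under 1 hour old
        "real_time": 0,        # always livecrawl
        "cache_only": -1,      # never livecrawl
        "default": None,       # omit: cache first, livecrawl as fallback
    }[freshness]
```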

---

## Other Endpoints

Beyond `/search`, the other endpoint you'll commonly use is `/contents`:

| Endpoint | Description | Docs |
|----------|-------------|------|
| `/contents` | Get clean, parsed content for URLs you already have | [Docs](https://exa.ai/docs/reference/get-contents) |

> For grounded answers, use `outputSchema` on `/search` instead of the legacy `/answer` endpoint. `/search` + `outputSchema` returns the same answer-with-citations shape in `output.content` / `output.grounding`.

### /contents β€” Get Contents for Known URLs

Use `/contents` when you already have URLs and need their content. Unlike `/search` (which finds and optionally retrieves content), `/contents` is purely for content extraction from known URLs.

**When to use `/contents` vs `/search`:**
- URLs from another source (database, user input, RSS feeds) → `/contents`
- Need to refresh stale content for URLs you already have → `/contents` with `maxAgeHours`
- Need to find AND get content in one call → `/search` with `contents`

```bash
curl -X POST 'https://api.exa.ai/contents' \
  -H 'x-api-key: YOUR_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{
  "urls": ["https://example.com/article", "https://example.com/blog-post"],
  "text": { "maxCharacters": 20000 }
}'
```

**Content retrieval options** (can be combined in the same request):

| Option | Config | Best For |
|--------|--------|----------|
| Text | `"text": {"maxCharacters": 20000}` | Full content extraction, RAG |
| Highlights | `"highlights": {"maxCharacters": 4000}` | Key excerpts, lower token usage |

**Highlights example** (token-efficient excerpts with relevance query):
```json
{
  "urls": ["https://example.com/article"],
  "highlights": { "maxCharacters": 4000, "query": "key findings" }
}
```

**Freshness control:** Add `maxAgeHours` to ensure content is fresh:
- `24` — livecrawl if cached content is older than 24 hours
- `0` — always livecrawl (ignore cache)
- Omit — use cache when available, livecrawl as fallback
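Putting these options together, a `/contents` body can be assembled like this (a sketch; `maxCharacters` and `maxAgeHours` follow the raw-JSON camelCase convention noted in the content section):

```python
def contents_payload(urls, highlight_budget=4000, max_age=None):
    """Build a /contents body: highlight excerpts plus an optional freshness bound."""
    payload = {
        "urls": list(urls),
        "highlights": {"maxCharacters": highlight_budget},
    }
    if max_age is not None:
        payload["maxAgeHours"] = max_age
    return payload
```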

---

## Troubleshooting

**⚠️ COMMON PARAMETER MISTAKES — avoid these:**
- `useAutoprompt` → **deprecated**, remove it entirely
- `includeUrls` / `excludeUrls` → **do not exist**. Use `includeDomains` / `excludeDomains`
- `text`, `summary`, `highlights` at the top level of `/search` → **must be nested** inside `contents` (e.g. `"contents": {"text": true}`). On `/contents` they ARE top-level — don't confuse the two.
- `numSentences`, `highlightsPerUrl` → **deprecated** highlights params. Use `maxCharacters` instead
- `tokensNum` → **does not exist**. Use `contents.text.maxCharacters` to limit text length
- `livecrawl: "always"` → **deprecated**. Use `contents.maxAgeHours: 0` instead
- `excludeDomains` + `category: "company" | "people"` → **400 error**. Those categories don't support `excludeDomains` or any date filters.

> **`stream: true`** switches `/search` to SSE mode (OpenAI-compatible chat-completion chunks). It's supported — just expect streaming chunks instead of one JSON response.

**Results not relevant?**
1. Try `type: "auto"` - most balanced option
2. Try `type: "deep"` - runs multiple query variations and ranks the combined results
3. Refine query - use singular form, be specific
4. Check category matches your use case

**Need structured data from search?**
1. Pass `outputSchema` on any search type — `auto` works; `deep`/`deep-reasoning` give higher-quality synthesis
2. Define the fields you need in the schema — Exa returns grounded JSON in `output.content` with citations in `output.grounding`

**Results too slow?**
1. Use `type: "fast"` or `type: "instant"`
2. Reduce `numResults`
3. Skip contents if you only need URLs

**No results?**
1. Remove filters (date, domain restrictions)
2. Simplify query
3. Try `type: "auto"` - has fallback mechanisms

---

## Resources

- Docs: https://exa.ai/docs
- Dashboard: https://dashboard.exa.ai
- API Status: https://status.exa.ai