MCP

Dinobase exposes an MCP (Model Context Protocol) server that gives AI agents direct access to your business data through tool calls.

Before connecting an MCP client, set up Dinobase with your data:

pip install dinobase
dinobase init
dinobase add stripe --api-key sk_test_...
dinobase sync

See Getting Started for a full walkthrough, Connecting Sources for the 100+ supported sources, and Cloud Storage Backend for team/remote setups.

Start the MCP server:
dinobase serve

The server uses stdio transport and exposes seven tools: query, list_sources, describe, confirm, confirm_batch, cancel, and refresh.
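Over the stdio transport, tool invocations travel as JSON-RPC 2.0 `tools/call` messages. As a sketch, a call to the query tool might look like the following; the `sql` argument name is an assumption, not confirmed by this page (see the MCP Tools reference for actual parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": {
      "sql": "SELECT count(*) FROM stripe.customers"
    }
  }
}
```

MCP clients such as Claude Desktop construct these messages for you; you only see the tool name and arguments in the client UI.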

Keep data fresh while the server runs:

dinobase serve --sync --sync-interval 30m

See Syncing & Scheduling for more sync options.

The dinobase install command writes the MCP config directly to the right file for your client:

dinobase install claude-code # runs: claude mcp add dinobase -- dinobase serve
dinobase install claude-desktop # writes to Claude Desktop config file
dinobase install cursor # writes .cursor/mcp.json in current directory

Safe to run multiple times — it merges the dinobase entry rather than overwriting the whole file.

Any client that supports the MCP stdio transport can connect using the same server entry:

{
  "command": "dinobase",
  "args": ["serve"]
}
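Most stdio clients nest this entry under a named key in an `mcpServers` map. For example, in a Claude Desktop-style config file the full entry might look like this (the surrounding structure varies by client):

```json
{
  "mcpServers": {
    "dinobase": {
      "command": "dinobase",
      "args": ["serve"]
    }
  }
}
```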

If dinobase isn’t on your PATH, use the full Python path:

{
  "command": "/path/to/python",
  "args": ["-m", "dinobase.mcp.server"]
}

When the MCP server starts, it provides dynamic instructions based on what data is loaded. The agent sees something like:

You have access to a Dinobase database -- business data synced from
multiple sources into a single SQL database (DuckDB dialect).
Connected sources:
stripe: customers, subscriptions, charges, invoices (12,450 rows total)
hubspot: contacts, companies, deals (8,320 rows total)
How to work with this database:
1. Use list_sources to see what data is available
2. Use describe on a table to see columns, types, annotations, and sample data
3. Use query to run SQL (DuckDB dialect, reference tables as schema.table)

A typical agent session:

1. The agent calls list_sources to see what’s available
2. The agent calls describe on relevant tables to understand columns and types
3. The agent writes and executes SQL via query
4. For mutations (UPDATE/INSERT), query returns a preview; the agent then calls confirm to execute
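The mutation step is a pair of tool calls: query returns a preview rather than executing, and a follow-up confirm call references that preview. A hypothetical confirm call is sketched below; the argument name `id` and the identifier format are assumptions (check the MCP Tools reference for the real parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "confirm",
    "arguments": {
      "id": "preview-7f3a"
    }
  }
}
```

This preview/confirm split means the agent never executes a write in a single step; cancel discards a pending preview instead.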

This is the same workflow whether the agent uses MCP or CLI — the data and query engine are identical.

See the full MCP Tools reference for parameter details.

Both interfaces use the same query engine and data. The difference:

|  | MCP | CLI |
| --- | --- | --- |
| Transport | stdio tool calls | bash commands |
| Best for | Claude Desktop, Cursor | Claude Code, Aider |
| Token efficiency | Standard | 27% fewer tokens |
| Output format | JSON (always) | JSON or --pretty |

For shell-capable agents, the CLI is more token-efficient. For tool-calling agents, MCP is the natural fit.