One of the most useful things you can do with Model Context Protocol (MCP) is give your AI assistant direct, structured access to your database. Instead of pasting query results into a chat or building a brittle custom integration, you wire up an MCP server that lets the assistant explore your schema and run queries safely.
This guide walks through connecting Postgres to Claude (Desktop or Code) end-to-end. We will cover the off-the-shelf approach, the security model you should adopt, and how to build a custom server when the official one is not enough.
What you will have at the end
After following this tutorial, you will be able to type things like:
"How many orders did we process last week, grouped by status?"
And Claude will introspect your schema, write the SQL, run it against your Postgres database, and answer.
Option A: Use the official Postgres MCP server
For most read-only exploration use cases, you do not need to build anything. The official @modelcontextprotocol/server-postgres server handles the common case well.
Setup
Add this to your claude_desktop_config.json (or the equivalent for Claude Code):
{
"mcpServers": {
"postgres": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgres://readonly_user:password@localhost:5432/mydb"
]
}
}
}
The connection string is passed as a single argument. Restart Claude Desktop, and the new server appears.
What you get
The official server exposes:
- A `query` tool — runs an arbitrary SQL `SELECT` against the database
- Resources for each table — Claude can attach a table's schema as conversation context
- Read-only by default — the server refuses anything that is not a `SELECT`
A conversation looks like:
You: What are the top 5 customers by total order value?
Claude: Let me check the schema first… [calls `list_resources`, attaches the `orders` and `customers` tables] … [calls `query` with a SQL statement] … Here are the top 5: 1. ABC Corp ($45,200), 2. …
The LLM handles SQL generation, error recovery, and result interpretation. Your job ends at the connection string.
The security model (read this part)
Giving an LLM direct database access sounds scary because it is. The mitigation strategy has three layers, all of which you should apply.
Layer 1: A dedicated read-only database user
Never connect with your application's main user. Create a Postgres user with only the privileges the LLM should have:
CREATE USER ai_readonly WITH PASSWORD 'strong-random-password';
GRANT CONNECT ON DATABASE mydb TO ai_readonly;
GRANT USAGE ON SCHEMA public TO ai_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO ai_readonly;
-- For future tables:
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO ai_readonly;
If the LLM runs `DELETE FROM orders`, whether by accident or via malicious prompt injection, Postgres rejects it. Defense in depth.
Layer 2: Hide sensitive columns
For tables containing PII, credentials, or anything regulated, create read-only views that omit those columns and grant access only to the views:
CREATE VIEW orders_for_ai AS
SELECT id, status, total, currency, created_at, customer_id
FROM orders;
REVOKE SELECT ON orders FROM ai_readonly;
GRANT SELECT ON orders_for_ai TO ai_readonly;
The LLM never sees raw orders — it sees the sanitized view. Email addresses, card details, internal notes all stay invisible.
Layer 3: Statement timeout and connection limits
Prevent runaway queries:
ALTER USER ai_readonly SET statement_timeout = '10s';
ALTER USER ai_readonly SET lock_timeout = '5s';
And at the connection-pool level, cap concurrent connections for this user via your Postgres configuration or a pooler like PgBouncer.
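If you build your own server (Option B below) with node-postgres, the same caps can be mirrored in application code: `max`, `connectionTimeoutMillis`, and `statement_timeout` are standard `pg.Pool`/client config options. A sketch of the options object, with illustrative values:

```javascript
// Options object for node-postgres (`new pg.Pool(poolOptions)`).
// Values are illustrative; tune them to your workload.
const poolOptions = {
  connectionString: process.env.DATABASE_URL,
  max: 3,                         // cap concurrent connections for the AI user
  idleTimeoutMillis: 30_000,      // recycle connections idle for 30s
  connectionTimeoutMillis: 5_000, // fail fast when the pool is exhausted
  statement_timeout: 10_000,      // client-side twin of the ALTER USER setting (ms)
};
```

This does not replace the server-side limits; it just means a runaway query is cut off even if someone connects with a user that lacks the `ALTER USER` settings.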
With these three layers in place, the worst the LLM can do is read whatever the views explicitly expose. No writes, no DDL, no time bombs.
Option B: Build your own Postgres MCP server
The official server is great for ad-hoc analysis, but you may want more control: domain-specific tools, write access with safeguards, or custom output formatting.
Here is a minimal custom server with three tools — list_tables, describe_table, and safe_query:
#!/usr/bin/env node
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';
import pg from 'pg';
const { Pool } = pg;
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const server = new McpServer({ name: 'pg-helper', version: '1.0.0' });
server.tool(
'list_tables',
'List all tables in the public schema.',
{},
async () => {
const { rows } = await pool.query(`
SELECT tablename FROM pg_tables WHERE schemaname = 'public' ORDER BY tablename
`);
return {
content: [{ type: 'text', text: rows.map(r => `- ${r.tablename}`).join('\n') }],
};
}
);
server.tool(
'describe_table',
'Show columns and types for a table.',
{ name: z.string() },
async ({ name }) => {
const { rows } = await pool.query(`
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public' AND table_name = $1
ORDER BY ordinal_position
`, [name]);
if (!rows.length) {
return { content: [{ type: 'text', text: `Table "${name}" not found.` }], isError: true };
}
const lines = rows.map(r => `${r.column_name} (${r.data_type})${r.is_nullable === 'NO' ? ' NOT NULL' : ''}`);
return { content: [{ type: 'text', text: `Columns of ${name}:\n${lines.join('\n')}` }] };
}
);
server.tool(
'safe_query',
'Run a read-only SELECT query. Other statement types are rejected.',
{
sql: z.string().describe('A SQL SELECT statement'),
limit: z.number().int().min(1).max(500).default(50),
},
async ({ sql, limit }) => {
const trimmed = sql.trim().replace(/;+\s*$/, ''); // strip trailing semicolons so the subquery wrapper below stays valid
if (!/^select\b/i.test(trimmed) || /\b(insert|update|delete|drop|alter|truncate|grant|revoke)\b/i.test(trimmed)) {
return { content: [{ type: 'text', text: 'Only SELECT statements are allowed.' }], isError: true };
}
const wrapped = `SELECT * FROM (${trimmed}) sub LIMIT ${limit}`;
const { rows } = await pool.query(wrapped);
return {
content: [{
type: 'text',
text: rows.length
? `Returned ${rows.length} rows:\n\n${JSON.stringify(rows, null, 2)}`
: 'Query returned no rows.',
}],
};
}
);
await server.connect(new StdioServerTransport());
Key points in this custom server:
- `list_tables` and `describe_table` are concrete, intent-named tools the LLM uses for discovery — much friendlier than asking the LLM to write `SELECT * FROM information_schema.tables`.
- `safe_query` explicitly rejects anything that is not a `SELECT`. The regex is intentionally strict.
- An auto-applied `LIMIT` caps result-set size so the LLM cannot accidentally pull a million rows into its context.
- The connection pool is shared across all tool calls — far better than opening a new connection per call.
Note the regex check is a belt; the read-only database user is the suspenders. Use both.
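Pulled out of the handler, the guard in `safe_query` is just a pure function (hypothetical name `isSafeSelect`), which makes its behavior easy to unit-test:

```javascript
// Hypothetical standalone version of the safe_query guard. Deliberately
// strict: reject anything that does not start with SELECT, and any SELECT
// that mentions a write/DDL keyword anywhere in the statement.
function isSafeSelect(sql) {
  const startsWithSelect = /^\s*select\b/i.test(sql);
  const containsWriteKeyword =
    /\b(insert|update|delete|drop|alter|truncate|grant|revoke)\b/i.test(sql);
  return startsWithSelect && !containsWriteKeyword;
}

console.log(isSafeSelect('SELECT * FROM orders'));                 // true
console.log(isSafeSelect('DROP TABLE orders'));                    // false
console.log(isSafeSelect('SELECT 1; DELETE FROM orders'));         // false
console.log(isSafeSelect('WITH t AS (SELECT 1) SELECT * FROM t')); // false: CTEs start with WITH
```

The strictness cuts both ways: a column literally named `updated` passes (`\bupdate\b` does not match `updated`), but legitimate CTE queries are rejected because they begin with `WITH`. That trade-off is acceptable here because the LLM can always rewrite a CTE as a subquery.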
Connecting your custom server to Claude
Publish to npm (npm publish) and use:
{
"mcpServers": {
"pg-helper": {
"command": "npx",
"args": ["-y", "pg-helper-mcp"],
"env": { "DATABASE_URL": "postgres://ai_readonly:password@localhost:5432/mydb" }
}
}
}
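For `npx -y pg-helper-mcp` to work, the published package needs a `bin` entry pointing at the script (the `#!/usr/bin/env node` shebang in the server file makes it executable). A minimal `package.json` sketch; the package name and version ranges are assumptions:

```json
{
  "name": "pg-helper-mcp",
  "version": "1.0.0",
  "type": "module",
  "bin": { "pg-helper-mcp": "./server.js" },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "pg": "^8.11.0",
    "zod": "^3.23.0"
  }
}
```

`"type": "module"` is required because the server uses ESM `import` syntax.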
Or point at a local file during development:
{
"mcpServers": {
"pg-helper": {
"command": "node",
"args": ["/absolute/path/to/server.js"],
"env": { "DATABASE_URL": "postgres://ai_readonly:password@localhost:5432/mydb" }
}
}
}
Restart Claude Desktop and try a query.
When write access is required
Sometimes the AI assistant genuinely needs to write — creating an issue from a conversation, updating a setting, recording a decision. Two patterns:
Pattern 1: Specific write tools, not a generic `safe_write`.
Do not expose an UPDATE tool that takes arbitrary SQL. Expose purpose-built tools:
server.tool('mark_order_shipped',
'Mark an order as shipped, providing a tracking number.',
{ orderId: z.number().int().positive(), trackingNumber: z.string() },
async ({ orderId, trackingNumber }) => {
const { rowCount } = await pool.query(
`UPDATE orders SET status = 'shipped', tracking_number = $2 WHERE id = $1`,
[orderId, trackingNumber]
);
return { content: [{ type: 'text', text: rowCount ? `Order ${orderId} marked as shipped.` : `No order found with id ${orderId}.` }] };
}
);
Each write tool has a narrow, well-understood blast radius. The LLM cannot drop a table because there is no "drop a table" tool.
Pattern 2: Confirmation step for sensitive writes.
For any destructive or irreversible action, structure the tool to require explicit confirmation:
server.tool('delete_order',
'Delete an order. Requires confirmation token from preview_delete_order.',
{ orderId: z.number().int().positive(), confirmationToken: z.string() },
async ({ orderId, confirmationToken }) => {
// Verify token matches the one issued by preview_delete_order
if (!verifyDeleteToken(orderId, confirmationToken)) {
return { content: [{ type: 'text', text: 'Invalid or expired confirmation token.' }], isError: true };
}
await pool.query('DELETE FROM orders WHERE id = $1', [orderId]);
return { content: [{ type: 'text', text: `Order ${orderId} deleted.` }] };
}
);
A companion preview_delete_order tool returns the row that would be deleted plus a single-use token. The LLM has to present the preview to the user, get confirmation, then call the delete with the token. Defense against blind prompt-injection-driven deletions.
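One way to implement the token pair is a short-lived HMAC over the order id and issue time. This is a sketch with hypothetical helper names (`issueDeleteToken`, `verifyDeleteToken`), not the only design; for strict single-use semantics you would also record consumed tokens server-side:

```javascript
import { createHmac, timingSafeEqual } from 'node:crypto';

const SECRET = process.env.DELETE_TOKEN_SECRET ?? 'dev-only-secret';
const TOKEN_TTL_MS = 5 * 60 * 1000; // tokens expire after 5 minutes

// Issued by preview_delete_order alongside the row preview.
function issueDeleteToken(orderId, now = Date.now()) {
  const payload = `${orderId}.${now}`;
  const mac = createHmac('sha256', SECRET).update(payload).digest('hex');
  return `${payload}.${mac}`;
}

// Checked by delete_order before running the DELETE.
function verifyDeleteToken(orderId, token, now = Date.now()) {
  const [id, issuedAt, mac] = token.split('.');
  if (Number(id) !== orderId) return false;                // token is for a different order
  if (now - Number(issuedAt) > TOKEN_TTL_MS) return false; // expired
  const expected = createHmac('sha256', SECRET).update(`${id}.${issuedAt}`).digest('hex');
  const a = Buffer.from(mac ?? '', 'hex');
  const b = Buffer.from(expected, 'hex');
  return a.length === b.length && timingSafeEqual(a, b);   // constant-time comparison
}
```

Because the token is bound to a specific `orderId` and expires quickly, an injected prompt cannot reuse a stale token or redirect it at a different row.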
Common issues and fixes
| Symptom | Cause | Fix |
|---|---|---|
| Server starts but tools do not appear in Claude | Wrong path or wrong Node version | Check absolute path; ensure Node 18+ |
| `connection refused` | Postgres not accepting connections | Check `pg_hba.conf` and that Postgres is listening on the right interface |
| `password authentication failed` | Wrong creds or wrong user | Use the dedicated `ai_readonly` user; do not reuse app creds |
| Queries hang forever | No statement timeout | Set `statement_timeout` on the user |
| LLM keeps asking about wrong tables | Schema not exposed clearly | Add tool descriptions referencing your actual table names |
Conclusion
A Postgres MCP server is the single highest-leverage integration most teams can add to their AI workflow. Once Claude can read your real data, the quality of every analytical answer goes up by an order of magnitude.
The official server gets you 80% there in five minutes. A custom server unlocks the last 20% — write access, domain-specific tools, exact output formatting. In both cases, the security model matters more than the code: dedicated read-only user, hidden views for sensitive columns, statement timeouts. Get those three right and you can confidently expose your database to an LLM without losing sleep.
Try it yourself
Once the Postgres MCP server is connected, analytical questions become one-liners:
You: How many orders did we process last week, grouped by status?

Claude: [calls `list_tables`, `describe_table`, then `safe_query`] Last week (May 5–11), 168 orders were created in total:
• shipped: 142
• pending: 18
• cancelled: 6
• refunded: 2
That is an 85% ship rate — slightly above your trailing four-week average. The 6 cancellations all happened on Friday — want me to pull those specific orders?
Three tool calls in a row: schema discovery, column inspection, then the actual SELECT. The LLM chained them autonomously because each tool description told it what to do.