MCP Tools vs Resources vs Prompts: Which Do You Actually Need?

Model Context Protocol (MCP) servers can expose three different kinds of capabilities: tools, resources, and prompts. Most tutorials lump them together or focus on tools and gloss over the other two. That is a mistake — each primitive solves a different problem, and using the right one makes your server feel native instead of clunky.

This article explains all three, shows what they look like in code, and gives concrete rules for picking the right primitive for any capability you want to expose.

The 30-second summary

Primitive  | The model…                           | Best for
Tools      | …decides to invoke an action         | Doing things — queries, writes, computations
Resources  | …reads data the user attaches        | Static or semi-static context — files, docs, records
Prompts    | …executes a user-triggered template  | Standardised workflows — "code review," "summarise"

Think of it this way: tools are verbs the LLM can use, resources are nouns the user can attach, prompts are scripts the user can run.

Tools — actions the LLM invokes autonomously

A tool is a named, schema-validated function the LLM can choose to call. The LLM decides when to call it and with what arguments based on the tool's description and the user's intent.

Minimal example — a weather tool:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

// The examples in this article assume a server instance like this:
const server = new McpServer({ name: 'weather-server', version: '1.0.0' });

server.tool(
  'get_weather',
  'Get current weather for any city',
  { city: z.string().describe('City name, e.g. "Chennai"') },
  async ({ city }) => {
    const data = await fetchWeather(city);
    return { content: [{ type: 'text', text: `It is ${data.temp}°C in ${city}.` }] };
  }
);

Key traits of tools:

  • Autonomously invoked — the user never explicitly says "call get_weather." They say "is it raining in Chennai?" and the LLM picks the tool.
  • Argument-driven — every call comes with structured arguments matching the schema.
  • Stateless from the host's perspective — each call is independent.

Use tools when the LLM needs to do something: run a query, write a file, hit an API, perform a calculation.
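Under the hood, a tool invocation is a JSON-RPC `tools/call` request from the client and a content result back from the server. A simplified sketch of the message shapes (trimmed from the MCP spec; check your protocol version for the full set of fields):

```typescript
// Client → server: what gets sent when the LLM picks get_weather.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'get_weather',
    arguments: { city: 'Chennai' }, // validated against the tool's input schema
  },
};

// Server → client: what the tool handler's return value becomes.
const result = {
  content: [{ type: 'text', text: 'It is 31°C in Chennai.' }],
};

console.log(request.method, '→', result.content[0].text);
```

The SDK handles this plumbing for you; the point is that every tool call is a discrete, schema-checked request, which is why each one is stateless from the host's perspective.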

Resources — data the user attaches as context

A resource is a piece of read-only data the server exposes with a stable URI. The user (not the LLM) decides which resources to attach to a conversation; the LLM then reads them as context.

Example — exposing the contents of a project's README as a resource:

import { readFile } from 'node:fs/promises';

server.resource(
  'project-readme',                   // unique resource ID
  'file:///projects/myapp/README.md', // URI
  {
    name: 'MyApp README',
    description: 'The README for the MyApp project',
    mimeType: 'text/markdown',
  },
  async () => {
    const text = await readFile('/projects/myapp/README.md', 'utf8');
    return { contents: [{ uri: 'file:///projects/myapp/README.md', mimeType: 'text/markdown', text }] };
  }
);

In Claude Desktop, the user clicks a resource in the MCP panel to attach it. The text becomes part of the conversation context — the LLM can read it but does not actively decide to fetch it.

Key traits of resources:

  • User-attached — the user picks them, not the LLM.
  • Stable URIs — file:///, postgres://, https://, or custom schemes like notion://.
  • Read-only — resources do not have side effects.
  • Can subscribe — clients can subscribe to resource changes for live updates.

Use resources when there is canonical data the user might want to reference — files, database rows, design specs, documentation pages.
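Resources do not have to be one-off, fixed URIs: they can also be templated, so one registration covers a whole family like customer://{id} (the TypeScript SDK ships a ResourceTemplate helper for this). The matching itself is simple. A minimal, hypothetical sketch of the idea — not the SDK's implementation, and it does not escape regex metacharacters in the literal parts of the template:

```typescript
// Match a concrete URI like "customer://12345" against a template like
// "customer://{id}" and extract the variables. Returns null on no match.
function matchUriTemplate(template: string, uri: string): Record<string, string> | null {
  const names: string[] = [];
  const pattern = template.replace(/\{(\w+)\}/g, (_, name) => {
    names.push(name);
    return '([^/]+)'; // capture each {variable} segment
  });
  const match = uri.match(new RegExp(`^${pattern}$`));
  if (!match) return null;
  const vars: Record<string, string> = {};
  names.forEach((name, i) => {
    vars[name] = match[i + 1];
  });
  return vars;
}

matchUriTemplate('customer://{id}', 'customer://12345'); // → { id: '12345' }
```

When the user attaches customer://12345, the server resolves the template, extracts the id, and returns that customer's data as the resource contents.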

Resources are not just "tools that read data"

A common mistake: building a read_readme tool when you should be exposing the README as a resource.

Read approach               | Pros                             | Cons
Tool (read_readme)          | LLM can fetch autonomously       | Burns a tool call every conversation; LLM may forget
Resource (attached README)  | Loaded once, persistent context  | User has to attach it

If the data is something the user knows they want in the conversation, make it a resource. If the LLM should fetch it on demand based on intent, make it a tool.

Prompts — user-triggered templates

A prompt is a named, parameterised template the user can invoke to start a structured conversation. Unlike tools (which the LLM invokes) and resources (which the user attaches), prompts are user-invoked workflows.

Example — a code review prompt:

server.prompt(
  'code-review',
  'Review a pull request for security, performance, and style.',
  {
    repository: z.string().describe('Repo in owner/name format'),
    // Prompt arguments travel as strings in MCP; parse inside the handler if needed.
    pull_number: z.string().describe('Pull request number'),
  },
  async ({ repository, pull_number }) => {
    const pr = await fetchPullRequest(repository, pull_number);
    return {
      messages: [{
        role: 'user',
        content: {
          type: 'text',
          text: `Review pull request #${pull_number} on ${repository}.\n\nDescription:\n${pr.body}\n\nChanged files:\n${pr.files.join('\n')}\n\nFocus on: security issues, N+1 queries, missing error handling, and any deviation from our style guide.`,
        },
      }],
    };
  }
);

In the Claude Desktop UI, the user picks code-review from a menu, fills in the repo and PR number, and the conversation starts pre-populated with the templated message.

Key traits of prompts:

  • User-invoked — the user explicitly picks the prompt; the LLM does not.
  • Templated — produces a starting message (or sequence) with variable substitution.
  • Workflow-oriented — designed to bootstrap a common task with consistent context and instructions.

Use prompts when there is a standardised workflow people on your team should run the same way every time — code reviews, incident postmortems, support ticket triage, weekly status drafts.
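On the wire, invoking a prompt is a `prompts/get` request: the client sends the user's filled-in arguments, and the server returns the rendered messages that seed the conversation. A simplified sketch (shapes per the MCP spec, trimmed for brevity; note that prompt arguments are transmitted as strings):

```typescript
// Client → server: the user chose "code-review" and filled in the fields.
const request = {
  jsonrpc: '2.0',
  id: 2,
  method: 'prompts/get',
  params: {
    name: 'code-review',
    arguments: { repository: 'acme/webapp', pull_number: '42' },
  },
};

// Server → client: the templated messages that pre-populate the chat.
const result = {
  messages: [
    {
      role: 'user',
      content: { type: 'text', text: 'Review pull request #42 on acme/webapp. …' },
    },
  ],
};
```

Unlike tools/call, nothing here is initiated by the LLM — the exchange happens before the model sees anything, which is what makes prompts a user-controlled surface.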

A side-by-side example

Imagine you are building an MCP server for a customer-support team. The same underlying capability could be exposed as any of the three — but the right choice depends on usage:

Capability               | If exposed as a Tool                                               | If exposed as a Resource                                   | If exposed as a Prompt
Customer profile lookup  | get_customer({ id }) — LLM fetches when context suggests it        | customer://12345 — user pins one specific customer to the chat | n/a — not a workflow
Ticket triage            | n/a — not a single action                                          | n/a — too dynamic                                          | triage-ticket({ ticketId }) — bootstraps the standard triage sequence
Search past tickets      | search_tickets({ query }) — LLM searches based on the user's question | n/a — too dynamic to enumerate                          | n/a — not a single workflow

The right answer is often all three — a comprehensive server exposes tools for active capabilities, resources for canonical data, and prompts for repeatable workflows.

When you only have time to build one

If you can only implement one of the three primitives, build tools first. They are the most universally supported by clients, the most flexible, and the easiest to test.

Resources are second priority — useful when you have well-defined, stable data sources users will repeatedly reference.

Prompts are third priority — high-leverage for teams with recurring workflows, less so for individual developers using AI assistants ad hoc.

A decision flowchart

When designing a new capability, ask:

  1. Does the LLM decide when to use this? → Tool.
  2. Is this static or semi-static data the user explicitly wants in context? → Resource.
  3. Is this a multi-step workflow the user runs the same way each time? → Prompt.
  4. Is it more than one of the above? → Expose it as multiple primitives.

That last point matters. A customers capability might warrant a search_customers tool for ad-hoc lookups, a customer://{id} resource for pinning specific accounts, and a monthly-review prompt for the recurring report your account managers run.
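The flowchart fits in a few lines of code. A toy helper — purely illustrative, with made-up field names — that maps the four questions to primitives:

```typescript
type Primitive = 'tool' | 'resource' | 'prompt';

interface Capability {
  llmDecidesWhen: boolean;     // Q1: does the LLM choose when to use it?
  userAttachesData: boolean;   // Q2: static-ish data the user wants in context?
  repeatableWorkflow: boolean; // Q3: a workflow run the same way each time?
}

// Q4: a capability can legitimately map to more than one primitive.
function choosePrimitives(cap: Capability): Primitive[] {
  const picks: Primitive[] = [];
  if (cap.llmDecidesWhen) picks.push('tool');
  if (cap.userAttachesData) picks.push('resource');
  if (cap.repeatableWorkflow) picks.push('prompt');
  return picks;
}

// The "customers" capability above ticks all three boxes:
choosePrimitives({ llmDecidesWhen: true, userAttachesData: true, repeatableWorkflow: true });
// → ['tool', 'resource', 'prompt']
```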

A note on host support

Not every MCP client supports all three primitives equally well:

  • Tools are universally supported.
  • Resources are supported in Claude Desktop, Cursor, and most major clients.
  • Prompts are supported in Claude Desktop with first-class UI; support in other hosts varies.

If your server is meant to work everywhere, lean on tools as the primary surface and add resources or prompts as enhancements rather than requirements.
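You can see which primitives a client will even ask about in the capabilities object a server advertises during the MCP initialize handshake. A sketch of that object (field names as I understand the spec; double-check against your SDK version):

```typescript
// Declaring a top-level key signals support for that primitive;
// the nested flags opt into optional features.
const serverCapabilities = {
  tools: { listChanged: true },                       // can notify when the tool list changes
  resources: { subscribe: true, listChanged: true },  // clients may subscribe to resource updates
  prompts: { listChanged: true },
};
```

A client that does not understand resources or prompts simply never requests them, which is why leaning on tools keeps the server usable everywhere.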

Conclusion

MCP's three primitives are not three names for the same thing — they are three distinct shapes for three distinct kinds of capability:

  • Tools are verbs the LLM picks.
  • Resources are nouns the user attaches.
  • Prompts are scripts the user runs.

Getting the choice right makes your server feel native — the LLM uses tools fluidly, the user finds resources where they expect them, and prompts give power users one-click access to your team's standard workflows.

Getting it wrong leads to clunky servers: tools that should have been resources (forcing redundant fetches every conversation) or workflow logic stuffed into tool descriptions (asking the LLM to remember an N-step process). Pick the right primitive and your server gets out of the way.

Try it yourself

A single server exposing all three primitives lets the same capability surface in three ways:

You: [Attaches the resource customer://12345, then asks:] Run the monthly-review prompt for this customer.

Claude (invoked prompt monthly-review, called search_tickets): Monthly review for Acme Corp (Customer #12345):

Account health: Stable. MRR up 12% month-over-month.
Open tickets: 3 (1 high priority, 2 low). Oldest is 4 days.
Trend: Ticket volume down 30% versus last month — most recent issues are feature requests, not bugs.
Action items: Schedule a check-in on ticket #874 (feature request for SSO). No escalations needed.

Three primitives collaborating: the user pinned a resource, the user invoked a prompt (which set up the templated context and instructions), and Claude called a tool (search_tickets) inside that prompt to get fresh data. That is the full triad working together.
