Updated March 2026
MCP vs function calling -- which do I need? If you are building AI agents that use tools, you have probably hit this question. Both let models call external functions. Both use JSON schemas to describe inputs. But they solve different problems at different layers of the stack, and picking the wrong one costs you months of rework. This guide breaks down exactly how they differ, when each one wins, and how to migrate from one to the other.
Function calling is a feature built into model APIs like OpenAI, Anthropic, and Google Gemini. You define a set of functions with JSON Schema descriptions in your API request. The model reads your prompt, decides which function to call, and returns structured JSON arguments. Your application then executes the function locally and feeds the result back to the model.
// Function calling flow (OpenAI-style)
1. You send: messages + tools=[{name: "get_weather", parameters: {...}}]
2. Model returns: tool_calls=[{function: "get_weather", arguments: '{"city":"NYC"}'}]
3. Your app executes: getWeather("NYC") -> {temp: 72, condition: "sunny"}
4. You send result back to model
5. Model generates final response

The key characteristic: everything is tightly coupled to a single model provider. Your function definitions live in your application code. The model generates the arguments. Your app executes the function. There is no external server, no discovery protocol, no portability between models without rewriting the integration layer.
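The app-side half of this loop (steps 2-4) can be sketched in a few lines. This is an illustrative sketch, not a specific provider SDK: the handler map, `executeToolCall`, and the hardcoded weather result are all stand-ins for your real implementation.

```typescript
// Dispatch the model's tool_call to a local function (steps 2-4 of the flow).
type ToolCall = { function: string; arguments: string };

// Stand-in handlers; in a real app these call your business logic.
const handlers: Record<string, (args: any) => unknown> = {
  get_weather: (args: { city: string }) => ({ temp: 72, condition: "sunny" }),
};

function executeToolCall(call: ToolCall): unknown {
  // The model returns arguments as a JSON string, so parse before dispatching.
  const args = JSON.parse(call.arguments);
  const handler = handlers[call.function];
  if (!handler) throw new Error(`unknown tool: ${call.function}`);
  return handler(args);
}

const result = executeToolCall({
  function: "get_weather",
  arguments: '{"city":"NYC"}',
});
// `result` is what you serialize and send back to the model in step 4.
```

Note that the schema, the dispatch table, and the execution all live in your application: change any of them and you redeploy.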
MCP (Model Context Protocol) is an open protocol for connecting AI agents to external tool servers. Instead of defining tools inside your API call, tools live on separate MCP servers. Any client that speaks MCP can call tools/list to discover available tools at runtime, then call any tool by name. The server handles execution and returns results.
// MCP flow
1. Client calls: tools/list -> [{name: "get_weather", inputSchema: {...}}, ...]
2. Agent picks tool based on user intent
3. Client calls: tools/call {name: "get_weather", arguments: {city: "NYC"}}
4. MCP server executes and returns: {temp: 72, condition: "sunny"}
5. Agent uses result to respond

The key characteristic: everything is loosely coupled. Tools are defined on the server, discovered dynamically, and accessible to any model or client. You can swap models without touching your tool definitions. Multiple agents can share the same MCP server. New tools appear automatically when the server adds them -- no client code changes needed.
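Under the hood, steps 1 and 3 of that flow are JSON-RPC 2.0 messages. The shapes below follow the flow above; the `id` values and the example city are illustrative.

```typescript
// Step 1: discover what the server currently exposes.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Step 3: invoke a discovered tool by name.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "get_weather", arguments: { city: "NYC" } },
};

// The tools/list response carries [{ name, description, inputSchema }, ...];
// the tools/call response carries the tool's result for the agent to use.
```

In practice an MCP client library builds these frames for you; the point is that tool definitions arrive over the wire at runtime rather than living in your request payload.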
Learn more about the protocol in our What is MCP? guide.
| Dimension | Function Calling | MCP |
|---|---|---|
| Architecture | Embedded in model API request | External server with JSON-RPC protocol |
| Tool Discovery | Static -- you define tools per request | Dynamic -- client calls tools/list at runtime |
| Portability | Locked to one model provider's API format | Any model or client that speaks MCP |
| Multi-Model Support | Requires rewriting tool defs per provider | One server, many clients -- zero rewrites |
| Ecosystem | Model-specific (OpenAI, Anthropic, Google) | Open standard, 3,000+ servers by early 2026 |
| Production Readiness | Mature for single-model apps | Production-proven for multi-tool, multi-agent systems |
| Setup Complexity | Low -- add tools array to API call | Medium -- run MCP server + configure client |
| Schema Validation | Model-side only, no server enforcement | Server validates inputs before execution |
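The last table row (server-side schema validation) is worth a concrete sketch. An MCP server can reject malformed arguments before the tool ever runs; this hand-rolled checker is illustrative only, since real servers typically lean on a JSON Schema validation library.

```typescript
// Minimal server-side argument check against a flat JSON-Schema-like shape.
type Schema = {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
};

function validate(schema: Schema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  // Reject calls missing required fields.
  for (const key of schema.required ?? []) {
    if (!(key in args)) errors.push(`missing required field: ${key}`);
  }
  // Reject unknown fields and primitive type mismatches.
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected field: ${key}`);
    else if (typeof value !== prop.type) errors.push(`${key}: expected ${prop.type}`);
  }
  return errors;
}
```

With function calling, by contrast, nothing sits between the model's generated JSON and your executing code unless you write that check yourself.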
This is the core distinction that everything else follows from. With function calling, you hardcode your tool definitions into every API request. If you add a new tool, you update your application code, redeploy, and hope every client picks up the change. The model only knows about tools you explicitly include in the request.
With MCP, tools are discovered at runtime. The client calls tools/list and gets back whatever tools the server currently exposes. Add a tool to the server, and every connected client sees it immediately. Remove a tool, and it disappears from discovery. No client redeployment. No version mismatches. This is the difference between a static menu and a living catalog.
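The "living catalog" behavior can be simulated without any network: register a tool server-side and it appears in the next discovery call. The registry class and tool names here are illustrative, not the MCP SDK's API.

```typescript
// Self-contained sketch: server-side registration drives client-side discovery.
type Tool = { name: string; description: string; inputSchema: object };

class ToolRegistry {
  private tools = new Map<string, Tool>();
  register(tool: Tool) { this.tools.set(tool.name, tool); }
  unregister(name: string) { this.tools.delete(name); }
  // What a tools/list request would return right now.
  list(): Tool[] { return Array.from(this.tools.values()); }
}

const registry = new ToolRegistry();
registry.register({ name: "get_weather", description: "Current weather", inputSchema: {} });
// Clients that call tools/list now see one tool...
registry.register({ name: "get_forecast", description: "5-day forecast", inputSchema: {} });
// ...and the very next tools/list call returns two, with no client redeploy.
```

Removing a tool works the same way in reverse: `unregister` makes it vanish from the next discovery call.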
Function calling is not obsolete. It is the right choice for specific scenarios where simplicity outweighs flexibility.
MCP pulls ahead the moment your requirements grow beyond a simple prototype.
See real examples in our Best MCP Servers for Claude Code 2026 guide.
If you already have function calling working, you do not need to rewrite everything. The migration is incremental.
1. Install the @modelcontextprotocol/sdk (TypeScript) or mcp (Python) package.
2. Register each function as an MCP tool with the same schema.
3. Point your client at tools/list on the MCP server. The schemas are identical -- your model sees the same tool definitions.

Most teams complete this migration in 1-3 days for 10-20 functions. The hardest part is not the code -- it is deciding which tools belong on which server.
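The claim that schemas carry over unchanged can be made concrete with a small conversion. The field shapes below follow the OpenAI-style and MCP definitions shown earlier in this guide; the `toMcpTool` helper is an illustrative sketch, not part of either SDK.

```typescript
// The same JSON Schema moves verbatim from a function-calling "tools"
// array entry to an MCP tool definition -- only the wrapping differs.
type OpenAITool = {
  type: "function";
  function: { name: string; description?: string; parameters: object };
};
type McpTool = { name: string; description?: string; inputSchema: object };

function toMcpTool(t: OpenAITool): McpTool {
  return {
    name: t.function.name,
    description: t.function.description,
    inputSchema: t.function.parameters, // schema reused without modification
  };
}

const legacy: OpenAITool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get current weather",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};
const migrated = toMcpTool(legacy);
```

Because the schema body is untouched, the model sees the same contract before and after migration; only the transport around it changes.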
For a deeper look at chaining tools together after migration, see our Tool Chaining Guide.
Yes. This is the approach most production systems take. Use MCP as the primary tool layer for dynamic, shared, and production-grade tools. Keep function calling as a lightweight fallback for model-specific features or ultra-simple use cases that do not justify a server.
// Hybrid approach
const mcpTools = await mcpClient.listTools(); // Dynamic tools from MCP server
const localTools = [{ name: "set_reminder", ... }]; // Simple model-specific function
const allTools = [...mcpTools, ...localTools];
// Pass allTools to your model -- it sees both sources as identical tool definitions

The model does not care where a tool came from. It sees a name, a description, and a schema. Whether that tool is an MCP server endpoint or a local function definition is invisible to the model. This makes the hybrid approach seamless.
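The one place the source of a tool does matter is your dispatch layer: when the model calls a tool, you route it to the MCP client if the name came from tools/list, and to a local handler otherwise. A hedged sketch, where `localHandlers`, the synchronous `mcpCall` parameter, and all tool names are illustrative:

```typescript
// Route a model-issued tool call to its owning source.
const localHandlers: Record<string, (args: any) => unknown> = {
  set_reminder: (args: { text: string }) => ({ ok: true, reminder: args.text }),
};

function dispatch(
  name: string,
  args: object,
  mcpToolNames: Set<string>,           // names discovered via tools/list
  mcpCall: (name: string, args: object) => unknown, // wraps the MCP client in practice
): unknown {
  if (mcpToolNames.has(name)) return mcpCall(name, args); // MCP server executes
  const handler = localHandlers[name];
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args); // local function executes in-process
}

const mcpNames = new Set(["get_weather"]);
const fakeMcpCall = (name: string, _args: object) => ({ via: "mcp", name });

const local = dispatch("set_reminder", { text: "standup at 10" }, mcpNames, fakeMcpCall);
const remote = dispatch("get_weather", { city: "NYC" }, mcpNames, fakeMcpCall);
```

A real MCP client call is asynchronous, so a production dispatcher would return a Promise; the routing logic is the same.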
Delx runs 46 tools via a single MCP server at /mcp. Every tool is discoverable through tools/list with full JSON Schema, descriptions, and a schema catalog URL. Any MCP-compatible client -- Claude Code, Cursor, custom agents -- can connect and start using tools immediately without reading documentation.
- schema_url for automated validation
- format=minimal and format=ultracompact for token-efficient tool listing

We started with function calling during prototyping, migrated to MCP in production, and never looked back. The protocol handles discovery, validation, and multi-client support that we would have had to build ourselves. Read more about our protocol stack in MCP vs A2A Deep Dive and OpenClaw MCP + A2A Explained.
Function calling is a model-level feature where the LLM generates structured JSON arguments for predefined functions, and your application executes them. MCP is a protocol where tools live on external servers, are discovered dynamically at runtime, and can be used by any model or client that speaks the protocol.
Yes. Many production systems use function calling as a fallback for simple, model-specific tasks while routing complex or multi-model workflows through MCP. The two approaches are complementary, not mutually exclusive.
MCP is not a direct replacement. Function calling is embedded in model APIs and works well for simple single-model apps. MCP adds a layer above it for dynamic discovery, multi-model support, and production-grade tool management. Teams building serious agent infrastructure are adopting MCP while keeping function calling for lightweight use cases.
Function calling is easier for a quick prototype -- you define a JSON schema in your API call and handle the result. MCP requires running a server, but the MCP SDK (TypeScript or Python) gets you a working server in under 50 lines. The setup cost pays off quickly when you need multiple tools, multiple models, or team collaboration.
Ready to move beyond function calling? Compare MCP to REST APIs or jump straight into our Best MCP Servers guide.