MCP: From Function Calling to a Universal Standard for AI
TL;DR: LLMs evolved from answering questions to interacting with the real world through function calling. But scaling these integrations became a problem. MCP (Model Context Protocol) from Anthropic is the open standard that's changing the game, and companies like OpenAI and Google have already adopted it.
The problem no one saw coming
When ChatGPT exploded in November 2022, millions of people discovered we could have surprisingly natural conversations with an AI. In March 2023, GPT-4 raised the stakes. Suddenly, companies shifted priorities, hundreds of startups were born, and we understood this wasn't "just another tool."
But there was a frustrating limit: models lived in a box.
They could reason, explain, generate code... but they couldn't send an email, create a calendar event, or query your database. The potential was enormous, but it remained trapped inside the model.
Function Calling: The first major evolution
In June 2023, OpenAI announced function calling: the ability to give the model tools to interact with external systems.
// Before: The model could only respond
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Buenos Aires?" }]
});
// Response: "I don't have access to real-time data..."
// After: The model can USE tools
const response = await openai.chat.completions.create({
model: "gpt-4",
messages: [{ role: "user", content: "What's the weather in Buenos Aires?" }],
functions: [{
name: "get_weather",
description: "Gets current weather for a city",
parameters: {
type: "object",
properties: {
city: { type: "string" }
}
}
}]
});
// The model decides: "I need to call get_weather('Buenos Aires')"
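That definition is only half the loop, though. The model never executes anything itself: it returns a structured function_call, and your code runs the function and sends the result back so the model can answer in natural language. A minimal sketch of the rest of the round trip, using the same legacy functions API shown above (later generalized into the tools parameter); the getWeather helper is hypothetical:
// The model answered with a function_call instead of plain text
const message = response.choices[0].message;
if (message.function_call) {
  // "arguments" arrives as a JSON string produced by the model
  const args = JSON.parse(message.function_call.arguments);
  const weather = await getWeather(args.city); // hypothetical helper
  // Feed the result back so the model can reply in natural language
  const followUp = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "user", content: "What's the weather in Buenos Aires?" },
      message, // the assistant turn containing the function_call
      { role: "function", name: message.function_call.name, content: JSON.stringify(weather) }
    ]
  });
}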
This changed everything. Models could:
- Answer questions by calling external APIs
- Convert natural language into SQL queries or service calls
- Execute real actions: send emails, create tasks, update CRMs
Almost without realizing it, we had opened the door to our digital world for these models.
The problem of scaling tools
Function calling solved the "what", but created a new problem: "how do we scale this?"
Imagine this scenario:
// Bot 1: Sales assistant
const salesBot = {
functions: [
{ name: "find_customer", description: "Find customer by email...", parameters: {...} },
{ name: "get_orders", description: "Get customer orders...", parameters: {...} }
]
};
// Bot 2: Support assistant
const supportBot = {
functions: [
{ name: "find_customer", description: "Find customer by email...", parameters: {...} }, // DUPLICATE!
{ name: "get_orders", description: "Get customer orders...", parameters: {...} } // DUPLICATE!
]
};
// Bot 3: Billing assistant
// ... Copy-paste the same functions again?
The problems were evident:
- Massive duplication: Each client (bot/app) had to configure the same tools
- Maintenance: An API change required updating N configurations
- Copy-paste errors: Do you remember exactly how to describe each function?
- No single source of truth: If the description differs between bots, so does the behavior
This was the classic N×M problem: N clients × M tools = maintenance chaos. With just 5 assistants sharing 20 tools, that's 100 definitions to keep in sync.
MCP: The solution with clear responsibilities
In November 2024, Anthropic announced MCP (Model Context Protocol): an open standard for LLMs to discover and interact with tools.
The idea is brilliantly simple: invert the responsibility.
Before (Traditional Function Calling)
Client → Must know how to configure EACH tool
→ Maintains the definitions
→ Breaks if the tool changes
After (MCP)
Client → Only asks: "What tools do you have?"
MCP Server → Responds with its capabilities
→ Maintains its own definitions
→ Updates without breaking clients
How MCP works
MCP defines a client-server architecture with a discovery flow:
// Imports from the official TypeScript SDK
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
// 1. The host (Claude Desktop, your app, etc.) creates a client
//    (one Client instance per configured server: weather, database, gmail, ...)
const client = new Client({ name: "my-host-app", version: "1.0.0" });
// 2. Connects to a configured MCP server, here over stdio
await client.connect(
  new StdioClientTransport({ command: "node", args: ["weather-mcp-server.js"] })
);
// 3. The client asks the server what it can do
const { tools } = await client.listTools();
// [ { name: "get_weather", description: "...", inputSchema: { ... } } ]
// 4. During the conversation, the model decides which tool to use;
//    the client executes it on the corresponding server
const result = await client.callTool({
  name: "get_weather",
  arguments: { city: "Buenos Aires" },
});
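A detail that's easy to miss: MCP doesn't replace function calling, it feeds it. The host still translates the discovered MCP tools into whatever tool format its model API expects. A hedged sketch of that mapping for the OpenAI chat API, reusing the tools list and openai client from the earlier examples (the mapping code is ours, not part of the MCP SDK):
// Convert MCP tool definitions into OpenAI-style function tools
const openaiTools = tools.map((t) => ({
  type: "function" as const,
  function: {
    name: t.name,
    description: t.description,
    parameters: t.inputSchema, // MCP already uses JSON Schema
  },
}));
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "What's the weather in Buenos Aires?" }],
  tools: openaiTools,
});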
The three primitives of MCP
MCP defines three types of capabilities:
- Tools (model-controlled): Actions the model can invoke
{ "name": "find_customer", "description": "Find customer by email", "inputSchema": { ... } }
- Resources (app-controlled): Data the app can provide
{ "uri": "client://12345/profile", "name": "Customer profile", "mimeType": "application/json" }
- Prompts (user-controlled): Reusable prompt templates
{ "name": "summarize_order", "description": "Summarize a purchase order", "arguments": ["order_id"] }
The impact: From "API first" to "MCP first"
Adoption has been meteoric:
- November 2024: Anthropic launches MCP with Python and TypeScript SDKs
- March 2025: OpenAI officially adopts MCP in ChatGPT Desktop
- April 2025: Google confirms support in Gemini
- December 2025: MCP donated to the Agentic AI Foundation (Linux Foundation)
Today, hundreds of public MCP servers are available:
- GitHub MCP: Interact with repos, issues, PRs
- Postgres MCP: Execute database queries
- Slack MCP: Read and send messages
- Google Drive MCP: Access documents
Companies are seeing the potential:
- Customer support: Bots that query CRMs and resolve cases
- Development: Agents that read code, run tests, create PRs
- Analytics: LLMs that query metrics and generate reports
And a new paradigm is emerging: MCP first.
Just as we once asked "does our platform expose APIs?", we now ask "does our system have an MCP server?".
Practical example: Your own MCP Server
Creating an MCP server is surprisingly simple:
// weather-mcp-server.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
  name: "weather-service",
  version: "1.0.0"
});
// Define a tool: name, description, input schema (zod), handler
server.tool(
  "get_weather",
  "Gets current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const res = await fetch(`https://api.weather.com/${city}`); // illustrative endpoint
    const data = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
);
// Serve over stdio so clients can launch and talk to this process
await server.connect(new StdioServerTransport());
Now any MCP client (Claude Desktop, your app, etc.) can connect, discover this tool, and call it. You register the server once; the tool definitions live with the server instead of being copy-pasted into every client.
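For Claude Desktop specifically, that registration is a single entry in its claude_desktop_config.json (assuming the TypeScript file above is compiled to weather-mcp-server.js):
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["weather-mcp-server.js"]
    }
  }
}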
Security considerations
Not everything is perfect. In April 2025, security researchers identified risks:
- Prompt injection: Malicious tools can manipulate the model
- Excessive tool permissions: Combining over-privileged tools can leak sensitive data
- Lookalike tools: Malicious servers can impersonate legitimate tools
MCP creates a "trust boundary" between the model and systems. It's crucial to:
- Validate servers before connecting them
- Implement granular permissions
- Audit tool calls
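None of this requires exotic tooling to get started. A thin wrapper around tool execution in the host already gives you an allowlist and an audit trail; a minimal sketch (the allowlist and log format are ours, only callTool comes from the SDK):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
// Only tools you have explicitly reviewed may be called
const ALLOWED_TOOLS = new Set(["get_weather", "query_db"]);
async function guardedCallTool(client: Client, name: string, args: Record<string, unknown>) {
  if (!ALLOWED_TOOLS.has(name)) {
    throw new Error(`Tool "${name}" is not on the allowlist`);
  }
  // Log every call before it happens
  console.log(`[audit] ${new Date().toISOString()} ${name}`, JSON.stringify(args));
  return client.callTool({ name, arguments: args });
}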
Conclusion: The future is modular
MCP doesn't just solve a technical scalability problem. It's redefining how we build systems with AI.
We went from:
- Isolated LLMs → Text only
- Function calling → Chaotic N×M integrations
- MCP → Standardized and composable ecosystem
The next step is yours:
- If you're a developer: Explore existing MCP servers or create your own
- If you lead product: Ask yourself what value an MCP server would add to your platform
- If you build with AI: Consider MCP before writing another custom integration
The era of autonomous agents is here. MCP is the protocol that makes them possible.
References
- MCP Official Announcement - Anthropic
- MCP Technical Specification
- Function Calling - OpenAI
- MCP on Wikipedia
- GitHub: Model Context Protocol
Are you using MCP in your project? What MCP server would you like to exist? Share in the comments.