MCP 101
I Read the Documentation So You Don't Have To
Everyone is making videos about MCP. Nobody is reading the actual documentation.
I’ve been using MCP for weeks without fully understanding what I was using. I built an MCP server. I installed Claude Code, which runs on MCP. I connected tools to Claude Desktop, which also uses MCP. It all worked. But I couldn’t explain what MCP actually was in plain English without reaching for someone else’s analogy.
So I sat down and read the official Anthropic documentation. The spec. The GitHub repo. The architecture docs. Here’s what I found.
What MCP Actually Is
MCP stands for Model Context Protocol. Anthropic created it and open-sourced it through the Linux Foundation.
The simplest explanation: MCP is a standard way for AI models to connect to tools and data. Before MCP, every time you wanted an AI to talk to a new tool, someone had to build a custom integration from scratch. Every model, every tool, every combination required its own code.
Anthropic’s own documentation calls it “a USB-C port for AI.” That analogy is actually good. Before USB-C, every phone had a different charger. USB-C gave everyone one standard plug. MCP is trying to do the same thing for AI connections.
One standard. Any model. Any tool. Plug it in.
Why It Exists (The N x M Problem)
If you have 5 AI models and 10 tools, you need 50 custom integrations. That’s the N x M problem. Every new model or tool multiplies the work.
MCP collapses that. Build one MCP server for your tool, and any MCP-compatible model can use it. Build one MCP client into your AI app, and it can talk to any MCP server. With MCP, you build 5 clients (one per model) + 10 servers (one per tool) = 15 total builds instead of 50 custom integrations.
Setup      Models   Tools   Total Connections
Pre-MCP    5        10      50 (custom code)
With MCP   5        10      15 (standardized)
The math is what makes this matter.
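The arithmetic is easy to sanity-check. A quick sketch (the function names are mine, not from the MCP docs):

```python
def integrations_pre_mcp(models: int, tools: int) -> int:
    # Without a shared protocol, every (model, tool) pair needs custom code.
    return models * tools

def integrations_with_mcp(models: int, tools: int) -> int:
    # With MCP: one client per model, plus one server per tool.
    return models + tools

print(integrations_pre_mcp(5, 10))   # N x M -> 50
print(integrations_with_mcp(5, 10))  # N + M -> 15
```

The gap widens fast: at 20 models and 100 tools, it's 2,000 custom integrations versus 120 standardized builds.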
How It Actually Works
The documentation breaks MCP into three roles and three capabilities.
The three roles:
Host is the AI application you’re using. Claude Desktop, Cursor, your IDE. It’s the thing you interact with.
Client lives inside the Host. It manages the connection to servers. You don’t interact with this directly. It’s the plumbing behind the wall.
Server is the other end of the pipe. It connects to your data, your tools, your files. Anyone can build a server. No approval, no marketplace, no gatekeeping.
The three capabilities:
Resources are read-only data. A file on your computer. A database entry. An API response. The AI can look at these but not change them.
Prompts are reusable templates. They tell the model how to approach a specific task. Think of them as saved instructions you can call up by name.
Tools are where it gets real. Tools let the AI take action. Write a file. Create a pull request. Send a message. Run a query. This is the capability that turns a chatbot into an agent.
The documentation is clear about this distinction. Resources are read. Tools are write. Prompts are how.
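You can see the read/write/how split in the protocol itself. MCP messages are JSON-RPC 2.0, and each capability gets its own method family. Here's a dependency-free toy dispatcher that models the distinction; the method names (`resources/read`, `prompts/get`, `tools/call`) follow the spec's naming, but everything else, including the sample data, is an illustrative sketch rather than a real server:

```python
import json

# Toy in-memory "server": the three MCP capabilities as plain data.
# A real server speaks JSON-RPC 2.0 over stdio or HTTP via an SDK;
# this just shows which capability answers which method.
RESOURCES = {"file:///notes.txt": "Read-only contents of a file."}
PROMPTS = {"summarize": "Summarize the following text in one sentence: {text}"}
TOOLS = {"write_file": lambda path, text: f"wrote {len(text)} bytes to {path}"}

def handle(request: dict) -> dict:
    method, params = request["method"], request.get("params", {})
    if method == "resources/read":      # Resources: the AI looks, never changes
        result = {"contents": RESOURCES[params["uri"]]}
    elif method == "prompts/get":       # Prompts: saved instructions by name
        result = {"text": PROMPTS[params["name"]].format(**params["arguments"])}
    elif method == "tools/call":        # Tools: actions with side effects
        result = {"output": TOOLS[params["name"]](**params["arguments"])}
    else:
        result = {"error": f"unknown method {method}"}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

reply = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                "params": {"name": "write_file",
                           "arguments": {"path": "out.txt", "text": "hi"}}})
print(json.dumps(reply))
```

Notice that only the `tools/call` branch does anything. That's the read/write line the docs draw, made concrete.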
What the Documentation Says vs. What Everyone Claims
Here’s where the Interrogation Layer comes in.
They claim: “MCP is the App Store for AI.” The docs say: MCP is a protocol, not a marketplace. There is no central store. There is no approval process. There is no curation. Anyone can build a server. Anyone can connect to it. That’s the point, and that’s the risk.
They claim: “MCP makes AI safe for enterprise.” The docs say: The security model puts almost all responsibility on the developer. The spec includes a “Trust & Safety” section that explicitly states it “cannot enforce these principles at the protocol level.” The protocol provides the pipe. You provide the guardrails.
They claim: “MCP is just for developers.” The docs say: You can run MCP servers through Docker, through no-code platforms like n8n, or through a simple Python script. The barrier to entry is lower than most people think. The official quickstart tutorial has you running a working MCP server in minutes.
They claim: “Anthropic created MCP to help developers.” The Interrogation: Anthropic created MCP to own the standard. By making it open-source and donating it to the Linux Foundation, they’re not being charitable. If every data source speaks MCP, then Claude can connect to everything without Anthropic writing a single custom integration. They get the network effect by owning the language, not the infrastructure. That’s smart business, not philanthropy.
What MCP Breaks If It Wins
MCP isn’t the only attempt at solving tool connectivity. OpenAI has function calling. LangChain has tool abstractions. Google has its own approaches. But MCP is one of the first serious attempts at a foundation-backed, open protocol layer.
If MCP becomes the standard, tool vendors will ship MCP servers by default. Models that don’t support it will feel closed. Agent frameworks that abstract over MCP may become thin wrappers with shrinking margins. The next AI battle stops being about which model is smartest and starts being about who controls the pipe.
That’s the strategic shift most explainers miss entirely. If the protocol layer standardizes, the moat moves. It’s no longer who has the smartest model. It’s who controls distribution, defaults, and trust.
Try This Yourself (5 Minutes)
You don’t need to build anything to see MCP working.
If you have Claude Desktop installed (if not, grab it free at claude.ai/download):
1. Open Claude Desktop
2. Click the settings icon
3. Look for “Developer” or “MCP” in the settings
4. You’ll see a list of MCP servers that are either connected or available
If any tools show up there (typically file access, web browsing, or code execution), those are MCP servers running on your machine right now.
You’re already using MCP. You just didn’t know it had a name.
If you want to go one step further, the official Python quickstart walks you through building a weather server in about 10 minutes. It’s at modelcontextprotocol.io under “Quickstart.” The code is short, the instructions are clear, and when it works you’ll see your custom tool show up inside Claude Desktop. That moment is when MCP clicks.
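The core of that quickstart is a pattern worth seeing in miniature: a decorator registers a plain function as a tool, and the model can then discover and call it. Here's a dependency-free sketch of the idea. This is a hypothetical mini-framework of my own, not the real `mcp` SDK API, so the names here won't match the quickstart exactly:

```python
# Sketch of the tool-registration pattern, not the real `mcp` package.
TOOL_REGISTRY = {}

def tool(func):
    """Register a function so a connected model can discover and call it."""
    TOOL_REGISTRY[func.__name__] = func
    return func

@tool
def get_forecast(city: str) -> str:
    # A real weather server would call a forecast API; this is canned data.
    return f"Forecast for {city}: sunny, 21C"

# What the host side does on your behalf:
print(sorted(TOOL_REGISTRY))                   # tool discovery
print(TOOL_REGISTRY["get_forecast"]("Paris"))  # tool invocation
```

The real SDK adds the JSON-RPC plumbing, schemas, and transport, but the mental model is this: decorate a function, and it becomes something the AI can call.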
Operator Verdict: Adopt (Tinkerers) / Watch (Enterprise)
If you’re a tinkerer, adopt this now. MCP is how your local AI setup goes from “chatbot I type into” to “assistant that actually does things.” The barrier is lower than you think and the documentation is better than most open-source projects.
If you’re in enterprise, watch. The protocol works. The plumbing is solid. But the governance story isn’t there yet. No native audit trails. No centralized kill switch. No standardized permission model that a CISO would sign off on today. Those are the specific blockers, and if you work in enterprise IT, you already recognize them.
The gap between “this works on my gaming PC” and “this is approved for production” is where the next 18 months of enterprise AI adoption lives. And it starts with understanding what the protocol actually says.
I read the documentation so you don’t have to. Now you know what MCP is. Next time someone calls it “the App Store for AI,” you can correct them.
It’s a pipe. A really important pipe. And if you understand the pipe, you control what flows through it.
AI Frankly. Are we having fun yet?



