/* global React */

function Chapter08() {
  return (
    <section className="chapter" id="ch-08" data-screen-label="08 MCP">
      <div className="chapter-header">
        <div className="eyebrow">Chapter 08 · MCP</div>
        <h1 className="chapter-title">Model Context Protocol — tools, but standardized.</h1>
        <p className="chapter-lede">
          MCP is a simple JSON-RPC protocol that lets any LLM client talk to any tool server. LangChain ships an adapter
          that surfaces every MCP tool as a regular LangChain tool. Once you see the wire, the magic disappears.
        </p>
      </div>

      <SectionTitle num="8.1">The problem MCP solves</SectionTitle>
      <p>
        Before MCP, every framework reinvented tool integration. If someone wrote a "GitHub tool" for LangChain, it
        didn't help LlamaIndex users; if someone wrote it for Cursor, it didn't help Claude Desktop. <strong>MCP is the
        USB-C of LLM tooling</strong> — one protocol, plug anything into anything.
      </p>
      <ul>
        <li><strong>Server</strong> — a process that exposes tools, resources, and prompts. Anyone can write one in any language.</li>
        <li><strong>Client</strong> — the LLM application (Claude Desktop, Cursor, your LangChain agent) that consumes them.</li>
        <li><strong>Transport</strong> — stdio (subprocess) or Streamable HTTP. Wire protocol is JSON-RPC 2.0.</li>
      </ul>
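
      <p>
        Concretely, every tool invocation is one JSON-RPC 2.0 request and one response. A sketch of the wire format
        (the <code>tools/call</code> method name comes from the MCP spec; the tool name, arguments, and result here are
        invented):
      </p>

      <CodeBlock file="wire_format.py">{`import json

# Client -> server: one JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# Server -> client: the matching response. Results carry a list of
# typed content blocks, not a bare string.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "12°C, overcast"}]},
}

print(json.dumps(request, indent=2))`}</CodeBlock>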

      <SectionTitle num="8.2">The three primitives a server exposes</SectionTitle>
      <div className="two-col">
        <Callout kind="intuition" title="Tools (model-invoked)">
          Functions the LLM can decide to call. Same shape as LangChain tools: name, description, input schema.
        </Callout>
        <Callout kind="intuition" title="Resources (app-provided)">
          Read-only data the app can attach to context (files, DB rows, URLs). The LLM doesn't pick them — the app does.
        </Callout>
      </div>
      <Callout kind="intuition" title="Prompts (user-invoked)">
        Pre-written prompt templates the user can summon (e.g. slash commands). Useful for canned workflows.
      </Callout>
      <p>For agents, you'll mostly care about <strong>tools</strong>. Resources and prompts are nice-to-haves.</p>

      <SectionTitle num="8.3">The handshake, visualized</SectionTitle>

      <MCPDiagram />

      <p>
        On startup the host calls <code>list_tools()</code> on each connected server (the <code>tools/list</code> method
        on the wire), gets back tool descriptors, and registers them under their names. When the LLM later emits a tool
        call for one of those tools, the host routes it to the owning server via <code>call_tool(name, args)</code>
        (<code>tools/call</code> on the wire) and feeds the result back to the model.
      </p>
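
      <p>
        That host-side bookkeeping is small enough to sketch by hand (all names here are invented; real hosts do this
        through an MCP SDK):
      </p>

      <CodeBlock file="host_routing.py">{`registry: dict[str, str] = {}   # tool name -> server id

def register_server(server_id: str, tool_descriptors: list[dict]) -> None:
    """Startup: store each descriptor from list_tools() under its name."""
    for desc in tool_descriptors:
        registry[desc["name"]] = server_id

def route(tool_name: str, args: dict) -> tuple[str, str, dict]:
    """Runtime: an LLM tool call becomes call_tool() on the owning server."""
    return (registry[tool_name], tool_name, args)

register_server("weather", [{"name": "forecast", "description": "..."}])
print(route("forecast", {"city": "Oslo"}))  # ('weather', 'forecast', {'city': 'Oslo'})`}</CodeBlock>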

      <SectionTitle num="8.4">Connecting MCP servers from LangChain</SectionTitle>
      <p>
        The official adapter is <code>langchain-mcp-adapters</code>. It turns one or more MCP servers into a list of
        LangChain <code>Tool</code> objects you can pass straight to <code>create_tool_calling_agent</code>:
      </p>

      <CodeBlock file="install.sh" lang="bash">{`pip install langchain-mcp-adapters langchain langchain-openai`}</CodeBlock>

      <CodeBlock file="mcp_client.py">{`import asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# The agent prompt must include an agent_scratchpad placeholder.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

async def main():
    client = MultiServerMCPClient({
        "github": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-github"],
            "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..."},
            "transport": "stdio",
        },
        "postgres": {
            "command": "python",
            "args": ["-m", "mcp_postgres", "--dsn", "postgres://..."],
            "transport": "stdio",
        },
        "weather": {
            "url": "https://example.com/mcp",
            "transport": "streamable_http",
        },
    })

    tools = await client.get_tools()       # list[BaseTool] — ready to use
    print([t.name for t in tools])
    # ['search_repos', 'read_file', 'create_issue',
    #  'query', 'list_tables', 'schema',
    #  'current_weather', 'forecast']

    model = ChatOpenAI(model="gpt-4o-mini")
    agent = create_tool_calling_agent(model, tools, prompt)
    executor = AgentExecutor(agent=agent, tools=tools)

    result = await executor.ainvoke({
        "input": "Find issues labeled 'bug' in repo X and summarize."
    })
    print(result["output"])

asyncio.run(main())`}</CodeBlock>

      <Callout kind="intuition" title="Your agent doesn't know it's MCP">
        After <code>get_tools()</code> returns, those tools are indistinguishable from any other LangChain tool. The
        agent code is identical whether the tool is local Python or a remote MCP server. That's the whole value prop.
      </Callout>

      <SectionTitle num="8.5">Writing your own MCP server (Python)</SectionTitle>
      <CodeBlock file="my_server.py">{`from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def search_notes(query: str, limit: int = 10) -> list[dict]:
    """Search my Obsidian vault by keyword."""
    # ... real implementation ...
    return [{"title": "...", "snippet": "..."}]

if __name__ == "__main__":
    mcp.run()  # stdio by default`}</CodeBlock>
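
      <p>
        The other two primitives are one decorator each in the same SDK. A sketch (the URI template, names, and bodies
        below are invented):
      </p>

      <CodeBlock file="my_server_extras.py">{`from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-notes")

@mcp.resource("note://{note_id}")
def get_note(note_id: str) -> str:
    """App-attached, read-only: the raw text of one note."""
    return f"(contents of note {note_id})"

@mcp.prompt()
def summarize_note(note_id: str) -> str:
    """User-invoked template, e.g. behind a slash command."""
    return f"Summarize note {note_id} in three bullets."`}</CodeBlock>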

      <p>
        Run that file with <code>python my_server.py</code> and it speaks MCP over stdin/stdout. Any MCP client (Claude
        Desktop, Cursor, your LangChain agent) can now use those tools.
      </p>
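
      <p>
        Wiring it into the LangChain client from section 8.4 is one more entry in the connection dict (the path is an
        assumption; point it at wherever the file actually lives):
      </p>

      <CodeBlock file="connect_my_server.py">{`my_tools_connection = {
    "my-tools": {
        "command": "python",
        "args": ["my_server.py"],   # path to the server file above
        "transport": "stdio",
    },
}
# Merge this into the dict passed to MultiServerMCPClient(...).`}</CodeBlock>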

      <SectionTitle num="8.6">When to use MCP vs. plain LangChain tools</SectionTitle>
      <ul>
        <li><strong>Use MCP when</strong> — the tool surface is shared across multiple apps (Claude Desktop + your agent), or you want a clean process boundary (e.g. tools that need root, or run in another language).</li>
        <li><strong>Use plain tools when</strong> — it's all in one process, performance matters, or you need fine-grained callbacks.</li>
      </ul>
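
      <p>
        For contrast, the in-process version of the <code>add</code> tool from section 8.5 is just a decorated function:
        no subprocess, no serialization, no round trip (assumes <code>langchain-core</code> is installed):
      </p>

      <CodeBlock file="inprocess_tool.py">{`from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# Same name, description, and schema as the MCP version, but the
# invocation is a plain in-process function call.
print(add.invoke({"a": 2, "b": 3}))  # 5`}</CodeBlock>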

      <Callout kind="gotcha" title="Latency tax is real">
        Every MCP tool call is a JSON-RPC round trip over a pipe (or HTTP). For tight inner loops, prefer in-process
        tools. Reserve MCP for heavyweight integrations (a database, an OS-level service, an existing CLI).
      </Callout>

      <Callout kind="warning" title="Trust boundary">
        An MCP server is a process you give credentials and data to. Treat it like any other dependency — pin versions,
        review the code, isolate filesystem access. Don't pipe production secrets to a server you found on a Discord.
      </Callout>
    </section>
  );
}

window.Chapter08 = Chapter08;
