/* global React */

function Chapter05() {
  return (
    <section className="chapter" id="ch-05" data-screen-label="05 Tools">
      <div className="chapter-header">
        <div className="eyebrow">Chapter 05 · Tools</div>
        <h1 className="chapter-title">Tools and tool calling — the agent's hands.</h1>
        <p className="chapter-lede">
          A tool is any Python callable the LLM is allowed to invoke. Underneath the framework gloss is a strict contract:
          a name, a description, a JSON schema for arguments, and a function that returns serializable data.
        </p>
      </div>

      <SectionTitle num="5.1">The four parts of a tool</SectionTitle>
      <ol>
        <li><strong>Name</strong> — short, snake_case. The LLM picks a tool by this name.</li>
        <li><strong>Description</strong> — a docstring. <em>This is the actual prompt.</em> Bad descriptions = wrong tool calls.</li>
        <li><strong>Argument schema</strong> — a Pydantic model or type-hinted signature. LangChain converts it to JSON Schema.</li>
        <li><strong>Implementation</strong> — the function body that runs when the LLM picks this tool.</li>
      </ol>

      <SectionTitle num="5.2">Three ways to define a tool</SectionTitle>

      <h3 className="sub-title">A. The <code>@tool</code> decorator (simplest)</h3>
      <CodeBlock file="tools_simple.py">{`from langchain_core.tools import tool

@tool
def get_weather(city: str, units: str = "metric") -> dict:
    """Return current weather for a city.

    Args:
        city: City name, e.g. 'Tokyo'.
        units: 'metric' or 'imperial'.
    """
    # ... real API call ...
    return {"city": city, "temp_c": 18.2, "condition": "Cloudy"}`}</CodeBlock>

      <p>The decorator inspects the signature, the type hints, and the docstring to build the schema automatically.</p>
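      <p>
        A quick way to see what it built (assuming the <code>get_weather</code> tool above) is to inspect the tool
        object directly:
      </p>
      <CodeBlock file="inspect_tool.py">{`print(get_weather.name)         # "get_weather"
print(get_weather.description)  # the docstring, verbatim
print(get_weather.args)         # JSON-schema properties built from the type hints`}</CodeBlock>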

      <h3 className="sub-title">B. Pydantic args_schema (when you need validation)</h3>
      <CodeBlock file="tools_pydantic.py">{`from pydantic import BaseModel, Field
from langchain_core.tools import tool

class WeatherArgs(BaseModel):
    city: str = Field(description="City name, e.g. 'Tokyo'.")
    units: str = Field(default="metric", description="'metric' or 'imperial'.")

@tool(args_schema=WeatherArgs)
def get_weather(city: str, units: str = "metric") -> dict:
    """Return current weather for a city."""
    return {"city": city, "temp_c": 18.2}`}</CodeBlock>
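      <p>
        The payoff is validation before your function body runs. A minimal sketch, assuming the
        Pydantic-backed <code>get_weather</code> above:
      </p>
      <CodeBlock file="tools_validation.py">{`from pydantic import ValidationError

try:
    get_weather.invoke({})  # "city" is required and has no default
except ValidationError as err:
    print(err)  # pydantic reports the missing field; the body never ran`}</CodeBlock>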

      <h3 className="sub-title">C. Subclass <code>BaseTool</code> (full control, async, callbacks)</h3>
      <CodeBlock file="tools_class.py">{`from langchain_core.tools import BaseTool

class WeatherTool(BaseTool):
    name: str = "get_weather"
    description: str = "Return current weather for a city."
    args_schema: type = WeatherArgs

    def _run(self, city: str, units: str = "metric") -> dict:
        return {"city": city, "temp_c": 18.2}

    async def _arun(self, city: str, units: str = "metric") -> dict:
        return {"city": city, "temp_c": 18.2}`}</CodeBlock>

      <SectionTitle num="5.3">What the LLM actually sees</SectionTitle>
      <p>
        LangChain converts your tool into the OpenAI tool-calling JSON schema. Click through the simulator below to see
        the schema, the args the LLM emits, and the result message that gets appended to the conversation.
      </p>

      <ToolSimulator />
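
      <p>
        For reference, the <code>get_weather</code> tool from 5.2 serializes to roughly this payload (exact field
        order and details vary by provider):
      </p>
      <CodeBlock file="tool_schema.json">{`{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Return current weather for a city. ...",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {"type": "string", "description": "City name, e.g. 'Tokyo'."},
        "units": {"type": "string", "default": "metric", "description": "'metric' or 'imperial'."}
      },
      "required": ["city"]
    }
  }
}`}</CodeBlock>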

      <Callout kind="intuition" title="The description IS the prompt">
        Whatever you put in the docstring or <code>description=</code> is verbatim what the LLM reads when deciding which
        tool to call. <em>"Get weather"</em> works; <em>"Returns current temperature, conditions, humidity, wind speed for any city worldwide. Use whenever the user asks about weather, rain, temperature, or what to wear."</em> works much better.
      </Callout>

      <SectionTitle num="5.4">Binding tools to a model</SectionTitle>
      <p>"Binding" is how you tell the model which tools are available for this call:</p>
      <CodeBlock file="bind_tools.py">{`tools = [get_weather, search_web, calculator]

model_with_tools = model.bind_tools(tools)

response = model_with_tools.invoke([HumanMessage("Should I bring an umbrella in Tokyo?")])

print(response.content)     # often empty
print(response.tool_calls)
# [{'name': 'get_weather', 'args': {'city': 'Tokyo'}, 'id': 'call_abc123'}]`}</CodeBlock>

      <p>
        The model returns either content (a final answer) or one or more <code>tool_calls</code>. <strong>Possibly both</strong> —
        modern models sometimes narrate <em>and</em> call a tool. <strong>Possibly several in parallel</strong> — the
        model can request multiple tool calls in a single response. Your runtime must handle all of these cases.
      </p>

      <SectionTitle num="5.5">Executing the call manually</SectionTitle>
      <p>
        For intuition, let's run the loop ourselves once — no <code>AgentExecutor</code>, just plain code:
      </p>
      <CodeBlock file="manual_loop.py">{`from langchain_core.messages import HumanMessage, ToolMessage

tools_by_name = {t.name: t for t in tools}
messages = [HumanMessage("Should I bring an umbrella in Tokyo?")]

while True:
    ai_msg = model_with_tools.invoke(messages)
    messages.append(ai_msg)

    if not ai_msg.tool_calls:        # model is done — final answer
        break

    for call in ai_msg.tool_calls:
        tool = tools_by_name[call["name"]]
        result = tool.invoke(call["args"])
        messages.append(ToolMessage(
            content=str(result),
            tool_call_id=call["id"],
            name=call["name"],
        ))

print(messages[-1].content)`}</CodeBlock>

      <Callout kind="tip" title="That's the entire AgentExecutor">
        The hand-rolled loop above is — give or take callbacks, retries, and parallelism — exactly
        what <code>AgentExecutor</code> (and the modern <code>create_react_agent</code> in LangGraph) does. Once
        you've written it yourself, the framework stops feeling magical.
      </Callout>

      <SectionTitle num="5.6">Tool design tips that save real money</SectionTitle>
      <Callout kind="tip" title="Keep tool surface small">
        5–10 well-described tools beat 30 narrow ones; the model gets confused by overlap. Merge similar tools and use
        an <code>action</code> argument to discriminate, as in the sketch below.
      </Callout>
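      <p>
        A sketch of the merge pattern: one hypothetical <code>manage_event</code> tool with an <code>action</code>
        argument replacing three separate create/update/cancel tools.
      </p>
      <CodeBlock file="merged_tool.py">{`from typing import Literal, Optional
from langchain_core.tools import tool

@tool
def manage_event(
    action: Literal["create", "update", "cancel"],
    event_id: Optional[str] = None,
    title: Optional[str] = None,
) -> dict:
    """Create, update, or cancel a calendar event.

    Args:
        action: What to do. 'update' and 'cancel' require event_id;
            'create' requires title.
        event_id: ID of an existing event.
        title: Title for a new event.
    """
    if action == "create":
        return {"status": "created", "title": title}
    return {"status": action, "event_id": event_id}`}</CodeBlock>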
      <Callout kind="tip" title="Return structured, terse data">
        Tools that dump a 10 KB HTML page back into the model burn tokens and confuse it. Pre-process: extract just
        the fields the agent actually needs (see the sketch below).
      </Callout>
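      <p>
        A sketch of that pre-processing habit, with a hypothetical <code>catalog_api</code> standing in for whatever
        raw source you are wrapping:
      </p>
      <CodeBlock file="terse_tool.py">{`from langchain_core.tools import tool

@tool
def search_products(query: str) -> list[dict]:
    """Search the product catalog. Returns at most 5 matches."""
    raw = catalog_api.search(query)  # hypothetical upstream API
    # Return only the fields the agent needs, not the full payload.
    return [
        {"id": p["id"], "name": p["name"], "price": p["price"]}
        for p in raw[:5]
    ]`}</CodeBlock>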
      <Callout kind="gotcha" title="Errors should be returned, not raised">
        If a tool raises an uncaught exception, your loop dies. Wrap the body and return
        an error payload such as <code>{"{'error': '...'}"}</code> instead (see the sketch below) — the LLM will see
        the error and try a different approach, which is usually what you want.
      </Callout>
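      <p>
        A minimal sketch of that wrap, again with a hypothetical <code>weather_api</code>:
      </p>
      <CodeBlock file="safe_tool.py">{`from langchain_core.tools import tool

@tool
def get_weather(city: str) -> dict:
    """Return current weather for a city."""
    try:
        return weather_api.fetch(city)  # hypothetical upstream call
    except Exception as exc:
        # Returned, not raised: the LLM reads the error text and can retry
        # with different arguments or switch tools.
        return {"error": f"weather lookup failed: {exc}"}`}</CodeBlock>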
    </section>
  );
}

window.Chapter05 = Chapter05;
