/* global React */

function Chapter02() {
  return (
    <section className="chapter" id="ch-02" data-screen-label="02 Agent intuition">
      <div className="chapter-header">
        <div className="eyebrow">Chapter 02 · Intuition</div>
        <h1 className="chapter-title">What is an agent, really?</h1>
        <p className="chapter-lede">
          An LLM by itself is a function: text in, text out. An agent is what you get when you put that function inside a
          while-loop, give it a list of buttons it can press, and let it decide when to stop.
        </p>
      </div>

      <SectionTitle num="2.1">The simplest possible definition</SectionTitle>
      <p>Forget frameworks for a minute. An agent is three things:</p>
      <ol>
        <li>An <strong>LLM</strong> that can produce structured output (specifically: <em>"call tool X with args Y"</em> or <em>"final answer: Z"</em>).</li>
        <li>A set of <strong>tools</strong> — Python functions the runtime knows how to execute when the LLM asks.</li>
        <li>A <strong>loop</strong> that runs LLM → maybe-tool → LLM → maybe-tool → … until the LLM stops asking for tools.</li>
      </ol>
      <p>
        That's it. The entire industry of "agentic AI" is variations on those three pieces. LangChain doesn't invent
        anything new here — it makes those pieces composable, observable, and production-ready.
      </p>

      <AgentLoop />

      <SectionTitle num="2.2">A 30-second mental model</SectionTitle>
      <p>
        Imagine you hire a smart intern who has never been to your office. You give them:
      </p>
      <ul>
        <li>A <strong>job description</strong> ("you help customers with refund questions") — that's the <em>system prompt</em>.</li>
        <li>A <strong>list of internal tools</strong> they can use ("the order DB query tool, the Stripe refund tool") — that's <em>tool binding</em>.</li>
        <li>A <strong>customer's question</strong> — that's the <em>user message</em>.</li>
        <li>A <strong>scratchpad</strong> where they write down what they did and saw — that's the <em>message history</em>.</li>
      </ul>
      <p>
        The intern reads everything in front of them, decides on their next action, performs it (or hands the answer
        back), updates the scratchpad, and reads everything again. They have no memory between turns except what's on
        the scratchpad. <strong>That is exactly how a LangChain agent works.</strong>
      </p>

      <Callout kind="intuition" title="The LLM is stateless. Everything is in the prompt.">
        This is the most important sentence on this page. The model has no hidden memory of what it did two seconds
        ago — every loop iteration sends the entire growing message list back to the API. If a piece of information
        isn't in the messages, the agent doesn't know it.
      </Callout>

      <SectionTitle num="2.3">Why a <em>loop</em> instead of one prompt?</SectionTitle>
      <p>
        You could try to write one giant prompt: "Search the web for X, then summarize, then email it to me." A few problems:
      </p>
      <ul>
        <li>The model would have to <em>guess</em> the search results — it can't actually run the search.</li>
        <li>If the first action fails (bad URL, empty result), there's no place to recover.</li>
        <li>The branching is fixed at prompt-write time, not at runtime.</li>
      </ul>
      <p>
        The loop fixes all three. The LLM produces <em>one decision at a time</em>. Between decisions, the runtime runs
        real code. After each tool result comes back, the LLM gets to <em>re-plan</em> with new information. That's the
        whole game.
      </p>

      <SectionTitle num="2.4">Where this breaks (and we'll fix later)</SectionTitle>
      <Callout kind="gotcha" title="Loops can run forever">
        If the model never decides to stop, you'll burn tokens and money. We set a <code>max_iterations</code> cap and a
        wall-clock timeout. (Chapter 7.)
      </Callout>
      <Callout kind="gotcha" title="Tools can fail">
        Network errors, rate limits, bad arguments. The executor catches exceptions and feeds them back as tool messages
        so the LLM can self-correct — but only if you wire it up. (Chapter 7.)
      </Callout>
      <Callout kind="gotcha" title="Costs scale superlinearly">
        Every loop iteration sends the <em>entire</em> message history. By turn 10, you're sending the prompt + 9 tool
        results + 9 thoughts every call. We'll cover trimming and summarization. (Chapter 10.)
      </Callout>

      <Callout kind="tip" title="Coming up">
        Next chapter zooms into the smallest unit — messages — because everything in LangChain is, at the wire level,
        just a list of messages.
      </Callout>
    </section>
  );
}

window.Chapter02 = Chapter02;
