/* global React */

function Chapter01() {
  return (
    <section className="chapter" id="ch-01" data-screen-label="01 Welcome">
      <div className="chapter-header">
        <div className="eyebrow">Chapter 01 · Orientation</div>
        <h1 className="chapter-title">From completion API to autonomous agent — a working developer's mental model.</h1>
        <p className="chapter-lede">
          This is a deep, slow walk through how LangChain agents actually work — the primitives, the loop, the messages,
          the tools, the executor, and how MCP fits into the picture. Two hours, no skipped steps, no magic words.
        </p>
      </div>

      <p>
        Most LLM tutorials stop at <code>llm.invoke("hello")</code>. That's the easy part. What's hard, and what this
        course is about, is the leap from <em>"LLM that answers a question"</em> to <em>"system that decides what to do, takes
        actions in the world, and stops when it has enough information."</em> That system is what we call an <strong>agent</strong>.
      </p>

      <SectionTitle num="1.1">What you'll have by the end</SectionTitle>
      <p>You'll be able to read any LangChain agent codebase and answer the four questions that matter:</p>
      <ol>
        <li><strong>What's in the prompt right now?</strong> — every message, in order, with full provenance.</li>
        <li><strong>How does the LLM decide to call a tool?</strong> — schemas, structured output, validation.</li>
        <li><strong>What does the executor actually do between LLM calls?</strong> — the runtime loop, parsing, errors.</li>
        <li><strong>How do I plug in external capabilities safely?</strong> — tools, retrievers, MCP servers.</li>
      </ol>

      <SectionTitle num="1.2">How to use this site</SectionTitle>
      <div className="two-col">
        <Callout kind="tip" title="Read in order">
          Each chapter assumes the previous one. We build from <em>messages</em> upward — primitives, then composition (LCEL),
          then tools, then agents, then orchestration. Skipping breaks the intuition.
        </Callout>
        <Callout kind="note" title="Touch the diagrams">
          Every animated figure has play / pause / step controls. The tool simulator, the LCEL builder, and the ReAct trace
          let you run things at your own pace. Click around — you can't break anything.
        </Callout>
      </div>

      <div className="stat-row">
        <div className="stat"><div className="num">10</div><div className="label">Chapters</div></div>
        <div className="stat"><div className="num">~2 hr</div><div className="label">Reading time</div></div>
        <div className="stat"><div className="num">6</div><div className="label">Live widgets</div></div>
        <div className="stat"><div className="num">40+</div><div className="label">Code samples</div></div>
      </div>

      <SectionTitle num="1.3">Prerequisites</SectionTitle>
      <ul>
        <li>You can read Python comfortably; you've used <code>pip install</code> and a virtual env.</li>
        <li>You've called an LLM API at least once — even just <code>openai.ChatCompletion.create()</code>.</li>
        <li>You have a vague sense of what JSON Schema is. We'll re-explain.</li>
      </ul>
      <p>
        That's it. We do not assume you've used LangChain before. We do assume you want to <em>understand</em>, not just copy a quickstart.
      </p>

      <Callout kind="intuition" title="The single sentence to take with you">
        An agent is just a loop where an LLM keeps emitting either a <code>tool_call</code> or a final answer, and a tiny
        runtime executes the tool and feeds the result back. Everything else — LCEL, AgentExecutor, MCP, callbacks — is
        machinery that makes that loop reliable, observable, composable, and safe.
      </Callout>
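
      <p>
        That sentence, written out as pseudocode (a sketch, not the real <code>AgentExecutor</code>;
        the <code>tools</code> dict and <code>tool_result()</code> are illustrative names, not LangChain API):
      </p>
      <pre><code>{`messages = [system_msg, user_msg]
while True:
    response = llm.invoke(messages)        # one model call over the full history
    messages.append(response)              # the AI turn joins the transcript
    if not response.tool_calls:            # no tool call = final answer, loop ends
        break
    for call in response.tool_calls:       # otherwise, run each requested tool
        result = tools[call["name"]].invoke(call["args"])
        messages.append(tool_result(call["id"], result))  # feed it back`}</code></pre>
      <p>
        Ten lines. The rest of the course is about what it takes to make those ten lines trustworthy in production.
      </p>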
    </section>
  );
}

window.Chapter01 = Chapter01;
