/* global React */

function Chapter04() {
  return (
    <section className="chapter" id="ch-04" data-screen-label="04 LCEL">
      <div className="chapter-header">
        <div className="eyebrow">Chapter 04 · Composition</div>
        <h1 className="chapter-title">LCEL — composing pipelines with the pipe operator.</h1>
        <p className="chapter-lede">
          LangChain Expression Language is the backbone of the framework. Learn it and you'll never write boilerplate
          orchestration code again. It looks like Unix pipes, and that's no accident.
        </p>
      </div>

      <SectionTitle num="4.1">The Runnable protocol</SectionTitle>
      <p>
        Every "thing you can call" in LangChain implements <code>Runnable</code>. That means it has:
      </p>
      <ul>
        <li><code>.invoke(input)</code> — the basic call.</li>
        <li><code>.stream(input)</code> — yields output chunks as they are produced.</li>
        <li><code>.batch(inputs)</code> — runs a list of inputs in parallel.</li>
        <li><code>.ainvoke / .astream / .abatch</code> — async variants.</li>
      </ul>
      <p>
        Prompts, models, parsers, retrievers, even plain Python functions (via <code>RunnableLambda</code>) are all Runnables.
        Once everything has the same interface, you can pipe them together.
      </p>
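      <p>
        Here's a quick sketch of that uniform interface, wrapping a plain function with <code>RunnableLambda</code>
        (the function itself is just an illustration):
      </p>
      <CodeBlock file="runnable_protocol.py">{`from langchain_core.runnables import RunnableLambda

# Any plain function becomes a Runnable
def shout(text: str) -> str:
    return text.upper() + "!"

runnable = RunnableLambda(shout)

runnable.invoke("hello")         # 'HELLO!'
runnable.batch(["hi", "bye"])    # ['HI!', 'BYE!']

for chunk in runnable.stream("hey"):
    print(chunk)  # 'HEY!' (a plain function yields its whole output as one chunk)`}</CodeBlock>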

      <SectionTitle num="4.2">The pipe operator</SectionTitle>
      <p>The single Python operator that makes LCEL feel magical:</p>
      <CodeBlock file="lcel_basics.py">{`from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
model  = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

chain = prompt | model | parser   # <-- LCEL

result = chain.invoke({"text": "Good morning"})
# 'Bonjour'`}</CodeBlock>

      <p>
        <code>prompt | model | parser</code> creates a <code>RunnableSequence</code>. The output of <code>prompt</code>
        (a <code>PromptValue</code>) becomes the input of <code>model</code>; its <code>AIMessage</code> output becomes the
        input of <code>parser</code>; <code>parser</code> returns a string.
      </p>
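      <p>
        You can watch that hand-off by invoking each step by hand (the commented outputs are illustrative):
      </p>
      <CodeBlock file="lcel_steps.py">{`pv = prompt.invoke({"text": "Good morning"})   # PromptValue
msg = model.invoke(pv)                          # AIMessage
parser.invoke(msg)                              # 'Bonjour'`}</CodeBlock>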

      <Callout kind="intuition" title="Pipes = function composition">
        <code>a | b | c</code> is just <code>lambda x: c(b(a(x)))</code> — but with input/output schemas, streaming
        support, async, batching, and tracing for free. That's the entire trick.
      </Callout>
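      <p>
        Because the composed chain is itself a Runnable, the rest of the protocol comes along for free. A quick
        sketch reusing <code>chain</code> from above (outputs illustrative):
      </p>
      <CodeBlock file="lcel_free_methods.py">{`# Streaming: StrOutputParser passes token chunks through as they arrive
for chunk in chain.stream({"text": "Good morning"}):
    print(chunk, end="", flush=True)

# Batching: the inputs run in parallel
chain.batch([{"text": "Good morning"}, {"text": "Good night"}])
# ['Bonjour', 'Bonne nuit']`}</CodeBlock>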

      <SectionTitle num="4.3">Try it yourself</SectionTitle>
      <p>
        Click the blocks below to compose a chain; the Python code updates live, and the simulated output changes
        based on what you pipe at the end.
      </p>

      <LCELBuilder />

      <SectionTitle num="4.4">Branching and merging — RunnableParallel</SectionTitle>
      <p>
        Real chains aren't always linear. Sometimes you want to fan out — run a retriever <em>and</em> a question through
        the prompt at the same time. <code>RunnableParallel</code> (or just a dict literal) handles that:
      </p>
      <CodeBlock file="rag_chain.py">{`from langchain_core.runnables import RunnablePassthrough

retriever = vectorstore.as_retriever()

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

rag_chain.invoke("What is an MCP server?")`}</CodeBlock>

      <p>
        That dict literal is sugar for <code>{`RunnableParallel({"context": retriever, "question": RunnablePassthrough()})`}</code>.
        It runs both branches in parallel; the result is a dict with both keys. The prompt template then reads
        <code>{"{context}"}</code> and <code>{"{question}"}</code> from it.
      </p>
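      <p>
        To see that intermediate dict, invoke the parallel step on its own. A sketch reusing the objects above
        (output abbreviated):
      </p>
      <CodeBlock file="fanout.py">{`from langchain_core.runnables import RunnableParallel, RunnablePassthrough

fanout = RunnableParallel({"context": retriever, "question": RunnablePassthrough()})

fanout.invoke("What is an MCP server?")
# {'context': [Document(...), ...], 'question': 'What is an MCP server?'}`}</CodeBlock>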

      <SectionTitle num="4.5">Why LCEL matters for agents</SectionTitle>
      <p>
        Under the hood, an agent's "decide what to do" step is also a chain: <code>prompt | model.bind_tools(tools)</code>.
        The agent loop calls <code>.invoke()</code> on that chain each turn. Streaming, batching, and tracing all come
        for free because of LCEL.
      </p>
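      <p>
        A minimal sketch of that decide step, assuming the <code>model</code> from earlier and a stub tool
        (<code>get_weather</code> is invented for illustration):
      </p>
      <CodeBlock file="agent_step.py">{`from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"  # stub

agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("placeholder", "{messages}"),
])

decide = agent_prompt | model.bind_tools([get_weather])

ai_msg = decide.invoke({"messages": [("user", "What's the weather in Paris?")]})
ai_msg.tool_calls
# [{'name': 'get_weather', 'args': {'city': 'Paris'}, ...}]`}</CodeBlock>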

      <Callout kind="tip" title="When NOT to use LCEL">
        Loops, conditional branches that depend on intermediate results, retries with state — with these, LCEL gets
        awkward.
        That's exactly the gap LangGraph fills (a graph runtime built on LCEL primitives). For agents, the modern
        recommendation is to use a graph for the loop and LCEL for the inner steps.
      </Callout>
    </section>
  );
}

window.Chapter04 = Chapter04;
