ReAct, Self-Ask, and Structured Chat Agents
Understand how to design LLM agents using three key reasoning patterns: ReAct for flexible multi-step tool use, Self-Ask for structured follow-up questions with search, and Structured Chat for JSON-based tool calls. Learn to select the right pattern based on task complexity and tool diversity, improving reliability and monitoring in agent workflows.
With tools, prompt templates, and memory wired into your agent, one critical design decision remains: how does the agent decide what to do next? A single LLM call can generate fluent text, but it cannot reliably solve tasks that require gathering evidence across multiple steps, checking intermediate results, and adjusting course. Consider a customer-support agent that must look up an order status, check warehouse inventory for a replacement, and then compose a helpful response. A naive single-shot prompt would force the model to guess at all three answers simultaneously, almost guaranteeing hallucinated order numbers or fabricated stock levels.
This is where reasoning patterns come in. Each pattern structures the space between “receive a user query” and “return a final answer” differently, controlling how the LLM thinks, which tools it calls, and in what order. This lesson implements three foundational patterns using LangChain, compares their execution traces, and identifies where each one excels or breaks down.
ReAct: interleaving thought and action
The ReAct pattern, short for Reasoning + Acting, forces the LLM to alternate between explicit reasoning steps and tool invocations. Instead of jumping straight to a tool call, the model first writes out a Thought explaining what it knows so far and what it needs next. It then emits an Action specifying the tool name and input. The framework executes that tool and returns the result as an Observation. This cycle repeats until the model is confident enough to produce a Final Answer.
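To make the cycle concrete, here is a minimal, self-contained sketch of the ReAct control loop in plain Python. The model is stubbed out with a `fake_llm` function and the `search` tool is a hypothetical lookup table; in a real agent, an LLM call and real tools would take their places. The loop itself, however, follows the Thought → Action → Observation → Final Answer cycle described above.

```python
import re

TOOLS = {
    # Hypothetical tool standing in for a real search or API call.
    "search": lambda q: {"capital of France": "Paris"}.get(q, "not found"),
}

def fake_llm(transcript: str) -> str:
    """Stand-in for the model: emits a Thought plus an Action, then a
    Final Answer once an Observation appears in the transcript."""
    if "Observation:" not in transcript:
        return ("Thought: I should look this up rather than guess.\n"
                "Action: search[capital of France]")
    return ("Thought: The observation answers the question.\n"
            "Final Answer: Paris")

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool[input]", run the tool, and feed the
        # result back as an Observation for the next reasoning step.
        match = re.search(r"Action: (\w+)\[(.*)\]", step)
        if match:
            tool, arg = match.groups()
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return "no answer within step budget"
```

The `max_steps` cap mirrors what agent frameworks call a maximum-iteration limit: without it, a model that never emits a Final Answer would loop forever.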
How the loop works
The key innovation is that verbalizing reasoning before acting reduces hallucinated tool calls. When the model must articulate why it is calling a search tool, it is less likely to fabricate an answer it could have simply looked up. LangChain’s create_react_agent utility manages this loop by appending each Thought, Action, and Observation to the