
Implementing ReAct with AWS Step Functions and Amazon Bedrock

Takes 120 mins

Modern generative AI applications require more than simply calling a large language model to perform a task. To build reliable agents, you must control how the model reasons, when it invokes tools, how conversation history is maintained, and how execution is safely terminated. Without structured orchestration, models can hallucinate tool calls, lose context, or enter uncontrolled loops. A production-ready AI system needs a deterministic workflow around the model.

In this Cloud Lab, you'll build a ReAct-based AI travel assistant using AWS Step Functions, AWS Lambda, and Amazon Bedrock. The model follows a structured reasoning pattern in which it generates a “thought,” optionally emits a machine-readable “action” in JSON to invoke a tool, and produces a “final answer” once sufficient information is available. AWS Step Functions orchestrates this loop by evaluating model outputs, dynamically invoking tools, managing conversation state, enforcing loop limits, and handling errors. In this scenario, when a user asks about the weather in a city and whether they should book a hotel with specific features, the model first reasons that it needs weather data, invokes a weather function, receives an observation, and then incorporates that information into its next reasoning step before generating a grounded recommendation.
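The loop hinges on distinguishing a tool-invoking "action" from a terminal "final answer" in the model's output. The snippet below sketches what that structured output might look like and how a parser could classify it; the field names (`thought`, `action`, `final_answer`) are an illustrative assumption, not the lab's exact schema.

```python
import json

# Hypothetical examples of the two kinds of structured output the model
# can emit in this ReAct setup. Field names are assumptions for illustration.
action_output = json.dumps({
    "thought": "I need the current weather in Seattle before recommending a hotel.",
    "action": {"tool": "get_weather", "input": {"city": "Seattle"}},
})

final_output = json.dumps({
    "thought": "I now have the forecast, so I can answer.",
    "final_answer": "It will rain in Seattle; book a hotel with an indoor pool.",
})

def classify(model_output: str) -> str:
    """Mirror the workflow's decision point: act again, or finish."""
    parsed = json.loads(model_output)
    return "action" if "action" in parsed else "final_answer"

print(classify(action_output))  # action
print(classify(final_output))   # final_answer
```

Keeping the action machine-readable is what lets the orchestration layer branch deterministically instead of pattern-matching on free-form text.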

You'll begin by creating and configuring two AWS Lambda functions.

  • The first Lambda function acts as the tool execution layer. It parses structured action JSON from the model output and dynamically invokes simulated tools.

  • The second Lambda function manages conversation history by appending assistant reasoning and tool observations to the message list, ensuring the model receives updated context on each iteration.
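The two handlers above could be sketched roughly as follows. This is a minimal illustration with a stubbed weather tool; the tool registry, event shapes, and field names are assumptions, not the lab's exact contract.

```python
import json

def get_weather(city: str) -> dict:
    # Hypothetical stub standing in for a real weather lookup.
    return {"city": city, "forecast": "rain", "temp_f": 52}

# Simulated tool registry; the lab wires tools up in its own way.
TOOLS = {"get_weather": get_weather}

def execute_tool_handler(event, context=None):
    """First Lambda: parse the model's action JSON and invoke the tool."""
    action = event["action"]  # e.g. {"tool": "get_weather", "input": {"city": "Seattle"}}
    tool = TOOLS.get(action["tool"])
    if tool is None:
        return {"observation": f"Error: unknown tool '{action['tool']}'"}
    return {"observation": tool(**action["input"])}

def update_memory_handler(event, context=None):
    """Second Lambda: append assistant reasoning and the tool observation
    to the message list so the next model call sees updated context."""
    messages = list(event["messages"])
    messages.append({"role": "assistant", "content": event["assistant_output"]})
    messages.append(
        {"role": "user", "content": f"Observation: {json.dumps(event['observation'])}"}
    )
    return {"messages": messages}
```

Separating tool execution from memory management keeps each function single-purpose, which makes the state machine's retry and error-handling configuration simpler.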

Next, you'll configure an AWS Step Functions state machine that initializes the agent state, invokes Amazon Bedrock for reasoning, evaluates whether the model returned an “action” or a “final answer”, executes tools when required, merges tool results back into memory, increments loop counters, and safely re-invokes the model until completion.
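The control flow of that state machine can be sketched as a plain Python loop, with a stubbed stand-in for the Bedrock call. This is a sketch of the orchestration logic only, not the Amazon States Language definition itself; the stub's behavior and the loop limit are assumptions.

```python
import json

MAX_LOOPS = 5  # loop guard, analogous to the state machine's counter check

def fake_bedrock_invoke(messages):
    # Hypothetical stand-in for the Bedrock model call: request the weather
    # tool first, then answer once an observation is present in memory.
    if any("Observation:" in m["content"] for m in messages):
        return {"final_answer": "Rain expected; choose a hotel with an indoor pool."}
    return {"thought": "Need weather data.",
            "action": {"tool": "get_weather", "input": {"city": "Seattle"}}}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]      # initialize state
    for _ in range(MAX_LOOPS):                                # increment-and-check
        output = fake_bedrock_invoke(messages)                # invoke the model
        if "final_answer" in output:                          # Choice: done?
            return output["final_answer"]
        observation = {"forecast": "rain"}                    # tool Lambda (stubbed)
        messages.append({"role": "assistant", "content": json.dumps(output)})
        messages.append({"role": "user",
                         "content": f"Observation: {json.dumps(observation)}"})
    return "Stopped: loop limit reached."                     # safe termination

print(run_agent("Weather in Seattle? Should I book a hotel with a pool?"))
```

The loop counter is the safety net: even if the model never emits a final answer, execution terminates deterministically instead of running up costs.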

By the end of this Cloud Lab, you'll understand how to implement the ReAct reasoning pattern with Amazon Bedrock, orchestrate multi-step AI workflows using AWS Step Functions, dynamically execute tools with AWS Lambda, and design a controlled, production-grade AI agent architecture on AWS.

Implement an agentic ReAct workflow using AWS Step Functions, Amazon Bedrock, and AWS Lambda

What is ReAct?

The ReAct (Reason + Act) framework, introduced by researchers at Google and Princeton, addresses a fundamental limitation of LLMs: the lack of transparent reasoning. Without ReAct, a model often jumps to a conclusion in a single pass. With ReAct, the model follows an iterative loop of reasoning traces and task-specific actions.

  • Thought: The model talks to itself to create a plan. It articulates what it knows and what it is missing (e.g., “I have the user’s destination, but I do not have the current weather forecast for Seattle”).

  • Action: Based on the thought, the model selects a tool from its toolbox (an API, a database, or a calculator) and formats a request.

  • Observation: The external system provides the data. The model treats this as a new fact in its memory, which triggers the next cycle of thought.
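One full cycle for the travel-assistant scenario, written out as data, might look like this. The wording is illustrative, not the lab's exact prompt format.

```python
# A single Thought -> Action -> Observation cycle, with the follow-up
# thought that the observation triggers. All content is illustrative.
cycle = [
    {"step": "thought",
     "content": "I have the user's destination (Seattle) but not the forecast."},
    {"step": "action",
     "content": {"tool": "get_weather", "input": {"city": "Seattle"}}},
    {"step": "observation",
     "content": {"city": "Seattle", "forecast": "rain", "temp_f": 52}},
    {"step": "thought",
     "content": "It will rain, so an indoor pool matters for this hotel."},
]

for entry in cycle:
    print(entry["step"].title(), "->", entry["content"])
```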

Why use ReAct?

ReAct is the industry standard for building trustworthy agents for three main reasons:

  • Error correction: If a tool returns an error (e.g., City not found), the model’s next “thought” can recognize the mistake and try a different search term instead of failing.

  • Context grounding: It prevents hallucinations by anchoring the model’s final response in the “observations” it gathered during the loop. The model doesn’t have to guess the weather; it is looking at the API response.

  • Interpretability: In a production environment, you can log the “thoughts.” If an agent makes a weird recommendation, you can look at the logs to see exactly where its reasoning went off the rails.
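The error-correction behavior described above can be sketched in miniature: a failed tool call produces an error observation, and the next reasoning step retries with a corrected input. The alias table and city data here are hypothetical.

```python
# Assumed normalization table the agent's next "thought" could draw on
# after a "City not found" observation.
ALIASES = {"NYC": "New York"}

def get_weather(city: str) -> dict:
    known = {"New York": "sunny", "Seattle": "rain"}
    if city not in known:
        return {"error": "City not found"}
    return {"city": city, "forecast": known[city]}

def react_step(city: str) -> dict:
    obs = get_weather(city)
    if "error" in obs and city in ALIASES:
        # Next thought: the lookup failed, so retry with the canonical name
        # instead of giving up or hallucinating a forecast.
        obs = get_weather(ALIASES[city])
    return obs

print(react_step("NYC"))  # recovers: {'city': 'New York', 'forecast': 'sunny'}
```

Because the retry happens as an explicit reasoning step, the recovery is visible in the logs rather than buried inside an opaque model response.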