CLOUD LABS
Building Multiple Agents Using CrewAI and Bedrock
In this Cloud Lab, you’ll build CrewAI agents with Amazon Bedrock to future-proof your skills by creating Knowledge Bases, using foundation models, and integrating vector stores.
intermediate
Certificate of Completion
Learning Objectives
Amazon Bedrock is a service that provides foundation models from companies like Anthropic, Cohere, and Meta, along with features for building generative AI applications. You can use Amazon Bedrock to build knowledge bases on top of these foundation models and then create AI agents using third-party platforms such as CrewAI, a framework for creating, coordinating, and managing AI agents.
In this Cloud Lab, you’ll create an S3 bucket and upload data about a hypothetical company. You’ll then create an Aurora cluster and use it as the vector store for the knowledge base, storing the Aurora cluster credentials in AWS Secrets Manager. After this, you’ll enable the Amazon Titan foundation model and use it to create a knowledge base in Amazon Bedrock. Finally, you’ll create CrewAI agents that use this knowledge base to reply to user queries, and test the agents by assigning them different tasks.
After completing this lab, you’ll be well-equipped to use Bedrock Knowledge Bases and foundation models in your AI applications and to build CrewAI agents integrated with Amazon Bedrock. The following is the high-level architecture diagram of the infrastructure you’ll create in this Cloud Lab:
Why multi-agent systems are showing up everywhere
As GenAI apps mature, teams run into the limits of “one prompt does everything.” Complex problems often require multiple skills: researching, planning, generating, verifying, formatting, and applying domain rules. Multi-agent systems address this by splitting work into specialized roles and coordinating them through a shared workflow.
This approach is useful because it can:
Reduce prompt bloat by keeping each agent focused.
Improve reliability by constraining responsibilities.
Make complex outputs more structured (plans, reports, recommendations).
Encourage verification steps before final responses.
Scale to more domains by adding agents instead of rewriting a monolithic prompt.
What CrewAI adds to the agentic toolbox
CrewAI is an orchestration framework for building multi-agent workflows. The core idea is simple: define agents with specific roles, define the tasks they should perform, and run the “crew” so that outputs move from one step to the next.
In practice, frameworks like CrewAI help you:
Create clear agent boundaries (researcher vs. writer vs. reviewer).
Encode collaboration patterns (sequential, parallel, or hybrid).
Standardize prompts, tools, and output formatting.
Reuse workflows for different inputs without redesigning everything.
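The hand-off idea above can be sketched with plain Python stubs. This is an illustrative sketch only: real CrewAI code would define `Agent`, `Task`, and `Crew` objects backed by an LLM, whereas the researcher/writer/reviewer functions here are hypothetical placeholders for that behavior.

```python
# Sketch of CrewAI-style sequential orchestration using plain Python stubs.
# Each function stands in for an LLM-backed agent with a narrow role.

def researcher(topic: str) -> str:
    # Stub "researcher" agent: would normally call an LLM with a research prompt.
    return f"notes on {topic}"

def writer(notes: str) -> str:
    # Stub "writer" agent: turns the researcher's notes into a draft.
    return f"draft based on: {notes}"

def reviewer(draft: str) -> str:
    # Stub "reviewer" agent: finalizes formatting and tone.
    return draft.capitalize() + " [reviewed]"

def run_crew(topic: str) -> str:
    """Run the agents sequentially, passing each output to the next step."""
    output = topic
    for agent in (researcher, writer, reviewer):
        output = agent(output)
    return output

print(run_crew("vector stores"))
```

The key property, which carries over to real crews, is that each step sees only the previous step's output, so each agent's prompt can stay small and focused.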
Where Amazon Bedrock fits
Amazon Bedrock typically serves as the model layer in these systems, powering agent reasoning and generation. In an AWS-centric architecture, Bedrock becomes the foundation model backbone while the rest of your system provides:
Data access and retrieval (knowledge bases, storage, databases).
Tool execution (serverless functions, APIs).
Guardrails and security (permissions, logging, monitoring).
The bigger takeaway is that multi-agent workflows are most useful when agents can ground their outputs in tools and data, not just generate text.
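As a concrete sketch of Bedrock as the model layer, the snippet below only assembles a request for the bedrock-runtime Converse API without sending it; the model ID is an assumption for illustration, and the commented-out `boto3` call would require AWS credentials and model access.

```python
# Build (but do not send) a Converse API request for a single-turn user message.
# The model ID below is an assumption used for illustration.

def build_converse_request(model_id: str, user_text: str) -> dict:
    """Assemble a bedrock-runtime Converse API request payload."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]}
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

request = build_converse_request(
    "amazon.titan-text-express-v1",  # assumed model ID
    "Summarize the refund policy using the knowledge base excerpt below.",
)

# To actually invoke the model (requires credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])

print(request["modelId"])
```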
Common multi-agent patterns you can reuse
If you’re building agentic systems, these patterns show up repeatedly:
Specialist roles with a final editor: One agent researches, another drafts, and an editor agent finalizes formatting and tone.
Planner and executor: A planning agent decomposes the problem into steps, and one or more executor agents carry them out.
Retriever and synthesizer: One agent focuses on retrieval and citations, and another focuses on synthesis and explanation.
Critic/verifier loop: A critic agent checks for mistakes, missing constraints, or weak reasoning before the final output is returned.
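The critic/verifier pattern above can be reduced to a small retry loop: draft, check, and redraft until the critic accepts or a budget runs out. Both functions here are hypothetical stand-ins for LLM-backed agents.

```python
# Minimal sketch of a critic/verifier loop with a retry budget.

def draft(topic: str, attempt: int) -> str:
    # Stub drafter: later attempts produce a draft that includes a citation.
    text = f"Answer about {topic}"
    return text + " [source: docs]" if attempt > 0 else text

def critic(text: str) -> list[str]:
    # Stub critic: returns a list of problems; an empty list means acceptance.
    return [] if "[source:" in text else ["missing citation"]

def run_with_verification(topic: str, max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        candidate = draft(topic, attempt)
        if not critic(candidate):
            return candidate
    raise RuntimeError("critic rejected all drafts")

print(run_with_verification("Aurora vector stores"))
```

In a real system the critic's problem list would be fed back into the next drafting prompt rather than discarded.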
How to make multi-agent systems dependable
Multi-agent systems can also fail in predictable ways: redundant work, conflicting outputs, unnecessary token usage, or tool misuse. A few guardrails help:
Keep roles narrowly defined and outputs structured.
Use tools for facts and retrieval instead of “best guesses.”
Add constraints (schemas, checklists, acceptance criteria).
Evaluate workflows with representative test inputs.
Log intermediate outputs so you can debug behavior over time.
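The "structured outputs" and "add constraints" guardrails can be as simple as a schema check before an agent's output is accepted. A real system might use JSON Schema or Pydantic; this stdlib-only sketch, with hypothetical field names, illustrates the idea.

```python
# Validate an agent's output dict against a simple field/type schema.
# Field names here are hypothetical examples.

REQUIRED_FIELDS = {"summary": str, "recommendations": list}

def validate(output: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the output passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in output:
            problems.append(f"missing field: {field}")
        elif not isinstance(output[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

good = {"summary": "ok", "recommendations": ["add logging"]}
bad = {"summary": 42}

print(validate(good))
print(validate(bad))
```

Rejected outputs can be routed back to the producing agent with the violation list, which pairs naturally with the critic/verifier loop above.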
If you can build a small multi-agent workflow that produces consistent results, you can scale the same approach to more complex product use cases.
Before you start...
Try these optional labs first.
Relevant Courses
Use the following content to review prerequisites or explore specific concepts in detail.
Felipe Matheus
Software Engineer
Adina Ong
Senior Engineering Manager
Clifford Fajardo
Senior Software Engineer
Thomas Chang
Software Engineer
Copyright ©2026 Educative, Inc. All rights reserved.