Customer support automation is a common and high-ROI use case for generative AI. Combine retrieval-augmented generation (RAG), sentiment detection for routing, and human-in-the-loop review to improve response speed while keeping answers accurate and policy-compliant. On AWS, you can use Amazon Bedrock for generation and knowledge bases, Amazon Comprehend for sentiment signals, and SageMaker Augmented AI (A2I) to route uncertain responses to human review. In this Cloud Lab, you’ll create an S3 vector bucket, two standard S3 buckets, and a Bedrock Knowledge Base seeded with FAQ data.
You’ll then configure a supervisor agent using LangGraph to orchestrate the workflow. The supervisor receives the user question and routes it to the appropriate specialized agent: a retrieval agent that fetches answers from the knowledge base, a sentiment analysis agent powered by Amazon Comprehend, or, if needed, a human reviewer via SageMaker A2I. Finally, the supervisor collects the results and generates a polished response using Bedrock LLMs, ensuring either an automated answer or a seamless human escalation.
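The routing logic above can be sketched in plain Python. This is an illustrative stand-in, not the lab's actual implementation: the agent functions, the tiny FAQ dictionary, and the confidence threshold are all hypothetical placeholders for the real LangGraph graph backed by Bedrock, Comprehend, and A2I.

```python
def retrieval_agent(question: str) -> dict:
    # Hypothetical stand-in for a Bedrock Knowledge Base lookup.
    faq = {"refund": "Refunds are processed within 5 business days."}
    for keyword, answer in faq.items():
        if keyword in question.lower():
            return {"answer": answer, "confidence": 0.9}
    return {"answer": None, "confidence": 0.0}

def sentiment_agent(question: str) -> str:
    # Hypothetical stand-in for Amazon Comprehend sentiment detection.
    upset_words = ("angry", "terrible", "furious")
    if any(w in question.lower() for w in upset_words):
        return "NEGATIVE"
    return "NEUTRAL"

def human_review(question: str) -> str:
    # Hypothetical stand-in for starting a SageMaker A2I human loop.
    return f"[escalated to human reviewer] {question}"

def supervisor(question: str) -> str:
    # Route to specialized agents, then escalate when retrieval is
    # unsure or the customer appears upset.
    sentiment = sentiment_agent(question)
    result = retrieval_agent(question)
    if result["confidence"] < 0.5 or sentiment == "NEGATIVE":
        return human_review(question)
    return result["answer"]
```

In the lab itself, these steps become nodes in a LangGraph `StateGraph`, with conditional edges playing the role of the `if` branch here.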
By the end of this Cloud Lab, you’ll know how to design and deploy a multi-agent customer support workflow that balances AI-driven automation with human judgment. You will gain hands-on experience integrating multiple AWS services into a single, graph-driven pipeline — a skill set directly applicable to building real-world AI-powered applications in customer service and beyond.
The following is the high-level architecture diagram of the infrastructure you will create in this Cloud Lab: