Introduction to the Course

Learn how this course is structured and what you’ll build with LlamaIndex.

Welcome! This course explores how to use LlamaIndex to build practical, intelligent systems powered by large language models (LLMs). Whether you’re working on a chatbot, a document assistant, or a structured data pipeline, LlamaIndex gives you the tools to move from concept to real-world application.

What you’ll learn

By the end of this course, you’ll be able to:

  • Understand the core components of LlamaIndex and how they fit into modern AI workflows.

  • Build retrieval-augmented generation (RAG) pipelines for document-based question answering.

  • Design single-agent and multi-agent systems that use tools, memory, and context.

  • Extract structured information from unstructured text using schema-based methods.

  • Monitor and debug your applications with built-in tracing and evaluation tools.

  • Combine these capabilities to build full-featured AI applications, such as a document Q&A assistant, a resume optimizer, or a lesson plan generator.

Tools you’ll use

Throughout the course, you’ll use:

  • LlamaIndex for retrieval, agents, memory, workflows, and evaluation. It provides a unified, modular interface for building powerful LLM applications without reinventing the wheel.

  • Ollama for local embedding generation and lightweight local LLMs. It enables fast, offline processing without cloud API costs, making it ideal for development and experimentation.

  • Groq as our LLM backend for high-performance, low-latency inference. While Ollama works well for local models, Groq’s hosted infrastructure offers exceptional speed for large models like LLaMA 3–70B, which may not run efficiently on most local setups.

  • Streamlit to build simple, interactive frontends—perfect for prototyping and sharing AI applications with minimal setup or UI overhead.


Course structure

This course is divided into the following chapters:

  1. Core Concepts and Using LLMs
    Learn the foundational building blocks of LlamaIndex and how it integrates with LLMs.

  2. Building a RAG Pipeline
    Create retrieval-augmented systems that answer questions based on document content.

  3. Agents and Workflows
    Implement intelligent agents, memory-enhanced systems, and custom multi-step workflows.

  4. Extracting Structured Outputs from LLMs
    Learn how to extract structured fields from raw text using Pydantic schemas and LLM guidance.

  5. Monitoring and Evaluating LLM Applications
    Use tracing, debugging, and performance evaluation to improve and troubleshoot your AI systems.

  6. Building Real-World Applications with LlamaIndex
    Build real-world applications like a multi-turn document Q&A assistant, a resume analyzer, and a lesson planner using everything you’ve learned.

Let’s get started!

To take this course, you should have basic Python knowledge and some familiarity with language models, but you don’t need prior experience with LlamaIndex. Each lesson builds on the one before it, so follow along, experiment with the code, and start building intelligent, flexible AI applications.