Core Components of LangChain

Explore LangChain's three functional layers that simplify building large language model applications. Understand the model interaction layer, data pipeline layer, and orchestration layer to design scalable, modular LLM pipelines. This lesson helps you grasp key primitives like Models, Prompts, Document Loaders, Vector Stores, Chains, Memory, and Agents to create adaptable AI solutions.

Building LLM applications without an orchestration framework means writing custom glue code for every provider, every data source, and every conversation turn. The previous lesson showed how this approach produces brittle pipelines where changing a single model provider can break the entire system. LangChain solves this by offering a set of standardized building blocks, each responsible for one job, that snap together like interchangeable parts.

This lesson maps each of those pain points to a specific LangChain primitive. LangChain organizes its components into three functional layers. The model interaction layer handles communication with LLMs through Models, Prompts, and Output Parsers. The data pipeline layer manages document ingestion and retrieval through Document Loaders, Text Splitters, Vector Stores, and Retrievers. The orchestration layer wires everything together through Chains, Memory, and Agents.

To make these layers concrete, consider a customer-support RAG chatbot. It must load knowledge-base articles, retrieve relevant passages, call a chat model with a structured prompt, parse the response into JSON, and remember prior conversation turns. Every one of LangChain’s ten primitives plays a role in that pipeline. Understanding these layers is the prerequisite for the next lesson, which dives deep into the Model layer itself.
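The flow of that chatbot scenario can be sketched in plain Python. This is not LangChain code: every function here (`retrieve`, `build_prompt`, `fake_llm`) is an illustrative stand-in for the corresponding primitive, chosen only to make the hand-offs between the three layers visible.

```python
# Pure-Python sketch of the support-chatbot pipeline: each stand-in marks
# where a LangChain primitive would sit. Names are illustrative, not APIs.
import json

# Stand-in for Document Loaders + Text Splitters: a pre-chunked knowledge base.
knowledge_base = [
    "Refunds are issued within 14 days of purchase.",
    "Password resets are handled via the account settings page.",
]

def retrieve(question: str, docs: list[str]) -> list[str]:
    # Stand-in for a Vector Store + Retriever: naive keyword overlap
    # instead of embedding similarity.
    words = set(question.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def build_prompt(question: str, passages: list[str], history: list[str]) -> str:
    # Stand-in for a Prompt template: inject context, history, and the question.
    return (
        "Context:\n" + "\n".join(passages)
        + "\nHistory:\n" + "\n".join(history)
        + f"\nQuestion: {question}\nAnswer as JSON."
    )

def fake_llm(prompt: str) -> str:
    # Stand-in for the chat Model call: returns a canned JSON answer.
    return json.dumps({"answer": "Refunds are issued within 14 days."})

history: list[str] = []                             # Memory: prior turns
question = "How long do refunds take?"
passages = retrieve(question, knowledge_base)       # data pipeline layer
prompt = build_prompt(question, passages, history)  # model interaction layer
reply = json.loads(fake_llm(prompt))                # Output Parser: text -> dict
history.append(question)                            # Memory: record this turn

print(reply["answer"])
```

A Chain (or, in modern LangChain, a Runnable sequence) would replace the hand-written wiring in the last six lines; that orchestration role is exactly what the third layer provides.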

The following diagram shows how these three layers relate to each other and maps each step of the chatbot scenario to its corresponding component.

LangChain three-layer architecture showing how RAG chatbot pipeline maps to ten primitives

The model interaction layer

The first three primitives govern how your application communicates with language models and structures their output. They form the interface between your code and the LLM itself.

Models

LangChain wraps every LLM behind a uniform interface. Rather than learning a different SDK for each provider, you interact with a single abstraction that works the same way regardless of the backend. There are three ...
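The value of that uniform interface can be sketched without LangChain at all. In the sketch below, the base class and the two fake providers are illustrative stand-ins (not LangChain's real classes): the point is that application code depends only on the shared `invoke` contract, so swapping backends touches one line.

```python
# Hedged sketch of the "uniform model interface" idea: every backend
# implements the same invoke() contract. Class names are illustrative.
from abc import ABC, abstractmethod

class BaseChatModel(ABC):
    @abstractmethod
    def invoke(self, prompt: str) -> str:
        """Send a prompt to the backend and return its reply as text."""

class FakeOpenAIModel(BaseChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[openai-style reply to: {prompt}]"

class FakeAnthropicModel(BaseChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[anthropic-style reply to: {prompt}]"

def answer(model: BaseChatModel, question: str) -> str:
    # Application code never mentions a concrete provider.
    return model.invoke(question)

print(answer(FakeOpenAIModel(), "Hi"))
print(answer(FakeAnthropicModel(), "Hi"))
```

Changing providers means constructing a different `BaseChatModel` subclass; `answer` and everything downstream stay untouched, which is the brittleness fix the lesson opened with.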