
Mapping LangChain Knowledge into LangGraph

Explore how to map your existing LangChain skills into LangGraph to build stateful, controllable AI workflows. Understand key differences in control flow, shared data, and branching logic to decide when to choose graph-based workflows over linear chains.

If you have used LangChain before, you have already built the foundations this course relies on. You know how to wrap a language model, write a prompt template, parse a model’s output, and chain a few steps together. Those skills carry forward into LangGraph without modification.

LangGraph is not a replacement for LangChain. It is built on top of it. Every model call, retriever, tool, and prompt template you know from LangChain still works inside a LangGraph node. What changes is the layer above: how we connect those calls, how we pass data between them, and how we control which steps run.
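To make the node idea concrete, here is a minimal sketch of what a LangGraph-style node function looks like. The model call is stubbed out so the snippet runs on its own; in a real node, a LangChain model wrapper such as ChatGroq would be invoked at that point. The state keys `question` and `answer` are illustrative, not required names.

```python
# Sketch: a LangGraph-style node. The model call is stubbed out so the
# example runs without an API key; in a real node you would invoke a
# LangChain model wrapper (e.g. ChatGroq) here instead.
def fake_llm(prompt: str) -> str:
    return f"Answer based on: {prompt}"

def model_node(state: dict) -> dict:
    """A node receives the shared state and returns a partial update."""
    prompt = f"Question: {state['question']}"
    response = fake_llm(prompt)   # the LangChain call would go here
    return {"answer": response}   # merged back into the shared state

state = {"question": "What is LangGraph?"}
state.update(model_node(state))
print(state["answer"])
```

The important shape to notice: the node does not call the next step itself. It only reads from and writes to the shared state; the graph decides what runs next.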

This lesson is about making that handoff clear. We will look at what stays the same, what changes, and how to recognise which tool to reach for when starting a new project.

What stays the same

The core building blocks of LangChain work inside LangGraph nodes without any modification. The following table maps familiar LangChain patterns to their role in a LangGraph workflow.

| LangChain concept | What you do with it | Where it lives in LangGraph |
| --- | --- | --- |
| Model wrapper (ChatGroq, genai) | Call the language model | Inside a model node function |
| Prompt template | Build structured prompts | Inside a model node function |
| Output parser | Extract structured data from model output | Inside a model node, or a dedicated parsing node |
| Retriever | Fetch relevant documents | Inside a retrieval node |
| Tool function | Call external APIs or search engines | Inside a tool node |
| Chain (LLMChain, LCEL pipe) | Sequence steps together | Replaced by nodes and edges |

The key difference in that last row is intentional. In LangChain, a chain is the glue that sequences steps together. In LangGraph, that job moves to edges. The steps themselves (the model calls, the retrievers, and the tools) remain unchanged. Only the connective tissue is different.
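One way to see "edges as the connective tissue" is a plain-Python sketch. This does not use the langgraph API at all; the node names and the `edges` mapping are illustrative, showing only the idea that a routing table, rather than a chain object, decides which step runs next.

```python
# Illustrative only: nodes are functions over a shared state dict,
# and an edges mapping (not a chain) decides which step runs next.
def retrieve_node(state):
    return {"context": "Refunds are allowed within 30 days."}

def generate_node(state):
    return {"answer": f"Based on policy: {state['context']}"}

nodes = {"retrieve": retrieve_node, "generate": generate_node}
edges = {"START": "retrieve", "retrieve": "generate", "generate": "END"}

def run(state):
    current = edges["START"]
    while current != "END":
        state.update(nodes[current](state))  # node returns a state update
        current = edges[current]             # the edge picks the next step
    return state

result = run({"question": "What is the refund policy?"})
print(result["answer"])
```

Because routing lives outside the nodes, you can later swap a fixed edge for a conditional one without touching the node functions, which is exactly the flexibility LangGraph's edges provide.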

A side-by-side comparison

The clearest way to see the shift is to build the same workflow in both styles. We will use a simple retrieval-and-answer assistant: given a user question, retrieve relevant content, then generate a response.

The LangChain approach

In LangChain, this is a sequential chain. Each step feeds directly into the next. The retriever returns documents, those documents become part of the prompt, and the model generates a response.

Python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant", api_key="{{GROQ_API_KEY}}")

prompt = ChatPromptTemplate.from_template(
    "Answer the question using the context below.\n\n"
    "Context: {context}\n\n"
    "Question: {question}"
)

# A simple mock retriever for illustration
def retrieve(question: str) -> str:
    return "Our refund policy allows full refunds within 30 days of purchase."

chain = prompt | llm | StrOutputParser()

question = "What is the refund policy?"
context = retrieve(question)
answer = chain.invoke({"context": context, "question": question})
print(answer)
  • Lines 1–3: Import LangChain components. ChatPromptTemplate structures the prompt. StrOutputParser extracts the text content from the model's response.