Tool-Augmented LangGraph Nodes

Explore how to enhance LangGraph workflows by using tool-augmented nodes that call Python functions to retrieve live data. Understand how to separate tool results and model outputs in state, implement routing for fact and pricing queries, and build grounded responses that reduce hallucination risk by providing specific data for the language model to base its answers on.

A language model answering from memory will produce a fluent response whether or not it actually knows the facts involved. The solution is grounding: give the model the relevant data before asking it to generate a response. Instead of asking the model to recall your refund policy, look it up first, pass the retrieved text to the model, and ask it to answer based on that specific content. The model becomes a reasoning and language layer on top of data you control.

This lesson shows how to build that pattern in LangGraph. The key pieces are a tool function that retrieves data, a dedicated node that calls the tool and stores the result in state, and a synthesis node that combines the tool output with the user’s question to produce a grounded response.

Tools as plain Python functions

In LangGraph, a tool is an ordinary Python function. It takes arguments, does something, and returns a result. It might call a database, query an API, run a search, perform a calculation, or look something up in a dictionary. The function does not know it is being called from a graph. It has no special LangGraph interface to implement.

What makes something a “tool” in the context of this lesson is how it is used: the function is called from inside a dedicated node, its result is written into state, and subsequent nodes can read that result without having to call the tool themselves.

This is worth stating clearly because tool use in LangGraph is simpler than it might sound. There is no tool registry, no special decorator, and no framework-level tool invocation. A tool is a function. A tool node is a node that calls it.
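To make this concrete, here is a minimal sketch of a tool as a plain function. The name `search_knowledge_base` and the dict-backed "knowledge base" are illustrative assumptions, not part of any LangGraph API; in practice the body might query a database or call a search API instead.

```python
# Hypothetical knowledge base: a plain dict standing in for a real
# data source such as a database or search service.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are available within 30 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

def search_knowledge_base(query: str) -> str:
    """Return the best-matching entry, or a fallback message.

    Plain Python: no LangGraph interface, no decorator, no registry.
    The function has no idea it will be called from inside a graph.
    """
    query_lower = query.lower()
    for topic, content in KNOWLEDGE_BASE.items():
        if topic in query_lower:
            return content
    return "No matching entry found."
```

Nothing about this function changes when it becomes a "tool"; only the call site does.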

The tool node pattern

When a workflow needs to use a tool, the cleanest approach is to give that tool its own dedicated node. One node, one tool call, one state field written.

The alternative is calling tools inside general-purpose nodes that also do other work. A node that classifies intent, calls an API, and generates a response is doing three jobs. When it fails, there is no clean way to tell which job caused the failure. When the tool needs to be replaced, the entire node has to be rewritten.

Dedicated tool nodes make each job testable in isolation, replaceable without touching other nodes, and observable through the state field they write.
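A sketch of the pattern, assuming a state with `question` and `tool_result` fields (the field names and the stand-in tool are illustrative, not prescribed by LangGraph). A LangGraph node is a function that takes the current state and returns the fields it updates, so the node itself needs nothing framework-specific:

```python
from typing import TypedDict

class AssistantState(TypedDict, total=False):
    question: str     # the user's incoming question
    tool_result: str  # written only by the tool node
    response: str     # written only by the synthesis node

def fake_search(query: str) -> str:
    # Stand-in for the real tool; returns canned data.
    return f"Retrieved data for: {query}"

def tool_node(state: AssistantState) -> dict:
    """One node, one tool call, one state field written."""
    result = fake_search(state["question"])
    return {"tool_result": result}
```

Because the node writes exactly one field, a failed or empty lookup is visible in state, and swapping `fake_search` for a real retrieval call leaves every other node untouched.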

What we are building

The workflow in this lesson is a knowledge base assistant. It classifies incoming questions as fact-seeking or general. Fact questions route through a tool node that searches a knowledge base, then pass the retrieved content to a synthesis node that calls an LLM with both the user’s question and the retrieved data. General questions route directly to the LLM without any lookup.

The diagram below shows the two paths and where grounding enters the flow.

Knowledge base assistant: fact questions pass through a tool node before reaching an LLM; general questions go directly to the model; both paths terminate at END
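The classification step that picks between the two paths can be sketched as a small routing function. The keyword-based classifier below is a deliberate simplification for illustration; a real workflow might use an LLM call or a trained classifier to make this decision:

```python
def classify_question(question: str) -> str:
    """Route fact-seeking questions (including pricing queries) through
    the tool path; everything else goes straight to the LLM.

    The keyword list is a hypothetical stand-in for a real classifier.
    """
    fact_keywords = ("policy", "price", "pricing", "how much", "when", "where")
    q = question.lower()
    return "fact" if any(keyword in q for keyword in fact_keywords) else "general"
```

In the graph, this function's return value would drive a conditional edge: `"fact"` leads to the tool node, `"general"` leads directly to the synthesis node.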

State design

The ...