
LangChain Callbacks for Monitoring

Explore how to implement LangChain callbacks to monitor multi-agent language model pipelines. Understand the callback architecture, build custom handlers to log events, track token usage, detect errors, and ensure reliable production observability without altering pipeline logic.

The three-agent pipeline from the previous lesson (Researcher, Critic, and Summarizer) produces reliable outputs, but right now it operates as an opaque system. When the pipeline runs, you have no visibility into which model was called, what prompts were sent, how many tokens were consumed, or whether a node silently failed and retried. In production, this lack of visibility becomes a serious liability. LangChain’s callback system solves this by providing event-driven hooks that fire at specific moments during execution, allowing you to log, trace, and debug every step without touching your pipeline logic.

This lesson walks through the callback architecture, builds a custom handler class from scratch, and attaches it to the compiled multi-agent pipeline so that every LLM call, chain execution, and error becomes visible in a structured log stream.

Note: Callbacks are purely observational. They receive event data but never modify pipeline state or influence routing decisions, which keeps your observability layer cleanly separated from your business logic.

The LangChain callback architecture

LangChain’s callback system is built on two core abstractions that work together to deliver event-driven observability.

Core abstractions

The first abstraction is `BaseCallbackHandler`, a base class provided by LangChain that defines a method for every life cycle event in a chain or agent execution; you subclass it and override only the methods you care about. The second is the `CallbackManager`, an internal dispatcher that automatically routes events to all registered handlers whenever a life cycle event occurs. You rarely interact with the manager directly because LangChain wires it up behind the scenes when you pass handlers through configuration.
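To make the division of labor concrete, here is a minimal sketch of a custom handler. The `LoggingHandler` class and `"demo-model"` name are hypothetical, and the stand-in base class below only mimics the shape of `langchain_core.callbacks.BaseCallbackHandler` so the snippet runs without LangChain installed; in a real project you would import the LangChain class directly and let the callback manager invoke these methods for you.

```python
# Stand-in that mimics langchain_core.callbacks.BaseCallbackHandler so this
# sketch is self-contained; replace with:
#   from langchain_core.callbacks import BaseCallbackHandler
class BaseCallbackHandler:
    def on_llm_start(self, serialized, prompts, **kwargs):
        pass

    def on_llm_end(self, response, **kwargs):
        pass


class LoggingHandler(BaseCallbackHandler):
    """Hypothetical handler: records events without touching pipeline state."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Fires when an LLM call begins; log which model got how many prompts.
        self.events.append(("llm_start", serialized.get("name"), len(prompts)))

    def on_llm_end(self, response, **kwargs):
        # Fires when the LLM returns; a real handler would read token usage here.
        self.events.append(("llm_end",))


handler = LoggingHandler()
# In a real pipeline, LangChain dispatches the events for you once you attach
# the handler via configuration, e.g.:
#   pipeline.invoke(inputs, config={"callbacks": [handler]})
# Here we trigger one event by hand to show the data a handler receives.
handler.on_llm_start({"name": "demo-model"}, ["Summarize the findings."])
print(handler.events)  # [('llm_start', 'demo-model', 1)]
```

Because handlers only append to their own log, attaching or removing one never changes what the pipeline computes, which is exactly the separation the note above describes.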

Key callback methods

The methods you override on BaseCallbackHandler map directly to life cycle events in agentic workflows. The following table summarizes each method, when it fires, what data it receives, and how it is typically used.

LangChain Callback Methods Overview

| Callback Method | Trigger Point | Key Arguments Received | Typical Use Case |
| --- | --- | --- | --- |
| `on_llm_start` | When an LLM call begins | `serialized`, `prompts`, `run_id` | Log prompt content and model selection |
| `on_llm_end` | When the LLM returns a response | `response` (`LLMResult`), `run_id` | Log token consumption and latency |
| `on_llm_error` | When the LLM raises an exception | `error`, `run_id` | Alert on failures and trigger retries |
| `on_chain_start` | When a chain/Runnable begins | `serialized`, `inputs`, `run_id` | Trace execution entry points |
| `on_chain_end` | When a chain/Runnable completes | `outputs`, `run_id` | Log final results and measure duration |
| `on_tool_start` | When an agent calls a tool | `serialized`, `input_str`, `run_id` | Audit tool usage and inputs |

...