
The Anatomy of an ADK Agent

Explore the fundamental components of an AI agent built with the Google ADK framework. Learn how the LlmAgent acts as the agent's brain, how tools extend its capabilities, and how the Runner orchestrates execution. Understand how these parts work together to create modular, scalable AI agents.

In software development, building any complex system requires a clear understanding of its fundamental components. Just as a web application is composed of distinct parts like a database, a server-side framework, and a user interface, an AI agent built with a professional framework is also made of well-defined, interconnected components.

Having seen a basic agent run, we will now explore its architectural anatomy. A solid grasp of these core building blocks is essential for moving beyond simple examples and beginning to design and build powerful, custom agentic applications. This lesson breaks down the essential Python classes of the Google Agent Development Kit, focusing on the three primary components: the LlmAgent (the brain), the Tools (the capabilities), and the Runner (the engine).

The core component: LlmAgent

At the very center of any intelligent application built with the ADK is the agent itself. The primary class we will work with for this purpose is the LlmAgent.

The LlmAgent, also known as Agent, is a core component in the ADK that acts as the “thinking” part of an application. Its primary function is to leverage the power of an LLM for reasoning, understanding natural language, making decisions, generating responses, and interacting with tools. It is the component where we define the agent’s identity and its core logic. When we create an instance of the LlmAgent class, we configure its behavior through a series of parameters.

Here is a code snippet that instantiates the LlmAgent class, demonstrating its primary parameters:

```python
from google.adk.agents.llm_agent import LlmAgent

root_agent = LlmAgent(
    name='greeting_agent',
    model='gemini-2.5-flash',
    description='An agent that provides a friendly greeting in a specified language.',
    instruction='You are a friendly agent. Greet the user in their specified language.',
)
```

Let’s explore the parameters and their usage:

  • name (Required): Every agent needs a unique string identifier. This name is crucial for internal operations, especially in multi-agent systems where different agents need a way to refer to or delegate tasks to each other. It also serves as a clear label in logs and debugging outputs. It is best to choose a descriptive name that reflects the agent’s function.

  • model (Required): This parameter specifies the underlying LLM that will power the agent’s reasoning. The choice of model directly impacts the agent’s capabilities, performance, and cost. Different models have different strengths, so selecting the right one is a key design decision.

  • description: This parameter is a concise, human-readable summary of the agent’s capabilities. While it may seem secondary in a system with only one agent, its importance grows significantly in multi-agent architectures. It is primarily used by other agents to decide whether a task should be routed to this agent. For example, if a manager agent receives a user query, it will look at the descriptions of all the worker agents it controls to decide which specialist is best suited for the job.

  • instruction: This parameter is the agent’s core directive. It is a string that serves as the system prompt, sent to the LLM at the beginning of every interaction. A well-crafted instruction is the primary tool we have for guiding the agent. It is used to define the following (a fuller example follows this list):

    • Its core task or goal.

    • Its personality or persona.

    • Constraints on its behavior.

    • How and when to use its tools.

    • The desired format for its output.

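To make these ideas concrete, here is a sketch of how the description and instruction parameters might be filled in for a small multi-agent setup. The agent names, the wording of the prompts, and the use of the sub_agents parameter for delegation are illustrative assumptions, not code from this lesson:

```python
from google.adk.agents.llm_agent import LlmAgent

# Worker agent: the description advertises its specialty so a parent
# agent can decide when to delegate to it.
translation_agent = LlmAgent(
    name='translation_agent',
    model='gemini-2.5-flash',
    description='Translates short passages of text between languages.',
    # The instruction covers the task, persona, constraints, and output format.
    instruction=(
        'You are a precise translator. '
        'Translate the user-provided text into the requested language. '
        'Do not add commentary. Return only the translated text.'
    ),
)

# Manager agent: reads the descriptions of its worker agents to route work.
# (sub_agents is assumed here as the delegation mechanism.)
root_agent = LlmAgent(
    name='support_manager',
    model='gemini-2.5-flash',
    description='Routes user requests to the appropriate specialist agent.',
    instruction=(
        'Look at the user request and delegate it to the sub-agent '
        'whose description best matches the task.'
    ),
    sub_agents=[translation_agent],
)
```

Notice that the manager never needs to know how the worker does its job; the worker’s description is the only signal it uses when deciding where to send a request.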
Beyond these core parameters, the LlmAgent offers several other optional arguments for better control over its behavior.

LLM response generation

We can control how the underlying LLM generates responses through the generate_content_config parameter, which accepts a configuration object. We can adjust parameters like temperature ...
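As a hedged illustration, passing such a configuration object might look like the sketch below. It assumes the GenerateContentConfig type from the google-genai SDK that the ADK builds on; the specific values are arbitrary examples:

```python
from google.adk.agents.llm_agent import LlmAgent
from google.genai import types

root_agent = LlmAgent(
    name='greeting_agent',
    model='gemini-2.5-flash',
    instruction='You are a friendly agent. Greet the user in their specified language.',
    # Lower temperature for more deterministic replies; cap the output length.
    # The exact values here are illustrative, not recommendations.
    generate_content_config=types.GenerateContentConfig(
        temperature=0.2,
        max_output_tokens=256,
    ),
)
```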