From LLMs to AI Agents
Learn how LLMs evolve into AI agents with real autonomy.
Imagine you have an extremely knowledgeable assistant who has read millions of books but is locked in a library with no phone, no internet, and only a notepad with one page. You ask this assistant a complex question about yesterday’s stock prices, or ask it to schedule a meeting on your calendar. The assistant wants to help, but it has two big problems:
It can only draw on its memory (which doesn’t include yesterday’s news or your schedule).
Its notepad is so small that it forgets anything that doesn’t fit on that page.
Limitations of LLMs
Large language models (LLMs) are powerful at generating and understanding text. However, when an LLM is used in isolation, it faces several inherent limitations:
Limited knowledge beyond training: LLMs only know what’s in their training data, with no awareness of recent events or new facts. They can’t access real-time information, so their answers may be outdated or incorrect.
Pre-agent fix: Developers would periodically retrain or fine-tune models with newer data or manually inject facts into prompts to update knowledge.
Why not enough: Retraining is slow and costly; manual updates don’t scale. LLMs still can’t respond to new or user-specific info without extra help.
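The manual-injection workaround can be sketched in a few lines. This is a minimal, hypothetical example: `build_prompt` and the fact strings are illustrative, not a real API, and in practice someone had to gather and refresh those facts by hand.

```python
# A sketch of the pre-agent "fix": manually prepending hand-curated, fresh
# facts to the prompt so the model can answer beyond its training cutoff.

def build_prompt(question: str, fresh_facts: list[str]) -> str:
    """Prepend manually gathered, up-to-date facts to the user's question."""
    facts_block = "\n".join(f"- {fact}" for fact in fresh_facts)
    return (
        "Use the following up-to-date facts when answering:\n"
        f"{facts_block}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What was ACME's closing price yesterday?",
    ["ACME closed at $123.45 on 2024-05-01."],  # re-gathered by hand each day
)
print(prompt)
```

The weakness is visible in the code itself: nothing updates `fresh_facts` automatically, so every new topic or user means more manual curation.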
No built-in access to external data: By default, LLMs can’t browse the web or interact with other apps. They can only generate text; they can’t perform real-world actions or fetch live data.
Pre-agent fix: Developers created custom scripts or plugins to fetch data or perform actions alongside the LLM.
Why not enough: Each integration was bespoke, brittle, hard to scale, and required ongoing maintenance.
Context window limitations: LLMs can only “remember” a limited chunk of text at a time. Anything outside this window is forgotten, so long conversations or big documents may lose earlier details.
Pre-agent fix: Retrieval-augmented generation (RAG) fetches and injects relevant data for each prompt.
Why not enough: RAG helps with facts but doesn’t give the LLM memory across ...