
Structured Outputs and Few-Shot Prompting

Explore how to engineer AI prompts for structured and reliable outputs by using direct formatting instructions and few-shot prompting. Understand techniques for producing machine-readable data like XML or JSON, and learn to craft high-quality examples that effectively guide the model toward desired patterns.

So far, we’ve focused on producing conversational, human-readable text. But what happens when the consumer of the AI’s output is a software system instead of a person?

Consider an automated system built for a sales team: it monitors incoming emails, extracts lead-related fields, and writes them to a customer relationship management (CRM) database. A conversational output like “The customer’s name is John Doe and their email is john.doe@example.com” isn’t usable for this workflow. The CRM requires strictly structured, machine-readable data such as JSON or XML to create a new entry.
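To make the contrast concrete, here is a minimal sketch (the field names are illustrative, not a real CRM schema) of the same facts in both forms, and why only the structured one is programmatically usable:

```python
import json

# Conversational output: extracting fields from this requires fragile string parsing.
conversational = "The customer's name is John Doe and their email is john.doe@example.com"

# Structured output: the same facts as strict JSON, directly parsable.
structured = '{"name": "John Doe", "email": "john.doe@example.com"}'

lead = json.loads(structured)
print(lead["name"])   # John Doe
print(lead["email"])  # john.doe@example.com
```

A single `json.loads` call turns the structured response into a dictionary the CRM integration can write to the database; no such one-liner exists for the conversational version.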

This is the core challenge this lesson will solve. An LLM’s natural tendency is to produce conversational prose. To build robust applications, we must engineer prompts that compel the model to respond in a strict, predictable, and parsable format. This lesson will explore the two primary techniques for achieving this level of output reliability: directly instructing the model on specific data formats and using few-shot prompting to teach the model complex patterns through demonstration.

Engineering a structured output

The first requirement for output control is to explicitly define the structure of the AI’s response. For any programmatic use case, such as feeding data to another API, updating a database, or rendering a UI component, the output format must be predictable.
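As a sketch of why predictability matters, the code that consumes the model's raw text can only be this simple when the prompt has pinned down the exact structure (the `parse_lead` helper and its required keys are assumptions for illustration):

```python
import json

REQUIRED_KEYS = {"name", "email"}

def parse_lead(raw_output: str) -> dict:
    """Parse the model's response, failing loudly if the format drifts."""
    lead = json.loads(raw_output)  # raises ValueError if not valid JSON
    missing = REQUIRED_KEYS - lead.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return lead

# Succeeds only because the output format is guaranteed by the prompt.
parse_lead('{"name": "Jane Roe", "email": "jane@example.com"}')
```

Validating required fields at the parsing boundary means a malformed response is caught immediately, rather than corrupting the database downstream.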

Technique 1: Direct instruction with formatting rules

The simplest method for controlling output is to directly tell the model what we want. This often involves two parts: naming the format and stating any constraints that apply.

  • Name ...