
Handling Ambiguity, Safety, and Prompt Quality

Learn how to design resilient prompts that handle ambiguity, refuse unsafe requests, and correct their own errors.

So far, the examples have focused on cases where the user's intent is clear and the task is well defined. Production systems rarely operate under those conditions. Users may provide incomplete information, ask out-of-scope questions, or attempt actions the system isn't designed to support.

Consider an AI-powered appointment booking bot for a medical clinic. A user types, “I need to see a doctor next week afternoon.” A naive prompt, eager to be helpful, might guess what the user means, perhaps booking an appointment for “next Wednesday at 2:00 PM.” If this guess is wrong, the bot has just created a real-world problem for both the user and the clinic.

This is the core challenge of building production-grade AI systems. Our prompts must be resilient: engineered to handle not only well-formed inputs but also ambiguous, incomplete, or adversarial ones. In this lesson, we will learn to design prompts that gracefully handle ambiguity, refuse unsafe requests, critique and refine their own outputs, and can be systematically troubleshot when they underperform.

Handling ambiguous user queries

An AI’s tendency to guess when faced with an incomplete query is a major source of error and user frustration. Our first and most important line of defense is to engineer the prompt to stop guessing and start clarifying.

Instructing the model to ask clarifying questions

For any interactive application, the most robust way to handle ambiguity is to turn the conversation back to the user. Instead of allowing the model to make a potentially incorrect assumption, we can explicitly instruct it to ask for the information it needs. Let’s re-engineer the prompt for our medical appointment bot.

System prompt: You are an appointment booking assistant for a medical clinic. Your goal is to schedule an appointment based on the user's request. Extract the doctor's name, requested day, and time, and confirm the booking.

User prompt: I need to see a doctor next week afternoon.

Assistant response: Okay, I have booked an appointment for you with Dr. Evans for next Wednesday at 2:00 PM. Is that correct?

Now, let’s add a rule for handling ambiguity.

System prompt: You are an appointment booking assistant for a medical clinic. Your goal is to schedule an appointment.


You require three pieces of information to book an appointment:

1. The specific doctor's name.

2. The exact day (e.g., Monday, July 22nd).

3. The specific time (e.g., 3:00 PM).


Rule: If any of these details are missing or ambiguous in the user's request, do not guess. Instead, your task is to ask a clarifying question to get the missing information. Only confirm the booking when you have all three details.


User prompt: I need to see a doctor next week afternoon.

Assistant response: I can certainly help you book an appointment. To make sure I get it right, could you please provide the specific day you'd like to come in and the name of the doctor you'd like to see?
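In code, this rule simply becomes the system message of the chat request. The sketch below uses the OpenAI Python SDK as an illustration (the model name and client setup are assumptions; the same pattern applies to any chat-style API):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """\
You are an appointment booking assistant for a medical clinic. \
Your goal is to schedule an appointment.

You require three pieces of information to book an appointment:
1. The specific doctor's name.
2. The exact day (e.g., Monday, July 22nd).
3. The specific time (e.g., 3:00 PM).

Rule: If any of these details are missing or ambiguous in the user's \
request, do not guess. Instead, your task is to ask a clarifying question \
to get the missing information. Only confirm the booking when you have \
all three details.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I need to see a doctor next week afternoon."},
    ],
)

# With the rule in place, the reply should be a clarifying question
# rather than a guessed booking.
print(response.choices[0].message.content)
```

Because the guardrail lives in the system message rather than in application code, you can tighten or relax it without changing anything else in the request pipeline.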

This second prompt is more robust as it prevents errors, ...