
Prompt Engineering Template

Explore how to design well-structured prompts by defining conversational roles such as system, user, and assistant, and using delimiters like markdown and code blocks. Understand why this structure prevents confusion for AI models, making prompts more reliable, maintainable, and effective.

A well-structured report is more than a long block of text. It includes a title, section headings, and paragraphs that break the content into meaningful units. This structure isn’t cosmetic; it makes the information easier to read, understand, and use.

A modern LLM prompt also needs a clear, deliberate structure. As prompts evolve to include instructions, user queries, reference text, and output requirements, they require a consistent structure to remain manageable. Without structure, the model may misinterpret which parts are instructions and which are data. The next section outlines how to design that structure.

Conversational roles

Modern, instruction-tuned AIs are most often designed as chat models. This means their fundamental architecture is built to understand conversations, not just single, isolated commands. This conversational paradigm has distinct roles, and using them correctly is the first and most crucial step in creating a well-structured prompt. The three roles are:

  • System

  • User

  • Assistant

The system prompt

The system prompt is a special, high-level instruction that sets the context, rules, and persona for the entire conversation that follows. It is the primary mechanism for AI contextual refinement, shaping how the model interprets and responds to everything that comes after it. Think of it as the AI’s constitution or the director’s notes to an actor before a play begins. The instructions in the system prompt are designed to be persistent and to influence every subsequent response from the model.

The system prompt is the ideal place to define:

  • Persona and role: “You are an expert copywriter specializing in technology startups.”

  • Primary objectives: “Your goal is to help users debug their Python code.”

  • Safety guardrails: “You must never provide medical or legal advice.”

  • Key constraints or rules: “Always answer in French.”

For example, if we were building a customer support bot, a good system prompt would establish its identity and core directives.

Prompt: You are 'SupportBot,' a friendly and helpful AI assistant for the e-commerce brand 'Innovate Forward.' Your primary goal is to resolve customer issues by providing accurate information from the company's official knowledge base. You must maintain a polite and professional tone at all times.

By placing these instructions in the system prompt, we ensure they are not just a one-time command but a foundational part of the AI’s behavior.

The user prompt

The user prompt represents the input from the end-user for a specific turn in the conversation. It contains the immediate question, command, or data that the AI needs to process at that moment. This is where we would place the article to be summarized, the code to be debugged, or the customer’s specific query. It is the dynamic part of the conversation.

The assistant message

The assistant message is the AI’s response in the conversation. While we do not write the final assistant message ourselves (the model generates it), it serves a powerful secondary purpose in prompt engineering. By providing Assistant messages as part of an example prompt, we can demonstrate the exact format and style of the desired output. We will explore this advanced technique in a later lesson.

Putting it all together: A structured conversation

These three roles work together to form a clean, logical conversational history. All major chat model APIs, from OpenAI to Anthropic, are built around this structure. Here’s how a simple, well-structured conversation would look for our support bot:

Role: System

You are 'SupportBot,' a friendly and helpful AI assistant for the e-commerce brand 'Innovate Forward.' Your primary goal is to resolve customer issues by providing accurate information from the company's official knowledge base. You must maintain a polite and professional tone at all times.

Role: User

My order #12345 arrived damaged. What is your return policy?

Role: Assistant (Model's Generated Response)

I'm so sorry to hear that your order arrived damaged. According to our policy, you can get a full refund or a free replacement for any damaged items within 30 days of delivery. I can help you start that process right now.

By separating the high-level directives (system) from the immediate task (user), we create a prompt that is far more reliable and easier for the model to understand.
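In code, these roles map directly to a list of role-tagged messages, the structure used by OpenAI- and Anthropic-style chat APIs. Here is a minimal sketch; the helper function name and the commented-out API call are illustrative, not part of any specific SDK requirement:

```python
# The persistent system prompt: persona, objective, and tone rules.
SYSTEM_PROMPT = (
    "You are 'SupportBot,' a friendly and helpful AI assistant for the "
    "e-commerce brand 'Innovate Forward.' Your primary goal is to resolve "
    "customer issues by providing accurate information from the company's "
    "official knowledge base. You must maintain a polite and professional "
    "tone at all times."
)

def build_messages(user_query: str) -> list[dict]:
    """Assemble the message list for one conversational turn:
    the system prompt first, then the user's immediate query."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages(
    "My order #12345 arrived damaged. What is your return policy?"
)
# This list is what you would pass to a chat completion endpoint, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

The model's reply comes back tagged with the assistant role, completing the three-role structure described above.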


After defining the conversational roles, the next step is organizing the content within a single, complex user prompt. This is where delimiters are useful.

Delimiters for content organization

A delimiter is a character or sequence of characters that marks the beginning and end of a distinct block of content. Delimiters act as fences or signposts within the prompt. They create a clear separation between different parts of the input, such as separating instructions from the text we want the model to process.

Using delimiters is one of the most important prompt engineering best practices because it prevents what is sometimes called prompt bleeding, where the model gets confused and starts treating the instructions as if they are part of the input data, or vice versa.

Common and effective delimiters

While we can use any distinct characters, the industry has converged on a few highly effective and readable standards.

  • Triple backticks (```): This is the de facto standard for encapsulating blocks of code, but it is an excellent and highly visible way to separate any block of text.

  • XML/HTML tags (<tag>): These are extremely powerful because they provide semantic meaning. We can create our own tags to label different parts of the prompt, such as <document>, <user_question>, or <style_guide>. This is highly recommended for complex prompts with multiple, distinct pieces of information.

  • Markdown headers (###): Using Markdown headings is a great way to structure a prompt like a well-organized document. This makes the prompt highly readable for both the AI and the human engineers who have to maintain it.

Let’s see this in action. We need to extract specific information from a legal clause.

Prompt: Extract the renewal date and the notice period from the following legal clause. The clause is: "The agreement shall automatically renew on December 31st, 2025, unless either party provides a written notice of non-renewal at least 30 days prior." Please provide the output in JSON format.

Response: Sure! Here is the information from the legal clause in JSON format:

{ "renewal_date": "December 31st, 2025", "notice_period": "at least 30 days" }

This prompt might work, but it is messy. The instruction, the data, and the output format are all jumbled together. Now, let's structure it with delimiters.

Prompt: ### INSTRUCTION ###

Extract the renewal date and the notice period from the legal clause provided below. Provide the output in JSON format.


### LEGAL CLAUSE ###

<clause>

The agreement shall automatically renew on December 31st, 2025, unless either party provides a written notice of non-renewal at least 30 days prior.

</clause>


Response:
{
  "renewal_date": "2025-12-31",
  "notice_period_days": 30
}

The second version is much more effective. It’s unambiguous, easier to read, and more reliable because the model can identify each part of the task. This technique is widely recommended.
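This delimited structure is also easy to generate programmatically, which keeps the delimiters consistent across every request. A sketch of a template function (the header and tag names here are our own choices, not a required convention):

```python
def build_extraction_prompt(clause: str) -> str:
    """Wrap the instruction and the data in distinct delimiters so the
    model cannot confuse the legal clause with the instructions."""
    return (
        "### INSTRUCTION ###\n"
        "Extract the renewal date and the notice period from the legal "
        "clause provided below. Provide the output in JSON format.\n\n"
        "### LEGAL CLAUSE ###\n"
        f"<clause>\n{clause}\n</clause>"
    )

prompt = build_extraction_prompt(
    "The agreement shall automatically renew on December 31st, 2025, "
    "unless either party provides a written notice of non-renewal at "
    "least 30 days prior."
)
```

Because the clause is always injected between the same `<clause>` tags, any text a user supplies stays cleanly separated from the instructions.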

Note: Why are delimiters so effective? It is because they mirror the structured data that the models were trained on. The internet is filled with text that uses Markdown, XML, and other structured formats. By using these conventions, we are “speaking the model’s language,” making it easier for it to parse our intent correctly.

Combining roles and delimiters

The most robust and professional prompts are built by combining both layers of structure. We use conversational roles to set the high-level context and delimiters to organize the content within each message. The result is essentially a reusable prompt engineering template: a consistent, readable structure that your whole team can build on and maintain.

Let’s design a prompt for an AI code review assistant using roles and delimiters.

### ROLE: SYSTEM

You are an expert AI code reviewer specializing in Python. Your purpose is to help developers improve their code quality.


### YOUR TASK ###

Review the Python code provided by the user and identify areas for improvement.


### REVIEW CRITERIA ###

You must evaluate the code based on the following three criteria:

1. **Bugs or Logical Errors:** Identify any code that will not work as intended.

2. **Style Guide Adherence:** Check if the code follows the official PEP 8 Python style guide.

3. **Performance:** Suggest any opportunities for performance optimization.


### OUTPUT FORMAT ###

Provide your feedback as a bulleted list, with a separate bullet for each issue you find.


### ROLE: USER

Please review the following Python function:

```python
def get_names( user_list):
    names=[]
    for user in user_list:
        names.append(user['name'])
    return names
```

This structure is incredibly effective. The system prompt uses markdown headers to clearly lay out the AI’s permanent instructions, criteria, and required output format. This makes the prompt’s logic easy for a human developer to read and debug. The user prompt then provides the specific data for this turn, using triple backticks to cleanly encapsulate the code. This clean separation of concerns is the hallmark of professional prompt engineering.
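Putting both layers together in code, the roles carry the high-level structure while the delimiters organize the content inside each message. A minimal sketch (the function name and the exact header wording are illustrative):

```python
# Persistent system instructions, organized with markdown headers.
SYSTEM_PROMPT = """### YOUR TASK ###
Review the Python code provided by the user and identify areas for improvement.

### REVIEW CRITERIA ###
1. Bugs or logical errors
2. PEP 8 style guide adherence
3. Performance

### OUTPUT FORMAT ###
Provide your feedback as a bulleted list, one bullet per issue."""

def build_review_request(code: str) -> list[dict]:
    """Pair the persistent system instructions with one user turn whose
    code is fenced in triple backticks."""
    fence = "`" * 3  # triple backticks, built here to keep the template readable
    user_content = (
        f"Please review the following Python function:\n\n"
        f"{fence}python\n{code}\n{fence}"
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]

request = build_review_request(
    "def get_names( user_list):\n"
    "    names=[]\n"
    "    for user in user_list:\n"
    "        names.append(user['name'])\n"
    "    return names"
)
```

Because the template is a plain function, the whole team reuses the same structure: only the code inside the fenced block changes from one request to the next.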

Structure is not an optional extra in prompt engineering; it is a prerequisite for creating reliable, predictable, and maintainable AI behavior. A well-structured prompt is easier to debug, easier for a team to collaborate on, and performs better under pressure. It is also the foundation that all advanced prompt engineering builds upon, from few-shot examples to chain-of-thought reasoning.