June 18, 2025

Writing prompts that get results: A practical guide (advanced edition)

Prompting is the future

At 14.ai, we believe prompting will soon be the primary way people interact with software, not through dashboards or buttons, but through clear, structured instructions to intelligent systems. This is already happening in customer support, where fast, repeatable workflows matter most.

Why? Because well-prompted agents don’t just automate tasks; they collaborate with humans to make support teams faster, more consistent, and more effective.

In the same way Google made search literacy essential, prompt literacy is now becoming a core skill. And like writing good code or documentation, writing good prompts takes practice and structure.

What is a system prompt?

A system prompt defines how your AI agent should behave: its role, tone, scope, and output format.

Unlike a simple user query, this is the "rulebook" for the agent. Think of it as onboarding documentation for a new teammate: you're not just telling it what to do, you're defining how it thinks.

A strong system prompt typically includes:

  • Role: What kind of agent is this? A triage agent? A summarizer?
  • Goal: What outcome should it deliver?
  • Constraints: Are there rules for tone, content, or structure?
  • Output format: Should the output be bullets, plain text, or JSON?
  • Fallback instructions: What should happen when the input is vague or unclear?

The best prompts are scoped, specific, and don’t assume ideal input.
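The five components above can be sketched as a single prompt string. This is a hypothetical triage-agent prompt; the wording and labels are illustrative, not a canonical template.

```python
# A hypothetical triage-agent system prompt assembled from the five
# components: role, goal, constraints, output format, and fallback.
SYSTEM_PROMPT = "\n".join([
    "Role: You are a support triage agent for an e-commerce company.",
    "Goal: Assign each incoming ticket a priority (P1, P2, or P3).",
    "Constraints: Stay neutral in tone; never promise refunds or timelines.",
    "Output format: Reply with exactly one line: 'Priority: <P1|P2|P3>'.",
    "Fallback: If the ticket is too vague to classify, reply 'Priority: P3'"
    " and append the tag [needs-clarification].",
])
```

Keeping each component on its own labeled line makes it easy to audit later which part of the prompt a failure traces back to.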

Prompt layers: system, user, context

AI agents process messages across three layers of input:

  • System: Sets the agent’s role, tone, structure, and constraints. It’s the instruction manual.
  • User: The real message from the customer, which is often messy, emotional, or unclear.
  • Context: This includes metadata such as prior ticket history, customer profile, and past actions.

You can think of the system prompt as the brain of the agent, while the user and context layers form its environment: the better defined the system prompt, the more predictable the agent’s behavior across noisy real-world inputs.
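In practice, the three layers often arrive as a chat-style message list. A minimal sketch, assuming the common system/user role convention; the "Context:" prefix is one possible way to inject metadata, not a fixed API.

```python
# Three layers of input, flattened into one message list.
messages = [
    # System layer: role, tone, structure, constraints.
    {"role": "system", "content": "You are a refund triage agent. Reply in one line."},
    # Context layer: metadata injected as an extra system message (one option).
    {"role": "system", "content": "Context: customer_tier=gold; open_tickets=2"},
    # User layer: the raw, messy customer message.
    {"role": "user", "content": "hey my order never showed up???"},
]
```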

Common prompt failures in production

Even well-crafted prompts can break down. Here are typical failure modes in real deployments:

  • Mixed signals: Conflicting role or tone (e.g. casual greeting plus strict formatting).
  • No fallback logic: The agent doesn’t know what to do with vague input.
  • Overprompting: Instructions are too dense or include competing goals.
  • Prompt drift: A prompt works at first, but degrades as new edge cases appear.

Example: A summarizer starts appending “Thanks for contacting support” to every message. This wasn’t part of the prompt, but the model filled in the gap due to vague formatting instructions.

Updating prompts over time

Prompts are not one-and-done. They require ongoing iteration like any piece of product logic.

When you spot a failure in production:

  • Log the case: Save the input message and model response.
  • Analyze: What went wrong? Did it misclassify? Misinterpret tone? Skip a step?
  • Patch the prompt: Add clarification, constraints, or fallback logic.
  • Re-test: Run the updated prompt across varied examples, especially edge cases.

Treat your system prompt like a living doc. Every real-world failure is training data.
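The log-analyze-patch-retest loop can be turned into a small regression harness: every logged failure becomes a case the updated prompt must pass. A minimal sketch; `call_model` is a hypothetical stand-in for your model client, and the cases are illustrative.

```python
# Each logged production failure becomes a regression case: an input plus
# a substring the patched prompt's reply must contain.
CASES = [
    {"input": "refund plz", "must_contain": "order ID"},
    {"input": "",           "must_contain": "manual-review"},
]

def run_regression(system_prompt, call_model):
    """Return the inputs whose replies miss the expected substring."""
    failures = []
    for case in CASES:
        reply = call_model(system_prompt, case["input"])
        if case["must_contain"] not in reply:
            failures.append(case["input"])
    return failures
```

Run this after every prompt patch; an empty return means the new prompt still handles every previously logged failure.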

Debugging checklist

Before shipping a prompt, run it through this checklist:

  • Would a human teammate understand what to do?
  • Are the tone and output format clearly defined?
  • Is there fallback logic for vague, ambiguous, or empty input?
  • Have you tested across multiple realistic examples?

Remember: prompts are not meant for ideal inputs. They are built to hold up when things get messy.

What are tool calls?

Tool calls allow your AI agent to do more than just respond with text. They enable the agent to take real-world actions by triggering internal tools or APIs.

Think of them as the bridge between conversation and execution.

For example, tool calls might trigger the following:

  • reschedule_appointment: moves a meeting in your internal calendar system
  • issue_refund: initiates a refund in your billing platform
  • lookup_order_status: pulls live tracking data from your order system

These are not silent API calls happening in the background: the model must be told when and how to use each tool. Without clear guidance, the model might:

  • Ignore the tool entirely
  • Invent inputs or parameters that do not exist
  • Use the wrong tool for the situation

In short, tool calls turn your AI from a text generator into a true workflow operator. Prompting is what makes that transition possible.

Each tool prompt should include:

  • Trigger conditions: e.g. “Use this only if refund and order ID are both present”
  • How to use: e.g. “Extract the order ID and pass it to the order_id field”
  • Fallbacks: e.g. “If no ID is present, tag as manual-review and escalate”
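One common pattern is to fold the trigger, usage, and fallback guidance directly into the tool's description, so the model sees it at decision time. A sketch in the JSON-Schema style used by many function-calling APIs; the tool name matches the issue_refund example above, but the field layout is an assumption, so adapt it to your platform's schema.

```python
# Hypothetical tool definition with trigger, usage, and fallback guidance
# embedded in the description the model reads.
ISSUE_REFUND = {
    "name": "issue_refund",
    "description": (
        "Initiate a refund in the billing platform. "
        "Use this only if the customer requests a refund AND an order ID "
        "is present in the message. Extract the order ID and pass it to "
        "the order_id field. If no order ID is present, do not call this "
        "tool; tag the ticket as manual-review and escalate instead."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order ID extracted from the customer message.",
            },
        },
        "required": ["order_id"],
    },
}
```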

Meta-prompt templates

Not sure how to start writing a system prompt from scratch?

Use a meta-prompt: a prompt that helps you generate reliable system prompts.

These are especially useful when you’re:

  • Defining new agent behaviors
  • Troubleshooting unclear instructions
  • Creating fallback logic for edge cases

Meta-prompt: generate a system prompt from scratch

Based on the following information, generate a complete, production-ready system prompt for an AI support agent.

Your output must:

  • Include a clear role description
  • Define the agent’s goal and scope of responsibility
  • Specify output format (e.g. number of bullet points, word count, plain text, etc.)
  • Include fallback instructions for cases where the input is vague or incomplete
  • Avoid any greetings, sign-offs, or extra commentary

Format the system prompt as a single block of plain text.

Input:

  • Role: [insert role, e.g. triage agent, drafting agent]
  • Goal: [insert core task, e.g. assign priority level or draft a reply]
  • Output format: [insert format requirements]
  • Fallback: [insert failure handling or escalation logic]
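The input block above is easy to fill programmatically, so one meta-prompt scaffold can generate prompts for many agents. A small sketch; the field names mirror the list above, and the example values are illustrative.

```python
# Template mirroring the meta-prompt's input fields.
META_INPUT = (
    "Role: {role}\n"
    "Goal: {goal}\n"
    "Output format: {output_format}\n"
    "Fallback: {fallback}"
)

# Fill it for a hypothetical triage agent.
filled = META_INPUT.format(
    role="triage agent",
    goal="assign a priority level to each incoming ticket",
    output_format="a single line: 'Priority: <P1|P2|P3>'",
    fallback="if the ticket is vague, assign P3 and tag needs-clarification",
)
```

Append `filled` under the meta-prompt's "Input:" heading and send the whole thing to the model to get a draft system prompt back.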

Meta-prompt: rewrite an underperforming prompt

The following system prompt is producing inconsistent or low-quality results. Your task is to rewrite it so that it is clearly scoped, well-structured, and reliable for use in a real-world support workflow.

Your output must:

  • Clearly define the agent’s role and intended goal
  • Specify a structured output format
  • Include fallback behavior for vague, incomplete, or irrelevant input
  • Eliminate filler language and ambiguous instructions
  • Output only the revised system prompt as plain text — no extra commentary or formatting

Original Prompt:

[Insert original system prompt]

Meta-prompt: fix a failing prompt based on behavior

Use the following failure case to improve the system prompt. Your goal is to revise the prompt so that the agent behaves reliably, even when inputs are unclear or unexpected.

Your output must:

  • Rewrite the system prompt to directly address the failure behavior
  • Define what the agent should do when the input is vague, incomplete, or unrelated to the task
  • Include fallback logic that ensures a consistent response pattern
  • Keep the prompt concise, structured, and suitable for production use
  • Output only the revised system prompt — no explanations or additional text

Failure Case:

[Insert a short example of an input and incorrect model behavior]

Original Prompt:

[Insert original system prompt]

Final takeaways

System prompts are like onboarding docs for your AI agents. They define the role, rules, and boundaries for how the model should behave.

Clarity, structure, and fallback logic are your most reliable tools.

Prompt failures are signals. Treat them like internal docs: revise with clarity, test with examples, and iterate often.

Strong prompting isn’t about being clever; it’s about being predictable, repeatable, and resilient.