
Prompt Engineering 101: Writing Effective Instructions for LLMs

Master the fundamentals of prompt engineering — learn to write clear system and user messages, format instructions for consistency, and avoid common pitfalls that cause unreliable LLM outputs.

What Is Prompt Engineering?

Prompt engineering is the discipline of crafting inputs to large language models (LLMs) so they produce reliable, accurate, and useful outputs. Unlike traditional programming where you write deterministic logic, prompt engineering is about communicating intent to a probabilistic system. The quality of your prompt directly determines the quality of the response.

Every interaction with an LLM involves at least one prompt, but most production systems use two distinct message types: the system message and the user message. Understanding how these work together is the foundation of effective prompt engineering.

System Messages vs User Messages

The system message sets the behavioral context for the entire conversation. It defines who the AI is, how it should respond, and what constraints it should follow. The user message contains the actual request or question.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a senior Python developer. Respond with concise, production-ready code. Always include type hints and error handling."
        },
        {
            "role": "user",
            "content": "Write a function to validate an email address."
        }
    ]
)

print(response.choices[0].message.content)

The system message persists across the conversation and shapes every response. The user message is specific to each turn. Keeping these concerns separated produces far more consistent results than cramming everything into a single prompt.

Five Principles for Clear Instructions

1. Be specific about the output format. Instead of "summarize this article," write "Summarize this article in exactly 3 bullet points, each under 20 words."

2. Provide context before the task. Tell the model what it is working with before asking it to act:

# Template prompt: the {code_snippet} placeholder is filled at call time,
# e.g. prompt.format(code_snippet=diff)
prompt = """You are reviewing a Python pull request.

The code below implements a user authentication endpoint.
Review it for security vulnerabilities and suggest fixes.

Code:
{code_snippet}
"""

3. Use delimiters to separate data from instructions. Triple quotes, XML tags, or markdown headings prevent the model from confusing your instructions with input data:


prompt = f"""Translate the text between <text> tags to French.

<text>
{user_input}
</text>

Return only the translation, no explanation.
"""

4. Specify what NOT to do. Negative instructions reduce unwanted behaviors: "Do not include disclaimers. Do not use phrases like 'as an AI.' Do not repeat the question back."

5. Define the output structure explicitly. If you need JSON, show the exact shape:

prompt = """Extract the following fields from the customer email:
- name (string)
- sentiment (positive | neutral | negative)
- issue_category (billing | technical | general)

Return valid JSON only. Example:
{"name": "Jane", "sentiment": "negative", "issue_category": "billing"}
"""
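Even with an explicit JSON example in the prompt, models occasionally wrap the object in markdown fences or add stray prose. A minimal defensive parser for this extraction task might look like the sketch below (the function name and validation logic are illustrative, not part of any library):

```python
import json

def parse_extraction(reply: str) -> dict:
    """Parse a model reply expected to contain a JSON object.

    Strips a markdown code fence the model may have added, then
    validates that the required fields are present.
    """
    text = reply.strip()
    if text.startswith("```"):
        # Remove surrounding backticks, then drop an optional
        # language tag like "json" on the first line.
        text = text.strip("`")
        first_newline = text.find("\n")
        if first_newline != -1 and text[:first_newline].strip().isalpha():
            text = text[first_newline + 1:]
    data = json.loads(text)
    for field in ("name", "sentiment", "issue_category"):
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return data
```

Failing loudly on a missing field is usually better than silently passing malformed data downstream; the caller can then retry the request or fall back to a default.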

Common Pitfalls

The most frequent mistake is vague instructions. "Write something good about our product" gives the model no constraints. "Write a 150-word product description for our CI/CD tool targeting DevOps engineers, emphasizing speed and reliability" gives it everything it needs.

Another pitfall is instruction overload — stuffing 30 rules into a single system prompt. Models lose track of long unstructured lists. Group related instructions under headings and place the most important rules at the top and bottom of the prompt, the positions models tend to weight most heavily.
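One way to keep a long rule set legible is to group it under headings. A sketch of the structure (the product, section names, and rules are illustrative):

```python
# A grouped system prompt: related rules live under headings
# instead of one flat 30-item list.
SYSTEM_PROMPT = """# Role
You are a customer-support assistant for a SaaS product.

# Output rules
- Answer in at most 3 sentences.
- Return plain text, no markdown.

# Safety rules
- Never reveal internal pricing.
- Escalate refund requests to a human agent.

# Most important
Stay within the product's documented features."""
```

The headings double as anchors when you later need to update one group of rules without rereading the whole prompt.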

Finally, avoid implicit assumptions. If you expect code to use a specific library, say so. If you want the response in a specific language, state it. Models do not read your mind — they read your text.

Putting It Together

def build_review_prompt(code: str, language: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                f"You are a senior {language} code reviewer. "
                "Evaluate code for bugs, performance, and readability. "
                "Format your review as a numbered list of findings. "
                "Each finding must include: severity (critical/warning/info), "
                "the line reference, and a suggested fix. "
                "If the code is clean, respond with 'No issues found.'"
            ),
        },
        {
            "role": "user",
            "content": f"Review this {language} code:\n\n```\n{code}\n```",
        },
    ]

This function produces consistent, structured reviews because every aspect of the expected behavior is spelled out.
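A side benefit of building prompts in plain functions is that their output can be checked without calling any API. The sketch below repeats the function so it runs on its own:

```python
def build_review_prompt(code: str, language: str) -> list[dict]:
    return [
        {
            "role": "system",
            "content": (
                f"You are a senior {language} code reviewer. "
                "Evaluate code for bugs, performance, and readability. "
                "Format your review as a numbered list of findings. "
                "Each finding must include: severity (critical/warning/info), "
                "the line reference, and a suggested fix. "
                "If the code is clean, respond with 'No issues found.'"
            ),
        },
        {
            "role": "user",
            "content": f"Review this {language} code:\n\n```\n{code}\n```",
        },
    ]

# Prompt builders are deterministic, so they can be unit-tested
# like any other function — no network call required.
messages = build_review_prompt("def add(a, b): return a + b", "Python")
```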

FAQ

What is the difference between a system prompt and a user prompt?

The system prompt defines the AI's persona, constraints, and behavioral rules for the entire conversation. The user prompt contains the specific request for a single turn. System prompts are set once and persist; user prompts change with each interaction.

How long should a prompt be?

As long as necessary, but no longer. A well-structured 200-word prompt with clear sections outperforms a 1000-word wall of text. Focus on clarity and specificity rather than length. Production system prompts typically range from 100 to 500 words.

Do prompts work the same across different LLMs?

The core principles — clarity, specificity, structure — transfer across models. However, each model has quirks. GPT-4 follows system prompts more strictly than GPT-3.5. Claude responds well to XML-tagged sections. Always test your prompts against the specific model you are deploying.


#PromptEngineering #LLM #SystemPrompts #AIFundamentals #Python #AgenticAI #LearnAI #AIEngineering


CallSphere Team

Expert insights on AI voice agents and customer communication automation.
