Artificial Intelligence · 8 min read

Prompt Engineering — The Art of Asking AI the Right Questions

Here’s a fascinating fact: two people can use the exact same AI model and get completely different results. One thinks AI is useless, the other thinks it’s magic. What’s the difference? How they ask their questions. Or what we call Prompt Engineering.

In this article, you’ll learn Prompt Engineering from start to finish. Not just theory, but with practical examples you can use tomorrow.

What Is Prompt Engineering?

A prompt is the “instruction” or “input” you give to an AI model. Engineering means designing it deliberately. So Prompt Engineering is the art and science of writing effective instructions for AI models.

Think of an AI model as an extraordinarily skilled chef. It can cook anything. If you just say "make something good," the result is unpredictable. But if you say "make pasta with Alfredo sauce, not too salty, with lots of Parmesan on top," you get exactly what you want.

The same principle applies to AI. The more precisely and clearly you ask, the better the answer you get.

Why Does Prompt Engineering Matter?

Three main reasons:

1. Time Savings: A good prompt might give you an accurate answer on the first try. A bad prompt might require 10 back-and-forth exchanges.

2. Higher Quality: The model’s output directly depends on input quality. Garbage in, garbage out.

3. More Control: With the right prompt, you can precisely control the format, length, tone, and content of the output.

Technique 1: Zero-Shot Prompting

The simplest type of prompt. No examples — you just ask directly.

Weak example:

Translate: The cat sat on the mat.

Strong example:

Translate the following English sentence into fluent, natural French.
Use simple vocabulary:
The cat sat on the mat.

See the difference? In the strong example, we specified “fluent and natural” and “simple vocabulary.” These details help the model understand exactly what you want.

Zero-Shot works well for simple, straightforward tasks. For more complex work, you need more advanced techniques.

Technique 2: Few-Shot Prompting

In this technique, you give the model a few examples so it understands exactly what pattern you’re looking for.

Example:

Classify the following texts by sentiment (positive/negative/neutral):

Text: "This movie was amazing!" → positive
Text: "The food was cold and the waiter was rude." → negative
Text: "I arrived at 3 PM." → neutral

Now classify this text:
Text: "It was a good book but the ending was weak."

With three examples, the model understands exactly what kind of output you want: it picks up the format and the criteria, and continues the pattern.

Important Few-Shot tips:

  • Your examples should be diverse. If they're all positive, the model may become biased toward that label.
  • 3 to 5 examples is usually enough. Too many might confuse the model.
  • Arrange examples from simple to complex.
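The pattern above is easy to generate programmatically when your examples live in data. Here is a minimal Python sketch (the function and variable names are our own, for illustration):

```python
# Build a few-shot sentiment-classification prompt from labeled examples.
# The example texts and labels are the ones from the article.

EXAMPLES = [
    ("This movie was amazing!", "positive"),
    ("The food was cold and the waiter was rude.", "negative"),
    ("I arrived at 3 PM.", "neutral"),
]

def build_few_shot_prompt(examples, query):
    """Format labeled examples plus a query into one prompt string."""
    lines = ["Classify the following texts by sentiment (positive/negative/neutral):", ""]
    for text, label in examples:
        lines.append(f'Text: "{text}" → {label}')
    lines += ["", "Now classify this text:", f'Text: "{query}" →']
    return "\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "It was a good book but the ending was weak.")
print(prompt)
```

Keeping examples in a list also makes it trivial to swap them out or reorder them from simple to complex, per the tips above.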

Technique 3: Chain-of-Thought (Step-by-Step Reasoning)

This is one of the most powerful techniques. You tell the model to “think step by step.” This makes the model walk through reasoning stages instead of jumping straight to an answer, producing more accurate results.

Example without CoT:

If I have 3 apples and give 2 to my friend,
then buy 5 more, how many apples do I have?

Example with CoT:

If I have 3 apples and give 2 to my friend,
then buy 5 more, how many apples do I have?

Please solve step by step and explain each stage.

With "solve step by step," the model responds something like this:

1. Start: 3 apples
2. Gave away 2: 3 - 2 = 1 apple
3. Bought 5: 1 + 5 = 6 apples
Answer: 6 apples

This example is simple, but for complex problems in math, logic, and programming, Chain-of-Thought makes a dramatic difference. Research on CoT prompting (Wei et al., 2022) found that it can raise large models' accuracy on math benchmarks by tens of percentage points.
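In code, applying CoT is often just a matter of appending the instruction to an existing prompt. A minimal sketch (the suffix wording is a common convention, not a fixed rule, and the helper name is our own):

```python
# Append a chain-of-thought instruction to any prompt.
COT_SUFFIX = "\n\nPlease solve step by step and explain each stage."

def with_chain_of_thought(prompt: str) -> str:
    """Return the prompt with a step-by-step reasoning instruction appended."""
    return prompt.rstrip() + COT_SUFFIX

question = (
    "If I have 3 apples and give 2 to my friend, "
    "then buy 5 more, how many apples do I have?"
)
print(with_chain_of_thought(question))
```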

Technique 4: System Prompts

A System Prompt is a general instruction that sets the model’s overall behavior. Like telling an actor “in this scene, you’re playing a doctor.”

Example:

System: You are a legal advisor who explains things in simple,
understandable language for non-experts. Don't use complex legal
terminology unless necessary, and if you do, explain what it means.

User: What's the difference between a revocable
and an irrevocable power of attorney?

System Prompts set the model’s behavior across the entire conversation without needing to repeat instructions each time. You can specify expertise level, tone, language, and output format here.

Effective System Prompt tips:

  • Define a clear role: “You are a…”
  • Specify the audience: “for non-experts” or “for senior developers”
  • State constraints: “don’t use complex terminology”
  • Set output format: “give answers in bullet points” or “maximum 200 words”
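In code, a system prompt is usually sent as a separate role-tagged message rather than mixed into the user's text. The sketch below uses the widely adopted OpenAI-style chat message format (`role`/`content` dictionaries); other providers use similar but not identical structures:

```python
# Assemble a chat request with the system prompt as its own message.
SYSTEM_PROMPT = (
    "You are a legal advisor who explains things in simple, "
    "understandable language for non-experts."
)

def make_messages(system_prompt, user_question):
    """Build a message list with the system prompt first, then the user turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = make_messages(SYSTEM_PROMPT, "What is a power of attorney?")
```

Because the system message sits outside the user turns, it keeps shaping the model's behavior for the whole conversation without being repeated.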

Technique 5: Role Playing

This technique is similar to System Prompts but goes a step further: you give the model a complete, detailed persona.

Example:

You are a high school math teacher with 20 years of experience.
Your students don't like math and lack motivation.
Your teaching method is explaining concepts with real-life examples
from everyday situations. Your tone is friendly and humorous.

Now explain the concept of "probability."

The result you get is very different from a dry, formal explanation. The model genuinely tries to explain with real-life examples and a friendly tone.

Technique 6: Output Formatting

One of the most common problems is that the model’s output isn’t in your desired format. The solution: explicitly state what format you want.

Example:

List 5 popular programming languages for AI.
Give the output in JSON format with this structure:
{
  "languages": [
    {
      "name": "language name",
      "why": "reason for popularity in one sentence",
      "difficulty": "easy/medium/hard"
    }
  ]
}

When you provide a sample format, the model returns exactly that structure. This is especially useful when you want to consume the output in your own code.
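Once the model returns JSON, you can parse it directly. A minimal Python sketch (the reply is hard-coded here since no real API call is made; real replies sometimes contain extra text or malformed JSON, hence the defensive parse):

```python
import json

# Stand-in for the model's raw text response.
reply = """
{
  "languages": [
    {"name": "Python", "why": "Rich AI ecosystem.", "difficulty": "easy"}
  ]
}
"""

def parse_languages(raw: str):
    """Parse the requested JSON structure; return None if it isn't usable."""
    try:
        data = json.loads(raw)
        return data["languages"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None

languages = parse_languages(reply)
```

Returning `None` on failure lets the calling code decide whether to retry the request or fall back to a default.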

Technique 7: Iterative Refinement

You don’t always have to get everything with a single prompt. You can work in stages.

Stage 1: “Write an article about the benefits of morning exercise.”

Stage 2: “Make the tone more casual and add a personal example.”

Stage 3: “Remove the third paragraph and replace it with a scientific statistic.”

Stage 4: “Write a more engaging introduction that hooks the reader.”

Each stage improves the output. This method is especially effective for content creation.
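In code, iterative refinement amounts to a growing conversation history: each new instruction is appended as another user message and the full history is resent to the model. A minimal sketch (no real API call; the message format follows the common role/content convention):

```python
# Each refinement stage becomes one more user message in the history.
history = []

def refine(history, instruction):
    """Append the next refinement instruction to the conversation history."""
    history.append({"role": "user", "content": instruction})
    return history

refine(history, "Write an article about the benefits of morning exercise.")
refine(history, "Make the tone more casual and add a personal example.")
refine(history, "Remove the third paragraph and replace it with a scientific statistic.")
```

In a real application, the model's reply after each stage would also be appended (as an assistant message) so later instructions like "remove the third paragraph" have something to refer to.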

Technique 8: Constraint Setting

Sometimes the best approach is to say what you don’t want.

Example:

Write an explanation of blockchain.

Constraints:
- Maximum 150 words
- Write for a 15-year-old audience
- Use the analogy of money and banks
- No sentences longer than 15 words
- No technical jargon

Constraints are like railroad tracks: they prevent the model from going off course.
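You can also verify the model's output against your constraints in code. A rough Python sketch (whitespace-based word counting is an approximation, and the function name is our own):

```python
import re

def violates_constraints(text, max_words=150, max_sentence_words=15):
    """Return a list of constraint violations found in the text."""
    problems = []
    if len(text.split()) > max_words:
        problems.append("over total word limit")
    # Split into rough sentences on terminal punctuation.
    for sentence in re.split(r"[.!?]+", text):
        if len(sentence.split()) > max_sentence_words:
            problems.append(f"sentence too long: {sentence.strip()[:40]}...")
    return problems

draft = "Blockchain is like a shared notebook. Everyone holds a copy."
print(violates_constraints(draft))  # → []
```

If the check fails, you can feed the violations back to the model as a follow-up prompt ("shorten sentence X"), combining this technique with iterative refinement.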

Common Prompt Engineering Mistakes

Let’s look at what mistakes to avoid:

1. Being vague

Bad: “Tell me something interesting.”

Good: “Tell me a scientific fact about the human brain that most people don’t know.”

2. Being too long

A good prompt is concise but complete. You don't need five paragraphs of explanation; bullet-point the key points.

3. Making assumptions

The model can’t read your mind. If you have specific context, state it explicitly. Instead of “fix that article I wrote yesterday,” write: “Edit the following text for grammar: [text]”

4. Not providing context

Bad: “My code doesn’t work, why?”

Good: “The Python code below gives a TypeError on line 15. Code: [code] Error: [error message]”

5. Not asking follow-up questions

If the first answer isn’t satisfactory, modify the prompt and ask again. Add a new dimension each time.

6. Not using examples

When you want the model to follow a specific format, always provide an example. An example is worth a thousand words of explanation.

Ready-Made Prompt Templates

A few practical templates you can use directly:

For learning:

Explain the concept of [X] so that a 10-year-old could understand it.
Use an example from everyday life.
After explaining, ask me 3 questions to check my understanding.

For writing:

Write a [content type] about [topic].
Audience: [who]
Tone: [how]
Length: [how much]
Must include: [what points]
Must not include: [what to avoid]
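Templates like the writing template above map naturally onto Python format strings, so you can fill the bracketed slots programmatically. A minimal sketch (all slot values are illustrative):

```python
# The writing template with its bracketed slots turned into named placeholders.
WRITING_TEMPLATE = (
    "Write a {content_type} about {topic}.\n"
    "Audience: {audience}\n"
    "Tone: {tone}\n"
    "Length: {length}\n"
    "Must include: {include}\n"
    "Must not include: {avoid}"
)

filled_prompt = WRITING_TEMPLATE.format(
    content_type="blog post",
    topic="the benefits of morning exercise",
    audience="busy professionals",
    tone="casual and encouraging",
    length="about 500 words",
    include="one scientific statistic",
    avoid="medical advice",
)
print(filled_prompt)
```

Storing templates as constants like this makes them easy to reuse and version alongside the rest of your code.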

For debugging code:

The code below is in [language] and gives a [error type] error.
I expected [expected behavior] but [actual behavior] happens.
Please:
1. Identify the problem
2. Explain the cause
3. Write the corrected code
[code]

For analysis:

Analyze this [text/data/report].
Focus on: [specific aspect]
Give output in this format:
- Key points: [bullet points]
- Strengths: [bullet points]
- Weaknesses: [bullet points]
- Recommendations: [bullet points]

Prompt Engineering for Coding

If you’re a programmer, Prompt Engineering can multiply your productivity. Some specific tips:

Specify language and framework: Instead of “write an API,” say “write a REST API with Python and FastAPI that…”

Define input and output: “Write a function that takes a list of dictionaries and…”

State technical constraints: “Without using external libraries” or “compatible with Python 3.9+”

Ask the model to explain its code: “Write the code and explain each section with comments.”

The Future of Prompt Engineering

An important question: will Prompt Engineering always be necessary?

Models are getting better every day at understanding what you mean. But even the best models produce better results with better prompts. Think of it like driving: modern cars are much easier to drive than old ones, but you still need to know how to drive.

Prompt Engineering isn’t going away. Its form will change but the principles will remain: be clear, provide context, give examples, and state what you want.

Everyone who works with AI — from programmers to marketers, from students to managers — benefits from learning Prompt Engineering. This skill is like typing: once you learn it, you become faster and better at everything.

Start now. Write a prompt using the techniques you’ve learned and see the result. Then change it and try again. You’ll learn something new each time. Prompt Engineering is a skill that improves with practice, not just reading.