April 20, 2026 · 9 min read · By AiCensus

How to Write Better AI Prompts (Without Learning Prompt Engineering)

Most prompt engineering guides are written by people trying to sell you a $200 course. They will tell you about "chain-of-thought" and "few-shot learning" and "RAG pipelines" as if you are building a production system. You are not. You are trying to get ChatGPT to write a decent email.

Good prompting is not a technical skill. It is a communication skill. You already know most of this from explaining things to coworkers, briefing designers, or writing job descriptions. Applying it to AI takes about an hour of practice. Here is the version that actually helps.

The One Rule That Matters Most

If you remember nothing else, remember this: AI models do exactly what you ask them to do. Vague questions produce vague answers. Specific questions produce specific answers.

"Write me a blog post about marketing" and "Write a 600-word blog post explaining why small businesses should use email marketing, aimed at restaurant owners who have never used Mailchimp, with a friendly-but-not-cutesy tone and a clear call to action to sign up for my free trial" will produce wildly different outputs from the same model. The second prompt is not magic. It just tells the model what you actually want.

Most of the "prompt engineering" tricks floating around are really just techniques for being more specific without typing out 200 words every time. If you just want to get better output right now, add more detail. Everything else is optimization.

The Four Things Every Prompt Should Include

You do not need a template. But most successful prompts contain these four things, whether explicitly or implicitly.

Who you are (or who the model should be). "Act as a senior product manager" or "I am a solo founder who needs..." anchors the response. Not every prompt needs this, but when the framing matters (technical vs casual, beginner vs expert), say it.

What you want. The actual task. Be concrete about the output shape: a list, a draft email, an outline, a response under 100 words, a code function, a comparison table. Models are much better at producing specific formats than guessing.

What you are working with. Context. The product name, the audience, the constraints, the previous attempts. If you just ask for "a tagline for my business", you get generic nonsense. If you paste in your business description and two competitor taglines, you get something useful.

What good looks like. This is the one most people skip. Say what a good answer would contain, or show an example. "It should read like the kind of email my coworker would send, not like a press release" is useful information to the model.

That is it. Every effective prompt is some combination of those four things.
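If you ever script your prompts, the four parts fall out naturally as a tiny helper. This is a sketch, not any library's API; `build_prompt` and its labels are made up for illustration:

```python
def build_prompt(role="", task="", context="", success_criteria=""):
    """Assemble a prompt from the four parts; empty parts are skipped."""
    sections = [
        ("", role),                             # who you are / who the model should be
        ("Task: ", task),                       # what you want, including output shape
        ("Context: ", context),                 # what you are working with
        ("A good answer: ", success_criteria),  # what good looks like
    ]
    return "\n\n".join(label + text for label, text in sections if text)

prompt = build_prompt(
    role="I am a solo founder with no marketing background.",
    task="Write a 600-word blog post on email marketing for restaurant owners.",
    context="They have never used Mailchimp. My product offers a free trial.",
    success_criteria="Friendly but not cutesy, with a clear call to action.",
)
```

The point is not the function; it is that a prompt missing one of these sections is usually the prompt that disappoints.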

Specific Prompt Moves That Work

These are the techniques I actually use daily. Not the flashy ones. The boring ones.

Tell it what you do not want. "Do not use corporate speak" or "Avoid the phrase 'in today's fast-paced world'" is often more useful than positive descriptions. Models default to a certain style, and explicit negative guidance pushes them off it.

Give it an example. Even one example works wonders. "Here is a tweet I wrote last week that performed well: [paste]. Write five new ones in the same voice about this topic: [topic]." The model will actually mimic the voice, which is much harder to describe than to show.

Ask for variations, not a single answer. "Give me 10 subject lines" will produce better individual subject lines than asking for one, because the model has to differentiate them. You can pick the best or combine elements. This works for almost any short-form output.

Iterate in the same conversation. Do not start over. Say "Make it shorter", "Rewrite more casually", "Change the opening to start with a question". Models keep the full context of a conversation, and incremental edits get you where you want faster than rewriting the prompt from scratch.

Ask the model to ask you questions. "Before you write this, ask me any clarifying questions." This turns a one-shot prompt into a brief, which is how a good freelancer would approach the same task. Works especially well for complex writing jobs.

Let the model structure its own reasoning. For anything involving analysis or decisions, "Think through this step by step before giving your final answer" genuinely improves output quality. For long, complex tasks, this is the single most useful prompt technique.

Show the model what the audience cares about. Not "write for marketers" but "write for a marketing director who has tried three tools already, is skeptical of AI claims, and cares about measurable ROI". The specificity unlocks a more targeted response.

Prompts for Common Tasks

Since most people use AI tools for the same handful of things, here are the templates I keep coming back to.

Drafting an email you are dreading. "I need to reply to this email: [paste]. I want to [say no politely / push back on the timeline / ask for more information]. Keep it under 100 words. Professional but not stiff. Do not start with 'Thank you for reaching out'."

Summarizing a long document. "Summarize this document in 150 words for someone who needs the key decisions and implications, not the full context. Bullet points are fine if they help. At the end, list three questions a thoughtful reader might ask. Here is the document: [paste]."

Brainstorming names or headlines. "I need 20 [names/headlines/taglines] for [thing]. The audience is [audience]. Constraints: [must be under X words, must not use Y]. After the list, pick your top 3 and briefly explain why each one works."

Code help. "I have this [language] code: [paste]. It is doing [expected behavior] but the result is [actual behavior]. Walk me through what is happening, then suggest a fix with a brief explanation."

Getting unstuck on writing. "Here is a draft paragraph that is not working: [paste]. The problem is [it feels generic / too long / wrong tone / buried lead]. Rewrite it three different ways so I can compare."

Making a decision. "I am deciding between [option A] and [option B] for [context]. Here are the factors I care about: [list]. Walk me through the trade-offs and make a recommendation. Be opinionated."

None of these are clever. They just include the four things from earlier.
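If you find yourself reusing one of these templates, it drops straight into an f-string so you only fill in the brackets each time. The function name and parameters here are illustrative, not from any tool:

```python
def email_reply_prompt(original_email, goal, max_words=100):
    """Fill in the 'email you are dreading' template from above."""
    return (
        f"I need to reply to this email: {original_email}\n"
        f"I want to {goal}. Keep it under {max_words} words. "
        "Professional but not stiff. "
        "Do not start with 'Thank you for reaching out'."
    )

p = email_reply_prompt("[paste]", "say no politely")
```

A saved snippet like this is the low-tech version of a custom GPT or saved prompt: same template, filled in fresh each time.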

What to Do When the Output Is Wrong

Your first response to a bad AI output should not be "this tool is broken". It should be "what did I not tell it?"

Walk back through your prompt. Usually one of these is the problem:

  1. You were too vague about the task. "Help me with my resume" is not a prompt. "Improve the wording of this bullet point to emphasize measurable results: [paste]" is a prompt.
  2. You did not give it enough context. The model does not know who you are, what your company does, or who you are writing to unless you tell it.
  3. You did not specify the format. "Write something" is open-ended. "Write three paragraphs" is bounded.
  4. You are asking the wrong model. Claude is better at long nuanced writing. ChatGPT is better at structured tasks and has more tool access. Gemini is better at live-research tasks. Task mismatch causes a lot of frustration.

If the output is factually wrong, that is a separate concern: see our guide on AI hallucinations. Bad facts are a different problem than a bad prompt.

The Iteration Trick Most People Miss

Here is the thing that changed how I use these tools: you do not have to get the prompt right on the first try. You should not even try to.

Start with a rough prompt. See what comes back. Correct it in the same conversation. Do it again. By iteration three or four you will have an output that would have taken you 20 minutes to describe upfront, and it only took 2 minutes of back-and-forth.
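If you use these models through an API, iterating in the same conversation just means appending to one message list instead of starting a fresh one. The role/content dicts below follow the common chat-API shape; the assistant replies are placeholders, not real model output:

```python
# One conversation = one growing message list. The model sees the
# full history on every turn, so corrections can stay short.
messages = [{"role": "user", "content": "Draft a launch email for my budgeting app."}]
messages.append({"role": "assistant", "content": "<first draft>"})

# Instead of rewriting the original prompt, send incremental edits:
for tweak in ["Make it shorter.", "Rewrite more casually.",
              "Change the opening to start with a question."]:
    messages.append({"role": "user", "content": tweak})
    messages.append({"role": "assistant", "content": f"<revision after: {tweak}>"})
```

Each tweak rides on top of everything before it, which is exactly what the chat interfaces do for you automatically.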

Models are conversational for a reason. The ChatGPT and Claude interfaces are designed around this. Use them that way.

What Prompt Engineering Actually Matters For

For day-to-day personal use, everything above covers 95% of what you need. The technical prompt engineering stuff ("few-shot learning", "constitutional AI", "retrieval augmented generation") starts to matter when you are:

  • Building an app that uses an AI API for thousands of users.
  • Trying to get consistent output across many runs.
  • Working with specialized models for domain tasks.
  • Doing research on model behavior.

If that is you, great, go read the academic papers. If you are just trying to write better emails, draft blog posts, or debug code, you are done after this post.

One Last Thing About Specificity

I said specific prompts produce specific answers. There is a useful, slightly counterintuitive corollary: asking for something very specific is usually easier than asking for something good.

"Write a good blog post about productivity" is hard because "good" is undefined. "Write a 500-word blog post intro that opens with a concrete scenario from a remote worker's morning, uses short paragraphs, and ends with a question that makes the reader want to keep reading" is easier because every constraint is checkable.

This is why professional writers get more out of AI than hobbyists. They already know what good looks like, so they can describe it. If you want to prompt better, spend more time thinking about what you actually want before you type.

Want to try these techniques with different tools? Browse our chatbots directory to explore alternatives, or try Poe if you want to test the same prompt across multiple models in one interface. For specific workflows, the curated stacks on AiCensus show which tools pair well for common build patterns.

Type better prompts. Get better output. That is the whole trick.