Foundational Techniques
These build the core structure of a good prompt.
Trick | Description | Example Prompt | Why It's Useful |
---|---|---|---|
Role Prompting | Assign the AI a specific persona or expert role to guide its tone, knowledge, and approach. This steers the model toward the relevant vocabulary, tone, and domain knowledge. | "You are a senior beauty editor at Vogue. Write a product description for a luxury skincare brand targeting women over 35, focused on hydration and clean ingredients." | Aligns responses with expertise; reduces generic outputs. Great for creative or specialized tasks. |
Context Provision | Provide detailed background information upfront to set the stage, including goals, audience, or constraints. | "I'm building a landing page for a no-code app that helps freelancers manage clients. Goal: convert free users. Here's what I've done so far... Write 3 subject lines." | Prevents misunderstandings; improves relevance and depth, especially for ambiguous queries. |
Structured Prompts | Use formats like JSON, XML, or bullet points to organize the prompt, making it easier for the AI to follow. | "Output in JSON: {'hook': 'Attention-grabber', 'insight': 'Key point', 'cta': 'Call to action'} for a marketing email." | Ensures consistent, parsable outputs; ideal for automation or data extraction (see the sketch after this table). |
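The role and structured-output rows lend themselves to a quick code sketch. Below is a minimal example, assuming the OpenAI Python SDK (openai>=1.0); the model name is illustrative, the persona and JSON keys come from the table, and the same pattern works with any chat-completion API.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role Prompting: the persona lives in the system message.
SYSTEM_ROLE = "You are a senior beauty editor at Vogue."

# Structured Prompt: the output schema is spelled out in the user message
# so the reply can be parsed directly downstream.
PROMPT = (
    "Write copy for a luxury skincare brand targeting women over 35, "
    "focused on hydration and clean ingredients.\n"
    "Return ONLY valid JSON with exactly these keys:\n"
    '{"hook": "attention-grabber", "insight": "key point", "cta": "call to action"}'
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_ROLE},
        {"role": "user", "content": PROMPT},
    ],
)

# In practice, guard against extra prose around the JSON before parsing.
copy_block = json.loads(resp.choices[0].message.content)
print(copy_block["hook"], copy_block["cta"], sep="\n")
```

Putting the role in the system message keeps it in effect across turns, while keeping the schema in the user message makes the reply easy to parse and swap out per task.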
Example-Based Techniques
Leverage demonstrations to teach the AI what you want.
Trick | Description | Example Prompt | Why It's Useful |
---|---|---|---|
Few-Shot Prompting | Include 2-5 examples of the desired input-output pairs to "teach" the AI the pattern without full training. | "Ticket: 'I can’t reset my password.' Category: Account Access --- Ticket: 'The app crashes when I upload a file.' Category: Bug Report --- Ticket: 'I want to upgrade to the Pro plan.' Category:" | Often improves accuracy dramatically over zero-shot for classification or generation tasks; no fine-tuning required (see the sketch after this table). |
Zero-Shot Prompting | Directly describe the task without examples, relying on the model's pre-trained knowledge. | "Summarize this article in 3 bullet points." | Quick for simple, well-known tasks; saves time when examples aren't available. |
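Few-shot prompts are usually assembled programmatically from a list of labeled examples. Here is a minimal sketch reusing the ticket example from the table, under the same OpenAI SDK assumption as above; the helper name, model, and category labels are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Labeled demonstrations, concatenated into the prompt so the model
# can infer the ticket -> category pattern.
EXAMPLES = [
    ("I can't reset my password.", "Account Access"),
    ("The app crashes when I upload a file.", "Bug Report"),
]

def classify_ticket(ticket: str) -> str:
    """Build a few-shot prompt from EXAMPLES and classify a new ticket."""
    shots = "\n---\n".join(f"Ticket: '{t}'\nCategory: {c}" for t, c in EXAMPLES)
    prompt = (
        "Classify each support ticket into a category, following the examples.\n"
        f"{shots}\n---\nTicket: '{ticket}'\nCategory:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

print(classify_ticket("I want to upgrade to the Pro plan."))
```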
Reasoning and Refinement Techniques
Encourage logical thinking and iteration.
Trick | Description | Example Prompt | Why It's Useful |
---|---|---|---|
Chain-of-Thought (CoT) | Instruct the AI to reason step-by-step, breaking down complex problems. | "Let’s think step by step: Should I open a coffee shop in Goa? Consider footfall, rent, seasonality, and competition." | Can substantially improve accuracy on multi-step problems; vital for math, logic, or strategy (see the two-pass sketch after this table). |
Self-Criticism | Ask the AI to review and critique its own response for improvements. | "Here’s my answer. Critique it for errors or weak points, then revise it." | Increases reliability; catches errors in high-stakes or logic-heavy outputs. |
Decomposition | Break a big task into sub-tasks or sub-problems for sequential solving. | "What are the sub-problems in planning a marketing campaign? Solve each one step-by-step." | Handles complexity; reduces overwhelm for multi-step queries. |
Maieutic Prompting | Probe the AI's beliefs by questioning assumptions, like a Socratic dialogue, to refine answers. | "Explain why X is true. Now, assume it's false and explain why. Synthesize the best view." | Uncovers nuances and reduces biases; great for controversial or uncertain topics. |
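Chain-of-Thought and Self-Criticism combine naturally into a two-pass loop: one call drafts the reasoning, a second call critiques and revises it. A minimal sketch under the same OpenAI SDK assumption; call_llm is an illustrative helper, not a library function.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def call_llm(prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return the text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = (
    "Should I open a coffee shop in Goa? "
    "Consider footfall, rent, seasonality, and competition."
)

# Pass 1 (Chain-of-Thought): ask for explicit step-by-step reasoning.
draft = call_llm(f"Let's think step by step.\n{question}")

# Pass 2 (Self-Criticism): feed the draft back for a critique and revision.
revised = call_llm(
    "Here is a draft answer:\n"
    f"{draft}\n\n"
    "Critique it: list any gaps, errors, or unstated assumptions, "
    "then write an improved final answer."
)

print(revised)
```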
Advanced Optimization Techniques
For scaling, creativity, or edge cases.
Trick | Description | Example Prompt | Why It's Useful |
---|---|---|---|
Ensemble Prompting | Generate multiple responses (3-5) and vote/select/refine the best one. | "Generate 3 variations of this summary. Rank them by clarity and pick the best." | Counters inconsistency; yields higher-quality final outputs through comparison. |
Constraints and Responsibilities | Specify limits like length, tone, or rules to avoid unwanted outputs. | "Keep responses under 100 words. Always be empathetic and factual. No made-up data." | Controls output style; curbs rambling and discourages fabricated details. |
Meta-Prompting | Prompt the AI to improve or generate better prompts first. | "Rewrite this vague prompt to be detailed and effective: 'Write a blog post.'" | Automates prompt optimization; useful for beginners or iterative workflows. |
Panel of Experts | Simulate multiple perspectives (e.g., disciplines or personas) and synthesize. | "Answer from a historian, economist, and scientist's view. Then synthesize." | Broadens insights; ideal for multifaceted questions like policy or innovation. |
Iterative Follow-Ups | Treat prompting as a conversation: refine based on initial outputs. | "Make this more Gen Z. Add urgency. Format as a tweet thread." | Builds on responses; turns mediocre outputs into elite ones dynamically. |
Best of N | Run the same prompt multiple times (usually via a script or tool, since a single chat turn can't rerun itself) and select or merge the top results. | "Run this query 5 times. Score each on accuracy and pick the highest." | Mitigates variability; ensures robust results for critical tasks (see the sketch after this table). |
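Best of N (and Ensemble Prompting more generally) is typically orchestrated outside the chat window: a loop generates candidates and a separate judge call picks a winner. A minimal sketch under the same OpenAI SDK assumption; N, the temperatures, and the ranking rubric are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def call_llm(prompt: str, temperature: float = 0.9) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",      # illustrative model name
        temperature=temperature,  # higher temperature -> more varied candidates
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

TASK = "Summarize this article in 3 bullet points: <article text here>"
N = 3

# Best of N: run the same prompt several times to get varied candidates...
candidates = [call_llm(TASK) for _ in range(N)]

# ...then use a low-temperature judge call to rank them and keep the best one.
numbered = "\n\n".join(f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates))
verdict = call_llm(
    "Rank the following candidate summaries by clarity and accuracy, "
    "then reply with the full text of the single best one.\n\n" + numbered,
    temperature=0.0,
)
print(verdict)
```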
Final Tips
These tricks are adaptable: combine them (e.g., Role + CoT + Few-Shot, sketched below) for even better results. Start with simple ones like Role Prompting, then experiment. For more depth, check resources like Anthropic's guides or OpenAI's playground.
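To close, here is a minimal sketch of that Role + CoT + Few-Shot combination in a single call, under the same OpenAI SDK assumption as the earlier sketches; the persona, the worked example, and the model name are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Role Prompting: persona in the system message.
SYSTEM = "You are a seasoned small-business consultant."

# Few-Shot: one worked example showing the expected reasoning-then-verdict shape.
SHOT = (
    "Question: Should I open a food truck in a college town?\n"
    "Reasoning: Steady student footfall, low rent vs. a storefront, "
    "demand dips over summer break, moderate competition.\n"
    "Verdict: Yes, if the menu and hours target term-time students."
)

# Chain-of-Thought: ask for the same step-by-step reasoning before the verdict.
QUESTION = (
    "Question: Should I open a coffee shop in Goa? "
    "Consider footfall, rent, seasonality, and competition.\n"
    "Reasoning (think step by step), then Verdict:"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"{SHOT}\n---\n{QUESTION}"},
    ],
)
print(resp.choices[0].message.content)
```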