
Prompt Frameworks Compared: COSTAR, RISEN, Chain-of-Thought & More

A side-by-side comparison to help you pick the right framework for any AI task

Too Many Frameworks, Not Enough Clarity

COSTAR. RISEN. APE. CRISPE. RACE. TIDD-EC. The prompt engineering world has produced dozens of frameworks, each with its own acronym and its own evangelists. If you’ve tried to figure out which one to learn, you’ve probably wasted more time reading about frameworks than actually writing prompts.

Here’s the truth: you do not need to learn them all. Most overlap significantly. Many are variations on the same core ideas. What you need is a clear understanding of 3–4 distinct approaches and a reliable way to decide which one fits your task.

This page gives you that. No fluff, no exhaustive taxonomy. Just a practical comparison of the four frameworks that cover nearly every real-world prompting scenario.

The Four Frameworks Worth Knowing

Out of the dozens of prompt frameworks available, four stand out as genuinely distinct approaches. Each solves a different problem, and together they handle virtually any AI task you will encounter.

1. COSTAR
Context, Objective, Style, Tone, Audience, Response. Best for comprehensive, well-structured outputs where you need control over every aspect of the result.

2. RISEN
Role, Instructions, Steps, End Goal, Narrowing. Best for multi-step processes and workflows where order matters and the AI needs a defined path to follow.

3. Chain-of-Thought
Ask the AI to reason step-by-step before answering. Best for analysis, math, logic problems, and any task where the thinking process matters as much as the answer.

4. Few-Shot
Provide examples of the input-output pattern you want. Best for classification, formatting, style matching, and any task where showing beats telling.

Insight

These four frameworks cover roughly 95% of real-world prompting needs. Other frameworks (APE, CRISPE, RACE, etc.) are mostly variations of COSTAR or RISEN with different labels on similar concepts.

Side-by-Side Comparison

This table gives you the full picture at a glance. Bookmark it and come back when you’re deciding which framework to use for a new task.

| Criteria | COSTAR | RISEN | Chain-of-Thought | Few-Shot |
| --- | --- | --- | --- | --- |
| Full Name | Context, Objective, Style, Tone, Audience, Response | Role, Instructions, Steps, End Goal, Narrowing | Step-by-step reasoning before answering | Learning from provided input-output examples |
| Best For | Content creation, marketing copy, business writing | Multi-step workflows, SOPs, process documentation | Analysis, math, logic, debugging, decision-making | Classification, formatting, style matching, data extraction |
| Learning Curve | Low | Low-Medium | Very Low | Low |
| Unique Strength | Total control over output style and format | Clear sequencing and role assignment | Forces transparent reasoning, catches errors | Shows rather than tells, high consistency |
| Weakness | Can feel verbose for simple tasks | Less suited for creative or open-ended work | Adds length; not useful for simple generation | Requires good examples; garbage in, garbage out |
| Ideal User | Marketers, writers, business professionals | Operations managers, project leads, consultants | Analysts, developers, researchers | Data teams, QA engineers, content ops |
| Time to Write | 3-5 minutes | 3-5 minutes | 1-2 minutes | 2-10 minutes (depends on examples) |
| Works Best With | All models (GPT-4, Claude, Gemini) | All models; shines with instruction-tuned models | Stronger models (GPT-4, Claude, Gemini Pro) | All models, including smaller/faster ones |

COSTAR: When Structure Is Everything

COSTAR is the most comprehensive general-purpose framework. Its six elements — Context, Objective, Style, Tone, Audience, Response — give you fine-grained control over every dimension of the AI’s output.

It excels at content creation tasks where you care about both what the AI says and how it says it. Marketing emails, blog posts, business reports, product descriptions — any task where style and audience matter as much as substance.

Where COSTAR falls short: purely analytical tasks where style is irrelevant, or simple questions where the overhead is not worth it. You do not need six elements to ask “What is the capital of France?”

COSTAR Example: Agency Proposal
CONTEXT:
I run a 15-person digital agency. We need to pitch a website redesign to a mid-size e-commerce client whose current site has a 70% bounce rate.

OBJECTIVE:
Write a one-page proposal summary that highlights the business case for a redesign, using their bounce rate and industry benchmarks.

STYLE:
Professional and data-driven. Short paragraphs, clear section headers.

TONE:
Confident and consultative. Not salesy.

AUDIENCE:
The client's VP of Marketing, who reports to the CEO and needs internal buy-in.

RESPONSE FORMAT:
- Executive summary (2-3 sentences)
- Problem statement with data
- Proposed approach (3 bullet points)
- Expected outcomes with metrics
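
If you build prompts in code rather than by hand, the six elements map naturally onto a small template helper: each element becomes a labeled section. Here is a minimal Python sketch, assuming you assemble the prompt as plain text before sending it to whatever model you use (the function name and dictionary keys are illustrative, not part of the COSTAR framework itself):

```python
def build_sectioned_prompt(sections: dict[str, str]) -> str:
    """Join labeled sections (e.g. COSTAR's six elements) into one prompt string."""
    return "\n\n".join(
        f"{label.upper()}:\n{text.strip()}" for label, text in sections.items()
    )

# The agency-proposal example above, condensed into a dictionary
costar_prompt = build_sectioned_prompt({
    "context": "I run a 15-person digital agency pitching a website redesign "
               "to a mid-size e-commerce client whose site has a 70% bounce rate.",
    "objective": "Write a one-page proposal summary highlighting the business case, "
                 "using their bounce rate and industry benchmarks.",
    "style": "Professional and data-driven. Short paragraphs, clear section headers.",
    "tone": "Confident and consultative. Not salesy.",
    "audience": "The client's VP of Marketing, who needs internal buy-in.",
    "response format": "Executive summary, problem statement with data, "
                       "proposed approach, expected outcomes with metrics.",
})
print(costar_prompt)
```

The same helper works for RISEN or any other labeled-section framework; only the dictionary keys change.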

Read the full COSTAR guide →

RISEN: When Process Matters

RISEN stands for Role, Instructions, Steps, End Goal, and Narrowing. It is built around the idea that AI performs best when given a clear identity and a sequential path to follow.

This framework shines for operational tasks: creating standard operating procedures, onboarding plans, project workflows, or any multi-step process where order is critical. The “Steps” element forces you to think through the sequence, and “Narrowing” keeps the AI from going off track.

RISEN is less effective for creative writing or open-ended brainstorming where you want the AI to explore freely rather than follow a defined path.

RISEN Example: Onboarding Plan
ROLE:
You are a senior HR operations specialist with expertise in employee onboarding.

INSTRUCTIONS:
Create a 30-day onboarding plan for new software engineers joining a remote-first startup.

STEPS:
1. Week 1: Equipment setup, account access, team introductions
2. Week 2: Codebase orientation, pair programming sessions
3. Week 3: First small ticket, code review process
4. Week 4: Independent work, 30-day check-in preparation

END GOAL:
The new hire should be able to independently pick up and complete standard tickets by day 30.

NARROWING:
Focus on engineering-specific onboarding. Do not cover general company orientation (HR handles that separately). Assume the team uses GitHub, Slack, and Linear.

Read the full RISEN guide →

Chain-of-Thought: When Reasoning Is Required

Chain-of-Thought (CoT) prompting is the simplest framework conceptually: ask the AI to think step-by-step before giving its final answer. That’s it. No acronym to memorize, no template to fill in.

But the results are striking. For tasks involving analysis, logic, math, or complex reasoning, CoT consistently outperforms direct prompting. The AI catches its own errors, considers edge cases, and arrives at more accurate conclusions.

CoT is not useful for simple generation tasks. If you need a marketing email, you do not need the AI to “reason through” it — you need COSTAR. Use CoT when the quality of the thinking determines the quality of the output.

Chain-of-Thought Example: Revenue Analysis
Our SaaS product has three pricing tiers: Starter ($29/mo), Growth ($79/mo), and Enterprise ($199/mo). Last quarter, we had 200 Starter users, 80 Growth users, and 15 Enterprise users. This quarter, 30 Starter users upgraded to Growth, 10 Growth users upgraded to Enterprise, and we lost 25 Starter users entirely.

Think through this step-by-step:
1. Calculate last quarter's MRR
2. Calculate this quarter's MRR after all changes
3. Break down the MRR change by component (new, expansion, contraction, churn)
4. Identify which tier transition had the biggest revenue impact
5. Recommend where to focus retention efforts and why
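
As a sanity check on steps 1 through 3, here is the arithmetic a correct step-by-step answer should arrive at, computed directly from the numbers in the prompt (a quick Python check, not something you would include in the prompt itself):

```python
# Tier prices and last quarter's user counts (from the prompt above)
prices = {"starter": 29, "growth": 79, "enterprise": 199}
last_q = {"starter": 200, "growth": 80, "enterprise": 15}

# This quarter: 30 Starter->Growth upgrades, 10 Growth->Enterprise upgrades, 25 Starter churned
this_q = {
    "starter": 200 - 30 - 25,   # 145
    "growth": 80 + 30 - 10,     # 100
    "enterprise": 15 + 10,      # 25
}

mrr_last = sum(prices[t] * n for t, n in last_q.items())   # $15,105
mrr_this = sum(prices[t] * n for t, n in this_q.items())   # $17,080

expansion = 30 * (79 - 29) + 10 * (199 - 79)   # +$2,700 from upgrades
churn = 25 * 29                                # -$725 from lost Starter users

print(mrr_last, mrr_this, mrr_this - mrr_last)  # 15105 17080 1975
print(expansion - churn)                        # 1975 (components reconcile)
```

The single biggest component is the 30 Starter-to-Growth upgrades (+$1,500), which is the kind of finding step 4 should surface.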

Read the full Chain-of-Thought guide →

Few-Shot: When Examples Speak Louder

Few-shot prompting skips lengthy instructions in favor of examples. You show the AI 2–5 input-output pairs, then give it a new input and let it follow the pattern. The AI infers the rules from the examples rather than from explicit directions.

This approach is remarkably effective for classification, data extraction, format conversion, and style matching. It works especially well when the task is hard to describe in words but easy to demonstrate.

The downside: your examples must be high quality. If your examples contain inconsistencies or errors, the AI will replicate those too. And for tasks that require deep reasoning, examples alone are not enough — you need Chain-of-Thought.

Few-Shot Example: Support Ticket Classification
Classify each customer support message as one of: billing, technical, feature-request, or general.

Examples:
Message: "I was charged twice this month for my subscription."
Category: billing

Message: "The export button gives me a 500 error when I click it."
Category: technical

Message: "It would be great if you added dark mode."
Category: feature-request

Message: "What are your office hours?"
Category: general

Now classify:
Message: "My invoice shows the wrong plan name but the amount is correct."
Category:
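
Few-shot prompts like this are also easy to assemble programmatically, which keeps the examples consistent as your label set grows. A minimal Python sketch, assuming your labeled examples live in a simple list (the names here are illustrative):

```python
LABELS = ["billing", "technical", "feature-request", "general"]

EXAMPLES = [
    ("I was charged twice this month for my subscription.", "billing"),
    ("The export button gives me a 500 error when I click it.", "technical"),
    ("It would be great if you added dark mode.", "feature-request"),
    ("What are your office hours?", "general"),
]

def build_few_shot_prompt(new_message: str) -> str:
    """Assemble the classification prompt from labeled examples plus the new input."""
    parts = [
        f"Classify each customer support message as one of: {', '.join(LABELS)}.",
        "",
        "Examples:",
    ]
    for message, category in EXAMPLES:
        parts += [f'Message: "{message}"', f"Category: {category}", ""]
    parts += ["Now classify:", f'Message: "{new_message}"', "Category:"]
    return "\n".join(parts)

print(build_few_shot_prompt("My invoice shows the wrong plan name but the amount is correct."))
```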

Read the full Few-Shot guide →

Decision Flowchart

Not sure which framework to use? Walk through these questions in order. The first “yes” gives you your answer.

“Do I need the AI to reason through a problem or show its work?”

Yes → Chain-of-Thought. Ask it to think step-by-step before answering.

“Do I need a specific output format, writing style, or classification?”

Yes → Few-Shot. Show 2–5 examples of the input-output pattern you want.

“Is this a multi-step process with a defined workflow?”

Yes → RISEN. Define the role, steps, and constraints explicitly.

“Do I need a comprehensive, well-structured response with control over style and tone?”

Yes → COSTAR. Fill in all six elements for maximum control.

Still not sure?

Start with COSTAR. It is the most versatile framework and works well for the widest range of tasks. You can always switch if you find a better fit.
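
If you ever want to encode this decision order in a script or an internal tool, it reduces to a few ordered checks with COSTAR as the fallback. A minimal Python sketch (the function and parameter names are illustrative, not an established API):

```python
def pick_framework(
    needs_reasoning: bool,        # reason through a problem or show its work?
    needs_pattern: bool,          # specific format, style, or classification?
    is_multistep_process: bool,   # defined workflow with ordered steps?
) -> str:
    """Walk the decision flowchart in order; the first 'yes' wins."""
    if needs_reasoning:
        return "Chain-of-Thought"
    if needs_pattern:
        return "Few-Shot"
    if is_multistep_process:
        return "RISEN"
    return "COSTAR"  # default: comprehensive output with style and tone control
```

The order mirrors the flowchart: the first "yes" wins, and COSTAR is the default when nothing else matches.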

Combining Frameworks

Here is the insight most framework guides miss: these approaches are not mutually exclusive. You can combine them when a single framework does not give you what you need.

The most common combinations:

  • COSTAR + Chain-of-Thought: When you need structured output that also requires deep analysis. Use COSTAR for the format and CoT for the thinking.
  • RISEN + Few-Shot: When you have a sequential process but want consistent formatting at each step. Define the process with RISEN, show examples with Few-Shot.
  • Chain-of-Thought + Few-Shot: When you want the AI to reason through a problem but follow a specific reasoning pattern. Show examples of the reasoning steps you expect.

Combined Example: COSTAR + Chain-of-Thought
CONTEXT:
I'm evaluating three project management tools (Asana, Linear, Monday) for a 30-person engineering team. Budget is $500/month. We need Jira migration support, GitHub integration, and sprint planning.

OBJECTIVE:
Analyze each tool against our requirements and recommend the best fit.

STYLE:
Analytical and structured. Use a comparison format.

AUDIENCE:
Engineering leadership team making the final purchasing decision.

RESPONSE FORMAT:
For each tool, provide: pricing fit, feature match (scored 1-5 per requirement), migration complexity, and overall recommendation.

REASONING:
Think through each tool's strengths and weaknesses step-by-step before scoring. Explain your reasoning for each score so we can validate your analysis.

Warning

Do not over-engineer. Combine frameworks only when a single one is not getting the results you need. A well-written COSTAR prompt is better than a sloppy COSTAR-CoT-Few-Shot hybrid.

Before & After

See the difference between an unstructured prompt and the same request using the right framework. This example uses COSTAR because the task — competitive analysis writing — needs control over structure, tone, and audience.

Before
Write a competitive analysis of our product vs the top 3 competitors.
After
CONTEXT:
We sell an AI-powered prompt builder (AskSmarter.ai) in a market with established competitors: PromptPerfect, FlowGPT, and PromptBase. Our differentiator is guided prompt construction through smart questions rather than manual template editing. We launched 6 months ago and have 2,000 active users.

OBJECTIVE:
Write a competitive analysis comparing our product against the three competitors across five dimensions: ease of use, output quality, pricing, target audience, and unique value proposition.

STYLE:
Direct and analytical. Use a comparison table for the five dimensions, followed by a narrative summary of our competitive position.

TONE:
Honest and balanced. Acknowledge competitor strengths. Do not spin weaknesses as advantages.

AUDIENCE:
Our founding team (CEO, CTO, Head of Product) preparing for a board meeting. They need facts, not cheerleading.

RESPONSE FORMAT:
1. Comparison table (5 rows x 4 columns)
2. Key findings (3-4 bullet points)
3. Strategic recommendations (2-3 sentences)
4. Biggest competitive risk (1 paragraph)

Success

The structured prompt takes 3–5 minutes to write but produces board-ready output on the first attempt. The unstructured version would need multiple rounds of follow-up to get anything usable.

Quick Reference

Keep this table handy. When you start a new AI task, match it to the right framework.

| If Your Task Is… | Use This | Because… |
| --- | --- | --- |
| Writing marketing copy or emails | COSTAR | You need control over style, tone, and audience |
| Creating an SOP or onboarding plan | RISEN | Sequential steps and defined end goals are critical |
| Analyzing data or making a decision | Chain-of-Thought | The reasoning process determines the quality of the answer |
| Classifying or extracting structured data | Few-Shot | Examples define the pattern better than instructions |
| Writing a business report or proposal | COSTAR | Multiple output dimensions (format, tone, audience) to control |
| Debugging code or finding logic errors | Chain-of-Thought | Step-by-step reasoning catches what direct answers miss |
| Matching a specific writing tone or format | Few-Shot | Showing the desired output is faster than describing it |
| Building a project plan or workflow | RISEN | Role assignment and step sequencing keep the output on track |

Next Steps

Now that you know which framework to use and when, dive deeper into the ones most relevant to your work using the full guides linked in each section above.

Or skip the manual framework application entirely and let AskSmarter do it for you. Our prompt builder applies the right framework automatically based on your task.

Stop choosing frameworks. Start getting results.

AskSmarter asks you smart questions about your task, then applies the right framework (or combination of frameworks) automatically. You get optimized prompts without memorizing a single acronym.

Start building free