Why this is hard to get right
A Training Manager Tries to Build a Module in an Afternoon
Maya is a senior training manager at a mid-size SaaS company. Her team just promoted 14 individual contributors into first-time manager roles. The VP of People wants a self-paced module on time management ready in two weeks, delivered through their LMS, capped at 45 minutes.
Maya knows the topic cold. She's run live workshops on it for years. But translating that expertise into a structured, self-paced module is a different skill set — one she doesn't have time to relearn from scratch.
She opens a blank doc and types a rough outline. She gets three bullet points before she stalls. What comes first — objectives or scenarios? How many activities fit in 45 minutes? Does she write the quiz now or after the content?
She tries an AI assistant. She types: "Create an online course module about time management with some lessons and activities."
The output looks impressive at first. But when she reads carefully, it's generic. The "learner" could be anyone. The activities are surface-level. The quiz questions don't connect to real manager situations. There's no timing guidance. She'd need to rebuild most of it anyway.
The problem wasn't the AI. It was the prompt.
Maya rewrites with more precision. She specifies the audience — new managers, promoted from individual contributor roles. She locks in the 45-minute constraint, with a specific time split across intro, content, and practice. She requests real workplace scenarios, not hypothetical ones. She defines quiz parameters: six questions, four multiple-choice, two short answer, with an answer key. She sets the tone: practical, supportive, direct.
The next output is structurally complete. The learning outcomes use action verbs and map to real on-the-job behavior. The scenarios involve a missed deadline and a direct report who overpromises — exactly the situations her new managers face. The quiz tests application, not recall.
Maya still edits. She adjusts one scenario to match her company's specific context. She swaps one multiple-choice question for something more challenging. But she's editing a solid draft — not building from nothing.
The difference between the two outputs wasn't effort. It was input quality. A prompt that specifies audience, timing, scenario type, assessment format, and tone gives the AI the constraints it needs to make real decisions. Without those constraints, it makes guesses — and those guesses default to the average, not the specific.
For instructional designers, training managers, and L&D leads, the prompt is the design brief. The more complete the brief, the less rework on the back end.
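To make "the prompt is the design brief" concrete, here is a minimal sketch of a brief-to-prompt template in Python. The `ModuleBrief` fields and the `build_module_prompt` helper are hypothetical illustrations, not part of any real tool; the point is that every design decision Maya made becomes an explicit, required slot.

```python
# A minimal sketch: the prompt as a structured design brief.
# All field names and the helper itself are illustrative, not from any real tool.
from dataclasses import dataclass

@dataclass
class ModuleBrief:
    topic: str
    audience: str          # role + experience level + prior knowledge
    total_minutes: int
    time_split: str        # how minutes divide across intro/content/practice
    scenario_spec: str     # type and count of scenarios
    quiz_spec: str         # question count, types, answer key
    tone: str
    output_format: str

def build_module_prompt(brief: ModuleBrief) -> str:
    """Assemble a complete design brief into a single prompt string."""
    return (
        f"You're an instructional designer. Create a single online module "
        f"outline on {brief.topic} for {brief.audience}.\n"
        f"1. Define 3 measurable learning outcomes using action verbs.\n"
        f"2. Provide a {brief.total_minutes}-minute structure: {brief.time_split}.\n"
        f"3. Include {brief.scenario_spec}.\n"
        f"4. Add {brief.quiz_spec}.\n"
        f"Tone: {brief.tone}. Format as {brief.output_format}. "
        f"Avoid theory-heavy language."
    )

# Maya's second, specific prompt falls out of a fully populated brief:
maya = ModuleBrief(
    topic="time management",
    audience="new managers promoted from individual contributor roles",
    total_minutes=45,
    time_split="5-min intro, 30-min content, 10-min practice",
    scenario_spec="2 real workplace scenarios with a guided reflection for each",
    quiz_spec=("1 short quiz (6 questions: 4 multiple-choice, 2 short answer) "
               "with an answer key"),
    tone="practical, supportive, direct",
    output_format="headings + bullet points",
)
print(build_module_prompt(maya))
```

Leaving any field empty is exactly the failure mode described above: the AI fills the blank slot with a guess, and the guess defaults to the average.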
Common mistakes to avoid
Skipping Learner Context Entirely
Writing 'create a module on X' without naming the audience forces the AI to invent a learner profile. It defaults to a generic adult professional, which produces shallow content. Specify role, experience level, and prior knowledge — for example, 'new managers promoted from individual contributor roles with no formal leadership training' — to get depth and examples that actually fit.
Omitting Time and Pacing Constraints
Without a time limit, the AI will fill whatever space it imagines is appropriate — often producing a module that would take 90 minutes to complete when you need 30. Always state total time and how it should split across intro, core content, and practice. The AI uses those numbers as hard constraints, not suggestions.
Asking for 'Activities' Without Specifying Type
The word 'activities' is too broad. You'll get a mix of discussion questions, reflections, and role-plays that may not fit your delivery format or learner context. Name the activity type explicitly — scenario-based reflection, case analysis, drag-and-drop, short answer — so the AI designs something you can actually build and deploy.
Forgetting to Define Assessment Parameters
Prompts that say 'include a quiz' produce wildly inconsistent results — sometimes 3 questions, sometimes 15, with no answer key. Specify question count, question types, and whether you need an answer key. A well-defined assessment prompt produces a quiz you can drop directly into your authoring tool with minimal editing.
Not Anchoring Learning Outcomes to Behavior
Generic prompts produce outcomes like 'learners will understand time management' — which is unmeasurable and unactionable. Ask explicitly for outcomes using action verbs (prioritize, apply, demonstrate, evaluate) tied to specific on-the-job tasks. This forces the AI to write outcomes you can actually assess.
Leaving Tone and Format Unspecified
Unformatted output from a vague prompt often reads like a textbook or a slide deck transcript — neither of which works well in a self-paced eLearning context. Define tone (practical, conversational, direct) and output format (headings + bullet points, numbered steps) so the content is ready to paste into your LMS or authoring tool without reformatting.
The transformation
Before:
Create an online course module about time management with some lessons and activities.
After:
You’re an **instructional designer**. Create a **single online module outline** on **time management for new managers**.
1. Define **3 measurable learning outcomes** using action verbs.
2. Provide a **45-minute** structure: **5-min intro**, **30-min content**, **10-min practice**.
3. Include **2 real workplace scenarios** and a guided reflection for each.
4. Add **1 short quiz** (6 questions: 4 multiple-choice, 2 short answer) with an answer key.
Tone: **practical, supportive, direct**. Format as **headings + bullet points**. Avoid theory-heavy language.
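If you generate outlines programmatically instead of pasting the prompt into a chat window, the same After Prompt can be sent through a chat-completions API. A minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; substitute whichever model and client you actually use.

```python
# A sketch of sending the After Prompt through a chat-style API.
# Assumes the OpenAI Python SDK; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The After Prompt, with the markdown bolding stripped for the API payload.
after_prompt = """You're an instructional designer. Create a single online module outline on time management for new managers.
1. Define 3 measurable learning outcomes using action verbs.
2. Provide a 45-minute structure: 5-min intro, 30-min content, 10-min practice.
3. Include 2 real workplace scenarios and a guided reflection for each.
4. Add 1 short quiz (6 questions: 4 multiple-choice, 2 short answer) with an answer key.
Tone: practical, supportive, direct. Format as headings + bullet points. Avoid theory-heavy language."""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you have access to
    messages=[{"role": "user", "content": after_prompt}],
)
print(response.choices[0].message.content)
```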
Why this works
Role Assignment Anchors Expertise
The After Prompt opens with 'You're an instructional designer.' This single line shifts the AI's framing. It stops generating generic content and applies instructional design conventions — measurable outcomes, timed structure, scenario-based practice, and formative assessment — as the default framework for everything that follows.
Numbered Steps Create a Deliverable Checklist
The After Prompt's four numbered steps function as a production checklist, not just a request. Each step defines one deliverable: outcomes, timing structure, scenarios, and quiz parameters. The AI treats these as discrete tasks, producing a module outline with clear sections rather than a continuous, unstructured narrative.
Scenario Specificity Drives Realistic Content
Requesting '2 real workplace scenarios with guided reflection' in the After Prompt signals that abstract examples are not acceptable. The AI generates situations grounded in the learner's actual work context — new manager challenges — rather than placeholder vignettes that could apply to any industry or role.
Precise Assessment Parameters Eliminate Guesswork
Specifying '6 questions: 4 multiple-choice, 2 short answer, with an answer key' in the After Prompt removes all ambiguity. The AI doesn't decide quiz length, question mix, or whether to include answers. Every parameter is pre-decided, so the quiz output is usable rather than a starting sketch.
Tone and Format Constraints Ensure Usability
The After Prompt closes with explicit tone ('practical, supportive, direct') and format ('headings + bullet points') directives. These aren't stylistic preferences — they're usability requirements that determine whether the output can go directly into an LMS without reformatting or rewriting.
The framework behind the prompt
The Instructional Design Theory Behind Effective Module Prompts
Effective online course modules don't emerge from topic dumps — they emerge from systematic design decisions about what learners need to be able to do, how they'll practice, and how you'll know they've succeeded. When you use AI to generate a module outline, you're essentially delegating those design decisions to the model. The quality of its decisions depends almost entirely on the quality of your brief.
Bloom's Taxonomy (revised by Anderson and Krathwohl in 2001) is the most widely used framework for writing learning outcomes. It classifies cognitive tasks across six levels — Remember, Understand, Apply, Analyze, Evaluate, Create — and maps each to measurable action verbs. Prompts that reference Bloom's by name, and specify which levels to target, produce outcomes that are both assessable and strategically appropriate for the learner's stage of development.
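For reference, here is one common rendering of the revised taxonomy as a verb map, sketched as a small Python dictionary. Verb lists vary across published versions of Bloom's, so treat these entries as representative examples rather than a canonical set.

```python
# Illustrative mapping of revised Bloom's levels to example action verbs.
# Published verb lists differ; these are representative, not canonical.
BLOOMS_VERBS = {
    "Remember":   ["list", "recall", "identify"],
    "Understand": ["explain", "summarize", "classify"],
    "Apply":      ["apply", "demonstrate", "implement"],
    "Analyze":    ["analyze", "differentiate", "organize"],
    "Evaluate":   ["evaluate", "critique", "justify"],
    "Create":     ["design", "construct", "develop"],
}

# Naming levels in a prompt might look like:
targeted = ["Apply", "Analyze", "Evaluate"]
verbs = [v for level in targeted for v in BLOOMS_VERBS[level]]
print(f"Write outcomes using verbs such as: {', '.join(verbs)}.")
```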
Merrill's First Principles of Instruction provide a research-backed structure for content sequencing. Merrill argues that effective instruction activates prior knowledge, demonstrates new skills with realistic examples, requires application in real-world contexts, and supports integration back into the learner's job. The After Prompt on this page maps directly to these principles: the scenarios activate the learner's real context, the timed content block demonstrates the skill, and the guided reflections require learners to apply it.
Cognitive Load Theory (Sweller, 1988) explains why time constraints in prompts matter. Working memory has limited capacity. A module that crams too much content into too little time, or stretches thin, unstructured content across too much time, creates extraneous cognitive load that blocks learning. By specifying a 45-minute cap with a defined structural split, you force the AI to make scope decisions that protect the learner's cognitive bandwidth.
Kirkpatrick's Four Levels of Evaluation remind us that learning outcomes only matter if they connect to behavior change (Level 3) and results (Level 4). Prompts that tie outcomes to on-the-job tasks — rather than end-of-module quiz scores — push the AI to design for transfer, not just completion.
Prompt variations
You're an instructional designer specializing in sales enablement.
Create a single online module outline on discovery call techniques for new B2B sales reps in their first 30 days.
- Write 3 measurable learning outcomes using action verbs (e.g., conduct, identify, apply).
- Design a 30-minute structure: 3-min intro, 18-min core content, 9-min practice.
- Include 2 recorded call scenarios — one effective, one ineffective — each with a short analysis checklist.
- Add 1 knowledge check (5 questions: 3 multiple-choice, 2 true/false) with an answer key.
Tone: direct, motivating, peer-like. Format as headings + bullet points. Avoid sales jargon and theory-heavy language.
You're an instructional designer with expertise in regulated industries.
Create a single online module outline on HIPAA privacy rules for front-desk healthcare staff with no prior compliance training.
- Define 3 measurable learning outcomes focused on correct behavior, not memorization.
- Structure the module for 20 minutes: 2-min intro, 12-min content, 6-min assessment.
- Include 2 realistic front-desk scenarios where a privacy decision must be made, with correct and incorrect response options explained.
- Add 1 compliance quiz (8 questions: 6 multiple-choice, 2 short scenario-response) with a passing score of 80% and a full answer key.
Tone: clear, non-alarmist, procedural. Format as numbered steps and bullet points. Flag any areas where state law may vary.
You're an instructional designer working with a software engineering team.
Create a single online module outline on code review best practices for mid-level engineers who write reviews but lack consistent feedback quality.
- Write 3 behavioral learning outcomes — what engineers will do differently after this module.
- Design a 40-minute structure: 5-min intro, 25-min content with examples, 10-min practice.
- Include 2 annotated pull request examples — one with strong review comments and one with weak comments — plus a guided gap-analysis exercise.
- Add 1 post-module task: reviewers submit one real PR review within 5 days and rate their own confidence using a 5-point scale.
Tone: collegial, specific, non-judgmental. Format as headings + bullet points. Avoid generic software advice — anchor examples to asynchronous, distributed teams.
You're an instructional designer with B2B SaaS customer success experience.
Create a single online module outline on renewal conversation skills for CSMs managing mid-market accounts.
- Define 3 outcome statements tied to specific renewal call behaviors (e.g., surface risk signals, align on value, handle pushback).
- Structure the module for 35 minutes: 4-min intro, 20-min content, 11-min scenario practice.
- Include 2 renewal call role-play scripts: one where the customer is at-risk and one where the customer is satisfied but passive. Add a decision-point annotation at each critical moment.
- Add 1 short reflection exercise: CSMs rate their last three renewal calls against a 5-behavior rubric.
Tone: consultative, candid, confidence-building. Format as headings + bullet points. Avoid platitudes — every piece of advice should be actionable in the next call.
When to use this prompt
Customer Success Enablement Leads
Build training modules that help CSMs handle renewals, escalations, and onboarding calls with consistent quality.
Product Managers Creating Internal Training
Turn new feature launches into short modules with outcomes, practice scenarios, and quick checks for understanding.
Sales Managers Running Coaching Programs
Create modules on discovery, objection handling, and follow-up that fit into a 45-minute weekly session.
Marketing Teams Training Brand Ambassadors
Develop modules that teach messaging, positioning, and do-and-don’t guidelines with quizzes for retention.
Engineering Leaders Standardizing Team Practices
Design modules on code reviews, incident response, or documentation habits with realistic scenarios.
Pro tips
1. Specify the learner’s current skill level so you get the right depth and examples.
2. Add a real constraint like time, tool access, or compliance rules to keep content usable.
3. Name 2 common mistakes your learners make so the module targets behavior change.
4. Define how you’ll measure success, like quiz score targets or an on-the-job task checklist.
Once you're comfortable with the base prompt structure, you can add three layers of constraint that significantly improve output quality for high-stakes or regulated training.
1. Define prior knowledge explicitly. Instead of naming a learner role, describe what they already know and what they don't. For example: 'Assume learners can set priorities in individual work but have never had to manage competing deadlines across a team.' This gives the AI a cognitive starting point and prevents it from re-teaching concepts your learners already know.
2. Specify misconceptions to address. Add a line like: 'Learners commonly believe that time management is about working faster. The module should directly challenge this belief in the first scenario.' Naming a misconception forces the AI to build content that causes cognitive dissonance — which is one of the most effective instructional techniques for behavior change.
3. Add a transfer task. End the prompt with: 'Include one on-the-job transfer task that learners complete within 5 business days of finishing the module.' Transfer tasks dramatically increase learning retention and are easy to define in a prompt. They also signal to stakeholders that the module is tied to real behavior, not just completion rates.
These three additions take your prompt from a solid outline request to a full instructional design brief that the AI can execute with minimal rework.
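If you use a templated prompt like the sketch earlier on this page, the three layers append cleanly as extra lines. The helper below is hypothetical; only the structure it produces matters.

```python
# A sketch of layering the three advanced constraints onto a base prompt.
# The function and its parameters are illustrative, not from any real tool.
def add_design_brief_layers(base_prompt: str,
                            prior_knowledge: str,
                            misconception: str,
                            transfer_task: str) -> str:
    """Append prior knowledge, a misconception to challenge, and a transfer task."""
    return (
        f"{base_prompt}\n"
        f"Prior knowledge: {prior_knowledge}\n"
        f"Misconception to challenge: {misconception}\n"
        f"Transfer task: {transfer_task}"
    )

full_brief = add_design_brief_layers(
    base_prompt=("You're an instructional designer. Create a single online "
                 "module outline on time management for new managers. [...]"),
    prior_knowledge=("Assume learners can set priorities in individual work but "
                     "have never had to manage competing deadlines across a team."),
    misconception=("Learners commonly believe that time management is about "
                   "working faster. Directly challenge this belief in the first "
                   "scenario."),
    transfer_task=("Include one on-the-job transfer task that learners complete "
                   "within 5 business days of finishing the module."),
)
print(full_brief)
```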
The core prompt structure — role, outcomes, timing, scenarios, assessment, tone — works across industries, but each context demands different defaults.
Corporate L&D (internal training): Emphasize behavioral outcomes tied to performance reviews or OKRs. Request scenarios drawn from internal processes (e.g., 'a direct report missing a sprint deadline') rather than hypothetical situations. Add: 'Format outputs so they can be imported into our LMS without reformatting.'
Healthcare and compliance training: Shift from behavioral outcomes to procedural accuracy. Request scenarios with explicit right/wrong decision points. Add a constraint: 'Flag any content where regulatory interpretation may vary by jurisdiction.' Set quiz pass thresholds explicitly (e.g., 'learners must score 85% to complete').
Customer education and product onboarding: Focus outcomes on product competency — what the user will be able to do in the tool, not what they'll understand about it. Request scenarios grounded in specific product workflows. Replace 'quiz' with 'hands-on checkpoint' and describe the task.
Academic and continuing education: Align outcomes to Bloom's Taxonomy levels 4–6 (Analyze, Evaluate, Create). Request discussion prompts alongside assessments. Specify whether the module is synchronous, asynchronous, or hybrid — the structure changes significantly depending on delivery mode.
Run through this checklist before sending your prompt to the AI. Each item catches a common gap that leads to a weak first draft.
Audience
- Named the learner role and experience level
- Described one specific knowledge gap or behavior to change
- Noted any constraints (technology access, reading level, language)
Structure
- Specified total module time
- Defined the time split across intro, content, and practice
- Set the number and types of scenarios
Assessment
- Stated exact question count
- Named question types (multiple-choice, short answer, scenario-response)
- Requested an answer key if needed
- Set a passing score if required
Tone and Format
- Listed 2–3 specific tone descriptors (not just 'professional')
- Specified output format (headings + bullets, numbered steps, etc.)
- Named delivery tool or platform if format matters
Outcomes
- Requested action verbs (Bloom's-aligned if required)
- Tied outcomes to observable on-the-job behavior
- Set the number of outcomes explicitly
If you can check every item, your prompt is ready. If you're missing three or more, the AI output will require significant rework.
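If your briefs live in a structured template rather than free text, the checklist can run as an automated pre-flight check. A minimal sketch, assuming the brief is stored as a Python dict; the field names are hypothetical and should match however you store your own briefs.

```python
# A minimal pre-flight check for a module prompt brief.
# Field names are hypothetical; adapt them to your own template.
REQUIRED_FIELDS = {
    "audience": "Named the learner role and experience level",
    "knowledge_gap": "Described one specific gap or behavior to change",
    "total_minutes": "Specified total module time",
    "time_split": "Defined the intro/content/practice split",
    "scenario_spec": "Set the number and types of scenarios",
    "quiz_spec": "Stated question count and types (plus answer key if needed)",
    "tone": "Listed 2-3 specific tone descriptors",
    "output_format": "Specified output format",
    "outcomes_spec": "Requested action-verb outcomes tied to on-the-job behavior",
}

def preflight(brief: dict) -> list[str]:
    """Return the checklist items the brief is still missing."""
    return [hint for field, hint in REQUIRED_FIELDS.items()
            if not brief.get(field)]

missing = preflight({"audience": "new managers", "total_minutes": 45})
if len(missing) >= 3:
    print("Expect significant rework. Missing:")
    for hint in missing:
        print(f"  - {hint}")
```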
When not to use this prompt
When This Prompt Pattern Is Not the Right Fit
This prompt works well for structured, single-topic modules with a defined audience and a clear behavioral outcome. It's not the right tool in every situation.
Don't use it for curriculum-level planning. If you're designing a 12-module learning path or a full onboarding program, this prompt will generate a single module outline — not a program architecture. Use a separate prompt to map the curriculum before diving into individual modules.
Don't use it for highly regulated content without expert review. For compliance training in healthcare, finance, or legal contexts, AI-generated content requires mandatory review by a subject-matter expert and compliance officer. The AI will produce plausible-sounding content that may be jurisdictionally incorrect or out of date. Treat the output as a draft scaffold, not a final deliverable.
Don't use it when learner research hasn't been done. If you don't yet know who your learners are, what they already know, or what behavior you want to change, the prompt will produce a module for an imagined audience. Conduct at least a brief needs analysis first — even three conversations with target learners — before writing the prompt.
Consider alternatives when:
- You need a full facilitator guide for instructor-led training (use a facilitation guide prompt instead)
- You're converting existing slide decks into eLearning (use a content conversion prompt)
- You need learner-facing narration scripts rather than an instructional outline
Troubleshooting
The AI generates a module outline that would take 90 minutes, not 45
The AI is treating your time limit as a label, not a constraint. Restate the limit as a hard rule — add: 'This module must be completable in exactly 45 minutes by an average adult reader. Every section must fit within the time allocation in Step 2. Cut any content that exceeds these limits.' You can also add word count ceilings per section (e.g., 'intro: max 150 words') for tighter control.
Learning outcomes use vague language like 'understand' or 'be aware of'
The prompt didn't specify action verb requirements clearly enough. Add an explicit instruction to Step 1: 'Each outcome must begin with a measurable action verb (apply, demonstrate, evaluate, distinguish, construct). Do not use understand, know, appreciate, or be aware of.' If you need Bloom's Taxonomy alignment, name the specific levels — 'use Bloom's levels 3 through 5 only.'
Scenarios feel generic and could apply to any industry or role
The audience description lacks specificity. Expand the learner context with one or two concrete details about the work environment — for example, 'new managers at a 200-person SaaS company who previously worked as individual contributors on a fully remote team.' Also add: 'Each scenario must involve a real situation this specific learner type would encounter in their first 90 days on the job.'
The quiz questions test recall, not application
The AI defaults to recall questions unless told otherwise. Add this instruction to the quiz step: 'Every question must test application or analysis, not recall. Learners should need to use the concept in a simulated decision, not just remember a definition.' You can also request scenario-based stems: 'Format each question as a workplace situation followed by a decision choice.'
Output is formatted as a wall of text instead of a structured outline
The format instruction either wasn't included or was too vague. Add a dedicated formatting line at the end of your prompt: 'Format the entire output as: H2 headings for each major section, H3 for subsections, bullet points for all list items. No paragraphs longer than 3 sentences. Bold every key term on first use.' This level of specificity removes all formatting ambiguity.
How to measure success
How to Evaluate the Quality of Your Module Outline Output
Don't accept the first output without running it through a quick quality check. Strong AI-generated module outlines share several characteristics.
Learning Outcomes
- Each outcome begins with a measurable action verb
- Outcomes describe observable on-the-job behavior, not internal mental states
- You could design a quiz question or a real task from each outcome
Structure and Timing
- The total time allocation matches your specified limit
- The time split across intro, content, and practice is realistic and balanced
- No single section is disproportionately long or short
Scenarios
- Each scenario is grounded in the specific learner role and work context you named
- Scenarios require a decision or judgment, not just reading comprehension
- The guided reflection or debrief connects back to a learning outcome
Assessment
- Question count and types match your specification exactly
- Questions test application or analysis, not simple recall
- An answer key is included with brief rationale for each correct answer
Tone and Format
- The output uses the exact tone descriptors you specified
- Headings and bullet points are consistently applied
- No section reads like a textbook definition or a slide title
Now try it on something of your own
Reading about the framework is one thing. Watching it sharpen your own prompt is another.
Turn your module topic and learner context into a ready-to-build instructional outline — with outcomes, scenarios, and a quiz included.
Frequently asked questions
How specific should the audience description be?
The more specific, the better. Name the role, experience level, and one key gap — for example, 'mid-level engineers who write code reviews but give vague feedback.' This prevents the AI from defaulting to a generic adult learner profile. You don't need a full learner persona — two or three precise details are enough to shift the output from generic to role-specific.
Can I adapt this prompt for microlearning?
Yes. Adjust the timing constraint in Step 2 to reflect your microlearning window (e.g., 8 minutes: 1-min intro, 5-min content, 2-min check). Also reduce the scope — one learning outcome instead of three, one scenario instead of two, and a 3-question knowledge check. The structure scales down cleanly as long as you update every time reference in the prompt.
How do I make the output specific to my industry?
Add two details to the audience description: the industry and the specific work context. For example, replace 'new managers' with 'newly promoted shift supervisors in a hospital setting.' Then update the scenario instruction to request industry-specific situations. You can also add a constraint like 'all examples must be drawn from inpatient care settings' to prevent the AI from using generic corporate examples.
What if the generated module runs much longer than the time limit I set?
This usually means the AI is ignoring your time constraint as a scope limit. Restate it more forcefully — add a line like 'This module must be completable by an average reader in exactly 45 minutes. Trim any section that would push past this limit.' You can also add word count ceilings to each section to give the AI explicit scope boundaries it cannot exceed.
Does this prompt produce a full script or just an outline?
This prompt generates a structured outline — outcomes, timing, scenario descriptions, and quiz structure. It's intentionally not a full script, because outlines are faster to review and easier to revise before committing to narration. Once you approve the outline, run a second prompt asking the AI to expand each section into a full learner-facing script with narrator notes.
How do I get outcomes aligned to Bloom's Taxonomy?
Add a specific instruction to Step 1: 'Write 3 measurable learning outcomes aligned to Bloom's Taxonomy levels 3–5 (Apply, Analyze, Evaluate).' Name the specific levels you need. Most AI tools are trained on Bloom's and will produce correctly classified outcomes when you reference the framework by name and tier.
How do I keep the output from sounding like a textbook?
The tone instruction in your prompt is the primary lever. Replace vague tone words like 'professional' with behaviorally specific ones — 'write the way a respected colleague explains something, not the way a textbook defines it.' You can also add: 'Avoid passive voice, avoid definitions, avoid bullet points that start with nouns — every point should start with a verb.'
Should I tell the AI which authoring tool or LMS I use?
Yes, if format matters. Name the authoring tool or LMS and add a constraint like 'format all activities as they would appear in Articulate Rise — section blocks, not slide decks.' This prevents the AI from generating formats (like multi-column layouts or video scripts) that don't translate to your actual build environment.