Why this is hard to get right
The Listening Tour That Fell Apart
Maria is a VP of Engineering at a 400-person fintech company. Three months after a major reorg, her CEO asked her to run a listening tour. The goal was simple: hear from ICs and managers, surface blockers, and report back with actionable themes.
Maria started with good intentions. She sent calendar invites to 20 people, opened a blank Google Doc, and wrote down a few questions the night before her first session. By week two, she had a problem. Every conversation went differently. Some sessions ran 90 minutes. Others felt like performance reviews. She was asking different questions each time, and her notes were a mix of direct quotes, half-formed ideas, and vague impressions like "seems frustrated."
When she tried to synthesize, she had nothing coherent. She spent an entire weekend trying to group feedback into themes, but the data didn't connect. She wrote a 12-page document nobody read. Her CEO asked for a follow-up in three weeks, and Maria had no clear next steps to share.
The problem wasn't effort — it was structure. A listening tour looks like a series of conversations, but it's actually a qualitative research project. It needs a consistent instrument, a coding scheme, and a synthesis method that scales across 10 to 20 sessions without burning the facilitator out.
When Maria tried using AI to build her plan, her first attempt produced exactly what you'd expect from a vague request: a generic list of open-ended questions and a basic agenda with no context. The AI didn't know she was running sessions with both ICs and skip-level managers. It didn't know she needed to avoid the topic of last quarter's layoffs during early sessions. It didn't know she had to deliver a one-pager to the CEO, not a sprawling report.
Her second attempt was different. She specified the role, the company stage, the number and type of sessions, the sensitive constraints, and the exact output formats she needed. The AI produced a facilitator script, a grouped question bank, a note-taking template with a coding scheme, and a synthesis format she could drop directly into a board slide.
She ran the tour in six weeks. Her synthesis doc was two pages. Her CEO shared it with the full leadership team. Three of her themes turned into Q3 priorities.
The difference between a wasted listening tour and a strategic one is rarely effort. It's whether you defined the structure before you started the first conversation.
Common mistakes to avoid
Not Naming the Triggering Event
A listening tour after a reorg needs different framing than one during normal operations. Without the triggering event, the AI produces generic questions that miss the emotional context — and sessions feel tone-deaf. Always name what happened: a reorg, a leadership change, a missed quarter, rising churn. That event shapes every question and every facilitation note.
Skipping the Audience Breakdown
Asking the AI to plan 'sessions with employees' produces one-size-fits-all questions. ICs, managers, and customers each need different question sets and different facilitation approaches. Specify the audience mix, the number of sessions per group, and any hierarchy sensitivities. The AI can then differentiate the instrument for each audience type.
Omitting Sensitive Topics and Constraints
Every listening tour has landmines — topics that will derail trust if handled wrong. Without naming these explicitly, the AI may generate questions that provoke defensiveness or create legal exposure. Include the topics you must handle carefully and the ones you must avoid entirely. A well-constrained prompt produces a facilitator script that actually protects you.
Requesting Questions Without a Synthesis Format
Most users ask for question lists but forget to request the synthesis layer. Good questions without a coding scheme produce unstructured notes you can't analyze. Ask for a note-taking template and a synthesis format in the same prompt. This forces the AI to design the whole system, not just the intake.
Leaving Tone Unspecified During Sensitive Periods
Post-reorg listening tours carry anxiety. If you don't specify tone, the AI defaults to a neutral, clinical register that can feel cold. Specify that the tone should build trust, avoid promises, and acknowledge uncertainty. That instruction changes the language of every script, intro, and closing — and it affects whether people speak honestly.
Asking for a Plan Without Naming the Output Format
Executives need a synthesis they can share, not a 15-page transcript dump. Without specifying the final output format — a one-pager, a slide, a memo — the AI defaults to whatever feels complete. Name the format, the audience for the synthesis, and the word count. That shapes the entire structure of the plan, from question design to note-taking to reporting.
The transformation
Create a listening tour plan for our leadership team and give me some questions to ask people.
You’re an **executive chief of staff**. Build a 6-week **CEO listening tour plan** for a 250-person SaaS company after a reorg.
Include:
1. **Goals (3)** and what success looks like.
2. A weekly schedule: **12 sessions** (mix of ICs, managers, customers).
3. A 60-minute agenda template and facilitator script.
4. **15 open-ended questions**, grouped by theme.
5. A note-taking template and coding scheme.
6. A 1-page synthesis format with **top 5 themes**, risks, and next actions.
Tone: calm, direct, trust-building. Avoid promises. Keep outputs copy-paste ready.
Why this works
Role Assignment Focuses the Model
The prompt opens with 'You're an executive chief of staff.' This isn't decoration — it shifts the model's defaults toward strategic, executive-level thinking. A chief of staff understands political sensitivity, synthesis for leaders, and the difference between gathering feedback and building trust. That role context shapes every output the model produces.
Specificity Eliminates Generic Output
The prompt names a 250-person SaaS company after a reorg as the exact context. This prevents the model from producing advice suited to a 5,000-person enterprise or a pre-product startup. Specific company size and triggering event anchor every recommendation — session count, question depth, and synthesis format all scale to the stated reality.
Structured Deliverable List Prevents Gaps
The numbered list of six deliverables — goals, schedule, agenda, questions, note template, and synthesis format — forces the model to treat this as a complete system, not a loose collection of suggestions. Each deliverable connects to the next. The questions feed the note template, which feeds the synthesis. The prompt wires that dependency chain explicitly.
Tone Constraint Protects Trust
The instruction 'calm, direct, trust-building. Avoid promises.' directly addresses the political risk of a post-reorg listening tour. Without this constraint, the model might generate enthusiastic, forward-looking language that overpromises and triggers cynicism. This single line changes the register of every script, intro, and closing the model writes.
Copy-Paste Constraint Drives Usability
The instruction 'Keep outputs copy-paste ready' signals that the AI should produce finished artifacts, not drafts requiring heavy editing. This constraint changes the model's formatting behavior — it produces complete scripts, full question banks, and ready-to-use templates rather than outlines with bullet points you'd still need to expand yourself.
The framework behind the prompt
The Theory Behind Executive Listening Tours
Listening tours are qualitative research in leadership clothing. The best practitioners borrow methods from ethnographic interviewing, appreciative inquiry, and grounded theory — even when they've never heard those terms.
Ethnographic interviewing teaches us that the goal isn't to ask questions and collect answers. It's to understand the mental models, constraints, and experiences of the people you're talking to. This is why behavioral questions ('Tell me about the last time this happened') consistently outperform opinion questions ('What do you think about X?'). Opinion questions get you what people think you want to hear. Behavioral questions get you what actually happened.
Appreciative inquiry, developed by David Cooperrider at Case Western Reserve University, offers a complementary framework. Rather than diagnosing problems, appreciative inquiry asks people to describe what's working at its best, what conditions enabled that, and what a future with more of that would look like. For post-reorg or trust-repair listening tours, this approach reduces defensiveness and surfaces constructive energy rather than grievance.
Grounded theory, originally developed by Glaser and Strauss, provides the synthesis method most listening tours lack. The idea: you don't impose a framework on your data. You code your notes, look for patterns, and let themes emerge from the ground up. A practical version of this is open coding (label everything) followed by axial coding (group labels into themes) followed by selective coding (identify the 3-5 themes that explain the most).
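If it helps to see those three passes as a pipeline, here is a minimal sketch in Python. Everything in it is hypothetical: the notes, the open-coding labels, and the label-to-theme mapping all stand in for what you would build by hand from your own sessions.

```python
from collections import Counter

# Open coding: label everything. Each note gets one or more
# low-level labels. (Hypothetical notes and labels; yours come
# from your own session transcripts.)
open_coded = [
    {"note": "Standups doubled after the reorg", "labels": ["meeting_load"]},
    {"note": "Nobody knows who owns billing now", "labels": ["ownership_gaps"]},
    {"note": "I reconcile two roadmaps every Friday",
     "labels": ["duplicate_planning", "meeting_load"]},
    {"note": "My new manager hasn't scheduled a 1:1", "labels": ["manager_access"]},
]

# Axial coding: group labels into themes. (A hypothetical mapping,
# built by hand after reviewing the open-coding labels.)
label_to_theme = {
    "meeting_load": "process_overhead",
    "duplicate_planning": "process_overhead",
    "ownership_gaps": "unclear_ownership",
    "manager_access": "manager_relationships",
}

# Selective coding: keep the 3-5 themes that explain the most notes.
theme_counts = Counter(
    label_to_theme[label]
    for entry in open_coded
    for label in entry["labels"]
)
for theme, count in theme_counts.most_common(5):
    print(f"{theme}: explains {count} coded labels")
```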
The challenge for practitioners is that most leaders aren't trained researchers. They collect rich qualitative data and then synthesize it using gut feel — which introduces bias and produces themes that reflect what the leader already believed.
A well-structured prompt bridges this gap. It doesn't require you to learn qualitative research methods. It encodes those methods — consistent instrumentation, thematic coding, structured synthesis — into the plan the AI produces. The result is a field-ready system that gives you research-grade insights without a research team.
Prompt variations
You're a senior customer success strategist. Build a 4-week listening tour plan for a CS leader at a 150-person B2B SaaS company following a 15% churn spike in Q2.
Include:
- 3 clear goals tied to churn reduction and product feedback.
- A session schedule: 10 customer interviews segmented by churned, at-risk, and healthy accounts.
- A 45-minute interview agenda and facilitator script with a trust-building opener.
- 12 open-ended questions grouped by: onboarding experience, product gaps, and relationship quality.
- A note-taking template with a signal-strength rating (1-3) for each theme.
- A synthesis format: top 3 themes, top 3 product requests, and a one-paragraph executive summary.
Tone: empathetic, curious, non-defensive. Do not position the company. Keep all outputs copy-paste ready.
You're a principal product researcher. Build a stakeholder listening tour for a product manager at a 300-person enterprise software company preparing a major roadmap reset.
Include:
- Goals (3) focused on surfacing trade-offs, not validating existing plans.
- A 3-week schedule: 8 internal stakeholder sessions (sales, support, finance, engineering leads) and 4 customer sessions.
- A 50-minute session guide with an explicit neutrality statement at the start.
- 10 questions per audience type (internal vs. customer), focused on pain, priority, and constraint.
- A conflict-mapping template to capture where stakeholder views diverge.
- A synthesis format: top 5 trade-offs, 3 non-negotiable constraints, and recommended next steps.
Tone: neutral, analytical, direct. The PM must not lead witnesses. Outputs must be shareable with an exec team in under 2 pages.
You're an experienced engineering manager turned consultant. Build a 3-week listening tour plan for an engineering director at a 200-person SaaS company who needs honest feedback from individual contributors after a failed sprint planning process.
Include:
- 2 goals: identify systemic blockers and rebuild IC trust in the planning process.
- A session schedule: 8 one-on-one sessions with ICs across 3 teams, plus 2 anonymous async input options.
- A 30-minute agenda with a no-blame framing statement at the start.
- 10 questions focused on process friction, tooling gaps, and cross-team dependencies — not individual performance.
- A note-taking template with a 'blockers vs. preferences' coding column.
- A synthesis format: top 3 systemic issues, quick wins list, and a 5-sentence update the director can share back with ICs.
Tone: direct, blame-free, action-oriented. Avoid language that sounds like a performance review. All outputs must be copy-paste ready.
You're an organizational development consultant with experience in mission-driven organizations. Build a 5-week listening tour plan for a nonprofit executive director taking over a 60-person organization after the previous ED resigned under conflict.
Include:
- 3 goals: assess staff morale, understand mission alignment gaps, and identify quick trust-building actions.
- A session schedule: 14 sessions covering staff (by department), board members, and major funders.
- A 45-minute facilitation guide with a specific opening statement acknowledging the transition.
- 12 questions grouped by: organizational health, mission clarity, and leadership expectations.
- A note-taking template that flags sentiment, not just content, on a 3-point scale.
- A synthesis format: top 3 themes, 3 immediate actions, and a 1-page summary for the board.
Tone: humble, transparent, trust-first. Avoid any language that implies judgment of the previous leadership. All outputs should be ready to use without editing.
When to use this prompt
Founders post-reorg
You need a listening tour that reduces rumors and surfaces issues fast. You also need a consistent script across sessions.
Customer success leaders
You want a repeatable set of customer interviews after churn rises. You need themes and actions you can share with product.
Product managers planning discovery
You need stakeholder interviews that produce clear trade-offs. You want a synthesis format your exec team will read.
Engineering leaders improving execution
You want to hear blockers from ICs without blame. You need coded notes and a tight action list for the next sprint cycle.
Pro tips
1. Define the decision you’ll make from the tour so questions stay focused.
2. Name the sensitive topics you must handle carefully so the script lowers risk.
3. Specify how you’ll share outcomes so people trust the process.
4. Set a time budget for synthesis so the plan fits your calendar, not an ideal week.
A Synthesis System That Scales
The synthesis phase kills most listening tours. You run 12 sessions, collect 40 pages of notes, and spend a weekend trying to find patterns. Here's a system that works at scale.
Use a 3-column note format during sessions:
- Column 1: Direct quote or paraphrase
- Column 2: Theme tag (assign 1-3 tags from a pre-built list)
- Column 3: Signal strength (1 = mentioned once, 2 = mentioned with emphasis, 3 = mentioned with emotion or specificity)
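To make the template concrete, here is what one coded entry might look like as a simple data structure (the field names and the example entry are hypothetical, and a spreadsheet row works just as well):

```python
from dataclasses import dataclass

@dataclass
class NoteEntry:
    quote: str             # Column 1: direct quote or close paraphrase
    theme_tags: list[str]  # Column 2: 1-3 tags from the pre-built theme list
    signal: int            # Column 3: 1 = mentioned once, 2 = emphasized,
                           #           3 = emotional or highly specific

# A hypothetical entry from a post-reorg session:
entry = NoteEntry(
    quote="We've had three different roadmaps since March.",
    theme_tags=["planning_churn"],
    signal=3,
)
assert 1 <= entry.signal <= 3 and 1 <= len(entry.theme_tags) <= 3
```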
Build your theme list before session one, not after. Ask the AI to generate a set of 8-10 candidate themes based on your goals. You'll refine them after the first two sessions, but starting with a list prevents you from free-coding under pressure.
After each session, spend 10 minutes tagging — not writing a summary. The synthesis writes itself when the tags are consistent.
For the final synthesis, ask the AI to take your coded note entries and produce a ranked theme list, a tension map (where views conflict), and a top-5 actions list. Feed it the structured notes, not the raw transcripts. You'll cut synthesis time from a weekend to two hours.
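If you want to sanity-check the AI's ranking, the coded entries make that a two-minute exercise. A minimal sketch, assuming the three-column format above and ranking by total signal rather than raw mention count (all data hypothetical):

```python
from collections import defaultdict

# Coded entries: (theme_tag, signal_strength). Hypothetical data.
coded_entries = [
    ("planning_churn", 3), ("planning_churn", 2), ("tooling_gaps", 1),
    ("planning_churn", 1), ("manager_access", 2), ("tooling_gaps", 2),
]

# Rank themes by total signal, not raw mention count, so one
# emotional, specific mention outweighs two passing ones.
totals = defaultdict(int)
for theme, signal in coded_entries:
    totals[theme] += signal

ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for theme, score in ranked:
    print(f"{theme}: total signal {score}")
```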
Specify this system in your initial prompt and ask the AI to design the note template around your pre-defined theme list. That alignment between your question bank and your coding scheme is what makes synthesis fast.
Designing for Remote and Async Formats
Not every listening tour happens in person or synchronously. Remote and async formats require different designs — and most AI-generated plans default to in-person assumptions.
For remote synchronous sessions, specify the platform and its constraints in your prompt. For example: 'Sessions will run via Zoom. Include a note on managing silence, since remote participants are slower to fill it. Adjust the agenda to account for a 5-minute technical buffer at the start.'
For async input channels — surveys, Loom prompts, Slack threads — ask the AI to generate a parallel async instrument alongside the live session guide. The questions need to be self-explanatory without a facilitator present. Add: 'Include a written version of each question that works without context from a live conversation.'
For hybrid tours (some sessions live, some async), ask for a synthesis method that weights both equally. Otherwise, the louder synchronous voices dominate your themes.
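One way to make "weights both equally" concrete is to compare mention rates per channel instead of raw counts, so 30 async responses cannot drown out 8 live sessions (or vice versa). A sketch with hypothetical numbers:

```python
# Raw theme mentions per channel (hypothetical numbers).
sync_mentions = {"planning_churn": 6, "tooling_gaps": 2}
sync_sessions = 8
async_mentions = {"planning_churn": 4, "tooling_gaps": 9}
async_responses = 30

# Convert counts to rates so each channel contributes on the
# same scale, then average the two rates per theme.
themes = set(sync_mentions) | set(async_mentions)
for theme in sorted(themes):
    sync_rate = sync_mentions.get(theme, 0) / sync_sessions
    async_rate = async_mentions.get(theme, 0) / async_responses
    blended = (sync_rate + async_rate) / 2
    print(f"{theme}: sync {sync_rate:.0%}, async {async_rate:.0%}, "
          f"blended {blended:.0%}")
```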
The biggest risk in remote tours is low response depth. Specify in your prompt: 'Include a follow-up question for each main question that can be sent as a one-line async prompt if the live answer was surface-level.' That gives you a fallback without scheduling another call.
Remote tours also require stronger confidentiality framing. Ask the AI to include a written confidentiality statement participants can read before responding.
Building Branching Logic Into Your Question Bank
A flat list of 15 questions is a starting point. A question bank with branching logic is a professional instrument.
Branching logic means the facilitator has conditional follow-ups ready based on what the participant says. For example:
- If the participant describes a process problem: 'How long has this been the case? When did it work better?'
- If the participant names a person: 'Can you tell me more about the dynamic? I want to understand the system, not the individual.'
- If the participant says everything is fine: 'What's the one thing that, if it changed, would make your work noticeably easier?'
To build this into your prompt, add: 'For each of the 15 questions, include 2 conditional follow-ups: one for when the answer is specific and one for when the answer is vague or positive. Label them [If specific] and [If vague].'
This turns your question list into a decision tree the facilitator can navigate in real time. It's especially valuable when different audience segments (ICs vs. managers) respond differently to the same opening question.
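The branching structure is also easy to represent as data, which is useful if you want the AI to return the question bank in a machine-readable format for a notes app or internal tool. The field names below are hypothetical:

```python
# One entry in a branching question bank. The structure mirrors the
# [If specific] / [If vague] labels requested in the prompt above.
question_bank = [
    {
        "question": "Tell me about the last time a cross-team dependency "
                    "slowed you down.",
        "if_specific": "How long has this been the case? When did it work better?",
        "if_vague": "What's the one thing that, if it changed, would make "
                    "your work noticeably easier?",
        "risk_flag": False,  # True for questions likely to surface conflict
    },
]

def next_prompt(entry: dict, answer_was_specific: bool) -> str:
    """Pick the conditional follow-up the facilitator should use."""
    return entry["if_specific"] if answer_was_specific else entry["if_vague"]

print(next_prompt(question_bank[0], answer_was_specific=False))
```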
You can also ask the AI to flag which questions carry the most risk — the ones most likely to surface defensiveness or conflict — and include a facilitator note on how to redirect without shutting down the participant. That level of preparation separates a trust-building tour from one that accidentally creates new problems.
When not to use this prompt
When This Prompt Pattern Is Not the Right Tool
A listening tour plan is not appropriate in every situation. Using it in the wrong context wastes time and can damage trust.
Don't run a listening tour when you've already made the decision. If the reorg is done, the product direction is set, or the policy is final, a listening tour creates an expectation of influence that you can't honor. That's worse than not asking at all. If you need to communicate a decision rather than gather input, use a change communication plan instead.
Don't use this format when you need statistically representative data. Listening tours are qualitative. They surface themes, not percentages. If your stakeholders will push back on "what we heard" without sample sizes or confidence intervals, consider a structured survey alongside or instead of the tour.
Don't attempt a listening tour during an active crisis. When people are anxious about their jobs, a listening session can feel like a performance review or an intelligence-gathering exercise. In active uncertainty, a brief, transparent communication plan often builds more trust than open-ended questions.
Don't use this prompt if you don't have authority to act on what you hear. A listening tour that surfaces real issues and produces no visible action is more damaging than not asking. Before you build the plan, confirm: do you have the authority and resources to respond to at least some of what you'll learn?
Troubleshooting
The AI produces questions that feel like a survey, not a conversation
Add the instruction: 'All questions must be open-ended and suitable for a facilitated 60-minute conversation, not a written survey. Avoid binary questions, rating scales, or questions that can be answered in one sentence.' Also ask for follow-up probes for each question so the facilitator can go deeper — that signals the conversational format you need.
The synthesis format is too long for executive audiences
Specify the exact format and word count in your prompt: 'The synthesis output must fit on one page. Use a 3-column table: theme, supporting evidence (2 quotes max), and recommended action. Total word count under 400.' Giving the AI a structural template, not just a length limit, produces a usable one-pager instead of a condensed essay.
The facilitator script sounds robotic or scripted rather than natural
Add a voice instruction: 'Write the facilitator script in a spoken register — the way a thoughtful, calm executive actually talks, not the way a corporate memo reads.' You can also provide a one-sentence example of the tone you want: 'For example, open with: I appreciate you making time. I don't have an agenda — I'm here to listen.' That example anchors the register.
The session count and schedule don't reflect real calendar constraints
Include your actual time budget in the prompt: 'I have 4 hours per week available for sessions and synthesis combined. Design a schedule that fits within that constraint over 6 weeks.' Without a time budget, the AI optimizes for coverage, not feasibility. A plan you can't actually run is worthless no matter how thorough it looks.
The AI includes questions that overlap significantly with each other
Ask the AI to audit its own output before finalizing: 'After generating the question bank, review for overlap. If two questions would likely produce the same answer, remove one. Label each question with its theme tag so I can verify coverage across all themes.' This self-review instruction catches redundancy that you'd otherwise have to edit manually.
How to measure success
How to Evaluate the Quality of Your AI-Generated Plan
A strong AI output for this prompt type should pass these checks before you use it in the field.
Completeness check:
- Does the plan include a session schedule with specific audience segments named?
- Are the questions grouped by theme with at least 3 distinct themes?
- Is there a note-taking template with a coding column, not just free-text space?
- Is the synthesis format specific — a named structure, not 'summarize key themes'?
Quality signals to look for:
- Questions are behavioral, not opinion-based — they ask what happened, not what people think
- The facilitator script includes a psychological safety opener that names confidentiality and purpose
- The synthesis format fits the stated audience — a one-pager for an exec, not a research report
- Tone language is consistent throughout the script, agenda, and questions
Red flags that signal a weak output:
- Questions that could apply to any organization in any industry
- A synthesis format that requires manual analysis rather than structured coding
- An agenda that runs longer than your stated time budget
- No acknowledgment of sensitive topics or political context you named in the prompt
Frequently asked questions
How many sessions should I ask the AI to plan for?
Specify the exact number — don't leave it open. For a 6-week tour at a 200-300 person company, 10-15 sessions is realistic. The session count affects question design, note-taking load, and synthesis complexity. If you're unsure, include your time budget (hours per week) and let the AI recommend a session count that fits your calendar.
Can I include external stakeholders like customers or funders?
Yes — but specify the relationship type clearly. External stakeholders (customers, partners, funders) require different facilitation framing, different legal constraints, and different synthesis goals than internal sessions. Name the stakeholder type, your relationship to them, and what you're allowed to share back. That changes the script, the questions, and the synthesis format the AI produces.
What if the generated questions feel too generic or abstract?
This usually means you didn't specify the thematic groupings you need. Add a line like: 'Group questions into these 3 themes: [theme A], [theme B], [theme C].' Also specify the level of specificity — for example, 'questions should surface concrete examples, not opinions.' That instruction pushes the AI away from abstract questions toward behavioral and situational ones.
How do I adapt this prompt for a small startup or solo facilitator?
Swap the role assignment to match your reality — for example, 'You're an experienced founder who has run company-wide listening tours at early-stage startups.' The key is naming a role that carries the right defaults for your context. Also reduce session counts, simplify the synthesis format, and specify that outputs should be manageable for a solo facilitator without a support team.
How should I handle topics I can't discuss directly, like layoffs?
Name them explicitly as constraints, not topics to explore. For example: 'Do not include questions about the Q3 layoffs directly. Include a facilitator note on how to respond if this topic comes up unprompted.' This forces the AI to design a script that acknowledges the sensitivity without opening a wound — and gives you a prepared response if participants raise it.
What if synthesis is taking too much of my time?
Add a time constraint to the synthesis section of your prompt: 'The synthesis format should be completable in 30 minutes per session and 2 hours total across all sessions.' You can also specify the exact output: 'Produce a 1-page synthesis template, not a multi-section report.' Constraints on time and length change what the AI considers an appropriate output.
Can the same prompt generate participant-facing communications?
You can, but it's cleaner to split them. Use the main prompt to generate the plan, agenda, and facilitator guide. Then use a second prompt — referencing the output of the first — to generate participant-facing communications. Ask for a 3-sentence session description, a calendar invite note, and a FAQ for participants. Splitting keeps each output focused and avoids tone conflicts between internal and external language.
How do I make sure the questions invite candid answers?
Include a tone instruction that addresses psychological safety directly. Add a line like: 'Questions should invite candor, not judgment. Avoid 'why' questions that can feel accusatory. Prefer 'what' and 'how' questions that invite description.' Also ask the AI to include a facilitator opening that names confidentiality, the purpose of the tour, and what will happen with the feedback.