Why this is hard to get right
Picture this: It's mid-October, and your CHRO just got off a call with the CFO. Three high-performing engineers left last quarter, two more are rumored to be interviewing, and the executive team wants answers by the end of the month.
You're the HR business partner assigned to produce a retention risk analysis. You have access to Glint engagement survey data, six months of voluntary turnover records, a folder of exit interview notes, and a spreadsheet of promotion decisions by department. The data exists. The story is somewhere inside it.
You open ChatGPT and type: "Analyze our retention data and tell me who's at risk of leaving."
What comes back is a wall of generic advice — "improve manager communication," "offer career development paths," "review compensation benchmarks." It reads like a blog post from 2019. None of it reflects your actual data. None of it names a department. None of it is something you can bring into a meeting with the CFO without being laughed out of the room.
So you try again. You paste in a summary of your survey scores and ask for insights. This time the AI gives you a slightly longer wall of generic advice.
The problem isn't the data. It's the prompt.
Without telling the AI what role to play, what output format you need, which audience will read it, and what decision it needs to support, you're asking it to produce something useful from nothing. It defaults to the average of everything it has ever read about HR retention — which is exactly as unhelpful as it sounds.
This is the frustration that people analytics professionals face constantly. They have real data, real urgency, and real stakeholders. What they're missing is a structured way to translate that context into a prompt the AI can actually act on. A strong retention risk analysis prompt doesn't just describe the task — it defines the inputs, the output structure, the risk framework, the audience, and the constraints all at once. That's what gets you from a blog post to a boardroom brief in a single generation.
Common mistakes to avoid
Omitting the Data Sources Entirely
Asking for a 'retention analysis' without specifying what data is available forces the AI to invent or assume inputs. The output becomes a generic framework rather than an analysis of your situation. Always list your actual data sources, even if they're imperfect.
Skipping the Output Format
Without a defined structure — risk tiers, root causes, action plan — the AI produces a narrative essay when you needed a report with sections. Specify the exact deliverables and their order to get output you can use without reformatting.
Leaving the Audience Undefined
A retention analysis for an HRBP reads very differently from one for a CFO. If you don't specify who will read the output, the AI defaults to a vague middle ground that satisfies neither. Name your audience and their priorities explicitly.
Forgetting to Define 'Risk'
Terms like 'flight risk' mean different things to different people. Without defining your risk tiers — High / Medium / Low, or specific thresholds like engagement score below 55 — the AI applies its own arbitrary criteria, making the prioritization impossible to defend.
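One way to make such criteria unambiguous before you write the prompt is to state them as an explicit rule. This is a minimal sketch using the example thresholds above; the cutoffs (engagement below 55, tenure under 2 years) are illustrative assumptions, not established benchmarks:

```python
# Illustrative risk-tier rule. The thresholds are assumptions to
# adapt to your own data, not HR-industry standards.

def risk_tier(engagement_score: float, tenure_years: float) -> str:
    """Assign a flight-risk tier from two example signals."""
    if engagement_score < 55 and tenure_years < 2:
        return "High"    # both warning signals fire
    if engagement_score < 55 or tenure_years < 2:
        return "Medium"  # exactly one signal fires
    return "Low"

print(risk_tier(48, 1.5))  # High
print(risk_tier(62, 0.8))  # Medium
print(risk_tier(71, 4.0))  # Low
```

Pasting a rule like this into your prompt (in plain English or as pseudocode) means the AI applies your tiering, not its own.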
Asking for Solutions Before Diagnosis
Jumping straight to 'what should we do' without first requesting root cause identification produces action plans built on assumptions. Structure your prompt so diagnosis comes before recommendation — the AI will produce sharper, evidence-backed interventions as a result.
The transformation
Analyze our employee retention data and tell me who might leave and what we should do about it.
**You are a senior HR analytics consultant specializing in workforce retention strategy.** Analyze the following employee data to produce a structured **retention risk analysis report** for our HR leadership team and CFO:

**Data provided:**
- Engagement survey scores (last 2 cycles) by department
- Voluntary turnover rate by tenure band (0–1 yr, 1–3 yr, 3–5 yr, 5+ yr)
- Exit interview themes from the past 12 months
- Internal mobility and promotion rates by department

**Your analysis must include:**
1. A risk-tiered employee segment breakdown (High / Medium / Low flight risk)
2. The top 3 root causes of attrition per segment, supported by the data
3. Department-level hotspots ranked by urgency
4. 5 targeted retention interventions with estimated cost-impact ratios
5. A 90-day action plan with owners and milestones

**Constraints:** Use plain language suitable for both HR professionals and finance executives. Flag any data gaps that weaken the analysis. Output length: 600–900 words.
Why this works
Role Priming
Assigning the AI the persona of a 'senior HR analytics consultant' activates domain-specific reasoning patterns, including familiarity with engagement frameworks, attrition drivers, and executive communication standards. This single line raises the quality floor of every subsequent output.
Grounded Inputs
Listing four specific data sources prevents the AI from generating analysis from thin air. When the AI knows it's working from engagement scores, turnover rates, exit themes, and promotion data, it can structure reasoning around evidence — not boilerplate.
Tiered Structure
Defining a High / Medium / Low risk framework forces the AI to make prioritized judgments. Without a taxonomy, the AI produces flat lists. With one, it creates a ranked output your team can triage and act on immediately.
Dual Audience Anchoring
Specifying both HR professionals and finance executives as readers shapes every word choice, framing, and level of technical detail. The AI calibrates language to be simultaneously rigorous and accessible — matching the real political environment of most organizations.
Constraint Setting
Adding a word count range and a directive to flag data gaps sets a quality standard and an honesty threshold. The AI produces a tighter, more credible analysis when it knows both how long to be and that intellectual humility is explicitly valued.
The framework behind the prompt
Retention risk analysis draws from several well-established frameworks in organizational behavior and HR analytics.
The most foundational is the Push-Pull-Stay model, which categorizes attrition drivers into forces pushing employees away (poor management, limited growth), forces pulling them toward alternatives (competitor offers, lifestyle changes), and anchors keeping them in place (relationships, mission, compensation). A strong retention analysis prompt maps data to each category rather than treating all turnover as one undifferentiated problem.
A second relevant framework is workforce segmentation by flight risk, adapted from customer churn models in marketing analytics. Just as customer success teams tier accounts by renewal risk, HR analysts segment employees by departure probability — enabling targeted interventions rather than costly, blanket retention programs.
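The churn-model analogy can be sketched in a few lines. This is a hypothetical weighted score, not a validated model; the indicator names and weights are invented for illustration:

```python
# Hypothetical churn-style flight-risk score: a weighted sum of
# normalized leading indicators. Fields and weights are illustrative
# assumptions, not a validated attrition model.

INDICATOR_WEIGHTS = {
    "engagement_drop": 0.4,   # decline vs. previous survey cycle, scaled 0-1
    "no_promotion_2y": 0.3,   # 1 if no promotion in the last 2 years, else 0
    "peer_departures": 0.3,   # share of teammates who left this year, 0-1
}

def flight_risk_score(indicators: dict) -> float:
    """Combine leading indicators into a 0-1 score for segmentation."""
    return sum(INDICATOR_WEIGHTS[k] * indicators.get(k, 0.0)
               for k in INDICATOR_WEIGHTS)

score = flight_risk_score(
    {"engagement_drop": 0.5, "no_promotion_2y": 1, "peer_departures": 0.2}
)
print(round(score, 2))  # 0.56
```

Even a crude score like this lets you tier employees the way customer success teams tier accounts: sort by score, then cut into High / Medium / Low bands.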
The predictive-descriptive split also matters here. Descriptive analytics explains past attrition; predictive analytics forecasts future risk. The most effective retention analyses use both, combining lagging indicators (past turnover rates, completed exit interviews) with leading indicators (declining engagement scores, reduced internal mobility) to build a complete picture.
Finally, cost-of-turnover modeling — typically estimated at 0.5x to 2x annual salary per departure depending on role complexity — provides the financial grounding that makes retention analysis compelling to non-HR executives. Including this framing in your prompt ensures the output connects workforce risk to business risk.
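The 0.5x–2x rule translates directly into a cost band you can cite in the prompt. A minimal sketch, using made-up salaries:

```python
# Cost-of-turnover band from the 0.5x-2x salary rule of thumb above.
# Salaries are fabricated example inputs.

def turnover_cost_range(salaries, low_mult=0.5, high_mult=2.0):
    """Estimate the low/high cost band for a set of departures."""
    total = sum(salaries)
    return total * low_mult, total * high_mult

low, high = turnover_cost_range([120_000, 135_000, 110_000])
print(f"Estimated cost of 3 departures: ${low:,.0f} - ${high:,.0f}")
# Estimated cost of 3 departures: $182,500 - $730,000
```

Including a figure like "our three Q3 departures cost an estimated $180K–$730K" in the prompt gives the AI the financial anchor that non-HR executives respond to.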
Prompt variations
You are an HR strategist advising a 60-person Series B startup with limited formal people data.
Using the inputs below, produce a retention risk snapshot for the founding team and Head of People:
Available inputs:
- Informal 1:1 feedback themes from the last quarter (summarized)
- Headcount growth rate and voluntary departures in the past 6 months
- Glassdoor review themes (last 10 reviews)
Deliverables:
- Top 3 retention risks ranked by severity
- Which employee segments are most exposed (by role type or tenure)
- 3 low-cost interventions actionable within 30 days
Tone: Direct and candid. Flag where limited data creates uncertainty. Output: 400–500 words.
You are a workforce analytics director preparing a quarterly retention risk briefing for a Fortune 500 HR leadership team.
Analyze the following Q3 data to produce an executive retention risk dashboard summary:
Data inputs:
- eNPS scores by business unit (Q2 vs. Q3 delta)
- Regrettable attrition rate by level (Individual Contributor, Manager, Director+)
- Open role aging data (average days to fill by department)
- Internal transfer request volume
Required output:
- Executive summary (150 words max)
- Top 3 business units by flight risk with supporting evidence
- Year-over-year trend callout (improvement or deterioration)
- 3 strategic recommendations for Q4 with owners and success metrics
Format: Use headers and bullet points. Plain language for a mixed executive audience. 700–850 words.
You are an organizational effectiveness consultant supporting a post-merger HR integration.
Analyze the data below to identify retention risks specific to the acquired workforce (approximately 200 employees joining from TechCo):
Data inputs:
- Pre-merger engagement scores from TechCo's last annual survey
- 60-day post-merger pulse survey results
- Voluntary departures since close date (by department and tenure)
- Manager sentiment from integration steering committee interviews
Deliverables:
- Risk segmentation: which TechCo employee groups are highest flight risk and why
- Cultural friction points driving disengagement (cite specific survey themes)
- 4 retention interventions tailored to post-merger psychology (autonomy, identity, uncertainty)
- 30/60/90-day monitoring milestones
Constraints: Acknowledge where short data windows limit confidence. Output: 600–800 words.
When to use this prompt
HR Business Partners
Use this prompt to translate raw engagement survey data into a prioritized list of at-risk teams, giving HRBPs a defensible brief to bring to department leaders before talent walks out the door.
Chief People Officers
Turn quarterly people metrics into an executive-ready retention risk report that connects workforce instability to revenue impact — the language the CFO and board actually care about.
Compensation and Benefits Teams
Identify which tenure bands and departments show the highest flight risk so you can make targeted compensation adjustments instead of costly, blanket salary increases.
People Analytics Teams
Structure a repeatable analysis template that ingests multiple data streams — survey scores, turnover rates, exit themes — and outputs a consistent risk framework every quarter.
Consulting and Advisory Firms
Deliver faster client diagnostics by using this prompt as a starting framework for workforce retention engagements, customizing inputs to match each client's available data.
Pro tips
1. Specify your exact data sources by name — the more concrete you are (e.g., 'Glint engagement scores' vs. 'survey data'), the more targeted the analysis will be, because the AI can apply the right interpretive standards for that data type.
2. Add a cost constraint or budget context (e.g., 'interventions must require no additional headcount') so the recommended actions are realistic for your organization, not just theoretically ideal.
3. Name the decision this analysis needs to support — whether it's a board presentation, a Q3 budget request, or a department reorg — so the AI frames findings around that specific outcome rather than producing a general report.
4. Include a benchmark if you have one (e.g., 'our industry average voluntary turnover is 18%') so the AI can calibrate which of your numbers represent genuine risk signals versus normal variance.
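The benchmark tip reduces to a one-line comparison you can run before prompting. All numbers here are invented for illustration:

```python
# Sketch: flag departments above an industry benchmark so the prompt
# can name genuine risk signals. Rates are fabricated examples.

INDUSTRY_AVG_TURNOVER = 0.18  # assumed 18% industry voluntary turnover

dept_turnover = {"Engineering": 0.26, "Sales": 0.17, "Support": 0.21}

flagged = {dept: rate for dept, rate in dept_turnover.items()
           if rate > INDUSTRY_AVG_TURNOVER}
print(flagged)  # {'Engineering': 0.26, 'Support': 0.21}
```

Feeding only the flagged departments into your prompt, alongside the benchmark itself, keeps the analysis focused on real outliers rather than normal variance.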
The quality of your retention risk analysis depends almost entirely on the quality of the inputs you describe. Before you write your prompt, take 10 minutes to inventory what you actually have.
Quantitative signals to include:
- Engagement or eNPS scores, ideally with two or more cycles so you can show trend direction
- Voluntary turnover rate broken down by tenure band, department, or level
- Time-to-fill data for open roles (a lagging indicator of team strain)
- Internal mobility rates — low internal movement often predicts external departure
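A quick prep step before prompting is to compute the cycle-over-cycle delta yourself, so the prompt states trend direction rather than raw levels. The scores below are fabricated examples:

```python
# Illustrative prep: cycle-over-cycle engagement deltas by department,
# sorted worst-first. Scores are made-up inputs.

q2 = {"Engineering": 41, "Sales": 55, "Support": 48}
q3 = {"Engineering": 33, "Sales": 56, "Support": 44}

deltas = {dept: q3[dept] - q2[dept] for dept in q2}
for dept, delta in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{dept}: {delta:+d}")
# Engineering: -8
# Support: -4
# Sales: +1
```

A line like "Engineering engagement is down 8 points since Q2" is far harder for the AI to paper over with generic advice than two unconnected score tables.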
Qualitative signals to include:
- Exit interview themes, aggregated by category (manager quality, compensation, growth, culture)
- Open-ended survey responses, summarized by theme rather than quoted individually
- Skip-level or focus group feedback from the past two quarters
What to do when data is missing: Be honest in your prompt. Include a line like: 'We do not have compensation benchmarking data; flag where this gap limits conclusions.' The AI will produce a more credible analysis when it knows what it doesn't have to work with — and you'll know which data investments to prioritize next cycle.
A retention risk analysis is only as valuable as the decisions it drives. Once you have your AI-generated report, use this structure to convert findings into an action plan your leadership team can actually execute.
30 Days — Stop the bleeding: Focus on your highest-risk segment. Identify the 1-2 interventions with the fastest impact and lowest resource cost. Common examples: manager training for teams with low engagement scores, stay interviews with employees in the 1–3 year tenure band, or a compensation spot-check for roles where you're consistently losing candidates to competitors.
60 Days — Fix the system: Address the structural root causes the analysis surfaces. This might mean redesigning your internal transfer process, updating your performance review calibration, or launching a career pathing pilot in a high-risk department.
90 Days — Measure and iterate: Re-run your pulse survey or eNPS for the segments you intervened on. Compare the delta. Update your prompt with new data and generate a refreshed analysis. Build this into your quarterly people review cadence so it becomes a repeatable practice rather than a one-time fire drill.
Pro tip: Assign a named owner and a success metric to each action item before the plan leaves the HR team. 'Improve manager scores' is not an action. 'Head of L&D to launch 4-session manager effectiveness series by Nov 15; target: 10-point improvement in direct report trust scores by Q1 survey' is.
There are two distinct modes of retention analysis, and knowing which one you need will sharpen your prompt significantly.
Descriptive analysis answers: 'What has already happened and why?' It uses historical data — past turnover rates, completed exit interviews, previous survey cycles — to explain patterns. This is the right mode when you're preparing a retrospective for leadership or diagnosing a spike in attrition that already occurred.
Predictive analysis answers: 'Who is likely to leave in the next 90 days?' It requires leading indicators — recent engagement score drops, declining 1:1 meeting frequency, internal transfer requests, or changes in performance review scores. This mode is more forward-looking and more useful for proactive intervention, but it requires more sophisticated data.
How to signal this in your prompt:
- For descriptive: 'Using historical data from the past 12 months, identify the root causes of our elevated attrition rate in Q2 and Q3.'
- For predictive: 'Using the leading indicators below, identify which employee segments show early warning signals of departure risk over the next 60–90 days.'
Most organizations benefit from running both analyses in parallel: a descriptive look backward to explain what happened, and a predictive scan forward to catch the next wave before it breaks.
When not to use this prompt
This prompt isn't the right tool in every situation. If you're dealing with a single, specific involuntary termination or a performance improvement plan for an individual employee, this analysis framework is too broad — use a targeted conversation guide or a case-by-case documentation prompt instead.
This prompt also isn't a substitute for real-time manager judgment. If an employee has already verbally communicated intent to leave, you need a retention conversation guide, not an analysis report.
Finally, if your organization has fewer than 15–20 employees, aggregated data analysis loses statistical meaning. A structured 1:1 feedback collection prompt will serve you better.
Troubleshooting
The AI produces generic retention advice instead of analyzing my specific data
Your data inputs section isn't concrete enough. Instead of saying 'engagement survey results,' specify the exact scores: 'Glint eNPS of 22 in Engineering vs. company average of 41, down 8 points from Q2.' The more specific your numbers, the less room the AI has to substitute generic advice for actual analysis.
The action plan recommendations are impractical or too expensive
Add a constraints line to your prompt: 'All recommended interventions must require no new headcount and fit within a $50K discretionary budget for Q4.' Without a resource constraint, the AI defaults to best-case recommendations that ignore your real-world operating environment.
The output reads like an HR textbook, not an executive brief
Strengthen your audience instruction. Replace 'suitable for HR professionals' with 'written for a CFO with no HR background who needs to understand financial exposure within the first 100 words.' Naming a skeptical, non-specialist reader forces the AI to prioritize clarity and business impact over HR terminology.
How to measure success
A strong output from this prompt will do five specific things well. First, it segments employees into distinct risk tiers with clear criteria — not a flat list. Second, each root cause claim ties back to a specific data point you provided, not to general HR theory. Third, the recommended interventions are specific and scoped, not vague directives. Fourth, the tone shifts appropriately between HR detail and executive summary language. Fifth, the analysis flags at least one data gap or caveat rather than projecting false confidence. If any of these five signals are missing, tighten the relevant section of your prompt and regenerate.
Frequently asked questions
Can I use this prompt without formal survey data?
Yes. Replace formal survey data with whatever qualitative inputs you have — manager feedback themes, 1:1 notes, Glassdoor reviews, or informal pulse check summaries. The prompt works as long as you clearly list what you're providing so the AI knows how to weight the evidence.
How do I customize the risk tiers?
Replace 'High / Medium / Low' with your internal definitions. For example, specify 'High risk = engagement score below 55 AND tenure under 2 years' so the AI applies your criteria rather than inventing its own. The more precise your thresholds, the more defensible the output.
Can I run this for a single department or team?
Absolutely. Narrow the scope in your data inputs section — specify the department name, headcount, and the specific data you have for that team. A focused analysis of one high-risk department is often more actionable than a company-wide overview.
How should I handle employee data privacy?
Never include personally identifiable information. Aggregate your data by department, tenure band, or role level before inputting it into any AI tool. Use anonymized summaries of exit interview themes rather than individual quotes. Always check your organization's AI data governance policy before proceeding.
How often should I run this analysis?
Most people analytics teams run a full analysis quarterly, aligned to business reviews, with lightweight pulse checks monthly. Build the prompt once, then update the data inputs each cycle so you get consistent, comparable outputs over time rather than one-off snapshots.