Why this is hard to get right
The Real Cost of a Vague Integration Plan
Marcus had closed a $14M acquisition on a Friday. By Monday, he was fielding questions he couldn't answer. Which engineering team owned the migration? Were salespeople supposed to keep selling both products? When would customers hear anything official?
He was a seasoned VP of Operations at a 200-person SaaS company, and this was his third M&A deal. He knew the 90-day window mattered. He also knew that most integration failures aren't strategic — they're coordination failures. Nobody owns the seams.
He sat down to draft an integration plan and typed what felt obvious: "Create a 90-day integration plan for our new acquisition." The AI returned a clean-looking document with generic phases: discovery, alignment, execution. It listed workstreams. It mentioned "change management." It looked like a slide from a consulting deck he'd seen in 2018.
It told him nothing he didn't already know. It didn't account for the fact that the acquired company's top three engineers were flight risks. It didn't address the 40 enterprise customers whose contracts needed review within 60 days. It didn't help him figure out who approved pricing decisions when both sales teams collided on the same account.
Marcus realized the problem wasn't the AI. It was him. He'd asked a generic question and gotten a generic answer.
He started over. This time he added the context that actually mattered: the specific customer retention goal, the constraint that no layoffs would happen in the first 90 days, the fact that both brands needed to survive for at least six months. He specified the audience — his exec team and functional leads — and demanded a one-page output with named workstream owners, weekly milestones, a risk register, and a decision log template.
The second output was something he could actually use. It surfaced the engineering access risk as a top-three priority. It recommended a customer communication sequence with timing tied to contract renewal dates. It structured decision rights so his team wouldn't need to escalate every judgment call.
The difference wasn't magic. It was specificity. The prompt had enough context to mimic the judgment of someone who had run this before. Marcus had given it the same briefing he'd give a consultant on day one — and the plan reflected that.
That's the hard truth about integration planning with AI: the tool is only as useful as the brief you give it. Most leaders skip the constraints, skip the numbers, skip the audience. They get back a framework when they needed a playbook.
Common mistakes to avoid
Omitting Hard Constraints Upfront
Integration plans without stated constraints — like 'no layoffs in 90 days' or 'keep both brands live' — produce recommendations that are technically sound but politically or contractually impossible. The AI will recommend the most efficient path, not the most viable one. Always state what's off the table before asking for a plan.
Skipping Specific Retention and Synergy Targets
Asking for a plan without quantified goals forces the AI to invent priorities. When you specify 'retain 95% of top 50 customers' and a '$1.2M synergy target,' the AI calibrates every recommendation against those outcomes. Without them, you get balanced coverage of all workstreams instead of weighted urgency where it matters most.
Requesting a Plan Without Naming the Audience
An integration plan for a CEO reads differently than one for functional leads. Audience omission leads to a document nobody fully owns. Specify whether your output is for exec alignment, operational leads, or a combined all-hands briefing. The detail level, ownership language, and action specificity all shift accordingly.
Asking for Too Many Workstreams Without Owners
A common AI output error is producing 12 workstreams with no named owners, which turns a plan into a list. Specify the number of workstreams you want and require that each has an explicit owner role. This forces the AI to scope appropriately and prevents the plan from expanding beyond your team's actual capacity.
Ignoring Decision Rights and Escalation Paths
Most integration plans stall not because of missing tasks, but because nobody knows who approves what. If you don't ask for a decision log or decision-rights framework, the AI won't include one. Explicitly request a decision log template or a RACI summary within the prompt to ensure your plan includes governance, not just activities.
Treating the First Output as Final
A 90-day integration plan covers enormous ground. One prompt pass rarely surfaces all the risk. After generating the initial plan, run a second prompt asking the AI to stress-test the top 10 risks, then a third asking it to rewrite the customer communication section for a specific segment. Layered prompting produces better plans than a single exhaustive request.
The transformation
Before:
Create a 90-day plan for integrating our recent acquisition and tell me what to do first.
After:
You're a COO who has led 5 post-acquisition integrations. Create a **first 90 days integration plan** for acquiring **[Company B]** into **[Company A]**.
1. Audience: **exec team and functional leads**
2. Goals: **retain 95% of top 50 customers**, hit **$1.2M** synergy target, avoid service disruptions
3. Constraints: **no layoffs in 90 days**, keep both brands for **6 months**
4. Output: **one-page plan** with 6 workstreams, owners, weekly milestones, top 10 risks, and a decision log template
5. Tone: **direct, calm, action-focused**
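If you generate this brief repeatedly across deals, it can help to assemble it from named parts so no section (audience, goals, constraints) gets silently skipped. A minimal sketch, assuming a simple string template; the function name and field structure are illustrative, not from any specific tool:

```python
def build_integration_prompt(persona, task, audience, goals, constraints, output_spec, tone):
    """Assemble a structured integration-planning prompt from named parts.

    Forcing each section to be a required argument means a brief can't be
    built with the constraints or audience left out.
    """
    lines = [
        persona,
        task,
        f"1. Audience: {audience}",
        "2. Goals: " + "; ".join(goals),
        "3. Constraints: " + "; ".join(constraints),
        f"4. Output: {output_spec}",
        f"5. Tone: {tone}",
    ]
    return "\n".join(lines)

# Example values mirror the After Prompt above.
prompt = build_integration_prompt(
    persona="You're a COO who has led 5 post-acquisition integrations.",
    task="Create a first 90 days integration plan for acquiring [Company B] into [Company A].",
    audience="exec team and functional leads",
    goals=["retain 95% of top 50 customers", "hit $1.2M synergy target", "avoid service disruptions"],
    constraints=["no layoffs in 90 days", "keep both brands for 6 months"],
    output_spec="one-page plan with 6 workstreams, owners, weekly milestones, top 10 risks, and a decision log template",
    tone="direct, calm, action-focused",
)
print(prompt)
```

The point of the structure is the same as the prose advice: a brief with a missing section should fail loudly, not quietly produce a generic plan.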
Why this works
Persona Raises the Bar
The After Prompt opens with 'You're a COO who has led 5 post-acquisition integrations.' This role assignment isn't cosmetic. It primes the AI to apply judgment from repeated experience — anticipating failure modes, weighting risks, and avoiding first-time mistakes that a generic business advisor persona would miss.
Numbers Replace Ambiguity
The After Prompt specifies 'retain 95% of top 50 customers' and 'hit $1.2M synergy target.' Concrete figures force the AI to prioritize activities that move specific needles. Without them, every workstream looks equally important and the plan reads like a checklist instead of a strategy.
Constraints Prevent Useless Recommendations
'No layoffs in 90 days' and 'keep both brands for 6 months' are real-world guardrails that shape every recommendation downstream. By stating them upfront, the prompt eliminates an entire class of technically correct but practically impossible suggestions that would otherwise require a full rewrite.
Format Specification Forces Executable Output
Requiring 'a one-page plan with 6 workstreams, owners, weekly milestones, top 10 risks, and a decision log template' converts a narrative document into an operational tool. Each element has a specific function — milestones create accountability, the risk register surfaces blind spots, and the decision log prevents approval bottlenecks.
Tone Instruction Matches Leadership Context
'Direct, calm, action-focused' is not generic tone guidance. It signals the communication register appropriate for a stressed executive team during a volatile period. The AI avoids hedging language, qualifiers, and consultant-speak — and produces language that leaders can read fast and act on immediately.
The framework behind the prompt
Why 90-Day Plans Succeed or Fail: The Research Behind Integration Planning
The "first 90 days" concept in organizational change is closely tied to Michael Watkins' research on leadership transitions, which found that new leaders — and by extension, newly merged organizations — establish patterns in the first three months that are extremely hard to reverse. Watkins developed the STARS framework (Start-up, Turnaround, Accelerated Growth, Realignment, Sustaining Success) to help leaders diagnose the integration context before designing a plan. Most M&A integrations fall into the Turnaround or Accelerated Growth categories, where speed and clarity of ownership matter more than consensus-building.
McKinsey research consistently shows that 70% of mergers fail to achieve their stated value targets, and the most common failure mode isn't strategic misalignment — it's execution delay in the first 100 days. The window between deal close and the first major customer renewal cycle is typically 60-90 days. Plans that don't address customer risk in that window lose retention before they ever reach synergy targets.
From a prompting theory perspective, role-based prompting (assigning an expert persona) combined with constraint-driven prompting (stating what cannot happen) produces markedly better AI output for planning tasks than open-ended requests. This aligns with the RISEN framework (Role, Instructions, Steps, End goal, Narrowing) — the integration prompt on this page incorporates all five elements.
Decision-rights clarity is another evidence-backed predictor of integration success. Research from Bain & Company on the RAPID decision framework shows that organizations that explicitly assign decision authority during integration complete workstreams 35% faster than those that rely on informal consensus. A well-structured prompt that requests a decision log isn't just a formatting preference — it reflects a structural predictor of integration success.
Finally, the one-page constraint in the After Prompt reflects a core principle from communication research: documents that exceed one page for executive audiences see dramatically lower engagement and action rates. Forcing the AI to compress a complex plan into a single structured page isn't about brevity — it's about forcing the prioritization that most integration teams avoid.
Prompt variations
You're a COO with deep experience in private equity-backed company integrations.
Create a first 90 days integration plan for merging TechFlow Inc. (acquired, 80 employees, B2B SaaS) into GridOps (platform company, 320 employees, infrastructure software).
- Audience: PE operating partners and the GridOps CEO
- Financial goals: Achieve $900K in annualized cost synergies by month 4, protect $3.2M ARR from TechFlow's top 30 accounts
- Constraints: PE board review at day 45; no product pricing changes until month 6; retain all TechFlow engineering staff through day 90
- Output: One-page executive plan with 5 workstreams (people, product, systems, go-to-market, finance), named owner roles, bi-weekly milestones, a 45-day board readiness checklist, and top 8 risks
- Tone: Investor-grade — precise, numbers-driven, no hedging
You're a Chief People Officer who has navigated 4 post-merger workforce integrations.
Create a 90-day people integration plan for bringing the 60-person team from acquired company Beacon Analytics into Northstar Data (parent company, 180 employees).
- Audience: CHRO, VP of People, and functional team leads
- Goals: Retain 100% of Beacon's 12 senior engineers and 3 product leads through day 90; complete benefits harmonization by day 60; achieve one shared performance review cycle by day 90
- Constraints: Beacon employees stay on existing compensation through Q2; no org chart changes announced before day 30; all communication goes through direct managers first
- Output: Week-by-week people plan covering onboarding, culture integration, compensation review, manager enablement, and retention risk flags for top 15 employees
- Tone: Empathetic but operationally precise — this plan will be read by managers who are nervous
You're a VP of Customer Success with experience protecting revenue through two post-acquisition transitions.
Create a 90-day customer communication and retention plan for managing Fieldstone Software's 200-account customer base after its acquisition by Arcadia Systems.
- Audience: CS team leads, account managers, and the CCO
- Goals: Prevent churn on the top 40 accounts (representing $2.8M ARR); complete health scoring for all accounts by day 30; deliver proactive outreach to every enterprise account by day 21
- Constraints: No product EOL announcements before day 60; sales team cannot contact CS-owned accounts without CS approval; pricing stays flat through the first renewal cycle
- Output: Account segmentation matrix, outreach sequence with message templates for three customer tiers, escalation protocol for at-risk accounts, weekly CS team check-in agenda, and a churn risk tracker template
- Tone: Customer-first — calm, reassuring, specific about what stays the same
You're a VP of Engineering who has led technical integration for 3 acquisitions.
Create a 90-day technical integration plan for merging Capsule AI's engineering team and systems into Momentum Platform.
- Audience: Engineering leads, CTO, and IT/security
- Goals: Complete SSO and identity access migration by day 45; deprecate duplicate dev tools by day 75; achieve shared CI/CD pipeline and code review standards by day 90
- Constraints: No production system changes during Momentum's Q3 launch freeze (days 15-35); Capsule's data must stay in EU region through legal review; all access changes require security sign-off
- Output: Technical workstreams for identity/access, infrastructure, dev tooling, data migration, and security compliance — each with a named engineering lead role, weekly milestones, dependency map, and a go/no-go checklist for each phase
- Tone: Technical and precise — this plan will be used in sprint planning and Jira
When to use this prompt
Founders after an acquisition
You need a 90-day integration plan that protects customers and keeps your team aligned on priorities.
Product managers coordinating roadmaps
You need a shared plan that clarifies which roadmap items change, which stay, and who approves decisions.
Customer success leaders managing renewals
You need an integration timeline that limits churn risk and sets rules for customer communication.
Engineering leaders combining teams
You need workstreams for systems, access, security, and delivery, with owners and weekly checkpoints.
Sales leaders aligning accounts
You need guardrails for account ownership, pricing changes, and cross-sell timing without disrupting pipeline.
Pro tips
1. Define your top customer risk first so the plan protects renewals before internal cleanup.
2. Add 3 non-negotiables to prevent scope creep and keep leaders aligned under pressure.
3. Specify decision rights by workstream so teams don't stall while waiting for approvals.
4. List the 5 systems that must integrate first so the plan reflects technical reality.
After generating your initial integration plan, run a second prompt that forces the AI to attack its own output. This is one of the highest-leverage moves in AI-assisted planning.
Use this follow-up structure:
'Review the integration plan above as a skeptical board member who has seen 3 integrations fail. Identify the 5 most likely failure points in the first 30 days. For each failure point: name the specific risk, rate its likelihood (high/medium/low), state the leading indicator you'd watch, and recommend one preemptive action.'
This approach works because it assigns an adversarial role with experience — not just a general critic. The board member framing prevents vague warnings like 'communication breakdowns' and instead produces specific flags like 'engineering access dependencies not resolved before go-live.'
You can also run a third prompt focused on a single workstream: 'Rewrite the customer communication workstream in detail. Include a week-by-week outreach schedule, draft message frameworks for enterprise vs. mid-market accounts, and an escalation protocol for accounts that don't respond.'
Layered prompting adds roughly 20 minutes to your planning session and can prevent weeks of fire-fighting during execution. Most integration leaders who use AI tools stop after the first output — which is the planning equivalent of reading only the table of contents.
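If you script your planning sessions, the layered pattern is easy to encode: each follow-up prompt receives the latest draft as context. A minimal sketch, assuming you pass in your own LLM caller — the `ask` parameter and the stub model below are hypothetical stand-ins, not a real API:

```python
def layered_plan(ask, base_prompt, follow_ups):
    """Run a base planning prompt, then feed each follow-up the latest draft.

    `ask` is any callable taking a prompt string and returning text (your
    LLM client of choice). Returns every intermediate draft so no pass is
    lost between iterations.
    """
    drafts = [ask(base_prompt)]
    for follow_up in follow_ups:
        context = f"{follow_up}\n\n--- PLAN UNDER REVIEW ---\n{drafts[-1]}"
        drafts.append(ask(context))
    return drafts

# Stub model so the flow is runnable without an API key; swap in a real caller.
def stub_model(prompt):
    return f"[draft based on {len(prompt)} chars of input]"

drafts = layered_plan(
    stub_model,
    "Create a first 90 days integration plan ...",
    [
        "Review the plan above as a skeptical board member who has seen 3 "
        "integrations fail. Identify the 5 most likely failure points in the first 30 days.",
        "Rewrite the customer communication workstream in detail.",
    ],
)
print(len(drafts))  # base draft plus one draft per follow-up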
The After Prompt on this page targets a mid-market acquisition with meaningful ARR and headcount. Here's how to scale it up or down without losing output quality.
For small deals (under $5M, under 30 employees):
- Reduce workstreams from 6 to 3-4: people, systems, and customer handoff
- Replace 'synergy target' with 'cost overlap eliminated' — be specific (e.g., 'eliminate $80K in duplicate SaaS tools')
- Focus the decision log on just 4-5 recurring decisions, not a full governance framework
- Request a 30-day plan instead of 90 — most small deal integration happens faster
For large deals (over $50M, over 200 employees):
- Add a communications workstream separate from customer success — internal communications to 200+ employees requires its own plan
- Request a RACI matrix, not just named owners — at this scale, one person rarely owns a full workstream
- Ask for a 30/60/90 milestone structure with explicit board reporting cadence
- Specify legal and compliance workstream separately from IT and systems
- Add a 'stranded costs' section to the risk register — large integrations regularly miss costs that don't disappear when teams merge
The core prompt structure remains the same. What changes is the scope, the number of stakeholders named, and the governance formality.
The 90-day integration prompt is one of several AI-assisted planning tools you can deploy across a full deal cycle. Here's where structured prompting adds the most value at each stage.
Due Diligence (pre-close): Use a variation that asks the AI to generate a due diligence checklist for a specific deal type (SaaS acquisition, services firm, hardware company). Include the target company's size and your strategic rationale. Ask for red flags to investigate by workstream.
Deal Memo and Board Presentation (pre-close): Prompt the AI to draft a one-page strategic rationale that connects acquisition goals to the parent company's three-year plan. Specify the audience (board, investors, or management team) and the decision you're asking them to make.
Day 1 Communication (close day): Use a prompt focused entirely on stakeholder communication — separate drafts for employees, customers, and the market. Specify tone by audience: employees need empathy and certainty, customers need continuity, the market needs strategic framing.
30/60/90 Reviews: After each milestone, prompt the AI to generate a structured status update against the original plan. Provide the original plan as context, then describe what's on track, what's behind, and what's changed. Ask for a revised plan for the remaining period.
Each of these uses follows the same principle: specific context in, calibrated output out.
When not to use this prompt
Don't use this prompt pattern when you don't yet have the basic deal facts. If you're pre-LOI and haven't confirmed customer count, team size, or deal constraints, the AI will fill gaps with assumptions — and those assumptions will shape recommendations in ways that are hard to notice and easy to act on incorrectly.
Avoid this approach for highly regulated industries like healthcare, financial services, or defense contracting where integration plans require legal review before distribution. AI-generated plans can miss compliance-critical sequencing — for example, recommending a systems migration before regulatory data residency requirements are met.
Don't substitute this plan for stakeholder alignment. An AI-generated plan is a starting document, not a signed commitment. If your exec team hasn't agreed on the top three integration goals, generating a detailed plan creates the illusion of alignment without the substance.
Consider alternatives when:
- The integration is primarily a talent acquisition with no system or customer overlap — a simpler onboarding and retention plan prompt will serve better
- You need a plan for public-company disclosure — involve legal and IR teams directly
- The acquired company is in a different country — labor law and employment constraints require local expertise that AI models handle inconsistently
Troubleshooting
The plan is too generic and reads like a consulting template
Add your specific numbers and constraints before anything else. Reopen the prompt and add: 'This plan must reference our specific goals: [retention target], [synergy amount], [team headcount]. Every recommendation must connect to one of these outcomes.' Generic output is almost always caused by generic input — the AI has no choice but to produce frameworks when it lacks your actual targets.
The AI produces 8 workstreams when I asked for 6
Add a hard ceiling with a consolidation instruction: 'You must limit the plan to exactly 6 workstreams. If you identify more than 6 priority areas, consolidate related activities into the nearest workstream. Do not add a workstream without removing one.' AI models tend to expand scope when given ambiguous upper bounds — specifying 'exactly' rather than 'up to' closes that gap.
Risk register items are vague (e.g., 'cultural misalignment') with no mitigation actions
Add a risk format requirement to the prompt: 'For each risk, provide: risk name, likelihood (high/medium/low), impact if realized, the leading indicator to monitor, and one specific mitigation action. No risk entry should be listed without all five elements.' Vague risk registers come from open-ended risk requests — structuring the output format forces specificity.
The decision log template has too many columns and won't fit in a working document
Specify the tool and column limit: 'Generate a decision log table formatted for Google Sheets with no more than 6 columns. Columns must include: decision topic, owner, date, decision made, and rationale. Skip any columns that duplicate information.' Formatting constraints prevent the AI from building theoretically complete but practically unusable artifacts.
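To make the constraint concrete, here is a minimal sketch of generating that starter file with Python's standard `csv` module. The first five columns follow the fix above; the sixth ("status") and the sample row are illustrative assumptions:

```python
import csv
import io

# Five required columns from the prompt fix, plus one assumed "status" column
# to stay within the six-column ceiling.
COLUMNS = ["decision topic", "owner", "date", "decision made", "rationale", "status"]

def decision_log_csv(rows):
    """Render decision rows as CSV text importable into Google Sheets."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Hypothetical example entry showing the intended granularity.
log = decision_log_csv([{
    "decision topic": "Pricing on shared accounts",
    "owner": "VP Sales",
    "date": "day 12",
    "decision made": "Hold legacy pricing through first renewal",
    "rationale": "Retention target outweighs short-term margin",
    "status": "final",
}])
print(log)
```

Keeping the template flat and narrow is the point: a log your team will actually fill in beats a governance framework they won't.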
The weekly milestones are all clustered in weeks 1-4 with almost nothing in weeks 8-12
Add an explicit distribution instruction: 'Distribute milestones evenly across all 13 weeks. Each workstream must have at least one milestone in weeks 9-13. Weeks 1-4 should not contain more than 40% of total milestones.' Front-loaded milestone plans are a common AI output error — the model over-indexes on high-urgency early actions and underplans the stabilization phase.
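You can also check the distribution mechanically before circulating the plan. A sketch under the assumption that milestones can be reduced to (workstream, week) pairs — the data shape and thresholds mirror the instruction above and are otherwise illustrative:

```python
def distribution_issues(milestones, total_weeks=13):
    """Flag front-loaded plans.

    `milestones` is a list of (workstream, week) tuples. Returns a list of
    issue strings: one if weeks 1-4 hold more than 40% of all milestones,
    and one per workstream with nothing scheduled from week 9 onward.
    """
    issues = []
    early = sum(1 for _, wk in milestones if wk <= 4)
    if milestones and early / len(milestones) > 0.40:
        issues.append(f"weeks 1-4 hold {early}/{len(milestones)} milestones (>40%)")
    for ws in sorted({ws for ws, _ in milestones}):
        if not any(wk >= 9 for w, wk in milestones if w == ws):
            issues.append(f"workstream '{ws}' has no milestone in weeks 9-{total_weeks}")
    return issues

# Hypothetical plan: front-loaded, and 'people' stops after week 2.
plan = [("people", 1), ("people", 2), ("systems", 3), ("systems", 10), ("customers", 12)]
for issue in distribution_issues(plan):
    print(issue)
```

Running this on an AI-generated plan takes a minute and catches the front-loading failure mode before your team commits to an empty back half.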
How to measure success
A strong AI-generated integration plan passes these checks:
- Specificity test: Every workstream references at least one number from your input (a dollar target, a headcount, a date). If the plan uses only generic percentages, the AI improvised your goals.
- Constraint compliance: Scan for your stated non-negotiables. If you said "no layoffs in 90 days" and the plan recommends a "staffing review," the AI ignored a hard constraint.
- Owner completeness: Every workstream and major milestone should have a named role, not just "leadership" or "the team."
- Risk register utility: Each risk entry should name a specific scenario, not a category. "Top engineer resigns before systems migration completes" is useful. "Talent risk" is not.
- Decision log structure: The log template should be usable in a spreadsheet within 5 minutes. If it requires significant reformatting, the format instruction was too loose.
- Timeline distribution: Milestones should appear in all 13 weeks, not just the first four.
- Actionability: A functional lead with no prior integration experience should be able to read their workstream section and know what to do in week one.
Now try it on something of your own
Reading about the framework is one thing. Watching it sharpen your own prompt is another — takes 90 seconds, no signup.
Build a merger integration plan prompt tailored to your deal size, customer risk, and real constraints.
Frequently asked questions
Do I need to share confidential deal details for the plan to be useful?
You don't need legal names or confidential details. What the AI needs are the structural facts: company sizes, ARR at risk, team headcount, and deal constraints. Replace real names with descriptive labels like 'acquired company' and 'parent company' if needed. The quality of the plan depends on goals and constraints, not brand names.
Can this prompt work for an internal reorganization instead of an acquisition?
Yes, with adjustments. Replace acquisition-specific language with 'team consolidation' or 'division merger.' Remove synergy targets and replace with operational goals like 'reduce duplicate tool spend by 30%' or 'consolidate two support queues into one by day 60.' The workstream and milestone structure transfers directly to internal reorg planning.
How do I keep the output to a single page?
Add a hard constraint to your prompt: 'Output must fit on one page. Use tables, not prose. Maximum 3 sentences per workstream section.' AI models default to thorough when the format isn't constrained. Specifying visual format (table vs. bullets vs. prose) and word or section limits produces tighter, more actionable output.
Can I use this prompt before the deal closes?
Use the prompt in scenario-planning mode. Replace confirmed numbers with 'target' or 'projected' labels and add: 'This is a pre-close scenario. Flag assumptions that require deal-room confirmation.' The AI will produce a conditional plan that surfaces what you need to verify during due diligence, which is often more useful than a post-close plan that ignores open questions.
How do I account for culture differences between the two companies?
Add a culture constraint section to the prompt. Specify: 'Company A has a top-down decision culture. Company B uses flat consensus-based decision making.' Then ask for a workstream that explicitly addresses decision-rights harmonization and manager communication protocols. Culture gaps are the most common integration failure — naming them in the prompt forces the plan to address them directly.
What if one prompt can't cover everything?
Iterate in layers. Run the main plan prompt first. Then run a second prompt: 'Review the plan above and identify the 5 most likely failure points in the first 30 days. For each, add a specific mitigation step.' Then run a third focused on your highest-risk workstream. Layered prompting consistently outperforms trying to get everything in one pass.
How do I get a usable decision log instead of vague tracking language?
Be explicit: 'Generate a decision log table with columns for: decision topic, decision owner, stakeholders consulted, decision made, date, and rationale.' Without this specificity, AI tools produce vague 'decision tracking' narrative instead of a usable artifact. You can also ask for a second version formatted for a specific tool like Notion, Airtable, or Google Sheets.
Can I use this prompt mid-integration, after the original plan has slipped?
Absolutely. Add a 'current state' section: 'We are on day 42 of a 90-day integration. Workstreams 1 and 3 are on track. Workstream 2 (systems migration) is 2 weeks behind. Workstream 4 (customer communications) has not started.' Ask the AI to replan the remaining 48 days given the current state. Mid-course corrections often benefit more from structured prompting than the initial plan.