Anand & Wu — The Gen AI Playbook for Organizations
TL;DR
Harvard Business Review feature by Bharat N. Anand (Dean, NYU Stern) and Andy Wu (HBS Strategy). Argues that executives are asking the wrong questions about generative AI, fixating on its limitations instead of its strategic implications. Offers a 2×2 framework that maps tasks by cost of errors and type of knowledge to identify where to deploy GenAI today. Strategic differentiation, not speed alone, is what matters: rapid, targeted deployment; proprietary data; and complementary assets (people, processes, culture).
Key claims
The wrong questions
Executives commonly ask: “When will gen AI match my best employees? Is it accurate enough? Is my CIO moving fast enough? What are rivals doing?” These focus on the intelligence trajectory of GenAI rather than its strategic implications. The right question: “How can my organization use gen AI effectively today, regardless of its limitations? And how can we use it to create a competitive advantage?”
Two breakthroughs in access
GenAI has changed the access landscape in ways that compound:
- Nontechie employees can use GenAI without expert support. For decades AI was the domain of engineers, programmers, data scientists. ChatGPT changed that with natural-language interaction.
- GenAI is increasingly embedded into existing tools — email, videoconferencing, spreadsheets, CRM, ERP — lowering adoption barriers further.
The authors compare this to the MS-DOS → GUI transition of the 1980s: not necessarily more powerful, but dramatically more accessible.
The 2×2 framework: where and how to use GenAI
Two axes:
- Cost of errors: Low (small inefficiencies, missteps in a draft) ↔ High (reputational damage, legal liability, physical harm — incorrect financial filings, flawed medical recommendations)
- Type of knowledge: Explicit data (clearly articulated, structured/unstructured but capturable — numerical data, inventory databases, policy documents, customer reviews) ↔ Tacit knowledge (experiential, intuitive, context-specific — composing marketing campaigns, interpreting subtle cues, complex strategic trade-offs)
|  | Tacit knowledge | Explicit data |
|---|---|---|
| High cost of errors | Human-first zone — human leads, AI assists with minor tasks. Examples: setting strategy, integrating enterprise systems, disciplinary decisions, hiring critical employees, diagnosing cancer, providing psychotherapy | Quality control zone — AI produces, human verifies. Examples: drafting high-value contracts, writing production software code, conducting due diligence on records |
| Low cost of errors | Creative catalyst zone — AI creates options, human selects. Examples: creating advertisements, outlining sales scripts, developing products | No regrets zone — AI does it all, no human in the loop. Examples: addressing bulk customer inquiries, summarizing documents, screening résumés. Where AI agents will thrive in the future. |
The framework’s core insight: “The suitability of gen AI for a given task depends not on the intelligence of gen AI but on two deeper factors.” Stop debating whether GenAI is smart enough; instead, ask which tasks it can assist with today to make human judgment more effective.
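The quadrants amount to a simple routing rule over two inputs. A minimal Python sketch of that rule (the enum names, zone strings, and example tasks are my illustration of the framework, not code from the article):

```python
from enum import Enum

class ErrorCost(Enum):
    LOW = "low"
    HIGH = "high"

class Knowledge(Enum):
    TACIT = "tacit knowledge"
    EXPLICIT = "explicit data"

# The four quadrants of the Anand-Wu 2x2, keyed by (cost of errors, knowledge type).
ZONES = {
    (ErrorCost.HIGH, Knowledge.TACIT): "Human-first zone: human leads, AI assists",
    (ErrorCost.HIGH, Knowledge.EXPLICIT): "Quality control zone: AI produces, human verifies",
    (ErrorCost.LOW, Knowledge.TACIT): "Creative catalyst zone: AI creates options, human selects",
    (ErrorCost.LOW, Knowledge.EXPLICIT): "No regrets zone: AI does it all, no human in the loop",
}

def classify(task: str, cost: ErrorCost, knowledge: Knowledge) -> str:
    """Route a task to its quadrant in the 2x2 framework."""
    return f"{task} -> {ZONES[(cost, knowledge)]}"

print(classify("screening résumés", ErrorCost.LOW, Knowledge.EXPLICIT))
print(classify("diagnosing cancer", ErrorCost.HIGH, Knowledge.TACIT))
```

The point of writing it this way is that the lookup never consults model capability: only the task's error cost and knowledge type determine the zone, which is exactly the article's claim.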
The paradox of access (industry-level)
“Because everyone can use it, it becomes dramatically harder to capture value with it.” Pattern from Internet 1.0:
- Early adopters got brief advantages
- Benefits flowed to consumers, not firms
- E-ticketing: every airline adopted it; the savings flowed to customers as lower fares
- CAD/ERP: once a source of advantage, now table stakes
Implications for 2025:
- AI-first entrants are coming — small teams of experts can replace dozens of conventional roles. Building blocks already exist (software dev agents, AI sales reps).
- Customers and suppliers can use GenAI against you. Law firms have faced this dynamic since the 1990s: in-house counsel headcount tripled between 1997 and 2020 as work moved in-house, and nearly 90% of large law firms now offer flat-fee or otherwise favorable pricing.
Building competitive advantage (4 moves)
- Mandate broad access to the technology. Everyone should evaluate which of their tasks GenAI can handle; encourage experimentation; remove IT and compliance bottlenecks. Guard against the most critical risks (PII leakage, regulated data) rather than trying to minimize all risk. JPMorgan example (2023): temporarily blocked staff from ChatGPT during third-party reviews, a sensible precaution that nonetheless kept 60,000 potential users from experimenting.
- Reimagine all assets as data.
  - Ascertain where data resides and centralize it. Harrah’s Entertainment example (2000s): funneled every slot pull, hotel check-in, and dinner receipt into a single data warehouse, growing revenue faster than competitors, who couldn’t copy the data infrastructure or culture.
  - Identify data you aren’t yet collecting. “The data you don’t collect today is a seed you never plant.”
- Redesign the organization. It won’t be enough to layer gen AI onto existing workflows; eventually you need a gen-AI-first vision. Capital One example (1990s): rewired itself around data, with marketing, risk, and IT teams running thousands of microexperiments per year, creating a feedback loop between data and continuous learning. The famous “balance transfer” teaser-rate experiment drove explosive credit-card account growth by tracking user behavior longitudinally; Capital One later detected that applicants were becoming higher risk and managed a phase-out before the results turned “catastrophic,” as they did for competitors.
- Get the most out of your people. Manage the time GenAI frees up: it can otherwise evaporate into idle tinkering, busywork, and downtime. Managers should track hours saved, set expectations for how that time is redeployed, and recognize and incentivize effective use of it.
Why don’t gen AI gains show up in P&L? (six leakage points)
Each leakage point and who is responsible for fixing it:
- Task efficiency — fail to identify tasks where gen AI improves efficiency. Everyone, enabled by CTO/CIO.
- Employee adoption — miss opportunities because employees aren’t trained. Everyone, enabled by CTO/CIO.
- Resource redeployment — labor capacity saved isn’t redeployed to higher-value tasks. Every manager, enabled by CEO/COO.
- Organizational throughput — fail to redesign processes to capitalize on gains. Every manager, enabled by CEO/COO.
- Market demand — customers don’t have a need to purchase the greater output. CEO + C-suite.
- Competitive retention — competitors use gen AI similarly, gains dissipated through lower margins. CEO + C-suite.
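One way to see why per-task wins can vanish from the P&L: value must survive all six links, so the capture rates multiply. A toy sketch (the rates and the 10% headline gain are invented for illustration, not figures from the article):

```python
# Invented capture rates at each of the six leakage points (illustrative only).
capture_rates = {
    "task efficiency": 0.80,
    "employee adoption": 0.70,
    "resource redeployment": 0.60,
    "organizational throughput": 0.70,
    "market demand": 0.80,
    "competitive retention": 0.60,
}

potential_gain_pct = 10.0  # hypothetical headline productivity gain, in percent

# Leakage compounds multiplicatively: value lost at any link is gone for good.
realized_pct = potential_gain_pct
for point, rate in capture_rates.items():
    realized_pct *= rate

print(f"Potential: {potential_gain_pct:.1f}% -> realized in P&L: {realized_pct:.2f}%")
```

Even with each link capturing a healthy-looking 60–80% of value, only about 1.1 points of a 10% potential gain survive to the P&L, a pattern consistent with high adoption coexisting with near-invisible bottom-line impact.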
Three sources of strategic differentiation
- Rapid and targeted deployment across tasks (valuable in near-term while competitors fixate on intelligence/hallucinations).
- Proprietary data that enhances gen AI’s performance, or process fixes that prevent its value from being lost to bottlenecks.
- Unique people, processes, and culture — “complementary assets” that make gen AI more valuable inside one organization than inside others.
Notable quotes
“How can my organization use gen AI effectively today, regardless of its limitations? And how can we use it to create a competitive advantage?”
“The suitability of gen AI for a given task depends on two factors: the cost of errors and the type of knowledge the task demands.”
“The fact that your customers, suppliers, and competitors can access the same technology creates the paradox of access: Because everyone can use it, it becomes dramatically harder to capture value with it.”
“Strategic differentiation will come from three sources: (1) rapid and targeted deployment of gen AI across tasks, which is valuable in the near term if your competitors remain fixated on intelligence or paralyzed by concerns like hallucinations; (2) proprietary data that enhances gen AI’s performance or process fixes that prevent its value from being lost to organizational bottlenecks; and (3) unique people, processes, and culture — the ‘complementary assets’ that make gen AI more valuable inside one organization than it is inside others.”
My take
The 2×2 framework is the most directly operational lens we’ve ingested so far. Where MIT CISR tells you “what stage you’re in” and AI Index tells you “what % of your peers are doing it,” Anand-Wu tells you where to point GenAI today, on a per-task basis. It’s the framework most likely to be quoted in real adoption decisions.
The “no regrets zone” framing dovetails with the ai-agents story — that quadrant is exactly where today’s autonomous agents are deployed (bulk customer inquiries, document summarization, résumé screening). MITTRI_Cisco’s chatbot → agent → multi-agent progression maps neatly onto this quadrant’s evolution.
The paradox of access argument is a sharp inversion of the typical “AI is a competitive moat” rhetoric. If correct, the implication for the wiki is structural: strategic differentiation in 2025+ is mostly about complementary assets (data, people, process, culture), not about which model you use. That’s an under-cited claim worth tracking.
The leakage-point exhibit (“Why Don’t Gen AI Gains Show Up in My P&L?”) is the most actionable diagnostic in this entire batch of sources. It directly addresses the AI Index 2025’s puzzle of “78% adoption + 1% maturity + revenue gains <5%”: the gains aren’t showing up because organizations aren’t capturing them at every link in the value chain.
The historical analogies (Internet 1.0 e-ticketing, Big Law 1990s, Harrah’s, Capital One) are well-chosen and add depth. The CAD/ERP analogy is particularly apt: today’s competitive AI advantage may be tomorrow’s table stakes.
Linked entities and concepts
Entities (this wiki): Bharat N. Anand, Andy Wu, Harvard Business Review. Dangling: Harvey (legal AI tool), GitHub Copilot, NYU Stern, Harvard Business School, Mack Institute for Innovation Management (Wharton), OpenAI (ChatGPT), JPMorgan Chase, Harrah’s Entertainment, Capital One.
Concepts: generative-ai (heavy enrichment), enterprise-ai-adoption (heavy enrichment — the 2×2 framework + leakage points), ai-agents (the “no regrets zone” projection), responsible-ai (the “guard against most-critical risks” framing).
Threads: organizational-frameworks-for-ai-adoption (Anand-Wu’s 2×2 enters the framework comparison).
Source
- Raw PDF (11 pp): article file
- HBR Reprint: R2506K
- Authors:
- Bharat N. Anand is the Richard R. West Dean and a professor of business administration at New York University’s Stern School of Business
- Andy Wu is the Arjun and Minoo Melwani Family Associate Professor of Business Administration in the Strategy Unit at Harvard Business School and a senior fellow at the Mack Institute for Innovation Management at the Wharton School
- HBR cross-references in the article: “How Generative AI Can Augment Human Creativity” (HBR Jul–Aug 2023); “How Is Your Team Spending the Time Saved by Gen AI?” (HBR Mar–Apr 2025).