Progress in AI isn’t just about technology – it’s about people. When people are left out of the AI equation, progress tends to slow, and worse, the desired results may never be achieved at all. Time and again, the pattern is clear: the biggest AI failures aren’t technical issues, but human ones.
AI solutions are often optimized to perform faster, cheaper, and smarter. Yet the people surrounding those systems — customers, employees, and partners — are frequently treated as variables to be managed rather than relationships to be nurtured.
When organizations fail to account for the experience of all people in the broader system, the result isn’t innovation; it’s alienation. In our experience, the most frequent cause of underperforming AI isn’t poor modeling — it’s poor empathy.
Three lenses of AI impact
AI is never neutral. Every design choice reflects human assumptions. Asking “By whom?”, “For whom?”, and “To whom?” surfaces those assumptions and helps reduce unintended risks.
By whom is the AI designed?
When solutions are designed at a distance from the people who use them daily, blind spots emerge. A healthcare payer that automated its claim review process achieved faster processing times, but denied legitimate cases because developers hadn’t included the nuanced review steps nurses once performed. The model was accurate, and yet the outcome had real consequences for the financial and physical health of insured customers.
For whom is it built?
Efficiency often serves the organization more than the people it touches. A European payments company replaced hundreds of customer service agents with a generative AI chatbot in 2024. Within weeks, customers complained that responses were “generic, repetitive, and unhelpful.” The company ultimately reintroduced human agents to repair service quality and trust — a vivid example of optimization serving the firm at the expense of its customers (Economic Times, 2025). Warren Buffett once said, “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you’ll do things differently.”
To whom does it happen?
AI’s impacts rarely stop at a single target group; they ripple outward to other stakeholders. Energy utilities adopting predictive algorithms to optimize field maintenance have improved efficiency – but contractors and local crews report feeling displaced and undervalued as human judgment is sidelined. What looks like progress for one group can feel like dispossession to another.
Disaffected groups, amplified by social media, can vent their frustrations and fears, creating social blowback and reputational damage that undermine the economic value the AI solution was meant to deliver. There is real benefit to identifying, capturing, and mitigating potential downstream impacts early in the design process.
When organizations ask these questions, they set themselves up for success and optimize for an outcome that serves the broader system. Ignoring the full human ecosystem during the design process can lead to a solution that works on paper but struggles or fails once put into practice.
How you leverage AI signals how you value people
Applied technology is never just a tool – it’s a signal, and a mirror of how leaders value their people. Every AI deployment sends a message: are employees seen as indispensable partners, or as replaceable cogs in a machine?
When leaders emphasize “efficiency through automation,” employees often hear “replaceability through automation.” One U.S. workforce study found that nearly one in four workers expect automation to eliminate parts of their job within five years, even as most say they’ve received little guidance on how AI will actually support them (SHRM, 2024). This leads to a workforce caught in an endless loop of “why bother, I’m about to be replaced anyway.”
In one logistics company, dispatchers were told a new AI scheduling platform would “reduce manual effort.” Within weeks, drivers stopped sharing local insights — assuming “the system knows best.” The company didn’t lose headcount. It lost the tacit human knowledge that kept deliveries efficient.
Conversely, companies that frame AI as augmentation — not automation — see the opposite effect. IBM, for instance, reports that 94% of common HR queries are now resolved via AI chat, while humans remain responsible for all final decisions. Employees report faster service and less frustration, and trust in leadership remains high (IBM, 2024).
While AI can absolutely automate tasks, it also amplifies the organizational culture around it. Leaders who design and deploy it without empathy risk signaling that people are expendable.
Creating psychological safety in the age of automation
Introducing AI into a human system without trust is like upgrading an aircraft mid-flight — the mechanics may work, but passengers will panic.
“Trust is earned in drops and lost in buckets,” as Kevin Plank said, and trust erodes when AI decisions seem opaque or unchallengeable. In 2019, a major U.S. credit card issuer suspended its AI-driven limit-setting algorithm after women received substantially lower limits than men with identical credit profiles — a case that spurred regulatory scrutiny and public backlash (HBS, 2019).
In contrast, trust deepens when organizations communicate transparently. Healthcare providers using “human-in-the-loop” AI to support radiologists have seen diagnostic accuracy improve by up to 30% — but only when doctors retain final decision authority (Journal of Medical Internet Research, 2023).
Like empathy, transparency and explainability don’t slow adoption; they enable it. When people understand how AI supports their judgment rather than replacing it, trust becomes a force multiplier for performance.
Designing with empathy and impact
Empathy isn’t softness. It’s system intelligence. Here are four practical ways to make empathy actionable in AI design and deployment.
1. Persona mapping across the rollout footprint
Identify every persona touched by the AI solution — customers, employees, partners, regulators. Understand their motivations, anxieties, and success metrics. Prioritize the full range of desired outcomes, which may be more complex than merely ‘faster’ or ‘automated’. A financial institution found through this process that its fraud-prevention chatbot’s tone implied accusation rather than reassurance; rewriting its guidance in plain, supportive language reduced complaint rates by 40%. Our position isn’t to avoid using AI – it’s to deploy it with humans in mind.
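For teams that want to turn this mapping into a working artifact, a simple structured record per persona can be enough. The sketch below, in Python, is a minimal illustration rather than a prescribed tool: the personas, fields, and example entries are hypothetical assumptions about a generic rollout, not drawn from any case described here.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One group touched by the AI rollout and what success means to them."""
    name: str
    motivations: list[str] = field(default_factory=list)
    anxieties: list[str] = field(default_factory=list)
    success_metrics: list[str] = field(default_factory=list)

# Hypothetical personas for a customer-facing AI rollout (illustrative only).
personas = [
    Persona(
        name="Retail customer",
        motivations=["resolve flagged transactions quickly"],
        anxieties=["being treated as a suspect", "opaque decisions"],
        success_metrics=["time to resolution", "complaint rate"],
    ),
    Persona(
        name="Contact-center agent",
        motivations=["fewer repetitive queries"],
        anxieties=["being replaced", "losing context on escalations"],
        success_metrics=["escalation quality", "job satisfaction"],
    ),
]

# Print a one-line review summary per persona for the design workshop.
for p in personas:
    print(f"{p.name}: wants {p.motivations}, worries about {p.anxieties}, "
          f"measures success by {p.success_metrics}")
```

The value is less in the code than in the discipline: each group’s motivations, anxieties, and success metrics are written down where the whole design team can see and challenge them.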
2. Impact equity matrix
Map stakeholder groups (e.g., employees, customers, management, investors) on a grid of “benefit vs. burden.” Visualizing trade-offs reveals imbalances early — like when efficiency for one group translates to frustration for another. Time invested here pays off later in deeper understanding, stronger adoption, reduced frustration, and less public backlash.
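To make the grid concrete, some teams tabulate a rough benefit and burden score per group and flag obvious imbalances for discussion. The sketch below is a hypothetical illustration: the groups, the 1-to-5 scores, and the review threshold are invented for the example and would in practice come from stakeholder interviews, not guesswork.

```python
# Minimal impact equity matrix: score each group on expected benefit and burden
# (1 = low, 5 = high), then flag groups whose burden clearly outweighs benefit.
impact_matrix = {
    "Customers":  {"benefit": 4, "burden": 2},
    "Employees":  {"benefit": 2, "burden": 4},
    "Management": {"benefit": 5, "burden": 1},
    "Investors":  {"benefit": 4, "burden": 1},
}

# Burden exceeding benefit by this margin triggers a redesign conversation.
IMBALANCE_THRESHOLD = 2

for group, scores in impact_matrix.items():
    gap = scores["burden"] - scores["benefit"]
    status = "REVIEW" if gap >= IMBALANCE_THRESHOLD else "ok"
    print(f"{group:<12} benefit={scores['benefit']} burden={scores['burden']} -> {status}")
```

Treated as a conversation starter rather than a verdict, even a crude tabulation like this makes the “efficiency for one group, frustration for another” pattern visible before launch.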
3. Storytelling as signal management
Behavior shifts through stories, not summaries. When employees hear concrete stories of peers adopting AI successfully, it signals that the change is real and achievable — a “peer proof” effect that transforms apprehension into engagement. The same storytelling power strengthens customer confidence.
Importantly, stories of mistakes and course corrections are just as valuable. When leaders share what went wrong, why decisions were made, and how issues were resolved, they provide context people rarely get but deeply want. These stories give employees a coherent narrative: where we came from, why we’re changing, and where we’re going next.
4. Stakeholder Impact Assessment and value-sensitive design
Borrow from frameworks such as the Stakeholder Impact Assessment developed by the Alan Turing Institute (2023), which systematically documents intended and unintended effects of AI on individuals and communities. Embedding this process into project governance surfaces risks before they become headlines.
Empathy isn’t a delay to deployment – it’s an amplifier of success.
Progress that includes people
AI’s promise isn’t to replace humanity, but to extend it. The future belongs to organizations that ask not only what can AI do, but what should it do – and for whom?
AI designed for you empowers. AI designed to you coerces. The distinction is empathy: understanding that even the most rational systems must succeed within the beautifully irrational world of human experience.
When leaders honor that truth, they go beyond creating smarter technologies. They build stronger, more human organizations capable of greater, more impactful outcomes.