
AI Won’t Fix Your Leadership Problem
Organizations spent more on AI last year than any year in history. The returns? Underwhelming.
A Gartner survey of over 4,200 business and technology leaders found that only 48% of digital initiatives met or exceeded their business outcome targets. BCG’s 2025 global survey was worse: 60% of respondents said their AI investments delivered little material value in either revenue or cost reduction.
The usual explanation is that AI is still maturing, that organizations need better data, more sophisticated models, stronger technical talent. That explanation is comfortable. It’s also incomplete.
The pattern behind these numbers isn’t a technology problem. It’s a leadership accountability problem. Executives are delegating AI strategy to their technical teams and then wondering why the investment didn’t produce business results. I’ve seen this cycle before, across 20 years of building software products and leading technology teams. New technology arrives, executives assume it’s “an IT thing,” and the gap between spending and outcomes widens.
Six recent pieces of research paint a clear picture of where that gap lives and what executive coaching can do about it.
Key Takeaways
- Only 48% of digital initiatives meet business targets. The gap isn’t technical capability but leadership accountability for AI outcomes.
- Most organizations automate by default when augmentation would produce better results. Coaching helps executives make that distinction deliberately.
- AI literacy is now a leadership competency, not a technical skill. Executives don’t need to code, but they need fluency to ask the right questions.
- Executive coaching surfaces the leadership avoidance patterns that consultants and vendors won’t challenge.
The ROI Problem Nobody Wants to Own
Leadership transformation expert Dr. Vivian Atud puts it directly: “AI does not create ROI. Leadership does.” Her 90-day accountability framework challenges executives to prove measurable business value from AI investments within three months or reassess the entire initiative. Not in a year. Not after the next sprint cycle. Ninety days.
Still Tracking Pilots Instead of Outcomes?
In a free consult, map adoption metrics to revenue, margin, or retention, and set a 90-day accountability rhythm that doesn't slip.
MIT Sloan Management Review backs this up with a broader pattern. Despite decades of investment in technology, many organizations still aren’t seeing meaningful returns. The problem isn’t that AI can’t deliver value. The problem is what gets measured.
Most executive teams track AI adoption metrics: number of deployments, licenses purchased, pilots launched, teams trained. These numbers go up reliably. They also tell you nothing about business impact. The distinction matters. Adoption metrics measure activity. Business outcome metrics measure results. When your dashboard shows 47 AI pilots across six divisions, that’s motion. When it shows which of those pilots changed revenue, margin, or customer retention, that’s progress.

This is the same pattern coaches encounter in leadership development all the time. Leaders confuse being busy with being effective. They mistake attendance at training programs for skill development. They track hours spent on strategy without checking whether the strategy changed anything.
An executive coach’s role in this space is that of accountability partner. Not the friendly kind who nods and says “you’ll get there.” The kind who asks the question nobody inside the organization will ask: So what? You deployed AI across your supply chain. So what changed? You trained 200 people on the new platform. So what did they do differently?
Why AI Isn’t Improving Productivity (Yet)
Nobel laureate Daron Acemoglu makes a case that should concern every executive investing in AI: current AI investments aren’t showing up in productivity numbers. Not because AI is incapable of improving productivity, but because most organizations are applying it in the wrong direction.
Acemoglu draws a distinction between automation and augmentation. Automation replaces human tasks with machine execution. Augmentation creates new tasks that complement what humans do well. Both have value, but they produce very different organizational outcomes. Most AI budgets flow toward automation by default because it’s easier to measure. You can count the tasks eliminated, the hours saved, the headcount reduced. Augmentation is harder to quantify because it creates new capability rather than eliminating old cost.
The incentive structures push AI toward centralization and automation without anyone making a deliberate choice about it. Technology vendors sell automation. Procurement evaluates cost reduction. The board wants efficiency metrics. Every signal in the system points toward replacing human work, even when augmenting it would produce more value.
This is where coaching changes the conversation. There’s a concept I use often: the gap between stimulus and response. A trigger arrives, an emotion follows, and then there’s a space before the response. Most executives have compressed that space to nearly zero. The AI pitch lands, the instinct fires (“automate everything”), and the purchase order goes out.
A coached executive widens that gap. They pause long enough to ask: what should we automate, and what should we augment? Where does removing humans from the process actually improve outcomes, and where does it just reduce headcount? That’s not a technology decision. That’s executive-level discernment, and it requires the kind of honest self-examination that doesn’t happen in vendor demos or board presentations.
The Tools Are Ahead of the Leaders
MIT Sloan recently highlighted something most executives are missing: agentic AI coding tools aren’t just for developers anymore. Tools like Claude Code handle data analysis, document processing, and research synthesis. These are tasks that land on every executive team’s desk, and the technology to handle them already exists.
The gap is awareness and identity. In conversations with executives, the pattern repeats: they know these tools exist but assume the tools are irrelevant to their work. “That’s for the engineering team.” This assumption was defensible two years ago. It’s not anymore.
AI literacy is now a leadership competency. Not coding. Not machine learning theory. Fluency. The ability to understand what AI can and cannot do, to evaluate proposals from vendors and internal teams, to ask questions that separate genuine capability from marketing.
A non-technical executive doesn’t need to build models. They need to know when someone is selling them a model they don’t need. They need to recognize when their team is using AI as a hammer looking for nails versus solving an actual business problem. This is the same skill executives need with any specialized domain they oversee but don’t operate in directly.
An executive coach with a technology background bridges this gap practically. Not through training sessions or certification programs, but through one-on-one work that builds the executive’s own judgment about what questions to ask, what proposals to challenge, and what results to demand.
When Competitors Become Collaborators
CLO Magazine recently analyzed a pattern that breaks conventional strategic thinking: fierce competitors choosing to collaborate at the core of their AI architecture. Apple and Google, rivals for over a decade across operating systems, devices, and data strategy, are now sharing AI infrastructure. Not at the edges. At the center.
The shift is structural. Competitive advantage used to mean owning proprietary capability. In the AI era, it increasingly means participating effectively in ecosystems. The firms winning aren’t the ones who built everything themselves. They’re the ones who figured out what to own and what to share.
For executives who built their careers on control, this is identity-level disruption. Their leadership model was shaped by acquisition, ownership, and proprietary advantage. Ecosystem participation feels like weakness. Sharing core technology with a competitor feels like surrender.
An executive coach helps leaders update these mental models without appearing weak to their teams.
This is where executive coaching does work that consulting can’t. A consultant can present the ecosystem strategy. An executive coach works with the leader’s resistance to it. The resistance isn’t intellectual. It’s emotional and identity-driven. Updating a mental model that made you successful for 20 years isn’t a slide deck problem.
Balancing Speed with Governance
Liberty Mutual global CIO Monica Caldas describes a tension every technology executive recognizes: deploying AI fast enough to compete while managing risk carefully enough to survive. The board wants AI results. The compliance team wants review cycles. Both are right, and both pressures are real.
Most executives handle this by oscillating. Six months of aggressive deployment followed by a governance crackdown after something goes wrong. Then back to aggressive deployment because the competitors gained ground during the pause. The oscillation itself is the problem. Each swing creates organizational whiplash that erodes trust in leadership.
React vs. respond. That distinction applies here directly. Reacting means swinging from “full speed ahead” to “shut everything down” based on the latest board meeting or the latest incident. Responding means holding both tensions long enough to make a deliberate choice about where speed matters and where governance matters.
This sounds simple. It isn’t. Sitting with ambiguity while the board presses for quarterly AI wins requires a kind of executive composure that most leaders haven’t practiced. An executive coach helps develop that composure through repeated practice in a safe environment. The coaching session becomes the space where the executive can explore both tensions honestly, without performing confidence for stakeholders or caution for regulators.
What an Executive Coach Actually Does in the AI Conversation
The common thread across all six signals: AI challenges are leadership challenges wearing technology clothing. The executive who can’t articulate AI ROI has an accountability problem, not a measurement problem. The executive pushing automation everywhere has a discernment problem, not a technology problem. The executive who can’t share infrastructure with competitors has an identity problem, not a strategy problem.
Executive coaching in the AI space fills four distinct roles:
- Accountability partner for ROI discipline. Forcing the “so what?” question on every AI investment. Not accepting deployment counts as evidence of value.
- Thinking partner for automation-vs-augmentation decisions. Widening the gap between stimulus and response so executives choose deliberately instead of defaulting to whatever vendors propose.
- Pattern disruptor for outdated mental models. Working with the identity-level resistance that blocks leaders from adapting to ecosystem competition, shared infrastructure, and collaborative advantage.
- Governance navigator for speed-vs-safety tensions. Building the executive composure to hold competing pressures without oscillating between extremes.
Consider a CTO who’s deployed AI across three divisions. The dashboards show adoption numbers climbing. The board presentations look strong. But when pressed on actual business outcomes, the numbers aren’t there. Revenue hasn’t moved. Customer satisfaction is flat. The efficiency gains exist on paper but haven’t hit the P&L.
A consultant would audit the deployments and recommend optimization. An executive coach would surface something different: the leader’s own avoidance of accountability conversations with their division heads. The dashboards are designed to look good because the CTO hasn’t created the conditions where honest reporting is safe. That’s not a technology fix. That’s a leadership conversation that starts with the CTO’s own relationship with uncomfortable data.
How does executive coaching help with AI strategy?
Executive coaching addresses the leadership gaps behind AI strategy failures. Where consultants focus on technology selection and implementation plans, coaches work with the executive’s accountability patterns, decision-making defaults, and mental models about competition and control. Coaching helps leaders demand measurable business outcomes from AI investments instead of accepting adoption metrics as proof of value.
What’s the difference between an AI consultant and an executive coach for AI challenges?
An AI consultant evaluates technology, recommends solutions, and builds implementation roadmaps. An executive coach works with the leader, not the technology. Coaching surfaces the avoidance patterns, identity attachments, and decision-making defaults that block leaders from using AI effectively. Consultants solve the technical problem. Coaches address why the leader keeps creating or avoiding the problem in the first place.
How long does AI-focused executive coaching take to show results?
Most executives see measurable shifts in their AI leadership within 3 to 6 months of regular coaching sessions. Early wins typically include clearer ROI expectations for AI initiatives, more deliberate automation-vs-augmentation decisions, and improved ability to hold competing pressures (speed vs. governance) without oscillating between extremes. The deeper identity-level shifts around control, collaboration, and ecosystem thinking take longer but compound over time.
Turn AI Spend Into Measurable Business Outcomes
Bring one live initiative. We’ll clarify ROI metrics, automation vs. augmentation choices, and the leadership conversations you’ve been avoiding.
Book a Free Consultation →



