Who's Flying the Plane?

Picture a flight deck where no one has decided who is actually in command. You would never accept that design in aviation. So why are we accepting it in enterprise systems?

We talk about AI as a co-pilot, but in practice it is often deployed without clearly defining who is in command, how decisions are governed, or where accountability sits. In any environment where reliability and trust matter, that would not pass review. The same standard should apply here.

The real issue with enterprise AI

Most AI programs in contact centers do not struggle because the technology is weak. They struggle because AI is introduced into operating environments that were never designed to support it.

Tools are deployed quickly. Expectations are high. Results are mixed. Adoption fades. Leaders move on to the next initiative.

This pattern is not driven by a lack of innovation. It is driven by a lack of readiness. AI reflects the maturity of the system it enters. It does not correct it.

Research from Forrester shows that only 32 percent of CX organizations have the data literacy required to use AI responsibly. That gap alone goes a long way toward explaining why so many deployments stall after launch.

What "co-pilot" actually implies

A co-pilot supports the person flying the plane. They assist with monitoring, provide recommendations, and help manage workload. They do not own the destination or the outcome.

When AI is positioned as a co-pilot, three conditions must already exist:

  • Someone is clearly responsible for the outcome
  • The information being used can be trusted
  • The human using the system knows when to rely on it and when not to

When any of those conditions is missing, the problem is not AI. It is system design.

The three conditions required for AI to work

1. Clear command authority and operating model

Before introducing AI, organizations need to answer basic questions: who owns decisions when AI makes a recommendation, when an agent is expected to follow it, when they are expected to override it, and what happens when the system is wrong.

In many contact centers, these questions are not clearly answered. Agents are accountable for outcomes, but authority is vague. That creates hesitation. Under pressure, people default to what feels safest — which often means bypassing the tool.

AI works when the operating model is explicit. Decision rights, escalation paths, and accountability need to be defined before scale, not after issues surface.

2. Reliable data and usable context

AI depends entirely on the quality of the information it receives. If customer data is fragmented, if knowledge is outdated, or if journey context does not persist across channels, AI output will be incomplete or misleading. This is not a model issue. It is a foundation issue.

Research from McKinsey consistently shows that weak data foundations are one of the primary reasons AI initiatives fail to scale, regardless of model sophistication. In contact centers, this shows up as repetitive customer interactions, generic responses in high-emotion situations, missed escalation signals, and increased rework and handle time.

3. Human capability and AIQ

AI readiness is not just a systems question. It is also a people question. Even with a clear operating model and reliable data, AI will underperform if the humans using it are not prepared to work alongside it.

AIQ — a person's ability to use AI effectively across real tasks — includes understanding what AI is good at, where it tends to fail, how to frame inputs, and how to validate outputs under pressure. Research cited by MIT shows this capability is distinct from general intelligence or technical skill and can be developed over time.

You can have a capable co-pilot and still struggle if the pilot has not been trained to work with it.

Why many deployments stall

A common pattern is to deploy AI first and address governance later. Tools are layered onto existing workflows. Metrics focus on usage instead of outcomes. Responsibility is implied rather than defined.

Research from the Upwork Research Institute shows that many employees experience AI as additional work rather than relief. That reaction is rational when systems are bolted on instead of integrated. When trust erodes, adoption does not fade gradually. It stops quickly.

What readiness actually looks like

Organizations that see consistent results from AI tend to do a few things well:

  • They define decision ownership before deploying tools
  • They invest in data quality and context continuity
  • They design AI into workflows rather than around them
  • They keep humans accountable for outcomes
  • They measure success by customer and operational results, not tool usage

AI is not the strategy. It is an enabler. Its impact depends entirely on the environment it is placed in.

If this were aviation, the plane would be grounded. In enterprise AI, we often label the same design as progress. The most important question is not whether AI can do the work. It is who is responsible when it does.

Sources

Forrester — Budget Planning Guide 2026: Customer Experience. Findings on CX data literacy and organizational readiness for responsible AI use.

MIT — Research on Human-AI Collaboration and AI Quotient. Studies defining AIQ as a distinct, learnable capability.

McKinsey — Research on enterprise AI adoption and scaling challenges. Analysis linking weak data foundations to low AI ROI.

Upwork Research Institute — Workforce studies on employee experience with AI tools.