Your organisation has two AI strategies. It only knows about one

Somewhere in your organisation right now, there is a meeting happening where someone is presenting the AI roadmap.

Approved tools. Governance frameworks. Rollout timelines. A carefully constructed plan for how artificial intelligence will be integrated into how the organisation works.

And somewhere else in the same organisation, people are already three steps ahead.

Using tools that did not come up in that meeting. Solving problems the roadmap has not yet reached. Getting things done in ways no dashboard will ever capture.

These are not two phases of the same strategy. One is designed. The other is already shaping how decisions are made. And most organisations are only managing one of them.

The shadow IT cannot see

The term “shadow AI” has become part of the governance conversation. It sounds contained. Almost manageable.

The data suggests something else.

According to a Cybernews survey of more than 1,000 employees, 59% use AI tools that have not been formally approved by their companies. Among executives and senior managers, that figure rises to 93%.

Read that again.

The people responsible for defining AI strategy are also the most likely to operate outside it.

A separate report from UpGuard found that more than 80% of workers use unapproved AI tools in their jobs, with fewer than 20% relying exclusively on company-approved systems.

This is not edge-case behaviour. This is how work is already happening.

And yet most organisations are still treating it as an exception. A compliance issue. Something to be controlled rather than understood.

Two systems solving different problems

What makes this gap particularly interesting is not the existence of shadow AI.

It is what each side is trying to solve.

IT departments deploying AI are, by necessity, solving for security, compliance, integration and scalability. The tools they select are evaluated against governance frameworks, data protection requirements and system compatibility.

Those are legitimate problems. But they are not the same problems employees are trying to solve.

People are solving for the work.

For the question they need to answer before a meeting. For the draft they need to finish before the end of the day. For the decision that cannot wait for the next rollout phase.

They are not choosing tools based on governance. They are choosing tools based on usefulness.

Research from August 2025 found that 57% of managers are aware their direct reports use unapproved AI tools and actively support it.

This is not a case of employees going rogue.

It is a case of the organisation quietly running two systems at once: one official, one functional, and middle management is holding both together.

What the organisation is no longer seeing

This is where the conversation shifts. The issue is often framed in terms of tool usage.

Which platform is being used. Which one is approved. Which one is secure.

That is the visible layer.

The deeper shift is happening somewhere else.

Employees are not just choosing different tools. They are choosing how to think through their work using systems the organisation does not understand.

They are testing ideas, validating assumptions, shaping arguments and filtering information through models that sit completely outside the organisation’s field of view. And that has a consequence.

The organisation is no longer able to see how the criteria behind its decisions are being constructed.

Not just what decisions are made. But how people arrive at them. What they trust. What they question. What they consider “good enough”.

When interpretation moves outside the organisation, visibility does not just decrease. It disappears.

Why control is not the answer

The natural response to this is restriction.

More policies. More controls. Clearer rules about what is and is not allowed.

Some of that is necessary. The risks are real. Data breaches linked to shadow AI are both more common and more costly.

But restriction alone misses the signal.

When more than half of an organisation consistently chooses tools outside the official stack, the problem is not only security. It is relevance.

The tools being provided are not meeting the needs of the people expected to use them.

And in many cases, the organisation does not know enough about those needs to close the gap.

Deloitte’s 2026 State of AI in the Enterprise report found that access to AI increased significantly in the past year, yet only 34% of organisations are fundamentally rethinking how work gets done.

Access is expanding. Understanding is not.

The question that does not belong to IT

At the centre of this gap is a question that cannot be answered through governance alone.

Why are people going elsewhere?

Not in the sense of risk management. In the sense of meaning.

What does this behaviour reveal about how people experience their work?

What does it say about what they need to do it well?

What does it expose about the gap between how the organisation thinks work happens and how it actually does?

This is not a technical question. It is a cultural one and, increasingly, a communication one. Because the gap between official AI and actual AI use is, at its core, a gap in understanding.

People do not share a common language for deciding when and how to use these tools. Organisations do not have enough visibility into how work is actually being done. And very few are creating the conditions for those two things to meet.

Two strategies, one organisation

Most organisations are investing significant time and energy in managing the AI they deployed.

Almost none are paying attention to the AI that is already shaping how their people think, decide and act.

That second strategy does not appear in roadmaps. It does not follow governance frameworks. It does not wait for approval, but it is already influencing how work gets done.

And until organisations are willing to understand it, not just control it, they will continue to manage one version of reality while operating in another.

Because the strategy you do not see is the one already defining how your organisation thinks.
