Your AI Strategy Is Incoherent — Here’s How to Tell

10 Apr 2026 · 7 min

The CEO says the company is AI-first.

Engineering still cannot get a tool approved without a three-month review.

Recruiting tells candidates the organisation is serious about AI, then bans its use in interviews. Security blocks the experiments that would teach the company something useful. Legal reviews a small licence request as if it were a merger. Product asks for faster delivery but nobody has decided what acceptable AI-assisted work actually looks like.

This is not a strategy.

It is an incoherent set of signals.

The cost of incoherence

Incoherence does not look dramatic at first. It looks like drift.

A few licences here. A committee there. Some unofficial experimentation. A lot of executive language about urgency. Not much actual operating clarity.

Then the costs begin to stack up.

Your best engineers lose faith that leadership means what it says. Middle managers stop taking the announcements seriously because none of the downstream conditions change. Security and legal become the visible villains for decisions leadership never actually resolved. Tooling adoption becomes political. Valuable experiments move underground. Talent starts leaving for places where the rules are at least consistent.

That is a strategy problem even when it first appears as a tooling problem.

Three patterns that show the strategy is not real

The first is the delegation trap.

Leadership announces the ambition, forms a committee, and leaves actual ownership blurred. Nobody has the mandate to design the operating model, run the experiments, define the guardrails, and carry the outcome. So the organisation confuses governance with delay and activity with progress.

The second is the amplifier blind spot.

Leaders talk as if AI will fix weak engineering conditions. It will not. It will amplify them. If the underlying system already has poor review capacity, weak requirements, and clumsy handoffs, faster code generation will make those weaknesses more expensive.

The third is lane confusion.

The organisation has not decided what kind of AI adopter it wants to be. Is it cautious and compliance-heavy? Is it fast and experimental? Is it selective by domain? All three can be defensible. Trying to be all three at once produces contradiction in every policy and every operating decision.

Warning

You do not need an aggressive AI strategy. You do need a coherent one.

What coherence actually looks like

It looks less like slogans and more like alignment.

The leadership team agrees what problem AI is meant to solve. Engineering knows what kinds of work can be experimented with safely. Security and legal know where the red lines really are and how to review requests in proportion to the risk. Hiring, policy, procurement, and delivery do not contradict one another every time someone tries to use the tools in earnest.

Most of all, somebody owns the operating reality rather than the announcement.

The decision most organisations avoid

They avoid choosing their lane.

That sounds abstract, but it is practical. If you are going to be conservative, be conservative on purpose. Say where experimentation is allowed, what evidence is required, and what remains out of scope for now.

If you are going to move quickly, accept the cost of making that possible: faster approvals, clearer ownership, and enough trust to let teams learn in bounded ways.

If you want a selective path, define the selection logic. Which domains move first? Which controls vary by risk? Which teams are expected to build the early capability?

Almost any of those choices is better than public urgency paired with private obstruction.

The leadership question

The useful test is simple.

If an engineer asks, "What are we actually allowed to do with AI here, who owns the answer, and how will we know whether it worked?" can the organisation respond clearly?

If not, the strategy is probably still theatre.

An AI strategy becomes real when policy, incentives, tooling, and accountability all start pointing in the same direction.

That does not require perfect certainty. It does require an honest operating choice.

Without that, the organisation will keep saying AI-first while behaving AI-confused.