Most organisations have an AI policy. Fewer have AI governance that actually works. New research with over 2,000 UK tech workers reveals the gap between documented strategy and operational reality. 

The strategy-reality gap

We asked UK tech workers how well their company’s stated AI strategy reflects the reality of how AI is actually used. The answers varied sharply by seniority. 

“Our AI strategy matches reality very well”: 

  • C-suite: 56% 
  • Directors: 40% 
  • Senior management: 35% 
  • Middle management: 30% 
  • Intermediate staff: 22% 
  • Entry-level staff: 16% 

Leadership believes the strategy is working. The rest of the organisation disagrees. 

This is not a communication problem. It is a governance problem. If large parts of the workforce cannot see where AI is heading, the strategy is not reaching them. 

Governance that does not govern

Having a policy is not the same as having governance. 

True AI governance means clear rules, consistent accountability, and consequences that apply to everyone. Our research shows that most organisations are falling short. 

52% of UK tech workers say AI decisions at their company are being made without the right expertise. 

This is not just a frontline concern: 65% of C-suite executives acknowledge that AI decisions at senior level are made without the right expertise. 

The problem is structural. Leaders are making high-stakes AI decisions in areas where they do not yet have the depth they need. And the safeguards that should catch this are not working. 

The people setting the rules are breaking them

Perhaps the most striking finding in our research is that senior leaders are the most likely to operate outside governance frameworks. 

C-suite AI behaviours: 

  • 73% have uploaded confidential company data into AI tools 
  • 78% have used AI for work they are not trained to do 
  • 93% have made decisions based on inaccurate AI outputs 

Compare this to intermediate-level staff: 

  • 35% have uploaded confidential data 
  • 49% have used AI for untrained work 

The people with the greatest autonomy over AI are also the ones most exposed to its risks. When governance does not apply to senior leaders, it signals that the rules are optional for everyone. 

What effective AI governance looks like

Governance that works is not a document. It is a system embedded in how decisions get made. 

  1. Visible ownership

Someone needs to be accountable for AI governance, with the authority to enforce it. 80% of C-suite say they need a dedicated AI specialist at board level. Most have not appointed one. 

  2. Rules that apply universally

Governance frameworks lose credibility the moment exceptions are made for seniority. If C-suite can bypass safeguards, so can everyone else. 

  3. Verification built into workflows

Only 37% of tech workers always check AI outputs before using them. Governance should make verification a process requirement, not a personal choice. 

  4. Strategy that reaches the frontline

If entry-level staff cannot describe your AI strategy, it is not a strategy. It is a boardroom conversation. 

  5. Regular assessment

AI capability and risk evolve quickly. Governance needs to be reviewed and stress-tested, not written once and forgotten. 

Take the next step

If you want to close the gap between AI policy and practice in your organisation, we offer a free 30-minute consultation to discuss your AI strategy and data foundations. 
Book your free consultation

Read the full research 

This article draws on findings from AI in the Workforce: The Hidden Risk for UK Businesses, independent research with over 2,000 UK tech workers. 
Download the whitepaper