UK businesses are using AI at scale. But the training needed to support that scale is not reaching the people who need it most. New independent research with over 2,000 UK tech workers exposes a training divide that is growing wider as AI adoption accelerates. 

The result is not just a skills gap. It is an accountability gap. When people use powerful tools without adequate guidance, the risk does not stay contained. It spreads. 

Training is concentrated at the top

The research paints a clear picture of who is being supported and who is not: 

Seniority           Received formal AI training   Received no AI training
C-suite             73%                           3%
Directors           57%                           8%
Senior management   58%                           10%
Middle management   50%                           22%
Intermediate staff  39%                           34%
Entry-level staff   35%                           26%

One in three intermediate staff say they have received no AI training at all. For C-suite executives, that figure is one in thirty.
34% of intermediate staff and 26% of entry-level employees have received no AI training 

The people using AI most are being supported least

AI use is not limited to senior staff: 86% of entry-level employees and 83% of intermediate staff use AI at work regularly. But while adoption is broad at every level, the support structure around that use thins sharply as you move away from the boardroom. 

This creates a structural problem: 

  • Employees who are newest and least experienced are receiving the least guidance 
  • Senior leaders who do receive training are applying AI to higher-stakes decisions 
  • Errors introduced at junior levels can feed into those higher-stakes decisions, yet support is weakest where the errors originate 

Verification is not yet a habit

The training gap has a predictable consequence: AI outputs are not being checked consistently. 

  • Only 37% of tech workers say they always verify AI outputs before acting on them 
  • 21% check only sometimes, or never 
  • 73% of respondents have experienced AI decisions being based on inaccurate data 

When verification is inconsistent and training is patchy, inaccuracies move through organisations and become decisions, documents, and outcomes. 

Only 37% of tech workers always verify AI outputs before using them 

When strategy does not reach the frontline

Training gaps compound quickly when AI strategy is also unclear lower down the organisation: 

  • 56% of C-suite executives believe their AI strategy matches operational reality very well 
  • Only 16% of entry-level staff agree 

If the people using AI every day do not understand the strategy behind its use, they are navigating by instinct. That is not a problem a single mandatory course can solve. It requires organisations to rethink how they build AI capability across the whole workforce, not just at the level where decisions are made. 

Capability building that actually sticks

The organisations that get this right are not the ones that tick a box with a one-hour compliance module. They are the ones that: 

  • Invest in building genuine capability across teams, not just at the top 
  • Embed AI literacy into how work is actually done 
  • Create clear standards that are reinforced, tested, and refreshed as AI evolves 
  • Make training role-relevant and practical, not generic 

Inovus combines consulting expertise with structured capability building to make AI adoption sustainable. We work with organisations to identify skills gaps across the full workforce, design training that is role-relevant and practical, and build internal capability that outlasts the engagement. 

The goal is not to create dependency on external support. It is to make organisations genuinely AI-proficient on their own terms. 

Take the next step

If you want to understand where your organisation’s AI skills gaps sit, and what it would take to close them, we offer a free 30-minute consultation to explore where to start. 
Book your free consultation

Read the full research

This article draws on findings from AI in the Workforce: The Hidden Risk for UK Businesses, independent research with over 2,000 UK tech workers. 
Download the whitepaper