UK Employees Struggle With AI Skills As Adoption Outpaces Training

More than eight out of 10 workers say they don’t fully understand how to use AI in their jobs, raising questions over whether employers are rushing to implement artificial intelligence without giving staff the skills or confidence to use it safely.

Despite widespread investment in AI tools across UK businesses, most employees admit they are not making full use of the technology and lack trust in its reliability. With workers estimating they waste 13 hours a week on tasks AI could handle, the scale of missed productivity is enormous.

The findings, part of new global research by cloud communications firm GoTo that includes UK organisations, suggest employers may be overlooking a fundamental problem: while AI systems are being rapidly introduced, many employees feel unprepared, unsupported and unsure how to apply them in practice.

Even among Gen Z workers, often seen as digital natives, nearly three quarters said they struggled to apply AI to their daily tasks. And while 86% of employees said they doubted AI's accuracy, fewer than half of IT leaders shared those doubts, revealing a growing gap between boardroom optimism and frontline reality.

The Promise of AI Collides With Reality

Artificial intelligence is often framed as a transformative force for productivity and innovation. From automated admin to enhanced customer service, the potential is vast. But as AI adoption accelerates, a more sobering picture is emerging.

According to the report, The Pulse of Work in 2025: Trends, Truths and the Practicality of AI, 78% of employees already use some form of AI, from tools like ChatGPT to systems their companies have built themselves, yet 86% say they aren't using these tools to their full potential.

That gap translates into a considerable loss of value. Employees across sectors and seniority levels estimate they spend around 2.6 hours per day on tasks AI could manage. That's equivalent to 13 hours a week, or more than 600 hours a year, roughly 16 working weeks. Experts say that in the UK, where productivity has consistently lagged behind other G7 nations, such inefficiency is more than a technical oversight; it amounts to a strategic failure.
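As a rough illustration of how those survey figures add up, the short sketch below converts the 2.6-hours-a-day estimate into weekly and annual totals. The working assumptions (a five-day week, a 37.5-hour working week and roughly 47 working weeks a year after leave) are illustrative, not figures from the GoTo report.

```python
# Back-of-the-envelope conversion of the survey's 2.6-hours-a-day estimate.
# Assumptions are illustrative, not taken from the report: a 5-day week,
# a 37.5-hour working week, and ~47 working weeks a year after leave.

hours_per_day = 2.6                 # time employees say AI could handle
days_per_week = 5
working_weeks_per_year = 47
hours_per_working_week = 37.5

weekly_hours = hours_per_day * days_per_week              # 13.0 hours a week
annual_hours = weekly_hours * working_weeks_per_year      # ~611 hours a year
equivalent_weeks = annual_hours / hours_per_working_week  # ~16 working weeks

print(f"{weekly_hours:.1f} hours/week, {annual_hours:.0f} hours/year, "
      f"about {equivalent_weeks:.0f} working weeks a year")
```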

Yet many workers say the tools they’re given are either poorly suited to their roles or rolled out with little explanation. For smaller employers in particular, budget constraints, lack of training infrastructure and limited IT support make the challenge even harder.

The Skills Gap Runs Across Generations

Perhaps surprisingly, the research finds the AI readiness gap isn’t confined to older generations. Younger workers, who are often assumed to be tech-fluent, report almost as much confusion as their senior colleagues.

Nearly three quarters of Gen Z employees said they weren’t confident using AI in their roles, while the figure for Baby Boomers was over 90%. Gen X and Millennials also reported high levels of uncertainty. The problem, it seems, is not generational but educational.

The confusion isn’t helped by the pace of AI development. Tools are evolving rapidly, but training often fails to keep up. And in many cases, it’s not being offered at all.

Trust in AI Remains Fragile

Beyond the question of capability lies a deeper issue: trust. The study found that 86% of employees have little confidence in AI’s accuracy or reliability. Only around half of IT leaders shared that concern, a disconnect that may be fuelling growing unease among frontline staff.

In theory, AI should relieve pressure by taking on time-consuming, low-value tasks. But in practice, many employees feel they must double-check what AI creates, or correct mistakes. Nearly eight in 10 said AI-generated content or decisions often need extra refinement before they can be used.

And that leads to a paradox. AI is introduced to save time, but without confidence in the results, it can end up creating more work.

Risks of Unsanctioned AI Use

Frustrated by the limitations, or simply unaware of the rules, more than half of employees admitted using AI for high-stakes or sensitive decisions, including areas explicitly prohibited by their employer’s policy. These include compliance tasks, legal risk assessments, personnel decisions and matters requiring emotional intelligence, such as conflict resolution or team dynamics.

A total of 77% of those who used AI in this way said they didn’t regret doing so. This raises serious questions for HR and compliance teams. If staff are using generative AI to help draft emails related to disciplinary actions, or to prepare client communications containing sensitive data, the potential for error, breach or ethical misjudgement is high.

And yet many employees say they don’t feel they’ve been given clear boundaries. In organisations with fewer than 50 employees, only 19% have a formal AI policy. Even in larger firms, fewer than half report having clear guidelines in place. Where policies do exist, they are often buried in general IT use protocols and rarely supported by tailored training.

Small Businesses Particularly Exposed

The risks are especially pronounced for SMEs, and in the UK, that’s the majority of employers. Smaller firms face distinct challenges: tighter budgets, fewer dedicated IT or L&D staff and a tendency to adopt AI in a piecemeal, reactive way.

Only 59% of employees at smaller companies said they use AI tools regularly, compared to over 80% at larger firms. Nearly half of SME workers admitted they lacked the knowledge to use AI in ways that could save time or improve performance. And few small-business IT leaders believe their employees are “making the most” of AI, yet that concern has so far done little to prompt urgent investment.

Wellbeing Gains Remain Untapped

The irony is that, if used well, AI could help tackle one of the workplace’s most pressing challenges: wellbeing. More than 60% of employees said AI could improve motivation and engagement, and a similar proportion believed that smarter automation would reduce the stress of repetitive or mentally draining work.

Yet that potential remains largely theoretical. Without adequate training or support, AI tools can contribute to stress rather than relieve it. Employees report feeling uncertain about whether they are using tools correctly, whether they are allowed to use them for certain tasks, and whether they can trust the output, all of which adds cognitive strain.

The Policy Vacuum

With AI now embedded in everything from scheduling tools to customer service chatbots, experts say the lack of clear organisational policy is no longer tenable.

According to the research, 54% of employees are using AI in ways that contravene or exceed company policy. But with only 45% of IT leaders reporting that their organisation has a policy at all, enforcement is difficult. Even where policies exist, few are accompanied by practical training or regular updates.

UK regulators are beginning to take notice. The Information Commissioner’s Office has issued guidance on responsible AI use, and employment law experts warn that poorly governed AI decision-making could leave employers exposed under the Equality Act. The Trades Union Congress has also called for stricter oversight of algorithmic management and workplace surveillance.

A More Responsible AI Strategy

What does a better approach look like? The report offers several clues. First, it recommends organisations remove barriers to adoption through targeted investment. More than three quarters of IT leaders believe that spending £15–£20 per employee per month could save around an hour per day in productivity.

Second, training must improve. Both IT leaders (71%) and employees (81%) agree that clearer instructions and guardrails are essential. But only a minority of firms currently offer structured, role-specific training on AI use.

Third, policies must move beyond broad IT rules and become embedded in real workflows. That means defining where AI is useful, where it is risky, and how employees can make decisions with confidence.

Finally, organisations need to close the perception gap between senior leaders and staff. Many IT decision-makers overestimate how confident employees feel with AI. Without candid dialogue and feedback, these blind spots will persist.
