“AI slop” is a term increasingly used to describe low-quality content generated by artificial intelligence. While this term is often used in the context of internet clutter, its implications within the workplace are far more significant.
When employees over-rely on genAI tools without training and clear guidelines, the result can be not only AI “workslop” (i.e. poor-quality output) but also other serious risks for the employer, including the loss of confidential and/or privileged business information. This is particularly the case where employees use public genAI tools to carry out workplace tasks.
The problem is that many businesses don’t even know when employees are using public genAI tools. Recent research highlights the scale of the issue: 71% of UK employees admit to using unapproved consumer AI tools at work, with over half (51%) doing so every week. This widespread, unsanctioned adoption means the risk of misuse is far greater than many organisations realise.
A crucial part of addressing this is educating and training employees on proper AI use. But organisations must also have measures and policies in place to protect both employees and themselves if things go wrong.
AI at Work: The Dos and Don’ts
AI is increasingly being integrated into the world of work. A recent report found that 88% of organisations now use AI regularly in at least one business function, up from 78% a year ago. Crucially, though, most are still in the experimentation phase.
When used well, enterprise genAI tools can improve productivity and free up employees to focus on higher-value work. But this isn’t always the case.
Problems can arise when employees rely too heavily on genAI without fully understanding the content it generates or verifying its accuracy. Without clear guidelines, tools designed to enhance productivity can easily have the opposite effect. Over-reliance on genAI can lead to poor-quality output (i.e. AI “workslop”) – work that looks polished but lacks accuracy, originality or critical thought.
The absence of clear rules can also expose employers to legal and reputational risks. For example, an employee may fail to properly check AI output before including it in their work. Inaccurate or hallucinated AI content can damage trust and credibility amongst colleagues and clients, leading to reputational damage. There could also be intellectual property risks for the employer if the AI-generated content is not original but derived from other sources.
Public GenAI Tools: Unpacking the Hazards of Misuse
Another key risk is that employees use public genAI tools, rather than approved enterprise tools, to generate work content. In doing so, they may input confidential or legally privileged business or personal data into platforms that are not secure and may use that data to train the underlying model.
For example, publicly available versions of genAI platforms like ChatGPT, Gemini, and Claude are not designed with enterprise-level confidentiality in mind. When employees input sensitive personal data or confidential/privileged company information, such as details of workplace issues or business documentation, into these public tools, there is a real risk that the data could be exposed, stored, or even used to train future models.
This lack of security and confidentiality introduces significant concerns for organisations, including loss of confidentiality and legal privilege, data leaks, and breaches of regulatory compliance.
A US report from earlier this year found that more than half (57%) of employees admit to inputting high-risk information into public genAI tools, exposing concerning gaps in enterprise genAI tool usage. Importantly, these risks do not only affect the organisation.
Employees themselves may be held accountable if they input confidential information into public genAI tools. Depending on the circumstances, this could lead to disciplinary action, reputational damage, or even legal consequences for individuals.
In-house Tools: Risks and Challenges
Risks also arise with enterprise genAI tools. For example, employees may enter personal or sensitive information about managers or colleagues into the tool, where both the input and the output may be stored.
If the person named in the input later makes a data subject access request (DSAR), the employer may have to disclose any personal data held, including what was entered into the AI tool and its output. Such input/output could also be disclosable in future employment tribunal litigation, if relevant to the issues in the case.
Managing Employee Use
All the issues outlined so far demonstrate why clear policies on employee AI use are no longer optional; they are essential for employers navigating the growing role of AI in the workplace.
Well-drafted policies should set clear guidance for employees on the types of AI tools they are permitted to use in the workplace (i.e. authorised enterprise tools only) and for what tasks. They should make clear that employees must use AI as a partner rather than a substitute for judgment and creativity, and always keep a “human in the loop” to carefully check AI output for accuracy. They should also be clear on the types of information that should and should not be inputted into such tools, and set out clear consequences for employee misuse (e.g. disciplinary action).
Strong governance must go beyond policies to include training, clear workforce communication and leadership. Employers must also consider how AI affects work quality and organisational culture, ensuring that AI use is aligned with wider people strategies, including inclusion, wellbeing, behaviour and values.
Learning and development should be aligned to the AI strategy and include risk training both for the HR team and the wider workforce. This is vital to embed responsible use, reinforce business values, and prevent a culture of “workslop”.
The benefits of AI in employment are clear when it is used well, but employers must be proactive in implementing comprehensive AI policies, supported by education and engagement. This fosters trust, confidence, and understanding among the workforce regarding new AI technology, and sets the foundation for successful AI implementation.

