
Why spying on shadow AI use will backfire (and what to do instead)

Sep 09, 2025

I've seen a lot of buzz in the last week about shadow AI, which refers to AI tools employees use for work in the background without formal approval.  

Recent research from MIT revealed that while only 40% of companies have purchased official AI subscriptions, employees from over 90% of organisations regularly use personal AI tools for work tasks. That means nearly every knowledge worker is now using AI in some form — and largely outside official channels.  

I don't need to explain why this is risky AF.  

Unmanaged AI use can expose sensitive data, create compliance issues, introduce bias into business decisions and fragment organisation-wide AI adoption.

The shadow AI paradox

The paradox is (remember I talked about how AI ethics is full of these?) that shadow AI often also represents genuine innovation.

MIT's research found that shadow AI users generally reported better results than their companies' official AI initiatives, which frequently stalled in pilot phases without delivering.

Shadow AI use is essentially a collection of AI use cases that really work in practice, developed by people who know their jobs inside out. And it reflects the kind of scrappy spirit of innovation that can drive a real competitive advantage.  

Which is why there's a lot of interest in it right now.

Why monitoring makes things worse  

While I'm pleased there's growing awareness of this issue, I've noticed an uncomfortable recurring narrative: employee surveillance.

The prevailing wisdom (and here's just one example) suggests organisations should monitor network traffic, track employee activity and analyse digital resource use to detect "unauthorised" AI use.  

So treat employees like naughty little children. And in doing so, create an environment of suspicion where every click, download and resource spike is scrutinised for signs of misdemeanours.

I believe this approach is fundamentally counterproductive because it makes people feel unsafe, likely drives AI use further underground and creates adversarial relationships with the "AI police".

A radical alternative: treating people like adults  

I advocate for an alternative approach.  

And stay with me here, because this is pretty radical: treat your people like intelligent, capable adults trying to do their jobs better.  

Wild, right? Anyway, here's how I'd go about it:  

1. Encourage open conversations

Create safe spaces for employees to discuss their AI use without fear of punishment. Regular "AI coffee chats," anonymous feedback channels, workshops and town halls can reveal what people are actually doing with AI and why.  

2. Build AI ethics literacy

Help employees understand both the benefits and risks of AI tools. When people know why certain practices are problematic and what alternatives exist, they can make better choices. Support informed decision-making rather than restricting use.  

3. Create accessible governance

Often, shadow AI emerges because official channels are too slow or bureaucratic. Streamline your approval processes and provide clear guidance on when and how to get AI tools approved.  

4. Establish "AI office hours"

Regular sessions where employees can discuss ideas, get guidance on compliance/ethics requirements, or seek help moving shadow AI into official channels.  

5. Form AI communities of practice

Connect employees who are using AI effectively with those who are curious. Peer learning is often more effective than top-down training.  

6. Focus on outcomes, not tools

Rather than obsessing over which specific AI tools people are using, concentrate on ensuring good outcomes: fair and accurate results, ethical practices and alignment with business objectives.

The business case for trust-based AI governance  

I think this approach doesn't just benefit an organisation's culture. It can lead to better outcomes across the board, like:  

  • Higher adoption rates (because people feel supported)
  • Better risk management (because people feel safe disclosing AI use)
  • Talent retention (because restrictive approaches risk losing your best people to more progressive competitors)

It's important for me to stress here that I'm not an HR expert, I'm just someone with Big Feelings on how humans should treat each other at work and in life in general.  

Call me naive, call me a hopeless idealist. But also... prove me wrong?  

I sincerely believe the most successful organisations (and we're talking long-term, sustainable success) will be those who invest in building trust, welcoming honest conversation around AI and creating an environment where responsible innovation can thrive out in the open.  

So instead of asking "how do we catch people using unauthorised AI?" try asking:  

  • What AI tools are our people finding genuinely useful?
  • What gaps in our official AI strategy do these shadow uses reveal?
  • How can we support the innovation that's already happening?
  • What would make people feel safe being transparent about their AI use?  

The goal is to create conditions where the good stuff can emerge from the shadows and be properly supported, while the risky stuff gets addressed through education and better alternatives, not punishment.

Want some help with this? 

Want to explore what responsible innovation looks like for your organisation?


Book a call or email: [email protected]
