
Spiralling about AI? Start with what you can control.

Sep 01, 2025

Today I'm coming at you with a reminder that YOU are the captain of your own ship.

In one of the first AI ethics classes I took (when I was deep in study mode), the tutor shared five questions to ask before using generative AI for a task:

  1. Am I misrepresenting effort?
  2. Am I spreading unverified claims?
  3. Am I conscious of bias?
  4. Am I sabotaging my own skills?
  5. Am I sharing private information?

The fourth question in particular caught my attention, because it made me realise how quickly I spin out thinking about big things I can't control: climate change, labour market disruption, threats to democracy, how we're all destined for Universal Basic Income, and so on. All the while, I overlook the immediate, personal impact.

Because AI ethics (and AI in general as a thing) is so vast and wide-ranging, and the world feels so uncertain, it's easy to spiral into existential dread about all sorts of huge and scary thoughts.

We've all had sleepless nights about it, right?

What's really helped me recently is remembering that it all starts with AI's impact on YOU. And there's plenty there you can control.

The 5 layers of AI impact

In my workshops, I teach a framework for assessing AI opportunity and responsibility in business that works across five layers:

Layer 1: personal impact

How does AI affect you directly? Your skills, your productivity, your privacy, your decision-making, your mental health?

Layer 2: stakeholder impact

How does your AI use affect the people around you? Your team, your clients, maybe also your family and friends?

Layer 3: business impact

What's the broader impact on your organisation? Culture, processes, competitive advantage, risk?

Layer 4: industry impact

How might your choices contribute to wider patterns in your sector?

Layer 5: society impact

What's the cumulative effect of all these decisions on the world at large, both in the present and future?

Most people (for clarity: I am most people) jump straight to layer five — the big, societal implications that feel overwhelming and impossible to influence. But when you look at all these separate layers, you have waaaay more agency than you might think.

Start where you have control

You can absolutely control layer one. You decide whether to use AI to challenge your thinking or replace it entirely. You choose whether to fact-check outputs or blindly trust them. You control what personal information you share with AI tools.

You have significant influence over layer two. You can be transparent about when you're using AI. You can involve your team in decisions that affect them. You can choose to use AI in ways that respect your clients' time and intelligence.

If you're in any kind of leadership role, you're actively shaping how your business approaches AI (so that's layer three). Your choices around training, policies, tool selection and culture all matter.

And from there, your decisions at layers one, two and three don't just affect you — they ripple outward to layers four and five.

Every choice shapes the future

The future isn't inevitable (as I keep repeating ad nauseam) despite what the hype wants us to believe. Every choice you make in your business and life contributes towards what comes next.

This technology requires human adoption to have impact, so individual choices matter enormously right now. YOUR choices matter enormously right now.

When you choose to use AI thoughtfully rather than recklessly, you're modelling a different way forward. When you prioritise agency over automation, you're voting for a particular kind of future. When you refuse to let AI diminish the quality of your work or relationships, you're setting an important boundary.

And when you combine that individual agency with collective action — when teams, businesses and industries start making more thoughtful choices en masse — that's how we build towards a future we all want to be part of.

Moving from anxiety to action: some practical steps

So next time you find yourself spiralling about AI's impact on the world, try this:

Start with layer one. What's one thing about your own AI use you could adjust today?

  • Maybe it's fact-checking more carefully
  • Maybe it's being more transparent about what tools you're using and when
  • Maybe it's choosing to write that email yourself for a more personal touch

For bonus points: what's one conversation you could start, or one uncomfortable question you could (gently) ask, about responsible AI use?

For my fellow overthinkers: you're not powerless. Start where you have control, and watch the ripples spread.

The big, scary questions about AI's future impact on society are important, but they're not where your power lies. Your power is in the daily choices you make about how you engage with this technology.

Use it wisely.

Want to explore what responsible innovation looks like for your organisation?

 

Book a call or email: [email protected]