
3 principles that transform AI ethics conversations

Aug 25, 2025

At the start of every AI ethics workshop, I share a few ground rules. There are the basics (don't be a dick), and then three principles that will help you get the most out of our time together:

  1. Admit when you don't know something
  2. Accept that competing truths can co-exist
  3. Be open to changing your mind (and let others do the same)

Let me tell you why these matter — and how they can transform not just workshops, but how your organisation approaches AI ethics entirely.

Principle 1: admit when you don't know something

My AI ethics workshops are about exploration first, finding answers second. And exploration simply isn't possible if you're not willing to admit ignorance.

AI ethics as a discipline is impossibly vast and complex and blows my mind daily. Arguably, knowing anything for certain isn't the point. It's more a constant process of learning, evolution and reflection than anything fixed. So being comfortable with what you don't know is the perfect starting point.

We're socially conditioned to see "I don't know" as weakness, but I think this holds us back from growth. "Not knowing" helps you get curious, ask better questions (like the really "obvious" ones) and expand your thinking. Pretending you have all the answers shuts down any real exploration before it even starts.

Why this matters in practice

When leaders create space for "I don't know" in AI discussions, teams stop defaulting to the loudest voice or the most confident-sounding solution. Instead, they start asking more fundamental (and important) questions, like:

  • What are we trying to achieve?
  • Who might be affected?
  • What don't we understand about the consequences?

Principle 2: accept that competing truths can co-exist

Because they absolutely do — in AI ethics and life in general.

For example:

  • Generative AI is fundamentally unethical, because it's trained on stolen intellectual property
  • Generative AI has some genuinely beneficial applications

Both of these statements are true, but conflicting.

Being comfortable with paradoxes like this helps you find a more pragmatic, less emotionally charged position. Instead of getting stuck in polarised discourse, you can develop a nuanced approach grounded in the real world.

Why this matters in practice

Most AI ethics challenges don't have clean answers. Your customer service chatbot might improve response times while potentially providing a worse overall service. Your recruitment AI might reduce bias in some areas while introducing it in others.

The key to solving these issues (or at least working towards solving them) is the ability to hold these tensions without rushing to oversimplified solutions.

Principle 3: be open to changing your mind

Like admitting you don't know something, I consider changing your mind a sign of growth, not inconsistency or weakness. You can think one thing, encounter new information, and think something different.

There's nothing "bad" about this. And allowing others grace to do the same shows intellectual maturity.

Why this matters in practice

As an example, my own position on AI has evolved dramatically. While I still sit in the sceptical camp (and I'm happy to own this; we need sceptics and enthusiasts working together for better progress), I'm doing a lot less shouting at the moon these days.

Thanks to studying AI ethics, I've shifted to a more moderate, pragmatic and — I think — productive position. My focus is on improving AI literacy and non-judgementally helping people make good decisions within their context and current constraints, rather than taking a hard-line stance that's ultimately limiting and unrealistic.

When we create space for minds to change, we open up possibilities for better — and truly innovative — progress, as opposed to black-and-white absolutes that sound clean but don't hold up in reality.

So here's your permission slip — change your mind, as much as you like.

Question your assumptions. Sit with complexity. Acknowledge your biases. Admit what you don't know. Listen to voices that disagree with you. Hold multiple perspectives simultaneously. Look for common ground. Accept that perfect solutions don't exist.

Do this and see what shifts. It might be more than you were expecting...

Applying these principles beyond workshops

These principles don't just work in workshop settings — they're essential for any organisation trying to navigate AI responsibly:

  • In strategy meetings: create space for uncertainty rather than demanding definitive answers to complex questions
  • In policy development: build in mechanisms for iteration rather than trying to create perfect, unchanging guidelines
  • In team discussions: reward intellectual honesty over confidence, and curiosity over certainty

Remember: the goal isn't to have all the answers. It's to create the conditions where better answers can emerge.

Want to explore what responsible innovation looks like for your organisation?

Book a call or email: [email protected]