We all see the same problems...
Oct 20, 2025
There are five big objections to AI ethics that I bump up against in my work. What's interesting is that these typically come from people who see the same problems I do: problematic tech, inadequate regulation, corporate lip service and an urgent need for change.
We're all looking at the current situation and wanting better. We've just got different theories about how to get there.
So instead of going round in circles arguing about these things on social media (a tragic waste of time and human potential), I'm going to address the big five objections here and explain why I believe multiple approaches – technical solutions, activism, regulation, education and practical ethics frameworks – all have a role to play.
Let's work together, yeah?
Objection 1: "You need technical solutions, not paper frameworks"
I agree that if this tech is here to stay, we need technical solutions – audit trails, bias detection tools, transparent models. But here are two critical points:
Point one: the bulletproof tech doesn't exist yet. So what do we do in the meantime?
Organisations are deploying AI right now. They're making decisions that affect their customers, employees and communities. Waiting for perfect technical solutions means either making decisions in a vacuum with no ethical principles or not adopting AI at all. And I don't know how many businesses are open to that latter suggestion.
My work gives people a way to make better decisions with the tools that currently exist, while understanding the limitations and wider harms.
Point two: for better tech to be developed, there needs to be demand – and that won't happen without education and advocacy.
Technical solutions don't just materialise because someone clever invents them. They emerge because there's demand. And demand comes from organisations understanding why they need better tools, regulators understanding what needs regulating and the market recognising more ethically-aligned AI as a competitive advantage.
Without people (I am people) doing the education and framework-building work, there's no clear signal to developers about what problems to solve or what "good" looks like. My work creates the conditions that make better technical solutions both possible and commercially viable.
The ideal is both approaches working together, because technical solutions and ethical frameworks are complementary and mutually reinforcing.
Objection 2: "AI is fundamentally unethical, so we should avoid it entirely"
I understand this position. I feel it in my bones. Of course, there are ENORMOUS issues with how AI systems are built, what they're trained on and the harms they can cause. Some people can afford to opt out entirely, and that's a valid choice (personally, I use as little AI as possible).
But it's also a privileged position not afforded to everyone. Many organisations and individuals don't have that luxury. They're under competitive pressure, facing resource constraints or working in contexts where AI use is becoming expected or required.
I want to help people make the best decisions they can within their context, while drawing attention to wider systemic issues in the process. Meeting people where they are and helping them make better choices within real constraints is valuable work, not compromise.
Pragmatic harm reduction matters. Less bad is still progress, even when the ideal solution feels out of reach.
Objection 3: "AI ethics is corporate theatre"
I agree! So much of what passes for AI ethics in business is empty policy and box-ticking exercises. The old "we can't possibly be the bad guys, we have an ethics policy" trick. I see it, and it frustrates me as much as it frustrates you.
That's exactly what I'm trying to fix.
I'm actively researching and testing ways to add substance to governance and policy. Working through what makes a difference versus what just looks good on paper. I'm learning from each organisation I work with, iterating on approaches, and building an evidence base for what constitutes walking the talk rather than performative compliance.
(the temptation here to throw in a snarky "what are you doing about it?" is strong)
Objection 4: "AI ethics is lots of talking and no action"
I sure do a lot of talking, that much is true. But let me share some bits of action I've witnessed recently in my work:
Workshops where teams who couldn't see eye to eye about AI started having more constructive conversations, using ethics as a vehicle for exploring grey areas, understanding different points of view and then finding a responsible path forward together.
Business owners who've re-evaluated their position on AI and drawn firmer boundaries after taking my email course.
An employee who felt empowered to constructively challenge some thoughtless AI use by their employer – and it ended positively.
These aren't world-changing victories. But they're real, tangible shifts in how people think and act. That's what change looks like on the ground.
The talking is how we get to shared understanding. The shared understanding is what enables action.
Objection 5: "You're placing the burden of responsibility on users"
This is something I've reflected on deeply (surprise, surprise). Yes, in an ideal world the burden of responsibility lies with AI developers and regulators. I don't want to place more guilt on anyone's already overburdened shoulders.
But.
The tech is problematic. It's largely unregulated. It's cheap and widely accessible. Millions of people are using it every day. And pretending individual responsibility doesn't matter while we wait for some perfect solution to materialise doesn't help the situation.
Personal responsibility is still important – not instead of systemic change, but alongside it.
So rather than guilt, I focus on agency. I'm not saying "it's all on you" – I'm saying "while we're pushing for better from developers and regulators, here's what you can do right now."
Giving people frameworks for making better decisions isn't letting developers or regulators off the hook – it's acknowledging that change happens on multiple levels simultaneously. AND IT'S GOT TO START SOMEWHERE.
My work on user education and responsibility doesn't replace the need for regulation and better tech. It creates informed users who can demand better and recognise when they're not getting it. Education drives advocacy, which in turn drives systemic change.
In sum, we all have our part to play
If you've raised any of these objections, I get it. I agree with you about the problems. The current AI landscape is messy, un(der)-regulated, and full of hype and harm.
We just have different ideas about how to make it better – and we probably need all of them.
We need clever people building better technical solutions. We need passionate activists pushing for stronger regulation and systemic change. We need brave souls calling out corporate theatre. We need people like me helping others make better decisions right now, with the resources and constraints they have.
All hands on deck!
None of us can solve this alone. But we can all do our bit (and we can all pause to reflect on the way we conduct ourselves on LinkedIn 😘).