You say innovation. I say: what does that even mean?

Nov 17, 2025

Last week, major US-based venture capital firm a16z published their thoughts on AI regulation. Which (predictably) can be summed up as: don't regulate development, only regulate bad uses. Let the technology flourish, deal with problems as they emerge.

Cool, cool.

Except, they've skipped a crucial step. In a rush to protect "innovation", they've failed to define what it really means and who it should benefit.

(I mean, we can all hazard a guess, right?)

I spend my days helping businesses develop substantive AI ethics principles. And believe me, you can't have a sensible conversation about innovation until you agree on what you're innovating towards.

Four questions before you build anything

So before we get tangled up in debates about pragmatic harm reduction versus creative destruction (shudder), let's pause and ask four very simple questions:

  • Does this technology reduce inequality or exacerbate it?
  • Does it enhance human autonomy or diminish it?
  • Does it create broadly shared prosperity or concentrate wealth?
  • Does it strengthen democracy or undermine it?

These aren't philosophical abstractions. You can apply them whether you're writing national policy, deciding what tech your company should build or deploy, or choosing which tools to use in your own work. They work across different scales and contexts.

The false choice

Our venture capitalist pals (?) want us to believe we face a binary choice: innovation or stagnation, progress or precaution. I call BS.

Let's take a situation we're all familiar with: a company considering using AI to replace junior roles — analysts, paralegals, designers, whatever. The business case looks bulletproof. The solution is faster, cheaper and scales infinitely. No pesky sick days or icky interpersonal problems. Why wouldn't you?

Now run it through those four questions and watch what emerges.

  • Inequality: you've pulled up the ladder — only people who can afford unpaid internships get through
  • Autonomy: remove the roles where people learn judgement and you get two classes: AI operators and expensive experts ageing out
  • Prosperity: the short-term savings flow to shareholders, the long-term costs get dumped on society
  • Democracy: knowledge concentrates at the top, and power sits with whoever controls the AI

What does better look like?

My advice here would be to reframe the junior role as something more than a set of tasks to be automated.

It's an apprenticeship. It's where you develop intuition and understand why the senior people make the choices they do.

So you redesign it with this in mind. The AI handles the grunt work. Then the junior's job becomes learning judgement: reviewing what the AI produces, understanding where it goes wrong, developing the expertise to know when to trust it and when to bin it. Learning what good looks like and all that real-world embodied stuff the AI has no access to.

(Yes, there's an argument that you can only truly learn by doing things the long way round. I get it. I'm just not convinced that's what the future of work is going to look like — and pretending otherwise won't help anyone prepare for it.)

The company keeps their talent pipeline. Maintains their culture. Still gets their efficiency gains. That's what happens when you ask the right questions upfront.

Innovation is not inherently "good"

The venture capital argument assumes innovation is self-evidently good. That market forces will sort out the distribution of benefits. That "progress" means the same thing to everyone. I've been around long enough to know that's not true. Some innovation concentrates power. Some spreads it. The technology itself doesn't determine which — that depends on the questions we ask and the choices we make.

I acknowledge that these things are uncomfortable.

Nobody wants to be the downer asking about who loses when everyone else is excited about the gains. And I know questioning whether we should do something feels anti-progress to some.

But it's not. Categorically.

You're asking whether "better" means better for everyone or better for a few. And to me, that's one of the most important questions to be asking right now.
