
The AI gender clusterf*ck

Nov 11, 2025

Picture two software engineers. Same code. Same AI tool used in the process. One gets praised. One gets marked down for incompetence. The difference is gender. Because of course it is.

Oh how I wish this were a hypothetical scenario. Sadly, it's a cold, hard research finding.

A study into barriers to the adoption of new tech gave reviewers identical code to assess, along with the engineer's gender and whether they'd used an AI tool. When the engineer was described as a man using AI, the competence penalty was 6%. For a woman using AI, it was 13%. Male non-adopters, the harshest critics of all, judged female AI users 26% more harshly than men using the same tools.

This competence penalty is just one strand in a huge web of interconnected problems surrounding AI and gender, each feeding into the others and disadvantaging women at every level.

Women are judged more harshly for using AI that was built with bias against them. In turn they use it less, even as they're most likely to lose their jobs to it. And least likely to be building the systems replacing them.

If you're a woman reading this, you will not be shocked. Because you'll have lived some version of being damned if you do and damned if you don't. I've got an 18-month-old toddler, so I have fresh experience of the many variations of this paradox tied to motherhood.

So how did we get here and what can we do about it? The best place to start is, as always, at the beginning.

It starts at the source

Women make up just 22% of AI professionals globally and only 18% of AI researchers. So the people building these systems don't represent the people using them, which is where the problems start.

Teams lacking diversity build systems that reflect that lack. They can't spot bias they've never experienced. There's nobody to call out bullshit. The echo chamber gets baked into the code.

The results are predictable. A UNESCO study found that major language models described women in domestic roles four times more often than men. And when a Stanford study asked AI models to assign pronouns to jobs, they chose male pronouns 83% of the time for "programmer" and female pronouns 91% of the time for "nurse."

These systems excel at perpetuating and amplifying patterns — including the harmful ones.
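
You don't have to take the researchers' word for it. If you're deploying a model, a crude version of this kind of pronoun probe takes minutes to run. The sketch below is purely illustrative, not the Stanford study's actual methodology: it assumes the OpenAI Python client, and the model name, prompt wording, job titles and trial count are all placeholders for whatever you actually use.

```python
from collections import Counter

from openai import OpenAI  # assumes the openai>=1.0 client; swap in your own provider

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def pick_pronoun(job: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to choose a pronoun for a job title; return its one-word reply."""
    prompt = (
        "Complete the sentence with a single pronoun (he, she or they) "
        f"and reply with only that word: 'The {job} went home early because ___ was tired.'"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()

def probe(job: str, trials: int = 20) -> Counter:
    """Tally the model's pronoun choices over repeated trials."""
    return Counter(pick_pronoun(job) for _ in range(trials))

for job in ("programmer", "nurse"):
    print(job, probe(job))  # a heavy skew in these tallies is your bias signal
```

A toy like this proves nothing on its own, but a lopsided tally is a useful conversation starter with your vendor.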

The triple bind

These biased systems don't operate in a vacuum. Out in the real world, they create interconnected problems that compound each other, trapping women in an impossible situation.

Jobs on the line

79% of employed women in the US work in jobs at high risk of AI automation, compared to 58% of men. Women are over-represented in the roles AI is forecast to replace: customer service, data entry, administration and retail checkout.

The adoption gap

Meanwhile, women aren't building the AI literacy to help them adapt to a changing job market. A Federal Reserve Bank survey found 50% of men use generative AI versus 37% of women. A Harvard meta-analysis across 25 countries found women had 22% lower odds of using generative AI.

Access isn't the problem. In the same study, researchers gave Kenyan entrepreneurs equal access to ChatGPT, plus training. Women were still 13% less likely to adopt it.

The trust crisis

Why the reluctance? I'd hazard a guess that the aforementioned competence penalty is a contributing factor, along with lower trust in this tech. Because why would you trust something that seems built to work against you?

And that mistrust isn't abstract.

Women experience higher rates of tech-facilitated harassment. A 2025 survey into online abuse against women in the US found 25% of women had experienced harassment enabled by technology, including AI-generated deepfake pornography.

Mara Bolis, Senior Advisor on Feminist Futures of Work at Oxfam America, noted: "Women's generative AI hesitancy is rooted in rationality, not hysteria. It's risk awareness, not risk aversion."

The vicious cycle

AI has been sold as democratising. But when we look at the facts before us, it seems to be the opposite.

AI trained mostly by men reflects male perspectives. It works less well for women. Women trust it less. Women use it less. The next generation of models is trained on majority-male usage patterns. The gap widens.

Gender isn't the only fault line. The same pattern repeats across race, age and economic class. Older workers face similar competence penalties when using AI. Black and ethnic minority professionals encounter additional bias from systems trained predominantly on Western, white perspectives. People in lower-paid roles have less autonomy to experiment with AI tools and less access to training — yet their jobs face the highest automation risk.

Viewed through this lens, we can see how the current tech ecosystem is concentrating advantage in the hands of those who already have it: younger, male, higher-income workers with the confidence and freedom to adopt new tools without penalty. Quelle surprise.

How we fix this sh*tshow

There are no easy answers or quick fixes here. What we need is systemic change. Inclusivity and transparency at every level — in who builds these systems and who decides how they're deployed.

What this looks like in practice

If you're deploying AI in your business (as I know most of my readers are), start by asking who's in the room when decisions get made. Not just about which tools to buy, but how they'll be used, who'll be expected to use them and how that use will be evaluated.

Before rolling out any AI tool, speak to the people who'll use it. What are their concerns? What support do they need? How will using this tool affect how they're perceived by colleagues and managers?

When it comes to rollout, monitor adoption (but not in an icky surveillance way). If you see gender gaps, age gaps or other disparities, that's the data signalling you have something to address in your culture.
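
For what that check might look like in practice, here's a minimal sketch in Python. Everything in it is illustrative: it assumes anonymised, opt-in survey responses rather than usage surveillance, and the field names and 10% threshold are placeholders, not recommendations.

```python
from collections import defaultdict

# Illustrative, anonymised opt-in survey responses (hypothetical field names).
responses = [
    {"group": "women", "uses_ai": True},
    {"group": "women", "uses_ai": False},
    {"group": "women", "uses_ai": False},
    {"group": "men", "uses_ai": True},
    {"group": "men", "uses_ai": True},
    {"group": "men", "uses_ai": False},
]

def adoption_rates(rows):
    """Share of each group reporting that they use AI tools."""
    users, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        users[row["group"]] += row["uses_ai"]
    return {group: users[group] / totals[group] for group in totals}

rates = adoption_rates(responses)
gap = max(rates.values()) - min(rates.values())

# The 10% threshold is arbitrary; a persistent gap is a prompt to ask why,
# not a reason to monitor individuals.
if gap > 0.10:
    print(f"Adoption gap of {gap:.0%} between groups: {rates}")
```

The point isn't the code. It's that a gap you can measure is a gap you can't ignore.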

Set clear policies on AI use. Is it expected? Optional? For which tasks? How will AI-assisted work be evaluated? Make these expectations explicit and consistent.

Beyond your business

Individuals can't solve systemic problems alone, but here are some concrete actions you can take to help tackle the wider issues.

Push for transparency from AI vendors about how their systems were trained and tested. Support regulatory frameworks that require bias auditing and impact assessments. Join industry groups working on responsible AI deployment.

And crucially — when AI systems don't work well for parts of your workforce, make noise about it. Call out the bullshit.

Time to get uncomfortable

I acknowledge that none of these actions are easy or comfortable. But the research is clear and the harm is measurable. We know what needs to change. The question is whether we have the courage to see it through — or whether we'll keep paying lip service to ethics while building and deploying systems that entrench inequality.

Please don't just nod along to this article. Change something. And if you need support with this, I'd love to help.
