Bias, data and power

In the last part of this series, I explored what happens when AI design becomes a shared process – when we build with people rather than for them. But even the best intentions meet their limits when bias is already built into the data itself.

During one of our workshops, someone said, “If you can’t see me in the data, how can you protect me with it?”

That question stopped the room. It captured something we all know but rarely name: the myth that data is neutral, when it’s anything but. Data carries the fingerprints of the world it comes from: its assumptions, its exclusions, its power dynamics. And once you see that, you start to understand how easily inequality can become mainstream infrastructure.

The myth of objectivity

Data is often treated as the most reliable form of evidence. “The numbers don’t lie”, we tell ourselves. But don’t they?

They lie through omission – through the patterns of who was included, who was heard, and who was never even considered.

In AI, those omissions have real consequences. Research shows that 44% of AI systems exhibit gender bias, and 25% show both gender and racial bias. Behind those numbers are patterns of exclusion that repeat old hierarchies at scale.

They range from recruitment algorithms that downgrade CVs containing “female-coded” words, to facial recognition systems that misclassify Black women at rates up to 34% higher than white men, to medical tools that underestimate pain in women because they were trained mostly on male data, to voice systems that struggle to recognise accents outside the “standard”.

None of this happens by accident. It happens because the people collecting, curating, and coding the data rarely represent the full range of the people who will live with its consequences.

During our workshops, several women shared personal experiences of what it feels like to be invisible to systems that claim to be objective. One participant spoke about trying to access online safety tools that assumed every user was a man reporting another man. Another described applying for financial support and finding the system unable to recognise her caregiving responsibilities as legitimate work.

Those stories made something clear: bias isn’t always visible as discrimination. Sometimes it hides in the spaces between data points – in what gets left out because it doesn’t fit neatly.

When we treat data as neutral, we stop asking the most important questions: who is missing, and what happens to them as a result?

The danger of averages

Another theme that came up again and again in our workshops was how AI systems are tested and evaluated – and the danger of designing for “the average user”.

Averages are comfortable. They smooth out complexity. They give us the illusion of progress because, statistically, most things seem to work. But most is not the same as enough.

When you design for the average, you optimise for the majority and accept that someone will always be left behind. That’s the trade-off that sits quietly at the centre of so many systems – from algorithms to organisational policies.

In one discussion, a participant put it perfectly: “The problem with averages is that they make harm look small”. That idea stayed with me because it’s true far beyond AI.

In leadership, we do this too. We track engagement scores or performance metrics that tell us how most people are doing, but not who is falling through the cracks or why. We celebrate consistency rather than curiosity and we treat exceptions as outliers instead of evidence that something needs to change.

When we focus on averages, we lose sight of people. And that’s where systems fail.
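
For readers who like to see the point in concrete terms, here is a tiny, purely illustrative sketch in Python. The numbers are invented for the example, not drawn from any real system: a single headline accuracy figure can look healthy while one smaller group quietly carries most of the errors – which is exactly what that participant meant by averages making harm look small.

```python
# Toy illustration with made-up numbers (hypothetical, not from any real system):
# the overall metric looks acceptable, but disaggregating the same metric by
# group shows who actually bears the errors.

from collections import defaultdict

# (group, prediction_was_correct) for an imaginary system
results = (
    [("majority", True)] * 90 + [("majority", False)] * 10   # 90% correct
    + [("minority", True)] * 6 + [("minority", False)] * 4   # 60% correct
)

overall = sum(ok for _, ok in results) / len(results)
print(f"Overall accuracy: {overall:.0%}")  # ~87% – looks fine "on average"

by_group = defaultdict(list)
for group, ok in results:
    by_group[group].append(ok)

for group, outcomes in by_group.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{group}: {rate:.0%}")
# majority: 90%, minority: 60% – the harm was there all along, just averaged away
```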

What accountability really means

One of the most transformative parts of this project was seeing how accountability shifted when bias was named early. Instead of waiting for a system to cause harm and then fixing it after the fact, participants explored what it could look like to design bias out before it becomes normalised.

That begins with transparency and with asking better questions. Not just about outcomes, but about process. Where did the data come from? Who labelled it? Whose definitions were used?

It also means giving people the right to question and contest what systems produce. One participant described wanting an explanation button next to every AI decision. Not a technical one, but a human one: Why did it reach that conclusion about me? What can I do if it’s wrong?

That desire for transparency goes beyond technology. It’s the same shift we need in leadership. We measure performance, but not fairness. We fix what’s visible, but not what’s systemic.

Real accountability happens when people can see how decisions are made and understand their role in them. It’s when we build systems that stay open to challenge – where questions are welcomed, not treated as resistance – and when those questions lead to real change. It’s when responsibility is shared rather than passed down or hidden behind a process, so people feel part of the outcome rather than bound by it.

Where we go from here

Bias doesn’t only appear when systems fail. It’s built into how we define success, progress, and even fairness.

Every dataset, every metric, every decision is a product of its context – and context is shaped by power. Once you see that, you start to realise that the question isn’t whether bias exists, but whether we’re willing to look for it and change the conditions that allow it to grow.

In the next part of this series, I’ll share some of the bold, practical ideas that came out of this project, from building a feminist AI Fellowship to creating watchdog hubs, youth empowerment labs, and advocacy networks. Each one shows what it looks like when we stop describing the problem and start designing the alternatives.

Until next time!

Tania
