From awareness to action: redesigning AI

In the last part of this series, I wrote about bias, data, and power – and how inequality can quietly embed itself in the systems we trust.

But naming the problem is only the first step. The harder work is deciding what to do about it.

When we reached this stage of the project, the conversation shifted. The question stopped being “What’s wrong?” and became “What could we build instead?”

Awareness is not the same as progress. You can understand how bias works and still recreate it if nothing changes in the process, the priorities, or the people who get to decide.

That’s where the Future Opportunities part of this project began: with the realisation that fairness can’t be retrofitted; it has to be designed in from the start.

Changing who gets to build the future

One of the boldest ideas to emerge from our workshops was the creation of a Fellowship for women, non-binary people and allies working at the intersection of gender, technology, and ethics.

The aim was simple but radical: to create space for those most impacted by technology to lead its redesign.

For too long, conversations about AI have been dominated by the same voices: those who already hold power in tech, policy, and funding. But the people who live with the consequences of those systems often have the sharpest insights into what’s broken and what needs to change.

The Fellowship would bring them into the centre of the conversation, not as case studies, but as creators, builders, and policy shapers. Participants would explore inclusive approaches to AI, test ideas, prototype solutions, and share their findings publicly so others can learn and build on them.

Who gets to be seen as an expert must change, because expertise doesn’t only live in theory or code; it lives in experience, perspective, and proximity to the problem.

When people who have lived exclusion are given the power to design inclusion, the results look very different.

Naming harm before it scales

Another idea was the AI Watchdog Hub – a research-led accountability platform that documents, analyses, and challenges gendered harms caused by AI systems.

Right now, most forms of bias are noticed only after the damage is done: when someone is misclassified, excluded, or targeted; when a product fails publicly; when people lose trust.

The hub would flip that. It would track emerging harms, monitor bias in real time, and create a space where individuals, researchers, and civil society groups could report concerns and share evidence. It could even publish an annual “State of AI Harms” report, building the pressure for transparency and reform.

This isn’t about blame; it’s about visibility. What we don’t measure, we normalise.

The hub could give policymakers, journalists, and companies the information they need to act faster and more responsibly, and it could give the public a place to be heard.

Bias thrives in silence. Accountability begins with naming it.

Building literacy, not fear

The third idea focused on young people. Many of our participants spoke about how alienating technology can feel, especially to those who don’t see themselves represented in it. That’s how the idea for Youth Empowerment Labs was born.

These labs would be spaces in schools, colleges, and youth centres where young people could learn how AI works – not to turn everyone into a coder, but to help them understand who it serves and who it leaves out.

They’d explore how bias enters data, how design decisions shape outcomes, and how to imagine alternatives.

The goal isn’t just to build digital literacy but to build agency: to show young people that they have a say in how the systems shaping their lives are built.

One participant said, “We’re told AI is something that happens somewhere else”. The labs would change that narrative. They’d show that the future isn’t something being built for them; it’s something they can build themselves.

Turning insight into influence

The final proposal was the AI Advocacy Network, a coalition connecting feminist organisations, researchers, campaigners, and policymakers working at the intersection of gender justice and technology.

Today, most efforts in this space are fragmented. Many organisations are doing incredible work but often in isolation, without the infrastructure to share learning or amplify impact.

This network would change that. It would convene strategy sessions, share open resources, and create collective responses to key moments in AI regulation and policy.

It would also help funders and decision-makers understand what’s at stake. Because progress in AI isn’t just about innovation; it’s about the values that guide it, the people who benefit from it, and the accountability structures that sustain it.

By connecting people across disciplines, the network could turn good ideas into systemic change. The aim would be to shift the entire conversation from how fast we can build to who we’re building for.

From awareness to architecture

Every one of these ideas moves beyond awareness. They translate principles into practice and reflection into architecture.

The Fellowship builds capability.

The Watchdog builds accountability.

The Labs build agency.

The Network builds momentum.

Together, they form a blueprint for a fairer digital future – one that distributes power differently and designs inclusion from the ground up. And while these initiatives were conceived in the context of AI, their lessons stretch far beyond technology.

If bias is systemic, then the response has to be systemic too – embedded not in policies written after the fact, but in the assumptions we start with and the choices we make when no one’s watching.

Redefining what progress means

AI isn’t just a technological shift; it’s a test of leadership. It asks us, collectively, how we design for accountability, how we share power, and how we measure progress.

The next part of this series will explore what it means to redefine success – to move beyond the traditional measures of speed, efficiency, and scale, and instead look at equity, dignity, and redistribution of power as the true indicators of progress.

The systems we build are only ever as fair as the values we start from. And that may be the real opportunity here: not just to design smarter technology, but to design a smarter way of leading.


Until next time!

Tania
