AI design, reimagined
In my last article, I explored who gets to shape technology – and who gets left out. This next part of the series looks at what happens after we ask those questions.
When we began the workshops for this project, it became clear how many people live with the consequences of systems they never had the chance to shape. I remember someone saying, “AI feels like something that happens to us, not for us.”
And it’s not just about technology; it’s about the design itself – how ideas are formed, by whom, who gets a say, and which perspectives define the outcome. Once you start to notice that pattern, it’s hard to unsee it.
That was the moment I stopped thinking about AI design as a technical exercise and started seeing it as a leadership practice. It’s not just code or data. It’s a reflection of who holds the power when decisions are made.
Designing with, not for
One of the things that made this project different was how it was built. We didn’t collect opinions at the end to validate what had already been decided. Instead, we practised co-design: bringing lived experience into the room from the very start and allowing it to shape what we built together.
That shift might sound subtle, but it changes everything.
When people are invited to help design something, the conversation moves from feedback to ownership. You stop asking whether people agree with what’s been done and start exploring what needs to change for it to work in the real world. You start asking, “Who could this harm?” and “Who is missing here?”
It also changes who feels responsible for the outcome. When people see their experience reflected in a system, they don’t just support it – they protect it, challenge it, and improve it.
The same applies to leadership: how we build teams, strategies, and cultures. Too many decisions are still made in rooms that don’t reflect the people they affect. I know it feels quicker that way, more efficient, but the cost shows up later.
It shows up in disengaged teams where people stop offering ideas. It shows up in leaders who are confused about why performance has dropped. It shows up in cultures where people do what’s asked, but never more, because they’ve learned their voice doesn’t matter. When people can’t see themselves in the decisions being made, they stop believing those decisions are for them.
Leadership, like AI design, is stronger when it’s shared. When people see their fingerprints on the process, they give more of themselves to the result.
What kept us honest
As the workshops unfolded, five core values began to anchor our conversations. You could call them “feminist principles” or principles of inclusive design. I think of them as habits of better leadership.
“Care” came up first. It sounds simple, but in practice, it’s rare. Care asks us to consider impact before progress, to notice who might be harmed, or what voices haven’t been heard.
“Access” was next. It pushed us to remove barriers so people could participate fully, not just be invited in. Language, confidence, time, money, accessibility. All the quiet filters that decide whose voices count.
“Transparency” became the thread that held everything together. In technology, it’s about how systems make decisions. In leadership, it’s about how people do. When there’s clarity on the “why”, people can connect to the “what”. Without transparency there’s no trust, and curiosity – the very thing innovation depends on – disappears.
“Collaboration” reminded us that the best answers are built with more than one mind. Collaboration is less about consensus and more about challenge, perspective, and shared ownership.
“Resistance” asked the hardest question of all. Not “Can we?” but “Should we?”. Not every idea deserves momentum. In leadership, that’s integrity. It’s holding the line when something moves fast but feels wrong. It’s understanding that real progress isn’t just about doing more but doing what matters.
These values might have been born from discussions about AI, but they extend far beyond it. They’re a blueprint for how we lead, build, and decide. They’re the difference between systems that look good on paper and systems that work in real life.
What this looks like in practice
One example we discussed was a tool being developed to help survivors of online abuse generate the formal language required to request takedowns and legal action. It supports them to be understood by systems that often ignore plain speech. That is what designing with people looks like in the real world: it holds the person at the centre and removes the translation tax the system would otherwise demand.
Inclusive AI design means not only protecting people from harm but restoring agency. It makes space for complexity rather than simplifying people into categories.
And this is the kind of thinking AI desperately needs. Because technology not only reflects bias, it reproduces it at scale. To change that, we have to start with processes that share power early, question assumptions constantly, and build transparency in from the beginning.
Looking ahead
Every system is shaped by the assumptions built into it, and AI makes those assumptions visible at scale. Once you start to see how bias moves through design, you begin to see how easily it hides behind data, language, and logic.
That’s where the next part of this series begins: with data. Who it represents. Who it erases. And how power quietly embeds itself in the numbers we trust. Because until we understand how bias becomes data, we can’t begin to design systems – or organisations – that are truly fair.
Until next time!
Tania.