How a feminist approach to design is the key to tackling bias in AI

13 June 2019

Dr Charlotte Webb is co-founder of Feminist Internet, a non-profit organisation on a mission to make the internet more equal for women and other marginalised groups through creative, critical practice. She was nominated last year by the Evening Standard as one of the most influential people in technology and science in London and has presented at TedX, Internet Age Media and Cannes Lions Festival of Creativity.

“I’d blush if I could.” This is what Apple’s feminised virtual assistant Siri says when she’s told she’s a “b***h”. It’s also the title of a recently published report by Unesco, which aims to expose the gender biases coded into technology products and address the global digital skills gender gap.

The paper is a timely reminder that the voices of smart assistants are now so firmly embedded in the cultural imagination that they feature in everything from NGO reports to TV ads and memes. And AI bias has become a mainstream topic: Joy Buolamwini, founder of the Algorithmic Justice League, is on the six o’clock news talking about biased facial-recognition algorithms; the Barbican’s AI: More than Human exhibition is bringing AI to the masses; and US congresswoman Alexandria Ocasio-Cortez just called out tech companies for their biases in the recent House Oversight and Reform Committee hearing on facial-recognition technology.

This new scrutiny is certainly welcome and necessary. At Feminist Internet we have been arguing for some time that coyness, passivity and flirtation are inappropriate responses to the abusive language often levelled at personal intelligent assistants. The Unesco report also focuses on education, recommending actions such as creating engaging learning experiences, emphasising the meaningful uses and tangible benefits of technology, building safe spaces for women to learn, and encouraging collaborative and peer learning.

Bias is, of course, a human problem – not a technological one. Jacob Ward, former editor of Popular Science, says that biases originate in the mental shortcuts our brains take to make decisions efficiently: an evolutionary function for quickly deciding what was a threat, what was food, and who was friend or foe. Whilst those shortcuts helped keep us alive 70,000 years ago, today they are the misapplication of a system that evolved for a different world.

AI systems, however, are not inherently biased – they only reflect the biases of their designers and the data they are trained on. Designers can therefore reduce the problem by actively thinking about their own biases, using unbiased data, and showing concern for the consequences of how their systems evolve.
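To make the point about training data concrete, here is a toy sketch of what “actively checking the data” can look like in practice. Everything in it (the field names, records and labels) is invented purely for illustration:

```python
from collections import Counter

# Toy illustration: audit group balance in a labelled dataset before
# training on it. The field names and records are invented examples.
training_rows = [
    {"speaker_gender": "woman", "label": "assertive"},
    {"speaker_gender": "man", "label": "assertive"},
    {"speaker_gender": "man", "label": "assertive"},
    {"speaker_gender": "woman", "label": "hesitant"},
]

counts = Counter(row["speaker_gender"] for row in training_rows)
total = sum(counts.values())
for group, n in counts.items():
    # A heavy skew in either group, or in the labels attached to it,
    # warns that a model trained on this data will learn the imbalance.
    print(f"{group}: {n}/{total} ({n / total:.0%})")
```

A check this simple won’t catch every bias, but it makes the invisible visible before a model bakes it in.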

The good news is that at Feminist Internet, we’ve been trying to do all of these things by making stuff. Our approach combines art, design, critical thinking and creative technology development with feminist approaches to design. This is our special spice mix – and it tastes good!

Most recently, we built our first chatbot with the help of Alex Fefegha, Head of Making at Comuzi. The chatbot is called F’xa (pronounced “Effexa”), and it’s a “Feminist Guide to AI Bias”. It has been designed to teach people about the issue and to point them towards actions that can help reduce it. It was launched at EY’s Innovation Realized event in Boston as part of the EY Women Fast Forward workshop on AI and bias. Feminist Internet’s visual designer, Conor Rigby, created a bold, simple graphic identity for F’xa, which is depicted as a blinking eye. It has not been assigned a gender, has a playful tone of voice, and frequently drops emojis into conversation.

Why a chatbot? Globally, they’re big business. Brands are increasingly using them for digital marketing and audience engagement, because they are so effective at generating business and keeping people’s attention.

And what makes F’xa different? What do we mean when we say a chatbot is “feminist”? Obviously, it’s not a feminist in that it self-identifies with feminist politics or theory. It can’t. It’s a chatbot. But it is feminist in the sense that it was designed with feminist design principles in mind.

There is a rich history of relationships between feminism(s) and the internet, from Donna Haraway’s 1985 Cyborg Manifesto and Surfer Grrrls: Look Ethel! An Internet Guide for Us! to the Old Boys Network alliance and the FemBot Collective. The field of Feminist Human-Computer Interaction has evolved over the past decade, since the publication of Shaowen Bardzell’s 2010 article “Feminist HCI: Taking Stock and Outlining an Agenda for Design”, which proposes a set of feminist interaction-design principles to support technology design and evaluation processes. Feminist HCI encourages systems that are “imbued with sensitivity to the central commitments of feminism – agency, fulfilment, identity and the self, equity, empowerment, diversity, and social justice.”

F’xa (via Feminist Internet)

What I love about this is that it’s about action – not only theory – so we can take it and just use it to build better tech. In 2017, feminist AI researcher Josie Young created a Feminist Chatbot Design Process building on Shaowen’s paper, and this has really been our North Star: we’ve adapted Josie’s standards for our own workshops and used them for making bras (yes, bras are a technology!) and for prototyping personal intelligent assistants. Our standards focus on five areas, each containing prompt questions, for example:

1. Users
“Can you identify a user who could be empowered through your chatbot/voice assistant?”

2. Purpose
“Does your chatbot/voice assistant meet a meaningful human need or address an injustice?”

3. Team Bias
“How do your values and position in society relate to the people your chatbot/voice assistant seeks to engage?”

4. Design and Representation
“How will your chatbot/voice assistant remind the user it’s not human?”

5. Conversation Design
“What types of responses would embody feminist values?”

And here are some examples of design decisions that were informed by the standards.

F’xa never says “I”. It is challenging to avoid this little pronoun when designing conversations, but we did so in recognition of the complex emotional attachments people can form to bots that feel very human. Not using “I” reminds people that they are not talking to a “real” person.
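By way of illustration, a rule like this can even be enforced mechanically. The sketch below is hypothetical (it is not F’xa’s actual code): a simple design-time check that flags any drafted response slipping into the first person.

```python
import re

# Hypothetical design-time lint: flag drafted bot responses that use
# first-person pronouns, so the script stays in the third person and
# keeps reminding users they aren't talking to a human.
FIRST_PERSON = re.compile(r"\b(i|me|my|mine|myself)\b", re.IGNORECASE)

def slips_into_first_person(response: str) -> bool:
    """Return True if a drafted response uses a first-person pronoun."""
    return bool(FIRST_PERSON.search(response))

drafts = [
    "I think bias starts with the training data.",        # would be rewritten
    "F'xa thinks bias starts with the training data.",    # passes
    "This bot can share a few definitions of feminism.",  # passes
]

for draft in drafts:
    verdict = "rewrite" if slips_into_first_person(draft) else "ok"
    print(f"[{verdict}] {draft}")
```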

F’xa was created by a team with different races, genders, gender identities and ways of thinking. This bucks the current trend in the AI industry. F’xa gives definitions of AI and feminism from people with different races, genders, gender identities and ways of thinking, recognising that such definitions are culturally situated. And F’xa uses a range of skin tones in its emojis, to acknowledge its voice as something multiplicitous.
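The emoji detail has a concrete mechanism behind it: Unicode defines five Fitzpatrick skin-tone modifiers (U+1F3FB to U+1F3FF) that change the rendering of a human emoji when appended directly after it. As an illustrative sketch only (again, not F’xa’s actual code), a bot could rotate through them like this:

```python
import random

# Illustrative sketch: the five Unicode Fitzpatrick skin-tone modifiers
# (U+1F3FB..U+1F3FF) alter a base human emoji when appended after it.
SKIN_TONES = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]

def with_random_tone(base_emoji: str) -> str:
    """Append a randomly chosen skin-tone modifier to a human emoji."""
    return base_emoji + random.choice(SKIN_TONES)

WAVING_HAND = "\U0001F44B"
print(with_random_tone(WAVING_HAND))  # a different skin tone on each run
```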

The AI Now Institute has also recently released a report of its own – Discriminating Systems: Gender, Race, and Power in AI – which outlines the diversity crisis in the AI sector. The statistics are alarming: only 18 per cent of authors at major AI conferences are women; over 80 per cent of AI professors are men; women comprise only 15 per cent of AI research staff at Facebook and 10 per cent at Google; there is no public data on trans workers or other gender minorities; and only 2.5 per cent of Google’s workforce is black, while Facebook and Microsoft are each at 4 per cent. Clearly, something needs to be done about these issues.

We’re hardwired for bias, and we can’t simply eradicate it from our cognition, but we can slow down, recognise it, and make corrections to systems (technological and neurological) to avoid making the problem worse.

If we don’t take bias seriously as designers, we are going to disadvantage everyone that technological systems don’t currently favour. We risk reinforcing social inequalities and power imbalances that urgently need to be corrected. The feminist methods we’ve been using can be implemented in lots of different contexts, and designers anywhere can use them to help make sure they don’t knowingly or unknowingly perpetuate bias. It’s not going to happen overnight, but the sooner we start actively baking de-biasing into our design processes, and becoming more aware of the social contexts in which bias emerges, the sooner we’ll start creating a more equal socio-technical world.
