How art holds AI to account

Mainstream conversations around artificial intelligence, while sometimes couched in a rhetoric of scientific progress, are often driven by anxieties about the tech. Critiquing its shortcomings from the sidelines is not enough, however. Art can help shape the future of artificial intelligence by exposing its limitations and its tendency to further systemic inequalities.

Illustration: Berke Yazicioglu
Date: 4 December 2019

Earlier this year, social media feeds became saturated with selfies framed by a short descriptive phrase. Some photographs were labelled “face” or “person”, while others were more elaborately classified as “economist” or “doctor”. Niche descriptions like “prophet”, “nonsmoker”, “jeweller” or “trumpeter” were also not uncommon. But many were sinister. They reduced the people in the photographs to sexist stereotypes and racist slurs.

Behind this social media trend was ImageNet Roulette, an art project conceptualised and designed by researcher Kate Crawford and artist Trevor Paglen to expose the damaging ways in which the image datasets used to train AI categorise humans. By September – shortly after ImageNet Roulette went viral – ImageNet, one of the world’s most widely used image datasets, had removed 600,000 images and identified 438 categories of people in its database as “offensive, regardless of context”. The removals were announced as part of an investigation into the biased labelling built into ImageNet’s person categories, which clearly reproduced gendered and racial power relations. Crawford and Paglen’s project exposed artificial intelligence (AI) as a new mechanism of social discrimination and so broke new ground in highlighting the power of art to hold big tech accountable.

Above: Illustrations by Berke Yazicioglu

AI has, at times, been presented as the promise of a better future through a rhetoric of scientific progress. From medicine to education, AI is said to offer machine services free of human error and prejudice, improving productivity across disciplines. But what this techno-optimistic rhetoric fails to mention is that these machines are built by human workers whose input is inevitably loaded with social preconceptions and learned biases. ImageNet, for example, created in 2009 by Princeton and Stanford University researchers, collected images from the internet and relied on crowd-sourced labourers to categorise them. If the human workers who supply the labels on which AI systems are trained are left unchecked, then AI will inevitably reflect their biased views in its automated judgments of people’s selfies. ImageNet Roulette challenges this rhetoric of machine objectivity and highlights one way in which art can transform AI from a mode of oppression into a tool of social critique.

Speaking of the images used in early machine learning, Kate Crawford makes a succinct observation. “The photographs can be unexpectedly profound. Many of the subjects have carefully done their hair and makeup, even though these images were only for machine use. I think of this as the high art moment for training data sets for facial recognition.” Much of Crawford and Paglen’s research situates the evolution of training images, from the 1960s to today, within the tradition of vernacular photography. “Vernacular photography is about images of the everyday, such as family events, ID photos, vacations, birthdays. Essentially a kind of accidental art by humans, for humans. Training images are often harvested from exactly these sources: people’s photos scraped from the internet, mugshots and selfies, but used purely for the purposes of machine identification and pattern recognition.”

But these vernacular images are not only created by humans but also sourced and sorted by humans with their own preferences and prejudices. Crawford and Paglen point to the troubling tendency to forget this human element of database training and to position ourselves as passive consumers of an impersonal, and therefore impartial, tech. “Training data sets – the benchmark against which artificial intelligence is trained – are available, by the thousand, presenting data as unproblematic – just a set of images and labels that are somehow neutral and scientific. Part of our project is to open up this substrate of images, this underworld of the visual, to see how it works at a granular level. Once inside this world, we can observe how training data sets have biases, assumptions, errors, and ideological positions built into them. In short, AI is political.”

Above: Illustrations by Berke Yazicioglu

Artist and researcher Mario Klingemann’s algorithmic work explores this intersection between politics, art and AI. “The influence of technology on art might be more immediately visible. Meanwhile, the influence of art on technology is slower,” Klingemann tells It’s Nice That. “But maybe it goes deeper?” Speaking about the social damage caused by AI, as compared to the other impacts digital technologies have on everyday life, Klingemann states that “spam, clickbait and deepfakes are just the visible damage that is being done to us. The true danger lies in the more subtle mechanisms of AI which pass by our defences to influence and control us without us even noticing anymore.”

His project Neural Glitch, for example, uses generative adversarial networks (GANs) and their training code – GANs being a way of training generative models by having two neural networks, a generator and a discriminator, compete against each other in a game, so that, given a training set, the system learns to generate new data with the same statistics as that set – to reveal the intricately constructed but closely regulated facade of reality that machines present us with. By systematically altering these networks, Klingemann shows us how a series of carefully captured portrait photographs can be changed beyond recognition. His art, in other words, reveals how AI not only enhances reality, but also distorts and manipulates it to its own ends.
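
For the technically curious, the sketch below shows what that adversarial game can look like in code: a minimal, generic GAN training loop written in PyTorch (an assumption on our part, as the article does not describe Klingemann’s actual tools or code), with placeholder network sizes and data, in which a generator learns to produce images and a discriminator learns to tell them apart from real training examples.

```python
# A minimal, generic GAN training loop -- an illustrative sketch only,
# not Klingemann's code. Two networks compete: the generator tries to
# produce convincing samples, the discriminator tries to spot fakes.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # placeholder sizes, e.g. 28x28 images flattened

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
criterion = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: learn to separate real images from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = criterion(discriminator(real_batch), real_labels) + \
             criterion(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce samples the discriminator accepts as
    # real, i.e. new data with the same statistics as the training set.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = criterion(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

train_step(torch.randn(32, data_dim))  # one batch of placeholder "training images"
```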

Above: Illustrations by Berke Yazicioglu

That’s not to say that AI doesn’t have its benefits. AI can, for example, streamline the schooling system by reducing tedious administrative duties, allowing teachers more time with their students. It also has the potential to revolutionise healthcare, in both how we diagnose and how we treat patients. AI can improve the mobility of people with specific disabilities; it can help predict the location of future natural disasters; it can even help combat large-scale catastrophes like wildfires.

But as AI also becomes a major terrain of struggle over social justice, it is urgent that we rethink the role of art as a means to monitor AI and call out the injuries it inflicts on marginalised social groups. Transmedia artist Stephanie Dinkins does this by “asking the questions that others are not asking”. Art, she believes, can help shape the future of artificial intelligence by exposing tech’s limitations. “I never thought I would be questioning how data and algorithms are going to help shape a world that maintains a multiplicity of ways of existing in the world, cultures and a spectrum of knowledges that understand and value street smarts, for example, as much as academic learning, but here I am.”

Her project, Not The Only One (N’TOO), speaks to this agenda. N’TOO is the multigenerational memoir of a black American family told from the perspective of AI. “It is a voice-interactive digital entity designed, trained, and aligned with the concerns and ideals of people who are underrepresented in the tech sector,” Dinkins tells It’s Nice That. “N’TOO is empowered to pursue the goals of its community through deep-learning algorithms that create a new kind of conversant archive.” Dinkins trained her AI on oral histories supplied by three generations of women in her family. In so doing, she feeds her chatbot culturally attuned data that give her dataset a broader narrative scope. N’TOO subsequently draws on these resources as the basis for its responses to questions, which it answers in the first person. Over time, Dinkins says, user input and viewer participation will grow N’TOO’s vocabulary and enhance the chatbot’s narrative ability.
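
To make the idea of a “conversant archive” a little more concrete, here is a deliberately simple, hypothetical sketch – not Dinkins’s implementation, which relies on deep-learning models, and with placeholder corpus lines and function names – in which a retrieval step picks the archived, first-person oral-history line closest to a visitor’s question, and user contributions grow the archive over time.

```python
# A hypothetical, simplified "conversant archive": retrieval over a small
# corpus of first-person lines. Illustrative only -- not N'TOO's actual
# deep-learning approach, and the corpus below is invented placeholder text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder lines standing in for a multigenerational oral-history corpus.
archive = [
    "I grew up hearing my grandmother's stories told out loud, again and again.",
    "My mother taught me that street smarts matter as much as book learning.",
    "We kept our family's history alive by passing it from voice to voice.",
]

def respond(question: str) -> str:
    """Answer in the first person by returning the archived line closest to the question."""
    vectoriser = TfidfVectorizer()
    matrix = vectoriser.fit_transform(archive + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1])
    return archive[scores.argmax()]

def contribute(new_line: str) -> None:
    """User input expands the archive, so the entity's range of answers grows over time."""
    archive.append(new_line)

print(respond("What did your family teach you about learning?"))
```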

Above: Illustrations by Berke Yazicioglu

In Dinkins’ work, storytelling, art and technology come together to reimagine the narratives surrounding AI. For Dinkins, it is crucial that N’TOO’s language is both culturally and linguistically specific, since it reflects the communities that engage with it and, in doing so, create it. “By centring oral history and creative storytelling methods, this project hopes to spark crucial conversations about AI and its impact on society, now and in the future,” Dinkins says. “Through my work, I hope to raise questions that I think big tech and average folks should be considering as we encode a new world through AI. I hope that many others, both those trained in related fields and average folks, pose their own questions as well.”

Klingemann and Dinkins are only two of the many artists joining Crawford and Paglen in developing a new form of artistic critique that both utilises and targets machine learning and AI. Their work highlights the extent to which – despite the progressive rhetoric often used to describe the tech industry – data-driven processes are informed by the predominantly white, male industry that creates them. In so doing, the artists not only expose AI for what it is: a new, human-made, data-driven system of classification that both reflects and recreates society’s existing inequalities. They also employ AI to point the way to a more equitable future, one in which every one of us can reap the benefits of the world’s fast-unfolding digital transformation. Art, in relation to AI, is not only asking important questions, but offering substantial solutions. It is art that is holding new tech, and those who make it, to account.

About the Author

Daphne Milner

Daphne has worked for us for a few years now as a freelance writer. She covers everything from photography and graphic design to the ways in which artists are using AI.
