Can AI imagine the next victims of police violence? Future Wake explores injustice through code

What are the issues with using AI for predictive policing? Could it reduce crime or does it reinforce racial biases in the criminal justice system? The co-founder of art project Future Wake examines unjust policing using AI and storytelling.

12 January 2022

There’s nothing more powerful in a creative brainstorm than the “what if?” question. Art project Future Wake was born out of a collaboration between two creative minds who kept bouncing “what if?” proposals between each other without filtering out controversial subjects. Future Wake ended up being a project that kept on giving, both surprising and, we admit, frightening us.

When discussing racial biases in AI practices during our “what if?” conversations, my co-creator and I came upon the issue of predictive policing – using AI to predict crime. With the resurgence of the Black Lives Matter movement sparking protests around the world, we knew there was a story to be told around predictive policing. Luckily, in Mozilla we found a partner and sponsor who could work with us to realise this concept.

So what’s the issue with using AI for predictive policing? Advocates say that forecasting crime using statistics will help reduce it. Critics, however, argue that the dependence on algorithms raises questions around accountability and transparency. These algorithms are black boxes – there’s little to no way to scrutinise how they make decisions. And if they make a wrong decision, who is ultimately accountable – a line of code? Nor does the use of technology remove bias from police decision-making processes. As these models are dependent on historical data, the predictions will inherit and reproduce historical police biases. It also means that forecasting crime with statistics could reinforce and repeat the racial biases that already exist in the criminal justice system.
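To make that feedback loop concrete, here is a toy simulation of our own (purely illustrative – it is not drawn from any real deployment): two districts with identical true crime rates, where patrols are allocated according to past records and new incidents are only recorded where patrols go. The skew in the historical data never corrects itself.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = np.array([0.5, 0.5])    # two districts with identical true crime rates
recorded = np.array([60.0, 40.0])   # but the historical records already skew to district 0

for _ in range(50):
    # The "prediction": send patrols where the records say crime is.
    patrol_share = recorded / recorded.sum()
    # Incidents are only recorded where officers are present to observe them.
    recorded += rng.poisson(100 * patrol_share * true_rate)

print(recorded / recorded.sum())
# District 0's share stays inflated rather than correcting to 0.5:
# the model faithfully reproduces the bias in its training data.
```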

With many countries experimenting with predictive policing methods, and with the BLM movement raising questions on racial bias in policing, we started to ask the question: What if we could use predictive policing on the police?

From this provocative question emerged three more: Do we have the means to implement these algorithms? What would the artwork say if we used flawed predictive policing methods? From what angle can we tell a story with this data?

“As these models are dependent on historical data, the predictions will inherit and reproduce historical police biases”

Future Wake

Can we technically do it?

When using machine learning algorithms, nothing is more important for a trustworthy output than a trustworthy input. Acquiring an official governmental database on policing, crime, and/or predictive policing turned out to be an impossible quest. We’d need a database that spanned a significant amount of time, was as complete as possible, and had enough depth (variables/factors) to let us approach the prediction from multiple angles. Getting official US databases with these features was either impossible, too expensive, or required a lot of waiting.

Enter civilian initiatives Fatal Encounters and Mapping Police Violence – two independent research organisations which, in the absence of an open official database, took on the challenge of collecting and sharing comprehensive data on police killings nationwide. Despite the huge amount of time and effort spent meticulously collecting this data, they publish their datasets openly and for free. Even though we know these databases are not 100 per cent accurate, we believe they are the best we could get. That these initiatives need to exist at all demonstrates the lack of available information on police killings nationwide.

These incredible initiatives allowed us to dive deep into the data to retrieve a story we could tell. The Fatal Encounters dataset contained data on 30,798 victims killed in the United States between January 2000 and September 2021, while Mapping Police Violence contained data on 9,468 victims killed between January 2013 and September 2021. After removing duplicate cases, we had on our screens the demographic, location, and event descriptions for 30,990 victims of police brutality.
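As a rough sketch of that deduplication step – with hypothetical file and column names, since the real exports from the two projects are structured differently – the merge amounts to something like this:

```python
import pandas as pd

# Hypothetical file and column names; the real exports differ.
fe = pd.read_csv("fatal_encounters.csv")            # 30,798 victims, 2000-2021
mpv = pd.read_csv("mapping_police_violence.csv")    # 9,468 victims, 2013-2021

combined = pd.concat([fe, mpv], ignore_index=True)

# Treat records that share a name, date, and city as the same incident.
combined = combined.drop_duplicates(subset=["name", "date", "city"])

print(len(combined))  # ~30,990 victims after deduplication
```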

“Reading through this database was a confrontational, aggravating, but most of all saddening experience”

Future Wake

Finding a story in data

Reading through this database was a confrontational, aggravating, but most of all saddening experience. These are 30,990 short stories involving the police that all end fatally. Besides having police involvement in common, they blatantly show how flawed society is. There are no winners in these stories; you wouldn’t wish this desperation, violence, and trauma on anyone. After reading and discussing many stories from the database, we decided to focus our artwork on this feeling. Rather than entering a political discussion directly about the police, we wanted to state the obvious, trying an angle we could (hopefully) all agree on: these deaths are sad and very real. Art is often strongest when it has a simple core; we built Future Wake around empathy.

With this direction in mind, we decided to tell a story about the victims instead of the police. What if we predicted the next fatal victim of police brutality? Using known predictive policing methods, we developed a series of algorithms that allowed us to predict who, when, where, and how fatal encounters will occur in the five most populous cities in the United States. It was amazing and intimidating to see these results. We are very aware that our predictions are flawed – in the same way that predictive policing is. Still, we think this only adds to the concept of Future Wake. If nothing changes in these systems, biases are likely to repeat themselves.
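We can’t reproduce the full pipeline here, but a minimal stand-in (our illustration, with hypothetical column names – not Future Wake’s published code) shows where the bias enters: the “next” event is resampled from the empirical distribution of past events in a city, so whatever skew exists in the historical record reappears in the forecast.

```python
import pandas as pd

def sample_next_event(victims: pd.DataFrame, city: str, seed=None) -> dict:
    # Resample a hypothetical "next" fatal encounter from the empirical
    # distribution of past events in this city - which is exactly how
    # historical bias re-enters the prediction.
    past = victims[victims["city"] == city]
    row = past.sample(1, random_state=seed).iloc[0]
    return row[["age", "gender", "race", "cause_of_death"]].to_dict()

# e.g. sample_next_event(combined, "Houston") on the merged dataset above
```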

“We are very aware that our predictions are flawed – in the same way that predictive policing is. Still, we think this only adds to the concept of Future Wake. If nothing changes in these systems, biases are likely to repeat themselves.”

Future Wake

Empathy for data

It’s tough to have feelings for a spreadsheet. The usual data visualisations – graphs, tables, flowcharts – don’t often stir emotions. The data lifecycle usually runs from translating real life into collections of information to processing it into a graph of some kind. Instead, we wanted our data – which started out as human – to end up as human as possible. To remind audiences that these predictions could happen to real people, we wanted to bring our data to life.

This time we found machine learning algorithms to be a great help in generating the story visually. AI generated a face and a story for each prediction. The stories were generated from the real-life events in our database using the GPT-2 text generator. Sourcing pictures of past victims from our database, we generated new portraits using StyleGAN2-ADA. These portraits were then deepfaked (using the First Order Motion Model) onto a video performance donated by a real person, bringing the predicted story to life.
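For the curious, the text side of that pipeline can be sketched with Hugging Face’s transformers library and the off-the-shelf GPT-2 checkpoint (Future Wake fine-tuned on the event descriptions in the database first; the prompt below is invented):

```python
from transformers import pipeline

# Off-the-shelf GPT-2 checkpoint; a fine-tuned model would load the same way.
generator = pipeline("text-generation", model="gpt2")

prompt = "On a Friday evening in Houston, officers responded to"  # invented prompt
story = generator(prompt, max_new_tokens=80, num_return_sequences=1)
print(story[0]["generated_text"])
```

The visual side follows the same generate-then-animate pattern: StyleGAN2-ADA trained on the sourced portraits produces a new face, and the First Order Motion Model transfers the donated performance onto it.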

Even though we have worked with this material for a while, and the generated visuals are not perfect, it feels chilling every single time we see a finished performance for the first time.

With Future Wake we hope the viewer will get a sense of urgency. Though the predicted wakes are not real, someone like them could very realistically be the next victim. Visit it here.


Copyright © Future Wake, 2021
