A summary of key insights from Deepfakery, a web series presented by the Co-Creation Studio and WITNESS
You catch yourself laughing at a retweet of Ben Shapiro’s face on Cardi B and Megan Thee Stallion’s bodies in a contemporary rendition of the WAP music video. You know it’s fake. A few posts later, you see Mark Zuckerberg talking about Facebook’s complete lack of intention to make the world a better place. It looks real but his message seems uncharacteristic. You’re unsure. Both videos are labelled deepfakes. But do they serve the same function?
From August to October this year, the Co-Creation Studio and WITNESS launched a web series titled Deepfakery. The six-episode series featured 20 panelists from fields such as activism, academia, archives, art, documentary, and engineering, all participating in a critical conversation around the theme: Prepare [for deepfake technology], Don’t Panic.
In recent years, news reports on the rise of deepfakes have both raised concerns and provided room for amusement. The Merriam-Webster dictionary defines a deepfake as a video that has been edited using an algorithm to replace the person in the original video with someone else (especially a public figure) in a way that makes it look authentic.
Although deepfakes are predominantly audiovisual, they belong to a broader ecosystem of disinformation amid shifting understandings of reality. But this ecosystem isn’t new. Alteration of truth has been a satirical, artistic and journalistic tool used across mediums for centuries.
As Daniel Howe, an artist behind the project Spectre, says in Episode 1, “Synthetic media in various forms has been around almost as long as media itself. Art, satire, and activism have been used in a wide range of socially positive ways, mediating between technology and public discourse. So, as Walter Benjamin, the cultural critic, said almost 100 years ago, ‘Art is the product of this very conversation between technology and society.’”
Taking another angle, Karen Hao of MIT Technology Review speaks in Episode 2 about how such elements of satire are viewed by creative agencies and the advertising industry as a way to infiltrate popular discourse. She mentions the need to decide how we label what is considered a deepfake so we can then develop the boundaries around its ethical use.
Regardless of their usage for art and satire, there is no denying that deepfake videos pose unique threats. As Jane Lytvynenko, a senior reporter at BuzzFeed News, says in Episode 2, “Videos are easily taken out of context and miscaptioned, which is a very big problem for misinformation. And part of the reason why they’re a much bigger problem than something like a written article is that people very much lean towards the seeing-is-believing gut instinct.”
While sophisticated AI technology is accessible to few, most deepfakes online originate from non-technical users. In September 2019, the firm Deeptrace found that out of roughly 15,000 deepfake videos online, 96% were pornographic, with the faces of female celebrities mapped onto the bodies of porn performers. Samantha Cole, staff writer at Vice, highlights the gender dynamics shaping the majority of these videos: “Deepfake’s origin was in making celebrity porn. There was Photoshop with static images and then deepfakes took it a step further using algorithms to add them to videos. The people I talk to aren’t machine learning experts but rather people interested in using this new technology. But what they miss out on is consent. Most of the deepfakes that exist online are porn.”
The nexus of the deepfake conversation today centers on the need for global media literacy. The speed and traction with which community-generated deepfake videos spread online has shifted perceptions of truth. User discernment of what constitutes a fake is a critical factor in determining collective trust in digital media. However, within the conversation about literacy, there is a tendency to overlook regional subjectivities. These regional contexts depend on the socio-political positions of producers and users, which in turn rest upon existing power dynamics.
In Episode 6 of Deepfakery, Adebayo Okeowo, Africa Program Manager at WITNESS, points out this need: “Media literacy comes up over and over again because not a lot of people know how to quickly identify or spot a shallow fake. And of course, that is very prevalent on the African continent. Miscontextualization, sharing stuff that was not relevant to issues, went viral. It should be stopped and tracked.” Similarly, Brandi Collins-Dexter of the Shorenstein Center sheds light on considerations for Black communities in the US when discussing digital engagement. She says, “Black people over a range of ages are more likely to engage with new technology and gaming consoles and share new technology and content. Also, humor, parody and satire are often part of cultural exchanges.” When attempting to mitigate misinformation, we run the risk of overlooking the cultural specificities of different groups, especially when examining who uses these altered media forms at their inception.
Furthermore, in Episode 4, Megha Rajagopalan, international correspondent at BuzzFeed News, discusses the intersection of deepfakes and surveillance. Speaking to the possibility of governments using deepfakes to implicate activists or journalists, she asks, “How much more powerful could it be if it was possible to completely fabricate conversations particularly in an environment where the media is already heavily censored and heavily constrained?” In state-controlled systems, deepfakes could well be used as pretexts to further restrict free speech.
One potential model for intentional and ethical use of deepfakes comes from David France, journalist, filmmaker and director of the HBO film Welcome to Chechnya. To maintain the anonymity of LGBTQ+ activists in Chechnya, France worked with Ryan Laney, a veteran VFX specialist, to find faces that could be substituted for the activists’ own via deepfake technology and help protect their identities. France sourced LGBTQ+ social media influencers who already had public presences around their sexual orientation or gender identity. Episode 3 of the series explores France’s methodologies, ethics and findings. Despite these advances, what might a dialogue with adjacent industries such as production and casting look like as documentary filmmakers start using deepfakes? In Episode 6, artist/director Francesca Panetta discusses the implications for actors who may voice altered characters. Panetta brings up the need for the actors’ union SAG-AFTRA to update its policies, which at present do not cover boundaries and regulations for actors taking up AI work.
What is apparent from the conversations that came out of this series is that public discourse around deepfakes has only just begun. This technology has slowly but powerfully generated a host of new questions around misinformation, truth, and artistic usage. It may well take five to ten years for the technology to become capable of real-time manipulation. Meanwhile, building an ecosystem that prepares societies to protect the most vulnerable while also maintaining freedom of speech will require policy-makers, big tech organizations, scholars, artists and journalists working together.
Watch the episodes below:
Episode 1 — Faking the powerful
Episode 2 — Not funny anymore: Deepfakes, manipulated media, and mis/disinformation
Episode 3 — Using AI-generated Face Doubles in Documentary: Welcome to Chechnya
Episode 4 — Boundary lines? Deepfakes weaponized against journalists and activists
Episode 5 — Manipulating memories: Archives, history and deepfakes
Episode 6 — Still funny?: Satire, deepfakes, and human rights globally
Immerse is an initiative of the MIT Open DocLab and receives funding from Just Films | Ford Foundation and the MacArthur Foundation. IFP is our fiscal sponsor. Learn more here. We are committed to exploring and showcasing emerging nonfiction projects that push the boundaries of media and tackle issues of social justice — and rely on friends like you to sustain ourselves and grow. Join us by making a gift today.