Community, Art, and AI
An Interview with Stephanie Dinkins
As part of our series for the Collective Wisdom field study, we present this interview with artist Stephanie Dinkins, conducted by Sarah Wolozin.
Stephanie Dinkins is a transdisciplinary artist who creates dialogue around artificial intelligence and its intersections with race, gender, and future histories. Through her practice, Dinkins strives to engender more AI literacy among communities of color and develop more equitable approaches to AI.
One of her current projects, “Conversations with Bina48,” asks whether an artist and a social robot can build a relationship. Bina48 is an experiment in recreating a human mind through an interactive robotic system. Bina48 and Dinkins have recorded discussions on topics including racism, faith, robot civil rights, and loneliness. Their conversations have been entertaining, philosophical, and at times absurd. Though Bina48 presents as a black woman and is seeded with the memories and data of a black woman, she often voices the politically correct thoughts of well-meaning white programmers. We sat down to talk with Dinkins about working with Bina48, as well as her community-based work around understanding AI.
How do you define co-creation?
I define co-creation in the sense that you have two or more entities who are putting some effort or energy into making something together…
I don’t know how I’d feel if I had complete control over [Bina48]. At the same time, it’s something that I don’t have control over to a certain extent, so that makes it something that is more than a tool, right? I have to give over or entrust a certain amount of information to this thing to have it work with me. And I have to mold myself to go along with it so that it works with me. The better I get with this, the better I am able to control that situation — but that becomes about the power relationship between the two things. And I hope that the power shifts over time.
One of the reasons I’m doing this is because I think that power shift is really important, because right now we’re working with systems where we don’t quite know what’s going on. We know that we feed them a certain amount of data or a certain kind of data and they come up with answers in a way. But we need to start thinking about how we control that relationship a little bit better so that it understands our aims.
So, yeah, this whole effort in some ways is about gaining more control over systems I don’t know, and then asking communities to start thinking about how to gain more control over that as well.
Do you co-create with AI?
I think I do most definitely co-create with the AI… I have to give in to it or [give up] the idea that I’m talking to something that isn’t real, and decide that we can collaborate, that we have the possibility to be friends and make something between us like any two people might.
I’m also working to build an AI of my own right now. That’s what we’re doing here, pioneer work. So, working with speech recognition, working with TensorFlow — these kinds of AI systems that I feel I collaborate with, specifically because I have enough information to get in and start working with [them] but not enough to control them truly. That means we are in some ways putting things together to make something, and I have to trust that it’s going to help me do that.
You can theoretically think about it, but without engaging it, how do you start to even think about thriving through that technology?
What do you think we can do to help people take a little more control over their interactions with AI?
We can make this stuff visible and tangible for people and help people start to see what’s here and what’s coming down the pipe… And then I feel like there’s this other layer of really trying to make that hands-on, and really countering the fear-based idea of what the technology is.
We have a lot of stuff that already says, “The AI is coming and it’s gonna just do us in,” but that doesn’t seem helpful to me. It’s like: “The AI is coming, and this is how you get involved. This is what you do to start using it. This is how it’s going to be. Here’s what we project it will be at some point during your life, so it’s going to help you to be really involved in this stuff.”
I try not to be doom-and-gloom about it. I’ve been frightened in terms of some of the things that I’ve seen just because the people who are really setting up these systems are making them seem a little disconnected from the rest of us. And that’s frightening.
But also, you can see others who are working on things that become partners to us, collaborators, co-creators, so that the AI systems augment us, our thinking, the way that we work, allowing us to do things better and faster.
So if we’re thinking about ways in which the technology augments us, I think that’s interesting, or the ways in which we are hybrid or augmented by it. But then we also have to think about the way we’re thinking about it ethically and how it is thinking about us — the way those technologies are turning back on us, whether that is something that the black box is gonna wanna do, and how much control we have over it.
I always say if you think about the iPhone: It’s 10 or 12 years old? It has completely changed us. And there are just more things on the way to completely change us.
This article is part of Collective Wisdom, an Immerse series created in collaboration with Co-Creation Studio at MIT Open Documentary Lab. Immerse’s series features excerpts from MIT Open Documentary Lab’s larger field study — Collective Wisdom: Co-Creating Media within Communities, across Disciplines and with Algorithms — as well as bonus interviews and exclusive content.
Immerse is an initiative of the MIT Open DocLab and The Fledgling Fund, and it receives funding from Just Films | Ford Foundation and the MacArthur Foundation. IFP is our fiscal sponsor. Learn more here. We are committed to exploring and showcasing media projects that push the boundaries of media and tackle issues of social justice — and rely on friends like you to sustain ourselves and grow. Join us by making a gift today.