Virtual Production Bulletin: An Interview with Todd Bryant, NYU and Rashin Fahandej, Emerson College
The Virtual Production Bulletin is our new Q&A series about the intersection of game engines, LED walls, real-time motion capture, VFX, and animation with documentary and non-fiction storytelling. It is co-produced with the Co-Creation Studio. How can we understand documentary techniques and ethics within new paradigms of virtual immediacy, abstraction, or reenactment? What are the systems that enable and encourage such paradigms? And what other questions should we be asking about the blurring of virtual and physical space?
Todd Bryant started teaching Virtual Production, a course for both film and engineering students at New York University (NYU), at the beginning of the COVID-19 pandemic, as most educational institutions pivoted to online classes. Bryant was additionally tasked with piloting this new curriculum for students with vastly different backgrounds in film and engineering.
What virtual production novices often forget is the urgency of preparing the next generation of makers. To retain the present workforce and build a strong foundation for younger artists, producers, and engineers, integrated education is key, especially as the worlds of games, video, television, film, and emergent technology merge in the coming decades.
A graduate of the Interactive Telecommunications Program at NYU, Bryant has been working at the intersection of engineering and art since the beginning of his career. He now heads the Virtual Production Studio at NYU. The following interview has been edited for length and clarity.
Srushti Kamat: Your course at NYU teaches both Tisch School of the Arts and Engineering students, correct?
Todd Bryant: My classes often have 18 people: four of the students will be from undergrad film, seven will be from the Tisch School of the Arts’ Interactive Telecommunications Program, and another seven will be from the Tandon School of Engineering Integrated Design & Media. They have different instincts. The undergrad film students, all of a sudden, find themselves with cameras that can do anything. They don’t have to lug them up and down stairs. They don’t have to fix lenses. They don’t have to worry about all those lights. They are so unburdened by all that they just have an absolute blast. They already know cinematic storytelling; I’m just giving them a tool. When I teach virtual production in the game engine, I try to use little to no code so that it’s more about treating a real-time engine as an editing tool like you would DaVinci Resolve or Avid or Adobe Premiere. For those students, once they understand the fundamentals of computer graphics, it’s quite enabling.
SK: What about the engineers?
TB: The engineers tend to have more of a predilection towards UX and UI design, making things organized or making sense of chaos. They’re more code-minded, logic and math oriented. Those students naturally flock towards some of the more code-heavy classes, but in this class, they get to treat themselves as artists. It becomes very experimental for them as they’re approaching it from more of a novel standpoint. So they’re learning the rules by breaking them and then seeing what the film students are making. They understand the underlying way the engine works. In the end, it’s great to have them all as one big class working together because they each have intrinsic strengths.
SK: Are there any challenges?
TB: You definitely see some unevenness when it comes to the first half of the semester. Some people will adapt to the tools differently than others. Unlike a first-semester engineering student, a second-semester engineering student is likely to have already taken some sort of game engineering class and is therefore much further along. This class tends to be their first experience doing some sort of narrative storytelling.
SK: Can you tell us about how you organize your classes? How do you teach virtual production?
TB: The opening lecture covers the history of virtual production. While it’s a new buzzword, virtual production is steeped in a lot of historical concepts that are part of cinematic language. We brainstorm and do collaborative design sprints using the interactive whiteboard, Miro. Most of the content was developed during the pandemic.
We use an exercise that was developed at the Upright Citizens Brigade Theatre in Chicago years ago. UCB felt that the Chicago comedy scene was too jokey, and there were too many non sequiturs. So they came up with this game called A to C, almost like Telephone, where one person says something, the next person responds, and so on.
But in this game you say, “Okay, you said the word bird, and I have to say A makes me think of B, which makes me think of C.” For example, bird makes me think of leaves, which makes me think of Paul McCartney. So Paul McCartney is now my C, and that would be the next person’s A. And the next person would say, “Paul McCartney makes me think of the Beatles, which makes me think of infestation.” And so everyone is taking their A and moving it to their C. Therefore, you’re coming up with a rapid-fire way of doing a free association, but it’s connected. And what you learn is to keep your B silent, so you make B something personal about your own experience or something that’s relatable to you. It becomes second nature as you learn this improv technique, and therefore every word you’re spitting out as a C, which the next person takes as their A, is meaningful and important to you.
SK: It’s so fun that you use improv techniques from UCB. It goes to show what overlaps can exist between up-and-coming workflows and more traditional ones like theater. Why did you decide to teach this class?
TB: Working as a technical director in the field, I found myself having to teach too many teams that I was joining. These were ground level things that were core parts of the game engines. Instead of having to teach each production, I created a class as an easy way of bringing them together so that I could just teach it once. There weren’t that many classes a few years ago so everyone was just doing a litmus test. All these companies were coming to me and saying, “How can we adapt this? How can we move into this? I heard it saves money. I heard it frees up creativity. Can we take this?”
Perspective Shift
Rashin Fahandej, one of the students in Bryant’s executive course from June 2021, shares her perspective below.
Fahandej is an Iranian-American filmmaker and immersive storyteller whose work centers on marginalized voices and the role of media, technology, and public collaboration in generating social change. Fahandej is the founder of A Father’s Lullaby, which interrogates structural racism in the criminal justice system. As an assistant professor at Emerson College, she teaches courses in Emerging and Interactive Media and has launched a pioneering XR Co-Creation initiative where students, formerly incarcerated fathers, probation officers, and their children co-create personal documentary projects using AR, VR, 360°, and emerging technology.
SK: Why did you take the course?
Rashin Fahandej: When I began teaching in this field, there were very few departments looking at the intersections of art and science. Johns Hopkins University is one example. It was always important to consider an interdisciplinary and cross-institutional approach via research. I had been looking into virtual production within the realm of emerging media to answer two main interests — first, as a pedagogical tool with the potential to redefine the classroom and introduce new methodologies into higher education. Second was the potential for co-creation and collaboration. I use the Unity game engine myself and had some introductory understanding of Unreal Engine, so I was also curious about how game engines could be used for collaboration between my students and community members. I wear multiple hats as a practitioner, researcher, and educator in the field of emerging and interactive media. Our field is rapidly changing with technological advancements, so I’m excited to shape the field not only through experimental and groundbreaking approaches to storytelling but also through possibilities for reimagining production methodologies. Innovative pedagogical approaches can help prepare the next generations of creators.
SK: What did you learn? What did you take away as an educator, as an artist and as someone involved in co-creation?
RF: There was a lot of wonderful learning! When it comes to technology, both as an artist and educator, I am concerned with the two pillars of equity and accessibility. Both can be defined at multiple levels. For example, Todd’s class at NYU brought together an incredible interdisciplinary group of professionals from across the globe. This access was possible because of educational institutions’ pivoting to online and virtual learning during COVID-19. These are necessary environments for innovation and speculation and they are interdisciplinary in nature. The second pillar is equity. It is crucial that diverse voices and creators contribute to development so that the field is able to reach its full potential in being equitable and just. To that end, my concerns and efforts involve the onboarding of community members to work with emergent technology students in academic settings. For example, in my XR Community Co-Creation course, fathers/community members would Zoom into class with their mobile phones because they didn’t have laptops. There is a lot of potential in virtual production, but when it comes to using the workflow and tools for community members, the accessibility is limited and it is an important question to consider.
What Fahandej refers to is the utility of the game engine beyond a student space, re-echoing broader questions about accessibility and equity. What could it look like, in theory or in practice, if the tool were brought into communities who may not be as familiar or comfortable with emerging technologies? Taking a note from Bryant’s course, what opportunities are there to use these platforms to foster cross-pollination between disciplines that might not be otherwise possible? Can we design a future in which the game engine and virtual production methodologies are iterated with those communities in mind?
For more news, discourse, and resources on immersive and emerging forms of nonfiction media, sign up for our monthly newsletter.
Immerse is an initiative of the MIT Open DocLab and Dot Connector Studio, and receives funding from Just Films | Ford Foundation, the MacArthur Foundation, and the National Endowment for the Arts. The Gotham Film & Media Institute is our fiscal sponsor. Learn more here. We are committed to exploring and showcasing emerging nonfiction projects that push the boundaries of media and tackle issues of social justice — and rely on friends like you to sustain ourselves and grow. Join us by making a gift today.