VR Is Not Film, So What Is It?

William Uricchio on 2016-11-03


An Immerse response

I’m grateful to Janet Murray for reminding us that, at least when it comes to media, we tend to back into the future. We have a habit of taking what we already know and projecting it onto the next shiny new thing coming down the technological highway.

Just as film’s pioneers spent their first decade emulating theater, many of today’s VR makers are doing their best to emulate the logic of film. And just as early filmmakers finally shattered the proscenium arch and evolved new vocabularies, we can expect VR’s makers to find robust, exploration-based alternatives to the still-dominant film paradigm of carefully composed frames, editing strategies and fades to black.

VR is not film.

But if it’s not film, what is it … besides a philosophical claim of some kind or an investment opportunity? What do we mean when we refer to the medium of VR? These questions sparked the Virtually There conference organized by MIT’s Open Documentary Lab in May 2016.

Notwithstanding our tendency to refer to VR in the singular, there are good reasons to disambiguate the concept and its underlying technologies. Only then can we take the next step of developing new expressive vocabularies and techniques.

For example, we know that the most accessible current VR systems and viewers use pre-rendered 360 video. As the photograph and painting are to their panoramic counterparts, so is video to 360 video. Stereoscopic depth, immersion in a seamless world — the illusion is solid, and so are the assets, as evidenced by projects like Darren Emerson’s Witness 360: 7/7. Better still, we can easily comprehend this experience thanks to a few hundred years of practice with earlier pre-rendered panoramas.

WITNESS 360: 7/7 by Darren Emerson

But 3D capture, whether based on photogrammetry, Kinect, or LIDAR (laser scanning), is a different thing altogether. Clouds of fixed data points enable real-time rendering of visual artifacts that can be seen from any position: the virtual world responds to the gaze of the viewer, as we can see with Karim Ben Khalifa’s The Enemy. The modeling is based on rendering algorithms that can be designed to do just about anything, including mimicking the rules of everyday physics. Indexical points of data dancing to the arbitrary rules of algorithms: like so much in the data age, we are still adjusting to the epistemologically challenging results.

Karim Ben Khalifa’s The Enemy at the Virtually There conference

As if the differences between pre-rendered 360 video and the real-time rendering of 3D capture systems weren’t enough, computer-generated worlds, like Nonny de la Peña’s Project Syria, add a third major variant to the VR mix. Add to this the issue of interactivity and things really get complicated. 360 video can be activated with hot spots, and 3D capture can easily emulate the default ‘fixity’ of 360 video. Needless to say, there are countless other refinements (hand controllers, sight activation, mixing pre-rendered elements with real-time elements, etc.). Keep an eye on Deniz Tortum (full disclosure: a recent CMS@MIT and Open Doc Lab alum!), who has been doing some great work on VR terminology and technique, and is highly attentive to these and other distinctions.

Project Syria by Nonny de la Peña

Lumping all of these approaches together as “VR” won’t help us figure out where to go next. Nor will parsing out these distinctions to the nth degree. But we at least need to distinguish between pre-rendered 360 VR and 3D capture VR, with its real-time rendering of pre-scanned data. The former is part of a world that we know well. Of course it requires new storytelling techniques, but at least the world is stable and its artifacts fixed. But 3D capture is a far more ephemeral and responsive medium, shaped by algorithmic instructions, generated on the fly, and about to be endowed with pupil-tracking and sight-activated navigation.

Like our Facebook feeds and Google searches, 3D capture requires new ways of thinking and new literacies if we are to make critical and creative use of it. While it shares some aesthetic and ethical concerns with 360 video, it is far more interesting for what it does not share. And that’s the part we still need to invent! We can begin by looking more closely at domains such as world-building, experience design, and notions of spatial narratives for inspiration.

PS: As for ‘empathy’… At least in media, we understand ‘empathy’ from a long history of representation. Given this history, we can be forgiven for extending the concept to VR. One can critique the entire project of mediated empathy as problematic (and I for one prefer my empathy to be people-centric, not mediated), or potentially inscribe VR of the pre-rendered 360 video variety within it.

But the workings of 3D capture are arguably different, operating more as experience than representation. Some cognitive scientists such as MIT’s Emile Bruneau are exploring this distinction (in Emile’s case, with The Enemy), and their language and observations regarding human response may prove to be far more productive than resuscitating the empathy debate (no matter what the marketers tell us!).

Immerse is an initiative of Tribeca Film Institute, MIT Open DocLab and The Fledgling Fund. Learn more about our vision for the project here.