Notes on the wonders and woes of generative AI — how might artists adapt to rapidly changing technological landscapes?
At this year’s TED2023 Conference in Vancouver (April 17–21, 2023), themed around “Possibility,” attendees were abuzz about the role increasingly intelligent and large-scale AI models such as ChatGPT, Stable Diffusion and Midjourney might play now and in the near and distant future.
More than just an artfully curated week of talks, the TED Conference has always been a showcase of innovation. Interwoven between the familiar presentations were displays of the bright, dazzling and sometimes too-good-to-be-true world unlocked with AI. From live performance capture tools that could transform actors into aliens with barely any equipment to worldbuilding tools that could generate architectural styles from across the globe, the audience was inundated with striking images of possibility. Attending on the ground for the second time as conference support staff through the Bezos Scholars Program, I found the adult daycare culture of TED, with dragonboat courses and circus workshops squeezed between opportunities for idea generation, a strong departure from my quotidian life, to say the least.
In the ecological domain, Karen Bakker spoke of the potential of harnessing AI analysis to protect and restore endangered coral reefs. Part and parcel with the product placement historically featured at TED, Sal Khan showcased the potential of Khanmigo, an AI writing coach chatbot that can help students write essays, structure arguments and even speak to literary figures to better understand the works with which they engage.
Throughout, the conversation always returned to a tempering of optimism around the possibilities of AI. Audience members were consistently probed about their level of fear, their sense of impending dread at the rapid change driven by tech corporations’ “space race” to own and dominate the landscape. As with other technologies that had come before (the most recently featured hype cycle at TED was 2022’s emphasis on Mars and the Metaverse), this year’s TED was a showcase of the bite-sized: a balance of stories of possibilities and constraints, all distilled to less than 15 minutes.
Beyond catastrophizing: The art of designing with constraints
One striking session focused on AI creativity offered insight into larger explorations of what it might look like to create art and documentary in tandem with AI tools. On April 20th, 2023, TED gathered AI artists and technologists K Allado-McDowell, Refik Anadol, Bilawal Sidhu and Eileen Isagon Skyers to speak more on the subject.
Admittedly, a lot of what I know about AI art is prompted by a fear of my impending obsolescence, and by circles of artists, writers and creatives who find themselves up against the increasingly threatening foe of automation. As a journalist and creative in the games space, it’s difficult not to lose hope when every headline or Tweet is a reminder that a machine may soon be able to do what I do better, faster and cheaper.
When prompted with the same question, of whether artists should feel threatened by the proliferation of such tools, panelists offered more optimistic thoughts, preferring to frame the technology as adding to the range of possible forms of expression rather than replacing any singular art form. While the concerns are understandable, especially considering the professionalization and specialization needed to “master” any particular suite of creative tools in the pre-AI era, this future is here whether we like it or not, and we must be equipped to adapt.
This particular moment of flux, as AI artists establish not only their own distinctive signatures but also the legitimacy of their artistic medium, is one that AI artist and writer K Allado-McDowell likened to the emergence of photography. Photography, in their eyes, did not discredit painting or the value of collecting paintings, just as AI art will not displace traditional forms of art. And people are not necessarily creating art solely for the final product; novelty, creative inspiration and trial-and-error might emerge as particularly important in the years to come.
What they cited instead was the emergence of a human-AI feedback loop, one that goes beyond simply prompting “make this and you get it back.” As a co-author of several book projects, operas and games with GPT-3, they are particularly attuned to this feedback loop and to finding ways to set rules and constraints that push both them and generative AI to “come up with things that wouldn’t be possible alone.”
The key challenge is sifting through the noise — what some of the panelists called “AI garbage.” “I do think the abundance with generative image making tools like Midjourney, Stable Diffusion and DALL-E makes it hard … to actually find what the constraints are in the system,” proposed Allado-McDowell. “I think we will see really incredible AI-generated artworks that are standalone images made solely with those tools and they will have their own vocabulary. But I don’t think we’ve seen the Duchampian moment yet with these tools. We’re waiting for that person to come and break the system open.”
Bilawal Sidhu, former product manager for Google Street View and now a technology influencer and content creator, also echoed sentiments about the “journey” of creation. Sidhu, who grew up making home movies using clunky video editing software, has hope for the next generation of content creators. He said, “I think it’s going to be more about the canvas on which you create and the kind of story that you tell at the end of the day versus the tools that you end up using.” Sidhu sees even more potential now for anyone with talent and vision to succeed.
AI can cut the barriers to entry, elevating an artist’s craft to the next level. Sidhu pushed writers to consider visual imagery, visual artists to consider animation and 3D, and game makers to consider fully immersive worlds. While the metaverse may currently seem to be in decline, it now offers many more possibilities to be populated by a diversity of creators using these tools, not just the large AAA gaming companies and the Pixars of the world, but the indies and small studios. AI could also be used to generate new game mechanics or to create personalized gaming experiences based on a player’s individual preferences and playing style.
The unsolved problems of copyright and data stewardship
The panel also touched upon the collective ethical reckoning with these tools that we are currently witnessing. The use of stolen, unattributed data for algorithmic art generation is not yet matched by processes for legal recourse and copyright protection. AI may, as we’ve seen in numerous sectors, continue to amplify existing human biases, rely on incomplete training data and perpetuate harm against groups at the margins, including people of color, queer and trans communities, disabled folks and inhabitants of the Global South. And the question remains: in a situation where AI and humans work together, who is the rightful owner and how should contributor(s) be cited?
When prompted with this question, panelists had notably less to offer, not to my surprise. In many ways, we are still in the theory stage, struggling to keep up with the rapid progress currently driven largely by OpenAI. That said, efforts to contain AI are by no means new. For example, the European Union has proposed a set of ethical guidelines for trustworthy AI, which includes principles such as human oversight and accountability, transparency and non-discrimination.
Artist Refik Anadol works on projects that meld the physical and digital, including Artificial Realities, an attempt to construct physical corals using 3D-printed AI sculptures. He cited a fascination with archival processes. Well known for striking data visualizations that have graced the MoMA and Gaudí’s Casa Batlló (the latter of which was auctioned as an NFT for $5 million in 2021), Anadol has been working in these spaces for almost a decade, harnessing what he calls “large, focused and publicly available datasets, visualizing … ‘humanity’s collective memories.’”
On the panel, Anadol spoke about standing on the shoulders of those who came before and the importance of working in partnership with ancestral knowledge. AI, in his eyes, is a tool for preservation and learning, and these principles of respect and care were foundational in the creation of an “open-source AI rainforest model that can reconstruct extinct flora and fauna based on [Amazonian] tribes’ deep collective knowledge.”
Anadol also spoke extensively about the importance of open source technologies and the power of artists to create and harness their own AI models, putting their narratives into their own hands. While we have yet to see the full accessibility of such computation and tools at our disposal, the option is on the table for artists to continue to explore.
A final confession
After a week of sitting through TED, it became a running joke among presenters to have ChatGPT, or another large language model, generate your talk for you, a tantalizing lure that journalist me, in all my sleep deprivation, was inspired to explore for the first time in writing this recap. So I tried it with OpenAI’s ChatGPT:
JF: Can you write a summary of key points from a panel transcript on generative AI creativity and the role in the arts from this year’s TED conference, moderated by Chris Anderson?
And then:
JF: Can you expand that to be 800 words?
While (disclaimer) none of the content made its way into this final article, I was struck by the potential for near instantaneous information generation. I had with me an interactive, encyclopedic research assistant who might lack inspiration in their prose, but could constantly go back to the drawing board to get me a satisfactory response. I was able to organize my thoughts more clearly, find a flow within an unwieldy transcript and put together an article I probably would not otherwise have had the time to write.
And I’m perhaps late to the game, out of resistance or a critical stance toward the potential for harm such tools can cause. On the other side of the globe, my grandmother recently used the tool to solve her own interpersonal problems after being slighted by a colleague. After copy-pasting their read receipts, she asked ChatGPT why someone might respond that way.
My grandmother likened using the OpenAI chatbot to her memories of raising spirits in her childhood: a Pandora’s box too dangerous to be tampered with for too long. And that was something I felt strongly while continuing to prompt the tool for more, a dopamine rush with every “generate.” It was freeing in some ways to be pulled into rabbit-hole tangents, asking the chatbot to summarize, rephrase, expand. I saw the value of the tool much as older generations, who once complained about how laborious research was before the internet, came to see the value of the web, and I may one day venture back to using it to organize thoughts and follow threads the same way I’d trace citations on Google Scholar.
In closing, I reflect on the question Chris Anderson asked all of the panelists: what advice they would give to a young artist looking to start their career today. In media curator Isagon Skyers’ words, “I would say to follow your intuition, because that’s one thing AI doesn’t have.”
“Yet,” says Sidhu.