Award-winning performer Holly Herndon is using artificial intelligence to pioneer novel forms of composition, pushing back against AI-generated music that produces an endless glut of the same instead of anything radically new.
Composer Holly Herndon photographed in 2020. (Holly Herndon, Mat Dryhurst / Ars Electronica / Flickr)
There’s a moment, about five minutes into one of the most recent episodes of the podcast Interdependence, when the usual banter is brushed aside and the real stakes of the conversation become clear. “People are still talking about music,” cohost Mat Dryhurst says, “as if the world hasn’t changed significantly.”
For some years now, there have been few surer guides to the significance of that change than Dryhurst and his partner, the award-winning performer and composer Holly Herndon. Ever since 2020, Herndon and Dryhurst have uploaded new episodes of Interdependence once every week or so, featuring conversations with the likes of artificial intelligence (AI) researcher François Pachet, performance artist Marina Abramović and Taiwan’s inaugural minister of digital affairs, Audrey Tang. At times, the couple seem to be so far ahead of the curve that a note of frustration creeps into their voices at the rest of the world’s failure to see just how far the social and technological goalposts have shifted.
I first interviewed Herndon and Dryhurst back in 2013, around the release of her album Movement. Back then, other than as a tool for “smashing things together” and creating subtly shifting “textural elements,” artificial intelligence–type tools were not playing “a huge part” in Herndon’s music, she told me. But in the years since, the technology has moved fast. In November 2018, just as new kinds of large language models were being developed by corporations like Google and OpenAI, Herndon posted on Twitter: “AI is a deceptive, over abused term. Collective Intelligence (CI) is more useful. It’s often just us (our labor/data), in aggregate, harnessed to produce value by a few, who maybe have an easier time acting with impunity because we are distracted by fairytales about sentient robots.” Less than a week later, she released the first video from her third album Proto, a record that tackled these issues head-on.
The impetus for the project came from an unusual place. After years touring a solo laptop set, Herndon felt an urge to sing with other people again. Growing up in Johnson City, Tennessee, Herndon sang in church choirs, and now found she “missed the joy of singing with other people as well as the joy of the audience feeling it,” as she put it in a January 2020 interview. At the same time, she and Dryhurst are both self-confessed “nerds interested in nerdy topics,” and the developments in large language models and deep learning were too tempting to resist. AI programs tend to be built on vast datasets, usually scraped from the words, images, and sounds that millions of people all over the world have posted on the internet. These datasets are then used to train algorithmic systems to produce new content that mimics whatever they’ve been trained on, creating a kind of aggregate of a huge wealth of human thought. But Herndon is not interested in reproducing other people’s ideas, and she is too conscientious to appropriate other people’s data without their consent, as the big software companies do.
Using a powerful gaming computer, Herndon started building her own dataset from her own voice, her partner’s voice, and contributions from a vocal ensemble of fourteen other close friends, all of whom were properly credited and compensated for their contributions. The resulting program, Dryhurst and Herndon’s “AI baby,” was christened Spawn, and was regarded during the album’s recording process not simply as a generator of content but as a collaborator alongside its fleshier participants. Sounds and compositional ideas flowed back and forth between human and machine, getting stranger and stranger at each step of the process. The end results are utterly beguiling, like medieval plainsong beamed down from another planet, with the lines between organic and digital blurred to the point of total indistinction.
Proto is an object lesson in the aesthetic benefits that can come from working with machine learning systems. It’s also a first step in thinking through the ethical minefield that such systems represent. To the rising tide of “deepfake” songs, in which artists like Travis Scott and Kanye “Ye” West are digitally reconfigured to voice lyrics they never uttered, Herndon’s objection is twofold. On the one hand, the practice is extractive, profiting from other people’s labor without their consent. On the other hand, it’s dull, producing an endless glut of the same instead of anything radically new.
Herndon and Dryhurst’s approach suggests another way is possible. They don’t have all the answers yet, but the point of their Interdependence podcast is to at least initiate a series of conversations that might start to grope toward them. As Herndon herself said in an interview with the Guardian in May 2019, “We haven’t yet figured out how to deal with intellectual property [in pre-digital music] and AI is like if a sample could sprout legs and run. It is recording technology 2.0, and we don’t have an ethical framework.”
With major record labels like Warner Records already investing in systems like Endel, an app that generates an endless grey goo of “personalized sound environments,” it might be worth paying attention to the fruits of a more artist-led approach to the immense wealth of our own human collective intelligence.