Beloved actor, film star and refugee advocate Cate Blanchett stands at a podium addressing the European Union Parliament. “The future is now,” she says authoritatively. So far, so normal, but then she says, “But where are the sex robots?”
The footage is from an actual speech Blanchett gave in 2023, but the rest is fictional.
Her voice was generated by Australian artist Xanthe Dobie using text-to-speech platform PlayHT for Dobie’s 2024 video work, Future Sex/Love Sounds, which imagines a feminist utopia populated by sex robots and voiced by celebrity clones.
Much has been written about the world-changing potential of generative AI, from large language models (LLMs) such as OpenAI’s GPT-4 to image generators like Midjourney. These models are trained on massive amounts of data and can generate everything from academic papers, fake news and “revenge porn” to music, images and software code.
While supporters praise the technology for speeding up scientific research and eliminating routine administrative tasks, it also presents a wide range of workers, from accountants, lawyers and teachers to graphic designers, actors, writers and musicians, with an existential crisis.
As the debate rages, artists like Dobie are using these very tools to explore the possibilities and precarity of technology itself.
“The technology itself is spreading at a faster rate than the law can keep up with, which creates ethical grey areas,” says Dobie, who uses celebrity internet culture to explore questions of technology and power.
“We see replicas of celebrities all the time, but data on us, the little people of the world, is collected at exactly the same rate… It’s not a question of the technology’s capabilities; it’s how flawed, stupid, evil people choose to use it.”
Choreographer Alisdair McIndoe is another artist working at the intersection of technology and art. His new work, Plagiary, premieres this week at Melbourne’s Now or Never festival before a season at the Sydney Opera House, and uses custom algorithms to generate new choreography that the dancers receive for the first time each night.
Although the AI-generated instructions are specific, each dancer is able to interpret them in their own way, making the resulting performance more like a human-machine collaboration.
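The article does not describe McIndoe’s actual algorithm, but a minimal sketch suggests how a “new score each night” might work: random combinations of body parts, actions and images, seeded by the performance date so that each night’s instructions are different yet reproducible. All names and vocabulary below are hypothetical illustrations.

```python
import random
from datetime import date

# Hypothetical movement vocabulary; the real work's instructions are unknown.
BODY_PARTS = ["left elbow", "right knee", "head", "spine", "hips"]
ACTIONS = ["turn", "circle", "freeze", "collapse", "extend"]
IMAGES = ["a newborn cow", "melting wax", "a flock of birds"]

def nightly_score(performance_date: date, n_instructions: int = 3) -> list[str]:
    """Generate a reproducible set of instructions for one performance.

    Seeding the RNG with the date means the same night always yields the
    same score, while different nights yield different ones.
    """
    rng = random.Random(performance_date.toordinal())
    return [
        f"{rng.choice(ACTIONS)} your {rng.choice(BODY_PARTS)} "
        f"while imagining {rng.choice(IMAGES)}"
        for _ in range(n_instructions)
    ]

print(nightly_score(date(2024, 8, 22)))
```

The specificity-plus-ambiguity McIndoe describes lives in the generated sentences: the instruction is precise, but how a dancer embodies “imagining a newborn cow” remains open to interpretation.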
“The frequently asked questions [from dancers] are: ‘I’m being asked to turn my left elbow repeatedly, go to a back corner and imagine I’m a newborn cow. Am I still turning my left elbow at that point?’” McIndoe says. “It quickly becomes a really interesting discussion of what is meaning, interpretation and truth.”
Not all artists are fans of the technology: In January 2023, Nick Cave posted a scathing critique of a ChatGPT-generated song that imitated his work, calling it “bullshit” and a “grotesque mockery of humanity.”
“Songs come from suffering,” he says, “which means they arise from the complex, internal human struggle of creation. And as far as I know, algorithms don’t have emotions.”
Painter Sam Leach doesn’t agree with Cave’s idea that “creative genius” is an exclusively human trait, but he encounters this kind of “total rejection of technology and everything related to it” frequently.
“I’ve never been particularly interested in anything to do with purity of soul. I see my practice as a way of studying and understanding the world around me. I just don’t believe I can draw a line between myself and the rest of the world and define myself as a unique individual.”
Leach sees AI as a valuable artistic tool that allows him to address and interpret a wide range of creative artifacts. He has customized a series of open-source models trained on his own paintings, reference photographs, and historical artworks to produce dozens of works, some of which are surrealistic oil paintings, such as a portrait of a polar bear standing on a bunch of chrome bananas.
He justifies his use of sources by emphasizing that he spends hours “editing” with a paintbrush to refine the software’s suggestions. He also uses an art critic chatbot to question his ideas.
For Leach, the biggest concern about AI isn’t the technology itself or how it’s being used, but who owns it: “A very small number of giant companies own the biggest models with incredible power.”
One of the most common concerns about AI is copyright. This is an especially complicated issue for people working in the artistic sector, whose intellectual property is being used to train multi-million dollar models, often without their consent or compensation. For example, last year it was revealed that 18,000 Australian books had been used in the Books3 dataset without permission or payment. Booker Prize-winning author Richard Flanagan described this as “the biggest act of copyright theft in history.”
And last week, Australian music rights organisation APRA AMCOS released the results of a survey which found that 82% of its members are concerned that AI will reduce their ability to make a living from music.
In the European Union, the Artificial Intelligence Act came into force on August 1 to mitigate such risks. Meanwhile, in Australia, although eight voluntary AI ethics principles have existed since 2019, there are still no laws or regulations specifically governing AI technology.
This legal vacuum has forced some artists to create their own custom frameworks and models to protect their work and culture. Rowan Savage, a Kombumerri sound artist who works as Salvage, has collaborated with musician Alexis Weaver to develop an AI model called Koup Music, a tool that transforms field recordings of his own voice, made on Country, into digital sound. He will present the process at the Now or Never festival.
Savage’s abstract dance music sounds like a dense flock of computerized birds, animal-code hybrid lifeforms that are haunting, alien, and familiar all at once.
“When people think of Indigenous Australians, they sometimes associate us with the natural world. There’s a kind of infantilisation there, and we can use technology to counter that,” Savage says. “We often think there’s a strict separation between what we call nature and what we call technology. I don’t think there is. I want to break that down and let the natural world influence the technological world.”
Savage designed Koup Music to give him full control over the data used to train it, so that it wouldn’t appropriate other artists’ work without their consent. In turn, the model prevents his recordings from being co-opted into larger networks. Savage sees the recordings as the property of the community.
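The article doesn’t detail how Koup Music enforces this control, but one simple pattern consistent with Savage’s description is a consent gate on the training corpus: a recording enters the training set only if it carries an explicit consent flag granted after community consultation. The class and field names below are hypothetical illustrations, not Koup Music’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    path: str
    recorded_by: str
    community_consent: bool  # granted only after consultation with community members

def training_set(recordings: list[Recording]) -> list[Recording]:
    """Keep only recordings the community has consented to use for training."""
    return [r for r in recordings if r.community_consent]

corpus = [
    Recording("birdsong_01.wav", "Salvage", True),
    Recording("ceremony_02.wav", "unknown", False),  # excluded from training
]
print([r.path for r in training_set(corpus)])
```

The same gate works in both directions: recordings without consent never reach the model, and because the corpus is curated locally, the model’s outputs can’t silently feed material back into larger networks beyond the community’s control.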
“I think it’s OK to make recordings on your Country for personal use, but I wouldn’t feel comfortable sharing them with the world [for any person or thing to use],” Savage says. “We cannot collect sources without speaking to key members of the community. As Aboriginal people we have always had a sense of community, and there is no private ownership of sources as the Anglo world sees it.”
For Savage, AI holds great creative potential, but also poses “so many dangers”: “My concern as an artist is, how do we use AI ethically but still allow it to actually do all sorts of exciting things?”