In 2014, the American artist Richard Prince debuted an exhibition entitled New Portraits. It featured images from other people’s Instagram accounts with comments added by the artist, blown up and displayed on a gallery’s walls. One sold for $US90,000 ($129,000), triggering outrage from some of the artists and Instagram users whose work Prince had repurposed. A photographer said he felt “violated”.
Eight years later, Prince has a rival in challenging our ideas about what is art and who is an artist. There are now artificial intelligence (AI) systems that can create photorealistic images of almost anything. Science fiction writer Philip K. Dick once posed the question: do androids dream of electric sheep? The more immediate question is: can they create art, and does it matter if they do?
It turns out they can be instructed to create a photorealistic image of a raccoon wearing an astronaut’s helmet. Or a robot couple dining in front of the Eiffel Tower. Or an anthropomorphic dragon fruit wearing a kung fu belt in the snow.
These cheerful images are products of Imagen, a powerful AI system developed by Google and unveiled late last month. OpenAI’s DALL·E 2 is a rival, though the unaffiliated DALL·E Mini has become better known for generating charmingly mangled images from a text prompt.
Despite the furore created this week by a related Google AI chatbot system, which convinced an engineer at the company it was sentient, to the point that he was suspended for defending its rights, these systems are in no danger of becoming conscious. Instead, Google explained, such AI tools use sophisticated algorithms and draw on large datasets to predict a plausible answer, whether visual or textual, when supplied with a prompt.
Sydney artist Gillian Kayrooz, who often works with digital media, does not feel threatened by her AI rivals. “When you think about what makes an item collectable, it’s attached to an artist and who they are,” Kayrooz says. “That’s what makes something famous, it’s attached to them alongside the work that is delivered to the public.”
Examples abound. Marcel Duchamp’s Fountain is a urinal. Tracey Emin exhibited her own unmade bed. Andy Warhol entrusted the manufacture of many of his works to The Factory, his studio full of assistants, and in one case retroactively deemed a knock-off created by a former assistant to be authentic to save the man from jail time.
Kayrooz is right legally too. Australian copyright law does not recognise AI-generated imagery as art. “The basic rule in Australia is for copyright to subsist there must be a human author,” says University of Sydney professor Kim Weatherall, who researches technology and the law.
But the law works on a spectrum and some machine involvement does not invalidate a human’s ultimate authorship. “If I use Microsoft Word to write a book, obviously it’s just a tool and the expression is determined by me,” says Weatherall.
She is confident most AI-generated imagery sits on the wrong side of the copyright line – too much machine, too little human creativity. “If you write in: ‘I want a picture of Boris Johnson with fish coming out of his ears’ and it generates a picture that looks like that, that’s interesting in that spectrum I was describing before. It is the human being making choices — I want Boris Johnson, I want fish, I want it in his ears — but the expression really is being generated by the system.”
One marker of that is style. DALL·E Mini, which has swept the internet in recent weeks, produces grids of images that are instantly recognisable, with fuzzy, visibly digital renderings and warped faces. Users can also specify a style such as “digital art” or “woodblock print” in DALL·E 2 and Google’s Imagen, but they are bounded by the underlying dataset and a user’s ability to reduce an aesthetic to a brief description.
Still, even if purely AI-generated images aren’t art as recognised by copyright law, they will affect the art world. Plenty of artists have created their own AI tools to visualise data, or transformed images initially created by AI into their own works. Creating a quick logo for a business, a digital illustration for an opinion piece or frames in a cartoon looks likely to become trivially easy, too. That bodes poorly for people doing rote graphic design and animation work, who could be pushed further down the value chain to correcting work performed initially by AI.
Another consequence could be a flattening of style. The internet is already full of futuristic, laser-eyed and steampunk-style images that have become particularly associated with non-fungible tokens, a system for tracking ownership online that could in principle cover any genre of digital imagery. AI images could entrench that aesthetic.
But Ellen Broad, an associate professor at the 3A Institute in the Australian National University’s school of cybernetics, does not believe the most apocalyptic pronouncements. “Do I think this is the end of human creativity and expression? No.”
“In three years’ time, when everybody is using the same kinds of image generation models, there will develop a market ... for something that looks different,” she says.
Broad could be right. But then AI has a long history of fooling humans into seeing deeper meaning in its output. Blake Lemoine, the Google engineer, was entranced by the poetic but nonsensical answers that the company’s chatbot LaMDA generated when he asked about its soul. “I think of my soul as something similar to a star-gate,” LaMDA said, according to a transcript Lemoine published online after his suspension. Funerals have been held for decommissioned robot dogs that Sony released in the 1990s.
“It’s very easy to anthropomorphise,” says Jasmin Craufurd-Hill, an emerging technology researcher and the director, advanced technology, with the Australian Risk Policy Institute. “People have connected and started to assign human characteristics and behaviour to our technology.”
Yet Imagen and DALL·E 2 do not, for the moment, display realistic humans. “There’s a reason there’s an absence of humans,” Craufurd-Hill says. “And it relates back to these incredibly problematic data sets.”
Many large datasets, upon which AI systems frequently draw, include images that are racist, sexist or inappropriate, such as pornography, Craufurd-Hill says. If an AI is trained on such a dataset without proper guardrails, it can end up feeding back the same kind of problematic material, even if users do not deploy it maliciously.
In an elliptical, confessional 2015 essay explaining his Instagram-derived exhibition, Prince seemed to forecast the unsettling no-man’s land in which AI has arrived.
“The ingredients, the recipe, ‘the manufacture’, whatever you want to call it ... was familiar but had changed into something I had never seen before,” he wrote of his works. “I wasn’t sure it even looked like art. And that was the best part. Not looking like art. The new portraits were in that gray area. Undefined. In-between. They had no history, no past, no name. A life of their own. They’ll learn. They’ll find their own way. I have no responsibility. They do. Friendly monsters.”
Published 18 Jul 2022