The writer is founder of Sifted, an FT-backed site about European start-ups
Generative artificial intelligence systems’ tendency to “hallucinate”, or simply make things up, can be amusing and sometimes alarming, as one New Zealand supermarket chain discovered. After Pak’nSave launched a chatbot last year to suggest recipes to budget-conscious shoppers using up leftover ingredients, its Savey Meal-bot advised one customer to make an “aromatic water mix” that could have produced chlorine gas.
Legal professionals have also learnt to be wary of the output of generative AI models, given their capacity to invent entirely fictitious cases. A recent Stanford University study of the responses of three cutting-edge generative AI models to 200,000 legal queries found hallucinations to be “widespread and disturbing”. Asked specific, verifiable questions about randomly selected federal court cases, OpenAI’s ChatGPT 3.5 hallucinated 69 per cent of the time, while Meta’s Llama 2 model reached 88 per cent.
To many users, generative AI’s hallucinations are an annoying flaw that they expect the technology companies will fix one day, much as they did email spam. How far companies can do so is a matter of live research and fierce debate. Some researchers argue that hallucinations are inherent to the technology itself. Generative AI models are probabilistic machines trained to deliver the most statistically likely response. Human traits such as common sense, context, nuance and reasoning are hard to encode.
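The claim that these models simply emit the statistically likely continuation can be made concrete with a toy sketch. Everything below (the mini-vocabulary, the probabilities, the `generate` helper) is invented for illustration and bears no resemblance to how production models are built, but it shows the core mechanic: the sampler picks fluent-looking next words by weight alone, with no check on whether the result is true.

```python
import random

# Toy "language model": for each word, a hand-written distribution over
# possible next words. The vocabulary and probabilities are invented for
# illustration; a real model learns billions of such statistics from text,
# and has no notion of whether a continuation is factually accurate.
MODEL = {
    "cited": [("Smith", 0.6), ("Jones", 0.3), ("nobody", 0.1)],
    "Smith": [("v.", 0.9), ("said", 0.1)],
    "v.": [("Jones", 0.7), ("State", 0.3)],
}

def generate(start: str, max_steps: int, rng: random.Random) -> str:
    """Sample a continuation one word at a time, weighted by probability."""
    words = [start]
    for _ in range(max_steps):
        options = MODEL.get(words[-1])
        if not options:
            break  # no statistics for this context: stop generating
        candidates, weights = zip(*options)
        words.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("cited", 3, random.Random(0)))
```

A legal-sounding phrase such as “cited Smith v. Jones” can emerge purely because those word-to-word transitions are probable, which is the mechanism behind fabricated case citations: fluency and truth are optimised separately, and only the first is in the training objective.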
While experts wrestle with that conundrum, some users regard machine-generated fiction as a stimulating feature and have been exploring its creative possibilities. “I adore hallucinations,” says Martin Puchner, a Harvard University professor and author. “They are so fascinating when it comes to artistic creation.”
As the author of Culture: A New World History, Puchner takes a distinctive view of creativity. In his book, he argues that humans have for centuries been blending the inputs of previous generations and other cultures to produce new syncretistic outputs. Consider how much imperial Rome borrowed from ancient Greece, how the Italian Renaissance drew on Arabic scholarship, how Japan adopted Chinese writing and philosophy, or how Jamaican Rastafarian culture absorbed Ethiopian traditions.
Every civilization, Puchner writes, tends to overrate the originality of its own culture to bolster dubious claims of superiority and ownership. “Such claims conveniently forget that everything comes from somewhere, is mined, borrowed, moved, purchased, stolen, recorded, copied, and often misunderstood,” he writes. “Culture is a massive recycling endeavor.”
The parallels with generative AI are striking. In some respects, our machines are doing today what humans have always done: mixing up different cultural inputs to create slightly altered outputs. In that sense, generative AI can be seen as a colossal cultural syncretism machine. Dependent on imperfect, limited data and overconfident in generalizing from the specific, machines may be more like fallible humans than we sometimes imagine. Hallucinations may be less an algorithmic aberration than a reflection of human culture.
That all sounds theoretical. What does it mean in practice? In short, everything depends on the use case. Generative AI models can be a wonderful way to enhance human creativity by generating new ideas and content, particularly in music, images and video.
Prompted in the right way, these models can act as a valuable sparring partner and a tireless source of inspiration for creative projects. They can be the algorithmic counterpart of thinking outside the box. Puchner himself has been experimenting with personalized chatbots to converse with historical figures, such as Aristotle, Confucius and Jesus, based on their words and ideas. “Prompt engineering should be part of Harvard’s syllabus,” Puchner says.
The founder of a generative AI company tells me we are fast entering the era of “generative media”. “The internet and social media brought the cost of distribution down to zero. Now, thanks to gen AI, the cost of creation is also going close to zero,” he says. That trend could unleash a surge of creativity, for good and ill. It will also heighten concerns about disinformation and intellectual property theft.
Impressive as generative AI is as a fiction writer, it still has a long way to go as a reliable writer of non-fiction. That is fine so long as we use it as a tool to enhance human capabilities rather than imagine it as an agent that can replace all human thinking. To hallucinate a phrase: caveat prompter.