When marketers start using ChatGPT, Google’s Bard, Microsoft’s Bing Chat, Meta AI, or their own large language models (LLMs), they need to be concerned about “hallucinations” and how to prevent them.
IBM provides the following definition of hallucinations: “AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
“Generally, when a user makes a request of a generative AI tool, they want an output that appropriately addresses the prompt (i.e., a correct answer to a question). Sometimes, however, AI algorithms produce outputs that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. In other words, they ‘hallucinate’ the response.”
Suresh Venkatasubramanian, a Brown University professor who helped co-author the White House’s AI Bill of Rights blueprint, said in a CNN blog post that the problem is that LLMs simply “generate plausible answers” to user prompts. That, he said, is all they are trained to do.
“So, in that sense, any answer that sounds plausible, whether it’s accurate or factual, made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”
He said a better comparison for these computer outputs than hallucinations or lies, which imply that something is wrong or even malicious, would be the way his young son told stories when he was four years old.
“All you have to do is say, ‘So, what happened next?’ and he would just keep producing more stories,” Venkatasubramanian added. “And he would go on and on.”
Frequency of hallucinations
If hallucinations were “black swan” events, rare occurrences, marketers would need to be aware of them but wouldn’t have to pay them much attention.
However, Vectara research shows that chatbots fabricate information in at least 3% of interactions, and in as many as 27%, even when steps are taken to prevent such occurrences.
“We gave the system 10 to 20 facts and asked for a summary of those facts,” Amr Awadallah, Vectara’s chief executive and a former Google executive, said in an Investis Digital blog post. “The fundamental problem is that the system can still introduce errors.”
Researchers say hallucination rates may be even higher when chatbots perform tasks beyond simple summarization.
What marketers should do
Despite the potential challenges posed by hallucinations, generative AI has many benefits. To reduce the possibility of hallucinations, we recommend the following:
- Use generative AI only as a starting point for content creation: Generative AI is a tool, not a replacement for your work as a marketer. Use it as a starting point, and craft prompts that answer the questions that will help you get the job done. Make sure your content always aligns with your brand voice.
- Cross-check LLM-generated content: Peer review and teamwork are essential.
- Check the sources: LLMs are designed to process vast amounts of information, but some of the sources behind that information may be unreliable.
- Use your LLM tactically: Run drafts through generative AI to look for missing information, and verify anything the generative AI suggests before using it (a minimal sketch of such a check follows this list). Not only because hallucinations can occur, but because good marketers scrutinize their work anyway.
- Monitor developments: Stay up to date with the latest developments in AI so you can continually improve the quality of your output and stay aware of new features, hallucination behavior, and other emerging issues.
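To make the cross-checking step concrete, here is a minimal Python sketch of one way a team might flag draft claims that aren’t backed by approved source facts. The `call_llm` function, the prompt wording, and the example facts are illustrative assumptions rather than any particular vendor’s API or workflow, and a human reviewer still makes the final call.

```python
# Minimal sketch: flag claims in an LLM-generated draft that are not supported by
# approved source facts. `call_llm` is a placeholder, not any specific vendor's API;
# a human reviewer still decides what ships.

from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for whichever model API your team uses; replace with a real call."""
    return "(model reply would appear here once call_llm is wired to a real model)"


def flag_unsupported_claims(source_facts: List[str], draft: str) -> str:
    """Ask the model to list draft claims that the approved facts do not support."""
    facts_block = "\n".join(f"- {fact}" for fact in source_facts)
    prompt = (
        "You are reviewing marketing copy for factual accuracy.\n"
        f"Approved source facts:\n{facts_block}\n\n"
        f"Draft copy:\n{draft}\n\n"
        "List every claim in the draft that is not supported by the approved facts. "
        "If every claim is supported, reply exactly: No unsupported claims found."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    facts = [
        "The loyalty program launched in March 2024.",
        "Members earn 2 points per dollar spent.",
    ]
    draft = "Since 2020, members have earned 5 points per dollar in our loyalty program."
    # The flagged claims go to a human editor, not straight into published copy.
    print(flag_unsupported_claims(facts, draft))
```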
Benefits of hallucinations?
But Tim Hwang of FiscalNote says hallucinations, while potentially dangerous, have some value.
In a blog post for Brandtimes, Hwang said: “LLMs are bad at all the things you expect computers to be good at. And LLMs are good at all the things you think computers are bad at.”
He further explained that while using AI as a search tool is not a great idea, “storytelling, creativity, aesthetics, these are all things that this technology is fundamentally really good at.”
Brand identity is essentially what people think about a brand, so hallucinations should be considered a feature rather than a bug, Hwang said, adding that it is even possible to make an AI hallucinate its own interface.
In other words, by configuring an LLM with an arbitrary set of objects and instructing it to assess something that normally cannot be measured, or that would be costly to measure by other means, marketers are effectively putting the LLM’s hallucinations to work.
One example mentioned in the blog post is asking the AI to assign each object a score based on how well it matches the brand, and then, based on those scores, to suggest which consumers are most likely to become lifelong customers of the brand.
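As a rough illustration of that idea (a sketch under assumptions, not Hwang’s or any vendor’s actual method), the Python snippet below asks a model to assign brand-fit scores to a list of items. The 1-to-10 scale, the prompt wording, and the `call_llm` placeholder are assumptions; the scores are deliberately hallucinated estimates, useful as creative input rather than as measurements.

```python
# Rough sketch: deliberately let an LLM "hallucinate" brand-fit scores for items that
# have no objective metric. The scale, prompt wording, and call_llm placeholder are
# illustrative assumptions; treat the output as creative input, not measurement.

import json
from typing import Dict, List


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned reply so the sketch runs."""
    return json.dumps({"retro sneakers": 8, "luxury watches": 3, "festival tickets": 9})


def score_brand_fit(brand_description: str, items: List[str]) -> Dict[str, int]:
    """Ask the model to score each item from 1 to 10 for how well it fits the brand."""
    prompt = (
        f"Brand description: {brand_description}\n"
        f"Items: {', '.join(items)}\n"
        "Score each item from 1 to 10 for how well it fits the brand. "
        "Reply with only a JSON object mapping each item to its score."
    )
    return json.loads(call_llm(prompt))


if __name__ == "__main__":
    scores = score_brand_fit(
        "A playful streetwear brand aimed at festival-going twenty-somethings.",
        ["retro sneakers", "luxury watches", "festival tickets"],
    )
    # The scores are hallucinated estimates: a starting point for discussion,
    # not a substitute for real consumer research.
    print(scores)
```

In practice, the same pattern could feed such scores into segmentation or campaign discussions, with humans deciding what, if anything, to act on.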
“Hallucination is, in some ways, a fundamental element of what we want from these technologies,” Hwang said. “I think it’s in the best interests of people in advertising and marketing to manipulate hallucinations, rather than reject and fear them.”
Emulate the consumer perspective
A recent application of hallucinations is Insights Machine, a platform that allows brands to create AI personas based on detailed target-audience demographics. These AI personas interact as if they were real individuals, offering diverse responses and perspectives.
Although AI personas may occasionally give unexpected or hallucinated responses, they serve primarily as catalysts for marketers’ creativity and inspiration. Humans remain responsible for interpreting and applying these responses, underscoring the fundamental role hallucinations play in these technologies.
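To give a sense of the general pattern (a sketch of the idea, not Insights Machine’s actual implementation), a persona can be expressed as a prompt built from demographic details and then paired with a marketer’s question. The `Persona` fields and the `call_llm` placeholder below are assumptions for illustration.

```python
# General sketch of a demographic-based AI persona; this is not Insights Machine's
# implementation. The Persona fields and call_llm placeholder are illustrative assumptions.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns a canned reply so the sketch runs."""
    return "(persona reply would appear here once call_llm is wired to a real model)"


@dataclass
class Persona:
    name: str
    age: int
    location: str
    interests: str

    def to_prompt(self) -> str:
        """Turn the demographic details into an instruction the model can follow."""
        return (
            f"Answer as {self.name}, a {self.age}-year-old living in {self.location} "
            f"who cares about {self.interests}. Stay in character."
        )


def ask_persona(persona: Persona, question: str) -> str:
    """Pair the persona instruction with a marketing question and return the reply."""
    prompt = f"{persona.to_prompt()}\n\nQuestion from the marketing team: {question}"
    return call_llm(prompt)


if __name__ == "__main__":
    persona = Persona("Maya", 27, "Austin", "live music and sustainable fashion")
    # The reply is a hallucinated perspective: useful for inspiration, not survey data.
    print(ask_persona(persona, "What would make you try our new loyalty app?"))
```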
As AI becomes more central to marketing, machine errors are bound to occur, and only humans can catch them. That is the enduring irony of the age of AI marketing.
This article was written by Pini Yakuel, co-founder and CEO of Optimove.