Salesforce on Thursday announced the general availability of its enterprise chatbot, Einstein Copilot.
Salesforce executives say Einstein Copilot is far less likely than other AI chatbots to hallucinate, that is, to generate false or nonsensical information, a problem that chatbots from Google, Meta, Anthropic, and OpenAI have struggled to overcome.
“They can lie very confidently,” Patrick Stokes, executive vice president of product marketing at Salesforce, said of AI chatbots during his keynote at Thursday’s Salesforce World Tour NYC.
Einstein Copilot does this, according to Stokes, by drawing on a company's own proprietary data, from spreadsheets to documents, across apps stored on Salesforce's own platform as well as on Google Cloud, Amazon Web Services, Snowflake, and other data warehouses.
The chatbot is designed as a kind of intermediary between a company's private data and the large language models (LLMs) it relies on, such as OpenAI's GPT-4 and Google's Gemini. When an employee asks a question like "What should my next step be to address this customer complaint?", Einstein Copilot pulls in relevant business data from Salesforce and other cloud services, attaches that data to the original query, and sends the combined prompt to the LLM, which generates a response.
Salesforce's new chatbot also comes with a layer of protection so that the LLM receiving the prompts cannot retain the company's data.
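The general pattern being described here, retrieve company records, attach them to the question, then call the model, can be illustrated with a minimal sketch. The code below is purely hypothetical: the function names, the placeholder records, and the masking step are assumptions for illustration, not Salesforce's actual implementation or API.

```python
# Hypothetical sketch of the grounding pattern described above.
# Record retrieval and masking are stubbed out; only the prompt
# assembly and the LLM call reflect the described flow.
from openai import OpenAI

client = OpenAI()


def fetch_business_records(question: str) -> list[str]:
    """Placeholder for pulling relevant CRM records (cases, contacts,
    order history) from Salesforce or another data warehouse."""
    return [
        "Case #4812: customer reported a late delivery on 2024-03-12.",
        "Account tier: Premier support, 24-hour response SLA.",
    ]


def mask_sensitive_fields(records: list[str]) -> list[str]:
    """Placeholder for the protective layer: strip or tokenize sensitive
    fields before anything is sent to the external model."""
    return [r.replace("@", "[at]") for r in records]


def grounded_answer(question: str) -> str:
    records = mask_sensitive_fields(fetch_business_records(question))
    context = "\n".join(records)
    # The retrieved data is attached to the employee's question so the
    # model answers from company records rather than training data alone.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided business context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


print(grounded_answer(
    "What should my next step be to address this customer complaint?"))
```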
In a follow-up interview with Quartz, Stokes explained why Einstein Copilot is less likely to hallucinate than other chatbots. "We'll get the data before we send any questions to the LLM," he said, while conceding, "I don't think it's possible to completely prevent hallucinations."
That is why the chatbot also comes with hallucination detection, which collects real-time feedback from Salesforce customers and alerts administrators to weaknesses in the system.
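One way such a feedback loop could work, sketched under assumptions since Salesforce has not published implementation details, is to flag answers that are not grounded in retrieved records or that users mark as wrong, and surface them to administrators. Every name below is hypothetical.

```python
# Hypothetical sketch of a hallucination-detection feedback loop;
# the data model and alerting path are illustrative assumptions.
import logging
from dataclasses import dataclass

logger = logging.getLogger("copilot.audit")


@dataclass
class Interaction:
    question: str
    answer: str
    grounded_in_records: bool   # did the answer cite retrieved business data?
    user_flagged_wrong: bool    # real-time thumbs-down from the customer


def review_interaction(event: Interaction) -> None:
    """Flag possible hallucinations and surface them to administrators."""
    if not event.grounded_in_records or event.user_flagged_wrong:
        # A real system might open a case or notify an admin dashboard;
        # here the suspect interaction is simply logged for review.
        logger.warning("Possible hallucination: %r -> %r",
                       event.question, event.answer)


review_interaction(Interaction(
    question="What is this customer's support tier?",
    answer="Platinum Plus",        # not present in any retrieved record
    grounded_in_records=False,
    user_flagged_wrong=True,
))
```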
AI hallucinations aren't going away
Stokes said imagining a world without AI hallucinations is as "stupid" as imagining a world in which computer networks can never be hacked.
“There are always breakthroughs, and I think that applies to AI as well,” he said. “But what we can do is do everything we can to make sure we build transparent technology that can surface when that happens.”
Ariel Kelman, Salesforce's chief marketing officer, took the argument a step further. "What's interesting is that LLMs were essentially created to hallucinate," he said. "That's how they work. They have imagination."
Research covered by The New York Times last year found hallucination rates of about 5% for Meta's systems, up to 8% for Anthropic's, 3% for OpenAI's, and up to 27% for Google's PaLM.
Chatbots "hallucinate" when they lack the training data needed to answer a question yet still produce a response that appears factual. Hallucinations can stem from a variety of factors, including inaccurate or biased training data and overfitting, since an algorithm cannot reliably make predictions or draw conclusions from data beyond what it was trained on.
Hallucination is currently one of the biggest problems with generative AI models, and it is not an easy one to solve. Because the models are trained on enormous datasets, it can be difficult to pinpoint specific problems in the data, and the data itself may simply be inaccurate, coming as it does from places like Reddit.
That is where Salesforce claims its chatbot is different. But it is still early days, and only time will tell which AI chatbots hallucinate the least.