In a warning to global airlines embedding AI in their customer service platforms, Air Canada unsuccessfully tried to disavow its own AI-powered chatbot and lost a small claims case brought by a grieving passenger.
The passenger claimed he was misled about the airline’s bereavement fare rules because the chatbot gave inaccurate answers that contradicted the airline’s own policies. Canada’s Civil Resolution Tribunal found the passenger was correct and ordered the airline to pay him $812.02 in damages and fees.
After his grandmother’s death, the passenger used a chatbot on Air Canada’s website to ask about bereavement fares, and it suggested he could apply for a bereavement fare retroactively for that flight. The passenger took a screenshot of the chatbot’s response and showed it to the tribunal. The chatbot told the customer:
“Air Canada offers discounted bereavement fares if you need to travel due to an impending death or the death of a close relative… If you need to travel immediately or have already traveled and would like to submit your ticket for a discounted bereavement fare, please do so within 90 days of the date your ticket was issued by completing the ticket refund request form.”
At the heart of the Air Canada case was a piece of underlined text in that response: a live link to the bereavement fare policy page on the airline’s website. That page contradicts the chatbot’s statement: “Please note that our bereavement policy does not allow for refunds for travel that has already occurred.”
Air Canada argued that the passenger had an opportunity to review the correct policy because that link was included in the chatbot’s response. However, the tribunal found that Air Canada did not explain why the passenger should not have trusted the information the chatbot provided on its own website.
The passenger later learned from an Air Canada employee that the airline does not accept retroactive bereavement requests, but he still asked for a partial refund because, according to the case records, he had “relied on the chatbot’s advice.” Air Canada responded to his complaint by offering $200 toward a future flight, which the passenger refused.
The chatbot’s failure was a “negligent misrepresentation”
The court held that the chatbot’s misinformation amounted to “negligent misrepresentation” on Air Canada’s part.
“Air Canada maintains it cannot be held responsible for information provided by any of its agents, employees or representatives, including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While the chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or from a chatbot,” Civil Resolution Tribunal member Christopher C. Rivers said in his ruling on the case.
Rivers said Air Canada “failed to take reasonable care to ensure that its chatbot was accurate.” The airline did not explain to the court “why a web page titled ‘Bereavement Travel’ is inherently more trustworthy” than its chatbot, nor why customers should have to double-check information found in one part of its website on another part of the same website.
Is Air Canada investing too much in AI?
Air Canada introduced an Artificial Intelligence Lab in 2019 to apply AI to improve operations and customer experience.
“Big data and AI are now a big part of our business,” Air Canada President and CEO Calin Rovinescu told Future Travel Experience at the time.
Last year, Air Canada also announced big plans for an AI-powered voice chatbot for customer service, technology that could eventually replace the very kind of human agent who, in this incident, told the bereaved passenger that the website’s chatbot had gotten it wrong.
Oddly enough, as Business Traveler reported, Mel Crocker, Air Canada’s executive vice president and chief information officer, admitted that the airline’s initial investment in AI-powered voice customer service cost more than paying a call center employee to answer a simple customer question.
“We’re not working on this to take away jobs,” he said. “But if we can solve the things that require a human touch with humans, and solve the things that can be automated with technology, we will do it.”
In what now sounds like a cynical remark, Crocker added that happy customers mean they fly more with Air Canada.
While the small claims case didn’t cost Air Canada much in dollars and cents, it didn’t help the airline’s customer service reputation. Air Canada could easily have afforded to pay the passenger the difference for the bereavement fare.
Had Air Canada done so, the issue would not have attracted the attention it has, nor the scrutiny of why its chatbot was giving customers incorrect information. But the case raises important questions and could set a precedent on airline liability for the performance of AI-powered systems.
Consumer rights and AI hallucinations
AI tools are vulnerable to hallucinations, in which they present fabricated information as if it were fact.
“AI hallucination is a phenomenon wherein a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate,” IBM explains on its website.
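One way companies try to limit that risk is to screen a chatbot’s draft reply against their own published policy before it ever reaches a customer. Below is a minimal, purely illustrative Python sketch of that idea; the policy phrases, fallback text, and function name are hypothetical and are not drawn from Air Canada’s or any airline’s actual systems.

```python
# Purely illustrative sketch (hypothetical names and policy text): check a
# chatbot's draft answer for claims the published policy rules out, and fall
# back to a safe response instead of surfacing an unsupported claim.

# Claims the published bereavement policy explicitly rules out.
FORBIDDEN_CLAIMS = [
    "apply for a bereavement fare after travel",
    "submit your ticket for a retroactive refund",
]

SAFE_FALLBACK = (
    "Bereavement fare requests must be made before travel. "
    "Please see our Bereavement Travel page or contact an agent."
)

def screen_answer(draft_answer: str) -> str:
    """Return the draft answer only if it makes none of the forbidden
    claims; otherwise return the safe fallback message."""
    lowered = draft_answer.lower()
    if any(claim in lowered for claim in FORBIDDEN_CLAIMS):
        return SAFE_FALLBACK
    return draft_answer

if __name__ == "__main__":
    draft = ("You can apply for a bereavement fare after travel by "
             "submitting your ticket within 90 days.")
    print(screen_answer(draft))  # prints the safe fallback, not the bad claim
```

String matching is, of course, far too crude for production use; the point is only that an answer contradicting the company’s own published policy should never reach the customer unchecked.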
Air Canada’s rival WestJet experienced a similar incident in 2018, when the company’s chatbot, Juliet, misread a customer’s glowing comment about a flight attendant and directed the customer to a suicide prevention hotline. In that case, no harm was done, and the airline’s customer found the situation amusing.
But as the recent Air Canada case shows, AI hallucinations can come at a cost.
Although airlines are not financial institutions, they handle significant transactions and currency-like loyalty points and miles, which leaves them exposed to liability if a hallucinating AI misleads consumers. The U.S. Consumer Financial Protection Bureau (CFPB) closely monitors the use of artificial intelligence and the technology’s impact on consumer rights.
In a study on banking chatbots published last year, the CFPB found that while chatbots can help answer customers’ basic questions, “their effectiveness diminishes as the problem becomes more complex.”
“After reviewing consumer complaints and the current market, we found that some people are experiencing significant negative outcomes due to the technical limitations of chatbot functionality,” the bureau noted in its report. It pointed to harms including wasted time, feeling stuck and frustrated, receiving inaccurate information, and paying additional junk fees, and said these are especially pronounced when customers cannot get the answers they need.
The CFPB also warned that “financial institutions risk violating legal obligations, undermining customer trust, and harming consumers when they deploy chatbot technology.” Chatbots, like the processes they replace, must comply with all applicable federal consumer financial laws, and failing to do so may expose businesses to liability for violating those laws. Chatbots can also pose privacy and security risks, and when they are poorly designed or customers are left without support, widespread harm can occur and customer trust can be seriously eroded.
Air Canada isn’t the only company using AI to answer common customer questions. Many airlines and airports around the world are building automated chatbots directly into the customer service flows on their websites, apps, and popular social media channels.
However, the technology is not foolproof, and airlines will need to weigh the legal and financial risks of AI hallucinations. They may need to reconsider, for now, just how much they let AI handle on its own.