Air Canada recently lost a small claims case brought by a passenger who had relied on the airline's AI chatbot's advice about retroactive claims under its bereavement fare policy.

The chatbot had advised the grieving passenger that he could apply for bereavement fares retroactively.

"Air Canada offers reduced bereavement fares if you need to travel because of an imminent death or a death in your immediate family

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.”

The chatbot message also included a link to a webpage which said, in part, that the bereavement policy did not apply to requests for bereavement consideration after travel had been completed.

Air Canada's chatbot is an AI-driven tool that uses a large language model (LLM), which means it can answer questions independently, without any human input. LLMs are vulnerable to AI hallucination: the model may perceive patterns or objects that are non-existent or imperceptible to human observers, producing outputs that are nonsensical or altogether inaccurate.

Perhaps the most remarkable thing about this debacle is that Air Canada sought to argue that it could not be held liable for information provided by one of its agents, servants, or representatives – including a chatbot.

In his decision, Civil Resolution Tribunal Member Christopher Rivers noted that the airline failed to explain why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot, or why customers should have to double-check information found in one part of its website against another part of its website. The Tribunal ultimately found that Air Canada’s conduct amounted to “negligent misrepresentation” and that the applicant was entitled to damages.

While the damages and court fees payable by Air Canada were relatively minor (CAN$812.02), this case should serve as a warning to companies seeking to utilise AI chatbots: the technology's vulnerability to hallucination carries a real potential for legal and financial liability.