A man was trying to get a bereavement discount. An AI chatbot told him he could. The airline said the bot was wrong. A tribunal disagreed, and in the process rewrote the rules on corporate responsibility for AI.
In November 2022, Jake Moffatt's grandmother passed away. He needed to book last-minute flights from Vancouver to Toronto for the funeral — never an inexpensive proposition.
Before booking, he visited Air Canada's website and asked the airline's AI chatbot whether he could get a reduced bereavement fare. The bot told him yes — and added a detail that would become the center of a legal battle: that he could book his ticket at full price now and apply for the bereavement discount retroactively within 90 days.
Moffatt took the bot at its word. He booked the tickets and paid full price.
Court filings and news reports preserve a reconstruction of that chatbot exchange. The key detail: the chatbot gave confident, specific, and incorrect information about the airline's own policy.
Moffatt flew to Toronto, attended his grandmother's funeral, and returned. He then submitted his bereavement discount application along with his documentation.
Air Canada denied it. The airline's customer service team explained that bereavement fares had to be requested before booking and could not be applied retroactively. The chatbot had been wrong.
But Moffatt had screenshots. He'd followed the bot's explicit instructions. When he escalated his complaint, Air Canada's position was extraordinary: the chatbot was a separate legal entity from the airline, and the company couldn't be held responsible for information it provided.
Moffatt filed a claim with British Columbia's Civil Resolution Tribunal, an online tribunal built to resolve small claims disputes without lawyers. The amount at stake was just over $800. But the implications were enormous.
In February 2024, Tribunal Member Christopher Rivers issued his decision. He rejected Air Canada's argument entirely and, in doing so, set a significant precedent.
On its face, this was an $812 dispute. But the tribunal's reasoning has implications that extend far beyond one airline ticket.
Companies can no longer treat chatbot errors as a "computer said so" shield. If you deploy a chatbot on your platform under your brand, you own its outputs.
AI hallucinations — confident fabrications — aren't just embarrassing. When a customer relies on one, the company deploying the AI bears the consequence.
Companies that use AI to answer policy questions must take reasonable care to ensure those systems give accurate answers, or else clearly disclaim their limitations.
This Canadian ruling arrived as the EU AI Act and similar regulations were being finalized, and it reinforced the global regulatory direction: companies that deploy AI systems are responsible for what those systems say.
The dollar amount was small. The principle was not. For the first time, a tribunal looked at an AI chatbot and said: "Someone built this thing. Someone deployed it. And someone has to be responsible for what it says."