
Suppose you hire a customer support agent who answers questions from your customers. 

At some point, that employee begins to give incorrect answers, bad advice, and possibly even dangerous directions. When those actions lead to injury or death, the employee may face criminal charges. At a minimum, your company will likely be civilly liable for damages. Personal liability and criminal culpability for leadership may also follow.

What if the agent isn’t your employee?  What if it’s your AI chatbot?

In mid-February 2024, British Columbia’s Civil Resolution Tribunal set a precedent for corporate civil liability for errant AI under “negligent misrepresentation” when it ruled in favor of the applicant in Moffatt v. Air Canada, 2024 BCCRT 149 and awarded damages. While this was only an award of $812.02 CAD in a small-claims case in Canada, the precedent is clear: companies are liable for their AI’s actions.

The Tribunal determined:

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.” (CanLII.org, 2024)

With very little effort, one can envision how this liability can scale rapidly to devastating proportions.

So, what went wrong and how can that risk be mitigated?

While the precise point that triggered the chatbot’s error (a.k.a. “hallucination”) may never be known, the most likely culprit is faulty data. Data is the foundation, heart, and fuel of Artificial Intelligence, including chatbots.

Possible points of failure include:

1. Training – the model could have been trained on inaccurate, incomplete, and/or out-of-date data.

2. Testing – the model’s testing data could have been curated to address only a subset of conditions.

3. Execution – the model may have been directed to the wrong dataset for current/updated information.

Poor data quality, inadequate data quantity, and faulty data management all sabotage the accuracy of AI systems. The axiom “Garbage In, Garbage Out (GIGO)” has never been more apt than in today’s world of AI.
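To make the GIGO point concrete, here is a minimal sketch of the kind of gate a team might run over a chatbot’s source data before it reaches training or retrieval. It is illustrative only: the record fields, freshness window, and policy names are assumptions made for the example, not details from the Air Canada case or from any particular product.

# Illustrative sketch: flag empty, stale, or duplicated records before they
# feed a chatbot. Fields, dates, and the 180-day freshness window are assumed.
from datetime import date, timedelta

MAX_AGE = timedelta(days=180)  # assumed freshness window for policy content

knowledge_base = [
    {"id": "refund-policy", "text": "Bereavement fares may be requested in advance.", "updated": date(2024, 1, 10)},
    {"id": "baggage-policy", "text": "", "updated": date(2023, 2, 1)},
    {"id": "refund-policy", "text": "Older, conflicting wording.", "updated": date(2022, 6, 5)},
]

def audit(records, today=date(2024, 2, 14)):
    """Return (record id, issue) pairs for content that should not reach the model."""
    problems, seen = [], set()
    for r in records:
        if not r["text"].strip():
            problems.append((r["id"], "empty content"))
        if today - r["updated"] > MAX_AGE:
            problems.append((r["id"], "stale content"))
        if r["id"] in seen:
            problems.append((r["id"], "duplicate id"))
        seen.add(r["id"])
    return problems

for record_id, issue in audit(knowledge_base):
    print(f"{record_id}: {issue}")

Run against this toy knowledge base, the audit flags the empty baggage entry and the stale, duplicated refund entry; real implementations add checks for accuracy, coverage, and versioning, which is exactly where disciplined data management earns its keep.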

The DataInFormation℠ suite of solutions mitigates these risks through careful data quality management, objective data inspection and labeling, and curated preparation of the unstructured data often overlooked in AI implementations. Our services include image and video annotation, data labeling, multi-language audio transcription, computer vision calibration, NLP validation, training data curation, data-as-a-product, LLM prompt engineering, LLM inspection, and related data optimization services. 

We combine state-of-the-art technology with U.S.-based experts to comb data and unlock the tremendous value that information represents to you. And we do it in a socially responsible way. We don’t make data: we make data fit.

Let’s talk about getting your data into formation.  Check out one of our solution briefs to learn how we enable clients to get more value from their AI investment.

 

Contact: Andrew Gibbs at (757) 214-9629, Andrew.Gibbs@liberty-source.com, and on LinkedIn: Andrew Gibbs, CGFM

References:

CanLII.org. (2024, Feb 14). Moffatt v. Air Canada, 2024 BCCRT 149. https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/2024bccrt149.html
Belanger, Ashley. (2024, Feb 16). Air Canada must honor refund policy invented by airline’s chatbot. Ars Technica. https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
