About This Article

When the Algorithm Sits at the Table: AI’s Emerging Role in Claims and Mediation examines how artificial intelligence is reshaping insurance claims, litigation, and the mediator’s role. Shawn Patey outlines current insurer uses — from claims triage and fraud detection to damages prediction — and explains how AI is no longer peripheral but a core participant in decision-making. The piece surveys regulatory momentum in Canada, noting the lapse of Bill C-27 while highlighting OSFI and FSRA expectations pushing firms toward documented AI governance, human oversight, and model risk management. It contrasts U.S. NAIC guidance as a preview of likely Canadian standards. Practical risks are detailed: bias from historical data, poor handling of outliers, opacity that invites bad-faith claims, and the compression of human discretion when AI outputs become default settlement baselines. Patey offers concrete guidance for mediators: treat AI outputs as starting points, press for high-level transparency about methodologies, insist on human review, and probe fairness testing. The article urges mediators to build basic AI literacy so they can challenge flawed scores and preserve empathy, context, and judgment at the settlement table, ensuring data-driven inputs inform — but do not dominate — resolution.

When the Algorithm Sits at the Table:

AI’s Emerging Role in Claims and Mediation
by Shawn Patey ~ Mediator

The Quiet Disruption in the Claims World

Artificial intelligence has moved beyond novelty and firmly planted itself in Ontario’s insurance and litigation landscape. Insurers are deploying AI for everything from claims triage and fraud detection to damages prediction and risk analytics.

Mediators now share the table with an invisible but influential technological participant.
Canadian insurers, for example, are using AI to detect fraud more effectively—even expanding data-sharing programs to enhance detection across multiple providers[1].

Regulatory and Governance Spotlight

Canada was on the verge of its first comprehensive federal AI regulation under Bill C-27, which contained the Artificial Intelligence and Data Act (AIDA). AIDA would have targeted “high-impact” AI systems—including those used in insurance decision-making—mandating risk mitigation, fairness, and transparency[2].

However, Bill C-27 died on prorogation on January 6, 2025[3], and any federal AI law will have to be reintroduced as a new bill and run the full legislative process. AIDA’s core ideas—governing “high-impact” AI with risk, fairness, and transparency controls—will likely remain the template, but binding obligations would only take effect once follow-on regulations define their scope and mechanics. In the meantime, expectations from the federal Office of the Superintendent of Financial Institutions (OSFI)[4] and Ontario’s Financial Services Regulatory Authority (FSRA)[5] are already pushing insurers toward documented AI governance and human oversight.

OSFI is encouraging documented AI governance and human oversight by requiring tech-risk governance, third-party oversight, and operational controls under Guideline B-13, and by extending its Model Risk Management (E-23) framework to all analytical models—pushing firms to inventory, validate, monitor, and keep humans in the loop on model-driven decisions[6].

FSRA is actively preparing for AI integration in insurance. Its initiatives include oversight of AI and machine learning tools, and a responsibility framework for big data use in auto insurance that emphasizes consumer transparency and model fairness[7].

NAIC Model Bulletin (U.S.)—A Warning for Canada

Across the border, the National Association of Insurance Commissioners (NAIC)[8] has issued a model bulletin that illustrates best practices for AI governance: insurers must maintain documented AI systems programs, conduct bias testing, enforce strong internal controls, and ensure human oversight[9]. Though U.S.-based, it’s a powerful preview of regulatory direction Canada may follow.

Why Mediators Should Be Paying Attention

When AI outputs become the default baseline for settlement authority, human flexibility—especially on unusual files—can get squeezed. The lived story of an injury risks being flattened into a spreadsheet entry, and we as mediators should expect real friction over transparency. How much, if anything, parties must reveal about AI-generated scores, factors, or reasoning during litigation and in mediation remains to be seen.

Risks and Challenges of AI-Driven Systems

Algorithms learn from historical data. If that data undervalues certain claimant groups, the model will replicate, or even amplify, those biases and drive unfair results. These systems also struggle with outliers and atypical fact patterns, so deserving cases can be mis-scored or mispriced. And when carriers lean on opaque tools without documented controls, human review, and clear explanations, they invite bad-faith allegations and regulatory scrutiny for mishandling claims.

Opportunities for Mediators to Add Value

Mediators should treat the AI-generated “number” as a place to begin, not the final word, and press for exceptions grounded in the claimant’s unique circumstances. We should promote transparency by asking parties to share, at a high level, any relevant AI policies or methodologies so everyone understands what’s steering authority. In busy, multi-party matters, we can lean on AI to accelerate valuation and timelines, but keep the deliberations human. And we can guard against bias by asking whether the models have been tested for fairness, especially when their outputs are materially constraining settlement authority and screening decisions.

Looking Ahead

As AI cements its role in claims and litigation, mediators must develop literacy in how these systems work, at least enough to question them. You’ll need to discern when a number is deserved, and when it’s the product of flawed logic or data. The settlement table has always been about law, empathy, and sound judgment. Now, data science has taken a seat, and it will be part of the mediator’s job to ensure it doesn’t take the lead.

Disclaimer: This blog is for informational purposes only and does not constitute legal advice.

