The Quiet Disruption in the Claims World
Artificial intelligence has moved beyond novelty and firmly planted itself in Ontario’s insurance and litigation landscape. Insurers are deploying AI for everything from claims triage and fraud detection to damages prediction and risk analytics.
Mediators now share the table with an invisible but influential technological participant.
Canadian insurers, for example, are using AI to detect fraud more effectively—even expanding data-sharing programs to enhance detection across multiple providers[1].
Regulatory and Governance Spotlight
Canada was on the verge of enacting its first comprehensive federal AI regulation through Bill C-27, which contained the Artificial Intelligence and Data Act (AIDA). AIDA would have targeted “high-impact” AI systems—including those used in insurance decision-making—mandating risk mitigation, fairness, and transparency[2].
However, Bill C-27 died on prorogation on January 6, 2025[3], and any federal AI law will have to be reintroduced as a new bill and run the full legislative process. AIDA’s core ideas—governing “high-impact” AI with risk, fairness, and transparency controls—will likely remain the template, but real obligations would take effect only after follow-on regulations define their scope and mechanics. In the meantime, expectations from the federal Office of the Superintendent of Financial Institutions (OSFI)[4] and Ontario’s Financial Services Regulatory Authority (FSRA)[5] are already pushing insurers toward documented AI governance and human oversight.
OSFI requires tech-risk governance, third-party oversight, and operational controls under Guideline B-13, and its Model Risk Management framework (Guideline E-23) extends to all analytical models—pushing firms to inventory, validate, and monitor their models, and to keep humans in the loop on model-driven decisions[6].
FSRA is actively preparing for AI integration in insurance. Its initiatives include oversight of AI and machine learning tools, and a responsibility framework for big data use in auto insurance that emphasizes consumer transparency and model fairness[7].
NAIC Model Bulletin (U.S.)—A Warning for Canada
Across the border, the National Association of Insurance Commissioners (NAIC)[8] has issued a model bulletin that illustrates best practices for AI governance: insurers must maintain documented AI systems programs, conduct bias testing, enforce strong internal controls, and ensure human oversight[9]. Though U.S.-based, it’s a powerful preview of regulatory direction Canada may follow.
Why Mediators Should Be Paying Attention
When AI outputs become the default baseline for settlement authority, human flexibility—especially on unusual files—can get squeezed. The lived story of an injury risks being flattened into a spreadsheet entry, and we as mediators should expect real friction over transparency. How much, if anything, parties must reveal about AI-generated scores, factors, or reasoning during litigation and in mediation remains an open question.
Risks and Challenges of AI-Driven Systems
Algorithms learn from historical data. If that data undervalues certain claimant groups, the model will replicate, or even amplify, those biases and drive unfair results. These systems also struggle with outliers and atypical fact patterns, so deserving claims can be mis-scored or mispriced. And when carriers lean on opaque tools without documented controls, human review, and clear explanations, they invite bad-faith allegations and regulatory scrutiny for mishandling claims.
Opportunities for Mediators to Add Value
Mediators should treat the AI-generated “number” as a place to begin, not the final word, and press for exceptions grounded in the claimant’s unique circumstances. We should promote transparency by asking parties to share, at a high level, any relevant AI policies or methodologies so everyone understands what is steering settlement authority. In busy, multi-party matters, we can lean on AI to accelerate valuation and timelines while keeping the deliberations human. And we should guard against bias by asking whether the models have been tested for fairness, especially when their outputs are materially constraining settlement authority and screening decisions.
Looking Ahead
As AI cements its role in claims and litigation, mediators must develop literacy in how these systems work, at least enough to question them. You’ll need to discern when a number is deserved, and when it’s the product of flawed logic or data. The settlement table has always been about law, empathy, and sound judgment. Now, data science has taken a seat, and it will be part of the mediator’s job to ensure it doesn’t take the lead.
Disclaimer: This blog is for informational purposes only and does not constitute legal advice.
1. https://www.shift-technology.com/resources/press/canadian-insurers-expand-ai-driven-fraud-detection-programme
2. https://www.fasken.com/en/knowledge/2023/11/artificial-intelligence-in-financial-services-the-canadian-regulatory-landscape
3. https://www.parl.ca/legisinfo/en/bill/44-1/c-27
4. https://www.osfi-bsif.gc.ca/en
5. https://www.fsrao.ca/
6. https://www.osfi-bsif.gc.ca/en/guidance/guidance-library/technology-cyber-risk-management
7. https://www.fsrao.ca/media/11501/download
8. https://content.naic.org/
9. https://www.hklaw.com/en/insights/publications/2025/05/the-implications-and-scope-of-the-naic-model-bulletin