Getting to the How and Why: AI Shows Its Work

March 19, 2026



Explainability was once a hindrance to the widespread adoption of AI solutions in the insurance sector. Those days are waning thanks to next-gen models, according to Gradient AI’s Stan Smith.


For years, the most pressing questions surrounding artificial intelligence solutions for insurance underwriting concerned accuracy. In a sector where margins are thin and exacting calculations non-negotiable, could AI be accurate enough to trust?


That question has largely been answered. Modern AI systems routinely outperform traditional actuarial and rules-based models across a range of underwriting applications. They uncover patterns humans overlook, integrate broader data sources, and continuously learn from outcomes. In terms of predictive performance, AI has arrived.


Yet adoption across insurance remains cautious and disjointed. Why? Because the hesitance centers on explainability, not accuracy.


This conclusion follows simple logic: Insurers are rational actors. Underwriting is fundamentally about pricing risk as precisely as possible. If a new tool improves loss ratios and portfolio performance, carriers have every incentive to use it.


But accuracy alone has never been enough. Insurance operates in a highly regulated environment. Underwriting decisions must be defensible—to regulators, auditors, reinsurers and internal governance teams. Insurers must not only make sound decisions; they must clearly explain how and why those decisions were made.


For decades, that requirement has shaped underwriting practices. Traditional statistical models often persist not because they are the most predictive but because they are transparent. Underwriters can trace decisions back to specific variables. Regulators can follow the logic step by step. The reasoning is visible.


Early machine learning models disrupted that balance. While often transformative in performance, first-generation AI systems struggled to “show their work.” They produced highly accurate scores and recommendations but offered little meaningful insight into how those outcomes were generated.


For underwriters, that opacity was uncomfortable. For compliance teams, it was untenable. As a result, many insurers accepted lower predictive performance in exchange for greater transparency. Even when AI was adopted, it was often constrained—“good enough” models were favored over those more powerful but less explainable.


This was not resistance to innovation but merely good governance. Underwriters must understand recommendations to trust and apply them. Executives need confidence that AI aligns with risk appetite and company policy. Regulators need clarity around methodology, bias controls and decision logic. Without explainability, even the most accurate model is incomplete.


Over time, this dynamic hardened into a false tradeoff: performance versus transparency. The industry assumed that more accurate models would necessarily be less explainable.


That assumption is now outdated. Recent advances in AI, particularly the integration of large language model techniques with advanced analytics, are redefining explainability. Modern systems no longer simply output risk scores. They can translate complex model behavior into plain-language reasoning aligned with underwriting judgment.


Instead of forcing underwriters to interpret abstract probabilities, today’s tools can articulate the specific drivers behind a recommendation. For example, in healthcare underwriting, sophisticated platforms can replicate analyses that once required manual nurse review—while clearly outlining the conditions, utilization patterns or cost drivers that influenced the result.
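
To make the idea concrete, here is a minimal sketch of per-recommendation driver reporting, assuming a simple linear model and invented feature names (prior_claims, building_age and so on are hypothetical, not drawn from any real rating plan). Production systems use far richer models, but the pattern is the same: compute each feature's signed contribution to the score and render the largest ones in plain language.

```python
# A minimal, illustrative sketch of driver reporting for a risk score.
# Feature names and data are hypothetical, not from any real rating plan.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = ["prior_claims", "building_age", "payroll_size", "inspection_gap_days"]

# Synthetic stand-in for historical policy outcomes.
X = rng.normal(size=(500, len(features)))
y = (1.2 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(x):
    """Return the risk score and each feature's signed log-odds contribution."""
    z = scaler.transform(x.reshape(1, -1))[0]
    contributions = model.coef_[0] * z  # exact per-feature decomposition for a linear model
    score = model.predict_proba(z.reshape(1, -1))[0, 1]
    return score, sorted(zip(features, contributions), key=lambda t: -abs(t[1]))

score, drivers = explain(X[0])
print(f"Risk score: {score:.2f}")
for name, c in drivers[:3]:
    print(f"  {name} {'raises' if c > 0 else 'lowers'} risk ({c:+.2f} log-odds)")
```

For nonlinear models, the same report can be produced with attribution methods such as SHAP; the essential point is that the score arrives with its reasons attached.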


Explainability in this new context does not mean exposing every mathematical parameter. It means making decision logic understandable, defensible and actionable for the people responsible for applying it.


When underwriters understand why a model reached a conclusion, confidence increases. When regulators can see how key factors are measured and managed, resistance decreases. Explainability transforms AI from a perceived compliance risk into a governance asset.


This is especially critical in addressing concerns about bias. Insurers must demonstrate not only that their models perform well but that they perform fairly. That requires visibility into training data, performance across populations and outcomes over time.

AI systems operating across multiple carriers and geographies offer a broader data foundation for identifying and mitigating bias. With deeper and more diverse datasets, potential issues can be detected earlier, ruled out when absent, or corrected when present. In this context, explainability becomes central to sustaining long-term trust. A model that cannot clearly demonstrate how it detects and mitigates bias will struggle to satisfy regulators, regardless of its predictive strength.
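
As an illustration, a routine fairness review might compare a model's discrimination and flag rates across subpopulations. The sketch below is a generic pattern on synthetic data, not any carrier's actual monitoring pipeline; the group labels, the 0.5 score cutoff and the 0.05 AUC-gap threshold are illustrative assumptions, not regulatory standards.

```python
# A generic fairness-check pattern on synthetic data. Group labels, the 0.5
# score cutoff, and the 0.05 AUC-gap threshold are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def group_report(y_true, y_score, groups, gap_threshold=0.05):
    """Per-group AUC and flag rate, plus whether the AUC gap is concerning."""
    report = {}
    for g in np.unique(groups):
        m = groups == g
        # roc_auc_score requires both outcome classes within each group.
        report[g] = {
            "n": int(m.sum()),
            "auc": roc_auc_score(y_true[m], y_score[m]),
            "flag_rate": float((y_score[m] >= 0.5).mean()),
        }
    aucs = [r["auc"] for r in report.values()]
    return report, (max(aucs) - min(aucs)) > gap_threshold

# Synthetic outcomes, scores, and group labels for demonstration.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1_000)
scores = 0.3 * y + 0.7 * rng.uniform(size=1_000)
groups = rng.choice(["group_a", "group_b"], size=1_000)

report, gap_flagged = group_report(y, scores, groups)
print(report)
print("AUC gap flagged:", gap_flagged)
```

Tracking a report like this over time, alongside driver-level explanations, is what lets a carrier show regulators not just that a model performs, but how evenly it performs.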


The value of explainable AI is particularly evident in complex underwriting environments where human intuition has limits. Consider small commercial property/casualty insurance policies. Based on my company’s internal data assessment, more than 98% may never generate a claim, yet a small subset accounts for most losses. Identifying that subset using traditional methods is extremely challenging.
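
A short sketch on synthetic data shows why that imbalance defeats naive approaches: with a claim rate near 2%, a model that predicts "no claim" for every policy is about 98% accurate yet flags none of the losses, so evaluation has to focus on how well the rare class is ranked. All numbers and factors below are invented for illustration.

```python
# Why accuracy misleads on a ~98% no-claim book: synthetic illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(20_000, 6))
# Rare claims (~2-3%) driven by a subtle combination of two factors.
y = ((X[:, 0] > 1.0) & (X[:, 1] > 1.0)).astype(int)

print(f"claim rate: {y.mean():.1%}")
print(f"accuracy of always predicting 'no claim': {1 - y.mean():.1%}")

# Evaluate how well the rare class is ranked instead of raw accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"average precision: {average_precision_score(y_te, scores):.2f} "
      f"(baseline = claim rate, {y_te.mean():.1%})")
```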


AI excels at detecting subtle combinations of factors that signal higher risk. Crucially, explainable AI can also articulate what those factors are, allowing underwriters to validate and act on the insight rather than blindly accepting a score.


A similar dynamic exists in small-group healthcare underwriting, where a single high-cost individual can materially impact profitability. Advanced models can detect early indicators of elevated risk that conventional approaches may miss while clearly explaining the drivers behind the assessment.


In each case, adoption hinges less on raw accuracy than on whether the recommendation can be understood and defended. AI must align with regulatory expectations, underwriting workflows and enterprise risk management practices. It must build confidence rather than demand blind trust.


The future of AI-driven underwriting is not about replacing human expertise. It is about augmenting it with systems that predict accurately and explain clearly. When that balance is achieved, the supposed tradeoff between performance and transparency disappears.


The misconception that insurers must choose between what works and what can be explained no longer holds. The technology is mature. The data is robust. The models are proven. As explainable AI continues to advance, insurers no longer have to compromise between predictive power and defensibility. They can—and increasingly will—have both.


This article appeared first on Carrier Management.

