
AI Governance: Creating a Responsible Framework

Author

Matthew Busbee
Chief Data Officer, Pan-American Life Insurance Group

May 2026

Artificial intelligence (AI) is no longer a distant technological ambition; it is increasingly embedded in day-to-day work across insurance and financial services. The hard part is no longer experimentation — it’s earning and sustaining the right level of confidence in how AI is used, how decisions are made, and how risk is managed over time. As adoption accelerates, governance can’t be treated as a one-time approval step or a static policy document; it must function as an operating model that brings business, technology, risk, compliance, and data teams into a repeatable life cycle. The Business Research Company predicts AI in the insurance market will grow from $10.24 billion in 2025 to $13.94 billion in 2026 — raising the stakes for transparency, accountability and ongoing monitoring.

The Growing Urgency of AI Governance

As AI becomes more powerful and deeply integrated into underwriting, claims, servicing, sales and internal operations, it brings risks that are often invisible until something goes wrong: unintended bias or proxy discrimination, data leakage, vendor opacity, model drift, hallucinations in generative systems, and inconsistent human oversight as users learn to trust the tool. Robust governance frameworks, rigorous data management, and practical risk controls are necessary to manage AI responsibly across the full life cycle — not just at go-live. Deloitte reports that only 30% of companies believe their risk and governance strategy is highly prepared for AI adoption. The question is no longer whether to adopt AI, but how to operationalize it responsibly at scale.

Traditional Governance

Many organizations are discovering that traditional risk controls, information technology (IT) checkpoints and compliance practices don’t cleanly translate to modern AI. AI behaves probabilistically, changes over time as data and environments shift, and is increasingly embedded inside third-party products — sometimes with limited transparency into model logic, training data or change cadence. That combination creates a practical gap between “we approved it” and “we can continuously demonstrate it is behaving as intended.” Governance gaps often emerge not from bad intentions but from misalignment among business objectives, technical constraints, vendor obligations and regulatory expectations — especially in highly regulated lines like underwriting and pricing. In other words: responsible AI requires more than policy — it requires repeatable life cycle controls, clear decision rights and an auditable trail.

Practical Best Practices

Rather than treating governance as a single committee review, mature programs build controls into the work from intake through ongoing monitoring. Three make-or-break practices show up repeatedly across carriers:

  • Clear ownership and cross-functional decision rights: business accountability, early risk and compliance involvement, and a defined approval path for higher-risk use cases
  • Life cycle controls that scale: standard intake, risk classification, data controls, testing expectations and go-live criteria that are commensurate with risk
  • Ongoing monitoring and change management: drift and hallucination monitoring where applicable, periodic revalidation, incident reporting and vendor change-notification expectations
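To make the monitoring bullet concrete: one widely used drift check is the population stability index (PSI), which compares a model input's (or score's) production distribution against its training-time baseline. The sketch below is illustrative, not a prescribed AIGG control; the 0.25 threshold is a common rule of thumb, and the synthetic data is purely for demonstration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production distribution ('actual') against a
    training-time baseline ('expected'). PSI above ~0.25 is a
    common trigger for investigation or revalidation."""
    # Bin edges are fixed from the baseline distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; a small epsilon avoids log(0)
    eps = 1e-6
    exp_pct = exp_counts / exp_counts.sum() + eps
    act_pct = act_counts / act_counts.sum() + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: production scores have shifted upward by one
# standard deviation relative to the baseline
rng = np.random.default_rng(0)
baseline = rng.normal(0.5, 0.1, 10_000)
drifted = rng.normal(0.6, 0.1, 10_000)
print(population_stability_index(baseline, drifted))
```

Scheduled checks like this, with documented thresholds and escalation paths, are what turn "ongoing monitoring" from a policy statement into an auditable control.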

A practical way to scale governance is to define what evidence is required at each stage of the life cycle: what must be documented, reviewed, tested, approved and retained — especially for higher-risk use cases. This is where regulatory mapping, auditability and traceability become operational requirements rather than aspirational goals.
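One way to operationalize "evidence required at each stage" is a simple stage-gate check: a machine-readable map of required artifacts per stage, queried before a project may advance. The stage names and artifact lists below are hypothetical examples, not a standard, and real programs would vary requirements by risk tier.

```python
# Illustrative evidence requirements per life cycle stage.
# Stage names and artifacts are examples, not a prescribed standard.
REQUIRED_EVIDENCE = {
    "intake": {"business_owner", "use_case_description", "risk_tier"},
    "data_management": {"data_lineage", "consent_basis", "retention_plan"},
    "testing": {"test_results", "bias_assessment"},
    "go_live": {"approval_record", "monitoring_plan"},
}

def missing_evidence(stage, submitted):
    """Return the artifacts still outstanding before a stage gate passes."""
    return REQUIRED_EVIDENCE[stage] - set(submitted)

# A use case arriving at the testing gate with only test results on file
print(sorted(missing_evidence("testing", ["test_results"])))  # ['bias_assessment']
```

Because the requirements live in one place, the same map doubles as the retention checklist auditors review after deployment.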

The LIMRA and LOMA AI Governance Group (AIGG) organizes these practices into a repeatable, seven-step AI Project Lifecycle (AIPL) — from planning and aligning through regulatory/compliance, data management, design and implementation, testing, operationalization, and ongoing governance. Critically, the same rigor should apply whether a solution is built internally or procured from a vendor: firms remain accountable for outcomes, even when the technology is delivered as a service.
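The seven AIPL steps can be treated as an ordered sequence that a project advances through only with recorded sign-off. The sketch below assumes a simple linear sign-off model for illustration; the AIGG white paper defines the stages, not this enforcement mechanism.

```python
# The seven AIPL stages in order, as named in the source text.
# The sign-off gating below is an illustrative assumption.
AIPL_STAGES = [
    "planning_and_aligning",
    "regulatory_compliance",
    "data_management",
    "design_and_implementation",
    "testing",
    "operationalization",
    "ongoing_governance",
]

def next_stage(current, signed_off):
    """Advance to the next stage only when the current one has sign-off."""
    i = AIPL_STAGES.index(current)
    if not signed_off:
        raise ValueError(f"{current} requires sign-off before advancing")
    if i == len(AIPL_STAGES) - 1:
        return current  # ongoing governance never "completes"
    return AIPL_STAGES[i + 1]

print(next_stage("testing", signed_off=True))  # operationalization
```

Note that the final stage loops on itself: monitoring and revalidation continue for as long as the system is in production, for vendor-delivered solutions as much as internal builds.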

High-quality, well-managed data remains foundational — particularly around permissible use, consent, privacy, retention and clear stewardship. Strong data governance reduces downstream rework and helps teams assess bias risk and data suitability before models are trained or embedded into business workflows.

Transparency and Explainability

Transparency and explainability aren’t abstract ideas — they show up as concrete artifacts: documented purpose and scope, data lineage and consent considerations, testing evidence (including fairness and bias assessments where relevant), and clear records of who approved what and why. When third-party vendors are involved, these expectations should translate into procurement due diligence, contractual requirements, and change-notification mechanisms. MIT Sloan Management Review found 84% of interviewed AI experts believe companies should be required to disclose the use of AI in products and offerings to customers — an important signal for customer trust and regulatory readiness.

Building AI Governance

What makes life cycle governance valuable is not just its structure, but also its ability to drive consistency across teams and over time. It forces clarity on who owns the use case, what “fit for purpose” means, what data is permitted and appropriate, and what monitoring is required after launch. It also highlights the enablement side of governance: AI literacy, clear usage guidance, and maintaining a “human-in-the-loop” posture that doesn’t erode as users become more comfortable with the tool.

The AIGG offers a practical roadmap to help carriers translate principles into repeatable execution — especially as AI expands beyond pilots into production. The AIGG’s life cycle approach and best practices are designed to be used by executive leadership and practitioners alike, helping organizations move faster without sacrificing customer trust, regulatory readiness or operational resilience.

AI Governance Best Practices

The AIGG developed AI Governance Best Practices to help carriers operationalize governance across the full AI project life cycle — from project intake and planning through testing, operationalization, and ongoing monitoring. The white paper reflects cross-carrier input and emphasizes practical execution themes such as aligning AI to clear business objectives, establishing cross-functional oversight (often via an AI center of excellence or governance committee), treating vendor AI with the same diligence as internal builds, strengthening data governance and privacy practices, and implementing repeatable monitoring and audit mechanisms following deployment.

If you’re building or buying AI capabilities in underwriting, claims, servicing, sales, finance, or internal productivity tools, this guide is a useful reference to help standardize how you assess risk, document decisions, and maintain accountability over time. It is especially relevant for leaders and practitioners in technology, data, compliance/legal, risk management, procurement/vendor management, internal audit, and business-unit ownership roles.

Conclusion

AI is accelerating operational change in insurance, but it also raises the bar for accountability. Responsible AI isn’t achieved by declaring principles — it’s achieved by building confidence through clear ownership, repeatable life cycle controls, and evidence that systems behave as intended in production. That is what AI governance should deliver: trust that is earned continuously, not assumed at go-live.

The industry is making real progress. For a concrete roadmap you can apply, I recommend downloading the AI Governance Best Practices white paper and using it as a reference for planning, building or buying, testing, operationalizing and monitoring AI systems in a way that supports innovation while protecting customers and the enterprise.
