
Executive Briefing:
AI Governance Best Practices

LIMRA and LOMA AI Governance Group


Artificial Intelligence (AI) is rapidly transforming the insurance industry, presenting both significant opportunities and complex governance challenges. As AI becomes embedded across operations, carriers must adopt robust governance frameworks, rigorous data management, and comprehensive risk mitigation strategies. This briefing distills industry best practices to guide organizations in managing and mitigating AI risks throughout the project lifecycle - whether developing systems internally or procuring from third-party vendors.

It is strongly recommended that executives ensure that:

  1. AI initiatives are clearly aligned with business objectives and regulatory expectations.
  2. Vendor vetting and data testing are conducted transparently and institutionalized.
  3. Proactive governance and compliance measures are implemented.
  4. Enterprise-wide AI literacy and cross-functional ownership are promoted.

The AI Project Lifecycle (AIPL)

LIMRA and LOMA recommend a structured, seven-step lifecycle for responsible AI adoption. The seven steps and their key themes are:


1. Planning and Aligning

  • Define clear business goals and project scope.
  • Align AI initiatives with enterprise strategy and stakeholder expectations.
  • Utilize frameworks such as AIGG AI Risk Classification (AIRC) and AI Risk Evaluation (AIRE).


2. Regulatory and Compliance

  • Ensure compliance with legal and regulatory requirements from project inception.
  • Conduct regulatory mapping and cross-functional reviews.
  • Maintain audit trails and documentation for transparency and explainability.

3. Data Management

  • Prioritize data integrity, governance, and privacy.
  • Establish clear roles for data ownership and access.
  • Conduct bias and fairness audits; comply with privacy laws (GDPR, CCPA, HIPAA).

4. Design and Implementation

  • Decide on build vs. buy based on risk, accountability, and regulatory obligations.
  • Require comprehensive documentation and transparency from vendors.
  • Form cross-functional teams for oversight.

5. Testing and Quality Assurance

  • Implement rigorous QA/QC for accuracy, fairness, explainability, and robustness.
  • Use explainability tools and simulate customer experiences.
  • Document all testing procedures and results.

6. Operationalization

  • Deploy AI systems with IT oversight, user training, and real-time monitoring for model drift and compliance.
  • Maintain change management protocols and feedback systems.
  • Ensure human-in-the-loop oversight for high-risk systems.

7. Governance and Monitoring

  • Establish centralized AI governance committees for ongoing oversight.
  • Conduct regular audits, horizon scanning for emerging risks, and periodic revalidations.
  • Maintain incident reporting and continuous improvement mechanisms.

Legal and Regulatory Framework

  • NIST AI Risk Management Framework (RMF): Focuses on trustworthy AI - validity, reliability, safety, security, transparency, explainability, fairness, privacy, and accountability.
  • NIST GenAI Profile: Addresses risks specific to generative AI, including hallucinations and output variability.
  • NAIC Model Bulletin: Emphasizes transparency, governance, ethical data practices, and recordkeeping. Requires disclosure of AI usage to consumers and accountability for third-party systems.

Best Practices Highlights

  • Early and Cross-Functional Engagement: Involve compliance, risk, and legal teams from the outset.
  • Centralized AI Inventory: Track all AI use cases and models for better oversight.
  • Vendor Due Diligence: Treat vendor solutions with the same rigor as internal builds, requiring transparency and documentation.
  • Continuous Monitoring and Auditing: Regularly review AI systems for fairness, bias, and compliance.
  • Human Oversight: Maintain human-in-the-loop for critical decisions, especially in underwriting and claims.
  • AI Literacy and Training: Elevate AI literacy across the enterprise and provide tailored training for different roles.

Conclusion

The insurance industry must embed compliance and governance throughout the AI lifecycle, treat all AI initiatives with diligence and transparency, and foster a culture of cross-functional ownership and continuous improvement. The frameworks and best practices outlined in this briefing provide a roadmap for safe, responsible, and value-driven AI deployment.
