AI Group Launches AI Risk Classification (AIRC) Model

Author

Kartik Sakthivel, Ph.D., MS-IT/MS-CS, MBA, PGC-IQ
Vice President & Chief Information Officer and Regional Chief Executive Officer – Asia West
LIMRA and LOMA
ksakthivel@limra.com

July 2025

Artificial intelligence (AI) is revolutionizing the financial services sector. These technologies are not only enhancing existing processes but are also reshaping the insurance landscape. By leveraging AI effectively, firms can gain significant competitive advantages, drive innovation, and better serve a new generation of digital customers.

AI Risks

The rapid evolution of AI presents new frontiers in risk management. In their AI implementations, organizations across our industry continue to prioritize ethical concerns over methodological ones. Today, many AI implementations across the industry are focused on operationalizing generative AI (GenAI) and are decidedly internally focused, an approach that helps firms contain and manage AI risks.

However, our industry has been successfully and carefully leveraging traditional AI, such as machine learning (ML), across the value chain for several years. Firms are keenly focused on making these traditional AI models and implementations transparent and explainable. This is important because, without transparency, AI systems can be "black boxes," making it difficult to understand how they arrive at their decisions.

To address AI risks, the European Union (EU) introduced the Artificial Intelligence Act (AIA), the first legislation of its kind. Effective from August 1, 2024, the AIA establishes a regulatory framework for all AI systems operating within the EU. It classifies AI systems into four categories:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

This classification allows for proportionate regulatory oversight, with requirements tailored to each risk level.
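
As a concrete, purely illustrative sketch of what proportionate oversight means, the short Python example below maps each tier to a simplified paraphrase of its obligations. The names EUAIActRiskTier, OBLIGATIONS and oversight_for are assumptions made for this sketch, and the obligation lists are loose summaries, not legal text.

    from enum import Enum

    class EUAIActRiskTier(Enum):
        """The four risk tiers defined by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    # Simplified paraphrases of each tier's obligations -- not legal text.
    OBLIGATIONS = {
        EUAIActRiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        EUAIActRiskTier.HIGH: ["conformity assessment", "risk management system",
                               "human oversight", "logging and documentation"],
        EUAIActRiskTier.LIMITED: ["transparency obligations (e.g., disclose AI use)"],
        EUAIActRiskTier.MINIMAL: ["governed under existing internal policies"],
    }

    def oversight_for(tier: EUAIActRiskTier) -> list[str]:
        """Return the illustrative obligations attached to a risk tier."""
        return OBLIGATIONS[tier]

    print(oversight_for(EUAIActRiskTier.HIGH))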

Risk Frameworks

AI risks differ from those associated with traditional software implementations. The National Institute of Standards and Technology (NIST) highlighted these differences in its AI Risk Management Framework (AI RMF 1.0), published in January 2023. As the framework makes clear, existing risk management approaches are insufficient for AI-specific risks such as lack of transparency, hallucinations, security vulnerabilities and the complexity of AI systems.

Former President Joe Biden's executive order on AI, issued on October 30, 2023, set new standards for AI safety in the United States. However, the U.S. still lacks an overarching AI regulation comparable to the EU's AIA. AI governance frameworks should establish basic parameters for all stakeholders involved in AI implementations. A holistic approach is necessary for proactive and proportionate AI governance across the insurance value chain, including an assessment of both current and future AI applications in the industry.

These AI governance frameworks will be vital in helping carriers mitigate AI risks, both by ensuring careful oversight of the design, development and implementation of AI and by requiring detailed assessments of the ethical, legal and societal implications of their AI systems. By taking a proactive approach, organizations can mitigate risks that include, but are not limited to, inadvertent bias and proxy discrimination, data privacy and protection concerns, liability, intellectual property challenges, reputational damage and financial impacts.

AIRC Model

Insurance firms are using AI across the value chain, with each firm determining the risk level of its AI implementations on its own. The EU AI Act requires inventorying AI systems and classifying them into one of the four categories described above based on risk. The AIRC Model, presented below, provides common guidelines for risk classification in the insurance industry. The forthcoming AI Risk Evaluation (AIRE) Framework for LIMRA and LOMA members will build on this model and help firms evaluate AI initiatives and develop appropriate risk management strategies.

[Figure: AI Risk Classification Model]

The AI risk classifications for the insurance value chain build upon the EU AI Act and classify AI systems into five risk categories: four categorized as “Intentional AI” and one as “Inadvertent AI.” This five-category approach is unique to the AIRC Model and the AIRE Framework.

Intentional AI

As the name suggests, intentional AI systems are those that an enterprise intentionally invests in and implements to solve business problems. These systems are typically driven by a business need, a strategic initiative, a desire or need to innovate, or an interest in experimentation. Firms may choose to either build these systems in-house or procure them by engaging a third-party solution provider.

  • Unacceptable Risk
    Unacceptable-risk AI systems are prohibited due to their potential harm to human rights and safety. Examples include AI systems designed to manipulate users or those that use biometric surveillance without consent.
  • High Risk
    High-risk AI systems can have significant impacts on individuals. These systems must comply with multiple requirements and undergo conformity assessments. Examples include AI systems for underwriting and claims processing.
  • Limited Risk
    Limited-risk AI systems pose minor risks and can be managed through transparency obligations. Examples include AI chatbots that clearly identify themselves and provide options for human assistance.
  • Minimal Risk
    Minimal-risk AI systems present little to no risk and can be governed under existing policies. Examples include AI tools for internal data analysis and administrative automation.

Inadvertent AI

Inadvertent AI systems are AI capabilities embedded in commonly used software platforms, often arriving through vendor products and updates rather than a deliberate AI investment. Firms must ensure vendor transparency to mitigate the risks associated with these systems.
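
To show how the five AIRC categories might be applied in practice, the minimal Python sketch below records a firm's AI inventory against them. It is a hypothetical illustration, not part of the AIRC Model itself; every name in it (AIRCCategory, AISystemRecord, the sample systems and the vendor "ExampleSoft") is an assumption made for this sketch.

    from dataclasses import dataclass
    from enum import Enum

    class AIRCCategory(Enum):
        """Five AIRC categories: four 'Intentional AI' tiers plus 'Inadvertent AI'."""
        UNACCEPTABLE = "unacceptable risk (intentional)"
        HIGH = "high risk (intentional)"
        LIMITED = "limited risk (intentional)"
        MINIMAL = "minimal risk (intentional)"
        INADVERTENT = "inadvertent AI (embedded in vendor software)"

    @dataclass
    class AISystemRecord:
        """One entry in a firm's AI inventory (hypothetical schema)."""
        name: str
        business_function: str
        category: AIRCCategory
        vendor: str | None = None  # set for procured or embedded systems

    # Hypothetical inventory entries, echoing the examples above.
    inventory = [
        AISystemRecord("underwriting-triage-model", "underwriting", AIRCCategory.HIGH),
        AISystemRecord("service-chatbot", "customer service", AIRCCategory.LIMITED),
        AISystemRecord("meeting-summarizer", "productivity suite",
                       AIRCCategory.INADVERTENT, vendor="ExampleSoft"),
    ]

    # Higher-risk categories warrant the closest governance attention.
    for record in inventory:
        if record.category in (AIRCCategory.UNACCEPTABLE, AIRCCategory.HIGH):
            print(f"Escalate for conformity review: {record.name}")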

Conclusion

The AIRC Model and the forthcoming AIRE Framework provide a starting point for AI risk management and governance: the AIRC Model classifies AI systems based on risk, and the AIRE Framework offers guidelines for risk mitigation. Both will be updated periodically as AI implementations and regulatory frameworks evolve.

Organizations should use the AIRC Model and the AIRE Framework to develop customized AI governance strategies that align with the firm's risk appetite and risk management objectives. By adopting a proactive approach, insurers can ensure the safe and ethical use of AI, benefiting both the industry and its customers.
