AI Group Launches AI Risk Classification (AIRC) Model

July 2025
Artificial intelligence (AI) is revolutionizing the financial services sector. These technologies are not only enhancing existing processes but are also reshaping the insurance landscape. By leveraging AI effectively, firms can gain significant competitive advantages, drive innovation, and better serve a new generation of digital customers.
AI Risks
The rapid evolution of AI presents new frontiers in risk management. Organizations across our industry continue to prioritize ethical issues over methodological ones in their AI implementations. Today, many AI implementations across the industry are focused on operationalizing generative AI (GenAI) and are decidedly internally focused, which helps firms contain AI risks while they build experience with the technology.
However, our industry has been successfully and carefully leveraging traditional AI, such as machine learning (ML), across the value chain for several years. Firms are keenly focused on making these traditional AI models and implementations transparent and explainable. This is important because, without transparency, AI systems can be "black boxes," making it difficult to understand how they arrive at their decisions.
To address AI risks, the European Union (EU) introduced the Artificial Intelligence Act (AIA), the first comprehensive legislation of its kind. Effective from August 1, 2024, the AIA establishes a regulatory framework for all AI systems operating within the EU. It classifies AI systems into four categories:
- Unacceptable risk: practices that are prohibited outright, such as social scoring
- High risk: systems subject to strict obligations, including conformity assessments and human oversight
- Limited risk: systems subject to transparency obligations, such as disclosing that a user is interacting with a chatbot
- Minimal risk: systems that carry no obligations beyond existing law
This classification allows for proportionate regulatory oversight, with requirements tailored to each risk level.
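To make the tiering concrete, here is a minimal sketch in Python that pairs each EU AI Act tier with a simplified summary of its proportionate oversight and maps a few hypothetical insurance use cases to tiers. The use-case names and tier assignments are illustrative assumptions, not legal classifications.

```python
from enum import Enum

class EUAIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g., social scoring
    HIGH = "high"                  # strict obligations before and after deployment
    LIMITED = "limited"            # transparency duties, e.g., disclosing chatbot use
    MINIMAL = "minimal"            # no obligations beyond existing law

def oversight_required(tier: EUAIActRiskTier) -> str:
    """Return the proportionate oversight implied by a tier (simplified)."""
    return {
        EUAIActRiskTier.UNACCEPTABLE: "do not deploy",
        EUAIActRiskTier.HIGH: "conformity assessment, logging and human oversight",
        EUAIActRiskTier.LIMITED: "user-facing transparency disclosures",
        EUAIActRiskTier.MINIMAL: "standard software governance",
    }[tier]

# Hypothetical mapping of common insurance AI use cases to tiers.
# These assignments are assumptions for illustration only; actual
# classification depends on context and legal review.
USE_CASE_TIERS = {
    "life_underwriting_risk_scoring": EUAIActRiskTier.HIGH,
    "customer_service_chatbot": EUAIActRiskTier.LIMITED,
    "internal_document_search": EUAIActRiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {oversight_required(tier)}")
```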
AI risks differ from those associated with traditional software implementations. The National Institute of Standards and Technology (NIST) highlighted these differences in its AI Risk Management Framework (AI RMF 1.0), published in January 2023: current risk management frameworks are insufficient for addressing AI-specific risks, such as a lack of transparency, hallucinations, security vulnerabilities and the complexity of AI systems.
Former President Joe Biden's executive order on AI, issued on October 30, 2023, set new standards for AI safety in the United States, but the U.S. still lacks a comparable overarching AI regulation. AI governance frameworks should establish basic parameters for all stakeholders involved in AI implementations. A holistic approach is necessary for proactive and proportionate AI governance across the insurance value chain, including assessment of both current and future AI applications in the industry.
These AI governance frameworks will be vital to helping carriers mitigate AI risks by ensuring careful oversight of the design, development and implementation of AI, as well as conducting detailed assessments of the ethical, legal and societal implications of their AI systems. By taking a proactive approach, organizations can mitigate risks that include, but are not limited to, inadvertent bias and proxy discrimination, data privacy and protection concerns, liability, intellectual property challenges, reputational damage and financial impacts.
Insurance firms are using AI across the value chain, with each firm independently determining the risk levels of its AI implementations. The EU AI Act requires inventorying AI systems and classifying them into one of the four previously mentioned categories based on risk. The AIRC Model, presented below, provides common guidelines for risk classification in the insurance industry. The to-be-released AI Risk Evaluation (AIRE) Framework for LIMRA and LOMA members will build on this model and help firms evaluate AI initiatives and develop appropriate risk management strategies.
The AI risk classifications for the insurance value chain build upon the EU AI Act and classify AI systems into five risk categories — four categorized as “Intentional AI” and one as “Inadvertent AI.” This approach is unique to the AIRC Model and the AIRE.
As the name suggests, intentional AI systems are those that an enterprise intentionally invests in and implements to solve business problems. These systems are typically driven by a business need, a strategic initiative, a desire or need to innovate, or an interest in experimentation. Firms may choose to either build these systems in-house or procure them by engaging a third-party solution provider.
Inadvertent AI systems, by contrast, are AI capabilities embedded in commonly used software platforms, which firms often acquire without a deliberate decision to invest in AI. Firms must ensure vendor transparency to mitigate the risks associated with these systems.
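To illustrate what inventorying and classifying AI systems might look like in practice, the sketch below models one inventory record in Python. The schema, the field names and the labels of the four intentional tiers (assumed here to mirror the EU AI Act tiers, since the AIRC Model builds on the Act) are all assumptions for illustration; the published AIRC Model may define them differently.

```python
from dataclasses import dataclass
from enum import Enum

class AIRCCategory(Enum):
    # Five categories as described: four "Intentional AI" tiers plus one
    # "Inadvertent AI" category. The intentional tier labels below are
    # assumptions modeled on the EU AI Act, not the AIRC Model's own names.
    INTENTIONAL_UNACCEPTABLE = "intentional/unacceptable"
    INTENTIONAL_HIGH = "intentional/high"
    INTENTIONAL_LIMITED = "intentional/limited"
    INTENTIONAL_MINIMAL = "intentional/minimal"
    INADVERTENT = "inadvertent"  # AI embedded in vendor software platforms

@dataclass
class AISystemRecord:
    """One entry in an enterprise AI inventory (hypothetical schema)."""
    name: str
    business_function: str   # e.g., underwriting, claims, customer service
    built_in_house: bool     # False for third-party solutions
    vendor: str | None       # populated when the system is procured
    category: AIRCCategory

inventory = [
    AISystemRecord("claims-triage-ml", "claims", True, None,
                   AIRCCategory.INTENTIONAL_HIGH),
    AISystemRecord("crm-email-autocomplete", "distribution", False,
                   "CRM vendor", AIRCCategory.INADVERTENT),
]

# Inadvertent AI flows into a vendor-transparency review queue.
vendor_review = [r for r in inventory if r.category is AIRCCategory.INADVERTENT]
print([r.name for r in vendor_review])
```

A real inventory would add fields such as data sources, model owners and review dates; the point here is that the Intentional/Inadvertent split can be captured directly in the classification itself.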
The AIRC Model and the soon-to-be-released AIRE Framework provide a starting point for AI risk management and governance: the AIRC classifies AI systems based on risk, and the AIRE offers guidelines for risk mitigation. Both will be periodically updated as AI implementations and regulatory frameworks evolve.
Organizations should use the AIRC Model and the AIRE Framework to develop customized AI governance strategies that align with the firm's risk appetite and management objectives. By adopting a proactive approach, insurers can ensure the safe and ethical use of AI, benefiting both the industry and its customers.