Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies
The European Union has introduced the EU Artificial Intelligence Act, a wide-ranging piece of legislation governing the application of artificial intelligence technologies across member states.
Because the Act imposes risk governance obligations, particularly on providers of high-risk AI systems, this article, part of our Decoding AI: The European Union's take on Artificial Intelligence series, provides guidance on how to establish an effective risk management system that meets the requirements of the Act.
Don't forget to grab your complimentary Palqee EU AI Act Framework before we embark on this journey!
Which AI Systems need to have a Risk Management System?
Under the EU AI Act, AI systems are classified into four distinct categories based on their potential risks to fundamental rights.
Among these categories, high-risk AI systems hold a unique position due to their significant potential impact on fundamental rights and safety, especially in sectors like healthcare, transportation, and infrastructure, where errors or biases can lead to severe consequences.
As a result, they are subject to specific regulatory requirements, including the implementation of a risk management system.
This system comprises a structured framework that companies employ to identify, assess, mitigate, and monitor potential risks associated with the development, deployment, and use of AI technologies. The process demands analysis of both internal and external factors to proactively anticipate negative outcomes, such as safety concerns, legal issues, or infringements on individuals' rights.
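To make the identify-assess-mitigate-monitor cycle concrete, below is a minimal sketch of what one entry in a risk register might look like in code. The field names, the three-level severity scale, and the example values are illustrative assumptions, not terminology mandated by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    """Illustrative three-level severity scale; the Act does not prescribe one."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskRegisterEntry:
    """One identified risk, tracked through assessment, mitigation, and monitoring."""
    risk_id: str                # e.g. "RISK-2024-001" (hypothetical naming scheme)
    description: str            # what could go wrong, and for whom
    source: str                 # internal (e.g. training data) or external (e.g. misuse)
    inherent_level: RiskLevel   # severity before any mitigation
    mitigations: list[str] = field(default_factory=list)
    residual_level: RiskLevel = RiskLevel.HIGH   # re-assessed after mitigation
    last_reviewed: date = field(default_factory=date.today)


# Example: recording a bias risk for a hypothetical CV-screening system
entry = RiskRegisterEntry(
    risk_id="RISK-2024-001",
    description="Model may rank candidates differently across demographic groups",
    source="internal: historical bias in the training data",
    inherent_level=RiskLevel.HIGH,
    mitigations=["re-balanced training set", "fairness metrics in the test suite"],
    residual_level=RiskLevel.MEDIUM,
)
print(entry.risk_id, entry.residual_level.name)
```

Keeping entries structured in this way makes it straightforward to re-assess residual risk over time and to produce the documentation the Act expects.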
Implementing a Risk Management System according to the EU AI Act
As mandated by Article 9 of the EU AI Act, a risk management system must be established, implemented, documented, and maintained in connection with high-risk AI systems.
The risk management system must operate as a continuous and iterative process throughout the lifecycle of these systems. The Act requires a comprehensive framework that incorporates a series of stages covering the different risk aspects associated with the system's purpose:
Risk Identification and Assessment
The initial step is the identification and assessment of potential risks associated with the AI system. Providers must identify both known and foreseeable risks linked to each high-risk AI system; thoroughly understanding these risks is paramount to evaluating their implications adequately.
Following this, providers must also estimate and evaluate the risks that might surface during the intended use of the high-risk AI system and under conditions of reasonably foreseeable misuse.
Moreover, the EU AI Act emphasises a data-driven approach to risk evaluation. Businesses are required to assess potential risks based on data collected from the post-market monitoring system, in compliance with Article 61.
A strategy to achieve this within your organisation involves:
Examining the system's functionalities, intended use, and potential vulnerabilities. To ensure a holistic analysis, it's advisable to engage a multidisciplinary team comprising AI developers, legal experts, and domain specialists.
Once the initial examination is complete, it's important to establish a data analysis team that collaborates closely with your AI development and deployment teams. This team's role is to gather and analyse real-world data generated by the high-risk AI systems already in use.
By examining actual user interactions, system performance, and reported incidents, valuable insights into the system's behaviour and potential risks can be gained. Alternatively, your organisation can also implement automated monitoring systems like PAM that proactively flag potential risks in a post-market environment (see the sketch after this list).
The multidisciplinary and data analysis teams will work collaboratively to envision different usage scenarios, including scenarios where the AI system might be misused or operated in unintended ways. This proactive analysis helps uncover potential vulnerabilities and identifies areas where risks could materialise. For smaller organisations, team members may cover several roles. In this case, it's important to consider how you can segregate roles and responsibilities as much as possible to mitigate conflict-of-interest risk and ensure accountability.
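As a rough illustration of the kind of post-market analysis described above, the sketch below scans a production log for usage scenarios whose incident rate exceeds an acceptability bar. The record structure and the 2% threshold are hypothetical assumptions; a real pipeline would draw on your actual logging and incident-reporting infrastructure.

```python
# Minimal sketch: flagging anomalies in post-market monitoring data.
# The record fields and the 2% threshold are illustrative assumptions,
# not values prescribed by the EU AI Act.
from collections import Counter

# Each record is a (scenario, had_incident) pair taken from production logs.
monitoring_log = [
    ("intended_use", False),
    ("intended_use", True),
    ("foreseeable_misuse", True),
    ("intended_use", False),
    ("foreseeable_misuse", True),
]

INCIDENT_RATE_THRESHOLD = 0.02  # hypothetical acceptability bar

totals, incidents = Counter(), Counter()
for scenario, had_incident in monitoring_log:
    totals[scenario] += 1
    if had_incident:
        incidents[scenario] += 1

for scenario in totals:
    rate = incidents[scenario] / totals[scenario]
    if rate > INCIDENT_RATE_THRESHOLD:
        # In practice this would feed the risk register and alert the risk owner.
        print(f"FLAG: {scenario} incident rate {rate:.1%} exceeds threshold")
```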
Mitigation
This stage involves implementing measures that are reasonable and aligned with your risk assessment. It also requires you to consider the combined effects of the requirements for high-risk AI systems specified in Chapter 2 of the Act, as well as any harmonised standards relevant to your AI system. These measures must address various aspects of safety, ethics, and compliance.
Furthermore, your company must focus on mitigating potential hazards and the overall residual risk of the high-risk AI system. This involves adopting measures to ensure these risks are acceptable, considering the AI system's intended use or foreseeable misuse. Any remaining risks should be communicated to users.
The Act stipulates that to determine the most appropriate risk management measures, the following must be guaranteed:
Minimising risks through effective design and development.
Implementing mitigation and control measures for risks that cannot be eradicated.
Providing comprehensive information, especially regarding specific risks, and offering user training when necessary.
Additionally, the organisation must take into account the expected technical knowledge, experience, and education of the users, all within the context of the intended usage environment, to effectively eliminate or reduce risks.
A practical approach includes:
Establishing a monitoring framework to stay up to date with AI advancements, ensuring that the system's measures remain state of the art.
Allocating resources based on risk severity and seamlessly integrating these measures into the design and development process.
Implementing policies that define how often risk assessments for acceptability are conducted, taking into account various usage and misuse scenarios (see the sketch after this list).
Developing customisable communication templates to address any remaining risks transparently with users and stakeholders, providing details that enhance your brand's reputation.
Formulating a comprehensive risk mitigation plan covering technical controls and user guidance, and maintaining development records to monitor identified risks and track mitigation efforts effectively.
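One way to operationalise such a policy is a periodic residual-risk acceptability check, sketched below. The quarterly review interval, the risk scale, and the acceptability rule are illustrative assumptions rather than values taken from the Act.

```python
# Minimal sketch: a quarterly residual-risk acceptability check.
# The review interval, the risk scale, and the acceptability rule are
# illustrative assumptions, not requirements quoted from the Act.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)          # hypothetical policy: quarterly
ACCEPTABLE_RESIDUAL_LEVELS = {"low", "medium"}

risk_register = [
    {"id": "RISK-001", "residual": "medium", "last_reviewed": date(2024, 1, 10)},
    {"id": "RISK-002", "residual": "high",   "last_reviewed": date(2024, 3, 2)},
]

for risk in risk_register:
    overdue = date.today() - risk["last_reviewed"] > REVIEW_INTERVAL
    unacceptable = risk["residual"] not in ACCEPTABLE_RESIDUAL_LEVELS
    if overdue or unacceptable:
        # In practice this would open a ticket or notify the risk owner;
        # here we simply print the flag.
        print(f"{risk['id']}: unacceptable={unacceptable}, review overdue={overdue}")
```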
Testing
In this phase, the Act highlights three key elements:
Thorough Testing
Appropriate Testing Procedures
Timely Testing
In line with identifying suitable risk management measures, the proposed Act mandates testing high-risk AI systems to ensure their consistent compliance with prescribed requirements and intended functionality.
Under the EU AI Act, these measures encompass a range of strategies and actions aimed at mitigating potential risks associated with high-risk AI systems. Here are some examples:
Robust technical controls.
Algorithmic transparency.
Ethical guidelines.
Fail-safe mechanisms.
Regular monitoring of system performance.
In addition, the proposed Act emphasises aligning testing procedures with the AI system's intended purpose, focusing solely on what's necessary to achieve that goal.
Lastly, compliance with this regulation requires following these timing requirements for testing:
Testing activities should be distributed across various stages of the AI system's development lifecycle.
These tests must be concluded before the system is placed on the market or put into service.
Predefined metrics and thresholds must be applied, tailored to align with the AI system's intended purpose.
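As a rough sketch of what testing against predefined metrics and thresholds can look like, the release gate below blocks deployment when evaluation results miss the bar. The metric names and threshold values are hypothetical; the Act expects thresholds appropriate to the system's intended purpose but does not prescribe numbers.

```python
# Minimal sketch: a pre-deployment gate checking model metrics against
# predefined thresholds. Metric names and values are illustrative assumptions.

THRESHOLDS = {
    "accuracy": 0.95,             # must be at least this high
    "false_positive_rate": 0.02,  # must be at most this low
}

def release_gate(metrics: dict[str, float]) -> bool:
    """Return True only if every predefined threshold is met."""
    return (
        metrics["accuracy"] >= THRESHOLDS["accuracy"]
        and metrics["false_positive_rate"] <= THRESHOLDS["false_positive_rate"]
    )

# Example run with hypothetical evaluation results
results = {"accuracy": 0.97, "false_positive_rate": 0.03}
if not release_gate(results):
    print("Block release: metrics outside predefined thresholds", results)
```

Running this gate at each stage of the development lifecycle, and archiving the results, also produces the testing records described in the practical steps below.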
Consider these practical steps to streamline this phase:
Designate a testing leader within the team to oversee each stage of the process, from design to pre-deployment.
Concentrate testing efforts on essential tasks to fulfil the system's purpose, avoiding unnecessary complexity and cumbersome processes. Consider using MLOps and Explainable AI solutions that can help you streamline this.
Maintain comprehensive records of all testing procedures, results, and any corrective actions, all while aligning with predefined metrics and thresholds.
Establish clear reporting mechanisms to communicate outcomes to relevant stakeholders, including regulators if necessary.
Additional considerations
Vulnerability & Integration
In alignment with the Act, the risk management system must also address these two specific points:
Vulnerable Users: Evaluate whether the high-risk AI system could be accessed by, or have an influence on, children.
The organisation can put practical measures in place, such as age verification mechanisms, child-friendly interfaces, guardian controls, and collaboration with experts. These steps ensure that high-risk AI systems are designed and used responsibly, safeguarding the well-being of young users and aligning with the EU AI Act's emphasis on protecting vulnerable individuals.
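By way of illustration, the sketch below shows the simplest form of an age gate in front of a high-risk feature. The minimum age and the self-declared birth date are illustrative assumptions; production systems typically pair such a check with verified identity or guardian consent.

```python
# Minimal sketch: an age gate for a high-risk AI feature. The threshold and
# the self-declared birth date are illustrative simplifications.
from datetime import date

MINIMUM_AGE = 18  # hypothetical threshold; set according to your legal assessment

def is_of_age(birth_date: date) -> bool:
    """Return True if the user has reached MINIMUM_AGE as of today."""
    today = date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= MINIMUM_AGE

# Example: a self-declared birth date that fails the check
if not is_of_age(date(2012, 5, 1)):
    print("Route to a child-safe interface and require guardian consent")
```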
Directive 2013/36/EU: Credit institutions covered by this directive must integrate the EU AI Act's risk management procedures into their existing framework.
To integrate the EU AI Act's risk management procedures, credit institutions should start with a review of current risk processes, identify the relevant sections of the Act, customise integration plans, maintain documentation, conduct tests, set up feedback mechanisms, schedule audits, and ensure ongoing compliance and improvement aligned with the Act and the institution's evolving risk management needs.
Monitoring & Updating
Just like GDPR compliance, risk management systems under the AI Act require ongoing monitoring and updating to remain effective.
This entails regularly reviewing emerging risks, staying informed about technological advancements, adapting to changing regulations, and meticulously documenting the entire process.
It's an ever-evolving process that must be well documented.
Due to this, your business should emphasise the following:
Empower your multidisciplinary AI development team to cultivate a culture of vigilance and adaptability.
Schedule regular audits or reviews of your system to verify that it aligns with current regulations and standards. Make updates and improvements as needed.
Collect feedback from stakeholders and adjust your processes and systems accordingly.
Establish documentation standards for the management system.
Specify the format, naming conventions, and where these documents will be stored.
Choose appropriate software or tools for creating, managing, and storing documents. Consider using document management systems, cloud-based solutions, or dedicated compliance software.
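To show how lightweight such standards can be in practice, the sketch below validates a document name against a naming convention and maps it to a storage location. The folder layout and name pattern are invented for illustration; the Act does not define a documentation format.

```python
# Minimal sketch: enforcing a documentation naming convention and storage
# layout. The root folder, categories, and name pattern are hypothetical.
import re
from pathlib import Path

DOCS_ROOT = Path("compliance_docs")  # hypothetical storage root
NAME_PATTERN = re.compile(r"^(RMS|TEST|AUDIT)-\d{4}-\d{2}-v\d+\.md$")

def store_path(doc_name: str) -> Path:
    """Validate a document name and return its canonical storage location."""
    if not NAME_PATTERN.match(doc_name):
        raise ValueError(f"{doc_name!r} violates the naming convention")
    category = doc_name.split("-", 1)[0]
    return DOCS_ROOT / category / doc_name

print(store_path("RMS-2024-06-v2.md"))  # compliance_docs/RMS/RMS-2024-06-v2.md
```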
Conclusion
If you're familiar with risk management systems, you may have noticed that the risk management requirements outlined in the EU AI Act share many similarities with established frameworks, such as ISO 27001. Both stress a similar approach to continuous monitoring and adaptation to evolving risks.
This is no coincidence, as the EU AI Act has been heavily influenced and inspired by the recommendations and work of ISO, NIST, and the OECD.
Businesses that have already adopted ISO 27001 can smoothly integrate the risk management system required by the proposed Act, leveraging their existing risk assessment and mitigation mechanisms.
Unsure if your AI system is considered high-risk according to the EU AI Act? Take the free High-Risk Category Assessment: