
Written by Tiffany Cunha, GRC & Data Protection Specialist at Palqee Technologies
The European Union has introduced the EU Artificial Intelligence Act, a comprehensive law that regulates the use of AI systems in the EU. In this series, ‘Decoding AI: The European Union’s Take on Artificial Intelligence’, we break down everything you need to know about the law.
Unsure if your AI system is considered high-risk according to the EU AI Act? Take the free High-Risk Category Assessment.
Taking a risk-based approach
The proposed EU AI Act aims to set a standard for trustworthy AI systems. The challenge, however, is to balance compliance requirements across all sorts of AI use cases and industries. Depending on the AI system and its intended purpose, it can carry different levels of risk. AI systems are often labelled ‘black boxes’ because it can be difficult to understand why they behave a certain way in a real-world environment. This may not be much of a concern for AI tools that help increase productivity, but it can have a negative impact on people's fundamental rights and safety when AI is used, for example, in law enforcement or the medical space, especially when the AI develops or shows bias towards certain groups of people.
The key objective of the EU AI Act is to prioritise safety, provide legal clarity, enforce fundamental rights effectively, and prevent market fragmentation, all without over-regulating AI systems that pose minimal risk.
To maintain this balance, the EU AI Act introduces a risk-based approach that takes a technology-neutral stance to define and assess AI systems. It applies to both providers and users of AI systems within the EU. Regulatory compliance requirements and scope are tailored to the specific level of risk posed by an AI system.
The EU AI Act classifies AI systems into four categories based on their risk level (a simplified sketch of this tiering follows the list):
(i) Low or minimal risk: these systems are not subject to additional legal obligations and can be freely developed and used within the EU. Nevertheless, the EU AI Act proposes the establishment of codes of conduct to encourage providers of non-high-risk AI systems to voluntarily adhere to the mandatory requirements applicable to high-risk AI systems.
(ii) Limited risk: AI systems such as chatbots, emotion recognition systems, biometric categorisation systems, and AI systems involved in generating or manipulating image, audio, or video content (e.g., deepfakes) will be subject to a specific and restricted set of transparency requirements.
(iii) High-risk: Title III (Article 6) of the EU AI Act governs 'high-risk' AI systems that have the potential to negatively affect people's safety or fundamental rights. AI systems in this category have to comply with all the requirements in the EU AI Act, including going through a conformity assessment to obtain a CE mark before the AI system is placed on the EU market.
(iv) Unacceptable risk: Title II (Article 5) of the EU AI Act prohibits AI practices categorised as "unacceptable risk." This includes banning harmful AI systems that use manipulative subliminal techniques, exploit vulnerable groups, employ social scoring by public authorities, or utilise real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with only a few exceptions).
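To make the tiering concrete, here is a minimal sketch in Python of how the four risk levels map to their headline compliance consequences. The `RiskTier` enum and `obligations` helper are our own illustration, not part of the Act or any official tooling:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (simplified, illustrative)."""
    UNACCEPTABLE = "unacceptable"  # Title II, Article 5: prohibited practices
    HIGH = "high"                  # Title III, Article 6: full compliance required
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional legal obligations

def obligations(tier: RiskTier) -> str:
    """Map a risk tier to its headline compliance consequence (illustrative)."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
        RiskTier.HIGH: "Full compliance, incl. conformity assessment and CE mark.",
        RiskTier.LIMITED: "Restricted transparency requirements apply.",
        RiskTier.MINIMAL: "No extra obligations; voluntary codes of conduct encouraged.",
    }[tier]

if __name__ == "__main__":
    print(obligations(RiskTier.HIGH))
```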
High-Risk AI Systems
Compliance requirements for high-risk AI system providers are broad in scope and will require considerable resources to implement and maintain. The EU AI Act limits the definition of high-risk AI systems to two categories:
1. AI that is a Safety Component in a Product, or a Product itself, subject to EU Harmonisation Legislation
Whenever an AI system is used as a safety component, or is itself a product, subject to EU harmonisation laws, the AI system is considered high-risk. The harmonisation legislation covers rules for products and solutions such as toys, cars, medical devices, and lifts, among others.
2. AI Systems Used in Any of the Following Areas
1. Biometric Identification and Categorisation of Natural Persons
The EU AI Act categorises AI systems used for 'real-time' and 'post' remote biometric identification of natural persons as high-risk. These systems are intended to recognise and categorise individuals based on unique biometric traits, such as fingerprints, facial features, or iris patterns. The potential misuse of such technology raises concerns about privacy infringement and the possibility of mass surveillance.
2. Management and Operation of Critical Infrastructure
AI systems intended for use as safety components in the management and operation of critical infrastructure, such as road traffic and the supply of water, gas, heating, and electricity, are also classified as high-risk. Given the vital nature of these services, any malfunction or security breach in AI systems can lead to severe consequences for public safety and well-being.
3. Education and Vocational Training
AI systems intended for determining access or assigning individuals to educational and vocational training institutions, as well as those used for assessing students' performance, fall into this category. These systems can impact a person's educational and career opportunities, potentially leading to biased decisions and unfair treatment.
4. Employment, Workers Management, and Access to Self-Employment
AI systems used for recruitment, candidate evaluation, and performance assessment in the workplace are also considered high-risk. Making employment-related decisions based on AI algorithms can result in discrimination, lack of transparency, and the potential for biased outcomes.
5. Access to Essential Private and Public Services and Benefits
AI systems that evaluate eligibility for public assistance benefits, assess creditworthiness, or dispatch emergency first response services fall under the high-risk category. The use of AI in these areas requires strict safeguards to prevent discrimination, misinformation, or abuse of power.
6. Law Enforcement
AI systems employed by law enforcement authorities for individual risk assessments, detection of deepfakes, evaluation of evidence, and profiling of individuals are classified as high-risk. AI in this area can have a serious impact on personal liberties, which is why its accuracy and fairness must be ensured.
7. Migration, Asylum, and Border Control Management
AI systems used for assessing risks posed by individuals who intend to enter, or have entered, the territory of a Member State, for verifying travel documents, and for examining applications for asylum and residence permits are high-risk. Ensuring the ethical use of AI in migration and border control is crucial to prevent unjust treatment and potential human rights violations.
8. Administration of Justice and Democratic Processes
AI systems designed to assist judicial authorities in researching and interpreting facts and the law are considered high-risk. The potential implications of AI in legal proceedings require careful oversight to maintain transparency and ensure fair outcomes.
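For a first-pass triage, the eight areas above can be expressed as a simple checklist. The sketch below is purely illustrative: the `ANNEX_III_AREAS` list and the `annex_iii_match` helper are our own naming, and a match only signals that proper legal review is needed, not a definitive classification.

```python
# Hypothetical first-pass triage against the high-risk areas listed above.
ANNEX_III_AREAS = [
    "biometric identification and categorisation of natural persons",
    "management and operation of critical infrastructure",
    "education and vocational training",
    "employment, workers management and access to self-employment",
    "access to essential private and public services and benefits",
    "law enforcement",
    "migration, asylum and border control management",
    "administration of justice and democratic processes",
]

def annex_iii_match(use_case_areas: set[str]) -> list[str]:
    """Return the high-risk areas a described use case touches (illustrative)."""
    return [area for area in ANNEX_III_AREAS if area in use_case_areas]

# Example: an AI-driven CV screening tool touches the employment area.
hits = annex_iii_match({"employment, workers management and access to self-employment"})
if hits:
    print("Potentially high-risk; areas matched:", hits)
```

A match here is only a prompt to dig deeper: whether a system is actually high-risk depends on its intended purpose and the final wording of the Act.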
Conclusion
The EU AI Act recognises the potential risks associated with certain AI systems and seeks to regulate their use to protect individuals and society at large. By categorising specific applications as high-risk, the EU aims to ensure the responsible development and deployment of AI technology. The final draft is currently being reviewed in trilogue negotiations. It is expected that some amendments will be made before the final version of the Act passes into law, possibly by the end of 2023 or early 2024. The main focus of the negotiations, however, is providing more clarity on aspects such as the harmonisation rules and the regulatory sandbox.
Start-ups operating in high-risk domains should pay close attention to the requirements outlined in the AI Act to comply with the law and build trust among users and stakeholders. Embracing ethical AI practices and adhering to these guidelines will pave the way for a sustainable and beneficial AI future.