
Decoding the EU AI Act: Record-Keeping Requirements

[Image: a robot holding a clock with documents flying around]

Written by Ana Carolina Teles, AI & GRC Specialist at Palqee Technologies

In our ongoing series, Decoding the EU AI Act: The European Union's Approach to Artificial Intelligence, we are exploring the influence of the EU AI Act on the tech scene. This legislation aims to promote the responsible deployment of AI technologies in the European market.

In our previous articles, we discussed the requirements for developers of high-risk AI systems to comply with the proposed Act.

In this post, we'll delve into the record-keeping obligation outlined in Article 12 of the Act.

---

Don't forget to check out the Palqee EU AI Act Framework. It's a step-by-step guide to ensure you're aligned with AI compliance standards. Access it through this link!


What does record-keeping under the EU AI Act mean?

As outlined in Article 12 of the AI Act, record-keeping refers to the systematic process of capturing and maintaining records or logs of events, especially when high-risk AI systems are in operation.

But what exactly does this imply?

In simple terms, it’s like having a detailed digital diary for these systems. Every action the system takes is recorded, like how you'd write down significant moments from your day. So, if there's ever a need to look back and understand what happened – whether it's for checking the system's performance, investigating an issue, or ensuring it's working safely – we have a clear and detailed record to consult. This allows developers, auditors, and regulators to closely examine how the system works, ensuring its transparency and traceability throughout its entire lifecycle.

What specific record-keeping requirements must organisations adhere to under the EU AI Act?

Specifically, the AI Act outlines the following record-keeping obligations for providers of high-risk AI systems:

  • Design and Development: In line with paragraph 1 of Article 12, high-risk AI systems need to be equipped with the capability to automatically log events during their operation.

  • Adherence to Standards: The logging capabilities of these AI systems must conform to established standards or widely accepted specifications. For instance, NIST SP 800-92 provides guidelines on security log management, emphasising the importance of generating, transmitting, storing, accessing, and disposing of log data. Additionally, IBM's Common Base Event structure provides a standardised format for event data across diverse sources in WebSphere Application Server.

  • Ensure Traceability: Also, these logging features must ensure consistent traceability throughout the AI system's lifecycle, in line with its designated purpose.

  • Active Monitoring: The AI system must possess capabilities that proactively monitor its activities, allowing for the identification of potential risks or necessary modifications.

  • Facilitate Post-Market Monitoring: These functionalities ought to support post-market monitoring as outlined in Article 61 of the proposed Act.

  • Detailed Logging for Specific High-Risk AI Systems: For the specific high-risk AI systems referred to in point 1(a) of Annex III, the Act stipulates that the logging capabilities must, at a minimum:

    • Record the duration of each system use, capturing both the start and end times.

    • Maintain a record of the reference database used to check input data.

    • Log the input data that resulted in matches.

    • Identify the natural persons who verified the results, as per Article 14(5).
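To make the "automatic logging" obligation more concrete, here is a minimal sketch of how a provider might emit structured, timestamped event records during operation. The JSON schema and field names below are our own illustrative assumptions; the Act does not prescribe a specific log format.

```python
import json
from datetime import datetime, timezone

def log_event(event_type: str, system_id: str, details: dict) -> str:
    """Build a structured, timestamped log entry for an AI system event.

    The schema (event_type, system_id, timestamp, details) is an
    illustrative assumption, not a format mandated by the EU AI Act.
    """
    entry = {
        "event_type": event_type,  # e.g. "session_start", "match", "alert"
        "system_id": system_id,    # identifier of the AI system
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "details": details,        # event-specific payload
    }
    return json.dumps(entry)

# Example: record the start of a system session
record = log_event("session_start", "hr-screening-v2", {"operator": "op-17"})
```

Emitting each event as a self-describing JSON line makes the log easy to ship to standard log-management tooling and to query later during an audit.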

Framework for ensuring compliance with Record-Keeping obligations

Translating the often broadly defined requirements of the EU AI Act into an organisation's daily operations is always a challenge. Below is a step-by-step guide for providers of high-risk AI systems to help ensure compliance with the new law's requirements:

1. System Design & Development Audit

  1. Objective: Ensure that AI systems are designed with automatic event recording capabilities.

  2. Action Steps:

    1. Review the design and development process of the AI system.

    2. Integrate automatic logging at the initial stages of system development.

2. Standardisation & Specification Alignment

  1. Objective: Certify that logging capabilities meet recognised standards.

  2. Action Steps:

    1. Identify and list recognised standards and common specifications for AI logging.

    2. Conduct regular audits to ensure alignment with these standards.

3. Traceability Assurance

  1. Objective: Guarantee a consistent level of traceability throughout the AI system's lifecycle.

  2. Action Steps:

    1. Implement a system that checks the traceability of each log entry.

    2. Regularly review and update the traceability mechanisms to align with the system's intended purpose.

4. Active Monitoring System

  1. Objective: Monitor the AI system's operation to identify potential risks or modifications.

  2. Action Steps:

    1. Develop a dashboard or interface that provides real-time insights into the AI system's operations.

    2. Set up alerts for situations that might present risks or require modifications.
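One simple way to implement such alerts is a sliding-window check on recent outcomes, firing when the false-positive rate exceeds a threshold. This is a sketch: the window size and rate limit below are hypothetical tuning values, not figures taken from the Act.

```python
from collections import deque

class FalsePositiveMonitor:
    """Track recent identification outcomes and flag when the
    false-positive rate in a sliding window exceeds a threshold.

    window_size and max_rate are illustrative tuning parameters.
    """
    def __init__(self, window_size: int = 100, max_rate: float = 0.05):
        self.outcomes = deque(maxlen=window_size)
        self.max_rate = max_rate

    def record(self, is_false_positive: bool) -> bool:
        """Record one outcome; return True if an alert should fire."""
        self.outcomes.append(is_false_positive)
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.max_rate

# Simulate 7 correct identifications followed by 3 false positives
monitor = FalsePositiveMonitor(window_size=10, max_rate=0.2)
alerts = [monitor.record(fp) for fp in [False] * 7 + [True] * 3]
```

In the simulated run, the alert fires once the false-positive rate in the window climbs above 20%, which is the kind of signal a technician could investigate before the problem reaches end users.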

5. Post-Market Monitoring Integration

  1. Objective: Ensure that the AI system supports post-market monitoring as per Article 61.

  2. Action Steps:

    1. Integrate post-market monitoring tools or software.

    2. Schedule regular post-market reviews to assess the AI system's performance in real-world scenarios.

6. Detailed Logging Mechanism for Specific AI Systems

  1. Objective: Establish detailed logging for specific high-risk AI systems as per Annex III.

  2. Action Steps:

    1. Determine whether your AI systems align with the categories outlined in Annex III.

    2. If they do, incorporate enhanced logging details, including usage duration, reference databases, matched input data, and identification of verifiers.
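For systems falling under point 1(a) of Annex III, the four minimum logging elements could be captured in a single record structure, with a completeness check before the record is persisted. This is a sketch; the class and field names are our own, not terms from the Act.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class AnnexIIIUsageRecord:
    """Minimum logging elements under Article 12 for the specific
    high-risk systems in Annex III point 1(a): usage period,
    reference database, matched input data, and the human verifier
    (Article 14(5))."""
    start_time: datetime     # start of each use of the system
    end_time: datetime       # end of each use of the system
    reference_database: str  # database input data was checked against
    matched_input: str       # input data that led to a match
    verified_by: str         # natural person who verified the result

record = AnnexIIIUsageRecord(
    start_time=datetime(2024, 5, 1, 9, 0),
    end_time=datetime(2024, 5, 1, 9, 4),
    reference_database="watchlist-v3",
    matched_input="face-embedding-8841",
    verified_by="officer-122",
)
# Flag any mandatory field left empty before the record is stored
missing = [k for k, v in asdict(record).items() if v in (None, "")]
```

Treating these four elements as mandatory fields in the schema, rather than free-text log lines, makes it straightforward to prove at audit time that no minimum element was ever omitted.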

7. Integration into Quality Management System (QMS)

  1. Objective: Ensure that record-keeping procedures are seamlessly integrated into the QMS, aligning with the requirements of Article 17 of the EU AI Act.

  2. Action Steps:

    1. If you already have a QMS, pinpoint areas where record-keeping aligns with quality management, such as risk management, compliance, and documentation.

    2. Align the integration of record-keeping with the overarching objectives of your QMS, ensuring it complements the organisation's commitment to quality and risk management.

    3. Incorporate mechanisms within the QMS to monitor the efficacy of record-keeping procedures, and periodically review their alignment with quality goals.

    4. Document all steps undertaken to weave record-keeping into your QMS, providing a reference for future audits and showcasing the organisation's dedication to compliance and quality.

8. Continuous Training & Awareness

  1. Objective: Assure that all stakeholders, especially those involved in the verification of results, are aware of their roles and responsibilities.

  2. Action Steps:

    1. Conduct regular training sessions on the importance of record-keeping and its implications.

    2. Update stakeholders on any changes or updates to the EU AI Act's obligations.

How does Record-Keeping ensure Safety and Compliance?

Record-keeping plays a dual role in both monitoring and compliance. To grasp this clearly, picture a security system at an international airport employing real-time biometric identification to authenticate passenger identities. This system scans faces and matches them against a database of known individuals, such as criminals or persons of interest.

  • Traceability: Every time a passenger walks past the camera, the system logs this event. If the system identifies a match or even a potential match, it records the exact time, the specific camera, and the matched profile from the database. It is important because if there's ever a need to understand why the system flagged a particular individual, there's a clear record to refer back to. For instance, if a passenger is wrongly identified as a criminal, the logs can be checked to understand what went wrong and where.

  • Monitoring and Risk Management: The system doesn't just passively record data. It actively monitors for anomalies or unexpected patterns. Suppose the system starts flagging an unusually high number of false positives. In that case, the logs can help technicians identify whether there's a fault in the system or whether it's being tampered with. Moreover, the detailed logs, such as the reference database used and the specific input data (in this case, the facial features of the passenger) that led to a match, ensure that every decision made by the AI can be reviewed in detail. This is important for accountability, especially at an airport, where an incorrect identification can have serious consequences for the person affected.

  • Human Oversight: Now, let's say the system flags a passenger as a person of interest. Before any action is taken, the Act mandates (Article 14) that these identifications are verified by humans. The logs will have a record of which security personnel verified the AI's decision, adding an extra layer of accountability.
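The traceability described above ultimately depends on being able to reconstruct, from the logs, everything that relates to one flagged individual. As a sketch (over an assumed in-memory list of entries; a real deployment would query a log store), the audit trail for a matched profile might be retrieved like this:

```python
def audit_trail(logs: list[dict], profile_id: str) -> list[dict]:
    """Return all log entries for a matched profile, sorted by time,
    so reviewers can reconstruct why the system flagged a person."""
    return sorted(
        (e for e in logs if e.get("matched_profile") == profile_id),
        key=lambda e: e["timestamp"],
    )

# Hypothetical entries from the airport example in the text
logs = [
    {"timestamp": "2024-05-01T09:03:00Z", "camera": "gate-4",
     "matched_profile": "poi-77", "verified_by": "officer-122"},
    {"timestamp": "2024-05-01T08:55:00Z", "camera": "gate-2",
     "matched_profile": "poi-12", "verified_by": "officer-045"},
    {"timestamp": "2024-05-01T09:01:00Z", "camera": "gate-3",
     "matched_profile": "poi-77", "verified_by": "officer-122"},
]
trail = audit_trail(logs, "poi-77")
```

Each entry in the trail carries the camera, the time, and the human verifier, so a wrongful match can be traced back step by step, including who signed off on the AI's decision.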


Incorporating record-keeping methodologies from the very beginning of an AI system's design journey positions organisations for a more efficient post-market monitoring process. This approach is similar to the principle of privacy-by-design in data protection: by including privacy measures from the start, monitoring and compliance become simpler and more effective at a later stage. Fulfilling the record-keeping requirements is a measure AI companies can implement fairly easily with existing methods for logging and monitoring. Furthermore, it supports compliance with other parts of the EU AI Act, as it forms a basis for good AI governance and risk management.

