The European Union (EU) Artificial Intelligence (AI) Act (Regulation (EU) 2024/1689) is the first comprehensive regulation of AI. It establishes a proactive framework for regulating AI systems and minimising the risks they pose to the health, safety, and fundamental rights of end users. Article 23 establishes obligations for importers of high-risk AI systems to ensure these systems comply with EU standards before and after entering the market. At its core, the article conceptualizes importers as key gatekeepers in the AI supply chain, bridging non-EU providers (manufacturers or developers) and the EU market. Importers are not mere intermediaries but active participants accountable for verifying conformity, mitigating risks, and facilitating regulatory oversight. This aligns with the Act’s broader goal of promoting safe, trustworthy AI while protecting fundamental rights, health, and safety.

The “importer” is defined in the Act as a natural or legal person located or established in the Union (EU) that places on the market an AI system bearing the name or trademark of a natural or legal person established in a third country (outside the EU). This definition emphasizes the importer’s role in introducing foreign-made AI systems into the EU, making them liable for compliance even if the original provider is extraterritorial.

The article’s obligations can be conceptualized into three phases: pre-market verification (ensuring readiness before market entry), ongoing responsibility (during storage, transport, and handling), and post-market cooperation (record-keeping and interaction with authorities). These elements create a layered accountability system, where importers must proactively check, react to issues, and support enforcement. Below, I’ll explain each element clearly, drawing from the article’s seven paragraphs.

1. Pre-Market Verification (Article 23(1))

Core Obligation: Before placing a high-risk AI system on the market, importers must verify its conformity with the EU AI Act.

Key Elements to Check:

- The provider has completed the relevant conformity assessment procedure (as outlined in Article 43, which involves third-party audits for high-risk systems).
- The provider has prepared technical documentation (per Article 11 and Annex IV, detailing design, development, and risk management).
- The system bears the CE marking (indicating EU compliance) and is accompanied by the EU declaration of conformity (Article 47) and instructions for use.
- The provider has appointed an authorized representative in the EU (Article 22(1)) to handle regulatory matters.

Explanation: This acts as a “due diligence checkpoint” to prevent non-compliant systems from entering the EU. Importers aren’t required to perform assessments themselves but must confirm the provider’s work, reducing the risk of unsafe AI (e.g., biased facial recognition tools) reaching users.
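The checklist above can be sketched as a simple data model. This is purely illustrative: the Act prescribes the checks, not any data structure, and every field and method name here is a hypothetical choice, not official terminology.

```python
from dataclasses import dataclass

@dataclass
class PreMarketCheck:
    """Illustrative record of the Article 23(1) verification items.

    Field names are hypothetical; they mirror the checks listed
    above, not any official schema.
    """
    conformity_assessment_done: bool   # Article 43 procedure completed by the provider
    technical_docs_drawn_up: bool      # Article 11 / Annex IV technical documentation
    ce_marking_present: bool           # CE marking plus EU declaration of conformity (Art. 47)
    instructions_for_use: bool         # user instructions accompany the system
    eu_authorised_rep_appointed: bool  # Article 22(1) representative appointed in the EU

    def may_place_on_market(self) -> bool:
        # Placement on the market is allowed only if every check passes.
        return all(vars(self).values())

# One missing item (no authorized representative) blocks market placement.
check = PreMarketCheck(True, True, True, True, False)
print(check.may_place_on_market())  # False
```

The "all checks or nothing" logic reflects the point made above: the importer confirms the provider's work in full rather than performing its own assessment.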

2. Handling Non-Conformity (Article 23(2))

Core Obligation: If the importer has sufficient reason to believe the system is not in conformity, is falsified, or is accompanied by falsified documentation, it must not place the system on the market until it has been brought into conformity.

Key Elements:

- Assess risks: If the system poses a risk (as defined in Article 79(1), e.g., threats to health or rights), inform the provider, the authorized representative, and the market surveillance authorities.
- Proactive remediation: Bring the system into conformity before proceeding.

Explanation: This element introduces a “risk escalation” mechanism, empowering importers to halt distribution and notify stakeholders. It conceptualizes importers as risk managers, ensuring accountability flows back to the provider while protecting the EU market from immediate harms.
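The escalation mechanism described above can be sketched as a small decision function. The function name, boolean model, and party labels are illustrative assumptions for clarity; the Act itself defines the duties, not this flow.

```python
from typing import Callable

def handle_suspected_issue(
    non_conformant: bool,
    poses_risk: bool,                 # a risk within the meaning of Article 79(1)
    notify: Callable[[str], None],
) -> bool:
    """Sketch of the Article 23(2) escalation logic.

    Returns True only if the system may proceed to market.
    All names here are hypothetical, not taken from the Act.
    """
    if non_conformant:
        if poses_risk:
            # Inform the provider, the authorized representative,
            # and the market surveillance authorities.
            for party in ("provider", "authorised representative",
                          "market surveillance authorities"):
                notify(party)
        return False  # hold back until brought into conformity
    return True

notified = []
handle_suspected_issue(True, True, notified.append)
print(notified)  # ['provider', 'authorised representative', 'market surveillance authorities']
```

Note the asymmetry: non-conformity alone blocks market placement, while the notification duty is triggered only when an Article 79(1) risk is also present.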

3. Identification and Traceability (Article 23(3))

Core Obligation: Importers must add their contact details (name, trade name/mark, address) to the AI system, its packaging, or documentation.

Key Elements:

- Applies where applicable (e.g., on physical products or digital interfaces).
- Ensures traceability for users, regulators, and supply chain actors.

Explanation: This fosters transparency and accountability, making it easier to contact the importer for complaints, recalls, or investigations. It’s a simple yet crucial element in the Act’s emphasis on identifiable responsibility in the AI value chain.

4. Storage and Transport Conditions (Article 23(4))

Core Obligation: While the system is under the importer’s control, ensure storage or transport doesn’t compromise compliance with the Act’s requirements (Section 2, covering aspects like robustness, accuracy, and cybersecurity).

Key Elements:

- Monitor conditions (e.g., temperature, handling) that could affect the system’s integrity.
- Applies only where relevant (e.g., hardware-based AI).

Explanation: This conceptualizes the importer’s role extending beyond paperwork to physical stewardship, preventing degradation that could turn a compliant system non-compliant (e.g., data corruption in transit affecting AI performance).

5. Record-Keeping (Article 23(5))

Core Obligation: Retain key documents for 10 years after the system is placed on the market or put into service.

Key Elements:

- Copies of: the notified body certificate (if applicable), the instructions for use, and the EU declaration of conformity (Article 47).
- The retention period aligns with potential long-term risks in AI systems.

Explanation: This element supports auditability and long-term enforcement, allowing regulators to retroactively verify compliance. It underscores the Act’s forward-looking approach to AI, where issues might emerge years later (e.g., evolving biases in machine learning models).
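The ten-year clock described above is simple calendar arithmetic. The sketch below is an illustrative helper (the function name and leap-day handling are assumptions, not prescribed by the Act):

```python
from datetime import date

RETENTION_YEARS = 10  # Article 23(5) retention period

def retention_deadline(placed_on_market: date) -> date:
    """Date until which the Article 23(5) documents must be kept.

    Adds ten calendar years; a 29 February start date is shifted
    to 1 March when the target year is not a leap year.
    """
    try:
        return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)
    except ValueError:  # 29 February in a non-leap target year
        return placed_on_market.replace(
            year=placed_on_market.year + RETENTION_YEARS, month=3, day=1
        )

print(retention_deadline(date(2025, 8, 1)))  # 2035-08-01
```

In practice the clock runs from whichever is relevant: the date the system was placed on the market or the date it was put into service.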

6. Information Provision to Authorities (Article 23(6))

Core Obligation: Upon reasoned request, provide competent authorities with all necessary information and documentation to prove conformity.

Key Elements:

- Include the records from paragraph 5 and ensure the technical documentation is available.
- Deliver it in a language the authority can easily understand.

Explanation: This facilitates regulatory inspections, conceptualizing importers as cooperative partners in oversight. It ensures authorities can access evidence without barriers, enhancing the Act’s enforcement teeth.

7. Cooperation with Authorities (Article 23(7))

Core Obligation: Assist competent authorities in actions related to the imported system, especially to reduce or mitigate risks.

Key Elements:

- Broad cooperation: Includes investigations, corrective measures, and recalls.
- Focus on risk mitigation for high-risk AI.

Explanation: As the final element, this ties everything together by mandating active support, turning importers into allies in public protection. It reflects the Act’s collaborative ecosystem, where private actors help enforce public interest goals.

EU AI Act Article 23: Understanding Obligations of Importers of High-risk AI Systems was originally published in Coinmonks on Medium.
