
The EU’s AI Act And Product Safety

The first draft of the Artificial Intelligence Act (the “AI Act”) was proposed by the EU Commission in April 2021, making it the first substantive framework of its kind. It aims to provide a single framework for AI products and services used in the EU, ensuring that products placed on the EU market are safe while allowing for innovation.

The Act will apply to systems used and products placed on the EU market – even where the providers are not in the EU – and adopts a risk-based approach akin to that commonly seen in Medical Device Regulations, with obligations proportional to the level of risk.

There is no single agreed definition of AI within academia or industry, so defining its scope and seeking to regulate it in such a comprehensive manner is a bold approach by the European Commission, akin to its ambitions when introducing the General Data Protection Regulation (GDPR): to put in place a gold-standard, globally influential regulatory framework.

Currently the AI Act has entered the final stage of the legislative process, with the EU Parliament and Member States thrashing out the details of the final wording; certain aspects in particular have been subject to intense debate. Indeed, as recently as early December the final trilogues were taking place and substantive amendments were being debated. It has been announced that a political agreement has been reached on the AI Act, but we’re awaiting publication of a final draft, likely in the New Year. Where possible, we’ve included the reported outcome of those debates in the analysis below.

Once a final form is agreed and approved, the AI Act will enter into law and, following a grace period of up to two years, its requirements will apply. The European Commission’s ambition is for a final draft to be agreed before next year’s European elections, for fear that the elections could otherwise cause significant delays.

While the draft may still be subject to some changes, its fundamentals represent such a step change for actors in this space that businesses deploying AI should understand the proposed requirements and their passage into law.

The key provisions of the law include:

The Act follows a risk-based approach whereby AI systems are categorised by the level of risk they pose, with obligations proportional to that risk.

Risk Levels

1) Prohibited uses: Some forms of AI are explicitly prohibited under the AI Act as they are deemed to pose an unacceptable level of risk and/or to serve unacceptable purposes. See further detail below.

2) High risk: Some forms of AI are classed as high risk, where they:

  a) are (1) intended to be used as a product covered by the Union harmonisation legislation listed in Annex II (or as a safety component of such a product), which covers products such as machinery, toys and medical devices, and (2) the product is required to undergo a third-party conformity assessment with a view to being placed on the market or put into service; and/or
  b) are listed specifically in Annex III (i.e. biometric ID, critical infrastructure safety, education and vocational training, employment, public benefits, law enforcement, border control and the administration of justice).

3) Low or limited risk: Other forms of AI not falling within the prohibited use or high-risk categories will be deemed low or limited risk. These are subject to much less prescriptive obligations. Generally speaking, this category includes spam filters, chatbots and other non-intrusive products.

Prohibited Uses

Some forms of AI are explicitly prohibited under the AI Act including:

  • Those that deploy subliminal techniques to “materially distort a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm”;
  • Those that exploit vulnerabilities of a specific group due to age or disability to cause physical or psychological harm;
  • Those deployed by or for public authorities to classify or evaluate people based on behaviour or personality, where the resulting scores could lead to detrimental treatment of people or groups that is unrelated to the original context and/or disproportionate;
  • Those involving real-time biometric ID in public spaces for law enforcement (unless strictly necessary for finding victims of crime or suspects of specified offences, or preventing a specific, substantial threat to life);
  • Databases built through the bulk scraping of facial images;
  • Systems which categorise individuals based on sensitive personal traits such as race or political views;
  • Software that predicts the likelihood of a person committing a crime based on personal traits; and
  • Emotion recognition in workplace or education environments, except where used for safety reasons.

High-Risk AI System Obligations

  • Required to undergo conformity assessment, including the drawing up of technical documentation and a declaration of conformity, before placing on the market.
  • In certain limited use cases, third-party conformity assessment by a notified body is required, including in cases regarding biometric identification and categorisation system providers.
  • Risk management system to be implemented and maintained.
  • Testing throughout development and prior to placing on the market against defined metrics.
  • Using only data which meets quality criteria to train models (where applicable).
  • Design products with capabilities for automatic recording of event logs which ensure traceability of risk-related events as well as usage and input data.
  • Inclusion of instructions for use which identify the Provider and any risks of use.
  • Design for human oversight during their use.
  • Design to achieve an appropriate level of accuracy, robustness and cybersecurity throughout the system’s lifecycle.
  • Products which continue to learn after being placed on the market should be developed to address potential feedback loops and bias.
  • Registration in an EU database, which is to be developed by the European Commission.
  • Ensuring systems are designed to inform any natural person using them that they are interacting with an AI system and, in the event they generate or manipulate image/audio content to create “deep fakes”, to disclose the artificial generation/manipulation of the content.
  • Undertake post-market monitoring to analyse use and inputs and confirm compliance.
  • Report any serious incident which constitutes a breach to the relevant Market Surveillance Authority within 15 days.
  • In the event of non-conformity, take corrective action to bring the system into conformity, or withdraw or recall it. Where there is no importer within the EU, the producer shall appoint an authorised representative within the EU to cooperate with national competent authorities.

Limited Risk AI System Obligations

Limited risk systems are much less strictly regulated and there are fewer obligations for parties placing such systems on the market. The obligations that do apply include:

Ensuring systems are designed to inform any natural person using them that they are interacting with an AI system and, in the event they generate or manipulate image/audio content to create “deep fakes”, to disclose the artificial generation/manipulation of the content.

Exclusions & Exemptions

Military use exclusion:

There is a specific exclusion for AI systems developed or used exclusively for military purposes, and for those used for law enforcement and judicial enforcement where utilised by public authorities.

Notable proposed exemptions debated but not currently in available draft text

Proposals include exemptions for AI systems:

  • That do not materially influence the outcome of decision-making but perform a narrow procedural task, for example an AI model that transforms unstructured data into structured data or classifies incoming documents into categories;
  • That review a previously completed human activity, i.e. merely provide an additional layer of review on top of human activity;
  • That are intended to detect decision-making patterns, or deviations from prior decision-making patterns, in order to flag potential inconsistencies or anomalies, for instance in the grading pattern of a teacher; and
  • That are used only to perform tasks preparatory to an assessment relevant to a critical use case, for example file-handling software.

The Consequences For Non-Conformity

It is Member States’ responsibility to decide the exact penalties for breach, as is the case with many EU product safety regimes. However, some specific examples and caps are outlined in the draft, and we can draw conclusions from other regimes which may help us understand the potential penalties for breach. The possible penalties include: