Ilya Vasilenko

EU AI Regulation

Why?

The AI Regulation of the European Union will be an important part of business for every company that produces or applies AI systems. It is important that companies become aware of the Regulation and take action. This practical guide is intended to spread the word and help companies adopt the AI Regulation.

Practical guide

Supporting material for the self-assessment

In this section, you can find the supporting material for the three self-assessment steps referenced in the video above.

Step 1 - Does the EU AI Regulation apply to my business?

Read the definition of AI in Annex I of the Regulation.
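
To make this check concrete, here is a minimal Python sketch of Step 1, assuming you can map your system onto the three technique categories of Annex I (quoted in full below). The category names and input flags are hypothetical, for illustration only.

    # Step 1 sketch: does the Regulation apply at all?
    # The three categories mirror Annex I of the Regulation.
    ANNEX_I_TECHNIQUES = {
        "machine_learning",     # supervised, unsupervised, reinforcement, deep learning
        "logic_and_knowledge",  # knowledge bases, inference engines, expert systems
        "statistical",          # Bayesian estimation, search and optimization methods
    }

    def regulation_applies(techniques_used: set) -> bool:
        # The Regulation applies if the system uses any Annex I technique.
        return bool(techniques_used & ANNEX_I_TECHNIQUES)

    # Example: a product built around a deep-learning recommender.
    print(regulation_applies({"machine_learning"}))  # True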

Step 2 - What is the risk category applicable to your business?

Familiarize yourself with the types of applications that are prohibited by the Regulation.

Familiarize yourself with the definition of High-Risk Applications in Annex III of the Regulation.
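
As a rough illustration, Step 2 can be reduced to a few yes/no questions. The sketch below follows the Regulation's risk pyramid; the input flags are hypothetical, and answering each one still requires the careful reading suggested above.

    # Step 2 sketch: triage into the four risk tiers.
    def risk_tier(prohibited_practice: bool,
                  annex_iii_area: bool,
                  interacts_with_people: bool) -> str:
        if prohibited_practice:     # Article 5 practices (see Title II below)
            return "prohibited"
        if annex_iii_area:          # listed in Annex III (see below)
            return "high-risk"
        if interacts_with_people:   # e.g. chatbots; transparency obligations
            return "limited-risk"
        return "minimal-risk"

    # Example: a CV-screening tool falls under Annex III, point 4 (employment).
    print(risk_tier(False, True, True))  # "high-risk"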

Step 3 - What are the recommended measures?

Familiarize yourself with the technical documentation that will be required for high-risk applications in Annex IV of the Regulation. Part of this documentation will also be needed for a (self-)assessment of Limited-Risk and Minimal-Risk applications. It is therefore worthwhile to assess what your business can reasonably put in place right now in order to build sustainably compliant products and use sustainably compliant services.
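
The illustrative mapping below summarizes, in plain words rather than legal text, the kind of measures each tier typically calls for under the Regulation. Treat it as a starting point for your own gap analysis.

    # Step 3 sketch: broad measures per risk tier (illustrative summary).
    RECOMMENDED_MEASURES = {
        "prohibited": [
            "Do not place on the market; redesign or withdraw the use case",
        ],
        "high-risk": [
            "Technical documentation (Annex IV)",
            "Risk management system (Article 9)",
            "Human oversight measures (Article 14)",
            "EU declaration of conformity",
            "Post-market monitoring plan (Article 61)",
        ],
        "limited-risk": [
            "Transparency obligations (e.g. disclose that users interact with AI)",
            "Relevant subset of Annex IV documentation for (self-)assessment",
        ],
        "minimal-risk": [
            "Voluntary codes of conduct",
            "Lightweight documentation to keep future compliance cheap",
        ],
    }

    for measure in RECOMMENDED_MEASURES["high-risk"]:
        print("-", measure)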

EU AI Regulation - original text

In this section, you will find the original text from the EU AI Regulation referenced in the video above.

Annex I - Artificial Intelligence Techniques and Approaches

ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES referred to in Article 3, point 1:

  1. Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  2. Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  3. Statistical approaches, Bayesian estimation, search and optimization methods.

Original text in PDF format: Annex I.

Title II - Prohibited Artificial Intelligence Practices

Article 5 (extract)

  1. The following artificial intelligence practices shall be prohibited:
    1. the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
    2. the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
    3. the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
      1. detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
      2. detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
    4. the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
      1. the targeted search for specific potential victims of crime, including missing children;
      2. the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
      3. the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

Full original text of the Article 5 in PDF format: Article 5.

Annex III - High-Risk AI Systems referred to in Article 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
  1. Biometric identification and categorisation of natural persons:
    1. AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  2. Management and operation of critical infrastructure:
    1. AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
  3. Education and vocational training:
    1. AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
    2. AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
  4. Employment, workers management and access to self-employment:
    1. AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
    2. AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
  5. Access to and enjoyment of essential private services and public services and benefits:
    1. AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
    2. AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
    3. AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
  6. Law enforcement:
    1. AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
    2. AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
    3. AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3);
    4. AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
    5. AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
    6. AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
    7. AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
  7. Migration, asylum and border control management:
    1. AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
    2. AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
    3. AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
    4. AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
  8. Administration of justice and democratic processes:
    1. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

Original text in PDF format: Annex III.
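
For engineers who prefer code to prose, the eight areas above can be captured as a simple enumeration. The helper below is a hypothetical way to derive the annex_iii_area flag used in the Step 2 sketch earlier; a real classification requires reading each area's sub-points.

    # The eight high-risk areas of Annex III as an enumeration.
    from enum import Enum

    class AnnexIIIArea(Enum):
        BIOMETRIC_IDENTIFICATION = 1
        CRITICAL_INFRASTRUCTURE = 2
        EDUCATION_AND_TRAINING = 3
        EMPLOYMENT = 4
        ESSENTIAL_SERVICES = 5
        LAW_ENFORCEMENT = 6
        MIGRATION_AND_BORDERS = 7
        JUSTICE_AND_DEMOCRACY = 8

    def is_high_risk(areas_touched: set) -> bool:
        # Pursuant to Article 6(2), a system listed in any area is high-risk.
        return len(areas_touched) > 0

    print(is_high_risk({AnnexIIIArea.EMPLOYMENT}))  # True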

Annex IV - Technical Documentation

The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:

  1. A general description of the AI system including:
    1. its intended purpose, the person/s developing the system, the date and the version of the system;
    2. how the AI system interacts or can be used to interact with hardware or software that is not part of the AI system itself, where applicable;
    3. the versions of relevant software or firmware and any requirement related to version update;
    4. the description of all forms in which the AI system is placed on the market or put into service;
    5. the description of hardware on which the AI system is intended to run;
    6. where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products;
    7. instructions of use for the user and, where applicable installation instructions;
  2. A detailed description of the elements of the AI system and of the process for its development, including:
    1. the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
    2. the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
    3. the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;
    4. where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);
    5. assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Articles 13(3)(d);
    6. where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2;
    7. the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).
  3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
  4. A detailed description of the risk management system in accordance with Article 9;
  5. A description of any change made to the system through its lifecycle;
  6. A list of the harmonised standards applied in full or in part the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied;
  7. A copy of the EU declaration of conformity;
  8. A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).

Original text in PDF format: Annex IV.
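
A hypothetical way to track this documentation in code: each field of the sketch below corresponds to one of the eight numbered items above, and missing_sections performs the gap analysis suggested in Step 3. The class and field names are illustrative, not prescribed by the Regulation.

    # Annex IV skeleton: one field per top-level documentation item.
    from dataclasses import dataclass, fields
    from typing import Optional

    @dataclass
    class AnnexIVDocumentation:
        general_description: Optional[str] = None        # item 1
        development_process: Optional[str] = None        # item 2
        monitoring_and_control: Optional[str] = None     # item 3
        risk_management_system: Optional[str] = None     # item 4 (Article 9)
        lifecycle_changes: Optional[str] = None          # item 5
        harmonised_standards: Optional[str] = None       # item 6
        declaration_of_conformity: Optional[str] = None  # item 7
        post_market_monitoring: Optional[str] = None     # item 8 (Article 61)

        def missing_sections(self) -> list:
            # Names of sections not yet drafted: the Step 3 gap analysis.
            return [f.name for f in fields(self) if getattr(self, f.name) is None]

    doc = AnnexIVDocumentation(general_description="Intended purpose, versions, ...")
    print(doc.missing_sections())  # all items except general_description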
