EU Artificial Intelligence Act

Frequently Asked Questions

About the AI Act

The AI Act is a regulation of the European Union, originally proposed by the European Commission, that introduces a common regulatory and legal framework for artificial intelligence.

The EU AI Act has been approved and published.

Details on when organizations need to comply can be found in Article 113.

 

The EU AI Act applies to the development, deployment, and use of AI systems in the European Union. It sets out requirements for both providers and deployers (users) of AI systems, regardless of where they are based, if their AI systems are placed on the market or used within the EU.

The regulation applies to a wide range of AI systems, including both software-based and hardware-based systems. The AI systems covered by the regulation are broadly divided into two categories: high-risk AI systems and non-high-risk AI systems.

AI systems identified as high-risk include AI technology used in:

  • critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
  • educational or vocational training, that may determine the access to education and professional course of someone’s life (e.g. scoring of exams);
  • safety components of products (e.g. AI application in robot-assisted surgery);
  • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan);
  • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • migration, asylum and border control management (e.g. verification of authenticity of travel documents);
  • administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Non-high-risk AI systems are those that do not meet the high-risk criteria but can still affect individuals or society. Chatbots, social media algorithms, and recommender systems, for example, fall into this category.

The EU AI Act also includes provisions related to the use of biometric data for remote identification and the creation of deepfakes. It prohibits the use of certain AI applications, such as social scoring systems that evaluate individuals’ behavior or trustworthiness, and mandates the creation of a European Artificial Intelligence Board to oversee the regulation’s implementation and provide guidance on AI development and use.

Yes, the EU AI Act includes penalties for non-compliance with the regulation. The penalties for non-compliance vary depending on the type of violation and the severity of the risk involved.

For example, non-compliance with the prohibited AI practices can result in fines of up to 7% of a company’s total worldwide annual turnover or €35 million, whichever is higher. Non-compliance with most other obligations, including the requirements for high-risk AI systems, can result in fines of up to 3% of worldwide annual turnover or €15 million, whichever is higher, and supplying incorrect, incomplete or misleading information to authorities can result in fines of up to 1% or €7.5 million. For a company with €1 billion in worldwide annual turnover, for instance, the 7% ceiling works out to €70 million, so that amount, rather than the €35 million floor, is the applicable maximum.

In addition to fines, the EU AI Act provides for corrective measures, such as requiring that a non-compliant AI system be brought into conformity, restricted, withdrawn from the market, or recalled.

These penalty provisions are set out in Article 99 of the final text, and they start to apply in accordance with the timeline laid down in Article 113.

Now that the EU AI Act has been adopted, organizations that develop or use AI systems in the EU will need to take several steps to comply with the regulation, especially if their AI systems are classified as high-risk.

Here are some of the key steps organizations need to take to comply with the EU AI Act:

  1. Identify whether their AI system falls within the scope of the regulation and whether it is classified as high-risk (a simplified triage sketch follows this list).

  2. Ensure that their AI systems comply with the mandatory requirements set out in the regulation, such as transparency, accountability, human oversight, and data quality.

  3. Conduct a risk assessment to identify and mitigate potential risks associated with the development and use of their AI systems.

  4. Ensure that their AI systems are subject to appropriate testing, validation, and auditing processes to ensure compliance with the regulation.

  5. Keep detailed documentation of their AI systems and the processes used to develop and deploy them.

  6. Appoint a designated person responsible for ensuring compliance with the EU AI Act.

  7. Cooperate with national supervisory authorities and provide them with the information they need to carry out their supervisory duties.
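
As a purely illustrative example of step 1, the sketch below shows how an organization might do a first-pass triage of an AI system against the broad high-risk areas listed earlier in this FAQ. The area tags and the classify_system helper are assumptions of this sketch, not terms from the regulation; an actual classification must be based on Annex III and proper legal analysis.

```python
# Hypothetical triage helper: checks an AI system's intended purpose against
# the broad high-risk areas listed in this FAQ. Illustrative only; a real
# classification must follow Annex III of the AI Act and legal review.

HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education_and_vocational_training",
    "product_safety_components",
    "employment_and_worker_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def classify_system(intended_purpose_areas: set[str]) -> str:
    """Return a first-pass risk classification for an AI system.

    `intended_purpose_areas` holds area tags describing where the system
    will be used; the tags are this sketch's own convention.
    """
    if intended_purpose_areas & HIGH_RISK_AREAS:
        return "high-risk: mandatory requirements and conformity assessment apply"
    return "non-high-risk: transparency obligations and voluntary codes of conduct apply"

# Example: a CV-sorting tool used in recruitment is flagged as high-risk.
print(classify_system({"employment_and_worker_management"}))
```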

The exact steps an organization needs to take will depend on its role (for example provider or deployer), the risk classification of its AI systems, and the compliance deadlines set out in Article 113. However, the steps outlined above provide a general overview of what organizations can expect.

Yes, self-assessments play a central role under the EU AI Act. For most high-risk AI systems, the conformity assessment is carried out by the provider itself on the basis of internal control, i.e. a self-assessment against the mandatory requirements. A third-party conformity assessment body (notified body) only needs to be involved for certain categories, such as remote biometric identification systems, or where the AI system is a safety component of a product already covered by other EU harmonisation legislation.

The purpose of the self-assessment is to verify, and document, that the high-risk AI system meets the mandatory requirements before it is placed on the market or put into service, and to identify and address any non-conformities or risks at an early stage. Where a notified body is involved, issues identified during the self-assessment can be addressed before that body begins its evaluation.

The conformity assessment must be completed before a high-risk AI system is placed on the EU market or put into service, and it must be repeated whenever the system is substantially modified.

The assessment must evaluate the conformity of the high-risk AI system with the mandatory requirements set out in the regulation, such as transparency, accountability, human oversight, and data quality. The resulting technical documentation and the EU declaration of conformity must be kept and made available to the national authorities upon request.

In summary, self-assessment based on internal control is the standard conformity assessment route for most high-risk AI systems under the EU AI Act; it allows organizations to identify and address non-conformities or risks before the system reaches the market.
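
As a minimal sketch of what a self-assessment record could look like, the example below checks a hypothetical mapping of evidence against the requirement headings used in this FAQ and produces a simple report listing open non-conformities. The requirement names, the report fields, and the self_assess helper are illustrative assumptions; the AI Act does not prescribe this format.

```python
# Hypothetical self-assessment sketch: for each mandatory requirement named in
# this FAQ, record whether supporting evidence exists, then build a report
# listing the remaining non-conformities. Format is illustrative only.

import json
from datetime import date

REQUIREMENTS = [
    "transparency",
    "data quality",
    "human oversight",
    "robustness and accuracy",
    "risk management",
    "technical documentation",
    "logging",
]

def self_assess(evidence: dict[str, str]) -> dict:
    """Build a self-assessment report from a mapping of requirement -> evidence."""
    findings = [
        {
            "requirement": req,
            "evidence": evidence.get(req, ""),
            "conformant": bool(evidence.get(req)),
        }
        for req in REQUIREMENTS
    ]
    return {
        "assessment_date": date.today().isoformat(),
        "findings": findings,
        "open_non_conformities": [f["requirement"] for f in findings if not f["conformant"]],
    }

# Example: only two requirements have documented evidence so far.
report = self_assess({
    "transparency": "User-facing model card published",
    "human oversight": "Manual review step before any automated rejection",
})
print(json.dumps(report, indent=2))
```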

Yes, audit-like controls are required as part of the EU AI Act. The regulation mandates that providers of high-risk AI systems carry out conformity assessments to ensure that their systems comply with the mandatory requirements set out in the regulation.

Depending on the type of high-risk AI system, the conformity assessment is either carried out by the provider itself on the basis of internal control or involves a notified body (a third-party conformity assessment body designated under the regulation). The assessment covers the design and development of the AI system, and post-market monitoring together with the obligation to reassess the system after any substantial modification ensures ongoing compliance once it is deployed and in operation.

In addition, the EU AI Act requires that organizations keep detailed documentation of their AI systems and the conformity assessments conducted on them. This documentation must be made available to the national supervisory authority upon request.

The purpose of these conformity assessments and documentation requirements is to ensure that high-risk AI systems are developed and used in a way that complies with the mandatory requirements of the regulation and to provide transparency and accountability in the development and deployment of AI systems.

The requirements for high-risk AI systems under the EU AI Act are:

  1. Transparency: High-risk AI systems must be designed and developed in a transparent way, so that individuals affected by the system can understand how it works and how it is making decisions.

  2. Quality of data: The data used to train and test high-risk AI systems must be relevant, sufficiently representative and, to the best extent possible, free of errors and bias. Organizations must keep records of the data used and ensure that it is regularly updated and monitored.

  3. Human oversight: High-risk AI systems must have human oversight to ensure that they are being used in a responsible and ethical manner. This may include monitoring the system for errors, biases, or other issues.

  4. Robustness and accuracy: High-risk AI systems must be designed and developed to be robust, accurate, and reliable. Organizations must conduct appropriate testing and validation to ensure that the system is functioning as intended.

  5. Conformity assessment: High-risk AI systems must undergo a conformity assessment to ensure that they comply with the requirements set out in the regulation. Organizations must keep records of the assessment and any measures taken to address non-conformities.

  6. Risk management: Organizations must conduct a risk assessment of the high-risk AI system to identify and mitigate potential risks to individuals, society, and fundamental rights.

  7. Reporting obligations: Organizations must report certain incidents or malfunctions related to the high-risk AI system to the national supervisory authority.

  8. Logging requirements: Organizations must keep records of certain activities related to the high-risk AI system, including technical documentation, conformity assessments, incidents, and modifications or updates made to the system.

These requirements are designed to ensure that high-risk AI systems are developed and used in a responsible and ethical manner, with a focus on transparency, data quality, human oversight, robustness and accuracy, conformity assessment, risk management, reporting, and logging.
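
To illustrate the logging point (item 8), here is a minimal sketch of how a provider might keep structured, timestamped records of a high-risk AI system’s decisions, incidents, and updates so that they can later be traced. The log_event helper, the field names, and the JSON-lines file are assumptions of this sketch; the regulation itself requires that high-risk systems technically allow the automatic recording of events, without mandating a particular format.

```python
# Hypothetical event log for a high-risk AI system: appends one structured,
# timestamped record per event so that decisions, incidents, and model updates
# remain traceable. The format is an assumption of this sketch.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")  # hypothetical log location

def log_event(event_type: str, details: dict) -> None:
    """Append one event (e.g. decision, incident, model update) to the log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

# Example: record a single automated credit-scoring decision.
log_event("decision", {
    "model_version": "1.4.2",
    "input_reference": "application-2025-00017",
    "outcome": "referred to human reviewer",
})
```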

The EU AI Act also includes requirements for non-high-risk AI systems, but these requirements are less strict than those for high-risk AI systems.

For certain AI systems, the regulation sets out specific transparency obligations: for example, people must be informed when they are interacting with an AI system such as a chatbot, and content that has been artificially generated or manipulated, such as deepfakes, must be labelled as such. Organizations that develop or use non-high-risk AI systems should also ensure that the systems are understandable to the individuals affected and are used in a responsible manner, with appropriate human oversight where relevant.

The EU AI Act also encourages voluntary codes of conduct for AI systems that are not high-risk, which organizations can choose to follow. These codes of conduct include principles for the ethical development and use of AI, such as respect for fundamental rights, non-discrimination, and transparency.

While the requirements for non-high-risk AI systems are less strict than those for high-risk AI systems, the EU AI Act still sets out clear expectations for the responsible development and use of AI, and organizations that develop or use AI should consider these requirements and principles when designing and implementing their systems.

About this website

This website was created to allow everyone to easily navigate the various articles of the AI Act.

This website was launched by RiskNow, a company based in Amsterdam that helps organisations manage risk and comply with laws and regulations. We do this with our RiskNow SaaS platform, which provides the ultimate user experience.

With our software platform, organisations can easily demonstrate compliance with the AI Act.

For more information about us, see www.risknow.com. Feel free to reach out to us if you have any questions, suggestions or remarks on this website.

Want to get in touch or report an error? Drop us an e-mail at info@risknow.com.

 

The articles and definitions on this website are derived from the final text of the AI Act as published in the Official Journal of the European Union:

https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689