AI Act: first AI law

On March 13th, 2024, the European Parliament approved the AI Act, the first law on artificial intelligence, with 523 votes in favor, 46 against, and 49 abstentions.

The law adopts a risk-based approach to ensure that companies place only compliant products on the market before making them available to the public.

The First Regulatory Step

The European Union’s first law on artificial intelligence (AI), known as the AI Act, represents a significant step towards regulating and responsibly adopting this emerging technology. Proposed by the European Commission in 2021, the AI Act aims to create a clear and consistent regulatory framework for AI in the EU.

One of the main features of the AI Act is its risk-based approach. It classifies AI applications based on their level of risk, dividing them into four categories: unacceptable, high, limited, and minimal risk. This classification determines the level of regulation and supervision required for each type of application.


The objective of the law is to regulate the use of artificial intelligence systems so as to protect the privacy and fundamental rights of European citizens.

The text addresses a wide range of areas and applications of AI, including systems for corporate personnel selection, algorithms for autonomous vehicles, facial recognition by law enforcement agencies, and the spread of online misinformation.

The regulation also establishes clear obligations for operators of AI systems, including transparency, traceability, and human oversight requirements. Organizations must adhere to ethical standards and ensure compliance with the principles of non-discrimination and respect for privacy and personal data.

Another significant aspect of the AI Act is its approach to the control and certification of high-risk AI systems. EU-designated certification bodies will assess the compliance of AI systems with the requirements set by the regulation, ensuring the safety and reliability of AI technologies used in the European Union.


However, the AI Act has also sparked debate and concern. Some critics argue that it could hinder the innovation and competitiveness of European businesses by imposing overly strict and burdensome requirements. Others emphasize the need to further strengthen the provisions on civil liability and the transparency of AI systems.

Despite the controversies, the AI Act represents an important starting point for the European Union in addressing the challenges and opportunities posed by AI. With a balanced approach between regulation and promotion of innovation, the EU aims to ensure that AI is developed and used safely, ethically, and responsibly for the benefit of all European citizens.

Before coming into effect, the text must be approved by the Council of the European Union. The AI Act will enter into force 20 days after its publication in the Official Journal of the European Union and will become fully applicable two years later. For applications classified as “unacceptable risk,” bans could begin as soon as 6 months after entry into force.

Follow us to stay updated!
