The EU Artificial Intelligence Act: a structured step toward trustworthy AI

26 Nov 2025 07:00 AM - By itSMF Staff

Reading time: ~ 2 min.

Background and legislative path of the AI Act

After nearly three years of negotiation, the European Union Artificial Intelligence Act (AI Act) has become the world’s first comprehensive regulatory framework for artificial intelligence. Initially proposed on April 21, 2021, the Act underwent multiple readings and amendments before final approval by the European Parliament on March 13, 2024 (EUR-Lex).


The Act reflects the EU’s commitment to a human-centric approach to technology, balancing innovation with fundamental rights. It aims to establish a harmonized, risk-based legal framework that ensures AI systems introduced in the European market are safe, transparent, and compliant with EU values such as privacy, dignity, and fairness.

Objectives and scope of the regulation

The AI Act’s overarching goal is to guarantee the safety and ethical use of AI while promoting innovation across the Union. Specifically, it seeks to:
  • Ensure that AI systems respect fundamental rights and European values;
  • Foster legal certainty to encourage investment and innovation;
  • Strengthen governance and oversight mechanisms for AI deployment;
  • Promote a single European market for legitimate, trustworthy AI solutions.

The law applies to developers, providers, and users of AI systems within the EU, as well as to entities outside the EU whose systems affect EU citizens — mirroring the extraterritorial principle also seen in modern data protection laws.

A harmonized risk-based framework

At the heart of the regulation lies its risk-based classification of AI systems.

The Act defines AI as “a machine-based system that operates with varying levels of autonomy and may exhibit adaptiveness after deployment, inferring from inputs how to generate outputs such as predictions, content, or decisions” — a future-proof definition intended to encompass evolving AI forms.
The regulation distinguishes between three primary risk categories:
  1. Unacceptable risk – Systems that pose a clear threat to people's safety or rights are banned outright. These include AI used for social scoring (as in China’s credit system), real-time biometric surveillance in public spaces (except in narrowly defined cases), and manipulative or exploitative applications targeting vulnerable groups such as minors or disabled persons.
  2. High risk – Systems that can significantly impact fundamental rights or safety are subject to strict obligations. Examples include AI used in critical infrastructure, education and employment, law enforcement, migration, or judicial administration (Annex III). Developers must perform conformity assessments, maintain traceability, and ensure data quality and transparency throughout the system’s lifecycle.
  3. Limited and minimal risk – Systems such as chatbots or generative AI tools fall into this category and face transparency requirements (e.g., disclosing that the user is interacting with an AI, or labeling synthetic content such as deepfakes).
This graduated approach aims to ensure proportionate regulation, preventing excessive administrative burden for low-risk AI while tightening controls on high-impact applications.
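For readers who think in code, here is a minimal, illustrative Python sketch of the tiering. The category names follow the Act, but the obligation lists are simplified summaries of the prose above, not legal text:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # banned outright (e.g., social scoring)
        HIGH = "high"                   # Annex III use cases: strict obligations
        LIMITED = "limited"             # transparency duties (e.g., chatbots)
        MINIMAL = "minimal"             # no mandatory obligations

    # Simplified summary of obligations per tier -- illustrative only.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
        RiskTier.HIGH: [
            "conformity assessment before market placement",
            "traceability and logging across the lifecycle",
            "data quality and transparency requirements",
        ],
        RiskTier.LIMITED: ["disclose AI interaction", "label synthetic content"],
        RiskTier.MINIMAL: ["voluntary codes of conduct encouraged"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the simplified obligation summary for a given risk tier."""
        return OBLIGATIONS[tier]

    if __name__ == "__main__":
        for tier in RiskTier:
            print(tier.value, "->", "; ".join(obligations_for(tier)))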

AI growth guided by transparency, privacy, and compliance

Even for non-high-risk systems, the AI Act encourages providers to adopt voluntary codes of conduct addressing values like environmental sustainability, accessibility, and diversity.


The Act interacts closely with existing data protection laws such as Switzerland’s Federal Act on Data Protection (LPD/FADP) and the EU General Data Protection Regulation (GDPR). While these frameworks primarily safeguard personal data and individual consent, the AI Act takes a systemic approach, addressing collective and ethical risks associated with automated decision-making. Privacy protection remains integral, particularly where AI systems process biometric, behavioral, or sensitive personal data.


Privacy protection is addressed explicitly throughout the Act, which also treats the human factor as a persistent challenge.


The regulation also underscores the importance of human oversight and accountability in AI operations. Human judgment remains essential to interpret AI outputs, detect bias, and prevent misuse — a reminder that compliance cannot rely solely on technical safeguards.


Notably, even anonymized or aggregated data, such as that referenced in OpenAI’s Data Processing Addendum, must still be handled with caution to avoid potential re-identification and to ensure adherence to applicable data protection standards.
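To illustrate why that caution matters, one common and generic way to gauge re-identification risk is k-anonymity. The Python sketch below is not an AI Act requirement or any vendor's method; it simply flags combinations of quasi-identifiers shared by fewer than k records, since small groups are easier to re-identify even after aggregation:

    from collections import Counter

    def k_anonymity_violations(records: list[tuple], k: int = 5) -> list[tuple]:
        """Return quasi-identifier combinations shared by fewer than k records.

        `records` are tuples of quasi-identifiers (e.g., ZIP prefix, age band).
        """
        counts = Counter(records)
        return [combo for combo, n in counts.items() if n < k]

    # Illustrative data: (ZIP prefix, age band)
    sample = [("801", "30-39")] * 6 + [("802", "40-49")] * 2
    print(k_anonymity_violations(sample, k=5))  # [('802', '40-49')] is risky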

AI Act enforcement and penalties

The AI Act introduces significant financial sanctions for non-compliance, modeled partly after the GDPR. Depending on the violation, penalties can reach up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited practices, €15 million or 3% for breaches of other obligations, and €7.5 million or 1% for providing incorrect or misleading information to authorities.
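As a quick worked example of the "whichever is higher" rule, here is a small Python sketch; the tier amounts come from the Act, while the dictionary keys and function name are illustrative:

    # Penalty tiers under the AI Act: fixed cap in euros and percentage of
    # global annual turnover; the greater of the two applies.
    PENALTY_TIERS = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }

    def max_penalty(violation: str, global_turnover_eur: float) -> float:
        """Upper bound of the fine for the given violation tier."""
        fixed_cap, pct = PENALTY_TIERS[violation]
        return max(fixed_cap, pct * global_turnover_eur)

    # A company with EUR 2 billion turnover committing a prohibited practice:
    print(max_penalty("prohibited_practice", 2_000_000_000))  # 140,000,000.0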


Supervisory authorities will be empowered to issue warnings, suspend systems, or prohibit market access if risks are not adequately mitigated. This structure complements, rather than replaces, national laws like the LPD (FADP), which remain the cornerstone for personal data protection within Switzerland’s jurisdiction.

Fostering innovation: regulatory sandboxes

Recognizing the need to balance compliance with innovation, the Act establishes Regulatory Sandboxes — controlled environments that allow AI developers, particularly startups and SMEs, to test systems under regulatory supervision before full deployment.


This approach aims to lower entry barriers for smaller entities while ensuring alignment with safety and ethics standards. It embodies the EU’s vision of responsible innovation — where progress is guided, not constrained, by regulation.

What to expect

As the AI Act takes effect in phases, Europe — and by extension, Switzerland, through market interoperability — faces a new era of AI governance. The coming years will likely bring adjustments, refinements, and resistance, as organizations learn to align innovation strategies with compliance obligations.


Over time, however, the framework could strengthen public trust in AI and encourage the development of transparent, human-centered systems. For businesses and institutions, success will depend on how effectively they integrate ethical awareness into technological design and governance.

In this sense, the AI Act may represent not only a legal milestone but also the foundation of a new digital social contract — one that defines how humans and intelligent machines can coexist responsibly in the years ahead.

If you want to stay up to date on this topic, don't forget to subscribe to our newsletter: you will get a monthly update with the most relevant and valuable content from our experts!

SUBSCRIBE TO OUR NEWSLETTER

Need to know more about it?

Click on one of the options below to enter the itSMF Environment and stay updated in the way that suits you best.

Subscribe to itSMF Newsletter
CONTACT US TO SEND YOUR MESSAGE
DISCOVER OUR EVENT CALENDAR
Get the benefits of the Membership Program

Our sponsors

A special thanks to our Advanced Sponsors:

itSMF Staff