
Background and legislative path of the AI Act
After nearly three years of negotiation, the European Union Artificial Intelligence Act (AI Act) has become the world’s first comprehensive regulatory framework for artificial intelligence. Initially proposed on April 21, 2021, the Act underwent multiple readings and amendments before final approval by the European Parliament on March 13, 2024 (EUR-Lex).
Objectives and scope of the regulation
- Ensure that AI systems respect fundamental rights and European values;
- Foster legal certainty to encourage investment and innovation;
- Strengthen governance and oversight mechanisms for AI deployment;
- Promote a single European market for legitimate, trustworthy AI solutions.
The law applies to providers and deployers of AI systems within the EU, as well as to entities outside the EU whose systems affect people in the EU, mirroring the extraterritorial principle also seen in modern data protection laws such as the GDPR.
A harmonized risk-based framework
- Unacceptable risk – Systems that pose a clear threat to people's safety or rights are banned outright. These include AI used for social scoring (as in China's credit system), real-time remote biometric identification in publicly accessible spaces (except in narrowly defined law-enforcement cases), and manipulative or exploitative applications targeting vulnerable groups such as minors or persons with disabilities.
- High risk – Systems that can significantly impact fundamental rights or safety are subject to strict obligations. Examples include AI used in critical infrastructure, education and employment, law enforcement, migration, or judicial administration (Annex III). Providers must perform conformity assessments, maintain traceability, and ensure data quality and transparency throughout the system's lifecycle.
- Limited risk – Systems such as chatbots or generative AI tools face transparency requirements (e.g., disclosing that the user is interacting with an AI, or labeling synthetic content such as deepfakes).
- Minimal risk – The vast majority of AI applications, such as spam filters or AI in video games, face no new obligations; a simplified mapping of the four tiers is sketched below.
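To make the tiering concrete, here is a minimal, purely illustrative Python sketch. The tier names and obligations paraphrase the categories above; the example use cases and the mapping are hypothetical simplifications, not an official classification tool (Annex III governs the actual high-risk list).

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, paraphrased from the categories above."""
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, traceability, data quality, transparency"
    LIMITED = "transparency duties (AI disclosure, synthetic-content labels)"
    MINIMAL = "no new obligations (voluntary codes of conduct encouraged)"

# Hypothetical example mapping; the real classification depends on the
# Act's prohibitions and the Annex III high-risk categories.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```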
AI growth led by transparency, privacy and compliance
Even for non-high-risk systems, the AI Act encourages providers to adopt voluntary codes of conduct addressing values like environmental sustainability, accessibility, and diversity.
The Act interacts closely with existing data protection laws such as Switzerland’s Federal Act on Data Protection (LPD) and the EU General Data Protection Regulation (GDPR). While these frameworks primarily safeguard personal data and individual consent, the AI Act takes a systemic approach, addressing collective and ethical risks associated with automated decision-making. Privacy protection remains integral, particularly where AI systems process biometric, behavioral, or sensitive personal data.
Beyond data protection, the Act underscores the human factor as a persistent challenge: it stresses the importance of human oversight and accountability in AI operations (Article 14). Human judgment remains essential to interpret AI outputs, detect bias, and prevent misuse, a reminder that compliance cannot rely solely on technical safeguards.
AI Act enforcement and penalties
The AI Act introduces significant financial sanctions for non-compliance, modeled partly on the GDPR. Depending on the violation, penalties can reach up to €35 million or 7% of global annual turnover for prohibited practices, €15 million or 3% for breaches of other obligations, and €7.5 million or 1% for supplying incorrect or misleading information to authorities, in each case whichever amount is higher (for SMEs and startups, the lower amount applies).
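As a back-of-the-envelope illustration of how these ceilings scale with company size, here is a minimal Python sketch. It assumes the figures above, the "whichever is higher" rule for large undertakings, and the "whichever is lower" rule for SMEs; the function and tier names are hypothetical.

```python
# Fine ceilings per violation tier: (fixed cap in EUR, share of turnover).
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # €35M or 7% of turnover
    "obligation_breach":   (15_000_000, 0.03),  # €15M or 3% of turnover
    "misleading_info":     (7_500_000, 0.01),   # €7.5M or 1% of turnover
}

def max_fine(violation: str, global_turnover_eur: float, sme: bool = False) -> float:
    """Upper bound of the administrative fine for a given violation tier."""
    fixed_cap, turnover_share = TIERS[violation]
    pct_cap = turnover_share * global_turnover_eur
    # Large undertakings: the higher amount caps the fine; SMEs: the lower.
    return min(fixed_cap, pct_cap) if sme else max(fixed_cap, pct_cap)

# A company with €2 billion global turnover engaging in a prohibited practice:
# 7% of €2B = €140M, which exceeds €35M, so the ceiling is €140M.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```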
Fostering innovation: regulatory sandboxes
Recognizing the need to balance compliance with innovation, the Act establishes Regulatory Sandboxes — controlled environments that allow AI developers, particularly startups and SMEs, to test systems under regulatory supervision before full deployment.
What to expect
As the AI Act enters into force, Europe — and by extension, Switzerland, through market interoperability — faces a new era of AI governance. The coming years will likely bring adjustments, refinements, and resistance, as organizations learn to align innovation strategies with compliance obligations.
Over time, however, the framework could strengthen public trust in AI and encourage the development of transparent, human-centered systems. For businesses and institutions, success will depend on how effectively they integrate ethical awareness into technological design and governance.
In this sense, the AI Act may represent not only a legal milestone but also the foundation of a new digital social contract — one that defines how humans and intelligent machines can coexist responsibly in the years ahead.