AI brings unprecedented benefits, transforming the way we work and live. By automating time-consuming, low-value tasks, it empowers us to focus on what truly matters—strategic thinking, deep data insights, and effective risk management—where human expertise thrives.
However, alongside these advantages, AI also carries risks. Without proper regulation, its rapid advancement could lead to decisions that disrupt or harm our lives.
What kinds of threats might arise if AI is not governed properly?
- AI technologies, such as machine learning (ML) and natural language processing (NLP), rely heavily on the data they are trained on. If this data contains biases that discriminate against certain groups, the AI output is likely to reflect those biases.
- Uncontrolled access to personal or biometric data poses a serious privacy risk, potentially leading to unauthorized identity use, surveillance concerns, and data misuse.
- Lastly, AI can be exploited for fraudulent and criminal activities, making it a tool for cybercrime, misinformation, and financial fraud.
These and other inherent risks highlight the urgent need for regulation and proper oversight from public authorities.
The UK, US, Singapore, and Canada have created dedicated organizations called AI Safety Institutes (AISIs). These institutions are responsible for conducting essential safety research, sharing their findings, and promoting information exchange among relevant stakeholders.
The European Union (EU) has taken a pioneering role in regulating artificial intelligence (AI) with the enactment of the Artificial Intelligence Act (AI Act), which came into force on August 1, 2024. This legislation establishes a comprehensive framework to ensure the ethical and safe deployment of AI technologies within the EU, applying to providers and users of AI systems placed on the EU market regardless of where they are established.
Classification of AI Systems by Risk
The AI Act introduces a risk-based classification system for AI applications, delineating four primary categories:
- Unacceptable Risk: AI systems deemed to pose a clear threat to safety, livelihoods, or fundamental rights are prohibited. This includes applications such as social scoring by governments, exploitation of vulnerabilities of specific groups, and real-time biometric identification in public spaces for law enforcement purposes, with certain exceptions.
- High Risk: AI systems that significantly impact health, safety, or fundamental rights fall into this category. Examples encompass AI used in critical infrastructure, education, employment, essential private and public services, law enforcement, migration, and justice. These systems are subject to stringent obligations, including risk assessments, data governance measures, and human oversight.
- Limited Risk: AI applications with limited risk are subject to specific transparency obligations. For instance, when AI systems interact with humans or generate or manipulate content (audio, video, text, etc.), users must be informed of the AI's involvement.
- Minimal Risk: AI systems that pose minimal or no risk are permitted with no additional legal requirements. This category includes applications such as AI-enabled spam filters or video game algorithms.
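The four tiers above can be sketched as a simple lookup, purely as an illustration. The tier names follow the Act, but the obligation summaries are paraphrased for this sketch; this is not a legal classification tool.

```python
# Illustrative mapping of the AI Act's four risk tiers to their broad
# regulatory consequences. Summaries are paraphrased, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. government social scoring)",
    "high": "strict obligations: risk management, data governance, human oversight",
    "limited": "transparency obligations (disclose AI involvement to users)",
    "minimal": "no additional legal requirements (e.g. spam filters)",
}

def obligations_for(tier: str) -> str:
    """Return the broad obligation summary for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")
```

In practice, of course, assigning a real system to a tier requires a legal assessment of its intended purpose and context of use, not a dictionary lookup.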
The majority of the AI Act’s provisions focus on high-risk AI systems and models, ensuring their safe and ethical deployment. The regulation is built upon key principles, including robust governance, accuracy, transparency and explainability, human oversight and accountability, safety, security, ethics, and the protection of data privacy and fundamental rights.
Regulatory Obligations
The AI Act imposes varying levels of obligations corresponding to the risk classification:
- High-Risk AI Systems: Providers must implement a risk management system, ensure high-quality datasets to minimize risks and discriminatory outcomes, maintain detailed technical documentation, and ensure record-keeping to secure traceability. Additionally, these systems must undergo conformity assessments before deployment and are subject to post-market monitoring.
- Limited-Risk AI Systems: Providers are required to ensure transparency, particularly by informing users that they are interacting with an AI system.
Entities Obliged to Comply and Exemptions
The AI Act applies to:
- Providers: Organizations or individuals that develop or place an AI system on the market or put it into service within the EU.
- Users (termed "deployers" in the Act): Entities using AI systems in a professional capacity within the EU; private, non-professional use is excluded.
- Importers and Distributors: Entities involved in the supply chain of AI systems within the EU market.
Exemptions are granted for AI systems developed or used exclusively for military purposes. Additionally, public authorities in third countries and international organizations using AI systems in the framework of international agreements for law enforcement and judicial cooperation with the EU or its Member States are exempt.
Implementation Timelines
The AI Act stipulates a phased implementation approach:
- February 2, 2025 – Provisions related to prohibited AI systems and AI literacy/training come into force.
- August 2, 2025 – Provisions concerning general-purpose AI models, public governance, and penalties take effect.
- August 2, 2026 – The majority of provisions come into force.
- August 2, 2027 – The AI Act becomes fully applicable.
Penalties for Non-Compliance
The AI Act enforces strict penalties to ensure adherence:
- For non-compliance with prohibited AI practices: Fines up to €35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- For non-compliance related to high-risk AI system requirements: Fines up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- For supplying incorrect, incomplete, or misleading information to notified bodies and national authorities: Fines up to €7.5 million or 1% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- For non-compliance related to general-purpose AI models: The European Commission may impose fines of up to €15 million or 3% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
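As the figures above show, each cap is the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch of that arithmetic (amounts in euros; the category keys are shorthand labels for the violation types listed above, not terms from the Act):

```python
# Maximum fine per violation category under the AI Act: the higher of a
# fixed cap and a share of total worldwide annual turnover for the
# preceding financial year. Figures match the Act's penalty provisions;
# category keys are shorthand for this sketch.
PENALTY_CAPS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
    "general_purpose_models": (15_000_000, 0.03),
}

def max_fine(category: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation category."""
    fixed_cap, turnover_share = PENALTY_CAPS[category]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Example: a company with €2 billion turnover engaging in a prohibited
# practice faces up to max(€35M, 7% of €2B) = €140M.
```

For large companies the turnover-based ceiling dominates; for smaller ones the fixed cap applies, which is why both figures are always quoted together.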
The AI Act represents a significant step by the EU to balance the promotion of AI innovation with the safeguarding of fundamental rights and public interests. Organizations involved in the development, deployment, or use of AI systems within the EU should proactively assess their operations to ensure compliance with this comprehensive regulatory framework.
Transparency Statement: ChatGPT assisted in summarizing a large volume of content and shaping the narrative for readability. However, the process was entirely guided by my prompts, thoroughly reviewed, cross-checked with the source data, and adjusted to resolve any discrepancies.
Sources:
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
- Variengien, A., & Martinet, C. (2024, July 29). AI Safety Institutes: Can countries meet the challenge?