
The EU’s AI Act is now in force


It’s now official: The European Union’s risk-based AI regulation has come into effect as of Thursday, August 1, 2024. This marks the beginning of a series of phased compliance deadlines tailored to different AI developers and applications, with most regulations becoming fully effective by mid-2026. The first deadline, set for six months from now, targets a limited number of banned AI uses in specific scenarios, such as the deployment of remote biometrics by law enforcement in public areas.

The EU’s approach classifies most AI applications as low or no risk, exempting them from the regulation. Certain uses classified as high risk, however, such as biometric systems, AI-based medical software, and AI in education and employment, must adhere to stringent risk and quality management standards. This includes undergoing a pre-market conformity assessment and potential regulatory audits. High-risk AI systems used by public sector entities or their contractors must also be registered in an EU database.

A third category, termed “limited risk,” applies to AI technologies like chatbots and tools capable of generating deepfakes, which must comply with transparency requirements to prevent user deception.

The penalties for non-compliance are scaled based on the severity of the violation: up to 7% of global annual turnover for prohibited AI applications, up to 3% for other breaches, and up to 1.5% for providing false information to regulators.
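To see how these caps scale in practice, here is a minimal sketch that computes the maximum fine for each violation tier from a company’s global annual turnover. The tier names and the turnover figure are illustrative assumptions, not text from the Act, which may also set fixed monetary ceilings not shown here.

```python
# Illustrative sketch of the AI Act's penalty ceilings as percentages of
# global annual turnover, per the tiers described above. The turnover
# figure below is hypothetical.

PENALTY_CAPS = {
    "prohibited_ai_use": 0.07,         # up to 7% of global annual turnover
    "other_breach": 0.03,              # up to 3%
    "false_info_to_regulators": 0.015, # up to 1.5%
}

def max_fine(global_annual_turnover_eur: float, violation: str) -> float:
    """Return the maximum percentage-based fine (EUR) for a violation tier."""
    return global_annual_turnover_eur * PENALTY_CAPS[violation]

# Hypothetical company with EUR 2 billion in global annual turnover:
turnover = 2_000_000_000
for tier in PENALTY_CAPS:
    print(f"{tier}: up to EUR {max_fine(turnover, tier):,.0f}")
```

For the hypothetical EUR 2 billion turnover, the ceilings work out to EUR 140 million, 60 million, and 30 million respectively, which is why the tier a violation falls into matters so much.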

Additionally, developers of general-purpose AIs (GPAIs) are subject to light transparency requirements, including providing a summary of training data and ensuring policies are in place to respect copyright rules, among other stipulations.

Only a select few of the most advanced models will be required to undergo risk assessment and implement mitigation measures: GPAIs deemed to pose potential systemic risk, currently defined as those trained using more than 10^25 FLOPs of computing power.
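For a rough sense of where that threshold sits, the sketch below uses the common back-of-envelope estimate that training a dense transformer costs roughly 6 × parameters × tokens FLOPs. That heuristic and the example model sizes are illustrative assumptions; they do not come from the Act itself.

```python
# Back-of-envelope check against the AI Act's systemic-risk compute threshold.
# Assumes the widely used heuristic: training FLOPs ~= 6 * parameters * tokens.
# The example model configurations below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),
    "70B params, 15T tokens": training_flops(70e9, 15e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in examples.items():
    flag = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 1e25 threshold)")
```

Under this approximation, only the largest configuration (about 3.6 × 10^25 FLOPs) clears the bar, which matches the article’s point that the systemic-risk tier captures only a handful of frontier models.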

While enforcement of the Act’s rules generally falls to national bodies within each EU Member State, the rules for GPAIs are enforced at the EU level.

The specific requirements for GPAI developers under the AI Act are still being finalized, as Codes of Practice are yet to be established. Earlier this week, the AI Office, responsible for strategic oversight and building the AI ecosystem, initiated a consultation and call for participation in this rule-making process, aiming to finalize the Codes by April 2025.

OpenAI, the developer of the GPT large language models behind ChatGPT, noted in a primer on the AI Act last month that it plans to work closely with the EU AI Office and other authorities as the new law is implemented. This includes creating technical documentation and guidance for downstream providers and users of its GPAI models.

OpenAI advised organizations to classify any AI systems they use, determine how they are categorized under the Act, and understand the obligations that apply. Organizations should also identify whether they are providers or deployers of these AI systems. Given the complexity of these issues, consulting with legal counsel is recommended.
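As a rough way to operationalize that advice, the sketch below triages an AI system into the risk tiers described in this article. The attribute names and the decision logic are simplified illustrative assumptions, not legal guidance; real classification under the Act requires counsel, as the article notes.

```python
# Simplified, illustrative triage of an AI system into the AI Act risk
# tiers described above. Attribute names are hypothetical; this is not
# legal advice.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    banned_use: bool = False         # e.g. remote biometrics by police in public
    high_risk_domain: bool = False   # e.g. biometrics, medical, education, employment
    can_deceive_users: bool = False  # e.g. chatbots, deepfake generators
    org_is_provider: bool = False    # provider vs. deployer changes obligations

def risk_tier(system: AISystem) -> str:
    """Map a system's attributes to the Act's risk tiers, most severe first."""
    if system.banned_use:
        return "prohibited"
    if system.high_risk_domain:
        return "high risk (conformity assessment, risk/quality management)"
    if system.can_deceive_users:
        return "limited risk (transparency requirements)"
    return "minimal/no risk (outside the regulation)"

chatbot = AISystem("support chatbot", can_deceive_users=True)
print(f"{chatbot.name}: {risk_tier(chatbot)}")
# support chatbot: limited risk (transparency requirements)
```

The `org_is_provider` flag is included because, as noted above, whether an organization is the provider or the deployer of a system changes which obligations apply, even within the same risk tier.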

The exact requirements for high-risk AI systems are still being developed by European standards bodies. The Commission has tasked these bodies with completing their work by April 2025, after which the standards will be evaluated and endorsed by the EU before becoming mandatory for developers.
