New EU Guidelines Set AI Model Standards Ahead of August 2025 Deadline

The European Commission has released new guidelines defining when AI models count as general-purpose AI, based on training-compute thresholds and risk tiers. The rules aim to clarify compliance obligations and reinforce the EU’s leadership in global AI regulation.

The guidelines, issued on July 18, 2025, clarify when artificial intelligence models should be classified as general-purpose AI (GPAI) models, and set out the technical thresholds and compliance requirements that apply when the AI Act’s GPAI obligations take effect on August 2, 2025.

One key development is the introduction of a computational threshold: models trained with more than 10²³ floating-point operations (FLOP) that can generate language, text-to-image, or text-to-video content are presumed to qualify as general-purpose AI. The guidelines note that this level of compute typically corresponds to models with around one billion parameters trained on large datasets. They also reference real models on either side of the line: one trained with 7.5 × 10²² FLOP falls below the threshold, while another trained with about 6.5 × 10²³ FLOP clears it, illustrating where the boundary sits in practice.
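The one-billion-parameter figure can be sanity-checked with the widely used training-compute heuristic FLOP ≈ 6 × parameters × training tokens. This approximation comes from the scaling-law literature, not from the guidelines themselves, and the token count below is an assumed illustrative figure:

```python
# Rough sanity check of the 10^23 FLOP threshold using the common
# heuristic: training compute ≈ 6 × parameters × training tokens.
# (The 6·N·D rule is a standard approximation, not part of the EU text.)

GPAI_THRESHOLD_FLOP = 1e23  # indicative threshold from the guidelines

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# A ~1B-parameter model trained on ~2 × 10^13 tokens (assumed figures)
flop = estimated_training_flop(n_params=1e9, n_tokens=2e13)
print(f"Estimated compute: {flop:.1e} FLOP")                # ~1.2e+23
print(f"Meets GPAI compute criterion: {flop > GPAI_THRESHOLD_FLOP}")  # True
```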

AI governance expert Luiza Jarovsky highlights an important nuance: models designed for narrow tasks can fall outside the general-purpose classification even when they meet the FLOP criterion. The distinction matters for developers because it ties regulatory accountability to what a model can actually do. A model that performs competently only within a limited scope may not face the same scrutiny as a broader-scope model, even with substantial training compute.

The guidelines establish a tiered classification system. General-purpose AI models must meet transparency and documentation requirements, while models trained with more than 10²⁵ FLOP are presumed to pose “systemic risk,” triggering more stringent safety evaluations under Article 55. Providers must notify the Commission when their training runs meet, or are expected to meet, this threshold, which in practice means forecasting compute during the planning and training phases.
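To make the tiers concrete, here is a minimal sketch of how the two compute thresholds and the narrow-task carve-out discussed above could map to classifications. The class, flag names, and decision order are illustrative assumptions, not the Commission’s own test:

```python
from dataclasses import dataclass

GPAI_THRESHOLD_FLOP = 1e23   # indicative GPAI criterion
SYSTEMIC_RISK_FLOP = 1e25    # presumption of systemic risk

@dataclass
class Model:
    training_flop: float
    generates_content: bool   # language, text-to-image, or text-to-video
    narrow_task_only: bool    # competent only within a limited scope

def classify(model: Model) -> str:
    """Illustrative mapping of the guidelines' tiers (a sketch, not legal advice)."""
    if model.narrow_task_only:
        return "not general-purpose (narrow-task carve-out)"
    if model.training_flop >= SYSTEMIC_RISK_FLOP:
        return "GPAI with systemic risk (Art. 55 obligations; notify Commission)"
    if model.training_flop > GPAI_THRESHOLD_FLOP and model.generates_content:
        return "GPAI (transparency and documentation obligations)"
    return "outside the GPAI presumption"

print(classify(Model(6.5e23, True, False)))  # GPAI
print(classify(Model(7.5e22, True, False)))  # outside the presumption
print(classify(Model(3e25, True, False)))    # GPAI with systemic risk
```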

The guidelines also set out a staggered enforcement timeline. Models placed on the market before the rules take effect will have until August 2, 2027, to comply, whereas newly released models must adhere immediately, with the Commission pledging support for businesses during the transition. Input from a broad range of stakeholders shaped the final text, broadening its applicability and relevance.

Another critical aspect is how obligations are divided between upstream providers who build the technology and downstream actors who modify or integrate it. If a downstream actor’s modifications exceed a specified share of the original model’s training compute, the modifier may itself take on provider obligations for the resulting model, keeping accountability intact across the supply chain. Open-source models benefit from certain exemptions, but those classified as posing systemic risk remain fully in scope.
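The downstream test reduces to comparing modification compute against a fraction of the original training compute. Since the article does not state the exact ratio, the value below is a placeholder assumption, and the figures in the example are hypothetical:

```python
def modifier_becomes_provider(modification_flop: float,
                              original_training_flop: float,
                              threshold_ratio: float) -> bool:
    """Sketch of the downstream-modification test described above.

    threshold_ratio is a placeholder: the guidelines fix a specific
    fraction of the original model's training compute, not restated here.
    """
    return modification_flop > threshold_ratio * original_training_flop

# Hypothetical fine-tune using 5 × 10^22 FLOP on a 6.5 × 10^23 FLOP base
# model, with an assumed illustrative ratio of one third:
print(modifier_becomes_provider(5e22, 6.5e23, threshold_ratio=1/3))  # False
```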

The regulatory approach aims to establish a comprehensive framework for AI governance that balances technological advancement with safety and transparency. The European Commission is positioning the EU to lead on global AI governance and to encourage other regions to adopt similar standards. Industry reactions are split: some companies, such as Microsoft, have signaled openness to the compliance framework, while others, such as Meta, have held back, citing legal uncertainties. The divide illustrates the ongoing dialogue over regulation in the tech industry and the EU’s ambition to set the tone for future AI governance efforts.