The EU AI Act defines the “provider” of a GPAI model under Article 3(3) as a natural or legal person, public authority, agency, or other body that develops a GPAI model or has one developed and places it on the market, or puts it into service, under its own name or trademark, whether for payment or free of charge. The definition covers both original developers and entities that significantly modify existing models, with the European Commission’s guidelines of July 2025 clarifying its scope, particularly for fine-tuning and other modifications. The guidelines adopt a lifecycle approach and compute-based thresholds to determine provider status, keeping obligations proportionate and focused on the entities that control market placement.


Original Providers (e.g., OpenAI for GPT Models, Meta for Llama, Google for Gemini)

Original providers are those who initially develop and place the base GPAI model on the EU market, bearing primary responsibility for compliance. For instance, OpenAI qualifies as the provider for models like GPT-4, as it develops and markets them under its trademark. Similarly, Meta is the provider for Llama series models, and Google for Gemini, when made available via APIs or downloads in the EU. Obligations include maintaining technical documentation, ensuring copyright compliance, publishing training data summaries, and notifying the Commission if systemic risk thresholds (e.g., training compute exceeding 10²⁵ FLOPs) are met. The guidelines confirm that any fine-tuning or modifications performed by the original provider (or on their behalf) are considered part of the model’s lifecycle, not creating a new model. For example, if OpenAI fine-tunes GPT-4 post-market and releases it as an updated version under its brand, it remains the same provider without triggering new classifications. This applies uniformly to other GPAI providers like Anthropic (Claude) or Stability AI (Stable Diffusion), where lifecycle modifications do not shift provider status.
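The systemic-risk notification duty described above reduces to a simple compute comparison. The sketch below is illustrative only; the 10²⁵ FLOPs figure is the presumption threshold from the Act, while the function name is our own, not a term from the regulation.

```python
# Illustrative sketch of the compute-based systemic-risk presumption.
# The 1e25 FLOPs threshold is from the AI Act; the function name and
# structure are our own simplification, not legal advice.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption of systemic risk


def must_notify_commission(training_compute_flops: float) -> bool:
    """True if cumulative training compute exceeds the systemic-risk
    presumption threshold, triggering notification to the Commission."""
    return training_compute_flops > SYSTEMIC_RISK_FLOPS


# A model trained with 3 x 10^25 FLOPs meets the presumption:
print(must_notify_commission(3e25))  # True
```

In practice the legal test is broader (the Commission can also designate models with high-impact capabilities), so exceeding the compute figure is a presumption, not the only route to systemic-risk status.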

Entities Fine-Tuning and Marketing Under Their Own Trademark (Downstream Providers)

If a public or private entity fine-tunes an existing GPAI model (e.g., GPT via OpenAI’s tools, Llama from Meta, or Gemini from Google) and places the modified version on the EU market under its own name or trademark, it may become a downstream provider, assuming obligations proportional to the modification. The guidelines introduce an “indicative criterion” for this: if the training compute used for the modification exceeds one-third of the original model’s training compute, the entity is presumed to be a new provider. If the original compute is unknown, fixed thresholds apply — 10²³ FLOPs for general GPAI status or 10²⁵ FLOPs for systemic risk assessment.
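The indicative criterion above is likewise a mechanical comparison, with a fallback when the original model’s compute is unpublished. A minimal sketch, assuming the one-third rule and the fixed fallback thresholds as described in the guidelines; all names here are our own illustration:

```python
# Illustrative sketch of the guidelines' "indicative criterion" for
# downstream provider status. Names and structure are our own; the
# one-third rule and fixed fallback thresholds are from the text above.

GPAI_FALLBACK_FLOPS = 1e23      # fixed fallback: general GPAI provider status
SYSTEMIC_FALLBACK_FLOPS = 1e25  # fixed fallback: systemic-risk assessment


def presumed_new_provider(modification_flops: float,
                          original_flops: float = None) -> bool:
    """True if the fine-tuning entity is presumed to become a provider."""
    if original_flops is not None:
        # Known baseline: modification compute exceeds one-third
        # of the original model's training compute.
        return modification_flops > original_flops / 3
    # Unknown baseline: fall back to the fixed GPAI threshold.
    return modification_flops > GPAI_FALLBACK_FLOPS


# Fine-tuning with 4e24 FLOPs a model originally trained with 1e25 FLOPs:
# 4e24 > 1e25 / 3, so provider status is presumed.
print(presumed_new_provider(4e24, original_flops=1e25))  # True
```

A modification below the relevant threshold leaves the entity a deployer, so in this sketch a `False` result means the original provider’s obligations remain in place.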

For example, a company fine-tuning GPT-4 for sector-specific use (e.g., healthcare chatbots) and marketing it as “HealthAI Bot” under its own trademark would likely qualify as a provider if the fine-tuning compute meets the threshold and the modification significantly changes the model’s capabilities or generality. The same holds for fine-tuning other models such as Claude or PaLM: if marketed anew, the entity must evaluate whether its modifications alter systemic risks or create a “new model.” However, minor fine-tuning (below the compute threshold) or purely internal use does not trigger provider status; the entity remains a deployer, and the obligations stay with the original provider. The guidelines note that this threshold applies only to downstream actors, not to original providers, to avoid over-burdening innovators while ensuring accountability for substantial alterations.

Key Differences and Obligation Shifts

Scope of Control: Original providers handle the full lifecycle, including pre- and post-market updates, with comprehensive obligations (e.g., complete training data summaries). Downstream providers focus on incremental changes, supplementing (not replacing) original documentation, e.g., detailing only new data or risks from fine-tuning.

Risk Assessment: Originals like OpenAI often meet systemic risk due to scale, requiring evaluations and incident reporting. Downstream fine-tuners are assessed independently; they may not inherit systemic status unless their modifications push total compute over the thresholds, but they must notify the Commission if risks emerge.

Market Placement and Trademarks: Placing a fine-tuned model on the market under a new trademark is a strong indicator of provider status, especially when combined with commercial intent or significant compute. Obligations shift accordingly: the downstream entity must implement copyright policies and provide downstream information, while originals may need contractual clauses in their licenses to address modifications.

Exceptions and Proportionality: Open-source models (e.g., fine-tuned versions of Llama) may have reduced documentation requirements unless they pose systemic risk. Upstream providers can avoid EU obligations by explicitly excluding EU distribution in their licenses, shifting provider status to downstream integrators.

References

EU AI Act
Guidelines for providers of general-purpose AI models
Commission seeks input to clarify rules for general-purpose AI models
Taking the EU AI Act to Practice: How the Final GPAI Guidelines Shape the AI Regulatory Landscape
EU Commission — Guidelines on the Scope of the Obligation for GPAI Models under the AI Act
European Commission Issues Guidelines for Providers of General-Purpose AI Models

Analysis: Who is the “Provider” for GPAI Models? was originally published in Coinmonks on Medium.
