GPAI Models and the EU AI Act: What Developers Using Claude, GPT-4, and Gemini Need to Know
Most EU AI Act coverage focuses on the risk classification of AI systems — chatbots, HR tools, credit scoring. But there is an entire separate chapter of the regulation — Chapter V — that deals specifically with general-purpose AI (GPAI) models: foundation models like Claude, GPT-4, Gemini, Llama, and Mistral. If you are building a product on top of any of these models, there are obligations that apply to you.
What is a GPAI model?
The EU AI Act defines a general-purpose AI model as one trained on broad data at scale that displays significant generality and can competently perform a wide range of distinct tasks, including when integrated into downstream systems. This covers all major foundation models. It does not cover narrow AI systems trained for a single, specific task.
Two categories: standard GPAI and systemic-risk GPAI
Standard GPAI models
All GPAI models must comply with a base set of obligations:
- Maintain technical documentation describing the model's architecture, training data, and capabilities
- Publish a sufficiently detailed summary of the content used for training (to enable rights holders to assess copyright compliance)
- Provide information to downstream deployers (via usage policies, system cards, or model cards)
- Put in place a policy to comply with EU copyright law during training, including honoring text-and-data-mining opt-outs
Systemic-risk GPAI models
Models trained using more than 10^25 FLOPs of cumulative compute are presumed to pose "systemic risk" and face additional obligations: adversarial testing and red-teaming, serious-incident reporting to the European AI Office, cybersecurity measures, and energy efficiency reporting. Frontier models such as GPT-4, Claude 3 Opus, and Gemini Ultra are widely understood to exceed this threshold.
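The 10^25 FLOP threshold can be roughly sanity-checked with the widely used 6·N·D approximation (training compute ≈ 6 × parameters × training tokens). A minimal sketch, using illustrative parameter and token counts rather than real figures for any named model:

```python
# Rough training-compute estimate via the common 6*N*D heuristic:
# FLOPs ~= 6 * parameters * training tokens.
# All figures below are illustrative placeholders, not real model stats.

SYSTEMIC_RISK_THRESHOLD = 1e25  # EU AI Act presumption threshold (FLOPs)

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * parameters * tokens

def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption."""
    return estimated_training_flops(parameters, tokens) > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 2T tokens:
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(70e9, 2e12)}")
```

Note that the Act's presumption is based on actual cumulative training compute, and the Commission can designate models below the threshold; the heuristic above is only a back-of-the-envelope check.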
What this means if you are a developer using these models
If you are building an application on top of a GPAI model via API — which describes most AI startups and SaaS products today — you are a downstream provider or deployer, not a GPAI model provider. The Chapter V obligations fall primarily on the model provider (Anthropic, OpenAI, Google, Meta), not on you.
However, you are not entirely off the hook. Your obligations as a downstream developer include:
- Comply with the model provider's usage policies. The Act requires GPAI providers to publish usage policies, and as a downstream developer you are expected to operate within them. Violating them could expose you to contractual and regulatory liability.
- Classify your AI system separately. The fact that you use a GPAI model does not automatically determine your system's risk classification. Your finished product must be classified based on its own use case and outputs.
- Apply Article 50 transparency obligations. If your product uses a GPAI model to power a chatbot or conversational interface, Article 50 still applies — you must disclose to users that they are interacting with AI.
- Do not use GPAI for prohibited purposes. If your application causes the GPAI model to perform a prohibited practice (social scoring, subliminal manipulation, exploiting vulnerabilities), the liability is yours, not the model provider's.
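The Article 50 disclosure point above can be sketched as a thin wrapper around your chat responses. This is a hypothetical shape: the field names and notice wording are illustrative, since the Act mandates disclosure but not any particular format.

```python
from dataclasses import dataclass

# Hypothetical user-facing disclosure text; Article 50 requires that users
# know they are interacting with AI, not this exact wording.
AI_DISCLOSURE = (
    "You are interacting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

@dataclass
class ChatResponse:
    message: str
    ai_disclosure: str = AI_DISCLOSURE

def respond(model_output: str) -> ChatResponse:
    """Wrap raw model output with the user-facing AI disclosure."""
    return ChatResponse(message=model_output)

reply = respond("Here is a summary of your invoice...")
print(reply.ai_disclosure)
```

In a real product the disclosure typically lives in the UI (a persistent banner or first-message notice) rather than in every API payload; the point is that it must be present and visible before or at the start of the interaction.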
The "general purpose AI system" distinction
There is an important distinction between a GPAI model (the foundation model itself) and a GPAI system (a GPAI model deployed in a specific context). If you build a general-purpose AI assistant on top of Claude — not a narrow tool for a specific task but a general assistant that your users can use for anything — your product may itself qualify as a GPAI system. In that case, both the Chapter V obligations (flowing from the underlying model) and the risk classification framework (for the deployed system) apply to you.
Practical steps for developers
- Read your model provider's usage policies. Anthropic, OpenAI, and Google all publish usage policies that are directly relevant to your EU AI Act compliance. Understand what they permit and prohibit.
- Classify your finished product. Use ActReady's free classifier to determine your product's risk tier independently of the underlying GPAI model. The model provider's compliance does not substitute for yours.
- Implement Article 50 disclosure if your product includes any conversational AI interface.
- Document your reliance on the GPAI model in your technical documentation — including which model, which version, and what usage policies you are operating under.
- Monitor model provider compliance. If your model provider is found non-compliant and forced to change their model, your product may need to change too. Stay informed.
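The documentation step above can be captured as a small structured record kept alongside your technical documentation. A sketch with hypothetical values (the record shape and field names are our own, not prescribed by the Act):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record of reliance on an upstream GPAI model.
# All values below are illustrative placeholders.
@dataclass
class GpaiDependency:
    provider: str
    model: str
    model_version: str
    access_method: str        # e.g. "api" vs. self-hosted weights
    usage_policy_url: str
    last_reviewed: str        # ISO date the policy was last reviewed

dep = GpaiDependency(
    provider="Anthropic",
    model="Claude",
    model_version="example-version",
    access_method="api",
    usage_policy_url="https://example.com/usage-policy",
    last_reviewed="2025-01-01",
)

# Serialize for inclusion in technical documentation or an audit trail.
print(json.dumps(asdict(dep), indent=2))
```

Keeping this record in version control gives you a dated trail showing which model version and usage policy your product was operating under at any point, which is useful if the provider changes either.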
The GPAI Code of Practice
The European AI Office has been developing a GPAI Code of Practice — a set of voluntary commitments for model providers that serves as a practical implementation guide. Major providers including Anthropic, Google, and OpenAI are participating. The Code of Practice does not reduce your obligations as a downstream developer, but it does mean the major model providers are actively working toward compliance — which reduces the risk that using their APIs puts you in a difficult position.
The bottom line
If you are building on Claude, GPT-4, Gemini, or any other major foundation model: the Chapter V obligations are largely the model provider's problem. Your job is to classify your own product correctly, comply with usage policies, implement transparency disclosures, and document your use of the underlying model. Start with a free classification at ActReady to understand exactly where your product sits.