EU AI Act: Deployer vs Provider — What's the Difference and Why It Matters
One of the most common points of confusion in EU AI Act compliance is the distinction between a provider and a deployer. The regulation assigns very different obligations to each role — and most companies are one, the other, or both without realising it.
The definitions
Provider
A provider is any natural or legal person that develops an AI system or general-purpose AI model and places it on the market or puts it into service under their own name or trademark, whether for payment or free of charge. In simple terms: if you built the AI system and offer it to others, you are a provider.
- A company that builds and sells an AI-powered hiring tool is a provider
- A startup that offers an AI risk scoring API is a provider
- An open-source project that releases an AI model is a provider (though the Act exempts some free and open-source releases from parts of the regulation)
- Anthropic, OpenAI, and Google are providers of GPAI models
Deployer
A deployer is any natural or legal person that uses an AI system under their own authority for a professional purpose, except where used for personal non-professional activities. In simple terms: if you are using someone else's AI system in your product or operations, you are a deployer.
- A company that integrates GPT-4 into their customer support product is a deployer
- An HR department using a third-party AI screening tool is a deployer
- A bank using an AI credit scoring API is a deployer
Most SaaS companies are both
Here is the part that trips people up: most SaaS companies with AI features are simultaneously providers and deployers. They are a deployer of the underlying GPAI model (Claude, GPT-4, etc.) and a provider of their own AI-powered product to their customers.
If you build a product that uses Claude under the hood and sell it to businesses, you are a deployer of Claude and a provider of your own product. Both sets of obligations apply.
What providers must do
Providers of high-risk AI systems carry the heaviest obligations:
- Establish a risk management system (Article 9)
- Implement data governance practices (Article 10)
- Prepare Annex IV technical documentation
- Enable automatic logging of events (Article 12)
- Provide transparency and instructions to deployers (Article 13)
- Design for human oversight (Article 14)
- Ensure accuracy, robustness, and cybersecurity (Article 15)
- Conduct a conformity assessment before market placement
- Register the system in the EU AI database
- Affix CE marking
- Implement post-market monitoring (Article 72)
Providers of limited-risk systems must comply with the Article 50 transparency obligations. Providers of minimal-risk systems have no mandatory obligations under the Act.
What deployers must do
Deployers have a lighter but still meaningful set of obligations for high-risk AI systems:
- Use the AI system in accordance with the provider's instructions
- Assign human oversight to competent individuals
- Monitor the AI system for risks during use
- Inform the provider of serious incidents or malfunctions
- Conduct a fundamental rights impact assessment (required for public bodies and certain private deployers, such as banks and insurers)
- Keep logs of system use where technically possible
- Inform affected individuals that they are subject to AI decision-making
Can obligations shift between provider and deployer?
Yes — and this is important. Under Article 25, a deployer becomes a provider (and takes on provider obligations) if they:
- Put their name or trademark on a high-risk AI system already placed on the market
- Make a substantial modification to the AI system
- Change the intended purpose of a high-risk AI system in a way that makes a previously non-high-risk system high-risk
If you take a third-party AI system and significantly customise it or rebrand it as your own product, you become the provider — and inherit the full provider compliance stack.
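As a rough illustration, the Article 25 triggers above can be expressed as a simple check. This is a sketch only, not legal advice; the field and function names are assumptions made for this example.

```python
# Illustrative sketch of the three Article 25 triggers under which a
# deployer takes on provider obligations. Not legal advice; names are
# assumptions for this example.
from dataclasses import dataclass

@dataclass
class DeployerActivity:
    rebrands_high_risk_system: bool      # own name/trademark on the system
    substantially_modifies_system: bool  # substantial modification
    repurposes_into_high_risk: bool      # new intended purpose makes it high-risk

def becomes_provider(a: DeployerActivity) -> bool:
    """Any single trigger is enough to shift provider obligations."""
    return (a.rebrands_high_risk_system
            or a.substantially_modifies_system
            or a.repurposes_into_high_risk)

# Example: customising and rebranding a third-party high-risk system
becomes_provider(DeployerActivity(True, True, False))  # True
```

Note that the triggers are disjunctive: any one of them alone is enough to shift the full provider compliance stack onto you.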
The practical question: which am I?
Ask yourself two questions:
- Did I build the AI system and offer it to others? If yes, you are a provider for that system.
- Am I using an AI system built by someone else in my product or operations? If yes, you are a deployer for that system.
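The two questions above reduce to a simple decision sketch. Purely illustrative: the function name and inputs are assumptions, and the test applies per AI system, not per company.

```python
# Illustrative sketch of the two-question role test above.
# Not legal advice; names are assumptions for this example.
def determine_roles(built_and_offered: bool, uses_third_party: bool) -> set:
    """Return the EU AI Act roles held with respect to one AI system."""
    roles = set()
    if built_and_offered:
        roles.add("provider")
    if uses_third_party:
        roles.add("deployer")
    return roles

# A SaaS company building its product on a third-party GPAI model:
determine_roles(built_and_offered=True, uses_third_party=True)
# → {"provider", "deployer"}
```

The key design point the sketch captures is that the roles are not mutually exclusive: they are assessed independently, per system.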
Most AI-enabled SaaS companies will answer yes to both. The next step is classifying each AI system you are involved with to understand whether the high-risk obligations apply. Use ActReady's free classifier to check each system in under 60 seconds — it will identify whether you are acting as a provider, deployer, or both, and what that means for your obligations.