A Practical Guide to the EU AI Act for Teams and Businesses
When the EU AI Act entered into force in August 2024, it marked the first broad, cross-sector regulatory framework specifically designed for artificial intelligence. Unlike earlier, sector-specific rules, it applies across a wide range of use cases, from healthcare diagnostics to hiring systems.
For companies operating in, or selling into, the EU, this is no longer a distant regulatory concern. Some provisions are already in effect, and the full rollout continues through 2027.
This article breaks down how the Act is structured, what the risk classification system means in practice, how responsibilities are split between providers and deployers, and what compliance looks like from both an engineering and organizational perspective.
How the EU AI Act Classifies AI Systems
The Act groups AI systems into four risk tiers. Where a system falls affects everything from documentation requirements to whether it can be placed on the market at all.
Unacceptable risk systems are prohibited. This includes AI used for social scoring by governments, certain forms of real-time biometric identification in public spaces (subject to narrow law enforcement exceptions), and systems designed to manipulate behavior by exploiting vulnerabilities. In practice, if a system falls into this category, it cannot be legally deployed in the EU.
High-risk systems are where most compliance efforts will concentrate. These include AI used in areas such as critical infrastructure, education access, employment decisions, essential services like credit scoring, law enforcement, border control, and parts of the justice system.
The Act defines high-risk status in two ways: through a list of specific use cases and through AI that acts as a safety component of products already covered by EU product safety legislation. Not every system in these domains is automatically high-risk, but many are, particularly where decisions have a material impact on individuals.
For systems that do qualify, the requirements are extensive. Organizations must implement risk management processes, ensure appropriate data governance, maintain detailed technical documentation, enable logging, provide transparency to users, design for human oversight, and meet standards for accuracy and robustness. These are not best practices; they are conditions for market access.
Limited-risk systems come with lighter obligations, mostly around transparency. Users must be informed when they are interacting with AI, and synthetic or manipulated content, such as deepfakes, must be clearly labeled.
Minimal-risk systems, such as spam filters or many AI features in games, are not subject to mandatory requirements, although voluntary codes of conduct are encouraged.
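To make the tiers concrete for an internal audit, it helps to keep a simple inventory of systems and their classification. The sketch below shows one way a team might record this; the enum values, field names, and example entry are illustrative assumptions, not structures taken from the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Internal labels for the Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One row in an internal AI inventory (illustrative fields only)."""
    name: str
    purpose: str          # e.g. "CV screening for engineering roles"
    risk_tier: RiskTier
    is_provider: bool     # did we develop it or place it on the market?
    is_deployer: bool     # do we use it in a professional context?
    owner: str            # accountable team


inventory = [
    AISystemRecord(
        name="resume-ranker",
        purpose="Ranks applicants for open roles",
        risk_tier=RiskTier.HIGH,  # employment decisions are a listed high-risk area
        is_provider=True,
        is_deployer=True,
        owner="talent-platform",
    ),
]
```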
What High-Risk Compliance Actually Requires
For teams building high-risk systems, the Act doesn’t introduce entirely new concepts so much as formalize and enforce them.
Risk management is expected to be continuous. It is not enough to assess risks once before release. Teams need to identify foreseeable risks, estimate their likelihood and impact, and revisit those assessments as systems evolve.
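As an illustration of what "continuous" can mean in practice, the sketch below keeps a simple risk register and flags entries whose last assessment is older than a review interval. The fields and the 90-day interval are assumptions made for the example, not requirements from the Act.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class RiskEntry:
    """One entry in a living risk register (illustrative fields)."""
    description: str        # e.g. "Model underperforms for non-native speakers"
    likelihood: str         # "low" | "medium" | "high"
    impact: str
    mitigations: list[str]
    last_reviewed: date


def risks_due_for_review(register: list[RiskEntry], max_age_days: int = 90) -> list[RiskEntry]:
    """Return entries whose assessment is older than the chosen review interval."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [entry for entry in register if entry.last_reviewed < cutoff]
```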
Data governance requirements apply across training, validation, and testing datasets. The emphasis is on relevance, representativeness, and appropriate handling of bias and errors. For teams working with sensitive data, such as health or biometric information, this has direct implications for how data pipelines are designed and audited.
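One lightweight way to make this auditable is to attach governance metadata to each dataset split and check it before a training run. The record structure and check names below are hypothetical, shown only to illustrate the idea.

```python
from dataclasses import dataclass, field


@dataclass
class DatasetSplitRecord:
    """Governance metadata for one dataset split (illustrative fields)."""
    split: str                                   # "training" | "validation" | "testing"
    source: str                                  # where the data came from
    collection_period: str                       # e.g. "2019-2023"
    known_gaps: list[str] = field(default_factory=list)          # e.g. under-represented groups
    bias_checks: dict[str, float] = field(default_factory=dict)  # check name -> measured value


def missing_checks(record: DatasetSplitRecord, required: set[str]) -> list[str]:
    """List the bias/representativeness checks not yet run for this split."""
    return sorted(required - record.bias_checks.keys())


training = DatasetSplitRecord(
    split="training",
    source="internal application data",
    collection_period="2019-2023",
    bias_checks={"selection_rate_by_gender": 0.92},
)
print(missing_checks(training, {"selection_rate_by_gender", "selection_rate_by_age_band"}))
```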
Technical documentation must be in place before a system is deployed and kept up to date over time. The Act does not prescribe a specific format, but in practice, many teams will rely on artifacts similar to model cards and system documentation to meet these requirements.
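A minimal sketch of such an artifact is shown below, written as a structured record that can be versioned alongside the code. The field names and example values are our own, not a format prescribed by the Act.

```python
import json
from datetime import date

# Illustrative documentation record in the spirit of a model card.
technical_doc = {
    "system_name": "resume-ranker",
    "version": "2.3.0",
    "intended_purpose": "Rank applicants for open engineering roles",
    "out_of_scope_uses": ["Automated rejection without human review"],
    "training_data_summary": "Internal application data, 2019-2023",
    "evaluation_metrics": {"accuracy": 0.87, "demographic_parity_gap": 0.04},
    "human_oversight_measures": "A recruiter reviews every ranked shortlist",
    "last_updated": date.today().isoformat(),
}

# Keep the record in version control so it stays current as the system changes.
with open("technical_documentation.json", "w") as f:
    json.dump(technical_doc, f, indent=2)
```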
Logging is another core requirement. High-risk systems need to record events in a way that makes it possible to trace decisions and investigate incidents. This pushes observability from a “nice to have” into something that must be designed in from the start.
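A minimal sketch of a traceable decision record, using Python's standard logging module, is shown below. The field names are assumptions; the point is that each automated outcome gets an identifier, a timestamp, the model version, and room to record a human override.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("decision_audit")


def log_decision_event(model_version: str, input_ref: str, output: str,
                       confidence: float, overridden_by: str | None = None) -> str:
    """Emit one structured, traceable decision record and return its id."""
    event_id = str(uuid.uuid4())
    logger.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,          # a reference to stored input, not the raw data
        "output": output,
        "confidence": confidence,
        "overridden_by": overridden_by,  # set when a human changes the outcome
    }))
    return event_id
```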
Human oversight is often the hardest part to put into practice. Why? It is not enough to say that a human can intervene. Systems must be designed so that qualified individuals can understand what is happening, monitor outcomes, and take meaningful action when needed. That includes ensuring they actually have the authority to do so.
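One common pattern, sketched below, is to let the system apply only outcomes it is confident about and route everything else to a review queue where a qualified person records the final decision. The confidence threshold and data structure are illustrative assumptions, not values from the Act.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    subject_id: str
    recommendation: str   # the model's suggested outcome
    confidence: float


CONFIDENCE_FLOOR = 0.8    # illustrative threshold, set and owned by the oversight team


def route_decision(decision: Decision, review_queue: list[Decision]) -> str | None:
    """Apply only high-confidence outcomes; queue the rest for a human reviewer.

    Returning None means no outcome takes effect until a qualified person
    has reviewed the case and recorded their own decision.
    """
    if decision.confidence < CONFIDENCE_FLOOR:
        review_queue.append(decision)
        return None
    return decision.recommendation
```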
Providers and Deployers: Who Is Responsible
The Act distinguishes between providers and deployers, and many organizations will end up acting as both.
Providers are those who develop or place AI systems on the market. They carry the bulk of the regulatory burden. This includes conducting conformity assessments, applying CE marking where required, registering certain systems, and carrying out post-market monitoring. Providers outside the EU must appoint an authorized representative within the Union.
Deployers are organizations that use AI systems in a professional context. Their obligations are narrower but still significant. They are expected to follow the provider’s instructions, monitor system performance, report serious incidents, and ensure that appropriate human oversight is in place.
One detail that is easy to miss is that deployers can become providers. If a company modifies a system or uses it in a way that introduces new risks, it may take on provider-level responsibilities for those changes.
For teams integrating third-party AI tools, this has a practical implication: using a compliant product does not automatically make your use of it compliant. Responsibility does not transfer with the software.
General-Purpose AI Models
The Act also introduces rules for general-purpose AI models, including large language models that can be adapted for many tasks.
Providers of these models must maintain technical documentation, comply with EU copyright law, and publish summaries describing their training data.
Some models may be classified as posing systemic risk. Compute thresholds, such as training runs exceeding 10²⁵ FLOPs, are used as indicators, but they are not the only factor. Additional requirements for these models include more rigorous testing and incident reporting.
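For a rough sense of where a training run sits relative to that indicator, teams sometimes use the common approximation of about 6 FLOPs per parameter per training token for dense transformer training. The sketch below applies that approximation; the numbers and the rule of thumb itself are illustrative, not part of the Act.

```python
def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate using the ~6 * params * tokens rule of thumb."""
    return 6 * n_parameters * n_tokens


flops = estimated_training_flops(n_parameters=70e9, n_tokens=2e12)  # 70B params, 2T tokens
print(f"estimated compute: {flops:.2e} FLOPs")   # ~8.4e+23
print("exceeds 1e25 indicator:", flops > 1e25)   # False for this example
```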
For teams building on top of these models, responsibilities are layered. The model provider and the application developer each have their own obligations, and the boundary between them is not always straightforward.
Enforcement and Timeline
The Act is being rolled out in stages, with different obligations applying at different points:
- Prohibitions on unacceptable-risk systems (from February 2025)
- Rules for general-purpose AI models (from August 2025)
- Most high-risk system requirements (from August 2026)
- Additional requirements for certain regulated sectors (from August 2027)
Penalties for non-compliance are significant. The highest tier reaches €35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited practices. Lower tiers apply to other types of non-compliance.
Enforcement will be handled at the member state level, meaning implementation details may vary across the EU.
Bottom Line
For organizations that have not started preparing, two steps stand out.
First, map your AI systems against the Act’s risk categories. Second, determine whether you are acting as a provider, a deployer, or both.
The EU AI Act is dense, and many of the supporting technical standards are still being finalized through European standardization bodies. What is already clear is that compliance is not just a legal exercise. It requires changes to how systems are designed, documented, and monitored.