European Union AI Act: What businesses need to know

Disclaimer

This guide:

  • provides general information to readers who take full responsibility for its use
  • is not a substitute for professional or legal advice


The Government of Canada:

  • is not responsible for errors or omissions or for results arising from the use of the information in this guide
  • has not verified the contents of third-party websites
  • does not endorse, warrant, promote or recommend any services or products that may be provided or accessed through third-party websites or any person or body that may provide them 
  • has not issued or caused to be issued any advertisements that may appear on these websites


Introduction to the EU AI Act

The Artificial Intelligence Act (AI Act) is the European Union's (EU) primary regulatory framework for AI systems. It aims to address and mitigate the potential risks associated with AI technologies, ensuring they are developed and used in a safe and ethical manner. The Act has been coming into effect progressively since February 2025.

Importantly, the AI Act applies to all companies offering AI systems or services within the EU, regardless of whether they are physically based in the EU. This means Canadian AI companies doing business in the EU must comply with its requirements.

Key principles

  • Broad definition of AI systems: The AI Act defines AI systems as machine-based systems that operate with varying levels of autonomy, may adapt after deployment, and infer from the inputs they receive how to generate outputs such as predictions, content, recommendations or decisions.
  • Risk-based regulatory approach: The Act applies a risk-based framework, setting out different obligations depending on the level of risk posed by an AI system, ranging from minimal to unacceptable risk.
  • Simplified compliance rules for small and medium-sized enterprises (SMEs): To qualify as an SME, a company must have fewer than 250 employees and either an annual turnover not exceeding EUR 50 million or an annual balance sheet total not exceeding EUR 43 million (see the sketch below).
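
As an illustration only, the SME test can be written as a simple check. The thresholds below are those cited in this guide (based on the EU's SME definition); the function is a sketch, not an official test.

```python
def qualifies_as_sme(employees: int, turnover_eur: float, balance_sheet_eur: float) -> bool:
    """Sketch of the SME test described above: fewer than 250 employees AND
    at least one of the two financial thresholds. Illustrative only."""
    headcount_ok = employees < 250
    financial_ok = turnover_eur <= 50_000_000 or balance_sheet_eur <= 43_000_000
    return headcount_ok and financial_ok

# Example: 120 employees, EUR 60M turnover, EUR 40M balance sheet -> True
print(qualifies_as_sme(120, 60_000_000, 40_000_000))
```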

Determine your obligations

To assess whether your company must comply with the AI Act, begin by identifying your role under the regulation and the risk posed by your AI application. 

Your specific obligations will depend on both:

  1. Your role in the supply chain (whether you are the provider, deployer, importer, or distributor of the AI system)
  2. The risk level of the AI system (unacceptable, high, limited or minimal risk)

Please note that the EU is considering delaying the application of the Act’s requirements for high-risk AI applications, as well as simplifying certain obligations for businesses. This guide will be updated once these amendments have been officially adopted.

Determine your role in an AI supply chain

You may qualify as a provider if you:

  • develop or commission the development of an AI system or General-Purpose AI (GPAI) model
  • place an AI system or GPAI model on the EU market, or put it into service, under your own name or trademark

You may qualify as a deployer if you:

  • use an AI system or GPAI model in your own capacity
  • make use of the output generated by an AI system within the EU

You may qualify as an importer if you:

  • are a company established within the EU
  • place an AI system on the EU market under the name or trademark of a company based outside the EU

You may qualify as a distributor if you:

  • make an AI system available on the EU market, other than as a provider or importer

Providers established outside the EU are also required to appoint an authorized representative based in the EU. You may authorize this representative to fulfill certain AI Act compliance obligations on your behalf.

Determine the risk level of AI systems

In addition to your role in the AI supply chain, your company’s obligations under the AI Act are also determined by the risk level of your AI system. The Act defines four risk categories, ranked from highest to lowest.

Unacceptable risk

  • Description and examples: AI systems involving practices considered unacceptable due to their potential risks to European values and fundamental rights, such as social scoring, mass surveillance, or manipulation of behaviour causing harm
  • Status: Prohibited

High risk

  • Description: AI systems posing a significant risk to the health, safety or fundamental rights of EU citizens
  • Requirement: All actors must comply with the obligations outlined in Articles 6 and 49

Limited risk

  • Description: AI systems designed for interaction with individuals or content creation that may carry inherent risks of impersonation
  • Requirement: Providers and deployers must comply with the rules outlined in Article 50

Minimal risk

  • Examples: AI-enabled search engines or spam filters
  • Requirement: Providers and deployers may adopt voluntary codes of practice (Article 95)
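
The risk-based framework can be summarized as a lookup from risk category to the status or article references cited above. The sketch below is for orientation only; the strings are shorthand for the table, not a complete statement of the obligations.

```python
# Orientation sketch only: maps the risk categories above to the status or
# AI Act articles cited in this guide. Not legal advice and not exhaustive.
RISK_OBLIGATIONS = {
    "unacceptable": "Prohibited (Article 5)",
    "high": "All actors: obligations under Articles 6 and 49",
    "limited": "Providers and deployers: rules under Article 50",
    "minimal": "Voluntary codes of practice (Article 95)",
}

def obligations_for(risk_level: str) -> str:
    """Return the status or article references this guide associates with a risk level."""
    return RISK_OBLIGATIONS[risk_level]

print(obligations_for("limited"))
```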

High-risk systems

An AI system is considered high-risk when:

  • it is intended to be used as a safety component of a product covered by the EU harmonisation legislation listed in Annex I of the AI Act
  • it is itself a product covered by the legislation listed in Annex I of the AI Act
  • it is intended for one of the uses listed in Annex III of the AI Act, such as in the educational sector, in recruiting and employment processes, for credit scoring (except fraud detection), or for insurance pricing for natural persons

Checklist: your obligations as a provider of a high-risk AI system

  • Establish a risk management system throughout the high-risk AI system’s lifecycle
  • Conduct data governance that ensures training, validation, and testing datasets are relevant, sufficiently representative and, to the greatest extent possible, free of errors and complete according to the intended purpose
  • Draw up technical documentation to demonstrate compliance. SMEs may opt to provide information in a simplified manner, under a format to be developed by the European Commission
  • Ensure built-in record-keeping in the AI system to enable automatic recording of events relevant for risk identification and substantial modifications throughout the system’s lifecycle (a minimal logging sketch follows this checklist)
  • Provide instructions for use to deployers to enable their compliance
  • Design AI systems to allow for deployers to implement human oversight
  • Design AI systems to achieve appropriate levels of accuracy, robustness, and cybersecurity
  • Establish a quality management system to ensure compliance
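
The record-keeping item above is, in practice, an engineering requirement: the system must automatically log events relevant to risk identification. Below is a minimal sketch of such event logging; the JSON field names and log format are illustrative assumptions, not a schema prescribed by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative event log for a high-risk AI system: one JSON line per
# inference event, so records can be retained and audited later.
# The field names are assumptions, not a schema defined by the AI Act.
logging.basicConfig(filename="ai_events.log", level=logging.INFO, format="%(message)s")

def record_event(model_version: str, input_ref: str, output_ref: str, anomaly: bool = False) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,      # reference to the input data, not the data itself
        "output_ref": output_ref,
        "anomaly_flagged": anomaly,  # relevant for risk identification
    }
    logging.info(json.dumps(event))

record_event("credit-scorer-1.4.2", "req-000123", "resp-000123", anomaly=False)
```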

These obligations may be transferred to distributors, importers, or deployers when:

  • they put their name or trademark on a high-risk AI system
  • they make a substantial modification to a high-risk AI system
  • they modify the intended purpose of an AI system in such a way that it becomes high-risk

Once you have fulfilled the compliance obligations under the AI Act, you must complete the following steps:

  • Conduct a conformity assessment: Demonstrate compliance with the applicable requirements through a formal conformity assessment procedure. Depending on the context of use, a Fundamental Rights Impact Assessment (FRIA) may also be required (see below)
  • Issue an EU Declaration of Conformity: For each high-risk AI system, you must provide a Declaration of Conformity stating that the system meets the relevant legal requirements
  • Register your high-risk AI system in the EU database

Application of fundamental rights impact assessments (FRIA) to Canadian deployers

If you are providing a public service in the EU or deploying a high-risk AI system related to law enforcement, migration, border control, and asylum, you need to carry out a FRIA. This requires deployers to:

  • describe how and when the AI system will be used and who it may affect
  • identify potential risks of harm to impacted individuals or groups
  • outline human oversight measures in place to mitigate risks
  • specify actions to be taken if risks materialize

This assessment must be completed before the first use of the system but can be updated if circumstances change. The AI Office will provide a template to help deployers comply with their obligations. If an organisation has already conducted a Data Protection Impact Assessment (DPIA) under EU privacy laws, the FRIA should complement that evaluation.
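
Until the AI Office publishes its template, deployers may find it useful to structure a FRIA record around the four elements listed above. The sketch below is a hypothetical structure based only on this guide's description; the class and field names are our own.

```python
from dataclasses import dataclass

# Hypothetical FRIA record mirroring the four bullet points above.
# The AI Office template, once published, will supersede this structure.
@dataclass
class FRIARecord:
    system_name: str
    usage_description: str          # how and when the system will be used
    affected_groups: list[str]      # who it may affect
    identified_risks: list[str]     # potential harms to individuals or groups
    oversight_measures: list[str]   # human oversight in place to mitigate risks
    mitigation_actions: list[str]   # actions to take if risks materialize
    completed_before_first_use: bool = True

fria = FRIARecord(
    system_name="benefit-eligibility-screener",
    usage_description="Pre-screens applications for a public housing benefit",
    affected_groups=["benefit applicants"],
    identified_risks=["incorrect denial of benefits"],
    oversight_measures=["caseworker reviews every negative decision"],
    mitigation_actions=["suspend automated screening pending manual review"],
)
```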

Checklist: key obligations for deployers of high-risk AI systems in the EU market

  • Use high-risk AI systems in line with the provider’s instructions and assign trained personnel to oversee their operation
  • Monitor system performance, report risks or serious incidents to providers and authorities, ensure input data quality, and maintain system logs for at least 6 months
  • Inform workers and affected individuals when deploying high-risk AI in workplaces or decision-making processes
  • Cooperate with authorities on compliance and use AI system documentation to fulfill data protection impact assessment obligations where applicable

Importers and distributors

If you are an importer or distributor of high-risk AI systems, you are responsible for ensuring that these systems meet all regulatory requirements before they are placed on the EU market.

Importers must verify that the provider has:

  • completed the conformity assessment procedure
  • prepared the necessary technical documentation
  • affixed the required CE marking
  • supplied the EU Declaration of Conformity and instructions for use
  • appointed an authorized representative established in the EU

Distributors are required to conduct due diligence to ensure that high-risk AI systems comply with the AI Act before placing them on the market.

Limited-risk systems

An AI system is considered limited risk when it:

  • interacts directly with users
  • poses minimal risks to individuals’ rights and safety

Examples include chatbots and virtual assistants (for example, AI-powered customer service bots), AI-generated content tools (such as text generators and AI art creators), and deepfake or synthetic media generators.

As a provider of a limited-risk AI system, you must ensure that:

  • users are clearly informed that they are interacting with an AI system
  • any AI-generated or manipulated content, including audio, images, video, or text, is labeled in a machine-readable format to make it detectable as artificially generated (a minimal labeling sketch follows this list)
  • the labeling of such content is robust, reliable, and interoperable, considering technical feasibility and relevant industry standards
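
The Act does not itself define the labeling format; "machine-readable" can be met through technical standards such as embedded provenance metadata (for example, C2PA). As a minimal, non-authoritative sketch, the snippet below uses the Pillow imaging library to embed an "AI-generated" flag in a PNG's metadata; the key names are illustrative assumptions, not a standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Minimal illustration: embed a machine-readable "AI-generated" flag in PNG
# metadata. Real deployments would follow an industry standard such as C2PA;
# the key names below are illustrative assumptions only.
def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

label_as_ai_generated("output.png", "output_labeled.png", "example-image-model")

# Any downstream tool can then detect the label:
print(Image.open("output_labeled.png").info.get("ai_generated"))  # -> "true"
```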

As a deployer of a limited-risk AI system, you must:

  • inform users if the AI system is used to analyze emotions or categorize individuals based on biometric data, unless the processing is legally authorized for the detection or investigation of crimes
  • disclose when the system creates deepfakes or alters publicly shared images, videos, or text, unless the content is clearly artistic, creative, or satirical in nature
  • ensure that all required information is provided to users at the first point of interaction or exposure, and that it meets accessibility standards

Minimal-risk systems

An AI system is considered minimal risk when it:

  • does not pose significant risks to health, safety or fundamental rights
  • is used in everyday applications where AI enhances functionality without directly affecting individuals in a way that requires oversight

Examples include spam filters, AI-based search engines, and AI-based autocorrect or predictive text tools.

Minimal-risk AI systems are not subject to mandatory requirements under the AI Act. However, providers and deployers are encouraged to follow voluntary best practices for AI development and deployment through codes of conduct.

General purpose AI models

An AI model is considered a GPAI model if it:

  • is trained on a large dataset using self-supervision techniques
  • can perform a wide range of tasks rather than being designed for a specific function
  • is adaptable and can be embedded in different systems across industries
  • maintains the same capabilities regardless of how it is made available to users

Exemptions include models used solely for research, development or prototyping.

Providers of GPAI models must:

  • maintain comprehensive technical documentation detailing the model’s training, testing, and evaluation processes
  • provide downstream AI providers with documentation that clearly specifies the model’s capabilities and limitations
  • ensure compliance with EU copyright laws, particularly concerning the use of training data
  • publish a summary of the training data used to develop the model, following the AI Office’s standardized template
  • cooperate fully with EU and national authorities to facilitate regulatory oversight and enforcement
  • adhere to voluntary codes of practice as evidence of compliance until harmonized standards are established

Providers releasing GPAI models under a free and open-source license are exempt from the documentation obligations above, though not from the copyright and training-data summary requirements, unless the model is classified as posing a systemic risk.

Additional requirements

GPAI models are considered systemic-risk models when the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs). In such cases, additional requirements apply under Article 55, including mandatory incident reporting and the implementation of risk mitigation measures.
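
To make the threshold concrete, training compute is commonly estimated with the rule of thumb FLOPs ≈ 6 × parameters × training tokens for dense transformer models. This heuristic is not part of the AI Act; the sketch below only illustrates the order of magnitude involved.

```python
# Rule-of-thumb estimate (not from the AI Act): total training compute is
# roughly 6 * parameters * training tokens for dense transformer models.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6 * parameters * training_tokens

# Example: a 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                   # -> 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # -> False (below the threshold)
```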

Enforcement

Enforcement of the AI Act will primarily rest with EU Member States, which were required to establish national market surveillance authorities by August 2, 2025. These bodies will be responsible for monitoring compliance, conducting investigations, and enforcing penalties for violations of the Act.

The EU AI Office will play a central role in overseeing general-purpose AI models and will provide guidance, coordination, and support to Member States in implementing and interpreting the AI Act.

Penalties for non-compliance

  • Up to EUR 7.5 million or 1% of global annual turnover, whichever is higher, for supplying incorrect or misleading information
  • Up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices
  • For SMEs and start-ups, fines are capped at the lower of the two thresholds (fixed amount or turnover percentage), as illustrated in the sketch below
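
The interplay between the fixed amounts and the turnover percentages reduces to simple arithmetic. The sketch below encodes the tiers as described in this guide, taking the higher of the two amounts for most companies and the lower for SMEs and start-ups; it is illustrative only, not legal advice.

```python
# Illustrative only: the applicable fine is the higher of the fixed cap and
# the turnover-based amount, except for SMEs and start-ups, where it is
# capped at the lower of the two.
def max_fine(fixed_cap_eur: float, pct_of_turnover: float,
             global_turnover_eur: float, is_sme: bool) -> float:
    turnover_based = pct_of_turnover * global_turnover_eur
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Prohibited-practice tier for a large firm with EUR 600M global turnover:
print(max_fine(35_000_000, 0.07, 600_000_000, is_sme=False))  # 42,000,000
# Same violation by an SME with EUR 20M global turnover:
print(max_fine(35_000_000, 0.07, 20_000_000, is_sme=True))    # 1,400,000
```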

Key dates

  • Prohibition on unacceptable risk AI systems applicable: February 2, 2025
  • Obligations for GPAI providers in effect: August 2, 2025
  • Obligations for high-risk AI systems under Annex III applicable: August 2, 2026
  • Remaining obligations, including for high-risk AI systems under Annex I, applicable: August 2, 2027
