
5 Things Canadian Businesses Should Know About The New EU Artificial Intelligence Act

April 17, 2024


On March 13, 2024, the European Parliament passed the Artificial Intelligence Act[1] (the “Act”), marking the arrival of the first comprehensive Artificial Intelligence (AI) law established by a liberal democracy.[2]

Legal pundits believe that, like the EU’s General Data Protection Regulation (GDPR), which took effect in 2018, the Act could become a global standard, influencing future legislation, including Canada’s proposed Artificial Intelligence and Data Act.

The Act aims to foster trustworthy AI by creating a uniform legal framework regulating the development, placing on the market, putting into service, and use of AI Systems (defined below). The Act also strives to promote a human-centric approach focused on protecting fundamental rights. With the growing influence of artificial intelligence across all economic sectors and the Act’s extra-territorial reach, this bulletin addresses five things that Canadian businesses should know about the Act.

Who does the Act apply to?

The Act applies to businesses that develop, place on the market, put into service, or use an ‘Artificial Intelligence System’ in the EU market. The Act defines an Artificial Intelligence System as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (an “AI System”).

The following Canadian businesses may fall within the purview of the Act:

  • businesses developing and/or using AI Systems with operations or clients in the EU;
  • businesses exporting AI-enhanced regulated products or systems used in high-risk areas in the EU; and
  • businesses offering online services that have AI components and are accessible to EU consumers (e.g., an e-commerce retailer with an AI-based chatbot on its website that transacts with EU consumers).

What obligations does the Act impose?

The Act adopts a risk-based approach, defining four levels of risk for AI Systems: unacceptable risk, high risk, limited risk, and minimal risk. Differing compliance obligations attach to each risk level.[3]

Unacceptable Risk

AI Systems that pose an unacceptable risk due to their threat to the safety, livelihoods and rights of individuals are banned outright. This category encompasses AI applications that:

  • manipulate human behaviour through subliminal or exploitative means;
  • use biometric data and categorization measures (including facial recognition technology) to identify individuals and/or infer emotion;
  • facilitate social scoring through personal characteristics, socio-economic status, or behaviour; or
  • utilize predictive policing techniques.

High Risk

High-risk AI Systems are those that pose potential threats to individuals’ health, safety, or fundamental rights; such systems are required to undergo third-party conformity assessments.

AI Systems used in fields such as health, education, recruitment, critical infrastructure management, law enforcement and justice are likely to fall within the high-risk tier. AI Systems that make determinations affecting individuals’ access to services such as healthcare and life insurance, or that quantify an individual’s financial status (e.g., credit scores), are also considered high-risk.

These AI Systems are to be carefully assessed both before they come to market and throughout their lifecycle. They are also subject to record-keeping requirements and obligations of security, transparency, and human oversight, including:

  • the establishment of a risk management system, with particular consideration given to whether the AI System will adversely impact minors or other vulnerable groups;
  • adherence to suitable data governance and management practices tailored to the AI System’s intended purposes;
  • being designed to automatically record events throughout their operational lifespan (see the illustrative sketch following this list);
  • being designed in a sufficiently transparent manner to enable users to understand the AI System’s functioning (including the provision of instructions for use); and
  • verification by at least two natural persons before a decision based on an AI System’s output is implemented.
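
As a purely illustrative sketch of the automatic event-recording obligation, the snippet below (in Python) shows one way an AI System might log each decision for later review; the function and field names are our own assumptions and are not prescribed by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger for an AI System's outputs; the event fields
# below are illustrative assumptions, not fields mandated by the Act.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_system.audit")

def record_decision_event(model_version: str, input_summary: str, output: str) -> None:
    """Append a structured, timestamped record of an AI System decision."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # summarize; avoid logging raw personal data
        "output": output,
    }
    audit_log.info(json.dumps(event))

# Example: a hypothetical credit-scoring system recording one decision.
record_decision_event("credit-scorer-1.2", "applicant features (anonymized)", "score=640")
```

Structured, timestamped records of this kind are one way to support the Act’s traceability and human-oversight goals, though the Act itself does not prescribe a particular format.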

Limited Risk AI Systems

Limited Risk AI Systems include AI applications used for creative purposes such as generating or manipulating text (e.g., chatbots/chat assistants), video, or sounds.

These systems are subject to transparency obligations aimed at informing users that they are interacting with an AI System. Additionally, AI-generated outputs must include identifying marks, such as labels or watermarks, denoting that they are artificially generated.

Canadian businesses that use chatbots on websites accessible to EU consumers will be subject to the Act and should become familiar with their compliance obligations.

Minimal Risk AI Systems

Minimal Risk AI Systems are unregulated and include AI applications used purely for entertainment, such as video games, or for routine applications such as spam filters. The European Commission anticipates that most AI applications will fall within this category.

What penalties does the Act impose?

The Act imposes significant monetary penalties for non-compliance, including the following (in each case, whichever amount is higher; a simple illustration follows this list):

  • up to 7% of global revenue or €35m for prohibited AI violations;
  • up to 3% of global revenue or €15m for non-compliance with record-keeping obligations;[4] and
  • up to 1.5% of global revenue or €7.5m for supplying incorrect or misleading information.
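
For illustration only, the following minimal sketch (in Python, using a hypothetical global revenue figure) shows how the “whichever is higher” penalty ceilings scale with revenue; the figures and function are our own illustration, not legal guidance on how regulators will calculate fines:

```python
# Illustration of the Act's "whichever is higher" penalty ceilings.
# The revenue figure is hypothetical; actual fines are set by regulators
# on a case-by-case basis, up to these ceilings.

def penalty_ceiling_eur(global_revenue_eur: float, pct: float, fixed_eur: float) -> float:
    """Return the higher of pct * global revenue and the fixed amount."""
    return max(pct * global_revenue_eur, fixed_eur)

revenue = 2_000_000_000  # hypothetical €2B in global annual revenue

print(f"Prohibited AI violations: €{penalty_ceiling_eur(revenue, 0.07, 35_000_000):,.0f}")   # €140,000,000
print(f"Record-keeping breaches:  €{penalty_ceiling_eur(revenue, 0.03, 15_000_000):,.0f}")   # €60,000,000
print(f"Misleading information:   €{penalty_ceiling_eur(revenue, 0.015, 7_500_000):,.0f}")   # €30,000,000
```

For a smaller business with, say, €50 million in global revenue, the fixed amounts (€35m, €15m and €7.5m) would be the higher figures and would therefore set the ceilings.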

The Act also gives the European AI Office, a new EU-level regulator established under the Act, discretion to apply administrative fines on a case-by-case basis, guided by factors set out in the Act.

Although the Act caps fines for startups and SMEs, it is important for businesses of all sizes to fully understand their obligations under the Act and to establish processes to ensure compliance.

When will compliance obligations take effect?

The Act is expected to come into force in May 2024, with compliance obligations taking effect on the following timelines:

  • 6 months following entry into force (estimated November 2024) – regulations regarding prohibited AI Systems (i.e., systems posing an unacceptable risk) take effect;
  • 12 months following entry into force (estimated May 2025) – regulations regarding general purpose AI (GPAI) models take effect;
  • 24 months following entry into force (estimated May 2026) – regulations for high-risk AI Systems under Annex III[5] of the Act take effect; and
  • 36 months following entry into force (estimated May 2027) – regulations for high-risk AI Systems under Annex II of the Act take effect.

What can my business do now to prepare?

As AI regulations become more prominent, businesses should consider taking the following steps to comply with the Act (and future legislation):

  • determine whether the Act applies to your business;[6]
  • understand compliance requirements;
  • develop frameworks and policies to ensure compliance; and
  • develop an AI governance strategy that aligns with your business objectives.

______________

[1] View the full text of the Act here.

[2] In July 2023, China introduced rules regarding generative AI, which came into force in August 2023.

[3] The Act also imposes separate compliance obligations (with a focus on transparency) in respect of General Purpose AI (GPAI) models, defined as AI models (i) trained with a large amount of data using self-supervision at scale, (ii) capable of competently performing a wide range of distinct tasks, and (iii) which can be integrated into a variety of downstream systems or applications. A discussion of GPAI models and their compliance obligations is beyond the scope of this bulletin.

[4] See Article 71 of the Act.

[5] Annexes II and III are detailed guidelines that assist with understanding and applying the Act. They cover technical definitions, detail compliance requirements and list specific provisions for the various categories of high-risk AI Systems. Annex II lists EU legislation that is harmonized with the AI Act, covering sectors such as machinery, medical devices, and toys. Annex III provides a detailed list of high-risk AI Systems, covering areas such as biometric identification and critical infrastructure management.

[6] To assist in determining whether the Act applies to your business’s AI System, visit the EU AI Act Compliance Checker here.