On 2 February 2025, the first provisions of Regulation (EU) 2024/1689, better known as the AI Regulation or the AI Act, became applicable. The AI Regulation aims to promote the functioning and application of AI systems while at the same time offering a high level of protection for the fundamental rights of people, for society and for the environment.

The financial sector is also affected by the entry into force of this Regulation. The impact is minimal for now, but doing nothing is not an option. As with any other legislation, good preparation is essential for implementation.

On 18 February 2025, the Dutch Authority for the Financial Markets (“AFM”) sent investment firms and other institutions an information request about the extent to which artificial intelligence (“AI”) is applied within their operations. The AFM states that it is closely monitoring the development of AI and generative AI and is therefore requesting information from the market. The AFM sees opportunities as well as risks in the use of AI.

AI is all around us, and its application, speed and quality are only increasing. With this request and the Regulation, it is clear that both the supervisor and the legislator want to gain control over AI.

This article dissects and analyses the AI Regulation. It first determines the scope of the AI Regulation, then briefly discusses the obligations arising from it, and concludes with the next steps financial institutions should take.

The AI Act under the microscope

The AI Regulation applies to institutions that use, develop, place on the market, import or distribute AI systems. For financial institutions, two roles are particularly relevant: the user (the ‘deployer’ in the Regulation’s terminology) and the provider.

The user role applies as soon as an institution uses AI, which, given current developments, is the case for virtually all institutions. The provider role applies when an institution, alone or together with a third party, develops an AI system and places it on the market or puts it into service under its own name. This therefore also covers internally developed systems.

The remainder of this article will primarily address the obligations and responsibilities that apply to these two parties.

The aim of the AI Regulation is defined as:

To improve the functioning of the internal market and to promote the use of human-centric and trustworthy artificial intelligence (AI), whilst ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the Charter, including democracy, the rule of law, and the protection of the environment, against the harmful effects of AI systems in the Union, and to support innovation.

This objective underlines the societal relevance of the Regulation. Concepts such as fundamental rights, safety and democracy show that core values are at stake.

The AI Act does not focus on specific sectors, but on systems that qualify as AI systems. Simply put, the definition covers machine-based systems that operate with varying levels of autonomy, may show adaptiveness after deployment, and infer from the input they receive how to generate output such as predictions, content, recommendations or decisions.

Autonomy does not mean the system must act fully independently of human intervention: systems that are partly controlled by humans also fall within scope. What matters is that the system infers its output from the input it receives; adaptiveness during use may be present, but is not required in every case.

The AI Regulation thus has a broad scope and applies wherever an AI system is used or placed on the market: every application that meets the definition falls under the legislation. To facilitate interpretation, the European Commission has issued guidelines on the definition of an AI system. The Commission itself indicates, however, that these guidelines are non-binding and subject to further development, as AI continues to evolve.

Different risks

It is important to know that the AI Regulation includes different risk classifications. The number of rules that apply depends on the classification of the AI system.

The AI Regulation identifies three risk categories:

  1. Unacceptable risk;
  2. High risk; and
  3. Minimal risk.

In addition, the AI Regulation sets out obligations for general-purpose AI models and the systems built on them, such as ChatGPT.

Unacceptable risk


Certain AI systems are considered to pose an unacceptable risk, and since 2 February 2025 it has been prohibited to place these systems on the market or to use them. These include, for example, systems and applications that excessively restrict human free will, or that manipulate or discriminate.

Examples of banned AI systems include systems that use subliminal techniques beyond a person’s awareness to influence behaviour in a harmful way, systems that classify individuals through social scoring, and systems that use biometric data to categorise people according to characteristics such as race, sexual orientation or political preference.

High risk


The AI Regulation focuses in particular on regulating this risk category: AI systems that pose a potential risk to people, the environment, safety and/or fundamental rights, but that are permitted under strict conditions. This is the most heavily regulated category of permitted systems.

There are several ways for an AI system to be classified as high risk. First, there is a list (Annex III to the AI Regulation) of areas in which AI systems are regarded as high risk. These include education and training (where AI determines access to educational institutions) and access to essential services (such as determining the creditworthiness of individuals). Other areas include AI in critical infrastructure and law enforcement. These are areas that society relies on to function and to ensure a safe and fair environment for everyone.

In addition, an AI system qualifies as high risk when both of the following conditions are met:

  • the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the legislation listed in Annex I; and
  • the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment.

Annex I includes legislation relating to, among other things, civil aviation and medical devices.

Minimal risk


Finally, there is a ‘residual category’: everything that does not qualify as an unacceptable-risk, high-risk or general-purpose AI system falls under minimal risk. More on this later, including examples.

AI system analysis

Ultimately, it is important to determine whether an institution is subject to the rules laid down in the AI Regulation and, if so, which rules. As outlined earlier, various combinations are possible and thus multiple rules may apply. To determine what applies, the institution will need to carry out an analysis of its own organisation.

The following step-by-step plan can be followed; a sketch in code of this decision logic appears after the list:

  1. Is there an AI system involved?
    • Without an AI system, there are no obligations. The first question is therefore whether an AI system is involved. The elements for this assessment follow from the definition, and the European Commission’s guidelines, discussed above, can assist in applying it.
  2. If there is an AI system, the role of the institution must be determined.
    • Does the institution merely use the AI system, or does it place the system on the market or develop it (including for internal use)? Most institutions will fall under the user role.
  3. Which risk category does the system fall under?
    • For prohibited AI systems, check the list of banned practices in Article 5.
    • For high-risk AI systems, an assessment must be made as to whether the system is used in one of the areas listed in Annex III (including credit scoring) or whether it is a safety component of, or is itself, a product falling under the legislation listed in Annex I. Financial institutions do not appear to be directly affected by the legislation listed there.
      Even where an AI system falls under one of the areas in Annex III, it is not a high-risk AI system if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including where it does not materially influence the outcome of decision-making.
    • Anything that is not prohibited or high risk falls under minimal risk or under general-purpose AI systems; the latter are, for example, systems that generate text, audio, video or images.
  4. What obligations apply per risk level?
    • The next section addresses the requirements for each level, with particular focus on high risk.

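As a rough illustration, the sketch below encodes this decision logic in Python. It mirrors the simplification made in this article (general-purpose AI treated as a bucket alongside minimal risk); the class and field names are our own shorthand, and each boolean stands in for a legal assessment that cannot, in reality, be reduced to a flag.

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskCategory(Enum):
    NOT_IN_SCOPE = auto()  # no AI system, so no obligations (step 1)
    PROHIBITED = auto()    # Article 5 practice (step 3)
    HIGH = auto()          # Annex III area or Annex I safety component (step 3)
    GPAI = auto()          # built on a general-purpose AI model
    MINIMAL = auto()       # residual category

@dataclass
class Assessment:
    is_ai_system: bool              # meets the Article 3(1) definition (step 1)
    prohibited_practice: bool       # listed in Article 5
    annex_iii_area: bool            # e.g. credit scoring of individuals
    significant_risk: bool          # Article 6(3): materially influences decisions
    annex_i_safety_component: bool  # safety component of / product under Annex I
    built_on_gpai_model: bool       # e.g. built on a general-purpose model

def classify(a: Assessment) -> RiskCategory:
    if not a.is_ai_system:
        return RiskCategory.NOT_IN_SCOPE
    if a.prohibited_practice:
        return RiskCategory.PROHIBITED
    # Annex III systems drop out of "high risk" when they pose no
    # significant risk and do not materially influence decision-making.
    if a.annex_i_safety_component or (a.annex_iii_area and a.significant_risk):
        return RiskCategory.HIGH
    if a.built_on_gpai_model:
        return RiskCategory.GPAI
    return RiskCategory.MINIMAL

# Example: a credit-scoring system that materially influences lending
# decisions ends up in the high-risk category.
credit_scoring = Assessment(
    is_ai_system=True,
    prohibited_practice=False,
    annex_iii_area=True,
    significant_risk=True,
    annex_i_safety_component=False,
    built_on_gpai_model=False,
)
print(classify(credit_scoring))  # RiskCategory.HIGH
```

The worked example reflects the credit-scoring case mentioned earlier: an Annex III area combined with a material influence on decisions yields high risk.
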
Based on these steps, an institution will know which obligations apply. However, actual compliance requires further action: a policy document should be drawn up on how staff are to handle the use of AI systems, and staff must be trained accordingly.

Obligations of users and providers

Most of the obligations under the AI Act apply to users and providers of high-risk AI systems. In addition to these obligations, there are transparency obligations for certain AI systems and a general requirement for AI literacy.

AI literacy


As of 2 February 2025, this obligation from Article 4 applies. Essentially, it means that staff who use and/or develop AI must have sufficient knowledge of and competence with it. Depending on the function and the purpose for which the AI system is used, a higher level of AI literacy may be expected.

Although there is (for now) no direct fine for failing to train staff adequately, low AI literacy among staff can be taken into account in the overall fine if an AI system causes damage that is partly attributable to it.

High risk


Most of the obligations regarding high-risk systems lie with the provider. Providers must take measures in areas such as risk management, data governance, technical documentation, human oversight, transparency and cybersecurity to ensure the systems are used correctly.

In addition to these organisational measures, there is the conformity assessment: an evaluation, in some cases by an independent third party, confirming that the AI system complies with the requirements of the AI Act. The system must then be CE-marked and registered in a European database of AI systems.

Users of high-risk systems will not be exempt from obligations either — they must also take measures. These include using the system in accordance with the instructions, ensuring that human oversight is carried out by qualified individuals, and monitoring the system. In some cases, users must conduct an assessment of the AI system’s impact on the fundamental rights of individuals or groups.

Minimal risk


Lastly, there is the category with minimal to no risk. The vast majority of AI systems will fall under this category, which imposes few or no obligations on any party. An exception is the obligation for AI systems intended to interact directly with individuals, such as chatbots: it must be clear to the person that they are interacting with an AI system, a so-called transparency obligation.

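As a minimal sketch of what that transparency obligation can look like in practice: the wording of the notice below and the helper function are our own illustrative assumptions; the Act prescribes the disclosure, not its exact form.

```python
# Disclose the artificial nature of the conversation partner before any
# substantive interaction begins (illustrative wording, not prescribed).
AI_DISCLOSURE = "Please note: you are interacting with an AI system, not a human."

def open_chat(first_bot_message: str) -> list[str]:
    """Prepend the AI disclosure so the user sees it before any answer."""
    return [AI_DISCLOSURE, first_bot_message]

for line in open_chat("How can I help you with your mortgage question today?"):
    print(line)
```
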
This does not mean there are no risks. Since most AI systems will fall into this category, they must still be used responsibly. As a user, or as an institution responsible for users, it is therefore important to establish guidelines on how AI systems should be used and when they may or may not be used, for example that users must not enter sensitive customer information or personal data.

There should also be rules on what kinds of questions may and may not be asked: these systems learn from the input they receive, so they must be used ethically. Ensure, too, that users do not blindly trust AI systems but verify whether the information is accurate. An answer may seem well reasoned and convincing and still be incorrect.

General-purpose AI systems


These are AI systems built on general-purpose AI models, to which the AI Regulation devotes a dedicated chapter containing multiple obligations for providers of such models. Systems built on these models can, for example, generate images or text. That is often harmless, but not exempt from obligations: since the output may involve text or images of other individuals, copyright must be respected and transparency must be provided in the form of technical documentation.

What's next?

Since 2 February 2025, it has been prohibited to use AI systems that pose an unacceptable risk. So start by mapping out which AI systems are currently used within the organisation. The first analysis should be whether any prohibited AI systems are among them. If not, assess which risk class each system falls under and which obligations apply. Then identify which staff members are using which types of AI systems.

Since 2 February 2025, the AI literacy requirement for staff has also applied. Staff must use AI responsibly, know how it works, and be aware of the risks. The required level of knowledge depends on the risk: users of general-purpose AI systems require less training than a bank employee using AI to determine creditworthiness. An organisation must therefore keep close track of which knowledge levels its employees need, provide for them, and determine which training courses must be followed. Also begin drafting guidelines and policies on the use of AI systems and ensure that AI use is included in the organisation’s annual monitoring activities. AI systems are developing rapidly, so guidelines can quickly become outdated or need adjustment.

For now, there are no other obligations. The obligations concerning general-purpose AI models apply as of 2 August 2025, as do the provisions on sanctions and administrative fines. For example, the fine for placing on the market or using a prohibited AI system can amount to up to €35,000,000 or, if higher, up to 7% of the company’s total worldwide annual turnover.

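To make the ‘whichever is higher’ mechanism concrete, here is a minimal sketch; the function name and the turnover figure are illustrative, and the amounts are statutory maximums that a supervisor may impose, not automatic fines.

```python
def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a prohibited AI practice:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# A firm with EUR 600 million turnover: 7% is EUR 42 million, which exceeds
# EUR 35 million, so the higher amount is the applicable maximum.
print(f"{max_fine_prohibited_practice(600_000_000):,.0f}")  # 42,000,000
```
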
The requirements for high-risk AI systems will largely apply as of 2 August 2026. Only the classification under Article 6(1), as a high-risk system because it is a safety component of, or is itself, a product falling under the legislation listed in Annex I, will apply as of 2 August 2027. Although that may still seem far off, it is wise to start working on the AI Act now.
