The application of artificial intelligence within organisations such as investment firms, asset managers, fund managers and trading platforms is developing at a rapid pace. Whereas AI was initially used primarily for data analysis and information processing, its use is increasingly shifting towards more complex applications such as trade optimisation and predictive modelling. This development offers clear opportunities, but according to the AFM, the risks are increasing just as quickly.
Recent AFM research shows that more than half of asset managers are already using AI or plan to do so in the near term. At the same time, many organisations still lack a clearly defined governance structure. A significant proportion of institutions do not yet have specific AI policies in place, and an ethical framework is often missing.
The regulator emphasises that AI does not occupy a special position: its use falls under existing obligations relating to controlled and sound business operations. This means organisations must explicitly identify, assess and manage AI-related risks—just as they would any other type of risk.
The report highlights three key areas of concern:
The effectiveness of AI depends heavily on the quality of the data it uses. At the same time, the increasing complexity of models makes it more difficult to explain their decisions. This creates a tension, particularly in dealings with clients and regulators, where transparency is essential.
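To make the data-quality dependency concrete, the sketch below shows a simple pre-inference gate that refuses to act on poor-quality input. It is illustrative only; the column name, thresholds and metrics are assumptions, not taken from the AFM report:

```python
from dataclasses import dataclass

import pandas as pd


@dataclass
class QualityReport:
    completeness: float    # share of non-missing values across all columns
    freshness_days: int    # age in days of the most recent record
    passed: bool


def assess_input_quality(df: pd.DataFrame,
                         min_completeness: float = 0.99,
                         max_age_days: int = 1) -> QualityReport:
    """Illustrative pre-inference gate: block model use on poor-quality data.
    Assumes the frame has a tz-naive 'as_of' timestamp column."""
    completeness = float(1.0 - df.isna().mean().mean())
    freshness_days = (pd.Timestamp.now() - df["as_of"].max()).days
    passed = completeness >= min_completeness and freshness_days <= max_age_days
    return QualityReport(completeness, freshness_days, passed)


# Usage: refuse to act on model output if the inputs fail the gate.
# report = assess_input_quality(market_data)
# if not report.passed:
#     raise RuntimeError(f"Input quality below threshold: {report}")
```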
AI requires clearly defined responsibilities, appropriate controls and well-documented decision-making. In practice, these foundational elements are often insufficiently developed, particularly for generative AI. Key questions remain: who is responsible, when should intervention occur, and what should be documented—and how?
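One practical answer to the documentation question is to log every AI-assisted decision in a structured audit record. The sketch below is a minimal illustration; the fields are assumptions about what a firm might choose to capture, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AIDecisionRecord:
    """Illustrative audit entry for a single AI-assisted decision."""
    model_id: str               # which model produced the output
    model_version: str          # exact version, for reproducibility
    input_ref: str              # reference (e.g. a hash) to the input used
    output_summary: str         # the recommendation or decision produced
    owner: str                  # accountable function for this model
    human_reviewer: str | None  # None if no human reviewed the output
    overridden: bool            # whether a human changed the outcome
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
```

A record like this ties each output to a responsible owner and shows whether, when and by whom intervention occurred, which is exactly the information the open questions above ask for.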
The degree of human involvement in AI-driven decision-making is a crucial aspect of governance. This ranges from full human control to largely autonomous systems. The following model helps clarify these variations.
[Figure: levels of human involvement in AI-driven decision-making, ranging from human-in-the-loop to human-out-of-the-loop]
The distinction between “human-in-the-loop”, “human-on-the-loop” and “human-out-of-the-loop” is essential. As AI applications become more autonomous, the requirements for controls, monitoring and escalation increase. For many applications within asset management, a model involving human participation in decision-making is the most appropriate—particularly where investment decisions or client impact are concerned.
At the same time, a ‘human-on-the-loop’ model—where AI operates independently but under supervision—also requires clear agreements on when and how intervention takes place. This directly affects governance, as well as areas such as incident management and accountability.
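This distinction can also be made explicit in system design. The sketch below is a minimal illustration rather than a prescribed implementation; the oversight levels come from the model above, while the function names and the anomaly threshold are assumptions:

```python
from enum import Enum, auto


class OversightLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a human approves each decision first
    HUMAN_ON_THE_LOOP = auto()      # AI acts alone; humans monitor, can intervene
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous operation


def requires_approval(level: OversightLevel) -> bool:
    """In-the-loop systems never execute without explicit human sign-off."""
    return level is OversightLevel.HUMAN_IN_THE_LOOP


def should_escalate(level: OversightLevel, anomaly_score: float,
                    threshold: float = 0.8) -> bool:
    """On-the-loop systems run unattended but must alert a human when
    output deviates beyond an agreed threshold (value is illustrative)."""
    if level is OversightLevel.HUMAN_OUT_OF_THE_LOOP:
        return False  # assumption: such systems rely on other controls
    return anomaly_score >= threshold
```

Encoding the oversight level this way turns the "clear agreements on when and how intervention takes place" into reviewable logic rather than informal practice.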
The introduction of AI changes not only processes but also the role of employees. The European AI Act therefore underlines the importance of AI literacy. Employees must not only understand how AI works, but also:
This is not a ‘nice-to-have’, but a prerequisite for effective control.
What the report implicitly makes clear is that AI is not a standalone topic. Its risks intersect with existing domains such as IT, data, compliance and outsourcing. It is therefore logical to integrate AI risks into existing frameworks, such as ICT risk analysis under DORA.
In practice, this means, for example:
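One hypothetical illustration of such integration is to record each AI model as an entry in the existing ICT risk register, reusing the fields and review cycle already applied to other ICT assets. The structure below is an assumption, not a format mandated by DORA:

```python
from dataclasses import dataclass


@dataclass
class ICTRiskEntry:
    """Generic ICT risk-register entry, reused for AI systems."""
    asset_id: str
    asset_type: str        # "application", "infrastructure", "ai_model", ...
    description: str
    risks: list[str]       # identified risks, in the firm's own taxonomy
    controls: list[str]    # mitigating controls already in place
    owner: str
    review_cycle_months: int


# An AI model registered like any other ICT asset:
genai_assistant = ICTRiskEntry(
    asset_id="AI-0042",
    asset_type="ai_model",
    description="Generative AI assistant for drafting client reports",
    risks=["data leakage via prompts",
           "incorrect or unexplainable output",
           "data poisoning"],
    controls=["prompt and output logging",
              "human review before client use",
              "restricted data access"],
    owner="CIO Office",
    review_cycle_months=6,
)
```

The listed risks deliberately mirror those named by the AFM: data leaks via generative AI, incorrect or unexplainable model outputs, and data poisoning.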
One aspect that currently receives relatively little attention in practice is the relationship between AI and incident management. Many of the risks identified by the AFM—such as data leaks via generative AI, incorrect or unexplainable model outputs, and attacks such as data poisoning—effectively manifest as ICT incidents. As such, they fall within the scope of DORA.
This means organisations must consider how AI-related disruptions are detected, classified and addressed within existing incident management processes. For example: when does a deviating AI output constitute an incident? How is it determined whether a data breach or integrity issue has occurred? And are existing classification and escalation procedures adequate for these types of risks?
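One way to operationalise the first of these questions is a simple rule: a deviating output becomes an incident once a monitored metric crosses a threshold agreed in advance. The sketch below is illustrative only; the drift metric, threshold values and record fields are assumptions rather than DORA requirements:

```python
def classify_ai_deviation(drift_score: float,
                          contains_personal_data: bool,
                          drift_threshold: float = 0.3) -> dict | None:
    """Turn a monitored AI deviation into an incident record, or return
    None if the deviation stays within agreed tolerances."""
    if drift_score < drift_threshold:
        return None  # tolerated deviation: logged, but not an incident
    return {
        "type": "ict_incident",  # handled in the existing DORA process
        "category": ("data_breach" if contains_personal_data
                     else "data_integrity"),
        "source": "ai_model_monitoring",
        "drift_score": drift_score,
        "escalate": drift_score >= 2 * drift_threshold,  # severe deviation
    }
```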
In practice, this often requires extending current processes to include AI-specific scenarios in monitoring, incident classification and reporting. In this way, AI is not treated as a separate domain, but integrated into broader ICT risk management as intended by DORA.
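Extending the classification scheme can be as lightweight as mapping AI-specific scenarios onto the incident categories a firm already uses, so that existing reporting and escalation paths remain unchanged. The scenario names and categories below are hypothetical examples:

```python
# Hypothetical extension of an existing incident taxonomy with
# AI-specific scenarios; each maps to a category already in use.
AI_INCIDENT_MAPPING = {
    "genai_data_leak":       "confidentiality_incident",
    "unexplainable_output":  "data_integrity_incident",
    "data_poisoning_attack": "security_incident",
    "model_unavailable":     "availability_incident",
}


def route_ai_incident(scenario: str) -> str:
    """Route an AI-specific scenario into the existing classification,
    falling back to a manual triage queue for unknown cases."""
    return AI_INCIDENT_MAPPING.get(scenario, "manual_triage")
```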
The current phase in the sector is characterised by experimentation: widespread use, relatively limited investment and little standardisation. However, regulatory expectations clearly point towards a next phase in which structure and control are central.
Organisations taking steps in this direction often begin with:
Knowledge transfer plays a key role here. Without sufficient understanding of AI—both technical and ethical—effective control remains difficult.
AI offers asset managers opportunities to operate more efficiently and innovatively. However, as emphasised by the AFM, this requires a proportional investment in control—not only in technology, but also in governance, risk management and expertise.
Organisations that succeed in this will not only meet regulatory expectations more effectively, but also build more sustainable trust in their services.
Note: This article was generated using AI, and subsequently edited and reviewed by our IT Compliance Consultants. Our colleagues provide support in establishing ICT risk management frameworks, as well as implementing controls for a well-managed IT environment. Organisations using AI solutions that wish to assess the extent to which AI risks are controlled, or seek advice on the design of AI governance, can contact us via dora@projectivegroup.com.