09 Dec 2024

AI: Considerations for Irish Fund Management Companies

Briefing

Asset Management and Investment Funds



As an increasing number of those operating in the asset management sector turn to artificial intelligence (AI) to increase efficiencies and improve competitiveness, the Irish and EU regulatory frameworks are evolving to address the risks arising from its use. Within the past six months alone, we have seen the publication of the EU AI Act in the Official Journal, the Central Bank of Ireland (Central Bank) outlining its approach to the supervision of AI in the financial markets sector and ESMA issuing guidance on the use of AI in the provision of retail investment services.

In this briefing we provide an overview of the legal and regulatory considerations which should be borne in mind by Irish fund management companies (FMCs) when using, or contemplating the use of, AI in their business models.

Supervisory focus of the Central Bank of Ireland

In its Regulatory & Supervisory Outlook published in February 2024 (R&S Outlook), the Central Bank notes that AI is one of the technologies with the “greatest transformative potential” for both the entities it supervises and for itself as a regulator. The R&S Outlook sets out the Central Bank’s perspective on the supervisory implications of using AI in financial services, giving FMCs an insight into how the Central Bank may approach the use of AI by these entities, particularly over the next two years. Since its publication, the Central Bank has also engaged on a bilateral basis with entities regulated by it to better understand how AI is being used across specific sectors.

Solving business challenges

Where FMCs are using or proposing to use AI-related technologies, the Central Bank expects them to have a clear understanding of what business challenges such technologies are addressing. In particular, the Central Bank notes that there will be cases where FMCs need to exercise judgement about whether it is appropriate to use AI for a particular process or business problem: even where a business challenge can be addressed by AI, it may not be appropriate for AI to be used.

In this regard, FMCs should map out those business challenges which are, or may in the future be, addressed by AI. Such challenges could include, for example, risk management (in the context of monitoring and managing the risk exposure of a fund’s portfolio), operational efficiencies, supporting anti-money laundering monitoring, the use of internal chatbots or co-pilots, and the completion of RFPs and DDQs. In the area of sustainable finance, AI tools may be used to assess investee companies’ ESG commitments and pledges and to extract other sustainability-related data reported by investee companies.

As part of this process, FMCs should document the decision-making process around any judgements taken regarding the appropriateness of the AI to solve a specific business challenge and ensure that the decision-making is sufficiently transparent, with clarity over who is accountable for any decisions made.
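By way of illustration only, such documentation could take the form of a structured register of AI usages. The following minimal sketch (in Python) shows the kind of fields a register entry might capture; the field names and structure are our own assumptions rather than anything prescribed by the Central Bank:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseRegisterEntry:
    """One documented judgement on the appropriateness of AI for a business challenge."""
    business_challenge: str   # e.g. "monitoring risk exposure of a fund's portfolio"
    ai_solution: str          # description of the AI tool or system proposed
    appropriate_for_ai: bool  # outcome of the appropriateness judgement
    rationale: str            # basis for the decision, retained for transparency
    accountable_owner: str    # who is accountable for the decision
    decision_date: date = field(default_factory=date.today)

# Hypothetical example: documenting a decision to use an internal co-pilot
# for the completion of RFPs and DDQs.
entry = AIUseRegisterEntry(
    business_challenge="completion of RFPs and DDQs",
    ai_solution="internal co-pilot drafting answers from an approved answer library",
    appropriate_for_ai=True,
    rationale="drafting aid only; every output reviewed by a named individual",
    accountable_owner="Head of Operations",
)
```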

Risk management

While the Central Bank acknowledges the potential positive benefits that AI can bring, it also notes that it poses risks that can negatively impact individuals and society. It is of the view that AI technologies can change the risk landscape for FMCs and other regulated entities, particularly in the areas of digital transformation, the management of cyber and IT risk, and governance.

In particular, it notes that FMCs which (i) engage in AI-enabled processes while continuing to use existing systems in parallel, or (ii) transition fully to AI-enabled processes, will be expected to ensure that they continue to be operationally resilient. FMCs should be ready to respond and adapt should operational risks crystallise.

Therefore, FMCs may need to:

  • review and update their risk management framework in order to ensure that all risks arising from the use of AI are appropriately identified, monitored and managed on an ongoing basis. This should involve categorising the FMC’s use of AI and documenting the basis for such categorisation for each AI usage. In carrying out this risk classification exercise, the risk classification system in the EU AI Act[1] (AI Act), discussed in more detail below, should be considered; and

  • review and update their operational resilience framework.

As with the development and review of any internal framework, senior management and designated persons in FMCs will have a pivotal role to play in ensuring that appropriate risk management and operational resilience frameworks to manage the risks arising from the use of AI are established and embedded within the organisation.

The R&S Outlook indicates that the Central Bank is currently undertaking policy work and developing supervisory expectations for regulated entities, including preparing for the implementation of the EU AI Act.

EU AI Act

In establishing an AI risk and governance framework, each FMC should also have regard to the upcoming AI Act. In order to determine whether it falls within the scope of the AI Act, and if so, what obligations it will be subject to, each FMC will need to assess whether it constitutes any of the following categories of operator, bearing in mind the dates on which the applicable obligations will begin to apply:

  • Provider: any FMC which develops (or instructs the development of) an AI system or GPAI model and places it on the EU market or puts it into service within the EU, under its own name or trademark, whether for payment or free of charge. Providers of AI systems must comply with applicable obligations from 2 August 2026. Providers of GPAI models must comply with specific obligations from 2 August 2025, unless the GPAI model was placed on the EU market and already in use before that date, in which case providers have until 2 August 2027 to comply with applicable rules.

  • Deployer: any FMC which uses an AI system under its authority. Applicable obligations apply from 2 August 2026.

  • Importer: any FMC which places an AI system on the market in the EU that bears the name or trademark of a third-country person or entity. Applicable obligations apply from 2 August 2026.

  • Distributor: any FMC that makes an AI system available on the EU market where it is not the provider or the importer of that system. Applicable obligations apply from 2 August 2026.
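Purely by way of illustration, this assessment can be thought of as a checklist against the four operator categories. The sketch below (in Python) uses our own simplified predicates, which are no substitute for the statutory definitions, and reflects the fact that an FMC may fall into more than one category at once:

```python
def operator_categories(
    develops_and_places_on_eu_market: bool,
    uses_system_under_own_authority: bool,
    places_third_country_system_on_eu_market: bool,
    makes_system_available_as_neither_provider_nor_importer: bool,
) -> list[str]:
    """Rough mapping of an FMC's activities to AI Act operator categories.

    The boolean predicates are simplified paraphrases of the definitions
    summarised above, for illustration only.
    """
    categories = []
    if develops_and_places_on_eu_market:
        categories.append("provider")
    if uses_system_under_own_authority:
        categories.append("deployer")
    if places_third_country_system_on_eu_market:
        categories.append("importer")
    if makes_system_available_as_neither_provider_nor_importer:
        categories.append("distributor")
    return categories

# An FMC that simply uses a vendor's AI system in its own business is a deployer.
print(operator_categories(False, True, False, False))  # ['deployer']
```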


What AI is covered by the AI Act?

Consideration will also need to be given to the type of AI technology which is being provided or deployed, and whether such system is in scope of the AI Act.

The AI Act governs two types of technology, namely: 

  • AI systems; and

  • General-purpose AI systems.

An AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. This distinguishes an AI system from other software whose output is pre-determined by a strict algorithm (i.e. if x, then y).

A General-purpose AI (GPAI) system is “an AI system which is based on a general-purpose AI model[2] and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

Risk Classification

The AI Act applies a risk-based approach in classifying AI systems, as follows:

  • Unacceptable risk (prohibited): AI which presents an unacceptable risk to EU citizens and accordingly is banned within the EU from 2 February 2025. Examples include social scoring systems and, subject to some very narrow exceptions, sophisticated applications of AI that remotely monitor people in real time in public spaces.

  • High risk[3][4]: AI which poses a high risk to the health and safety or fundamental rights of EU citizens. Examples include AI systems used in employment, workers management and access to self-employment, e.g. to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.

  • Limited risk: AI which presents a limited risk to EU citizens. Examples include chatbots.

AI systems not falling into the risk categories mentioned above will not trigger any obligations for operators of such AI systems under the AI Act.
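As a minimal sketch of how an FMC might tag each AI usage against these risk tiers during the categorisation exercise described above (the enum and the example mapping below are our own illustrations drawn from the examples in this briefing, not a substitute for a full analysis against the AI Act, including Annex III):

```python
from enum import Enum

class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited in the EU from 2 February 2025"
    HIGH = "subject to detailed provider and deployer obligations"
    LIMITED = "subject to transparency obligations"
    MINIMAL = "triggers no operator obligations under the AI Act"

# Illustrative tagging of use cases; "internal spam filtering" is a commonly
# cited minimal-risk example and is our assumption, not taken from this briefing.
illustrative_classification = {
    "social scoring system": AIActRiskTier.UNACCEPTABLE,
    "analysing and filtering job applications": AIActRiskTier.HIGH,
    "customer-facing chatbot": AIActRiskTier.LIMITED,
    "internal spam filtering": AIActRiskTier.MINIMAL,
}

for use_case, tier in illustrative_classification.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```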

Obligations imposed under the AI Act

Obligations imposed under the AI Act will be dictated by the FMC’s operator category and by the risk classification of the AI system being provided, deployed, imported or distributed by it.

The riskier the AI system, the stricter the rules.

Providers of high-risk AI systems will be required to comply with a host of detailed obligations. These include, for example, obligations to establish, implement, document and maintain a risk management system and a quality management system in accordance with the parameters set down in the AI Act, to ensure that the system is designed in such a way that it allows for effective human oversight, and to ensure that a relevant conformity assessment procedure is undertaken and an EU declaration of conformity is issued.

Separately, those FMCs deploying high-risk AI systems must, for example, take appropriate technical and organisational measures to ensure that the system is being used in accordance with the instructions for use issued by the provider. They will also be required to identify a person with the necessary “competence, training and authority” to oversee the AI system and must ensure that those dealing with the operation of the system have a sufficient level of AI literacy.

On the other hand, for AI systems falling into the “limited risk” category, compliance obligations are light touch and focus on transparency requirements for providers and deployers. These transparency requirements are aimed at ensuring that the user is informed that they are dealing with an AI system unless this is immediately obvious to the user.

It is also worth noting that an additional set of obligations is imposed on providers of GPAI models. Under the framework, more onerous obligations are imposed on providers of GPAI models which create systemic risk[5] than on providers of those which do not. Those providing GPAI models which create systemic risk are required, for example, to perform model evaluations in accordance with standardised protocols and tools, to assess and mitigate systemic risk that may stem from such models, and to keep track of, document and report any “serious incidents” to the EU AI Office and national competent authorities.

Key takeaways

Given the Central Bank’s regulatory focus on the use of AI by the entities it regulates, and with the application dates of the AI Act fast approaching, FMCs should:

  • map out their business challenges that are already, or are intended to be, solved using AI and document their decision-making process for the use of such AI;

  • review and update their operational resilience framework and risk management framework in light of their use or proposed use of AI;

  • determine whether their use or proposed use of AI triggers the application of the AI Act and if so, assess the scope of those obligations and implement a step-plan to ensure compliance with its requirements by the applicable deadlines; and

  • consider identifying a person(s) at senior management level within the FMC who will assume responsibility for overseeing the implementation of the AI governance and risk management framework.


By prioritising the establishment and implementation of a robust AI framework which meets the supervisory expectations of the Central Bank as well as ensuring compliance with the AI Act, FMCs should be well positioned to harness the potential of AI to address existing business challenges and optimise operations while managing any risks arising from such use appropriately.


Footnotes:

[1] Regulation (EU) 2024/1689
[2] A “general-purpose AI model” is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.
[3] Annex III of the AI Act lists areas in which the use of AI can be particularly sensitive and lists concrete use cases for each area. An AI system is classified as high-risk if it is intended to be used for one of these use cases.
[4] For an overview of the essential requirements laid down in the AI Act for high-risk AI systems and the role that technical standards play in meeting such requirements, please refer to the Science for Policy Brief published by the European Commission in November 2024.
[5] Defined as a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole.

DISCLAIMER: This document is for information purposes only and does not purport to represent legal advice. If you have any queries or would like further information relating to any of the above matters, please refer to the contacts above or your usual contact in Dillon Eustace.


Copyright Notice: © 2024 Dillon Eustace LLP. All rights reserved.
