In this FinOps Report article written by Chris Kentouris, Brenda Leong addresses confusion in applying AI legislation to financial services, noting disagreement over risk categorization. Brenda explains that “the decision on the level of risk should begin with a process of elimination which means that if it isn’t high risk based on Annex III of the AI Act, the application could be classified as limited risk.”
This article further explores how financial firms will have to adjust their strategies to comply with Europe’s AI Act over the next two years.
Financial firms across the globe will likely have to think outside the box and hope they choose the right AI governance strategy to comply with Europe’s Artificial Intelligence Act (AI Act) over the next two years.
Embracing AI’s ability to reduce operating costs and increase efficiency, the financial services industry is expected to spend up to US$79 billion developing and using the new technology by 2027, according to a joint report from the World Economic Forum and global consultancy Accenture. That’s more than twice as much as three years earlier. So far, the most popular installations of AI appear to be in the front office for customer onboarding and service, algorithmic trading, predictive analytics, market surveillance, and fraud detection. However, there is also growing interest in AI-driven systems for middle-office functions such as risk management, and back-office functions such as monitoring trades that fail to settle on time.
As the world’s most comprehensive law affecting the development and use of AI, the European Union’s AI Act positions the EU at the regulatory forefront by creating a human-centric approach that balances the benefits of innovation with the downsides of possible bias and conflicts of interest. The legislation relies on a four-tiered risk system of unacceptable, high, limited and minimal risk, with the designation of an AI application based on its impact on an individual’s safety and fundamental rights. Applications using AI in an unacceptable fashion are banned because they could cause the most damage, while high-risk AI systems are allowed as long as they are subject to stringent oversight. Limited-risk AI systems face fewer and lighter requirements, and minimal-risk AI systems are exempt. (The AI Act suggests minimal-risk AI systems still follow “best practice,” which amounts to the same requirements as high-risk systems.) General purpose AI (GPAI) models are addressed separately from the four risk categories but carry requirements similar to those for high-risk systems. Prohibited AI systems involve the use of manipulative or deceptive techniques aimed at changing behavior to cause harm; high-risk systems pose a high threat to safety and fundamental rights.
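For compliance teams that track their applications in software, the Act’s tiers can be modeled as a simple classification field in an internal inventory. The sketch below is purely illustrative: the tier names mirror the four categories and the separate GPAI designation described above, but the structure itself is an assumption, not anything the legislation prescribes.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """The AI Act's four risk tiers, plus the separate GPAI designation."""
    UNACCEPTABLE = auto()  # prohibited outright (e.g., manipulative or deceptive techniques)
    HIGH = auto()          # permitted, but subject to stringent oversight (Annex III use cases)
    LIMITED = auto()       # lighter transparency obligations
    MINIMAL = auto()       # exempt, though "best practice" is still encouraged
    GPAI = auto()          # general purpose AI models, regulated separately

# Example: tagging two hypothetical applications during an inventory review
credit_scoring_tier = RiskTier.HIGH    # creditworthiness decisions appear in Annex III
chatbot_tier = RiskTier.LIMITED        # customer-facing chatbot with disclosure obligations
```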
As a horizontal regulation intended for every industry using AI, the EU’s AI Act includes little guidance on how it should be interpreted by the financial sector, leaving C-suite executives in the lurch. Their interpretation will determine just how much work their firms have to do and by when. “While there is a framework in place that builds on existing risk management practices, financial institutions must wait for further supervisory guidance and technical standards,” explains Rebecca Perry, director of privacy and data governance solutions for Exterro, a Beaverton, Oregon-headquartered technology firm tracking the use of personal data in AI applications. Of the multiple ways in which an AI-based application can fall under the high-risk category listed in Annex III of the AI Act, only two apply directly to the financial services industry: determining the creditworthiness of an individual and deciding on a job applicant’s employment or an employee’s promotion. Another two, the automation of services that impact individual rights and decision-making based on profiling, could be applicable to the financial sector depending on the European Commission’s final decision.
The AI Act’s lack of legal clarity for financial service professionals had some panelists and attendees at a Legalweek event in New York in March worried about their decision-making process. Should a financial firm opt for a narrow interpretation of the AI Act, it might place its AI-based applications into the limited-risk category. Should it rely on a more liberal interpretation, some of its AI-based applications could fall into the high-risk category. The AI Act calls for firms to be fined for any violations, but the legislation never explains what would happen to a firm if a regulator were to disagree with its interpretation. One can only presume that a regulatory agency would be understanding, but that can’t be known for certain until a firm is actually fined. The EC’s new AI Office is mainly responsible for overseeing compliance with the AI Act for GPAI model providers, while national regulatory agencies are responsible for other AI applications.
C-suite executives will have to make some tough decisions quickly to meet the AI Act’s deadlines. Passed in August 2024, the AI Act must be implemented in phases beginning in February 2025. By then, all prohibited AI systems must have been eliminated and all employees using an AI-powered application must have become AI-literate. The legislation does not specify how to address AI literacy, but it is presumed that users of AI systems would understand the risk category of an application, how it works, and how to prevent the misuse of personal data. The rules for general purpose AI (GPAI) models will be effective beginning in August 2025 with August 2027 as a final deadline. As of February 2026, the obligations for providers and users of limited-risk systems come into play.
About 200 companies, including Accenture, Broadridge Financial, IBM, and Wipro, have signed a voluntary AI Pact to share best practices and pledge concrete actions to meet the AI Act’s requirements ahead of its deadlines. Others, such as Apple and Meta, have balked at following the agreement on the grounds that the AI Pact is more stringent than the legislation. “We have already implemented an AI governance policy to govern all of our AI applications and because they rely primarily on human oversight, we assigned them as limited risk,” explains Joseph Lo, head of enterprise platforms at technology and business outsourcing giant Broadridge Financial, headquartered in Lake Success, New York. The firm has multiple AI-based applications, but the most popular one in Europe, NYFIX Algo Co Pilot, permits trading desks of asset managers to select the best algorithm to reduce implicit trading costs.
Any strategy to comply with the EU’s AI Act will be based on the extent of a firm’s AI implementation, the risk category assigned to a particular AI-based application, whether the AI technology represents a general-purpose AI model, and whether a firm is a provider or a deployer. “Ultimately, to successfully comply with the EU AI legislation, financial firms must develop an AI governance model for implementing AI across the enterprise, monitor the type of data to be included, test continually and train on AI,” says Gaurav Kapoor, co-founder and chief executive of San Jose, California-based MetricStream, a SaaS software firm specializing in risk, compliance and governance technology. The number of a financial firm’s AI applications affected by the AI Act will depend on its size and regional reach.
Europe’s AI legislation defines AI in such a broad fashion that some legal experts believe it could apply to a large range of business software. AI is any “machine-based system” designed to operate with varying levels of autonomy and generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Guidance issued by the EC in February 2025 confirmed the far-reaching application. The AI Act is extraterritorial in that it applies to a provider or deployer if the AI’s output is used in Europe; it is irrelevant where the technology was developed. As a rule of thumb, providers under the AI Act are those which actually create and distribute AI technology, while deployers are those which use it. However, deployers can easily become providers under certain circumstances. Providers will have more responsibilities than deployers, with providers offering high-risk applications bearing the heaviest legal onus for ensuring their AI technology works properly and is fully transparent. As a result, they are also more likely to be susceptible to regulatory fines for violating the EU’s legislation, legal experts tell FinOps Report.
Although other European regulations such as the General Data Protection Regulation (GDPR) and the second incarnation of the Markets in Financial Instruments Directive (MiFID II) do touch on AI, the AI Act is the first law dedicated specifically to the responsibilities of those providing and using AI. The GDPR applies only to the use of personal data in AI and to any automated decision-making functions such as profiling. The GDPR prohibits a firm from using personal data within an AI application or from using AI to make a decision without the explicit consent of the consumer, who must fully understand how it works. Last year the European Securities and Markets Authority, the pan-European regulatory watchdog, recommended that asset managers keep in mind that they should fulfill the requirements of MiFID II when using AI-based technology. MiFID II requires continual human oversight of algorithmic trading, a service typically relying on AI.
So far, the United Kingdom has relied on principles-based guidance for the development and use of AI. There is no federal legislation affecting the development and use of AI in the US, although there is plenty of regulatory guidance from multiple agencies. The US Securities and Exchange Commission only requires firms to disclose their use of AI to their investors. Several states have come up with their own laws on AI, with Colorado considered to be at the forefront. “Given that many large global financial firms have operations in Colorado and the European Union, it stands to reason that those preparing to comply with Colorado’s legislation will have a leg up in meeting the requirements of Europe’s AI Act,” says Tyler Thompson, a partner focused on data privacy at the law firm of Reed Smith in Denver. “There is sufficient overlap that compliance with one regulation assists with compliance for the other.” The two key differences between the EU’s and Colorado’s legislation are that the Colorado act impacts only providers and deployers of high-risk AI, and that it affects only firms with AI users in that state.
Although most of the provisions of Europe’s AI legislation deal with AI systems, the EC at the 11th hour decided to include an entirely separate designation for general purpose AI (GPAI) models to take ChatGPT into account. ChatGPT is a type of generative AI that can create images, text and code. It was launched in November 2022, while the EC and Parliament were drafting the final text of the Act, and reportedly caused the most controversy. The AI legislation defines GPAI models as those that can perform multiple tasks and can be integrated into multiple downstream systems or applications. AI models are a component of AI systems and serve as the engines that drive their functionality. AI models require the addition of more components, such as user interfaces, to become AI systems.
The responsibilities for GPAI models fall primarily on their providers and overlap with those of providers of high-risk systems. Providers of standard GPAI models are required to maintain technical documentation, provide a summary of the content used for training, and follow the EU’s Copyright Directive. Those with systemic risk must also conduct model evaluations to mitigate systemic risk, prevent cybersecurity breaches, and report any hacking. GPAI models with systemic risk are those with high impact, meaning they were trained using computational power above a predefined threshold; alternatively, the European Commission can decide that a GPAI model carries systemic risk. ChatGPT falls under the category of a GPAI model without systemic risk. (A full description of the requirements for GPAI models can be found in Article 53 of the EU’s AI Act, and further clarification was provided in April.)
In addition to maintaining technical documentation, providers of high-risk systems must identify those systems and report on how they work to an EC-owned database. They must also ensure human oversight of the systems, conduct continual testing, and complete a conformity assessment, which confirms that a system meets the EU’s requirements before it can be sold on the continent. Deployers, or users, of high-risk systems must comply with their providers’ instructions on how to use their applications, including which types of data they are permitted to use. They must also explain how the high-risk systems work to their employees. Providers and deployers of limited-risk systems have shared responsibilities in that they must give regulators full transparency on how the technology was created and how it works. Users of AI-based applications must also know they are interacting with AI technology.
A firm can decide its AI system is exempt from the high-risk category by proving it will not harm the health, safety, or fundamental rights of natural persons. The text of the AI legislation says that the presumption of high risk can be rebutted if the AI performs narrow procedural tasks, improves the result of a previously completed human activity, detects decision-making patterns or deviations from prior decision-making patterns, or performs preparatory tasks for an assessment relevant to the use cases listed in Annex III of the legislation.
However, the exception cannot be used if the AI system profiles natural persons because profiling carries the risk of harm.
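To make that exemption logic concrete, here is a minimal, hypothetical Python sketch of the rebuttal test described above. The function name and flags are illustrative assumptions drawn from the article’s wording; in practice this would be a documented legal assessment, not a boolean check.

```python
def may_rebut_high_risk(
    performs_narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_decision_patterns_or_deviations: bool,
    performs_preparatory_task_only: bool,
    involves_profiling_of_natural_persons: bool,
) -> bool:
    """Rough sketch of the rebuttal conditions described in the AI Act's text.

    Returns True if a firm could argue the Annex III presumption of high risk
    does not apply. Profiling of natural persons always blocks the exemption.
    """
    if involves_profiling_of_natural_persons:
        return False  # profiling is treated as carrying a risk of harm
    return any([
        performs_narrow_procedural_task,
        improves_prior_human_activity,
        detects_decision_patterns_or_deviations,
        performs_preparatory_task_only,
    ])

# Example: a hypothetical tool that only pre-sorts loan documents, with no profiling
print(may_rebut_high_risk(False, False, False, True, False))  # True
```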
Any good AI governance program will need to start with the basics — finding which departments are using AI, for which purpose, how the technology works, and who has access. Inventory management of AI applications won’t be easy unless there is a centralized database to store and update all the information. “The largest financial firms could easily have several hundred applications using AI and have a sophisticated methodology to track their use of specialized technology platforms,” says Rupert Brown, chief technology officer for Evidology Systems, a London-headquartered regulatory compliance management firm. “However, that may not be the case for mid to small-sized firms with only a handful of systems using AI.” Firms which have kept track of the use of AI applications in separate business units will need time to consolidate the information, which includes the type of data used.
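As a rough illustration of those basics, a centralized inventory could store one record per AI application. The fields in the sketch below are assumptions based on the article’s list (department, purpose, provider, access, data types), not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIApplicationRecord:
    """One row in a hypothetical centralized AI inventory."""
    name: str                      # internal name of the application
    business_unit: str             # which department is using it
    purpose: str                   # what the application is used for
    provider: str                  # vendor name, or "internal" if built in-house
    data_types: list[str] = field(default_factory=list)    # e.g., ["personal", "trade"]
    users_with_access: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # filled in after the governance review

# Example entry for a hypothetical customer-onboarding chatbot
record = AIApplicationRecord(
    name="onboarding-assistant",
    business_unit="front office",
    purpose="customer onboarding and service",
    provider="third-party vendor",
    data_types=["personal"],
    users_with_access=["onboarding-team"],
)
```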
After a financial firm correctly determines the number of its AI applications affected by Europe’s AI Act, it must decide under which category of risk those applications fall. Asking the provider of a system for the answer would be a good start, and several AI technology providers attending the Legalweek event in New York told FinOps Report they believe their clients will follow their recommendations. “The internal IT department, business departments, and the risk department would add their two cents to categorizing the risk level of an AI application by creating a virtual AI governance committee of sorts which would fall under the jurisdiction of the chief risk officer or chief information security officer,” says MetricStream’s Kapoor, whose firm offers software to manage AI models. Such a committee would also tackle the full range of obligations under the AI legislation across multiple departments. “The requirements for technical documentation, performance metrics, and cybersecurity protection would fall under the IT department while the legal department would ensure that AI systems comply with data protection rules,” suggests Exterro’s Perry. “Human oversight of AI models and applications could be handled by individual business units.” An impromptu survey of attendees at the Legalweek gathering showed that the CISO is the favored candidate to oversee an AI governance committee, as he or she would have the most experience with both AI and data security. Some firms have formalized the title of AI governance director.
So far, the uncertainty over how to apply the AI legislation to the financial services industry seems to have created the most dissent when it comes to assigning a risk category. “Generally speaking, most financial firms are unlikely to use AI applications that fall under the high-risk category,” says Christian Geyer, chief executive officer of Reston, Virginia-based Actfore. “In practice, financial firms tend to deploy open or third-party general purpose AI models, so the compliance burden of documentation and training often rests with the software provider.” Several other AI technology providers, and even US attorneys focused on AI and data privacy regulations attending Legalweek, agreed. “The decision on the level of risk should begin with a process of elimination which means that if it isn’t high risk based on Annex III of the AI Act, the application could be classified as limited risk,” says attorney Brenda Leong, a director at ZwillGen, a Washington, DC-based technology law boutique. The websites of several US-based law firms reviewed by FinOps Report, which offered short tests to determine the risk level of a particular application, showed a preference for the limited-risk category.
However, compliance managers at several global financial firms, speaking with FinOps Report on the condition of anonymity, say they will classify AI used in algorithmic trading, portfolio management, and predictive analytics as high-risk applications because they are easily subject to mistakes which could result in major financial losses. In the case of anti-money laundering (AML) technology using AI, opinions varied widely. As written, the AI Act appears to exempt AML compliance tools from the high-risk category because they involve finding fraud or criminal activity. However, the websites of several AML technology firms suggest that AML applications using AI should follow the rules for high-risk AI systems regardless of their formal designation.
If the goal of the European Commission is to protect citizens from any harm, regulators could easily conclude that all AI applied in the financial services industry is high-risk, in which case a financial services firm would have to follow the most onerous requirements. Whether that designation could be reduced to limited risk through greater use of human oversight is subject to interpretation. While some AI software providers believe that using human oversight might allow a firm to rely on a lower risk profile, legal experts aren’t so sure. “The EU AI Act classifies an AI system based on its intended use and content, not on technical safeguards or mitigation,” says Elisabetta Righini, a partner in the law firm of Sidley Austin in Brussels specializing in AI legislation. “Human oversight is just one of the mandatory compliance requirements.”
Even fulfilling the EU AI Act’s transparency requirements for limited-risk systems won’t be easy. “Financial firms must maintain clear documentation explaining how AI is used, why it doesn’t meet the high-risk criteria, and how it functions,” says Geyer, whose firm provides AI/ML-powered data mining software for forensic analysis. If the system is developed internally, the legal, compliance, risk and IT departments will also have to be involved in determining how to write the best explanation. If the system was initially developed by a third-party provider, a financial firm will likely need to rely on the provider’s explanations. “Contracts between providers and deployers may need to be reviewed to ensure that a provider is obligated to provide a deployer with sufficient information to allow the deployer to meet the EU’s AI transparency specifications,” recommends Reed Smith’s Thompson. “Addendums might need to be added to ensure that providers give deployers the complete information.” In the case of a provider of a high-risk system, that information includes key features of the AI that many existing contracts lack, such as the AI’s design, training data, algorithms used, and requirements for human oversight. Broadridge’s Lo says that his firm has already provided deployers of its Algo Co Pilot application with complete transparency over how it works and the data used.
Although it is likely that all financial service firms would meet the criteria for deployers of AI technology, it is less certain when they would also qualify as providers. “The specific criteria for reclassifying a deployer as a provider are still grey,” cautions Leong. According to Article 25 of the EU AI Act, a deployer may become a provider under several conditions which appear to affect only high-risk systems: when a deployer puts its name or trademark on a high-risk system, makes a substantial modification to a high-risk system, or modifies a non-high-risk system so that it becomes a high-risk system. Because financial firms often customize third-party software for internal use, it stands to reason they could easily be reclassified as providers based on the legislation’s current wording. Therefore, say legal experts, it is critical for contracts to also indicate how much liability a provider and a deployer of an AI application would each bear and how that liability would change if a deployer becomes a provider. The same applies when a GPAI model is used and altered down the line by other firms.
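The Article 25 conditions lend themselves to a simple checklist. The sketch below is a hypothetical illustration of that checklist, not legal advice; the parameter names are assumptions drawn from the circumstances listed above.

```python
def deployer_becomes_provider(
    rebrands_high_risk_system: bool,        # puts its own name or trademark on the system
    substantially_modifies_high_risk: bool, # makes a substantial modification to a high-risk system
    modification_creates_high_risk: bool,   # turns a non-high-risk system into a high-risk one
) -> bool:
    """Checklist-style reading of the Article 25 reclassification triggers."""
    return any([
        rebrands_high_risk_system,
        substantially_modifies_high_risk,
        modification_creates_high_risk,
    ])

# Example: a firm that customizes a vendor's credit-scoring model and ships it
# under its own brand would likely trip the first trigger.
print(deployer_becomes_provider(True, False, False))  # True
```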
The good news for financial service firms forced to comply with the AI Act is that the EC has asked for industry feedback on how the regulation should apply to the financial industry, and the US government is lobbying against some of its more stringent requirements. More specific guidelines could reduce potential violations. Penalties also depend on the severity of the infraction. Financial firms which do not eliminate the illegal use of AI will face the highest penalty: the greater of €35 million or seven percent of worldwide revenues for the year prior to the violation. Those which do not follow the requirements for high-risk systems face a fine of the greater of €15 million or three percent of worldwide turnover. Those which provide incorrect information to an EU regulator face a penalty of the greater of €7.5 million or one percent of annual worldwide revenue. For GPAI providers, the maximum penalty is the higher of €15 million or three percent of annual revenues. The EC recently withdrew a proposal to adopt an AI liability act which would have eased the burden of proof for individuals harmed by high-risk AI systems to sue providers.

Legal experts caution that regardless of the amount of work a financial service firm must initially complete to fulfill the requirements of the EU’s AI Act, it must remember that its efforts will be ongoing. The risk classification of an AI-based application could change over time, and deployers could be reclassified as providers. “Technology is evolving and financial firms must continually monitor their current and future uses of AI,” says Sidley’s Righini. “Complying with Europe’s AI Act won’t be a one-time compliance exercise.”
Reprinted with permission from FinOps Report. Further duplication without permission is prohibited. All rights reserved.