Artificial Intelligence

U.S. Treasury Department Publishes AI Guidance for Financial Services

Published: Mar. 04, 2026

On February 19, 2026, the U.S. Department of the Treasury released two new, non-binding financial-sector AI resources: the Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). 

These materials fall under “soft law” for risk management: they do not create new legal obligations, but they aim to standardize how financial institutions address and document AI risk governance. For companies using AI in regulated financial services, these resources are likely to become an important reference in examinations, internal audit expectations, third-party oversight, and contract negotiations, even where no regulator expressly incorporates them.

The AI Lexicon: Definitions, With Legal Caveats

The AI Lexicon creates a common taxonomy. It is intended to align terminology across technical, risk, legal, and operational stakeholders by providing a shared vocabulary for AI in financial services. 

This may have a downstream contracting impact. Common definitions can help reduce ambiguity, such as in vendor due diligence questionnaires, statements of work, service descriptions, incident reporting, and audit provisions. They give parties a common starting point for terms such as “hallucination,” “prompt injection,” “guardrails,” “AI governance,” and “third-party AI risk” in the financial-services context.

However, the Lexicon is explicit in limiting expectations of legal reliance. It is an optional tool, not controlling for legal interpretation, regulatory oversight reports, supervisory statements, international agreements, or private contracts. It should not be applied as if it reflects official Treasury views.

Parties should not treat Lexicon terms as dispositive of regulatory definitions and should resist any argument that a Lexicon definition establishes a binding standard of care. Nevertheless, in practice, clients should expect the Lexicon to be used as persuasive support for what constitutes reasonable AI governance. It is likely to be used as a default starting position and incorporated into emerging best practices.

Financial Services AI RMF: Sector-Specific Version of the NIST AI RMF

The second resource is the FS AI RMF, an industry-led, financial sector–specific risk management framework, developed through public-private collaboration involving inputs from more than 100 financial institutions and government agencies (including NIST). It is structurally aligned to the NIST AI RMF but applied to a financial-services implementation model. It is framed to address the concern that accelerated AI adoption is creating complex risks that are not clearly addressed by traditional risk frameworks, particularly for emerging AI such as agentic systems. But it is expressly offered as a complement to existing risk programs rather than a replacement, emphasizing integration with existing global standards. 

The FS AI RMF is composed of four coordinated components: an AI Adoption Stage Questionnaire, a Risk and Control Matrix (RCM), an implementation Guidebook, and a Control Objective Reference Guide. 

How Companies Should Use the Four Sections

Like the NIST AI RMF, the FS AI RMF is designed to be used as a structured implementation path rather than a compliance checklist. Organizations will first classify their AI maturity using the AI Adoption Stage Questionnaire; then apply the Risk and Control Matrix (RCM) to identify the Control Objectives most relevant to their stage; and finally, implement and document controls using the Guidebook and the companion Control Objective Reference Guide. 

This staged approach focuses on proportionality. The Treasury highlights that the Framework is meant to be adaptable, to allow smaller institutions to focus on the controls that match their adoption stage and resources. 

The Questionnaire is a structured self-assessment that places an organization into one of four adoption stages: Initial, Minimal, Evolving, or Embedded. The categorization is based on business impact, technology implementation, and scalability, and it is intended to help firms tailor which Control Objectives are most relevant at their given maturity level. The adoption stage also drives scope: each successive stage encompasses a larger subset of the complete list of Control Objectives.
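The stage-based scoping described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the four stage names come from the Framework, but the objective identifiers and their stage assignments are invented for the example and do not reflect the actual RCM contents.

```python
# Illustrative sketch of adoption-stage scoping. Stage names are from the
# FS AI RMF; the Control Objective entries below are hypothetical.
from enum import IntEnum

class AdoptionStage(IntEnum):
    INITIAL = 1
    MINIMAL = 2
    EVOLVING = 3
    EMBEDDED = 4

# Each objective is tagged with the earliest stage at which it applies.
CONTROL_OBJECTIVES = {
    "GV-01 AI governance policy approved": AdoptionStage.INITIAL,
    "MP-04 AI use cases inventoried": AdoptionStage.MINIMAL,
    "MS-07 Bias testing on production models": AdoptionStage.EVOLVING,
    "MG-12 Agentic-system escalation procedures": AdoptionStage.EMBEDDED,
}

def in_scope(stage: AdoptionStage) -> list[str]:
    """Stages are cumulative: each stage includes all earlier-stage objectives."""
    return [obj for obj, min_stage in CONTROL_OBJECTIVES.items()
            if min_stage <= stage]

scoped = in_scope(AdoptionStage.MINIMAL)
# A "Minimal" firm sees Initial- and Minimal-stage objectives, not later ones.
```

The cumulative structure is the key design point: a firm at the Evolving stage inherits every Initial- and Minimal-stage objective in addition to its own.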

The RCM organizes Control Objectives that mirror the NIST AI RMF taxonomy (Functions, Categories, and Sub-Categories), so that firms can map, prioritize, and document AI controls consistently across governance, risk, and technical teams. Each Control Objective is aligned to an AI Trustworthy Principle and a broad Risk Statement (which the Framework recognizes may be further customized to the firm’s own risk taxonomy).

The Framework also follows NIST’s four-function model (Govern, Map, Measure, and Manage) and provides a control inventory across those domains. Finally, the Control Objective Reference Guide is designed as a “one-stop” detail layer for the 230 objectives, including NIST alignment, adoption-stage applicability, risk/principle mapping, implementation guidance, and examples of controls and evidence artifacts. The Guide includes disclaimers noting that its examples are illustrative and offer a general approach, not a guarantee of meeting regulatory or audit expectations; organizations are expected to adapt controls and evidence to their own operational and regulatory environment.

Legal and Regulatory Significance

Although these resources are non-binding, the Framework is explicitly positioned to integrate with legal and compliance processes. FS organizations should treat them as likely to influence supervisory expectations, internal audit baselines, and third-party management norms. Within the “Govern” function, a control objective calls for identifying, monitoring, and integrating applicable laws, regulations, contractual obligations, and sector requirements into AI governance artifacts and operations as requirements evolve. 

The FS AI RMF is particularly positioned to influence risk governance because it is designed to address a recognized “gap” between general AI frameworks and the sector’s need for practical, scalable implementation guidance. For companies, then, this is less about adopting a new rule and more about ensuring that AI deployments are controlled, auditable, and aligned with existing safety, soundness, consumer protection, and operational resilience obligations. This makes the FS AI RMF a practical template for demonstrating a defensible compliance posture.

AI risk often aligns with legal priorities at the intersections of organizational functions: vendor procurement, data licensing and provenance, subcontractor access, model updates, and incident communications. The FS AI RMF devotes substantial attention to third-party AI risk management, including due diligence that incorporates data provenance, secondary data use, and intellectual property considerations, and emphasizes negotiating contractual provisions to protect against data misuse and to secure data rights. 

The Guidebook frames the Framework as covering AI risks “throughout the supply chain,” explicitly including third-party providers. It also contemplates fourth-party disclosure and termination rights, reflecting an expectation that AI supply chains require layered accountability. 

Documentation, Evidence, and Defensibility

A recurring theme is that AI governance must be demonstrable. The Guidebook prescribes centralized documentation, defined documentation standards, and monitoring and review processes suitable for oversight and audit. That aligns with the reality that the most common compliance failure mode in AI governance is not a technical shortcoming, but the inability to show what was decided, by whom, on what basis, with what testing, and under what monitoring.

The Control Objective Reference Guide reinforces an important point for FS companies building an “audit-ready” program: examples of controls and evidence are not a guarantee of regulatory sufficiency, and organizations are expected to tailor both controls and supporting evidence to their own use case and context. An institution that uses the FS AI RMF should be able to describe a risk-based program aligned to the Framework’s structure, with clear scoping and staged implementation, and with documented rationales for any deviations or prioritization decisions.

Who This Applies To

The FS AI RMF is expressly designed for “all financial institutions,” regardless of size, type, complexity, or criticality, and it is also framed as usable by their third-party providers to manage AI-related risks across the supply chain. Practically, the Framework will be most relevant for institutions and service providers that deploy AI in ways that are external-facing, use sensitive or regulated data, materially affect customers or market outcomes, or support critical business processes.

Organizations that are not “AI builders” but procure AI-enabled products should also pay attention, because the Framework expressly contemplates identification and risk management of purchased AI and “shadow” systems within the AI inventory and governance process.

What Target Organizations Need to Do

Because the FS AI RMF is structured around maturity and scalability, “what to do” should start with scoping and prioritization rather than attempting to implement all 230 objectives at once. The Framework’s intended workflow is to determine the organization’s current (and desired) AI Adoption Stage using the Questionnaire, then filter and prioritize the relevant Control Objectives within the RCM accordingly.

The first step is governance integration with law, regulation, contracts, and sector obligations. The Guidebook’s “AI Legal, Regulatory, and Policy Integration” control objective is explicit that organizations should identify, monitor, and integrate applicable legal and regulatory requirements and contractual obligations into AI governance records and operational practices, and verify ongoing compliance as requirements evolve.

The second is updating enterprise risk management and model risk management processes to address AI-specific risks. The Guidebook includes control objectives calling for updated risk assessment methods consistent with ERM and Model Risk Management, AI-specific monitoring and escalation processes aligned with risk appetite, and structured AI documentation standards and repositories suitable for oversight and auditability. These controls emphasize lifecycle oversight rather than point-in-time evaluations.

The third step is inventory and transparency. The Control Objective Reference Guide anticipates a central AI inventory that captures not only internally developed systems but also AI components embedded in purchased products and shadow deployments. For many organizations, this may be a challenging step because it requires coordination across procurement, IT asset management, business lines, and vendor management.

The fourth is third-party AI risk as a continuous obligation. The Reference Guide includes implementation expectations around monitoring third-party AI risks and benefits throughout the AI lifecycle and tracking risk/performance metrics such as data quality, model accuracy, security incidents, and compliance, particularly where third-party resources are material to AI operations. This aligns with longstanding FS expectations: outsourcing services does not outsource accountability.

Finally, the control-and-evidence model should be treated as a starting point, not a checklist. The Framework repeatedly emphasizes tailoring, and the Control Objective Reference Guide cautions against presuming that all example controls and evidence artifacts are either required or sufficient for regulatory or audit purposes. Covered organizations should align to their risk profile, adoption stage, product footprint, and regulator expectations, and they should document those decisions.

How it Aligns With Existing Guidance and Regulations

The FS AI RMF is best understood as a sector-specific layer on existing guidance rather than a replacement. It is built on the NIST AI RMF taxonomy and explicitly maps to NIST functions, categories, and sub-categories. Institutions that already use NIST-based governance can extend their control environment without building an entirely new structure. In addition to the NIST AI RMF 1.0 and related NIST resources, it aligns with U.S. supervisory Model Risk Management guidance (SR 11‑7) and a range of other standards and regimes that frequently appear in cross-border or multi-regulator compliance programs, including the EU AI Act and FFIEC guidance, among others.

It may actually function as a harmonization tool. Institutions often struggle to reconcile model risk management controls, cybersecurity controls, third-party risk controls, and emerging AI governance principles into a coherent operating model. The FS AI RMF’s objective is to create a common control language that can be mapped to existing risk frameworks and used to reduce fragmentation across compliance silos.

For the Lexicon, its most likely impact is internal: improving cross-functional communication and reducing ambiguity in AI risk discussions. However, since the Lexicon disclaims use for legal interpretation of regulations and private contracts, institutions should be cautious about treating its definitions as “plug-and-play” contract terms without adapting them to the specific risk allocation, warranty, audit, confidentiality, IP, and liability constructs in the relevant agreement.

Where clients are negotiating AI procurement or renewal agreements in 2026, these new tools provide a structured basis for the coherent, comprehensive approach that regulators increasingly expect to see: visibility into vendor model and data practices, audit and testing rights, update and change controls, incident response coordination, and clear allocation of responsibilities under a shared responsibility model.