Virtova services · By Sultan Meghji

AI compliance for financial services

AI compliance for U.S. banks, insurers, asset managers, and PE-owned financial firms: operational alignment to interagency guidance, fair-lending obligations, BSA/AML, and the AI-specific rulebook now landing on top of all of them.

AI compliance in financial services in 2026 is not a single rule. It is several rulebooks layered on top of each other: interagency model-risk guidance, third-party risk, fair-lending and consumer-protection regimes, BSA/AML, the NIST AI RMF as a non-binding shared vocabulary, the EU AI Act for cross-border firms, and a quickening pace of supervisory letters as the agencies extend existing frameworks to AI systems. Engagements that treat compliance as a single workstream miss most of the surface that actually generates findings.

Sultan Meghji leads Virtova engagements personally. As inaugural Chief Innovation Officer of the U.S. FDIC, Sultan was inside the agency as supervisors began to reckon with how the existing rulebook applied to AI in banking. The view from the regulator’s side of the table about which AI compliance programs hold up, and which collapse on first examination, sits at the center of how Virtova structures this work.

What this engagement looks like

A Virtova AI compliance engagement for a financial-services firm typically runs eight to twelve weeks for the diagnostic-and-build, with optional ongoing monthly oversight afterward. The standard scope covers seven threads.

Inventory and scoping. A current map of every AI system the firm operates that intersects regulation: credit decisioning, underwriting, fraud, marketing, customer operations, BSA/AML, capital and liquidity, asset management, third-party AI services. Most inventories in 2026 are still significantly incomplete; rebuilding the inventory is usually the first three weeks.
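To make the inventory concrete, here is a minimal sketch of what one entry and a scoping filter might look like. The field names, the `AISystemRecord` class, and the `regulatorily_significant` helper are illustrative assumptions, not Virtova's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One entry in a firm-wide AI inventory (illustrative fields only)."""
    name: str
    business_function: str                  # e.g. "credit decisioning", "fraud"
    model_type: str                         # "statistical", "ml", "generative", "agentic"
    vendor: Optional[str] = None            # None for in-house systems
    regulatory_touchpoints: List[str] = field(default_factory=list)  # e.g. ["ECOA", "BSA/AML"]
    in_production: bool = False

def regulatorily_significant(inventory):
    """Filter to systems that are live and touch at least one regulatory regime."""
    return [r for r in inventory if r.in_production and r.regulatory_touchpoints]

# Hypothetical entries, for illustration.
inventory = [
    AISystemRecord("credit-score-v4", "credit decisioning", "ml",
                   regulatory_touchpoints=["ECOA", "FHA"], in_production=True),
    AISystemRecord("doc-summarizer", "customer operations", "generative",
                   vendor="llm-vendor-x"),  # pilot, not yet in production
]
in_scope = regulatorily_significant(inventory)
```

Even a skeleton like this forces the questions that make inventories complete: which systems are live, which are vendor-supplied, and which regimes each one touches.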

Model-risk-management alignment. SR 26-2 alignment for banks (most relevant above $30 billion in total assets) for in-scope models: traditional statistical and quantitative models and non-generative, non-agentic AI models. SR 26-2 supersedes SR 11-7 and SR 21-8 and explicitly leaves generative and agentic AI out of formal scope. For those systems, the engagement stands up a parallel governance discipline that mirrors SR 26-2 rigor without being formally MRM-scoped, which is the structure footnote 3 of the guidance directs banks to use.

Fair-lending and consumer-protection posture. ECOA, the Fair Housing Act, UDAAP, and CFPB adverse-action reasoning expectations apply whether the credit or pricing decision was made by a human, a logistic regression, or a foundation model. Engagements scope disparate-impact analysis design, explainability documentation, and the adverse-action narrative the firm will use when the regulator asks how it makes decisions.
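One common first screen in disparate-impact analysis is the adverse impact ratio, compared against the four-fifths heuristic. The sketch below uses made-up portfolio numbers; the 0.8 cutoff is a screening convention, not a legal threshold, and real fair-lending analysis goes well beyond it.

```python
def adverse_impact_ratio(approved_a, total_a, approved_b, total_b):
    """Ratio of group A's approval rate to the reference group B's approval rate."""
    rate_a = approved_a / total_a
    rate_b = approved_b / total_b
    return rate_a / rate_b

# Hypothetical approval counts, for illustration only.
air = adverse_impact_ratio(approved_a=30, total_a=100, approved_b=50, total_b=100)
flagged = air < 0.8  # four-fifths screening heuristic, not the legal standard
```

A flagged ratio does not establish a violation; it marks where the explainability documentation and adverse-action narrative need to be strongest.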

Third-party risk. AI vendors (model providers, embedded-AI platforms, data labelers, evaluation services) sit inside the existing third-party-risk framework with elevated expectations. Engagements scope diligence, contractual risk allocation, ongoing monitoring, and the silent-model-swap risk that most contracts do not yet address.
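Silent-model-swap monitoring can be as simple as fingerprinting whatever version metadata the vendor exposes and alerting when it changes. The sketch below is an assumption about what such a check might look like; the metadata fields are invented, and real monitoring would also cover behavioral drift, not just reported versions.

```python
import hashlib
import json

def model_fingerprint(metadata: dict) -> str:
    """Stable hash of the version metadata the vendor reports."""
    canonical = json.dumps(metadata, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_for_swap(baseline_fp: str, current_metadata: dict):
    """Return (swapped?, current fingerprint) for a monitoring run."""
    current_fp = model_fingerprint(current_metadata)
    return current_fp != baseline_fp, current_fp

# Baseline captured at contract signing; fields are hypothetical.
baseline = model_fingerprint({"model_id": "scoring-model", "version": "2.3.1"})
swapped, _ = check_for_swap(baseline, {"model_id": "scoring-model", "version": "2.4.0"})
```

The harder problem is contractual: the check only works if the contract obligates the vendor to report version changes in the first place.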

BSA/AML and sanctions. AI used in transaction monitoring, customer due diligence, and sanctions screening sits inside an examination regime that does not bend gracefully. Compliance work here includes both the model-risk and the BSA-specific dimensions, plus the documentation that lets the firm answer “how does the AI work?” without freezing.

Cross-border exposure. Where the firm has European customers, EU operations, or output flowing into the EU, EU AI Act readiness is part of the scope. Most internationally active U.S. financial firms have material EU exposure through at least one product line.

Examination readiness. The supervisory narrative for the firm’s primary regulator: what the firm says when an examiner asks how AI is governed and where it sits in production. The narrative shapes the examination’s tempo more than firms expect.

What’s changing in the 2026 cycle

Three shifts are worth flagging for any financial-services firm scoping AI compliance work this year.

First, the SR 26-2 update is widely misread. The press summaries say it brings generative AI inside MRM; the actual text (footnote 3) does the opposite — generative and agentic AI are explicitly out of scope. The same footnote tells banks to use the SR 26-2 principles to govern those systems anyway. The practical upshot is that a bank’s vendor LLMs, generative summarization tools, and AI-assisted document review still need a documentation discipline that mirrors MRM, just on a parallel track. Most current programs have neither track properly built out.

Second, fair-lending and consumer-protection enforcement is getting more comfortable with AI. Adverse-action reasoning expectations under ECOA and the CFPB’s framework are firming up, and firms whose AI systems cannot produce a defensible adverse-action narrative are now finding the conversations harder than they were eighteen months ago.

Third, third-party-risk practice is catching up to silent-model-swap risk. The contractual and monitoring posture most banks signed in 2022–2024 around vendor AI is no longer the right posture for the platforms those vendors actually run today; the renewals coming up in this cycle are an opportunity to fix it without a separate program.

When the engagement is the wrong answer

AI compliance work is the wrong scope when the firm has no AI in regulatorily significant systems and is doing pre-emptive program work; at that point the better starting engagement is AI governance consulting. It is also the wrong scope when the gap is foundational (no model inventory, no risk appetite, no governance forum) because compliance is what comes after governance, not before.

Virtova will tell you which one fits on the discovery call.

Next step

Most engagements start with a 30-minute discovery call. Bring an honest read of where AI is in production, what the last examination flagged, and what the upcoming examination cycle is likely to focus on. We will tell you which thread is the most urgent and what a focused engagement looks like.

"AI compliance in financial services isn't a new program. It's the existing compliance program absorbing AI as an operating reality — and discovering, in the process, which parts of it were never as solid as the binder claimed."
— Sultan Meghji

Frequently asked

What is AI compliance for financial services?
AI compliance for financial services is the work of bringing AI systems used in banking, insurance, asset management, and adjacent regulated activities into compliance with the existing financial-services rulebook plus the AI-specific guidance now extending it. The scope spans interagency model-risk and third-party-risk guidance, fair-lending and consumer-protection obligations, BSA/AML, and the AI-specific layer added by NIST AI RMF and EU AI Act exposure.
How is AI compliance different from AI governance?
AI governance is the firm's program for managing AI risk holistically. AI compliance is the narrower, regulator-facing slice: making sure each AI system in scope satisfies the specific rules supervisors will examine against. The two engagements overlap meaningfully and are often run together; firms that treat compliance as a substitute for governance tend to fail audits, and firms that treat governance as a substitute for compliance tend to fail examinations.
Does Virtova handle AI fair-lending compliance?
Yes. AI used in credit, underwriting, pricing, marketing, and collections sits squarely inside ECOA, the Fair Housing Act, UDAAP, and CFPB adverse-action expectations. Virtova engagements include disparate-impact analysis design, explainability and adverse-action reasoning posture, and the documentation that supports a defensible regulatory conversation when AI is in the decisioning path.
What about AI compliance under the new model-risk guidance?
The April 17, 2026 SR 26-2 update supersedes SR 11-7 and SR 21-8 and is most relevant to banks with more than $30 billion in total assets. It explicitly excludes generative AI and agentic AI from formal scope (footnote 3) and applies its principles to traditional statistical and quantitative models and non-generative, non-agentic AI models. The same footnote tells banks to use those principles to guide governance for systems the guidance doesn't formally cover. Virtova aligns the firm's existing MRM program to SR 26-2 for in-scope models and stands up the parallel discipline for the generative and agentic AI systems that SR 26-2 deliberately leaves outside.
Who runs the engagement?
Sultan Meghji, personally. As inaugural Chief Innovation Officer of the U.S. FDIC, Sultan worked alongside the agency's policy and supervision functions on the AI extension of the existing financial-services rulebook. Specialist support (including former model validators, fair-lending counsel, and former bank examiners) is brought in by name when depth warrants and is always disclosed in writing.

Work with Virtova

Most engagements start with a 30-minute call.

Confidential by default. NDAs available on request.

Book a discovery call →