AI governance in U.S. banking is caught between two realities. The rulebook that supervisors will examine against has been in place for a decade and predates modern AI. The rulebook that banks actually need is being written now — in interagency statements, in NIST guidance, and in the consent orders coming out of this cycle of examinations. Sitting still inside that gap is the most expensive posture a bank can take.
I spent 2021 and early 2022 inside the U.S. Federal Deposit Insurance Corporation, building the agency’s first innovation division. Our remit covered AI, quantum computing, digital assets, digital identity, and cybersecurity for the U.S. banking system — roughly five thousand institutions and the third-party ecosystem sitting underneath them. The team grew to forty people and shipped the agency’s first tech-sprint and policy-sprint programs; day to day we served as the FDIC’s primary point of contact for the other executive-branch agencies, allied governments, and the banking system on questions that did not fit into any existing examination framework. The inside view of what the U.S. bank regulatory posture on AI could be, and the inside view of what it actually was, ended up being two different stories. I wrote about the gap publicly when I resigned, in a Bloomberg op-ed titled “I Quit as FDIC Innovation Chief Because of Regulators’ Technophobia” (Bloomberg Opinion, February 22, 2022; linked via the Internet Archive because the Bloomberg page sits behind a paywall). The core argument has held up: the U.S. banking supervisors have the legal authority to get ahead of AI risk; the constraint is cultural, not statutory.
Four years later, the picture is clearer but not friendlier. Here is the operator’s read.
What the U.S. rulebook actually asks of banks in 2026
There is no standalone federal “AI regulation” for U.S. banks. There is a stack of existing regulation that already applies to AI, and a set of newer, non-binding frameworks that supervisors increasingly expect to see operationalized.
On the binding side, the anchors are:
- Model risk management guidance. The 2011 interagency Supervisory Guidance on Model Risk Management (known as SR 11-7 at the Federal Reserve, OCC Bulletin 2011-12 at the OCC, and FIL 22-2017 at the FDIC), together with the 2021 BSA/AML extension SR 21-8, governed model inventory, validation, documentation, and ongoing monitoring for fifteen years. On April 17, 2026 the agencies issued SR 26-2 (with companion OCC Bulletin 2026-13 and an FDIC FIL), which supersedes both. SR 26-2 preserves the SR 11-7 framework with a more explicit risk-based, materiality-driven posture and is most relevant to banks above $30 billion in total assets. The widely circulated headline that SR 26-2 brings generative AI inside MRM is wrong: footnote 3 of the actual guidance explicitly excludes generative and agentic AI from scope, calling them “novel and rapidly evolving,” and tells banks to use the SR 26-2 principles to guide governance for systems the guidance doesn’t formally cover. If your firm runs any LLM in a customer-facing or decisioning path, that system isn’t inside MRM under SR 26-2 — but you are on the hook to govern it under a parallel discipline that mirrors MRM rigor.
- Third-party risk management. The June 2023 interagency Guidance on Third-Party Relationships is the frame for everything a bank does with an AI vendor. Supervisors expect diligence, contractual risk allocation, and ongoing monitoring proportionate to the criticality of the relationship. An LLM vendor powering a chatbot that handles customer disputes is not a low-criticality engagement.
- Fair lending and consumer protection. ECOA, the Fair Housing Act, UDAAP, and the CFPB’s adverse-action-reasoning expectations apply whether the credit decision was adjudicated by a human, a logistic regression, or a foundation model. Explainability is not optional here; it is a legal obligation.
On the non-binding side, the two frameworks you will see referenced in every examination conversation are:
- NIST AI Risk Management Framework 1.0 (AI RMF). Voluntary, but now the shared vocabulary across U.S. federal agencies, large customers, and increasingly the auditors who go in behind them. I wrote a longer regulated-industry playbook on the RMF that tracks the Govern / Map / Measure / Manage functions as operating muscles rather than checklist items. The July 2024 NIST Generative AI Profile (AI 600-1) extended the framework with specific risks and controls for generative systems, which is where most of the implementation gaps in banking sit right now.
- OCC heightened-standards expectations and interagency capital planning. When AI begins to materially affect the risk profile of a large bank, the supervisors’ expectations on governance, board oversight, and independent challenge step up accordingly. The newer generative-AI deployments in large banks are now material, even if the quarterly risk reporting hasn’t caught up.
Taken together, the U.S. posture is: the old rulebook already applies, the guidance is being updated to make that application explicit, and the non-binding frameworks are where supervisors are actually calibrating their expectations. Treating the non-binding stack as optional is how banks get caught flat-footed.
Why the EU AI Act matters for a U.S. bank
A lot of U.S. bank boards have been told the EU AI Act is a European problem. For most large U.S. banks, and for a good chunk of the mid-size banks, this is wrong in a specific and expensive way.
The Act applies extraterritorially. A U.S. bank that offers services to customers in the EU, or whose AI system produces output that is used in the EU, is in scope for the relevant obligations. Most internationally active U.S. banks meet one of those triggers through at least one product line. The high-risk classifications around credit scoring, employment decisions, and essential-services access map directly onto common U.S. banking use cases — consumer credit, hiring, and collections.
The sharper operational point is that the EU Act is forcing documentation discipline that has no direct U.S. analogue yet, and auditors coming through a U.S. bank’s EU-facing workflows are already asking for it. Banks that have built their Act-compliant documentation stack end up ahead on the domestic audit side almost as a side effect. Banks that have not are usually discovering the gap mid-examination.
The CAT signal
One more recent data point worth flagging for risk committees. In April 2026, Bloomberg covered a group’s report on the Mythos vulnerability in the SEC’s Consolidated Audit Trail (CAT); see the press excerpt on the Virtova homepage. The quote that ran was mine:
> Even de-identified, CAT data is a strategic asset — for traders seeking alpha and for adversaries seeking market intelligence. AI has changed the economics of exploiting it: what once took a nation-state team can now be done on a single laptop with open-source tools.
The point I would bring into a bank board conversation out of that is not about CAT specifically. It is that the cost of exploiting large, semi-public financial datasets has fallen by two orders of magnitude in the last three years. Bank data inventories, BSA/AML outputs, and model training corpora have all quietly become more attractive targets. Governance programs that treat “AI risk” as purely a model-output problem are missing half of the surface.
What to do between now and the next exam cycle
If I were sitting in a U.S. bank risk-committee seat today, the six things I would want evidence of before the next exam cycle are concrete and operational.
- A named, senior accountable executive for AI risk. Chief Risk Officer, Chief AI Officer, or — in smaller firms — the CIO or CTO wearing the hat explicitly. Not a committee. Supervisors will ask who owns this, and the answer needs a name.
- A current model inventory that includes generative systems. Most inventories I see in 2026 are still pre-LLM. The vendor-provided LLMs inside the contact center, the coding-assistant models in engineering, and the marketing copy generators all belong on the inventory; a minimal sketch of what an entry can look like, with its tier bound to a checkable control, follows this list.
- Tier classification with real teeth. Tier 1 gets independent validation. Tier 2 gets periodic review. Tier 3 gets attestation. Most firms have the labels but not the differentiated process behind them.
- A risk appetite statement that says specific things. “We will use AI responsibly” is not a risk appetite statement. “We will not deploy AI in Tier 1 credit-decisioning without human adjudication for the first 24 months of production” is. Commit to the specific sentences.
- An incident playbook you have actually run. Tabletop a model-drift incident, a vendor model-swap incident, and a regulator-facing disclosure incident. The first thirty minutes is where the job gets done or not.
- A written view on the EU AI Act if any cross-border exposure exists. Even a two-page memo is better than the current absence. Auditors coming into EU-facing lines of business will ask.
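To make the inventory and tiering items concrete, here is a minimal sketch in Python, assuming a tooling stack you build yourself; every name and field is illustrative rather than drawn from any supervisory schema. It encodes two things: generative systems carry an explicit flag for the parallel-to-MRM governance track that SR 26-2’s footnote 3 implies, and each tier label is bound to the control evidence that has to exist on file.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    TIER_1 = 1  # independent validation
    TIER_2 = 2  # periodic review
    TIER_3 = 3  # attestation

# Illustrative mapping of each tier to the control evidence
# that must exist on file for the tier label to mean anything.
REQUIRED_CONTROL = {
    Tier.TIER_1: "independent_validation",
    Tier.TIER_2: "periodic_review",
    Tier.TIER_3: "attestation",
}

@dataclass
class InventoryEntry:
    system_name: str             # e.g. the contact-center LLM
    owner: str                   # a named executive, not a committee
    tier: Tier
    generative: bool             # True for LLM / generative systems
    in_mrm_scope: bool           # False for generative AI per SR 26-2 footnote 3
    controls_on_file: list[str]

    def governance_track(self) -> str:
        # Generative systems sit outside formal MRM scope but still
        # need a parallel discipline that mirrors MRM rigor.
        return "mrm" if self.in_mrm_scope else "genai-parallel"

    def control_gaps(self) -> list[str]:
        required = REQUIRED_CONTROL[self.tier]
        return [] if required in self.controls_on_file else [required]

# A vendor LLM handling customer disputes: Tier 1, validation missing.
entry = InventoryEntry(
    system_name="contact-center-llm",
    owner="Chief Risk Officer",
    tier=Tier.TIER_1,
    generative=True,
    in_mrm_scope=False,
    controls_on_file=["attestation"],
)
print(entry.governance_track())  # genai-parallel
print(entry.control_gaps())      # ['independent_validation']
```

The design choice worth copying is not the schema; it is that the tier and its matching control live in the same record, so a missing Tier 1 validation surfaces as a query result rather than an exam finding.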
None of this is novel. All of it is still missing in the median engagement.
Where Virtova comes in
Virtova’s fractional and advisory work on this stack is built for U.S. banks and the institutions sitting adjacent to them. The two most common engagement shapes are:
- A fractional Chief AI Officer engagement — three to twelve months, embedded alongside the existing leadership team, owning the governance program through its first audit or examination cycle, ending in a named permanent hire.
- An AI governance and regulatory readiness assessment — a four-to-six-week diagnostic against NIST AI RMF, the new interagency model-risk guidance, and EU AI Act exposure, producing a prioritized gap list (a sketch of the record shape follows this list) and a written scope for the twelve months that follow.
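To show what that deliverable can look like, here is a minimal sketch of a gap record keyed to the NIST AI RMF’s Govern / Map / Measure / Manage functions; the field names and example findings are my illustrative assumptions, not a Virtova artifact.

```python
from dataclasses import dataclass

@dataclass
class GapRecord:
    rmf_function: str    # one of: Govern, Map, Measure, Manage
    finding: str
    priority: int        # 1 = close before the next exam cycle
    owner: str
    target_quarter: str

# Example findings, ordered so exam-critical items surface first.
gaps = [
    GapRecord("Measure", "No drift monitoring on Tier 2 models", 2, "Head of MRM", "Q4 2026"),
    GapRecord("Map", "Vendor LLMs absent from the model inventory", 1, "CRO", "Q3 2026"),
    GapRecord("Govern", "No named accountable executive for AI risk", 1, "Board", "Q3 2026"),
]
for g in sorted(gaps, key=lambda g: g.priority):
    print(f"[P{g.priority}] {g.rmf_function}: {g.finding} -> {g.owner}, {g.target_quarter}")
```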
Both run through discovery calls before anything else. The honest version of this conversation is faster than the brochure version.
For more regular writing on where U.S. AI and financial regulation are actually going, the Sultan Meghji Substack is the primary channel — 13,000+ subscribers, weekly frequency, unhedged. The Frontier Foundry Substack covers the product side.
Sultan Meghji founded Virtova in 2009. He served as the inaugural Chief Innovation Officer of the U.S. FDIC and is the Co-Founder and CEO of Frontier Foundry Corporation. More at sultanismyname.com.