The 2011 interagency Supervisory Guidance on Model Risk Management (known as SR 11-7 at the Federal Reserve, OCC Bulletin 2011-12 at the OCC, and FIL-22-2017 at the FDIC) is one of the most consequential documents in U.S. bank supervision. For fifteen years it defined how U.S. banks inventory, validate, document, and govern the models they use to make decisions. On April 17, 2026 the agencies finalized SR 26-2, which supersedes both SR 11-7 and SR 21-8 (the 2021 BSA/AML model-risk extension) and is most relevant to banks with more than $30 billion in total assets. SR 26-2 preserves the structural framework of SR 11-7 while making its risk-based, materiality-driven posture more explicit.
The change banks need to read carefully is what is in scope and what isn’t. SR 26-2 explicitly excludes generative AI and agentic AI from its formal scope (footnote 3): the agencies describe these technologies as “novel and rapidly evolving.” The guidance applies to traditional statistical and quantitative models and to non-generative, non-agentic AI models. The same footnote tells banks to use the SR 26-2 principles to guide governance and controls for the tools, processes, and systems the guidance doesn’t cover. In practice, that means a bank’s generative-AI program lives in a parallel governance discipline that mirrors MRM rigor without being formally MRM-scoped. Most banks have not yet built that parallel discipline. Engagements that pretend generative AI is inside MRM scope or that it has no governance obligation at all will both fail the next exam cycle.
Sultan Meghji leads Virtova engagements personally. Sultan’s FDIC tenure included direct work alongside the agency’s model-risk and supervision functions as the AI extension of the existing MRM stack came into focus. The experience that informed that work (what supervisors actually look for when they examine an MRM program, where the program collapses, and what the firm’s narrative needs to look like) sits at the center of how Virtova structures this engagement.
What this engagement looks like
A Virtova MRM consulting engagement typically runs ten to fourteen weeks for a diagnostic-and-build, with optional ongoing program oversight afterward. The standard scope covers six threads.
Inventory. A model inventory built or rebuilt to meet SR 26-2’s risk-based, materiality-driven approach. Traditional credit, ALM, BSA/AML, capital, and non-generative AI/ML models sit on the formal MRM inventory; generative and agentic AI systems sit on a parallel inventory governed by the same documentation discipline even though they’re outside SR 26-2’s formal scope. In most engagements this thread alone takes four to six weeks because the artifact most clients call an inventory is significantly incomplete.
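The two-inventory split described above can be sketched in code. This is an illustrative sketch only: the field names, type labels, and partition logic are our assumptions about how a bank might encode the SR 26-2 scope distinction in its inventory tooling, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelRecord:
    """One inventory entry. Field names are illustrative assumptions."""
    model_id: str
    purpose: str      # e.g. "credit", "alm", "bsa_aml", "capital"
    model_type: str   # "statistical", "ml", "generative", "agentic"
    materiality: str  # "high", "medium", "low"

    @property
    def in_formal_mrm_scope(self) -> bool:
        # Per SR 26-2 footnote 3, generative and agentic AI sit outside
        # formal scope; other quantitative model types are in scope.
        return self.model_type not in {"generative", "agentic"}


def split_inventories(records):
    """Partition records into the formal MRM inventory and the
    parallel inventory for generative/agentic systems."""
    formal = [r for r in records if r.in_formal_mrm_scope]
    parallel = [r for r in records if not r.in_formal_mrm_scope]
    return formal, parallel
```

Holding both populations in one record type, with scope derived rather than hand-entered, keeps the parallel inventory under the same documentation discipline as the formal one.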
Validation framework. Tier-appropriate validation against materiality (model exposure × purpose). For high-materiality models, independent validation by qualified personnel with documented evidence. For lower materiality, periodic review or attestation at defined cadence. The framework also covers the parallel-discipline validation approach for generative and agentic AI systems, which sit outside SR 26-2’s formal scope but require their own evidence set drawing from the SR 26-2 principles.
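A minimal sketch of tier assignment under the exposure-times-purpose framing. The purpose weights, score cutoffs, and cadence labels are assumptions for demonstration; SR 26-2 frames materiality as a function of exposure and purpose but does not prescribe a formula.

```python
# Hypothetical purpose weights; a real program would calibrate these
# to its own risk appetite and supervisory expectations.
PURPOSE_WEIGHT = {"capital": 3, "credit": 3, "bsa_aml": 3, "alm": 2, "ops": 1}


def materiality_tier(exposure_usd: float, purpose: str) -> str:
    """Map (exposure, purpose) to a materiality tier. Cutoffs are illustrative."""
    score = exposure_usd * PURPOSE_WEIGHT.get(purpose, 1)
    if score >= 1e9:
        return "high"    # independent validation by qualified personnel
    if score >= 1e7:
        return "medium"  # periodic review at defined cadence
    return "low"         # owner attestation


# Example cadence mapping for the framework document.
VALIDATION_APPROACH = {
    "high": "independent validation with documented evidence",
    "medium": "periodic review at defined cadence",
    "low": "attestation at defined cadence",
}
```

The point of encoding the tiering is auditability: an examiner can trace any model's validation cadence back to a recorded exposure figure and purpose classification rather than a judgment call made in a meeting.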
Documentation standards. Model documentation that meets SR 26-2 expectations for in-scope models: conceptual soundness, development assumptions, data lineage, validation results, outcomes analysis, ongoing monitoring evidence, change history. For third-party models that meet the in-scope definition, the documentation set includes the diligence artifacts the firm relies on. For generative and agentic systems, an analogous documentation set built to the same rigor even though formal MRM scope doesn’t apply.
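The documentation elements above lend themselves to a mechanical completeness check. The artifact names below mirror the list in this section; the function and constant names are our own illustrative choices, not SR 26-2 terminology.

```python
# Artifact names track the documentation elements listed in this section.
REQUIRED_ARTIFACTS = {
    "conceptual_soundness",
    "development_assumptions",
    "data_lineage",
    "validation_results",
    "outcomes_analysis",
    "monitoring_evidence",
    "change_history",
}


def missing_artifacts(doc_set: set) -> set:
    """Return the required artifacts absent from a model's documentation set."""
    return REQUIRED_ARTIFACTS - doc_set
```

Running a check like this across the full inventory is a fast way to quantify the documentation gap before an engagement scopes the remediation work.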
Ongoing monitoring. Drift detection and outcomes analysis appropriate to model type, escalation thresholds, action playbooks, and the documented response when measures cross a threshold. The third-party dimension (what happens when a vendor silently swaps the underlying model under a deployed system) is a thread we now expect to find unaddressed in nearly every program we assess.
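The threshold-to-playbook mapping can be sketched as below. This assumes a PSI-style drift statistic computed elsewhere; the specific thresholds and action wording are illustrative assumptions, not supervisory requirements.

```python
# Ordered highest-first so the most severe applicable action wins.
# Threshold values are illustrative; calibrate per model type.
ESCALATION_PLAYBOOK = [
    (0.25, "escalate: suspend reliance pending validation review"),
    (0.10, "investigate: open a finding and notify the model owner"),
]


def escalation_action(drift_stat: float) -> str:
    """Map a drift measure to the documented playbook response."""
    for threshold, action in ESCALATION_PLAYBOOK:
        if drift_stat >= threshold:
            return action
    return "log: within tolerance, continue monitoring"
```

What matters to an examiner is less the statistic itself than that the response to a breach is defined in advance and evidenced when it happens; the same mapping should also fire when a vendor model swap is detected, not only on gradual drift.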
Governance. A model risk committee that runs at the right cadence with the right authority, decisions on the record, and the integration with broader AI governance and enterprise risk management. The committee is where the program runs.
Examination readiness. The supervisory narrative for the firm’s primary regulator: the document the firm will use to walk a senior examiner through the program in fifteen minutes. In examination-driven sectors this is high-leverage; the firm’s narrative often shapes the examination’s tempo.
When the engagement is the wrong answer
MRM consulting is the wrong scope when the firm needs an end-to-end AI governance program built from scratch. At that point the right starting engagement is AI governance consulting, with MRM as one of the risk disciplines that sits inside it. It is also the wrong scope when the underlying problem is a single model in remediation rather than the program as a whole; that work is better scoped as a tightly bounded validation engagement.
Virtova will tell you which one fits on the discovery call.
A note on what SR 26-2 doesn’t cover
The most common implementation mistake we see in 2026 is reading the press summaries and assuming SR 26-2 brings generative and agentic AI inside MRM. The actual text does the opposite: footnote 3 explicitly excludes them. The same footnote, in the same paragraph, tells banks to use the SR 26-2 principles to guide governance and controls for systems the guidance doesn’t cover. The supervisory direction is clear even if the headlines are confused: traditional and non-generative AI models go inside MRM under SR 26-2; generative and agentic AI go in a parallel governance discipline that mirrors MRM rigor.
The second most common mistake is treating that parallel discipline as a lighter regime. Examiners are increasingly drawing on the same expectations (documented purpose and use, validation evidence, ongoing monitoring, change control, governance committee oversight) and asking firms to demonstrate them whether or not the system technically falls inside SR 26-2. We scope the parallel program with the same rigor as the formal MRM program, integrating the NIST AI RMF and its Generative AI Profile where helpful.
Next step
Most engagements start with a 30-minute discovery call. Bring the current MRM policy, the most recent inventory, and any recent validation or governance finding on a model the firm relies on. We will tell you where the SR 26-2 gap is largest, where the parallel discipline for generative and agentic AI needs to start, and what a focused engagement looks like.