
Virtova services · By Sultan Meghji

NIST AI RMF compliance

Operationalize the NIST AI Risk Management Framework (Govern, Map, Measure, Manage) as a running program in regulated firms. Engagements led by Sultan Meghji, former inaugural FDIC Chief Innovation Officer.

The NIST AI Risk Management Framework is the most widely referenced AI governance standard in the United States. It is voluntary. Federal agencies, large customers, and increasingly the auditors who go in behind them all reference it as if it were not. That tension (voluntary in the statute, expected in practice) is where most NIST AI RMF compliance engagements actually start.

I sat inside the U.S. FDIC in 2021 as federal agencies began reckoning with the draft framework. The firms that have done well with it since then have not treated it as a standard to conform to. They have treated it as a diagnostic to run against themselves, a quarterly exercise in honest self-assessment, and the basis of an operating program that draws on the framework’s vocabulary without becoming a separate NIST workstream layered on top of existing risk processes. That is the model Virtova brings into engagements.

Detailed thinking on operationalizing the framework, including the July 2024 Generative AI Profile (AI 600-1), sits in the longer Virtova NIST AI RMF playbook. The service page below covers what a compliance engagement looks like in practice.

What this engagement looks like

A Virtova NIST AI RMF compliance engagement typically runs six to ten weeks for the diagnostic, with optional follow-on remediation. The work is organized around the framework’s four functions plus a fifth integration thread.

Govern. The thinnest function in nearly every engagement. We build the operating muscle: a named senior accountable executive, a working risk forum, a documented risk appetite statement that commits to specific limits rather than generalities, and a decision log that survives the next exam. Govern is what turns the rest of the framework from documentation into a program.

Map. The function that asks the firm to understand each AI system in context: data sources, downstream consumers, the human decisions it informs, the failure modes that matter, the affected parties, and (for generative systems under AI 600-1) training-data provenance and confabulation surface. A good Map artifact for a single system fits on two pages, in plain English. If a senior examiner cannot follow it, it is not yet a Map.
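As a hypothetical illustration of the fields a single-system Map artifact covers, here is a skeleton record. The system name, field names, and values are illustrative assumptions, not a NIST-prescribed schema or Virtova's actual template:

```python
# Hypothetical single-system Map record. Every field named in the
# prose above is present, in plain English; the example system and
# all values are invented for illustration.
map_record = {
    "system": "small-business credit pre-screen (generative assist)",
    "data_sources": ["core banking ledger", "bureau pulls", "application text"],
    "downstream_consumers": ["underwriting queue", "adverse-action letters"],
    "human_decisions_informed": ["approve/decline recommendation to underwriter"],
    "failure_modes": ["stale bureau data", "confabulated applicant facts"],
    "affected_parties": ["applicants", "underwriters", "fair-lending office"],
    # AI 600-1 additions for generative systems:
    "training_data_provenance": "vendor-attested; no customer PII in pretraining",
    "confabulation_surface": "free-text rationale shown to underwriter",
}

# The plain-English test: a senior examiner should be able to read
# this record top to bottom without a glossary.
assert {"data_sources", "failure_modes", "affected_parties"} <= map_record.keys()
```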

Measure. Where most firms bring model-quality metrics (accuracy, F1, ROC-AUC) to what is supposed to be a risk conversation. We translate model behavior into a measurement set the board and supervisors will actually use: tier-appropriate validation evidence, drift monitoring, fairness and disparate-impact metrics for in-scope use cases, and incident tracking. Measure is also where the gap between U.S. and EU expectations starts to bite for firms with cross-border exposure.
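To make the translation from model metrics to risk metrics concrete, here is a minimal sketch of one fairness screen a Measure workstream might track: the disparate-impact ratio with the common four-fifths (80%) screen. The function names, group labels, and data are illustrative assumptions, not Virtova's actual measurement set:

```python
# Hypothetical sketch: disparate-impact ratio of the kind a Measure
# workstream might report alongside validation and drift evidence.
# The 0.8 screen is the common four-fifths rule; all names and data
# here are illustrative.

def disparate_impact_ratio(selected, group):
    """Ratio of favorable-outcome rates: protected vs. reference group.

    selected: list of 0/1 outcomes (1 = favorable decision)
    group:    parallel list of labels, 'protected' or 'reference'
    """
    def rate(label):
        outcomes = [s for s, g in zip(selected, group) if g == label]
        return sum(outcomes) / len(outcomes)

    return rate("protected") / rate("reference")

# Example: a 30% favorable rate for the protected group vs. 50% for
# the reference group gives a ratio of 0.6, below the 0.8 screen.
selected = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0,   # protected: 3 of 10
            1, 1, 0, 1, 0, 1, 0, 1, 0, 0]   # reference: 5 of 10
group = ["protected"] * 10 + ["reference"] * 10
ratio = disparate_impact_ratio(selected, group)
print(round(ratio, 2))  # 0.6
```

The point of a measure like this is that a board member or examiner can read it without knowing what ROC-AUC means.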

Manage. The closing-the-loop function. Mitigation actions tied to specific findings, change-management discipline for model updates and vendor swaps, retirement criteria, and the documented response when measures cross a threshold. Manage is where the program proves it can act, not just observe.
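A minimal sketch of that threshold logic, assuming invented measure names, limits, and responses (none of these are prescribed by the RMF or drawn from an actual Virtova engagement):

```python
# Hypothetical sketch of the Manage loop: each measure carries a
# documented threshold and a named response, so a crossing produces
# an action rather than a dashboard footnote. Measures, limits, and
# responses below are illustrative assumptions.

THRESHOLDS = {
    # measure name: (limit, documented response when crossed)
    "psi_drift":        (0.25, "revalidate model; freeze retraining pipeline"),
    "disparate_impact": (0.80, "escalate to fair-lending review"),
    "incident_count":   (3,    "open finding; notify accountable executive"),
}

def manage_response(measure, value):
    """Return the documented response if a measure crosses its threshold."""
    limit, action = THRESHOLDS[measure]
    # Disparate impact breaches when it falls BELOW the limit;
    # the other measures breach when they rise ABOVE it.
    breached = value < limit if measure == "disparate_impact" else value > limit
    return action if breached else None

print(manage_response("psi_drift", 0.31))         # drift breach: revalidate
print(manage_response("disparate_impact", 0.92))  # None: within appetite
```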

Integration. The thread the framework itself does not name explicitly: how AI risk management connects to the firm’s existing model risk management, third-party risk, information security, fair-lending, and BSA/AML programs. Treating AI risk as a separate vertical is the most common implementation mistake we see. Virtova engagements wire the four NIST functions into existing risk infrastructure rather than building a parallel one.

When the engagement is the wrong answer

NIST AI RMF compliance work is the wrong scope when the firm needs a foundational AI governance program built end-to-end. At that point the right engagement is AI governance consulting, with NIST RMF as one input among several. It is also the wrong scope when the firm has a working program and the ask is specifically about a single function (Map remediation after an audit finding, for example), at which point a tightly bounded sprint fits better than the diagnostic.

Virtova will tell you which one fits on the discovery call.

Generative AI: the AI 600-1 layer

The July 2024 NIST Generative AI Profile (NIST AI 600-1) extends the RMF with specific risks and controls for generative systems: confabulation, information integrity, environmental impacts, harmful bias and homogenization, intellectual-property exposure, and obscene, degrading, or abusive content, among others. For most U.S. firms in 2026, the Generative AI Profile is the more operationally relevant document day to day; the base RMF is the structural anchor underneath it. Engagements scope both.

In practice, the most common AI 600-1 gaps Virtova finds are around training-data provenance documentation, evaluation evidence for confabulation and harmful-bias risks at the use-case level, and the human-oversight design for generative systems that have effectively been shipped without one. Closing those three gaps tends to be the largest single piece of remediation work in a typical engagement.

Next step

Most engagements start with a 30-minute discovery call. Bring whatever you have (current charter, model inventory, last audit finding, the AI section of the most recent board report) and we will tell you which function has the largest gap and what a tightly scoped engagement looks like.

"The NIST AI RMF is voluntary. So is brushing your teeth. The question isn't whether the framework is mandatory. The question is which version of it your supervisor will read back to you when they show up."
— Sultan Meghji

Frequently asked

What is NIST AI RMF compliance?
NIST AI RMF compliance is the work of building an AI risk-management program that aligns to the U.S. National Institute of Standards and Technology's voluntary AI Risk Management Framework (AI RMF 1.0, January 2023) and its July 2024 Generative AI Profile (NIST AI 600-1). The framework is voluntary, but it is increasingly the shared vocabulary U.S. supervisors, auditors, and large customers use to evaluate AI governance.
How is the NIST AI RMF organized?
The framework is organized around four functions (Govern, Map, Measure, and Manage) that interlock continuously rather than execute once. Govern sets the operating muscle for AI risk decisions. Map establishes context for each AI system. Measure produces evidence about behavior and risk. Manage closes the loop with action. The Generative AI Profile extends each function with specific risks and controls for generative systems.
Is NIST AI RMF compliance mandatory for U.S. banks?
Not directly. The framework is voluntary and not itself enforceable. In practice, U.S. banking supervisors increasingly reference NIST AI RMF concepts in examinations and interagency guidance. The updated model-risk-management guidance (SR 26-2, April 17, 2026) preserves the SR 11-7 framework with a more risk-based, materiality-driven posture and uses vocabulary aligned with NIST. SR 26-2 explicitly excludes generative and agentic AI from formal scope, which makes the NIST AI RMF Generative AI Profile the de facto reference banks reach for when they govern those systems. Banks treating the framework as optional are usually surprised mid-exam when it isn't.
What does a Virtova NIST AI RMF compliance engagement produce?
A current-state read against each of the four functions, a gap list prioritized by risk and regulatory exposure, a documented operating model with named owners, and a phased remediation plan an examination team or external auditor can follow. The deliverable is a running program with the framework wired into existing risk and audit workstreams, not a standalone NIST compliance binder.
Who runs the engagement?
Sultan Meghji, personally. As inaugural Chief Innovation Officer of the U.S. FDIC, Sultan was inside the agency as federal supervisors began reckoning with the AI RMF in 2021. The view from the regulator's side of the table about how NIST concepts translate into examination expectations sits at the center of how Virtova scopes this work.

Work with Virtova

Most engagements start with a 30-minute call.

Confidential by default. NDAs available on request.

Book a discovery call →