Most AI governance programs in regulated firms exist primarily as documents. The charter is written, the policy lives on a wiki, the model inventory is in a spreadsheet, and the committee meets quarterly. None of that, by itself, is a governance program. It is governance theater, and supervisors recognize the difference faster than firms expect.
A real AI governance program is a running operating cadence: a named senior accountable executive, a working forum that meets often enough to actually change things, a model inventory that updates when systems update, and decisions on the record that turn into deliverables. Virtova’s AI governance consulting is calibrated to produce that cadence and hand it off so the client can run it without us.
Engagements are led personally by Sultan Meghji, former inaugural Chief Innovation Officer of the U.S. FDIC. Inside the agency, the work covered AI policy across the U.S. banking system: five thousand institutions and the third-party ecosystem underneath them. That regulator's-side view of which programs hold up under examination and which collapse when tested against the rulebook sits at the center of how Virtova structures this work today.
What this engagement looks like
A Virtova AI governance engagement typically runs eight to twelve weeks for a diagnostic-and-build, with optional ongoing monthly oversight afterward. The standard scope covers six threads.
Accountability and operating model. A named senior accountable executive for AI risk: Chief Risk Officer, Chief AI Officer, Chief Compliance Officer, or in smaller firms the CIO or CTO wearing the hat explicitly. Not a committee. We make the accountability explicit in writing on day one because every other thread depends on it.
Risk appetite and tier classification. A risk appetite statement that says specific things. Not “we will use AI responsibly” but, for example, “we will not deploy AI in Tier 1 credit decisions without human adjudication for the first 24 months of production.” Classification logic distinguishes Tier 1 from Tier 2 from Tier 3, with a process behind each tier that has real teeth.
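As an illustration only, the tiering step can be sketched in code. The tier names follow the text, but the specific criteria below (consumer impact, autonomous decisioning, regulatory-reporting dependency) are hypothetical attributes chosen for the example, not Virtova's actual classification rubric:

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    TIER_1 = 1  # highest scrutiny, e.g. credit adjudication
    TIER_2 = 2  # elevated review, e.g. decision support
    TIER_3 = 3  # baseline controls, e.g. productivity tooling

@dataclass
class AISystem:
    name: str
    affects_consumer_outcomes: bool      # hypothetical criterion
    autonomous_decisioning: bool         # hypothetical criterion
    regulatory_reporting_dependency: bool  # hypothetical criterion

def classify(system: AISystem) -> Tier:
    """Assign a tier from the highest-impact attribute present."""
    if system.affects_consumer_outcomes and system.autonomous_decisioning:
        return Tier.TIER_1
    if system.affects_consumer_outcomes or system.regulatory_reporting_dependency:
        return Tier.TIER_2
    return Tier.TIER_3
```

The point of encoding the logic, rather than leaving it as prose in a policy document, is that each tier assignment becomes reproducible and auditable, which is what gives the process behind each tier its teeth.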
Model inventory and documentation. Most inventories in 2026 are still pre-LLM. Vendor-provided large language models inside contact centers, coding assistants in engineering, and marketing-copy generators all belong on a governed list, even though SR 26-2 leaves generative and agentic AI out of formal MRM scope. The documentation standard tracks the relevant rulebook. For banks, traditional and non-generative AI models go inside the SR 26-2-aligned MRM program; generative and agentic systems sit in a parallel discipline the agencies tell firms to build to the same principles. For firms with EU exposure, the EU AI Act technical-documentation requirement applies on top.
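A minimal sketch of what one governed inventory entry might carry, assuming a hypothetical schema (field names and track labels are illustrative, not a prescribed format). The one structural choice it encodes is the split described above: non-generative models route to the SR 26-2-aligned MRM program, while generative and agentic systems route to the parallel discipline built to the same principles:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class InventoryEntry:
    system_name: str
    owner: str
    vendor: Optional[str]           # None for in-house systems
    is_generative_or_agentic: bool  # the SR 26-2 footnote-3 split
    eu_exposure: bool               # flags EU AI Act documentation on top
    last_reviewed: date             # inventory updates when systems update

    @property
    def governance_track(self) -> str:
        # Traditional/non-generative models sit inside the MRM program;
        # generative and agentic systems sit in the parallel discipline.
        return "parallel-genai" if self.is_generative_or_agentic else "mrm"
```

A vendor LLM in the contact center and an in-house probability-of-default model would both appear on the same list but resolve to different tracks, which is the property that keeps a single inventory authoritative across both rulebooks.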
Governance forum. A standing forum that runs at the right cadence (usually monthly for active programs, quarterly for steady-state) with a standing agenda, named owners, decisions on the record, and follow-up between meetings. The forum is where the program actually runs; the document is where it is described.
Incident and change-management readiness. Tabletop exercises for the most likely incident types: model drift, vendor model swap, data-quality break, regulatory inquiry. The first thirty minutes of a live incident is where the job gets done or lost, and the tabletops are built around that window.
Board and supervisor engagement. Board materials, examination response playbook, and the AI section of the firm’s risk report. In examination-driven sectors this is high-leverage; the firm’s narrative to its supervisors is often the single most consequential governance artifact.
Who this is for
The practice is built for U.S. banks, insurers, health systems, life-sciences companies, private-equity portfolio companies, and U.S. and allied federal agencies. The consistent theme across these clients is that the cost of an AI failure includes regulatory and reputational tail risk, not just unit economics. AI governance work in tech-native firms looks different and is often led by AI specialists; in regulated industry, the work is closer to risk management with an AI specialty than the other way around.
When the engagement is the wrong answer
AI governance consulting is the wrong call when the organization has a functioning governance program and is stuck on a different problem (three pilots stalled in procurement, an AI strategy that needs prioritization, an integration finding from a recent acquisition). Virtova will say so on the discovery call rather than scope a project that is the wrong shape.
It is also not the right answer when the underlying risk problem is not AI at all. Several of the “AI governance” engagements Virtova has been asked to scope are really data-governance, third-party-risk, or change-management problems with an AI label. The honest version of the conversation comes faster than the brochure version.
A note on regulatory context
The federal AI governance rulebook for U.S. financial services moved meaningfully in the last twelve months. SR 26-2 (April 17, 2026) supersedes SR 11-7 and SR 21-8 — the first material update to interagency model-risk-management guidance since 2011 — and is most relevant to banks with more than $30 billion in total assets. Most press summaries said SR 26-2 brings generative AI inside MRM. Footnote 3 of the actual guidance does the opposite: generative and agentic AI are explicitly excluded from scope. The same footnote tells banks to use the SR 26-2 principles to govern those systems anyway. The EU AI Act, on a separate track, has phased into operative obligations that reach extraterritorially into U.S. operations. AI governance work in 2026 is no longer about adapting to a stable framework. It is about building a program that absorbs framework changes without rebuilding itself every twelve months.
Next step
Most engagements start with a 30-minute discovery call. Bring the current state (charter, inventory, last incident, last examiner conversation) and we will tell you which thread is the most urgent and what a focused engagement around it looks like.