The U.S. Federal Deposit Insurance Corporation is one of the three federal banking supervisors in the United States, with primary supervisory responsibility for state-chartered banks that are not members of the Federal Reserve System and a coordinating role across the broader U.S. banking system. Its AI-relevant supervisory posture is not a separate rulebook; it is existing FDIC supervision (model risk, IT examination, third-party risk, consumer protection, BSA/AML) extended to AI systems in production. The framework is set by the April 17, 2026 supersession of SR 11-7 and SR 21-8 by SR 26-2, which preserves the SR 11-7 structure, sharpens the risk-based posture, and explicitly leaves generative and agentic AI out of formal MRM scope while directing banks to govern those systems under the same principles.
I served as the FDIC’s inaugural Chief Innovation Officer from 2021 to 2022, built the agency’s first innovation division from scratch, and stood up the agency’s policy work on AI, quantum computing, digital assets, digital identity, and cybersecurity for the U.S. banking system. The FDIC AI regulation advisory practice at Virtova is built on that direct experience of how the agency’s supervisory culture actually operates and how the existing rulebook extends to AI.
What this engagement looks like
A Virtova FDIC AI regulation advisory engagement is typically scoped around a specific supervisory event or cycle: an upcoming examination, a recent finding, a new product entering production, a third-party AI vendor being onboarded, or board-level preparation for an emerging area of supervisory attention. Standard threads:
Supervisory narrative. The document and the verbal walk-through the firm uses to communicate its AI program to FDIC examiners. Done well, the narrative establishes the framing for the examination and changes the tempo of the conversation that follows. Done badly, it forces the examination team to discover the program through artifact review, which is slower and less forgiving.
Documentation set. Model inventory, MRM artifacts for models in SR 26-2 scope (traditional and non-generative AI), the parallel-discipline documentation for the generative and agentic AI systems that SR 26-2 leaves outside formal scope, third-party-risk documentation, and fair-lending posture. Engagements work to FDIC and interagency standards rather than generic AI compliance rubrics.
Examination tabletop. A walkthrough of the upcoming or anticipated examination: what the team is likely to be asked, where the firm’s program will be tested hardest, what the right answer looks like, and where the honest answer requires acknowledging a gap and a remediation plan rather than a defense.
Finding response. Where the firm is responding to an existing finding involving AI, governance, or model risk, Virtova helps draft the response, sequence the remediation, and structure the supervisory communication that follows.
Board materials. The AI section of the firm’s board risk report, the standing AI agenda for the risk committee, and the one-page narrative that non-technical directors read before signing off. Board materials are where governance programs collide with the rest of the firm; in examination-driven sectors they are also a primary supervisory artifact.
Who this is for
The practice is built for FDIC-supervised institutions and the firms whose supervisory exposure runs through the FDIC framework: state-chartered community and regional banks, savings institutions, deposit insurance fund participants, and the third-party ecosystem (technology service providers, fintechs in bank partnerships) that the FDIC examines under TSP and bank-vendor frameworks. The same supervisory culture and rulebook govern adjacent OCC and Federal Reserve work, so engagements often span the supervisor set.
What FDIC examiners actually look at
The supervisory experience of an FDIC examination involving AI is, in 2026, less about the technology itself and more about whether the firm can demonstrate it understands the technology in risk terms. Examiners are typically looking for four things.
A named senior accountable executive for AI risk who can answer questions in their own words rather than reading from documentation. The accountability question is not new; it is the same question SR 11-7 has asked of model risk for fifteen years, applied to AI systems that did not exist when SR 11-7 was drafted.
A model and AI inventory that the firm clearly maintains as an operating artifact rather than producing on demand. Inventories assembled in the week before an examination are visible from across the room.
A documented incident response history that includes at least one real AI-relevant incident (model drift, vendor model swap, fair-lending finding, security incident affecting an AI system). Firms that claim a clean history without the underlying monitoring discipline to support the claim tend to receive harder follow-up questions, not easier ones.
A written EU AI Act posture if any cross-border exposure exists. The FDIC does not enforce the EU Act, but examiners increasingly note its absence as evidence of how the firm thinks about AI risk in general.
When the engagement is the wrong answer
FDIC-specific advisory is the wrong scope when the firm needs a foundational AI governance program built end-to-end. That work is AI governance consulting. It is also the wrong scope when the question is purely legal: a representation question, a Memorandum of Understanding response, an enforcement matter. Virtova works alongside the firm’s banking counsel rather than substituting for them; engagements are scoped accordingly.
Next step
Most engagements start with a 30-minute discovery call. Bring the upcoming or recent supervisory event (examination notice, finding, vendor onboarding, board ask) and we will tell you what a tightly scoped engagement looks like and where the highest-leverage preparation sits.