Technical diligence in 2026 is harder than it was in 2018. Deals are closing on companies whose competitive position depends on AI systems that did not exist eighteen months ago, on data flows that cross more regulatory regimes than they did five years ago, and on third-party platforms whose contractual terms have not kept pace with the operational risk they actually carry. A diligence engagement that stops at the architecture diagram and the SOC 2 report is no longer a serious exercise.
Sultan Meghji leads Virtova diligence engagements personally. The work is informed by direct operator experience across U.S. equities technology, bioinformatics, federal agency innovation, and a founding-CEO tenure at Frontier Foundry Corporation. Sultan has sat on both sides of the table: operating platforms under diligence and running diligence on platforms being acquired.
What this engagement looks like
A Virtova M&A technical diligence engagement runs in two formats (pre-LOI quick-look and formal diligence), sometimes sequentially, sometimes independently.
Pre-LOI quick-look (one to two weeks). Scoped tightly around the two or three technical questions that materially affect bid posture. Typical outputs include an architecture and risk read, an opinionated list of the highest-leverage formal-diligence themes, and a written go/no-go input. Pre-LOI work runs on whatever data the sponsor can secure under NDA, not the full data room.
Formal diligence (three to six weeks). Standard scope covers eight threads:
- Architecture and platform. Production architecture, scaling realities, single points of failure, and the gap between the documented system and the operating one.
- Data and ML infrastructure. Data lineage, warehouse and lakehouse posture, feature stores, MLOps and inference infrastructure, and observability.
- AI systems and model risk. Inventory of AI in production, generative-AI exposure, vendor LLM dependencies, and the model-risk framework in place. Especially relevant in financial-services and healthcare deals.
- Cybersecurity posture. The reality below the documented surface: controls actually in place, third-party exposure, observed incident history, and the realistic threat model.
- Third-party and supply-chain risk. Vendor concentration, contractual risk allocation, and the silent-model-swap exposure most contracts do not yet address.
- Engineering organization. Headcount realities, key-person dependencies, attrition signals, and the gap between what the leadership team describes and what the individual-contributor layer experiences.
- Regulatory and supervisory exposure. Where the target operates in regulated industries, how its current AI, model risk, and security posture would survive examination by the relevant supervisor.
- Integration thesis. What actually needs to happen post-close, in what sequence, and over what realistic timeline. The integration thesis is the most consequential part of the deliverable; deals that succeed are usually the ones where the integration thesis was honest before the term sheet.
AI-specific diligence in 2026
Most diligence frameworks in use today were drafted before generative AI arrived in production. They are not wrong; they are incomplete. A serious AI diligence overlay covers four threads on top of the standard scope.
Vendor LLM dependency. Where the target’s product depends on a foundation-model provider, the diligence reads the contract for switching costs, model-version control, data-handling terms, and the silent-swap exposure. Most contracts signed in 2023–2024 did not anticipate the operational reality of 2026; the renewal posture matters as much as the current terms.
Model risk and validation discipline. The target’s model-risk-management (MRM) equivalent: what it is, whether it is real, and whether it has caught a real incident. Companies whose AI claims are stronger than their MRM discipline are common; the gap is where post-close surprises live.
Training-data provenance and IP. For targets whose products incorporate fine-tuned or trained models, an honest read on training data: where it came from, what licenses it sits under, what intellectual-property exposure follows the deal. The exposure here has grown materially as enforcement actions and licensing disputes have accumulated through 2025–2026.
Generative-AI specific risk surface. Confabulation, prompt injection, harmful output, and the controls actually in place. The diligence does not need to be exhaustive on each, but it needs to be specific enough that the integration team is not discovering the surface for the first time post-close.
When the engagement is the wrong answer
Technical diligence is the wrong shape when the question is purely commercial (market sizing, customer concentration, revenue durability), at which point a commercial-diligence partner is the right call. It is also the wrong shape when the deal is too small for senior diligence to pay for itself; Virtova engagements are scoped to deals where the technical risk is meaningful enough to justify the senior-led approach.
Virtova will tell you which shape fits on the discovery call.
Sectors and stages
The practice is built around regulated-industry deals: U.S. financial services (community/regional/global banks, insurers, asset managers, fintech-bank partnerships), healthcare and life sciences, federal contractors and adjacent industrial technology, and the AI-native companies whose primary value sits in production AI systems that need to survive integration into a larger acquirer’s risk framework.
Next step
Most engagements start with a 30-minute discovery call before the diligence package is built. Bring the deal context (sponsor, target, sector, deal size, indicative timeline) and we will tell you what scope fits and where the technical questions that matter actually sit.