CyberAudit is a Frontier Foundry product; Virtova is the consulting channel for engagements, and the two entities share ownership. The public launch is scheduled for May 2026 on the Frontier Foundry website; until then, early-access engagements run through Virtova.
The platform is built on the thesis described in "Cyber audit in the neural-net era": traditional human-only code audit, evaluated by hand against established control frameworks, no longer fits the systems being audited. CyberAudit is the code-security audit response to that argument, productized as a self-hosted CLI that produces audit-grade findings without sending code outside the customer environment.
How it works
CyberAudit runs a three-stage pipeline plus four pre-report passes against a local knowledge base of approximately 290,000 CVEs.
Stage 1, Auditor. The first pass uses an LLM grounded in CWE retrieval to identify candidate findings in the codebase.
Stage 2, Challenger. Each candidate finding is then challenged adversarially: the Challenger stage tries to invalidate the finding against multiple challenge vectors (false positive triage, defense-mechanism recognition, schema verification, ungrounded-rejection detection) before any finding is surfaced.
Stage 3, Overnight diagnostics. An optional deeper structural pass catches architectural defects that per-file scanners miss, and adds a triage queue and a health signal for the engagement.
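The Auditor-then-Challenger flow can be sketched as a simple gating loop. This is an illustrative sketch only, not CyberAudit's actual API: the names (`Finding`, `CHALLENGE_VECTORS`, `challenge`, `run_challenger`) are hypothetical, and the trivial evidence check stands in for the real adversarial re-evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    # Hypothetical finding record; field names are illustrative.
    cwe: str
    file: str
    severity: str
    evidence: str
    upheld: bool = True
    demotions: list = field(default_factory=list)

# The four challenge vectors named in the text.
CHALLENGE_VECTORS = [
    "false_positive_triage",
    "defense_mechanism_recognition",
    "schema_verification",
    "ungrounded_rejection_detection",
]

def challenge(finding: Finding, vector: str) -> bool:
    """Return True if the finding survives this challenge vector.
    A real implementation would re-prompt an LLM or run a deterministic
    recognizer; here a trivial evidence check sketches the control flow."""
    return bool(finding.evidence)

def run_challenger(candidates: list[Finding]) -> list[Finding]:
    """Surface only findings that survive every challenge vector;
    demoted findings are kept for near-miss reporting."""
    surfaced = []
    for f in candidates:
        failed = [v for v in CHALLENGE_VECTORS if not challenge(f, v)]
        if failed:
            f.upheld = False
            f.demotions = failed  # feeds the report's near-miss analysis
        else:
            surfaced.append(f)
    return surfaced
```

The point of the structure is that no candidate reaches the report without a recorded disposition: either it survives all four vectors, or the vectors it failed travel with it into the near-miss analysis.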
The pre-report passes include HyDE-style query rewriting for retrieval-grounded CWE matching, hybrid retrieval (BGE-code-v1 dense + BM25 sparse + reciprocal-rank-fusion + cross-encoder rerank), deterministic defense recognizers per language–CWE pair that veto LLM dismissals, and cross-file pattern aggregation that elevates systemic defects.
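Of the retrieval pieces above, reciprocal-rank fusion is the one worth making concrete: it merges the dense and sparse result lists using only their ranks. A minimal sketch of standard RRF follows; the example CWE lists are invented for illustration and are not CyberAudit output.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists (e.g. dense BGE-code-v1 results and sparse BM25
    results) with the standard RRF score: sum over lists of 1 / (k + rank).
    k=60 is the conventional default; it damps the influence of top ranks."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["CWE-89", "CWE-79", "CWE-20"]   # dense-retrieval order (illustrative)
sparse = ["CWE-89", "CWE-502", "CWE-79"]  # BM25 order (illustrative)
fused = reciprocal_rank_fusion([dense, sparse])
# CWE-89 ranks first in both lists, so it leads the fused ranking.
```

In the pipeline described above, a fused list like this would then go to the cross-encoder reranker before the deterministic defense recognizers get a veto.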
What you get
A typical deep scan produces structured JSON findings, a JSONL log, a systemic-patterns artifact, and a 25–30 page CSO-voice PDF report. The report includes a cover page and classified-distribution notice, executive summary with a BLUF verdict (NOT READY / CONDITIONAL / READY), findings distribution by severity and CWE, per-finding detail cards with regulatory cross-references (OCC 2023-35, NYDFS 500, FFIEC IT Handbook, NIST SP 800-53), near-miss analysis (CRITICAL findings the Challenger demoted), false-positive analysis for transparency, and a production-readiness assessment.
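The "findings distribution by severity and CWE" section of the report is a straightforward aggregation over the structured JSON findings. A sketch, using an invented findings shape (the field names `id`, `cwe`, `severity` are assumptions, not CyberAudit's actual schema):

```python
import json
from collections import Counter

# Hypothetical findings JSON; the shape is illustrative only.
findings_json = """
[
  {"id": "F-001", "cwe": "CWE-89", "severity": "CRITICAL"},
  {"id": "F-002", "cwe": "CWE-79", "severity": "HIGH"},
  {"id": "F-003", "cwe": "CWE-89", "severity": "HIGH"}
]
"""

findings = json.loads(findings_json)

# The two distributions the report section needs.
by_severity = Counter(f["severity"] for f in findings)
by_cwe = Counter(f["cwe"] for f in findings)
```

Because the artifacts are plain JSON/JSONL, the same aggregation can feed a customer's own dashboards independently of the PDF report.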
CyberAudit is positioned as a scanning aid for human-reviewed audit workflows, not an unattended assurance product. The reviewer feedback loop (true-positive / false-positive dispositions) feeds a persistent ledger that calibrates future scans against the engagement’s real workload.
Reference metrics
On the AutoCEN2 benchmark (an 814-file financial-crime compliance platform across Go, Python, Rust, and TypeScript), the v2 pipeline produced 2.24× the upheld findings of the v1 baseline, with estimated precision rising from approximately 30% to approximately 84%. CRITICAL findings upheld scaled 2.83×; HIGH findings upheld scaled 2.80×. Schema-leak rate dropped to 0%; ungrounded-rejection rate dropped to under 5%. Runtime was unchanged. Full case study available on request to early-access engagements.
Engagement and access
CyberAudit early-access engagements are run through Virtova as design-partner relationships in advance of the May 2026 Frontier Foundry public launch. Partners get the self-hosted platform deployed against their actual codebases, working sessions to calibrate the pipeline against their specific control framework, and direct input on the v2.x roadmap. The cohort is small.
Most early-access conversations start with a 30-minute discovery call. Bring an honest read on the codebases you need to audit and the regulatory framework the audit serves; we will tell you whether CyberAudit is a fit and what the design-partner engagement looks like.