UncovAI at NVIDIA GTC 2026: CEO Fraud, Deepfake Detection & the Future of Information Integrity
Florian Barbaro, CEO of UncovAI, took the stage at NVIDIA GTC 2026 to address the fraud wave that generative AI has quietly enabled. CEO impersonation. Synthetic hires. WhatsApp long cons. Here's everything from the session — and what it means for your organisation right now.
The 2026 Threat Landscape: Three Doors Left Open
Three years ago, deepfakes were a curiosity. Today they're a primary fraud vector — and the organisations losing money to them aren't failing on technology. They're failing on awareness.
CEO impersonation attacks — where criminals clone an executive's face and voice for a live Zoom call — have become routine. Finance teams have wired millions based on a synthetic face and a voice indistinguishable from the real thing. The cost of the attack: under $200. The take: millions. The FBI's Internet Crime Complaint Center has flagged AI-assisted business email compromise and video impersonation as among the fastest-growing financial crime categories globally.
Florian walked the GTC audience through three converging attack surfaces that define where the threat sits in 2026. For a deeper breakdown of how each attack type works, the UncovAI AI Scam & Deepfake Detector page covers the full fraud taxonomy.
Live video calls. Private messaging platforms. Remote hiring pipelines. Each one is a door — and most organisations have left all three wide open.
CEO Fraud in 2026: How Synthetic Attacks Actually Work
"We are not in a chatbot era anymore. We are in a synthetic identity era. The question isn't whether AI can generate a perfect CEO — it's whether your organisation can tell the difference before wiring $25 million." — Florian Barbaro, CEO of UncovAI, NVIDIA GTC 2026
Modern CEO fraud doesn't require a hacker. It requires a GPU, a voice sample from a public earnings call, and fifteen minutes. Here's how the four main attack vectors play out:
The Impersonation Call
Attacker clones the CEO's face and voice using tools like ElevenLabs and Sora 2. Schedules a Zoom call with the CFO. Requests an urgent wire. Hangs up.
The Synthetic Hire
A remote candidate passes an entire HR interview using a Generated ID — AI face, AI voice, forged documents. Once hired: data exfiltration, backdoors, payroll fraud.
The WhatsApp Long Con
AI-generated voice notes and profile photos build trust over weeks. Europol's Innovation Lab calls it "Trust Score" grooming. The endgame: a financial transfer or credential hand-off. UncovAI's audio detection layer catches cloned voices in these scenarios.
The Phishing Link
A convincing synthetic voice note or deepfake video directs targets to a spoofed URL. The AI lends credibility the link alone never could. URL phishing detection becomes the last line of defence.
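To make that last line of defence concrete, here's a minimal sketch of the kind of lookalike-URL heuristic a phishing filter might start from. The brand list, threshold, and logic below are illustrative assumptions, not UncovAI's detection model:

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Hypothetical allow-list of protected brands; a real deployment would
# use a curated corpus, not three hard-coded names.
KNOWN_BRANDS = ["paypal.com", "microsoft.com", "uncovai.com"]

def suspicious_url(url: str, threshold: float = 0.8) -> bool:
    """Toy heuristic: flag punycode hosts and near-miss lookalike domains.

    Illustrative only — not UncovAI's phishing model."""
    host = urlparse(url).hostname or ""
    # Punycode ("xn--") hosts are a classic homoglyph-spoofing tell.
    if host.startswith("xn--") or ".xn--" in host:
        return True
    for brand in KNOWN_BRANDS:
        ratio = SequenceMatcher(None, host, brand).ratio()
        # Very close to a known brand, but not the brand itself or one
        # of its legitimate subdomains => likely a lookalike.
        if ratio >= threshold and host != brand and not host.endswith("." + brand):
            return True
    return False
```

A production filter would add homoglyph normalisation, domain-age lookups, and reputation feeds on top of anything this simple.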
NVIDIA × UncovAI: A Vision for Enterprise-Scale Detection
The centrepiece of Florian's session was a direction, not yet a product announcement. The vision: a future NVIDIA × UncovAI framework that would run UncovAI's multi-model detection stack on NVIDIA's GPU infrastructure — enabling the low latencies that make real-time meeting analysis viable at true enterprise scale. You can explore the current UncovAI product suite while this partnership develops.
Parallel processing of facial micro-expression analysis, audio frequency forensics, and biometric consistency scoring — simultaneously, on a live call. Any mismatch triggers an alert before a single euro moves.
The goal is a detection system that operates at the speed of the attack itself. Not hours later when the damage is done. As this collaboration takes shape, updates will be shared here first.
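As a rough sketch of what "parallel signal processing on a live call" means in practice, the snippet below fans three stub analysers out over a thread pool and raises an alert when any score crosses a threshold. All function names and scores are placeholders, not UncovAI's models:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub analysers standing in for the real forensic models; each returns
# a synthetic-content probability in [0, 1]. Values are placeholders.
def facial_micro_expressions(frame_batch):
    return 0.12

def audio_frequency_forensics(audio_chunk):
    return 0.91  # a cloned voice would score high on this channel

def biometric_consistency(frame_batch):
    return 0.08

def analyse_call_window(frames, audio, alert_threshold=0.5):
    """Run the three signal analysers in parallel; alert on any mismatch."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {
            "face": pool.submit(facial_micro_expressions, frames),
            "audio": pool.submit(audio_frequency_forensics, audio),
            "biometric": pool.submit(biometric_consistency, frames),
        }
        results = {name: f.result() for name, f in futures.items()}
    flagged = [name for name, score in results.items() if score >= alert_threshold]
    return {"scores": results, "alert": bool(flagged), "flagged_signals": flagged}
```

The point of the structure: no single signal has to be conclusive, because any one channel crossing the line is enough to pause the wire.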
Use Case: Stopping Synthetic Identity Fraud at the Hiring Gate
The most immediate application Florian outlined was video detection for HR and remote onboarding. Video-first hiring has become the norm, and in the process it has created a gap between what HR teams see on screen and what's actually there. Attackers are exploiting that gap systematically. The FTC has explicitly warned businesses about AI impersonation risks in remote hiring and KYC workflows.
UncovAI's Generated ID detection layer works across three checkpoints, each powered by fully in-house forensic models built and maintained by UncovAI's own research team:
Checkpoint 1 — Pre-Call Forensic Document Verification
Before the interview begins, the candidate's submitted ID passes through UncovAI's proprietary forensic analysis engine. It dissects pixel-level artefacts, generative model signatures, and texture inconsistencies no human eye could catch — the same class of signals that MIT CSAIL research identified as the most reliable indicators of GAN-generated imagery. Synthetic documents fail immediately, flagged with a forensic trace showing exactly where and how the forgery was introduced. See how the UncovAI image detection layer handles this at the document level.
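For intuition only, here's a toy version of one such pixel-level signal: generated imagery often shows unnaturally uniform texture, which a simple adjacent-pixel roughness measure can hint at. This is a deliberately simplistic stand-in for real forensic analysis, and the threshold is an assumption:

```python
def texture_roughness(pixels):
    """Mean absolute difference between horizontally adjacent pixels.

    A toy stand-in for pixel-level artefact analysis: implausibly smooth
    regions are one weak signal of generated imagery. Real forensics
    combine many such signals; this is illustrative only."""
    diffs, count = 0, 0
    for row in pixels:
        for a, b in zip(row, row[1:]):
            diffs += abs(a - b)
            count += 1
    return diffs / count if count else 0.0

def looks_generated(pixels, smoothness_threshold=2.0):
    # Flag documents whose texture is implausibly uniform.
    return texture_roughness(pixels) < smoothness_threshold
```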
Checkpoint 2 — Live Biometric Forensic Analysis
UncovAI's invisible meeting bot joins the call and runs the in-house biometric forensic model in real time. It cross-references micro-expression timing, blink cadence, lip-sync frame accuracy, and facial geometry across 90-degree rotations — a known weak point for most current generative models. Every signal is processed by models UncovAI built, owns, and continuously retrains against the latest generative architectures.
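One of those signals, blink cadence, is easy to sketch: human inter-blink intervals are irregular, while synthetic faces can blink too regularly or barely at all. The threshold below is an illustrative assumption, not a calibrated value, and this is not UncovAI's model:

```python
from statistics import mean, pstdev

def blink_cadence_anomaly(blink_times, min_cv=0.2):
    """Flag implausibly regular (or near-absent) blinking.

    blink_times: timestamps in seconds of detected blinks in a call window.
    A near-constant cadence — coefficient of variation below min_cv — or
    too few blinks is a weak synthetic-face signal. Illustrative only."""
    if len(blink_times) < 3:
        return True  # too few blinks in the window: suspicious on its own
    intervals = [b - a for a, b in zip(blink_times, blink_times[1:])]
    cv = pstdev(intervals) / mean(intervals)  # coefficient of variation
    return cv < min_cv
```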
Checkpoint 3 — Post-Call Forensic Trust Score Report
Within seconds of the call ending, the HR team receives a full forensic Trust Score report: a layer-by-layer breakdown of every anomaly detected, synthetic content probability per signal type, and a clear pass/flag verdict backed by forensic evidence. No external tools involved. No forensic expertise required on the HR side.
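The shape of such a report might look like the following sketch. The field names and the 0.5 flag threshold are assumptions for illustration, not UncovAI's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative shape for a post-call Trust Score report; everything here
# is a hypothetical schema, not UncovAI's real data model.
@dataclass
class TrustScoreReport:
    per_signal: dict = field(default_factory=dict)  # signal -> synthetic probability
    anomalies: list = field(default_factory=list)   # human-readable findings

    @property
    def verdict(self) -> str:
        # Flag the call if any single signal crosses 0.5 synthetic probability.
        return "flag" if any(p >= 0.5 for p in self.per_signal.values()) else "pass"

report = TrustScoreReport(
    per_signal={"face": 0.07, "voice": 0.83, "lip_sync": 0.41},
    anomalies=["voice: spectral artefacts consistent with neural synthesis"],
)
```

The layer-by-layer breakdown matters more than the verdict itself: it's what lets a non-expert HR team see which signal tripped and why.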
Real-Time Zoom Call Protection
One of the most searched queries of 2026 is "deepfake detection tools for live meetings." The reason is obvious: Zoom, Teams, and Meet are where high-value decisions get made — and where the highest-value impersonation attacks are executed. NIST's AI Risk Management Framework specifically identifies live video impersonation as a critical emerging threat requiring technical countermeasures.
UncovAI's live meeting bot requires no installation on participants' devices. It joins as a standard attendee, silently processing the stream. If a synthetic face or cloned voice is detected, the host gets an instant in-call notification. The attacker sees nothing. The fraud stops before it starts.
For enterprises, the bot integrates directly into Zoom, Teams, and Meet workflows via API. Security teams can enforce mandatory AI verification on all financial approval calls without changing how those calls are run. See enterprise pricing for deployment at scale.
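Since the integration API isn't documented in this post, the snippet below stubs a hypothetical verification client just to show the policy shape: financial-approval calls must pass AI verification before anyone proceeds. Every name, keyword, and response field here is an assumption:

```python
# Hypothetical stand-in for an UncovAI API client — the real API surface
# is not described in this post, so the client is stubbed.
class StubVerificationClient:
    def verify_meeting(self, meeting_id: str) -> dict:
        return {"meeting_id": meeting_id, "trust_score": 0.97, "synthetic": False}

# Assumed trigger words marking a call as a financial-approval call.
FINANCIAL_KEYWORDS = ("wire", "payment", "invoice", "approval")

def requires_verification(meeting_title: str) -> bool:
    """Policy: every financial-approval call must pass AI verification."""
    title = meeting_title.lower()
    return any(keyword in title for keyword in FINANCIAL_KEYWORDS)

def enforce(meeting_id: str, meeting_title: str, client=StubVerificationClient()):
    # Non-financial calls run as usual; financial calls get checked first.
    if not requires_verification(meeting_title):
        return "no verification required"
    result = client.verify_meeting(meeting_id)
    return "proceed" if not result["synthetic"] else "block and alert security"
```

The design point is that the policy lives in the security team's hands, not in how the CFO runs the call.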
The Larger Crisis: Information Integrity
Beyond fraud, Florian addressed what he called the real underlying problem: the collapse of trust in digital content. When any face, voice, or document can be synthesised in minutes, the burden of proof for authenticity shifts entirely. The question stops being "could this be fake?" and becomes "can you prove it's real?" The C2PA (Coalition for Content Provenance and Authenticity) — backed by Adobe, Microsoft, Google, and the BBC — is building the open standard for cryptographic content signing, and it's one of the frameworks UncovAI's detection stack is designed to complement.
UncovAI's answer is its suite of fully in-house detection models — built, trained, and maintained entirely by UncovAI's research team. Unlike platforms that depend on third-party APIs, UncovAI controls every layer of its detection stack. That means faster updates when new generative models ship, tighter accuracy, and no dependency on external providers. When Sora 2 releases a new architecture or ElevenLabs updates its voice synthesis, UncovAI's team is already training against it. The same forensic rigour applies to written content — see how UncovAI's text detection handles AI-generated documents and essays.
Florian's vision is a web where content without verified provenance is treated with the same scepticism we apply to unsigned code. The technology is ready. The adoption curve is the final barrier.
How UncovAI Compares to Other Deepfake Detection Tools in 2026
| Feature | UncovAI | Traditional Detectors |
|---|---|---|
| Real-Time Analysis (Live Calls) | ✓ Zoom, Teams, Meet | ✗ Upload only |
| WhatsApp Voice Note Detection | ✓ WhatsApp Bot | ✗ No |
| Sora 2 / ElevenLabs Detection | ✓ 2026 model support | ⚡ Limited |
| Synthetic Identity (Generated ID) Detection | ✓ HR onboarding workflow | ✗ No |
| Fully In-House Forensic Models | ✓ 100% proprietary | ✗ Third-party APIs |
| Trust Score Forensic Reporting | ✓ Layer-by-layer breakdown | ✗ Binary result only |
Three Things You Can Do Right Now
Install the Browser Extension
The UncovAI browser extension for Chrome and Firefox lets you right-click any image or video to check for GAN signatures and content credentials. Under two seconds. No account required to start.
Add the WhatsApp Bot
Forward any suspicious voice note, image, or video to the UncovAI WhatsApp Bot. It runs multi-channel analysis and returns a Trust Score within seconds — flagging catfishing patterns and investment scam profiles before they cause harm. Need volume? Buy detection credits for high-frequency use cases.
Audit Your HR Onboarding Workflow
If your organisation hires remotely, the cost of one synthetic hire vastly outweighs the cost of the tool. See if UncovAI is built for your use case — then get in touch to see the Generated ID detection layer in action.
Frequently Asked Questions
Is UncovAI the same as "Uncov AI", "Uncover AI", or "Uncove AI"?
Yes. While the brand is UncovAI, users frequently search for it as Uncov AI, Uncover AI, Uncov, Uncove AI, Uncove IA, Uncov IA, and Uncover IA. All of these point to the same forensic-grade deepfake detection platform at uncovai.com. Visit the full FAQ for more common questions.
Can UncovAI detect deepfake voices in real time on Zoom?
Yes. The meeting bot analyses audio frequencies and visual micro-expressions simultaneously during any live Zoom, Teams, or Google Meet call. Detection happens in under three seconds — with no installation required on the other participant's device.
What is synthetic identity fraud (Generated ID fraud)?
Synthetic identity fraud uses AI tools to create a fully fabricated identity: a non-existent face, a cloned voice, and forged documents. Attackers use these to pass KYC checks, get hired, or impersonate executives on video calls. UncovAI's forensic models flag these at every touchpoint. The FTC and Europol have both published warnings on this exact threat category.
Is there a free deepfake detector available right now?
Yes. UncovAI offers a free web scanner and a free Chrome and Firefox browser extension — no subscription required. Both support detection of images and videos generated by Midjourney, Sora 2, Stable Diffusion, and ElevenLabs voice synthesis. See the pricing page for what's included in each plan.
What did Florian Barbaro present at NVIDIA GTC 2026?
Florian Barbaro shared a vision for an NVIDIA × UncovAI AI fraud-protection framework. His session covered CEO impersonation attacks, real-time deepfake detection, synthetic identity fraud in HR pipelines, and UncovAI's in-house forensic model approach to information integrity. Read more on the UncovAI blog.
Can I verify suspicious WhatsApp messages with AI?
Yes. The UncovAI WhatsApp Bot verifies suspicious voice notes, images, and videos in real time. Forward any suspicious content and receive a Trust Score within seconds. It uses the same audio detection models that power the enterprise live meeting product.
Don't Wait for the Next Deepfake to Cost You
UncovAI's forensic detection runs entirely on in-house models — no third-party APIs, no black boxes. Start with the free scanner or book an enterprise demo to see the full detection pipeline in action.
Get Started Free →
