← Atlas
Atlas expedition

Scams, deepfakes, and social engineering in 2026

Field guide to the 2026 AI scam landscape — named cases, dollar losses, attacker tooling, and the four defense layers that actually move the needle.

99 sources ~14 min read #23 security · deepfakes · social-engineering · ai-safety · talk-prep

TL;DR for the talk. AI didn’t invent scams — it removed every last constraint on running them. In 2024 US victims lost a record $16.6B to internet crime [1], Deloitte projects gen-AI fraud reaches $40B by 2027 [8], and Microsoft now measures AI-written phishing emails at a 54% click rate vs 12% for human-written [54]. For a non-technical audience the shape worth conveying is: the attacker side has industrialised (Telegram-distributed jailbroken LLMs, real-time face-swap from one photo, $40B/year SE-Asia scam compounds), the detector side is losing the arms race (peer-reviewed accuracy drops 10-15% on unseen generators [92]), and the only consumer defenses that survive contact with reality are out-of-band human protocols — family codewords, callback verification on a known number, and dual-officer sign-off on any video-call money instruction [93][95].

The scoreboard, 2024–2026

Metric Number Source
FBI IC3 reported losses, 2024 $16.6B (+33% YoY) across 859,532 complaints [1]
Cyber-enabled fraud share of those losses 83% ($13.7B) [2]
Investment fraud (#1 IC3 category) $6.57B; crypto complaints $9.3B (+66% YoY) [4]
FTC consumer fraud losses, 2024 $12.5B (+25% YoY) [6]
FTC government-imposter scams, 2024 $789M (+$171M YoY) [7]
FBI elder-fraud losses, 2024 $4.9B (+43% YoY) [5]
UK Finance APP fraud, H1 2025 £257.5M (+12%); investment scams +55% [13]
Crypto-scam revenue, 2024 (Chainalysis) $9.9B → likely $12.4B as wallets identified [68]
Crypto-scam revenue, 2025 (Chainalysis) $14B–$17B [11][69]
AI-impersonation scam growth, 2025 +1,400% YoY [11]
Average individual scam payment, 2025 $2,764 (+253%) [11]
AI-enabled scam revenue per operation 4.5× traditional [11]
Deepfake fraud growth, Q1 2024 → Q1 2025 (Sumsub) +700% globally; Maldives +2,100% [47][10]
Multi-step (sophisticated) fraud share, 2024 → 2025 10% → 28% (+180%) [9]
Deloitte projection, US gen-AI fraud, 2027 $40B (from $12.3B in 2023, 32% CAGR) [8]
SE-Asia scam compound annual profits (UNODC 2025) ~$40B [63]
US losses to SE-Asia scams, 2024 (Treasury) $10B (+66% YoY) [72]

For a non-technical room, the line that lands is the simple one: fraud is now the single largest category of US cybercrime loss by a large margin, and growing at 25–33%/yr [1][6]. Ransomware gets the headlines; consumer fraud quietly took the money.

Three attacker playbooks worth showing on a slide

1. Voice cloning / vishing

Three seconds of TikTok audio is now enough to clone someone’s voice with ~90% accuracy [16]. ElevenLabs officially recommends 1–2 minutes of clean audio for its Instant Voice Cloning, but acknowledges 30-second samples already produce excellent results [15]. OpenAI’s Voice Engine (15-second clones) has stayed in restricted preview through 2026 specifically because of abuse risk [21][22]. ElevenLabs added a “No-Go Voices” classifier that refuses clones of presidential candidates and other high-risk figures [23]. The criminal market doesn’t care: deepfake-vishing rose ~1,600% in Q1 2025 with turnkey vishing-as-a-service kits starting at $1,000 [29].

Case Date What happened Outcome
LastPass Apr 2024 WhatsApp calls and a voicemail with a deepfake of CEO Karim Toubba’s voice Caught — wrong channel tipped off the employee [17][18]
NH Biden robocall Jan 2024 AI Biden voice telling NH Democrats not to vote in primary $6M FCC fine to operative Steve Kramer; 13 felony voter-suppression counts; $1M penalty for distributor Lingo Telecom [19][20]
FBI senior-officials campaign May 2025 AI voice + SMS impersonating senior US officials IC3 PSA; out-of-band callback the recommended counter [93]
AI scam losses (FBI 2025) 2025 Family-in-distress + investment deepfake calls $893M total; $352M from victims 60+; AARP est. avg loss $18k, avg victim age 74 [30]

Detection vendors push back hard but the consumer side is brutal: McAfee’s 7,000-person survey found 70% of people are not confident they could distinguish a cloned voice from the real thing [26]. Pindrop’s enterprise Pulse product claims 99.2% accuracy from 2 seconds of audio and scored 96.4% (81/84) on NPR’s benchmark [24][25] — those numbers are for call-centers fronting banks, not your iPhone. NCC Group built a real-time clone-the-call rig in about an hour using public clips and off-the-shelf hardware [31] — meaning the technical barrier is gone. Mandiant’s M-Trends 2026 ranks voice phishing as the #2 initial infection vector of 2025, present in 11% of investigated intrusions [32].

2. Real-time deepfake video / live face-swap

The Arup case is the canonical one for a non-technical audience because the loss is dollar-attributed and the playbook is exactly the executive scenario the audience worries about.

Case Date Playbook Loss
Arup (Hong Kong) Jan 2024 Multi-person Zoom — CFO and colleagues all synthetic, generated from public conference footage; only the finance worker was real on the call. 15 transactions wired before discovery HK$200M / ~US$25.6M [27][33][34]
WPP (Mark Read) May 2024 Fake WhatsApp + AI voice clone of CEO + Microsoft Teams meeting + chat impersonation $0 — caught [35][36]
KnowBe4 (DPRK fake IT worker) Jul 2024 AI-enhanced photo + stolen US identity, passed 4 video interviews; malware loaded on the issued laptop within minutes $0 breach — sandboxed account [37][38]
Singapore finance director Mar 2025 Same Arup playbook, all-synthetic Zoom US$499,000 [28]
DPRK IT-worker scheme (DOJ indictment) Dec 2024 AI synthetic identities + deepfake interviews + US laptop farms $88M over six years; by Nov 2025 DOJ counted 136 victim US firms and seized $15M crypto [39][40][41]

Tooling on GitHub:

Tool ⭐ Stars State (Apr 2026)
Deep-Live-Cam ⭐ 92k Active — one image, real-time face swap, dominant in the live category [43]
Faceswap ⭐ 55k Active — the foundational training-based toolkit [44]
DeepFaceLive ⭐ 31k ⚠ Archived 13 Nov 2024, still widely forked [42]
Roop ⭐ 31k ⚠ Archived Mar 2026; author cited “second-order effects” [45]

Resemble AI counted >$200M in deepfake-enabled fraud in Q1 2025 alone, with video the dominant modality (46% of incidents) [46]. Sumsub’s 2025–26 report says deepfakes are now ~7% of all fraud attempts and 76% of fraud occurs after KYC [47] — the front door has been hardened, the inside hasn’t. Reality Defender’s latest video model claims an 8% balanced-accuracy improvement over the prior generation [48]; Microsoft’s Video Authenticator (released 2020) has not published a current accuracy figure [49].

3. AI-generated phishing and the agentic frontier

The AI-phishing story has two tempos. The first, in 2023, was cheaper-not-better: IBM’s X-Force study had ChatGPT generate a phishing email in 5 minutes vs ~16 hours human, but the AI version pulled an 11% click rate against humans’ 14% [50][51]. Two years later that gap inverted.

Study / report Finding Interpretation
Hoxhunt (Mar 2025), 70k simulations AI agent “JKR” 24% more effective than elite human red teams; 55% relative improvement vs 2023 AI passes human on persuasion [52]
Microsoft Digital Defense Report 2025 AI-generated phish: 54% click rate vs 12% manual (4.5×); up to 50× more profitable The headline number for the talk [54][55]
Microsoft MDDR 2025 ClickFix social-engineering became #1 initial-access technique, 47% of Defender Expert cases The volume side [55]
Hoxhunt 2026 Phishing Trends AI-phish 4% (Nov 2025) → 56% (Dec 2025) of reported emails — 14× surge The cliff edge [53]
Verizon DBIR 2025 Pretexting (BEC core) nearly doubled YoY; 30% of social-engineering The BEC pipeline [62]
FBI IC3 2024 BEC: $2.77B / 21,442 incidents The bottom line [3]

Attacker tooling matured in lockstep:

Tool Pricing Notes
WormGPT 4 $50/mo or $220 lifetime, source incl. Telegram + DarknetArmy launch ~Sept 2025; 500+ subs [56]
WormGPT (Grok/Mixtral wrappers) ~€60 on BreachForums Jailbroken commercial LLMs in a wrapper [57]
KawaiiGPT Free on GitHub, v2.5 Entry-level malicious LLM, identified Jul 2025 [58]
FraudGPT, DarkGPT, WolfGPT, PoisonGPT Various Darkforum-distributed [58]

The agentic frontier is now operational, not theoretical:

  • Anthropic’s Nov 2025 disclosure (a watershed for this talk): a Chinese state-sponsored group used Claude Code to autonomously target ~30 global organisations, with AI doing 80–90% of the campaign, humans intervening at 4–6 critical decision points [59].
  • OpenAI banned North Korean Emerald Sleet accounts using ChatGPT to draft multilingual spear-phishing for APAC defense targets [60].
  • Symantec demonstrated (Feb 2025) that OpenAI’s Operator agent could be prompted to identify a person, find their email, write a PowerShell payload, and send a convincing lure end-to-end [61].

The supply chain: $40B/year of industrialised forced-labor scam compounds

This is the part most non-technical audiences don’t know, and it changes the moral framing. The “Nigerian prince” mental model is two decades stale. The bulk of romance, pig-butchering, investment, and increasingly sextortion fraud now flows through industrial-scale compounds in Cambodia, Myanmar, and Laos.

  • Scale. UNODC’s April 2025 Inflection Point report estimates Mekong-region scam centres generate ~US$40B/year in profits [63]. UN reporting documents torture, rape, and forced labor inside the centres, staffed by trafficked workers from 50+ countries [64]. The 2024 US TIP Report kept Cambodia and Burma on Tier 3, citing tens of thousands of forced-criminality victims and senior-official complicity [79].
  • Crypto rails. Chainalysis tracked ~40% YoY pig-butchering revenue growth, a 210% jump in deposits to those scams in 2024 [12], and a 1,900% CAGR for AI service vendors on the Huione marketplace 2021–2024 [67]. Crypto-scam revenue hit $14–17B in 2025 [69]. TRM Labs testified to Congress that FinCEN-designated Cambodian Huione Group received $39.6B in transaction volume; Chinese-language laundering networks processed $103B in 2025, up from $123M in 2020 [14].
  • The Wang Xing inflection point. Chinese actor Wang Xing was lured to Mae Sot on a fake film audition on 3 Jan 2025, kidnapped into Myawaddy, and rescued on 7 Jan [70]. The case sparked a Chinese-tourism boycott; on 5 Feb 2025 Thailand cut electricity, internet, and fuel to Myawaddy compounds, and 12–17 Feb Myanmar repatriated 261 victims and detained 1,303 foreign nationals [71].
  • Enforcement. INTERPOL’s 2024 anti-trafficking-fuelled-fraud operation across 116 countries arrested 2,500+ [65]; Operation Shadow Storm (2026) cited $11B in compound-linked crypto flows [66]. OFAC sanctioned the Karen National Army and leader Saw Chit Thu (May 2025) [73] and the DKBA + Trans Asia/Troth Star network behind KK Park and Huanya (Nov 2025), citing $10B in 2024 American losses, +66% YoY [72]. October 2025: US prosecutors seized $15B in crypto from Chen Zhi’s Cambodian forced-labor camps — dwarfing the prior $225M record [81].
  • AI amplification. Real-time face-swap apps like Haotian (50 tunable facial settings) drove a 700% Q1-2025 spike in US deepfake fraud and >$200M in Q1 losses [77][78] — these are the tools used inside the compounds during video calls.
  • Sextortion + nudify-app combo. NCMEC’s 2025 CyberTipline received 21.3M reports; online-enticement rose 156% YoY to 1.4M; >1.5M reports linked to generative AI [74]. Since 2021 NCMEC has tracked at least 36 teenage boys who died by suicide after sextortion [75]. The Jordan DeMay case — a Marquette teenager who killed himself within hours of being extorted for $1,000 he didn’t have — yielded 17.5-year sentences for two Nigerian brothers, Samuel and Sampson Ogoshi [76]. The FBI’s earlier 322% sextortion spike (Feb 2022 – Feb 2023) was already alarming; AI “nudify” apps are the now-dominant 2024–26 escalation, and victims are overwhelmingly boys aged 14–17 [80].

For the talk: the audience needs to know the average voice on the other end of a romance scam is itself a victim — UNODC documents trafficked workers from 50+ countries forced to run the calls under threat of violence [63][64]. That reframes the problem from “stupid grandma” to organised transnational forced-labor crime.

What actually works in 2026 — four layers, none sufficient alone

Layer What it is Where it bites Where it fails
Provenance C2PA / Content Credentials cryptographically sign content at capture or generation 6,000+ C2PA members [82]; Adobe, OpenAI/DALL-E 3 [84], Pixel 10 hardware-signed every photo, Sony PXW-Z300 native [83] ⚠ Nikon Z6 III firmware suspended after a signing-key vulnerability forced certificate revocation [83]; only labels what’s signed, not what isn’t
Regulation EU AI Act Article 50 deepfake-disclosure (in force 2 Aug 2026); TAKE IT DOWN Act; ELVIS Act; CA AB 1836 Article 50 forces deployer disclosure at first exposure, fines up to €15M / 3% turnover [85][86]; EC Code of Practice published 17 Dec 2025 [87]; TAKE IT DOWN forces 48-hr removal of NCII deepfakes by 19 May 2026 [88]; Tennessee ELVIS Act protects voice [89]; CA AB 1836 protects deceased performers [90] Doesn’t reach offshore compounds; satire/art carve-outs
Detection Reality Defender, Pindrop, Hive, Sensity, Truepic — vendor models Pindrop 96.4% on NPR audio benchmark [25]; Reality Defender +8% balanced accuracy on video [48]; Norton ships on-device deepfake scanning Jul 2025 [94] ⚠ Peer-reviewed: 10–15% accuracy drop cross-dataset; XCeption 89.2% → 85.7%, MDD AUC 0.998 → 0.674 against unseen generators [91][92]; GAN-trained detectors fail on diffusion outputs
Human protocols Out-of-band callback; family/codeword/safe-word; dual-officer transfer sign-off FBI IC3 PSA explicitly recommends out-of-band verification [93]; post-Arup industry consensus is mandatory dual-officer counter-signature on any video-call money instruction [95]; LastPass deflected attack purely because of channel mismatch [18] Discipline-dependent; a tired CFO is the failure mode

Platform-side, labelling exists but is voluntary and patchy. TikTok auto-labels via C2PA + watermark + detection and has flagged 1.3B AI videos, though AI-written captions/hashtags are exempt [98]. Meta replaced its “Made with AI” label with “AI info” in July 2024, driven by self-declaration and partner metadata [99]. And the defender’s own AI is now a phishing surface: in 2025 a prompt-injection flaw in Gmail’s Gemini “summarize this email” feature smuggled invisible white-on-white instructions through, producing fake Google security alerts inside the AI summary itself [96].

The “liar’s dividend” — bad actors dismissing real footage as deepfake — is now a recognised democratic harm. The Brennan Center, citing Hany Farid, argues that without ubiquitous provenance infrastructure the dividend grows; rapid authentication is the practical counter [97].

Concrete advice the talk can leave on the screen

For a non-technical room, three rules survive contact with reality:

  1. Family codeword. Pick one with your parents, partner, and kids. If a “distressed family member” calls and can’t say it, hang up and call the known number. The data is unambiguous: FBI logged $893M in AI-scam losses in 2025 ($352M from victims 60+); AARP puts the average voice-clone victim at 74 with ~$18,000 lost [30]; McAfee finds 70% of people can’t tell a clone from the real voice [26].
  2. Out-of-band callback for any money instruction received via video, voice, or chat. This is the one rule the LastPass employee followed [18], the Arup employee did not [27], and the IC3 explicitly recommends [93]. For organisations: dual-officer counter-signature on every transfer, always [95].
  3. Assume the romance is a forced-labor compound, not a person. UN: ~$40B/year industry, 50+ countries’ citizens trafficked in [63][64]. Crypto requests = stop. Reverse-image search the profile photo. Insist on a live, unscheduled video call — and even then, knowing Haotian-class face-swap exists [78], treat that as one signal, not proof.

The bigger frame for the audience: defenses now layer rather than solve. Provenance + regulation + detection + human protocols, each imperfect, get you to “good enough” — provided you’re disciplined enough to use them. The AI security story in 2026 is less “the machines outsmarted us” and more “the economic floor under fraud collapsed, and only people who slow down get to keep their money.” The talk should land that as the takeaway.

Citations · 99 sources

Click the Citations tab to load…