The single decision. Keep the title “ProbLLMs: Why You Can’t Trust the Robot” [1], cut the existing deck roughly in half, and graft in four layperson-relevant angles with strong 2026 evidence. Everything below explains why each move survives contact with what the audience actually fears.
One thread runs through all eight angles
AI didn’t invent any of these threats; it removed the friction. Phishing emails written by AI hit a 54% click rate vs 12% for human-written ones [2]. Deloitte projects gen-AI fraud at $40B by 2027 [3]. Engagement-ranked feeds were already a manipulation engine; AI just makes every step cheaper and more personalised, and trust in news has fallen 20 points in a decade as a result [4]. That single framing, the same threat with the friction removed, is the spine the layperson edition should hang on.
What the audience polls actually fear, ranked
Public-concern data lines up cleanly with three of the four grafts. Hallucinations top Pew at 66% concern [5]; deepfakes and misinformation top YouGov [6]. The FBI reports $893M in AI-fraud losses in 2025, $352M of it against adults 60+ [7]; voice-clone scams are the one AI risk laypeople fear personally. NCMEC reports of AI-generated CSAM jumped from 4,700 in 2023 to 440,000 in H1 2025 [8], and the FTC opened a Sept-2025 inquiry into seven chatbot companies over kids and companion-AI harms [9]. By contrast, the existing deck’s Model Context Protocol/CVE/OWASP-LLM01 sections have no consumer surface [10] and don’t appear in any 2025 public-concern poll. Cut them.
The contradictions worth flagging on stage
Two findings cut against the doom narrative; admit them rather than hide them. First: the 2024 “election deepfake apocalypse” mostly didn’t land [11]. The damage was targeted, not viral: Slovakia 2023, the Biden NH robocall ($6M FCC fine [12]), and the Arup $25M deepfake video call [13]. Second: the headline AI privacy fines have mostly been overturned (Italy’s €15M against OpenAI was scrapped in March 2026 [14]); the legal wins are for piracy (Anthropic’s $1.5B settlement [15]), not privacy. So the practical advice for the audience is workflow-level, not “the law will protect you”: don’t paste secrets into chatbots, opt out where the toggle exists, and assume “share” links are public [16].
The unified recommendation
Open with a humor anchor (the DPD chatbot swearing at a customer [17]) → seat the “confident liar” mental model with hallucination stories (Mata v. Avianca [18], Apple Intelligence’s BBC summaries [19]) → escalate to deepfakes and voice clones with the Brightwell-mom and Arup cases → pivot from “AI fooling you” to “AI being fooled” with the resume-injection demo [20] → close with the four out-of-band defenses that actually work (family codeword, callback verification, dual-officer sign-off on video-call money instructions [21]). Six 10-minute attention chunks, ~24 slides total; each chunk carries one concrete story, not a taxonomy.
The open question the talk should leave hanging: when ChatGPT’s $100M-run-rate ad business [22] becomes a $10B one, will any of these defenses still work, or does the “confident liar” become the most lucrative ad surface ever built?