SCOUT · ATLAS // BROADSIDE №01 · FILED 2026-04-28 · 282 CITES · 48 MIN · $28.69 · LAYMEN CUT
A Talk-Prep Broadside · Non-Technical Audience Edition

PROBLLMS

Why You Can't Trust the Robot.

Keep the title. Cut the deck in half. Graft in four 2026 angles — voice-clone scams, kids and AI companions, AI-injected ads, deepfake nudes — that show up in the polls the audience actually fills out.

The single decision

AI didn't invent any of these threats — it removed the friction. That single sentence is the spine the layman's edition should hang on. Phishing click-rate · 54% vs 12%.

SOURCE · Expedition synthesis · CHILD BRIEFS 8 · CITATIONS 282 · READ 48 min · RUN 60.5 min · $28.69 · FILED BY scout-researcher
// THE SPINE //
Same threat. Friction removed.
— canonical synthesis · expedition-depth
Threat 01 · phishing

54%

Click rate on AI-written phishing

Microsoft's red-team study: 4.5× more effective than human-written and far more profitable. The whole story of "AI in security 2026" is in this number — old threat, new fluency.

Versus 12% for human-written.

SOURCE · paubox.com
CLICKBAIT 2.0
Threat 02 · fraud horizon

$40B

Projected gen-AI fraud loss · by 2027

Deloitte's 2024 estimate. Voice clones, deepfake video calls, scripted scams. The losses are already measurable — and the curve is exponential, not linear.

SOURCE · deloitte.com
Deloitte deepfake banking-fraud banner · 2024
Threat 03 · the elderly

$893M

FBI-reported AI-fraud losses · 2025

Of which $352M against adults 60+. This is the one AI risk that laymen fear personally — and it's the one number the deck should put on the wall in 96-point type.

SOURCE · scamwatchhq.com
FBI IC3 report on AI-fraud losses against seniors · 2025
Threat 04 · the kids

440K

NCMEC AI-CSAM reports · H1 2025 alone

From 4,700 in all of 2023. That's a ~94× jump in 18 months. The FTC opened a Sept-2025 inquiry into seven chatbot companies over kids and companion-AI harms. Schools are seeing deepfake cyberbullying.

SOURCE · usnews.com · cnbc.com
94×

66%

Pew · top public concern is hallucinations
pewresearch.org

486+

Court cases sanctioning lawyers for AI-fabricated citations
— child brief · hallucination harms

−20pt

Drop in news trust over a decade · same period AI cheapened every step
pewresearch.org
Mental model · the confident liar

80%

Whisper hallucination rate · long medical transcripts

It invents dialogue. Mata v. Avianca: a lawyer cited cases that don't exist and was sanctioned. Apple Intelligence rewrote BBC headlines into nonsense. The robot is fluent and confident — confidence ≠ truth.

SOURCE · Mata v. Avianca · daringfireball.net
USDC SDNY court seal · Mata v. Avianca
The named case · ARUP

$25M

Lost on a single deepfake video conference call

Hong Kong, May 2024. A finance worker at engineering firm Arup joined what looked like a routine video call with the CFO and colleagues. Every face on the call was a deepfake. The transfer cleared in minutes.

If you remember one story for the deck, remember this one.

SOURCE · cnn.com
Arup deepfake scam · CNN coverage · May 2024

The arc — six chunks · ~60 minutes.

// concrete story per chunk · not one taxonomy //
01
Open · 10m

The chatbot that swore.

The DPD bot calls customers names. The audience laughs; you've earned the next nine minutes.

02
Seat · 10m

Confident liar.

Mata v. Avianca. Apple BBC. Whisper 80%. Confidence ≠ truth.

03
Escalate · 10m

Voice clones.

Brightwell mom. Arup $25M. FBI $893M. Personal, not theoretical.

04
Pivot · 10m

AI being fooled.

Resume injection. Confused-deputy. The defender becomes the surface.

05
Contradict · 10m

The honesty pause.

78 election fakes studied — apocalypse mostly didn't land. Earn trust.

06
Close · 10m

Four words to write down.

Codeword. Callback. Dual-officer. Don't paste.

The deck triage.

// surgical edit · half the slides go //
KEEP
— title + DPD opener + Mata · Apple · Whisper trio
  • "ProbLLMs · Why You Can't Trust the Robot" survives. It's funny, it's accurate, it lands without tech jargon.
  • The hallucination stories are the audience's mental model. Don't cut them.
CUT
— developer half · MCP · CVE · OWASP-LLM01
  • No consumer surface. Zero appearances in 2025 public-concern polls.
  • Beautiful for engineers. Wrong room.
GRAFT
— four 2026 angles the polls actually rank
  • + voice-clone scams (FBI $893M)
  • + kids and AI companions (NCMEC 4.7K → 440K)
  • + AI-injected ads ($100M run-rate, heading to $10B)
  • + deepfake nudes (school cyberbullying pipeline)

Defenses. Workflow-level.

// not "the law will protect you" //
01

Family codeword

Agree a phrase the scammer can't know. Voice-clone scams collapse the moment you ask for it.

02

Callback verification

Hang up. Call the known number back. Don't trust the number that called you.

03

Dual-officer sign-off

Money instructions on a video call require a second human, on a different channel. Period.

SOURCE · FBI IC3 PSA

04

Don't paste secrets

Assume "share" links are public. Opt out where the toggle exists. Logging is the default.

SOURCE · bitdefender.com

Two findings cut the other way. Admit them on stage.

The election deepfake apocalypse mostly didn't land.

Knight Columbia studied 78 fakes. The damage was targeted, not viral: Slovakia 2023, the Biden NH robocall ($6M FCC fine), the Arup video call. Don't oversell the apocalypse; overselling it undersells the targeted danger.

knightcolumbia.org

The headline AI privacy fines have mostly been overturned.

Italy's €15M against OpenAI was scrapped March 2026. The legal wins are for piracy (Anthropic $1.5B), not privacy. Practical advice for the audience is workflow-level, not regulatory hope.

tradingview/reuters
// the question to leave hanging //

When ChatGPT's $100M-run-rate ad business becomes a $10B one — does the "confident liar" become the most lucrative ad surface ever built?

The eight angles.

// expedition synthesis · child briefs //
01
Title candidates

Eight title candidates with a recommended pick and the patterns behind each.
recon · 7 cites

02
Existing deck triage

Slide-by-slide keep/trim/simplify/cut verdict on the AI-Security-Talk deck for a non-technical audience.
expedition · 77 cites

03
AI vs LLM and other foundational concepts

A non-technical primer on the AI/ML/deep-learning/LLM stack and the surrounding jargon.
survey · 17 cites

04
Scams, deepfakes, and social engineering in 2026

Field guide to the 2026 AI scam landscape: named cases, dollar losses, attacker tooling, and four defense layers.
expedition · 99 cites

05
Data harvesting and privacy

AI runs on harvested data. Defaults still leak; every prompt is logged unless you turn it off.
survey · 29 cites

06
Ads, manipulation, and trust erosion

The ad-funded internet was already a manipulation engine; AI cheapens every step.
survey · 26 cites

07
Misinformation, hallucination harms, and election/medical/legal risks

Where LLM hallucinations have caused measurable harm: courts (486+ cases), hospitals (Whisper 80%), elections.
survey · 20 cites

08
Presentation craft for non-technical audiences

A recon-depth playbook for explaining technical work to non-technical audiences.
recon · 7 cites
SUGGESTED OPEN · DPD chatbot · TIME