TL;DR — the eight subscriptions that earn the click in 2026. Andrej Karpathy [9] for foundations. Yannic Kilcher [1] for paper rigor. AI Explained [14] as the hype filter. Dwarkesh Patel [53] and Latent Space [55] for the architects-of-AI long form and AI-engineer canon. Fireship [24] as the weekly “what shipped” tax. IndyDevDan [39] for Claude-Code agentic engineering. Stanford CS25 [68] and MIT 6.S191 [71] for the yearly lab refresh. Everything below adds a specific niche on top.
Pick by purpose
| If you want… | Subscribe to |
|---|---|
| One-shot foundations / “what is a transformer” | Karpathy Zero to Hero [9], 3Blue1Brown [3] |
| Rigor on this week’s paper | Yannic Kilcher [1], Tunadorable [7], Umar Jamil [8] |
| Frontier-model analysis without hype | AI Explained [14], AI Daily Brief [19] |
| Weekly “what shipped this week” | Fireship [24] |
| Working-engineer opinions | ThePrimeagen [26], Theo [27], ArjanCodes [30], Hussein Nasser [28] |
| Architects-of-AI interviews | Dwarkesh Patel [54], Latent Space [55], MLST [56] |
| Build-with-LLMs how-to | IndyDevDan [40], Cole Medin [42], Sam Witteveen [44] |
| Yearly canonical lecture drop | Stanford CS25 [68], MIT 6.S191 [71], CS224N [73] |
What changed since 2024
Three shifts shape this list. First, the AI-news YouTube category is now packaging-driven — channels like Wes Roth openly admit their “outrageous” thumbnails are clickbait that “work”, which makes the “hype filter” tier (AI Explained, AI Daily Brief) disproportionately valuable [17][14]. Second, an entire new tier emerged in 2024-2026: agent-builders shipping Cursor / Claude Code / n8n pipelines on camera with companion repos [40][41] (the flagship observability repo sits at ⭐ 1.4k). Third, the long-form interview format consolidated around Dwarkesh, Latent Space and MLST as the technically-deep tier. Lex remains the public-discourse heavyweight [64][52].
AI research & paper explainers
The technical-depth tier has stayed small. Yannic Kilcher (~280K subs) does 45+ minute paragraph-by-paragraph paper walkthroughs and pushes back when methods don’t justify hype [1][2]. 3Blue1Brown (2.2M subs, 50M+ views on the deep-learning series) is the canonical visuals-first foundation course [3]. Andrej Karpathy’s Neural Networks: Zero to Hero builds NNs from backprop through GPTs in code and is the single most-recommended free DL course [9][65]; his 4-hour “Let’s reproduce GPT-2 (124M)” [67] and 3h31m “Deep Dive into LLMs like ChatGPT” [66] are the 2024-2025 supplements.
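To make “builds NNs from backprop” concrete: the course’s first milestone is a scalar autograd engine (micrograd). The sketch below shows only that core pattern (each operation records its inputs and a closure that propagates gradients); it is an illustration in the spirit of the course, not Karpathy’s actual code.

```python
class Value:
    """Scalar value with reverse-mode autodiff, micrograd-style."""

    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # how to push gradient to children
        self._prev = set(_children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))

        def _backward():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1

        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))

        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a

        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, visited = [], set()

        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)

        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()


a, b = Value(2.0), Value(-3.0)
loss = a * b + a       # -4.0
loss.backward()
print(a.grad, b.grad)  # -2.0 (= b + 1), 2.0 (= a)
```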
The implementation-from-scratch sub-tier — frequently grouped on AI-researcher Twitter — is Umar Jamil (Transformer / Diffusion / LLM in PyTorch with public slides + GitHub code) [8], Tunadorable (paper review, implementation, Triton kernels) [7], and bycloud (~185K subs, technical LLM-research summaries plus the AI Timeline newsletter and findmypapers.ai paper-search engine) [6][21].
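The signature exercise of this sub-tier is re-deriving scaled dot-product attention on camera. A minimal single-head, causally-masked PyTorch sketch of that block (illustrative; not any one creator’s exact code):

```python
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq, d_model); w_*: (d_model, d_head) projection weights."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_head = q.size(-1)
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_head)) V
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5      # (batch, seq, seq)
    # GPT-style causal mask: a position may not attend to future tokens.
    seq_len = scores.size(-1)
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v                  # (batch, seq, d_head)

x = torch.randn(1, 4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
print(causal_self_attention(x, w_q, w_k, w_v).shape)      # torch.Size([1, 4, 8])
```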
| Channel | Best for | Cadence | Signal |
|---|---|---|---|
| Yannic Kilcher [1] | Paper rigor, math/method critique | Weekly-ish | High |
| 3Blue1Brown [3] | Visuals-first foundations | Sporadic | High |
| Karpathy [9] | NNs from scratch in code | Rare, evergreen | Highest |
| Umar Jamil [8] | From-scratch PyTorch transformers | Periodic | High |
| Tunadorable [7] | Paper review + Triton kernels | Periodic | High |
| bycloud [6] | Short technical paper digests | Weekly | High |
| AI Coffee Break (Letitia) [5] | NLP/CV/multimodal explainers | Twice monthly | High |
| Welch Labs [10] | The “why” of the math | Sporadic | High |
| Computerphile [12] | General-audience CS depth | Weekly | High |
| Two Minute Papers [4] | ⚠ Hype-prone — formulaic fake-excitement called out on HN | Weekly | Skip |
AI news & discourse — signal vs. hype
The AI-news tier stratifies cleanly along a signal-vs-packaging axis. AI Explained (Philip, ~400K subs) is the consensus “hype filter”: meticulous research-backed analysis whose newsletter (literally titled Signal to Noise) is reportedly read by staff at OpenAI, Microsoft and DeepMind [14][15][13]. The AI Daily Brief with Nathaniel Whittemore (~134K YouTube subs, well-rated on Apple Podcasts) ships 10–25-minute daily episodes that “prioritize depth over sensationalism, consistency over viral moments” — the other clear analyst pick [19][20].
Matt Wolfe (~925K subs, FutureTools.io, ~250K newsletter) is the best generalist-aggregator if you only subscribe to one — telling viewers “which three [tools] actually matter” out of 50 weekly launches [16][51]. Matthew Berman (~480K subs, ~4 vids/wk, 57M views) covers AI news, model comparisons, and generative-art topics — useful for tracking releases, but lighter on actually-built agents than the IndyDevDan / Cole Medin tier [49]. Wes Roth (~293K subs) is the canonical split-reputation case: he openly admits his thumbnails are “outrageous” clickbait that “work”, while his on-camera discourse is “measured and chock full of facts” — useful, but the packaging triggers hype-skepticism [17][18]. MattVidPro and AI Search sit in the broader generalist/tool-review tier — recommended in beginner-accessible roundups but absent from analyst-tier shortlists [22][23].
| Channel | Subs | Tier | Notes |
|---|---|---|---|
| AI Explained [14] | ~400K | Analyst | Hype filter; Signal-to-Noise newsletter ✓ |
| AI Daily Brief [19] | ~134K | Analyst | Daily 10–25 min, depth over virality ✓ |
| bycloud [6] | ~185K | Technical | Cutting-edge paper digests |
| Matthew Berman [49] | ~480K | Analyst-lite | News + model tests, ~4/wk |
| Matt Wolfe [16] | ~925K | Generalist | Best single-pick aggregator |
| Wes Roth [17] | ~293K | Mixed | ⚠ Clickbait thumbnails, fact-dense talk |
| MattVidPro [22] | — | Tool reviews | Outside analyst shortlists |
| AI Search [23] | — | Beginner | Outside analyst shortlists |
Software engineering & programming
The 2026 community consensus splits cleanly by purpose. Fireship (Jeff Delaney, ~4.1M subs, ~0.75% monthly growth) is the unanimous default for staying current via 100-second explainers and the weekly Code Report [24][25]. ThePrimeagen (~484K, ex-Netflix, two channels covering Vim/Neovim, Rust, TypeScript and performance) and Theo Browne / t3.gg (~492K by late 2025, opinionated TypeScript/React/full-stack commentary) are the most-cited working-engineer picks [26][27].
For backend depth, Hussein Nasser (~445K) is the consensus pick — databases, proxies, networking protocols, kernel-level optimizations [28]. Dave Farley’s Continuous Delivery is the senior pick for architecture and team practice, often featuring Kent Beck, Sam Newman, Kevlin Henney and Daniel Terhorst-North rather than tooling tutorials [29]. ArjanCodes is the recommended Python channel for design patterns and code architecture, not syntax [30]. Web Dev Simplified (1.7M+, Kyle Cook) and Ben Awad (~300K, React/GraphQL/TypeScript, including the famous 14-hour fullstack tutorial) hold the front-end tutorial slots [31][32]. Coderized publishes rare but highly-polished principled-coding essays [33]. Coding Train (Daniel Shiffman, NYU ITP) remains canonical for creative coding [34].
For system design specifically, ByteByteGo (1.37M, Alex Xu), Hello Interview (called the ‘goat prep’ in Reddit/Blind threads), Jordan Has No Life (staff-level distributed systems) and Arpit Bhayani round out the 2026 picks alongside Hussein Nasser [36]. The web-dev top-10 also includes Kevin Powell, Jack Herrington, Net Ninja, Traversy Media and ByteGrad [37]. For senior-IC career content, A Life Engineered (Steve Huynh, ex-Amazon Principal) is the most-cited [38]. HN’s advanced-programming thread additionally champions Jon Gjengset (Rust), Jacob Sorber (C/systems), and the ACM SIGPLAN talks archive [35].
| Channel | Subs | Best for |
|---|---|---|
| Fireship [24] | 4.1M | Weekly currency, 100-second explainers |
| ThePrimeagen [26] | 484K | Vim, Rust, performance opinions |
| Theo (t3.gg) [27] | 492K | TypeScript, React, full-stack |
| Hussein Nasser [28] | 445K | Databases, networking, backend depth |
| Continuous Delivery [29] | — | Architecture, senior practice |
| ArjanCodes [30] | — | Python design patterns |
| Web Dev Simplified [31] | 1.7M | JS/CSS/React-focused tutorials |
| Ben Awad [32] | 300K | React/GraphQL/TS long-form |
| ByteByteGo [36] | 1.37M | System design (Alex Xu) |
| Hello Interview [36] | — | System-design interview prep |
| Coderized [33] | — | Polished essays on coding principles |
| Coding Train [34] | — | Creative coding (p5.js, generative art) |
| A Life Engineered [38] | — | Senior-IC career trajectory |
| Jon Gjengset [35] | — | Advanced Rust |
| Jacob Sorber [35] | — | Short, dense C / systems explainers |
Build-with-AI / agentic-dev wave
The 2024-2026 wave splits by what the creator does on camera. IndyDevDan ships hands-on Claude Code agentic-engineering content — sub-agents, hooks, multi-agent observability, custom Claude Code SDK agents — explicitly framed as “anti-hype” tactical engineering [39][40]; the companion repo disler/claude-code-hooks-multi-agent-observability [41] ⭐ 1.4k confirms the demos are runnable artefacts rather than slideware. Cole Medin covers AI agents, RAG, local LLMs, pragmatically mixing code with no-code (LangChain, LangGraph, n8n, Ollama, Supabase) and has actively pushed past surface “vibe coding” into context engineering for production agents [42][43].
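For a concrete sense of what “context engineering” means at this tier, here is a minimal retrieve-and-pack sketch. `embed` is a hypothetical stand-in for whatever embedding model the pipeline uses (Ollama, OpenAI, etc.); nothing here is Cole Medin’s actual pipeline:

```python
import math

def embed(text: str) -> list[float]:
    """Hypothetical stand-in: swap in a real embedding model here."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def build_context(query: str, docs: list[str], k: int = 3) -> str:
    """Rank docs by embedding similarity, pack the top k into the prompt."""
    q_vec = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    # Context engineering in miniature: the model sees only what is packed here.
    snippets = "\n---\n".join(ranked[:k])
    return f"Answer using only this context:\n{snippets}\n\nQuestion: {query}"
```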
Sam Witteveen is the framework-walkthrough specialist — CrewAI, SmolAgents, Pydantic AI, LangGraph, Google’s A2A protocol, Microsoft’s Magentic-One — backed by 11 years in DL and agent work since early 2023 [44][45]. AI Jason (Jason Zhou) is the practical builder channel for AI agents, LangChain, Auto-GPT and multi-agent frameworks; he runs a 1,200+ member paid AI Builder Club community (co-built with Inflect Labs) covering Cursor, MCP and agent courses — evidence of educator standing rather than creator status alone [89][90][91]. Mervin Praison is high-signal for his own framework MervinPraison/PraisonAI [46] ⭐ 7.0k but self-referential. Riley Brown owns the vibe-coding-with-Cursor-voice niche aimed at non-coders [47]. David Ondrej (Planet AI, ~321K subs) leans monetisation-pitch over engineering depth [48]. For n8n/Make automation specifically, Nate Herk and Nick Saraev are the recommended builders; Greg Isenberg is AI-startup-ideas, not on-camera engineering [50]. Other builder channels worth knowing: VoloBuilds, AI Code King, AI Foundations [50].
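Every framework named above wraps the same core pattern: a loop in which the model either answers or requests a tool call. A framework-agnostic sketch, with `call_llm` as a hypothetical stand-in for any chat-completions client:

```python
import json

# Toy tool registry; the channels above wire these to MCP servers, APIs, shells.
TOOLS = {"add": lambda args: str(args["a"] + args["b"])}

def call_llm(messages: list[dict]) -> dict:
    """Hypothetical stand-in for a chat-completions client. Expected to return
    {"content": ...} for a final answer, or
    {"tool": name, "arguments": json_string} to request a tool call."""
    raise NotImplementedError

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" not in reply:          # model produced a final answer
            return reply["content"]
        result = TOOLS[reply["tool"]](json.loads(reply["arguments"]))
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"
```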
| Channel | Tier | Speciality |
|---|---|---|
| IndyDevDan [39] | Top — engineer | Claude Code agentic engineering, hooks, sub-agents ✓ |
| Cole Medin [42] | Top — engineer | RAG, agents, context engineering, LangChain/n8n ✓ |
| Sam Witteveen [44] | Framework specialist | CrewAI, SmolAgents, Pydantic AI, A2A |
| AI Jason [89] | Builder-educator | Agents, LangChain, multi-agent frameworks; runs paid Builder Club ✓ |
| Riley Brown [47] | Designer-friendly | Voice-driven Cursor builds for non-coders |
| Mervin Praison [46] ⭐ 7.0k | Framework author | PraisonAI tutorials — useful but self-referential |
| Matthew Berman [49] | News-aggregator | Model comparisons, lighter on agents built |
| David Ondrej [48] | Monetisation | ⚠ “Make money with AI” framing |
| VoloBuilds / AI Code King / AI Foundations [50] | Builder-adjacent | Modern AI dev patterns / no-code |
Long-form interview podcasts on YouTube
Dwarkesh Patel runs the breakout technical-interview show — biweekly, 60–120-minute interviews with “exhaustive preparation, technical depth, and willingness to challenge premises”, earning him the description of “the primary oral historian of the artificial intelligence revolution” [53]; 2025-26 episodes feature Karpathy, Sutskever, Nadella, Zuckerberg, Hassabis and Amodei, and one widely shared profile says he “rose from nowhere to become Silicon Valley’s favourite podcaster” [54]. Latent Space (swyx & Alessio) is the AI-engineer canonical reference: a top-10 US Tech show that hit 10M+ readers/listeners in 2025, whose episodes “become reference material passed between engineering teams” [55]. Machine Learning Street Talk (Tim Scarfe, Keith Duggar) is positioned as “the highest-rated technical AI podcast on Spotify” with the deepest research/cognitive-science framing — explicitly “not for the faint of heart” [11][56].
The broader-audience tier: Lex Fridman remains the heavyweight for 3–4-hour researcher conversations; his 2025 Amodei and Hassabis episodes “shaped public discourse” [52]. ⚠ The counter-reading is well-corroborated, not fringe: Helen Lewis writes that Fridman “does not maintain even a thin veneer of journalistic detachment”, The Verge’s Elizabeth Lopatto calls him a “softball interviewer” [84]; HN threads since 2020 flag the same pattern — “Lex seems to never do any research on the guest… it’s all un-interesting, softball questions that don’t challenge” [85][86]; a long-form critique notes Fridman “didn’t follow up on technical details about model improvements and would rather talk about whether the model can feel love” in the Altman interview [87]; and industry commentary observes that powerful guests treat the show as “a friendlier alternative to journalism” offering “narrative control without accountability” [88]. Treat it as a primary-source archive, not analysis. No Priors (Sarah Guo & Elad Gil) is the founder/investor lens, weekly, with grounded forecasts on agents and continual learning [59]. Cognitive Revolution is biweekly with builder/live-player analysis, hosted by Nathan Labenz (not the AI Daily Brief’s Nathaniel Whittemore, an easy mix-up) and Erik Torenberg [57].
BG2 (Brad Gerstner & Bill Gurley) lands CEO-tier guests like Jensen Huang, Satya Nadella and OpenAI’s Nick Turley for markets/capital framing [58]. a16z’s AI + a16z plus the annual Big Ideas series cover infrastructure and tooling from a deployed-capital perspective [61]. 20VC is more VC-news than AI-substance and draws mixed reviews [62]. Acquired remains best-in-class for tech-history (1M+ listeners/episode, monthly multi-hour case studies) [60]. Patrick Collison is best treated as a high-signal guest (Dwarkesh, a16z, Cursor) rather than a host of his own regular channel [63]. 2026 industry roundups consistently cluster Dwarkesh, Latent Space, MLST, No Priors and Cognitive Revolution as the top tier for engineers and founders [64].
| Show | Cadence | Depth tier | Best for |
|---|---|---|---|
| Dwarkesh Patel [53] | Biweekly | Top technical | Architects-of-AI long form ✓ |
| Latent Space [55] | Weekly | Top technical | AI-engineer canon ✓ |
| Machine Learning Street Talk [56] | Periodic | Top technical | Research/cognitive-science depth ✓ |
| Cognitive Revolution [57] | Biweekly | Builder | Live-player analysis at the frontier |
| No Priors [59] | Weekly | Investor | Founder/agent forecasts |
| Lex Fridman [52] | Weekly | Public discourse | 3–4 hour heavyweight conversations |
| BG2 [58] | Biweekly | Markets | CEO-tier guests, capital framing |
| AI + a16z [61] | Periodic | Investor | Infra, dev tooling, agentic interface |
| Acquired [60] | Monthly | History | Multi-hour company case studies |
| 20VC [62] | Several/wk | VC | ⚠ More VC-news than AI-substance |
Labs, lecturers & conferences
The lecturer tier centres on Andrej Karpathy: Neural Networks: Zero to Hero is an 8-lecture progression from micrograd backprop through MLPs, WaveNet, GPT and a custom tokenizer [9][65], extended in mid-2024 by the 4-hour “Let’s reproduce GPT-2 (124M)” build-from-scratch lecture [67] and in early 2025 by the 3h31m “Deep Dive into LLMs like ChatGPT” general-audience overview [66]. Sebastian Raschka complements this with a “Build a Large Language Model (From Scratch)” code-along plus 2026 commentary on RLVR, GRPO and reasoning LLMs [79]. Jeremy Howard / fast.ai still ships the free 8-lesson Practical Deep Learning for Coders on YouTube [80].
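Both curricula open the same way: train a tiny character-level model with cross-entropy before introducing any transformer machinery. A minimal bigram sketch of that first step (illustrative, not either author’s code):

```python
import torch
import torch.nn.functional as F

text = "hello world hello world"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

# A bigram LM is just a lookup table: current char -> logits over next char.
model = torch.nn.Embedding(len(chars), len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

for step in range(200):
    x, y = data[:-1], data[1:]          # predict each next character
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```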
The university tier delivers reliable yearly drops: Stanford CS25 Transformers United is on its sixth iteration in 2026 with rotating frontier speakers (Hinton, Vaswani, Karpathy) and a new V6 lecture posted in April 2026 [68][69][70]. MIT 6.S191 ships a fully refreshed playlist every January IAP, with the 2026 edition graded for credit [71][72]. CS224N (Manning, NLP) and CS231N (computer vision, Spring 2025 lectures live) remain canonical text/CV references [73][74]. DeepLearning.AI is the practitioner anchor, releasing free 1–2 hour short courses co-built with OpenAI, Anthropic, LangChain, Google and AWS [75].
The lab/conference tier covers Google DeepMind’s channel (Hannah Fry’s podcast returning in 2026, plus keynotes) [76], Simons Institute (full workshop talks including the 2026 Federated Learning program) [77], Microsoft Research’s Research Forum series [78] and Hugging Face’s course/community playlist [81].
The frontier-lab official channels behave very differently and only one is a reasonable subscription. Anthropic (@anthropic-ai, 569K subs / 168 videos as of April 2026) posts roughly weekly — short Opus 4.6 / Cowork / Claude Code / MCP demos plus longer interpretability and safety pieces like “What is AI reward hacking?” and “When AIs act emotional”; useful as a secondary follow but dominated by launch announcements [92][98]. OpenAI (@OpenAI, 1.94M subs / 525 videos) is overwhelmingly product-marketing — ChatGPT use-case ads, Codex spotlights, GPT-5.x demo clips, the OpenAI Podcast and replays of openai.com/live launch streams — high cadence, low signal-to-press-release ratio; treat as the official launch-replay archive, not a learning channel [93][96]. NeurIPS has no official YouTube channel for paper talks — orals, tutorials and invited talks live on nips.cc/virtual/<year> about a month after the event (“We do not use Whova for videos”), and even Sutskever’s 2024 Test-of-Time keynote surfaces on YouTube only as third-party reuploads [94][95][97]. Curated 2026 “best AI YouTube” lists targeting practitioners reflect this pattern — they lead with educator/explainer channels (Two Minute Papers, DeepLearning.AI, AI Explained, Yannic Kilcher, Dwarkesh) and do not single out OpenAI, Anthropic or NeurIPS [99].
r/MachineLearning consensus singles out Stanford CS229 as the most-rigorous free ML course (95+ mentions in the subreddit) [82], and the community-maintained dair-ai/ML-YouTube-Courses index [83] ⭐ 17k aggregates the canonical free ML lecture series across Stanford, MIT, DeepMind and CMU.
| Channel | Cadence | What it is |
|---|---|---|
| Karpathy [9] | Rare, evergreen | NNs from scratch + LLM deep-dives ✓ |
| Stanford Online — CS25 [68] | Quarterly drops | Transformers United seminar (V6 in 2026) ✓ |
| MIT 6.S191 [71] | Annual (January IAP) | Intro to Deep Learning, fully refreshed yearly |
| Stanford CS224N [73] | Annual | NLP with deep learning (Manning) |
| Stanford CS231N [74] | Annual | Deep learning for computer vision |
| DeepLearning.AI [75] | Weekly+ | Free short courses with OpenAI/Anthropic/etc. |
| Sebastian Raschka [79] | Periodic | “Build an LLM from Scratch” + reasoning LLMs |
| fast.ai [80] | Course-yearly | Practical Deep Learning, 8 lessons |
| Google DeepMind [76] | Periodic | Lab keynotes + Hannah Fry podcast |
| Simons Institute [77] | Workshop drops | Theory talks (2026 Federated Learning) |
| Microsoft Research [78] | Periodic | Research Forum talks |
| Hugging Face [81] | Periodic | Course + community talks |
| Anthropic [92] | Weekly | ✓ Secondary follow — interpretability + product launches |
| OpenAI [93] | High cadence | ⚠ Launch replays + marketing, not learning |
| NeurIPS [94] | — | ⚠ No official YouTube; talks on nips.cc/virtual instead |
| dair-ai/ML-YouTube-Courses [83] ⭐ 17k | Living index | Aggregator of canonical free ML video courses |
What to skip or actively de-noise
- Two Minute Papers. Once the canonical low-effort way to keep up with papers; HN regulars in 2025 said the formulaic “fake-excitement” pattern now overshadows actual algorithm explanation [4].
- “GPT-X JUST CHANGED EVERYTHING” reaction channels. The community frame is that “thousands of AI channels are recycling the same news with the same clickbait” [17] — AI Explained, AI Daily Brief, bycloud and Matthew Berman cover the same news with substantially more substance.
- “Make money with AI” framing. David Ondrej and the broader monetisation tier optimise the funnel, not the engineering [48]. Useful as market signal, low signal as practitioner content.
- 20VC for AI-substance. Strong VC reporting; mixed for technical AI [62].
Coverage gaps
- Direct numerical YouTube subscriber counts are uneven across this list — the analyst/aggregator tier is well-instrumented (Fireship, ThePrimeagen, Theo, Matt Wolfe, Berman); some lecturer / smaller lab channels report differently or update slowly.
- Direct subreddit-thread quotes for the AI Explained / AI Daily Brief analyst-tier rating could not be retrieved in this run (reddit.com search and WebFetch were unreliable); the consensus is corroborated through synthesis and aggregator pieces but a future pass should harden it with raw r/MachineLearning, r/singularity and r/OpenAI threads.
- The “skip” verdict on Two Minute Papers rests on a single substantive HN thread; a second venue criticising the channel’s drift was not surfaced here.
- Non-English channels (Chinese-language ML, European-language SWE, etc.) are out of scope and worth a dedicated pass.