
High-signal long-form bloggers and newsletter authors for AI and software, 2026

A 2026 working list of long-form blogs and newsletters worth a slot in your reader — by lens (technical, applied, engineering, strategy) — plus what to drop and why.

81 sources · ~12 min read · ai · software · newsletters · blogs · reading-list · 2026

TL;DR. If you only have five slots: Simon Willison (near-daily, ground truth on every model release [[1]]), Sebastian Raschka’s Ahead of AI (monthly architecture deep-dives, 150k+ readers [[2]][[19]]), Nathan Lambert’s Interconnects (1–3×/week on open models and post-training [[15]]), Gergely Orosz’s Pragmatic Engineer (weekly, original eng-org survey reporting, paid [[25]]), and Ben Thompson’s Stratechery (daily strategy lens, paid [[51]]). Add Hamel Husain for evals and applied AI [[8]], Marc Brooker for distributed systems [[31]], Zvi Mowshowitz for rapid model + policy synthesis [[17]], and Ethan Mollick for “what to do with the thing” [[18]]. Cap at 2–3 newsletters in your inbox; the rest belongs in an RSS reader [[76]]. Drop anything unopened for 30+ days [[75]].

Why the 2026 list looks different

Several canonical names have gone quiet. Don’t burn a subscription slot on a dormant site:

| Source | Status | Last public post |
|---|---|---|
| Lilian Weng — Lil’Log | ⚠ dormant | May 2025 [[3]] |
| Chip Huyen — huyenchip.com | ⚠ stalled | January 2025 [[4]] |
| Christopher Olah — colah.github.io | ⚠ dormant | 2021; output → Anthropic interpretability [[10]] |
| Jay Alammar — jalammar.github.io | ✗ frozen | Moved to Substack [[9]] |
| Adrian Colyer — the morning paper | ✗ dormant | Feb 2021 [[36]] |
| Cindy Sridharan — copyconstruct | ✗ dormant | 2021/2022 [[37]] |
| Dan Luu — danluu.com | ⚠ slowed | Oct 2024 (Patreon may post earlier) [[38]] |
| Charlie Guo — Artificial Ignorance | ✗ news roundups discontinued | Joined OpenAI Jan 2026 [[21]] |
| Sander Dieleman — sander.ai | ⚠ very infrequent | April 2025 [[14]] |

Meanwhile, output has consolidated around a smaller set of practitioner-writers — many of whom converge through Applied LLMs (applied-llms.org), the Pragmatic Engineer, and Latent Space [[41]][[25]][[42]]. An April 2026 third-party ranking of tech/AI blogs corroborates the shift, with 23 of the top 25 entries being single-author publications rather than multi-author/corporate ones [[13]].

1. Long-form AI/ML technical bloggers

Personal blogs, deep posts, individual author. Confirmed active in 2026.

| Author | Site | Cadence | Distinctive angle |
|---|---|---|---|
| Simon Willison | simonwillison.net | Near-daily [[1]] | Hands-on tests of every new model the day it ships; “pelican on a bicycle” SVG benchmark; multiple posts per week through April 2026 |
| Sebastian Raschka | magazine.sebastianraschka.com | ~Monthly [[2]] | Visual architecture deep-dives — “A Visual Guide to Attention Variants in Modern LLMs” (Mar 2026), “Components of A Coding Agent” (Apr 2026) [[2]] |
| Andrej Karpathy | karpathy.github.io | Occasional [[7]] | Pedagogical: Feb 2026 “microgpt” — a 200-line dependency-free GPT trainer [[7]] |
| Eugene Yan | eugeneyan.com | Every 4–8 weeks [[5]] | 2–66 min essays on evals, RecSys, and the senior-IC career; Member of Technical Staff at Anthropic as of 2026 [[5]][[28]] |
| Vicki Boykis | vickiboykis.com | ~Biweekly [[6]] | Mixes ML infra (“Querying 3 billion vectors”, Feb 2026) with culture essays (“On Programming Joy and Octocat”) [[6]] |
| Hamel Husain | hamel.dev | Active 2026 [[8]] | Evals and applied AI engineering — “Evals Skills for Coding Agents” (Mar 2026), “The Revenge of the Data Scientist” (Mar 2026) [[8]] |
| Cameron R. Wolfe | cameronrwolfe.substack.com | Several times/week [[12]] | Deep (Learning) Focus — accessible research overviews for tens of thousands of readers [[12]] |
| Finbarr Timbers | artfintel.com | Active [[11]] | Artificial Fintelligence — read by 5,000+ researchers at OpenAI, DeepMind, Midjourney, Google [[11]] |

2. AI/ML newsletter authors

Substack/email ecosystem; mostly solo authors. Subscriber counts are author-disclosed where available.

| Author | Newsletter | Cadence / Audience | Lens |
|---|---|---|---|
| Nathan Lambert | Interconnects | 1–3×/week, 300+ posts; Ai2 post-training lead [[15]] | Open models, post-training, RLHF, frontier research |
| Jack Clark | Import AI | Weekly, ~70k readers [[16]] | Frontier-research synthesis; sharpened policy lens since the Anthropic Institute launch (Mar 2026) [[66]] |
| Zvi Mowshowitz | Don’t Worry About the Vase | Weekly news rollups + multi-part model breakdowns (GPT-5.5, Claude Opus 4.7) [[17]] | Speed + long-term world-model building |
| Ethan Mollick | One Useful Thing | Active; Wharton [[18]] | “How do I actually use this?” — work, education, applied [[18]] |
| Sebastian Raschka | Ahead of AI | 150k+ subs [[19]] | LLM architecture rigour; pairs with the personal blog [[19]] |
| Eugene Yan | eugeneyan.com newsletter | 11.8k+ subs [[28]] | RecSys + LLMs from Anthropic [[28]] |
| Rohit Krishnan | Strange Loop Canon | ~Weekly [[20]] | AI × economics × innovation [[20]] |
| Simon Willison | simonw.substack.com | Mirrors weblog [[26]] | “State of LLMs” updates, agentic engineering patterns [[26]] |
| swyx + Alessio Fanelli | Latent Space | 10M+ cumulative readers/listeners through 2025; 2026 thesis = “coding agents breaking containment” [[24]][[42]] | The AI Engineer beat |
| Gergely Orosz | The Pragmatic Engineer | Weekly, $15/mo or $150/yr; #1 software/AI eng newsletter on Substack [[25]] | Original survey reporting on engineering orgs and AI’s effect on them |
| Azeem Azhar | Exponential View | 100k+ subs [[27]] | 3–5 yr horizon; 2026 thesis = AI-as-workforce, advantage to orchestrators [[54]] |
| Andrew Ng / DeepLearning.AI | The Batch (deeplearning.ai/the-batch) | Weekly, team-run [[22]] | Industry + research roundup with Ng’s letter [[22]] |
| Ben’s Bites | bensbites.com | Daily, ~120k subs [[23]] | Builder/startup ecosystem [[23]] |

3. Long-form software-engineering bloggers

The non-AI core — systems, formal methods, career, observability. All confirmed active in 2026.

| Author | Site | Cadence | Distinctive angle |
|---|---|---|---|
| Julia Evans | jvns.ca | Steady [[29]] | Beginner-respecting deep posts on Unix/Git internals; March 2026 tcpdump+dig examples; January 2026 Git data-model series [[29]] |
| Marc Brooker | brooker.co.za | ~Every 2–3 weeks [[31]] | Distributed systems from an AWS principal-eng vantage; “SFQ: Simple Stateless Stochastic Fairness” (Feb 2026), “Pass@k is Mostly Bunk” (Jan 2026) [[31]] |
| Hillel Wayne | Computer Things | Returned Jan 2026 [[30]] | Formal methods, software philosophy, exotic tooling [[30]] |
| Will Larson | lethain.com | Near-weekly [[33]] | Engineering leadership + how agents reshape staff-plus work — “Agents as scaffolding for recurring tasks” (Apr 2026) [[33]] |
| Charity Majors | charity.wtf | Frequent [[34]] | Observability, SRE culture, management; “Bring Back Ops Pride” (Jan 2026) [[34]] |
| Brendan Gregg | brendangregg.com | Active [[32]] | Performance; joined OpenAI Feb 2026 to focus on ChatGPT performance [[32]] |
| Murat Demirbas | muratbuffalo.blogspot.com | Multiple posts/month [[35]] | Consensus, formal methods, databases, AI-for-systems benchmarks; deep BugBash’26 conference notes [[35]] |

Mostly talks, included for completeness: Kelsey Hightower has shifted from long-form posts to conference talks — at KubeCon Europe 2026 he framed the year’s lesson as “Everyone is a junior engineer when it comes to AI” [[39]].

4. Applied AI — the “builders’ cohort”

If you are actually shipping LLM features, these are the writers to read. Most converge through Applied LLMs and the Parlance Labs course network [[41]][[40]].

The original “What We Learned from a Year of Building with LLMs” essays were co-authored by Eugene Yan, Bryan Bischof, Charles Frye, Hamel Husain, Jason Liu and Shreya Shankar [[48]][[41]] — every name on that list runs a high-signal individual outlet:

| Author | Outlet | What they uniquely cover |
|---|---|---|
| Hamel Husain | hamel.dev + Parlance Labs | Evals, error analysis, LLM-as-judge methodology [[8]][[40]] |
| Shreya Shankar | sh-reya.com | Trustworthy LLM-as-judge research; O’Reilly Evals book ships Spring 2026 [[45]] |
| Eugene Yan | eugeneyan.com | RAG, fine-tuning, caching, guardrails, defensive UX [[28]] |
| Jason Liu | jxnl.co + Maven RAG Playbook | RAG-in-production; Instructor (6M+ monthly downloads) [[43]][[44]] |
| Bryan Bischof | O’Reilly: Year of Building with LLMs | Head of AI at Hex (the Magic copilot); co-author of the year-of-building essays [[48]] |
| Charles Frye | Modal | GPU/infra; “What every AI engineer needs to know about GPUs” [[47]] |

Adjacent must-reads in this lens:

  • swyx + Alessio Fanelli — Latent Space — the AI Engineer newsletter+podcast and AINews aggregator [[42]][[24]]
  • Phillip Carter — Honeycomb — observability-driven development for LLMs; O’Reilly book on observability for LLMs [[46]]
  • Hugo Bowne-Anderson — Vanishing Gradients — agents, evals, multimodal, data infra for builders [[49]]
  • Dan Becker — Mastering LLMs course with Hamel Husain — 40+ hours organized around evals, RAG, fine-tuning, prompts [[50]]
  • Simon Willison’s weekly Substack [[26]] — the highest-cadence hands-on commentator, fits this lens too

5. Strategy, business, and policy commentators

The “what does it mean” lens — split by sub-lens.

Tech-business strategy

| Author | Outlet | Notes |
|---|---|---|
| Ben Thompson | Stratechery | Paid; Daily Update + weekly Articles. 2026 essays apply Aggregation Theory to OpenAI/Anthropic — “Aggregators and AI”, “AI Power, Now and In 100 Years”, “AI and the Human Condition” [[51]][[52]] |
| Benedict Evans | ben-evans.com | Free, ~200k subs, ~50% open rate; sharply skeptical 2026 take that “OpenAI lacks moat, network effects or stickiness” [[55]][[56]] |
| Azeem Azhar | Exponential View [[53]] | 100k+ subs; 2026 is the year AI feels less like tools and more like a workforce [[27]][[54]] |

Platform / policy reporting

  • Casey Newton — Platformer — tone has visibly hardened toward urgency since 2024 (“Why I’m having trouble covering AI”) [[57]][[58]]
  • Jack Clark — Import AI + Anthropic Institute (launched Mar 2026) [[66]]
  • Helen Toner — CSET, Georgetown — interim Executive Director at CSET; go-to voice on China, evals, AI governance [[67]]

Macro / economist

  • Tyler Cowen — Marginal Revolution — frequent short posts plus a 40,000-word AI-augmented book in March 2026 (“Rise and Decline, and the Pending AI Revolution”) [[61]][[62]]
  • Noah Smith — Noahpinion — openly flipped on AI risk in 2026 (“Superintelligence is already here”, updated bioweapons concerns) [[63]][[64]]

Specialist lenses

  • Matt Levine — Money Stuff (free, Bloomberg) — the financial-AI lens: AI-debt, data-center financing, capital flows around AI [[65]]
  • Patrick McKenzie — Bits About Money + Complex Systems podcast — fintech-AI plumbing and the engineering economics of AI adoption [[68]]
  • Dwarkesh Patel — Dwarkesh essays — pushing harder into written essays in 2026, with a blog prize for big AI questions [[59]][[60]]
  • Thomas Ptacek — Fly.io — “My AI Skeptic Friends Are All Nuts” marked his public flip; continues writing on agents and security [[70]]
  • Matt Rickard — blog.matt-rickard.com — daily short-form strategy posts (“The Spec Layer”, “The Model is Not The Product”) [[69]]
  • Karen Hao — karendhao.com — leading critical-journalism voice on OpenAI labor, water/power footprint, and AGI as marketing narrative; author of Empire of AI [[71]]

6. How to triage in 2026 — the heuristics

| Heuristic | Source |
|---|---|
| Cap inbox at 2–3 newsletters. One daily briefing + 1–2 weekly deep dives, role-matched. The rest belongs in RSS. | [[76]] |
| RSS reader for everything else. RSS adoption was up 34% YoY in 2026; Substack hit 8.4M paid subs (+68%). Engineer-curated 2026 lists treat the presence of an RSS feed plus an OPML export as a hard prerequisite for a serious source. | [[74]][[77]] |
| Named author > corporate blog. HN engineers explicitly favour decade-plus individual bloggers (Julia Evans, Rachel by the Bay, Simon Willison, Bartosz Ciechanowski) over corporate posts. | [[72]] |
| No RSS feed = marketing-led. “If the blog doesn’t have RSS, you know they’re probably made from marketers with no input from engineering.” | [[72]] |
| Read the archive, not the latest issue. Archive depth is the load-bearing quality signal — five years of consistent posts > one viral hit. | [[77]] |
| Drop after 30 days unopened. The KonMari-style rubric — drop anything unopened 30+ days, duplicating social media, or off your current goals. Surveys find only 24% of received email is actually important. | [[75]] |
| Open rate is dead as a quality signal. Apple Mail Privacy Protection inflates ~75% of opens. Trust your own click-throughs and the “did I learn something” gut check instead. | [[74]] |
| The slop test. “AI slop” was a 2026 word-of-the-year [[78]]; HN banned AI-generated and AI-edited comments outright [[79]]; DoubleVerify documented a 200-site “AutoBait” LLM content-farm network in March 2026 [[80]]. Detection of formulaic LLM patterns is now a one-strike unsubscribe trigger. | [[78]][[79]][[80]] |
| River, not inbox. Current’s Terry Godier: “Email’s unread count means something specific… but when we applied that same visual language to RSS, we imported the anxiety without the cause.” | [[73]] |
| Re-evaluate inherited heuristics. Marc Brooker’s much-shared March 2026 post: many engineering rules are now wrong post-LLM; humility + active re-evaluation applies to your subscription stack too. | [[81]] |
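The "no RSS feed" test is mechanical, because sites that want to be read in a feed reader advertise their feed via a `<link rel="alternate">` tag in the page head (the RSS/Atom autodiscovery convention). A minimal stdlib sketch of the check, with illustrative names of my own (nothing here comes from a tool cited above):

```python
from html.parser import HTMLParser

# MIME types conventionally used for feed autodiscovery links.
FEED_TYPES = {"application/rss+xml", "application/atom+xml", "application/feed+json"}


class FeedLinkFinder(HTMLParser):
    """Collects feed URLs advertised via <link rel="alternate"> autodiscovery."""

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower()
        typ = (a.get("type") or "").lower()
        if rel == "alternate" and typ in FEED_TYPES and a.get("href"):
            self.feeds.append(a["href"])


def find_feeds(html: str) -> list[str]:
    """Return feed URLs declared in the given HTML, in document order."""
    parser = FeedLinkFinder()
    parser.feed(html)
    return parser.feeds
```

Run `find_feeds` on a fetched homepage: an empty result is the table's one-question smell test for a marketing-led blog.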

Starter packs by reader type

Decisions, not options. Pick one. Add later if you’re under-fed.

The frontier-research follower (5 slots). Simon Willison [[1]] · Sebastian Raschka [[2]] · Nathan Lambert [[15]] · Jack Clark [[16]] · Cameron R. Wolfe [[12]].

The shipping-engineer (5 slots). Hamel Husain [[8]] · Eugene Yan [[28]] · Jason Liu [[43]] · Latent Space [[42]] · Phillip Carter / Honeycomb [[46]].

The eng-leader / staff+ (5 slots). Pragmatic Engineer (paid) [[25]] · Will Larson [[33]] · Charity Majors [[34]] · Marc Brooker [[31]] · Julia Evans [[29]].

The strategist / exec (5 slots). Stratechery (paid) [[51]] · Benedict Evans [[55]] · Exponential View [[27]] · Platformer [[57]] · Matt Levine’s Money Stuff [[65]].

The “I want one big thinker who’ll have a take on everything” (1 slot). Zvi Mowshowitz, Don’t Worry About the Vase [[17]] — speed-premium short-term updates plus long-term world-model building, in one feed.
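A starter pack maps directly onto an OPML file, the import format nearly every RSS reader accepts. A sketch of generating one with the standard library; the feed URLs below are placeholders I chose for illustration, not verified endpoints:

```python
import xml.etree.ElementTree as ET


def build_opml(title, feeds):
    """Serialize a {name: feed_url} mapping into an OPML subscription list."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds.items():
        # One <outline> per subscription; readers key off type and xmlUrl.
        ET.SubElement(body, "outline", text=name, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")


# Illustrative frontier-research pack; feed URLs are assumed, not verified.
pack = {
    "Simon Willison": "https://simonwillison.net/atom/everything/",
    "Ahead of AI": "https://magazine.sebastianraschka.com/feed",
    "Interconnects": "https://www.interconnects.ai/feed",
}
print(build_opml("Frontier-research pack", pack))
```

Save the output as `pack.opml` and import it; swapping packs later is a file edit, not a re-subscription slog.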

What this list isn’t

It’s a 2026 working list, not a permanent record. Half the names will look different in 12 months: Charlie Guo went from independent to OpenAI inside a quarter [[21]]; Brendan Gregg moved from Intel to OpenAI in February [[32]]; Hillel Wayne returned after a hiatus [[30]]; Adrian Colyer hasn’t [[36]]. Re-run the list when an author’s incentives change or their cadence breaks. The point of an explicit reading stack isn’t permanence — it’s that you stop drift-subscribing and stop guilt-reading. Run a 30-day audit; cut what’s not earning its slot [[75]].
