Decision (Apr 2026). Two-plugin baseline: Smart Connections for local-first semantic retrieval + related-notes sidebar [1] [2], and Copilot for Obsidian for polished vault chat with Ollama or cloud models [4] [14]. Add Smart Composer if you want Cursor-style @file edits and one-click apply [7]. Pick Khoj instead if you need the same brain on phone/WhatsApp/browser, self-hosted [9]. For pure summarization, Text Generator templates [11] cover most workflows; the dedicated obsidian-ai-summary is only worth installing for weekly/monthly link-walking review notes [16].
## Landscape
The 2026 ecosystem has split into clear lanes — retrieval engines, chat-with-vault assistants, agentic surfaces, and template-driven generators [12]. Most users only need one from each of the first two columns.
| Lane | What it does | Top picks |
|---|---|---|
| Retrieval / discovery | Embeds every note locally, surfaces related ones in a sidebar, exposes a Lookup view | Smart Connections [2] |
| Chat with vault (RAG) | Conversational Q&A grounded in your notes via embeddings + LLM | Copilot [4], Smart Composer [7], Khoj [10] |
| Editor-grade writing assistant | @file mentions, Apply-Edit, inline rewrites | Smart Composer [8] |
| Template-driven generation | Reusable prompts (summary, outline, expand) tied to frontmatter | Text Generator [11] |
| Dedicated summarization | One-shot summarize commands with chunking | obsidian-ai-summarize [17], obsidian-ai-summary [16] |
| Agentic (read+edit+create) | Autonomous tool calls that modify the vault | Obsidian Chat / Vault AI Chat (early) [20] |
## Head-to-head: the four plugins worth installing
| | Smart Connections | Copilot for Obsidian | Smart Composer | Khoj |
|---|---|---|---|---|
| Repo | brianpetro/obsidian-smart-connections ⭐ 4.9k | logancyang/obsidian-copilot ⭐ 6.8k | glowingjade/obsidian-smart-composer ⭐ 2.2k | khoj-ai/khoj ⭐ 34k |
| Primary lane | Retrieval + related-notes [1] | Vault chat / RAG [4] | Editor + chat hybrid [7] | Cross-device second brain [9] |
| Local embeddings out of box | ✓ ships with on-device model [2] | ✗ optional; needs Ollama+nomic-embed-text or API [6] | ✓ via Ollama / LM Studio [7] | ✓ self-hosted server handles it [9] |
| LLM providers | Local + 100+ via APIs (Claude, Gemini, GPT, Llama 3) [1] | OpenRouter (rec.), OpenAI, Anthropic, Gemini, Cohere, Ollama, LM Studio [4] | OpenAI, Anthropic, Gemini, Groq, DeepSeek, OpenRouter, Azure, Ollama, LM Studio [7] | gpt, claude, gemini, llama, qwen, mistral [9] |
| RAG / vault chat | Smart Chat (free in core) [2] | Vault QA mode, semantic indexing optional [4] | @Vault or Cmd+Shift+Enter [8] | Chat over docs + web [10] |
| Apply-Edit / inline rewrites | ✗ retrieval only | Command palette quick actions [4] | ✓ one-click Apply [7] | ✗ chat-only |
| Multimodal context | ✗ | YouTube, web, PDF, EPUB, images [4] | Websites, images, YouTube transcripts [7] | PDF, image, Word, Notion, org-mode [9] |
| Agent / tool-use | ✗ (use MCP externally) [19] | Plus tier: autonomous Agent mode [4] | MCP support for tools [7] | Custom agents, automations [9] |
| Privacy posture | Local-first, offline after index [2] | Local with Ollama; cloud by default [14] | Local with Ollama; cloud optional [7] | Self-host or Khoj cloud [9] |
| Pricing | Core free; Pro plugins via supporter sub or one-time founding [3] | BYOK free; Plus $14.99/mo or $139.99/yr; Self-Host Supporter $349.99 one-time [5] | Free with own keys/subs; Gemini API works as free tier [7] | Free open-source self-host; managed cloud tiers [9] |
## Where each one wins
Smart Connections — pick for retrieval + connections sidebar. It’s the only one in the comparison that ships local on-device embeddings with zero configuration and keeps working offline once indexed [2]. The 2026 review consensus is that it’s the best “find related notes while I write” experience and free for the core plugin [12]. Pro adds inline connections and ranking knobs but is optional [3].
Copilot for Obsidian — pick for the most polished chat-with-vault. Native Ollama support, “Relevant Notes” auto-injecting context, and broad multimodal handling (YouTube, PDF, EPUB, images) [4] [14]. The free tier is fully functional with your own API keys; Plus ($14.99/mo, $139.99/yr) unlocks Agent mode, time-window queries, and a built-in model [5]. Going fully local is well-documented: install Ollama, pull a chat model + nomic-embed-text, point at localhost:11434 [6].
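That local setup boils down to a few terminal commands plus one settings change. A sketch, assuming the stack named in [6] and [14] (the model names qwen2.5 and nomic-embed-text are the ones recommended there; swap in Llama 3 or another chat model freely):

```shell
# Install Ollama (macOS/Linux; Windows users: installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a chat model and the embedding model
ollama pull qwen2.5
ollama pull nomic-embed-text

# Ollama listens on localhost:11434 by default; confirm both models are registered
curl http://localhost:11434/api/tags
```

Then, in the plugin's model settings, add an Ollama provider pointed at http://localhost:11434 and select the chat and embedding models separately; the exact settings labels vary by Copilot version, so treat this as the shape of the setup rather than a click-by-click guide.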
Smart Composer — pick if you want Cursor-in-Obsidian. @file mentions feed exact files into context, @Vault triggers RAG retrieval, and Apply-Edit lets you accept AI rewrites with one click [7] [8]. MCP support means you can plug in external tools/data sources. Best for editor-heavy workflows where chat and writing blur.
Khoj — pick if “vault” extends past Obsidian. It’s a self-hostable second brain reachable from Obsidian, Emacs, browser, desktop, phone, or WhatsApp [9]. Indexes Markdown, PDF, Notion, Word, org-mode. Heavier setup (Docker/server) than the in-process plugins, but unmatched if you want one knowledge base across surfaces — and the 34k-star repo is the most actively maintained of the bunch [10].
## Summarization specifically
Most users don’t need a dedicated summarization plugin in 2026 — Copilot, Smart Composer, and Text Generator all handle “summarize this note / selection” via prompt or template [11] [4]. The two narrow tools earn install only in specific cases:
| Use case | Tool | Notes |
|---|---|---|
| Weekly/monthly review note that walks all linked notes and dumps a summary per link | obsidian-ai-summary ⭐ 39 | Niche but well-shaped for periodic notes [16] |
| One-shot summarize command with chunking + placement profiles | obsidian-ai-summarize ⭐ 18 | Smaller scope than Text Generator, simpler UX [17] |
| Reusable summary templates with frontmatter overrides | Text Generator ⭐ 1.9k | Ships GPT-5 + thinking-model support; community template gallery [11] |
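To make the "most users just need a template" point concrete, here is a sketch of a reusable summary template in Text Generator's Handlebars-style format. The frontmatter keys and variable names ({{title}}, {{selection}}) follow the pattern of the plugin's template gallery, but are assumptions about the current schema; check the plugin's template docs before copying:

```markdown
---
promptId: summarize-selection
name: "Summarize selection"
description: "3–5 bullet summary, wikilinks preserved"
---
Summarize the following note content in 3–5 bullet points.
Preserve any [[wikilinks]] verbatim and end with one open question.

Title: {{title}}

{{selection}}
```

Saved into the template folder, this becomes a one-keystroke "summarize this selection" command, which is the workflow the dedicated summarization plugins would otherwise cover.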
## Operational notes
- Indexing cost. First-build embedding on a 5,000-note vault takes 20–30 min locally; the index lives in `.obsidian/` and updates incrementally [15]. Plan for the first run, not the steady state.
- Local-only stack that's known-good in 2026: Ollama + `nomic-embed-text` for embeddings + Qwen 2.5 (or Llama 3) for chat, wired into Copilot or Smart Composer [6] [14].
- Don't double-index. Running Smart Connections + Copilot Vault QA + Smart Composer RAG simultaneously means three separate embedding indexes over the same vault. Pick one chat plugin; let Smart Connections handle the discovery sidebar [13].
- Agentic plugins (Obsidian Chat, Vault AI Chat) can read+edit+create files autonomously [20] — interesting but immature; the more reliable 2026 pattern is wiring Smart Connections retrieval into Claude Code or another external agent via MCP [19].
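The external-agent pattern is a standard MCP server registration on the agent side. A hypothetical sketch in Claude Code's `.mcp.json` format — the package name `smart-connections-mcp` and its flags are placeholders, not a real published bridge; see [19] for the actual server to wire in:

```json
{
  "mcpServers": {
    "smart-connections": {
      "command": "npx",
      "args": ["-y", "smart-connections-mcp", "--vault", "/path/to/your/vault"]
    }
  }
}
```

The appeal of this shape is the division of labor: the plugin keeps the embedding index inside the vault, while the external agent gets a retrieval tool it can call without being granted blanket file-edit access.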
- The community-maintained Awesome Obsidian AI Tools ⭐ 115 list is the right index when you need to look beyond these picks [18].