I ran /insights expecting a vanity report. What I got was a detailed forensic breakdown of every inefficiency in how I use Claude Code, delivered without flattery.
That’s more useful.
Here’s what the feature actually does, what it found in my sessions, and how I’m changing my workflow based on it.
What /insights Does
Run /insights in Claude Code. It analyzes your session history and produces a structured report across five dimensions:
- Project areas — what you’ve actually been working on, auto-clustered by session content
- Interaction style — how you use the tool, patterns Claude picks up from across your sessions
- What works — your most effective workflows, with specifics
- Friction analysis — categorized breakdowns of where your sessions derailed, with examples
- Suggestions — concrete CLAUDE.md additions, features to try, and prompt patterns
The output is a standalone HTML report: full session metadata, shareable, viewable offline.
My numbers: 126 analyzed sessions, ~447 hours, 187 commits, 81% full or mostly-achieved outcome rate. That sounds good until you read the friction section.
The Friction Section Is Where It Gets Interesting
The report doesn’t just count success rates. It categorizes failures and gives you examples pulled from your actual sessions.
Buggy generated code: 54 instances. Multi-round debugging loops for syntax errors, undefined references, broken template literals. The fix the report suggested: ask Claude to validate code in smaller, testable chunks before applying large batch generations. I’d been doing the opposite — large diffs, fix whatever breaks. Wrong instinct.
Wrong approach / misunderstood context: 51 instances. This is the one that stings. Claude re-planned work I’d already completed. Misread my product positioning (training course framed as internal, not paid). Explored the codebase instead of using context7 after I explicitly said to use context7. Deployed to a preview URL twice instead of production.
These aren’t Claude hallucinating. These are Claude not having enough durable context. The fix is a better CLAUDE.md — project-specific rules that persist across sessions so you’re not re-establishing constraints every time.
Parallel agent failures: cascade problems. I run a lot of multi-agent workflows — 414 Task tool invocations in a month. The report identified a consistent pattern: I’d launch 10–15 agents in parallel and they’d all fail with the same error. Permission restriction, missing prerequisite file, schema mismatch. All 15 agents wasted on a shared root cause I could have caught with one canary run first.
The CLAUDE.md Suggestions Are Specific and Good
The report generates concrete CLAUDE.md additions, not generic advice. Examples from mine:
“When deploying to Cloudflare Pages or any hosting platform, always deploy to PRODUCTION unless explicitly told otherwise. Never deploy to preview URLs without asking first.”
This happened twice in my sessions. It’s now in my CLAUDE.md.
“Do NOT re-plan or re-explore work that has already been completed in this session. If unsure, ask the user before restarting any planning phase.”
The report found multiple instances of me saying “we already did all this” mid-session. That’s a pattern it identified from session history, not something I consciously tracked.
“When spawning sub-agents/Task agents, ensure they have all required permissions and tool access before dispatching. Test one agent before launching a batch.”
This one is going directly into my multi-agent skill templates, not just CLAUDE.md.
The Canary Pattern
The most useful concrete technique from the suggestions section: gate parallel agent batches with a single canary run.
Before /insights, my pattern was: identify 15 tasks, spawn 15 agents simultaneously, find out they all failed for the same reason.
New pattern:
- Spawn one agent
- Wait for it to complete fully
- If it succeeded, launch the remaining batch in parallel
- If it failed, diagnose the root cause before touching anything else
One canary run would have caught every batch failure I’ve had. The cost of running one agent sequentially is nothing compared to a session where 15 agents all hit the same wall.
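The steps above can be sketched as a small shell wrapper. This is illustrative, not anything the report generated: `run_task` is a hypothetical stand-in for whatever launches one agent or job in your setup.

```shell
#!/bin/sh
# run_task is a placeholder; replace it with your real agent launcher.
run_task() { echo "task $1 ok"; }

# Canary pattern: run ONE task to completion first; fan out only if it passes.
canary_then_batch() {
  first="$1"; shift
  if run_task "$first"; then
    for t in "$@"; do
      run_task "$t" &   # remaining tasks run in parallel
    done
    wait                # block until the whole batch finishes
  else
    echo "canary failed: diagnose the root cause before launching the batch" >&2
    return 1
  fi
}

# Example: one canary, then the rest of the batch in parallel.
canary_then_batch migrate-users migrate-orders migrate-invoices
```

The point is structural: the batch never launches until one representative task has succeeded end to end, so a shared root cause costs you one run instead of fifteen.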
The Feature Suggestions Actually Fit Your Patterns
The report looks at your tool usage breakdown and suggests features accordingly. It doesn’t recommend everything — it reasons about what would help given how you specifically work.
It recommended hooks for me based on my buggy code count. Hooks let you run shell commands automatically at lifecycle events — post-edit, pre-commit, etc. Auto-running tsc --noEmit after every TypeScript edit would have caught several classes of errors before they cascaded. I haven’t set these up yet. I should.
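For reference, a hook like that lives in Claude Code's settings file (e.g. .claude/settings.json). The sketch below follows the published hooks schema at the time of writing (a PostToolUse event with a matcher on the Edit and Write tools); verify the event names and structure against the current hooks documentation before copying it:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx tsc --noEmit" }
        ]
      }
    ]
  }
}
```

As written this runs the type-checker after every file edit, which may be slow on a large project; scope the command or the matcher accordingly.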
It recommended headless mode based on my batch operation patterns (51-file JSDoc transforms, 17 Playwright fix tasks). Running Claude non-interactively from scripts instead of manually orchestrating in interactive sessions. For the large batch work I do, this is the right tool and I’ve been using the wrong one.
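A hedged sketch of what a headless batch driver could look like. It assumes claude -p "&lt;prompt&gt;" runs a single non-interactive prompt (print mode); check claude --help on your installed version. The DRY_RUN toggle and file list are illustrative.

```shell
#!/bin/sh
# Hypothetical batch driver: one headless Claude Code run per file.
# DRY_RUN=1 prints the commands instead of invoking the CLI.
run_batch() {
  for f in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo claude -p "Add JSDoc comments to $f, then run tsc --noEmit"
    else
      claude -p "Add JSDoc comments to $f, then run tsc --noEmit"
    fi
  done
}

# Example: preview the commands for two files without invoking the CLI.
DRY_RUN=1 run_batch src/a.ts src/b.ts
```

Driving the batch from a script like this also makes the canary pattern trivial: run the first file, inspect the diff, then loop over the rest.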
The Thing About the Branch Named “entire”
The report ends with a “fun fact” from your sessions. Mine:
“Claude accidentally deleted a protected branch called ‘entire’ while cleaning up stale git branches.”
This is accurate. I asked Claude to merge scattered branches, sync main, and clean up stale branches. It got a little too enthusiastic about the cleanup and deleted a branch I hadn’t flagged as stale. The branch was named “entire.” It is gone.
The report’s suggested commit pattern: git add -A && git commit -m "checkpoint: before [operation]" && git tag checkpoint-$(date +%s) before any destructive git operation. This would have saved the branch. It’s now in my workflow.
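Wrapped as a shell function (the function name and message format are mine, not the report's), the checkpoint pattern looks like this:

```shell
#!/bin/sh
# Checkpoint helper: snapshot the working tree and tag it before any
# destructive git operation, so deleted branches/files stay recoverable
# from the tagged commit. Run it from inside the repo.
checkpoint() {
  git add -A
  git commit -q --allow-empty -m "checkpoint: before $1"
  git tag "checkpoint-$(date +%s)"
}

# Usage: checkpoint "branch cleanup", then do the risky operation.
```

The --allow-empty flag means the checkpoint commit is created even when the tree is clean, so the tag always exists before the destructive step.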
What It Doesn’t Do
The report is backward-looking. It analyzes sessions that already happened. It won’t prevent the next wrong approach or buggy generation in real time.
It also can’t fix the underlying model behaviors — it can only help you structure your prompts and CLAUDE.md to reduce the chance of them happening. The 51 wrong-approach instances weren’t random; most of them traced back to missing context that a better CLAUDE.md would have provided.
Think of it as a retrospective tool, not a guardrail. The value is in acting on the suggestions, not in the report itself.
How to Actually Use It
- Run /insights after you’ve accumulated at least a few weeks of sessions — it needs pattern data to be useful
- Read the friction section before the “what works” section — that’s where the value is
- Take the CLAUDE.md suggestions literally. Copy them in. Don’t paraphrase
- Implement the canary pattern immediately if you run any parallel agent workflows
- Run it again in a month and compare the friction categories
The 81% satisfaction rate felt good until I understood what the other 19% looked like in detail. That detail is what makes the tool worth running.
447 hours of sessions. The report found the same failure modes recurring across every project area. Most of them had the same root cause: insufficient durable context. The fix is mostly CLAUDE.md discipline, not model upgrades.
Run /insights in Claude Code. The friction section will be uncomfortable. That’s the point.