Claude Code workflow patterns we use to ship production code daily
Seven Claude Code workflow patterns we run daily to ship production code: plan mode, CLAUDE.md, hooks, subagents, parallel sessions, slash commands, MCP.
A Claude Code workflow pattern is a repeatable session shape (how you set context, when you let Claude write, what you check before merging) that turns a one-shot AI coding session into a process you can run twice a day without surprises.
Anthropic reports that most of its own code is now written by Claude Code, with engineers shifting toward architecture and orchestration roles (Anthropic, 2026). That kind of throughput only holds when the session is shaped, not improvised. Below are seven patterns we run on every Studio engagement, ordered roughly by impact on production safety.
What we measured
For each pattern we looked at four properties:
- Friction: setup cost before the first call.
- Compliance: how reliably the pattern enforces a rule, vs. asking Claude nicely.
- Token cost: how it changes spend per task.
- Recovery: what happens when Claude drifts.
1. The plan-first loop
What it is. Switch into plan mode (Shift+Tab to plan, or /plan) before any non-trivial change. Claude can read, search, and reason, but the tool layer physically blocks edits and shell writes (Anthropic, permission modes). Iterate the plan with the questions you would ask a senior engineer ("what is the worst failure mode here?", "what does this break that the tests do not cover?"), then drop into Accept Edits to execute.
What it solves. Anthropic's internal testing found unguided attempts succeed roughly 33% of the time. Plan mode is the cheapest correction available, because at that stage no code has been written.
Where it falls short. For one-line fixes the planning ritual is overhead. Skip it for typos, import sorts, and CSS tweaks where the diff fits on a screen.
Who it is for. Anyone touching code that ships to production.
2. CLAUDE.md as steering doc
What it is. A project-root file Claude reads at session start. Document conventions, paths, do-not-touch zones, command examples, and the test runner. Anthropic's own teams treat CLAUDE.md as a record of past mistakes and the style rules that came out of fixing them.
What it solves. Repetitive context. Without a CLAUDE.md, the first 30 minutes of every session go on re-explaining what the codebase is and how it ships.
Where it falls short. Compliance lands around 70% on the rules CLAUDE.md describes (Anthropic, 2026). Claude reads it, then makes its own judgment under pressure. For non-negotiables, see hooks below.
Who it is for. Every project. Cap it at 200 lines so it loads on every session without eating context budget.
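A minimal sketch of the shape we aim for. The commands, paths, and rules below are illustrative placeholders, not a template to copy verbatim:

```markdown
# CLAUDE.md

## Commands
- `pnpm test` runs the unit suite; `pnpm test:e2e` needs the local DB running.
- `pnpm lint --fix` before every commit.

## Conventions
- Server code lives in `apps/api/`, shared types in `packages/types/`.
- Never edit files under `db/migrations/` by hand; always generate them.

## Past mistakes (do not repeat)
- Do not mock the payment client in integration tests; use the sandbox account.
```

The "past mistakes" section is the part that compounds: every fixed incident becomes one line that the next session reads for free.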
3. Hooks for the hard rules
What it is. Deterministic scripts that fire at defined lifecycle points (PreToolUse, PostToolUse, SessionStart, and so on). A hook can block a tool call before it executes, run any bash or Python you like, and exit with a code Claude is forced to respect.
What it solves. The 30% gap CLAUDE.md cannot close. "Do not push to main", "do not rm -rf the migrations folder", "run lint before committing". Hooks turn each of these from instruction into infrastructure (Anthropic, best practices).
Where it falls short. Hooks need a script per rule plus discipline to keep them in sync with the codebase. A stale hook that blocks a renamed command is its own kind of incident.
Who it is for. Teams shipping more than once a week. Solo devs can defer until they get bitten the first time.
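A sketch of the "do not push to main" hook, assuming the documented PreToolUse contract: Claude Code pipes the pending tool call as JSON on stdin, and exit code 2 blocks the call and returns stderr to Claude as the reason. The regex and messages are ours to adapt:

```shell
# Hypothetical PreToolUse hook: refuse any bash command that pushes to main.
# Wire it up in .claude/settings.json under hooks.PreToolUse with a "Bash" matcher.

deny_push_to_main() {
  # $1 is the JSON payload Claude Code sends for the pending tool call.
  cmd=$(printf '%s' "$1" | python3 -c \
    'import json,sys; print(json.load(sys.stdin).get("tool_input",{}).get("command",""))')
  if printf '%s' "$cmd" | grep -Eq 'git +push[[:space:]].*[[:space:]](main|master)([[:space:]]|$)'; then
    echo "Blocked: direct pushes to main are not allowed. Open a PR instead." >&2
    return 2   # exit code 2 tells Claude Code to block the tool call
  fi
  return 0
}

# When installed as a standalone hook script, uncomment the entry point:
#   deny_push_to_main "$(cat)"; exit $?
```

Because the hook is a script, it is testable outside Claude entirely, which is what keeps it from rotting.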
4. Subagents for quality separation
What it is. A subagent is a separate Claude session with its own context window, invoked from the main session. The canonical pattern is writer/reviewer: one agent writes the code, a second one reviews it cold, without the bias of "code I just wrote" (Anthropic, subagents).
What it solves. Self-review blindness. Claude often declares a task complete when it has only addressed part of it; a fresh-context reviewer catches what the writer missed.
Where it falls short. Two sessions cost roughly 2x the tokens. Reserve subagents for diffs that touch billable or sensitive surfaces (auth, payments, migrations, anything in infra/).
Who it is for. Anyone who has ever shipped a bug Claude declared "verified and tested".
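The reviewer half of the pattern can live in a project agent file under .claude/agents/. A sketch; the name, description, and tool list are assumptions to adapt:

```markdown
---
name: code-reviewer
description: Reviews diffs cold. Use after any change to auth, payments, or migrations.
tools: Read, Grep, Glob, Bash
---
You are reviewing code you did not write. Read the diff and the tests.
Report: missed requirements, unhandled failure modes, and anything the
tests do not cover. Do not fix the code; list findings with file:line.
```

Note the tool list omits Edit and Write: the reviewer can read and run, but never patch, which keeps the two roles honestly separated.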
5. Parallel sessions on git worktrees
What it is. Multiple Claude Code instances, each in its own git worktree of the same repo, each working on an independent task.
What it solves. Sequential dependence. The Claude Code creator runs 10 to 15 parallel sessions and ships 20 to 30 PRs a day, on tasks that would each take about five hours if run sequentially (Mind Wired AI, 2026). For a studio, three or four parallel sessions is the realistic ceiling before code review becomes the bottleneck.
Where it falls short. You become the merge conflict. Without strict task isolation (one agent per surface, no overlapping files), parallel sessions waste more time than they save.
Who it is for. Engineers comfortable holding three or more task contexts in their head simultaneously.
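The mechanics are plain git. A sketch, with branch and directory names as illustrative placeholders:

```shell
# One worktree per parallel task: a full checkout of the same repo on its
# own branch, so sessions physically cannot edit the same working copy.
new_task_worktree() {
  # $1 = path to the main repo, $2 = new branch name, $3 = worktree directory
  git -C "$1" worktree add -b "$2" "$3"
}

# Usage (one terminal, and one Claude Code session, per task):
#   new_task_worktree ~/repo feature/auth-refresh ~/wt/auth
#   cd ~/wt/auth && claude
```

Tear down with `git worktree remove <dir>` once the PR merges; stale worktrees are the most common source of "why is my branch dirty" confusion.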
6. Slash commands for repeated routines
What it is. Reusable command files at .claude/commands/<name>.md that prompt Claude with a structured task: "release a new patch version", "audit accessibility on the page I am in", "draft a Supabase migration from this type definition".
What it solves. Prompt drift. The same recurring task phrased three different ways produces three different shapes of work, and the worst version always shows up under deadline pressure.
Where it falls short. Commands rot. If the underlying tool changes (an npm script renamed, a lint rule replaced), the command silently produces stale output. Schedule a quarterly review.
Who it is for. Any team with more than three routines that recur weekly.
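A command file is just a structured prompt on disk. A sketch of .claude/commands/release-patch.md, assuming an npm-style repo; the steps and file names are placeholders:

```markdown
---
description: Cut a patch release
---
Release a new patch version:
1. Run the test suite and stop if anything fails.
2. Bump the patch version in package.json and update CHANGELOG.md
   from the commits since the last tag.
3. Commit as `chore: release vX.Y.Z`, tag it, and push the tag only.
Optional release notes from the caller: $ARGUMENTS
```

Invoked as /release-patch, the same routine runs the same way under deadline pressure as on a quiet Tuesday, which is the whole point.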
7. MCP servers over raw CLIs
What it is. Hand Claude an MCP (Model Context Protocol) server instead of shell access to your databases, observability stack, or admin APIs. The server exposes typed tools with explicit permissions and audit logs.
What it solves. Blast radius. A bare psql in Claude's hands can drop a table; an MCP tool exposes only the queries you whitelisted, with structured logs you can grep when something goes sideways. Anthropic's data infrastructure team explicitly migrated from BigQuery CLI to an MCP server for this reason, citing logging and PII control (Anthropic, 2026).
Where it falls short. You build the MCP server. For one-off scripts, the cost is not worth it. We covered the build-vs-buy decision in our MCP build vs buy article.
Who it is for. Anyone giving Claude access to data with compliance, billing, or PII attached.
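Registering a server is a few lines of project config in .mcp.json. A sketch assuming a hypothetical read-only analytics server script of your own:

```json
{
  "mcpServers": {
    "analytics-readonly": {
      "command": "python3",
      "args": ["tools/analytics_mcp_server.py"],
      "env": { "DB_ROLE": "readonly" }
    }
  }
}
```

The server script is where the whitelisting lives: it exposes only the named queries you wrote, and the database role it connects with cannot drop anything even if a query slips through.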
How to choose
Pick the patterns that match the risk surface of what you are building. A blog plugin and a payments gateway do not need the same workflow.
- If you ship to production at all, run plan mode plus CLAUDE.md plus at least one hook. This is the floor.
- If your repo touches shared infra (production database, payments, auth), add MCP servers and remove shell access to those systems.
- If review is your bottleneck rather than writing, add subagents for the writer/reviewer split before adding more parallel writers.
- If you are a solo dev with 30 focused minutes a day, skip parallel sessions and double down on slash commands. The leverage is in repetition reduction, not parallelism.
None of these patterns survives without a CLAUDE.md that describes the project honestly. The patterns are leverage on top of a clear context. The context comes first.
Sources
- How Anthropic teams use Claude Code (PDF, 2026)
- How Anthropic teams use Claude Code (blog post)
- Best Practices for Claude Code (Anthropic docs)
- Choose a permission mode (Anthropic docs)
- Create custom subagents (Anthropic docs)
- Claude Code auto mode (Anthropic engineering)
- How the creator of Claude Code uses it (Mind Wired AI, 2026)
Frequently asked questions
- Do I need all seven patterns to start?
- No. The minimum floor is plan mode plus a CLAUDE.md plus one hook that protects your worst-case command (push to main, drop database, delete migrations). Everything else is leverage you add as your team grows. Most solo developers run with three patterns for months before adding a fourth.
- How long does it take to set up CLAUDE.md and hooks for an existing project?
- Plan four to eight hours for the first pass on a mid-sized codebase. Two hours to draft a CLAUDE.md by reading the readme, the package scripts, and one or two recent PRs. One to four hours to write the first three or four hooks (typically: block force-push, block destructive shell commands, run lint or typecheck before commit). Add to both files over time as new failure modes show up.
- What is the right token budget for these workflows?
- Subagents and parallel sessions roughly multiply token spend by the number of sessions running. Plan mode adds 10-20% over a single session because the planning phase reads more files. Slash commands and hooks are essentially free. If you are on metered billing, keep subagents for high-stakes diffs only.
- Can these patterns replace human code review?
- No. They reduce the noise reaching the human reviewer. The writer/reviewer subagent pattern catches obvious issues; the human still owns architectural and product decisions, and is still the final gate before merge. We have not seen a workflow where removing the human review step ends well, especially on changes that touch billing, auth, or data migrations.