Configuring Claude Code: A Practical Guide for Developers Who Want Control, Not Autopilot

My current mindset towards AI is best put by Paul Ford (CEO of Postlight): “All of the people I love hate this stuff, and all the people I hate love it. And yet, likely because of the same personality flaws that drew me to technology in the first place, I am annoyingly excited.”

That tension is real, and most developers are sitting in it right now. The advice out there doesn’t help. You’ve got people treating AI like a junior dev you can hand the keys to and walk away from, and people who refuse to engage because they saw a hallucinated function name once.

This guide is for the people in the middle. You see the potential, but you’re not willing to sacrifice control or code quality to get there. I think you can have both with the right configuration. To get there, I read through all the Claude Code documentation so you don’t have to.

I’ve been using Claude Code lately as my AI development tool. Its configuration surface is massive: permissions, hooks, skills, sub-agents, sandboxing, Model Context Protocol (MCP) servers, memory files. A misconfigured permission set can give your agent free rein to push code, delete files, or access credentials without ever asking you.

This post gives you a safe, effective starting point that keeps you in control without drowning you in approval prompts.

This is part one of two. Here I cover the global configuration layer: settings, permissions, hooks, and tooling that apply across every project. In part two, I’ll walk through how this setup fits into an actual development workflow from idea to deployed code.

Also, these settings are just a best effort at controlling Claude. There’s evidence that, even with all of them in place, Claude can find workarounds if it’s determined to.

Principles

These drive every configuration decision below.

  1. Architect, not spectator. You set direction, review output, and make decisions. The agent executes. If you can’t explain why a line of code exists, you don’t ship it.
  2. Minimal trust, earned incrementally. Permissions start at “ask” and get promoted to “allow.” Trust is earned per-project, not assumed globally.
  3. Persistent knowledge over chat context. Research, specs, and status go to files, not chat summaries. Files give context durability.
  4. Spec-driven development. Code doesn’t start until specs and plans exist. The agent needs the same context a human developer would.
  5. Adversarial review. The writer should never be the only reviewer. Use different models or agents for writing vs. reviewing.

Agentic engineering coding patterns

I am still exploring effective agentic engineering coding patterns, so this image will change over time. There probably isn’t a “one process fits all” solution.

[Diagram: Personal Development Project Workflow (excalidraw, 2026-02-13)]

Configurations: levels and scope

Claude Code offers an incredible number of knobs to tune your workflow. There is no perfect setup; each feature below can be tuned to work best for you. But you also need to decide where that configuration lives. You can configure at the:

  • Enterprise level — applies to all users in an Enterprise (I won’t cover this at all here)
  • User level — applies to all projects and Claude Code instances on your machine
  • Project level — applies to the specific repo where Claude Code was opened
  • Local level — applies to just your machine on that project

Everything below covers my preferred user-level features: the stuff that helps my overall workflow regardless of what I’m building.

Settings (permissions and output style)

The settings.json file is where permissions live, along with some other configs to tweak functionality.

Permissions: least privilege with layered override

The goal is to establish a global security baseline in ~/.claude/settings.json that protects against catastrophic and irreversible actions across all projects. My approach uses three tiers:

Allow — Safe, read-only, or non-destructive operations that would be tedious to approve every time. These run without prompting.

Ask — Legitimate development operations that modify state. These prompt for confirmation by default, but can be promoted to allow at the project level (via .claude/settings.json or .claude/settings.local.json) when you trust the context. This is the key flexibility mechanism: start strict globally, loosen per-project as needed.

Deny — Destructive or dangerous operations that should never be auto-approved. Deny rules are absolute and cannot be overridden by project-level settings. Use this for anything where a single mistake could cause data loss, credential exposure, or infrastructure damage.

Why this layered approach? Deny rules act as a hard security floor. Ask rules are the default friction point for state-changing operations, designed to be promoted to allow at the project level once you’re comfortable. This avoids two failure modes: too permissive globally (leading to accidental damage) and too restrictive everywhere (leading to prompt fatigue where you stop reading confirmations).

I make Claude ask for each of the operations listed below. This probably makes some AI enthusiasts grind their teeth, thinking about all the questions I get from Claude. Remember, it is a starting point.

  • Edit / Write — File modifications prompt unless a project explicitly allows them. This is the first thing to promote per-project (e.g., Edit(/src/**)) once it becomes tedious.
  • General Bash commands — Anything not explicitly listed (curl, wget, pip, arbitrary scripts) will prompt. This catches unexpected network access and script execution.
  • Task / MCP tools — Left at default so sub-agent and MCP tool invocations remain visible.

Over time, this will evolve. Use as-is for a week or two. When you find yourself always approving the same thing, promote it if you think it can be trusted long-term. Common first promotions: Edit(/src/**), Write(/src/**), Bash(npm run *), Bash(python *), Bash(pytest *).

{
  "permissions": {
    "allow": [
      "Read",
      "Glob",
      "Grep",
      "WebSearch",
      "Bash(git status)",
      "Bash(git log *)",
      "Bash(git diff *)",
      "Bash(git branch *)",
      "Bash(ls *)",
      "Bash(* --version)",
      "Bash(* --help)",
      "Bash(gh pr view *)",
      "Bash(gh pr create *)",
      "Bash(gh pr list *)",
      "Bash(gh issue *)"
    ],
    "ask": [
      "Bash(git add *)",
      "Bash(git commit *)",
      "Bash(git push *)",
      "Bash(git checkout *)",
      "Bash(git switch *)",
      "Bash(git rebase *)",
      "Bash(git merge *)",
      "Bash(git stash *)",
      "Bash(npm install *)",
      "Bash(npm publish *)",
      "Bash(gh pr merge *)",
      "Bash(docker *)",
      "Bash(psql *)",
      "Bash(mysql *)",
      "Bash(mongosh *)",
      "Bash(sqlite3 *)",
      "Bash(redis-cli *)"
    ],
    "deny": [
      "Bash(sudo *)",
      "Bash(rm -rf *)",
      "Bash(git push --force *)",
      "Bash(git push * --force *)",
      "Bash(git reset --hard *)",
      "Bash(git clean *)",
      "Bash(terraform destroy *)",
      "Bash(kubectl delete *)",
      "Bash(find * -delete *)",
      "Bash(find * -exec rm *)",
      "Bash(xargs rm *)",
      "Bash(chmod -R *)",
      "Bash(chown -R *)",
      "Bash(npm unpublish *)",
      "Bash(kill -9 *)",
      "Bash(killall *)",
      "Bash(aws s3 rm *)",
      "Bash(aws s3 rb *)",
      "Bash(gcloud * delete *)",
      "Bash(systemctl stop *)",
      "Bash(launchctl unload *)",
      "Read(~/.ssh/**)",
      "Read(~/.aws/**)",
      "Read(.env)",
      "Edit(.env)",
      "Write(.env)"
    ]
  }
}
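When a project earns trust, you promote specific ask rules in that repo’s settings. As a hedged sketch (the path patterns and Bash rules here mirror the common promotions mentioned above; adjust them to your own layout), a project-level .claude/settings.local.json might look like:

```json
{
  "permissions": {
    "allow": [
      "Edit(/src/**)",
      "Write(/src/**)",
      "Bash(npm run *)",
      "Bash(pytest *)"
    ]
  }
}
```

Project-level allow entries promote rules out of the global ask tier, but they can never override a global deny.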

Attribution: disabled

By default, Claude Code appends a Co-Authored-By: Claude ... trailer to every git commit and a Generated with Claude Code footer to PR descriptions. I disable both globally by setting them to empty strings:

{
  "attribution": {
    "commit": "",
    "pr": ""
  }
}

Output style: explanatory

Claude Code supports multiple output styles (concise, explanatory, verbose) configured via outputStyle in settings or toggled per-session with /output-style. I use explanatory globally.

{
  "outputStyle": "Explanatory"
}

It strikes the right balance. Claude explains its reasoning and decisions without flooding the terminal with every intermediate thought. This matters most when you’re reviewing what Claude did after a long autonomous run.

You can also create custom output styles, which is helpful if you’re using Claude Code as a harness for other agentic workflows.

Session quality surveys

Who needs more workspace interruptions? Not me. Disabled via disableSessionQualitySurveys: true in ~/.claude/settings.json.
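For completeness, a minimal sketch of where that flag sits in the settings file:

```json
{
  "disableSessionQualitySurveys": true
}
```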

Memory

In the least hot take of all hot takes: context is king. Everyone knows this by now. Memory is how you control the context Claude receives.

CLAUDE.md (global)

~/.claude/CLAUDE.md is loaded into every Claude Code session regardless of project. This is where I define my relationship with the agent: how it should behave, what it should never do, what conventions to follow everywhere.

My global CLAUDE.md is intentionally short and opinionated. It covers four things:

Working relationship. Tone and decision-making style. No sycophancy, be direct, challenge my reasoning, present tradeoffs instead of silently picking the easy path. Without this, Claude defaults to agreeable and verbose, which erodes the architect-developer dynamic.

Working style. The summary here is: make Claude be thorough. Large Language Models (LLMs) take what looks like the optimal path to being right, where “optimal” means the most probable tokens, not the most carefully reasoned solution. I emphasize correct fix over quick fix.

Hard rules. Things that should never happen. Never publish secrets, never commit .env files, never take git actions (commit, amend, push) without explicit permission (another autonomous AI sin, I suppose). These overlap with the deny list in permissions, but redundancy is the point.

New project setup. Standards that apply when initializing any repo (required .gitignore entries, creating a project-level CLAUDE.md).

The project-level CLAUDE.md (covered in part two) is where stack-specific instructions, API context, and project conventions live. The global file stays lean so it doesn’t burn tokens on irrelevant context.
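As a rough sketch of the shape (the headings and wording are illustrative, not my exact file), a global ~/.claude/CLAUDE.md along these lines covers the four areas above:

```markdown
# Working relationship
- Be direct. No sycophancy. Challenge my reasoning when it's weak.
- Present tradeoffs instead of silently picking the easy path.

# Working style
- Prefer the correct fix over the quick fix. Think the problem through.

# Hard rules
- Never publish secrets or commit .env files.
- Never commit, amend, or push without explicit permission.

# New project setup
- Every new repo gets a .gitignore and a project-level CLAUDE.md.
```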

MEMORY.md

At the project level, Claude Code has a MEMORY.md file (~/.claude/projects/<project>/memory/MEMORY.md) that the agent can read and write to persist learnings across sessions. Patterns it discovered, debugging insights, user preferences it picked up. There’s currently no user-level memory file, only project-scoped ones, so this doesn’t factor into global configuration.

Memory management tools

Worth mentioning: there’s a growing ecosystem of tools being built to manage Claude’s memory. Everything from daily processes that create “short-term memory” documents to vector databases that implement Retrieval Augmented Generation (RAG) on top of Claude.

At the time of writing, these are interesting to explore. Some are probably beneficial, but realistically overkill for most people. And for a real hot take: don’t build your own, or spend too much time adopting someone else’s. At the rate Anthropic is shipping new features, I think the default memory management will improve dramatically over the next six months.

Skills

Skills are reusable prompt files (.md) that Claude Code can load on demand, either triggered by a slash command (/skill-name) or auto-loaded by the agent when it recognizes a matching context. They’re probably the single most effective way to extend Claude’s capabilities without bloating every session with instructions it doesn’t need.

The key concept is progressive disclosure. Rather than stuffing everything into your CLAUDE.md and burning tokens on context that’s irrelevant to the current task, skills let you define specialized knowledge and workflows that only load when called. A code review checklist lives in its own file and only enters the context window when you invoke it.
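Structurally, a user-level skill is a directory under ~/.claude/skills/ containing a SKILL.md with YAML frontmatter. A minimal sketch (the skill name and description below are made up for illustration):

```markdown
---
name: code-review-checklist
description: Reviews a diff against my code review checklist. Use when the
  user asks to "review this code", "check this diff", or "run the checklist".
---

# Code review checklist

1. Does every public function have a test?
2. Are errors handled, not swallowed?
```

The frontmatter description is what Claude matches against when deciding whether to auto-load the skill; the body only enters the context window once it triggers.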

Most of my skills are defined at the user level (~/.claude/skills/) because they’re part of my general development process, not specific to any one project. I’ll cover the full set in part two. For now, here are three meta-skills: skills whose job is to create and improve other configuration.

I won’t cover the exact contents for any of these skills, just how I use them and the principles behind them. You should make skills you fully understand and that meet your needs.

Create CLAUDE.md (/write-claude-md)

This skill guides the creation of project-level CLAUDE.md files. It targets 50-100 lines (max 150) and follows a structured section order: project overview, directory structure, commands, patterns, testing, git conventions, critical rules, and reference docs. Content that’s path-specific goes to .claude/rules/ files with paths: frontmatter so it only loads when relevant. Large reference material uses @ imports so Claude reads it on demand rather than upfront. The skill explicitly excludes anything Claude can infer from the code, style rules (those belong in linter configs), and embedded code snippets (which go stale; point to file:line instead).

Refine and maintain CLAUDE.md (/improve-claude-md)

This skill audits existing CLAUDE.md files for drift and quality. It runs through five phases: discovery (finding all CLAUDE.md and rules files), drift detection (verifying that documented paths, commands, and directory structures still match the actual codebase), quality assessment (scoring against a rubric covering commands, architecture clarity, conciseness, currency, and actionability), a quality report, and then targeted updates with user approval.

Create SKILL.md (/create-skill)

This skill follows an eval-driven development loop: capture intent, interview for edge cases, draft the skill, create test prompts, run them (with-skill vs. baseline), evaluate results, and iterate until satisfied.

A few principles baked into the skill:

Keep SKILL.md under 500 lines. Detailed content goes in references/ and only loads when needed (progressive disclosure again).

The description field is the primary trigger mechanism. It should include specific phrases users would say, written in third person, slightly “pushy” to counter Claude’s tendency to under-trigger.

Explain the why behind instructions. LLMs respond better to reasoning than rigid ALWAYS/NEVER constraints.

Hooks

Hooks are one of my favorite features. They add deterministic execution into a largely nondeterministic world of agentic coding. And yet, I’ve only found one global hook I actually use (project-level hooks are a different story, but that’s part two):

Notifications

Hooks defined in ~/.claude/settings.json trigger the script ~/.claude/scripts/notify-attention.sh on two events:

  • Notification — when a question is asked or input is required
  • Stop — when a task is complete

The script sends macOS notifications using terminal-notifier, with distinct sounds: Purr for notifications, Glass for stop. Clicking a notification activates the originating application (either iTerm2 or Cursor) based on environment detection. It activates the app itself, not a specific window.
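The wiring in ~/.claude/settings.json looks roughly like this (the argument passed to the script is my own convention for distinguishing the two events; hooks also receive a JSON payload on stdin, so check the hooks documentation for the exact event schema):

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/scripts/notify-attention.sh notification"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/scripts/notify-attention.sh stop"
          }
        ]
      }
    ]
  }
}
```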

This is the kind of hook that pays for itself immediately. Without it, you’re either staring at the terminal waiting for Claude to finish, or you’re off doing something else and miss the moment it needs input. Neither is great for flow.

MCP servers

MCP servers add tool definitions to every API call, which means they consume tokens whether you use them or not. A handful of globally installed MCPs can quietly eat thousands of tokens per turn just from the tool schema overhead. Be deliberate about what you install globally versus per-project. Most MCPs should live at the project level where they’re actually relevant.

The one exception I make is Context7 (@upstash/context7-mcp), installed globally in ~/.claude/settings.json. Context7 pulls up-to-date library documentation directly into Claude’s context. Whether you’re working with a Python package or a JavaScript framework, being able to say “look up the docs for X” without leaving the session is useful enough to justify the constant token cost.
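Installing it at the user scope is a one-liner (this is the claude mcp command shape as I understand it; verify the flag names against your installed version):

```shell
claude mcp add --scope user context7 -- npx -y @upstash/context7-mcp
```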

Model configuration

I run Opus for everything. It produces the best results and I’d rather pay for quality than debug mediocre output. But if you’re doing extended coding sessions, you’ll hit token limits faster than you’d like.

A practical alternative is to split by task type: use Opus for planning and reasoning (architecture decisions, spec writing, code review, debugging complex issues) and Sonnet for implementation (writing code, running tests, routine file edits). Sonnet is fast and capable enough for execution work, and the token savings are significant over a full day of development.

If you don’t want to manually toggle between models, set your model to opusplan. Claude Code will automatically use Opus when it’s planning or reasoning through a problem and switch to Sonnet when it’s time to execute. You get the best of both without having to think about it.
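In settings terms, that’s just:

```json
{
  "model": "opusplan"
}
```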

Tools

Claude Code ships with a fixed set of built-in tools: file read/write/edit, bash execution, glob/grep search, web fetch, sub-agent orchestration, and a few others. You can’t define custom tools within Claude Code itself. If you need Claude to interact with an external system or API that isn’t covered by the built-ins, MCP servers are the extension point (covered above).

For the full list of what’s available out of the box, see the Claude Code tools documentation. Not much to configure here, but worth knowing what you’re working with.

Sandboxing

Sandboxing wraps every command Claude executes in a macOS sandbox that restricts filesystem reads/writes and network access to only what’s necessary. Think of it as a layer beneath permissions. Even if a command is in the allow list, the sandbox constrains where it can read, write, and connect. Permissions control whether a command runs. Sandboxing controls what it can touch when it does.

Enabled via the sandbox block in ~/.claude/settings.json. Some tools need full system access to function: git needs SSH keys and network for push/pull, gh needs auth tokens and the GitHub API, docker needs the daemon socket. These go in excludedCommands to bypass the sandbox, while still being subject to permission rules (e.g., git push still requires approval via the ask list).

{
  "sandbox": {
    "enabled": true,
    "excludedCommands": ["git", "gh", "docker"]
  }
}

Sub-agents

Sub-agents are autonomous Claude instances that the main session can spin off to handle isolated tasks. They get their own context window, execute independently, and return results back to the parent. Three types exist:

  • Project sub-agents — Defined in .claude/agents/, scoped to a specific codebase.
  • User sub-agents — Defined in ~/.claude/agents/, available across all projects.
  • CLI-defined sub-agents — Passed as JSON when launching Claude Code. They exist only for that session and are never saved to disk. Useful for quick testing and automation scripts.
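As an illustration of the third type, a session-scoped sub-agent can be passed as JSON at launch. The --agents flag name and JSON shape here are my best understanding of the CLI and may differ in your version; treat this as a sketch, not gospel:

```shell
claude --agents '{
  "test-runner": {
    "description": "Runs the test suite and summarizes failures",
    "prompt": "You run tests and report concise failure summaries."
  }
}'
```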

The most common pattern is isolating high-volume operations (test runs, doc fetches, log processing) so they don’t pollute the main context. You can also chain them, where one sub-agent completes a task and passes results to the next.

I don’t have any global-level sub-agents configured yet. For my current workflow, I prefer using a combination of fresh context and skills to achieve similar outcomes. It keeps me closer to the work and gives me more control over what context each task receives. As I get more comfortable with the overall workflow and start pushing toward more autonomous operation, I can see sub-agents becoming a bigger part of the setup.

Tips and tricks

Dictation (VoiceInk + Karabiner)

Claude Code is a terminal-based tool, and you’ll spend a lot of time typing prompts. Dictation makes this significantly faster, especially for longer instructions where you’re explaining context or thinking through an approach. I use VoiceInk for transcription (noticeably better than macOS built-in dictation) mapped to a Hyper key via Karabiner-Elements (Caps Lock remapped to Ctrl + Option + Command + Shift). One key press to start dictating, one to stop. No menus, no mouse.

Visibility into Claude Code’s thinking

Claude Code used to expose more of its internal reasoning and tool calls directly in the terminal output, but recent updates have progressively stripped that visibility away in favor of a cleaner UI. The result is that you often can’t tell what Claude is actually doing under the hood, especially during longer-running tasks with sub-agents.

Verbose mode (--verbose / -v) is the quickest option. It surfaces more detail about tool calls and internal decisions. Mileage varies though; it helps for simple sessions but still doesn’t give you full visibility into sub-agent orchestration or nested tool calls.

Claude Code Dev Tools (matt1398/claude-devtools) is a more comprehensive solution. It gives you visibility into all agents, sub-agents, and their individual tool calls in a separate UI. I’ve been using it regularly for my sub-agent development work and it’s become essential for understanding what’s actually happening during multi-agent runs. Particularly useful for long-running tasks (making sure agents stay on track), sub-agent workflows (debugging coordination and handoffs), and using Claude Code as an agent harness (tracing execution across multiple layers).

Status line

Claude Code’s status line is a configurable two-line display at the top of the terminal that keeps key session info visible without cluttering the conversation.

The first line is the status bar: model name, working directory, git branch (with uncommitted file count, upstream sync status, and time since last fetch), and a context bar (a 10-segment visual bar showing context usage as a percentage of max tokens).

The second line echoes back your most recent message (truncated to fit), skipping interrupted or cancelled messages. Useful for confirming what Claude is actually working on, especially after dictation input.

The color theme is configurable, with 9 preset color options.
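Under the hood, the status line is driven by a statusLine command entry in settings.json: Claude Code pipes session state as JSON to the configured script on stdin, and whatever the script prints is displayed. As a minimal sketch (the model.display_name and workspace.current_dir field names follow the statusline schema as I understand it; verify against the current docs), a jq one-liner gets you a basic version:

```shell
#!/usr/bin/env bash
# format_status: turn the session JSON Claude Code pipes to the
# statusLine command into a one-line "model | directory" string.
format_status() {
  jq -r '"\(.model.display_name) | \(.workspace.current_dir | split("/") | last)"'
}

# Demo with a hand-written payload (real payloads come from Claude Code):
echo '{"model":{"display_name":"Opus"},"workspace":{"current_dir":"/home/me/proj"}}' | format_status
# prints: Opus | proj
```

Wire it up with a settings entry like "statusLine": {"type": "command", "command": "~/.claude/statusline.sh"} and build out the git-branch and context-bar segments from there.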

Where this is headed

Everything in this post reflects where AI-assisted coding is as of early 2026. This will change, probably faster than either of us expects. But I’d argue this is the right starting point regardless of where capabilities go next. Working through the configuration, understanding the architecture, seeing firsthand what agents do well and where they fall apart… that foundation only becomes more valuable as these tools mature.

I think of this as a phased progression:

Phase 1: Human-in-the-loop. Where I am now, and what this series covers. You’re actively reviewing every meaningful decision the agent makes. Stage gates everywhere, tight feedback loops, full understanding of the code being produced.

Phase 2: Mobile integration / remote control. Where I want to go next. The agent operates with more autonomy on well-defined tasks, but you stay in the loop through notifications and lightweight approvals. You’re not watching every keystroke, but you’re still steering.

Phase 3: Full autonomy. The long game. I don’t think this is ready for production work today, but it’s worth experimenting with on side projects. A few frameworks I’m watching:

  • wshobson/agents — Multi-agent orchestration for Claude Code
  • boshu2/agentops — DevOps layer for coding agents with flow, feedback, and persistent memory
  • BMAD-METHOD — Agile framework designed for AI-driven development
  • GSD — lightweight meta-prompting, context engineering, and spec-driven development system for Claude Code