Claude Code parallel agents: per-process guardrails that actually work
I started caring about claude code parallel agents when I stopped running one long session and started splitting work on purpose. One process fixes a flaky test. Another builds a feature. A third chews through a refactor I have been avoiding for a week. In theory that is great. In practice it gets weird fast. The third time I found two agents fighting over the same migration file, I gave up pretending I could eyeball it in real time. That is the niche AgentCheck fits into. It is a stdin/stdout governance proxy for Claude Code: you wrap each agent session, and the proxy watches its hook events, logs what happened as JSONL, and injects corrections when a rule fires. Install is simple: npm install -g github:paprika-org/agentcheck.
What claude code parallel agents actually look like in practice
When people say claude code parallel agents, they usually do not mean some fancy distributed system. They mean opening multiple Claude Code processes at the same time and giving each one a bounded job. I do this when I want one session fixing tests, one session writing a small feature, and one session handling cleanup that would otherwise derail the main thread.
That looks more like everyday terminal work than orchestration. Three shells. Three prompts. Three different branches if I am being careful, or one messy working tree if I am being honest. The appeal is obvious: less waiting, less context switching, more throughput on independent tasks. If the tasks really are independent, claude code parallel agents feels almost unfairly productive.
The problem is that independence is usually an assumption, not a fact. Two tasks that looked separate at 9:00 can both end up touching the same helper, test fixture, schema file, or build script by 9:07.
Where parallel execution goes wrong
The failure mode is not subtle. One agent edits a file and another agent is still reasoning from the old version. One fix lands cleanly but changes an assumption the other process made twenty seconds earlier. Then you get cascading context drift: agent A runs into the effects of agent B's change without knowing the change happened, and agent B "fixes" the resulting breakage by pushing the code farther away from what either task originally intended.
This is why claude code parallel agents becomes dangerous before it becomes obviously broken. Humans do not notice the drift in real time. We notice it later, when a test starts failing for a reason nobody remembers introducing, or when a refactor quietly undoes a feature branch's local patch. File conflicts are the visible part. The worse part is behavioral conflict: each agent taking locally reasonable actions based on stale context.
I hit this most often in files that attract incidental edits. Migrations. Shared utilities. Snapshot tests. CI config. The obvious collision is two agents editing the same file. The less obvious one is one agent changing the shape of something another agent is about to rely on.
Per-process guardrails instead of central control
AgentCheck does not try to solve this by becoming a central coordinator for every Claude process. That matters. It wraps each Claude Code session individually, sits on stdin/stdout, watches PreToolUse and PostToolUse hook events, writes them out as JSONL, and can inject a correction when one of your rules triggers.
I like that model because it is smaller and more honest. With claude code parallel agents, you do not always need a brain in the middle telling every worker what to do. A lot of the value comes from per-process guardrails: log the behavior, inspect the tool calls, and stop or warn when a session is about to do something that matches a bad pattern.
So AgentCheck is not pretending to be scheduling infrastructure. It is closer to a thin layer of governance around each process. That means you can add rules incrementally instead of redesigning how you run agents.
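The wrap-and-observe model is easy to picture: a process that sits between you and the child, relaying output while getting a chance to look at every line before the user does. Here is a minimal sketch of that shape in Python. To be clear, this is my own toy illustration of the model, not AgentCheck's internals; the function name and log format are mine.

```python
import json
import subprocess
import sys
from datetime import datetime, timezone

def run_wrapped(session, argv, log_path):
    """Run a child process, mirror its stdout, and log each line as JSONL.

    A toy version of the per-process wrapper: because the proxy sees
    every line before passing it through, this loop is the natural hook
    point where a real governance layer could match rules and inject
    corrections for this one session, without any central coordinator.
    """
    with open(log_path, "a") as log:
        proc = subprocess.Popen(argv, stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            event = {
                "ts": datetime.now(timezone.utc).isoformat(),
                "session": session,
                "hook": "OutputLine",
                "line": line.rstrip("\n"),
            }
            log.write(json.dumps(event) + "\n")
            sys.stdout.write(line)  # pass through unchanged
        proc.wait()
    return proc.returncode
```

The point of the sketch is the structure, not the logging details: each wrapped process gets its own observer, so adding a second or third agent is just running the wrapper again.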
What the JSONL looks like when two parallel agents hit the same territory
A realistic log is not glamorous. That is the point. You want enough structure to reconstruct what happened when two sessions wander into the same file at the same time. Here is the kind of JSONL I actually want when parallel work starts stepping on itself:
{"ts":"2026-04-21T14:12:05.118Z","session":"agent-test-fix","hook":"PreToolUse","tool":"Edit","file":"db/migrations/20260421_add_status.sql","task":"fix failing integration test"}
{"ts":"2026-04-21T14:12:05.201Z","session":"agent-refactor","hook":"PreToolUse","tool":"Edit","file":"db/migrations/20260421_add_status.sql","task":"extract migration helpers"}
{"ts":"2026-04-21T14:12:05.207Z","session":"agent-refactor","hook":"RuleTriggered","rule":"recent-file-touch","severity":"warn","file":"db/migrations/20260421_add_status.sql","message":"File was modified by another wrapped session in the same second"}
{"ts":"2026-04-21T14:12:05.209Z","session":"agent-refactor","hook":"CorrectionInjected","action":"stderr_warn","message":"Another agent just touched db/migrations/20260421_add_status.sql. Re-read before editing or choose a different task."}
{"ts":"2026-04-21T14:12:06.044Z","session":"agent-test-fix","hook":"PostToolUse","tool":"Edit","file":"db/migrations/20260421_add_status.sql","result":"success"}
{"ts":"2026-04-21T14:12:06.412Z","session":"agent-refactor","hook":"PostToolUse","tool":"Read","file":"db/migrations/20260421_add_status.sql","result":"success"}
That is the useful level of detail. Not magic. Just enough to show that two wrapped sessions converged on the same file, a rule noticed, and one process got nudged before blindly writing over the other. For claude code parallel agents, that kind of trace is a lot better than reconstructing the mess from shell history and vibes.
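Once events like these are on disk, spotting a collision after the fact is a small scripting job. Here is a sketch that scans merged JSONL for two different sessions issuing a PreToolUse Edit on the same file within a short window. The field names match the log above; the five-second threshold and the function name are my own choices, not anything AgentCheck ships.

```python
import json
from datetime import datetime, timedelta

def find_collisions(jsonl_lines, window_seconds=5):
    """Return (file, earlier_session, later_session) triples where two
    different sessions issued a PreToolUse Edit on the same file within
    window_seconds of each other."""
    edits = {}  # file path -> list of (timestamp, session)
    collisions = []
    for raw in jsonl_lines:
        event = json.loads(raw)
        if event.get("hook") != "PreToolUse" or event.get("tool") != "Edit":
            continue
        ts = datetime.fromisoformat(event["ts"].replace("Z", "+00:00"))
        path = event["file"]
        for prev_ts, prev_session in edits.get(path, []):
            same_session = prev_session == event["session"]
            close_in_time = ts - prev_ts <= timedelta(seconds=window_seconds)
            if not same_session and close_in_time:
                collisions.append((path, prev_session, event["session"]))
        edits.setdefault(path, []).append((ts, event["session"]))
    return collisions
```

Running this over the trace above would flag the migration file once, with agent-test-fix as the earlier session and agent-refactor as the later one, which is exactly the story the raw log tells if you squint at the timestamps.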
A minimal parallel Claude Code setup with AgentCheck
I keep this simple. The whole point is to wrap each Claude Code process separately. No central daemon required. No shared queue required. Just three tasks, three processes, each with the same proxy in front of it.
# install
npm install -g github:paprika-org/agentcheck

# agent 1: fix a test
agentcheck --session agent-test-fix -- claude code "Fix the failing auth integration test without touching production behavior"

# agent 2: write a feature
agentcheck --session agent-feature -- claude code "Add bulk archive support to the admin jobs page"

# agent 3: refactor
agentcheck --session agent-refactor -- claude code "Refactor notification formatting into a shared helper without changing output"
If you want config instead of long commands, a minimal setup looks like this:
{
"rules": [
{
"id": "recent-file-touch",
"when": { "hook": "PreToolUse", "tool": "Edit" },
"then": { "action": "warn" }
}
],
"sessions": [
{
"name": "agent-test-fix",
"command": "claude code \"Fix the failing auth integration test without touching production behavior\""
},
{
"name": "agent-feature",
"command": "claude code \"Add bulk archive support to the admin jobs page\""
},
{
"name": "agent-refactor",
"command": "claude code \"Refactor notification formatting into a shared helper without changing output\""
}
]
}
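To make the config concrete, here is one plausible reading of how a "when" clause like the one above could be matched against a hook event: every key in the clause must equal the corresponding field on the event. This is my sketch of minimal semantics, not AgentCheck's actual matcher.

```python
def rule_matches(rule, event):
    """True when every key in the rule's `when` clause equals the
    corresponding field on the event (a minimal, plausible semantics:
    keys the clause does not mention are ignored)."""
    return all(event.get(key) == value for key, value in rule["when"].items())

# The rule from the config above, as a plain dict.
rule = {
    "id": "recent-file-touch",
    "when": {"hook": "PreToolUse", "tool": "Edit"},
    "then": {"action": "warn"},
}

# An Edit event matches; a Read event on the same file would not,
# because the clause pins tool to "Edit".
event = {
    "hook": "PreToolUse",
    "tool": "Edit",
    "file": "db/migrations/20260421_add_status.sql",
}
```

Exact-match-on-listed-keys is the simplest semantics that makes the config readable, which is why I assume something close to it here.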
The important part is not the exact flag shape. It is the operating model: every Claude Code process gets wrapped on its own, every hook event can be observed, and every correction is local to that process.
Where this falls short
This is the part worth saying plainly. AgentCheck does not have cross-agent shared state in the strong sense. It does not prevent conflicts at the filesystem level. It is not a lock manager. It is not transactional. If two uncoordinated processes both write the same file, AgentCheck is not going to enforce atomic safety for you.
What it does help with is behavioral drift. It gives you a way to watch tool usage, log it cleanly, and inject corrections when a wrapped session starts heading somewhere suspicious. For claude code parallel agents, that is useful, but it is not the same thing as resource locking or full coordination.
So I would use it for visibility and guardrails, not for guarantees. If your workflow genuinely needs hard exclusion around shared files or shared state, you still need another mechanism for that.