Discover how Schaltwerk solves real development challenges with isolated AI agents and parallel workflows.
Note: Command examples use bun. Replace bun run … with npm run … (and bun install with npm install) if you prefer npm.

Parallel Feature Development

Scenario: You need to build a user authentication system with registration, login, and password reset.
Without Schaltwerk: Work on one feature at a time, or risk merge conflicts if working in parallel.
With Schaltwerk:

1. Create specs

Spec 1: User Registration
- Email/password validation
- Hash passwords with bcrypt
- Store in database
- Send confirmation email

Spec 2: Login System
- JWT token generation
- Session management
- Token refresh mechanism

Spec 3: Password Reset
- Reset token generation
- Email reset link
- Secure token validation
- Password update flow

2. Launch agents

  • Claude Code works on registration
  • OpenCode handles login
  • Codex implements password reset
  • All in isolated worktrees

3. Work in parallel

  • No conflicts between agents
  • Each has own git branch
  • Switch between sessions to monitor

4. Review and integrate

  • Test each feature independently
  • Merge in dependency order
  • Run integration tests
Result: 3 features built in parallel, tested independently, integrated safely.

Bug Fixing at Scale

Scenario: Your test suite has 15 failing tests across different modules.
Without Schaltwerk: Fix bugs one at a time, or juggle multiple branches manually.
With Schaltwerk:

1. Identify bugs

bun run test 2>&1 | grep "FAIL"
# Output:
# FAIL auth/login.test.ts
# FAIL api/users.test.ts
# FAIL ui/ProfilePage.test.ts
# ... 12 more

2. Create specs

Create one spec per bug, including the test output, stack trace, and affected files (a scripted sketch follows).
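
If you want to script this step, a rough sketch using Bun and the local POST /api/sessions endpoint shown later on this page (under Code Review Assistance); the output parsing and payload fields are assumptions to adapt to your setup:

// Sketch: turn each failing test file into its own Schaltwerk spec.
import { $ } from 'bun';

// Collect the failing test files from the test run (stderr merged into stdout).
const output = await $`bun run test 2>&1`.nothrow().text();
const failingFiles = [...new Set(
  output
    .split('\n')
    .filter((line) => line.includes('FAIL'))
    .map((line) => line.replace(/^.*FAIL\s+/, '').trim()),
)];

// Create one spec session per failing file.
for (const file of failingFiles) {
  await fetch('http://localhost:8547/api/sessions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: `fix-${file.replace(/[^a-z0-9]+/gi, '-').toLowerCase()}`,
      spec_content: `Fix failing test: ${file}\n\nPaste the test output, stack trace, and affected files here.`,
    }),
  });
}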

3. Assign agents

Start sessions for each bug:
  • fix-auth-login → Claude Code
  • fix-api-users → Codex
  • fix-ui-profile → OpenCode
  • Continue for remaining bugs

4. Verify fixes

# In each session's bottom terminal
bun run test -- auth/login.test.ts
# Verify specific test passes

5. Merge passing fixes

  • Mark sessions as reviewed after tests pass
  • Merge fixes one by one
  • Re-run full suite after each merge
Result: 15 bugs fixed in parallel, each tested in isolation, zero regressions.

Safe Refactoring

Scenario: A legacy payment module needs refactoring to use the Strategy pattern for different payment providers.
Without Schaltwerk: Refactor on the main branch and hope tests catch issues, or manually maintain a complex branch structure.
With Schaltwerk:

1. Create refactoring spec

# Refactor Payment Module

## Goal
Extract payment provider logic into Strategy pattern

## Current State
- Single PaymentProcessor class with if/else for providers
- Stripe, PayPal, Square logic mixed together
- Hard to test, hard to add new providers

## Target State
- PaymentStrategy interface
- StripeStrategy, PayPalStrategy, SquareStrategy
- PaymentProcessor uses strategy pattern
- Each provider independently testable

## Constraints
- All existing tests must pass
- No breaking API changes
- Maintain current functionality

2. Start refactoring session

The agent works in an isolated worktree and can't break the main branch

3. Incremental approach (a sketch of the target interfaces follows the list)

1. Extract PaymentStrategy interface
2. Create StripeStrategy (test)
3. Create PayPalStrategy (test)
4. Create SquareStrategy (test)
5. Update PaymentProcessor to use strategies (test)
6. Remove old if/else logic (test)
7. Update all call sites (test)
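
For orientation, a minimal sketch of the target shape described in the spec; the request and result types are placeholders for whatever the real module uses:

// Target state: one strategy per provider, selected at runtime.
interface PaymentRequest { amount: number; currency: string; }
interface PaymentResult { success: boolean; transactionId?: string; }

interface PaymentStrategy {
  charge(request: PaymentRequest): Promise<PaymentResult>;
}

class StripeStrategy implements PaymentStrategy {
  async charge(request: PaymentRequest): Promise<PaymentResult> {
    // Call the Stripe SDK here.
    return { success: true, transactionId: 'stripe-placeholder' };
  }
}

// PayPalStrategy and SquareStrategy follow the same shape.

class PaymentProcessor {
  constructor(private readonly strategy: PaymentStrategy) {}

  process(request: PaymentRequest): Promise<PaymentResult> {
    return this.strategy.charge(request);
  }
}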

4. Validate thoroughly

# Run full test suite
bun run test

# Run integration tests
bun run test:integration

# Check code coverage
bun run coverage

# Performance benchmarks
bun run benchmark

5. Review changes

  • Inspect every file in diff view
  • Verify no functionality changed
  • Check performance metrics
  • Mark as reviewed only when confident
Result: Major refactoring completed safely, tested thoroughly, zero downtime.

Comparing Implementations

Scenario: You need to implement a caching layer but aren't sure which approach is best.
Without Schaltwerk: Implement one solution and hope it's optimal, or spend time prototyping manually.
With Schaltwerk:

1. Create identical specs

# Implement Caching Layer

## Requirements
- Cache API responses
- Configurable TTL
- LRU eviction policy
- Metrics for hit/miss rates
- Thread-safe

## Acceptance Criteria
- Sub-10ms cache hits
- 90%+ hit rate for repeated queries
- Memory usage under 100MB
- Pass all existing tests

2. Launch multiple agents

  • Session 1: Claude Code → Redis-based cache
  • Session 2: Codex → In-memory LRU cache
  • Session 3: OpenCode → Memcached-based cache

3. Compare implementations (an illustrative LRU sketch follows the comparison)

## Redis Approach (Claude Code)
Pros:
- Distributed caching
- Persistence
- Advanced data structures

Cons:
- Extra dependency
- Network latency
- More complex setup

## In-Memory LRU (Codex)
Pros:
- Fastest (no network)
- Simple implementation
- No external dependencies

Cons:
- Not distributed
- Lost on restart
- Limited by single machine memory

## Memcached (OpenCode)
Pros:
- Distributed
- Fast
- Battle-tested

Cons:
- External dependency
- Network latency
- No persistence
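
As a reference point for the comparison, a minimal sketch of what the in-memory LRU option can look like (illustrative only, not output from a session; single-threaded JavaScript sidesteps the thread-safety requirement):

// In-memory LRU cache with TTL and hit/miss counters.
// Relies on Map preserving insertion order: the first key is the least recently used.
class LruCache<V> {
  private readonly entries = new Map<string, { value: V; expiresAt: number }>();
  private hits = 0;
  private misses = 0;

  constructor(private readonly maxEntries: number, private readonly ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt < Date.now()) {
      if (entry) this.entries.delete(key); // drop expired entries lazily
      this.misses++;
      return undefined;
    }
    // Re-insert to mark the key as most recently used.
    this.entries.delete(key);
    this.entries.set(key, entry);
    this.hits++;
    return entry.value;
  }

  set(key: string, value: V): void {
    if (!this.entries.has(key) && this.entries.size >= this.maxEntries) {
      // Evict the least recently used key.
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.delete(key);
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get hitRate(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}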

4. Benchmark all three

# Run benchmarks in each session
bun run benchmark -- cache

# Compare results
Redis: 5ms avg, 98% hit rate, 50MB memory
LRU: 2ms avg, 95% hit rate, 80MB memory
Memcached: 4ms avg, 97% hit rate, 40MB memory

5. Choose winner

Pick the LRU approach (fastest, simplest, meets requirements). Merge that session and cancel the others.
Result: Three solutions tested in parallel, best one chosen based on data, not guesswork.

Exploratory Development

Scenario: You're exploring different UI frameworks for a new feature.
Without Schaltwerk: Create branches manually, switch between them, and clean up afterwards.
With Schaltwerk:

1. Create exploration specs

Spec 1: React + TailwindCSS
Spec 2: Vue + Bootstrap
Spec 3: Svelte + Custom CSS

2. Rapid prototyping

  • Each agent builds the same feature in a different framework
  • No setup overhead (automatic worktrees)
  • Switch between sessions to compare

3. Quick evaluation (a bundle-size script sketch follows the list)

  • Check bundle sizes
  • Measure load times
  • Review code complexity
  • Get user feedback on each
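
To make the bundle-size check concrete, a rough script; the worktree names and dist/ output directories are assumptions:

// Sum the size of the files directly inside each prototype's dist/ directory.
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

function bundleSizeKb(dir: string): number {
  return readdirSync(dir)
    .map((file) => statSync(join(dir, file)).size)
    .reduce((total, size) => total + size, 0) / 1024;
}

for (const dist of ['react-tailwind/dist', 'vue-bootstrap/dist', 'svelte-custom/dist']) {
  console.log(`${dist}: ${bundleSizeKb(dist).toFixed(1)} KB`);
}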

4. Decide and clean up

  • Merge chosen approach
  • Convert others to specs for documentation
  • Or cancel if not needed
Result: Rapid exploration of alternatives without cluttering your main branch.

Code Review Assistance

Scenario: A large PR needs review, but you're short on time.
Without Schaltwerk: Review manually or use basic GitHub tools.
With Schaltwerk:

1. Create review session via MCP

// Create a review session through Schaltwerk's local API.
// prNumber and prDiff come from your own tooling (for example the GitHub CLI).
await fetch('http://localhost:8547/api/sessions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    name: `review-pr-${prNumber}`,
    spec_content: `Review PR #${prNumber}\n\n${prDiff}`,
    select: true
  })
});

2. Agent reviews code

  • Claude Code analyzes diff
  • Identifies potential bugs
  • Suggests improvements
  • Checks test coverage

3. Get structured feedback

## Security Issues
- Line 45: SQL injection vulnerability

## Performance Concerns
- Line 102: N+1 query problem

## Best Practices
- Line 67: Consider extracting to separate function
- Line 89: Missing error handling

## Tests
- Missing tests for edge case: null input
- Integration tests needed for API changes

4. Manual review

Use the AI feedback as a starting point for your own review
Result: Faster, more thorough code reviews with AI assistance.

Automated Issue Triage

Scenario: New issues are filed daily and need to be triaged and assigned.
Without Schaltwerk: Manually create branches, assign developers, and track progress.
With Schaltwerk:

1. Set up webhook

// GitHub webhook handler (Express-style; make sure JSON bodies are parsed,
// e.g. with app.use(express.json())).
app.post('/webhook/issues', async (req, res) => {
  const issue = req.body.issue;

  // GitHub sends labels as objects, so match on the label name.
  if (issue.labels.some((label) => label.name === 'bug')) {
    // Create a session for the bug fix. createSchaltwerkSession is a small
    // wrapper around POST /api/sessions (see the sketch after this block).
    await createSchaltwerkSession({
      name: `fix-issue-${issue.number}`,
      prompt: `Fix: ${issue.title}\n\n${issue.body}\n\nReported by: ${issue.user.login}`,
      agent_type: 'claude'
    });
  }

  res.status(200).send('OK');
});
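
createSchaltwerkSession is not defined on this page; a minimal sketch, assuming the same POST /api/sessions endpoint used in the Code Review Assistance example above (the exact fields Schaltwerk accepts may differ):

// Hypothetical helper: forwards session options to the local Schaltwerk API.
async function createSchaltwerkSession(options: {
  name: string;
  prompt?: string;
  spec_content?: string;
  agent_type?: string;
  select?: boolean;
}): Promise<void> {
  const response = await fetch('http://localhost:8547/api/sessions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(options),
  });
  if (!response.ok) {
    throw new Error(`Failed to create session: ${response.status}`);
  }
}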

2. Auto-create sessions

  • Webhook receives new issue
  • Creates Schaltwerk session automatically
  • Agent starts investigating

3. Agent investigates

  • Reproduces issue
  • Identifies root cause
  • Proposes fix
  • Adds tests

4. Human review

  • Review proposed fix in Schaltwerk
  • Test locally
  • Merge if good, or provide feedback
Result: Issues automatically converted to actionable sessions, faster response time.

Documentation Generation

Scenario: Code changes, but documentation lags behind.
Without Schaltwerk: Manually update docs, which is often forgotten.
With Schaltwerk:

1. Create doc update spec

# Update API Documentation

## Changes
- New endpoint: POST /api/users/:id/avatar
- Updated endpoint: GET /api/users (added pagination)
- Deprecated: GET /api/user (use /api/users/:id)

## Tasks
- Update OpenAPI spec
- Update README examples
- Add migration guide
- Update changelog

2. Agent updates docs

  • Scans code for API changes
  • Updates relevant documentation
  • Maintains consistent formatting

3. Review doc changes

  • Verify accuracy
  • Check examples work
  • Ensure clarity

4. Merge with code changes

  • Docs stay in sync with code
  • No outdated documentation
Result: Documentation always reflects current code state.

Performance Optimization

Scenario: The application is slow and multiple bottlenecks have been identified.
Without Schaltwerk: Optimize one bottleneck at a time, with a long iteration cycle.
With Schaltwerk:

1. Identify bottlenecks

bun run benchmark
# Output:
# API response time: 500ms (slow)
# Database queries: 200ms (acceptable)
# Frontend render: 800ms (slow)
# Image loading: 1200ms (slow)

2. Create optimization specs

  • Spec 1: Optimize API response time
  • Spec 2: Optimize frontend rendering
  • Spec 3: Optimize image loading

3. Parallel optimization (a code-splitting sketch follows the list)

  • Claude Code optimizes API (caching, query optimization)
  • Codex optimizes frontend (code splitting, lazy loading)
  • OpenCode optimizes images (compression, lazy loading, CDN)
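
For example, assuming a React frontend (this page does not specify one), a typical code-splitting change might look like this; ProfilePage is a placeholder component with a default export:

// Lazy-load a heavy route so it ships in its own bundle.
import { lazy, Suspense } from 'react';

const ProfilePage = lazy(() => import('./ProfilePage'));

export function ProfileRoute() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <ProfilePage />
    </Suspense>
  );
}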

4. Benchmark improvements

# Session 1: API optimization
API response time: 80ms (6x improvement)

# Session 2: Frontend optimization
Frontend render: 200ms (4x improvement)

# Session 3: Image optimization
Image loading: 300ms (4x improvement)

5. Merge all improvements

  • Integrate optimizations
  • Final benchmark: roughly 4x faster overall
Result: Multiple optimizations in parallel, massive speedup.

Test Coverage Improvement

Scenario: Code coverage is 45% and needs to reach 80%+.
Without Schaltwerk: Write tests manually, with slow progress.
With Schaltwerk:

1. Identify untested code

bun run coverage
# Identifies files with low coverage
# auth/login.ts: 30%
# api/users.ts: 40%
# ui/Profile.tsx: 25%
# ... 20 more files

2. Create test specs (a sample test follows)

One spec per low-coverage file:
# Add Tests for auth/login.ts

Current Coverage: 30%
Target: 90%

Missing Tests:
- Login with invalid credentials
- Login with expired token
- Login with rate limiting
- Login with 2FA
- Error handling edge cases
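
A sketch of one such test using Bun's built-in test runner; the login import and the error it throws are assumptions about the codebase:

// Invalid-credentials case for auth/login.ts.
import { describe, expect, it } from 'bun:test';
import { login } from './login'; // hypothetical login(email, password) helper

describe('login', () => {
  it('rejects invalid credentials', async () => {
    await expect(login('user@example.com', 'wrong-password')).rejects.toThrow(
      'Invalid credentials',
    );
  });
});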

3. Agents write tests

  • Multiple agents work on different files
  • Each writes comprehensive test suites
  • Each follows the project's existing test patterns

4. Verify coverage

# After merging all test sessions
bun run coverage
# Total coverage: 87% ✓
Result: Test coverage improved from 45% to 87% efficiently.

Migration Projects

Scenario: Migrate from JavaScript to TypeScript gradually.
Without Schaltwerk: Migrate files one by one on the main branch, which is risky.
With Schaltwerk:

1. Plan migration (a before/after example follows the plan)

# TypeScript Migration Plan

## Phase 1: Utilities (10 files)
Low risk, no dependencies

## Phase 2: Components (25 files)
Moderate risk, some dependencies

## Phase 3: Business Logic (30 files)
High risk, many dependencies

## Phase 4: Entry Points (5 files)
Critical, final integration
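
As a concrete Phase 1 example, a hypothetical utility before and after migration:

// Before (utils/formatPrice.js):
//   export function formatPrice(amount, currency) {
//     return new Intl.NumberFormat('en-US', { style: 'currency', currency }).format(amount);
//   }

// After (utils/formatPrice.ts): same behaviour, explicit types.
export function formatPrice(amount: number, currency: string): string {
  return new Intl.NumberFormat('en-US', { style: 'currency', currency }).format(amount);
}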

2. Create sessions per phase

  • Session 1: Migrate utilities
  • Session 2: Migrate components (depends on Session 1)
  • Session 3: Migrate business logic (depends on Session 2)
  • Session 4: Migrate entry points (depends on Session 3)

3. Incremental migration

  • Complete Phase 1, test, merge
  • Complete Phase 2, test, merge
  • Continue through all phases

4. Verify at each step

# After each phase
bun run typecheck
bun run test
bun run build
Result: Complete migration with minimal risk, tested at every step.

Best Practices Across Use Cases

Start Small

Begin with simple use cases. Build confidence before moving to complex workflows.

Name Clearly

Use descriptive session names: fix-login-bug, not session1.

Test Everything

Run tests in each session before merging. Catch issues early.

Review Carefully

AI agents make mistakes. Always review generated code.

Iterate Quickly

Fast feedback loops. Don't let sessions pile up.

Document Workflows

Save successful patterns as templates. Reuse proven approaches.

Next Steps