Agentic AI · 8 min read

Setting Up Claude Code for a Team: Best Practices and Configurations

How to roll out Claude Code across a development team — shared CLAUDE.md, custom commands, permission policies, cost management, onboarding, and team-wide standards.

From Individual Tool to Team Platform

Claude Code is powerful for individual developers. But its value multiplies when adopted by a team with shared configurations. A well-configured team setup means every developer's Claude Code sessions follow the same coding conventions, use the same custom commands, respect the same permission boundaries, and produce code that is consistent across the codebase.

This guide covers the complete process of rolling out Claude Code to a development team.

Step 1: Create the Team CLAUDE.md

The most important step is creating a comprehensive CLAUDE.md at the project root. This file should be authored collaboratively and committed to the repository.

Structure for a Team CLAUDE.md

# Project: [Name]

## Team Information
- Team size: 8 developers
- Primary language: TypeScript (backend and frontend)
- Deployment: Kubernetes on AWS EKS
- CI/CD: GitHub Actions

## Tech Stack
[List every framework, library, and tool with version numbers]

## Project Structure
[Directory layout with purpose of each directory]

## Coding Conventions
[Naming, formatting, import rules, error handling patterns]

## Architecture Decisions
[Key ADRs — why we chose X over Y]

## Commands
[How to build, test, lint, deploy]

## Do Not
[Explicit list of patterns/practices to avoid]

Real Example

# Project: OrderFlow

## Tech Stack
- Runtime: Node.js 20 LTS
- Language: TypeScript 5.4 (strict mode)
- Backend: Fastify 4.x with TypeBox validation
- ORM: Drizzle ORM with PostgreSQL 16
- Frontend: Next.js 14 (App Router)
- Styling: Tailwind CSS 3.4
- Testing: Vitest (unit), Playwright (E2E)
- Monorepo: Turborepo

## Project Structure
packages/
  api/            # Fastify backend
  web/            # Next.js frontend
  shared/         # Shared types, utils, validation schemas
  db/             # Drizzle schema and migrations
  email/          # Email templates and sending service

## Coding Conventions
- Named exports only (no default exports)
- Explicit return types on all exported functions
- Use TypeBox for API request/response validation
- Error responses: { error: string, code: string, details?: unknown }
- Dates: Store as UTC timestamps, display in user's timezone
- IDs: UUID v7 (time-sortable)
- Imports: Use workspace aliases (@api/, @web/, @shared/)

## Database
- Generate migration SQL from schema changes: pnpm --filter db generate
- Apply migrations: pnpm --filter db migrate
- Naming: snake_case for tables and columns
- Always add created_at and updated_at to new tables
- Soft delete with deleted_at column for user-facing resources

## Testing
- Unit tests: pnpm test (runs vitest across all packages)
- E2E tests: pnpm test:e2e (runs Playwright)
- All API endpoints need integration tests
- Minimum coverage: 80% for new code

## Git Workflow
- Branch naming: feature/PROJ-123-short-description
- Commits: Conventional commits (feat:, fix:, chore:, refactor:)
- PRs: Squash merge to main
- Required: 1 approval + passing CI

## Do Not
- Never use any type — use unknown and narrow
- Never use console.log — use the structured logger (@shared/logger)
- Never commit .env files
- Never use synchronous file operations
- Never import from barrel files (index.ts) in the same package
- Never use string concatenation for SQL
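Conventions like these stick better when the CLAUDE.md points at a concrete snippet. A minimal TypeScript sketch of the error-response shape, named exports, and explicit return types (the `ApiError` type and `notFound` helper are hypothetical illustrations, not actual OrderFlow code):

```typescript
// Hypothetical module illustrating the conventions above: named exports,
// explicit return types, and the shared error-response shape.
export type ApiError = { error: string; code: string; details?: unknown };

// Build the standard error body for a missing resource.
export function notFound(resource: string): ApiError {
  return { error: `${resource} not found`, code: "NOT_FOUND" };
}
```

A snippet like this gives Claude Code (and human reviewers) an unambiguous reference for what "the error shape" means.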

Step 2: Configure Shared Settings

Create .claude/settings.json at the project root for shared tool permissions:

{
  "permissions": {
    "allow": [
      "Bash(pnpm test:*)",
      "Bash(pnpm lint:*)",
      "Bash(pnpm build:*)",
      "Bash(pnpm --filter:*)",
      "Bash(npx tsc --noEmit)",
      "Bash(git status)",
      "Bash(git diff:*)",
      "Bash(git log:*)",
      "Bash(git branch:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push:*)",
      "Bash(pnpm publish:*)"
    ]
  }
}

Note the `:*` suffix: in Claude Code's permission syntax, `Bash(pnpm test:*)` matches commands starting with `pnpm test`, while `Bash(git status)` matches that exact command only.

This configuration:

  • Allows testing, linting, building, and git read operations without prompting
  • Denies destructive operations (force delete, force push, publishing)
  • Requires approval for everything else (file writes, other bash commands)
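Individual developers can layer personal preferences on top of the shared file via `.claude/settings.local.json`, which Claude Code treats as a personal, untracked override. A sketch (the allowed command is just an example):

```json
{
  "permissions": {
    "allow": [
      "Bash(pnpm why:*)"
    ]
  }
}
```

This keeps the committed `settings.json` as the team baseline while letting each developer pre-approve the extra commands they run constantly.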

Step 3: Create Custom Slash Commands

Encode your team's common workflows as custom slash commands:

Feature Development Command

<!-- .claude/commands/new-feature.md -->
Implement a new feature: $ARGUMENTS

Follow this process:
1. Read the relevant existing code to understand patterns
2. Create/update the database schema if needed (in packages/db/)
3. Implement the backend API endpoints (in packages/api/)
4. Add TypeBox validation schemas
5. Create the frontend components (in packages/web/)
6. Write integration tests for all new endpoints
7. Run the full test suite: pnpm test
8. Fix any failing tests
9. Run the linter: pnpm lint
10. Fix any lint issues

PR Preparation Command

<!-- .claude/commands/prep-pr.md -->
Prepare the current changes for a pull request:

1. Run all tests: pnpm test
2. Run the linter: pnpm lint
3. Run type checking: npx tsc --noEmit
4. Review the diff: git diff main...HEAD
5. Check for any TODO/FIXME comments in changed files
6. Verify no console.log statements in changed files
7. Check that all new files have proper exports
8. Report any issues found
9. If all checks pass, suggest a PR title and description based on the changes
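Steps 5 and 6 of this command can also be enforced deterministically in CI, so the slash command and the pipeline apply the same rules. A rough TypeScript sketch of such a checker (the function name and rule set are illustrative, not an existing team tool):

```typescript
// Illustrative checker for the "no TODO/FIXME, no console.log" steps:
// scan a file's contents and report each offending line.
export function findViolations(source: string): string[] {
  const rules: Array<[name: string, pattern: RegExp]> = [
    ["todo-comment", /\b(TODO|FIXME)\b/],
    ["console-log", /\bconsole\.log\s*\(/],
  ];
  return source.split("\n").flatMap((line, i) =>
    rules
      .filter(([, pattern]) => pattern.test(line))
      .map(([name]) => `${name} at line ${i + 1}`),
  );
}
```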

Database Migration Command

<!-- .claude/commands/migrate.md -->
Create a database migration for: $ARGUMENTS

1. Update the Drizzle schema in packages/db/schema/
2. Generate the migration: pnpm --filter db generate
3. Review the generated SQL migration file
4. Check for:
   - Missing indexes on foreign key columns
   - NOT NULL columns without defaults on existing tables
   - Potential data loss (dropping columns/tables)
5. Report any issues with the migration
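Part of step 4 can be automated with a pre-review script. A hedged TypeScript sketch covering two of the three checks (regex heuristics only, not a real SQL parser, and not part of Drizzle):

```typescript
// Heuristic scan of a generated migration file for risky patterns.
// Regexes are a rough filter, not a SQL parser.
export function riskyStatements(sql: string): string[] {
  const checks: Array<[message: string, pattern: RegExp]> = [
    ["drops a table or column", /\bDROP\s+(TABLE|COLUMN)\b/i],
    ["adds NOT NULL without a default", /ADD\s+COLUMN[^;]*NOT\s+NULL(?![^;]*DEFAULT)/i],
  ];
  return checks
    .filter(([, pattern]) => pattern.test(sql))
    .map(([message]) => message);
}
```

The missing-index check needs schema awareness, so it is better left to Claude Code's review of the generated file.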

Step 4: Cost Management

Per-Developer Budgets

For API-billed usage, have each developer track their own spend (and configure spend alerts in your billing dashboard):

# Track individual usage from inside a Claude Code session
/cost  # shows token usage and cost for the current session

Model Selection Guidelines

Create team guidelines for model selection:

## Model Usage Guidelines (add to CLAUDE.md)

### Use Sonnet (faster, cheaper) for:
- Simple bug fixes
- Adding tests to existing code
- Code formatting and style fixes
- Simple CRUD endpoint creation
- Running commands and checking output

### Use Opus (more capable) for:
- Complex feature implementation
- Architecture decisions
- Security reviews
- Large-scale refactoring
- Debugging complex multi-service issues

Cost Tracking

#!/bin/bash
# Weekly cost summary. claude-usage is a placeholder for a team-built
# wrapper around your usage/billing data; Claude Code does not ship
# such a CLI.
echo "Claude Code usage this week:"
echo "  API costs: $(claude-usage --since '7 days ago' --format cost)"
echo "  Sessions: $(claude-usage --since '7 days ago' --format count)"
echo "  Avg cost/session: $(claude-usage --since '7 days ago' --format avg)"

Step 5: Onboarding New Developers

Onboarding Checklist

## Claude Code Onboarding

1. Install Claude Code: npm install -g @anthropic-ai/claude-code
2. Authenticate: Run `claude` and follow the auth flow
3. Run /doctor to verify setup
4. Read the project CLAUDE.md (it is your AI pair's instruction manual)
5. Try custom commands:
   - /prep-pr — prepare a pull request
   - /new-feature — implement a feature
   - /migrate — create a database migration
6. Review the permission settings in .claude/settings.json
7. Start with small tasks to build familiarity

Starter Exercises

Give new team members specific tasks to practice with Claude Code:

## Practice Tasks (in order of complexity)

1. Use Claude Code to add a new field to an existing API endpoint
2. Use Claude Code to write tests for an untested service
3. Use Claude Code to debug a known bug (provide the bug report)
4. Use Claude Code to implement a small feature end-to-end
5. Use Claude Code to review a teammate's PR

Step 6: Establish Review Standards

AI-Generated Code Review Policy

## Code Review Standards for AI-Generated Code

AI-generated code receives the same review scrutiny as human-written code.
Reviewers should pay special attention to:

1. **Business logic correctness** — Does the code implement the right behavior?
2. **Edge cases** — Are boundary conditions handled?
3. **Security** — Input validation, auth checks, data exposure
4. **Performance** — Query efficiency, pagination, caching
5. **Consistency** — Does it follow our patterns in CLAUDE.md?

The author is responsible for understanding every line of AI-generated code.
"Claude wrote it" is not an acceptable justification during code review.

Step 7: Iterate on CLAUDE.md

The CLAUDE.md file is a living document. Schedule regular updates:

Monthly CLAUDE.md Review

During sprint retrospective:
1. Were there recurring issues in Claude Code's output?
   -> Add corrective instructions to CLAUDE.md
2. Did Claude Code miss a convention?
   -> Document the convention explicitly
3. Were there new patterns adopted this month?
   -> Add them to CLAUDE.md
4. Is the CLAUDE.md getting too long (>300 lines)?
   -> Move detailed sections to .claude/docs/ and link to them

Measuring Team Adoption Success

Track these metrics:

| Metric | Target | How to Measure |
| --- | --- | --- |
| Adoption rate | 100% of developers using weekly | Usage logs |
| First-attempt code quality | <2 review rounds for AI-generated PRs | PR metrics |
| Convention compliance | <5% of review comments about conventions | Code review data |
| Time to first feature | New developers productive in <3 days | Onboarding tracking |
| Cost per developer | <$200/month average | API billing |

Conclusion

Setting up Claude Code for a team is about creating shared context (CLAUDE.md), shared workflows (custom commands), shared guardrails (permissions), and shared expectations (review standards). When done well, it creates a multiplier effect — every developer benefits from the collective knowledge encoded in the project's Claude Code configuration, and AI-generated code is consistent across the entire team.
