Agentic AI · 7 min read

Claude Code MCP Servers: Extend Your AI Developer with Custom Tools

How to configure, build, and use MCP (Model Context Protocol) servers with Claude Code — connecting databases, APIs, GitHub, Slack, and custom tools to your AI workflow.

What Is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI models connect to external tools and data sources. Think of it as a USB port for AI — a standardized way to plug capabilities into any AI application that supports the protocol.

Claude Code has first-class MCP support. By configuring MCP servers, you can give Claude Code the ability to query databases, interact with GitHub, send Slack messages, read from Notion, execute SQL, manage cloud infrastructure, and connect to virtually any API or service.

How MCP Servers Work with Claude Code

An MCP server is a lightweight process that:

  1. Exposes tools — Functions that Claude Code can call (e.g., "query_database", "create_github_issue")
  2. Defines schemas — Input/output schemas for each tool so Claude knows how to call them
  3. Handles execution — Receives tool calls from Claude Code, executes them, and returns results

Claude Code communicates with MCP servers over stdin/stdout using JSON-RPC. The server runs locally on your machine alongside Claude Code.

[Claude Code] <--JSON-RPC--> [MCP Server] <--API calls--> [External Service]
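Concretely, each message on that channel is a single newline-delimited JSON-RPC 2.0 object. As an illustrative sketch (the tool name and arguments here are hypothetical, not from a specific server), this is roughly the line Claude Code writes to a server's stdin to invoke a tool:

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request as one newline-terminated
    line, the framing the MCP stdio transport uses."""
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg) + "\n"

line = make_tool_call(1, "query", {"sql": "SELECT 1"})
print(line, end="")
```

The server executes the tool and writes back a JSON-RPC response with the same id, which Claude Code matches to the pending call.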

Configuring MCP Servers

MCP servers are configured in a .mcp.json file at your project root (project scope, shared with your team through version control) or at user scope in ~/.claude.json. The claude mcp add command can write these entries for you.

Basic Configuration

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/mydb"]
    }
  }
}

Multiple Servers

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/mydb"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"]
    },
    "slack": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-slack"],
      "env": {
        "SLACK_BOT_TOKEN": "xoxb-xxxxxxxxxxxx"
      }
    }
  }
}

Configuration Fields

| Field   | Required | Description                                       |
|---------|----------|---------------------------------------------------|
| command | Yes      | The executable that launches the server           |
| args    | Yes      | Command-line arguments passed to the executable   |
| env     | No       | Environment variables passed to the server process |
| cwd     | No       | Working directory for the server process          |

Popular MCP Servers

The MCP ecosystem has grown rapidly. Here are some of the most useful servers for development workflows:

Database Servers

| Server     | Package                                | Capabilities                      |
|------------|----------------------------------------|-----------------------------------|
| PostgreSQL | @modelcontextprotocol/server-postgres  | Query, schema inspection          |
| SQLite     | @modelcontextprotocol/server-sqlite    | Query, schema inspection, writes  |
| MySQL      | @modelcontextprotocol/server-mysql     | Query, schema inspection          |

Example: Query your database directly

You: How many users signed up in the last 7 days? Break it down by day.

Claude Code (using postgres MCP):
[Tool Call] mcp__postgres__query
  SELECT DATE(created_at) as date, COUNT(*) as signups
  FROM users
  WHERE created_at >= NOW() - INTERVAL '7 days'
  GROUP BY DATE(created_at)
  ORDER BY date;

Result:
| date       | signups |
|------------|---------|
| 2026-01-05 | 142     |
| 2026-01-06 | 167     |
| 2026-01-07 | 153     |
| ...        | ...     |
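Under the hood, a database server's query tool boils down to: receive SQL as a tool argument, run it against the connection, and return the rows as text Claude can read. A simplified sketch using Python's built-in sqlite3 (the real server-postgres package does more, such as read-only enforcement and schema resources):

```python
import sqlite3

def handle_query_tool(conn: sqlite3.Connection, sql: str) -> str:
    """Execute SQL and format the result the way an MCP tool result
    might: a plain-text table, one row per line."""
    cur = conn.execute(sql)
    headers = [col[0] for col in cur.description]
    rows = cur.fetchall()
    lines = [" | ".join(headers)]
    lines += [" | ".join(str(v) for v in row) for row in rows]
    return "\n".join(lines)

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "linus")])
print(handle_query_tool(conn, "SELECT COUNT(*) AS n FROM users"))
```

Because the model only ever sees the formatted text result, the server controls exactly how much of the database is exposed.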

GitHub Server

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_xxxx"
      }
    }
  }
}

Capabilities:

  • Create and manage issues
  • Create and review pull requests
  • Search repositories
  • Read file contents from any GitHub repo
  • List branches, commits, and tags

Memory Server

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}

The memory server gives Claude Code a persistent knowledge graph. It can store entities, relationships, and facts that persist across sessions — useful for tracking project decisions, architecture notes, and team context.
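To make that concrete, here is a toy version of the data model such a knowledge-graph server maintains: named entities with attached observations, plus typed relations between them, persisted as JSON. This is an illustrative sketch, not the actual server-memory implementation:

```python
import json

class KnowledgeGraph:
    """Minimal entity/relation store, roughly the shape the memory
    server's tools operate on."""
    def __init__(self):
        self.entities: dict[str, list[str]] = {}        # name -> observations
        self.relations: list[tuple[str, str, str]] = []  # (source, relation, target)

    def add_entity(self, name: str, observations: list[str]) -> None:
        self.entities.setdefault(name, []).extend(observations)

    def relate(self, source: str, relation: str, target: str) -> None:
        self.relations.append((source, relation, target))

    def save(self, path: str) -> None:
        # Persisting to disk is what makes the graph survive across sessions
        with open(path, "w") as f:
            json.dump({"entities": self.entities, "relations": self.relations}, f)

kg = KnowledgeGraph()
kg.add_entity("auth-service", ["uses JWT with 15-minute expiry"])
kg.add_entity("api-gateway", ["routes /auth/* to auth-service"])
kg.relate("api-gateway", "depends_on", "auth-service")
```

In practice you never touch this structure directly; Claude Code calls the server's tools to create entities and relations as it learns about your project.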

Building a Custom MCP Server

When no existing server meets your needs, you can build your own. MCP servers are straightforward to implement in TypeScript or Python.

TypeScript MCP Server

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "deployment-manager",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "get_deployment_status",
  "Check the status of a Kubernetes deployment",
  {
    namespace: z.string().describe("Kubernetes namespace"),
    deployment: z.string().describe("Deployment name"),
  },
  async ({ namespace, deployment }) => {
    // Use execFileSync with an argument array so user-supplied names
    // cannot inject shell commands
    const { execFileSync } = await import("node:child_process");
    const result = execFileSync("kubectl", [
      "get", "deployment", deployment, "-n", namespace, "-o", "json",
    ]).toString();
    const parsed = JSON.parse(result);

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            name: parsed.metadata.name,
            replicas: parsed.spec.replicas,
            readyReplicas: parsed.status.readyReplicas,
            updatedReplicas: parsed.status.updatedReplicas,
            conditions: parsed.status.conditions,
          }, null, 2),
        },
      ],
    };
  }
);

server.tool(
  "scale_deployment",
  "Scale a Kubernetes deployment to a specified number of replicas",
  {
    namespace: z.string(),
    deployment: z.string(),
    replicas: z.number().min(0).max(50),
  },
  async ({ namespace, deployment, replicas }) => {
    const { execFileSync } = await import("node:child_process");
    execFileSync("kubectl", [
      "scale", "deployment", deployment,
      "-n", namespace, `--replicas=${replicas}`,
    ]);
    return {
      content: [
        {
          type: "text",
          text: `Scaled ${deployment} in ${namespace} to ${replicas} replicas`,
        },
      ],
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Python MCP Server

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
import subprocess

app = Server("deployment-manager")

@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="get_deployment_status",
            description="Check Kubernetes deployment status",
            inputSchema={
                "type": "object",
                "properties": {
                    "namespace": {"type": "string"},
                    "deployment": {"type": "string"},
                },
                "required": ["namespace", "deployment"],
            },
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_deployment_status":
        result = subprocess.run(
            ["kubectl", "get", "deployment", arguments["deployment"],
             "-n", arguments["namespace"], "-o", "json"],
            capture_output=True, text=True
        )
        return [TextContent(type="text", text=result.stdout)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Registering Your Custom Server

{
  "mcpServers": {
    "deployment-manager": {
      "command": "node",
      "args": [".claude/mcp-servers/deployment-manager/index.js"]
    }
  }
}

Security Considerations

MCP servers run locally with your user permissions. Security best practices:

  1. Audit third-party servers — Review the source code of any MCP server before installing
  2. Use read-only database connections — For the postgres MCP server, use a read-only database user
  3. Scope API tokens — Give GitHub tokens minimal required permissions
  4. Never commit secrets — Store tokens in environment variables, not in settings.json
  5. Restrict filesystem access — The filesystem server accepts path restrictions; always limit to the directories you need
For example, keep MCP secrets in an untracked env file:

# Create a .env file for MCP secrets (add it to .gitignore)
echo "GITHUB_TOKEN=ghp_xxxx" > .claude/.env
echo "DATABASE_URL=postgresql://readonly:pass@localhost/mydb" >> .claude/.env
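If your setup doesn't load .env files automatically, a small wrapper can export the variables into the environment before the MCP server process starts. A minimal parser, for illustration:

```python
import os

def load_env_file(path: str) -> dict[str, str]:
    """Parse KEY=VALUE lines (ignoring blanks and # comments) into a
    dict suitable for merging into a subprocess environment."""
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Merge with the current environment before spawning an MCP server
if os.path.exists(".claude/.env"):
    server_env = {**os.environ, **load_env_file(".claude/.env")}
```

This keeps the tokens out of any file that gets committed while still making them available to the server process.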

Debugging MCP Servers

When an MCP server fails to connect or a tool returns unexpected results:

# Test the server manually
npx -y @modelcontextprotocol/server-postgres "postgresql://localhost/mydb"

# Check MCP server status from inside a Claude Code session
/mcp

# Run Claude Code with debug logging to see MCP connection errors
claude --debug

Common issues:

  • Server fails to start — Check that the command and args are correct, and required env vars are set
  • Tools not appearing — Verify the server is listed in settings.json and restart Claude Code
  • Timeout errors — Increase the server's timeout or optimize the underlying query
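When a server starts but its tools never appear, it can also help to speak the protocol to it by hand. The sketch below builds the initialize request every MCP client sends first; pipe it (newline-terminated) into the server's stdin and a healthy server replies on stdout with its serverInfo and capabilities. The protocol version string is illustrative — use the revision your SDK targets:

```python
import json

def build_initialize_request(request_id: int = 1) -> dict:
    """First message of the MCP handshake: the client introduces itself
    and states the protocol revision it speaks."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # illustrative revision date
            "capabilities": {},
            "clientInfo": {"name": "debug-harness", "version": "0.1"},
        },
    }

print(json.dumps(build_initialize_request()))
```

If the server never answers this message, the problem is in startup or transport framing, not in your tool definitions.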

Real-World MCP Workflow

Here is an example of how MCP servers transform a typical development session:

You: The users are reporting slow page loads. Investigate and fix.

Claude Code:
1. [mcp__postgres__query] Check slow query log
2. [Grep] Find the endpoint responsible for the slow queries
3. [Read] Read the endpoint code
4. [mcp__postgres__query] Run EXPLAIN ANALYZE on the problematic query
5. [Edit] Add missing database index
6. [Bash] Create and run migration
7. [mcp__postgres__query] Verify query performance improved
8. [mcp__github__create_pull_request] Create PR with the fix
9. [mcp__slack__send_message] Notify the team in #engineering

Without MCP, steps 1, 4, 7, 8, and 9 would require manual intervention. With MCP, the entire workflow is autonomous.

Conclusion

MCP servers transform Claude Code from a code editor into a connected development platform. By plugging in database, GitHub, Slack, and custom servers, you give Claude Code the ability to investigate production issues, query data, manage infrastructure, and communicate with your team — all from a single terminal session. The protocol is open and extensible, so any tool or service can become part of your AI-assisted workflow.
