Getting Started with the Anthropic Python SDK: Installation and First Claude API Call
Learn how to install the Anthropic Python SDK, configure your API key, make your first Claude API call using the messages endpoint, and parse structured responses for agent development.
Why Claude for Agent Development
Anthropic's Claude family of models has become a leading choice for building agentic AI systems. Claude's strong instruction-following, large context windows (up to 200K tokens), native tool use, and extended thinking capabilities make it particularly well-suited for complex multi-step agent workflows. The Anthropic Python SDK provides a clean, type-safe interface to all of these features.
In this tutorial, you will install the SDK, configure authentication, make your first API call, and understand how to parse responses — the foundation for everything that follows in agent development.
Prerequisites
Before starting, ensure you have:
- Python 3.8 or later installed
- An Anthropic API key from console.anthropic.com
- Basic familiarity with Python
Step 1: Install the Anthropic SDK
Install the official package with pip:
```bash
pip install anthropic
```
This installs the anthropic package with all core dependencies including httpx for HTTP transport and pydantic for type validation. For async applications, no extra install is needed — async support is built in.
Verify the installation:
```bash
python -c "import anthropic; print(anthropic.__version__)"
```
Step 2: Configure Your API Key
Set your API key as an environment variable:
```bash
export ANTHROPIC_API_KEY="sk-ant-api03-your-key-here"
```
The SDK automatically reads this variable. You can also pass it explicitly:
```python
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-api03-your-key-here")
```
Security note: Never commit API keys to version control. Use environment variables or a secrets manager in production.
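One way to apply that advice is to fail fast when the variable is missing, rather than letting the SDK raise deep inside a request. The helper below is a minimal sketch; `get_api_key` is our own name, not part of the SDK:

```python
import os

def get_api_key() -> str:
    """Read the Anthropic API key from the environment, failing fast if unset."""
    key = os.environ.get("ANTHROPIC_API_KEY")
    if not key:
        raise RuntimeError("ANTHROPIC_API_KEY is not set; export it before running")
    return key

# Usage (assumes the anthropic package is installed):
# client = anthropic.Anthropic(api_key=get_api_key())
```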
Step 3: Make Your First API Call
The messages API is the primary interface for all Claude interactions:
```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain what an AI agent is in three sentences."}
    ]
)

print(message.content[0].text)
```
This sends a single user message to Claude and prints the text response. The model parameter specifies which Claude model to use — claude-sonnet-4-20250514 offers the best balance of speed and capability for most agent tasks.
Step 4: Parse the Response Object
The response object contains rich metadata beyond just the text:
```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=512,
    messages=[
        {"role": "user", "content": "What is tool use in LLMs?"}
    ]
)

# The response text
print(message.content[0].text)

# Token usage for cost tracking
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

# Stop reason tells you why generation ended
print(f"Stop reason: {message.stop_reason}")

# Model used
print(f"Model: {message.model}")
```
The stop_reason field is critical for agent loops: it tells you whether the model finished naturally (end_turn), hit the token limit (max_tokens), or wants to call a tool (tool_use).
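A minimal sketch of how an agent loop might branch on this field; the returned action labels are illustrative, not part of the SDK:

```python
def next_action(stop_reason: str) -> str:
    """Map a Message.stop_reason value to an agent loop's next step."""
    if stop_reason == "end_turn":
        return "done"       # model finished its answer naturally
    if stop_reason == "tool_use":
        return "run_tool"   # execute the requested tool, then send results back
    if stop_reason == "max_tokens":
        return "continue"   # output was truncated; request a continuation
    return "inspect"        # unexpected value; log it and investigate

# In a real loop: next_action(message.stop_reason) after each
# client.messages.create call decides whether to stop, run a tool, or resume.
```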
Step 5: Async Client for Production
For web servers and concurrent agent systems, use the async client:
```python
import asyncio
import anthropic

async def ask_claude(question: str) -> str:
    client = anthropic.AsyncAnthropic()
    message = await client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[
            {"role": "user", "content": question}
        ]
    )
    return message.content[0].text

result = asyncio.run(ask_claude("What are agentic workflows?"))
print(result)
```
The async client uses the same API as the sync client but returns awaitable coroutines, making it ideal for FastAPI endpoints or multi-agent orchestration where you need to run multiple Claude calls concurrently.
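To sketch what "concurrently" means in practice: the fan-out itself is plain `asyncio.gather`. The helper below is generic so it can be shown without live API calls; `gather_answers` is our own name, and in real use `ask` would be an async function wrapping `client.messages.create`, such as the `ask_claude` defined above:

```python
import asyncio
from typing import Awaitable, Callable, List

async def gather_answers(
    ask: Callable[[str], Awaitable[str]], questions: List[str]
) -> List[str]:
    """Run one ask() coroutine per question concurrently; results keep input order."""
    return await asyncio.gather(*(ask(q) for q in questions))

# Real usage sketch (requires an API key):
# answers = asyncio.run(gather_answers(ask_claude, ["What is RAG?", "What is MCP?"]))
```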
Error Handling
Always handle API errors gracefully in production code:
```python
import anthropic

client = anthropic.Anthropic()

try:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(message.content[0].text)
except anthropic.AuthenticationError:
    print("Invalid API key")
except anthropic.RateLimitError:
    print("Rate limited; implement exponential backoff")
except anthropic.APIStatusError as e:
    # Only status errors carry an HTTP status code
    print(f"API error: {e.status_code} {e.message}")
except anthropic.APIConnectionError:
    print("Could not reach the API; check your network")
```
The SDK provides typed exceptions for every error category, making it straightforward to handle rate limits, authentication failures, and server errors differently.
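The SDK also retries some failures automatically (configurable via the client's `max_retries` option), but agent loops often want their own policy. Below is one sketch of jittered exponential backoff; `with_backoff` is our own helper, not an SDK function:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, is_retryable=lambda exc: True):
    """Retry a zero-argument callable with jittered exponential backoff.

    In real use, `call` would be a lambda wrapping client.messages.create(...)
    and `is_retryable` would return True only for transient errors such as
    anthropic.RateLimitError.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if attempt == max_retries - 1 or not is_retryable(exc):
                raise
            # Sleep base_delay * 2^attempt, plus proportional random jitter.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay)
```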
FAQ
What Claude model should I use for agents?
Use claude-sonnet-4-20250514 for most agent tasks; it offers strong reasoning and tool use at moderate cost. Use claude-opus-4-20250514 for tasks requiring deep analysis or complex multi-step reasoning. Use claude-3-5-haiku-20241022 for high-volume, low-latency tasks like classification or routing.
Is the async client required for agent development?
Not required, but strongly recommended. Agent systems typically involve multiple concurrent API calls, tool executions, and I/O operations. The async client lets you run these in parallel without blocking, significantly improving throughput in production.
How do I track API costs?
Every response includes usage.input_tokens and usage.output_tokens. Multiply these by the per-token pricing for your model. For Sonnet, input tokens cost roughly $3 per million and output tokens $15 per million. Build token tracking into your agent loop from day one.
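Using the rough figures above (verify against current pricing before relying on them), the per-call estimate is simple arithmetic. The constant and function names here are our own:

```python
# Approximate Claude Sonnet prices in USD per million tokens, as quoted above.
SONNET_INPUT_USD_PER_MTOK = 3.0
SONNET_OUTPUT_USD_PER_MTOK = 15.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API call from its usage counts."""
    return (input_tokens * SONNET_INPUT_USD_PER_MTOK
            + output_tokens * SONNET_OUTPUT_USD_PER_MTOK) / 1_000_000

# In an agent loop:
# estimate_cost(message.usage.input_tokens, message.usage.output_tokens)
```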
#Anthropic #Claude #PythonSDK #GettingStarted #Tutorial #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.