Getting Started with the OpenAI Python SDK: Installation and First API Call
Learn how to install the OpenAI Python SDK, configure your API key, make your first chat completion request, and parse the response object. A complete beginner-friendly walkthrough.
Why the OpenAI Python SDK
The OpenAI Python SDK is the official client library for interacting with OpenAI's APIs. While you could hit the REST endpoints directly with requests or httpx, the SDK gives you type-safe request and response objects, automatic retries, streaming helpers, and a clean interface that mirrors the API exactly. Whether you are building a chatbot, a content pipeline, or an agentic system, the SDK is the foundation everything else sits on.
This post walks you through installation, configuration, your first API call, and how to work with the response object.
Installation
Install the SDK with pip:
pip install openai
This installs the openai package along with its dependencies, including httpx, pydantic, and typing-extensions. Verify the installation:
python -c "import openai; print(openai.__version__)"
You should see a version like 1.x.x. The SDK follows semantic versioning, so minor and patch releases within the 1.x line avoid breaking changes.
Configuring Your API Key
The SDK reads your API key from the OPENAI_API_KEY environment variable by default:
export OPENAI_API_KEY="sk-proj-your-key-here"
For a more portable setup, use a .env file with python-dotenv:
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # loads OPENAI_API_KEY from .env
client = OpenAI()  # automatically picks up the env var
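The .env file itself is just one variable per line, placed next to your script (the key below is a placeholder):

```shell
OPENAI_API_KEY=sk-proj-your-key-here
```

Remember to add .env to your .gitignore so the key never lands in version control.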
You can also pass the key explicitly when creating the client:
client = OpenAI(api_key="sk-proj-your-key-here")
Security rule: Never commit API keys to version control. Use environment variables, .env files added to .gitignore, or a secrets manager.
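To make that rule concrete, a small helper (hypothetical, not part of the SDK) can fail fast with a clear message when the variable is missing, instead of letting the first API call fail obscurely:

```python
import os


def get_api_key() -> str:
    """Read the API key from the environment; fail fast with a clear message."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it or add it to a .env file "
            "(which should be listed in .gitignore)."
        )
    return key
```

Passing the result to OpenAI(api_key=get_api_key()) surfaces a configuration problem at startup rather than deep inside your application.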
Your First Chat Completion
The chat.completions.create method is the core of the SDK. Here is a complete example:
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful Python tutor."},
        {"role": "user", "content": "Explain list comprehensions in one paragraph."},
    ],
)
print(response.choices[0].message.content)
This sends a request to the Chat Completions API with a system message that sets the assistant's behavior and a user message with the actual question. The response comes back as a structured ChatCompletion object.
Understanding the Response Object
The response object has a well-defined structure. Here is how to inspect it:
# The full response object
print(response.model_dump_json(indent=2))
# Key fields
print(f"Model used: {response.model}")
print(f"Finish reason: {response.choices[0].finish_reason}")
print(f"Prompt tokens: {response.usage.prompt_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
print(f"Total tokens: {response.usage.total_tokens}")
# The actual text
content = response.choices[0].message.content
print(content)
The choices array contains one or more completions (one by default). Each choice has a message with role and content, plus a finish_reason that tells you why generation stopped (stop, length, tool_calls, etc.).
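Because finish_reason tells you why generation stopped, it is worth checking before trusting the text. Here is a sketch of a small extraction helper (the extract_text function is hypothetical; it only assumes the ChatCompletion shape described above):

```python
def extract_text(response) -> str:
    """Return the first choice's text, warning if the model hit the token limit.

    `response` can be any object shaped like a ChatCompletion.
    """
    choice = response.choices[0]
    if choice.finish_reason == "length":
        # The model stopped because it ran out of tokens; the text may be cut off.
        print("Warning: output truncated (finish_reason == 'length')")
    return choice.message.content or ""
```

The `or ""` guard matters because content can be None, for example when the model responds with tool calls instead of text.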
A Reusable Helper Function
In practice, you will wrap the API call in a helper:
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, system: str = "You are a helpful assistant.", model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Usage
answer = ask("What is the time complexity of binary search?")
print(answer)
This pattern keeps your application code clean and makes it easy to swap models or adjust system prompts globally.
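The same pattern extends naturally to multi-turn conversations: the Chat Completions API is stateless, so you append each turn to the messages list and resend the whole history on every call. A sketch (the chat_turn helper is hypothetical; it assumes the ChatCompletion shape shown earlier):

```python
def chat_turn(client, history, user_message, model="gpt-4o"):
    """Send one conversation turn, recording both sides in `history`."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    from openai import OpenAI

    client = OpenAI()
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    print(chat_turn(client, history, "What is a generator in Python?"))
    print(chat_turn(client, history, "Show a one-line example."))
```

Injecting the client as a parameter also makes the helper easy to test with a stub in place of the real API.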
FAQ
What Python versions does the OpenAI SDK support?
The OpenAI Python SDK requires Python 3.8 or later. For the best experience with type hints and async features, Python 3.10+ is recommended.
Can I use the SDK without an API key for testing?
No, the SDK requires a valid API key for all API calls. However, you can use the OPENAI_BASE_URL environment variable to point the client at a local mock server or compatible endpoint for testing without spending credits.
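For example, setting the environment variables before the client is constructed redirects every request (the localhost URL below is a placeholder for whatever mock server you run):

```python
import os

# These must be set before OpenAI() is constructed.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/v1"  # hypothetical local mock
os.environ["OPENAI_API_KEY"] = "test-key"  # any non-empty string; the mock ignores it
```

The client constructor also accepts base_url= and api_key= arguments directly if you prefer explicit configuration over environment variables.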
How do I check my API usage and remaining credits?
The response object includes a usage field with token counts for each request. For account-level billing and usage, visit the OpenAI dashboard at platform.openai.com/usage.