Environment-Specific Agent Configuration: Dev, Staging, and Production Settings
Manage AI agent configurations across development, staging, and production environments using config hierarchies, environment overrides, and secure secrets management.
Why Agents Need Environment-Specific Config
An AI agent that works perfectly in development can behave completely differently in production — not because of code bugs, but because of configuration differences. In development you might use a cheaper model, shorter token limits, and permissive guardrails. In production you need the best model, full token budgets, and strict safety filters. Managing these differences manually is a recipe for deployment disasters.
The goal is a configuration system where each environment inherits sensible defaults but can override specific values, with production secrets kept separate from development credentials.
Config Hierarchy Pattern
The most effective pattern is a layered configuration where each layer can override the previous one. The resolution order is: defaults, then environment-specific values, then local overrides, then environment variables — later layers win.
from dataclasses import dataclass, field
from typing import Any, Optional
from pathlib import Path
import os

try:
    import tomllib  # Python 3.11+
except ImportError:
    import tomli as tomllib  # fallback for older Pythons (pip install tomli)


@dataclass
class LayeredConfig:
    """Later layers override earlier ones on a per-key basis."""

    _layers: list[dict[str, Any]] = field(default_factory=list)

    def add_layer(self, layer: dict[str, Any]) -> None:
        self._layers.append(layer)

    def get(self, key: str, default: Any = None) -> Any:
        # Walk layers from most recent to oldest; the first layer that
        # contains the full dotted key wins.
        keys = key.split(".")
        for layer in reversed(self._layers):
            value: Any = layer
            for k in keys:
                if isinstance(value, dict) and k in value:
                    value = value[k]
                else:
                    value = None
                    break
            if value is not None:
                return value
        return default
def load_config(env: Optional[str] = None) -> LayeredConfig:
    config = LayeredConfig()
    env = env or os.getenv("APP_ENV", "development")
    config_dir = Path("config")

    # Layer 1: defaults shared by every environment
    defaults_path = config_dir / "defaults.toml"
    if defaults_path.exists():
        with open(defaults_path, "rb") as f:
            config.add_layer(tomllib.load(f))

    # Layer 2: environment-specific overrides
    env_path = config_dir / f"{env}.toml"
    if env_path.exists():
        with open(env_path, "rb") as f:
            config.add_layer(tomllib.load(f))

    # Layer 3: local overrides (never committed to git)
    local_path = config_dir / "local.toml"
    if local_path.exists():
        with open(local_path, "rb") as f:
            config.add_layer(tomllib.load(f))

    # Layer 4: environment variable overrides (highest precedence)
    env_overrides = _collect_env_overrides("AGENT_")
    if env_overrides:
        config.add_layer(env_overrides)

    return config
def _collect_env_overrides(prefix: str) -> dict[str, Any]:
    """Turn AGENT_AGENT__MODEL=gpt-4o into {"agent": {"model": "gpt-4o"}}.

    Note: values arrive as strings; coerce them at the read site, as
    get_agent_settings() below does with float() and int().
    """
    result: dict[str, Any] = {}
    for key, value in os.environ.items():
        if key.startswith(prefix):
            # A double underscore separates nesting levels.
            config_key = key[len(prefix):].lower().replace("__", ".")
            parts = config_key.split(".")
            current = result
            for part in parts[:-1]:
                current = current.setdefault(part, {})
            current[parts[-1]] = value
    return result
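To make the mapping concrete, here is a minimal self-contained sketch of the override collection. The helper is repeated so the snippet runs on its own, and the variable names are illustrative:

```python
import os


def collect_env_overrides(prefix: str) -> dict:
    # Minimal copy of the helper above: strip the prefix, lowercase,
    # and turn double underscores into nesting levels.
    result: dict = {}
    for key, value in os.environ.items():
        if key.startswith(prefix):
            config_key = key[len(prefix):].lower().replace("__", ".")
            parts = config_key.split(".")
            current = result
            for part in parts[:-1]:
                current = current.setdefault(part, {})
            current[parts[-1]] = value
    return result


# AGENT_AGENT__MODEL maps to the dotted key "agent.model".
os.environ["AGENT_AGENT__MODEL"] = "gpt-4o"
os.environ["AGENT_LOGGING__LEVEL"] = "DEBUG"

overrides = collect_env_overrides("AGENT_")
print(overrides["agent"]["model"])    # gpt-4o
print(overrides["logging"]["level"])  # DEBUG
```

Because the values are plain strings, an override like `AGENT_AGENT__TEMPERATURE=0.5` still needs a `float()` at the read site.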
Environment Config Files
Here is what the TOML configuration files look like across environments.
# config/defaults.toml — baseline loaded by every environment
[agent]
model = "gpt-4o-mini"
temperature = 0.7
max_tokens = 1024
system_prompt = "You are a helpful assistant."

[guardrails]
content_filter = true
max_tool_calls = 5
timeout_seconds = 30

[logging]
level = "INFO"
include_prompts = false

# config/production.toml — overrides for prod
[agent]
model = "gpt-4o"
max_tokens = 4096

[guardrails]
content_filter = true
max_tool_calls = 10
timeout_seconds = 60

[logging]
level = "WARNING"
include_prompts = false

# config/development.toml — overrides for dev
[agent]
model = "gpt-4o-mini"
temperature = 1.0

[guardrails]
content_filter = false
max_tool_calls = 20
timeout_seconds = 120

[logging]
level = "DEBUG"
include_prompts = true
In this setup, development uses a cheaper model with verbose logging and disabled content filters for easier debugging. Production uses the best model with strict guardrails and minimal logging.
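The layering can be verified directly: parse the defaults and the production overrides with tomllib and resolve a few keys. This is a condensed sketch — the `resolve` helper mirrors `LayeredConfig.get` above, and the TOML snippets are trimmed to the relevant keys:

```python
try:
    import tomllib  # Python 3.11+
except ImportError:
    import tomli as tomllib  # older Pythons: pip install tomli

DEFAULTS = """
[agent]
model = "gpt-4o-mini"
temperature = 0.7
max_tokens = 1024
"""

PRODUCTION = """
[agent]
model = "gpt-4o"
max_tokens = 4096
"""

layers = [tomllib.loads(DEFAULTS), tomllib.loads(PRODUCTION)]


def resolve(key: str, layers: list, default=None):
    # Same rule as LayeredConfig.get: later layers win, per dotted key.
    for layer in reversed(layers):
        value = layer
        for part in key.split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                value = None
                break
        if value is not None:
            return value
    return default


print(resolve("agent.model", layers))        # gpt-4o (production override)
print(resolve("agent.temperature", layers))  # 0.7 (inherited from defaults)
print(resolve("agent.max_tokens", layers))   # 4096
```

Note that `agent.temperature` falls through to the defaults layer because production.toml never mentions it — the environment file only needs to list what it changes.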
Secrets Management
API keys and credentials must never appear in config files. Use a separate secrets layer that loads from environment variables or a secrets manager.
from dataclasses import dataclass
from typing import Optional
import os


@dataclass
class AgentSecrets:
    openai_api_key: str
    database_url: str
    redis_url: str
    webhook_secret: Optional[str] = None

    @classmethod
    def from_env(cls) -> "AgentSecrets":
        # Fail fast at startup if the one truly required secret is missing.
        openai_key = os.environ.get("OPENAI_API_KEY")
        if not openai_key:
            raise EnvironmentError("OPENAI_API_KEY is required")
        return cls(
            openai_api_key=openai_key,
            database_url=os.environ.get(
                "DATABASE_URL", "postgresql://localhost/agents_dev"
            ),
            redis_url=os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
            webhook_secret=os.environ.get("WEBHOOK_SECRET"),
        )
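The important property of from_env is that it fails at process startup rather than on the first API call in production. A minimal sketch of that fail-fast pattern — `require_env` and the demo variable names are illustrative, not part of the class above:

```python
import os


def require_env(name: str) -> str:
    # Raise at startup if a required secret is absent.
    value = os.environ.get(name)
    if not value:
        raise EnvironmentError(f"{name} is required")
    return value


os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # demo value only
print(require_env("OPENAI_API_KEY"))  # sk-placeholder

os.environ.pop("DEMO_MISSING_SECRET", None)
try:
    require_env("DEMO_MISSING_SECRET")
except EnvironmentError as exc:
    print(exc)  # DEMO_MISSING_SECRET is required
```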
class SecureConfigLoader:
    """Combines the non-secret layered config with secrets from the environment."""

    def __init__(self, config: LayeredConfig, secrets: AgentSecrets):
        self.config = config
        self.secrets = secrets

    def get_agent_settings(self) -> dict:
        return {
            "model": self.config.get("agent.model"),
            # Env-var overrides arrive as strings, so coerce numerics here.
            "temperature": float(self.config.get("agent.temperature", 0.7)),
            "max_tokens": int(self.config.get("agent.max_tokens", 1024)),
            "api_key": self.secrets.openai_api_key,
        }
Config Validation Across Environments
Validate that all environments have consistent, valid configurations before deployment. This catches misconfigurations in CI rather than in production.
def validate_all_environments() -> None:
    environments = ["development", "staging", "production"]
    errors: list[str] = []
    for env in environments:
        config = load_config(env)

        model = config.get("agent.model")
        if model not in ("gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo"):
            errors.append(f"[{env}] Unknown model: {model}")

        temp = float(config.get("agent.temperature", 0.7))
        if not 0.0 <= temp <= 2.0:
            errors.append(f"[{env}] Invalid temperature: {temp}")

        # Production-only invariants
        if env == "production":
            if config.get("logging.include_prompts"):
                errors.append(
                    "[production] Prompt logging must be disabled in production"
                )
            if not config.get("guardrails.content_filter"):
                errors.append(
                    "[production] Content filter must be enabled in production"
                )

    if errors:
        for error in errors:
            print(f"VALIDATION ERROR: {error}")
        raise ValueError(f"Config validation failed with {len(errors)} errors")
    print("All environment configs valid")
Run this validation in your CI pipeline to prevent misconfigurations from reaching production.
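The production-only checks can also be factored into a pure function over a parsed config dict, which is trivial to unit-test in CI without touching the filesystem. `check_production_rules` below is an illustrative refactor, not part of the code above:

```python
def check_production_rules(config: dict) -> list[str]:
    # Mirrors the production-only branch of the validator above.
    errors = []
    if config.get("logging", {}).get("include_prompts"):
        errors.append("Prompt logging must be disabled in production")
    if not config.get("guardrails", {}).get("content_filter"):
        errors.append("Content filter must be enabled in production")
    return errors


safe = {"logging": {"include_prompts": False},
        "guardrails": {"content_filter": True}}
risky = {"logging": {"include_prompts": True},
         "guardrails": {"content_filter": False}}

print(check_production_rules(safe))        # []
print(len(check_production_rules(risky)))  # 2
```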
FAQ
Should I use a single config file with environment sections or separate files per environment?
Separate files per environment are easier to manage. A single file with sections grows unwieldy as the number of settings increases, and it means every developer can see production values (even if they cannot use them). Separate files also make code review cleaner since changes to production config are isolated in their own diff.
How do I handle config values that differ between production regions?
Add a region layer to the hierarchy that sits between the environment config and local overrides. For example, load production.toml then production-us-east.toml. The region file only needs to contain the values that differ — everything else is inherited from the base production config.
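A sketch of that region layer, using plain dicts in place of the parsed TOML files (the file names, keys, and hostnames are illustrative):

```python
def resolve(key: str, layers: list, default=None):
    # Later layers win; missing keys fall through to earlier layers.
    for layer in reversed(layers):
        value = layer
        for part in key.split("."):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                value = None
                break
        if value is not None:
            return value
    return default


# Parsed contents of production.toml and production-us-east.toml.
production = {"agent": {"model": "gpt-4o"}, "db": {"host": "db.internal"}}
production_us_east = {"db": {"host": "db.us-east-1.internal"}}  # only the diff

layers = [production, production_us_east]
print(resolve("db.host", layers))      # db.us-east-1.internal
print(resolve("agent.model", layers))  # gpt-4o (inherited from base production)
```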
Is it safe to include development API keys in the config files?
Development keys with low rate limits and no access to production data can be committed for convenience. However, production keys must always come from environment variables or a secrets manager. Add config/local.toml to your .gitignore and use it for any credentials that should never leave a developer's machine.
CallSphere Team
Expert insights on AI voice agents and customer communication automation.