Fine-Grained Permissions for AI Agent Tools: Defining What Each User Can Access
Design and implement fine-grained permission systems for AI agent tools using RBAC, ABAC, and policy evaluation. Includes FastAPI examples for dynamic, context-aware access control.
Why Coarse Permissions Break in AI Agent Systems
Most applications start with simple role-based access: admins can do everything, users can access their own data. This breaks quickly in AI agent platforms. Consider a customer support agent with access to tools for reading tickets, sending emails, issuing refunds, and accessing customer PII. A junior support representative should be able to read tickets and send templated emails but not issue refunds above a threshold or access payment details. A manager should access refunds but only for their team's customers.
This is not a role problem — it is a permissions problem. You need to control access at the level of individual tools, with conditions based on the user, the resource, and the context of the request.
Permission Models Compared
RBAC (Role-Based Access Control) — users are assigned roles, roles have permissions. Simple to understand but rigid. You end up with role explosion: "junior-support-us-east", "senior-support-emea-no-pii".
ABAC (Attribute-Based Access Control) — permissions are evaluated against attributes of the user, the resource, the action, and the environment. Flexible and expressive. Can handle conditions like "allow refunds under $100 for users in the billing department during business hours."
ReBAC (Relationship-Based Access Control) — permissions are based on relationships between entities. Used by Google Zanzibar and systems like SpiceDB. "User X can edit document Y because they are in group Z which owns folder W that contains Y."
For AI agent platforms, ABAC provides the best balance of expressiveness and implementation complexity. You can model nearly any access pattern without building a graph database.
Designing the Permission Schema
Define permissions as a combination of resource, action, and conditions:
```python
from enum import Enum

from pydantic import BaseModel


class Action(str, Enum):
    READ = "read"
    EXECUTE = "execute"
    CONFIGURE = "configure"
    DELETE = "delete"


class Condition(BaseModel):
    field: str     # e.g., "amount", "department", "region"
    operator: str  # "eq", "lt", "gt", "in", "not_in"
    value: str | int | float | list


class Permission(BaseModel):
    resource: str  # e.g., "tool:refund", "agent:support", "data:pii"
    action: Action
    conditions: list[Condition] = []
    effect: str = "allow"  # "allow" or "deny"


class PolicySet(BaseModel):
    name: str
    description: str
    permissions: list[Permission]
```
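To make the schema concrete, here is one way the intro's junior-support rules could be encoded. This sketch uses stdlib dataclass stand-ins for the Pydantic models (same field names, no pydantic dependency) so it runs on its own; the resource names are illustrative.

```python
from dataclasses import dataclass, field as dc_field


# Minimal dataclass stand-ins for the Pydantic models above,
# so this sketch runs without pydantic installed.
@dataclass
class Condition:
    field: str
    operator: str
    value: object


@dataclass
class Permission:
    resource: str
    action: str
    conditions: list = dc_field(default_factory=list)
    effect: str = "allow"


# A junior support rep: may read tickets, may issue refunds under $100,
# and is explicitly denied access to PII.
junior_support = [
    Permission("tool:tickets", "read"),
    Permission("tool:refund", "execute",
               conditions=[Condition("amount", "lt", 100)]),
    Permission("data:pii", "read", effect="deny"),
]
```

The explicit deny on `data:pii` matters even though the default is deny: it keeps PII blocked even if a broader allow rule is later added to the same policy set.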
Policy Evaluation Engine
The engine evaluates a request against a user's permission set. Deny rules take precedence over allow rules:
```python
from typing import Any


class PolicyEngine:
    def evaluate(
        self,
        permissions: list[Permission],
        resource: str,
        action: Action,
        context: dict[str, Any],
    ) -> bool:
        matching = [
            p for p in permissions
            if p.resource == resource and p.action == action
        ]
        if not matching:
            return False  # Default deny

        # Check for explicit deny first
        for perm in matching:
            if perm.effect == "deny" and self._conditions_met(perm.conditions, context):
                return False

        # Check for allow
        for perm in matching:
            if perm.effect == "allow" and self._conditions_met(perm.conditions, context):
                return True

        return False

    def _conditions_met(
        self, conditions: list[Condition], context: dict[str, Any],
    ) -> bool:
        if not conditions:
            return True  # No conditions means the rule always matches
        for cond in conditions:
            value = context.get(cond.field)
            if value is None:
                return False  # Missing context fails closed
            if cond.operator == "eq" and value != cond.value:
                return False
            elif cond.operator == "lt" and value >= cond.value:
                return False
            elif cond.operator == "gt" and value <= cond.value:
                return False
            elif cond.operator == "in" and value not in cond.value:
                return False
            elif cond.operator == "not_in" and value in cond.value:
                return False
        return True


policy_engine = PolicyEngine()
```
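A condensed, dict-based sketch of the same evaluation order shows the three possible outcomes: a matching allow, a default deny when no rule matches, and an explicit deny overriding an allow. The policies here are illustrative (a refund cap plus a deny for suspended accounts):

```python
# Operator table mirroring the engine's condition checks
OPS = {
    "eq": lambda v, t: v == t,
    "lt": lambda v, t: v < t,
    "gt": lambda v, t: v > t,
    "in": lambda v, t: v in t,
    "not_in": lambda v, t: v not in t,
}


def conditions_met(conditions, context):
    # Fail closed when a referenced field is missing from the context
    return all(
        c["field"] in context
        and OPS[c["operator"]](context[c["field"]], c["value"])
        for c in conditions
    )


def evaluate(permissions, resource, action, context):
    matching = [p for p in permissions
                if p["resource"] == resource and p["action"] == action]
    if any(p["effect"] == "deny" and conditions_met(p["conditions"], context)
           for p in matching):
        return False  # Explicit deny wins
    return any(p["effect"] == "allow" and conditions_met(p["conditions"], context)
               for p in matching)  # Otherwise default deny


policies = [
    {"resource": "tool:refund", "action": "execute", "effect": "allow",
     "conditions": [{"field": "amount", "operator": "lt", "value": 100}]},
    {"resource": "tool:refund", "action": "execute", "effect": "deny",
     "conditions": [{"field": "account_status", "operator": "eq",
                     "value": "suspended"}]},
]

small = evaluate(policies, "tool:refund", "execute",
                 {"amount": 50, "account_status": "active"})    # allowed
large = evaluate(policies, "tool:refund", "execute",
                 {"amount": 500, "account_status": "active"})   # no allow matches
frozen = evaluate(policies, "tool:refund", "execute",
                  {"amount": 50, "account_status": "suspended"})  # deny wins
```

Deny-overrides-allow is the same precedence rule used by systems like AWS IAM: it lets you add broad allows without worrying that they silently punch through a targeted restriction.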
Applying Permissions to Agent Tool Calls
Create a FastAPI dependency that checks permissions before any tool execution:
```python
from fastapi import Depends, HTTPException, Request

# TokenPayload, get_current_user, and get_user_policies come from your
# auth layer (e.g., the JWT verification dependency).


class ToolPermissionChecker:
    def __init__(self, resource: str, action: Action):
        self.resource = resource
        self.action = action

    async def __call__(
        self,
        request: Request,
        user: TokenPayload = Depends(get_current_user),
    ) -> bool:
        # Fetch the user's policy set from the database or cache
        user_policies = await get_user_policies(user.sub, user.org_id)

        # Merge the JSON body into the evaluation context so conditions
        # like "amount < 100" can see the tool parameters. Starlette
        # caches the body, so the route handler can still read it.
        body = await request.json()
        context = {
            "user_role": user.role,
            "user_department": user.department,
            **body,
        }

        if not policy_engine.evaluate(
            user_policies.permissions, self.resource, self.action, context,
        ):
            raise HTTPException(
                status_code=403,
                detail=f"Not authorized to {self.action.value} {self.resource}",
            )
        return True


# Usage in routes
check_refund = ToolPermissionChecker("tool:refund", Action.EXECUTE)


class RefundRequest(BaseModel):
    amount: float
    customer_id: str


@router.post("/tools/refund")
async def execute_refund(
    payload: RefundRequest,
    _authorized: bool = Depends(check_refund),
):
    # Permission already verified, including amount-based conditions
    return await process_refund(payload.amount, payload.customer_id)
```
Dynamic Permissions for Agent Runtime
AI agents need to check permissions dynamically during execution, not just at the API boundary. When an agent decides to use a tool, it should check whether the current user's permissions allow that specific tool with the given parameters:
```python
class PermissionAwareToolExecutor:
    def __init__(self, policy_engine: PolicyEngine, tools: dict[str, Any] | None = None):
        self.engine = policy_engine
        self.tools = tools or {}  # Registry mapping tool name -> tool instance

    async def execute_tool(
        self,
        tool_name: str,
        params: dict,
        user_permissions: list[Permission],
        user_context: dict,
    ) -> dict:
        # Merge tool parameters into the evaluation context
        context = {**user_context, **params}
        resource = f"tool:{tool_name}"

        if not self.engine.evaluate(
            user_permissions, resource, Action.EXECUTE, context,
        ):
            return {
                "error": "permission_denied",
                "message": f"User not authorized to execute {tool_name} with these parameters",
            }

        tool = self.tools[tool_name]
        return await tool.run(**params)
```
This pattern lets the agent reason about permissions. If a refund tool is denied because the amount exceeds the user's limit, the agent can inform the user and suggest escalation rather than failing silently.
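A sketch of that escalation pattern (the helper name and wording are hypothetical): the agent inspects the executor's result and, on a `permission_denied` error, surfaces an escalation path instead of swallowing the failure.

```python
# Hypothetical helper: turn the executor's permission_denied result
# into an agent-facing message with an escalation path.
def explain_denial(result: dict, tool_name: str) -> str:
    if result.get("error") == "permission_denied":
        return (
            f"I wasn't able to run {tool_name} with those parameters under "
            "your current permissions. I can escalate this to a manager "
            "who is authorized to approve it."
        )
    return f"{tool_name} completed."


denied = {
    "error": "permission_denied",
    "message": "User not authorized to execute refund with these parameters",
}
message = explain_denial(denied, "refund")
```

In practice the agent would feed this message back into the conversation, so the user learns both that the action was blocked and what the next step is.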
FAQ
How do I avoid permission check latency on every tool call?
Cache the user's resolved permission set in Redis with a short TTL (five to fifteen minutes). Load the full permission set once when the session starts and refresh it on the next request after the cache expires. For critical security decisions (like high-value refunds), always fetch fresh permissions from the database.
Should I embed permissions in the JWT or fetch them from a database?
For simple systems with a few roles and scopes, embedding them in the JWT works well and avoids a database round-trip. For fine-grained ABAC with conditional rules, store the full policy set in the database and cache it. The JWT can carry the user's role as a hint, but the authoritative permission evaluation should use the database-backed policy set.
How do I audit permission decisions for compliance?
Log every permission evaluation with the user ID, resource, action, context, and decision (allow or deny). Store these logs in an append-only audit table or ship them to a dedicated logging service. For regulated industries, include the specific policy that matched and the condition values that were evaluated. This creates a complete audit trail of who accessed what and why.