AI Feature Adoption Agent: Identifying Underused Features and Driving Engagement
Build an AI agent that tracks feature usage, identifies underutilized capabilities for each user, and delivers contextual tips to drive adoption and reduce churn risk.
Why Feature Adoption Drives SaaS Retention
SaaS churn research consistently shows the same pattern: users who adopt more features retain longer. A user who only uses basic task creation in a project management tool is far more likely to churn than one who also uses automations, time tracking, and reporting. The problem is that most users never discover features beyond what they needed on day one.
An AI feature adoption agent solves this by tracking what each user actually uses, comparing their behavior to power users, and delivering contextual suggestions at exactly the right moment.
Usage Tracking Foundation
Before the AI can make suggestions, it needs data. Implement event tracking that captures feature usage at a granular level.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class FeatureCategory(str, Enum):
    CORE = "core"
    COLLABORATION = "collaboration"
    AUTOMATION = "automation"
    REPORTING = "reporting"
    INTEGRATION = "integration"
    ADMIN = "admin"


@dataclass
class FeatureDefinition:
    key: str
    name: str
    category: FeatureCategory
    description: str
    activation_threshold: int  # Uses needed to count as "adopted"
    discovery_url: str         # Page where user can learn about this feature


FEATURE_REGISTRY = [
    FeatureDefinition("task_create", "Task Creation", FeatureCategory.CORE,
                      "Creating tasks and to-dos", 5, "/features/tasks"),
    FeatureDefinition("kanban_view", "Kanban Board", FeatureCategory.CORE,
                      "Drag and drop task management", 3, "/features/kanban"),
    FeatureDefinition("team_assign", "Team Assignment", FeatureCategory.COLLABORATION,
                      "Assigning tasks to team members", 3, "/features/teams"),
    FeatureDefinition("comment_thread", "Comments & Threads", FeatureCategory.COLLABORATION,
                      "Discussing work in context", 5, "/features/comments"),
    FeatureDefinition("automation_rule", "Workflow Automations", FeatureCategory.AUTOMATION,
                      "Rules that automate repetitive actions", 1, "/features/automations"),
    FeatureDefinition("custom_report", "Custom Reports", FeatureCategory.REPORTING,
                      "Building and saving custom analytics", 1, "/features/reports"),
    FeatureDefinition("api_integration", "API Integrations", FeatureCategory.INTEGRATION,
                      "Connecting external tools via API", 1, "/features/integrations"),
    FeatureDefinition("time_tracking", "Time Tracking", FeatureCategory.CORE,
                      "Logging time spent on tasks", 3, "/features/time"),
]
async def track_feature_usage(db, user_id: str, tenant_id: str,
                              feature_key: str):
    """Record a feature usage event."""
    await db.execute("""
        INSERT INTO feature_usage (user_id, tenant_id, feature_key, used_at)
        VALUES ($1, $2, $3, NOW());
    """, user_id, tenant_id, feature_key)
    # Update the running count
    await db.execute("""
        INSERT INTO feature_adoption (user_id, tenant_id, feature_key,
                                      use_count, first_used, last_used)
        VALUES ($1, $2, $3, 1, NOW(), NOW())
        ON CONFLICT (user_id, feature_key)
        DO UPDATE SET use_count = feature_adoption.use_count + 1,
                      last_used = NOW();
    """, user_id, tenant_id, feature_key)
Adoption Analysis Engine
The engine compares each user's feature adoption to power user benchmarks and identifies the highest-value features they have not yet discovered.
from dataclasses import dataclass


@dataclass
class AdoptionGap:
    feature: FeatureDefinition
    user_usage_count: int
    power_user_avg: float
    adoption_score: float     # 0 = not used, 1 = fully adopted
    opportunity_score: float  # How valuable adopting this would be


class AdoptionAnalyzer:
    def __init__(self, db, feature_registry: list[FeatureDefinition]):
        self.db = db
        self.features = {f.key: f for f in feature_registry}

    async def analyze_user(self, user_id: str,
                           tenant_id: str) -> list[AdoptionGap]:
        # Get user's usage counts
        user_usage = await self.db.fetch("""
            SELECT feature_key, use_count
            FROM feature_adoption
            WHERE user_id = $1;
        """, user_id)
        user_counts = {r["feature_key"]: r["use_count"] for r in user_usage}

        # Get power user benchmarks (top 20% of users by total usage).
        # GREATEST(..., 1) keeps the limit from rounding down to zero for
        # tenants with fewer than five users.
        power_user_avgs = await self.db.fetch("""
            WITH power_users AS (
                SELECT user_id FROM feature_adoption
                WHERE tenant_id = $1
                GROUP BY user_id
                ORDER BY SUM(use_count) DESC
                LIMIT GREATEST((SELECT COUNT(DISTINCT user_id) / 5
                                FROM feature_adoption WHERE tenant_id = $1), 1)
            )
            SELECT fa.feature_key, AVG(fa.use_count) as avg_count
            FROM feature_adoption fa
            JOIN power_users pu ON pu.user_id = fa.user_id
            GROUP BY fa.feature_key;
        """, tenant_id)
        benchmarks = {r["feature_key"]: float(r["avg_count"])
                      for r in power_user_avgs}

        gaps = []
        for feature_key, feature in self.features.items():
            user_count = user_counts.get(feature_key, 0)
            power_avg = benchmarks.get(feature_key, 0)
            threshold = feature.activation_threshold
            adoption = min(user_count / threshold, 1.0) if threshold > 0 else 1.0
            # Opportunity = how much power users use it vs this user
            if power_avg > 0:
                opportunity = max(0, 1.0 - (user_count / power_avg))
            else:
                opportunity = 0.0
            gaps.append(AdoptionGap(
                feature=feature,
                user_usage_count=user_count,
                power_user_avg=power_avg,
                adoption_score=adoption,
                opportunity_score=opportunity,
            ))

        # Sort by opportunity score descending
        gaps.sort(key=lambda g: g.opportunity_score, reverse=True)
        return gaps
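A quick usage sketch, assuming the same asyncpg-style db handle used elsewhere in this article:

async def top_opportunities(db, user_id: str, tenant_id: str):
    # Wire the analyzer to the feature registry and return the user's
    # three biggest adoption gaps (names and scores).
    analyzer = AdoptionAnalyzer(db, FEATURE_REGISTRY)
    gaps = await analyzer.analyze_user(user_id, tenant_id)
    return [(g.feature.name, round(g.opportunity_score, 2)) for g in gaps[:3]]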
Contextual Tip Generation
Suggestions are most effective when they appear at the right moment. The agent monitors the user's current activity and matches it to relevant unadopted features.
import json


class ContextualTipEngine:
    # Maps user actions to related features they might not know about
    CONTEXT_TRIGGERS = {
        "task_create": ["automation_rule", "team_assign"],
        "kanban_view": ["time_tracking", "custom_report"],
        "team_assign": ["comment_thread"],
        "comment_thread": ["automation_rule"],
    }

    def __init__(self, analyzer: AdoptionAnalyzer, llm_client):
        self.analyzer = analyzer
        self.llm_client = llm_client

    async def get_contextual_tip(self, user_id: str, tenant_id: str,
                                 current_action: str) -> dict | None:
        # Check if there are related unadopted features
        related_features = self.CONTEXT_TRIGGERS.get(current_action, [])
        if not related_features:
            return None

        gaps = await self.analyzer.analyze_user(user_id, tenant_id)
        gap_map = {g.feature.key: g for g in gaps}

        # Find the best unadopted feature related to current action
        best_gap = None
        for feature_key in related_features:
            gap = gap_map.get(feature_key)
            if gap and gap.adoption_score < 0.3:
                if not best_gap or gap.opportunity_score > best_gap.opportunity_score:
                    best_gap = gap
        if not best_gap:
            return None

        # Check rate limit: do not show tips too frequently
        recently_shown = await self.was_tip_shown_recently(
            user_id, best_gap.feature.key, hours=24
        )
        if recently_shown:
            return None

        tip = await self.generate_tip(
            current_action, best_gap.feature, best_gap
        )
        return tip

    async def generate_tip(self, action: str,
                           feature: FeatureDefinition,
                           gap: AdoptionGap) -> dict:
        prompt = f"""Write a short, helpful product tip (2-3 sentences).
The user just performed: {action}.
Suggest they try: {feature.name} - {feature.description}.
Power users use this feature an average of {gap.power_user_avg:.0f} times.
Be specific about how it connects to what they just did.
Return JSON: {{"title": "...", "body": "...", "cta_text": "..."}}"""
        response = await self.llm_client.chat(
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        tip_data = json.loads(response.content)
        tip_data["feature_key"] = feature.key
        # The CTA always points at the canonical discovery page, not LLM output
        tip_data["cta_url"] = feature.discovery_url
        return tip_data

    async def was_tip_shown_recently(self, user_id: str,
                                     feature_key: str,
                                     hours: int) -> bool:
        count = await self.analyzer.db.fetchval("""
            SELECT COUNT(*) FROM feature_tips_shown
            WHERE user_id = $1 AND feature_key = $2
              AND shown_at > NOW() - INTERVAL '1 hour' * $3;
        """, user_id, feature_key, hours)
        return count > 0
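One gap worth noting: was_tip_shown_recently reads from a feature_tips_shown table, but nothing above writes to it. Here is a minimal sketch of that write path, with table and column names assumed to match the queries used elsewhere in this article:

async def record_tip_shown(db, user_id: str, tenant_id: str,
                           feature_key: str, clicked: bool = False):
    """Persist that a tip was displayed so rate limiting and click
    tracking have data to work with. Schema is assumed, not confirmed."""
    await db.execute("""
        INSERT INTO feature_tips_shown (user_id, tenant_id, feature_key,
                                        clicked, shown_at)
        VALUES ($1, $2, $3, $4, NOW());
    """, user_id, tenant_id, feature_key, clicked)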
Engagement Metrics Dashboard
Track how well the adoption agent is performing with an analytics endpoint.
from fastapi import FastAPI, Depends
from pydantic import BaseModel

app = FastAPI()


class AdoptionMetrics(BaseModel):
    total_features: int
    avg_features_adopted: float
    tip_shown_count: int
    tip_click_rate: float
    features_adopted_after_tips: int
    top_unadopted: list[dict]


@app.get("/api/admin/adoption-metrics", response_model=AdoptionMetrics)
async def get_adoption_metrics(
    tenant_id: str = Depends(get_current_tenant),
    db = Depends(get_db),
):
    total_features = len(FEATURE_REGISTRY)

    avg_adopted = await db.fetchval("""
        SELECT AVG(adopted_count) FROM (
            SELECT user_id, COUNT(*) as adopted_count
            FROM feature_adoption
            WHERE tenant_id = $1 AND use_count >= 3
            GROUP BY user_id
        ) sub;
    """, tenant_id)

    tip_stats = await db.fetchrow("""
        SELECT COUNT(*) as shown,
               COUNT(*) FILTER (WHERE clicked) as clicked
        FROM feature_tips_shown
        WHERE tenant_id = $1
          AND shown_at > NOW() - INTERVAL '30 days';
    """, tenant_id)

    # Features most commonly unadopted; the unnest alias is qualified so the
    # feature_key column reference stays unambiguous
    top_unadopted = await db.fetch("""
        SELECT f.feature_key,
               COUNT(DISTINCT u.id) - COUNT(DISTINCT fa.user_id) as non_adopters
        FROM users u
        CROSS JOIN unnest($2::text[]) AS f(feature_key)
        LEFT JOIN feature_adoption fa ON fa.user_id = u.id
            AND fa.feature_key = f.feature_key AND fa.use_count >= 3
        WHERE u.tenant_id = $1
        GROUP BY f.feature_key
        ORDER BY non_adopters DESC
        LIMIT 5;
    """, tenant_id, [f.key for f in FEATURE_REGISTRY])

    shown = tip_stats["shown"] if tip_stats else 0
    clicked = tip_stats["clicked"] if tip_stats else 0

    return AdoptionMetrics(
        total_features=total_features,
        avg_features_adopted=round(float(avg_adopted or 0), 1),
        tip_shown_count=shown,
        tip_click_rate=round(clicked / shown, 3) if shown > 0 else 0,
        features_adopted_after_tips=0,  # Requires attribution logic
        top_unadopted=[dict(r) for r in top_unadopted],
    )
FAQ
How do I avoid annoying users with feature suggestions?
Enforce strict rate limits: maximum one tip per session, maximum three tips per week per user. Track dismissal rates per feature — if a user dismisses the same feature tip twice, stop suggesting it permanently. Let users disable feature tips entirely in their notification preferences.
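As a rough sketch of those rules (the feature_tip_dismissals table and the exact caps are illustrative assumptions, not part of the article's schema):

async def tip_allowed(db, user_id: str, feature_key: str) -> bool:
    """Check dismissal muting and a weekly cap before returning a tip."""
    # Permanently mute a feature after two dismissals of its tip
    dismissals = await db.fetchval("""
        SELECT COUNT(*) FROM feature_tip_dismissals
        WHERE user_id = $1 AND feature_key = $2;
    """, user_id, feature_key)
    if dismissals >= 2:
        return False
    # No more than three tips per user per rolling week
    tips_this_week = await db.fetchval("""
        SELECT COUNT(*) FROM feature_tips_shown
        WHERE user_id = $1 AND shown_at > NOW() - INTERVAL '7 days';
    """, user_id)
    return tips_this_week < 3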
How do I measure whether a tip caused feature adoption?
Use attribution windows. When a user clicks a feature tip, record the feature key and timestamp. If they use that feature within 7 days, attribute it to the tip. Compare adoption rates for features with and without tip exposure to measure incremental lift. A well-designed tip system should show 15-25% higher adoption for tipped features.
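A minimal attribution sketch, assuming feature_tips_shown stores a clicked flag and a clicked_at timestamp (clicked_at is an assumed column); the result could feed the features_adopted_after_tips field left as a placeholder in the metrics endpoint above:

async def tip_attributed_adoptions(db, tenant_id: str) -> int:
    """Count features first used within 7 days of a tip click."""
    return await db.fetchval("""
        SELECT COUNT(*)
        FROM feature_tips_shown t
        JOIN feature_adoption fa
          ON fa.user_id = t.user_id
         AND fa.feature_key = t.feature_key
        WHERE t.tenant_id = $1
          AND t.clicked
          AND fa.first_used BETWEEN t.clicked_at
                                AND t.clicked_at + INTERVAL '7 days';
    """, tenant_id)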
Should I suggest features that require a plan upgrade?
Yes, but mark them clearly as premium features and do not count them toward adoption metrics. Frame upgrade suggestions as value discovery rather than upselling — "Teams that use automations save an average of 4 hours per week. This feature is available on the Pro plan." Track click-through to upgrade pages separately from regular adoption metrics.