
Building a Tutoring Agent: Adaptive Learning with AI-Powered Explanations

Learn how to build an AI tutoring agent that assesses student knowledge, adapts difficulty in real time, and uses scaffolding techniques to guide learners through complex topics.

Why Adaptive Tutoring Matters

Traditional educational software serves the same content to every student regardless of their current understanding. A student who already grasps algebra fundamentals gets the same explanation as one who is struggling with basic variables. This one-size-fits-all approach wastes time for advanced learners and frustrates beginners.

An adaptive tutoring agent solves this by continuously assessing what the student knows, adjusting the difficulty of questions and explanations, and providing scaffolded support that meets each learner exactly where they are. The core loop is simple: assess, explain, practice, reassess.

The Tutoring Loop Architecture

A tutoring agent operates on a continuous feedback cycle with four stages:

  1. Knowledge Assessment — determine what the student already understands
  2. Adaptive Explanation — explain the next concept at the right level
  3. Guided Practice — present problems matched to the student's ability
  4. Reassessment — measure whether understanding improved
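The four stages can be sketched as a single pass through one function. The stage functions below (`assess`, `explain`, `practice`, `reassess`) are placeholder stubs standing in for the real logic developed in the rest of the article:

```python
def tutoring_cycle(student: dict, topic: str) -> str:
    """One pass through the four-stage loop. Each call below is a
    stub; the article fleshes out the real implementations."""
    level = assess(student, topic)                 # 1. Knowledge Assessment
    explanation = explain(topic, level)            # 2. Adaptive Explanation
    was_correct = practice(student, topic)         # 3. Guided Practice
    return reassess(student, topic, was_correct)   # 4. Reassessment

# Placeholder stubs so the skeleton runs end to end.
def assess(student, topic):
    return student.get(topic, "novice")

def explain(topic, level):
    return f"Explanation of {topic} at {level} level"

def practice(student, topic):
    return True  # pretend the student answered correctly

def reassess(student, topic, was_correct):
    return "advance" if was_correct else "continue_practice"
```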

Here is the data model that tracks a student's progress through this loop:

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Mastery(Enum):
    NOVICE = "novice"
    DEVELOPING = "developing"
    PROFICIENT = "proficient"
    ADVANCED = "advanced"

@dataclass
class TopicState:
    topic: str
    mastery: Mastery = Mastery.NOVICE
    attempts: int = 0
    correct: int = 0
    last_misconception: Optional[str] = None

    @property
    def accuracy(self) -> float:
        if self.attempts == 0:
            return 0.0
        return self.correct / self.attempts

@dataclass
class StudentProfile:
    student_id: str
    topics: dict[str, TopicState] = field(default_factory=dict)
    difficulty_level: int = 1  # 1-5 scale
    preferred_explanation_style: str = "analogy"

    def update_mastery(self, topic: str, was_correct: bool,
                       misconception: Optional[str] = None):
        if topic not in self.topics:
            self.topics[topic] = TopicState(topic=topic)

        state = self.topics[topic]
        state.attempts += 1
        if was_correct:
            state.correct += 1
        if misconception:
            state.last_misconception = misconception

        # Update mastery level based on rolling accuracy
        if state.attempts >= 3:
            if state.accuracy >= 0.9:
                state.mastery = Mastery.ADVANCED
            elif state.accuracy >= 0.7:
                state.mastery = Mastery.PROFICIENT
            elif state.accuracy >= 0.4:
                state.mastery = Mastery.DEVELOPING
            else:
                state.mastery = Mastery.NOVICE
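To see how the thresholds behave at the boundaries, here is the same mastery logic restated as a standalone function (`mastery_from_accuracy` is an illustrative helper, not part of the profile class):

```python
from enum import Enum

class Mastery(Enum):
    NOVICE = "novice"
    DEVELOPING = "developing"
    PROFICIENT = "proficient"
    ADVANCED = "advanced"

def mastery_from_accuracy(correct: int, attempts: int) -> Mastery:
    """Mirrors the thresholds in update_mastery above."""
    if attempts < 3:
        return Mastery.NOVICE  # not enough evidence to promote yet
    accuracy = correct / attempts
    if accuracy >= 0.9:
        return Mastery.ADVANCED
    if accuracy >= 0.7:
        return Mastery.PROFICIENT
    if accuracy >= 0.4:
        return Mastery.DEVELOPING
    return Mastery.NOVICE
```

Note that 2/3 correct (about 67%) still maps to DEVELOPING, while 7/10 (exactly 70%) crosses into PROFICIENT.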

Building the Tutoring Agent

The agent uses the student profile to generate appropriately leveled explanations and questions. The key insight is that the system prompt changes dynamically based on the student's current mastery:


from agents import Agent, Runner, function_tool
import json

def build_tutor_instructions(profile: StudentProfile,
                              current_topic: str) -> str:
    state = profile.topics.get(current_topic, TopicState(current_topic))
    mastery = state.mastery.value

    scaffolding_rules = {
        "novice": (
            "Use simple vocabulary. Break every concept into small steps. "
            "Provide concrete real-world analogies. Ask one question at a time."
        ),
        "developing": (
            "Use moderate vocabulary. Introduce formal terminology alongside "
            "plain language. Include two-step problems."
        ),
        "proficient": (
            "Use standard academic language. Present multi-step problems. "
            "Encourage the student to explain their reasoning."
        ),
        "advanced": (
            "Challenge with edge cases and synthesis questions. "
            "Ask the student to connect concepts across topics."
        ),
    }

    misconception_note = ""
    if state.last_misconception:
        misconception_note = (
            f"\nIMPORTANT: The student previously showed this "
            f"misconception: {state.last_misconception}. "
            f"Address it proactively in your explanation."
        )

    return f"""You are a patient, encouraging tutor teaching {current_topic}.

Student mastery level: {mastery}
Student accuracy so far: {state.accuracy:.0%} over {state.attempts} attempts
Scaffolding approach: {scaffolding_rules[mastery]}
{misconception_note}

Always end your explanation with a practice question appropriate to the
student's level. Format the question clearly so it can be extracted."""

The Assessment Tool

The agent needs a tool to evaluate student responses and update their profile:

student_db: dict[str, StudentProfile] = {}

@function_tool
def evaluate_student_response(
    student_id: str,
    topic: str,
    student_answer: str,
    correct_answer: str,
    is_correct: bool,
    misconception: str = "",
) -> str:
    """Evaluate a student response and update their mastery tracking."""
    profile = student_db.get(student_id)
    if not profile:
        profile = StudentProfile(student_id=student_id)
        student_db[student_id] = profile

    profile.update_mastery(topic, is_correct, misconception or None)
    state = profile.topics[topic]

    return json.dumps({
        "mastery": state.mastery.value,
        "accuracy": f"{state.accuracy:.0%}",
        "attempts": state.attempts,
        "recommendation": _get_recommendation(state),
    })

def _get_recommendation(state: TopicState) -> str:
    if state.accuracy < 0.4 and state.attempts >= 3:
        return "revisit_fundamentals"
    elif state.accuracy >= 0.9 and state.attempts >= 5:
        return "advance_to_next_topic"
    else:
        return "continue_practice"
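The recommendation boundaries can be checked in isolation. `recommend` below is a hypothetical standalone copy of the same rules, taken out of the class context:

```python
def recommend(accuracy: float, attempts: int) -> str:
    """Same rules as _get_recommendation above, as a pure function."""
    if accuracy < 0.4 and attempts >= 3:
        return "revisit_fundamentals"
    if accuracy >= 0.9 and attempts >= 5:
        return "advance_to_next_topic"
    return "continue_practice"
```

A student at 30% after four tries goes back to fundamentals; one at 95% after six tries moves on; a student at 95% after only three tries keeps practicing until there is enough evidence to advance.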

Running the Adaptive Tutoring Session

Tie everything together into a session loop that continuously adapts:

import asyncio

async def tutoring_session(student_id: str, topic: str):
    profile = student_db.setdefault(
        student_id, StudentProfile(student_id=student_id)
    )

    tutor = Agent(
        name="Adaptive Tutor",
        instructions=build_tutor_instructions(profile, topic),
        tools=[evaluate_student_response],
    )

    # Initial assessment question
    result = await Runner.run(
        tutor,
        f"Start by asking a diagnostic question about {topic} "
        f"to assess what the student already knows.",
    )
    print(f"Tutor: {result.final_output}")

    # Interactive loop
    while True:
        student_input = input("Student: ")
        if student_input.lower() in ("quit", "exit"):
            break

        # Rebuild instructions with updated profile
        tutor = Agent(
            name="Adaptive Tutor",
            instructions=build_tutor_instructions(profile, topic),
            tools=[evaluate_student_response],
        )

        result = await Runner.run(tutor, student_input)
        print(f"Tutor: {result.final_output}")

asyncio.run(tutoring_session("student-1", "fractions"))

The agent rebuilds its instructions each turn so the scaffolding adapts as the student's mastery changes. A student who answers three fraction questions correctly will see the tutor shift from basic analogies to multi-step word problems automatically.

FAQ

How does the agent decide when to increase difficulty?

The mastery tracking system monitors rolling accuracy over a minimum number of attempts. Once a student reaches 70% accuracy over at least three attempts, their mastery level increases, which changes the scaffolding rules in the system prompt. This prevents premature advancement from a single lucky answer.

Can this approach work with subjects beyond math?

Yes. The tutoring loop pattern is subject-agnostic. For language learning you would track vocabulary mastery; for history, understanding of events and causal relationships. The key is defining what "mastery" means for each topic and which misconceptions are common.
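One way to capture those per-subject definitions is a small configuration mapping. Everything here is illustrative: the subject names, fields, and example misconceptions are hypothetical placeholders, not part of the code above:

```python
# Hypothetical per-subject configuration: what counts as evidence of
# mastery, and which misconceptions the tutor should watch for.
SUBJECT_CONFIG = {
    "spanish_vocabulary": {
        "mastery_signal": "recalling a word's meaning and using it in a sentence",
        "common_misconceptions": ["false cognates (e.g. 'embarazada' != 'embarrassed')"],
    },
    "world_war_i": {
        "mastery_signal": "explaining causal chains between events",
        "common_misconceptions": ["single-cause explanations of the war's outbreak"],
    },
}
```

The tutoring agent's instructions could then pull the relevant `mastery_signal` and misconception list for whichever topic is active.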

How do you prevent the agent from just giving away answers?

The system prompt explicitly instructs the agent to use scaffolding — guiding the student toward the answer rather than stating it directly. You can reinforce this by adding a guardrail tool that flags when the agent's response contains a direct answer to its own practice question, then asks the agent to rephrase as a hint instead.
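A minimal version of such a guardrail is a string check before the response is shown. This sketch assumes the agent records an `expected_answer` when it generates each practice question; a production guardrail would likely use an LLM judge rather than a verbatim match, which misses paraphrased answers:

```python
import re

def leaks_answer(response: str, expected_answer: str) -> bool:
    """Heuristic first pass: does the tutor's message contain the
    expected answer verbatim? If it does, the caller can re-prompt
    the agent to rephrase the reveal as a guiding hint instead."""
    pattern = re.escape(expected_answer.strip().lower())
    return re.search(pattern, response.lower()) is not None
```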


#AITutoring #AdaptiveLearning #EducationAI #Python #AgenticAI #LearnAI #AIEngineering


CallSphere Team
