
Building a Homework Helper Agent: Guided Problem Solving Without Giving Answers

Create an AI homework helper that uses the Socratic method to guide students through problems step by step, providing graduated hints and concept explanations without revealing final answers.

The Homework Helper Paradox

The biggest challenge in building a homework helper is not solving problems — any LLM can do that. The challenge is helping without helping too much. Research consistently shows that students learn more when they struggle productively through a problem than when they are given the answer. An effective homework helper agent uses the Socratic method: asking guiding questions that lead the student to discover the answer themselves.

This requires a fundamentally different architecture than a typical Q&A chatbot. The agent must understand the solution path, track where the student is on that path, and generate targeted questions rather than direct answers.

Solution Path Decomposition

The first step is breaking a problem into a sequence of concepts and sub-steps that the student needs to work through:

from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class StepStatus(str, Enum):
    NOT_STARTED = "not_started"
    STRUGGLING = "struggling"
    HINT_GIVEN = "hint_given"
    COMPLETED = "completed"
    SKIPPED = "skipped"

@dataclass
class SolutionStep:
    step_number: int
    description: str
    concept: str
    expected_result: str
    hints: list[str]  # Graduated hints, least to most specific
    common_mistakes: list[str] = field(default_factory=list)
    status: StepStatus = StepStatus.NOT_STARTED
    hint_level: int = 0
    student_attempts: int = 0

@dataclass
class ProblemState:
    problem_id: str
    problem_text: str
    subject: str
    steps: list[SolutionStep] = field(default_factory=list)
    current_step: int = 0
    total_hints_used: int = 0
    student_identified_concepts: list[str] = field(default_factory=list)

    @property
    def progress(self) -> float:
        if not self.steps:
            return 0.0
        completed = sum(
            1 for s in self.steps if s.status == StepStatus.COMPLETED
        )
        return completed / len(self.steps)

    def get_current_step(self) -> Optional[SolutionStep]:
        if self.current_step < len(self.steps):
            return self.steps[self.current_step]
        return None

    def advance(self):
        if self.current_step < len(self.steps):
            self.steps[self.current_step].status = StepStatus.COMPLETED
            self.current_step += 1

Problem Analysis Agent

Before guiding the student, the agent needs to fully understand the problem and decompose it into steps:

from agents import Agent, Runner
from pydantic import BaseModel

class StepDefinition(BaseModel):
    description: str
    concept: str
    expected_result: str
    hint_1: str  # Conceptual direction
    hint_2: str  # Structural guidance
    hint_3: str  # Near-solution help
    common_mistakes: list[str]

class ProblemAnalysis(BaseModel):
    subject: str
    difficulty: str
    prerequisites: list[str]
    steps: list[StepDefinition]
    final_answer: str

problem_analyzer = Agent(
    name="Problem Analyzer",
    instructions="""Analyze a homework problem and decompose it into
guided solution steps. For each step:

1. Describe what the student needs to figure out
2. Identify the key concept being applied
3. State the expected intermediate result
4. Create THREE graduated hints:
   - Hint 1: A conceptual question that points in the right direction
     Example: "What mathematical operation combines rates?"
   - Hint 2: A structural guide without specific numbers
     Example: "Set up an equation where distance = rate x time"
   - Hint 3: A near-complete setup with blanks
     Example: "Plug in: d = 60 mph x ___ hours"
5. List common mistakes students make at this step

CRITICAL: The hints must GUIDE, not TELL. The student should have
to think even with the most specific hint.

Identify prerequisites the student might be missing.""",
    output_type=ProblemAnalysis,
)

async def analyze_problem(problem_text: str) -> ProblemState:
    result = await Runner.run(
        problem_analyzer,
        f"Analyze this homework problem:\n\n{problem_text}",
    )
    analysis = result.final_output_as(ProblemAnalysis)

    steps = []
    for i, step_def in enumerate(analysis.steps):
        steps.append(SolutionStep(
            step_number=i + 1,
            description=step_def.description,
            concept=step_def.concept,
            expected_result=step_def.expected_result,
            hints=[step_def.hint_1, step_def.hint_2, step_def.hint_3],
            common_mistakes=step_def.common_mistakes,
        ))

    import hashlib  # stable across runs, unlike the salted built-in hash()
    return ProblemState(
        problem_id=f"prob-{hashlib.sha256(problem_text.encode()).hexdigest()[:8]}",
        problem_text=problem_text,
        subject=analysis.subject,
        steps=steps,
    )
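
To make the expected output of the decomposition concrete, here is a hand-written example of the structure the analyzer should produce, for a hypothetical rate problem ("How far does a car travel at 60 mph in 2.5 hours?"). A plain dict is used so the sketch has no dependency on pydantic or the Agents SDK; the specific hints and mistakes are illustrative, not model output:

```python
# Hypothetical analyzer output for: "How far does a car travel
# at 60 mph in 2.5 hours?" — hand-written, not model-generated.
example_analysis = {
    "subject": "algebra",
    "difficulty": "easy",
    "prerequisites": ["unit rates", "multiplying decimals"],
    "steps": [
        {
            "description": "Identify the relationship between distance, rate, and time",
            "concept": "distance = rate x time",
            "expected_result": "d = r x t",
            "hint_1": "What mathematical operation combines rates?",
            "hint_2": "Set up an equation where distance = rate x time",
            "hint_3": "Write the formula: d = ___ x ___",
            "common_mistakes": ["adding rate and time instead of multiplying"],
        },
        {
            "description": "Substitute the known values and compute",
            "concept": "substitution",
            "expected_result": "150 miles",
            "hint_1": "Which quantities in the problem match r and t?",
            "hint_2": "Replace r and t with the numbers from the problem",
            "hint_3": "Plug in: d = 60 mph x ___ hours",
            "common_mistakes": ["dropping the units", "using 25 instead of 2.5"],
        },
    ],
    "final_answer": "150 miles",
}
```

Each `StepDefinition` in `ProblemAnalysis.steps` follows this shape; note how the hints narrow from a conceptual question down to a fill-in-the-blank setup without ever stating the result.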

The Socratic Guide Agent

This is the core interaction agent. It asks questions instead of giving answers:


from agents import function_tool
import json

# In-memory session store keyed by problem_id — use a database in production
problem_states: dict[str, ProblemState] = {}

@function_tool
def get_next_hint(problem_id: str, step_number: int) -> str:
    """Provide the next level of hint for the current step."""
    # In production, load from database
    state = problem_states.get(problem_id)
    if not state:
        return json.dumps({"error": "problem not found"})

    step = state.steps[step_number - 1]
    if step.hint_level >= len(step.hints):
        return json.dumps({
            "hint": "Let me walk through this step with you.",
            "hint_level": step.hint_level,
            "max_reached": True,
        })

    hint = step.hints[step.hint_level]
    step.hint_level += 1
    step.status = StepStatus.HINT_GIVEN
    state.total_hints_used += 1

    return json.dumps({
        "hint": hint,
        "hint_level": step.hint_level,
        "hints_remaining": len(step.hints) - step.hint_level,
    })

@function_tool
def check_student_work(
    problem_id: str,
    step_number: int,
    student_answer: str,
) -> str:
    """Check if the student's work for a step is on the right track."""
    state = problem_states.get(problem_id)
    if not state:
        return json.dumps({"error": "problem not found"})

    step = state.steps[step_number - 1]
    step.student_attempts += 1

    # Naive containment check — in production, use semantic comparison.
    # (Guard against an empty expected result, which would match anything.)
    expected = step.expected_result.strip().lower()
    is_correct = bool(expected) and expected in student_answer.strip().lower()

    if is_correct:
        step.status = StepStatus.COMPLETED
        # step_number is 1-indexed, so this points at the next 0-indexed
        # step — and equals len(state.steps) once the final step is done,
        # which lets the session loop terminate
        state.current_step = step_number

    return json.dumps({
        "on_track": is_correct,
        "attempts": step.student_attempts,
        "step_complete": is_correct,
        "common_mistakes": step.common_mistakes if not is_correct else [],
    })
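
The containment check above is deliberately naive. As a middle ground before a full LLM-based semantic comparison, a numeric-aware matcher accepts different phrasings of the same value ("150" vs. "d = 150 miles"). This `answers_match` helper is an illustrative sketch, not part of the Agents SDK:

```python
import re

def answers_match(student: str, expected: str, tol: float = 1e-6) -> bool:
    """Numeric-aware comparison sketch: if the expected result contains
    numbers, require each one to appear (within tolerance) somewhere in
    the student's answer; otherwise fall back to normalized text equality."""
    def nums(s: str) -> list[float]:
        return [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", s)]

    expected_nums = nums(expected)
    if expected_nums:
        student_nums = nums(student)
        return all(
            any(abs(s - e) <= tol for s in student_nums)
            for e in expected_nums
        )
    return student.strip().lower() == expected.strip().lower()
```

Swapping this in for the containment check keeps the tool deterministic while tolerating units and extra wording around a numeric answer.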

def build_socratic_instructions(state: ProblemState) -> str:
    current = state.get_current_step()
    if not current:
        return "The student has completed all steps. Congratulate them."

    return f"""You are a Socratic homework helper. The student is working
on: {state.problem_text}

They are on step {current.step_number} of {len(state.steps)}:
"{current.description}" (concept: {current.concept})

SOCRATIC METHOD RULES:
1. NEVER state the answer directly — always ask a guiding question
2. If the student asks "what is the answer?", respond with
   "What do you think? Let's work through it together."
3. Start by asking what the student already knows about the concept
4. If they are stuck, offer to provide a hint (use get_next_hint tool)
5. Validate their work with the check_student_work tool
6. Celebrate progress: "Great thinking!" when they get a step right
7. If they make a common mistake, ask a question that reveals
   why their approach does not work — do NOT just say "that's wrong"

CONCEPT IDENTIFICATION:
- When the student demonstrates understanding of a concept,
  acknowledge it explicitly: "You clearly understand [concept]"
- When they struggle, identify which prerequisite might be missing

Progress: {state.progress:.0%} complete
Hints used: {state.total_hints_used}
Current step attempts: {current.student_attempts}"""

# The instructions are rebuilt from the problem state before every turn,
# so this module-level agent is only a template — the session loop below
# constructs a fresh Agent with build_socratic_instructions(state).
socratic_helper = Agent(
    name="Homework Helper",
    instructions="Dynamic — rebuilt per interaction",
    tools=[get_next_hint, check_student_work],
)

Interaction Loop

The session loop manages the conversation while maintaining problem state:

async def homework_session(problem_text: str):
    state = await analyze_problem(problem_text)
    problem_states[state.problem_id] = state

    print("Let's work through this problem together!")
    print(f"Problem: {state.problem_text}")
    print(f"I have broken it into {len(state.steps)} steps.\n")

    while state.current_step < len(state.steps):
        helper = Agent(
            name="Homework Helper",
            instructions=build_socratic_instructions(state),
            tools=[get_next_hint, check_student_work],
        )

        student_input = input("Student: ")
        if student_input.lower() in ("quit", "exit"):
            break

        result = await Runner.run(helper, student_input)
        print(f"Helper: {result.final_output}\n")

    if state.progress >= 1.0:
        print("Congratulations! You solved the problem!")
        print(f"Hints used: {state.total_hints_used}")

FAQ

How does the agent prevent itself from accidentally revealing the answer?

The system prompt explicitly forbids direct answers and provides alternative phrasings for common situations where a chatbot would normally just give the answer. Additionally, the problem analysis step stores the final answer separately from the hints, and the Socratic guide agent never receives the final answer in its context — only the current step's guiding questions. This architectural separation makes accidental leaks much less likely.
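
That architectural separation can be backed by a cheap output guardrail: before a drafted reply reaches the student, scan it for the stored final answer and regenerate if it appears. A minimal sketch follows — the `leaks_answer` name and exact-substring policy are assumptions, and a production check would also need to catch paraphrases:

```python
import re

def leaks_answer(reply: str, final_answer: str) -> bool:
    """Guardrail sketch: flag a drafted helper reply that contains the
    stored final answer, so it can be regenerated before being shown."""
    def norm(s: str) -> str:
        # Collapse whitespace and case so trivial formatting differences
        # do not hide a leaked answer
        return re.sub(r"\s+", " ", s.strip().lower())

    return bool(norm(final_answer)) and norm(final_answer) in norm(reply)
```

The stored `final_answer` from `ProblemAnalysis` never enters the guide agent's prompt; it is only consulted by this post-generation check.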

What happens when a student is completely stuck even after all three hints?

After exhausting all hints on a step, the agent shifts from Socratic questioning to a worked example of a similar but different problem. It walks through an analogous problem step by step, then asks the student to apply the same approach to their original problem. This provides the scaffolding needed without directly solving their homework.
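
This escalation policy can be made explicit as a small decision function the session loop consults before each turn. The thresholds and mode names below are illustrative choices, not prescribed by the SDK:

```python
def guidance_mode(hint_level: int, attempts: int, max_hints: int = 3) -> str:
    """Sketch of the escalation policy described above: Socratic questions
    first, graduated hints under repeated misses, and a worked analogous
    example once every hint on the step has been used."""
    if hint_level >= max_hints:
        # All graduated hints exhausted: walk through a similar problem
        return "worked_example"
    if attempts >= 2:
        # Repeated misses on this step: proactively offer the next hint
        return "offer_hint"
    return "socratic_question"
```

The returned mode would select which instruction block is appended to the helper's prompt for that turn.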

Can the agent handle problems it has not been specifically trained on?

Yes. The problem analysis step uses the LLM's general reasoning ability to decompose any problem into steps, identify concepts, and generate hints. The agent does not rely on a pre-built problem database. However, the quality of the decomposition depends on the LLM's knowledge of the subject. For advanced topics like graduate-level mathematics, the analysis should be reviewed by a subject-matter expert before deployment.


#HomeworkHelper #SocraticMethod #EducationAI #Python #GuidedLearning #AgenticAI #LearnAI #AIEngineering

CallSphere Team