# AI Agent for Course Creation: Outline Generation, Content Drafting, and Quiz Design
Build an AI course creation agent that generates curriculum outlines mapped to learning objectives, drafts lesson content, and designs aligned assessments — all from a topic description.
## The Course Creation Pipeline
Building a course is a complex, multi-stage process that most educators dread. You need learning objectives that are measurable, a logical content sequence, lessons that build on each other, and assessments that actually test what was taught. An AI course creation agent automates this pipeline while maintaining pedagogical rigor by following established instructional design frameworks like Bloom's Taxonomy and backward design.
The pipeline has four stages: Objective Mapping, Outline Generation, Content Drafting, and Assessment Design. Each stage's output feeds into the next, creating a coherent course where every lesson and quiz ties back to specific learning objectives.
## Course and Curriculum Data Models
```python
from dataclasses import dataclass, field
from enum import Enum


class BloomLevel(str, Enum):
    REMEMBER = "remember"
    UNDERSTAND = "understand"
    APPLY = "apply"
    ANALYZE = "analyze"
    EVALUATE = "evaluate"
    CREATE = "create"


@dataclass
class LearningObjective:
    objective_id: str
    description: str
    bloom_level: BloomLevel
    measurable_verb: str  # e.g., "identify", "compare", "design"
    assessment_criteria: str


@dataclass
class Lesson:
    lesson_id: str
    title: str
    module: str
    order: int
    objectives: list[str]  # objective_ids
    estimated_duration_minutes: int
    content: str = ""
    prerequisites: list[str] = field(default_factory=list)
    key_concepts: list[str] = field(default_factory=list)


@dataclass
class QuizItem:
    question: str
    question_type: str
    correct_answer: str
    distractors: list[str] = field(default_factory=list)
    objective_id: str = ""  # links back to a learning objective
    bloom_level: str = ""
    explanation: str = ""


@dataclass
class Module:
    module_id: str
    title: str
    description: str
    order: int
    lessons: list[Lesson] = field(default_factory=list)
    quiz: list[QuizItem] = field(default_factory=list)


@dataclass
class CourseOutline:
    title: str
    description: str
    target_audience: str
    prerequisites: list[str]
    objectives: list[LearningObjective] = field(default_factory=list)
    modules: list[Module] = field(default_factory=list)
    total_hours: float = 0.0
```
## Stage 1: Learning Objective Generation
The backward design approach starts with what students should be able to do after completing the course, then works backward to content:
```python
from agents import Agent, Runner
from pydantic import BaseModel


class ObjectiveOutput(BaseModel):
    objectives: list[dict]
    prerequisite_knowledge: list[str]
    target_audience_description: str


objective_designer = Agent(
    name="Learning Objective Designer",
    instructions="""You design measurable learning objectives using
Bloom's Taxonomy. Given a course topic and target audience:

1. Generate 8-15 learning objectives that span Bloom's levels:
   - 20% Remember/Understand (foundation)
   - 40% Apply/Analyze (core skills)
   - 40% Evaluate/Create (advanced skills)
2. Each objective MUST:
   - Start with a measurable action verb from Bloom's Taxonomy
   - Be specific enough to assess (not "understand databases" but
     "design a normalized schema for a given business domain")
   - Include the condition and criteria for success
3. Organize objectives in a logical learning sequence where each
   builds on previous ones.

BLOOM'S VERBS BY LEVEL:
- Remember: list, define, identify, recall, name
- Understand: explain, summarize, classify, compare
- Apply: implement, execute, solve, demonstrate
- Analyze: differentiate, examine, deconstruct, debug
- Evaluate: justify, critique, assess, defend
- Create: design, construct, produce, compose

Return a JSON object with objectives, prerequisites, and
target audience description.""",
    output_type=ObjectiveOutput,
)


async def generate_objectives(
    topic: str, audience: str, duration_hours: float
) -> list[LearningObjective]:
    result = await Runner.run(
        objective_designer,
        f"Topic: {topic}\nAudience: {audience}\n"
        f"Course duration: {duration_hours} hours",
    )
    output = result.final_output_as(ObjectiveOutput)
    objectives = []
    for i, obj in enumerate(output.objectives):
        objectives.append(LearningObjective(
            objective_id=f"obj-{i+1:03d}",
            description=obj["description"],
            bloom_level=BloomLevel(obj["bloom_level"]),
            measurable_verb=obj["verb"],
            assessment_criteria=obj["criteria"],
        ))
    return objectives
```
## Stage 2: Outline Generation
With objectives defined, generate a modular course outline that maps every objective to specific lessons:
```python
class OutlineOutput(BaseModel):
    modules: list[dict]
    objective_coverage: dict  # objective_id -> list of lesson_ids


outline_generator = Agent(
    name="Course Outline Generator",
    instructions="""You create course outlines that ensure complete
coverage of learning objectives. Given a set of objectives:

1. Group related objectives into modules (3-7 modules typical)
2. Within each module, create lessons (2-5 per module)
3. Each lesson should:
   - Address 1-3 specific learning objectives
   - Build on previous lessons (define prerequisites)
   - Include estimated duration (30-90 minutes each)
   - List key concepts to cover
4. COVERAGE CHECK: Every learning objective must appear in at least
   one lesson. Verify this explicitly.
5. SEQUENCING RULES:
   - Remember/Understand objectives before Apply objectives
   - Concrete before abstract
   - Simple before complex
   - Each module should end with an integrative lesson

Return modules with their lessons and an objective coverage map.""",
    output_type=OutlineOutput,
)


async def generate_outline(
    topic: str,
    objectives: list[LearningObjective],
    duration_hours: float,
) -> list[Module]:
    obj_text = "\n".join(
        f"- [{o.objective_id}] ({o.bloom_level.value}) {o.description}"
        for o in objectives
    )
    result = await Runner.run(
        outline_generator,
        f"Topic: {topic}\nDuration: {duration_hours}h\n"
        f"Objectives:\n{obj_text}",
    )
    output = result.final_output_as(OutlineOutput)
    modules = []
    for i, mod in enumerate(output.modules):
        lessons = []
        for j, les in enumerate(mod["lessons"]):
            lessons.append(Lesson(
                lesson_id=f"les-{i+1}-{j+1}",
                title=les["title"],
                module=mod["title"],
                order=j + 1,
                objectives=les.get("objective_ids", []),
                estimated_duration_minutes=les.get("duration", 45),
                prerequisites=les.get("prerequisites", []),
                key_concepts=les.get("key_concepts", []),
            ))
        modules.append(Module(
            module_id=f"mod-{i+1:02d}",
            title=mod["title"],
            description=mod["description"],
            order=i + 1,
            lessons=lessons,
        ))
    return modules
```
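The COVERAGE CHECK above relies on the model verifying itself; it is cheap to also enforce deterministically once the outline comes back. A minimal sketch, assuming plain dicts in the shape the outline generator returns (the `verify_objective_coverage` helper is illustrative and not part of the pipeline code):

```python
def verify_objective_coverage(
    objective_ids: list[str],
    modules: list[dict],
) -> list[str]:
    """Return the objective IDs that no lesson in any module covers."""
    covered: set[str] = set()
    for module in modules:
        for lesson in module["lessons"]:
            covered.update(lesson.get("objective_ids", []))
    return [oid for oid in objective_ids if oid not in covered]


# A non-empty result means Stage 2 should be re-run with the gaps listed.
missing = verify_objective_coverage(
    ["obj-001", "obj-002", "obj-003"],
    [{"lessons": [{"objective_ids": ["obj-001", "obj-002"]}]}],
)
```

Running this after `generate_outline` and re-prompting with any missing IDs closes the loop without trusting the model's own coverage map.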
## Stage 3: Content Drafting
With the structure defined, draft lesson content that covers the mapped objectives:
```python
content_drafter = Agent(
    name="Lesson Content Drafter",
    instructions="""You draft educational lesson content. For each
lesson:

1. Start with a brief MOTIVATION section (why this matters)
2. Present CORE CONTENT with clear explanations
3. Include WORKED EXAMPLES that demonstrate the concept
4. Add PRACTICE EXERCISES (2-3 per lesson)
5. End with a KEY TAKEAWAYS summary

CONTENT GUIDELINES:
- Define every technical term on first use
- Use analogies to connect new concepts to familiar ones
- Include code examples, diagrams, or formulas where appropriate
- Mark prerequisite knowledge clearly
- Keep paragraphs short (3-5 sentences max)
- Use headers and bullet points for scannability

Each piece of content must demonstrably address its mapped
learning objectives. At the end, add a self-check: "After this
lesson, you should be able to: [restate objectives]" """,
)


async def draft_lesson_content(
    lesson: Lesson,
    objectives: list[LearningObjective],
    previous_lessons: list[str],
) -> str:
    mapped_objectives = [
        o for o in objectives if o.objective_id in lesson.objectives
    ]
    obj_text = "\n".join(
        f"- {o.description} (Bloom: {o.bloom_level.value})"
        for o in mapped_objectives
    )
    result = await Runner.run(
        content_drafter,
        f"Lesson: {lesson.title}\n"
        f"Key concepts: {', '.join(lesson.key_concepts)}\n"
        f"Duration: {lesson.estimated_duration_minutes} minutes\n"
        f"Learning objectives to address:\n{obj_text}\n"
        f"Prerequisites covered in previous lessons: "
        f"{', '.join(previous_lessons)}",
    )
    return result.final_output
```
## Stage 4: Assessment Design
Generate quizzes that are aligned to specific learning objectives, ensuring every objective is assessed:
```python
class AssessmentOutput(BaseModel):
    questions: list[dict]
    objective_coverage: dict


assessment_designer = Agent(
    name="Assessment Designer",
    instructions="""Design quiz questions aligned to learning objectives.

ALIGNMENT RULES:
1. Each question must map to exactly one learning objective
2. The question's cognitive demand must match the objective's
   Bloom's level (a "remember" objective gets recall questions,
   an "analyze" objective gets analysis questions)
3. Every objective must be assessed by at least one question

QUESTION DESIGN:
- Multiple choice: 4 options, one correct, three plausible distractors
- Short answer: clear rubric for what constitutes a correct answer
- Coding/practical: specific input/output expectations

DISTRACTOR QUALITY:
- Each distractor targets a specific misconception
- Distractors are the same length and format as the correct answer
- Avoid "all of the above" and "none of the above"

Return questions with objective mappings and an explicit coverage
check showing which objectives are assessed.""",
    output_type=AssessmentOutput,
)


async def design_module_assessment(
    module: Module,
    objectives: list[LearningObjective],
    questions_per_objective: int = 2,
) -> list[QuizItem]:
    module_obj_ids = set()
    for lesson in module.lessons:
        module_obj_ids.update(lesson.objectives)
    module_objectives = [
        o for o in objectives if o.objective_id in module_obj_ids
    ]
    obj_text = "\n".join(
        f"- [{o.objective_id}] ({o.bloom_level.value}) "
        f"{o.description} — Assess: {o.assessment_criteria}"
        for o in module_objectives
    )
    result = await Runner.run(
        assessment_designer,
        f"Module: {module.title}\n"
        f"Objectives to assess:\n{obj_text}\n"
        f"Generate {questions_per_objective} questions per objective.",
    )
    output = result.final_output_as(AssessmentOutput)
    items = []
    for q in output.questions:
        items.append(QuizItem(
            question=q["question"],
            question_type=q["type"],
            correct_answer=q["correct_answer"],
            distractors=q.get("distractors", []),
            objective_id=q["objective_id"],
            bloom_level=q.get("bloom_level", ""),
            explanation=q.get("explanation", ""),
        ))
    return items
```
## Running the Full Pipeline
```python
import asyncio


async def create_course(
    topic: str, audience: str, duration_hours: float
) -> CourseOutline:
    # Stage 1: learning objectives
    objectives = await generate_objectives(topic, audience, duration_hours)
    print(f"Generated {len(objectives)} learning objectives")

    # Stage 2: course outline
    modules = await generate_outline(topic, objectives, duration_hours)
    print(f"Created {len(modules)} modules")

    # Stage 3: draft content for each lesson
    previous = []
    for module in modules:
        for lesson in module.lessons:
            lesson.content = await draft_lesson_content(
                lesson, objectives, previous
            )
            previous.append(lesson.title)
        print(f"Drafted content for module: {module.title}")

    # Stage 4: design assessments per module
    for module in modules:
        module.quiz = await design_module_assessment(module, objectives)
        print(f"Designed {len(module.quiz)} quiz items for {module.title}")

    course = CourseOutline(
        title=f"Complete Guide to {topic}",
        description=f"A {duration_hours}-hour course on {topic}",
        target_audience=audience,
        prerequisites=[],
        objectives=objectives,
        modules=modules,
        total_hours=duration_hours,
    )
    return course


# Usage
course = asyncio.run(
    create_course("Python Web Development", "junior developers", 20)
)
```
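Because every model in the pipeline is a plain dataclass, the finished course serializes with the standard library alone, which is handy for review or for loading into an LMS. A sketch using a pared-down stand-in for the `CourseOutline` dataclass so the example runs standalone (`export_course` is illustrative, not part of the pipeline above):

```python
import json
from dataclasses import asdict, dataclass, field


# Minimal stand-in for the CourseOutline dataclass defined earlier.
@dataclass
class CourseOutline:
    title: str
    total_hours: float = 0.0
    modules: list = field(default_factory=list)


def export_course(course: CourseOutline) -> str:
    """Serialize the full course tree (modules, lessons, quizzes) to JSON."""
    return json.dumps(asdict(course), indent=2)


course = CourseOutline(title="Python Web Development", total_hours=20.0)
print(export_course(course))
```

On the real models this works unchanged: `asdict` recurses through nested dataclasses, and because `BloomLevel` subclasses `str`, `json.dumps` serializes its values directly.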
## FAQ
### How does the agent ensure assessments actually test the learning objectives and not something else?
The backward design approach enforces alignment at every stage. Each quiz question is tagged with a specific objective ID, and the assessment designer is required to produce a coverage map showing which objectives are tested. A validation step checks that every objective has at least one question and that the question's Bloom level matches the objective's Bloom level. For instance, a "create" objective cannot be assessed with a multiple-choice recall question — it requires a practical or project-based assessment.
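That validation step can be a deterministic function rather than another LLM call. A minimal sketch (the helper name and dict shapes are illustrative, not part of the pipeline code above):

```python
def validate_alignment(
    objectives: dict[str, str],  # objective_id -> required Bloom level
    questions: list[dict],       # each: {"objective_id": ..., "bloom_level": ...}
) -> list[str]:
    """Return human-readable alignment problems; empty list means aligned."""
    problems = []
    assessed = {q["objective_id"] for q in questions}
    # Every objective must have at least one question.
    for oid in objectives:
        if oid not in assessed:
            problems.append(f"{oid}: no question assesses this objective")
    # Each question's Bloom level must match its objective's level.
    for q in questions:
        expected = objectives.get(q["objective_id"])
        if expected and q["bloom_level"] != expected:
            problems.append(
                f"{q['objective_id']}: question at Bloom level "
                f"'{q['bloom_level']}', objective requires '{expected}'"
            )
    return problems
```

Any non-empty result can be fed back to the assessment designer as a repair prompt.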
### Can the agent handle updating a course when new content needs to be added?
Yes. Because objectives, modules, lessons, and quizzes are linked by IDs, adding a new topic means generating new objectives, inserting lessons into the appropriate module, and designing additional quiz items. The outline generator can be re-run in "update" mode where it receives the existing outline and adds new elements without restructuring what already works. Existing content and assessments remain stable.
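With stable IDs, the surgical insert is mechanical. A sketch of appending a lesson to an existing module without touching earlier lesson IDs (the helper is illustrative, and the dataclasses here are pared-down stand-ins for the ones defined earlier):

```python
from dataclasses import dataclass, field


# Minimal stand-ins for the Lesson/Module dataclasses defined earlier.
@dataclass
class Lesson:
    lesson_id: str
    title: str
    order: int
    objectives: list = field(default_factory=list)


@dataclass
class Module:
    module_id: str
    title: str
    lessons: list = field(default_factory=list)


def insert_lesson(module: Module, title: str, objective_ids: list) -> Lesson:
    """Append a new lesson; existing lesson IDs and ordering stay untouched."""
    next_order = len(module.lessons) + 1
    # Reuse the module's numeric suffix to follow the les-<module>-<n> scheme.
    mod_num = module.module_id.split("-")[1].lstrip("0")
    lesson = Lesson(
        lesson_id=f"les-{mod_num}-{next_order}",
        title=title,
        order=next_order,
        objectives=objective_ids,
    )
    module.lessons.append(lesson)
    return lesson
```

The new lesson then goes through Stage 3 drafting and Stage 4 assessment on its own, leaving existing content and quizzes untouched.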
### How do you maintain consistency across lessons drafted by the AI?
The content drafter receives a list of previous lesson titles and their key concepts as context. This prevents re-explaining concepts that were covered in earlier lessons and ensures consistent terminology. For additional consistency, you can add a "style guide" to the content drafter's system prompt that specifies voice, formatting conventions, code style, and terminology preferences. Running a final review pass with a separate editor agent that checks for contradictions across lessons adds another layer of quality control.
#CourseCreation #CurriculumDesign #EducationAI #Python #InstructionalDesign #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.