Building a Content Publishing Agent: Draft, Review, Edit, and Publish Pipeline
Create a multi-stage content publishing agent that drafts articles, routes them through AI reviewer agents, tracks versions, manages edits, and publishes to a CMS via API.
The Content Publishing Challenge
Publishing content involves multiple stages: drafting, review, editing, and final publication. In traditional workflows, each stage involves different people and tools, with content getting lost in email threads and shared documents. An AI-powered publishing agent automates the pipeline while maintaining quality through multi-agent review.
The architecture uses specialized agents for each stage — a drafter that generates content, reviewers that check quality from different angles, an editor that incorporates feedback, and a publisher that pushes to the CMS.
Data Model for the Pipeline
First, define the content artifact as it flows through stages:
```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Any
import uuid


class ContentStatus(Enum):
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    REVISION_NEEDED = "revision_needed"
    APPROVED = "approved"
    PUBLISHED = "published"


@dataclass
class ContentVersion:
    version: int
    content: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    created_by: str = ""
    changes_summary: str = ""


@dataclass
class ReviewFeedback:
    reviewer: str
    approved: bool
    comments: list[str] = field(default_factory=list)
    suggestions: list[str] = field(default_factory=list)
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class ContentArticle:
    article_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    title: str = ""
    topic: str = ""
    target_audience: str = ""
    status: ContentStatus = ContentStatus.DRAFT
    versions: list[ContentVersion] = field(default_factory=list)
    reviews: list[ReviewFeedback] = field(default_factory=list)
    metadata: dict[str, Any] = field(default_factory=dict)

    @property
    def current_version(self) -> ContentVersion | None:
        return self.versions[-1] if self.versions else None

    def add_version(self, content: str, author: str, summary: str) -> None:
        v = ContentVersion(
            version=len(self.versions) + 1,
            content=content,
            created_by=author,
            changes_summary=summary,
        )
        self.versions.append(v)
```
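The `ContentStatus` values imply a fixed lifecycle, and it is easy for an orchestration bug to skip a stage (for example, publishing a draft that was never reviewed). A small guard can catch illegal jumps early. The transition table below is an assumption about the intended flow, not part of the model itself:

```python
from enum import Enum


class ContentStatus(Enum):  # mirrors the enum in the data model above
    DRAFT = "draft"
    IN_REVIEW = "in_review"
    REVISION_NEEDED = "revision_needed"
    APPROVED = "approved"
    PUBLISHED = "published"


# Allowed next states for each status; anything else is a pipeline bug.
ALLOWED = {
    ContentStatus.DRAFT: {ContentStatus.IN_REVIEW},
    ContentStatus.IN_REVIEW: {ContentStatus.REVISION_NEEDED, ContentStatus.APPROVED},
    ContentStatus.REVISION_NEEDED: {ContentStatus.IN_REVIEW},
    ContentStatus.APPROVED: {ContentStatus.PUBLISHED},
    ContentStatus.PUBLISHED: set(),
}


def transition(current: ContentStatus, new: ContentStatus) -> ContentStatus:
    # Reject any status change not listed in the table above.
    if new not in ALLOWED[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new
```

Calling `transition` wherever the pipeline mutates `article.status` turns a silent state-machine violation into an immediate error.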
Stage 1: The Drafter Agent
The drafter takes a brief and produces the first version:
```python
class DrafterAgent:
    def __init__(self, llm_client):
        self.llm = llm_client

    async def draft(self, article: ContentArticle) -> ContentArticle:
        prompt = f"""Write an article on the following topic.

Topic: {article.topic}
Target Audience: {article.target_audience}
Title: {article.title}

Requirements:
- 800 to 1200 words
- Clear structure with headings
- Include practical examples
- Professional tone appropriate for the audience
"""
        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a professional content writer."},
                {"role": "user", "content": prompt},
            ],
        )
        content = response.choices[0].message.content
        article.add_version(content, "drafter_agent", "Initial draft")
        article.status = ContentStatus.IN_REVIEW
        return article
```
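Because the drafter only touches `llm.chat.completions.create` and reads `response.choices[0].message.content`, you can exercise the agents offline with a stub client that mimics that interface. `StubLLM` here is a hypothetical test helper, not part of any SDK:

```python
import asyncio
from types import SimpleNamespace


class StubLLM:
    """Offline stand-in for an OpenAI-style async client.

    Assumes the agents only call `chat.completions.create` and read
    `response.choices[0].message.content`, as in the code above.
    """

    def __init__(self, canned_reply: str):
        self._reply = canned_reply
        self.chat = SimpleNamespace(
            completions=SimpleNamespace(create=self._create)
        )

    async def _create(self, **kwargs):
        # Ignore the request and return a fixed, OpenAI-shaped response.
        message = SimpleNamespace(content=self._reply)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)])


async def demo() -> str:
    llm = StubLLM("A short stand-in draft.")
    response = await llm.chat.completions.create(model="stub", messages=[])
    return response.choices[0].message.content
```

Passing a `StubLLM` into `DrafterAgent` lets you run the whole pipeline in unit tests without network calls or API costs.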
Stage 2: Reviewer Agents
Multiple reviewers check the content from different perspectives. Each reviewer is a specialized agent:
```python
import json


class ReviewerAgent:
    def __init__(self, llm_client, reviewer_name: str, focus_area: str):
        self.llm = llm_client
        self.name = reviewer_name
        self.focus = focus_area

    async def review(self, article: ContentArticle) -> ReviewFeedback:
        content = article.current_version.content
        prompt = f"""Review this article from the perspective of {self.focus}.

Article Title: {article.title}

Content:
{content}

Provide your review as JSON:
{{
  "approved": true/false,
  "comments": ["comment1", "comment2"],
  "suggestions": ["suggestion1", "suggestion2"]
}}"""
        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": f"You are a {self.focus} reviewer."},
                {"role": "user", "content": prompt},
            ],
            response_format={"type": "json_object"},
        )
        result = json.loads(response.choices[0].message.content)
        return ReviewFeedback(
            reviewer=self.name,
            approved=result["approved"],
            comments=result.get("comments", []),
            suggestions=result.get("suggestions", []),
        )


# Create specialized reviewers
reviewers = [
    ReviewerAgent(llm, "technical_reviewer", "technical accuracy and code quality"),
    ReviewerAgent(llm, "seo_reviewer", "SEO optimization and keyword usage"),
    ReviewerAgent(llm, "style_reviewer", "writing style, grammar, and readability"),
]
```
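The reviewers are independent of one another, so their calls can run concurrently with `asyncio.gather` rather than one at a time. A sketch with stand-in reviewer coroutines (the stubs replace real `ReviewerAgent.review` calls):

```python
import asyncio
import random


async def review_stub(name: str) -> dict:
    # Simulate a reviewer call with a little latency.
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return {"reviewer": name, "approved": True}


async def review_all(names: list[str]) -> list[dict]:
    # Fan out all reviewer calls at once; gather preserves input order
    # in its results regardless of which call finishes first.
    return await asyncio.gather(*(review_stub(n) for n in names))


results = asyncio.run(review_all(["technical", "seo", "style"]))
```

With three reviewers, this cuts the review stage's wall-clock time to roughly the slowest single call instead of the sum of all three.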
Stage 3: The Editor Agent
The editor incorporates reviewer feedback into the next version:
```python
class EditorAgent:
    def __init__(self, llm_client):
        self.llm = llm_client

    async def edit(
        self, article: ContentArticle, feedbacks: list[ReviewFeedback]
    ) -> ContentArticle:
        all_suggestions = []
        for fb in feedbacks:
            all_suggestions.extend(
                [f"[{fb.reviewer}] {s}" for s in fb.suggestions]
            )
            all_suggestions.extend(
                [f"[{fb.reviewer}] {c}" for c in fb.comments]
            )
        prompt = f"""Revise this article based on reviewer feedback.

Current Content:
{article.current_version.content}

Reviewer Feedback:
{chr(10).join(f"- {s}" for s in all_suggestions)}

Incorporate the feedback while maintaining the article's voice and structure.
Return only the revised article text."""
        response = await self.llm.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a professional editor."},
                {"role": "user", "content": prompt},
            ],
        )
        revised = response.choices[0].message.content
        article.add_version(revised, "editor_agent", "Incorporated reviewer feedback")
        return article
```
The Pipeline Orchestrator
The orchestrator runs the full pipeline with configurable review rounds:
```python
class PublishingPipeline:
    def __init__(self, drafter, reviewers, editor, publisher, max_rounds=3):
        self.drafter = drafter
        self.reviewers = reviewers
        self.editor = editor
        self.publisher = publisher
        self.max_rounds = max_rounds

    async def run(self, article: ContentArticle) -> ContentArticle:
        # Stage 1: Draft
        article = await self.drafter.draft(article)

        # Stages 2-3: Review and edit loop
        for round_num in range(1, self.max_rounds + 1):
            feedbacks = []
            for reviewer in self.reviewers:
                fb = await reviewer.review(article)
                feedbacks.append(fb)
                article.reviews.append(fb)

            if all(fb.approved for fb in feedbacks):
                article.status = ContentStatus.APPROVED
                break

            article.status = ContentStatus.REVISION_NEEDED
            article = await self.editor.edit(article, feedbacks)
            article.status = ContentStatus.IN_REVIEW

        # Stage 4: Publish
        if article.status == ContentStatus.APPROVED:
            await self.publisher.publish(article)
            article.status = ContentStatus.PUBLISHED

        return article
```
Stage 4: Publishing to a CMS
The publisher pushes the final content to your CMS API:
```python
import httpx


class CMSPublisher:
    def __init__(self, api_base: str, api_key: str):
        self.api_base = api_base
        self.api_key = api_key

    async def publish(self, article: ContentArticle) -> None:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{self.api_base}/articles",
                headers={"Authorization": f"Bearer {self.api_key}"},
                json={
                    "title": article.title,
                    "content": article.current_version.content,
                    "status": "published",
                    "metadata": article.metadata,
                },
            )
            response.raise_for_status()
```
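CMS APIs fail transiently, so the publish call is worth wrapping in a retry with backoff. A minimal sketch, demonstrated with a flaky stand-in instead of a real HTTP call; `publish_with_retry` is a hypothetical wrapper, not an httpx feature:

```python
import asyncio


async def publish_with_retry(publish_once, attempts: int = 3, base_delay: float = 0.01):
    """Retry a zero-argument async publish call with exponential backoff.

    The CMS call from the publisher above could be passed in as a lambda
    or functools.partial.
    """
    for attempt in range(1, attempts + 1):
        try:
            return await publish_once()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the last error
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))


# Demo with a flaky stand-in that fails twice, then succeeds.
calls = {"n": 0}


async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient CMS error")
    return "published"


result = asyncio.run(publish_with_retry(flaky))
```

In production you would catch only retryable errors (timeouts, 5xx responses) rather than every `Exception`, and raise the base delay well above the demo value used here.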
FAQ
How many review rounds should the pipeline allow before force-publishing?
Set a maximum of two to three rounds. If reviewers keep requesting changes after three rounds, the content likely needs a human editor. Escalate to a human rather than running an infinite review loop. Track the approval rate across rounds — if round-three approval is below 50 percent, your drafting prompt needs improvement.
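Tracking that approval rate is straightforward if each stored review records its round number, a field the `ReviewFeedback` model above would need to gain. A sketch over plain dicts with that assumed field:

```python
from collections import defaultdict


def approval_rate_by_round(reviews: list[dict]) -> dict[int, float]:
    """Fraction of approvals per review round.

    Each review is assumed to carry a `round` number alongside its
    `approved` flag.
    """
    totals: dict[int, list[int]] = defaultdict(lambda: [0, 0])
    for r in reviews:
        totals[r["round"]][0] += 1 if r["approved"] else 0
        totals[r["round"]][1] += 1
    return {rnd: ok / n for rnd, (ok, n) in totals.items()}


rates = approval_rate_by_round([
    {"round": 1, "approved": False},
    {"round": 1, "approved": True},
    {"round": 2, "approved": True},
    {"round": 2, "approved": True},
])
```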
How do I prevent reviewers from contradicting each other?
Give each reviewer a clearly scoped focus area and instruct them to only comment within their domain. The technical reviewer should not suggest style changes, and the SEO reviewer should not comment on code correctness. In the editor prompt, explicitly note which feedback came from which reviewer so the editor can weigh domain-specific suggestions appropriately.
Should I use the same LLM for all agents or different models?
Use your strongest model (GPT-4o or equivalent) for the drafter and editor, as they need the most creative and analytical capability. For reviewers, a smaller and faster model can work well since they are checking specific criteria rather than generating content. This reduces cost and latency. Run benchmarks with your actual content to find the quality threshold for each role.
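One way to keep the role-to-model choice in a single, benchmark-adjustable place is a small routing table. The model names below are placeholders for whatever tiers your provider offers:

```python
# Hypothetical role-to-model routing table.
MODEL_BY_ROLE = {
    "drafter": "gpt-4o",
    "editor": "gpt-4o",
    "reviewer": "gpt-4o-mini",
}


def model_for(role: str) -> str:
    # Fall back to the strongest model for unknown roles.
    return MODEL_BY_ROLE.get(role, "gpt-4o")
```

Each agent would then call `model_for("reviewer")` (or its own role) instead of hard-coding `model="gpt-4o"` as the snippets above do.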
#ContentPipeline #MultiAgent #Workflow #Publishing #Python #AgenticAI #LearnAI #AIEngineering
CallSphere Team