
Building an AI Help Center: Context-Aware Documentation Search and Support

Create an AI-powered help center that ingests your documentation, searches by context and meaning, suggests relevant articles proactively, and escalates to human support when needed.

Beyond Keyword Search for Help Centers

Traditional help centers rely on users knowing the right search terms. A user struggling with "my chart is not showing data" will not find the article titled "Configuring Data Source Connections for Dashboards" because there is no keyword overlap. An AI help center understands that both are about the same problem and returns the right answer regardless of how the user phrases their question.

Documentation Ingestion Pipeline

The first step is converting your documentation into searchable chunks with proper metadata. Each chunk retains its source article, section heading, and category for attribution and filtering.

from dataclasses import dataclass
from openai import OpenAI
import hashlib

client = OpenAI()

def create_embedding(text: str) -> list[float]:
    """Embed text with the OpenAI embeddings API (model name can be swapped)."""
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    )
    return response.data[0].embedding

@dataclass
class DocChunk:
    chunk_id: str
    article_id: str
    article_title: str
    section_heading: str
    content: str
    category: str
    url: str
    embedding: list[float] | None = None

def chunk_article(article: dict, max_chunk_size: int = 800) -> list[DocChunk]:
    """Split an article into chunks by section headings."""
    content = article["content"]
    sections = split_by_headings(content)
    chunks = []

    for section in sections:
        # Split large sections into smaller overlapping chunks
        text_chunks = split_text(section["content"], max_chunk_size, overlap=100)
        for i, text in enumerate(text_chunks):
            chunk_id = hashlib.sha256(
                f"{article['id']}:{section['heading']}:{i}".encode()
            ).hexdigest()[:16]

            chunks.append(DocChunk(
                chunk_id=chunk_id,
                article_id=article["id"],
                article_title=article["title"],
                section_heading=section["heading"],
                content=text,
                category=article.get("category", "general"),
                url=article["url"],
            ))
    return chunks


def split_text(text: str, max_size: int, overlap: int = 100) -> list[str]:
    """Split text into chunks of at most max_size characters,
    with each chunk overlapping the previous one by `overlap` characters."""
    if len(text) <= max_size:
        return [text]
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_size])
        if start + max_size >= len(text):
            break
        start += max_size - overlap
    return chunks


def split_by_headings(markdown: str) -> list[dict]:
    """Split markdown content by ## headings."""
    import re
    sections = []
    parts = re.split(r'^(## .+)$', markdown, flags=re.MULTILINE)

    current_heading = "Introduction"
    current_content = ""

    for part in parts:
        if part.startswith("## "):
            if current_content.strip():
                sections.append({
                    "heading": current_heading,
                    "content": current_content.strip()
                })
            current_heading = part.replace("## ", "").strip()
            current_content = ""
        else:
            current_content += part

    if current_content.strip():
        sections.append({
            "heading": current_heading,
            "content": current_content.strip()
        })
    return sections


async def index_documentation(articles: list[dict], db_pool):
    """Process and index all documentation articles."""
    for article in articles:
        chunks = chunk_article(article)
        for chunk in chunks:
            embedding = create_embedding(chunk.content)
            await store_chunk(db_pool, chunk, embedding)
    print(f"Indexed {len(articles)} articles.")
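
The pipeline above assumes a `doc_chunks` table with a pgvector column and a `store_chunk` helper, neither of which is shown. A minimal sketch of both follows; the schema, the 1536-dimension column (the default for `text-embedding-3-small`), and the upsert strategy are assumptions to adapt:

```python
# Assumed schema for the doc_chunks table used throughout this article.
DOC_CHUNKS_DDL = """
CREATE TABLE IF NOT EXISTS doc_chunks (
    chunk_id        text PRIMARY KEY,
    article_id      text NOT NULL,
    article_title   text NOT NULL,
    section_heading text NOT NULL,
    content         text NOT NULL,
    category        text NOT NULL,
    url             text NOT NULL,
    embedding       vector(1536)
);
"""

def to_pgvector(embedding: list[float]) -> str:
    """Format an embedding as a pgvector literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"

async def store_chunk(db_pool, chunk, embedding: list[float]) -> None:
    """Upsert one chunk and its embedding (asyncpg-style pool assumed)."""
    async with db_pool.acquire() as conn:
        await conn.execute("""
            INSERT INTO doc_chunks (chunk_id, article_id, article_title,
                                    section_heading, content, category,
                                    url, embedding)
            VALUES ($1, $2, $3, $4, $5, $6, $7, $8::vector)
            ON CONFLICT (chunk_id) DO UPDATE
                SET content = EXCLUDED.content,
                    embedding = EXCLUDED.embedding
        """, chunk.chunk_id, chunk.article_id, chunk.article_title,
             chunk.section_heading, chunk.content, chunk.category,
             chunk.url, to_pgvector(embedding))
```

Upserting on `chunk_id` makes re-indexing idempotent: unchanged chunks are simply overwritten in place.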

Contextual Search with User State

When a user searches from within the product, include their current context to boost relevance.

from fastapi import FastAPI, Depends, Query
from pydantic import BaseModel

app = FastAPI()

class HelpSearchResult(BaseModel):
    article_title: str
    section: str
    snippet: str
    url: str
    relevance_score: float

@app.get("/api/help/search", response_model=list[HelpSearchResult])
async def search_help(
    q: str = Query(..., min_length=2),
    current_page: str | None = Query(None),
    error_code: str | None = Query(None),
    tenant_id: str = Depends(get_current_tenant),
    db_pool = Depends(get_db_pool),
):
    # Enrich the query with context
    enriched_query = q
    if current_page:
        enriched_query += f" (user is on the {current_page} page)"
    if error_code:
        enriched_query += f" (error code: {error_code})"

    query_embedding = create_embedding(enriched_query)
    embedding_str = "[" + ",".join(str(x) for x in query_embedding) + "]"

    async with db_pool.acquire() as conn:
        rows = await conn.fetch("""
            SELECT article_title, section_heading, content, url,
                   1 - (embedding <=> $1::vector) AS score
            FROM doc_chunks
            ORDER BY embedding <=> $1::vector
            LIMIT 10;
        """, embedding_str)

    return [
        HelpSearchResult(
            article_title=r["article_title"],
            section=r["section_heading"],
            snippet=r["content"][:200] + ("..." if len(r["content"]) > 200 else ""),
            url=r["url"],
            relevance_score=round(float(r["score"]), 4),
        )
        for r in rows
    ]

AI Answer Generation with Citations

Instead of just returning search results, generate a direct answer with citations to the source documentation.

class HelpAnswer(BaseModel):
    answer: str
    sources: list[dict]
    confidence: float
    suggest_ticket: bool

async def answer_help_question(question: str, context: dict,
                                db_pool, llm_client) -> HelpAnswer:
    # Retrieve relevant documentation chunks
    query_embedding = create_embedding(question)
    embedding_str = "[" + ",".join(str(x) for x in query_embedding) + "]"

    async with db_pool.acquire() as conn:
        chunks = await conn.fetch("""
            SELECT article_title, section_heading, content, url,
                   1 - (embedding <=> $1::vector) AS score
            FROM doc_chunks
            ORDER BY embedding <=> $1::vector
            LIMIT 5;
        """, embedding_str)

    if not chunks or float(chunks[0]["score"]) < 0.3:
        return HelpAnswer(
            answer="I could not find a relevant answer in the documentation.",
            sources=[],
            confidence=0.0,
            suggest_ticket=True,
        )

    doc_context = "\n\n".join([
        f"[Source: {c['article_title']} > {c['section_heading']}]\n{c['content']}"
        for c in chunks
    ])

    prompt = f"""Answer the user's question using ONLY the documentation below.
If the documentation does not contain the answer, say so clearly.
Include [Source: article title] citations for every fact you state.

Documentation:
{doc_context}

User question: {question}"""

    response = await llm_client.chat(
        messages=[{"role": "user", "content": prompt}],
    )

    sources = [
        {"title": c["article_title"], "url": c["url"],
         "section": c["section_heading"]}
        for c in chunks[:3]
    ]

    top_score = float(chunks[0]["score"])
    return HelpAnswer(
        answer=response.content,
        sources=sources,
        confidence=round(top_score, 2),
        suggest_ticket=top_score < 0.5,
    )

Escalation to Human Support

When the AI cannot answer confidently, it creates a support ticket pre-populated with context.

import json

async def create_support_ticket(question: str, ai_answer: HelpAnswer,
                                 user_context: dict, db) -> dict:
    ticket = await db.fetchrow("""
        INSERT INTO support_tickets
            (user_id, tenant_id, subject, body, priority, status, ai_context)
        VALUES ($1, $2, $3, $4, $5, 'open', $6)
        RETURNING id, subject, status;
    """,
        user_context["user_id"],
        user_context["tenant_id"],
        f"Help request: {question[:100]}",
        f"User question: {question}\n\n"
        f"AI attempted answer (confidence: {ai_answer.confidence}):\n"
        f"{ai_answer.answer}\n\n"
        f"User was on page: {user_context.get('current_page', 'unknown')}",
        "normal" if ai_answer.confidence > 0.2 else "high",
        # asyncpg does not encode dicts to jsonb automatically;
        # serialize explicitly (or register a jsonb codec on the pool)
        json.dumps({"ai_answer": ai_answer.answer,
                    "sources": ai_answer.sources}),
    )
    return dict(ticket)

FAQ

How often should I re-index the documentation?

Set up a webhook from your documentation CMS that triggers re-indexing whenever an article is created, updated, or deleted. For bulk updates (documentation restructuring), run a full re-index job. Delete stale chunks for removed articles by tracking article IDs and removing orphaned chunks after each sync.
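
The sync step described above, index what is new and delete what is orphaned, can be sketched as a small planning function; the name and return shape are my own, not a fixed API:

```python
def plan_sync(indexed_ids: set[str], cms_ids: set[str]) -> dict:
    """Given article IDs currently in the index and article IDs currently
    in the CMS, decide what a sync job must do: newly added articles need
    indexing, and orphaned articles need their chunks deleted."""
    return {
        "to_index": sorted(cms_ids - indexed_ids),
        "to_delete": sorted(indexed_ids - cms_ids),
    }
```

Running this after every webhook batch keeps the index from accumulating stale chunks for deleted articles.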

How do I handle documentation that contradicts itself?

Add a last_updated field to each chunk and boost newer content in relevance scoring. When the AI detects contradictions in retrieved chunks, instruct it to prefer the most recently updated source and flag the contradiction to your documentation team for resolution.
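
One way to implement that boost is to blend the vector similarity with an exponential recency decay; the 0.85/0.15 weights and 180-day half-life below are assumptions to tune, not values from the article:

```python
from datetime import datetime, timezone

def recency_boosted_score(similarity: float, last_updated: datetime,
                          half_life_days: float = 180.0) -> float:
    """Blend vector similarity with exponential recency decay so that,
    between two chunks saying similar things, the newer one ranks first."""
    age_days = (datetime.now(timezone.utc) - last_updated).days
    recency = 0.5 ** (age_days / half_life_days)
    return 0.85 * similarity + 0.15 * recency
```

Because the recency term is bounded, an old article that is clearly more relevant still outranks a new but off-topic one.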

Should the AI help center replace the traditional search entirely?

No. Keep keyword search as a fallback. Some users prefer browsing categories and scanning article titles. Display the AI answer prominently at the top of search results, with traditional keyword results below. This gives users the speed of AI with the transparency of traditional search.
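
Displaying both result sets means merging them for the results page. A small sketch, deduplicating by URL (function name and result shape are assumed):

```python
def merge_results(semantic: list[dict], keyword: list[dict],
                  limit: int = 10) -> list[dict]:
    """Show semantic hits first, then any keyword hits that were not
    already returned, deduplicating by URL."""
    seen, merged = set(), []
    for result in semantic + keyword:
        if result["url"] not in seen:
            seen.add(result["url"])
            merged.append(result)
    return merged[:limit]
```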


#AIHelpCenter #DocumentationSearch #SupportAutomation #SaaS #Python #RAG #AgenticAI #LearnAI #AIEngineering

CallSphere Team

Expert insights on AI voice agents and customer communication automation.
