Agentic AI archive — page 35 of 35

Agentic AI & LLM Engineering

Deep dives into agentic AI, LLM evaluation, synthetic data generation, model selection, and production AI engineering best practices.

8 of 314 articles

Agentic AI
6 min read · Oct 28, 2025

Why Data Curation for LLM Training Takes So Long: Text, Image, and Video Processing Bottlenecks

Traditional data curation pipelines for LLM training face critical bottlenecks in synthetic data generation, quality filtering, and semantic deduplication across text, image, and video modalities.

Agentic AI
5 min read · Oct 27, 2025

Quality Data Filtering vs Fuzzy Deduplication: The Critical Tradeoff in LLM Training

Learn how quality filtering and fuzzy deduplication create a tradeoff in LLM data curation, and how NeMo Curator uses GPU acceleration to handle both at scale.

Agentic AI
4 min read · Oct 27, 2025

How NVIDIA NeMo Curator Speeds Up LLM Training: Benchmarks and Results

NeMo Curator delivers up to 17x faster data processing with measurable accuracy gains. See the GPU scaling benchmarks and real-world performance improvements for LLM training.

Agentic AI
5 min read · Jul 5, 2025

Azure AI Foundry Agent Service: A Complete Guide to Building Enterprise AI Agents

Azure AI Foundry Agent Service provides a managed framework for building, managing, and deploying AI agents on Azure. Compare it to Semantic Kernel, AutoGen, and Copilot Studio.

Agentic AI
5 min read · Feb 14, 2025

AI Agents: What They Are and the Current Landscape in 2025

A comprehensive overview of AI agents — what they are, how they work, and the major platforms including GPT Agents, Gemini, Claude, Copilot, AutoGen, and AutoGPT.

Agentic AI
6 min read · Jan 24, 2025

Prompt Task Classification and Complexity Evaluation: NVIDIA's DeBERTa-Based Framework Explained

NVIDIA's prompt-task-and-complexity-classifier categorizes prompts across 11 task types and 6 complexity dimensions using DeBERTa. Learn how it works and when to use it.

Agentic AI
5 min read · Sep 21, 2024

Retrieval-Augmented Generation (RAG): How It Works and Why It Matters

RAG strengthens LLM responses by grounding them in external knowledge sources. Learn how retrieval-augmented generation reduces hallucinations and enables real-time knowledge access.