Grounding
Grounding is the practice of anchoring an LLM's outputs in verifiable external sources - retrieved documents, tool results, database records, or other factual references - rather than relying solely on the model's parametric knowledge.
Details
Grounding is the primary mitigation for hallucination: by conditioning generation on retrieved evidence (as in RAG or via web search tools) and validating outputs against source material, systems can detect and reduce fabricated content. In agent systems, grounding also applies to tool call arguments - verifying that identifiers, URLs, and parameters correspond to real entities before execution, which is a key defense against hallucination exploitation.
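A minimal sketch of grounding tool call arguments, assuming a hypothetical agent that proposes calls as plain dicts and a hypothetical known-good registry of package names; the names and structure are illustrative, not a specific framework's API:

```python
# Hypothetical snapshot of a known-good registry.
KNOWN_PACKAGES = {"requests", "numpy", "pandas"}

def validate_install_call(call: dict) -> None:
    """Reject a proposed install call whose package name is not
    verified against the known-good registry."""
    package = call["arguments"]["name"]
    if package not in KNOWN_PACKAGES:
        # A fabricated package name may indicate hallucination, or an
        # exploitation attempt such as slopsquatting.
        raise ValueError(f"Ungrounded package name: {package!r}")

proposed = {"tool": "install_package", "arguments": {"name": "numpyy"}}
try:
    validate_install_call(proposed)
except ValueError as err:
    print(err)  # call is blocked before execution
```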
Grounding is not binary: it ranges from loose (providing relevant context and hoping the model uses it) to strict (requiring every claim to cite a retrieved source and rejecting uncited outputs).
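The strict end of that spectrum can be enforced mechanically. Below is a minimal sketch, assuming a hypothetical output convention where every sentence ends with a citation tag like [doc1] naming a passage that was actually retrieved; the tag format and helper are illustrative:

```python
import re

def enforce_citations(answer: str, retrieved_ids: set[str]) -> str:
    """Reject the answer unless every sentence cites a retrieved passage."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    for sentence in sentences:
        match = re.search(r"\[(\w+)\][.!?]?$", sentence)
        if match is None:
            # Strict grounding: uncited claims are rejected outright.
            raise ValueError(f"Uncited claim rejected: {sentence!r}")
        if match.group(1) not in retrieved_ids:
            # A citation to a source that was never retrieved is itself
            # a likely fabrication.
            raise ValueError(f"Citation to unretrieved source: {sentence!r}")
    return answer

enforce_citations(
    "Grounding reduces fabricated content [doc1]. It conditions on retrieval [doc2].",
    retrieved_ids={"doc1", "doc2"},
)
```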
Examples
- A RAG system that retrieves passages and instructs the model to answer only from the retrieved content (see the prompt-construction sketch after this list).
- Validation of tool call arguments (package names, URLs, record IDs) against known-good registries before execution.
- A citation-required mode where the model must reference specific source documents for each claim.
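A minimal sketch of the first example, assuming a hypothetical retriever has already produced the passages; the prompt wording and function are illustrative, not a specific library's API:

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that confines the model to the numbered passages
    and instructs it to abstain when they are insufficient."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the passages below and cite passage numbers. "
        "If the passages do not contain the answer, reply 'I don't know.'\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt(
    "What is the API rate limit?",
    ["The public API allows 100 requests per minute per key."],
))
```

The abstention instruction matters: without an explicit "I don't know" escape, a model given insufficient context tends to fall back on parametric knowledge, defeating the grounding.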
Synonyms
output grounding, grounded generation