Multi-turn Conversation

A structural property of AI systems where conversation history accumulates across turns, with each exchange adding content to the LLM's context. The application manages this growing history - storing, truncating, or summarizing prior turns to fit within the model's context window.
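A minimal sketch of this accumulation behavior, assuming a simple message-list representation. The `Conversation` class and `call_model` stub are hypothetical; in a real system `call_model` would be an LLM API call:

```python
def call_model(messages):
    # Stand-in for a real LLM API call; echoes the last user message.
    return f"(reply to: {messages[-1]['content']})"

class Conversation:
    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def send(self, user_text):
        # Each turn appends to the history, so every request carries
        # the full accumulated context - including all prior untrusted input.
        self.messages.append({"role": "user", "content": user_text})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

chat = Conversation("You are a helpful assistant.")
chat.send("Hello")
chat.send("Tell me more")
# History now holds 1 system + 2 user + 2 assistant messages.
```

The key structural point is that the history grows without bound unless the application intervenes - which is what the history-management decisions below address.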

Details

Multi-turn conversation is distinct from the chatbot UI pattern: a chatbot is an interface, while multi-turn conversation is the underlying context accumulation behavior. Many systems combine both - a chatbot with multi-turn state - but agents and AI workflows can also maintain multi-turn conversation state without a chat UI.

Each turn adds more untrusted input to the context, creating an expanding attack surface over the course of a conversation. This enables attack patterns that single-turn calls do not: a user can build up adversarial context incrementally across turns, making prompt injection and guardrail bypass attempts harder for per-turn defenses to detect. The accumulated history also creates compounding risks from misaligned model behaviors such as consistency bias, where the model reinforces its own prior responses across turns rather than correcting course.

History management decisions - such as prompt compaction, truncation, or summarization - directly affect both the user experience and the security surface: aggressive truncation reduces the accumulated attack surface but may lose important conversational context.

Examples

  • A chatbot that maintains a full transcript of prior turns and includes it in each request to the model.
  • An agent that accumulates tool results and user messages across an extended task session.
  • A customer support system that summarizes older turns and keeps recent turns verbatim to balance context limits with conversational continuity.
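The third example - summarizing older turns while keeping recent ones verbatim - might look like this sketch, where `summarizer` is a hypothetical stand-in for an LLM summarization call:

```python
def compact(messages, keep_recent=4, summarizer=None):
    # Replace older turns with a single summary message while keeping
    # the most recent turns verbatim, balancing context limits with
    # conversational continuity.
    if summarizer is None:
        summarizer = lambda msgs: f"[summary of {len(msgs)} earlier messages]"
    system, rest = messages[0], messages[1:]
    if len(rest) <= keep_recent:
        return messages  # nothing old enough to compact
    older, recent = rest[:-keep_recent], rest[-keep_recent:]
    summary = {"role": "system", "content": summarizer(older)}
    return [system, summary] + recent
```

Note that summarization carries its own risk: adversarial content in older turns can survive into the summary, so compaction reduces but does not eliminate the accumulated attack surface.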