Context Engineering
Context engineering is the practice of selecting, structuring, and maintaining the context given to an LLM so it reliably performs the desired task within a fixed context size. It is one component of harness engineering, which additionally encompasses deterministic enforcement, architectural constraints, and agent-driven maintenance.
Details
The core challenge is fitting the right information into a limited context window while keeping it useful to the model. Selecting what to include involves retrieval and ranking (see RAG), choosing in-context learning examples, and using prompt compaction and memory to manage what persists across turns. Structuring the selected content, from prompt engineering for instructions and examples to the formatting of tool outputs, determines how effectively the model can use what it receives. Protecting context integrity also matters: PII redaction reduces data exfiltration risk, and provenance cues that help the model distinguish instructions from untrusted content mitigate prompt injection and context poisoning.
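The selection step above can be sketched as a greedy packing problem: rank candidate snippets and include them until the token budget is spent. This is a minimal illustration, not a production retriever; the `Snippet` type and the character-based `rough_token_count` heuristic are assumptions (real systems use the model's tokenizer).

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    relevance: float  # higher = more relevant, e.g. a retriever score

def rough_token_count(text: str) -> int:
    # Crude proxy: roughly 4 characters per token. A real system would
    # count tokens with the target model's tokenizer.
    return max(1, len(text) // 4)

def select_context(snippets: list[Snippet], budget_tokens: int) -> list[Snippet]:
    """Greedily pack the highest-relevance snippets into a fixed budget."""
    chosen, used = [], 0
    for s in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        cost = rough_token_count(s.text)
        if used + cost <= budget_tokens:
            chosen.append(s)
            used += cost
    return chosen
```

Greedy packing is deliberately simple; variants re-rank for diversity or reserve budget for instructions and recent turns before filling the remainder with retrieved content.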
In multi-agent systems, context engineering extends to designing what crosses context isolation boundaries: choosing which information a parent agent passes to a subagent and how the subagent's result is formatted before re-entering the parent's context.
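One way to handle the boundary crossing described above is to wrap the subagent's raw output in a labeled, size-capped block before it re-enters the parent's context. The tag format, the `provenance` attribute, and the truncation limit here are illustrative assumptions, not a standard.

```python
def format_subagent_result(task: str, raw_result: str, max_chars: int = 2000) -> str:
    """Delimit and label a subagent result so the parent agent can treat
    it as data rather than instructions, and cap its size to protect the
    parent's context budget."""
    body = raw_result[:max_chars]
    if len(raw_result) > max_chars:
        body += "\n[truncated]"
    return (
        f"<subagent_result task={task!r} provenance=\"subagent\">\n"
        f"{body}\n"
        f"</subagent_result>"
    )
```

The explicit delimiters double as a provenance cue: the parent's prompt can instruct the model to treat anything inside `<subagent_result>` as untrusted content.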
Examples
- RAG pipelines that retrieve and compress relevant passages before generating.
- Summarizing a long chat history into a short "state" block.
- Putting tool outputs in a strict schema and separating them from instructions.
- Skills that inject task-specific instructions into an agent's context only when relevant, keeping the base prompt lean.
- Curating the task description and inputs passed across a context isolation boundary to a subagent, balancing focus against information loss.
- Structuring repository-level agent instructions as a short entry point pointing to deeper knowledge rather than a monolithic file (progressive context disclosure).
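The strict-schema example above can be sketched as serializing tool output into a dedicated message, kept apart from instruction text. The message shape and field names are illustrative assumptions, not any particular API's format.

```python
import json

def render_tool_message(tool_name: str, output: dict) -> dict:
    """Return a chat message carrying only structured tool output,
    separate from system or user instructions."""
    return {
        "role": "tool",
        "name": tool_name,
        "content": json.dumps(output, sort_keys=True),
    }

# Instructions and tool output occupy distinct messages, so the model
# can distinguish what to obey from what to read as data.
messages = [
    {"role": "system", "content": "Answer using the tool results only."},
    render_tool_message("weather_lookup", {"city": "Oslo", "temp_c": 4}),
]
```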
Synonyms
context management