# History Processor
`chimera.context.history` provides composable processors for transforming conversation history before sending it to the LLM.
## Key Classes

| Class | Description |
|---|---|
| `HistoryProcessor` | Abstract base class; implement `process(messages)` |
| `TruncateProcessor` | Keep only the last N messages |
| `PruneProcessor` | Replace old tool result content with `[pruned]` |
| `CompressProcessor` | Compress old messages into a summary, keep recent ones intact |
| `CompositeProcessor` | Chain multiple processors in sequence |
## Quick Start

```python
from chimera.context.history import TruncateProcessor, PruneProcessor, CompositeProcessor

# Keep the last 20 messages, pruning tool results beyond the 3 most recent
processor = CompositeProcessor([
    PruneProcessor(keep_last_n_results=3),
    TruncateProcessor(max_messages=20),
])

cleaned = processor.process(messages)
```

## Processors
### TruncateProcessor

Keeps only the last N messages, discarding everything older.

```python
proc = TruncateProcessor(max_messages=15)
```
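Truncation amounts to list slicing. The sketch below illustrates the behavior with plain dict-shaped messages; the function and message shape are illustrative stand-ins, not chimera's actual internals.

```python
# Minimal sketch of the truncation behavior (assumed message shape).
def truncate(messages, max_messages):
    """Keep only the last max_messages entries."""
    return messages[-max_messages:] if max_messages > 0 else []

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
recent = truncate(history, 3)  # keeps msg 7, msg 8, msg 9
```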
### PruneProcessor

Replaces old tool result content with `[pruned]`, preserving message structure.

```python
proc = PruneProcessor(keep_last_n_results=5)
```
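The pruning idea can be sketched standalone. Assumed here: dict-shaped messages with a `"tool"` role, and copy-on-prune rather than in-place mutation (the real `PruneProcessor` may differ on both counts).

```python
# Hedged sketch of tool-result pruning: blank out all but the last N
# tool results while keeping every message in place.
def prune(messages, keep_last_n_results):
    tool_idxs = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    keep = set(tool_idxs[-keep_last_n_results:]) if keep_last_n_results > 0 else set()
    out = []
    for i, m in enumerate(messages):
        if m["role"] == "tool" and i not in keep:
            m = {**m, "content": "[pruned]"}  # shallow copy; original untouched
        out.append(m)
    return out

msgs = [
    {"role": "user", "content": "run the tests"},
    {"role": "tool", "content": "437 passed"},
    {"role": "tool", "content": "lint clean"},
    {"role": "tool", "content": "build ok"},
]
pruned = prune(msgs, keep_last_n_results=2)  # only the first tool result is pruned
```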
### CompressProcessor

Compresses old messages into a summary, keeping recent ones intact. The summary is built by simple concatenation and truncation; no LLM call is made.

```python
proc = CompressProcessor(keep_recent=5, max_summary_tokens=500)
```
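Because compression is stated to be plain concatenation plus truncation, the behavior can be sketched without chimera. The 4-characters-per-token ratio and the `system`-role summary message are assumptions for illustration only.

```python
def compress(messages, keep_recent, max_summary_tokens):
    """Fold everything but the last keep_recent messages into one summary message."""
    if len(messages) <= keep_recent:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = " ".join(m["content"] for m in old)
    max_chars = max_summary_tokens * 4  # rough chars-per-token heuristic (assumption)
    if len(summary) > max_chars:
        summary = summary[:max_chars] + "..."
    return [{"role": "system", "content": "Summary of earlier conversation: " + summary}] + recent

history = [{"role": "user", "content": f"note {i}"} for i in range(8)]
out = compress(history, keep_recent=3, max_summary_tokens=500)
# out: one summary message followed by the 3 most recent messages
```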
### CompositeProcessor

Chains multiple processors: each processor's output becomes the next one's input.

```python
proc = CompositeProcessor([PruneProcessor(), TruncateProcessor()])
```
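The chaining contract is easy to show standalone; here processors are plain callables rather than chimera classes, which is an illustrative simplification.

```python
def composite(processors):
    """Run processors left to right; each one's output feeds the next one's input."""
    def run(messages):
        for proc in processors:
            messages = proc(messages)
        return messages
    return run

# Order matters: filter noise first so the length limit applies to real content.
drop_empty = lambda ms: [m for m in ms if m]
keep_last_two = lambda ms: ms[-2:]

pipeline = composite([drop_empty, keep_last_two])
result = pipeline(["a", "", "b", "c"])  # -> ["b", "c"]
```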
## Custom Processors

Subclass `HistoryProcessor` and implement `process()`:

```python
from chimera.context.history import HistoryProcessor

class DropSystemProcessor(HistoryProcessor):
    def process(self, messages):
        return [m for m in messages if m.role != "system"]
```
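To see the custom processor in action without chimera installed, the sketch below substitutes a minimal abstract base and a stand-in `Message` dataclass; both are assumptions, and the real types live in `chimera.context.history`.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Message:
    """Stand-in for chimera's message type (assumed shape)."""
    role: str
    content: str

class HistoryProcessor(ABC):
    """Stand-in for chimera.context.history.HistoryProcessor."""
    @abstractmethod
    def process(self, messages): ...

class DropSystemProcessor(HistoryProcessor):
    def process(self, messages):
        return [m for m in messages if m.role != "system"]

msgs = [Message("system", "be terse"), Message("user", "hello")]
cleaned = DropSystemProcessor().process(msgs)  # system message removed
```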
## Related

- Compaction — threshold-based context compaction
- Focus Chain — token-budget context selection