Token-safe prompt assembly with memory orchestration. Use it in any agent that constructs LLM prompts with memory retrieval. Prevents API failures caused by token overflow by implementing two-phase context construction, a memory safety valve, and hard limits on memory injection.
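A minimal sketch of how the three mechanisms named above could fit together. All names (`assemble_prompt`, `count_tokens`, the budget parameters) are hypothetical, and the whitespace-based token count is a stand-in for a real tokenizer; this is an illustration of the pattern, not the skill's actual implementation.

```python
# Hypothetical sketch: two-phase context construction with a hard cap
# on memory injection and a safety valve that drops overflowing memories.
# Token counting is a crude whitespace estimate here; a real agent would
# use the target model's tokenizer.

def count_tokens(text: str) -> int:
    return len(text.split())

def assemble_prompt(system: str, user: str, memories: list[str],
                    max_tokens: int = 4096,
                    memory_cap: int = 1024) -> str:
    # Phase 1: reserve budget for the fixed parts of the prompt
    # (system instructions and the user's message always fit first).
    fixed = count_tokens(system) + count_tokens(user)
    # Hard limit: memories may never use more than memory_cap tokens,
    # even if more room is available.
    budget = min(max_tokens - fixed, memory_cap)

    # Phase 2: inject retrieved memories until the budget is exhausted.
    kept = []
    for memory in memories:
        cost = count_tokens(memory)
        if cost > budget:
            break  # safety valve: skip memories that would overflow
        kept.append(memory)
        budget -= cost

    return "\n\n".join([system, *kept, user])
```

Because the fixed parts are budgeted before any memory is considered, the assembled prompt can never exceed `max_tokens` no matter how much the retriever returns.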
Latest Release
v1.0.4
Renamed to 'Prompt Safe'. Added a description emphasizing token overflow prevention and API stability.
Published by @prompt on ClawHub