Token-based context compaction for local models (MLX, llama.cpp, Ollama) that don't report context limits.
Security Analysis
High confidence: The skill's code and instructions match its stated purpose (local-model context compaction); nothing in the files indicates unexplained credential access, hidden exfiltration, or behavior outside that purpose. You should still verify the npm/GitHub package source before running an npx installer.
The name and description (context compaction for local LLMs) align with the included code and runtime instructions. The plugin inspects the OpenClaw config, reads session transcripts, estimates tokens, summarizes old messages, and prepends a summary, all consistent with the stated functionality.
SKILL.md and the code limit actions to reading openclaw.json (with an explicit prompt in the CLI), reading session transcripts (when provided by the runtime), writing plugin files into ~/.openclaw/extensions/, and calling the local OpenClaw LLM runtime for summaries. There are no instructions to read unrelated system files, access unrelated env vars, or transmit data to external endpoints.
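The compaction flow described above (estimate tokens, summarize the oldest messages, prepend the summary) can be sketched as follows. This is an illustrative reconstruction, not the plugin's actual code: the names (`Message`, `estimateTokens`, `compact`), the chars/4 token heuristic, and the truncating `summarize` stand-in are all assumptions.

```typescript
interface Message {
  role: string;
  content: string;
}

// Rough heuristic for models that don't expose a tokenizer:
// roughly 4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Stand-in for the plugin's LLM-backed summarizer; truncation keeps
// this sketch self-contained and deterministic.
function summarize(messages: Message[]): string {
  const joined = messages.map((m) => `${m.role}: ${m.content}`).join("\n");
  return joined.slice(0, 200);
}

// Keep the newest messages that fit within `budget` tokens, summarize
// everything older, and prepend the summary as a system message.
function compact(messages: Message[], budget: number): Message[] {
  let used = 0;
  const kept: Message[] = [];
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i].content);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(messages[i]);
  }
  const dropped = messages.slice(0, messages.length - kept.length);
  if (dropped.length === 0) return kept;
  return [
    { role: "system", content: `Summary of earlier conversation: ${summarize(dropped)}` },
    ...kept,
  ];
}
```

A real implementation would call the local OpenClaw runtime to produce the summary instead of truncating, but the control flow (newest-first budget fill, summarize the remainder) matches what the review describes.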
There is no formal install spec in the registry, but SKILL.md instructs using `npx jasper-context-compactor setup`. Running via npx will fetch the package from npm, which is normal but carries the usual supply-chain risk (downloading and executing remote code). The included CLI copies files into ~/.openclaw/extensions — expected for a plugin installer. Recommend verifying the npm package and GitHub repository before running npx.
The skill requests no environment variables, no credentials, and no config paths beyond OpenClaw's own paths under the user's home directory. The code only touches ~/.openclaw/, consistent with its purpose.
The skill does not request always:true or system-wide elevated privileges. The installer writes plugin files under the user's home (~/.openclaw/extensions/context-compactor) and updates openclaw.json — reasonable for a user-installed plugin. It does not modify other skills' configs or system settings beyond the user's OpenClaw config.
Guidance
This plugin is internally coherent and appears to do what it claims: estimate tokens, summarize older messages, and inject a compacted summary for local models. Before installing:
- Verify the package source: check the npm page and GitHub repo referenced in the README and make sure they match a publisher you trust. npx fetches and executes remote code, so confirm the package contents and maintainer.
- Back up your openclaw.json (the CLI already does this, but you can make a manual backup before running commands).
- Review the included files (index.ts, cli.js) locally if possible rather than running npx directly, or install from a tarball you have inspected.
- If you use sensitive local providers, confirm the plugin's modelFilter setting so it only runs where you want it.

If you want higher assurance, run the CLI from a checked-out copy of this repository instead of via npx, so you control the exact code being executed.
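The manual-backup step above can be done with a few lines of Node. This is a hypothetical helper, not part of the plugin's CLI; the `backupConfig` name and timestamped `.bak` naming are illustrative.

```typescript
import * as fs from "node:fs";

// Copy a config file to a timestamped .bak sibling before any edit,
// so the original can be restored if an installer misbehaves.
function backupConfig(configPath: string): string {
  const bakPath = `${configPath}.bak-${Date.now()}`;
  fs.copyFileSync(configPath, bakPath);
  return bakPath;
}
```

For example, `backupConfig` could be pointed at `~/.openclaw/openclaw.json` before running the installer, and the returned path copied back over the original to undo its changes.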
Latest Release
v0.3.8: Enhanced Ollama detection
Published by @emberDesire on ClawHub