Track OpenClaw token usage and API costs by parsing session JSONL files. Use when user asks about token spend, API costs, model usage breakdown, daily cost t...
Security Analysis
Medium confidence. The skill's code and instructions align with its stated purpose (parsing local OpenClaw session JSONL files to compute token/cost metrics). It reads only local files and shows no obvious external network behavior in the provided code, but the script was truncated in the listing, so a full-file review is advised.
The name and description say the skill parses local OpenClaw session files and reports token/cost totals; the SKILL.md and the included script implement exactly that (auto-discovering the agents directory, reading JSONL session files, extracting message.usage and message.model, and aggregating costs). No extra credentials or unrelated binaries are requested.
Runtime instructions explicitly limit activity to reading local session JSONL files under ~/.openclaw/agents (or OPENCLAW_HOME) and producing text/JSON reports. The script scans files, filters by mtime/timestamp, and aggregates usage; this stays within the claimed scope, and the visible code neither touches other system files nor makes network calls.
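The parsing and aggregation flow described above can be sketched as follows. This is a minimal illustration, not the skill's actual code: the field names (message.usage, message.model, input_tokens, output_tokens) are assumptions based on the review's description and may differ in real session files.

```python
import json
from collections import defaultdict
from pathlib import Path

def aggregate_usage(agents_dir):
    """Sum token usage per model across session JSONL files.

    Field layout (message.usage / message.model) is assumed from the
    review's description; verify against actual OpenClaw session files.
    """
    totals = defaultdict(lambda: {"input_tokens": 0, "output_tokens": 0})
    for path in Path(agents_dir).rglob("*.jsonl"):
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip malformed lines rather than abort
                msg = record.get("message") or {}
                usage = msg.get("usage") or {}
                model = msg.get("model", "unknown")
                totals[model]["input_tokens"] += usage.get("input_tokens", 0)
                totals[model]["output_tokens"] += usage.get("output_tokens", 0)
    return dict(totals)
```

Because each JSONL line is parsed independently, a single corrupt line skips cleanly instead of failing the whole report.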
No install spec is provided (instruction-only); the included Python script uses only stdlib imports. This is low-risk compared with arbitrary downloads or external package installs.
No secrets or elevated credentials are required. The only environment interaction is an optional OPENCLAW_HOME override and reading the user's agents directory, which is appropriate for the task.
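The directory-resolution behavior described here amounts to a small fallback chain. A sketch, assuming the paths named in this review (OPENCLAW_HOME, ~/.openclaw/agents); the actual script may resolve them differently:

```python
import os
from pathlib import Path

def resolve_agents_dir():
    """Resolve the agents directory: honor an optional OPENCLAW_HOME
    override, otherwise fall back to ~/.openclaw.

    Paths are assumptions taken from the review text, not verified
    against the real cost_tracker.py.
    """
    home = os.environ.get("OPENCLAW_HOME")
    base = Path(home) if home else Path.home() / ".openclaw"
    return base / "agents"
```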
The skill is not marked always:true and does not request system-wide persistence. It suggests cron usage for logging outputs (a user action), which is appropriate for a reporting tool.
Guidance
This skill appears coherent and performs local parsing of OpenClaw session files to compute costs. Before installing or running it:
1. Review the full, untruncated scripts/cost_tracker.py to confirm there are no hidden network calls or obfuscated behavior.
2. Run the script under a non-root account and inspect its output on a small agents directory first.
3. If you plan to run the tool from cron, store the destination log securely and do not pipe it to an external endpoint.
4. If your security policies require it, run the tool in a sandbox/container or watch for network activity (e.g., with monitoring tools) on first run.
If you want higher confidence, provide the complete, untruncated script for review.
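As a quick first pass on step (1), one can statically scan the script for imports of network-capable modules before reading it line by line. A crude regex sketch, not a substitute for reviewing the full file (obfuscated code can evade it):

```python
import re
from pathlib import Path

# Standard-library and common third-party modules whose presence
# would suggest network activity in a script that claims none.
NETWORK_MODULES = {"socket", "http", "urllib", "requests", "ssl", "ftplib"}

def find_network_imports(script_path):
    """Return network-related module names imported by a Python script.

    A shallow static check: it only matches plain import statements,
    so dynamic imports (importlib, __import__) will not be caught.
    """
    text = Path(script_path).read_text(encoding="utf-8")
    found = set()
    for match in re.finditer(r"^\s*(?:import|from)\s+([\w.]+)", text, re.MULTILINE):
        root = match.group(1).split(".")[0]
        if root in NETWORK_MODULES:
            found.add(root)
    return sorted(found)
```

An empty result is necessary but not sufficient; it only helps prioritize which files deserve a closer manual read.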
Latest Release
v1.0.0
Initial release: per-model cost breakdown, daily spend trends, JSON + text output
Published by @pfrederiksen on ClawHub