Search local memory index (local-first). Use for /mem queries in Telegram.
Security Analysis
Medium confidence. The skill's purpose (local memory search) matches its instructions, but the runtime instructions direct the agent to execute external scripts that are neither included nor described. This gives the agent the ability to run arbitrary local code and access local files, which is potentially risky.
Name and description (local-first memory search for /mem) align with the actions described (update index, search index). The skill does not request unrelated credentials, binaries, or config paths.
The SKILL.md tells the agent to run scripts/index-memory.py and scripts/search-memory.py but those scripts are not included or described. Because the skill is instruction-only, the agent will execute whatever code exists at those paths in the host environment; that code could read arbitrary local files, modify data, or transmit data externally. The instructions are also vague ('if needed'), giving runtime discretion.
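Because the scripts are absent from the package, a reviewer can only reason about what they *should* contain. As a purely hypothetical sketch (the real scripts/search-memory.py is not included and may differ entirely), a safe, local-only search script would touch nothing beyond the index file it is given:

```python
# Hypothetical sketch of a local-only search-memory.py; the actual
# script is NOT included in the skill, and its real behavior is unknown.
import json
from pathlib import Path


def search_index(index_path: str, query: str, limit: int = 5) -> list[dict]:
    """Return up to `limit` index entries whose text contains the query.

    Reads only the single local index file; no network access,
    no subprocesses, no other filesystem paths.
    """
    entries = json.loads(Path(index_path).read_text(encoding="utf-8"))
    query = query.lower()
    hits = [e for e in entries if query in e.get("text", "").lower()]
    return hits[:limit]
```

Anything substantially broader than this in the real script (network imports, subprocess calls, reads outside the index path) would be a signal to stop and review further.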
No install spec (instruction-only), so nothing is fetched or written by the skill itself. This lowers remote install risk but increases reliance on external files whose contents are unknown.
The skill declares no environment variables, credentials, or config paths. There is nothing requested that appears disproportionate to local memory search.
The skill does not request permanent presence (always:false) and does not modify other skills or system-wide settings. Note: model invocation is enabled (the default), so the agent could call this skill autonomously. That is normal on its own, but combined with the instruction-scope concern above it increases the blast radius.
Guidance
Before installing or enabling this skill:

1. Verify that the referenced scripts (scripts/index-memory.py and scripts/search-memory.py) exist in the environment and inspect their source. Do not run them if you cannot review them.
2. Ensure those scripts only access the local memory index and do not read or transmit unrelated files or credentials.
3. If possible, run the scripts in a restricted or sandboxed environment first.
4. Consider limiting the agent's autonomous invocation of this skill, or requiring explicit user confirmation, until you trust the scripts.
5. Ask the skill author to include the implementation, or a detailed spec, in the package so its behavior is auditable.

These steps reduce the risk that the skill executes unexpected or exfiltrating code.
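Step (1) above can be partially mechanized. As a minimal sketch, one might grep the scripts for obviously risky constructs before a full manual read; the pattern list here is illustrative and deliberately incomplete, not a substitute for reviewing the source:

```python
# Quick pre-review triage: flag lines in a script that mention
# network, subprocess, or dynamic-execution constructs.
# The pattern list is illustrative only; absence of matches does
# NOT mean the script is safe.
import re
from pathlib import Path

RISKY = re.compile(
    r"\b(socket|requests|urllib|http\.client|subprocess|os\.system|eval|exec)\b"
)


def flag_risky_lines(script_path: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs matching a risky pattern."""
    text = Path(script_path).read_text(encoding="utf-8")
    return [
        (i, line)
        for i, line in enumerate(text.splitlines(), start=1)
        if RISKY.search(line)
    ]
```

A clean triage result still calls for the sandboxed run in step (3); static string matching is easy to evade.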
Latest Release
v0.1.0
Initial publish
Published by @Trumppo on ClawHub