Use when creating new skills, editing existing skills, or verifying skills work before deployment
Security Analysis
High confidence: The skill is internally consistent with its stated purpose (helping authors create, test, and harden other skills); its files and instructions align with that goal and request no unrelated credentials or installs.
Name and description ('Writing Skills' for creating/editing/verifying skills) match the included files and behavior. The SKILL.md and companion docs are authoring/testing guidance; the single code file (render-graphs.js) is a utility to render Graphviz diagrams from SKILL.md — directly relevant to visualizing skill flows.
SKILL.md instructs authors and test harnesses to inspect skill directories (e.g., ~/.claude/skills, ~/.agents/skills) and to run pressure scenarios that push agents to act. That is expected for a skill-testing toolkit, but the docs also recommend persuading agents with strong imperative language and to 'make the agent believe it's real work' during testing; this improves test fidelity but could be risky if misapplied. No instructions ask for unrelated secrets or external endpoints.
This is an instruction-only skill with no install spec. The included render-graphs.js is a small local utility; it doesn't download arbitrary code. There is no network-based install or archive extraction in the package metadata.
The skill requests no environment variables, no credentials, and no special config paths beyond referencing the user's skill directories (home paths) which is coherent for an authoring/testing tool.
Flags show normal defaults (always: false, disable-model-invocation: false). The skill does not request permanent inclusion or modify other skills. Autonomous invocation is allowed by default, which is platform behavior and appropriate for a reusable authoring skill.
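For orientation, flag defaults like those described above would sit in a skill's frontmatter roughly as follows. This is a hypothetical sketch; the key names and placement are assumptions, not the package's actual metadata.

```yaml
# Illustrative only: assumed frontmatter keys, not the package's real file.
always: false                    # not forced into every context
disable-model-invocation: false  # the agent may invoke it autonomously
```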
Guidance
This package appears to be a legitimate toolkit for writing and testing other skills. Before installing or enabling it:

1) Review render-graphs.js. It runs shell commands (execSync) and requires the Graphviz 'dot' binary to be present; only run it in environments where you trust executing local scripts.

2) Be aware the docs encourage strong, directive language and realistic scenarios to force agent compliance. This is effective for testing, but the same persuasion techniques can be misused; run test scenarios in isolated or sandboxed environments, never against production systems or real user data.

3) The skill reads user skill directories (~/.claude/skills etc.); confirm you are comfortable with any automated agent behavior that will read or list those paths.

4) If you plan to allow autonomous invocation, test the skill manually first and keep it disabled from always-on inclusion.

If you want a deeper assessment, provide runtime logs or examples of how your agents will call these tests so I can check for unexpected file, system, or network operations.
Latest Release
v0.1.0
Initial release: Writing Skills
Published by @zlc000190 on ClawHub