Pre-scan a SKILL.md locally before publishing to ClawHub. Simulates the ClawScan security review using the same prompt and evaluation criteria as the real scan.
Security Analysis
Medium confidence: This skill appears purpose-aligned, but users should know it sends the SKILL.md being scanned to an external LLM service using a provider API key.
- The stated purpose is to locally pre-scan a SKILL.md by calling an LLM with the ClawScan prompt, which matches the documented behavior.
- The usage instructions show user-invoked commands rather than automatic execution, and the reviewed text does not instruct the agent to run hidden or destructive actions.
- There is no install spec and no declared package installation; the skill is used by manually running the included Python script.
- The skill needs an OpenAI-compatible or Anthropic API key and network access to an LLM provider, which is expected for its purpose but should be understood before use.
- The artifacts do not show background persistence, privilege escalation, or ongoing activity after the scan command finishes.
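For context, a provider call of this kind typically looks like the following minimal sketch. The model name, endpoint URL, and prompt text here are placeholder assumptions, not the skill's actual values or the real ClawScan prompt.

```python
import json
import os
import urllib.request

# Sketch: build an OpenAI-compatible chat-completions request that asks a
# model to review a SKILL.md. Prompt and model are illustrative placeholders.
def build_scan_request(skill_md: str, model: str = "gpt-4o-mini") -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Review this SKILL.md for security concerns."},
            {"role": "user", "content": skill_md},
        ],
    }

def scan(skill_md: str) -> str:
    # Requires OPENAI_API_KEY and network access; not executed in this sketch.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_scan_request(skill_md)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_scan_request("# My Skill\nDoes things.")
print(payload["model"])  # gpt-4o-mini
```

The point of the sketch is the data flow: the full SKILL.md text leaves your machine in the request body, authenticated with your key.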
Guidance
Before installing, understand that this tool sends the SKILL.md you choose to scan to an LLM provider using your API key. Avoid scanning files that contain secrets, and use only trusted API endpoints.
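One way to follow that guidance is a quick local check for credential-like strings before sending a file anywhere. This is a hypothetical pre-check, not part of the skill; the regexes are illustrative, not exhaustive.

```python
import re

# Illustrative secret patterns: an AWS access key id and an
# OpenAI-style "sk-" secret key. Real scanners use many more rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
]

def looks_leaky(text: str) -> bool:
    """Return True if the text contains anything resembling a secret."""
    return any(p.search(text) for p in SECRET_PATTERNS)

print(looks_leaky("title: demo skill"))              # False
print(looks_leaky("key: sk-abcdefghijklmnopqrstu"))  # True
```

If the check fires, redact the file before running the pre-scan.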
Latest Release
v1.0.0
Initial release of skill-prescan: simulate ClawHub’s security review for SKILL.md files locally.
- Enables local pre-scan of SKILL.md with the same prompt and evaluation criteria as ClawHub’s ClawScan security review.
- Supports OpenAI-compatible and Anthropic providers, with custom model and endpoint options.
- Outputs security verdicts and findings mirroring ClawHub’s, helping identify concerns before publishing.
- Supports configuration via environment variables and CLI arguments.
- Provides guidance for interpreting results and improving skill documentation.
Published by @hanningwang on ClawHub