Safety Guard
Guard URLs or files with the safety-guard CLI (web, PDFs, images, audio, YouTube).
Security Analysis
Medium confidence. The skill is a thin wrapper around a local safety-guard CLI and LLM/fallback services (which explains the many environment variables and the Homebrew install), but there are metadata inconsistencies and undeclared environment/config items that do not line up with the registry information.
The skill's declared purpose (running the safety-guard CLI on URLs/files/YouTube) matches the requirement for a safety-guard binary. However, the included _meta.json file appears to describe a different package (different ownerId and slug 'summarize'), which is inconsistent with the skill metadata and suggests a packaging or copy-paste error.
SKILL.md instructs the agent to run the safety-guard CLI and to use various provider API keys and optional fallback services. Those instructions will send content to external LLM providers and services (Firecrawl, Apify), which is expected. However, SKILL.md references environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, XAI_API_KEY, GEMINI_API_KEY, FIRECRAWL_API_KEY, APIFY_API_TOKEN) and a user config file (~/.safety-guard/config.json) that are not declared in the registry metadata. The agent will read these env vars and the config file if present; the registry should declare them to make the access surface explicit.
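Before providing any credentials, you can check which of the referenced variables are already set in your environment. The helper below is a sketch of ours (`check_keys` is not part of safety-guard); the variable names are the ones SKILL.md references:

```shell
#!/bin/sh
# Hypothetical pre-flight helper (not part of safety-guard): report which
# of the env vars SKILL.md references are set, so you know exactly which
# credentials the CLI could pick up.
check_keys() {
  for var in "$@"; do
    if [ -n "$(printenv "$var")" ]; then
      echo "$var: set"
    else
      echo "$var: not set"
    fi
  done
}

check_keys OPENAI_API_KEY ANTHROPIC_API_KEY XAI_API_KEY GEMINI_API_KEY \
           FIRECRAWL_API_KEY APIFY_API_TOKEN
```

Any variable reported as "set" is one the skill could silently use once installed.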
The install spec is a Homebrew formula, steipete/tap/safety-guard, which installs a safety-guard binary. Homebrew is an expected install mechanism, but this formula comes from a third-party tap (steipete/tap) rather than a first-party or widely known one; that raises moderate risk, and the formula content should be inspected before trusting the binary it installs.
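One way to inspect the formula before installing is to fetch the tap and read the formula source with standard Homebrew subcommands (the formula name below is taken from the install spec; the commands are guarded so they only run where brew is available):

```shell
#!/bin/sh
# Sketch of a manual audit of the third-party tap before installing.
if command -v brew >/dev/null 2>&1; then
  brew tap steipete/tap                  # clone the tap's git repo locally
  brew cat steipete/tap/safety-guard     # print the formula Ruby source for review
  brew info steipete/tap/safety-guard    # show version, source URL, and caveats
fi
```

Reading the formula shows exactly what is downloaded and from where, which is the main question with an unfamiliar tap.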
Multiple provider API keys and optional fallback tokens are referenced in the runtime instructions. Those env vars are reasonable for a tool that calls LLMs and external crawlers, but the registry declares no required env vars — the SKILL.md references several secrets without them being surfaced in requires.env or primaryEnv. This mismatch reduces transparency and could lead to unexpected credential exposure if a user provides tokens without realizing which skill will use them.
The skill does not request always:true, and it does not modify other skills. It mentions an optional per-user config file (~/.safety-guard/config.json) which is a normal, limited form of persistence; users should be aware that API keys or model settings stored there will be read by the CLI.
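As a purely illustrative sketch, a per-user config such as ~/.safety-guard/config.json might hold API keys and model settings along these lines. The field names here are guesses, not the project's documented schema (only the default model string comes from the release notes); consult the safety-guard documentation for the real format:

```json
{
  "model": "google/gemini-3-flash-preview",
  "openai_api_key": "sk-...",
  "anthropic_api_key": "..."
}
```

The point for users is that anything stored in this file is readable by the CLI on every run, so treat it like any other credential store.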
Guidance
This skill delegates work to a locally installed safety-guard binary and to external LLMs/fallback services. Before installing:
1. Verify the Homebrew formula source (steipete/tap) and inspect the formula or upstream project to ensure the binary is trustworthy.
2. Be aware the tool will send content to LLM providers and optional services (OpenAI/Anthropic/xAI/Google, Firecrawl, Apify); only provide API keys if you trust those endpoints and the safety-guard project.
3. Note the package metadata mismatch (_meta.json); ask the publisher to correct it or provide provenance.
4. If you need to install, consider auditing the brew formula or obtaining the binary from the official project homepage (https://safety-guard.sh) first.
5. For a safer baseline, request that the publisher add the env vars to requires.env and correct the metadata so the skill's registry information matches its runtime behavior.
Latest Release
v1.0.0
Initial release of Safety Guard.
- Enables scanning of URLs, local files (PDFs, images, audio), and YouTube links via the command line.
- Supports Google, OpenAI, Anthropic, and xAI models; the default is google/gemini-3-flash-preview.
- Allows configuration through environment variables and an optional config file.
- Provides flags for output length, token limits, data extraction, and machine-readable output.
- Integrates with Firecrawl and Apify for enhanced fallback extraction and YouTube support.
Published by @john-niu-07 on ClawHub