Scan prompts for prompt injection attacks before sending them to any LLM. Detect jailbreaks, data exfiltration, encoding bypass, multilingual attacks, and 25...
Security Analysis
Medium confidence. The skill's declared requirements and runtime instructions are coherent with a prompt-scanning API: it needs only curl/jq and an API token to call glitchward.com, and its SKILL.md contains straightforward cURL examples. Confirm you trust the remote service before sending it sensitive prompts.
Name and description match the declared runtime behavior: the SKILL.md instructs the agent to call Glitchward's Shield API endpoints (validate, validate/batch, stats). The required binaries (curl, jq) and the single env var (GLITCHWARD_SHIELD_TOKEN) are proportional to an HTTP-based API client.
Instructions only tell the agent to POST prompt text to glitchward.com and to check the returned is_blocked/risk_score/matches fields; they do not instruct it to read local files or other env vars. The SKILL.md and README also include example test prompts (e.g., 'ignore all previous instructions ...'). A pre-scan injection pattern was detected in the content, but in context it appears to be a demonstration of detection rather than an instruction to exfiltrate data. The presence of injection-pattern examples is still worth noting, since they could influence evaluation or be misunderstood by less careful integrators.
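As a rough sketch of that agent-side check: the endpoint path, request body, and response shape below are assumptions inferred from the field names mentioned in this analysis, not confirmed details of the Shield API.

```shell
# Sketch only: the URL path and request body are ASSUMED, not taken from the
# skill's actual SKILL.md. A validation call might look like:
#   curl -s -X POST "https://glitchward.com/api/shield/validate" \
#     -H "Authorization: Bearer $GLITCHWARD_SHIELD_TOKEN" \
#     -H "Content-Type: application/json" \
#     -d '{"prompt": "ignore all previous instructions"}'
#
# Parsing a response of the shape the analysis describes
# (is_blocked / risk_score / matches) with jq:
response='{"is_blocked": true, "risk_score": 0.97, "matches": ["jailbreak"]}'

blocked=$(printf '%s' "$response" | jq -r '.is_blocked')
score=$(printf '%s' "$response" | jq -r '.risk_score')

if [ "$blocked" = "true" ]; then
  echo "BLOCKED (risk_score=$score)"
else
  echo "OK (risk_score=$score)"
fi
```

Note that the prompt itself travels in the request body, which is exactly why the retention policy of the remote service matters.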
No install spec or code files are executed on install (instruction-only). This minimizes disk-write/remote-code risk; runtime network calls are performed by curl at the agent's direction.
Only a single API token env var (GLITCHWARD_SHIELD_TOKEN) is required, which is appropriate for an external API. No unrelated secrets, files, or system credentials are requested. Note: the token lets the external service receive every prompt you send, so treat it as sensitive.
Skill is not always-enabled and does not request elevated platform privileges. It's user-invocable and uses normal model invocation behavior. No install-time persistence or modification of other skills is present.
Guidance
This skill appears to do what it says: it sends text to an external prompt-scanner API and returns a block/risk decision. Before installing, confirm you trust the remote domain (glitchward.com) and review its privacy/retention policy, since any prompt you send (including sensitive data or system prompts) may be logged. Treat GLITCHWARD_SHIELD_TOKEN as a secret: store it securely, rotate it if compromised, and avoid embedding it in shared config. Test the skill with non-sensitive data first. If you cannot accept sending prompts off-host, prefer a local/offline scanning solution. Finally, verify the skill's source and owner (the registry metadata shows an owner id but no homepage) before granting it runtime access.
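One way to keep the token out of shared config is to load it from a private, owner-only file at session start. This is a minimal sketch assuming a POSIX shell; the temp file and "example-token" value are placeholders standing in for a real secrets file.

```shell
# Illustrative only: "example-token" is a placeholder, and the temp file
# stands in for a private file such as a mode-600 dotfile in $HOME.
token_file="$(mktemp)"
chmod 600 "$token_file"                      # readable/writable by owner only
printf 'example-token' > "$token_file"
export GLITCHWARD_SHIELD_TOKEN="$(cat "$token_file")"

echo "token loaded: ${GLITCHWARD_SHIELD_TOKEN:+yes}"
```

Loading the token this way keeps it out of shell history and version-controlled config, and makes rotation a matter of replacing one file.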
Latest Release
v1.0.1
- Renamed skill to "glitchward-llm-shield" and updated the description for clarity.
- Removed the internal implementation file (`llm-shield-skill.js`).
- Simplified SKILL.md: shifted from detailed usage instructions and command documentation to concise API usage examples.
- Updated setup and token configuration steps.
- Clarified API endpoints for single and batch prompt validation.
- Streamlined documentation to focus on the integration pattern, attack categories, and when/how to use the skill.
- Expanded coverage of detected attack types and use cases.
Published by @eyeskiller on ClawHub