Enables AI agents to discover, filter, and engage with content across 24 platforms, including social, academic, and decentralized networks, with auto-generated SV...
Security Analysis
Medium confidence: The skill mostly implements a multi-platform discovery/engagement tool that fits its description, but there are multiple inconsistencies and sensitive artefacts (hard-coded/example tokens, internal IPs, and mismatched declared requirements) that warrant caution before installing or running it autonomously.
The codebase contains many platform-specific modules (arXiv, YouTube, Mastodon, Nostr, Bluesky, Farcaster, podcasts, ClawHub, etc.) matching the advertised 24-platform discovery purpose. However, the registry metadata declares no required environment variables or primary credential while SKILL.md and the code clearly expect a ~/.grazer/config.json with many API keys and tokens — a mismatch between declared metadata and actual configuration requirements.
SKILL.md instructs the agent/operator to create ~/.grazer/config.json with multiple API keys, to run an autonomous agent loop that discovers and auto-responds, and to modify other agent scripts and server deployments (VPS IPs are referenced). It also documents saving training data to ~/.grazer/training.json and enabling auto_respond. These instructions give the skill the ability to read local config, exfiltrate data over the network or post to many platforms, and act autonomously on behalf of agents; the scope is broad and requires explicit operator review.
There is no install spec in the registry entry, but the repository contains packaging and publish artifacts (setup.py, package.json, homebrew formula, publish scripts). Nothing in the install artifacts is a direct red flag (no opaque external archive downloads), but the presence of multiple package manifests means installation will place code on disk and potentially register CLI entrypoints — operators should inspect packaging scripts before installing.
Although the registry lists no required env vars, the skill expects numerous API keys/tokens via ~/.grazer/config.json (bottube, moltbook, clawcities, clawsta, fourclaw, clawhub token, youtube API key, LLM URL/api key, etc.). The repo also contains example files that embed an LLM URL pointing to an internal IP (100.75.100.89) and at least one example/curl that includes a bearer token string — these are disproportionate to a minimal discovery client and introduce risk if used as-is or if these example secrets are real.
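Based on the key names listed above, a sanitized ~/.grazer/config.json might look like the sketch below. The exact key names are assumptions inferred from this analysis (the repository's config.example.json is authoritative); the point is that every credential slot stays a placeholder until the operator fills it, the LLM endpoint is one you control rather than the internal IP shipped in the example, and autonomous posting starts disabled.

```json
{
  "bottube_api_key": "REPLACE_ME",
  "moltbook_api_key": "REPLACE_ME",
  "clawcities_api_key": "REPLACE_ME",
  "clawhub_token": "REPLACE_ME",
  "youtube_api_key": "REPLACE_ME",
  "llm_url": "https://llm.example.internal/v1",
  "llm_api_key": "REPLACE_ME",
  "auto_respond": false
}
```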
The skill supports an autonomous continuous agent loop, auto-response deployment, and training-data persistence. Although 'always: false' is set (so it's not forcibly always-enabled), the default config/example enables auto_respond and persistent training storage which increases blast radius if deployed without careful controls. There is no automatic telemetry on install claimed, but network activity during runtime (discovery, posting, LLM calls) is core to the skill and must be consented to and monitored.
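Before enabling the agent loop, a thin wrapper can force read-only behavior regardless of what the shipped example config enables. This is a hypothetical sketch, not the skill's actual API: the auto_respond key comes from this analysis, while dry_run and the function names are assumed for illustration.

```python
import json
from pathlib import Path


def load_safe_config(path: str = "~/.grazer/config.json") -> dict:
    """Load the skill's config but force-disable autonomous posting."""
    cfg = json.loads(Path(path).expanduser().read_text())
    cfg["auto_respond"] = False  # never post without explicit operator review
    cfg["dry_run"] = True        # assumed flag: log intended actions only
    return cfg


def maybe_post(cfg: dict, platform: str, text: str) -> bool:
    """Return True only when a real post would actually be made."""
    if cfg.get("dry_run") or not cfg.get("auto_respond"):
        print(f"[dry-run] would post to {platform}: {text[:60]!r}")
        return False
    # Real posting path would go here, behind explicit operator opt-in.
    return True
```

Gating every write path through one guard like this makes the blast radius auditable: a single grep for maybe_post shows every place the skill could publish.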
Guidance
What to check before installing or running this skill:

- Metadata mismatch: the registry claims no required credentials, but SKILL.md and the code require many API keys in ~/.grazer/config.json. Treat the repository's config.example.json as authoritative and do not assume "no env vars required".
- Inspect and sanitize config.example.json: it contains an LLM URL pointing to an internal IP (100.75.100.89). Do not use that endpoint unless you control and trust it; point llm_url/llm_api_key at your own trusted LLM or leave LLM image generation disabled.
- Look for leaked or embedded secrets: some docs and scripts include example bearer tokens and registry publish snippets (e.g., a ClawHub Authorization header). If any token is real, rotate it immediately and do not reuse tokens found in the repo.
- Disable autonomous writes by default: before enabling auto_respond or running the agent loop, set auto_respond=false, run in dry-run mode, and test the 'discover' and dry-run flows to verify outputs.
- Run in an isolated environment first: install in a sandbox or container (not on production agents), monitor network calls, and confirm it only contacts the expected platform APIs. Check where it stores training data (~/.grazer/training.json) and idempotency markers (~/.grazer/idempotency_keys.json).
- Audit publish and build scripts: review publish.sh, setup.py, and any build scripts to ensure they don't execute unexpected commands or upload artifacts using embedded credentials.
- If you will let it call an LLM: point llm_url at a trusted, authenticated endpoint (or disable LLM-powered generation), and do not leave llm_api_key unset when using a public or remote LLM.
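The secret-and-endpoint audit described above can be approximated with a short script. The patterns below are illustrative heuristics only (bearer-style tokens, private and CGNAT IP ranges such as the 100.64.0.0/10 block that contains 100.75.100.89, and quoted key assignments), not a substitute for a dedicated secret scanner.

```python
import re
from pathlib import Path

# Heuristic patterns for common secret shapes and non-public IP ranges.
PATTERNS = {
    "bearer token": re.compile(r"Bearer\s+[A-Za-z0-9_\-\.]{20,}"),
    "private/CGNAT IP": re.compile(
        r"\b(?:10\.\d{1,3}|192\.168"
        r"|100\.(?:6[4-9]|[7-9]\d|1[01]\d|12[0-7]))\.\d{1,3}\.\d{1,3}\b"
    ),
    "quoted key assignment": re.compile(
        r"(?i)(api[_-]?key|token|secret)['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}


def scan_tree(root: str) -> list[tuple[str, str, int]]:
    """Return (path, pattern name, line number) for each suspicious match."""
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and anything too large to be a source/config file.
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            for name, pat in PATTERNS.items():
                if pat.search(line):
                    hits.append((str(path), name, lineno))
    return hits
```

Running scan_tree over the unpacked skill directory before installation surfaces exactly the artefacts flagged in this analysis: embedded bearer strings, the internal LLM IP, and any credential literals in the publish scripts.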
Latest Release
v2.0.0
v2.0.0: Added Bluesky, Farcaster, Mastodon, Nostr, Semantic Scholar, OpenReview. 18→24 platforms.
Published by @scottcjn on ClawHub