
      Safety Report

Last30Days

      @mvanhorn

      Research a topic from the last 30 days. Also triggered by 'last30'. Sources: Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, web. Become an e...

104 Downloads · 1 Install · 1 Star · 2 Versions
Categories: Workflow Automation (3,323) · Search & Retrieval (2,116) · Video & Audio (1,618) · Social Media (1,367)

      Security Analysis

Verdict: Clean (risk 0.12, medium confidence)

      The skill's code, environment requirements, and instructions align with its stated purpose (research recent social/web signals); a few documentation and privacy-relevant details are inconsistent or merit user review but nothing indicates intentional misdirection.

Mar 7, 2026 · 46 files · 3 concerns

Purpose & Capability: ok

      The skill claims to research the last 30 days across Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, and the web — and the repository includes code to do exactly that (ScrapeCreators-backed Reddit/TikTok/Instagram, vendored Bird GraphQL client for X, Brave/Parallel web search, YouTube transcript handling, Polymarket/HN modules). Required binaries (node, python3) and the primary env var (SCRAPECREATORS_API_KEY) are coherent with these capabilities.

Instruction Scope: note

      Overall the SKILL.md stays on-scope (parse intent, run multi-source research, synthesize, optionally save briefings). Two items to note: (1) SKILL.md / README describe automatic saving of briefings to ~/Documents/Last30Days/, but the included briefing.py saves to ~/.local/share/last30days/briefs — a documentation vs implementation mismatch to be aware of. (2) X/Twitter search can rely on your browser cookies or AUTH_TOKEN/CT0 env vars; that implies the vendored Node script may access cookie data (sensitive) if used — this is optional but privacy-relevant. The SKILL.md also instructs that an intro text must be shown before calling tools, which is explicit and scoped.
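The documentation-vs-implementation path mismatch above is easy to verify locally. A minimal sketch, assuming only the two candidate paths quoted in the report (`existing_briefing_dirs` is a hypothetical helper, not part of the skill):

```python
import os

# Paths named in the report: README/SKILL.md document one location,
# while briefing.py reportedly writes to another. Check which exists.
DOCUMENTED = os.path.expanduser("~/Documents/Last30Days")
ACTUAL = os.path.expanduser("~/.local/share/last30days/briefs")

def existing_briefing_dirs(candidates):
    """Return the candidate directories that actually exist on disk."""
    return [p for p in candidates if os.path.isdir(p)]

print(existing_briefing_dirs([DOCUMENTED, ACTUAL]))
```

Running this after a first briefing run shows which directory the installed version really uses.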

Install Mechanism: ok

      There is no external installer or network download in the registry spec. The skill bundles its Python and a vendored Node 'bird-search.mjs' module; no remote install URLs or extract-from-URL steps are present in the provided files. Requiring Node and Python is proportionate because Node is used for the vendored X client and Python runs the main engine.

Credentials: note

      The single required env var is SCRAPECREATORS_API_KEY (primary credential) which matches the described use (Reddit, TikTok, Instagram via ScrapeCreators). A set of optional env vars (OPENAI_API_KEY, XAI_API_KEY, OPENROUTER_API_KEY, PARALLEL_API_KEY, BRAVE_API_KEY, APIFY_API_TOKEN, AUTH_TOKEN, CT0) is long but optional and justifiable for alternative/backup backends and X cookie fallback. Special attention: AUTH_TOKEN/CT0 (or automatic browser-cookie-based X auth) can expose sensitive session tokens — only provide these if you understand the tradeoff.
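Before installing, you can audit your shell environment against the report's credential list. A sketch using only the env var names listed above (`audit_env` is a hypothetical helper, not part of the skill):

```python
import os

REQUIRED = ["SCRAPECREATORS_API_KEY"]  # primary credential per the report
# X session tokens: optional but sensitive, flagged apart from API keys.
SENSITIVE = ["AUTH_TOKEN", "CT0"]

def audit_env(env):
    """Return (missing required vars, sensitive vars that are set)."""
    missing = [k for k in REQUIRED if not env.get(k)]
    exposed = [k for k in SENSITIVE if env.get(k)]
    return missing, exposed

missing, exposed = audit_env(os.environ)
if missing:
    print("missing required:", missing)
if exposed:
    print("sensitive X session tokens set:", exposed)
```

If `exposed` is non-empty, the skill's X fallback could use those tokens; unset them first if you want to rule that out.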

Persistence & Privilege: note

      The skill writes persistent data: SQLite store, briefings and archives under the user's home directory (briefing.py writes to ~/.local/share/last30days/briefs; README/SKILL.md historically referenced ~/Documents/Last30Days). always:false and user-invocable:true — the skill is not forced onto agents globally. Persistence behavior is consistent with watchlist/briefing features, but users should expect local files and a DB to be created and may want to audit or configure the save path.
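To see exactly what the skill has persisted, a small audit sketch can help. The data directory is the one briefing.py reportedly uses; `audit_data_dir` and the suffix-based database detection are assumptions, since the report does not name the SQLite file:

```python
import sqlite3
from pathlib import Path

def audit_data_dir(root):
    """Summarize a data directory: file count, total size, and table
    names for any SQLite databases found (detected by file suffix)."""
    root = Path(root)
    files = [p for p in root.rglob("*") if p.is_file()]
    summary = {
        "files": len(files),
        "bytes": sum(p.stat().st_size for p in files),
        "db_tables": {},
    }
    for p in files:
        if p.suffix in {".db", ".sqlite", ".sqlite3"}:
            conn = sqlite3.connect(str(p))
            rows = conn.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            ).fetchall()
            conn.close()
            summary["db_tables"][p.name] = sorted(r[0] for r in rows)
    return summary

# e.g. audit_data_dir(Path.home() / ".local/share/last30days")
```

The summary tells you how much is stored and what the briefing database schema contains, without opening the skill's own code.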

      Guidance

This skill appears to do what it says, but review a few practical items before installing:

- A ScrapeCreators API key is required (SCRAPECREATORS_API_KEY). If you provide it, the skill will use that service to fetch Reddit/TikTok/Instagram data. Only give this key if you trust the service and the skill.
- X (Twitter) search can use browser cookies or the AUTH_TOKEN/CT0 environment variables. Those are sensitive session tokens that grant access to your account context; only set them if you understand and trust the vendored Node script. To avoid any chance of cookie/token access, do not provide AUTH_TOKEN/CT0 and do not run X search.
- The skill saves research locally (it creates a SQLite DB and briefing files). Documentation and changelog disagree about the exact save path: README/SKILL.md mention ~/Documents/Last30Days, whereas briefing.py writes to ~/.local/share/last30days/briefs. If you care where files are stored, inspect scripts/store.py and briefing.py and adjust flags or file permissions accordingly.
- The repo bundles a vendored Node module (bird-search.mjs). Review that script if you have concerns about how it obtains X auth (cookies, token use). Node may trigger OS-level access prompts (e.g. macOS Keychain prompts when reading cookies); be prepared for that behavior.
- For minimal exposure: run the script in --mock mode or with --sources that exclude X/Instagram/TikTok, or run without --store to avoid persistence. Check file permissions (the README suggests chmod 600 for .env) and keep secrets out of world-readable files.
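The chmod 600 advice in the last point can be checked programmatically. A minimal sketch (`env_file_is_private` is a hypothetical helper; mode 600 matches the README's recommendation quoted above):

```python
import os
import stat

def env_file_is_private(path):
    """True if the file exists and its permission bits are exactly 600
    (owner read/write only), per the README's chmod 600 suggestion."""
    try:
        mode = stat.S_IMODE(os.stat(path).st_mode)
    except FileNotFoundError:
        return False
    return mode == 0o600

# e.g. env_file_is_private(".env") before starting the skill
```

Run it against wherever you keep the skill's .env before providing any API keys.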

      Latest Release

      v2.9.3

      Security scan improvements: declared AUTH_TOKEN/CT0 in optionalEnv, clarified X token access language (no browser session access), added permissions overview block, removed prompt-injection false positive


      Published by @mvanhorn on ClawHub

