Research any topic across Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, GitHub, Perplexity, and more. AI agent scores by upvotes, likes, and real-money signals.
Security Analysis
High confidence: The skill's code and runtime instructions require many credentials, binaries, and local file writes that are not declared in the registry metadata, and the package mixes instruction-only metadata with a large on-disk codebase. These mismatches warrant caution before installing.
The skill claims to research social/web sources (reasonable), and the code implements that. However, the registry declares no required environment variables or binaries, while SKILL.md and the Python code clearly expect many API keys (OPENAI_API_KEY, OPENROUTER_API_KEY, PARALLEL_API_KEY, BRAVE_API_KEY, XAI_API_KEY, SCRAPECREATORS_API_KEY, AUTH_TOKEN/CT0, etc.) and external tools (Node.js for the vendored bird-search, yt-dlp for YouTube). This is an inconsistency: a research skill legitimately needs those credentials and binaries, but the metadata does not declare them.
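Before installing, you can check which of the credentials referenced above are actually set in your environment. This is a minimal sketch; the key names come from SKILL.md as described here, the list may not be exhaustive, and the function name is our own:

```python
import os

# API keys referenced by SKILL.md and the skill's Python code
# (list taken from the review above; it may not be exhaustive).
EXPECTED_KEYS = [
    "OPENAI_API_KEY", "OPENROUTER_API_KEY", "PARALLEL_API_KEY",
    "BRAVE_API_KEY", "XAI_API_KEY", "SCRAPECREATORS_API_KEY",
    "AUTH_TOKEN", "CT0",
]

def missing_keys(env=os.environ):
    """Return the expected keys that are not set in the given environment."""
    return [k for k in EXPECTED_KEYS if not env.get(k)]

if __name__ == "__main__":
    for key in missing_keys():
        print(f"not set: {key}")
```

Running this before install tells you which backends the skill would silently lack credentials for, since the registry metadata will not warn you.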
SKILL.md instructs the agent to find the skill root, read and update persistent context (variants/open/context.md), run Python scripts in the skill folder, and read multiple reference files. The runtime explicitly reads and writes a SQLite DB (~/.local/share/last30days/research.db), saves briefings to ~/.local/share/last30days/briefs/ and ~/Documents/Last30Days/, and may write a ~/.config/last30days/.env. It also instructs the agent to launch subprocesses (node, yt-dlp) and to use web backends. These actions are consistent with a research/watchlist tool, but they involve persistent local storage and credential files that the registry metadata did not announce.
There is no install spec in the registry (instruction-only), yet the skill bundle contains a substantial Python codebase and vendored JavaScript (bird-search). The vendored JS avoids external downloads (good), but the code will spawn Node subprocesses and expects yt-dlp and other binaries to be present. The absence of declared required binaries in the metadata (node, yt-dlp) is an inconsistency and increases surprise-install risk.
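Since the metadata does not declare the external binaries, a quick pre-install check with `shutil.which` can confirm they are on your PATH. The binary list comes from the observations above; the helper name is our own:

```python
import shutil

# External binaries the skill expects but the registry metadata does not
# declare (per the review above).
REQUIRED_BINARIES = ["node", "yt-dlp"]

def missing_binaries(required=REQUIRED_BINARIES):
    """Return the required binaries that are not found on PATH."""
    return [b for b in required if shutil.which(b) is None]

if __name__ == "__main__":
    for binary in missing_binaries():
        print(f"not on PATH: {binary}")
```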
SKILL.md and the code reference many sensitive environment variables and credential sources (OpenAI/OpenRouter keys, Parallel/Brave keys, ScrapeCreators, xAI, X AUTH_TOKEN/CT0 browser tokens, etc.) and expect a ~/.config/last30days/.env or process env injection. The registry lists no required env vars or primary credential, which is a mismatch. Requesting browser tokens (AUTH_TOKEN/CT0) or device-auth flows is powerful and should be explicitly declared before install.
The skill persistently stores data (SQLite DB, saved briefings in user home, full raw dumps with transcripts). It instructs users how to set up cron/launchd automation and includes a setup wizard that can write config. The skill does not set always:true and does not appear to modify other skills' configs. Persistent writes to the user's Documents and ~/.local/share are consistent with watchlist/briefing features but are notable privacy/footprint considerations the user should accept explicitly.
Guidance
This package includes a full Python+JS research engine that reads and writes files in your home directory, saves full reports (including transcripts) to ~/.local/share and ~/Documents, and can call many third-party APIs. The registry metadata omits the credentials and binaries the skill actually expects (Node.js, yt-dlp, and API keys such as OPENAI_API_KEY, OPENROUTER_API_KEY, SCRAPECREATORS_API_KEY, AUTH_TOKEN/CT0, etc.). Before installing:

1. Review the code files (scripts/) yourself or in a sandbox.
2. Do not provide long-lived browser tokens (AUTH_TOKEN/CT0) unless you understand the implications.
3. Only populate ~/.config/last30days/.env with keys you trust this tool to use.
4. Consider running the skill in an isolated environment or VM.
5. Be aware that it creates persistent local data and recommends cron jobs for automation.

The mismatches between declared metadata and actual requirements are the primary risk. If you want to proceed, ask the author to update the manifest to list the required env vars and binaries and to document any third-party endpoints used (e.g., ScrapeCreators, Perplexity/OpenRouter).
Latest Release
v3.0.0-open
last30days v3.0.0-open is a major update with significant changes to data sources and core functionality:
- Supports research across Reddit, X, YouTube, TikTok, Instagram, Hacker News, Polymarket, GitHub, Perplexity, and more.
- Adds scoring by upvotes, likes, and real-money signals.
- Refines agent routing and configuration; updates command parsing and documentation.
- Removes/deprecates advanced configuration and internal metadata from user documentation for simplicity.
Published by @mvanhorn on ClawHub