
      Safety Report

      Topic Monitor

      @robbyczgw-cla

Monitor topics of interest and proactively alert when important developments occur. Use when the user wants automated monitoring of specific subjects (e.g., prod...

6,466 Downloads · 30 Installs · 23 Stars · 13 Versions

Tags: API Integration (4,971) · Workflow Automation (3,323) · Browser Automation (1,737) · Monitoring & Logging (1,579)

      Security Analysis

Verdict: Clean · 0.12 risk · high confidence

      The skill's code, runtime instructions, and optional environment variables are consistent with a local topic-monitoring alert tool; nothing in the package requests unrelated credentials or performs unexpected network exfiltration.

Mar 3, 2026 · 15 files · 3 concerns
Purpose & Capability: ok

      Name/description match the implementation: the package is a Python-based topic monitor that performs scheduled searches, scores results, stores local state, and queues alerts for agent delivery. Required binary (python3) and optional env vars (TOPIC_MONITOR_TELEGRAM_ID, TOPIC_MONITOR_DATA_DIR, WEB_SEARCH_PLUS_PATH, plus optional search-provider keys) are proportionate to the described functionality. There are no unrelated credentials (AWS, GitHub tokens, etc.) requested.

Instruction Scope: note

SKILL.md instructs running the included Python scripts, setting up cron, and optionally providing search-provider keys. The runtime instructions and code operate on local files (config.json, .data/) and call a local/partner 'web-search-plus' script via subprocess; they do not perform direct HTTP calls themselves. One minor mismatch: SKILL.md and the promotional copy claim 'Memory Integration' and reference past conversations, but the code shown only reads its own local state and exposes hooks to external skills (e.g., personal-analytics) rather than directly accessing an agent's conversation history. Verify how 'memory' is implemented in your agent environment if that matters to you.
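For readers unfamiliar with this pattern, the delegation described above can be sketched as follows. This is a hypothetical illustration, not the skill's actual code: the `--query` flag and the JSON output shape are assumptions about the web-search-plus interface.

```python
import json
import subprocess

def run_search(query: str, argv_prefix: list[str]) -> dict:
    """Delegate a search to a local script instead of making HTTP calls.

    argv_prefix is the command for the search script, e.g.
    ["/path/to/web-search-plus"]. The query travels as a single argv
    element with shell=False (the default), so a user-supplied topic
    string cannot inject shell syntax.
    """
    proc = subprocess.run(
        argv_prefix + ["--query", query],
        capture_output=True,
        text=True,
        timeout=60,
        check=True,  # raise if the search script exits non-zero
    )
    return json.loads(proc.stdout)
```

Because the skill never opens sockets itself, auditing its network behaviour reduces to auditing whatever script sits behind that argv prefix.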

Install Mechanism: ok

      There is no remote install/download step defined (no install spec). The skill is executed as Python scripts shipped with the skill; no external arbitrary downloads or extracted archives are present in the provided files. This is low-risk from an install mechanism perspective, assuming you trust the included source files.

Credentials: note

      The skill declares no required secrets and only a small set of optional env vars for delivery (Telegram chat id), data directory, and the path to the web-search-plus script. The code forwards only an allowlisted set of env vars to the search subprocess (PATH, HOME, LANG, TERM, and explicit search API keys). This is reasonable for the stated purpose. Recommendation: only provide search-provider API keys and messaging credentials you trust and scope appropriately. Also note SMTP credentials are stored in config.json examples (not env vars) — review where you store sensitive email credentials.
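The allowlist forwarding described above can be sketched like this. The four base variables and the search keys are taken from this report; treating this as the skill's exact implementation would be an assumption.

```python
import os

# Only these variables may reach the search subprocess; everything else
# in the parent environment (AWS creds, GitHub tokens, etc.) is dropped.
ALLOWED_ENV = {
    "PATH", "HOME", "LANG", "TERM",
    "SERPER_API_KEY", "TAVILY_API_KEY", "EXA_API_KEY",
    "YOU_API_KEY", "SEARXNG_INSTANCE_URL",
}

def filtered_env(source=None):
    # Build a fresh dict rather than mutating a copy of os.environ,
    # so nothing outside the allowlist can leak through.
    src = os.environ if source is None else source
    return {k: v for k, v in src.items() if k in ALLOWED_ENV}
```

Passing `env=filtered_env()` to `subprocess.run` means an unexpected variable in your shell session never becomes visible to the search provider integration.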

Persistence & Privilege: note

The skill does not request always:true and runs only when invoked or scheduled. It writes state and queue files under a configurable .data/ directory inside the skill (TOPIC_MONITOR_DATA_DIR), which is expected. The setup_cron.py script may add cron jobs to the user's crontab for scheduling; inspect that script before running it to confirm the desired cron behavior. The skill only modifies its own files and does not attempt to alter other skills' configs.
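The contained state layout described above amounts to resolving every write against one configurable root. A minimal sketch, assuming the default of a local .data/ directory (the state file name here is illustrative, not the skill's real layout):

```python
import json
import os
from pathlib import Path

def data_dir() -> Path:
    # All writes land under TOPIC_MONITOR_DATA_DIR, or a local .data/
    # directory inside the skill when the variable is unset.
    root = Path(os.environ.get("TOPIC_MONITOR_DATA_DIR", ".data"))
    root.mkdir(parents=True, exist_ok=True)
    return root

def save_state(state: dict) -> Path:
    # Persist monitor state as pretty-printed JSON and return its path.
    path = data_dir() / "state.json"
    path.write_text(json.dumps(state, indent=2))
    return path
```

Keeping every path derived from `data_dir()` is what makes the "only modifies its own files" property easy to verify in review.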

      Guidance

This package appears to be what it claims: a local Python-based topic monitor that stores state in a .data/ directory and uses a 'web-search-plus' script (if present) to perform searches, then queues structured alert JSON for your agent to deliver via Telegram/Discord/Email. Before installing or enabling:

- Inspect scripts/setup_cron.py before running it to see what crontab entries it will create. If you prefer manual scheduling, skip the auto-setup.
- If you will enable real searches, verify WEB_SEARCH_PLUS_PATH and the web-search-plus script; when providing search-provider API keys (SERPER_API_KEY, TAVILY_API_KEY, EXA_API_KEY, YOU_API_KEY, SEARXNG_INSTANCE_URL), expose only minimally scoped keys and keep them secret.
- Review config.json (or config.example.json) for places that may hold SMTP credentials or channel configuration; prefer per-service credentials stored securely rather than committing them to project files.
- If you care about the claimed 'Memory Integration', confirm how the skill obtains that context in your OpenClaw environment (the code shown uses only local state and optional external skills/hooks).

If you do not want any outbound network calls from this skill, note that the skill itself makes no direct HTTP requests; it delegates to web-search-plus and to the OpenClaw agent for delivery. If any of these behaviours are unacceptable (e.g., automatic cron editing, forwarding of particular API keys), inspect or modify the relevant scripts before enabling them.
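For context, a queued alert might look something like the following. This shape is purely illustrative, since the report does not show the skill's actual schema; only the delivery channels named above are taken from the source.

```json
{
  "topic": "example: production incidents",
  "priority": "high",
  "summary": "Two new results scored above the alert threshold.",
  "channel": "telegram",
  "queued_at": "2026-03-03T08:00:00Z"
}
```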

      Latest Release

      v1.3.5

b70db6d  chore: release v1.3.5
1248efa  Sync version with ClawHub (v1.3.4)
dd4580b  security: remove crontab manipulation, sanitize subprocess query input


      Published by @robbyczgw-cla on ClawHub
