
      Safety Report

      Memory Pipeline

      @joe-rlo

      Complete agent memory + performance system. Extracts structured facts, builds knowledge graphs, generates briefings, and enforces execution discipline via pre-game routines, tool policies, result compression, and after-action reviews. Includes external knowledge ingestion (ChatGPT exports, etc.) into searchable memory. Use when working on memory management, briefing generation, knowledge consolidation, external data ingestion, agent consistency, or improving execution quality across sessions.

      2,167 downloads · 5 installs · 4 stars · 4 versions
      Search & Retrieval 2,116 · Customer Support 1,744 · AI & Machine Learning 1,383 · Networking & DNS 1,102

      Security Analysis

      Medium confidence · Suspicious (0.08 risk)

      The skill largely matches its stated purpose (memory extraction, linking, briefing) and only asks for LLM API keys it needs, but there are internal inconsistencies (notably where some scripts write/read files) and a few oddities you should review before installing or automating it.

      Feb 11, 2026 · 12 files · 3 concerns
      Purpose & Capability (concern)

      The name and description (memory extraction, linking, briefings, ingestion) align with the included scripts and runtime instructions, which legitimately need LLM API keys and access to the workspace and transcripts. However, ingest-chatgpt.py writes output into the skill's own directory (skills/.../memory/knowledge/chatgpt) while the rest of the pipeline expects workspace memory under $CLAWDBOT_WORKSPACE/memory. Because of this mismatch, imported data lands where the other scripts won't find it unless the path is adjusted. Also, the openclaw.plugin.json version (0.1.0) differs from the registry version (0.4.0); minor, but noteworthy.
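A minimal sketch of the kind of fix this mismatch suggests, assuming the $CLAWDBOT_WORKSPACE variable and the knowledge/chatgpt layout described above; the actual OUTPUT_DIR logic in ingest-chatgpt.py may differ, and the fallback here is a stand-in:

```python
import os
from pathlib import Path

# Sketch only: derive the ingest output directory from the workspace env var
# the rest of the pipeline uses, instead of the skill's own directory.
workspace = os.environ.get("CLAWDBOT_WORKSPACE")
if workspace:
    OUTPUT_DIR = Path(workspace) / "memory" / "knowledge" / "chatgpt"
else:
    # Stand-in fallback mirroring the skill-local layout the report flags
    # as inconsistent; the real script resolves this differently.
    OUTPUT_DIR = Path.cwd() / "memory" / "knowledge" / "chatgpt"
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
```

With a change along these lines, imported exports land where the linking and briefing scripts already look.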

      Instruction Scope (note)

      SKILL.md and the scripts instruct the agent to read daily notes, session transcripts (~/.clawdbot/agents/main/sessions/*.jsonl), and files like SOUL.md/IDENTITY.md/USER.md, then call external LLM APIs. Those actions are appropriate for a memory pipeline, but they do access sensitive local data (session transcripts and local config files). The ingestion script also includes an exclusion filter for many medical/research terms, an odd set of domain-specific defaults: not harmful, but surprising and worth auditing if you expect to import such content.
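For reference, keyword exclusion of this sort typically looks something like the following sketch; the term list and matching logic here are invented placeholders, not the actual contents of ingest-chatgpt.py:

```python
# Placeholder terms for illustration; the real list in ingest-chatgpt.py is
# longer and domain-specific (medical/research vocabulary).
EXCLUDED_TERMS = {"clinical trial", "dosage", "biopsy"}

def should_skip(title: str) -> bool:
    """Skip a conversation whose title contains any excluded term."""
    lowered = title.lower()
    return any(term in lowered for term in EXCLUDED_TERMS)
```

Auditing the filter means reading the real term list and deciding whether substring matching of this kind would silently drop content you care about.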

      Install Mechanism (ok)

      There is no remote install/download step in the manifest (no brew/npm/remote archive). The skill is provided as source/scripts and SKILL.md; risk is low from the installer perspective. Still, the bundle includes multiple executable Python scripts and a setup.sh — review them before running.

      Credentials (note)

      The only secrets it looks for are LLM API keys (OpenAI/Anthropic/Gemini) via env vars or standard ~/.config files — reasonable and proportional for the stated functionality. The scripts read those config files if env vars are absent. They do not request unrelated cloud creds, SSH keys, or other service tokens.
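As a hedged illustration of that lookup order, a sketch along these lines matches the behavior described; the env-var and file names are placeholders, not necessarily the scripts' exact ones:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_api_key(env_var: str, config_file: Path) -> Optional[str]:
    """Env var first; fall back to a config file only if the var is unset."""
    key = os.environ.get(env_var)
    if key:
        return key
    if config_file.is_file():
        contents = config_file.read_text().strip()
        return contents or None
    return None

# e.g. resolve_api_key("OPENAI_API_KEY", Path.home() / ".config/openai/api_key")
```

The practical consequence: even if you never export a key, a readable ~/.config file can still be picked up, so check file permissions before running.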

      Persistence & Privilege (ok)

      The skill does not request always:true and does not modify other skills or global agent settings. It writes/maintains files under the workspace memory/ directory and heartbeat-state.json (expected for automation). The only privilege to note is read access to agent session transcripts (~/.clawdbot/agents/...) which contain sensitive conversational data but are logically needed for extracting memory.

      Guidance

      This package is plausibly a real memory pipeline, but review it and (optionally) run it in a sandbox before trusting it with your main workspace. Specifically:

      - Inspect ingest-chatgpt.py: it writes imported ChatGPT exports into the skill's own memory/ subdirectory instead of your workspace memory/; either change OUTPUT_DIR to point at your workspace or move the files after import so the rest of the pipeline can see them.
      - Audit setup.sh and any scripts before executing them, and run them with least privilege (not as root).
      - Be aware the scripts read local agent transcripts (~/.clawdbot/agents/main/sessions) and ~/.config/*/api_key files; these are sensitive. If you don't want them scanned, move or restrict those files before running, or use a temporary workspace.
      - The exclusion patterns in ingest-chatgpt.py include many medical/research terms; if you expect to import such content, remove or change that list.
      - Keep LLM API keys in secure locations; prefer env vars for ephemeral use rather than placing keys in files unless you control file permissions.
      - Consider running a dry-run option where available (ingest has --dry-run) to preview actions.

      If you want, I can point to the exact lines in the scripts that need changing (e.g., OUTPUT_DIR in ingest-chatgpt.py) or produce a minimal patch to make all scripts consistently target the same workspace/memory location.
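The sandbox and dry-run advice above can be sketched as follows; the --dry-run flag and $CLAWDBOT_WORKSPACE variable come from the report, while the function name and export filename are illustrative:

```python
import os
import tempfile

def sandbox_invocation(export_path: str):
    """Build a dry-run command aimed at a throwaway workspace, so a preview
    run cannot touch the real workspace memory/ directory."""
    sandbox = tempfile.mkdtemp(prefix="memory-pipeline-sandbox-")
    env = {**os.environ, "CLAWDBOT_WORKSPACE": sandbox}
    cmd = ["python", "ingest-chatgpt.py", "--dry-run", export_path]
    return cmd, env
```

Once you have audited the script, pass the result to subprocess.run(cmd, env=env) and inspect what would have been written before repeating the run against your real workspace.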

      Latest Release

      v0.4.0

      Cleaned up README — clearer structure, scannable tables, setup front and center


      Published by @joe-rlo on ClawHub

      © 2026 Zappush