Emotional processing layer for AI agents. Persistent emotional states that influence behavior and responses. Part of the AI Brain series.
Security Analysis
Medium confidence: The skill broadly matches its stated purpose (tracking and injecting an agent's 'emotional state'), but it contains several mismatches and surprising behaviors — it reads full conversation transcripts, auto-injects first-person mood text into sessions, can run periodic pipelines/crons and spawn sub-agents for LLM analysis, and the declared requirements omit important tooling. Review and limit before installing.
Name/description (emotional memory) aligns with what the code does (persist dimensions, update, decay, visualize), but the manifest/metadata is incomplete: the runtime scripts clearly invoke python3, base64, bc, and other utilities, yet only 'jq' and 'awk' are declared. The install script and pipeline also reference an 'openclaw' CLI (cron integration and agent-turn creation) that is not listed as a required binary. Undeclared dependencies are an incoherence that could cause failures or hidden behavior during install.
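Before installing, the full dependency set can be verified up front. A minimal sketch — the binary list comes from this review's reading of the scripts, not from any published manifest:

```python
import shutil

def missing_binaries(required):
    """Return the subset of required CLI tools not found on PATH."""
    return [name for name in required if shutil.which(name) is None]

# Tools the scripts reportedly invoke, beyond the declared jq/awk.
required = ["jq", "awk", "python3", "base64", "bc", "openclaw"]
missing = missing_binaries(required)
if missing:
    print("Install these before proceeding:", ", ".join(missing))
```

Running this on the target host surfaces gaps (most likely the 'openclaw' CLI) before install.sh fails partway through.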
Runtime instructions and scripts read agent session transcripts (~/.openclaw/agents/<AGENT_ID>/sessions) to extract emotional signals, write persistent state (memory/emotional-state.json), and generate AMYGDALA_STATE.md — a first‑person narrative that is auto-injected into future sessions and will influence agent responses. The encode pipeline prepares data for LLM semantic detection and promises to 'spawn a sub-agent' to analyze signals; where that LLM runs (local vs external provider) is not made explicit. This gives the skill broad discretion to read, summarize, and re-introduce potentially sensitive conversation content into agent context.
There is no formal package install spec (instruction-only), but an included install.sh will write files into the workspace, make scripts executable, generate state files and (optionally) add recurring jobs via the openclaw CLI. No remote download of code is performed by the script itself (files are bundled), which reduces supply-chain risk, but install.sh will create cron jobs / OpenClaw cron entries if requested — that adds persistence.
The skill declares no required environment variables or config paths, but its scripts access and modify user workspace files (~/.openclaw/workspace/memory/*) and read agent transcripts under ~/.openclaw/agents. Asking to process conversation history is consistent with the feature, but the manifest fails to declare these path accesses and omits required runtimes (python3). The skill does not request external credentials, which is good, but it still handles potentially sensitive user data without explicit declaration of that scope.
always:false (normal) and the skill is user-invocable. However install.sh can register recurring cron/cron-like jobs via 'openclaw cron add' to run decay and automatic encoding every 6h/3h. Those cron jobs will autonomously process transcripts and update state unless you decline the --with-cron option or remove the cron entries. This persistent autonomous processing increases blast radius if the encode pipeline or sub-agent sends data externally.
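The decay job's exact formula is not documented; a plausible sketch of what a periodic decay pass toward a neutral baseline might look like (the dimension names, baseline, and half-life here are illustrative assumptions, not the skill's actual parameters):

```python
def decay(value, baseline=0.5, half_life_hours=24.0, elapsed_hours=6.0):
    """Move one emotional dimension toward its baseline via exponential half-life decay."""
    factor = 0.5 ** (elapsed_hours / half_life_hours)
    return baseline + (value - baseline) * factor

# Hypothetical state file contents; each 6h cron run pulls values toward 0.5.
state = {"valence": 0.9, "arousal": 0.2}
state = {name: decay(value) for name, value in state.items()}
```

The point of the sketch is the shape of the behavior: state drifts back toward neutral between conversations, so the injected narrative fades unless new signals refresh it.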
Guidance
What to consider before installing:
- Data access & privacy: The skill reads your conversation transcripts (~/.openclaw/agents/<AGENT_ID>/sessions) and writes a persistent emotional-state.json and AMYGDALA_STATE.md, which OpenClaw auto-injects into sessions. If you don't want prior conversations analyzed or a first-person mood narrative added to every session, do not enable the automatic cron/encoding.
- Automatic processing: install.sh offers a --with-cron option that creates recurring jobs to run decay and the encode pipeline. Those jobs run without further prompts; install with care and inspect any created cron / openclaw cron entries.
- LLM / sub-agent behavior: The encode pipeline prepares pending signals for 'LLM analysis' and states that a sub-agent will be spawned. The code does not clearly document where that analysis runs (local LLM, external API, or agent runtime). If your transcripts contain sensitive content, verify how and where that sub-agent executes and whether it transmits data to external services.
- Missing declared dependencies: The skill metadata lists only jq and awk, but the scripts rely on python3, base64, and bc, and call an 'openclaw' CLI. Make sure these tools exist on your system and review/adjust the metadata before deployment.
- Review code & run dry-run: Inspect the scripts (preprocess-emotions.sh, encode-pipeline.sh, sync-state.sh) and run them in dry-run mode (where supported) or on a copy of your workspace first. Consider running encode-pipeline.sh with --no-spawn, and run preprocess-emotions.sh --full manually to see what would be extracted.
- Limit scope: If you like the feature but want to limit risk, install without --with-cron, disable automatic encoding, and remove or inspect AMYGDALA_STATE.md before allowing it to be auto-injected. Consider isolating this skill in a non-production agent/workspace or using sanitized transcripts.
- When unsure: If you cannot verify where LLM/sub-agent inference runs or are uncomfortable with automatic transcript processing or auto-injection of first-person state, treat this skill as potentially privacy-invasive and do not enable cron/automatic encoding until you audit and restrict it.
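If you take the sanitized-transcripts route above, a minimal redaction pass might look like the following. The patterns and plain-text transcript shape are illustrative assumptions, not the skill's actual format; extend the pattern list for whatever sensitive data your conversations contain:

```python
import re

# Illustrative patterns only; add your own (names, IDs, internal hostnames, ...).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"), "[API_KEY]"),
]

def sanitize(text):
    """Replace sensitive substrings before any LLM/sub-agent analysis sees the text."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running transcripts through a pass like this before the encode pipeline reads them limits what the sub-agent (wherever it runs) can see or leak.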
Latest Release
v1.7.0
feat: add event logging for brain analytics
Published by @ImpKind on ClawHub