Persistent memory for AI agents to store facts, learn from actions, recall information, and track entities across sessions.
Security Analysis
High confidence: this skill is internally coherent. It implements a local SQLite-backed agent memory (reading and writing ~/.agent-memory/memory.db by default), requires no credentials or external network access, and its files match the SKILL.md usage.
Name, README, SKILL.md, CLI wrappers, examples, tests, and src/memory.py all describe and implement a local persistent memory system using SQLite. There are no unrelated requirements (no cloud credentials, no network libraries) that would contradict the stated purpose.
SKILL.md instructs the agent to instantiate AgentMemory and call methods such as remember(), recall(), learn(), and track_entity(). The code implements exactly those behaviors and only accesses the configured SQLite DB path (default ~/.agent-memory/memory.db). There are no instructions that read unrelated system files, transmit data to external endpoints, or access unexpected environment variables.
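As an illustration of that interface, here is a minimal sketch of a SQLite-backed store mirroring the remember()/recall() behavior described in SKILL.md. The class name, method signatures, and schema below are assumptions for illustration, not the skill's actual code.

```python
import os
import sqlite3
import tempfile

# Illustrative sketch only: a minimal SQLite-backed store mirroring the
# remember()/recall() interface described in SKILL.md. Signatures and the
# schema are assumptions, not the skill's actual implementation.
class AgentMemorySketch:
    def __init__(self, db_path):
        # Create the parent directory if missing, then open/create the DB.
        os.makedirs(os.path.dirname(db_path), exist_ok=True)
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memories (key TEXT PRIMARY KEY, value TEXT)"
        )

    def remember(self, key, value):
        # Upsert a fact under a key.
        self.conn.execute(
            "INSERT OR REPLACE INTO memories (key, value) VALUES (?, ?)",
            (key, value),
        )
        self.conn.commit()

    def recall(self, key):
        # Return the stored value, or None if the key is unknown.
        row = self.conn.execute(
            "SELECT value FROM memories WHERE key = ?", (key,)
        ).fetchone()
        return row[0] if row else None

# Store and retrieve a fact in a throwaway database.
mem = AgentMemorySketch(os.path.join(tempfile.mkdtemp(), "memory.db"))
mem.remember("project", "agent-memory")
print(mem.recall("project"))  # prints "agent-memory"
```

Note that everything written this way ends up as plaintext in the database file, which is why the persistence caveats below matter.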
No install spec is provided (instruction-only skill). The package includes source files but does not declare external installs or downloads; requirements.txt is empty. This is low-risk from an install perspective.
The skill declares no required environment variables, no credentials, and no config paths beyond an optional db_path constructor argument. This is proportional for a local memory store.
The skill persists data to disk by default at ~/.agent-memory/memory.db and creates that directory if missing. Its `always` flag is false and it does not request elevated privileges, but it will store potentially sensitive text in an unencrypted SQLite file unless the integrator specifies a different db_path or provides encryption. Consider this permanent on-disk persistence when deciding whether to install.
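The default-path behavior described above typically looks like the sketch below (the exact logic in src/memory.py may differ); HOME is redirected to a temp directory here so the example does not touch real files:

```python
import os
import tempfile

# Illustrative sketch of default-path resolution: "~/.agent-memory/memory.db"
# expanded against the home directory, with the parent directory created if
# missing. HOME is pointed at a temp dir to avoid touching the real home.
os.environ["HOME"] = tempfile.mkdtemp()
db_path = os.path.expanduser("~/.agent-memory/memory.db")
os.makedirs(os.path.dirname(db_path), exist_ok=True)
print(os.path.isdir(os.path.dirname(db_path)))  # prints "True"
```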
Guidance
This skill appears to do what it says: a local, file-backed memory for agents. Before installing, consider:
1) Data sensitivity: memories are stored in plaintext SQLite by default (~/.agent-memory/memory.db); avoid writing secrets (passwords, API keys) into it, or point it to a custom path you control.
2) File permissions: restrict access to the DB file (e.g., chmod 600).
3) Backup and retention: decide how long you want memories kept, or enable expiry when calling remember().
4) Encryption and remote storage: if you need either, modify the code or supply an encrypted DB.
The code contains no network calls or credential-exfiltration patterns, and the tests exercise local behavior, so it looks safe to use with the precautions above.
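Precautions 1 and 2 can be sketched as follows; the path here is illustrative, and the chmod semantics apply on POSIX systems:

```python
import os
import stat
import tempfile

# Sketch of precautions 1 and 2: choose a custom db_path you control and
# restrict the file to the owner (equivalent of chmod 600). The path is
# illustrative; substitute your own location.
db_path = os.path.join(tempfile.mkdtemp(), "agent-memory.db")
open(db_path, "a").close()   # ensure the file exists before chmod
os.chmod(db_path, 0o600)     # owner read/write only

mode = stat.S_IMODE(os.stat(db_path).st_mode)
print(oct(mode))  # prints "0o600" on POSIX systems
```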
Latest Release
v1.0.0
Initial release - SQLite-backed persistent memory for AI agents
Published by @Dennis-Da-Menace on ClawHub