Provide personalized wellness guidance while maintaining strict safety boundaries.
Security Analysis
High confidence: The skill's instructions match its stated purpose (providing cautious, evidence-aware wellness guidance); it is instruction-only, requests no credentials, and raises no obvious security inconsistencies — but it implies long-term collection of personal health data, so privacy and retention should be considered before use.
Name/description align with SKILL.md content. The instructions focus on safe, non-diagnostic wellness coaching and ask for user-specific context (medications, routines) — all coherent with a health-guidance skill. No unrelated binaries, env vars, or services are required.
Instructions are appropriately scoped to health coaching and explicitly forbid diagnosis/prescribing. However, they instruct the agent to "learn personal normals over 2-4 weeks" and to "track multiple metrics," which implies ongoing collection and retention of sensitive personal health data. The SKILL.md does not specify how or where that data is stored or protected; consider this a privacy/operational scope note rather than a code-execution risk.
No install spec and no code files — lowest-risk footprint. Nothing will be downloaded or written by the skill itself based on the provided metadata.
No environment variables, credentials, or config paths are requested. The skill does not ask for unrelated secrets or system access.
The skill is not marked always:true and is user-invocable (normal defaults). However, the guidance's expectation of multi-week tracking implies persistent memory or storage at the platform level; verify the agent's memory and data-retention behavior, consent prompts, and storage protections before allowing ongoing use.
Guidance
This skill is internally consistent and low-risk from a system/security perspective because it is instruction-only and requests no credentials. Its main concern is privacy: the instructions ask the agent to collect and "learn" personal health patterns over weeks, which means the platform will likely store sensitive health data (medications, sleep, mood, etc.). Before installing or enabling it for repeated use, check:

1. How and where the agent/platform stores memory or conversation logs, including retention periods and deletion options.
2. Whether stored health data is encrypted and access-controlled.
3. That you (the user) explicitly consent to multi-week tracking.
4. That the skill's non-diagnostic disclaimers meet any legal or organizational requirements.

Avoid providing highly sensitive documents (full medical records, secure auth tokens) to the skill, and consult a human healthcare provider for anything beyond general wellness guidance.
Latest Release
v1.0.1
Fix format: frontmatter approach
Published by @ivangdavila on ClawHub