HaS (Hide and Seek) on-device text and image anonymization. Text: 8 languages (zh/en/fr/de/es/pt/ja/ko), open-set entity types. Image: 21 privacy categories...
Security Analysis
High confidence: The skill's files, install steps, and runtime instructions are consistent with an on-device text/image anonymizer and do not request unrelated credentials or unexpected remote endpoints.
The skill claims on-device text and image anonymization, and its dependencies match that claim: a local LLM server (llama-server) for text and Ultralytics/OpenCV for image segmentation. Requiring 'uv' as a dependency runner and providing model paths for the text and image models is coherent with the stated purpose.
SKILL.md and the bundled scripts instruct only local operations (scanning files, running a local llama-server on loopback, running YOLO segmentation, producing reports, and writing outputs). The runtime code explicitly refuses non-loopback llama-server URLs. The code does inspect local processes (ps/lsof) to manage the server, which is reasonable for starting/reusing a local model server.
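The loopback-only restriction described above can be sketched as follows. This is a minimal illustration, not the skill's actual code; the function name `is_loopback_url` and the accepted host list are assumptions:

```python
from urllib.parse import urlparse

def is_loopback_url(url: str) -> bool:
    """Accept a llama-server URL only if it points at the local machine."""
    host = urlparse(url).hostname
    # Reject anything that is not an explicit loopback address.
    return host in ("127.0.0.1", "localhost", "::1")
```

A check of this shape ensures that even a misconfigured model-server URL cannot cause text to leave the device.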
Install steps use Homebrew formulas for uv and llama.cpp (macOS) and download model files from HuggingFace (trusted public model hosting). Downloads target a 'models' directory; no obscure shorteners or personal servers are used. The downloads are model artifacts, not arbitrary executables.
No required secrets or unrelated environment variables are requested. Optional env vars (model paths, parallel request limits) are appropriate and documented. The code does not access or require cloud credentials.
The skill is not always-enabled and does not request elevated platform privileges. It starts or reuses a local llama-server process for inference, but does not modify other skills or system-wide agent settings.
Guidance
This package appears to be what it claims: an on-device text/image anonymizer. Before installing:
- Verify the HuggingFace model URLs and make sure you are comfortable with those model artifacts.
- Be aware that running llama-server starts a local model process that can be resource intensive.
- Mapping files that can restore anonymized text are highly sensitive; store them with restricted permissions (the code already writes mappings with mode 0600, which is good).
- On non-macOS systems you will need to install llama-server and uv by other means; review the brew formula names and your platform's equivalents.
- For higher assurance, validate the downloaded model checksums against the model publisher and inspect the external dependencies installed by 'uv' (the Python packages listed in script headers).
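The two hardening steps mentioned above, owner-only mapping files and checksum validation of downloaded models, can be sketched like this. This is an illustrative fragment, not the skill's code; the function names are hypothetical:

```python
import hashlib
import os

def write_mapping(path: str, data: bytes) -> None:
    """Write a sensitive mapping file, created with mode 0600 (owner read/write only)."""
    # Set permissions at creation time so the file is never world-readable.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(data)

def sha256_of(path: str) -> str:
    """Compute a file's SHA-256 digest for comparison against a published checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

To validate a download, compare `sha256_of("models/model.gguf")` with the hex digest the model publisher lists on its HuggingFace page.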
Latest Release
v1.0.1
- Added environment variable support for model file locations and parallel request settings (HAS_TEXT_MODEL_PATH, HAS_IMAGE_MODEL, HAS_TEXT_MAX_PARALLEL_REQUESTS).
- Updated skill requirements metadata to document these environment variables and their usage.
- No other functional or documentation changes to program logic detected.
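A sketch of how environment variables like these are typically consumed. This is illustrative only; the fallback values shown are assumptions, not the skill's actual defaults:

```python
import os

def load_config(env: dict) -> dict:
    """Read HaS settings from an environment mapping, falling back to defaults.

    The default paths and parallelism below are placeholders for illustration.
    """
    return {
        "text_model": env.get("HAS_TEXT_MODEL_PATH", "models/text-model.gguf"),
        "image_model": env.get("HAS_IMAGE_MODEL", "models/image-model.pt"),
        "max_parallel": int(env.get("HAS_TEXT_MAX_PARALLEL_REQUESTS", "4")),
    }

config = load_config(dict(os.environ))
```

Passing the environment in as a mapping (rather than reading `os.environ` inside the function) keeps the configuration logic easy to test.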
Published by @XuanwuSkill on ClawHub