
      Safety Report

      Llmrouter

      @alexrudloff

      Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.

      2,040 downloads · 5 installs · 5 stars · 2 versions
      Categories: Project Management (1,537) · AI & Machine Learning (1,383) · Networking & DNS (1,102) · Automated Testing (538)

      Security Analysis

      Medium confidence · Clean (risk score 0.08)

      The skill's instructions, requirements, and declared primary credential broadly match its stated purpose (an LLM routing proxy); there are minor metadata/instruction inconsistencies to be aware of but nothing that indicates intentional misdirection.

      Feb 11, 2026 · 1 file · 2 concerns
      Purpose & Capability (note)

      The skill is an LLM routing proxy and the declared requirements (python3, pip) and the primary credential (ANTHROPIC_API_KEY) are consistent with that purpose. The SKILL.md also documents support for multiple providers (OpenAI, Google, Kimi, Ollama) and expects corresponding provider keys in config.yaml. Registry metadata lists no required env vars but does include primaryEnv=ANTHROPIC_API_KEY — a minor inconsistency but explainable (the router supports multiple provider keys in config rather than fixed env vars).

      Instruction Scope (ok)

      The runtime instructions are limited to cloning the repo, creating a venv, installing requirements, optionally pulling local models with Ollama, editing config.yaml/ROUTES.md, and running server.py (or creating an optional macOS LaunchAgent). The instructions reference provider API keys and local files used by the router (config.yaml, ROUTES.md), but do not instruct reading unrelated system files or exfiltrating data.
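
The setup path those instructions describe can be sketched as a short shell session. The clone URL is the upstream repo this report names, and the file names (requirements.txt, server.py) come from the instructions above; run each step manually so the code can be reviewed before execution.

```shell
# Sketch of the documented install path; steps are meant to be run one at a
# time so the cloned code can be inspected before anything executes.
set -e
python3 -m venv .venv                 # isolated environment, as the docs suggest
.venv/bin/python -m pip --version     # confirm pip works inside the venv
# Remaining steps from SKILL.md (network required, shown for reference):
#   git clone https://github.com/alexrudloff/llmrouter
#   cd llmrouter
#   .venv/bin/pip install -r requirements.txt   # review requirements.txt first
#   .venv/bin/python server.py                  # binds 127.0.0.1 by default
```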

      Install Mechanism (ok)

      This is an instruction-only skill (no install spec). The SKILL.md instructs cloning the public GitHub repo and running pip install -r requirements.txt — a conventional install path. No high-risk downloads or obscure URLs are used in the provided instructions.

      Credentials (note)

      The skill declares a primary credential (ANTHROPIC_API_KEY) which is reasonable for using Anthropic as a provider. SKILL.md also expects other provider keys to be added to config.yaml when using those providers; the registry metadata's 'Required env vars: none' is slightly inconsistent with examples in the docs that use ANTHROPIC_API_KEY in an Authorization header. Overall the amount of credential access requested is proportional to a multi-provider router, but users should expect to supply multiple provider keys in configuration.
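
Since keys live in config.yaml rather than fixed env vars, a multi-provider config might look like the sketch below. This is a hypothetical shape only: the real schema is defined by the upstream repo, and every key name here is a guess.

```yaml
# Hypothetical config.yaml sketch; check the upstream repo for the real schema.
providers:
  anthropic:
    api_key: "sk-ant-..."            # primary credential per the skill metadata
  openai:
    api_key: "sk-..."                # add only the providers you intend to use
  ollama:
    host: "http://127.0.0.1:11434"   # local models need no key
```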

      Persistence & Privilege (ok)

      The skill does not request always:true and is user-invocable. The only persistence step in the docs is an optional macOS LaunchAgent recipe the user can install to run the server at boot; this is explicitly optional (and the server defaults to binding 127.0.0.1). No instructions attempt to modify other skills or system-wide agent configuration.
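
The optional LaunchAgent mentioned above would take the standard launchd plist shape. The label and paths below are placeholders for illustration, not the skill's actual recipe; verify the real file before loading it.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Placeholder label and paths; substitute your own before loading it
       from ~/Library/LaunchAgents with launchctl -->
  <key>Label</key>
  <string>local.llmrouter</string>
  <key>ProgramArguments</key>
  <array>
    <string>/path/to/.venv/bin/python</string>
    <string>/path/to/llmrouter/server.py</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```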

      Guidance

      This skill is an instruction-only wrapper around an open-source LLM router. Before installing:
      1. Review the upstream repository (https://github.com/alexrudloff/llmrouter) and inspect server.py and config.yaml to understand how API keys are used and stored.
      2. Expect to provide API keys for any providers you want to use (Anthropic is shown as primary; add OpenAI/Google/Kimi keys to config.yaml as needed).
      3. Run it in an isolated environment (virtualenv, container, or VM) and bind to localhost (default 127.0.0.1) unless you explicitly intend to expose it.
      4. If you install the optional LaunchAgent/service, be aware it will auto-start the router at boot; verify authentication and logs before enabling it.
      5. Because the skill package itself contains only documentation (no code), runtime behavior depends entirely on the external repo code you clone; verify that code before running pip install or python server.py.

      Latest Release

      v0.1.1

      llmrouter v0.1.1
      - Expanded provider support: now tested with Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama.
      - Added provider-agnostic classification: the classifier can run locally on Ollama or remotely on Anthropic, OpenAI, Google, or Kimi.
      - Updated configuration instructions and defaults for broader provider compatibility.
      - Improved OpenClaw integration documentation and setup.
      - Minor dependency and environment requirement changes (Ollama now optional; Python 3.10+ and venv use encouraged).
      - No functional code changes; README/metadata/documentation updates only.

      More by @alexrudloff
      - Deep Research with Caesar.org (2 stars)
      - ClawChat - P2P Agent Communication (0 stars)


      Published by @alexrudloff on ClawHub

      © 2026 Zappush