
      Safety Report

      baml-codegen

      @killerapp

      Use when generating BAML code for type-safe LLM extraction, classification, RAG, or agent workflows - creates complete .baml files with types, functions, clients, tests, and framework integrations from natural language requirements. Queries official BoundaryML repositories via MCP for real-time patterns. Supports multimodal inputs (images, audio), Python/TypeScript/Ruby/Go, 10+ frameworks, 50-70% token optimization, 95%+ compilation success.

      1,028 Downloads · 0 Installs · 0 Stars · 1 Version

      Security Analysis

      Medium confidence · Suspicious (0.08 risk)

      The skill's documentation and runtime instructions are plausible for a BAML code generator, but there are clear mismatches between what it says it needs (MCP servers, baml-cli, LLM provider credentials) and the metadata (no required binaries, no env vars, unknown source/homepage), so proceed with caution.

      Feb 11, 2026 · 21 files · 4 concerns

      Purpose & Capability: concern

      The skill claims to query MCP servers and to require running `baml-cli generate` (and to integrate with LLM providers). Yet the registry metadata declares no required binaries, no required environment variables, and no install specification. That mismatch is unexpected: a codegen workflow that invokes a CLI and cloud LLM providers would normally declare those dependencies and credentials.
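      The mismatch above can be caught mechanically. The sketch below is a hypothetical audit, not part of this registry's tooling: it scans a SKILL.md for backticked CLI invocations and compares them against the binaries the metadata declares. The metadata field names (`requires`, `binaries`) are assumptions about the schema, not documented fields.

```python
import re

# Hypothetical sketch: cross-check CLI commands mentioned in SKILL.md
# against the binaries declared in the skill's metadata. The schema
# field names ("requires", "binaries") are assumptions.
def undeclared_binaries(skill_md: str, metadata: dict) -> set[str]:
    declared = set(metadata.get("requires", {}).get("binaries", []))
    # Find backticked shell invocations like `baml-cli generate` and
    # keep only command names that look like tooling.
    mentioned = {
        m.split()[0]
        for m in re.findall(r"`([a-z][\w.-]*(?: [\w.-]+)*)`", skill_md)
        if m.split()[0].endswith("-cli") or m.split()[0] in {"npm", "pip", "npx"}
    }
    return mentioned - declared

skill_md = "Run `baml-cli generate` after editing baml_src/."
metadata = {"requires": {"binaries": []}}  # what this skill actually declares
print(undeclared_binaries(skill_md, metadata))  # flags baml-cli as undeclared
```

      Any non-empty result is exactly the inconsistency flagged in this concern: a tool the instructions invoke but the metadata never declares.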

      Instruction Scope: note

      SKILL.md is detailed and constrained to BAML generation tasks (edit baml_src/, generate baml_client/, run tests, integrate with frameworks). It instructs the agent to run `baml-cli generate`, manage project files, and use provider clients. It does not explicitly instruct exfiltration or access unrelated system paths, but it assumes ability to run CLIs, access networked MCP servers, and use LLM provider credentials that are not declared.

      Install Mechanism: note

      There is no install spec (instruction-only), which reduces risk from arbitrary downloads. However, the instructions assume the existence of external tooling (`baml-cli`, language runtimes, package managers) without declaring them in metadata or providing safe install sources.

      Credentials: concern

      The skill references many LLM providers (openai, anthropic, gemini, bedrock, ollama, openai-generic) and MCP servers in prose, but `requires.env` is empty and no primary credential is declared. Requesting zero credentials while instructing use of cloud LLM providers and MCP queries is inconsistent: in practice the skill will need API keys and network access.
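      A simple guard against this class of inconsistency is a preflight check that refuses to enable the skill until the credentials it will implicitly need are present. The variable names below are illustrative guesses (the skill itself declares none), as is the whole check:

```python
import os

# Hedged sketch: fail fast if credentials a provider-backed skill will
# implicitly need are absent. These env var names are guesses, not
# anything this skill declares.
REQUIRED_ENV = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "MCP_ENDPOINT"]

def missing_credentials(env=os.environ) -> list[str]:
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED_ENV if not env.get(name)]

missing = missing_credentials()
if missing:
    print("refusing to enable skill; missing:", ", ".join(missing))
```

      The point of declaring such variables in metadata is that a host can run this check automatically instead of discovering the dependency mid-run.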

      Persistence & Privilege: ok

      Flags show normal defaults (`always: false`, agent invocation allowed). The skill does not request permanent presence or system-wide config changes in the metadata or SKILL.md. It instructs editing project files (baml_src/) and generating baml_client/, which is expected for a generator.

      Guidance

      This skill appears to be a detailed BAML code-generation recipe, but there are important red flags: (1) SKILL.md assumes you can run `baml-cli` and that MCP/LLM provider access exists, yet the skill metadata declares no required binaries or credentials. (2) The source is unknown and there is no homepage, so you cannot verify origin or upstream code.

      Before installing or enabling:
      - Confirm you trust the publisher, or obtain the upstream repository/homepage.
      - Ensure `baml-cli` and any language toolchains are installed from official sources; do not run arbitrary install links.
      - Provide LLM provider API keys only to parts of your system you control; do not hand credentials to an unknown remote.
      - Run this skill in a sandboxed project or disposable environment first, so that generated code and any post-generation hooks cannot affect unrelated files.
      - If you expect the skill to query MCP or provider endpoints automatically, require that it declare those endpoints and explicit env vars in metadata; consider asking the author to add required env var declarations and an install spec.

      Reasonable declarations for this skill would include env vars such as OPENAI_API_KEY, ANTHROPIC_API_KEY, and MCP_ENDPOINT, a required `baml-cli` binary, and a minimal secure run checklist.
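      The checklist above can be condensed into a small preflight script. This is a sketch under the stated assumptions only: the binary name `baml-cli`, the env var names, and the `sandbox-` directory convention are all illustrative, not requirements published by the skill.

```python
import os
import shutil

# Minimal secure-run preflight, sketched from the guidance above.
# All names (binary, env vars, sandbox convention) are illustrative.
def preflight(workdir: str) -> list[str]:
    problems = []
    # Tooling must come from official sources and be on PATH already.
    if shutil.which("baml-cli") is None:
        problems.append("baml-cli not found on PATH (install from the official BoundaryML source)")
    # Only the providers you actually use need keys.
    for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
        if not os.environ.get(key):
            problems.append(f"{key} not set (only needed if that provider is used)")
    # First runs belong in a throwaway directory, not a real project.
    if not os.path.basename(workdir).startswith("sandbox-"):
        problems.append("run in a disposable sandbox directory, not a real project")
    return problems

for p in preflight("/tmp/my-project"):
    print("-", p)
```

      An empty result does not make the skill safe; it only means the environment matches the checklist before anything from an unverified source is allowed to run.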

      Latest Release

      v2.0.0

      From Foundry: Use when generating BAML code for type-safe LLM extraction, classification, RAG, …

      More by @killerapp

      - copywriter (6 stars)
      - skill-condenser (2 stars)
      - aws-agentcore-langgraph (2 stars)
      - agentskills-io (2 stars)
      - adversarial-coach (0 stars)
      - para-pkm (0 stars)

      Published by @killerapp on ClawHub
