A multi-agent orchestration workflow for deep research: decompose a research goal into parallelizable subgoals and run each as a child process via Claude Code's non-interactive mode (`claude -p`); for internet access and collection, prefer installed skills, then MCP tools; aggregate child results with scripts and refine the report section by section, finally delivering "path to the finished report file + summary of key conclusions/recommendations". Use cases: systematic web/material…
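The fan-out/aggregate pattern described above can be sketched in shell. This is a minimal illustration, not the skill's actual implementation: the `.research/demo/` layout and the prompt contents are assumptions, and a stub `echo` command stands in for the real `claude -p` child invocation.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub child command for illustration; in real use this would be `claude -p`.
CHILD_CMD="echo"

WORKDIR=".research/demo"
mkdir -p "$WORKDIR/prompts" "$WORKDIR/results" "$WORKDIR/logs"

# Two placeholder subgoal prompts (assumed content).
printf 'Survey topic A' > "$WORKDIR/prompts/01.txt"
printf 'Survey topic B' > "$WORKDIR/prompts/02.txt"

# Fan out: one child process per subgoal, run in parallel, then wait.
for prompt in "$WORKDIR"/prompts/*.txt; do
  base="$(basename "$prompt" .txt)"
  # CHILD_CMD is intentionally unquoted so "claude -p" splits into command + flag.
  $CHILD_CMD "$(cat "$prompt")" \
    > "$WORKDIR/results/$base.md" \
    2> "$WORKDIR/logs/$base.log" &
done
wait

# Aggregate child outputs into a single draft for section-level refinement.
cat "$WORKDIR"/results/*.md > "$WORKDIR/draft.md"
```

Each child writes its result and log to its own file, so failures stay isolated and the aggregation step only sees completed outputs.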
Security Analysis
High confidence: The skill's instructions, requested capabilities, and file/network usage are coherent with a multi-agent deep-research orchestration workflow; nothing requested is disproportionate to its stated purpose.
The name and description (multi-agent deep research) match the SKILL.md: the skill spawns child Claude Code processes, uses web/MCP scraping tools, and produces on-disk report artifacts and logs. No unrelated environment variables, binaries, or install steps are requested.
The instructions explicitly direct filesystem writes (creating `.research/<name>/` and subdirectories), spawning child processes (`claude -p`), running shell scripts, and performing web fetch/search/scrape via skills or MCP tools. This matches a distributed research workflow, but it grants broad discretion to run networked fetches and many local file operations; users should expect data collection, persistent files, and spawned processes.
Instruction-only skill with no install spec or downloaded code. No archives or external installers are referenced, so there is no install-time execution risk.
No environment variables, secrets, or external credentials are requested. The skill relies on preinstalled skills/MCP plugins (firecrawl, exa) and local shell tools, which is proportionate to a web-scraping/multi-agent orchestration task.
The skill does not request `always: true` and does not attempt to change other skills' configs. It requires writing its own project directory and logs, which is appropriate for its stated purpose.
Guidance
This skill is internally consistent with a multi-agent deep-research workflow, but before installing or using it you should:

1. Expect it to create a persistent `.research/<name>/` directory with raw data, logs, and final reports; review and clean those files as needed.
2. Be aware it will spawn multiple child processes (`claude -p`) and perform many network fetches (via installed skills or MCP tools), which can consume quota, bandwidth, and compute time.
3. Confirm which skills/MCP plugins (firecrawl, exa) are available in your environment; if they are unavailable, the skill falls back to other web fetch methods.
4. Verify privacy/compliance requirements for scraping external sites and storing scraped content.
5. When prompted for confirmation, the skill should wait before executing; review generated prompts and the scheduling script (`run_children.sh`) before you allow runs.

If you want stronger safeguards, require the skill to run in an isolated workspace, limit concurrency and timeouts, or have it produce a dry-run plan that you review prior to execution.
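The concurrency and timeout safeguards mentioned above can be sketched as a wrapper pattern, assuming coreutils `timeout` and `xargs -P` are available. The `echo` child and the `plan_out/` directory are illustrative stand-ins for the real `claude -p` children and the skill's `run_children.sh`; the subgoal names are hypothetical.

```shell
#!/usr/bin/env bash
set -euo pipefail

MAX_PARALLEL=2     # cap: at most 2 child processes at once
CHILD_TIMEOUT=30   # seconds before a hung child is killed
OUTDIR="plan_out"
mkdir -p "$OUTDIR"
export CHILD_TIMEOUT OUTDIR

# xargs -P bounds concurrency; `timeout` enforces a per-child wall-clock limit.
# The `echo` here is a stub; a real wrapper would invoke `claude -p` instead.
printf '%s\n' subgoal-1 subgoal-2 subgoal-3 |
  xargs -P "$MAX_PARALLEL" -I{} bash -c '
    timeout "$CHILD_TIMEOUT" echo "ran: $1" > "$OUTDIR/$1.txt"
  ' _ {}
```

Bounding parallelism and per-child runtime keeps a runaway research fan-out from exhausting quota or hanging indefinitely, which is why reviewing the scheduling script before allowing runs is worthwhile.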
Latest Release
v0.1.0
deep-research-skill v0.1.0

- Initial release of a multi-agent workflow for in-depth research tasks, supporting automated goal decomposition and parallel execution using Claude Code's non-interactive mode.
- Enforces a step-by-step process: goal clarification, parallel subgoal scheduling, data collection/aggregation, section-based refinement, and structured report delivery as files.
- Prioritizes internet access through installed skills, then MCP (firecrawl > exa), with fallback to basic web fetch/search only if necessary.
- Implements strict logging, permission control, and mandatory user confirmation before task execution.
- Requires all outputs to be saved as files; does not post complete reports in chat.
- Designed for reproducible, systematized research use cases such as web/material analysis, competitive/industry analysis, bulk retrieval, and long-form evidence-integrated writing.
Published by @feiskyer on ClawHub