{short title}
Multi-LLM peer review: send plans, ideas, or code to GPT (via the OpenAI Codex CLI) and Gemini (via the Gemini CLI) for structured peer review with cross-examination, then cherry-pick the feedback. Produces a Decision Packet v2 with tiered output (Ship Blocker / Before Next Sprint / Backlog), dependency arrows, effort estimates, conflict flags, confidence scores, and JSON export with formal schema validation. A tie-breaker model resolves HIGH CONFIDENCE deadlocks.

Supports review, idea, redteam, debate, premortem, advocate, refactor, deploy, api, perf, diff, quick, gate, delegate, help, history, status, and result modes, as well as parallel multi-mode dispatch (--modes redteam,deploy,perf) with collision detection. Gate mode validates Claude's own output via GPT/Gemini before proceeding; delegate mode hands off implementation tasks to external models with write permissions. Background execution (--background) runs reviews asynchronously with job management (status/result), and session resumability (--resume) continues a prior review across turns.

Use this skill whenever the user wants a second opinion from other AI models, wants to brainstorm with multiple perspectives, needs adversarial analysis, wants to stress-test a plan, review a code diff, get deployment-readiness feedback, an API design review, or a performance analysis, wants to validate Claude's output or delegate coding tasks, or mentions peer review, brainstorming, or multi-LLM feedback.

Supports the --rounds N, --verbose, --quiet, --gpt-model, --gemini-model, --steelman, --iterate, --json, --modes, --effort, --background, and --resume flags, plus model aliases (spark, mini, flash, pro) for quick model selection. Falls back to the GitHub Copilot CLI if the Codex CLI is unavailable.
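The modes and flags above compose on a single invocation. A minimal usage sketch follows; the actual command name is not stated in this description, so `peer-review` below is a hypothetical placeholder, and only the modes and flags listed above are used:

```shell
# Hypothetical invocations — `peer-review` stands in for the real command name.

# Standard peer review of a plan, two rounds, verbose transcripts
peer-review review plan.md --rounds 2 --verbose

# Parallel multi-mode dispatch: adversarial, deployment, and performance passes
peer-review --modes redteam,deploy,perf src/server.py

# Async review: start in the background, then poll and collect
peer-review redteam design-doc.md --background
peer-review status
peer-review result

# Quick second opinion using model aliases, exporting machine-readable output
peer-review quick notes.md --gpt-model mini --gemini-model flash --json
```

The --json flag pairs naturally with the background workflow, since a schema-validated Decision Packet can be picked up later by `result` and post-processed by other tooling.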