Analyze code and LLM practices against Claude Code's production-grade engineering patterns. Use when the user asks to 'assess my code against Claude Code', 'how would Claude Code do this', 'what patterns does CC use for X', 'review my LLM approach', or invokes /what-would-cc-do:assess or /what-would-cc-do:claudecodefy.
Review code for bugs, security issues, performance problems, and style. Use when the user pastes code, shares a file, or asks "what's wrong with this code?".
- 📁 .github/
- 📁 agents/
- 📁 commands/
- 📄 .gitignore
- 📄 CHANGELOG.md
- 📄 LICENSE
Multi-agent orchestration framework for Claude Code. Automatically delegates tasks to cheaper, faster sub-agents (Haiku 4.5, Sonnet 4.6) while maintaining Opus-level quality through verification. Use when working on any coding task — Hydra activates automatically to route file exploration, test running, documentation, code writing, debugging, security scanning, and git operations to the optimal agent. Saves ~50% on API costs.

---

# 🐉 Hydra — Multi-Headed Speculative Execution

> *"Cut off one head, two more shall take its place."*
> Except here — every head is doing your work faster and cheaper.

## ⛔ MANDATORY PROTOCOLS — NEVER SKIP

These protocols are NON-NEGOTIABLE. Skipping them is a framework violation.

### Protocol 1: Sentinel Scan After Code Changes

When ANY agent returns output containing `⚠️ HYDRA_SENTINEL_REQUIRED`, you MUST — before doing ANYTHING else, before presenting results to the user, before running any other agents — dispatch hydra-sentinel-scan with the files and changes listed in the trigger block.

**This is blocking.** The user does NOT see the code changes until sentinel completes. If you present code changes to the user without running sentinel first, you have violated the framework's core safety guarantee.
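As an illustration only (Hydra states this protocol in prose, not code), here is a minimal sketch of the blocking behavior. The names `handle_agent_output` and `dispatch` are hypothetical stand-ins, not part of the framework:

```python
SENTINEL = "⚠️ HYDRA_SENTINEL_REQUIRED"

def handle_agent_output(output: str, dispatch) -> str:
    """Hold agent results until a sentinel scan clears them (hypothetical sketch)."""
    if SENTINEL in output:
        # Blocking step: the scan runs before anything is shown to the user
        # and before any other agent is dispatched.
        scan_report = dispatch("hydra-sentinel-scan", payload=output)
        return f"{scan_report}\n\n{output}"
    return output  # no trigger block, safe to present as-is
```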
Perform comprehensive code reviews with best practices, security checks, and constructive feedback. Use when reviewing pull requests, analyzing code quality, checking for security vulnerabilities, or providing code improvement suggestions.

---

# Code Review Analysis

## Table of Contents

- [Overview](#overview)
- [When to Use](#when-to-use)
- [Quick Start](#quick-start)
- [Reference Guides](#reference-guides)
- [Best Practices](#best-practices)

## Overview

Systematic code review process covering code quality, security, performance, maintainability, and best practices following industry standards.

## When to Use

- Reviewing pull requests and merge requests
- Analyzing code quality before merging
- Identifying security vulnerabilities
- Providing constructive feedback to developers
- Ensuring coding standards compliance
- Mentoring through code review

## Quick Start
Autonomous iterative experimentation loop for any programming task. Guides the user through defining goals, measurable metrics, and scope constraints, then runs an autonomous loop of code changes, testing, measuring, and keeping/discarding results. Inspired by Karpathy's autoresearch. USE FOR: autonomous improvement, iterative optimization, experiment loop, auto research, performance tuning, automated experimentation, hill climbing, try things automatically, optimize code, run experiments, autonomous coding loop. DO NOT USE FOR: one-shot tasks, simple bug fixes, code review, or tasks without a measurable metric.
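A minimal sketch of the keep-or-discard loop this describes, assuming a higher-is-better metric. The callables `measure`, `propose`, `apply_change`, and `revert` are hypothetical placeholders for the skill's actual steps:

```python
def experiment_loop(measure, propose, apply_change, revert, iterations=20):
    """Hill-climb: try a change, re-measure, keep improvements, discard the rest."""
    best = measure()  # baseline score before any experiment
    for _ in range(iterations):
        change = propose()    # e.g. a candidate code edit or parameter tweak
        apply_change(change)
        score = measure()     # re-run tests / benchmark under the change
        if score > best:
            best = score      # improvement: keep the change
        else:
            revert(change)    # regression or no-op: roll it back
    return best
```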
Detects code smells, anti-patterns, and debugging issues. Use when: fixing bugs, reviewing code quality, or refactoring.
- 📄 adapter.test.ts
- 📄 adapter.ts
- 📄 HEARTBEAT.rules.md
AgentBnB adapter for Claude Code — request peer agent capabilities, manage credits with budget tiers, and join the P2P sharing network directly from Claude Code sessions.
Writes Python code following FiftyOne's official conventions. Use when contributing to FiftyOne, developing plugins, or writing code that integrates with FiftyOne's codebase.
Analyze Swift code complexity metrics (cyclomatic, cognitive, LCOM4). Use when asked to check code complexity, find complex functions, review code quality, or measure class cohesion in Swift projects.
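For intuition on the first of those metrics (the skill itself targets Swift; this is only an illustrative Python sketch using the standard `ast` module): cyclomatic complexity is, in simplified form, one plus the number of branching constructs.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Simplified McCabe metric: 1 + number of branching constructs."""
    tree = ast.parse(source)
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(tree))

print(cyclomatic_complexity("if x:\n    y = 1\nelse:\n    y = 2"))  # -> 2
```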
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Bundle a Tidybot skill and its dependencies into a single executable Python script for robot submission. Use when (1) submitting a multi-dependency skill to the robot, (2) preparing code for the /code/execute API, (3) resolving deps.txt dependency chains into one file.
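A rough sketch of what resolving those chains might look like, assuming deps.txt lists one sibling skill directory per line; the layout and names here are assumptions for illustration, not the plugin's actual implementation:

```python
from pathlib import Path

def resolve_deps(skill_dir: Path, seen: set[str] | None = None) -> list[Path]:
    """Walk deps.txt chains depth-first, returning dependencies before dependents."""
    seen = seen if seen is not None else set()
    resolved: list[Path] = []
    deps_file = skill_dir / "deps.txt"
    if deps_file.exists():
        for name in deps_file.read_text().splitlines():
            name = name.strip()
            if name and name not in seen:
                seen.add(name)
                resolved += resolve_deps(skill_dir.parent / name, seen)
    resolved.append(skill_dir)  # the skill itself comes after its deps
    return resolved
```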
Use when writing or modifying OCaml code to annotate AI involvement. Triggers on any code generation, editing, or autonomous agent output in .ml/.mli files. Also use when creating or updating .opam files for packages containing AI-involved code.
Runs multi-pass automated code review with per-issue fix subagents. Triggers when preparing a branch for PR, reviewing code changes, or when thorough automated code quality review is needed.
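The orchestration pattern, sketched under assumptions: `run_review_pass` and `spawn_fix_subagent` are placeholders for the plugin's actual review pass and subagent dispatch, whose interfaces aren't documented here.

```python
def multi_pass_review(run_review_pass, spawn_fix_subagent, max_passes=3):
    """Repeat review passes until clean; one focused fix subagent per issue."""
    for _ in range(max_passes):
        issues = run_review_pass()     # one automated pass over the branch diff
        if not issues:
            return True                # clean pass: ready for PR
        for issue in issues:
            spawn_fix_subagent(issue)  # isolate each fix in its own subagent
    return False                       # still dirty after max_passes
```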