OpenHands
from GitHub
Tools & Efficiency
Guide for debugging failing example tests in the `test-examples` labeled workflow. Use this skill when investigating CI failures in the run-examples.yml workflow, when example scripts fail to run correctly, when needing to isolate specific test failures, or when analyzing workflow logs and failure patterns.
a2aproject
from GitHub
Tools & Efficiency
Update the A2A Protobuf file (a2a.proto) when the A2A protocol specification changes. Use when the user mentions updating the spec, syncing with upstream A2A, or when a new version of the A2A protocol is released.
bearcove
from GitHub
Development & Programming
Add proper Tracey spec annotations to code, find requirements, and check coverage. Use when working with projects that have Tracey configuration (.config/tracey/config.styx), when adding spec references to code, or when checking requirement coverage.
Guidelines and instructions for generating Markdown that looks excellent when read natively and renders beautifully when exported or converted to PDF.
Use when bootstrapping, updating, or reviewing AGENTS.md — teaches what makes effective agent memory, how to structure sections, signal vs noise filtering, and when to prune stale entries
AlmogBaku
from GitHub
Development & Programming
Interactively debug source code — set breakpoints, step through execution line by line, inspect live variable state, evaluate expressions against the running program, and navigate the call stack to trace root causes. Use when a program crashes, raises unexpected exceptions, produces wrong output, when you need to understand how execution reached a certain state, or when print-statement debugging isn't revealing enough.
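The step-through-and-inspect loop this entry describes can be sketched without an interactive session by installing a trace function with Python's standard `sys.settrace`, which fires on every executed line and exposes live locals. This is a minimal illustrative sketch, not the skill's actual mechanism; `buggy_sum` and the traced values are hypothetical.

```python
import sys

snapshots = []

def trace_lines(frame, event, arg):
    """On each executed line, record the line number and a copy of the locals."""
    if event == "line":
        snapshots.append((frame.f_lineno, dict(frame.f_locals)))
    return trace_lines  # keep tracing inside this frame

def buggy_sum(values):
    total = 0
    for v in values:
        total += v * 2  # suspect line: doubles each value
    return total

sys.settrace(trace_lines)
result = buggy_sum([1, 2, 3])
sys.settrace(None)

# Replay how `total` evolved to locate where the wrong value first appeared.
for lineno, local_vars in snapshots:
    if "total" in local_vars:
        print(lineno, local_vars["total"])
```

A real debugger adds breakpoints, expression evaluation, and call-stack navigation on top of this same line-event machinery.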
oaustegard
from GitHub
Development & Programming
GitHub repository access in containerized environments using REST API and credential detection. Use when git clone fails, or when accessing private repos/writing files via API.
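As a rough illustration of the REST-API fallback this entry describes, the sketch below builds a request against GitHub's repository-contents endpoint and decodes the base64-encoded body that endpoint returns. The `octocat/Hello-World` coordinates and the token value are illustrative assumptions, not part of the skill.

```python
import base64
import urllib.request

def build_contents_request(owner, repo, path, token=None):
    """Build a GET request for GitHub's repository-contents endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        # In a container, the token would come from detected credentials
        # (e.g. an environment variable) rather than a literal.
        headers["Authorization"] = f"Bearer {token}"
    return urllib.request.Request(url, headers=headers)

def decode_contents(payload):
    """The contents endpoint returns file bodies base64-encoded."""
    return base64.b64decode(payload["content"]).decode()

# Usage (actual network call omitted here):
#   with urllib.request.urlopen(build_contents_request(...)) as resp:
#       text = decode_contents(json.load(resp))
req = build_contents_request("octocat", "Hello-World", "README",
                             token="example-token")
print(req.full_url)
```

This path works where `git clone` is blocked, since it needs only outbound HTTPS and a token.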
willseltzer
from GitHub
Development & Programming
Manages context transfer between AI coding sessions. Activates when HANDOFF.md exists, when user mentions handoff/resume, or when ending significant work.
Generate a personalized AI skill based on your configured Zapier MCP tools. Scans your enabled actions and creates instructions that help your AI assistant know when and how to use each tool. Use after setting up tools, or when you want to "create my tools profile", "personalize my assistant", or "make a skill from my tools".
Guides interactive module design via Q&A before writing. Use when the user wants to design a module, class, or feature together, or when they say "/spec-design".
Mindgard
from GitHub
Tools & Efficiency
Plans and constructs multi-stage attack chains against AI IDEs. Use when combining vulnerability primitives into end-to-end exploits, assessing overall IDE security posture, or mapping how individual vulnerabilities chain together through the file-write pivot point. Each chain is classified by interaction tier to prioritize reportable findings.

---

# AI IDE Attack Chains

Individual vulnerability primitives -- prompt injection, config poisoning, filter bypass, exfil channels -- combine into multi-stage attack chains. A PI alone may be low severity; PI plus file write plus config auto-reload is critical. This skill helps construct these chains from confirmed primitives, classify them by interaction tier, and assess their real-world severity.

Run this skill after testing with pattern-specific skills has identified individual primitives. It is the final step before writing up findings.

## When to Use

- **After pattern-specific testing** has identified individual vulnerability primitives (PI works, file writes are possible, MCP config is auto-loaded, etc.).
- **When assessing overall IDE security posture** -- you need to combine individual findings into a severity assessment that accounts for chaining.
- **When constructing proof-of-concept exploits** -- you need to build an end-to-end PoC that demonstrates real impact, not just an isolated primitive.
- **When writing vulnerability reports** -- the chain model helps communicate why a seemingly low-severity PI is actually critical when combined with file writes.
- **When triaging findings by reportability** -- tier classification determines whether a vendor will accept or reject the report.

## Interaction Tiers

Each chain maps to a tier based on the user interaction required to trigger it. Test in priority order -- Tier 1 first.

| Tier | Label | User Interaction | Reportability |
|------|-------|------------------|---------------|
| **Tier 1** | Zero-Interaction | None. Clone and open. | Highest -- vendors cannot argue "us
Use when the user wants to reflect on AI communication quality and get improvement suggestions for rule files or the project itself. TRIGGER when user invokes /retrospective or asks to review the session.