Daily Featured Skills Count
05/09: 5,070 · 05/10: 5,117 · 05/11: 5,165 · 05/12: 5,205 · 05/13: 5,241 · 05/14: 5,288 · 05/15: 5,311
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

afrotools
from GitHub Tools & Productivity
  • 📄 SKILL.md

debug

Use this skill ONLY when the user has a failing integration that is specifically based on an Afro.tools spec — the user is integrating a provider whose spec exists in the Afro.tools registry (Paycard, LengoPay, Djomy, Wave, NimbaSMS, etc.) and encounters wrong status codes, auth errors, webhook failures, unexpected response fields, or runtime errors. Do NOT activate for generic debugging, non-afrotools providers, or integrations not based on an Afro.tools spec. This skill diagnoses whether the problem is a spec error, a missing gotcha, or an undocumented API change.

---

# Afro.tools — Debug skill

When this skill activates, fetch the spec for the affected provider and capability, then systematically compare it against the user's implementation to find the gap.

## Workflow

1. Identify the provider slug and capability from the error, code snippet, or user description. Ask if unclear — don't guess.
2. Fetch the spec:
   ```
   afrotools.get_spec({ provider: "<slug>", capability: "<capability>" })
   ```
3. **Check gotchas first.** Most integration failures map directly to a documented gotcha. Read every entry before looking elsewhere.
4. Cross-check the implementation against the spec:
   - **Auth** — correct field name, location (header vs body), format string
   - **Endpoint** — correct method and URL, path params in the right place
   - **Field names** — provider fields are often non-standard (e.g. `paycard-amount`, not `amount`); compare the user's payload against `input_schema`
   - **Status values** — enums are case-sensitive (e.g. LengoPay uses `SUCCESS`, not `success`); `code: 0` on Paycard means "found", not "paid"
   - **Webhook** — returning HTTP 200 immediately, not fulfilling on callback alone, HTTPS required for some providers
5. Surface the diagnosis clearly:
   - Quote the relevant spec field or gotcha
   - Show what the code does vs. what the spec requires
   - Provide a minimal corrected snippet
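Step 4's field-name cross-check can be sketched in a few lines. This is an illustrative sketch only: `diff_fields` and the schema shape are assumptions, not the real Afro.tools spec format.

```python
# Illustrative sketch: compare a user's payload keys against a provider
# spec's input_schema to surface missing and unexpected fields.

def diff_fields(payload: dict, input_schema: dict) -> dict:
    expected = set(input_schema.get("properties", {}))
    sent = set(payload)
    return {
        "missing": sorted(expected - sent),
        "unexpected": sorted(sent - expected),
    }

# e.g. a provider that expects the non-standard key "paycard-amount"
schema = {"properties": {"paycard-amount": {}, "currency": {}}}
payload = {"amount": 5000, "currency": "GNF"}
print(diff_fields(payload, schema))
# → {'missing': ['paycard-amount'], 'unexpected': ['amount']}
```

A diff like this pinpoints the non-standard field names the skill warns about before digging into auth or webhook issues.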

0 16 · Uploaded 10 days ago
KonghaYao
from GitHub Data & AI
  • 📄 SKILL.md

issue-archive

Archive closed/fixed issues, distill lessons learned, and sync the updates into CLAUDE.md and spec/global. Triggers when the user says "归档 issue", "archive issues", "清理已修复的 issue", "归档已关闭的 issue", "整理一下 issues", or "把修好的 issue 归档了" (i.e. any phrasing of "archive/clean up the fixed or closed issues"). Also applies when the user wants to clean out the spec/issues/ directory or archive resolved problems. If Fixed/Closed/Done issues have piled up in spec/issues/, proactively suggest this skill.

---

# issue-archive: Issue archiving and problem-domain distillation

Archive resolved issues from `spec/issues/` into `spec/archive-issues/`, distill **problem-domain understanding** from each issue (rather than scattered TRAPs), and update the corresponding domain file.

## Archiving criteria

Grep the `**状态**` (status) field of every issue in `spec/issues/`:

| Status pattern | Archive | Notes |
|---------|------|------|
| `Fixed` (including `Fixed + Verify`, `Fixed(待用户验证)`, i.e. pending user verification) | Yes | Fixed |
| `Closed` | Yes | Closed |
| `Done` | Yes | Completed |
| `Open`, `Open (搁置)` (shelved) | No | Still needs work |
| `Partial`, `Reopen` | No | Not fully resolved |

If the scan comes back empty (no archivable issues), report that and stop.

## Workflow

### Phase 1: Scan

1. Grep `spec/issues/` for `**状态**` lines
2. Filter archivable issues by the rules above
3. Output the list (title + status), then proceed directly to archiving

### Phase 2: Domain identification and insight distillation

Read each archivable issue file in turn and do the following:

**Step 1: Identify the domain**

From the issue's title, affected files, and problem description, identify which domain it belongs to:

- `message-pipeline` — message rendering, event handling, view models
- `agent` — ReAct loop, tool system, LLM adapters
- `tui` — TUI rendering, interaction, panels
- `mcp` — MCP connections, tool bridging
- `storage` — persistence, database
- `compact` — context compaction
- `token-tracking` — token tracking
- `langfuse` — observability
- etc. (see the existing list in `spec/global/domains/`)

**If the domain does not exist**, create a new domain file `spec/global/domains/<domain>.md` from the standard template:

```markdown
# <Domain name> domain

## Domain overview

<One-sentence summary of this domain's core responsibility>

## Core flows

(Filled in incrementally through issue archiving)

## Technical decisions

| Dimension | Choice |
|------|------|
(Filled in incrementally through issue archiving)

---

## Related features
```

Also append the domain to the domain index table in `spec/global/index.md`.

**Step 2: Extract keywords**

Extract 2-4 **search keywords** from each issue for quick indexing. Prefer:

- Technical terms: `HashMap 顺序` (HashMap ordering), `Prompt Cache`, `BaseMessage vs MessageViewModel`
- Error patterns: cache invalidation, dimension confusion, race conditions
- Concepts involved: `reasoning_content`, `parking_lot::RwLock`, `RebuildAll`

**Step 3: Distill domain-level insight**

Rather than recording scattered TRAPs, distill **domain understanding**:

- **Root cause**: what is the fundamental cause of this class of problem? (e.g. "HashMap's non-deterministic ordering makes cache prefixes unstable")
- **General pattern**: how should similar problems be approached in future? (e.g. "any serialized content reused across processes must guarantee stable ordering")
- **Architectural impact**: what does this fix tell us about the overall architecture? (e.g. "unifying on the RebuildAll path eliminated the complexity of incremental updates")
- **Technical decision**: what technology choice does this issue represent?
- **CLAUDE.md link marker**: does this issue need an inline link in CLAUDE.md? (mark `link: true` only for high-value TRAPs)

**Distillation template** (write to a temp file):

```markdown
##
```
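The archiving filter in the skill above can be sketched as a small predicate. The status strings follow its criteria table; the prefix-based parsing is an assumption about how variants like `Fixed + Verify` are written.

```python
# Sketch of the archive filter: decide from an issue's status string
# whether it is archivable. Variants such as "Fixed + Verify" and
# "Fixed(待用户验证)" count as Fixed; "Open (搁置)" stays open.
import re

ARCHIVABLE = ("Fixed", "Closed", "Done")

def is_archivable(status: str) -> bool:
    # take the base word before any space, "(" or "+" suffix
    base = re.split(r"[\s(+]", status.strip(), maxsplit=1)[0]
    return base in ARCHIVABLE

print(is_archivable("Fixed + Verify"))  # → True
print(is_archivable("Open (搁置)"))      # → False
```

Grepping the `**状态**` lines and feeding each value through a predicate like this yields the archivable list of Phase 1.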

0 15 · Uploaded 10 days ago
zpratt
from GitHub Tools & Productivity
  • 📄 SKILL.md

plan-to-graph

Converts a Lousy Agents spec or master plan into a structured Beads (bd) dependency graph of epics and tasks. Use when asked to 'convert plan to issues', 'create beads from spec', 'populate beads', 'plan to graph', or 'break down spec into tasks'.

0 10 · Uploaded 1 month ago
istofel
from GitHub Data & AI
  • 📁 docs/
  • 📁 references/
  • 📄 LICENSE
  • 📄 README.md
  • 📄 SKILL.md

istofel-project-plan

A professional guide for creating complete digital-product documentation: MVP Scope, PRD, and SPEC. Use this skill whenever the user wants to define, plan, or document a digital product, startup idea, feature, or system — even if they only say "I want to create a product", "help me plan this", "I need a PRD", "I want to write a spec", "I have an app idea", "how do I structure this", "help me think through the scope", or any variation. Also apply it when the user presents a product prompt and asks for refinement, critical analysis, or expansion. This skill produces professional technical documents with market analysis, stack decisions, data modeling, business rules, a roadmap, user flows, per-module specification, and sequence diagrams. It always generates one document at a time, requesting approval before advancing to the next: MVP Scope → PRD → SPEC → CLAUDE.md.

---

# istofel_project_plan

A professional skill for generating technical and strategic digital-product documentation. It produces four documents in a mandatory sequence:

1. **MVP Scope** — technical and strategic overview
2. **PRD** — detailed product requirements
3. **SPEC** — technical implementation specification
4. **CLAUDE.md** — session context tailored to the AI agent

**Golden rule:** Generate one document at a time. After finishing each one, ask explicitly whether the user wants to proceed to the next. Never skip steps.

---

## General Principles
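The skill's one-document-at-a-time gate can be sketched as a tiny sequencer. `next_doc` is an illustrative name; the document order comes from the skill text.

```python
# Sketch of the approval gate: documents are produced in a fixed order,
# and the next one is only generated once the previous ones are approved.

DOCS = ["MVP Scope", "PRD", "SPEC", "CLAUDE.md"]

def next_doc(approved):
    """Return the next document to generate, or None when all are done."""
    for doc in DOCS:
        if doc not in approved:
            return doc
    return None

print(next_doc([]))              # → MVP Scope
print(next_doc(["MVP Scope"]))  # → PRD
```

The agent would ask for explicit approval after each document and only then call for the next one, which is exactly the "never skip steps" rule.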

0 6 · Uploaded 1 month ago
ngvoicu
from GitHub Development & Coding
  • 📁 .claude-plugin/
  • 📁 .cursor-plugin/
  • 📁 agents/
  • 📄 .gitignore
  • 📄 AGENTS.md
  • 📄 CLAUDE.md

specmint-core

Persistent spec management for AI coding workflows. Use this skill when the user explicitly mentions specs, forging, or structured planning: says "forge", "forge a spec", "write a spec for X", "create a spec", "plan X as a spec", "resume", "what was I working on", "spec list/status/pause/switch/activate", "implement the spec", "implement phase N", "implement all phases", "generate openapi", or exits plan mode (offer to save as a spec). Also trigger when a `.specs/` directory exists at session start. Do NOT trigger on general feature requests, coding tasks, or questions that don't mention specs or forging — those are normal coding tasks, not spec management.

---

# Spec Mint Core

Turn ephemeral plans into structured, persistent specs built through deep research and iterative interviews. Specs have phases, tasks, acceptance criteria, a registry, resume context, a decision log, and a deviations log. They live in `.specs/` at the project root and work with any AI coding tool that can read markdown. Whether `.specs/` is committed is repository policy. Respect `.gitignore` and the user's preference for tracked vs local-only spec state.

## Critical Invariants

1. **Single-file policy**: Keep this workflow in one `SKILL.md` file.
2. **Canonical paths**:
   - Registry: `.specs/registry.md`
   - Per-spec files: `.specs/<id>/SPEC.md`, `.specs/<id>/research-*.md`, `.specs/<id>/interview-*.md`
3. **Authority rule**: `SPEC.md` frontmatter is authoritative. Registry is a denormalized index for quick lookup.
4. **Active-spec rule**: Target exactly one active spec at a time.
5. **Parser policy**: Use best-effort parsing with clear warnings and repair guidance instead of hard failure on malformed rows.
6. **Progress tracking is sacred**: After completing any task, immediately update SPEC.md (checkbox, `← current` marker, phase marker) AND registry.md (progress count, date). Then re-read both files to verify the edits landed correctly. Never move to the next task without updating both files.
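Invariant 5's best-effort parsing could look like the sketch below. The three-column row layout (`id | status | progress`) is an assumption for illustration; the real `registry.md` columns may differ.

```python
# Sketch of best-effort registry parsing: malformed rows produce a warning
# with repair guidance instead of aborting the whole read.

def parse_registry_row(line: str):
    """Return (row_dict, None) on success or (None, warning) on failure."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    if len(cells) < 3:
        return None, f"malformed row (expected 3+ cells, fix the pipes): {line!r}"
    spec_id, status, progress = cells[:3]
    return {"id": spec_id, "status": status, "progress": progress}, None

rows = ["| auth-flow | active | 3/7 |", "| broken row"]
parsed, warnings = [], []
for line in rows:
    row, warn = parse_registry_row(line)
    parsed.append(row) if row else warnings.append(warn)
print(len(parsed), len(warnings))  # → 1 1
```

The good row still lands in the index while the broken one surfaces as a repairable warning, which is the point of the invariant.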

0 6 · Uploaded 1 month ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference

More Agent Skills specs and Anthropic docs: https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
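The required fields in the tree above can be checked mechanically. A minimal sketch in Python, assuming a simple `key: value` frontmatter layout; the parser is illustrative, not a spec-compliant YAML reader.

```python
# Minimal check for the two required SKILL.md frontmatter fields
# (name, description), per the requirements tree above.
import re

def parse_frontmatter(text: str) -> dict:
    """Extract the frontmatter block at the top of a SKILL.md as a dict."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        raise ValueError("SKILL.md must start with a frontmatter block")
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

sample = """---
name: pdf-report
description: Generate PDF reports. Trigger on "pdf report", "export report".
license: MIT
---
# pdf-report
"""

fields = parse_frontmatter(sample)
missing = [f for f in ("name", "description") if f not in fields]
print(missing)  # → []
```

Running a check like this before uploading catches the "invalid/incomplete SKILL.md fields" failure mode listed in the FAQ.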

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.


Quick Start:

Import/download skills (.zip/.skill), then place locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
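Placing a skill under the paths above can be scripted. A sketch, assuming the default directory layout; `install_skill` and `TOOL_DIRS` are illustrative names, not part of any tool's real API, and paths may vary by local setup.

```python
# Hypothetical install helper: copy an extracted skill folder into a
# tool's skills directory (Claude Code or Codex CLI defaults assumed).
import shutil
from pathlib import Path

TOOL_DIRS = {
    "claude-code": Path.home() / ".claude" / "skills",
    "codex-cli": Path.home() / ".codex" / "skills",
}

def install_skill(src: Path, tool: str) -> Path:
    dest = TOOL_DIRS[tool] / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return dest

# demo: build a throwaway skill folder, then install it for Claude Code
src = Path("demo-skill")
src.mkdir(exist_ok=True)
(src / "SKILL.md").write_text("---\nname: demo-skill\ndescription: demo\n---\n")
dest = install_skill(src, "claude-code")
print(dest.name)  # → demo-skill
```

Because the destination is just `<skills dir>/<skill name>/SKILL.md`, the same copy works for any tool that reads that layout.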

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: the agent loads brief metadata first, loads the full docs only when needed, and then executes by following their guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
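The load-on-demand behavior can be sketched in a few lines of Python. The `Skill` class and `match` helper are illustrative, not a real agent API.

```python
# Sketch of progressive disclosure: keep only brief metadata in memory,
# and load a skill's full body lazily once it is actually matched.

class Skill:
    def __init__(self, name, description, loader):
        self.name = name
        self.description = description  # brief metadata, always loaded
        self._loader = loader           # fetches the full SKILL.md body
        self._body = None

    @property
    def body(self):
        if self._body is None:          # full doc loaded only when needed
            self._body = self._loader()
        return self._body

def match(skills, query):
    """Match against descriptions only; no full bodies loaded yet."""
    return [s for s in skills if query.lower() in s.description.lower()]

skills = [
    Skill("pdf-report", "Generate PDF reports", lambda: "# pdf-report\nfull steps..."),
    Skill("issue-archive", "Archive closed issues", lambda: "# issue-archive\n..."),
]
hit = match(skills, "pdf")[0]
print(hit.name)                  # → pdf-report
print(hit.body.splitlines()[0])  # body is loaded only at this point
```

Unmatched skills never load their bodies, which is how an agent stays lightweight while holding many skills.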

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file size must be within 10 MB.

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does a skill not work after import?

Most common reasons:

  • Wrong folder path or nested one level too deep
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
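The first two failure causes can be detected with a quick local check. A hypothetical sketch; `check_skill_dir` is an illustrative name, not a SkillWink or Claude tool.

```python
# Hypothetical sanity check: catch an archive extracted one level too
# deep, or a SKILL.md missing its frontmatter block.
import tempfile
from pathlib import Path

def check_skill_dir(path: Path) -> list[str]:
    problems = []
    skill_md = path / "SKILL.md"
    if not skill_md.exists():
        # common mistake: the archive unpacked with an extra wrapper folder
        nested = list(path.glob("*/SKILL.md"))
        if nested:
            problems.append(f"SKILL.md nested one level too deep: {nested[0]}")
        else:
            problems.append("SKILL.md not found")
    elif not skill_md.read_text().startswith("---"):
        problems.append("SKILL.md missing frontmatter block")
    return problems

# demo: a skill archive unpacked with an extra wrapper directory
root = Path(tempfile.mkdtemp())
(root / "inner").mkdir()
(root / "inner" / "SKILL.md").write_text("---\nname: x\ndescription: y\n---\n")
print(len(check_skill_dir(root)))  # → 1
```

Missing dependencies and stale tool caches still need manual checks, but path and format problems are cheap to rule out first.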

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up