3,758 Open Skills

Free to get · One-click to use

✅ Keyword search & category filters
✅ Discussions & community interaction
✅ Version updates & multi-metric ranking
✅ Open SKILL.md standard

Import Skills

anthropics
from GitHub Development & Coding

instrument-data-to-allotrope

Convert laboratory instrument output files (PDF, CSV, Excel, TXT) to Allotrope Simple Model (ASM) JSON format or flattened 2D CSV. Use this skill when scientists need to standardize instrument data for LIMS systems, data lakes, or downstream analysis. Supports auto-detection of instrument types. Outputs include full ASM JSON, flattened CSV for easy import, and exportable Python code for data engineers. Common triggers include converting instrument files, standardizing lab data, preparing data for upload to LIMS/ELN systems, or generating parser code for production pipelines.

0 · 9.9K · uploaded 14 days ago
Dicklesworthstone
from GitHub Tools & Productivity

agent-mail

MCP Agent Mail - Mail-like coordination layer for multi-agent workflows. Identities, inbox/outbox, file reservations, contact policies, threaded messaging, pre-commit guard, Human Overseer, static exports, disaster recovery. Git+SQLite backed. Python/FastMCP.

0 · 1.8K · uploaded 14 days ago
heurist-network
from GitHub Development & Coding

heurist-skill-marketplace-admin

Admin operations for the Heurist Skill Marketplace. Use this skill when the user asks to add, ingest, approve, reject, remove, update, or list skills in the Heurist marketplace. Also triggers for checking upstream changes, updating external API dependencies, updating metrics (stars/downloads), re-ingesting skills, or any marketplace database management task. Working directory is heurist-agent-framework. Always use uv run python to run scripts.

0 · 787 · uploaded 4 days ago
kalil0321
from GitHub Development & Coding

reverse-engineering-api

Reverse engineer web APIs by capturing browser traffic (HAR files) and generating production-ready Python API clients. Use when the user wants to create an API client for a website, automate web interactions, or understand undocumented APIs. Activate on tasks mentioning "reverse engineer", "API client", "HAR file", "capture traffic", or "automate website".

0 · 495 · uploaded 16 days ago
agentscope-ai
from GitHub Data & AI

auto-arena

Automatically evaluate and compare multiple AI models or agents without pre-existing test data. Generates test queries from a task description, collects responses from all target endpoints, auto-generates evaluation rubrics, runs pairwise comparisons via a judge model, and produces win-rate rankings with reports and charts. Supports checkpoint resume, incremental endpoint addition, and judge-model hot-swap. Use when the user asks to compare, benchmark, or rank multiple models or agents on a custom task, or to run an arena-style evaluation.

---

# Auto Arena Skill

End-to-end automated model comparison using the OpenJudge `AutoArenaPipeline`:

1. **Generate queries** — LLM creates diverse test queries from the task description
2. **Collect responses** — query all target endpoints concurrently
3. **Generate rubrics** — LLM produces evaluation criteria from the task + sample queries
4. **Pairwise evaluation** — judge model compares every model pair (with position-bias swap)
5. **Analyze & rank** — compute win rates, win matrix, and rankings
6. **Report & charts** — Markdown report + win-rate bar chart + optional matrix heatmap

## Prerequisites

```bash
# Install OpenJudge
pip install py-openjudge

# Extra dependency for auto_arena (chart generation)
pip install matplotlib
```

## Gather from user before running

| Info | Required? | Notes |
|------|-----------|-------|
| Task description | Yes | What the models/agents should do (set in config YAML) |
| Target endpoints | Yes | At least 2 OpenAI-compatible endpoints to compare |
| Judge endpoint | Yes | Strong model for pairwise evaluation (e.g. `gpt-4`, `qwen-max`) |
| API keys | Yes | Env vars: `OPENAI_API_KEY`, `DASHSCOPE_API_KEY`, etc. |
| Number of queries | No | Default: `20` |
| Seed queries | No | Example queries to guide generation style |
| System prompts | No | Per-endpoint system prompts |
| Output directory | No | Default: `./evaluation_results` |
| Report language | No | `"zh"` (default) or `"en"` |

## Quick start

### CLI

```bash
# Run evaluation
python -m cookbooks.auto_arena --config config.yaml --save

# Use pre-generated queries
python -m cookbooks.auto_arena --config config.yaml \
    --queries_file queries.json --save

# Start fresh, ignore checkpoint
python -m cookbooks.auto_arena --config config.yaml --fresh --save

# Re-run only pairwise evaluation with a new judge model
# (keeps queries, responses, and rubrics)
python -m cookbooks.auto_arena --config config.yaml --rerun-judge --save
```

### Python API

```python
import asyncio
from cookbooks.auto_arena.auto_arena_pipeline import AutoArenaPipeline

async def main():
    pipeline = AutoArenaPipeline.from_config("config.yaml")
    result = await pipeline.evaluate()
    print(f"Best model: {result.best_pipeline}")
    for rank, (model, win_rate) in enumerate(result.rankings, 1):
        print(f"{rank}. {model}: {win_rate:.1%}")

asyncio.run(main())
```

### Minimal Python API (no config file)

```python
import asyncio
from cookbooks.auto_arena.auto_arena_pipeline import AutoArenaPipeline
from cookbooks.auto_arena.schema import OpenAIEndpoint

async def main():
    pipeline = AutoArenaPipeline(
        task_description="Customer service chatbot for e-commerce",
        target_endpoints={
            "gpt4": OpenAIEndpoint(
                base_url="https://api.openai.com/v1",
                api_key="sk-...",
                model="gpt-4",
            ),
            "qwen": OpenAIEndpoint(
                base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
                api_key="sk-...",
                model="qwen-max",
            ),
        },
        judge_endpoint=OpenAIEndpoint(
            base_url="https://api.openai.com/v1",
            api_key="sk-...",
            model="gpt-4",
        ),
        num_queries=20,
    )
    result = await pipeline.evaluate()
    print(f"Best: {result.best_pipeline}")

asyncio.run(main())
```

## CLI options

| Flag | Default | Description |
|------|---------|-------------|
| `--config` | — | Path to YAML configuration file (required) |
| `--output_dir` | config value | Override output directory |
| `--queries_file` | — | Path to pre-generated queries JSON (skip generation) |
| `--save` | `False` | Save results to file |
| `--fresh` | `False` | Start fresh, ignore checkpoint |
| `--rerun-judge` | `False` | Re-run pairwise evaluation only (keep queries/responses/rubrics) |

## Minimal config file

```yaml
```

0 · 467 · uploaded 12 days ago
defendend
from GitHub Development & Coding

ast-index

Use this skill when the user asks to "find class", "search symbols", "find usages", "find implementations", "search codebase", "find file", "class hierarchy", "find callers", "module dependencies", "unused dependencies", "project map", "project conventions", "project structure", "what framework", "what architecture", "find Perl subs", "Perl exports", "find Python class", "Go structs", "Go interfaces", "find React components", "find TypeScript interfaces", "find Rust structs", "find Ruby classes", "find C# controllers", "find Dart classes", "find Flutter widgets", "find mixins", "find Scala traits", "find case classes", "find objects", "find PHP classes", "find Laravel models", or "find PHP traits", or when fast code search is needed in Android/Kotlin/Java, iOS/Swift/ObjC, Dart/Flutter, TypeScript/JavaScript, Rust, Ruby, C#, Scala, PHP, Perl, Python, Go, C++, or Protocol Buffers projects. Also triggered by mentions of the "ast-index" CLI tool.

0 · 283 · uploaded 14 days ago
Runchuan-BU
from GitHub Development & Coding

add-figure

Adds a Python-only figure-reference skill to a BioClaw installation. Use when the user wants publication-quality plotting guidance available inside the agent container without adding source-code capabilities. Creates `container/skills/figure/` with a Python-only `SKILL.md` and a root-level `seaborn_reference.md`.

0 · 243 · uploaded 13 days ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference

More Agent Skills specs (Anthropic docs): https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
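The frontmatter requirements above can be checked mechanically. Below is a minimal sketch of such a validator; it is not any official tool, and it hand-parses only flat `key: value` frontmatter lines (real skills may need a full YAML parser).

```python
# Minimal sketch: check that a SKILL.md starts with YAML frontmatter
# containing the required `name` and `description` fields.
# Hand-rolled parsing for flat `key: value` pairs only.

def validate_skill_md(text):
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing frontmatter delimiter '---' at top"]
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":          # closing delimiter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    else:
        return ["frontmatter never closed with '---'"]
    errors = []
    for required in ("name", "description"):
        if not fields.get(required):
            errors.append(f"missing required field: {required}")
    return errors

sample = """---
name: skill-sample
description: Convert instrument data; triggers on 'convert', 'standardize'
---
# Skill body
"""
print(validate_skill_md(sample))  # → []
```

Running a check like this before upload catches the most common rejection cause: a missing or empty `name`/`description` field.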

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.



Quick Start:

Import or download skills (.zip/.skill), then place them locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
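The install step above amounts to unpacking the archive into the tool's skills directory. A rough sketch with the standard library (the directory path and archive name are illustrative, not an official installer):

```python
# Hypothetical sketch: unpack a downloaded skill archive into a
# skills directory such as ~/.claude/skills/.
import zipfile
from pathlib import Path

def install_skill(archive, skills_dir=Path.home() / ".claude" / "skills"):
    # Destination folder named after the archive, e.g. skill-sample/
    dest = Path(skills_dir) / Path(archive).stem
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    # A valid skill must expose SKILL.md at its top level
    return (dest / "SKILL.md").exists()
```

If `install_skill` returns `False`, the archive likely nests the skill one level too deep (see FAQ 8).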

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: load brief metadata first, load the full docs only when needed, then execute according to the skill's guidance.

This keeps agents lightweight while preserving enough context for complex tasks.

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file size must be within 10MB.

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs plus assets, so they can be reused anywhere the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review the code before installing, and take responsibility for your own security decisions.

8. Why doesn't it work after import?

Most common reasons:

  • Wrong folder path or nested one level too deep
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
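The first two failure causes in the list above are easy to detect automatically. Here is a rough diagnostic sketch (not an official tool) that flags a skill nested one level too deep or a folder missing its SKILL.md:

```python
# Rough diagnostic for common import failures: for each folder in the
# skills directory, report a missing SKILL.md and detect the
# "nested one level too deep" case.
from pathlib import Path

def diagnose(skills_dir):
    problems = []
    for entry in Path(skills_dir).iterdir():
        if not entry.is_dir():
            continue
        if (entry / "SKILL.md").exists():
            continue  # looks fine
        nested = list(entry.glob("*/SKILL.md"))
        if nested:
            problems.append(f"{entry.name}: nested one level too deep "
                            f"(found {nested[0].relative_to(entry)})")
        else:
            problems.append(f"{entry.name}: no SKILL.md found")
    return problems
```

Missing dependencies and stale tool caches still have to be checked by hand.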

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up