Daily Featured Skills Count
05/10: 5,117 · 05/11: 5,165 · 05/12: 5,205 · 05/13: 5,241 · 05/14: 5,288 · 05/15: 5,316 · 05/16: 5,330
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

pskoett
from GitHub · Data & AI
  • 📄 SKILL.md

context-surfing

Monitors context window health throughout a session and rides peak context quality for maximum output fidelity. Activates automatically after plan-interview and intent-framed-agent. Stays active through execution and hands off cleanly to simplify-and-harden and self-improvement when the wave completes naturally or exits via handoff. Use this skill whenever a multi-step agent task is underway and session continuity or context drift is a concern. Especially important for long-running tasks, complex refactors, or any work where degraded context would silently corrupt the output. Trigger even if the user doesn't say "context surfing" — if an agent task is running across multiple steps with intent and a plan already established, this skill is live.

---

# Context Surfing

## Install

```bash
npx skills add pskoett/pskoett-ai-skills/skills/context-surfing
```

The agent rides the wave of peak context. When the wave crests, it commits. When it detects drift, it pulls out cleanly — saving state, handing off, and letting the next session catch the next wave. No wipeouts. No zombie sessions. Only intentional, high-fidelity execution.

---

## Mental Model

0 114 · Uploaded 1 month ago
KimYx0207
from GitHub · Tools & Productivity
  • 📄 SKILL.md

Meta_Kim

Save current project task progress to local task state file for cross-session continuity. Use when: user says "save progress", "记一下", "保存进度", session is interrupted, or user wants to resume work later. Writes to .claude/project-task-state.json so next session can load it via SessionStart hook.

0 106 · Uploaded 1 month ago
tombelieber
from GitHub · Development & Coding
  • 📄 SKILL.md

claude-view

Monitor and query Claude Code sessions — list sessions, search conversations, check costs, view AI fluency score, see live running agents. Use when the user asks about their Claude Code usage, costs, session history, or running agents.

---

## You operate the `claude-view` HTTP API

**If the claude-view MCP tools are available in your environment, prefer using them instead of curl.** This skill is the fallback for environments without MCP support.

claude-view runs a local server on port 47892 (or `$CLAUDE_VIEW_PORT`). All endpoints return JSON (camelCase field names). Base URL: `http://localhost:47892`

## Resolving the server

1. Check if running: `curl -sf http://localhost:47892/api/health`
2. If not running, tell user: `npx claude-view`

## Endpoints

| Intent | Method | Endpoint | Key Params |
|--------|--------|----------|------------|
| List sessions | GET | `/api/sessions` | `?limit`, `?q`, `?filter`, `?sort`, `?offset`, `?branches`, `?models`, `?time_after`, `?time_before` |
| Get session detail | GET | `/api/sessions/{id}` | — |
| Search sessions | GET | `/api/search` | `?q` (required), `?limit`, `?offset`, `?scope` |
| Dashboard stats | GET | `/api/stats/dashboard` | `?project`, `?branch`, `?from`, `?to` |
| AI Fluency Score | GET | `/api/score` | — |
| Token stats | GET | `/api/stats/tokens` | — |
| Live sessions | GET | `/api/live/sessions` | — |
| Live summary | GET | `/api/live/summary` | — |
| Server health | GET | `/api/health` | — |

## Reading responses

All responses are JSON with camelCase field names. Key shapes:

**Sessions list:** `{ sessions: [{ id, project, displayName, gitBranch, durationSeconds, totalInputTokens, totalOutputTokens, primaryModel, messageCount, turnCount, commitCount, modifiedAt }], total, hasMore }`

**Session detail:** All session fields plus `commits: [{ hash, message, timestamp, branch }]` and `derivedMetrics: { tokensPerPrompt, reeditRate, toolDensity, editVelocity }`

**Search:** `{ query, totalSessions, totalMatches, elapsedMs,

0 42 · Uploaded 1 month ago
floomhq
from GitHub · Tools & Productivity
  • 📄 SKILL.md

agents

Scan running Claude sessions to see what other agents are working on. Use when asked "what are the other agents doing", "check other sessions", "what's running", "scan agents", "who's working on what", or before picking up new work to avoid overlap.

---

# Agents: Scan Running Claude Sessions

Runs `scan.sh` to inspect all tmux sessions running Claude and report what each is doing.

## Usage

```bash
bash ~/.claude/skills/agents/scripts/scan.sh            # all sessions
bash ~/.claude/skills/agents/scripts/scan.sh floom      # only floom/* sessions
bash ~/.claude/skills/agents/scripts/scan.sh openpaper  # only openpaper/* sessions
```

## What It Shows

0 26 · Uploaded 10 days ago
bearlike
from GitHub · Tools & Productivity
  • 📄 SKILL.md

truss-cli-smoketest

End-to-end smoke testing of the Truss CLI via tmux. Use this skill when asked to test the CLI, verify CLI behavior after changes, smoke-test the agent loop, check for regressions, or validate MCP/plugin/session features work correctly through the terminal interface. Also use when debugging CLI crashes, MCP connection issues, or session lifecycle problems that need live reproduction.

0 32 · Uploaded 29 days ago
swyxio
from GitHub · Tools & Productivity
  • 📄 README.md
  • 📄 SKILL.md
  • 📄 stats.sh

claude-session-introspect

Inspect Claude Code session JSONL files at ~/.claude/projects/ to extract real conversation telemetry: token counts (input/output/cache reads/cache writes), assistant turn counts, human prompt counts, tool-use counts, compaction boundaries, and the contents of compaction summaries. Use this skill when the user asks "how many tokens did this session use", "how many prompts have I sent", "show me the stats for this conversation", "what got compacted", "where are the compaction boundaries", "introspect the session", "do brain surgery on the JSONL", or wants any data point that lives inside the on-disk session log rather than the live context window. Inspired by Tal Raviv's "I wanted to know how compaction works" article.

0 28 · Uploaded 1 month ago
spences10
from GitHub · Development & Coding
  • 📁 .claude-plugin/
  • 📄 SKILL.md

analytics

Query Claude Code session analytics from ccrecall database. Use when user asks about token usage, session history, or wants to analyze their Claude Code usage patterns.

0 27 · Uploaded 1 month ago
punt-labs
from GitHub · Research & Analysis
  • 📁 references/
  • 📄 SKILL.md

prfaq

This skill should be used when the user asks to "write a PR/FAQ", "prfaq", "working backwards", "product discovery", "evaluate a product idea", "press release FAQ", "test product value", "revise prfaq", "update prfaq", "add research to prfaq", "add FAQs", "run a meeting", "review meeting", "hive meeting", "autonomous meeting", "consensus meeting", "stress test my prfaq", "go/no-go decision", "should we build this", "vote on prfaq", or wants to use the Amazon Working Backwards process to evaluate whether a product or feature is worth building.

---

# Working Backwards: PR/FAQ

## Purpose

Guide the user through the Amazon Working Backwards process to produce a professional PR/FAQ document. The output is a LaTeX file that compiles to a polished PDF suitable for executive review and product decision-making. The process forces clarity about customer value, surfaces risks early, and creates a shared artifact for go/no-go decisions.

## When to Use

- Evaluating whether a new product or feature is worth building
- Forcing specificity on a vague product idea
- Preparing a product pitch for leadership review
- Testing whether a team truly understands the customer problem
- Structuring a go/no-go decision with an auditable artifact

## Revise Mode

Before starting the full workflow, check if a `prfaq.tex` file already exists in the project root (or the path the user specifies). If it does, enter **revise mode** instead of starting from scratch.

1. **Read the existing document.** Parse the `.tex` file to understand what's already written — the press release, FAQs, and risk assessment.
2. **Ask what to revise.** Present the user with the sections found and ask what they want to improve. Common revision goals:
   - **Refine the product** — sharpen the problem statement, solution, or differentiation based on new thinking
   - **Incorporate research** — thread new primary data (customer interviews, market analysis, survey results) into existing sections. Run Phase 0 research discovery to find

0 21 · Uploaded 1 month ago
gentritbiba
from GitHub · Development & Coding
  • 📄 SKILL.md

cogpit-sessions

Create and manage Claude Code sessions via the Cogpit (agent-window) HTTP API running on localhost:19384. Use when an agent needs to spawn a new Claude Code session in a project directory, send messages to existing sessions, stop sessions, list projects, or query active sessions. Triggers on requests like "start a session", "run claude in project X", "send a message to session Y", "list cogpit projects", or any programmatic interaction with the agent-window server.

0 18 · Uploaded 1 month ago
tobyilee
from GitHub · Development & Coding
  • 📄 SKILL.md

catchup

Use this skill at the START of a new Claude Code session to pick up where the previous session left off. Finds the most recent handoff document under `.claude/reports/handoff/`, reads it, then VERIFIES its claims against the live codebase (Read each referenced file, run `git status` / `git log --oneline`) before reporting findings and waiting for user instruction. Treats the handoff as a hypothesis, not fact — previous session may have been confused. Trigger on "catchup", "catch up", "이어서 작업", "이어서 해줘", "이전 세션 이어", "resume from handoff", "handoff 읽어", "handoff 참고해서 시작", "어제 작업 이어서", "지난번 이어서", "새 세션 준비", "prepare to continue", "pick up where we left off", "캐치업". Also trigger when the user starts a new session by referencing prior work ("어제 한 작업 이어서 하자", "계속 하자", "resume the auth refactor") and an `.claude/reports/handoff/` directory exists in the project. Do NOT trigger when the user wants to start a brand-new unrelated task, or when they want to read a specific named document (use Read directly).

0 18 · Uploaded 1 month ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference

For more on the Agent Skills spec, see the Anthropic docs: https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
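
As a minimal sketch of what these requirements add up to (the skill name, description, and metadata values below are placeholders, not a published skill), a SKILL.md with the two required frontmatter fields plus a few optional ones can be created like this:

```bash
# Create a hypothetical skill with the required frontmatter fields (name, description)
# and a few optional ones; all values below are placeholders.
mkdir -p ~/.claude/skills/example-notes
cat > ~/.claude/skills/example-notes/SKILL.md <<'EOF'
---
name: example-notes
description: Summarize raw meeting notes into action items. Trigger on "summarize notes", "action items", "meeting summary".
license: MIT
metadata:
  author: your-name
  version: 0.1.0
---

## Overview
Turns raw meeting notes into a short, prioritized list of action items.

## When to use
When the user pastes notes and asks for a summary or action items.

## Step-by-step
1. Read the notes.
2. Extract decisions and owners.
3. Output a checklist.
EOF
```

Only `name` and `description` are strictly required; the body sections follow the recommended progressive-disclosure layout above.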

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.

Keyword Search · Version Updates · Multi-Metric Ranking · Open Standard · Discussion

Quick Start:

Import/download skills (.zip/.skill), then place locally:

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
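
For example, assuming a downloaded archive named `my-skill.zip` that contains a single `my-skill/` folder (both names are placeholders), installation is just unpacking into the skills directory and, optionally, copying the same folder for Codex CLI:

```bash
# Unpack a downloaded skill archive into Claude Code's skills directory
# (assumes the archive contains one top-level my-skill/ folder).
mkdir -p ~/.claude/skills
unzip my-skill.zip -d ~/.claude/skills/

# Reuse the same skill folder with Codex CLI.
mkdir -p ~/.codex/skills
cp -r ~/.claude/skills/my-skill ~/.codex/skills/

# SKILL.md should sit at the top level of the skill folder.
ls ~/.claude/skills/my-skill/SKILL.md
```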

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: the agent loads brief metadata first, loads the full docs only when needed, and then executes according to the skill's guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
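
As a rough illustration of what that means on disk (not how any particular tool is implemented; paths assume Claude Code's default skills directory), a discovery pass only needs the frontmatter block of each SKILL.md, while the full body is read once a skill is actually triggered:

```bash
# Discovery pass: print only the YAML frontmatter (between the first two '---' lines)
# of every installed skill -- this is the "brief metadata".
for f in ~/.claude/skills/*/SKILL.md; do
  echo "== $f =="
  awk '/^---$/{n++; next} n==1' "$f"
done

# Execution pass: load the full document for the one skill that matched
# (example-notes is a placeholder skill name).
cat ~/.claude/skills/example-notes/SKILL.md
```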

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, the file should be within 10 MB.
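
A quick packaging sketch for the recommended archive route (the folder name `my-skill` is a placeholder):

```bash
# Zip a skill folder for upload and confirm it stays within the 10 MB limit.
zip -r my-skill.zip my-skill/
du -h my-skill.zip
```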

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs plus assets, so they can be reused by any tool that supports the format.

Example: retrieval, writing, and automation scripts combined into one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why does it not work after import?

Most common reasons:

  • Wrong folder path, or the skill nested one level too deep (see the check after this list)
  • Invalid/incomplete SKILL.md fields or format
  • Dependencies missing (Python/Node/CLI)
  • Tool has not reloaded skills yet
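
A quick sanity check for the first two causes (paths assume Claude Code's default skills directory; `my-skill` is a placeholder name):

```bash
# SKILL.md must sit directly inside the skill folder, not one level deeper.
ls ~/.claude/skills/my-skill/SKILL.md            # should exist
ls -d ~/.claude/skills/my-skill/my-skill 2>/dev/null && echo "nested one level too deep"

# The YAML frontmatter must start on the very first line, with name and description present.
head -n 5 ~/.claude/skills/my-skill/SKILL.md     # first line should be '---'
```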

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare the differences (speed / stability / focus)
  • Low-quality skills: cleaned up regularly