Daily Featured Skills Count (05/03–05/09): 4,870 → 4,909 → 4,940 → 4,970 → 5,005 → 5,034 → 5,044
♾️ Free & Open Source 🛡️ Secure & Worry-Free

Import Skills

DefangLabs
from GitHub · Ops & Delivery
  • 📄 SKILL.md

defang

Deploy the current project to the cloud using Defang. Guides through CLI setup, authentication, compose file creation, stack selection, config management, and deployment.

0 · 151 · Uploaded 22 days ago
SCStelz
from GitHub · Testing & Security
  • 📄 SKILL.md
  • 📄 svg-widgets.yaml

ai-agent-posture

Use this skill when asked to audit, assess, or report on AI agent security posture across Copilot Studio and Microsoft 365 Copilot agents. Triggers on keywords like "AI agent posture", "agent security audit", "Copilot Studio agents", "agent inventory", "agent authentication", "unauthenticated agents", "agent tools", "MCP tools on agents", "agent knowledge sources", "XPIA risk", "agent sprawl", "AI agent risk", "agent governance", or when investigating AI agent configurations, access policies, tool permissions, or credential exposure. This skill queries the AIAgentsInfo table in Advanced Hunting to produce a comprehensive security posture assessment covering agent inventory, authentication gaps, access control misconfigurations, MCP tool proliferation, knowledge source exposure, XPIA email exfiltration risk, hard-coded credential detection, HTTP request risks, creator governance, and agent sprawl analysis. Supports inline chat and markdown file output.

0 · 159 · Uploaded 1 month ago
autopkg
from GitHub · Development & Coding
  • 📁 references/
  • 📄 SKILL.md

autopkg-recipes

Create, review, or fix AutoPkg recipes for the dataJAR-recipes repository. USE FOR: creating download recipes, munki recipes, pkg recipes; fixing recipe formatting; validating recipe structure; applying dataJAR naming conventions; architecture-specific downloads; URL scraping patterns; code signature verification; MunkiInstallsItemsCreator usage; minimum OS version detection. DO NOT USE FOR: running AutoPkg itself; managing Munki repositories; macOS packaging unrelated to AutoPkg.

0 · 125 · Uploaded 11 days ago
DAAF-Contribution-Community
from GitHub · Tools & Productivity
  • 📁 references/
  • 📄 SKILL.md

agent-authoring

Guide for creating new DAAF agent definition files with full ecosystem integration. Use when adding a new specialized agent, revising agent structure, or verifying agent integration completeness across documentation.

---

# Agent Authoring

Create new DAAF agents that conform to the canonical template and are fully wired into the system documentation for discoverability and usability.

## What This Skill Does

- Guides creation of agent `.md` files conforming to `agent_reference/AGENT_TEMPLATE.md` (12 mandatory sections)
- Ensures cross-agent consistency (standardized confidence model, Learning Signal, STOP format, etc.)
- Provides a **complete integration checklist** covering every file that references agents across the codebase to ensure it is discoverable and its invocation patterns are well-understood by the system agents
- Complements `skill-authoring`: this skill handles the behavioral protocol file; if the new agent also needs a companion skill, invoke `skill-authoring` separately

## Decision Tree: What Do You Need?

```
What are you doing?
│
├─ Creating a brand-new agent
│  └─ Follow "New Agent Workflow" below
│
├─ Revising an existing agent to match the template
│  └─ Read: references/template-walkthrough.md
│     + agent_reference/AGENT_TEMPLATE.md (the canonical blueprint)
│
├─ Checking if an agent is fully integrated into the ecosystem
│  └─ Read: references/integration-checklist.md
│
├─ Understanding what must be identical across all agents
│  └─ Read: references/cross-agent-standards.md
│
└─ Understanding the current agent landscape before adding to it
   └─ Read: agents/README.md (Agent Index + "Commonly Confused Pairs")
```

## New Agent Workflow

### Phase 1: Design (before writing)

Before beginning, you MUST have a clear, coherent, and compelling answer to each of the following questions:

1. **Define the role** in one sentence — what does this agent do and why does it exist?
2. **Identify pipeline stage(s)** — which stage(s) does it operate in, or i

0 · 157 · Uploaded 1 month ago
forcedotcom
from GitHub · Data & AI
  • 📁 assets/
  • 📁 references/
  • 📄 SKILL.md

agentforce-development

Use this skill when working with Salesforce Agent Script — the scripting language for authoring Agentforce agents using the Atlas Reasoning Engine. Triggers include: creating, modifying, or comprehending Agent Script agents; working with AiAuthoringBundle files or .agent files; designing topic graphs or flow control; producing or updating an Agent Spec; validating Agent Script or diagnosing compilation errors; previewing agents or debugging behavioral issues; deploying, publishing, activating, or deactivating agents; deleting or renaming agents; authoring AiEvaluationDefinition test specs or running agent tests. This skill teaches Agent Script from scratch — AI models have zero prior training data on this language. Do NOT use for Apex development, Flow building, Prompt Template authoring, Experience Cloud configuration, or general Salesforce CLI tasks unrelated to Agent Script.

0 · 119 · Uploaded 1 month ago
agentevals-dev
from GitHub · Tools & Productivity
  • 📁 evals/
  • 📄 SKILL.md

eval

Evaluate and score agent behavior against a golden reference. Use this skill whenever the user wants to run evaluation, check pass/fail status, understand metric scores, compare sessions for regressions, validate agent behavior, or score a trace from a file or a live session. Trigger on phrases like "eval this trace", "check my agent output", "did my agent do the right thing", "compare runs", "did my agent regress", "score session X", "evaluate against golden", "run evals". Works with both local trace files and live streaming sessions.

---

Evaluate agent behavior and explain what the scores mean.

## Determine the input type

First, figure out what to evaluate:

- **Trace file(s)** — user mentions a `.json` or `.jsonl` file path → use `evaluate_traces`
- **Sessions vs golden** — user has multiple live sessions and wants regression testing → use `evaluate_sessions`
- **Single live session** — user wants to score one session against a golden eval set → guide them to use `evaluate_sessions` with one session as golden

## Evaluating trace files

1. Get the file path(s). Check the extension: `.jsonl` → `trace_format: "otlp-json"` | `.json` → `"jaeger-json"` (default)
2. Ask if they have a golden eval set JSON. For `tool_trajectory_avg_score` (the default metric), an eval set is required — it provides the expected tool call sequence to compare against. If they don't have one yet, explain this and suggest starting with `hallucinations_v1`, or ask if they want to create a golden set from a reference run first.
3. Call `evaluate_traces` with the file(s), format, and eval set.
4. Present results as a score table (see Score interpretation below) and explain failures.

## Evaluating sessions (regression testing)

This workflow requires the server to be running with the `--dev` flag (which enables WebSocket and session streaming). Plain `agentevals serve` will not have sessions. If you get a connection error from any tool below, tell the user:

```bash
uv run agentevals serve --dev
```

0 · 116 · Uploaded 1 month ago

Skill File Structure Sample (Reference)

skill-sample/
├─ SKILL.md              ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json  ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample        ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py     ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/           🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference
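As a rough sketch, this layout could be scaffolded from the shell (the `skill-sample` names mirror the tree above; substitute your own skill name and files):

```bash
# scaffold the sample layout shown above (names are illustrative)
mkdir -p skill-sample/{scripts,assets,references}
touch skill-sample/SKILL.md skill-sample/manifest.sample.json skill-sample/LICENSE.sample
touch skill-sample/scripts/example-run.py
touch skill-sample/assets/example-formatting-guide.md skill-sample/assets/example-template.tex
touch skill-sample/references/example-ref-structure.md \
      skill-sample/references/example-ref-analysis.md \
      skill-sample/references/example-ref-visuals.md
```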

More Agent Skills specs (Anthropic docs): https://agentskills.io/home

SKILL.md Requirements

├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name                 : unique skill name, follow naming convention
│  └─ ⭐ description          : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license              : license identifier
│  ├─ ✅ compatibility        : runtime constraints when needed
│  ├─ ✅ metadata             : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools        : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
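A minimal SKILL.md sketch that satisfies the required fields above. The skill name (`csv-report`), description, and metadata values are made up for illustration:

```bash
# write a minimal SKILL.md for a hypothetical csv-report skill
mkdir -p ~/.claude/skills/csv-report
cat > ~/.claude/skills/csv-report/SKILL.md <<'EOF'
---
name: csv-report
description: Summarize CSV files into a short report. Triggers on "csv report", "summarize csv", "csv stats".
license: MIT
metadata:
  author: example-author
  version: 0.1.0
---
# CSV Report

## When to use
When the user asks to summarize or profile a CSV file.

## Step-by-step
1. Read the CSV.
2. Compute row/column counts and basic stats.
3. Return a short markdown report.
EOF
```

The `description` field carries the trigger keywords used for matching, so it does most of the discovery work.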

Why SkillWink?

Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.

We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.

Keyword Search · Version Updates · Multi-Metric Ranking · Open Standard · Discussion

Quick Start:

Import/download skills (.zip/.skill), then place them locally (see the sketch after this list):

~/.claude/skills/ (Claude Code)

~/.codex/skills/ (Codex CLI)

One SKILL.md can be reused across tools.
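For instance (paths per the Quick Start; the archive name `my-skill.zip` is hypothetical):

```bash
# place a downloaded skill archive where Claude Code looks for skills
mkdir -p ~/.claude/skills
unzip my-skill.zip -d ~/.claude/skills/my-skill
ls ~/.claude/skills/my-skill/SKILL.md      # entry doc must sit at the top level
# reuse the same folder for Codex CLI
mkdir -p ~/.codex/skills
cp -r ~/.claude/skills/my-skill ~/.codex/skills/
```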

FAQ

Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.

1. What are Agent Skills?

A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.

Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.

2. How do Skills work?

Skills use progressive disclosure: the agent loads brief metadata first, loads the full docs only when needed, and then executes following the skill's guidance.

This keeps agents lightweight while preserving enough context for complex tasks.
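One way to picture this mechanism, reusing the hypothetical `csv-report` skill from above (an illustrative sketch, not the actual loader):

```bash
# pass 1: metadata only - print just the YAML frontmatter between the two --- lines
awk '/^---$/{n++; next} n==1' ~/.claude/skills/csv-report/SKILL.md
# pass 2: the full body is read only when the skill actually triggers
cat ~/.claude/skills/csv-report/SKILL.md
```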

3. How can I quickly find the right skill?

Use these three together:

  • Semantic search: describe your goal in natural language.
  • Multi-filtering: category/tag/author/language/license.
  • Sort by downloads/likes/comments/updated to find higher-quality skills.

4. Which import methods are supported?

  • Upload archive: .zip / .skill (recommended)
  • Upload skills folder
  • Import from GitHub repository

Note: for all import methods, files must be within 10MB.

5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

  • Claude Code: ~/.claude/skills/
  • Codex CLI: ~/.codex/skills/

One SKILL.md can usually be reused across tools.

6. Can one skill be shared across tools?

Yes. Most skills are standardized docs + assets, so they can be reused wherever the format is supported.

Example: retrieval + writing + automation scripts as one workflow.

7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.

8. Why doesn't a skill work after import?

Most common reasons (a quick diagnostic sketch follows the list):

  • Wrong folder path, or skill nested one level too deep
  • Invalid or incomplete SKILL.md fields or format
  • Missing dependencies (Python/Node/CLI)
  • The tool has not reloaded skills yet
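A quick check for the first two causes, assuming the default Claude Code path and a hypothetical `my-skill` folder:

```bash
ls ~/.claude/skills/my-skill/SKILL.md             # correct: SKILL.md at the top level
ls ~/.claude/skills/my-skill/my-skill/SKILL.md    # wrong: nested one level too deep
head -n 10 ~/.claude/skills/my-skill/SKILL.md     # frontmatter must include name + description
```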

9. Does SkillWink include duplicates/low-quality skills?

We try to avoid that. Use ranking + comments to surface better skills:

  • Duplicate skills: compare differences (speed/stability/focus)
  • Low-quality skills: regularly cleaned up