Distill durable user or team rubric preferences from local agent execution histories, repository discussions, and existing agent/rubrics content. Use when asked to mine prior agent work, compare candidate preferences with current rubrics, and open a PR that adds or updates rubric YAML on the agent/rubrics branch.
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Improve or create AGENTS.md files that serve as shared instructions for AI coding agents (Claude, Gemini, etc.). Use this skill whenever the user mentions AGENTS.md, CLAUDE.md, GEMINI.md, agent instructions, agent configuration, or wants to improve how AI agents behave in their project. Also trigger when the user says "improve my instructions", "agents file", "update my rules", or asks about best practices for configuring coding agents. If in doubt and the task involves AI agent instruction files, use this skill.

---

You help users write and improve AGENTS.md files — shared instruction files that AI coding agents (Claude, Gemini, etc.) read at the start of every session. The goal is a single file that works across platforms via symlinks.

Read `references/BEST_PRACTICES.md` before analyzing or writing any AGENTS.md content. It contains the patterns extracted from official documentation that inform every decision below.

## Core Workflow

### 1. Assess the Current State

Before proposing changes, read the target file and evaluate it against these dimensions:

- **Length** — is it under 200 lines? Ideally under 100? Agents have a budget of ~150 instructions they can reliably follow, and the system prompt already uses ~50.
- **Structure** — does it use markdown headers to group related instructions? Or is it a wall of text / a single giant list?
- **Specificity** — are instructions concrete and verifiable ("use 2-space indentation") or vague ("write clean code")?
- **Contradictions** — do any rules conflict with each other?
- **Scope mixing** — does it blend personal preferences with project-level standards?
- **Signal-to-noise** — does every instruction pass the conciseness test ("would removing this cause the agent to make mistakes")? Are there instructions the agent would follow anyway without being told?
- **Completeness** — is it missing key sections (Priorities, Never/Hard Rules, Common Commands, Architecture, Workflow, Tooling)? Score: 1 = all present, 2 = 1-2 missing
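The key sections named in the Completeness check can be illustrated with a minimal skeleton. The section names come from the checklist above; the sample rules inside each section are hypothetical placeholders, not prescribed content:

```markdown
# AGENTS.md

## Priorities
1. Correctness over speed: never skip failing tests to ship faster.

## Never (Hard Rules)
- Never commit directly to `main`.
- Never edit generated files under `dist/`.

## Common Commands
- `npm test`: run the test suite
- `npm run lint`: lint and format check

## Architecture
- `src/api/`: HTTP handlers
- `src/core/`: business logic (no I/O)

## Workflow
- Branch from `main`, open a PR, and request review before merging.

## Tooling
- Use 2-space indentation (enforced by Prettier).
```

Note that every rule in the sketch is concrete and verifiable, which is exactly what the Specificity check asks for.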
- 📁 examples/
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", "disallowedTools", "block tools", "agent denylist", "maxTurns", "agent memory", "mcpServers in agent", "agent hooks", "background agent", "resume agent", "agent teams", "permission rules", "permission mode", "delegate mode", "agent team", "team lead", "teammate", "multi-agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Curious Agent integration toolkit. Use when an agent needs to check its knowledge confidence on a topic, trigger active exploration, sync discoveries, or share new findings with the user. Covers: (1) confidence checking before answering, (2) injecting topics for exploration, (3) syncing exploration results to memory, (4) proactive sharing of new discoveries. Trigger scenarios: user asks a question, agent wants to check its knowledge boundary, agent wants to explore a topic proactively.
Comprehensive guide for using Codex CLI (OpenAI) and Claude Code CLI (Anthropic), two AI-powered coding agents. Use when orchestrating CLI commands, automating tasks, configuring agents, or troubleshooting issues.
Give AI agents persistent identity, memory, and personality using Soul Protocol. Use when building agents that need to remember across sessions, maintain consistent behavior, evolve over time, or migrate between platforms. CLI-first — use shell commands for speed, MCP tools as fallback for agents without shell access.
- 📄 example-corpus.md
- 📄 prompt-template.md
- 📄 README.md
Run a critic agent conditioned on the user's own past edits to catch voice and tone problems in a draft. Use when user says 'run voice-critic', 'voice check', 'does this sound like me', or wants a pass on a draft that has already been fact-checked and style-checked but still reads wrong. Do NOT use for fact-checking (claims verification is a separate agent) or for style-guide enforcement (also a separate agent). This skill specifically catches the gap between 'correct' and 'sounds like the author'.
Use when entering a repository that relies on ChatMem, resuming interrupted local agent work, continuing a project in another agent, searching prior repository context, creating durable memory candidates, or preparing a handoff.
This skill should be used when the user asks to "create agent", "build agent", "new agent", "add agent", "에이전트 만들어줘" ("create an agent"), "에이전트 추가해줘" ("add an agent"), "서브에이전트 추가" ("add a subagent"), or when the orchestrator detects no suitable agent exists for a task.
Use this skill when the user asks to view, export, or open agent output artifacts — "show me the artifacts", "export the results", "open the report", "convert the task output to HTML", "what did the agent produce", or wants to work with structured outputs from an A2A agent task.
- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
Build production-ready AI agents with Microsoft Foundry and Agent Framework. Use when creating AI agents, selecting LLM models, implementing agent orchestration, adding tracing/observability, or evaluating agent quality. Covers agent architecture, model selection, multi-agent workflows, and production deployment.
- 📁 skill-creator/
- 📁 skills/
- 📄 .gitignore
- 📄 AgentSkills_refactor_spec_v1.md
- 📄 README.md
AI Agent skill library project specification, guiding agents to read and use skills correctly.