mcpc
Use mcpc CLI to interact with MCP servers - call tools, read resources, get prompts. Use this when working with Model Context Protocol servers, calling MCP tools, or accessing MCP resources programmatically.
Use when working with sunpeak, or when the user asks to "build an MCP App", "build a ChatGPT App", "add a UI to an MCP tool", "create an interactive resource for Claude or ChatGPT", "build a React UI for an MCP server", or needs guidance on MCP App resources, tool-to-UI data flow, simulation files, host context, platform-specific ChatGPT/Claude features, or end-to-end testing of MCP App UIs.
Install, set up, and manage MCP servers using the mcpmu CLI. Use when the user wants to install mcpmu, register it as an MCP server in Claude Code, add/remove/list MCP servers, manage namespaces, set tool permissions, or expose servers via serve mode.
Xiaohongshu (RedNote) content tools. Use cases:
- Search RedNote notes and fetch details
- Fetch the homepage recommendation feed
- Fetch post details (body text, images, engagement data, comments)
- Post comments / reply to comments
- Fetch user profiles and their note lists
- Like and favorite posts
- Publish image-and-text or video notes
- Track trending topics and produce analysis reports
- Export posts as long images

Example trigger phrases:
- "Search RedNote for XX"
- "Track the XX trend on RedNote"
- "Analyze the RedNote discussion about XX"
- "RedNote XX topic report"
- "Generate a RedNote public-opinion report on XX"

---

# RedNote MCP Skill

A shell-script toolkit wrapping [xiaohongshu-mcp](https://github.com/xpzouying/xiaohongshu-mcp).

## Prerequisites

```bash
cd scripts/
./install-check.sh   # check dependencies (xiaohongshu-mcp, jq, python3)
./start-mcp.sh       # start the MCP service (default port 18060)
./status.sh          # confirm login status
```

If not logged in, scan a QR code: `mcp-call.sh get_login_qrcode` fetches the code; scan it with the RedNote app.

The service endpoint can be overridden with the `MCP_URL` environment variable (default `http://localhost:18060/mcp`).

## Core Data Flow

**Important:** most operations require a `feed_id` + `xsec_token` pair. Both values come from search / recommendation / user-profile results and **must not be constructed manually**.

```
search_feeds / list_feeds / user_profile
        │
        ▼
Returns a feeds array; each feed contains:
├── id        → used as feed_id
├── xsecToken → used as xsec_token
└── noteCard  → title, author, cover, engagement data
        │
        ▼
get_feed_detail(feed_id, xsec_token)
        │
        ▼
Returns the full note: body text, image list, comment list
Comments include comment_id and user_id (for replying to comments)
```

## Script Reference

| Script | Purpose | Arguments |
|--------|---------|-----------|
| `search.sh <keyword>` | Search notes | keyword |
| `recommend.sh` | Homepage recommendations | none |
| `post-detail.sh <feed_id> <xsec_token>` | Post detail + comments | from search results |
| `comment.sh <feed_id> <xsec_token> <content>` | Post a comment | from search results |
| `user-profile.sh <user_id> <xsec_token>` | User profile + notes | from search results |
| `track-topic.sh <topic> [options]` | Trending-topic analysis report | `--limit N` `--output file` `--feishu` |
| `export-long-image.sh` | Export posts as a long image | `--posts-file json -o output.jpg` |
| `mcp-call.sh <tool> [json_args]` | Generic MCP call | see tool table below |
| `start-mcp.sh` | Start the service | `--headless=false` `--port=N` |
| `stop-mcp.sh` | Stop the service | none |
| `status.sh` | Check login status | none |
| `install-check.sh` | Check dependencies | none |

## MCP Tool Parameters

### search_feeds — Search notes

```json
{"keyword": "咖啡", "filters": {"sort_by": "最新", "note_type": "图文", "publish_time": "一周内"}}
```

Optional `filters` fields (values are the literal Chinese UI labels):
- `sort_by`: 综合 (overall) | 最新 (newest) | 最多点赞 (most liked) | 最多评论 (most commented) | 最多收藏 (most favorited)
- `note_type`: 不限 (any) | 视频 (video) | 图文 (image/text)
- `publish_time`: 不限 (any) | 一天内 (past day) | 一周内 (past week) | 半年内 (past six months)
- `search_scope`: 不限 (any) | 已看过 (seen) | 未看过 (unseen) | 已关注 (following)
- `location`: 不限 (any) | 同城 (same city) | 附近 (nearby)

### get_feed_detail — Post detail
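The `feed_id` + `xsec_token` pairing described in the data flow can be sketched as follows. This is a minimal illustration: the sample payload is invented, and only the field names (`id`, `xsecToken`, `noteCard`) follow the diagram; in practice the values come from a real `search_feeds` / `list_feeds` / `user_profile` response.

```python
import json

# Hypothetical search_feeds-style response, shaped like the data-flow diagram.
sample_response = json.loads("""
{
  "feeds": [
    {
      "id": "65f0c2a1000000001203abcd",
      "xsecToken": "ABcdEF123",
      "noteCard": {"title": "手冲咖啡入门", "likes": 1024}
    }
  ]
}
""")

def pick_feed(response):
    """Extract the (feed_id, xsec_token) pair from the first feed.

    Both values must be taken together from a real search/recommendation/
    profile result; they cannot be constructed manually.
    """
    feed = response["feeds"][0]
    return feed["id"], feed["xsecToken"]

feed_id, xsec_token = pick_feed(sample_response)

# The pair is then passed on, e.g. as the JSON args for
# `mcp-call.sh get_feed_detail`:
detail_args = {"feed_id": feed_id, "xsec_token": xsec_token}
print(json.dumps(detail_args))
```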
MCP Tools development guide. Use when creating or modifying MCP server tools.
This skill should be used when the user asks to "add an app to my MCP server", "add UI to my MCP server", "add a view to my MCP tool", "enrich MCP tools with UI", "add interactive UI to existing server", "add MCP Apps to my server", or needs to add interactive UI capabilities to an existing MCP server that already has tools. Provides guidance for analyzing existing tools and adding MCP Apps UI resources.
Integrate MiniMem as an MCP server for cross-agent memory operations. Use this when an agent needs to call MiniMem memory APIs through MCP (search memories, write memory, memory-cited chat, graph search/neighbors), wire MCP client config, or run a reusable local MCP bridge process.
Generate a complete MCP server project in Swift using the FastMCP library. Use when asked to create a Swift MCP server, build an MCP tool server, or scaffold a Model Context Protocol project in Swift.
Add, remove, or list MCP server configurations when the user wants to manage MCP integrations.

---

# Manage MCP Servers

Use this skill when the user wants to add, remove, or view MCP server configurations.

MCP servers are configured as `[[mcp]]` blocks in a dedicated `mcp.toml` file, located next to the active `zerda.toml`. Do not hardcode a container working directory. If `mcp.toml` does not exist yet, create it. MCP connections are established at process startup, so config changes require a reload to take effect.

Resolve the active `zerda.toml` in this order:
1. Explicit `--config` path
2. `$ZERDA_CONFIG`
3. `~/.zerda/zerda.toml`
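The resolution order above can be sketched as a small helper. This is an illustration only, assuming the documented precedence; the function names are ours, not part of zerda.

```python
import os
from pathlib import Path

def resolve_zerda_config(explicit=None):
    """Resolve the active zerda.toml using the documented precedence:
    1. explicit --config path, 2. $ZERDA_CONFIG, 3. ~/.zerda/zerda.toml."""
    if explicit:
        return Path(explicit)
    env = os.environ.get("ZERDA_CONFIG")
    if env:
        return Path(env)
    return Path.home() / ".zerda" / "zerda.toml"

def mcp_toml_path(explicit=None):
    """mcp.toml lives next to the resolved zerda.toml."""
    return resolve_zerda_config(explicit).parent / "mcp.toml"
```

Because connections are established at startup, editing the file returned by `mcp_toml_path()` has no effect until the process is reloaded.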
Activate when the user needs to generate, refine, or evaluate academic figures, diagrams, or statistical plots. Uses PaperBanana (paperbanana MCP server) to transform text descriptions or data files into publication-quality illustrations. Supports methodology diagrams, statistical plots, and comparative evaluation.

---

> **Orchestration Log**: When this skill is activated, append a log entry to `outputs/orchestration_log.md`:
> ```
> ### Skill Activation: Figure Engine
> **Timestamp:** [current date/time]
> **Actor:** AI Agent (figure-engine)
> **Input:** [brief description of the figure request]
> **Output:** [brief description of what was produced, e.g., "Generated methodology diagram (fig03_methodology.png), 3 iterations"]
> ```

# Figure Engine

## Core Principle

Academic papers need professional figures. This skill eliminates manual design work by using PaperBanana to generate publication-quality diagrams and plots from text descriptions or data files. Claude should produce ACTUAL FIGURES, not describe what to draw.

> **Based on:** Zhu, D., Meng, R., Song, Y., Wei, X., Li, S., Pfister, T., & Yoon, J. (2026).
> *PaperBanana: Automating Academic Illustration for AI Scientists.* [arXiv:2601.23265](https://arxiv.org/abs/2601.23265).
> The pipeline uses a 5-agent, 2-phase architecture: Retriever → Planner → Stylist (Phase 1: planning),
> then Visualizer ↔ Critic iterative refinement (Phase 2: generation) with VLM-as-Judge evaluation.
> MCP integration via [`llmsresearch/paperbanana`](https://github.com/llmsresearch/paperbanana).
> Official research repo: [`dwzhu-pku/PaperBanana`](https://github.com/dwzhu-pku/PaperBanana).

## Prerequisites

The **paperbanana MCP server** must be running. It requires:

- `pip install paperbanana[mcp,google]`
- A `GOOGLE_API_KEY` in a `.env` file in the project root (get a free key at https://aistudio.google.com/apikey)

The MCP server is configured in `plugin.json` and starts automatically. It loads the Google API key from `.env` in the project root.
```
skill-sample/
├─ SKILL.md                        ⭐ Required: skill entry doc (purpose / usage / examples / deps)
├─ manifest.sample.json            ⭐ Recommended: machine-readable metadata (index / validation / autofill)
├─ LICENSE.sample                  ⭐ Recommended: license & scope (open source / restriction / commercial)
├─ scripts/
│  └─ example-run.py               ✅ Runnable example script for quick verification
├─ assets/
│  ├─ example-formatting-guide.md  🧩 Output conventions: layout / structure / style
│  └─ example-template.tex         🧩 Templates: quickly generate standardized output
└─ references/                     🧩 Knowledge base: methods / guides / best practices
   ├─ example-ref-structure.md     🧩 Structure reference
   ├─ example-ref-analysis.md      🧩 Analysis reference
   └─ example-ref-visuals.md       🧩 Visual reference
```
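A quick sanity check against the layout above might look like this. It is a sketch under the convention shown (only `SKILL.md` is strictly required; the helper name is ours):

```python
from pathlib import Path

REQUIRED = ["SKILL.md"]
RECOMMENDED = ["manifest.sample.json", "LICENSE.sample"]

def check_skill_layout(root):
    """Return (missing_required, missing_recommended) for a skill folder."""
    root = Path(root)
    missing_req = [f for f in REQUIRED if not (root / f).is_file()]
    missing_rec = [f for f in RECOMMENDED if not (root / f).is_file()]
    return missing_req, missing_rec
```

A folder with only `SKILL.md` passes the required check but reports both recommended files as missing.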
More Agent Skills specs and docs: https://agentskills.io/home
```
├─ ⭐ Required: YAML Frontmatter (must be at top)
│  ├─ ⭐ name        : unique skill name, follow naming convention
│  └─ ⭐ description : include trigger keywords for matching
│
├─ ✅ Optional: Frontmatter extension fields
│  ├─ ✅ license       : license identifier
│  ├─ ✅ compatibility : runtime constraints when needed
│  ├─ ✅ metadata      : key-value fields (author/version/source_url...)
│  └─ 🧩 allowed-tools : tool whitelist (experimental)
│
└─ ✅ Recommended: Markdown body (progressive disclosure)
   ├─ ✅ Overview / Purpose
   ├─ ✅ When to use
   ├─ ✅ Step-by-step
   ├─ ✅ Inputs / Outputs
   ├─ ✅ Examples
   ├─ 🧩 Files & References
   ├─ 🧩 Edge cases
   ├─ 🧩 Troubleshooting
   └─ 🧩 Safety notes
```
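Reading the two required frontmatter fields can be sketched like this. It is a minimal illustration that hand-parses simple `key: value` lines to avoid a YAML dependency; the sample skill text is invented, and real frontmatter should go through a proper YAML parser.

```python
def parse_frontmatter(text):
    """Extract simple key: value pairs from a leading '---' frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}                      # frontmatter must be at the top
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":      # closing delimiter
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

# Hypothetical SKILL.md content for illustration:
skill_md = """---
name: pdf-report
description: Generate PDF reports. Trigger words: report, PDF export.
---

# pdf-report
"""
meta = parse_frontmatter(skill_md)
```

Note that `description` carries the trigger keywords, which is what matching runs against before the body is ever loaded.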
Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.
We provide AI semantic + keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.
Quick Start:
Import/download skills (.zip/.skill), then place locally:
~/.claude/skills/ (Claude Code)
~/.codex/skills/ (Codex CLI)
One SKILL.md can be reused across tools.
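The install step above can be sketched as follows. This is an illustration only: the archive name and target directory are examples, and as noted elsewhere on this page you should review a skill's contents before installing it.

```python
import zipfile
from pathlib import Path

def install_skill(archive_path, skills_dir):
    """Unpack a downloaded .zip/.skill archive into the skills directory."""
    skills_dir = Path(skills_dir).expanduser()
    skill_name = Path(archive_path).stem       # "pdf-report.zip" -> "pdf-report"
    target = skills_dir / skill_name
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(target)                  # review archive contents first
    return target

# Usage (paths are examples):
# install_skill("pdf-report.zip", "~/.claude/skills")
# install_skill("pdf-report.skill", "~/.codex/skills")
```

Because both Claude Code and Codex CLI read the same layout, the same archive can be unpacked under either directory.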
Everything you need to know: what skills are, how they work, how to find/import them, and how to contribute.
A skill is a reusable capability package, usually including SKILL.md (purpose/IO/how-to) and optional scripts/templates/examples.
Think of it as a plugin playbook + resource bundle for AI assistants/toolchains.
Skills use progressive disclosure: load brief metadata first, load full docs only when needed, then execute by guidance.
This keeps agents lightweight while preserving enough context for complex tasks.
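Progressive disclosure as described above can be sketched as a two-stage loader. This is a minimal illustration; the class and method names are ours, not from any particular agent runtime.

```python
class SkillEntry:
    """Two-stage loading: cheap metadata up front, full doc only on demand."""

    def __init__(self, name, description, doc_loader):
        self.name = name                  # stage 1: always in context
        self.description = description    # stage 1: used for matching
        self._doc_loader = doc_loader     # stage 2: deferred until needed
        self._doc = None

    def full_doc(self):
        if self._doc is None:             # load the SKILL.md body lazily
            self._doc = self._doc_loader()
        return self._doc

# Track how many times the (hypothetical) body is actually read:
loads = []
def load_doc():
    loads.append(1)
    return "# Full SKILL.md body with step-by-step guidance"

entry = SkillEntry("pdf-report", "Generate PDF reports", load_doc)
```

Matching against `entry.description` costs nothing extra; the body is read at most once, and only for the skill that actually fires.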
Use these three together:
Note: file size for all methods should be within 10MB.
Typical paths (may vary by local setup):
One SKILL.md can usually be reused across tools.
Yes. Most skills are standardized docs + assets, so they can be reused wherever the SKILL.md format is supported.
Example: retrieval + writing + automation scripts as one workflow.
Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review code before installing and own your security decisions.
Most common reasons:
We try to avoid that. Use ranking + comments to surface better skills: