Extracts reusable knowledge (debugging discoveries, architectural decisions, conventions) from work sessions and saves it as structured memory files in .claude/memories/. Also use when the user asks to "run a retrospective", "extract learnings", or "save what we learned" from the current session.
Create or update CLAUDE.md and AGENTS.md files following official best practices. Use when asked to create, update, audit, or improve project configuration files for AI agents, or when users mention "CLAUDE.md", "AGENTS.md", "agent config", or "agent instructions".
Cancel the active Loop in Claude Code
Thin orchestrator bootstrap prompt. Read Multorum's shipped orchestrator methodology before acting.
- 📄 extract_dialog.py
- 📄 SKILL.md
Use when analyzing conversation patterns: extracts dialog from Claude Code or Codex CLI history, classifies each user message across six academic dimensions (Bloom's cognitive level, Graesser question depth, Paul & Elder reasoning probe, Walton presupposition quality, Long & Sato discourse function, Graesser generation mechanism), and outputs tagged dialog reports.
- 📁 instructions/
- 📁 references/
- 📄 SKILL.md
Cinematic end-to-end prompt generation. Given a story theme from the user, generates a complete markdown file in one step, including: character design prompts, scene prompts, a full screenplay, a storyboard table, and a video-generation prompt for every shot. Supports Seedance 2.0 / Jimeng (即梦) / Midjourney / SD. Trigger words: storyboard, prompt, shot, script, screenplay, video generation, cinematic style, Seedance, generate screenplay.
Autonomously optimize any Claude Code skill by running it repeatedly, scoring outputs against binary evals, mutating the prompt, and keeping improvements. Based on Karpathy's autoresearch methodology. Use when the user says "optimize this skill", "improve this skill", "run autoresearch on", "make this skill better", "self-improve skill", "benchmark skill", "eval my skill", or "run evals on". Outputs: an improved SKILL.md, a results log, and a changelog of every mutation tried.
- 📄 inspect.sh
- 📄 README.md
- 📄 README.zh-TW.md
Show installed Claude Code skills, plugins, hooks, MCP servers, and commands in a browser dashboard
Canonical reference for Claude Code plugin artifact schemas, hook events, frontmatter fields, and naming conventions. Used to inject domain knowledge into Codex audit prompts. Run /codex-toolkit:refresh-knowledge to update from latest docs.
- 📁 agents/
- 📁 examples/
- 📁 orchestration/
- 📄 CATALOG.md
- 📄 README.md
- 📄 SKILL.md
OpenClaw harness: Plan→Work→Review agent orchestration, model routing, and channel bridging. Based on an analysis of the Claude Code harness ecosystem. Supports GLM/GPT/Claude models, including GLM-5.1. Detects Korean and automatically routes to GLM. Use sessions_spawn to assign a different model to each agent. Real-time channel notifications via the bridge.
- 📁 docs/
- 📁 examples/
- 📁 prompts/
- 📄 .gitignore
- 📄 LICENSE
- 📄 package-lock.json
Turn an idea, file, folder, or URL into a polished live HTML page. Use when the user wants a webpage, interactive teaching site, visual report, dashboard, atlas, browsable export, or shareable HTML artifact from a prompt or source.
Build multi-step LLM reasoning chains in n8n using Groq, OpenAI, or Claude for structured data extraction, categorization, scoring, and analysis. Use this skill whenever the user wants to chain multiple LLM calls together in an n8n workflow — phrases like "extract entities then categorize", "multi-step LLM prompt", "chain_llm", "LLM pipeline", "classify and score", "entity extraction then enrichment". Also use when processing call transcripts, customer messages, or any unstructured text through multiple analysis passes in n8n. Prefer this pattern over single-shot prompts whenever the output requires both extraction AND reasoning: single-shot prompting tends to hallucinate categories, while a chain lets each step verify the previous one.