Autonomous Obsidian vault management using the PARA + LLM Wiki pattern. Three operations: ingest (inbox to wiki + PARA routing), query (cross-vault synthesis), lint (health check). Scheduled via CronCreate 4x/day. Filesystem-only -- no Obsidian app dependency. Activate when the user mentions 'ingest', 'vault', 'wiki page', 'obsidian brain', 'vault lint', 'inbox', 'wiki index', 'knowledge base query', or wants to process, organize, query, or audit their Obsidian vault.
Creates entity model documents with Mermaid.js ER diagrams and attribute tables defining entities, relationships, data types, and validation rules. Use when the user asks to "create an entity model", "design a data model", "draw an ERD", "define database schema", "model entities", or mentions entity-relationship diagram, ER diagram, database design, or data modeling.

---

# Entity Model

## Instructions

Create or update the entity model at `docs/entity_model.md` based on `docs/requirements.md`. The document contains an ER diagram and attribute tables.

## DO NOT

- Add attributes/columns to the Mermaid diagram
- Write prose descriptions like "Key attributes: name, email..."
- Create a "Relationships" table
- Skip the attribute tables

## Document Structure

````markdown
# Entity Model

## Entity Relationship Diagram

```mermaid
erDiagram
    ROOM_TYPE ||--o{ ROOM : "categorizes"
    GUEST ||--o{ RESERVATION : "makes"
```

### ENTITY_NAME

One sentence describing the entity.

| Attribute | Description | Data Type | Length/Precision | Validation Rules      |
|-----------|-------------|-----------|------------------|-----------------------|
| id        | ...         | Long      | 19               | Primary Key, Sequence |
| ...       | ...         | ...       | ...              | ...                   |
````

## Required Format for Each Entity
Breaks a design brief into an ordered checklist of independently buildable tasks using vertical slices. Saves the result as a markdown checklist. Use when the user wants to break down work, create tasks from a brief, plan implementation order, or mentions "tasks" or "breakdown".
- 📁 arxiv_doc_builder/
- 📁 references/
- 📄 pyproject.toml
- 📄 SKILL.md
Automatically convert arXiv papers to well-structured Markdown documentation. Invoke with an arXiv ID to fetch materials (LaTeX source or PDF), convert to Markdown, and generate implementation-ready reference documentation with preserved mathematics and section structure.
Use when users want to bootstrap the brain with existing project knowledge -- reading ADRs, feature specs, UX docs, commit history, retro lessons, and error patterns to seed institutional memory on a project that already has artifacts on disk. Also use for incremental re-hydration after significant work outside the pipeline. Triggers on "hydrate brain", "bootstrap brain", "seed memory", "brain init", "populate brain", "import history".
Converts documents and knowledge gathered from Microsoft Copilot into well-structured, SharePoint-compatible markdown files. Use when a user has content from Copilot (summaries, research, meeting notes, process steps) and wants to produce a formatted page for a SharePoint site, wiki, or knowledge base. Triggers include "create a markdown file from this", "format this for SharePoint", "write this up as a knowledge base article", "turn this Copilot output into a page", or when a user pastes Copilot-generated content and asks for it to be documented.
- 📁 scripts/
- 📄 README.md
- 📄 SKILL.md
AI-powered tool that auto-generates comprehensive API documentation from source code, including OpenAPI/Swagger specs, Postman collections, and markdown docs with examples and authentication details.
- 📁 agents/
- 📁 config/
- 📁 references/
- 📄 SKILL.md
Convert PDFs to markdown, JSON, or HTML using the Marker CLI. Use when the user wants to convert PDF content into text-based formats (markdown, HTML, JSON), mentions "marker" or "marker_single", or asks to batch-convert a folder of PDFs into readable text. Handles single-file and batch-folder conversion, resuming interrupted runs, LLM-enhanced output, and conversion state/error log review.
- 📄 create_preview.py
- 📄 create_template.py
- 📄 markdown_normalizer.py
Converts Markdown files into formatted Word documents. Invoke this skill when the user wants to convert .md to .docx, create a Word document from Markdown, or mentions document conversion.
Search the Claude knowledge graph vault for past Q&A sessions, concepts, and developer knowledge. Use when you need to find information from previous conversations, recall how something was done before, or look up stored technical knowledge. Supports Korean and English queries.
Use when completing retro feature work: after implementation and tests pass, and before creating PRs. Keywords: scenario tests, clean install, retro documentation updates.
Ingest or update a codebase in the agent-knowledge base. First run bootstraps the knowledge base from scratch; subsequent runs are incremental (only changed/new/deleted files are reprocessed). Uses tree-sitter for zero-token structural extraction. Trigger on "/knowledge-ingest", "ingest this codebase", "load this into knowledge", "scan this project", "index this repo", "update knowledge", "refresh knowledge", "re-ingest".

---

# knowledge-ingest

Populate or update agent-knowledge from a codebase. Tree-sitter extracts structure (zero LLM tokens), then the agent distills clusters into knowledge entries + graph edges via existing MCP tools.

**First run**: full ingest — scans all files, creates entries from scratch.

**Subsequent runs**: incremental — only reprocesses files whose SHA256 changed, adds entries for new files, removes entries for deleted files.

The `.knowledge-ingest-cache.json` file in the target directory tracks state between runs.

## When to use

- **Onboarding a new project** — bootstrap the knowledge base so future sessions have context
- **After a refactor** — re-run to update subsystem boundaries and relationships
- **Periodic refresh** — re-run after significant changes to keep knowledge current
- **Importing documentation** — PDFs, architecture diagrams, or external URLs

## When NOT to use

- Single-file changes — just write a knowledge entry manually
- No code changes since last ingest — the cache will skip everything anyway (fast no-op)

## Procedure

### Phase 0 — Validation

1. Confirm the target path exists and is a directory.
2. Detect project name:
   - Check `package.json` → `name` field
   - Check `Cargo.toml` → `[package] name`
   - Check `go.mod` → `module` line
   - Check `pyproject.toml` → `[project] name`
   - Fall back to directory basename
3. Check for `.knowledge-ingest-cache.json` in the target directory. If found, load it — this is an incremental run. Report how many files changed since last ingest.

### Phase 1 — Structural Extraction (zero tokens)

4. Loc
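The incremental-run logic above (SHA256 change detection against a JSON cache) can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the function names and the exact cache schema are assumptions, and the real ingest applies ignore rules and language filters that are omitted here.

```python
import hashlib
import json
from pathlib import Path

CACHE_NAME = ".knowledge-ingest-cache.json"

def sha256_of(path: Path) -> str:
    """Hash a file's contents for change detection."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_against_cache(root: Path) -> dict:
    """Classify files as new, changed, or deleted relative to the
    cache left by the previous run (hypothetical schema: path -> hash).
    An absent cache means a first run, so every file comes back as new."""
    cache_path = root / CACHE_NAME
    old = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    current = {
        str(p.relative_to(root)): sha256_of(p)
        for p in root.rglob("*")
        if p.is_file() and p.name != CACHE_NAME
    }
    return {
        "new": sorted(f for f in current if f not in old),
        "changed": sorted(f for f in current if f in old and old[f] != current[f]),
        "deleted": sorted(f for f in old if f not in current),
        "cache": current,  # persist this as the next run's baseline
    }
```

On a no-op re-run all three lists come back empty, which is what makes the cached fast path cheap.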