Use the OfferPilot skill pack for resume optimization, China-first JD fit diagnosis, targeted resumes, and cover letters in Claude Code-style repository agents. Use when the user wants structured job-application outputs or JD fit analysis from local resume and job-description files.
- 📁 evals/
- 📁 guides/
- 📁 rules/
- 📄 README.md
- 📄 SKILL.md
Universal Server-Driven UI (SDUI) Engine for building JSON-driven React interfaces with Shadcn design quality. Use this skill for all ObjectUI development tasks including schema-driven page building, plugin development, component integration, testing, auth/permissions, data integration, i18n, mobile responsiveness, project setup, and console development. Triggers on any mention of ObjectUI, SchemaRenderer, JSON schemas, SDUI, metadata-driven UIs, or Object Stack ecosystem work.
- 📄 examples.md
- 📄 prompt.md
- 📄 SKILL.md
Defines product requirements using the 5W2H framework with psychology-enhanced audience analysis for Who/When, and generates role-aligned handoff notes for RD, UI/UX, and QA. Use when the user asks to clarify a requirement, write a PRD, do 5W2H analysis, define acceptance criteria, or align RD/design/QA on scope.
- 📁 scripts/
- 📄 QUICK_REFERENCE.md
- 📄 README.md
- 📄 SKILL.md
Guidance for setting up CI/CD pipelines for DataRobot application templates using GitLab, GitHub Actions, and Pulumi for infrastructure as code.
Use when changing libs/database/src/schema.ts, adding Drizzle migrations, debugging drizzle-kit drift, or deciding between auto-generated and custom SQL migrations. Includes the post-#867 baseline-reset rule.
- 📁 references/
- 📁 scripts/
- 📄 metadata.json
- 📄 SKILL.md
- 📄 SKILL_HE.md
Discover, query, and analyze Israeli government open data from data.gov.il (CKAN API). Use when user asks about Israeli government data, "data.gov.il", government datasets, CBS statistics, or needs data about Israeli transportation, education, health, geography, economy, or environment. Supports dataset search, tabular data queries, and analysis guidance. Enhances existing datagov-mcp and data-gov-il-mcp servers with workflow best practices. Do NOT use for classified government data or data requiring security clearance.
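Since data.gov.il exposes the standard CKAN action API, dataset search and tabular queries reduce to two endpoints. A minimal sketch, assuming the standard CKAN `package_search` and `datastore_search` actions; the helper names below are illustrative, not part of any existing MCP server:

```python
from urllib.parse import urlencode

# data.gov.il is assumed to serve the standard CKAN action API at this base.
BASE = "https://data.gov.il/api/3/action"

def package_search_url(query: str, rows: int = 10) -> str:
    """Build a CKAN dataset-search URL (full-text search over datasets)."""
    return f"{BASE}/package_search?{urlencode({'q': query, 'rows': rows})}"

def datastore_search_url(resource_id: str, limit: int = 100) -> str:
    """Build a CKAN datastore query URL for one tabular resource."""
    params = {"resource_id": resource_id, "limit": limit}
    return f"{BASE}/datastore_search?{urlencode(params)}"
```

Fetching either URL returns a JSON envelope with a `success` flag and a `result` payload, which is where the dataset metadata or table records live.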
- 📄 LICENSE
- 📄 README.md
- 📄 SimulationsInferentialMistakes.R
Statistical quality checklist for movement science and neuroscience research. Auto-triggers when analyzing data, interpreting results, running statistics, writing results sections, or reviewing analysis code. Based on Makin & Orban de Xivry (2019, eLife).
- 📁 analyze_cycle_time/
- 📁 analyze_flow_debt/
- 📁 analyze_process_evolution/
- 📄 inject.py
- 📄 SKILL.md
Router skill for all MCS-MCP chart visualizations. Trigger this skill whenever any mcs-mcp analysis tool result is present in the conversation and the user asks to visualize, chart, plot, or show it. This router maps the tool that produced the result to the correct chart sub-skill. Do NOT attempt to build any chart ad-hoc — always read the sub-skill first.

---

# MCS Charts Router

This is the sole entry point for all MCS-MCP chart skills. When a chart request arrives, identify which analysis tool produced the data, then read and follow the matching sub-skill before writing any code.

---

## Step 1 — Identify the data source

Look at the conversation for the most recent mcs-mcp tool result. Match it to one of the tools in the routing table below.

---

## Step 2 — Routing Table

```
Tool that produced the data       Sub-skill path (relative to this file)
────────────────────────────────  ──────────────────────────────────────────────
analyze_process_stability         analyze_process_stability/s.md
analyze_throughput                analyze_throughput/s.md
analyze_wip_stability             analyze_wip_stability/s.md
analyze_wip_age_stability         analyze_wip_age_stability/s.md
analyze_work_item_age             analyze_work_item_age/s.md
analyze_process_evolution         analyze_process_evolution/s.md
analyze_residence_time            analyze_residence_time/s.md
generate_cfd_data                 generate_cfd_data/s.md
analyze_cycle_time                analyze_cycle_time/s.md
analyze_status_persistence        analyze_status_persistence/s.md
analyze_flow_debt                 analyze_flow_debt/s.md
analyze_yield                     analyze_yield/s.md
forecast_monte_carlo              forecast_monte_carlo/s.md
forecast_backtest                 forecast_backtest/s.md
```

Sub-folder names match the exact tool name as registered in the MCP server.

---

## Step 3 — Read the sub-skill, then build

Use the `view` tool to read the matched sub-skill before writing any chart code.
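Because every sub-folder name matches the registered tool name exactly, the routing step above is a pure name-to-path lookup. A minimal sketch in Python (the `route` function is illustrative, not part of the MCP server):

```python
# Routing table from the router skill: mcs-mcp tool name -> sub-skill path,
# relative to the router's SKILL.md. Folder names equal the tool names.
ROUTING_TABLE = {
    "analyze_process_stability": "analyze_process_stability/s.md",
    "analyze_throughput": "analyze_throughput/s.md",
    "analyze_wip_stability": "analyze_wip_stability/s.md",
    "analyze_wip_age_stability": "analyze_wip_age_stability/s.md",
    "analyze_work_item_age": "analyze_work_item_age/s.md",
    "analyze_process_evolution": "analyze_process_evolution/s.md",
    "analyze_residence_time": "analyze_residence_time/s.md",
    "generate_cfd_data": "generate_cfd_data/s.md",
    "analyze_cycle_time": "analyze_cycle_time/s.md",
    "analyze_status_persistence": "analyze_status_persistence/s.md",
    "analyze_flow_debt": "analyze_flow_debt/s.md",
    "analyze_yield": "analyze_yield/s.md",
    "forecast_monte_carlo": "forecast_monte_carlo/s.md",
    "forecast_backtest": "forecast_backtest/s.md",
}

def route(tool_name: str) -> str:
    """Return the sub-skill path for an mcs-mcp tool, refusing ad-hoc charts."""
    if tool_name not in ROUTING_TABLE:
        raise KeyError(f"No chart sub-skill registered for {tool_name!r}")
    return ROUTING_TABLE[tool_name]
```

Raising on unknown tool names mirrors the router's rule: never build a chart ad-hoc for a result the table does not cover.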
- 📁 references/
- 📁 scripts/
- 📁 security/
- 📄 .gitignore
- 📄 CHANGELOG.md
- 📄 config.yaml.example
Routes Snowflake-related operations to Cortex Code CLI for specialized Snowflake expertise. Use when user asks about Snowflake databases, data warehouses, SQL queries on Snowflake, Cortex AI features, Snowpark, dynamic tables, data governance in Snowflake, Snowflake security, or mentions "Cortex" explicitly. Do NOT use for general programming, local file operations, non-Snowflake databases, web development, or infrastructure tasks unrelated to Snowflake.
HealthClaw Guardrails (healthclaw.io) — FHIR agent guardrails for secure clinical data access via MCP. Supports FHIR R4 US Core v9 (stable) and R6 ballot3 (experimental). Use when: (1) Reading patient data through MCP with automatic PHI redaction, (2) Writing clinical resources with two-phase propose/commit and step-up auth, (3) Proxying requests to real FHIR servers (HAPI, SMART Health IT, Epic), (4) Auditing AI agent access to healthcare data, (5) Evaluating R6 Permission resources for access control decisions. 12 MCP tools with guardrail enforcement.
Back up all agent data to GitHub — SQLite databases, Claude Code memory, identity, skills, brain notes. Use when the user says backup, back up, save everything, push to github, or snapshot. Also used by heartbeat for automated backups.
Debug Bright Data Scraping Browser sessions using the Browser Sessions API. Use this skill when the user encounters a Bright Data browser session error, puppeteer stack trace, failed scraper run, or asks about session bandwidth, duration, captchas, or connection issues. Also use when a Bright Data scraper produces unexpected results such as empty data, 0 items found, missing products, or fewer results than expected — session data can reveal whether the issue is network/proxy-side (blocks, captchas, redirects, timeouts) or client-side (selectors, extraction logic). Triggers on phrases like 'why did my session fail', 'debug my bright data session', 'check my scraping browser sessions', 'how much bandwidth did my scraper use', 'got 0 results', 'found 0', 'scraper returned empty', 'scraper not working', 'script didn't work', or when a Bright Data error code or brd.superproxy.io stack trace appears in the conversation. Requires BRIGHTDATA_API_KEY environment variable.