- 📁 config/
- 📁 prompts/
- 📁 scripts/
- 📄 SKILL.md
Track and filter arXiv papers by research topics and keywords. Run daily paper digests, manage topic configurations, view tracking history, and trigger keyword evolution. Use when users ask about paper tracking (论文追踪), arxiv monitoring, research paper alerts, or academic paper digests.
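The keyword filtering this skill describes can be sketched in a few lines against arXiv's public Atom API (`export.arxiv.org/api/query` and its `search_query`/`max_results` parameters are real; the helper names and the topic-config shape are illustrative assumptions, not this skill's actual scripts):

```python
from urllib.parse import urlencode

def arxiv_query_url(keywords, max_results=20):
    # Build an arXiv API query URL that ORs the tracked keywords together.
    # Endpoint and parameters follow the public arXiv API; the rest is a sketch.
    search = " OR ".join(f'all:"{kw}"' for kw in keywords)
    params = urlencode({
        "search_query": search,
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    })
    return f"http://export.arxiv.org/api/query?{params}"

def matches_topic(title, abstract, keywords):
    # Keep a paper for the digest if any tracked keyword appears
    # in its title or abstract (case-insensitive).
    text = f"{title} {abstract}".lower()
    return any(kw.lower() in text for kw in keywords)
```

A daily digest would fetch the URL, parse the Atom feed, and run each entry through `matches_topic` before writing the result to the tracking history.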
- 📁 .github/
- 📁 _shared/
- 📁 analyze-results/
- 📄 .gitignore
- 📄 ARCHITECTURE.md
- 📄 CHANGELOG.md
Research automation skills for Claude Code. Full pipeline from idea to submittable paper. Skills: /lit-review, /novelty-check, /experiment, /analyze-results, /write-paper, /research (orchestrator), /setup.
- 📁 .claude-plugin/
- 📁 assets/
- 📁 references/
- 📄 SKILL.md
- 📄 SOURCES.md
Fast research that beats plain web search: discovers what exists before searching specifics (Landscape Scan), catches recent releases within days or weeks (Recency Pulse plus upstream supply-chain tracking), and runs parallel queries for multi-angle coverage. Good for everyday research and current-information questions. Use when the user requests research, a comparison, or "what's the latest on X". For high-stakes decisions requiring hypothesis testing, a COMPASS audit, a Red Team, or a full report, use /deep-research-pro instead.
Deep research with citation tracking
Advanced web research and synthesis: finds, reads, and summarizes information from the web.
- 📁 evaluations/
- 📁 references/
- 📁 templates/
- 📄 README.md
- 📄 SKILL.md
- 📄 standards
Plans new DataHub connectors by classifying the source system, researching it using a dedicated agent or inline research, and generating a _PLANNING.md blueprint with entity mapping and architecture decisions. Use when building a new connector, researching a source system for DataHub, or designing connector architecture. Triggers on: "plan a connector", "new connector for X", "research X for DataHub", "design connector for X", "create planning doc", or any request to plan/research/design a DataHub ingestion source.
De-Anthropocentric Research Engine: the main entry point for all research tasks. Use this skill whenever the user wants to research a topic, find papers, survey a field, trace citation chains, explore research gaps, generate novel ideas, or run experiments. This skill routes to the appropriate mode (quick/survey/deep/web/hybrid/research/execute) based on user intent. For full research mode, it orchestrates the complete pipeline: brainstorming → literature survey → gap analysis → idea generation → review loop → spec writing → implementation planning → experiment execution. ALWAYS use this skill as the starting point for any research-related request.

---

# DARE

DARE is a Vibe Researching Toolkit. You are a research assistant that uses DARE's external MCP tools to accomplish research tasks.

## Your Role

You are an autonomous research agent. Given a research topic or question, you:

1. Understand intent, gauge desired depth and breadth
2. Automatically select the appropriate mode
3. Orchestrate tool calls autonomously, adapting based on intermediate results
4. Deliver structured research output

## Philosophical Orientation

<HARD-GATE>
Before proceeding to ANY mode (quick, survey, deep, research, web, hybrid, or execute), you MUST read `skills/dare/references/de-anthropocentric.md` in full. This is the value system that governs all research activity in DARE. Do NOT skip this step, and do NOT summarize it from memory: read the actual file every time.
</HARD-GATE>

This philosophy is not a set of instructions; it is a lens. After reading it, carry its principles forward into every decision you make: which papers matter, which gaps to prioritize, which ideas to pursue, how to evaluate progress. The document will tell you what those principles are.

## Tools

See `skills/tools/SKILL.md` for the full reference, and `skills/tools/references/dare-scholar.md` for detailed paper-tool usage.

| MCP Server | Tool | Purpose |
|---|---|---|
| apify | `google-scholar-scraper` | Google Scholar search |
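The mode-selection step in "Your Role" can be sketched as a simple keyword heuristic. The mode names come from the description above; the hint lists and the function itself are illustrative assumptions, not DARE's actual routing logic:

```python
# Illustrative only: DARE routes via its MCP tooling; this sketches
# the "automatically select the appropriate mode" step as a heuristic.
MODE_HINTS = {
    "quick":   ["quick", "tl;dr", "briefly"],
    "survey":  ["survey", "literature", "field"],
    "deep":    ["citation chain", "trace", "deep"],
    "web":     ["latest", "news", "current"],
    "execute": ["run experiment", "implement"],
}

def select_mode(request: str, default: str = "research") -> str:
    # Return the first mode whose hint phrase appears in the request;
    # fall back to full research mode when nothing matches.
    text = request.lower()
    for mode, hints in MODE_HINTS.items():
        if any(hint in text for hint in hints):
            return mode
    return default
```

In practice the router would also weigh intermediate tool results, but a first-pass intent match like this is enough to pick a starting mode.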
- 📁 .claude/
- 📁 .github/
- 📁 agents/
- 📄 CHANGELOG.md
- 📄 LICENSE
- 📄 README.md
Experiment executor and monitor for academic research. 2-agent system covering code experiments (ML training, statistical analysis, ETL, simulation) and human studies (surveys, field studies, interviews). 4 modes: run (execute + monitor code), manage (track human studies), validate (statistical interpretation + reproducibility verification), plan (Socratic experiment design). Triggers on: run experiment, execute code, train model, benchmark, manage study, track participants, field study, survey, validate results, check statistics, reproduce, plan experiment, design study, 跑實驗, 執行程式, 管理研究, 驗證結果, 規劃實驗.
- 📁 references/
- 📄 README.md
- 📄 SKILL.md
Strategic research framework that compresses months of market/competitive research into hours through structured power questions. Extracts unspoken industry insights, fragile market assumptions, and strategic attack surfaces from competitor data, reviews, and industry sources using parallel Exa-powered intelligence gathering. Use when user says "attack surface", "research the market", "competitive analysis", "analyze competitors", "find market opportunity", "stress-test this idea", "market research", "evaluate opportunity", "find blind spots", "market entry", or when they want to deeply understand a market, evaluate a new direction, find industry blind spots, assess a partnership, or analyze opportunities. Do NOT use for code review, testing, deployment, bug fixing, or implementation tasks.

---

# Attack Surface: Strategic Research Framework

Compress months of market research into hours. The difference between 3 hours and 3 months isn't the amount of information; it's knowing which questions actually matter.

Instead of "summarize these" or "analyze the competition", this framework extracts:

- **UNSPOKEN INSIGHTS**: what successful players understand that customers never say out loud
- **FRAGILE ASSUMPTIONS**: beliefs the entire market is built on, and how they break
- **ATTACK SURFACES**: the blind spots, the fragile consensus, the opening nobody is talking about

## When to Use

- Entering a new market or vertical
- Evaluating a new feature direction for an existing project
- Assessing a partnership or platform opportunity
- Stress-testing a business idea before committing
- Finding competitive blind spots and underserved niches
- Any strategic question that benefits from deep evidence-based analysis

## Workflow Overview

7 phases, alternating between automated intelligence gathering and user-guided analysis:

| Phase | Name | Mode | Output |
|-------|------|------|--------|
| 1 | Briefing | Interactive | Research brief |
| 2 | Source Collection | Automated (parallel) |
Use when evaluating academic papers or surveying a research topic. Gathers venue, citations, GitHub stats, social buzz, reproducibility, and author signals to produce a scored markdown report. Triggers: "evaluate paper", "paper review", "research survey", "literature review", "is this paper good", "find papers on", "compare papers", "paper impact".
- 📁 scripts/
- 📄 README.md
- 📄 SKILL.md
Index and search code repositories, documentation, research papers, HuggingFace datasets, local folders, Slack workspaces, Google Drive, X (Twitter), and packages with Nia AI. Includes auth bootstrapping, Oracle autonomous research, GitHub live search, Tracer agent, dependency analysis, context sharing, code advisor, document agent, data extraction, filesystem operations, and generic connectors.
- 📄 SKILL.md
- 📄 user_preference_questions.json
Runs the full dev cycle (research → plan → implementation → review + fix loop) for a task.