Transcribe audio files to text using local speech recognition. Triggers on: "转录", "transcribe", "语音转文字", "ASR", "识别音频", "把这段音频转成文字".

---

## When to Use

- User wants to transcribe an audio file to text
- User provides an audio file path and asks for transcription
- User says "转录", "识别", "transcribe", "语音转文字"

## When NOT to Use

- User wants to synthesize speech from text (use `/tts`)
- User wants to create a podcast or explainer (use `/podcast` or `/explainer`)

## Purpose

Transcribe audio files to text using `coli asr`, which runs fully offline via local speech recognition models. No API key required. Supports Chinese, English, Japanese, Korean, and Cantonese (sensevoice model) or English-only (whisper model). Run `coli asr --help` for current CLI options and supported flags.

## Hard Constraints

- No shell scripts. Use direct commands only.
- Always read config following `shared/config-pattern.md` before any interaction
- Follow `shared/common-patterns.md` for interaction patterns
- Never ask more than one question at a time

<HARD-GATE>
Use the AskUserQuestion tool for every multiple-choice step — do NOT print options as plain text. Ask one question at a time. Wait for the user's answer before proceeding. After all parameters are collected, summarize and ask the user to confirm before running any transcription.
</HARD-GATE>

## Interaction Flow

### Step 0: Prerequisites Check

Before config setup, silently check the environment:

```bash
COLI_OK=$(command -v coli >/dev/null 2>&1 && echo yes || echo no)
FFMPEG_OK=$(command -v ffmpeg >/dev/null 2>&1 && echo yes || echo no)
MODELS_DIR="$HOME/.coli/models"
MODELS_OK=$([ -d "$MODELS_DIR" ] && ls "$MODELS_DIR" | grep -q sherpa && echo yes || echo no)
```

| Issue | Action |
|-------|--------|
| `coli` not found | Block. Tell user to run `npm install -g @marswave/coli` first |
| `ffmpeg` not found | Warn (WAV files still work). Suggest `brew install ffmpeg` / `sudo apt install ffmpeg` |
| Models not downloaded | Inform user: first transcription will download the models |
Skill files are scattered across GitHub and communities, difficult to search, and hard to evaluate. SkillWink organizes open-source skills into a searchable, filterable library you can directly download and use.
We provide keyword search, version updates, multi-metric ranking (downloads / likes / comments / updates), and open SKILL.md standards. You can also discuss usage and improvements on skill detail pages.
Sort by downloads/likes/comments/updated to find higher-quality skills.
4. Which import methods are supported?

- Upload an archive: `.zip` / `.skill` (recommended)
- Upload a skills folder
- Import from a GitHub repository

Note: for all methods, file size should be within 10 MB.
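As a rough sketch, the 10 MB limit can be checked locally before uploading; `check_size` is an illustrative helper, not part of SkillWink:

```shell
# Illustrative helper: succeeds if an archive fits the 10 MB upload limit.
check_size() {
  local limit=$((10 * 1024 * 1024))   # 10 MB in bytes
  local size
  size=$(wc -c < "$1") || return 2    # fail if the file cannot be read
  [ "$size" -le "$limit" ]
}

# Example: check an archive before importing it.
# check_size my-skill.zip && echo "within limit"
```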
5. How to use in Claude / Codex?

Typical paths (may vary by local setup):

- Claude Code: `~/.claude/skills/`
- Codex CLI: `~/.codex/skills/`

One SKILL.md can usually be reused across tools.
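A minimal installation sketch, assuming a local folder with SKILL.md at its top level; `install_skill` is an illustrative helper, and the destination paths above are typical defaults rather than guarantees:

```shell
# Illustrative helper: copy a skill folder into a skills directory
# (e.g. ~/.claude/skills/ for Claude Code, ~/.codex/skills/ for Codex CLI).
install_skill() {
  local src="$1" dest_root="$2"
  # SKILL.md must sit at the top level of the skill folder.
  [ -f "$src/SKILL.md" ] || { echo "no SKILL.md at top level of $src" >&2; return 1; }
  mkdir -p "$dest_root"
  cp -R "$src" "$dest_root/"
  echo "installed: $dest_root/$(basename "$src")"
}

# Example: install_skill ./my-skill "$HOME/.claude/skills"
```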
6. Can one skill be shared across tools?

Yes. Most skills are standardized docs plus assets, so they can be reused wherever the format is supported. Example: retrieval, writing, and automation scripts combined as one workflow.
7. Are these skills safe to use?

Some skills come from public GitHub repositories and some are uploaded by SkillWink creators. Always review the code before installing, and take responsibility for your own security decisions.
8. Why does it not work after import?

Most common reasons:

- Wrong folder path, or the skill nested one level too deep
- Invalid or incomplete SKILL.md fields or format
- Missing dependencies (Python / Node / CLI)
- The tool has not reloaded its skills yet
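The first two causes can be checked mechanically. A diagnostic sketch, where `diagnose_skill` is an illustrative helper and the skills path varies by tool:

```shell
# Illustrative helper: spot the two most common import problems,
# a missing SKILL.md and one-level-too-deep nesting.
diagnose_skill() {
  local dir="$1"
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  if [ -f "$dir/SKILL.md" ]; then
    echo "ok: SKILL.md found at top level"
  elif find "$dir" -mindepth 2 -name SKILL.md | grep -q .; then
    echo "nested: SKILL.md is one level too deep; move its folder up"
    return 1
  else
    echo "invalid: no SKILL.md under $dir"
    return 1
  fi
}

# Example: diagnose_skill "$HOME/.claude/skills/my-skill"
```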
9. Does SkillWink include duplicates/low-quality skills?
We try to avoid that. Use ranking + comments to surface better skills: