- 📁 references/
- 📁 scripts/
- 📄 SKILL.md
VectCutAPI is a cloud-based API that provides programmatic control over CapCut/JianYing (剪映) for professional video editing. Use this skill when users need to: (1) Create video draft projects programmatically, (2) Add video/audio/image materials with precise control, (3) Add text, subtitles, and captions, (4) Apply effects, transitions, and animations, (5) Add keyframe animations, (6) Process videos in batch, (7) Generate AI-powered videos, (8) Integrate with n8n workflows, (9) Build MCP video editing agents. The API supports HTTP REST and MCP protocols, works with both CapCut (international) and JianYing (China), and provides web-based preview with no download required.
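A minimal sketch of what a REST call to such an API might look like. The host, endpoint path, and payload fields below are placeholders for illustration only, not documented VectCutAPI names; consult the skill's references/ for the real schema.

```python
import json

# Hypothetical host and path -- assumptions, not real VectCutAPI endpoints.
BASE_URL = "https://example-vectcut-host/api"

def build_create_draft_request(width: int, height: int, fps: int = 30) -> dict:
    """Assemble an HTTP request description for creating an empty video draft."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/draft/create",  # assumed path
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"width": width, "height": height, "fps": fps}),
    }

req = build_create_draft_request(1920, 1080)
print(req["url"])
```

The same request description could then be sent with any HTTP client; materials, text, and effects would be added by further calls against the returned draft ID.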
- 📁 references/
- 📁 scripts/
- 📄 .security-scan-passed
- 📄 SKILL.md
Transcribes audio and video files to text using Qwen3-ASR. Supports two modes — local MLX inference on macOS Apple Silicon (no API key, 15-27x realtime) and remote API via vLLM/OpenAI-compatible endpoints. Auto-detects platform and recommends the best path. Triggers when the user wants to transcribe recordings, convert audio/video to text, do speech-to-text, or mentions ASR, Qwen ASR, 转录, 语音转文字, 录音转文字. Also triggers for meeting recordings, lectures, interviews, podcasts, screen recordings, or any audio/video file the user wants converted to text.
- 📁 references/
- 📄 .gitignore
- 📄 LICENSE
- 📄 README.md
TVC advertising creative director skill for Nano Banana Pro keyframe prompts and Seedance video scripts. Specialized for television commercials and brand advertising — from a product brief to production-ready keyframe prompts and cinematic video scripts. Three core capabilities: (1) Cinematic Product Breakdown — multi-phase product micro-films with precise camera choreography, component disassembly animations, feature visualization, and material macro shots; (2) Brand World Crosscut — interweaving product close-ups with in-context usage scenes via match cuts between phases (outdoor cameras with skydiving/skiing, luxury cars with mountain roads); (3) Lifestyle Film — product stays in the brand world throughout (worn/held/carried), highlighted through cinematography rather than studio cutaways, ideal for wearables and lifestyle products. Covers TVC narrative models, product cinematography, brand world integration, multi-grid storyboards, and video prompts. Use this skill whenever users want to create TVC ads, product commercials, brand films, product hero videos, or any advertising visual content — even if they just say 'help me make a product video', 'I need a TVC storyboard', or '帮我做一条产品广告'.
Generate AI videos from text prompts using the HeyGen API. Use when: (1) Generating videos from text descriptions, (2) Creating AI-generated video clips for content production, (3) Image-to-video generation with a reference image, (4) Choosing between video generation providers (VEO, Kling, Sora, Runway, Seedance), (5) Working with HeyGen's /v1/workflows/executions endpoint for video generation.
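A sketch of a video-generation call against the `/v1/workflows/executions` endpoint named above. The payload field names (`prompt`, `provider`) and the auth header are assumptions for illustration; check HeyGen's API reference for the real request schema.

```python
import json

HEYGEN_ENDPOINT = "https://api.heygen.com/v1/workflows/executions"

def build_execution_request(prompt: str, provider: str = "kling",
                            api_key: str = "YOUR_API_KEY") -> dict:
    """Describe a POST that asks a chosen provider to generate a video clip.

    Field names in the body are hypothetical; only the endpoint path comes
    from the skill description above.
    """
    return {
        "method": "POST",
        "url": HEYGEN_ENDPOINT,
        "headers": {
            "X-Api-Key": api_key,  # assumed header name
            "Content-Type": "application/json",
        },
        "body": json.dumps({"prompt": prompt, "provider": provider}),
    }
```

Swapping `provider` between "veo", "kling", "sora", "runway", or "seedance" is how the provider choice in point (4) would surface in a request like this.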
You cannot access video content on your own. Use Cerul to search what was said, shown, or presented in tech talks, podcasts, conference presentations, and earnings calls. Use when a user asks about what someone said, wants video evidence, or needs citations from talks and interviews.
- 📁 skills/
- 📄 .gitignore
- 📄 LICENSE
- 📄 package.json
Collection of agent skills for Helios video engine. Use when working with programmatic video creation, browser-native animations, or Helios compositions. Install individual skills by path for specific capabilities.
- 📁 references/
- 📁 scripts/
- 📁 workflows/
- 📄 SKILL.md
AI video & audio summarizer. Summarize YouTube videos, Bilibili videos, podcasts, TikTok, Twitter/X, Xiaohongshu, and any online video or audio. Use when the user wants to summarize a video, extract transcripts/subtitles, get chapter-by-chapter summaries, or understand video content quickly.
NVIDIA DeepStream SDK 9.0 development with Python pyservicemaker API. Use when building video analytics pipelines, GStreamer-based video processing, TensorRT inference integration, object detection/tracking, or Kafka/message broker integration.
- 📁 docs/
- 📁 remotion-standup/
- 📁 scripts/
- 📄 .gitignore
- 📄 README.md
- 📄 REMOTION_VOICEOVER.md
Automated video editing skill for talk/vlog/standup videos. Use when: cutting video, splitting video into sentences, merging video clips, extracting audio, transcribing speech, auto-editing oral presentation videos, combining selected sentence clips into a final video, generating video cover/thumbnail with title, B-roll cutaway editing, persistent video overlay/watermark, blinking REC indicator, ending title cards, multi-source audio mixing, generating voiceover videos with Remotion (audio-only to video with animated visuals/subtitles). Requires ffmpeg and whisper. Remotion workflow additionally requires Node.js and npm.
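The extract-audio and transcribe steps above can be sketched as command builders. File names are placeholders and the exact invocations inside the skill's scripts may differ, but the ffmpeg flags and openai-whisper CLI options shown are standard.

```python
import shlex

def extract_audio_cmd(video: str, wav: str) -> list[str]:
    # -vn drops the video stream; 16 kHz mono PCM is the format
    # whisper resamples to internally, so extracting it directly is cheap
    return ["ffmpeg", "-i", video, "-vn", "-acodec", "pcm_s16le",
            "-ar", "16000", "-ac", "1", wav]

def transcribe_cmd(wav: str) -> list[str]:
    # openai-whisper CLI: write a timestamped .srt next to the input
    return ["whisper", wav, "--model", "small", "--output_format", "srt"]

print(shlex.join(extract_audio_cmd("talk.mp4", "talk.wav")))
```

With ffmpeg and whisper on PATH, each list runs via `subprocess.run(cmd, check=True)`; the resulting .srt timestamps are what sentence-level splitting and clip selection would key off.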
This skill should be used when the user asks to "generate video prompts", "create Seedance prompts", "write video descriptions", mentions "Seedance", "seedance", "即梦", "即梦平台", "视频提示词", "视频生成", "AI视频", "短剧", "广告视频", "视频延长", "生成图片", "文生图", "图生图", "图生视频", "文生视频", or discusses video prompt engineering, AI video generation, or Seedance 2.0 workflows. It also handles requests to create, edit, or manipulate images and videos using the dreamina CLI tool.
Watch a tutorial or demo video and generate a Claude Code skill from it. Activated when the user says "create a skill from this video" or similar.
- 📁 src/
- 📄 .gitattributes
- 📄 .gitignore
- 📄 LICENSE
A comprehensive Bilibili toolkit that integrates hot trending monitoring, video downloading, video watching/playback, subtitle downloading, and video publishing capabilities into a single unified skill. Supports Bilibili session cookie authentication for publishing and high-quality downloads. Requests go to official Bilibili API endpoints over HTTPS.