token-efficiency
Minimize token waste in all bash, file, and data processing operations. ALWAYS apply these principles whenever executing commands, reading files, or processing output. This skill governs how an LLM agent interacts with the system: it applies to every task involving tool use, file reading, data processing, or shell commands. TRIGGER on any computer use whatsoever.

---

# Token-Efficient Computer Use

Every character of stdout returned from a tool call gets tokenized and billed. A single careless `cat` on a 2000-line JSON file costs as much as a thoughtful conversation turn. The goal isn't to memorize specific commands; it's to internalize a simple cost model: **each byte of tool output is money and context window spent**, and each tool call has fixed round-trip overhead on top of that.

Two questions should precede every tool call:

1. **Can I avoid this call entirely?** If the information is already in context from a previous read or from the user, use it.
2. **If I must call, how do I minimize the bytes returned?** Filter, project, truncate, or count at the source rather than dumping raw output.
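A minimal bash sketch of what "filter at the source" looks like in practice. The file names (`data.json`, `app.log`), the `jq` projection, and the search pattern are hypothetical placeholders, not part of the original skill; the point is the shape of the commands, not the specific files.

```bash
# Wasteful: dumps the entire file into the context window.
cat data.json

# Project only the fields you need, and cap how many lines come back.
jq -c '.items[] | {id, status}' data.json | head -n 20

# Count instead of reading when only the size matters.
wc -l app.log

# Search with a match limit instead of paging through the whole file.
grep -n -m 5 "ERROR" app.log

# Preview the first few hundred bytes before committing to a full read.
head -c 500 data.json
```

Each efficient variant answers the second question above: the filtering, truncation, or counting happens in the shell, so only the bytes that matter ever reach the model.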
Source: https://github.com/undefdev/token-efficiency