awesome-claude-skills-security
Comprehensive testing prompts and wordlists for evaluating Large Language Model (LLM) security, safety, and robustness. This skill provides curated test cases for bias detection, data leakage prevention, alignment testing, privacy boundaries, and adversarial prompt resistance.
Changelog: Source: GitHub https://github.com/Eyadkelleh/awesome-claude-skills-security
No comments yet. Be the first one!