feat: add 7 new skills from GitHub repo analysis
New skills:

- prompt-library: Curated role-based and task-specific prompt templates
- javascript-mastery: 33+ essential JavaScript concepts
- llm-app-patterns: RAG pipelines, agent architectures, LLMOps
- workflow-automation: Multi-step automation and API integration
- autonomous-agent-patterns: Tool design, permissions, browser automation
- bun-development: Bun runtime, testing, bundling, Node.js migration
- github-workflow-automation: AI PR reviews, issue triage, CI/CD

Sources: n8n, awesome-chatgpt-prompts, dify, gemini-cli, bun, 33-js-concepts, cline, codex

Total skills: 62 → 69
README.md (95 lines changed)
@@ -36,55 +36,62 @@ The repository is organized into several key areas of expertise:
## Full Skill Registry (69/69)

Below is the complete list of available skills. Each skill folder contains a `SKILL.md` that can be imported into Antigravity or Claude Code.

> [!NOTE]
> **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility.

| Skill Name                       | Description                                                     | Path                                           |
| :------------------------------- | :-------------------------------------------------------------- | :--------------------------------------------- |
| **Algorithmic Art**              | Creative generative art using p5.js and seeded randomness.      | `skills/algorithmic-art`                       |
| **App Store Optimization**       | Complete ASO toolkit for iOS and Android app performance.       | `skills/app-store-optimization`                |
| **Autonomous Agent Patterns**    | Design patterns for autonomous coding agents and tools.         | `skills/autonomous-agent-patterns` ⭐ NEW      |
| **AWS Pentesting**               | Specialized security assessment for Amazon Web Services.        | `skills/aws-penetration-testing`               |
| **Backend Guidelines**           | Core architecture patterns for Node/Express microservices.      | `skills/backend-dev-guidelines`                |
| **Brainstorming**                | Requirement discovery and intent exploration framework.         | `skills/brainstorming`                         |
| **Brand Guidelines (Anthropic)** | Official Anthropic brand styling and visual standards.          | `skills/brand-guidelines-anthropic` ⭐ NEW     |
| **Brand Guidelines (Community)** | Community-contributed brand guidelines and templates.           | `skills/brand-guidelines-community`            |
| **Bun Development**              | Modern JavaScript/TypeScript development with Bun runtime.      | `skills/bun-development` ⭐ NEW                |
| **Canvas Design**                | Beautiful static visual design in PDF and PNG.                  | `skills/canvas-design`                         |
| **Claude D3.js**                 | Advanced data visualization with D3.js.                         | `skills/claude-d3js-skill`                     |
| **Content Creator**              | SEO-optimized marketing and brand voice toolkit.                | `skills/content-creator`                       |
| **Core Components**              | Design system tokens and baseline UI patterns.                  | `skills/core-components`                       |
| **Doc Co-authoring**             | Structured workflow for technical documentation.                | `skills/doc-coauthoring`                       |
| **DOCX (Official)**              | Official Anthropic MS Word document manipulation.               | `skills/docx-official` ⭐ NEW                  |
| **Ethical Hacking**              | Comprehensive penetration testing lifecycle methodology.        | `skills/ethical-hacking-methodology`           |
| **Frontend Design**              | Production-grade UI component implementation.                   | `skills/frontend-design`                       |
| **Frontend Guidelines**          | Modern React/TS development patterns and file structure.        | `skills/frontend-dev-guidelines`               |
| **Git Pushing**                  | Automated staging and conventional commits.                     | `skills/git-pushing`                           |
| **GitHub Workflow Automation**   | AI-powered PR reviews, issue triage, and CI/CD integration.     | `skills/github-workflow-automation` ⭐ NEW     |
| **Internal Comms (Anthropic)**   | Official Anthropic corporate communication templates.           | `skills/internal-comms-anthropic` ⭐ NEW       |
| **Internal Comms (Community)**   | Community-contributed communication templates.                  | `skills/internal-comms-community`              |
| **JavaScript Mastery**           | 33+ essential JavaScript concepts every developer should know.  | `skills/javascript-mastery` ⭐ NEW             |
| **Kaizen**                       | Continuous improvement and error-proofing (Poka-Yoke).          | `skills/kaizen`                                |
| **Linux Shell Scripting**        | Production-ready shell scripts for automation.                  | `skills/linux-shell-scripting`                 |
| **LLM App Patterns**             | RAG pipelines, agent architectures, and LLMOps patterns.        | `skills/llm-app-patterns` ⭐ NEW               |
| **Loki Mode**                    | Fully autonomous startup development engine.                    | `skills/loki-mode`                             |
| **MCP Builder**                  | High-quality Model Context Protocol (MCP) server creation.      | `skills/mcp-builder`                           |
| **NotebookLM**                   | Source-grounded querying via Google NotebookLM.                 | `skills/notebooklm`                            |
| **PDF (Official)**               | Official Anthropic PDF document manipulation.                   | `skills/pdf-official` ⭐ NEW                   |
| **Pentest Checklist**            | Structured security assessment planning and scoping.            | `skills/pentest-checklist`                     |
| **PPTX (Official)**              | Official Anthropic PowerPoint manipulation.                     | `skills/pptx-official` ⭐ NEW                  |
| **Product Toolkit**              | RICE prioritization and product discovery frameworks.           | `skills/product-manager-toolkit`               |
| **Prompt Engineering**           | Expert patterns for LLM instruction optimization.               | `skills/prompt-engineering`                    |
| **Prompt Library**               | Curated role-based and task-specific prompt templates.          | `skills/prompt-library` ⭐ NEW                 |
| **React Best Practices**         | Vercel's 40+ performance optimization rules for React.          | `skills/react-best-practices` ⭐ NEW (Vercel)  |
| **React UI Patterns**            | Standardized loading states and error handling for React.       | `skills/react-ui-patterns`                     |
| **Senior Architect**             | Scalable system design and architecture diagrams.               | `skills/senior-architect`                      |
| **Skill Creator**                | Meta-skill for building high-performance agentic skills.        | `skills/skill-creator`                         |
| **Software Architecture**        | Quality-focused design principles and analysis.                 | `skills/software-architecture`                 |
| **Systematic Debugging**         | Root cause analysis and structured fix verification.            | `skills/systematic-debugging`                  |
| **TDD**                          | Test-Driven Development workflow and red-green-refactor.        | `skills/test-driven-development`               |
| **UI/UX Pro Max**                | Advanced design intelligence and 50+ styling options.           | `skills/ui-ux-pro-max`                         |
| **Web Artifacts**                | Complex React/Tailwind/Shadcn UI artifact builder.              | `skills/web-artifacts-builder`                 |
| **Web Design Guidelines**        | Vercel's 100+ UI/UX audit rules (accessibility, performance).   | `skills/web-design-guidelines` ⭐ NEW (Vercel) |
| **Webapp Testing**               | Local web application testing with Playwright.                  | `skills/webapp-testing`                        |
| **Workflow Automation**          | Multi-step automations, API integration, AI-native pipelines.   | `skills/workflow-automation` ⭐ NEW            |
| **XLSX (Official)**              | Official Anthropic Excel spreadsheet manipulation.              | `skills/xlsx-official` ⭐ NEW                  |

> [!TIP]
> Use the `validate_skills.py` script in the `scripts/` directory to ensure all skills are properly formatted and ready for use.
skills/autonomous-agent-patterns/SKILL.md (761 lines, new file)
@@ -0,0 +1,761 @@
---
name: autonomous-agent-patterns
description: "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants."
---

# 🕹️ Autonomous Agent Patterns

> Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

## When to Use This Skill

Use this skill when:

- Building autonomous AI agents
- Designing tool/function calling APIs
- Implementing permission and approval systems
- Creating browser automation for agents
- Designing human-in-the-loop workflows

---

## 1. Core Agent Architecture

### 1.1 Agent Loop

```
┌─────────────────────────────────────────────────────────────┐
│                        AGENT LOOP                           │
│                                                             │
│   ┌──────────┐     ┌──────────┐     ┌──────────┐            │
│   │  Think   │───▶│  Decide  │───▶│   Act    │              │
│   │ (Reason) │     │  (Plan)  │     │ (Execute)│            │
│   └──────────┘     └──────────┘     └──────────┘            │
│        ▲                                  │                 │
│        │          ┌──────────┐            │                 │
│        └─────────│ Observe  │◀───────────┘                  │
│                   │ (Result) │                              │
│                   └──────────┘                              │
└─────────────────────────────────────────────────────────────┘
```

```python
import json
from typing import Any


class AgentLoop:
    def __init__(self, llm, tools, max_iterations=50):
        self.llm = llm
        self.tools = {t.name: t for t in tools}
        self.max_iterations = max_iterations
        self.history = []

    def run(self, task: str) -> str:
        self.history.append({"role": "user", "content": task})

        for _ in range(self.max_iterations):
            # Think: Get LLM response with tool options
            response = self.llm.chat(
                messages=self.history,
                tools=self._format_tools(),
                tool_choice="auto"
            )

            # Decide: Check if agent wants to use a tool
            if response.tool_calls:
                # Record the assistant turn first, so each tool result
                # has the tool call it answers in the history
                self.history.append({
                    "role": "assistant",
                    "content": response.content,
                    "tool_calls": response.tool_calls
                })
                for tool_call in response.tool_calls:
                    # Act: Execute the tool
                    result = self._execute_tool(tool_call)

                    # Observe: Add result to history
                    self.history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                # No more tool calls = task complete
                return response.content

        return "Max iterations reached"

    def _execute_tool(self, tool_call) -> Any:
        tool = self.tools[tool_call.name]
        args = json.loads(tool_call.arguments)
        return tool.execute(**args)
```
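
The loop above can be exercised end to end with a scripted stand-in for the model. Everything here is illustrative: `StubLLM`, `EchoTool`, and `run_agent` (a condensed restatement of the loop) are sketch names, not a real provider API.

```python
import json
from types import SimpleNamespace


class EchoTool:
    name = "echo"

    def execute(self, text):
        return text.upper()


class StubLLM:
    """Scripted model: one tool call on the first turn, then a final answer."""

    def __init__(self):
        self.turn = 0

    def chat(self, messages, tools=None, tool_choice=None):
        self.turn += 1
        if self.turn == 1:
            call = SimpleNamespace(id="c1", name="echo",
                                   arguments=json.dumps({"text": "done"}))
            return SimpleNamespace(content=None, tool_calls=[call])
        # Second turn: the tool result is the last history entry
        return SimpleNamespace(content=messages[-1]["content"], tool_calls=None)


def run_agent(llm, tools, task, max_iterations=5):
    """Condensed restatement of the think -> act -> observe loop."""
    history = [{"role": "user", "content": task}]
    registry = {t.name: t for t in tools}
    for _ in range(max_iterations):
        response = llm.chat(messages=history, tools=None, tool_choice="auto")
        if response.tool_calls:
            for call in response.tool_calls:
                result = registry[call.name].execute(**json.loads(call.arguments))
                history.append({"role": "tool", "tool_call_id": call.id,
                                "content": str(result)})
        else:
            return response.content
    return "Max iterations reached"


print(run_agent(StubLLM(), [EchoTool()], "shout 'done'"))  # DONE
```

Because the stub stops issuing tool calls on its second turn, the loop terminates on the `else` branch rather than by hitting `max_iterations`.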

### 1.2 Multi-Model Architecture

```python
class MultiModelAgent:
    """
    Use different models for different purposes:
    - Fast model for planning
    - Powerful model for complex reasoning
    - Specialized model for code generation
    """

    def __init__(self):
        self.models = {
            "fast": "gpt-3.5-turbo",    # Quick decisions
            "smart": "gpt-4-turbo",     # Complex reasoning
            "code": "claude-3-sonnet",  # Code generation
        }

    def select_model(self, task_type: str) -> str:
        if task_type == "planning":
            return self.models["fast"]
        elif task_type == "analysis":
            return self.models["smart"]
        elif task_type == "code":
            return self.models["code"]
        return self.models["smart"]
```

---

## 2. Tool Design Patterns

### 2.1 Tool Schema

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToolResult:
    """Uniform result envelope returned by every tool."""
    success: bool
    output: Optional[str] = None
    error: Optional[str] = None
    metadata: Optional[dict] = None


class Tool:
    """Base class for agent tools"""

    @property
    def schema(self) -> dict:
        """JSON Schema for the tool"""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": self._get_parameters(),
                "required": self._get_required()
            }
        }

    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool and return result"""
        raise NotImplementedError


class ReadFileTool(Tool):
    name = "read_file"
    description = "Read the contents of a file from the filesystem"

    def _get_parameters(self):
        return {
            "path": {
                "type": "string",
                "description": "Absolute path to the file"
            },
            "start_line": {
                "type": "integer",
                "description": "Line to start reading from (1-indexed)"
            },
            "end_line": {
                "type": "integer",
                "description": "Line to stop reading at (inclusive)"
            }
        }

    def _get_required(self):
        return ["path"]

    def execute(self, path: str, start_line: int = None, end_line: int = None) -> ToolResult:
        try:
            with open(path, 'r') as f:
                lines = f.readlines()

            if start_line and end_line:
                lines = lines[start_line-1:end_line]

            return ToolResult(
                success=True,
                output="".join(lines)
            )
        except FileNotFoundError:
            return ToolResult(
                success=False,
                error=f"File not found: {path}"
            )
```

### 2.2 Essential Agent Tools

```python
CODING_AGENT_TOOLS = {
    # File operations
    "read_file": "Read file contents",
    "write_file": "Create or overwrite a file",
    "edit_file": "Make targeted edits to a file",
    "list_directory": "List files and folders",
    "search_files": "Search for files by pattern",

    # Code understanding
    "search_code": "Search for code patterns (grep)",
    "get_definition": "Find function/class definition",
    "get_references": "Find all references to a symbol",

    # Terminal
    "run_command": "Execute a shell command",
    "read_output": "Read command output",
    "send_input": "Send input to running command",

    # Browser (optional)
    "open_browser": "Open URL in browser",
    "click_element": "Click on page element",
    "type_text": "Type text into input",
    "screenshot": "Capture screenshot",

    # Context
    "ask_user": "Ask the user a question",
    "search_web": "Search the web for information"
}
```
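
A catalog like this maps names to one-line descriptions; expanding it into function-calling schema stubs (to be filled in with real parameters later) is mechanical. `to_schema_stubs` below is a sketch, not part of the skill's API.

```python
catalog = {
    "read_file": "Read file contents",
    "run_command": "Execute a shell command",
}


def to_schema_stubs(tools: dict) -> list:
    """Turn a name -> description catalog into empty-parameter tool schemas."""
    return [
        {
            "name": name,
            "description": desc,
            "parameters": {"type": "object", "properties": {}, "required": []},
        }
        for name, desc in tools.items()
    ]


stubs = to_schema_stubs(catalog)
print(stubs[0]["name"])  # read_file
```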

### 2.3 Edit Tool Design

```python
class EditFileTool(Tool):
    """
    Precise file editing with conflict detection.
    Uses search/replace pattern for reliable edits.
    """

    name = "edit_file"
    description = "Edit a file by replacing specific content"

    def execute(
        self,
        path: str,
        search: str,
        replace: str,
        expected_occurrences: int = 1
    ) -> ToolResult:
        """
        Args:
            path: File to edit
            search: Exact text to find (must match exactly, including whitespace)
            replace: Text to replace with
            expected_occurrences: How many times search should appear (validation)
        """
        with open(path, 'r') as f:
            content = f.read()

        # Validate: check "not found" first, then the occurrence count
        actual_occurrences = content.count(search)
        if actual_occurrences == 0:
            return ToolResult(
                success=False,
                error="Search text not found in file"
            )

        if actual_occurrences != expected_occurrences:
            return ToolResult(
                success=False,
                error=f"Expected {expected_occurrences} occurrences, found {actual_occurrences}"
            )

        # Apply edit
        new_content = content.replace(search, replace)

        with open(path, 'w') as f:
            f.write(new_content)

        return ToolResult(
            success=True,
            output=f"Replaced {actual_occurrences} occurrence(s)"
        )
```
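
The occurrence-validation idea can be checked on a throwaway file. `safe_replace` is a standalone restatement of the tool's core logic, not the class itself.

```python
import os
import tempfile


def safe_replace(path, search, replace, expected_occurrences=1):
    """Same validation idea as EditFileTool, as a plain function."""
    with open(path) as f:
        content = f.read()
    actual = content.count(search)
    if actual != expected_occurrences:
        return False, f"Expected {expected_occurrences}, found {actual}"
    with open(path, "w") as f:
        f.write(content.replace(search, replace))
    return True, f"Replaced {actual} occurrence(s)"


# Demo on a temporary file
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("x = 1\ny = x + 1\n")
    path = f.name

ok, msg = safe_replace(path, "x", "count", expected_occurrences=2)
print(ok, msg)  # True Replaced 2 occurrence(s)
os.unlink(path)
```

Passing the wrong `expected_occurrences` rejects the edit before touching the file, which is what makes this pattern safe for agents that guessed the context wrong.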

---

## 3. Permission & Safety Patterns

### 3.1 Permission Levels

```python
from enum import Enum


class PermissionLevel(Enum):
    # Fully automatic - no user approval needed
    AUTO = "auto"

    # Ask once per session
    ASK_ONCE = "ask_once"

    # Ask every time
    ASK_EACH = "ask_each"

    # Never allow
    NEVER = "never"


PERMISSION_CONFIG = {
    # Low risk - can auto-approve
    "read_file": PermissionLevel.AUTO,
    "list_directory": PermissionLevel.AUTO,
    "search_code": PermissionLevel.AUTO,

    # Medium risk - ask once
    "write_file": PermissionLevel.ASK_ONCE,
    "edit_file": PermissionLevel.ASK_ONCE,

    # High risk - ask each time
    "run_command": PermissionLevel.ASK_EACH,
    "delete_file": PermissionLevel.ASK_EACH,

    # Dangerous - never auto-approve
    "sudo_command": PermissionLevel.NEVER,
    "format_disk": PermissionLevel.NEVER
}
```
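
Resolving an effective level for a tool (including one missing from the config) is a one-liner once a default is chosen; defaulting unknown tools to the cautious interactive level is the safe choice. The sketch restates the enum so it runs standalone; `effective_level` and `CONFIG` are illustrative names.

```python
from enum import Enum


class PermissionLevel(Enum):
    AUTO = "auto"
    ASK_ONCE = "ask_once"
    ASK_EACH = "ask_each"
    NEVER = "never"


CONFIG = {
    "read_file": PermissionLevel.AUTO,
    "write_file": PermissionLevel.ASK_ONCE,
    "run_command": PermissionLevel.ASK_EACH,
}


def effective_level(tool_name: str) -> PermissionLevel:
    # Tools missing from the config default to the cautious interactive level
    return CONFIG.get(tool_name, PermissionLevel.ASK_EACH)


# Derive the set of tools safe to run unattended
unattended = sorted(n for n, lvl in CONFIG.items() if lvl is PermissionLevel.AUTO)
print(unattended)  # ['read_file']
```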

### 3.2 Approval UI Pattern

```python
class ApprovalManager:
    def __init__(self, ui, config):
        self.ui = ui
        self.config = config
        self.session_approvals = {}

    def request_approval(self, tool_name: str, args: dict) -> bool:
        level = self.config.get(tool_name, PermissionLevel.ASK_EACH)

        if level == PermissionLevel.AUTO:
            return True

        if level == PermissionLevel.NEVER:
            self.ui.show_error(f"Tool '{tool_name}' is not allowed")
            return False

        if level == PermissionLevel.ASK_ONCE:
            if tool_name in self.session_approvals:
                return self.session_approvals[tool_name]

        # Show approval dialog
        approved = self.ui.show_approval_dialog(
            tool=tool_name,
            args=args,
            risk_level=self._assess_risk(tool_name, args)
        )

        if level == PermissionLevel.ASK_ONCE:
            self.session_approvals[tool_name] = approved

        return approved

    def _assess_risk(self, tool_name: str, args: dict) -> str:
        """Analyze specific call for risk level"""
        if tool_name == "run_command":
            cmd = args.get("command", "")
            if any(danger in cmd for danger in ["rm -rf", "sudo", "chmod"]):
                return "HIGH"
            return "MEDIUM"
        return "LOW"
```

### 3.3 Sandboxing

```python
import os
import shlex
import subprocess


class SandboxedExecution:
    """
    Execute code/commands in an isolated environment
    """

    def __init__(self, workspace_dir: str):
        self.workspace = workspace_dir
        self.allowed_commands = ["npm", "python", "node", "git", "ls", "cat"]
        self.blocked_paths = ["/etc", "/usr", "/bin", os.path.expanduser("~")]

    def validate_path(self, path: str) -> bool:
        """Ensure path is within workspace"""
        real_path = os.path.realpath(path)
        workspace_real = os.path.realpath(self.workspace)
        return real_path.startswith(workspace_real)

    def validate_command(self, command: str) -> bool:
        """Check if command is allowed"""
        cmd_parts = shlex.split(command)
        if not cmd_parts:
            return False

        base_cmd = cmd_parts[0]
        return base_cmd in self.allowed_commands

    def execute_sandboxed(self, command: str) -> ToolResult:
        if not self.validate_command(command):
            return ToolResult(
                success=False,
                error=f"Command not allowed: {command}"
            )

        # Execute in isolated environment
        result = subprocess.run(
            command,
            shell=True,
            cwd=self.workspace,
            capture_output=True,
            timeout=30,
            env={
                **os.environ,
                "HOME": self.workspace,  # Isolate home directory
            }
        )

        return ToolResult(
            success=result.returncode == 0,
            output=result.stdout.decode(),
            error=result.stderr.decode() if result.returncode != 0 else None
        )
```
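
The path check is the load-bearing part of the sandbox, and a raw `startswith` prefix check has a known pitfall: `/tmp/ws2` passes a check against workspace `/tmp/ws`. A hardened variant of the sketch compares path components instead (`path_is_inside` is an illustrative name):

```python
import os
import tempfile


def path_is_inside(workspace: str, candidate: str) -> bool:
    """realpath-based containment check, comparing path components
    rather than raw string prefixes."""
    real = os.path.realpath(candidate)
    base = os.path.realpath(workspace)
    return os.path.commonpath([real, base]) == base


ws = tempfile.mkdtemp()
print(path_is_inside(ws, os.path.join(ws, "src", "main.py")))       # True
print(path_is_inside(ws, os.path.join(ws, "..", "etc", "passwd")))  # False
```

`os.path.realpath` also resolves symlinks, so a link inside the workspace pointing outside it is rejected too.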

---

## 4. Browser Automation

### 4.1 Browser Tool Pattern

```python
import base64


class BrowserTool:
    """
    Browser automation for agents using Playwright/Puppeteer.
    Enables visual debugging and web testing.
    """

    def __init__(self, headless: bool = True):
        self.browser = None
        self.page = None
        self.headless = headless

    async def open_url(self, url: str) -> ToolResult:
        """Navigate to URL and return page info"""
        if not self.browser:
            self.browser = await playwright.chromium.launch(headless=self.headless)
            self.page = await self.browser.new_page()

        await self.page.goto(url)

        # Capture state
        screenshot = await self.page.screenshot(type='png')
        title = await self.page.title()

        return ToolResult(
            success=True,
            output=f"Loaded: {title}",
            metadata={
                "screenshot": base64.b64encode(screenshot).decode(),
                "url": self.page.url
            }
        )

    async def click(self, selector: str) -> ToolResult:
        """Click on an element"""
        try:
            await self.page.click(selector, timeout=5000)
            await self.page.wait_for_load_state("networkidle")

            screenshot = await self.page.screenshot()
            return ToolResult(
                success=True,
                output=f"Clicked: {selector}",
                metadata={"screenshot": base64.b64encode(screenshot).decode()}
            )
        except TimeoutError:
            return ToolResult(
                success=False,
                error=f"Element not found: {selector}"
            )

    async def type_text(self, selector: str, text: str) -> ToolResult:
        """Type text into an input"""
        await self.page.fill(selector, text)
        return ToolResult(success=True, output=f"Typed into {selector}")

    async def get_page_content(self) -> ToolResult:
        """Get accessible text content of the page"""
        content = await self.page.evaluate("""
            () => {
                // Get visible text
                const walker = document.createTreeWalker(
                    document.body,
                    NodeFilter.SHOW_TEXT,
                    null,
                    false
                );

                let text = '';
                while (walker.nextNode()) {
                    const node = walker.currentNode;
                    if (node.textContent.trim()) {
                        text += node.textContent.trim() + '\\n';
                    }
                }
                return text;
            }
        """)
        return ToolResult(success=True, output=content)
```

### 4.2 Visual Agent Pattern

```python
import json


class VisualAgent:
    """
    Agent that uses screenshots to understand web pages.
    Can identify elements visually without selectors.
    """

    def __init__(self, llm, browser):
        self.llm = llm
        self.browser = browser

    async def describe_page(self) -> str:
        """Use vision model to describe current page"""
        screenshot = await self.browser.screenshot()

        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this webpage. List all interactive elements you see."},
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        return response.content

    async def find_and_click(self, description: str) -> ToolResult:
        """Find element by visual description and click it"""
        screenshot = await self.browser.screenshot()

        # Ask vision model to find element
        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"""
                        Find the element matching: "{description}"
                        Return the approximate coordinates as JSON: {{"x": number, "y": number}}
                        """
                    },
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        coords = json.loads(response.content)
        await self.browser.page.mouse.click(coords["x"], coords["y"])

        return ToolResult(success=True, output=f"Clicked at ({coords['x']}, {coords['y']})")
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. Context Management
|
||||||
|
|
||||||
|
### 5.1 Context Injection Patterns
|
||||||
|
|
||||||
|
````python
|
||||||
|
import json
import os

import requests  # third-party HTTP client


class ContextManager:
    """
    Manage context provided to the agent.
    Inspired by Cline's @-mention patterns.
    """

    def __init__(self, workspace: str):
        self.workspace = workspace
        self.context = []

    def add_file(self, path: str) -> None:
        """@file - Add file contents to context"""
        with open(path, 'r') as f:
            content = f.read()

        self.context.append({
            "type": "file",
            "path": path,
            "content": content
        })

    def add_folder(self, path: str, max_files: int = 20) -> None:
        """@folder - Add all files in folder"""
        for root, dirs, files in os.walk(path):
            for file in files[:max_files]:
                file_path = os.path.join(root, file)
                self.add_file(file_path)

    def add_url(self, url: str) -> None:
        """@url - Fetch and add URL content"""
        response = requests.get(url)
        # html_to_markdown: any HTML-to-Markdown converter of your choice
        content = html_to_markdown(response.text)

        self.context.append({
            "type": "url",
            "url": url,
            "content": content
        })

    def add_problems(self, diagnostics: list) -> None:
        """@problems - Add IDE diagnostics"""
        self.context.append({
            "type": "diagnostics",
            "problems": diagnostics
        })

    def format_for_prompt(self) -> str:
        """Format all context for LLM prompt"""
        parts = []
        for item in self.context:
            if item["type"] == "file":
                parts.append(f"## File: {item['path']}\n```\n{item['content']}\n```")
            elif item["type"] == "url":
                parts.append(f"## URL: {item['url']}\n{item['content']}")
            elif item["type"] == "diagnostics":
                parts.append(f"## Problems:\n{json.dumps(item['problems'], indent=2)}")

        return "\n\n".join(parts)
````

### 5.2 Checkpoint/Resume

```python
import json
import os
import subprocess
from datetime import datetime


class CheckpointManager:
    """
    Save and restore agent state for long-running tasks.
    """

    def __init__(self, storage_dir: str):
        self.storage_dir = storage_dir
        os.makedirs(storage_dir, exist_ok=True)

    def save_checkpoint(self, session_id: str, state: dict) -> str:
        """Save current agent state"""
        checkpoint = {
            "timestamp": datetime.now().isoformat(),
            "session_id": session_id,
            "history": state["history"],
            "context": state["context"],
            "workspace_state": self._capture_workspace(state["workspace"]),
            "metadata": state.get("metadata", {})
        }

        path = os.path.join(self.storage_dir, f"{session_id}.json")
        with open(path, 'w') as f:
            json.dump(checkpoint, f, indent=2)

        return path

    def restore_checkpoint(self, checkpoint_path: str) -> dict:
        """Restore agent state from checkpoint"""
        with open(checkpoint_path, 'r') as f:
            checkpoint = json.load(f)

        return {
            "history": checkpoint["history"],
            "context": checkpoint["context"],
            "workspace": self._restore_workspace(checkpoint["workspace_state"]),
            "metadata": checkpoint["metadata"]
        }

    def _capture_workspace(self, workspace: str) -> dict:
        """Capture relevant workspace state"""
        # Git status, file hashes, etc.
        return {
            "git_ref": subprocess.getoutput(f"cd {workspace} && git rev-parse HEAD"),
            "git_dirty": subprocess.getoutput(f"cd {workspace} && git status --porcelain")
        }

    def _restore_workspace(self, workspace_state: dict) -> dict:
        """Restore workspace state (e.g. check out the saved git ref)."""
        # Minimal placeholder: hand the captured state back for the caller to act on
        return workspace_state
```

---

## 6. MCP (Model Context Protocol) Integration

### 6.1 MCP Server Pattern

```python
import os

from mcp import Server, Tool


class MCPAgent:
    """
    Agent that can dynamically discover and use MCP tools.
    'Add a tool that...' pattern from Cline.
    """

    def __init__(self, llm):
        self.llm = llm
        self.mcp_servers = {}
        self.available_tools = {}

    def connect_server(self, name: str, config: dict) -> None:
        """Connect to an MCP server"""
        server = Server(config)
        self.mcp_servers[name] = server

        # Discover tools
        tools = server.list_tools()
        for tool in tools:
            self.available_tools[tool.name] = {
                "server": name,
                "schema": tool.schema
            }

    async def create_tool(self, description: str) -> str:
        """
        Create a new MCP server based on user description.
        'Add a tool that fetches Jira tickets'
        """
        # Generate MCP server code
        code = self.llm.generate(f"""
Create a Python MCP server with a tool that does:
{description}

Use the FastMCP framework. Include proper error handling.
Return only the Python code.
""")

        # Save and install
        # _extract_name: derive a filesystem-safe slug from the description (not shown)
        server_name = self._extract_name(description)
        path = f"./mcp_servers/{server_name}/server.py"
        os.makedirs(os.path.dirname(path), exist_ok=True)

        with open(path, 'w') as f:
            f.write(code)

        # Hot-reload
        self.connect_server(server_name, {"path": path})

        return f"Created tool: {server_name}"
```

---

## Best Practices Checklist

### Agent Design

- [ ] Clear task decomposition
- [ ] Appropriate tool granularity
- [ ] Error handling at each step
- [ ] Progress visibility to user

### Safety

- [ ] Permission system implemented
- [ ] Dangerous operations blocked
- [ ] Sandbox for untrusted code
- [ ] Audit logging enabled
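
The Safety items above can be made concrete with a tiny gate in front of shell execution. A hypothetical sketch (the prefix rules and messages are invented for illustration):

```python
DANGEROUS_PREFIXES = ("rm -rf", "sudo ", "git push --force")

def requires_approval(command: str) -> bool:
    """True if the command matches a dangerous prefix and needs user sign-off."""
    cmd = command.strip()
    return any(cmd.startswith(prefix) for prefix in DANGEROUS_PREFIXES)

def gated_run(command: str, approved: bool = False) -> str:
    """Refuse dangerous commands unless explicitly approved."""
    if requires_approval(command) and not approved:
        return f"BLOCKED (needs approval): {command}"
    # In a real agent, execute here and append to the audit log.
    return f"OK: {command}"
```

A real system would match on parsed arguments rather than raw prefixes, but the shape — classify, block, require explicit approval, log — is the checklist in code form.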

### UX

- [ ] Approval UI is clear
- [ ] Progress updates provided
- [ ] Undo/rollback available
- [ ] Explanation of actions

---

## Resources

- [Cline](https://github.com/cline/cline)
- [OpenAI Codex](https://github.com/openai/codex)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Anthropic Tool Use](https://docs.anthropic.com/claude/docs/tool-use)
691 skills/bun-development/SKILL.md Normal file
@@ -0,0 +1,691 @@
---
name: bun-development
description: "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun."
---

# ⚡ Bun Development

> Fast, modern JavaScript/TypeScript development with the Bun runtime, inspired by [oven-sh/bun](https://github.com/oven-sh/bun).

## When to Use This Skill

Use this skill when:

- Starting new JS/TS projects with Bun
- Migrating from Node.js to Bun
- Optimizing development speed
- Using Bun's built-in tools (bundler, test runner)
- Troubleshooting Bun-specific issues

---

## 1. Getting Started

### 1.1 Installation

```bash
# macOS / Linux
curl -fsSL https://bun.sh/install | bash

# Windows
powershell -c "irm bun.sh/install.ps1 | iex"

# Homebrew
brew tap oven-sh/bun
brew install bun

# npm (if needed)
npm install -g bun

# Upgrade
bun upgrade
```

### 1.2 Why Bun?

| Feature         | Bun            | Node.js                     |
| :-------------- | :------------- | :-------------------------- |
| Startup time    | ~25ms          | ~100ms+                     |
| Package install | 10-100x faster | Baseline                    |
| TypeScript      | Native         | Requires transpiler         |
| JSX             | Native         | Requires transpiler         |
| Test runner     | Built-in       | External (Jest, Vitest)     |
| Bundler         | Built-in       | External (Webpack, esbuild) |

---

## 2. Project Setup

### 2.1 Create New Project

```bash
# Initialize project
bun init

# Creates:
# ├── package.json
# ├── tsconfig.json
# ├── index.ts
# └── README.md

# With specific template
bun create <template> <project-name>

# Examples
bun create react my-app     # React app
bun create next my-app      # Next.js app
bun create vite my-app      # Vite app
bun create elysia my-api    # Elysia API
```

### 2.2 package.json

```json
{
  "name": "my-bun-project",
  "version": "1.0.0",
  "module": "index.ts",
  "type": "module",
  "scripts": {
    "dev": "bun run --watch index.ts",
    "start": "bun run index.ts",
    "test": "bun test",
    "build": "bun build ./index.ts --outdir ./dist",
    "lint": "bunx eslint ."
  },
  "devDependencies": {
    "@types/bun": "latest"
  },
  "peerDependencies": {
    "typescript": "^5.0.0"
  }
}
```

### 2.3 tsconfig.json (Bun-optimized)

```json
{
  "compilerOptions": {
    "lib": ["ESNext"],
    "module": "esnext",
    "target": "esnext",
    "moduleResolution": "bundler",
    "moduleDetection": "force",
    "allowImportingTsExtensions": true,
    "noEmit": true,
    "composite": true,
    "strict": true,
    "downlevelIteration": true,
    "skipLibCheck": true,
    "jsx": "react-jsx",
    "allowSyntheticDefaultImports": true,
    "forceConsistentCasingInFileNames": true,
    "allowJs": true,
    "types": ["bun-types"]
  }
}
```

---

## 3. Package Management

### 3.1 Installing Packages

```bash
# Install from package.json
bun install              # or 'bun i'

# Add dependencies
bun add express          # Regular dependency
bun add -d typescript    # Dev dependency
bun add -D @types/node   # Dev dependency (alias)
bun add --optional pkg   # Optional dependency

# From specific registry
bun add lodash --registry https://registry.npmmirror.com

# Install specific version
bun add react@18.2.0
bun add react@latest
bun add react@next

# From git
bun add github:user/repo
bun add git+https://github.com/user/repo.git
```

### 3.2 Removing & Updating

```bash
# Remove package
bun remove lodash

# Update packages
bun update               # Update all
bun update lodash        # Update specific
bun update --latest      # Update to latest (ignore ranges)

# Check outdated
bun outdated
```

### 3.3 bunx (npx equivalent)

```bash
# Execute package binaries
bunx prettier --write .
bunx tsc --init
bunx create-react-app my-app

# With specific version
bunx -p typescript@4.9 tsc --version

# Run without installing
bunx cowsay "Hello from Bun!"
```

### 3.4 Lockfile

```bash
# bun.lockb is a binary lockfile (faster parsing)
# To generate a text lockfile for debugging:
bun install --yarn       # Creates yarn.lock

# Trust existing lockfile
bun install --frozen-lockfile
```

---

## 4. Running Code

### 4.1 Basic Execution

```bash
# Run TypeScript directly (no build step!)
bun run index.ts

# Run JavaScript
bun run index.js

# Run with arguments
bun run server.ts --port 3000

# Run package.json script
bun run dev
bun run build

# Short form (for scripts)
bun dev
bun build
```

### 4.2 Watch Mode

```bash
# Auto-restart on file changes
bun --watch run index.ts

# With hot reloading
bun --hot run server.ts
```

### 4.3 Environment Variables

```typescript
// .env file is loaded automatically!

// Access environment variables
const apiKey = Bun.env.API_KEY;
const port = Bun.env.PORT ?? "3000";

// Or use process.env (Node.js compatible)
const dbUrl = process.env.DATABASE_URL;
```

```bash
# Run with specific env file
bun --env-file=.env.production run index.ts
```

---

## 5. Built-in APIs

### 5.1 File System (Bun.file)

```typescript
// Read file
const file = Bun.file("./data.json");
const text = await file.text();
const json = await file.json();
const buffer = await file.arrayBuffer();

// File info
console.log(file.size); // bytes
console.log(file.type); // MIME type

// Write file
await Bun.write("./output.txt", "Hello, Bun!");
await Bun.write("./data.json", JSON.stringify({ foo: "bar" }));

// Stream large files
const reader = file.stream();
for await (const chunk of reader) {
  console.log(chunk);
}
```

### 5.2 HTTP Server (Bun.serve)

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(request) {
    const url = new URL(request.url);

    if (url.pathname === "/") {
      return new Response("Hello World!");
    }

    if (url.pathname === "/api/users") {
      return Response.json([
        { id: 1, name: "Alice" },
        { id: 2, name: "Bob" },
      ]);
    }

    return new Response("Not Found", { status: 404 });
  },

  error(error) {
    return new Response(`Error: ${error.message}`, { status: 500 });
  },
});

console.log(`Server running at http://localhost:${server.port}`);
```

### 5.3 WebSocket Server

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(req, server) {
    // Upgrade to WebSocket
    if (server.upgrade(req)) {
      return; // Upgraded
    }
    return new Response("Upgrade failed", { status: 500 });
  },

  websocket: {
    open(ws) {
      console.log("Client connected");
      ws.send("Welcome!");
    },

    message(ws, message) {
      console.log(`Received: ${message}`);
      ws.send(`Echo: ${message}`);
    },

    close(ws) {
      console.log("Client disconnected");
    },
  },
});
```

### 5.4 SQLite (bun:sqlite)

```typescript
import { Database } from "bun:sqlite";

const db = new Database("mydb.sqlite");

// Create table
db.run(`
  CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE
  )
`);

// Insert
const insert = db.prepare("INSERT INTO users (name, email) VALUES (?, ?)");
insert.run("Alice", "alice@example.com");

// Query
const query = db.prepare("SELECT * FROM users WHERE name = ?");
const user = query.get("Alice");
console.log(user); // { id: 1, name: "Alice", email: "alice@example.com" }

// Query all
const allUsers = db.query("SELECT * FROM users").all();
```

### 5.5 Password Hashing

```typescript
// Hash password
const password = "super-secret";
const hash = await Bun.password.hash(password);

// Verify password
const isValid = await Bun.password.verify(password, hash);
console.log(isValid); // true

// With algorithm options
const bcryptHash = await Bun.password.hash(password, {
  algorithm: "bcrypt",
  cost: 12,
});
```

---

## 6. Testing

### 6.1 Basic Tests

```typescript
// math.test.ts
import { describe, it, expect, beforeAll, afterAll } from "bun:test";

describe("Math operations", () => {
  it("adds two numbers", () => {
    expect(1 + 1).toBe(2);
  });

  it("subtracts two numbers", () => {
    expect(5 - 3).toBe(2);
  });
});
```

### 6.2 Running Tests

```bash
# Run all tests
bun test

# Run specific file
bun test math.test.ts

# Run matching pattern
bun test -t "adds"       # --test-name-pattern

# Watch mode
bun test --watch

# With coverage
bun test --coverage

# Timeout
bun test --timeout 5000
```

### 6.3 Matchers

```typescript
import { expect, test } from "bun:test";

test("matchers", async () => {
  // Equality
  expect(1).toBe(1);
  expect({ a: 1 }).toEqual({ a: 1 });
  expect([1, 2]).toContain(1);

  // Comparisons
  expect(10).toBeGreaterThan(5);
  expect(5).toBeLessThanOrEqual(5);

  // Truthiness
  expect(true).toBeTruthy();
  expect(null).toBeNull();
  expect(undefined).toBeUndefined();

  // Strings
  expect("hello").toMatch(/ell/);
  expect("hello").toContain("ell");

  // Arrays
  expect([1, 2, 3]).toHaveLength(3);

  // Exceptions
  expect(() => {
    throw new Error("fail");
  }).toThrow("fail");

  // Async (the test callback must be async to use await)
  await expect(Promise.resolve(1)).resolves.toBe(1);
  await expect(Promise.reject("err")).rejects.toBe("err");
});
```

### 6.4 Mocking

```typescript
import { expect, mock, spyOn } from "bun:test";

// Mock function
const mockFn = mock((x: number) => x * 2);
mockFn(5);
expect(mockFn).toHaveBeenCalled();
expect(mockFn).toHaveBeenCalledWith(5);
expect(mockFn.mock.results[0].value).toBe(10);

// Spy on method
const obj = {
  method: () => "original",
};
const spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
expect(spy).toHaveBeenCalled();
```

---

## 7. Bundling

### 7.1 Basic Build

```bash
# Bundle for production
bun build ./src/index.ts --outdir ./dist

# With options
bun build ./src/index.ts \
  --outdir ./dist \
  --target browser \
  --minify \
  --sourcemap
```

### 7.2 Build API

```typescript
const result = await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  target: "browser", // or "bun", "node"
  minify: true,
  sourcemap: "external",
  splitting: true,
  format: "esm",

  // External packages (not bundled)
  external: ["react", "react-dom"],

  // Define globals
  define: {
    "process.env.NODE_ENV": JSON.stringify("production"),
  },

  // Naming
  naming: {
    entry: "[name].[hash].js",
    chunk: "chunks/[name].[hash].js",
    asset: "assets/[name].[hash][ext]",
  },
});

if (!result.success) {
  console.error(result.logs);
}
```

### 7.3 Compile to Executable

```bash
# Create standalone executable
bun build ./src/cli.ts --compile --outfile myapp

# Cross-compile
bun build ./src/cli.ts --compile --target=bun-linux-x64 --outfile myapp-linux
bun build ./src/cli.ts --compile --target=bun-darwin-arm64 --outfile myapp-mac

# With embedded assets
bun build ./src/cli.ts --compile --outfile myapp --embed ./assets
```

---

## 8. Migration from Node.js

### 8.1 Compatibility

```typescript
// Most Node.js APIs work out of the box
import fs from "fs";
import path from "path";
import crypto from "crypto";

// process is global
console.log(process.cwd());
console.log(process.env.HOME);

// Buffer is global
const buf = Buffer.from("hello");

// __dirname and __filename work
console.log(__dirname);
console.log(__filename);
```

### 8.2 Common Migration Steps

```bash
# 1. Install Bun
curl -fsSL https://bun.sh/install | bash

# 2. Replace package manager
rm -rf node_modules package-lock.json
bun install

# 3. Update scripts in package.json
# "start": "node index.js"  →  "start": "bun run index.ts"
# "test": "jest"            →  "test": "bun test"

# 4. Add Bun types
bun add -d @types/bun
```

### 8.3 Differences from Node.js

```typescript
// ❌ Node.js specific (may not work)
require("module");            // Use import instead
require.resolve("pkg");       // Use import.meta.resolve
__non_webpack_require__;      // Not supported

// ✅ Bun equivalents
import pkg from "pkg";
const resolved = import.meta.resolve("pkg");
Bun.resolveSync("pkg", process.cwd());

// ❌ These globals differ
process.hrtime();             // Use Bun.nanoseconds()
setImmediate();               // Use queueMicrotask()

// ✅ Bun-specific features
const file = Bun.file("./data.txt");     // Fast file API
Bun.serve({ port: 3000, fetch: ... });   // Fast HTTP server
Bun.password.hash(password);             // Built-in hashing
```

---

## 9. Performance Tips

### 9.1 Use Bun-native APIs

```typescript
// Slow (Node.js compat)
import fs from "fs/promises";
const content = await fs.readFile("./data.txt", "utf-8");

// Fast (Bun-native)
const file = Bun.file("./data.txt");
const text = await file.text();
```

### 9.2 Use Bun.serve for HTTP

```typescript
// Don't: Express/Fastify (overhead)
import express from "express";
const app = express();

// Do: Bun.serve (native, 4-10x faster)
Bun.serve({
  fetch(req) {
    return new Response("Hello!");
  },
});

// Or use Elysia (Bun-optimized framework)
import { Elysia } from "elysia";
new Elysia().get("/", () => "Hello!").listen(3000);
```

### 9.3 Bundle for Production

```bash
# Always bundle and minify for production
bun build ./src/index.ts --outdir ./dist --minify --target node

# Then run the bundle
bun run ./dist/index.js
```

---

## Quick Reference

| Task         | Command                                    |
| :----------- | :----------------------------------------- |
| Init project | `bun init`                                 |
| Install deps | `bun install`                              |
| Add package  | `bun add <pkg>`                            |
| Run script   | `bun run <script>`                         |
| Run file     | `bun run file.ts`                          |
| Watch mode   | `bun --watch run file.ts`                  |
| Run tests    | `bun test`                                 |
| Build        | `bun build ./src/index.ts --outdir ./dist` |
| Execute pkg  | `bunx <pkg>`                               |

---

## Resources

- [Bun Documentation](https://bun.sh/docs)
- [Bun GitHub](https://github.com/oven-sh/bun)
- [Elysia Framework](https://elysiajs.com/)
- [Bun Discord](https://bun.sh/discord)
846 skills/github-workflow-automation/SKILL.md Normal file
@@ -0,0 +1,846 @@
---
name: github-workflow-automation
description: "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues."
---

# 🔧 GitHub Workflow Automation

> Patterns for automating GitHub workflows with AI assistance, inspired by [Gemini CLI](https://github.com/google-gemini/gemini-cli) and modern DevOps practices.

## When to Use This Skill

Use this skill when:

- Automating PR reviews with AI
- Setting up issue triage automation
- Creating GitHub Actions workflows
- Integrating AI into CI/CD pipelines
- Automating Git operations (rebases, cherry-picks)
---

## 1. Automated PR Review

### 1.1 PR Review Action

```yaml
# .github/workflows/ai-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed
        run: |
          files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
          echo "files<<EOF" >> $GITHUB_OUTPUT
          echo "$files" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Get diff
        id: diff
        run: |
          diff=$(git diff origin/${{ github.base_ref }}...HEAD)
          echo "diff<<EOF" >> $GITHUB_OUTPUT
          echo "$diff" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Review
        uses: actions/github-script@v7
        with:
          script: |
            const { Anthropic } = require('@anthropic-ai/sdk');
            const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

            const response = await client.messages.create({
              model: "claude-3-sonnet-20240229",
              max_tokens: 4096,
              messages: [{
                role: "user",
                content: `Review this PR diff and provide feedback:

            Changed files: ${{ steps.changed.outputs.files }}

            Diff:
            ${{ steps.diff.outputs.diff }}

            Provide:
            1. Summary of changes
            2. Potential issues or bugs
            3. Suggestions for improvement
            4. Security concerns if any

            Format as GitHub markdown.`
              }]
            });

            await github.rest.pulls.createReview({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number,
              body: response.content[0].text,
              event: 'COMMENT'
            });
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

### 1.2 Review Comment Patterns

````markdown
# AI Review Structure

## 📋 Summary

Brief description of what this PR does.

## ✅ What looks good

- Well-structured code
- Good test coverage
- Clear naming conventions

## ⚠️ Potential Issues

1. **Line 42**: Possible null pointer exception

   ```javascript
   // Current
   user.profile.name;
   // Suggested
   user?.profile?.name ?? "Unknown";
   ```

2. **Line 78**: Consider error handling

   ```javascript
   // Add try-catch or .catch()
   ```

## 💡 Suggestions

- Consider extracting the validation logic into a separate function
- Add JSDoc comments for public methods

## 🔒 Security Notes

- No sensitive data exposure detected
- API key handling looks correct
````

### 1.3 Focused Reviews

```yaml
# Review only specific file types
- name: Filter code files
  run: |
    files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | \
      grep -E '\.(ts|tsx|js|jsx|py|go)$' || true)
    echo "code_files=$files" >> $GITHUB_OUTPUT

# Review with context
- name: AI Review with context
  run: |
    # Include relevant context files
    context=""
    for file in ${{ steps.changed.outputs.files }}; do
      if [[ -f "$file" ]]; then
        context+="=== $file ===\n$(cat $file)\n\n"
      fi
    done

    # Send to AI with full file context
```
|

---

## 2. Issue Triage Automation

### 2.1 Auto-label Issues

```yaml
# .github/workflows/issue-triage.yml
name: Issue Triage

on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      issues: write

    steps:
      - name: Analyze issue
        uses: actions/github-script@v7
        with:
          script: |
            const issue = context.payload.issue;

            // Call AI to analyze
            const analysis = await analyzeIssue(issue.title, issue.body);

            // Apply labels
            const labels = [];

            if (analysis.type === 'bug') {
              labels.push('bug');
              if (analysis.severity === 'high') labels.push('priority: high');
            } else if (analysis.type === 'feature') {
              labels.push('enhancement');
            } else if (analysis.type === 'question') {
              labels.push('question');
            }

            if (analysis.area) {
              labels.push(`area: ${analysis.area}`);
            }

            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: issue.number,
              labels: labels
            });

            // Add initial response
            if (analysis.type === 'bug' && !analysis.hasReproSteps) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: issue.number,
                body: `Thanks for reporting this issue!

            To help us investigate, could you please provide:
            - Steps to reproduce the issue
            - Expected behavior
            - Actual behavior
            - Environment (OS, version, etc.)

            This will help us resolve your issue faster. 🙏`
              });
            }
```

### 2.2 Issue Analysis Prompt

```typescript
const TRIAGE_PROMPT = `
Analyze this GitHub issue and classify it:

Title: {title}
Body: {body}

Return JSON with:
{
  "type": "bug" | "feature" | "question" | "docs" | "other",
  "severity": "low" | "medium" | "high" | "critical",
  "area": "frontend" | "backend" | "api" | "docs" | "ci" | "other",
  "summary": "one-line summary",
  "hasReproSteps": boolean,
  "isFirstContribution": boolean,
  "suggestedLabels": ["label1", "label2"],
  "suggestedAssignees": ["username"] // based on area expertise
}
`;
```
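The model's JSON reply should not be trusted blindly before labels are applied. A minimal validation sketch (the `parseTriage` helper and its fallback defaults are assumptions for illustration, not part of the workflow above):

```javascript
// Hypothetical helper: validate the model's triage JSON before applying labels.
const TYPES = ["bug", "feature", "question", "docs", "other"];
const SEVERITIES = ["low", "medium", "high", "critical"];

function parseTriage(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // model returned non-JSON; fall back to manual triage
  }
  // Coerce unknown values to safe defaults instead of failing the workflow
  return {
    type: TYPES.includes(parsed.type) ? parsed.type : "other",
    severity: SEVERITIES.includes(parsed.severity) ? parsed.severity : "low",
    hasReproSteps: Boolean(parsed.hasReproSteps),
    suggestedLabels: Array.isArray(parsed.suggestedLabels)
      ? parsed.suggestedLabels.filter((l) => typeof l === "string")
      : [],
  };
}
```

Returning `null` on malformed output lets the workflow skip auto-labeling rather than crash the job.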

### 2.3 Stale Issue Management

```yaml
# .github/workflows/stale.yml
name: Manage Stale Issues

on:
  schedule:
    - cron: "0 0 * * *" # Daily

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          stale-issue-message: |
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed in 14 days if no further activity occurs.

            If this issue is still relevant:
            - Add a comment with an update
            - Remove the `stale` label

            Thank you for your contributions! 🙏

          stale-pr-message: |
            This PR has been automatically marked as stale. Please update it or it
            will be closed in 14 days.

          days-before-stale: 60
          days-before-close: 14
          stale-issue-label: "stale"
          stale-pr-label: "stale"
          exempt-issue-labels: "pinned,security,in-progress"
          exempt-pr-labels: "pinned,security"
```

---

## 3. CI/CD Integration

### 3.1 Smart Test Selection

```yaml
# .github/workflows/smart-tests.yml
name: Smart Test Selection

on:
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    outputs:
      test_suites: ${{ steps.analyze.outputs.suites }}

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Analyze changes
        id: analyze
        run: |
          # Get changed files
          changed=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)

          # Determine which test suites to run
          suites="[]"

          if echo "$changed" | grep -q "^src/api/"; then
            suites=$(echo $suites | jq '. + ["api"]')
          fi

          if echo "$changed" | grep -q "^src/frontend/"; then
            suites=$(echo $suites | jq '. + ["frontend"]')
          fi

          if echo "$changed" | grep -q "^src/database/"; then
            suites=$(echo $suites | jq '. + ["database", "api"]')
          fi

          # If nothing specific, run all
          if [ "$suites" = "[]" ]; then
            suites='["all"]'
          fi

          echo "suites=$suites" >> $GITHUB_OUTPUT

  test:
    needs: analyze
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: ${{ fromJson(needs.analyze.outputs.test_suites) }}

    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: |
          if [ "${{ matrix.suite }}" = "all" ]; then
            npm test
          else
            npm test -- --suite ${{ matrix.suite }}
          fi
```

### 3.2 Deployment with AI Validation

```yaml
# .github/workflows/deploy.yml
name: Deploy with AI Validation

on:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Get deployment changes
        id: changes
        run: |
          # Get commits since last deployment
          last_deploy=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
          if [ -n "$last_deploy" ]; then
            changes=$(git log --oneline $last_deploy..HEAD)
          else
            changes=$(git log --oneline -10)
          fi
          echo "changes<<EOF" >> $GITHUB_OUTPUT
          echo "$changes" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Risk Assessment
        id: assess
        uses: actions/github-script@v7
        with:
          script: |
            // Analyze changes for deployment risk
            const prompt = `
            Analyze these changes for deployment risk:

            ${process.env.CHANGES}

            Return JSON:
            {
              "riskLevel": "low" | "medium" | "high",
              "concerns": ["concern1", "concern2"],
              "recommendations": ["rec1", "rec2"],
              "requiresManualApproval": boolean
            }
            `;

            // Call AI and parse response
            const analysis = await callAI(prompt);

            if (analysis.riskLevel === 'high') {
              core.setFailed('High-risk deployment detected. Manual review required.');
            }

            return analysis;
        env:
          CHANGES: ${{ steps.changes.outputs.changes }}

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Deploy
        run: |
          echo "Deploying to production..."
          # Deployment commands here
```

### 3.3 Rollback Automation

```yaml
# .github/workflows/rollback.yml
name: Automated Rollback

on:
  workflow_dispatch:
    inputs:
      reason:
        description: "Reason for rollback"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find last stable version
        id: stable
        run: |
          # Find last successful deployment
          stable=$(git tag -l 'v*' --sort=-version:refname | head -1)
          echo "version=$stable" >> $GITHUB_OUTPUT

      - name: Rollback
        run: |
          git checkout ${{ steps.stable.outputs.version }}
          # Deploy stable version
          npm run deploy

      - name: Notify team
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "🔄 Production rolled back to ${{ steps.stable.outputs.version }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Rollback executed*\n• Version: `${{ steps.stable.outputs.version }}`\n• Reason: ${{ inputs.reason }}\n• Triggered by: ${{ github.actor }}"
                  }
                }
              ]
            }
```

---

## 4. Git Operations

### 4.1 Automated Rebasing

```yaml
# .github/workflows/auto-rebase.yml
name: Auto Rebase

on:
  issue_comment:
    types: [created]

jobs:
  rebase:
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/rebase')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      - name: Rebase PR
        run: |
          # Fetch PR branch
          gh pr checkout ${{ github.event.issue.number }}

          # Rebase onto main
          git fetch origin main
          git rebase origin/main

          # Force push
          git push --force-with-lease
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Comment result
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '✅ Successfully rebased onto main!'
            })
```

### 4.2 Smart Cherry-Pick

```typescript
// AI-assisted cherry-pick that handles conflicts
async function smartCherryPick(commitHash: string, targetBranch: string) {
  // Get commit info and the list of files it touches
  const commitInfo = await exec(`git show ${commitHash} --stat`);
  const affectedFiles = await exec(
    `git show ${commitHash} --name-only --pretty=format:`
  );

  // Check for potential conflicts
  const targetDiff = await exec(
    `git diff ${targetBranch}...HEAD -- ${affectedFiles}`
  );

  // AI analysis
  const analysis = await ai.analyze(`
    I need to cherry-pick this commit to ${targetBranch}:

    ${commitInfo}

    Current state of affected files on ${targetBranch}:
    ${targetDiff}

    Will there be conflicts? If so, suggest resolution strategy.
  `);

  if (analysis.willConflict) {
    // Create branch for manual resolution
    await exec(
      `git checkout -b cherry-pick-${commitHash.slice(0, 7)} ${targetBranch}`
    );
    const result = await exec(`git cherry-pick ${commitHash}`, {
      allowFail: true,
    });

    if (result.failed) {
      // AI-assisted conflict resolution
      const conflicts = await getConflicts();
      for (const conflict of conflicts) {
        const resolution = await ai.resolveConflict(conflict);
        await applyResolution(conflict.file, resolution);
      }
    }
  } else {
    await exec(`git checkout ${targetBranch}`);
    await exec(`git cherry-pick ${commitHash}`);
  }
}
```

### 4.3 Branch Cleanup

```yaml
# .github/workflows/branch-cleanup.yml
name: Branch Cleanup

on:
  schedule:
    - cron: '0 0 * * 0' # Weekly
  workflow_dispatch:

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find stale branches
        id: stale
        run: |
          # Branches not updated in 30 days
          stale=$(git for-each-ref --sort=-committerdate refs/remotes/origin \
            --format='%(refname:short) %(committerdate:relative)' | \
            grep -E '[3-9][0-9]+ days|[0-9]+ months|[0-9]+ years' | \
            grep -v 'origin/main\|origin/develop' | \
            cut -d' ' -f1 | sed 's|origin/||')

          echo "branches<<EOF" >> $GITHUB_OUTPUT
          echo "$stale" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Create cleanup PR
        if: steps.stale.outputs.branches != ''
        uses: actions/github-script@v7
        with:
          script: |
            const branches = `${{ steps.stale.outputs.branches }}`.split('\n').filter(Boolean);

            const body = `## 🧹 Stale Branch Cleanup

            The following branches haven't been updated in over 30 days:

            ${branches.map(b => `- \`${b}\``).join('\n')}

            ### Actions:
            - [ ] Review each branch
            - [ ] Delete branches that are no longer needed
            - Comment \`/keep branch-name\` to preserve specific branches
            `;

            await github.rest.issues.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Stale Branch Cleanup',
              body: body,
              labels: ['housekeeping']
            });
```

---

## 5. On-Demand Assistance

### 5.1 @mention Bot

```yaml
# .github/workflows/mention-bot.yml
name: AI Mention Bot

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]

jobs:
  respond:
    if: contains(github.event.comment.body, '@ai-helper')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Extract question
        id: question
        run: |
          # Extract text after @ai-helper
          question=$(echo "${{ github.event.comment.body }}" | sed 's/.*@ai-helper//')
          echo "question=$question" >> $GITHUB_OUTPUT

      - name: Get context
        id: context
        run: |
          if [ "${{ github.event.issue.pull_request }}" != "" ]; then
            # It's a PR - get diff
            gh pr diff ${{ github.event.issue.number }} > context.txt
          else
            # It's an issue - get description
            gh issue view ${{ github.event.issue.number }} --json body -q .body > context.txt
          fi
          # Use a heredoc so multiline context survives $GITHUB_OUTPUT
          echo "context<<EOF" >> $GITHUB_OUTPUT
          cat context.txt >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: AI Response
        uses: actions/github-script@v7
        with:
          script: |
            const response = await ai.chat(`
            Context: ${process.env.CONTEXT}

            Question: ${process.env.QUESTION}

            Provide a helpful, specific answer. Include code examples if relevant.
            `);

            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: response
            });
        env:
          CONTEXT: ${{ steps.context.outputs.context }}
          QUESTION: ${{ steps.question.outputs.question }}
```

### 5.2 Command Patterns

```markdown
## Available Commands

| Command              | Description                 |
| :------------------- | :-------------------------- |
| `@ai-helper explain` | Explain the code in this PR |
| `@ai-helper review`  | Request AI code review      |
| `@ai-helper fix`     | Suggest fixes for issues    |
| `@ai-helper test`    | Generate test cases         |
| `@ai-helper docs`    | Generate documentation      |
| `/rebase`            | Rebase PR onto main         |
| `/update`            | Update PR branch from main  |
| `/approve`           | Mark as approved by bot     |
| `/label bug`         | Add 'bug' label             |
| `/assign @user`      | Assign to user              |
```
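A sketch of how a workflow script might route these commands to jobs (the handler map and `dispatch` helper are illustrative assumptions, not an existing action; only a few commands from the table are shown):

```javascript
// Map each slash/mention command to a handler; first match wins.
const handlers = {
  "/rebase": () => "rebase",
  "/update": () => "update",
  "@ai-helper review": () => "ai-review",
  "@ai-helper explain": () => "ai-explain",
};

function dispatch(commentBody) {
  const trimmed = commentBody.trim();
  for (const [command, handler] of Object.entries(handlers)) {
    if (trimmed.startsWith(command)) return handler();
  }
  return null; // not a command; ignore the comment
}
```

Matching on the comment prefix keeps ordinary discussion comments from triggering jobs.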

---

## 6. Repository Configuration

### 6.1 CODEOWNERS

```
# .github/CODEOWNERS

# Global owners
* @org/core-team

# Frontend
/src/frontend/ @org/frontend-team
*.tsx @org/frontend-team
*.css @org/frontend-team

# Backend
/src/api/ @org/backend-team
/src/database/ @org/backend-team

# Infrastructure
/.github/ @org/devops-team
/terraform/ @org/devops-team
Dockerfile @org/devops-team

# Docs
/docs/ @org/docs-team
*.md @org/docs-team

# Security-sensitive
/src/auth/ @org/security-team
/src/crypto/ @org/security-team
```

### 6.2 Branch Protection

```yaml
# Set up via GitHub API
- name: Configure branch protection
  uses: actions/github-script@v7
  with:
    script: |
      await github.rest.repos.updateBranchProtection({
        owner: context.repo.owner,
        repo: context.repo.repo,
        branch: 'main',
        required_status_checks: {
          strict: true,
          contexts: ['test', 'lint', 'ai-review']
        },
        enforce_admins: true,
        required_pull_request_reviews: {
          required_approving_review_count: 1,
          require_code_owner_reviews: true,
          dismiss_stale_reviews: true
        },
        restrictions: null,
        required_linear_history: true,
        allow_force_pushes: false,
        allow_deletions: false
      });
```

---

## Best Practices

### Security

- [ ] Store API keys in GitHub Secrets
- [ ] Use minimal permissions in workflows
- [ ] Validate all inputs
- [ ] Don't expose sensitive data in logs

### Performance

- [ ] Cache dependencies
- [ ] Use matrix builds for parallel testing
- [ ] Skip unnecessary jobs with path filters
- [ ] Use self-hosted runners for heavy workloads

### Reliability

- [ ] Add timeouts to jobs
- [ ] Handle rate limits gracefully
- [ ] Implement retry logic
- [ ] Have rollback procedures
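For the retry and rate-limit items above, a minimal exponential-backoff wrapper is usually enough; this `withRetry` helper is a sketch, not a library API:

```javascript
// Retry an async operation with exponential backoff between attempts.
async function withRetry(fn, { attempts = 3, baseMs = 500 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (i < attempts - 1) {
        // 500ms, 1000ms, 2000ms, ... between tries
        await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts failed
}
```

Wrap flaky API calls at their call site, e.g. `withRetry(() => octokit.rest.pulls.get({ ... }))` (hypothetical usage).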

---

## Resources

- [Gemini CLI GitHub Action](https://github.com/google-github-actions/run-gemini-cli)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [GitHub REST API](https://docs.github.com/en/rest)
- [CODEOWNERS Syntax](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners)
skills/javascript-mastery/SKILL.md (new file, 645 lines)

---
name: javascript-mastery
description: "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals."
---

# 🧠 JavaScript Mastery

> 33+ essential JavaScript concepts every developer should know, inspired by [33-js-concepts](https://github.com/leonardomso/33-js-concepts).

## When to Use This Skill

Use this skill when:

- Explaining JavaScript concepts
- Debugging tricky JS behavior
- Teaching JavaScript fundamentals
- Reviewing code for JS best practices
- Understanding language quirks

---

## 1. Fundamentals

### 1.1 Primitive Types

JavaScript has 7 primitive types:

```javascript
// String
const str = "hello";

// Number (integers and floats)
const num = 42;
const float = 3.14;

// BigInt (for large integers)
const big = 9007199254740991n;

// Boolean
const bool = true;

// Undefined
let undef; // undefined

// Null
const empty = null;

// Symbol (unique identifiers)
const sym = Symbol("description");
```

**Key points**:

- Primitives are immutable
- Passed by value
- `typeof null === "object"` is a historical bug

### 1.2 Type Coercion

JavaScript implicitly converts types:

```javascript
// String coercion
"5" + 3; // "53" (number → string)
"5" - 3; // 2 (string → number)

// Boolean coercion
Boolean(""); // false
Boolean("hello"); // true
Boolean(0); // false
Boolean([]); // true (!)

// Equality coercion
"5" == 5; // true (coerces)
"5" === 5; // false (strict)
```

**Falsy values** (8 total):
`false`, `0`, `-0`, `0n`, `""`, `null`, `undefined`, `NaN`
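Those eight values are exactly what `Boolean` rejects, which makes compacting an array a one-liner (a sketch of the common idiom):

```javascript
// Boolean used as a predicate drops every falsy value.
const mixed = [0, 1, "", "hi", null, undefined, NaN, 2n, false];
const truthy = mixed.filter(Boolean);
// truthy is [1, "hi", 2n]
```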

### 1.3 Equality Operators

```javascript
// == (loose equality) - coerces types
null == undefined; // true
"1" == 1; // true

// === (strict equality) - no coercion
null === undefined; // false
"1" === 1; // false

// Object.is() - handles edge cases
Object.is(NaN, NaN); // true (NaN === NaN is false!)
Object.is(-0, 0); // false (0 === -0 is true!)
```

**Rule**: Always use `===` unless you have a specific reason not to.

---

## 2. Scope & Closures

### 2.1 Scope Types

```javascript
// Global scope
var globalVar = "global";

function outer() {
  // Function scope
  var functionVar = "function";

  if (true) {
    // Block scope (let/const only)
    let blockVar = "block";
    const alsoBlock = "block";
    var notBlock = "function"; // var ignores blocks!
  }
}
```

### 2.2 Closures

A closure is a function that remembers its lexical scope:

```javascript
function createCounter() {
  let count = 0; // "closed over" variable

  return {
    increment() {
      return ++count;
    },
    decrement() {
      return --count;
    },
    getCount() {
      return count;
    },
  };
}

const counter = createCounter();
counter.increment(); // 1
counter.increment(); // 2
counter.getCount(); // 2
```

**Common use cases**:

- Data privacy (module pattern)
- Function factories
- Partial application
- Memoization
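Memoization, the last use case in that list, is a small closure in practice: the cache lives in the outer scope and is invisible to callers (a sketch; this `memoize` is illustrative, not a library API):

```javascript
// memoize: wraps a single-argument function with a private cache.
function memoize(fn) {
  const cache = new Map(); // closed-over, invisible to callers

  return function (arg) {
    if (cache.has(arg)) return cache.get(arg);
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

let calls = 0;
const slowSquare = memoize((n) => {
  calls++; // count real computations
  return n * n;
});

slowSquare(4); // computes: 16
slowSquare(4); // cached: 16, calls is still 1
```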

### 2.3 var vs let vs const

```javascript
// var - function scoped, hoisted, can redeclare
var x = 1;
var x = 2; // OK

// let - block scoped, hoisted (TDZ), no redeclare
let y = 1;
// let y = 2; // Error!

// const - like let, but can't reassign
const z = 1;
// z = 2; // Error!

// BUT: const objects are mutable
const obj = { a: 1 };
obj.a = 2; // OK
obj.b = 3; // OK
```

---

## 3. Functions & Execution

### 3.1 Call Stack

```javascript
function first() {
  console.log("first start");
  second();
  console.log("first end");
}

function second() {
  console.log("second");
}

first();
// Output:
// "first start"
// "second"
// "first end"
```

Stack overflow example:

```javascript
function infinite() {
  infinite(); // No base case!
}
infinite(); // RangeError: Maximum call stack size exceeded
```

### 3.2 Hoisting

```javascript
// Variable hoisting
console.log(a); // undefined (hoisted, not initialized)
var a = 5;

console.log(b); // ReferenceError (TDZ)
let b = 5;

// Function hoisting
sayHi(); // Works!
function sayHi() {
  console.log("Hi!");
}

// Function expressions don't hoist
sayBye(); // TypeError
var sayBye = function () {
  console.log("Bye!");
};
```

### 3.3 this Keyword

```javascript
// Global context
console.log(this); // window (browser) or global (Node)

// Object method
const obj = {
  name: "Alice",
  greet() {
    console.log(this.name); // "Alice"
  },
};

// Arrow functions (lexical this)
const obj2 = {
  name: "Bob",
  greet: () => {
    console.log(this.name); // undefined (inherits outer this)
  },
};

// Explicit binding
function greet() {
  console.log(this.name);
}
greet.call({ name: "Charlie" }); // "Charlie"
greet.apply({ name: "Diana" }); // "Diana"
const bound = greet.bind({ name: "Eve" });
bound(); // "Eve"
```

---

## 4. Event Loop & Async

### 4.1 Event Loop

```javascript
console.log("1");

setTimeout(() => console.log("2"), 0);

Promise.resolve().then(() => console.log("3"));

console.log("4");

// Output: 1, 4, 3, 2
// Why? Microtasks (Promises) run before macrotasks (setTimeout)
```

**Execution order**:

1. Synchronous code (call stack)
2. Microtasks (Promise callbacks, queueMicrotask)
3. Macrotasks (setTimeout, setInterval, I/O)
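This ordering holds even when microtasks enqueue more microtasks: the whole microtask queue drains before the next macrotask runs. A small sketch of that edge case:

```javascript
const order = [];

setTimeout(() => order.push("macro"), 0);

Promise.resolve()
  .then(() => order.push("micro 1"))
  .then(() => order.push("micro 2")); // chained .then is a *new* microtask

order.push("sync");

setTimeout(() => {
  // Every queued microtask ran first, then the earlier macrotask.
  console.log(order); // ["sync", "micro 1", "micro 2", "macro"]
}, 0);
```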

### 4.2 Callbacks

```javascript
// Callback pattern
function fetchData(callback) {
  setTimeout(() => {
    callback(null, { data: "result" });
  }, 1000);
}

// Error-first convention
fetchData((error, result) => {
  if (error) {
    console.error(error);
    return;
  }
  console.log(result);
});

// Callback hell (avoid this!)
getData((data) => {
  processData(data, (processed) => {
    saveData(processed, (saved) => {
      notify(saved, () => {
        // 😱 Pyramid of doom
      });
    });
  });
});
```

### 4.3 Promises

```javascript
// Creating a Promise
const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve("Success!");
    // or: reject(new Error("Failed!"));
  }, 1000);
});

// Consuming Promises
promise
  .then((result) => console.log(result))
  .catch((error) => console.error(error))
  .finally(() => console.log("Done"));

// Promise combinators
Promise.all([p1, p2, p3]); // All must succeed
Promise.allSettled([p1, p2]); // Wait for all, get status
Promise.race([p1, p2]); // First to settle
Promise.any([p1, p2]); // First to succeed
```

### 4.4 async/await

```javascript
async function fetchUserData(userId) {
  try {
    const response = await fetch(`/api/users/${userId}`);
    if (!response.ok) throw new Error("Failed to fetch");
    const user = await response.json();
    return user;
  } catch (error) {
    console.error("Error:", error);
    throw error; // Re-throw for caller to handle
  }
}

// Parallel execution (parse each response so callers get data, not Response objects)
async function fetchAll() {
  const [users, posts] = await Promise.all([
    fetch("/api/users").then((r) => r.json()),
    fetch("/api/posts").then((r) => r.json()),
  ]);
  return { users, posts };
}
```
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## 5. Functional Programming

### 5.1 Higher-Order Functions

Functions that take or return functions:

```javascript
// Takes a function
const numbers = [1, 2, 3];
const doubled = numbers.map((n) => n * 2); // [2, 4, 6]

// Returns a function
function multiply(a) {
  return function (b) {
    return a * b;
  };
}
const double = multiply(2);
double(5); // 10
```

### 5.2 Pure Functions

```javascript
// Pure: same input → same output, no side effects
function add(a, b) {
  return a + b;
}

// Impure: modifies external state
let total = 0;
function addToTotal(value) {
  total += value; // Side effect!
  return total;
}

// Impure: depends on external state
function getDiscount(price) {
  return price * globalDiscountRate; // External dependency
}
```

### 5.3 map, filter, reduce

```javascript
const users = [
  { name: "Alice", age: 25 },
  { name: "Bob", age: 30 },
  { name: "Charlie", age: 35 },
];

// map: transform each element
const names = users.map((u) => u.name);
// ["Alice", "Bob", "Charlie"]

// filter: keep elements matching condition
const adults = users.filter((u) => u.age >= 30);
// [{ name: "Bob", ... }, { name: "Charlie", ... }]

// reduce: accumulate into single value
const totalAge = users.reduce((sum, u) => sum + u.age, 0);
// 90

// Chaining
const result = users
  .filter((u) => u.age >= 30)
  .map((u) => u.name)
  .join(", ");
// "Bob, Charlie"
```

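`reduce` is the most general of the three: both `map` and `filter` can be written in terms of it. A quick sketch:

```javascript
// map expressed with reduce: build a new array of transformed elements
const mapWithReduce = (arr, fn) =>
  arr.reduce((acc, x) => [...acc, fn(x)], []);

// filter expressed with reduce: only keep elements passing the predicate
const filterWithReduce = (arr, pred) =>
  arr.reduce((acc, x) => (pred(x) ? [...acc, x] : acc), []);

mapWithReduce([1, 2, 3], (n) => n * 2); // [2, 4, 6]
filterWithReduce([1, 2, 3, 4], (n) => n % 2 === 0); // [2, 4]
```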
### 5.4 Currying & Composition

```javascript
// Currying: transform f(a, b, c) into f(a)(b)(c)
const curry = (fn) => {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn.apply(this, args);
    }
    return (...moreArgs) => curried(...args, ...moreArgs);
  };
};

const add = curry((a, b, c) => a + b + c);
add(1)(2)(3); // 6
add(1, 2)(3); // 6
add(1)(2, 3); // 6

// Composition: combine functions
const compose =
  (...fns) =>
  (x) =>
    fns.reduceRight((acc, fn) => fn(acc), x);

const pipe =
  (...fns) =>
  (x) =>
    fns.reduce((acc, fn) => fn(acc), x);

const addOne = (x) => x + 1;
const double = (x) => x * 2;

const addThenDouble = compose(double, addOne);
addThenDouble(5); // 12 = (5 + 1) * 2

const doubleThenAdd = pipe(double, addOne);
doubleThenAdd(5); // 11 = (5 * 2) + 1
```

---

## 6. Objects & Prototypes

### 6.1 Prototypal Inheritance

```javascript
// Prototype chain
const animal = {
  speak() {
    console.log("Some sound");
  },
};

const dog = Object.create(animal);
dog.bark = function () {
  console.log("Woof!");
};

dog.speak(); // "Some sound" (inherited)
dog.bark(); // "Woof!" (own method)

// ES6 Classes (syntactic sugar)
class Animal {
  speak() {
    console.log("Some sound");
  }
}

class Dog extends Animal {
  bark() {
    console.log("Woof!");
  }
}
```

### 6.2 Object Methods

```javascript
const obj = { a: 1, b: 2 };

// Keys, values, entries
Object.keys(obj); // ["a", "b"]
Object.values(obj); // [1, 2]
Object.entries(obj); // [["a", 1], ["b", 2]]

// Shallow copy
const copy = { ...obj };
const copy2 = Object.assign({}, obj);

// Freeze (immutable)
const frozen = Object.freeze({ x: 1 });
frozen.x = 2; // Silently fails (or throws in strict mode)

// Seal (no add/delete, can modify)
const sealed = Object.seal({ x: 1 });
sealed.x = 2; // OK
sealed.y = 3; // Fails
delete sealed.x; // Fails
```

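A caveat worth demonstrating: spread and `Object.assign` copy only one level deep, so nested objects remain shared with the source. For a fully independent copy you need a deep clone, e.g. `structuredClone` (available in Node 17+ and modern browsers):

```javascript
const original = { nested: { value: 1 } };

// Shallow copy: the nested object is shared with the source
const shallow = { ...original };
shallow.nested.value = 2;
console.log(original.nested.value); // 2 — the source was mutated

// Deep copy: fully independent of the source
const deep = structuredClone(original);
deep.nested.value = 99;
console.log(original.nested.value); // still 2
```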
---

## 7. Modern JavaScript (ES6+)

### 7.1 Destructuring

```javascript
// Array destructuring
const [first, second, ...rest] = [1, 2, 3, 4, 5];
// first = 1, second = 2, rest = [3, 4, 5]

// Object destructuring
const { name, age, city = "Unknown" } = { name: "Alice", age: 25 };
// name = "Alice", age = 25, city = "Unknown"

// Renaming
const { name: userName } = { name: "Bob" };
// userName = "Bob"

// Nested
const {
  address: { street },
} = { address: { street: "123 Main" } };
```

### 7.2 Spread & Rest

```javascript
// Spread: expand iterable
const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5]; // [1, 2, 3, 4, 5]

const obj1 = { a: 1 };
const obj2 = { ...obj1, b: 2 }; // { a: 1, b: 2 }

// Rest: collect remaining
function sum(...numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}
sum(1, 2, 3, 4); // 10
```

### 7.3 Modules

```javascript
// Named exports
export const PI = 3.14159;
export function square(x) {
  return x * x;
}

// Default export
export default class Calculator {}

// Importing
import Calculator, { PI, square } from "./math.js";
import * as math from "./math.js";

// Dynamic import
const module = await import("./dynamic.js");
```

### 7.4 Optional Chaining & Nullish Coalescing

```javascript
// Optional chaining (?.)
const user = { address: { city: "NYC" } };
const city = user?.address?.city; // "NYC"
const zip = user?.address?.zip; // undefined (no error)
const fn = user?.getName?.(); // undefined if no method

// Nullish coalescing (??)
const value = null ?? "default"; // "default"
const zero = 0 ?? "default"; // 0 (not nullish!)
const empty = "" ?? "default"; // "" (not nullish!)

// Compare with ||
const value2 = 0 || "default"; // "default" (0 is falsy)
```

---

## Quick Reference Card

| Concept        | Key Point                         |
| :------------- | :-------------------------------- |
| `==` vs `===`  | Always use `===`                  |
| `var` vs `let` | Prefer `let`/`const`              |
| Closures       | Function + lexical scope          |
| `this`         | Depends on how function is called |
| Event loop     | Microtasks before macrotasks      |
| Pure functions | Same input → same output          |
| Prototypes     | `__proto__` → prototype chain     |
| `??` vs `\|\|` | `??` only checks null/undefined   |

---

## Resources

- [33 JS Concepts](https://github.com/leonardomso/33-js-concepts)
- [JavaScript.info](https://javascript.info/)
- [MDN JavaScript Guide](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide)
- [You Don't Know JS](https://github.com/getify/You-Dont-Know-JS)

760	skills/llm-app-patterns/SKILL.md	Normal file
@@ -0,0 +1,760 @@
---
name: llm-app-patterns
description: "Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability."
---

# 🤖 LLM Application Patterns

> Production-ready patterns for building LLM applications, inspired by [Dify](https://github.com/langgenius/dify) and industry best practices.

## When to Use This Skill

Use this skill when:

- Designing LLM-powered applications
- Implementing RAG (Retrieval-Augmented Generation)
- Building AI agents with tools
- Setting up LLMOps monitoring
- Choosing between agent architectures

---

## 1. RAG Pipeline Architecture

### Overview

RAG (Retrieval-Augmented Generation) grounds LLM responses in your data.

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Ingest    │────▶│  Retrieve   │────▶│  Generate   │
│  Documents  │     │   Context   │     │  Response   │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       ▼                   ▼                   ▼
  ┌─────────┐        ┌───────────┐       ┌───────────┐
  │ Chunking│        │  Vector   │       │    LLM    │
  │Embedding│        │  Search   │       │ + Context │
  └─────────┘        └───────────┘       └───────────┘
```

### 1.1 Document Ingestion

```python
# Chunking strategies
class ChunkingStrategy:
    # Fixed-size chunks (simple but may break context)
    FIXED_SIZE = "fixed_size"  # e.g., 512 tokens

    # Semantic chunking (preserves meaning)
    SEMANTIC = "semantic"  # Split on paragraphs/sections

    # Recursive splitting (tries multiple separators)
    RECURSIVE = "recursive"  # ["\n\n", "\n", " ", ""]

    # Document-aware (respects structure)
    DOCUMENT_AWARE = "document_aware"  # Headers, lists, etc.

# Recommended settings
CHUNK_CONFIG = {
    "chunk_size": 512,  # tokens
    "chunk_overlap": 50,  # token overlap between chunks
    "separators": ["\n\n", "\n", ". ", " "],
}
```

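As a sketch of the `RECURSIVE` strategy above: try the coarsest separator first, greedily pack the pieces into chunks, and fall back to finer separators for any piece that is still too large. This toy version counts characters rather than tokens; a real implementation would measure length with the model's tokenizer.

```python
def recursive_split(text, chunk_size=512, separators=("\n\n", "\n", ". ", " ")):
    """Toy recursive splitter: character-based, not token-based."""
    if len(text) <= chunk_size:
        return [text]
    for sep in separators:
        parts = text.split(sep)
        if len(parts) == 1:
            continue  # This separator doesn't occur; try a finer one
        chunks, current = [], ""
        for part in parts:
            candidate = f"{current}{sep}{part}" if current else part
            if len(candidate) <= chunk_size:
                current = candidate  # Greedily pack parts into the chunk
            else:
                if current:
                    chunks.append(current)
                current = part
        if current:
            chunks.append(current)
        # Recurse with finer separators on any piece still too large
        return [c for chunk in chunks for c in recursive_split(chunk, chunk_size, separators)]
    # No separator matched at all: hard split by size
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

Chunk overlap, as in `CHUNK_CONFIG`, would be layered on top by re-appending the tail of each chunk to the start of the next.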
### 1.2 Embedding & Storage

```python
# Vector database selection
VECTOR_DB_OPTIONS = {
    "pinecone": {
        "use_case": "Production, managed service",
        "scale": "Billions of vectors",
        "features": ["Hybrid search", "Metadata filtering"]
    },
    "weaviate": {
        "use_case": "Self-hosted, multi-modal",
        "scale": "Millions of vectors",
        "features": ["GraphQL API", "Modules"]
    },
    "chromadb": {
        "use_case": "Development, prototyping",
        "scale": "Thousands of vectors",
        "features": ["Simple API", "In-memory option"]
    },
    "pgvector": {
        "use_case": "Existing Postgres infrastructure",
        "scale": "Millions of vectors",
        "features": ["SQL integration", "ACID compliance"]
    }
}

# Embedding model selection
EMBEDDING_MODELS = {
    "openai/text-embedding-3-small": {
        "dimensions": 1536,
        "cost": "$0.02/1M tokens",
        "quality": "Good for most use cases"
    },
    "openai/text-embedding-3-large": {
        "dimensions": 3072,
        "cost": "$0.13/1M tokens",
        "quality": "Best for complex queries"
    },
    "local/bge-large": {
        "dimensions": 1024,
        "cost": "Free (compute only)",
        "quality": "Comparable to OpenAI small"
    }
}
```

### 1.3 Retrieval Strategies

```python
# Basic semantic search
def semantic_search(query: str, top_k: int = 5):
    query_embedding = embed(query)
    results = vector_db.similarity_search(
        query_embedding,
        top_k=top_k
    )
    return results

# Hybrid search (semantic + keyword)
def hybrid_search(query: str, top_k: int = 5, alpha: float = 0.5):
    """
    alpha=1.0: Pure semantic
    alpha=0.0: Pure keyword (BM25)
    alpha=0.5: Balanced
    """
    semantic_results = vector_db.similarity_search(query)
    keyword_results = bm25_search(query)

    # Reciprocal Rank Fusion
    return rrf_merge(semantic_results, keyword_results, alpha)

# Multi-query retrieval
def multi_query_retrieval(query: str):
    """Generate multiple query variations for better recall"""
    queries = llm.generate_query_variations(query, n=3)
    all_results = []
    for q in queries:
        all_results.extend(semantic_search(q))
    return deduplicate(all_results)

# Contextual compression
def compressed_retrieval(query: str):
    """Retrieve then compress to relevant parts only"""
    docs = semantic_search(query, top_k=10)
    compressed = llm.extract_relevant_parts(docs, query)
    return compressed
```

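The `rrf_merge` helper above is left undefined; here is one way to sketch a weighted Reciprocal Rank Fusion over two ranked lists of document ids. The `k = 60` smoothing constant is the value conventionally used for RRF; `alpha` weights the semantic list and `1 - alpha` the keyword list.

```python
def rrf_merge(semantic_results, keyword_results, alpha=0.5, k=60):
    """Weighted RRF: score each doc by weight / (k + rank), summed per list."""
    scores = {}
    for weight, ranking in ((alpha, semantic_results), (1 - alpha, keyword_results)):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

rrf_merge(["a", "b", "c"], ["b", "d"])  # "b" ranks first: it appears in both lists
```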
### 1.4 Generation with Context

```python
RAG_PROMPT_TEMPLATE = """
Answer the user's question based ONLY on the following context.
If the context doesn't contain enough information, say "I don't have enough information to answer that."

Context:
{context}

Question: {question}

Answer:"""

def generate_with_rag(question: str):
    # Retrieve
    context_docs = hybrid_search(question, top_k=5)
    context = "\n\n".join([doc.content for doc in context_docs])

    # Generate
    prompt = RAG_PROMPT_TEMPLATE.format(
        context=context,
        question=question
    )

    response = llm.generate(prompt)

    # Return with citations
    return {
        "answer": response,
        "sources": [doc.metadata for doc in context_docs]
    }
```

---

## 2. Agent Architectures

### 2.1 ReAct Pattern (Reasoning + Acting)

```
Thought: I need to search for information about X
Action: search("X")
Observation: [search results]
Thought: Based on the results, I should...
Action: calculate(...)
Observation: [calculation result]
Thought: I now have enough information
Action: final_answer("The answer is...")
```

```python
REACT_PROMPT = """
You are an AI assistant that can use tools to answer questions.

Available tools:
{tools_description}

Use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [tool result - this will be filled in]
... (repeat Thought/Action/Observation as needed)
Thought: I have enough information to answer
Final Answer: [your final response]

Question: {question}
"""

class ReActAgent:
    def __init__(self, tools: list, llm):
        self.tools = {t.name: t for t in tools}
        self.llm = llm
        self.max_iterations = 10

    def run(self, question: str) -> str:
        prompt = REACT_PROMPT.format(
            tools_description=self._format_tools(),
            question=question
        )

        for _ in range(self.max_iterations):
            response = self.llm.generate(prompt)

            if "Final Answer:" in response:
                return self._extract_final_answer(response)

            action = self._parse_action(response)
            observation = self._execute_tool(action)
            prompt += f"\nObservation: {observation}\n"

        return "Max iterations reached"
```

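The `_parse_action` helper is not shown above; a minimal regex-based sketch, assuming the model emits actions as `tool_name(arguments)` on an `Action:` line as in the prompt format:

```python
import re

def parse_action(response: str):
    """Return (tool_name, raw_arguments) from the last Action line in the response."""
    matches = re.findall(r"Action:\s*(\w+)\((.*)\)", response)
    if not matches:
        raise ValueError("No Action found in LLM response")
    return matches[-1]

parse_action('Thought: look it up\nAction: search("X")')  # ('search', '"X"')
```

Production agents usually prefer structured tool calls (Section 2.2) over regex parsing, precisely because free-text action formats are brittle.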
### 2.2 Function Calling Pattern

```python
# Define tools as functions with schemas
TOOLS = [
    {
        "name": "search_web",
        "description": "Search the web for current information",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query"
                }
            },
            "required": ["query"]
        }
    },
    {
        "name": "calculate",
        "description": "Perform mathematical calculations",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Math expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    }
]

class FunctionCallingAgent:
    def run(self, question: str) -> str:
        messages = [{"role": "user", "content": question}]

        while True:
            response = self.llm.chat(
                messages=messages,
                tools=TOOLS,
                tool_choice="auto"
            )

            if response.tool_calls:
                for tool_call in response.tool_calls:
                    result = self._execute_tool(
                        tool_call.name,
                        tool_call.arguments
                    )
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                return response.content
```

### 2.3 Plan-and-Execute Pattern

```python
class PlanAndExecuteAgent:
    """
    1. Create a plan (list of steps)
    2. Execute each step
    3. Replan if needed
    """

    def run(self, task: str) -> str:
        # Planning phase
        plan = self.planner.create_plan(task)
        # Returns: ["Step 1: ...", "Step 2: ...", ...]

        results = []
        for step in plan:
            # Execute each step
            result = self.executor.execute(step, context=results)
            results.append(result)

            # Check if replan needed
            if self._needs_replan(task, results):
                new_plan = self.planner.replan(
                    task,
                    completed=results,
                    remaining=plan[len(results):]
                )
                plan = new_plan

        # Synthesize final answer
        return self.synthesizer.summarize(task, results)
```

### 2.4 Multi-Agent Collaboration

```python
class AgentTeam:
    """
    Specialized agents collaborating on complex tasks
    """

    def __init__(self):
        self.agents = {
            "researcher": ResearchAgent(),
            "analyst": AnalystAgent(),
            "writer": WriterAgent(),
            "critic": CriticAgent()
        }
        self.coordinator = CoordinatorAgent()

    def solve(self, task: str) -> str:
        # Coordinator assigns subtasks
        assignments = self.coordinator.decompose(task)

        results = {}
        for assignment in assignments:
            agent = self.agents[assignment.agent]
            result = agent.execute(
                assignment.subtask,
                context=results
            )
            results[assignment.id] = result

        # Critic reviews
        critique = self.agents["critic"].review(results)

        if critique.needs_revision:
            # Iterate with feedback
            return self.solve_with_feedback(task, results, critique)

        return self.coordinator.synthesize(results)
```

---

## 3. Prompt IDE Patterns

### 3.1 Prompt Templates with Variables

```python
class PromptTemplate:
    def __init__(self, template: str, variables: list[str]):
        self.template = template
        self.variables = variables

    def format(self, **kwargs) -> str:
        # Validate all variables provided
        missing = set(self.variables) - set(kwargs.keys())
        if missing:
            raise ValueError(f"Missing variables: {missing}")

        return self.template.format(**kwargs)

    def with_examples(self, examples: list[dict]) -> str:
        """Add few-shot examples"""
        example_text = "\n\n".join([
            f"Input: {ex['input']}\nOutput: {ex['output']}"
            for ex in examples
        ])
        return f"{example_text}\n\n{self.template}"

# Usage
summarizer = PromptTemplate(
    template="Summarize the following text in {style} style:\n\n{text}",
    variables=["style", "text"]
)

prompt = summarizer.format(
    style="professional",
    text="Long article content..."
)
```

### 3.2 Prompt Versioning & A/B Testing

```python
class PromptRegistry:
    def __init__(self, db):
        self.db = db

    def register(self, name: str, template: str, version: str):
        """Store prompt with version"""
        self.db.save({
            "name": name,
            "template": template,
            "version": version,
            "created_at": datetime.now(),
            "metrics": {}
        })

    def get(self, name: str, version: str = "latest") -> str:
        """Retrieve specific version"""
        return self.db.get(name, version)

    def ab_test(self, name: str, user_id: str) -> str:
        """Return variant based on user bucket"""
        variants = self.db.get_all_versions(name)
        bucket = hash(user_id) % len(variants)
        return variants[bucket]

    def record_outcome(self, prompt_id: str, outcome: dict):
        """Track prompt performance"""
        self.db.update_metrics(prompt_id, outcome)
```

### 3.3 Prompt Chaining

```python
class PromptChain:
    """
    Chain prompts together, passing output as input to next
    """

    def __init__(self, steps: list[dict]):
        self.steps = steps

    def run(self, initial_input: str) -> dict:
        context = {"input": initial_input}
        results = []

        for step in self.steps:
            prompt = step["prompt"].format(**context)
            output = llm.generate(prompt)

            # Parse output if needed
            if step.get("parser"):
                output = step["parser"](output)

            context[step["output_key"]] = output
            results.append({
                "step": step["name"],
                "output": output
            })

        return {
            "final_output": context[self.steps[-1]["output_key"]],
            "intermediate_results": results
        }

# Example: Research → Analyze → Summarize
chain = PromptChain([
    {
        "name": "research",
        "prompt": "Research the topic: {input}",
        "output_key": "research"
    },
    {
        "name": "analyze",
        "prompt": "Analyze these findings:\n{research}",
        "output_key": "analysis"
    },
    {
        "name": "summarize",
        "prompt": "Summarize this analysis in 3 bullet points:\n{analysis}",
        "output_key": "summary"
    }
])
```

---

## 4. LLMOps & Observability

### 4.1 Metrics to Track

```python
LLM_METRICS = {
    # Performance
    "latency_p50": "50th percentile response time",
    "latency_p99": "99th percentile response time",
    "tokens_per_second": "Generation speed",

    # Quality
    "user_satisfaction": "Thumbs up/down ratio",
    "task_completion": "% tasks completed successfully",
    "hallucination_rate": "% responses with factual errors",

    # Cost
    "cost_per_request": "Average $ per API call",
    "tokens_per_request": "Average tokens used",
    "cache_hit_rate": "% requests served from cache",

    # Reliability
    "error_rate": "% failed requests",
    "timeout_rate": "% requests that timed out",
    "retry_rate": "% requests needing retry"
}
```

### 4.2 Logging & Tracing

```python
import json
import logging
from datetime import datetime

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

class LLMLogger:
    def log_request(self, request_id: str, data: dict):
        """Log LLM request for debugging and analysis"""
        log_entry = {
            "request_id": request_id,
            "timestamp": datetime.now().isoformat(),
            "model": data["model"],
            "prompt": data["prompt"][:500],  # Truncate for storage
            "prompt_tokens": data["prompt_tokens"],
            "temperature": data.get("temperature", 1.0),
            "user_id": data.get("user_id"),
        }
        logging.info(f"LLM_REQUEST: {json.dumps(log_entry)}")

    def log_response(self, request_id: str, data: dict):
        """Log LLM response"""
        log_entry = {
            "request_id": request_id,
            "completion_tokens": data["completion_tokens"],
            "total_tokens": data["total_tokens"],
            "latency_ms": data["latency_ms"],
            "finish_reason": data["finish_reason"],
            "cost_usd": self._calculate_cost(data),
        }
        logging.info(f"LLM_RESPONSE: {json.dumps(log_entry)}")

# Distributed tracing
@tracer.start_as_current_span("llm_call")
def call_llm(prompt: str) -> str:
    span = trace.get_current_span()
    span.set_attribute("prompt.length", len(prompt))

    response = llm.generate(prompt)

    span.set_attribute("response.length", len(response))
    span.set_attribute("tokens.total", response.usage.total_tokens)

    return response.content
```

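`_calculate_cost` above is provider-specific; a minimal sketch with a hypothetical price table (the numbers below are illustrative placeholders, not real provider pricing):

```python
# Hypothetical prices in USD per 1M tokens -- placeholders, not real pricing
PRICE_PER_1M_TOKENS = {
    "example-small": {"prompt": 0.15, "completion": 0.60},
    "example-large": {"prompt": 5.00, "completion": 15.00},
}

def calculate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD for a single request, under the price table above."""
    price = PRICE_PER_1M_TOKENS[model]
    return (prompt_tokens * price["prompt"]
            + completion_tokens * price["completion"]) / 1_000_000
```

Keeping the table in config (rather than hard-coded) makes the `cost_usd` field easy to keep current as provider prices change.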
### 4.3 Evaluation Framework

```python
class LLMEvaluator:
    """
    Evaluate LLM outputs for quality
    """

    def evaluate_response(self,
                          question: str,
                          response: str,
                          ground_truth: str = None) -> dict:
        scores = {}

        # Relevance: Does it answer the question?
        scores["relevance"] = self._score_relevance(question, response)

        # Coherence: Is it well-structured?
        scores["coherence"] = self._score_coherence(response)

        # Groundedness: Is it based on provided context?
        scores["groundedness"] = self._score_groundedness(response)

        # Accuracy: Does it match ground truth?
        if ground_truth:
            scores["accuracy"] = self._score_accuracy(response, ground_truth)

        # Safety: Is it free of harmful content?
        scores["safety"] = self._score_safety(response)

        return scores

    def run_benchmark(self, test_cases: list[dict]) -> dict:
        """Run evaluation on test set"""
        results = []
        for case in test_cases:
            response = llm.generate(case["prompt"])
            scores = self.evaluate_response(
                question=case["prompt"],
                response=response,
                ground_truth=case.get("expected")
            )
            results.append(scores)

        return self._aggregate_scores(results)
```

---

## 5. Production Patterns

### 5.1 Caching Strategy

```python
import hashlib
import json

class LLMCache:
    def __init__(self, redis_client, ttl_seconds=3600):
        self.redis = redis_client
        self.ttl = ttl_seconds

    def _cache_key(self, prompt: str, model: str, **kwargs) -> str:
        """Generate deterministic cache key"""
        content = f"{model}:{prompt}:{json.dumps(kwargs, sort_keys=True)}"
        return hashlib.sha256(content.encode()).hexdigest()

    def get_or_generate(self, prompt: str, model: str, **kwargs) -> str:
        key = self._cache_key(prompt, model, **kwargs)

        # Check cache
        cached = self.redis.get(key)
        if cached:
            return cached.decode()

        # Generate
        response = llm.generate(prompt, model=model, **kwargs)

        # Cache (only cache deterministic outputs)
        if kwargs.get("temperature", 1.0) == 0:
            self.redis.setex(key, self.ttl, response)

        return response
```

### 5.2 Rate Limiting & Retry
|
||||||
|
|
||||||
|
```python
|
||||||
|
import time
|
||||||
|
from tenacity import retry, wait_exponential, stop_after_attempt
|
||||||
|
|
||||||
|
class RateLimiter:
|
||||||
|
def __init__(self, requests_per_minute: int):
|
||||||
|
self.rpm = requests_per_minute
|
||||||
|
self.timestamps = []
|
||||||
|
|
||||||
|
def acquire(self):
|
||||||
|
"""Wait if rate limit would be exceeded"""
|
||||||
|
now = time.time()
|
||||||
|
|
||||||
|
# Remove old timestamps
|
||||||
|
self.timestamps = [t for t in self.timestamps if now - t < 60]
|
||||||
|
|
||||||
|
if len(self.timestamps) >= self.rpm:
|
||||||
|
sleep_time = 60 - (now - self.timestamps[0])
|
||||||
|
time.sleep(sleep_time)
|
||||||
|
|
||||||
|
self.timestamps.append(time.time())
|
||||||
|
|
||||||
|
# Retry with exponential backoff
|
||||||
|
@retry(
|
||||||
|
wait=wait_exponential(multiplier=1, min=4, max=60),
|
||||||
|
stop=stop_after_attempt(5)
|
||||||
|
)
|
||||||
|
def call_llm_with_retry(prompt: str) -> str:
|
||||||
|
try:
|
||||||
|
return llm.generate(prompt)
|
||||||
|
except RateLimitError:
|
||||||
|
raise # Will trigger retry
|
||||||
|
except APIError as e:
|
||||||
|
if e.status_code >= 500:
|
||||||
|
raise # Retry server errors
|
||||||
|
raise # Don't retry client errors
|
||||||
|
```
### 5.3 Fallback Strategy

```python
import logging


class LLMWithFallback:
    def __init__(self, primary: str, fallbacks: list[str]):
        self.primary = primary
        self.fallbacks = fallbacks

    def generate(self, prompt: str, **kwargs) -> str:
        models = [self.primary] + self.fallbacks

        for model in models:
            try:
                return llm.generate(prompt, model=model, **kwargs)
            except (RateLimitError, APIError) as e:
                logging.warning(f"Model {model} failed: {e}")
                continue

        raise AllModelsFailedError("All models exhausted")


# Usage
llm_client = LLMWithFallback(
    primary="gpt-4-turbo",
    fallbacks=["gpt-3.5-turbo", "claude-3-sonnet"],
)
```
---

## Architecture Decision Matrix

| Pattern              | Use When         | Complexity | Cost      |
| :------------------- | :--------------- | :--------- | :-------- |
| **Simple RAG**       | FAQ, docs search | Low        | Low       |
| **Hybrid RAG**       | Mixed queries    | Medium     | Medium    |
| **ReAct Agent**      | Multi-step tasks | Medium     | Medium    |
| **Function Calling** | Structured tools | Low        | Low       |
| **Plan-Execute**     | Complex tasks    | High       | High      |
| **Multi-Agent**      | Research tasks   | Very High  | Very High |

---

## Resources

- [Dify Platform](https://github.com/langgenius/dify)
- [LangChain Docs](https://python.langchain.com/)
- [LlamaIndex](https://www.llamaindex.ai/)
- [Anthropic Cookbook](https://github.com/anthropics/anthropic-cookbook)
322
skills/prompt-library/SKILL.md
Normal file
@@ -0,0 +1,322 @@
---
name: prompt-library
description: "Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use when user needs prompt templates, role-play prompts, or ready-to-use prompt examples for coding, writing, analysis, or creative tasks."
---

# 📝 Prompt Library

> A comprehensive collection of battle-tested prompts inspired by [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts) and community best practices.

## When to Use This Skill

Use this skill when the user:

- Needs ready-to-use prompt templates
- Wants role-based prompts (act as X)
- Asks for prompt examples or inspiration
- Needs task-specific prompt patterns
- Wants to improve their prompting
## Prompt Categories

### 🎭 Role-Based Prompts

#### Expert Developer

```
Act as an expert software developer with 15+ years of experience. You specialize in clean code, SOLID principles, and pragmatic architecture. When reviewing code:
1. Identify bugs and potential issues
2. Suggest performance improvements
3. Recommend better patterns
4. Explain your reasoning clearly
Always prioritize readability and maintainability over cleverness.
```

#### Code Reviewer

```
Act as a senior code reviewer. Your role is to:
1. Check for bugs, edge cases, and error handling
2. Evaluate code structure and organization
3. Assess naming conventions and readability
4. Identify potential security issues
5. Suggest improvements with specific examples

Format your review as:
🔴 Critical Issues (must fix)
🟡 Suggestions (should consider)
🟢 Praise (what's done well)
```
#### Technical Writer

```
Act as a technical documentation expert. Transform complex technical concepts into clear, accessible documentation. Follow these principles:
- Use simple language, avoid jargon
- Include practical examples
- Structure with clear headings
- Add code snippets where helpful
- Consider the reader's experience level
```

#### System Architect

```
Act as a senior system architect designing for scale. Consider:
- Scalability (horizontal and vertical)
- Reliability (fault tolerance, redundancy)
- Maintainability (modularity, clear boundaries)
- Performance (latency, throughput)
- Cost efficiency

Provide architecture decisions with trade-off analysis.
```
### 🛠️ Task-Specific Prompts

#### Debug This Code

```
Debug the following code. Your analysis should include:

1. **Problem Identification**: What exactly is failing?
2. **Root Cause**: Why is it failing?
3. **Fix**: Provide corrected code
4. **Prevention**: How to prevent similar bugs

Show your debugging thought process step by step.
```

#### Explain Like I'm 5 (ELI5)

```
Explain [CONCEPT] as if I'm 5 years old. Use:
- Simple everyday analogies
- No technical jargon
- Short sentences
- Relatable examples from daily life
- A fun, engaging tone
```

#### Code Refactoring

```
Refactor this code following these priorities:
1. Readability first
2. Remove duplication (DRY)
3. Single responsibility per function
4. Meaningful names
5. Add comments only where necessary

Show before/after with explanation of changes.
```

#### Write Tests

```
Write comprehensive tests for this code:
1. Happy path scenarios
2. Edge cases
3. Error conditions
4. Boundary values

Use [FRAMEWORK] testing conventions. Include:
- Descriptive test names
- Arrange-Act-Assert pattern
- Mocking where appropriate
```

#### API Documentation

```
Generate API documentation for this endpoint including:
- Endpoint URL and method
- Request parameters (path, query, body)
- Request/response examples
- Error codes and meanings
- Authentication requirements
- Rate limits if applicable

Format as OpenAPI/Swagger or Markdown.
```
### 📊 Analysis Prompts

#### Code Complexity Analysis

```
Analyze the complexity of this codebase:

1. **Cyclomatic Complexity**: Identify complex functions
2. **Coupling**: Find tightly coupled components
3. **Cohesion**: Assess module cohesion
4. **Dependencies**: Map critical dependencies
5. **Technical Debt**: Highlight areas needing refactoring

Rate each area and provide actionable recommendations.
```

#### Performance Analysis

```
Analyze this code for performance issues:

1. **Time Complexity**: Big O analysis
2. **Space Complexity**: Memory usage patterns
3. **I/O Bottlenecks**: Database, network, disk
4. **Algorithmic Issues**: Inefficient patterns
5. **Quick Wins**: Easy optimizations

Prioritize findings by impact.
```

#### Security Review

```
Perform a security review of this code:

1. **Input Validation**: Check all inputs
2. **Authentication/Authorization**: Access control
3. **Data Protection**: Sensitive data handling
4. **Injection Vulnerabilities**: SQL, XSS, etc.
5. **Dependencies**: Known vulnerabilities

Classify issues by severity (Critical/High/Medium/Low).
```
### 🎨 Creative Prompts

#### Brainstorm Features

```
Brainstorm features for [PRODUCT]:

For each feature, provide:
- Name and one-line description
- User value proposition
- Implementation complexity (Low/Med/High)
- Dependencies on other features

Generate 10 ideas, then rank top 3 by impact/effort ratio.
```

#### Name Generator

```
Generate names for [PROJECT/FEATURE]:

Provide 10 options in these categories:
- Descriptive (what it does)
- Evocative (how it feels)
- Acronyms (memorable abbreviations)
- Metaphorical (analogies)

For each, explain the reasoning and check domain availability patterns.
```

### 🔄 Transformation Prompts

#### Migrate Code

```
Migrate this code from [SOURCE] to [TARGET]:

1. Identify equivalent constructs
2. Handle incompatible features
3. Preserve functionality exactly
4. Follow target language idioms
5. Add necessary dependencies

Show the migration step by step with explanations.
```

#### Convert Format

```
Convert this [SOURCE_FORMAT] to [TARGET_FORMAT]:

Requirements:
- Preserve all data
- Use idiomatic target format
- Handle edge cases
- Validate the output
- Provide sample verification
```
## Prompt Engineering Techniques

### Chain of Thought (CoT)

```
Let's solve this step by step:
1. First, I'll understand the problem
2. Then, I'll identify the key components
3. Next, I'll work through the logic
4. Finally, I'll verify the solution

[Your question here]
```

### Few-Shot Learning

```
Here are some examples of the task:

Example 1:
Input: [example input 1]
Output: [example output 1]

Example 2:
Input: [example input 2]
Output: [example output 2]

Now complete this:
Input: [actual input]
Output:
```

### Persona Pattern

```
You are [PERSONA] with [TRAITS].
Your communication style is [STYLE].
You prioritize [VALUES].

When responding:
- [Behavior 1]
- [Behavior 2]
- [Behavior 3]
```

### Structured Output

```
Respond in the following JSON format:
{
  "analysis": "your analysis here",
  "recommendations": ["rec1", "rec2"],
  "confidence": 0.0-1.0,
  "caveats": ["caveat1"]
}
```

## Prompt Improvement Checklist

When crafting prompts, ensure:

- [ ] **Clear objective**: What exactly do you want?
- [ ] **Context provided**: Background information included?
- [ ] **Format specified**: How should output be structured?
- [ ] **Examples given**: Are there reference examples?
- [ ] **Constraints defined**: Any limitations or requirements?
- [ ] **Success criteria**: How do you measure good output?

## Resources

- [awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)
- [prompts.chat](https://prompts.chat)
- [Learn Prompting](https://learnprompting.org/)

---

> 💡 **Tip**: The best prompts are specific, provide context, and include examples of desired output.
705
skills/workflow-automation/SKILL.md
Normal file
@@ -0,0 +1,705 @@
---
name: workflow-automation
description: "Design and implement automated workflows combining visual logic with custom code. Create multi-step automations, integrate APIs, and build AI-native pipelines. Use when designing automation flows, integrating APIs, building event-driven systems, or creating LangChain-style AI workflows."
---

# 🔄 Workflow Automation

> Patterns for building robust automated workflows, inspired by [n8n](https://github.com/n8n-io/n8n) and modern automation platforms.

## When to Use This Skill

Use this skill when:

- Designing multi-step automation workflows
- Integrating multiple APIs and services
- Building event-driven systems
- Creating AI-augmented pipelines
- Handling errors in complex flows
---

## 1. Workflow Design Principles

### 1.1 Core Concepts

```
┌─────────────────────────────────────────────────────────────┐
│                         WORKFLOW                            │
│   ┌────────┐    ┌────────┐    ┌────────┐    ┌────────┐      │
│   │Trigger │───▶│  Node  │───▶│  Node  │───▶│ Action │      │
│   └────────┘    └────────┘    └────────┘    └────────┘      │
│       │             │             │             │           │
│       ▼             ▼             ▼             ▼           │
│   [Webhook]    [Transform]   [Condition]  [Send Email]      │
└─────────────────────────────────────────────────────────────┘
```

**Key Components**:

- **Trigger**: What starts the workflow
- **Node**: Individual processing step
- **Edge**: Connection between nodes
- **Action**: External effect (API call, email, etc.)
### 1.2 Trigger Types

```javascript
const TRIGGER_TYPES = {
  // Event-based
  webhook: {
    description: "HTTP request triggers workflow",
    use_case: "External integrations, form submissions",
    example: "POST /webhook/order-created",
  },

  // Time-based
  cron: {
    description: "Scheduled execution",
    use_case: "Reports, cleanup, sync jobs",
    example: "0 9 * * *", // Daily at 9 AM
  },

  // Change-based
  polling: {
    description: "Check for changes periodically",
    use_case: "Monitor RSS, check file changes",
    example: "Every 5 minutes check for new items",
  },

  // Message-based
  queue: {
    description: "Process from message queue",
    use_case: "Async processing, decoupling",
    example: "SQS, RabbitMQ, Redis Streams",
  },

  // Manual
  manual: {
    description: "User-initiated execution",
    use_case: "Testing, on-demand tasks",
    example: "Run workflow button",
  },
};
```
### 1.3 Node Types

```javascript
const NODE_TYPES = {
  // Data transformation
  transform: {
    description: "Modify data shape or values",
    operations: ["map", "filter", "merge", "split"],
  },

  // Flow control
  condition: {
    description: "Branch based on logic",
    operations: ["if/else", "switch", "filter"],
  },

  // External actions
  action: {
    description: "Interact with external services",
    operations: ["HTTP request", "database", "email", "API"],
  },

  // Sub-workflows
  subworkflow: {
    description: "Call another workflow",
    operations: ["invoke", "wait", "parallel"],
  },

  // Error handling
  errorHandler: {
    description: "Handle failures gracefully",
    operations: ["retry", "fallback", "notify"],
  },
};
```
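An engine threads these node types together by resolving template expressions such as `{{nodeId.field}}` against earlier nodes' outputs. A minimal sequential-executor sketch follows; the node shape, handler map, and single-level `{{a.b}}` templates are simplifying assumptions, not n8n's actual API.

```javascript
// Minimal executor sketch (hypothetical node/handler shapes).
// Resolves single-level {{nodeId.field}} placeholders; real engines
// support deeper paths like {{trigger.data.api_url}}.
function resolveTemplates(value, outputs) {
  return value.replace(/\{\{(\w+)\.(\w+)\}\}/g, (_, nodeId, field) =>
    String(outputs[nodeId]?.[field] ?? "")
  );
}

function runWorkflow(nodes, handlers, triggerData) {
  const outputs = { trigger: triggerData };
  for (const node of nodes) {
    const config = {};
    for (const [key, raw] of Object.entries(node.config)) {
      config[key] = typeof raw === "string" ? resolveTemplates(raw, outputs) : raw;
    }
    outputs[node.id] = handlers[node.type](config); // run the node
  }
  return outputs;
}
```

Each node sees only fully resolved config, which is what makes the declarative workflow objects below executable.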
---

## 2. Common Workflow Patterns

### 2.1 Sequential Pipeline

```javascript
// Simple A → B → C flow
const sequentialWorkflow = {
  trigger: { type: "webhook", path: "/process" },
  nodes: [
    {
      id: "fetch",
      type: "http",
      config: {
        method: "GET",
        url: "{{trigger.data.api_url}}",
      },
    },
    {
      id: "transform",
      type: "code",
      config: {
        code: `
          return items.map(item => ({
            id: item.id,
            name: item.name.toUpperCase(),
            processed: true
          }));
        `,
      },
    },
    {
      id: "save",
      type: "database",
      config: {
        operation: "insert",
        table: "processed_items",
        data: "{{transform.output}}",
      },
    },
  ],
};
```
### 2.2 Parallel Execution

```javascript
// Fan-out: Execute multiple nodes in parallel
const parallelWorkflow = {
  trigger: { type: "cron", schedule: "0 * * * *" },
  nodes: [
    {
      id: "parallel_group",
      type: "parallel",
      nodes: [
        {
          id: "fetch_users",
          type: "http",
          config: { url: "/api/users" },
        },
        {
          id: "fetch_orders",
          type: "http",
          config: { url: "/api/orders" },
        },
        {
          id: "fetch_products",
          type: "http",
          config: { url: "/api/products" },
        },
      ],
    },
    {
      id: "merge",
      type: "merge",
      config: {
        method: "append", // or "combine", "zip"
        inputs: ["fetch_users", "fetch_orders", "fetch_products"],
      },
    },
  ],
};
```
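Under the hood, a parallel group is essentially `Promise.all` over the branch tasks, with the merge step combining results. A minimal sketch of the fan-out plus "append"-style merge (the task functions are assumptions):

```javascript
// Parallel-group sketch: each branch is an async task; Promise.all runs
// them concurrently, and flat() performs an "append"-style merge.
async function runParallelGroup(tasks) {
  const results = await Promise.all(tasks.map((task) => task()));
  return results.flat();
}
```

Note that `Promise.all` rejects as a whole if any branch rejects; use `Promise.allSettled` when partial results should survive a failed branch.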
### 2.3 Conditional Branching

```javascript
const conditionalWorkflow = {
  trigger: { type: "webhook", path: "/order" },
  nodes: [
    {
      id: "check_value",
      type: "switch",
      config: {
        property: "{{trigger.data.total}}",
        rules: [
          { operator: "gte", value: 1000, output: "high_value" },
          { operator: "gte", value: 100, output: "medium_value" },
          { operator: "lt", value: 100, output: "low_value" },
        ],
      },
    },
    {
      id: "high_value",
      type: "action",
      onlyIf: "{{check_value.output}} === 'high_value'",
      config: {
        action: "notify_sales_team",
      },
    },
    {
      id: "medium_value",
      type: "action",
      onlyIf: "{{check_value.output}} === 'medium_value'",
      config: {
        action: "send_thank_you_email",
      },
    },
    {
      id: "low_value",
      type: "action",
      onlyIf: "{{check_value.output}} === 'low_value'",
      config: {
        action: "add_to_newsletter",
      },
    },
  ],
};
```
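A switch node's rule list is evaluated first-match-wins, which is why the `gte: 1000` rule must come before `gte: 100`. A minimal evaluation sketch, using the operator names from the config above:

```javascript
// First-match-wins rule evaluation for a switch node.
const OPS = {
  gte: (a, b) => a >= b,
  lt: (a, b) => a < b,
  eq: (a, b) => a === b,
};

function evaluateSwitch(value, rules) {
  for (const rule of rules) {
    if (OPS[rule.operator](value, rule.value)) return rule.output;
  }
  return null; // no rule matched
}
```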
### 2.4 Loop/Iterator Pattern

```javascript
const loopWorkflow = {
  trigger: { type: "manual" },
  nodes: [
    {
      id: "fetch_items",
      type: "http",
      config: { url: "/api/items" },
    },
    {
      id: "process_each",
      type: "loop",
      config: {
        items: "{{fetch_items.data}}",
        batchSize: 10, // Process 10 at a time
        continueOnError: true,
      },
      nodes: [
        {
          id: "enrich",
          type: "http",
          config: {
            url: "/api/enrich/{{item.id}}",
          },
        },
        {
          id: "save",
          type: "database",
          config: {
            operation: "update",
            id: "{{item.id}}",
            data: "{{enrich.output}}",
          },
        },
      ],
    },
  ],
};
```
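Batching with `continueOnError` maps naturally onto `Promise.allSettled`: each batch runs concurrently, and per-item failures are collected instead of aborting the loop. A minimal sketch (the per-item handler is an assumption):

```javascript
// Batch iterator sketch: process items in fixed-size batches,
// collecting per-item errors instead of aborting (continueOnError).
async function processInBatches(items, batchSize, handler) {
  const results = [];
  const errors = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const settled = await Promise.allSettled(batch.map(handler));
    for (const outcome of settled) {
      if (outcome.status === "fulfilled") results.push(outcome.value);
      else errors.push(outcome.reason);
    }
  }
  return { results, errors };
}
```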
### 2.5 Wait/Delay Pattern

```javascript
const waitWorkflow = {
  trigger: { type: "webhook", path: "/signup" },
  nodes: [
    {
      id: "send_welcome",
      type: "email",
      config: {
        to: "{{trigger.data.email}}",
        template: "welcome",
      },
    },
    {
      id: "wait_24h",
      type: "wait",
      config: {
        duration: "24h",
        // Or: resumeAt: "{{trigger.data.preferred_time}}"
      },
    },
    {
      id: "send_onboarding",
      type: "email",
      config: {
        to: "{{trigger.data.email}}",
        template: "onboarding_tips",
      },
    },
  ],
};
```
---

## 3. Error Handling Patterns

### 3.1 Retry with Backoff

```javascript
const retryConfig = {
  retries: 3,
  backoff: "exponential", // linear, exponential, fixed
  initialDelay: 1000, // ms
  maxDelay: 30000, // ms
  retryOn: ["ECONNRESET", "ETIMEDOUT", "HTTP_5XX"],
};

const nodeWithRetry = {
  id: "api_call",
  type: "http",
  config: { url: "/api/external" },
  errorHandling: {
    retry: retryConfig,
    onMaxRetries: {
      action: "continue", // or "fail", "branch"
      fallbackValue: { data: [] },
    },
  },
};
```
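The delay schedule implied by `retryConfig` can be sketched as a pure function over the attempt number (field names taken from the config above; the doubling-per-attempt interpretation of "exponential" is an assumption):

```javascript
// Backoff delay sketch: exponential doubles per attempt (1-based),
// linear grows by initialDelay per attempt, fixed stays constant.
// All strategies are capped at maxDelay.
function backoffDelay(attempt, { initialDelay, maxDelay, backoff }) {
  if (backoff === "fixed") return initialDelay;
  if (backoff === "linear") return Math.min(initialDelay * attempt, maxDelay);
  return Math.min(initialDelay * 2 ** (attempt - 1), maxDelay); // exponential
}
```

With `initialDelay: 1000` and `maxDelay: 30000`, exponential backoff yields 1s, 2s, 4s, ... until the 30s cap.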
### 3.2 Dead Letter Queue

```javascript
const workflowWithDLQ = {
  config: {
    onError: {
      action: "send_to_dlq",
      queue: "failed_workflows",
      includeContext: true, // Include full workflow state
    },
  },
  nodes: [
    /* ... */
  ],
};

// Separate workflow to process failed items
const dlqProcessor = {
  trigger: {
    type: "queue",
    queue: "failed_workflows",
  },
  nodes: [
    {
      id: "analyze",
      type: "code",
      config: {
        code: `
          const error = $input.error;
          const context = $input.context;

          // Classify error
          if (error.type === 'VALIDATION') {
            return { action: 'discard', reason: 'Bad data' };
          }
          if (error.type === 'RATE_LIMIT') {
            return { action: 'retry', delay: '1h' };
          }
          return { action: 'manual_review' };
        `,
      },
    },
  ],
};
```
### 3.3 Compensation/Rollback

```javascript
const sagaWorkflow = {
  name: "order_saga",
  nodes: [
    {
      id: "reserve_inventory",
      type: "api",
      compensate: {
        id: "release_inventory",
        type: "api",
        config: { method: "POST", url: "/inventory/release" },
      },
    },
    {
      id: "charge_payment",
      type: "api",
      compensate: {
        id: "refund_payment",
        type: "api",
        config: { method: "POST", url: "/payments/refund" },
      },
    },
    {
      id: "create_shipment",
      type: "api",
      compensate: {
        id: "cancel_shipment",
        type: "api",
        config: { method: "POST", url: "/shipments/cancel" },
      },
    },
  ],
  onError: {
    strategy: "compensate_all", // Run all compensations in reverse order
  },
};
```
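The `compensate_all` strategy means: when a step fails, run the compensations of the steps that already completed, in reverse order, then surface the original error. A minimal saga-runner sketch (the `{ run, compensate }` step shape is an assumption):

```javascript
// Saga runner sketch: execute steps in order; on failure, run the
// compensations of completed steps in reverse, then rethrow.
async function runSaga(steps) {
  const completed = [];
  for (const step of steps) {
    try {
      await step.run();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.compensate(); // best-effort rollback
      }
      throw err;
    }
  }
}
```

In production, compensations themselves can fail, so they are usually retried or sent to a dead letter queue rather than assumed to succeed.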
---

## 4. Integration Patterns

### 4.1 API Integration Template

```javascript
const apiIntegration = {
  name: "github_integration",
  baseUrl: "https://api.github.com",
  auth: {
    type: "bearer",
    token: "{{secrets.GITHUB_TOKEN}}",
  },
  operations: {
    listRepos: {
      method: "GET",
      path: "/user/repos",
      params: {
        per_page: 100,
        sort: "updated",
      },
    },
    createIssue: {
      method: "POST",
      path: "/repos/{{owner}}/{{repo}}/issues",
      body: {
        title: "{{title}}",
        body: "{{body}}",
        labels: "{{labels}}",
      },
    },
  },
  rateLimiting: {
    requests: 5000,
    period: "1h",
    strategy: "queue", // queue, reject, throttle
  },
};
```
### 4.2 Webhook Handler

```javascript
const webhookHandler = {
  trigger: {
    type: "webhook",
    path: "/webhooks/stripe",
    method: "POST",
    authentication: {
      type: "signature",
      header: "stripe-signature",
      secret: "{{secrets.STRIPE_WEBHOOK_SECRET}}",
      algorithm: "sha256",
    },
  },
  nodes: [
    {
      id: "validate",
      type: "code",
      config: {
        code: `
          const event = $input.body;
          if (!['checkout.session.completed',
                'payment_intent.succeeded'].includes(event.type)) {
            return { skip: true };
          }
          return event;
        `,
      },
    },
    {
      id: "route",
      type: "switch",
      config: {
        property: "{{validate.type}}",
        routes: {
          "checkout.session.completed": "handle_checkout",
          "payment_intent.succeeded": "handle_payment",
        },
      },
    },
  ],
};
```
---

## 5. AI-Native Workflows

### 5.1 LLM in Pipeline

```javascript
const aiWorkflow = {
  trigger: { type: "webhook", path: "/analyze" },
  nodes: [
    {
      id: "extract_text",
      type: "code",
      config: {
        code: "return { text: $input.document.content }",
      },
    },
    {
      id: "analyze_sentiment",
      type: "llm",
      config: {
        model: "gpt-4",
        prompt: `
          Analyze the sentiment of the following text.
          Return JSON: {"sentiment": "positive|negative|neutral", "confidence": 0-1}

          Text: {{extract_text.text}}
        `,
        responseFormat: "json",
      },
    },
    {
      id: "route_by_sentiment",
      type: "switch",
      config: {
        property: "{{analyze_sentiment.sentiment}}",
        routes: {
          negative: "escalate_to_support",
          positive: "send_thank_you",
          neutral: "archive",
        },
      },
    },
  ],
};
```
### 5.2 Agent Workflow

```javascript
const agentWorkflow = {
  trigger: { type: "webhook", path: "/research" },
  nodes: [
    {
      id: "research_agent",
      type: "agent",
      config: {
        model: "gpt-4",
        tools: ["web_search", "calculator", "code_interpreter"],
        maxIterations: 10,
        prompt: `
          Research the following topic and provide a comprehensive summary:
          {{trigger.topic}}

          Use the tools available to gather accurate, up-to-date information.
        `,
      },
    },
    {
      id: "format_report",
      type: "llm",
      config: {
        model: "gpt-4",
        prompt: `
          Format this research into a professional report with sections:
          - Executive Summary
          - Key Findings
          - Recommendations

          Research: {{research_agent.output}}
        `,
      },
    },
    {
      id: "send_report",
      type: "email",
      config: {
        to: "{{trigger.email}}",
        subject: "Research Report: {{trigger.topic}}",
        body: "{{format_report.output}}",
      },
    },
  ],
};
```
|
||||||
|
|
||||||
|
---

## 6. Workflow Best Practices

### 6.1 Design Checklist

- [ ] **Idempotency**: Can the workflow run multiple times safely?
- [ ] **Error handling**: What happens when a node fails?
- [ ] **Timeouts**: Are appropriate timeouts set on long-running nodes?
- [ ] **Logging**: Is there enough observability to debug failures?
- [ ] **Rate limits**: Are calls to external APIs rate-limited?
- [ ] **Secrets**: Are credentials stored securely, outside the workflow definition?
- [ ] **Testing**: Can the workflow be tested in isolation?
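The idempotency item deserves a concrete shape: record a key per trigger event so that a re-delivered event is skipped instead of reprocessed. A minimal sketch, assuming events carry an `idempotencyKey` or `id` field (a production system would persist the keys in a database rather than an in-memory `Set`):

```javascript
// In-memory dedupe store; stands in for a durable table keyed by event id.
const processedKeys = new Set();

function handleEvent(event, process) {
  const key = event.idempotencyKey ?? event.id;
  if (processedKeys.has(key)) {
    // Duplicate delivery: acknowledge without re-running side effects.
    return { status: "skipped", reason: "duplicate" };
  }
  processedKeys.add(key);
  return { status: "processed", result: process(event) };
}
```

The same check makes retries safe: a failed delivery can simply be sent again, and only the first successful run has effects.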
### 6.2 Naming Conventions

```javascript
// Workflows: verb_noun or noun_verb
"sync_customers";
"process_orders";
"daily_report_generator";

// Nodes: action_target
"fetch_user_data";
"transform_to_csv";
"send_notification_email";

// Variables: lowercase snake_case
"order_total";
"customer_email";
"processing_date";
```
### 6.3 Testing Workflows

```javascript
const workflowTest = {
  name: "order_processing_test",
  workflow: "process_order",
  testCases: [
    {
      name: "valid_order",
      input: {
        order_id: "test-123",
        items: [{ sku: "A1", qty: 2 }],
      },
      expectedOutput: {
        status: "processed",
      },
      mocks: {
        inventory_check: { available: true },
        payment_process: { success: true },
      },
    },
    {
      name: "out_of_stock",
      input: {
        order_id: "test-456",
        items: [{ sku: "B2", qty: 100 }],
      },
      expectedOutput: {
        status: "failed",
        reason: "insufficient_inventory",
      },
      mocks: {
        inventory_check: { available: false },
      },
    },
  ],
};
```

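A test-case table in this shape can be driven by a small harness: run each case with its declared mocks and compare the result against `expectedOutput` field by field. The sketch below assumes a `runWorkflow(workflowName, input, mocks)` function supplied by whatever engine executes the workflow under test; both names are illustrative.

```javascript
// Run every test case and report pass/fail per case. Only the fields named
// in expectedOutput are compared, so tests stay robust to extra output keys.
function runTestSuite(suite, runWorkflow) {
  return suite.testCases.map((tc) => {
    const actual = runWorkflow(suite.workflow, tc.input, tc.mocks ?? {});
    const passed = Object.entries(tc.expectedOutput).every(
      ([key, value]) => actual[key] === value
    );
    return { name: tc.name, passed };
  });
}
```

Because node dependencies are injected via `mocks`, the workflow can be exercised entirely offline, which is exactly the "tested in isolation" property from the checklist above.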
---

## Resource Links

- [n8n Documentation](https://docs.n8n.io/)
- [Temporal Workflows](https://temporal.io/)
- [Apache Airflow](https://airflow.apache.org/)
- [Zapier Automation Patterns](https://zapier.com/blog/automation-patterns/)
@@ -370,5 +370,47 @@
    "path": "skills/xlsx-official",
    "name": "xlsx",
    "description": "\"Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modify existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas\""
  },
  {
    "id": "prompt-library",
    "path": "skills/prompt-library",
    "name": "prompt-library",
    "description": "Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use when user needs prompt templates, role-play prompts, or ready-to-use prompt examples for coding, writing, analysis, or creative tasks."
  },
  {
    "id": "javascript-mastery",
    "path": "skills/javascript-mastery",
    "name": "javascript-mastery",
    "description": "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals."
  },
  {
    "id": "llm-app-patterns",
    "path": "skills/llm-app-patterns",
    "name": "llm-app-patterns",
    "description": "Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability."
  },
  {
    "id": "workflow-automation",
    "path": "skills/workflow-automation",
    "name": "workflow-automation",
    "description": "Design and implement automated workflows combining visual logic with custom code. Create multi-step automations, integrate APIs, and build AI-native pipelines. Use when designing automation flows, integrating APIs, building event-driven systems, or creating LangChain-style AI workflows."
  },
  {
    "id": "autonomous-agent-patterns",
    "path": "skills/autonomous-agent-patterns",
    "name": "autonomous-agent-patterns",
    "description": "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants."
  },
  {
    "id": "bun-development",
    "path": "skills/bun-development",
    "name": "bun-development",
    "description": "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun."
  },
  {
    "id": "github-workflow-automation",
    "path": "skills/github-workflow-automation",
    "name": "github-workflow-automation",
    "description": "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues."
  }
]