Compare commits

13 Commits:

- 9d09626fd2
- 014da3e744
- 113bc99e47
- 3e46a495c9
- faf478f389
- 266cbf4c6c
- f8eaf7bd50
- c86c93582e
- d32f89a211
- 1aa169c842
- c9280cf9cf
- 0fff14df81
- 8bd204708b
.github/CODEOWNERS (vendored, new file, 8 lines)
@@ -0,0 +1,8 @@
# Global owners
* @sickn33

# Skills
/skills/ @sickn33

# Documentation
*.md @sickn33
.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 33 lines)
@@ -0,0 +1,33 @@
---
name: Bug Report
about: Create a report to help us improve the skills
title: "[BUG] "
labels: bug
assignees: sickn33
---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:

1. Go to '...'
2. Click on '...'
3. Scroll down to '...'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**

- OS: [e.g. macOS, Windows]
- Tool: [e.g. Claude Code, Antigravity]
- Version: [if known]

**Additional context**
Add any other context about the problem here.
.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file, 19 lines)
@@ -0,0 +1,19 @@
---
name: Skill Request
about: Suggest a new skill for the collection
title: "[REQ] "
labels: enhancement
assignees: sickn33
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex: I'm always frustrated when [...]

**Describe the solution you'd like**
A description of the skill you want. What trigger should it have? What files should it affect?

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 18 lines)
@@ -0,0 +1,18 @@
## Description

Please describe your changes. What skill are you adding or modifying?

## Checklist

- [ ] My skill follows the [creation guidelines](https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/skill-creator)
- [ ] I have run `validate_skills.py`
- [ ] I have added my name to the credits (if applicable)

## Type of Change

- [ ] New Skill
- [ ] Bug Fix
- [ ] Documentation Update
- [ ] Infrastructure

## Screenshots (if applicable)
README.md (175 changed lines)
@@ -1,62 +1,183 @@
# 🌌 Antigravity Awesome Skills
# 🌌 Antigravity Awesome Skills: The Ultimate Claude Code Skills Collection

> **The Ultimate Collection of 50+ Agentic Skills for Claude Code (Antigravity)**
> **The Ultimate Collection of 60+ Agentic Skills for Claude Code (Antigravity)**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)
[](https://github.com/guanyang/antigravity-skills)

**Antigravity Awesome Skills** is a curated, battle-tested collection of **58 high-performance skills** designed to supercharge your Claude Code agent using the Antigravity framework.
**Antigravity Awesome Skills** is the ultimate **Claude Code Skills** collection—a curated, battle-tested library of **71 high-performance skills** compatible with both **Antigravity** and **Claude Code**. This repository provides the essential **Claude Code skills** needed to transform your AI assistant into a full-stack digital agency, including official capabilities from **Anthropic** and **Vercel Labs**.

## 📍 Table of Contents

- [Features & Categories](#features--categories)
- [Full Skill Registry](#full-skill-registry-7171)
- [Installation](#installation)
- [How to Contribute](#how-to-contribute)
- [Credits & Sources](#credits--sources)
- [License](#license)

Whether you are using the Google Deepmind Antigravity framework or the standard Anthropic Claude Code CLI, these skills are designed to drop right in and supercharge your agent.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.
## 🚀 Features & Categories
## Features & Categories

- **🎨 Creative & Design**: Algorithmic art, Canvas design, Professional UI/UX, Design Systems.
- **🛠️ Development & Engineering**: TDD, Clean Architecture, Playwright E2E Testing, Systematic Debugging.
- **🛡️ Cybersecurity & Auditing**: Ethical Hacking, OWASP Audits, AWS Penetration Testing, SecOps.
- **🛸 Autonomous Agents**: Loki Mode (Startup-in-a-box), Subagent Orchestration.
- **📈 Business & Strategy**: Product Management (PRD/RICE), Marketing Strategy (SEO/ASO), Senior Architecture.
- **🏗️ Infrastructure**: Backend/Frontend Guidelines, Docker, Git Workflows.

The repository is organized into several key areas of expertise:

| Category | Skills Included |
| :--- | :--- |
| **🎨 Creative & Design** | UI/UX Pro Max, Frontend Design, Canvas, Algorithmic Art, Theme Factory, D3 Viz |
| **🛠️ Development** | TDD, Systematic Debugging, Webapp Testing, Backend/Frontend Guidelines, React Patterns |
| **🛡️ Cybersecurity** | Ethical Hacking, AWS Pentesting, OWASP Top 100, Pentest Checklists |
| **🛸 Autonomous** | **Loki Mode** (Startup-in-a-box), Subagent Orchestration, Parallel Execution |
| **📈 Strategy** | Product Manager Toolkit, Content Creator, ASO, Doc Co-authoring, Brainstorming |
| **🏗️ Infrastructure** | Linux Shell Scripting, Git Worktrees, Conventional Commits, File Organization |
---

## 📦 Installation
## Full Skill Registry (71/71)

Below is the complete list of available skills. Each skill folder contains a `SKILL.md` that can be imported into Antigravity or Claude Code.

> [!NOTE]
> **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility.
| Skill Name | Description | Path |
| :--- | :--- | :--- |
| **Address GitHub Comments** | Systematic PR feedback handling using gh CLI. | `skills/address-github-comments` ⭐ NEW |
| **Algorithmic Art** | Creative generative art using p5.js and seeded randomness. | `skills/algorithmic-art` |
| **App Store Optimization** | Complete ASO toolkit for iOS and Android app performance. | `skills/app-store-optimization` |
| **Autonomous Agent Patterns** | Design patterns for autonomous coding agents and tools. | `skills/autonomous-agent-patterns` ⭐ NEW |
| **AWS Pentesting** | Specialized security assessment for Amazon Web Services. | `skills/aws-penetration-testing` |
| **Backend Guidelines** | Core architecture patterns for Node/Express microservices. | `skills/backend-dev-guidelines` |
| **Brainstorming** | Requirement discovery and intent exploration framework. | `skills/brainstorming` |
| **Brand Guidelines (Anthropic)** | Official Anthropic brand styling and visual standards. | `skills/brand-guidelines-anthropic` ⭐ NEW |
| **Brand Guidelines (Community)** | Community-contributed brand guidelines and templates. | `skills/brand-guidelines-community` |
| **Bun Development** | Modern JavaScript/TypeScript development with Bun runtime. | `skills/bun-development` ⭐ NEW |
| **Canvas Design** | Beautiful static visual design in PDF and PNG. | `skills/canvas-design` |
| **Claude D3.js** | Advanced data visualization with D3.js. | `skills/claude-d3js-skill` |
| **Concise Planning** | Atomic, actionable task planning and checklists. | `skills/concise-planning` ⭐ NEW |
| **Content Creator** | SEO-optimized marketing and brand voice toolkit. | `skills/content-creator` |
| **Core Components** | Design system tokens and baseline UI patterns. | `skills/core-components` |
| **Dispatching Parallel Agents** | Work on independent tasks without shared state. | `skills/dispatching-parallel-agents` |
| **Doc Co-authoring** | Structured workflow for technical documentation. | `skills/doc-coauthoring` |
| **DOCX (Official)** | Official Anthropic MS Word document manipulation. | `skills/docx-official` ⭐ NEW |
| **Ethical Hacking** | Comprehensive penetration testing lifecycle methodology. | `skills/ethical-hacking-methodology` |
| **Executing Plans** | Execute written implementation plans in structured sessions. | `skills/executing-plans` |
| **File Organizer** | Context-aware file organization and duplicate cleanup. | `skills/file-organizer` |
| **Finishing Dev Branch** | Structured workflow for merging, PRs, and branch cleanup. | `skills/finishing-a-development-branch` |
| **Frontend Design** | Production-grade UI component implementation. | `skills/frontend-design` |
| **Frontend Guidelines** | Modern React/TS development patterns and file structure. | `skills/frontend-dev-guidelines` |
| **Git Pushing** | Automated staging and conventional commits. | `skills/git-pushing` |
| **GitHub Workflow Automation** | AI-powered PR reviews, issue triage, and CI/CD integration. | `skills/github-workflow-automation` ⭐ NEW |
| **Internal Comms (Anthropic)** | Official Anthropic corporate communication templates. | `skills/internal-comms-anthropic` ⭐ NEW |
| **Internal Comms (Community)** | Community-contributed communication templates. | `skills/internal-comms-community` |
| **JavaScript Mastery** | 33+ essential JavaScript concepts every developer should know. | `skills/javascript-mastery` ⭐ NEW |
| **Kaizen** | Continuous improvement and error-proofing (Poka-Yoke). | `skills/kaizen` |
| **Linux Shell Scripting** | Production-ready shell scripts for automation. | `skills/linux-shell-scripting` |
| **LLM App Patterns** | RAG pipelines, agent architectures, and LLMOps patterns. | `skills/llm-app-patterns` ⭐ NEW |
| **Loki Mode** | Fully autonomous startup development engine. | `skills/loki-mode` |
| **MCP Builder** | High-quality Model Context Protocol (MCP) server creation. | `skills/mcp-builder` |
| **NotebookLM** | Source-grounded querying via Google NotebookLM. | `skills/notebooklm` |
| **PDF (Official)** | Official Anthropic PDF document manipulation. | `skills/pdf-official` ⭐ NEW |
| **Pentest Checklist** | Structured security assessment planning and scoping. | `skills/pentest-checklist` |
| **Planning With Files** | Manus-style file-based planning for complex tasks. | `skills/planning-with-files` |
| **Playwright Automation** | Complete browser automation and testing with Playwright. | `skills/playwright-skill` |
| **PPTX (Official)** | Official Anthropic PowerPoint manipulation. | `skills/pptx-official` ⭐ NEW |
| **Product Toolkit** | RICE prioritization and product discovery frameworks. | `skills/product-manager-toolkit` |
| **Prompt Engineering** | Expert patterns for LLM instruction optimization. | `skills/prompt-engineering` |
| **Prompt Library** | Curated role-based and task-specific prompt templates. | `skills/prompt-library` ⭐ NEW |
| **React Best Practices** | Vercel's 40+ performance optimization rules for React. | `skills/react-best-practices` ⭐ NEW (Vercel) |
| **React UI Patterns** | Standardized loading states and error handling for React. | `skills/react-ui-patterns` |
| **Receiving Code Review** | Technical verification of code review feedback. | `skills/receiving-code-review` |
| **Requesting Code Review** | Pre-merge requirements verification workflow. | `skills/requesting-code-review` |
| **Senior Architect** | Scalable system design and architecture diagrams. | `skills/senior-architect` |
| **Senior Fullstack** | Comprehensive fullstack guide (React, Node, Postgres). | `skills/senior-fullstack` |
| **Skill Creator** | Meta-skill for building high-performance agentic skills. | `skills/skill-creator` |
| **Skill Developer** | Create and manage skills using Anthropic best practices. | `skills/skill-developer` |
| **Slack GIF Creator** | Create animated GIFs optimized for Slack. | `skills/slack-gif-creator` |
| **Software Architecture** | Quality-focused design principles and analysis. | `skills/software-architecture` |
| **Subagent Driven Dev** | Orchestrate independent subtasks in current session. | `skills/subagent-driven-development` |
| **Systematic Debugging** | Root cause analysis and structured fix verification. | `skills/systematic-debugging` |
| **TDD** | Test-Driven Development workflow and red-green-refactor. | `skills/test-driven-development` |
| **Test Fixing** | Systematically fix failing tests using smart error grouping. | `skills/test-fixing` |
| **Testing Patterns** | Jest patterns, factories, and TDD workflow strategies. | `skills/testing-patterns` |
| **Theme Factory** | Toolkit for styling artifacts with pre-set or generated themes. | `skills/theme-factory` |
| **Top 100 Vulnerabilities** | OWASP-aligned web vulnerability taxonomy and mitigations. | `skills/top-web-vulnerabilities` |
| **UI/UX Pro Max** | Advanced design intelligence and 50+ styling options. | `skills/ui-ux-pro-max` |
| **Using Git Worktrees** | Isolated workspaces for safe feature development. | `skills/using-git-worktrees` |
| **Using Superpowers** | Establish skill usage protocols at conversation start. | `skills/using-superpowers` |
| **Verification Before Completion** | Run verification commands before claiming success. | `skills/verification-before-completion` |
| **Web Artifacts** | Complex React/Tailwind/Shadcn UI artifact builder. | `skills/web-artifacts-builder` |
| **Web Design Guidelines** | Vercel's 100+ UI/UX audit rules (accessibility, performance). | `skills/web-design-guidelines` ⭐ NEW (Vercel) |
| **Webapp Testing** | Local web application testing with Playwright. | `skills/webapp-testing` |
| **Workflow Automation** | Multi-step automations, API integration, AI-native pipelines. | `skills/workflow-automation` ⭐ NEW |
| **Writing Plans** | Create specs for multi-step tasks before coding. | `skills/writing-plans` |
| **Writing Skills** | Create and verify skills before deployment. | `skills/writing-skills` |
| **XLSX (Official)** | Official Anthropic Excel spreadsheet manipulation. | `skills/xlsx-official` ⭐ NEW |
> [!TIP]
> Use the `validate_skills.py` script in the `scripts/` directory to ensure all skills are properly formatted and ready for use.

---

## Installation

To use these skills with **Antigravity** or **Claude Code**, clone this repository into your agent's skills directory:

```bash
# Clone directly into your skills folder
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

Or copy valid markdown files (`SKILL.md`) into your existing configuration.

---
## How to Contribute

We welcome contributions from the community! To add a new skill:

1. **Fork** the repository.
2. **Create a new directory** inside `skills/` for your skill.
3. **Add a `SKILL.md`** with the required frontmatter (name and description).
4. **Run validation**: `python3 scripts/validate_skills.py`.
5. **Submit a Pull Request**.

Please ensure your skill follows the Antigravity/Claude Code best practices.
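A minimal `SKILL.md` skeleton that satisfies the validator's frontmatter checks might look like this (the skill name and description are hypothetical placeholders):

```markdown
---
name: my-example-skill
description: Use when you need a one-line trigger describing what this skill does.
---

# My Example Skill

A short opening paragraph; the index generator falls back to this text when no description is given.
```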
---
## 🏆 Credits & Sources
## Credits & Sources

This collection would not be possible without the incredible work of the Claude Code community. This repository is an aggregation of the following open-source projects:
This collection would not be possible without the incredible work of the Claude Code community and official sources:

### 🌟 Core Foundation
### Official Sources

- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: The original framework and core set of 33 skills.
- **[anthropics/skills](https://github.com/anthropics/skills)**: Official Anthropic skills repository - Document manipulation (DOCX, PDF, PPTX, XLSX), Brand Guidelines, Internal Communications.
- **[anthropics/claude-cookbooks](https://github.com/anthropics/claude-cookbooks)**: Official notebooks and recipes for building with Claude.
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs official skills - React Best Practices, Web Design Guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skills catalog - Agent skills, Skill Creator, Concise Planning.

### 👥 Community Contributors
### Community Contributors

- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure, Backend/Frontend Guidelines, and Skill Development meta-skills.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns, Design System components, and Testing factories.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Autonomous agents (Loki Mode), Playwright integration, and D3.js visualization.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite (Ethical Hacking, OWASP, AWS Auditing).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering roles, Product Management toolkit, Content Creator & ASO skills.
- **[obra/superpowers](https://github.com/obra/superpowers)**: The original "Superpowers" by Jesse Vincent.
- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: Core Antigravity extensions.
- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure and Backend/Frontend Guidelines.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns and Design Systems.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Loki Mode and Playwright integration.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite.
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering and PM toolkit.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A massive list of verified skills for Claude Code.

### Inspirations

- **[f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)**: Inspiration for the Prompt Library.
- **[leonardomso/33-js-concepts](https://github.com/leonardomso/33-js-concepts)**: Inspiration for JavaScript Mastery.

---

## 🛡️ License
## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
Individual skills may retain the licenses of their original repositories.
MIT License. See [LICENSE](LICENSE) for details.

---

**Keywords**: Claude Code, Antigravity, Agentic Skills, MCP, Model Context Protocol, AI Agents, Autonomous Coding, Prompt Engineering, Security Auditing, React Patterns, Microservices.
**Keywords**: Claude Code, Antigravity, Agentic Skills, MCP, AI Agents, Autonomous Coding, Security Auditing, React Patterns.
scripts/generate_index.py (new file, 72 lines)
@@ -0,0 +1,72 @@
import os
import json
import re


def generate_index(skills_dir, output_file):
    print(f"🏗️ Generating index from: {skills_dir}")
    skills = []

    for root, dirs, files in os.walk(skills_dir):
        if "SKILL.md" in files:
            skill_path = os.path.join(root, "SKILL.md")
            dir_name = os.path.basename(root)

            skill_info = {
                "id": dir_name,
                "path": os.path.relpath(root, os.path.dirname(skills_dir)),
                "name": dir_name.replace("-", " ").title(),
                "description": ""
            }

            with open(skill_path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Try to extract from frontmatter first
            fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
            if fm_match:
                fm_content = fm_match.group(1)
                name_fm = re.search(r'^name:\s*(.+)$', fm_content, re.MULTILINE)
                desc_fm = re.search(r'^description:\s*(.+)$', fm_content, re.MULTILINE)

                if name_fm:
                    skill_info["name"] = name_fm.group(1).strip()
                if desc_fm:
                    skill_info["description"] = desc_fm.group(1).strip()

            # Fall back to the header and first paragraph if needed
            if not skill_info["description"]:
                name_match = re.search(r'^#\s+(.+)$', content, re.MULTILINE)
                if name_match and not fm_match:  # Only override if no frontmatter name
                    skill_info["name"] = name_match.group(1).strip()

                # Extract the first paragraph
                body = content
                if fm_match:
                    body = content[fm_match.end():].strip()

                lines = body.split('\n')
                desc_lines = []
                for line in lines:
                    if line.startswith('#') or not line.strip():
                        if desc_lines:
                            break
                        continue
                    desc_lines.append(line.strip())

                if desc_lines:
                    skill_info["description"] = " ".join(desc_lines)[:150] + "..."

            skills.append(skill_info)

    skills.sort(key=lambda x: x["name"])

    with open(output_file, 'w', encoding='utf-8') as f:
        json.dump(skills, f, indent=2)

    print(f"✅ Generated index with {len(skills)} skills at: {output_file}")
    return skills


if __name__ == "__main__":
    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    skills_path = os.path.join(base_dir, "skills")
    output_path = os.path.join(base_dir, "skills_index.json")
    generate_index(skills_path, output_path)
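As a quick sanity check, the frontmatter regex the script relies on can be exercised on an in-memory `SKILL.md` string (the sample skill below is invented for illustration):

```python
import re

SAMPLE = """---
name: demo-skill
description: A demo description.
---

# Demo Skill
"""

# Same patterns as generate_index: grab the frontmatter block, then its fields.
fm_match = re.search(r'^---\s*(.*?)\s*---', SAMPLE, re.DOTALL)
fm = fm_match.group(1)
name = re.search(r'^name:\s*(.+)$', fm, re.MULTILINE).group(1).strip()
desc = re.search(r'^description:\s*(.+)$', fm, re.MULTILINE).group(1).strip()
print(name, "|", desc)  # demo-skill | A demo description.
```

Note that the pattern is anchored with a plain `^` (no `re.MULTILINE`), so files with leading whitespace before the opening `---` would not match.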
scripts/validate_skills.py (new file, 50 lines)
@@ -0,0 +1,50 @@
import os
import re


def validate_skills(skills_dir):
    print(f"🔍 Validating skills in: {skills_dir}")
    errors = []
    skill_count = 0

    for root, dirs, files in os.walk(skills_dir):
        if "SKILL.md" in files:
            skill_count += 1
            skill_path = os.path.join(root, "SKILL.md")
            rel_path = os.path.relpath(skill_path, skills_dir)

            with open(skill_path, 'r', encoding='utf-8') as f:
                content = f.read()

            # Check for frontmatter or a top-level header
            has_frontmatter = content.strip().startswith("---")
            has_header = re.search(r'^#\s+', content, re.MULTILINE)

            if not (has_frontmatter or has_header):
                errors.append(f"❌ {rel_path}: Missing frontmatter or top-level heading")

            if has_frontmatter:
                # Basic check for name and description in the frontmatter
                fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
                if fm_match:
                    fm_content = fm_match.group(1)
                    if "name:" not in fm_content:
                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'name:'")
                    if "description:" not in fm_content:
                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'description:'")
                else:
                    errors.append(f"❌ {rel_path}: Malformed frontmatter")

    print(f"✅ Found and checked {skill_count} skills.")
    if errors:
        print("\n⚠️ Validation Results:")
        for err in errors:
            print(err)
        return False
    else:
        print("✨ All skills passed basic validation!")
        return True


if __name__ == "__main__":
    base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    skills_path = os.path.join(base_dir, "skills")
    validate_skills(skills_path)
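The validator's per-file logic reduces to a few string checks; here is a condensed, self-contained sketch of the same checks (a standalone function for illustration, not the script itself):

```python
import re

def check_skill(content: str) -> list:
    """Condensed version of the validator's per-file checks."""
    errors = []
    has_frontmatter = content.strip().startswith("---")
    has_header = re.search(r'^#\s+', content, re.MULTILINE)
    if not (has_frontmatter or has_header):
        errors.append("Missing frontmatter or top-level heading")
    if has_frontmatter:
        fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
        if fm_match:
            fm = fm_match.group(1)
            if "name:" not in fm:
                errors.append("Frontmatter missing 'name:'")
            if "description:" not in fm:
                errors.append("Frontmatter missing 'description:'")
        else:
            errors.append("Malformed frontmatter")
    return errors

good = "---\nname: x\ndescription: y\n---\n# X\n"
bad = "just some text"
print(check_skill(good))  # []
print(check_skill(bad))   # ['Missing frontmatter or top-level heading']
```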
skills/address-github-comments/SKILL.md (new file, 55 lines)
@@ -0,0 +1,55 @@
---
name: address-github-comments
description: Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI.
---

# Address GitHub Comments

## Overview

Efficiently address PR review comments or issue feedback using the GitHub CLI (`gh`). This skill ensures all feedback is addressed systematically.

## Prerequisites

Ensure `gh` is authenticated.

```bash
gh auth status
```

If not logged in, run `gh auth login`.

## Workflow

### 1. Inspect Comments

Fetch the comments for the current branch's PR.

```bash
gh pr view --comments
```

Or use a custom script, if available, to list threads.

### 2. Categorize and Plan

- List the comments and review threads.
- Propose a fix for each.
- **Wait for user confirmation** on which comments to address first if there are many.

### 3. Apply Fixes

Apply the code changes for the selected comments.

### 4. Respond to Comments

Once fixed, reply to the threads and mark them resolved.

```bash
gh pr comment <PR_NUMBER> --body "Addressed in latest commit."
```

## Common Mistakes

- **Applying fixes without understanding context**: Always read the surrounding code of a comment.
- **Not verifying auth**: Check `gh auth status` before starting.
skills/autonomous-agent-patterns/SKILL.md (new file, 761 lines)
@@ -0,0 +1,761 @@
---
name: autonomous-agent-patterns
description: "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants."
---

# 🕹️ Autonomous Agent Patterns

> Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

## When to Use This Skill

Use this skill when:

- Building autonomous AI agents
- Designing tool/function calling APIs
- Implementing permission and approval systems
- Creating browser automation for agents
- Designing human-in-the-loop workflows

---

## 1. Core Agent Architecture

### 1.1 Agent Loop

```
┌─────────────────────────────────────────────────────┐
│                     AGENT LOOP                      │
│                                                     │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐      │
│   │  Think   │───▶│  Decide  │───▶│   Act    │      │
│   │ (Reason) │    │  (Plan)  │    │ (Execute)│      │
│   └──────────┘    └──────────┘    └──────────┘      │
│        ▲                               │            │
│        │          ┌──────────┐         │            │
│        └──────────│ Observe  │◀────────┘            │
│                   │ (Result) │                      │
│                   └──────────┘                      │
└─────────────────────────────────────────────────────┘
```

```python
import json
from typing import Any


class AgentLoop:
    def __init__(self, llm, tools, max_iterations=50):
        self.llm = llm
        self.tools = {t.name: t for t in tools}
        self.max_iterations = max_iterations
        self.history = []

    def run(self, task: str) -> str:
        self.history.append({"role": "user", "content": task})

        for i in range(self.max_iterations):
            # Think: get an LLM response with tool options
            response = self.llm.chat(
                messages=self.history,
                tools=self._format_tools(),
                tool_choice="auto"
            )

            # Decide: check whether the agent wants to use a tool
            if response.tool_calls:
                for tool_call in response.tool_calls:
                    # Act: execute the tool
                    result = self._execute_tool(tool_call)

                    # Observe: add the result to history
                    self.history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                # No more tool calls = task complete
                return response.content

        return "Max iterations reached"

    def _execute_tool(self, tool_call) -> Any:
        tool = self.tools[tool_call.name]
        args = json.loads(tool_call.arguments)
        return tool.execute(**args)
```
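To see the Think → Decide → Act → Observe cycle run end to end, here is a toy, dependency-free version of the same loop shape with a scripted stand-in for the LLM (the `FakeLLM`, `AddTool`, and message shapes are illustrative, not a real model API):

```python
import json
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    id: str
    name: str
    arguments: str

@dataclass
class Response:
    content: str = ""
    tool_calls: list = field(default_factory=list)

class FakeLLM:
    """Scripted 'model': first requests the add tool, then answers."""
    def __init__(self):
        self.turn = 0
    def chat(self, messages):
        self.turn += 1
        if self.turn == 1:
            return Response(tool_calls=[ToolCall("1", "add", json.dumps({"a": 2, "b": 3}))])
        # Second turn: the tool result is now in history, so just answer.
        return Response(content=f"The sum is {messages[-1]['content']}")

class AddTool:
    name = "add"
    def execute(self, a, b):
        return a + b

# Minimal loop: Think -> Decide -> Act -> Observe, then finish.
llm, tools = FakeLLM(), {"add": AddTool()}
history = [{"role": "user", "content": "What is 2 + 3?"}]
answer = None
for _ in range(10):
    response = llm.chat(history)
    if response.tool_calls:
        for call in response.tool_calls:
            result = tools[call.name].execute(**json.loads(call.arguments))
            history.append({"role": "tool", "tool_call_id": call.id, "content": str(result)})
    else:
        answer = response.content
        break
print(answer)  # The sum is 5
```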

### 1.2 Multi-Model Architecture

```python
class MultiModelAgent:
    """
    Use different models for different purposes:
    - A fast model for planning
    - A powerful model for complex reasoning
    - A specialized model for code generation
    """

    def __init__(self):
        self.models = {
            "fast": "gpt-3.5-turbo",    # Quick decisions
            "smart": "gpt-4-turbo",     # Complex reasoning
            "code": "claude-3-sonnet",  # Code generation
        }

    def select_model(self, task_type: str) -> str:
        if task_type == "planning":
            return self.models["fast"]
        elif task_type == "analysis":
            return self.models["smart"]
        elif task_type == "code":
            return self.models["code"]
        return self.models["smart"]
```
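The same routing can be collapsed into a dict lookup with an explicit default, which makes the fallback behavior obvious at a glance (a sketch; the model names are only the placeholders used above):

```python
MODELS = {
    "fast": "gpt-3.5-turbo",
    "smart": "gpt-4-turbo",
    "code": "claude-3-sonnet",
}
ROUTES = {"planning": "fast", "analysis": "smart", "code": "code"}

def select_model(task_type: str) -> str:
    # Unknown task types fall back to the "smart" model.
    return MODELS[ROUTES.get(task_type, "smart")]

print(select_model("planning"))  # gpt-3.5-turbo
print(select_model("unknown"))   # gpt-4-turbo
```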

---

## 2. Tool Design Patterns

### 2.1 Tool Schema

```python
class Tool:
    """Base class for agent tools"""

    @property
    def schema(self) -> dict:
        """JSON Schema for the tool"""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": self._get_parameters(),
                "required": self._get_required()
            }
        }

    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool and return result"""
        raise NotImplementedError

class ReadFileTool(Tool):
    name = "read_file"
    description = "Read the contents of a file from the filesystem"

    def _get_parameters(self):
        return {
            "path": {
                "type": "string",
                "description": "Absolute path to the file"
            },
            "start_line": {
                "type": "integer",
                "description": "Line to start reading from (1-indexed)"
            },
            "end_line": {
                "type": "integer",
                "description": "Line to stop reading at (inclusive)"
            }
        }

    def _get_required(self):
        return ["path"]

    def execute(self, path: str, start_line: int = None, end_line: int = None) -> ToolResult:
        try:
            with open(path, 'r') as f:
                lines = f.readlines()

            # Slice even when only one bound is given
            if start_line is not None or end_line is not None:
                lines = lines[(start_line or 1) - 1:end_line]

            return ToolResult(
                success=True,
                output="".join(lines)
            )
        except FileNotFoundError:
            return ToolResult(
                success=False,
                error=f"File not found: {path}"
            )
```
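
The tools above construct a `ToolResult` that this excerpt never defines; a minimal sketch consistent with how it is used here (`success`, `output`, `error`, plus the `metadata` field the browser tools attach later):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToolResult:
    """Outcome of one tool invocation, as consumed by the agent loop."""
    success: bool
    output: str = ""
    error: Optional[str] = None
    metadata: dict = field(default_factory=dict)
```

Returning a structured result instead of a raw string lets the loop decide whether to retry, surface the error to the model, or attach artifacts such as screenshots.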

### 2.2 Essential Agent Tools

```python
CODING_AGENT_TOOLS = {
    # File operations
    "read_file": "Read file contents",
    "write_file": "Create or overwrite a file",
    "edit_file": "Make targeted edits to a file",
    "list_directory": "List files and folders",
    "search_files": "Search for files by pattern",

    # Code understanding
    "search_code": "Search for code patterns (grep)",
    "get_definition": "Find function/class definition",
    "get_references": "Find all references to a symbol",

    # Terminal
    "run_command": "Execute a shell command",
    "read_output": "Read command output",
    "send_input": "Send input to running command",

    # Browser (optional)
    "open_browser": "Open URL in browser",
    "click_element": "Click on page element",
    "type_text": "Type text into input",
    "screenshot": "Capture screenshot",

    # Context
    "ask_user": "Ask the user a question",
    "search_web": "Search the web for information"
}
```
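
A catalog like this usually feeds a small registry that maps each name to a handler and exports the list to the model; a minimal sketch (the `ToolRegistry` shape is illustrative, not from the source):

```python
import os

class ToolRegistry:
    """Map tool names to handlers and expose their schemas to the LLM."""

    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, handler):
        self._tools[name] = {"description": description, "handler": handler}

    def schemas(self) -> list:
        """The available-tools list sent to the model on each turn."""
        return [
            {"name": name, "description": entry["description"]}
            for name, entry in self._tools.items()
        ]

    def dispatch(self, name: str, **kwargs):
        if name not in self._tools:
            raise KeyError(f"Unknown tool: {name}")
        return self._tools[name]["handler"](**kwargs)

registry = ToolRegistry()
registry.register("read_file", "Read file contents",
                  lambda path: open(path).read())
registry.register("list_directory", "List files and folders",
                  lambda path=".": sorted(os.listdir(path)))
```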

### 2.3 Edit Tool Design

```python
class EditFileTool(Tool):
    """
    Precise file editing with conflict detection.
    Uses a search/replace pattern for reliable edits.
    """

    name = "edit_file"
    description = "Edit a file by replacing specific content"

    def execute(
        self,
        path: str,
        search: str,
        replace: str,
        expected_occurrences: int = 1
    ) -> ToolResult:
        """
        Args:
            path: File to edit
            search: Exact text to find (must match exactly, including whitespace)
            replace: Text to replace with
            expected_occurrences: How many times search should appear (validation)
        """
        with open(path, 'r') as f:
            content = f.read()

        # Validate
        actual_occurrences = content.count(search)
        if actual_occurrences == 0:
            return ToolResult(
                success=False,
                error="Search text not found in file"
            )

        if actual_occurrences != expected_occurrences:
            return ToolResult(
                success=False,
                error=f"Expected {expected_occurrences} occurrences, found {actual_occurrences}"
            )

        # Apply edit
        new_content = content.replace(search, replace)

        with open(path, 'w') as f:
            f.write(new_content)

        return ToolResult(
            success=True,
            output=f"Replaced {actual_occurrences} occurrence(s)"
        )
```
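
The validate-then-replace contract above can be exercised in isolation; a stripped-down sketch without the `Tool`/`ToolResult` machinery (`safe_replace` is an illustrative name):

```python
def safe_replace(content: str, search: str, replace: str, expected: int = 1) -> str:
    """Replace `search` with `replace`, refusing ambiguous or missing matches."""
    found = content.count(search)
    if found == 0:
        raise ValueError("Search text not found")
    if found != expected:
        raise ValueError(f"Expected {expected} occurrence(s), found {found}")
    return content.replace(search, replace)

# A unique match is applied; an ambiguous one is rejected instead of
# silently editing every site.
patched = safe_replace("x = 1\ny = 2\n", "y = 2", "y = 3")
```

Requiring an exact occurrence count is what makes this pattern safe for LLM-generated edits: a stale or under-specified `search` string fails loudly instead of corrupting the file.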

---

## 3. Permission & Safety Patterns

### 3.1 Permission Levels

```python
class PermissionLevel(Enum):
    # Fully automatic - no user approval needed
    AUTO = "auto"

    # Ask once per session
    ASK_ONCE = "ask_once"

    # Ask every time
    ASK_EACH = "ask_each"

    # Never allow
    NEVER = "never"

PERMISSION_CONFIG = {
    # Low risk - can auto-approve
    "read_file": PermissionLevel.AUTO,
    "list_directory": PermissionLevel.AUTO,
    "search_code": PermissionLevel.AUTO,

    # Medium risk - ask once
    "write_file": PermissionLevel.ASK_ONCE,
    "edit_file": PermissionLevel.ASK_ONCE,

    # High risk - ask each time
    "run_command": PermissionLevel.ASK_EACH,
    "delete_file": PermissionLevel.ASK_EACH,

    # Dangerous - never auto-approve
    "sudo_command": PermissionLevel.NEVER,
    "format_disk": PermissionLevel.NEVER
}
```

### 3.2 Approval UI Pattern

```python
class ApprovalManager:
    def __init__(self, ui, config):
        self.ui = ui
        self.config = config
        self.session_approvals = {}

    def request_approval(self, tool_name: str, args: dict) -> bool:
        level = self.config.get(tool_name, PermissionLevel.ASK_EACH)

        if level == PermissionLevel.AUTO:
            return True

        if level == PermissionLevel.NEVER:
            self.ui.show_error(f"Tool '{tool_name}' is not allowed")
            return False

        if level == PermissionLevel.ASK_ONCE:
            if tool_name in self.session_approvals:
                return self.session_approvals[tool_name]

        # Show approval dialog
        approved = self.ui.show_approval_dialog(
            tool=tool_name,
            args=args,
            risk_level=self._assess_risk(tool_name, args)
        )

        if level == PermissionLevel.ASK_ONCE:
            self.session_approvals[tool_name] = approved

        return approved

    def _assess_risk(self, tool_name: str, args: dict) -> str:
        """Analyze a specific call for risk level"""
        if tool_name == "run_command":
            cmd = args.get("command", "")
            if any(danger in cmd for danger in ["rm -rf", "sudo", "chmod"]):
                return "HIGH"
            return "MEDIUM"
        return "LOW"
```

### 3.3 Sandboxing

```python
class SandboxedExecution:
    """
    Execute code/commands in an isolated environment
    """

    def __init__(self, workspace_dir: str):
        self.workspace = workspace_dir
        self.allowed_commands = ["npm", "python", "node", "git", "ls", "cat"]
        self.blocked_paths = ["/etc", "/usr", "/bin", os.path.expanduser("~")]

    def validate_path(self, path: str) -> bool:
        """Ensure path is within workspace"""
        real_path = os.path.realpath(path)
        workspace_real = os.path.realpath(self.workspace)
        return real_path.startswith(workspace_real)

    def validate_command(self, command: str) -> bool:
        """Check if command is allowed"""
        cmd_parts = shlex.split(command)
        if not cmd_parts:
            return False

        base_cmd = cmd_parts[0]
        return base_cmd in self.allowed_commands

    def execute_sandboxed(self, command: str) -> ToolResult:
        if not self.validate_command(command):
            return ToolResult(
                success=False,
                error=f"Command not allowed: {command}"
            )

        # Execute without a shell so metacharacters (;, &&, |) cannot
        # smuggle in a second, unvalidated command
        result = subprocess.run(
            shlex.split(command),
            cwd=self.workspace,
            capture_output=True,
            timeout=30,
            env={
                **os.environ,
                "HOME": self.workspace,  # Isolate home directory
            }
        )

        return ToolResult(
            success=result.returncode == 0,
            output=result.stdout.decode(),
            error=result.stderr.decode() if result.returncode != 0 else None
        )
```

---

## 4. Browser Automation

### 4.1 Browser Tool Pattern

```python
class BrowserTool:
    """
    Browser automation for agents using Playwright/Puppeteer.
    Enables visual debugging and web testing.
    """

    def __init__(self, headless: bool = True):
        self.browser = None
        self.page = None
        self.headless = headless

    async def open_url(self, url: str) -> ToolResult:
        """Navigate to URL and return page info"""
        if not self.browser:
            # `playwright` is assumed to be a started async_playwright() instance
            self.browser = await playwright.chromium.launch(headless=self.headless)
            self.page = await self.browser.new_page()

        await self.page.goto(url)

        # Capture state
        screenshot = await self.page.screenshot(type='png')
        title = await self.page.title()

        return ToolResult(
            success=True,
            output=f"Loaded: {title}",
            metadata={
                "screenshot": base64.b64encode(screenshot).decode(),
                "url": self.page.url
            }
        )

    async def click(self, selector: str) -> ToolResult:
        """Click on an element"""
        try:
            await self.page.click(selector, timeout=5000)
            await self.page.wait_for_load_state("networkidle")

            screenshot = await self.page.screenshot()
            return ToolResult(
                success=True,
                output=f"Clicked: {selector}",
                metadata={"screenshot": base64.b64encode(screenshot).decode()}
            )
        except TimeoutError:
            return ToolResult(
                success=False,
                error=f"Element not found: {selector}"
            )

    async def type_text(self, selector: str, text: str) -> ToolResult:
        """Type text into an input"""
        await self.page.fill(selector, text)
        return ToolResult(success=True, output=f"Typed into {selector}")

    async def get_page_content(self) -> ToolResult:
        """Get accessible text content of the page"""
        content = await self.page.evaluate("""
            () => {
                // Get visible text
                const walker = document.createTreeWalker(
                    document.body,
                    NodeFilter.SHOW_TEXT,
                    null,
                    false
                );

                let text = '';
                while (walker.nextNode()) {
                    const node = walker.currentNode;
                    if (node.textContent.trim()) {
                        text += node.textContent.trim() + '\\n';
                    }
                }
                return text;
            }
        """)
        return ToolResult(success=True, output=content)
```

### 4.2 Visual Agent Pattern

```python
class VisualAgent:
    """
    Agent that uses screenshots to understand web pages.
    Can identify elements visually without selectors.
    """

    def __init__(self, llm, browser):
        self.llm = llm
        self.browser = browser

    async def describe_page(self) -> str:
        """Use a vision model to describe the current page"""
        screenshot = await self.browser.screenshot()

        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this webpage. List all interactive elements you see."},
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        return response.content

    async def find_and_click(self, description: str) -> ToolResult:
        """Find an element by visual description and click it"""
        screenshot = await self.browser.screenshot()

        # Ask the vision model to locate the element
        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"""
                        Find the element matching: "{description}"
                        Return the approximate coordinates as JSON: {{"x": number, "y": number}}
                        """
                    },
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        coords = json.loads(response.content)
        await self.browser.page.mouse.click(coords["x"], coords["y"])

        return ToolResult(success=True, output=f"Clicked at ({coords['x']}, {coords['y']})")
```

---

## 5. Context Management

### 5.1 Context Injection Patterns

````python
class ContextManager:
    """
    Manage context provided to the agent.
    Inspired by Cline's @-mention patterns.
    """

    def __init__(self, workspace: str):
        self.workspace = workspace
        self.context = []

    def add_file(self, path: str) -> None:
        """@file - Add file contents to context"""
        with open(path, 'r') as f:
            content = f.read()

        self.context.append({
            "type": "file",
            "path": path,
            "content": content
        })

    def add_folder(self, path: str, max_files: int = 20) -> None:
        """@folder - Add files in folder (max_files total, not per directory)"""
        added = 0
        for root, dirs, files in os.walk(path):
            for file in files:
                if added >= max_files:
                    return
                self.add_file(os.path.join(root, file))
                added += 1

    def add_url(self, url: str) -> None:
        """@url - Fetch and add URL content"""
        response = requests.get(url)
        content = html_to_markdown(response.text)

        self.context.append({
            "type": "url",
            "url": url,
            "content": content
        })

    def add_problems(self, diagnostics: list) -> None:
        """@problems - Add IDE diagnostics"""
        self.context.append({
            "type": "diagnostics",
            "problems": diagnostics
        })

    def format_for_prompt(self) -> str:
        """Format all context for the LLM prompt"""
        parts = []
        for item in self.context:
            if item["type"] == "file":
                parts.append(f"## File: {item['path']}\n```\n{item['content']}\n```")
            elif item["type"] == "url":
                parts.append(f"## URL: {item['url']}\n{item['content']}")
            elif item["type"] == "diagnostics":
                parts.append(f"## Problems:\n{json.dumps(item['problems'], indent=2)}")

        return "\n\n".join(parts)
````

### 5.2 Checkpoint/Resume

```python
class CheckpointManager:
    """
    Save and restore agent state for long-running tasks.
    """

    def __init__(self, storage_dir: str):
        self.storage_dir = storage_dir
        os.makedirs(storage_dir, exist_ok=True)

    def save_checkpoint(self, session_id: str, state: dict) -> str:
        """Save current agent state"""
        checkpoint = {
            "timestamp": datetime.now().isoformat(),
            "session_id": session_id,
            "history": state["history"],
            "context": state["context"],
            "workspace_state": self._capture_workspace(state["workspace"]),
            "metadata": state.get("metadata", {})
        }

        path = os.path.join(self.storage_dir, f"{session_id}.json")
        with open(path, 'w') as f:
            json.dump(checkpoint, f, indent=2)

        return path

    def restore_checkpoint(self, checkpoint_path: str) -> dict:
        """Restore agent state from checkpoint"""
        with open(checkpoint_path, 'r') as f:
            checkpoint = json.load(f)

        return {
            "history": checkpoint["history"],
            "context": checkpoint["context"],
            "workspace": self._restore_workspace(checkpoint["workspace_state"]),
            "metadata": checkpoint["metadata"]
        }

    def _capture_workspace(self, workspace: str) -> dict:
        """Capture relevant workspace state"""
        # Git status, file hashes, etc.
        return {
            "git_ref": subprocess.getoutput(f"cd {workspace} && git rev-parse HEAD"),
            "git_dirty": subprocess.getoutput(f"cd {workspace} && git status --porcelain")
        }
```

---

## 6. MCP (Model Context Protocol) Integration

### 6.1 MCP Server Pattern

```python
from mcp import Server, Tool

class MCPAgent:
    """
    Agent that can dynamically discover and use MCP tools.
    The 'Add a tool that...' pattern from Cline.
    """

    def __init__(self, llm):
        self.llm = llm
        self.mcp_servers = {}
        self.available_tools = {}

    def connect_server(self, name: str, config: dict) -> None:
        """Connect to an MCP server"""
        server = Server(config)
        self.mcp_servers[name] = server

        # Discover tools
        tools = server.list_tools()
        for tool in tools:
            self.available_tools[tool.name] = {
                "server": name,
                "schema": tool.schema
            }

    async def create_tool(self, description: str) -> str:
        """
        Create a new MCP server based on a user description, e.g.
        'Add a tool that fetches Jira tickets'
        """
        # Generate MCP server code
        code = self.llm.generate(f"""
        Create a Python MCP server with a tool that does:
        {description}

        Use the FastMCP framework. Include proper error handling.
        Return only the Python code.
        """)

        # Save and install
        server_name = self._extract_name(description)
        path = f"./mcp_servers/{server_name}/server.py"
        os.makedirs(os.path.dirname(path), exist_ok=True)

        with open(path, 'w') as f:
            f.write(code)

        # Hot-reload
        self.connect_server(server_name, {"path": path})

        return f"Created tool: {server_name}"
```

---

## Best Practices Checklist

### Agent Design

- [ ] Clear task decomposition
- [ ] Appropriate tool granularity
- [ ] Error handling at each step
- [ ] Progress visibility to user

### Safety

- [ ] Permission system implemented
- [ ] Dangerous operations blocked
- [ ] Sandbox for untrusted code
- [ ] Audit logging enabled

### UX

- [ ] Approval UI is clear
- [ ] Progress updates provided
- [ ] Undo/rollback available
- [ ] Explanation of actions

---

## Resources

- [Cline](https://github.com/cline/cline)
- [OpenAI Codex](https://github.com/openai/codex)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Anthropic Tool Use](https://docs.anthropic.com/claude/docs/tool-use)
73 skills/brand-guidelines-community/SKILL.md Normal file
@@ -0,0 +1,73 @@

---
name: brand-guidelines
description: Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply.
license: Complete terms in LICENSE.txt
---

# Anthropic Brand Styling

## Overview

Use this skill to access Anthropic's official brand identity and style resources.

**Keywords**: branding, corporate identity, visual identity, post-processing, styling, brand colors, typography, Anthropic brand, visual formatting, visual design

## Brand Guidelines

### Colors

**Main Colors:**

- Dark: `#141413` - Primary text and dark backgrounds
- Light: `#faf9f5` - Light backgrounds and text on dark
- Mid Gray: `#b0aea5` - Secondary elements
- Light Gray: `#e8e6dc` - Subtle backgrounds

**Accent Colors:**

- Orange: `#d97757` - Primary accent
- Blue: `#6a9bcc` - Secondary accent
- Green: `#788c5d` - Tertiary accent

### Typography

- **Headings**: Poppins (with Arial fallback)
- **Body Text**: Lora (with Georgia fallback)
- **Note**: Fonts should be pre-installed in your environment for best results

## Features

### Smart Font Application

- Applies Poppins font to headings (24pt and larger)
- Applies Lora font to body text
- Automatically falls back to Arial/Georgia if custom fonts are unavailable
- Preserves readability across all systems

### Text Styling

- Headings (24pt+): Poppins font
- Body text: Lora font
- Smart color selection based on background
- Preserves text hierarchy and formatting

### Shape and Accent Colors

- Non-text shapes use accent colors
- Cycles through orange, blue, and green accents
- Maintains visual interest while staying on-brand

## Technical Details

### Font Management

- Uses system-installed Poppins and Lora fonts when available
- Provides automatic fallback to Arial (headings) and Georgia (body)
- No font installation required - works with existing system fonts
- For best results, pre-install Poppins and Lora fonts in your environment

### Color Application

- Uses RGB color values for precise brand matching
- Applied via python-pptx's RGBColor class
- Maintains color fidelity across different systems
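
As a sketch of how the palette maps to the RGB values python-pptx expects, assuming a plain hex-to-RGB conversion (the helper names here are hypothetical, not part of the skill):

```python
BRAND = {
    "dark": "#141413", "light": "#faf9f5",
    "mid_gray": "#b0aea5", "light_gray": "#e8e6dc",
    "orange": "#d97757", "blue": "#6a9bcc", "green": "#788c5d",
}

ACCENT_CYCLE = ["orange", "blue", "green"]

def hex_to_rgb(hex_color: str) -> tuple:
    """'#d97757' -> (217, 119, 87); usable as RGBColor(*hex_to_rgb(...)) in python-pptx."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def accent_for(shape_index: int) -> tuple:
    """Cycle non-text shapes through the orange/blue/green accents."""
    return hex_to_rgb(BRAND[ACCENT_CYCLE[shape_index % len(ACCENT_CYCLE)]])
```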
691 skills/bun-development/SKILL.md Normal file
@@ -0,0 +1,691 @@

---
name: bun-development
description: "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun."
---

# ⚡ Bun Development

> Fast, modern JavaScript/TypeScript development with the Bun runtime, inspired by [oven-sh/bun](https://github.com/oven-sh/bun).

## When to Use This Skill

Use this skill when:

- Starting new JS/TS projects with Bun
- Migrating from Node.js to Bun
- Optimizing development speed
- Using Bun's built-in tools (bundler, test runner)
- Troubleshooting Bun-specific issues

---

## 1. Getting Started

### 1.1 Installation

```bash
# macOS / Linux
curl -fsSL https://bun.sh/install | bash

# Windows
powershell -c "irm bun.sh/install.ps1 | iex"

# Homebrew
brew tap oven-sh/bun
brew install bun

# npm (if needed)
npm install -g bun

# Upgrade
bun upgrade
```

### 1.2 Why Bun?

| Feature         | Bun            | Node.js                     |
| :-------------- | :------------- | :-------------------------- |
| Startup time    | ~25ms          | ~100ms+                     |
| Package install | 10-100x faster | Baseline                    |
| TypeScript      | Native         | Requires transpiler         |
| JSX             | Native         | Requires transpiler         |
| Test runner     | Built-in       | External (Jest, Vitest)     |
| Bundler         | Built-in       | External (Webpack, esbuild) |

---

## 2. Project Setup

### 2.1 Create New Project

```bash
# Initialize project
bun init

# Creates:
# ├── package.json
# ├── tsconfig.json
# ├── index.ts
# └── README.md

# With specific template
bun create <template> <project-name>

# Examples
bun create react my-app    # React app
bun create next my-app     # Next.js app
bun create vite my-app     # Vite app
bun create elysia my-api   # Elysia API
```

### 2.2 package.json

```json
{
  "name": "my-bun-project",
  "version": "1.0.0",
  "module": "index.ts",
  "type": "module",
  "scripts": {
    "dev": "bun run --watch index.ts",
    "start": "bun run index.ts",
    "test": "bun test",
    "build": "bun build ./index.ts --outdir ./dist",
    "lint": "bunx eslint ."
  },
  "devDependencies": {
    "@types/bun": "latest"
  },
  "peerDependencies": {
    "typescript": "^5.0.0"
  }
}
```

### 2.3 tsconfig.json (Bun-optimized)

```json
{
  "compilerOptions": {
    "lib": ["ESNext"],
    "module": "esnext",
    "target": "esnext",
    "moduleResolution": "bundler",
    "moduleDetection": "force",
    "allowImportingTsExtensions": true,
    "noEmit": true,
    "composite": true,
    "strict": true,
    "downlevelIteration": true,
    "skipLibCheck": true,
    "jsx": "react-jsx",
    "allowSyntheticDefaultImports": true,
    "forceConsistentCasingInFileNames": true,
    "allowJs": true,
    "types": ["bun-types"]
  }
}
```

---

## 3. Package Management

### 3.1 Installing Packages

```bash
# Install from package.json
bun install            # or 'bun i'

# Add dependencies
bun add express        # Regular dependency
bun add -d typescript  # Dev dependency
bun add -D @types/node # Dev dependency (alias)
bun add --optional pkg # Optional dependency

# From specific registry
bun add lodash --registry https://registry.npmmirror.com

# Install specific version
bun add react@18.2.0
bun add react@latest
bun add react@next

# From git
bun add github:user/repo
bun add git+https://github.com/user/repo.git
```

### 3.2 Removing & Updating

```bash
# Remove package
bun remove lodash

# Update packages
bun update           # Update all
bun update lodash    # Update specific
bun update --latest  # Update to latest (ignore ranges)

# Check outdated
bun outdated
```

### 3.3 bunx (npx equivalent)

```bash
# Execute package binaries
bunx prettier --write .
bunx tsc --init
bunx create-react-app my-app

# With specific version
bunx -p typescript@4.9 tsc --version

# Run without installing
bunx cowsay "Hello from Bun!"
```

### 3.4 Lockfile

```bash
# bun.lockb is a binary lockfile (faster parsing)
# To generate a text lockfile for debugging:
bun install --yarn  # Creates yarn.lock

# Fail instead of updating an existing lockfile
bun install --frozen-lockfile
```

---

## 4. Running Code

### 4.1 Basic Execution

```bash
# Run TypeScript directly (no build step!)
bun run index.ts

# Run JavaScript
bun run index.js

# Run with arguments
bun run server.ts --port 3000

# Run package.json script
bun run dev
bun run build

# Short form (for scripts)
bun dev
bun build
```

### 4.2 Watch Mode

```bash
# Auto-restart on file changes
bun --watch run index.ts

# With hot reloading
bun --hot run server.ts
```

### 4.3 Environment Variables

```typescript
// .env file is loaded automatically!

// Access environment variables
const apiKey = Bun.env.API_KEY;
const port = Bun.env.PORT ?? "3000";

// Or use process.env (Node.js compatible)
const dbUrl = process.env.DATABASE_URL;
```

```bash
# Run with specific env file
bun --env-file=.env.production run index.ts
```

---

## 5. Built-in APIs

### 5.1 File System (Bun.file)

```typescript
// Read file
const file = Bun.file("./data.json");
const text = await file.text();
const json = await file.json();
const buffer = await file.arrayBuffer();

// File info
console.log(file.size); // bytes
console.log(file.type); // MIME type

// Write file
await Bun.write("./output.txt", "Hello, Bun!");
await Bun.write("./data.json", JSON.stringify({ foo: "bar" }));

// Stream large files
const reader = file.stream();
for await (const chunk of reader) {
  console.log(chunk);
}
```

### 5.2 HTTP Server (Bun.serve)

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(request) {
    const url = new URL(request.url);

    if (url.pathname === "/") {
      return new Response("Hello World!");
    }

    if (url.pathname === "/api/users") {
      return Response.json([
        { id: 1, name: "Alice" },
        { id: 2, name: "Bob" },
      ]);
    }

    return new Response("Not Found", { status: 404 });
  },

  error(error) {
    return new Response(`Error: ${error.message}`, { status: 500 });
  },
});

console.log(`Server running at http://localhost:${server.port}`);
```

### 5.3 WebSocket Server

```typescript
const server = Bun.serve({
  port: 3000,

  fetch(req, server) {
    // Upgrade to WebSocket
    if (server.upgrade(req)) {
      return; // Upgraded
    }
    return new Response("Upgrade failed", { status: 500 });
  },

  websocket: {
    open(ws) {
      console.log("Client connected");
      ws.send("Welcome!");
    },

    message(ws, message) {
      console.log(`Received: ${message}`);
      ws.send(`Echo: ${message}`);
    },

    close(ws) {
      console.log("Client disconnected");
    },
  },
});
```

### 5.4 SQLite (bun:sqlite)

```typescript
import { Database } from "bun:sqlite";

const db = new Database("mydb.sqlite");

// Create table
db.run(`
  CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE
  )
`);

// Insert
const insert = db.prepare("INSERT INTO users (name, email) VALUES (?, ?)");
insert.run("Alice", "alice@example.com");

// Query
const query = db.prepare("SELECT * FROM users WHERE name = ?");
const user = query.get("Alice");
console.log(user); // { id: 1, name: "Alice", email: "alice@example.com" }

// Query all
const allUsers = db.query("SELECT * FROM users").all();
```

### 5.5 Password Hashing

```typescript
// Hash password
const password = "super-secret";
const hash = await Bun.password.hash(password);

// Verify password
const isValid = await Bun.password.verify(password, hash);
console.log(isValid); // true

// With algorithm options
const bcryptHash = await Bun.password.hash(password, {
  algorithm: "bcrypt",
  cost: 12,
});
```

---

## 6. Testing

### 6.1 Basic Tests

```typescript
// math.test.ts
import { describe, it, expect, beforeAll, afterAll } from "bun:test";

describe("Math operations", () => {
  it("adds two numbers", () => {
    expect(1 + 1).toBe(2);
  });

  it("subtracts two numbers", () => {
    expect(5 - 3).toBe(2);
  });
});
```

### 6.2 Running Tests

```bash
# Run all tests
bun test

# Run specific file
bun test math.test.ts

# Run tests whose names match a pattern
bun test -t "adds"

# Watch mode
bun test --watch

# With coverage
bun test --coverage

# Timeout
bun test --timeout 5000
```

### 6.3 Matchers

```typescript
import { expect, test } from "bun:test";

test("matchers", async () => {
  // Equality
  expect(1).toBe(1);
  expect({ a: 1 }).toEqual({ a: 1 });
  expect([1, 2]).toContain(1);

  // Comparisons
  expect(10).toBeGreaterThan(5);
  expect(5).toBeLessThanOrEqual(5);

  // Truthiness
  expect(true).toBeTruthy();
  expect(null).toBeNull();
  expect(undefined).toBeUndefined();

  // Strings
  expect("hello").toMatch(/ell/);
  expect("hello").toContain("ell");

  // Arrays
  expect([1, 2, 3]).toHaveLength(3);

  // Exceptions
  expect(() => {
    throw new Error("fail");
  }).toThrow("fail");

  // Async (requires the async callback above)
  await expect(Promise.resolve(1)).resolves.toBe(1);
  await expect(Promise.reject("err")).rejects.toBe("err");
});
```

### 6.4 Mocking

```typescript
import { mock, spyOn } from "bun:test";

// Mock function
const mockFn = mock((x: number) => x * 2);
mockFn(5);
expect(mockFn).toHaveBeenCalled();
expect(mockFn).toHaveBeenCalledWith(5);
expect(mockFn.mock.results[0].value).toBe(10);

// Spy on method
const obj = {
  method: () => "original",
};
const spy = spyOn(obj, "method").mockReturnValue("mocked");
expect(obj.method()).toBe("mocked");
expect(spy).toHaveBeenCalled();
```
---

## 7. Bundling

### 7.1 Basic Build

```bash
# Bundle for production
bun build ./src/index.ts --outdir ./dist

# With options
bun build ./src/index.ts \
  --outdir ./dist \
  --target browser \
  --minify \
  --sourcemap
```

### 7.2 Build API

```typescript
const result = await Bun.build({
  entrypoints: ["./src/index.ts"],
  outdir: "./dist",
  target: "browser", // or "bun", "node"
  minify: true,
  sourcemap: "external",
  splitting: true,
  format: "esm",

  // External packages (not bundled)
  external: ["react", "react-dom"],

  // Define globals
  define: {
    "process.env.NODE_ENV": JSON.stringify("production"),
  },

  // Naming
  naming: {
    entry: "[name].[hash].js",
    chunk: "chunks/[name].[hash].js",
    asset: "assets/[name].[hash][ext]",
  },
});

if (!result.success) {
  console.error(result.logs);
}
```

### 7.3 Compile to Executable

```bash
# Create standalone executable
bun build ./src/cli.ts --compile --outfile myapp

# Cross-compile
bun build ./src/cli.ts --compile --target=bun-linux-x64 --outfile myapp-linux
bun build ./src/cli.ts --compile --target=bun-darwin-arm64 --outfile myapp-mac

# Assets are embedded by importing them in your source code;
# imported files are bundled into the executable automatically
```

---

## 8. Migration from Node.js

### 8.1 Compatibility

```typescript
// Most Node.js APIs work out of the box
import fs from "fs";
import path from "path";
import crypto from "crypto";

// process is global
console.log(process.cwd());
console.log(process.env.HOME);

// Buffer is global
const buf = Buffer.from("hello");

// __dirname and __filename work
console.log(__dirname);
console.log(__filename);
```

### 8.2 Common Migration Steps

```bash
# 1. Install Bun
curl -fsSL https://bun.sh/install | bash

# 2. Replace package manager
rm -rf node_modules package-lock.json
bun install

# 3. Update scripts in package.json
# "start": "node index.js" → "start": "bun run index.ts"
# "test": "jest" → "test": "bun test"

# 4. Add Bun types
bun add -d @types/bun
```

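Project-level defaults can also live in a `bunfig.toml` at the repository root. A minimal sketch — both keys shown are optional and there are many more:

```toml
# bunfig.toml — optional per-project configuration
[install]
# Pin exact versions instead of ^ ranges when adding packages
exact = true

[test]
# Always collect coverage when running `bun test`
coverage = true
```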
### 8.3 Differences from Node.js

```typescript
// ❌ Node.js specific (may not work)
require("module")        // Use import instead
require.resolve("pkg")   // Use import.meta.resolve
__non_webpack_require__  // Not supported

// ✅ Bun equivalents
import pkg from "pkg";
const resolved = import.meta.resolve("pkg");
Bun.resolveSync("pkg", process.cwd());

// ❌ These globals differ
process.hrtime()  // Use Bun.nanoseconds()
setImmediate()    // Use queueMicrotask()

// ✅ Bun-specific features
const file = Bun.file("./data.txt");    // Fast file API
Bun.serve({ port: 3000, fetch: ... });  // Fast HTTP server
Bun.password.hash(password);            // Built-in hashing
```

---

## 9. Performance Tips

### 9.1 Use Bun-native APIs

```typescript
// Slow (Node.js compat)
import fs from "fs/promises";
const content = await fs.readFile("./data.txt", "utf-8");

// Fast (Bun-native)
const file = Bun.file("./data.txt");
const content = await file.text();
```

### 9.2 Use Bun.serve for HTTP

```typescript
// Don't: Express/Fastify (overhead)
import express from "express";
const app = express();

// Do: Bun.serve (native, 4-10x faster)
Bun.serve({
  fetch(req) {
    return new Response("Hello!");
  },
});

// Or use Elysia (Bun-optimized framework)
import { Elysia } from "elysia";
new Elysia().get("/", () => "Hello!").listen(3000);
```

### 9.3 Bundle for Production

```bash
# Always bundle and minify for production
bun build ./src/index.ts --outdir ./dist --minify --target node

# Then run the bundle
bun run ./dist/index.js
```

---

## Quick Reference

| Task         | Command                                    |
| :----------- | :----------------------------------------- |
| Init project | `bun init`                                 |
| Install deps | `bun install`                              |
| Add package  | `bun add <pkg>`                            |
| Run script   | `bun run <script>`                         |
| Run file     | `bun run file.ts`                          |
| Watch mode   | `bun --watch run file.ts`                  |
| Run tests    | `bun test`                                 |
| Build        | `bun build ./src/index.ts --outdir ./dist` |
| Execute pkg  | `bunx <pkg>`                               |

---

## Resources

- [Bun Documentation](https://bun.sh/docs)
- [Bun GitHub](https://github.com/oven-sh/bun)
- [Elysia Framework](https://elysiajs.com/)
- [Bun Discord](https://bun.sh/discord)

Submodule skills/claude-d3js-skill deleted from e198c87d03
820 skills/claude-d3js-skill/SKILL.md Normal file
@@ -0,0 +1,820 @@
---
name: d3-viz
description: Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisations, or any complex SVG-based data visualisation that requires fine-grained control over visual elements, transitions, or interactions. Use this for bespoke visualisations beyond standard charting libraries, whether in React, Vue, Svelte, vanilla JavaScript, or any other environment.
---

# D3.js Visualisation

## Overview

This skill provides guidance for creating sophisticated, interactive data visualisations using d3.js. D3.js (Data-Driven Documents) excels at binding data to DOM elements and applying data-driven transformations to create custom, publication-quality visualisations with precise control over every visual element. The techniques work across any JavaScript environment, including vanilla JavaScript, React, Vue, Svelte, and other frameworks.

## When to use d3.js

**Use d3.js for:**

- Custom visualisations requiring unique visual encodings or layouts
- Interactive explorations with complex pan, zoom, or brush behaviours
- Network/graph visualisations (force-directed layouts, tree diagrams, hierarchies, chord diagrams)
- Geographic visualisations with custom projections
- Visualisations requiring smooth, choreographed transitions
- Publication-quality graphics with fine-grained styling control
- Novel chart types not available in standard libraries

**Consider alternatives for:**

- 3D visualisations - use Three.js instead

## Core workflow

### 1. Set up d3.js

Import d3 at the top of your script:

```javascript
import * as d3 from 'd3';
```

Or use the CDN version (7.x):

```html
<script src="https://d3js.org/d3.v7.min.js"></script>
```

All modules (scales, axes, shapes, transitions, etc.) are accessible through the `d3` namespace.

### 2. Choose the integration pattern

**Pattern A: Direct DOM manipulation (recommended for most cases)**

Use d3 to select DOM elements and manipulate them imperatively. This works in any JavaScript environment:

```javascript
function drawChart(data) {
  if (!data || data.length === 0) return;

  const svg = d3.select('#chart'); // Select by ID, class, or DOM element

  // Clear previous content
  svg.selectAll("*").remove();

  // Set up dimensions
  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };

  // Create scales, axes, and draw visualisation
  // ... d3 code here ...
}

// Call when data changes
drawChart(myData);
```

**Pattern B: Declarative rendering (for frameworks with templating)**

Use d3 for data calculations (scales, layouts) but render elements via your framework:

```javascript
function getChartElements(data) {
  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([0, 400]);

  return data.map((d, i) => ({
    x: 50,
    y: i * 30,
    width: xScale(d.value),
    height: 25
  }));
}

// In React: {getChartElements(data).map((d, i) => <rect key={i} {...d} fill="steelblue" />)}
// In Vue: v-for directive over the returned array
// In vanilla JS: Create elements manually from the returned data
```

Use Pattern A for complex visualisations with transitions, interactions, or when leveraging d3's full capabilities. Use Pattern B for simpler visualisations or when your framework prefers declarative rendering.

### 3. Structure the visualisation code

Follow this standard structure in your drawing function:

```javascript
function drawVisualization(data) {
  if (!data || data.length === 0) return;

  const svg = d3.select('#chart'); // Or pass a selector/element
  svg.selectAll("*").remove(); // Clear previous render

  // 1. Define dimensions
  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // 2. Create main group with margins
  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // 3. Create scales
  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]); // Note: inverted for SVG coordinates

  // 4. Create and append axes
  const xAxis = d3.axisBottom(xScale);
  const yAxis = d3.axisLeft(yScale);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(xAxis);

  g.append("g")
    .call(yAxis);

  // 5. Bind data and create visual elements
  g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 5)
    .attr("fill", "steelblue");
}

// Call when data changes
drawVisualization(myData);
```

### 4. Implement responsive sizing

Make visualisations responsive to container size:

```javascript
function setupResponsiveChart(containerId, data) {
  const container = document.getElementById(containerId);
  const svg = d3.select(`#${containerId}`).append('svg');

  function updateChart() {
    const { width, height } = container.getBoundingClientRect();
    svg.attr('width', width).attr('height', height);

    // Redraw visualisation with new dimensions
    drawChart(data, svg, width, height);
  }

  // Update on initial load
  updateChart();

  // Update on window resize
  window.addEventListener('resize', updateChart);

  // Return cleanup function
  return () => window.removeEventListener('resize', updateChart);
}

// Usage:
// const cleanup = setupResponsiveChart('chart-container', myData);
// cleanup(); // Call when component unmounts or element removed
```

Or use ResizeObserver for more direct container monitoring:

```javascript
function setupResponsiveChartWithObserver(svgElement, data) {
  const observer = new ResizeObserver(() => {
    const { width, height } = svgElement.getBoundingClientRect();
    d3.select(svgElement)
      .attr('width', width)
      .attr('height', height);

    // Redraw visualisation
    drawChart(data, d3.select(svgElement), width, height);
  });

  observer.observe(svgElement.parentElement);
  return () => observer.disconnect();
}
```

## Common visualisation patterns

### Bar chart

```javascript
function drawBarChart(data, svgElement) {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgElement);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleBand()
    .domain(data.map(d => d.category))
    .range([0, innerWidth])
    .padding(0.1);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([innerHeight, 0]);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.category))
    .attr("y", d => yScale(d.value))
    .attr("width", xScale.bandwidth())
    .attr("height", d => innerHeight - yScale(d.value))
    .attr("fill", "steelblue");
}

// Usage:
// drawBarChart(myData, document.getElementById('chart'));
```

### Line chart

```javascript
const line = d3.line()
  .x(d => xScale(d.date))
  .y(d => yScale(d.value))
  .curve(d3.curveMonotoneX); // Smooth curve

g.append("path")
  .datum(data)
  .attr("fill", "none")
  .attr("stroke", "steelblue")
  .attr("stroke-width", 2)
  .attr("d", line);
```

### Scatter plot

```javascript
g.selectAll("circle")
  .data(data)
  .join("circle")
  .attr("cx", d => xScale(d.x))
  .attr("cy", d => yScale(d.y))
  .attr("r", d => sizeScale(d.size)) // Optional: size encoding
  .attr("fill", d => colourScale(d.category)) // Optional: colour encoding
  .attr("opacity", 0.7);
```

### Chord diagram

A chord diagram shows relationships between entities in a circular layout, with ribbons representing flows between them:

```javascript
function drawChordDiagram(data) {
  // data format: array of objects with source, target, and value
  // Example: [{ source: 'A', target: 'B', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const innerRadius = Math.min(width, height) * 0.3;
  const outerRadius = innerRadius + 30;

  // Create matrix from data
  const nodes = Array.from(new Set(data.flatMap(d => [d.source, d.target])));
  const matrix = Array.from({ length: nodes.length }, () => Array(nodes.length).fill(0));

  data.forEach(d => {
    const i = nodes.indexOf(d.source);
    const j = nodes.indexOf(d.target);
    matrix[i][j] += d.value;
    matrix[j][i] += d.value;
  });

  // Create chord layout
  const chord = d3.chord()
    .padAngle(0.05)
    .sortSubgroups(d3.descending);

  const arc = d3.arc()
    .innerRadius(innerRadius)
    .outerRadius(outerRadius);

  const ribbon = d3.ribbon()
    .source(d => d.source)
    .target(d => d.target);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10)
    .domain(nodes);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  const chords = chord(matrix);

  // Draw ribbons
  g.append("g")
    .attr("fill-opacity", 0.67)
    .selectAll("path")
    .data(chords)
    .join("path")
    .attr("d", ribbon)
    .attr("fill", d => colourScale(nodes[d.source.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.source.index])).darker());

  // Draw groups (arcs)
  const group = g.append("g")
    .selectAll("g")
    .data(chords.groups)
    .join("g");

  group.append("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(nodes[d.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.index])).darker());

  // Add labels
  group.append("text")
    .each(d => { d.angle = (d.startAngle + d.endAngle) / 2; })
    .attr("dy", "0.31em")
    .attr("transform", d => `rotate(${(d.angle * 180 / Math.PI) - 90})translate(${outerRadius + 30})${d.angle > Math.PI ? "rotate(180)" : ""}`)
    .attr("text-anchor", d => d.angle > Math.PI ? "end" : null)
    .text((d, i) => nodes[i])
    .style("font-size", "12px");
}
```

### Heatmap

A heatmap uses colour to encode values in a two-dimensional grid, useful for showing patterns across categories:

```javascript
function drawHeatmap(data) {
  // data format: array of objects with row, column, and value
  // Example: [{ row: 'A', column: 'X', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 100, right: 30, bottom: 30, left: 100 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Get unique rows and columns
  const rows = Array.from(new Set(data.map(d => d.row)));
  const columns = Array.from(new Set(data.map(d => d.column)));

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // Create scales
  const xScale = d3.scaleBand()
    .domain(columns)
    .range([0, innerWidth])
    .padding(0.01);

  const yScale = d3.scaleBand()
    .domain(rows)
    .range([0, innerHeight])
    .padding(0.01);

  // Colour scale for values
  const colourScale = d3.scaleSequential(d3.interpolateYlOrRd)
    .domain([0, d3.max(data, d => d.value)]);

  // Draw rectangles
  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.column))
    .attr("y", d => yScale(d.row))
    .attr("width", xScale.bandwidth())
    .attr("height", yScale.bandwidth())
    .attr("fill", d => colourScale(d.value));

  // Add x-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(columns)
    .join("text")
    .attr("x", d => xScale(d) + xScale.bandwidth() / 2)
    .attr("y", -10)
    .attr("text-anchor", "middle")
    .text(d => d)
    .style("font-size", "12px");

  // Add y-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(rows)
    .join("text")
    .attr("x", -10)
    .attr("y", d => yScale(d) + yScale.bandwidth() / 2)
    .attr("dy", "0.35em")
    .attr("text-anchor", "end")
    .text(d => d)
    .style("font-size", "12px");

  // Add colour legend
  const legendWidth = 20;
  const legendHeight = 200;
  const legend = svg.append("g")
    .attr("transform", `translate(${width - 60},${margin.top})`);

  const legendScale = d3.scaleLinear()
    .domain(colourScale.domain())
    .range([legendHeight, 0]);

  const legendAxis = d3.axisRight(legendScale)
    .ticks(5);

  // Draw colour gradient in legend
  for (let i = 0; i < legendHeight; i++) {
    legend.append("rect")
      .attr("y", i)
      .attr("width", legendWidth)
      .attr("height", 1)
      .attr("fill", colourScale(legendScale.invert(i)));
  }

  legend.append("g")
    .attr("transform", `translate(${legendWidth},0)`)
    .call(legendAxis);
}
```

### Pie chart

```javascript
const pie = d3.pie()
  .value(d => d.value)
  .sort(null);

const arc = d3.arc()
  .innerRadius(0)
  .outerRadius(Math.min(width, height) / 2 - 20);

const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

const g = svg.append("g")
  .attr("transform", `translate(${width / 2},${height / 2})`);

g.selectAll("path")
  .data(pie(data))
  .join("path")
  .attr("d", arc)
  .attr("fill", (d, i) => colourScale(i))
  .attr("stroke", "white")
  .attr("stroke-width", 2);
```

### Force-directed network

```javascript
const simulation = d3.forceSimulation(nodes)
  .force("link", d3.forceLink(links).id(d => d.id).distance(100))
  .force("charge", d3.forceManyBody().strength(-300))
  .force("center", d3.forceCenter(width / 2, height / 2));

const link = g.selectAll("line")
  .data(links)
  .join("line")
  .attr("stroke", "#999")
  .attr("stroke-width", 1);

const node = g.selectAll("circle")
  .data(nodes)
  .join("circle")
  .attr("r", 8)
  .attr("fill", "steelblue")
  .call(d3.drag()
    .on("start", dragstarted)
    .on("drag", dragged)
    .on("end", dragended));

simulation.on("tick", () => {
  link
    .attr("x1", d => d.source.x)
    .attr("y1", d => d.source.y)
    .attr("x2", d => d.target.x)
    .attr("y2", d => d.target.y);

  node
    .attr("cx", d => d.x)
    .attr("cy", d => d.y);
});

function dragstarted(event) {
  if (!event.active) simulation.alphaTarget(0.3).restart();
  event.subject.fx = event.subject.x;
  event.subject.fy = event.subject.y;
}

function dragged(event) {
  event.subject.fx = event.x;
  event.subject.fy = event.y;
}

function dragended(event) {
  if (!event.active) simulation.alphaTarget(0);
  event.subject.fx = null;
  event.subject.fy = null;
}
```

## Adding interactivity

### Tooltips

```javascript
// Create tooltip div (outside SVG)
const tooltip = d3.select("body").append("div")
  .attr("class", "tooltip")
  .style("position", "absolute")
  .style("visibility", "hidden")
  .style("background-color", "white")
  .style("border", "1px solid #ddd")
  .style("padding", "10px")
  .style("border-radius", "4px")
  .style("pointer-events", "none");

// Add to elements
circles
  .on("mouseover", function(event, d) {
    d3.select(this).attr("opacity", 1);
    tooltip
      .style("visibility", "visible")
      .html(`<strong>${d.label}</strong><br/>Value: ${d.value}`);
  })
  .on("mousemove", function(event) {
    tooltip
      .style("top", (event.pageY - 10) + "px")
      .style("left", (event.pageX + 10) + "px");
  })
  .on("mouseout", function() {
    d3.select(this).attr("opacity", 0.7);
    tooltip.style("visibility", "hidden");
  });
```

### Zoom and pan

```javascript
const zoom = d3.zoom()
  .scaleExtent([0.5, 10])
  .on("zoom", (event) => {
    g.attr("transform", event.transform);
  });

svg.call(zoom);
```

### Click interactions

```javascript
circles
  .on("click", function(event, d) {
    // Handle click (dispatch event, update app state, etc.)
    console.log("Clicked:", d);

    // Visual feedback
    d3.selectAll("circle").attr("fill", "steelblue");
    d3.select(this).attr("fill", "orange");

    // Optional: dispatch custom event for your framework/app to listen to
    // window.dispatchEvent(new CustomEvent('chartClick', { detail: d }));
  });
```

## Transitions and animations

Add smooth transitions to visual changes:

```javascript
// Basic transition
circles
  .transition()
  .duration(750)
  .attr("r", 10);

// Chained transitions
circles
  .transition()
  .duration(500)
  .attr("fill", "orange")
  .transition()
  .duration(500)
  .attr("r", 15);

// Staggered transitions
circles
  .transition()
  .delay((d, i) => i * 50)
  .duration(500)
  .attr("cy", d => yScale(d.value));

// Custom easing
circles
  .transition()
  .duration(1000)
  .ease(d3.easeBounceOut)
  .attr("r", 10);
```

## Scales reference

### Quantitative scales

```javascript
// Linear scale
const xScale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

// Log scale (for exponential data)
const logScale = d3.scaleLog()
  .domain([1, 1000])
  .range([0, 500]);

// Power scale
const powScale = d3.scalePow()
  .exponent(2)
  .domain([0, 100])
  .range([0, 500]);

// Time scale
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)])
  .range([0, 500]);
```

### Ordinal scales

```javascript
// Band scale (for bar charts)
const bandScale = d3.scaleBand()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.1);

// Point scale (for line/scatter categories)
const pointScale = d3.scalePoint()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400]);

// Ordinal scale (for colours)
const colourScale = d3.scaleOrdinal(d3.schemeCategory10);
```

### Sequential scales

```javascript
// Sequential colour scale
const colourScale = d3.scaleSequential(d3.interpolateBlues)
  .domain([0, 100]);

// Diverging colour scale
const divScale = d3.scaleDiverging(d3.interpolateRdBu)
  .domain([-10, 0, 10]);
```

## Best practices

### Data preparation

Always validate and prepare data before visualisation:

```javascript
// Filter invalid values
const cleanData = data.filter(d => d.value != null && !isNaN(d.value));

// Sort data if order matters
const sortedData = [...data].sort((a, b) => b.value - a.value);

// Parse dates
const parsedData = data.map(d => ({
  ...d,
  date: d3.timeParse("%Y-%m-%d")(d.date)
}));
```

### Performance optimisation

For large datasets (>1000 elements):

- Use canvas instead of SVG for many elements
- Use a quadtree for collision detection
- Simplify paths with `d3.line().curve(d3.curveStep)`
- Implement virtual scrolling for large lists
- Use `requestAnimationFrame` for custom animations

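As a sketch of the first point, drawing circles on a canvas 2D context avoids creating one DOM node per datum. The scales are passed in, and any object exposing the context methods works, which also makes the function testable without a browser:

```javascript
// Draw many points on a canvas 2D context instead of creating SVG nodes.
// `ctx` is a CanvasRenderingContext2D (or compatible object);
// xScale/yScale map data values to pixel coordinates.
function drawPointsOnCanvas(ctx, data, xScale, yScale) {
  ctx.clearRect(0, 0, 800, 400);
  ctx.fillStyle = "steelblue";
  for (const d of data) {
    ctx.beginPath();
    ctx.arc(xScale(d.x), yScale(d.y), 2, 0, 2 * Math.PI);
    ctx.fill();
  }
}

// In the browser:
// drawPointsOnCanvas(canvas.getContext('2d'), data, xScale, yScale);
```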
### Accessibility
|
||||
|
||||
Make visualisations accessible:
|
||||
|
||||
```javascript
|
||||
// Add ARIA labels
|
||||
svg.attr("role", "img")
|
||||
.attr("aria-label", "Bar chart showing quarterly revenue");
|
||||
|
||||
// Add title and description
|
||||
svg.append("title").text("Quarterly Revenue 2024");
|
||||
svg.append("desc").text("Bar chart showing revenue growth across four quarters");
|
||||
|
||||
// Ensure sufficient colour contrast
|
||||
// Provide keyboard navigation for interactive elements
|
||||
// Include data table alternative
|
||||
```
|
||||
|
||||
### Styling
|
||||
|
||||
Use consistent, professional styling:
|
||||
|
||||
```javascript
|
||||
// Define colour palettes upfront
|
||||
const colours = {
|
||||
primary: '#4A90E2',
|
||||
secondary: '#7B68EE',
|
||||
background: '#F5F7FA',
|
||||
text: '#333333',
|
||||
gridLines: '#E0E0E0'
|
||||
};
|
||||
|
||||
// Apply consistent typography
|
||||
svg.selectAll("text")
|
||||
.style("font-family", "Inter, sans-serif")
|
||||
.style("font-size", "12px");
|
||||
|
||||
// Use subtle grid lines
|
||||
g.selectAll(".tick line")
|
||||
.attr("stroke", colours.gridLines)
|
||||
.attr("stroke-dasharray", "2,2");
|
||||
```
|
||||
|
||||
## Common issues and solutions

**Issue**: Axes not appearing

- Ensure scales have valid domains (check for NaN values)
- Verify the axis is appended to the correct group
- Check that transform translations are correct

**Issue**: Transitions not working

- Call `.transition()` before attribute changes
- Ensure elements have unique keys for proper data binding
- Check that useEffect dependencies include all changing data

**Issue**: Responsive sizing not working

- Use ResizeObserver or a window resize listener
- Update dimensions in state to trigger a re-render
- Ensure the SVG has width/height attributes or a viewBox

**Issue**: Performance problems

- Limit the number of DOM elements (consider canvas for >1000 items)
- Debounce resize handlers
- Use `.join()` instead of separate enter/update/exit selections
- Avoid unnecessary re-renders by checking dependencies

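Debouncing a resize handler, as suggested above, needs no library. A minimal sketch — the 150 ms delay and the `redrawChart` callback name are illustrative assumptions:

```javascript
// Returns a wrapped function that only fires after `delay` ms of
// silence, so a burst of resize events triggers a single re-render.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

// Usage in the browser:
// window.addEventListener("resize", debounce(() => redrawChart(), 150));
```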
## Resources

### references/

Contains detailed reference materials:

- `d3-patterns.md` - Comprehensive collection of visualisation patterns and code examples
- `scale-reference.md` - Complete guide to d3 scales with examples
- `colour-schemes.md` - D3 colour schemes and palette recommendations

### assets/

Contains boilerplate templates:

- `chart-template.jsx` - Starter template for a basic chart
- `interactive-template.jsx` - Template with tooltips, zoom, and interactions
- `sample-data.json` - Example datasets for testing

These templates work with vanilla JavaScript, React, Vue, Svelte, or any other JavaScript environment. Adapt them as needed for your specific framework.

To use these resources, read the relevant files when detailed guidance is needed for specific visualisation types or patterns.

106
skills/claude-d3js-skill/assets/chart-template.jsx
Normal file
@@ -0,0 +1,106 @@
import { useEffect, useRef } from 'react';
import * as d3 from 'd3';

function BasicChart({ data }) {
  const svgRef = useRef();

  useEffect(() => {
    if (!data || data.length === 0) return;

    // Select SVG element
    const svg = d3.select(svgRef.current);
    svg.selectAll("*").remove(); // Clear previous content

    // Define dimensions and margins
    const width = 800;
    const height = 400;
    const margin = { top: 20, right: 30, bottom: 40, left: 50 };
    const innerWidth = width - margin.left - margin.right;
    const innerHeight = height - margin.top - margin.bottom;

    // Create main group with margins
    const g = svg.append("g")
      .attr("transform", `translate(${margin.left},${margin.top})`);

    // Create scales
    const xScale = d3.scaleBand()
      .domain(data.map(d => d.label))
      .range([0, innerWidth])
      .padding(0.1);

    const yScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.value)])
      .range([innerHeight, 0])
      .nice();

    // Create and append axes
    const xAxis = d3.axisBottom(xScale);
    const yAxis = d3.axisLeft(yScale);

    g.append("g")
      .attr("class", "x-axis")
      .attr("transform", `translate(0,${innerHeight})`)
      .call(xAxis);

    g.append("g")
      .attr("class", "y-axis")
      .call(yAxis);

    // Bind data and create visual elements (bars in this example)
    g.selectAll("rect")
      .data(data)
      .join("rect")
      .attr("x", d => xScale(d.label))
      .attr("y", d => yScale(d.value))
      .attr("width", xScale.bandwidth())
      .attr("height", d => innerHeight - yScale(d.value))
      .attr("fill", "steelblue");

    // Optional: Add axis labels
    g.append("text")
      .attr("class", "axis-label")
      .attr("x", innerWidth / 2)
      .attr("y", innerHeight + margin.bottom - 5)
      .attr("text-anchor", "middle")
      .text("Category");

    g.append("text")
      .attr("class", "axis-label")
      .attr("transform", "rotate(-90)")
      .attr("x", -innerHeight / 2)
      .attr("y", -margin.left + 15)
      .attr("text-anchor", "middle")
      .text("Value");

  }, [data]);

  return (
    <div className="chart-container">
      <svg
        ref={svgRef}
        width="800"
        height="400"
        style={{ border: '1px solid #ddd' }}
      />
    </div>
  );
}

// Example usage
export default function App() {
  const sampleData = [
    { label: 'A', value: 30 },
    { label: 'B', value: 80 },
    { label: 'C', value: 45 },
    { label: 'D', value: 60 },
    { label: 'E', value: 20 },
    { label: 'F', value: 90 }
  ];

  return (
    <div className="p-8">
      <h1 className="text-2xl font-bold mb-4">Basic D3.js Chart</h1>
      <BasicChart data={sampleData} />
    </div>
  );
}
227
skills/claude-d3js-skill/assets/interactive-template.jsx
Normal file
@@ -0,0 +1,227 @@
import { useEffect, useRef, useState } from 'react';
import * as d3 from 'd3';

function InteractiveChart({ data }) {
  const svgRef = useRef();
  const tooltipRef = useRef();
  const [selectedPoint, setSelectedPoint] = useState(null);

  useEffect(() => {
    if (!data || data.length === 0) return;

    const svg = d3.select(svgRef.current);
    svg.selectAll("*").remove();

    // Dimensions
    const width = 800;
    const height = 500;
    const margin = { top: 20, right: 30, bottom: 40, left: 50 };
    const innerWidth = width - margin.left - margin.right;
    const innerHeight = height - margin.top - margin.bottom;

    // Create main group
    const g = svg.append("g")
      .attr("transform", `translate(${margin.left},${margin.top})`);

    // Scales
    const xScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.x)])
      .range([0, innerWidth])
      .nice();

    const yScale = d3.scaleLinear()
      .domain([0, d3.max(data, d => d.y)])
      .range([innerHeight, 0])
      .nice();

    const sizeScale = d3.scaleSqrt()
      .domain([0, d3.max(data, d => d.size || 10)])
      .range([3, 20]);

    const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

    // Add zoom behaviour
    const zoom = d3.zoom()
      .scaleExtent([0.5, 10])
      .on("zoom", (event) => {
        g.attr("transform", `translate(${margin.left + event.transform.x},${margin.top + event.transform.y}) scale(${event.transform.k})`);
      });

    svg.call(zoom);

    // Axes
    const xAxis = d3.axisBottom(xScale);
    const yAxis = d3.axisLeft(yScale);

    g.append("g")
      .attr("class", "x-axis")
      .attr("transform", `translate(0,${innerHeight})`)
      .call(xAxis);

    g.append("g")
      .attr("class", "y-axis")
      .call(yAxis);

    // Grid lines
    g.append("g")
      .attr("class", "grid")
      .attr("opacity", 0.1)
      .call(d3.axisLeft(yScale)
        .tickSize(-innerWidth)
        .tickFormat(""));

    g.append("g")
      .attr("class", "grid")
      .attr("opacity", 0.1)
      .attr("transform", `translate(0,${innerHeight})`)
      .call(d3.axisBottom(xScale)
        .tickSize(-innerHeight)
        .tickFormat(""));

    // Tooltip
    const tooltip = d3.select(tooltipRef.current);

    // Data points
    const circles = g.selectAll("circle")
      .data(data)
      .join("circle")
      .attr("cx", d => xScale(d.x))
      .attr("cy", d => yScale(d.y))
      .attr("r", d => sizeScale(d.size || 10))
      .attr("fill", d => colourScale(d.category || 'default'))
      .attr("stroke", "#fff")
      .attr("stroke-width", 2)
      .attr("opacity", 0.7)
      .style("cursor", "pointer");

    // Hover interactions
    circles
      .on("mouseover", function(event, d) {
        // Enlarge circle
        d3.select(this)
          .transition()
          .duration(200)
          .attr("opacity", 1)
          .attr("stroke-width", 3);

        // Show tooltip
        tooltip
          .style("display", "block")
          .style("left", (event.pageX + 10) + "px")
          .style("top", (event.pageY - 10) + "px")
          .html(`
            <strong>${d.label || 'Point'}</strong><br/>
            X: ${d.x.toFixed(2)}<br/>
            Y: ${d.y.toFixed(2)}<br/>
            ${d.category ? `Category: ${d.category}<br/>` : ''}
            ${d.size ? `Size: ${d.size.toFixed(2)}` : ''}
          `);
      })
      .on("mousemove", function(event) {
        tooltip
          .style("left", (event.pageX + 10) + "px")
          .style("top", (event.pageY - 10) + "px");
      })
      .on("mouseout", function() {
        // Restore circle
        d3.select(this)
          .transition()
          .duration(200)
          .attr("opacity", 0.7)
          .attr("stroke-width", 2);

        // Hide tooltip
        tooltip.style("display", "none");
      })
      .on("click", function(event, d) {
        // Highlight selected point
        circles.attr("stroke", "#fff").attr("stroke-width", 2);
        d3.select(this)
          .attr("stroke", "#000")
          .attr("stroke-width", 3);

        setSelectedPoint(d);
      });

    // Add transition on initial render
    circles
      .attr("r", 0)
      .transition()
      .duration(800)
      .delay((d, i) => i * 20)
      .attr("r", d => sizeScale(d.size || 10));

    // Axis labels
    g.append("text")
      .attr("class", "axis-label")
      .attr("x", innerWidth / 2)
      .attr("y", innerHeight + margin.bottom - 5)
      .attr("text-anchor", "middle")
      .style("font-size", "14px")
      .text("X Axis");

    g.append("text")
      .attr("class", "axis-label")
      .attr("transform", "rotate(-90)")
      .attr("x", -innerHeight / 2)
      .attr("y", -margin.left + 15)
      .attr("text-anchor", "middle")
      .style("font-size", "14px")
      .text("Y Axis");

  }, [data]);

  return (
    <div className="relative">
      <svg
        ref={svgRef}
        width="800"
        height="500"
        style={{ border: '1px solid #ddd', cursor: 'grab' }}
      />
      <div
        ref={tooltipRef}
        style={{
          position: 'absolute',
          display: 'none',
          padding: '10px',
          background: 'white',
          border: '1px solid #ddd',
          borderRadius: '4px',
          pointerEvents: 'none',
          boxShadow: '0 2px 4px rgba(0,0,0,0.1)',
          fontSize: '13px',
          zIndex: 1000
        }}
      />
      {selectedPoint && (
        <div className="mt-4 p-4 bg-blue-50 rounded border border-blue-200">
          <h3 className="font-bold mb-2">Selected Point</h3>
          <pre className="text-sm">{JSON.stringify(selectedPoint, null, 2)}</pre>
        </div>
      )}
    </div>
  );
}

// Example usage
export default function App() {
  const sampleData = Array.from({ length: 50 }, (_, i) => ({
    id: i,
    label: `Point ${i + 1}`,
    x: Math.random() * 100,
    y: Math.random() * 100,
    size: Math.random() * 30 + 5,
    category: ['A', 'B', 'C', 'D'][Math.floor(Math.random() * 4)]
  }));

  return (
    <div className="p-8">
      <h1 className="text-2xl font-bold mb-2">Interactive D3.js Chart</h1>
      <p className="text-gray-600 mb-4">
        Hover over points for details. Click to select. Scroll to zoom. Drag to pan.
      </p>
      <InteractiveChart data={sampleData} />
    </div>
  );
}
115
skills/claude-d3js-skill/assets/sample-data.json
Normal file
@@ -0,0 +1,115 @@
{
  "timeSeries": [
    { "date": "2024-01-01", "value": 120, "category": "A" },
    { "date": "2024-02-01", "value": 135, "category": "A" },
    { "date": "2024-03-01", "value": 128, "category": "A" },
    { "date": "2024-04-01", "value": 145, "category": "A" },
    { "date": "2024-05-01", "value": 152, "category": "A" },
    { "date": "2024-06-01", "value": 168, "category": "A" },
    { "date": "2024-07-01", "value": 175, "category": "A" },
    { "date": "2024-08-01", "value": 182, "category": "A" },
    { "date": "2024-09-01", "value": 190, "category": "A" },
    { "date": "2024-10-01", "value": 185, "category": "A" },
    { "date": "2024-11-01", "value": 195, "category": "A" },
    { "date": "2024-12-01", "value": 210, "category": "A" }
  ],

  "categorical": [
    { "label": "Product A", "value": 450, "category": "Electronics" },
    { "label": "Product B", "value": 320, "category": "Electronics" },
    { "label": "Product C", "value": 580, "category": "Clothing" },
    { "label": "Product D", "value": 290, "category": "Clothing" },
    { "label": "Product E", "value": 410, "category": "Food" },
    { "label": "Product F", "value": 370, "category": "Food" }
  ],

  "scatterData": [
    { "x": 12, "y": 45, "size": 25, "category": "Group A", "label": "Point 1" },
    { "x": 25, "y": 62, "size": 35, "category": "Group A", "label": "Point 2" },
    { "x": 38, "y": 55, "size": 20, "category": "Group B", "label": "Point 3" },
    { "x": 45, "y": 78, "size": 40, "category": "Group B", "label": "Point 4" },
    { "x": 52, "y": 68, "size": 30, "category": "Group C", "label": "Point 5" },
    { "x": 65, "y": 85, "size": 45, "category": "Group C", "label": "Point 6" },
    { "x": 72, "y": 72, "size": 28, "category": "Group A", "label": "Point 7" },
    { "x": 85, "y": 92, "size": 50, "category": "Group B", "label": "Point 8" }
  ],

  "hierarchical": {
    "name": "Root",
    "children": [
      {
        "name": "Category 1",
        "children": [
          { "name": "Item 1.1", "value": 100 },
          { "name": "Item 1.2", "value": 150 },
          { "name": "Item 1.3", "value": 80 }
        ]
      },
      {
        "name": "Category 2",
        "children": [
          { "name": "Item 2.1", "value": 200 },
          { "name": "Item 2.2", "value": 120 },
          { "name": "Item 2.3", "value": 90 }
        ]
      },
      {
        "name": "Category 3",
        "children": [
          { "name": "Item 3.1", "value": 180 },
          { "name": "Item 3.2", "value": 140 }
        ]
      }
    ]
  },

  "network": {
    "nodes": [
      { "id": "A", "group": 1 },
      { "id": "B", "group": 1 },
      { "id": "C", "group": 1 },
      { "id": "D", "group": 2 },
      { "id": "E", "group": 2 },
      { "id": "F", "group": 3 },
      { "id": "G", "group": 3 },
      { "id": "H", "group": 3 }
    ],
    "links": [
      { "source": "A", "target": "B", "value": 1 },
      { "source": "A", "target": "C", "value": 2 },
      { "source": "B", "target": "C", "value": 1 },
      { "source": "C", "target": "D", "value": 3 },
      { "source": "D", "target": "E", "value": 2 },
      { "source": "E", "target": "F", "value": 1 },
      { "source": "F", "target": "G", "value": 2 },
      { "source": "F", "target": "H", "value": 1 },
      { "source": "G", "target": "H", "value": 1 }
    ]
  },

  "stackedData": [
    { "group": "Q1", "seriesA": 30, "seriesB": 40, "seriesC": 25 },
    { "group": "Q2", "seriesA": 45, "seriesB": 35, "seriesC": 30 },
    { "group": "Q3", "seriesA": 40, "seriesB": 50, "seriesC": 35 },
    { "group": "Q4", "seriesA": 55, "seriesB": 45, "seriesC": 40 }
  ],

  "geographicPoints": [
    { "city": "London", "latitude": 51.5074, "longitude": -0.1278, "value": 8900000 },
    { "city": "Paris", "latitude": 48.8566, "longitude": 2.3522, "value": 2140000 },
    { "city": "Berlin", "latitude": 52.5200, "longitude": 13.4050, "value": 3645000 },
    { "city": "Madrid", "latitude": 40.4168, "longitude": -3.7038, "value": 3223000 },
    { "city": "Rome", "latitude": 41.9028, "longitude": 12.4964, "value": 2873000 }
  ],

  "divergingData": [
    { "category": "Item A", "value": -15 },
    { "category": "Item B", "value": 8 },
    { "category": "Item C", "value": -22 },
    { "category": "Item D", "value": 18 },
    { "category": "Item E", "value": -5 },
    { "category": "Item F", "value": 25 },
    { "category": "Item G", "value": -12 },
    { "category": "Item H", "value": 14 }
  ]
}
564
skills/claude-d3js-skill/references/colour-schemes.md
Normal file
@@ -0,0 +1,564 @@
# D3.js Colour Schemes and Palette Recommendations

Comprehensive guide to colour selection in data visualisation with d3.js.

## Built-in categorical colour schemes

### Category10 (default)

```javascript
d3.schemeCategory10
// ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd',
//  '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
```

**Characteristics:**

- 10 distinct colours
- Reasonable, though not fully colour-blind safe (it contains a red/green pair)
- Default choice for most categorical data
- Balanced saturation and brightness

**Use cases:** General purpose categorical encoding, legend items, multiple data series

### Tableau10

```javascript
d3.schemeTableau10
```

**Characteristics:**

- 10 colours optimised for data visualisation
- Professional appearance
- Excellent distinguishability

**Use cases:** Business dashboards, professional reports, presentations

### Accent

```javascript
d3.schemeAccent
// 8 colours with high saturation
```

**Characteristics:**

- Bright, vibrant colours
- High contrast
- Modern aesthetic

**Use cases:** Highlighting important categories, modern web applications

### Dark2

```javascript
d3.schemeDark2
// 8 darker, muted colours
```

**Characteristics:**

- Subdued palette
- Professional appearance
- Good for dark backgrounds

**Use cases:** Dark mode visualisations, professional contexts

### Paired

```javascript
d3.schemePaired
// 12 colours in pairs of similar hues
```

**Characteristics:**

- Pairs of light and dark variants
- Useful for nested categories
- 12 distinct colours

**Use cases:** Grouped bar charts, hierarchical categories, before/after comparisons

### Pastel1 & Pastel2

```javascript
d3.schemePastel1 // 9 colours
d3.schemePastel2 // 8 colours
```

**Characteristics:**

- Soft, low-saturation colours
- Gentle appearance
- Good for large areas

**Use cases:** Background colours, subtle categorisation, calming visualisations

### Set1, Set2, Set3

```javascript
d3.schemeSet1 // 9 colours - vivid
d3.schemeSet2 // 8 colours - muted
d3.schemeSet3 // 12 colours - pastel
```

**Characteristics:**

- Set1: High saturation, maximum distinction
- Set2: Professional, balanced
- Set3: Subtle, many categories

**Use cases:** Varies with visual hierarchy needs

## Sequential colour schemes

Sequential schemes map continuous data from low to high values using a single hue or gradient.

### Single-hue sequential

**Blues:**

```javascript
d3.interpolateBlues
d3.schemeBlues[9] // 9-step discrete version
```

**Other single-hue options:**

- `d3.interpolateGreens` / `d3.schemeGreens`
- `d3.interpolateOranges` / `d3.schemeOranges`
- `d3.interpolatePurples` / `d3.schemePurples`
- `d3.interpolateReds` / `d3.schemeReds`
- `d3.interpolateGreys` / `d3.schemeGreys`

**Use cases:**

- Simple heat maps
- Choropleth maps
- Density plots
- Single-metric visualisations

### Multi-hue sequential

**Viridis (recommended):**

```javascript
d3.interpolateViridis
```

**Characteristics:**

- Perceptually uniform
- Colour-blind friendly
- Print-safe
- No visual dead zones
- Monotonically increasing perceived lightness

**Other perceptually-uniform options:**

- `d3.interpolatePlasma` - Purple to yellow
- `d3.interpolateInferno` - Black to white through red/orange
- `d3.interpolateMagma` - Black to white through purple
- `d3.interpolateCividis` - Colour-blind optimised

**Other smooth interpolators:**

```javascript
d3.interpolateTurbo // Rainbow-like but smoother than a classic rainbow
d3.interpolateCool // Cyan to magenta
d3.interpolateWarm // Orange to yellow
```

**Use cases:**

- Scientific visualisation
- Medical imaging
- Any high-precision data visualisation
- Accessible visualisations

### Traditional sequential

**Yellow-Orange-Red:**

```javascript
d3.interpolateYlOrRd
d3.schemeYlOrRd[9]
```

**Yellow-Green-Blue:**

```javascript
d3.interpolateYlGnBu
d3.schemeYlGnBu[9]
```

**Other multi-hue:**

- `d3.interpolateBuGn` - Blue to green
- `d3.interpolateBuPu` - Blue to purple
- `d3.interpolateGnBu` - Green to blue
- `d3.interpolateOrRd` - Orange to red
- `d3.interpolatePuBu` - Purple to blue
- `d3.interpolatePuBuGn` - Purple to blue-green
- `d3.interpolatePuRd` - Purple to red
- `d3.interpolateRdPu` - Red to purple
- `d3.interpolateYlGn` - Yellow to green
- `d3.interpolateYlOrBr` - Yellow to orange-brown

**Use cases:** Traditional data visualisation, familiar colour associations (temperature, vegetation, water)

## Diverging colour schemes

Diverging schemes highlight deviations from a central value using two distinct hues.

### Red-Blue (temperature)

```javascript
d3.interpolateRdBu
d3.schemeRdBu[11]
```

**Characteristics:**

- Intuitive temperature metaphor
- Strong contrast
- Clear positive/negative distinction

**Use cases:** Temperature, profit/loss, above/below average, correlation

### Red-Yellow-Blue

```javascript
d3.interpolateRdYlBu
d3.schemeRdYlBu[11]
```

**Characteristics:**

- Three-colour gradient
- Softer transition through yellow
- More visual steps

**Use cases:** When extreme values need emphasis and the middle needs visibility

### Other diverging schemes

**Traffic light:**

```javascript
d3.interpolateRdYlGn // Red (bad) to green (good)
```

**Spectral (rainbow):**

```javascript
d3.interpolateSpectral // Full spectrum
```

**Other options:**

- `d3.interpolateBrBG` - Brown to blue-green
- `d3.interpolatePiYG` - Pink to yellow-green
- `d3.interpolatePRGn` - Purple to green
- `d3.interpolatePuOr` - Purple to orange
- `d3.interpolateRdGy` - Red to grey

**Use cases:** Choose based on semantic meaning and accessibility needs

## Colour-blind friendly palettes

### General guidelines

1. **Avoid red-green combinations** (the most common colour blindness)
2. **Use blue-orange diverging** instead of red-green
3. **Add texture or patterns** as redundant encoding
4. **Test with simulation tools**

### Recommended colour-blind safe schemes

**Categorical:**

```javascript
// Okabe-Ito palette (colour-blind safe)
const okabePalette = [
  '#E69F00', // Orange
  '#56B4E9', // Sky blue
  '#009E73', // Bluish green
  '#F0E442', // Yellow
  '#0072B2', // Blue
  '#D55E00', // Vermillion
  '#CC79A7', // Reddish purple
  '#000000' // Black
];

const colourScale = d3.scaleOrdinal()
  .domain(categories)
  .range(okabePalette);
```

**Sequential:**

```javascript
// Use Viridis, Cividis, or Blues
d3.interpolateViridis // Best overall
d3.interpolateCividis // Optimised for CVD
d3.interpolateBlues // Simple, safe
```

**Diverging:**

```javascript
// Use blue-orange instead of red-green
d3.interpolateBrBG
d3.interpolatePuOr
```

## Custom colour palettes

### Creating custom sequential

```javascript
const customSequential = d3.scaleLinear()
  .domain([0, 100])
  .range(['#e8f4f8', '#006d9c']) // Light to dark blue
  .interpolate(d3.interpolateLab); // Perceptually uniform
```

### Creating custom diverging

```javascript
const customDiverging = d3.scaleLinear()
  .domain([0, 50, 100])
  .range(['#ca0020', '#f7f7f7', '#0571b0']) // Red, grey, blue
  .interpolate(d3.interpolateLab);
```

### Creating custom categorical

```javascript
// Brand colours
const brandPalette = [
  '#FF6B6B', // Primary red
  '#4ECDC4', // Secondary teal
  '#45B7D1', // Tertiary blue
  '#FFA07A', // Accent coral
  '#98D8C8' // Accent mint
];

const colourScale = d3.scaleOrdinal()
  .domain(categories)
  .range(brandPalette);
```

## Semantic colour associations

### Universal colour meanings

**Red:**

- Danger, error, negative
- High temperature
- Debt, loss

**Green:**

- Success, positive
- Growth, vegetation
- Profit, gain

**Blue:**

- Trust, calm
- Water, cold
- Information, neutral

**Yellow/Orange:**

- Warning, caution
- Energy, warmth
- Attention

**Grey:**

- Neutral, inactive
- Missing data
- Background

### Context-specific palettes

**Financial:**

```javascript
const financialColours = {
  profit: '#27ae60',
  loss: '#e74c3c',
  neutral: '#95a5a6',
  highlight: '#3498db'
};
```

**Temperature:**

```javascript
const temperatureScale = d3.scaleSequential(d3.interpolateRdYlBu)
  .domain([40, -10]); // Hot to cold (reversed)
```

**Traffic/Status:**

```javascript
const statusColours = {
  success: '#27ae60',
  warning: '#f39c12',
  error: '#e74c3c',
  info: '#3498db',
  neutral: '#95a5a6'
};
```

## Accessibility best practices

### Contrast ratios

Ensure sufficient contrast between colours and backgrounds:

```javascript
// Good contrast example
const highContrast = {
  background: '#ffffff',
  text: '#2c3e50',
  primary: '#3498db',
  secondary: '#e74c3c'
};
```

**WCAG guidelines:**

- Normal text: 4.5:1 minimum
- Large text: 3:1 minimum
- UI components: 3:1 minimum

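The ratios above can also be checked programmatically. A sketch of the WCAG 2.x formula (relative luminance of each sRGB channel, then `(L1 + 0.05) / (L2 + 0.05)`); the hex-parsing helper is our own, not part of d3:

```javascript
// Relative luminance of a '#rrggbb' colour, per the WCAG 2.x definition.
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map(i => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio between two colours, from 1 (identical) to 21 (black on white).
function contrastRatio(a, b) {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// contrastRatio('#000000', '#ffffff') is 21; WCAG asks for >= 4.5 on normal text.
```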
### Redundant encoding
|
||||
|
||||
Never rely solely on colour to convey information:
|
||||
|
||||
```javascript
|
||||
// Add patterns or shapes
|
||||
const symbols = ['circle', 'square', 'triangle', 'diamond'];
|
||||
|
||||
// Add text labels
|
||||
// Use line styles (solid, dashed, dotted)
|
||||
// Use size encoding
|
||||
```
|
||||
|
||||
### Testing
|
||||
|
||||
Test visualisations for colour blindness:
|
||||
- Chrome DevTools (Rendering > Emulate vision deficiencies)
|
||||
- Colour Oracle (free desktop application)
|
||||
- Coblis (online simulator)
|
||||
|
||||
## Professional colour recommendations
|
||||
|
||||
### Data journalism
|
||||
|
||||
```javascript
|
||||
// Guardian style
|
||||
const guardianPalette = [
|
||||
'#005689', // Guardian blue
|
||||
'#c70000', // Guardian red
|
||||
'#7d0068', // Guardian pink
|
||||
'#951c75', // Guardian purple
|
||||
];
|
||||
|
||||
// FT style
|
||||
const ftPalette = [
|
||||
'#0f5499', // FT blue
|
||||
'#990f3d', // FT red
|
||||
'#593380', // FT purple
|
||||
'#262a33', // FT black
|
||||
];
|
||||
```
|
||||
|
||||
### Academic/Scientific
|
||||
|
||||
```javascript
|
||||
// Nature journal style
|
||||
const naturePalette = [
|
||||
'#0071b2', // Blue
|
||||
'#d55e00', // Vermillion
|
||||
'#009e73', // Green
|
||||
'#f0e442', // Yellow
|
||||
];
|
||||
|
||||
// Use Viridis for continuous data
|
||||
const scientificScale = d3.scaleSequential(d3.interpolateViridis);
|
||||
```
|
||||
|
||||
### Corporate/Business
|
||||
|
||||
```javascript
|
||||
// Professional, conservative
|
||||
const corporatePalette = [
|
||||
'#003f5c', // Dark blue
|
||||
'#58508d', // Purple
|
||||
'#bc5090', // Magenta
|
||||
'#ff6361', // Coral
|
||||
'#ffa600' // Orange
|
||||
];
|
||||
```
|
||||
|
||||
## Dynamic colour selection
|
||||
|
||||
### Based on data range
|
||||
|
||||
```javascript
|
||||
function selectColourScheme(data) {
|
||||
const extent = d3.extent(data);
|
||||
const hasNegative = extent[0] < 0;
|
||||
const hasPositive = extent[1] > 0;
|
||||
|
||||
if (hasNegative && hasPositive) {
|
||||
// Diverging: data crosses zero
|
||||
return d3.scaleSequentialSymlog(d3.interpolateRdBu)
|
||||
.domain([extent[0], 0, extent[1]]);
|
||||
} else {
|
||||
// Sequential: all positive or all negative
|
||||
return d3.scaleSequential(d3.interpolateViridis)
|
||||
.domain(extent);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Based on category count
|
||||
|
||||
```javascript
|
||||
function selectCategoricalScheme(categories) {
|
||||
const n = categories.length;
|
||||
|
||||
if (n <= 10) {
|
||||
return d3.scaleOrdinal(d3.schemeTableau10);
|
||||
} else if (n <= 12) {
|
||||
return d3.scaleOrdinal(d3.schemePaired);
|
||||
} else {
|
||||
// For many categories, use sequential with quantize
|
||||
return d3.scaleQuantize()
|
||||
.domain([0, n - 1])
|
||||
.range(d3.quantize(d3.interpolateRainbow, n));
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Common colour mistakes to avoid
|
||||
|
||||
1. **Rainbow gradients for sequential data**
|
||||
- Problem: Not perceptually uniform, hard to read
|
||||
- Solution: Use Viridis, Blues, or other uniform schemes
|
||||
|
||||
2. **Red-green for diverging (colour blindness)**
|
||||
- Problem: 8% of males can't distinguish
|
||||
- Solution: Use blue-orange or purple-green
|
||||
|
||||
3. **Too many categorical colours**
|
||||
- Problem: Hard to distinguish and remember
|
||||
- Solution: Limit to 5-8 categories, use grouping
|
||||
|
||||
4. **Insufficient contrast**
|
||||
- Problem: Poor readability
|
||||
- Solution: Test contrast ratios, use darker colours on light backgrounds
|
||||
|
||||
5. **Culturally inconsistent colours**
|
||||
- Problem: Confusing semantic meaning
|
||||
- Solution: Research colour associations for target audience
|
||||
|
||||
6. **Inverted temperature scales**
|
||||
- Problem: Counterintuitive (red = cold)
|
||||
- Solution: Red/orange = hot, blue = cold
|
||||
|
||||
## Quick reference guide

**Need to show...**

- **Categories (≤10):** `d3.schemeCategory10` or `d3.schemeTableau10`
- **Categories (>10):** `d3.schemePaired` or group categories
- **Sequential (general):** `d3.interpolateViridis`
- **Sequential (scientific):** `d3.interpolateViridis` or `d3.interpolatePlasma`
- **Sequential (temperature):** `d3.interpolateRdYlBu` (inverted)
- **Diverging (zero):** `d3.interpolateRdBu` or `d3.interpolateBrBG`
- **Diverging (good/bad):** `d3.interpolateRdYlGn` (inverted)
- **Colour-blind safe (categorical):** Okabe-Ito palette (shown above)
- **Colour-blind safe (sequential):** `d3.interpolateCividis` or `d3.interpolateBlues`
- **Colour-blind safe (diverging):** `d3.interpolatePuOr` or `d3.interpolateBrBG`

**Always remember:**

1. Test for colour-blindness
2. Ensure sufficient contrast
3. Use semantic colours appropriately
4. Add redundant encoding (patterns, labels)
5. Keep it simple (fewer colours = clearer visualisation)
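A colour-blind-safe categorical assignment can be sketched without d3 at all. The hex values below are the standard Okabe-Ito codes, and `assignColours` is a hypothetical helper that mirrors what `d3.scaleOrdinal` does with such a palette:

```javascript
// Okabe-Ito palette: eight colours distinguishable under the common
// forms of colour-vision deficiency.
const OKABE_ITO = [
  "#E69F00", "#56B4E9", "#009E73", "#F0E442",
  "#0072B2", "#D55E00", "#CC79A7", "#000000",
];

// Hypothetical helper mirroring d3.scaleOrdinal: each distinct
// category gets the next palette entry, cycling if we run out.
function assignColours(categories, palette = OKABE_ITO) {
  const mapping = new Map();
  for (const c of categories) {
    if (!mapping.has(c)) {
      mapping.set(c, palette[mapping.size % palette.length]);
    }
  }
  return mapping;
}

const colours = assignColours(["apples", "pears", "apples", "plums"]);
colours.get("pears"); // second distinct category → second palette entry
```

Pair this with redundant encoding (point 4 above), since even a safe palette should not be the only channel carrying the category.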
869
skills/claude-d3js-skill/references/d3-patterns.md
Normal file
@@ -0,0 +1,869 @@

# D3.js Visualisation Patterns

This reference provides detailed code patterns for common d3.js visualisation types.

## Hierarchical visualisations

### Tree diagram

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const tree = d3.tree().size([height - 100, width - 200]);

  const root = d3.hierarchy(data);
  tree(root);

  const g = svg.append("g")
    .attr("transform", "translate(100,50)");

  // Links
  g.selectAll("path")
    .data(root.links())
    .join("path")
    .attr("d", d3.linkHorizontal()
      .x(d => d.y)
      .y(d => d.x))
    .attr("fill", "none")
    .attr("stroke", "#555")
    .attr("stroke-width", 2);

  // Nodes
  const node = g.selectAll("g")
    .data(root.descendants())
    .join("g")
    .attr("transform", d => `translate(${d.y},${d.x})`);

  node.append("circle")
    .attr("r", 6)
    .attr("fill", d => d.children ? "#555" : "#999");

  node.append("text")
    .attr("dy", "0.31em")
    .attr("x", d => d.children ? -8 : 8)
    .attr("text-anchor", d => d.children ? "end" : "start")
    .text(d => d.data.name)
    .style("font-size", "12px");

}, [data]);
```

### Treemap

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const root = d3.hierarchy(data)
    .sum(d => d.value)
    .sort((a, b) => b.value - a.value);

  d3.treemap()
    .size([width, height])
    .padding(2)
    .round(true)(root);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const cell = svg.selectAll("g")
    .data(root.leaves())
    .join("g")
    .attr("transform", d => `translate(${d.x0},${d.y0})`);

  cell.append("rect")
    .attr("width", d => d.x1 - d.x0)
    .attr("height", d => d.y1 - d.y0)
    .attr("fill", d => colourScale(d.parent.data.name))
    .attr("stroke", "white")
    .attr("stroke-width", 2);

  cell.append("text")
    .attr("x", 4)
    .attr("y", 16)
    .text(d => d.data.name)
    .style("font-size", "12px")
    .style("fill", "white");

}, [data]);
```

### Sunburst diagram

```javascript
useEffect(() => {
  if (!data) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const radius = Math.min(width, height) / 2;

  const root = d3.hierarchy(data)
    .sum(d => d.value)
    .sort((a, b) => b.value - a.value);

  const partition = d3.partition()
    .size([2 * Math.PI, radius]);

  partition(root);

  const arc = d3.arc()
    .startAngle(d => d.x0)
    .endAngle(d => d.x1)
    .innerRadius(d => d.y0)
    .outerRadius(d => d.y1);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  g.selectAll("path")
    .data(root.descendants())
    .join("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(d.depth))
    .attr("stroke", "white")
    .attr("stroke-width", 1);

}, [data]);
```

### Chord diagram

```javascript
function drawChordDiagram(data) {
  // data format: array of objects with source, target, and value
  // Example: [{ source: 'A', target: 'B', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 600;
  const height = 600;
  const innerRadius = Math.min(width, height) * 0.3;
  const outerRadius = innerRadius + 30;

  // Create matrix from data
  const nodes = Array.from(new Set(data.flatMap(d => [d.source, d.target])));
  const matrix = Array.from({ length: nodes.length }, () => Array(nodes.length).fill(0));

  data.forEach(d => {
    const i = nodes.indexOf(d.source);
    const j = nodes.indexOf(d.target);
    matrix[i][j] += d.value;
    matrix[j][i] += d.value;
  });

  // Create chord layout
  const chord = d3.chord()
    .padAngle(0.05)
    .sortSubgroups(d3.descending);

  const arc = d3.arc()
    .innerRadius(innerRadius)
    .outerRadius(outerRadius);

  const ribbon = d3.ribbon()
    .source(d => d.source)
    .target(d => d.target);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10)
    .domain(nodes);

  const g = svg.append("g")
    .attr("transform", `translate(${width / 2},${height / 2})`);

  const chords = chord(matrix);

  // Draw ribbons
  g.append("g")
    .attr("fill-opacity", 0.67)
    .selectAll("path")
    .data(chords)
    .join("path")
    .attr("d", ribbon)
    .attr("fill", d => colourScale(nodes[d.source.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.source.index])).darker());

  // Draw groups (arcs)
  const group = g.append("g")
    .selectAll("g")
    .data(chords.groups)
    .join("g");

  group.append("path")
    .attr("d", arc)
    .attr("fill", d => colourScale(nodes[d.index]))
    .attr("stroke", d => d3.rgb(colourScale(nodes[d.index])).darker());

  // Add labels
  group.append("text")
    .each(d => { d.angle = (d.startAngle + d.endAngle) / 2; })
    .attr("dy", "0.31em")
    .attr("transform", d => `rotate(${(d.angle * 180 / Math.PI) - 90})translate(${outerRadius + 30})${d.angle > Math.PI ? "rotate(180)" : ""}`)
    .attr("text-anchor", d => d.angle > Math.PI ? "end" : null)
    .text((d, i) => nodes[i])
    .style("font-size", "12px");
}

// Data format example:
// const data = [
//   { source: 'Category A', target: 'Category B', value: 100 },
//   { source: 'Category A', target: 'Category C', value: 50 },
//   { source: 'Category B', target: 'Category C', value: 75 }
// ];
// drawChordDiagram(data);
```

## Advanced chart types

### Heatmap

```javascript
function drawHeatmap(data) {
  // data format: array of objects with row, column, and value
  // Example: [{ row: 'A', column: 'X', value: 10 }, ...]

  if (!data || data.length === 0) return;

  const svg = d3.select('#chart');
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 100, right: 30, bottom: 30, left: 100 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Get unique rows and columns
  const rows = Array.from(new Set(data.map(d => d.row)));
  const columns = Array.from(new Set(data.map(d => d.column)));

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  // Create scales
  const xScale = d3.scaleBand()
    .domain(columns)
    .range([0, innerWidth])
    .padding(0.01);

  const yScale = d3.scaleBand()
    .domain(rows)
    .range([0, innerHeight])
    .padding(0.01);

  // Colour scale for values (sequential from light to dark red)
  const colourScale = d3.scaleSequential(d3.interpolateYlOrRd)
    .domain([0, d3.max(data, d => d.value)]);

  // Draw rectangles
  g.selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", d => xScale(d.column))
    .attr("y", d => yScale(d.row))
    .attr("width", xScale.bandwidth())
    .attr("height", yScale.bandwidth())
    .attr("fill", d => colourScale(d.value));

  // Add x-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(columns)
    .join("text")
    .attr("x", d => xScale(d) + xScale.bandwidth() / 2)
    .attr("y", -10)
    .attr("text-anchor", "middle")
    .text(d => d)
    .style("font-size", "12px");

  // Add y-axis labels
  svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`)
    .selectAll("text")
    .data(rows)
    .join("text")
    .attr("x", -10)
    .attr("y", d => yScale(d) + yScale.bandwidth() / 2)
    .attr("dy", "0.35em")
    .attr("text-anchor", "end")
    .text(d => d)
    .style("font-size", "12px");

  // Add colour legend
  const legendWidth = 20;
  const legendHeight = 200;
  const legend = svg.append("g")
    .attr("transform", `translate(${width - 60},${margin.top})`);

  const legendScale = d3.scaleLinear()
    .domain(colourScale.domain())
    .range([legendHeight, 0]);

  const legendAxis = d3.axisRight(legendScale).ticks(5);

  // Draw colour gradient in legend as a stack of 1px rects
  for (let i = 0; i < legendHeight; i++) {
    legend.append("rect")
      .attr("y", i)
      .attr("width", legendWidth)
      .attr("height", 1)
      .attr("fill", colourScale(legendScale.invert(i)));
  }

  legend.append("g")
    .attr("transform", `translate(${legendWidth},0)`)
    .call(legendAxis);
}

// Data format example:
// const data = [
//   { row: 'Monday', column: 'Morning', value: 42 },
//   { row: 'Monday', column: 'Afternoon', value: 78 },
//   { row: 'Tuesday', column: 'Morning', value: 65 },
//   { row: 'Tuesday', column: 'Afternoon', value: 55 }
// ];
// drawHeatmap(data);
```

### Area chart with gradient

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  // Define gradient
  const defs = svg.append("defs");
  const gradient = defs.append("linearGradient")
    .attr("id", "areaGradient")
    .attr("x1", "0%")
    .attr("x2", "0%")
    .attr("y1", "0%")
    .attr("y2", "100%");

  gradient.append("stop")
    .attr("offset", "0%")
    .attr("stop-color", "steelblue")
    .attr("stop-opacity", 0.8);

  gradient.append("stop")
    .attr("offset", "100%")
    .attr("stop-color", "steelblue")
    .attr("stop-opacity", 0.1);

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleTime()
    .domain(d3.extent(data, d => d.date))
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.value)])
    .range([innerHeight, 0]);

  const area = d3.area()
    .x(d => xScale(d.date))
    .y0(innerHeight)
    .y1(d => yScale(d.value))
    .curve(d3.curveMonotoneX);

  g.append("path")
    .datum(data)
    .attr("fill", "url(#areaGradient)")
    .attr("d", area);

  const line = d3.line()
    .x(d => xScale(d.date))
    .y(d => yScale(d.value))
    .curve(d3.curveMonotoneX);

  g.append("path")
    .datum(data)
    .attr("fill", "none")
    .attr("stroke", "steelblue")
    .attr("stroke-width", 2)
    .attr("d", line);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

### Stacked bar chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const categories = Object.keys(data[0]).filter(k => k !== 'group');
  const stackedData = d3.stack().keys(categories)(data);

  const xScale = d3.scaleBand()
    .domain(data.map(d => d.group))
    .range([0, innerWidth])
    .padding(0.1);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(stackedData[stackedData.length - 1], d => d[1])])
    .range([innerHeight, 0]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  g.selectAll("g")
    .data(stackedData)
    .join("g")
    .attr("fill", (d, i) => colourScale(i))
    .selectAll("rect")
    .data(d => d)
    .join("rect")
    .attr("x", d => xScale(d.data.group))
    .attr("y", d => yScale(d[1]))
    .attr("height", d => yScale(d[0]) - yScale(d[1]))
    .attr("width", xScale.bandwidth());

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

### Grouped bar chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const categories = Object.keys(data[0]).filter(k => k !== 'group');

  const x0Scale = d3.scaleBand()
    .domain(data.map(d => d.group))
    .range([0, innerWidth])
    .padding(0.1);

  const x1Scale = d3.scaleBand()
    .domain(categories)
    .range([0, x0Scale.bandwidth()])
    .padding(0.05);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => Math.max(...categories.map(c => d[c])))])
    .range([innerHeight, 0]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  const group = g.selectAll("g")
    .data(data)
    .join("g")
    .attr("transform", d => `translate(${x0Scale(d.group)},0)`);

  group.selectAll("rect")
    .data(d => categories.map(key => ({ key, value: d[key] })))
    .join("rect")
    .attr("x", d => x1Scale(d.key))
    .attr("y", d => yScale(d.value))
    .attr("width", x1Scale.bandwidth())
    .attr("height", d => innerHeight - yScale(d.value))
    .attr("fill", d => colourScale(d.key));

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(x0Scale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

### Bubble chart

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]);

  const sizeScale = d3.scaleSqrt()
    .domain([0, d3.max(data, d => d.size)])
    .range([0, 50]);

  const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

  g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", d => sizeScale(d.size))
    .attr("fill", d => colourScale(d.category))
    .attr("opacity", 0.6)
    .attr("stroke", "white")
    .attr("stroke-width", 2);

  g.append("g")
    .attr("transform", `translate(0,${innerHeight})`)
    .call(d3.axisBottom(xScale));

  g.append("g")
    .call(d3.axisLeft(yScale));

}, [data]);
```

## Geographic visualisations

### Basic map with points

```javascript
useEffect(() => {
  if (!geoData || !pointData) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const projection = d3.geoMercator()
    .fitSize([width, height], geoData);

  const pathGenerator = d3.geoPath().projection(projection);

  // Draw map
  svg.selectAll("path")
    .data(geoData.features)
    .join("path")
    .attr("d", pathGenerator)
    .attr("fill", "#e0e0e0")
    .attr("stroke", "#999")
    .attr("stroke-width", 0.5);

  // Draw points
  svg.selectAll("circle")
    .data(pointData)
    .join("circle")
    .attr("cx", d => projection([d.longitude, d.latitude])[0])
    .attr("cy", d => projection([d.longitude, d.latitude])[1])
    .attr("r", 5)
    .attr("fill", "steelblue")
    .attr("opacity", 0.7);

}, [geoData, pointData]);
```

### Choropleth map

```javascript
useEffect(() => {
  if (!geoData || !valueData) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 600;

  const projection = d3.geoMercator()
    .fitSize([width, height], geoData);

  const pathGenerator = d3.geoPath().projection(projection);

  // Create value lookup
  const valueLookup = new Map(valueData.map(d => [d.id, d.value]));

  // Colour scale
  const colourScale = d3.scaleSequential(d3.interpolateBlues)
    .domain([0, d3.max(valueData, d => d.value)]);

  svg.selectAll("path")
    .data(geoData.features)
    .join("path")
    .attr("d", pathGenerator)
    .attr("fill", d => {
      const value = valueLookup.get(d.id);
      // Check against null/undefined so a legitimate value of 0 still gets a colour
      return value != null ? colourScale(value) : "#e0e0e0";
    })
    .attr("stroke", "#999")
    .attr("stroke-width", 0.5);

}, [geoData, valueData]);
```

## Advanced interactions

### Brush and zoom

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);
  svg.selectAll("*").remove();

  const width = 800;
  const height = 400;
  const margin = { top: 20, right: 30, bottom: 40, left: 50 };
  const innerWidth = width - margin.left - margin.right;
  const innerHeight = height - margin.top - margin.bottom;

  const xScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.x)])
    .range([0, innerWidth]);

  const yScale = d3.scaleLinear()
    .domain([0, d3.max(data, d => d.y)])
    .range([innerHeight, 0]);

  const g = svg.append("g")
    .attr("transform", `translate(${margin.left},${margin.top})`);

  const circles = g.selectAll("circle")
    .data(data)
    .join("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 5)
    .attr("fill", "steelblue");

  // Add brush
  const brush = d3.brush()
    .extent([[0, 0], [innerWidth, innerHeight]])
    .on("start brush", (event) => {
      if (!event.selection) return;

      const [[x0, y0], [x1, y1]] = event.selection;

      circles.attr("fill", d => {
        const cx = xScale(d.x);
        const cy = yScale(d.y);
        return (cx >= x0 && cx <= x1 && cy >= y0 && cy <= y1)
          ? "orange"
          : "steelblue";
      });
    });

  g.append("g")
    .attr("class", "brush")
    .call(brush);

}, [data]);
```

### Linked brushing between charts

```javascript
function LinkedCharts({ data }) {
  const [selectedPoints, setSelectedPoints] = useState(new Set());
  const svg1Ref = useRef();
  const svg2Ref = useRef();

  useEffect(() => {
    // Chart 1: Scatter plot
    const svg1 = d3.select(svg1Ref.current);
    svg1.selectAll("*").remove();

    // ... create first chart (the elided setup defines the xScale and yScale used below) ...

    const circles1 = svg1.selectAll("circle")
      .data(data)
      .join("circle")
      .attr("fill", d => selectedPoints.has(d.id) ? "orange" : "steelblue");

    // Chart 2: Bar chart
    const svg2 = d3.select(svg2Ref.current);
    svg2.selectAll("*").remove();

    // ... create second chart ...

    const bars = svg2.selectAll("rect")
      .data(data)
      .join("rect")
      .attr("fill", d => selectedPoints.has(d.id) ? "orange" : "steelblue");

    // Add brush to first chart
    const brush = d3.brush()
      .on("start brush end", (event) => {
        if (!event.selection) {
          setSelectedPoints(new Set());
          return;
        }

        const [[x0, y0], [x1, y1]] = event.selection;
        const selected = new Set();

        data.forEach(d => {
          const x = xScale(d.x);
          const y = yScale(d.y);
          if (x >= x0 && x <= x1 && y >= y0 && y <= y1) {
            selected.add(d.id);
          }
        });

        setSelectedPoints(selected);
      });

    svg1.append("g").call(brush);

  }, [data, selectedPoints]);

  return (
    <div>
      <svg ref={svg1Ref} width="400" height="300" />
      <svg ref={svg2Ref} width="400" height="300" />
    </div>
  );
}
```

## Animation patterns

### Enter, update, exit with transitions

```javascript
useEffect(() => {
  if (!data || data.length === 0) return;

  const svg = d3.select(svgRef.current);

  const circles = svg.selectAll("circle")
    .data(data, d => d.id);  // Key function for object constancy

  // EXIT: Remove old elements
  circles.exit()
    .transition()
    .duration(500)
    .attr("r", 0)
    .remove();

  // UPDATE: Modify existing elements
  circles
    .transition()
    .duration(500)
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("fill", "steelblue");

  // ENTER: Add new elements
  circles.enter()
    .append("circle")
    .attr("cx", d => xScale(d.x))
    .attr("cy", d => yScale(d.y))
    .attr("r", 0)
    .attr("fill", "steelblue")
    .transition()
    .duration(500)
    .attr("r", 5);

}, [data]);
```

### Path morphing

```javascript
useEffect(() => {
  if (!data1 || !data2) return;

  const svg = d3.select(svgRef.current);

  const line = d3.line()
    .x(d => xScale(d.x))
    .y(d => yScale(d.y))
    .curve(d3.curveMonotoneX);

  const path = svg.select("path");

  // Morph from data1 to data2.
  // Note: d3.interpolatePath comes from the separate
  // d3-interpolate-path plugin; it is not part of d3 core.
  path
    .datum(data1)
    .attr("d", line)
    .transition()
    .duration(1000)
    .attrTween("d", function() {
      const previous = d3.select(this).attr("d");
      const current = line(data2);
      return d3.interpolatePath(previous, current);
    });

}, [data1, data2]);
```
509
skills/claude-d3js-skill/references/scale-reference.md
Normal file
@@ -0,0 +1,509 @@

# D3.js Scale Reference

Comprehensive guide to all d3 scale types with examples and use cases.

## Continuous scales

### Linear scale

Maps a continuous input domain to a continuous output range with linear interpolation.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

scale(50);   // Returns 250
scale(0);    // Returns 0
scale(100);  // Returns 500

// Invert the scale (get input from output)
scale.invert(250);  // Returns 50
```

**Use cases:**

- Most common scale for quantitative data
- Axes, bar lengths, position encoding
- Temperature, prices, counts, measurements

**Methods:**

- `.domain([min, max])` - Set input domain
- `.range([min, max])` - Set output range
- `.invert(value)` - Get domain value from range value
- `.clamp(true)` - Restrict output to range bounds
- `.nice()` - Extend domain to nice round values

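How `.clamp()` and `.invert()` interact can be sketched without d3; `makeLinearScale` below is a hypothetical stand-in for `d3.scaleLinear`, not the real implementation:

```javascript
// Hypothetical miniature of a linear scale, illustrating clamp and invert.
function makeLinearScale([d0, d1], [r0, r1], clamp = false) {
  const scale = (x) => {
    let t = (x - d0) / (d1 - d0);          // normalise into [0, 1]
    if (clamp) t = Math.max(0, Math.min(1, t));
    return r0 + t * (r1 - r0);
  };
  // Note: d3's invert ignores clamping too — it maps range back to domain.
  scale.invert = (y) => d0 + ((y - r0) / (r1 - r0)) * (d1 - d0);
  return scale;
}

const plain = makeLinearScale([0, 100], [0, 500]);
const clamped = makeLinearScale([0, 100], [0, 500], true);

plain(150);        // 750 — extrapolates past the range
clamped(150);      // 500 — clamp pins output to the range bounds
plain.invert(250); // 50
```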
### Power scale

Maps continuous input to continuous output with an exponential transformation.

```javascript
const sqrtScale = d3.scalePow()
  .exponent(0.5)  // Square root
  .domain([0, 100])
  .range([0, 500]);

const squareScale = d3.scalePow()
  .exponent(2)  // Square
  .domain([0, 100])
  .range([0, 500]);

// Shorthand for square root
const sqrtScale2 = d3.scaleSqrt()
  .domain([0, 100])
  .range([0, 500]);
```

**Use cases:**

- Perceptual scaling (human perception is non-linear)
- Area encoding (use square root to map values to circle radii)
- Emphasising differences in small or large values

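The area-encoding point can be checked with plain arithmetic; `radiusFor` is a hypothetical helper doing what `d3.scaleSqrt` does, so a doubled value doubles the circle's *area* rather than its radius:

```javascript
// Hypothetical helper mirroring d3.scaleSqrt: map a value to a radius
// so that circle AREA (π·r²) is proportional to the value.
function radiusFor(value, maxValue, maxRadius) {
  return maxRadius * Math.sqrt(value / maxValue);
}

const r50 = radiusFor(50, 100, 40);
const r100 = radiusFor(100, 100, 40);

// Areas, not radii, end up in a 1:2 ratio:
const ratio = (r100 * r100) / (r50 * r50); // ≈ 2 (floating point)
```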
### Logarithmic scale

Maps continuous input to continuous output with a logarithmic transformation.

```javascript
const logScale = d3.scaleLog()
  .domain([1, 1000])  // Must be positive
  .range([0, 500]);

logScale(1);     // Returns 0
logScale(10);    // Returns ~167
logScale(100);   // Returns ~333
logScale(1000);  // Returns 500
```

**Use cases:**

- Data spanning multiple orders of magnitude
- Population, GDP, wealth distributions
- Logarithmic axes
- Exponential growth visualisations

**Important:** Domain values must be strictly positive (>0).

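When the data includes zero or negative values, d3 offers `d3.scaleSymlog` as a log-like alternative. The core transform it applies can be sketched in plain JavaScript (`C` here stands for the scale's `constant()` parameter, default 1; treat the exact formula as an assumption about the implementation):

```javascript
// Symlog transform: behaves like a log scale far from zero, but is
// linear (and defined) through zero: sign(x) · ln(1 + |x| / C).
function symlog(x, C = 1) {
  return Math.sign(x) * Math.log1p(Math.abs(x) / C);
}

symlog(0);    // 0 — unlike log, zero is fine
symlog(-10);  // negative mirror of symlog(10)
```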
### Time scale

A specialised linear scale for temporal data.

```javascript
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)])
  .range([0, 800]);

timeScale(new Date(2022, 0, 1));  // Returns ~400 (the midpoint of the domain)

// Invert to get a date back
timeScale.invert(400);  // Returns a Date near the start of 2022
```

**Use cases:**

- Time series visualisations
- Timeline axes
- Temporal animations
- Date-based interactions

**Methods:**

- `.nice()` - Extend domain to nice time intervals
- `.ticks(count)` - Generate nicely-spaced tick values
- All linear scale methods apply

### Quantize scale

Maps continuous input to discrete output buckets of equal width.

```javascript
const quantizeScale = d3.scaleQuantize()
  .domain([0, 100])
  .range(['low', 'medium', 'high']);

quantizeScale(25);  // Returns 'low'
quantizeScale(50);  // Returns 'medium'
quantizeScale(75);  // Returns 'high'

// Get the threshold values
quantizeScale.thresholds();  // Returns [33.33…, 66.67…]
```

**Use cases:**

- Binning continuous data
- Heat map colours
- Risk categories (low/medium/high)
- Age groups, income brackets

### Quantile scale

Maps continuous input to discrete output based on quantiles of the sample data.

```javascript
const quantileScale = d3.scaleQuantile()
  .domain([3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24])  // Sample data
  .range(['low', 'medium', 'high']);

quantileScale(8);           // Returns a range value based on its quantile position
quantileScale.quantiles();  // Returns the quantile thresholds
```

**Use cases:**

- Equal-size groups regardless of distribution
- Percentile-based categorisation
- Handling skewed distributions

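The difference from quantize can be sketched in plain JavaScript. This is a deliberately simplified version of the idea — it cuts the *sorted sample* into equal-count groups, whereas quantize cuts the *domain interval* into equal-width slices (the real `d3.scaleQuantile` interpolates its thresholds with the R-7 quantile rule, which this sketch does not):

```javascript
// Simplified quantile bucketing: equal-COUNT groups over the sorted
// sample, as opposed to quantize's equal-WIDTH slices of the domain.
function quantileBucket(sample, labels, value) {
  const sorted = [...sample].sort((a, b) => a - b);
  const k = labels.length;
  for (let i = 1; i < k; i++) {
    // threshold: first value of the i-th equal-count group
    const threshold = sorted[Math.floor((i * sorted.length) / k)];
    if (value < threshold) return labels[i - 1];
  }
  return labels[k - 1];
}

const sample = [3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24];
quantileBucket(sample, ["low", "medium", "high"], 6);   // "low"
quantileBucket(sample, ["low", "medium", "high"], 20);  // "high"
```

Because the thresholds come from the data itself, a heavily skewed sample still yields three groups of roughly equal size.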
### Threshold scale

Maps continuous input to discrete output with custom thresholds.

```javascript
const thresholdScale = d3.scaleThreshold()
  .domain([0, 10, 20])
  .range(['freezing', 'cold', 'warm', 'hot']);

thresholdScale(-5);  // Returns 'freezing'
thresholdScale(5);   // Returns 'cold'
thresholdScale(15);  // Returns 'warm'
thresholdScale(25);  // Returns 'hot'
```

**Use cases:**

- Custom breakpoints
- Grade boundaries (A, B, C, D, F)
- Temperature categories
- Air quality indices

## Sequential scales

### Sequential colour scale

Maps continuous input to a continuous colour gradient.

```javascript
const colourScale = d3.scaleSequential(d3.interpolateBlues)
  .domain([0, 100]);

colourScale(0);    // Returns the lightest blue
colourScale(50);   // Returns a mid blue
colourScale(100);  // Returns the darkest blue
```

**Available interpolators:**

**Single hue:**

- `d3.interpolateBlues`, `d3.interpolateGreens`, `d3.interpolateReds`
- `d3.interpolateOranges`, `d3.interpolatePurples`, `d3.interpolateGreys`

**Multi-hue:**

- `d3.interpolateViridis`, `d3.interpolateInferno`, `d3.interpolateMagma`
- `d3.interpolatePlasma`, `d3.interpolateWarm`, `d3.interpolateCool`
- `d3.interpolateCubehelixDefault`, `d3.interpolateTurbo`

**Use cases:**

- Heat maps, choropleth maps
- Continuous data visualisation
- Temperature, elevation, density

### Diverging colour scale
|
||||
|
||||
Maps continuous input to diverging colour gradient with a midpoint.
|
||||
|
||||
```javascript
|
||||
const divergingScale = d3.scaleDiverging(d3.interpolateRdBu)
|
||||
.domain([-10, 0, 10]);
|
||||
|
||||
divergingScale(-10); // Returns red
|
||||
divergingScale(0); // Returns white/neutral
|
||||
divergingScale(10); // Returns blue
|
||||
```
|
||||
|
||||
**Available interpolators:**
|
||||
- `d3.interpolateRdBu` - Red to blue
|
||||
- `d3.interpolateRdYlBu` - Red, yellow, blue
|
||||
- `d3.interpolateRdYlGn` - Red, yellow, green
|
||||
- `d3.interpolatePiYG` - Pink, yellow, green
|
||||
- `d3.interpolateBrBG` - Brown, blue-green
|
||||
- `d3.interpolatePRGn` - Purple, green
|
||||
- `d3.interpolatePuOr` - Purple, orange
|
||||
- `d3.interpolateRdGy` - Red, grey
|
||||
- `d3.interpolateSpectral` - Rainbow spectrum
|
||||
|
||||
**Use cases:**
|
||||
- Data with meaningful midpoint (zero, average, neutral)
|
||||
- Positive/negative values
|
||||
- Above/below comparisons
|
||||
- Correlation matrices
|
||||
|
||||
### Sequential quantile scale
|
||||
|
||||
Combines sequential colour with quantile mapping.
|
||||
|
||||
```javascript
|
||||
const sequentialQuantileScale = d3.scaleSequentialQuantile(d3.interpolateBlues)
|
||||
.domain([3, 6, 7, 8, 8, 10, 13, 15, 16, 20, 24]);
|
||||
|
||||
// Maps based on quantile position
|
||||
```
|
||||
|
||||
**Use cases:**
|
||||
- Perceptually uniform binning
|
||||
- Handling outliers
|
||||
- Skewed distributions
|
||||
|
||||
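Under the hood a sequential scale does two small things: normalise the input to t in [0, 1], then hand t to the interpolator. A runnable sketch with a stand-in greyscale interpolator (the interpolator is an assumption for illustration, not one of d3's):

```javascript
// Sketch of what a sequential scale does: normalise, then interpolate.
// greyInterpolator is a stand-in; d3 supplies interpolateBlues etc.
const greyInterpolator = (t) => {
  const v = Math.round(255 * (1 - t));
  return `rgb(${v}, ${v}, ${v})`;
};

function sequential(interpolator, [d0, d1]) {
  return (x) => interpolator((x - d0) / (d1 - d0));
}

const shade = sequential(greyInterpolator, [0, 100]);
console.log(shade(0));   // rgb(255, 255, 255)  (light end)
console.log(shade(100)); // rgb(0, 0, 0)        (dark end)
```

Swapping the interpolator is all it takes to move between colour ramps; the scale maths never changes.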
## Ordinal scales

### Band scale

Maps discrete input to continuous bands (rectangles) with optional padding.

```javascript
const bandScale = d3.scaleBand()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.1);

bandScale('A'); // Returns start position (e.g., 0)
bandScale('B'); // Returns start position (e.g., 110)
bandScale.bandwidth(); // Returns width of each band (e.g., 95)
bandScale.step(); // Returns total step including padding
bandScale.paddingInner(); // Returns inner padding (between bands)
bandScale.paddingOuter(); // Returns outer padding (at edges)
```

**Use cases:**

- Bar charts (most common use case)
- Grouped elements
- Categorical axes
- Heat map cells

**Padding options:**

- `.padding(value)` - Sets both inner and outer padding (0-1)
- `.paddingInner(value)` - Padding between bands (0-1)
- `.paddingOuter(value)` - Padding at edges (0-1)
- `.align(value)` - Alignment of bands (0-1, default 0.5)

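The band layout is worth seeing as arithmetic. Following the formula published in the d3-scale source (step, bandwidth, align), a worked example for four bands over [0, 400] with `padding(0.1)`; this is a sketch of the maths, not the d3 API:

```javascript
// Band-scale layout maths, following the formula in the d3-scale source.
function bandLayout(n, [r0, r1], paddingInner, paddingOuter, align = 0.5) {
  // One "step" per band, minus one inner gap, plus two outer gaps.
  const step = (r1 - r0) / Math.max(1, n - paddingInner + paddingOuter * 2);
  // Leftover space is distributed according to align (0.5 = centred).
  const start = r0 + (r1 - r0 - step * (n - paddingInner)) * align;
  const bandwidth = step * (1 - paddingInner);
  return {
    step,
    bandwidth,
    positions: Array.from({ length: n }, (_, i) => start + i * step),
  };
}

// padding(0.1) sets inner = outer = 0.1:
const layout = bandLayout(4, [0, 400], 0.1, 0.1);
console.log(layout.step.toFixed(2));      // ≈ 97.56
console.log(layout.bandwidth.toFixed(2)); // ≈ 87.80
console.log(layout.positions.map((p) => p.toFixed(2)));
```

So with outer padding the first band does not start at 0: each band begins at `start + i * step` and is `bandwidth` wide.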
### Point scale

Maps discrete input to continuous points (no width).

```javascript
const pointScale = d3.scalePoint()
  .domain(['A', 'B', 'C', 'D'])
  .range([0, 400])
  .padding(0.5);

pointScale('A'); // Returns position (e.g., 50)
pointScale('B'); // Returns position (e.g., 150)
pointScale('C'); // Returns position (e.g., 250)
pointScale('D'); // Returns position (e.g., 350)
pointScale.step(); // Returns distance between points
```

**Use cases:**

- Line chart categorical x-axis
- Scatter plot with categorical axis
- Node positions in network graphs
- Any point positioning for categories

### Ordinal colour scale

Maps discrete input to discrete output (colours, shapes, etc.).

```javascript
const colourScale = d3.scaleOrdinal(d3.schemeCategory10);

colourScale('apples'); // Returns first colour
colourScale('oranges'); // Returns second colour
colourScale('apples'); // Returns same first colour (consistent)

// Custom range
const customScale = d3.scaleOrdinal()
  .domain(['cat1', 'cat2', 'cat3'])
  .range(['#FF6B6B', '#4ECDC4', '#45B7D1']);
```

**Built-in colour schemes:**

**Categorical:**

- `d3.schemeCategory10` - 10 colours
- `d3.schemeAccent` - 8 colours
- `d3.schemeDark2` - 8 colours
- `d3.schemePaired` - 12 colours
- `d3.schemePastel1` - 9 colours
- `d3.schemePastel2` - 8 colours
- `d3.schemeSet1` - 9 colours
- `d3.schemeSet2` - 8 colours
- `d3.schemeSet3` - 12 colours
- `d3.schemeTableau10` - 10 colours

**Use cases:**

- Category colours
- Legend items
- Multi-series charts
- Network node types

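The "consistent" behaviour comes from an implicit domain: unseen inputs are appended on first use, so the same category always gets the same output. A small illustrative sketch (not the d3 implementation):

```javascript
// Sketch of ordinal-scale behaviour: the first time a key appears it is
// assigned the next range slot, and it keeps that slot on every later call.
function ordinal(range) {
  const index = new Map();
  return (key) => {
    if (!index.has(key)) index.set(key, index.size % range.length);
    return range[index.get(key)];
  };
}

const colour = ordinal(['#1f77b4', '#ff7f0e', '#2ca02c']);
console.log(colour('apples'));  // first colour
console.log(colour('oranges')); // second colour
console.log(colour('apples'));  // first colour again (consistent)
```

The `% range.length` wrap also shows why more categories than colours leads to reuse, which is when switching to a larger scheme like `schemePaired` helps.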
## Scale utilities

### Nice domain

Extend domain to nice round values.

```javascript
const scale = d3.scaleLinear()
  .domain([0.201, 0.996])
  .nice();

scale.domain(); // Returns [0.2, 1.0]

// With count (approximate tick count)
const scale2 = d3.scaleLinear()
  .domain([0.201, 0.996])
  .nice(5);
```

### Clamping

Restrict output to range bounds.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500])
  .clamp(true);

scale(-10); // Returns 0 (clamped)
scale(150); // Returns 500 (clamped)
```

### Copy scales

Create independent copies.

```javascript
const scale1 = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

const scale2 = scale1.copy();
// scale2 is independent of scale1
```

### Tick generation

Generate nice tick values for axes.

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range([0, 500]);

scale.ticks(10); // Generate ~10 ticks
scale.tickFormat(10); // Get format function for ticks
scale.tickFormat(10, ".2f"); // Custom format (2 decimal places)

// Time scale ticks
const timeScale = d3.scaleTime()
  .domain([new Date(2020, 0, 1), new Date(2024, 0, 1)]);

timeScale.ticks(d3.timeYear); // Yearly ticks
timeScale.ticks(d3.timeMonth, 3); // Every 3 months
timeScale.tickFormat(5, "%Y-%m"); // Format as year-month
```

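The "nice" tick values come from choosing a step that is a power of ten times 1, 2, or 5, then emitting its multiples across the domain. A simplified, runnable sketch of that idea (d3's actual algorithm handles more edge cases):

```javascript
// Simplified sketch of nice-tick selection: pick a 1/2/5 × 10^k step no
// smaller than the raw spacing, then emit its multiples over the domain.
function ticks(start, stop, count) {
  const raw = (stop - start) / count;
  const pow10 = Math.pow(10, Math.floor(Math.log10(raw)));
  const step = [1, 2, 5, 10].map((m) => m * pow10).find((c) => c >= raw);
  const result = [];
  for (let v = Math.ceil(start / step) * step; v <= stop + 1e-9; v += step) {
    result.push(v);
  }
  return result;
}

console.log(ticks(0, 100, 10)); // [0, 10, 20, …, 100]
```

This is why asking for 10 ticks over [0, 100] yields round multiples of 10 rather than eleven arbitrary values.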
## Colour spaces and interpolation

### RGB interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"]);
// Default: RGB interpolation
```

### HSL interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateHsl);
// Smoother colour transitions
```

### Lab interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateLab);
// Perceptually uniform
```

### HCL interpolation

```javascript
const scale = d3.scaleLinear()
  .domain([0, 100])
  .range(["blue", "red"])
  .interpolate(d3.interpolateHcl);
// Perceptually uniform with hue
```

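What the default RGB interpolation boils down to is a per-channel linear blend, which is exactly why the perceptual spaces exist: the naive blend can pass through dull midpoints. A runnable sketch (illustrative, not d3's interpolator):

```javascript
// Naive per-channel RGB blend: what the default interpolation amounts to.
function interpolateRgb([r1, g1, b1], [r2, g2, b2]) {
  return (t) => [
    Math.round(r1 + (r2 - r1) * t),
    Math.round(g1 + (g2 - g1) * t),
    Math.round(b1 + (b2 - b1) * t),
  ];
}

const blueToRed = interpolateRgb([0, 0, 255], [255, 0, 0]);
console.log(blueToRed(0.5)); // [128, 0, 128], a dull purple midpoint
```

Lab and HCL interpolation blend in spaces modelled on human vision, so equal steps in t look like equal steps in colour.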
## Common patterns

### Diverging scale with custom midpoint

```javascript
const scale = d3.scaleLinear()
  .domain([min, midpoint, max])
  .range(["red", "white", "blue"])
  .interpolate(d3.interpolateHcl);
```

### Multi-stop gradient scale

```javascript
const scale = d3.scaleLinear()
  .domain([0, 25, 50, 75, 100])
  .range(["#d53e4f", "#fc8d59", "#fee08b", "#e6f598", "#66c2a5"]);
```

### Radius scale for circles (perceptual)

```javascript
const radiusScale = d3.scaleSqrt()
  .domain([0, d3.max(data, d => d.value)])
  .range([0, 50]);

// Use with circles
circle.attr("r", d => radiusScale(d.value));
```

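The reason for `scaleSqrt` here: the eye reads a circle's area, and area grows with the square of the radius, so a square-root mapping keeps area proportional to value. A quick numeric check (not d3 code):

```javascript
// Check that sqrt-mapped radii make circle AREA proportional to value:
// doubling the value should double the area, not the radius.
const radius = (value, maxValue, maxR) => Math.sqrt(value / maxValue) * maxR;
const area = (r) => Math.PI * r * r;

const a10 = area(radius(10, 100, 50));
const a20 = area(radius(20, 100, 50));
console.log((a20 / a10).toFixed(2)); // 2.00, double the value, double the area
```

A linear radius scale would instead quadruple the area for a doubled value, visually exaggerating large values.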
### Adaptive scale based on data range

```javascript
function createAdaptiveScale(data) {
  const extent = d3.extent(data);

  // Use log scale if data is strictly positive and spans
  // more than 2 orders of magnitude (log domains must not cross zero)
  if (extent[0] > 0 && extent[1] / extent[0] > 100) {
    return d3.scaleLog()
      .domain(extent)
      .range([0, width]);
  }

  // Otherwise use linear
  return d3.scaleLinear()
    .domain(extent)
    .range([0, width]);
}
```

### Colour scale with explicit categories

```javascript
const colourScale = d3.scaleOrdinal()
  .domain(['Low Risk', 'Medium Risk', 'High Risk'])
  .range(['#2ecc71', '#f39c12', '#e74c3c'])
  .unknown('#95a5a6'); // Fallback for unknown values
```

62 skills/concise-planning/SKILL.md Normal file
@@ -0,0 +1,62 @@
---
name: concise-planning
description: Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist.
---

# Concise Planning

## Goal

Turn a user request into a **single, actionable plan** with atomic steps.

## Workflow

### 1. Scan Context

- Read `README.md`, docs, and relevant code files.
- Identify constraints (language, frameworks, tests).

### 2. Minimal Interaction

- Ask **at most 1–2 questions** and only if truly blocking.
- Make reasonable assumptions for non-blocking unknowns.

### 3. Generate Plan

Use the following structure:

- **Approach**: 1-3 sentences on what and why.
- **Scope**: Bullet points for "In" and "Out".
- **Action Items**: A list of 6-10 atomic, ordered tasks (Verb-first).
- **Validation**: At least one item for testing.

## Plan Template

```markdown
# Plan

<High-level approach>

## Scope

- In:
- Out:

## Action Items

- [ ] <Step 1: Discovery>
- [ ] <Step 2: Implementation>
- [ ] <Step 3: Implementation>
- [ ] <Step 4: Validation/Testing>
- [ ] <Step 5: Rollout/Commit>

## Open Questions

- <Question 1 (max 3)>
```

## Checklist Guidelines

- **Atomic**: Each step should be a single logical unit of work.
- **Verb-first**: "Add...", "Refactor...", "Verify...".
- **Concrete**: Name specific files or modules when possible.

1 skills/docx Symbolic link
@@ -0,0 +1 @@
docx-official

846 skills/github-workflow-automation/SKILL.md Normal file
@@ -0,0 +1,846 @@
---
name: github-workflow-automation
description: "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues."
---

# 🔧 GitHub Workflow Automation

> Patterns for automating GitHub workflows with AI assistance, inspired by [Gemini CLI](https://github.com/google-gemini/gemini-cli) and modern DevOps practices.

## When to Use This Skill

Use this skill when:

- Automating PR reviews with AI
- Setting up issue triage automation
- Creating GitHub Actions workflows
- Integrating AI into CI/CD pipelines
- Automating Git operations (rebases, cherry-picks)

---

## 1. Automated PR Review

### 1.1 PR Review Action

```yaml
# .github/workflows/ai-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed
        run: |
          files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)
          echo "files<<EOF" >> $GITHUB_OUTPUT
          echo "$files" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Get diff
        id: diff
        run: |
          diff=$(git diff origin/${{ github.base_ref }}...HEAD)
          echo "diff<<EOF" >> $GITHUB_OUTPUT
          echo "$diff" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Review
        uses: actions/github-script@v7
        with:
          script: |
            const { Anthropic } = require('@anthropic-ai/sdk');
            const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

            // Read the step outputs via env vars rather than ${{ }} interpolation,
            // so backticks or ${} sequences in the diff cannot break this script.
            const response = await client.messages.create({
              model: "claude-3-sonnet-20240229",
              max_tokens: 4096,
              messages: [{
                role: "user",
                content: `Review this PR diff and provide feedback:

            Changed files: ${process.env.CHANGED_FILES}

            Diff:
            ${process.env.PR_DIFF}

            Provide:
            1. Summary of changes
            2. Potential issues or bugs
            3. Suggestions for improvement
            4. Security concerns if any

            Format as GitHub markdown.`
              }]
            });

            await github.rest.pulls.createReview({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number,
              body: response.content[0].text,
              event: 'COMMENT'
            });
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          CHANGED_FILES: ${{ steps.changed.outputs.files }}
          PR_DIFF: ${{ steps.diff.outputs.diff }}
```

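The `files<<EOF … EOF` lines above use GitHub Actions' heredoc syntax for multi-line step outputs. A small sketch of that framing and how the runner reads it back (the delimiter name is arbitrary; this is an illustration, not the runner's code):

```javascript
// Build and parse the "name<<DELIM … DELIM" framing used for multi-line
// values written to $GITHUB_OUTPUT (illustrative round-trip).
function frameOutput(name, value, delimiter = 'EOF') {
  return `${name}<<${delimiter}\n${value}\n${delimiter}\n`;
}

function parseOutput(framed) {
  const [header, ...rest] = framed.trimEnd().split('\n');
  const [name, delimiter] = header.split('<<');
  const value = rest.slice(0, rest.indexOf(delimiter)).join('\n');
  return { name, value };
}

const framed = frameOutput('files', 'src/a.ts\nsrc/b.ts');
console.log(parseOutput(framed)); // { name: 'files', value: 'src/a.ts\nsrc/b.ts' }
```

A plain `echo "files=$files" >> $GITHUB_OUTPUT` would truncate at the first newline, which is why the heredoc form is needed for diffs and file lists.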
### 1.2 Review Comment Patterns

````markdown
# AI Review Structure

## 📋 Summary

Brief description of what this PR does.

## ✅ What looks good

- Well-structured code
- Good test coverage
- Clear naming conventions

## ⚠️ Potential Issues

1. **Line 42**: Possible null pointer exception

   ```javascript
   // Current
   user.profile.name;
   // Suggested
   user?.profile?.name ?? "Unknown";
   ```

2. **Line 78**: Consider error handling

   ```javascript
   // Add try-catch or .catch()
   ```

## 💡 Suggestions

- Consider extracting the validation logic into a separate function
- Add JSDoc comments for public methods

## 🔒 Security Notes

- No sensitive data exposure detected
- API key handling looks correct
````

### 1.3 Focused Reviews

```yaml
# Review only specific file types
- name: Filter code files
  run: |
    files=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | \
      grep -E '\.(ts|tsx|js|jsx|py|go)$' || true)
    echo "code_files=$files" >> $GITHUB_OUTPUT

# Review with context
- name: AI Review with context
  run: |
    # Include relevant context files
    context=""
    for file in ${{ steps.changed.outputs.files }}; do
      if [[ -f "$file" ]]; then
        context+="=== $file ===\n$(cat $file)\n\n"
      fi
    done

    # Send to AI with full file context
```

---

## 2. Issue Triage Automation

### 2.1 Auto-label Issues

```yaml
# .github/workflows/issue-triage.yml
name: Issue Triage

on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      issues: write

    steps:
      - name: Analyze issue
        uses: actions/github-script@v7
        with:
          script: |
            const issue = context.payload.issue;

            // Call AI to analyze
            const analysis = await analyzeIssue(issue.title, issue.body);

            // Apply labels
            const labels = [];

            if (analysis.type === 'bug') {
              labels.push('bug');
              if (analysis.severity === 'high') labels.push('priority: high');
            } else if (analysis.type === 'feature') {
              labels.push('enhancement');
            } else if (analysis.type === 'question') {
              labels.push('question');
            }

            if (analysis.area) {
              labels.push(`area: ${analysis.area}`);
            }

            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: issue.number,
              labels: labels
            });

            // Add initial response
            if (analysis.type === 'bug' && !analysis.hasReproSteps) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: issue.number,
                body: `Thanks for reporting this issue!

            To help us investigate, could you please provide:
            - Steps to reproduce the issue
            - Expected behavior
            - Actual behavior
            - Environment (OS, version, etc.)

            This will help us resolve your issue faster. 🙏`
              });
            }
```

### 2.2 Issue Analysis Prompt

```typescript
const TRIAGE_PROMPT = `
Analyze this GitHub issue and classify it:

Title: {title}
Body: {body}

Return JSON with:
{
  "type": "bug" | "feature" | "question" | "docs" | "other",
  "severity": "low" | "medium" | "high" | "critical",
  "area": "frontend" | "backend" | "api" | "docs" | "ci" | "other",
  "summary": "one-line summary",
  "hasReproSteps": boolean,
  "isFirstContribution": boolean,
  "suggestedLabels": ["label1", "label2"],
  "suggestedAssignees": ["username"] // based on area expertise
}
`;
```

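The labelling branch in 2.1 is easier to test when pulled out as a pure function. A sketch mirroring the script's logic (a hypothetical helper, not part of the workflow itself):

```javascript
// The label-mapping logic from the triage workflow, as a pure function
// over the analysis JSON described in 2.2 (hypothetical helper).
function labelsFor(analysis) {
  const labels = [];
  if (analysis.type === 'bug') {
    labels.push('bug');
    if (analysis.severity === 'high') labels.push('priority: high');
  } else if (analysis.type === 'feature') {
    labels.push('enhancement');
  } else if (analysis.type === 'question') {
    labels.push('question');
  }
  if (analysis.area) labels.push(`area: ${analysis.area}`);
  return labels;
}

console.log(labelsFor({ type: 'bug', severity: 'high', area: 'api' }));
// ['bug', 'priority: high', 'area: api']
```

Keeping the mapping pure means the workflow script reduces to: call the AI, call `labelsFor`, call `addLabels`.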
### 2.3 Stale Issue Management

```yaml
# .github/workflows/stale.yml
name: Manage Stale Issues

on:
  schedule:
    - cron: "0 0 * * *" # Daily

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          stale-issue-message: |
            This issue has been automatically marked as stale because it has not had
            recent activity. It will be closed in 14 days if no further activity occurs.

            If this issue is still relevant:
            - Add a comment with an update
            - Remove the `stale` label

            Thank you for your contributions! 🙏

          stale-pr-message: |
            This PR has been automatically marked as stale. Please update it or it
            will be closed in 14 days.

          days-before-stale: 60
          days-before-close: 14
          stale-issue-label: "stale"
          stale-pr-label: "stale"
          exempt-issue-labels: "pinned,security,in-progress"
          exempt-pr-labels: "pinned,security"
```

---

## 3. CI/CD Integration

### 3.1 Smart Test Selection

```yaml
# .github/workflows/smart-tests.yml
name: Smart Test Selection

on:
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    outputs:
      test_suites: ${{ steps.analyze.outputs.suites }}

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Analyze changes
        id: analyze
        run: |
          # Get changed files
          changed=$(git diff --name-only origin/${{ github.base_ref }}...HEAD)

          # Determine which test suites to run
          suites="[]"

          if echo "$changed" | grep -q "^src/api/"; then
            suites=$(echo $suites | jq '. + ["api"]')
          fi

          if echo "$changed" | grep -q "^src/frontend/"; then
            suites=$(echo $suites | jq '. + ["frontend"]')
          fi

          if echo "$changed" | grep -q "^src/database/"; then
            suites=$(echo $suites | jq '. + ["database", "api"]')
          fi

          # If nothing specific, run all (de-duplicate in case a suite
          # was added twice)
          if [ "$suites" = "[]" ]; then
            suites='["all"]'
          else
            suites=$(echo $suites | jq -c 'unique')
          fi

          echo "suites=$suites" >> $GITHUB_OUTPUT

  test:
    needs: analyze
    runs-on: ubuntu-latest
    strategy:
      matrix:
        suite: ${{ fromJson(needs.analyze.outputs.test_suites) }}

    steps:
      - uses: actions/checkout@v4

      - name: Run tests
        run: |
          if [ "${{ matrix.suite }}" = "all" ]; then
            npm test
          else
            npm test -- --suite ${{ matrix.suite }}
          fi
```

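The suite-selection step is again just a mapping from changed paths to suite names, de-duplicated, with "all" as the fallback. A sketch of that logic as a plain function (mirrors the bash above; it is not a replacement for it):

```javascript
// The path-prefix → test-suite mapping from the workflow, as a pure
// function: de-duplicated via a Set, with "all" as the fallback.
function suitesFor(changedFiles) {
  const suites = new Set();
  for (const f of changedFiles) {
    if (f.startsWith('src/api/')) suites.add('api');
    if (f.startsWith('src/frontend/')) suites.add('frontend');
    if (f.startsWith('src/database/')) {
      suites.add('database');
      suites.add('api'); // database changes can break the API layer too
    }
  }
  return suites.size ? [...suites] : ['all'];
}

console.log(suitesFor(['src/database/schema.sql'])); // ['database', 'api']
console.log(suitesFor(['README.md']));               // ['all']
```

Writing the mapping this way also makes the dependency edges (database implies api) explicit and easy to extend.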
### 3.2 Deployment with AI Validation

```yaml
# .github/workflows/deploy.yml
name: Deploy with AI Validation

on:
  push:
    branches: [main]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Get deployment changes
        id: changes
        run: |
          # Get commits since last deployment
          last_deploy=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
          if [ -n "$last_deploy" ]; then
            changes=$(git log --oneline $last_deploy..HEAD)
          else
            changes=$(git log --oneline -10)
          fi
          echo "changes<<EOF" >> $GITHUB_OUTPUT
          echo "$changes" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: AI Risk Assessment
        id: assess
        uses: actions/github-script@v7
        with:
          script: |
            // Analyze changes for deployment risk
            const prompt = `
            Analyze these changes for deployment risk:

            ${process.env.CHANGES}

            Return JSON:
            {
              "riskLevel": "low" | "medium" | "high",
              "concerns": ["concern1", "concern2"],
              "recommendations": ["rec1", "rec2"],
              "requiresManualApproval": boolean
            }
            `;

            // Call AI and parse response
            const analysis = await callAI(prompt);

            if (analysis.riskLevel === 'high') {
              core.setFailed('High-risk deployment detected. Manual review required.');
            }

            return analysis;
        env:
          CHANGES: ${{ steps.changes.outputs.changes }}

  deploy:
    needs: validate
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Deploy
        run: |
          echo "Deploying to production..."
          # Deployment commands here
```

### 3.3 Rollback Automation

```yaml
# .github/workflows/rollback.yml
name: Automated Rollback

on:
  workflow_dispatch:
    inputs:
      reason:
        description: "Reason for rollback"
        required: true

jobs:
  rollback:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Find last stable version
        id: stable
        run: |
          # Find last successful deployment
          stable=$(git tag -l 'v*' --sort=-version:refname | head -1)
          echo "version=$stable" >> $GITHUB_OUTPUT

      - name: Rollback
        run: |
          git checkout ${{ steps.stable.outputs.version }}
          # Deploy stable version
          npm run deploy

      - name: Notify team
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "🔄 Production rolled back to ${{ steps.stable.outputs.version }}",
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": "*Rollback executed*\n• Version: `${{ steps.stable.outputs.version }}`\n• Reason: ${{ inputs.reason }}\n• Triggered by: ${{ github.actor }}"
                  }
                }
              ]
            }
```

---

## 4. Git Operations

### 4.1 Automated Rebasing

```yaml
# .github/workflows/auto-rebase.yml
name: Auto Rebase

on:
  issue_comment:
    types: [created]

jobs:
  rebase:
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/rebase')
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Git
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"

      - name: Rebase PR
        run: |
          # Fetch PR branch
          gh pr checkout ${{ github.event.issue.number }}

          # Rebase onto main
          git fetch origin main
          git rebase origin/main

          # Force push
          git push --force-with-lease
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

      - name: Comment result
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: '✅ Successfully rebased onto main!'
            })
```

### 4.2 Smart Cherry-Pick

```typescript
// AI-assisted cherry-pick that handles conflicts
async function smartCherryPick(commitHash: string, targetBranch: string) {
  // Get commit info and the list of files the commit touches
  const commitInfo = await exec(`git show ${commitHash} --stat`);
  const affectedFiles = (
    await exec(`git diff-tree --no-commit-id --name-only -r ${commitHash}`)
  ).trim();

  // Check for potential conflicts
  const targetDiff = await exec(
    `git diff ${targetBranch}...HEAD -- ${affectedFiles}`
  );

  // AI analysis
  const analysis = await ai.analyze(`
    I need to cherry-pick this commit to ${targetBranch}:

    ${commitInfo}

    Current state of affected files on ${targetBranch}:
    ${targetDiff}

    Will there be conflicts? If so, suggest resolution strategy.
  `);

  if (analysis.willConflict) {
    // Create branch for manual resolution
    await exec(
      `git checkout -b cherry-pick-${commitHash.slice(0, 7)} ${targetBranch}`
    );
    const result = await exec(`git cherry-pick ${commitHash}`, {
      allowFail: true,
    });

    if (result.failed) {
      // AI-assisted conflict resolution
      const conflicts = await getConflicts();
      for (const conflict of conflicts) {
        const resolution = await ai.resolveConflict(conflict);
        await applyResolution(conflict.file, resolution);
      }
    }
  } else {
    await exec(`git checkout ${targetBranch}`);
    await exec(`git cherry-pick ${commitHash}`);
  }
}
```

### 4.3 Branch Cleanup
|
||||
|
||||
```yaml
|
||||
# .github/workflows/branch-cleanup.yml
|
||||
name: Branch Cleanup
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: '0 0 * * 0' # Weekly
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
cleanup:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Find stale branches
|
||||
id: stale
|
||||
run: |
|
||||
# Branches not updated in 30 days
|
||||
stale=$(git for-each-ref --sort=-committerdate refs/remotes/origin \
|
||||
--format='%(refname:short) %(committerdate:relative)' | \
|
||||
grep -E '[3-9][0-9]+ days|[0-9]+ months|[0-9]+ years' | \
|
||||
grep -v 'origin/main\|origin/develop' | \
|
||||
cut -d' ' -f1 | sed 's|origin/||')
|
||||
|
||||
echo "branches<<EOF" >> $GITHUB_OUTPUT
|
||||
echo "$stale" >> $GITHUB_OUTPUT
|
||||
echo "EOF" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Create cleanup PR
|
||||
if: steps.stale.outputs.branches != ''
|
||||
uses: actions/github-script@v7
|
||||
with:
|
||||
script: |
|
||||
const branches = `${{ steps.stale.outputs.branches }}`.split('\n').filter(Boolean);
|
||||
|
||||
const body = `## 🧹 Stale Branch Cleanup
|
||||
|
||||
The following branches haven't been updated in over 30 days:
|
||||
|
||||
${branches.map(b => `- \`${b}\``).join('\n')}
|
||||
|
||||
### Actions:
|
||||
- [ ] Review each branch
|
||||
- [ ] Delete branches that are no longer needed
|
||||
- Comment \`/keep branch-name\` to preserve specific branches
|
||||
`;
|
||||
|
||||
await github.rest.issues.create({
|
||||
owner: context.repo.owner,
|
||||
repo: context.repo.repo,
|
||||
title: 'Stale Branch Cleanup',
|
||||
body: body,
|
||||
labels: ['housekeeping']
|
||||
});
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 5. On-Demand Assistance
|
||||
|
||||
### 5.1 @mention Bot
|
||||
|
||||
```yaml
|
||||
# .github/workflows/mention-bot.yml
|
||||
name: AI Mention Bot
|
||||
|
||||
on:
|
||||
issue_comment:
|
||||
types: [created]
|
||||
pull_request_review_comment:
|
||||
types: [created]
|
||||
|
||||
jobs:
|
||||
respond:
|
||||
if: contains(github.event.comment.body, '@ai-helper')
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Extract question
|
||||
id: question
|
||||
run: |
|
||||
# Extract text after @ai-helper
|
||||
question=$(echo "${{ github.event.comment.body }}" | sed 's/.*@ai-helper//')
|
||||
echo "question=$question" >> $GITHUB_OUTPUT
|
||||
|
||||
- name: Get context
|
||||
id: context
|
||||
run: |
|
||||
if [ "${{ github.event.issue.pull_request }}" != "" ]; then
|
||||
# It's a PR - get diff
|
||||
gh pr diff ${{ github.event.issue.number }} > context.txt
|
||||
else
|
||||
# It's an issue - get description
|
||||
gh issue view ${{ github.event.issue.number }} --json body -q .body > context.txt
|
||||
fi
|
||||
echo "context=$(cat context.txt)" >> $GITHUB_OUTPUT
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- name: AI Response
|
||||
uses: actions/github-script@v7
|
||||
with:
|
||||
script: |
|
||||
const response = await ai.chat(`
|
||||
Context: ${process.env.CONTEXT}
|
||||
|
||||
Question: ${process.env.QUESTION}
|
||||
|
||||
Provide a helpful, specific answer. Include code examples if relevant.
|
||||
`);
|
||||
|
||||
await github.rest.issues.createComment({
|
||||
owner: context.repo.owner,
|
||||
repo: context.repo.repo,
|
||||
issue_number: context.issue.number,
|
||||
body: response
|
||||
});
|
||||
env:
|
||||
CONTEXT: ${{ steps.context.outputs.context }}
|
||||
QUESTION: ${{ steps.question.outputs.question }}
|
||||
```
|
||||
|
||||
### 5.2 Command Patterns

```markdown
## Available Commands

| Command              | Description                 |
| :------------------- | :-------------------------- |
| `@ai-helper explain` | Explain the code in this PR |
| `@ai-helper review`  | Request AI code review      |
| `@ai-helper fix`     | Suggest fixes for issues    |
| `@ai-helper test`    | Generate test cases         |
| `@ai-helper docs`    | Generate documentation      |
| `/rebase`            | Rebase PR onto main         |
| `/update`            | Update PR branch from main  |
| `/approve`           | Mark as approved by bot     |
| `/label bug`         | Add 'bug' label             |
| `/assign @user`      | Assign to user              |
```

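One way to route these commands (a sketch only; the job and step names are illustrative, not part of any official action) is a single `issue_comment` workflow that branches on the comment prefix:

```yaml
on:
  issue_comment:
    types: [created]

jobs:
  route:
    runs-on: ubuntu-latest
    steps:
      - name: Explain
        if: startsWith(github.event.comment.body, '@ai-helper explain')
        run: echo "dispatch the explain handler here"
      - name: Rebase
        if: startsWith(github.event.comment.body, '/rebase')
        run: echo "dispatch the rebase handler here"
```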
---

## 6. Repository Configuration

### 6.1 CODEOWNERS

```
# .github/CODEOWNERS

# Global owners
* @org/core-team

# Frontend
/src/frontend/ @org/frontend-team
*.tsx @org/frontend-team
*.css @org/frontend-team

# Backend
/src/api/ @org/backend-team
/src/database/ @org/backend-team

# Infrastructure
/.github/ @org/devops-team
/terraform/ @org/devops-team
Dockerfile @org/devops-team

# Docs
/docs/ @org/docs-team
*.md @org/docs-team

# Security-sensitive
/src/auth/ @org/security-team
/src/crypto/ @org/security-team
```

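Order matters in a CODEOWNERS file: when several patterns match a path, the last matching pattern wins. A two-line example of the effect:

```
# Last match wins: both rules match docs/guide.md,
# so @org/docs-team (the later rule) becomes the owner.
*       @org/core-team
*.md    @org/docs-team
```

In the file above, that means `*.md @org/docs-team` takes a markdown file away from the global `@org/core-team` rule, so keep broad patterns near the top and specific ones near the bottom.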
### 6.2 Branch Protection

```yaml
# Set up via GitHub API
- name: Configure branch protection
  uses: actions/github-script@v7
  with:
    script: |
      await github.rest.repos.updateBranchProtection({
        owner: context.repo.owner,
        repo: context.repo.repo,
        branch: 'main',
        required_status_checks: {
          strict: true,
          contexts: ['test', 'lint', 'ai-review']
        },
        enforce_admins: true,
        required_pull_request_reviews: {
          required_approving_review_count: 1,
          require_code_owner_reviews: true,
          dismiss_stale_reviews: true
        },
        restrictions: null,
        required_linear_history: true,
        allow_force_pushes: false,
        allow_deletions: false
      });
```

---

## Best Practices

### Security

- [ ] Store API keys in GitHub Secrets
- [ ] Use minimal permissions in workflows
- [ ] Validate all inputs
- [ ] Don't expose sensitive data in logs

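The "minimal permissions" item can be made concrete at the workflow level; a sketch using the standard `GITHUB_TOKEN` permission scopes (the job name is illustrative):

```yaml
# Default the token to read-only, then widen only where a job needs it.
permissions:
  contents: read

jobs:
  ai-comment:
    runs-on: ubuntu-latest
    permissions:
      issues: write         # only this job may post issue comments
      pull-requests: write  # and PR comments/reviews
    steps:
      - run: echo "post the AI review comment here"
```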
### Performance

- [ ] Cache dependencies
- [ ] Use matrix builds for parallel testing
- [ ] Skip unnecessary jobs with path filters
- [ ] Use self-hosted runners for heavy workloads

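The first three items combine naturally in one job; a sketch (a Node.js project is assumed purely for illustration):

```yaml
on:
  pull_request:
    paths:
      - 'src/**'            # path filter: skip the job for docs-only PRs

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [18, 20, 22]  # matrix build: versions tested in parallel
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node }}
          cache: npm        # dependency caching built into setup-node
      - run: npm ci && npm test
```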
### Reliability

- [ ] Add timeouts to jobs
- [ ] Handle rate limits gracefully
- [ ] Implement retry logic
- [ ] Have rollback procedures

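Timeouts and retries can be sketched like this (`$AI_ENDPOINT` is a placeholder for whatever API the workflow calls, and the retry loop is a minimal pattern, not a library feature):

```yaml
jobs:
  ai-review:
    runs-on: ubuntu-latest
    timeout-minutes: 10      # don't let a hung API call burn runner minutes
    steps:
      - name: Call AI API with retries
        run: |
          for attempt in 1 2 3; do
            curl --fail --max-time 60 "$AI_ENDPOINT" && break
            echo "attempt $attempt failed; backing off"
            sleep $((attempt * 30))   # linear backoff for rate limits
          done
```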
---

## Resources

- [Gemini CLI GitHub Action](https://github.com/google-github-actions/run-gemini-cli)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [GitHub REST API](https://docs.github.com/en/rest)
- [CODEOWNERS Syntax](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners)
202
skills/internal-comms-anthropic/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
47
skills/internal-comms-anthropic/examples/3p-updates.md
Normal file
@@ -0,0 +1,47 @@
## Instructions
You are being asked to write a 3P update. 3P stands for "Progress, Plans, Problems." The main audience is executives, leadership, other teammates, etc. They're meant to be very succinct and to-the-point: think something you can read in 30-60 seconds or less. They're also for people with some, but not a lot of, context on what the team does.

3Ps can cover a team of any size, ranging all the way up to the entire company. The bigger the team, the less granular the tasks should be. For example, "mobile team" might have "shipped feature" or "fixed bugs," whereas the company might have really meaty 3Ps, like "hired 20 new people" or "closed 10 new deals."

They represent the work of the team across a time period, almost always one week. They include three sections:
1) Progress: what the team has accomplished over the past time period. Focus mainly on things shipped, milestones achieved, tasks created, etc.
2) Plans: what the team plans to do over the next time period. Focus on what things are top-of-mind, really high priority, etc. for the team.
3) Problems: anything that is slowing the team down. This could be things like too few people, bugs or blockers that are preventing the team from moving forward, some deal that fell through, etc.

Before writing, make sure you know the team name. If it's not specified, ask explicitly which team you're writing for.


## Tools Available
Whenever possible, try to pull from available sources to get the information you need:
- Slack: posts from team members with their updates - ideally look for posts in large channels with lots of reactions
- Google Drive: docs written by critical team members with lots of views
- Email: emails with lots of responses or lots of content that seems relevant
- Calendar: non-recurring meetings that have a lot of importance, like product reviews, etc.


Try to gather as much context as you can, focusing on the things that cover the time period you're writing for:
- Progress: anything between a week ago and today
- Plans: anything from today to the next week
- Problems: anything between a week ago and today


If you don't have access, you can ask the user for things they want to cover. They might also give these things to you directly, in which case you're mostly just formatting for this particular format.

## Workflow

1. **Clarify scope**: Confirm the team name and time period (usually past week for Progress/Problems, next week for Plans)
2. **Gather information**: Use available tools or ask the user directly
3. **Draft the update**: Follow the strict formatting guidelines
4. **Review**: Ensure it's concise (30-60 seconds to read) and data-driven

## Formatting

The format is always the same, very strict. Never use any formatting other than this. Pick an emoji that is fun and captures the vibe of the team and update.

[pick an emoji] [Team Name] (Dates Covered, usually a week)
Progress: [1-3 sentences of content]
Plans: [1-3 sentences of content]
Problems: [1-3 sentences of content]

Each section should be no more than 1-3 sentences: clear and to the point. It should be data-driven, and generally include metrics where possible. The tone should be very matter-of-fact, not prose-heavy.
@@ -0,0 +1,65 @@
## Instructions
You are being asked to write a company-wide newsletter update. You are meant to summarize the past week/month of the company in the form of a newsletter that the entire company will read. It should be maybe ~20-25 bullet points long. It will be sent via Slack and email, so make it consumable for that.

Ideally it includes the following attributes:
- Lots of links: pulling very relevant documents from Google Drive, linking to prominent Slack messages in announce channels and from executives, perhaps referencing emails that went company-wide, highlighting significant things that have happened in the company.
- Short and to-the-point: each bullet should probably be no longer than ~1-2 sentences
- Use the first-person "we," as you are part of the company. Many of the bullets should say "we did this" or "we did that"

## Tools to use
If you have access to the following tools, please try to use them. If not, you can also let the user know directly that your output would be better if they gave you access.

- Slack: look for messages in channels with lots of people, with lots of reactions or lots of responses within the thread
- Email: look for things from executives that discuss company-wide announcements
- Calendar: if there were meetings with large attendee lists, particularly things like All-Hands meetings, big company announcements, etc. If there were documents attached to those meetings, those are great links to include.
- Documents: if there were new docs published in the last week or two that got a lot of attention, you can link them. These should be things like company-wide vision docs, plans for the upcoming quarter or half, things authored by critical executives, etc.
- External press: if you see references to articles or press we've received over the past week, that could be really cool too.

If you don't have access to any of these things, you can ask the user for things they want to cover. In this case, you'll mostly just be polishing up and fitting to this format more directly.

## Sections
The company is pretty big: 1000+ people. There are a variety of different teams and initiatives going on across the company. To make sure the update works well, try breaking it into sections of similar things. You might break it into clusters like {product development, go to market, finance} or {recruiting, execution, vision}, or {external news, internal news}, etc. Try to make sure the different areas of the company are highlighted well.

## Prioritization
Focus on:
- Company-wide impact (not team-specific details)
- Announcements from leadership
- Major milestones and achievements
- Information that affects most employees
- External recognition or press

Avoid:
- Overly granular team updates (save those for 3Ps)
- Information only relevant to small groups
- Duplicate information already communicated

## Example Formats

:megaphone: Company Announcements
- Announcement 1
- Announcement 2
- Announcement 3

:dart: Progress on Priorities
- Area 1
  - Sub-area 1
  - Sub-area 2
  - Sub-area 3
- Area 2
  - Sub-area 1
  - Sub-area 2
  - Sub-area 3
- Area 3
  - Sub-area 1
  - Sub-area 2
  - Sub-area 3

:pillar: Leadership Updates
- Post 1
- Post 2
- Post 3

:thread: Social Updates
- Update 1
- Update 2
- Update 3
30
skills/internal-comms-anthropic/examples/faq-answers.md
Normal file
@@ -0,0 +1,30 @@
## Instructions
You are an assistant for answering questions that are being asked across the company. Every week, lots of questions get asked across the company, and your goal is to summarize what those questions are. We want our company to be well-informed and on the same page, so your job is to produce a set of frequently asked questions that our employees are asking and attempt to answer them. Your singular job is to do two things:

- Find questions that are big sources of confusion for lots of employees at the company, generally about things that affect a large portion of the employee base
- Attempt to give a nicely summarized answer to that question in order to minimize confusion.

Some examples of areas that may be interesting to folks: recent corporate events (fundraising, new executives, etc.), upcoming launches, hiring progress, changes to vision or focus, etc.


## Tools Available
You should use the company's available tools, where communication and work happen. For most companies, it looks something like this:
- Slack: questions being asked across the company - these could be questions in response to posts with lots of responses, questions with lots of reactions or thumbs-up to show support, or anything else showing that a large number of employees want to ask the same things
- Email: emails with FAQs written directly in them can be a good source as well
- Documents: docs in places like Google Drive, linked on calendar events, etc. can also be a good source of FAQs, either directly added or inferred based on the contents of the doc

## Formatting
The formatting should be pretty basic:

- *Question*: [insert question - 1 sentence]
- *Answer*: [insert answer - 1-2 sentences]

## Guidance
Be holistic in your questions. Don't focus too much on just the user in question or the team they are a part of; try to capture the entire company. Read across all the tools available and produce responses that are relevant to everyone at the company.

## Answer Guidelines
- Base answers on official company communications when possible
- If information is uncertain, indicate that clearly
- Link to authoritative sources (docs, announcements, emails)
- Keep tone professional but approachable
- Flag if a question requires executive input or official response
16
skills/internal-comms-anthropic/examples/general-comms.md
Normal file
@@ -0,0 +1,16 @@
## Instructions
You are being asked to write internal company communication that doesn't fit into the standard formats (3P updates, newsletters, or FAQs).

Before proceeding:
1. Ask the user about their target audience
2. Understand the communication's purpose
3. Clarify the desired tone (formal, casual, urgent, informational)
4. Confirm any specific formatting requirements

Use these general principles:
- Be clear and concise
- Use active voice
- Put the most important information first
- Include relevant links and references
- Match the company's communication style
202
skills/internal-comms-community/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@
[Apache License, Version 2.0 - identical to skills/internal-comms-anthropic/LICENSE.txt above]
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
32
skills/internal-comms-community/SKILL.md
Normal file
@@ -0,0 +1,32 @@
---
name: internal-comms
description: A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenever asked to write some sort of internal communications (status reports, leadership updates, 3P updates, company newsletters, FAQs, incident reports, project updates, etc.).
license: Complete terms in LICENSE.txt
---

## When to use this skill

Use this skill to write internal communications such as:

- 3P updates (Progress, Plans, Problems)
- Company newsletters
- FAQ responses
- Status reports
- Leadership updates
- Project updates
- Incident reports

## How to use this skill

To write any internal communication:

1. **Identify the communication type** from the request
2. **Load the appropriate guideline file** from the `examples/` directory:
   - `examples/3p-updates.md` - For Progress/Plans/Problems team updates
   - `examples/company-newsletter.md` - For company-wide newsletters
   - `examples/faq-answers.md` - For answering frequently asked questions
   - `examples/general-comms.md` - For anything else that doesn't explicitly match one of the above
3. **Follow the specific instructions** in that file for formatting, tone, and content gathering

If the communication type doesn't match any existing guideline, ask for clarification or more context about the desired format.

## Keywords

3P updates, company newsletter, company comms, weekly update, faqs, common questions, updates, internal comms
645
skills/javascript-mastery/SKILL.md
Normal file
@@ -0,0 +1,645 @@
---
name: javascript-mastery
description: "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals."
---

# 🧠 JavaScript Mastery

> 33+ essential JavaScript concepts every developer should know, inspired by [33-js-concepts](https://github.com/leonardomso/33-js-concepts).

## When to Use This Skill

Use this skill when:

- Explaining JavaScript concepts
- Debugging tricky JS behavior
- Teaching JavaScript fundamentals
- Reviewing code for JS best practices
- Understanding language quirks

---

## 1. Fundamentals

### 1.1 Primitive Types

JavaScript has 7 primitive types:

```javascript
// String
const str = "hello";

// Number (integers and floats)
const num = 42;
const float = 3.14;

// BigInt (for large integers)
const big = 9007199254740991n;

// Boolean
const bool = true;

// Undefined
let undef; // undefined

// Null
const empty = null;

// Symbol (unique identifiers)
const sym = Symbol("description");
```

**Key points**:

- Primitives are immutable
- Passed by value
- `typeof null === "object"` is a historical bug

### 1.2 Type Coercion

JavaScript implicitly converts types:

```javascript
// String coercion
"5" + 3; // "53" (number → string)
"5" - 3; // 2 (string → number)

// Boolean coercion
Boolean(""); // false
Boolean("hello"); // true
Boolean(0); // false
Boolean([]); // true (!)

// Equality coercion
"5" == 5; // true (coerces)
"5" === 5; // false (strict)
```

**Falsy values** (8 total):
`false`, `0`, `-0`, `0n`, `""`, `null`, `undefined`, `NaN`

### 1.3 Equality Operators

```javascript
// == (loose equality) - coerces types
null == undefined; // true
"1" == 1; // true

// === (strict equality) - no coercion
null === undefined; // false
"1" === 1; // false

// Object.is() - handles edge cases
Object.is(NaN, NaN); // true (NaN === NaN is false!)
Object.is(-0, 0); // false (0 === -0 is true!)
```

**Rule**: Always use `===` unless you have a specific reason not to.

---
## 2. Scope & Closures

### 2.1 Scope Types

```javascript
// Global scope
var globalVar = "global";

function outer() {
  // Function scope
  var functionVar = "function";

  if (true) {
    // Block scope (let/const only)
    let blockVar = "block";
    const alsoBlock = "block";
    var notBlock = "function"; // var ignores blocks!
  }
}
```

### 2.2 Closures

A closure is a function that remembers its lexical scope:

```javascript
function createCounter() {
  let count = 0; // "closed over" variable

  return {
    increment() {
      return ++count;
    },
    decrement() {
      return --count;
    },
    getCount() {
      return count;
    },
  };
}

const counter = createCounter();
counter.increment(); // 1
counter.increment(); // 2
counter.getCount(); // 2
```

**Common use cases**:

- Data privacy (module pattern)
- Function factories
- Partial application
- Memoization
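The memoization use case can be sketched as a closure over a private cache. The `memoize` helper below is illustrative, not a standard API; it assumes a pure function of one primitive argument:

```javascript
// memoize: cache results of a pure single-argument function
function memoize(fn) {
  const cache = new Map(); // closed-over; survives between calls

  return function (arg) {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)); // compute once
    }
    return cache.get(arg); // cached thereafter
  };
}

let calls = 0;
const slowSquare = (n) => {
  calls++; // track how often the real work runs
  return n * n;
};
const fastSquare = memoize(slowSquare);

fastSquare(4); // 16 (computed, calls === 1)
fastSquare(4); // 16 (cached, calls still 1)
```

Because `cache` lives only in the closure, callers cannot tamper with it — the same data-privacy property the module pattern relies on.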
### 2.3 var vs let vs const

```javascript
// var - function scoped, hoisted, can redeclare
var x = 1;
var x = 2; // OK

// let - block scoped, hoisted (TDZ), no redeclare
let y = 1;
// let y = 2; // Error!

// const - like let, but can't reassign
const z = 1;
// z = 2; // Error!

// BUT: const objects are mutable
const obj = { a: 1 };
obj.a = 2; // OK
obj.b = 3; // OK
```

---

## 3. Functions & Execution

### 3.1 Call Stack

```javascript
function first() {
  console.log("first start");
  second();
  console.log("first end");
}

function second() {
  console.log("second");
}

first();
// Output:
// "first start"
// "second"
// "first end"
```

Stack overflow example:

```javascript
function infinite() {
  infinite(); // No base case!
}
infinite(); // RangeError: Maximum call stack size exceeded
```

### 3.2 Hoisting

```javascript
// Variable hoisting
console.log(a); // undefined (hoisted, not initialized)
var a = 5;

console.log(b); // ReferenceError (TDZ)
let b = 5;

// Function hoisting
sayHi(); // Works!
function sayHi() {
  console.log("Hi!");
}

// Function expressions don't hoist
sayBye(); // TypeError
var sayBye = function () {
  console.log("Bye!");
};
```

### 3.3 this Keyword

```javascript
// Global context
console.log(this); // window (browser) or global (Node)

// Object method
const obj = {
  name: "Alice",
  greet() {
    console.log(this.name); // "Alice"
  },
};

// Arrow functions (lexical this)
const obj2 = {
  name: "Bob",
  greet: () => {
    console.log(this.name); // undefined (inherits outer this)
  },
};

// Explicit binding
function greet() {
  console.log(this.name);
}
greet.call({ name: "Charlie" }); // "Charlie"
greet.apply({ name: "Diana" }); // "Diana"
const bound = greet.bind({ name: "Eve" });
bound(); // "Eve"
```

---

## 4. Event Loop & Async

### 4.1 Event Loop

```javascript
console.log("1");

setTimeout(() => console.log("2"), 0);

Promise.resolve().then(() => console.log("3"));

console.log("4");

// Output: 1, 4, 3, 2
// Why? Microtasks (Promises) run before macrotasks (setTimeout)
```

**Execution order**:

1. Synchronous code (call stack)
2. Microtasks (Promise callbacks, queueMicrotask)
3. Macrotasks (setTimeout, setInterval, I/O)
### 4.2 Callbacks

```javascript
// Callback pattern
function fetchData(callback) {
  setTimeout(() => {
    callback(null, { data: "result" });
  }, 1000);
}

// Error-first convention
fetchData((error, result) => {
  if (error) {
    console.error(error);
    return;
  }
  console.log(result);
});

// Callback hell (avoid this!)
getData((data) => {
  processData(data, (processed) => {
    saveData(processed, (saved) => {
      notify(saved, () => {
        // 😱 Pyramid of doom
      });
    });
  });
});
```

### 4.3 Promises

```javascript
// Creating a Promise
const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve("Success!");
    // or: reject(new Error("Failed!"));
  }, 1000);
});

// Consuming Promises
promise
  .then((result) => console.log(result))
  .catch((error) => console.error(error))
  .finally(() => console.log("Done"));

// Promise combinators
Promise.all([p1, p2, p3]); // All must succeed
Promise.allSettled([p1, p2]); // Wait for all, get status
Promise.race([p1, p2]); // First to settle
Promise.any([p1, p2]); // First to succeed
```

### 4.4 async/await

```javascript
async function fetchUserData(userId) {
  try {
    const response = await fetch(`/api/users/${userId}`);
    if (!response.ok) throw new Error("Failed to fetch");
    const user = await response.json();
    return user;
  } catch (error) {
    console.error("Error:", error);
    throw error; // Re-throw for caller to handle
  }
}

// Parallel execution
async function fetchAll() {
  const [users, posts] = await Promise.all([
    fetch("/api/users"),
    fetch("/api/posts"),
  ]);
  return { users, posts };
}
```

---

## 5. Functional Programming

### 5.1 Higher-Order Functions

Functions that take or return functions:

```javascript
// Takes a function
const numbers = [1, 2, 3];
const doubled = numbers.map((n) => n * 2); // [2, 4, 6]

// Returns a function
function multiply(a) {
  return function (b) {
    return a * b;
  };
}
const double = multiply(2);
double(5); // 10
```

### 5.2 Pure Functions

```javascript
// Pure: same input → same output, no side effects
function add(a, b) {
  return a + b;
}

// Impure: modifies external state
let total = 0;
function addToTotal(value) {
  total += value; // Side effect!
  return total;
}

// Impure: depends on external state
function getDiscount(price) {
  return price * globalDiscountRate; // External dependency
}
```

### 5.3 map, filter, reduce

```javascript
const users = [
  { name: "Alice", age: 25 },
  { name: "Bob", age: 30 },
  { name: "Charlie", age: 35 },
];

// map: transform each element
const names = users.map((u) => u.name);
// ["Alice", "Bob", "Charlie"]

// filter: keep elements matching condition
const adults = users.filter((u) => u.age >= 30);
// [{ name: "Bob", ... }, { name: "Charlie", ... }]

// reduce: accumulate into single value
const totalAge = users.reduce((sum, u) => sum + u.age, 0);
// 90

// Chaining
const result = users
  .filter((u) => u.age >= 30)
  .map((u) => u.name)
  .join(", ");
// "Bob, Charlie"
```

### 5.4 Currying & Composition

```javascript
// Currying: transform f(a, b, c) into f(a)(b)(c)
const curry = (fn) => {
  return function curried(...args) {
    if (args.length >= fn.length) {
      return fn.apply(this, args);
    }
    return (...moreArgs) => curried(...args, ...moreArgs);
  };
};

const add = curry((a, b, c) => a + b + c);
add(1)(2)(3); // 6
add(1, 2)(3); // 6
add(1)(2, 3); // 6

// Composition: combine functions
const compose =
  (...fns) =>
  (x) =>
    fns.reduceRight((acc, fn) => fn(acc), x);

const pipe =
  (...fns) =>
  (x) =>
    fns.reduce((acc, fn) => fn(acc), x);

const addOne = (x) => x + 1;
const double = (x) => x * 2;

const addThenDouble = compose(double, addOne);
addThenDouble(5); // 12 = (5 + 1) * 2

const doubleThenAdd = pipe(double, addOne);
doubleThenAdd(5); // 11 = (5 * 2) + 1
```

---
## 6. Objects & Prototypes

### 6.1 Prototypal Inheritance

```javascript
// Prototype chain
const animal = {
  speak() {
    console.log("Some sound");
  },
};

const dog = Object.create(animal);
dog.bark = function () {
  console.log("Woof!");
};

dog.speak(); // "Some sound" (inherited)
dog.bark(); // "Woof!" (own method)

// ES6 Classes (syntactic sugar)
class Animal {
  speak() {
    console.log("Some sound");
  }
}

class Dog extends Animal {
  bark() {
    console.log("Woof!");
  }
}
```

### 6.2 Object Methods

```javascript
const obj = { a: 1, b: 2 };

// Keys, values, entries
Object.keys(obj); // ["a", "b"]
Object.values(obj); // [1, 2]
Object.entries(obj); // [["a", 1], ["b", 2]]

// Shallow copy
const copy = { ...obj };
const copy2 = Object.assign({}, obj);

// Freeze (immutable)
const frozen = Object.freeze({ x: 1 });
frozen.x = 2; // Silently fails (or throws in strict mode)

// Seal (no add/delete, can modify)
const sealed = Object.seal({ x: 1 });
sealed.x = 2; // OK
sealed.y = 3; // Fails
delete sealed.x; // Fails
```
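Note that spread and `Object.assign` produce shallow copies: nested objects stay shared with the original. A quick demonstration, using `structuredClone` (a global available in modern browsers and Node 17+) for a deep copy:

```javascript
const original = { nested: { count: 1 } };

// Shallow copy: top-level keys are copied, nested objects are shared
const shallow = { ...original };
shallow.nested.count = 2;
console.log(original.nested.count); // 2 — mutation leaked through

// Deep copy: nested structure is fully duplicated
const deep = structuredClone(original);
deep.nested.count = 99;
console.log(original.nested.count); // 2 — original untouched
```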
---

## 7. Modern JavaScript (ES6+)

### 7.1 Destructuring

```javascript
// Array destructuring
const [first, second, ...rest] = [1, 2, 3, 4, 5];
// first = 1, second = 2, rest = [3, 4, 5]

// Object destructuring
const { name, age, city = "Unknown" } = { name: "Alice", age: 25 };
// name = "Alice", age = 25, city = "Unknown"

// Renaming
const { name: userName } = { name: "Bob" };
// userName = "Bob"

// Nested
const {
  address: { street },
} = { address: { street: "123 Main" } };
```

### 7.2 Spread & Rest

```javascript
// Spread: expand iterable
const arr1 = [1, 2, 3];
const arr2 = [...arr1, 4, 5]; // [1, 2, 3, 4, 5]

const obj1 = { a: 1 };
const obj2 = { ...obj1, b: 2 }; // { a: 1, b: 2 }

// Rest: collect remaining
function sum(...numbers) {
  return numbers.reduce((a, b) => a + b, 0);
}
sum(1, 2, 3, 4); // 10
```

### 7.3 Modules

```javascript
// Named exports
export const PI = 3.14159;
export function square(x) {
  return x * x;
}

// Default export
export default class Calculator {}

// Importing
import Calculator, { PI, square } from "./math.js";
import * as math from "./math.js";

// Dynamic import
const module = await import("./dynamic.js");
```

### 7.4 Optional Chaining & Nullish Coalescing

```javascript
// Optional chaining (?.)
const user = { address: { city: "NYC" } };
const city = user?.address?.city; // "NYC"
const zip = user?.address?.zip; // undefined (no error)
const fn = user?.getName?.(); // undefined if no method

// Nullish coalescing (??)
const value = null ?? "default"; // "default"
const zero = 0 ?? "default"; // 0 (not nullish!)
const empty = "" ?? "default"; // "" (not nullish!)

// Compare with ||
const value2 = 0 || "default"; // "default" (0 is falsy)
```

---

## Quick Reference Card

| Concept        | Key Point                         |
| :------------- | :-------------------------------- |
| `==` vs `===`  | Always use `===`                  |
| `var` vs `let` | Prefer `let`/`const`              |
| Closures       | Function + lexical scope          |
| `this`         | Depends on how function is called |
| Event loop     | Microtasks before macrotasks      |
| Pure functions | Same input → same output          |
| Prototypes     | `__proto__` → prototype chain     |
| `??` vs `\|\|` | `??` only checks null/undefined   |

---

## Resources

- [33 JS Concepts](https://github.com/leonardomso/33-js-concepts)
- [JavaScript.info](https://javascript.info/)
- [MDN JavaScript Guide](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide)
- [You Don't Know JS](https://github.com/getify/You-Dont-Know-JS)
760
skills/llm-app-patterns/SKILL.md
Normal file
@@ -0,0 +1,760 @@
---
name: llm-app-patterns
description: "Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability."
---

# 🤖 LLM Application Patterns

> Production-ready patterns for building LLM applications, inspired by [Dify](https://github.com/langgenius/dify) and industry best practices.

## When to Use This Skill

Use this skill when:

- Designing LLM-powered applications
- Implementing RAG (Retrieval-Augmented Generation)
- Building AI agents with tools
- Setting up LLMOps monitoring
- Choosing between agent architectures

---

## 1. RAG Pipeline Architecture

### Overview

RAG (Retrieval-Augmented Generation) grounds LLM responses in your data.

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Ingest    │────▶│  Retrieve   │────▶│  Generate   │
│  Documents  │     │   Context   │     │  Response   │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       ▼                   ▼                   ▼
 ┌─────────┐        ┌───────────┐       ┌───────────┐
 │ Chunking│        │  Vector   │       │   LLM     │
 │Embedding│        │  Search   │       │ + Context │
 └─────────┘        └───────────┘       └───────────┘
```

### 1.1 Document Ingestion

```python
# Chunking strategies
class ChunkingStrategy:
    # Fixed-size chunks (simple but may break context)
    FIXED_SIZE = "fixed_size"  # e.g., 512 tokens

    # Semantic chunking (preserves meaning)
    SEMANTIC = "semantic"  # Split on paragraphs/sections

    # Recursive splitting (tries multiple separators)
    RECURSIVE = "recursive"  # ["\n\n", "\n", " ", ""]

    # Document-aware (respects structure)
    DOCUMENT_AWARE = "document_aware"  # Headers, lists, etc.

# Recommended settings
CHUNK_CONFIG = {
    "chunk_size": 512,  # tokens
    "chunk_overlap": 50,  # token overlap between chunks
    "separators": ["\n\n", "\n", ". ", " "],
}
```
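As a rough illustration of the recursive strategy, a splitter can try each separator in turn, coarsest first, until every piece fits the size budget. The helper below is a sketch, not part of any particular library, and it measures size in characters rather than tokens for simplicity:

```python
def recursive_split(text, chunk_size=512, separators=("\n\n", "\n", ". ", " ")):
    """Split text by the coarsest separator whose pieces fit chunk_size."""
    if len(text) <= chunk_size:
        return [text]
    for sep in separators:
        if sep in text:
            pieces = text.split(sep)
            chunks, current = [], ""
            for piece in pieces:
                candidate = current + sep + piece if current else piece
                if len(candidate) <= chunk_size:
                    current = candidate  # keep packing into this chunk
                else:
                    if current:
                        chunks.append(current)
                    if len(piece) > chunk_size:
                        # Piece itself is too big: recurse with finer separators
                        chunks.extend(recursive_split(piece, chunk_size, separators))
                        current = ""
                    else:
                        current = piece
            if current:
                chunks.append(current)
            return chunks
    # No separator present at all: fall back to a hard cut
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
```

A production splitter would also apply the `chunk_overlap` from `CHUNK_CONFIG`; that is omitted here to keep the sketch short.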
### 1.2 Embedding & Storage

```python
# Vector database selection
VECTOR_DB_OPTIONS = {
    "pinecone": {
        "use_case": "Production, managed service",
        "scale": "Billions of vectors",
        "features": ["Hybrid search", "Metadata filtering"]
    },
    "weaviate": {
        "use_case": "Self-hosted, multi-modal",
        "scale": "Millions of vectors",
        "features": ["GraphQL API", "Modules"]
    },
    "chromadb": {
        "use_case": "Development, prototyping",
        "scale": "Thousands of vectors",
        "features": ["Simple API", "In-memory option"]
    },
    "pgvector": {
        "use_case": "Existing Postgres infrastructure",
        "scale": "Millions of vectors",
        "features": ["SQL integration", "ACID compliance"]
    }
}

# Embedding model selection
EMBEDDING_MODELS = {
    "openai/text-embedding-3-small": {
        "dimensions": 1536,
        "cost": "$0.02/1M tokens",
        "quality": "Good for most use cases"
    },
    "openai/text-embedding-3-large": {
        "dimensions": 3072,
        "cost": "$0.13/1M tokens",
        "quality": "Best for complex queries"
    },
    "local/bge-large": {
        "dimensions": 1024,
        "cost": "Free (compute only)",
        "quality": "Comparable to OpenAI small"
    }
}
```

### 1.3 Retrieval Strategies

```python
# Basic semantic search
def semantic_search(query: str, top_k: int = 5):
    query_embedding = embed(query)
    results = vector_db.similarity_search(
        query_embedding,
        top_k=top_k
    )
    return results

# Hybrid search (semantic + keyword)
def hybrid_search(query: str, top_k: int = 5, alpha: float = 0.5):
    """
    alpha=1.0: Pure semantic
    alpha=0.0: Pure keyword (BM25)
    alpha=0.5: Balanced
    """
    semantic_results = vector_db.similarity_search(query)
    keyword_results = bm25_search(query)

    # Reciprocal Rank Fusion
    return rrf_merge(semantic_results, keyword_results, alpha)

# Multi-query retrieval
def multi_query_retrieval(query: str):
    """Generate multiple query variations for better recall"""
    queries = llm.generate_query_variations(query, n=3)
    all_results = []
    for q in queries:
        all_results.extend(semantic_search(q))
    return deduplicate(all_results)

# Contextual compression
def compressed_retrieval(query: str):
    """Retrieve then compress to relevant parts only"""
    docs = semantic_search(query, top_k=10)
    compressed = llm.extract_relevant_parts(docs, query)
    return compressed
```
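The `rrf_merge` helper above is referenced but not defined. A minimal weighted Reciprocal Rank Fusion sketch follows; the `k = 60` smoothing constant is a common default, and weighting the two lists by `alpha` is one reasonable way to honor the semantic/keyword balance — both are assumptions here, not part of any standard API:

```python
def rrf_merge(semantic_results, keyword_results, alpha=0.5, k=60):
    """Merge two best-first ranked lists with weighted Reciprocal Rank Fusion.

    Items must be hashable (e.g., document IDs). Higher fused score = better.
    """
    scores = {}
    for rank, doc in enumerate(semantic_results):
        scores[doc] = scores.get(doc, 0.0) + alpha / (k + rank + 1)
    for rank, doc in enumerate(keyword_results):
        scores[doc] = scores.get(doc, 0.0) + (1 - alpha) / (k + rank + 1)
    # Documents appearing high in both lists accumulate the largest scores
    return sorted(scores, key=scores.get, reverse=True)
```

RRF is rank-based, so it needs no score normalization between the BM25 and vector backends — a key reason it is popular for hybrid search.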
### 1.4 Generation with Context

```python
RAG_PROMPT_TEMPLATE = """
Answer the user's question based ONLY on the following context.
If the context doesn't contain enough information, say "I don't have enough information to answer that."

Context:
{context}

Question: {question}

Answer:"""

def generate_with_rag(question: str):
    # Retrieve
    context_docs = hybrid_search(question, top_k=5)
    context = "\n\n".join([doc.content for doc in context_docs])

    # Generate
    prompt = RAG_PROMPT_TEMPLATE.format(
        context=context,
        question=question
    )

    response = llm.generate(prompt)

    # Return with citations
    return {
        "answer": response,
        "sources": [doc.metadata for doc in context_docs]
    }
```

---

## 2. Agent Architectures

### 2.1 ReAct Pattern (Reasoning + Acting)

```
Thought: I need to search for information about X
Action: search("X")
Observation: [search results]
Thought: Based on the results, I should...
Action: calculate(...)
Observation: [calculation result]
Thought: I now have enough information
Action: final_answer("The answer is...")
```

```python
REACT_PROMPT = """
You are an AI assistant that can use tools to answer questions.

Available tools:
{tools_description}

Use this format:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [tool result - this will be filled in]
... (repeat Thought/Action/Observation as needed)
Thought: I have enough information to answer
Final Answer: [your final response]

Question: {question}
"""

class ReActAgent:
    def __init__(self, tools: list, llm):
        self.tools = {t.name: t for t in tools}
        self.llm = llm
        self.max_iterations = 10

    def run(self, question: str) -> str:
        prompt = REACT_PROMPT.format(
            tools_description=self._format_tools(),
            question=question
        )

        for _ in range(self.max_iterations):
            response = self.llm.generate(prompt)

            if "Final Answer:" in response:
                return self._extract_final_answer(response)

            action = self._parse_action(response)
            observation = self._execute_tool(action)
            prompt += f"\nObservation: {observation}\n"

        return "Max iterations reached"
```
|
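The loop above can be exercised with a scripted stand-in model; `ScriptedLLM`, the regex action parser, and the `calculate` tool below are illustrative stand-ins, not part of any framework.

```python
import re

class ScriptedLLM:
    """Stand-in model that replays a fixed ReAct trajectory."""
    def __init__(self, turns):
        self.turns = list(turns)

    def generate(self, prompt: str) -> str:
        return self.turns.pop(0)

def run_react(llm, tools, max_iterations=5):
    prompt = "Question: what is 2+3?"
    for _ in range(max_iterations):
        response = llm.generate(prompt)
        if "Final Answer:" in response:
            return response.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool_name(args)" with a simple regex
        m = re.search(r"Action: (\w+)\((.*)\)", response)
        observation = tools[m.group(1)](m.group(2))
        # Keep the full trajectory in the prompt
        prompt += f"\n{response}\nObservation: {observation}\n"
    return "Max iterations reached"

tools = {"calculate": lambda expr: eval(expr, {"__builtins__": {}})}
llm = ScriptedLLM([
    "Thought: I should compute it\nAction: calculate(2+3)",
    "Thought: I have the result\nFinal Answer: 5",
])
answer = run_react(llm, tools)
```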

### 2.2 Function Calling Pattern

```python
# Define tools as functions with schemas
TOOLS = [
    {
        "name": "search_web",
        "description": "Search the web for current information",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query"
                }
            },
            "required": ["query"]
        }
    },
    {
        "name": "calculate",
        "description": "Perform mathematical calculations",
        "parameters": {
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Math expression to evaluate"
                }
            },
            "required": ["expression"]
        }
    }
]

class FunctionCallingAgent:
    def run(self, question: str) -> str:
        messages = [{"role": "user", "content": question}]

        while True:
            response = self.llm.chat(
                messages=messages,
                tools=TOOLS,
                tool_choice="auto"
            )

            if response.tool_calls:
                # Keep the assistant turn that requested the tools;
                # most chat APIs reject tool results without it
                messages.append(response.message)
                for tool_call in response.tool_calls:
                    result = self._execute_tool(
                        tool_call.name,
                        tool_call.arguments
                    )
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                return response.content
```
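The `_execute_tool` step can be sketched as a local dispatcher that validates arguments against the schema before calling a registered function; `REGISTRY` and `execute_tool` are hypothetical names for illustration.

```python
import json

# Registry mapping tool names to plain Python callables (stand-ins)
REGISTRY = {
    "calculate": lambda expression: eval(expression, {"__builtins__": {}}),
}

def execute_tool(name: str, arguments_json: str, schema_by_name: dict):
    # Tool-call arguments usually arrive as a JSON string
    args = json.loads(arguments_json)
    required = schema_by_name[name]["parameters"]["required"]
    missing = [p for p in required if p not in args]
    if missing:
        raise ValueError(f"Missing required parameters: {missing}")
    return REGISTRY[name](**args)

schemas = {"calculate": {"parameters": {"required": ["expression"]}}}
result = execute_tool("calculate", '{"expression": "6*7"}', schemas)
```

Validating against the declared `required` list catches malformed calls before they reach the tool.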

### 2.3 Plan-and-Execute Pattern

```python
class PlanAndExecuteAgent:
    """
    1. Create a plan (list of steps)
    2. Execute each step
    3. Replan if needed
    """

    def run(self, task: str) -> str:
        # Planning phase
        plan = self.planner.create_plan(task)
        # Returns: ["Step 1: ...", "Step 2: ...", ...]

        results = []
        # Use an explicit index so replanning can actually change the
        # remaining steps (mutating the list inside `for step in plan`
        # would not affect the iteration)
        i = 0
        while i < len(plan):
            # Execute each step
            result = self.executor.execute(plan[i], context=results)
            results.append(result)
            i += 1

            # Check if replan needed
            if self._needs_replan(task, results):
                remaining = self.planner.replan(
                    task,
                    completed=results,
                    remaining=plan[i:]
                )
                plan = plan[:i] + remaining

        # Synthesize final answer
        return self.synthesizer.summarize(task, results)
```
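The plan → execute → synthesize flow can be sketched with trivial stand-in planner and executor functions (no replanning); all names below are illustrative.

```python
def create_plan(task: str) -> list[str]:
    # Stand-in planner: a fixed two-step plan
    return [f"research {task}", f"summarize {task}"]

def execute(step: str, context: list[str]) -> str:
    # Stand-in executor: records what it was given
    return f"done: {step} (given {len(context)} prior results)"

def plan_and_execute(task: str) -> list[str]:
    plan = create_plan(task)
    results = []
    for step in plan:
        # Each step sees all earlier results as context
        results.append(execute(step, context=results))
    return results

results = plan_and_execute("llm agents")
```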

### 2.4 Multi-Agent Collaboration

```python
class AgentTeam:
    """
    Specialized agents collaborating on complex tasks
    """

    def __init__(self):
        self.agents = {
            "researcher": ResearchAgent(),
            "analyst": AnalystAgent(),
            "writer": WriterAgent(),
            "critic": CriticAgent()
        }
        self.coordinator = CoordinatorAgent()

    def solve(self, task: str) -> str:
        # Coordinator assigns subtasks
        assignments = self.coordinator.decompose(task)

        results = {}
        for assignment in assignments:
            agent = self.agents[assignment.agent]
            result = agent.execute(
                assignment.subtask,
                context=results
            )
            results[assignment.id] = result

        # Critic reviews
        critique = self.agents["critic"].review(results)

        if critique.needs_revision:
            # Iterate with feedback
            return self.solve_with_feedback(task, results, critique)

        return self.coordinator.synthesize(results)
```
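The coordinator → specialists → critic loop can be sketched with plain functions standing in for agents; every name below is a made-up illustration, not a real framework API.

```python
def researcher(subtask, context):
    return f"notes on {subtask}"

def writer(subtask, context):
    return f"draft using {len(context)} inputs"

AGENTS = {"researcher": researcher, "writer": writer}

def critic(results):
    # Stand-in critic: demand revision if the draft used no inputs
    return any("0 inputs" in v for v in results.values())

def solve(task):
    # Stand-in coordinator: a fixed assignment list
    assignments = [("researcher", f"background for {task}"), ("writer", task)]
    results = {}
    for agent_name, subtask in assignments:
        # Each specialist sees earlier specialists' output as context
        results[subtask] = AGENTS[agent_name](subtask, results)
    needs_revision = critic(results)
    return results, needs_revision

results, needs_revision = solve("agent report")
```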

---

## 3. Prompt IDE Patterns

### 3.1 Prompt Templates with Variables

```python
class PromptTemplate:
    def __init__(self, template: str, variables: list[str]):
        self.template = template
        self.variables = variables

    def format(self, **kwargs) -> str:
        # Validate all variables provided
        missing = set(self.variables) - set(kwargs.keys())
        if missing:
            raise ValueError(f"Missing variables: {missing}")

        return self.template.format(**kwargs)

    def with_examples(self, examples: list[dict]) -> str:
        """Add few-shot examples"""
        example_text = "\n\n".join([
            f"Input: {ex['input']}\nOutput: {ex['output']}"
            for ex in examples
        ])
        return f"{example_text}\n\n{self.template}"

# Usage
summarizer = PromptTemplate(
    template="Summarize the following text in {style} style:\n\n{text}",
    variables=["style", "text"]
)

prompt = summarizer.format(
    style="professional",
    text="Long article content..."
)
```
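A condensed copy of the template class, exercising both the happy path and the missing-variable check:

```python
class PromptTemplate:
    def __init__(self, template, variables):
        self.template = template
        self.variables = variables

    def format(self, **kwargs):
        # Fail loudly when a declared variable is not supplied
        missing = set(self.variables) - set(kwargs)
        if missing:
            raise ValueError(f"Missing variables: {missing}")
        return self.template.format(**kwargs)

t = PromptTemplate("Summarize in {style} style:\n\n{text}", ["style", "text"])
ok = t.format(style="brief", text="hello")
try:
    t.format(style="brief")
    error = None
except ValueError as e:
    error = str(e)
```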

### 3.2 Prompt Versioning & A/B Testing

```python
import hashlib
from datetime import datetime

class PromptRegistry:
    def __init__(self, db):
        self.db = db

    def register(self, name: str, template: str, version: str):
        """Store prompt with version"""
        self.db.save({
            "name": name,
            "template": template,
            "version": version,
            "created_at": datetime.now(),
            "metrics": {}
        })

    def get(self, name: str, version: str = "latest") -> str:
        """Retrieve specific version"""
        return self.db.get(name, version)

    def ab_test(self, name: str, user_id: str) -> str:
        """Return variant based on user bucket"""
        variants = self.db.get_all_versions(name)
        # Use a stable digest: Python's built-in hash() of a str is
        # salted per process, so buckets would change across restarts
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        bucket = int(digest, 16) % len(variants)
        return variants[bucket]

    def record_outcome(self, prompt_id: str, outcome: dict):
        """Track prompt performance"""
        self.db.update_metrics(prompt_id, outcome)
```
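Stable bucketing is what makes an A/B test valid: the same user must land in the same variant on every request. A minimal check, using a digest rather than the per-process-salted built-in `hash`:

```python
import hashlib

def ab_bucket(user_id: str, n_variants: int) -> int:
    # sha256 gives the same digest in every process, on every run
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % n_variants

b1 = ab_bucket("user-42", 3)
b2 = ab_bucket("user-42", 3)
```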

### 3.3 Prompt Chaining

```python
class PromptChain:
    """
    Chain prompts together, passing output as input to next
    """

    def __init__(self, steps: list[dict]):
        self.steps = steps

    def run(self, initial_input: str) -> dict:
        context = {"input": initial_input}
        results = []

        for step in self.steps:
            prompt = step["prompt"].format(**context)
            output = llm.generate(prompt)

            # Parse output if needed
            if step.get("parser"):
                output = step["parser"](output)

            context[step["output_key"]] = output
            results.append({
                "step": step["name"],
                "output": output
            })

        return {
            "final_output": context[self.steps[-1]["output_key"]],
            "intermediate_results": results
        }

# Example: Research → Analyze → Summarize
chain = PromptChain([
    {
        "name": "research",
        "prompt": "Research the topic: {input}",
        "output_key": "research"
    },
    {
        "name": "analyze",
        "prompt": "Analyze these findings:\n{research}",
        "output_key": "analysis"
    },
    {
        "name": "summarize",
        "prompt": "Summarize this analysis in 3 bullet points:\n{analysis}",
        "output_key": "summary"
    }
])
```
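The chaining idea runs end to end with a stub model; `fake_llm` below is an illustrative stand-in for a real LLM call.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in model: echo the first line of the prompt
    return f"[out of: {prompt.splitlines()[0]}]"

def run_chain(steps, initial_input):
    context = {"input": initial_input}
    for step in steps:
        # Each step's output becomes a named variable for later steps
        context[step["output_key"]] = fake_llm(step["prompt"].format(**context))
    return context[steps[-1]["output_key"]]

steps = [
    {"prompt": "Research the topic: {input}", "output_key": "research"},
    {"prompt": "Analyze:\n{research}", "output_key": "analysis"},
]
final = run_chain(steps, "llm agents")
```

Note that `str.format(**context)` ignores context keys a template does not use, so later steps can reference any earlier output.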

---

## 4. LLMOps & Observability

### 4.1 Metrics to Track

```python
LLM_METRICS = {
    # Performance
    "latency_p50": "50th percentile response time",
    "latency_p99": "99th percentile response time",
    "tokens_per_second": "Generation speed",

    # Quality
    "user_satisfaction": "Thumbs up/down ratio",
    "task_completion": "% tasks completed successfully",
    "hallucination_rate": "% responses with factual errors",

    # Cost
    "cost_per_request": "Average $ per API call",
    "tokens_per_request": "Average tokens used",
    "cache_hit_rate": "% requests served from cache",

    # Reliability
    "error_rate": "% failed requests",
    "timeout_rate": "% requests that timed out",
    "retry_rate": "% requests needing retry"
}
```

### 4.2 Logging & Tracing

```python
import json
import logging
from datetime import datetime

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

class LLMLogger:
    def log_request(self, request_id: str, data: dict):
        """Log LLM request for debugging and analysis"""
        log_entry = {
            "request_id": request_id,
            "timestamp": datetime.now().isoformat(),
            "model": data["model"],
            "prompt": data["prompt"][:500],  # Truncate for storage
            "prompt_tokens": data["prompt_tokens"],
            "temperature": data.get("temperature", 1.0),
            "user_id": data.get("user_id"),
        }
        logging.info(f"LLM_REQUEST: {json.dumps(log_entry)}")

    def log_response(self, request_id: str, data: dict):
        """Log LLM response"""
        log_entry = {
            "request_id": request_id,
            "completion_tokens": data["completion_tokens"],
            "total_tokens": data["total_tokens"],
            "latency_ms": data["latency_ms"],
            "finish_reason": data["finish_reason"],
            "cost_usd": self._calculate_cost(data),
        }
        logging.info(f"LLM_RESPONSE: {json.dumps(log_entry)}")

# Distributed tracing
@tracer.start_as_current_span("llm_call")
def call_llm(prompt: str) -> str:
    span = trace.get_current_span()
    span.set_attribute("prompt.length", len(prompt))

    response = llm.generate(prompt)

    span.set_attribute("response.length", len(response.content))
    span.set_attribute("tokens.total", response.usage.total_tokens)

    return response.content
```
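The `_calculate_cost` helper is referenced but not shown; a minimal sketch under assumed per-million-token rates (the `example-model` prices are made up, real pricing varies by provider and model):

```python
# Hypothetical price table: USD per million input/output tokens
PRICES_PER_MTOK = {"example-model": {"input": 3.00, "output": 15.00}}

def calculate_cost(data: dict) -> float:
    rates = PRICES_PER_MTOK[data["model"]]
    return round(
        data["prompt_tokens"] / 1e6 * rates["input"]
        + data["completion_tokens"] / 1e6 * rates["output"],
        6,
    )

cost = calculate_cost({
    "model": "example-model",
    "prompt_tokens": 1000,
    "completion_tokens": 200,
})
```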

### 4.3 Evaluation Framework

```python
class LLMEvaluator:
    """
    Evaluate LLM outputs for quality
    """

    def evaluate_response(self,
                          question: str,
                          response: str,
                          ground_truth: str = None) -> dict:
        scores = {}

        # Relevance: Does it answer the question?
        scores["relevance"] = self._score_relevance(question, response)

        # Coherence: Is it well-structured?
        scores["coherence"] = self._score_coherence(response)

        # Groundedness: Is it based on provided context?
        scores["groundedness"] = self._score_groundedness(response)

        # Accuracy: Does it match ground truth?
        if ground_truth:
            scores["accuracy"] = self._score_accuracy(response, ground_truth)

        # Harmfulness: Is it safe?
        scores["safety"] = self._score_safety(response)

        return scores

    def run_benchmark(self, test_cases: list[dict]) -> dict:
        """Run evaluation on test set"""
        results = []
        for case in test_cases:
            response = llm.generate(case["prompt"])
            scores = self.evaluate_response(
                question=case["prompt"],
                response=response,
                ground_truth=case.get("expected")
            )
            results.append(scores)

        return self._aggregate_scores(results)
```
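The `_score_*` helpers are left abstract; in practice they are often LLM-as-judge calls. A cheap lexical baseline for relevance (token overlap between question and response) might look like this, as an illustrative stand-in only:

```python
def score_relevance(question: str, response: str) -> float:
    # Fraction of question tokens that appear in the response
    q = set(question.lower().split())
    r = set(response.lower().split())
    if not q:
        return 0.0
    return len(q & r) / len(q)

high = score_relevance(
    "what is retrieval augmented generation",
    "retrieval augmented generation combines search with an llm",
)
low = score_relevance("what is retrieval augmented generation", "bananas are yellow")
```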

---

## 5. Production Patterns

### 5.1 Caching Strategy

```python
import hashlib
import json

class LLMCache:
    def __init__(self, redis_client, ttl_seconds=3600):
        self.redis = redis_client
        self.ttl = ttl_seconds

    def _cache_key(self, prompt: str, model: str, **kwargs) -> str:
        """Generate deterministic cache key"""
        content = f"{model}:{prompt}:{json.dumps(kwargs, sort_keys=True)}"
        return hashlib.sha256(content.encode()).hexdigest()

    def get_or_generate(self, prompt: str, model: str, **kwargs) -> str:
        key = self._cache_key(prompt, model, **kwargs)

        # Check cache
        cached = self.redis.get(key)
        if cached:
            return cached.decode()

        # Generate
        response = llm.generate(prompt, model=model, **kwargs)

        # Cache (only cache deterministic outputs)
        if kwargs.get("temperature", 1.0) == 0:
            self.redis.setex(key, self.ttl, response)

        return response
```
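The cache key must be stable regardless of keyword order, which is exactly what `json.dumps(..., sort_keys=True)` guarantees:

```python
import hashlib
import json

def cache_key(prompt: str, model: str, **kwargs) -> str:
    # sort_keys=True makes kwargs order irrelevant to the digest
    content = f"{model}:{prompt}:{json.dumps(kwargs, sort_keys=True)}"
    return hashlib.sha256(content.encode()).hexdigest()

k1 = cache_key("hi", "m", temperature=0, max_tokens=64)
k2 = cache_key("hi", "m", max_tokens=64, temperature=0)
k3 = cache_key("hi", "m", max_tokens=65, temperature=0)
```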

### 5.2 Rate Limiting & Retry

```python
import time

from tenacity import retry, retry_if_exception, stop_after_attempt, wait_exponential

class RateLimiter:
    def __init__(self, requests_per_minute: int):
        self.rpm = requests_per_minute
        self.timestamps = []

    def acquire(self):
        """Wait if rate limit would be exceeded"""
        now = time.time()

        # Remove old timestamps
        self.timestamps = [t for t in self.timestamps if now - t < 60]

        if len(self.timestamps) >= self.rpm:
            sleep_time = 60 - (now - self.timestamps[0])
            time.sleep(sleep_time)

        self.timestamps.append(time.time())

def _is_retryable(exc: BaseException) -> bool:
    # Retry rate limits and server errors; fail fast on client errors.
    # Note: a bare `raise` inside the function body would NOT opt out of
    # retries; tenacity retries all exceptions unless told otherwise.
    if isinstance(exc, RateLimitError):
        return True
    return isinstance(exc, APIError) and exc.status_code >= 500

# Retry with exponential backoff
@retry(
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5),
    retry=retry_if_exception(_is_retryable)
)
def call_llm_with_retry(prompt: str) -> str:
    return llm.generate(prompt)
```
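A quick behavioral check of the sliding-window limiter: while under the limit, `acquire()` records timestamps without sleeping.

```python
import time

class RateLimiter:
    def __init__(self, requests_per_minute):
        self.rpm = requests_per_minute
        self.timestamps = []

    def acquire(self):
        now = time.time()
        # Keep only timestamps from the last 60 seconds
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.timestamps) >= self.rpm:
            time.sleep(60 - (now - self.timestamps[0]))
        self.timestamps.append(time.time())

limiter = RateLimiter(requests_per_minute=1000)
start = time.time()
for _ in range(5):
    limiter.acquire()
elapsed = time.time() - start
```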

### 5.3 Fallback Strategy

```python
class LLMWithFallback:
    def __init__(self, primary: str, fallbacks: list[str]):
        self.primary = primary
        self.fallbacks = fallbacks

    def generate(self, prompt: str, **kwargs) -> str:
        models = [self.primary] + self.fallbacks

        for model in models:
            try:
                return llm.generate(prompt, model=model, **kwargs)
            except (RateLimitError, APIError) as e:
                logging.warning(f"Model {model} failed: {e}")
                continue

        raise AllModelsFailedError("All models exhausted")

# Usage
llm_client = LLMWithFallback(
    primary="gpt-4-turbo",
    fallbacks=["gpt-3.5-turbo", "claude-3-sonnet"]
)
```
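A self-contained sketch of the fallback chain, with stub providers: the first "model" always fails, the second answers. `ProviderError` and the model names are illustrative stand-ins.

```python
class ProviderError(Exception):
    pass

CALLS = []

def fake_generate(prompt, model):
    CALLS.append(model)
    if model == "flaky-model":
        raise ProviderError("rate limited")
    return f"{model}: ok"

def generate_with_fallback(prompt, models):
    for model in models:
        try:
            return fake_generate(prompt, model)
        except ProviderError:
            # Log-and-continue in production; here we just fall through
            continue
    raise RuntimeError("All models exhausted")

answer = generate_with_fallback("hi", ["flaky-model", "backup-model"])
```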

---

## Architecture Decision Matrix

| Pattern | Use When | Complexity | Cost |
| :------------------- | :--------------- | :--------- | :-------- |
| **Simple RAG** | FAQ, docs search | Low | Low |
| **Hybrid RAG** | Mixed queries | Medium | Medium |
| **ReAct Agent** | Multi-step tasks | Medium | Medium |
| **Function Calling** | Structured tools | Low | Low |
| **Plan-Execute** | Complex tasks | High | High |
| **Multi-Agent** | Research tasks | Very High | Very High |

---

## Resources

- [Dify Platform](https://github.com/langgenius/dify)
- [LangChain Docs](https://python.langchain.com/)
- [LlamaIndex](https://www.llamaindex.ai/)
- [Anthropic Cookbook](https://github.com/anthropics/anthropic-cookbook)
Submodule skills/loki-mode deleted from be9270dfb5

57 skills/loki-mode/.github/workflows/claude-code-review.yml (vendored, new file)
@@ -0,0 +1,57 @@

```yaml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]
    # Optional: Only run on specific file changes
    # paths:
    #   - "src/**/*.ts"
    #   - "src/**/*.tsx"
    #   - "src/**/*.js"
    #   - "src/**/*.jsx"

jobs:
  claude-review:
    # Optional: Filter by PR author
    # if: |
    #   github.event.pull_request.user.login == 'external-contributor' ||
    #   github.event.pull_request.user.login == 'new-developer' ||
    #   github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'

    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          prompt: |
            REPO: ${{ github.repository }}
            PR NUMBER: ${{ github.event.pull_request.number }}

            Please review this pull request and provide feedback on:
            - Code quality and best practices
            - Potential bugs or issues
            - Performance considerations
            - Security concerns
            - Test coverage

            Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.

            Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.

          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://code.claude.com/docs/en/cli-reference for available options
          claude_args: '--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"'
```
50 skills/loki-mode/.github/workflows/claude.yml (vendored, new file)
@@ -0,0 +1,50 @@

```yaml
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
      issues: read
      id-token: write
      actions: read # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}

          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read

          # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
          # prompt: 'Update the pull request description to include a summary of changes.'

          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://code.claude.com/docs/en/cli-reference for available options
          # claude_args: '--allowed-tools Bash(gh pr:*)'
```
128 skills/loki-mode/.github/workflows/release.yml (vendored, new file)
@@ -0,0 +1,128 @@

```yaml
name: Release

on:
  push:
    paths:
      - 'VERSION'
    branches:
      - main

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Read version
        id: version
        run: |
          VERSION=$(cat VERSION | tr -d '\n')
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          echo "tag=v$VERSION" >> $GITHUB_OUTPUT

      - name: Check if tag exists
        id: check_tag
        run: |
          if git rev-parse "v${{ steps.version.outputs.version }}" >/dev/null 2>&1; then
            echo "exists=true" >> $GITHUB_OUTPUT
          else
            echo "exists=false" >> $GITHUB_OUTPUT
          fi

      - name: Create release artifacts
        if: steps.check_tag.outputs.exists == 'false'
        run: |
          mkdir -p release

          # ============================================
          # Artifact 1: loki-mode.zip (for Claude.ai website)
          # SKILL.md at ROOT level for direct upload
          # ============================================
          mkdir -p release/skill-root
          cp SKILL.md release/skill-root/
          cp -r references release/skill-root/

          cd release/skill-root
          zip -r ../loki-mode-${{ steps.version.outputs.version }}.zip .
          cd ../..

          # Also create .skill file (same as zip, different extension)
          cp release/loki-mode-${{ steps.version.outputs.version }}.zip release/loki-mode-${{ steps.version.outputs.version }}.skill

          # ============================================
          # Artifact 2: loki-mode-api.zip (for console.anthropic.com)
          # SKILL.md inside loki-mode/ folder (API requires folder wrapper)
          # ============================================
          mkdir -p release/api-package/loki-mode
          cp SKILL.md release/api-package/loki-mode/
          cp -r references release/api-package/loki-mode/

          cd release/api-package
          zip -r ../loki-mode-api-${{ steps.version.outputs.version }}.zip loki-mode
          cd ../..

          # ============================================
          # Artifact 3: loki-mode-claude-code.zip
          # For Claude Code: full package with loki-mode/ folder
          # Extract to ~/.claude/skills/
          # ============================================
          mkdir -p release/loki-mode
          cp SKILL.md release/loki-mode/
          cp README.md release/loki-mode/
          cp LICENSE release/loki-mode/ 2>/dev/null || true
          cp VERSION release/loki-mode/
          cp CHANGELOG.md release/loki-mode/
          cp -r references release/loki-mode/
          cp -r examples release/loki-mode/
          cp -r tests release/loki-mode/
          cp -r scripts release/loki-mode/
          cp -r autonomy release/loki-mode/

          cd release
          zip -r loki-mode-claude-code-${{ steps.version.outputs.version }}.zip loki-mode
          tar -czvf loki-mode-claude-code-${{ steps.version.outputs.version }}.tar.gz loki-mode
          cd ..

      - name: Create Git Tag
        if: steps.check_tag.outputs.exists == 'false'
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git tag -a "v${{ steps.version.outputs.version }}" -m "Release v${{ steps.version.outputs.version }}"
          git push origin "v${{ steps.version.outputs.version }}"

      - name: Extract changelog for this version
        if: steps.check_tag.outputs.exists == 'false'
        id: changelog
        run: |
          VERSION="${{ steps.version.outputs.version }}"
          CHANGELOG=$(awk "/^## \[$VERSION\]/{flag=1; next} /^## \[/{flag=0} flag" CHANGELOG.md)
          if [ -z "$CHANGELOG" ]; then
            CHANGELOG="Release v$VERSION"
          fi
          echo "$CHANGELOG" > changelog_body.txt

      - name: Create GitHub Release
        if: steps.check_tag.outputs.exists == 'false'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh release create "v${{ steps.version.outputs.version }}" \
            release/loki-mode-${{ steps.version.outputs.version }}.zip \
            release/loki-mode-${{ steps.version.outputs.version }}.skill \
            release/loki-mode-api-${{ steps.version.outputs.version }}.zip \
            release/loki-mode-claude-code-${{ steps.version.outputs.version }}.zip \
            release/loki-mode-claude-code-${{ steps.version.outputs.version }}.tar.gz \
            --title "Loki Mode v${{ steps.version.outputs.version }}" \
            --notes-file changelog_body.txt

      - name: Skip message
        if: steps.check_tag.outputs.exists == 'true'
        run: |
          echo "Tag v${{ steps.version.outputs.version }} already exists. Skipping release."
```
1 skills/loki-mode/.gitignore (vendored, new file)
@@ -0,0 +1 @@

```
.DS_Store
```
184
skills/loki-mode/ACKNOWLEDGEMENTS.md
Normal file
184
skills/loki-mode/ACKNOWLEDGEMENTS.md
Normal file
@@ -0,0 +1,184 @@
|
||||
# Acknowledgements
|
||||
|
||||
Loki Mode stands on the shoulders of giants. This project incorporates research, patterns, and insights from the leading AI labs, academic institutions, and practitioners in the field.
|
||||
|
||||
---
|
||||
|
||||
## Research Labs
|
||||
|
||||
### Anthropic
|
||||
|
||||
Loki Mode is built for Claude and incorporates Anthropic's cutting-edge research on AI safety and agent development.
|
||||
|
||||
| Paper/Resource | Contribution to Loki Mode |
|
||||
|----------------|---------------------------|
|
||||
| [Constitutional AI: Harmlessness from AI Feedback](https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback) | Self-critique against principles, revision workflow |
|
||||
| [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) | Evaluator-optimizer pattern, parallelization, routing |
|
||||
| [Claude Code Best Practices](https://www.anthropic.com/engineering/claude-code-best-practices) | Explore-Plan-Code workflow, context management |
|
||||
| [Simple Probes Can Catch Sleeper Agents](https://www.anthropic.com/research/probes-catch-sleeper-agents) | Defection probes, anomaly detection patterns |
|
||||
| [Alignment Faking in Large Language Models](https://www.anthropic.com/research/alignment-faking) | Monitoring for strategic compliance |
|
||||
| [Visible Extended Thinking](https://www.anthropic.com/research/visible-extended-thinking) | Thinking levels (think, think hard, ultrathink) |
|
||||
| [Computer Use Safety](https://www.anthropic.com/news/3-5-models-and-computer-use) | Safe autonomous operation patterns |
|
||||
| [Sabotage Evaluations](https://www.anthropic.com/research/sabotage-evaluations-for-frontier-models) | Safety evaluation methodology |
|
||||
| [Effective Harnesses for Long-Running Agents](https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents) | One-feature-at-a-time pattern, Playwright MCP for E2E |
|
||||
| [Claude Agent SDK Overview](https://platform.claude.com/docs/en/agent-sdk/overview) | Task tool, subagents, resume parameter, hooks |
|
||||
|
||||
### Google DeepMind
|
||||
|
||||
DeepMind's research on world models, hierarchical reasoning, and scalable oversight informs Loki Mode's architecture.
|
||||
|
||||
| Paper/Resource | Contribution to Loki Mode |
|
||||
|----------------|---------------------------|
|
||||
| [SIMA 2: Generalist AI Agent](https://deepmind.google/blog/sima-2-an-agent-that-plays-reasons-and-learns-with-you-in-virtual-3d-worlds/) | Self-improvement loop, reward model training |
|
||||
| [Gemini Robotics 1.5](https://deepmind.google/blog/gemini-robotics-15-brings-ai-agents-into-the-physical-world/) | Hierarchical reasoning (planner + executor) |
|
||||
| [Dreamer 4: World Model Training](https://danijar.com/project/dreamer4/) | Simulation-first testing, safe exploration |
|
||||
| [Genie 3: World Models](https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/) | World model architecture patterns |
|
||||
| [Scalable AI Safety via Doubly-Efficient Debate](https://deepmind.google/research/publications/34920/) | Debate-based verification for critical changes |
|
||||
| [Human-AI Complementarity for Amplified Oversight](https://deepmindsafetyresearch.medium.com/human-ai-complementarity-a-goal-for-amplified-oversight-0ad8a44cae0a) | AI-assisted human supervision |
|
||||
| [Technical AGI Safety Approach](https://arxiv.org/html/2504.01849v1) | Safety-first agent design |
|
||||
|
||||
### OpenAI
|
||||
|
||||
OpenAI's Agents SDK and deep research patterns provide foundational patterns for agent orchestration.
|
||||
|
||||
| Paper/Resource | Contribution to Loki Mode |
|
||||
|----------------|---------------------------|
|
||||
| [Agents SDK Documentation](https://openai.github.io/openai-agents-python/) | Tracing spans, guardrails, tripwires |
|
||||
| [A Practical Guide to Building Agents](https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf) | Agent architecture best practices |
|
||||
| [Building Agents Track](https://developers.openai.com/tracks/building-agents/) | Development patterns, handoff callbacks |
|
||||
| [AGENTS.md Specification](https://agents.md/) | Standardized agent instructions |
|
||||
| [Introducing Deep Research](https://openai.com/index/introducing-deep-research/) | Adaptive planning, backtracking |
|
||||
| [Deep Research System Card](https://cdn.openai.com/deep-research-system-card.pdf) | Safety considerations for research agents |
|
||||
| [Introducing o3 and o4-mini](https://openai.com/index/introducing-o3-and-o4-mini/) | Reasoning model guidance |
|
||||
| [Reasoning Best Practices](https://platform.openai.com/docs/guides/reasoning-best-practices) | Extended thinking patterns |
|
||||
| [Chain of Thought Monitoring](https://openai.com/index/chain-of-thought-monitoring/) | Reasoning trace monitoring |
|
||||
| [Agent Builder Safety](https://platform.openai.com/docs/guides/agent-builder-safety) | Safety patterns for agent builders |
|
||||
| [Computer-Using Agent](https://openai.com/index/computer-using-agent/) | Computer use patterns |
|
||||
| [Agentic AI Foundation](https://openai.com/index/agentic-ai-foundation/) | Industry standards, interoperability |
|
||||
|
||||
### Amazon Web Services (AWS)
|
||||
|
||||
AWS Bedrock's multi-agent collaboration patterns inform Loki Mode's routing and dispatch strategies.
|
||||
|
||||
| Paper/Resource | Contribution to Loki Mode |
|
||||
|----------------|---------------------------|
|
||||
| [Multi-Agent Orchestration Guidance](https://aws.amazon.com/solutions/guidance/multi-agent-orchestration-on-aws/) | Three coordination mechanisms, architectural patterns |
|
||||
| [Bedrock Multi-Agent Collaboration](https://docs.aws.amazon.com/bedrock/latest/userguide/agents-multi-agent-collaboration.html) | Supervisor mode, routing mode, 10-agent limit |
|
||||
| [Multi-Agent Collaboration Announcement](https://aws.amazon.com/blogs/aws/introducing-multi-agent-collaboration-capability-for-amazon-bedrock/) | Intent classification, selective context sharing |
|
||||
| [AgentCore for SRE](https://aws.amazon.com/blogs/machine-learning/build-multi-agent-site-reliability-engineering-assistants-with-amazon-bedrock-agentcore/) | Gateway, Memory, Identity, Observability components |
|
||||
|
||||
**Key Pattern Adopted:** Routing Mode Optimization - Direct dispatch for simple tasks (lower latency), supervisor orchestration for complex tasks (full coordination).
|
||||
|
||||
---

## Academic Research

### Multi-Agent Systems

| Paper | Authors/Source | Contribution |
|-------|----------------|--------------|
| [Multi-Agent Collaboration Mechanisms Survey](https://arxiv.org/abs/2501.06322) | arXiv 2501.06322 | Collaboration structures, coopetition |
| [CONSENSAGENT: Anti-Sycophancy Framework](https://aclanthology.org/2025.findings-acl.1141/) | ACL 2025 Findings | Blind review, devil's advocate |
| [GoalAct: Hierarchical Execution](https://arxiv.org/abs/2504.16563) | arXiv 2504.16563 | Global planning, skill decomposition |
| [A-Mem: Agentic Memory System](https://arxiv.org/html/2502.12110v11) | arXiv 2502.12110 | Zettelkasten-style memory linking |
| [Multi-Agent Reflexion (MAR)](https://arxiv.org/html/2512.20845) | arXiv 2512.20845 | Structured debate, persona-based critics |
| [Iter-VF: Iterative Verification-First](https://arxiv.org/html/2511.21734v1) | arXiv 2511.21734 | Answer-only verification, Markovian retry |

### Evaluation & Safety

| Paper | Authors/Source | Contribution |
|-------|----------------|--------------|
| [Assessment Framework for Agentic AI](https://arxiv.org/html/2512.12791v1) | arXiv 2512.12791 | Four-pillar evaluation framework |
| [Measurement Imbalance in Agentic AI](https://arxiv.org/abs/2506.02064) | arXiv 2506.02064 | Multi-dimensional evaluation axes |
| [Demo-to-Deployment Gap](https://www.marktechpost.com/2025/12/24/) | Stanford/Harvard | Tool reliability vs tool selection |

---

## Industry Resources

### Tools & Frameworks

| Resource | Contribution |
|----------|--------------|
| [NVIDIA ToolOrchestra](https://github.com/NVlabs/ToolOrchestra) | Efficiency metrics, three-reward signal framework, dynamic agent selection |
| [LerianStudio/ring](https://github.com/LerianStudio/ring) | Subagent-driven-development pattern |
| [Awesome Agentic Patterns](https://github.com/nibzard/awesome-agentic-patterns) | 105+ production patterns catalog |

### Best Practices Guides

| Resource | Contribution |
|----------|--------------|
| [Maxim AI: Production Multi-Agent Systems](https://www.getmaxim.ai/articles/best-practices-for-building-production-ready-multi-agent-systems/) | Correlation IDs, failure handling |
| [UiPath: Agent Builder Best Practices](https://www.uipath.com/blog/ai/agent-builder-best-practices) | Single-responsibility agents |
| [GitHub: Speed Without Control](https://github.blog/) | Static analysis + AI review, guardrails |

---

## Hacker News Community

Battle-tested insights from practitioners deploying agents in production.

### Discussions

| Thread | Key Insight |
|--------|-------------|
| [What Actually Works in Production for Autonomous Agents](https://news.ycombinator.com/item?id=44623207) | "Zero companies without human in the loop" |
| [Coding with LLMs in Summer 2025](https://news.ycombinator.com/item?id=44623953) | Context curation beats automatic RAG |
| [Superpowers: How I'm Using Coding Agents](https://news.ycombinator.com/item?id=45547344) | Sub-agents for context isolation (Simon Willison) |
| [Claude Code Experience After Two Weeks](https://news.ycombinator.com/item?id=44596472) | Fresh contexts yield better results |
| [AI Agent Benchmarks Are Broken](https://news.ycombinator.com/item?id=44531697) | LLM-as-judge has shared blind spots |
| [How to Orchestrate Multi-Agent Workflows](https://news.ycombinator.com/item?id=45955997) | Event-driven, decoupled coordination |
| [Context Engineering vs Prompt Engineering](https://news.ycombinator.com/item?id=44427757) | Manual context selection principles |

### Show HN Projects

| Project | Contribution |
|---------|--------------|
| [Self-Evolving Agents Repository](https://news.ycombinator.com/item?id=45099226) | Self-improvement patterns |
| [Package Manager for Agent Skills](https://news.ycombinator.com/item?id=46422264) | Skills architecture |
| [Wispbit - AI Code Review Agent](https://news.ycombinator.com/item?id=44722603) | Code review patterns |
| [Agtrace - Monitoring for AI Coding Agents](https://news.ycombinator.com/item?id=46425670) | Agent monitoring patterns |

---

## Individual Contributors

Special thanks to thought leaders whose patterns and insights shaped Loki Mode:

| Contributor | Contribution |
|-------------|--------------|
| **Boris Cherny** (Creator of Claude Code) | Self-verification loop (2-3x quality improvement), extended thinking mode, "Less prompting, more systems" philosophy |
| **Ivan Steshov** | Centralized constitution, agent lineage tracking, structured artifacts as contracts |
| **Addy Osmani** | Git checkpoint system, specification-first approach, visual aids (Mermaid diagrams) |
| **Simon Willison** | Sub-agents for context isolation, skills system, context curation patterns |

---

## Production Patterns Summary

Key patterns incorporated from practitioner experience:

| Pattern | Source | Implementation |
|---------|--------|----------------|
| Human-in-the-Loop (HITL) | HN Production Discussions | Confidence-based escalation thresholds |
| Narrow Scope (3-5 steps) | Multiple Practitioners | Task scope constraints |
| Deterministic Validation | Production Teams | Rule-based outer loops (not LLM-judged) |
| Context Curation | Simon Willison | Manual selection, focused context |
| Blind Review + Devil's Advocate | CONSENSAGENT | Anti-sycophancy protocol |
| Hierarchical Reasoning | DeepMind Gemini | Orchestrator + specialized executors |
| Constitutional Self-Critique | Anthropic | Principles-based revision |
| Debate Verification | DeepMind | Critical change verification |
| One Feature at a Time | Anthropic Harness | Single feature per iteration, full verification |
| E2E Browser Testing | Anthropic Harness | Playwright MCP for visual verification |

---

## License

This acknowledgements file documents the research and resources that influenced Loki Mode's design. All referenced works retain their original licenses and copyrights.

Loki Mode itself is released under the MIT License.

---

*Last updated: v2.35.0*