Compare commits: `v2.10.0...copilot/fi` (305 commits)
**`.github/ISSUE_49_COMMENT.md`** (vendored, new file, 15 lines)

@@ -0,0 +1,15 @@
Suggested comment for [Issue #49](https://github.com/sickn33/antigravity-awesome-skills/issues/49). Paste this on the issue:

---

The 404 happens because the package wasn't published to npm yet. We've addressed it in two ways:

1. **Publish to npm** – We're set up to publish, so `npx antigravity-awesome-skills` will work after the first release. You can also trigger a manual publish via the "Publish to npm" workflow (Actions tab) if you have `NPM_TOKEN` configured.

2. **Fallback** – Until then (or if you hit a 404 for any reason), use:

```bash
npx github:sickn33/antigravity-awesome-skills
```

The README, GETTING_STARTED, and FAQ now mention this fallback.

Thanks for reporting.
**`.github/MAINTENANCE.md`** (vendored, new file, 243 lines)

@@ -0,0 +1,243 @@
# 🛠️ Repository Maintenance Guide (V4)

> **"If it's not documented, it's broken."**

This guide details the exact procedures for maintaining `antigravity-awesome-skills`.
It covers the **Quality Bar**, **Documentation Consistency**, and **Release Workflows**.

---

## 0. 🤖 Agent Protocol (THE BIBLE)

**AGENTS MUST READ AND FOLLOW THIS SECTION BEFORE MARKING ANY TASK AS COMPLETE.**

There are four things that usually fail or get forgotten. **DO NOT FORGET THEM:**

### 1. 📤 ALWAYS PUSH (Non-Negotiable)

Committing is NOT enough. You must PUSH to the remote.

- **BAD**: `git commit -m "feat: new skill"` (the user sees nothing)
- **GOOD**: `git commit -m "..." && git push origin main`

### 2. 🔄 SYNC GENERATED FILES (Avoid CI Drift)

If you touch **any of these**:

- `skills/` (add/remove/modify skills)
- the **Full Skill Registry** section of `README.md`
- **counts/claims** about the number of skills (`560+ Agentic Skills...`, `(560/560)`, etc.)

…then you **MUST** run the Validation Chain **BEFORE** committing.

- Running `npm run chain` is **NOT optional**.
- Running `npm run catalog` is **NOT optional**.

If CI fails with:

> `❌ Detected uncommitted changes produced by registry/readme/catalog scripts.`

it means you **did not run or commit** the Validation Chain correctly.

### 3. 📝 EVIDENCE OF WORK

- You must create or update `walkthrough.md` or `CHANGELOG.md` to document what changed.
- If you made something new, **link it** in the artifacts.

### 4. 🚫 NO BRANCHES

- **ALWAYS use the `main` branch.**
- NEVER create feature branches (e.g., `feat/new-skill`).
- We commit directly to `main` to keep history linear and simple.

---

## 1. 🚦 Daily Maintenance Routine

### A. Validation Chain

Before ANY commit that adds or modifies skills, run the chain:

1. **Validate, index, and update the README**:

   ```bash
   npm run chain
   ```

   _Must return 0 errors for new skills._

2. **Build the catalog**:

   ```bash
   npm run catalog
   ```

3. **COMMIT GENERATED FILES**:

   ```bash
   git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md
   git commit -m "chore: sync generated files"
   ```

> 🔴 **CRITICAL**: If you skip this, CI will fail with "Detected uncommitted changes".
> See [docs/CI_DRIFT_FIX.md](../docs/CI_DRIFT_FIX.md) for details.
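The three steps above can be collapsed into one guarded helper. This is only a sketch: it assumes the npm scripts `chain` and `catalog` and the generated-file list work exactly as described here, and the `sync_generated` name is ours, not part of the repo.

```shell
# Sketch: run the full Validation Chain and commit only if something changed.
# Assumes `npm run chain` / `npm run catalog` exist as documented above.
sync_generated() {
  set -e
  npm run chain      # validate, index, update README (must report 0 errors)
  npm run catalog    # rebuild data/catalog.json and CATALOG.md
  git add README.md skills_index.json data/catalog.json \
          data/bundles.json data/aliases.json CATALOG.md
  # Commit only when the scripts actually produced changes.
  git diff --cached --quiet || git commit -m "chore: sync generated files"
}
```

Run `sync_generated` from the repo root before pushing; the final guard keeps it from creating empty commits.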
### B. Post-Merge Routine (Must Do)

After multiple PR merges or significant changes:

1. **Sync the Contributors List**:
   - Run: `git shortlog -sn --all`
   - Update `## Repo Contributors` in `README.md`.

2. **Verify the Table of Contents**:
   - Ensure all new headers have clean anchors.
   - **NO EMOJIS** in H2 headers.

3. **Draft a Release**:
   - Go to the [Releases Page](https://github.com/sickn33/antigravity-awesome-skills/releases).
   - Draft a new release for the merged changes.
   - Tag the version (e.g., `v4.1.0`).

---

## 2. 📝 Documentation "Pixel Perfect" Rules

We discovered several consistency issues during V4 development. Follow these rules STRICTLY.

### A. Table of Contents (TOC) Anchors

GitHub's anchor generation breaks if headers have emojis.

- **BAD**: `## 🚀 New Here?` -> Anchor: `#--new-here` (broken)
- **GOOD**: `## New Here?` -> Anchor: `#new-here` (clean)

**Rule**: **NEVER put emojis in H2 (`##`) headers.** Put them in the text below if needed.
### B. The "Trinity" of Docs

If you update installation instructions or tool compatibility, you MUST update all three files:

1. `README.md` (source of truth)
2. `docs/GETTING_STARTED.md` (beginner guide)
3. `docs/FAQ.md` (troubleshooting)

_Common pitfall: updating the clone URL in README but leaving an old one in FAQ._
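A quick way to catch Trinity drift is to ask which of the three files mention the string you just changed. A sketch, assuming the file paths listed above; the `check_trinity` name is ours:

```shell
# List which of the three "Trinity" docs contain a given string.
# If you just updated something, all three should show up.
check_trinity() {
  grep -l "$1" README.md docs/GETTING_STARTED.md docs/FAQ.md
}
# Example (run from repo root after changing the clone URL):
#   check_trinity "github.com/sickn33/antigravity-awesome-skills"
```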
### C. Statistics Consistency (CRITICAL)

If you add or remove skills, you **MUST** ensure the total count is identical in ALL locations.
**Do not allow drift** (e.g., 560 in the title, 558 in a header).

Locations to check:

1. **Title of `README.md`**: "560+ Agentic Skills..."
2. **`## Full Skill Registry (560/560)` header**.
3. **`docs/GETTING_STARTED.md` intro**.
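To make drift visible at a glance, grep every place a count appears in one shot. A sketch: the `count_check` name and the regex (which assumes counts are written like `560+ Agentic Skills` or `(560/560)`, as above) are ours:

```shell
# Print every line (with file and line number) that states a skill count,
# so mismatched numbers jump out immediately.
count_check() {
  grep -nE '[0-9]+\+? (Agentic )?[Ss]kills|\([0-9]+/[0-9]+\)' \
    README.md docs/GETTING_STARTED.md
}
```

Run it from the repo root after any registry change; every printed number should agree.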
### D. Credits Policy (Who goes where?)

- **Credits & Sources**: use this for **external repos**.
  - _Rule_: "I extracted skills from this link you sent me." -> Add to `## Credits & Sources`.
- **Repo Contributors**: use this for **pull requests**.
  - _Rule_: "This user sent a PR." -> Add to `## Repo Contributors`.

### E. Badges & Links

- **Antigravity Badge**: must point to `https://github.com/sickn33/antigravity-awesome-skills`, NOT `anthropics/antigravity`.
- **License**: ensure the link points to the `LICENSE` file.

---

## 3. 🛡️ Governance & Quality Bar

### A. The 5-Point Quality Check

Reject any PR that fails this:

1. **Metadata**: has `name` and `description`?
2. **Safety**: is `risk: offensive` used for red-team tools?
3. **Clarity**: does it say _when_ to use it?
4. **Examples**: copy-pasteable code blocks?
5. **Actions**: "Run this command" vs "Think about this".
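Check 1 can be triaged mechanically before the full validator runs. A minimal sketch: it assumes `SKILL.md` files use `---`-delimited YAML frontmatter with top-level `name:` and `description:` keys, the `has_metadata` name is ours, and the authoritative check remains `scripts/validate_skills.py`:

```shell
# Quick triage for check 1 (Metadata): does the frontmatter of a SKILL.md
# declare `name:` and `description:`? Extracts only the lines between the
# first pair of `---` fences, then greps for the two keys.
has_metadata() {
  awk '/^---$/{n++; next} n==1' "$1" | grep -q '^name:' &&
  awk '/^---$/{n++; next} n==1' "$1" | grep -q '^description:'
}
```

It only checks key presence, not value quality, so it complements rather than replaces the strict validator.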
### B. Risk Labels (V4)

- ⚪ **Safe**: default.
- 🔴 **Risk**: destructive/security tools. MUST carry an `[Authorized Use Only]` warning.
- 🟣 **Official**: vendor mirrors only.

---

## 4. 🚀 Release Workflow

When cutting a new version (e.g., V4):

1. **Run Full Validation**: `python3 scripts/validate_skills.py --strict`
2. **Update the Changelog**: add the new release section to `CHANGELOG.md`.
3. **Bump the Version**:
   - Update `package.json` → `"version": "X.Y.Z"` (source of truth for npm).
   - Update the version header in `README.md` if it displays the number.
   - One-liner: `npm version patch` (or `minor`/`major`) bumps `package.json` and creates a git tag; amend afterwards if you need to tag after the release.
4. **Tag the Release**:

   ```bash
   git tag -a v4.0.0 -m "V4 Enterprise Edition"
   git push origin v4.0.0
   ```

5. **Publish to npm** (so `npx antigravity-awesome-skills` works):
   - **Option A (manual)**: from the repo root, with npm logged in and 2FA/token set up:

     ```bash
     npm publish
     ```

     You cannot republish the same version; always bump `package.json` before publishing.
   - **Option B (CI)**: on GitHub, create a **Release** (tag e.g. `v4.6.1`). The [Publish to npm](.github/workflows/publish-npm.yml) workflow runs on **Release published** and runs `npm publish` if the repo secret `NPM_TOKEN` is set (npm → Access Tokens → Granular token with Publish, then add it as the repo secret `NPM_TOKEN`).

### 📋 Changelog Entry Template

Each new release section in `CHANGELOG.md` should follow [Keep a Changelog](https://keepachangelog.com/) and this structure:

```markdown
## [X.Y.Z] - YYYY-MM-DD - "[Theme Name]"

> **[One-line catchy summary of the release]**

[Brief 2-3 sentence intro about the release's impact]

## 🚀 New Skills

### [Emoji] [Skill Name](skills/skill-name/)

**[Bold high-level benefit]**
[Description of what it does]

- **Key Feature 1**: [Detail]
- **Key Feature 2**: [Detail]

> **Try it:** `(User Prompt) ...`

---

## 📦 Improvements

- **Registry Update**: Now tracking [N] skills.
- **[Component]**: [Change detail]

## 👥 Credits

A huge shoutout to our community contributors:

- **@username** for `skill-name`
- **@username** for `fix-name`

---

_Upgrade now: `git pull origin main` to fetch the latest skills._
```

---

## 5. 🚨 Emergency Fixes

If a skill is found to be harmful or broken:

1. **Move it to the broken folder** (so it is no longer detected): `mv skills/bad-skill skills/.broken/`
2. **Or add a warning**: add `> [!WARNING]` to the top of `SKILL.md`.
3. **Push immediately**.
**`.github/PULL_REQUEST_TEMPLATE.md`** (vendored, modified, 21 lines)

@@ -1,17 +1,22 @@
```diff
-## Description
+# Pull Request Description

-Please describe your changes. What skill are you adding or modifying?
+Please include a summary of the change and which skill is added or fixed.

-## Checklist
+## Quality Bar Checklist ✅

-- [ ] My skill follows the [creation guidelines](https://github.com/sickn33/antigravity-awesome-skills/tree/main/skills/skill-creator)
-- [ ] I have run `validate_skills.py`
-- [ ] I have added my name to the credits (if applicable)
+**All items must be checked before merging.**
+
+- [ ] **Standards**: I have read `docs/QUALITY_BAR.md` and `docs/SECURITY_GUARDRAILS.md`.
+- [ ] **Metadata**: The `SKILL.md` frontmatter is valid (checked with `scripts/validate_skills.py`).
+- [ ] **Risk Label**: I have assigned the correct `risk:` tag (`none`, `safe`, `critical`, `offensive`).
+- [ ] **Triggers**: The "When to use" section is clear and specific.
+- [ ] **Security**: If this is an _offensive_ skill, I included the "Authorized Use Only" disclaimer.
+- [ ] **Local Test**: I have verified the skill works locally.
+- [ ] **Credits**: I have added the source credit in `README.md` (if applicable).

 ## Type of Change

-- [ ] New Skill
 - [ ] Bug Fix
+- [ ] New Skill (Feature)
 - [ ] Documentation Update
+- [ ] Infrastructure
```
**`.github/workflows/ci.yml`** (vendored, new file, 91 lines)

@@ -0,0 +1,91 @@
@@ -0,0 +1,91 @@
|
||||
name: Skills Registry CI
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: ["main", "feat/*"]
|
||||
pull_request:
|
||||
branches: ["main"]
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
validate-and-build:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Set up Python
|
||||
uses: actions/setup-python@v5
|
||||
with:
|
||||
python-version: "3.10"
|
||||
|
||||
- name: Install dependencies
|
||||
run: |
|
||||
pip install pyyaml
|
||||
|
||||
- name: 🔍 Validate Skills (Soft Mode)
|
||||
run: |
|
||||
python3 scripts/validate_skills.py
|
||||
|
||||
- name: 🏗️ Generate Index
|
||||
run: |
|
||||
python3 scripts/generate_index.py
|
||||
|
||||
- name: 📝 Update README
|
||||
run: |
|
||||
python3 scripts/update_readme.py
|
||||
|
||||
- name: Set up Node
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: "lts/*"
|
||||
|
||||
- name: Install npm dependencies
|
||||
run: npm ci
|
||||
|
||||
- name: Audit npm dependencies
|
||||
run: npm audit --audit-level=high
|
||||
continue-on-error: true
|
||||
|
||||
- name: Run tests
|
||||
run: npm run test
|
||||
|
||||
- name: 📦 Build catalog
|
||||
run: npm run catalog
|
||||
|
||||
- name: Set up GitHub credentials (for auto-sync)
|
||||
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
|
||||
run: |
|
||||
git config --global user.name 'github-actions[bot]'
|
||||
git config --global user.email 'github-actions[bot]@users.noreply.github.com'
|
||||
git remote set-url origin https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}.git
|
||||
|
||||
- name: Auto-commit registry drift (main only)
|
||||
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
|
||||
run: |
|
||||
# If no changes, exit successfully
|
||||
git diff --quiet && exit 0
|
||||
|
||||
git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md || true
|
||||
|
||||
# If nothing to commit, exit successfully
|
||||
git diff --cached --quiet && exit 0
|
||||
|
||||
git commit -m "chore: sync generated registry files [ci skip]"
|
||||
git push origin HEAD
|
||||
|
||||
- name: 🚨 Check for Uncommitted Drift
|
||||
run: |
|
||||
if ! git diff --quiet; then
|
||||
echo "❌ Detected uncommitted changes produced by registry/readme/catalog scripts."
|
||||
echo
|
||||
echo "To fix locally, run the FULL Validation Chain, then commit and push:"
|
||||
echo " npm run chain"
|
||||
echo " npm run catalog"
|
||||
echo " git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md"
|
||||
echo " git commit -m \"chore: sync generated registry files\""
|
||||
echo " git push"
|
||||
exit 1
|
||||
fi
|
||||
**`.github/workflows/publish-npm.yml`** (vendored, new file, 28 lines)

@@ -0,0 +1,28 @@
@@ -0,0 +1,28 @@
|
||||
# Publish antigravity-awesome-skills to npm on release.
|
||||
# Requires NPM_TOKEN secret (npm → Access Tokens → Granular token with Publish).
|
||||
# Before creating a Release: bump package.json "version" (npm forbids republishing the same version).
|
||||
# Release tag (e.g. v4.6.1) should match package.json version.
|
||||
|
||||
name: Publish to npm
|
||||
|
||||
on:
|
||||
release:
|
||||
types: [published]
|
||||
workflow_dispatch:
|
||||
|
||||
jobs:
|
||||
publish:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Node
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: "20"
|
||||
registry-url: "https://registry.npmjs.org"
|
||||
|
||||
- name: Publish
|
||||
run: npm publish
|
||||
env:
|
||||
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
|
||||
**`.github/workflows/star-history.yml`** (vendored, new file, 44 lines)

@@ -0,0 +1,44 @@
@@ -0,0 +1,44 @@
|
||||
name: Update Star History Chart
|
||||
|
||||
on:
|
||||
workflow_dispatch:
|
||||
schedule:
|
||||
# Daily at 06:00 UTC
|
||||
- cron: "0 6 * * *"
|
||||
|
||||
permissions:
|
||||
contents: write
|
||||
|
||||
jobs:
|
||||
update-star-history:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
|
||||
- name: Install SVG renderer
|
||||
run: |
|
||||
set -euo pipefail
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y librsvg2-bin
|
||||
|
||||
- name: Fetch latest chart (SVG) and render PNG
|
||||
run: |
|
||||
set -euo pipefail
|
||||
mkdir -p assets
|
||||
curl -fsSL \
|
||||
"https://api.star-history.com/svg?repos=sickn33/antigravity-awesome-skills&type=date&legend=top-left" \
|
||||
-o /tmp/star-history.svg
|
||||
rsvg-convert /tmp/star-history.svg -o assets/star-history.png
|
||||
|
||||
- name: Commit and push if changed
|
||||
run: |
|
||||
set -euo pipefail
|
||||
git config user.name "github-actions[bot]"
|
||||
git config user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git add assets/star-history.png
|
||||
if git diff --cached --quiet; then
|
||||
echo "No changes in star-history.png"
|
||||
exit 0
|
||||
fi
|
||||
git commit -m "chore: update star history chart"
|
||||
git push
|
||||
**`.gitignore`** (vendored, modified, 27 lines)

@@ -1,6 +1,31 @@
```
node_modules/
__pycache__/
.worktrees/

walkthrough.md
.agent/rules/
.gemini/
LOCAL_CONFIG.md
data/node_modules

# Temporary analysis and report files
*_REPORT.md
*_ANALYSIS*.md
*_COUNT.md
*_SUMMARY.md
*_analysis.json
*_validation.json
*_results.json
voltagent_*.json
similar_skills_*.json
remaining_*.json
html_*.json

# Temporary analysis scripts
scripts/*voltagent*.py
scripts/*html*.py
scripts/*similar*.py
scripts/*count*.py

# Optional baseline for legacy JS validator (scripts/validate-skills.js)
validation-baseline.json
```
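You can confirm these rules match the paths you expect with `git check-ignore -v`, which prints the rule and `.gitignore` line responsible for ignoring each path. Demonstrated here in a throwaway repo (with a trimmed copy of the rules above) so it is safe to run anywhere:

```shell
# Sanity-check ignore rules: `git check-ignore -v` names the matching rule.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
printf 'node_modules/\nwalkthrough.md\n*_REPORT.md\n' > .gitignore
# Prints "<file>:<line>:<pattern>  <path>" for each ignored path;
# exits non-zero if a path is NOT ignored.
git check-ignore -v walkthrough.md FOO_REPORT.md
```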
**`CATALOG.md`** (new file, 677 lines)

@@ -0,0 +1,677 @@
# Skill Catalog

Generated at: 2026-02-03T09:20:12.539Z

Total skills: 626

## architecture (60)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `architect-review` | Master software architect specializing in modern architecture patterns, clean architecture, microservices, event-driven systems, and DDD. Reviews system desi... | | architect, review, software, specializing, architecture, clean, microservices, event, driven, ddd, reviews, designs |
| `architecture` | Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing ... | architecture | architecture, architectural, decision, making, framework, requirements, analysis, trade, off, evaluation, adr, documentation |
| `architecture-decision-records` | Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant techn... | architecture, decision, records | architecture, decision, records, write, maintain, adrs, following, technical, documentation, documenting, significant, decisions |
| `automate-whatsapp` | Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/in... | automate, whatsapp | automate, whatsapp, automations, kapso, configure, triggers, edit, graphs, executions, deploy, functions, databases |
| `avalonia-viewmodels-zafiro` | Optimal ViewModel and Wizard creation patterns for Avalonia using Zafiro and ReactiveUI. | avalonia, viewmodels, zafiro | avalonia, viewmodels, zafiro, optimal, viewmodel, wizard, creation, reactiveui |
| `bash-linux` | Bash/Linux terminal patterns. Critical commands, piping, error handling, scripting. Use when working on macOS or Linux systems. | bash, linux | bash, linux, terminal, critical, commands, piping, error, handling, scripting, working, macos |
| `binary-analysis-patterns` | Master binary analysis patterns including disassembly, decompilation, control flow analysis, and code pattern recognition. Use when analyzing executables, un... | binary | binary, analysis, including, disassembly, decompilation, control, flow, code, recognition, analyzing, executables, understanding |
| `brainstorming` | Use this skill before any creative or constructive work (features, components, architecture, behavior changes, or functionality). This skill transforms vague... | brainstorming | brainstorming, skill, before, any, creative, constructive, work, features, components, architecture, behavior, changes |
| `browser-extension-builder` | Expert in building browser extensions that solve real problems - Chrome, Firefox, and cross-browser extensions. Covers extension architecture, manifest v3, c... | browser, extension, builder | browser, extension, builder, building, extensions, solve, real, problems, chrome, firefox, cross, covers |
| `c4-architecture-c4-architecture` | Generate comprehensive C4 architecture documentation for an existing repository/codebase using a bottom-up analysis approach. | c4, architecture | c4, architecture, generate, documentation, existing, repository, codebase, bottom, up, analysis, approach |
| `c4-code` | Expert C4 Code-level documentation specialist. Analyzes code directories to create comprehensive C4 code-level documentation including function signatures, a... | c4, code | c4, code, level, documentation, analyzes, directories, including, function, signatures, arguments, dependencies, structure |
| `c4-component` | Expert C4 Component-level documentation specialist. Synthesizes C4 Code-level documentation into Component-level architecture, defining component boundaries,... | c4, component | c4, component, level, documentation, synthesizes, code, architecture, defining, boundaries, interfaces, relationships, creates |
| `c4-context` | Expert C4 Context-level documentation specialist. Creates high-level system context diagrams, documents personas, user journeys, system features, and externa... | c4 | c4, context, level, documentation, creates, high, diagrams, documents, personas, user, journeys, features |
| `code-refactoring-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | code, refactoring, refactor, clean | code, refactoring, refactor, clean, specializing, principles, solid, software, engineering, analyze, provided, improve |
| `codebase-cleanup-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | codebase, cleanup, refactor, clean | codebase, cleanup, refactor, clean, code, refactoring, specializing, principles, solid, software, engineering, analyze |
| `competitor-alternatives` | When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when the user mentions 'alternative page,' 'v... | competitor, alternatives | competitor, alternatives, user, wants, comparison, alternative, pages, seo, sales, enablement, mentions, page |
| `context-degradation` | Recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash | degradation | degradation, context, recognize, failure, lost, middle, poisoning, distraction, clash |
| `core-components` | Core component library and design system patterns. Use when building UI, using design tokens, or working with the component library. | core, components | core, components, component, library, building, ui, tokens, working |
| `cpp-pro` | Write idiomatic C++ code with modern features, RAII, smart pointers, and STL algorithms. Handles templates, move semantics, and performance optimization. Use... | cpp | cpp, pro, write, idiomatic, code, features, raii, smart, pointers, stl, algorithms, move |
| `cqrs-implementation` | Implement Command Query Responsibility Segregation for scalable architectures. Use when separating read and write models, optimizing query performance, or bu... | cqrs | cqrs, command, query, responsibility, segregation, scalable, architectures, separating, read, write, models, optimizing |
| `doc-coauthoring` | Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision do... | doc, coauthoring | doc, coauthoring, users, through, structured, co, authoring, documentation, user, wants, write, proposals |
| `docs-architect` | Creates comprehensive technical documentation from existing codebases. Analyzes architecture, design patterns, and implementation details to produce long-for... | docs | docs, architect, creates, technical, documentation, existing, codebases, analyzes, architecture, details, produce, long |
| `elixir-pro` | Write idiomatic Elixir code with OTP patterns, supervision trees, and Phoenix LiveView. Masters concurrency, fault tolerance, and distributed systems. Use PR... | elixir | elixir, pro, write, idiomatic, code, otp, supervision, trees, phoenix, liveview, masters, concurrency |
| `email-systems` | Email has the highest ROI of any marketing channel. $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, ... | email | email, highest, roi, any, marketing, channel, 36, every, spent, yet, most, startups |
| `error-detective` | Search logs and codebases for error patterns, stack traces, and anomalies. Correlates errors across systems and identifies root causes. Use PROACTIVELY when ... | error, detective | error, detective, search, logs, codebases, stack, traces, anomalies, correlates, errors, identifies, root |
| `error-handling-patterns` | Master error handling patterns across languages including exceptions, Result types, error propagation, and graceful degradation to build resilient applicatio... | error, handling | error, handling, languages, including, exceptions, result, types, propagation, graceful, degradation, resilient, applications |
| `event-sourcing-architect` | Expert in event sourcing, CQRS, and event-driven architecture patterns. Masters event store design, projection building, saga orchestration, and eventual con... | event, sourcing | event, sourcing, architect, cqrs, driven, architecture, masters, store, projection, building, saga, orchestration |
| `event-store-design` | Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implement... | event, store | event, store, stores, sourced, building, sourcing, infrastructure, choosing, technologies, implementing, persistence |
| `game-development/multiplayer` | Multiplayer game development principles. Architecture, networking, synchronization. | game, development/multiplayer | game, development/multiplayer, multiplayer, development, principles, architecture, networking, synchronization |
| `godot-gdscript-patterns` | Master Godot 4 GDScript patterns including signals, scenes, state machines, and optimization. Use when building Godot games, implementing game systems, or le... | godot, gdscript | godot, gdscript, including, signals, scenes, state, machines, optimization, building, games, implementing, game |
| `haskell-pro` | Expert Haskell engineer specializing in advanced type systems, pure functional design, and high-reliability software. Use PROACTIVELY for type-level programm... | haskell | haskell, pro, engineer, specializing, type, pure, functional, high, reliability, software, proactively, level |
| `i18n-localization` | Internationalization and localization patterns. Detecting hardcoded strings, managing translations, locale files, RTL support. | i18n, localization | i18n, localization, internationalization, detecting, hardcoded, strings, managing, translations, locale, files, rtl |
| `inngest` | Inngest expert for serverless-first background jobs, event-driven workflows, and durable execution without managing queues or workers. Use when: inngest, ser... | inngest | inngest, serverless, first, background, jobs, event, driven, durable, execution, without, managing, queues |
|
||||
| `julia-pro` | Master Julia 1.10+ with modern features, performance optimization, multiple dispatch, and production-ready practices. Expert in the Julia ecosystem including... | julia | julia, pro, 10, features, performance, optimization, multiple, dispatch, ecosystem, including, package, scientific |
|
||||
| `minecraft-bukkit-pro` | Master Minecraft server plugin development with Bukkit, Spigot, and Paper APIs. Specializes in event-driven architecture, command systems, world manipulation... | minecraft, bukkit | minecraft, bukkit, pro, server, plugin, development, spigot, paper, apis, specializes, event, driven |
|
||||
| `monorepo-architect` | Expert in monorepo architecture, build systems, and dependency management at scale. Masters Nx, Turborepo, Bazel, and Lerna for efficient multi-project devel... | monorepo | monorepo, architect, architecture, dependency, scale, masters, nx, turborepo, bazel, lerna, efficient, multi |
|
||||
| `multi-agent-patterns` | Master orchestrator, peer-to-peer, and hierarchical multi-agent architectures | multi, agent | multi, agent, orchestrator, peer, hierarchical, architectures |
|
||||
| `n8n-mcp-tools-expert` | Expert guide for using n8n-mcp MCP tools effectively. Use when searching for nodes, validating configurations, accessing templates, managing workflows, or us... | n8n, mcp | n8n, mcp, effectively, searching, nodes, validating, configurations, accessing, managing, any, provides, selection |
|
||||
| `nestjs-expert` | Nest.js framework expert specializing in module architecture, dependency injection, middleware, guards, interceptors, testing with Jest/Supertest, TypeORM/Mo... | nestjs | nestjs, nest, js, framework, specializing, module, architecture, dependency, injection, middleware, guards, interceptors |
|
||||
| `nx-workspace-patterns` | Configure and optimize Nx monorepo workspaces. Use when setting up Nx, configuring project boundaries, optimizing build caching, or implementing affected com... | nx, workspace | nx, workspace, configure, optimize, monorepo, workspaces, setting, up, configuring, boundaries, optimizing, caching |
|
||||
| `on-call-handoff-patterns` | Master on-call shift handoffs with context transfer, escalation procedures, and documentation. Use when transitioning on-call responsibilities, documenting s... | on, call, handoff | on, call, handoff, shift, handoffs, context, transfer, escalation, procedures, documentation, transitioning, responsibilities |
|
||||
| `parallel-agents` | Multi-agent orchestration patterns. Use when multiple independent tasks can run with different domain expertise or when comprehensive analysis requires multi... | parallel, agents | parallel, agents, multi, agent, orchestration, multiple, independent, tasks, run, different, domain, expertise |
|
||||
| `powershell-windows` | PowerShell Windows patterns. Critical pitfalls, operator syntax, error handling. | powershell, windows | powershell, windows, critical, pitfalls, operator, syntax, error, handling |
|
||||
| `production-code-audit` | Autonomously deep-scan entire codebase line-by-line, understand architecture and patterns, then systematically transform it to production-grade, corporate-le... | production, code, audit | production, code, audit, autonomously, deep, scan, entire, codebase, line, understand, architecture, then |
|
||||
| `projection-patterns` | Build read models and projections from event streams. Use when implementing CQRS read sides, building materialized views, or optimizing query performance in ... | projection | projection, read, models, projections, event, streams, implementing, cqrs, sides, building, materialized, views |
|
||||
| `prompt-engineering` | Expert guide on prompt engineering patterns, best practices, and optimization techniques. Use when user wants to improve prompts, learn prompting strategies,... | prompt, engineering | prompt, engineering, optimization, techniques, user, wants, improve, prompts, learn, prompting, debug, agent |
|
||||
| `radix-ui-design-system` | Build accessible design systems with Radix UI primitives. Headless component customization, theming strategies, and compound component patterns for productio... | radix, ui | radix, ui, accessible, primitives, headless, component, customization, theming, compound, grade, libraries |
|
||||
| `saga-orchestration` | Implement saga patterns for distributed transactions and cross-aggregate workflows. Use when coordinating multi-step business processes, handling compensatin... | saga | saga, orchestration, distributed, transactions, cross, aggregate, coordinating, multi, step, business, processes, handling |
|
||||
| `salesforce-development` | Expert patterns for Salesforce platform development including Lightning Web Components (LWC), Apex triggers and classes, REST/Bulk APIs, Connected Apps, and ... | salesforce | salesforce, development, platform, including, lightning, web, components, lwc, apex, triggers, classes, rest |
|
||||
| `skill-developer` | Create and manage Claude Code skills following Anthropic best practices. Use when creating new skills, modifying skill-rules.json, understanding trigger patt... | skill | skill, developer, claude, code, skills, following, anthropic, creating, new, modifying, rules, json |
|
||||
| `software-architecture` | Guide for quality focused software architecture. This skill should be used when users want to write code, design architecture, analyze code, in any case that... | software, architecture | software, architecture, quality, skill, should, used, users, want, write, code, analyze, any |
|
||||
| `tailwind-design-system` | Build scalable design systems with Tailwind CSS, design tokens, component libraries, and responsive patterns. Use when creating component libraries, implemen... | tailwind | tailwind, scalable, css, tokens, component, libraries, responsive, creating, implementing, standardizing, ui |
|
||||
| `tailwind-patterns` | Tailwind CSS v4 principles. CSS-first configuration, container queries, modern patterns, design token architecture. | tailwind | tailwind, css, v4, principles, first, configuration, container, queries, token, architecture |
|
||||
| `testing-patterns` | Jest testing patterns, factory functions, mocking strategies, and TDD workflow. Use when writing unit tests, creating test factories, or following TDD red-gr... | | testing, jest, factory, functions, mocking, tdd, writing, unit, tests, creating, test, factories |
|
||||
| `tool-design` | Build tools that agents can use effectively, including architectural reduction patterns | | agents, effectively, including, architectural, reduction |
|
||||
| `unreal-engine-cpp-pro` | Expert guide for Unreal Engine 5.x C++ development, covering UObject hygiene, performance patterns, and best practices. | unreal, engine, cpp | unreal, engine, cpp, pro, development, covering, uobject, hygiene, performance |
|
||||
| `wcag-audit-patterns` | Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fi... | wcag, audit | wcag, audit, conduct, accessibility, audits, automated, testing, manual, verification, remediation, guidance, auditing |
|
||||
| `workflow-orchestration-patterns` | Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism cons... | | orchestration, durable, temporal, distributed, covers, vs, activity, separation, saga, state, determinism, constraints |
|
||||
| `workflow-patterns` | Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding th... | | skill, implementing, tasks, according, conductor, tdd, handling, phase, checkpoints, managing, git, commits |
|
||||
| `zapier-make-patterns` | No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code.... | zapier, make | zapier, make, no, code, automation, democratizes, building, formerly, integromat, let, non, developers |
|
||||
|
||||
## business (37)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `competitive-landscape` | This skill should be used when the user asks to "analyze competitors", "assess competitive landscape", "identify differentiation", "evaluate market positioni... | competitive, landscape | competitive, landscape, skill, should, used, user, asks, analyze, competitors, assess, identify, differentiation |
| `conductor-setup` | Initialize project with Conductor artifacts (product definition, tech stack, workflow, style guides) | conductor, setup | conductor, setup, initialize, artifacts, product, definition, tech, stack, style, guides |
| `content-creator` | Create SEO-optimized marketing content with consistent brand voice. Includes brand voice analyzer, SEO optimizer, content frameworks, and social media templa... | content, creator | content, creator, seo, optimized, marketing, consistent, brand, voice, includes, analyzer, optimizer, frameworks |
| `context-driven-development` | Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship be... | driven | driven, context, development, skill, working, conductor, methodology, managing, artifacts, understanding, relationship, between |
| `copy-editing` | When the user wants to edit, review, or improve existing marketing copy. Also use when the user mentions 'edit this copy,' 'review my copy,' 'copy feedback,'... | copy, editing | copy, editing, user, wants, edit, review, improve, existing, marketing, mentions, my, feedback |
| `copywriting` | Use this skill when writing, rewriting, or improving marketing copy for any page (homepage, landing page, pricing, feature, product, or about page). This ski... | copywriting | copywriting, skill, writing, rewriting, improving, marketing, copy, any, page, homepage, landing, pricing |
| `deep-research` | Execute autonomous multi-step research using Google Gemini Deep Research Agent. Use for: market analysis, competitive landscaping, literature reviews, techni... | deep, research | deep, research, execute, autonomous, multi, step, google, gemini, agent, market, analysis, competitive |
| `defi-protocol-templates` | Implement DeFi protocols with production-ready templates for staking, AMMs, governance, and lending systems. Use when building decentralized finance applicat... | defi, protocol | defi, protocol, protocols, staking, amms, governance, lending, building, decentralized, finance, applications, smart |
| `employment-contract-templates` | Create employment contracts, offer letters, and HR policy documents following legal best practices. Use when drafting employment agreements, creating HR poli... | employment, contract | employment, contract, contracts, offer, letters, hr, policy, documents, following, legal, drafting, agreements |
| `framework-migration-legacy-modernize` | Orchestrate a comprehensive legacy system modernization using the strangler fig pattern, enabling gradual replacement of outdated components while maintainin... | framework, migration, legacy, modernize | framework, migration, legacy, modernize, orchestrate, modernization, strangler, fig, enabling, gradual, replacement, outdated |
| `free-tool-strategy` | When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or brand awareness. Also use when the user m... | free | free, user, wants, plan, evaluate, marketing, purposes, lead, generation, seo, value, brand |
| `hr-pro` | Professional, ethical HR partner for hiring, onboarding/offboarding, PTO and leave, performance, compliant policies, and employee relations. Ask for jurisdic... | hr | hr, pro, professional, ethical, partner, hiring, onboarding, offboarding, pto, leave, performance, compliant |
| `market-sizing-analysis` | This skill should be used when the user asks to "calculate TAM", "determine SAM", "estimate SOM", "size the market", "calculate market opportunity", "what's ... | market, sizing | market, sizing, analysis, skill, should, used, user, asks, calculate, tam, determine, sam |
| `marketing-ideas` | Provide proven marketing strategies and growth ideas for SaaS and software products, prioritized using a marketing feasibility scoring system. | marketing, ideas | marketing, ideas, provide, proven, growth, saas, software, products, prioritized, feasibility, scoring |
| `marketing-psychology` | Apply behavioral science and mental models to marketing decisions, prioritized using a psychological leverage and feasibility scoring system. | marketing, psychology | marketing, psychology, apply, behavioral, science, mental, models, decisions, prioritized, psychological, leverage, feasibility |
| `notion-template-business` | Expert in building and selling Notion templates as a business - not just making templates, but building a sustainable digital product business. Covers templa... | notion, business | notion, business, building, selling, just, making, sustainable, digital, product, covers, pricing, marketplaces |
| `page-cro` | Analyze and optimize individual pages for conversion performance. Use when the user wants to improve conversion rates, diagnose why a page is underperforming... | page, cro | page, cro, analyze, optimize, individual, pages, conversion, performance, user, wants, improve, rates |
| `paywall-upgrade-cro` | When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use when the user mentions "paywall," "upgr... | paywall, upgrade, cro | paywall, upgrade, cro, user, wants, optimize, app, paywalls, screens, upsell, modals, feature |
| `pricing-strategy` | Design pricing, packaging, and monetization strategies based on value, customer willingness to pay, and growth objectives. | pricing | pricing, packaging, monetization, value, customer, willingness, pay, growth, objectives |
| `sales-automator` | Draft cold emails, follow-ups, and proposal templates. Creates pricing pages, case studies, and sales scripts. Use PROACTIVELY for sales outreach or lead nur... | sales, automator | sales, automator, draft, cold, emails, follow, ups, proposal, creates, pricing, pages, case |
| `screenshots` | Generate marketing screenshots of your app using Playwright. Use when the user wants to create screenshots for Product Hunt, social media, landing pages, or ... | screenshots | screenshots, generate, marketing, app, playwright, user, wants, product, hunt, social, media, landing |
| `scroll-experience` | Expert in building immersive scroll-driven experiences - parallax storytelling, scroll animations, interactive narratives, and cinematic web experiences. Lik... | scroll, experience | scroll, experience, building, immersive, driven, experiences, parallax, storytelling, animations, interactive, narratives, cinematic |
| `seo-cannibalization-detector` | Analyzes multiple provided pages to identify keyword overlap and potential cannibalization issues. Suggests differentiation strategies. Use PROACTIVELY when ... | seo, cannibalization, detector | seo, cannibalization, detector, analyzes, multiple, provided, pages, identify, keyword, overlap, potential, issues |
| `seo-content-auditor` | Analyzes provided content for quality, E-E-A-T signals, and SEO best practices. Scores content and provides improvement recommendations based on established ... | seo, content, auditor | seo, content, auditor, analyzes, provided, quality, signals, scores, provides, improvement, recommendations, established |
| `seo-content-planner` | Creates comprehensive content outlines and topic clusters for SEO. Plans content calendars and identifies topic gaps. Use PROACTIVELY for content strategy an... | seo, content, planner | seo, content, planner, creates, outlines, topic, clusters, plans, calendars, identifies, gaps, proactively |
| `seo-content-refresher` | Identifies outdated elements in provided content and suggests updates to maintain freshness. Finds statistics, dates, and examples that need updating. Use PR... | seo, content, refresher | seo, content, refresher, identifies, outdated, elements, provided, suggests, updates, maintain, freshness, finds |
| `seo-content-writer` | Writes SEO-optimized content based on provided keywords and topic briefs. Creates engaging, comprehensive content following best practices. Use PROACTIVELY f... | seo, content, writer | seo, content, writer, writes, optimized, provided, keywords, topic, briefs, creates, engaging, following |
| `seo-fundamentals` | Core principles of SEO including E-E-A-T, Core Web Vitals, technical foundations, content quality, and how modern search engines evaluate pages. This skill e... | seo, fundamentals | seo, fundamentals, core, principles, including, web, vitals, technical, foundations, content, quality, how |
| `seo-keyword-strategist` | Analyzes keyword usage in provided content, calculates density, suggests semantic variations and LSI keywords based on the topic. Prevents over-optimization.... | seo, keyword, strategist | seo, keyword, strategist, analyzes, usage, provided, content, calculates, density, suggests, semantic, variations |
| `seo-meta-optimizer` | Creates optimized meta titles, descriptions, and URL suggestions based on character limits and best practices. Generates compelling, keyword-rich metadata. U... | seo, meta, optimizer | seo, meta, optimizer, creates, optimized, titles, descriptions, url, suggestions, character, limits, generates |
| `seo-snippet-hunter` | Formats content to be eligible for featured snippets and SERP features. Creates snippet-optimized content blocks based on best practices. Use PROACTIVELY for... | seo, snippet, hunter | seo, snippet, hunter, formats, content, eligible, featured, snippets, serp, features, creates, optimized |
| `seo-structure-architect` | Analyzes and optimizes content structure including header hierarchy, suggests schema markup, and internal linking opportunities. Creates search-friendly cont... | seo, structure | seo, structure, architect, analyzes, optimizes, content, including, header, hierarchy, suggests, schema, markup |
| `startup-business-analyst-business-case` | Generate comprehensive investor-ready business case document with market, solution, financials, and strategy | startup, business, analyst, case | startup, business, analyst, case, generate, investor, document, market, solution, financials |
| `startup-business-analyst-financial-projections` | Create detailed 3-5 year financial model with revenue, costs, cash flow, and scenarios | startup, business, analyst, financial, projections | startup, business, analyst, financial, projections, detailed, year, model, revenue, costs, cash, flow |
| `startup-business-analyst-market-opportunity` | Generate comprehensive market opportunity analysis with TAM/SAM/SOM calculations | startup, business, analyst, market, opportunity | startup, business, analyst, market, opportunity, generate, analysis, tam, sam, som, calculations |
| `startup-financial-modeling` | This skill should be used when the user asks to "create financial projections", "build a financial model", "forecast revenue", "calculate burn rate", "estima... | startup, financial, modeling | startup, financial, modeling, skill, should, used, user, asks, projections, model, forecast, revenue |
| `team-composition-analysis` | This skill should be used when the user asks to "plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equity a... | team, composition | team, composition, analysis, skill, should, used, user, asks, plan, structure, determine, hiring |

## data-ai (92)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `agent-memory-mcp` | A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions). | agent, memory, mcp | agent, memory, mcp, hybrid, provides, persistent, searchable, knowledge, ai, agents, architecture, decisions |
|
||||
| `agent-tool-builder` | Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently... | agent, builder | agent, builder, how, ai, agents, interact, world, well, designed, difference, between, works |
|
||||
| `ai-agents-architect` | Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build ... | ai, agents | ai, agents, architect, designing, building, autonomous, masters, memory, planning, multi, agent, orchestration |
|
||||
| `ai-engineer` | Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and ente... | ai | ai, engineer, llm, applications, rag, intelligent, agents, implements, vector, search, multimodal, agent |
|
||||
| `ai-wrapper-product` | Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products t... | ai, wrapper, product | ai, wrapper, product, building, products, wrap, apis, openai, anthropic, etc, people, pay |
|
||||
| `analytics-tracking` | Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data. Use when the user wants to set up, fix, or evaluate analyti... | analytics, tracking | analytics, tracking, audit, improve, produce, reliable, decision, data, user, wants, set, up |
|
||||
| `api-documenter` | Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build com... | api, documenter | api, documenter, documentation, openapi, ai, powered, developer, experience, interactive, docs, generate, sdks |
|
||||
| `autonomous-agent-patterns` | Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use ... | autonomous, agent | autonomous, agent, building, coding, agents, covers, integration, permission, browser, automation, human, loop |
|
||||
| `autonomous-agents` | Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The c... | autonomous, agents | autonomous, agents, ai, independently, decompose, goals, plan, actions, execute, self, correct, without |
|
||||
| `beautiful-prose` | Hard-edged writing style contract for timeless, forceful English prose without AI tics | beautiful, prose | beautiful, prose, hard, edged, writing, style, contract, timeless, forceful, english, without, ai |
|
||||
| `behavioral-modes` | AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type. | behavioral, modes | behavioral, modes, ai, operational, brainstorm, debug, review, teach, ship, orchestrate, adapt, behavior |
|
||||
| `blockrun` | Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "u... | blockrun | blockrun, user, capabilities, claude, lacks, image, generation, real, time, twitter, data, explicitly |
|
||||
| `browser-automation` | Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to underst... | browser | browser, automation, powers, web, testing, scraping, ai, agent, interactions, difference, between, flaky |
|
||||
| `business-analyst` | Master modern business analysis with AI-powered analytics, real-time dashboards, and data-driven insights. Build comprehensive KPI frameworks, predictive mod... | business, analyst | business, analyst, analysis, ai, powered, analytics, real, time, dashboards, data, driven, insights |
|
||||
| `cc-skill-backend-patterns` | Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes. | cc, skill, backend | cc, skill, backend, architecture, api, database, optimization, server, side, node, js, express |
|
||||
| `cc-skill-clickhouse-io` | ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads. | cc, skill, clickhouse, io | cc, skill, clickhouse, io, database, query, optimization, analytics, data, engineering, high, performance |
|
||||
| `clarity-gate` | Pre-ingestion verification for epistemic quality in RAG systems with 9-point verification and Two-Round HITL workflow | clarity, gate | clarity, gate, pre, ingestion, verification, epistemic, quality, rag, point, two, round, hitl |
|
||||
| `code-documentation-doc-generate` | You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user g... | code, documentation, doc, generate | code, documentation, doc, generate, specializing, creating, maintainable, api, docs, architecture, diagrams, user |
|
||||
| `codex-review` | Professional code review with auto CHANGELOG generation, integrated with Codex AI | codex | codex, review, professional, code, auto, changelog, generation, integrated, ai |
|
||||
| `content-marketer` | Elite content marketing strategist specializing in AI-powered content creation, omnichannel distribution, SEO optimization, and data-driven performance marke... | content, marketer | content, marketer, elite, marketing, strategist, specializing, ai, powered, creation, omnichannel, distribution, seo |
|
||||
| `context-manager` | Elite AI context engineering specialist mastering dynamic context management, vector databases, knowledge graphs, and intelligent memory systems. Orchestrate... | manager | manager, context, elite, ai, engineering, mastering, dynamic, vector, databases, knowledge, graphs, intelligent |
|
||||
| `context-window-management` | Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot Use when: context window, token limit, conte... | window | window, context, managing, llm, windows, including, summarization, trimming, routing, avoiding, rot, token |
|
||||
| `conversation-memory` | Persistent memory systems for LLM conversations including short-term, long-term, and entity-based memory Use when: conversation memory, remember, memory pers... | conversation, memory | conversation, memory, persistent, llm, conversations, including, short, term, long, entity, remember, persistence |
|
||||
| `crewai` | Expert in CrewAI - the leading role-based multi-agent framework used by 60% of Fortune 500 companies. Covers agent design with roles and goals, task definiti... | crewai | crewai, leading, role, multi, agent, framework, used, 60, fortune, 500, companies, covers |
|
||||
| `customer-support` | Elite AI-powered customer support specialist mastering conversational AI, automated ticketing, sentiment analysis, and omnichannel support experiences. Integ... | customer, support | customer, support, elite, ai, powered, mastering, conversational, automated, ticketing, sentiment, analysis, omnichannel |
|
||||
| `data-engineering-data-driven-feature` | Build features guided by data insights, A/B testing, and continuous measurement using specialized agents for analysis, implementation, and experimentation. | data, engineering, driven | data, engineering, driven, feature, features, guided, insights, testing, continuous, measurement, specialized, agents |
|
||||
| `data-quality-frameworks` | Implement data quality validation with Great Expectations, dbt tests, and data contracts. Use when building data quality pipelines, implementing validation r... | data, quality, frameworks | data, quality, frameworks, validation, great, expectations, dbt, tests, contracts, building, pipelines, implementing |
|
||||
| `data-scientist` | Expert data scientist for advanced analytics, machine learning, and statistical modeling. Handles complex data analysis, predictive modeling, and business in... | data, scientist | data, scientist, analytics, machine, learning, statistical, modeling, complex, analysis, predictive, business, intelligence |
|
||||
| `data-storytelling` | Transform data into compelling narratives using visualization, context, and persuasive structure. Use when presenting analytics to stakeholders, creating dat... | data, storytelling | data, storytelling, transform, compelling, narratives, visualization, context, persuasive, structure, presenting, analytics, stakeholders |
|
||||
| `database-architect` | Expert database architect specializing in data layer design from scratch, technology selection, schema modeling, and scalable database architectures. Masters... | database | database, architect, specializing, data, layer, scratch, technology, selection, schema, modeling, scalable, architectures |
|
||||
| `database-design` | Database design principles and decision-making. Schema design, indexing strategy, ORM selection, serverless databases. | database | database, principles, decision, making, schema, indexing, orm, selection, serverless, databases |
| `dbt-transformation-patterns` | Master dbt (data build tool) for analytics engineering with model organization, testing, documentation, and incremental strategies. Use when building data tr... | dbt, transformation | dbt, transformation, data, analytics, engineering, model, organization, testing, documentation, incremental, building, transformations |
| `documentation-generation-doc-generate` | You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user g... | documentation, generation, doc, generate | documentation, generation, doc, generate, specializing, creating, maintainable, code, api, docs, architecture, diagrams |
| `documentation-templates` | Documentation templates and structure guidelines. README, API docs, code comments, and AI-friendly documentation. | documentation | documentation, structure, guidelines, readme, api, docs, code, comments, ai, friendly |
| `embedding-strategies` | Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optim... | embedding, strategies | embedding, strategies, select, optimize, models, semantic, search, rag, applications, choosing, implementing, chunking |
| `fal-audio` | Text-to-speech and speech-to-text using fal.ai audio models | fal, audio | fal, audio, text, speech, ai, models |
| `fal-generate` | Generate images and videos using fal.ai AI models | fal, generate | fal, generate, images, videos, ai, models |
| `fal-image-edit` | AI-powered image editing with style transfer and object removal | fal, image, edit | fal, image, edit, ai, powered, editing, style, transfer, object, removal |
| `fal-upscale` | Upscale and enhance image and video resolution using AI | fal, upscale | fal, upscale, enhance, image, video, resolution, ai |
| `fal-workflow` | Generate workflow JSON files for chaining AI models | fal | fal, generate, json, files, chaining, ai, models |
| `fp-ts-react` | Practical patterns for using fp-ts with React - hooks, state, forms, data fetching. Use when building React apps with functional programming patterns. Works ... | fp, ts, react | fp, ts, react, practical, hooks, state, forms, data, fetching, building, apps, functional |
| `frontend-dev-guidelines` | Opinionated frontend development standards for modern React + TypeScript applications. Covers Suspense-first data fetching, lazy loading, feature-based archi... | frontend, dev, guidelines | frontend, dev, guidelines, opinionated, development, standards, react, typescript, applications, covers, suspense, first |
| `geo-fundamentals` | Generative Engine Optimization for AI search engines (ChatGPT, Claude, Perplexity). | geo, fundamentals | geo, fundamentals, generative, engine, optimization, ai, search, engines, chatgpt, claude, perplexity |
| `graphql` | GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful al... | graphql | graphql, gives, clients, exactly, data, no, less, one, endpoint, typed, schema, introspection |
| `hybrid-search-implementation` | Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides... | hybrid, search | hybrid, search, combine, vector, keyword, improved, retrieval, implementing, rag, building, engines, neither |
| `ios-developer` | Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization. Use PROACT... | ios | ios, developer, develop, native, applications, swift, swiftui, masters, 18, uikit, integration, core |
| `langchain-architecture` | Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implement... | langchain, architecture | langchain, architecture, llm, applications, framework, agents, memory, integration, building, implementing, ai, creating |
| `langgraph` | Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles ... | langgraph | langgraph, grade, framework, building, stateful, multi, actor, ai, applications, covers, graph, construction |
| `llm-application-dev-ai-assistant` | You are an AI assistant development expert specializing in creating intelligent conversational interfaces, chatbots, and AI-powered applications. Design comp... | llm, application, dev, ai | llm, application, dev, ai, assistant, development, specializing, creating, intelligent, conversational, interfaces, chatbots |
| `llm-application-dev-langchain-agent` | You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph. | llm, application, dev, langchain, agent | llm, application, dev, langchain, agent, developer, specializing, grade, ai, langgraph |
| `llm-application-dev-prompt-optimize` | You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thoug... | llm, application, dev, prompt, optimize | llm, application, dev, prompt, optimize, engineer, specializing, crafting, effective, prompts, llms, through |
| `llm-evaluation` | Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performanc... | llm, evaluation | llm, evaluation, applications, automated, metrics, human, feedback, benchmarking, testing, performance, measuring, ai |
| `nanobanana-ppt-skills` | AI-powered PPT generation with document analysis and styled images | nanobanana, ppt, skills | nanobanana, ppt, skills, ai, powered, generation, document, analysis, styled, images |
| `neon-postgres` | Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration. Use when: neon database, serverless postgres, dat... | neon, postgres | neon, postgres, serverless, branching, connection, pooling, prisma, drizzle, integration, database |
| `nextjs-app-router-patterns` | Master Next.js 14+ App Router with Server Components, streaming, parallel routes, and advanced data fetching. Use when building Next.js applications, impleme... | nextjs, app, router | nextjs, app, router, next, js, 14, server, components, streaming, parallel, routes, data |
| `nextjs-best-practices` | Next.js App Router principles. Server Components, data fetching, routing patterns. | nextjs, best, practices | nextjs, best, practices, next, js, app, router, principles, server, components, data, fetching |
| `nodejs-backend-patterns` | Build production-ready Node.js backend services with Express/Fastify, implementing middleware patterns, error handling, authentication, database integration,... | nodejs, backend | nodejs, backend, node, js, express, fastify, implementing, middleware, error, handling, authentication, database |
| `php-pro` | Write idiomatic PHP code with generators, iterators, SPL data structures, and modern OOP features. Use PROACTIVELY for high-performance PHP applications. | php | php, pro, write, idiomatic, code, generators, iterators, spl, data, structures, oop, features |
| `postgres-best-practices` | Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, o... | postgres, best, practices | postgres, best, practices, supabase, performance, optimization, skill, writing, reviewing, optimizing, queries, schema |
| `postgresql` | Design a PostgreSQL-specific schema. Covers best-practices, data types, indexing, constraints, performance patterns, and advanced features | postgresql | postgresql, specific, schema, covers, data, types, indexing, constraints, performance, features |
| `prisma-expert` | Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, m... | prisma | prisma, orm, schema, migrations, query, optimization, relations, modeling, database, operations, proactively, issues |
| `programmatic-seo` | Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data. Use when the user mentions progra... | programmatic, seo | programmatic, seo, evaluate, creating, driven, pages, scale, structured, data, user, mentions, directory |
| `prompt-caching` | Caching strategies for LLM prompts including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation). Use when: prompt caching, cache... | prompt, caching | prompt, caching, llm, prompts, including, anthropic, response, cag, cache, augmented, generation |
| `prompt-engineer` | Expert prompt engineer specializing in advanced prompting techniques, LLM optimization, and AI system design. Masters chain-of-thought, constitutional AI, an... | prompt | prompt, engineer, specializing, prompting, techniques, llm, optimization, ai, masters, chain, thought, constitutional |
| `prompt-engineering-patterns` | Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, impro... | prompt, engineering | prompt, engineering, techniques, maximize, llm, performance, reliability, controllability, optimizing, prompts, improving, outputs |
| `rag-engineer` | Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL... | rag | rag, engineer, building, retrieval, augmented, generation, masters, embedding, models, vector, databases, chunking |
| `rag-implementation` | Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded A... | rag | rag, retrieval, augmented, generation, llm, applications, vector, databases, semantic, search, implementing, knowledge |
| `react-best-practices` | React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.j... | react, best, practices | react, best, practices, vercel, next, js, performance, optimization, guidelines, engineering, skill, should |
| `react-ui-patterns` | Modern React UI patterns for loading states, error handling, and data fetching. Use when building UI components, handling async data, or managing UI states. | react, ui | react, ui, loading, states, error, handling, data, fetching, building, components, async, managing |
| `scala-pro` | Master enterprise-grade Scala development with functional programming, distributed systems, and big data processing. Expert in Apache Pekko, Akka, Spark, ZIO... | scala | scala, pro, enterprise, grade, development, functional, programming, distributed, big, data, processing, apache |
| `schema-markup` | Design, validate, and optimize schema.org structured data for eligibility, correctness, and measurable SEO impact. Use when the user wants to add, fix, audit... | schema, markup | schema, markup, validate, optimize, org, structured, data, eligibility, correctness, measurable, seo, impact |
| `segment-cdp` | Expert patterns for Segment Customer Data Platform including Analytics.js, server-side tracking, tracking plans with Protocols, identity resolution, destinat... | segment, cdp | segment, cdp, customer, data, platform, including, analytics, js, server, side, tracking, plans |
| `senior-architect` | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, F... | senior | senior, architect, software, architecture, skill, designing, scalable, maintainable, reactjs, nextjs, nodejs, express |
| `seo-audit` | Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance. Use when the user asks for an SEO audit, technical SEO r... | seo, audit | seo, audit, diagnose, issues, affecting, crawlability, indexation, rankings, organic, performance, user, asks |
| `similarity-search-patterns` | Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieva... | similarity, search | similarity, search, efficient, vector, databases, building, semantic, implementing, nearest, neighbor, queries, optimizing |
| `skill-seekers` | Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes. | skill, seekers | skill, seekers, automatically, convert, documentation, websites, github, repositories, pdfs, claude, ai, skills |
| `spark-optimization` | Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or... | spark, optimization | spark, optimization, optimize, apache, jobs, partitioning, caching, shuffle, memory, tuning, improving, performance |
| `sql-optimization-patterns` | Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when de... | sql, optimization | sql, optimization, query, indexing, explain, analysis, dramatically, improve, database, performance, eliminate, slow |
| `sqlmap-database-pentesting` | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap,... | sqlmap, database, pentesting | sqlmap, database, pentesting, penetration, testing, skill, should, used, user, asks, automate, sql |
| `stitch-ui-design` | Expert guide for creating effective prompts for Google Stitch AI UI design tool. Use when user wants to design UI/UX in Stitch, create app interfaces, genera... | stitch, ui | stitch, ui, creating, effective, prompts, google, ai, user, wants, ux, app, interfaces |
| `tdd-orchestrator` | Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices... | tdd, orchestrator | tdd, orchestrator, specializing, red, green, refactor, discipline, multi, agent, coordination, test, driven |
| `team-collaboration-standup-notes` | You are an expert team communication specialist focused on async-first standup practices, AI-assisted note generation from commit history, and effective remo... | team, collaboration, standup, notes | team, collaboration, standup, notes, communication, async, first, ai, assisted, note, generation, commit |
| `telegram-bot-builder` | Expert in building Telegram bots that solve real problems - from simple automation to complex AI-powered bots. Covers bot architecture, the Telegram Bot API,... | telegram, bot, builder | telegram, bot, builder, building, bots, solve, real, problems, simple, automation, complex, ai |
| `trigger-dev` | Trigger.dev expert for background jobs, AI workflows, and reliable async execution with excellent developer experience and TypeScript-first design. Use when:... | trigger, dev | trigger, dev, background, jobs, ai, reliable, async, execution, excellent, developer, experience, typescript |
| `unity-ecs-patterns` | Master Unity ECS (Entity Component System) with DOTS, Jobs, and Burst for high-performance game development. Use when building data-oriented games, optimizin... | unity, ecs | unity, ecs, entity, component, dots, jobs, burst, high, performance, game, development, building |
| `vector-database-engineer` | Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applic... | vector, database | vector, database, engineer, databases, embedding, semantic, search, masters, pinecone, weaviate, qdrant, milvus |
| `vector-index-tuning` | Optimize vector index performance for latency, recall, and memory. Use when tuning HNSW parameters, selecting quantization strategies, or scaling vector sear... | vector, index, tuning | vector, index, tuning, optimize, performance, latency, recall, memory, hnsw, parameters, selecting, quantization |
| `vexor` | Vector-powered CLI for semantic file search with a Claude/Codex skill | vexor | vexor, vector, powered, cli, semantic, file, search, claude, codex, skill |
| `voice-ai-development` | Expert in building voice AI applications - from real-time voice agents to voice-enabled apps. Covers OpenAI Realtime API, Vapi for voice agents, Deepgram for... | voice, ai | voice, ai, development, building, applications, real, time, agents, enabled, apps, covers, openai |
| `voice-ai-engine-development` | Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling ... | voice, ai, engine | voice, ai, engine, development, real, time, conversational, engines, async, worker, pipelines, streaming |
| `web-artifacts-builder` | Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use ... | web, artifacts, builder | web, artifacts, builder, suite, creating, elaborate, multi, component, claude, ai, html, frontend |
| `xlsx-official` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx, official | xlsx, official, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work |
## development (81)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `3d-web-experience` | Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portf... | 3d, web, experience | 3d, web, experience, building, experiences, three, js, react, fiber, spline, webgl, interactive |
| `algolia-search` | Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when: adding search to, algolia, instan... | algolia, search | algolia, search, indexing, react, instantsearch, relevance, tuning, adding, api, functionality |
| `api-design-principles` | Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers. Use when designing new APIs, revie... | api, principles | api, principles, rest, graphql, intuitive, scalable, maintainable, apis, delight, developers, designing, new |
| `api-documentation-generator` | Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices | api, documentation, generator | api, documentation, generator, generate, developer, friendly, code, including, endpoints, parameters, examples |
| `api-patterns` | API design principles and decision-making. REST vs GraphQL vs tRPC selection, response formats, versioning, pagination. | api | api, principles, decision, making, rest, vs, graphql, trpc, selection, response, formats, versioning |
| `app-store-optimization` | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store | app, store, optimization | app, store, optimization, complete, aso, toolkit, researching, optimizing, tracking, mobile, performance, apple |
| `architecture-patterns` | Implement proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex ... | architecture | architecture, proven, backend, including, clean, hexagonal, domain, driven, architecting, complex, refactoring, existing |
| `async-python-patterns` | Master Python asyncio, concurrent programming, and async/await patterns for high-performance applications. Use when building async APIs, concurrent systems, ... | async, python | async, python, asyncio, concurrent, programming, await, high, performance, applications, building, apis, bound |
| `azure-functions` | Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production pat... | azure, functions | azure, functions, development, including, isolated, worker, model, durable, orchestration, cold, start, optimization |
| `backend-dev-guidelines` | Opinionated backend development standards for Node.js + Express + TypeScript microservices. Covers layered architecture, BaseController pattern, dependency i... | backend, dev, guidelines | backend, dev, guidelines, opinionated, development, standards, node, js, express, typescript, microservices, covers |
| `bullmq-specialist` | BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull que... | bullmq | bullmq, redis, backed, job, queues, background, processing, reliable, async, execution, node, js |
| `bun-development` | Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bu... | bun | bun, development, javascript, typescript, runtime, covers, package, bundling, testing, migration, node, js |
| `cc-skill-coding-standards` | Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development. | cc, skill, coding, standards | cc, skill, coding, standards, universal, typescript, javascript, react, node, js, development |
| `cc-skill-frontend-patterns` | Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices. | cc, skill, frontend | cc, skill, frontend, development, react, next, js, state, performance, optimization, ui |
| `context7-auto-research` | Automatically fetch latest library/framework documentation for Claude Code via Context7 API | context7, auto, research | context7, auto, research, automatically, fetch, latest, library, framework, documentation, claude, code, via |
| `csharp-pro` | Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and... | csharp | csharp, pro, write, code, features, like, records, matching, async, await, optimizes, net |
| `discord-bot-architect` | Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactiv... | discord, bot | discord, bot, architect, specialized, skill, building, bots, covers, js, javascript, pycord, python |
| `dotnet-architect` | Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns. Masters async/await, dependenc... | dotnet | dotnet, architect, net, backend, specializing, asp, core, entity, framework, dapper, enterprise, application |
| `dotnet-backend-patterns` | Master C#/.NET backend development patterns for building robust APIs, MCP servers, and enterprise applications. Covers async/await, dependency injection, Ent... | dotnet, backend | dotnet, backend, net, development, building, robust, apis, mcp, servers, enterprise, applications, covers |
| `exa-search` | Semantic search, similar content discovery, and structured research using Exa API | exa, search | exa, search, semantic, similar, content, discovery, structured, research, api |
| `fastapi-pro` | Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns. Use PROA... | fastapi | fastapi, pro, high, performance, async, apis, sqlalchemy, pydantic, v2, microservices, websockets, python |
| `fastapi-templates` | Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applicati... | fastapi | fastapi, async, dependency, injection, error, handling, building, new, applications, setting, up, backend |
| `firecrawl-scraper` | Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API | firecrawl, scraper | firecrawl, scraper, deep, web, scraping, screenshots, pdf, parsing, website, crawling, api |
| `fp-ts-errors` | Handle errors as values using fp-ts Either and TaskEither for cleaner, more predictable TypeScript code. Use when implementing error handling patterns with f... | fp, ts, errors | fp, ts, errors, handle, values, either, taskeither, cleaner, predictable, typescript, code, implementing |
| `fp-ts-pragmatic` | A practical, jargon-free guide to fp-ts functional programming - the 80/20 approach that gets results without the academic overhead. Use when writing TypeScr... | fp, ts, pragmatic | fp, ts, pragmatic, practical, jargon, free, functional, programming, 80, 20, approach, gets |
| `frontend-design` | Create distinctive, production-grade frontend interfaces with intentional aesthetics, high craft, and non-generic visual identity. Use when building or styli... | frontend | frontend, distinctive, grade, interfaces, intentional, aesthetics, high, craft, non, generic, visual, identity |
| `frontend-developer` | Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture... | frontend | frontend, developer, react, components, responsive, layouts, handle, client, side, state, masters, 19 |
| `frontend-mobile-development-component-scaffold` | You are a React component architecture expert specializing in scaffolding production-ready, accessible, and performant components. Generate complete componen... | frontend, mobile, component | frontend, mobile, component, development, scaffold, react, architecture, specializing, scaffolding, accessible, performant, components |
| `frontend-slides` | Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a... | frontend, slides | frontend, slides, stunning, animation, rich, html, presentations, scratch, converting, powerpoint, files, user |
| `game-development/mobile-games` | Mobile game development principles. Touch input, battery, performance, app stores. | game, development/mobile, games | game, development/mobile, games, mobile, development, principles, touch, input, battery, performance, app, stores |
| `go-concurrency-patterns` | Master Go concurrency with goroutines, channels, sync primitives, and context. Use when building concurrent Go applications, implementing worker pools, or de... | go, concurrency | go, concurrency, goroutines, channels, sync, primitives, context, building, concurrent, applications, implementing, worker |
| `golang-pro` | Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices. Expert in the latest Go ecosystem i... | golang | golang, pro, go, 21, concurrency, performance, optimization, microservices, latest, ecosystem, including, generics |
| `hubspot-integration` | Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers... | hubspot, integration | hubspot, integration, crm, including, oauth, authentication, objects, associations, batch, operations, webhooks, custom |
| `javascript-mastery` | Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced pa... | javascript, mastery | javascript, mastery, reference, covering, 33, essential, concepts, every, developer, should, know, fundamentals |
| `javascript-pro` | Master modern JavaScript with ES6+, async patterns, and Node.js APIs. Handles promises, event loops, and browser/Node compatibility. Use PROACTIVELY for Java... | javascript | javascript, pro, es6, async, node, js, apis, promises, event, loops, browser, compatibility |
| `javascript-testing-patterns` | Implement comprehensive testing strategies using Jest, Vitest, and Testing Library for unit tests, integration tests, and end-to-end testing with mocking, fi... | javascript | javascript, testing, jest, vitest, library, unit, tests, integration, mocking, fixtures, test, driven |
| `javascript-typescript-typescript-scaffold` | You are a TypeScript project architecture expert specializing in scaffolding production-ready Node.js and frontend applications. Generate complete project st... | javascript, typescript | javascript, typescript, scaffold, architecture, specializing, scaffolding, node, js, frontend, applications, generate, complete |
| `launch-strategy` | When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature r... | launch | launch, user, wants, plan, product, feature, announcement, release, mentions, hunt, go, market |
| `makepad-skills` | Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting. | makepad, skills | makepad, skills, ui, development, rust, apps, setup, shaders, packaging, troubleshooting |
| `mcp-builder` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder | mcp, builder, creating, high, quality, model, context, protocol, servers, enable, llms, interact |
| `memory-safety-patterns` | Implement memory-safe programming with RAII, ownership, smart pointers, and resource management across Rust, C++, and C. Use when writing safe systems code, ... | memory, safety | memory, safety, safe, programming, raii, ownership, smart, pointers, resource, rust, writing, code |
| `mobile-design` | Mobile-first design and engineering doctrine for iOS and Android apps. Covers touch interaction, performance, platform conventions, offline behavior, and mob... | mobile | mobile, first, engineering, doctrine, ios, android, apps, covers, touch, interaction, performance, platform |
| `mobile-developer` | Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync... | mobile | mobile, developer, develop, react, native, flutter, apps, architecture, masters, cross, platform, development |
| `modern-javascript-patterns` | Master ES6+ features including async/await, destructuring, spread operators, arrow functions, promises, modules, iterators, generators, and functional progra... | modern, javascript | modern, javascript, es6, features, including, async, await, destructuring, spread, operators, arrow, functions |
|
||||
| `multi-platform-apps-multi-platform` | Build and deploy the same feature consistently across web, mobile, and desktop platforms using API-first architecture and parallel implementation strategies. | multi, platform, apps | multi, platform, apps, deploy, same, feature, consistently, web, mobile, desktop, platforms, api |
|
||||
| `n8n-code-python` | Write Python code in n8n Code nodes. Use when writing Python in n8n, using _input/_json/_node syntax, working with standard library, or need to understand Py... | n8n, code, python | n8n, code, python, write, nodes, writing, input, json, node, syntax, working, standard |
|
||||
| `n8n-node-configuration` | Operation-aware node configuration guidance. Use when configuring nodes, understanding property dependencies, determining required fields, choosing between g... | n8n, node, configuration | n8n, node, configuration, operation, aware, guidance, configuring, nodes, understanding, property, dependencies, determining |
|
||||
| `observe-whatsapp` | Observe and troubleshoot WhatsApp in Kapso: debug message delivery, inspect webhook deliveries/retries, triage API errors, and run health checks. Use when in... | observe, whatsapp | observe, whatsapp, troubleshoot, kapso, debug, message, delivery, inspect, webhook, deliveries, retries, triage |
|
||||
| `product-manager-toolkit` | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market ... | product, manager | product, manager, toolkit, managers, including, rice, prioritization, customer, interview, analysis, prd, discovery |
|
||||
| `python-development-python-scaffold` | You are a Python project architecture expert specializing in scaffolding production-ready Python applications. Generate complete project structures with mode... | python | python, development, scaffold, architecture, specializing, scaffolding, applications, generate, complete, structures, tooling, uv |
|
||||
| `python-packaging` | Create distributable Python packages with proper project structure, setup.py/pyproject.toml, and publishing to PyPI. Use when packaging Python libraries, cre... | python, packaging | python, packaging, distributable, packages, proper, structure, setup, py, pyproject, toml, publishing, pypi |
|
||||
| `python-patterns` | Python development principles and decision-making. Framework selection, async patterns, type hints, project structure. Teaches thinking, not copying. | python | python, development, principles, decision, making, framework, selection, async, type, hints, structure, teaches |
|
||||
| `python-performance-optimization` | Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottleneck... | python, performance, optimization | python, performance, optimization, profile, optimize, code, cprofile, memory, profilers, debugging, slow, optimizing |
|
||||
| `python-pro` | Master Python 3.12+ with modern features, async programming, performance optimization, and production-ready practices. Expert in the latest Python ecosystem ... | python | python, pro, 12, features, async, programming, performance, optimization, latest, ecosystem, including, uv |
|
||||
| `python-testing-patterns` | Implement comprehensive testing strategies with pytest, fixtures, mocking, and test-driven development. Use when writing Python tests, setting up test suites... | python | python, testing, pytest, fixtures, mocking, test, driven, development, writing, tests, setting, up |
|
||||
| `react-modernization` | Upgrade React applications to latest versions, migrate from class components to hooks, and adopt concurrent features. Use when modernizing React codebases, m... | react, modernization | react, modernization, upgrade, applications, latest, versions, migrate, class, components, hooks, adopt, concurrent |
|
||||
| `react-native-architecture` | Build production React Native apps with Expo, navigation, native modules, offline sync, and cross-platform patterns. Use when developing mobile apps, impleme... | react, native, architecture | react, native, architecture, apps, expo, navigation, modules, offline, sync, cross, platform, developing |
|
||||
| `react-patterns` | Modern React patterns and principles. Hooks, composition, performance, TypeScript best practices. | react | react, principles, hooks, composition, performance, typescript |
|
||||
| `react-state-management` | Master modern React state management with Redux Toolkit, Zustand, Jotai, and React Query. Use when setting up global state, managing server state, or choosin... | react, state | react, state, redux, toolkit, zustand, jotai, query, setting, up, global, managing, server |
|
||||
| `reference-builder` | Creates exhaustive technical references and API documentation. Generates comprehensive parameter listings, configuration guides, and searchable reference mat... | reference, builder | reference, builder, creates, exhaustive, technical, references, api, documentation, generates, parameter, listings, configuration |
|
||||
| `remotion-best-practices` | Best practices for Remotion - Video creation in React | remotion, video, react, animation, composition | remotion, video, react, animation, composition, creation |
|
||||
| `ruby-pro` | Write idiomatic Ruby code with metaprogramming, Rails patterns, and performance optimization. Specializes in Ruby on Rails, gem development, and testing fram... | ruby | ruby, pro, write, idiomatic, code, metaprogramming, rails, performance, optimization, specializes, gem, development |
|
||||
| `rust-async-patterns` | Master Rust async programming with Tokio, async traits, error handling, and concurrent patterns. Use when building async Rust applications, implementing conc... | rust, async | rust, async, programming, tokio, traits, error, handling, concurrent, building, applications, implementing, debugging |
|
||||
| `rust-pro` | Master Rust 1.75+ with modern async patterns, advanced type system features, and production-ready systems programming. Expert in the latest Rust ecosystem in... | rust | rust, pro, 75, async, type, features, programming, latest, ecosystem, including, tokio, axum |
|
||||
| `senior-fullstack` | Comprehensive fullstack development skill for building complete web applications with React, Next.js, Node.js, GraphQL, and PostgreSQL. Includes project scaf... | senior, fullstack | senior, fullstack, development, skill, building, complete, web, applications, react, next, js, node |
|
||||
| `shodan-reconnaissance` | This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services usin... | shodan, reconnaissance | shodan, reconnaissance, pentesting, skill, should, used, user, asks, search, exposed, devices, internet |
|
||||
| `shopify-apps` | Expert patterns for Shopify app development including Remix/React Router apps, embedded apps with App Bridge, webhook handling, GraphQL Admin API, Polaris co... | shopify, apps | shopify, apps, app, development, including, remix, react, router, embedded, bridge, webhook, handling |
|
||||
| `shopify-development` | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid.
|
||||
TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, development, apps, extensions, themes, graphql, admin, api, cli, polaris, ui, liquid |
|
||||
| `slack-bot-builder` | Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event h... | slack, bot, builder | slack, bot, builder, apps, bolt, framework, python, javascript, java, covers, block, kit |
|
||||
| `swiftui-expert-skill` | Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, modern APIs, Swift concurrency, and iOS ... | swiftui, skill | swiftui, skill, write, review, improve, code, following, state, view, composition, performance, apis |
|
||||
| `systems-programming-rust-project` | You are a Rust project architecture expert specializing in scaffolding production-ready Rust applications. Generate complete project structures with cargo to... | programming, rust | programming, rust, architecture, specializing, scaffolding, applications, generate, complete, structures, cargo, tooling, proper |
|
||||
| `tavily-web` | Web search, content extraction, crawling, and research capabilities using Tavily API | tavily, web | tavily, web, search, content, extraction, crawling, research, capabilities, api |
|
||||
| `telegram-mini-app` | Expert in building Telegram Mini Apps (TWA) - web apps that run inside Telegram with native-like experience. Covers the TON ecosystem, Telegram Web App API, ... | telegram, mini, app | telegram, mini, app, building, apps, twa, web, run, inside, native, like, experience |
|
||||
| `temporal-python-testing` | Test Temporal workflows with pytest, time-skipping, and mocking strategies. Covers unit testing, integration testing, replay testing, and local development s... | temporal, python | temporal, python, testing, test, pytest, time, skipping, mocking, covers, unit, integration, replay |
|
||||
| `typescript-advanced-types` | Master TypeScript's advanced type system including generics, conditional types, mapped types, template literals, and utility types for building type-safe app... | typescript, advanced, types | typescript, advanced, types, type, including, generics, conditional, mapped, literals, utility, building, safe |
|
||||
| `typescript-expert` | TypeScript and JavaScript expert with deep knowledge of type-level programming, performance optimization, monorepo management, migration strategies, and mode... | typescript | typescript, javascript, deep, knowledge, type, level, programming, performance, optimization, monorepo, migration, tooling |
|
||||
| `typescript-pro` | Master TypeScript with advanced types, generics, and strict type safety. Handles complex type systems, decorators, and enterprise-grade patterns. Use PROACTI... | typescript | typescript, pro, types, generics, strict, type, safety, complex, decorators, enterprise, grade, proactively |
|
||||
| `ui-ux-pro-max` | UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 9 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwi... | ui, ux, max | ui, ux, max, pro, intelligence, 50, styles, 21, palettes, font, pairings, 20 |
|
||||
| `uv-package-manager` | Master the uv package manager for fast Python dependency management, virtual environments, and modern Python project workflows. Use when setting up Python pr... | uv, package, manager | uv, package, manager, fast, python, dependency, virtual, environments, setting, up, managing, dependencies |
|
||||
| `viral-generator-builder` | Expert in building shareable generator tools that go viral - name generators, quiz makers, avatar creators, personality tests, and calculator tools. Covers t... | viral, generator, builder | viral, generator, builder, building, shareable, go, name, generators, quiz, makers, avatar, creators |
|
||||
| `webapp-testing` | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing... | webapp | webapp, testing, toolkit, interacting, local, web, applications, playwright, supports, verifying, frontend, functionality |

## general (128)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `address-github-comments` | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | address, github, comments | address, github, comments, review, issue, open, pull, request, gh, cli |
| `agent-manager-skill` | Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | agent, manager, skill | agent, manager, skill, multiple, local, cli, agents, via, tmux, sessions, start, stop |
| `algorithmic-art` | Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, gener... | algorithmic, art | algorithmic, art, creating, p5, js, seeded, randomness, interactive, parameter, exploration, users, request |
| `angular-migration` | Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applicat... | angular, migration | angular, migration, migrate, angularjs, hybrid, mode, incremental, component, rewriting, dependency, injection, updates |
| `anti-reversing-techniques` | Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti... | anti, reversing, techniques | anti, reversing, techniques, understand, obfuscation, protection, encountered, during, software, analysis, analyzing, protected |
| `app-builder` | Main application building orchestrator. Creates full-stack applications from natural language requests. Determines project type, selects tech stack, coordina... | app, builder | app, builder, main, application, building, orchestrator, creates, full, stack, applications, natural, language |
| `app-builder/templates` | Project scaffolding templates for new applications. Use when creating new projects from scratch. Contains 12 templates for various tech stacks. | app, builder/templates | app, builder/templates, scaffolding, new, applications, creating, scratch, contains, 12, various, tech, stacks |
| `arm-cortex-expert` | Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD). Decades of ... | arm, cortex | arm, cortex, senior, embedded, software, engineer, specializing, firmware, driver, development, microcontrollers, teensy |
| `avalonia-layout-zafiro` | Guidelines for modern Avalonia UI layout using Zafiro.Avalonia, emphasizing shared styles, generic components, and avoiding XAML redundancy. | avalonia, layout, zafiro | avalonia, layout, zafiro, guidelines, ui, emphasizing, shared, styles, generic, components, avoiding, xaml |
| `avalonia-zafiro-development` | Mandatory skills, conventions, and behavioral rules for Avalonia UI development using the Zafiro toolkit. | avalonia, zafiro | avalonia, zafiro, development, mandatory, skills, conventions, behavioral, rules, ui, toolkit |
| `backtesting-frameworks` | Build robust backtesting systems for trading strategies with proper handling of look-ahead bias, survivorship bias, and transaction costs. Use when developin... | backtesting, frameworks | backtesting, frameworks, robust, trading, proper, handling, look, ahead, bias, survivorship, transaction, costs |
| `bazel-build-optimization` | Optimize Bazel builds for large-scale monorepos. Use when configuring Bazel, implementing remote execution, or optimizing build performance for enterprise co... | bazel, build, optimization | bazel, build, optimization, optimize, large, scale, monorepos, configuring, implementing, remote, execution, optimizing |
| `blockchain-developer` | Build production-ready Web3 applications, smart contracts, and decentralized systems. Implements DeFi protocols, NFT platforms, DAOs, and enterprise blockcha... | blockchain | blockchain, developer, web3, applications, smart, contracts, decentralized, implements, defi, protocols, nft, platforms |
| `brand-guidelines-anthropic` | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand co... | brand, guidelines, anthropic | brand, guidelines, anthropic, applies, official, colors, typography, any, sort, artifact, may, benefit |
| `brand-guidelines-community` | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand co... | brand, guidelines, community | brand, guidelines, community, applies, anthropic, official, colors, typography, any, sort, artifact, may |
| `busybox-on-windows` | How to use a Win32 build of BusyBox to run many of the standard UNIX command line tools on Windows. | busybox, on, windows | busybox, on, windows, how, win32, run, many, standard, unix, command, line |
| `c-pro` | Write efficient C code with proper memory management, pointer arithmetic, and system calls. Handles embedded systems, kernel modules, and performance-critica... | c | c, pro, write, efficient, code, proper, memory, pointer, arithmetic, calls, embedded, kernel |
| `canvas-design` | Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art... | canvas | canvas, beautiful, visual, art, png, pdf, documents, philosophy, should, skill, user, asks |
| `cc-skill-continuous-learning` | Development skill from everything-claude-code | cc, skill, continuous, learning | cc, skill, continuous, learning, development, everything, claude, code |
| `cc-skill-project-guidelines-example` | Project Guidelines Skill (Example) | cc, skill, guidelines, example | cc, skill, guidelines, example |
| `cc-skill-strategic-compact` | Development skill from everything-claude-code | cc, skill, strategic, compact | cc, skill, strategic, compact, development, everything, claude, code |
| `claude-ally-health` | A health assistant skill for medical information analysis, symptom tracking, and wellness guidance. | claude, ally, health | claude, ally, health, assistant, skill, medical, information, analysis, symptom, tracking, wellness, guidance |
| `claude-code-guide` | Master guide for using Claude Code effectively. Includes configuration templates, prompting strategies, "Thinking" keywords, debugging techniques, and best pr... | claude, code | claude, code, effectively, includes, configuration, prompting, thinking, keywords, debugging, techniques, interacting, agent |
| `claude-scientific-skills` | Scientific research and analysis skills | claude, scientific, skills | claude, scientific, skills, research, analysis |
| `claude-speed-reader` | Speed read Claude's responses at 600+ WPM using RSVP with Spritz-style ORP highlighting | claude, speed, reader | claude, speed, reader, read, responses, 600, wpm, rsvp, spritz, style, orp, highlighting |
| `claude-win11-speckit-update-skill` | Windows 11 system management | claude, win11, speckit, update, skill | claude, win11, speckit, update, skill, windows, 11 |
| `clean-code` | Pragmatic coding standards - concise, direct, no over-engineering, no unnecessary comments | clean, code | clean, code, pragmatic, coding, standards, concise, direct, no, engineering, unnecessary, comments |
| `code-documentation-code-explain` | You are a code education expert specializing in explaining complex code through clear narratives, visual diagrams, and step-by-step breakdowns. Transform dif... | code, documentation, explain | code, documentation, explain, education, specializing, explaining, complex, through, clear, narratives, visual, diagrams |
| `code-refactoring-context-restore` | Use when working with code refactoring context restore | code, refactoring, restore | code, refactoring, restore, context, working |
| `code-refactoring-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | code, refactoring, tech, debt | code, refactoring, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, codebase |
| `code-review-excellence` | Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use wh... | code, excellence | code, excellence, review, effective, provide, constructive, feedback, catch, bugs, early, foster, knowledge |
| `codebase-cleanup-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | codebase, cleanup, tech, debt | codebase, cleanup, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, uncover |
| `commit` | Create commit messages following Sentry conventions. Use when committing code changes, writing commit messages, or formatting git history. Follows convention... | commit | commit, messages, following, sentry, conventions, committing, code, changes, writing, formatting, git, history |
| `comprehensive-review-full-review` | Use when working with comprehensive review full review | comprehensive, full | comprehensive, full, review, working |
| `comprehensive-review-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | comprehensive, pr, enhance | comprehensive, pr, enhance, review, optimization, specializing, creating, high, quality, pull, requests, facilitate |
| `concise-planning` | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | concise, planning | concise, planning, user, asks, plan, coding, task, generate, clear, actionable, atomic, checklist |
| `context-compression` | Design and evaluate compression strategies for long-running sessions | compression | compression, context, evaluate, long, running, sessions |
| `context-fundamentals` | Understand what context is, why it matters, and the anatomy of context in agent systems | fundamentals | fundamentals, context, understand, what, why, matters, anatomy, agent |
| `context-management-context-restore` | Use when working with context management context restore | restore | restore, context, working |
| `context-management-context-save` | Use when working with context management context save | save | save, context, working |
| `context-optimization` | Apply compaction, masking, and caching strategies | optimization | optimization, context, apply, compaction, masking, caching |
| `create-pr` | Create pull requests following Sentry conventions. Use when opening PRs, writing PR descriptions, or preparing changes for review. Follows Sentry's code revi... | create, pr | create, pr, pull, requests, following, sentry, conventions, opening, prs, writing, descriptions, preparing |
| `culture-index` | Index and search culture documentation | culture, index | culture, index, search, documentation |
| `daily-news-report` | Scrapes content based on a preset URL list, filters high-quality technical information, and generates daily Markdown reports. | daily, news, report | daily, news, report, scrapes, content, preset, url, list, filters, high, quality, technical |
| `debugging-strategies` | Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use ... | debugging, strategies | debugging, strategies, systematic, techniques, profiling, root, cause, analysis, efficiently, track, down, bugs |
| `debugging-toolkit-smart-debug` | Use when working with debugging toolkit smart debug | debugging, debug | debugging, debug, toolkit, smart, working |
| `design-md` | Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files | md | md, analyze, stitch, synthesize, semantic, files |
| `dispatching-parallel-agents` | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies | dispatching, parallel, agents | dispatching, parallel, agents, facing, independent, tasks, worked, without, shared, state, sequential, dependencies |
| `docx-official` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx, official | docx, official, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text |
| `dx-optimizer` | Developer Experience specialist. Improves tooling, setup, and workflows. Use PROACTIVELY when setting up new projects, after team feedback, or when developme... | dx, optimizer | dx, optimizer, developer, experience, improves, tooling, setup, proactively, setting, up, new, after |
| `environment-setup-guide` | Guide developers through setting up development environments with proper tools, dependencies, and configurations | environment, setup | environment, setup, developers, through, setting, up, development, environments, proper, dependencies, configurations |
| `error-debugging-multi-agent-review` | Use when working with error debugging multi agent review | error, debugging, multi, agent | error, debugging, multi, agent, review, working |
| `error-diagnostics-smart-debug` | Use when working with error diagnostics smart debug | error, diagnostics, debug | error, diagnostics, debug, smart, working |
| `evaluation` | Build evaluation frameworks for agent systems | evaluation | evaluation, frameworks, agent |
| `executing-plans` | Use when you have a written implementation plan to execute in a separate session with review checkpoints | executing, plans | executing, plans, written, plan, execute, separate, session, review, checkpoints |
| `fal-platform` | Platform APIs for model management, pricing, and usage tracking | fal, platform | fal, platform, apis, model, pricing, usage, tracking |
| `ffuf-claude-skill` | Web fuzzing with ffuf | ffuf, claude, skill | ffuf, claude, skill, web, fuzzing |
| `file-organizer` | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. Use when user wants ... | file, organizer | file, organizer, intelligently, organizes, files, folders, understanding, context, finding, duplicates, suggesting, better |
| `finishing-a-development-branch` | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting s... | finishing, a, branch | finishing, a, branch, development, complete, all, tests, pass, decide, how, integrate, work |
| `fix-review` | Verify fix commits address audit findings without new bugs | fix | fix, review, verify, commits, address, audit, findings, without, new, bugs |
| `framework-migration-code-migrate` | You are a code migration expert specializing in transitioning codebases between frameworks, languages, versions, and platforms. Generate comprehensive migrat... | framework, migration, code, migrate | framework, migration, code, migrate, specializing, transitioning, codebases, between, frameworks, languages, versions, platforms |
| `game-development` | Game development orchestrator. Routes to platform-specific skills based on project needs. | game | game, development, orchestrator, routes, platform, specific, skills |
| `game-development/2d-games` | 2D game development principles. Sprites, tilemaps, physics, camera. | game, development/2d, games | game, development/2d, games, 2d, development, principles, sprites, tilemaps, physics, camera |
| `game-development/3d-games` | 3D game development principles. Rendering, shaders, physics, cameras. | game, development/3d, games | game, development/3d, games, 3d, development, principles, rendering, shaders, physics, cameras |
| `game-development/game-audio` | Game audio principles. Sound design, music integration, adaptive audio systems. | game, development/game, audio | game, development/game, audio, principles, sound, music, integration, adaptive |
| `game-development/game-design` | Game design principles. GDD structure, balancing, player psychology, progression. | game, development/game | game, development/game, principles, gdd, structure, balancing, player, psychology, progression |
| `game-development/pc-games` | PC and console game development principles. Engine selection, platform features, optimization strategies. | game, development/pc, games | game, development/pc, games, pc, console, development, principles, engine, selection, platform, features, optimization |
| `game-development/vr-ar` | VR/AR development principles. Comfort, interaction, performance requirements. | game, development/vr, ar | game, development/vr, ar, vr, development, principles, comfort, interaction, performance, requirements |
| `game-development/web-games` | Web browser game development principles. Framework selection, WebGPU, optimization, PWA. | game, development/web, games | game, development/web, games, web, browser, development, principles, framework, selection, webgpu, optimization, pwa |
| `git-advanced-workflows` | Master advanced Git workflows including rebasing, cherry-picking, bisect, worktrees, and reflog to maintain clean history and recover from any situation. Use... | git, advanced | git, advanced, including, rebasing, cherry, picking, bisect, worktrees, reflog, maintain, clean, history |
| `git-pr-workflows-onboard` | You are an **expert onboarding specialist and knowledge transfer architect** with deep experience in remote-first organizations, technical team integration, ... | git, pr, onboard | git, pr, onboard, onboarding, knowledge, transfer, architect, deep, experience, remote, first, organizations |
| `git-pr-workflows-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | git, pr, enhance | git, pr, enhance, optimization, specializing, creating, high, quality, pull, requests, facilitate, efficient |
| `imagen` | | imagen | imagen |
| `infinite-gratitude` | Multi-agent research skill for parallel research execution (10 agents, battle-tested with real case studies). | infinite, gratitude | infinite, gratitude, multi, agent, research, skill, parallel, execution, 10, agents, battle, tested |
| `interactive-portfolio` | Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, d... | interactive, portfolio | interactive, portfolio, building, portfolios, actually, land, jobs, clients, just, showing, work, creating |
| `last30days` | Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool. | last30days | last30days, research, topic, last, 30, days, reddit, web, become, write, copy, paste |
| `legacy-modernizer` | Refactor legacy codebases, migrate outdated frameworks, and implement gradual modernization. Handles technical debt, dependency updates, and backward compati... | legacy, modernizer | legacy, modernizer, refactor, codebases, migrate, outdated, frameworks, gradual, modernization, technical, debt, dependency |
| `linear-claude-skill` | Manage Linear issues, projects, and teams | linear, claude, skill | linear, claude, skill, issues, teams |
| `lint-and-validate` | Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Tri... | lint, and, validate | lint, and, validate, automatic, quality, control, linting, static, analysis, procedures, after, every |
| `linux-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "a... | linux, privilege, escalation | linux, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc |
| `linux-shell-scripting` | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or... | linux, shell, scripting | linux, shell, scripting, scripts, skill, should, used, user, asks, bash, automate, tasks |
| `memory-systems` | Design short-term, long-term, and graph-based memory architectures | memory | memory, short, term, long, graph, architectures |
| `micro-saas-launcher` | Expert in launching small, focused SaaS products fast - the indie hacker approach to building profitable software. Covers idea validation, MVP development, p... | micro, saas, launcher | micro, saas, launcher, launching, small, products, fast, indie, hacker, approach, building, profitable |
| `monorepo-management` | Master monorepo management with Turborepo, Nx, and pnpm workspaces to build efficient, scalable multi-package repositories with optimized builds and dependen... | monorepo | monorepo, turborepo, nx, pnpm, workspaces, efficient, scalable, multi, package, repositories, optimized, dependency |
| `nft-standards` | Implement NFT standards (ERC-721, ERC-1155) with proper metadata handling, minting strategies, and marketplace integration. Use when creating NFT contracts, ... | nft, standards | nft, standards, erc, 721, 1155, proper, metadata, handling, minting, marketplace, integration, creating |
| `nosql-expert` | Expert guidance for distributed NoSQL databases (Cassandra, DynamoDB). Focuses on mental models, query-first modeling, single-table design, and avoiding hot ... | nosql | nosql, guidance, distributed, databases, cassandra, dynamodb, mental, models, query, first, modeling, single |
|
||||
| `obsidian-clipper-template-creator` | Guide for creating templates for the Obsidian Web Clipper. Use when you want to create a new clipping template, understand available variables, or format cli... | obsidian, clipper, creator | obsidian, clipper, creator, creating, web, want, new, clipping, understand, available, variables, format |
|
||||
| `onboarding-cro` | When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also use when the user mentions "onboarding ... | onboarding, cro | onboarding, cro, user, wants, optimize, post, signup, activation, first, run, experience, time |
|
||||
| `paid-ads` | When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X, or other ad platforms. Also use when ... | paid, ads | paid, ads, user, wants, advertising, campaigns, google, meta, facebook, instagram, linkedin, twitter |
|
||||
| `paypal-integration` | Integrate PayPal payment processing with support for express checkout, subscriptions, and refund management. Use when implementing PayPal payments, processin... | paypal, integration | paypal, integration, integrate, payment, processing, express, checkout, subscriptions, refund, implementing, payments, online |
|
||||
| `performance-profiling` | Performance profiling principles. Measurement, analysis, and optimization techniques. | performance, profiling | performance, profiling, principles, measurement, analysis, optimization, techniques |
|
||||
| `personal-tool-builder` | Expert in building custom tools that solve your own problems first. The best products often start as personal tools - scratch your own itch, build for yourse... | personal, builder | personal, builder, building, custom, solve, own, problems, first, products, often, start, scratch |
|
||||
| `plan-writing` | Structured task planning with clear breakdowns, dependencies, and verification criteria. Use when implementing features, refactoring, or any multi-step work. | plan, writing | plan, writing, structured, task, planning, clear, breakdowns, dependencies, verification, criteria, implementing, features |
|
||||
| `planning-with-files` | Implements Manus-style file-based planning for complex tasks. Creates task_plan.md, findings.md, and progress.md. Use when starting complex multi-step tasks,... | planning, with, files | planning, with, files, implements, manus, style, file, complex, tasks, creates, task, plan |
|
||||
| `posix-shell-pro` | Expert in strict POSIX sh scripting for maximum portability across Unix-like systems. Specializes in shell scripts that run on any POSIX-compliant shell (das... | posix, shell | posix, shell, pro, strict, sh, scripting, maximum, portability, unix, like, specializes, scripts |
|
||||
| `pptx-official` | Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying o... | pptx, official | pptx, official, presentation, creation, editing, analysis, claude, work, presentations, files, creating, new |
|
||||
| `privilege-escalation-methods` | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploi... | privilege, escalation, methods | privilege, escalation, methods, skill, should, used, user, asks, escalate, privileges, get, root |
|
||||
| `prompt-library` | Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use... | prompt, library | prompt, library, curated, collection, high, quality, prompts, various, cases, includes, role, task |
|
||||
| `readme` | When the user wants to create or update a README.md file for a project. Also use when the user says | readme | readme, user, wants, update, md, file, says |
|
||||
| `receiving-code-review` | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technic... | receiving, code | receiving, code, review, feedback, before, implementing, suggestions, especially, seems, unclear, technically, questionable |
|
||||
| `referral-program` | When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy. Also use when the user mentions 'referr... | referral, program | referral, program, user, wants, optimize, analyze, affiliate, word, mouth, mentions, ambassador, viral |
|
||||
| `requesting-code-review` | Use when completing tasks, implementing major features, or before merging to verify work meets requirements | requesting, code | requesting, code, review, completing, tasks, implementing, major, features, before, merging, verify, work |
|
||||
| `search-specialist` | Expert web researcher using advanced search techniques and synthesis. Masters search operators, result filtering, and multi-source verification. Handles comp... | search | search, web, researcher, techniques, synthesis, masters, operators, result, filtering, multi, source, verification |
|
||||
| `sharp-edges` | Identify error-prone APIs and dangerous configurations | sharp, edges | sharp, edges, identify, error, prone, apis, dangerous, configurations |
|
||||
| `shellcheck-configuration` | Master ShellCheck static analysis configuration and usage for shell script quality. Use when setting up linting infrastructure, fixing code issues, or ensuri... | shellcheck, configuration | shellcheck, configuration, static, analysis, usage, shell, script, quality, setting, up, linting, infrastructure |
|
||||
| `signup-flow-cro` | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the user mentions "signup conversions," "reg... | signup, flow, cro | signup, flow, cro, user, wants, optimize, registration, account, creation, trial, activation, flows |
|
||||
| `skill-creator` | Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capa... | skill, creator | skill, creator, creating, effective, skills, should, used, users, want, new, update, existing |
|
||||
| `skill-rails-upgrade` | Analyze Rails apps and provide upgrade assessments | skill, rails, upgrade | skill, rails, upgrade, analyze, apps, provide, assessments |
|
||||
| `slack-gif-creator` | Knowledge and utilities for creating animated GIFs optimized for Slack. Provides constraints, validation tools, and animation concepts. Use when users reques... | slack, gif, creator | slack, gif, creator, knowledge, utilities, creating, animated, gifs, optimized, provides, constraints, validation |
|
||||
| `social-content` | When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram, TikTok, Facebook, or other platforms. A... | social, content | social, content, user, wants, creating, scheduling, optimizing, media, linkedin, twitter, instagram, tiktok |
|
||||
| `subagent-driven-development` | Use when executing implementation plans with independent tasks in the current session | subagent, driven | subagent, driven, development, executing, plans, independent, tasks, current, session |
|
||||
| `superpowers-lab` | Lab environment for Claude superpowers | superpowers, lab | superpowers, lab, environment, claude |
|
||||
| `theme-factory` | Toolkit for styling artifacts with a theme. These artifacts can be slides, docs, reportings, HTML landing pages, etc. There are 10 pre-set themes with colors... | theme, factory | theme, factory, toolkit, styling, artifacts, these, slides, docs, reportings, html, landing, pages |
|
||||
| `threejs-skills` | Three.js skills for creating 3D elements and interactive experiences | threejs, skills | threejs, skills, three, js, creating, 3d, elements, interactive, experiences |
|
||||
| `turborepo-caching` | Configure Turborepo for efficient monorepo builds with local and remote caching. Use when setting up Turborepo, optimizing build pipelines, or implementing d... | turborepo, caching | turborepo, caching, configure, efficient, monorepo, local, remote, setting, up, optimizing, pipelines, implementing |
|
||||
| `tutorial-engineer` | Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples. U... | tutorial | tutorial, engineer, creates, step, tutorials, educational, content, code, transforms, complex, concepts, progressive |
|
||||
| `ui-skills` | Opinionated, evolving constraints to guide agents when building interfaces | ui, skills | ui, skills, opinionated, evolving, constraints, agents, building, interfaces |
|
||||
| `ui-ux-designer` | Create interface designs, wireframes, and design systems. Masters user research, accessibility standards, and modern design tools. Specializes in design toke... | ui, ux, designer | ui, ux, designer, interface, designs, wireframes, masters, user, research, accessibility, standards, specializes |
|
||||
| `upgrading-expo` | Upgrade Expo SDK versions | upgrading, expo | upgrading, expo, upgrade, sdk, versions |
|
||||
| `upstash-qstash` | Upstash QStash expert for serverless message queues, scheduled jobs, and reliable HTTP-based task delivery without managing infrastructure. Use when: qstash,... | upstash, qstash | upstash, qstash, serverless, message, queues, scheduled, jobs, reliable, http, task, delivery, without |
|
||||
| `using-git-worktrees` | Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with sma... | using, git, worktrees | using, git, worktrees, starting, feature, work, isolation, current, workspace, before, executing, plans |
|
||||
| `using-superpowers` | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions | using, superpowers | using, superpowers, starting, any, conversation, establishes, how, find, skills, requiring, skill, invocation |
|
||||
| `verification-before-completion` | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output... | verification, before, completion | verification, before, completion, about, claim, work, complete, fixed, passing, committing, creating, prs |
|
||||
| `web-performance-optimization` | Optimize website and web application performance including loading speed, Core Web Vitals, bundle size, caching strategies, and runtime performance | web, performance, optimization | web, performance, optimization, optimize, website, application, including, loading, speed, core, vitals, bundle |
|
||||
| `windows-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Windows," "find Windows privesc vectors," "enumerate Windows for privilege escalation... | windows, privilege, escalation | windows, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc |
|
||||
| `writing-plans` | Use when you have a spec or requirements for a multi-step task, before touching code | writing, plans | writing, plans, spec, requirements, multi, step, task, before, touching, code |
|
||||
| `writing-skills` | Use when creating, updating, or improving agent skills. | writing, skills | writing, skills, creating, updating, improving, agent |
|
||||
| `x-article-publisher-skill` | Publish articles to X/Twitter | x, article, publisher, skill | x, article, publisher, skill, publish, articles, twitter |
|
||||
|
||||
## infrastructure (78)
|
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `agent-evaluation` | Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents... | agent, evaluation | agent, evaluation, testing, benchmarking, llm, agents, including, behavioral, capability, assessment, reliability, metrics |
|
||||
| `airflow-dag-patterns` | Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating wor... | airflow, dag | airflow, dag, apache, dags, operators, sensors, testing, deployment, creating, data, pipelines, orchestrating |
|
||||
| `api-testing-observability-api-mock` | You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and e... | api, observability, mock | api, observability, mock, testing, mocking, specializing, realistic, development, demos, mocks, simulate, real |
|
||||
| `application-performance-performance-optimization` | Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across... | application, performance, optimization | application, performance, optimization, optimize, profiling, observability, backend, frontend, tuning, coordinating, stack |
|
||||
| `aws-serverless` | Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns... | aws, serverless | aws, serverless, specialized, skill, building, applications, covers, lambda, functions, api, gateway, dynamodb |
|
||||
| `aws-skills` | AWS development with infrastructure automation and cloud architecture patterns | aws, skills | aws, skills, development, infrastructure, automation, cloud, architecture |
|
||||
| `backend-architect` | Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems. Masters REST/GraphQL/gRPC APIs, event-driv... | backend | backend, architect, specializing, scalable, api, microservices, architecture, distributed, masters, rest, graphql, grpc |
|
||||
| `backend-development-feature-development` | Orchestrate end-to-end backend feature development from requirements to deployment. Use when coordinating multi-phase feature delivery across teams and servi... | backend | backend, development, feature, orchestrate, requirements, deployment, coordinating, multi, phase, delivery, teams |
|
||||
| `bash-defensive-patterns` | Master defensive Bash programming techniques for production-grade scripts. Use when writing robust shell scripts, CI/CD pipelines, or system utilities requir... | bash, defensive | bash, defensive, programming, techniques, grade, scripts, writing, robust, shell, ci, cd, pipelines |
|
||||
| `bash-pro` | Master of defensive Bash scripting for production automation, CI/CD pipelines, and system utilities. Expert in safe, portable, and testable shell scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines, utilities, safe, portable, testable |
|
||||
| `bats-testing-patterns` | Master Bash Automated Testing System (Bats) for comprehensive shell script testing. Use when writing tests for shell scripts, CI/CD pipelines, or requiring t... | bats | bats, testing, bash, automated, shell, script, writing, tests, scripts, ci, cd, pipelines |
|
||||
| `c4-container` | Expert C4 Container-level documentation specialist. Synthesizes Component-level documentation into Container-level architecture, mapping components to deploy... | c4, container | c4, container, level, documentation, synthesizes, component, architecture, mapping, components, deployment, units, documenting |
|
||||
| `claude-d3js-skill` | Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisati... | claude, d3js, skill | claude, d3js, skill, d3, viz, creating, interactive, data, visualisations, js, should, used |
|
||||
| `code-review-ai-ai-review` | You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Levera... | code, ai | code, ai, review, powered, combining, automated, static, analysis, intelligent, recognition, devops, leverage |
|
||||
| `cost-optimization` | Optimize cloud costs through resource rightsizing, tagging strategies, reserved instances, and spending analysis. Use when reducing cloud expenses, analyzing... | cost, optimization | cost, optimization, optimize, cloud, costs, through, resource, rightsizing, tagging, reserved, instances, spending |
|
||||
| `data-engineer` | Build scalable data pipelines, modern data warehouses, and real-time streaming architectures. Implements Apache Spark, dbt, Airflow, and cloud-native data pl... | data | data, engineer, scalable, pipelines, warehouses, real, time, streaming, architectures, implements, apache, spark |
|
||||
| `data-engineering-data-pipeline` | You are a data pipeline architecture expert specializing in scalable, reliable, and cost-effective data pipelines for batch and streaming data processing. | data, engineering, pipeline | data, engineering, pipeline, architecture, specializing, scalable, reliable, cost, effective, pipelines, batch, streaming |
|
||||
| `database-cloud-optimization-cost-optimize` | You are a cloud cost optimization expert specializing in reducing infrastructure expenses while maintaining performance and reliability. Analyze cloud spendi... | database, cloud, optimization, cost, optimize | database, cloud, optimization, cost, optimize, specializing, reducing, infrastructure, expenses, while, maintaining, performance |
|
||||
| `database-migrations-migration-observability` | Migration monitoring, CDC, and observability infrastructure | database, cdc, debezium, kafka, prometheus, grafana, monitoring | database, cdc, debezium, kafka, prometheus, grafana, monitoring, migrations, migration, observability, infrastructure |
|
||||
| `database-optimizer` | Expert database optimizer specializing in modern performance tuning, query optimization, and scalable architectures. Masters advanced indexing, N+1 resolutio... | database, optimizer | database, optimizer, specializing, performance, tuning, query, optimization, scalable, architectures, masters, indexing, resolution |
|
||||
| `deployment-procedures` | Production deployment principles and decision-making. Safe deployment workflows, rollback strategies, and verification. Teaches thinking, not scripts. | deployment, procedures | deployment, procedures, principles, decision, making, safe, rollback, verification, teaches, thinking, scripts |
|
||||
| `deployment-validation-config-validate` | You are a configuration management expert specializing in validating, testing, and ensuring the correctness of application configurations. Create comprehensi... | deployment, validation, config, validate | deployment, validation, config, validate, configuration, specializing, validating, testing, ensuring, correctness, application, configurations |
|
||||
| `distributed-debugging-debug-trace` | You are a debugging expert specializing in setting up comprehensive debugging environments, distributed tracing, and diagnostic tools. Configure debugging wo... | distributed, debugging, debug, trace | distributed, debugging, debug, trace, specializing, setting, up, environments, tracing, diagnostic, configure, solutions |
|
||||
| `distributed-tracing` | Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microserv... | distributed, tracing | distributed, tracing, jaeger, tempo, track, requests, microservices, identify, performance, bottlenecks, debugging, analyzing |
|
||||
| `django-pro` | Master Django 5.x with async views, DRF, Celery, and Django Channels. Build scalable web applications with proper architecture, testing, and deployment. Use ... | django | django, pro, async, views, drf, celery, channels, scalable, web, applications, proper, architecture |
|
||||
| `e2e-testing-patterns` | Master end-to-end testing with Playwright and Cypress to build reliable test suites that catch bugs, improve confidence, and enable fast deployment. Use when... | e2e | e2e, testing, playwright, cypress, reliable, test, suites, catch, bugs, improve, confidence, enable |
|
||||
| `error-debugging-error-analysis` | You are an expert error analysis specialist with deep expertise in debugging distributed systems, analyzing production incidents, and implementing comprehens... | error, debugging | error, debugging, analysis, deep, expertise, distributed, analyzing, incidents, implementing, observability, solutions |
|
||||
| `error-debugging-error-trace` | You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, conf... | error, debugging, trace | error, debugging, trace, tracking, observability, specializing, implementing, monitoring, solutions, set, up, configure |
|
||||
| `error-diagnostics-error-analysis` | You are an expert error analysis specialist with deep expertise in debugging distributed systems, analyzing production incidents, and implementing comprehens... | error, diagnostics | error, diagnostics, analysis, deep, expertise, debugging, distributed, analyzing, incidents, implementing, observability, solutions |
|
||||
| `error-diagnostics-error-trace` | You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, conf... | error, diagnostics, trace | error, diagnostics, trace, tracking, observability, specializing, implementing, monitoring, solutions, set, up, configure |
|
||||
| `expo-deployment` | Deploy Expo apps to production | expo, deployment | expo, deployment, deploy, apps |
|
||||
| `file-uploads` | Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle l... | file, uploads | file, uploads, handling, cloud, storage, covers, s3, cloudflare, r2, presigned, urls, multipart |
|
||||
| `flutter-expert` | Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment. Handles state management, animations, testing, and performance optim... | flutter | flutter, development, dart, widgets, multi, platform, deployment, state, animations, testing, performance, optimization |
|
||||
| `game-development/game-art` | Game art principles. Visual style selection, asset pipeline, animation workflow. | game, development/game, art | game, development/game, art, principles, visual, style, selection, asset, pipeline, animation |
|
||||
| `gcp-cloud-run` | Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven)... | gcp, cloud, run | gcp, cloud, run, specialized, skill, building, serverless, applications, covers, containerized, functions, event |
|
||||
| `git-pr-workflows-git-workflow` | Orchestrate a comprehensive git workflow from code review through PR creation, leveraging specialized agents for quality assurance, testing, and deployment r... | git, pr | git, pr, orchestrate, code, review, through, creation, leveraging, specialized, agents, quality, assurance |
|
||||
| `github-actions-templates` | Create production-ready GitHub Actions workflows for automated testing, building, and deploying applications. Use when setting up CI/CD with GitHub Actions, ... | github, actions | github, actions, automated, testing, building, deploying, applications, setting, up, ci, cd, automating |
|
||||
| `github-workflow-automation` | Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows,... | github | github, automation, automate, ai, assistance, includes, pr, reviews, issue, triage, ci, cd |
|
||||
| `gitlab-ci-patterns` | Build GitLab CI/CD pipelines with multi-stage workflows, caching, and distributed runners for scalable automation. Use when implementing GitLab CI/CD, optimi... | gitlab, ci | gitlab, ci, cd, pipelines, multi, stage, caching, distributed, runners, scalable, automation, implementing |
|
||||
| `gitops-workflow` | Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOp... | gitops | gitops, argocd, flux, automated, declarative, kubernetes, deployments, continuous, reconciliation, implementing, automating, setting |
|
||||
| `grafana-dashboards` | Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visua... | grafana, dashboards | grafana, dashboards, real, time, visualization, application, metrics, building, monitoring, visualizing, creating, operational |
|
||||
| `helm-chart-scaffolding` | Design, organize, and manage Helm charts for templating and packaging Kubernetes applications with reusable configurations. Use when creating Helm charts, pa... | helm, chart | helm, chart, scaffolding, organize, charts, templating, packaging, kubernetes, applications, reusable, configurations, creating |
|
||||
| `hugging-face-cli` | Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create ... | hugging, face, cli | hugging, face, cli, execute, hub, operations, hf, user, download, models, datasets, spaces |
|
||||
| `hybrid-cloud-networking` | Configure secure, high-performance connectivity between on-premises infrastructure and cloud platforms using VPN and dedicated connections. Use when building... | hybrid, cloud, networking | hybrid, cloud, networking, configure, secure, high, performance, connectivity, between, premises, infrastructure, platforms |
|
||||
| `istio-traffic-management` | Configure Istio traffic management including routing, load balancing, circuit breakers, and canary deployments. Use when implementing service mesh traffic po... | istio, traffic | istio, traffic, configure, including, routing, load, balancing, circuit, breakers, canary, deployments, implementing |
|
||||
| `iterate-pr` | Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automa... | iterate, pr | iterate, pr, until, ci, passes, fix, failures, address, review, feedback, continuously, push |
|
||||
| `java-pro` | Master Java 21+ with modern features like virtual threads, pattern matching, and Spring Boot 3.x. Expert in the latest Java ecosystem including GraalVM, Proj... | java | java, pro, 21, features, like, virtual, threads, matching, spring, boot, latest, ecosystem |
|
||||
| `kpi-dashboard-design` | Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboard... | kpi, dashboard | kpi, dashboard, effective, dashboards, metrics, selection, visualization, real, time, monitoring, building, business |
|
||||
| `langfuse` | Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, Lla... | langfuse | langfuse, open, source, llm, observability, platform, covers, tracing, prompt, evaluation, datasets, integration |
|
||||
| `llm-app-patterns` | Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI... | llm, app | llm, app, building, applications, covers, rag, pipelines, agent, architectures, prompt, ides, llmops |
|
||||
| `machine-learning-ops-ml-pipeline` | Design and implement a complete ML pipeline for: $ARGUMENTS | machine, learning, ops, ml, pipeline | machine, learning, ops, ml, pipeline, complete, arguments |
|
||||
| `microservices-patterns` | Design microservices architectures with service boundaries, event-driven communication, and resilience patterns. Use when building distributed systems, decom... | microservices | microservices, architectures, boundaries, event, driven, communication, resilience, building, distributed, decomposing, monoliths, implementing |
|
||||
| `ml-engineer` | Build production ML systems with PyTorch 2.x, TensorFlow, and modern ML frameworks. Implements model serving, feature engineering, A/B testing, and monitorin... | ml | ml, engineer, pytorch, tensorflow, frameworks, implements, model, serving, feature, engineering, testing, monitoring |
|
||||
| `ml-pipeline-workflow` | Build end-to-end MLOps pipelines from data preparation through model training, validation, and production deployment. Use when creating ML pipelines, impleme... | ml, pipeline | ml, pipeline, mlops, pipelines, data, preparation, through, model, training, validation, deployment, creating |
|
||||
| `mlops-engineer` | Build comprehensive ML pipelines, experiment tracking, and model registries with MLflow, Kubeflow, and modern MLOps tools. Implements automated training, dep... | mlops | mlops, engineer, ml, pipelines, experiment, tracking, model, registries, mlflow, kubeflow, implements, automated |
|
||||
| `moodle-external-api-development` | Create custom external web service APIs for Moodle LMS. Use when implementing web services for course management, user tracking, quiz operations, or custom p... | moodle, external, api | moodle, external, api, development, custom, web, apis, lms, implementing, course, user, tracking |
| `multi-cloud-architecture` | Design multi-cloud architectures using a decision framework to select and integrate services across AWS, Azure, and GCP. Use when building multi-cloud system... | multi, cloud, architecture | multi, cloud, architecture, architectures, decision, framework, select, integrate, aws, azure, gcp, building |
| `network-101` | This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test n... | network, 101 | network, 101, skill, should, used, user, asks, set, up, web, server, configure |
| `observability-monitoring-monitor-setup` | You are a monitoring and observability expert specializing in implementing comprehensive monitoring solutions. Set up metrics collection, distributed tracing... | observability, monitoring, monitor, setup | observability, monitoring, monitor, setup, specializing, implementing, solutions, set, up, metrics, collection, distributed |
| `observability-monitoring-slo-implement` | You are an SLO (Service Level Objective) expert specializing in implementing reliability standards and error budget-based practices. Design SLO frameworks, d... | observability, monitoring, slo, implement | observability, monitoring, slo, implement, level, objective, specializing, implementing, reliability, standards, error, budget |
| `performance-engineer` | Expert performance engineer specializing in modern observability, application optimization, and scalable system performance. Masters OpenTelemetry, distribut... | performance | performance, engineer, specializing, observability, application, optimization, scalable, masters, opentelemetry, distributed, tracing, load |
| `performance-testing-review-ai-review` | You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Levera... | performance, ai | performance, ai, testing, review, powered, code, combining, automated, static, analysis, intelligent, recognition |
| `prometheus-configuration` | Set up Prometheus for comprehensive metric collection, storage, and monitoring of infrastructure and applications. Use when implementing metrics collection, ... | prometheus, configuration | prometheus, configuration, set, up, metric, collection, storage, monitoring, infrastructure, applications, implementing, metrics |
| `protocol-reverse-engineering` | Master network protocol reverse engineering including packet analysis, protocol dissection, and custom protocol documentation. Use when analyzing network tra... | protocol, reverse, engineering | protocol, reverse, engineering, network, including, packet, analysis, dissection, custom, documentation, analyzing, traffic |
| `server-management` | Server management principles and decision-making. Process management, monitoring strategy, and scaling decisions. Teaches thinking, not commands. | server | server, principles, decision, making, process, monitoring, scaling, decisions, teaches, thinking, commands |
| `service-mesh-observability` | Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debu... | service, mesh, observability | service, mesh, observability, meshes, including, distributed, tracing, metrics, visualization, setting, up, monitoring |
| `slo-implementation` | Define and implement Service Level Indicators (SLIs) and Service Level Objectives (SLOs) with error budgets and alerting. Use when establishing reliability t... | slo | slo, define, level, indicators, slis, objectives, slos, error, budgets, alerting, establishing, reliability |
| `sql-pro` | Master modern SQL with cloud-native databases, OLTP/OLAP optimization, and advanced query techniques. Expert in performance tuning, data modeling, and hybrid... | sql | sql, pro, cloud, native, databases, oltp, olap, optimization, query, techniques, performance, tuning |
| `temporal-python-pro` | Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testin... | temporal, python | temporal, python, pro, orchestration, sdk, implements, durable, saga, distributed, transactions, covers, async |
| `terraform-module-library` | Build reusable Terraform modules for AWS, Azure, and GCP infrastructure following infrastructure-as-code best practices. Use when creating infrastructure mod... | terraform, module, library | terraform, module, library, reusable, modules, aws, azure, gcp, infrastructure, following, code, creating |
| `terraform-skill` | Terraform infrastructure as code best practices | terraform, skill | terraform, skill, infrastructure, code |
| `test-automator` | Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with a... | automator | automator, test, ai, powered, automation, frameworks, self, healing, tests, quality, engineering, scalable |
| `unity-developer` | Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform de... | unity | unity, developer, games, optimized, scripts, efficient, rendering, proper, asset, masters, lts, urp |
| `vercel-deploy-claimable` | Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as | vercel, deploy, claimable | vercel, deploy, claimable, applications, websites, skill, user, requests, deployment, actions, such |
| `vercel-deployment` | Expert knowledge for deploying to Vercel with Next.js Use when: vercel, deploy, deployment, hosting, production. | vercel, deployment | vercel, deployment, knowledge, deploying, next, js, deploy, hosting |
| `voice-agents` | Voice agents represent the frontier of AI interaction - humans speaking naturally with AI systems. The challenge isn't just speech recognition and synthesis,... | voice, agents | voice, agents, represent, frontier, ai, interaction, humans, speaking, naturally, challenge, isn, just |
| `wireshark-analysis` | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow... | wireshark | wireshark, network, traffic, analysis, skill, should, used, user, asks, analyze, capture, packets |
| `workflow-automation` | Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost... | | automation, infrastructure, makes, ai, agents, reliable, without, durable, execution, network, hiccup, during |
## security (112)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `accessibility-compliance-accessibility-audit` | You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers,... | accessibility, compliance, audit | accessibility, compliance, audit, specializing, wcag, inclusive, assistive, technology, compatibility, conduct, audits, identify |
| `active-directory-attacks` | This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration"... | active, directory, attacks | active, directory, attacks, skill, should, used, user, asks, attack, exploit, ad, kerberoasting |
| `agent-memory-systems` | Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-te... | agent, memory | agent, memory, cornerstone, intelligent, agents, without, every, interaction, starts, zero, skill, covers |
| `ai-product` | Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integra... | ai, product | ai, product, every, powered, question, whether, ll, right, ship, demo, falls, apart |
| `api-fuzzing-bug-bounty` | This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetrat... | api, fuzzing, bug, bounty | api, fuzzing, bug, bounty, skill, should, used, user, asks, test, security, fuzz |
| `api-security-best-practices` | Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities | api, security, best, practices | api, security, best, practices, secure, including, authentication, authorization, input, validation, rate, limiting |
| `attack-tree-construction` | Build comprehensive attack trees to visualize threat paths. Use when mapping attack scenarios, identifying defense gaps, or communicating security risks to s... | attack, tree, construction | attack, tree, construction, trees, visualize, threat, paths, mapping, scenarios, identifying, defense, gaps |
| `auth-implementation-patterns` | Master authentication and authorization patterns including JWT, OAuth2, session management, and RBAC to build secure, scalable access control systems. Use wh... | auth | auth, authentication, authorization, including, jwt, oauth2, session, rbac, secure, scalable, access, control |
| `aws-penetration-testing` | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalatio... | aws, penetration | aws, penetration, testing, skill, should, used, user, asks, pentest, test, security, enumerate |
| `backend-security-coder` | Expert in secure backend coding practices specializing in input validation, authentication, and API security. Use PROACTIVELY for backend security implementa... | backend, security, coder | backend, security, coder, secure, coding, specializing, input, validation, authentication, api, proactively, implementations |
| `broken-authentication` | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential s... | broken, authentication | broken, authentication, testing, skill, should, used, user, asks, test, vulnerabilities, assess, session |
| `burp-suite-testing` | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability sca... | burp, suite | burp, suite, web, application, testing, skill, should, used, user, asks, intercept, http |
| `cc-skill-security-review` | Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Pro... | cc, skill, security | cc, skill, security, review, adding, authentication, handling, user, input, working, secrets, creating |
| `cicd-automation-workflow-automate` | You are a workflow automation expert specializing in creating efficient CI/CD pipelines, GitHub Actions workflows, and automated development processes. Desig... | cicd, automate | cicd, automate, automation, specializing, creating, efficient, ci, cd, pipelines, github, actions, automated |
| `clerk-auth` | Expert patterns for Clerk auth implementation, middleware, organizations, webhooks, and user sync Use when: adding authentication, clerk auth, user authentic... | clerk, auth | clerk, auth, middleware, organizations, webhooks, user, sync, adding, authentication, sign, up |
| `cloud-architect` | Expert cloud architect specializing in AWS/Azure/GCP multi-cloud infrastructure design, advanced IaC (Terraform/OpenTofu/CDK), FinOps cost optimization, and ... | cloud | cloud, architect, specializing, aws, azure, gcp, multi, infrastructure, iac, terraform, opentofu, cdk |
| `cloud-penetration-testing` | This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exp... | cloud, penetration | cloud, penetration, testing, skill, should, used, user, asks, perform, assess, azure, aws |
| `code-review-checklist` | Comprehensive checklist for conducting thorough code reviews covering functionality, security, performance, and maintainability | code, checklist | code, checklist, review, conducting, thorough, reviews, covering, functionality, security, performance, maintainability |
| `code-reviewer` | Elite code review expert specializing in modern AI-powered code analysis, security vulnerabilities, performance optimization, and production reliability. Mas... | code | code, reviewer, elite, review, specializing, ai, powered, analysis, security, vulnerabilities, performance, optimization |
| `codebase-cleanup-deps-audit` | You are a dependency security expert specializing in vulnerability scanning, license compliance, and supply chain security. Analyze project dependencies for ... | codebase, cleanup, deps, audit | codebase, cleanup, deps, audit, dependency, security, specializing, vulnerability, scanning, license, compliance, supply |
| `computer-use-agents` | Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer... | computer, use, agents | computer, use, agents, ai, interact, computers, like, humans, do, viewing, screens, moving |
| `database-admin` | Expert database administrator specializing in modern cloud databases, automation, and reliability engineering. Masters AWS/Azure/GCP database services, Infra... | database, admin | database, admin, administrator, specializing, cloud, databases, automation, reliability, engineering, masters, aws, azure |
| `database-migration` | Execute database migrations across ORMs and platforms with zero-downtime strategies, data transformation, and rollback procedures. Use when migrating databas... | database, migration | database, migration, execute, migrations, orms, platforms, zero, downtime, data, transformation, rollback, procedures |
| `database-migrations-sql-migrations` | SQL database migrations with zero-downtime strategies for PostgreSQL, MySQL, SQL Server | database, sql, migrations, postgresql, mysql, flyway, liquibase, alembic, zero-downtime | database, sql, migrations, postgresql, mysql, flyway, liquibase, alembic, zero-downtime, zero, downtime, server |
| `dependency-management-deps-audit` | You are a dependency security expert specializing in vulnerability scanning, license compliance, and supply chain security. Analyze project dependencies for ... | dependency, deps, audit | dependency, deps, audit, security, specializing, vulnerability, scanning, license, compliance, supply, chain, analyze |
| `deployment-engineer` | Expert deployment engineer specializing in modern CI/CD pipelines, GitOps workflows, and advanced deployment automation. Masters GitHub Actions, ArgoCD/Flux,... | deployment | deployment, engineer, specializing, ci, cd, pipelines, gitops, automation, masters, github, actions, argocd |
| `deployment-pipeline-design` | Design multi-stage CI/CD pipelines with approval gates, security checks, and deployment orchestration. Use when architecting deployment workflows, setting up... | deployment, pipeline | deployment, pipeline, multi, stage, ci, cd, pipelines, approval, gates, security, checks, orchestration |
| `design-orchestration` | Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order. Prevents premature imp... | | orchestration, orchestrates, routing, work, through, brainstorming, multi, agent, review, execution, readiness, correct |
| `devops-troubleshooter` | Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability. Masters log analysis, distributed tracing... | devops, troubleshooter | devops, troubleshooter, specializing, rapid, incident, response, debugging, observability, masters, log, analysis, distributed |
| `docker-expert` | Docker containerization expert with deep knowledge of multi-stage builds, image optimization, container security, Docker Compose orchestration, and productio... | docker | docker, containerization, deep, knowledge, multi, stage, image, optimization, container, security, compose, orchestration |
| `ethical-hacking-methodology` | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct secur... | ethical, hacking, methodology | ethical, hacking, methodology, skill, should, used, user, asks, learn, understand, penetration, testing |
| `file-path-traversal` | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web a... | file, path, traversal | file, path, traversal, testing, skill, should, used, user, asks, test, directory, exploit |
| `find-bugs` | Find bugs, security vulnerabilities, and code quality issues in local branch changes. Use when asked to review changes, find bugs, security review, or audit ... | find, bugs | find, bugs, security, vulnerabilities, code, quality, issues, local, branch, changes, asked, review |
| `firebase` | Firebase gives you a complete backend in minutes - auth, database, storage, functions, hosting. But the ease of setup hides real complexity. Security rules a... | firebase | firebase, gives, complete, backend, minutes, auth, database, storage, functions, hosting, ease, setup |
| `firmware-analyst` | Expert firmware analyst specializing in embedded systems, IoT security, and hardware reverse engineering. Masters firmware extraction, analysis, and vulnerab... | firmware, analyst | firmware, analyst, specializing, embedded, iot, security, hardware, reverse, engineering, masters, extraction, analysis |
| `form-cro` | Optimize any form that is NOT signup or account registration — including lead capture, contact, demo request, application, survey, quote, and checkout forms.... | form, cro | form, cro, optimize, any, signup, account, registration, including, lead, capture, contact, demo |
| `framework-migration-deps-upgrade` | You are a dependency management expert specializing in safe, incremental upgrades of project dependencies. Plan and execute dependency updates with minimal r... | framework, migration, deps, upgrade | framework, migration, deps, upgrade, dependency, specializing, safe, incremental, upgrades, dependencies, plan, execute |
| `frontend-mobile-security-xss-scan` | You are a frontend security specialist focusing on Cross-Site Scripting (XSS) vulnerability detection and prevention. Analyze React, Vue, Angular, and vanill... | frontend, mobile, security, xss, scan | frontend, mobile, security, xss, scan, focusing, cross, site, scripting, vulnerability, detection, prevention |
| `frontend-security-coder` | Expert in secure frontend coding practices specializing in XSS prevention, output sanitization, and client-side security patterns. Use PROACTIVELY for fronte... | frontend, security, coder | frontend, security, coder, secure, coding, specializing, xss, prevention, output, sanitization, client, side |
| `gdpr-data-handling` | Implement GDPR-compliant data handling with consent management, data subject rights, and privacy by design. Use when building systems that process EU persona... | gdpr, data, handling | gdpr, data, handling, compliant, consent, subject, rights, privacy, building, process, eu, personal |
| `graphql-architect` | Master modern GraphQL with federation, performance optimization, and enterprise security. Build scalable schemas, implement advanced caching, and design real... | graphql | graphql, architect, federation, performance, optimization, enterprise, security, scalable, schemas, caching, real, time |
| `html-injection-testing` | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applic... | html, injection | html, injection, testing, skill, should, used, user, asks, test, inject, web, pages |
| `hugging-face-jobs` | This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, ... | hugging, face, jobs | hugging, face, jobs, skill, should, used, users, want, run, any, workload, infrastructure |
| `hybrid-cloud-architect` | Expert hybrid cloud architect specializing in complex multi-cloud solutions across AWS/Azure/GCP and private clouds (OpenStack/VMware). Masters hybrid connec... | hybrid, cloud | hybrid, cloud, architect, specializing, complex, multi, solutions, aws, azure, gcp, private, clouds |
| `idor-testing` | This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "... | idor | idor, vulnerability, testing, skill, should, used, user, asks, test, insecure, direct, object |
| `incident-responder` | Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management. Masters incident command... | incident, responder | incident, responder, sre, specializing, rapid, problem, resolution, observability, masters, command, blameless, post |
| `incident-response-incident-response` | Use when working with incident response. | incident, response | incident, response, working |
| `incident-response-smart-fix` | [Extended thinking: This workflow implements a sophisticated debugging and resolution pipeline that leverages AI-assisted debugging tools and observability p... | incident, response, fix | incident, response, fix, smart, extended, thinking, implements, sophisticated, debugging, resolution, pipeline, leverages |
| `incident-runbook-templates` | Create structured incident response runbooks with step-by-step procedures, escalation paths, and recovery actions. Use when building runbooks, responding to ... | incident, runbook | incident, runbook, structured, response, runbooks, step, procedures, escalation, paths, recovery, actions, building |
| `internal-comms-anthropic` | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenev... | internal, comms, anthropic | internal, comms, anthropic, set, resources, me, write, all, kinds, communications, formats, my |
| `internal-comms-community` | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenev... | internal, comms, community | internal, comms, community, set, resources, me, write, all, kinds, communications, formats, my |
| `k8s-manifest-generator` | Create production-ready Kubernetes manifests for Deployments, Services, ConfigMaps, and Secrets following best practices and security standards. Use when gen... | k8s, manifest, generator | k8s, manifest, generator, kubernetes, manifests, deployments, configmaps, secrets, following, security, standards, generating |
| `k8s-security-policies` | Implement Kubernetes security policies including NetworkPolicy, PodSecurityPolicy, and RBAC for production-grade security. Use when securing Kubernetes clust... | k8s, security, policies | k8s, security, policies, kubernetes, including, networkpolicy, podsecuritypolicy, rbac, grade, securing, clusters, implementing |
| `kubernetes-architect` | Expert Kubernetes architect specializing in cloud-native infrastructure, advanced GitOps workflows (ArgoCD/Flux), and enterprise container orchestration. Mas... | kubernetes | kubernetes, architect, specializing, cloud, native, infrastructure, gitops, argocd, flux, enterprise, container, orchestration |
| `legal-advisor` | Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements. Use ... | legal, advisor | legal, advisor, draft, privacy, policies, terms, disclaimers, notices, creates, gdpr, compliant, texts |
| `linkerd-patterns` | Implement Linkerd service mesh patterns for lightweight, security-focused service mesh deployments. Use when setting up Linkerd, configuring traffic policies... | linkerd | linkerd, mesh, lightweight, security, deployments, setting, up, configuring, traffic, policies, implementing, zero |
| `loki-mode` | Multi-agent autonomous startup system for Claude Code. Triggers on "Loki Mode". Orchestrates 100+ specialized agents across engineering, QA, DevOps, security... | loki, mode | loki, mode, multi, agent, autonomous, startup, claude, code, triggers, orchestrates, 100, specialized |
| `malware-analyst` | Expert malware analyst specializing in defensive malware research, threat intelligence, and incident response. Masters sandbox analysis, behavioral analysis,... | malware, analyst | malware, analyst, specializing, defensive, research, threat, intelligence, incident, response, masters, sandbox, analysis |
| `memory-forensics` | Master memory forensics techniques including memory acquisition, process analysis, and artifact extraction using Volatility and related tools. Use when analy... | memory, forensics | memory, forensics, techniques, including, acquisition, process, analysis, artifact, extraction, volatility, related, analyzing |
| `metasploit-framework` | This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with ms... | metasploit, framework | metasploit, framework, skill, should, used, user, asks, penetration, testing, exploit, vulnerabilities, msfconsole |
| `mobile-security-coder` | Expert in secure mobile coding practices specializing in input validation, WebView security, and mobile-specific security patterns. Use PROACTIVELY for mobil... | mobile, security, coder | mobile, security, coder, secure, coding, specializing, input, validation, webview, specific, proactively, implementations |
| `mtls-configuration` | Configure mutual TLS (mTLS) for zero-trust service-to-service communication. Use when implementing zero-trust networking, certificate management, or securing... | mtls, configuration | mtls, configuration, configure, mutual, tls, zero, trust, communication, implementing, networking, certificate, securing |
| `multi-agent-brainstorming` | Use this skill when a design or idea requires higher confidence, risk reduction, or formal review. This skill orchestrates a structured, sequential multi-age... | multi, agent, brainstorming | multi, agent, brainstorming, skill, idea, requires, higher, confidence, risk, reduction, formal, review |
| `network-engineer` | Expert network engineer specializing in modern cloud networking, security architectures, and performance optimization. Masters multi-cloud connectivity, serv... | network | network, engineer, specializing, cloud, networking, security, architectures, performance, optimization, masters, multi, connectivity |
| `nextjs-supabase-auth` | Expert integration of Supabase Auth with Next.js App Router Use when: supabase auth next, authentication next.js, login supabase, auth middleware, protected ... | nextjs, supabase, auth | nextjs, supabase, auth, integration, next, js, app, router, authentication, login, middleware, protected |
| `nodejs-best-practices` | Node.js development principles and decision-making. Framework selection, async patterns, security, and architecture. Teaches thinking, not copying. | nodejs, best, practices | nodejs, best, practices, node, js, development, principles, decision, making, framework, selection, async |
| `notebooklm` | Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. Browser automati... | notebooklm | notebooklm, skill, query, google, notebooks, directly, claude, code, source, grounded, citation, backed |
| `observability-engineer` | Build production-ready monitoring, logging, and tracing systems. Implements comprehensive observability strategies, SLI/SLO management, and incident response... | observability | observability, engineer, monitoring, logging, tracing, implements, sli, slo, incident, response, proactively, infrastructure |
| `openapi-spec-generation` | Generate and maintain OpenAPI 3.1 specifications from code, design-first specs, and validation patterns. Use when creating API documentation, generating SDKs... | openapi, spec, generation | openapi, spec, generation, generate, maintain, specifications, code, first, specs, validation, creating, api |
| `payment-integration` | Integrate Stripe, PayPal, and payment processors. Handles checkout flows, subscriptions, webhooks, and PCI compliance. Use PROACTIVELY when implementing paym... | payment, integration | payment, integration, integrate, stripe, paypal, processors, checkout, flows, subscriptions, webhooks, pci, compliance |
| `pci-compliance` | Implement PCI DSS compliance requirements for secure handling of payment card data and payment systems. Use when securing payment processing, achieving PCI c... | pci, compliance | pci, compliance, dss, requirements, secure, handling, payment, card, data, securing, processing, achieving |
| `pentest-checklist` | This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "defi... | pentest, checklist | pentest, checklist, skill, should, used, user, asks, plan, penetration, test, security, assessment |
| `plaid-fintech` | Expert patterns for Plaid API integration including Link token flows, transactions sync, identity verification, Auth for ACH, balance checks, webhook handlin... | plaid, fintech | plaid, fintech, api, integration, including, link, token, flows, transactions, sync, identity, verification |
| `popup-cro` | Create and optimize popups, modals, overlays, slide-ins, and banners to increase conversions without harming user experience or brand trust. | popup, cro | popup, cro, optimize, popups, modals, overlays, slide, ins, banners, increase, conversions, without |
| `postmortem-writing` | Write effective blameless postmortems with root cause analysis, timelines, and action items. Use when conducting incident reviews, writing postmortem documen... | postmortem, writing | postmortem, writing, write, effective, blameless, postmortems, root, cause, analysis, timelines, action, items |
| `quant-analyst` | Build financial models, backtest trading strategies, and analyze market data. Implements risk metrics, portfolio optimization, and statistical arbitrage. Use... | quant, analyst | quant, analyst, financial, models, backtest, trading, analyze, market, data, implements, risk, metrics |
| `red-team-tactics` | Red team tactics principles based on MITRE ATT&CK. Attack phases, detection evasion, reporting. | red, team, tactics | red, team, tactics, principles, mitre, att, ck, attack, phases, detection, evasion, reporting |
|
||||
| `red-team-tools` | This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnera... | red, team | red, team, methodology, skill, should, used, user, asks, follow, perform, bug, bounty |
|
||||
| `research-engineer` | An uncompromising Academic Research Engineer. Operates with absolute scientific rigor, objective criticism, and zero flair. Focuses on theoretical correctnes... | research | research, engineer, uncompromising, academic, operates, absolute, scientific, rigor, objective, criticism, zero, flair |
|
||||
| `reverse-engineer` | Expert reverse engineer specializing in binary analysis, disassembly, decompilation, and software analysis. Masters IDA Pro, Ghidra, radare2, x64dbg, and mod... | reverse | reverse, engineer, specializing, binary, analysis, disassembly, decompilation, software, masters, ida, pro, ghidra |
|
||||
| `risk-manager` | Monitor portfolio risk, R-multiples, and position limits. Creates hedging strategies, calculates expectancy, and implements stop-losses. Use PROACTIVELY for ... | risk, manager | risk, manager, monitor, portfolio, multiples, position, limits, creates, hedging, calculates, expectancy, implements |
|
||||
| `risk-metrics-calculation` | Calculate portfolio risk metrics including VaR, CVaR, Sharpe, Sortino, and drawdown analysis. Use when measuring portfolio risk, implementing risk limits, or... | risk, metrics, calculation | risk, metrics, calculation, calculate, portfolio, including, var, cvar, sharpe, sortino, drawdown, analysis |
|
||||
| `sast-configuration` | Configure Static Application Security Testing (SAST) tools for automated vulnerability detection in application code. Use when setting up security scanning, ... | sast, configuration | sast, configuration, configure, static, application, security, testing, automated, vulnerability, detection, code, setting |
|
||||
| `scanning-tools` | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wi... | scanning | scanning, security, skill, should, used, user, asks, perform, vulnerability, scan, networks, open |
|
||||
| `secrets-management` | Implement secure secrets management for CI/CD pipelines using Vault, AWS Secrets Manager, or native platform solutions. Use when handling sensitive credentia... | secrets | secrets, secure, ci, cd, pipelines, vault, aws, manager, native, platform, solutions, handling |
|
||||
| `security-auditor` | Expert security auditor specializing in DevSecOps, comprehensive cybersecurity, and compliance frameworks. Masters vulnerability assessment, threat modeling,... | security, auditor | security, auditor, specializing, devsecops, cybersecurity, compliance, frameworks, masters, vulnerability, assessment, threat, modeling |
|
||||
| `security-bluebook-builder` | Build security Blue Books for sensitive apps | security, bluebook, builder | security, bluebook, builder, blue, books, sensitive, apps |
|
||||
| `security-compliance-compliance-check` | You are a compliance expert specializing in regulatory requirements for software systems including GDPR, HIPAA, SOC2, PCI-DSS, and other industry standards. ... | security, compliance, check | security, compliance, check, specializing, regulatory, requirements, software, including, gdpr, hipaa, soc2, pci |
|
||||
| `security-requirement-extraction` | Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stori... | security, requirement, extraction | security, requirement, extraction, derive, requirements, threat, models, business, context, translating, threats, actionable |
|
||||
| `security-scanning-security-dependencies` | You are a security expert specializing in dependency vulnerability analysis, SBOM generation, and supply chain security. Scan project dependencies across eco... | security, scanning, dependencies | security, scanning, dependencies, specializing, dependency, vulnerability, analysis, sbom, generation, supply, chain, scan |
|
||||
| `security-scanning-security-hardening` | Coordinate multi-layer security scanning and hardening across application, infrastructure, and compliance controls. | security, scanning, hardening | security, scanning, hardening, coordinate, multi, layer, application, infrastructure, compliance, controls |
|
||||
| `security-scanning-security-sast` | Static Application Security Testing (SAST) for code vulnerability analysis across multiple languages and frameworks | security, scanning, sast | security, scanning, sast, static, application, testing, code, vulnerability, analysis, multiple, languages, frameworks |
|
||||
| `seo-authority-builder` | Analyzes content for E-E-A-T signals and suggests improvements to build authority and trust. Identifies missing credibility elements. Use PROACTIVELY for YMY... | seo, authority, builder | seo, authority, builder, analyzes, content, signals, suggests, improvements, trust, identifies, missing, credibility |
|
||||
| `service-mesh-expert` | Expert service mesh architect specializing in Istio, Linkerd, and cloud-native networking patterns. Masters traffic management, security policies, observabil... | service, mesh | service, mesh, architect, specializing, istio, linkerd, cloud, native, networking, masters, traffic, security |
|
||||
| `smtp-penetration-testing` | This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners"... | smtp, penetration | smtp, penetration, testing, skill, should, used, user, asks, perform, enumerate, email, users |
|
||||
| `solidity-security` | Master smart contract security best practices to prevent common vulnerabilities and implement secure Solidity patterns. Use when writing smart contracts, aud... | solidity, security | solidity, security, smart, contract, prevent, common, vulnerabilities, secure, writing, contracts, auditing, existing |
|
||||
| `sql-injection-testing` | This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection"... | sql, injection | sql, injection, testing, skill, should, used, user, asks, test, vulnerabilities, perform, sqli |
|
||||
| `ssh-penetration-testing` | This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabi... | ssh, penetration | ssh, penetration, testing, skill, should, used, user, asks, pentest, enumerate, configurations, brute |
|
||||
| `stride-analysis-patterns` | Apply STRIDE methodology to systematically identify threats. Use when analyzing system security, conducting threat modeling sessions, or creating security do... | stride | stride, analysis, apply, methodology, systematically, identify, threats, analyzing, security, conducting, threat, modeling |
|
||||
| `stripe-integration` | Implement Stripe payment processing for robust, PCI-compliant payment flows including checkout, subscriptions, and webhooks. Use when integrating Stripe paym... | stripe, integration | stripe, integration, payment, processing, robust, pci, compliant, flows, including, checkout, subscriptions, webhooks |
|
||||
| `terraform-specialist` | Expert Terraform/OpenTofu specialist mastering advanced IaC automation, state management, and enterprise infrastructure patterns. Handles complex module desi... | terraform | terraform, opentofu, mastering, iac, automation, state, enterprise, infrastructure, complex, module, multi, cloud |
|
||||
| `threat-mitigation-mapping` | Map identified threats to appropriate security controls and mitigations. Use when prioritizing security investments, creating remediation plans, or validatin... | threat, mitigation, mapping | threat, mitigation, mapping, map, identified, threats, appropriate, security, controls, mitigations, prioritizing, investments |
|
||||
| `threat-modeling-expert` | Expert in threat modeling methodologies, security architecture review, and risk assessment. Masters STRIDE, PASTA, attack trees, and security requirement ext... | threat, modeling | threat, modeling, methodologies, security, architecture, review, risk, assessment, masters, stride, pasta, attack |
|
||||
| `top-web-vulnerabilities` | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability catego... | top, web, vulnerabilities | top, web, vulnerabilities, 100, reference, skill, should, used, user, asks, identify, application |
|
||||
| `twilio-communications` | Build communication features with Twilio: SMS messaging, voice calls, WhatsApp Business API, and user verification (2FA). Covers the full spectrum from simpl... | twilio, communications | twilio, communications, communication, features, sms, messaging, voice, calls, whatsapp, business, api, user |
|
||||
| `ui-visual-validator` | Rigorous visual validation expert specializing in UI testing, design system compliance, and accessibility verification. Masters screenshot analysis, visual r... | ui, visual, validator | ui, visual, validator, rigorous, validation, specializing, testing, compliance, accessibility, verification, masters, screenshot |
|
||||
| `using-neon` | Guides and best practices for working with Neon Serverless Postgres. Covers getting started, local development with Neon, choosing a connection method, Neon ... | using, neon | using, neon, guides, working, serverless, postgres, covers, getting, started, local, development, choosing |
|
||||
| `varlock-claude-skill` | Secure environment variable management ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits | varlock, claude, skill | varlock, claude, skill, secure, environment, variable, ensuring, secrets, never, exposed, sessions, terminals |
|
||||
| `vulnerability-scanner` | Advanced vulnerability analysis principles. OWASP 2025, Supply Chain Security, attack surface mapping, risk prioritization. | vulnerability, scanner | vulnerability, scanner, analysis, principles, owasp, 2025, supply, chain, security, attack, surface, mapping |
|
||||
| `web-design-guidelines` | Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my si... | web, guidelines | web, guidelines, review, ui, code, interface, compliance, asked, my, check, accessibility, audit |
|
||||
| `wordpress-penetration-testing` | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugi... | wordpress, penetration | wordpress, penetration, testing, skill, should, used, user, asks, pentest, sites, scan, vulnerabilities |
|
||||
| `xss-html-injection` | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exp... | xss, html, injection | xss, html, injection, cross, site, scripting, testing, skill, should, used, user, asks |
|
||||
|
||||
## testing (22)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `ab-test-setup` | Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness. | ab, setup | ab, setup, test, structured, setting, up, tests, mandatory, gates, hypothesis, metrics, execution |
| `conductor-implement` | Execute tasks from a track's implementation plan following TDD workflow | conductor, implement | conductor, implement, execute, tasks, track, plan, following, tdd |
| `conductor-revert` | Git-aware undo by logical work unit (track, phase, or task) | conductor, revert | conductor, revert, git, aware, undo, logical, work, unit, track, phase, task |
| `debugger` | Debugging specialist for errors, test failures, and unexpected behavior. Use proactively when encountering any issues. | debugger | debugger, debugging, errors, test, failures, unexpected, behavior, proactively, encountering, any, issues |
| `dependency-upgrade` | Manage major dependency version upgrades with compatibility analysis, staged rollout, and comprehensive testing. Use when upgrading framework versions, updat... | dependency, upgrade | dependency, upgrade, major, version, upgrades, compatibility, analysis, staged, rollout, testing, upgrading, framework |
| `pentest-commands` | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "s... | pentest, commands | pentest, commands, skill, should, used, user, asks, run, scan, nmap, metasploit, exploits |
| `performance-testing-review-multi-agent-review` | Use when working with performance testing review multi agent review | performance, multi, agent | performance, multi, agent, testing, review, working |
| `playwright-skill` | Complete browser automation with Playwright. Auto-detects dev servers, writes clean test scripts to /tmp. Test pages, fill forms, take screenshots, check res... | playwright, skill | playwright, skill, complete, browser, automation, auto, detects, dev, servers, writes, clean, test |
| `pypict-skill` | Pairwise test generation | pypict, skill | pypict, skill, pairwise, test, generation |
| `screen-reader-testing` | Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issue... | screen, reader | screen, reader, testing, test, web, applications, readers, including, voiceover, nvda, jaws, validating |
| `startup-analyst` | Expert startup business analyst specializing in market sizing, financial modeling, competitive analysis, and strategic planning for early-stage companies. Us... | startup, analyst | startup, analyst, business, specializing, market, sizing, financial, modeling, competitive, analysis, strategic, planning |
| `startup-metrics-framework` | This skill should be used when the user asks about "key startup metrics", "SaaS metrics", "CAC and LTV", "unit economics", "burn multiple", "rule of 40", "ma... | startup, metrics, framework | startup, metrics, framework, skill, should, used, user, asks, about, key, saas, cac |
| `systematic-debugging` | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes | systematic, debugging | systematic, debugging, encountering, any, bug, test, failure, unexpected, behavior, before, proposing, fixes |
| `tdd-workflow` | Test-Driven Development workflow principles. RED-GREEN-REFACTOR cycle. | tdd | tdd, test, driven, development, principles, red, green, refactor, cycle |
| `tdd-workflows-tdd-cycle` | Use when working with tdd workflows tdd cycle | tdd, cycle | tdd, cycle, working |
| `tdd-workflows-tdd-green` | Implement the minimal code needed to make failing tests pass in the TDD green phase. | tdd, green | tdd, green, minimal, code, needed, failing, tests, pass, phase |
| `tdd-workflows-tdd-red` | Generate failing tests for the TDD red phase to define expected behavior and edge cases. | tdd, red | tdd, red, generate, failing, tests, phase, define, expected, behavior, edge, cases |
| `tdd-workflows-tdd-refactor` | Use when working with tdd workflows tdd refactor | tdd, refactor | tdd, refactor, working |
| `test-driven-development` | Use when implementing any feature or bugfix, before writing implementation code | driven | driven, test, development, implementing, any, feature, bugfix, before, writing, code |
| `test-fixing` | Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test sui... | fixing | fixing, test, run, tests, systematically, fix, all, failing, smart, error, grouping, user |
| `unit-testing-test-generate` | Generate comprehensive, maintainable unit tests across languages with strong coverage and edge case focus. | unit, generate | unit, generate, testing, test, maintainable, tests, languages, strong, coverage, edge, case |
| `web3-testing` | Test smart contracts comprehensively using Hardhat and Foundry with unit tests, integration tests, and mainnet forking. Use when testing Solidity contracts, ... | web3 | web3, testing, test, smart, contracts, comprehensively, hardhat, foundry, unit, tests, integration, mainnet |

## workflow (16)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `agent-orchestration-improve-agent` | Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration. | agent, improve | agent, improve, orchestration, systematic, improvement, existing, agents, through, performance, analysis, prompt, engineering |
| `agent-orchestration-multi-agent-optimize` | Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughpu... | agent, multi, optimize | agent, multi, optimize, orchestration, coordinated, profiling, workload, distribution, cost, aware, improving, performance |
| `billing-automation` | Build automated billing systems for recurring payments, invoicing, subscription lifecycle, and dunning management. Use when implementing subscription billing... | billing | billing, automation, automated, recurring, payments, invoicing, subscription, lifecycle, dunning, implementing, automating, managing |
| `changelog-automation` | Automate changelog generation from commits, PRs, and releases following Keep a Changelog format. Use when setting up release workflows, generating release no... | changelog | changelog, automation, automate, generation, commits, prs, releases, following, keep, format, setting, up |
| `conductor-manage` | Manage track lifecycle: archive, restore, delete, rename, and cleanup | conductor, manage | conductor, manage, track, lifecycle, archive, restore, delete, rename, cleanup |
| `conductor-new-track` | Create a new track with specification and phased implementation plan | conductor, new, track | conductor, new, track, specification, phased, plan |
| `conductor-status` | Display project status, active tracks, and next actions | conductor, status | conductor, status, display, active, tracks, next, actions |
| `conductor-validator` | Validates Conductor project artifacts for completeness, consistency, and correctness. Use after setup, when diagnosing issues, or before implementation to ve... | conductor, validator | conductor, validator, validates, artifacts, completeness, consistency, correctness, after, setup, diagnosing, issues, before |
| `email-sequence` | When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email program. Also use when the user mentions... | email, sequence | email, sequence, user, wants, optimize, drip, campaign, automated, flow, lifecycle, program, mentions |
| `full-stack-orchestration-full-stack-feature` | Use when working with full stack orchestration full stack feature | full, stack | full, stack, orchestration, feature, working |
| `git-pushing` | Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to... | git, pushing | git, pushing, stage, commit, push, changes, conventional, messages, user, wants, mentions, remote |
| `kaizen` | Guide for continuous improvement, error proofing, and standardization. Use this skill when the user wants to improve code quality, refactor, or discuss proce... | kaizen | kaizen, continuous, improvement, error, proofing, standardization, skill, user, wants, improve, code, quality |
| `mermaid-expert` | Create Mermaid diagrams for flowcharts, sequences, ERDs, and architectures. Masters syntax for all diagram types and styling. Use PROACTIVELY for visual docu... | mermaid | mermaid, diagrams, flowcharts, sequences, erds, architectures, masters, syntax, all, diagram, types, styling |
| `pdf-official` | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. When Claude needs ... | pdf, official | pdf, official, manipulation, toolkit, extracting, text, tables, creating, new, pdfs, merging, splitting |
| `team-collaboration-issue` | You are a GitHub issue resolution expert specializing in systematic bug investigation, feature implementation, and collaborative development workflows. Your ... | team, collaboration, issue | team, collaboration, issue, github, resolution, specializing, systematic, bug, investigation, feature, collaborative, development |
| `track-management` | Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan... | track | track, skill, creating, managing, working, conductor, tracks, logical, work, units, features, bugs |

278
CHANGELOG.md
@@ -7,6 +7,282 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

---

## [4.7.0] - 2026-02-03 - "Installer Fix & OpenCode Docs"

> Critical installer fix for Windows and OpenCode documentation completion.

### Fixed

- **Installer**: Resolved `ReferenceError` for `tagArg` variable in `bin/install.js` ensuring correct execution on Windows/PowerShell (PR #53).

### Documentation

- **OpenCode**: Completed documentation for OpenCode integration in `README.md`.

---

## [4.6.0] - 2026-02-01 - "SPDD & Radix UI Design System"

> Agent workflow docs (SPDD) and Radix UI design system skill.

### Added

- **New Skill**: `radix-ui-design-system` – Build accessible design systems with Radix UI primitives (headless, theming, WCAG, examples).
- **Docs**: `skills/SPDD/` – Research, spec, and implementation workflow docs (1-research.md, 2-spec.md, 3-implementation.md).

### Registry

- **Total Skills**: 626 (from 625). Catalog regenerated.

---

## [4.5.0] - 2026-01-31 - "Stitch UI Design"

> Expert prompting guide for Google Stitch AI-powered UI design tool.

### Added

- **New Skill**: `stitch-ui-design` – Expert guide for creating effective prompts for Google Stitch AI UI design tool (Gemini 2.5 Flash). Covers prompt structure, specificity techniques, iteration strategies, design-to-code workflows, and 10+ examples for landing pages, mobile apps, and dashboards.

### Changed

- **Documentation**: Clarified in README.md and GETTING_STARTED.md that installation means cloning the full repo once; Starter Packs are curated lists to discover skills by role, not a different installation method (fixes [#44](https://github.com/sickn33/antigravity-awesome-skills/issues/44)).

### Registry

- **Total Skills**: 625 (from 624). Catalog regenerated.

### Credits

- [@ALEKGG1](https://github.com/ALEKGG1) – stitch-ui-design (PR #45)
- [@CypherPoet](https://github.com/CypherPoet) – Documentation clarity (#44)

---

## [4.4.0] - 2026-01-30 - "fp-ts skills for TypeScript"

> Three practical fp-ts skills for TypeScript functional programming.

### Added

- **New Skills** (from [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills)):
  - `fp-ts-pragmatic` – Pipe, Option, Either, TaskEither without academic jargon.
  - `fp-ts-react` – Patterns for fp-ts with React 18/19 and Next.js 14/15 (state, forms, data fetching).
  - `fp-ts-errors` – Type-safe error handling with Either and TaskEither.

### Registry

- **Total Skills**: 624 (from 621). Catalog regenerated.

---

## [4.3.0] - 2026-01-29 - "VoltAgent Integration & Context Engineering Suite"

> 61 new skills from VoltAgent/awesome-agent-skills: official team skills and context engineering suite.

### Added

- **61 new skills** from [VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills):
  - **Official (27)**: Sentry (commit, create-pr, find-bugs, iterate-pr), Trail of Bits (culture-index, fix-review, sharp-edges), Expo (expo-deployment, upgrading-expo), Hugging Face (hugging-face-cli, hugging-face-jobs), Vercel, Google Stitch (design-md), Neon (using-neon), n8n (n8n-code-python, n8n-mcp-tools-expert, n8n-node-configuration), SwiftUI, fal.ai (fal-audio, fal-generate, fal-image-edit, fal-platform, fal-upscale, fal-workflow), deep-research, imagen, readme.
  - **Community (34)**: Context suite (context-fundamentals, context-degradation, context-compression, context-optimization, multi-agent-patterns, memory-systems, evaluation), frontend-slides, linear-claude-skill, skill-rails-upgrade, terraform-skill, tool-design, screenshots, automate-whatsapp, observe-whatsapp, aws-skills, ui-skills, vexor, pypict-skill, makepad-skills, threejs-skills, claude-scientific-skills, claude-win11-speckit-update-skill, security-bluebook-builder, claude-ally-health, clarity-gate, beautiful-prose, claude-speed-reader, skill-seekers, varlock-claude-skill, superpowers-lab, nanobanana-ppt-skills, x-article-publisher-skill, ffuf-claude-skill.

### Registry

- **Total Skills**: 614 (from 553). Catalog and SOURCES.md updated.

### Credits

- VoltAgent/awesome-agent-skills and official teams (Sentry, Trail of Bits, Expo, Hugging Face, Vercel Labs, Google Labs, Neon, fal.ai).

---

## [4.0.0] - 2026-01-28 - "The Enterprise Era"

> **A massive integration of 300+ Enterprise skills, transforming Antigravity into a complete operating system for AI agents.**

### Added

- **Massive Skill Injection**: Merged 300+ Enterprise skills from `rmyndharis/antigravity-skills`.
- **New Categories**:
  - **Architecture & Design**: `backend-architect`, `c4-architecture`.
  - **Data & AI**: `rag-engineer`, `langchain-architecture`.
  - **Security**: `security-auditor`, `cloud-pentesting`.
- **Catalog System**: Introduced `CATALOG.md` and `scripts/build-catalog.js` for automated, table-based skill discovery.
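
The catalog build amounts to turning per-skill metadata into markdown table rows. A minimal Python sketch of that idea (the real `scripts/build-catalog.js` is a Node script that reads SKILL.md frontmatter; the records and field names below are illustrative assumptions):

```python
# Hypothetical skill records; the real build script collects these from
# each skill's SKILL.md frontmatter in the repository.
skills = [
    {"name": "rag-engineer", "description": "Build RAG pipelines.", "tags": ["data", "ai"]},
    {"name": "backend-architect", "description": "Design backend services.", "tags": ["backend"]},
]

def build_catalog(records):
    # Emit a markdown table, sorted alphabetically like CATALOG.md.
    rows = ["| Skill | Description | Tags |", "| --- | --- | --- |"]
    for s in sorted(records, key=lambda s: s["name"]):
        rows.append(f"| `{s['name']}` | {s['description']} | {', '.join(s['tags'])} |")
    return "\n".join(rows)

print(build_catalog(skills))
```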

### Changed

- **Documentation Overhaul**:
  - Removed the legacy 250+ row skill table from `README.md`.
  - Restructured `README.md` to focus on high-level domains.
  - Replaced static registry with dynamic `CATALOG.md`.
- **Version Bump**: Major version update to 4.0.0, reflecting that skill capacity more than doubled (247 -> 550+).

### Credits

- **[@rmyndharis](https://github.com/rmyndharis)** - For the massive contribution of 300+ Enterprise skills and the catalog generation logic.
- **[@sstklen](https://github.com/sstklen)** & **[@rookie-ricardo](https://github.com/rookie-ricardo)** - Continued community support.

## [3.4.0] - 2026-01-27 - "Voice Intelligence & Categorization"

### Added

- **New Skill**: `voice-ai-engine-development` - Complete toolkit for building real-time voice agents (OpenAI Realtime, Vapi, Deepgram, ElevenLabs).
- **Categorization**: Major README update introducing a concise "Features & Categories" summary table.

### Changed

- **README**: Replaced text-heavy category lists with a high-level summary table.
- **Registry**: Synced generic skill count (256) across documentation.

### Contributors

- [@sickn33](https://github.com/sickn33) - Voice AI Engine (PR #33)
- [@community](https://github.com/community) - Categorization Initiative (PR #32)

## [3.3.0] - 2026-01-26 - "News & Research"

### Added

- **New Skills**:
  - `last30days`: Research any topic from the last 30 days on Reddit + X + Web.
  - `daily-news-report`: Generate daily news reports from multiple sources.

### Changed

- **Registry**: Updated `skills_index.json` and `README.md` registry (Total: 255 skills).

## [3.2.0] - 2026-01-26 - "Clarity & Consistency"

### Changed

- **Skills Refactoring**: Significant overhaul of `backend-dev-guidelines`, `frontend-design`, `frontend-dev-guidelines`, and `mobile-design`.
- **Consolidation**: Merged fragmented documentation into single, authoritative `SKILL.md` files.
- **Final Laws**: Introduced "Final Laws" sections to provide strict, non-negotiable decision frameworks.
- **Simplification**: Removed external file dependencies to improve context retrieval for AI agents.

### Fixed

- **Validation**: Fixed critical YAML frontmatter formatting issues in `seo-fundamentals`, `programmatic-seo`, and `schema-markup` that were blocking strict validation.
- **Merge Conflicts**: Resolved text artifact conflicts in SEO skills.

## [3.1.0] - 2026-01-26 - "Stable & Deterministic"

### Fixed

- **CI/CD Drift**: Resolved persistent "Uncommitted Changes" errors in CI by making the index generation script deterministic (sorting by name + ID).
- **Registry Sync**: Synced `README.md` and `skills_index.json` to accurately reflect all 253 skills.
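
The determinism fix described above comes down to giving the generator a total ordering before serializing, so repeated runs emit byte-identical output and CI never sees spurious diffs. A minimal sketch (hypothetical records; the real `scripts/generate_index.py` handles full skill metadata):

```python
import json

# Hypothetical skill records. Sorting by (name, id) is a total order even
# when names collide, so the serialized index is stable across runs.
skills = [
    {"id": 3, "name": "tdd-workflow"},
    {"id": 2, "name": "debugger"},
    {"id": 1, "name": "tdd-workflow"},
]

def generate_index(records):
    ordered = sorted(records, key=lambda r: (r["name"], r["id"]))
    # sort_keys makes the JSON key order deterministic as well.
    return json.dumps(ordered, indent=2, sort_keys=True)

# The same input in any order produces the same bytes -> no CI drift.
assert generate_index(skills) == generate_index(list(reversed(skills)))
```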

### Added (Registry Restore)

The following skills are now correctly indexed and visible in the registry:

- **Marketing & Growth**: `programmatic-seo`, `schema-markup`, `seo-fundamentals`, `form-cro`, `popup-cro`, `analytics-tracking`.
- **Security**: `windows-privilege-escalation`, `wireshark-analysis`, `wordpress-penetration-testing`, `writing-plans`.
- **Development**: `tdd-workflow`, `web-performance-optimization`, `webapp-testing`, `workflow-automation`, `zapier-make-patterns`.
- **Maker Tools**: `telegram-bot-builder`, `telegram-mini-app`, `viral-generator-builder`.

### Changed

- **Documentation**: Added `docs/CI_DRIFT_FIX.md` as a canonical reference for resolving drift issues.
- **Guidance**: Updated `docs/GETTING_STARTED.md` counts to match the full registry (253+ skills).
- **Maintenance**: Updated `MAINTENANCE.md` with strict protocols for handling generated files.

## [3.0.0] - 2026-01-25 - "The Governance Update"

### Added

- **Governance & Security**:
  - `docs/QUALITY_BAR.md`: Defined 5-point validation standard (Metadata, Risk, Triggers).
  - `docs/SECURITY_GUARDRAILS.md`: Enforced "Authorized Use Only" for offensive skills.
  - `CODE_OF_CONDUCT.md`: Adhered to Contributor Covenant v2.1.
- **Automation**:
  - `scripts/validate_skills.py`: Automated Quality Bar enforcement (Soft Mode supported).
  - `.github/workflows/ci.yml`: Automated PR checks.
  - `scripts/generate_index.py`: Registry generation with Risk & Source columns.
- **Experience**:
  - `docs/BUNDLES.md`: 9 Starter Packs (Essentials, Security, Web, Agent, Game Dev, DevOps, Data, Testing, Creative).
  - **Interactive Registry**: README now features Risk Levels (🔴/🟢/🟣) and Collections.
- **Documentation**:
  - `docs/EXAMPLES.md`: Cookbook with 3 real-world scenarios.
  - `docs/SOURCES.md`: Legal ledger for attributions and licenses.
  - Release announcements are documented in this CHANGELOG.
|
||||
|
||||
### Changed
|
||||
|
||||
- **Standardization**: All 250+ skills are now validated against the new Quality Bar schema.
|
||||
- **Project Structure**: Introduced `docs/` folder for scalable documentation.
|
||||
|
||||
## [2.14.0] - 2026-01-25 - "Web Intelligence & Windows"

### Added

- **New Skills**:
  - `context7-auto-research`: Auto-research capability for Claude Code.
  - `codex-review`: Professional code review with AI integration.
  - `exa-search`: Semantic search and discovery using the Exa API.
  - `firecrawl-scraper`: Deep web scraping and PDF parsing.
  - `tavily-web`: Content extraction and research using Tavily.
  - `busybox-on-windows`: UNIX tool suite for Windows environments.

### Changed

- **Documentation**: Updated `obsidian-clipper-template-creator` docs and templates.
- **Index & Registry**: Updated `skills_index.json` and the `README.md` registry.

### Fixed

- **Skills**: Fixed YAML frontmatter quoting in `lint-and-validate`.
## [2.13.0] - 2026-01-24 - "NoSQL Expert"

### Added

- **New Skill**:
  - `nosql-expert`: Expert guidance for distributed NoSQL databases (Cassandra, DynamoDB), focusing on query-first modeling and anti-patterns.

### Changed

- **Index & Registry**: Updated `skills_index.json` and the `README.md` registry.

### Contributors

- [@sickn33](https://github.com/sickn33) - PR #23
## [2.12.0] - 2026-01-23 - "Enterprise & UI Power"

### Added

- **New Skills**:
  - `production-code-audit`: Comprehensive enterprise auditing skill for production readiness.
  - `avalonia-layout-zafiro`: Zafiro layout guidelines for Avalonia UI.
  - `avalonia-viewmodels-zafiro`: ViewModel composition patterns for Avalonia.
  - `avalonia-zafiro-development`: Core development rules for Avalonia Zafiro applications.

### Changed

- **Index & Registry**: Updated `skills_index.json` and the `README.md` registry (Total: 243 skills).

### Contributors

- [@SuperJMN](https://github.com/SuperJMN) - PR #20
- [@Mohammad-Faiz-Cloud-Engineer](https://github.com/Mohammad-Faiz-Cloud-Engineer) - PR #21
## [2.11.0] - 2026-01-23 - "Postgres Performance"

### Added

- **New Skill**:
  - `postgres-best-practices`: Comprehensive Supabase PostgreSQL performance optimization guide with 30+ rules covering query performance, connection management, RLS security, schema design, locking, and monitoring.

### Changed

- **Official Sources**: Added [supabase/agent-skills](https://github.com/supabase/agent-skills) to Credits & Sources.
- **Index & Registry**: Updated `skills_index.json` and the `README.md` registry (Total: 239 skills).

### Contributors

- [@ar27111994](https://github.com/ar27111994) - PR #19

---
## [2.10.0] - 2026-01-22 - "Developer Excellence"

@@ -68,7 +344,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- `cc-skill-project-guidelines-example`
- `cc-skill-security-review`
- `cc-skill-strategic-compact`
- **Documentation**: New `docs/WALKTHROUGH.md` for import process details.

### Changed
261 CONTRIBUTING.md
@@ -1,6 +1,19 @@

# 🤝 Contributing Guide - V4 Enterprise Edition

**Thank you for wanting to make this repo better!** This guide shows you exactly how to contribute, even if you're new to open source.

With V4, we raised the bar for quality. Please read the **new Quality Standards** below carefully.

---

## 🧐 The "Quality Bar" (V4 Standard)

**Critical for new skills:** Every skill submitted must pass our **5-Point Quality Check** (see `docs/QUALITY_BAR.md` for details):

1. **Metadata**: Correct frontmatter (`name`, `description`).
2. **Safety**: No harmful commands without "Risk" labels.
3. **Clarity**: A clear "When to use" section.
4. **Examples**: At least one copy-paste usage example.
5. **Actions**: Must define concrete steps, not just "thoughts".
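Point 1 refers to the YAML frontmatter at the top of `SKILL.md`. As a minimal sketch (the name and description here are placeholders):

```markdown
---
name: my-awesome-skill
description: "One sentence describing what this skill does and when to use it"
---
```

The `name` field must match the skill's folder name exactly; the validator checks this.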
@@ -9,104 +22,77 @@

You don't need to be an expert! Here are ways anyone can help:

### 1. Improve Documentation (Easiest!)

- Fix typos or grammar
- Make explanations clearer
- Add examples to existing skills
- Translate documentation to other languages

### 2. Report Issues

- Found something confusing? Tell us!
- Skill not working? Let us know!
- Have suggestions? We want to hear them!

### 3. Create New Skills

- Share your expertise as a skill
- Fill gaps in the current collection
- Improve existing skills

### 4. Test and Validate

- Try skills and report what works/doesn't work
- Test on different AI tools
- Suggest improvements

---
## Local development setup

To run validation, index generation, and README updates locally:

1. **Node.js** (for catalog and installer): `npm ci`
2. **Python 3** (for the validate, index, and readme scripts): install dependencies with

   ```bash
   pip install -r requirements.txt
   ```

Then you can run `npm run chain` (validate → index → readme) and `npm run catalog`.
**Validation:** The canonical validator is **Python** (`scripts/validate_skills.py`). Use `npm run validate` (or `npm run validate:strict` for CI-style checks). The JavaScript validator (`scripts/validate-skills.js`) is legacy/optional and uses a different schema; CI and PR checks rely on the Python validator only.

```bash
# 1. Fork the repo on GitHub (click the Fork button)

# 2. Clone your fork
git clone https://github.com/YOUR-USERNAME/antigravity-awesome-skills.git
cd antigravity-awesome-skills

# 3. Create a branch
git checkout -b improve-docs

# 4. Make your changes in your favorite editor

# 5. Commit and push
git add .
git commit -m "docs: make XYZ clearer"
git push origin improve-docs

# 6. Open a Pull Request on GitHub
```

**npm audit:** CI runs `npm audit --audit-level=high`. To fix issues locally, run `npm audit`, then `npm update` or `npm audit fix` as appropriate; for breaking changes, update dependencies manually and run tests.

---
## How to Create a New Skill

### What Makes a Good Skill?

A skill should:

- ✅ Solve a specific problem
- ✅ Be reusable across projects
- ✅ Have clear instructions
- ✅ Include examples when possible

### Step-by-Step Guide

#### Step 1: Choose Your Skill Topic

Ask yourself: "What do I wish my AI assistant knew better?"

Example: "I'm good at Docker, let me create a Docker skill."
#### Step 2: Create the Folder Structure

Skills live in the `skills/` directory. Use `kebab-case` for folder names.

```bash
# Navigate to the skills directory
cd skills/

# Create your skill folder
mkdir my-awesome-skill
cd my-awesome-skill

# Create the SKILL.md file
touch SKILL.md
```
#### Step 3: Write Your SKILL.md

Every skill needs this basic structure. **Copy this template:**

```markdown
---
@@ -124,90 +110,47 @@ Explain what this skill does and when to use it.

- Use when [scenario 1]
- Use when [scenario 2]
- Use when [scenario 3]

## How It Works

Detailed step-by-step instructions for the AI...

## Examples

### Example 1

\`\`\`
code example here
\`\`\`

## Best Practices

- ✅ Do this
- ✅ Also do this
- ❌ Don't do this
- ❌ Avoid this

## Common Pitfalls

- **Problem:** Description of a common issue
  **Solution:** How to fix it

## Additional Resources

- [Link to documentation](https://example.com)
- [Tutorial](https://example.com)
```
#### Step 4: Validate (CRITICAL V4 STEP)

Use the canonical validator `scripts/validate_skills.py` via `npm run validate`. **We will not merge PRs that fail this check.**

```bash
npm run validate         # soft mode (warnings only)
npm run validate:strict  # strict mode (what CI runs)
```

This checks:

- ✅ `SKILL.md` exists
- ✅ Frontmatter is correct
- ✅ Name matches folder name
- ✅ Description exists
- ✅ Quality Bar checks pass
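To make the checks concrete, here is a simplified sketch of the kind of logic a validator like this runs. The real implementation lives in `scripts/validate_skills.py`; the function and field names below are illustrative assumptions, not its actual API.

```python
import re
from pathlib import Path

def check_skill(skill_dir: Path) -> list[str]:
    """Return a list of problems found for one skill folder (empty = passes)."""
    problems = []
    skill_md = skill_dir / "SKILL.md"
    if not skill_md.exists():
        return ["SKILL.md is missing"]
    text = skill_md.read_text(encoding="utf-8")
    # Frontmatter is the block between the two leading '---' markers.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return ["frontmatter block is missing"]
    fields = dict(
        line.split(":", 1)
        for line in match.group(1).splitlines()
        if ":" in line
    )
    name = fields.get("name", "").strip().strip('"')
    if name != skill_dir.name:
        problems.append(f"name {name!r} does not match folder {skill_dir.name!r}")
    if not fields.get("description", "").strip():
        problems.append("description is missing")
    return problems
```

Running something like this over every folder under `skills/` is what turns the checklist above into an automated gate.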
#### Step 5: Submit Your Skill

```bash
# 1. Add your skill
git add skills/my-awesome-skill/

# 2. Commit with a clear message
git commit -m "feat: add my-awesome-skill"

# 3. Push to your fork
git push origin my-branch

# 4. Open a Pull Request on GitHub
```

---
@@ -232,110 +175,34 @@ description: "One sentence describing what this skill does and when to use it"

- Use when you need to [scenario 1]
- Use when you want to [scenario 2]
- Use when working with [scenario 3]

## Core Concepts

### Concept 1
[Explain key concept]

### Concept 2
[Explain another key concept]

## Step-by-Step Guide

### 1. [First Step Name]
[Detailed instructions]

### 2. [Second Step Name]
[Detailed instructions]

### 3. [Third Step Name]
[Detailed instructions]

## Examples

### Example 1: [Use Case Name]

\`\`\`language
// Example code here
\`\`\`

**Explanation:** [What this example demonstrates]

### Example 2: [Another Use Case]

\`\`\`language
// More example code
\`\`\`

**Explanation:** [What this example demonstrates]

## Best Practices

- ✅ **Do:** [Good practice]
- ✅ **Do:** [Another good practice]
- ❌ **Don't:** [What to avoid]
- ❌ **Don't:** [Another thing to avoid]

## Troubleshooting

**Problem:** [Common Issue]
**Solution:** [How to fix it]

## Related Skills

- `@related-skill-1` - [When to use this instead]
- `@related-skill-2` - [How this complements your skill]

## Additional Resources

- [Official Documentation](https://example.com)
- [Tutorial](https://example.com)
- [Community Guide](https://example.com)
```

---
## How to Report Issues

### Found a Bug?

1. **Check existing issues** - Maybe it's already reported
2. **Open a new issue** with this info:
   - What skill has the problem?
   - What AI tool are you using?
   - What did you expect to happen?
   - What actually happened?
   - Steps to reproduce

### Found Something Confusing?

1. **Open an issue** titled: "Documentation unclear: [topic]"
2. **Explain:**
   - What part is confusing?
   - What did you expect to find?
   - How could it be clearer?

---
## Contribution Checklist

Before submitting your contribution:

- [ ] My skill has a clear, descriptive name
- [ ] The `SKILL.md` has proper frontmatter (name + description)
- [ ] I've included examples
- [ ] I've tested the skill with an AI assistant
- [ ] I've run `python3 scripts/validate_skills.py`
- [ ] My commit message is clear (e.g., "feat: add docker-compose skill")
- [ ] I've checked for typos and grammar

---
## Commit Message Guidelines

Use these prefixes:

@@ -348,11 +215,11 @@ Use these prefixes:

- `chore:` - Maintenance tasks

**Examples:**

```
feat: add kubernetes-deployment skill
docs: improve getting started guide
fix: correct typo in stripe-integration skill
docs: add examples to react-best-practices
```

---
@@ -360,30 +227,13 @@ docs: add examples to react-best-practices

## Learning Resources

### New to Git/GitHub?

- [GitHub's Hello World Guide](https://guides.github.com/activities/hello-world/)
- [Git Basics](https://git-scm.com/book/en/v2/Getting-Started-Git-Basics)

### New to Markdown?

- [Markdown Guide](https://www.markdownguide.org/basic-syntax/)
- [GitHub Markdown](https://guides.github.com/features/mastering-markdown/)

### New to Open Source?

- [First Contributions](https://github.com/firstcontributions/first-contributions)
- [How to Contribute to Open Source](https://opensource.guide/how-to-contribute/)

---

## Need Help?

- **Questions?** Open a [Discussion](https://github.com/sickn33/antigravity-awesome-skills/discussions)
- **Stuck?** Open an [Issue](https://github.com/sickn33/antigravity-awesome-skills/issues)
- **Want feedback?** Open a [Draft Pull Request](https://github.com/sickn33/antigravity-awesome-skills/pulls)

---

## Recognition

All contributors are recognized on our [Contributors](https://github.com/sickn33/antigravity-awesome-skills/graphs/contributors) page!

---

@@ -392,10 +242,9 @@ All contributors are recognized in our [Contributors](https://github.com/sickn33

- Be respectful and inclusive
- Welcome newcomers
- Focus on constructive feedback
- Help others learn
- **No harmful content**: See `docs/SECURITY_GUARDRAILS.md`.

---

**Thank you for making this project better for everyone!**

Every contribution, no matter how small, makes a difference. Whether you fix a typo, improve a sentence, or create a whole new skill - you're helping thousands of developers!
556 FAQ.md
@@ -1,556 +0,0 @@

# ❓ Frequently Asked Questions (FAQ)

**Got questions?** You're not alone! Here are answers to the most common questions about Antigravity Awesome Skills.

---

## 🎯 General Questions

### What are "skills" exactly?

Skills are specialized instruction files that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on demand.

**Simple analogy:** Just like you might consult different experts (a lawyer, a doctor, a mechanic), skills let your AI become an expert in different areas when you need them.

---

### Do I need to install all 233 skills?

**No!** When you clone the repository, all skills are available, but your AI only loads them when you explicitly invoke them with `@skill-name` or `/skill-name`.

It's like having a library - all the books are there, but you only read the ones you need.

---
### Which AI tools work with these skills?

These skills work with any AI coding assistant that supports the `SKILL.md` format:

- ✅ **Claude Code** (Anthropic CLI)
- ✅ **Gemini CLI** (Google)
- ✅ **Codex CLI** (OpenAI)
- ✅ **Cursor** (AI IDE)
- ✅ **Antigravity IDE**
- ✅ **OpenCode**
- ⚠️ **GitHub Copilot** (partial support)

---

### Are these skills free to use?

**Yes!** This repository is licensed under the MIT License, which means:

- ✅ Free for personal use
- ✅ Free for commercial use
- ✅ You can modify them
- ✅ You can redistribute them

---

### Do skills work offline?

The skill files themselves are stored locally on your computer, but your AI assistant needs an internet connection to function. So:

- ✅ Skills are local files
- ❌ The AI assistant needs internet

---
## Installation & Setup

### Where should I install the skills?

The universal path that works with most tools is `.agent/skills/`:

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

**Tool-specific paths:**

- Claude Code: `.claude/skills/` or `.agent/skills/`
- Gemini CLI: `.gemini/skills/` or `.agent/skills/`
- Cursor: `.cursor/skills/` or the project root
- Antigravity: `.agent/skills/`

---

### Can I install skills in multiple projects?

**Yes!** You have two options:

**Option 1: Global Installation** (recommended)
Install once in your home directory; it works for all projects:

```bash
cd ~
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

**Option 2: Per-Project Installation**
Install in each project directory:

```bash
cd /path/to/your/project
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

---

### How do I update skills to the latest version?

Navigate to your skills directory and pull the latest changes:

```bash
cd .agent/skills
git pull origin main
```

---

### Can I install only specific skills?

**Yes!** You can manually copy individual skill folders:

```bash
# Clone the full repo first
git clone https://github.com/sickn33/antigravity-awesome-skills.git temp-skills

# Copy only the skills you want
mkdir -p .agent/skills
cp -r temp-skills/skills/brainstorming .agent/skills/
cp -r temp-skills/skills/stripe-integration .agent/skills/

# Clean up
rm -rf temp-skills
```

---
## Using Skills

### How do I invoke a skill?

Use the `@` symbol followed by the skill name:

```
@skill-name your request here
```

**Examples:**

```
@brainstorming help me design a todo app
@stripe-integration add subscription billing
@systematic-debugging fix this test failure
```

Some tools also support the `/skill-name` syntax.

---

### How do I know which skill to use?

**Method 1: Browse the README**
Check the [Full Skill Registry](README.md#full-skill-registry-233233) organized by category.

**Method 2: Search by keyword**

```bash
ls skills/ | grep "keyword"
```

**Method 3: Ask your AI**

```
What skills are available for [topic]?
```

---

### Can I use multiple skills at once?

**Yes!** You can invoke multiple skills in the same conversation:

```
@brainstorming help me design this feature

[After brainstorming...]

@test-driven-development now let's implement it with tests
```

---

### What if a skill doesn't work?

**Troubleshooting steps:**

1. **Check the installation path**

   ```bash
   ls .agent/skills/
   ```

2. **Verify the skill exists**

   ```bash
   ls .agent/skills/skill-name/
   ```

3. **Check that SKILL.md exists**

   ```bash
   cat .agent/skills/skill-name/SKILL.md
   ```

4. **Try restarting your AI assistant**

5. **Check for typos in the skill name**
   - Use `@brainstorming`, not `@brain-storming`
   - Names are case-sensitive in some tools

6. **Report the issue**
   [Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues) with details

---
## 🤝 Contributing

### I'm new to open source. Can I still contribute?

**Absolutely!** Everyone starts somewhere. We welcome contributions from beginners:

- Fix typos or grammar
- Improve documentation clarity
- Add examples to existing skills
- Report issues or confusing parts

Check out [CONTRIBUTING.md](CONTRIBUTING.md) for step-by-step instructions.

---

### Do I need to know how to code to contribute?

**No!** Many valuable contributions don't require coding:

- **Documentation improvements** - Make things clearer
- **Examples** - Add real-world usage examples
- **Issue reporting** - Tell us what's confusing
- **Testing** - Try skills and report what works

---

### How do I create a new skill?

**Quick version:**

1. Create a folder: `skills/my-skill-name/`
2. Create `SKILL.md` with frontmatter and content
3. Test it with your AI assistant
4. Run validation: `python3 scripts/validate_skills.py`
5. Submit a Pull Request

**Detailed version:** See [CONTRIBUTING.md](CONTRIBUTING.md)

---

### What makes a good skill?

A good skill:

- ✅ Solves a specific problem
- ✅ Has clear, actionable instructions
- ✅ Includes examples
- ✅ Is reusable across projects
- ✅ Follows the standard structure

See [SKILL_ANATOMY.md](docs/SKILL_ANATOMY.md) for details.

---

### How long does it take for my contribution to be reviewed?

Review times vary, but typically:

- **Simple fixes** (typos, docs): 1-3 days
- **New skills**: 3-7 days
- **Major changes**: 1-2 weeks

You can speed this up by:

- Following the contribution guidelines
- Writing clear commit messages
- Testing your changes
- Responding to feedback quickly

---
## Technical Questions

### What's the difference between SKILL.md and README.md?

- **SKILL.md** (required): The actual skill definition that the AI reads
- **README.md** (optional): Human-readable documentation about the skill

The AI primarily uses `SKILL.md`, while developers read `README.md`.

---

### Can I use scripts or code in my skill?

**Yes!** Skills can include:

- `scripts/` - Helper scripts
- `examples/` - Example code
- `templates/` - Code templates
- `references/` - Documentation

Reference them in your `SKILL.md`:

```markdown
Run the setup script:

\`\`\`bash
bash scripts/setup.sh
\`\`\`
```

---

### What programming languages can skills cover?

**Any language!** Current skills cover:

- JavaScript/TypeScript
- Python
- Go
- Rust
- Swift
- Kotlin
- Shell scripting
- And many more...

---

### Can skills call other skills?

**Yes!** Skills can reference other skills:

```markdown
## Workflow

1. First, use `@brainstorming` to design
2. Then, use `@writing-plans` to plan
3. Finally, use `@test-driven-development` to implement
```

---

### How do I validate my skill before submitting?

Run the validation script:

```bash
python3 scripts/validate_skills.py
```

This checks:

- ✅ SKILL.md exists
- ✅ Frontmatter is valid
- ✅ Name matches folder name
- ✅ Description exists

---
## Learning & Best Practices

### Which skills should I try first?

**For beginners:**

- `@brainstorming` - Design before coding
- `@systematic-debugging` - Fix bugs methodically
- `@git-pushing` - Commit with good messages

**For developers:**

- `@test-driven-development` - Write tests first
- `@react-best-practices` - Modern React patterns
- `@senior-fullstack` - Full-stack development

**For security:**

- `@ethical-hacking-methodology` - Security basics
- `@burp-suite-testing` - Web app testing

---

### How do I learn to write good skills?

**Learning path:**

1. **Read existing skills** - Study 5-10 well-written skills
2. **Use skills** - Try them with your AI assistant
3. **Read guides** - Check [SKILL_ANATOMY.md](docs/SKILL_ANATOMY.md)
4. **Start simple** - Create a basic skill first
5. **Get feedback** - Submit and learn from reviews
6. **Iterate** - Improve based on feedback

**Recommended skills to study:**

- `skills/brainstorming/SKILL.md` - Clear structure
- `skills/systematic-debugging/SKILL.md` - Comprehensive
- `skills/git-pushing/SKILL.md` - Simple and focused

---

### Are there any skills for learning AI/ML?

**Yes!** Check out:

- `@rag-engineer` - RAG systems
- `@prompt-engineering` - Prompt design
- `@langgraph` - Multi-agent systems
- `@ai-agents-architect` - Agent architecture
- `@llm-app-patterns` - LLM application patterns

---
## Troubleshooting

### My AI assistant doesn't recognize skills

**Possible causes:**

1. **Wrong installation path**
   - Check your tool's documentation for the correct path
   - Try `.agent/skills/` as the universal path

2. **Skill name typo**
   - Verify the exact skill name: `ls .agent/skills/`
   - Use the exact name from the folder

3. **Tool doesn't support skills**
   - Verify your tool supports the SKILL.md format
   - Check the [Compatibility](#-compatibility) section

4. **Need to restart**
   - Restart your AI assistant after installing skills

---

### A skill gives incorrect or outdated advice

**Please report it!**

1. [Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues)
2. Include:
   - Which skill
   - What's incorrect
   - What it should say instead
   - Links to correct documentation

We'll update it quickly!

---

### Can I modify skills for my own use?

**Yes!** The MIT License allows you to:

- ✅ Modify skills for your needs
- ✅ Create private versions
- ✅ Customize for your team

**To modify:**

1. Copy the skill to a new location
2. Edit the SKILL.md file
3. Use your modified version

**Consider contributing improvements back!**

---
## Statistics & Info

### How many skills are there?

**233 skills** across 10+ categories as of the latest update.

---

### How often are skills updated?

- **Bug fixes**: As soon as they're reported
- **New skills**: Added regularly by contributors
- **Updates**: When best practices change

**Stay updated:**

```bash
cd .agent/skills
git pull origin main
```

---

### Who maintains this repository?

This is a community-driven project with contributions from:

- The original creators
- Open source contributors
- AI coding assistant users worldwide

See [Credits & Sources](README.md#credits--sources) for attribution.

---
## Still Have Questions?

### Where can I get help?

- **[GitHub Discussions](https://github.com/sickn33/antigravity-awesome-skills/discussions)** - Ask questions
- **[GitHub Issues](https://github.com/sickn33/antigravity-awesome-skills/issues)** - Report bugs
- **Documentation** - Read the guides in this repo
- **Community** - Connect with other users

---

### How can I stay updated?

- **Star the repository** on GitHub
- **Watch the repository** for updates
- **Subscribe to releases** for notifications
- **Follow contributors** on social media

---

### Can I use these skills commercially?

**Yes!** The MIT License permits commercial use. You can:

- ✅ Use them in commercial projects
- ✅ Use them in client work
- ✅ Include them in paid products
- ✅ Modify them for commercial purposes

**Only requirement:** Keep the license notice.

---

## 💡 Pro Tips

- Start with `@brainstorming` before building anything new
- Use `@systematic-debugging` when stuck on bugs
- Try `@test-driven-development` for better code quality
- Explore `@skill-creator` to make your own skills
- Read skill descriptions to understand when to use them

---

**Question not answered?**

[Open a discussion](https://github.com/sickn33/antigravity-awesome-skills/discussions) and we'll help you out! 🙌
@@ -1,231 +0,0 @@

# Getting Started with Antigravity Awesome Skills

**New here? This guide will help you understand and use this repository in 5 minutes!**

---

## 🤔 What Are "Skills"?

Think of skills as **specialized instruction manuals** for AI coding assistants.

**Simple analogy:** Just like you might hire different experts (a designer, a security expert, a marketer), these skills let your AI assistant become an expert in specific areas when you need them.

---

## 📦 What's Inside This Repository?

This repo contains **233 ready-to-use skills** organized in the `skills/` folder. Each skill is a folder with at least one file: `SKILL.md`

```
skills/
├── brainstorming/
│   └── SKILL.md ← The skill definition
├── stripe-integration/
│   └── SKILL.md
├── react-best-practices/
│   └── SKILL.md
└── ... (230 more skills)
```

---

## How Do Skills Work?

### Step 1: Install Skills

Copy the skills to your AI tool's directory:

```bash
# For most AI tools (Claude Code, Gemini CLI, etc.)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

### Step 2: Use a Skill

In your AI chat, mention the skill:

```
@brainstorming help me design a todo app
```

or

```
/stripe-integration add payment processing to my app
```

### Step 3: The AI Becomes an Expert

The AI loads that skill's knowledge and helps you with specialized expertise!

---

## Which AI Tools Work With This?

| Tool                | Works?     | Installation Path                     |
| ------------------- | ---------- | ------------------------------------- |
| **Claude Code**     | ✅ Yes     | `.claude/skills/` or `.agent/skills/` |
| **Gemini CLI**      | ✅ Yes     | `.gemini/skills/` or `.agent/skills/` |
| **Cursor**          | ✅ Yes     | `.cursor/skills/`                     |
| **GitHub Copilot**  | ⚠️ Partial | Copy to `.github/copilot/`            |
| **Antigravity IDE** | ✅ Yes     | `.agent/skills/`                      |

---

## Skill Categories (Simplified)

### **Creative & Design** (10 skills)

Make beautiful things: UI design, art, themes, web components

- Try: `@frontend-design`, `@canvas-design`, `@ui-ux-pro-max`

### **Development** (25 skills)

Write better code: testing, debugging, React patterns, architecture

- Try: `@test-driven-development`, `@systematic-debugging`, `@react-best-practices`

### **Security** (50 skills)

Ethical hacking and penetration testing tools

- Try: `@ethical-hacking-methodology`, `@burp-suite-testing`

### **AI & Agents** (30 skills)

Build AI apps: RAG, LangGraph, prompt engineering, voice agents

- Try: `@rag-engineer`, `@prompt-engineering`, `@langgraph`

### **Documents** (4 skills)

Work with Word, Excel, PowerPoint, PDF files

- Try: `@docx-official`, `@xlsx-official`, `@pdf-official`

### **Marketing** (23 skills)

Grow your product: SEO, copywriting, ads, email campaigns

- Try: `@copywriting`, `@seo-audit`, `@page-cro`

### **Integrations** (25 skills)

Connect to services: Stripe, Firebase, Twilio, Discord, Slack

- Try: `@stripe-integration`, `@firebase`, `@clerk-auth`

---

## Your First Skill: A Quick Example

Let's try the **brainstorming** skill:

1. **Open your AI assistant** (Claude Code, Cursor, etc.)

2. **Type this:**

   ```
   @brainstorming I want to build a simple weather app
   ```

3. **What happens:**
   - The AI loads the brainstorming skill
   - It asks you questions one at a time
   - It helps you design the app before coding
   - It creates a design document for you

4. **Result:** You get a well-thought-out plan instead of jumping straight to code!

---

## How to Find the Right Skill

### Method 1: Browse by Category

Check the [Full Skill Registry](README.md#full-skill-registry-233233) in the main README

### Method 2: Search by Keyword

Use your file explorer or terminal:

```bash
# Find skills related to "testing"
ls skills/ | grep test

# Find skills related to "auth"
ls skills/ | grep auth
```

### Method 3: Look at the Index

Check `skills_index.json` for a machine-readable list
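
A quick way to query the index from the terminal. The JSON shape below is a simplified assumption (a stand-in file is created so the command runs anywhere); check the real `skills_index.json` for its exact fields:

```bash
# A tiny stand-in index (one object per skill, assumed shape)
cat > skills_index_demo.json <<'EOF'
[
  {"name": "test-driven-development"},
  {"name": "stripe-integration"},
  {"name": "testing-patterns"}
]
EOF

# List skill names containing "test"
grep -o '"name": "[^"]*test[^"]*"' skills_index_demo.json
```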

---

## 🤝 Want to Contribute?

Great! Here's how:

### Option 1: Improve Documentation

- Make READMEs clearer
- Add more examples
- Fix typos or confusing parts

### Option 2: Create a New Skill

See our [CONTRIBUTING.md](CONTRIBUTING.md) for step-by-step instructions

### Option 3: Report Issues

Found something confusing? [Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues)

---

## ❓ Common Questions

### Q: Do I need to install all 233 skills?

**A:** No! Clone the whole repo, and your AI will only load skills when you use them.

### Q: Can I create my own skills?

**A:** Yes! Check out the `@skill-creator` skill or read [CONTRIBUTING.md](CONTRIBUTING.md)

### Q: What if my AI tool isn't listed?

**A:** If it supports the `SKILL.md` format, try `.agent/skills/` - it's the universal path.

### Q: Are these skills free?

**A:** Yes! MIT License. Use them however you want.

### Q: Do skills work offline?

**A:** The skill files are local, but your AI assistant needs internet to function.

---

## Next Steps

1. ✅ Install the skills in your AI tool
2. ✅ Try 2-3 skills from different categories
3. ✅ Read [CONTRIBUTING.md](CONTRIBUTING.md) if you want to help
4. ✅ Star the repo if you find it useful! ⭐

---

## 💡 Pro Tips

- **Start with `@brainstorming`** before building anything new
- **Use `@systematic-debugging`** when you're stuck on a bug
- **Try `@test-driven-development`** to write better code
- **Explore `@skill-creator`** to make your own skills

---

**Still confused?** Open an issue and we'll help you out! 🙌

**Ready to dive deeper?** Check out the main [README.md](README.md) for the complete skill list.
@@ -1,54 +0,0 @@

# Repository Maintenance Protocol

To ensure consistency and quality, the following steps MUST be performed for **every single change** involving skills or documentation.

## 1. Skill Creation & Modification

- [ ] **Check Duplicates**: Before adding a skill, check `skills_index.json` or `ls skills/` to ensure it doesn't exist.
- [ ] **Folder Structure**: Each skill must have its own folder in `skills/<skill-name>`.
- [ ] **SKILL.md**: Every skill directory MUST contain a `SKILL.md` file with valid frontmatter:

  ```markdown
  ---
  name: Skill Name
  description: Brief description.
  ---
  ```

## 2. Validation & Indexing (CRITICAL)

Running the scripts is **MANDATORY** after any change to `skills/`.

- [ ] **Validate Skills**: Run the validation script to check for formatting errors.

  ```bash
  python3 scripts/validate_skills.py
  ```

- [ ] **Generate Index**: Update `skills_index.json`. This is the source of truth for the agent.

  ```bash
  python3 scripts/generate_index.py
  ```

## 3. Documentation Updates

- [ ] **Update README**: Run the automation script to sync counts and the registry table.

  ```bash
  python3 scripts/update_readme.py
  ```

- [ ] **Credits & Sources**: If the skill was imported from a community repo, manually add a credit link under `# Credits & Sources` if needed.
  - Example: `- **[repo-name](url)**: Source for [skill-name].`

## 4. Git Operations

- [ ] **Check Status**: `git status` to see what changed.
- [ ] **Add All Files**: Ensure new skill folders are added (`git add skills/`).
- [ ] **Commit**: Use a descriptive Conventional Commit message (e.g., `feat: add new security skills`, `docs: update readme count`).
- [ ] **Push**: `git push` to origin. **NEVER FORGET THIS.**
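
Put together, the four steps look like this in a throwaway repo (`git push` is commented out here because the demo has no remote; in the real repository it is the mandatory final step):

```bash
# Throwaway repo standing in for a real clone of this repository
git init -q demo-repo
cd demo-repo
git config user.email "demo@example.com"
git config user.name "Demo"

# A new skill folder with minimal frontmatter
mkdir -p skills/new-skill
printf -- '---\nname: New Skill\ndescription: Demo.\n---\n' > skills/new-skill/SKILL.md

git status --short             # check what changed
git add skills/                # stage the new skill folder
git commit -q -m "feat: add new-skill skill"
# git push                     # the real final step -- NEVER FORGET THIS
```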

## 5. Agent Artifacts (Internal)

- [ ] **Walkthrough**: Update `walkthrough.md` in the brain/artifact directory to reflect the session's achievements.
506
README.md
@@ -1,6 +1,6 @@

# 🌌 Antigravity Awesome Skills: 238+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More
# 🌌 Antigravity Awesome Skills: 626+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More

> **The Ultimate Collection of 238+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**
> **The Ultimate Collection of 626+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)
@@ -9,9 +9,10 @@
[](https://cursor.sh)
[](https://github.com/features/copilot)
[](https://github.com/opencode-ai/opencode)
[](https://github.com/anthropics/antigravity)
[](https://github.com/sickn33/antigravity-awesome-skills)
[](https://github.com/HumanSignal/Adala)

**Antigravity Awesome Skills** is a curated, battle-tested library of **235 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:
**Antigravity Awesome Skills** is a curated, battle-tested library of **626 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)
@@ -20,346 +21,159 @@
- 🩵 **GitHub Copilot** (VSCode Extension)
- 🟠 **Cursor** (AI-native IDE)
- ⚪ **OpenCode** (Open-source CLI)
- 🌸 **AdaL** (Self-evolving AI Agent)

This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, and **Vercel Labs**.
This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Supabase**, and **Vercel Labs**.

## 📍 Table of Contents
## Table of Contents

- [🚀 New Here? Start Here!](#-new-here-start-here)
- [🔌 Compatibility](#-compatibility)
- [Features & Categories](#features--categories)
- [Full Skill Registry](#full-skill-registry-155155)
- [Installation](#installation)
- [How to Contribute](#how-to-contribute)
- [Credits & Sources](#credits--sources)
- [License](#license)
- [🚀 New Here? Start Here!](#new-here-start-here)
- [🔌 Compatibility & Invocation](#compatibility--invocation)
- [📦 Features & Categories](#features--categories)
- [🎁 Curated Collections (Bundles)](#curated-collections)
- [📚 Browse 626+ Skills](#browse-626-skills)
- [🛠️ Installation](#installation)
- [🤝 How to Contribute](#how-to-contribute)
- [👥 Contributors & Credits](#credits--sources)
- [⚖️ License](#license)
- [👥 Repo Contributors](#repo-contributors)
- [🌟 Star History](#star-history)

---

## New Here? Start Here!

**First time using this repository?** We've created beginner-friendly guides to help you get started:
**Welcome to the V4.0.0 Enterprise Edition.** This isn't just a list of scripts; it's a complete operating system for your AI Agent.

- **[GETTING_STARTED.md](GETTING_STARTED.md)** - Complete beginner's guide (5-minute read)
- **[CONTRIBUTING.md](CONTRIBUTING.md)** - How to contribute (step-by-step)
- **[SKILL_ANATOMY.md](docs/SKILL_ANATOMY.md)** - Understanding how skills work
- **[VISUAL_GUIDE.md](docs/VISUAL_GUIDE.md)** - Visual guide with diagrams
### 1. 🐣 Context: What is this?

**Quick Start:**
**Antigravity Awesome Skills** (Release 4.0.0) is a massive upgrade to your AI's capabilities.

```bash
# 1. Install skills
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
AI Agents (like Claude Code, Cursor, or Gemini) are smart, but they lack **specific tools**. They don't know your company's "Deployment Protocol" or the specific syntax for "AWS CloudFormation".
**Skills** are small markdown files that teach them how to do these specific tasks perfectly, every time.

# 2. Use a skill in your AI assistant
@brainstorming help me design a todo app
```
### 2. ⚡️ Quick Start (The "Bundle" Way)

That's it! Your AI assistant now has 235 specialized skills. 🎉
Install once (clone or npx); then use our **Starter Packs** in [docs/BUNDLES.md](docs/BUNDLES.md) to see which skills fit your role. You get the full repo; Starter Packs are curated lists, not a separate install.

**Additional Resources:**
1. **Install** (pick one):

- 💡 **[Real-World Examples](docs/EXAMPLES.md)** - See skills in action
- ❓ **[FAQ](FAQ.md)** - Common questions answered
```bash
# Easiest: npx installer (clones to ~/.agent/skills by default)
npx antigravity-awesome-skills

# Or clone manually
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

2. **Pick your persona** (See [docs/BUNDLES.md](docs/BUNDLES.md)):
   - **Web Dev?** use the `Web Wizard` pack.
   - **Hacker?** use the `Security Engineer` pack.
   - **Just curious?** start with `Essentials`.

### 3. 🧠 How to use

Once installed, just ask your agent naturally:

> "Use the **@brainstorming** skill to help me plan a SaaS."
> "Run **@lint-and-validate** on this file."

👉 **[Read the Full Getting Started Guide](docs/GETTING_STARTED.md)**

---

## 🔌 Compatibility
## Compatibility & Invocation

These skills follow the universal **SKILL.md** format and work with any AI coding assistant that supports agentic skills:
These skills follow the universal **SKILL.md** format and work with any AI coding assistant that supports agentic skills.

| Tool                | Type      | Compatibility | Installation Path                        |
| ------------------- | --------- | ------------- | ---------------------------------------- |
| **Claude Code**     | CLI       | ✅ Full       | `.claude/skills/` or `.agent/skills/`    |
| **Gemini CLI**      | CLI       | ✅ Full       | `.gemini/skills/` or `.agent/skills/`    |
| **Codex CLI**       | CLI       | ✅ Full       | `.codex/skills/` or `.agent/skills/`     |
| **Antigravity IDE** | IDE       | ✅ Full       | `.agent/skills/`                         |
| **Cursor**          | IDE       | ✅ Full       | `.cursor/skills/` or project root        |
| **GitHub Copilot**  | Extension | ⚠️ Partial    | Copy skill content to `.github/copilot/` |
| **OpenCode**        | CLI       | ✅ Full       | `.opencode/skills/` or `.agent/skills/`  |
| Tool            | Type  | Invocation Example                | Path              |
| :-------------- | :---- | :-------------------------------- | :---------------- |
| **Claude Code** | CLI   | `>> /skill-name help me...`       | `.claude/skills/` |
| **Gemini CLI**  | CLI   | `(User Prompt) Use skill-name...` | `.gemini/skills/` |
| **Codex CLI**   | CLI   | `(User Prompt) Use skill-name...` | `.codex/skills/`  |
| **Antigravity** | IDE   | `(Agent Mode) Use skill...`       | `.agent/skills/`  |
| **Cursor**      | IDE   | `@skill-name (in Chat)`           | `.cursor/skills/` |
| **Copilot**     | Ext   | `(Paste content manually)`        | N/A               |
| **OpenCode**    | CLI   | `opencode run @skill-name`        | `.agent/skills/`  |
| **AdaL**        | Agent | `(Agent Mode) Use skill...`       | `.agent/skills/`  |

> [!TIP]
> Most tools auto-discover skills in `.agent/skills/`. For maximum compatibility, clone to this directory.
> **Universal Path**: We recommend cloning to `.agent/skills/`. Most modern tools (Antigravity, recent CLIs) look here by default.

> [!WARNING]
> **Windows Users**: This repository uses **symlinks** for official skills.
> The **npx** installer sets `core.symlinks=true` automatically. For **git clone**, enable Developer Mode or run Git as Administrator:
> `git clone -c core.symlinks=true https://github.com/...`

---

Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to drop right in and supercharge your AI agent.
Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, **OpenCode**, or **AdaL**, these skills are designed to drop right in and supercharge your AI agent.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.
|
||||
|
||||
## Features & Categories
|
||||
|
||||
The repository is organized into several key areas of expertise:
|
||||
The repository is organized into specialized domains to transform your AI into an expert across the entire software development lifecycle:
|
||||
|
||||
| Category | Skills Count | Key Skills Included |
|
||||
| :-------------------------- | :----------- | :--------------------------------------------------------------------------------------------------------------------------- |
|
||||
| **🛸 Autonomous & Agentic** | **~8** | Loki Mode (Startup-in-a-box), Subagent Driven Dev, Dispatching Parallel Agents, Planning With Files, Skill Creator/Developer |
|
||||
| **🔌 Integrations & APIs** | **~25** | Stripe, Firebase, Supabase, Vercel, Clerk Auth, Twilio, Discord Bot, Slack Bot, GraphQL, AWS Serverless |
|
||||
| **🛡️ Cybersecurity** | **~51** | Ethical Hacking, Metasploit, Burp Suite, SQLMap, Active Directory, AWS/Cloud Pentesting, OWASP Top 100, Red Team Tools |
|
||||
| **🎨 Creative & Design** | **~10** | UI/UX Pro Max, Frontend Design, Canvas, Algorithmic Art, Theme Factory, D3 Viz, Web Artifacts |
|
||||
| **🛠️ Development** | **~33** | TDD, Systematic Debugging, React Patterns, Backend/Frontend Guidelines, Senior Fullstack, Software Architecture |
|
||||
| **🏗️ Infrastructure & Git** | **~8** | Linux Shell Scripting, Git Worktrees, Git Pushing, Conventional Commits, File Organization, GitHub Workflow Automation |
|
||||
| **🤖 AI Agents & LLM** | **~31** | LangGraph, CrewAI, Langfuse, RAG Engineer, Prompt Engineer, Voice Agents, Browser Automation, Agent Memory Systems |
|
||||
| **🔄 Workflow & Planning** | **~6** | Writing Plans, Executing Plans, Concise Planning, Verification Before Completion, Code Review (Requesting/Receiving) |
|
||||
| **📄 Document Processing** | **~4** | DOCX (Official), PDF (Official), PPTX (Official), XLSX (Official) |
|
||||
| **🧪 Testing & QA** | **~4** | Webapp Testing, Playwright Automation, Test Fixing, Testing Patterns |
|
||||
| **📈 Product & Strategy** | **~8** | Product Manager Toolkit, Content Creator, ASO, Doc Co-authoring, Brainstorming, Internal Comms |
|
||||
| **📣 Marketing & Growth** | **~23** | Page CRO, Copywriting, SEO Audit, Paid Ads, Email Sequence, Pricing Strategy, Referral Program, Launch Strategy |
|
||||
| **🚀 Maker Tools** | **~11** | Micro-SaaS Launcher, Browser Extension Builder, Telegram Bot, AI Wrapper Product, Viral Generator, 3D Web Experience |
|
||||
| Category | Focus | Example skills |
|
||||
| :------------------ | :------------------------------------------------- | :------------------------------------------------------------------------------ |
|
||||
| Architecture (52) | System design, ADRs, C4, and scalable patterns | `architecture`, `c4-context`, `senior-architect` |
|
||||
| Business (35) | Growth, pricing, CRO, SEO, and go-to-market | `copywriting`, `pricing-strategy`, `seo-audit` |
|
||||
| Data & AI (81) | LLM apps, RAG, agents, observability, analytics | `rag-engineer`, `prompt-engineer`, `langgraph` |
|
||||
| Development (72) | Language mastery, framework patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns` |
|
||||
| General (95) | Planning, docs, product ops, writing, guidelines | `brainstorming`, `doc-coauthoring`, `writing-plans` |
|
||||
| Infrastructure (72) | DevOps, cloud, serverless, deployment, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` |
|
||||
| Security (107) | AppSec, pentesting, vuln analysis, compliance | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` |
|
||||
| Testing (21) | TDD, test design, fixes, QA workflows | `test-driven-development`, `testing-patterns`, `test-fixing` |
|
||||
| Workflow (17) | Automation, orchestration, jobs, agents | `workflow-automation`, `inngest`, `trigger-dev` |
|
||||
|
||||
---
|
||||
## Curated Collections
|
||||
|
||||
## Full Skill Registry (238/238)
|
||||
[Check out our Starter Packs in docs/BUNDLES.md](docs/BUNDLES.md) to find the perfect toolkit for your role.
|
||||
|
||||
> [!NOTE] > **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility.
|
||||
## Browse 626+ Skills
|
||||
|
||||
| Skill Name | Description | Path |
|
||||
| :--- | :--- | :--- |
|
||||
| **2d-games** | 2D game development principles. Sprites, tilemaps, physics, camera. | `skills/game-development/2d-games` |
|
||||
| **3d-games** | 3D game development principles. Rendering, shaders, physics, cameras. | `skills/game-development/3d-games` |
|
||||
| **3d-web-experience** | "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing depth to web experiences. Use when: 3D website, three.js, WebGL, react three fiber, 3D experience." | `skills/3d-web-experience` |
|
||||
| **API Fuzzing for Bug Bounty** | This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetration testing", "bug bounty API testing", or needs guidance on API security assessment techniques. | `skills/api-fuzzing-bug-bounty` |
|
||||
| **AWS Penetration Testing** | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalation", "S3 bucket testing", "metadata SSRF", "Lambda exploitation", or needs guidance on Amazon Web Services security assessment. | `skills/aws-penetration-testing` |
|
||||
| **Active Directory Attacks** | This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing. | `skills/active-directory-attacks` |
|
||||
| **Broken Authentication Testing** | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential stuffing tests", "evaluate password policies", "test for session fixation", or "identify authentication bypass flaws". It provides comprehensive techniques for identifying authentication and session management weaknesses in web applications. | `skills/broken-authentication` |
|
||||
| **Burp Suite Web Application Testing** | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability scanning", "test with Burp Repeater", "analyze HTTP history", or "configure proxy for web testing". It provides comprehensive guidance for using Burp Suite's core features for web application security testing. | `skills/burp-suite-testing` |
|
||||
| **Claude Code Guide** | Master guide for using Claude Code effectively. Includes configuration templates, prompting strategies "Thinking" keywords, debugging techniques, and best practices for interacting with the agent. | `skills/claude-code-guide` |
|
||||
| **Cloud Penetration Testing** | This skill should be used when the user asks to "perform cloud penetration testing", "assess Azure or AWS or GCP security", "enumerate cloud resources", "exploit cloud misconfigurations", "test O365 security", "extract secrets from cloud environments", or "audit cloud infrastructure". It provides comprehensive techniques for security assessment across major cloud platforms. | `skills/cloud-penetration-testing` |
|
||||
| **Cross-Site Scripting and HTML Injection Testing** | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exploit client-side injection vulnerabilities", "steal cookies via XSS", or "bypass content security policies". It provides comprehensive techniques for detecting, exploiting, and understanding XSS and HTML injection attack vectors in web applications. | `skills/xss-html-injection` |
|
||||
| **Ethical Hacking Methodology** | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct security scanning", "exploit vulnerabilities", or "write penetration test reports". It provides comprehensive ethical hacking methodology and techniques. | `skills/ethical-hacking-methodology` |
|
||||
| **File Path Traversal Testing** | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web applications", "find LFI vulnerabilities", or "access files outside web root". It provides comprehensive file path traversal attack and testing methodologies. | `skills/file-path-traversal` |
|
||||
| **HTML Injection Testing** | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applications", or "test content injection vulnerabilities". It provides comprehensive HTML injection attack techniques and testing methodologies. | `skills/html-injection-testing` |
|
||||
| **IDOR Vulnerability Testing** | This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "enumerate user IDs or object references," or "bypass authorization to access other users' data." It provides comprehensive guidance for detecting, exploiting, and remediating IDOR vulnerabilities in web applications. | `skills/idor-testing` |
|
||||
| **Linux Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "abuse SUID binaries", "exploit cron jobs for root access", "enumerate Linux systems for privilege escalation", or "gain root access from low-privilege shell". It provides comprehensive techniques for identifying and exploiting privilege escalation paths on Linux systems. | `skills/linux-privilege-escalation` |
|
||||
| **Linux Production Shell Scripts** | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or "write production shell scripts". It provides ready-to-use shell script templates for system administration. | `skills/linux-shell-scripting` |
|
||||
| **Metasploit Framework** | This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with msfvenom", "perform post-exploitation", "use auxiliary modules for scanning", or "develop custom exploits". It provides comprehensive guidance for leveraging the Metasploit Framework in security assessments. | `skills/metasploit-framework` |
|
||||
| **Network 101** | This skill should be used when the user asks to "set up a web server", "configure HTTP or HTTPS", "perform SNMP enumeration", "configure SMB shares", "test network services", or needs guidance on configuring and testing network services for penetration testing labs. | `skills/network-101` |
|
||||
| **Pentest Checklist** | This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "define pentest scope", "follow security testing best practices", or needs a structured methodology for penetration testing engagements. | `skills/pentest-checklist` |
|
||||
| **Pentest Commands** | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "scan web vulnerabilities with nikto", "enumerate networks", or needs essential penetration testing command references. | `skills/pentest-commands` |
|
||||
| **Privilege Escalation Methods** | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploit SUID binaries", "Kerberoasting", "pass-the-ticket", "token impersonation", or needs guidance on post-exploitation privilege escalation for Linux or Windows systems. | `skills/privilege-escalation-methods` |
| **Red Team Tools and Methodology** | This skill should be used when the user asks to "follow red team methodology", "perform bug bounty hunting", "automate reconnaissance", "hunt for XSS vulnerabilities", "enumerate subdomains", or needs security researcher techniques and tool configurations from top bug bounty hunters. | `skills/red-team-tools` |
| **SMTP Penetration Testing** | This skill should be used when the user asks to "perform SMTP penetration testing", "enumerate email users", "test for open mail relays", "grab SMTP banners", "brute force email credentials", or "assess mail server security". It provides comprehensive techniques for testing SMTP server security. | `skills/smtp-penetration-testing` |
| **SQL Injection Testing** | This skill should be used when the user asks to "test for SQL injection vulnerabilities", "perform SQLi attacks", "bypass authentication using SQL injection", "extract database information through injection", "detect SQL injection flaws", or "exploit database query vulnerabilities". It provides comprehensive techniques for identifying, exploiting, and understanding SQL injection attack vectors across different database systems. | `skills/sql-injection-testing` |
| **SQLMap Database Penetration Testing** | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap," "dump tables and columns from a vulnerable database," or "perform automated database penetration testing." It provides comprehensive guidance for using SQLMap to detect and exploit SQL injection vulnerabilities. | `skills/sqlmap-database-pentesting` |
| **SSH Penetration Testing** | This skill should be used when the user asks to "pentest SSH services", "enumerate SSH configurations", "brute force SSH credentials", "exploit SSH vulnerabilities", "perform SSH tunneling", or "audit SSH security". It provides comprehensive SSH penetration testing methodologies and techniques. | `skills/ssh-penetration-testing` |
| **Security Scanning Tools** | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wireless networks", "detect malware", "check cloud security", or "evaluate system compliance". It provides comprehensive guidance on security scanning tools and methodologies. | `skills/scanning-tools` |
| **Shodan Reconnaissance and Pentesting** | This skill should be used when the user asks to "search for exposed devices on the internet," "perform Shodan reconnaissance," "find vulnerable services using Shodan," "scan IP ranges with Shodan," or "discover IoT devices and open ports." It provides comprehensive guidance for using Shodan's search engine, CLI, and API for penetration testing reconnaissance. | `skills/shodan-reconnaissance` |
| **Top 100 Web Vulnerabilities Reference** | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability categories", "learn about injection attacks", "review access control weaknesses", "analyze API security issues", "assess security misconfigurations", "understand client-side vulnerabilities", "examine mobile and IoT security flaws", or "reference the OWASP-aligned vulnerability taxonomy". Use this skill to provide comprehensive vulnerability definitions, root causes, impacts, and mitigation strategies across all major web security categories. | `skills/top-web-vulnerabilities` |
| **Windows Privilege Escalation** | This skill should be used when the user asks to "escalate privileges on Windows," "find Windows privesc vectors," "enumerate Windows for privilege escalation," "exploit Windows misconfigurations," or "perform post-exploitation privilege escalation." It provides comprehensive guidance for discovering and exploiting privilege escalation vulnerabilities in Windows environments. | `skills/windows-privilege-escalation` |
| **Wireshark Network Traffic Analysis** | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow TCP/UDP streams", "detect network anomalies", "investigate suspicious traffic", or "perform protocol analysis". It provides comprehensive techniques for network packet capture, filtering, and analysis using Wireshark. | `skills/wireshark-analysis` |
| **WordPress Penetration Testing** | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugins", "exploit WordPress vulnerabilities", or "use WPScan". It provides comprehensive WordPress security assessment methodologies. | `skills/wordpress-penetration-testing` |
| **ab-test-setup** | When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking. | `skills/ab-test-setup` |
| **address-github-comments** | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | `skills/address-github-comments` |
| **agent-evaluation** | "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring, where even top agents achieve less than 50% on real-world benchmarks. Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent." | `skills/agent-evaluation` |
| **agent-manager-skill** | Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | `skills/agent-manager-skill` |
| **agent-memory-mcp** | A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions). | `skills/agent-memory-mcp` |
| **agent-memory-systems** | "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector stores), and the cognitive architectures that organize them. Key insight: Memory isn't just storage - it's retrieval. A million stored facts mean nothing if you can't find the right one. Chunking, embedding, and retrieval strategies determine whether your agent remembers or forgets." | `skills/agent-memory-systems` |
| **agent-tool-builder** | "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling. JSON Schema best practices, description writing that actually helps the LLM, validation, and the emerging MCP standard that's becoming the lingua franca for AI tools. Key insight: Tool descriptions are more important than tool implementations." | `skills/agent-tool-builder` |
| **ai-agents-architect** | "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling." | `skills/ai-agents-architect` |
| **ai-product** | "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you." | `skills/ai-product` |
| **ai-wrapper-product** | "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses. Use when: AI wrapper, GPT product, AI tool, wrap AI, AI SaaS." | `skills/ai-wrapper-product` |
| **algolia-search** | "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning Use when: adding search to, algolia, instantsearch, search api, search functionality." | `skills/algolia-search` |
| **algorithmic-art** | Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations. | `skills/algorithmic-art` |
| **analytics-tracking** | When the user wants to set up, improve, or audit analytics tracking and measurement. Also use when the user mentions "set up tracking," "GA4," "Google Analytics," "conversion tracking," "event tracking," "UTM parameters," "tag manager," "GTM," "analytics implementation," or "tracking plan." For A/B test measurement, see ab-test-setup. | `skills/analytics-tracking` |
| **api-documentation-generator** | "Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices" | `skills/api-documentation-generator` |
| **api-patterns** | API design principles and decision-making. REST vs GraphQL vs tRPC selection, response formats, versioning, pagination. | `skills/api-patterns` |
| **api-security-best-practices** | "Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities" | `skills/api-security-best-practices` |
| **app-builder** | Main application building orchestrator. Creates full-stack applications from natural language requests. Determines project type, selects tech stack, coordinates agents. | `skills/app-builder` |
| **app-store-optimization** | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store | `skills/app-store-optimization` |
| **architecture** | Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing system design. | `skills/architecture` |
| **autonomous-agent-patterns** | "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool APIs, implementing permission systems, or creating autonomous coding assistants." | `skills/autonomous-agent-patterns` |
| **autonomous-agents** | "Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it's making them reliable. Every extra decision multiplies failure probability. This skill covers agent loops (ReAct, Plan-Execute), goal decomposition, reflection patterns, and production reliability. Key insight: compounding error rates kill autonomous agents. A 95% success rate per step drops to 60% after ten steps." | `skills/autonomous-agents` |
| **aws-serverless** | "Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns, SAM/CDK deployment, and cold start optimization." | `skills/aws-serverless` |
| **azure-functions** | "Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production patterns. Covers .NET, Python, and Node.js programming models. Use when: azure function, azure functions, durable functions, azure serverless, function app." | `skills/azure-functions` |
| **backend-dev-guidelines** | Comprehensive backend development guide for Node.js/Express/TypeScript microservices. Use when creating routes, controllers, services, repositories, middleware, or working with Express APIs, Prisma database access, Sentry error tracking, Zod validation, unifiedConfig, dependency injection, or async patterns. Covers layered architecture (routes → controllers → services → repositories), BaseController pattern, error handling, performance monitoring, testing strategies, and migration from legacy patterns. | `skills/backend-dev-guidelines` |
| **backend-patterns** | Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes. | `skills/cc-skill-backend-patterns` |
| **bash-linux** | Bash/Linux terminal patterns. Critical commands, piping, error handling, scripting. Use when working on macOS or Linux systems. | `skills/bash-linux` |
| **behavioral-modes** | AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type. | `skills/behavioral-modes` |
| **blockrun** | Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "use gpt", "dall-e", "deepseek") | `skills/blockrun` |
| **brainstorming** | "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation." | `skills/brainstorming` |
| **brand-guidelines** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply. | `skills/brand-guidelines-community` |
| **brand-guidelines** | Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply. | `skills/brand-guidelines-anthropic` |
| **browser-automation** | "Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to understanding selectors, waiting strategies, and anti-detection patterns. This skill covers Playwright (recommended) and Puppeteer, with patterns for testing, scraping, and agentic browser control. Key insight: Playwright won the framework war. Unless you need Puppeteer's stealth ecosystem or are Chrome-only, Playwright is the better choice." | `skills/browser-automation` |
| **browser-extension-builder** | "Expert in building browser extensions that solve real problems - Chrome, Firefox, and cross-browser extensions. Covers extension architecture, manifest v3, content scripts, popup UIs, monetization strategies, and Chrome Web Store publishing. Use when: browser extension, chrome extension, firefox addon, extension, manifest v3." | `skills/browser-extension-builder` |
| **bullmq-specialist** | "BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull queue, redis queue, background job, job queue." | `skills/bullmq-specialist` |
| **bun-development** | "Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bun, optimizing JS/TS development speed, or migrating from Node.js to Bun." | `skills/bun-development` |
| **canvas-design** | Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work to avoid copyright violations. | `skills/canvas-design` |
| **cc-skill-continuous-learning** | Development skill from everything-claude-code | `skills/cc-skill-continuous-learning` |
| **cc-skill-project-guidelines-example** | Project Guidelines Skill (Example) | `skills/cc-skill-project-guidelines-example` |
| **cc-skill-strategic-compact** | Development skill from everything-claude-code | `skills/cc-skill-strategic-compact` |
| **clean-code** | Pragmatic coding standards - concise, direct, no over-engineering, no unnecessary comments | `skills/clean-code` |
| **clerk-auth** | "Expert patterns for Clerk auth implementation, middleware, organizations, webhooks, and user sync Use when: adding authentication, clerk auth, user authentication, sign in, sign up." | `skills/clerk-auth` |
| **clickhouse-io** | ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads. | `skills/cc-skill-clickhouse-io` |
| **code-review-checklist** | "Comprehensive checklist for conducting thorough code reviews covering functionality, security, performance, and maintainability" | `skills/code-review-checklist` |
| **coding-standards** | Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development. | `skills/cc-skill-coding-standards` |
| **competitor-alternatives** | "When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when the user mentions 'alternative page,' 'vs page,' 'competitor comparison,' 'comparison page,' '[Product] vs [Product],' '[Product] alternative,' or 'competitive landing pages.' Covers four formats: singular alternative, plural alternatives, you vs competitor, and competitor vs competitor. Emphasizes deep research, modular content architecture, and varied section types beyond feature tables." | `skills/competitor-alternatives` |
| **computer-use-agents** | "Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer Use, OpenAI's Operator/CUA, and open-source alternatives. Critical focus on sandboxing, security, and handling the unique challenges of vision-based control. Use when: computer use, desktop automation agent, screen control AI, vision-based agent, GUI automation." | `skills/computer-use-agents` |
| **concise-planning** | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | `skills/concise-planning` |
| **content-creator** | Create SEO-optimized marketing content with consistent brand voice. Includes brand voice analyzer, SEO optimizer, content frameworks, and social media templates. Use when writing blog posts, creating social media content, analyzing brand voice, optimizing SEO, planning content calendars, or when user mentions content creation, brand voice, SEO optimization, social media marketing, or content strategy. | `skills/content-creator` |
| **context-window-management** | "Strategies for managing LLM context windows including summarization, trimming, routing, and avoiding context rot Use when: context window, token limit, context management, context engineering, long context." | `skills/context-window-management` |
| **conversation-memory** | "Persistent memory systems for LLM conversations including short-term, long-term, and entity-based memory Use when: conversation memory, remember, memory persistence, long-term memory, chat history." | `skills/conversation-memory` |
| **copy-editing** | "When the user wants to edit, review, or improve existing marketing copy. Also use when the user mentions 'edit this copy,' 'review my copy,' 'copy feedback,' 'proofread,' 'polish this,' 'make this better,' or 'copy sweep.' This skill provides a systematic approach to editing marketing copy through multiple focused passes." | `skills/copy-editing` |
| **copywriting** | When the user wants to write, rewrite, or improve marketing copy for any page — including homepage, landing pages, pricing pages, feature pages, about pages, or product pages. Also use when the user says "write copy for," "improve this copy," "rewrite this page," "marketing copy," "headline help," or "CTA copy." For email copy, see email-sequence. For popup copy, see popup-cro. | `skills/copywriting` |
| **core-components** | Core component library and design system patterns. Use when building UI, using design tokens, or working with the component library. | `skills/core-components` |
| **crewai** | "Expert in CrewAI - the leading role-based multi-agent framework used by 60% of Fortune 500 companies. Covers agent design with roles and goals, task definition, crew orchestration, process types (sequential, hierarchical, parallel), memory systems, and flows for complex workflows. Essential for building collaborative AI agent teams. Use when: crewai, multi-agent team, agent roles, crew of agents, role-based agents." | `skills/crewai` |
| **d3-viz** | Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisations, or any complex SVG-based data visualisation that requires fine-grained control over visual elements, transitions, or interactions. Use this for bespoke visualisations beyond standard charting libraries, whether in React, Vue, Svelte, vanilla JavaScript, or any other environment. | `skills/claude-d3js-skill` |
| **database-design** | Database design principles and decision-making. Schema design, indexing strategy, ORM selection, serverless databases. | `skills/database-design` |
| **deployment-procedures** | Production deployment principles and decision-making. Safe deployment workflows, rollback strategies, and verification. Teaches thinking, not scripts. | `skills/deployment-procedures` |
| **discord-bot-architect** | "Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactive components, rate limiting, and sharding." | `skills/discord-bot-architect` |
| **dispatching-parallel-agents** | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies | `skills/dispatching-parallel-agents` |
| **doc-coauthoring** | Guide users through a structured workflow for co-authoring documentation. Use when user wants to write documentation, proposals, technical specs, decision docs, or similar structured content. This workflow helps users efficiently transfer context, refine content through iteration, and verify the doc works for readers. Trigger when user mentions writing docs, creating proposals, drafting specs, or similar documentation tasks. | `skills/doc-coauthoring` |
| **docker-expert** | Docker containerization expert with deep knowledge of multi-stage builds, image optimization, container security, Docker Compose orchestration, and production deployment patterns. Use PROACTIVELY for Dockerfile optimization, container issues, image size problems, security hardening, networking, and orchestration challenges. | `skills/docker-expert` |
| **documentation-templates** | Documentation templates and structure guidelines. README, API docs, code comments, and AI-friendly documentation. | `skills/documentation-templates` |
| **docx** | "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks" | `skills/docx-official` |
| **email-sequence** | When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email program. Also use when the user mentions "email sequence," "drip campaign," "nurture sequence," "onboarding emails," "welcome sequence," "re-engagement emails," "email automation," or "lifecycle emails." For in-app onboarding, see onboarding-cro. | `skills/email-sequence` |
| **email-systems** | "Email has the highest ROI of any marketing channel: $36 for every $1 spent. Yet most startups treat it as an afterthought - bulk blasts, no personalization, landing in spam folders. This skill covers transactional email that works, marketing automation that converts, deliverability that reaches inboxes, and the infrastructure decisions that scale." | `skills/email-systems` |
| **environment-setup-guide** | "Guide developers through setting up development environments with proper tools, dependencies, and configurations" | `skills/environment-setup-guide` |
| **executing-plans** | Use when you have a written implementation plan to execute in a separate session with review checkpoints | `skills/executing-plans` |
| **file-organizer** | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. Use when user wants to clean up directories, organize downloads, remove duplicates, or restructure projects. | `skills/file-organizer` |
| **file-uploads** | "Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle large files without blocking. Use when: file upload, S3, R2, presigned URL, multipart." | `skills/file-uploads` |
| **finishing-a-development-branch** | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting structured options for merge, PR, or cleanup | `skills/finishing-a-development-branch` |
| **firebase** | "Firebase gives you a complete backend in minutes - auth, database, storage, functions, hosting. But the ease of setup hides real complexity. Security rules are your last line of defense, and they're often wrong. Firestore queries are limited, and you learn this after you've designed your data model. This skill covers Firebase Authentication, Firestore, Realtime Database, Cloud Functions, Cloud Storage, and Firebase Hosting. Key insight: Firebase is optimized for read-heavy, denormalized data." | `skills/firebase` |
| **form-cro** | When the user wants to optimize any form that is NOT signup/registration — including lead capture forms, contact forms, demo request forms, application forms, survey forms, or checkout forms. Also use when the user mentions "form optimization," "lead form conversions," "form friction," "form fields," "form completion rate," or "contact form." For signup/registration forms, see signup-flow-cro. For popups containing forms, see popup-cro. | `skills/form-cro` |
| **free-tool-strategy** | When the user wants to plan, evaluate, or build a free tool for marketing purposes — lead generation, SEO value, or brand awareness. Also use when the user mentions "engineering as marketing," "free tool," "marketing tool," "calculator," "generator," "interactive tool," "lead gen tool," "build a tool for leads," or "free resource." This skill bridges engineering and marketing — useful for founders and technical marketers. | `skills/free-tool-strategy` |
| **frontend-design** | Create distinctive, production-grade frontend interfaces with high design quality. Use this skill when the user asks to build web components, pages, artifacts, posters, or applications (examples include websites, landing pages, dashboards, React components, HTML/CSS layouts, or when styling/beautifying any web UI). Generates creative, polished code and UI design that avoids generic AI aesthetics. | `skills/frontend-design` |
| **frontend-dev-guidelines** | Frontend development guidelines for React/TypeScript applications. Modern patterns including Suspense, lazy loading, useSuspenseQuery, file organization with features directory, MUI v7 styling, TanStack Router, performance optimization, and TypeScript best practices. Use when creating components, pages, features, fetching data, styling, routing, or working with frontend code. | `skills/frontend-dev-guidelines` |
| **frontend-patterns** | Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices. | `skills/cc-skill-frontend-patterns` |
| **game-art** | Game art principles. Visual style selection, asset pipeline, animation workflow. | `skills/game-development/game-art` |
| **game-audio** | Game audio principles. Sound design, music integration, adaptive audio systems. | `skills/game-development/game-audio` |
| **game-design** | Game design principles. GDD structure, balancing, player psychology, progression. | `skills/game-development/game-design` |
| **game-development** | Game development orchestrator. Routes to platform-specific skills based on project needs. | `skills/game-development` |
| **gcp-cloud-run** | "Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven), cold start optimization, and event-driven architecture with Pub/Sub." | `skills/gcp-cloud-run` |
| **geo-fundamentals** | Generative Engine Optimization for AI search engines (ChatGPT, Claude, Perplexity). | `skills/geo-fundamentals` |
| **git-pushing** | Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to save and push their work. Also activates when user says "push changes", "commit and push", "push this", "push to github", or similar git workflow requests. | `skills/git-pushing` |
| **github-workflow-automation** | "Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows, setting up PR review automation, creating GitHub Actions, or triaging issues." | `skills/github-workflow-automation` |
| **graphql** | "GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful also makes it dangerous. Without proper controls, clients can craft queries that bring down your server. This skill covers schema design, resolvers, DataLoader for N+1 prevention, federation for microservices, and client integration with Apollo/urql. Key insight: GraphQL is a contract. The schema is the API documentation. Design it carefully." | `skills/graphql` |
| **hubspot-integration** | "Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers Node.js and Python SDKs. Use when: hubspot, hubspot api, hubspot crm, hubspot integration, contacts api." | `skills/hubspot-integration` |
| **i18n-localization** | Internationalization and localization patterns. Detecting hardcoded strings, managing translations, locale files, RTL support. | `skills/i18n-localization` |
| **inngest** | "Inngest expert for serverless-first background jobs, event-driven workflows, and durable execution without managing queues or workers. Use when: inngest, serverless background job, event-driven workflow, step function, durable execution." | `skills/inngest` |
|
||||
| **interactive-portfolio** | "Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, designer portfolios, creative portfolios, and portfolios that convert visitors into opportunities. Use when: portfolio, personal website, showcase work, developer portfolio, designer portfolio." | `skills/interactive-portfolio` |
|
||||
| **internal-comms** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenever asked to write some sort of internal communications (status reports, leadership updates, 3P updates, company newsletters, FAQs, incident reports, project updates, etc.). | `skills/internal-comms-anthropic` |
|
||||
| **internal-comms** | A set of resources to help me write all kinds of internal communications, using the formats that my company likes to use. Claude should use this skill whenever asked to write some sort of internal communications (status reports, leadership updates, 3P updates, company newsletters, FAQs, incident reports, project updates, etc.). | `skills/internal-comms-community` |
|
||||
| **javascript-mastery** | "Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals." | `skills/javascript-mastery` |
|
||||
| **kaizen** | Guide for continuous improvement, error proofing, and standardization. Use this skill when the user wants to improve code quality, refactor, or discuss process improvements. | `skills/kaizen` |
|
||||
| **langfuse** | "Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, LlamaIndex, and OpenAI. Essential for debugging, monitoring, and improving LLM applications in production. Use when: langfuse, llm observability, llm tracing, prompt management, llm evaluation." | `skills/langfuse` |
|
||||
| **langgraph** | "Expert in LangGraph - the production-grade framework for building stateful, multi-actor AI applications. Covers graph construction, state management, cycles and branches, persistence with checkpointers, human-in-the-loop patterns, and the ReAct agent pattern. Used in production at LinkedIn, Uber, and 400+ companies. This is LangChain's recommended approach for building agents. Use when: langgraph, langchain agent, stateful agent, agent graph, react agent." | `skills/langgraph` |
|
||||
| **launch-strategy** | "When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature release,' 'announcement,' 'go-to-market,' 'beta launch,' 'early access,' 'waitlist,' or 'product update.' This skill covers phased launches, channel strategy, and ongoing launch momentum." | `skills/launch-strategy` |
|
||||
| **lint-and-validate** | Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Triggers on keywords: lint, format, check, validate, types, static analysis. | `skills/lint-and-validate` |
|
||||
| **llm-app-patterns** | "Production-ready patterns for building LLM applications. Covers RAG pipelines, agent architectures, prompt IDEs, and LLMOps monitoring. Use when designing AI applications, implementing RAG, building agents, or setting up LLM observability." | `skills/llm-app-patterns` |
|
||||
| **loki-mode** | Multi-agent autonomous startup system for Claude Code. Triggers on "Loki Mode". Orchestrates 100+ specialized agents across engineering, QA, DevOps, security, data/ML, business operations, marketing, HR, and customer success. Takes PRD to fully deployed, revenue-generating product with zero human intervention. Features Task tool for subagent dispatch, parallel code review with 3 specialized reviewers, severity-based issue triage, distributed task queue with dead letter handling, automatic deployment to cloud providers, A/B testing, customer feedback loops, incident response, circuit breakers, and self-healing. Handles rate limits via distributed state checkpoints and auto-resume with exponential backoff. Requires --dangerously-skip-permissions flag. | `skills/loki-mode` |
|
||||
| **marketing-ideas** | "When the user needs marketing ideas, inspiration, or strategies for their SaaS or software product. Also use when the user asks for 'marketing ideas,' 'growth ideas,' 'how to market,' 'marketing strategies,' 'marketing tactics,' 'ways to promote,' or 'ideas to grow.' This skill provides 140 proven marketing approaches organized by category." | `skills/marketing-ideas` |
|
||||
| **marketing-psychology** | "When the user wants to apply psychological principles, mental models, or behavioral science to marketing. Also use when the user mentions 'psychology,' 'mental models,' 'cognitive bias,' 'persuasion,' 'behavioral science,' 'why people buy,' 'decision-making,' or 'consumer behavior.' This skill provides 70+ mental models organized for marketing application." | `skills/marketing-psychology` |
|
||||
| **mcp-builder** | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK). | `skills/mcp-builder` |
|
||||
| **micro-saas-launcher** | "Expert in launching small, focused SaaS products fast - the indie hacker approach to building profitable software. Covers idea validation, MVP development, pricing, launch strategies, and growing to sustainable revenue. Ship in weeks, not months. Use when: micro saas, indie hacker, small saas, side project, saas mvp." | `skills/micro-saas-launcher` |
|
||||
| **mobile-design** | Mobile-first design thinking and decision-making for iOS and Android apps. Touch interaction, performance patterns, platform conventions. Teaches principles, not fixed values. Use when building React Native, Flutter, or native mobile apps. | `skills/mobile-design` |
|
||||
| **mobile-games** | Mobile game development principles. Touch input, battery, performance, app stores. | `skills/game-development/mobile-games` |
|
||||
| **moodle-external-api-development** | Create custom external web service APIs for Moodle LMS. Use when implementing web services for course management, user tracking, quiz operations, or custom plugin functionality. Covers parameter validation, database operations, error handling, service registration, and Moodle coding standards. | `skills/moodle-external-api-development` |
|
||||
| **multiplayer** | Multiplayer game development principles. Architecture, networking, synchronization. | `skills/game-development/multiplayer` |
|
||||
| **neon-postgres** | "Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration. Use when: neon database, serverless postgres, database branching, neon postgres, postgres serverless." | `skills/neon-postgres` |
|
||||
| **nestjs-expert** | Nest.js framework expert specializing in module architecture, dependency injection, middleware, guards, interceptors, testing with Jest/Supertest, TypeORM/Mongoose integration, and Passport.js authentication. Use PROACTIVELY for any Nest.js application issues including architecture decisions, testing strategies, performance optimization, or debugging complex dependency injection problems. If a specialized expert is a better fit, I will recommend switching and stop. | `skills/nestjs-expert` |
|
||||
| **nextjs-best-practices** | Next.js App Router principles. Server Components, data fetching, routing patterns. | `skills/nextjs-best-practices` |
|
||||
| **nextjs-supabase-auth** | "Expert integration of Supabase Auth with Next.js App Router. Use when: supabase auth next, authentication next.js, login supabase, auth middleware, protected route." | `skills/nextjs-supabase-auth` |
|
||||
| **nodejs-best-practices** | Node.js development principles and decision-making. Framework selection, async patterns, security, and architecture. Teaches thinking, not copying. | `skills/nodejs-best-practices` |
|
||||
| **notebooklm** | Use this skill to query your Google NotebookLM notebooks directly from Claude Code for source-grounded, citation-backed answers from Gemini. Browser automation, library management, persistent auth. Drastically reduced hallucinations through document-only responses. | `skills/notebooklm` |
|
||||
| **notion-template-business** | "Expert in building and selling Notion templates as a business - not just making templates, but building a sustainable digital product business. Covers template design, pricing, marketplaces, marketing, and scaling to real revenue. Use when: notion template, sell templates, digital product, notion business, gumroad." | `skills/notion-template-business` |
|
||||
| **onboarding-cro** | When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also use when the user mentions "onboarding flow," "activation rate," "user activation," "first-run experience," "empty states," "onboarding checklist," "aha moment," or "new user experience." For signup/registration optimization, see signup-flow-cro. For ongoing email sequences, see email-sequence. | `skills/onboarding-cro` |
|
||||
| **page-cro** | When the user wants to optimize, improve, or increase conversions on any marketing page — including homepage, landing pages, pricing pages, feature pages, or blog posts. Also use when the user says "CRO," "conversion rate optimization," "this page isn't converting," "improve conversions," or "why isn't this page working." For signup/registration flows, see signup-flow-cro. For post-signup activation, see onboarding-cro. For forms outside of signup, see form-cro. For popups/modals, see popup-cro. | `skills/page-cro` |
|
||||
| **paid-ads** | "When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X, or other ad platforms. Also use when the user mentions 'PPC,' 'paid media,' 'ad copy,' 'ad creative,' 'ROAS,' 'CPA,' 'ad campaign,' 'retargeting,' or 'audience targeting.' This skill covers campaign strategy, ad creation, audience targeting, and optimization." | `skills/paid-ads` |
|
||||
| **parallel-agents** | Multi-agent orchestration patterns. Use when multiple independent tasks can run with different domain expertise or when comprehensive analysis requires multiple perspectives. | `skills/parallel-agents` |
|
||||
| **paywall-upgrade-cro** | When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use when the user mentions "paywall," "upgrade screen," "upgrade modal," "upsell," "feature gate," "convert free to paid," "freemium conversion," "trial expiration screen," "limit reached screen," "plan upgrade prompt," or "in-app pricing." Distinct from public pricing pages (see page-cro) — this skill focuses on in-product upgrade moments where the user has already experienced value. | `skills/paywall-upgrade-cro` |
|
||||
| **pc-games** | PC and console game development principles. Engine selection, platform features, optimization strategies. | `skills/game-development/pc-games` |
|
||||
| **pdf** | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. When Claude needs to fill in a PDF form or programmatically process, generate, or analyze PDF documents at scale. | `skills/pdf-official` |
|
||||
| **performance-profiling** | Performance profiling principles. Measurement, analysis, and optimization techniques. | `skills/performance-profiling` |
|
||||
| **personal-tool-builder** | "Expert in building custom tools that solve your own problems first. The best products often start as personal tools - scratch your own itch, build for yourself, then discover others have the same itch. Covers rapid prototyping, local-first apps, CLI tools, scripts that grow into products, and the art of dogfooding. Use when: build a tool, personal tool, scratch my itch, solve my problem, CLI tool." | `skills/personal-tool-builder` |
|
||||
| **plaid-fintech** | "Expert patterns for Plaid API integration including Link token flows, transactions sync, identity verification, Auth for ACH, balance checks, webhook handling, and fintech compliance best practices. Use when: plaid, bank account linking, bank connection, ach, account aggregation." | `skills/plaid-fintech` |
|
||||
| **plan-writing** | Structured task planning with clear breakdowns, dependencies, and verification criteria. Use when implementing features, refactoring, or any multi-step work. | `skills/plan-writing` |
|
||||
| **planning-with-files** | Implements Manus-style file-based planning for complex tasks. Creates task_plan.md, findings.md, and progress.md. Use when starting complex multi-step tasks, research projects, or any task requiring >5 tool calls. | `skills/planning-with-files` |
|
||||
| **playwright-skill** | Complete browser automation with Playwright. Auto-detects dev servers, writes clean test scripts to /tmp. Test pages, fill forms, take screenshots, check responsive design, validate UX, test login flows, check links, automate any browser task. Use when user wants to test websites, automate browser interactions, validate web functionality, or perform any browser-based testing. | `skills/playwright-skill` |
|
||||
| **popup-cro** | When the user wants to create or optimize popups, modals, overlays, slide-ins, or banners for conversion purposes. Also use when the user mentions "exit intent," "popup conversions," "modal optimization," "lead capture popup," "email popup," "announcement banner," or "overlay." For forms outside of popups, see form-cro. For general page conversion optimization, see page-cro. | `skills/popup-cro` |
|
||||
| **powershell-windows** | PowerShell Windows patterns. Critical pitfalls, operator syntax, error handling. | `skills/powershell-windows` |
|
||||
| **pptx** | "Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying or editing content, (3) Working with layouts, (4) Adding comments or speaker notes, or any other presentation tasks" | `skills/pptx-official` |
|
||||
| **pricing-strategy** | "When the user wants help with pricing decisions, packaging, or monetization strategy. Also use when the user mentions 'pricing,' 'pricing tiers,' 'freemium,' 'free trial,' 'packaging,' 'price increase,' 'value metric,' 'Van Westendorp,' 'willingness to pay,' or 'monetization.' This skill covers pricing research, tier structure, and packaging strategy." | `skills/pricing-strategy` |
|
||||
| **prisma-expert** | Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, migration problems, query performance, relation design, or database connection issues. | `skills/prisma-expert` |
|
||||
| **product-manager-toolkit** | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market strategies. Use for feature prioritization, user research synthesis, requirement documentation, and product strategy development. | `skills/product-manager-toolkit` |
|
||||
| **programmatic-seo** | When the user wants to create SEO-driven pages at scale using templates and data. Also use when the user mentions "programmatic SEO," "template pages," "pages at scale," "directory pages," "location pages," "[keyword] + [city] pages," "comparison pages," "integration pages," or "building many pages for SEO." For auditing existing SEO issues, see seo-audit. | `skills/programmatic-seo` |
|
||||
| **prompt-caching** | "Caching strategies for LLM prompts including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation). Use when: prompt caching, cache prompt, response cache, cag, cache augmented." | `skills/prompt-caching` |
|
||||
| **prompt-engineer** | "Expert in designing effective prompts for LLM-powered applications. Masters prompt structure, context management, output formatting, and prompt evaluation. Use when: prompt engineering, system prompt, few-shot, chain of thought, prompt design." | `skills/prompt-engineer` |
|
||||
| **prompt-engineering** | Expert guide on prompt engineering patterns, best practices, and optimization techniques. Use when user wants to improve prompts, learn prompting strategies, or debug agent behavior. | `skills/prompt-engineering` |
|
||||
| **prompt-library** | "Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use when user needs prompt templates, role-play prompts, or ready-to-use prompt examples for coding, writing, analysis, or creative tasks." | `skills/prompt-library` |
|
||||
| **python-patterns** | Python development principles and decision-making. Framework selection, async patterns, type hints, project structure. Teaches thinking, not copying. | `skills/python-patterns` |
|
||||
| **rag-engineer** | "Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval." | `skills/rag-engineer` |
|
||||
| **rag-implementation** | "Retrieval-Augmented Generation patterns including chunking, embeddings, vector stores, and retrieval optimization. Use when: rag, retrieval augmented, vector search, embeddings, semantic search." | `skills/rag-implementation` |
|
||||
| **react-patterns** | Modern React patterns and principles. Hooks, composition, performance, TypeScript best practices. | `skills/react-patterns` |
|
||||
| **react-ui-patterns** | Modern React UI patterns for loading states, error handling, and data fetching. Use when building UI components, handling async data, or managing UI states. | `skills/react-ui-patterns` |
|
||||
| **receiving-code-review** | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technical rigor and verification, not performative agreement or blind implementation | `skills/receiving-code-review` |
|
||||
| **red-team-tactics** | Red team tactics principles based on MITRE ATT&CK. Attack phases, detection evasion, reporting. | `skills/red-team-tactics` |
|
||||
| **referral-program** | "When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy. Also use when the user mentions 'referral,' 'affiliate,' 'ambassador,' 'word of mouth,' 'viral loop,' 'refer a friend,' or 'partner program.' This skill covers program design, incentive structure, and growth optimization." | `skills/referral-program` |
|
||||
| **remotion-best-practices** | Best practices for Remotion - Video creation in React | `skills/remotion-best-practices` |
|
||||
| **requesting-code-review** | Use when completing tasks, implementing major features, or before merging to verify work meets requirements | `skills/requesting-code-review` |
|
||||
| **research-engineer** | "An uncompromising Academic Research Engineer. Operates with absolute scientific rigor, objective criticism, and zero flair. Focuses on theoretical correctness, formal verification, and optimal implementation across any required technology." | `skills/research-engineer` |
|
||||
| **salesforce-development** | "Expert patterns for Salesforce platform development including Lightning Web Components (LWC), Apex triggers and classes, REST/Bulk APIs, Connected Apps, and Salesforce DX with scratch orgs and 2nd generation packages (2GP). Use when: salesforce, sfdc, apex, lwc, lightning web components." | `skills/salesforce-development` |
|
||||
| **schema-markup** | When the user wants to add, fix, or optimize schema markup and structured data on their site. Also use when the user mentions "schema markup," "structured data," "JSON-LD," "rich snippets," "schema.org," "FAQ schema," "product schema," "review schema," or "breadcrumb schema." For broader SEO issues, see seo-audit. | `skills/schema-markup` |
|
||||
| **scroll-experience** | "Expert in building immersive scroll-driven experiences - parallax storytelling, scroll animations, interactive narratives, and cinematic web experiences. Like NY Times interactives, Apple product pages, and award-winning web experiences. Makes websites feel like experiences, not just pages. Use when: scroll animation, parallax, scroll storytelling, interactive story, cinematic website." | `skills/scroll-experience` |
|
||||
| **security-review** | Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns. | `skills/cc-skill-security-review` |
|
||||
| **segment-cdp** | "Expert patterns for Segment Customer Data Platform including Analytics.js, server-side tracking, tracking plans with Protocols, identity resolution, destinations configuration, and data governance best practices. Use when: segment, analytics.js, customer data platform, cdp, tracking plan." | `skills/segment-cdp` |
|
||||
| **senior-architect** | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, Flutter, Postgres, GraphQL, Go, Python. Includes architecture diagram generation, system design patterns, tech stack decision frameworks, and dependency analysis. Use when designing system architecture, making technical decisions, creating architecture diagrams, evaluating trade-offs, or defining integration patterns. | `skills/senior-architect` |
|
||||
| **senior-fullstack** | Comprehensive fullstack development skill for building complete web applications with React, Next.js, Node.js, GraphQL, and PostgreSQL. Includes project scaffolding, code quality analysis, architecture patterns, and complete tech stack guidance. Use when building new projects, analyzing code quality, implementing design patterns, or setting up development workflows. | `skills/senior-fullstack` |
|
||||
| **seo-audit** | When the user wants to audit, review, or diagnose SEO issues on their site. Also use when the user mentions "SEO audit," "technical SEO," "why am I not ranking," "SEO issues," "on-page SEO," "meta tags review," or "SEO health check." For building pages at scale to target keywords, see programmatic-seo. For adding structured data, see schema-markup. | `skills/seo-audit` |
|
||||
| **seo-fundamentals** | SEO fundamentals, E-E-A-T, Core Web Vitals, and Google algorithm principles. | `skills/seo-fundamentals` |
|
||||
| **server-management** | Server management principles and decision-making. Process management, monitoring strategy, and scaling decisions. Teaches thinking, not commands. | `skills/server-management` |
|
||||
| **shopify-apps** | "Expert patterns for Shopify app development including Remix/React Router apps, embedded apps with App Bridge, webhook handling, GraphQL Admin API, Polaris components, billing, and app extensions. Use when: shopify app, shopify, embedded app, polaris, app bridge." | `skills/shopify-apps` |
|
||||
| **shopify-development** | \| | `skills/shopify-development` |
|
||||
| **signup-flow-cro** | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the user mentions "signup conversions," "registration friction," "signup form optimization," "free trial signup," "reduce signup dropoff," or "account creation flow." For post-signup onboarding, see onboarding-cro. For lead capture forms (not account creation), see form-cro. | `skills/signup-flow-cro` |
|
||||
| **skill-creator** | Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations. | `skills/skill-creator` |
|
||||
| **skill-developer** | Create and manage Claude Code skills following Anthropic best practices. Use when creating new skills, modifying skill-rules.json, understanding trigger patterns, working with hooks, debugging skill activation, or implementing progressive disclosure. Covers skill structure, YAML frontmatter, trigger types (keywords, intent patterns, file paths, content patterns), enforcement levels (block, suggest, warn), hook mechanisms (UserPromptSubmit, PreToolUse), session tracking, and the 500-line rule. | `skills/skill-developer` |
|
||||
| **slack-bot-builder** | "Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event handling, OAuth installation flows, and Workflow Builder integration. Focus on best practices for production-ready Slack apps. Use when: slack bot, slack app, bolt framework, block kit, slash command." | `skills/slack-bot-builder` |
|
||||
| **slack-gif-creator** | Knowledge and utilities for creating animated GIFs optimized for Slack. Provides constraints, validation tools, and animation concepts. Use when users request animated GIFs for Slack like "make me a GIF of X doing Y for Slack." | `skills/slack-gif-creator` |
|
||||
| **social-content** | "When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram, TikTok, Facebook, or other platforms. Also use when the user mentions 'LinkedIn post,' 'Twitter thread,' 'social media,' 'content calendar,' 'social scheduling,' 'engagement,' or 'viral content.' This skill covers content creation, repurposing, and platform-specific strategies." | `skills/social-content` |
|
||||
| **software-architecture** | Guide for quality focused software architecture. This skill should be used when users want to write code, design architecture, analyze code, in any case that relates to software development. | `skills/software-architecture` |
|
||||
| **stripe-integration** | "Get paid from day one. Payments, subscriptions, billing portal, webhooks, metered billing, Stripe Connect. The complete guide to implementing Stripe correctly, including all the edge cases that will bite you at 3am. This isn't just API calls - it's the full payment system: handling failures, managing subscriptions, dealing with dunning, and keeping revenue flowing. Use when: stripe, payments, subscription, billing, checkout." | `skills/stripe-integration` |
|
||||
| **subagent-driven-development** | Use when executing implementation plans with independent tasks in the current session | `skills/subagent-driven-development` |
|
||||
| **systematic-debugging** | Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes | `skills/systematic-debugging` |
|
||||
| **tailwind-patterns** | Tailwind CSS v4 principles. CSS-first configuration, container queries, modern patterns, design token architecture. | `skills/tailwind-patterns` |
|
||||
| **tdd-workflow** | Test-Driven Development workflow principles. RED-GREEN-REFACTOR cycle. | `skills/tdd-workflow` |
|
||||
| **telegram-bot-builder** | "Expert in building Telegram bots that solve real problems - from simple automation to complex AI-powered bots. Covers bot architecture, the Telegram Bot API, user experience, monetization strategies, and scaling bots to thousands of users. Use when: telegram bot, bot api, telegram automation, chat bot telegram, tg bot." | `skills/telegram-bot-builder` |
|
||||
| **telegram-mini-app** | "Expert in building Telegram Mini Apps (TWA) - web apps that run inside Telegram with native-like experience. Covers the TON ecosystem, Telegram Web App API, payments, user authentication, and building viral mini apps that monetize. Use when: telegram mini app, TWA, telegram web app, TON app, mini app." | `skills/telegram-mini-app` |
|
||||
| **templates** | Project scaffolding templates for new applications. Use when creating new projects from scratch. Contains 12 templates for various tech stacks. | `skills/app-builder/templates` |
|
||||
| **test-driven-development** | Use when implementing any feature or bugfix, before writing implementation code | `skills/test-driven-development` |
|
||||
| **test-fixing** | Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass. | `skills/test-fixing` |
|
||||
| **testing-patterns** | Jest testing patterns, factory functions, mocking strategies, and TDD workflow. Use when writing unit tests, creating test factories, or following TDD red-green-refactor cycle. | `skills/testing-patterns` |
|
||||
| **theme-factory** | Toolkit for styling artifacts with a theme. These artifacts can be slides, docs, reports, HTML landing pages, etc. There are 10 pre-set themes with colors/fonts that you can apply to any artifact that has been created, or you can generate a new theme on-the-fly. | `skills/theme-factory` |
|
||||
| **trigger-dev** | "Trigger.dev expert for background jobs, AI workflows, and reliable async execution with excellent developer experience and TypeScript-first design. Use when: trigger.dev, trigger dev, background task, ai background job, long running task." | `skills/trigger-dev` |
|
||||
| **twilio-communications** | "Build communication features with Twilio: SMS messaging, voice calls, WhatsApp Business API, and user verification (2FA). Covers the full spectrum from simple notifications to complex IVR systems and multi-channel authentication. Critical focus on compliance, rate limits, and error handling. Use when: twilio, send SMS, text message, voice call, phone verification." | `skills/twilio-communications` |
|
||||
| **typescript-expert** | >- | `skills/typescript-expert` |
|
||||
| **ui-ux-pro-max** | "UI/UX design intelligence. 50 styles, 21 palettes, 50 font pairings, 20 charts, 9 stacks (React, Next.js, Vue, Svelte, SwiftUI, React Native, Flutter, Tailwind, shadcn/ui). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app, .html, .tsx, .vue, .svelte. Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient. Integrations: shadcn/ui MCP for component search and examples." | `skills/ui-ux-pro-max` |
|
||||
| **upstash-qstash** | "Upstash QStash expert for serverless message queues, scheduled jobs, and reliable HTTP-based task delivery without managing infrastructure. Use when: qstash, upstash queue, serverless cron, scheduled http, message queue serverless." | `skills/upstash-qstash` |
|
||||
| **using-git-worktrees** | Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with smart directory selection and safety verification | `skills/using-git-worktrees` |
|
||||
| **using-superpowers** | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions | `skills/using-superpowers` |
| **vercel-deployment** | "Expert knowledge for deploying to Vercel with Next.js. Use when: vercel, deploy, deployment, hosting, production." | `skills/vercel-deployment` |
| **vercel-react-best-practices** | React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.js code to ensure optimal performance patterns. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements. | `skills/react-best-practices` |
| **verification-before-completion** | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always | `skills/verification-before-completion` |
| **viral-generator-builder** | "Expert in building shareable generator tools that go viral - name generators, quiz makers, avatar creators, personality tests, and calculator tools. Covers the psychology of sharing, viral mechanics, and building tools people can't resist sharing with friends. Use when: generator tool, quiz maker, name generator, avatar creator, viral tool." | `skills/viral-generator-builder` |
| **voice-agents** | "Voice agents represent the frontier of AI interaction - humans speaking naturally with AI systems. The challenge isn't just speech recognition and synthesis, it's achieving natural conversation flow with sub-800ms latency while handling interruptions, background noise, and emotional nuance. This skill covers two architectures: speech-to-speech (OpenAI Realtime API, lowest latency, most natural) and pipeline (STT→LLM→TTS, more control, easier to debug). Key insight: latency is the constraint. Hu" | `skills/voice-agents` |
| **voice-ai-development** | "Expert in building voice AI applications - from real-time voice agents to voice-enabled apps. Covers OpenAI Realtime API, Vapi for voice agents, Deepgram for transcription, ElevenLabs for synthesis, LiveKit for real-time infrastructure, and WebRTC fundamentals. Knows how to build low-latency, production-ready voice experiences. Use when: voice ai, voice agent, speech to text, text to speech, realtime voice." | `skills/voice-ai-development` |
| **vr-ar** | VR/AR development principles. Comfort, interaction, performance requirements. | `skills/game-development/vr-ar` |
| **vulnerability-scanner** | Advanced vulnerability analysis principles. OWASP 2025, Supply Chain Security, attack surface mapping, risk prioritization. | `skills/vulnerability-scanner` |
| **web-artifacts-builder** | Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use for complex artifacts requiring state management, routing, or shadcn/ui components - not for simple single-file HTML/JSX artifacts. | `skills/web-artifacts-builder` |
| **web-design-guidelines** | Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my site against best practices". | `skills/web-design-guidelines` |
| **web-games** | Web browser game development principles. Framework selection, WebGPU, optimization, PWA. | `skills/game-development/web-games` |
| **web-performance-optimization** | "Optimize website and web application performance including loading speed, Core Web Vitals, bundle size, caching strategies, and runtime performance" | `skills/web-performance-optimization` |
| **webapp-testing** | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing browser screenshots, and viewing browser logs. | `skills/webapp-testing` |
| **workflow-automation** | "Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost money and angry customers. With it, workflows resume exactly where they left off. This skill covers the platforms (n8n, Temporal, Inngest) and patterns (sequential, parallel, orchestrator-worker) that turn brittle scripts into production-grade automation. Key insight: The platforms make different tradeoffs. n8n optimizes for accessibility" | `skills/workflow-automation` |
| **writing-plans** | Use when you have a spec or requirements for a multi-step task, before touching code | `skills/writing-plans` |
| **writing-skills** | Use when creating new skills, editing existing skills, or verifying skills work before deployment | `skills/writing-skills` |
| **xlsx** | "Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work with spreadsheets (.xlsx, .xlsm, .csv, .tsv, etc) for: (1) Creating new spreadsheets with formulas and formatting, (2) Reading or analyzing data, (3) Modifying existing spreadsheets while preserving formulas, (4) Data analysis and visualization in spreadsheets, or (5) Recalculating formulas" | `skills/xlsx-official` |
| **zapier-make-patterns** | "No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code. But no-code doesn't mean no-complexity - these platforms have their own patterns, pitfalls, and breaking points. This skill covers when to use which platform, how to build reliable automations, and when to graduate to code-based solutions. Key insight: Zapier optimizes for simplicity and integrations (7000+ apps), Make optimizes for power " | `skills/zapier-make-patterns` |

We have moved the full skill registry to a dedicated catalog to keep this README clean.

👉 **[View the Complete Skill Catalog (CATALOG.md)](CATALOG.md)**
## Installation

To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, **OpenCode**, or **AdaL**, install this repository into your agent's skills directory:
### Option A: npx (recommended)

```bash
# Universal installation (works with most tools)
# Default: ~/.agent/skills (universal)
npx antigravity-awesome-skills

# Cursor
npx antigravity-awesome-skills --cursor

# Claude Code
npx antigravity-awesome-skills --claude

# Gemini CLI
npx antigravity-awesome-skills --gemini

# Codex CLI
npx antigravity-awesome-skills --codex

# OpenCode (Universal)
npx antigravity-awesome-skills

# Custom path
npx antigravity-awesome-skills --path ./my-skills
```

Run `npx antigravity-awesome-skills --help` for all options. If the target directory already exists and is a git checkout, the installer runs `git pull` to update it instead of cloning; any other existing directory at that path is treated as an error.

> **If you see a 404 error:** the package may not be published to npm yet. Use: `npx github:sickn33/antigravity-awesome-skills`

### Option B: git clone

```bash
# Universal (works with most tools)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills

# Claude Code specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills

# Gemini CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skills

# Codex CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .codex/skills

# Cursor specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills

# OpenCode specific (Universal path)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

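Either option updates the same way: if the target is already a git checkout, pull instead of re-cloning. A minimal sketch of that pattern (the `install_skills` helper name is ours; the repo URL and default path come from the commands above):

```shell
# Idempotent install/update helper mirroring what the npx installer does.
# Both arguments are optional; defaults match the universal path.
install_skills() {
  repo="${1:-https://github.com/sickn33/antigravity-awesome-skills.git}"
  dir="${2:-$HOME/.agent/skills}"
  if [ -d "$dir/.git" ]; then
    # Already installed: fast-forward update, like the installer's `git pull`
    git -C "$dir" pull --ff-only
  else
    git clone "$repo" "$dir"
  fi
}
```

Re-running the helper is safe: the second invocation takes the `git pull` branch and leaves an up-to-date checkout untouched.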
---
Please ensure your skill follows the Antigravity/Claude Code best practices.

## Credits & Sources

We stand on the shoulders of giants.

👉 **[View the Full Attribution Ledger](docs/SOURCES.md)**

Key contributors and sources include:

- **HackTricks**
- **OWASP**
- **Anthropic / OpenAI / Google**
- **The Open Source Community**

This collection would not be possible without the incredible work of the Claude Code community and official sources:

### Official Sources

- **[remotion-dev/skills](https://github.com/remotion-dev/skills)**: Official Remotion skills - Video creation in React with 28 modular rules.
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs official skills - React Best Practices, Web Design Guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skills catalog - Agent skills, Skill Creator, Concise Planning.
- **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Supabase official skills - Postgres Best Practices.

### Community Contributors

- **[rmyndharis/antigravity-skills](https://github.com/rmyndharis/antigravity-skills)**: For the massive contribution of 300+ Enterprise skills and the catalog generation logic.
- **[obra/superpowers](https://github.com/obra/superpowers)**: The original "Superpowers" by Jesse Vincent.
- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: Core Antigravity extensions.
- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure and Backend/Frontend Guidelines.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite & Guide (Source for ~60 new skills).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering and PM toolkit.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A massive list of verified skills for Claude Code.
- **[VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills)**: Curated collection of 61 high-quality skills including official team skills from Sentry, Trail of Bits, Expo, Hugging Face, and a comprehensive context engineering suite (v4.3.0 integration).
- **[zircote/.claude](https://github.com/zircote/.claude)**: Shopify development skill reference.
- **[vibeforge1111/vibeship-spawner-skills](https://github.com/vibeforge1111/vibeship-spawner-skills)**: AI Agents, Integrations, Maker Tools (57 skills, Apache 2.0).
- **[coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills)**: Marketing skills for CRO, copywriting, SEO, paid ads, and growth (23 skills, MIT).
- **[vudovn/antigravity-kit](https://github.com/vudovn/antigravity-kit)**: AI Agent templates with Skills, Agents, and Workflows (33 skills, MIT).
- **[affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code)**: Complete Claude Code configuration collection from Anthropic hackathon winner - skills only (8 skills, MIT).
- **[whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills)**: Practical fp-ts skills for TypeScript – fp-ts-pragmatic, fp-ts-react, fp-ts-errors (v4.4.0).
- **[webzler/agentMemory](https://github.com/webzler/agentMemory)**: Source for the agent-memory-mcp skill.
- **[sstklen/claude-api-cost-optimization](https://github.com/sstklen/claude-api-cost-optimization)**: Save 50-90% on Claude API costs with smart optimization strategies (MIT).

### Inspirations

MIT License. See [LICENSE](LICENSE) for details.

---

## Community

**Keywords**: Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, Agentic Skills, AI Coding Assistant, AI Agent Skills, MCP, MCT, AI Agents, Autonomous Coding, Security Auditing, React Patterns, LLM Tools, AI IDE, Coding AI, AI Pair Programming, Vibe Coding, Agentic Coding, AI Developer Tools.

- [Community Guidelines](docs/COMMUNITY_GUIDELINES.md)
- [Security Policy](docs/SECURITY_GUARDRAILS.md)

---

## GitHub Topics

For repository maintainers, add these topics to maximize discoverability:

```
claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode,
agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp,
ai-developer-tools, ai-pair-programming, vibe-coding, skill, skills, SKILL.md, rules.md, CLAUDE.md, GEMINI.md, CURSOR.md
```

---

## Repo Contributors

<a href="https://github.com/sickn33/antigravity-awesome-skills/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=sickn33/antigravity-awesome-skills" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

We officially thank the following contributors for their help in making this repository awesome!

- [sck_0](https://github.com/sck000)
- [Munir Abbasi](https://github.com/munir-abbasi)
- [sickn33](https://github.com/sickn33)
- [Mohammad Faiz](https://github.com/Mohammad-Faiz-Cloud-Engineer)
- [Đỗ Khắc Gia Khoa](https://github.com/Dokhacgiakhoa)
- [Ianj332](https://github.com/IanJ332)
- [GuppyTheCat](https://github.com/GuppyTheCat)
- [Tiger-Foxx](https://github.com/Tiger-Foxx)
- [arathiesh](https://github.com/arathiesh)
- [1bcMax](https://github.com/1bcMax)
- [ALEKGG1](https://github.com/ALEKGG1)
- [Ahmed Rehan](https://github.com/ar27111994)
- [BenedictKing](https://github.com/BenedictKing)
- [whatiskadudoing](https://github.com/whatiskadudoing)
- [Nguyen Huu Loc](https://github.com/LocNguyenSGU)
- [Owen Wu](https://github.com/yubing744)
- [SuperJMN](https://github.com/SuperJMN)
- [Truong Nguyen](https://github.com/truongnmt)
- [Viktor Ferenczi](https://github.com/viktor-ferenczi)
- [c1c3ru](https://github.com/c1c3ru)
- [ckdwns9121](https://github.com/ckdwns9121)
- [junited31](https://github.com/junited31)
- [liyin2015](https://github.com/liyin2015)
- [krisnasantosa15](https://github.com/KrisnaSantosa15)
- [sstklen](https://github.com/sstklen)
- [taksrules](https://github.com/taksrules)
- [zebbern](https://github.com/zebbern)
- [vuth-dogo](https://github.com/vuth-dogo)
- [mvanhorn](https://github.com/mvanhorn)
- [rookie-ricardo](https://github.com/rookie-ricardo)
- [evandro-miguel](https://github.com/evandro-miguel)
- [raeef1001](https://github.com/raeef1001)
- [devchangjun](https://github.com/devchangjun)

## Star History

[](https://www.star-history.com/#sickn33/antigravity-awesome-skills&type=date&legend=top-left)

---

`SECURITY.md` (new file, +19 lines)

# Security Policy

## Supported Versions

We track the `main` branch.

## Reporting a Vulnerability

**DO NOT** open a public Issue for security exploits.

If you find a security vulnerability (e.g., a skill that bypasses the "Authorized Use Only" check or executes malicious code without warning):

1. Email: `security@antigravity.dev` (Placeholder)
2. Or open a **Private Advisory** on this repository.

## Offensive Skills Policy

Please read our [Security Guardrails](docs/SECURITY_GUARDRAILS.md).
All offensive skills are strictly for **authorized educational and professional use only**.

---

`assets/star-history.png` (new binary file, 50 KiB; not shown)

---

`bin/install.js` (new executable file, +136 lines)

```js
#!/usr/bin/env node

const { spawnSync } = require('child_process');
const path = require('path');
const fs = require('fs');

const REPO = 'https://github.com/sickn33/antigravity-awesome-skills.git';
const HOME = process.env.HOME || process.env.USERPROFILE || '';

function resolveDir(p) {
  if (!p) return null;
  const s = p.replace(/^~($|\/)/, HOME + '$1');
  return path.resolve(s);
}

function parseArgs() {
  const a = process.argv.slice(2);
  let pathArg = null;
  let versionArg = null;
  let tagArg = null;
  let cursor = false, claude = false, gemini = false, codex = false;

  for (let i = 0; i < a.length; i++) {
    if (a[i] === '--help' || a[i] === '-h') return { help: true };
    if (a[i] === '--path' && a[i + 1]) { pathArg = a[++i]; continue; }
    if (a[i] === '--version' && a[i + 1]) { versionArg = a[++i]; continue; }
    if (a[i] === '--tag' && a[i + 1]) { tagArg = a[++i]; continue; }
    if (a[i] === '--cursor') { cursor = true; continue; }
    if (a[i] === '--claude') { claude = true; continue; }
    if (a[i] === '--gemini') { gemini = true; continue; }
    if (a[i] === '--codex') { codex = true; continue; }
    if (a[i] === 'install') continue;
  }

  return { pathArg, versionArg, tagArg, cursor, claude, gemini, codex };
}

function defaultDir(opts) {
  if (opts.pathArg) return resolveDir(opts.pathArg);
  if (opts.cursor) return path.join(HOME, '.cursor', 'skills');
  if (opts.claude) return path.join(HOME, '.claude', 'skills');
  if (opts.gemini) return path.join(HOME, '.gemini', 'skills');
  if (opts.codex) {
    const codexHome = process.env.CODEX_HOME;
    if (codexHome) return path.join(codexHome, 'skills');
    return path.join(HOME, '.codex', 'skills');
  }
  return path.join(HOME, '.agent', 'skills');
}

function printHelp() {
  console.log(`
antigravity-awesome-skills — installer

  npx antigravity-awesome-skills [install] [options]

Clones the skills repo into your agent's skills directory.

Options:
  --cursor         Install to ~/.cursor/skills (Cursor)
  --claude         Install to ~/.claude/skills (Claude Code)
  --gemini         Install to ~/.gemini/skills (Gemini CLI)
  --codex          Install to ~/.codex/skills (Codex CLI)
  --path <dir>     Install to <dir> (default: ~/.agent/skills)
  --version <ver>  After clone, checkout tag v<ver> (e.g. 4.6.0 -> v4.6.0)
  --tag <tag>      After clone, checkout this tag (e.g. v4.6.0)

Examples:
  npx antigravity-awesome-skills
  npx antigravity-awesome-skills --cursor
  npx antigravity-awesome-skills --version 4.6.0
  npx antigravity-awesome-skills --path ./my-skills
`);
}

function run(cmd, args, opts = {}) {
  const r = spawnSync(cmd, args, { stdio: 'inherit', ...opts });
  if (r.status !== 0) process.exit(r.status == null ? 1 : r.status);
}

function main() {
  const opts = parseArgs();
  const { tagArg, versionArg } = opts;

  if (opts.help) {
    printHelp();
    return;
  }

  const target = defaultDir(opts);
  if (!target || !HOME) {
    console.error('Could not resolve home directory. Use --path <absolute-path>.');
    process.exit(1);
  }

  if (fs.existsSync(target)) {
    const gitDir = path.join(target, '.git');
    if (fs.existsSync(gitDir)) {
      console.log('Directory already exists and is a git repo. Updating…');
      process.chdir(target);
      run('git', ['pull']);
      return;
    }
    console.error(`Directory exists and is not a git repo: ${target}`);
    console.error('Remove it or use --path to choose another location.');
    process.exit(1);
  }

  const parent = path.dirname(target);
  if (!fs.existsSync(parent)) {
    try {
      fs.mkdirSync(parent, { recursive: true });
    } catch (e) {
      console.error(`Cannot create parent directory: ${parent}`, e.message);
      process.exit(1);
    }
  }

  if (process.platform === 'win32') {
    run('git', ['-c', 'core.symlinks=true', 'clone', REPO, target]);
  } else {
    run('git', ['clone', REPO, target]);
  }

  const ref = tagArg || (versionArg ? (versionArg.startsWith('v') ? versionArg : `v${versionArg}`) : null);
  if (ref) {
    console.log(`Checking out ${ref}…`);
    process.chdir(target);
    run('git', ['checkout', ref]);
  }

  console.log(`\nInstalled to ${target}`);
  console.log('Pick a bundle in docs/BUNDLES.md and use @skill-name in your AI assistant.');
}

main();
```
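The `--version`/`--tag` handling in `main()` reduces to a single expression: an explicit `--tag` wins, and a bare version like `4.6.0` gains a `v` prefix. Isolated as a function for clarity (the `resolveRef` name is ours; the logic is copied verbatim from the script):

```javascript
// Pick the git ref to checkout after cloning, as bin/install.js does:
// --tag takes precedence; --version is normalized to a v-prefixed tag.
function resolveRef(tagArg, versionArg) {
  return tagArg || (versionArg ? (versionArg.startsWith('v') ? versionArg : `v${versionArg}`) : null);
}

console.log(resolveRef(null, '4.6.0')); // -> v4.6.0
```

When neither flag is given the function returns `null` and the installer stays on the default branch.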
127
data/aliases.json
Normal file
127
data/aliases.json
Normal file
@@ -0,0 +1,127 @@
|
||||
```json
{
  "generatedAt": "2026-02-03T09:20:12.539Z",
  "aliases": {
    "accessibility-compliance-audit": "accessibility-compliance-accessibility-audit",
    "active directory attacks": "active-directory-attacks",
    "agent-orchestration-improve": "agent-orchestration-improve-agent",
    "agent-orchestration-optimize": "agent-orchestration-multi-agent-optimize",
    "api fuzzing for bug bounty": "api-fuzzing-bug-bounty",
    "api-testing-mock": "api-testing-observability-api-mock",
    "templates": "app-builder/templates",
    "application-performance-optimization": "application-performance-performance-optimization",
    "aws penetration testing": "aws-penetration-testing",
    "backend-development-feature": "backend-development-feature-development",
    "brand-guidelines": "brand-guidelines-anthropic",
    "broken authentication testing": "broken-authentication",
    "burp suite web application testing": "burp-suite-testing",
    "c4-architecture": "c4-architecture-c4-architecture",
    "backend-patterns": "cc-skill-backend-patterns",
    "clickhouse-io": "cc-skill-clickhouse-io",
    "coding-standards": "cc-skill-coding-standards",
    "cc-skill-learning": "cc-skill-continuous-learning",
    "frontend-patterns": "cc-skill-frontend-patterns",
    "cc-skill-example": "cc-skill-project-guidelines-example",
    "security-review": "cc-skill-security-review",
    "cicd-automation-automate": "cicd-automation-workflow-automate",
    "claude code guide": "claude-code-guide",
    "d3-viz": "claude-d3js-skill",
    "claude-win11-skill": "claude-win11-speckit-update-skill",
    "cloud penetration testing": "cloud-penetration-testing",
    "code-documentation-explain": "code-documentation-code-explain",
    "code-documentation-generate": "code-documentation-doc-generate",
    "code-refactoring-restore": "code-refactoring-context-restore",
    "code-refactoring-clean": "code-refactoring-refactor-clean",
    "codebase-cleanup-clean": "codebase-cleanup-refactor-clean",
    "comprehensive-review-full": "comprehensive-review-full-review",
    "comprehensive-review-enhance": "comprehensive-review-pr-enhance",
    "context-management-restore": "context-management-context-restore",
    "context-management-save": "context-management-context-save",
    "data-engineering-feature": "data-engineering-data-driven-feature",
    "data-engineering-pipeline": "data-engineering-data-pipeline",
    "database-cloud-optimize": "database-cloud-optimization-cost-optimize",
    "database-migrations-observability": "database-migrations-migration-observability",
    "database-migrations-sql": "database-migrations-sql-migrations",
    "debugging-toolkit-debug": "debugging-toolkit-smart-debug",
    "dependency-management-audit": "dependency-management-deps-audit",
    "deployment-validation-validate": "deployment-validation-config-validate",
    "distributed-debugging-trace": "distributed-debugging-debug-trace",
    "documentation-generation-generate": "documentation-generation-doc-generate",
    "docx": "docx-official",
    "error-debugging-analysis": "error-debugging-error-analysis",
    "error-debugging-review": "error-debugging-multi-agent-review",
    "error-diagnostics-analysis": "error-diagnostics-error-analysis",
    "error-diagnostics-trace": "error-diagnostics-error-trace",
    "error-diagnostics-debug": "error-diagnostics-smart-debug",
    "ethical hacking methodology": "ethical-hacking-methodology",
    "file path traversal testing": "file-path-traversal",
    "finishing-a-branch": "finishing-a-development-branch",
    "framework-migration-migrate": "framework-migration-code-migrate",
    "framework-migration-upgrade": "framework-migration-deps-upgrade",
    "framework-migration-modernize": "framework-migration-legacy-modernize",
    "frontend-mobile-scaffold": "frontend-mobile-development-component-scaffold",
    "frontend-mobile-scan": "frontend-mobile-security-xss-scan",
    "full-stack-feature": "full-stack-orchestration-full-stack-feature",
    "2d-games": "game-development/2d-games",
    "3d-games": "game-development/3d-games",
    "game-art": "game-development/game-art",
    "game-audio": "game-development/game-audio",
    "game-design": "game-development/game-design",
    "mobile-games": "game-development/mobile-games",
    "multiplayer": "game-development/multiplayer",
    "pc-games": "game-development/pc-games",
    "vr-ar": "game-development/vr-ar",
    "web-games": "game-development/web-games",
    "git-pr-workflow": "git-pr-workflows-git-workflow",
    "html injection testing": "html-injection-testing",
    "idor vulnerability testing": "idor-testing",
    "incident-response": "incident-response-incident-response",
    "infinite gratitude": "infinite-gratitude",
    "internal-comms": "internal-comms-anthropic",
    "javascript-typescript-scaffold": "javascript-typescript-typescript-scaffold",
    "linux privilege escalation": "linux-privilege-escalation",
    "linux production shell scripts": "linux-shell-scripting",
    "llm-application-assistant": "llm-application-dev-ai-assistant",
    "llm-application-agent": "llm-application-dev-langchain-agent",
    "llm-application-optimize": "llm-application-dev-prompt-optimize",
    "machine-learning-pipeline": "machine-learning-ops-ml-pipeline",
    "metasploit framework": "metasploit-framework",
    "moodle-external-development": "moodle-external-api-development",
    "multi-platform-apps": "multi-platform-apps-multi-platform",
    "network 101": "network-101",
    "observability-monitoring-setup": "observability-monitoring-monitor-setup",
    "observability-monitoring-implement": "observability-monitoring-slo-implement",
    "obsidian-clipper-creator": "obsidian-clipper-template-creator",
    "pdf": "pdf-official",
    "pentest checklist": "pentest-checklist",
    "pentest commands": "pentest-commands",
    "performance-testing-ai": "performance-testing-review-ai-review",
    "performance-testing-agent": "performance-testing-review-multi-agent-review",
    "supabase-postgres-best-practices": "postgres-best-practices",
    "pptx": "pptx-official",
    "privilege escalation methods": "privilege-escalation-methods",
    "python-development-scaffold": "python-development-python-scaffold",
    "vercel-react-best-practices": "react-best-practices",
    "red team tools and methodology": "red-team-tools",
    "security scanning tools": "scanning-tools",
    "security-compliance-check": "security-compliance-compliance-check",
    "security-scanning-dependencies": "security-scanning-security-dependencies",
    "security-scanning-hardening": "security-scanning-security-hardening",
    "security-scanning-sast": "security-scanning-security-sast",
    "shodan reconnaissance and pentesting": "shodan-reconnaissance",
    "smtp penetration testing": "smtp-penetration-testing",
    "sql injection testing": "sql-injection-testing",
    "sqlmap database penetration testing": "sqlmap-database-pentesting",
    "ssh penetration testing": "ssh-penetration-testing",
    "startup-business-case": "startup-business-analyst-business-case",
    "startup-business-projections": "startup-business-analyst-financial-projections",
    "startup-business-opportunity": "startup-business-analyst-market-opportunity",
    "systems-programming-project": "systems-programming-rust-project",
    "team-collaboration-notes": "team-collaboration-standup-notes",
    "top 100 web vulnerabilities reference": "top-web-vulnerabilities",
    "windows privilege escalation": "windows-privilege-escalation",
    "wireshark network traffic analysis": "wireshark-analysis",
    "wordpress penetration testing": "wordpress-penetration-testing",
    "xlsx": "xlsx-official",
    "cross-site scripting and html injection testing": "xss-html-injection"
  }
}
```
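Consumers can resolve a legacy or shorthand name to its canonical skill directory with a single lookup in this map. A minimal sketch (the `resolveSkill` helper is hypothetical, not shipped in the repo; the two inlined entries are copied from `data/aliases.json`):

```javascript
// Map an alias from data/aliases.json to its canonical skill directory name.
// Unknown names pass through unchanged; the second lookup lowercases the
// input to match the lowercase keys used in the file.
function resolveSkill(name, aliases) {
  return aliases[name] || aliases[name.toLowerCase()] || name;
}

// Two entries copied from data/aliases.json:
const aliases = {
  "xlsx": "xlsx-official",
  "sql injection testing": "sql-injection-testing",
};

console.log(resolveSkill("xlsx", aliases)); // -> xlsx-official
```

In the real installer the map would be read with `JSON.parse(fs.readFileSync('data/aliases.json'))` and the `.aliases` property passed in.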

---

`data/bundles.json` (new file, +425 lines; the listing below is truncated)

{
  "generatedAt": "2026-02-03T09:20:12.539Z",
  "bundles": {
    "core-dev": {
      "description": "Core development skills across languages, frameworks, and backend/frontend fundamentals.",
      "skills": [
        "3d-web-experience",
        "algolia-search",
        "api-design-principles",
        "api-documentation-generator",
        "api-documenter",
        "api-fuzzing-bug-bounty",
        "api-patterns",
        "api-security-best-practices",
        "api-testing-observability-api-mock",
        "app-store-optimization",
        "application-performance-performance-optimization",
        "architecture-patterns",
        "async-python-patterns",
        "autonomous-agents",
        "aws-serverless",
        "azure-functions",
        "backend-architect",
        "backend-dev-guidelines",
        "backend-development-feature-development",
        "backend-security-coder",
        "bullmq-specialist",
        "bun-development",
        "cc-skill-backend-patterns",
        "cc-skill-coding-standards",
        "cc-skill-frontend-patterns",
        "cc-skill-security-review",
        "claude-d3js-skill",
        "code-documentation-doc-generate",
        "context7-auto-research",
        "discord-bot-architect",
        "django-pro",
        "documentation-generation-doc-generate",
        "documentation-templates",
        "dotnet-architect",
        "dotnet-backend-patterns",
        "exa-search",
        "fastapi-pro",
        "fastapi-templates",
        "firebase",
        "firecrawl-scraper",
        "flutter-expert",
        "fp-ts-errors",
        "fp-ts-pragmatic",
        "fp-ts-react",
        "frontend-design",
        "frontend-dev-guidelines",
        "frontend-developer",
        "frontend-mobile-development-component-scaffold",
        "frontend-mobile-security-xss-scan",
        "frontend-security-coder",
        "frontend-slides",
        "game-development/mobile-games",
        "go-concurrency-patterns",
        "golang-pro",
        "graphql",
        "hubspot-integration",
        "hugging-face-jobs",
        "ios-developer",
        "java-pro",
        "javascript-mastery",
        "javascript-pro",
        "javascript-testing-patterns",
        "javascript-typescript-typescript-scaffold",
        "langgraph",
        "launch-strategy",
        "makepad-skills",
        "mcp-builder",
        "memory-safety-patterns",
        "mobile-design",
        "mobile-developer",
        "mobile-security-coder",
        "modern-javascript-patterns",
        "moodle-external-api-development",
        "multi-platform-apps-multi-platform",
        "n8n-code-python",
        "n8n-node-configuration",
        "nextjs-app-router-patterns",
        "nextjs-best-practices",
        "nextjs-supabase-auth",
        "nodejs-backend-patterns",
        "nodejs-best-practices",
        "observe-whatsapp",
        "openapi-spec-generation",
        "php-pro",
        "plaid-fintech",
        "product-manager-toolkit",
        "python-development-python-scaffold",
        "python-packaging",
        "python-patterns",
        "python-performance-optimization",
        "python-pro",
        "python-testing-patterns",
        "react-best-practices",
        "react-modernization",
        "react-native-architecture",
        "react-patterns",
        "react-state-management",
        "react-ui-patterns",
        "reference-builder",
        "remotion-best-practices",
        "ruby-pro",
        "rust-async-patterns",
        "rust-pro",
        "senior-architect",
        "senior-fullstack",
        "shodan-reconnaissance",
        "shopify-apps",
        "shopify-development",
        "slack-bot-builder",
        "stitch-ui-design",
        "swiftui-expert-skill",
        "systems-programming-rust-project",
        "tavily-web",
        "telegram-bot-builder",
        "telegram-mini-app",
        "temporal-python-pro",
        "temporal-python-testing",
        "top-web-vulnerabilities",
        "trigger-dev",
        "twilio-communications",
        "typescript-advanced-types",
        "typescript-expert",
        "typescript-pro",
        "ui-ux-pro-max",
        "using-neon",
        "uv-package-manager",
        "viral-generator-builder",
        "voice-agents",
        "voice-ai-development",
        "web-artifacts-builder",
        "webapp-testing"
      ]
    },
    "security-core": {
      "description": "Security, privacy, and compliance essentials.",
      "skills": [
        "accessibility-compliance-accessibility-audit",
        "api-fuzzing-bug-bounty",
        "api-security-best-practices",
        "attack-tree-construction",
        "auth-implementation-patterns",
        "aws-penetration-testing",
        "backend-security-coder",
        "broken-authentication",
        "burp-suite-testing",
        "cc-skill-security-review",
        "cicd-automation-workflow-automate",
        "clerk-auth",
        "cloud-architect",
        "cloud-penetration-testing",
        "code-review-checklist",
        "code-reviewer",
        "codebase-cleanup-deps-audit",
        "computer-use-agents",
        "database-admin",
        "dependency-management-deps-audit",
        "deployment-engineer",
        "deployment-pipeline-design",
        "design-orchestration",
        "docker-expert",
        "ethical-hacking-methodology",
        "find-bugs",
        "firebase",
        "firmware-analyst",
        "form-cro",
        "framework-migration-deps-upgrade",
        "frontend-mobile-security-xss-scan",
        "frontend-security-coder",
        "gdpr-data-handling",
        "graphql-architect",
        "hugging-face-jobs",
        "hybrid-cloud-architect",
        "idor-testing",
        "k8s-manifest-generator",
        "k8s-security-policies",
        "kubernetes-architect",
        "legal-advisor",
        "linkerd-patterns",
        "loki-mode",
        "malware-analyst",
        "metasploit-framework",
        "mobile-security-coder",
        "multi-agent-brainstorming",
        "network-engineer",
        "nextjs-supabase-auth",
        "nodejs-best-practices",
        "notebooklm",
        "openapi-spec-generation",
        "payment-integration",
        "pci-compliance",
        "pentest-checklist",
        "plaid-fintech",
        "quant-analyst",
        "red-team-tools",
        "reverse-engineer",
        "risk-manager",
        "risk-metrics-calculation",
        "sast-configuration",
        "scanning-tools",
        "secrets-management",
        "security-auditor",
        "security-bluebook-builder",
        "security-compliance-compliance-check",
        "security-requirement-extraction",
        "security-scanning-security-dependencies",
        "security-scanning-security-hardening",
        "security-scanning-security-sast",
        "service-mesh-expert",
        "smtp-penetration-testing",
        "solidity-security",
        "ssh-penetration-testing",
        "stride-analysis-patterns",
        "stripe-integration",
        "terraform-specialist",
        "threat-mitigation-mapping",
        "threat-modeling-expert",
        "top-web-vulnerabilities",
        "twilio-communications",
        "ui-visual-validator",
        "using-neon",
        "varlock-claude-skill",
        "vulnerability-scanner",
        "web-design-guidelines",
        "wordpress-penetration-testing",
        "xss-html-injection"
      ]
    },
    "k8s-core": {
      "description": "Kubernetes and service mesh essentials.",
      "skills": [
        "backend-architect",
        "devops-troubleshooter",
        "gitops-workflow",
        "helm-chart-scaffolding",
        "istio-traffic-management",
        "k8s-manifest-generator",
        "k8s-security-policies",
        "kubernetes-architect",
        "legal-advisor",
        "linkerd-patterns",
        "microservices-patterns",
        "moodle-external-api-development",
        "mtls-configuration",
        "network-engineer",
        "observability-monitoring-slo-implement",
        "service-mesh-expert",
        "service-mesh-observability",
        "slo-implementation"
      ]
    },
    "data-core": {
      "description": "Data engineering and analytics foundations.",
      "skills": [
        "airflow-dag-patterns",
        "analytics-tracking",
        "blockrun",
        "business-analyst",
        "cc-skill-backend-patterns",
        "cc-skill-clickhouse-io",
        "claude-d3js-skill",
        "content-marketer",
        "data-engineer",
        "data-engineering-data-driven-feature",
        "data-engineering-data-pipeline",
        "data-quality-frameworks",
        "data-scientist",
        "data-storytelling",
        "database-admin",
        "database-architect",
        "database-cloud-optimization-cost-optimize",
        "database-design",
        "database-migration",
        "database-migrations-migration-observability",
        "database-migrations-sql-migrations",
        "database-optimizer",
        "dbt-transformation-patterns",
        "firebase",
        "fp-ts-react",
        "frontend-dev-guidelines",
        "gdpr-data-handling",
        "graphql",
        "hugging-face-jobs",
        "hybrid-cloud-networking",
        "idor-testing",
        "ios-developer",
        "kpi-dashboard-design",
        "legal-advisor",
        "loki-mode",
        "ml-pipeline-workflow",
        "moodle-external-api-development",
        "neon-postgres",
        "nextjs-app-router-patterns",
        "nextjs-best-practices",
        "nodejs-backend-patterns",
        "pci-compliance",
        "php-pro",
        "postgres-best-practices",
        "postgresql",
        "prisma-expert",
        "programmatic-seo",
        "quant-analyst",
        "react-best-practices",
        "react-ui-patterns",
        "scala-pro",
        "schema-markup",
        "segment-cdp",
        "senior-architect",
        "seo-audit",
        "spark-optimization",
        "sql-injection-testing",
        "sql-optimization-patterns",
        "sql-pro",
        "sqlmap-database-pentesting",
        "unity-ecs-patterns",
        "using-neon",
        "vector-database-engineer",
        "xlsx-official"
      ]
    },
    "ops-core": {
      "description": "Operations, observability, and delivery pipelines.",
      "skills": [
        "agent-evaluation",
        "airflow-dag-patterns",
        "api-testing-observability-api-mock",
        "application-performance-performance-optimization",
        "aws-serverless",
        "backend-architect",
        "backend-development-feature-development",
        "c4-container",
        "cicd-automation-workflow-automate",
        "code-review-ai-ai-review",
        "data-engineer",
        "data-engineering-data-pipeline",
        "database-migration",
        "database-migrations-migration-observability",
        "database-optimizer",
        "deployment-engineer",
        "deployment-pipeline-design",
        "deployment-procedures",
        "deployment-validation-config-validate",
        "devops-troubleshooter",
        "distributed-debugging-debug-trace",
        "distributed-tracing",
        "django-pro",
        "docker-expert",
        "e2e-testing-patterns",
        "error-debugging-error-analysis",
        "error-debugging-error-trace",
        "error-diagnostics-error-analysis",
        "error-diagnostics-error-trace",
        "expo-deployment",
        "flutter-expert",
        "game-development/game-art",
        "git-pr-workflows-git-workflow",
        "gitlab-ci-patterns",
        "gitops-workflow",
        "grafana-dashboards",
        "incident-responder",
        "incident-response-incident-response",
        "incident-response-smart-fix",
        "incident-runbook-templates",
        "internal-comms-anthropic",
        "internal-comms-community",
        "kpi-dashboard-design",
        "kubernetes-architect",
        "langfuse",
        "llm-app-patterns",
        "loki-mode",
        "machine-learning-ops-ml-pipeline",
        "malware-analyst",
        "ml-engineer",
        "ml-pipeline-workflow",
        "mlops-engineer",
        "observability-engineer",
        "observability-monitoring-monitor-setup",
        "observability-monitoring-slo-implement",
        "performance-engineer",
        "performance-testing-review-ai-review",
        "postmortem-writing",
        "prometheus-configuration",
        "risk-metrics-calculation",
        "security-auditor",
        "server-management",
        "service-mesh-expert",
        "service-mesh-observability",
        "slo-implementation",
        "temporal-python-pro",
        "terraform-specialist",
        "unity-developer",
        "vercel-deploy-claimable",
        "vercel-deployment",
        "voice-agents"
      ]
    }
  },
  "common": [
    "bash-pro",
    "python-pro",
    "javascript-pro",
    "typescript-pro",
    "golang-pro",
    "rust-pro",
    "java-pro",
    "frontend-developer",
    "backend-architect",
    "nodejs-backend-patterns",
    "fastapi-pro",
    "api-design-principles",
    "sql-pro",
    "database-architect",
    "kubernetes-architect",
    "terraform-specialist",
    "observability-engineer",
    "security-auditor",
    "sast-configuration",
    "gitops-workflow"
  ]
}
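The bundle data above is plain JSON, so any tool in the repo can consume it. A minimal sketch of listing one bundle's skills — the sample file below merely mirrors the `data/bundles.json` structure so the commands run standalone (point the reader at the real file instead), and `python3` is used only as a portable JSON reader:

```shell
# Write a small sample that mirrors the data/bundles.json structure shown
# above; in the repository you would read data/bundles.json directly.
cat > bundles-sample.json <<'EOF'
{
  "bundles": {
    "k8s-core": {
      "description": "Kubernetes and service mesh essentials.",
      "skills": ["gitops-workflow", "helm-chart-scaffolding", "slo-implementation"]
    }
  }
}
EOF

# Print one bundle's skills, one per line.
python3 -c '
import json
with open("bundles-sample.json") as fh:
    data = json.load(fh)
for skill in data["bundles"]["k8s-core"]["skills"]:
    print(skill)
'
```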
15243  data/catalog.json  Normal file
File diff suppressed because it is too large.
27  data/package-lock.json  generated  Normal file
@@ -0,0 +1,27 @@
{
  "name": "antigravity-awesome-skills",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "dependencies": {
        "yaml": "^2.8.2"
      }
    },
    "node_modules/yaml": {
      "version": "2.8.2",
      "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
      "integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
      "license": "ISC",
      "bin": {
        "yaml": "bin.mjs"
      },
      "engines": {
        "node": ">= 14.6"
      },
      "funding": {
        "url": "https://github.com/sponsors/eemeli"
      }
    }
  }
}
7  data/package.json  Normal file
@@ -0,0 +1,7 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.6.0",
  "dependencies": {
    "yaml": "^2.8.2"
  }
}
5033  data/skills_index.json  Normal file
File diff suppressed because it is too large.
396  docs/BUNDLES.md  Normal file
@@ -0,0 +1,396 @@
# 📦 Antigravity Skill Bundles

> **Curated collections of skills organized by role and expertise level.** Don't know where to start? Pick a bundle below to get a ready-made set of skills for your role.

## 🚀 Quick Start

1. **Install the repository:**

   ```bash
   npx antigravity-awesome-skills
   # or clone manually
   git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
   ```

2. **Choose your bundle** from the list below based on your role or interests.

3. **Use skills** by referencing them in your AI assistant:
   - Claude Code: `>> @skill-name help me...`
   - Cursor: `@skill-name` in chat
   - Gemini CLI: `Use skill-name...`

---

## 🎯 Essentials & Core

### 🚀 The "Essentials" Starter Pack

_For everyone. Install these first._

- [`concise-planning`](../skills/concise-planning/): Always start with a plan.
- [`lint-and-validate`](../skills/lint-and-validate/): Keep your code clean automatically.
- [`git-pushing`](../skills/git-pushing/): Save your work safely.
- [`kaizen`](../skills/kaizen/): Continuous improvement mindset.
- [`systematic-debugging`](../skills/systematic-debugging/): Debug like a pro.

---

## 🛡️ Security & Compliance

### 🛡️ The "Security Engineer" Pack

_For pentesting, auditing, and hardening._

- [`ethical-hacking-methodology`](../skills/ethical-hacking-methodology/): The Bible of ethical hacking.
- [`burp-suite-testing`](../skills/burp-suite-testing/): Web vulnerability scanning.
- [`top-web-vulnerabilities`](../skills/top-web-vulnerabilities/): OWASP-aligned vulnerability taxonomy.
- [`linux-privilege-escalation`](../skills/linux-privilege-escalation/): Advanced Linux security assessment.
- [`cloud-penetration-testing`](../skills/cloud-penetration-testing/): AWS/Azure/GCP security.
- [`security-auditor`](../skills/security-auditor/): Comprehensive security audits.
- [`vulnerability-scanner`](../skills/vulnerability-scanner/): Advanced vulnerability analysis.

### 🔐 The "Security Developer" Pack

_For building secure applications._

- [`api-security-best-practices`](../skills/api-security-best-practices/): Secure API design patterns.
- [`auth-implementation-patterns`](../skills/auth-implementation-patterns/): JWT, OAuth2, session management.
- [`backend-security-coder`](../skills/backend-security-coder/): Secure backend coding practices.
- [`frontend-security-coder`](../skills/frontend-security-coder/): XSS prevention and client-side security.
- [`cc-skill-security-review`](../skills/cc-skill-security-review/): Security checklist for features.
- [`pci-compliance`](../skills/pci-compliance/): Payment card security standards.

---

## 🌐 Web Development

### 🌐 The "Web Wizard" Pack

_For building modern, high-performance web apps._

- [`frontend-design`](../skills/frontend-design/): UI guidelines and aesthetics.
- [`react-best-practices`](../skills/react-best-practices/): React & Next.js performance optimization.
- [`react-patterns`](../skills/react-patterns/): Modern React patterns and principles.
- [`nextjs-best-practices`](../skills/nextjs-best-practices/): Next.js App Router patterns.
- [`tailwind-patterns`](../skills/tailwind-patterns/): Tailwind CSS v4 styling superpowers.
- [`form-cro`](../skills/form-cro/): Optimize your forms for conversion.
- [`seo-audit`](../skills/seo-audit/): Get found on Google.

### 🖌️ The "Web Designer" Pack

_For pixel-perfect experiences._

- [`ui-ux-pro-max`](../skills/ui-ux-pro-max/): Premium design systems and tokens.
- [`frontend-design`](../skills/frontend-design/): The base layer of aesthetics.
- [`3d-web-experience`](../skills/3d-web-experience/): Three.js & React Three Fiber magic.
- [`canvas-design`](../skills/canvas-design/): Static visuals and posters.
- [`mobile-design`](../skills/mobile-design/): Mobile-first design principles.
- [`scroll-experience`](../skills/scroll-experience/): Immersive scroll-driven experiences.

### ⚡ The "Full-Stack Developer" Pack

_For end-to-end web application development._

- [`senior-fullstack`](../skills/senior-fullstack/): Complete fullstack development guide.
- [`frontend-developer`](../skills/frontend-developer/): React 19+ and Next.js 15+ expertise.
- [`backend-dev-guidelines`](../skills/backend-dev-guidelines/): Node.js/Express/TypeScript patterns.
- [`api-patterns`](../skills/api-patterns/): REST vs GraphQL vs tRPC selection.
- [`database-design`](../skills/database-design/): Schema design and ORM selection.
- [`stripe-integration`](../skills/stripe-integration/): Payments and subscriptions.

---

## 🤖 AI & Agents

### 🤖 The "Agent Architect" Pack

_For building AI systems and autonomous agents._

- [`agent-evaluation`](../skills/agent-evaluation/): Test and benchmark your agents.
- [`langgraph`](../skills/langgraph/): Build stateful agent workflows.
- [`mcp-builder`](../skills/mcp-builder/): Create your own MCP tools.
- [`prompt-engineering`](../skills/prompt-engineering/): Master the art of talking to LLMs.
- [`ai-agents-architect`](../skills/ai-agents-architect/): Design autonomous AI agents.
- [`rag-engineer`](../skills/rag-engineer/): Build RAG systems with vector search.

### 🧠 The "LLM Application Developer" Pack

_For building production LLM applications._

- [`llm-app-patterns`](../skills/llm-app-patterns/): Production-ready LLM patterns.
- [`rag-implementation`](../skills/rag-implementation/): Retrieval-Augmented Generation.
- [`prompt-caching`](../skills/prompt-caching/): Cache strategies for LLM prompts.
- [`context-window-management`](../skills/context-window-management/): Manage LLM context efficiently.
- [`langfuse`](../skills/langfuse/): LLM observability and tracing.

---

## 🎮 Game Development

### 🎮 The "Indie Game Dev" Pack

_For building games with AI assistants._

- [`game-development/game-design`](../skills/game-development/game-design/): Mechanics and loops.
- [`game-development/2d-games`](../skills/game-development/2d-games/): Sprites and physics.
- [`game-development/3d-games`](../skills/game-development/3d-games/): Models and shaders.
- [`unity-developer`](../skills/unity-developer/): Unity 6 LTS development.
- [`godot-gdscript-patterns`](../skills/godot-gdscript-patterns/): Godot 4 GDScript patterns.
- [`algorithmic-art`](../skills/algorithmic-art/): Generate assets with code.

---

## 🐍 Backend & Languages

### 🐍 The "Python Pro" Pack

_For backend heavyweights and data scientists._

- [`python-pro`](../skills/python-pro/): Master Python 3.12+ with modern features.
- [`python-patterns`](../skills/python-patterns/): Idiomatic Python code.
- [`fastapi-pro`](../skills/fastapi-pro/): High-performance async APIs.
- [`fastapi-templates`](../skills/fastapi-templates/): Production-ready FastAPI projects.
- [`django-pro`](../skills/django-pro/): The batteries-included framework.
- [`python-testing-patterns`](../skills/python-testing-patterns/): Comprehensive testing with pytest.
- [`async-python-patterns`](../skills/async-python-patterns/): Python asyncio mastery.

### 🟦 The "TypeScript & JavaScript" Pack

_For modern web development._

- [`typescript-expert`](../skills/typescript-expert/): TypeScript mastery and advanced types.
- [`javascript-pro`](../skills/javascript-pro/): Modern JavaScript with ES6+.
- [`react-best-practices`](../skills/react-best-practices/): React performance optimization.
- [`nodejs-best-practices`](../skills/nodejs-best-practices/): Node.js development principles.
- [`nextjs-app-router-patterns`](../skills/nextjs-app-router-patterns/): Next.js 14+ App Router.

### 🦀 The "Systems Programming" Pack

_For low-level and performance-critical code._

- [`rust-pro`](../skills/rust-pro/): Rust 1.75+ with async patterns.
- [`go-concurrency-patterns`](../skills/go-concurrency-patterns/): Go concurrency mastery.
- [`golang-pro`](../skills/golang-pro/): Go development expertise.
- [`memory-safety-patterns`](../skills/memory-safety-patterns/): Memory-safe programming.
- [`cpp-pro`](../skills/cpp-pro/): Modern C++ development.

---

## 🦄 Product & Business

### 🦄 The "Startup Founder" Pack

_For building products, not just code._

- [`product-manager-toolkit`](../skills/product-manager-toolkit/): RICE prioritization, PRD templates.
- [`competitive-landscape`](../skills/competitive-landscape/): Competitor analysis.
- [`competitor-alternatives`](../skills/competitor-alternatives/): Create comparison pages.
- [`launch-strategy`](../skills/launch-strategy/): Product launch planning.
- [`copywriting`](../skills/copywriting/): Marketing copy that converts.
- [`stripe-integration`](../skills/stripe-integration/): Get paid from day one.

### 📊 The "Business Analyst" Pack

_For data-driven decision making._

- [`business-analyst`](../skills/business-analyst/): AI-powered analytics and KPIs.
- [`startup-metrics-framework`](../skills/startup-metrics-framework/): SaaS metrics and unit economics.
- [`startup-financial-modeling`](../skills/startup-financial-modeling/): 3-5 year financial projections.
- [`market-sizing-analysis`](../skills/market-sizing-analysis/): TAM/SAM/SOM calculations.
- [`kpi-dashboard-design`](../skills/kpi-dashboard-design/): Effective KPI dashboards.

### 📈 The "Marketing & Growth" Pack

_For driving user acquisition and retention._

- [`content-creator`](../skills/content-creator/): SEO-optimized marketing content.
- [`seo-audit`](../skills/seo-audit/): Technical SEO health checks.
- [`programmatic-seo`](../skills/programmatic-seo/): Create pages at scale.
- [`analytics-tracking`](../skills/analytics-tracking/): Set up GA4/PostHog correctly.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated learning experiments.
- [`email-sequence`](../skills/email-sequence/): Automated email campaigns.

---

## 🌧️ DevOps & Infrastructure

### 🌧️ The "DevOps & Cloud" Pack

_For infrastructure and scaling._

- [`docker-expert`](../skills/docker-expert/): Master containers and multi-stage builds.
- [`aws-serverless`](../skills/aws-serverless/): Serverless on AWS (Lambda, DynamoDB).
- [`kubernetes-architect`](../skills/kubernetes-architect/): K8s architecture and GitOps.
- [`terraform-specialist`](../skills/terraform-specialist/): Infrastructure as Code mastery.
- [`environment-setup-guide`](../skills/environment-setup-guide/): Standardization for teams.
- [`deployment-procedures`](../skills/deployment-procedures/): Safe rollout strategies.
- [`bash-linux`](../skills/bash-linux/): Terminal wizardry.

### 📊 The "Observability & Monitoring" Pack

_For production reliability._

- [`observability-engineer`](../skills/observability-engineer/): Comprehensive monitoring systems.
- [`distributed-tracing`](../skills/distributed-tracing/): Track requests across microservices.
- [`slo-implementation`](../skills/slo-implementation/): Service Level Objectives.
- [`incident-responder`](../skills/incident-responder/): Rapid incident response.
- [`postmortem-writing`](../skills/postmortem-writing/): Blameless postmortems.
- [`performance-engineer`](../skills/performance-engineer/): Application performance optimization.

---

## 📊 Data & Analytics

### 📊 The "Data & Analytics" Pack

_For making sense of the numbers._

- [`analytics-tracking`](../skills/analytics-tracking/): Set up GA4/PostHog correctly.
- [`claude-d3js-skill`](../skills/claude-d3js-skill/): Beautiful custom visualizations with D3.js.
- [`sql-pro`](../skills/sql-pro/): Modern SQL with cloud-native databases.
- [`postgres-best-practices`](../skills/postgres-best-practices/): Postgres optimization.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated learning.
- [`database-architect`](../skills/database-architect/): Database design from scratch.

### 🔄 The "Data Engineering" Pack

_For building data pipelines._

- [`data-engineer`](../skills/data-engineer/): Data pipeline architecture.
- [`airflow-dag-patterns`](../skills/airflow-dag-patterns/): Apache Airflow DAGs.
- [`dbt-transformation-patterns`](../skills/dbt-transformation-patterns/): Analytics engineering.
- [`vector-database-engineer`](../skills/vector-database-engineer/): Vector databases for RAG.
- [`embedding-strategies`](../skills/embedding-strategies/): Embedding model selection.

---

## 🎨 Creative & Content

### 🎨 The "Creative Director" Pack

_For visuals, content, and branding._

- [`canvas-design`](../skills/canvas-design/): Generate posters and diagrams.
- [`frontend-design`](../skills/frontend-design/): UI aesthetics.
- [`content-creator`](../skills/content-creator/): SEO-optimized blog posts.
- [`copy-editing`](../skills/copy-editing/): Polish your prose.
- [`algorithmic-art`](../skills/algorithmic-art/): Code-generated masterpieces.
- [`interactive-portfolio`](../skills/interactive-portfolio/): Portfolios that land jobs.

---

## 🐞 Quality Assurance

### 🐞 The "QA & Testing" Pack

_For breaking things before users do._

- [`test-driven-development`](../skills/test-driven-development/): Red, Green, Refactor.
- [`systematic-debugging`](../skills/systematic-debugging/): Debug like Sherlock Holmes.
- [`browser-automation`](../skills/browser-automation/): End-to-end testing with Playwright.
- [`e2e-testing-patterns`](../skills/e2e-testing-patterns/): Reliable E2E test suites.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated experiments.
- [`code-review-checklist`](../skills/code-review-checklist/): Catch bugs in PRs.
- [`test-fixing`](../skills/test-fixing/): Fix failing tests systematically.

---

## 🔧 Specialized Packs

### 📱 The "Mobile Developer" Pack

_For iOS, Android, and cross-platform apps._

- [`mobile-developer`](../skills/mobile-developer/): Cross-platform mobile development.
- [`react-native-architecture`](../skills/react-native-architecture/): React Native with Expo.
- [`flutter-expert`](../skills/flutter-expert/): Flutter multi-platform apps.
- [`ios-developer`](../skills/ios-developer/): iOS development with Swift.
- [`app-store-optimization`](../skills/app-store-optimization/): ASO for App Store and Play Store.

### 🔗 The "Integration & APIs" Pack

_For connecting services and building integrations._

- [`stripe-integration`](../skills/stripe-integration/): Payments and subscriptions.
- [`twilio-communications`](../skills/twilio-communications/): SMS, voice, WhatsApp.
- [`hubspot-integration`](../skills/hubspot-integration/): CRM integration.
- [`plaid-fintech`](../skills/plaid-fintech/): Bank account linking and ACH.
- [`algolia-search`](../skills/algolia-search/): Search implementation.

### 🎯 The "Architecture & Design" Pack

_For system design and technical decisions._

- [`senior-architect`](../skills/senior-architect/): Comprehensive software architecture.
- [`architecture-patterns`](../skills/architecture-patterns/): Clean Architecture, DDD, Hexagonal.
- [`microservices-patterns`](../skills/microservices-patterns/): Microservices architecture.
- [`event-sourcing-architect`](../skills/event-sourcing-architect/): Event sourcing and CQRS.
- [`architecture-decision-records`](../skills/architecture-decision-records/): Document technical decisions.

---

## 📚 How to Use Bundles

### Installation

1. **Clone the repository:**

   ```bash
   git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
   ```

2. **Or use the installer:**

   ```bash
   npx antigravity-awesome-skills
   ```

### Using Skills

Once installed, reference skills in your AI assistant:

- **Claude Code**: `>> @skill-name help me...`
- **Cursor**: `@skill-name` in chat
- **Gemini CLI**: `Use skill-name...`

### Customizing Bundles

You can create your own bundle by:

1. Copying skill folders to your `.agent/skills/` directory
2. Or referencing multiple skills in a single conversation
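The first option above can be sketched as a few shell commands — a hypothetical example where the skill names and the clone location (`SRC`) are illustrative stand-ins for your own choices:

```shell
# Assemble a personal bundle by copying only the skill folders you want.
# SRC stands in for wherever you cloned this repository; the mkdir line
# below fakes a clone so the snippet runs on its own.
SRC=antigravity-awesome-skills/skills
mkdir -p "$SRC/python-pro" "$SRC/fastapi-pro"

mkdir -p .agent/skills
for skill in python-pro fastapi-pro; do
  cp -r "$SRC/$skill" .agent/skills/
done

ls .agent/skills
```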

---

## 🎓 Learning Paths

### Beginner → Intermediate → Advanced

**Web Development:**

1. Start: `Essentials` → `Web Wizard`
2. Grow: `Full-Stack Developer` → `Architecture & Design`
3. Master: `Observability & Monitoring` → `Security Developer`

**AI/ML:**

1. Start: `Essentials` → `Agent Architect`
2. Grow: `LLM Application Developer` → `Data Engineering`
3. Master: Advanced RAG and agent orchestration

**Security:**

1. Start: `Essentials` → `Security Developer`
2. Grow: `Security Engineer` → Advanced pentesting
3. Master: Red team tactics and threat modeling

---

## 🤝 Contributing

Found a skill that should be in a bundle? Or want to create a new bundle? [Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues) or submit a PR!

---

## 📖 Related Documentation

- [Getting Started Guide](GETTING_STARTED.md)
- [Full Skill Catalog](../CATALOG.md)
- [Contributing Guide](../CONTRIBUTING.md)

---

_Last updated: January 2026 | Total Skills: 560+ | Total Bundles: 20+_
38 docs/CI_DRIFT_FIX.md Normal file
@@ -0,0 +1,38 @@
# CI Drift Fix Guide

**Problem**: The failing job is caused by uncommitted changes detected in `README.md`, `skills_index.json`, or catalog files after the update scripts run.

**Error**:

```
❌ Detected uncommitted changes produced by registry/readme/catalog scripts.
```

**Cause**:
Scripts like `scripts/generate_index.py`, `scripts/update_readme.py`, and `scripts/build-catalog.js` modify `README.md`, `skills_index.json`, `data/catalog.json`, `data/bundles.json`, `data/aliases.json`, and `CATALOG.md`. The workflow expects these files to have no changes after the scripts run. Any difference means the committed repo is out of sync with what the generation scripts produce.

**How to Fix (DO THIS EVERY TIME):**

1. Run the **FULL Validation Chain** locally:

```bash
npm run chain
npm run catalog
```

2. Check for changes:

```bash
git status
git diff
```

3. Commit and push any updates:

```bash
git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md
git commit -m "chore: sync generated registry files"
git push
```
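The check the workflow performs amounts to asking whether `git status --porcelain` reports any of the generated files. A minimal sketch of that logic follows; the function name is illustrative, and the file list simply mirrors the artifacts named above rather than the workflow's actual code:

```typescript
// Decide which generated files drifted, given the output of `git status --porcelain`.
// Each porcelain line is "XY <path>": a two-character status code, a space, then the path.
const GENERATED_FILES = [
  "README.md",
  "skills_index.json",
  "data/catalog.json",
  "data/bundles.json",
  "data/aliases.json",
  "CATALOG.md",
];

function driftedFiles(porcelainOutput: string): string[] {
  return porcelainOutput
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => line.slice(3)) // drop the two-character status code and the space
    .filter((path) => GENERATED_FILES.includes(path));
}
```

If this returns a non-empty list after the scripts run, CI will keep failing until those files are committed.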

**Summary**:
Always commit and push all changes produced by the registry, readme, and catalog scripts. This keeps the CI workflow passing by ensuring the repository and generated files are in sync.
33 docs/COMMUNITY_GUIDELINES.md Normal file
@@ -0,0 +1,33 @@
# Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.1.

[homepage]: https://www.contributor-covenant.org
782 docs/EXAMPLES.md
@@ -1,760 +1,56 @@
# 💡 Real-World Examples - See Skills in Action
# 🧪 Real-World Examples ("The Antigravity Cookbook")

**Learn by example!** This guide shows you real conversations using different skills.
Skills are powerful on their own, but unstoppable when combined.
Here are three common scenarios and how to solve them using this repository.

## 🥘 Recipe 1: The "Legacy Code Audit"

_Scenario: You just inherited a messy 5-year-old Node.js repo. You need to fix it safely._

**Skills Used:**

1. `concise-planning` (To map the chaos)
2. `lint-and-validate` (To find the bugs)
3. `security-audit` (To find the holes)

**The Workflow:**

1. **Plan**: "Agent, use `concise-planning` to create a checklist for refactoring `src/legacy-api.js`."
2. **Audit**: "Run `security-audit` on the `package.json` to find vulnerable dependencies."
3. **Fix**: "Use `lint-and-validate` rules to auto-fix the formatting issues in `src/`."

---

## Example 1: Designing a Feature with @brainstorming
## 🥘 Recipe 2: The "Modern Web App"

### The Scenario
You want to add a "dark mode" feature to your app, but you're not sure how to approach it.
_Scenario: You need to build a high-performance Landing Page in 2 hours._

### The Conversation
**Skills Used:**

```
You: @brainstorming I want to add dark mode to my React app
1. `frontend-design` (For aesthetics)
2. `react-patterns` (For structure)
3. `tailwind-mastery` (For speed)

AI: Great! Let me help you think through this. First, let me check your
current project structure...

[AI examines your files]

I see you're using React with Tailwind CSS. Now, let's understand
your requirements:

What's your primary goal for dark mode?
a) User preference (toggle in settings)
b) System preference (follow OS setting)
c) Both (with manual override)
**The Workflow:**

You: c) Both

AI: Perfect! That's the best user experience. Next question:

Where should the dark mode toggle be located?
a) In the main navigation
b) In a settings page
c) In a dropdown menu
d) Multiple locations

You: a) In the main navigation

AI: Good choice for accessibility. One more question:

Should dark mode preference persist across sessions?
a) Yes, save to localStorage
b) Yes, save to user account (requires backend)
c) No, reset each session

You: a) Yes, save to localStorage

AI: Excellent! Now I have a clear picture. Let me present the design...

## Dark Mode Implementation Design

### Architecture Overview
We'll use React Context for state management, Tailwind's dark mode
utilities, and localStorage for persistence. The system will:
1. Check OS preference on first visit
2. Allow manual override via toggle
3. Persist preference in localStorage

Does this approach look good so far?

You: Yes, continue

AI: [Continues with detailed design sections...]

[After design is complete]

I've saved the design to docs/plans/2026-01-20-dark-mode-design.md

Ready to set up for implementation?

You: Yes

AI: I'll create an isolated workspace using git worktrees and write an
implementation plan. One moment...
```
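The resolution order in the design above (manual override first, then the OS preference, persisted in localStorage) can be sketched as a pure function. Names here are illustrative, not taken from the saved plan:

```typescript
type Theme = "light" | "dark";

// stored: the value read from localStorage (null on a first visit);
// systemPrefersDark: e.g. matchMedia("(prefers-color-scheme: dark)").matches
function resolveTheme(stored: string | null, systemPrefersDark: boolean): Theme {
  if (stored === "light" || stored === "dark") return stored; // manual override wins
  return systemPrefersDark ? "dark" : "light"; // otherwise follow the OS
}
```

A toggle handler would then write the chosen value to localStorage and re-run this resolution, so the manual choice survives reloads while first-time visitors still get the OS default.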

### What Happened
- ✅ AI asked focused questions one at a time
- ✅ Explored your specific context
- ✅ Presented design in sections
- ✅ Saved documentation
- ✅ Prepared for implementation
1. **Design**: "Use `frontend-design` to generate a color palette and typography for a 'Cyberpunk Coffee Shop'."
2. **Scaffold**: "Initialize a Vite project. Then apply `react-patterns` to create the 'Hero' component."
3. **Style**: "Use `tailwind-mastery` to make the buttons glassmorphic and responsive."

---

## Example 2: Debugging with @systematic-debugging
## 🥘 Recipe 3: The "Agent Architect"

### The Scenario
Your tests are failing, but you're not sure why.
_Scenario: You want to build a custom AI agent that can verify its own code._

### The Conversation
**Skills Used:**

```
You: @systematic-debugging My tests are failing with "Cannot read property
'map' of undefined"
1. `mcp-builder` (To build tools)
2. `agent-evaluation` (To test reliability)
3. `prompt-engineering` (To refine instructions)

AI: Let me help you debug this systematically. First, I'll gather information:

1. Running the failing test to see the full error...

[AI runs: npm test]

I see the error occurs in UserList.test.tsx at line 45. Let me examine
the test file and the component...

[AI reads both files]

## Hypothesis
The test is failing because `users` prop is undefined when the component
renders. This suggests the mock data isn't being passed correctly.

## Verification
Let me check the test setup...

[AI examines test]

Found it! The test is calling:
```typescript
render(<UserList />)
```

But UserList requires a `users` prop. The component expects:
```typescript
interface UserListProps {
  users: User[];
}
```

## Solution
The test should pass the users prop:
```typescript
const mockUsers = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' }
];

render(<UserList users={mockUsers} />)
```

Should I apply this fix?
**The Workflow:**

You: Yes

AI: [Applies fix and runs tests]

✅ All tests passing!

## Root Cause
The test was missing required props. This is a common issue when
components are refactored to require props that were previously optional.

## Prevention
Consider adding TypeScript strict mode to catch these at compile time:
```json
"strict": true,
"strictNullChecks": true
```

Would you like me to add this to your tsconfig.json?
```

### What Happened
- ✅ Systematic investigation
- ✅ Clear hypothesis
- ✅ Verification before fixing
- ✅ Root cause analysis
- ✅ Prevention suggestions

---

## Example 3: Adding Payments with @stripe-integration

### The Scenario
You need to add subscription billing to your SaaS app.

### The Conversation

```
You: @stripe-integration I need to add monthly subscription billing

AI: I'll help you implement Stripe subscriptions. Let me understand your
requirements:

Current stack detected:
- Next.js 14 (App Router)
- TypeScript
- Prisma with PostgreSQL

For subscription billing, we'll need:

1. **Stripe Setup**
   - Stripe account and API keys
   - Webhook endpoint for events
   - Product and price IDs

2. **Backend Implementation**
   - API routes for checkout
   - Webhook handler
   - Database schema for subscriptions

3. **Frontend Implementation**
   - Pricing page
   - Checkout flow
   - Subscription management

Let's start with the backend. First, install dependencies:

```bash
npm install stripe @stripe/stripe-js
```

Now, let's create the database schema...

[AI provides Prisma schema]

```prisma
model User {
  id               String  @id @default(cuid())
  email            String  @unique
  stripeCustomerId String? @unique
  subscriptions    Subscription[]
}

model Subscription {
  id                   String   @id @default(cuid())
  userId               String
  user                 User     @relation(fields: [userId], references: [id])
  stripeSubscriptionId String   @unique
  status               String
  priceId              String
  currentPeriodEnd     DateTime
  createdAt            DateTime @default(now())
  updatedAt            DateTime @updatedAt
}
```

Next, let's create the checkout API route...

[AI provides complete implementation with:]
- API route for creating checkout sessions
- Webhook handler for subscription events
- Helper functions for Stripe operations
- Error handling and logging
- TypeScript types

Now for the frontend...

[AI provides:]
- Pricing page component
- Checkout button component
- Subscription status display
- Tailwind styling

Finally, let's set up the webhook:

1. In Stripe Dashboard, add webhook endpoint:
   `https://yourdomain.com/api/webhooks/stripe`

2. Select these events:
   - customer.subscription.created
   - customer.subscription.updated
   - customer.subscription.deleted
   - invoice.payment_succeeded
   - invoice.payment_failed

3. Copy the webhook secret to your .env:
```
STRIPE_WEBHOOK_SECRET=whsec_...
```

## Testing

Test locally with Stripe CLI:
```bash
stripe listen --forward-to localhost:3000/api/webhooks/stripe
stripe trigger customer.subscription.created
```

## Security Checklist
- ✅ API keys in environment variables
- ✅ Webhook signature verification
- ✅ Server-side price validation
- ✅ Idempotency keys for operations

Ready to test the implementation?
```
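The "webhook signature verification" item in the checklist above boils down to recomputing an HMAC over the raw request body and comparing it to the header value in constant time. In practice you would use Stripe's own `constructEvent` helper, which also checks a timestamp; the sketch below only illustrates the underlying idea and is not Stripe's exact scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Recompute the HMAC-SHA256 of the raw body with the shared webhook secret and
// compare it against the hex signature sent by the caller. The length check
// guards timingSafeEqual, which throws on unequal-length buffers.
function verifySignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "utf8");
  const b = Buffer.from(signatureHex, "utf8");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The constant-time comparison matters: a plain `===` on the strings can leak how many leading characters matched, which an attacker can exploit to forge signatures byte by byte.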

### What Happened
- ✅ Detected existing stack
- ✅ Provided complete implementation
- ✅ Included database schema
- ✅ Added security best practices
- ✅ Included testing instructions

---

## Example 4: Writing Copy with @copywriting

### The Scenario
You need compelling copy for your SaaS landing page.

### The Conversation

```
You: @copywriting Write homepage copy for my project management tool

AI: I'll help you create compelling homepage copy. First, let me understand
your product:

What makes your project management tool different from competitors
like Asana, Trello, or Monday.com?

You: It's specifically built for remote teams with built-in video standups
and async communication features

AI: Excellent differentiator! Who is your primary target audience?
a) Startups and small teams (2-20 people)
b) Mid-size companies (20-200 people)
c) Enterprise (200+ people)
d) Freelancers and agencies

You: a) Startups and small teams

AI: Perfect. What's the main pain point you solve?

You: Remote teams waste time in unnecessary meetings and lose context
in async communication

AI: Great! Now I have what I need. Here's your homepage copy:

---

## Hero Section

**Headline:**
Run Your Remote Team Without the Meeting Overload

**Subheadline:**
The project management tool built for distributed teams. Replace
status meetings with async video standups and keep everyone in sync
without the Zoom fatigue.

**CTA:**
Start Free Trial → (No credit card required)

---

## Problem Section

**Headline:**
Remote Work Shouldn't Mean More Meetings

**Body:**
Your team is distributed across time zones. You're spending hours in
status meetings just to stay aligned. Async updates get lost in Slack.
Context disappears. Productivity suffers.

Sound familiar?

---

## Solution Section

**Headline:**
Stay Aligned Without the Meeting Marathon

**Features:**

📹 **Async Video Standups**
Record quick video updates on your schedule. Your team watches when
it works for them. No more 9am calls for your teammate in Tokyo.

💬 **Context-Rich Communication**
Every task, comment, and decision stays connected. No more hunting
through Slack threads or email chains.

📊 **Visual Progress Tracking**
See what's happening at a glance. Kanban boards, timelines, and
dashboards that actually make sense.

🔔 **Smart Notifications**
Get notified about what matters. Ignore the rest. Your focus time
is sacred.

---

## Social Proof Section

**Headline:**
Trusted by Remote Teams Worldwide

[Testimonial placeholders with structure]

---

## CTA Section

**Headline:**
Ready to Reclaim Your Calendar?

**Body:**
Join 500+ remote teams who've cut meeting time by 60% while staying
more aligned than ever.

**CTA:**
Start Your Free 14-Day Trial →

No credit card required. Cancel anytime.

---

## Copy Notes

**Tone:** Professional but conversational
**Focus:** Pain point (too many meetings) → Solution (async video)
**Differentiation:** Built specifically for remote teams
**Social proof:** Quantified benefit (60% fewer meetings)

Would you like me to:
1. Write alternative versions for A/B testing?
2. Create copy for specific feature pages?
3. Write email sequences for onboarding?
```

### What Happened
- ✅ Asked clarifying questions
- ✅ Identified unique value proposition
- ✅ Created benefit-focused copy
- ✅ Included clear CTAs
- ✅ Provided copy notes for consistency

---

## Example 5: Test-Driven Development with @test-driven-development

### The Scenario
You need to implement a new feature: user authentication.

### The Conversation

```
You: @test-driven-development I need to implement user login functionality

AI: Perfect! Let's use TDD to build this. Following the Red-Green-Refactor
cycle, we'll write tests first, then implement.

## Step 1: Write the First Test (RED)

Let's start with the simplest case - successful login:

```typescript
// auth.test.ts
import { login } from './auth';

describe('login', () => {
  it('should return user data for valid credentials', async () => {
    const result = await login({
      email: 'user@example.com',
      password: 'correct-password'
    });

    expect(result.success).toBe(true);
    expect(result.user).toHaveProperty('id');
    expect(result.user).toHaveProperty('email');
    expect(result.token).toBeDefined();
  });
});
```

Let's run this test to see it fail:

[AI runs: npm test]

✅ Test fails as expected (auth.ts doesn't exist yet)

## Step 2: Write Minimal Implementation (GREEN)

Now let's write just enough code to make this test pass:

```typescript
// auth.ts
interface LoginCredentials {
  email: string;
  password: string;
}

interface LoginResult {
  success: boolean;
  user?: {
    id: string;
    email: string;
  };
  token?: string;
}

export async function login(
  credentials: LoginCredentials
): Promise<LoginResult> {
  // Minimal implementation to pass the test
  return {
    success: true,
    user: {
      id: '1',
      email: credentials.email
    },
    token: 'fake-token'
  };
}
```

[AI runs: npm test]

✅ Test passes!

## Step 3: Add Next Test (RED)

Now let's test the failure case:

```typescript
it('should return error for invalid credentials', async () => {
  const result = await login({
    email: 'user@example.com',
    password: 'wrong-password'
  });

  expect(result.success).toBe(false);
  expect(result.error).toBe('Invalid credentials');
  expect(result.user).toBeUndefined();
  expect(result.token).toBeUndefined();
});
```

[AI runs: npm test]

❌ Test fails (always returns success)

## Step 4: Implement Real Logic (GREEN)

Now let's add actual authentication:

```typescript
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import { db } from './db';

export async function login(
  credentials: LoginCredentials
): Promise<LoginResult> {
  // Find user
  const user = await db.user.findUnique({
    where: { email: credentials.email }
  });

  if (!user) {
    return {
      success: false,
      error: 'Invalid credentials'
    };
  }

  // Verify password
  const validPassword = await bcrypt.compare(
    credentials.password,
    user.passwordHash
  );

  if (!validPassword) {
    return {
      success: false,
      error: 'Invalid credentials'
    };
  }

  // Generate token
  const token = jwt.sign(
    { userId: user.id },
    process.env.JWT_SECRET!,
    { expiresIn: '7d' }
  );

  return {
    success: true,
    user: {
      id: user.id,
      email: user.email
    },
    token
  };
}
```

[AI runs: npm test]

✅ All tests pass!

## Step 5: Add Edge Cases

Let's add more tests for edge cases:

```typescript
it('should handle missing email', async () => {
  const result = await login({
    email: '',
    password: 'password'
  });

  expect(result.success).toBe(false);
  expect(result.error).toBe('Email is required');
});

it('should handle missing password', async () => {
  const result = await login({
    email: 'user@example.com',
    password: ''
  });

  expect(result.success).toBe(false);
  expect(result.error).toBe('Password is required');
});

it('should handle database errors gracefully', async () => {
  // Mock database error
  jest.spyOn(db.user, 'findUnique').mockRejectedValue(
    new Error('Database connection failed')
  );

  const result = await login({
    email: 'user@example.com',
    password: 'password'
  });

  expect(result.success).toBe(false);
  expect(result.error).toBe('An error occurred');
});
```

[AI implements validation and error handling]

## Step 6: Refactor (REFACTOR)

Now that all tests pass, let's refactor for better code quality:

```typescript
// Extract validation
function validateCredentials(credentials: LoginCredentials): string | null {
  if (!credentials.email) return 'Email is required';
  if (!credentials.password) return 'Password is required';
  return null;
}

// Extract user verification
async function verifyUser(
  email: string,
  password: string
): Promise<User | null> {
  const user = await db.user.findUnique({ where: { email } });
  if (!user) return null;

  const valid = await bcrypt.compare(password, user.passwordHash);
  return valid ? user : null;
}

// Main function is now cleaner
export async function login(
  credentials: LoginCredentials
): Promise<LoginResult> {
  try {
    const validationError = validateCredentials(credentials);
    if (validationError) {
      return { success: false, error: validationError };
    }

    const user = await verifyUser(
      credentials.email,
      credentials.password
    );

    if (!user) {
      return { success: false, error: 'Invalid credentials' };
    }

    const token = generateToken(user.id);

    return {
      success: true,
      user: { id: user.id, email: user.email },
      token
    };
  } catch (error) {
    console.error('Login error:', error);
    return { success: false, error: 'An error occurred' };
  }
}
```

[AI runs: npm test]

✅ All tests still pass after refactoring!

## Summary

We've implemented login functionality using TDD:
- ✅ 5 tests covering happy path, errors, and edge cases
- ✅ Clean, testable code
- ✅ Proper error handling
- ✅ Type safety with TypeScript

Next steps:
1. Add rate limiting tests
2. Add session management tests
3. Add logout functionality

Should I continue with these?
```

### What Happened
- ✅ Followed Red-Green-Refactor cycle
- ✅ Wrote tests before implementation
- ✅ Covered edge cases
- ✅ Refactored with confidence
- ✅ Maintained test coverage

---

## Key Takeaways

### What Makes These Examples Effective?

1. **Skills ask clarifying questions** before jumping to solutions
2. **Skills provide context-aware help** based on your project
3. **Skills follow best practices** for their domain
4. **Skills include complete examples**, not just snippets
5. **Skills explain the "why"**, not just the "how"

### How to Get Similar Results

1. **Be specific** in your requests
2. **Provide context** about your project
3. **Answer questions** the skill asks
4. **Review suggestions** before applying
5. **Iterate** based on results

---

## Try These Yourself!

Pick a skill and try it with your own project:

- **Planning:** `@brainstorming` or `@writing-plans`
- **Development:** `@test-driven-development` or `@react-best-practices`
- **Debugging:** `@systematic-debugging` or `@test-fixing`
- **Integration:** `@stripe-integration` or `@firebase`
- **Marketing:** `@copywriting` or `@seo-audit`

---

**Want more examples?** Check individual skill folders for additional examples and use cases!

1. **Build**: "Use `mcp-builder` to create a `verify-file` tool."
2. **Instruct**: "Apply `prompt-engineering` patterns to the System Prompt so the agent always checks file paths."
3. **Test**: "Run `agent-evaluation` to benchmark how often the agent fails to find the file."
183 docs/FAQ.md Normal file
@@ -0,0 +1,183 @@
|
||||
# ❓ Frequently Asked Questions (FAQ)
|
||||
|
||||
**Got questions?** You're not alone! Here are answers to the most common questions about Antigravity Awesome Skills.
|
||||
|
||||
---
|
||||
|
||||
## 🎯 General Questions
|
||||
|
||||
### What are "skills" exactly?
|
||||
|
||||
Skills are specialized instruction files that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on-demand.
|
||||
**Simple analogy:** Just like you might consult different experts (a lawyer, a doctor, a mechanic), these skills let your AI become an expert in different areas when you need them.
|
||||
|
||||
### Do I need to install all 626+ skills?
|
||||
|
||||
**No!** When you clone the repository, all skills are available, but your AI only loads them when you explicitly invoke them with `@skill-name`.
|
||||
It's like having a library - all books are there, but you only read the ones you need.
|
||||
**Pro Tip:** Use [Starter Packs](BUNDLES.md) to install only what matches your role.
|
||||
|
||||
### Which AI tools work with these skills?
|
||||
|
||||
- ✅ **Claude Code** (Anthropic CLI)
|
||||
- ✅ **Gemini CLI** (Google)
|
||||
- ✅ **Codex CLI** (OpenAI)
|
||||
- ✅ **Cursor** (AI IDE)
|
||||
- ✅ **Antigravity IDE**
|
||||
- ✅ **OpenCode**
|
||||
- ⚠️ **GitHub Copilot** (partial support via copy-paste)
|
||||
|
||||
### Are these skills free to use?
|
||||
|
||||
**Yes!** This repository is licensed under MIT License.
|
||||
|
||||
- ✅ Free for personal use
|
||||
- ✅ Free for commercial use
|
||||
- ✅ You can modify them
|
||||
|
||||
### Do skills work offline?
|
||||
|
||||
The skill files themselves are stored locally on your computer, but your AI assistant needs an internet connection to function.
|
||||
|
||||
---
|
||||
|
||||
## 🔒 Security & Trust (V4 Update)
|
||||
|
||||
### What do the Risk Labels mean?
|
||||
|
||||
We classify skills so you know what you're running:
|
||||
|
||||
- ⚪ **Safe (White/Blue)**: Read-only, planning, or benign skills.
|
||||
- 🔴 **Risk (Red)**: Skills that modify files (delete), use network scanners, or perform destructive actions. **Use with caution.**
|
||||
- 🟣 **Official (Purple)**: Maintained by trusted vendors (Anthropic, DeepMind, etc.).
|
||||
|
||||
### Can these skills hack my computer?
|
||||
|
||||
**No.** Skills are text files. However, they _instruct_ the AI to run commands. If a skill says "delete all files", a compliant AI might try to do it.
|
||||
_Always check the Risk label and review the code._
|
||||
|
||||
---
|
||||
|
||||
## 📦 Installation & Setup
|
||||
|
||||
### Where should I install the skills?
|
||||
|
||||
The universal path that works with most tools is `.agent/skills/`.
|
||||
|
||||
**Using npx:** `npx antigravity-awesome-skills` (or `npx github:sickn33/antigravity-awesome-skills` if you get a 404).
|
||||
|
||||
**Using git clone:**
|
||||
|
||||
```bash
|
||||
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
|
||||
```
|
||||
|
||||
**Tool-specific paths:**
|
||||
|
||||
- Claude Code: `.claude/skills/`
|
||||
- Gemini CLI: `.gemini/skills/`
|
||||
- Codex CLI: `.codex/skills/`
|
||||
- Cursor: `.cursor/skills/` or project root
|
||||
|
||||
### Does this work with Windows?
|
||||
|
||||
**Yes**, but some "Official" skills use **symlinks** which Windows handles poorly by default.
|
||||
Run git with:
|
||||
|
||||
```bash
|
||||
git clone -c core.symlinks=true https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
|
||||
```
|
||||
|
||||
Or enable "Developer Mode" in Windows Settings.
|
||||
|
||||
### How do I update skills?
|
||||
|
||||
Navigate to your skills directory and pull the latest changes:
|
||||
|
||||
```bash
|
||||
cd .agent/skills
|
||||
git pull origin main
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 🛠️ Using Skills
|
||||
|
||||
### How do I invoke a skill?
|
||||
|
||||
Use the `@` symbol followed by the skill name:
|
||||
|
||||
```bash
|
||||
@brainstorming help me design a todo app
|
||||
```
|
||||
|
||||
### Can I use multiple skills at once?
|
||||
|
||||
**Yes!** You can invoke multiple skills:
|
||||
|
||||
```bash
|
||||
@brainstorming help me design this, then use @writing-plans to create a task list.
|
||||
```
|
||||
|
||||
### How do I know which skill to use?
|
||||
|
||||
1. **Browse the catalog**: Check the [Skill Catalog](../CATALOG.md).
|
||||
2. **Search**: `ls skills/ | grep "keyword"`
|
||||
3. **Ask your AI**: "What skills do you have for testing?"

---

## 🏗️ Troubleshooting

### My AI assistant doesn't recognize skills

**Possible causes:**

1. **Wrong installation path**: Check your tool's docs. Try `.agent/skills/`.
2. **Restart needed**: Restart your AI/IDE after installing.
3. **Typos**: Did you type `@brain-storming` instead of `@brainstorming`?

### A skill gives incorrect or outdated advice

Please [open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues)!
Include:

- Which skill
- What went wrong
- What should happen instead

---

## 🤝 Contribution

### I'm new to open source. Can I contribute?

**Absolutely!** We welcome beginners. Good first contributions:

- Fix typos
- Add examples
- Improve docs

Check out [CONTRIBUTING.md](../CONTRIBUTING.md) for instructions.

### My PR failed the "Quality Bar" check. Why?

V4 introduces automated quality control. Your skill might be missing:

1. A valid `description`.
2. Usage examples.

Run `python3 scripts/validate_skills.py` locally to check before you push.

### Can I update an "Official" skill?

**No.** Official skills (in `skills/official/`) are mirrored from vendors. Open an issue instead.

---

## 💡 Pro Tips

- Start with `@brainstorming` before building anything new
- Use `@systematic-debugging` when stuck on bugs
- Try `@test-driven-development` for better code quality
- Explore `@skill-creator` to make your own skills

**Still confused?** [Open a discussion](https://github.com/sickn33/antigravity-awesome-skills/discussions) and we'll help you out! 🙌
121
docs/GETTING_STARTED.md
Normal file
@@ -0,0 +1,121 @@
# Getting Started with Antigravity Awesome Skills (V4)

**New here? This guide will help you supercharge your AI Agent in 5 minutes.**

---

## 🤔 What Are "Skills"?

AI Agents (like **Claude Code**, **Gemini**, **Cursor**) are smart, but they lack specific knowledge about your tools.
**Skills** are specialized instruction manuals (markdown files) that teach your AI how to perform specific tasks perfectly, every time.

**Analogy:** Your AI is a brilliant intern. **Skills** are the SOPs (Standard Operating Procedures) that make them a Senior Engineer.

---

## ⚡️ Quick Start: The "Starter Packs"

Don't panic about the 626+ skills. You don't need them all at once.
We have curated **Starter Packs** to get you running immediately.

You **install the full repo once** (npx or clone); Starter Packs are curated lists that help you **pick which skills to use** by role (e.g. Web Wizard, Hacker Pack). They are not a different way to install.

### 1. Install the Repo

**Option A — npx (easiest):**

```bash
npx antigravity-awesome-skills
```

This clones to `~/.agent/skills` by default. Use `--cursor`, `--claude`, `--gemini`, or `--codex` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.

If you see a 404 error, use: `npx github:sickn33/antigravity-awesome-skills`

**Option B — git clone:**

```bash
# Universal (works for most agents)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```
### 2. Pick Your Persona

Find the bundle that matches your role (see [BUNDLES.md](BUNDLES.md)):

| Persona | Bundle Name | What's Inside? |
| :-------------------- | :------------- | :------------------------------------------------ |
| **Web Developer** | `Web Wizard` | React Patterns, Tailwind mastery, Frontend Design |
| **Security Engineer** | `Hacker Pack` | OWASP, Metasploit, Pentest Methodology |
| **Manager / PM** | `Product Pack` | Brainstorming, Planning, SEO, Strategy |
| **Everything** | `Essentials` | Clean Code, Planning, Validation (The Basics) |

---
## 🚀 How to Use a Skill

Once installed, just talk to your AI naturally.

### Example 1: Planning a Feature (**Essentials**)

> "Use **@brainstorming** to help me design a new login flow."

**What happens:** The AI loads the brainstorming skill, asks you structured questions, and produces a professional spec.

### Example 2: Checking Your Code (**Web Wizard**)

> "Run **@lint-and-validate** on this file and fix errors."

**What happens:** The AI follows strict linting rules defined in the skill to clean your code.

### Example 3: Security Audit (**Hacker Pack**)

> "Use **@api-security-best-practices** to review my API endpoints."

**What happens:** The AI audits your code against OWASP standards.

---
## 🔌 Supported Tools

| Tool | Status | Path |
| :-------------- | :-------------- | :---------------- |
| **Claude Code** | ✅ Full Support | `.claude/skills/` |
| **Gemini CLI** | ✅ Full Support | `.gemini/skills/` |
| **Codex CLI** | ✅ Full Support | `.codex/skills/` |
| **Antigravity** | ✅ Native | `.agent/skills/` |
| **Cursor** | ✅ Native | `.cursor/skills/` |
| **Copilot** | ⚠️ Text Only | Manual copy-paste |

---

## 🛡️ Trust & Safety (New in V4)

We classify skills so you know what you're running:

- 🟣 **Official**: Maintained by Anthropic/Google/Vendors (High Trust).
- 🔵 **Safe**: Community skills that are non-destructive (Read-only/Planning).
- 🔴 **Risk**: Skills that modify systems or perform security tests (Authorized Use Only).

_Check the [Skill Catalog](../CATALOG.md) for the full list._

---
## ❓ FAQ

**Q: Do I need to install all 626 skills?**
A: You clone the whole repo once; your AI only _reads_ the skills you invoke (or that are relevant), so it stays lightweight. **Starter Packs** in [BUNDLES.md](BUNDLES.md) are curated lists that help you discover the right skills for your role; they don't change how you install.

**Q: Can I make my own skills?**
A: Yes! Use the **@skill-creator** skill to build your own.

**Q: Is this free?**
A: Yes, MIT License. Open Source forever.

---

## ⏭️ Next Steps

1. [Browse the Bundles](BUNDLES.md)
2. [See Real-World Examples](EXAMPLES.md)
3. [Contribute a Skill](../CONTRIBUTING.md)
66
docs/QUALITY_BAR.md
Normal file
@@ -0,0 +1,66 @@
# 🏆 Quality Bar & Validation Standards

To transform **Antigravity Awesome Skills** from a collection of scripts into a trusted platform, every skill must meet a specific standard of quality and safety.

## The "Validated" Badge ✅

A skill earns the "Validated" badge only if it passes these **5 automated checks**:

### 1. Metadata Integrity

The `SKILL.md` frontmatter must be valid YAML and contain:

- `name`: Kebab-case, matches folder name.
- `description`: Under 200 chars, clear value prop.
- `risk`: One of `[none, safe, critical, offensive]`.
- `source`: URL to original source (or "self" if original).
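
For reference, a frontmatter block that satisfies this check might look like the following (the skill name and source URL are invented for illustration):

```yaml
---
name: react-debugging
description: "Systematic debugging workflow for React components and hooks."
risk: safe
source: "https://example.com/original-guide"
---
```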

### 2. Clear Triggers ("When to use")

The skill MUST have a section explicitly stating when to trigger it.

- **Good**: "Use when the user asks to debug a React component."
- **Bad**: "This skill helps you with code."

Accepted headings: `## When to Use`, `## Use this skill when`, `## When to Use This Skill`.
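
A minimal trigger section using one of the accepted headings might look like this (hypothetical skill):

```markdown
## When to Use This Skill

- The user asks to debug a React component.
- A React error message or stack trace is pasted into the chat.
```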

### 3. Safety & Risk Classification

Every skill must declare its risk level:

- 🟢 **none**: Pure text/reasoning (e.g., Brainstorming).
- 🔵 **safe**: Reads files, runs safe commands (e.g., Linter).
- 🟠 **critical**: Modifies state, deletes files, pushes to prod (e.g., Git Push).
- 🔴 **offensive**: Pentesting/Red Team tools. **MUST** have an "Authorized Use Only" warning.

### 4. Copy-Pasteable Examples

At least one code block or interaction example that a user (or agent) can immediately use.
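
As a sketch, an interaction example inside a skill can be as simple as this (hypothetical content):

```markdown
### Example

> "Use @react-debugging to find out why this component re-renders twice."

Expected behavior: the agent inspects the component, lists re-render causes, and proposes a fix.
```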

### 5. Explicit Limitations

A list of known edge cases or things the skill _cannot_ do.

- _Example_: "Does not work on Windows without WSL."

---

## Support Levels

We also categorize skills by who maintains them:

| Level | Badge | Meaning |
| :------------ | :---- | :-------------------------------------------------- |
| **Official** | 🟣 | Maintained by the core team. High reliability. |
| **Community** | ⚪ | Contributed by the ecosystem. Best-effort support. |
| **Verified** | ✨ | Community skill that has passed deep manual review. |

---

## How to Validate Your Skill

The canonical validator is `scripts/validate_skills.py`. Run `npm run validate` (or `npm run validate:strict`) before submitting a PR:

```bash
npm run validate         # soft mode (warnings only)
npm run validate:strict  # strict mode (CI uses this)
```
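
To make the checks above concrete, here is a simplified sketch of what a frontmatter check could look like. This is **not** the real `scripts/validate_skills.py`: the parsing is deliberately naive (a real validator would use a YAML library), and only the field rules listed in this document are encoded.

```python
# Simplified, hypothetical sketch of a frontmatter check.
# Field names and rules follow QUALITY_BAR.md; everything else is illustrative.
import re

REQUIRED_FIELDS = {"name", "description", "risk", "source"}
ALLOWED_RISK = {"none", "safe", "critical", "offensive"}

def check_frontmatter(skill_md: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's frontmatter."""
    errors = []
    match = re.match(r"^---\n(.*?)\n---", skill_md, re.DOTALL)
    if not match:
        return ["missing YAML frontmatter block"]
    # Naive key: value parsing; sufficient for flat frontmatter like above.
    fields = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    for field in REQUIRED_FIELDS - fields.keys():
        errors.append(f"missing required field: {field}")
    if fields.get("risk") and fields["risk"] not in ALLOWED_RISK:
        errors.append(f"invalid risk level: {fields['risk']}")
    if len(fields.get("description", "")) > 200:
        errors.append("description exceeds 200 characters")
    return errors
```

The real validator also checks trigger sections, examples, and limitations; this sketch covers only metadata integrity.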

51
docs/SECURITY_GUARDRAILS.md
Normal file
@@ -0,0 +1,51 @@
# 🛡️ Security Guardrails & Policy

Antigravity Awesome Skills is a powerful toolkit. With great power comes great responsibility. This document defines the **Rules of Engagement** for all security and offensive capabilities in this repository.

## 🔴 Offensive Skills Policy (The "Red Line")

**What is an Offensive Skill?**
Any skill designed to penetrate, exploit, disrupt, or simulate attacks against systems.
_Examples: Pentesting, SQL Injection, Phishing Simulation, Red Teaming._

### 1. The "Authorized Use Only" Disclaimer

Every offensive skill **MUST** begin with this exact disclaimer in its `SKILL.md`:

> **⚠️ AUTHORIZED USE ONLY**
> This skill is for educational purposes or authorized security assessments only.
> You must have explicit, written permission from the system owner before using this tool.
> Misuse of this tool is illegal and strictly prohibited.

### 2. Mandatory User Confirmation

Offensive skills must **NEVER** run fully autonomously.

- **Requirement**: The skill description/instructions must explicitly tell the agent to _ask for user confirmation_ before executing any exploit or attack command.
- **Agent Instruction**: "Ask the user to verify the target URL/IP before running."

### 3. Safe by Design

- **No Weaponized Payloads**: Skills should not include active malware, ransomware, or non-educational exploits.
- **Sandbox Recommended**: Instructions should recommend running in a contained environment (Docker/VM).

---

## 🔵 Defensive Skills Policy

**What is a Defensive Skill?**
Tools for hardening, auditing, monitoring, or protecting systems.
_Examples: Linting, Log Analysis, Configuration Auditing._

- **Data Privacy**: Defensive skills must not upload data to third-party servers without explicit user consent.
- **Non-Destructive**: Audits should be read-only by default.

---

## ⚖️ Legal Disclaimer

By using this repository, you agree that:

1. You are responsible for your own actions.
2. The authors and contributors are not liable for any damage caused by these tools.
3. You will comply with all local, state, and federal laws regarding cybersecurity.

@@ -31,6 +31,7 @@ skills/
Every `SKILL.md` file has two main parts:

### 1. Frontmatter (Metadata)

### 2. Content (Instructions)

Let's break down each part:

@@ -51,12 +52,14 @@ description: "Brief description of what this skill does"

### Required Fields

#### `name`

- **What it is:** The skill's identifier
- **Format:** lowercase-with-hyphens
- **Must match:** The folder name exactly
- **Example:** `stripe-integration`
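
As an illustration (not part of the repo's tooling), the name-matches-folder rule can be checked with a few lines of Python:

```python
# Hypothetical helper illustrating the rule above: the skill `name`
# must be lowercase-with-hyphens and must equal its folder's basename.
import re
from pathlib import Path

def name_matches_folder(skill_dir: str, name: str) -> bool:
    """True if `name` is kebab-case and matches the folder name exactly."""
    is_kebab = re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name) is not None
    return is_kebab and Path(skill_dir).name == name
```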

#### `description`

- **What it is:** One-sentence summary
- **Format:** String in quotes
- **Length:** Keep it under 150 characters

@@ -70,9 +73,9 @@ Some skills include additional metadata:

---
name: my-skill-name
description: "Brief description"
version: "1.0.0"
author: "Your Name"
risk: "safe" # none | safe | critical | offensive (see QUALITY_BAR.md)
source: "community"
tags: ["react", "typescript"]
---
```

@@ -85,13 +88,16 @@ After the frontmatter comes the actual skill content. Here's the recommended str

### Recommended Sections

#### 1. Title (H1)

```markdown
# Skill Title
```

- Use a clear, descriptive title
- Usually matches or expands on the skill name

#### 2. Overview

```markdown
## Overview

@@ -100,6 +106,7 @@ A brief explanation of what this skill does and why it exists.
```

#### 3. When to Use

```markdown
## When to Use This Skill

@@ -111,28 +118,34 @@ A brief explanation of what this skill does and why it exists.

**Why this matters:** Helps the AI know when to activate this skill

#### 4. Core Instructions

```markdown
## How It Works

### Step 1: [Action]

Detailed instructions...

### Step 2: [Action]

More instructions...
```

**This is the heart of your skill** - clear, actionable steps

#### 5. Examples

```markdown
## Examples

### Example 1: [Use Case]

\`\`\`javascript
// Example code
\`\`\`

### Example 2: [Another Use Case]

\`\`\`javascript
// More code
\`\`\`

@@ -141,6 +154,7 @@ More instructions...

**Why examples matter:** They show the AI exactly what good output looks like

#### 6. Best Practices

```markdown
## Best Practices

@@ -151,6 +165,7 @@ More instructions...
```

#### 7. Common Pitfalls

```markdown
## Common Pitfalls

@@ -159,6 +174,7 @@ More instructions...
```

#### 8. Related Skills

```markdown
## Related Skills

@@ -173,11 +189,13 @@ More instructions...

### Use Clear, Direct Language

**❌ Bad:**

```markdown
You might want to consider possibly checking if the user has authentication.
```

**✅ Good:**

```markdown
Check if the user is authenticated before proceeding.
```

@@ -185,11 +203,13 @@ Check if the user is authenticated before proceeding.

### Use Action Verbs

**❌ Bad:**

```markdown
The file should be created...
```

**✅ Good:**

```markdown
Create the file...
```

@@ -197,11 +217,13 @@ Create the file...

### Be Specific

**❌ Bad:**

```markdown
Set up the database properly.
```

**✅ Good:**

```markdown
1. Create a PostgreSQL database
2. Run migrations: `npm run migrate`

@@ -224,6 +246,7 @@ scripts/
```

**Reference them in SKILL.md:**

```markdown
Run the setup script:
\`\`\`bash

@@ -256,6 +279,7 @@ templates/
```

**Reference in SKILL.md:**

```markdown
Use this template as a starting point:
\`\`\`typescript

@@ -279,16 +303,19 @@ references/

## Skill Size Guidelines

### Minimum Viable Skill

- **Frontmatter:** name + description
- **Content:** 100-200 words
- **Sections:** Overview + Instructions

### Standard Skill

- **Frontmatter:** name + description
- **Content:** 300-800 words
- **Sections:** Overview + When to Use + Instructions + Examples

### Comprehensive Skill

- **Frontmatter:** name + description + optional fields
- **Content:** 800-2000 words
- **Sections:** All recommended sections

@@ -303,7 +330,9 @@ references/

### Use Markdown Effectively

#### Code Blocks

Always specify the language:

```markdown
\`\`\`javascript
const example = "code";

@@ -311,7 +340,9 @@ const example = "code";
```

#### Lists

Use consistent formatting:

```markdown
- Item 1
- Item 2

@@ -320,11 +351,13 @@ Use consistent formatting:
```

#### Emphasis

- **Bold** for important terms: `**important**`
- _Italic_ for emphasis: `*emphasis*`
- `Code` for commands/code: `` `code` ``

#### Links

```markdown
[Link text](https://example.com)
```

@@ -336,24 +369,28 @@ Use consistent formatting:

Before finalizing your skill:

### Content Quality

- [ ] Instructions are clear and actionable
- [ ] Examples are realistic and helpful
- [ ] No typos or grammar errors
- [ ] Technical accuracy verified

### Structure

- [ ] Frontmatter is valid YAML
- [ ] Name matches folder name
- [ ] Sections are logically organized
- [ ] Headings follow hierarchy (H1 → H2 → H3)

### Completeness

- [ ] Overview explains the "why"
- [ ] Instructions explain the "how"
- [ ] Examples show the "what"
- [ ] Edge cases are addressed

### Usability

- [ ] A beginner could follow this
- [ ] An expert would find it useful
- [ ] The AI can parse it correctly

@@ -373,6 +410,7 @@ description: "You MUST use this before any creative work..."
```

**Analysis:**

- ✅ Clear name
- ✅ Strong description with urgency ("MUST use")
- ✅ Explains when to use it

@@ -381,10 +419,12 @@ description: "You MUST use this before any creative work..."

# Brainstorming Ideas Into Designs

## Overview

Help turn ideas into fully formed designs...
```

**Analysis:**

- ✅ Clear title
- ✅ Concise overview
- ✅ Explains the value proposition

@@ -393,11 +433,13 @@ Help turn ideas into fully formed designs...

## The Process

**Understanding the idea:**

- Check out the current project state first
- Ask questions one at a time
```

**Analysis:**

- ✅ Broken into clear phases
- ✅ Specific, actionable steps
- ✅ Easy to follow

@@ -412,10 +454,12 @@ Help turn ideas into fully formed designs...

## Instructions

If the user is working with React:

- Use functional components
- Prefer hooks over class components

If the user is working with Vue:

- Use Composition API
- Follow Vue 3 patterns
```

@@ -424,9 +468,11 @@ If the user is working with Vue:

```markdown
## Basic Usage

[Simple instructions for common cases]

## Advanced Usage

[Complex patterns for power users]
```

@@ -447,15 +493,18 @@ If the user is working with Vue:

How to know if your skill is good:

### Clarity Test

- Can someone unfamiliar with the topic follow it?
- Are there any ambiguous instructions?

### Completeness Test

- Does it cover the happy path?
- Does it handle edge cases?
- Are error scenarios addressed?

### Usefulness Test

- Does it solve a real problem?
- Would you use this yourself?
- Does it save time or improve quality?

@@ -467,11 +516,13 @@ How to know if your skill is good:

### Study These Examples

**For Beginners:**

- `skills/brainstorming/SKILL.md` - Clear structure
- `skills/git-pushing/SKILL.md` - Simple and focused
- `skills/copywriting/SKILL.md` - Good examples

**For Advanced:**

- `skills/systematic-debugging/SKILL.md` - Comprehensive
- `skills/react-best-practices/SKILL.md` - Multiple files
- `skills/loki-mode/SKILL.md` - Complex workflows

@@ -491,22 +542,28 @@ How to know if your skill is good:

## Common Mistakes to Avoid

### ❌ Mistake 1: Too Vague

```markdown
## Instructions

Make the code better.
```

**✅ Fix:**

```markdown
## Instructions

1. Extract repeated logic into functions
2. Add error handling for edge cases
3. Write unit tests for core functionality
```

### ❌ Mistake 2: Too Complex

```markdown
## Instructions

[5000 words of dense technical jargon]
```

@@ -514,8 +571,10 @@ Make the code better.

Break into multiple skills or use progressive disclosure

### ❌ Mistake 3: No Examples

```markdown
## Instructions

[Instructions without any code examples]
```

@@ -523,6 +582,7 @@ Break into multiple skills or use progressive disclosure

Add at least 2-3 realistic examples

### ❌ Mistake 4: Outdated Information

```markdown
Use React class components...
```

87
docs/SOURCES.md
Normal file
@@ -0,0 +1,87 @@
# 📜 Sources & Attributions

We believe in giving credit where credit is due.
If you recognize your work here and it is not properly attributed, please open an Issue.

| Skill / Category | Original Source | License | Notes |
| :-------------------------- | :----------------------------------------------------- | :------------- | :---------------------------- |
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
| `active-directory-attacks` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
| `owasp-top-10` | [OWASP](https://owasp.org/) | CC-BY-SA | Methodology adapted. |
| `burp-suite-testing` | [PortSwigger](https://portswigger.net/burp) | N/A | Usage guide only (no binary). |
| `crewai` | [CrewAI](https://github.com/joaomdmoura/crewAI) | MIT | Framework guides. |
| `langgraph` | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT | Framework guides. |
| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. |
| **All Official Skills** | [Anthropic / Google / OpenAI] | Proprietary | Usage encouraged by vendors. |

## Skills from VoltAgent/awesome-agent-skills

The following skills were added from the curated collection at [VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills):

### Official Team Skills

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `vercel-deploy-claimable` | [Vercel Labs](https://github.com/vercel-labs/agent-skills) | MIT | Official Vercel skill |
| `design-md` | [Google Labs (Stitch)](https://github.com/google-labs-code/stitch-skills) | Compatible | Google Labs Stitch skills |
| `hugging-face-cli`, `hugging-face-jobs` | [Hugging Face](https://github.com/huggingface/skills) | Compatible | Official Hugging Face skills |
| `culture-index`, `fix-review`, `sharp-edges` | [Trail of Bits](https://github.com/trailofbits/skills) | Compatible | Security skills from Trail of Bits |
| `expo-deployment`, `upgrading-expo` | [Expo](https://github.com/expo/skills) | Compatible | Official Expo skills |
| `commit`, `create-pr`, `find-bugs`, `iterate-pr` | [Sentry](https://github.com/getsentry/skills) | Compatible | Sentry dev team skills |
| `using-neon` | [Neon](https://github.com/neondatabase/agent-skills) | Compatible | Neon Postgres best practices |
| `fal-audio`, `fal-generate`, `fal-image-edit`, `fal-platform`, `fal-upscale`, `fal-workflow` | [fal.ai Community](https://github.com/fal-ai-community/skills) | Compatible | fal.ai AI model skills |

### Community Skills

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `automate-whatsapp`, `observe-whatsapp` | [gokapso](https://github.com/gokapso/agent-skills) | Compatible | WhatsApp automation skills |
| `readme` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | README generation |
| `screenshots` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | Marketing screenshots |
| `aws-skills` | [zxkane](https://github.com/zxkane/aws-skills) | Compatible | AWS development patterns |
| `deep-research` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Gemini Deep Research Agent |
| `ffuf-claude-skill` | [jthack](https://github.com/jthack/ffuf_claude_skill) | Compatible | Web fuzzing with ffuf |
| `ui-skills` | [ibelick](https://github.com/ibelick/ui-skills) | Compatible | UI development constraints |
| `vexor` | [scarletkc](https://github.com/scarletkc/vexor) | Compatible | Vector-powered CLI |
| `pypict-skill` | [omkamal](https://github.com/omkamal/pypict-claude-skill) | Compatible | Pairwise test generation |
| `makepad-skills` | [ZhangHanDong](https://github.com/ZhangHanDong/makepad-skills) | Compatible | Makepad UI development |
| `swiftui-expert-skill` | [AvdLee](https://github.com/AvdLee/SwiftUI-Agent-Skill) | Compatible | SwiftUI best practices |
| `threejs-skills` | [CloudAI-X](https://github.com/CloudAI-X/threejs-skills) | Compatible | Three.js 3D experiences |
| `claude-scientific-skills` | [K-Dense-AI](https://github.com/K-Dense-AI/claude-scientific-skills) | Compatible | Scientific research skills |
| `claude-win11-speckit-update-skill` | [NotMyself](https://github.com/NotMyself/claude-win11-speckit-update-skill) | Compatible | Windows 11 management |
| `imagen` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Google Gemini image generation |
| `security-bluebook-builder` | [SHADOWPR0](https://github.com/SHADOWPR0/security-bluebook-builder) | Compatible | Security documentation |
| `claude-ally-health` | [huifer](https://github.com/huifer/Claude-Ally-Health) | Compatible | Health assistant |
| `clarity-gate` | [frmoretto](https://github.com/frmoretto/clarity-gate) | Compatible | RAG quality verification |
| `n8n-code-python`, `n8n-mcp-tools-expert`, `n8n-node-configuration` | [czlonkowski](https://github.com/czlonkowski/n8n-skills) | Compatible | n8n automation skills |
| `varlock-claude-skill` | [wrsmith108](https://github.com/wrsmith108/varlock-claude-skill) | Compatible | Secure environment variables |
| `beautiful-prose` | [SHADOWPR0](https://github.com/SHADOWPR0/beautiful_prose) | Compatible | Writing style guide |
| `claude-speed-reader` | [SeanZoR](https://github.com/SeanZoR/claude-speed-reader) | Compatible | Speed reading tool |
| `skill-seekers` | [yusufkaraaslan](https://github.com/yusufkaraaslan/Skill_Seekers) | Compatible | Skill conversion tool |

Additional community skills:

- **frontend-slides** - [zarazhangrui](https://github.com/zarazhangrui/frontend-slides)
- **linear-claude-skill** - [wrsmith108](https://github.com/wrsmith108/linear-claude-skill)
- **skill-rails-upgrade** - [robzolkos](https://github.com/robzolkos/skill-rails-upgrade)
- **context-fundamentals** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-degradation** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-compression** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-optimization** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **multi-agent-patterns** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **tool-design** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **evaluation** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **memory-systems** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **terraform-skill** - [antonbabenko](https://github.com/antonbabenko/terraform-skill)

## Skills from whatiskadudoing/fp-ts-skills (v4.4.0)

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `fp-ts-pragmatic` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Pragmatic fp-ts guide – pipe, Option, Either, TaskEither |
| `fp-ts-react` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | fp-ts with React 18/19 and Next.js |
| `fp-ts-errors` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Type-safe error handling with Either and TaskEither |

## License Policy

- **Code**: All original code in this repository is **MIT**.
- **Content**: Documentation is **CC-BY-4.0**.
- **Third Party**: We respect the upstream licenses. If an imported skill is GPL, it will be marked clearly or excluded (we aim for MIT/Apache compatibility).

@@ -31,10 +31,9 @@
antigravity-awesome-skills/
│
├── 📄 README.md ← Overview & skill list
├── 📄 CONTRIBUTING.md ← How to contribute
│
├── 📁 skills/ ← All 250+ skills live here
│ │
│ ├── 📁 brainstorming/
│ │ └── 📄 SKILL.md ← Skill definition

@@ -43,20 +42,22 @@ antigravity-awesome-skills/
│ │ ├── 📄 SKILL.md
│ │ └── 📁 examples/ ← Optional extras
│ │
│ ├── 📁 react-best-practices/
│ │ ├── 📄 SKILL.md
│ │ ├── 📁 rules/
│ │ └── 📄 README.md
│ │
│ └── ... (250+ more skills)
│
├── 📁 scripts/ ← Validation & management
│ ├── validate_skills.py ← Quality Bar Enforcer
│ └── generate_index.py ← Registry Generator
│
├── 📁 .github/
│ └── 📄 MAINTENANCE.md ← Maintainers Guide
│
└── 📁 docs/ ← Documentation
├── 📄 GETTING_STARTED.md ← Start here! (NEW)
├── 📄 FAQ.md ← Troubleshooting
├── 📄 BUNDLES.md ← Starter Packs (NEW)
├── 📄 QUALITY_BAR.md ← Quality Standards
├── 📄 SKILL_ANATOMY.md ← How skills work
└── 📄 VISUAL_GUIDE.md ← This file!
```
---

@@ -95,7 +96,7 @@ antigravity-awesome-skills/

```
┌─────────────────────────┐
│ 179 AWESOME SKILLS │
│ 250+ AWESOME SKILLS │
└────────────┬────────────┘
│
┌────────────────────────┼────────────────────────┐
@@ -129,7 +130,7 @@ antigravity-awesome-skills/

## Skill File Anatomy (Visual)

```
````
┌─────────────────────────────────────────────────────────┐
│ SKILL.md │
├─────────────────────────────────────────────────────────┤
@@ -167,13 +168,14 @@ antigravity-awesome-skills/
│ └───────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────┘
```
````

---

## Installation (Visual Steps)

### Step 1: Clone the Repository

```
┌─────────────────────────────────────────┐
│ Terminal │
@@ -188,6 +190,7 @@
```

### Step 2: Verify Installation

```
┌─────────────────────────────────────────┐
│ File Explorer │
@@ -202,6 +205,7 @@
```

### Step 3: Use a Skill

```
┌─────────────────────────────────────────┐
│ AI Assistant Chat │
@@ -271,16 +275,19 @@

## Finding Skills (Visual Guide)

### Method 1: Browse by Category

```
README.md → Scroll to "Full Skill Registry" → Find category → Pick skill
```

### Method 2: Search by Keyword

```
Terminal → ls skills/ | grep "keyword" → See matching skills
```

### Method 3: Use the Index

```
Open skills_index.json → Search for keyword → Find skill path
```
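The index lookup in Method 3 is easy to script. Below is a minimal Python sketch; the index schema shown here (a JSON list of objects with `name`, `path`, and `description` keys) is an assumption for illustration - check the actual `skills_index.json` produced by `generate_index.py` before relying on it.

```python
import json

# Hypothetical excerpt of skills_index.json; the real file is produced by
# scripts/generate_index.py and its schema may differ - check it first.
INDEX_JSON = """
[
  {"name": "brainstorming", "path": "skills/brainstorming",
   "description": "Structured ideation"},
  {"name": "systematic-debugging", "path": "skills/systematic-debugging",
   "description": "Root-cause bug hunting"}
]
"""

def find_skills(index, keyword):
    """Return the paths of skills whose name or description mentions keyword."""
    kw = keyword.lower()
    return [
        s["path"]
        for s in index
        if kw in s["name"].lower() or kw in s["description"].lower()
    ]

index = json.loads(INDEX_JSON)
print(find_skills(index, "debug"))  # → ['skills/systematic-debugging']
```

Swapping `INDEX_JSON` for `json.load(open("skills_index.json"))` gives the same keyword search over the real registry.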
@@ -449,7 +456,7 @@ START HERE
│ 5. Submit PR │
│ │
│ 🆘 HELP │
│ • GETTING_STARTED.md - Basics │
│ • docs/GETTING_STARTED.md - Basics │
│ • CONTRIBUTING.md - How to contribute │
│ • SKILL_ANATOMY.md - Deep dive │
│ • GitHub Issues - Ask questions │
@@ -465,19 +472,19 @@ START HERE
Day 1: Install skills
│
└─→ "Wow, @brainstorming helped me design my app!"

Day 3: Use 5 different skills
│
└─→ "These skills save me so much time!"

Week 1: Create first skill
│
└─→ "I shared my expertise as a skill!"

Week 2: Skill gets merged
│
└─→ "My skill is helping others! 🎉"

Month 1: Regular contributor
│
└─→ "I've contributed 5 skills and improved docs!"
@@ -497,7 +504,8 @@ Month 1: Regular contributor

---

**Visual learner?** This guide should help! Still have questions? Check out:
- [GETTING_STARTED.md](../GETTING_STARTED.md) - Text-based intro
- [GETTING_STARTED.md](GETTING_STARTED.md) - Text-based intro
- [SKILL_ANATOMY.md](SKILL_ANATOMY.md) - Detailed breakdown
- [CONTRIBUTING.md](../CONTRIBUTING.md) - How to contribute

docs/vietnamese/BUNDLES.vi.md
@@ -0,0 +1,124 @@
# 📦 Antigravity Skill Bundles

Not sure where to start? Pick a bundle below to get a curated set of skills suited to your role.

## 🚀 "Essentials" Starter Bundle

_For everyone. Install these skills first._

- `concise-planning`: Always start with a plan.
- `lint-and-validate`: Keep your code clean automatically.
- `git-pushing`: Save your work safely.
- `kaizen`: A continuous-improvement mindset.

## 🛡️ "Security Engineer" Bundle

_For pentesting, auditing, and hardening._

- `ethical-hacking-methodology`: The ethical-hacking "bible".
- `burp-suite-testing`: Web vulnerability scanning.
- `owasp-top-10`: Check for the most common flaws.
- `linux-privilege-escalation`: Advanced Linux security assessment.
- `cloud-penetration-testing`: AWS/Azure/GCP security.

## 🌐 "Web Wizard" Bundle

_For building modern, high-performance web apps._

- `frontend-design`: UI and aesthetics guidance.
- `react-patterns`: React best practices (where available).
- `tailwind-patterns`: Styling superpowers.
- `form-cro`: Optimize forms for conversion.
- `seo-audit`: Get your site found on Google.

## 🤖 "Agent Architect" Bundle

_For building AI systems._

- `agent-evaluation`: Test your AI agents.
- `langgraph`: Build stateful agent workflows.
- `mcp-builder`: Create your own tools.
- `prompt-engineering`: Master the art of talking to LLMs.

## 🎮 "Indie Game Dev" Bundle

_For building games with AI assistance._

- `game-development/game-design`: Game mechanics and loops.
- `game-development/2d-games`: Sprites and 2D physics.
- `game-development/3d-games`: 3D models and shaders.
- `game-development/unity-csharp`: Master C# programming.
- `algorithmic-art`: Generate game assets with code.

## 🐍 "Python Pro" Bundle

_For backend folks and data scientists._

- `python-patterns`: Write idiomatic Python.
- `poetry-manager`: Manage dependencies effectively.
- `pytest-mastery`: Testing frameworks.
- `fastapi-expert`: Build high-performance APIs.
- `django-guide`: The "batteries-included" full-featured framework.

## 🦄 "Startup Founder" Bundle

_For building products, not just code._

- `product-requirements-doc`: Define what to build.
- `competitor-analysis`: Know who you're up against.
- `pitch-deck-creator`: Raise funding (or just explain your idea).
- `landing-page-copy`: Write copy that sells.
- `stripe-integration`: Take payments.

## 🌧️ "DevOps & Cloud" Bundle

_For infrastructure and scaling._

- `docker-expert`: Master containers and multi-stage builds.
- `aws-serverless`: Go serverless on AWS (Lambda, DynamoDB).
- `environment-setup-guide`: Standardize across teams.
- `deployment-procedures`: Safe deployment strategies.
- `bash-linux`: Master the terminal.

## 📊 "Data & Analytics" Bundle

_For making sense of the numbers._

- `analytics-tracking`: Set up GA4/PostHog correctly.
- `d3-viz`: Beautiful custom data visualizations.
- `sql-mastery`: Write better queries (community skill).
- `ab-test-setup`: Learn from validated experiments.

## 🎨 "Creative Director" Bundle

_For visuals, content, and branding._

- `canvas-design`: Create posters and diagrams.
- `frontend-design`: UI aesthetics.
- `content-creator`: SEO-optimized blog posts.
- `copy-editing`: Polish written content.
- `algorithmic-art`: Code-generated artwork.

## 🐞 "QA & Testing" Bundle

_For catching bugs before your users do._

- `test-driven-development`: Red, Green, Refactor.
- `systematic-debugging`: The "Sherlock Holmes" of code.
- `browser-automation`: End-to-end (E2E) testing with Playwright.
- `ab-test-setup`: Validated experiments.
- `code-review-checklist`: Catch bugs in Pull Requests.

## 🖌️ "Web Designer" Bundle

_For crafting pixel-perfect experiences._

- `ui-ux-pro-max`: Advanced design systems/tokens.
- `frontend-design`: The foundation of aesthetics.
- `3d-web-experience`: Three.js & R3F magic.
- `canvas-design`: Static visuals/posters.
- `responsive-layout`: Mobile-first principles.

---

_To use a bundle, just copy the skill names into your `.agent/skills` folder or ask your AI assistant to use them._
docs/vietnamese/CONTRIBUTING.vi.md
@@ -0,0 +1,236 @@
# 🤝 Contributing Guide - V3 Enterprise Edition

**Thank you for wanting to make this repository better!** This guide shows you exactly how to contribute, even if you are new to open source.
With V3 we have raised the quality standards. Please read the new **Quality Standards** below carefully.

---

## 🧐 The "Quality Bar" (V3 Standard)

**Important for new skills:** Every submitted skill must pass the **5 Quality Checks** (see `docs/vietnamese/QUALITY_BAR.md` for details):

1. **Metadata**: Correct frontmatter (`name`, `description`).
2. **Safety**: No harmful commands without a "Risk" label.
3. **Clarity**: A clear "When to use" section.
4. **Examples**: At least one copy-paste-ready usage example.
5. **Actions**: Must define concrete action steps for the AI, not just "thoughts".

---

## Ways to Contribute

You don't need to be an expert! Here are ways anyone can help:

### 1. Improve Documentation (Easiest!)

- Fix typos or grammar.
- Make explanations clearer.
- Add examples to existing skills.
- Translate documentation into other languages.

### 2. Report Issues

- See something confusing? Let us know!
- A skill doesn't work? Tell us!
- Have a suggestion? We're all ears!

### 3. Create a New Skill

- Share your expertise as a skill.
- Fill gaps in the current collection.
- Improve existing skills.

### 4. Test and Validate

- Try skills and report what works and what doesn't.
- Test across different AI tools.
- Suggest improvements.

---

## How to Create a New Skill

### Step-by-Step Guide

#### Step 1: Pick a Topic for Your Skill

Ask yourself: "What do I wish my AI assistant understood better?"
For example: "I'm good at Docker, so I'll create a Docker skill."

#### Step 2: Create the Folder Structure

Skills live in the `skills/` folder. Use `kebab-case` for folder names.

```bash
# Go to the skills folder
cd skills/

# Create a folder for your skill
mkdir my-awesome-skill
cd my-awesome-skill

# Create the SKILL.md file
touch SKILL.md
```

#### Step 3: Write the SKILL.md Content

Every skill needs this basic structure. **Copy the template below:**

```markdown
---
name: my-awesome-skill
description: "A short description of what this skill does"
---

# Skill Title

## Overview

Explain what this skill does and when to use it.

## When to Use This Skill

- Use when [situation 1]
- Use when [situation 2]

## How It Works

Detailed step-by-step instructions for the AI...

## Examples

### Example 1

\`\`\`
example code here
\`\`\`

## Best Practices

- ✅ Do this
- ❌ Don't do this
```

#### Step 4: Validate (CRITICAL V3 STEP)

Run the validation script locally. **We will not accept Pull Requests (PRs) that fail this check.**

```bash
# Light mode (warnings only)
python3 scripts/validate_skills.py

# Strict mode (what CI runs)
python3 scripts/validate_skills.py --strict
```

This checks:

- ✅ The `SKILL.md` file exists.
- ✅ The frontmatter is correct.
- ✅ The name matches the folder name.
- ✅ The Quality Bar checks pass.

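The checks listed above can be approximated in a few lines of Python. This is an illustrative sketch only, not the actual `scripts/validate_skills.py`, whose real rules are stricter and may differ:

```python
import re

# Illustrative sketch of the validation checks - NOT the real
# scripts/validate_skills.py. Frontmatter parsing here is deliberately naive;
# the real script should use a YAML parser.
def check_skill(folder_name, skill_md):
    """Return a list of Quality Bar problems found in one SKILL.md text."""
    problems = []
    m = re.match(r"^---\n(.*?)\n---\n", skill_md, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter"]
    fields = dict(
        line.split(":", 1) for line in m.group(1).splitlines() if ":" in line
    )
    name = fields.get("name", "").strip()
    if not name:
        problems.append("missing 'name'")
    elif name != folder_name:
        problems.append("name does not match folder name")
    if not fields.get("description", "").strip():
        problems.append("missing 'description'")
    if "## When to Use" not in skill_md:
        problems.append("missing 'When to Use' section")
    return problems

doc = (
    "---\n"
    "name: my-awesome-skill\n"
    'description: "Helps with X"\n'
    "---\n\n"
    "## When to Use This Skill\n- Use when ...\n"
)
print(check_skill("my-awesome-skill", doc))  # → []
```

Running this against your skill folder before pushing catches the same class of failures CI would flag.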
#### Step 5: Submit Your Skill

```bash
git add skills/my-awesome-skill/
git commit -m "feat: add my-awesome-skill"
git push origin my-branch
```

---

## Skill Template (Copy & Paste)

Save time! Copy this template:

```markdown
---
name: your-skill-name
description: "One-sentence description of what the skill does and when to use it"
---

# Your Skill Name

## Overview

[2-3 sentences explaining what this skill does]

## When to Use This Skill

- Use when you need [situation 1]
- Use when you want [situation 2]

## Step-by-Step Instructions

### 1. [First Step Name]

[Detailed instructions]

## Examples

### Example 1: [Use Case Name]

\`\`\`language
// Example code here
\`\`\`

## Best Practices

- ✅ **Do:** [Good practice]
- ❌ **Don't:** [Thing to avoid]

## Troubleshooting

**Problem:** [Common error]
**Solution:** [How to fix it]
```

---

## Commit Message Guidelines

Use these prefixes:

- `feat:` - New skill or major feature.
- `docs:` - Documentation improvements.
- `fix:` - Bug fixes.
- `refactor:` - Code improvements without behavior changes.
- `test:` - Adding or updating tests.
- `chore:` - Maintenance tasks.

**Examples:**

```
feat: add kubernetes-deployment skill
docs: improve getting started guide
fix: correct typo in stripe-integration skill
```

---

## Learning Resources

### New to Git/GitHub?

- [GitHub's Hello World guide](https://guides.github.com/activities/hello-world/)
- [Git Basics](https://git-scm.com/book/en/v2/Getting-Started-Git-Basics)

### New to Markdown?

- [Markdown Guide](https://www.markdownguide.org/basic-syntax/)

---

## Code of Conduct

- Be respectful and inclusive.
- Welcome newcomers.
- Focus on constructive feedback.
- **No harmful content**: See `docs/vietnamese/SECURITY_GUARDRAILS.md`.

---

**Thank you for making this project better for everyone!**
Every contribution, no matter how small, makes a difference. Whether you fix a typo, improve a sentence, or create a brand-new skill - you are helping thousands of other developers!
docs/vietnamese/EXAMPLES.vi.md
@@ -0,0 +1,56 @@
# 🧪 Real-World Examples (The "Antigravity Cookbook")

Skills are powerful on their own, but they become unstoppable when combined.
Here are three common scenarios and how to solve them using this repository.

## 🥘 Recipe 1: The "Legacy Code Audit"

_Scenario: You just inherited a chaotic 5-year-old Node.js repository. You need to fix it safely._

**Skills used:**

1. `concise-planning` (To map the chaos)
2. `lint-and-validate` (To find the errors)
3. `security-audit` (To find the vulnerabilities)

**Workflow:**

1. **Plan**: "Agent, use `concise-planning` to create a checklist for refactoring `src/legacy-api.js`."
2. **Audit**: "Run `security-audit` on `package.json` to find vulnerable dependencies."
3. **Fix**: "Use the `lint-and-validate` rules to auto-fix formatting errors in `src/`."

---

## 🥘 Recipe 2: The "Modern Web App"

_Scenario: You need to build a high-performance landing page in 2 hours._

**Skills used:**

1. `frontend-design` (For the aesthetics)
2. `react-patterns` (For the structure)
3. `tailwind-mastery` (For implementation speed)

**Workflow:**

1. **Design**: "Use `frontend-design` to generate a color palette and typography for a 'Cyberpunk Coffee Shop'."
2. **Scaffold**: "Initialize a Vite project. Then apply `react-patterns` to create a 'Hero' component."
3. **Style**: "Use `tailwind-mastery` to make the buttons glassmorphic and responsive."

---

## 🥘 Recipe 3: The "Agent Architect"

_Scenario: You want to build a custom AI agent that can verify its own code._

**Skills used:**

1. `mcp-builder` (To build the tools)
2. `agent-evaluation` (To test reliability)
3. `prompt-engineering` (To refine the instructions)

**Workflow:**

1. **Build**: "Use `mcp-builder` to create a `verify-file` tool."
2. **Instruct**: "Apply `prompt-engineering` patterns to the System Prompt so the agent always double-checks file paths."
3. **Test**: "Run `agent-evaluation` to measure how often the agent fails to find files."
docs/vietnamese/FAQ.vi.md
@@ -0,0 +1,178 @@
# ❓ Frequently Asked Questions (FAQ)

**Have questions?** You're not alone! Here are answers to the most common questions about Antigravity Awesome Skills.

---

## 🎯 General Questions

### What exactly are "skills"?

Skills are specialized instruction files that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules your AI can load on demand.
**A simple analogy:** Just as you consult different specialists (a lawyer, a doctor, a mechanic), these skills let your AI become an expert in different domains when you need it.

### Do I need to install all 560+ skills?

**No!** When you clone this repository, all the skills are available, but your AI only loads them when you explicitly call them with `@skill-name`.
It's like owning a library - all the books are there, but you only read the ones you need.
**Tip:** Use the [Starter Packs](BUNDLES.vi.md) to install only what fits your role.

### Which AI tools work with these skills?

- ✅ **Claude Code** (Anthropic's CLI)
- ✅ **Gemini CLI** (Google)
- ✅ **Codex CLI** (OpenAI)
- ✅ **Cursor** (AI-powered IDE)
- ✅ **Antigravity IDE**
- ✅ **OpenCode**
- ⚠️ **GitHub Copilot** (Partial support via copy-paste)

### Are these skills free to use?

**Yes!** This repository is licensed under the MIT License.

- ✅ Free for personal use.
- ✅ Free for commercial use.
- ✅ You can modify them.

### Do skills work offline?

The skill files themselves are stored locally on your machine, but your AI assistant still needs an internet connection to work.

---

## 🔒 Security & Trust (V4 Update)

### What do the Risk Labels mean?

We categorize skills so you know what you're running:

- ⚪ **Safe (White/Blue)**: Read-only, planning, or otherwise harmless skills.
- 🔴 **Risk (Red)**: Skills that modify files (deletion), use network-scanning tools, or perform destructive actions. **Use with caution.**
- 🟣 **Official (Purple)**: Maintained by trusted vendors (Anthropic, DeepMind, etc.).

### Can these skills hack my computer?

**No.** Skills are text files. However, they *instruct* the AI to run commands. If a skill says "delete all files", a compliant AI might try to do it.
_Always check the risk label and review the source before use._

---

## 📦 Installation & Setup

### Where should I install these skills?

The most common path, which works across the ecosystem of AI tools, is `.agent/skills/`:

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

**Tool-specific paths:**

- Claude Code: `.claude/skills/`
- Gemini CLI: `.gemini/skills/`
- Cursor: `.cursor/skills/` or the project root.

### Does this repo work on Windows?

**Yes**, but some "Official" skills use **symlinks**, which Windows does not handle well by default.
Run git clone with:

```bash
git clone -c core.symlinks=true https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

Or enable "Developer Mode" in Windows Settings.

### How do I update the skills?

Navigate to your skills folder and pull the latest changes:

```bash
cd .agent/skills
git pull origin main
```

---

## 🛠️ Using Skills

### How do I invoke a skill?

Use the `@` symbol followed by the skill name:

```bash
@brainstorming help me design a todo app
```

### Can I use multiple skills at once?

**Yes!** You can chain skills:

```bash
@brainstorming help me design this part, then use @writing-plans to create a task list.
```

### How do I know which skill to use?

1. **Browse the catalog**: See the [Skill Catalog](../CATALOG.vi.md).
2. **Search**: `ls skills/ | grep "keyword"`
3. **Ask your AI**: "Do you have any skills for testing?"

---

## 🏗️ Troubleshooting

### My AI assistant doesn't recognize the skill

**Possible causes:**

1. **Wrong install path**: Check your tool's documentation. Try `.agent/skills/`.
2. **Restart needed**: Restart the AI/IDE after installing.
3. **Typo**: Did you type `@brain-storming` instead of `@brainstorming`?

### A skill gives wrong or outdated advice

[Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues)!
Please include:

- Which skill?
- What happened?
- What should have happened?

---

## 🤝 Contributing

### I'm new to open source. Can I contribute?

**Absolutely!** We welcome beginners.

- Fix typos.
- Add examples.
- Improve documentation.
See [CONTRIBUTING.md](../CONTRIBUTING.vi.md) for detailed instructions.

### My Pull Request (PR) failed the "Quality Bar" check. Why?

V3 enforces automated quality control. Your skill may be missing:

1. A valid `description`.
2. Usage examples.
Run `python3 scripts/validate_skills.py` locally to check before pushing.

### Can I update the "Official" skills?

**No.** Official skills (in the `skills/official/` folder) are synced from the vendors. Open an issue to report a bug instead.

---

## 💡 Pro Tips

- Start with `@brainstorming` before building anything new.
- Use `@systematic-debugging` for stubborn bugs.
- Try `@test-driven-development` for better code quality.
- Explore `@skill-creator` to build your own skills.

**Still have questions?** [Open a Discussion](https://github.com/sickn33/antigravity-awesome-skills/discussions) and we'll help you out! 🙌
docs/vietnamese/GETTING_STARTED.vi.md
@@ -0,0 +1,108 @@
# Hướng dẫn Bắt đầu với Antigravity Awesome Skills (V4)
|
||||
|
||||
**Bạn mới đến đây? Hướng dẫn này sẽ giúp bạn tăng cường sức mạnh cho trợ lý trợ lý AI của mình chỉ trong 5 phút.**
|
||||
|
||||
---
|
||||
|
||||
## 🤔 "Skills" (Kỹ năng) là gì?
|
||||
|
||||
Các trợ lý AI (như **Claude Code**, **Gemini**, **Cursor**) rất thông minh, nhưng chúng thiếu kiến thức cụ thể về các công cụ và quy trình làm việc của bạn.
|
||||
**Skills** là các hướng dẫn sử dụng chuyên biệt (dưới dạng file markdown) dạy cho AI của bạn cách thực hiện các tác vụ cụ thể một cách hoàn hảo trong mọi lần thực hiện.
|
||||
|
||||
**Một phép so sánh:** AI của bạn là một thực tập sinh xuất sắc. **Skills** là các SOP (Quy trình vận hành tiêu chuẩn) biến họ thành một Kỹ sư cao cấp.
|
||||
|
||||
---
|
||||
|
||||
## ⚡️ Khởi động nhanh: Các "Gói khởi đầu" (Starter Packs)
|
||||
|
||||
Đừng lo lắng về con số hơn 560 kỹ năng. Bạn không cần dùng tất cả chúng cùng một lúc.
|
||||
Chúng tôi đã tuyển chọn các **Gói khởi đầu** để bạn có thể bắt đầu sử dụng ngay lập tức.
|
||||
|
||||
### 1. Cài đặt Repository
|
||||
|
||||
Sao chép các kỹ năng vào thư mục agent của bạn:
|
||||
|
||||
```bash
|
||||
# Cài đặt phổ thông (hoạt động với hầu hết các agent)
|
||||
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
|
||||
```
|
||||
|
||||
### 2. Chọn vai trò của bạn
|
||||
|
||||
Tìm gói kỹ năng phù hợp với vị trí của bạn (xem [BUNDLES.md](BUNDLES.vi.md)):
|
||||
|
||||
| Vai trò | Tên Gói kỹ năng | Bên trong có những gì? |
|
||||
| :-------------------- | :-------------- | :------------------------------------------------ |
|
||||
| **Web Developer** | `Web Wizard` | React Patterns, Tailwind mastery, Frontend Design |
|
||||
| **Security Engineer** | `Hacker Pack` | OWASP, Metasploit, Pentest Methodology |
|
||||
| **Manager / PM** | `Product Pack` | Brainstorming, Planning, SEO, Strategy |
|
||||
| **Cơ bản cho tất cả** | `Essentials` | Clean Code, Planning, Validation (Những thứ cơ bản nhất) |
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Cách sử dụng một Skill
|
||||
|
||||
Sau khi cài đặt, bạn chỉ cần trò chuyện với AI một cách tự nhiên.
|
||||
|
||||
### Ví dụ 1: Lập kế hoạch cho một Tính năng (**Essentials**)
|
||||
|
||||
> "Sử dụng **@brainstorming** để giúp tôi thiết kế một luồng đăng nhập mới."
|
||||
|
||||
**Điều gì sẽ xảy ra:** AI sẽ tải kỹ năng brainstorming, đặt cho bạn các câu hỏi có cấu trúc và tạo ra một bản đặc tả chuyên nghiệp.
|
||||
|
||||
### Ví dụ 2: Kiểm tra Code của bạn (**Web Wizard**)
|
||||
|
||||
> "Chạy **@lint-and-validate** trên file này và sửa các lỗi."
|
||||
|
||||
**Điều gì sẽ xảy ra:** AI sẽ tuân theo các quy tắc linting nghiêm ngặt được định nghĩa trong skill để làm sạch code của bạn.
|
||||
|
||||
### Ví dụ 3: Kiểm tra Bảo mật (**Hacker Pack**)
|
||||
|
||||
> "Sử dụng **@api-security-best-practices** để xem xét các endpoint API của tôi."
|
||||
|
||||
**Điều gì sẽ xảy ra:** AI sẽ kiểm tra code của bạn dựa trên các tiêu chuẩn OWASP.
|
||||
|
||||
---
|
||||
|
||||
## 🔌 Các công cụ được hỗ trợ
|
||||
|
||||
| Công cụ | Trạng thái | Đường dẫn |
|
||||
| :--------------- | :-------------- | :---------------- |
|
||||
| **Claude Code** | ✅ Hỗ trợ đầy đủ | `.claude/skills/` |
|
||||
| **Gemini CLI** | ✅ Hỗ trợ đầy đủ | `.gemini/skills/` |
|
||||
| **Antigravity** | ✅ Hỗ trợ gốc | `.agent/skills/` |
|
||||
| **Cursor** | ✅ Hỗ trợ gốc | `.cursor/skills/` |
|
||||
| **Copilot** | ⚠️ Chỉ văn bản | Copy-paste thủ công |
|
||||
|
||||
---
|
||||
|
||||
## 🛡️ Sự tin cậy & An toàn (Mới trong bản V4)
|
||||
|
||||
Chúng tôi phân loại các kỹ năng để bạn biết mình đang chạy những gì:
|
||||
|
||||
- 🟣 **Official (Chính thức)**: Được duy trì bởi Anthropic/Google/Nhà cung cấp (Độ tin cậy cao).
|
||||
- 🔵 **Safe (An toàn)**: Các kỹ năng cộng đồng không gây hại (Chỉ đọc/Lập kế hoạch).
|
||||
- 🔴 **Risk (Rủi ro)**: Các kỹ năng sửa đổi hệ thống hoặc thực hiện kiểm thử bảo mật (Sử dụng khi được cấp phép).
|
||||
|
||||
_Kiểm tra [Danh mục Skill (Skill Catalog)](../CATALOG.vi.md) để xem danh sách đầy đủ._
|
||||
|
||||
---
|
||||
|
||||
## ❓ FAQ

**Q: Do I need to install all 560 skills?**
A: You clone the whole repo, but your AI only _reads_ the skills you invoke (or the ones that are relevant). It is very lightweight!

**Q: Can I create my own skills?**
A: Yes! Use the **@skill-creator** skill to build your own.

**Q: Is it free?**
A: Yes, MIT License. Open source forever.

---
## ⏭️ Next Steps

1. [Browse the Skill Bundles](BUNDLES.vi.md)
2. [See Real-World Examples](EXAMPLES.vi.md)
3. [Contribute a New Skill](../CONTRIBUTING.vi.md)
---

**File:** `docs/vietnamese/QUALITY_BAR.vi.md` (new file, 64 lines)
# 🏆 Quality Bar & Validation

To turn **Antigravity Awesome Skills** from a pile of scripts into a trustworthy platform, every skill must meet a specific bar for quality and safety.

## The "Validated" Badge ✅

A skill earns the "Validated" badge only if it passes the following **5 automated checks**:
### 1. Metadata Integrity

The frontmatter in `SKILL.md` must be valid YAML and contain:

- `name`: kebab-case, matching the folder name.
- `description`: Under 200 characters, stating the value delivered.
- `risk`: One of `[none, safe, critical, offensive]`.
- `source`: A URL pointing to the origin (or "self" for original content).
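As a sketch, the field checks above could be automated along these lines. This is illustrative Python only, not the repository's actual `validate_skills.py` logic:

```python
import re

ALLOWED_RISKS = {"none", "safe", "critical", "offensive"}

def check_metadata(meta: dict, folder_name: str) -> list:
    """Return a list of violations for the four required frontmatter fields."""
    errors = []
    name = meta.get("name", "")
    # kebab-case: lowercase alphanumeric groups separated by single hyphens
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", name):
        errors.append("name must be kebab-case")
    if name != folder_name:
        errors.append("name must match the folder name")
    description = meta.get("description", "")
    if not (0 < len(description) < 200):
        errors.append("description must be non-empty and under 200 characters")
    if meta.get("risk") not in ALLOWED_RISKS:
        errors.append("risk must be one of none/safe/critical/offensive")
    if not meta.get("source"):
        errors.append("source must be a URL or 'self'")
    return errors

good = {"name": "sql-injection-testing",
        "description": "Guided SQL injection testing for authorized assessments.",
        "risk": "offensive", "source": "self"}
print(check_metadata(good, "sql-injection-testing"))  # → []
```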
### 2. Clear Activation Conditions ("When to Use")

A skill MUST include a section stating when it should be triggered.

- **Good**: "Use when the user asks to debug a React component."
- **Bad**: "This skill helps you work with code."
### 3. Safety & Risk Classification

Every skill must declare its risk level:

- 🟢 **none**: Pure text/reasoning only (e.g., brainstorming).
- 🔵 **safe**: Reads files, runs harmless commands (e.g., a linter).
- 🟠 **critical**: Mutates state, deletes files, pushes to production (e.g., Git push).
- 🔴 **offensive**: Pentesting/Red Team tooling. **MUST** carry an "Authorized use only" warning.
### 4. A Copy-Pasteable Example

At least one code block or interaction example that a user (or agent) can use immediately.

### 5. Explicit Limitations

A list of edge cases or things the skill _cannot_ do.

- _Example_: "Does not work on Windows without WSL."

---
## Support Tiers

We also classify skills by who maintains them:

| Tier | Badge | Meaning |
| :------------ | :------- | :--------------------------------------------------- |
| **Official**  | 🟣 | Maintained by the core team. High trust. |
| **Community** | ⚪ | Contributed by the community. Best-effort support. |
| **Verified**  | ✨ | Community skill that passed an in-depth manual review. |

---
## Validating Your Skill

Run the validation script before submitting a Pull Request (PR):

```bash
python3 scripts/validate_skills.py --strict
```
---

**File:** `docs/vietnamese/README.vi.md` (new file, 190 lines)
# 🌌 Antigravity Awesome Skills: 560+ Skills for Claude Code, Gemini CLI, Cursor, Copilot, and More

> **The ultimate collection of 560+ Universal Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)
[](https://github.com/google-gemini/gemini-cli)
[](https://github.com/openai/codex)
[](https://cursor.sh)
[](https://github.com/features/copilot)
[](https://github.com/opencode-ai/opencode)
[](https://github.com/sickn33/antigravity-awesome-skills)

**Antigravity Awesome Skills** is a carefully curated and vetted library of **560 high-performance skills** designed to work seamlessly across all major AI coding assistants:
- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)
- 🟢 **Codex CLI** (OpenAI)
- 🔴 **Antigravity IDE** (Google DeepMind)
- 🩵 **GitHub Copilot** (VSCode extension)
- 🟠 **Cursor** (AI-native IDE)
- ⚪ **OpenCode** (open-source CLI)

**Welcome to the V4.0.0 Enterprise release.** This is not just a list of scripts; it is a complete operating system for your AI assistant.
### 1. 🐣 Context: What Is This?

**Antigravity Awesome Skills** (version 4.0.0) is a massive upgrade for your AI's capabilities.

AI assistants (such as Claude Code, Cursor, or Gemini) are smart, but they lack **specialized tooling**. They don't know your company's "Deployment Process" or the exact syntax for "AWS CloudFormation".
**Skills** are small markdown files that teach them how to perform these specific tasks correctly on every run.

...

This repository provides the essential skills to turn your AI assistant into an **all-round team of digital experts**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Supabase**, and **Vercel Labs**.

...

Whether you use **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to be immediately usable and to supercharge your AI assistant.

This repository gathers the best capabilities from across the open-source community, turning your AI assistant into an all-round team of digital experts covering Engineering, Design, Security, Marketing, and Automated Operations.
## Features & Categories

The repository is organized into specialized domains that make your AI an expert across the entire software development lifecycle:

| Category | Focus | Example skills |
| :--- | :--- | :--- |
| Architecture (52) | System design, ADRs, C4, scalable patterns | `architecture`, `c4-context`, `senior-architect` |
| Business (35) | Growth, pricing, CRO, SEO, go-to-market | `copywriting`, `pricing-strategy`, `seo-audit` |
| Data & AI (81) | LLM apps, RAG, agents, observability, analytics | `rag-engineer`, `prompt-engineer`, `langgraph` |
| Development (72) | Language mastery, framework patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns` |
| General (95) | Planning, docs, product ops, writing, guidance | `brainstorming`, `doc-coauthoring`, `writing-plans` |
| Infrastructure (72) | DevOps, cloud, serverless, deployment, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` |
| Security (107) | AppSec, pentesting, vulnerability analysis, compliance | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` |
| Testing (21) | TDD, test design, bug fixing, QA workflows | `test-driven-development`, `testing-patterns`, `test-fixing` |
| Workflows (17) | Automation, orchestration, jobs, agents | `workflow-automation`, `inngest`, `trigger-dev` |
## Curated Collections

[See the Starter Bundles (docs/vietnamese/BUNDLES.vi.md)](docs/vietnamese/BUNDLES.vi.md) to find the perfect toolkit for your role.

## Browse 560+ Skills

We moved the full skill list to a separate catalog to keep this README lean.

👉 **[See the Full Skill Catalog (CATALOG.vi.md)](CATALOG.vi.md)**

## Installation

To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**, clone this repository into your assistant's skills directory:
```bash
# Universal install (works with most tools)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills

# Claude Code
git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills

# Gemini CLI
git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skills

# Cursor
git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills
```

---
## How to Contribute

We welcome contributions from the community! To add a new skill:

1. **Fork** the repository.
2. **Create a new folder** inside `skills/` for your skill.
3. **Add a `SKILL.md` file** with the required frontmatter (name and description).
4. **Run the validation check**: `python3 scripts/validate_skills.py`.
5. **Open a Pull Request**.

Please make sure your skill follows Antigravity/Claude Code best practices.

---
## Contributors & Credits

We stand on the shoulders of giants.

👉 **[See the Full Attribution Ledger (docs/vietnamese/SOURCES.vi.md)](docs/vietnamese/SOURCES.vi.md)**

Contributions and primary sources include:

- **HackTricks**
- **OWASP**
- **Anthropic / OpenAI / Google**
- **The open-source community**

This collection would not exist without the amazing work of the Claude Code community and these official sources:
### Official Sources

- **[anthropics/skills](https://github.com/anthropics/skills)**: Anthropic's official skills directory - document handling (DOCX, PDF, PPTX, XLSX), brand guidelines, internal communications.
- **[anthropics/claude-cookbooks](https://github.com/anthropics/claude-cookbooks)**: Official notebooks and recipes for building with Claude.
- **[remotion-dev/skills](https://github.com/remotion-dev/skills)**: Remotion's official skills - video creation in React with 28 modular rules.
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs' official skills - React best practices, web design guidelines.
- **[openai/skills](https://github.com/openai/skills)**: OpenAI's Codex skills catalog - agent skills, skill builder, concise planning.
- **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Supabase's official skills - Postgres best practices.
### Community Contributors

- **[rmyndharis/antigravity-skills](https://github.com/rmyndharis/antigravity-skills)**: For a massive contribution of 300+ enterprise skills and the catalog-generation logic.
- **[obra/superpowers](https://github.com/obra/superpowers)**: The original "Superpowers" by Jesse Vincent.
- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: Core Antigravity extensions.
- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Backend/frontend infrastructure and guides.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns and design systems.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Loki Mode and Playwright integration.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive security toolkit & guides (source of roughly 60 new skills).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior engineer and PM toolkits.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A huge list of validated skills for Claude Code.
- **[zircote/.claude](https://github.com/zircote/.claude)**: Shopify development skill reference.
- **[vibeforge1111/vibeship-spawner-skills](https://github.com/vibeforge1111/vibeship-spawner-skills)**: AI assistants, integrations, creator tools (57 skills, Apache 2.0).
- **[coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills)**: Marketing skills for CRO, copywriting, SEO, paid ads, and growth (23 skills, MIT).
- **[vudovn/antigravity-kit](https://github.com/vudovn/antigravity-kit)**: AI agent templates with skills, agents, and workflows (33 skills, MIT).
- **[affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code)**: Full Claude Code config collection from an Anthropic hackathon winner - skills portion only (8 skills, MIT).
- **[webzler/agentMemory](https://github.com/webzler/agentMemory)**: Source of the agent-memory-mcp skill.
- **[sstklen/claude-api-cost-optimization](https://github.com/sstklen/claude-api-cost-optimization)**: Save 50-90% on Claude API costs with smart optimization strategies (MIT).
### Inspiration

- **[f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts)**: Inspiration for the Prompt Library.
- **[leonardomso/33-js-concepts](https://github.com/leonardomso/33-js-concepts)**: Inspiration for JavaScript Mastery.

---

## License

MIT License. See the [LICENSE](LICENSE) file for details.

## Community

- [Community Guidelines](docs/vietnamese/COMMUNITY_GUIDELINES.vi.md)
- [Security Policy](docs/vietnamese/SECURITY_GUARDRAILS.vi.md)

---
## Repo Contributors

An official thank-you to the following contributors for helping make this repository great!

- [mvanhorn](https://github.com/mvanhorn)
- [rookie-ricardo](https://github.com/rookie-ricardo)
- [sck_0](https://github.com/sck_0)
- [Munir Abbasi](https://github.com/munirabbasi)
- [Mohammad Faiz](https://github.com/mohdfaiz2k9)
- [Ianj332](https://github.com/Ianj332)
- [sickn33](https://github.com/sickn33)
- [GuppyTheCat](https://github.com/GuppyTheCat)
- [Tiger-Foxx](https://github.com/Tiger-Foxx)
- [arathiesh](https://github.com/arathiesh)
- [1bcMax](https://github.com/1bcMax)
- [Ahmed Rehan](https://github.com/ar27111994)
- [BenedictKing](https://github.com/BenedictKing)
- [Nguyen Huu Loc](https://github.com/LocNguyenSGU)
- [Owen Wu](https://github.com/yubing744)
- [SuperJMN](https://github.com/SuperJMN)
- [Viktor Ferenczi](https://github.com/viktor-ferenczi)
- [krisnasantosa15](https://github.com/krisnasantosa15)
- [raeef1001](https://github.com/raeef1001)
- [taksrules](https://github.com/taksrules)
- [zebbern](https://github.com/zebbern)
- [Đỗ Khắc Gia Khoa](https://github.com/dokhacgiakhoa)
- [vuth-dogo](https://github.com/vuth-dogo)
## Star History

[](https://www.star-history.com/#sickn33/antigravity-awesome-skills&type=date&legend=top-left)
---

**File:** `docs/vietnamese/SECURITY.vi.md` (new file, 19 lines)
# Security Policy

## Supported Versions

We track the `main` branch.

## Reporting a Vulnerability

Do **NOT** open a public issue for security vulnerabilities.

If you find a security flaw (for example, a skill that bypasses its "Authorized Use" check or executes malicious code without warning):

1. Email: `security@antigravity.dev` (placeholder)
2. Or open a **Private Advisory** on this repository.

## Offensive Skills Policy

Please read our [Security Guardrails](docs/vietnamese/SECURITY_GUARDRAILS.vi.md).
All offensive skills are strictly for **educational purposes and authorized professional engagements**.
---

**File:** `docs/vietnamese/SECURITY_GUARDRAILS.vi.md` (new file, 51 lines)
# 🛡️ Security Policy & Safety Guardrails

Antigravity Awesome Skills is a powerful toolkit, and with great power comes great responsibility. This document defines the **Rules of Engagement** for all security and offensive capabilities in this repository.

## 🔴 Offensive Skills Policy (The Red Line)

**What is an offensive skill?**
Any skill designed to penetrate, exploit, disrupt, or simulate attacks against systems.
_Examples: pentesting, SQL injection, phishing simulation, red teaming._
### 1. The "Approved Use Only" Disclaimer

Every offensive skill **MUST** begin with exactly the following disclaimer in its `SKILL.md`:

> **⚠️ APPROVED USE ONLY**
> This skill is for educational purposes or authorized security assessments only.
> You must have explicit written permission from the system owner before using this tool.
> Misuse of this tool is illegal and strictly prohibited.
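A validator hook for this rule might look like the following sketch. The marker string and function name here are hypothetical illustrations, not the repository's actual check:

```python
# Assumed marker; in practice, match against your canonical disclaimer text.
DISCLAIMER_MARKER = "APPROVED USE ONLY"

def missing_disclaimer(frontmatter: dict, body: str) -> bool:
    """True if an offensive skill's SKILL.md body lacks the mandatory warning."""
    return frontmatter.get("risk") == "offensive" and DISCLAIMER_MARKER not in body

print(missing_disclaimer({"risk": "offensive"}, "# Exploit Tool\nNo warning."))  # → True
```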
### 2. User Confirmation Required

Offensive skills must **NEVER** run fully autonomously.

- **Requirement**: The skill's description or instructions must direct the agent (AI assistant) to *ask for user confirmation* before executing any exploit or attack command.
- **Agent guidance**: "Ask the user to verify the target URL/IP before running."
### 3. Safe by Design

- **No weaponized payloads**: Skills must not include working malware, ransomware, or non-educational exploits.
- **Sandbox recommended**: Instructions should recommend running in an isolated environment (Docker/VM).

---
## 🔵 Defensive Skills Policy

**What is a defensive skill?**
Tools for hardening, auditing, monitoring, or protecting systems.
_Examples: linting, log analysis, configuration audits._

- **Data privacy**: Defensive skills must not upload data to third-party servers without the user's explicit consent.
- **Non-destructive**: Audits should default to read-only mode.

---
## ⚖️ Legal Notice

By using this repository, you agree that:

1. You are responsible for your own actions.
2. The authors and contributors are not liable for any damage caused by these tools.
3. You will comply with all local, state, and federal laws related to cybersecurity.
---

**File:** `docs/vietnamese/SKILLS_README.vi.md` (new file, 106 lines)
# Skills Directory

**Welcome to the skills directory!** This is home to all 179+ specialized AI skills.

## 🤔 What Is a Skill?

Skills are specialized instruction sets that teach an AI assistant how to handle specific tasks. Think of them as expert knowledge modules your AI can load on demand.

**A simple picture:** Just as you might consult different specialists (a designer, a security expert, a marketer), skills let your AI become an expert in different domains exactly when you need it.

---
## 📂 Directory Structure

Each skill lives in its own folder with the following layout:

```
skills/
├── skill-name/          # Individual skill folder
│   ├── SKILL.md         # Main skill definition (required)
│   ├── scripts/         # Supporting scripts (optional)
│   ├── examples/        # Usage examples (optional)
│   └── resources/       # Templates & resources (optional)
```

**Important:** Only the `SKILL.md` file is required. Everything else is optional!
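Since only `SKILL.md` is required, scaffolding a new skill takes two steps. A minimal sketch (the skill name below is just a placeholder):

```python
from pathlib import Path

# Create the skill folder and the one required file
skill = Path("skills/my-first-skill")
skill.mkdir(parents=True, exist_ok=True)
(skill / "SKILL.md").write_text(
    '---\n'
    'name: my-first-skill\n'
    'description: "A one-line summary of what this skill does"\n'
    '---\n\n'
    '# My First Skill\n\n'
    'Instructions for the AI go here.\n'
)
```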
---

## How to Use a Skill

### Step 1: Make sure the skills are set up

The skill files should live in your `.agent/skills/` folder (or `.claude/skills/`, `.gemini/skills/`, etc.).

### Step 2: Invoke a skill in your AI conversation

Use the `@` symbol followed by the skill name:

```
@brainstorming help me design a todo app
```

or

```
@stripe-integration add payment processing to my app
```

### Step 3: The AI becomes an expert

The AI loads that skill's knowledge and helps you with domain-specific expertise!

---
## Finding Skills

### Option 1: Browse this directory

```bash
ls skills/
```

### Option 2: Search by keyword

```bash
ls skills/ | grep "keyword"
```

### Option 3: Check the main README

See the [main README](../README.vi.md) for the full list of all 179+ skills organized by category.

---
## 💡 Popular Skills to Try

**For beginners:**

- `@brainstorming` - Design before you code
- `@systematic-debugging` - Fix bugs methodically
- `@git-pushing` - Commit with good messages

**For developers:**

- `@test-driven-development` - Write tests first
- `@react-best-practices` - Modern React patterns
- `@senior-fullstack` - Full-stack development

**For security:**

- `@ethical-hacking-methodology` - Security fundamentals
- `@burp-suite-testing` - Web app security testing

---
## Create Your Own Skill

Want to build a new skill? Check out:

1. [CONTRIBUTING.vi.md](../docs/vietnamese/CONTRIBUTING.vi.md) - How to contribute
2. [docs/vietnamese/SKILL_ANATOMY.vi.md](../docs/vietnamese/SKILL_ANATOMY.vi.md) - Skill structure guide
3. `@skill-creator` - Use this skill to generate new skills!

---

## Reference Docs

- **[Getting Started](../docs/vietnamese/GETTING_STARTED.vi.md)** - Quick-start guide
- **[Examples](../docs/vietnamese/EXAMPLES.vi.md)** - Real-world usage examples
- **[FAQ](../docs/vietnamese/FAQ.vi.md)** - Frequently asked questions
- **[Visual Guide](../docs/vietnamese/VISUAL_GUIDE.vi.md)** - Diagrams and flowcharts

---

**Need help?** Check the [FAQ](../docs/vietnamese/FAQ.vi.md) or open an issue on GitHub!
---

**File:** `docs/vietnamese/SKILL_ANATOMY.vi.md` (new file, 605 lines)
# Anatomy of a Skill - Understanding the System

**Want to understand how skills work under the hood?** This guide breaks down every part of a skill file in detail.

---
## 📁 Basic Directory Structure

```
skills/
└── my-skill-name/
    ├── SKILL.md         ← Required: main skill definition
    ├── examples/        ← Optional: example files
    │   ├── example1.js
    │   └── example2.py
    ├── scripts/         ← Optional: helper scripts
    │   └── helper.sh
    ├── templates/       ← Optional: code templates
    │   └── template.tsx
    ├── references/      ← Optional: reference docs
    │   └── api-docs.md
    └── README.md        ← Optional: extra documentation
```

**Core rule:** Only the `SKILL.md` file is required. Everything else is optional!

---
## SKILL.md Structure

Every `SKILL.md` file has two main parts:

### 1. Frontmatter (metadata)

### 2. Body (instructions)

Let's break each one down:

---
## Part 1: Frontmatter

The frontmatter sits at the very top of the file, wrapped in a pair of `---` markers:

```markdown
---
name: my-skill-name
description: "A short description of what this skill does"
---
```
### Required Fields

#### `name`

- **What it is:** The skill's identifier.
- **Format:** lowercase-with-hyphens (kebab-case).
- **Requirement:** Must exactly match the folder name.
- **Example:** `stripe-integration`

#### `description`

- **What it is:** A one-sentence summary of what the skill does.
- **Format:** A quoted string.
- **Length:** Keep it under 150 characters.
- **Example:** `"Stripe payment integration patterns covering checkout, subscriptions, and webhooks"`
### Optional Fields

Some skills include extra metadata:

```markdown
---
name: my-skill-name
description: "Short description"
risk: "safe" # none | safe | critical | offensive (see QUALITY_BAR.md)
source: "community"
tags: ["react", "typescript"]
---
```

---
## Part 2: Body

After the frontmatter comes the skill's actual content. Here is the recommended structure:

### Recommended Sections

#### 1. Title (H1)

```markdown
# Skill Title
```

- Use a clear, descriptive title.
- It usually matches or expands on the skill name.
#### 2. Overview

```markdown
## Overview

A brief explanation of what the skill does and why it exists.
Around 2-4 sentences is ideal.
```

#### 3. When to Use

```markdown
## When to Use This Skill

- Use when you need [scenario 1]
- Use when working with [scenario 2]
- Use when the user asks about [scenario 3]
```

**Why this matters:** It tells the AI when to activate this skill.
#### 4. Core Instructions

```markdown
## How It Works

### Step 1: [Action]

Detailed instructions...

### Step 2: [Action]

More instructions...
```

**This is the heart of the skill** - clear, actionable steps.
#### 5. Examples

```markdown
## Examples

### Example 1: [Use case]

\`\`\`javascript
// Example code
\`\`\`

### Example 2: [Another use case]

\`\`\`javascript
// More code
\`\`\`
```

**Why examples matter:** They show the AI exactly what good output looks like.
#### 6. Best Practices

```markdown
## Best Practices

- ✅ Do this
- ✅ Also do this
- ❌ Don't do this
- ❌ Avoid this
```

#### 7. Common Pitfalls

```markdown
## Common Pitfalls

- **Problem:** Describe the issue
  **Solution:** How to fix it
```
#### 8. Related Skills

```markdown
## Related Skills

- `@other-skill` - When to use that skill instead
- `@complementary-skill` - How these skills work together
```

---
## Writing Effective Instructions

### Use Clear, Direct Language

**❌ Bad:**

```markdown
You might want to consider checking whether the user is authenticated.
```

**✅ Good:**

```markdown
Check that the user is authenticated before proceeding.
```

### Use Action Verbs

**❌ Bad:**

```markdown
The file should be created...
```

**✅ Good:**

```markdown
Create the file...
```

### Be Specific

**❌ Bad:**

```markdown
Set up the database correctly.
```

**✅ Good:**

```markdown
1. Create a PostgreSQL database
2. Run migrations: `npm run migrate`
3. Seed initial data: `npm run seed`
```

---
## Optional Components

### Scripts Folder

If your skill needs helper scripts:

```
scripts/
├── setup.sh       ← Setup automation
├── validate.py    ← Validation tooling
└── generate.js    ← Code generators
```

**Reference them in SKILL.md:**

```markdown
Run the setup script:
\`\`\`bash
bash scripts/setup.sh
\`\`\`
```
### Examples Folder

Real-world examples that illustrate the skill:

```
examples/
├── basic-usage.js
├── advanced-pattern.ts
└── full-implementation/
    ├── index.js
    └── config.json
```

### Templates Folder

Reusable code templates:

```
templates/
├── component.tsx
├── test.spec.ts
└── config.json
```

**Reference them in SKILL.md:**

```markdown
Use this template as a starting point:
\`\`\`typescript
{{#include templates/component.tsx}}
\`\`\`
```
### References Folder

External documentation or API references:

```
references/
├── api-docs.md
├── best-practices.md
└── troubleshooting.md
```

---
## Skill Sizing Guidelines

### Minimum Viable Skill

- **Frontmatter:** name + description
- **Body:** 100-200 words
- **Sections:** Overview + Instructions

### Standard Skill

- **Frontmatter:** name + description
- **Body:** 300-800 words
- **Sections:** Overview + When to Use + Instructions + Examples

### Comprehensive Skill

- **Frontmatter:** name + description + optional fields
- **Body:** 800-2000 words
- **Sections:** All recommended sections
- **Extras:** Scripts, examples, templates

**Rule of thumb:** Start small and expand based on feedback.

---
## Formatting Best Practices

### Use Markdown Effectively

#### Code Blocks

Always specify the language:

```markdown
\`\`\`javascript
const example = "code";
\`\`\`
```

#### Lists

Use consistent formatting:

```markdown
- Item 1
- Item 2
  - Sub-item 2.1
  - Sub-item 2.2
```

#### Emphasis

- **Bold** for important terms: `**important**`
- _Italics_ for emphasis: `*emphasis*`
- `Code` for commands or code: `` `code` ``

#### Links

```markdown
[Link text](https://example.com)
```

---
## ✅ Danh mục Kiểm tra Chất lượng (Quality Checklist)
|
||||
|
||||
Trước khi hoàn tất skill của bạn:
|
||||
|
||||
### Chất lượng Nội dung
|
||||
|
||||
- [ ] Hướng dẫn rõ ràng và có thể thực hiện được.
|
||||
- [ ] Ví dụ thực tế và hữu ích.
|
||||
- [ ] Không có lỗi chính tả hoặc ngữ pháp.
|
||||
- [ ] Độ chính xác kỹ thuật đã được xác minh.
|
||||
|
||||
### Cấu trúc
|
||||
|
||||
- [ ] Frontmatter là YAML hợp lệ.
|
||||
- [ ] Tên (Name) khớp với tên thư mục.
|
||||
- [ ] Các phần được sắp xếp logic.
|
||||
- [ ] Các tiêu đề tuân thủ cấp bậc (H1 → H2 → H3).
|
||||
|
||||
### Tính đầy đủ
|
||||
|
||||
- [ ] Phần Tổng quan giải thích "tại sao".
|
||||
- [ ] Hướng dẫn giải thích "làm thế nào".
|
||||
- [ ] Ví dụ cho thấy "cái gì".
|
||||
- [ ] Các trường hợp biên (edge cases) được đề cập.
|
||||
|
||||
### Khả năng sử dụng
|
||||
|
||||
- [ ] Một người mới bắt đầu có thể làm theo.
|
||||
- [ ] Một chuyên gia thấy nó hữu ích.
|
||||
- [ ] AI có thể phân tích chính xác.
|
||||
- [ ] Nó giải quyết một vấn đề thực tế.
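Hai mục trong phần Cấu trúc (frontmatter hợp lệ, name khớp tên thư mục) có thể kiểm tra tự động. Bản phác thảo dưới đây dùng regex đơn giản thay cho một trình phân tích YAML đầy đủ, chỉ nhằm minh họa ý tưởng (hàm `checkSkill` là giả định, không có trong repo):

```javascript
// Kiểm tra nhanh hai mục của checklist: có khối frontmatter mở/đóng bằng '---'
// và trường name khớp tên thư mục. Giả định frontmatter dạng "key: value" đơn giản.
function checkSkill(content, dirName) {
  const errors = [];
  const m = content.match(/^---\n([\s\S]*?)\n---/);
  if (!m) {
    errors.push('Thiếu khối frontmatter (---).');
    return errors;
  }
  const nameLine = m[1].split('\n').find(l => l.startsWith('name:'));
  const name = nameLine ? nameLine.slice(5).trim() : '';
  if (!name) {
    errors.push('Thiếu trường name.');
  } else if (name !== dirName) {
    errors.push(`name "${name}" không khớp thư mục "${dirName}".`);
  }
  return errors;
}
```

Với các trường hợp phức tạp hơn, hãy dùng công cụ chính thức `scripts/validate_skills.py` của repo.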

---

## 🔍 Phân tích Ví dụ Thực tế

Hãy phân tích một skill thực tế: `brainstorming`

```markdown
---
name: brainstorming
description: "Bạn PHẢI sử dụng skill này trước bất kỳ công việc sáng tạo nào..."
---
```

**Phân tích:**

- ✅ Tên rõ ràng.
- ✅ Mô tả mạnh mẽ với tính cấp bách ("PHẢI sử dụng").
- ✅ Giải thích khi nào nên dùng.

```markdown
# Brainstorming Ý tưởng thành Thiết kế

## Tổng quan

Giúp chuyển đổi ý tưởng thành các thiết kế hoàn chỉnh...
```

**Phân tích:**

- ✅ Tiêu đề rõ ràng.
- ✅ Tổng quan súc tích.
- ✅ Giải thích giá trị mang lại.

```markdown
## Quy trình

**Hiểu ý tưởng:**

- Kiểm tra trạng thái dự án hiện tại trước.
- Đặt câu hỏi từng cái một.
```

**Phân tích:**

- ✅ Được chia thành các giai đoạn rõ ràng.
- ✅ Các bước cụ thể, có thể hành động.
- ✅ Dễ dàng thực hiện theo.

---

## Các Mẫu Nâng cao

### Logic có Điều kiện

```markdown
## Hướng dẫn

Nếu người dùng đang làm việc với React:

- Sử dụng functional components.
- Ưu tiên hooks hơn class components.

Nếu người dùng đang làm việc với Vue:

- Sử dụng Composition API.
- Tuân theo các mẫu của Vue 3.
```

### Tiết lộ Lũy tiến (Progressive Disclosure)

```markdown
## Cách dùng Cơ bản

[Hướng dẫn đơn giản cho các trường hợp phổ biến]

## Cách dùng Nâng cao

[Các mẫu phức tạp cho người dùng chuyên sâu]
```

### Tham chiếu Chéo (Cross-References)

```markdown
## Các Luồng công việc liên quan

1. Đầu tiên, dùng `@brainstorming` để thiết kế.
2. Sau đó, dùng `@writing-plans` để lập kế hoạch.
3. Cuối cùng, dùng `@test-driven-development` để triển khai.
```

---

## Đo lường Hiệu quả của Skill

Cách để biết skill của bạn có tốt hay không:

### Kiểm tra Tính Rõ ràng

- Người không quen thuộc với chủ đề có thể làm theo không?
- Có hướng dẫn nào mơ hồ không?

### Kiểm tra Tính Đầy đủ

- Nó có bao quát trường hợp thuận lợi (happy path) không?
- Nó có xử lý các trường hợp biên không?
- Các kịch bản lỗi đã được giải quyết chưa?

### Kiểm tra Tính Hữu ích

- Nó có giải quyết một vấn đề thực tế không?
- Chính bạn có sử dụng nó không?
- Nó có giúp tiết kiệm thời gian hoặc cải thiện chất lượng không?

---

## Học hỏi từ các Skill hiện có

### Nghiên cứu các Ví dụ sau

**Dành cho Người mới:**

- `skills/brainstorming/SKILL.md` - Cấu trúc rõ ràng.
- `skills/git-pushing/SKILL.md` - Đơn giản và tập trung.
- `skills/copywriting/SKILL.md` - Ví dụ tốt.

**Dành cho Nâng cao:**

- `skills/systematic-debugging/SKILL.md` - Toàn diện.
- `skills/react-best-practices/SKILL.md` - Nhiều file.
- `skills/loki-mode/SKILL.md` - Các luồng công việc phức tạp.

---

## 💡 Mẹo Chuyên nghiệp

1. **Bắt đầu với phần "Khi nào sử dụng"** - Điều này làm rõ mục đích của skill.
2. **Viết ví dụ trước** - Chúng giúp bạn hiểu rõ những gì bạn đang dạy.
3. **Kiểm tra với AI** - Xem nó có thực sự hoạt động không trước khi gửi.
4. **Nhận phản hồi** - Nhờ người khác xem qua skill của bạn.
5. **Cải thiện liên tục** - Skill sẽ tốt lên theo thời gian dựa trên việc sử dụng.

---

## Các lỗi Thường gặp cần Tránh

### ❌ Lỗi 1: Quá mơ hồ

```markdown
## Hướng dẫn

Làm cho code tốt hơn.
```

**✅ Khắc phục:**

```markdown
## Hướng dẫn

1. Tách các logic lặp lại thành các hàm.
2. Thêm xử lý lỗi cho các trường hợp biên.
3. Viết unit tests cho các chức năng cốt lõi.
```

### ❌ Lỗi 2: Quá phức tạp

```markdown
## Hướng dẫn

[5000 từ chứa đầy thuật ngữ kỹ thuật dày đặc]
```

**✅ Khắc phục:**

Chia nhỏ thành nhiều skill hoặc sử dụng phương pháp tiết lộ lũy tiến.

### ❌ Lỗi 3: Không có ví dụ

```markdown
## Hướng dẫn

[Hướng dẫn mà không có bất kỳ ví dụ code nào]
```

**✅ Khắc phục:**

Thêm ít nhất 2-3 ví dụ thực tế.

### ❌ Lỗi 4: Thông tin lỗi thời

```markdown
Sử dụng React class components...
```

**✅ Khắc phục:**

Luôn cập nhật skill với các thực hành tốt nhất hiện tại.

---

## 🎯 Các bước Tiếp theo

1. **Đọc 3-5 skill hiện có** để thấy các phong cách khác nhau.
2. **Thử dùng bản mẫu skill** từ file `CONTRIBUTING.md`.
3. **Tạo một skill đơn giản** cho lĩnh vực bạn am hiểu.
4. **Kiểm tra nó** với trợ lý AI của bạn.
5. **Chia sẻ nó** qua Pull Request.

---

**Hãy nhớ rằng:** Mọi chuyên gia đều từng là người mới bắt đầu. Hãy bắt đầu đơn giản, học hỏi từ phản hồi và cải thiện theo thời gian! 🚀

21
docs/vietnamese/SOURCES.vi.md
Normal file
@@ -0,0 +1,21 @@

# 📜 Nguồn gốc & Ghi công

Chúng tôi tin tưởng vào việc ghi nhận công sức một cách xứng đáng.
Nếu bạn nhận ra tác phẩm của mình ở đây mà chưa được ghi công đúng cách, vui lòng mở một Issue (Báo lỗi).

| Kỹ năng / Danh mục          | Nguồn gốc                                              | Giấy phép       | Ghi chú                            |
| :-------------------------- | :----------------------------------------------------- | :-------------- | :--------------------------------- |
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/)             | MIT / CC-BY-SA  | Được chuyển đổi cho trợ lý AI.     |
| `active-directory-attacks`  | [HackTricks](https://book.hacktricks.xyz/)             | MIT / CC-BY-SA  | Được chuyển đổi cho trợ lý AI.     |
| `owasp-top-10`              | [OWASP](https://owasp.org/)                            | CC-BY-SA        | Chuyển đổi từ phương pháp gốc.     |
| `burp-suite-testing`        | [PortSwigger](https://portswigger.net/burp)            | Không có        | Chỉ là hướng dẫn sử dụng.          |
| `crewai`                    | [CrewAI](https://github.com/joaomdmoura/crewAI)        | MIT             | Hướng dẫn về framework.            |
| `langgraph`                 | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT             | Hướng dẫn về framework.            |
| `react-patterns`            | [React Docs](https://react.dev/)                       | CC-BY           | Các mẫu thiết kế chính thức.       |
| **Tất cả Skill Chính thức** | [Anthropic / Google / OpenAI]                          | Bản quyền riêng | Được khuyến nghị bởi nhà cung cấp. |

## Chính sách Giấy phép

- **Mã nguồn (Code)**: Tất cả mã nguồn gốc trong repository này tuân theo giấy phép **MIT**.
- **Nội dung (Content)**: Tài liệu hướng dẫn tuân theo giấy phép **CC-BY-4.0**.
- **Bên thứ ba**: Chúng tôi tôn trọng giấy phép từ phía nguồn gốc của các tài liệu. Nếu một kỹ năng được nhập về có giấy phép GPL, nó sẽ được đánh dấu rõ ràng hoặc bị loại trừ (chúng tôi hướng tới sự tương thích với MIT/Apache).
65
docs/vietnamese/TRANSLATION_PLAN.vi.md
Normal file
@@ -0,0 +1,65 @@

# 🗺️ Kế hoạch Dịch thuật (Translation Master Plan)

Tài liệu này dùng để theo dõi tiến độ dịch thuật toàn bộ repository `antigravity-awesome-skills` sang tiếng Việt.

**Mục tiêu:** Dịch toàn bộ 560+ kỹ năng và tài liệu hướng dẫn.
**Quy tắc:**

1. Giữ nguyên cấu trúc thư mục gốc.
2. File dịch được lưu tại `docs/vietnamese/skills/<category>/<skill-name>.vi.md`.
3. Sử dụng văn phong chuyên nghiệp, dễ hiểu cho lập trình viên Việt Nam.

---

## 📊 Tổng quan Tiến độ

- [x] **Giai đoạn 0: Thiết lập & Core Docs** (README, CONTRIBUTING, v.v.)
- [x] **Giai đoạn 1: Essentials Bundle** (Các kỹ năng cốt lõi)
- [ ] **Giai đoạn 2: Development & Security** (Kỹ năng lập trình & bảo mật)
- [ ] **Giai đoạn 3: Data & AI, Infrastructure** (Dữ liệu, AI và Hạ tầng)
- [ ] **Giai đoạn 4: Architecture & General** (Kiến trúc & Các kỹ năng chung)
- [ ] **Giai đoạn 5: Business, Testing & Workflow** (Kinh doanh, Kiểm thử & Quy trình)

---

## ✅ Chi tiết Công việc

### 🟢 Giai đoạn 0: Tài liệu Gốc (Core Documentation)

- [x] `README.vi.md`
- [x] `docs/vietnamese/BUNDLES.vi.md`
- [x] `docs/vietnamese/CONTRIBUTING.vi.md`
- [x] `docs/vietnamese/EXAMPLES.vi.md`
- [x] `docs/vietnamese/FAQ.vi.md`
- [x] `docs/vietnamese/GETTING_STARTED.vi.md`
- [x] `docs/vietnamese/QUALITY_BAR.vi.md`
- [x] `docs/vietnamese/SECURITY_GUARDRAILS.vi.md`
- [x] `docs/vietnamese/SKILL_ANATOMY.vi.md`
- [x] `docs/vietnamese/SOURCES.vi.md`
- [x] `docs/vietnamese/VISUAL_GUIDE.vi.md`
- [x] `docs/vietnamese/SECURITY.vi.md`
- [x] `docs/vietnamese/SKILLS_README.vi.md`

### 🟢 Giai đoạn 1: Essentials Bundle (Core Skills)

> **TRẠNG THÁI: GIỮ NGUYÊN TIẾNG ANH (KEPT IN ENGLISH)**
> _Lý do: Các file Skill đóng vai trò là System Prompts cho AI Agent. Việc dịch sang tiếng Việt có thể làm giảm hiệu suất và độ chính xác của Agent._

### 🟡 Giai đoạn 2 - 5: All Other Skills

> **TRẠNG THÁI: GIỮ NGUYÊN TIẾNG ANH (KEPT IN ENGLISH)**
> _Lý do: Tương tự như trên, giữ nguyên bản gốc để đảm bảo tính tương thích tốt nhất với các mô hình AI._

---

## 🎯 Kết luận

- **Phần dành cho Con người (Human-facing Docs):** Đã dịch hoàn tất (`README`, `DOCS`, v.v.).
- **Phần dành cho AI (Agent Skills):** Giữ nguyên Tiếng Anh.

Dự án đã đạt trạng thái sẵn sàng sử dụng cho người dùng Việt Nam (với tài liệu hướng dẫn tiếng Việt) mà vẫn đảm bảo sức mạnh cốt lõi của Agent (với Skills tiếng Anh).

---

## 📝 Nhật ký Thay đổi (Changelog)

- **2026-01-29**:
  - Hoàn thành Giai đoạn 0 (Docs).
  - Hoàn thành Giai đoạn 1 (Essentials).
  - Cấu trúc lại repo: Chuyển toàn bộ bản dịch vào `docs/vietnamese/`.
512
docs/vietnamese/VISUAL_GUIDE.vi.md
Normal file
@@ -0,0 +1,512 @@

# Hướng dẫn Nhanh bằng Hình ảnh (Visual Guide)

**Học qua hình ảnh!** Hướng dẫn này sử dụng các sơ đồ và ví dụ trực quan để giúp bạn hiểu về các kỹ năng (skills).

---

## Cái nhìn Tổng quan

```
┌─────────────────────────────────────────────────────────────┐
│                   BẠN (Lập trình viên)                      │
│                           ↓                                 │
│         "Giúp tôi xây dựng hệ thống thanh toán"             │
│                           ↓                                 │
├─────────────────────────────────────────────────────────────┤
│                 TRỢ LÝ AI (AI ASSISTANT)                    │
│                           ↓                                 │
│             Tải kỹ năng @stripe-integration                 │
│                           ↓                                 │
│         Trở thành chuyên gia về thanh toán Stripe           │
│                           ↓                                 │
│       Cung cấp hỗ trợ chuyên sâu kèm các ví dụ code         │
└─────────────────────────────────────────────────────────────┘
```

---

## 📦 Cấu trúc Repository (Trực quan)

```
antigravity-awesome-skills/
│
├── 📄 README.md              ← Tổng quan & danh sách skill
├── 📄 CONTRIBUTING.md        ← Cách thức đóng góp
│
├── 📁 skills/                ← Nơi chứa tất cả hơn 250 skills
│   │
│   ├── 📁 brainstorming/
│   │   └── 📄 SKILL.md       ← Định nghĩa skill
│   │
│   ├── 📁 stripe-integration/
│   │   ├── 📄 SKILL.md
│   │   └── 📁 examples/      ← Các phần bổ sung tùy chọn
│   │
│   └── ... (Hơn 250 skills khác)
│
├── 📁 scripts/               ← Quản lý & Xác thực
│   ├── validate_skills.py    ← Công cụ kiểm soát chất lượng
│   └── generate_index.py     ← Công cụ tạo danh mục (registry)
│
├── 📁 .github/
│   └── 📄 MAINTENANCE.md     ← Hướng dẫn cho người duy trì
│
└── 📁 docs/                  ← Tài liệu hướng dẫn
    ├── 📄 GETTING_STARTED.md ← Bắt đầu tại đây! (MỚI)
    ├── 📄 FAQ.md             ← Giải đáp thắc mắc
    ├── 📄 BUNDLES.md         ← Gói khởi đầu (MỚI)
    ├── 📄 QUALITY_BAR.md     ← Tiêu chuẩn chất lượng
    ├── 📄 SKILL_ANATOMY.md   ← Cách thức skill hoạt động
    └── 📄 VISUAL_GUIDE.md    ← Chính là file này!
```

---

## Cách Skills Hoạt động (Sơ đồ Luồng)

```
┌───────────────┐
│ 1. CÀI ĐẶT    │  Sao chép skills vào .agent/skills/
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 2. GỌI LỆNH   │  Gõ: @ten-skill trong chat với AI
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 3. TẢI DỮ LIỆU│  AI đọc file SKILL.md
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 4. THỰC THI   │  AI tuân theo hướng dẫn trong skill
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 5. KẾT QUẢ    │  Bạn nhận được hỗ trợ chuyên sâu!
└───────────────┘
```

---

## 🎯 Phân loại Skills (Bản đồ Trực quan)

```
                        ┌─────────────────────────┐
                        │  250+ AWESOME SKILLS    │
                        └────────────┬────────────┘
                                     │
        ┌────────────────────────────┼────────────────────────────┐
        │                            │                            │
   ┌────▼────┐                ┌──────▼──────┐              ┌──────▼──────┐
   │ SÁNG TẠO│                │ PHÁT TRIỂN  │              │  BẢO MẬT    │
   │  (10)   │                │    (25)     │              │    (50)     │
   └────┬────┘                └──────┬──────┘              └──────┬──────┘
        │                            │                            │
 • Thiết kế UI/UX             • TDD                      • Hacking Đạo đức
 • Nghệ thuật Canvas          • Debugging                • Metasploit
 • Giao diện/Themes           • Mẫu thiết kế React       • Burp Suite
                                                         • SQLMap
        │                            │                            │
        └────────────────────────────┼────────────────────────────┘
                                     │
        ┌────────────────────────────┼────────────────────────────┐
        │                            │                            │
   ┌────▼────┐                ┌──────▼──────┐              ┌──────▼──────┐
   │   AI    │                │  TÀI LIỆU   │              │ MARKETING   │
   │  (30)   │                │    (4)      │              │    (23)     │
   └────┬────┘                └──────┬──────┘              └──────┬──────┘
        │                            │                            │
 • Hệ thống RAG               • DOCX                     • SEO
 • LangGraph                  • PDF                      • Copywriting
 • Prompt Eng.                • PPTX                     • CRO
 • Voice Agents               • XLSX                     • Quảng cáo trả phí
```

---

## Cấu trúc File Skill (Trực quan)

````
┌─────────────────────────────────────────────────────────┐
│                      SKILL.md                           │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌───────────────────────────────────────────────┐      │
│  │  FRONTMATTER (Siêu dữ liệu)                   │      │
│  │  ───────────────────────────────────────────  │      │
│  │  ---                                          │      │
│  │  name: my-skill                               │      │
│  │  description: "Công dụng của skill này"       │      │
│  │  ---                                          │      │
│  └───────────────────────────────────────────────┘      │
│                                                         │
│  ┌───────────────────────────────────────────────┐      │
│  │  NỘI DUNG (Hướng dẫn)                         │      │
│  │  ───────────────────────────────────────────  │      │
│  │                                               │      │
│  │  # Tiêu đề Skill                              │      │
│  │                                               │      │
│  │  ## Tổng quan                                 │      │
│  │  Skill này làm gì...                          │      │
│  │                                               │      │
│  │  ## Khi nào nên dùng                          │      │
│  │  - Sử dụng khi...                             │      │
│  │                                               │      │
│  │  ## Hướng dẫn                                 │      │
│  │  1. Bước đầu tiên...                          │      │
│  │  2. Bước thứ hai...                           │      │
│  │                                               │      │
│  │  ## Ví dụ                                     │      │
│  │  ```javascript                                │      │
│  │  // Code ví dụ                                │      │
│  │  ```                                          │      │
│  │                                               │      │
│  └───────────────────────────────────────────────┘      │
│                                                         │
└─────────────────────────────────────────────────────────┘
````

---

## Cài đặt (Các bước Trực quan)

### Bước 1: Sao chép Repository

```
┌─────────────────────────────────────────┐
│  Terminal                               │
├─────────────────────────────────────────┤
│  $ git clone https://github.com/        │
│    sickn33/antigravity-awesome-skills   │
│    .agent/skills                        │
│                                         │
│  ✓ Đang sao chép vào '.agent/skills'... │
│  ✓ Hoàn tất!                            │
└─────────────────────────────────────────┘
```

### Bước 2: Xác minh Cài đặt

```
┌─────────────────────────────────────────┐
│  File Explorer                          │
├─────────────────────────────────────────┤
│  📁 .agent/                             │
│  └── 📁 skills/                         │
│      ├── 📁 brainstorming/              │
│      ├── 📁 stripe-integration/         │
│      ├── 📁 react-best-practices/       │
│      └── ... (Thêm 176 mục nữa)         │
└─────────────────────────────────────────┘
```

### Bước 3: Sử dụng Skill

```
┌─────────────────────────────────────────┐
│  AI Assistant Chat                      │
├─────────────────────────────────────────┤
│  Bạn: @brainstorming giúp tôi thiết kế  │
│       một ứng dụng todo                 │
│                                         │
│  AI: Tuyệt vời! Hãy để tôi giúp bạn suy │
│      nghĩ kỹ về việc này. Đầu tiên, hãy │
│      tìm hiểu các yêu cầu của bạn...    │
│                                         │
│      Mục đích sử dụng chính là gì?      │
│      a) Quản lý công việc cá nhân       │
│      b) Hợp tác nhóm                    │
│      c) Lập kế hoạch dự án              │
└─────────────────────────────────────────┘
```

---

## Ví dụ: Sử dụng Skill (Từng bước)

### Tình huống: Bạn muốn thêm thanh toán Stripe vào ứng dụng của mình

```
┌─────────────────────────────────────────────────────────────┐
│ BƯỚC 1: Xác định Nhu cầu                                    │
├─────────────────────────────────────────────────────────────┤
│ "Tôi cần thêm xử lý thanh toán vào ứng dụng của mình"       │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ BƯỚC 2: Tìm đúng Skill                                      │
├─────────────────────────────────────────────────────────────┤
│ Tìm kiếm: "payment" hoặc "stripe"                           │
│ Tìm thấy: @stripe-integration                               │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ BƯỚC 3: Gọi lệnh Skill                                      │
├─────────────────────────────────────────────────────────────┤
│ Bạn: @stripe-integration giúp tôi thêm thanh toán định kỳ   │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ BƯỚC 4: AI Tải Kiến thức của Skill                          │
├─────────────────────────────────────────────────────────────┤
│ • Các mẫu Stripe API                                        │
│ • Xử lý Webhook                                             │
│ • Quản lý gói đăng ký (Subscription)                        │
│ • Thực hành tốt nhất                                        │
└─────────────────────────────────────────────────────────────┘
                              ↓
┌─────────────────────────────────────────────────────────────┐
│ BƯỚC 5: Nhận Hỗ trợ Chuyên gia                              │
├─────────────────────────────────────────────────────────────┤
│ AI cung cấp:                                                │
│ • Các ví dụ code                                            │
│ • Hướng dẫn thiết lập                                       │
│ • Các lưu ý về bảo mật                                      │
│ • Chiến lược kiểm thử                                       │
└─────────────────────────────────────────────────────────────┘
```

---

## Tìm kiếm Skills (Hướng dẫn Trực quan)

### Cách 1: Duyệt theo Danh mục

```
README.md → Cuộn xuống "Full Skill Registry" → Tìm danh mục → Chọn skill
```

### Cách 2: Tìm theo Từ khóa

```
Terminal → ls skills/ | grep "từ-khóa" → Xem các skill khớp
```

### Cách 3: Sử dụng Index

```
Mở file skills_index.json → Tìm từ khóa → Tìm đường dẫn đến skill
```
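Cách 3 có thể tự động hóa bằng vài dòng Node. Bản phác thảo dưới đây giả định `skills_index.json` là một mảng các mục có trường `name` và `description` (cấu trúc thực tế của file có thể khác):

```javascript
// Lọc các mục trong index theo từ khóa, so khớp không phân biệt hoa thường
// trên name và description. Bản phác thảo minh họa.
function searchIndex(entries, keyword) {
  const q = keyword.toLowerCase();
  return entries.filter(e =>
    (e.name || '').toLowerCase().includes(q) ||
    (e.description || '').toLowerCase().includes(q)
  );
}
```

Ví dụ dùng: `searchIndex(JSON.parse(fs.readFileSync('skills_index.json', 'utf8')), 'payment')`.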

---

## Tạo Skill đầu tiên của bạn (Quy trình Trực quan)

```
┌───────────────┐
│ 1. Ý TƯỞNG    │  "Tôi muốn chia sẻ kiến thức về Docker"
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 2. KHỞI TẠO   │  mkdir skills/docker-mastery
└──────┬────────┘  touch skills/docker-mastery/SKILL.md
       │
       ↓
┌───────────────┐
│ 3. VIẾT       │  Thêm frontmatter + nội dung
└──────┬────────┘  (Dùng template từ CONTRIBUTING.vi.md)
       │
       ↓
┌───────────────┐
│ 4. KIỂM THỬ   │  Sao chép vào .agent/skills/
└──────┬────────┘  Thử: @docker-mastery
       │
       ↓
┌───────────────┐
│ 5. XÁC THỰC   │  python3 scripts/validate_skills.py
└──────┬────────┘
       │
       ↓
┌───────────────┐
│ 6. GỬI ĐI     │  git commit + push + Pull Request
└───────────────┘
```

---

## Các Cấp độ Phức tạp của Skill

```
┌─────────────────────────────────────────────────────────────┐
│                  ĐỘ PHỨC TẠP CỦA SKILL                      │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ĐƠN GIẢN              TIÊU CHUẨN              PHỨC TẠP     │
│  ────────              ──────────              ────────     │
│                                                             │
│  • 1 file              • 1 file                • Nhiều file │
│  • 100-200 từ          • 300-800 từ            • 800-2000 từ│
│  • Cấu trúc cơ bản     • Cấu trúc đầy đủ       • Có scripts │
│  • Không có phần phụ   • Có ví dụ              • Có ví dụ   │
│                        • Thực hành tốt nhất    • Có template│
│                                                • Có tài liệu│
│  Ví dụ:                Ví dụ:                  Ví dụ:       │
│  git-pushing           brainstorming           loki-mode    │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

---

## 🎯 Tác động của việc Đóng góp (Trực quan)

```
Sự đóng góp của bạn
        │
        ├─→ Cải thiện tài liệu hướng dẫn
        │        │
        │        └─→ Giúp hàng ngàn dev dễ hiểu hơn
        │
        ├─→ Tạo ra Skill mới
        │        │
        │        └─→ Mang lại khả năng mới cho mọi người
        │
        ├─→ Sửa lỗi/Lỗi chính tả
        │        │
        │        └─→ Tránh hiểu lầm cho người dùng tương lai
        │
        └─→ Thêm ví dụ
                 │
                 └─→ Giúp người mới học dễ dàng hơn
```

---

## Lộ trình Học tập (Roadmap Trực quan)

```
BẮT ĐẦU TẠI ĐÂY
      │
      ↓
┌─────────────────┐
│      Đọc        │
│ GETTING_STARTED │
└────────┬────────┘
         │
         ↓
┌───────────────────┐
│ Thử dùng 2-3 Skill│
│   với Trợ lý AI   │
└────────┬──────────┘
         │
         ↓
┌─────────────────┐
│      Đọc        │
│  SKILL_ANATOMY  │
└────────┬────────┘
         │
         ↓
┌─────────────────┐
│   Nghiên cứu    │
│ Skills hiện có  │
└────────┬────────┘
         │
         ↓
┌─────────────────┐
│   Tạo Skill     │
│    đơn giản     │
└────────┬────────┘
         │
         ↓
┌─────────────────┐
│      Đọc        │
│  CONTRIBUTING   │
└────────┬────────┘
         │
         ↓
┌─────────────────┐
│     Gửi PR      │
└────────┬────────┘
         │
         ↓
TRỞ THÀNH CONTRIBUTOR! 🎉
```

---

## 💡 Mẹo Nhanh (Bản ghi chú Trực quan)

```
┌─────────────────────────────────────────────────────────────┐
│                      THAM KHẢO NHANH                        │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  📥 CÀI ĐẶT                                                 │
│     git clone [repo] .agent/skills                          │
│                                                             │
│  🎯 SỬ DỤNG                                                 │
│     @ten-skill [yêu cầu của bạn]                            │
│                                                             │
│  🔍 TÌM KIẾM                                                │
│     ls skills/ | grep "từ-khóa"                             │
│                                                             │
│  ✅ XÁC THỰC                                                │
│     python3 scripts/validate_skills.py                      │
│                                                             │
│  📝 TẠO SKILL                                               │
│     1. mkdir skills/ten-skill-cua-ban                       │
│     2. Tạo SKILL.md với frontmatter                         │
│     3. Thêm nội dung                                        │
│     4. Thử nghiệm & xác thực                                │
│     5. Gửi Pull Request (PR)                                │
│                                                             │
│  🆘 TRỢ GIÚP                                                │
│     • docs/GETTING_STARTED.md - Cơ bản                      │
│     • CONTRIBUTING.md - Cách đóng góp                       │
│     • SKILL_ANATOMY.md - Tìm hiểu sâu                       │
│     • GitHub Issues - Đặt câu hỏi                           │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

---

## Câu chuyện Thành công (Dòng thời gian Trực quan)

```
Ngày 1: Cài đặt skills
  │
  └─→ "Oa, @brainstorming đã giúp mình thiết kế ứng dụng!"

Ngày 3: Dùng 5 skills khác nhau
  │
  └─→ "Những kỹ năng này giúp mình tiết kiệm bao nhiêu thời gian!"

Tuần 1: Tạo skill đầu tiên
  │
  └─→ "Mình đã chia sẻ kiến thức của mình dưới dạng một skill!"

Tuần 2: Skill được gộp (merge) vào repo chính
  │
  └─→ "Kỹ năng của mình đang giúp đỡ người khác! 🎉"

Tháng 1: Trở thành người đóng góp thường xuyên
  │
  └─→ "Mình đã đóng góp 5 skills và cải thiện rất nhiều tài liệu!"
```

---

## Các Bước Tiếp theo

1. ✅ **Hiểu** cấu trúc trực quan.
2. ✅ **Cài đặt** skills vào công cụ AI của bạn.
3. ✅ **Thử dùng** 2-3 skills từ các danh mục khác nhau.
4. ✅ **Đọc** file CONTRIBUTING.md.
5. ✅ **Tạo** skill đầu tiên của bạn.
6. ✅ **Chia sẻ** với cộng đồng.

---

**Bạn là người học qua hình ảnh?** Hy vọng hướng dẫn này sẽ giúp ích! Bạn vẫn còn thắc mắc? Hãy kiểm tra:

- [GETTING_STARTED.md](GETTING_STARTED.vi.md) - Giới thiệu bằng văn bản.
- [SKILL_ANATOMY.md](SKILL_ANATOMY.vi.md) - Phân tích chi tiết.
- [CONTRIBUTING.md](../CONTRIBUTING.vi.md) - Cách thức đóng góp.

**Sẵn sàng đóng góp?** Bạn làm được mà! 💪
191
lib/skill-utils.js
Normal file
@@ -0,0 +1,191 @@

const fs = require('fs');
const path = require('path');
const yaml = require('yaml');

function stripQuotes(value) {
  if (typeof value !== 'string') return value;
  if (value.length < 2) return value.trim();
  const first = value[0];
  const last = value[value.length - 1];
  if ((first === '"' && last === '"') || (first === "'" && last === "'")) {
    return value.slice(1, -1).trim();
  }
  if (first === '"' || first === "'") {
    return value.slice(1).trim();
  }
  if (last === '"' || last === "'") {
    return value.slice(0, -1).trim();
  }
  return value.trim();
}

function parseInlineList(raw) {
  if (typeof raw !== 'string') return [];
  const value = raw.trim();
  if (!value.startsWith('[') || !value.endsWith(']')) return [];
  const inner = value.slice(1, -1).trim();
  if (!inner) return [];
  return inner
    .split(',')
    .map(item => stripQuotes(item.trim()))
    .filter(Boolean);
}

function isPlainObject(value) {
  return value && typeof value === 'object' && !Array.isArray(value);
}

function parseFrontmatter(content) {
  const sanitized = content.replace(/^\uFEFF/, '');
  const lines = sanitized.split(/\r?\n/);
  if (!lines.length || lines[0].trim() !== '---') {
    return { data: {}, body: content, errors: [], hasFrontmatter: false };
  }

  let endIndex = -1;
  for (let i = 1; i < lines.length; i += 1) {
    if (lines[i].trim() === '---') {
      endIndex = i;
      break;
    }
  }

  if (endIndex === -1) {
    return {
      data: {},
      body: content,
      errors: ['Missing closing frontmatter delimiter (---).'],
      hasFrontmatter: true,
    };
  }

  const errors = [];
  const fmText = lines.slice(1, endIndex).join('\n');
  let data = {};

  try {
    const doc = yaml.parseDocument(fmText, { prettyErrors: false });
    if (doc.errors && doc.errors.length) {
      errors.push(...doc.errors.map(error => error.message));
    }
    data = doc.toJS();
  } catch (err) {
    errors.push(err.message);
    data = {};
  }

  if (!isPlainObject(data)) {
    errors.push('Frontmatter must be a YAML mapping/object.');
    data = {};
  }

  const body = lines.slice(endIndex + 1).join('\n');
  return { data, body, errors, hasFrontmatter: true };
}

function tokenize(value) {
  if (!value) return [];
  return value
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, ' ')
    .split(' ')
    .map(token => token.trim())
    .filter(Boolean);
}

function unique(list) {
  const seen = new Set();
  const result = [];
  for (const item of list) {
    if (!item || seen.has(item)) continue;
    seen.add(item);
    result.push(item);
  }
  return result;
}

function readSkill(skillDir, skillId) {
  const skillPath = path.join(skillDir, skillId, 'SKILL.md');
  const content = fs.readFileSync(skillPath, 'utf8');
  const { data } = parseFrontmatter(content);
  const name = typeof data.name === 'string' && data.name.trim()
    ? data.name.trim()
    : skillId;
  const description = typeof data.description === 'string'
    ? data.description.trim()
    : '';

  let tags = [];
  if (Array.isArray(data.tags)) {
    tags = data.tags.map(tag => String(tag).trim());
  } else if (typeof data.tags === 'string' && data.tags.trim()) {
    const parts = data.tags.includes(',')
      ? data.tags.split(',')
      : data.tags.split(/\s+/);
    tags = parts.map(tag => tag.trim());
  } else if (isPlainObject(data.metadata) && data.metadata.tags) {
    const rawTags = data.metadata.tags;
    if (Array.isArray(rawTags)) {
      tags = rawTags.map(tag => String(tag).trim());
    } else if (typeof rawTags === 'string' && rawTags.trim()) {
      const parts = rawTags.includes(',')
        ? rawTags.split(',')
        : rawTags.split(/\s+/);
      tags = parts.map(tag => tag.trim());
    }
  }

  tags = tags.filter(Boolean);

  return {
    id: skillId,
    name,
    description,
    tags,
    path: skillPath,
    content,
  };
}

function listSkillIds(skillsDir) {
  return fs.readdirSync(skillsDir)
    .filter(entry => {
      if (entry.startsWith('.')) return false;
      const dirPath = path.join(skillsDir, entry);
      if (!fs.statSync(dirPath).isDirectory()) return false;
      const skillPath = path.join(dirPath, 'SKILL.md');
      return fs.existsSync(skillPath);
    })
    .sort();
}

/**
 * Recursively list all skill directory paths under skillsDir (relative paths).
 * Matches generate_index.py behavior so catalog includes nested skills (e.g. game-development/2d-games).
 */
function listSkillIdsRecursive(skillsDir, baseDir = skillsDir, acc = []) {
  const entries = fs.readdirSync(baseDir, { withFileTypes: true });
  for (const entry of entries) {
    if (entry.name.startsWith('.')) continue;
    if (!entry.isDirectory()) continue;
    const dirPath = path.join(baseDir, entry.name);
    const skillPath = path.join(dirPath, 'SKILL.md');
    const relPath = path.relative(skillsDir, dirPath);
    if (fs.existsSync(skillPath)) {
      acc.push(relPath);
    }
    listSkillIdsRecursive(skillsDir, dirPath, acc);
  }
  return acc.sort();
}

module.exports = {
  listSkillIds,
  listSkillIdsRecursive,
  parseFrontmatter,
  parseInlineList,
  readSkill,
  stripQuotes,
  tokenize,
  unique,
};
|
||||
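The `tokenize` and `unique` helpers above are pure functions, so their behavior is easy to illustrate. The following is a minimal standalone sketch (the two functions are copied out of `lib/skill-utils.js` so the snippet runs on its own):

```javascript
// Standalone copies of the tokenize/unique helpers from lib/skill-utils.js.
function tokenize(value) {
  if (!value) return [];
  return value
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, ' ') // collapse every non-alphanumeric run to a single space
    .split(' ')
    .map(token => token.trim())
    .filter(Boolean);
}

function unique(list) {
  // Keeps first occurrence, drops falsy entries and later duplicates.
  const seen = new Set();
  const result = [];
  for (const item of list) {
    if (!item || seen.has(item)) continue;
    seen.add(item);
    result.push(item);
  }
  return result;
}

console.log(tokenize('Kubernetes & Helm: GitOps workflows!'));
// → ['kubernetes', 'helm', 'gitops', 'workflows']
console.log(unique(['k8s', 'helm', 'k8s', '']));
// → ['k8s', 'helm']
```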
package-lock.json (generated, new file, 34 lines)
@@ -0,0 +1,34 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.2.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "antigravity-awesome-skills",
      "version": "4.2.0",
      "license": "MIT",
      "dependencies": {
        "yaml": "^2.8.2"
      },
      "bin": {
        "antigravity-awesome-skills": "bin/install.js"
      }
    },
    "node_modules/yaml": {
      "version": "2.8.2",
      "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
      "integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
      "license": "ISC",
      "bin": {
        "yaml": "bin.mjs"
      },
      "engines": {
        "node": ">= 14.6"
      },
      "funding": {
        "url": "https://github.com/sponsors/eemeli"
      }
    }
  }
}
package.json (new file, 37 lines)
@@ -0,0 +1,37 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.7.0",
  "description": "626+ agentic skills for Claude Code, Gemini CLI, Cursor, Antigravity & more. Installer CLI.",
  "license": "MIT",
  "scripts": {
    "validate": "python3 scripts/validate_skills.py",
    "validate:strict": "python3 scripts/validate_skills.py --strict",
    "index": "python3 scripts/generate_index.py",
    "readme": "python3 scripts/update_readme.py",
    "chain": "npm run validate && npm run index && npm run readme",
    "catalog": "node scripts/build-catalog.js",
    "build": "npm run chain && npm run catalog",
    "test": "node scripts/tests/validate_skills_headings.test.js && python3 scripts/tests/test_validate_skills_headings.py"
  },
  "devDependencies": {
    "yaml": "^2.8.2"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/sickn33/antigravity-awesome-skills.git"
  },
  "bin": {
    "antigravity-awesome-skills": "bin/install.js"
  },
  "files": [
    "bin"
  ],
  "keywords": [
    "claude-code",
    "cursor",
    "gemini-cli",
    "antigravity",
    "agentic-skills",
    "ai-coding"
  ]
}
scripts/build-catalog.js (new file, 355 lines)
@@ -0,0 +1,355 @@
const fs = require('fs');
const path = require('path');
const {
  listSkillIdsRecursive,
  readSkill,
  tokenize,
  unique,
} = require('../lib/skill-utils');

const ROOT = path.resolve(__dirname, '..');
const SKILLS_DIR = path.join(ROOT, 'skills');

const STOPWORDS = new Set([
  'a', 'an', 'and', 'are', 'as', 'at', 'be', 'but', 'by', 'for', 'from', 'has', 'have', 'in', 'into',
  'is', 'it', 'its', 'of', 'on', 'or', 'our', 'out', 'over', 'that', 'the', 'their', 'they', 'this',
  'to', 'use', 'when', 'with', 'you', 'your', 'will', 'can', 'if', 'not', 'only', 'also', 'more',
  'best', 'practice', 'practices', 'expert', 'specialist', 'focused', 'focus', 'master', 'modern',
  'advanced', 'comprehensive', 'production', 'production-ready', 'ready', 'build', 'create', 'deliver',
  'design', 'implement', 'implementation', 'strategy', 'strategies', 'patterns', 'pattern', 'workflow',
  'workflows', 'guide', 'template', 'templates', 'tool', 'tools', 'project', 'projects', 'support',
  'manage', 'management', 'system', 'systems', 'services', 'service', 'across', 'end', 'end-to-end',
  'using', 'based', 'ensure', 'help', 'needs', 'need', 'focuses', 'handles', 'builds', 'make',
]);

const TAG_STOPWORDS = new Set([
  'pro', 'expert', 'patterns', 'pattern', 'workflow', 'workflows', 'templates', 'template', 'toolkit',
  'tools', 'tool', 'project', 'projects', 'guide', 'management', 'engineer', 'architect', 'developer',
  'specialist', 'assistant', 'analysis', 'review', 'reviewer', 'automation', 'orchestration', 'scaffold',
  'scaffolding', 'implementation', 'strategy', 'context', 'feature', 'features', 'smart',
  'system', 'systems', 'design', 'development', 'test', 'testing',
]);

const CATEGORY_RULES = [
  {
    name: 'security',
    keywords: [
      'security', 'sast', 'compliance', 'privacy', 'threat', 'vulnerability', 'owasp', 'pci', 'gdpr',
      'secrets', 'risk', 'malware', 'forensics', 'attack', 'incident', 'auth', 'mtls', 'zero', 'trust',
    ],
  },
  {
    name: 'infrastructure',
    keywords: [
      'kubernetes', 'k8s', 'helm', 'terraform', 'cloud', 'network', 'devops', 'gitops', 'prometheus',
      'grafana', 'observability', 'monitoring', 'logging', 'tracing', 'deployment', 'istio', 'linkerd',
      'service', 'mesh', 'slo', 'sre', 'oncall', 'incident', 'pipeline', 'cicd', 'ci', 'cd', 'kafka',
    ],
  },
  {
    name: 'data-ai',
    keywords: [
      'data', 'database', 'db', 'sql', 'postgres', 'mysql', 'analytics', 'etl', 'warehouse', 'dbt',
      'ml', 'ai', 'llm', 'rag', 'vector', 'embedding', 'spark', 'airflow', 'cdc', 'pipeline',
    ],
  },
  {
    name: 'development',
    keywords: [
      'python', 'javascript', 'typescript', 'java', 'golang', 'go', 'rust', 'csharp', 'dotnet', 'php',
      'ruby', 'node', 'react', 'frontend', 'backend', 'mobile', 'ios', 'android', 'flutter', 'fastapi',
      'django', 'nextjs', 'vue', 'api',
    ],
  },
  {
    name: 'architecture',
    keywords: [
      'architecture', 'c4', 'microservices', 'event', 'cqrs', 'saga', 'domain', 'ddd', 'patterns',
      'decision', 'adr',
    ],
  },
  {
    name: 'testing',
    keywords: ['testing', 'tdd', 'unit', 'e2e', 'qa', 'test'],
  },
  {
    name: 'business',
    keywords: [
      'business', 'market', 'sales', 'finance', 'startup', 'legal', 'hr', 'product', 'customer', 'seo',
      'marketing', 'kpi', 'contract', 'employment',
    ],
  },
  {
    name: 'workflow',
    keywords: ['workflow', 'orchestration', 'conductor', 'automation', 'process', 'collaboration'],
  },
];

const BUNDLE_RULES = {
  'core-dev': {
    description: 'Core development skills across languages, frameworks, and backend/frontend fundamentals.',
    keywords: [
      'python', 'javascript', 'typescript', 'go', 'golang', 'rust', 'java', 'node', 'frontend', 'backend',
      'react', 'fastapi', 'django', 'nextjs', 'api', 'mobile', 'ios', 'android', 'flutter', 'php', 'ruby',
    ],
  },
  'security-core': {
    description: 'Security, privacy, and compliance essentials.',
    keywords: [
      'security', 'sast', 'compliance', 'threat', 'risk', 'privacy', 'secrets', 'owasp', 'gdpr', 'pci',
      'vulnerability', 'auth',
    ],
  },
  'k8s-core': {
    description: 'Kubernetes and service mesh essentials.',
    keywords: ['kubernetes', 'k8s', 'helm', 'istio', 'linkerd', 'service', 'mesh'],
  },
  'data-core': {
    description: 'Data engineering and analytics foundations.',
    keywords: [
      'data', 'database', 'sql', 'dbt', 'airflow', 'spark', 'analytics', 'etl', 'warehouse', 'postgres',
      'mysql', 'kafka',
    ],
  },
  'ops-core': {
    description: 'Operations, observability, and delivery pipelines.',
    keywords: [
      'observability', 'monitoring', 'logging', 'tracing', 'prometheus', 'grafana', 'devops', 'gitops',
      'deployment', 'cicd', 'pipeline', 'slo', 'sre', 'incident',
    ],
  },
};

const CURATED_COMMON = [
  'bash-pro',
  'python-pro',
  'javascript-pro',
  'typescript-pro',
  'golang-pro',
  'rust-pro',
  'java-pro',
  'frontend-developer',
  'backend-architect',
  'nodejs-backend-patterns',
  'fastapi-pro',
  'api-design-principles',
  'sql-pro',
  'database-architect',
  'kubernetes-architect',
  'terraform-specialist',
  'observability-engineer',
  'security-auditor',
  'sast-configuration',
  'gitops-workflow',
];

function normalizeTokens(tokens) {
  return unique(tokens.map(token => token.toLowerCase())).filter(Boolean);
}

function deriveTags(skill) {
  let tags = Array.isArray(skill.tags) ? skill.tags : [];
  tags = tags.map(tag => tag.toLowerCase()).filter(Boolean);

  if (!tags.length) {
    tags = skill.id
      .split('-')
      .map(tag => tag.toLowerCase())
      .filter(tag => tag && !TAG_STOPWORDS.has(tag));
  }

  return normalizeTokens(tags);
}

function detectCategory(skill, tags) {
  const haystack = normalizeTokens([
    ...tags,
    ...tokenize(skill.name),
    ...tokenize(skill.description),
  ]);
  const haystackSet = new Set(haystack);

  for (const rule of CATEGORY_RULES) {
    for (const keyword of rule.keywords) {
      if (haystackSet.has(keyword)) {
        return rule.name;
      }
    }
  }

  return 'general';
}

function buildTriggers(skill, tags) {
  const tokens = tokenize(`${skill.name} ${skill.description}`)
    .filter(token => token.length >= 2 && !STOPWORDS.has(token));
  return unique([...tags, ...tokens]).slice(0, 12);
}

function buildAliases(skills) {
  const existingIds = new Set(skills.map(skill => skill.id));
  const aliases = {};
  const used = new Set();

  for (const skill of skills) {
    if (skill.name && skill.name !== skill.id) {
      const alias = skill.name.toLowerCase();
      if (!existingIds.has(alias) && !used.has(alias)) {
        aliases[alias] = skill.id;
        used.add(alias);
      }
    }

    const tokens = skill.id.split('-').filter(Boolean);
    if (skill.id.length < 28 || tokens.length < 4) continue;

    const deduped = [];
    const tokenSeen = new Set();
    for (const token of tokens) {
      if (tokenSeen.has(token)) continue;
      tokenSeen.add(token);
      deduped.push(token);
    }

    const aliasTokens = deduped.length > 3
      ? [deduped[0], deduped[1], deduped[deduped.length - 1]]
      : deduped;
    const alias = unique(aliasTokens).join('-');

    if (!alias || alias === skill.id) continue;
    if (existingIds.has(alias) || used.has(alias)) continue;

    aliases[alias] = skill.id;
    used.add(alias);
  }

  return aliases;
}

function buildBundles(skills) {
  const bundles = {};
  const skillTokens = new Map();

  for (const skill of skills) {
    const tokens = normalizeTokens([
      ...skill.tags,
      ...tokenize(skill.name),
      ...tokenize(skill.description),
    ]);
    skillTokens.set(skill.id, new Set(tokens));
  }

  for (const [bundleName, rule] of Object.entries(BUNDLE_RULES)) {
    const bundleSkills = [];
    const keywords = rule.keywords.map(keyword => keyword.toLowerCase());

    for (const skill of skills) {
      const tokenSet = skillTokens.get(skill.id) || new Set();
      if (keywords.some(keyword => tokenSet.has(keyword))) {
        bundleSkills.push(skill.id);
      }
    }

    bundles[bundleName] = {
      description: rule.description,
      skills: bundleSkills.sort(),
    };
  }

  const common = CURATED_COMMON.filter(skillId => skillTokens.has(skillId));

  return { bundles, common };
}

function truncate(value, limit) {
  if (!value || value.length <= limit) return value || '';
  return `${value.slice(0, limit - 3)}...`;
}

function renderCatalogMarkdown(catalog) {
  const lines = [];
  lines.push('# Skill Catalog');
  lines.push('');
  lines.push(`Generated at: ${catalog.generatedAt}`);
  lines.push('');
  lines.push(`Total skills: ${catalog.total}`);
  lines.push('');

  const categories = Array.from(new Set(catalog.skills.map(skill => skill.category))).sort();
  for (const category of categories) {
    const grouped = catalog.skills.filter(skill => skill.category === category);
    lines.push(`## ${category} (${grouped.length})`);
    lines.push('');
    lines.push('| Skill | Description | Tags | Triggers |');
    lines.push('| --- | --- | --- | --- |');

    for (const skill of grouped) {
      const description = truncate(skill.description, 160).replace(/\|/g, '\\|');
      const tags = skill.tags.join(', ');
      const triggers = skill.triggers.join(', ');
      lines.push(`| \`${skill.id}\` | ${description} | ${tags} | ${triggers} |`);
    }

    lines.push('');
  }

  return lines.join('\n');
}

function buildCatalog() {
  const skillRelPaths = listSkillIdsRecursive(SKILLS_DIR);
  const skills = skillRelPaths.map(relPath => readSkill(SKILLS_DIR, relPath));
  const catalogSkills = [];

  for (const skill of skills) {
    const tags = deriveTags(skill);
    const category = detectCategory(skill, tags);
    const triggers = buildTriggers(skill, tags);

    catalogSkills.push({
      id: skill.id,
      name: skill.name,
      description: skill.description,
      category,
      tags,
      triggers,
      path: path.relative(ROOT, skill.path),
    });
  }

  const catalog = {
    generatedAt: new Date().toISOString(),
    total: catalogSkills.length,
    skills: catalogSkills.sort((a, b) => a.id.localeCompare(b.id)),
  };

  const aliases = buildAliases(catalog.skills);
  const bundleData = buildBundles(catalog.skills);

  const catalogPath = path.join(ROOT, 'data', 'catalog.json');
  const catalogMarkdownPath = path.join(ROOT, 'CATALOG.md');
  const bundlesPath = path.join(ROOT, 'data', 'bundles.json');
  const aliasesPath = path.join(ROOT, 'data', 'aliases.json');

  fs.writeFileSync(catalogPath, JSON.stringify(catalog, null, 2));
  fs.writeFileSync(catalogMarkdownPath, renderCatalogMarkdown(catalog));
  fs.writeFileSync(
    bundlesPath,
    JSON.stringify({ generatedAt: catalog.generatedAt, ...bundleData }, null, 2),
  );
  fs.writeFileSync(
    aliasesPath,
    JSON.stringify({ generatedAt: catalog.generatedAt, aliases }, null, 2),
  );

  return catalog;
}

if (require.main === module) {
  const catalog = buildCatalog();
  console.log(`Generated catalog for ${catalog.total} skills.`);
}

module.exports = {
  buildCatalog,
};
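The category detection in `build-catalog.js` is first-match-wins: rules are checked in declaration order, so a skill mentioning both "security" and "kubernetes" lands in `security` because that rule comes first. A minimal standalone sketch of the same logic (with a trimmed two-rule table for illustration):

```javascript
// First-match category detection, as in build-catalog.js (trimmed rule table).
const CATEGORY_RULES = [
  { name: 'security', keywords: ['security', 'owasp', 'vulnerability'] },
  { name: 'infrastructure', keywords: ['kubernetes', 'terraform', 'prometheus'] },
];

function detectCategory(tokens) {
  const haystackSet = new Set(tokens);
  for (const rule of CATEGORY_RULES) {
    // Rule order decides ties: earlier rules win.
    if (rule.keywords.some(keyword => haystackSet.has(keyword))) {
      return rule.name;
    }
  }
  return 'general';
}

console.log(detectCategory(['kubernetes', 'security'])); // → security
console.log(detectCategory(['gardening']));              // → general
```

This makes rule ordering part of the catalog's behavior: reordering `CATEGORY_RULES` can silently reassign skills to different categories.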
@@ -2,69 +2,90 @@ import os
 import json
 import re
 
+import yaml
+
+
+def parse_frontmatter(content):
+    """
+    Parses YAML frontmatter using PyYAML for standard compliance.
+    """
+    fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
+    if not fm_match:
+        return {}
+
+    try:
+        return yaml.safe_load(fm_match.group(1)) or {}
+    except yaml.YAMLError as e:
+        print(f"⚠️ YAML parsing error: {e}")
+        return {}
+
 
 def generate_index(skills_dir, output_file):
     print(f"🏗️ Generating index from: {skills_dir}")
     skills = []
 
     for root, dirs, files in os.walk(skills_dir):
-        # Skip .disabled directories
-        dirs[:] = [d for d in dirs if d != '.disabled']
+        # Skip .disabled or hidden directories
+        dirs[:] = [d for d in dirs if not d.startswith('.')]
 
         if "SKILL.md" in files:
             skill_path = os.path.join(root, "SKILL.md")
             dir_name = os.path.basename(root)
             parent_dir = os.path.basename(os.path.dirname(root))
 
             # Default values
             skill_info = {
                 "id": dir_name,
                 "path": os.path.relpath(root, os.path.dirname(skills_dir)),
                 "category": parent_dir if parent_dir != "skills" else "uncategorized",
                 "name": dir_name.replace("-", " ").title(),
-                "description": ""
+                "description": "",
+                "risk": "unknown",
+                "source": "unknown"
             }
 
-            with open(skill_path, 'r', encoding='utf-8') as f:
-                content = f.read()
-
-            # Try to extract from frontmatter first
-            fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
-            if fm_match:
-                fm_content = fm_match.group(1)
-                name_fm = re.search(r'^name:\s*(.+)$', fm_content, re.MULTILINE)
-                desc_fm = re.search(r'^description:\s*(.+)$', fm_content, re.MULTILINE)
-
-                if name_fm:
-                    skill_info["name"] = name_fm.group(1).strip()
-                if desc_fm:
-                    skill_info["description"] = desc_fm.group(1).strip()
-
-            # Fallback to Header and First Paragraph if needed
-            if not skill_info["description"] or skill_info["description"] == "":
-                name_match = re.search(r'^#\s+(.+)$', content, re.MULTILINE)
-                if name_match and not fm_match:  # Only override if no frontmatter name
-                    skill_info["name"] = name_match.group(1).strip()
-
-                # Extract first paragraph
-                body = content
-                if fm_match:
-                    body = content[fm_match.end():].strip()
-
-                lines = body.split('\n')
-                desc_lines = []
-                for line in lines:
-                    if line.startswith('#') or not line.strip():
-                        if desc_lines: break
-                        continue
-                    desc_lines.append(line.strip())
-
-                if desc_lines:
-                    skill_info["description"] = " ".join(desc_lines)[:150] + "..."
+            try:
+                with open(skill_path, 'r', encoding='utf-8') as f:
+                    content = f.read()
+            except Exception as e:
+                print(f"⚠️ Error reading {skill_path}: {e}")
+                continue
+
+            # Parse metadata
+            metadata = parse_frontmatter(content)
+
+            # Merge metadata
+            if "name" in metadata: skill_info["name"] = metadata["name"]
+            if "description" in metadata: skill_info["description"] = metadata["description"]
+            if "risk" in metadata: skill_info["risk"] = metadata["risk"]
+            if "source" in metadata: skill_info["source"] = metadata["source"]
+
+            # Fallback for description if missing in frontmatter (legacy support)
+            if not skill_info["description"]:
+                body = content
+                fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
+                if fm_match:
+                    body = content[fm_match.end():].strip()
+
+                # Simple extraction of first non-header paragraph
+                lines = body.split('\n')
+                desc_lines = []
+                for line in lines:
+                    if line.startswith('#') or not line.strip():
+                        if desc_lines: break
+                        continue
+                    desc_lines.append(line.strip())
+
+                if desc_lines:
+                    skill_info["description"] = " ".join(desc_lines)[:250].strip()
 
             skills.append(skill_info)
 
-    skills.sort(key=lambda x: x["name"])
+    # Deterministic sort: by name, then id
+    skills.sort(key=lambda x: (x["name"].lower(), x["id"].lower()))
 
     with open(output_file, 'w', encoding='utf-8') as f:
         json.dump(skills, f, indent=2)
 
-    print(f"✅ Generated index with {len(skills)} skills at: {output_file}")
+    print(f"✅ Generated rich index with {len(skills)} skills at: {output_file}")
     return skills
 
 if __name__ == "__main__":
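The frontmatter-extraction regex that this diff introduces in `generate_index.py` (`^---\s*\n(.*?)\n---`) anchors at the start of the file and lazily captures everything between the opening and closing `---` fences. A sketch of the same pattern in JavaScript (the YAML block is only extracted here, not parsed):

```javascript
// JavaScript sketch of the frontmatter-extraction regex from generate_index.py.
// [\s\S]*? is the JS equivalent of Python's (.*?) with re.DOTALL.
function extractFrontmatter(content) {
  const match = content.match(/^---\s*\n([\s\S]*?)\n---/);
  return match ? match[1] : null;
}

const doc = '---\nname: demo\ndescription: a demo skill\n---\n# Demo\n';
console.log(extractFrontmatter(doc));
// → "name: demo\ndescription: a demo skill"
console.log(extractFrontmatter('# No frontmatter here\n'));
// → null
```

Because the pattern is anchored with `^`, a `---` fence appearing later in the body is not mistaken for frontmatter.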
scripts/normalize-frontmatter.js (new file, 151 lines)
@@ -0,0 +1,151 @@
const fs = require('fs');
const path = require('path');
const yaml = require('yaml');
const { listSkillIds, parseFrontmatter } = require('../lib/skill-utils');

const ROOT = path.resolve(__dirname, '..');
const SKILLS_DIR = path.join(ROOT, 'skills');
const ALLOWED_FIELDS = new Set([
  'name',
  'description',
  'risk',
  'source',
  'license',
  'compatibility',
  'metadata',
  'allowed-tools',
]);

function isPlainObject(value) {
  return value && typeof value === 'object' && !Array.isArray(value);
}

function coerceToString(value) {
  if (value === null || value === undefined) return '';
  if (typeof value === 'string') return value.trim();
  if (typeof value === 'number' || typeof value === 'boolean') return String(value);
  if (Array.isArray(value)) {
    const simple = value.every(item => ['string', 'number', 'boolean'].includes(typeof item));
    return simple ? value.map(item => String(item).trim()).filter(Boolean).join(', ') : JSON.stringify(value);
  }
  if (isPlainObject(value)) {
    return JSON.stringify(value);
  }
  return String(value).trim();
}

function appendMetadata(metadata, key, value) {
  const nextValue = coerceToString(value);
  if (!nextValue) return;
  if (!metadata[key]) {
    metadata[key] = nextValue;
    return;
  }
  if (metadata[key].includes(nextValue)) return;
  metadata[key] = `${metadata[key]}, ${nextValue}`;
}

function collectAllowedTools(value, toolSet) {
  if (!value) return;
  if (typeof value === 'string') {
    value
      .split(/[\s,]+/)
      .map(token => token.trim())
      .filter(Boolean)
      .forEach(token => toolSet.add(token));
    return;
  }
  if (Array.isArray(value)) {
    value
      .map(token => String(token).trim())
      .filter(Boolean)
      .forEach(token => toolSet.add(token));
  }
}

function normalizeSkill(skillId) {
  const skillPath = path.join(SKILLS_DIR, skillId, 'SKILL.md');
  const content = fs.readFileSync(skillPath, 'utf8');
  const { data, body, hasFrontmatter } = parseFrontmatter(content);

  if (!hasFrontmatter) return false;

  let modified = false;
  const updated = { ...data };
  const metadata = isPlainObject(updated.metadata) ? { ...updated.metadata } : {};
  if (updated.metadata !== undefined && !isPlainObject(updated.metadata)) {
    appendMetadata(metadata, 'legacy_metadata', updated.metadata);
    modified = true;
  }

  const allowedTools = new Set();
  collectAllowedTools(updated['allowed-tools'], allowedTools);
  collectAllowedTools(updated.tools, allowedTools);
  collectAllowedTools(updated.tool_access, allowedTools);

  if (updated.tools !== undefined) {
    delete updated.tools;
    modified = true;
  }
  if (updated.tool_access !== undefined) {
    delete updated.tool_access;
    modified = true;
  }

  for (const key of Object.keys(updated)) {
    if (ALLOWED_FIELDS.has(key)) continue;
    if (key === 'tags') {
      appendMetadata(metadata, 'tags', updated[key]);
    } else {
      appendMetadata(metadata, key, updated[key]);
    }
    delete updated[key];
    modified = true;
  }

  if (allowedTools.size) {
    updated['allowed-tools'] = Array.from(allowedTools).join(' ');
    modified = true;
  } else if (updated['allowed-tools'] !== undefined) {
    delete updated['allowed-tools'];
    modified = true;
  }

  if (Object.keys(metadata).length) {
    updated.metadata = metadata;
    modified = true;
  } else if (updated.metadata !== undefined) {
    delete updated.metadata;
    modified = true;
  }

  if (!modified) return false;

  // Include risk/source here so normalization does not silently drop allowed fields.
  const ordered = {};
  for (const key of ['name', 'description', 'risk', 'source', 'license', 'compatibility', 'allowed-tools', 'metadata']) {
    if (updated[key] !== undefined) {
      ordered[key] = updated[key];
    }
  }

  const fm = yaml.stringify(ordered).trimEnd();
  const bodyPrefix = body.length && (body.startsWith('\n') || body.startsWith('\r\n')) ? '' : '\n';
  const next = `---\n${fm}\n---${bodyPrefix}${body}`;
  fs.writeFileSync(skillPath, next);
  return true;
}

function run() {
  const skillIds = listSkillIds(SKILLS_DIR);
  let updatedCount = 0;
  for (const skillId of skillIds) {
    if (normalizeSkill(skillId)) updatedCount += 1;
  }
  console.log(`Normalized frontmatter for ${updatedCount} skills.`);
}

if (require.main === module) {
  run();
}

module.exports = { run };
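`collectAllowedTools` is the merge point for the three legacy spellings of the tool list (`allowed-tools`, `tools`, `tool_access`): strings are split on commas and whitespace, arrays are flattened, and a `Set` deduplicates across all sources. A standalone sketch of that behavior:

```javascript
// Standalone copy of collectAllowedTools from scripts/normalize-frontmatter.js.
function collectAllowedTools(value, toolSet) {
  if (!value) return;
  if (typeof value === 'string') {
    // Split on any run of whitespace and/or commas.
    value
      .split(/[\s,]+/)
      .map(token => token.trim())
      .filter(Boolean)
      .forEach(token => toolSet.add(token));
    return;
  }
  if (Array.isArray(value)) {
    value
      .map(token => String(token).trim())
      .filter(Boolean)
      .forEach(token => toolSet.add(token));
  }
}

const tools = new Set();
collectAllowedTools('Bash, Read Write', tools);   // string form, mixed separators
collectAllowedTools(['Grep', 'Bash'], tools);     // array form; 'Bash' deduplicated
console.log(Array.from(tools).join(' '));
// → "Bash Read Write Grep"
```

The final `allowed-tools` value is then serialized as a single space-joined string, matching what `normalizeSkill` writes back into the frontmatter.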
scripts/release_cycle.sh (new executable file, 66 lines)
@@ -0,0 +1,66 @@
#!/bin/bash

# Release Cycle Automation Script
# Enforces protocols from .github/MAINTENANCE.md

set -e

GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
NC='\033[0m'

echo -e "${YELLOW}🤖 Initiating Antigravity Release Protocol...${NC}"

# 1. Validation Chain
echo -e "\n${YELLOW}Step 1: Running Validation Chain...${NC}"
echo "Running validate_skills.py..."
python3 scripts/validate_skills.py
echo "Running generate_index.py..."
python3 scripts/generate_index.py
echo "Running update_readme.py..."
python3 scripts/update_readme.py

# 2. Catalog (required for CI)
echo -e "\n${YELLOW}Step 2: Build catalog...${NC}"
npm run catalog

# 3. Stats Consistency Check
echo -e "\n${YELLOW}Step 3: Verifying Stats Consistency...${NC}"
JSON_COUNT=$(python3 -c "import json; print(len(json.load(open('skills_index.json'))))")
echo "Skills in Registry (JSON): $JSON_COUNT"

# Check README Intro
README_CONTENT=$(cat README.md)
if [[ "$README_CONTENT" != *"$JSON_COUNT high-performance"* ]]; then
    echo -e "${RED}❌ ERROR: README.md intro consistency failure!${NC}"
    echo "Expected: '$JSON_COUNT high-performance'"
    echo "Found mismatch. Please grep for 'high-performance' in README.md and fix it."
    exit 1
fi
echo -e "${GREEN}✅ Stats Consistent.${NC}"

# 4. Version check (package.json is source of truth for npm)
echo -e "\n${YELLOW}Step 4: Version check${NC}"
PKG_VERSION=$(node -p "require('./package.json').version")
echo "package.json version: $PKG_VERSION"
echo "Ensure this version is bumped before 'npm publish' (npm forbids republishing the same version)."

# 5. Contributor Check
echo -e "\n${YELLOW}Step 5: Contributor Check${NC}"
echo "Recent commits by author (check against README 'Repo Contributors'):"
git shortlog -sn --since="1 month ago" --all --no-merges | head -n 10

echo -e "${YELLOW}⚠️ MANUAL VERIFICATION REQUIRED:${NC}"
echo "1. Are all PR authors above listed in 'Repo Contributors'?"
echo "2. Are all External Sources listed in 'Credits & Sources'?"
read -p "Type 'yes' to confirm you have verified contributors: " CONFIRM_CONTRIB

if [ "$CONFIRM_CONTRIB" != "yes" ]; then
    echo -e "${RED}❌ Verification failed. Aborting.${NC}"
    exit 1
fi

echo -e "\n${GREEN}✅ Release Cycle Checks Passed. You may now commit and push.${NC}"
echo -e "${YELLOW}After tagging a release: run \`npm publish\` from repo root (or use GitHub Release + NPM_TOKEN for CI).${NC}"
exit 0
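Step 3 of the release script asserts that the README intro still quotes the exact skill count from `skills_index.json`. The same check can be sketched in Node (operating on in-memory strings here so the snippet is self-contained; `statsConsistent` is an illustrative helper name, not part of the repo):

```javascript
// Sketch of the Step 3 consistency check from scripts/release_cycle.sh:
// the README must contain "<count> high-performance" for the current index.
function statsConsistent(skillsIndexJson, readmeContent) {
  const count = JSON.parse(skillsIndexJson).length;
  return readmeContent.includes(`${count} high-performance`);
}

const index = JSON.stringify([{ id: 'a' }, { id: 'b' }]);
console.log(statsConsistent(index, 'library of **2 high-performance agentic skills**'));
// → true
console.log(statsConsistent(index, 'library of **5 high-performance agentic skills**'));
// → false
```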
scripts/tests/test_validate_skills_headings.py (new file, 18 lines)
@@ -0,0 +1,18 @@
import os
import sys

sys.path.append(os.path.dirname(os.path.dirname(__file__)))
from validate_skills import has_when_to_use_section

SAMPLES = [
    ("## When to Use", True),
    ("## Use this skill when", True),
    ("## When to Use This Skill", True),
    ("## Overview", False),
]

for heading, expected in SAMPLES:
    content = f"\n{heading}\n- item\n"
    assert has_when_to_use_section(content) is expected, heading

print("ok")
scripts/tests/validate_skills_headings.test.js (new file, 16 lines)
@@ -0,0 +1,16 @@
const assert = require('assert');
const { hasUseSection } = require('../validate-skills');

const samples = [
  ['## When to Use', true],
  ['## Use this skill when', true],
  ['## When to Use This Skill', true],
  ['## Overview', false],
];

for (const [heading, expected] of samples) {
  const content = `\n${heading}\n- item\n`;
  assert.strictEqual(hasUseSection(content), expected, heading);
}

console.log('ok');
@@ -3,123 +3,63 @@ import json
 import os
 import re


 def update_readme():
     base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
     readme_path = os.path.join(base_dir, "README.md")
     index_path = os.path.join(base_dir, "skills_index.json")

     print(f"📖 Reading skills index from: {index_path}")
-    with open(index_path, 'r', encoding='utf-8') as f:
+    with open(index_path, "r", encoding="utf-8") as f:
         skills = json.load(f)

     total_skills = len(skills)
     print(f"🔢 Total skills found: {total_skills}")

     print(f"📝 Updating README at: {readme_path}")
-    with open(readme_path, 'r', encoding='utf-8') as f:
+    with open(readme_path, "r", encoding="utf-8") as f:
         content = f.read()

     # 1. Update Title Count
     # Pattern: # 🌌 Antigravity Awesome Skills: [NUM]+ Agentic Skills
     content = re.sub(
-        r'(# 🌌 Antigravity Awesome Skills: )\d+(\+ Agentic Skills)',
-        f'\\g<1>{total_skills}\\g<2>',
-        content
+        r"(# 🌌 Antigravity Awesome Skills: )\d+(\+ Agentic Skills)",
+        rf"\g<1>{total_skills}\g<2>",
+        content,
     )

     # 2. Update Blockquote Count
     # Pattern: Collection of [NUM]+ Universal
     content = re.sub(
-        r'(Collection of )\d+(\+ Universal)',
-        f'\\g<1>{total_skills}\\g<2>',
-        content
+        r"(Collection of )\d+(\+ Universal)",
+        rf"\g<1>{total_skills}\g<2>",
+        content,
     )

     # 3. Update Intro Text Count
     # Pattern: library of **[NUM] high-performance skills**
     content = re.sub(
-        r'(library of \*\*)\d+( high-performance skills\*\*)',
-        f'\\g<1>{total_skills}\\g<2>',
-        content
+        r"(library of \*\*)\d+( high-performance agentic skills\*\*)",
+        rf"\g<1>{total_skills}\g<2>",
+        content,
     )

-    # 4. Update Registry Header Count
-    # Pattern: ## Full Skill Registry ([NUM]/[NUM])
+    # 4. Update Browse section header
     content = re.sub(
-        r'(## Full Skill Registry \()\d+/\d+(\))',
-        f'\\g<1>{total_skills}/{total_skills}\\g<2>',
-        content
+        r"## Browse \d+\+ Skills",
+        f"## Browse {total_skills}+ Skills",
+        content,
     )

-    # 5. Generate New Registry Table
-    print("🔄 Generating new registry table...")
-
-    # Store the Note block to preserve it
-    note_pattern = r'(> \[!NOTE\].*?)\n\n\| Skill Name'
-    note_match = re.search(note_pattern, content, re.DOTALL)
-    note_block = ""
-    if note_match:
-        note_block = note_match.group(1)
-    else:
-        # Fallback default note if not found (though it should be there)
-        note_block = "> [!NOTE] > **Document Skills**: We provide both **community** and **official Anthropic** versions for DOCX, PDF, PPTX, and XLSX. Locally, the official versions are used by default (via symlinks). In the repository, both versions are available for flexibility."
+    # 5. Update TOC link for Browse (anchor matches header-derived slug)
+    content = re.sub(
+        r"\[📚 Browse \d+\+ Skills\]\(#browse-\d+-skills\)",
+        f"[📚 Browse {total_skills}+ Skills](#browse-{total_skills}-skills)",
+        content,
+    )

-    table_header = "| Skill Name | Description | Path |\n| :--- | :--- | :--- |"
-    table_rows = []
-
-    for skill in skills:
-        name = skill.get('name', 'Unknown')
-        desc = skill.get('description', '').replace('\n', ' ').strip()
-        path = skill.get('path', '')
-
-        # Escape pipes in description to strictly avoid breaking the table
-        desc = desc.replace('|', '\|')
-
-        row = f"| **{name}** | {desc} | `{path}` |"
-        table_rows.append(row)
-
-    new_table_section = f"{note_block}\n\n{table_header}\n" + "\n".join(table_rows)
-
-    # Replace the old table section
-    # We look for the start of the section and the end (which is either the next H2 or EOF)
-    # The section starts after "## Full Skill Registry (X/X)"
-
-    # First, find the header position
-    header_pattern = r'## Full Skill Registry \(\d+/\d+\)'
-    header_match = re.search(header_pattern, content)
-
-    if not header_match:
-        print("❌ Could not find 'Full Skill Registry' header.")
-        return
-
-    start_pos = header_match.end()
-
-    # Find the next section (## ...) or end of file
-    next_section_match = re.search(r'\n## ', content[start_pos:])
-
-    if next_section_match:
-        end_pos = start_pos + next_section_match.start()
-        # Keep everything after the table
-        rest_of_file = content[end_pos:]
-    else:
-        # Table goes to end of file
-        rest_of_file = ""
-
-    # Check for text between Header and Table (usually just newlines or the Note)
-    # We replace everything from Header End to Next Section with our New Table Section
-    # but we need to supply the pre-table Note which we extracted/re-generated above.
-
-    # Simplification: We construct the top part (before header), add header, add new table section, add rest.
-
-    before_header = content[:header_match.start()]
-    new_header = f"## Full Skill Registry ({total_skills}/{total_skills})"
-
-    new_content = f"{before_header}{new_header}\n\n{new_table_section}\n{rest_of_file}"
-
-    with open(readme_path, 'w', encoding='utf-8') as f:
-        f.write(new_content)
+    with open(readme_path, "w", encoding="utf-8") as f:
+        f.write(content)

     print("✅ README.md updated successfully.")


 if __name__ == "__main__":
     update_readme()
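The quoting change in the diff above swaps f-strings with doubled backslashes (`f'\\g<1>…'`) for raw f-strings (`rf"\g<1>…"`) in the `re.sub` replacement argument. Both spell the exact same replacement string, so the rewrite is purely stylistic. A quick check (the sample text and count here are made up for illustration):

```python
import re

text = "# 🌌 Antigravity Awesome Skills: 500+ Agentic Skills"
total = 512

# Escaped f-string and raw f-string produce the same replacement template,
# so re.sub yields identical results either way.
old_style = re.sub(r"(Skills: )\d+(\+)", f"\\g<1>{total}\\g<2>", text)
new_style = re.sub(r"(Skills: )\d+(\+)", rf"\g<1>{total}\g<2>", text)

print(old_style == new_style)  # → True; both render "Skills: 512+"
```

The raw f-string form is easier to read because the `\g<1>` group references appear literally instead of double-escaped.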
293 scripts/validate-skills.js Normal file
@@ -0,0 +1,293 @@
/**
 * Legacy / alternative validator. For CI and PR checks, use scripts/validate_skills.py.
 * Run: npm run validate (or npm run validate:strict)
 */
const fs = require('fs');
const path = require('path');
const { listSkillIds, parseFrontmatter } = require('../lib/skill-utils');

const ROOT = path.resolve(__dirname, '..');
const SKILLS_DIR = path.join(ROOT, 'skills');
const BASELINE_PATH = path.join(ROOT, 'validation-baseline.json');

const errors = [];
const warnings = [];
const missingUseSection = [];
const missingDoNotUseSection = [];
const missingInstructionsSection = [];
const longFiles = [];
const unknownFieldSkills = [];
const isStrict = process.argv.includes('--strict')
  || process.env.STRICT === '1'
  || process.env.STRICT === 'true';
const writeBaseline = process.argv.includes('--write-baseline')
  || process.env.WRITE_BASELINE === '1'
  || process.env.WRITE_BASELINE === 'true';

const NAME_PATTERN = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;
const MAX_NAME_LENGTH = 64;
const MAX_DESCRIPTION_LENGTH = 1024;
const MAX_COMPATIBILITY_LENGTH = 500;
const MAX_SKILL_LINES = 500;
const ALLOWED_FIELDS = new Set([
  'name',
  'description',
  'risk',
  'source',
  'license',
  'compatibility',
  'metadata',
  'allowed-tools',
]);

const USE_SECTION_PATTERNS = [
  /^##\s+When\s+to\s+Use/im,
  /^##\s+Use\s+this\s+skill\s+when/im,
  /^##\s+When\s+to\s+Use\s+This\s+Skill/im,
];

function hasUseSection(content) {
  return USE_SECTION_PATTERNS.some(pattern => pattern.test(content));
}

function isPlainObject(value) {
  return value && typeof value === 'object' && !Array.isArray(value);
}

function validateStringField(fieldName, value, { min = 1, max = Infinity } = {}) {
  if (typeof value !== 'string') {
    return `${fieldName} must be a string.`;
  }
  const trimmed = value.trim();
  if (!trimmed) {
    return `${fieldName} cannot be empty.`;
  }
  if (trimmed.length < min) {
    return `${fieldName} must be at least ${min} characters.`;
  }
  if (trimmed.length > max) {
    return `${fieldName} must be <= ${max} characters.`;
  }
  return null;
}

function addError(message) {
  errors.push(message);
}

function addWarning(message) {
  warnings.push(message);
}

function loadBaseline() {
  if (!fs.existsSync(BASELINE_PATH)) {
    return {
      useSection: [],
      doNotUseSection: [],
      instructionsSection: [],
      longFile: [],
    };
  }

  try {
    const parsed = JSON.parse(fs.readFileSync(BASELINE_PATH, 'utf8'));
    return {
      useSection: Array.isArray(parsed.useSection) ? parsed.useSection : [],
      doNotUseSection: Array.isArray(parsed.doNotUseSection) ? parsed.doNotUseSection : [],
      instructionsSection: Array.isArray(parsed.instructionsSection) ? parsed.instructionsSection : [],
      longFile: Array.isArray(parsed.longFile) ? parsed.longFile : [],
    };
  } catch (err) {
    addWarning('Failed to parse validation-baseline.json; strict mode may fail.');
    return { useSection: [], doNotUseSection: [], instructionsSection: [], longFile: [] };
  }
}

function addStrictSectionErrors(label, missing, baselineSet) {
  if (!isStrict) return;
  const strictMissing = missing.filter(skillId => !baselineSet.has(skillId));
  if (strictMissing.length) {
    addError(`Missing "${label}" section (strict): ${strictMissing.length} skills (examples: ${strictMissing.slice(0, 5).join(', ')})`);
  }
}

function run() {
  const skillIds = listSkillIds(SKILLS_DIR);
  const baseline = loadBaseline();
  const baselineUse = new Set(baseline.useSection || []);
  const baselineDoNotUse = new Set(baseline.doNotUseSection || []);
  const baselineInstructions = new Set(baseline.instructionsSection || []);
  const baselineLongFile = new Set(baseline.longFile || []);

  for (const skillId of skillIds) {
    const skillPath = path.join(SKILLS_DIR, skillId, 'SKILL.md');

    if (!fs.existsSync(skillPath)) {
      addError(`Missing SKILL.md: ${skillId}`);
      continue;
    }

    const content = fs.readFileSync(skillPath, 'utf8');
    const { data, errors: fmErrors, hasFrontmatter } = parseFrontmatter(content);
    const lineCount = content.split(/\r?\n/).length;

    if (!hasFrontmatter) {
      addError(`Missing frontmatter: ${skillId}`);
    }

    if (fmErrors && fmErrors.length) {
      fmErrors.forEach(error => addError(`Frontmatter parse error (${skillId}): ${error}`));
    }

    if (!NAME_PATTERN.test(skillId)) {
      addError(`Folder name must match ${NAME_PATTERN}: ${skillId}`);
    }

    if (data.name !== undefined) {
      const nameError = validateStringField('name', data.name, { min: 1, max: MAX_NAME_LENGTH });
      if (nameError) {
        addError(`${nameError} (${skillId})`);
      } else {
        const nameValue = String(data.name).trim();
        if (!NAME_PATTERN.test(nameValue)) {
          addError(`name must match ${NAME_PATTERN}: ${skillId}`);
        }
        if (nameValue !== skillId) {
          addError(`name must match folder name: ${skillId} -> ${nameValue}`);
        }
      }
    }

    const descError = data.description === undefined
      ? 'description is required.'
      : validateStringField('description', data.description, { min: 1, max: MAX_DESCRIPTION_LENGTH });
    if (descError) {
      addError(`${descError} (${skillId})`);
    }

    if (data.license !== undefined) {
      const licenseError = validateStringField('license', data.license, { min: 1, max: 128 });
      if (licenseError) {
        addError(`${licenseError} (${skillId})`);
      }
    }

    if (data.compatibility !== undefined) {
      const compatibilityError = validateStringField(
        'compatibility',
        data.compatibility,
        { min: 1, max: MAX_COMPATIBILITY_LENGTH },
      );
      if (compatibilityError) {
        addError(`${compatibilityError} (${skillId})`);
      }
    }

    if (data['allowed-tools'] !== undefined) {
      if (typeof data['allowed-tools'] !== 'string') {
        addError(`allowed-tools must be a space-delimited string. (${skillId})`);
      } else if (!data['allowed-tools'].trim()) {
        addError(`allowed-tools cannot be empty. (${skillId})`);
      }
    }

    if (data.metadata !== undefined) {
      if (!isPlainObject(data.metadata)) {
        addError(`metadata must be a string map/object. (${skillId})`);
      } else {
        for (const [key, value] of Object.entries(data.metadata)) {
          if (typeof value !== 'string') {
            addError(`metadata.${key} must be a string. (${skillId})`);
          }
        }
      }
    }

    if (data && Object.keys(data).length) {
      const unknownFields = Object.keys(data).filter(key => !ALLOWED_FIELDS.has(key));
      if (unknownFields.length) {
        unknownFieldSkills.push(skillId);
        addError(`Unknown frontmatter fields (${skillId}): ${unknownFields.join(', ')}`);
      }
    }

    if (lineCount > MAX_SKILL_LINES) {
      longFiles.push(skillId);
    }

    if (!hasUseSection(content)) {
      missingUseSection.push(skillId);
    }

    if (!content.includes('## Do not use')) {
      missingDoNotUseSection.push(skillId);
    }

    if (!content.includes('## Instructions')) {
      missingInstructionsSection.push(skillId);
    }
  }

  if (missingUseSection.length) {
    addWarning(`Missing "Use this skill when" section: ${missingUseSection.length} skills (examples: ${missingUseSection.slice(0, 5).join(', ')})`);
  }

  if (missingDoNotUseSection.length) {
    addWarning(`Missing "Do not use" section: ${missingDoNotUseSection.length} skills (examples: ${missingDoNotUseSection.slice(0, 5).join(', ')})`);
  }

  if (missingInstructionsSection.length) {
    addWarning(`Missing "Instructions" section: ${missingInstructionsSection.length} skills (examples: ${missingInstructionsSection.slice(0, 5).join(', ')})`);
  }

  if (longFiles.length) {
    addWarning(`SKILL.md over ${MAX_SKILL_LINES} lines: ${longFiles.length} skills (examples: ${longFiles.slice(0, 5).join(', ')})`);
  }

  if (unknownFieldSkills.length) {
    addWarning(`Unknown frontmatter fields detected: ${unknownFieldSkills.length} skills (examples: ${unknownFieldSkills.slice(0, 5).join(', ')})`);
  }

  addStrictSectionErrors('Use this skill when', missingUseSection, baselineUse);
  addStrictSectionErrors('Do not use', missingDoNotUseSection, baselineDoNotUse);
  addStrictSectionErrors('Instructions', missingInstructionsSection, baselineInstructions);
  addStrictSectionErrors(`SKILL.md line count <= ${MAX_SKILL_LINES}`, longFiles, baselineLongFile);

  if (writeBaseline) {
    const baselineData = {
      generatedAt: new Date().toISOString(),
      useSection: [...missingUseSection].sort(),
      doNotUseSection: [...missingDoNotUseSection].sort(),
      instructionsSection: [...missingInstructionsSection].sort(),
      longFile: [...longFiles].sort(),
    };
    fs.writeFileSync(BASELINE_PATH, JSON.stringify(baselineData, null, 2));
    console.log(`Baseline written to ${BASELINE_PATH}`);
  }

  if (warnings.length) {
    console.warn('Warnings:');
    for (const warning of warnings) {
      console.warn(`- ${warning}`);
    }
  }

  if (errors.length) {
    console.error('\nErrors:');
    for (const error of errors) {
      console.error(`- ${error}`);
    }
    process.exit(1);
  }

  console.log(`Validation passed for ${skillIds.length} skills.`);
}

if (require.main === module) {
  run();
}

module.exports = {
  hasUseSection,
  run,
};
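The baseline mechanism in the validator above implements a ratchet: strict mode only fails on violations that are not already recorded in `validation-baseline.json`, so existing debt is tolerated while new debt is blocked. A language-agnostic sketch of the idea in Python (function and variable names here are illustrative, not from the repo):

```python
def new_violations(current, baseline):
    """Return only the violations not grandfathered in the baseline.

    Anything listed in the baseline is known, tolerated debt; anything
    else is a regression introduced since the baseline was written.
    """
    return sorted(set(current) - set(baseline))

baseline = ["old-skill-a", "old-skill-b"]     # recorded, tolerated debt
current = ["old-skill-a", "brand-new-skill"]  # violations found this run

regressions = new_violations(current, baseline)
print(regressions)  # → ['brand-new-skill']; strict mode fails only on this
```

Note that `old-skill-b` dropping out of `current` is also useful signal: regenerating the baseline (as `--write-baseline` does above) shrinks it over time as debt is paid down.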
@@ -1,52 +1,132 @@
 import os
 import re
+import argparse
+import sys

-def validate_skills(skills_dir):
+WHEN_TO_USE_PATTERNS = [
+    re.compile(r"^##\s+When\s+to\s+Use", re.MULTILINE | re.IGNORECASE),
+    re.compile(r"^##\s+Use\s+this\s+skill\s+when", re.MULTILINE | re.IGNORECASE),
+    re.compile(r"^##\s+When\s+to\s+Use\s+This\s+Skill", re.MULTILINE | re.IGNORECASE),
+]
+
+def has_when_to_use_section(content):
+    return any(pattern.search(content) for pattern in WHEN_TO_USE_PATTERNS)
+
+def parse_frontmatter(content):
+    """
+    Simple frontmatter parser using regex to avoid external dependencies.
+    Returns a dict of key-values.
+    """
+    fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
+    if not fm_match:
+        return None
+
+    fm_text = fm_match.group(1)
+    metadata = {}
+    for line in fm_text.split('\n'):
+        if ':' in line:
+            key, val = line.split(':', 1)
+            metadata[key.strip()] = val.strip().strip('"').strip("'")
+    return metadata
+
+def validate_skills(skills_dir, strict_mode=False):
     print(f"🔍 Validating skills in: {skills_dir}")
+    print(f"⚙️ Mode: {'STRICT (CI)' if strict_mode else 'Standard (Dev)'}")

     errors = []
+    warnings = []
     skill_count = 0

+    # Pre-compiled regex
+    security_disclaimer_pattern = re.compile(r"AUTHORIZED USE ONLY", re.IGNORECASE)
+
+    valid_risk_levels = ["none", "safe", "critical", "offensive"]
+
     for root, dirs, files in os.walk(skills_dir):
-        # Skip .disabled directories
-        dirs[:] = [d for d in dirs if d != '.disabled']
+        # Skip .disabled or hidden directories
+        dirs[:] = [d for d in dirs if not d.startswith('.')]

         if "SKILL.md" in files:
             skill_count += 1
             skill_path = os.path.join(root, "SKILL.md")
             rel_path = os.path.relpath(skill_path, skills_dir)

-            with open(skill_path, 'r', encoding='utf-8') as f:
-                content = f.read()
+            try:
+                with open(skill_path, 'r', encoding='utf-8') as f:
+                    content = f.read()
+            except Exception as e:
+                errors.append(f"❌ {rel_path}: Unreadable file - {str(e)}")
+                continue

-            # Check for Frontmatter or Header
-            has_frontmatter = content.strip().startswith("---")
-            has_header = re.search(r'^#\s+', content, re.MULTILINE)
-
-            if not (has_frontmatter or has_header):
-                errors.append(f"❌ {rel_path}: Missing frontmatter or top-level heading")
-
-            if has_frontmatter:
-                # Basic check for name and description in frontmatter
-                fm_match = re.search(r'^---\s*(.*?)\s*---', content, re.DOTALL)
-                if fm_match:
-                    fm_content = fm_match.group(1)
-                    if "name:" not in fm_content:
-                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'name:'")
-                    if "description:" not in fm_content:
-                        errors.append(f"⚠️ {rel_path}: Frontmatter missing 'description:'")
-                else:
-                    errors.append(f"❌ {rel_path}: Malformed frontmatter")
+            # 1. Frontmatter Check
+            metadata = parse_frontmatter(content)
+            if not metadata:
+                errors.append(f"❌ {rel_path}: Missing or malformed YAML frontmatter")
+                continue  # Cannot proceed without metadata
+
+            # 2. Metadata Schema Checks
+            if "name" not in metadata:
+                errors.append(f"❌ {rel_path}: Missing 'name' in frontmatter")
+            elif metadata["name"] != os.path.basename(root):
+                warnings.append(f"⚠️ {rel_path}: Name '{metadata['name']}' does not match folder name '{os.path.basename(root)}'")
+
+            if "description" not in metadata:
+                errors.append(f"❌ {rel_path}: Missing 'description' in frontmatter")
+
+            # Risk Validation (Quality Bar)
+            if "risk" not in metadata:
+                msg = f"⚠️ {rel_path}: Missing 'risk' label (defaulting to 'unknown')"
+                if strict_mode: errors.append(msg.replace("⚠️", "❌"))
+                else: warnings.append(msg)
+            elif metadata["risk"] not in valid_risk_levels:
+                errors.append(f"❌ {rel_path}: Invalid risk level '{metadata['risk']}'. Must be one of {valid_risk_levels}")
+
+            # Source Validation
+            if "source" not in metadata:
+                msg = f"⚠️ {rel_path}: Missing 'source' attribution"
+                if strict_mode: errors.append(msg.replace("⚠️", "❌"))
+                else: warnings.append(msg)
+
+            # 3. Content Checks (Triggers)
+            if not has_when_to_use_section(content):
+                msg = f"⚠️ {rel_path}: Missing '## When to Use' section"
+                if strict_mode: errors.append(msg.replace("⚠️", "❌"))
+                else: warnings.append(msg)
+
+            # 4. Security Guardrails
+            if metadata.get("risk") == "offensive":
+                if not security_disclaimer_pattern.search(content):
+                    errors.append(f"🚨 {rel_path}: OFFENSIVE SKILL MISSING SECURITY DISCLAIMER! (Must contain 'AUTHORIZED USE ONLY')")

-    print(f"✅ Found and checked {skill_count} skills.")
+    # Reporting
+    print(f"\n📊 Checked {skill_count} skills.")
+
+    if warnings:
+        print(f"\n⚠️ Found {len(warnings)} Warnings:")
+        for w in warnings:
+            print(w)
+
     if errors:
-        print("\n⚠️ Validation Results:")
-        for err in errors:
-            print(err)
+        print(f"\n❌ Found {len(errors)} Critical Errors:")
+        for e in errors:
+            print(e)
         return False
-    else:
-        print("✨ All skills passed basic validation!")
-        return True
+
+    if strict_mode and warnings:
+        print("\n❌ STRICT MODE: Failed due to warnings.")
+        return False
+
+    print("\n✨ All skills passed validation!")
+    return True

 if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Validate Antigravity Skills")
+    parser.add_argument("--strict", action="store_true", help="Fail on warnings (for CI)")
+    args = parser.parse_args()
+
     base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
     skills_path = os.path.join(base_dir, "skills")
-    validate_skills(skills_path)
+
+    success = validate_skills(skills_path, strict_mode=args.strict)
+    if not success:
+        sys.exit(1)
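The regex-based frontmatter parser added in the diff above can be exercised in isolation. A self-contained sketch that re-declares the helper as it appears in the diff and feeds it a sample SKILL.md header:

```python
import re

def parse_frontmatter(content):
    """Minimal YAML-ish frontmatter parser, as added in the diff above."""
    fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
    if not fm_match:
        return None
    metadata = {}
    for line in fm_match.group(1).split('\n'):
        if ':' in line:
            key, val = line.split(':', 1)
            # Strip surrounding quotes so `risk: "safe"` and `risk: safe` agree
            metadata[key.strip()] = val.strip().strip('"').strip("'")
    return metadata

sample = '---\nname: ab-test-setup\nrisk: "safe"\n---\n# Body'
print(parse_frontmatter(sample))  # → {'name': 'ab-test-setup', 'risk': 'safe'}
```

The trade-off versus a real YAML library is deliberate (no external dependencies), but note the limitation: nested mappings and multi-line values are flattened or lost, which is acceptable for the flat key-value frontmatter these skills use.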
@@ -170,9 +170,9 @@ description: "What this skill does"

 ## Documentation

-- **[Getting Started](../GETTING_STARTED.md)** - Quick start guide
+- **[Getting Started](../docs/GETTING_STARTED.md)** - Quick start guide
 - **[Examples](../docs/EXAMPLES.md)** - Real-world usage examples
-- **[FAQ](../FAQ.md)** - Common questions
+- **[FAQ](../docs/FAQ.md)** - Common questions
 - **[Visual Guide](../docs/VISUAL_GUIDE.md)** - Diagrams and flowcharts

 ---

@@ -198,4 +198,4 @@ Found a skill that needs improvement? Want to add a new skill?

 ---

-**Need help?** Check the [FAQ](../FAQ.md) or open an issue on GitHub!
+**Need help?** Check the [FAQ](../docs/FAQ.md) or open an issue on GitHub!
22 skills/SPDD/1-research.md Normal file
@@ -0,0 +1,22 @@
# ROLE: Codebase Research Agent
Your sole mission is to document and explain the codebase as it exists today.

## CRITICAL RULES:
- Do NOT suggest improvements, refactorings, or architectural changes.
- Do NOT perform root-cause analysis or propose future improvements.
- ONLY describe what exists, where it exists, and how the components interact.
- You are a technical cartographer drawing a map of the current system.

## STEPS TO FOLLOW:
1. **Initial Analysis:** Read the files the user mentions in full (NO limit/offset).
2. **Decomposition:** Break the user's question down into research areas (e.g. routes, database, UI).
3. **Execution:** - Locate where the files and components live.
   - Analyze HOW the current code works (without critiquing it).
   - Find examples of existing patterns for reference.
4. **Project State:**
   - If the project is NEW: research and list the best folder structure and industry-standard libraries for the stack.
   - If the project is EXISTING: identify technical debt or patterns that must be respected.

## OUTPUT:
- Generate the file `docs/prds/prd_current_task.md` with YAML frontmatter (date, topic, tags, status).
- **Mandatory Action:** End with: "Research complete. Please run `/clear` and load `.agente/2-spec.md` for planning."
20 skills/SPDD/2-spec.md Normal file
@@ -0,0 +1,20 @@
# ROLE: Implementation Planning Agent
You must create detailed implementation plans and be skeptical of vague requirements.

## CRITICAL RULES:
- Do not write the plan all at once; validate the phase structure with the user.
- Every technical decision must be made before the plan is finalized.
- The plan must be actionable and complete, with no "open questions."

## STEPS TO FOLLOW:
1. **Context Check:** Read the previously generated `docs/prds/prd_current_task.md`.
2. **Phasing:** Split the work into incremental, testable phases.
3. **Detailing:** For each affected file, define:
   - **Exact path.**
   - **Action:** (CREATE | MODIFY | DELETE).
   - **Logic:** Pseudocode snippets or implementation references.
4. **Success Criteria:** Define "Automated Verification" (scripts/tests) and "Manual Verification" (UI/UX).

## OUTPUT:
- Generate the file `docs/specs/spec_current_task.md` following the phase template.
- **Mandatory Action:** End with: "Spec finalized. Please run `/clear` and load `.agente/3-implementation.md` for execution."
20 skills/SPDD/3-implementation.md Normal file
@@ -0,0 +1,20 @@
# ROLE: Implementation Execution Agent
You must implement an approved technical plan with surgical precision.

## CRITICAL RULES:
- Follow the intent of the plan while adapting to the reality you find.
- Implement one phase COMPLETELY before moving on to the next.
- **STOP & THINK:** If you find an error in the Spec or a mismatch in the code, STOP and report it. Do not try to guess.

## STEPS TO FOLLOW:
1. **Sanity Check:** Read the Spec and the original ticket. Verify that the environment is clean.
2. **Execution:** Code following Clean Code standards and the snippets in the Spec.
3. **Verification:**
   - After each phase, run the "Automated Verification" commands described in the Spec.
   - PAUSE for manual confirmation from the user after each completed phase.
4. **Progress:** Update the checkboxes (- [x]) in the Spec file as you go.

## OUTPUT:
- Implemented source code.
- Phase-completion report with test results.
- **Final Action:** Ask whether the user wants to run regression tests or move on to the next task.
@@ -1,508 +1,232 @@
|
||||
---
|
||||
name: ab-test-setup
|
||||
description: When the user wants to plan, design, or implement an A/B test or experiment. Also use when the user mentions "A/B test," "split test," "experiment," "test this change," "variant copy," "multivariate test," or "hypothesis." For tracking implementation, see analytics-tracking.
|
||||
description: Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness.
|
||||
---
|
||||
|
||||
# A/B Test Setup
|
||||
|
||||
You are an expert in experimentation and A/B testing. Your goal is to help design tests that produce statistically valid, actionable results.
|
||||
## 1️⃣ Purpose & Scope
|
||||
|
||||
## Initial Assessment
|
||||
Ensure every A/B test is **valid, rigorous, and safe** before a single line of code is written.
|
||||
|
||||
Before designing a test, understand:
|
||||
|
||||
1. **Test Context**
|
||||
- What are you trying to improve?
|
||||
- What change are you considering?
|
||||
- What made you want to test this?
|
||||
|
||||
2. **Current State**
|
||||
- Baseline conversion rate?
|
||||
- Current traffic volume?
|
||||
- Any historical test data?
|
||||
|
||||
3. **Constraints**
|
||||
- Technical implementation complexity?
|
||||
- Timeline requirements?
|
||||
- Tools available?
|
||||
- Prevents "peeking"
|
||||
- Enforces statistical power
|
||||
- Blocks invalid hypotheses
|
||||
|
||||
---
|
||||
|
||||
## Core Principles
|
||||
## 2️⃣ Pre-Requisites
|
||||
|
||||
### 1. Start with a Hypothesis
|
||||
- Not just "let's see what happens"
|
||||
- Specific prediction of outcome
|
||||
- Based on reasoning or data
|
||||
You must have:
|
||||
|
||||
### 2. Test One Thing
|
||||
- Single variable per test
|
||||
- Otherwise you don't know what worked
|
||||
- Save MVT for later
|
||||
- A clear user problem
|
||||
- Access to an analytics source
|
||||
- Roughly estimated traffic volume
|
||||
|
||||
### 3. Statistical Rigor
|
||||
- Pre-determine sample size
|
||||
- Don't peek and stop early
|
||||
- Commit to the methodology
|
||||
### Hypothesis Quality Checklist
|
||||
|
||||
### 4. Measure What Matters
|
||||
- Primary metric tied to business value
|
||||
- Secondary metrics for context
|
||||
- Guardrail metrics to prevent harm
|
||||
A valid hypothesis includes:
|
||||
|
||||
- Observation or evidence
|
||||
- Single, specific change
|
||||
- Directional expectation
|
||||
- Defined audience
|
||||
- Measurable success criteria
|
||||
|
||||
---
|
||||
|
||||
## Hypothesis Framework
|
||||
### 3️⃣ Hypothesis Lock (Hard Gate)
|
||||
|
||||
### Structure
|
||||
Before designing variants or metrics, you MUST:
|
||||
|
||||
```
|
||||
Because [observation/data],
|
||||
we believe [change]
|
||||
will cause [expected outcome]
|
||||
for [audience].
|
||||
We'll know this is true when [metrics].
|
||||
```
|
||||
- Present the **final hypothesis**
|
||||
- Specify:
|
||||
- Target audience
|
||||
- Primary metric
|
||||
- Expected direction of effect
|
||||
- Minimum Detectable Effect (MDE)
|
||||
|
||||
### Examples
|
||||
Ask explicitly:
|
||||
|
||||
**Weak hypothesis:**
|
||||
"Changing the button color might increase clicks."
|
||||
> “Is this the final hypothesis we are committing to for this test?”
|
||||
|
||||
**Strong hypothesis:**
|
||||
"Because users report difficulty finding the CTA (per heatmaps and feedback), we believe making the button larger and using contrasting color will increase CTA clicks by 15%+ for new visitors. We'll measure click-through rate from page view to signup start."
|
||||
|
||||
### Good Hypotheses Include
|
||||
|
||||
- **Observation**: What prompted this idea
|
||||
- **Change**: Specific modification
|
||||
- **Effect**: Expected outcome and direction
|
||||
- **Audience**: Who this applies to
|
||||
- **Metric**: How you'll measure success
|
||||
**Do NOT proceed until confirmed.**
|
||||
|
||||
---
|
||||
|
||||
## Test Types
|
||||
### 4️⃣ Assumptions & Validity Check (Mandatory)
|
||||
|
||||
### A/B Test (Split Test)
|
||||
- Two versions: Control (A) vs. Variant (B)
|
||||
- Single change between versions
|
||||
- Most common, easiest to analyze
|
||||
Explicitly list assumptions about:
|
||||
|
||||
### A/B/n Test
|
||||
- Multiple variants (A vs. B vs. C...)
|
||||
- Requires more traffic
|
||||
- Good for testing several options
|
||||
- Traffic stability
|
||||
- User independence
|
||||
- Metric reliability
|
||||
- Randomization quality
|
||||
- External factors (seasonality, campaigns, releases)
|
||||
|
||||
### Multivariate Test (MVT)
|
||||
- Multiple changes in combinations
|
||||
- Tests interactions between changes
|
||||
- Requires significantly more traffic
|
||||
- Complex analysis
|
||||
If assumptions are weak or violated:
|
||||
|
||||
### Split URL Test
|
||||
- Different URLs for variants
|
||||
- Good for major page changes
|
||||
- Easier implementation sometimes
|
||||
- Warn the user
|
||||
- Recommend delaying or redesigning the test
|
||||
|
||||
---
|
||||
|
||||
## Sample Size Calculation

### 5️⃣ Test Type Selection

Choose the simplest valid test:

- **A/B Test** – single change, two variants
- **A/B/n Test** – multiple variants, higher traffic required
- **Multivariate Test (MVT)** – interaction effects, very high traffic
- **Split URL Test** – major structural changes

Default to **A/B** unless there is a clear reason otherwise.

### Inputs Needed

1. **Baseline conversion rate**: Your current rate
2. **Minimum detectable effect (MDE)**: Smallest change worth detecting
3. **Statistical significance level**: Usually 95%
4. **Statistical power**: Usually 80%

### Quick Reference

| Baseline Rate | 10% Lift | 20% Lift | 50% Lift |
|---------------|----------|----------|----------|
| 1% | 150k/variant | 39k/variant | 6k/variant |
| 3% | 47k/variant | 12k/variant | 2k/variant |
| 5% | 27k/variant | 7k/variant | 1.2k/variant |
| 10% | 12k/variant | 3k/variant | 550/variant |

### Formula Resources

- Evan Miller's calculator: https://www.evanmiller.org/ab-testing/sample-size.html
- Optimizely's calculator: https://www.optimizely.com/sample-size-calculator/

### Test Duration

```
Duration (days) = Sample size needed per variant × Number of variants
                  ───────────────────────────────────────────────────
                           Daily traffic to the test page
```

Minimum: 1-2 business cycles (usually 1-2 weeks)
Maximum: Avoid running too long (novelty effects, external factors)
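The quick-reference numbers can be approximated in code. Below is a minimal sketch of the standard two-proportion sample-size formula, assuming a two-sided 5% significance level and 80% power (the z-values are hardcoded); treat it as a sanity check, not a replacement for a proper calculator:

```javascript
// Per-variant sample size via the standard two-proportion formula.
// Assumes two-sided alpha = 5% (z = 1.96) and power = 80% (z = 0.8416).
function sampleSizePerVariant(baselineRate, relativeLift, zAlpha = 1.96, zBeta = 0.8416) {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator / (p2 - p1)) ** 2);
}

// Duration in days = total sample needed / daily visitors to the page.
function estimatedDurationDays(nPerVariant, numVariants, dailyTraffic) {
  return Math.ceil((nPerVariant * numVariants) / dailyTraffic);
}

const n = sampleSizePerVariant(0.03, 0.2); // 3% baseline, +20% relative lift
console.log(n); // ≈ 13,900 per variant (calculators vary slightly in assumptions)
console.log(estimatedDurationDays(n, 2, 2000)); // 14 days at 2,000 visitors/day
```

Results differ slightly from the table above because calculators make different assumptions (one- vs. two-sided tests, continuity corrections); always cross-check with the tool you will actually use to analyze the test.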
---
|
||||
|
||||
## Metrics Selection

### Primary Metric

- Single metric that matters most
- Directly tied to the hypothesis
- What you'll use to call the test

### Secondary Metrics

- Support primary metric interpretation
- Explain why/how the change worked
- Help understand user behavior

### Guardrail Metrics

- Things that shouldn't get worse
- Revenue, retention, satisfaction
- Stop the test if significantly negative

### Metric Examples by Test Type

**Homepage CTA test:**

- Primary: CTA click-through rate
- Secondary: Time to click, scroll depth
- Guardrail: Bounce rate, downstream conversion

**Pricing page test:**

- Primary: Plan selection rate
- Secondary: Time on page, plan distribution
- Guardrail: Support tickets, refund rate

**Signup flow test:**

- Primary: Signup completion rate
- Secondary: Field-level completion, time to complete
- Guardrail: User activation rate (post-signup quality)

### 6️⃣ Metrics Definition

#### Primary Metric (Mandatory)

- Single metric used to evaluate success
- Directly tied to the hypothesis
- Pre-defined and frozen before launch

#### Secondary Metrics

- Provide context
- Explain _why_ results occurred
- Must not override the primary metric

#### Guardrail Metrics

- Metrics that must not degrade
- Used to prevent harmful wins
- Trigger test stop if significantly negative
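One way to enforce the "pre-defined and frozen before launch" rule is to encode the metric plan as data. A sketch follows; the metric names, tolerances, and object shape are illustrative, not tied to any particular analytics tool:

```javascript
// Freeze the metric plan before launch so it cannot be quietly redefined.
const metricPlan = Object.freeze({
  primary: { name: "cta_click_through_rate", direction: "increase" },
  secondary: ["time_to_click", "scroll_depth"],
  guardrails: [
    { name: "bounce_rate", direction: "must_not_increase", tolerance: 0.02 },
    { name: "downstream_conversion", direction: "must_not_decrease", tolerance: 0.01 },
  ],
});

// A guardrail breach vetoes shipping even if the primary metric wins.
function guardrailBreached(guardrail, controlValue, variantValue) {
  const delta = variantValue - controlValue;
  return guardrail.direction === "must_not_increase"
    ? delta > guardrail.tolerance
    : delta < -guardrail.tolerance;
}

console.log(guardrailBreached(metricPlan.guardrails[0], 0.4, 0.45)); // true: bounce rate rose 5 points
```

Checking guardrails against an explicit tolerance, rather than eyeballing a dashboard, makes the "stop if significantly negative" rule mechanical and auditable.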
---
|
||||
|
||||
## Designing Variants

### Control (A)

- Current experience, unchanged
- Don't modify it during the test

### Variant (B+)

**Best practices:**

- Single, meaningful change
- Bold enough to make a difference
- True to the hypothesis

**What to vary:**

Headlines/Copy:

- Message angle
- Value proposition
- Specificity level
- Tone/voice

Visual Design:

- Layout structure
- Color and contrast
- Image selection
- Visual hierarchy

CTA:

- Button copy
- Size/prominence
- Placement
- Number of CTAs

Content:

- Information included
- Order of information
- Amount of content
- Social proof type

### Documenting Variants

```
Control (A):
- Screenshot
- Description of current state

Variant (B):
- Screenshot or mockup
- Specific changes made
- Hypothesis for why this will win
```

### 7️⃣ Sample Size & Duration

Define upfront:

- Baseline rate
- MDE
- Significance level (typically 95%)
- Statistical power (typically 80%)

Estimate:

- Required sample size per variant
- Expected test duration

**Do NOT proceed without a realistic sample size estimate.**
---
|
||||
|
||||
## Traffic Allocation

### Standard Split

- 50/50 for an A/B test
- Equal split for multiple variants

### Conservative Rollout

- 90/10 or 80/20 initially
- Limits the risk of a bad variant
- Takes longer to reach significance

### Ramping

- Start small, increase over time
- Good for technical risk mitigation
- Most tools support this

### Considerations

- Consistency: users see the same variant on return
- Segment sizes: ensure segments are large enough
- Time of day/week: balanced exposure

---

## Implementation Approaches

### Client-Side Testing

**Tools**: PostHog, Optimizely, VWO, custom

**How it works**:

- JavaScript modifies the page after load
- Quick to implement
- Can cause flicker

**Best for**:

- Marketing pages
- Copy/visual changes
- Quick iteration

### Server-Side Testing

**Tools**: PostHog, LaunchDarkly, Split, custom

**How it works**:

- Variant determined before the page renders
- No flicker
- Requires development work

**Best for**:

- Product features
- Complex changes
- Performance-sensitive pages

### Feature Flags

- Binary on/off (not a true A/B)
- Good for rollouts
- Can convert to A/B with a percentage split

### 8️⃣ Execution Readiness Gate (Hard Stop)

You may proceed to implementation **only if all are true**:

- Hypothesis is locked
- Primary metric is frozen
- Sample size is calculated
- Test duration is defined
- Guardrails are set
- Tracking is verified

If any item is missing, stop and resolve it.
---
|
||||
|
||||
## Running the Test

### Pre-Launch Checklist

- [ ] Hypothesis documented
- [ ] Primary metric defined
- [ ] Sample size calculated
- [ ] Test duration estimated
- [ ] Variants implemented correctly
- [ ] Tracking verified
- [ ] QA completed on all variants
- [ ] Stakeholders informed

### During the Test

**DO:**

- Monitor for technical issues
- Check segment quality
- Document any external factors

**DON'T:**

- Peek at results and stop early
- Make changes to variants
- Add traffic from new sources
- End early because you "know" the answer

### Peeking Problem

Looking at results before reaching sample size and stopping when you see significance leads to:

- False positives
- Inflated effect sizes
- Wrong decisions

**Solutions:**

- Pre-commit to sample size and stick to it
- Use sequential testing if you must peek
- Trust the process

**DO NOT:**

- Stop early due to "good-looking" results
- Change variants mid-test
- Add new traffic sources
- Redefine success criteria
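The peeking problem is easy to demonstrate by simulation: run an A/A test (both arms have the same true rate, so any "winner" is a false positive) and stop at the first interim look that crosses z = 1.96. A sketch using a simplified two-proportion z-test:

```javascript
// A/A simulation: peeking after every batch inflates the false-positive
// rate well beyond the nominal 5%. For illustration only.
function zScore(convA, nA, convB, nB) {
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return se === 0 ? 0 : Math.abs(convA / nA - convB / nB) / se;
}

function peekingAATest(peeks, usersPerPeek, rate = 0.05) {
  let convA = 0, convB = 0, n = 0;
  for (let peek = 0; peek < peeks; peek++) {
    for (let i = 0; i < usersPerPeek; i++) {
      if (Math.random() < rate) convA++;
      if (Math.random() < rate) convB++;
    }
    n += usersPerPeek;
    if (zScore(convA, n, convB, n) > 1.96) return true; // stopped early on a "winner"
  }
  return false;
}

let falsePositives = 0;
const trials = 1000;
for (let t = 0; t < trials; t++) if (peekingAATest(10, 500)) falsePositives++;
console.log(falsePositives / trials); // typically 3-4x the nominal 0.05 with 10 looks
```

Pre-committing to a single look at the planned sample size keeps the error rate at the nominal level; if interim looks are genuinely needed, use a sequential testing procedure that budgets for them.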
---
|
||||
|
||||
## Analyzing Results

### Statistical Significance

- 95% confidence = p-value < 0.05
- Means: less than a 5% chance the result is random noise
- A threshold, not a guarantee

### Practical Significance

Statistical ≠ Practical

- Is the effect size meaningful for the business?
- Is it worth the implementation cost?
- Is it sustainable over time?

### What to Look At

1. **Did you reach sample size?**
   - If not, the result is preliminary

2. **Is it statistically significant?**
   - Check confidence intervals
   - Check the p-value

3. **Is the effect size meaningful?**
   - Compare to your MDE
   - Project business impact

4. **Are secondary metrics consistent?**
   - Do they support the primary?
   - Any unexpected effects?

5. **Any guardrail concerns?**
   - Did anything get worse?
   - Long-term risks?

6. **Segment differences?**
   - Mobile vs. desktop?
   - New vs. returning?
   - Traffic source?

### Interpreting Results

| Result | Conclusion |
|--------|------------|
| Significant winner | Implement variant |
| Significant loser | Keep control, learn why |
| No significant difference | Need more traffic or a bolder test |
| Mixed signals | Dig deeper, maybe segment |

### Analysis Discipline

When interpreting results:

- Do NOT generalize beyond the tested population
- Do NOT claim causality beyond the tested change
- Do NOT override guardrail failures
- Separate statistical significance from business judgment

### Interpretation Outcomes

| Result | Action |
|----------------------|------------------------------------------|
| Significant positive | Consider rollout |
| Significant negative | Reject variant, document learning |
| Inconclusive | Consider more traffic or a bolder change |
| Guardrail failure | Do not ship, even if primary wins |
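Checking a confidence interval, as step 2 suggests, can be sketched as follows (normal approximation; defer to your testing tool's own statistics for real decisions):

```javascript
// 95% confidence interval for the difference in conversion rates.
function diffConfidenceInterval(convControl, nControl, convVariant, nVariant, z = 1.96) {
  const pC = convControl / nControl;
  const pV = convVariant / nVariant;
  const se = Math.sqrt((pC * (1 - pC)) / nControl + (pV * (1 - pV)) / nVariant);
  const diff = pV - pC;
  return { diff, lower: diff - z * se, upper: diff + z * se };
}

// 3.0% control vs. 3.6% variant, 13,000 users per arm
const ci = diffConfidenceInterval(390, 13000, 468, 13000);
console.log(ci.lower > 0 || ci.upper < 0); // true: the interval excludes zero
```

Reporting the interval rather than a bare p-value also answers the practical-significance question: if the lower bound of the lift is below your MDE, a "significant" result may still not be worth shipping.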
---
|
||||
|
||||
## Documenting and Learning

### Test Record (Mandatory)

Document:

- Hypothesis
- Variants
- Metrics
- Sample size (target vs. achieved)
- Results
- Decision
- Learnings
- Follow-up ideas

### Test Documentation

```
Test Name: [Name]
Test ID: [ID in testing tool]
Dates: [Start] - [End]
Owner: [Name]

Hypothesis:
[Full hypothesis statement]

Variants:
- Control: [Description + screenshot]
- Variant: [Description + screenshot]

Results:
- Sample size: [achieved vs. target]
- Primary metric: [control] vs. [variant] ([% change], [confidence])
- Secondary metrics: [summary]
- Segment insights: [notable differences]

Decision: [Winner/Loser/Inconclusive]
Action: [What we're doing]

Learnings:
[What we learned, what to test next]
```

### Building a Learning Repository

- Central location for all tests
- Searchable by page, element, and outcome
- Prevents re-running failed tests
- Builds institutional knowledge

Store records in a shared, searchable location to avoid repeating failed tests.
---
|
||||
|
||||
## Output Format

### Test Plan Document

```
# A/B Test: [Name]

## Hypothesis
[Full hypothesis using the framework]

## Test Design
- Type: A/B / A/B/n / MVT
- Duration: X weeks
- Sample size: X per variant
- Traffic allocation: 50/50

## Variants
[Control and variant descriptions with visuals]

## Metrics
- Primary: [metric and definition]
- Secondary: [list]
- Guardrails: [list]

## Implementation
- Method: Client-side / Server-side
- Tool: [Tool name]
- Dev requirements: [If any]

## Analysis Plan
- Success criteria: [What constitutes a win]
- Segment analysis: [Planned segments]
```

### Results Summary

Delivered when the test is complete.

### Recommendations

Next steps based on the results.

## Refusal Conditions (Safety)

Refuse to proceed if:

- Baseline rate is unknown and cannot be estimated
- Traffic is insufficient to detect the MDE
- Primary metric is undefined
- Multiple variables are changed without proper design
- Hypothesis cannot be clearly stated

Explain why and recommend next steps.
---
|
||||
|
||||
## Common Mistakes

### Test Design

- Testing too small a change (undetectable)
- Testing too many things at once (can't isolate the cause)
- No clear hypothesis
- Wrong audience

### Execution

- Stopping early
- Changing things mid-test
- Not checking the implementation
- Uneven traffic allocation

### Analysis

- Ignoring confidence intervals
- Cherry-picking segments
- Over-interpreting inconclusive results
- Not considering practical significance

## Key Principles (Non-Negotiable)

- One hypothesis per test
- One primary metric
- Commit before launch
- No peeking
- Learning over winning
- Statistical rigor first
---
|
||||
|
||||
## Questions to Ask

If you need more context:

1. What's your current conversion rate?
2. How much traffic does this page get?
3. What change are you considering, and why?
4. What's the smallest improvement worth detecting?
5. What tools do you have for testing?
6. Have you tested this area before?

---

## Related Skills

- **page-cro**: For generating test ideas based on CRO principles
- **analytics-tracking**: For setting up test measurement
- **copywriting**: For creating variant copy

---

## Final Reminder

A/B testing is not about proving ideas right.
It is about **learning the truth with confidence**.

If you feel tempted to rush, simplify, or "just try it", that is the signal to **slow down and re-check the design**.
skills/accessibility-compliance-accessibility-audit/SKILL.md (new file, 42 lines)
@@ -0,0 +1,42 @@
---
name: accessibility-compliance-accessibility-audit
description: "You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance."
---

# Accessibility Audit and Testing

You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct comprehensive audits, identify barriers, provide remediation guidance, and ensure digital products are accessible to all users.

## Use this skill when

- Auditing web or mobile experiences for WCAG compliance
- Identifying accessibility barriers and remediation priorities
- Establishing ongoing accessibility testing practices
- Preparing compliance evidence for stakeholders

## Do not use this skill when

- You only need a general UI design review without accessibility scope
- The request is unrelated to user experience or compliance
- You cannot access the UI, design artifacts, or content

## Context

The user needs to audit and improve accessibility to ensure compliance with WCAG standards and provide an inclusive experience for users with disabilities. Focus on automated testing, manual verification, remediation strategies, and establishing ongoing accessibility practices.

## Requirements

$ARGUMENTS

## Instructions

- Confirm scope (platforms, WCAG level, target pages, key user journeys).
- Run automated scans to collect baseline violations and coverage gaps.
- Perform manual checks (keyboard, screen reader, focus order, contrast).
- Map findings to WCAG criteria, severity, and user impact.
- Provide remediation steps and re-test after fixes.
- If detailed procedures are required, open `resources/implementation-playbook.md`.

## Resources

- `resources/implementation-playbook.md` for detailed audit steps, tooling, and remediation examples.
@@ -0,0 +1,502 @@
# Accessibility Audit and Testing Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Instructions

### 1. Automated Testing with axe-core

```javascript
// accessibility-test.js
const { AxePuppeteer } = require("@axe-core/puppeteer");
const puppeteer = require("puppeteer");

class AccessibilityAuditor {
  constructor(options = {}) {
    this.wcagLevel = options.wcagLevel || "AA";
    this.viewport = options.viewport || { width: 1920, height: 1080 };
  }

  async runFullAudit(url) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setViewport(this.viewport);
    await page.goto(url, { waitUntil: "networkidle2" });

    const results = await new AxePuppeteer(page)
      .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"])
      .exclude(".no-a11y-check")
      .analyze();

    await browser.close();

    return {
      url,
      timestamp: new Date().toISOString(),
      violations: results.violations.map((v) => ({
        id: v.id,
        impact: v.impact,
        description: v.description,
        help: v.help,
        helpUrl: v.helpUrl,
        nodes: v.nodes.map((n) => ({
          html: n.html,
          target: n.target,
          failureSummary: n.failureSummary,
        })),
      })),
      score: this.calculateScore(results),
    };
  }

  calculateScore(results) {
    const weights = { critical: 10, serious: 5, moderate: 2, minor: 1 };
    let totalWeight = 0;
    results.violations.forEach((v) => {
      totalWeight += weights[v.impact] || 0;
    });
    return Math.max(0, 100 - totalWeight);
  }
}
```

Component testing with jest-axe (ES-module syntax; this belongs in a separate test file from the CommonJS auditor above):

```javascript
import { render } from "@testing-library/react";
import { axe, toHaveNoViolations } from "jest-axe";

expect.extend(toHaveNoViolations);

describe("Accessibility Tests", () => {
  it("should have no violations", async () => {
    const { container } = render(<MyComponent />);
    const results = await axe(container);
    expect(results).toHaveNoViolations();
  });
});
```
### 2. Color Contrast Validation

```javascript
// color-contrast.js
class ColorContrastAnalyzer {
  constructor() {
    this.wcagLevels = {
      AA: { normal: 4.5, large: 3 },
      AAA: { normal: 7, large: 4.5 },
    };
  }

  async analyzePageContrast(page) {
    const elements = await page.evaluate(() => {
      return Array.from(document.querySelectorAll("*"))
        .filter((el) => el.innerText && el.innerText.trim())
        .map((el) => {
          const styles = window.getComputedStyle(el);
          return {
            text: el.innerText.trim().substring(0, 50),
            color: styles.color,
            backgroundColor: styles.backgroundColor,
            fontSize: parseFloat(styles.fontSize),
            fontWeight: styles.fontWeight,
          };
        });
    });

    return elements
      .map((el) => {
        const contrast = this.calculateContrast(el.color, el.backgroundColor);
        const isLarge = this.isLargeText(el.fontSize, el.fontWeight);
        const required = isLarge
          ? this.wcagLevels.AA.large
          : this.wcagLevels.AA.normal;

        if (contrast < required) {
          return {
            text: el.text,
            currentContrast: contrast.toFixed(2),
            requiredContrast: required,
            foreground: el.color,
            background: el.backgroundColor,
          };
        }
        return null;
      })
      .filter(Boolean);
  }

  calculateContrast(fg, bg) {
    const l1 = this.relativeLuminance(this.parseColor(fg));
    const l2 = this.relativeLuminance(this.parseColor(bg));
    const lighter = Math.max(l1, l2);
    const darker = Math.min(l1, l2);
    return (lighter + 0.05) / (darker + 0.05);
  }

  // Parse "rgb(r, g, b)" / "rgba(r, g, b, a)" strings from getComputedStyle.
  // Note: does not resolve transparent or inherited backgrounds.
  parseColor(colorString) {
    const parts = colorString.match(/\d+(\.\d+)?/g) || ["0", "0", "0"];
    return parts.slice(0, 3).map(Number);
  }

  // WCAG "large text": at least 24px, or at least 18.66px (14pt) bold.
  isLargeText(fontSize, fontWeight) {
    const bold = fontWeight === "bold" || parseInt(fontWeight, 10) >= 700;
    return fontSize >= 24 || (bold && fontSize >= 18.66);
  }

  relativeLuminance(rgb) {
    const [r, g, b] = rgb.map((val) => {
      val = val / 255;
      return val <= 0.03928 ? val / 12.92 : Math.pow((val + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
  }
}
```

High-contrast support in CSS:

```css
@media (prefers-contrast: high) {
  :root {
    --text-primary: #000;
    --bg-primary: #fff;
    --border-color: #000;
  }
  a { text-decoration: underline !important; }
  button, input { border: 2px solid var(--border-color) !important; }
}
```
### 3. Keyboard Navigation Testing

```javascript
// keyboard-navigation.js
class KeyboardNavigationTester {
  async testKeyboardNavigation(page) {
    const results = {
      focusableElements: [],
      missingFocusIndicators: [],
      keyboardTraps: [], // populated by a separate trap-detection pass
    };

    // Get all focusable elements
    const focusable = await page.evaluate(() => {
      const selector =
        'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';
      return Array.from(document.querySelectorAll(selector)).map((el) => ({
        tagName: el.tagName.toLowerCase(),
        text: el.innerText || el.value || el.placeholder || "",
        tabIndex: el.tabIndex,
      }));
    });

    results.focusableElements = focusable;

    // Test tab order and focus indicators
    for (let i = 0; i < focusable.length; i++) {
      await page.keyboard.press("Tab");

      const focused = await page.evaluate(() => {
        const el = document.activeElement;
        return {
          tagName: el.tagName.toLowerCase(),
          hasFocusIndicator: window.getComputedStyle(el).outline !== "none",
        };
      });

      if (!focused.hasFocusIndicator) {
        results.missingFocusIndicators.push(focused);
      }
    }

    return results;
  }
}

// Enhance keyboard accessibility: close open modals on Escape
// (closeModal is your application's own modal-dismiss routine)
document.addEventListener("keydown", (e) => {
  if (e.key === "Escape") {
    const modal = document.querySelector(".modal.open");
    if (modal) closeModal(modal);
  }
});

// Make clickable divs keyboard-accessible
document.querySelectorAll("[onclick]").forEach((el) => {
  if (!["a", "button", "input"].includes(el.tagName.toLowerCase())) {
    el.setAttribute("tabindex", "0");
    el.setAttribute("role", "button");
    el.addEventListener("keydown", (e) => {
      if (e.key === "Enter" || e.key === " ") {
        el.click();
        e.preventDefault();
      }
    });
  }
});
```
### 4. Screen Reader Testing

```javascript
// screen-reader-test.js
class ScreenReaderTester {
  async testScreenReaderCompatibility(page) {
    return {
      landmarks: await this.testLandmarks(page),
      headings: await this.testHeadingStructure(page),
      images: await this.testImageAccessibility(page),
      forms: await this.testFormAccessibility(page),
    };
  }

  async testLandmarks(page) {
    // Count common landmark regions so missing ones stand out
    return page.evaluate(() =>
      ["main", "nav", "header", "footer", '[role="main"]', '[role="navigation"]'].map(
        (selector) => ({
          selector,
          count: document.querySelectorAll(selector).length,
        }),
      ),
    );
  }

  async testImageAccessibility(page) {
    // Flag images with no alt attribute at all
    return page.evaluate(() =>
      Array.from(document.querySelectorAll("img")).map((img) => ({
        src: img.src,
        hasAlt: img.hasAttribute("alt"),
        isDecorative: img.getAttribute("role") === "presentation",
      })),
    );
  }

  async testHeadingStructure(page) {
    const headings = await page.evaluate(() => {
      return Array.from(
        document.querySelectorAll("h1, h2, h3, h4, h5, h6"),
      ).map((h) => ({
        level: parseInt(h.tagName[1]),
        text: h.textContent.trim(),
        isEmpty: !h.textContent.trim(),
      }));
    });

    const issues = [];
    let previousLevel = 0;

    headings.forEach((heading, index) => {
      if (heading.level > previousLevel + 1 && previousLevel !== 0) {
        issues.push({
          type: "skipped-level",
          message: `Heading level ${heading.level} skips from level ${previousLevel}`,
        });
      }
      if (heading.isEmpty) {
        issues.push({ type: "empty-heading", index });
      }
      previousLevel = heading.level;
    });

    if (!headings.some((h) => h.level === 1)) {
      issues.push({ type: "missing-h1", message: "Page missing h1 element" });
    }

    return { headings, issues };
  }

  async testFormAccessibility(page) {
    const forms = await page.evaluate(() => {
      return Array.from(document.querySelectorAll("form")).map((form) => {
        const inputs = form.querySelectorAll("input, textarea, select");
        return {
          fields: Array.from(inputs).map((input) => ({
            type: input.type || input.tagName.toLowerCase(),
            id: input.id,
            hasLabel: input.id
              ? !!document.querySelector(`label[for="${input.id}"]`)
              : !!input.closest("label"),
            hasAriaLabel: !!input.getAttribute("aria-label"),
            required: input.required,
          })),
        };
      });
    });

    const issues = [];
    forms.forEach((form, i) => {
      form.fields.forEach((field, j) => {
        if (!field.hasLabel && !field.hasAriaLabel) {
          issues.push({ type: "missing-label", form: i, field: j });
        }
      });
    });

    return { forms, issues };
  }
}

// ARIA patterns
const ariaPatterns = {
  modal: `
<div role="dialog" aria-labelledby="modal-title" aria-modal="true">
  <h2 id="modal-title">Modal Title</h2>
  <button aria-label="Close">×</button>
</div>`,

  tabs: `
<div role="tablist" aria-label="Navigation">
  <button role="tab" id="tab-1" aria-selected="true" aria-controls="panel-1">Tab 1</button>
</div>
<div role="tabpanel" id="panel-1" aria-labelledby="tab-1">Content</div>`,

  form: `
<label for="name">Name <span aria-label="required">*</span></label>
<input id="name" required aria-required="true" aria-describedby="name-error">
<span id="name-error" role="alert" aria-live="polite"></span>`,
};
```
### 5. Manual Testing Checklist

```markdown
## Manual Accessibility Testing

### Keyboard Navigation

- [ ] All interactive elements accessible via Tab
- [ ] Buttons activate with Enter/Space
- [ ] Esc key closes modals
- [ ] Focus indicator always visible
- [ ] No keyboard traps
- [ ] Logical tab order

### Screen Reader

- [ ] Page title descriptive
- [ ] Headings create a logical outline
- [ ] Images have alt text
- [ ] Form fields have labels
- [ ] Error messages announced
- [ ] Dynamic updates announced

### Visual

- [ ] Text resizes to 200% without loss
- [ ] Color is not the sole means of conveying information
- [ ] Focus indicators have sufficient contrast
- [ ] Content reflows at 320px
- [ ] Animations can be paused

### Cognitive

- [ ] Instructions clear and simple
- [ ] Error messages helpful
- [ ] No time limits on forms
- [ ] Navigation consistent
- [ ] Important actions reversible
```
### 6. Remediation Examples

```javascript
// Fix missing alt text (stopgap only: "Image" is a placeholder, not a
// real description; authors should supply meaningful alt text)
document.querySelectorAll("img:not([alt])").forEach((img) => {
  const isDecorative =
    img.getAttribute("role") === "presentation" ||
    img.closest('[role="presentation"]');
  img.setAttribute("alt", isDecorative ? "" : img.title || "Image");
});

// Fix missing labels
document
  .querySelectorAll("input:not([aria-label]):not([id])")
  .forEach((input) => {
    if (input.placeholder) {
      input.setAttribute("aria-label", input.placeholder);
    }
  });

// React accessible components
const AccessibleButton = ({ children, onClick, ariaLabel, ...props }) => (
  <button onClick={onClick} aria-label={ariaLabel} {...props}>
    {children}
  </button>
);

const LiveRegion = ({ message, politeness = "polite" }) => (
  <div
    role="status"
    aria-live={politeness}
    aria-atomic="true"
    className="sr-only"
  >
    {message}
  </div>
);
```
### 7. CI/CD Integration

```yaml
# .github/workflows/accessibility.yml
name: Accessibility Tests

on: [push, pull_request]

jobs:
  a11y-tests:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: "18"

      - name: Install and build
        run: |
          npm ci
          npm run build

      - name: Start server
        run: |
          npm start &
          npx wait-on http://localhost:3000

      - name: Run axe tests
        run: npm run test:a11y

      - name: Run pa11y
        run: npx pa11y http://localhost:3000 --standard WCAG2AA --threshold 0

      - name: Upload report
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: a11y-report
          path: a11y-report.html
```
### 8. Reporting

```javascript
// report-generator.js
class AccessibilityReportGenerator {
  generateHTMLReport(auditResults) {
    return `
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Accessibility Audit</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    .summary { background: #f0f0f0; padding: 20px; border-radius: 8px; }
    .score { font-size: 48px; font-weight: bold; }
    .violation { margin: 20px 0; padding: 15px; border: 1px solid #ddd; }
    .critical { border-color: #f00; background: #fee; }
    .serious { border-color: #fa0; background: #ffe; }
  </style>
</head>
<body>
  <h1>Accessibility Audit Report</h1>
  <p>Generated: ${new Date().toLocaleString()}</p>

  <div class="summary">
    <h2>Summary</h2>
    <div class="score">${auditResults.score}/100</div>
    <p>Total Violations: ${auditResults.violations.length}</p>
  </div>

  <h2>Violations</h2>
  ${auditResults.violations
    .map(
      (v) => `
  <div class="violation ${v.impact}">
    <h3>${v.help}</h3>
    <p><strong>Impact:</strong> ${v.impact}</p>
    <p>${v.description}</p>
    <a href="${v.helpUrl}">Learn more</a>
  </div>
  `,
    )
    .join("")}
</body>
</html>`;
  }
}
```

## Output Format

1. **Accessibility Score**: Overall compliance with WCAG levels
2. **Violation Report**: Detailed issues with severity and fixes
3. **Test Results**: Automated and manual test outcomes
4. **Remediation Guide**: Step-by-step fixes for each issue
5. **Code Examples**: Accessible component implementations

Focus on creating inclusive experiences that work for all users, regardless of their abilities or assistive technologies.
skills/agent-orchestration-improve-agent/SKILL.md (new file, 349 lines)
@@ -0,0 +1,349 @@
---
name: agent-orchestration-improve-agent
description: "Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration."
---

# Agent Performance Optimization Workflow

Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.

[Extended thinking: Agent optimization requires a data-driven approach combining performance metrics, user feedback analysis, and advanced prompt engineering techniques. Success depends on systematic evaluation, targeted improvements, and rigorous testing with rollback capabilities for production safety.]

## Use this skill when

- Improving an existing agent's performance or reliability
- Analyzing failure modes, prompt quality, or tool usage
- Running structured A/B tests or evaluation suites
- Designing iterative optimization workflows for agents

## Do not use this skill when

- You are building a brand-new agent from scratch
- There are no metrics, feedback, or test cases available
- The task is unrelated to agent performance or prompt quality

## Instructions

1. Establish baseline metrics and collect representative examples.
2. Identify failure modes and prioritize high-impact fixes.
3. Apply prompt and workflow improvements with measurable goals.
4. Validate with tests and roll out changes in controlled stages.

## Safety

- Avoid deploying prompt changes without regression testing.
- Roll back quickly if quality or safety metrics regress.

## Phase 1: Performance Analysis and Baseline Metrics

Comprehensive analysis of agent performance using context-manager for historical data collection.

### 1.1 Gather Performance Data

```
Use: context-manager
Command: analyze-agent-performance $ARGUMENTS --days 30
```

Collect metrics including:

- Task completion rate (successful vs failed tasks)
- Response accuracy and factual correctness
- Tool usage efficiency (correct tools, call frequency)
- Average response time and token consumption
- User satisfaction indicators (corrections, retries)
- Hallucination incidents and error patterns

### 1.2 User Feedback Pattern Analysis

Identify recurring patterns in user interactions:

- **Correction patterns**: Where users consistently modify outputs
- **Clarification requests**: Common areas of ambiguity
- **Task abandonment**: Points where users give up
- **Follow-up questions**: Indicators of incomplete responses
- **Positive feedback**: Successful patterns to preserve

### 1.3 Failure Mode Classification

Categorize failures by root cause:

- **Instruction misunderstanding**: Role or task confusion
- **Output format errors**: Structure or formatting issues
- **Context loss**: Long conversation degradation
- **Tool misuse**: Incorrect or inefficient tool selection
- **Constraint violations**: Safety or business rule breaches
- **Edge case handling**: Unusual input scenarios

### 1.4 Baseline Performance Report

Generate quantitative baseline metrics:

```
Performance Baseline:
- Task Success Rate: [X%]
- Average Corrections per Task: [Y]
- Tool Call Efficiency: [Z%]
- User Satisfaction Score: [1-10]
- Average Response Latency: [Xms]
- Token Efficiency Ratio: [X:Y]
```
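The baseline template can be filled mechanically from interaction logs. A minimal sketch, assuming each log entry is a dict with illustrative keys (`success`, `corrections`, `latency_ms`, `tokens_in`, `tokens_out`); the field names are assumptions, not a fixed schema:

```python
def compute_baseline(interactions):
    """Aggregate raw interaction logs into baseline metrics.

    Assumed per-interaction keys: success (bool), corrections (int),
    latency_ms (float), tokens_in (int), tokens_out (int).
    """
    n = len(interactions)
    if n == 0:
        raise ValueError("need at least one interaction")
    successes = sum(1 for i in interactions if i["success"])
    tokens_in = sum(i["tokens_in"] for i in interactions)
    tokens_out = sum(i["tokens_out"] for i in interactions)
    return {
        "task_success_rate": successes / n,
        "avg_corrections_per_task": sum(i["corrections"] for i in interactions) / n,
        "avg_latency_ms": sum(i["latency_ms"] for i in interactions) / n,
        # input:output token ratio, guarded against empty outputs
        "token_efficiency_ratio": tokens_in / max(tokens_out, 1),
    }
```

Computing the baseline from the same log window before and after each change keeps comparisons honest.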

## Phase 2: Prompt Engineering Improvements

Apply advanced prompt optimization techniques using the prompt-engineer agent.

### 2.1 Chain-of-Thought Enhancement

Implement structured reasoning patterns:

```
Use: prompt-engineer
Technique: chain-of-thought-optimization
```

- Add explicit reasoning steps: "Let's approach this step-by-step..."
- Include self-verification checkpoints: "Before proceeding, verify that..."
- Implement recursive decomposition for complex tasks
- Add reasoning trace visibility for debugging

### 2.2 Few-Shot Example Optimization

Curate high-quality examples from successful interactions:

- **Select diverse examples** covering common use cases
- **Include edge cases** that previously failed
- **Show both positive and negative examples** with explanations
- **Order examples** from simple to complex
- **Annotate examples** with key decision points

Example structure:

```
Good Example:
Input: [User request]
Reasoning: [Step-by-step thought process]
Output: [Successful response]
Why this works: [Key success factors]

Bad Example:
Input: [Similar request]
Output: [Failed response]
Why this fails: [Specific issues]
Correct approach: [Fixed version]
```

### 2.3 Role Definition Refinement

Strengthen agent identity and capabilities:

- **Core purpose**: Clear, single-sentence mission
- **Expertise domains**: Specific knowledge areas
- **Behavioral traits**: Personality and interaction style
- **Tool proficiency**: Available tools and when to use them
- **Constraints**: What the agent should NOT do
- **Success criteria**: How to measure task completion

### 2.4 Constitutional AI Integration

Implement self-correction mechanisms:

```
Constitutional Principles:
1. Verify factual accuracy before responding
2. Self-check for potential biases or harmful content
3. Validate output format matches requirements
4. Ensure response completeness
5. Maintain consistency with previous responses
```

Add critique-and-revise loops:

- Initial response generation
- Self-critique against principles
- Automatic revision if issues detected
- Final validation before output

### 2.5 Output Format Tuning

Optimize response structure:

- **Structured templates** for common tasks
- **Dynamic formatting** based on complexity
- **Progressive disclosure** for detailed information
- **Markdown optimization** for readability
- **Code block formatting** with syntax highlighting
- **Table and list generation** for data presentation

## Phase 3: Testing and Validation

Comprehensive testing framework with A/B comparison.

### 3.1 Test Suite Development

Create representative test scenarios:

```
Test Categories:
1. Golden path scenarios (common successful cases)
2. Previously failed tasks (regression testing)
3. Edge cases and corner scenarios
4. Stress tests (complex, multi-step tasks)
5. Adversarial inputs (potential breaking points)
6. Cross-domain tasks (combining capabilities)
```

### 3.2 A/B Testing Framework

Compare the original and improved agent:

```
Use: parallel-test-runner
Config:
- Agent A: Original version
- Agent B: Improved version
- Test set: 100 representative tasks
- Metrics: Success rate, speed, token usage
- Evaluation: Blind human review + automated scoring
```

Statistical significance testing:

- Minimum sample size: 100 tasks per variant
- Confidence level: 95% (p < 0.05)
- Effect size calculation (Cohen's d)
- Power analysis for future tests
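The significance check on binary success rates can be done with a standard two-proportion z-test using only the standard library; this is a sketch at the 95% confidence level, not the only valid test:

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, p_value) for H0: the two success rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, `two_proportion_z_test(62, 100, 78, 100)` compares a 62% baseline against a 78% variant; the change is significant at the 95% level when the returned p-value is below 0.05.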
### 3.3 Evaluation Metrics

Comprehensive scoring framework:

**Task-Level Metrics:**

- Completion rate (binary success/failure)
- Correctness score (0-100% accuracy)
- Efficiency score (steps taken vs optimal)
- Tool usage appropriateness
- Response relevance and completeness

**Quality Metrics:**

- Hallucination rate (factual errors per response)
- Consistency score (alignment with previous responses)
- Format compliance (matches specified structure)
- Safety score (constraint adherence)
- User satisfaction prediction

**Performance Metrics:**

- Response latency (time to first token)
- Total generation time
- Token consumption (input + output)
- Cost per task (API usage fees)
- Memory/context efficiency
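When a single comparable number is needed across variants, the metric groups above can be folded into a weighted aggregate. A minimal sketch; the normalization to [0, 1] and the weights are illustrative assumptions, not a prescribed scheme:

```python
def aggregate_score(metrics, weights=None):
    """Combine normalized metric scores into a single number.

    `metrics` maps metric name -> score in [0, 1];
    `weights` maps metric name -> relative weight (defaults to equal).
    """
    if weights is None:
        weights = {name: 1.0 for name in metrics}
    total_weight = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_weight
```

Weighting lets safety or correctness dominate the headline score without hiding the per-metric breakdown.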
### 3.4 Human Evaluation Protocol

Structured human review process:

- Blind evaluation (evaluators don't know which version produced each output)
- Standardized rubric with clear criteria
- Multiple evaluators per sample (inter-rater reliability)
- Qualitative feedback collection
- Preference ranking (A vs B comparison)

## Phase 4: Version Control and Deployment

Safe rollout with monitoring and rollback capabilities.

### 4.1 Version Management

Systematic versioning strategy:

```
Version Format: agent-name-v[MAJOR].[MINOR].[PATCH]
Example: customer-support-v2.3.1

MAJOR: Significant capability changes
MINOR: Prompt improvements, new examples
PATCH: Bug fixes, minor adjustments
```

Maintain version history:

- Git-based prompt storage
- Changelog with improvement details
- Performance metrics per version
- Rollback procedures documented

### 4.2 Staged Rollout

Progressive deployment strategy:

1. **Alpha testing**: Internal team validation (5% traffic)
2. **Beta testing**: Selected users (20% traffic)
3. **Canary release**: Gradual increase (20% → 50% → 100%)
4. **Full deployment**: After success criteria met
5. **Monitoring period**: 7-day observation window

### 4.3 Rollback Procedures

Quick recovery mechanism:

```
Rollback Triggers:
- Success rate drops >10% from baseline
- Critical errors increase >5%
- User complaints spike
- Cost per task increases >20%
- Safety violations detected

Rollback Process:
1. Detect issue via monitoring
2. Alert team immediately
3. Switch to previous stable version
4. Analyze root cause
5. Fix and re-test before retry
```
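The rollback triggers can be encoded as a simple guard evaluated by the monitoring loop. A sketch with the threshold values mirroring the trigger list; the metric names are illustrative assumptions:

```python
def should_rollback(baseline, current):
    """Return the list of tripped rollback triggers (empty means healthy).

    Both arguments are dicts with assumed keys: success_rate,
    critical_error_rate, cost_per_task, safety_violations.
    """
    triggers = []
    if current["success_rate"] < baseline["success_rate"] - 0.10:
        triggers.append("success rate dropped >10% from baseline")
    if current["critical_error_rate"] > baseline["critical_error_rate"] + 0.05:
        triggers.append("critical errors increased >5%")
    if current["cost_per_task"] > baseline["cost_per_task"] * 1.20:
        triggers.append("cost per task increased >20%")
    if current["safety_violations"] > 0:
        triggers.append("safety violations detected")
    return triggers
```

A non-empty return value is the signal to alert the team and switch back to the previous stable version.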
### 4.4 Continuous Monitoring

Real-time performance tracking:

- Dashboard with key metrics
- Anomaly detection alerts
- User feedback collection
- Automated regression testing
- Weekly performance reports

## Success Criteria

Agent improvement is successful when:

- Task success rate improves by ≥15%
- User corrections decrease by ≥25%
- No increase in safety violations
- Response time remains within 10% of baseline
- Cost per task doesn't increase >5%
- Positive user feedback increases

## Post-Deployment Review

After 30 days of production use:

1. Analyze accumulated performance data
2. Compare against baseline and targets
3. Identify new improvement opportunities
4. Document lessons learned
5. Plan next optimization cycle

## Continuous Improvement Cycle

Establish regular improvement cadence:

- **Weekly**: Monitor metrics and collect feedback
- **Monthly**: Analyze patterns and plan improvements
- **Quarterly**: Major version updates with new capabilities
- **Annually**: Strategic review and architecture updates

Remember: Agent optimization is an iterative process. Each cycle builds upon previous learnings, gradually improving performance while maintaining stability and safety.
239
skills/agent-orchestration-multi-agent-optimize/SKILL.md
Normal file
@@ -0,0 +1,239 @@
---
name: agent-orchestration-multi-agent-optimize
description: "Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability."
---

# Multi-Agent Optimization Toolkit

## Use this skill when

- Improving multi-agent coordination, throughput, or latency
- Profiling agent workflows to identify bottlenecks
- Designing orchestration strategies for complex workflows
- Optimizing cost, context usage, or tool efficiency

## Do not use this skill when

- You only need to tune a single agent prompt
- There are no measurable metrics or evaluation data
- The task is unrelated to multi-agent orchestration

## Instructions

1. Establish baseline metrics and target performance goals.
2. Profile agent workloads and identify coordination bottlenecks.
3. Apply orchestration changes and cost controls incrementally.
4. Validate improvements with repeatable tests and rollbacks.

## Safety

- Avoid deploying orchestration changes without regression testing.
- Roll out changes gradually to prevent system-wide regressions.

## Role: AI-Powered Multi-Agent Performance Engineering Specialist

### Context

The Multi-Agent Optimization Tool is an AI-driven framework that improves system performance through coordinated, agent-based optimization. It applies modern AI orchestration techniques to performance engineering across multiple domains.

### Core Capabilities

- Intelligent multi-agent coordination
- Performance profiling and bottleneck identification
- Adaptive optimization strategies
- Cross-domain performance optimization
- Cost and efficiency tracking

## Arguments Handling

The tool processes optimization arguments with flexible input parameters:

- `$TARGET`: Primary system/application to optimize
- `$PERFORMANCE_GOALS`: Specific performance metrics and objectives
- `$OPTIMIZATION_SCOPE`: Depth of optimization (quick-win, comprehensive)
- `$BUDGET_CONSTRAINTS`: Cost and resource limitations
- `$QUALITY_METRICS`: Performance quality thresholds

## 1. Multi-Agent Performance Profiling

### Profiling Strategy

- Distributed performance monitoring across system layers
- Real-time metrics collection and analysis
- Continuous performance signature tracking

#### Profiling Agents

1. **Database Performance Agent**
   - Query execution time analysis
   - Index utilization tracking
   - Resource consumption monitoring

2. **Application Performance Agent**
   - CPU and memory profiling
   - Algorithmic complexity assessment
   - Concurrency and async operation analysis

3. **Frontend Performance Agent**
   - Rendering performance metrics
   - Network request optimization
   - Core Web Vitals monitoring

### Profiling Code Example

```python
def multi_agent_profiler(target_system):
    # The agent classes and aggregate_performance_metrics are assumed
    # to be defined elsewhere in the framework.
    agents = [
        DatabasePerformanceAgent(target_system),
        ApplicationPerformanceAgent(target_system),
        FrontendPerformanceAgent(target_system)
    ]

    performance_profile = {}
    for agent in agents:
        performance_profile[agent.__class__.__name__] = agent.profile()

    return aggregate_performance_metrics(performance_profile)
```
## 2. Context Window Optimization

### Optimization Techniques

- Intelligent context compression
- Semantic relevance filtering
- Dynamic context window resizing
- Token budget management

### Context Compression Algorithm

```python
def compress_context(context, max_tokens=4000):
    # Semantic compression using embedding-based truncation.
    # `semantic_truncate` is assumed to be provided elsewhere: it ranks
    # spans by embedding relevance and drops low-importance ones until
    # the context fits within max_tokens.
    compressed_context = semantic_truncate(
        context,
        max_tokens=max_tokens,
        importance_threshold=0.7
    )
    return compressed_context
```
## 3. Agent Coordination Efficiency

### Coordination Principles

- Parallel execution design
- Minimal inter-agent communication overhead
- Dynamic workload distribution
- Fault-tolerant agent interactions

### Orchestration Framework

```python
import concurrent.futures
from queue import PriorityQueue


class MultiAgentOrchestrator:
    def __init__(self, agents):
        # PerformanceTracker is assumed to be defined elsewhere
        self.agents = agents
        self.execution_queue = PriorityQueue()
        self.performance_tracker = PerformanceTracker()

    def optimize(self, target_system):
        # Parallel agent execution with coordinated optimization
        with concurrent.futures.ThreadPoolExecutor() as executor:
            futures = {
                executor.submit(agent.optimize, target_system): agent
                for agent in self.agents
            }

            # Log each agent's result as soon as it completes
            for future in concurrent.futures.as_completed(futures):
                agent = futures[future]
                result = future.result()
                self.performance_tracker.log(agent, result)
```
## 4. Parallel Execution Optimization

### Key Strategies

- Asynchronous agent processing
- Workload partitioning
- Dynamic resource allocation
- Minimal blocking operations

## 5. Cost Optimization Strategies

### LLM Cost Management

- Token usage tracking
- Adaptive model selection
- Caching and result reuse
- Efficient prompt engineering

### Cost Tracking Example

```python
class CostOptimizer:
    def __init__(self):
        self.token_budget = 100000  # Monthly budget
        self.token_usage = 0
        # Illustrative relative prices; real pricing varies by provider
        self.model_costs = {
            'gpt-5': 0.03,
            'claude-4-sonnet': 0.015,
            'claude-4-haiku': 0.0025
        }

    def select_optimal_model(self, complexity):
        # Dynamic model selection based on task complexity and budget.
        # Illustrative policy: cheap model for simple tasks or when the
        # budget is nearly exhausted, largest model only when warranted.
        if self.token_usage >= 0.9 * self.token_budget or complexity < 0.3:
            return 'claude-4-haiku'
        if complexity < 0.7:
            return 'claude-4-sonnet'
        return 'gpt-5'
```
## 6. Latency Reduction Techniques

### Performance Acceleration

- Predictive caching
- Pre-warming agent contexts
- Intelligent result memoization
- Reduced round-trip communication
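Result memoization from the list above can be sketched as a small TTL cache; the key scheme and TTL policy are assumptions for illustration:

```python
import time

class MemoCache:
    """Memoize expensive agent results with a time-to-live."""

    def __init__(self, ttl_seconds=300.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for deterministic testing
        self._store = {}

    def get_or_compute(self, key, compute):
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # still fresh: skip the round trip
        value = compute()
        self._store[key] = (now, value)
        return value
```

Keying on a normalized form of the request (rather than the raw string) raises the hit rate without returning stale results past the TTL.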
## 7. Quality vs Speed Tradeoffs

### Optimization Spectrum

- Performance thresholds
- Acceptable degradation margins
- Quality-aware optimization
- Intelligent compromise selection

## 8. Monitoring and Continuous Improvement

### Observability Framework

- Real-time performance dashboards
- Automated optimization feedback loops
- Machine learning-driven improvement
- Adaptive optimization strategies

## Reference Workflows

### Workflow 1: E-Commerce Platform Optimization

1. Initial performance profiling
2. Agent-based optimization
3. Cost and performance tracking
4. Continuous improvement cycle

### Workflow 2: Enterprise API Performance Enhancement

1. Comprehensive system analysis
2. Multi-layered agent optimization
3. Iterative performance refinement
4. Cost-efficient scaling strategy

## Key Considerations

- Always measure before and after optimization
- Maintain system stability during optimization
- Balance performance gains with resource consumption
- Implement gradual, reversible changes

Target Optimization: $ARGUMENTS
171
skills/ai-engineer/SKILL.md
Normal file
@@ -0,0 +1,171 @@
---
name: ai-engineer
description: Build production-ready LLM applications, advanced RAG systems, and
  intelligent agents. Implements vector search, multimodal AI, agent
  orchestration, and enterprise AI integrations. Use PROACTIVELY for LLM
  features, chatbots, AI agents, or AI-powered applications.
metadata:
  model: inherit
---

You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.

## Use this skill when

- Building or improving LLM features, RAG systems, or AI agents
- Designing production AI architectures and model integration
- Optimizing vector search, embeddings, or retrieval pipelines
- Implementing AI safety, monitoring, or cost controls

## Do not use this skill when

- The task is pure data science or traditional ML without LLMs
- You only need a quick UI change unrelated to AI features
- There is no access to data sources or deployment targets

## Instructions

1. Clarify use cases, constraints, and success metrics.
2. Design the AI architecture, data flow, and model selection.
3. Implement with monitoring, safety, and cost controls.
4. Validate with tests and staged rollout plans.

## Safety

- Avoid sending sensitive data to external models without approval.
- Add guardrails for prompt injection, PII, and policy compliance.

## Purpose

Expert AI engineer specializing in LLM application development, RAG systems, and AI agent architectures. Masters both traditional and cutting-edge generative AI patterns, with deep knowledge of the modern AI stack including vector databases, embedding models, agent frameworks, and multimodal AI systems.

## Capabilities

### LLM Integration & Model Management

- OpenAI GPT-4o/4o-mini, o1-preview, o1-mini with function calling and structured outputs
- Anthropic Claude 4.5 Sonnet/Haiku, Claude 4.1 Opus with tool use and computer use
- Open-source models: Llama 3.1/3.2, Mixtral 8x7B/8x22B, Qwen 2.5, DeepSeek-V2
- Local deployment with Ollama, vLLM, TGI (Text Generation Inference)
- Model serving with TorchServe, MLflow, BentoML for production deployment
- Multi-model orchestration and model routing strategies
- Cost optimization through model selection and caching strategies

### Advanced RAG Systems

- Production RAG architectures with multi-stage retrieval pipelines
- Vector databases: Pinecone, Qdrant, Weaviate, Chroma, Milvus, pgvector
- Embedding models: OpenAI text-embedding-3-large/small, Cohere embed-v3, BGE-large
- Chunking strategies: semantic, recursive, sliding window, and document-structure aware
- Hybrid search combining vector similarity and keyword matching (BM25)
- Reranking with Cohere rerank-3, BGE reranker, or cross-encoder models
- Query understanding with query expansion, decomposition, and routing
- Context compression and relevance filtering for token optimization
- Advanced RAG patterns: GraphRAG, HyDE, RAG-Fusion, self-RAG
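Hybrid search results from the vector and BM25 sides are often merged with reciprocal rank fusion (RRF). A minimal stdlib sketch, assuming the two ranked lists of document ids come from a vector index and a keyword index respectively:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    Each document scores sum(1 / (k + rank)) over the lists containing
    it; k=60 is the commonly used RRF constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists for one query:
# vector_hits = ["d3", "d1", "d7"]  # from the vector index
# bm25_hits = ["d1", "d5", "d3"]    # from BM25 keyword search
# fused = reciprocal_rank_fusion([vector_hits, bm25_hits])
```

RRF is rank-based, so it needs no score normalization between the two retrievers, which is why it is a common default before a reranking stage.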
### Agent Frameworks & Orchestration

- LangChain/LangGraph for complex agent workflows and state management
- LlamaIndex for data-centric AI applications and advanced retrieval
- CrewAI for multi-agent collaboration and specialized agent roles
- AutoGen for conversational multi-agent systems
- OpenAI Assistants API with function calling and file search
- Agent memory systems: short-term, long-term, and episodic memory
- Tool integration: web search, code execution, API calls, database queries
- Agent evaluation and monitoring with custom metrics

### Vector Search & Embeddings

- Embedding model selection and fine-tuning for domain-specific tasks
- Vector indexing strategies: HNSW, IVF, LSH for different scale requirements
- Similarity metrics: cosine, dot product, Euclidean for various use cases
- Multi-vector representations for complex document structures
- Embedding drift detection and model versioning
- Vector database optimization: indexing, sharding, and caching strategies

### Prompt Engineering & Optimization

- Advanced prompting techniques: chain-of-thought, tree-of-thoughts, self-consistency
- Few-shot and in-context learning optimization
- Prompt templates with dynamic variable injection and conditioning
- Constitutional AI and self-critique patterns
- Prompt versioning, A/B testing, and performance tracking
- Safety prompting: jailbreak detection, content filtering, bias mitigation
- Multi-modal prompting for vision and audio models

### Production AI Systems

- LLM serving with FastAPI, async processing, and load balancing
- Streaming responses and real-time inference optimization
- Caching strategies: semantic caching, response memoization, embedding caching
- Rate limiting, quota management, and cost controls
- Error handling, fallback strategies, and circuit breakers
- A/B testing frameworks for model comparison and gradual rollouts
- Observability: logging, metrics, tracing with LangSmith, Phoenix, Weights & Biases
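Rate limiting from the production list above is commonly implemented as a token bucket. A minimal sketch with an injectable clock; the class and parameter names are illustrative:

```python
class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.clock = clock          # injectable time source
        self.tokens = float(capacity)
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In production the same structure typically lives in a shared store (e.g. a key per API client) so limits hold across replicas.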
### Multimodal AI Integration

- Vision models: GPT-4V, Claude 4 Vision, LLaVA, CLIP for image understanding
- Audio processing: Whisper for speech-to-text, ElevenLabs for text-to-speech
- Document AI: OCR, table extraction, layout understanding with models like LayoutLM
- Video analysis and processing for multimedia applications
- Cross-modal embeddings and unified vector spaces

### AI Safety & Governance

- Content moderation with OpenAI Moderation API and custom classifiers
- Prompt injection detection and prevention strategies
- PII detection and redaction in AI workflows
- Model bias detection and mitigation techniques
- AI system auditing and compliance reporting
- Responsible AI practices and ethical considerations

### Data Processing & Pipeline Management

- Document processing: PDF extraction, web scraping, API integrations
- Data preprocessing: cleaning, normalization, deduplication
- Pipeline orchestration with Apache Airflow, Dagster, Prefect
- Real-time data ingestion with Apache Kafka, Pulsar
- Data versioning with DVC, lakeFS for reproducible AI pipelines
- ETL/ELT processes for AI data preparation

### Integration & API Development

- RESTful API design for AI services with FastAPI, Flask
- GraphQL APIs for flexible AI data querying
- Webhook integration and event-driven architectures
- Third-party AI service integration: Azure OpenAI, AWS Bedrock, GCP Vertex AI
- Enterprise system integration: Slack bots, Microsoft Teams apps, Salesforce
- API security: OAuth, JWT, API key management

## Behavioral Traits

- Prioritizes production reliability and scalability over proof-of-concept implementations
- Implements comprehensive error handling and graceful degradation
- Focuses on cost optimization and efficient resource utilization
- Emphasizes observability and monitoring from day one
- Considers AI safety and responsible AI practices in all implementations
- Uses structured outputs and type safety wherever possible
- Implements thorough testing including adversarial inputs
- Documents AI system behavior and decision-making processes
- Stays current with rapidly evolving AI/ML landscape
- Balances cutting-edge techniques with proven, stable solutions

## Knowledge Base

- Latest LLM developments and model capabilities (GPT-4o, Claude 4.5, Llama 3.2)
- Modern vector database architectures and optimization techniques
- Production AI system design patterns and best practices
- AI safety and security considerations for enterprise deployments
- Cost optimization strategies for LLM applications
- Multimodal AI integration and cross-modal learning
- Agent frameworks and multi-agent system architectures
- Real-time AI processing and streaming inference
- AI observability and monitoring best practices
- Prompt engineering and optimization methodologies

## Response Approach

1. **Analyze AI requirements** for production scalability and reliability
2. **Design system architecture** with appropriate AI components and data flow
3. **Implement production-ready code** with comprehensive error handling
4. **Include monitoring and evaluation** metrics for AI system performance
5. **Consider cost and latency** implications of AI service usage
6. **Document AI behavior** and provide debugging capabilities
7. **Implement safety measures** for responsible AI deployment
8. **Provide testing strategies** including adversarial and edge cases

## Example Interactions

- "Build a production RAG system for enterprise knowledge base with hybrid search"
- "Implement a multi-agent customer service system with escalation workflows"
- "Design a cost-optimized LLM inference pipeline with caching and load balancing"
- "Create a multimodal AI system for document analysis and question answering"
- "Build an AI agent that can browse the web and perform research tasks"
- "Implement semantic search with reranking for improved retrieval accuracy"
- "Design an A/B testing framework for comparing different LLM prompts"
- "Create a real-time AI content moderation system with custom classifiers"
41
skills/airflow-dag-patterns/SKILL.md
Normal file
@@ -0,0 +1,41 @@
---
name: airflow-dag-patterns
description: Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs.
---

# Apache Airflow DAG Patterns

Production-ready patterns for Apache Airflow including DAG design, operators, sensors, testing, and deployment strategies.

## Use this skill when

- Creating data pipeline orchestration with Airflow
- Designing DAG structures and dependencies
- Implementing custom operators and sensors
- Testing Airflow DAGs locally
- Setting up Airflow in production
- Debugging failed DAG runs

## Do not use this skill when

- You only need a simple cron job or shell script
- Airflow is not part of the tooling stack
- The task is unrelated to workflow orchestration

## Instructions

1. Identify data sources, schedules, and dependencies.
2. Design idempotent tasks with clear ownership and retries.
3. Implement DAGs with observability and alerting hooks.
4. Validate in staging and document operational runbooks.

Refer to `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.

## Safety

- Avoid changing production DAG schedules without approval.
- Test backfills and retries carefully to prevent data duplication.

## Resources

- `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.
509
skills/airflow-dag-patterns/resources/implementation-playbook.md
Normal file
509
skills/airflow-dag-patterns/resources/implementation-playbook.md
Normal file
@@ -0,0 +1,509 @@
# Apache Airflow DAG Patterns Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. DAG Design Principles

| Principle | Description |
|-----------|-------------|
| **Idempotent** | Running twice produces the same result |
| **Atomic** | Tasks succeed or fail completely |
| **Incremental** | Process only new/changed data |
| **Observable** | Logs, metrics, alerts at every step |

### 2. Task Dependencies

```python
# Linear
task1 >> task2 >> task3

# Fan-out
task1 >> [task2, task3, task4]

# Fan-in
[task1, task2, task3] >> task4

# Complex
task1 >> task2 >> task4
task1 >> task3 >> task4
```

## Quick Start

```python
# dags/example_dag.py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.operators.empty import EmptyOperator

default_args = {
    'owner': 'data-team',
    'depends_on_past': False,
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
    'retry_exponential_backoff': True,
    'max_retry_delay': timedelta(hours=1),
}

with DAG(
    dag_id='example_etl',
    default_args=default_args,
    description='Example ETL pipeline',
    schedule='0 6 * * *',  # Daily at 6 AM
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=['etl', 'example'],
    max_active_runs=1,
) as dag:

    start = EmptyOperator(task_id='start')

    def extract_data(**context):
        execution_date = context['ds']
        # Extract logic here
        return {'records': 1000}

    extract = PythonOperator(
        task_id='extract',
        python_callable=extract_data,
    )

    end = EmptyOperator(task_id='end')

    start >> extract >> end
```

## Patterns

### Pattern 1: TaskFlow API (Airflow 2.0+)

```python
# dags/taskflow_example.py
from datetime import datetime
from airflow.decorators import dag, task
from airflow.operators.python import get_current_context

@dag(
    dag_id='taskflow_etl',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=['etl', 'taskflow'],
)
def taskflow_etl():
    """ETL pipeline using TaskFlow API"""

    @task()
    def extract(source: str) -> dict:
        """Extract data from source"""
        import pandas as pd

        # Plain f-strings are not Jinja-templated; read the logical date
        # from the runtime context instead of embedding '{{ ds }}'.
        ds = get_current_context()['ds']
        df = pd.read_csv(f's3://bucket/{source}/{ds}.csv')
        return {'data': df.to_dict(), 'rows': len(df)}

    @task()
    def transform(extracted: dict) -> dict:
        """Transform extracted data"""
        import pandas as pd

        df = pd.DataFrame(extracted['data'])
        df['processed_at'] = datetime.now()
        df = df.dropna()
        return {'data': df.to_dict(), 'rows': len(df)}

    @task()
    def load(transformed: dict, target: str):
        """Load data to target"""
        import pandas as pd

        ds = get_current_context()['ds']
        df = pd.DataFrame(transformed['data'])
        df.to_parquet(f's3://bucket/{target}/{ds}.parquet')
        return transformed['rows']

    @task()
    def notify(rows_loaded: int):
        """Send notification"""
        print(f'Loaded {rows_loaded} rows')

    # Define dependencies with XCom passing
    extracted = extract(source='raw_data')
    transformed = transform(extracted)
    loaded = load(transformed, target='processed_data')
    notify(loaded)

# Instantiate the DAG
taskflow_etl()
```
### Pattern 2: Dynamic DAG Generation

```python
# dags/dynamic_dag_factory.py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

# Configuration for multiple similar pipelines
PIPELINE_CONFIGS = [
    {'name': 'customers', 'schedule': '@daily', 'source': 's3://raw/customers'},
    {'name': 'orders', 'schedule': '@hourly', 'source': 's3://raw/orders'},
    {'name': 'products', 'schedule': '@weekly', 'source': 's3://raw/products'},
]

def create_dag(config: dict) -> DAG:
    """Factory function to create DAGs from config"""

    dag_id = f"etl_{config['name']}"

    default_args = {
        'owner': 'data-team',
        'retries': 3,
        'retry_delay': timedelta(minutes=5),
    }

    dag = DAG(
        dag_id=dag_id,
        default_args=default_args,
        schedule=config['schedule'],
        start_date=datetime(2024, 1, 1),
        catchup=False,
        tags=['etl', 'dynamic', config['name']],
    )

    with dag:
        def extract_fn(source, **context):
            print(f"Extracting from {source} for {context['ds']}")

        def transform_fn(**context):
            print(f"Transforming data for {context['ds']}")

        def load_fn(table_name, **context):
            print(f"Loading to {table_name} for {context['ds']}")

        extract = PythonOperator(
            task_id='extract',
            python_callable=extract_fn,
            op_kwargs={'source': config['source']},
        )

        transform = PythonOperator(
            task_id='transform',
            python_callable=transform_fn,
        )

        load = PythonOperator(
            task_id='load',
            python_callable=load_fn,
            op_kwargs={'table_name': config['name']},
        )

        extract >> transform >> load

    return dag

# Generate DAGs
for config in PIPELINE_CONFIGS:
    globals()[f"dag_{config['name']}"] = create_dag(config)
```
### Pattern 3: Branching and Conditional Logic

```python
# dags/branching_example.py
from datetime import datetime
from airflow.decorators import dag, task
from airflow.operators.python import BranchPythonOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils.trigger_rule import TriggerRule

@dag(
    dag_id='branching_pipeline',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def branching_pipeline():

    @task()
    def check_data_quality() -> dict:
        """Check data quality and return metrics"""
        quality_score = 0.95  # Simulated
        return {'score': quality_score, 'rows': 10000}

    def choose_branch(**context) -> str:
        """Determine which branch to execute"""
        ti = context['ti']
        metrics = ti.xcom_pull(task_ids='check_data_quality')

        if metrics['score'] >= 0.9:
            return 'high_quality_path'
        elif metrics['score'] >= 0.7:
            return 'medium_quality_path'
        else:
            return 'low_quality_path'

    quality_check = check_data_quality()

    branch = BranchPythonOperator(
        task_id='branch',
        python_callable=choose_branch,
    )

    high_quality = EmptyOperator(task_id='high_quality_path')
    medium_quality = EmptyOperator(task_id='medium_quality_path')
    low_quality = EmptyOperator(task_id='low_quality_path')

    # Join point - runs after any branch completes
    join = EmptyOperator(
        task_id='join',
        trigger_rule=TriggerRule.NONE_FAILED_MIN_ONE_SUCCESS,
    )

    quality_check >> branch >> [high_quality, medium_quality, low_quality] >> join

branching_pipeline()
```
### Pattern 4: Sensors and External Dependencies

```python
# dags/sensor_patterns.py
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor
from airflow.sensors.base import PokeReturnValue
from airflow.sensors.external_task import ExternalTaskSensor
from airflow.operators.python import PythonOperator

with DAG(
    dag_id='sensor_example',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:

    # Wait for file on S3
    wait_for_file = S3KeySensor(
        task_id='wait_for_s3_file',
        bucket_name='data-lake',
        bucket_key='raw/{{ ds }}/data.parquet',
        aws_conn_id='aws_default',
        timeout=60 * 60 * 2,   # 2 hours
        poke_interval=60 * 5,  # Check every 5 minutes
        mode='reschedule',     # Free up worker slot while waiting
    )

    # Wait for another DAG to complete
    wait_for_upstream = ExternalTaskSensor(
        task_id='wait_for_upstream_dag',
        external_dag_id='upstream_etl',
        external_task_id='final_task',
        execution_date_fn=lambda dt: dt,  # Same execution date
        timeout=60 * 60 * 3,
        mode='reschedule',
    )

    # Custom sensor using @task.sensor decorator
    @task.sensor(poke_interval=60, timeout=3600, mode='reschedule')
    def wait_for_api() -> PokeReturnValue:
        """Custom sensor for API availability"""
        import requests

        response = requests.get('https://api.example.com/health')
        is_done = response.status_code == 200

        return PokeReturnValue(is_done=is_done, xcom_value=response.json())

    api_ready = wait_for_api()

    def process_data(**context):
        api_result = context['ti'].xcom_pull(task_ids='wait_for_api')
        print(f"API returned: {api_result}")

    process = PythonOperator(
        task_id='process',
        python_callable=process_data,
    )

    [wait_for_file, wait_for_upstream, api_ready] >> process
```
### Pattern 5: Error Handling and Alerts

```python
# dags/error_handling.py
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.trigger_rule import TriggerRule

def task_failure_callback(context):
    """Callback on task failure"""
    task_instance = context['task_instance']
    exception = context.get('exception')

    # Send to Slack/PagerDuty/etc
    message = f"""
    Task Failed!
    DAG: {task_instance.dag_id}
    Task: {task_instance.task_id}
    Execution Date: {context['ds']}
    Error: {exception}
    Log URL: {task_instance.log_url}
    """
    # send_slack_alert(message)
    print(message)

def dag_failure_callback(context):
    """Callback on DAG failure"""
    # Aggregate failures, send summary
    pass

with DAG(
    dag_id='error_handling_example',
    schedule='@daily',
    start_date=datetime(2024, 1, 1),
    catchup=False,
    on_failure_callback=dag_failure_callback,
    default_args={
        'on_failure_callback': task_failure_callback,
        'retries': 3,
        'retry_delay': timedelta(minutes=5),
    },
) as dag:

    def might_fail(**context):
        import random
        if random.random() < 0.3:
            raise ValueError("Random failure!")
        return "Success"

    risky_task = PythonOperator(
        task_id='risky_task',
        python_callable=might_fail,
    )

    def cleanup(**context):
        """Cleanup runs regardless of upstream failures"""
        print("Cleaning up...")

    cleanup_task = PythonOperator(
        task_id='cleanup',
        python_callable=cleanup,
        trigger_rule=TriggerRule.ALL_DONE,  # Run even if upstream fails
    )

    def notify_success(**context):
        """Only runs if all upstream succeeded"""
        print("All tasks succeeded!")

    success_notification = PythonOperator(
        task_id='notify_success',
        python_callable=notify_success,
        trigger_rule=TriggerRule.ALL_SUCCESS,
    )

    risky_task >> [cleanup_task, success_notification]
```
### Pattern 6: Testing DAGs

```python
# tests/test_dags.py
import pytest
from airflow.models import DagBag
from airflow.utils.dag_cycle_tester import check_cycle

@pytest.fixture
def dagbag():
    return DagBag(dag_folder='dags/', include_examples=False)

def test_dag_loaded(dagbag):
    """Test that all DAGs load without errors"""
    assert len(dagbag.import_errors) == 0, f"DAG import errors: {dagbag.import_errors}"

def test_dag_structure(dagbag):
    """Test specific DAG structure"""
    dag = dagbag.get_dag('example_etl')

    assert dag is not None
    assert len(dag.tasks) == 3
    assert dag.schedule_interval == '0 6 * * *'

def test_task_dependencies(dagbag):
    """Test task dependencies are correct"""
    dag = dagbag.get_dag('example_etl')

    extract_task = dag.get_task('extract')
    assert 'start' in [t.task_id for t in extract_task.upstream_list]
    assert 'end' in [t.task_id for t in extract_task.downstream_list]

def test_dag_integrity(dagbag):
    """Test DAG has no cycles and is valid"""
    for dag_id, dag in dagbag.dags.items():
        # check_cycle raises AirflowDagCycleException if a cycle exists
        check_cycle(dag)

# Test individual task logic
def test_extract_function():
    """Unit test for extract function"""
    from dags.example_dag import extract_data

    result = extract_data(ds='2024-01-01')
    assert 'records' in result
    assert isinstance(result['records'], int)
```
## Project Structure

```
airflow/
├── dags/
│   ├── __init__.py
│   ├── common/
│   │   ├── __init__.py
│   │   ├── operators.py    # Custom operators
│   │   ├── sensors.py      # Custom sensors
│   │   └── callbacks.py    # Alert callbacks
│   ├── etl/
│   │   ├── customers.py
│   │   └── orders.py
│   └── ml/
│       └── training.py
├── plugins/
│   └── custom_plugin.py
├── tests/
│   ├── __init__.py
│   ├── test_dags.py
│   └── test_operators.py
├── docker-compose.yml
└── requirements.txt
```

## Best Practices

### Do's
- **Use TaskFlow API** - Cleaner code, automatic XCom
- **Set timeouts** - Prevent zombie tasks
- **Use `mode='reschedule'`** - For sensors, free up workers
- **Test DAGs** - Unit tests and integration tests
- **Idempotent tasks** - Safe to retry

### Don'ts
- **Don't use `depends_on_past=True`** - Creates bottlenecks
- **Don't hardcode dates** - Use `{{ ds }}` macros
- **Don't use global state** - Tasks should be stateless
- **Don't skip catchup blindly** - Understand implications
- **Don't put heavy logic in DAG file** - Import from modules

## Resources

- [Airflow Documentation](https://airflow.apache.org/docs/)
- [Astronomer Guides](https://docs.astronomer.io/learn)
- [TaskFlow API](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html)
@@ -1,539 +1,404 @@
---
name: analytics-tracking
description: >
  Design, audit, and improve analytics tracking systems that produce reliable,
  decision-ready data. Use when the user wants to set up, fix, or evaluate
  analytics tracking (GA4, GTM, product analytics, events, conversions, UTMs).
  This skill focuses on measurement strategy, signal quality, and validation—
  not just firing events.
---

# Analytics Tracking & Measurement Strategy

You are an expert in **analytics implementation and measurement design**.
Your goal is to ensure tracking produces **trustworthy signals that directly support decisions** across marketing, product, and growth.

You do **not** track everything.
You do **not** optimize dashboards without fixing instrumentation.
You do **not** treat GA4 numbers as truth unless validated.

---

## Phase 0: Measurement Readiness & Signal Quality Index (Required)

Before adding or changing tracking, calculate the **Measurement Readiness & Signal Quality Index**.

### Purpose

This index answers:

> **Can this analytics setup produce reliable, decision-grade insights?**

It prevents:

* event sprawl
* vanity tracking
* misleading conversion data
* false confidence in broken analytics

---

## 🔢 Measurement Readiness & Signal Quality Index

### Total Score: **0–100**

This is a **diagnostic score**, not a performance KPI.

---

### Scoring Categories & Weights

| Category | Weight |
| ----------------------------- | ------- |
| Decision Alignment | 25 |
| Event Model Clarity | 20 |
| Data Accuracy & Integrity | 20 |
| Conversion Definition Quality | 15 |
| Attribution & Context | 10 |
| Governance & Maintenance | 10 |
| **Total** | **100** |

---

### Category Definitions

#### 1. Decision Alignment (0–25)

* Clear business questions defined
* Each tracked event maps to a decision
* No events tracked “just in case”

---

#### 2. Event Model Clarity (0–20)

* Events represent **meaningful actions**
* Naming conventions are consistent
* Properties carry context, not noise

---

#### 3. Data Accuracy & Integrity (0–20)

* Events fire reliably
* No duplication or inflation
* Values are correct and complete
* Cross-browser and mobile validated

---

#### 4. Conversion Definition Quality (0–15)

* Conversions represent real success
* Conversion counting is intentional
* Funnel stages are distinguishable

---

#### 5. Attribution & Context (0–10)

* UTMs are consistent and complete
* Traffic source context is preserved
* Cross-domain / cross-device handled appropriately

---

#### 6. Governance & Maintenance (0–10)

* Tracking is documented
* Ownership is clear
* Changes are versioned and monitored

---

### Readiness Bands (Required)

| Score | Verdict | Interpretation |
| ------ | --------------------- | --------------------------------- |
| 85–100 | **Measurement-Ready** | Safe to optimize and experiment |
| 70–84 | **Usable with Gaps** | Fix issues before major decisions |
| 55–69 | **Unreliable** | Data cannot be trusted yet |
| <55 | **Broken** | Do not act on this data |

If verdict is **Broken**, stop and recommend remediation first.
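
The band lookup above can be made mechanical with a small helper that sums the six category scores (each already capped at its weight) and maps the total to a verdict. A sketch; the function name and category keys are illustrative, not part of the skill:

```javascript
// Sum category scores (each pre-capped at its weight) and map the
// 0-100 total to one of the four readiness bands defined above.
function readinessVerdict(scores) {
  const total = Object.values(scores).reduce((a, b) => a + b, 0);
  if (total >= 85) return { total, verdict: "Measurement-Ready" };
  if (total >= 70) return { total, verdict: "Usable with Gaps" };
  if (total >= 55) return { total, verdict: "Unreliable" };
  return { total, verdict: "Broken" };
}

const result = readinessVerdict({
  decisionAlignment: 20,   // out of 25
  eventModelClarity: 15,   // out of 20
  dataAccuracy: 14,        // out of 20
  conversionQuality: 12,   // out of 15
  attribution: 7,          // out of 10
  governance: 6,           // out of 10
});
// total 74 → "Usable with Gaps"
```

A score of 74 lands in the 70–84 band, so the setup is usable but gaps should be fixed before major decisions.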

---

## Phase 1: Context & Decision Definition

(Proceed only after scoring)

### 1. Business Context

* What decisions will this data inform?
* Who uses the data (marketing, product, leadership)?
* What actions will be taken based on insights?

---

### 2. Current State

* Tools in use (GA4, GTM, Mixpanel, Amplitude, etc.)
* Existing events and conversions
* Known issues or distrust in data

---

### 3. Technical & Compliance Context

* Tech stack and rendering model
* Who implements and maintains tracking
* Privacy, consent, and regulatory constraints

---

## Core Principles (Non-Negotiable)

### 1. Track for Decisions, Not Curiosity

If no decision depends on it, **don’t track it**.

---

### 2. Start with Questions, Work Backwards

Define:

* What you need to know
* What action you’ll take
* What signal proves it

Then design events.

---

### 3. Events Represent Meaningful State Changes

Avoid:

* cosmetic clicks
* redundant events
* UI noise

Prefer:

* intent
* completion
* commitment

---

### 4. Data Quality Beats Volume

Fewer accurate events > many unreliable ones.

---

## Event Model Design

### Event Taxonomy

**Navigation / Exposure**

* page_view (enhanced)
* content_viewed
* pricing_viewed

**Intent Signals**

* cta_clicked
* form_started
* demo_requested

**Completion Signals**

* signup_completed
* purchase_completed
* subscription_changed

**System / State Changes**

* onboarding_completed
* feature_activated
* error_occurred
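
To make the taxonomy concrete, here is a sketch of how one intent event and one completion event might be pushed to a GTM-style dataLayer. A local array stands in for `window.dataLayer`, and the property names are examples, not a fixed schema:

```javascript
// In the browser this would be window.dataLayer; a local array
// stands in so the sketch is self-contained.
const dataLayer = [];

// Intent signal: a click that expresses interest, with context properties
dataLayer.push({ event: "cta_clicked", cta_location: "hero", page_type: "landing" });

// Completion signal: the action actually finished
dataLayer.push({ event: "signup_completed", method: "email", plan: "free" });
```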

---

### Event Naming Conventions

**Recommended pattern:**

```
object_action[_context]
```

Examples:

* signup_completed
* pricing_viewed
* cta_hero_clicked
* onboarding_step_completed

Rules:

* lowercase
* underscores
* no spaces
* no ambiguity

---

### Event Properties (Context, Not Noise)

Include:

* where (page, section)
* who (user_type, plan)
* how (method, variant)

Avoid:

* PII
* free-text fields
* duplicated auto-properties

---

## Conversion Strategy

### What Qualifies as a Conversion

A conversion must represent:

* real value
* completed intent
* irreversible progress

Examples:

* signup_completed
* purchase_completed
* demo_booked

Not conversions:

* page views
* button clicks
* form starts

---

### Conversion Counting Rules

* Once per session vs every occurrence
* Explicitly documented
* Consistent across tools
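
A "once per session" rule can be sketched as a tiny counter keyed by session and event name. The names are illustrative; real analytics tools implement this in their own configuration, not in page code:

```javascript
// Count a conversion at most once per (session, event) pair.
function makeConversionCounter() {
  const seen = new Set(); // "sessionId:event" keys already counted
  return function count(sessionId, event) {
    const key = `${sessionId}:${event}`;
    if (seen.has(key)) return false; // already counted this session
    seen.add(key);
    return true;
  };
}

const count = makeConversionCounter();
count("s1", "signup_completed"); // true  (counted)
count("s1", "signup_completed"); // false (same session, ignored)
count("s2", "signup_completed"); // true  (new session)
```

Whichever rule is chosen, the point of the bullets above is that it must be written down and applied identically in every tool that reports the number.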

---

## GA4 & GTM (Implementation Guidance)

*(Tool-specific, but optional)*

### Custom Events (GA4)

```javascript
// gtag.js
gtag('event', 'signup_completed', {
  'method': 'email',
  'plan': 'free'
});

// Google Tag Manager (dataLayer)
dataLayer.push({
  'event': 'signup_completed',
  'method': 'email',
  'plan': 'free'
});
```

* Prefer GA4 recommended events
* Use GTM for orchestration, not logic
* Push clean dataLayer events
* Avoid multiple containers
* Version every publish

---

## UTM & Attribution Discipline

### UTM Rules

* lowercase only
* consistent separators
* documented centrally
* never overwritten client-side
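
These rules are easiest to keep when UTM'd links are built by a helper rather than by hand. A hypothetical sketch; the function name and normalization choices are assumptions, not a standard API:

```javascript
// Build a campaign URL with normalized UTM values: lowercase,
// one separator style (underscore), no accidental variants.
function buildUtmUrl(base, params) {
  const normalize = (v) => v.trim().toLowerCase().replace(/[\s-]+/g, "_");
  const qs = Object.entries(params)
    .map(([k, v]) => `utm_${k}=${encodeURIComponent(normalize(v))}`)
    .join("&");
  return `${base}?${qs}`;
}

buildUtmUrl("https://example.com/pricing", {
  source: "Newsletter",
  medium: "Email",
  campaign: "Spring Sale",
});
// → "https://example.com/pricing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale"
```

Centralizing link construction like this also gives one obvious place to document the allowed values for `source` and `medium`.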
|
||||
|
||||
**Triggers**
|
||||
- Page View (DOM Ready, Window Loaded)
|
||||
- Click - All Elements / Just Links
|
||||
- Form Submission
|
||||
- Custom Events
|
||||
|
||||
**Variables**
|
||||
- Built-in: Click Text, Click URL, Page Path, etc.
|
||||
- Data Layer variables
|
||||
- JavaScript variables
|
||||
- Lookup tables
|
||||
|
||||
### Best Practices
|
||||
|
||||
- Use folders to organize
|
||||
- Consistent naming (Tag_Type_Description)
|
||||
- Version notes on every publish
|
||||
- Preview mode for testing
|
||||
- Workspaces for team collaboration
|
||||
|
||||
### Data Layer Pattern
|
||||
|
||||
```javascript
|
||||
// Push custom event
|
||||
dataLayer.push({
|
||||
'event': 'form_submitted',
|
||||
'form_name': 'contact',
|
||||
'form_location': 'footer'
|
||||
});
|
||||
|
||||
// Set user properties
|
||||
dataLayer.push({
|
||||
'user_id': '12345',
|
||||
'user_type': 'premium'
|
||||
});
|
||||
|
||||
// E-commerce event
|
||||
dataLayer.push({
|
||||
'event': 'purchase',
|
||||
'ecommerce': {
|
||||
'transaction_id': 'T12345',
|
||||
'value': 99.99,
|
||||
'currency': 'USD',
|
||||
'items': [{
|
||||
'item_id': 'SKU123',
|
||||
'item_name': 'Product Name',
|
||||
'price': 99.99
|
||||
}]
|
||||
}
|
||||
});
|
||||
```
|
||||
UTMs exist to **explain performance**, not inflate numbers.
|
||||
|
||||
---
|
||||
|
||||
## UTM Parameter Strategy
## Validation & Debugging

### Standard Parameters
### Required Validation

| Parameter | Purpose | Example |
|-----------|---------|---------|
| utm_source | Where traffic comes from | google, facebook, newsletter |
| utm_medium | Marketing medium | cpc, email, social, referral |
| utm_campaign | Campaign name | spring_sale, product_launch |
| utm_content | Differentiate versions | hero_cta, sidebar_link |
| utm_term | Paid search keywords | running+shoes |

* Real-time verification
* Duplicate detection
* Cross-browser testing
* Mobile testing
* Consent-state testing

### Naming Conventions
### Common Failure Modes

**Lowercase everything**
- google, not Google
- email, not Email

**Use underscores or hyphens consistently**
- product_launch or product-launch
- Pick one and stick with it

**Be specific but concise**
- blog_footer_cta, not cta1
- 2024_q1_promo, not promo

### UTM Documentation

Track all UTMs in a spreadsheet or tool:

| Campaign | Source | Medium | Content | Full URL | Owner | Date |
|----------|--------|--------|---------|----------|-------|------|
| ... | ... | ... | ... | ... | ... | ... |

### UTM Builder

Provide a consistent UTM builder link to the team:

- Google's URL builder
- Internal tool
- Spreadsheet formula
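A spreadsheet formula or a small script both work; a hypothetical builder that enforces the lowercase and separator rules above might look like:

```javascript
// Hypothetical helper: lowercases values, normalizes spaces to underscores,
// and only appends parameters that were actually provided
function buildUtmUrl(baseUrl, { source, medium, campaign, content, term }) {
  const url = new URL(baseUrl);
  const params = {
    utm_source: source,
    utm_medium: medium,
    utm_campaign: campaign,
    utm_content: content,
    utm_term: term
  };
  for (const [key, value] of Object.entries(params)) {
    if (value) {
      url.searchParams.set(key, String(value).toLowerCase().replace(/\s+/g, '_'));
    }
  }
  return url.toString();
}
```

Feeding it mixed-case marketing input yields a clean, convention-following link regardless of who built the campaign.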

* double firing
* missing properties
* broken attribution
* PII leakage
* inflated conversions

---

## Debugging and Validation
## Privacy & Compliance

### Testing Tools

* Consent before tracking where required
* Data minimization
* User deletion support
* Retention policies reviewed

**GA4 DebugView**
- Real-time event monitoring
- Enable by sending `debug_mode: true` on config or events
- Or via the GA Debugger Chrome extension
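With the standard gtag bootstrap, enabling DebugView is a one-line flag; `G-XXXXXXX` is a placeholder measurement ID:

```javascript
// Standard gtag bootstrap: gtag() queues its arguments onto the dataLayer
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Flag the whole property config for DebugView ('G-XXXXXXX' is a placeholder)
gtag('config', 'G-XXXXXXX', { 'debug_mode': true });

// Or flag individual events
gtag('event', 'signup_completed', { 'debug_mode': true });
```

Remember to strip the flag before shipping, or debug traffic mixes into production reports.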

**GTM Preview Mode**
- Test triggers and tags
- See data layer state
- Validate before publishing

**Browser Extensions**
- GA Debugger
- Tag Assistant
- dataLayer Inspector

### Validation Checklist

- [ ] Events fire on the correct triggers
- [ ] Property values populate correctly
- [ ] No duplicate events
- [ ] Works across browsers
- [ ] Works on mobile
- [ ] Conversions recorded correctly
- [ ] User ID passed when logged in
- [ ] No PII leaking

### Common Issues

**Events not firing**
- Trigger misconfigured
- Tag paused
- GTM not loaded on the page

**Wrong values**
- Variable not configured
- Data layer not pushing correctly
- Timing issues (tag fires before the data is ready)

**Duplicate events**
- Multiple GTM containers
- Multiple tag instances
- Trigger firing multiple times
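While hunting down the duplicate tag or container, a small client-side guard can suppress accidental re-pushes of one-shot events. This is a stopgap sketch, not a substitute for fixing the container itself:

```javascript
// Stopgap dedup guard for one-shot events (e.g. signup_completed):
// the real fix is removing the duplicate tag/container
const firedOnce = new Set();
const dataLayer = [];

function pushOnce(eventObj) {
  if (firedOnce.has(eventObj.event)) return false; // already sent, drop it
  firedOnce.add(eventObj.event);
  dataLayer.push(eventObj);
  return true;
}

pushOnce({ 'event': 'signup_completed', 'method': 'email' });
pushOnce({ 'event': 'signup_completed', 'method': 'email' }); // ignored
```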

Analytics that violate trust undermine optimization.

---

## Privacy and Compliance
## Output Format (Required)

### Considerations
### Measurement Strategy Summary

- Cookie consent required in the EU/UK/CA
- No PII in analytics properties
- Data retention settings
- User deletion capabilities
- Cross-device tracking consent

### Implementation

**Consent Mode (GA4)**
- Wait for consent before tracking
- Use consent mode for partial tracking
- Integrate with a consent management platform
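The basic consent mode call sequence, sketched with the standard gtag bootstrap. `analytics_storage` and `ad_storage` are the documented consent type keys; the CMP integration that triggers the update is product-specific and omitted here:

```javascript
// Standard gtag bootstrap: gtag() queues its arguments onto the dataLayer
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Deny by default, before any tags fire
gtag('consent', 'default', {
  'analytics_storage': 'denied',
  'ad_storage': 'denied'
});

// Later, once the CMP reports acceptance
gtag('consent', 'update', {
  'analytics_storage': 'granted'
});
```

The `default` call must run before the tags it governs, which is why it typically sits at the top of the page, ahead of the container snippet.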

**Data Minimization**
- Only collect what you need
- IP anonymization
- No PII in custom dimensions
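One cheap safety net is scrubbing obviously identifying values before they are pushed. A hypothetical email-pattern filter (the regex and `[redacted]` marker are illustrative, not a complete PII policy):

```javascript
// Hypothetical guard: redact values that look like email addresses
// before they reach the dataLayer. Not a complete PII policy.
const EMAIL_RE = /[^\s@]+@[^\s@]+\.[^\s@]+/;

function scrubParams(params) {
  const clean = {};
  for (const [key, value] of Object.entries(params)) {
    clean[key] = EMAIL_RE.test(String(value)) ? '[redacted]' : value;
  }
  return clean;
}
```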

* Measurement Readiness Index score + verdict
* Key risks and gaps
* Recommended remediation order

---

## Output Format
### Tracking Plan

### Tracking Plan Document

```
# [Site/Product] Tracking Plan

## Overview
- Tools: GA4, GTM
- Last updated: [Date]
- Owner: [Name]

## Events

### Marketing Events

| Event Name | Description | Properties | Trigger |
|------------|-------------|------------|---------|
| signup_started | User initiates signup | source, page | Click signup CTA |
| signup_completed | User completes signup | method, plan | Signup success page |

### Product Events
[Similar table]

## Custom Dimensions

| Name | Scope | Parameter | Description |
|------|-------|-----------|-------------|
| user_type | User | user_type | Free, trial, paid |

## Conversions

| Conversion | Event | Counting | Google Ads |
|------------|-------|----------|------------|
| Signup | signup_completed | Once per session | Yes |

## UTM Convention

[Guidelines]
```

### Implementation Code

Provide ready-to-use code snippets.

### Testing Checklist

Specific validation steps.

| Event | Description | Properties | Trigger | Decision Supported |
| ----- | ----------- | ---------- | ------- | ------------------ |

---

## Questions to Ask
### Conversions

If you need more context:
1. What tools are you using (GA4, Mixpanel, etc.)?
2. What key actions do you want to track?
3. What decisions will this data inform?
4. Who implements it - the dev team or marketing?
5. Are there privacy/consent requirements?
6. What's already tracked?

| Conversion | Event | Counting | Used By |
| ---------- | ----- | -------- | ------- |

---

### Implementation Notes

* Tool-specific setup
* Ownership
* Validation steps

---

## Questions to Ask (If Needed)

1. What decisions depend on this data?
2. Which metrics are currently trusted or distrusted?
3. Who owns analytics long term?
4. What compliance constraints apply?
5. What tools are already in place?

---

## Related Skills

- **ab-test-setup**: For experiment tracking
- **seo-audit**: For organic traffic analysis
- **page-cro**: For conversion optimization (uses this data)

* **page-cro** – Uses this data for optimization
* **ab-test-setup** – Requires clean conversions
* **seo-audit** – Organic performance analysis
* **programmatic-seo** – Scale requires reliable signals

---

skills/angular-migration/SKILL.md (new file, 428 lines)
@@ -0,0 +1,428 @@
---
name: angular-migration
description: Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applications, planning framework migrations, or modernizing legacy Angular code.
---

# Angular Migration

Master AngularJS to Angular migration, including hybrid apps, component conversion, dependency injection changes, and routing migration.

## Use this skill when

- Migrating AngularJS (1.x) applications to Angular (2+)
- Running hybrid AngularJS/Angular applications
- Converting directives to components
- Modernizing dependency injection
- Migrating routing systems
- Updating to the latest Angular versions
- Implementing Angular best practices

## Do not use this skill when

- You are not migrating from AngularJS to Angular
- The app is already on a modern Angular version
- You need only a small UI fix without framework changes

## Instructions

1. Assess the AngularJS codebase, dependencies, and migration risks.
2. Choose a migration strategy (hybrid vs. rewrite) and define milestones.
3. Set up ngUpgrade and migrate modules, components, and routing.
4. Validate with tests and plan a safe cutover.

## Safety

- Avoid big-bang cutovers without rollback and staging validation.
- Keep hybrid compatibility testing during incremental migration.

## Migration Strategies

### 1. Big Bang (Complete Rewrite)
- Rewrite the entire app in Angular
- Parallel development
- Switch over all at once
- **Best for:** Small apps, greenfield projects

### 2. Incremental (Hybrid Approach)
- Run AngularJS and Angular side by side
- Migrate feature by feature
- ngUpgrade for interop
- **Best for:** Large apps, continuous delivery

### 3. Vertical Slice
- Migrate one feature completely
- Build new features in Angular; maintain old ones in AngularJS
- Gradually replace
- **Best for:** Medium apps, distinct features

## Hybrid App Setup

```typescript
// main.ts - Bootstrap the hybrid app
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { UpgradeModule } from '@angular/upgrade/static';
import { AppModule } from './app/app.module';

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .then(platformRef => {
    const upgrade = platformRef.injector.get(UpgradeModule);
    // Bootstrap AngularJS
    upgrade.bootstrap(document.body, ['myAngularJSApp'], { strictDi: true });
  });
```

```typescript
// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule
  ]
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) {}

  ngDoBootstrap() {
    // Bootstrapped manually in main.ts
  }
}
```

## Component Migration

### AngularJS Controller → Angular Component

```javascript
// Before: AngularJS controller
angular.module('myApp').controller('UserController', function($scope, UserService) {
  $scope.user = {};

  $scope.loadUser = function(id) {
    UserService.getUser(id).then(function(user) {
      $scope.user = user;
    });
  };

  $scope.saveUser = function() {
    UserService.saveUser($scope.user);
  };
});
```

```typescript
// After: Angular component
import { Component, OnInit } from '@angular/core';
import { UserService } from './user.service';

@Component({
  selector: 'app-user',
  template: `
    <div>
      <h2>{{ user.name }}</h2>
      <button (click)="saveUser()">Save</button>
    </div>
  `
})
export class UserComponent implements OnInit {
  user: any = {};

  constructor(private userService: UserService) {}

  ngOnInit() {
    this.loadUser(1);
  }

  loadUser(id: number) {
    this.userService.getUser(id).subscribe(user => {
      this.user = user;
    });
  }

  saveUser() {
    this.userService.saveUser(this.user);
  }
}
```

### AngularJS Directive → Angular Component

```javascript
// Before: AngularJS directive
angular.module('myApp').directive('userCard', function() {
  return {
    restrict: 'E',
    scope: {
      user: '=',
      onDelete: '&'
    },
    template: `
      <div class="card">
        <h3>{{ user.name }}</h3>
        <button ng-click="onDelete()">Delete</button>
      </div>
    `
  };
});
```

```typescript
// After: Angular component
import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-user-card',
  template: `
    <div class="card">
      <h3>{{ user.name }}</h3>
      <button (click)="delete.emit()">Delete</button>
    </div>
  `
})
export class UserCardComponent {
  @Input() user: any;
  @Output() delete = new EventEmitter<void>();
}

// Usage: <app-user-card [user]="user" (delete)="handleDelete()"></app-user-card>
```

## Service Migration

```javascript
// Before: AngularJS service
angular.module('myApp').factory('UserService', function($http) {
  return {
    getUser: function(id) {
      return $http.get('/api/users/' + id);
    },
    saveUser: function(user) {
      return $http.post('/api/users', user);
    }
  };
});
```

```typescript
// After: Angular service
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class UserService {
  constructor(private http: HttpClient) {}

  getUser(id: number): Observable<any> {
    return this.http.get(`/api/users/${id}`);
  }

  saveUser(user: any): Observable<any> {
    return this.http.post('/api/users', user);
  }
}
```

## Dependency Injection Changes

### Downgrading Angular → AngularJS

```typescript
// Angular service
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class NewService {
  getData() {
    return 'data from Angular';
  }
}

// Make it available to AngularJS
import { downgradeInjectable } from '@angular/upgrade/static';

angular.module('myApp')
  .factory('newService', downgradeInjectable(NewService));

// Use it in AngularJS
angular.module('myApp').controller('OldController', function(newService) {
  console.log(newService.getData());
});
```

### Upgrading AngularJS → Angular

```typescript
// AngularJS service
angular.module('myApp').factory('oldService', function() {
  return {
    getData: function() {
      return 'data from AngularJS';
    }
  };
});

// Make it available to Angular
import { InjectionToken } from '@angular/core';

export const OLD_SERVICE = new InjectionToken<any>('oldService');

@NgModule({
  providers: [
    {
      provide: OLD_SERVICE,
      useFactory: (i: any) => i.get('oldService'),
      deps: ['$injector']
    }
  ]
})
export class AppModule {}

// Use it in Angular
@Component({...})
export class NewComponent {
  constructor(@Inject(OLD_SERVICE) private oldService: any) {
    console.log(this.oldService.getData());
  }
}
```

## Routing Migration

```javascript
// Before: AngularJS routing
angular.module('myApp').config(function($routeProvider) {
  $routeProvider
    .when('/users', {
      template: '<user-list></user-list>'
    })
    .when('/users/:id', {
      template: '<user-detail></user-detail>'
    });
});
```

```typescript
// After: Angular routing
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  { path: 'users', component: UserListComponent },
  { path: 'users/:id', component: UserDetailComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```

## Forms Migration

```html
<!-- Before: AngularJS -->
<form name="userForm" ng-submit="saveUser()">
  <input type="text" ng-model="user.name" required>
  <input type="email" ng-model="user.email" required>
  <button ng-disabled="userForm.$invalid">Save</button>
</form>
```

```typescript
// After: Angular (template-driven)
@Component({
  template: `
    <form #userForm="ngForm" (ngSubmit)="saveUser()">
      <input type="text" [(ngModel)]="user.name" name="name" required>
      <input type="email" [(ngModel)]="user.email" name="email" required>
      <button [disabled]="userForm.invalid">Save</button>
    </form>
  `
})

// Or reactive forms (preferred)
import { FormBuilder, FormGroup, Validators } from '@angular/forms';

@Component({
  template: `
    <form [formGroup]="userForm" (ngSubmit)="saveUser()">
      <input formControlName="name">
      <input formControlName="email">
      <button [disabled]="userForm.invalid">Save</button>
    </form>
  `
})
export class UserFormComponent {
  userForm: FormGroup;

  constructor(private fb: FormBuilder) {
    this.userForm = this.fb.group({
      name: ['', Validators.required],
      email: ['', [Validators.required, Validators.email]]
    });
  }

  saveUser() {
    console.log(this.userForm.value);
  }
}
```

## Migration Timeline

```
Phase 1: Setup (1-2 weeks)
- Install Angular CLI
- Set up hybrid app
- Configure build tools
- Set up testing

Phase 2: Infrastructure (2-4 weeks)
- Migrate services
- Migrate utilities
- Set up routing
- Migrate shared components

Phase 3: Feature Migration (varies)
- Migrate feature by feature
- Test thoroughly
- Deploy incrementally

Phase 4: Cleanup (1-2 weeks)
- Remove AngularJS code
- Remove ngUpgrade
- Optimize bundle
- Final testing
```

## Resources

- **references/hybrid-mode.md**: Hybrid app patterns
- **references/component-migration.md**: Component conversion guide
- **references/dependency-injection.md**: DI migration strategies
- **references/routing.md**: Routing migration
- **assets/hybrid-bootstrap.ts**: Hybrid app template
- **assets/migration-timeline.md**: Project planning
- **scripts/analyze-angular-app.sh**: App analysis script

## Best Practices

1. **Start with Services**: Migrate services first (they are the easiest)
2. **Incremental Approach**: Migrate feature by feature
3. **Test Continuously**: Test at every step
4. **Use TypeScript**: Migrate to TypeScript early
5. **Follow the Style Guide**: Apply the Angular style guide from day 1
6. **Optimize Later**: Get it working, then optimize
7. **Document**: Keep migration notes

## Common Pitfalls

- Not setting up the hybrid app correctly
- Migrating UI before logic
- Ignoring change detection differences
- Not handling scope properly
- Mixing patterns (AngularJS + Angular)
- Inadequate testing
skills/anti-reversing-techniques/SKILL.md (new file, 42 lines)
@@ -0,0 +1,42 @@
---
name: anti-reversing-techniques
description: Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti-debugging for authorized analysis, or understanding software protection mechanisms.
---

> **AUTHORIZED USE ONLY**: This skill contains dual-use security techniques. Before proceeding with any bypass or analysis:
> 1. **Verify authorization**: Confirm you have explicit written permission from the software owner, or are operating within a legitimate security context (CTF, authorized pentest, malware analysis, security research)
> 2. **Document scope**: Ensure your activities fall within the defined scope of your authorization
> 3. **Legal compliance**: Understand that unauthorized bypassing of software protection may violate laws (CFAA, DMCA anti-circumvention, etc.)
>
> **Legitimate use cases**: Malware analysis, authorized penetration testing, CTF competitions, academic security research, analyzing software you own or have rights to

## Use this skill when

- Analyzing protected binaries with explicit authorization
- Conducting malware analysis or security research in scope
- Participating in CTFs or approved training exercises
- Understanding anti-debugging or obfuscation techniques for defense

## Do not use this skill when

- You lack written authorization or a defined scope
- The goal is to bypass protections for piracy or misuse
- Legal or policy restrictions prohibit analysis

## Instructions

1. Confirm written authorization, scope, and legal constraints.
2. Identify protection mechanisms and choose safe analysis methods.
3. Document findings and avoid modifying artifacts unnecessarily.
4. Provide defensive recommendations and mitigation guidance.

## Safety

- Do not share bypass steps outside the authorized context.
- Preserve evidence and maintain chain-of-custody for malware cases.

Refer to `resources/implementation-playbook.md` for detailed techniques and examples.

## Resources

- `resources/implementation-playbook.md` for detailed techniques and examples.

@@ -0,0 +1,539 @@
# Anti-Reversing Techniques Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

# Anti-Reversing Techniques

Understanding protection mechanisms encountered during authorized software analysis, security research, and malware analysis. This knowledge helps analysts bypass protections to complete legitimate analysis tasks.

## Anti-Debugging Techniques

### Windows Anti-Debugging

#### API-Based Detection

```c
// IsDebuggerPresent
if (IsDebuggerPresent()) {
    exit(1);
}

// CheckRemoteDebuggerPresent
BOOL debugged = FALSE;
CheckRemoteDebuggerPresent(GetCurrentProcess(), &debugged);
if (debugged) exit(1);

// NtQueryInformationProcess
typedef NTSTATUS (NTAPI *pNtQueryInformationProcess)(
    HANDLE, PROCESSINFOCLASS, PVOID, ULONG, PULONG);

DWORD debugPort = 0;
NtQueryInformationProcess(
    GetCurrentProcess(),
    ProcessDebugPort,   // 7
    &debugPort,
    sizeof(debugPort),
    NULL
);
if (debugPort != 0) exit(1);

// Debug flags
DWORD debugFlags = 0;
NtQueryInformationProcess(
    GetCurrentProcess(),
    ProcessDebugFlags,  // 0x1F
    &debugFlags,
    sizeof(debugFlags),
    NULL
);
if (debugFlags == 0) exit(1); // 0 means being debugged
```

**Bypass Approaches:**
```python
# x64dbg: ScyllaHide plugin
# Patches common anti-debug checks

# Manual patching in the debugger:
# - Set IsDebuggerPresent return to 0
# - Patch PEB.BeingDebugged to 0
# - Hook NtQueryInformationProcess

# IDAPython: patch checks
ida_bytes.patch_byte(check_addr, 0x90)  # NOP
```

#### PEB-Based Detection

```c
// Direct PEB access
#ifdef _WIN64
PPEB peb = (PPEB)__readgsqword(0x60);
#else
PPEB peb = (PPEB)__readfsdword(0x30);
#endif

// BeingDebugged flag
if (peb->BeingDebugged) exit(1);

// NtGlobalFlag
// Debugged: 0x70 (FLG_HEAP_ENABLE_TAIL_CHECK |
//                 FLG_HEAP_ENABLE_FREE_CHECK |
//                 FLG_HEAP_VALIDATE_PARAMETERS)
if (peb->NtGlobalFlag & 0x70) exit(1);

// Heap flags
PDWORD heapFlags = (PDWORD)((PBYTE)peb->ProcessHeap + 0x70);
if (*heapFlags & 0x50000062) exit(1);
```

**Bypass Approaches:**
```assembly
; In the debugger, modify the PEB directly
; x64dbg: dump at gs:[60] (x64) or fs:[30] (x86)
; Set BeingDebugged (offset 2) to 0
; Clear NtGlobalFlag (offset 0xBC for x64)
```

#### Timing-Based Detection

```c
// RDTSC timing
uint64_t start = __rdtsc();
// ... some code ...
uint64_t end = __rdtsc();
if ((end - start) > THRESHOLD) exit(1);

// QueryPerformanceCounter
LARGE_INTEGER start, end, freq;
QueryPerformanceFrequency(&freq);
QueryPerformanceCounter(&start);
// ... code ...
QueryPerformanceCounter(&end);
double elapsed = (double)(end.QuadPart - start.QuadPart) / freq.QuadPart;
if (elapsed > 0.1) exit(1); // Too slow = debugger

// GetTickCount
DWORD start = GetTickCount();
// ... code ...
if (GetTickCount() - start > 1000) exit(1);
```

**Bypass Approaches:**
```
- Use hardware breakpoints instead of software breakpoints
- Patch the timing checks
- Use a VM with controlled time
- Hook timing APIs to return consistent values
```

#### Exception-Based Detection

```c
// SEH-based detection
__try {
    __asm { int 3 } // Software breakpoint
}
__except(EXCEPTION_EXECUTE_HANDLER) {
    // Normal execution: exception caught
    return;
}
// Debugger ate the exception
exit(1);

// VEH-based detection
LONG CALLBACK VectoredHandler(PEXCEPTION_POINTERS ep) {
    if (ep->ExceptionRecord->ExceptionCode == EXCEPTION_BREAKPOINT) {
        ep->ContextRecord->Rip++; // Skip INT3
        return EXCEPTION_CONTINUE_EXECUTION;
    }
    return EXCEPTION_CONTINUE_SEARCH;
}
```

### Linux Anti-Debugging

```c
// ptrace self-trace
if (ptrace(PTRACE_TRACEME, 0, NULL, NULL) == -1) {
    // Already being traced
    exit(1);
}

// /proc/self/status
FILE *f = fopen("/proc/self/status", "r");
char line[256];
while (fgets(line, sizeof(line), f)) {
    if (strncmp(line, "TracerPid:", 10) == 0) {
        int tracer_pid = atoi(line + 10);
        if (tracer_pid != 0) exit(1);
    }
}

// Parent process check
if (getppid() != 1 && strcmp(get_process_name(getppid()), "bash") != 0) {
    // Unusual parent (might be a debugger)
}
```

**Bypass Approaches:**
```bash
# LD_PRELOAD to hook ptrace
# hook.c:
#   long ptrace(int request, ...) { return 0; }  /* always succeed */
# Compile: gcc -shared -fPIC -o hook.so hook.c

# Usage
LD_PRELOAD=./hook.so ./target
```

## Anti-VM Detection

### Hardware Fingerprinting

```c
// CPUID-based detection
int cpuid_info[4];
__cpuid(cpuid_info, 1);
// Check hypervisor bit (bit 31 of ECX)
if (cpuid_info[2] & (1 << 31)) {
    // Running under a hypervisor
}

// CPUID hypervisor vendor string
__cpuid(cpuid_info, 0x40000000);
char vendor[13] = {0};
memcpy(vendor, &cpuid_info[1], 12);
// "VMwareVMware", "Microsoft Hv", "KVMKVMKVM", "VBoxVBoxVBox"

// MAC address prefixes
// VMware: 00:0C:29, 00:50:56
// VirtualBox: 08:00:27
// Hyper-V: 00:15:5D
```

### Registry/File Detection

```c
// Windows registry keys
// HKLM\SOFTWARE\VMware, Inc.\VMware Tools
// HKLM\SOFTWARE\Oracle\VirtualBox Guest Additions
// HKLM\HARDWARE\ACPI\DSDT\VBOX__

// Files
// C:\Windows\System32\drivers\vmmouse.sys
// C:\Windows\System32\drivers\vmhgfs.sys
// C:\Windows\System32\drivers\VBoxMouse.sys

// Processes
// vmtoolsd.exe, vmwaretray.exe
// VBoxService.exe, VBoxTray.exe
```

### Timing-Based VM Detection

```c
// VM exits cause timing anomalies
uint64_t start = __rdtsc();
__cpuid(cpuid_info, 0); // Causes a VM exit
uint64_t end = __rdtsc();
if ((end - start) > 500) {
    // Likely in a VM (CPUID takes longer)
}
```

**Bypass Approaches:**
```
- Use a bare-metal analysis environment
- Harden the VM (remove guest tools, change the MAC address)
- Patch the detection code
- Use specialized analysis VMs (FLARE-VM)
```

## Code Obfuscation

### Control Flow Obfuscation

#### Control Flow Flattening

```c
// Original
if (cond) {
    func_a();
} else {
    func_b();
}
func_c();

// Flattened
int state = 0;
while (1) {
    switch (state) {
    case 0:
        state = cond ? 1 : 2;
        break;
    case 1:
        func_a();
        state = 3;
        break;
    case 2:
        func_b();
        state = 3;
        break;
    case 3:
        func_c();
        return;
    }
}
```

**Analysis Approach:**
- Identify the state variable
- Map the state transitions
- Reconstruct the original flow
- Tools: D-810 (IDA), SATURN

#### Opaque Predicates

```c
// Always true, but complex to analyze
int x = rand();
if ((x * x) >= 0) { // Always true
    real_code();
} else {
    junk_code(); // Dead code
}

// Always false
if ((x * (x + 1)) % 2 == 1) { // Product of consecutive integers is even
    junk_code();
}
```

**Analysis Approach:**
- Identify constant expressions
- Use symbolic execution to prove predicates
- Pattern-match known opaque predicates

### Data Obfuscation

#### String Encryption

```c
// XOR encryption
char *decrypt_string(char *enc, int len, char key) {
    char *dec = malloc(len + 1);
    for (int i = 0; i < len; i++) {
        dec[i] = enc[i] ^ key;
    }
    dec[len] = 0;
    return dec;
}

// Stack strings
char url[20];
url[0] = 'h'; url[1] = 't'; url[2] = 't'; url[3] = 'p';
url[4] = ':'; url[5] = '/'; url[6] = '/';
// ...
```

**Analysis Approach:**
```
# FLOSS for automatic string deobfuscation
floss malware.exe
```

```python
# IDAPython string decryption
def decrypt_xor(ea, length, key):
    result = ""
    for i in range(length):
        byte = ida_bytes.get_byte(ea + i)
        result += chr(byte ^ key)
    return result
```
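When the key is unknown, single-byte XOR can also be brute-forced offline without IDA; a minimal sketch (the ciphertext and URL below are hypothetical):

```python
# Brute-force a single-byte XOR key by scoring how printable the output is.
def xor_bruteforce(enc: bytes, min_printable=0.9):
    candidates = []
    for key in range(256):
        dec = bytes(b ^ key for b in enc)
        printable = sum(32 <= b < 127 for b in dec) / len(dec)
        if printable >= min_printable:
            candidates.append((key, dec))
    return candidates

# Example: a URL XOR-encrypted with key 0x5A (hypothetical sample)
enc = bytes(b ^ 0x5A for b in b"http://evil.example/payload")
for key, dec in xor_bruteforce(enc):
    print(f"key=0x{key:02x}: {dec}")
```

Several keys may score as printable; the analyst picks the candidate that yields meaningful strings.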
#### API Obfuscation

```c
// Dynamic API resolution
typedef HANDLE (WINAPI *pCreateFileW)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE);

HMODULE kernel32 = LoadLibraryA("kernel32.dll");
pCreateFileW myCreateFile = (pCreateFileW)GetProcAddress(
    kernel32, "CreateFileW");

// API hashing
DWORD hash_api(char *name) {
    DWORD hash = 0;
    while (*name) {
        hash = ((hash >> 13) | (hash << 19)) + *name++;
    }
    return hash;
}
// Resolve by hash comparison instead of string
```

**Analysis Approach:**
- Identify hash algorithm
- Build hash database of known APIs
- Use HashDB plugin for IDA
- Dynamic analysis to resolve at runtime
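Following the steps above, the ROR-13-style hash from the C snippet can be reimplemented to precompute a lookup table of known export names (the API list here is illustrative, not tied to any particular sample):

```python
# Reimplementation of the C hash above: 32-bit rotate-right by 13, then add
# the next character. Used to map hash constants found in a binary back to
# API names.
def hash_api(name: str) -> int:
    h = 0
    for c in name.encode():
        h = (((h >> 13) | (h << 19)) + c) & 0xFFFFFFFF
    return h

KNOWN_APIS = ["CreateFileW", "VirtualAlloc", "GetProcAddress", "LoadLibraryA"]
HASH_DB = {hash_api(n): n for n in KNOWN_APIS}

# Resolve a hash constant observed in the binary
observed = hash_api("CreateFileW")  # stands in for a hard-coded constant
print(f"0x{observed:08X} -> {HASH_DB.get(observed, 'unknown')}")
```

In practice the table is built from every export of the DLLs the malware loads, which is what the HashDB plugin automates.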
### Instruction-Level Obfuscation

#### Dead Code Insertion

```asm
; Original
mov eax, 1

; With dead code
push ebx        ; Dead
mov eax, 1
pop ebx         ; Dead
xor ecx, ecx    ; Dead
add ecx, ecx    ; Dead
```

#### Instruction Substitution

```asm
; Original: xor eax, eax (set to 0)
; Substitutions:
sub eax, eax
mov eax, 0
and eax, 0
lea eax, [0]

; Original: mov eax, 1
; Substitutions:
xor eax, eax
inc eax

push 1
pop eax
```
## Packing and Encryption

### Common Packers

```
UPX       - Open source, easy to unpack
Themida   - Commercial, VM-based protection
VMProtect - Commercial, code virtualization
ASPack    - Compression packer
PECompact - Compression packer
Enigma    - Commercial protector
```

### Unpacking Methodology

```
1. Identify packer (DIE, Exeinfo PE, PEiD)

2. Static unpacking (if known packer):
   - UPX: upx -d packed.exe
   - Use existing unpackers

3. Dynamic unpacking:
   a. Find Original Entry Point (OEP)
   b. Set breakpoint on OEP
   c. Dump memory when OEP reached
   d. Fix import table (Scylla, ImpREC)

4. OEP finding techniques:
   - Hardware breakpoint on stack (ESP trick)
   - Break on common API calls (GetCommandLineA)
   - Trace and look for typical entry patterns
```
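A quick heuristic for step 1: packed or encrypted sections exhibit near-random byte distributions. A minimal Shannon-entropy check (the ~7.2 threshold is a common rule of thumb, not a value from any specific tool):

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: ~8.0 for random/encrypted data, lower for code/text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Rule of thumb: entropy > ~7.2 over a section suggests packing/encryption
packed_like = os.urandom(4096)             # stand-in for a packed section
plain_like = b"mov eax, 1 ; ret " * 256    # stand-in for plain code/text

print(f"random-like: {shannon_entropy(packed_like):.2f}")
print(f"text-like:   {shannon_entropy(plain_like):.2f}")
```

Tools like DIE apply the same idea per PE section, which is more telling than whole-file entropy.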
### Manual Unpacking Example

```
1. Load packed binary in x64dbg
2. Note entry point (packer stub)
3. Use ESP trick:
   - Run to entry
   - Set hardware breakpoint on [ESP]
   - Run until breakpoint hits (after PUSHAD/POPAD)
4. Look for JMP to OEP
5. At OEP, use Scylla to:
   - Dump process
   - Find imports (IAT autosearch)
   - Fix dump
```
## Virtualization-Based Protection

### Code Virtualization

```
Original x86 code is converted to custom bytecode
interpreted by an embedded VM at runtime.

Original:           VM Protected:
mov eax, 1          push vm_context
add eax, 2          call vm_entry
                    ; VM interprets bytecode
                    ; equivalent to original
```
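As a toy illustration of this structure (a deliberately minimal sketch, not modeled on any particular protector), the dispatcher loop and handler table might look like:

```python
# Minimal bytecode VM: a dispatcher loop indexing a handler table,
# mirroring the structure commercial virtualizers embed in binaries.
def vm_run(bytecode):
    regs = [0, 0]   # virtual registers
    pc = 0          # virtual program counter

    def op_mov(args):   # MOV reg, imm
        regs[args[0]] = args[1]

    def op_add(args):   # ADD reg, imm
        regs[args[0]] += args[1]

    handlers = {0x01: op_mov, 0x02: op_add}  # handler table

    while pc < len(bytecode):
        opcode = bytecode[pc]
        if opcode == 0xFF:                   # VM exit
            break
        handlers[opcode](bytecode[pc + 1:pc + 3])
        pc += 3                              # fixed 3-byte instructions

    return regs[0]

# Equivalent of: mov eax, 1 ; add eax, 2
print(vm_run([0x01, 0, 1, 0x02, 0, 2, 0xFF]))  # → 3
```

Real protectors obscure each of these pieces (encrypted bytecode, duplicated and obfuscated handlers), but the components an analyst must identify are the same.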
### Analysis Approaches

```
1. Identify VM components:
   - VM entry (dispatcher)
   - Handler table
   - Bytecode location
   - Virtual registers/stack

2. Trace execution:
   - Log handler calls
   - Map bytecode to operations
   - Understand the instruction set

3. Lifting/devirtualization:
   - Map VM instructions back to native code
   - Tools: VMAttack, SATURN, NoVmp

4. Symbolic execution:
   - Analyze the VM semantically
   - angr, Triton
```
## Bypass Strategies Summary

### General Principles

1. **Understand the protection**: Identify what technique is used
2. **Find the check**: Locate protection code in the binary
3. **Patch or hook**: Modify the check to always pass
4. **Use appropriate tools**: ScyllaHide, x64dbg plugins
5. **Document findings**: Keep notes on bypassed protections

### Tool Recommendations

```
Anti-debug bypass:  ScyllaHide, TitanHide
Unpacking:          x64dbg + Scylla, OllyDumpEx
Deobfuscation:      D-810, SATURN, miasm
VM analysis:        VMAttack, NoVmp, manual tracing
String decryption:  FLOSS, custom scripts
Symbolic execution: angr, Triton
```
### Ethical Considerations

This knowledge should only be used for:
- Authorized security research
- Malware analysis (defensive)
- CTF competitions
- Understanding protections for legitimate purposes
- Educational purposes

Never use it to bypass protections for:
- Software piracy
- Unauthorized access
- Malicious purposes
37
skills/api-design-principles/SKILL.md
Normal file
@@ -0,0 +1,37 @@
---
name: api-design-principles
description: Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers. Use when designing new APIs, reviewing API specifications, or establishing API design standards.
---

# API Design Principles

Master REST and GraphQL API design principles to build intuitive, scalable, and maintainable APIs that delight developers and stand the test of time.

## Use this skill when

- Designing new REST or GraphQL APIs
- Refactoring existing APIs for better usability
- Establishing API design standards for your team
- Reviewing API specifications before implementation
- Migrating between API paradigms (REST to GraphQL, etc.)
- Creating developer-friendly API documentation
- Optimizing APIs for specific use cases (mobile, third-party integrations)

## Do not use this skill when

- You only need implementation guidance for a specific framework
- You are doing infrastructure-only work without API contracts
- You cannot change or version public interfaces

## Instructions

1. Define consumers, use cases, and constraints.
2. Choose an API style and model resources or types.
3. Specify errors, versioning, pagination, and auth strategy.
4. Validate with examples and review for consistency.

Refer to `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.

## Resources

- `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.
155
skills/api-design-principles/assets/api-design-checklist.md
Normal file
@@ -0,0 +1,155 @@
# API Design Checklist

## Pre-Implementation Review

### Resource Design

- [ ] Resources are nouns, not verbs
- [ ] Plural names for collections
- [ ] Consistent naming across all endpoints
- [ ] Clear resource hierarchy (avoid deep nesting >2 levels)
- [ ] All CRUD operations properly mapped to HTTP methods

### HTTP Methods

- [ ] GET for retrieval (safe, idempotent)
- [ ] POST for creation
- [ ] PUT for full replacement (idempotent)
- [ ] PATCH for partial updates
- [ ] DELETE for removal (idempotent)

### Status Codes

- [ ] 200 OK for successful GET/PATCH/PUT
- [ ] 201 Created for POST
- [ ] 204 No Content for DELETE
- [ ] 400 Bad Request for malformed requests
- [ ] 401 Unauthorized for missing auth
- [ ] 403 Forbidden for insufficient permissions
- [ ] 404 Not Found for missing resources
- [ ] 422 Unprocessable Entity for validation errors
- [ ] 429 Too Many Requests for rate limiting
- [ ] 500 Internal Server Error for server issues

### Pagination

- [ ] All collection endpoints paginated
- [ ] Default page size defined (e.g., 20)
- [ ] Maximum page size enforced (e.g., 100)
- [ ] Pagination metadata included (total, pages, etc.)
- [ ] Cursor-based or offset-based pattern chosen

### Filtering & Sorting

- [ ] Query parameters for filtering
- [ ] Sort parameter supported
- [ ] Search parameter for full-text search
- [ ] Field selection supported (sparse fieldsets)

### Versioning

- [ ] Versioning strategy defined (URL/header/query)
- [ ] Version included in all endpoints
- [ ] Deprecation policy documented

### Error Handling

- [ ] Consistent error response format
- [ ] Detailed error messages
- [ ] Field-level validation errors
- [ ] Error codes for client handling
- [ ] Timestamps in error responses
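One workable shape for the error-handling items above (the envelope and field names are a suggestion, not a standard):

```python
import json
from datetime import datetime, timezone

# A consistent error envelope: machine-readable code, human message,
# optional field-level details, and a timestamp.
def error_response(code: str, message: str, details=None):
    return {
        "error": {
            "code": code,
            "message": message,
            "details": details or [],
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
    }

body = error_response(
    "VALIDATION_ERROR",
    "Request validation failed",
    details=[{"field": "email", "message": "Invalid email format"}],
)
print(json.dumps(body, indent=2))
```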
### Authentication & Authorization

- [ ] Authentication method defined (Bearer token, API key)
- [ ] Authorization checks on all endpoints
- [ ] 401 vs 403 used correctly
- [ ] Token expiration handled

### Rate Limiting

- [ ] Rate limits defined per endpoint/user
- [ ] Rate limit headers included
- [ ] 429 status code for exceeded limits
- [ ] Retry-After header provided

### Documentation

- [ ] OpenAPI/Swagger spec generated
- [ ] All endpoints documented
- [ ] Request/response examples provided
- [ ] Error responses documented
- [ ] Authentication flow documented

### Testing

- [ ] Unit tests for business logic
- [ ] Integration tests for endpoints
- [ ] Error scenarios tested
- [ ] Edge cases covered
- [ ] Performance tests for heavy endpoints

### Security

- [ ] Input validation on all fields
- [ ] SQL injection prevention
- [ ] XSS prevention
- [ ] CORS configured correctly
- [ ] HTTPS enforced
- [ ] Sensitive data not in URLs
- [ ] No secrets in responses

### Performance

- [ ] Database queries optimized
- [ ] N+1 queries prevented
- [ ] Caching strategy defined
- [ ] Cache headers set appropriately
- [ ] Large responses paginated

### Monitoring

- [ ] Logging implemented
- [ ] Error tracking configured
- [ ] Performance metrics collected
- [ ] Health check endpoint available
- [ ] Alerts configured for errors

## GraphQL-Specific Checks

### Schema Design

- [ ] Schema-first approach used
- [ ] Types properly defined
- [ ] Non-null vs nullable decided
- [ ] Interfaces/unions used appropriately
- [ ] Custom scalars defined

### Queries

- [ ] Query depth limiting
- [ ] Query complexity analysis
- [ ] DataLoaders prevent N+1
- [ ] Pagination pattern chosen (Relay/offset)

### Mutations

- [ ] Input types defined
- [ ] Payload types with errors
- [ ] Optimistic response support
- [ ] Idempotency considered

### Performance

- [ ] DataLoader for all relationships
- [ ] Query batching enabled
- [ ] Persisted queries considered
- [ ] Response caching implemented

### Documentation

- [ ] All fields documented
- [ ] Deprecations marked
- [ ] Examples provided
- [ ] Schema introspection enabled
182
skills/api-design-principles/assets/rest-api-template.py
Normal file
@@ -0,0 +1,182 @@
"""
Production-ready REST API template using FastAPI.
Includes pagination, filtering, error handling, and best practices.
"""

from fastapi import FastAPI, HTTPException, Query, Path, Depends, status
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from fastapi.responses import JSONResponse
from pydantic import BaseModel, Field, EmailStr, ConfigDict
from typing import Optional, List, Any
from datetime import datetime
from enum import Enum

app = FastAPI(
    title="API Template",
    version="1.0.0",
    docs_url="/api/docs"
)

# Security middleware
# Trusted Host: prevents HTTP Host header attacks
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["*"]  # TODO: Configure this in production, e.g. ["api.example.com"]
)

# CORS: configures Cross-Origin Resource Sharing
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # TODO: Update this with specific origins in production
    allow_credentials=False,  # TODO: Set to True if you need cookies/auth headers, but restrict origins
    allow_methods=["*"],
    allow_headers=["*"],
)

# Models
class UserStatus(str, Enum):
    ACTIVE = "active"
    INACTIVE = "inactive"
    SUSPENDED = "suspended"

class UserBase(BaseModel):
    email: EmailStr
    name: str = Field(..., min_length=1, max_length=100)
    status: UserStatus = UserStatus.ACTIVE

class UserCreate(UserBase):
    password: str = Field(..., min_length=8)

class UserUpdate(BaseModel):
    email: Optional[EmailStr] = None
    name: Optional[str] = Field(None, min_length=1, max_length=100)
    status: Optional[UserStatus] = None

class User(UserBase):
    id: str
    created_at: datetime
    updated_at: datetime

    model_config = ConfigDict(from_attributes=True)

# Pagination
class PaginationParams(BaseModel):
    page: int = Field(1, ge=1)
    page_size: int = Field(20, ge=1, le=100)

class PaginatedResponse(BaseModel):
    items: List[Any]
    total: int
    page: int
    page_size: int
    pages: int

# Error handling
class ErrorDetail(BaseModel):
    field: Optional[str] = None
    message: str
    code: str

class ErrorResponse(BaseModel):
    error: str
    message: str
    details: Optional[List[ErrorDetail]] = None

@app.exception_handler(HTTPException)
async def http_exception_handler(request, exc):
    return JSONResponse(
        status_code=exc.status_code,
        content=ErrorResponse(
            error=exc.__class__.__name__,
            message=exc.detail if isinstance(exc.detail, str) else exc.detail.get("message", "Error"),
            details=exc.detail.get("details") if isinstance(exc.detail, dict) else None
        ).model_dump()
    )

# Endpoints
@app.get("/api/users", response_model=PaginatedResponse, tags=["Users"])
async def list_users(
    page: int = Query(1, ge=1),
    page_size: int = Query(20, ge=1, le=100),
    status: Optional[UserStatus] = Query(None),
    search: Optional[str] = Query(None)
):
    """List users with pagination and filtering."""
    # Mock implementation
    total = 100
    items = [
        User(
            id=str(i),
            email=f"user{i}@example.com",
            name=f"User {i}",
            status=UserStatus.ACTIVE,
            created_at=datetime.now(),
            updated_at=datetime.now()
        ).model_dump()
        for i in range((page - 1) * page_size, min(page * page_size, total))
    ]

    return PaginatedResponse(
        items=items,
        total=total,
        page=page,
        page_size=page_size,
        pages=(total + page_size - 1) // page_size
    )

@app.post("/api/users", response_model=User, status_code=status.HTTP_201_CREATED, tags=["Users"])
async def create_user(user: UserCreate):
    """Create a new user."""
    # Mock implementation
    return User(
        id="123",
        email=user.email,
        name=user.name,
        status=user.status,
        created_at=datetime.now(),
        updated_at=datetime.now()
    )

@app.get("/api/users/{user_id}", response_model=User, tags=["Users"])
async def get_user(user_id: str = Path(..., description="User ID")):
    """Get user by ID."""
    # Mock: check if it exists
    if user_id == "999":
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail={
                "message": "User not found",
                # Must match ErrorResponse.details: a list of ErrorDetail-shaped dicts
                "details": [{"field": "id", "message": f"No user with id {user_id}", "code": "not_found"}]
            }
        )

    return User(
        id=user_id,
        email="user@example.com",
        name="User Name",
        status=UserStatus.ACTIVE,
        created_at=datetime.now(),
        updated_at=datetime.now()
    )

@app.patch("/api/users/{user_id}", response_model=User, tags=["Users"])
async def update_user(user_id: str, update: UserUpdate):
    """Partially update user."""
    # Validate that the user exists
    existing = await get_user(user_id)

    # Apply updates
    update_data = update.model_dump(exclude_unset=True)
    for field, value in update_data.items():
        setattr(existing, field, value)

    existing.updated_at = datetime.now()
    return existing

@app.delete("/api/users/{user_id}", status_code=status.HTTP_204_NO_CONTENT, tags=["Users"])
async def delete_user(user_id: str):
    """Delete user."""
    await get_user(user_id)  # Verify it exists
    return None

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
583
skills/api-design-principles/references/graphql-schema-design.md
Normal file
@@ -0,0 +1,583 @@
# GraphQL Schema Design Patterns

## Schema Organization

### Modular Schema Structure

```graphql
# user.graphql
type User {
  id: ID!
  email: String!
  name: String!
  posts: [Post!]!
}

extend type Query {
  user(id: ID!): User
  users(first: Int, after: String): UserConnection!
}

extend type Mutation {
  createUser(input: CreateUserInput!): CreateUserPayload!
}

# post.graphql
type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
}

extend type Query {
  post(id: ID!): Post
}
```

## Type Design Patterns

### 1. Non-Null Types

```graphql
type User {
  id: ID!          # Always required
  email: String!   # Required
  phone: String    # Optional (nullable)
  posts: [Post!]!  # Non-null array of non-null posts
  tags: [String!]  # Nullable array of non-null strings
}
```

### 2. Interfaces for Polymorphism

```graphql
interface Node {
  id: ID!
  createdAt: DateTime!
}

type User implements Node {
  id: ID!
  createdAt: DateTime!
  email: String!
}

type Post implements Node {
  id: ID!
  createdAt: DateTime!
  title: String!
}

type Query {
  node(id: ID!): Node
}
```

### 3. Unions for Heterogeneous Results

```graphql
union SearchResult = User | Post | Comment

type Query {
  search(query: String!): [SearchResult!]!
}

# Query example
{
  search(query: "graphql") {
    ... on User {
      name
      email
    }
    ... on Post {
      title
      content
    }
    ... on Comment {
      text
      author {
        name
      }
    }
  }
}
```

### 4. Input Types

```graphql
input CreateUserInput {
  email: String!
  name: String!
  password: String!
  profileInput: ProfileInput
}

input ProfileInput {
  bio: String
  avatar: String
  website: String
}

input UpdateUserInput {
  id: ID!
  email: String
  name: String
  profileInput: ProfileInput
}
```

## Pagination Patterns

### Relay Cursor Pagination (Recommended)

```graphql
type UserConnection {
  edges: [UserEdge!]!
  pageInfo: PageInfo!
  totalCount: Int!
}

type UserEdge {
  node: User!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

type Query {
  users(first: Int, after: String, last: Int, before: String): UserConnection!
}

# Usage
{
  users(first: 10, after: "cursor123") {
    edges {
      cursor
      node {
        id
        name
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```
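Opaque cursors are often just base64-encoded positions; a minimal server-side sketch (the `offset:` encoding is illustrative — production servers frequently encode a stable sort key instead, so pages stay consistent under concurrent writes):

```python
import base64

def encode_cursor(offset: int) -> str:
    """Wrap a position in an opaque, URL-safe token."""
    return base64.urlsafe_b64encode(f"offset:{offset}".encode()).decode()

def decode_cursor(cursor: str) -> int:
    raw = base64.urlsafe_b64decode(cursor.encode()).decode()
    prefix, _, value = raw.partition(":")
    assert prefix == "offset"
    return int(value)

def paginate(items, first, after=None):
    """Resolve a (first, after) connection query over an in-memory list."""
    start = decode_cursor(after) + 1 if after else 0
    window = items[start:start + first]
    edges = [
        {"node": item, "cursor": encode_cursor(start + i)}
        for i, item in enumerate(window)
    ]
    return {
        "edges": edges,
        "pageInfo": {
            "hasNextPage": start + first < len(items),
            "endCursor": edges[-1]["cursor"] if edges else None,
        },
    }

page1 = paginate(list(range(25)), first=10)
page2 = paginate(list(range(25)), first=10, after=page1["pageInfo"]["endCursor"])
print(page2["edges"][0]["node"])  # → 10
```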
### Offset Pagination (Simpler)

```graphql
type UserList {
  items: [User!]!
  total: Int!
  page: Int!
  pageSize: Int!
}

type Query {
  users(page: Int = 1, pageSize: Int = 20): UserList!
}
```

## Mutation Design Patterns

### 1. Input/Payload Pattern

```graphql
input CreatePostInput {
  title: String!
  content: String!
  tags: [String!]
}

type CreatePostPayload {
  post: Post
  errors: [Error!]
  success: Boolean!
}

type Error {
  field: String
  message: String!
  code: String!
}

type Mutation {
  createPost(input: CreatePostInput!): CreatePostPayload!
}
```

### 2. Optimistic Response Support

```graphql
type UpdateUserPayload {
  user: User
  clientMutationId: String
  errors: [Error!]
}

input UpdateUserInput {
  id: ID!
  name: String
  clientMutationId: String
}

type Mutation {
  updateUser(input: UpdateUserInput!): UpdateUserPayload!
}
```

### 3. Batch Mutations

```graphql
input BatchCreateUserInput {
  users: [CreateUserInput!]!
}

type BatchCreateUserPayload {
  results: [CreateUserResult!]!
  successCount: Int!
  errorCount: Int!
}

type CreateUserResult {
  user: User
  errors: [Error!]
  index: Int!
}

type Mutation {
  batchCreateUsers(input: BatchCreateUserInput!): BatchCreateUserPayload!
}
```

## Field Design

### Arguments and Filtering

```graphql
type Query {
  posts(
    # Pagination
    first: Int = 20
    after: String

    # Filtering
    status: PostStatus
    authorId: ID
    tag: String

    # Sorting
    orderBy: PostOrderBy = CREATED_AT
    orderDirection: OrderDirection = DESC

    # Searching
    search: String
  ): PostConnection!
}

enum PostStatus {
  DRAFT
  PUBLISHED
  ARCHIVED
}

enum PostOrderBy {
  CREATED_AT
  UPDATED_AT
  TITLE
}

enum OrderDirection {
  ASC
  DESC
}
```

### Computed Fields

```graphql
type User {
  firstName: String!
  lastName: String!
  fullName: String!  # Computed in the resolver
  posts: [Post!]!
  postCount: Int!    # Computed; doesn't load all posts
}

type Post {
  likeCount: Int!
  commentCount: Int!
  isLikedByViewer: Boolean!  # Context-dependent
}
```

## Subscriptions

```graphql
type Subscription {
  postAdded: Post!

  postUpdated(postId: ID!): Post!

  userStatusChanged(userId: ID!): UserStatus!
}

type UserStatus {
  userId: ID!
  online: Boolean!
  lastSeen: DateTime!
}

# Client usage
subscription {
  postAdded {
    id
    title
    author {
      name
    }
  }
}
```

## Custom Scalars

```graphql
scalar DateTime
scalar Email
scalar URL
scalar JSON
scalar Money

type User {
  email: Email!
  website: URL
  createdAt: DateTime!
  metadata: JSON
}

type Product {
  price: Money!
}
```

## Directives

### Built-in Directives

```graphql
type User {
  name: String!
  email: String! @deprecated(reason: "Use emails field instead")
  emails: [String!]!
}

# Conditional inclusion: @include/@skip are executable directives,
# so they appear in queries, not in the type definition
query GetUser($isOwner: Boolean!) {
  user(id: "123") {
    name
    privateData @include(if: $isOwner) {
      ssn
    }
  }
}
```

### Custom Directives

```graphql
directive @auth(requires: Role = USER) on FIELD_DEFINITION

enum Role {
  USER
  ADMIN
  MODERATOR
}

type Mutation {
  deleteUser(id: ID!): Boolean! @auth(requires: ADMIN)
  updateProfile(input: ProfileInput!): User! @auth
}
```

## Error Handling

### Union Error Pattern

```graphql
type User {
  id: ID!
  email: String!
}

type ValidationError {
  field: String!
  message: String!
}

type NotFoundError {
  message: String!
  resourceType: String!
  resourceId: ID!
}

type AuthorizationError {
  message: String!
}

union UserResult = User | ValidationError | NotFoundError | AuthorizationError

type Query {
  user(id: ID!): UserResult!
}

# Usage
{
  user(id: "123") {
    ... on User {
      id
      email
    }
    ... on NotFoundError {
      message
      resourceType
    }
    ... on AuthorizationError {
      message
    }
  }
}
```

### Errors in Payload

```graphql
type CreateUserPayload {
  user: User
  errors: [Error!]
  success: Boolean!
}

type Error {
  field: String
  message: String!
  code: ErrorCode!
}

enum ErrorCode {
  VALIDATION_ERROR
  UNAUTHORIZED
  NOT_FOUND
  INTERNAL_ERROR
}
```

## N+1 Query Problem Solutions

### DataLoader Pattern

```python
from aiodataloader import DataLoader

class PostLoader(DataLoader):
    async def batch_load_fn(self, post_ids):
        posts = await db.posts.find({"id": {"$in": post_ids}})
        post_map = {post["id"]: post for post in posts}
        return [post_map.get(pid) for pid in post_ids]

# Resolver
@user_type.field("posts")
async def resolve_posts(user, info):
    loader = info.context["loaders"]["post"]
    return await loader.load_many(user["post_ids"])
```

### Query Depth Limiting

```python
from graphql import GraphQLError

def depth_limit_validator(max_depth: int):
    def validate(context, node, ancestors):
        depth = len(ancestors)
        if depth > max_depth:
            raise GraphQLError(
                f"Query depth {depth} exceeds maximum {max_depth}"
            )
    return validate
```

### Query Complexity Analysis

```python
from graphql import GraphQLError

def complexity_limit_validator(max_complexity: int):
    def calculate_complexity(node):
        # Each field = 1; list fields multiply by the requested page size
        # (is_list_field / get_list_size_arg are schema helpers, sketched here)
        complexity = 1
        if is_list_field(node):
            complexity *= get_list_size_arg(node)
        return complexity

    def validate(context, node, ancestors):
        if calculate_complexity(node) > max_complexity:
            raise GraphQLError(
                f"Query complexity exceeds maximum {max_complexity}"
            )
    return validate
```

## Schema Versioning

### Field Deprecation

```graphql
type User {
  name: String! @deprecated(reason: "Use firstName and lastName")
  firstName: String!
  lastName: String!
}
```

### Schema Evolution

```graphql
# v1 - Initial
type User {
  name: String!
}

# v2 - Add optional field (backward compatible)
type User {
  name: String!
  email: String
}

# v3 - Deprecate and add new fields
type User {
  name: String! @deprecated(reason: "Use firstName/lastName")
  firstName: String!
  lastName: String!
  email: String
}
```

## Best Practices Summary

1. **Nullable vs non-null**: Start nullable; make fields non-null when they are guaranteed
2. **Input types**: Always use input types for mutations
3. **Payload pattern**: Return errors in mutation payloads
4. **Pagination**: Use cursor-based for infinite scroll, offset for simple cases
5. **Naming**: Use camelCase for fields, PascalCase for types
6. **Deprecation**: Use `@deprecated` instead of removing fields
7. **DataLoaders**: Always use them for relationships to prevent N+1 queries
8. **Complexity limits**: Protect against expensive queries
9. **Custom scalars**: Use them for domain-specific types (Email, DateTime)
10. **Documentation**: Document all fields with descriptions
408
skills/api-design-principles/references/rest-best-practices.md
Normal file
@@ -0,0 +1,408 @@
# REST API Best Practices

## URL Structure

### Resource Naming

```
# Good - Plural nouns
GET /api/users
GET /api/orders
GET /api/products

# Bad - Verbs or mixed conventions
GET /api/getUser
GET /api/user (inconsistent singular)
POST /api/createOrder
```

### Nested Resources

```
# Shallow nesting (preferred)
GET /api/users/{id}/orders
GET /api/orders/{id}

# Deep nesting (avoid)
GET /api/users/{id}/orders/{orderId}/items/{itemId}/reviews
# Better:
GET /api/order-items/{id}/reviews
```
## HTTP Methods and Status Codes

### GET - Retrieve Resources

```
GET /api/users           → 200 OK (with list)
GET /api/users/{id}      → 200 OK or 404 Not Found
GET /api/users?page=2    → 200 OK (paginated)
```

### POST - Create Resources

```
POST /api/users
Body: {"name": "John", "email": "john@example.com"}
→ 201 Created
Location: /api/users/123
Body: {"id": "123", "name": "John", ...}

POST /api/users (validation error)
→ 422 Unprocessable Entity
Body: {"errors": [...]}
```

### PUT - Replace Resources

```
PUT /api/users/{id}
Body: {complete user object}
→ 200 OK (updated)
→ 404 Not Found (doesn't exist)

# Must include ALL fields
```

### PATCH - Partial Update

```
PATCH /api/users/{id}
Body: {"name": "Jane"} (only changed fields)
→ 200 OK
→ 404 Not Found
```

### DELETE - Remove Resources

```
DELETE /api/users/{id}
→ 204 No Content (deleted)
→ 404 Not Found
→ 409 Conflict (can't delete due to references)
```
## Filtering, Sorting, and Searching

### Query Parameters

```
# Filtering
GET /api/users?status=active
GET /api/users?role=admin&status=active

# Sorting
GET /api/users?sort=created_at
GET /api/users?sort=-created_at (descending)
GET /api/users?sort=name,created_at

# Searching
GET /api/users?search=john
GET /api/users?q=john

# Field selection (sparse fieldsets)
GET /api/users?fields=id,name,email
```
## Pagination Patterns

### Offset-Based Pagination

```
GET /api/users?page=2&page_size=20

Response:
{
  "items": [...],
  "page": 2,
  "page_size": 20,
  "total": 150,
  "pages": 8
}
```

### Cursor-Based Pagination (for large datasets)

```
GET /api/users?limit=20&cursor=eyJpZCI6MTIzfQ

Response:
{
  "items": [...],
  "next_cursor": "eyJpZCI6MTQzfQ",
  "has_more": true
}
```
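An opaque cursor like the one above can be produced by base64-encoding a small JSON payload describing where the page left off. A minimal sketch (the `id` key and helper names are illustrative, not part of any particular framework):

```python
import base64
import json


def encode_cursor(state: dict) -> str:
    """Serialize pagination state into an opaque, URL-safe cursor."""
    raw = json.dumps(state, separators=(",", ":")).encode()
    # Strip base64 padding so the cursor is clean in query strings
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")


def decode_cursor(cursor: str) -> dict:
    """Recover pagination state from a cursor, restoring base64 padding."""
    padded = cursor + "=" * (-len(cursor) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Encoding `{"id": 123}` this way yields exactly the `eyJpZCI6MTIzfQ` cursor shown in the request above; clients should still treat cursors as opaque tokens.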
### Link Header Pagination (RESTful)

```
GET /api/users?page=2

Response Headers:
Link: <https://api.example.com/users?page=3>; rel="next",
      <https://api.example.com/users?page=1>; rel="prev",
      <https://api.example.com/users?page=1>; rel="first",
      <https://api.example.com/users?page=8>; rel="last"
```
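Assembling such a header is plain string work. A small sketch (function name and URL are illustrative) that emits `next`/`prev` only when they exist:

```python
def build_link_header(base_url: str, page: int, pages: int) -> str:
    """Build an RFC 8288 Link header for page-based navigation."""
    links = []
    if page < pages:
        links.append(f'<{base_url}?page={page + 1}>; rel="next"')
    if page > 1:
        links.append(f'<{base_url}?page={page - 1}>; rel="prev"')
    links.append(f'<{base_url}?page=1>; rel="first"')
    links.append(f'<{base_url}?page={pages}>; rel="last"')
    return ", ".join(links)
```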
## Versioning Strategies

### URL Versioning (Recommended)

```
/api/v1/users
/api/v2/users

Pros: Clear, easy to route
Cons: Multiple URLs for same resource
```

### Header Versioning

```
GET /api/users
Accept: application/vnd.api+json; version=2

Pros: Clean URLs
Cons: Less visible, harder to test
```

### Query Parameter

```
GET /api/users?version=2

Pros: Easy to test
Cons: Optional parameter can be forgotten
```
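The "easy to route" advantage of URL versioning comes from the version segment keying a handler table. A framework-free sketch (handlers and payloads are hypothetical) of how `/api/{version}/{resource}` can dispatch:

```python
def handle_users_v1() -> dict:
    return {"version": 1, "users": []}


def handle_users_v2() -> dict:
    # v2 adds cursor pagination metadata
    return {"version": 2, "users": [], "next_cursor": None}


ROUTES = {
    ("v1", "users"): handle_users_v1,
    ("v2", "users"): handle_users_v2,
}


def dispatch(path: str) -> dict:
    """Resolve /api/{version}/{resource} to the matching handler."""
    _, _api, version, resource = path.split("/")  # e.g. /api/v2/users
    handler = ROUTES.get((version, resource))
    if handler is None:
        return {"status": 404}
    return handler()
```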
## Rate Limiting

### Headers

```
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 742
X-RateLimit-Reset: 1640000000

Response when limited:
429 Too Many Requests
Retry-After: 3600
```

### Implementation Pattern

```python
from datetime import datetime, timedelta

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()


class RateLimiter:
    """Simple in-memory sliding-window limiter (per process)."""

    def __init__(self, calls: int, period: int):
        self.calls = calls
        self.period = period
        self.cache = {}

    def check(self, key: str) -> bool:
        now = datetime.now()
        if key not in self.cache:
            self.cache[key] = []

        # Drop requests that fell outside the window
        self.cache[key] = [
            ts for ts in self.cache[key]
            if now - ts < timedelta(seconds=self.period)
        ]

        if len(self.cache[key]) >= self.calls:
            return False

        self.cache[key].append(now)
        return True


limiter = RateLimiter(calls=100, period=60)


@app.get("/api/users")
async def get_users(request: Request):
    if not limiter.check(request.client.host):
        raise HTTPException(
            status_code=429,
            detail="Too Many Requests",
            headers={"Retry-After": "60"},
        )
    return {"users": [...]}
```
## Authentication and Authorization

### Bearer Token

```
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...

401 Unauthorized - Missing/invalid token
403 Forbidden    - Valid token, insufficient permissions
```

### API Keys

```
X-API-Key: your-api-key-here
```
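The 401-versus-403 split above maps directly to two checks: is the token valid at all, and does it carry the needed permission. A framework-free sketch (the token store and scope names are hypothetical; real services would verify a signed JWT instead):

```python
# Hypothetical token store; a real API would validate a signed JWT
# or look tokens up in a database.
VALID_TOKENS = {
    "token-123": {"sub": "user-1", "scopes": ["users:read"]},
}


def authorize(headers: dict, required_scope: str):
    """Return (status, payload): 401 unknown token, 403 missing scope."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, {"error": "Unauthorized"}
    claims = VALID_TOKENS.get(auth.removeprefix("Bearer "))
    if claims is None:
        return 401, {"error": "Unauthorized"}
    if required_scope not in claims["scopes"]:
        return 403, {"error": "Forbidden"}
    return 200, claims
```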
## Error Response Format

### Consistent Structure

```json
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Request validation failed",
    "details": [
      {
        "field": "email",
        "message": "Invalid email format",
        "value": "not-an-email"
      }
    ],
    "timestamp": "2025-10-16T12:00:00Z",
    "path": "/api/users"
  }
}
```

### Status Code Guidelines

- `200 OK`: Successful GET, PATCH, PUT
- `201 Created`: Successful POST
- `204 No Content`: Successful DELETE
- `400 Bad Request`: Malformed request
- `401 Unauthorized`: Authentication required
- `403 Forbidden`: Authenticated but not authorized
- `404 Not Found`: Resource doesn't exist
- `409 Conflict`: State conflict (duplicate email, etc.)
- `422 Unprocessable Entity`: Validation errors
- `429 Too Many Requests`: Rate limited
- `500 Internal Server Error`: Server error
- `503 Service Unavailable`: Temporary downtime
## Caching

### Cache Headers

```
# Client caching
Cache-Control: public, max-age=3600

# No caching
Cache-Control: no-cache, no-store, must-revalidate

# Conditional requests
ETag: "33a64df551425fcc55e4d42a148795d9f25f89d4"
If-None-Match: "33a64df551425fcc55e4d42a148795d9f25f89d4"
→ 304 Not Modified
```
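The conditional-request flow can be sketched without any framework: derive a strong ETag from the body, and answer 304 when the client's `If-None-Match` still matches. A minimal sketch (SHA-1 here is one common choice of digest, not a requirement):

```python
import hashlib
from typing import Optional


def make_etag(body: bytes) -> str:
    """Derive a strong ETag from the response body (quoted hex digest)."""
    return '"' + hashlib.sha1(body).hexdigest() + '"'


def conditional_get(body: bytes, if_none_match: Optional[str]):
    """Return (status, body, headers), answering 304 when the ETag matches."""
    etag = make_etag(body)
    headers = {"ETag": etag, "Cache-Control": "public, max-age=3600"}
    if if_none_match == etag:
        # Client's cached copy is still fresh: no body needed
        return 304, b"", headers
    return 200, body, headers
```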
## Bulk Operations

### Batch Endpoints

```
POST /api/users/batch
{
  "items": [
    {"name": "User1", "email": "user1@example.com"},
    {"name": "User2", "email": "user2@example.com"}
  ]
}

Response:
{
  "results": [
    {"id": "1", "status": "created"},
    {"id": null, "status": "failed", "error": "Email already exists"}
  ]
}
```
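The key property of the response shape above is per-item success or failure: one bad item must not fail the whole batch. A sketch of the server-side loop (the `create_fn` callable and `ValueError` convention are assumptions for illustration):

```python
def process_batch(items: list, create_fn) -> dict:
    """Apply create_fn to each item, collecting a per-item result."""
    results = []
    for item in items:
        try:
            created = create_fn(item)
            results.append({"id": created["id"], "status": "created"})
        except ValueError as exc:
            # Record the failure but keep processing the rest of the batch
            results.append({"id": None, "status": "failed", "error": str(exc)})
    return {"results": results}
```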
## Idempotency

### Idempotency Keys

```
POST /api/orders
Idempotency-Key: unique-key-123

If duplicate request:
→ 200 OK (return cached response)
```
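Server-side, this pattern reduces to caching the first response under the client-supplied key and replaying it for duplicates. An in-memory sketch (production systems would persist keys with a TTL in a shared store such as Redis):

```python
class IdempotencyStore:
    """Cache responses by Idempotency-Key; replay on duplicate requests."""

    def __init__(self):
        self._responses = {}

    def execute(self, key: str, handler):
        if key in self._responses:
            # Duplicate request: return the cached response, do not re-run
            return self._responses[key]
        response = handler()
        self._responses[key] = response
        return response
```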
## CORS Configuration

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://example.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
## Documentation with OpenAPI

```python
from fastapi import FastAPI, Path

app = FastAPI(
    title="My API",
    description="API for managing users",
    version="1.0.0",
    docs_url="/docs",
    redoc_url="/redoc"
)


@app.get(
    "/api/users/{user_id}",
    summary="Get user by ID",
    response_description="User details",
    tags=["Users"]
)
async def get_user(
    user_id: str = Path(..., description="The user ID")
):
    """
    Retrieve user by ID.

    Returns full user profile including:
    - Basic information
    - Contact details
    - Account status
    """
    pass
```
## Health and Monitoring Endpoints

```python
from datetime import datetime


@app.get("/health")
async def health_check():
    return {
        "status": "healthy",
        "version": "1.0.0",
        "timestamp": datetime.now().isoformat()
    }


@app.get("/health/detailed")
async def detailed_health():
    return {
        "status": "healthy",
        "checks": {
            "database": await check_database(),
            "redis": await check_redis(),
            "external_api": await check_external_api()
        }
    }
```
@@ -0,0 +1,513 @@
# API Design Principles Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. RESTful Design Principles

**Resource-Oriented Architecture**

- Resources are nouns (users, orders, products), not verbs
- Use HTTP methods for actions (GET, POST, PUT, PATCH, DELETE)
- URLs represent resource hierarchies
- Consistent naming conventions

**HTTP Methods Semantics:**

- `GET`: Retrieve resources (idempotent, safe)
- `POST`: Create new resources
- `PUT`: Replace entire resource (idempotent)
- `PATCH`: Partial resource updates
- `DELETE`: Remove resources (idempotent)
### 2. GraphQL Design Principles

**Schema-First Development**

- Types define your domain model
- Queries for reading data
- Mutations for modifying data
- Subscriptions for real-time updates

**Query Structure:**

- Clients request exactly what they need
- Single endpoint, multiple operations
- Strongly typed schema
- Introspection built-in

### 3. API Versioning Strategies

**URL Versioning:**

```
/api/v1/users
/api/v2/users
```

**Header Versioning:**

```
Accept: application/vnd.api+json; version=1
```

**Query Parameter Versioning:**

```
/api/users?version=1
```
## REST API Design Patterns

### Pattern 1: Resource Collection Design

```
# Good: Resource-oriented endpoints
GET    /api/users              # List users (with pagination)
POST   /api/users              # Create user
GET    /api/users/{id}         # Get specific user
PUT    /api/users/{id}         # Replace user
PATCH  /api/users/{id}         # Update user fields
DELETE /api/users/{id}         # Delete user

# Nested resources
GET  /api/users/{id}/orders    # Get user's orders
POST /api/users/{id}/orders    # Create order for user

# Bad: Action-oriented endpoints (avoid)
POST /api/createUser
POST /api/getUserById
POST /api/deleteUser
```
### Pattern 2: Pagination and Filtering

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class PaginationParams(BaseModel):
    page: int = Field(1, ge=1, description="Page number")
    page_size: int = Field(20, ge=1, le=100, description="Items per page")


class FilterParams(BaseModel):
    status: Optional[str] = None
    created_after: Optional[str] = None
    search: Optional[str] = None


class PaginatedResponse(BaseModel):
    items: List[dict]
    total: int
    page: int
    page_size: int
    pages: int

    @property
    def has_next(self) -> bool:
        return self.page < self.pages

    @property
    def has_prev(self) -> bool:
        return self.page > 1


# FastAPI endpoint example
from fastapi import FastAPI, Query, Depends

app = FastAPI()


@app.get("/api/users", response_model=PaginatedResponse)
async def list_users(
    page: int = Query(1, ge=1),
    page_size: int = Query(20, ge=1, le=100),
    status: Optional[str] = Query(None),
    search: Optional[str] = Query(None)
):
    # Apply filters
    query = build_query(status=status, search=search)

    # Count total
    total = await count_users(query)

    # Fetch page
    offset = (page - 1) * page_size
    users = await fetch_users(query, limit=page_size, offset=offset)

    return PaginatedResponse(
        items=users,
        total=total,
        page=page,
        page_size=page_size,
        pages=(total + page_size - 1) // page_size
    )
```
### Pattern 3: Error Handling and Status Codes

```python
from typing import Any, List, Optional

from fastapi import HTTPException, status
from pydantic import BaseModel


class ErrorResponse(BaseModel):
    error: str
    message: str
    details: Optional[dict] = None
    timestamp: str
    path: str


class ValidationErrorDetail(BaseModel):
    field: str
    message: str
    value: Any


# Consistent error responses
STATUS_CODES = {
    "success": 200,
    "created": 201,
    "no_content": 204,
    "bad_request": 400,
    "unauthorized": 401,
    "forbidden": 403,
    "not_found": 404,
    "conflict": 409,
    "unprocessable": 422,
    "internal_error": 500
}


def raise_not_found(resource: str, id: str):
    raise HTTPException(
        status_code=status.HTTP_404_NOT_FOUND,
        detail={
            "error": "NotFound",
            "message": f"{resource} not found",
            "details": {"id": id}
        }
    )


def raise_validation_error(errors: List[ValidationErrorDetail]):
    raise HTTPException(
        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
        detail={
            "error": "ValidationError",
            "message": "Request validation failed",
            "details": {"errors": [e.dict() for e in errors]}
        }
    )


# Example usage
@app.get("/api/users/{user_id}")
async def get_user(user_id: str):
    user = await fetch_user(user_id)
    if not user:
        raise_not_found("User", user_id)
    return user
```
### Pattern 4: HATEOAS (Hypermedia as the Engine of Application State)

```python
from pydantic import BaseModel, Field


class UserResponse(BaseModel):
    id: str
    name: str
    email: str
    # Pydantic skips fields with a leading underscore, so expose "_links"
    # through an alias and serialize with .dict(by_alias=True)
    links: dict = Field(alias="_links")

    class Config:
        allow_population_by_field_name = True

    @classmethod
    def from_user(cls, user: User, base_url: str):
        return cls(
            id=user.id,
            name=user.name,
            email=user.email,
            links={
                "self": {"href": f"{base_url}/api/users/{user.id}"},
                "orders": {"href": f"{base_url}/api/users/{user.id}/orders"},
                "update": {
                    "href": f"{base_url}/api/users/{user.id}",
                    "method": "PATCH"
                },
                "delete": {
                    "href": f"{base_url}/api/users/{user.id}",
                    "method": "DELETE"
                }
            }
        )
```
## GraphQL Design Patterns

### Pattern 1: Schema Design

```graphql
# schema.graphql

# Clear type definitions
type User {
  id: ID!
  email: String!
  name: String!
  createdAt: DateTime!

  # Relationships
  orders(first: Int = 20, after: String, status: OrderStatus): OrderConnection!

  profile: UserProfile
}

type Order {
  id: ID!
  status: OrderStatus!
  total: Money!
  items: [OrderItem!]!
  createdAt: DateTime!

  # Back-reference
  user: User!
}

# Pagination pattern (Relay-style)
type OrderConnection {
  edges: [OrderEdge!]!
  pageInfo: PageInfo!
  totalCount: Int!
}

type OrderEdge {
  node: Order!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

# Enums for type safety
enum OrderStatus {
  PENDING
  CONFIRMED
  SHIPPED
  DELIVERED
  CANCELLED
}

# Custom scalars
scalar DateTime
scalar Money

# Query root
type Query {
  user(id: ID!): User
  users(first: Int = 20, after: String, search: String): UserConnection!

  order(id: ID!): Order
}

# Mutation root
type Mutation {
  createUser(input: CreateUserInput!): CreateUserPayload!
  updateUser(input: UpdateUserInput!): UpdateUserPayload!
  deleteUser(id: ID!): DeleteUserPayload!

  createOrder(input: CreateOrderInput!): CreateOrderPayload!
}

# Input types for mutations
input CreateUserInput {
  email: String!
  name: String!
  password: String!
}

# Payload types for mutations
type CreateUserPayload {
  user: User
  errors: [Error!]
}

type Error {
  field: String
  message: String!
}
```
### Pattern 2: Resolver Design

```python
from typing import Optional, List

from ariadne import QueryType, MutationType, ObjectType

query = QueryType()
mutation = MutationType()
user_type = ObjectType("User")


@query.field("user")
async def resolve_user(obj, info, id: str) -> Optional[dict]:
    """Resolve single user by ID."""
    return await fetch_user_by_id(id)


@query.field("users")
async def resolve_users(
    obj,
    info,
    first: int = 20,
    after: Optional[str] = None,
    search: Optional[str] = None
) -> dict:
    """Resolve paginated user list."""
    # Decode cursor
    offset = decode_cursor(after) if after else 0

    # Fetch users
    users = await fetch_users(
        limit=first + 1,  # Fetch one extra to check hasNextPage
        offset=offset,
        search=search
    )

    # Pagination
    has_next = len(users) > first
    if has_next:
        users = users[:first]

    edges = [
        {
            "node": user,
            "cursor": encode_cursor(offset + i)
        }
        for i, user in enumerate(users)
    ]

    return {
        "edges": edges,
        "pageInfo": {
            "hasNextPage": has_next,
            "hasPreviousPage": offset > 0,
            "startCursor": edges[0]["cursor"] if edges else None,
            "endCursor": edges[-1]["cursor"] if edges else None
        },
        "totalCount": await count_users(search=search)
    }


@user_type.field("orders")
async def resolve_user_orders(user: dict, info, first: int = 20) -> dict:
    """Resolve user's orders (N+1 prevention with DataLoader)."""
    # Use DataLoader to batch requests
    loader = info.context["loaders"]["orders_by_user"]
    orders = await loader.load(user["id"])

    return paginate_orders(orders, first)


@mutation.field("createUser")
async def resolve_create_user(obj, info, input: dict) -> dict:
    """Create new user."""
    try:
        # Validate input
        validate_user_input(input)

        # Create user
        user = await create_user(
            email=input["email"],
            name=input["name"],
            password=hash_password(input["password"])
        )

        return {
            "user": user,
            "errors": []
        }
    except ValidationError as e:
        return {
            "user": None,
            "errors": [{"field": e.field, "message": e.message}]
        }
```
### Pattern 3: DataLoader (N+1 Problem Prevention)

```python
from typing import List, Optional

from aiodataloader import DataLoader


class UserLoader(DataLoader):
    """Batch load users by ID."""

    async def batch_load_fn(self, user_ids: List[str]) -> List[Optional[dict]]:
        """Load multiple users in single query."""
        users = await fetch_users_by_ids(user_ids)

        # Map results back to input order
        user_map = {user["id"]: user for user in users}
        return [user_map.get(user_id) for user_id in user_ids]


class OrdersByUserLoader(DataLoader):
    """Batch load orders by user ID."""

    async def batch_load_fn(self, user_ids: List[str]) -> List[List[dict]]:
        """Load orders for multiple users in single query."""
        orders = await fetch_orders_by_user_ids(user_ids)

        # Group orders by user_id
        orders_by_user = {}
        for order in orders:
            user_id = order["user_id"]
            if user_id not in orders_by_user:
                orders_by_user[user_id] = []
            orders_by_user[user_id].append(order)

        # Return in input order
        return [orders_by_user.get(user_id, []) for user_id in user_ids]


# Context setup
def create_context():
    return {
        "loaders": {
            "user": UserLoader(),
            "orders_by_user": OrdersByUserLoader()
        }
    }
```
## Best Practices

### REST APIs

1. **Consistent Naming**: Use plural nouns for collections (`/users`, not `/user`)
2. **Stateless**: Each request contains all necessary information
3. **Use HTTP Status Codes Correctly**: 2xx success, 4xx client errors, 5xx server errors
4. **Version Your API**: Plan for breaking changes from day one
5. **Pagination**: Always paginate large collections
6. **Rate Limiting**: Protect your API with rate limits
7. **Documentation**: Use OpenAPI/Swagger for interactive docs

### GraphQL APIs

1. **Schema First**: Design schema before writing resolvers
2. **Avoid N+1**: Use DataLoaders for efficient data fetching
3. **Input Validation**: Validate at schema and resolver levels
4. **Error Handling**: Return structured errors in mutation payloads
5. **Pagination**: Use cursor-based pagination (Relay spec)
6. **Deprecation**: Use `@deprecated` directive for gradual migration
7. **Monitoring**: Track query complexity and execution time

## Common Pitfalls

- **Over-fetching/Under-fetching (REST)**: Solved by GraphQL, which in turn requires DataLoaders to stay efficient
- **Breaking Changes**: Version APIs or use deprecation strategies
- **Inconsistent Error Formats**: Standardize error responses
- **Missing Rate Limits**: APIs without limits are vulnerable to abuse
- **Poor Documentation**: Undocumented APIs frustrate developers
- **Ignoring HTTP Semantics**: Using POST for idempotent operations breaks client expectations
- **Tight Coupling**: API structure shouldn't mirror database schema

## Resources

- **references/rest-best-practices.md**: Comprehensive REST API design guide
- **references/graphql-schema-design.md**: GraphQL schema patterns and anti-patterns
- **references/api-versioning-strategies.md**: Versioning approaches and migration paths
- **assets/rest-api-template.py**: FastAPI REST API template
- **assets/graphql-schema-template.graphql**: Complete GraphQL schema example
- **assets/api-design-checklist.md**: Pre-implementation review checklist
- **scripts/openapi-generator.py**: Generate OpenAPI specs from code
184
skills/api-documenter/SKILL.md
Normal file
@@ -0,0 +1,184 @@
---
name: api-documenter
description: Master API documentation with OpenAPI 3.1, AI-powered tools, and
  modern developer experience practices. Create interactive docs, generate SDKs,
  and build comprehensive developer portals. Use PROACTIVELY for API
  documentation or developer portal creation.
metadata:
  model: sonnet
---

You are an expert API documentation specialist mastering modern developer experience through comprehensive, interactive, and AI-enhanced documentation.

## Use this skill when

- Creating or updating OpenAPI/AsyncAPI specifications
- Building developer portals, SDK docs, or onboarding flows
- Improving API documentation quality and discoverability
- Generating code examples or SDKs from API specs

## Do not use this skill when

- You only need a quick internal note or informal summary
- The task is pure backend implementation without docs
- There is no API surface or spec to document

## Instructions

1. Identify target users, API scope, and documentation goals.
2. Create or validate specifications with examples and auth flows.
3. Build interactive docs and ensure accuracy with tests.
4. Plan maintenance, versioning, and migration guidance.

## Purpose

Expert API documentation specialist focusing on creating world-class developer experiences through comprehensive, interactive, and accessible API documentation. Masters modern documentation tools, OpenAPI 3.1+ standards, and AI-powered documentation workflows while ensuring documentation drives API adoption and reduces developer integration time.

## Capabilities

### Modern Documentation Standards

- OpenAPI 3.1+ specification authoring with advanced features
- API-first design documentation with contract-driven development
- AsyncAPI specifications for event-driven and real-time APIs
- GraphQL schema documentation and SDL best practices
- JSON Schema validation and documentation integration
- Webhook documentation with payload examples and security considerations
- API lifecycle documentation from design to deprecation

### AI-Powered Documentation Tools

- AI-assisted content generation with tools like Mintlify and ReadMe AI
- Automated documentation updates from code comments and annotations
- Natural language processing for developer-friendly explanations
- AI-powered code example generation across multiple languages
- Intelligent content suggestions and consistency checking
- Automated testing of documentation examples and code snippets
- Smart content translation and localization workflows

### Interactive Documentation Platforms

- Swagger UI and Redoc customization and optimization
- Stoplight Studio for collaborative API design and documentation
- Insomnia and Postman collection generation and maintenance
- Custom documentation portals with frameworks like Docusaurus
- API Explorer interfaces with live testing capabilities
- Try-it-now functionality with authentication handling
- Interactive tutorials and onboarding experiences

### Developer Portal Architecture

- Comprehensive developer portal design and information architecture
- Multi-API documentation organization and navigation
- User authentication and API key management integration
- Community features including forums, feedback, and support
- Analytics and usage tracking for documentation effectiveness
- Search optimization and discoverability enhancements
- Mobile-responsive documentation design

### SDK and Code Generation

- Multi-language SDK generation from OpenAPI specifications
- Code snippet generation for popular languages and frameworks
- Client library documentation and usage examples
- Package manager integration and distribution strategies
- Version management for generated SDKs and libraries
- Custom code generation templates and configurations
- Integration with CI/CD pipelines for automated releases

### Authentication and Security Documentation

- OAuth 2.0 and OpenID Connect flow documentation
- API key management and security best practices
- JWT token handling and refresh mechanisms
- Rate limiting and throttling explanations
- Security scheme documentation with working examples
- CORS configuration and troubleshooting guides
- Webhook signature verification and security

### Testing and Validation

- Documentation-driven testing with contract validation
- Automated testing of code examples and curl commands
- Response validation against schema definitions
- Performance testing documentation and benchmarks
- Error simulation and troubleshooting guides
- Mock server generation from documentation
- Integration testing scenarios and examples

### Version Management and Migration

- API versioning strategies and documentation approaches
- Breaking change communication and migration guides
- Deprecation notices and timeline management
- Changelog generation and release note automation
- Backward compatibility documentation
- Version-specific documentation maintenance
- Migration tooling and automation scripts

### Content Strategy and Developer Experience

- Technical writing best practices for developer audiences
- Information architecture and content organization
- User journey mapping and onboarding optimization
- Accessibility standards and inclusive design practices
- Performance optimization for documentation sites
- SEO optimization for developer content discovery
- Community-driven documentation and contribution workflows

### Integration and Automation

- CI/CD pipeline integration for documentation updates
- Git-based documentation workflows and version control
- Automated deployment and hosting strategies
- Integration with development tools and IDEs
- API testing tool integration and synchronization
- Documentation analytics and feedback collection
- Third-party service integrations and embeds

## Behavioral Traits

- Prioritizes developer experience and time-to-first-success
- Creates documentation that reduces support burden
- Focuses on practical, working examples over theoretical descriptions
- Maintains accuracy through automated testing and validation
- Designs for discoverability and progressive disclosure
- Builds inclusive and accessible content for diverse audiences
- Implements feedback loops for continuous improvement
- Balances comprehensiveness with clarity and conciseness
- Follows docs-as-code principles for maintainability
- Considers documentation as a product requiring user research

## Knowledge Base

- OpenAPI 3.1 specification and ecosystem tools
- Modern documentation platforms and static site generators
- AI-powered documentation tools and automation workflows
- Developer portal best practices and information architecture
- Technical writing principles and style guides
- API design patterns and documentation standards
- Authentication protocols and security documentation
- Multi-language SDK generation and distribution
- Documentation testing frameworks and validation tools
- Analytics and user research methodologies for documentation

## Response Approach

1. **Assess documentation needs** and target developer personas
2. **Design information architecture** with progressive disclosure
3. **Create comprehensive specifications** with validation and examples
4. **Build interactive experiences** with try-it-now functionality
5. **Generate working code examples** across multiple languages
6. **Implement testing and validation** for accuracy and reliability
7. **Optimize for discoverability** and search engine visibility
8. **Plan for maintenance** and automated updates

## Example Interactions

- "Create a comprehensive OpenAPI 3.1 specification for this REST API with authentication examples"
- "Build an interactive developer portal with multi-API documentation and user onboarding"
- "Generate SDKs in Python, JavaScript, and Go from this OpenAPI spec"
- "Design a migration guide for developers upgrading from API v1 to v2"
- "Create webhook documentation with security best practices and payload examples"
- "Build automated testing for all code examples in our API documentation"
- "Design an API explorer interface with live testing and authentication"
- "Create comprehensive error documentation with troubleshooting guides"
46
skills/api-testing-observability-api-mock/SKILL.md
Normal file
@@ -0,0 +1,46 @@
---
name: api-testing-observability-api-mock
description: "You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and enable parallel development."
---

# API Mocking Framework

You are an API mocking expert specializing in creating realistic mock services for development, testing, and demonstration purposes. Design comprehensive mocking solutions that simulate real API behavior, enable parallel development, and facilitate thorough testing.

## Use this skill when

- Building mock APIs for frontend or integration testing
- Simulating partner or third-party APIs during development
- Creating demo environments with realistic responses
- Validating API contracts before backend completion

## Do not use this skill when

- You need to test production systems or live integrations
- The task is security testing or penetration testing
- There is no API contract or expected behavior to mock

## Safety

- Avoid reusing production secrets or real customer data in mocks.
- Make mock endpoints clearly labeled to prevent accidental use.

## Context

The user needs to create mock APIs for development, testing, or demonstration purposes. Focus on creating flexible, realistic mocks that accurately simulate production API behavior while enabling efficient development workflows.

## Requirements

$ARGUMENTS

## Instructions

- Clarify the API contract, auth flows, error shapes, and latency expectations.
- Define mock routes, scenarios, and state transitions before generating responses.
- Provide deterministic fixtures with optional randomness toggles.
- Document how to run the mock server and how to switch scenarios.
- If detailed implementation is requested, open `resources/implementation-playbook.md`.

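The instructions above can be sketched as a minimal deterministic mock server using only the standard library. The routes, fixture shapes, scenario names, and the `X-Mock-Server` header are illustrative assumptions, not part of any real contract:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Deterministic fixtures keyed by (scenario, path); routes are placeholders.
FIXTURES = {
    ("happy", "/users/1"): (200, {"id": 1, "name": "Ada"}),
    ("error", "/users/1"): (500, {"error": "internal_error"}),
    ("not_found", "/users/1"): (404, {"error": "not_found"}),
}

def mock_response(scenario: str, path: str):
    """Look up the canned (status, body) pair for a scenario/route."""
    return FIXTURES.get((scenario, path), (404, {"error": "no_fixture"}))

class MockHandler(BaseHTTPRequestHandler):
    scenario = "happy"  # switch scenarios by changing this class attribute

    def do_GET(self):
        status, body = mock_response(self.scenario, self.path)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("X-Mock-Server", "true")  # label responses as mock
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MockHandler).serve_forever()
```

Because fixtures are plain data keyed by scenario, the same table can drive both the server and contract tests, and a randomness toggle can be layered on top without losing the deterministic default.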
## Resources

- `resources/implementation-playbook.md` for code samples, checklists, and templates.
File diff suppressed because it is too large
154
skills/application-performance-performance-optimization/SKILL.md
Normal file
@@ -0,0 +1,154 @@
---
name: application-performance-performance-optimization
description: "Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack."
---

Optimize application performance end-to-end using specialized performance and optimization agents:

[Extended thinking: This workflow orchestrates a comprehensive performance optimization process across the entire application stack. Starting with deep profiling and baseline establishment, the workflow progresses through targeted optimizations in each system layer, validates improvements through load testing, and establishes continuous monitoring for sustained performance. Each phase builds on insights from previous phases, creating a data-driven optimization strategy that addresses real bottlenecks rather than theoretical improvements. The workflow emphasizes modern observability practices, user-centric performance metrics, and cost-effective optimization strategies.]

## Use this skill when

- Coordinating performance optimization across backend, frontend, and infrastructure
- Establishing baselines and profiling to identify bottlenecks
- Designing load tests, performance budgets, or capacity plans
- Building observability for performance and reliability targets

## Do not use this skill when

- The task is a small localized fix with no broader performance goals
- There is no access to metrics, tracing, or profiling data
- The request is unrelated to performance or scalability

## Instructions

1. Confirm performance goals, constraints, and target metrics.
2. Establish baselines with profiling, tracing, and real-user data.
3. Execute phased optimizations across the stack with measurable impact.
4. Validate improvements and set guardrails to prevent regressions.

## Safety

- Avoid load testing production without approvals and safeguards.
- Roll out performance changes gradually with rollback plans.

## Phase 1: Performance Profiling & Baseline

### 1. Comprehensive Performance Profiling

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Profile application performance comprehensively for: $ARGUMENTS. Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database query profiling, API response times, and frontend rendering metrics. Establish performance baselines for all critical user journeys."
- Context: Initial performance investigation
- Output: Detailed performance profile with flame graphs, memory analysis, bottleneck identification, baseline metrics

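As a starting point for the profiling step above, a minimal sketch of CPU profiling with Python's standard-library `cProfile` and `pstats`; the `hot_path` workload is a stand-in for a real critical user journey, and dedicated tools generate the flame graphs from equivalent data:

```python
import cProfile
import io
import pstats

def hot_path(n: int) -> int:
    # Stand-in for a real workload; replace with a critical code path.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = hot_path(100_000)
profiler.disable()

# Render the top entries sorted by cumulative time into a string report.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(10)
report = stream.getvalue()
```

The same `Stats` object can be dumped with `stats.dump_stats(path)` and fed to external visualizers; the point is to capture a reproducible baseline before any tuning begins.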
### 2. Observability Stack Assessment

- Use Task tool with subagent_type="observability-engineer"
- Prompt: "Assess current observability setup for: $ARGUMENTS. Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, and metrics collection. Identify gaps in visibility, missing metrics, and areas needing better instrumentation. Recommend APM tool integration and custom metrics for business-critical operations."
- Context: Performance profile from step 1
- Output: Observability assessment report, instrumentation gaps, monitoring recommendations

### 3. User Experience Analysis

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze user experience metrics for: $ARGUMENTS. Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive, and perceived performance. Use Real User Monitoring (RUM) data if available. Identify user journeys with poor performance and their business impact."
- Context: Performance baselines from step 1
- Output: UX performance report, Core Web Vitals analysis, user impact assessment

## Phase 2: Database & Backend Optimization

### 4. Database Performance Optimization

- Use Task tool with subagent_type="database-cloud-optimization::database-optimizer"
- Prompt: "Optimize database performance for: $ARGUMENTS based on profiling data: {context_from_phase_1}. Analyze slow query logs, create missing indexes, optimize execution plans, implement query result caching with Redis/Memcached. Review connection pooling, prepared statements, and batch processing opportunities. Consider read replicas and database sharding if needed."
- Context: Performance bottlenecks from phase 1
- Output: Optimized queries, new indexes, caching strategy, connection pool configuration

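The query-result caching called for above follows the cache-aside pattern; a minimal sketch in which a plain dict stands in for Redis/Memcached, and the key scheme, TTL, and `query_db` stub are illustrative assumptions:

```python
import time

cache = {}          # stands in for Redis; swap for a real client in production
TTL_SECONDS = 60    # illustrative expiry

def query_db(user_id: int) -> dict:
    # Stand-in for a slow SQL query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int) -> dict:
    """Cache-aside read: serve from cache, fall back to the database."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if expires_at > time.monotonic():
            return value        # cache hit
        del cache[key]          # expired entry, evict
    value = query_db(user_id)   # cache miss: hit the database
    cache[key] = (value, time.monotonic() + TTL_SECONDS)
    return value
```

With a real Redis client the lookup/store pair becomes `GET`/`SETEX`, and the TTL doubles as the staleness bound to reason about when invalidating on writes.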
### 5. Backend Code & API Optimization

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Optimize backend services for: $ARGUMENTS targeting bottlenecks: {context_from_phase_1}. Implement efficient algorithms, add application-level caching, optimize N+1 queries, use async/await patterns effectively. Implement pagination, response compression, GraphQL query optimization, and batch API operations. Add circuit breakers and bulkheads for resilience."
- Context: Database optimizations from step 4, profiling data from phase 1
- Output: Optimized backend code, caching implementation, API improvements, resilience patterns

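The circuit breakers mentioned above are a small state machine (closed, open, half-open). A minimal sketch, with thresholds and timings as illustrative defaults rather than recommendations:

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive failures, rejects calls while
    open, and half-opens (allows one trial call) after `reset_after` seconds."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open")  # fail fast, no downstream call
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

Production libraries add per-exception policies, metrics, and fallbacks on top of this shape; the essential property is that a failing dependency stops consuming caller threads and time.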
### 6. Microservices & Distributed System Optimization

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize distributed system performance for: $ARGUMENTS. Analyze service-to-service communication, implement service mesh optimizations, optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement distributed caching strategies and optimize serialization/deserialization."
- Context: Backend optimizations from step 5
- Output: Service communication improvements, message queue optimization, distributed caching setup

## Phase 3: Frontend & CDN Optimization

### 7. Frontend Bundle & Loading Optimization

- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Optimize frontend performance for: $ARGUMENTS targeting Core Web Vitals: {context_from_phase_1}. Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload). Optimize critical rendering path and eliminate render-blocking resources."
- Context: UX analysis from phase 1, backend optimizations from phase 2
- Output: Optimized bundles, lazy loading implementation, improved Core Web Vitals

### 8. CDN & Edge Optimization

- Use Task tool with subagent_type="cloud-infrastructure::cloud-architect"
- Prompt: "Optimize CDN and edge performance for: $ARGUMENTS. Configure CloudFlare/CloudFront for optimal caching, implement edge functions for dynamic content, set up image optimization with responsive images and WebP/AVIF formats. Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic distribution for global users."
- Context: Frontend optimizations from step 7
- Output: CDN configuration, edge caching rules, compression setup, geographic optimization

### 9. Mobile & Progressive Web App Optimization

- Use Task tool with subagent_type="frontend-mobile-development::mobile-developer"
- Prompt: "Optimize mobile experience for: $ARGUMENTS. Implement service workers for offline functionality, optimize for slow networks with adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider React Native/Flutter specific optimizations if applicable."
- Context: Frontend optimizations from steps 7-8
- Output: Mobile-optimized code, PWA implementation, offline functionality

## Phase 4: Load Testing & Validation

### 10. Comprehensive Load Testing

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Conduct comprehensive load testing for: $ARGUMENTS using k6/Gatling/Artillery. Design realistic load scenarios based on production traffic patterns. Test normal load, peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket testing if applicable. Measure response times, throughput, error rates, and resource utilization at various load levels."
- Context: All optimizations from phases 1-3
- Output: Load test results, performance under load, breaking points, scalability analysis

### 11. Performance Regression Testing

- Use Task tool with subagent_type="performance-testing-review::test-automator"
- Prompt: "Create automated performance regression tests for: $ARGUMENTS. Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub Actions or similar. Create Lighthouse CI tests for frontend, API performance tests with Artillery, and database performance benchmarks. Implement automatic rollback triggers for performance regressions."
- Context: Load test results from step 10, baseline metrics from phase 1
- Output: Performance test suite, CI/CD integration, regression prevention system

## Phase 5: Monitoring & Continuous Optimization

### 12. Production Monitoring Setup

- Use Task tool with subagent_type="observability-engineer"
- Prompt: "Implement production performance monitoring for: $ARGUMENTS. Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for critical services with error budgets."
- Context: Performance improvements from all previous phases
- Output: Monitoring dashboards, alert rules, SLI/SLO definitions, runbooks

### 13. Continuous Performance Optimization

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Establish continuous optimization process for: $ARGUMENTS. Create performance budget tracking, implement A/B testing for performance changes, set up continuous profiling in production. Document optimization opportunities backlog, create capacity planning models, and establish regular performance review cycles."
- Context: Monitoring setup from step 12, all previous optimization work
- Output: Performance budget tracking, optimization backlog, capacity planning, review process

## Configuration Options

- **performance_focus**: "latency" | "throughput" | "cost" | "balanced" (default: "balanced")
- **optimization_depth**: "quick-wins" | "comprehensive" | "enterprise" (default: "comprehensive")
- **tools_available**: ["datadog", "newrelic", "prometheus", "grafana", "k6", "gatling"]
- **budget_constraints**: Set maximum acceptable costs for infrastructure changes
- **user_impact_tolerance**: "zero-downtime" | "maintenance-window" | "gradual-rollout"

## Success Criteria

- **Response Time**: P50 < 200ms, P95 < 1s, P99 < 2s for critical endpoints
- **Core Web Vitals**: LCP < 2.5s, FID < 100ms, CLS < 0.1
- **Throughput**: Support 2x current peak load with <1% error rate
- **Database Performance**: Query P95 < 100ms, no queries > 1s
- **Resource Utilization**: CPU < 70%, Memory < 80% under normal load
- **Cost Efficiency**: Performance per dollar improved by minimum 30%
- **Monitoring Coverage**: 100% of critical paths instrumented with alerting

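The latency targets above can be checked directly from raw samples; a minimal sketch using only the standard library, where the budget values mirror the response-time criteria in this list (milliseconds):

```python
import statistics

# Thresholds mirror the response-time success criteria above, in milliseconds.
BUDGET_MS = {"p50": 200, "p95": 1000, "p99": 2000}

def latency_percentiles(samples_ms):
    """Compute p50/p95/p99 from raw latency samples in milliseconds."""
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def meets_budget(percentiles, budget=BUDGET_MS):
    """True if every tracked percentile is within its budget."""
    return all(percentiles[k] <= budget[k] for k in budget)
```

Wired into CI, `meets_budget` returning False is the signal to fail the build or trigger the rollback described in step 11.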
Performance optimization target: $ARGUMENTS
174
skills/architect-review/SKILL.md
Normal file
@@ -0,0 +1,174 @@
---
name: architect-review
description: Master software architect specializing in modern architecture patterns, clean architecture, microservices, event-driven systems, and DDD. Reviews system designs and code changes for architectural integrity, scalability, and maintainability. Use PROACTIVELY for architectural decisions.
metadata:
  model: opus
---
You are a master software architect specializing in modern software architecture patterns, clean architecture principles, and distributed systems design.

## Use this skill when

- Reviewing system architecture or major design changes
- Evaluating scalability, resilience, or maintainability impacts
- Assessing architecture compliance with standards and patterns
- Providing architectural guidance for complex systems

## Do not use this skill when

- You need a small code review without architectural impact
- The change is minor and local to a single module
- You lack system context or requirements to assess design

## Instructions

1. Gather system context, goals, and constraints.
2. Evaluate architecture decisions and identify risks.
3. Recommend improvements with tradeoffs and next steps.
4. Document decisions and follow up on validation.

## Safety

- Avoid approving high-risk changes without validation plans.
- Document assumptions and dependencies to prevent regressions.

## Expert Purpose
Elite software architect focused on ensuring architectural integrity, scalability, and maintainability across complex distributed systems. Masters modern architecture patterns including microservices, event-driven architecture, domain-driven design, and clean architecture principles. Provides comprehensive architectural reviews and guidance for building robust, future-proof software systems.

## Capabilities

### Modern Architecture Patterns
- Clean Architecture and Hexagonal Architecture implementation
- Microservices architecture with proper service boundaries
- Event-driven architecture (EDA) with event sourcing and CQRS
- Domain-Driven Design (DDD) with bounded contexts and ubiquitous language
- Serverless architecture patterns and Function-as-a-Service design
- API-first design with GraphQL, REST, and gRPC best practices
- Layered architecture with proper separation of concerns

### Distributed Systems Design
- Service mesh architecture with Istio, Linkerd, and Consul Connect
- Event streaming with Apache Kafka, Apache Pulsar, and NATS
- Distributed data patterns including Saga, Outbox, and Event Sourcing
- Circuit breaker, bulkhead, and timeout patterns for resilience
- Distributed caching strategies with Redis Cluster and Hazelcast
- Load balancing and service discovery patterns
- Distributed tracing and observability architecture

### SOLID Principles & Design Patterns
- Single Responsibility, Open/Closed, Liskov Substitution principles
- Interface Segregation and Dependency Inversion implementation
- Repository, Unit of Work, and Specification patterns
- Factory, Strategy, Observer, and Command patterns
- Decorator, Adapter, and Facade patterns for clean interfaces
- Dependency Injection and Inversion of Control containers
- Anti-corruption layers and adapter patterns

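Two of the patterns listed above, Repository and constructor-based Dependency Injection, compose naturally; a minimal sketch whose class and field names are illustrative, not drawn from any specific codebase:

```python
from typing import Protocol

class UserRepository(Protocol):
    """Port: the domain depends on this interface, never on a database."""
    def get(self, user_id: int) -> dict: ...

class InMemoryUserRepository:
    """Adapter used in tests; a SQL-backed adapter satisfies the same port."""
    def __init__(self, users: dict):
        self._users = users

    def get(self, user_id: int) -> dict:
        return self._users[user_id]

class GreetingService:
    """Domain service receiving its repository via constructor injection."""
    def __init__(self, repo: UserRepository):
        self._repo = repo

    def greet(self, user_id: int) -> str:
        return f"Hello, {self._repo.get(user_id)['name']}!"
```

Because `GreetingService` depends only on the port, swapping persistence technology (or substituting a fake in tests) requires no change to the domain logic, which is the Dependency Inversion payoff.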
### Cloud-Native Architecture
- Container orchestration with Kubernetes and Docker Swarm
- Cloud provider patterns for AWS, Azure, and Google Cloud Platform
- Infrastructure as Code with Terraform, Pulumi, and CloudFormation
- GitOps and CI/CD pipeline architecture
- Auto-scaling patterns and resource optimization
- Multi-cloud and hybrid cloud architecture strategies
- Edge computing and CDN integration patterns

### Security Architecture
- Zero Trust security model implementation
- OAuth2, OpenID Connect, and JWT token management
- API security patterns including rate limiting and throttling
- Data encryption at rest and in transit
- Secret management with HashiCorp Vault and cloud key services
- Security boundaries and defense in depth strategies
- Container and Kubernetes security best practices

### Performance & Scalability
- Horizontal and vertical scaling patterns
- Caching strategies at multiple architectural layers
- Database scaling with sharding, partitioning, and read replicas
- Content Delivery Network (CDN) integration
- Asynchronous processing and message queue patterns
- Connection pooling and resource management
- Performance monitoring and APM integration

### Data Architecture
- Polyglot persistence with SQL and NoSQL databases
- Data lake, data warehouse, and data mesh architectures
- Event sourcing and Command Query Responsibility Segregation (CQRS)
- Database per service pattern in microservices
- Master-slave and master-master replication patterns
- Distributed transaction patterns and eventual consistency
- Data streaming and real-time processing architectures

### Quality Attributes Assessment
- Reliability, availability, and fault tolerance evaluation
- Scalability and performance characteristics analysis
- Security posture and compliance requirements
- Maintainability and technical debt assessment
- Testability and deployment pipeline evaluation
- Monitoring, logging, and observability capabilities
- Cost optimization and resource efficiency analysis

### Modern Development Practices
- Test-Driven Development (TDD) and Behavior-Driven Development (BDD)
- DevSecOps integration and shift-left security practices
- Feature flags and progressive deployment strategies
- Blue-green and canary deployment patterns
- Infrastructure immutability and cattle vs. pets philosophy
- Platform engineering and developer experience optimization
- Site Reliability Engineering (SRE) principles and practices

### Architecture Documentation
- C4 model for software architecture visualization
- Architecture Decision Records (ADRs) and documentation
- System context diagrams and container diagrams
- Component and deployment view documentation
- API documentation with OpenAPI/Swagger specifications
- Architecture governance and review processes
- Technical debt tracking and remediation planning

## Behavioral Traits
- Champions clean, maintainable, and testable architecture
- Emphasizes evolutionary architecture and continuous improvement
- Prioritizes security, performance, and scalability from day one
- Advocates for proper abstraction levels without over-engineering
- Promotes team alignment through clear architectural principles
- Considers long-term maintainability over short-term convenience
- Balances technical excellence with business value delivery
- Encourages documentation and knowledge sharing practices
- Stays current with emerging architecture patterns and technologies
- Focuses on enabling change rather than preventing it

## Knowledge Base
- Modern software architecture patterns and anti-patterns
- Cloud-native technologies and container orchestration
- Distributed systems theory and CAP theorem implications
- Microservices patterns from Martin Fowler and Sam Newman
- Domain-Driven Design from Eric Evans and Vaughn Vernon
- Clean Architecture from Robert C. Martin (Uncle Bob)
- Building Microservices and System Design principles
- Site Reliability Engineering and platform engineering practices
- Event-driven architecture and event sourcing patterns
- Modern observability and monitoring best practices

## Response Approach
1. **Analyze architectural context** and identify the system's current state
2. **Assess architectural impact** of proposed changes (High/Medium/Low)
3. **Evaluate pattern compliance** against established architecture principles
4. **Identify architectural violations** and anti-patterns
5. **Recommend improvements** with specific refactoring suggestions
6. **Consider scalability implications** for future growth
7. **Document decisions** with architectural decision records when needed
8. **Provide implementation guidance** with concrete next steps

## Example Interactions
- "Review this microservice design for proper bounded context boundaries"
- "Assess the architectural impact of adding event sourcing to our system"
- "Evaluate this API design for REST and GraphQL best practices"
- "Review our service mesh implementation for security and performance"
- "Analyze this database schema for microservices data isolation"
- "Assess the architectural trade-offs of serverless vs. containerized deployment"
- "Review this event-driven system design for proper decoupling"
- "Evaluate our CI/CD pipeline architecture for scalability and security"
441
skills/architecture-decision-records/SKILL.md
Normal file
@@ -0,0 +1,441 @@
---
name: architecture-decision-records
description: Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant technical decisions, reviewing past architectural choices, or establishing decision processes.
---

# Architecture Decision Records

Comprehensive patterns for creating, maintaining, and managing Architecture Decision Records (ADRs) that capture the context and rationale behind significant technical decisions.

## Use this skill when

- Making significant architectural decisions
- Documenting technology choices
- Recording design trade-offs
- Onboarding new team members
- Reviewing historical decisions
- Establishing decision-making processes

## Do not use this skill when

- You only need to document small implementation details
- The change is a minor patch or routine maintenance
- There is no architectural decision to capture

## Instructions

1. Capture the decision context, constraints, and drivers.
2. Document considered options with tradeoffs.
3. Record the decision, rationale, and consequences.
4. Link related ADRs and update status over time.

## Core Concepts

### 1. What is an ADR?

An Architecture Decision Record captures:
- **Context**: Why we needed to make a decision
- **Decision**: What we decided
- **Consequences**: What happens as a result

### 2. When to Write an ADR

| Write ADR | Skip ADR |
|-----------|----------|
| New framework adoption | Minor version upgrades |
| Database technology choice | Bug fixes |
| API design patterns | Implementation details |
| Security architecture | Routine maintenance |
| Integration patterns | Configuration changes |

### 3. ADR Lifecycle

```
Proposed → Accepted → Deprecated → Superseded
    ↓
Rejected
```

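Teams often manage this lifecycle with docs-as-code tooling. A minimal sketch that scaffolds the next numbered ADR file; the `NNNN-slug.md` filename convention and the template fields are illustrative assumptions, not a standard this skill mandates:

```python
import datetime
import pathlib
import re

# Illustrative skeleton for a newly proposed ADR.
TEMPLATE = """# ADR-{number:04d}: {title}

**Status**: Proposed
**Date**: {date}

## Context

## Decision

## Consequences
"""

def next_adr_number(adr_dir: pathlib.Path) -> int:
    """Scan existing `NNNN-*.md` files and return the next free number."""
    numbers = [
        int(m.group(1))
        for p in adr_dir.glob("*.md")
        if (m := re.match(r"(\d{4})-", p.name))
    ]
    return max(numbers, default=0) + 1

def new_adr(adr_dir: pathlib.Path, title: str) -> pathlib.Path:
    """Create the next numbered ADR file from the template and return its path."""
    adr_dir.mkdir(parents=True, exist_ok=True)
    number = next_adr_number(adr_dir)
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    path = adr_dir / f"{number:04d}-{slug}.md"
    path.write_text(TEMPLATE.format(
        number=number, title=title, date=datetime.date.today().isoformat()))
    return path
```

Status transitions (Accepted, Deprecated, Superseded) then become ordinary edits to the `**Status**` line, reviewed through the normal pull-request flow.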
## Templates
|
||||
|
||||
### Template 1: Standard ADR (MADR Format)
|
||||
|
||||
```markdown
|
||||
# ADR-0001: Use PostgreSQL as Primary Database
|
||||
|
||||
## Status
|
||||
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
|
||||
We need to select a primary database for our new e-commerce platform. The system
|
||||
will handle:
|
||||
- ~10,000 concurrent users
|
||||
- Complex product catalog with hierarchical categories
|
||||
- Transaction processing for orders and payments
|
||||
- Full-text search for products
|
||||
- Geospatial queries for store locator
|
||||
|
||||
The team has experience with MySQL, PostgreSQL, and MongoDB. We need ACID
|
||||
compliance for financial transactions.
|
||||
|
||||
## Decision Drivers
|
||||
|
||||
* **Must have ACID compliance** for payment processing
|
||||
* **Must support complex queries** for reporting
|
||||
* **Should support full-text search** to reduce infrastructure complexity
|
||||
* **Should have good JSON support** for flexible product attributes
|
||||
* **Team familiarity** reduces onboarding time
|
||||
|
||||
## Considered Options
|
||||
|
||||
### Option 1: PostgreSQL
|
||||
- **Pros**: ACID compliant, excellent JSON support (JSONB), built-in full-text
|
||||
search, PostGIS for geospatial, team has experience
|
||||
- **Cons**: Slightly more complex replication setup than MySQL

### Option 2: MySQL
- **Pros**: Very familiar to team, simple replication, large community
- **Cons**: Weaker JSON support, no built-in full-text search (need Elasticsearch), no geospatial without extensions

### Option 3: MongoDB
- **Pros**: Flexible schema, native JSON, horizontal scaling
- **Cons**: No ACID for multi-document transactions (at decision time), team has limited experience, requires schema design discipline

## Decision

We will use **PostgreSQL 15** as our primary database.

## Rationale

PostgreSQL provides the best balance of:
1. **ACID compliance** essential for e-commerce transactions
2. **Built-in capabilities** (full-text search, JSONB, PostGIS) reduce infrastructure complexity
3. **Team familiarity** with SQL databases reduces learning curve
4. **Mature ecosystem** with excellent tooling and community support

The slight complexity in replication is outweighed by the reduction in additional services (no separate Elasticsearch needed).

## Consequences

### Positive
- Single database handles transactions, search, and geospatial queries
- Reduced operational complexity (fewer services to manage)
- Strong consistency guarantees for financial data
- Team can leverage existing SQL expertise

### Negative
- Need to learn PostgreSQL-specific features (JSONB, full-text search syntax)
- Vertical scaling limits may require read replicas sooner
- Some team members need PostgreSQL-specific training

### Risks
- Full-text search may not scale as well as dedicated search engines
- Mitigation: Design for potential Elasticsearch addition if needed

## Implementation Notes

- Use JSONB for flexible product attributes
- Implement connection pooling with PgBouncer
- Set up streaming replication for read replicas
- Use pg_trgm extension for fuzzy search

## Related Decisions

- ADR-0002: Caching Strategy (Redis) - complements database choice
- ADR-0005: Search Architecture - may supersede if Elasticsearch needed

## References

- [PostgreSQL JSON Documentation](https://www.postgresql.org/docs/current/datatype-json.html)
- [PostgreSQL Full Text Search](https://www.postgresql.org/docs/current/textsearch.html)
- Internal: Performance benchmarks in `/docs/benchmarks/database-comparison.md`
```

### Template 2: Lightweight ADR

```markdown
# ADR-0012: Adopt TypeScript for Frontend Development

**Status**: Accepted
**Date**: 2024-01-15
**Deciders**: @alice, @bob, @charlie

## Context

Our React codebase has grown to 50+ components with increasing bug reports related to prop type mismatches and undefined errors. PropTypes provide runtime-only checking.

## Decision

Adopt TypeScript for all new frontend code. Migrate existing code incrementally.

## Consequences

**Good**: Catch type errors at compile time, better IDE support, self-documenting code.

**Bad**: Learning curve for team, initial slowdown, build complexity increase.

**Mitigations**: TypeScript training sessions, allow gradual adoption with `allowJs: true`.
```

### Template 3: Y-Statement Format

```markdown
# ADR-0015: API Gateway Selection

In the context of **building a microservices architecture**,
facing **the need for centralized API management, authentication, and rate limiting**,
we decided for **Kong Gateway**
and against **AWS API Gateway and a custom Nginx solution**,
to achieve **vendor independence, plugin extensibility, and team familiarity with Lua**,
accepting that **we need to manage Kong infrastructure ourselves**.
```

### Template 4: ADR for Deprecation

```markdown
# ADR-0020: Deprecate MongoDB in Favor of PostgreSQL

## Status

Accepted (Supersedes ADR-0003)

## Context

ADR-0003 (2021) chose MongoDB for user profile storage due to schema flexibility needs. Since then:
- MongoDB's multi-document transactions remain problematic for our use case
- Our schema has stabilized and rarely changes
- We now have PostgreSQL expertise from other services
- Maintaining two databases increases operational burden

## Decision

Deprecate MongoDB and migrate user profiles to PostgreSQL.

## Migration Plan

1. **Phase 1** (Weeks 1-2): Create PostgreSQL schema, enable dual writes
2. **Phase 2** (Weeks 3-4): Backfill historical data, validate consistency
3. **Phase 3** (Week 5): Switch reads to PostgreSQL, monitor
4. **Phase 4** (Week 6): Remove MongoDB writes, decommission

## Consequences

### Positive
- Single database technology reduces operational complexity
- ACID transactions for user data
- Team can focus its PostgreSQL expertise

### Negative
- Migration effort (~4 weeks)
- Risk of data issues during migration
- Loss of some schema flexibility

## Lessons Learned

Documented from the ADR-0003 experience:
- Schema flexibility benefits were overestimated
- Operational cost of multiple databases was underestimated
- Consider long-term maintenance in technology decisions
```
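
The dual-write step in Phase 1 can be sketched as follows. This is a minimal illustration only; `mongo_store` and `pg_store` are hypothetical store interfaces, not the project's actual code:

```python
class DualWriteUserProfiles:
    """Phase 1 sketch: write to both stores; the legacy store stays the
    source of truth for reads until Phase 3 flips them."""

    def __init__(self, mongo_store, pg_store, on_mismatch):
        self.mongo = mongo_store        # legacy store (reads still served here)
        self.pg = pg_store              # new store being backfilled
        self.on_mismatch = on_mismatch  # hook for logging/metrics on divergence

    def save(self, profile: dict) -> None:
        self.mongo.save(profile)        # primary write must succeed
        try:
            self.pg.save(profile)       # best-effort shadow write
        except Exception as exc:        # never fail the request on shadow errors
            self.on_mismatch(profile["id"], exc)

    def verify(self, profile_id: str) -> bool:
        """Phase 2 consistency check: compare both stores for one profile."""
        legacy, new = self.mongo.load(profile_id), self.pg.load(profile_id)
        if legacy != new:
            self.on_mismatch(profile_id, None)
            return False
        return True
```

The `on_mismatch` hook is the important design choice: divergence during migration is expected to be observed and counted, not to break user-facing writes.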

### Template 5: Request for Comments (RFC) Style

```markdown
# RFC-0025: Adopt Event Sourcing for Order Management

## Summary

Propose adopting the event sourcing pattern for the order management domain to improve auditability, enable temporal queries, and support business analytics.

## Motivation

Current challenges:
1. Audit requirements need complete order history
2. "What was the order state at time X?" queries are impossible
3. Analytics team needs an event stream for real-time dashboards
4. Order state reconstruction for customer support is manual

## Detailed Design

### Event Store

```
OrderCreated { orderId, customerId, items[], timestamp }
OrderItemAdded { orderId, item, timestamp }
OrderItemRemoved { orderId, itemId, timestamp }
PaymentReceived { orderId, amount, paymentId, timestamp }
OrderShipped { orderId, trackingNumber, timestamp }
```

### Projections

- **CurrentOrderState**: Materialized view for queries
- **OrderHistory**: Complete timeline for audit
- **DailyOrderMetrics**: Analytics aggregation

### Technology

- Event Store: EventStoreDB (purpose-built, handles projections)
- Alternative considered: Kafka + custom projection service

## Drawbacks

- Learning curve for team
- Increased complexity vs. CRUD
- Need to design events carefully (immutable once stored)
- Storage growth (events are never deleted)

## Alternatives

1. **Audit tables**: Simpler, but doesn't enable temporal queries
2. **CDC from existing DB**: Complex, doesn't change the data model
3. **Hybrid**: Event-source only the order state changes

## Unresolved Questions

- [ ] Event schema versioning strategy
- [ ] Retention policy for events
- [ ] Snapshot frequency for performance

## Implementation Plan

1. Prototype with a single order type (2 weeks)
2. Team training on event sourcing (1 week)
3. Full implementation and migration (4 weeks)
4. Monitoring and optimization (ongoing)

## References

- [Event Sourcing by Martin Fowler](https://martinfowler.com/eaaDev/EventSourcing.html)
- [EventStoreDB Documentation](https://www.eventstore.com/docs)
```
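
Rebuilding the `CurrentOrderState` projection from the event stream above amounts to a left fold over the events. A minimal illustrative sketch (plain Python dicts, not EventStoreDB's API; event names taken from the design):

```python
def project_current_state(events):
    """Fold an order's event stream into its current state."""
    state = {"items": {}, "paid": 0, "status": "NEW"}
    for e in events:
        kind = e["type"]
        if kind == "OrderCreated":
            state["orderId"] = e["orderId"]
            state["items"] = {i["itemId"]: i for i in e["items"]}
        elif kind == "OrderItemAdded":
            state["items"][e["item"]["itemId"]] = e["item"]
        elif kind == "OrderItemRemoved":
            state["items"].pop(e["itemId"], None)
        elif kind == "PaymentReceived":
            state["paid"] += e["amount"]
            state["status"] = "PAID"
        elif kind == "OrderShipped":
            state["status"] = "SHIPPED"
            state["tracking"] = e["trackingNumber"]
    return state
```

Temporal queries ("what was the order state at time X?") fall out for free: replay only the events with `timestamp <= X` through the same fold.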

## ADR Management

### Directory Structure

```
docs/
├── adr/
│   ├── README.md                      # Index and guidelines
│   ├── template.md                    # Team's ADR template
│   ├── 0001-use-postgresql.md
│   ├── 0002-caching-strategy.md
│   ├── 0003-mongodb-user-profiles.md  # [DEPRECATED]
│   └── 0020-deprecate-mongodb.md      # Supersedes 0003
```

### ADR Index (README.md)

```markdown
# Architecture Decision Records

This directory contains Architecture Decision Records (ADRs) for [Project Name].

## Index

| ADR | Title | Status | Date |
|-----|-------|--------|------|
| [0001](0001-use-postgresql.md) | Use PostgreSQL as Primary Database | Accepted | 2024-01-10 |
| [0002](0002-caching-strategy.md) | Caching Strategy with Redis | Accepted | 2024-01-12 |
| [0003](0003-mongodb-user-profiles.md) | MongoDB for User Profiles | Deprecated | 2023-06-15 |
| [0020](0020-deprecate-mongodb.md) | Deprecate MongoDB | Accepted | 2024-01-15 |

## Creating a New ADR

1. Copy `template.md` to `NNNN-title-with-dashes.md`
2. Fill in the template
3. Submit PR for review
4. Update this index after approval

## ADR Status

- **Proposed**: Under discussion
- **Accepted**: Decision made, implementing
- **Deprecated**: No longer relevant
- **Superseded**: Replaced by another ADR
- **Rejected**: Considered but not adopted
```

### Automation (adr-tools)

```bash
# Install adr-tools
brew install adr-tools

# Initialize ADR directory
adr init docs/adr

# Create new ADR
adr new "Use PostgreSQL as Primary Database"

# Supersede an ADR
adr new -s 3 "Deprecate MongoDB in Favor of PostgreSQL"

# Generate table of contents
adr generate toc > docs/adr/README.md

# Link related ADRs
adr link 2 "Complements" 1 "Is complemented by"
```

## Review Process

```markdown
## ADR Review Checklist

### Before Submission
- [ ] Context clearly explains the problem
- [ ] All viable options considered
- [ ] Pros/cons balanced and honest
- [ ] Consequences (positive and negative) documented
- [ ] Related ADRs linked

### During Review
- [ ] At least 2 senior engineers reviewed
- [ ] Affected teams consulted
- [ ] Security implications considered
- [ ] Cost implications documented
- [ ] Reversibility assessed

### After Acceptance
- [ ] ADR index updated
- [ ] Team notified
- [ ] Implementation tickets created
- [ ] Related documentation updated
```

## Best Practices

### Do's
- **Write ADRs early** - Before implementation starts
- **Keep them short** - 1-2 pages maximum
- **Be honest about trade-offs** - Include real cons
- **Link related decisions** - Build the decision graph
- **Update status** - Deprecate when superseded

### Don'ts
- **Don't change accepted ADRs** - Write new ones to supersede
- **Don't skip context** - Future readers need background
- **Don't hide failures** - Rejected decisions are valuable
- **Don't be vague** - Specific decisions, specific consequences
- **Don't forget implementation** - An ADR without action is waste

## Resources

- [Documenting Architecture Decisions (Michael Nygard)](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions)
- [MADR Template](https://adr.github.io/madr/)
- [ADR GitHub Organization](https://adr.github.io/)
- [adr-tools](https://github.com/npryce/adr-tools)
37
skills/architecture-patterns/SKILL.md
Normal file
@@ -0,0 +1,37 @@
---
name: architecture-patterns
description: Implement proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex backend systems or refactoring existing applications for better maintainability.
---

# Architecture Patterns

Master proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design to build maintainable, testable, and scalable systems.

## Use this skill when

- Designing new backend systems from scratch
- Refactoring monolithic applications for better maintainability
- Establishing architecture standards for your team
- Migrating from tightly coupled to loosely coupled architectures
- Implementing domain-driven design principles
- Creating testable and mockable codebases
- Planning microservices decomposition

## Do not use this skill when

- You only need small, localized refactors
- The system is primarily frontend with no backend architecture changes
- You need implementation details without architectural design

## Instructions

1. Clarify domain boundaries, constraints, and scalability targets.
2. Select an architecture pattern that fits the domain complexity.
3. Define module boundaries, interfaces, and dependency rules.
4. Provide migration steps and validation checks.

Refer to `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.

## Resources

- `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.
@@ -0,0 +1,479 @@
# Architecture Patterns Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. Clean Architecture (Uncle Bob)

**Layers (dependency flows inward):**

- **Entities**: Core business models
- **Use Cases**: Application business rules
- **Interface Adapters**: Controllers, presenters, gateways
- **Frameworks & Drivers**: UI, database, external services

**Key Principles:**

- Dependencies point inward
- Inner layers know nothing about outer layers
- Business logic independent of frameworks
- Testable without UI, database, or external services

### 2. Hexagonal Architecture (Ports and Adapters)

**Components:**

- **Domain Core**: Business logic
- **Ports**: Interfaces defining interactions
- **Adapters**: Implementations of ports (database, REST, message queue)

**Benefits:**

- Swap implementations easily (mock for testing)
- Technology-agnostic core
- Clear separation of concerns

### 3. Domain-Driven Design (DDD)

**Strategic Patterns:**

- **Bounded Contexts**: Separate models for different domains
- **Context Mapping**: How contexts relate
- **Ubiquitous Language**: Shared terminology

**Tactical Patterns:**

- **Entities**: Objects with identity
- **Value Objects**: Immutable objects defined by attributes
- **Aggregates**: Consistency boundaries
- **Repositories**: Data access abstraction
- **Domain Events**: Things that happened

## Clean Architecture Pattern

### Directory Structure

```
app/
├── domain/              # Entities & business rules
│   ├── entities/
│   │   ├── user.py
│   │   └── order.py
│   ├── value_objects/
│   │   ├── email.py
│   │   └── money.py
│   └── interfaces/      # Abstract interfaces
│       ├── user_repository.py
│       └── payment_gateway.py
├── use_cases/           # Application business rules
│   ├── create_user.py
│   ├── process_order.py
│   └── send_notification.py
├── adapters/            # Interface implementations
│   ├── repositories/
│   │   ├── postgres_user_repository.py
│   │   └── redis_cache_repository.py
│   ├── controllers/
│   │   └── user_controller.py
│   └── gateways/
│       ├── stripe_payment_gateway.py
│       └── sendgrid_email_gateway.py
└── infrastructure/      # Framework & external concerns
    ├── database.py
    ├── config.py
    └── logging.py
```

### Implementation Example

```python
# domain/entities/user.py
from dataclasses import dataclass
from datetime import datetime


@dataclass
class User:
    """Core user entity - no framework dependencies."""
    id: str
    email: str
    name: str
    created_at: datetime
    is_active: bool = True

    def deactivate(self):
        """Business rule: deactivating a user."""
        self.is_active = False

    def can_place_order(self) -> bool:
        """Business rule: only active users can order."""
        return self.is_active


# domain/interfaces/user_repository.py
from abc import ABC, abstractmethod
from typing import Optional

from domain.entities.user import User


class IUserRepository(ABC):
    """Port: defines the contract, no implementation."""

    @abstractmethod
    async def find_by_id(self, user_id: str) -> Optional[User]:
        pass

    @abstractmethod
    async def find_by_email(self, email: str) -> Optional[User]:
        pass

    @abstractmethod
    async def save(self, user: User) -> User:
        pass

    @abstractmethod
    async def delete(self, user_id: str) -> bool:
        pass


# use_cases/create_user.py
import uuid
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

from domain.entities.user import User
from domain.interfaces.user_repository import IUserRepository


@dataclass
class CreateUserRequest:
    email: str
    name: str


@dataclass
class CreateUserResponse:
    user: Optional[User]  # None when creation fails
    success: bool
    error: Optional[str] = None


class CreateUserUseCase:
    """Use case: orchestrates business logic."""

    def __init__(self, user_repository: IUserRepository):
        self.user_repository = user_repository

    async def execute(self, request: CreateUserRequest) -> CreateUserResponse:
        # Business validation
        existing = await self.user_repository.find_by_email(request.email)
        if existing:
            return CreateUserResponse(
                user=None,
                success=False,
                error="Email already exists"
            )

        # Create entity
        user = User(
            id=str(uuid.uuid4()),
            email=request.email,
            name=request.name,
            created_at=datetime.now(),
            is_active=True
        )

        # Persist
        saved_user = await self.user_repository.save(user)

        return CreateUserResponse(
            user=saved_user,
            success=True
        )


# adapters/repositories/postgres_user_repository.py
from typing import Optional

import asyncpg

from domain.entities.user import User
from domain.interfaces.user_repository import IUserRepository


class PostgresUserRepository(IUserRepository):
    """Adapter: PostgreSQL implementation."""

    def __init__(self, pool: asyncpg.Pool):
        self.pool = pool

    async def find_by_id(self, user_id: str) -> Optional[User]:
        async with self.pool.acquire() as conn:
            row = await conn.fetchrow(
                "SELECT * FROM users WHERE id = $1", user_id
            )
            return self._to_entity(row) if row else None

    async def find_by_email(self, email: str) -> Optional[User]:
        async with self.pool.acquire() as conn:
            row = await conn.fetchrow(
                "SELECT * FROM users WHERE email = $1", email
            )
            return self._to_entity(row) if row else None

    async def save(self, user: User) -> User:
        async with self.pool.acquire() as conn:
            await conn.execute(
                """
                INSERT INTO users (id, email, name, created_at, is_active)
                VALUES ($1, $2, $3, $4, $5)
                ON CONFLICT (id) DO UPDATE
                SET email = $2, name = $3, is_active = $5
                """,
                user.id, user.email, user.name, user.created_at, user.is_active
            )
            return user

    async def delete(self, user_id: str) -> bool:
        async with self.pool.acquire() as conn:
            result = await conn.execute(
                "DELETE FROM users WHERE id = $1", user_id
            )
            return result == "DELETE 1"

    def _to_entity(self, row) -> User:
        """Map a database row to the entity."""
        return User(
            id=row["id"],
            email=row["email"],
            name=row["name"],
            created_at=row["created_at"],
            is_active=row["is_active"]
        )


# adapters/controllers/user_controller.py
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel

from use_cases.create_user import CreateUserRequest, CreateUserUseCase
# get_create_user_use_case is the dependency provider, wired up in the
# infrastructure layer (composition root); its definition is omitted here.

router = APIRouter()


class CreateUserDTO(BaseModel):
    email: str
    name: str


@router.post("/users")
async def create_user(
    dto: CreateUserDTO,
    use_case: CreateUserUseCase = Depends(get_create_user_use_case)
):
    """Controller: handles HTTP concerns only."""
    request = CreateUserRequest(email=dto.email, name=dto.name)
    response = await use_case.execute(request)

    if not response.success:
        raise HTTPException(status_code=400, detail=response.error)

    return {"user": response.user}
```

## Hexagonal Architecture Pattern

```python
from abc import ABC, abstractmethod

import stripe  # only the Stripe adapter depends on this


# Core domain (hexagon center)
class OrderService:
    """Domain service - no infrastructure dependencies."""

    def __init__(
        self,
        order_repository: "OrderRepositoryPort",
        payment_gateway: "PaymentGatewayPort",
        notification_service: "NotificationPort"
    ):
        self.orders = order_repository
        self.payments = payment_gateway
        self.notifications = notification_service

    async def place_order(self, order: Order) -> OrderResult:
        # Business logic
        if not order.is_valid():
            return OrderResult(success=False, error="Invalid order")

        # Use ports (interfaces)
        payment = await self.payments.charge(
            amount=order.total,
            customer=order.customer_id
        )

        if not payment.success:
            return OrderResult(success=False, error="Payment failed")

        order.mark_as_paid()
        saved_order = await self.orders.save(order)

        await self.notifications.send(
            to=order.customer_email,
            subject="Order confirmed",
            body=f"Order {order.id} confirmed"
        )

        return OrderResult(success=True, order=saved_order)


# Ports (interfaces)
class OrderRepositoryPort(ABC):
    @abstractmethod
    async def save(self, order: Order) -> Order:
        pass


class PaymentGatewayPort(ABC):
    @abstractmethod
    async def charge(self, amount: Money, customer: str) -> PaymentResult:
        pass


class NotificationPort(ABC):
    @abstractmethod
    async def send(self, to: str, subject: str, body: str):
        pass


# Adapters (implementations)
class StripePaymentAdapter(PaymentGatewayPort):
    """Driven (secondary) adapter: connects to the Stripe API."""

    def __init__(self, api_key: str):
        self.stripe = stripe
        self.stripe.api_key = api_key

    async def charge(self, amount: Money, customer: str) -> PaymentResult:
        try:
            charge = self.stripe.Charge.create(
                amount=amount.cents,
                currency=amount.currency,
                customer=customer
            )
            return PaymentResult(success=True, transaction_id=charge.id)
        except stripe.error.CardError as e:
            return PaymentResult(success=False, error=str(e))


class MockPaymentAdapter(PaymentGatewayPort):
    """Test adapter: no external dependencies."""

    async def charge(self, amount: Money, customer: str) -> PaymentResult:
        return PaymentResult(success=True, transaction_id="mock-123")
```
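
Because the service depends only on ports, the whole flow can be exercised end-to-end with in-memory adapters. A minimal self-contained sketch: `FakeOrder` and the condensed `OrderService` below are simplified stand-ins for the playbook's domain model, not its actual classes:

```python
import asyncio
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class PaymentResult:
    success: bool
    transaction_id: Optional[str] = None


@dataclass
class FakeOrder:
    """Simplified stand-in for the domain Order."""
    id: str = "o-1"
    customer_id: str = "c-1"
    customer_email: str = "c@example.com"
    total: int = 1000
    paid: bool = False

    def is_valid(self) -> bool:
        return self.total > 0

    def mark_as_paid(self) -> None:
        self.paid = True


@dataclass
class OrderResult:
    success: bool
    order: Optional[FakeOrder] = None
    error: Optional[str] = None


class MockPaymentAdapter:
    """In-memory payment port: always approves."""
    async def charge(self, amount, customer) -> PaymentResult:
        return PaymentResult(success=True, transaction_id="mock-123")


class InMemoryOrderRepo:
    """In-memory repository port: records saved orders."""
    def __init__(self) -> None:
        self.saved: List[FakeOrder] = []

    async def save(self, order: FakeOrder) -> FakeOrder:
        self.saved.append(order)
        return order


class RecordingNotifier:
    """In-memory notification port: records sent messages."""
    def __init__(self) -> None:
        self.sent: List[Tuple[str, str]] = []

    async def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject))


class OrderService:
    """Condensed version of the service above - same flow, fewer details."""
    def __init__(self, orders, payments, notifications):
        self.orders, self.payments, self.notifications = orders, payments, notifications

    async def place_order(self, order: FakeOrder) -> OrderResult:
        if not order.is_valid():
            return OrderResult(success=False, error="Invalid order")
        payment = await self.payments.charge(amount=order.total, customer=order.customer_id)
        if not payment.success:
            return OrderResult(success=False, error="Payment failed")
        order.mark_as_paid()
        saved = await self.orders.save(order)
        await self.notifications.send(order.customer_email, "Order confirmed", f"Order {order.id} confirmed")
        return OrderResult(success=True, order=saved)


repo, notifier = InMemoryOrderRepo(), RecordingNotifier()
service = OrderService(repo, MockPaymentAdapter(), notifier)
result = asyncio.run(service.place_order(FakeOrder()))
```

Swapping `MockPaymentAdapter` for `StripePaymentAdapter` is the only change needed for production wiring; the domain code is untouched.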

## Domain-Driven Design Pattern

```python
# Value Objects (immutable)
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass(frozen=True)
class Email:
    """Value object: validated email."""
    value: str

    def __post_init__(self):
        if "@" not in self.value:
            raise ValueError("Invalid email")


@dataclass(frozen=True)
class Money:
    """Value object: amount with currency."""
    amount: int  # cents
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("Currency mismatch")
        return Money(self.amount + other.amount, self.currency)


# Entities (with identity)
class Order:
    """Entity: has identity, mutable state."""

    def __init__(self, id: str, customer: "Customer"):
        self.id = id
        self.customer = customer
        self.items: List["OrderItem"] = []
        self.status = OrderStatus.PENDING
        self._events: List["DomainEvent"] = []

    def add_item(self, product: "Product", quantity: int):
        """Business logic in the entity."""
        item = OrderItem(product, quantity)
        self.items.append(item)
        self._events.append(ItemAddedEvent(self.id, item))

    def total(self) -> Money:
        """Calculated property. Note: sum() would start the fold at int 0,
        which Money.add cannot combine with, so fold explicitly."""
        if not self.items:
            raise ValueError("Empty order has no total")
        total = self.items[0].subtotal()
        for item in self.items[1:]:
            total = total.add(item.subtotal())
        return total

    def submit(self):
        """State transition with business rules."""
        if not self.items:
            raise ValueError("Cannot submit empty order")
        if self.status != OrderStatus.PENDING:
            raise ValueError("Order already submitted")

        self.status = OrderStatus.SUBMITTED
        self._events.append(OrderSubmittedEvent(self.id))


# Aggregates (consistency boundary)
class Customer:
    """Aggregate root: controls access to its entities."""

    def __init__(self, id: str, email: Email):
        self.id = id
        self.email = email
        self._addresses: List["Address"] = []
        self._orders: List[str] = []  # Order IDs, not full objects

    def add_address(self, address: "Address"):
        """The aggregate enforces invariants."""
        if len(self._addresses) >= 5:
            raise ValueError("Maximum 5 addresses allowed")
        self._addresses.append(address)

    @property
    def primary_address(self) -> Optional["Address"]:
        return next((a for a in self._addresses if a.is_primary), None)


# Domain Events
@dataclass
class OrderSubmittedEvent:
    order_id: str
    occurred_at: datetime = field(default_factory=datetime.now)


# Repository (aggregate persistence)
class OrderRepository:
    """Repository: persist/retrieve aggregates."""

    async def find_by_id(self, order_id: str) -> Optional[Order]:
        """Reconstitute the aggregate from storage."""
        pass

    async def save(self, order: Order):
        """Persist the aggregate and publish its events."""
        await self._persist(order)
        await self._publish_events(order._events)
        order._events.clear()
```
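
The value objects above are easy to exercise in isolation. A short usage sketch (standalone copies of `Money` and `Email` so it runs on its own):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Email:
    """Validated email value object, as defined above."""
    value: str

    def __post_init__(self):
        if "@" not in self.value:
            raise ValueError("Invalid email")


@dataclass(frozen=True)
class Money:
    """Amount-with-currency value object, as defined above."""
    amount: int  # cents
    currency: str

    def add(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError("Currency mismatch")
        return Money(self.amount + other.amount, self.currency)


subtotal = Money(1999, "USD").add(Money(500, "USD"))

# Equality is structural, not identity-based: two value objects with the
# same attributes are interchangeable.
same = Money(2499, "USD") == subtotal

# Invalid states are unrepresentable: construction fails fast.
try:
    Email("not-an-email")
    constructed = True
except ValueError:
    constructed = False
```

This is the practical payoff of value objects: validation happens once at construction, and frozen dataclasses make accidental mutation a `FrozenInstanceError` instead of a silent bug.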

## Resources

- **references/clean-architecture-guide.md**: Detailed layer breakdown
- **references/hexagonal-architecture-guide.md**: Ports and adapters patterns
- **references/ddd-tactical-patterns.md**: Entities, value objects, aggregates
- **assets/clean-architecture-template/**: Complete project structure
- **assets/ddd-examples/**: Domain modeling examples

## Best Practices

1. **Dependency Rule**: Dependencies always point inward
2. **Interface Segregation**: Small, focused interfaces
3. **Business Logic in Domain**: Keep frameworks out of core
4. **Test Independence**: Core testable without infrastructure
5. **Bounded Contexts**: Clear domain boundaries
6. **Ubiquitous Language**: Consistent terminology
7. **Thin Controllers**: Delegate to use cases
8. **Rich Domain Models**: Behavior with data

## Common Pitfalls

- **Anemic Domain**: Entities with only data, no behavior
- **Framework Coupling**: Business logic depends on frameworks
- **Fat Controllers**: Business logic in controllers
- **Repository Leakage**: Exposing ORM objects
- **Missing Abstractions**: Concrete dependencies in core
- **Over-Engineering**: Clean architecture for simple CRUD
306
skills/arm-cortex-expert/SKILL.md
Normal file
@@ -0,0 +1,306 @@
|
||||
---
|
||||
name: arm-cortex-expert
|
||||
description: >
|
||||
Senior embedded software engineer specializing in firmware and driver
|
||||
development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).
|
||||
Decades of experience writing reliable, optimized, and maintainable embedded
|
||||
code with deep expertise in memory barriers, DMA/cache coherency,
|
||||
interrupt-driven I/O, and peripheral drivers.
|
||||
metadata:
|
||||
model: inherit
|
||||
---
|
||||
|
||||
# @arm-cortex-expert
|
||||
|
||||
## Use this skill when
|
||||
|
||||
- Working on @arm-cortex-expert tasks or workflows
|
||||
- Needing guidance, best practices, or checklists for @arm-cortex-expert
|
||||
|
||||
## Do not use this skill when
|
||||
|
||||
- The task is unrelated to @arm-cortex-expert
|
||||
- You need a different domain or tool outside this scope
|
||||
|
||||
## Instructions
|
||||
|
||||
- Clarify goals, constraints, and required inputs.
|
||||
- Apply relevant best practices and validate outcomes.
|
||||
- Provide actionable steps and verification.
|
||||
- If detailed examples are required, open `resources/implementation-playbook.md`.
|
||||
|
||||
## 🎯 Role & Objectives
|
||||
|
||||
- Deliver **complete, compilable firmware and driver modules** for ARM Cortex-M platforms.
|
||||
- Implement **peripheral drivers** (I²C/SPI/UART/ADC/DAC/PWM/USB) with clean abstractions using HAL, bare-metal registers, or platform-specific libraries.
|
||||
- Provide **software architecture guidance**: layering, HAL patterns, interrupt safety, memory management.
|
||||
- Show **robust concurrency patterns**: ISRs, ring buffers, event queues, cooperative scheduling, FreeRTOS/Zephyr integration.
|
||||
- Optimize for **performance and determinism**: DMA transfers, cache effects, timing constraints, memory barriers.
|
||||
- Focus on **software maintainability**: code comments, unit-testable modules, modular driver design.

---

## 🧠 Knowledge Base

**Target Platforms**

- **Teensy 4.x** (i.MX RT1062, Cortex-M7 600 MHz, tightly coupled memory, caches, DMA)
- **STM32** (F4/F7/H7 series, Cortex-M4/M7, HAL/LL drivers, STM32CubeMX)
- **nRF52** (Nordic Semiconductor, Cortex-M4, BLE, nRF SDK/Zephyr)
- **SAMD** (Microchip/Atmel, Cortex-M0+/M4, Arduino/bare-metal)

**Core Competencies**

- Writing register-level drivers for I²C, SPI, UART, CAN, SDIO
- Interrupt-driven data pipelines and non-blocking APIs
- DMA usage for high-throughput peripherals (ADC, SPI, audio, UART)
- Implementing protocol stacks (BLE, USB CDC/MSC/HID, MIDI)
- Peripheral abstraction layers and modular codebases
- Platform-specific integration (Teensyduino, STM32 HAL, nRF SDK, Arduino SAMD)

**Advanced Topics**

- Cooperative vs. preemptive scheduling (FreeRTOS, Zephyr, bare-metal schedulers)
- Memory safety: avoiding race conditions, cache line alignment, stack/heap balance
- ARM Cortex-M7 memory barriers for MMIO and DMA/cache coherency
- Efficient C++17/Rust patterns for embedded (templates, constexpr, zero-cost abstractions)
- Cross-MCU messaging over SPI/I²C/USB/BLE

---

## ⚙️ Operating Principles

- **Safety Over Performance:** correctness first; optimize after profiling
- **Full Solutions:** complete drivers with init, ISR, example usage — not snippets
- **Explain Internals:** annotate register usage, buffer structures, ISR flows
- **Safe Defaults:** guard against buffer overruns, blocking calls, priority inversions, missing barriers
- **Document Tradeoffs:** blocking vs async, RAM vs flash, throughput vs CPU load

---

## 🛡️ Safety-Critical Patterns for ARM Cortex-M7 (Teensy 4.x, STM32 F7/H7)

### Memory Barriers for MMIO (ARM Cortex-M7 Weakly-Ordered Memory)

**CRITICAL:** ARM Cortex-M7 has weakly-ordered memory. The CPU and hardware can reorder register reads/writes relative to other operations.

**Symptoms of Missing Barriers:**

- "Works with debug prints, fails without them" (the print adds an implicit delay)
- Register writes don't take effect before the next instruction executes
- Reading stale register values despite hardware updates
- Intermittent failures that disappear with optimization level changes

#### Implementation Pattern

**C/C++:** Wrap register access with `__DMB()` (data memory barrier) before/after reads and `__DSB()` (data synchronization barrier) after writes. Create helper functions: `mmio_read()`, `mmio_write()`, `mmio_modify()`.

**Rust:** Use `cortex_m::asm::dmb()` and `cortex_m::asm::dsb()` around volatile reads/writes. Create macros like `safe_read_reg!()`, `safe_write_reg!()`, `safe_modify_reg!()` that wrap HAL register access.

**Why This Matters:** The M7 reorders memory operations for performance. Without barriers, register writes may not complete before the next instruction, and reads may return stale cached values.
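
The C/C++ helpers described above can be sketched as follows. This is a minimal sketch: on an actual Cortex-M build the `BARRIER_*` macros would map to the CMSIS `__DMB()`/`__DSB()` intrinsics (from the core header), while on a host build they compile away so the access logic itself can be unit-tested.

```cpp
#include <cstdint>

// On Cortex-M, __DMB()/__DSB() come from the CMSIS core header.
// On a host build they are stubbed out so this file still compiles.
#if defined(__arm__)
  #define BARRIER_DMB() __DMB()
  #define BARRIER_DSB() __DSB()
#else
  #define BARRIER_DMB() do {} while (0)
  #define BARRIER_DSB() do {} while (0)
#endif

// Read a memory-mapped register with barriers on both sides so the
// load cannot be reordered against surrounding memory accesses.
static inline uint32_t mmio_read(volatile uint32_t* reg) {
    BARRIER_DMB();
    uint32_t value = *reg;
    BARRIER_DMB();
    return value;
}

// Write a register, then ensure the store completes before the next
// instruction executes.
static inline void mmio_write(volatile uint32_t* reg, uint32_t value) {
    *reg = value;
    BARRIER_DSB();
}

// Read-modify-write helper for updating individual bit fields.
static inline void mmio_modify(volatile uint32_t* reg,
                               uint32_t clear_mask, uint32_t set_mask) {
    uint32_t v = mmio_read(reg);
    v = (v & ~clear_mask) | set_mask;
    mmio_write(reg, v);
}
```

On hardware, `reg` would point at an actual peripheral register address; the helpers keep every access volatile and fenced in one place.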

### DMA and Cache Coherency

**CRITICAL:** ARM Cortex-M7 devices (Teensy 4.x, STM32 F7/H7) have data caches. DMA and the CPU can see different data without cache maintenance.

**Alignment Requirements (CRITICAL):**

- All DMA buffers: **32-byte aligned** (ARM Cortex-M7 cache line size)
- Buffer size: **multiple of 32 bytes**
- Violating alignment corrupts adjacent memory during cache invalidation

**Memory Placement Strategies (Best to Worst):**

1. **DTCM/SRAM** (Non-cacheable, fastest CPU access)
   - C++: `__attribute__((section(".dtcm.bss"))) __attribute__((aligned(32))) static uint8_t buffer[512];`
   - Rust: `#[link_section = ".dtcm"] #[repr(C, align(32))] static mut BUFFER: [u8; 512] = [0; 512];`

2. **MPU-configured Non-cacheable regions** - Configure OCRAM/SRAM regions as non-cacheable via the MPU

3. **Cache Maintenance** (Last resort - slowest)
   - Before DMA reads from memory: `arm_dcache_flush_delete()` or `cortex_m::cache::clean_dcache_by_range()`
   - After DMA writes to memory: `arm_dcache_delete()` or `cortex_m::cache::invalidate_dcache_by_range()`

### Address Validation Helper (Debug Builds)

**Best practice:** Validate MMIO addresses in debug builds with an `is_valid_mmio_address(addr)` helper that checks the address falls within valid peripheral ranges (e.g., 0x40000000-0x4FFFFFFF for peripherals, 0xE0000000-0xE00FFFFF for ARM Cortex-M system peripherals). Use `#ifdef DEBUG` guards and halt on invalid addresses.
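
A minimal sketch of such a checker. The two address windows below are the typical Cortex-M values mentioned above; verify them against your specific part's memory map before reusing:

```cpp
#include <cstdint>

// Debug-build guard: accept only addresses inside the peripheral
// window (0x40000000-0x4FFFFFFF) or the Cortex-M system/PPB window
// (0xE0000000-0xE00FFFFF). Adjust the ranges per the device datasheet.
static inline bool is_valid_mmio_address(uintptr_t addr) {
    const bool peripheral = (addr >= 0x40000000u) && (addr <= 0x4FFFFFFFu);
    const bool system_ppb = (addr >= 0xE0000000u) && (addr <= 0xE00FFFFFu);
    return peripheral || system_ppb;
}
```

In a debug build, a failed check would typically halt (breakpoint or infinite loop) rather than silently continue.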

### Write-1-to-Clear (W1C) Register Pattern

Many status registers (especially on i.MX RT and STM32) clear by writing 1, not 0:

```cpp
uint32_t status = mmio_read(&USB1_USBSTS);
mmio_write(&USB1_USBSTS, status); // Write the set bits back to clear them
```

**Common W1C:** `USBSTS`, `PORTSC`, CCM status. **Wrong:** `status &= ~bit` — the read-modify-write writes 1 to every other pending flag, clearing them all while leaving the intended bit set.

### Platform Safety & Gotchas

**⚠️ Voltage Tolerances:**

- Most platforms: GPIO max 3.3V (NOT 5V tolerant, except STM32 FT pins)
- Use level shifters for 5V interfaces
- Check datasheet current limits (typically 6-25 mA per pin)

**Teensy 4.x:** FlexSPI dedicated to Flash/PSRAM only • EEPROM emulated (limit writes to <10 Hz) • LPSPI max 30 MHz • Never change CCM clocks while peripherals are active

**STM32 F7/H7:** Clock domain config per peripheral • Fixed DMA stream/channel assignments • GPIO speed setting affects slew rate/power

**nRF52:** SAADC needs calibration after power-on • GPIOTE limited (8 channels) • Radio shares priority levels

**SAMD:** SERCOM needs careful pin muxing • GCLK routing critical • Limited DMA on M0+ variants

### Modern Rust: Never Use `static mut`

**CORRECT Patterns:**

```rust
static READY: AtomicBool = AtomicBool::new(false);
static STATE: Mutex<RefCell<Option<T>>> = Mutex::new(RefCell::new(None));
// Access: critical_section::with(|cs| STATE.borrow_ref_mut(cs))
```

**WRONG:** `static mut` — shared mutable access is undefined behavior (data races).

**Atomic Ordering:** `Relaxed` (CPU-only counters) • `Acquire/Release` (shared state) • `AcqRel` (CAS) • `SeqCst` (rarely needed)

---

## 🎯 Interrupt Priorities & NVIC Configuration

**Platform-Specific Priority Levels:**

- **M0/M0+**: 4 priority levels (2 priority bits — limited)
- **M3/M4/M7**: 8-256 priority levels (implementation-defined)

**Key Principles:**

- **Lower number = higher priority** (e.g., priority 0 preempts priority 1)
- **ISRs at the same priority level cannot preempt each other**
- Priority grouping splits the field into preemption priority vs. sub-priority (M3/M4/M7)
- Reserve the highest priorities (0-2) for time-critical operations (DMA, timers)
- Use middle priorities (3-7) for normal peripherals (UART, SPI, I2C)
- Use the lowest priorities (8+) for background tasks

**Configuration:**

- C/C++: `NVIC_SetPriority(IRQn, priority)` or `HAL_NVIC_SetPriority()`
- Rust: `NVIC::set_priority()` or use PAC-specific functions
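
As a sketch of what the CMSIS call does with the priority value: the logical priority is shifted into the top bits of an 8-bit register field. The sketch assumes `__NVIC_PRIO_BITS` = 4 (16 levels), which is typical for STM32 and i.MX RT parts but must be checked per device:

```cpp
#include <cstdint>

// Assumption: 4 implemented priority bits, as on most STM32/i.MX RT
// parts (CMSIS exposes this as __NVIC_PRIO_BITS).
constexpr uint8_t kNvicPrioBits = 4;

// NVIC_SetPriority() stores the logical priority left-shifted into
// the top bits of the 8-bit priority byte; this mirrors that encoding.
constexpr uint8_t encode_priority(uint8_t logical) {
    return static_cast<uint8_t>(logical << (8u - kNvicPrioBits));
}

// Lower number = higher priority: encoded values compare the same way,
// so priority a preempts priority b iff its encoded value is smaller.
constexpr bool preempts(uint8_t a, uint8_t b) {
    return encode_priority(a) < encode_priority(b);
}
```

Reading back a raw priority byte from the NVIC therefore requires shifting right by the same amount to recover the logical level.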

---

## 🔒 Critical Sections & Interrupt Masking

**Purpose:** Protect shared data from concurrent access by ISRs and main-loop code.

**C/C++:**

```cpp
__disable_irq(); /* critical section */ __enable_irq(); // Blocks all interrupts

// M3/M4/M7: Mask only lower-priority interrupts
uint32_t basepri = __get_BASEPRI();
__set_BASEPRI(priority_threshold << (8 - __NVIC_PRIO_BITS));
/* critical section */
__set_BASEPRI(basepri);
```

**Rust:** `cortex_m::interrupt::free(|cs| { /* use cs token */ })`

**Best Practices:**

- **Keep critical sections SHORT** (microseconds, not milliseconds)
- Prefer BASEPRI over PRIMASK when possible (it lets higher-priority ISRs keep running)
- Use atomic operations instead of disabling interrupts when feasible
- Document the rationale for each critical section in comments

---

## 🐛 Hardfault Debugging Basics

**Common Causes:**

- Unaligned memory access (especially on M0/M0+)
- Null pointer dereference
- Stack overflow (SP corrupted, or the stack grows into heap/data)
- Illegal instruction or executing data as code
- Writing to read-only memory or invalid peripheral addresses

**Inspection Pattern (M3/M4/M7):**

- Check `HFSR` (HardFault Status Register) for the fault type
- Check `CFSR` (Configurable Fault Status Register) for the detailed cause
- Check `MMFAR` / `BFAR` for the faulting address (if valid)
- Inspect the stacked frame: `R0-R3, R12, LR, PC, xPSR`

**Platform Limitations:**

- **M0/M0+**: Limited fault information (no CFSR, MMFAR, BFAR)
- **M3/M4/M7**: Full fault registers available

**Debug Tip:** Use the hardfault handler to capture the stacked frame and print/log the registers before reset.
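
On exception entry, Cortex-M hardware pushes `R0-R3, R12, LR, PC, xPSR` onto the active stack in that order, so a handler can recover the faulting PC by overlaying a struct on the stacked SP. A host-testable sketch (`faulting_pc`/`faulting_lr` are illustrative helper names, not a vendor API):

```cpp
#include <cstdint>

// Layout of the standard Cortex-M exception entry frame, pushed by
// hardware in this order (lowest address first).
struct StackedFrame {
    uint32_t r0, r1, r2, r3;
    uint32_t r12, lr, pc, xpsr;
};

// In a real handler, sp would be the stacked MSP or PSP, selected by
// inspecting bit 2 of the EXC_RETURN value in LR.
static inline uint32_t faulting_pc(const uint32_t* sp) {
    return reinterpret_cast<const StackedFrame*>(sp)->pc;
}

static inline uint32_t faulting_lr(const uint32_t* sp) {
    return reinterpret_cast<const StackedFrame*>(sp)->lr;
}
```

The recovered PC is the instruction that faulted (or the one after it, depending on the fault type), which is usually enough to locate the bug in the map file.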

---

## 📊 Cortex-M Architecture Differences

| Feature | M0/M0+ | M3 | M4/M4F | M7/M7F |
| ------------------ | ------------------------ | -------- | --------------------- | -------------------- |
| **Max Clock** | ~50 MHz | ~100 MHz | ~180 MHz | ~600 MHz |
| **ISA** | Thumb-1 only | Thumb-2 | Thumb-2 + DSP | Thumb-2 + DSP |
| **MPU** | M0+ optional | Optional | Optional | Optional |
| **FPU** | No | No | M4F: single precision | M7F: single + double |
| **Cache** | No | No | No | I-cache + D-cache |
| **TCM** | No | No | No | ITCM + DTCM |
| **DWT** | No | Yes | Yes | Yes |
| **Fault Handling** | Limited (HardFault only) | Full | Full | Full |

---

## 🧮 FPU Context Saving

**Lazy Stacking (Default on M4F/M7F):** FPU context (S0-S15, FPSCR) is saved only if the ISR actually uses the FPU. This reduces latency for non-FPU ISRs but creates variable interrupt timing.

**Disable for deterministic latency:** Clear the LSPEN bit in `FPU->FPCCR` in hard real-time systems, or when ISRs always use the FPU anyway.

---

## 🛡️ Stack Overflow Protection

**MPU Guard Pages (Best):** Configure a no-access MPU region just below the stack. Triggers a MemManage fault on M3/M4/M7. Limited on M0/M0+.

**Canary Values (Portable):** Place a magic value (e.g., `0xDEADBEEF`) at the stack bottom and check it periodically.

**Watchdog:** Indirect detection via timeout; provides recovery. **Best:** MPU guard pages, otherwise canary + watchdog.
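
A minimal sketch of the canary approach (`STACK_CANARY`, `canary_init`, and `canary_intact` are illustrative names, not a platform API): write the magic word at the lowest stack address at boot, then poll it from a background task.

```cpp
#include <cstdint>

// Magic word planted at the lowest address of the stack region.
// If the stack grows down into it, the value changes.
constexpr uint32_t STACK_CANARY = 0xDEADBEEFu;

inline void canary_init(uint32_t* stack_bottom) {
    *stack_bottom = STACK_CANARY;
}

// Poll this periodically (idle loop, low-priority task, or timer ISR);
// a mismatch means the stack has already overflowed.
inline bool canary_intact(const uint32_t* stack_bottom) {
    return *stack_bottom == STACK_CANARY;
}
```

Note the limitation: a canary only detects an overflow after it has happened, and a large overrun can skip past it, which is why MPU guard pages are preferred where available.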

---

## 🔄 Workflow

1. **Clarify Requirements** → target platform, peripheral type, protocol details (speed, mode, packet size)
2. **Design Driver Skeleton** → constants, structs, compile-time config
3. **Implement Core** → init(), ISR handlers, buffer logic, user-facing API
4. **Validate** → example usage + notes on timing, latency, throughput
5. **Optimize** → suggest DMA, interrupt priorities, or RTOS tasks if needed
6. **Iterate** → refine the driver as hardware test feedback comes in

---

## 🛠 Example: SPI Driver for External Sensor

**Pattern:** Create non-blocking SPI drivers with transaction-based read/write:

- Configure SPI (clock speed, mode, bit order)
- Control the CS pin with proper timing
- Abstract register read/write operations
- Example: `sensorReadRegister(0x0F)` for WHO_AM_I
- For high throughput (>500 kHz), use DMA transfers

**Platform-specific APIs:**

- **Teensy 4.x**: `SPI.beginTransaction(SPISettings(speed, order, mode))` → `SPI.transfer(data)` → `SPI.endTransaction()`
- **STM32**: `HAL_SPI_Transmit()` / `HAL_SPI_Receive()` or LL drivers
- **nRF52**: `nrfx_spi_xfer()` or `nrf_drv_spi_transfer()`
- **SAMD**: Configure the SERCOM in SPI master mode with `SERCOM_SPI_MODE_MASTER`
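
A hedged sketch of the transaction-based register read. The byte-transfer function is injected so the same driver logic can sit on top of Teensy `SPI.transfer()`, STM32 HAL, or a host-side mock; the `0x80` read flag matches many ST sensors but is an assumption, so check your part's datasheet.

```cpp
#include <cstdint>
#include <functional>
#include <utility>

// Transaction-based SPI sensor driver sketch. The transfer function
// abstracts the platform: each call clocks one byte out and one in.
class SpiSensor {
public:
    using TransferFn = std::function<uint8_t(uint8_t)>;

    explicit SpiSensor(TransferFn transfer) : transfer_(std::move(transfer)) {}

    // Read one register: send address with the read bit set, then a
    // dummy byte to clock the response out of the sensor.
    uint8_t readRegister(uint8_t reg) {
        // On hardware: assert CS / SPI.beginTransaction() here.
        transfer_(static_cast<uint8_t>(reg | 0x80u)); // read flag (assumed 0x80)
        uint8_t value = transfer_(0x00u);             // dummy byte clocks data out
        // On hardware: deassert CS / SPI.endTransaction() here.
        return value;
    }

private:
    TransferFn transfer_;
};
```

Injecting the transfer function also makes the driver unit-testable on a host: a mock can echo back a WHO_AM_I value when it sees the right address byte.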

39
skills/async-python-patterns/SKILL.md
Normal file
@@ -0,0 +1,39 @@

---
name: async-python-patterns
description: Master Python asyncio, concurrent programming, and async/await patterns for high-performance applications. Use when building async APIs, concurrent systems, or I/O-bound applications requiring non-blocking operations.
---

# Async Python Patterns

Comprehensive guidance for implementing asynchronous Python applications using asyncio, concurrent programming patterns, and async/await for building high-performance, non-blocking systems.

## Use this skill when

- Building async web APIs (FastAPI, aiohttp, Sanic)
- Implementing concurrent I/O operations (database, file, network)
- Creating web scrapers with concurrent requests
- Developing real-time applications (WebSocket servers, chat systems)
- Processing multiple independent tasks simultaneously
- Building microservices with async communication
- Optimizing I/O-bound workloads
- Implementing async background tasks and queues

## Do not use this skill when

- The workload is CPU-bound with minimal I/O.
- A simple synchronous script is sufficient.
- The runtime environment cannot support asyncio/event loop usage.

## Instructions

- Clarify workload characteristics (I/O vs CPU), targets, and runtime constraints.
- Pick concurrency patterns (tasks, gather, queues, pools) with cancellation rules.
- Add timeouts, backpressure, and structured error handling.
- Include testing and debugging guidance for async code paths.
- If detailed examples are required, open `resources/implementation-playbook.md`.

Refer to `resources/implementation-playbook.md` for detailed patterns and examples.

## Resources

- `resources/implementation-playbook.md` for detailed patterns and examples.

@@ -0,0 +1,678 @@

# Async Python Patterns Implementation Playbook

This file contains detailed patterns, checklists, and code samples referenced by the skill.

## Core Concepts

### 1. Event Loop

The event loop is the heart of asyncio, managing and scheduling asynchronous tasks.

**Key characteristics:**

- Single-threaded cooperative multitasking
- Schedules coroutines for execution
- Handles I/O operations without blocking
- Manages callbacks and futures

### 2. Coroutines

Functions defined with `async def` that can be paused and resumed.

**Syntax:**

```python
async def my_coroutine():
    result = await some_async_operation()
    return result
```

### 3. Tasks

Scheduled coroutines that run concurrently on the event loop.

### 4. Futures

Low-level objects representing the eventual results of async operations.

### 5. Async Context Managers

Resources that support `async with` for proper cleanup.

### 6. Async Iterators

Objects that support `async for` for iterating over async data sources.

## Quick Start

```python
import asyncio

async def main():
    print("Hello")
    await asyncio.sleep(1)
    print("World")

# Python 3.7+
asyncio.run(main())
```

## Fundamental Patterns

### Pattern 1: Basic Async/Await

```python
import asyncio

async def fetch_data(url: str) -> dict:
    """Fetch data from URL asynchronously."""
    await asyncio.sleep(1)  # Simulate I/O
    return {"url": url, "data": "result"}

async def main():
    result = await fetch_data("https://api.example.com")
    print(result)

asyncio.run(main())
```

### Pattern 2: Concurrent Execution with gather()

```python
import asyncio
from typing import List

async def fetch_user(user_id: int) -> dict:
    """Fetch user data."""
    await asyncio.sleep(0.5)
    return {"id": user_id, "name": f"User {user_id}"}

async def fetch_all_users(user_ids: List[int]) -> List[dict]:
    """Fetch multiple users concurrently."""
    tasks = [fetch_user(uid) for uid in user_ids]
    results = await asyncio.gather(*tasks)
    return results

async def main():
    user_ids = [1, 2, 3, 4, 5]
    users = await fetch_all_users(user_ids)
    print(f"Fetched {len(users)} users")

asyncio.run(main())
```

### Pattern 3: Task Creation and Management

```python
import asyncio

async def background_task(name: str, delay: int):
    """Long-running background task."""
    print(f"{name} started")
    await asyncio.sleep(delay)
    print(f"{name} completed")
    return f"Result from {name}"

async def main():
    # Create tasks
    task1 = asyncio.create_task(background_task("Task 1", 2))
    task2 = asyncio.create_task(background_task("Task 2", 1))

    # Do other work
    print("Main: doing other work")
    await asyncio.sleep(0.5)

    # Wait for tasks
    result1 = await task1
    result2 = await task2

    print(f"Results: {result1}, {result2}")

asyncio.run(main())
```

### Pattern 4: Error Handling in Async Code

```python
import asyncio
from typing import List, Optional

async def risky_operation(item_id: int) -> dict:
    """Operation that might fail."""
    await asyncio.sleep(0.1)
    if item_id % 3 == 0:
        raise ValueError(f"Item {item_id} failed")
    return {"id": item_id, "status": "success"}

async def safe_operation(item_id: int) -> Optional[dict]:
    """Wrapper with error handling."""
    try:
        return await risky_operation(item_id)
    except ValueError as e:
        print(f"Error: {e}")
        return None

async def process_items(item_ids: List[int]):
    """Process multiple items with error handling."""
    tasks = [safe_operation(iid) for iid in item_ids]
    results = await asyncio.gather(*tasks, return_exceptions=True)

    # safe_operation maps failures to None, so count both None and any
    # stray exceptions as failed
    successful = [r for r in results if r is not None and not isinstance(r, Exception)]
    failed = [r for r in results if r is None or isinstance(r, Exception)]

    print(f"Success: {len(successful)}, Failed: {len(failed)}")
    return successful

asyncio.run(process_items([1, 2, 3, 4, 5, 6]))
```

### Pattern 5: Timeout Handling

```python
import asyncio

async def slow_operation(delay: int) -> str:
    """Operation that takes time."""
    await asyncio.sleep(delay)
    return f"Completed after {delay}s"

async def with_timeout():
    """Execute operation with timeout."""
    try:
        result = await asyncio.wait_for(slow_operation(5), timeout=2.0)
        print(result)
    except asyncio.TimeoutError:
        print("Operation timed out")

asyncio.run(with_timeout())
```

## Advanced Patterns

### Pattern 6: Async Context Managers

```python
import asyncio
from typing import Optional

class AsyncDatabaseConnection:
    """Async database connection context manager."""

    def __init__(self, dsn: str):
        self.dsn = dsn
        self.connection: Optional[object] = None

    async def __aenter__(self):
        print("Opening connection")
        await asyncio.sleep(0.1)  # Simulate connection
        self.connection = {"dsn": self.dsn, "connected": True}
        return self.connection

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        print("Closing connection")
        await asyncio.sleep(0.1)  # Simulate cleanup
        self.connection = None

async def query_database():
    """Use async context manager."""
    async with AsyncDatabaseConnection("postgresql://localhost") as conn:
        print(f"Using connection: {conn}")
        await asyncio.sleep(0.2)  # Simulate query
        return {"rows": 10}

asyncio.run(query_database())
```

### Pattern 7: Async Iterators and Generators

```python
import asyncio
from typing import AsyncIterator

async def async_range(start: int, end: int, delay: float = 0.1) -> AsyncIterator[int]:
    """Async generator that yields numbers with a delay."""
    for i in range(start, end):
        await asyncio.sleep(delay)
        yield i

async def fetch_pages(url: str, max_pages: int) -> AsyncIterator[dict]:
    """Fetch paginated data asynchronously."""
    for page in range(1, max_pages + 1):
        await asyncio.sleep(0.2)  # Simulate API call
        yield {
            "page": page,
            "url": f"{url}?page={page}",
            "data": [f"item_{page}_{i}" for i in range(5)]
        }

async def consume_async_iterator():
    """Consume async iterators."""
    async for number in async_range(1, 5):
        print(f"Number: {number}")

    print("\nFetching pages:")
    async for page_data in fetch_pages("https://api.example.com/items", 3):
        print(f"Page {page_data['page']}: {len(page_data['data'])} items")

asyncio.run(consume_async_iterator())
```

### Pattern 8: Producer-Consumer Pattern

```python
import asyncio
from asyncio import Queue

async def producer(queue: Queue, producer_id: int, num_items: int):
    """Produce items and put them in the queue."""
    for i in range(num_items):
        item = f"Item-{producer_id}-{i}"
        await queue.put(item)
        print(f"Producer {producer_id} produced: {item}")
        await asyncio.sleep(0.1)
    await queue.put(None)  # Signal completion

async def consumer(queue: Queue, consumer_id: int):
    """Consume items from the queue."""
    while True:
        item = await queue.get()
        if item is None:
            queue.task_done()
            break

        print(f"Consumer {consumer_id} processing: {item}")
        await asyncio.sleep(0.2)  # Simulate work
        queue.task_done()

async def producer_consumer_example():
    """Run the producer-consumer pattern."""
    queue = Queue(maxsize=10)

    # Create tasks
    producers = [
        asyncio.create_task(producer(queue, i, 5))
        for i in range(2)
    ]

    consumers = [
        asyncio.create_task(consumer(queue, i))
        for i in range(3)
    ]

    # Wait for producers
    await asyncio.gather(*producers)

    # Wait for the queue to drain
    await queue.join()

    # Cancel any remaining consumers
    for c in consumers:
        c.cancel()

asyncio.run(producer_consumer_example())
```

### Pattern 9: Semaphore for Rate Limiting

```python
import asyncio
from typing import List

async def api_call(url: str, semaphore: asyncio.Semaphore) -> dict:
    """Make an API call with rate limiting."""
    async with semaphore:
        print(f"Calling {url}")
        await asyncio.sleep(0.5)  # Simulate API call
        return {"url": url, "status": 200}

async def rate_limited_requests(urls: List[str], max_concurrent: int = 5):
    """Make multiple requests with rate limiting."""
    semaphore = asyncio.Semaphore(max_concurrent)
    tasks = [api_call(url, semaphore) for url in urls]
    results = await asyncio.gather(*tasks)
    return results

async def main():
    urls = [f"https://api.example.com/item/{i}" for i in range(20)]
    results = await rate_limited_requests(urls, max_concurrent=3)
    print(f"Completed {len(results)} requests")

asyncio.run(main())
```

### Pattern 10: Async Locks and Synchronization

```python
import asyncio

class AsyncCounter:
    """Task-safe async counter."""

    def __init__(self):
        self.value = 0
        self.lock = asyncio.Lock()

    async def increment(self):
        """Safely increment the counter."""
        async with self.lock:
            current = self.value
            await asyncio.sleep(0.01)  # Simulate work
            self.value = current + 1

    async def get_value(self) -> int:
        """Get the current value."""
        async with self.lock:
            return self.value

async def worker(counter: AsyncCounter, worker_id: int):
    """Worker that increments the counter."""
    for _ in range(10):
        await counter.increment()
        print(f"Worker {worker_id} incremented")

async def test_counter():
    """Test the concurrent counter."""
    counter = AsyncCounter()

    workers = [asyncio.create_task(worker(counter, i)) for i in range(5)]
    await asyncio.gather(*workers)

    final_value = await counter.get_value()
    print(f"Final counter value: {final_value}")

asyncio.run(test_counter())
```

## Real-World Applications

### Web Scraping with aiohttp

```python
import asyncio
import aiohttp
from typing import List, Dict

async def fetch_url(session: aiohttp.ClientSession, url: str) -> Dict:
    """Fetch a single URL."""
    try:
        async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as response:
            text = await response.text()
            return {
                "url": url,
                "status": response.status,
                "length": len(text)
            }
    except Exception as e:
        return {"url": url, "error": str(e)}

async def scrape_urls(urls: List[str]) -> List[Dict]:
    """Scrape multiple URLs concurrently."""
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
        return results

async def main():
    urls = [
        "https://httpbin.org/delay/1",
        "https://httpbin.org/delay/2",
        "https://httpbin.org/status/404",
    ]

    results = await scrape_urls(urls)
    for result in results:
        print(result)

asyncio.run(main())
```

### Async Database Operations

```python
import asyncio
from typing import List, Optional

# Simulated async database client
class AsyncDB:
    """Simulated async database."""

    async def execute(self, query: str) -> List[dict]:
        """Execute a query."""
        await asyncio.sleep(0.1)
        return [{"id": 1, "name": "Example"}]

    async def fetch_one(self, query: str) -> Optional[dict]:
        """Fetch a single row."""
        await asyncio.sleep(0.1)
        return {"id": 1, "name": "Example"}

async def get_user_data(db: AsyncDB, user_id: int) -> dict:
    """Fetch a user and related data concurrently."""
    user_task = db.fetch_one(f"SELECT * FROM users WHERE id = {user_id}")
    orders_task = db.execute(f"SELECT * FROM orders WHERE user_id = {user_id}")
    profile_task = db.fetch_one(f"SELECT * FROM profiles WHERE user_id = {user_id}")

    user, orders, profile = await asyncio.gather(user_task, orders_task, profile_task)

    return {
        "user": user,
        "orders": orders,
        "profile": profile
    }

async def main():
    db = AsyncDB()
    user_data = await get_user_data(db, 1)
    print(user_data)

asyncio.run(main())
```

### WebSocket Server

```python
import asyncio
from typing import Set

# Simulated WebSocket connection
class WebSocket:
    """Simulated WebSocket."""

    def __init__(self, client_id: str):
        self.client_id = client_id

    async def send(self, message: str):
        """Send a message."""
        print(f"Sending to {self.client_id}: {message}")
        await asyncio.sleep(0.01)

    async def recv(self) -> str:
        """Receive a message."""
        await asyncio.sleep(1)
        return f"Message from {self.client_id}"

class WebSocketServer:
    """Simple WebSocket server."""

    def __init__(self):
        self.clients: Set[WebSocket] = set()

    async def register(self, websocket: WebSocket):
        """Register a new client."""
        self.clients.add(websocket)
        print(f"Client {websocket.client_id} connected")

    async def unregister(self, websocket: WebSocket):
        """Unregister a client."""
        self.clients.remove(websocket)
        print(f"Client {websocket.client_id} disconnected")

    async def broadcast(self, message: str):
        """Broadcast a message to all clients."""
        if self.clients:
            tasks = [client.send(message) for client in self.clients]
            await asyncio.gather(*tasks)

    async def handle_client(self, websocket: WebSocket):
        """Handle an individual client connection."""
        await self.register(websocket)
        try:
            async for message in self.message_iterator(websocket):
                await self.broadcast(f"{websocket.client_id}: {message}")
        finally:
            await self.unregister(websocket)

    async def message_iterator(self, websocket: WebSocket):
        """Iterate over messages from a client."""
        for _ in range(3):  # Simulate 3 messages
            yield await websocket.recv()
```

## Performance Best Practices

### 1. Use Connection Pools

```python
import asyncio
import aiohttp

async def with_connection_pool():
    """Use a connection pool for efficiency."""
    connector = aiohttp.TCPConnector(limit=100, limit_per_host=10)

    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [session.get(f"https://api.example.com/item/{i}") for i in range(50)]
        responses = await asyncio.gather(*tasks)
        return responses
```

### 2. Batch Operations

```python
import asyncio
from typing import List

async def batch_process(items: List[str], batch_size: int = 10):
    """Process items in batches."""
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        tasks = [process_item(item) for item in batch]
        await asyncio.gather(*tasks)
        print(f"Processed batch {i // batch_size + 1}")

async def process_item(item: str):
    """Process a single item."""
    await asyncio.sleep(0.1)
    return f"Processed: {item}"
```

### 3. Avoid Blocking Operations

```python
import asyncio
import concurrent.futures
from typing import Any

def blocking_operation(data: Any) -> Any:
    """CPU-intensive blocking operation."""
    import time
    time.sleep(1)
    return data * 2

async def run_in_executor(data: Any) -> Any:
    """Run a blocking operation in a thread pool."""
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, blocking_operation, data)
        return result

async def main():
    results = await asyncio.gather(*[run_in_executor(i) for i in range(5)])
    print(results)

asyncio.run(main())
```
|
||||
|
||||
## Common Pitfalls
|
||||
|
||||
### 1. Forgetting await
|
||||
|
||||
```python
|
||||
# Wrong - returns coroutine object, doesn't execute
|
||||
result = async_function()
|
||||
|
||||
# Correct
|
||||
result = await async_function()
|
||||
```
|
||||
|
||||
### 2. Blocking the Event Loop
|
||||
|
||||
```python
|
||||
# Wrong - blocks event loop
|
||||
import time
|
||||
async def bad():
|
||||
time.sleep(1) # Blocks!
|
||||
|
||||
# Correct
|
||||
async def good():
|
||||
await asyncio.sleep(1) # Non-blocking
|
||||
```
|
||||
|
||||
### 3. Not Handling Cancellation
|
||||
|
||||
```python
|
||||
async def cancelable_task():
|
||||
"""Task that handles cancellation."""
|
||||
try:
|
||||
while True:
|
||||
await asyncio.sleep(1)
|
||||
print("Working...")
|
||||
except asyncio.CancelledError:
|
||||
print("Task cancelled, cleaning up...")
|
||||
# Perform cleanup
|
||||
raise # Re-raise to propagate cancellation
|
||||
```
|
||||
|
||||
### 4. Mixing Sync and Async Code
|
||||
|
||||
```python
|
||||
# Wrong - can't call async from sync directly
|
||||
def sync_function():
|
||||
result = await async_function() # SyntaxError!
|
||||
|
||||
# Correct
|
||||
def sync_function():
|
||||
result = asyncio.run(async_function())
|
||||
```
|
||||
|
||||
## Testing Async Code
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
import pytest
|
||||
|
||||
# Using pytest-asyncio
|
||||
@pytest.mark.asyncio
|
||||
async def test_async_function():
|
||||
"""Test async function."""
|
||||
result = await fetch_data("https://api.example.com")
|
||||
assert result is not None
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_with_timeout():
|
||||
"""Test with timeout."""
|
||||
with pytest.raises(asyncio.TimeoutError):
|
||||
await asyncio.wait_for(slow_operation(5), timeout=1.0)
|
||||
```
|
||||
|
||||
## Resources
|
||||
|
||||
- **Python asyncio documentation**: https://docs.python.org/3/library/asyncio.html
|
||||
- **aiohttp**: Async HTTP client/server
|
||||
- **FastAPI**: Modern async web framework
|
||||
- **asyncpg**: Async PostgreSQL driver
|
||||
- **motor**: Async MongoDB driver
|
||||
|
||||
## Best Practices Summary
|
||||
|
||||
1. **Use asyncio.run()** for entry point (Python 3.7+)
|
||||
2. **Always await coroutines** to execute them
|
||||
3. **Use gather() for concurrent execution** of multiple tasks
|
||||
4. **Implement proper error handling** with try/except
|
||||
5. **Use timeouts** to prevent hanging operations
|
||||
6. **Pool connections** for better performance
|
||||
7. **Avoid blocking operations** in async code
|
||||
8. **Use semaphores** for rate limiting
|
||||
9. **Handle task cancellation** properly
|
||||
10. **Test async code** with pytest-asyncio
|
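The practices above can be combined in one short sketch: `gather()` for concurrency, a semaphore for rate limiting, and `wait_for()` as a batch timeout. The `fetch` coroutine and its delay are illustrative stand-ins, not from the text above.

```python
# Hypothetical example combining gather(), a semaphore, and a timeout.
import asyncio

async def fetch(i: int, sem: asyncio.Semaphore) -> int:
    async with sem:                # at most 2 tasks run concurrently
        await asyncio.sleep(0.01)  # stand-in for real I/O
        return i * 10

async def main() -> list:
    sem = asyncio.Semaphore(2)
    # wait_for guards the whole batch against hanging
    return await asyncio.wait_for(
        asyncio.gather(*(fetch(i, sem) for i in range(5))),
        timeout=5.0,
    )

results = asyncio.run(main())
print(results)  # [0, 10, 20, 30, 40]
```

Note that `gather()` preserves argument order regardless of completion order, which is why the result list is deterministic here.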
||||
38
skills/attack-tree-construction/SKILL.md
Normal file
@@ -0,0 +1,38 @@
|
||||
---
|
||||
name: attack-tree-construction
|
||||
description: Build comprehensive attack trees to visualize threat paths. Use when mapping attack scenarios, identifying defense gaps, or communicating security risks to stakeholders.
|
||||
---
|
||||
|
||||
# Attack Tree Construction
|
||||
|
||||
Systematic attack path visualization and analysis.
|
||||
|
||||
## Use this skill when
|
||||
|
||||
- Visualizing complex attack scenarios
|
||||
- Identifying defense gaps and priorities
|
||||
- Communicating risks to stakeholders
|
||||
- Planning defensive investments or test scopes
|
||||
|
||||
## Do not use this skill when
|
||||
|
||||
- You lack authorization or a defined scope to model the system
|
||||
- The task is a general risk review without attack-path modeling
|
||||
- The request is unrelated to security assessment or design
|
||||
|
||||
## Instructions
|
||||
|
||||
- Confirm scope, assets, and the attacker goal for the root node.
|
||||
- Decompose into sub-goals with AND/OR structure.
|
||||
- Annotate leaves with cost, skill, time, and detectability.
|
||||
- Map mitigations per branch and prioritize high-impact paths.
|
||||
- If detailed templates are required, open `resources/implementation-playbook.md`.
|
||||
|
||||
## Safety
|
||||
|
||||
- Share attack trees only with authorized stakeholders.
|
||||
- Avoid including sensitive exploit details unless required.
|
||||
|
||||
## Resources
|
||||
|
||||
- `resources/implementation-playbook.md` for detailed patterns, templates, and examples.
|
||||
@@ -0,0 +1,671 @@
|
||||
# Attack Tree Construction Implementation Playbook
|
||||
|
||||
This file contains detailed patterns, checklists, and code samples referenced by the skill.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
### 1. Attack Tree Structure
|
||||
|
||||
```
|
||||
[Root Goal]
|
||||
|
|
||||
┌────────────┴────────────┐
|
||||
│ │
|
||||
[Sub-goal 1] [Sub-goal 2]
|
||||
(OR node) (AND node)
|
||||
│ │
|
||||
┌─────┴─────┐ ┌─────┴─────┐
|
||||
│ │ │ │
|
||||
[Attack] [Attack] [Attack] [Attack]
|
||||
(leaf) (leaf) (leaf) (leaf)
|
||||
```
|
||||
|
||||
### 2. Node Types
|
||||
|
||||
| Type | Symbol | Description |
|
||||
|------|--------|-------------|
|
||||
| **OR** | Oval | Any child achieves goal |
|
||||
| **AND** | Rectangle | All children required |
|
||||
| **Leaf** | Box | Atomic attack step |
|
||||
|
||||
### 3. Attack Attributes
|
||||
|
||||
| Attribute | Description | Values |
|
||||
|-----------|-------------|--------|
|
||||
| **Cost** | Resources needed | $, $$, $$$ |
|
||||
| **Time** | Duration to execute | Hours, Days, Weeks |
|
||||
| **Skill** | Expertise required | Low, Medium, High |
|
||||
| **Detection** | Likelihood of detection | Low, Medium, High |
|
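These AND/OR semantics drive how leaf attributes roll up: an OR node costs the attacker only its cheapest child, while an AND node requires paying for every child. A minimal sketch with illustrative node names and costs (not taken from the playbook's data model):

```python
# Minimal sketch of AND/OR cost roll-up; nodes and costs are illustrative.
def rollup_cost(node: dict) -> int:
    children = node.get("children", [])
    if not children:          # leaf: its own cost
        return node["cost"]
    costs = [rollup_cost(c) for c in children]
    # OR: attacker picks the cheapest branch; AND: must pay for all branches
    return min(costs) if node["type"] == "or" else sum(costs)

tree = {
    "type": "or",
    "children": [
        {"cost": 5},                                               # e.g. direct exploit
        {"type": "and", "children": [{"cost": 1}, {"cost": 2}]},   # recon + phish
    ],
}
print(rollup_cost(tree))  # 3: the AND branch (1 + 2) undercuts the single leaf (5)
```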
||||
|
||||
## Templates
|
||||
|
||||
### Template 1: Attack Tree Data Model
|
||||
|
||||
```python
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from typing import List, Dict, Optional, Union
|
||||
import json
|
||||
|
||||
class NodeType(Enum):
|
||||
OR = "or"
|
||||
AND = "and"
|
||||
LEAF = "leaf"
|
||||
|
||||
|
||||
class Difficulty(Enum):
|
||||
TRIVIAL = 1
|
||||
LOW = 2
|
||||
MEDIUM = 3
|
||||
HIGH = 4
|
||||
EXPERT = 5
|
||||
|
||||
|
||||
class Cost(Enum):
|
||||
FREE = 0
|
||||
LOW = 1
|
||||
MEDIUM = 2
|
||||
HIGH = 3
|
||||
VERY_HIGH = 4
|
||||
|
||||
|
||||
class DetectionRisk(Enum):
|
||||
NONE = 0
|
||||
LOW = 1
|
||||
MEDIUM = 2
|
||||
HIGH = 3
|
||||
CERTAIN = 4
|
||||
|
||||
|
||||
@dataclass
|
||||
class AttackAttributes:
|
||||
difficulty: Difficulty = Difficulty.MEDIUM
|
||||
cost: Cost = Cost.MEDIUM
|
||||
detection_risk: DetectionRisk = DetectionRisk.MEDIUM
|
||||
time_hours: float = 8.0
|
||||
requires_insider: bool = False
|
||||
requires_physical: bool = False
|
||||
|
||||
|
||||
@dataclass
|
||||
class AttackNode:
|
||||
id: str
|
||||
name: str
|
||||
description: str
|
||||
node_type: NodeType
|
||||
attributes: AttackAttributes = field(default_factory=AttackAttributes)
|
||||
children: List['AttackNode'] = field(default_factory=list)
|
||||
mitigations: List[str] = field(default_factory=list)
|
||||
cve_refs: List[str] = field(default_factory=list)
|
||||
|
||||
def add_child(self, child: 'AttackNode') -> None:
|
||||
self.children.append(child)
|
||||
|
||||
def calculate_path_difficulty(self) -> float:
|
||||
"""Calculate aggregate difficulty for this path."""
|
||||
if self.node_type == NodeType.LEAF:
|
||||
return self.attributes.difficulty.value
|
||||
|
||||
if not self.children:
|
||||
return 0
|
||||
|
||||
child_difficulties = [c.calculate_path_difficulty() for c in self.children]
|
||||
|
||||
if self.node_type == NodeType.OR:
|
||||
return min(child_difficulties)
|
||||
else: # AND
|
||||
return max(child_difficulties)
|
||||
|
||||
def calculate_path_cost(self) -> float:
|
||||
"""Calculate aggregate cost for this path."""
|
||||
if self.node_type == NodeType.LEAF:
|
||||
return self.attributes.cost.value
|
||||
|
||||
if not self.children:
|
||||
return 0
|
||||
|
||||
child_costs = [c.calculate_path_cost() for c in self.children]
|
||||
|
||||
if self.node_type == NodeType.OR:
|
||||
return min(child_costs)
|
||||
else: # AND
|
||||
return sum(child_costs)
|
||||
|
||||
def to_dict(self) -> Dict:
|
||||
"""Convert to dictionary for serialization."""
|
||||
return {
|
||||
"id": self.id,
|
||||
"name": self.name,
|
||||
"description": self.description,
|
||||
"type": self.node_type.value,
|
||||
"attributes": {
|
||||
"difficulty": self.attributes.difficulty.name,
|
||||
"cost": self.attributes.cost.name,
|
||||
"detection_risk": self.attributes.detection_risk.name,
|
||||
"time_hours": self.attributes.time_hours,
|
||||
},
|
||||
"mitigations": self.mitigations,
|
||||
"children": [c.to_dict() for c in self.children]
|
||||
}
|
||||
|
||||
|
||||
@dataclass
|
||||
class AttackTree:
|
||||
name: str
|
||||
description: str
|
||||
root: AttackNode
|
||||
version: str = "1.0"
|
||||
|
||||
def find_easiest_path(self) -> List[AttackNode]:
|
||||
"""Find the path with lowest difficulty."""
|
||||
return self._find_path(self.root, minimize="difficulty")
|
||||
|
||||
def find_cheapest_path(self) -> List[AttackNode]:
|
||||
"""Find the path with lowest cost."""
|
||||
return self._find_path(self.root, minimize="cost")
|
||||
|
||||
def find_stealthiest_path(self) -> List[AttackNode]:
|
||||
"""Find the path with lowest detection risk."""
|
||||
return self._find_path(self.root, minimize="detection")
|
||||
|
||||
def _find_path(
|
||||
self,
|
||||
node: AttackNode,
|
||||
minimize: str
|
||||
) -> List[AttackNode]:
|
||||
"""Recursive path finding."""
|
||||
if node.node_type == NodeType.LEAF:
|
||||
return [node]
|
||||
|
||||
if not node.children:
|
||||
return [node]
|
||||
|
||||
if node.node_type == NodeType.OR:
|
||||
# Pick the best child path
|
||||
best_path = None
|
||||
best_score = float('inf')
|
||||
|
||||
for child in node.children:
|
||||
child_path = self._find_path(child, minimize)
|
||||
score = self._path_score(child_path, minimize)
|
||||
if score < best_score:
|
||||
best_score = score
|
||||
best_path = child_path
|
||||
|
||||
return [node] + (best_path or [])
|
||||
else: # AND
|
||||
# Must traverse all children
|
||||
path = [node]
|
||||
for child in node.children:
|
||||
path.extend(self._find_path(child, minimize))
|
||||
return path
|
||||
|
||||
def _path_score(self, path: List[AttackNode], metric: str) -> float:
|
||||
"""Calculate score for a path."""
|
||||
if metric == "difficulty":
|
||||
return sum(n.attributes.difficulty.value for n in path if n.node_type == NodeType.LEAF)
|
||||
elif metric == "cost":
|
||||
return sum(n.attributes.cost.value for n in path if n.node_type == NodeType.LEAF)
|
||||
elif metric == "detection":
|
||||
return sum(n.attributes.detection_risk.value for n in path if n.node_type == NodeType.LEAF)
|
||||
return 0
|
||||
|
||||
def get_all_leaf_attacks(self) -> List[AttackNode]:
|
||||
"""Get all leaf attack nodes."""
|
||||
leaves = []
|
||||
self._collect_leaves(self.root, leaves)
|
||||
return leaves
|
||||
|
||||
def _collect_leaves(self, node: AttackNode, leaves: List[AttackNode]) -> None:
|
||||
if node.node_type == NodeType.LEAF:
|
||||
leaves.append(node)
|
||||
for child in node.children:
|
||||
self._collect_leaves(child, leaves)
|
||||
|
||||
def get_unmitigated_attacks(self) -> List[AttackNode]:
|
||||
"""Find attacks without mitigations."""
|
||||
return [n for n in self.get_all_leaf_attacks() if not n.mitigations]
|
||||
|
||||
def export_json(self) -> str:
|
||||
"""Export tree to JSON."""
|
||||
return json.dumps({
|
||||
"name": self.name,
|
||||
"description": self.description,
|
||||
"version": self.version,
|
||||
"root": self.root.to_dict()
|
||||
}, indent=2)
|
||||
```
|
||||
|
||||
### Template 2: Attack Tree Builder
|
||||
|
||||
```python
|
||||
class AttackTreeBuilder:
|
||||
"""Fluent builder for attack trees."""
|
||||
|
||||
def __init__(self, name: str, description: str):
|
||||
self.name = name
|
||||
self.description = description
|
||||
self._node_stack: List[AttackNode] = []
|
||||
self._root: Optional[AttackNode] = None
|
||||
|
||||
def goal(self, id: str, name: str, description: str = "") -> 'AttackTreeBuilder':
|
||||
"""Set the root goal (OR node by default)."""
|
||||
self._root = AttackNode(
|
||||
id=id,
|
||||
name=name,
|
||||
description=description,
|
||||
node_type=NodeType.OR
|
||||
)
|
||||
self._node_stack = [self._root]
|
||||
return self
|
||||
|
||||
def or_node(self, id: str, name: str, description: str = "") -> 'AttackTreeBuilder':
|
||||
"""Add an OR sub-goal."""
|
||||
node = AttackNode(
|
||||
id=id,
|
||||
name=name,
|
||||
description=description,
|
||||
node_type=NodeType.OR
|
||||
)
|
||||
self._current().add_child(node)
|
||||
self._node_stack.append(node)
|
||||
return self
|
||||
|
||||
def and_node(self, id: str, name: str, description: str = "") -> 'AttackTreeBuilder':
|
||||
"""Add an AND sub-goal (all children required)."""
|
||||
node = AttackNode(
|
||||
id=id,
|
||||
name=name,
|
||||
description=description,
|
||||
node_type=NodeType.AND
|
||||
)
|
||||
self._current().add_child(node)
|
||||
self._node_stack.append(node)
|
||||
return self
|
||||
|
||||
def attack(
|
||||
self,
|
||||
id: str,
|
||||
name: str,
|
||||
description: str = "",
|
||||
difficulty: Difficulty = Difficulty.MEDIUM,
|
||||
cost: Cost = Cost.MEDIUM,
|
||||
detection: DetectionRisk = DetectionRisk.MEDIUM,
|
||||
time_hours: float = 8.0,
|
||||
mitigations: List[str] = None
|
||||
) -> 'AttackTreeBuilder':
|
||||
"""Add a leaf attack node."""
|
||||
node = AttackNode(
|
||||
id=id,
|
||||
name=name,
|
||||
description=description,
|
||||
node_type=NodeType.LEAF,
|
||||
attributes=AttackAttributes(
|
||||
difficulty=difficulty,
|
||||
cost=cost,
|
||||
detection_risk=detection,
|
||||
time_hours=time_hours
|
||||
),
|
||||
mitigations=mitigations or []
|
||||
)
|
||||
self._current().add_child(node)
|
||||
return self
|
||||
|
||||
def end(self) -> 'AttackTreeBuilder':
|
||||
"""Close current node, return to parent."""
|
||||
if len(self._node_stack) > 1:
|
||||
self._node_stack.pop()
|
||||
return self
|
||||
|
||||
def build(self) -> AttackTree:
|
||||
"""Build the attack tree."""
|
||||
if not self._root:
|
||||
raise ValueError("No root goal defined")
|
||||
return AttackTree(
|
||||
name=self.name,
|
||||
description=self.description,
|
||||
root=self._root
|
||||
)
|
||||
|
||||
def _current(self) -> AttackNode:
|
||||
if not self._node_stack:
|
||||
raise ValueError("No current node")
|
||||
return self._node_stack[-1]
|
||||
|
||||
|
||||
# Example usage
|
||||
def build_account_takeover_tree() -> AttackTree:
|
||||
"""Build attack tree for account takeover scenario."""
|
||||
return (
|
||||
AttackTreeBuilder("Account Takeover", "Gain unauthorized access to user account")
|
||||
.goal("G1", "Take Over User Account")
|
||||
|
||||
.or_node("S1", "Steal Credentials")
|
||||
.attack(
|
||||
"A1", "Phishing Attack",
|
||||
difficulty=Difficulty.LOW,
|
||||
cost=Cost.LOW,
|
||||
detection=DetectionRisk.MEDIUM,
|
||||
mitigations=["Security awareness training", "Email filtering"]
|
||||
)
|
||||
.attack(
|
||||
"A2", "Credential Stuffing",
|
||||
difficulty=Difficulty.TRIVIAL,
|
||||
cost=Cost.LOW,
|
||||
detection=DetectionRisk.HIGH,
|
||||
mitigations=["Rate limiting", "MFA", "Password breach monitoring"]
|
||||
)
|
||||
.attack(
|
||||
"A3", "Keylogger Malware",
|
||||
difficulty=Difficulty.MEDIUM,
|
||||
cost=Cost.MEDIUM,
|
||||
detection=DetectionRisk.MEDIUM,
|
||||
mitigations=["Endpoint protection", "MFA"]
|
||||
)
|
||||
.end()
|
||||
|
||||
.or_node("S2", "Bypass Authentication")
|
||||
.attack(
|
||||
"A4", "Session Hijacking",
|
||||
difficulty=Difficulty.MEDIUM,
|
||||
cost=Cost.LOW,
|
||||
detection=DetectionRisk.LOW,
|
||||
mitigations=["Secure session management", "HTTPS only"]
|
||||
)
|
||||
.attack(
|
||||
"A5", "Authentication Bypass Vulnerability",
|
||||
difficulty=Difficulty.HIGH,
|
||||
cost=Cost.LOW,
|
||||
detection=DetectionRisk.LOW,
|
||||
mitigations=["Security testing", "Code review", "WAF"]
|
||||
)
|
||||
.end()
|
||||
|
||||
.or_node("S3", "Social Engineering")
|
||||
.and_node("S3.1", "Account Recovery Attack")
|
||||
.attack(
|
||||
"A6", "Gather Personal Information",
|
||||
difficulty=Difficulty.LOW,
|
||||
cost=Cost.FREE,
|
||||
detection=DetectionRisk.NONE
|
||||
)
|
||||
.attack(
|
||||
"A7", "Call Support Desk",
|
||||
difficulty=Difficulty.MEDIUM,
|
||||
cost=Cost.FREE,
|
||||
detection=DetectionRisk.MEDIUM,
|
||||
mitigations=["Support verification procedures", "Security questions"]
|
||||
)
|
||||
.end()
|
||||
.end()
|
||||
|
||||
.build()
|
||||
)
|
||||
```
|
||||
|
||||
### Template 3: Mermaid Diagram Generator
|
||||
|
||||
```python
|
||||
class MermaidExporter:
|
||||
"""Export attack trees to Mermaid diagram format."""
|
||||
|
||||
def __init__(self, tree: AttackTree):
|
||||
self.tree = tree
|
||||
self._lines: List[str] = []
|
||||
self._node_count = 0
|
||||
|
||||
def export(self) -> str:
|
||||
"""Export tree to Mermaid flowchart."""
|
||||
self._lines = ["flowchart TD"]
|
||||
self._export_node(self.tree.root, None)
|
||||
return "\n".join(self._lines)
|
||||
|
||||
def _export_node(self, node: AttackNode, parent_id: Optional[str]) -> str:
|
||||
"""Recursively export nodes."""
|
||||
node_id = f"N{self._node_count}"
|
||||
self._node_count += 1
|
||||
|
||||
# Node shape based on type
|
||||
if node.node_type == NodeType.OR:
|
||||
shape = f"{node_id}(({node.name}))"
|
||||
elif node.node_type == NodeType.AND:
|
||||
shape = f"{node_id}[{node.name}]"
|
||||
else: # LEAF
|
||||
# Color based on difficulty
|
||||
style = self._get_leaf_style(node)
|
||||
shape = f"{node_id}[/{node.name}/]"
|
||||
self._lines.append(f" style {node_id} {style}")
|
||||
|
||||
self._lines.append(f" {shape}")
|
||||
|
||||
if parent_id:
|
||||
connector = "-->" if node.node_type != NodeType.AND else "==>"
|
||||
self._lines.append(f" {parent_id} {connector} {node_id}")
|
||||
|
||||
for child in node.children:
|
||||
self._export_node(child, node_id)
|
||||
|
||||
return node_id
|
||||
|
||||
def _get_leaf_style(self, node: AttackNode) -> str:
|
||||
"""Get style based on attack attributes."""
|
||||
colors = {
|
||||
Difficulty.TRIVIAL: "fill:#ff6b6b", # Red - easy attack
|
||||
Difficulty.LOW: "fill:#ffa06b",
|
||||
Difficulty.MEDIUM: "fill:#ffd93d",
|
||||
Difficulty.HIGH: "fill:#6bcb77",
|
||||
Difficulty.EXPERT: "fill:#4d96ff", # Blue - hard attack
|
||||
}
|
||||
color = colors.get(node.attributes.difficulty, "fill:#cccccc")  # fallback grey; "#gray" is not a valid hex color
|
||||
return color
|
||||
|
||||
|
||||
class PlantUMLExporter:
|
||||
"""Export attack trees to PlantUML format."""
|
||||
|
||||
def __init__(self, tree: AttackTree):
|
||||
self.tree = tree
|
||||
|
||||
def export(self) -> str:
|
||||
"""Export tree to PlantUML."""
|
||||
lines = [
|
||||
"@startmindmap",
|
||||
f"* {self.tree.name}",
|
||||
]
|
||||
self._export_node(self.tree.root, lines, 1)
|
||||
lines.append("@endmindmap")
|
||||
return "\n".join(lines)
|
||||
|
||||
def _export_node(self, node: AttackNode, lines: List[str], depth: int) -> None:
|
||||
"""Recursively export nodes."""
|
||||
prefix = "*" * (depth + 1)
|
||||
|
||||
if node.node_type == NodeType.OR:
|
||||
marker = "[OR]"
|
||||
elif node.node_type == NodeType.AND:
|
||||
marker = "[AND]"
|
||||
else:
|
||||
diff = node.attributes.difficulty.name
|
||||
marker = f"<<{diff}>>"
|
||||
|
||||
lines.append(f"{prefix} {marker} {node.name}")
|
||||
|
||||
for child in node.children:
|
||||
self._export_node(child, lines, depth + 1)
|
||||
```
|
||||
|
||||
### Template 4: Attack Path Analysis
|
||||
|
||||
```python
|
||||
from typing import Set, Tuple
|
||||
|
||||
class AttackPathAnalyzer:
|
||||
"""Analyze attack paths and coverage."""
|
||||
|
||||
def __init__(self, tree: AttackTree):
|
||||
self.tree = tree
|
||||
|
||||
def get_all_paths(self) -> List[List[AttackNode]]:
|
||||
"""Get all possible attack paths."""
|
||||
paths = []
|
||||
self._collect_paths(self.tree.root, [], paths)
|
||||
return paths
|
||||
|
||||
def _collect_paths(
|
||||
self,
|
||||
node: AttackNode,
|
||||
current_path: List[AttackNode],
|
||||
all_paths: List[List[AttackNode]]
|
||||
) -> None:
|
||||
"""Recursively collect all paths."""
|
||||
current_path = current_path + [node]
|
||||
|
||||
if node.node_type == NodeType.LEAF:
|
||||
all_paths.append(current_path)
|
||||
return
|
||||
|
||||
if not node.children:
|
||||
all_paths.append(current_path)
|
||||
return
|
||||
|
||||
if node.node_type == NodeType.OR:
|
||||
# Each child is a separate path
|
||||
for child in node.children:
|
||||
self._collect_paths(child, current_path, all_paths)
|
||||
else: # AND
|
||||
# Must combine all children
|
||||
child_paths = []
|
||||
for child in node.children:
|
||||
child_sub_paths = []
|
||||
self._collect_paths(child, [], child_sub_paths)
|
||||
child_paths.append(child_sub_paths)
|
||||
|
||||
# Combine paths from all AND children
|
||||
combined = self._combine_and_paths(child_paths)
|
||||
for combo in combined:
|
||||
all_paths.append(current_path + combo)
|
||||
|
||||
def _combine_and_paths(
|
||||
self,
|
||||
child_paths: List[List[List[AttackNode]]]
|
||||
) -> List[List[AttackNode]]:
|
||||
"""Combine paths from AND node children."""
|
||||
if not child_paths:
|
||||
return [[]]
|
||||
|
||||
if len(child_paths) == 1:
|
||||
return [path for paths in child_paths for path in paths]
|
||||
|
||||
# Cartesian product of all child path combinations
|
||||
result = [[]]
|
||||
for paths in child_paths:
|
||||
new_result = []
|
||||
for existing in result:
|
||||
for path in paths:
|
||||
new_result.append(existing + path)
|
||||
result = new_result
|
||||
return result
|
||||
|
||||
def calculate_path_metrics(self, path: List[AttackNode]) -> Dict:
|
||||
"""Calculate metrics for a specific path."""
|
||||
leaves = [n for n in path if n.node_type == NodeType.LEAF]
|
||||
|
||||
total_difficulty = sum(n.attributes.difficulty.value for n in leaves)
|
||||
total_cost = sum(n.attributes.cost.value for n in leaves)
|
||||
total_time = sum(n.attributes.time_hours for n in leaves)
|
||||
max_detection = max((n.attributes.detection_risk.value for n in leaves), default=0)
|
||||
|
||||
return {
|
||||
"steps": len(leaves),
|
||||
"total_difficulty": total_difficulty,
|
||||
"avg_difficulty": total_difficulty / len(leaves) if leaves else 0,
|
||||
"total_cost": total_cost,
|
||||
"total_time_hours": total_time,
|
||||
"max_detection_risk": max_detection,
|
||||
"requires_insider": any(n.attributes.requires_insider for n in leaves),
|
||||
"requires_physical": any(n.attributes.requires_physical for n in leaves),
|
||||
}
|
||||
|
||||
def identify_critical_nodes(self) -> List[Tuple[AttackNode, int]]:
|
||||
"""Find nodes that appear in the most paths."""
|
||||
paths = self.get_all_paths()
|
||||
node_counts: Dict[str, Tuple[AttackNode, int]] = {}
|
||||
|
||||
for path in paths:
|
||||
for node in path:
|
||||
if node.id not in node_counts:
|
||||
node_counts[node.id] = (node, 0)
|
||||
node_counts[node.id] = (node, node_counts[node.id][1] + 1)
|
||||
|
||||
return sorted(
|
||||
node_counts.values(),
|
||||
key=lambda x: x[1],
|
||||
reverse=True
|
||||
)
|
||||
|
||||
def coverage_analysis(self, mitigated_attacks: Set[str]) -> Dict:
|
||||
"""Analyze how mitigations affect attack coverage."""
|
||||
all_paths = self.get_all_paths()
|
||||
blocked_paths = []
|
||||
open_paths = []
|
||||
|
||||
for path in all_paths:
|
||||
path_attacks = {n.id for n in path if n.node_type == NodeType.LEAF}
|
||||
if path_attacks & mitigated_attacks:
|
||||
blocked_paths.append(path)
|
||||
else:
|
||||
open_paths.append(path)
|
||||
|
||||
return {
|
||||
"total_paths": len(all_paths),
|
||||
"blocked_paths": len(blocked_paths),
|
||||
"open_paths": len(open_paths),
|
||||
"coverage_percentage": len(blocked_paths) / len(all_paths) * 100 if all_paths else 0,
|
||||
"open_path_details": [
|
||||
{"path": [n.name for n in p], "metrics": self.calculate_path_metrics(p)}
|
||||
for p in open_paths[:5] # Top 5 open paths
|
||||
]
|
||||
}
|
||||
|
||||
def prioritize_mitigations(self) -> List[Dict]:
|
||||
"""Prioritize mitigations by impact."""
|
||||
critical_nodes = self.identify_critical_nodes()
|
||||
paths = self.get_all_paths()
|
||||
total_paths = len(paths)
|
||||
|
||||
recommendations = []
|
||||
for node, count in critical_nodes:
|
||||
if node.node_type == NodeType.LEAF and node.mitigations:
|
||||
recommendations.append({
|
||||
"attack": node.name,
|
||||
"attack_id": node.id,
|
||||
"paths_blocked": count,
|
||||
"coverage_impact": count / total_paths * 100,
|
||||
"difficulty": node.attributes.difficulty.name,
|
||||
"mitigations": node.mitigations,
|
||||
})
|
||||
|
||||
return sorted(recommendations, key=lambda x: x["coverage_impact"], reverse=True)
|
||||
```
|
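As a design note, the hand-rolled cartesian product in `_combine_and_paths` can be expressed with `itertools.product`. A standalone sketch over plain lists of step names (the step names are invented for illustration):

```python
# Sketch: combining AND-children path sets via itertools.product.
# Each inner list is one child's set of alternative paths.
from itertools import product

def combine_and_paths(child_paths):
    # One combined path per combination: pick one alternative from each child,
    # then concatenate the chosen paths in child order.
    return [[step for path in combo for step in path]
            for combo in product(*child_paths)]

child_paths = [
    [["recon"]],                    # child 1: one way in
    [["phish"], ["stuff-creds"]],   # child 2: two alternatives
]
print(combine_and_paths(child_paths))
# [['recon', 'phish'], ['recon', 'stuff-creds']]
```

`product` with no arguments yields one empty combination, which matches the playbook's `[[]]` base case for an AND node with no children.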
||||
|
||||
## Best Practices
|
||||
|
||||
### Do's
|
||||
- **Start with clear goals** - Define what attacker wants
|
||||
- **Be exhaustive** - Consider all attack vectors
|
||||
- **Attribute attacks** - Cost, skill, and detection
|
||||
- **Update regularly** - New threats emerge
|
||||
- **Validate with experts** - Red team review
|
||||
|
||||
### Don'ts
|
||||
- **Don't oversimplify** - Real attacks are complex
|
||||
- **Don't ignore dependencies** - AND nodes matter
|
||||
- **Don't forget insider threats** - Not all attackers are external
|
||||
- **Don't skip mitigations** - Trees are for defense planning
|
||||
- **Don't make it static** - Threat landscape evolves
|
||||
|
||||
## Resources
|
||||
|
||||
- [Attack Trees by Bruce Schneier](https://www.schneier.com/academic/archives/1999/12/attack_trees.html)
|
||||
- [MITRE ATT&CK Framework](https://attack.mitre.org/)
|
||||
- [OWASP Attack Surface Analysis](https://owasp.org/www-community/controls/Attack_Surface_Analysis_Cheat_Sheet)
|
||||
39
skills/auth-implementation-patterns/SKILL.md
Normal file
@@ -0,0 +1,39 @@
|
||||
---
|
||||
name: auth-implementation-patterns
|
||||
description: Master authentication and authorization patterns including JWT, OAuth2, session management, and RBAC to build secure, scalable access control systems. Use when implementing auth systems, securing APIs, or debugging security issues.
|
||||
---
|
||||
|
||||
# Authentication & Authorization Implementation Patterns
|
||||
|
||||
Build secure, scalable authentication and authorization systems using industry-standard patterns and modern best practices.
|
||||
|
||||
## Use this skill when
|
||||
|
||||
- Implementing user authentication systems
|
||||
- Securing REST or GraphQL APIs
|
||||
- Adding OAuth2/social login or SSO
|
||||
- Designing session management or RBAC
|
||||
- Debugging authentication or authorization issues
|
||||
|
||||
## Do not use this skill when
|
||||
|
||||
- You only need UI copy or login page styling
|
||||
- The task is infrastructure-only without identity concerns
|
||||
- You cannot change auth policies or credential storage
|
||||
|
||||
## Instructions
|
||||
|
||||
- Define users, tenants, flows, and threat model constraints.
|
||||
- Choose auth strategy (session, JWT, OIDC) and token lifecycle.
|
||||
- Design authorization model and policy enforcement points.
|
||||
- Plan secrets storage, rotation, logging, and audit requirements.
|
||||
- If detailed examples are required, open `resources/implementation-playbook.md`.
|
||||
|
||||
## Safety
|
||||
|
||||
- Never log secrets, tokens, or credentials.
|
||||
- Enforce least privilege and secure storage for keys.
|
||||
|
||||
## Resources
|
||||
|
||||
- `resources/implementation-playbook.md` for detailed patterns and examples.
|
||||
@@ -0,0 +1,618 @@
|
||||
# Authentication and Authorization Implementation Playbook
|
||||
|
||||
This file contains detailed patterns, checklists, and code samples referenced by the skill.
|
||||
|
||||
## Core Concepts
|
||||
|
||||
### 1. Authentication vs Authorization
|
||||
|
||||
**Authentication (AuthN)**: Who are you?
|
||||
- Verifying identity (username/password, OAuth, biometrics)
|
||||
- Issuing credentials (sessions, tokens)
|
||||
- Managing login/logout
|
||||
|
||||
**Authorization (AuthZ)**: What can you do?
|
||||
- Permission checking
|
||||
- Role-based access control (RBAC)
|
||||
- Resource ownership validation
|
||||
- Policy enforcement
|
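At its core, RBAC reduces an authorization check to "does this user's role carry the permission this action demands?". A minimal sketch, with role and permission names invented for illustration:

```python
# Minimal RBAC sketch; roles, permissions, and the mapping are illustrative only.
ROLE_PERMISSIONS = {
    "admin":  {"posts:read", "posts:write", "users:manage"},
    "editor": {"posts:read", "posts:write"},
    "viewer": {"posts:read"},
}

def has_permission(role: str, permission: str) -> bool:
    # Unknown roles get an empty permission set rather than raising.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(has_permission("editor", "posts:write"))   # True
print(has_permission("viewer", "users:manage"))  # False
```

In a real system the mapping lives in a database or policy engine, and the check runs at a policy enforcement point such as route middleware.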
||||
|
||||
### 2. Authentication Strategies
|
||||
|
||||
**Session-Based:**
|
||||
- Server stores session state
|
||||
- Session ID in cookie
|
||||
- Traditional, simple, stateful
|
||||
|
||||
**Token-Based (JWT):**
|
||||
- Stateless, self-contained
|
||||
- Scales horizontally
|
||||
- Can store claims
|
||||
|
||||
**OAuth2/OpenID Connect:**
|
||||
- Delegate authentication
|
||||
- Social login (Google, GitHub)
|
||||
- Enterprise SSO
|
||||
|
||||
## JWT Authentication
|
||||
|
||||
### Pattern 1: JWT Implementation
|
||||
|
||||
```typescript
|
||||
// JWT structure: header.payload.signature
|
||||
import jwt from 'jsonwebtoken';
|
||||
import { Request, Response, NextFunction } from 'express';
|
||||
|
||||
interface JWTPayload {
|
||||
userId: string;
|
||||
email: string;
|
||||
role: string;
|
||||
iat: number;
|
||||
exp: number;
|
||||
}
|
||||
|
||||
// Generate JWT
|
||||
function generateTokens(userId: string, email: string, role: string) {
|
||||
const accessToken = jwt.sign(
|
||||
{ userId, email, role },
|
||||
process.env.JWT_SECRET!,
|
||||
{ expiresIn: '15m' } // Short-lived
|
||||
);
|
||||
|
||||
const refreshToken = jwt.sign(
|
||||
{ userId },
|
||||
process.env.JWT_REFRESH_SECRET!,
|
||||
{ expiresIn: '7d' } // Long-lived
|
||||
);
|
||||
|
||||
return { accessToken, refreshToken };
|
||||
}
|
||||
|
||||
// Verify JWT
|
||||
function verifyToken(token: string): JWTPayload {
|
||||
try {
|
||||
return jwt.verify(token, process.env.JWT_SECRET!) as JWTPayload;
|
||||
} catch (error) {
|
||||
if (error instanceof jwt.TokenExpiredError) {
|
||||
throw new Error('Token expired');
|
||||
}
|
||||
if (error instanceof jwt.JsonWebTokenError) {
|
||||
throw new Error('Invalid token');
|
||||
}
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
// Middleware
|
||||
function authenticate(req: Request, res: Response, next: NextFunction) {
|
||||
const authHeader = req.headers.authorization;
|
||||
if (!authHeader?.startsWith('Bearer ')) {
|
||||
return res.status(401).json({ error: 'No token provided' });
|
||||
}
|
||||
|
||||
const token = authHeader.substring(7);
|
||||
try {
|
||||
const payload = verifyToken(token);
|
||||
req.user = payload; // Attach user to request
|
||||
next();
|
||||
} catch (error) {
|
||||
return res.status(401).json({ error: 'Invalid token' });
|
||||
}
|
||||
}
|
||||
|
||||
// Usage
|
||||
app.get('/api/profile', authenticate, (req, res) => {
|
||||
res.json({ user: req.user });
|
||||
});
|
||||
```
|
||||
|
||||
### Pattern 2: Refresh Token Flow
|
||||
|
||||
```typescript
|
||||
interface StoredRefreshToken {
|
||||
token: string;
|
||||
userId: string;
|
||||
expiresAt: Date;
|
||||
createdAt: Date;
|
||||
}
|
||||
|
||||
class RefreshTokenService {
|
||||
// Store refresh token in database
|
||||
async storeRefreshToken(userId: string, refreshToken: string) {
|
||||
const expiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000);
|
||||
await db.refreshTokens.create({
|
||||
token: await hash(refreshToken), // Must be a deterministic digest (e.g. SHA-256); a salted hash would break the equality lookup on refresh
|
||||
userId,
|
||||
expiresAt,
|
||||
});
|
||||
}
|
||||
|
||||
// Refresh access token
|
||||
async refreshAccessToken(refreshToken: string) {
|
||||
// Verify refresh token
|
||||
let payload;
|
||||
try {
|
||||
payload = jwt.verify(
|
||||
refreshToken,
|
||||
process.env.JWT_REFRESH_SECRET!
|
||||
) as { userId: string };
|
||||
} catch {
|
||||
throw new Error('Invalid refresh token');
|
||||
}
|
||||
|
||||
// Check if token exists in database
|
||||
const storedToken = await db.refreshTokens.findOne({
|
||||
where: {
|
||||
token: await hash(refreshToken),
|
||||
userId: payload.userId,
|
||||
expiresAt: { $gt: new Date() },
|
||||
},
|
||||
});
|
||||
|
||||
if (!storedToken) {
|
||||
throw new Error('Refresh token not found or expired');
|
||||
}
|
||||
|
||||
// Get user
|
||||
const user = await db.users.findById(payload.userId);
|
||||
if (!user) {
|
||||
throw new Error('User not found');
|
||||
}
|
||||
|
||||
// Generate new access token
|
||||
const accessToken = jwt.sign(
|
||||
{ userId: user.id, email: user.email, role: user.role },
|
||||
process.env.JWT_SECRET!,
|
||||
{ expiresIn: '15m' }
|
||||
);
|
||||
|
||||
return { accessToken };
|
||||
}
|
||||
|
||||
// Revoke refresh token (logout)
|
||||
async revokeRefreshToken(refreshToken: string) {
|
||||
await db.refreshTokens.deleteOne({
|
||||
token: await hash(refreshToken),
|
||||
});
|
||||
}
|
||||
|
||||
// Revoke all user tokens (logout all devices)
|
||||
async revokeAllUserTokens(userId: string) {
|
||||
await db.refreshTokens.deleteMany({ userId });
|
||||
}
|
||||
}
|
||||
|
||||
// API endpoints
|
||||
app.post('/api/auth/refresh', async (req, res) => {
|
||||
const { refreshToken } = req.body;
|
||||
try {
|
||||
const { accessToken } = await refreshTokenService
|
||||
.refreshAccessToken(refreshToken);
|
||||
res.json({ accessToken });
|
||||
} catch (error) {
|
||||
res.status(401).json({ error: 'Invalid refresh token' });
|
||||
}
|
||||
});
|
||||
|
||||
app.post('/api/auth/logout', authenticate, async (req, res) => {
|
||||
const { refreshToken } = req.body;
|
||||
await refreshTokenService.revokeRefreshToken(refreshToken);
|
||||
res.json({ message: 'Logged out successfully' });
|
||||
});
|
||||
```
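A caveat on the `hash` helper used above: bcrypt-style hashes are salted, so hashing the presented token a second time would never equal the stored value. For database lookups the digest must be deterministic. The actual helper isn't shown in this guide; a minimal sketch using Node's built-in `crypto` module might look like:

```typescript
import { createHash } from 'crypto';

// Deterministic digest: the same token always maps to the same stored value,
// so it can be used directly in a WHERE clause. (A salted hash like bcrypt
// cannot — each call would produce a different output.)
async function hash(token: string): Promise<string> {
  return createHash('sha256').update(token).digest('hex');
}
```

Because refresh tokens are long random strings (not low-entropy passwords), an unsalted SHA-256 digest is sufficient here; brute-forcing the preimage is infeasible.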

## Session-Based Authentication

### Pattern 1: Express Session

```typescript
import session from 'express-session';
import RedisStore from 'connect-redis';
import { createClient } from 'redis';

// Setup Redis for session storage
const redisClient = createClient({
  url: process.env.REDIS_URL,
});
await redisClient.connect();

app.use(
  session({
    store: new RedisStore({ client: redisClient }),
    secret: process.env.SESSION_SECRET!,
    resave: false,
    saveUninitialized: false,
    cookie: {
      secure: process.env.NODE_ENV === 'production', // HTTPS only
      httpOnly: true, // No JavaScript access
      maxAge: 24 * 60 * 60 * 1000, // 24 hours
      sameSite: 'strict', // CSRF protection
    },
  })
);

// Login
app.post('/api/auth/login', async (req, res) => {
  const { email, password } = req.body;

  const user = await db.users.findOne({ email });
  if (!user || !(await verifyPassword(password, user.passwordHash))) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  // Store user in session
  req.session.userId = user.id;
  req.session.role = user.role;

  res.json({ user: { id: user.id, email: user.email, role: user.role } });
});

// Session middleware
function requireAuth(req: Request, res: Response, next: NextFunction) {
  if (!req.session.userId) {
    return res.status(401).json({ error: 'Not authenticated' });
  }
  next();
}

// Protected route
app.get('/api/profile', requireAuth, async (req, res) => {
  const user = await db.users.findById(req.session.userId);
  res.json({ user });
});

// Logout
app.post('/api/auth/logout', (req, res) => {
  req.session.destroy((err) => {
    if (err) {
      return res.status(500).json({ error: 'Logout failed' });
    }
    res.clearCookie('connect.sid');
    res.json({ message: 'Logged out successfully' });
  });
});
```
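One hardening step the login handler above omits: regenerating the session ID at login, so a session ID issued before authentication can never be fixed in advance by an attacker (session fixation). A sketch against a minimal stub of the `express-session` API — the `Session` type below is illustrative, not the real library typings:

```typescript
// Minimal stub of the express-session surface used here (assumption: real
// code would call req.session.regenerate from express-session).
type Session = {
  userId?: string;
  role?: string;
  regenerate: (cb: (err?: Error) => void) => void;
};

function establishSession(
  session: Session,
  user: { id: string; role: string },
  done: (err?: Error) => void
) {
  // Regenerate first: a session ID handed out before login must never
  // survive authentication.
  session.regenerate((err) => {
    if (err) return done(err);
    session.userId = user.id;
    session.role = user.role;
    done();
  });
}
```

In the login route this replaces the two direct `req.session.userId = …` assignments: credentials are checked first, then the session is regenerated and the identity stored inside the callback.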

## OAuth2 / Social Login

### Pattern 1: OAuth2 with Passport.js

```typescript
import passport from 'passport';
import { Strategy as GoogleStrategy } from 'passport-google-oauth20';
import { Strategy as GitHubStrategy } from 'passport-github2';

// Google OAuth
passport.use(
  new GoogleStrategy(
    {
      clientID: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
      callbackURL: '/api/auth/google/callback',
    },
    async (accessToken, refreshToken, profile, done) => {
      try {
        // Find or create user
        let user = await db.users.findOne({
          googleId: profile.id,
        });

        if (!user) {
          user = await db.users.create({
            googleId: profile.id,
            email: profile.emails?.[0]?.value,
            name: profile.displayName,
            avatar: profile.photos?.[0]?.value,
          });
        }

        return done(null, user);
      } catch (error) {
        return done(error, undefined);
      }
    }
  )
);

// Routes
app.get('/api/auth/google', passport.authenticate('google', {
  scope: ['profile', 'email'],
}));

app.get(
  '/api/auth/google/callback',
  passport.authenticate('google', { session: false }),
  (req, res) => {
    // Generate JWT
    const tokens = generateTokens(req.user.id, req.user.email, req.user.role);
    // Redirect to frontend with token
    res.redirect(`${process.env.FRONTEND_URL}/auth/callback?token=${tokens.accessToken}`);
  }
);
```

## Authorization Patterns

### Pattern 1: Role-Based Access Control (RBAC)

```typescript
enum Role {
  USER = 'user',
  MODERATOR = 'moderator',
  ADMIN = 'admin',
}

const roleHierarchy: Record<Role, Role[]> = {
  [Role.ADMIN]: [Role.ADMIN, Role.MODERATOR, Role.USER],
  [Role.MODERATOR]: [Role.MODERATOR, Role.USER],
  [Role.USER]: [Role.USER],
};

function hasRole(userRole: Role, requiredRole: Role): boolean {
  return roleHierarchy[userRole].includes(requiredRole);
}

// Middleware
function requireRole(...roles: Role[]) {
  return (req: Request, res: Response, next: NextFunction) => {
    if (!req.user) {
      return res.status(401).json({ error: 'Not authenticated' });
    }

    if (!roles.some(role => hasRole(req.user.role, role))) {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    next();
  };
}

// Usage
app.delete('/api/users/:id',
  authenticate,
  requireRole(Role.ADMIN),
  async (req, res) => {
    // Only admins can delete users
    await db.users.delete(req.params.id);
    res.json({ message: 'User deleted' });
  }
);
```
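The hierarchy table is what makes higher roles inherit the checks of lower ones. A standalone sanity check of that behavior (re-declaring the enum and table so the snippet runs on its own):

```typescript
// Re-declared here so the snippet is self-contained.
enum Role {
  USER = 'user',
  MODERATOR = 'moderator',
  ADMIN = 'admin',
}

const roleHierarchy: Record<Role, Role[]> = {
  [Role.ADMIN]: [Role.ADMIN, Role.MODERATOR, Role.USER],
  [Role.MODERATOR]: [Role.MODERATOR, Role.USER],
  [Role.USER]: [Role.USER],
};

function hasRole(userRole: Role, requiredRole: Role): boolean {
  return roleHierarchy[userRole].includes(requiredRole);
}

console.log(hasRole(Role.ADMIN, Role.USER));     // true: admin inherits user's access
console.log(hasRole(Role.USER, Role.MODERATOR)); // false: inheritance only flows downward
```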

### Pattern 2: Permission-Based Access Control

```typescript
enum Permission {
  READ_USERS = 'read:users',
  WRITE_USERS = 'write:users',
  DELETE_USERS = 'delete:users',
  READ_POSTS = 'read:posts',
  WRITE_POSTS = 'write:posts',
}

const rolePermissions: Record<Role, Permission[]> = {
  [Role.USER]: [Permission.READ_POSTS, Permission.WRITE_POSTS],
  [Role.MODERATOR]: [
    Permission.READ_POSTS,
    Permission.WRITE_POSTS,
    Permission.READ_USERS,
  ],
  [Role.ADMIN]: Object.values(Permission),
};

function hasPermission(userRole: Role, permission: Permission): boolean {
  return rolePermissions[userRole]?.includes(permission) ?? false;
}

function requirePermission(...permissions: Permission[]) {
  return (req: Request, res: Response, next: NextFunction) => {
    if (!req.user) {
      return res.status(401).json({ error: 'Not authenticated' });
    }

    const hasAllPermissions = permissions.every(permission =>
      hasPermission(req.user.role, permission)
    );

    if (!hasAllPermissions) {
      return res.status(403).json({ error: 'Insufficient permissions' });
    }

    next();
  };
}

// Usage
app.get('/api/users',
  authenticate,
  requirePermission(Permission.READ_USERS),
  async (req, res) => {
    const users = await db.users.findAll();
    res.json({ users });
  }
);
```

### Pattern 3: Resource Ownership

```typescript
// Check if user owns resource. The factory itself must be synchronous so
// Express receives middleware, not a Promise; the middleware it returns is async.
function requireOwnership(
  resourceType: 'post' | 'comment',
  resourceIdParam: string = 'id'
) {
  return async (req: Request, res: Response, next: NextFunction) => {
    if (!req.user) {
      return res.status(401).json({ error: 'Not authenticated' });
    }

    const resourceId = req.params[resourceIdParam];

    // Admins can access anything
    if (req.user.role === Role.ADMIN) {
      return next();
    }

    // Check ownership
    let resource;
    if (resourceType === 'post') {
      resource = await db.posts.findById(resourceId);
    } else if (resourceType === 'comment') {
      resource = await db.comments.findById(resourceId);
    }

    if (!resource) {
      return res.status(404).json({ error: 'Resource not found' });
    }

    if (resource.userId !== req.user.userId) {
      return res.status(403).json({ error: 'Not authorized' });
    }

    next();
  };
}

// Usage
app.put('/api/posts/:id',
  authenticate,
  requireOwnership('post'),
  async (req, res) => {
    // User can only update their own posts
    const post = await db.posts.update(req.params.id, req.body);
    res.json({ post });
  }
);
```

## Security Best Practices

### Pattern 1: Password Security

```typescript
import bcrypt from 'bcrypt';
import { z } from 'zod';

// Password validation schema
const passwordSchema = z.string()
  .min(12, 'Password must be at least 12 characters')
  .regex(/[A-Z]/, 'Password must contain uppercase letter')
  .regex(/[a-z]/, 'Password must contain lowercase letter')
  .regex(/[0-9]/, 'Password must contain number')
  .regex(/[^A-Za-z0-9]/, 'Password must contain special character');

// Hash password
async function hashPassword(password: string): Promise<string> {
  const saltRounds = 12; // cost factor 12 (2^12 iterations)
  return bcrypt.hash(password, saltRounds);
}

// Verify password
async function verifyPassword(
  password: string,
  hash: string
): Promise<boolean> {
  return bcrypt.compare(password, hash);
}

// Registration with password validation
app.post('/api/auth/register', async (req, res) => {
  try {
    const { email, password } = req.body;

    // Validate password
    passwordSchema.parse(password);

    // Check if user exists
    const existingUser = await db.users.findOne({ email });
    if (existingUser) {
      return res.status(400).json({ error: 'Email already registered' });
    }

    // Hash password
    const passwordHash = await hashPassword(password);

    // Create user
    const user = await db.users.create({
      email,
      passwordHash,
    });

    // Generate tokens
    const tokens = generateTokens(user.id, user.email, user.role);

    res.status(201).json({
      user: { id: user.id, email: user.email },
      ...tokens,
    });
  } catch (error) {
    if (error instanceof z.ZodError) {
      return res.status(400).json({ error: error.errors[0].message });
    }
    res.status(500).json({ error: 'Registration failed' });
  }
});
```

### Pattern 2: Rate Limiting

```typescript
import rateLimit from 'express-rate-limit';
import RedisStore from 'rate-limit-redis';

// Login rate limiter
const loginLimiter = rateLimit({
  store: new RedisStore({ client: redisClient }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // 5 attempts
  message: 'Too many login attempts, please try again later',
  standardHeaders: true,
  legacyHeaders: false,
});

// API rate limiter
const apiLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 100, // 100 requests per minute
  standardHeaders: true,
});

// Apply to routes
app.post('/api/auth/login', loginLimiter, async (req, res) => {
  // Login logic
});

app.use('/api/', apiLimiter);
```

## Best Practices

1. **Never Store Plain Passwords**: Always hash with bcrypt/argon2
2. **Use HTTPS**: Encrypt data in transit
3. **Short-Lived Access Tokens**: 15-30 minutes max
4. **Secure Cookies**: httpOnly, secure, sameSite flags
5. **Validate All Input**: Email format, password strength
6. **Rate Limit Auth Endpoints**: Prevent brute force attacks
7. **Implement CSRF Protection**: For session-based auth
8. **Rotate Secrets Regularly**: JWT secrets, session secrets
9. **Log Security Events**: Login attempts, failed auth
10. **Use MFA When Possible**: Extra security layer

## Common Pitfalls

- **Weak Passwords**: Enforce strong password policies
- **JWT in localStorage**: Vulnerable to XSS, use httpOnly cookies
- **No Token Expiration**: Tokens should expire
- **Client-Side Auth Checks Only**: Always validate server-side
- **Insecure Password Reset**: Use secure tokens with expiration
- **No Rate Limiting**: Vulnerable to brute force
- **Trusting Client Data**: Always validate on server

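For the localStorage pitfall, the usual fix is to deliver the refresh token as an httpOnly cookie so script injected via XSS can never read it. A minimal sketch against an Express-style `res` — the narrow parameter type is only there to keep the snippet self-contained:

```typescript
// Send the refresh token as an httpOnly cookie instead of in the JSON body.
function setRefreshCookie(
  res: { cookie: (name: string, value: string, opts: object) => void },
  refreshToken: string
) {
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,                  // invisible to JavaScript, so XSS cannot read it
    secure: true,                    // HTTPS only
    sameSite: 'strict',              // not sent on cross-site requests
    maxAge: 7 * 24 * 60 * 60 * 1000, // match the refresh token's expiry
    path: '/api/auth/refresh',       // only sent to the endpoint that needs it
  });
}
```

The short-lived access token can still live in memory on the client; only the long-lived refresh token needs cookie protection.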

## Resources

- **references/jwt-best-practices.md**: JWT implementation guide
- **references/oauth2-flows.md**: OAuth2 flow diagrams and examples
- **references/session-security.md**: Secure session management
- **assets/auth-security-checklist.md**: Security review checklist
- **assets/password-policy-template.md**: Password requirements template
- **scripts/token-validator.ts**: JWT validation utility

`skills/automate-whatsapp/SKILL.md` (new file, 257 lines)

---
name: automate-whatsapp
description: "Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/integrations for state. Use when automating WhatsApp conversations and event handling."
source: "https://github.com/gokapso/agent-skills/tree/master/skills/automate-whatsapp"
risk: safe
---

# Automate WhatsApp

## When to use

Use this skill to build and run WhatsApp automations: workflow CRUD, graph edits, triggers, executions, function management, app integrations, and D1 database operations.

## Setup

Env vars:
- `KAPSO_API_BASE_URL` (host only, no `/platform/v1`)
- `KAPSO_API_KEY`

## How to

### Edit a workflow graph

1. Fetch graph: `node scripts/get-graph.js <workflow_id>` (note the `lock_version`)
2. Edit the JSON (see graph rules below)
3. Validate: `node scripts/validate-graph.js --definition-file <path>`
4. Update: `node scripts/update-graph.js <workflow_id> --expected-lock-version <n> --definition-file <path>`
5. Re-fetch to confirm

For small edits, use `edit-graph.js` with `--old-file` and `--new-file` instead.

If you get a `lock_version` conflict, re-fetch the graph, re-apply your changes, and retry with the new `lock_version`.

### Manage triggers

1. List: `node scripts/list-triggers.js <workflow_id>`
2. Create: `node scripts/create-trigger.js <workflow_id> --trigger-type <type> --phone-number-id <id>`
3. Toggle: `node scripts/update-trigger.js --trigger-id <id> --active true|false`
4. Delete: `node scripts/delete-trigger.js --trigger-id <id>`

For `inbound_message` triggers, first run `node scripts/list-whatsapp-phone-numbers.js` to get `phone_number_id`.

### Debug executions

1. List: `node scripts/list-executions.js <workflow_id>`
2. Inspect: `node scripts/get-execution.js <execution-id>`
3. Get value: `node scripts/get-context-value.js <execution-id> --variable-path vars.foo`
4. Events: `node scripts/list-execution-events.js <execution-id>`

### Create and deploy a function

1. Write code with the handler signature (see function rules below)
2. Create: `node scripts/create-function.js --name <name> --code-file <path>`
3. Deploy: `node scripts/deploy-function.js --function-id <id>`
4. Verify: `node scripts/get-function.js --function-id <id>`

### Set up agent node with app integrations

1. Find model: `node scripts/list-provider-models.js`
2. Find account: `node scripts/list-accounts.js --app-slug <slug>` (use `pipedream_account_id`)
3. Find action: `node scripts/search-actions.js --query <word> --app-slug <slug>` (action_id = key)
4. Create integration: `node scripts/create-integration.js --action-id <id> --app-slug <slug> --account-id <id> --configured-props <json>`
5. Add tools to agent node via `flow_agent_app_integration_tools`

### Database CRUD

1. List tables: `node scripts/list-tables.js`
2. Query: `node scripts/query-rows.js --table <name> --filters <json>`
3. Create/update/delete with row scripts

## Graph rules

- Exactly one start node with `id` = `start`
- Never change existing node IDs
- Use `{node_type}_{timestamp_ms}` for new node IDs
- Non-decide nodes have 0 or 1 outgoing `next` edge
- Decide edge labels must match `conditions[].label`
- Edge keys are `source`/`target`/`label` (not `from`/`to`)

For full schema details, see `references/graph-contract.md`.
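The rules above can be illustrated with a hypothetical fragment. Field names beyond the ones listed in the rules are assumptions here — the authoritative shape is in `references/graph-contract.md`:

```typescript
// Hypothetical fragment, not the full graph schema.
const newNodeId = `send_message_${Date.now()}`; // {node_type}_{timestamp_ms}

const edge = {
  source: 'start',   // edge keys are source/target/label, never from/to
  target: newNodeId,
  label: 'next',     // non-decide nodes carry a single `next` edge
};
```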

## Function rules

```js
async function handler(request, env) {
  // Parse input
  const body = await request.json();
  // Use env.KV and env.DB as needed
  return new Response(JSON.stringify({ result: "ok" }));
}
```

- Do NOT use `export`, `export default`, or arrow functions
- Return a `Response` object

## Execution context

Always use this structure:
- `vars` - user-defined variables
- `system` - system variables
- `context` - channel data
- `metadata` - request metadata

## Scripts

### Workflows

| Script | Purpose |
|--------|---------|
| `list-workflows.js` | List workflows (metadata only) |
| `get-workflow.js` | Get workflow metadata |
| `create-workflow.js` | Create a workflow |
| `update-workflow-settings.js` | Update workflow settings |

### Graph

| Script | Purpose |
|--------|---------|
| `get-graph.js` | Get workflow graph + lock_version |
| `edit-graph.js` | Patch graph via string replacement |
| `update-graph.js` | Replace entire graph |
| `validate-graph.js` | Validate graph structure locally |

### Triggers

| Script | Purpose |
|--------|---------|
| `list-triggers.js` | List triggers for a workflow |
| `create-trigger.js` | Create a trigger |
| `update-trigger.js` | Enable/disable a trigger |
| `delete-trigger.js` | Delete a trigger |
| `list-whatsapp-phone-numbers.js` | List phone numbers for trigger setup |

### Executions

| Script | Purpose |
|--------|---------|
| `list-executions.js` | List executions |
| `get-execution.js` | Get execution details |
| `get-context-value.js` | Read value from execution context |
| `update-execution-status.js` | Force execution state |
| `resume-execution.js` | Resume waiting execution |
| `list-execution-events.js` | List execution events |

### Functions

| Script | Purpose |
|--------|---------|
| `list-functions.js` | List project functions |
| `get-function.js` | Get function details + code |
| `create-function.js` | Create a function |
| `update-function.js` | Update function code |
| `deploy-function.js` | Deploy function to runtime |
| `invoke-function.js` | Invoke function with payload |
| `list-function-invocations.js` | List function invocations |

### App integrations

| Script | Purpose |
|--------|---------|
| `list-apps.js` | Search integration apps |
| `search-actions.js` | Search actions (action_id = key) |
| `get-action-schema.js` | Get action JSON schema |
| `list-accounts.js` | List connected accounts |
| `create-connect-token.js` | Create OAuth connect link |
| `configure-prop.js` | Resolve remote_options for a prop |
| `reload-props.js` | Reload dynamic props |
| `list-integrations.js` | List saved integrations |
| `create-integration.js` | Create an integration |
| `update-integration.js` | Update an integration |
| `delete-integration.js` | Delete an integration |

### Databases

| Script | Purpose |
|--------|---------|
| `list-tables.js` | List D1 tables |
| `get-table.js` | Get table schema + sample rows |
| `query-rows.js` | Query rows with filters |
| `create-row.js` | Create a row |
| `update-row.js` | Update rows |
| `upsert-row.js` | Upsert a row |
| `delete-row.js` | Delete rows |

### OpenAPI

| Script | Purpose |
|--------|---------|
| `openapi-explore.mjs` | Explore OpenAPI (search/op/schema/where) |

Install deps (once):
```bash
npm i
```

Examples:
```bash
node scripts/openapi-explore.mjs --spec workflows search "variables"
node scripts/openapi-explore.mjs --spec workflows op getWorkflowVariables
node scripts/openapi-explore.mjs --spec platform op queryDatabaseRows
```

## Notes

- Prefer file paths over inline JSON (`--definition-file`, `--code-file`)
- `action_id` is the same as `key` from `search-actions`
- `--account-id` uses `pipedream_account_id` from `list-accounts`
- Variable CRUD (`variables-set.js`, `variables-delete.js`) is blocked: the Platform API doesn't support it
- Raw SQL execution is not supported via the Platform API

## References

Read before editing:
- [references/graph-contract.md](references/graph-contract.md) - Graph schema, computed vs editable fields, lock_version
- [references/node-types.md](references/node-types.md) - Node types and config shapes
- [references/workflow-overview.md](references/workflow-overview.md) - Execution flow and states

Other references:
- [references/execution-context.md](references/execution-context.md) - Context structure and variable substitution
- [references/triggers.md](references/triggers.md) - Trigger types and setup
- [references/app-integrations.md](references/app-integrations.md) - App integration and variable_definitions
- [references/functions-reference.md](references/functions-reference.md) - Function management
- [references/functions-payloads.md](references/functions-payloads.md) - Payload shapes for functions
- [references/databases-reference.md](references/databases-reference.md) - Database operations

## Assets

| File | Description |
|------|-------------|
| `workflow-linear.json` | Minimal linear workflow |
| `workflow-decision.json` | Minimal branching workflow |
| `workflow-agent-simple.json` | Minimal agent workflow |
| `workflow-customer-support-intake-agent.json` | Customer support intake |
| `workflow-interactive-buttons-decide-function.json` | Interactive buttons + decide (function) |
| `workflow-interactive-buttons-decide-ai.json` | Interactive buttons + decide (AI) |
| `workflow-api-template-wait-agent.json` | API trigger + template + agent |
| `function-decide-route-interactive-buttons.json` | Function for button routing |
| `agent-app-integration-example.json` | Agent node with app integrations |

## Related skills

- `integrate-whatsapp` - Onboarding, webhooks, messaging, templates, flows
- `observe-whatsapp` - Debugging, logs, health checks

<!-- FILEMAP:BEGIN -->
```text
[automate-whatsapp file map]|root: .
|.:{package.json,SKILL.md}
|assets:{agent-app-integration-example.json,databases-example.json,function-decide-route-interactive-buttons.json,functions-example.json,workflow-agent-simple.json,workflow-api-template-wait-agent.json,workflow-customer-support-intake-agent.json,workflow-decision.json,workflow-interactive-buttons-decide-ai.json,workflow-interactive-buttons-decide-function.json,workflow-linear.json}
|references:{app-integrations.md,databases-reference.md,execution-context.md,function-contracts.md,functions-payloads.md,functions-reference.md,graph-contract.md,node-types.md,triggers.md,workflow-overview.md,workflow-reference.md}
|scripts:{configure-prop.js,create-connect-token.js,create-function.js,create-integration.js,create-row.js,create-trigger.js,create-workflow.js,delete-integration.js,delete-row.js,delete-trigger.js,deploy-function.js,edit-graph.js,get-action-schema.js,get-context-value.js,get-execution-event.js,get-execution.js,get-function.js,get-graph.js,get-table.js,get-workflow.js,invoke-function.js,list-accounts.js,list-apps.js,list-execution-events.js,list-executions.js,list-function-invocations.js,list-functions.js,list-integrations.js,list-provider-models.js,list-tables.js,list-triggers.js,list-whatsapp-phone-numbers.js,list-workflows.js,openapi-explore.mjs,query-rows.js,reload-props.js,resume-execution.js,search-actions.js,update-execution-status.js,update-function.js,update-graph.js,update-integration.js,update-row.js,update-trigger.js,update-workflow-settings.js,upsert-row.js,validate-graph.js,variables-delete.js,variables-list.js,variables-set.js}
|scripts/lib/databases:{args.js,filters.js,kapso-api.js}
|scripts/lib/functions:{args.js,kapso-api.js}
|scripts/lib/workflows:{args.js,kapso-api.js,result.js}
```
<!-- FILEMAP:END -->

`skills/avalonia-layout-zafiro/SKILL.md` (new file, 59 lines)

---
name: avalonia-layout-zafiro
description: Guidelines for modern Avalonia UI layout using Zafiro.Avalonia, emphasizing shared styles, generic components, and avoiding XAML redundancy.
allowed-tools: Read, Write, Edit, Glob, Grep
---

# Avalonia Layout with Zafiro.Avalonia

> Master modern, clean, and maintainable Avalonia UI layouts.
> **Focus on semantic containers, shared styles, and minimal XAML.**

## 🎯 Selective Reading Rule

**Read ONLY files relevant to the layout challenge!**

---

## 📑 Content Map

| File | Description | When to Read |
|------|-------------|--------------|
| `themes.md` | Theme organization and shared styles | Setting up or refining app themes |
| `containers.md` | Semantic containers (`HeaderedContainer`, `EdgePanel`, `Card`) | Structuring views and layouts |
| `icons.md` | Icon usage with `IconExtension` and `IconOptions` | Adding and customizing icons |
| `behaviors.md` | `Xaml.Interaction.Behaviors` and avoiding Converters | Implementing complex interactions |
| `components.md` | Generic components and avoiding nesting | Creating reusable UI elements |

---

## 🔗 Related Project (Exemplary Implementation)

For a real-world example, refer to the **Angor** project:
`/mnt/fast/Repos/angor/src/Angor/Avalonia/Angor.Avalonia.sln`

---

## ✅ Checklist for Clean Layouts

- [ ] **Used semantic containers?** (e.g., `HeaderedContainer` instead of `Border` with a manual header)
- [ ] **Avoided redundant properties?** Use shared styles in `axaml` files.
- [ ] **Minimized nesting?** Flatten layouts using `EdgePanel` or generic components.
- [ ] **Icons via extension?** Use `{Icon fa-name}` and `IconOptions` for styling.
- [ ] **Behaviors over code-behind?** Use `Interaction.Behaviors` for UI logic.
- [ ] **Avoided Converters?** Prefer ViewModel properties or Behaviors unless necessary.

---

## ❌ Anti-Patterns

**DON'T:**
- Use hardcoded colors or sizes (literals) in views.
- Create deep nesting of `Grid` and `StackPanel`.
- Repeat visual properties across multiple elements (use Styles).
- Use `IValueConverter` for simple logic that belongs in the ViewModel.

**DO:**
- Use `DynamicResource` for colors and brushes.
- Extract repeated layouts into generic components.
- Leverage `Zafiro.Avalonia` specific panels like `EdgePanel` for common UI patterns.

`skills/avalonia-layout-zafiro/behaviors.md` (new file, 35 lines)

# Interactions and Logic

To keep XAML clean and maintainable, minimize logic in views and avoid excessive use of converters.

## 🎭 Xaml.Interaction.Behaviors

Use `Interaction.Behaviors` to handle UI-related logic that doesn't belong in the ViewModel, such as focus management, animations, or specialized event handling.

```xml
<TextBox Text="{Binding Address}">
  <Interaction.Behaviors>
    <UntouchedClassBehavior />
  </Interaction.Behaviors>
</TextBox>
```

### Why use Behaviors?
- **Encapsulation**: UI logic is contained in a reusable behavior class.
- **Clean XAML**: Avoids code-behind and complex XAML triggers.
- **Testability**: Behaviors can be tested independently of the View.

## 🚫 Avoiding Converters

Converters often lead to "magical" logic hidden in XAML. Whenever possible, prefer:

1. **ViewModel Properties**: Let the ViewModel provide the final data format (e.g., a `string` formatted for display).
2. **MultiBinding**: Use for simple logic combinations (And/Or) directly in XAML.
3. **Behaviors**: For more complex interactions that involve state or events.

### When to use Converters?
Only use them when the conversion is purely visual and highly reusable across different contexts (e.g., `BoolToOpacityConverter`).

## 🧩 Simplified Interactions

If you find yourself needing a complex converter or behavior, consider whether the component can be simplified or the data model adjusted to make the view binding more direct.