Compare commits
142 Commits
| SHA1 |
| --- |
| 3186c43cd9 |
| 7c8481bcb4 |
| 6670ca074f |
| 9dc93c1cb9 |
| 44e51f0ea9 |
| b0a8a59124 |
| 45bb3e5617 |
| 58489dfbaf |
| 35556e0306 |
| e7ae616385 |
| e06454dafd |
| 17bce709de |
| 167d7c97c7 |
| 817b7fe635 |
| 5f1f624b7f |
| cc2946b6d5 |
| c44f0f6505 |
| dd60bb2940 |
| b4e952d2a8 |
| d6fd03cea7 |
| ef994f7e5d |
| 183c792fef |
| 56720c9e1b |
| 800dc51041 |
| ebaa824d74 |
| dc6f3c51e5 |
| 2675db4d2f |
| 9c4724fb71 |
| e94d250e55 |
| 29e6cf6966 |
| 6fc7543a96 |
| b85ba3500f |
| 4419102cc9 |
| 41cd889ebd |
| c12f68780b |
| f4b23f7480 |
| 4df02e8068 |
| 3c899d01f2 |
| b7a64f7b3b |
| d556615959 |
| 67a3d81894 |
| b690d7beb2 |
| 03c6270dc6 |
| 69e1545618 |
| 4dcc4b29b0 |
| 797bf03dd1 |
| 1b2bed231d |
| 45e5ebbdbd |
| 0824bef4ba |
| c124b3b174 |
| b328c91767 |
| 31f1697e28 |
| a58aa5628c |
| 7eabe62ae8 |
| 37684d0fed |
| 601649074d |
| a648e1adb7 |
| 2a88369687 |
| 1a60d58ba0 |
| d1a14dfab9 |
| 621dbe008e |
| eb493121d3 |
| 801c8fa475 |
| cf00d4fcca |
| 6070da6a63 |
| fd9b119040 |
| ca2551fe2b |
| 0da99cd2c9 |
| ce852bed63 |
| 53671205f0 |
| ac20cc63b6 |
| e1c84cd8f4 |
| 73e51321ca |
| eca46228ed |
| aa164fac16 |
| 6247fcefab |
| b46e45fb4d |
| 8839ed1b2d |
| 5ba1fe9a97 |
| 85f26eb186 |
| 0fc520c7fe |
| 7f5ca000bd |
| 679eb72d23 |
| 2b3277c066 |
| 850c940dfd |
| 1bc750e4a1 |
| 84a41851e0 |
| 6d94cf984c |
| 129949ddf0 |
| f893807051 |
| 9040899e65 |
| b29fa15bf3 |
| b05245e68b |
| 49e01dd216 |
| 460a8432a5 |
| 878b876475 |
| 189c0824d2 |
| 67e7e998f8 |
| 4432e60445 |
| b71eff117b |
| 991de2de2f |
| 73ceec4e7d |
| 263c507684 |
| afc06d1af6 |
| 988f528708 |
| 3f7dce00b8 |
| 3d6c75d37f |
| 2070a91ef7 |
| 17505fe683 |
| f9f4375e4e |
| 637c20f3c3 |
| 72f5b9500d |
| bedfbb5c1c |
| 318199e9b3 |
| cafa9d5c52 |
| 963e4660c8 |
| 49f1bf1335 |
| d6eefe200d |
| 41717e78db |
| 2a016df011 |
| 386b3c757e |
| 3d79501eba |
| 7e24ed2568 |
| 418982eb85 |
| 335359f138 |
| 4ec91e9fbe |
| db3e262df3 |
| d280ad1c3a |
| b2aa003d57 |
| 3c3280d9ac |
| 58f8d654ef |
| 5593cad434 |
| 59151b3671 |
| 1dc10ee3a0 |
| 4c400ca121 |
| 4310ca4922 |
| 9f6d75245f |
| 1974e62ec1 |
| 9dd8fd6b51 |
| b082ba9c42 |
| 90b4d5adb3 |
| d3883ffaf9 |
.github/ISSUE_49_COMMENT.md (vendored, new file: +15)
@@ -0,0 +1,15 @@
+Suggested comment for [Issue #49](https://github.com/sickn33/antigravity-awesome-skills/issues/49). Paste this on the issue:
+
+---
+
+The 404 happens because the package wasn’t published to npm yet. We’ve addressed it in two ways:
+
+1. **Publish to npm** – We’re set up to publish so `npx antigravity-awesome-skills` will work after the first release. You can also trigger a manual publish via the “Publish to npm” workflow (Actions tab) if you have `NPM_TOKEN` configured.
+
+2. **Fallback** – Until then (or if you hit a 404 for any reason), use:
+```bash
+npx github:sickn33/antigravity-awesome-skills
+```
+The README, GETTING_STARTED, and FAQ now mention this fallback.
+
+Thanks for reporting.
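For reference, a minimal shell sketch of the two remedies the comment describes, assuming the GitHub CLI (`gh`) is installed and authenticated against this repository; the workflow name comes from `publish-npm.yml` further down this diff:

```bash
# Fallback from the comment above: run the package straight from GitHub.
npx github:sickn33/antigravity-awesome-skills

# Manually dispatch the publish workflow (requires the NPM_TOKEN repo secret).
gh workflow run "Publish to npm" --repo sickn33/antigravity-awesome-skills
```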
.github/MAINTENANCE.md (vendored, 60 changed lines)
@@ -1,4 +1,4 @@
-# 🛠️ Repository Maintenance Guide (V4)
+# 🛠️ Repository Maintenance Guide (V5)
 
 > **"If it's not documented, it's broken."**
 
@@ -41,7 +41,7 @@ it means you **did not run or commit** the Validation Chain correctly.
 
 ### 3. 📝 EVIDENCE OF WORK
 
-- You must create/update `walkthrough.md` or `RELEASE_NOTES.md` to document what changed.
+- You must create/update `walkthrough.md` or `CHANGELOG.md` to document what changed.
 - If you made something new, **link it** in the artifacts.
 
 ### 4. 🚫 NO BRANCHES
@@ -145,6 +145,24 @@ Locations to check:
 - **Antigravity Badge**: Must point to `https://github.com/sickn33/antigravity-awesome-skills`, NOT `anthropics/antigravity`.
 - **License**: Ensure the link points to `LICENSE` file.
 
+### F. Workflows Consistency (NEW in V5)
+
+If you touch any Workflows-related artifact, keep all workflow surfaces in sync:
+
+1. `docs/WORKFLOWS.md` (human-readable playbooks)
+2. `data/workflows.json` (machine-readable schema)
+3. `skills/antigravity-workflows/SKILL.md` (orchestration entrypoint)
+
+Rules:
+
+- Every workflow id referenced in docs must exist in `data/workflows.json`.
+- If you add/remove a workflow step category, update prompt examples accordingly.
+- If a workflow references optional skills not yet merged (example: `go-playwright`), mark them explicitly as **optional** in docs.
+- If workflow onboarding text is changed, update the docs trinity:
+  - `README.md`
+  - `docs/GETTING_STARTED.md`
+  - `docs/FAQ.md`
+
 ---
 
 ## 3. 🛡️ Governance & Quality Bar
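A hedged sketch of how the first rule above could be spot-checked locally; the `.workflows[].id` path is an assumed shape for `data/workflows.json`, not confirmed by this diff:

```bash
# Hypothetical consistency check: every workflow id in data/workflows.json
# should be mentioned somewhere in docs/WORKFLOWS.md.
for id in $(jq -r '.workflows[].id' data/workflows.json); do
  grep -q "$id" docs/WORKFLOWS.md || echo "id missing from docs: $id"
done
```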
@@ -172,20 +190,42 @@ Reject any PR that fails this:
 When cutting a new version (e.g., V4):
 
 1. **Run Full Validation**: `python3 scripts/validate_skills.py --strict`
-2. **Update Changelog**: Create `RELEASE_NOTES.md`.
-3. **Bump Version**: Update header in `README.md`.
-4. **Tag Release**:
+2. **Update Changelog**: Add the new release section to `CHANGELOG.md`.
+3. **Bump Version**:
+   - Update `package.json` → `"version": "X.Y.Z"` (source of truth for npm).
+   - Update version header in `README.md` if it displays the number.
+   - One-liner: `npm version patch` (or `minor`/`major`) — bumps `package.json` and creates a git tag; then amend if you need to tag after release.
+4. **Create GitHub Release** (REQUIRED):
+
+> ⚠️ **CRITICAL**: Pushing a tag (`git push --tags`) is NOT enough. You must create a **GitHub Release Object** for it to appear in the sidebar and trigger the NPM publish workflow.
+
+Use the GitHub CLI:
 
 ```bash
 git tag -a v4.0.0 -m "V4 Enterprise Edition"
 git push origin v4.0.0
+# This creates the tag AND the release page automatically
+gh release create v4.0.0 --title "v4.0.0 - [Theme Name]" --notes-file release_notes.md
 ```
 
-### 📋 Release Note Template
+_Or manually via the GitHub UI > Releases > Draft a new release._
 
-All changeslogs/release notes MUST follow this structure to ensure professionalism and quality:
+5. **Publish to npm** (so `npx antigravity-awesome-skills` works):
+   - **Option A (manual):** From repo root, with npm logged in and 2FA/token set up:
+     ```bash
+     npm publish
+     ```
+     You cannot republish the same version; always bump `package.json` before publishing.
+   - **Option B (CI):** On GitHub, create a **Release** (tag e.g. `v4.6.1`). The workflow [Publish to npm](.github/workflows/publish-npm.yml) runs on **Release published** and runs `npm publish` if the repo secret `NPM_TOKEN` is set (npm → Access Tokens → Granular token with Publish, then add as repo secret `NPM_TOKEN`).
+
+6. **Close linked issue(s)**:
+   - If the release completes an issue scope (feature/fix), close it with `gh issue close <id> --comment "..."`
+   - Include release tag reference in the closing note when applicable.
+
+### 📋 Changelog Entry Template
+
+Each new release section in `CHANGELOG.md` should follow [Keep a Changelog](https://keepachangelog.com/) and this structure:
 
 ```markdown
-# Release vX.Y.Z: [Theme Name]
+## [X.Y.Z] - YYYY-MM-DD - "[Theme Name]"
 
 > **[One-line catchy summary of the release]**
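Read end to end, steps 1 through 5 collapse into a short command sequence; a sketch assuming `npm` and `gh` are both authenticated and `CHANGELOG.md` has already been edited:

```bash
python3 scripts/validate_skills.py --strict   # step 1: validation gate
npm version patch                             # step 3: bumps package.json and tags
git push origin main --follow-tags
# steps 4 + 5: the Release object triggers the npm publish workflow
gh release create vX.Y.Z --title "vX.Y.Z - [Theme Name]" --notes-file release_notes.md
```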
.github/workflows/ci.yml (vendored, 10 changed lines)
@@ -45,6 +45,13 @@ jobs:
       - name: Install npm dependencies
         run: npm ci
 
+      - name: Audit npm dependencies
+        run: npm audit --audit-level=high
+        continue-on-error: true
+
+      - name: Run tests
+        run: npm run test
+
       - name: 📦 Build catalog
         run: npm run catalog
 
@@ -61,6 +68,9 @@
           # If no changes, exit successfully
           git diff --quiet && exit 0
 
+          # Pull with rebase to integrate remote changes
+          git pull origin main --rebase || true
+
           git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md || true
 
           # If nothing to commit, exit successfully
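The commit-back guard in the second hunk can be dry-run locally; a rough equivalent of what the job does (the commit message here is hypothetical):

```bash
# Mirror of the CI guard above: exit early when the catalog build changed nothing.
git diff --quiet && { echo "no changes"; exit 0; }
git pull origin main --rebase || true
git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md || true
# Commit only if something is actually staged.
git diff --cached --quiet || git commit -m "chore: refresh generated catalog"
```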
.github/workflows/publish-npm.yml (vendored, new file: +28)
@@ -0,0 +1,28 @@
+# Publish antigravity-awesome-skills to npm on release.
+# Requires NPM_TOKEN secret (npm → Access Tokens → Granular token with Publish).
+# Before creating a Release: bump package.json "version" (npm forbids republishing the same version).
+# Release tag (e.g. v4.6.1) should match package.json version.
+
+name: Publish to npm
+
+on:
+  release:
+    types: [published]
+  workflow_dispatch:
+
+jobs:
+  publish:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Setup Node
+        uses: actions/setup-node@v4
+        with:
+          node-version: "20"
+          registry-url: "https://registry.npmjs.org"
+
+      - name: Publish
+        run: npm publish
+        env:
+          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
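A sketch of the one-time setup this workflow depends on, assuming the GitHub CLI is authenticated; the token value is the npm granular token described in the header comments:

```bash
# Store the npm token as the repo secret the workflow reads (value is prompted).
gh secret set NPM_TOKEN --repo sickn33/antigravity-awesome-skills

# Before publishing a release, confirm the tag will match package.json.
node -p "require('./package.json').version"
```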
.gitignore (vendored, 5 changed lines)
@@ -1,4 +1,6 @@
 node_modules/
 __pycache__/
+.worktrees/
+
 walkthrough.md
 .agent/rules/
@@ -24,3 +26,6 @@ scripts/*voltagent*.py
 scripts/*html*.py
 scripts/*similar*.py
 scripts/*count*.py
+
+# Optional baseline for legacy JS validator (scripts/validate-skills.js)
+validation-baseline.json
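A quick way to confirm the new entries take effect, using plain git:

```bash
# Prints the matching .gitignore rule for each path, or fails if not ignored.
git check-ignore -v .worktrees/ validation-baseline.json
```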
CATALOG.md (324 changed lines)
@@ -1,13 +1,15 @@
# Skill Catalog

Generated at: 2026-01-31T07:34:21.497Z
Generated at: 2026-02-08T00:00:00.000Z

Total skills: 618
Total skills: 856

## architecture (58)
## architecture (64)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `angular` | Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns. Use PROACTIV... | angular | angular, v20, deep, knowledge, signals, standalone, components, zoneless, applications, ssr, hydration, reactive |
| `angular-state-management` | Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solu... | angular, state | angular, state, signals, ngrx, rxjs, setting, up, global, managing, component, stores, choosing |
| `architect-review` | Master software architect specializing in modern architecture patterns, clean architecture, microservices, event-driven systems, and DDD. Reviews system desi... | | architect, review, software, specializing, architecture, clean, microservices, event, driven, ddd, reviews, designs |
| `architecture` | Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing ... | architecture | architecture, architectural, decision, making, framework, requirements, analysis, trade, off, evaluation, adr, documentation |
| `architecture-decision-records` | Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant techn... | architecture, decision, records | architecture, decision, records, write, maintain, adrs, following, technical, documentation, documenting, significant, decisions |
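Per the `npm run catalog` step in `ci.yml` above, the stamp and totals in this header are generated rather than hand-edited; presumably the same command reproduces these tables locally:

```bash
# Rebuilds CATALOG.md (the "Generated at" / "Total skills" header and the tables).
npm run catalog
```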
@@ -21,6 +23,7 @@ Total skills: 618
| `c4-code` | Expert C4 Code-level documentation specialist. Analyzes code directories to create comprehensive C4 code-level documentation including function signatures, a... | c4, code | c4, code, level, documentation, analyzes, directories, including, function, signatures, arguments, dependencies, structure |
| `c4-component` | Expert C4 Component-level documentation specialist. Synthesizes C4 Code-level documentation into Component-level architecture, defining component boundaries,... | c4, component | c4, component, level, documentation, synthesizes, code, architecture, defining, boundaries, interfaces, relationships, creates |
| `c4-context` | Expert C4 Context-level documentation specialist. Creates high-level system context diagrams, documents personas, user journeys, system features, and externa... | c4 | c4, context, level, documentation, creates, high, diagrams, documents, personas, user, journeys, features |
| `calendly-automation` | Automate Calendly scheduling, event management, invitee tracking, availability checks, and organization administration via Rube MCP (Composio). Always search... | calendly | calendly, automation, automate, scheduling, event, invitee, tracking, availability, checks, organization, administration, via |
| `code-refactoring-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | code, refactoring, refactor, clean | code, refactoring, refactor, clean, specializing, principles, solid, software, engineering, analyze, provided, improve |
| `codebase-cleanup-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | codebase, cleanup, refactor, clean | codebase, cleanup, refactor, clean, code, refactoring, specializing, principles, solid, software, engineering, analyze |
| `competitor-alternatives` | When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when the user mentions 'alternative page,' 'v... | competitor, alternatives | competitor, alternatives, user, wants, comparison, alternative, pages, seo, sales, enablement, mentions, page |
@@ -36,6 +39,7 @@ Total skills: 618
| `error-handling-patterns` | Master error handling patterns across languages including exceptions, Result types, error propagation, and graceful degradation to build resilient applicatio... | error, handling | error, handling, languages, including, exceptions, result, types, propagation, graceful, degradation, resilient, applications |
| `event-sourcing-architect` | Expert in event sourcing, CQRS, and event-driven architecture patterns. Masters event store design, projection building, saga orchestration, and eventual con... | event, sourcing | event, sourcing, architect, cqrs, driven, architecture, masters, store, projection, building, saga, orchestration |
| `event-store-design` | Design and implement event stores for event-sourced systems. Use when building event sourcing infrastructure, choosing event store technologies, or implement... | event, store | event, store, stores, sourced, building, sourcing, infrastructure, choosing, technologies, implementing, persistence |
| `game-development/multiplayer` | Multiplayer game development principles. Architecture, networking, synchronization. | game, development/multiplayer | game, development/multiplayer, multiplayer, development, principles, architecture, networking, synchronization |
| `godot-gdscript-patterns` | Master Godot 4 GDScript patterns including signals, scenes, state machines, and optimization. Use when building Godot games, implementing game systems, or le... | godot, gdscript | godot, gdscript, including, signals, scenes, state, machines, optimization, building, games, implementing, game |
| `haskell-pro` | Expert Haskell engineer specializing in advanced type systems, pure functional design, and high-reliability software. Use PROACTIVELY for type-level programm... | haskell | haskell, pro, engineer, specializing, type, pure, functional, high, reliability, software, proactively, level |
| `i18n-localization` | Internationalization and localization patterns. Detecting hardcoded strings, managing translations, locale files, RTL support. | i18n, localization | i18n, localization, internationalization, detecting, hardcoded, strings, managing, translations, locale, files, rtl |
@@ -53,6 +57,7 @@ Total skills: 618
| `production-code-audit` | Autonomously deep-scan entire codebase line-by-line, understand architecture and patterns, then systematically transform it to production-grade, corporate-le... | production, code, audit | production, code, audit, autonomously, deep, scan, entire, codebase, line, understand, architecture, then |
| `projection-patterns` | Build read models and projections from event streams. Use when implementing CQRS read sides, building materialized views, or optimizing query performance in ... | projection | projection, read, models, projections, event, streams, implementing, cqrs, sides, building, materialized, views |
| `prompt-engineering` | Expert guide on prompt engineering patterns, best practices, and optimization techniques. Use when user wants to improve prompts, learn prompting strategies,... | prompt, engineering | prompt, engineering, optimization, techniques, user, wants, improve, prompts, learn, prompting, debug, agent |
| `radix-ui-design-system` | Build accessible design systems with Radix UI primitives. Headless component customization, theming strategies, and compound component patterns for productio... | radix, ui | radix, ui, accessible, primitives, headless, component, customization, theming, compound, grade, libraries |
| `saga-orchestration` | Implement saga patterns for distributed transactions and cross-aggregate workflows. Use when coordinating multi-step business processes, handling compensatin... | saga | saga, orchestration, distributed, transactions, cross, aggregate, coordinating, multi, step, business, processes, handling |
| `salesforce-development` | Expert patterns for Salesforce platform development including Lightning Web Components (LWC), Apex triggers and classes, REST/Bulk APIs, Connected Apps, and ... | salesforce | salesforce, development, platform, including, lightning, web, components, lwc, apex, triggers, classes, rest |
| `skill-developer` | Create and manage Claude Code skills following Anthropic best practices. Use when creating new skills, modifying skill-rules.json, understanding trigger patt... | skill | skill, developer, claude, code, skills, following, anthropic, creating, new, modifying, rules, json |
@@ -63,11 +68,12 @@ Total skills: 618
| `tool-design` | Build tools that agents can use effectively, including architectural reduction patterns | | agents, effectively, including, architectural, reduction |
| `unreal-engine-cpp-pro` | Expert guide for Unreal Engine 5.x C++ development, covering UObject hygiene, performance patterns, and best practices. | unreal, engine, cpp | unreal, engine, cpp, pro, development, covering, uobject, hygiene, performance |
| `wcag-audit-patterns` | Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fi... | wcag, audit | wcag, audit, conduct, accessibility, audits, automated, testing, manual, verification, remediation, guidance, auditing |
| `wiki-architect` | Analyzes code repositories and generates hierarchical documentation structures with onboarding guides. Use when the user wants to create a wiki, generate doc... | wiki | wiki, architect, analyzes, code, repositories, generates, hierarchical, documentation, structures, onboarding, guides, user |
| `workflow-orchestration-patterns` | Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism cons... | | orchestration, durable, temporal, distributed, covers, vs, activity, separation, saga, state, determinism, constraints |
| `workflow-patterns` | Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding th... | | skill, implementing, tasks, according, conductor, tdd, handling, phase, checkpoints, managing, git, commits |
| `zapier-make-patterns` | No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code.... | zapier, make | zapier, make, no, code, automation, democratizes, building, formerly, integromat, let, non, developers |

## business (37)
## business (38)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -108,20 +114,97 @@ Total skills: 618
| `startup-business-analyst-market-opportunity` | Generate comprehensive market opportunity analysis with TAM/SAM/SOM calculations | startup, business, analyst, market, opportunity | startup, business, analyst, market, opportunity, generate, analysis, tam, sam, som, calculations |
| `startup-financial-modeling` | This skill should be used when the user asks to "create financial projections", "build a financial model", "forecast revenue", "calculate burn rate", "estima... | startup, financial, modeling | startup, financial, modeling, skill, should, used, user, asks, projections, model, forecast, revenue |
| `team-composition-analysis` | This skill should be used when the user asks to "plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equity a... | team, composition | team, composition, analysis, skill, should, used, user, asks, plan, structure, determine, hiring |
| `whatsapp-automation` | Automate WhatsApp Business tasks via Rube MCP (Composio): send messages, manage templates, upload media, and handle contacts. Always search tools first for c... | whatsapp | whatsapp, automation, automate, business, tasks, via, rube, mcp, composio, send, messages, upload |

## data-ai (93)
## data-ai (159)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `agent-framework-azure-ai-py` | Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgen... | agent, framework, azure, ai, py | agent, framework, azure, ai, py, foundry, agents, microsoft, python, sdk, creating, persistent |
| `agent-memory-mcp` | A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions). | agent, memory, mcp | agent, memory, mcp, hybrid, provides, persistent, searchable, knowledge, ai, agents, architecture, decisions |
| `agent-tool-builder` | Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently... | agent, builder | agent, builder, how, ai, agents, interact, world, well, designed, difference, between, works |
| `agents-v2-py` | Build container-based Foundry Agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating hosted agents that run custom code i... | agents, v2, py | agents, v2, py, container, foundry, azure, ai, sdk, imagebasedhostedagentdefinition, creating, hosted, run |
| `ai-agents-architect` | Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build ... | ai, agents | ai, agents, architect, designing, building, autonomous, masters, memory, planning, multi, agent, orchestration |
| `ai-engineer` | Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and ente... | ai | ai, engineer, llm, applications, rag, intelligent, agents, implements, vector, search, multimodal, agent |
| `ai-wrapper-product` | Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products t... | ai, wrapper, product | ai, wrapper, product, building, products, wrap, apis, openai, anthropic, etc, people, pay |
| `analytics-tracking` | Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data. Use when the user wants to set up, fix, or evaluate analyti... | analytics, tracking | analytics, tracking, audit, improve, produce, reliable, decision, data, user, wants, set, up |
| `angular-ui-patterns` | Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component ... | angular, ui | angular, ui, loading, states, error, handling, data, display, building, components, async, managing |
| `api-documenter` | Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build com... | api, documenter | api, documenter, documentation, openapi, ai, powered, developer, experience, interactive, docs, generate, sdks |
| `audio-transcriber` | Transform audio recordings into professional Markdown documentation with intelligent summaries using LLM integration | audio, transcription, whisper, meeting-minutes, speech-to-text | audio, transcription, whisper, meeting-minutes, speech-to-text, transcriber, transform, recordings, professional, markdown, documentation, intelligent |
| `autonomous-agent-patterns` | Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use ... | autonomous, agent | autonomous, agent, building, coding, agents, covers, integration, permission, browser, automation, human, loop |
| `autonomous-agents` | Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The c... | autonomous, agents | autonomous, agents, ai, independently, decompose, goals, plan, actions, execute, self, correct, without |
| `azure-ai-agents-persistent-dotnet` | Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Use for agent CRUD, conve... | azure, ai, agents, persistent, dotnet | azure, ai, agents, persistent, dotnet, sdk, net, low, level, creating, managing, threads |
| `azure-ai-agents-persistent-java` | Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Triggers: "PersistentAgen... | azure, ai, agents, persistent, java | azure, ai, agents, persistent, java, sdk, low, level, creating, managing, threads, messages |
| `azure-ai-contentsafety-java` | Build content moderation applications with Azure AI Content Safety SDK for Java. Use when implementing text/image analysis, blocklist management, or harm det... | azure, ai, contentsafety, java | azure, ai, contentsafety, java, content, moderation, applications, safety, sdk, implementing, text, image |
| `azure-ai-contentsafety-py` | Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification. Triggers: "azure-ai-contents... | azure, ai, contentsafety, py | azure, ai, contentsafety, py, content, safety, sdk, python, detecting, harmful, text, images |
| `azure-ai-contentsafety-ts` | Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detect... | azure, ai, contentsafety, ts | azure, ai, contentsafety, ts, analyze, text, images, harmful, content, safety, rest, moderating |
| `azure-ai-contentunderstanding-py` | Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video. Triggers: "azure-ai-contentund... | azure, ai, contentunderstanding, py | azure, ai, contentunderstanding, py, content, understanding, sdk, python, multimodal, extraction, documents, images |
| `azure-ai-document-intelligence-dotnet` | Azure AI Document Intelligence SDK for .NET. Extract text, tables, and structured data from documents using prebuilt and custom models. Use for invoice proce... | azure, ai, document, intelligence, dotnet | azure, ai, document, intelligence, dotnet, sdk, net, extract, text, tables, structured, data |
| `azure-ai-document-intelligence-ts` | Extract text, tables, and structured data from documents using Azure Document Intelligence (@azure-rest/ai-document-intelligence). Use when processing invoic... | azure, ai, document, intelligence, ts | azure, ai, document, intelligence, ts, extract, text, tables, structured, data, documents, rest |
| `azure-ai-formrecognizer-java` | Build document analysis applications with Azure Document Intelligence (Form Recognizer) SDK for Java. Use when extracting text, tables, key-value pairs from ... | azure, ai, formrecognizer, java | azure, ai, formrecognizer, java, document, analysis, applications, intelligence, form, recognizer, sdk, extracting |
| `azure-ai-ml-py` | Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines. Triggers: "azure-ai-ml", "MLClient", "worksp... | azure, ai, ml, py | azure, ai, ml, py, machine, learning, sdk, v2, python, workspaces, jobs, models |
| `azure-ai-openai-dotnet` | Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services. Use for chat completions, embeddings, image generation, audio transcription, ... | azure, ai, openai, dotnet | azure, ai, openai, dotnet, sdk, net, client, library, chat, completions, embeddings, image |
| `azure-ai-projects-dotnet` | Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexe... | azure, ai, dotnet | azure, ai, dotnet, sdk, net, high, level, client, foundry, including, agents, connections |
| `azure-ai-projects-java` | Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations. Triggers: "... | azure, ai, java | azure, ai, java, sdk, high, level, foundry, including, connections, datasets, indexes, evaluations |
| `azure-ai-projects-py` | Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents wi... | azure, ai, py | azure, ai, py, applications, python, sdk, working, foundry, clients, creating, versioned, agents |
| `azure-ai-projects-ts` | Build AI applications using Azure AI Projects SDK for JavaScript (@azure/ai-projects). Use when working with Foundry project clients, agents, connections, de... | azure, ai, ts | azure, ai, ts, applications, sdk, javascript, working, foundry, clients, agents, connections, deployments |
| `azure-ai-textanalytics-py` | Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language pr... | azure, ai, textanalytics, py | azure, ai, textanalytics, py, text, analytics, sdk, sentiment, analysis, entity, recognition, key |
| `azure-ai-transcription-py` | Azure AI Transcription SDK for Python. Use for real-time and batch speech-to-text transcription with timestamps and diarization. Triggers: "transcription", "... | azure, ai, transcription, py | azure, ai, transcription, py, sdk, python, real, time, batch, speech, text, timestamps |
| `azure-ai-translation-document-py` | Azure AI Document Translation SDK for batch translation of documents with format preservation. Use for translating Word, PDF, Excel, PowerPoint, and other do... | azure, ai, translation, document, py | azure, ai, translation, document, py, sdk, batch, documents, format, preservation, translating, word |
| `azure-ai-translation-text-py` | Azure AI Text Translation SDK for real-time text translation, transliteration, language detection, and dictionary lookup. Use for translating text content in... | azure, ai, translation, text, py | azure, ai, translation, text, py, sdk, real, time, transliteration, language, detection, dictionary |
| `azure-ai-translation-ts` | Build translation applications using Azure Translation SDKs for JavaScript (@azure-rest/ai-translation-text, @azure-rest/ai-translation-document). Use when i... | azure, ai, translation, ts | azure, ai, translation, ts, applications, sdks, javascript, rest, text, document, implementing, transliteration |
| `azure-ai-vision-imageanalysis-java` | Build image analysis applications with Azure AI Vision SDK for Java. Use when implementing image captioning, OCR text extraction, object detection, tagging, ... | azure, ai, vision, imageanalysis, java | azure, ai, vision, imageanalysis, java, image, analysis, applications, sdk, implementing, captioning, ocr |
| `azure-ai-vision-imageanalysis-py` | Azure AI Vision Image Analysis SDK for captions, tags, objects, OCR, people detection, and smart cropping. Use for computer vision and image understanding ta... | azure, ai, vision, imageanalysis, py | azure, ai, vision, imageanalysis, py, image, analysis, sdk, captions, tags, objects, ocr |
| `azure-ai-voicelive-dotnet` | Azure AI Voice Live SDK for .NET. Build real-time voice AI applications with bidirectional WebSocket communication. Use for voice assistants, conversational ... | azure, ai, voicelive, dotnet | azure, ai, voicelive, dotnet, voice, live, sdk, net, real, time, applications, bidirectional |
| `azure-ai-voicelive-java` | Azure AI VoiceLive SDK for Java. Real-time bidirectional voice conversations with AI assistants using WebSocket. Triggers: "VoiceLiveClient java", "voice ass... | azure, ai, voicelive, java | azure, ai, voicelive, java, sdk, real, time, bidirectional, voice, conversations, assistants, websocket |
| `azure-ai-voicelive-py` | Build real-time voice AI applications using Azure AI Voice Live SDK (azure-ai-voicelive). Use this skill when creating Python applications that need real-tim... | azure, ai, voicelive, py | azure, ai, voicelive, py, real, time, voice, applications, live, sdk, skill, creating |
| `azure-ai-voicelive-ts` | Azure AI Voice Live SDK for JavaScript/TypeScript. Build real-time voice AI applications with bidirectional WebSocket communication. Use for voice assistants... | azure, ai, voicelive, ts | azure, ai, voicelive, ts, voice, live, sdk, javascript, typescript, real, time, applications |
| `azure-communication-callautomation-java` | Build call automation workflows with Azure Communication Services Call Automation Java SDK. Use when implementing IVR systems, call routing, call recording, ... | azure, communication, callautomation, java | azure, communication, callautomation, java, call, automation, sdk, implementing, ivr, routing, recording, dtmf |
| `azure-cosmos-java` | Azure Cosmos DB SDK for Java. NoSQL database operations with global distribution, multi-model support, and reactive patterns. Triggers: "CosmosClient java", ... | azure, cosmos, java | azure, cosmos, java, db, sdk, nosql, database, operations, global, distribution, multi, model |
| `azure-cosmos-py` | Azure Cosmos DB SDK for Python (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data. Triggers: "cosmos db", "CosmosClient",... | azure, cosmos, py | azure, cosmos, py, db, sdk, python, nosql, api, document, crud, queries, containers |
| `azure-cosmos-rust` | Azure Cosmos DB SDK for Rust (NoSQL API). Use for document CRUD, queries, containers, and globally distributed data. Triggers: "cosmos db rust", "CosmosClien... | azure, cosmos, rust | azure, cosmos, rust, db, sdk, nosql, api, document, crud, queries, containers, globally |
| `azure-cosmos-ts` | Azure Cosmos DB JavaScript/TypeScript SDK (@azure/cosmos) for data plane operations. Use for CRUD operations on documents, queries, bulk operations, and cont... | azure, cosmos, ts | azure, cosmos, ts, db, javascript, typescript, sdk, data, plane, operations, crud, documents |
| `azure-data-tables-java` | Build table storage applications with Azure Tables SDK for Java. Use when working with Azure Table Storage or Cosmos DB Table API for NoSQL key-value data, s... | azure, data, tables, java | azure, data, tables, java, table, storage, applications, sdk, working, cosmos, db, api |
| `azure-data-tables-py` | Azure Tables SDK for Python (Storage and Cosmos DB). Use for NoSQL key-value storage, entity CRUD, and batch operations. Triggers: "table storage", "TableSer... | azure, data, tables, py | azure, data, tables, py, sdk, python, storage, cosmos, db, nosql, key, value |
| `azure-eventhub-dotnet` | Azure Event Hubs SDK for .NET. Use for high-throughput event streaming: sending events (EventHubProducerClient, EventHubBufferedProducerClient), receiving ev... | azure, eventhub, dotnet | azure, eventhub, dotnet, event, hubs, sdk, net, high, throughput, streaming, sending, events |
| `azure-eventhub-java` | Build real-time streaming applications with Azure Event Hubs SDK for Java. Use when implementing event streaming, high-throughput data ingestion, or building... | azure, eventhub, java | azure, eventhub, java, real, time, streaming, applications, event, hubs, sdk, implementing, high |
| `azure-eventhub-rust` | Azure Event Hubs SDK for Rust. Use for sending and receiving events, streaming data ingestion. Triggers: "event hubs rust", "ProducerClient rust", "ConsumerC... | azure, eventhub, rust | azure, eventhub, rust, event, hubs, sdk, sending, receiving, events, streaming, data, ingestion |
| `azure-eventhub-ts` | Build event streaming applications using Azure Event Hubs SDK for JavaScript (@azure/event-hubs). Use when implementing high-throughput event ingestion, real... | azure, eventhub, ts | azure, eventhub, ts, event, streaming, applications, hubs, sdk, javascript, implementing, high, throughput |
| `azure-maps-search-dotnet` | Azure Maps SDK for .NET. Location-based services including geocoding, routing, rendering, geolocation, and weather. Use for address search, directions, map t... | azure, maps, search, dotnet | azure, maps, search, dotnet, sdk, net, location, including, geocoding, routing, rendering, geolocation |
| `azure-monitor-ingestion-java` | Azure Monitor Ingestion SDK for Java. Send custom logs to Azure Monitor via Data Collection Rules (DCR) and Data Collection Endpoints (DCE). Triggers: "LogsI... | azure, monitor, ingestion, java | azure, monitor, ingestion, java, sdk, send, custom, logs, via, data, collection, rules |
| `azure-monitor-ingestion-py` | Azure Monitor Ingestion SDK for Python. Use for sending custom logs to Log Analytics workspace via Logs Ingestion API. Triggers: "azure-monitor-ingestion", "... | azure, monitor, ingestion, py | azure, monitor, ingestion, py, sdk, python, sending, custom, logs, log, analytics, workspace |
| `azure-monitor-query-java` | Azure Monitor Query SDK for Java. Execute Kusto queries against Log Analytics workspaces and query metrics from Azure resources. Triggers: "LogsQueryClient j... | azure, monitor, query, java | azure, monitor, query, java, sdk, execute, kusto, queries, against, log, analytics, workspaces |
| `azure-monitor-query-py` | Azure Monitor Query SDK for Python. Use for querying Log Analytics workspaces and Azure Monitor metrics. Triggers: "azure-monitor-query", "LogsQueryClient", ... | azure, monitor, query, py | azure, monitor, query, py, sdk, python, querying, log, analytics, workspaces, metrics, triggers |
| `azure-postgres-ts` | Connect to Azure Database for PostgreSQL Flexible Server from Node.js/TypeScript using the pg (node-postgres) package. Use for PostgreSQL queries, connection... | azure, postgres, ts | azure, postgres, ts, connect, database, postgresql, flexible, server, node, js, typescript, pg |
| `azure-resource-manager-cosmosdb-dotnet` | Azure Resource Manager SDK for Cosmos DB in .NET. Use for MANAGEMENT PLANE operations: creating/managing Cosmos DB accounts, databases, containers, throughpu... | azure, resource, manager, cosmosdb, dotnet | azure, resource, manager, cosmosdb, dotnet, sdk, cosmos, db, net, plane, operations, creating |
| `azure-resource-manager-mysql-dotnet` | Azure MySQL Flexible Server SDK for .NET. Database management for MySQL Flexible Server deployments. Use for creating servers, databases, firewall rules, con... | azure, resource, manager, mysql, dotnet | azure, resource, manager, mysql, dotnet, flexible, server, sdk, net, database, deployments, creating |
| `azure-resource-manager-postgresql-dotnet` | Azure PostgreSQL Flexible Server SDK for .NET. Database management for PostgreSQL Flexible Server deployments. Use for creating servers, databases, firewall ... | azure, resource, manager, postgresql, dotnet | azure, resource, manager, postgresql, dotnet, flexible, server, sdk, net, database, deployments, creating |
| `azure-resource-manager-redis-dotnet` | Azure Resource Manager SDK for Redis in .NET. Use for MANAGEMENT PLANE operations: creating/managing Azure Cache for Redis instances, firewall rules, access ... | azure, resource, manager, redis, dotnet | azure, resource, manager, redis, dotnet, sdk, net, plane, operations, creating, managing, cache |
| `azure-resource-manager-sql-dotnet` | Azure Resource Manager SDK for Azure SQL in .NET. Use for MANAGEMENT PLANE operations: creating/managing SQL servers, databases, elastic pools, firewall rule... | azure, resource, manager, sql, dotnet | azure, resource, manager, sql, dotnet, sdk, net, plane, operations, creating, managing, servers |
| `azure-search-documents-dotnet` | Azure AI Search SDK for .NET (Azure.Search.Documents). Use for building search applications with full-text, vector, semantic, and hybrid search. Covers Searc... | azure, search, documents, dotnet | azure, search, documents, dotnet, ai, sdk, net, building, applications, full, text, vector |
| `azure-search-documents-py` | Azure AI Search SDK for Python. Use for vector search, hybrid search, semantic ranking, indexing, and skillsets. Triggers: "azure-search-documents", "SearchC... | azure, search, documents, py | azure, search, documents, py, ai, sdk, python, vector, hybrid, semantic, ranking, indexing |
| `azure-search-documents-ts` | Build search applications using Azure AI Search SDK for JavaScript (@azure/search-documents). Use when creating/managing indexes, implementing vector/hybrid ... | azure, search, documents, ts | azure, search, documents, ts, applications, ai, sdk, javascript, creating, managing, indexes, implementing |
| `azure-storage-blob-java` | Build blob storage applications with Azure Storage Blob SDK for Java. Use when uploading, downloading, or managing files in Azure Blob Storage, working with ... | azure, storage, blob, java | azure, storage, blob, java, applications, sdk, uploading, downloading, managing, files, working, containers |
| `azure-storage-file-datalake-py` | Azure Data Lake Storage Gen2 SDK for Python. Use for hierarchical file systems, big data analytics, and file/directory operations. Triggers: "data lake", "Da... | azure, storage, file, datalake, py | azure, storage, file, datalake, py, data, lake, gen2, sdk, python, hierarchical, big |
| `beautiful-prose` | Hard-edged writing style contract for timeless, forceful English prose without AI tics | beautiful, prose | beautiful, prose, hard, edged, writing, style, contract, timeless, forceful, english, without, ai |
| `behavioral-modes` | AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type. | behavioral, modes | behavioral, modes, ai, operational, brainstorm, debug, review, teach, ship, orchestrate, adapt, behavior |
| `blockrun` | Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "u... | blockrun | blockrun, user, capabilities, claude, lacks, image, generation, real, time, twitter, data, explicitly |
@@ -155,8 +238,13 @@ Total skills: 618
| `fal-workflow` | Generate workflow JSON files for chaining AI models | fal | fal, generate, json, files, chaining, ai, models |
| `fp-ts-react` | Practical patterns for using fp-ts with React - hooks, state, forms, data fetching. Use when building React apps with functional programming patterns. Works ... | fp, ts, react | fp, ts, react, practical, hooks, state, forms, data, fetching, building, apps, functional |
| `frontend-dev-guidelines` | Opinionated frontend development standards for modern React + TypeScript applications. Covers Suspense-first data fetching, lazy loading, feature-based archi... | frontend, dev, guidelines | frontend, dev, guidelines, opinionated, development, standards, react, typescript, applications, covers, suspense, first |
| `frontend-ui-dark-ts` | Build dark-themed React applications using Tailwind CSS with custom theming, glassmorphism effects, and Framer Motion animations. Use when creating dashboard... | frontend, ui, dark, ts | frontend, ui, dark, ts, themed, react, applications, tailwind, css, custom, theming, glassmorphism |
| `geo-fundamentals` | Generative Engine Optimization for AI search engines (ChatGPT, Claude, Perplexity). | geo, fundamentals | geo, fundamentals, generative, engine, optimization, ai, search, engines, chatgpt, claude, perplexity |
| `google-analytics-automation` | Automate Google Analytics tasks via Rube MCP (Composio): run reports, list accounts/properties, funnels, pivots, key events. Always search tools first for cu... | google, analytics | google, analytics, automation, automate, tasks, via, rube, mcp, composio, run, reports, list |
| `googlesheets-automation` | Automate Google Sheets operations (read, write, format, filter, manage spreadsheets) via Rube MCP (Composio). Read/write data, manage tabs, apply formatting,... | googlesheets | googlesheets, automation, automate, google, sheets, operations, read, write, format, filter, spreadsheets, via |
| `graphql` | GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful al... | graphql | graphql, gives, clients, exactly, data, no, less, one, endpoint, typed, schema, introspection |
| `hosted-agents-v2-py` | Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating container-based agents that run custom code in Azure ... | hosted, agents, v2, py | hosted, agents, v2, py, azure, ai, sdk, imagebasedhostedagentdefinition, creating, container, run, custom |
| `hybrid-search-implementation` | Combine vector and keyword search for improved retrieval. Use when implementing RAG systems, building search engines, or when neither approach alone provides... | hybrid, search | hybrid, search, combine, vector, keyword, improved, retrieval, implementing, rag, building, engines, neither |
| `ios-developer` | Develop native iOS applications with Swift/SwiftUI. Masters iOS 18, SwiftUI, UIKit integration, Core Data, networking, and App Store optimization. Use PROACT... | ios | ios, developer, develop, native, applications, swift, swiftui, masters, 18, uikit, integration, core |
| `langchain-architecture` | Design LLM applications using the LangChain framework with agents, memory, and tool integration patterns. Use when building LangChain applications, implement... | langchain, architecture | langchain, architecture, llm, applications, framework, agents, memory, integration, building, implementing, ai, creating |
@@ -165,19 +253,21 @@ Total skills: 618
| `llm-application-dev-langchain-agent` | You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph. | llm, application, dev, langchain, agent | llm, application, dev, langchain, agent, developer, specializing, grade, ai, langgraph |
| `llm-application-dev-prompt-optimize` | You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thoug... | llm, application, dev, prompt, optimize | llm, application, dev, prompt, optimize, engineer, specializing, crafting, effective, prompts, llms, through |
| `llm-evaluation` | Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performanc... | llm, evaluation | llm, evaluation, applications, automated, metrics, human, feedback, benchmarking, testing, performance, measuring, ai |
| `mailchimp-automation` | Automate Mailchimp email marketing including campaigns, audiences, subscribers, segments, and analytics via Rube MCP (Composio). Always search tools first fo... | mailchimp | mailchimp, automation, automate, email, marketing, including, campaigns, audiences, subscribers, segments, analytics, via |
| `nanobanana-ppt-skills` | AI-powered PPT generation with document analysis and styled images | nanobanana, ppt, skills | nanobanana, ppt, skills, ai, powered, generation, document, analysis, styled, images |
| `neon-postgres` | Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration Use when: neon database, serverless postgres, dat... | neon, postgres | neon, postgres, serverless, branching, connection, pooling, prisma, drizzle, integration, database |
| `nextjs-app-router-patterns` | Master Next.js 14+ App Router with Server Components, streaming, parallel routes, and advanced data fetching. Use when building Next.js applications, impleme... | nextjs, app, router | nextjs, app, router, next, js, 14, server, components, streaming, parallel, routes, data |
| `nextjs-best-practices` | Next.js App Router principles. Server Components, data fetching, routing patterns. | nextjs, best, practices | nextjs, best, practices, next, js, app, router, principles, server, components, data, fetching |
| `nodejs-backend-patterns` | Build production-ready Node.js backend services with Express/Fastify, implementing middleware patterns, error handling, authentication, database integration,... | nodejs, backend | nodejs, backend, node, js, express, fastify, implementing, middleware, error, handling, authentication, database |
| `php-pro` | Write idiomatic PHP code with generators, iterators, SPL data structures, and modern OOP features. Use PROACTIVELY for high-performance PHP applications. | php | php, pro, write, idiomatic, code, generators, iterators, spl, data, structures, oop, features |
| `podcast-generation` | Generate AI-powered podcast-style audio narratives using Azure OpenAI's GPT Realtime Mini model via WebSocket. Use when building text-to-speech features, aud... | podcast, generation | podcast, generation, generate, ai, powered, style, audio, narratives, azure, openai, gpt, realtime |
| `postgres-best-practices` | Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, o... | postgres, best, practices | postgres, best, practices, supabase, performance, optimization, skill, writing, reviewing, optimizing, queries, schema |
| `postgresql` | Design a PostgreSQL-specific schema. Covers best-practices, data types, indexing, constraints, performance patterns, and advanced features | postgresql | postgresql, specific, schema, covers, data, types, indexing, constraints, performance, features |
| `prisma-expert` | Prisma ORM expert for schema design, migrations, query optimization, relations modeling, and database operations. Use PROACTIVELY for Prisma schema issues, m... | prisma | prisma, orm, schema, migrations, query, optimization, relations, modeling, database, operations, proactively, issues |
| `programmatic-seo` | Design and evaluate programmatic SEO strategies for creating SEO-driven pages at scale using templates and structured data. Use when the user mentions progra... | programmatic, seo | programmatic, seo, evaluate, creating, driven, pages, scale, structured, data, user, mentions, directory |
| `prompt-caching` | Caching strategies for LLM prompts including Anthropic prompt caching, response caching, and CAG (Cache Augmented Generation) Use when: prompt caching, cache... | prompt, caching | prompt, caching, llm, prompts, including, anthropic, response, cag, cache, augmented, generation |
| `prompt-engineer` | Expert prompt engineer specializing in advanced prompting techniques, LLM optimization, and AI system design. Masters chain-of-thought, constitutional AI, an... | prompt | prompt, engineer, specializing, prompting, techniques, llm, optimization, ai, masters, chain, thought, constitutional |
| `prompt-engineering-patterns` | Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, impro... | prompt, engineering | prompt, engineering, techniques, maximize, llm, performance, reliability, controllability, optimizing, prompts, improving, outputs |
| `pydantic-models-py` | Create Pydantic models following the multi-model pattern with Base, Create, Update, Response, and InDB variants. Use when defining API request/response schem... | pydantic, models, py | pydantic, models, py, following, multi, model, base, update, response, indb, variants, defining |
| `rag-engineer` | Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LL... | rag | rag, engineer, building, retrieval, augmented, generation, masters, embedding, models, vector, databases, chunking |
| `rag-implementation` | Build Retrieval-Augmented Generation (RAG) systems for LLM applications with vector databases and semantic search. Use when implementing knowledge-grounded A... | rag | rag, retrieval, augmented, generation, llm, applications, vector, databases, semantic, search, implementing, knowledge |
| `react-best-practices` | React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.j... | react, best, practices | react, best, practices, vercel, next, js, performance, optimization, guidelines, engineering, skill, should |
@@ -185,14 +275,17 @@ Total skills: 618
| `scala-pro` | Master enterprise-grade Scala development with functional programming, distributed systems, and big data processing. Expert in Apache Pekko, Akka, Spark, ZIO... | scala | scala, pro, enterprise, grade, development, functional, programming, distributed, big, data, processing, apache |
|
||||
| `schema-markup` | Design, validate, and optimize schema.org structured data for eligibility, correctness, and measurable SEO impact. Use when the user wants to add, fix, audit... | schema, markup | schema, markup, validate, optimize, org, structured, data, eligibility, correctness, measurable, seo, impact |
|
||||
| `segment-cdp` | Expert patterns for Segment Customer Data Platform including Analytics.js, server-side tracking, tracking plans with Protocols, identity resolution, destinat... | segment, cdp | segment, cdp, customer, data, platform, including, analytics, js, server, side, tracking, plans |
|
||||
| `sendgrid-automation` | Automate SendGrid email operations including sending emails, managing contacts/lists, sender identities, templates, and analytics via Rube MCP (Composio). Al... | sendgrid | sendgrid, automation, automate, email, operations, including, sending, emails, managing, contacts, lists, sender |
|
||||
| `senior-architect` | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, F... | senior | senior, architect, software, architecture, skill, designing, scalable, maintainable, reactjs, nextjs, nodejs, express |
|
||||
| `seo-audit` | Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance. Use when the user asks for an SEO audit, technical SEO r... | seo, audit | seo, audit, diagnose, issues, affecting, crawlability, indexation, rankings, organic, performance, user, asks |
|
||||
| `similarity-search-patterns` | Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieva... | similarity, search | similarity, search, efficient, vector, databases, building, semantic, implementing, nearest, neighbor, queries, optimizing |
|
||||
| `skill-creator-ms` | Guide for creating effective skills for AI coding agents working with Azure SDKs and Microsoft Foundry services. Use when creating new skills or updating exi... | skill, creator, ms | skill, creator, ms, creating, effective, skills, ai, coding, agents, working, azure, sdks |
|
||||
| `skill-seekers` | Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes. | skill, seekers | skill, seekers, automatically, convert, documentation, websites, github, repositories, pdfs, claude, ai, skills |
| `spark-optimization` | Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or... | spark, optimization | spark, optimization, optimize, apache, jobs, partitioning, caching, shuffle, memory, tuning, improving, performance |
| `sql-optimization-patterns` | Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when de... | sql, optimization | sql, optimization, query, indexing, explain, analysis, dramatically, improve, database, performance, eliminate, slow |
| `sqlmap-database-pentesting` | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap,... | sqlmap, database, pentesting | sqlmap, database, pentesting, penetration, testing, skill, should, used, user, asks, automate, sql |
| `stitch-ui-design` | Expert guide for creating effective prompts for Google Stitch AI UI design tool. Use when user wants to design UI/UX in Stitch, create app interfaces, genera... | stitch, ui | stitch, ui, creating, effective, prompts, google, ai, user, wants, ux, app, interfaces |
| `supabase-automation` | Automate Supabase database queries, table management, project administration, storage, edge functions, and SQL execution via Rube MCP (Composio). Always sear... | supabase | supabase, automation, automate, database, queries, table, administration, storage, edge, functions, sql, execution |
| `tdd-orchestrator` | Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices... | tdd, orchestrator | tdd, orchestrator, specializing, red, green, refactor, discipline, multi, agent, coordination, test, driven |
| `team-collaboration-standup-notes` | You are an expert team communication specialist focused on async-first standup practices, AI-assisted note generation from commit history, and effective remo... | team, collaboration, standup, notes | team, collaboration, standup, notes, communication, async, first, ai, assisted, note, generation, commit |
| `telegram-bot-builder` | Expert in building Telegram bots that solve real problems - from simple automation to complex AI-powered bots. Covers bot architecture, the Telegram Bot API,... | telegram, bot, builder | telegram, bot, builder, building, bots, solve, real, problems, simple, automation, complex, ai |
@@ -204,10 +297,10 @@ Total skills: 618
| `voice-ai-development` | Expert in building voice AI applications - from real-time voice agents to voice-enabled apps. Covers OpenAI Realtime API, Vapi for voice agents, Deepgram for... | voice, ai | voice, ai, development, building, applications, real, time, agents, enabled, apps, covers, openai |
| `voice-ai-engine-development` | Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling ... | voice, ai, engine | voice, ai, engine, development, real, time, conversational, engines, async, worker, pipelines, streaming |
| `web-artifacts-builder` | Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use ... | web, artifacts, builder | web, artifacts, builder, suite, creating, elaborate, multi, component, claude, ai, html, frontend |
| `xlsx` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx | xlsx, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work, spreadsheets |
| `xlsx-official` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx, official | xlsx, official, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work |
| `youtube-automation` | Automate YouTube tasks via Rube MCP (Composio): upload videos, manage playlists, search content, get analytics, and handle comments. Always search tools firs... | youtube | youtube, automation, automate, tasks, via, rube, mcp, composio, upload, videos, playlists, search |

## development (80)
## development (127)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -219,19 +312,74 @@ Total skills: 618
| `app-store-optimization` | Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store | app, store, optimization | app, store, optimization, complete, aso, toolkit, researching, optimizing, tracking, mobile, performance, apple |
| `architecture-patterns` | Implement proven backend architecture patterns including Clean Architecture, Hexagonal Architecture, and Domain-Driven Design. Use when architecting complex ... | architecture | architecture, proven, backend, including, clean, hexagonal, domain, driven, architecting, complex, refactoring, existing |
| `async-python-patterns` | Master Python asyncio, concurrent programming, and async/await patterns for high-performance applications. Use when building async APIs, concurrent systems, ... | async, python | async, python, asyncio, concurrent, programming, await, high, performance, applications, building, apis, bound |
| `azure-appconfiguration-java` | Azure App Configuration SDK for Java. Centralized application configuration management with key-value settings, feature flags, and snapshots. Triggers: "Conf... | azure, appconfiguration, java | azure, appconfiguration, java, app, configuration, sdk, centralized, application, key, value, settings, feature |
| `azure-appconfiguration-py` | Azure App Configuration SDK for Python. Use for centralized configuration management, feature flags, and dynamic settings. Triggers: "azure-appconfiguration"... | azure, appconfiguration, py | azure, appconfiguration, py, app, configuration, sdk, python, centralized, feature, flags, dynamic, settings |
| `azure-appconfiguration-ts` | Build applications using Azure App Configuration SDK for JavaScript (@azure/app-configuration). Use when working with configuration settings, feature flags, ... | azure, appconfiguration, ts | azure, appconfiguration, ts, applications, app, configuration, sdk, javascript, working, settings, feature, flags |
| `azure-communication-callingserver-java` | Azure Communication Services CallingServer (legacy) Java SDK. Note - This SDK is deprecated. Use azure-communication-callautomation instead for new projects.... | azure, communication, callingserver, java | azure, communication, callingserver, java, legacy, sdk, note, deprecated, callautomation, instead, new, skill |
| `azure-communication-chat-java` | Build real-time chat applications with Azure Communication Services Chat Java SDK. Use when implementing chat threads, messaging, participants, read receipts... | azure, communication, chat, java | azure, communication, chat, java, real, time, applications, sdk, implementing, threads, messaging, participants |
| `azure-communication-common-java` | Azure Communication Services common utilities for Java. Use when working with CommunicationTokenCredential, user identifiers, token refresh, or shared authen... | azure, communication, common, java | azure, communication, common, java, utilities, working, communicationtokencredential, user, identifiers, token, refresh, shared |
| `azure-communication-sms-java` | Send SMS messages with Azure Communication Services SMS Java SDK. Use when implementing SMS notifications, alerts, OTP delivery, bulk messaging, or delivery ... | azure, communication, sms, java | azure, communication, sms, java, send, messages, sdk, implementing, notifications, alerts, otp, delivery |
| `azure-compute-batch-java` | Azure Batch SDK for Java. Run large-scale parallel and HPC batch jobs with pools, jobs, tasks, and compute nodes. Triggers: "BatchClient java", "azure batch ... | azure, compute, batch, java | azure, compute, batch, java, sdk, run, large, scale, parallel, hpc, jobs, pools |
| `azure-containerregistry-py` | Azure Container Registry SDK for Python. Use for managing container images, artifacts, and repositories. Triggers: "azure-containerregistry", "ContainerRegis... | azure, containerregistry, py | azure, containerregistry, py, container, registry, sdk, python, managing, images, artifacts, repositories, triggers |
| `azure-eventgrid-dotnet` | Azure Event Grid SDK for .NET. Client library for publishing and consuming events with Azure Event Grid. Use for event-driven architectures, pub/sub messagin... | azure, eventgrid, dotnet | azure, eventgrid, dotnet, event, grid, sdk, net, client, library, publishing, consuming, events |
| `azure-eventgrid-java` | Build event-driven applications with Azure Event Grid SDK for Java. Use when publishing events, implementing pub/sub patterns, or integrating with Azure serv... | azure, eventgrid, java | azure, eventgrid, java, event, driven, applications, grid, sdk, publishing, events, implementing, pub |
| `azure-eventgrid-py` | Azure Event Grid SDK for Python. Use for publishing events, handling CloudEvents, and event-driven architectures. Triggers: "event grid", "EventGridPublisher... | azure, eventgrid, py | azure, eventgrid, py, event, grid, sdk, python, publishing, events, handling, cloudevents, driven |
| `azure-eventhub-py` | Azure Event Hubs SDK for Python streaming. Use for high-throughput event ingestion, producers, consumers, and checkpointing. Triggers: "event hubs", "EventHu... | azure, eventhub, py | azure, eventhub, py, event, hubs, sdk, python, streaming, high, throughput, ingestion, producers |
| `azure-functions` | Expert patterns for Azure Functions development including isolated worker model, Durable Functions orchestration, cold start optimization, and production pat... | azure, functions | azure, functions, development, including, isolated, worker, model, durable, orchestration, cold, start, optimization |
| `azure-identity-rust` | Azure Identity SDK for Rust authentication. Use for DeveloperToolsCredential, ManagedIdentityCredential, ClientSecretCredential, and token-based authenticati... | azure, identity, rust | azure, identity, rust, sdk, authentication, developertoolscredential, managedidentitycredential, clientsecretcredential, token, triggers, managed, credential |
| `azure-keyvault-certificates-rust` | Azure Key Vault Certificates SDK for Rust. Use for creating, importing, and managing certificates. Triggers: "keyvault certificates rust", "CertificateClient... | azure, keyvault, certificates, rust | azure, keyvault, certificates, rust, key, vault, sdk, creating, importing, managing, triggers, certificateclient |
| `azure-keyvault-keys-rust` | Azure Key Vault Keys SDK for Rust. Use for creating, managing, and using cryptographic keys. Triggers: "keyvault keys rust", "KeyClient rust", "create key ru... | azure, keyvault, keys, rust | azure, keyvault, keys, rust, key, vault, sdk, creating, managing, cryptographic, triggers, keyclient |
| `azure-keyvault-keys-ts` | Manage cryptographic keys using Azure Key Vault Keys SDK for JavaScript (@azure/keyvault-keys). Use when creating, encrypting/decrypting, signing, or rotatin... | azure, keyvault, keys, ts | azure, keyvault, keys, ts, cryptographic, key, vault, sdk, javascript, creating, encrypting, decrypting |
| `azure-messaging-webpubsub-java` | Build real-time web applications with Azure Web PubSub SDK for Java. Use when implementing WebSocket-based messaging, live updates, chat applications, or ser... | azure, messaging, webpubsub, java | azure, messaging, webpubsub, java, real, time, web, applications, pubsub, sdk, implementing, websocket |
| `azure-mgmt-apicenter-dotnet` | Azure API Center SDK for .NET. Centralized API inventory management with governance, versioning, and discovery. Use for creating API services, workspaces, AP... | azure, mgmt, apicenter, dotnet | azure, mgmt, apicenter, dotnet, api, center, sdk, net, centralized, inventory, governance, versioning |
| `azure-mgmt-apicenter-py` | Azure API Center Management SDK for Python. Use for managing API inventory, metadata, and governance across your organization. Triggers: "azure-mgmt-apicente... | azure, mgmt, apicenter, py | azure, mgmt, apicenter, py, api, center, sdk, python, managing, inventory, metadata, governance |
| `azure-mgmt-apimanagement-py` | Azure API Management SDK for Python. Use for managing APIM services, APIs, products, subscriptions, and policies. Triggers: "azure-mgmt-apimanagement", "ApiM... | azure, mgmt, apimanagement, py | azure, mgmt, apimanagement, py, api, sdk, python, managing, apim, apis, products, subscriptions |
| `azure-mgmt-fabric-dotnet` | Azure Resource Manager SDK for Fabric in .NET. Use for MANAGEMENT PLANE operations: provisioning, scaling, suspending/resuming Microsoft Fabric capacities, c... | azure, mgmt, fabric, dotnet | azure, mgmt, fabric, dotnet, resource, manager, sdk, net, plane, operations, provisioning, scaling |
| `azure-mgmt-fabric-py` | Azure Fabric Management SDK for Python. Use for managing Microsoft Fabric capacities and resources. Triggers: "azure-mgmt-fabric", "FabricMgmtClient", "Fabri... | azure, mgmt, fabric, py | azure, mgmt, fabric, py, sdk, python, managing, microsoft, capacities, resources, triggers, fabricmgmtclient |
| `azure-mgmt-mongodbatlas-dotnet` | Manage MongoDB Atlas Organizations as Azure ARM resources using Azure.ResourceManager.MongoDBAtlas SDK. Use when creating, updating, listing, or deleting Mon... | azure, mgmt, mongodbatlas, dotnet | azure, mgmt, mongodbatlas, dotnet, mongodb, atlas, organizations, arm, resources, resourcemanager, sdk, creating |
| `azure-monitor-opentelemetry-exporter-py` | Azure Monitor OpenTelemetry Exporter for Python. Use for low-level OpenTelemetry export to Application Insights. Triggers: "azure-monitor-opentelemetry-expor... | azure, monitor, opentelemetry, exporter, py | azure, monitor, opentelemetry, exporter, py, python, low, level, export, application, insights, triggers |
| `azure-monitor-opentelemetry-py` | Azure Monitor OpenTelemetry Distro for Python. Use for one-line Application Insights setup with auto-instrumentation. Triggers: "azure-monitor-opentelemetry"... | azure, monitor, opentelemetry, py | azure, monitor, opentelemetry, py, distro, python, one, line, application, insights, setup, auto |
| `azure-resource-manager-durabletask-dotnet` | Azure Resource Manager SDK for Durable Task Scheduler in .NET. Use for MANAGEMENT PLANE operations: creating/managing Durable Task Schedulers, Task Hubs, and... | azure, resource, manager, durabletask, dotnet | azure, resource, manager, durabletask, dotnet, sdk, durable, task, scheduler, net, plane, operations |
| `azure-resource-manager-playwright-dotnet` | Azure Resource Manager SDK for Microsoft Playwright Testing in .NET. Use for MANAGEMENT PLANE operations: creating/managing Playwright Testing workspaces, ch... | azure, resource, manager, playwright, dotnet | azure, resource, manager, playwright, dotnet, sdk, microsoft, testing, net, plane, operations, creating |
| `azure-speech-to-text-rest-py` | Azure Speech to Text REST API for short audio (Python). Use for simple speech recognition of audio files up to 60 seconds without the Speech SDK. Triggers: "... | azure, speech, to, text, rest, py | azure, speech, to, text, rest, py, api, short, audio, python, simple, recognition |
| `azure-storage-blob-py` | Azure Blob Storage SDK for Python. Use for uploading, downloading, listing blobs, managing containers, and blob lifecycle. Triggers: "blob storage", "BlobSer... | azure, storage, blob, py | azure, storage, blob, py, sdk, python, uploading, downloading, listing, blobs, managing, containers |
| `azure-storage-blob-rust` | Azure Blob Storage SDK for Rust. Use for uploading, downloading, and managing blobs and containers. Triggers: "blob storage rust", "BlobClient rust", "upload... | azure, storage, blob, rust | azure, storage, blob, rust, sdk, uploading, downloading, managing, blobs, containers, triggers, blobclient |
| `azure-storage-blob-ts` | Azure Blob Storage JavaScript/TypeScript SDK (@azure/storage-blob) for blob operations. Use for uploading, downloading, listing, and managing blobs and conta... | azure, storage, blob, ts | azure, storage, blob, ts, javascript, typescript, sdk, operations, uploading, downloading, listing, managing |
| `azure-storage-file-share-ts` | Azure File Share JavaScript/TypeScript SDK (@azure/storage-file-share) for SMB file share operations. Use for creating shares, managing directories, uploadin... | azure, storage, file, share, ts | azure, storage, file, share, ts, javascript, typescript, sdk, smb, operations, creating, shares |
| `azure-storage-queue-py` | Azure Queue Storage SDK for Python. Use for reliable message queuing, task distribution, and asynchronous processing. Triggers: "queue storage", "QueueServic... | azure, storage, queue, py | azure, storage, queue, py, sdk, python, reliable, message, queuing, task, distribution, asynchronous |
| `azure-storage-queue-ts` | Azure Queue Storage JavaScript/TypeScript SDK (@azure/storage-queue) for message queue operations. Use for sending, receiving, peeking, and deleting messages... | azure, storage, queue, ts | azure, storage, queue, ts, javascript, typescript, sdk, message, operations, sending, receiving, peeking |
| `azure-web-pubsub-ts` | Build real-time messaging applications using Azure Web PubSub SDKs for JavaScript (@azure/web-pubsub, @azure/web-pubsub-client). Use when implementing WebSoc... | azure, web, pubsub, ts | azure, web, pubsub, ts, real, time, messaging, applications, sdks, javascript, client, implementing |
| `backend-dev-guidelines` | Opinionated backend development standards for Node.js + Express + TypeScript microservices. Covers layered architecture, BaseController pattern, dependency i... | backend, dev, guidelines | backend, dev, guidelines, opinionated, development, standards, node, js, express, typescript, microservices, covers |
| `bullmq-specialist` | BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull que... | bullmq | bullmq, redis, backed, job, queues, background, processing, reliable, async, execution, node, js |
| `bun-development` | Modern JavaScript/TypeScript development with Bun runtime. Covers package management, bundling, testing, and migration from Node.js. Use when working with Bu... | bun | bun, development, javascript, typescript, runtime, covers, package, bundling, testing, migration, node, js |
| `cc-skill-coding-standards` | Universal coding standards, best practices, and patterns for TypeScript, JavaScript, React, and Node.js development. | cc, skill, coding, standards | cc, skill, coding, standards, universal, typescript, javascript, react, node, js, development |
| `cc-skill-frontend-patterns` | Frontend development patterns for React, Next.js, state management, performance optimization, and UI best practices. | cc, skill, frontend | cc, skill, frontend, development, react, next, js, state, performance, optimization, ui |
| `context7-auto-research` | Automatically fetch latest library/framework documentation for Claude Code via Context7 API | context7, auto, research | context7, auto, research, automatically, fetch, latest, library, framework, documentation, claude, code, via |
| `copilot-sdk` | Build applications powered by GitHub Copilot using the Copilot SDK. Use when creating programmatic integrations with Copilot across Node.js/TypeScript, Pytho... | copilot, sdk | copilot, sdk, applications, powered, github, creating, programmatic, integrations, node, js, typescript, python |
| `csharp-pro` | Write modern C# code with advanced features like records, pattern matching, and async/await. Optimizes .NET applications, implements enterprise patterns, and... | csharp | csharp, pro, write, code, features, like, records, matching, async, await, optimizes, net |
| `discord-bot-architect` | Specialized skill for building production-ready Discord bots. Covers Discord.js (JavaScript) and Pycord (Python), gateway intents, slash commands, interactiv... | discord, bot | discord, bot, architect, specialized, skill, building, bots, covers, js, javascript, pycord, python |
| `dotnet-architect` | Expert .NET backend architect specializing in C#, ASP.NET Core, Entity Framework, Dapper, and enterprise application patterns. Masters async/await, dependenc... | dotnet | dotnet, architect, net, backend, specializing, asp, core, entity, framework, dapper, enterprise, application |
| `dotnet-backend-patterns` | Master C#/.NET backend development patterns for building robust APIs, MCP servers, and enterprise applications. Covers async/await, dependency injection, Ent... | dotnet, backend | dotnet, backend, net, development, building, robust, apis, mcp, servers, enterprise, applications, covers |
| `exa-search` | Semantic search, similar content discovery, and structured research using Exa API | exa, search | exa, search, semantic, similar, content, discovery, structured, research, api |
| `fastapi-pro` | Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns. Use PROA... | fastapi | fastapi, pro, high, performance, async, apis, sqlalchemy, pydantic, v2, microservices, websockets, python |
| `fastapi-router-py` | Create FastAPI routers with CRUD operations, authentication dependencies, and proper response models. Use when building REST API endpoints, creating new rout... | fastapi, router, py | fastapi, router, py, routers, crud, operations, authentication, dependencies, proper, response, models, building |
| `fastapi-templates` | Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applicati... | fastapi | fastapi, async, dependency, injection, error, handling, building, new, applications, setting, up, backend |
| `firecrawl-scraper` | Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API | firecrawl, scraper | firecrawl, scraper, deep, web, scraping, screenshots, pdf, parsing, website, crawling, api |
| `fp-ts-errors` | Handle errors as values using fp-ts Either and TaskEither for cleaner, more predictable TypeScript code. Use when implementing error handling patterns with f... | fp, ts, errors | fp, ts, errors, handle, values, either, taskeither, cleaner, predictable, typescript, code, implementing |
@@ -240,7 +388,10 @@ Total skills: 618
| `frontend-developer` | Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture... | frontend | frontend, developer, react, components, responsive, layouts, handle, client, side, state, masters, 19 |
| `frontend-mobile-development-component-scaffold` | You are a React component architecture expert specializing in scaffolding production-ready, accessible, and performant components. Generate complete componen... | frontend, mobile, component | frontend, mobile, component, development, scaffold, react, architecture, specializing, scaffolding, accessible, performant, components |
| `frontend-slides` | Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a... | frontend, slides | frontend, slides, stunning, animation, rich, html, presentations, scratch, converting, powerpoint, files, user |
| `game-development/mobile-games` | Mobile game development principles. Touch input, battery, performance, app stores. | game, development/mobile, games | game, development/mobile, games, mobile, development, principles, touch, input, battery, performance, app, stores |
| `gemini-api-dev` | Use this skill when building applications with Gemini models, Gemini API, working with multimodal content (text, images, audio, video), implementing function... | gemini, api, dev | gemini, api, dev, skill, building, applications, models, working, multimodal, content, text, images |
| `go-concurrency-patterns` | Master Go concurrency with goroutines, channels, sync primitives, and context. Use when building concurrent Go applications, implementing worker pools, or de... | go, concurrency | go, concurrency, goroutines, channels, sync, primitives, context, building, concurrent, applications, implementing, worker |
| `go-playwright` | Expert capability for robust, stealthy, and efficient browser automation using Playwright Go. | go, playwright | go, playwright, capability, robust, stealthy, efficient, browser, automation |
| `golang-pro` | Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices. Expert in the latest Go ecosystem i... | golang | golang, pro, go, 21, concurrency, performance, optimization, microservices, latest, ecosystem, including, generics |
| `hubspot-integration` | Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers... | hubspot, integration | hubspot, integration, crm, including, oauth, authentication, objects, associations, batch, operations, webhooks, custom |
| `javascript-mastery` | Comprehensive JavaScript reference covering 33+ essential concepts every developer should know. From fundamentals like primitives and closures to advanced pa... | javascript, mastery | javascript, mastery, reference, covering, 33, essential, concepts, every, developer, should, know, fundamentals |
@@ -248,9 +399,12 @@ Total skills: 618
| `javascript-testing-patterns` | Implement comprehensive testing strategies using Jest, Vitest, and Testing Library for unit tests, integration tests, and end-to-end testing with mocking, fi... | javascript | javascript, testing, jest, vitest, library, unit, tests, integration, mocking, fixtures, test, driven |
| `javascript-typescript-typescript-scaffold` | You are a TypeScript project architecture expert specializing in scaffolding production-ready Node.js and frontend applications. Generate complete project st... | javascript, typescript | javascript, typescript, scaffold, architecture, specializing, scaffolding, node, js, frontend, applications, generate, complete |
| `launch-strategy` | When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature r... | launch | launch, user, wants, plan, product, feature, announcement, release, mentions, hunt, go, market |
| `m365-agents-ts` | Microsoft 365 Agents SDK for TypeScript/Node.js. Build multichannel agents for Teams/M365/Copilot Studio with AgentApplication routing, Express hosting, stre... | m365, agents, ts | m365, agents, ts, microsoft, 365, sdk, typescript, node, js, multichannel, teams, copilot |
| `makepad-skills` | Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting. | makepad, skills | makepad, skills, ui, development, rust, apps, setup, shaders, packaging, troubleshooting |
| `mcp-builder` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder | mcp, builder, creating, high, quality, model, context, protocol, servers, enable, llms, interact |
| `mcp-builder-ms` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder, ms | mcp, builder, ms, creating, high, quality, model, context, protocol, servers, enable, llms |
| `memory-safety-patterns` | Implement memory-safe programming with RAII, ownership, smart pointers, and resource management across Rust, C++, and C. Use when writing safe systems code, ... | memory, safety | memory, safety, safe, programming, raii, ownership, smart, pointers, resource, rust, writing, code |
| `microsoft-azure-webjobs-extensions-authentication-events-dotnet` | Microsoft Entra Authentication Events SDK for .NET. Azure Functions triggers for custom authentication extensions. Use for token enrichment, custom claims, a... | microsoft, azure, webjobs, extensions, authentication, events, dotnet | microsoft, azure, webjobs, extensions, authentication, events, dotnet, entra, sdk, net, functions, triggers |
| `mobile-design` | Mobile-first design and engineering doctrine for iOS and Android apps. Covers touch interaction, performance, platform conventions, offline behavior, and mob... | mobile | mobile, first, engineering, doctrine, ios, android, apps, covers, touch, interaction, performance, platform |
| `mobile-developer` | Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync... | mobile | mobile, developer, develop, react, native, flutter, apps, architecture, masters, cross, platform, development |
| `modern-javascript-patterns` | Master ES6+ features including async/await, destructuring, spread operators, arrow functions, promises, modules, iterators, generators, and functional progra... | modern, javascript | modern, javascript, es6, features, including, async, await, destructuring, spread, operators, arrow, functions |
@@ -265,6 +419,7 @@ Total skills: 618
| `python-performance-optimization` | Profile and optimize Python code using cProfile, memory profilers, and performance best practices. Use when debugging slow Python code, optimizing bottleneck... | python, performance, optimization | python, performance, optimization, profile, optimize, code, cprofile, memory, profilers, debugging, slow, optimizing |
| `python-pro` | Master Python 3.12+ with modern features, async programming, performance optimization, and production-ready practices. Expert in the latest Python ecosystem ... | python | python, pro, 12, features, async, programming, performance, optimization, latest, ecosystem, including, uv |
| `python-testing-patterns` | Implement comprehensive testing strategies with pytest, fixtures, mocking, and test-driven development. Use when writing Python tests, setting up test suites... | python | python, testing, pytest, fixtures, mocking, test, driven, development, writing, tests, setting, up |
| `react-flow-node-ts` | Create React Flow node components with TypeScript types, handles, and Zustand integration. Use when building custom nodes for React Flow canvas, creating vis... | react, flow, node, ts | react, flow, node, ts, components, typescript, types, zustand, integration, building, custom, nodes |
| `react-modernization` | Upgrade React applications to latest versions, migrate from class components to hooks, and adopt concurrent features. Use when modernizing React codebases, m... | react, modernization | react, modernization, upgrade, applications, latest, versions, migrate, class, components, hooks, adopt, concurrent |
| `react-native-architecture` | Build production React Native apps with Expo, navigation, native modules, offline sync, and cross-platform patterns. Use when developing mobile apps, impleme... | react, native, architecture | react, native, architecture, apps, expo, navigation, modules, offline, sync, cross, platform, developing |
| `react-patterns` | Modern React patterns and principles. Hooks, composition, performance, TypeScript best practices. | react | react, principles, hooks, composition, performance, typescript |
@@ -279,6 +434,7 @@ Total skills: 618
| `shopify-apps` | Expert patterns for Shopify app development including Remix/React Router apps, embedded apps with App Bridge, webhook handling, GraphQL Admin API, Polaris co... | shopify, apps | shopify, apps, app, development, including, remix, react, router, embedded, bridge, webhook, handling |
| `shopify-development` | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid. TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, development, apps, extensions, themes, graphql, admin, api, cli, polaris, ui, liquid |
| `slack-automation` | Automate Slack messaging, channel management, search, reactions, and threads via Rube MCP (Composio). Send messages, search conversations, manage channels/us... | slack | slack, automation, automate, messaging, channel, search, reactions, threads, via, rube, mcp, composio |
| `slack-bot-builder` | Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event h... | slack, bot, builder | slack, bot, builder, apps, bolt, framework, python, javascript, java, covers, block, kit |
| `swiftui-expert-skill` | Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, modern APIs, Swift concurrency, and iOS ... | swiftui, skill | swiftui, skill, write, review, improve, code, following, state, view, composition, performance, apis |
| `systems-programming-rust-project` | You are a Rust project architecture expert specializing in scaffolding production-ready Rust applications. Generate complete project structures with cargo to... | programming, rust | programming, rust, architecture, specializing, scaffolding, applications, generate, complete, structures, cargo, tooling, proper |
@@ -292,17 +448,20 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `uv-package-manager` | Master the uv package manager for fast Python dependency management, virtual environments, and modern Python project workflows. Use when setting up Python pr... | uv, package, manager | uv, package, manager, fast, python, dependency, virtual, environments, setting, up, managing, dependencies |
| `viral-generator-builder` | Expert in building shareable generator tools that go viral - name generators, quiz makers, avatar creators, personality tests, and calculator tools. Covers t... | viral, generator, builder | viral, generator, builder, building, shareable, go, name, generators, quiz, makers, avatar, creators |
| `webapp-testing` | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing... | webapp | webapp, testing, toolkit, interacting, local, web, applications, playwright, supports, verifying, frontend, functionality |
| `zustand-store-ts` | Create Zustand stores with TypeScript, subscribeWithSelector middleware, and proper state/action separation. Use when building React state management, creati... | zustand, store, ts | zustand, store, ts, stores, typescript, subscribewithselector, middleware, proper, state, action, separation, building |

## general (122)
## general (135)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `address-github-comments` | Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI. | address, github, comments | address, github, comments, review, issue, open, pull, request, gh, cli |
| `agent-manager-skill` | Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling. | agent, manager, skill | agent, manager, skill, multiple, local, cli, agents, via, tmux, sessions, start, stop |
| `algorithmic-art` | Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, gener... | algorithmic, art | algorithmic, art, creating, p5, js, seeded, randomness, interactive, parameter, exploration, users, request |
| `angular-best-practices` | Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and... | angular, best, practices | angular, best, practices, performance, optimization, writing, reviewing, refactoring, code, optimal, bundle, size |
| `angular-migration` | Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applicat... | angular, migration | angular, migration, migrate, angularjs, hybrid, mode, incremental, component, rewriting, dependency, injection, updates |
| `anti-reversing-techniques` | Understand anti-reversing, obfuscation, and protection techniques encountered during software analysis. Use when analyzing protected binaries, bypassing anti... | anti, reversing, techniques | anti, reversing, techniques, understand, obfuscation, protection, encountered, during, software, analysis, analyzing, protected |
| `app-builder` | Main application building orchestrator. Creates full-stack applications from natural language requests. Determines project type, selects tech stack, coordina... | app, builder | app, builder, main, application, building, orchestrator, creates, full, stack, applications, natural, language |
| `app-builder/templates` | Project scaffolding templates for new applications. Use when creating new projects from scratch. Contains 12 templates for various tech stacks. | app, builder/templates | app, builder/templates, scaffolding, new, applications, creating, scratch, contains, 12, various, tech, stacks |
| `arm-cortex-expert` | Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD). Decades of ... | arm, cortex | arm, cortex, senior, embedded, software, engineer, specializing, firmware, driver, development, microcontrollers, teensy |
| `avalonia-layout-zafiro` | Guidelines for modern Avalonia UI layout using Zafiro.Avalonia, emphasizing shared styles, generic components, and avoiding XAML redundancy. | avalonia, layout, zafiro | avalonia, layout, zafiro, guidelines, ui, emphasizing, shared, styles, generic, components, avoiding, xaml |
| `avalonia-zafiro-development` | Mandatory skills, conventions, and behavioral rules for Avalonia UI development using the Zafiro toolkit. | avalonia, zafiro | avalonia, zafiro, development, mandatory, skills, conventions, behavioral, rules, ui, toolkit |
@@ -322,7 +481,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `claude-scientific-skills` | Scientific research and analysis skills | claude, scientific, skills | claude, scientific, skills, research, analysis |
| `claude-speed-reader` | Speed read Claude's responses at 600+ WPM using RSVP with Spritz-style ORP highlighting | claude, speed, reader | claude, speed, reader, read, responses, 600, wpm, rsvp, spritz, style, orp, highlighting |
| `claude-win11-speckit-update-skill` | Windows 11 system management | claude, win11, speckit, update, skill | claude, win11, speckit, update, skill, windows, 11 |
| `clean-code` | Pragmatic coding standards - concise, direct, no over-engineering, no unnecessary comments | clean, code | clean, code, pragmatic, coding, standards, concise, direct, no, engineering, unnecessary, comments |
| `clean-code` | Applies principles from Robert C. Martin's 'Clean Code'. Use this skill when writing, reviewing, or refactoring code to ensure high quality, readability, and... | clean, code | clean, code, applies, principles, robert, martin, skill, writing, reviewing, refactoring, high, quality |
| `code-documentation-code-explain` | You are a code education expert specializing in explaining complex code through clear narratives, visual diagrams, and step-by-step breakdowns. Transform dif... | code, documentation, explain | code, documentation, explain, education, specializing, explaining, complex, through, clear, narratives, visual, diagrams |
| `code-refactoring-context-restore` | Use when working with code refactoring context restore | code, refactoring, restore | code, refactoring, restore, context, working |
| `code-refactoring-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | code, refactoring, tech, debt | code, refactoring, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, codebase |
@@ -331,6 +490,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `commit` | Create commit messages following Sentry conventions. Use when committing code changes, writing commit messages, or formatting git history. Follows convention... | commit | commit, messages, following, sentry, conventions, committing, code, changes, writing, formatting, git, history |
| `comprehensive-review-full-review` | Use when working with comprehensive review full review | comprehensive, full | comprehensive, full, review, working |
| `comprehensive-review-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | comprehensive, pr, enhance | comprehensive, pr, enhance, review, optimization, specializing, creating, high, quality, pull, requests, facilitate |
| `computer-vision-expert` | SOTA Computer Vision Expert (2026). Specialized in YOLO26, Segment Anything 3 (SAM 3), Vision Language Models, and real-time spatial analysis. | computer, vision | computer, vision, sota, 2026, specialized, yolo26, segment, anything, sam, language, models, real |
| `concise-planning` | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | concise, planning | concise, planning, user, asks, plan, coding, task, generate, clear, actionable, atomic, checklist |
| `context-compression` | Design and evaluate compression strategies for long-running sessions | compression | compression, context, evaluate, long, running, sessions |
| `context-fundamentals` | Understand what context is, why it matters, and the anatomy of context in agent systems | fundamentals | fundamentals, context, understand, what, why, matters, anatomy, agent |
@@ -344,7 +504,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `debugging-toolkit-smart-debug` | Use when working with debugging toolkit smart debug | debugging, debug | debugging, debug, toolkit, smart, working |
| `design-md` | Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files | md | md, analyze, stitch, synthesize, semantic, files |
| `dispatching-parallel-agents` | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies | dispatching, parallel, agents | dispatching, parallel, agents, facing, independent, tasks, worked, without, shared, state, sequential, dependencies |
| `docx` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx | docx, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text, extraction |
| `docx-official` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx, official | docx, official, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text |
| `dx-optimizer` | Developer Experience specialist. Improves tooling, setup, and workflows. Use PROACTIVELY when setting up new projects, after team feedback, or when developme... | dx, optimizer | dx, optimizer, developer, experience, improves, tooling, setup, proactively, setting, up, new, after |
| `environment-setup-guide` | Guide developers through setting up development environments with proper tools, dependencies, and configurations | environment, setup | environment, setup, developers, through, setting, up, development, environments, proper, dependencies, configurations |
@@ -359,9 +518,17 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `fix-review` | Verify fix commits address audit findings without new bugs | fix | fix, review, verify, commits, address, audit, findings, without, new, bugs |
| `framework-migration-code-migrate` | You are a code migration expert specializing in transitioning codebases between frameworks, languages, versions, and platforms. Generate comprehensive migrat... | framework, migration, code, migrate | framework, migration, code, migrate, specializing, transitioning, codebases, between, frameworks, languages, versions, platforms |
| `game-development` | Game development orchestrator. Routes to platform-specific skills based on project needs. | game | game, development, orchestrator, routes, platform, specific, skills |
| `game-development/2d-games` | 2D game development principles. Sprites, tilemaps, physics, camera. | game, development/2d, games | game, development/2d, games, 2d, development, principles, sprites, tilemaps, physics, camera |
| `game-development/3d-games` | 3D game development principles. Rendering, shaders, physics, cameras. | game, development/3d, games | game, development/3d, games, 3d, development, principles, rendering, shaders, physics, cameras |
| `game-development/game-audio` | Game audio principles. Sound design, music integration, adaptive audio systems. | game, development/game, audio | game, development/game, audio, principles, sound, music, integration, adaptive |
| `game-development/game-design` | Game design principles. GDD structure, balancing, player psychology, progression. | game, development/game | game, development/game, principles, gdd, structure, balancing, player, psychology, progression |
| `game-development/pc-games` | PC and console game development principles. Engine selection, platform features, optimization strategies. | game, development/pc, games | game, development/pc, games, pc, console, development, principles, engine, selection, platform, features, optimization |
| `game-development/vr-ar` | VR/AR development principles. Comfort, interaction, performance requirements. | game, development/vr, ar | game, development/vr, ar, vr, development, principles, comfort, interaction, performance, requirements |
| `game-development/web-games` | Web browser game development principles. Framework selection, WebGPU, optimization, PWA. | game, development/web, games | game, development/web, games, web, browser, development, principles, framework, selection, webgpu, optimization, pwa |
| `git-advanced-workflows` | Master advanced Git workflows including rebasing, cherry-picking, bisect, worktrees, and reflog to maintain clean history and recover from any situation. Use... | git, advanced | git, advanced, including, rebasing, cherry, picking, bisect, worktrees, reflog, maintain, clean, history |
| `git-pr-workflows-onboard` | You are an **expert onboarding specialist and knowledge transfer architect** with deep experience in remote-first organizations, technical team integration, ... | git, pr, onboard | git, pr, onboard, onboarding, knowledge, transfer, architect, deep, experience, remote, first, organizations |
| `git-pr-workflows-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | git, pr, enhance | git, pr, enhance, optimization, specializing, creating, high, quality, pull, requests, facilitate, efficient |
| `github-issue-creator` | Convert raw notes, error logs, voice dictation, or screenshots into crisp GitHub-flavored markdown issue reports. Use when the user pastes bug info, error me... | github, issue, creator | github, issue, creator, convert, raw, notes, error, logs, voice, dictation, screenshots, crisp |
| `imagen` | | imagen | imagen |
| `infinite-gratitude` | Multi-agent research skill for parallel research execution (10 agents, battle-tested with real case studies). | infinite, gratitude | infinite, gratitude, multi, agent, research, skill, parallel, execution, 10, agents, battle, tested |
| `interactive-portfolio` | Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, d... | interactive, portfolio | interactive, portfolio, building, portfolios, actually, land, jobs, clients, just, showing, work, creating |
@@ -378,6 +545,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `nosql-expert` | Expert guidance for distributed NoSQL databases (Cassandra, DynamoDB). Focuses on mental models, query-first modeling, single-table design, and avoiding hot ... | nosql | nosql, guidance, distributed, databases, cassandra, dynamodb, mental, models, query, first, modeling, single |
| `obsidian-clipper-template-creator` | Guide for creating templates for the Obsidian Web Clipper. Use when you want to create a new clipping template, understand available variables, or format cli... | obsidian, clipper, creator | obsidian, clipper, creator, creating, web, want, new, clipping, understand, available, variables, format |
| `onboarding-cro` | When the user wants to optimize post-signup onboarding, user activation, first-run experience, or time-to-value. Also use when the user mentions "onboarding ... | onboarding, cro | onboarding, cro, user, wants, optimize, post, signup, activation, first, run, experience, time |
| `oss-hunter` | Automatically hunt for high-impact OSS contribution opportunities in trending repositories. | oss, hunter | oss, hunter, automatically, hunt, high, impact, contribution, opportunities, trending, repositories |
| `paid-ads` | When the user wants help with paid advertising campaigns on Google Ads, Meta (Facebook/Instagram), LinkedIn, Twitter/X, or other ad platforms. Also use when ... | paid, ads | paid, ads, user, wants, advertising, campaigns, google, meta, facebook, instagram, linkedin, twitter |
| `paypal-integration` | Integrate PayPal payment processing with support for express checkout, subscriptions, and refund management. Use when implementing PayPal payments, processin... | paypal, integration | paypal, integration, integrate, payment, processing, express, checkout, subscriptions, refund, implementing, payments, online |
| `performance-profiling` | Performance profiling principles. Measurement, analysis, and optimization techniques. | performance, profiling | performance, profiling, principles, measurement, analysis, optimization, techniques |
@@ -385,11 +553,10 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
|
||||
| `plan-writing` | Structured task planning with clear breakdowns, dependencies, and verification criteria. Use when implementing features, refactoring, or any multi-step work. | plan, writing | plan, writing, structured, task, planning, clear, breakdowns, dependencies, verification, criteria, implementing, features |
|
||||
| `planning-with-files` | Implements Manus-style file-based planning for complex tasks. Creates task_plan.md, findings.md, and progress.md. Use when starting complex multi-step tasks,... | planning, with, files | planning, with, files, implements, manus, style, file, complex, tasks, creates, task, plan |
|
||||
| `posix-shell-pro` | Expert in strict POSIX sh scripting for maximum portability across Unix-like systems. Specializes in shell scripts that run on any POSIX-compliant shell (das... | posix, shell | posix, shell, pro, strict, sh, scripting, maximum, portability, unix, like, specializes, scripts |
|
||||
| `pptx` | Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying o... | pptx | pptx, presentation, creation, editing, analysis, claude, work, presentations, files, creating, new, modifying |
|
||||
| `pptx-official` | Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying o... | pptx, official | pptx, official, presentation, creation, editing, analysis, claude, work, presentations, files, creating, new |
|
||||
| `privilege-escalation-methods` | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploi... | privilege, escalation, methods | privilege, escalation, methods, skill, should, used, user, asks, escalate, privileges, get, root |
| `prompt-engineer` | Transforms user prompts into optimized prompts using frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW) | prompt-engineering, optimization, frameworks, ai-enhancement | prompt-engineering, optimization, frameworks, ai-enhancement, prompt, engineer, transforms, user, prompts, optimized, rtf, risen |
| `prompt-library` | Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use... | prompt, library | prompt, library, curated, collection, high, quality, prompts, various, cases, includes, role, task |
| `readme` | When the user wants to create or update a README.md file for a project. Also use when the user says | readme | readme, user, wants, update, md, file, says |
| `receiving-code-review` | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technic... | receiving, code | receiving, code, review, feedback, before, implementing, suggestions, especially, seems, unclear, technically, questionable |
| `referral-program` | When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy. Also use when the user mentions 'referr... | referral, program | referral, program, user, wants, optimize, analyze, affiliate, word, mouth, mentions, ambassador, viral |
| `requesting-code-review` | Use when completing tasks, implementing major features, or before merging to verify work meets requirements | requesting, code | requesting, code, review, completing, tasks, implementing, major, features, before, merging, verify, work |
@@ -397,7 +564,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `sharp-edges` | Identify error-prone APIs and dangerous configurations | sharp, edges | sharp, edges, identify, error, prone, apis, dangerous, configurations |
| `shellcheck-configuration` | Master ShellCheck static analysis configuration and usage for shell script quality. Use when setting up linting infrastructure, fixing code issues, or ensuri... | shellcheck, configuration | shellcheck, configuration, static, analysis, usage, shell, script, quality, setting, up, linting, infrastructure |
| `signup-flow-cro` | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the user mentions "signup conversions," "reg... | signup, flow, cro | signup, flow, cro, user, wants, optimize, registration, account, creation, trial, activation, flows |
| `skill-creator` | Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capa... | skill, creator | skill, creator, creating, effective, skills, should, used, users, want, new, update, existing |
| `skill-rails-upgrade` | Analyze Rails apps and provide upgrade assessments | skill, rails, upgrade | skill, rails, upgrade, analyze, apps, provide, assessments |
| `slack-gif-creator` | Knowledge and utilities for creating animated GIFs optimized for Slack. Provides constraints, validation tools, and animation concepts. Use when users reques... | slack, gif, creator | slack, gif, creator, knowledge, utilities, creating, animated, gifs, optimized, provides, constraints, validation |
| `social-content` | When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram, TikTok, Facebook, or other platforms. A... | social, content | social, content, user, wants, creating, scheduling, optimizing, media, linkedin, twitter, instagram, tiktok |
@@ -415,12 +581,16 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `using-superpowers` | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions | using, superpowers | using, superpowers, starting, any, conversation, establishes, how, find, skills, requiring, skill, invocation |
| `verification-before-completion` | Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output... | verification, before, completion | verification, before, completion, about, claim, work, complete, fixed, passing, committing, creating, prs |
| `web-performance-optimization` | Optimize website and web application performance including loading speed, Core Web Vitals, bundle size, caching strategies, and runtime performance | web, performance, optimization | web, performance, optimization, optimize, website, application, including, loading, speed, core, vitals, bundle |
| `wiki-changelog` | Analyzes git commit history and generates structured changelogs categorized by change type. Use when the user asks about recent changes, wants a changelog, o... | wiki, changelog | wiki, changelog, analyzes, git, commit, history, generates, structured, changelogs, categorized, change, type |
| `wiki-page-writer` | Generates rich technical documentation pages with dark-mode Mermaid diagrams, source code citations, and first-principles depth. Use when writing documentati... | wiki, page, writer | wiki, page, writer, generates, rich, technical, documentation, pages, dark, mode, mermaid, diagrams |
| `wiki-vitepress` | Packages generated wiki Markdown into a VitePress static site with dark theme, dark-mode Mermaid diagrams with click-to-zoom, and production build output. Us... | wiki, vitepress | wiki, vitepress, packages, generated, markdown, static, site, dark, theme, mode, mermaid, diagrams |
| `windows-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Windows," "find Windows privesc vectors," "enumerate Windows for privilege escalation... | windows, privilege, escalation | windows, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc |
| `writing-plans` | Use when you have a spec or requirements for a multi-step task, before touching code | writing, plans | writing, plans, spec, requirements, multi, step, task, before, touching, code |
| `writing-skills` | Use when creating, updating, or improving agent skills. | writing, skills | writing, skills, creating, updating, improving, agent |
| `x-article-publisher-skill` | Publish articles to X/Twitter | x, article, publisher, skill | x, article, publisher, skill, publish, articles, twitter |
| `youtube-summarizer` | Extract transcripts from YouTube videos and generate comprehensive, detailed summaries using intelligent analysis frameworks | video, summarization, transcription, youtube, content-analysis | video, summarization, transcription, youtube, content-analysis, summarizer, extract, transcripts, videos, generate, detailed, summaries |

## infrastructure (77)
## infrastructure (102)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -430,11 +600,38 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `application-performance-performance-optimization` | Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across... | application, performance, optimization | application, performance, optimization, optimize, profiling, observability, backend, frontend, tuning, coordinating, stack |
| `aws-serverless` | Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns... | aws, serverless | aws, serverless, specialized, skill, building, applications, covers, lambda, functions, api, gateway, dynamodb |
| `aws-skills` | AWS development with infrastructure automation and cloud architecture patterns | aws, skills | aws, skills, development, infrastructure, automation, cloud, architecture |
| `azd-deployment` | Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration... | azd, deployment | azd, deployment, deploy, containerized, applications, azure, container, apps, developer, cli, setting, up |
| `azure-ai-anomalydetector-java` | Build anomaly detection applications with Azure AI Anomaly Detector SDK for Java. Use when implementing univariate/multivariate anomaly detection, time-serie... | azure, ai, anomalydetector, java | azure, ai, anomalydetector, java, anomaly, detection, applications, detector, sdk, implementing, univariate, multivariate |
| `azure-identity-java` | Azure Identity Java SDK for authentication with Azure services. Use when implementing DefaultAzureCredential, managed identity, service principal, or any Azu... | azure, identity, java | azure, identity, java, sdk, authentication, implementing, defaultazurecredential, managed, principal, any, applications |
| `azure-identity-py` | Azure Identity SDK for Python authentication. Use for DefaultAzureCredential, managed identity, service principals, and token caching. Triggers: "azure-ident... | azure, identity, py | azure, identity, py, sdk, python, authentication, defaultazurecredential, managed, principals, token, caching, triggers |
| `azure-identity-ts` | Authenticate to Azure services using Azure Identity SDK for JavaScript (@azure/identity). Use when configuring authentication with DefaultAzureCredential, ma... | azure, identity, ts | azure, identity, ts, authenticate, sdk, javascript, configuring, authentication, defaultazurecredential, managed, principals, interactive |
| `azure-messaging-webpubsubservice-py` | Azure Web PubSub Service SDK for Python. Use for real-time messaging, WebSocket connections, and pub/sub patterns. Triggers: "azure-messaging-webpubsubservic... | azure, messaging, webpubsubservice, py | azure, messaging, webpubsubservice, py, web, pubsub, sdk, python, real, time, websocket, connections |
| `azure-mgmt-apimanagement-dotnet` | Azure Resource Manager SDK for API Management in .NET. Use for MANAGEMENT PLANE operations: creating/managing APIM services, APIs, products, subscriptions, p... | azure, mgmt, apimanagement, dotnet | azure, mgmt, apimanagement, dotnet, resource, manager, sdk, api, net, plane, operations, creating |
| `azure-mgmt-applicationinsights-dotnet` | Azure Application Insights SDK for .NET. Application performance monitoring and observability resource management. Use for creating Application Insights comp... | azure, mgmt, applicationinsights, dotnet | azure, mgmt, applicationinsights, dotnet, application, insights, sdk, net, performance, monitoring, observability, resource |
| `azure-mgmt-arizeaiobservabilityeval-dotnet` | Azure Resource Manager SDK for Arize AI Observability and Evaluation (.NET). Use when managing Arize AI organizations on Azure via Azure Marketplace, creati... | azure, mgmt, arizeaiobservabilityeval, dotnet | azure, mgmt, arizeaiobservabilityeval, dotnet, resource, manager, sdk, arize, ai, observability, evaluation, net |
| `azure-mgmt-botservice-dotnet` | Azure Resource Manager SDK for Bot Service in .NET. Management plane operations for creating and managing Azure Bot resources, channels (Teams, DirectLine, S... | azure, mgmt, botservice, dotnet | azure, mgmt, botservice, dotnet, resource, manager, sdk, bot, net, plane, operations, creating |
| `azure-mgmt-botservice-py` | Azure Bot Service Management SDK for Python. Use for creating, managing, and configuring Azure Bot Service resources. Triggers: "azure-mgmt-botservice", "Azu... | azure, mgmt, botservice, py | azure, mgmt, botservice, py, bot, sdk, python, creating, managing, configuring, resources, triggers |
| `azure-mgmt-weightsandbiases-dotnet` | Azure Weights & Biases SDK for .NET. ML experiment tracking and model management via Azure Marketplace. Use for creating W&B instances, managing SSO, marketp... | azure, mgmt, weightsandbiases, dotnet | azure, mgmt, weightsandbiases, dotnet, weights, biases, sdk, net, ml, experiment, tracking, model |
| `azure-microsoft-playwright-testing-ts` | Run Playwright tests at scale using Azure Playwright Workspaces (formerly Microsoft Playwright Testing). Use when scaling browser tests across cloud-hosted b... | azure, microsoft, playwright, ts | azure, microsoft, playwright, ts, testing, run, tests, scale, workspaces, formerly, scaling, browser |
| `azure-monitor-opentelemetry-exporter-java` | Azure Monitor OpenTelemetry Exporter for Java. Export OpenTelemetry traces, metrics, and logs to Azure Monitor/Application Insights. Triggers: "AzureMonitorE... | azure, monitor, opentelemetry, exporter, java | azure, monitor, opentelemetry, exporter, java, export, traces, metrics, logs, application, insights, triggers |
| `azure-monitor-opentelemetry-ts` | Instrument applications with Azure Monitor and OpenTelemetry for JavaScript (@azure/monitor-opentelemetry). Use when adding distributed tracing, metrics, and... | azure, monitor, opentelemetry, ts | azure, monitor, opentelemetry, ts, instrument, applications, javascript, adding, distributed, tracing, metrics, logs |
| `azure-servicebus-dotnet` | Azure Service Bus SDK for .NET. Enterprise messaging with queues, topics, subscriptions, and sessions. Use for reliable message delivery, pub/sub patterns, d... | azure, servicebus, dotnet | azure, servicebus, dotnet, bus, sdk, net, enterprise, messaging, queues, topics, subscriptions, sessions |
| `azure-servicebus-py` | Azure Service Bus SDK for Python messaging. Use for queues, topics, subscriptions, and enterprise messaging patterns. Triggers: "service bus", "ServiceBusCli... | azure, servicebus, py | azure, servicebus, py, bus, sdk, python, messaging, queues, topics, subscriptions, enterprise, triggers |
| `azure-servicebus-ts` | Build messaging applications using Azure Service Bus SDK for JavaScript (@azure/service-bus). Use when implementing queues, topics/subscriptions, message ses... | azure, servicebus, ts | azure, servicebus, ts, messaging, applications, bus, sdk, javascript, implementing, queues, topics, subscriptions |
| `azure-storage-file-share-py` | Azure Storage File Share SDK for Python. Use for SMB file shares, directories, and file operations in the cloud. Triggers: "azure-storage-file-share", "Share... | azure, storage, file, share, py | azure, storage, file, share, py, sdk, python, smb, shares, directories, operations, cloud |
| `backend-architect` | Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems. Masters REST/GraphQL/gRPC APIs, event-driv... | backend | backend, architect, specializing, scalable, api, microservices, architecture, distributed, masters, rest, graphql, grpc |
| `backend-development-feature-development` | Orchestrate end-to-end backend feature development from requirements to deployment. Use when coordinating multi-phase feature delivery across teams and servi... | backend | backend, development, feature, orchestrate, requirements, deployment, coordinating, multi, phase, delivery, teams |
| `bash-defensive-patterns` | Master defensive Bash programming techniques for production-grade scripts. Use when writing robust shell scripts, CI/CD pipelines, or system utilities requir... | bash, defensive | bash, defensive, programming, techniques, grade, scripts, writing, robust, shell, ci, cd, pipelines |
| `bash-pro` | Master of defensive Bash scripting for production automation, CI/CD pipelines, and system utilities. Expert in safe, portable, and testable shell scripts. | bash | bash, pro, defensive, scripting, automation, ci, cd, pipelines, utilities, safe, portable, testable |
| `bats-testing-patterns` | Master Bash Automated Testing System (Bats) for comprehensive shell script testing. Use when writing tests for shell scripts, CI/CD pipelines, or requiring t... | bats | bats, testing, bash, automated, shell, script, writing, tests, scripts, ci, cd, pipelines |
| `box-automation` | Automate Box cloud storage operations including file upload/download, search, folder management, sharing, collaborations, and metadata queries via Rube MCP (... | box | box, automation, automate, cloud, storage, operations, including, file, upload, download, search, folder |
| `c4-container` | Expert C4 Container-level documentation specialist. Synthesizes Component-level documentation into Container-level architecture, mapping components to deploy... | c4, container | c4, container, level, documentation, synthesizes, component, architecture, mapping, components, deployment, units, documenting |
| `claude-d3js-skill` | Creating interactive data visualisations using d3.js. This skill should be used when creating custom charts, graphs, network diagrams, geographic visualisati... | claude, d3js, skill | claude, d3js, skill, d3, viz, creating, interactive, data, visualisations, js, should, used |
| `code-review-ai-ai-review` | You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Levera... | code, ai | code, ai, review, powered, combining, automated, static, analysis, intelligent, recognition, devops, leverage |
@@ -457,9 +654,12 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `expo-deployment` | Deploy Expo apps to production | expo, deployment | expo, deployment, deploy, apps |
| `file-uploads` | Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle l... | file, uploads | file, uploads, handling, cloud, storage, covers, s3, cloudflare, r2, presigned, urls, multipart |
| `flutter-expert` | Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment. Handles state management, animations, testing, and performance optim... | flutter | flutter, development, dart, widgets, multi, platform, deployment, state, animations, testing, performance, optimization |
| `freshservice-automation` | Automate Freshservice ITSM tasks via Rube MCP (Composio): create/update tickets, bulk operations, service requests, and outbound emails. Always search tools ... | freshservice | freshservice, automation, automate, itsm, tasks, via, rube, mcp, composio, update, tickets, bulk |
| `game-development/game-art` | Game art principles. Visual style selection, asset pipeline, animation workflow. | game, development/game, art | game, development/game, art, principles, visual, style, selection, asset, pipeline, animation |
| `gcp-cloud-run` | Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven)... | gcp, cloud, run | gcp, cloud, run, specialized, skill, building, serverless, applications, covers, containerized, functions, event |
| `git-pr-workflows-git-workflow` | Orchestrate a comprehensive git workflow from code review through PR creation, leveraging specialized agents for quality assurance, testing, and deployment r... | git, pr | git, pr, orchestrate, code, review, through, creation, leveraging, specialized, agents, quality, assurance |
| `github-actions-templates` | Create production-ready GitHub Actions workflows for automated testing, building, and deploying applications. Use when setting up CI/CD with GitHub Actions, ... | github, actions | github, actions, automated, testing, building, deploying, applications, setting, up, ci, cd, automating |
| `github-automation` | Automate GitHub repositories, issues, pull requests, branches, CI/CD, and permissions via Rube MCP (Composio). Manage code workflows, review PRs, search code... | github | github, automation, automate, repositories, issues, pull, requests, branches, ci, cd, permissions, via |
| `github-workflow-automation` | Automate GitHub workflows with AI assistance. Includes PR reviews, issue triage, CI/CD integration, and Git operations. Use when automating GitHub workflows,... | github | github, automation, automate, ai, assistance, includes, pr, reviews, issue, triage, ci, cd |
| `gitlab-ci-patterns` | Build GitLab CI/CD pipelines with multi-stage workflows, caching, and distributed runners for scalable automation. Use when implementing GitLab CI/CD, optimi... | gitlab, ci | gitlab, ci, cd, pipelines, multi, stage, caching, distributed, runners, scalable, automation, implementing |
| `gitops-workflow` | Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOp... | gitops | gitops, argocd, flux, automated, declarative, kubernetes, deployments, continuous, reconciliation, implementing, automating, setting |
@@ -485,8 +685,10 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `observability-monitoring-slo-implement` | You are an SLO (Service Level Objective) expert specializing in implementing reliability standards and error budget-based practices. Design SLO frameworks, d... | observability, monitoring, slo, implement | observability, monitoring, slo, implement, level, objective, specializing, implementing, reliability, standards, error, budget |
| `performance-engineer` | Expert performance engineer specializing in modern observability, application optimization, and scalable system performance. Masters OpenTelemetry, distribut... | performance | performance, engineer, specializing, observability, application, optimization, scalable, masters, opentelemetry, distributed, tracing, load |
| `performance-testing-review-ai-review` | You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Levera... | performance, ai | performance, ai, testing, review, powered, code, combining, automated, static, analysis, intelligent, recognition |
| `pipedrive-automation` | Automate Pipedrive CRM operations including deals, contacts, organizations, activities, notes, and pipeline management via Rube MCP (Composio). Always search... | pipedrive | pipedrive, automation, automate, crm, operations, including, deals, contacts, organizations, activities, notes, pipeline |
| `prometheus-configuration` | Set up Prometheus for comprehensive metric collection, storage, and monitoring of infrastructure and applications. Use when implementing metrics collection, ... | prometheus, configuration | prometheus, configuration, set, up, metric, collection, storage, monitoring, infrastructure, applications, implementing, metrics |
| `protocol-reverse-engineering` | Master network protocol reverse engineering including packet analysis, protocol dissection, and custom protocol documentation. Use when analyzing network tra... | protocol, reverse, engineering | protocol, reverse, engineering, network, including, packet, analysis, dissection, custom, documentation, analyzing, traffic |
| `readme` | When the user wants to create or update a README.md file for a project. Also use when the user says 'write readme,' 'create readme,' 'document this project,'... | readme | readme, user, wants, update, md, file, says, write, document, documentation, asks, skill |
| `server-management` | Server management principles and decision-making. Process management, monitoring strategy, and scaling decisions. Teaches thinking, not commands. | server | server, principles, decision, making, process, monitoring, scaling, decisions, teaches, thinking, commands |
| `service-mesh-observability` | Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debu... | service, mesh, observability | service, mesh, observability, meshes, including, distributed, tracing, metrics, visualization, setting, up, monitoring |
| `slo-implementation` | Define and implement Service Level Indicators (SLIs) and Service Level Objectives (SLOs) with error budgets and alerting. Use when establishing reliability t... | slo | slo, define, level, indicators, slis, objectives, slos, error, budgets, alerting, establishing, reliability |
@@ -496,13 +698,13 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `terraform-skill` | Terraform infrastructure as code best practices | terraform, skill | terraform, skill, infrastructure, code |
| `test-automator` | Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with a... | automator | automator, test, ai, powered, automation, frameworks, self, healing, tests, quality, engineering, scalable |
| `unity-developer` | Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform de... | unity | unity, developer, games, optimized, scripts, efficient, rendering, proper, asset, masters, lts, urp |
| `vercel-deploy-claimable` | Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as | vercel, deploy, claimable | vercel, deploy, claimable, applications, websites, skill, user, requests, deployment, actions, such |
| `vercel-deploy-claimable` | Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as 'Deploy my app', 'Deploy this to production', 'C... | vercel, deploy, claimable | vercel, deploy, claimable, applications, websites, skill, user, requests, deployment, actions, such, my |
| `vercel-deployment` | Expert knowledge for deploying to Vercel with Next.js Use when: vercel, deploy, deployment, hosting, production. | vercel, deployment | vercel, deployment, knowledge, deploying, next, js, deploy, hosting |
| `voice-agents` | Voice agents represent the frontier of AI interaction - humans speaking naturally with AI systems. The challenge isn't just speech recognition and synthesis,... | voice, agents | voice, agents, represent, frontier, ai, interaction, humans, speaking, naturally, challenge, isn, just |
| `wireshark-analysis` | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow... | wireshark | wireshark, network, traffic, analysis, skill, should, used, user, asks, analyze, capture, packets |
| `workflow-automation` | Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost... | | automation, infrastructure, makes, ai, agents, reliable, without, durable, execution, network, hiccup, during |
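
Several of the `azure-identity-*` entries above describe the same credential flow. Below is a minimal sketch of that flow in Python, assuming only that the `azure-identity` package is installed and that some ambient credential source (environment variables, managed identity, or a developer `az login`) is available; the ARM token scope is an illustrative choice, not something mandated by these skills.

```python
# Minimal sketch, not code from this repository: DefaultAzureCredential tries
# environment variables, managed identity, and developer logins in order.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Any Azure SDK client accepts this credential object; calling get_token
# directly just proves the chain resolved. The ARM scope is an example.
token = credential.get_token("https://management.azure.com/.default")
print("token acquired; expires at", token.expires_on)
```

The same credential object plugs into the service clients cataloged above (Service Bus, Web PubSub, storage) without any per-service authentication code.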

## security (112)
## security (126)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -510,11 +712,22 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `active-directory-attacks` | This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration"... | active, directory, attacks | active, directory, attacks, skill, should, used, user, asks, attack, exploit, ad, kerberoasting |
| `agent-memory-systems` | Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-te... | agent, memory | agent, memory, cornerstone, intelligent, agents, without, every, interaction, starts, zero, skill, covers |
| `ai-product` | Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integra... | ai, product | ai, product, every, powered, question, whether, ll, right, ship, demo, falls, apart |
| `antigravity-workflows` | Orchestrate multiple Antigravity skills through guided workflows for SaaS MVP delivery, security audits, AI agent builds, and browser QA. | antigravity | antigravity, orchestrate, multiple, skills, through, guided, saas, mvp, delivery, security, audits, ai |
| `api-fuzzing-bug-bounty` | This skill should be used when the user asks to "test API security", "fuzz APIs", "find IDOR vulnerabilities", "test REST API", "test GraphQL", "API penetrat... | api, fuzzing, bug, bounty | api, fuzzing, bug, bounty, skill, should, used, user, asks, test, security, fuzz |
| `api-security-best-practices` | Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities | api, security, best, practices | api, security, best, practices, secure, including, authentication, authorization, input, validation, rate, limiting |
| `attack-tree-construction` | Build comprehensive attack trees to visualize threat paths. Use when mapping attack scenarios, identifying defense gaps, or communicating security risks to s... | attack, tree, construction | attack, tree, construction, trees, visualize, threat, paths, mapping, scenarios, identifying, defense, gaps |
| `auth-implementation-patterns` | Master authentication and authorization patterns including JWT, OAuth2, session management, and RBAC to build secure, scalable access control systems. Use wh... | auth | auth, authentication, authorization, including, jwt, oauth2, session, rbac, secure, scalable, access, control |
| `aws-penetration-testing` | This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalatio... | aws, penetration | aws, penetration, testing, skill, should, used, user, asks, pentest, test, security, enumerate |
| `azure-cosmos-db-py` | Build Azure Cosmos DB NoSQL services with Python/FastAPI following production-grade patterns. Use when implementing database client setup with dual auth (Def... | azure, cosmos, db, py | azure, cosmos, db, py, nosql, python, fastapi, following, grade, implementing, database, client |
| `azure-identity-dotnet` | Azure Identity SDK for .NET. Authentication library for Azure SDK clients using Microsoft Entra ID. Use for DefaultAzureCredential, managed identity, service... | azure, identity, dotnet | azure, identity, dotnet, sdk, net, authentication, library, clients, microsoft, entra, id, defaultazurecredential |
| `azure-keyvault-py` | Azure Key Vault SDK for Python. Use for secrets, keys, and certificates management with secure storage. Triggers: "key vault", "SecretClient", "KeyClient", "... | azure, keyvault, py | azure, keyvault, py, key, vault, sdk, python, secrets, keys, certificates, secure, storage |
| `azure-keyvault-secrets-rust` | Azure Key Vault Secrets SDK for Rust. Use for storing and retrieving secrets, passwords, and API keys. Triggers: "keyvault secrets rust", "SecretClient rust"... | azure, keyvault, secrets, rust | azure, keyvault, secrets, rust, key, vault, sdk, storing, retrieving, passwords, api, keys |
| `azure-keyvault-secrets-ts` | Manage secrets using Azure Key Vault Secrets SDK for JavaScript (@azure/keyvault-secrets). Use when storing and retrieving application secrets or configurati... | azure, keyvault, secrets, ts | azure, keyvault, secrets, ts, key, vault, sdk, javascript, storing, retrieving, application, configuration |
| `azure-security-keyvault-keys-dotnet` | Azure Key Vault Keys SDK for .NET. Client library for managing cryptographic keys in Azure Key Vault and Managed HSM. Use for key creation, rotation, encrypt... | azure, security, keyvault, keys, dotnet | azure, security, keyvault, keys, dotnet, key, vault, sdk, net, client, library, managing |
| `azure-security-keyvault-keys-java` | Azure Key Vault Keys Java SDK for cryptographic key management. Use when creating, managing, or using RSA/EC keys, performing encrypt/decrypt/sign/verify ope... | azure, security, keyvault, keys, java | azure, security, keyvault, keys, java, key, vault, sdk, cryptographic, creating, managing, rsa |
| `azure-security-keyvault-secrets-java` | Azure Key Vault Secrets Java SDK for secret management. Use when storing, retrieving, or managing passwords, API keys, connection strings, or other sensitive... | azure, security, keyvault, secrets, java | azure, security, keyvault, secrets, java, key, vault, sdk, secret, storing, retrieving, managing |
| `backend-security-coder` | Expert in secure backend coding practices specializing in input validation, authentication, and API security. Use PROACTIVELY for backend security implementa... | backend, security, coder | backend, security, coder, secure, coding, specializing, input, validation, authentication, api, proactively, implementations |
| `broken-authentication` | This skill should be used when the user asks to "test for broken authentication vulnerabilities", "assess session management security", "perform credential s... | broken, authentication | broken, authentication, testing, skill, should, used, user, asks, test, vulnerabilities, assess, session |
| `burp-suite-testing` | This skill should be used when the user asks to "intercept HTTP traffic", "modify web requests", "use Burp Suite for testing", "perform web vulnerability sca... | burp, suite | burp, suite, web, application, testing, skill, should, used, user, asks, intercept, http |
@@ -536,6 +749,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `design-orchestration` | Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order. Prevents premature imp... | | orchestration, orchestrates, routing, work, through, brainstorming, multi, agent, review, execution, readiness, correct |
| `devops-troubleshooter` | Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability. Masters log analysis, distributed tracing... | devops, troubleshooter | devops, troubleshooter, specializing, rapid, incident, response, debugging, observability, masters, log, analysis, distributed |
| `docker-expert` | Docker containerization expert with deep knowledge of multi-stage builds, image optimization, container security, Docker Compose orchestration, and productio... | docker | docker, containerization, deep, knowledge, multi, stage, image, optimization, container, security, compose, orchestration |
| `dotnet-backend` | Build ASP.NET Core 8+ backend services with EF Core, auth, background jobs, and production API patterns. | dotnet, backend | dotnet, backend, asp, net, core, ef, auth, background, jobs, api |
| `ethical-hacking-methodology` | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct secur... | ethical, hacking, methodology | ethical, hacking, methodology, skill, should, used, user, asks, learn, understand, penetration, testing |
| `file-path-traversal` | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web a... | file, path, traversal | file, path, traversal, testing, skill, should, used, user, asks, test, directory, exploit |
| `find-bugs` | Find bugs, security vulnerabilities, and code quality issues in local branch changes. Use when asked to review changes, find bugs, security review, or audit ... | find, bugs | find, bugs, security, vulnerabilities, code, quality, issues, local, branch, changes, asked, review |
@@ -563,6 +777,8 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `legal-advisor` | Draft privacy policies, terms of service, disclaimers, and legal notices. Creates GDPR-compliant texts, cookie policies, and data processing agreements. Use ... | legal, advisor | legal, advisor, draft, privacy, policies, terms, disclaimers, notices, creates, gdpr, compliant, texts |
| `linkerd-patterns` | Implement Linkerd service mesh patterns for lightweight, security-focused service mesh deployments. Use when setting up Linkerd, configuring traffic policies... | linkerd | linkerd, mesh, lightweight, security, deployments, setting, up, configuring, traffic, policies, implementing, zero |
| `loki-mode` | Multi-agent autonomous startup system for Claude Code. Triggers on "Loki Mode". Orchestrates 100+ specialized agents across engineering, QA, DevOps, security... | loki, mode | loki, mode, multi, agent, autonomous, startup, claude, code, triggers, orchestrates, 100, specialized |
| `m365-agents-dotnet` | Microsoft 365 Agents SDK for .NET. Build multichannel agents for Teams/M365/Copilot Studio with ASP.NET Core hosting, AgentApplication routing, and MSAL-base... | m365, agents, dotnet | m365, agents, dotnet, microsoft, 365, sdk, net, multichannel, teams, copilot, studio, asp |
| `m365-agents-py` | Microsoft 365 Agents SDK for Python. Build multichannel agents for Teams/M365/Copilot Studio with aiohttp hosting, AgentApplication routing, streaming respon... | m365, agents, py | m365, agents, py, microsoft, 365, sdk, python, multichannel, teams, copilot, studio, aiohttp |
| `malware-analyst` | Expert malware analyst specializing in defensive malware research, threat intelligence, and incident response. Masters sandbox analysis, behavioral analysis,... | malware, analyst | malware, analyst, specializing, defensive, research, threat, intelligence, incident, response, masters, sandbox, analysis |
| `memory-forensics` | Master memory forensics techniques including memory acquisition, process analysis, and artifact extraction using Volatility and related tools. Use when analy... | memory, forensics | memory, forensics, techniques, including, acquisition, process, analysis, artifact, extraction, volatility, related, analyzing |
| `metasploit-framework` | This skill should be used when the user asks to "use Metasploit for penetration testing", "exploit vulnerabilities with msfconsole", "create payloads with ms... | metasploit, framework | metasploit, framework, skill, should, used, user, asks, penetration, testing, exploit, vulnerabilities, msfconsole |
@@ -616,14 +832,17 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `varlock-claude-skill` | Secure environment variable management ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits | varlock, claude, skill | varlock, claude, skill, secure, environment, variable, ensuring, secrets, never, exposed, sessions, terminals |
| `vulnerability-scanner` | Advanced vulnerability analysis principles. OWASP 2025, Supply Chain Security, attack surface mapping, risk prioritization. | vulnerability, scanner | vulnerability, scanner, analysis, principles, owasp, 2025, supply, chain, security, attack, surface, mapping |
| `web-design-guidelines` | Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my si... | web, guidelines | web, guidelines, review, ui, code, interface, compliance, asked, my, check, accessibility, audit |
| `wiki-onboarding` | Generates two complementary onboarding guides — a Principal-Level architectural deep-dive and a Zero-to-Hero contributor walkthrough. Use when the user wants... | wiki, onboarding | wiki, onboarding, generates, two, complementary, guides, principal, level, architectural, deep, dive, zero |
| `wiki-researcher` | Conducts multi-turn iterative deep research on specific topics within a codebase with zero tolerance for shallow analysis. Use when the user wants an in-dept... | wiki, researcher | wiki, researcher, conducts, multi, turn, iterative, deep, research, specific, topics, within, codebase |
| `wordpress-penetration-testing` | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugi... | wordpress, penetration | wordpress, penetration, testing, skill, should, used, user, asks, pentest, sites, scan, vulnerabilities |
| `xss-html-injection` | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exp... | xss, html, injection | xss, html, injection, cross, site, scripting, testing, skill, should, used, user, asks |
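
The `azure-keyvault-*` rows in this section all wrap one client pattern. Here is a minimal sketch in Python, assuming the `azure-keyvault-secrets` and `azure-identity` packages and an existing vault; the vault URL and secret name are placeholders, not values taken from this catalog.

```python
# Minimal sketch with placeholder names: pairs DefaultAzureCredential with the
# secrets client that the azure-keyvault-py row describes.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

secret = client.get_secret("example-secret")  # placeholder secret name
# Print metadata rather than the value so the secret never reaches logs.
print(secret.name, secret.properties.version)
```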

## testing (22)
## testing (24)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `ab-test-setup` | Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness. | ab, setup | ab, setup, test, structured, setting, up, tests, mandatory, gates, hypothesis, metrics, execution |
| `circleci-automation` | Automate CircleCI tasks via Rube MCP (Composio): trigger pipelines, monitor workflows/jobs, retrieve artifacts and test metadata. Always search tools first f... | circleci | circleci, automation, automate, tasks, via, rube, mcp, composio, trigger, pipelines, monitor, jobs |
| `conductor-implement` | Execute tasks from a track's implementation plan following TDD workflow | conductor, implement | conductor, implement, execute, tasks, track, plan, following, tdd |
| `conductor-revert` | Git-aware undo by logical work unit (track, phase, or task) | conductor, revert | conductor, revert, git, aware, undo, logical, work, unit, track, phase, task |
| `debugger` | Debugging specialist for errors, test failures, and unexpected behavior. Use proactively when encountering any issues. | debugger | debugger, debugging, errors, test, failures, unexpected, behavior, proactively, encountering, any, issues |
@@ -645,25 +864,90 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `test-fixing` | Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test sui... | fixing | fixing, test, run, tests, systematically, fix, all, failing, smart, error, grouping, user |
| `unit-testing-test-generate` | Generate comprehensive, maintainable unit tests across languages with strong coverage and edge case focus. | unit, generate | unit, generate, testing, test, maintainable, tests, languages, strong, coverage, edge, case |
| `web3-testing` | Test smart contracts comprehensively using Hardhat and Foundry with unit tests, integration tests, and mainnet forking. Use when testing Solidity contracts, ... | web3 | web3, testing, test, smart, contracts, comprehensively, hardhat, foundry, unit, tests, integration, mainnet |
| `wiki-qa` | Answers questions about a code repository using source file analysis. Use when the user asks a question about how something works, wants to understand a comp... | wiki, qa | wiki, qa, answers, questions, about, code, repository, source, file, analysis, user, asks |

## workflow (17)
## workflow (81)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
| `activecampaign-automation` | Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first... | activecampaign | activecampaign, automation, automate, tasks, via, rube, mcp, composio, contacts, tags, list, subscriptions |
| `agent-orchestration-improve-agent` | Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration. | agent, improve | agent, improve, orchestration, systematic, improvement, existing, agents, through, performance, analysis, prompt, engineering |
| `agent-orchestration-multi-agent-optimize` | Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughpu... | agent, multi, optimize | agent, multi, optimize, orchestration, coordinated, profiling, workload, distribution, cost, aware, improving, performance |
| `airtable-automation` | Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas. | airtable | airtable, automation, automate, tasks, via, rube, mcp, composio, records, bases, tables, fields |
| `amplitude-automation` | Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas. | amplitude | amplitude, automation, automate, tasks, via, rube, mcp, composio, events, user, activity, cohorts |
| `asana-automation` | Automate Asana tasks via Rube MCP (Composio): tasks, projects, sections, teams, workspaces. Always search tools first for current schemas. | asana | asana, automation, automate, tasks, via, rube, mcp, composio, sections, teams, workspaces, always |
| `bamboohr-automation` | Automate BambooHR tasks via Rube MCP (Composio): employees, time-off, benefits, dependents, employee updates. Always search tools first for current schemas. | bamboohr | bamboohr, automation, automate, tasks, via, rube, mcp, composio, employees, time, off, benefits |
| `basecamp-automation` | Automate Basecamp project management, to-dos, messages, people, and to-do list organization via Rube MCP (Composio). Always search tools first for current sc... | basecamp | basecamp, automation, automate, dos, messages, people, do, list, organization, via, rube, mcp |
| `billing-automation` | Build automated billing systems for recurring payments, invoicing, subscription lifecycle, and dunning management. Use when implementing subscription billing... | billing | billing, automation, automated, recurring, payments, invoicing, subscription, lifecycle, dunning, implementing, automating, managing |
| `bitbucket-automation` | Automate Bitbucket repositories, pull requests, branches, issues, and workspace management via Rube MCP (Composio). Always search tools first for current sch... | bitbucket | bitbucket, automation, automate, repositories, pull, requests, branches, issues, workspace, via, rube, mcp |
| `brevo-automation` | Automate Brevo (Sendinblue) tasks via Rube MCP (Composio): manage email campaigns, create/edit templates, track senders, and monitor campaign performance. Al... | brevo | brevo, automation, automate, sendinblue, tasks, via, rube, mcp, composio, email, campaigns, edit |
| `cal-com-automation` | Automate Cal.com tasks via Rube MCP (Composio): manage bookings, check availability, configure webhooks, and handle teams. Always search tools first for curr... | cal, com | cal, com, automation, automate, tasks, via, rube, mcp, composio, bookings, check, availability |
| `canva-automation` | Automate Canva tasks via Rube MCP (Composio): designs, exports, folders, brand templates, autofill. Always search tools first for current schemas. | canva | canva, automation, automate, tasks, via, rube, mcp, composio, designs, exports, folders, brand |
| `changelog-automation` | Automate changelog generation from commits, PRs, and releases following Keep a Changelog format. Use when setting up release workflows, generating release no... | changelog | changelog, automation, automate, generation, commits, prs, releases, following, keep, format, setting, up |
| `clickup-automation` | Automate ClickUp project management including tasks, spaces, folders, lists, comments, and team operations via Rube MCP (Composio). Always search tools first... | clickup | clickup, automation, automate, including, tasks, spaces, folders, lists, comments, team, operations, via |
| `close-automation` | Automate Close CRM tasks via Rube MCP (Composio): create leads, manage calls/SMS, handle tasks, and track notes. Always search tools first for current schemas. | close | close, automation, automate, crm, tasks, via, rube, mcp, composio, leads, calls, sms |
| `coda-automation` | Automate Coda tasks via Rube MCP (Composio): manage docs, pages, tables, rows, formulas, permissions, and publishing. Always search tools first for current s... | coda | coda, automation, automate, tasks, via, rube, mcp, composio, docs, pages, tables, rows |
| `conductor-manage` | Manage track lifecycle: archive, restore, delete, rename, and cleanup | conductor, manage | conductor, manage, track, lifecycle, archive, restore, delete, rename, cleanup |
| `conductor-new-track` | Create a new track with specification and phased implementation plan | conductor, new, track | conductor, new, track, specification, phased, plan |
| `conductor-status` | Display project status, active tracks, and next actions | conductor, status | conductor, status, display, active, tracks, next, actions |
| `conductor-validator` | Validates Conductor project artifacts for completeness, consistency, and correctness. Use after setup, when diagnosing issues, or before implementation to ve... | conductor, validator | conductor, validator, validates, artifacts, completeness, consistency, correctness, after, setup, diagnosing, issues, before |
| `confluence-automation` | Automate Confluence page creation, content search, space management, labels, and hierarchy navigation via Rube MCP (Composio). Always search tools first for ... | confluence | confluence, automation, automate, page, creation, content, search, space, labels, hierarchy, navigation, via |
| `convertkit-automation` | Automate ConvertKit (Kit) tasks via Rube MCP (Composio): manage subscribers, tags, broadcasts, and broadcast stats. Always search tools first for current sch... | convertkit | convertkit, automation, automate, kit, tasks, via, rube, mcp, composio, subscribers, tags, broadcasts |
| `datadog-automation` | Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools firs... | datadog | datadog, automation, automate, tasks, via, rube, mcp, composio, query, metrics, search, logs |
| `discord-automation` | Automate Discord tasks via Rube MCP (Composio): messages, channels, roles, webhooks, reactions. Always search tools first for current schemas. | discord | discord, automation, automate, tasks, via, rube, mcp, composio, messages, channels, roles, webhooks |
| `docusign-automation` | Automate DocuSign tasks via Rube MCP (Composio): templates, envelopes, signatures, document management. Always search tools first for current schemas. | docusign | docusign, automation, automate, tasks, via, rube, mcp, composio, envelopes, signatures, document, always |
| `dropbox-automation` | Automate Dropbox file management, sharing, search, uploads, downloads, and folder operations via Rube MCP (Composio). Always search tools first for current s... | dropbox | dropbox, automation, automate, file, sharing, search, uploads, downloads, folder, operations, via, rube |
| `email-sequence` | When the user wants to create or optimize an email sequence, drip campaign, automated email flow, or lifecycle email program. Also use when the user mentions... | email, sequence | email, sequence, user, wants, optimize, drip, campaign, automated, flow, lifecycle, program, mentions |
| `figma-automation` | Automate Figma tasks via Rube MCP (Composio): files, components, design tokens, comments, exports. Always search tools first for current schemas. | figma | figma, automation, automate, tasks, via, rube, mcp, composio, files, components, tokens, comments |
| `freshdesk-automation` | Automate Freshdesk helpdesk operations including tickets, contacts, companies, notes, and replies via Rube MCP (Composio). Always search tools first for curr... | freshdesk | freshdesk, automation, automate, helpdesk, operations, including, tickets, contacts, companies, notes, replies, via |
| `full-stack-orchestration-full-stack-feature` | Use when working with full stack orchestration full stack feature | full, stack | full, stack, orchestration, feature, working |
| `git-pushing` | Stage, commit, and push git changes with conventional commit messages. Use when user wants to commit and push changes, mentions pushing to remote, or asks to... | git, pushing | git, pushing, stage, commit, push, changes, conventional, messages, user, wants, mentions, remote |
| `gitlab-automation` | Automate GitLab project management, issues, merge requests, pipelines, branches, and user operations via Rube MCP (Composio). Always search tools first for c... | gitlab | gitlab, automation, automate, issues, merge, requests, pipelines, branches, user, operations, via, rube |
| `gmail-automation` | Automate Gmail tasks via Rube MCP (Composio): send/reply, search, labels, drafts, attachments. Always search tools first for current schemas. | gmail | gmail, automation, automate, tasks, via, rube, mcp, composio, send, reply, search, labels |
| `google-calendar-automation` | Automate Google Calendar events, scheduling, availability checks, and attendee management via Rube MCP (Composio). Create events, find free slots, manage att... | google, calendar | google, calendar, automation, automate, events, scheduling, availability, checks, attendee, via, rube, mcp |
| `google-drive-automation` | Automate Google Drive file operations (upload, download, search, share, organize) via Rube MCP (Composio). Upload/download files, manage folders, share with ... | google, drive | google, drive, automation, automate, file, operations, upload, download, search, share, organize, via |
| `helpdesk-automation` | Automate HelpDesk tasks via Rube MCP (Composio): list tickets, manage views, use canned responses, and configure custom fields. Always search tools first for... | helpdesk | helpdesk, automation, automate, tasks, via, rube, mcp, composio, list, tickets, views, canned |
| `hubspot-automation` | Automate HubSpot CRM operations (contacts, companies, deals, tickets, properties) via Rube MCP using Composio integration. | hubspot | hubspot, automation, automate, crm, operations, contacts, companies, deals, tickets, properties, via, rube |
| `instagram-automation` | Automate Instagram tasks via Rube MCP (Composio): create posts, carousels, manage media, get insights, and publishing limits. Always search tools first for c... | instagram | instagram, automation, automate, tasks, via, rube, mcp, composio, posts, carousels, media, get |
| `intercom-automation` | Automate Intercom tasks via Rube MCP (Composio): conversations, contacts, companies, segments, admins. Always search tools first for current schemas. | intercom | intercom, automation, automate, tasks, via, rube, mcp, composio, conversations, contacts, companies, segments |
| `jira-automation` | Automate Jira tasks via Rube MCP (Composio): issues, projects, sprints, boards, comments, users. Always search tools first for current schemas. | jira | jira, automation, automate, tasks, via, rube, mcp, composio, issues, sprints, boards, comments |
| `kaizen` | Guide for continuous improvement, error proofing, and standardization. Use this skill when the user wants to improve code quality, refactor, or discuss proce... | kaizen | kaizen, continuous, improvement, error, proofing, standardization, skill, user, wants, improve, code, quality |
| `klaviyo-automation` | Automate Klaviyo tasks via Rube MCP (Composio): manage email/SMS campaigns, inspect campaign messages, track tags, and monitor send jobs. Always search tools... | klaviyo | klaviyo, automation, automate, tasks, via, rube, mcp, composio, email, sms, campaigns, inspect |
|
||||
| `linear-automation` | Automate Linear tasks via Rube MCP (Composio): issues, projects, cycles, teams, labels. Always search tools first for current schemas. | linear | linear, automation, automate, tasks, via, rube, mcp, composio, issues, cycles, teams, labels |
|
||||
| `linkedin-automation` | Automate LinkedIn tasks via Rube MCP (Composio): create posts, manage profile, company info, comments, and image uploads. Always search tools first for curre... | linkedin | linkedin, automation, automate, tasks, via, rube, mcp, composio, posts, profile, company, info |
|
||||
| `make-automation` | Automate Make (Integromat) tasks via Rube MCP (Composio): operations, enums, language and timezone lookups. Always search tools first for current schemas. | make | make, automation, automate, integromat, tasks, via, rube, mcp, composio, operations, enums, language |
|
||||
| `mermaid-expert` | Create Mermaid diagrams for flowcharts, sequences, ERDs, and architectures. Masters syntax for all diagram types and styling. Use PROACTIVELY for visual docu... | mermaid | mermaid, diagrams, flowcharts, sequences, erds, architectures, masters, syntax, all, diagram, types, styling |
|
||||
| `pdf` | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. When Claude needs ... | pdf | pdf, manipulation, toolkit, extracting, text, tables, creating, new, pdfs, merging, splitting, documents |
|
||||
| `microsoft-teams-automation` | Automate Microsoft Teams tasks via Rube MCP (Composio): send messages, manage channels, create meetings, handle chats, and search messages. Always search too... | microsoft, teams | microsoft, teams, automation, automate, tasks, via, rube, mcp, composio, send, messages, channels |
|
||||
| `miro-automation` | Automate Miro tasks via Rube MCP (Composio): boards, items, sticky notes, frames, sharing, connectors. Always search tools first for current schemas. | miro | miro, automation, automate, tasks, via, rube, mcp, composio, boards, items, sticky, notes |
|
||||
| `mixpanel-automation` | Automate Mixpanel tasks via Rube MCP (Composio): events, segmentation, funnels, cohorts, user profiles, JQL queries. Always search tools first for current sc... | mixpanel | mixpanel, automation, automate, tasks, via, rube, mcp, composio, events, segmentation, funnels, cohorts |
|
||||
| `monday-automation` | Automate Monday.com work management including boards, items, columns, groups, subitems, and updates via Rube MCP (Composio). Always search tools first for cu... | monday | monday, automation, automate, com, work, including, boards, items, columns, groups, subitems, updates |
|
||||
| `notion-automation` | Automate Notion tasks via Rube MCP (Composio): pages, databases, blocks, comments, users. Always search tools first for current schemas. | notion | notion, automation, automate, tasks, via, rube, mcp, composio, pages, databases, blocks, comments |
|
||||
| `one-drive-automation` | Automate OneDrive file management, search, uploads, downloads, sharing, permissions, and folder operations via Rube MCP (Composio). Always search tools first... | one, drive | one, drive, automation, automate, onedrive, file, search, uploads, downloads, sharing, permissions, folder |
|
||||
| `outlook-automation` | Automate Outlook tasks via Rube MCP (Composio): emails, calendar, contacts, folders, attachments. Always search tools first for current schemas. | outlook | outlook, automation, automate, tasks, via, rube, mcp, composio, emails, calendar, contacts, folders |
|
||||
| `outlook-calendar-automation` | Automate Outlook Calendar tasks via Rube MCP (Composio): create events, manage attendees, find meeting times, and handle invitations. Always search tools fir... | outlook, calendar | outlook, calendar, automation, automate, tasks, via, rube, mcp, composio, events, attendees, find |
|
||||
| `pagerduty-automation` | Automate PagerDuty tasks via Rube MCP (Composio): manage incidents, services, schedules, escalation policies, and on-call rotations. Always search tools firs... | pagerduty | pagerduty, automation, automate, tasks, via, rube, mcp, composio, incidents, schedules, escalation, policies |
|
||||
| `pdf-official` | Comprehensive PDF manipulation toolkit for extracting text and tables, creating new PDFs, merging/splitting documents, and handling forms. When Claude needs ... | pdf, official | pdf, official, manipulation, toolkit, extracting, text, tables, creating, new, pdfs, merging, splitting |
|
||||
| `posthog-automation` | Automate PostHog tasks via Rube MCP (Composio): events, feature flags, projects, user profiles, annotations. Always search tools first for current schemas. | posthog | posthog, automation, automate, tasks, via, rube, mcp, composio, events, feature, flags, user |
|
||||
| `postmark-automation` | Automate Postmark email delivery tasks via Rube MCP (Composio): send templated emails, manage templates, monitor delivery stats and bounces. Always search to... | postmark | postmark, automation, automate, email, delivery, tasks, via, rube, mcp, composio, send, templated |
|
||||
| `reddit-automation` | Automate Reddit tasks via Rube MCP (Composio): search subreddits, create posts, manage comments, and browse top content. Always search tools first for curren... | reddit | reddit, automation, automate, tasks, via, rube, mcp, composio, search, subreddits, posts, comments |
|
||||
| `render-automation` | Automate Render tasks via Rube MCP (Composio): services, deployments, projects. Always search tools first for current schemas. | render | render, automation, automate, tasks, via, rube, mcp, composio, deployments, always, search, first |
|
||||
| `salesforce-automation` | Automate Salesforce tasks via Rube MCP (Composio): leads, contacts, accounts, opportunities, SOQL queries. Always search tools first for current schemas. | salesforce | salesforce, automation, automate, tasks, via, rube, mcp, composio, leads, contacts, accounts, opportunities |
|
||||
| `segment-automation` | Automate Segment tasks via Rube MCP (Composio): track events, identify users, manage groups, page views, aliases, batch operations. Always search tools first... | segment | segment, automation, automate, tasks, via, rube, mcp, composio, track, events, identify, users |
|
||||
| `sentry-automation` | Automate Sentry tasks via Rube MCP (Composio): manage issues/events, configure alerts, track releases, monitor projects and teams. Always search tools first ... | sentry | sentry, automation, automate, tasks, via, rube, mcp, composio, issues, events, configure, alerts |
|
||||
| `shopify-automation` | Automate Shopify tasks via Rube MCP (Composio): products, orders, customers, inventory, collections. Always search tools first for current schemas. | shopify | shopify, automation, automate, tasks, via, rube, mcp, composio, products, orders, customers, inventory |
|
||||
| `skill-creator` | This skill should be used when the user asks to create a new skill, build a skill, make a custom skill, develop a CLI skill, or wants to extend the CLI with ... | automation, scaffolding, skill-creation, meta-skill | automation, scaffolding, skill-creation, meta-skill, skill, creator, should, used, user, asks, new, custom |
|
||||
| `square-automation` | Automate Square tasks via Rube MCP (Composio): payments, orders, invoices, locations. Always search tools first for current schemas. | square | square, automation, automate, tasks, via, rube, mcp, composio, payments, orders, invoices, locations |
|
||||
| `stripe-automation` | Automate Stripe tasks via Rube MCP (Composio): customers, charges, subscriptions, invoices, products, refunds. Always search tools first for current schemas. | stripe | stripe, automation, automate, tasks, via, rube, mcp, composio, customers, charges, subscriptions, invoices |
|
||||
| `team-collaboration-issue` | You are a GitHub issue resolution expert specializing in systematic bug investigation, feature implementation, and collaborative development workflows. Your ... | team, collaboration, issue | team, collaboration, issue, github, resolution, specializing, systematic, bug, investigation, feature, collaborative, development |
|
||||
| `telegram-automation` | Automate Telegram tasks via Rube MCP (Composio): send messages, manage chats, share photos/documents, and handle bot commands. Always search tools first for ... | telegram | telegram, automation, automate, tasks, via, rube, mcp, composio, send, messages, chats, share |
|
||||
| `tiktok-automation` | Automate TikTok tasks via Rube MCP (Composio): upload/publish videos, post photos, manage content, and view user profiles/stats. Always search tools first fo... | tiktok | tiktok, automation, automate, tasks, via, rube, mcp, composio, upload, publish, videos, post |
|
||||
| `todoist-automation` | Automate Todoist task management, projects, sections, filtering, and bulk operations via Rube MCP (Composio). Always search tools first for current schemas. | todoist | todoist, automation, automate, task, sections, filtering, bulk, operations, via, rube, mcp, composio |
|
||||
| `track-management` | Use this skill when creating, managing, or working with Conductor tracks - the logical work units for features, bugs, and refactors. Applies to spec.md, plan... | track | track, skill, creating, managing, working, conductor, tracks, logical, work, units, features, bugs |
|
||||
| `trello-automation` | Automate Trello boards, cards, and workflows via Rube MCP (Composio). Create cards, manage lists, assign members, and search across boards programmatically. | trello | trello, automation, automate, boards, cards, via, rube, mcp, composio, lists, assign, members |
|
||||
| `twitter-automation` | Automate Twitter/X tasks via Rube MCP (Composio): posts, search, users, bookmarks, lists, media. Always search tools first for current schemas. | twitter | twitter, automation, automate, tasks, via, rube, mcp, composio, posts, search, users, bookmarks |
|
||||
| `vercel-automation` | Automate Vercel tasks via Rube MCP (Composio): manage deployments, domains, DNS, env vars, projects, and teams. Always search tools first for current schemas. | vercel | vercel, automation, automate, tasks, via, rube, mcp, composio, deployments, domains, dns, env |
|
||||
| `webflow-automation` | Automate Webflow CMS collections, site publishing, page management, asset uploads, and ecommerce orders via Rube MCP (Composio). Always search tools first fo... | webflow | webflow, automation, automate, cms, collections, site, publishing, page, asset, uploads, ecommerce, orders |
|
||||
| `wrike-automation` | Automate Wrike project management via Rube MCP (Composio): create tasks/folders, manage projects, assign work, and track progress. Always search tools first ... | wrike | wrike, automation, automate, via, rube, mcp, composio, tasks, folders, assign, work, track |
|
||||
| `zendesk-automation` | Automate Zendesk tasks via Rube MCP (Composio): tickets, users, organizations, replies. Always search tools first for current schemas. | zendesk | zendesk, automation, automate, tasks, via, rube, mcp, composio, tickets, users, organizations, replies |
|
||||
| `zoho-crm-automation` | Automate Zoho CRM tasks via Rube MCP (Composio): create/update records, search contacts, manage leads, and convert leads. Always search tools first for curre... | zoho, crm | zoho, crm, automation, automate, tasks, via, rube, mcp, composio, update, records, search |
|
||||
| `zoom-automation` | Automate Zoom meeting creation, management, recordings, webinars, and participant tracking via Rube MCP (Composio). Always search tools first for current sch... | zoom | zoom, automation, automate, meeting, creation, recordings, webinars, participant, tracking, via, rube, mcp |
|
||||
|
||||

268
CHANGELOG.md
@@ -7,8 +7,274 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

## [5.2.0] - 2026-02-13 - "Podcast Generation & Azure Expansion"

> **New AI capabilities: Podcast Generation, Azure Agents, and Self-Evolving Agents.**

### Added

- **New Skill**: `podcast-generation` - Create multi-speaker podcasts from text/URLs using OpenAI Text-to-Speech (TTS) and pydub.
- **New Skill**: `weevolve` - Self-evolving knowledge engine with a recursive improvement protocol.
- **Azure Skills Expansion**:
  - `azure-ai-agents-persistent-dotnet`: Persistent agent patterns for .NET.
  - `azure-ai-agents-persistent-java`: Persistent agent patterns for Java.
  - `azd-deployment`: Azure Developer CLI deployment strategies.
- **Python Enhancements**:
  - `pydantic-models-py`: Robust data validation patterns.
  - `fastapi-router-py`: Scalable API routing structures.

### Registry

- **Total Skills**: 856 (from 845).
- **Generated Files**: Synced `skills_index.json`, `data/catalog.json`, and `README.md`.

### Contributors

- **[@sickn33](https://github.com/sickn33)** - Podcast Generation & Azure skills sync (PR #74).
- **[@aro-brez](https://github.com/aro-brez)** - WeEvolve skill (Issue #75).

---

## [5.1.0] - 2026-02-12 - "Official Microsoft & Gemini Skills"

> **845+ skills: the largest single-PR expansion ever, powered by official vendor collections.**

Integrates the full official Microsoft skills collection (129 skills) and the Google Gemini API development skills, significantly expanding Azure SDK coverage across .NET, Python, TypeScript, Java, and Rust, plus M365 Agents, Semantic Kernel, and wiki plugin skills.

### Added

- **129 Microsoft Official Skills** from [microsoft/skills](https://github.com/microsoft/skills):
  - Azure SDKs across .NET, Python, TypeScript, Java, and Rust
  - M365 Agents, Semantic Kernel, and wiki plugin skills
  - Flat structure using the YAML `name` field as the directory name
  - Attribution files: `docs/LICENSE-MICROSOFT`, `docs/microsoft-skills-attribution.json`
- **Gemini API Skills**: Official Gemini API development skill under `skills/gemini-api-dev/`
- **New Scripts & Tooling**:
  - `scripts/sync_microsoft_skills.py` (v4): Flat-structure sync with collision detection, stale cleanup, and attribution metadata
  - `scripts/tests/inspect_microsoft_repo.py`: Remote repo inspection
  - `scripts/tests/test_comprehensive_coverage.py`: Coverage verification
- **New npm scripts**: `sync:microsoft` and `sync:all-official` in `package.json` (see the example after this list)
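
To re-run the sync locally, the new npm entry points can be invoked directly; a minimal sketch, assuming dependencies are already installed via `npm ci`:

```bash
# Re-sync the official Microsoft collection, then all official sources
npm run sync:microsoft
npm run sync:all-official
```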

### Fixed

- **`scripts/generate_index.py`**: Enhanced frontmatter parsing for unquoted `@` symbols and commas
- **`scripts/build-catalog.js`**: Deterministic `generatedAt` timestamp (prevents CI drift)

### Registry

- **Total Skills**: 845 (from 626). All generated files synced.

### Contributors

- [@ar27111994](https://github.com/ar27111994) - Microsoft & Gemini skills integration (PR #73)

---

## [5.0.0] - 2026-02-10 - "Antigravity Workflows Foundation"

> Workflows are now first-class: users can run guided, multi-skill playbooks instead of manually composing skills one by one.

### Added

- **New orchestration skill**: `antigravity-workflows`
  - `skills/antigravity-workflows/SKILL.md`
  - `skills/antigravity-workflows/resources/implementation-playbook.md`
- **New workflow documentation**: `docs/WORKFLOWS.md`
  - Introduces the Workflows model and differentiates it from Bundles.
  - Provides execution playbooks with prerequisites, ordered steps, and prompt examples.
- **New machine-readable workflow registry**: `data/workflows.json`
  - `ship-saas-mvp`
  - `security-audit-web-app`
  - `build-ai-agent-system`
  - `qa-browser-automation`

### Changed

- **README / Onboarding docs** updated to include Workflows discovery and usage:
  - `README.md` (TOC + "Antigravity Workflows" section)
  - `docs/GETTING_STARTED.md` (Bundles vs Workflows guidance)
  - `docs/FAQ.md` (new Q&A: Bundles vs Workflows)
- **Go browser automation alignment**:
  - Workflow playbooks now include optional `@go-playwright` hooks for Go-based QA/E2E flows.
- **Registry sync** after the workflow skill addition:
  - `CATALOG.md`
  - `skills_index.json`
  - `data/catalog.json`
  - `data/bundles.json`

### Contributors

- [@sickn33](https://github.com/sickn33) - Workflows architecture, docs, and release integration

---

## [4.11.0] - 2026-02-08 - "Clean Code & Registry Stability"

> Quality improvements: Clean Code principles and deterministic builds.

### Changed

- **`clean-code` skill** - Complete rewrite based on Robert C. Martin's "Clean Code":
  - Systematic coverage: meaningful names, functions, comments, formatting, objects, error handling, unit tests, and classes
  - Added F.I.R.S.T. test principles and Law of Demeter guidance
  - Fixed an invalid heading format (`## ## When to Use` → `## When to Use`) that blocked validation
  - Added an implementation checklist and code-smell detection
- **Registry Stabilization** - Fixed `scripts/build-catalog.js` for deterministic CI builds (see the sketch after this list):
  - Uses the `SOURCE_DATE_EPOCH` environment variable for reproducible timestamps
  - Replaced `localeCompare` with an explicit comparator for consistent sorting across environments
  - Prevents CI validation failures caused by timestamp drift
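
To check determinism locally, pin the epoch and diff two consecutive builds; a rough sketch, assuming `scripts/build-catalog.js` is run directly with `node` (the exact invocation is an assumption):

```bash
# Two builds with the same pinned epoch should produce byte-identical output
export SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)  # last commit time, stable across runs
node scripts/build-catalog.js && cp data/catalog.json /tmp/catalog-a.json
node scripts/build-catalog.js && diff /tmp/catalog-a.json data/catalog.json
```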

### Contributors

- [@jackjin1997](https://github.com/jackjin1997) - Clean Code skill update and registry fixes (PR #69, forged at [ClawForge](https://github.com/jackjin1997/ClawForge))

---

## [4.10.0] - 2026-02-06 - "Composio Automation + .NET Backend"

> A major expansion focused on practical app automation and stronger backend engineering coverage.

### Added

- **79 new skills total**.
- **78 Composio/Rube automation skills** (PR #64), with operational playbooks for:
  - CRM and sales stacks (`HubSpot`, `Pipedrive`, `Salesforce`, `Zoho CRM`, `Close`).
  - Collaboration and project tools (`Notion`, `ClickUp`, `Asana`, `Jira`, `Confluence`, `Trello`, `Monday`).
  - Messaging and support channels (`Slack`, `Discord`, `Teams`, `Intercom`, `Freshdesk`, `Zendesk`).
  - Marketing and analytics systems (`Google Analytics`, `Mixpanel`, `PostHog`, `Segment`, `Mailchimp`, `Klaviyo`).
  - Infra/dev tooling (`GitHub`, `GitLab`, `CircleCI`, `Datadog`, `PagerDuty`, `Vercel`, `Render`).
- **1 new `dotnet-backend` skill** (PR #65) with:
  - ASP.NET Core 8+ API patterns (Minimal APIs + controller-based).
  - EF Core usage guidance, JWT auth examples, and background worker templates.
  - Explicit trigger guidance and documented limitations.
- **Registry size increased to 713 skills** (from 634).

### Changed

- Regenerated and synced discovery artifacts after merging both PRs:
  - `README.md` (counts + contributor updates)
  - `skills_index.json`
  - `CATALOG.md`
  - `data/catalog.json`
  - `data/bundles.json`
  - `data/aliases.json`
- Release metadata updated for `v4.10.0`:
  - `package.json` / `package-lock.json` version bump
  - GitHub Release object published with release notes

### Contributors

- [@sohamganatra](https://github.com/sohamganatra) - 78 Composio automation skills (PR #64)
- [@Nguyen-Van-Chan](https://github.com/Nguyen-Van-Chan) - .NET backend skill (PR #65)

---

## [4.9.0] - 2026-02-05 - "OSS Hunter & Universal Skills"

> Automated contribution hunting and universal CLI AI skills (Audio, YouTube, Prompt Engineering).

### Added

- **New Skill**: `oss-hunter` – Automated tool for finding high-impact open-source contributions (Good First Issues, Help Wanted) in trending repositories.
- **New Skill**: `audio-transcriber` – Transform audio recordings into professional Markdown with Whisper integration.
- **New Skill**: `youtube-summarizer` – Generate comprehensive summaries/notes from YouTube videos.
- **New Skill**: `prompt-engineer` (enhanced) – Now includes 11 optimization frameworks (RTF, RISEN, etc.).
- **Registry**: 634 skills (from 626). Catalog regenerated.

### Changed

- **CLI AI Skills**: Merged core skills from `ericgandrade/cli-ai-skills`.

### Contributors

- [@jackjin1997](https://github.com/jackjin1997) - OSS Hunter (PR #61)
- [@ericgandrade](https://github.com/ericgandrade) - CLI AI Skills (PR #62)

---

## [4.7.0] - 2026-02-03 - "Installer Fix & OpenCode Docs"

> A critical installer fix for Windows and completed OpenCode documentation.

### Fixed

- **Installer**: Resolved a `ReferenceError` for the `tagArg` variable in `bin/install.js`, ensuring correct execution on Windows/PowerShell (PR #53).

### Documentation

- **OpenCode**: Completed documentation for the OpenCode integration in `README.md`.

---

## [4.6.0] - 2026-02-01 - "SPDD & Radix UI Design System"

> Agent workflow docs (SPDD) and a Radix UI design system skill.

### Added

- **New Skill**: `radix-ui-design-system` – Build accessible design systems with Radix UI primitives (headless, theming, WCAG, examples).
- **Docs**: `skills/SPDD/` – Research, spec, and implementation workflow docs (1-research.md, 2-spec.md, 3-implementation.md).

### Registry

- **Total Skills**: 626 (from 625). Catalog regenerated.

---

## [4.5.0] - 2026-01-31 - "Stitch UI Design"

> Expert prompting guide for the Google Stitch AI-powered UI design tool.

### Added

- **New Skill**: `stitch-ui-design` – Expert guide for creating effective prompts for the Google Stitch AI UI design tool (Gemini 2.5 Flash). Covers prompt structure, specificity techniques, iteration strategies, design-to-code workflows, and 10+ examples for landing pages, mobile apps, and dashboards.

### Changed

- **Documentation**: Clarified in README.md and GETTING_STARTED.md that installation means cloning the full repo once; Starter Packs are curated lists for discovering skills by role, not a different installation method (fixes [#44](https://github.com/sickn33/antigravity-awesome-skills/issues/44)).

### Registry

- **Total Skills**: 625 (from 624). Catalog regenerated.

### Credits

- [@ALEKGG1](https://github.com/ALEKGG1) – stitch-ui-design (PR #45)
- [@CypherPoet](https://github.com/CypherPoet) – Documentation clarity (#44)

---

## [4.4.0] - 2026-01-30 - "fp-ts skills for TypeScript"

> Three practical fp-ts skills for TypeScript functional programming.

### Added

- **New Skills** (from [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills)):
  - `fp-ts-pragmatic` – Pipe, Option, Either, and TaskEither without academic jargon.
  - `fp-ts-react` – Patterns for fp-ts with React 18/19 and Next.js 14/15 (state, forms, data fetching).
  - `fp-ts-errors` – Type-safe error handling with Either and TaskEither.

### Registry

- **Total Skills**: 624 (from 621). Catalog regenerated.

---

## [4.3.0] - 2026-01-29 - "VoltAgent Integration & Context Engineering Suite"

> 61 new skills from VoltAgent/awesome-agent-skills: official team skills and a context engineering suite.

### Added

- **61 new skills** from [VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills):
  - **Official (27)**: Sentry (commit, create-pr, find-bugs, iterate-pr), Trail of Bits (culture-index, fix-review, sharp-edges), Expo (expo-deployment, upgrading-expo), Hugging Face (hugging-face-cli, hugging-face-jobs), Vercel, Google Stitch (design-md), Neon (using-neon), n8n (n8n-code-python, n8n-mcp-tools-expert, n8n-node-configuration), SwiftUI, fal.ai (fal-audio, fal-generate, fal-image-edit, fal-platform, fal-upscale, fal-workflow), deep-research, imagen, readme.
  - **Community (34)**: Context suite (context-fundamentals, context-degradation, context-compression, context-optimization, multi-agent-patterns, memory-systems, evaluation), frontend-slides, linear-claude-skill, skill-rails-upgrade, terraform-skill, tool-design, screenshots, automate-whatsapp, observe-whatsapp, aws-skills, ui-skills, vexor, pypict-skill, makepad-skills, threejs-skills, claude-scientific-skills, claude-win11-speckit-update-skill, security-bluebook-builder, claude-ally-health, clarity-gate, beautiful-prose, claude-speed-reader, skill-seekers, varlock-claude-skill, superpowers-lab, nanobanana-ppt-skills, x-article-publisher-skill, ffuf-claude-skill.

### Registry

- **Total Skills**: 614 (from 553). Catalog and SOURCES.md updated.

### Credits

- VoltAgent/awesome-agent-skills and official teams (Sentry, Trail of Bits, Expo, Hugging Face, Vercel Labs, Google Labs, Neon, fal.ai).

---

## [4.0.0] - 2026-01-28 - "The Enterprise Era"

@@ -120,7 +386,7 @@ The following skills are now correctly indexed and visible in the registry:

- **Documentation**:
  - `docs/EXAMPLES.md`: Cookbook with 3 real-world scenarios.
  - `docs/SOURCES.md`: Legal ledger for attributions and licenses.
  - `RELEASE_NOTES.md`: Generated release announcement (archived).
- Release announcements are documented in this CHANGELOG.

### Changed

@@ -48,6 +48,23 @@ You don't need to be an expert! Here are ways anyone can help:

---

## Local development setup

To run validation, index generation, and README updates locally:

1. **Node.js** (for catalog and installer): `npm ci`
2. **Python 3** (for the validate, index, and readme scripts): install dependencies with

   ```bash
   pip install -r requirements.txt
   ```

Then you can run `npm run chain` (validate → index → readme) and `npm run catalog`.

**Validation:** The canonical validator is **Python** (`scripts/validate_skills.py`). Use `npm run validate` (or `npm run validate:strict` for CI-style checks). The JavaScript validator (`scripts/validate-skills.js`) is legacy/optional and uses a different schema; CI and PR checks rely on the Python validator only.

**npm audit:** CI runs `npm audit --audit-level=high`. To fix issues locally: run `npm audit`, then `npm update` or `npm audit fix` as appropriate; for breaking changes, update dependencies manually and run the tests.
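
Putting those steps together, a full local pre-PR check looks roughly like this (a sketch of the commands named above, in the order CI expects):

```bash
npm ci                           # Node dependencies (catalog, installer)
pip install -r requirements.txt  # Python dependencies (validate, index, readme)
npm run chain                    # validate → index → readme
npm run catalog                  # regenerate catalog artifacts
npm audit --audit-level=high     # same audit gate CI uses
```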

---

## How to Create a New Skill

### Step-by-Step Guide

362
README.md
@@ -1,6 +1,6 @@

# 🌌 Antigravity Awesome Skills: 625+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More
# 🌌 Antigravity Awesome Skills: 856+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More

> **The Ultimate Collection of 625+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**
> **The Ultimate Collection of 856+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, AdaL**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)

@@ -10,8 +10,13 @@

[](https://github.com/features/copilot)
[](https://github.com/opencode-ai/opencode)
[](https://github.com/sickn33/antigravity-awesome-skills)
[](https://sylph.ai/)
[](https://github.com/yeasy/ask)
[](https://buymeacoffee.com/sickn33)

**Antigravity Awesome Skills** is a curated, battle-tested library of **624 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:
If this project helps you, you can [support it here](https://buymeacoffee.com/sickn33) or simply ⭐ the repo.

**Antigravity Awesome Skills** is a curated, battle-tested library of **856 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)

@@ -20,52 +25,67 @@

- 🩵 **GitHub Copilot** (VSCode Extension)
- 🟠 **Cursor** (AI-native IDE)
- ⚪ **OpenCode** (Open-source CLI)
- 🌸 **AdaL CLI** (Self-evolving Coding Agent)

This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Supabase**, and **Vercel Labs**.
This repository provides essential skills to transform your AI assistant into a **full-stack digital agency**, including official capabilities from **Anthropic**, **OpenAI**, **Google**, **Microsoft**, **Supabase**, and **Vercel Labs**.

## Table of Contents

- [🚀 New Here? Start Here!](#new-here-start-here)
- [🔌 Compatibility & Invocation](#compatibility--invocation)
- [📦 Features & Categories](#features--categories)
- [🎁 Curated Collections (Bundles)](#curated-collections)
- [📚 Browse 625+ Skills](#browse-625-skills)
- [🛠️ Installation](#installation)
- [🧯 Troubleshooting](#troubleshooting)
- [🎁 Curated Collections (Bundles)](#curated-collections)
- [🧭 Antigravity Workflows](#antigravity-workflows)
- [📦 Features & Categories](#features--categories)
- [📚 Browse 856+ Skills](#browse-856-skills)
- [🤝 How to Contribute](#how-to-contribute)
- [🤝 Community](#community)
- [☕ Support the Project](#support-the-project)
- [👥 Contributors & Credits](#credits--sources)
- [⚖️ License](#license)
- [👥 Repo Contributors](#repo-contributors)
- [⚖️ License](#license)
- [🌟 Star History](#star-history)
- [🏷️ GitHub Topics](#github-topics)

---

## New Here? Start Here!

**Welcome to the V4.0.0 Enterprise Edition.** This isn't just a list of scripts; it's a complete operating system for your AI Agent.
**Welcome to the V5.2.0 Workflows Edition.** This isn't just a list of scripts; it's a complete operating system for your AI Agent.

### 1. 🐣 Context: What is this?

**Antigravity Awesome Skills** (Release 4.0.0) is a massive upgrade to your AI's capabilities.
**Antigravity Awesome Skills** (Release 5.2.0) is a massive upgrade to your AI's capabilities.

AI Agents (like Claude Code, Cursor, or Gemini) are smart, but they lack **specific tools**. They don't know your company's "Deployment Protocol" or the specific syntax for "AWS CloudFormation".
**Skills** are small markdown files that teach them how to do these specific tasks perfectly, every time.

### 2. ⚡️ Quick Start (The "Bundle" Way)
### 2. ⚡️ Quick Start (1 minute)

Install once (clone or npx); then use our **Starter Packs** in [docs/BUNDLES.md](docs/BUNDLES.md) to see which skills fit your role. You get the full repo; Starter Packs are curated lists, not a separate install.
Install once; then use Starter Packs in [docs/BUNDLES.md](docs/BUNDLES.md) to focus on your role.

1. **Install** (pick one):
```bash
# Easiest: npx installer (clones to ~/.agent/skills by default)
npx antigravity-awesome-skills
1. **Install**:

# Or clone manually
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```
2. **Pick your persona** (See [docs/BUNDLES.md](docs/BUNDLES.md)):
- **Web Dev?** use the `Web Wizard` pack.
- **Hacker?** use the `Security Engineer` pack.
- **Just curious?** start with `Essentials`.
```bash
# Default path: ~/.agent/skills
npx antigravity-awesome-skills
```

2. **Verify**:

```bash
test -d ~/.agent/skills && echo "Skills installed in ~/.agent/skills"
```

3. **Run your first skill**:

> "Use **@brainstorming** to plan a SaaS MVP."

4. **Pick a bundle**:
- **Web Dev?** start with `Web Wizard`.
- **Security?** start with `Security Engineer`.
- **General use?** start with `Essentials`.

### 3. 🧠 How to use

@@ -86,53 +106,25 @@ These skills follow the universal **SKILL.md** format and work with any AI coding assistant.

| :-------------- | :--- | :-------------------------------- | :---------------- |
| **Claude Code** | CLI | `>> /skill-name help me...` | `.claude/skills/` |
| **Gemini CLI** | CLI | `(User Prompt) Use skill-name...` | `.gemini/skills/` |
| **Codex CLI** | CLI | `(User Prompt) Use skill-name...` | `.codex/skills/` |
| **Antigravity** | IDE | `(Agent Mode) Use skill...` | `.agent/skills/` |
| **Cursor** | IDE | `@skill-name (in Chat)` | `.cursor/skills/` |
| **Copilot** | Ext | `(Paste content manually)` | N/A |
| **OpenCode** | CLI | `opencode run @skill-name` | `.agent/skills/` |
| **AdaL CLI** | CLI | `(Auto) Skills load on-demand` | `.adal/skills/` |

> [!TIP]
> **Universal Path**: We recommend cloning to `.agent/skills/`. Most modern tools (Antigravity, recent CLIs) look here by default.

> [!WARNING]
> **Windows Users**: This repository uses **symlinks** for official skills.
> The **npx** installer sets `core.symlinks=true` automatically. For **git clone**, enable Developer Mode or run Git as Administrator:
> `git clone -c core.symlinks=true https://github.com/...`
> **Windows Users**: this repository uses **symlinks** for official skills.
> See [Troubleshooting](#troubleshooting) for the exact fix.

---

Whether you are using **Gemini CLI**, **Claude Code**, **Codex CLI**, **Cursor**, **GitHub Copilot**, **Antigravity**, or **OpenCode**, these skills are designed to drop right in and supercharge your AI agent.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.

## Features & Categories

The repository is organized into specialized domains to transform your AI into an expert across the entire software development lifecycle:

| Category | Focus | Example skills |
| :--- | :--- | :--- |
| Architecture (52) | System design, ADRs, C4, and scalable patterns | `architecture`, `c4-context`, `senior-architect` |
| Business (35) | Growth, pricing, CRO, SEO, and go-to-market | `copywriting`, `pricing-strategy`, `seo-audit` |
| Data & AI (81) | LLM apps, RAG, agents, observability, analytics | `rag-engineer`, `prompt-engineer`, `langgraph` |
| Development (72) | Language mastery, framework patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns` |
| General (95) | Planning, docs, product ops, writing, guidelines | `brainstorming`, `doc-coauthoring`, `writing-plans` |
| Infrastructure (72) | DevOps, cloud, serverless, deployment, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` |
| Security (107) | AppSec, pentesting, vuln analysis, compliance | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` |
| Testing (21) | TDD, test design, fixes, QA workflows | `test-driven-development`, `testing-patterns`, `test-fixing` |
| Workflow (17) | Automation, orchestration, jobs, agents | `workflow-automation`, `inngest`, `trigger-dev` |

## Curated Collections

[Check out our Starter Packs in docs/BUNDLES.md](docs/BUNDLES.md) to find the perfect toolkit for your role.

## Browse 625+ Skills

We have moved the full skill registry to a dedicated catalog to keep this README clean.

👉 **[View the Complete Skill Catalog (CATALOG.md)](CATALOG.md)**

## Installation

To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**:
To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, **OpenCode**, or **AdaL**:

### Option A: npx (recommended)

@@ -149,6 +141,12 @@ npx antigravity-awesome-skills --claude

# Gemini CLI
npx antigravity-awesome-skills --gemini

# Codex CLI
npx antigravity-awesome-skills --codex

# OpenCode (Universal)
npx antigravity-awesome-skills

# Custom path
npx antigravity-awesome-skills --path ./my-skills
```

@@ -167,12 +165,129 @@ git clone https://github.com/sickn33/antigravity-awesome-skills.git .claude/skills

# Gemini CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .gemini/skills

# Codex CLI specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .codex/skills

# Cursor specific
git clone https://github.com/sickn33/antigravity-awesome-skills.git .cursor/skills

# OpenCode specific (Universal path)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

---

## Troubleshooting

### `npx antigravity-awesome-skills` returns 404

Use the GitHub package fallback:

```bash
npx github:sickn33/antigravity-awesome-skills
```

### Windows clone issues (symlinks)

This repository uses symlinks for official skills. Enable Developer Mode or run Git as Administrator, then clone with:

```bash
git clone -c core.symlinks=true https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

### Skills installed but not detected by your tool

Install to the tool-specific path (for example `.claude/skills`, `.gemini/skills`, `.codex/skills`, `.cursor/skills`) or use the installer flags (`--claude`, `--gemini`, `--codex`, `--cursor`, `--path`).

### Update an existing installation

```bash
git -C ~/.agent/skills pull
```

### Reinstall from scratch

```bash
rm -rf ~/.agent/skills
npx antigravity-awesome-skills
```

---

## Curated Collections

**Bundles** are curated groups of skills for a specific role or goal (for example: `Web Wizard`, `Security Engineer`, `OSS Maintainer`).

They help you avoid picking from 856+ skills one by one.

What bundles are:

- Recommended starting sets for common workflows.
- A shortcut for onboarding and faster execution.

What bundles are not:

- Not a separate install.
- Not a locked preset.

How to use bundles:

1. Install the repository once.
2. Pick one bundle in [docs/BUNDLES.md](docs/BUNDLES.md).
3. Start with 3-5 skills from that bundle in your prompt.
4. Add more only when needed.

Examples:

- Building a SaaS MVP: `Essentials` + `Full-Stack Developer` + `QA & Testing`.
- Hardening production: `Security Developer` + `DevOps & Cloud` + `Observability & Monitoring`.
- Shipping OSS changes: `Essentials` + `OSS Maintainer`.

## Antigravity Workflows

Bundles help you choose skills. Workflows help you execute them in order.

- Use bundles when you need curated recommendations by role.
- Use workflows when you need step-by-step execution for a concrete goal.

Start here:

- [docs/WORKFLOWS.md](docs/WORKFLOWS.md): human-readable playbooks.
- [data/workflows.json](data/workflows.json): machine-readable workflow metadata (see the sketch after this section).

Initial workflows include:

- Ship a SaaS MVP
- Security Audit for a Web App
- Build an AI Agent System
- QA and Browser Automation (with optional `@go-playwright` support for Go stacks)
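
To see what agents actually consume, you can inspect the registry locally; a quick sketch, assuming `jq` is installed (the exact JSON shape of `data/workflows.json` may differ from this guess):

```bash
# Pretty-print the machine-readable workflow registry
jq . data/workflows.json | head -n 40
```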

## Features & Categories

The repository is organized into specialized domains to transform your AI into an expert across the entire software development lifecycle:

| Category | Focus | Example skills |
| :------------- | :------------------------------------------------- | :------------------------------------------------------------------------------ |
| Architecture | System design, ADRs, C4, and scalable patterns | `architecture`, `c4-context`, `senior-architect` |
| Business | Growth, pricing, CRO, SEO, and go-to-market | `copywriting`, `pricing-strategy`, `seo-audit` |
| Data & AI | LLM apps, RAG, agents, observability, analytics | `rag-engineer`, `prompt-engineer`, `langgraph` |
| Development | Language mastery, framework patterns, code quality | `typescript-expert`, `python-patterns`, `react-patterns` |
| General | Planning, docs, product ops, writing, guidelines | `brainstorming`, `doc-coauthoring`, `writing-plans` |
| Infrastructure | DevOps, cloud, serverless, deployment, CI/CD | `docker-expert`, `aws-serverless`, `vercel-deployment` |
| Security | AppSec, pentesting, vuln analysis, compliance | `api-security-best-practices`, `sql-injection-testing`, `vulnerability-scanner` |
| Testing | TDD, test design, fixes, QA workflows | `test-driven-development`, `testing-patterns`, `test-fixing` |
| Workflow | Automation, orchestration, jobs, agents | `workflow-automation`, `inngest`, `trigger-dev` |

Counts change as new skills are added. For the current full registry, see [CATALOG.md](CATALOG.md).

## Browse 856+ Skills

We have moved the full skill registry to a dedicated catalog to keep this README clean.

👉 **[View the Complete Skill Catalog (CATALOG.md)](CATALOG.md)**

---

## How to Contribute

We welcome contributions from the community! To add a new skill:

@@ -187,6 +302,36 @@ Please ensure your skill follows the Antigravity/Claude Code best practices.

---

## Community

- [Community Guidelines](docs/COMMUNITY_GUIDELINES.md)
- [Security Policy](docs/SECURITY_GUARDRAILS.md)

---

## Support the Project

Support is optional. This project stays free and open-source for everyone.

If this repository saves you time or helps you ship faster, you can support ongoing maintenance:

- [☕ Buy me a book on Buy Me a Coffee](https://buymeacoffee.com/sickn33)

Where support goes:

- Skill curation, testing, and quality validation.
- Documentation updates, examples, and onboarding improvements.
- Faster triage and review of community issues and PRs.

Prefer non-financial support:

- Star the repository.
- Open clear, reproducible issues.
- Submit PRs (skills, docs, fixes).
- Share the project with other builders.

---
## Credits & Sources
|
||||
|
||||
We stand on the shoulders of giants.
|
||||
@@ -210,6 +355,8 @@ This collection would not be possible without the incredible work of the Claude
|
||||
- **[vercel-labs/agent-skills](https://github.com/vercel-labs/agent-skills)**: Vercel Labs official skills - React Best Practices, Web Design Guidelines.
|
||||
- **[openai/skills](https://github.com/openai/skills)**: OpenAI Codex skills catalog - Agent skills, Skill Creator, Concise Planning.
|
||||
- **[supabase/agent-skills](https://github.com/supabase/agent-skills)**: Supabase official skills - Postgres Best Practices.
|
||||
- **[microsoft/skills](https://github.com/microsoft/skills)**: Official Microsoft skills - Azure cloud services, Bot Framework, Cognitive Services, and enterprise development patterns across .NET, Python, TypeScript, Go, Rust, and Java.
|
||||
- **[google-gemini/gemini-skills](https://github.com/google-gemini/gemini-skills)**: Official Gemini skills - Gemini API, SDK and model interactions.
|
||||
|
||||
### Community Contributors
|
||||
|
||||
@@ -240,17 +387,71 @@ This collection would not be possible without the incredible work of the Claude
|
||||
|
||||
---
|
||||
|
||||

## Repo Contributors

<a href="https://github.com/sickn33/antigravity-awesome-skills/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=sickn33/antigravity-awesome-skills" />
</a>

Made with [contrib.rocks](https://contrib.rocks).

We officially thank the following contributors for their help in making this repository awesome!

- [@sck000](https://github.com/sck000)
- [@munir-abbasi](https://github.com/munir-abbasi)
- [@sickn33](https://github.com/sickn33)
- [@Mohammad-Faiz-Cloud-Engineer](https://github.com/Mohammad-Faiz-Cloud-Engineer)
- [@Dokhacgiakhoa](https://github.com/Dokhacgiakhoa)
- [@IanJ332](https://github.com/IanJ332)
- [@chauey](https://github.com/chauey)
- [@PabloSMD](https://github.com/PabloSMD)
- [@GuppyTheCat](https://github.com/GuppyTheCat)
- [@Tiger-Foxx](https://github.com/Tiger-Foxx)
- [@arathiesh](https://github.com/arathiesh)
- [@liyin2015](https://github.com/liyin2015)
- [@1bcMax](https://github.com/1bcMax)
- [@ALEKGG1](https://github.com/ALEKGG1)
- [@ar27111994](https://github.com/ar27111994)
- [@BenedictKing](https://github.com/BenedictKing)
- [@whatiskadudoing](https://github.com/whatiskadudoing)
- [@LocNguyenSGU](https://github.com/LocNguyenSGU)
- [@yubing744](https://github.com/yubing744)
- [@SuperJMN](https://github.com/SuperJMN)
- [@truongnmt](https://github.com/truongnmt)
- [@viktor-ferenczi](https://github.com/viktor-ferenczi)
- [@c1c3ru](https://github.com/c1c3ru)
- [@ckdwns9121](https://github.com/ckdwns9121)
- [@fbientrigo](https://github.com/fbientrigo)
- [@junited31](https://github.com/junited31)
- [@KrisnaSantosa15](https://github.com/KrisnaSantosa15)
- [@sstklen](https://github.com/sstklen)
- [@taksrules](https://github.com/taksrules)
- [@zebbern](https://github.com/zebbern)
- [@vuth-dogo](https://github.com/vuth-dogo)
- [@mvanhorn](https://github.com/mvanhorn)
- [@rookie-ricardo](https://github.com/rookie-ricardo)
- [@evandro-miguel](https://github.com/evandro-miguel)
- [@raeef1001](https://github.com/raeef1001)
- [@devchangjun](https://github.com/devchangjun)
- [@jackjin1997](https://github.com/jackjin1997)
- [@ericgandrade](https://github.com/ericgandrade)
- [@sohamganatra](https://github.com/sohamganatra)
- [@Nguyen-Van-Chan](https://github.com/Nguyen-Van-Chan)

---

## License

MIT License. See [LICENSE](LICENSE) for details.

## Community

- [Community Guidelines](docs/COMMUNITY_GUIDELINES.md)
- [Security Policy](docs/SECURITY_GUARDRAILS.md)

---

## Star History

[](https://www.star-history.com/#sickn33/antigravity-awesome-skills&type=date&legend=top-left)

If Antigravity Awesome Skills has been useful, consider ⭐ starring the repo or [buying me a book](https://buymeacoffee.com/sickn33).

---

## GitHub Topics

@@ -262,40 +463,3 @@ claude-code, gemini-cli, codex-cli, antigravity, cursor, github-copilot, opencode,

agentic-skills, ai-coding, llm-tools, ai-agents, autonomous-coding, mcp,
ai-developer-tools, ai-pair-programming, vibe-coding, skill, skills, SKILL.md, rules.md, CLAUDE.md, GEMINI.md, CURSOR.md
```

---

## Repo Contributors

We officially thank the following contributors for their help in making this repository awesome!

- [mvanhorn](https://github.com/mvanhorn)
- [rookie-ricardo](https://github.com/rookie-ricardo)
- [sck_0](https://github.com/sck_0)
- [Munir Abbasi](https://github.com/munirabbasi)
- [Mohammad Faiz](https://github.com/mohdfaiz2k9)
- [Ianj332](https://github.com/Ianj332)
- [sickn33](https://github.com/sickn33)
- [GuppyTheCat](https://github.com/GuppyTheCat)
- [Tiger-Foxx](https://github.com/Tiger-Foxx)
- [arathiesh](https://github.com/arathiesh)
- [1bcMax](https://github.com/1bcMax)
- [Ahmed Rehan](https://github.com/ar27111994)
- [BenedictKing](https://github.com/BenedictKing)
- [Nguyen Huu Loc](https://github.com/LocNguyenSGU)
- [Owen Wu](https://github.com/yubing744)
- [SuperJMN](https://github.com/SuperJMN)
- [Viktor Ferenczi](https://github.com/viktor-ferenczi)
- [Đỗ Khắc Gia Khoa](https://github.com/Dokhacgiakhoa)
- [evandro-miguel](https://github.com/evandro-miguel)
- [junited31](https://github.com/junited31)
- [krisnasantosa15](https://github.com/krisnasantosa15)
- [raeef1001](https://github.com/raeef1001)
- [taksrules](https://github.com/taksrules)
- [zebbern](https://github.com/zebbern)
- [vuth-dogo](https://github.com/vuth-dogo)
- [whatiskadudoing](https://github.com/whatiskadudoing)

## Star History

[](https://www.star-history.com/#sickn33/antigravity-awesome-skills&type=date&legend=top-left)

183
RELEASE_NOTES.md
@@ -1,183 +0,0 @@
|
||||
# Release v4.5.0: Stitch UI Design
|
||||
|
||||
> **Expert prompting guide for Google Stitch AI-powered UI design tool**
|
||||
|
||||
This release adds the stitch-ui-design skill and clarifies documentation around Starter Packs vs full repo installation, bringing the total to 625 skills. The new skill provides comprehensive guidance for creating effective prompts in Google Stitch (Gemini 2.5 Flash) to generate high-quality UI designs for web and mobile applications.
|
||||
|
||||
## New Skills (1)
|
||||
|
||||
- **[stitch-ui-design](skills/stitch-ui-design/)** – Expert guide for creating effective prompts for Google Stitch AI UI design tool. Covers prompt structure, specificity techniques, iteration strategies, design-to-code workflows, and 10+ practical examples for landing pages, mobile apps, and dashboards.
|
||||
|
||||
> **Try it:** `Use @stitch-ui-design to help me create a prompt for a mobile fitness app dashboard`
|
||||
|
||||
## Documentation Improvements
|
||||
|
||||
- **Clarified Starter Packs**: Updated README.md and GETTING_STARTED.md to explicitly state that installation means cloning the full repo once; Starter Packs are curated lists to help discover which skills to use by role, not a different installation method (fixes [#44](https://github.com/sickn33/antigravity-awesome-skills/issues/44))
|
||||
|
||||
## Registry Update
|
||||
|
||||
- **Total Skills**: 625 (from 624)
|
||||
- **New Skills Added**: 1
|
||||
- **Catalog**: Regenerated with all skills
|
||||
|
||||
## Credits
|
||||
|
||||
A huge shoutout to our community contributors:
|
||||
|
||||
- **[@CypherPoet](https://github.com/CypherPoet)** for raising the documentation clarity issue (#44)
|
||||
|
||||
---
|
||||
|
||||
# Release v4.4.0: fp-ts skills for TypeScript
|
||||
|
||||
> **Three practical fp-ts skills for TypeScript functional programming**
|
||||
|
||||
This release adds 3 fp-ts skills sourced from [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills), bringing the total to 624 skills. These skills focus on practical, jargon-free patterns for pipe, Option, Either, TaskEither, React integration, and type-safe error handling.
|
||||
|
||||
## New Skills (3)
|
||||
|
||||
- **[fp-ts-pragmatic](skills/fp-ts-pragmatic/)** – The 80/20 of functional programming: pipe, Option, Either, TaskEither without academic jargon
|
||||
- **[fp-ts-react](skills/fp-ts-react/)** – Patterns for using fp-ts with React 18/19 and Next.js 14/15 (state, forms, data fetching)
|
||||
- **[fp-ts-errors](skills/fp-ts-errors/)** – Type-safe error handling with Either and TaskEither; no more try/catch spaghetti
|
||||
|
||||
## Registry Update
|
||||
|
||||
- **Total Skills**: 624 (from 621)
|
||||
- **New Skills Added**: 3
|
||||
- **Catalog**: Regenerated with all skills
|
||||
|
||||
---
|
||||
|
||||
# Release v4.3.0: VoltAgent Integration & Context Engineering Suite
|
||||
|
||||
> **Massive expansion with 61 new skills from VoltAgent repository, including official team skills and comprehensive context engineering capabilities**
|
||||
|
||||
This release adds 61 high-quality skills sourced from the VoltAgent/awesome-agent-skills curated collection, bringing the total to 614 skills. Highlights include official skills from Sentry, Trail of Bits, Expo, Hugging Face, and a complete context engineering suite for building sophisticated AI agents.

## 🚀 New Skills

### Official Team Skills (27)

#### Sentry (4)

- **[commit](skills/commit/)** – Create commits with best practices following Sentry conventions
- **[create-pr](skills/create-pr/)** – Create pull requests with proper descriptions and review guidelines
- **[find-bugs](skills/find-bugs/)** – Find and identify bugs in code systematically
- **[iterate-pr](skills/iterate-pr/)** – Iterate on pull request feedback efficiently

#### Trail of Bits (3)

- **[culture-index](skills/culture-index/)** – Index and search culture documentation
- **[fix-review](skills/fix-review/)** – Verify fix commits address audit findings without new bugs
- **[sharp-edges](skills/sharp-edges/)** – Identify error-prone APIs and dangerous configurations

#### Expo (2)

- **[expo-deployment](skills/expo-deployment/)** – Deploy Expo apps to production
- **[upgrading-expo](skills/upgrading-expo/)** – Upgrade Expo SDK versions safely

#### Hugging Face (2)

- **[hugging-face-cli](skills/hugging-face-cli/)** – HF Hub CLI for models, datasets, repos, and compute jobs
- **[hugging-face-jobs](skills/hugging-face-jobs/)** – Run compute jobs and Python scripts on HF infrastructure

#### Other Official (16)

- **[vercel-deploy-claimable](skills/vercel-deploy-claimable/)** – Deploy projects to Vercel
- **[design-md](skills/design-md/)** – Create and manage DESIGN.md files (Google Stitch)
- **[using-neon](skills/using-neon/)** – Best practices for Neon Serverless Postgres
- **[n8n-code-python](skills/n8n-code-python/)** – Python in n8n Code nodes
- **[n8n-mcp-tools-expert](skills/n8n-mcp-tools-expert/)** – n8n MCP tools guide
- **[n8n-node-configuration](skills/n8n-node-configuration/)** – n8n node configuration
- **[swiftui-expert-skill](skills/swiftui-expert-skill/)** – SwiftUI best practices
- **[fal-audio](skills/fal-audio/)** – Text-to-speech and speech-to-text using fal.ai
- **[fal-generate](skills/fal-generate/)** – Generate images and videos using fal.ai AI models
- **[fal-image-edit](skills/fal-image-edit/)** – AI-powered image editing with style transfer
- **[fal-platform](skills/fal-platform/)** – Platform APIs for model management and usage tracking
- **[fal-upscale](skills/fal-upscale/)** – Upscale and enhance image/video resolution using AI
- **[fal-workflow](skills/fal-workflow/)** – Generate workflow JSON files for chaining AI models
- **[deep-research](skills/deep-research/)** – Gemini Deep Research Agent for autonomous research
- **[imagen](skills/imagen/)** – Generate images using Google Gemini
- **[readme](skills/readme/)** – Generate comprehensive project documentation

### Community Skills (34)

#### Context Engineering Suite (7)

A complete suite for building sophisticated AI agents with advanced context management:

- **[context-fundamentals](skills/context-fundamentals/)** – Understand what context is, why it matters, and the anatomy of context in agent systems
- **[context-degradation](skills/context-degradation/)** – Recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash
- **[context-compression](skills/context-compression/)** – Design and evaluate compression strategies for long-running sessions
- **[context-optimization](skills/context-optimization/)** – Apply compaction, masking, and caching strategies
- **[multi-agent-patterns](skills/multi-agent-patterns/)** – Master orchestrator, peer-to-peer, and hierarchical multi-agent architectures
- **[memory-systems](skills/memory-systems/)** – Design short-term, long-term, and graph-based memory architectures
- **[evaluation](skills/evaluation/)** – Build evaluation frameworks for agent systems

#### Development Tools (8)

- **[frontend-slides](skills/frontend-slides/)** – Generate animation-rich HTML presentations with visual style previews
- **[linear-claude-skill](skills/linear-claude-skill/)** – Manage Linear issues, projects, and teams
- **[skill-rails-upgrade](skills/skill-rails-upgrade/)** – Analyze Rails apps and provide upgrade assessments
- **[terraform-skill](skills/terraform-skill/)** – Terraform infrastructure as code best practices
- **[tool-design](skills/tool-design/)** – Build tools that agents can use effectively, including architectural reduction patterns
- **[screenshots](skills/screenshots/)** – Generate marketing screenshots with Playwright
- **[automate-whatsapp](skills/automate-whatsapp/)** – Build WhatsApp automations with workflows and agents
- **[observe-whatsapp](skills/observe-whatsapp/)** – Debug WhatsApp delivery issues and run health checks

#### Platform & Framework Skills (19)

- **[aws-skills](skills/aws-skills/)** – AWS development with infrastructure automation
- **[ui-skills](skills/ui-skills/)** – Opinionated constraints for building interfaces
- **[vexor](skills/vexor/)** – Vector-powered CLI for semantic file search
- **[pypict-skill](skills/pypict-skill/)** – Pairwise test generation
- **[makepad-skills](skills/makepad-skills/)** – Makepad UI development for Rust apps
- **[threejs-skills](skills/threejs-skills/)** – Three.js 3D experiences
- **[claude-scientific-skills](skills/claude-scientific-skills/)** – Scientific research skills
- **[claude-win11-speckit-update-skill](skills/claude-win11-speckit-update-skill/)** – Windows 11 management
- **[security-bluebook-builder](skills/security-bluebook-builder/)** – Security documentation
- **[claude-ally-health](skills/claude-ally-health/)** – Health assistant
- **[clarity-gate](skills/clarity-gate/)** – RAG quality verification
- **[beautiful-prose](skills/beautiful-prose/)** – Writing style guide
- **[claude-speed-reader](skills/claude-speed-reader/)** – Speed reading tool
- **[skill-seekers](skills/skill-seekers/)** – Skill conversion tool
- **[varlock-claude-skill](skills/varlock-claude-skill/)** – Secure environment variable management
- **[superpowers-lab](skills/superpowers-lab/)** – Superpowers Lab integration
- **[nanobanana-ppt-skills](skills/nanobanana-ppt-skills/)** – PowerPoint presentation skills
- **[x-article-publisher-skill](skills/x-article-publisher-skill/)** – X/Twitter article publishing
- **[ffuf-claude-skill](skills/ffuf-claude-skill/)** – Web fuzzing with ffuf

---

## 📦 Registry Update

- **Total Skills**: 614 (from 553)
- **New Skills Added**: 61
- **Catalog**: Fully regenerated with all new skills
- **Sources**: All skills properly attributed in `docs/SOURCES.md`

## 🔧 Improvements

### Quality Assurance

- All new skills validated for frontmatter compliance
- "When to Use" sections added where missing
- Source attribution maintained for all skills
- Risk labels properly set

### Documentation

- Updated README.md with correct skill count (614)
- Updated package.json version to 4.3.0
- Comprehensive release notes created

## 📊 Statistics

- **Skills from VoltAgent Repository**: 61
  - Official Team Skills: 27
  - Community Skills: 34
- **Skills Analyzed**: 174 total from VoltAgent
- **Skills Already Present**: 32 (skipped as duplicates)
- **Skills with Similar Names**: 89 (analyzed, 12 implemented as complementary)

## 👥 Credits

A huge shoutout to our community contributors and the VoltAgent team:

- **VoltAgent/awesome-agent-skills** for curating an excellent collection
- **Official Teams**: Sentry, Trail of Bits, Expo, Hugging Face, Vercel Labs, Google Labs, Neon, fal.ai
- **Community Contributors**: zarazhangrui, wrsmith108, robzolkos, muratcankoylan, antonbabenko, and all other skill authors

---

_Upgrade now: `git pull origin main` to fetch the latest skills._

Binary file not shown. (Before: 52 KiB → After: 50 KiB)
@@ -16,18 +16,23 @@ function resolveDir(p) {
function parseArgs() {
  const a = process.argv.slice(2);
  let pathArg = null;
  let cursor = false, claude = false, gemini = false;
  let versionArg = null;
  let tagArg = null;
  let cursor = false, claude = false, gemini = false, codex = false;

  for (let i = 0; i < a.length; i++) {
    if (a[i] === '--help' || a[i] === '-h') return { help: true };
    if (a[i] === '--path' && a[i + 1]) { pathArg = a[++i]; continue; }
    if (a[i] === '--version' && a[i + 1]) { versionArg = a[++i]; continue; }
    if (a[i] === '--tag' && a[i + 1]) { tagArg = a[++i]; continue; }
    if (a[i] === '--cursor') { cursor = true; continue; }
    if (a[i] === '--claude') { claude = true; continue; }
    if (a[i] === '--gemini') { gemini = true; continue; }
    if (a[i] === '--codex') { codex = true; continue; }
    if (a[i] === 'install') continue;
  }

  return { pathArg, cursor, claude, gemini };
  return { pathArg, versionArg, tagArg, cursor, claude, gemini, codex };
}

function defaultDir(opts) {
@@ -35,6 +40,11 @@ function defaultDir(opts) {
  if (opts.cursor) return path.join(HOME, '.cursor', 'skills');
  if (opts.claude) return path.join(HOME, '.claude', 'skills');
  if (opts.gemini) return path.join(HOME, '.gemini', 'skills');
  if (opts.codex) {
    const codexHome = process.env.CODEX_HOME;
    if (codexHome) return path.join(codexHome, 'skills');
    return path.join(HOME, '.codex', 'skills');
  }
  return path.join(HOME, '.agent', 'skills');
}

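With the new flag, the Codex install target should resolve as follows (a sketch based on the code above; adjust `CODEX_HOME` to your setup):

```bash
# Default Codex location
npx antigravity-awesome-skills --codex                         # -> ~/.codex/skills

# With a custom CODEX_HOME
CODEX_HOME=/opt/codex npx antigravity-awesome-skills --codex   # -> /opt/codex/skills
```
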
@@ -50,11 +60,15 @@ Options:
  --cursor         Install to ~/.cursor/skills (Cursor)
  --claude         Install to ~/.claude/skills (Claude Code)
  --gemini         Install to ~/.gemini/skills (Gemini CLI)
  --codex          Install to ~/.codex/skills (Codex CLI)
  --path <dir>     Install to <dir> (default: ~/.agent/skills)
  --version <ver>  After clone, checkout tag v<ver> (e.g. 4.6.0 -> v4.6.0)
  --tag <tag>      After clone, checkout this tag (e.g. v4.6.0)

Examples:
  npx antigravity-awesome-skills
  npx antigravity-awesome-skills --cursor
  npx antigravity-awesome-skills --version 4.6.0
  npx antigravity-awesome-skills --path ./my-skills
`);
}

@@ -66,6 +80,8 @@ function run(cmd, args, opts = {}) {

function main() {
  const opts = parseArgs();
  const { tagArg, versionArg } = opts;

  if (opts.help) {
    printHelp();
    return;
@@ -106,6 +122,13 @@ function main() {
    run('git', ['clone', REPO, target]);
  }

  const ref = tagArg || (versionArg ? (versionArg.startsWith('v') ? versionArg : `v${versionArg}`) : null);
  if (ref) {
    console.log(`Checking out ${ref}…`);
    process.chdir(target);
    run('git', ['checkout', ref]);
  }

  console.log(`\nInstalled to ${target}`);
  console.log('Pick a bundle in docs/BUNDLES.md and use @skill-name in your AI assistant.');
}

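For a pinned install without the npx wrapper, the equivalent git commands look roughly like this (the tag format follows the help text above):

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git ~/.agent/skills
cd ~/.agent/skills
git checkout v4.6.0   # what --version 4.6.0 (or --tag v4.6.0) does after cloning
```
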
@@ -1,5 +1,5 @@
{
  "generatedAt": "2026-01-31T07:34:21.497Z",
  "generatedAt": "2026-02-08T00:00:00.000Z",
  "aliases": {
    "accessibility-compliance-audit": "accessibility-compliance-accessibility-audit",
    "active directory attacks": "active-directory-attacks",
@@ -7,8 +7,28 @@
    "agent-orchestration-optimize": "agent-orchestration-multi-agent-optimize",
    "api fuzzing for bug bounty": "api-fuzzing-bug-bounty",
    "api-testing-mock": "api-testing-observability-api-mock",
    "templates": "app-builder/templates",
    "application-performance-optimization": "application-performance-performance-optimization",
    "aws penetration testing": "aws-penetration-testing",
    "azure-ai-dotnet": "azure-ai-agents-persistent-dotnet",
    "azure-ai-java": "azure-ai-agents-persistent-java",
    "azure-ai-py": "azure-ai-contentunderstanding-py",
    "azure-ai-ts": "azure-ai-document-intelligence-ts",
    "azure-communication-java": "azure-communication-callautomation-java",
    "azure-keyvault-rust": "azure-keyvault-certificates-rust",
    "azure-messaging-java": "azure-messaging-webpubsub-java",
    "azure-messaging-py": "azure-messaging-webpubsubservice-py",
    "azure-mgmt-dotnet": "azure-mgmt-apimanagement-dotnet",
    "azure-microsoft-ts": "azure-microsoft-playwright-testing-ts",
    "azure-monitor-java": "azure-monitor-ingestion-java",
    "azure-monitor-py": "azure-monitor-opentelemetry-exporter-py",
    "azure-monitor-ts": "azure-monitor-opentelemetry-ts",
    "azure-resource-dotnet": "azure-resource-manager-cosmosdb-dotnet",
    "azure-search-dotnet": "azure-search-documents-dotnet",
    "azure-security-dotnet": "azure-security-keyvault-keys-dotnet",
    "azure-security-java": "azure-security-keyvault-keys-java",
    "azure-speech-py": "azure-speech-to-text-rest-py",
    "azure-storage-py": "azure-storage-file-datalake-py",
    "backend-development-feature": "backend-development-feature-development",
    "brand-guidelines": "brand-guidelines-anthropic",
    "broken authentication testing": "broken-authentication",
@@ -45,6 +65,7 @@
    "deployment-validation-validate": "deployment-validation-config-validate",
    "distributed-debugging-trace": "distributed-debugging-debug-trace",
    "documentation-generation-generate": "documentation-generation-doc-generate",
    "docx": "docx-official",
    "error-debugging-analysis": "error-debugging-error-analysis",
    "error-debugging-review": "error-debugging-multi-agent-review",
    "error-diagnostics-analysis": "error-diagnostics-error-analysis",
@@ -59,6 +80,16 @@
    "frontend-mobile-scaffold": "frontend-mobile-development-component-scaffold",
    "frontend-mobile-scan": "frontend-mobile-security-xss-scan",
    "full-stack-feature": "full-stack-orchestration-full-stack-feature",
    "2d-games": "game-development/2d-games",
    "3d-games": "game-development/3d-games",
    "game-art": "game-development/game-art",
    "game-audio": "game-development/game-audio",
    "game-design": "game-development/game-design",
    "mobile-games": "game-development/mobile-games",
    "multiplayer": "game-development/multiplayer",
    "pc-games": "game-development/pc-games",
    "vr-ar": "game-development/vr-ar",
    "web-games": "game-development/web-games",
    "git-pr-workflow": "git-pr-workflows-git-workflow",
    "html injection testing": "html-injection-testing",
    "idor vulnerability testing": "idor-testing",
@@ -73,17 +104,20 @@
    "llm-application-optimize": "llm-application-dev-prompt-optimize",
    "machine-learning-pipeline": "machine-learning-ops-ml-pipeline",
    "metasploit framework": "metasploit-framework",
    "microsoft-azure-dotnet": "microsoft-azure-webjobs-extensions-authentication-events-dotnet",
    "moodle-external-development": "moodle-external-api-development",
    "multi-platform-apps": "multi-platform-apps-multi-platform",
    "network 101": "network-101",
    "observability-monitoring-setup": "observability-monitoring-monitor-setup",
    "observability-monitoring-implement": "observability-monitoring-slo-implement",
    "obsidian-clipper-creator": "obsidian-clipper-template-creator",
    "pdf": "pdf-official",
    "pentest checklist": "pentest-checklist",
    "pentest commands": "pentest-commands",
    "performance-testing-ai": "performance-testing-review-ai-review",
    "performance-testing-agent": "performance-testing-review-multi-agent-review",
    "supabase-postgres-best-practices": "postgres-best-practices",
    "pptx": "pptx-official",
    "privilege escalation methods": "privilege-escalation-methods",
    "python-development-scaffold": "python-development-python-scaffold",
    "vercel-react-best-practices": "react-best-practices",
@@ -107,6 +141,7 @@
    "windows privilege escalation": "windows-privilege-escalation",
    "wireshark network traffic analysis": "wireshark-analysis",
    "wordpress penetration testing": "wordpress-penetration-testing",
    "xlsx": "xlsx-official",
    "cross-site scripting and html injection testing": "xss-html-injection"
  }
}
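A minimal sketch of how a consumer could resolve one of these aliases (assumes Node.js with `data/aliases.json` on disk; the lookup helper is illustrative, not the project's actual resolver):

```typescript
import { readFileSync } from "node:fs";

// Illustrative lookup against data/aliases.json: fall back to the name itself.
const { aliases } = JSON.parse(readFileSync("data/aliases.json", "utf8"));
const resolveSkill = (name: string): string => aliases[name] ?? name;

console.log(resolveSkill("pdf"));            // "pdf-official"
console.log(resolveSkill("react-patterns")); // no alias, stays "react-patterns"
```
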
@@ -1,10 +1,11 @@
{
  "generatedAt": "2026-01-31T07:34:21.497Z",
  "generatedAt": "2026-02-08T00:00:00.000Z",
  "bundles": {
    "core-dev": {
      "description": "Core development skills across languages, frameworks, and backend/frontend fundamentals.",
      "skills": [
        "3d-web-experience",
        "agent-framework-azure-ai-py",
        "algolia-search",
        "api-design-principles",
        "api-documentation-generator",
@@ -19,7 +20,91 @@
        "async-python-patterns",
        "autonomous-agents",
        "aws-serverless",
        "azure-ai-agents-persistent-java",
        "azure-ai-anomalydetector-java",
        "azure-ai-contentsafety-java",
        "azure-ai-contentsafety-py",
        "azure-ai-contentunderstanding-py",
        "azure-ai-formrecognizer-java",
        "azure-ai-ml-py",
        "azure-ai-projects-java",
        "azure-ai-projects-py",
        "azure-ai-projects-ts",
        "azure-ai-transcription-py",
        "azure-ai-translation-ts",
        "azure-ai-vision-imageanalysis-java",
        "azure-ai-voicelive-java",
        "azure-ai-voicelive-py",
        "azure-ai-voicelive-ts",
        "azure-appconfiguration-java",
        "azure-appconfiguration-py",
        "azure-appconfiguration-ts",
        "azure-communication-callautomation-java",
        "azure-communication-callingserver-java",
        "azure-communication-chat-java",
        "azure-communication-common-java",
        "azure-communication-sms-java",
        "azure-compute-batch-java",
        "azure-containerregistry-py",
        "azure-cosmos-db-py",
        "azure-cosmos-java",
        "azure-cosmos-py",
        "azure-cosmos-rust",
        "azure-cosmos-ts",
        "azure-data-tables-java",
        "azure-data-tables-py",
        "azure-eventgrid-java",
        "azure-eventgrid-py",
        "azure-eventhub-java",
        "azure-eventhub-py",
        "azure-eventhub-rust",
        "azure-eventhub-ts",
        "azure-functions",
        "azure-identity-java",
        "azure-identity-py",
        "azure-identity-rust",
        "azure-identity-ts",
        "azure-keyvault-certificates-rust",
        "azure-keyvault-keys-rust",
        "azure-keyvault-keys-ts",
        "azure-keyvault-py",
        "azure-keyvault-secrets-rust",
        "azure-keyvault-secrets-ts",
        "azure-messaging-webpubsub-java",
        "azure-messaging-webpubsubservice-py",
        "azure-mgmt-apicenter-dotnet",
        "azure-mgmt-apicenter-py",
        "azure-mgmt-apimanagement-dotnet",
        "azure-mgmt-apimanagement-py",
        "azure-mgmt-applicationinsights-dotnet",
        "azure-mgmt-botservice-py",
        "azure-mgmt-fabric-py",
        "azure-monitor-ingestion-java",
        "azure-monitor-ingestion-py",
        "azure-monitor-opentelemetry-exporter-java",
        "azure-monitor-opentelemetry-exporter-py",
        "azure-monitor-opentelemetry-py",
        "azure-monitor-opentelemetry-ts",
        "azure-monitor-query-java",
        "azure-monitor-query-py",
        "azure-postgres-ts",
        "azure-search-documents-py",
        "azure-search-documents-ts",
        "azure-security-keyvault-keys-java",
        "azure-security-keyvault-secrets-java",
        "azure-servicebus-py",
        "azure-servicebus-ts",
        "azure-speech-to-text-rest-py",
        "azure-storage-blob-java",
        "azure-storage-blob-py",
        "azure-storage-blob-rust",
        "azure-storage-blob-ts",
        "azure-storage-file-datalake-py",
        "azure-storage-file-share-py",
        "azure-storage-file-share-ts",
        "azure-storage-queue-py",
        "azure-storage-queue-ts",
        "azure-web-pubsub-ts",
        "backend-architect",
        "backend-dev-guidelines",
        "backend-development-feature-development",
@@ -33,14 +118,17 @@
        "claude-d3js-skill",
        "code-documentation-doc-generate",
        "context7-auto-research",
        "copilot-sdk",
        "discord-bot-architect",
        "django-pro",
        "documentation-generation-doc-generate",
        "documentation-templates",
        "dotnet-architect",
        "dotnet-backend",
        "dotnet-backend-patterns",
        "exa-search",
        "fastapi-pro",
        "fastapi-router-py",
        "fastapi-templates",
        "firebase",
        "firecrawl-scraper",
@@ -55,7 +143,11 @@
        "frontend-mobile-security-xss-scan",
        "frontend-security-coder",
        "frontend-slides",
        "frontend-ui-dark-ts",
        "game-development/mobile-games",
        "gemini-api-dev",
        "go-concurrency-patterns",
        "go-playwright",
        "golang-pro",
        "graphql",
        "hubspot-integration",
@@ -68,8 +160,11 @@
        "javascript-typescript-typescript-scaffold",
        "langgraph",
        "launch-strategy",
        "m365-agents-py",
        "m365-agents-ts",
        "makepad-skills",
        "mcp-builder",
        "mcp-builder-ms",
        "memory-safety-patterns",
        "mobile-design",
        "mobile-developer",
@@ -88,7 +183,9 @@
        "openapi-spec-generation",
        "php-pro",
        "plaid-fintech",
        "podcast-generation",
        "product-manager-toolkit",
        "pydantic-models-py",
        "python-development-python-scaffold",
        "python-packaging",
        "python-patterns",
@@ -96,6 +193,7 @@
        "python-pro",
        "python-testing-patterns",
        "react-best-practices",
        "react-flow-node-ts",
        "react-modernization",
        "react-native-architecture",
        "react-patterns",
@@ -111,6 +209,7 @@
        "shodan-reconnaissance",
        "shopify-apps",
        "shopify-development",
        "slack-automation",
        "slack-bot-builder",
        "stitch-ui-design",
        "swiftui-expert-skill",
@@ -133,18 +232,28 @@
        "voice-agents",
        "voice-ai-development",
        "web-artifacts-builder",
        "webapp-testing"
        "webapp-testing",
        "zustand-store-ts"
      ]
    },
    "security-core": {
      "description": "Security, privacy, and compliance essentials.",
      "skills": [
        "accessibility-compliance-accessibility-audit",
        "antigravity-workflows",
        "api-fuzzing-bug-bounty",
        "api-security-best-practices",
        "attack-tree-construction",
        "auth-implementation-patterns",
        "aws-penetration-testing",
        "azure-cosmos-db-py",
        "azure-identity-dotnet",
        "azure-keyvault-py",
        "azure-keyvault-secrets-rust",
        "azure-keyvault-secrets-ts",
        "azure-security-keyvault-keys-dotnet",
        "azure-security-keyvault-keys-java",
        "azure-security-keyvault-secrets-java",
        "backend-security-coder",
        "broken-authentication",
        "burp-suite-testing",
@@ -163,6 +272,7 @@
        "deployment-pipeline-design",
        "design-orchestration",
        "docker-expert",
        "dotnet-backend",
        "ethical-hacking-methodology",
        "find-bugs",
        "firebase",
@@ -182,6 +292,8 @@
        "legal-advisor",
        "linkerd-patterns",
        "loki-mode",
        "m365-agents-dotnet",
        "m365-agents-py",
        "malware-analyst",
        "metasploit-framework",
        "mobile-security-coder",
@@ -233,8 +345,22 @@
    "k8s-core": {
      "description": "Kubernetes and service mesh essentials.",
      "skills": [
        "azd-deployment",
        "azure-cosmos-db-py",
        "azure-identity-dotnet",
        "azure-identity-java",
        "azure-identity-py",
        "azure-identity-ts",
        "azure-messaging-webpubsubservice-py",
        "azure-mgmt-apimanagement-dotnet",
        "azure-mgmt-botservice-dotnet",
        "azure-mgmt-botservice-py",
        "azure-servicebus-dotnet",
        "azure-servicebus-py",
        "azure-servicebus-ts",
        "backend-architect",
        "devops-troubleshooter",
        "freshservice-automation",
        "gitops-workflow",
        "helm-chart-scaffolding",
        "istio-traffic-management",
@@ -258,6 +384,36 @@
      "skills": [
        "airflow-dag-patterns",
        "analytics-tracking",
        "angular-ui-patterns",
        "azure-ai-document-intelligence-dotnet",
        "azure-ai-document-intelligence-ts",
        "azure-ai-textanalytics-py",
        "azure-cosmos-db-py",
        "azure-cosmos-java",
        "azure-cosmos-py",
        "azure-cosmos-rust",
        "azure-cosmos-ts",
        "azure-data-tables-java",
        "azure-data-tables-py",
        "azure-eventhub-dotnet",
        "azure-eventhub-java",
        "azure-eventhub-rust",
        "azure-eventhub-ts",
        "azure-maps-search-dotnet",
        "azure-mgmt-applicationinsights-dotnet",
        "azure-monitor-ingestion-java",
        "azure-monitor-ingestion-py",
        "azure-monitor-query-java",
        "azure-monitor-query-py",
        "azure-postgres-ts",
        "azure-resource-manager-cosmosdb-dotnet",
        "azure-resource-manager-mysql-dotnet",
        "azure-resource-manager-postgresql-dotnet",
        "azure-resource-manager-redis-dotnet",
        "azure-resource-manager-sql-dotnet",
        "azure-security-keyvault-secrets-java",
        "azure-storage-blob-java",
        "azure-storage-file-datalake-py",
        "blockrun",
        "business-analyst",
        "cc-skill-backend-patterns",
@@ -282,7 +438,10 @@
        "firebase",
        "fp-ts-react",
        "frontend-dev-guidelines",
        "frontend-ui-dark-ts",
        "gdpr-data-handling",
        "google-analytics-automation",
        "googlesheets-automation",
        "graphql",
        "hugging-face-jobs",
        "hybrid-cloud-networking",
@@ -291,6 +450,7 @@
        "kpi-dashboard-design",
        "legal-advisor",
        "loki-mode",
        "mailchimp-automation",
        "ml-pipeline-workflow",
        "moodle-external-api-development",
        "neon-postgres",
@@ -303,12 +463,14 @@
        "postgresql",
        "prisma-expert",
        "programmatic-seo",
        "pydantic-models-py",
        "quant-analyst",
        "react-best-practices",
        "react-ui-patterns",
        "scala-pro",
        "schema-markup",
        "segment-cdp",
        "sendgrid-automation",
        "senior-architect",
        "seo-audit",
        "spark-optimization",
@@ -316,11 +478,12 @@
        "sql-optimization-patterns",
        "sql-pro",
        "sqlmap-database-pentesting",
        "supabase-automation",
        "unity-ecs-patterns",
        "using-neon",
        "vector-database-engineer",
        "xlsx",
        "xlsx-official"
        "xlsx-official",
        "youtube-automation"
      ]
    },
    "ops-core": {
@@ -331,6 +494,13 @@
        "api-testing-observability-api-mock",
        "application-performance-performance-optimization",
        "aws-serverless",
        "azd-deployment",
        "azure-ai-anomalydetector-java",
        "azure-mgmt-applicationinsights-dotnet",
        "azure-mgmt-arizeaiobservabilityeval-dotnet",
        "azure-mgmt-weightsandbiases-dotnet",
        "azure-monitor-opentelemetry-exporter-java",
        "azure-monitor-opentelemetry-ts",
        "backend-architect",
        "backend-development-feature-development",
        "c4-container",
@@ -357,6 +527,7 @@
        "error-diagnostics-error-trace",
        "expo-deployment",
        "flutter-expert",
        "game-development/game-art",
        "git-pr-workflows-git-workflow",
        "gitlab-ci-patterns",
        "gitops-workflow",
@@ -382,8 +553,10 @@
        "observability-monitoring-slo-implement",
        "performance-engineer",
        "performance-testing-review-ai-review",
        "pipedrive-automation",
        "postmortem-writing",
        "prometheus-configuration",
        "readme",
        "risk-metrics-calculation",
        "security-auditor",
        "server-management",

data/catalog.json (6390 lines changed): file diff suppressed because it is too large.
@@ -1,6 +1,6 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.2.0",
  "version": "4.6.0",
  "dependencies": {
    "yaml": "^2.8.2"
  }

data/workflows.json (new file, 216 lines):
@@ -0,0 +1,216 @@
{
  "generatedAt": "2026-02-10T00:00:00.000Z",
  "version": 1,
  "workflows": [
    {
      "id": "ship-saas-mvp",
      "name": "Ship a SaaS MVP",
      "description": "End-to-end workflow to scope, build, test, and ship a SaaS MVP quickly.",
      "category": "web",
      "relatedBundles": [
        "core-dev",
        "ops-core"
      ],
      "steps": [
        {
          "title": "Plan the scope",
          "goal": "Convert the idea into a clear implementation plan and milestones.",
          "recommendedSkills": [
            "brainstorming",
            "concise-planning",
            "writing-plans"
          ],
          "notes": "Define problem, user persona, MVP boundaries, and acceptance criteria before coding."
        },
        {
          "title": "Build backend and API",
          "goal": "Implement the core data model, API contracts, and auth baseline.",
          "recommendedSkills": [
            "backend-dev-guidelines",
            "api-patterns",
            "database-design",
            "auth-implementation-patterns"
          ],
          "notes": "Prefer small vertical slices; keep API contracts explicit and testable."
        },
        {
          "title": "Build frontend",
          "goal": "Deliver the primary user flows with production-grade UX patterns.",
          "recommendedSkills": [
            "frontend-developer",
            "react-patterns",
            "frontend-design"
          ],
          "notes": "Prioritize onboarding, empty states, and one complete happy-path flow."
        },
        {
          "title": "Test and validate",
          "goal": "Catch regressions and ensure key flows work before release.",
          "recommendedSkills": [
            "test-driven-development",
            "systematic-debugging",
            "browser-automation",
            "go-playwright"
          ],
          "notes": "Use go-playwright when the product stack or QA tooling is Go-based."
        },
        {
          "title": "Ship safely",
          "goal": "Release with basic observability and rollback readiness.",
          "recommendedSkills": [
            "deployment-procedures",
            "observability-engineer",
            "postmortem-writing"
          ],
          "notes": "Define release checklist, minimum telemetry, and rollback triggers."
        }
      ]
    },
    {
      "id": "security-audit-web-app",
      "name": "Security Audit for a Web App",
      "description": "Structured workflow for baseline AppSec review and risk triage.",
      "category": "security",
      "relatedBundles": [
        "security-core",
        "ops-core"
      ],
      "steps": [
        {
          "title": "Define scope and threat model",
          "goal": "Identify critical assets, trust boundaries, and threat scenarios.",
          "recommendedSkills": [
            "ethical-hacking-methodology",
            "threat-modeling-expert",
            "attack-tree-construction"
          ],
          "notes": "Document in-scope targets, assumptions, and out-of-scope constraints."
        },
        {
          "title": "Review authentication and authorization",
          "goal": "Find broken auth patterns and access-control weaknesses.",
          "recommendedSkills": [
            "broken-authentication",
            "auth-implementation-patterns",
            "idor-testing"
          ],
          "notes": "Prioritize account takeover and privilege escalation paths."
        },
        {
          "title": "Assess API and input security",
          "goal": "Detect high-impact API and injection risks.",
          "recommendedSkills": [
            "api-security-best-practices",
            "api-fuzzing-bug-bounty",
            "top-web-vulnerabilities"
          ],
          "notes": "Map findings to severity and exploitability, not only CVSS."
        },
        {
          "title": "Harden and verify",
          "goal": "Translate findings into concrete remediations and retest.",
          "recommendedSkills": [
            "security-auditor",
            "sast-configuration",
            "verification-before-completion"
          ],
          "notes": "Track remediation owners and target dates; verify each fix with evidence."
        }
      ]
    },
    {
      "id": "build-ai-agent-system",
      "name": "Build an AI Agent System",
      "description": "Workflow to design, implement, and evaluate a production-ready AI agent.",
      "category": "ai-agents",
      "relatedBundles": [
        "core-dev",
        "data-core"
      ],
      "steps": [
        {
          "title": "Define use case and reliability targets",
          "goal": "Choose a narrow use case and measurable quality goals.",
          "recommendedSkills": [
            "ai-agents-architect",
            "agent-evaluation",
            "product-manager-toolkit"
          ],
          "notes": "Set latency, quality, and failure-rate thresholds before implementation."
        },
        {
          "title": "Design architecture and retrieval",
          "goal": "Design tools, memory, and retrieval strategy for the agent.",
          "recommendedSkills": [
            "llm-app-patterns",
            "rag-implementation",
            "vector-database-engineer",
            "embedding-strategies"
          ],
          "notes": "Keep retrieval quality measurable and version prompt/tool contracts."
        },
        {
          "title": "Implement orchestration",
          "goal": "Implement the orchestration loop and production safeguards.",
          "recommendedSkills": [
            "langgraph",
            "mcp-builder",
            "workflow-automation"
          ],
          "notes": "Start with constrained tool permissions and explicit fallback behavior."
        },
        {
          "title": "Evaluate and iterate",
          "goal": "Run benchmark scenarios and improve weak areas systematically.",
          "recommendedSkills": [
            "agent-evaluation",
            "langfuse",
            "kaizen"
          ],
          "notes": "Use test datasets and failure buckets to guide each iteration cycle."
        }
      ]
    },
    {
      "id": "qa-browser-automation",
      "name": "QA and Browser Automation",
      "description": "Workflow for robust E2E and browser-driven validation across stacks.",
      "category": "testing",
      "relatedBundles": [
        "core-dev",
        "ops-core"
      ],
      "steps": [
        {
          "title": "Prepare test strategy",
          "goal": "Define critical user journeys, environments, and test data.",
          "recommendedSkills": [
            "e2e-testing-patterns",
            "test-driven-development",
            "code-review-checklist"
          ],
          "notes": "Focus on business-critical flows and keep setup deterministic."
        },
        {
          "title": "Implement browser tests",
          "goal": "Automate key flows with resilient locators and stable waits.",
          "recommendedSkills": [
            "browser-automation",
            "go-playwright"
          ],
          "notes": "Use go-playwright for Go-native automation projects and Playwright for JS/TS stacks."
        },
        {
          "title": "Triage failures and harden",
          "goal": "Stabilize flaky tests and establish repeatable CI execution.",
          "recommendedSkills": [
            "systematic-debugging",
            "test-fixing",
            "verification-before-completion"
          ],
          "notes": "Classify failures by root cause: selector drift, timing, environment, data."
        }
      ]
    }
  ]
}

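The schema is straightforward to consume; for example, a small sketch that lists every workflow and its recommended skills (assumes Node.js with the file on disk; this helper is illustrative, not part of the repo's tooling):

```typescript
import { readFileSync } from "node:fs";

// Illustrative: print each workflow with its ordered steps and skills.
const { workflows } = JSON.parse(readFileSync("data/workflows.json", "utf8"));
for (const wf of workflows) {
  console.log(`${wf.id} (${wf.category}): ${wf.name}`);
  wf.steps.forEach((step: { title: string; recommendedSkills: string[] }, i: number) =>
    console.log(`  ${i + 1}. ${step.title} -> ${step.recommendedSkills.join(", ")}`)
  );
}
```
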
@@ -14,9 +14,10 @@

2. **Choose your bundle** from the list below based on your role or interests.

3. **Use skills** by referencing them in your AI assistant:
   - Claude Code: `>> @skill-name help me...`
   - Claude Code: `>> /skill-name help me...`
   - Cursor: `@skill-name in chat`
   - Gemini CLI: `Use skill-name...`
   - Codex CLI: `Use skill-name...`

---

@@ -328,33 +329,77 @@ _For system design and technical decisions._

---

## 🧰 Maintainer & OSS

### 🛠️ The "OSS Maintainer" Pack

_For shipping clean changes in public repositories._

- [`commit`](../skills/commit/): High-quality conventional commits.
- [`create-pr`](../skills/create-pr/): PR creation with review-ready context.
- [`requesting-code-review`](../skills/requesting-code-review/): Ask for targeted, high-signal reviews.
- [`receiving-code-review`](../skills/receiving-code-review/): Apply feedback with technical rigor.
- [`changelog-automation`](../skills/changelog-automation/): Keep release notes and changelogs consistent.
- [`git-advanced-workflows`](../skills/git-advanced-workflows/): Rebase, cherry-pick, bisect, recovery.
- [`documentation-templates`](../skills/documentation-templates/): Standardize docs and handoffs.

### 🧱 The "Skill Author" Pack

_For creating and maintaining high-quality SKILL.md assets._

- [`skill-creator`](../skills/skill-creator/): Design effective new skills.
- [`skill-developer`](../skills/skill-developer/): Implement triggers, hooks, and skill lifecycle.
- [`writing-skills`](../skills/writing-skills/): Improve clarity and structure of skill instructions.
- [`documentation-generation-doc-generate`](../skills/documentation-generation-doc-generate/): Generate maintainable technical docs.
- [`lint-and-validate`](../skills/lint-and-validate/): Validate quality after edits.
- [`verification-before-completion`](../skills/verification-before-completion/): Confirm changes before claiming done.

---

## 📚 How to Use Bundles

### Installation
### 1) Pick by immediate goal

1. **Clone the repository:**

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

- Need to ship a feature now: `Essentials` + one domain pack (`Web Wizard`, `Python Pro`, `DevOps & Cloud`).
- Need reliability and hardening: add `QA & Testing` + `Security Developer`.
- Need product growth: add `Startup Founder` or `Marketing & Growth`.

2. **Or use the installer:**

```bash
npx antigravity-awesome-skills
```

### 2) Start with 3-5 skills, not 20

### Using Skills
Pick the minimum set for your current milestone. Expand only when you hit a real gap.

Once installed, reference skills in your AI assistant:
### 3) Invoke skills consistently

- **Claude Code**: `>> @skill-name help me...`
- **Claude Code**: `>> /skill-name help me...`
- **Cursor**: `@skill-name` in chat
- **Gemini CLI**: `Use skill-name...`
- **Codex CLI**: `Use skill-name...`

### Customizing Bundles
### 4) Build your personal shortlist

You can create your own bundle by:
1. Copying skill folders to your `.agent/skills/` directory
2. Or referencing multiple skills in a single conversation
Keep a small list of high-frequency skills and reuse it across tasks to reduce context switching.

## 🧩 Recommended Bundle Combos

### Ship a SaaS MVP (2 weeks)

`Essentials` + `Full-Stack Developer` + `QA & Testing` + `Startup Founder`

### Harden an existing production app

`Essentials` + `Security Developer` + `DevOps & Cloud` + `Observability & Monitoring`

### Build an AI product

`Essentials` + `Agent Architect` + `LLM Application Developer` + `Data Engineering`

### Grow traffic and conversions

`Web Wizard` + `Marketing & Growth` + `Data & Analytics`

### Launch and maintain open source

`Essentials` + `OSS Maintainer` + `Architecture & Design`

---

@@ -377,6 +422,11 @@ You can create your own bundle by:

2. Grow: `Security Engineer` → Advanced pentesting
3. Master: Red team tactics and threat modeling

**Open Source Maintenance:**
1. Start: `Essentials` → `OSS Maintainer`
2. Grow: `Architecture & Design` → `QA & Testing`
3. Master: `Skill Author` + release automation workflows

---

## 🤝 Contributing
@@ -393,4 +443,4 @@ Found a skill that should be in a bundle? Or want to create a new bundle? [Open

---

_Last updated: January 2026 | Total Skills: 560+ | Total Bundles: 20+_
_Last updated: February 2026 | Total Skills: 713+ | Total Bundles: 26_

docs/FAQ.md (20 lines changed):
@@ -11,12 +11,23 @@
Skills are specialized instruction files that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on-demand.
**Simple analogy:** Just like you might consult different experts (a lawyer, a doctor, a mechanic), these skills let your AI become an expert in different areas when you need them.

### Do I need to install all 624+ skills?
### Do I need to install all 700+ skills?

**No!** When you clone the repository, all skills are available, but your AI only loads them when you explicitly invoke them with `@skill-name`.
It's like having a library - all books are there, but you only read the ones you need.
**Pro Tip:** Use [Starter Packs](BUNDLES.md) to install only what matches your role.

### What is the difference between Bundles and Workflows?

- **Bundles** are curated recommendations grouped by role or domain.
- **Workflows** are ordered execution playbooks for concrete outcomes.

Use bundles when you are deciding *which skills* to include. Use workflows when you need *step-by-step execution*.

Start from:
- [BUNDLES.md](BUNDLES.md)
- [WORKFLOWS.md](WORKFLOWS.md)

### Which AI tools work with these skills?

- ✅ **Claude Code** (Anthropic CLI)
@@ -62,7 +73,11 @@ _Always check the Risk label and review the code._

### Where should I install the skills?

The universal path that works with most tools is `.agent/skills/`:
The universal path that works with most tools is `.agent/skills/`.

**Using npx:** `npx antigravity-awesome-skills` (or `npx github:sickn33/antigravity-awesome-skills` if you get a 404).

**Using git clone:**

```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```
@@ -72,6 +87,7 @@ git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skill

- Claude Code: `.claude/skills/`
- Gemini CLI: `.gemini/skills/`
- Codex CLI: `.codex/skills/`
- Cursor: `.cursor/skills/` or project root

### Does this work with Windows?

@@ -15,7 +15,7 @@ AI Agents (like **Claude Code**, **Gemini**, **Cursor**) are smart, but they lac

## ⚡️ Quick Start: The "Starter Packs"

Don't panic about the 624+ skills. You don't need them all at once.
Don't panic about the 700+ skills. You don't need them all at once.
We have curated **Starter Packs** to get you running immediately.

You **install the full repo once** (npx or clone); Starter Packs are curated lists to help you **pick which skills to use** by role (e.g. Web Wizard, Hacker Pack)—they are not a different way to install.
@@ -28,7 +28,9 @@ You **install the full repo once** (npx or clone); Starter Packs are curated lis

```bash
npx antigravity-awesome-skills
```

This clones to `~/.agent/skills` by default. Use `--cursor`, `--claude`, or `--gemini` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.
This clones to `~/.agent/skills` by default. Use `--cursor`, `--claude`, `--gemini`, or `--codex` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.

If you see a 404 error, use: `npx github:sickn33/antigravity-awesome-skills`

**Option B — git clone:**

@@ -50,6 +52,21 @@ Find the bundle that matches your role (see [BUNDLES.md](BUNDLES.md)):

---

## 🧭 Bundles vs Workflows

Bundles and workflows solve different problems:

- **Bundles** = curated sets by role (what to pick).
- **Workflows** = step-by-step playbooks (how to execute).

Start with bundles in [BUNDLES.md](BUNDLES.md), then run a workflow from [WORKFLOWS.md](WORKFLOWS.md) when you need guided execution.

Example:

> "Use **@antigravity-workflows** and run `ship-saas-mvp` for my project idea."

---

## 🚀 How to Use a Skill

Once installed, just talk to your AI naturally.
@@ -80,6 +97,7 @@ Once installed, just talk to your AI naturally.

| :-------------- | :-------------- | :---------------- |
| **Claude Code** | ✅ Full Support | `.claude/skills/` |
| **Gemini CLI**  | ✅ Full Support | `.gemini/skills/` |
| **Codex CLI**   | ✅ Full Support | `.codex/skills/`  |
| **Antigravity** | ✅ Native       | `.agent/skills/`  |
| **Cursor**      | ✅ Native       | `.cursor/skills/` |
| **Copilot**     | ⚠️ Text Only    | Manual copy-paste |
@@ -100,7 +118,7 @@ _Check the [Skill Catalog](../CATALOG.md) for the full list._

## ❓ FAQ

**Q: Do I need to install all 624 skills?**
**Q: Do I need to install all 700+ skills?**
A: You clone the whole repo once; your AI only _reads_ the skills you invoke (or that are relevant), so it stays lightweight. **Starter Packs** in [BUNDLES.md](BUNDLES.md) are curated lists to help you discover the right skills for your role—they don't change how you install.

**Q: Can I make my own skills?**

docs/LICENSE-MICROSOFT (new file, 21 lines):
@@ -0,0 +1,21 @@
MIT License

Copyright (c) Microsoft Corporation.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -21,6 +21,7 @@ The skill MUST have a section explicitly stating when to trigger it.

- **Good**: "Use when the user asks to debug a React component."
- **Bad**: "This skill helps you with code."
Accepted headings: `## When to Use`, `## Use this skill when`, `## When to Use This Skill`.

### 3. Safety & Risk Classification

@@ -73,7 +73,7 @@ Some skills include additional metadata:
---
name: my-skill-name
description: "Brief description"
risk: "safe" # safe | risk | official
risk: "safe" # none | safe | critical | offensive (see QUALITY_BAR.md)
source: "community"
tags: ["react", "typescript"]
---

@@ -3,16 +3,16 @@
|
||||
We believe in giving credit where credit is due.
|
||||
If you recognize your work here and it is not properly attributed, please open an Issue.
|
||||
|
||||
| Skill / Category | Original Source | License | Notes |
|
||||
| :-------------------------- | :----------------------------------------------------- | :------------- | :---------------------------- |
|
||||
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
|
||||
| `active-directory-attacks` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
|
||||
| `owasp-top-10` | [OWASP](https://owasp.org/) | CC-BY-SA | Methodology adapted. |
|
||||
| `burp-suite-testing` | [PortSwigger](https://portswigger.net/burp) | N/A | Usage guide only (no binary). |
|
||||
| `crewai` | [CrewAI](https://github.com/joaomdmoura/crewAI) | MIT | Framework guides. |
|
||||
| `langgraph` | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT | Framework guides. |
|
||||
| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. |
|
||||
| **All Official Skills** | [Anthropic / Google / OpenAI] | Proprietary | Usage encouraged by vendors. |
|
||||
| Skill / Category | Original Source | License | Notes |
|
||||
| :-------------------------- | :----------------------------------------------------------------- | :------------- | :---------------------------- |
|
||||
| `cloud-penetration-testing` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
|
||||
| `active-directory-attacks` | [HackTricks](https://book.hacktricks.xyz/) | MIT / CC-BY-SA | Adapted for agentic use. |
|
||||
| `owasp-top-10` | [OWASP](https://owasp.org/) | CC-BY-SA | Methodology adapted. |
|
||||
| `burp-suite-testing` | [PortSwigger](https://portswigger.net/burp) | N/A | Usage guide only (no binary). |
|
||||
| `crewai` | [CrewAI](https://github.com/joaomdmoura/crewAI) | MIT | Framework guides. |
|
||||
| `langgraph` | [LangGraph](https://github.com/langchain-ai/langgraph) | MIT | Framework guides. |
|
||||
| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. |
|
||||
| **All Official Skills** | [Anthropic / Google / OpenAI / Microsoft / Supabase / Vercel Labs] | Proprietary | Usage encouraged by vendors. |
|
||||
|
||||
## Skills from VoltAgent/awesome-agent-skills
|
||||
|
||||
@@ -20,44 +20,44 @@ The following skills were added from the curated collection at [VoltAgent/awesom
|
||||
|
||||
### Official Team Skills
|
||||
|
||||
| Skill | Original Source | License | Notes |
|
||||
| :---- | :-------------- | :------ | :---- |
|
||||
| `vercel-deploy-claimable` | [Vercel Labs](https://github.com/vercel-labs/agent-skills) | MIT | Official Vercel skill |
|
||||
| `design-md` | [Google Labs (Stitch)](https://github.com/google-labs-code/stitch-skills) | Compatible | Google Labs Stitch skills |
|
||||
| `hugging-face-cli`, `hugging-face-jobs` | [Hugging Face](https://github.com/huggingface/skills) | Compatible | Official Hugging Face skills |
|
||||
| `culture-index`, `fix-review`, `sharp-edges` | [Trail of Bits](https://github.com/trailofbits/skills) | Compatible | Security skills from Trail of Bits |
|
||||
| `expo-deployment`, `upgrading-expo` | [Expo](https://github.com/expo/skills) | Compatible | Official Expo skills |
|
||||
| `commit`, `create-pr`, `find-bugs`, `iterate-pr` | [Sentry](https://github.com/getsentry/skills) | Compatible | Sentry dev team skills |
|
||||
| `using-neon` | [Neon](https://github.com/neondatabase/agent-skills) | Compatible | Neon Postgres best practices |
|
||||
| `fal-audio`, `fal-generate`, `fal-image-edit`, `fal-platform`, `fal-upscale`, `fal-workflow` | [fal.ai Community](https://github.com/fal-ai-community/skills) | Compatible | fal.ai AI model skills |
|
||||
| Skill | Original Source | License | Notes |
|
||||
| :------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------ | :--------- | :--------------------------------- |
|
||||
| `vercel-deploy-claimable` | [Vercel Labs](https://github.com/vercel-labs/agent-skills) | MIT | Official Vercel skill |
|
||||
| `design-md` | [Google Labs (Stitch)](https://github.com/google-labs-code/stitch-skills) | Compatible | Google Labs Stitch skills |
|
||||
| `hugging-face-cli`, `hugging-face-jobs` | [Hugging Face](https://github.com/huggingface/skills) | Compatible | Official Hugging Face skills |
|
||||
| `culture-index`, `fix-review`, `sharp-edges` | [Trail of Bits](https://github.com/trailofbits/skills) | Compatible | Security skills from Trail of Bits |
|
||||
| `expo-deployment`, `upgrading-expo` | [Expo](https://github.com/expo/skills) | Compatible | Official Expo skills |
|
||||
| `commit`, `create-pr`, `find-bugs`, `iterate-pr` | [Sentry](https://github.com/getsentry/skills) | Compatible | Sentry dev team skills |
|
||||
| `using-neon` | [Neon](https://github.com/neondatabase/agent-skills) | Compatible | Neon Postgres best practices |
|
||||
| `fal-audio`, `fal-generate`, `fal-image-edit`, `fal-platform`, `fal-upscale`, `fal-workflow` | [fal.ai Community](https://github.com/fal-ai-community/skills) | Compatible | fal.ai AI model skills |
|
||||
|
||||
### Community Skills
|
||||
|
||||
| Skill | Original Source | License | Notes |
|
||||
| :---- | :-------------- | :------ | :---- |
|
||||
| `automate-whatsapp`, `observe-whatsapp` | [gokapso](https://github.com/gokapso/agent-skills) | Compatible | WhatsApp automation skills |
|
||||
| `readme` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | README generation |
|
||||
| `screenshots` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | Marketing screenshots |
|
||||
| `aws-skills` | [zxkane](https://github.com/zxkane/aws-skills) | Compatible | AWS development patterns |
|
||||
| `deep-research` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Gemini Deep Research Agent |
|
||||
| `ffuf-claude-skill` | [jthack](https://github.com/jthack/ffuf_claude_skill) | Compatible | Web fuzzing with ffuf |
|
||||
| `ui-skills` | [ibelick](https://github.com/ibelick/ui-skills) | Compatible | UI development constraints |
|
||||
| `vexor` | [scarletkc](https://github.com/scarletkc/vexor) | Compatible | Vector-powered CLI |
|
||||
| `pypict-skill` | [omkamal](https://github.com/omkamal/pypict-claude-skill) | Compatible | Pairwise test generation |
|
||||
| `makepad-skills` | [ZhangHanDong](https://github.com/ZhangHanDong/makepad-skills) | Compatible | Makepad UI development |
|
||||
| `swiftui-expert-skill` | [AvdLee](https://github.com/AvdLee/SwiftUI-Agent-Skill) | Compatible | SwiftUI best practices |
|
||||
| `threejs-skills` | [CloudAI-X](https://github.com/CloudAI-X/threejs-skills) | Compatible | Three.js 3D experiences |
|
| Skill | Original Source | License | Notes |
| :------------------------------------------------------------------ | :-------------------------------------------------------------------------- | :--------- | :----------------------------- |
| `automate-whatsapp`, `observe-whatsapp` | [gokapso](https://github.com/gokapso/agent-skills) | Compatible | WhatsApp automation skills |
| `readme` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | README generation |
| `screenshots` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | Marketing screenshots |
| `aws-skills` | [zxkane](https://github.com/zxkane/aws-skills) | Compatible | AWS development patterns |
| `deep-research` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Gemini Deep Research Agent |
| `ffuf-claude-skill` | [jthack](https://github.com/jthack/ffuf_claude_skill) | Compatible | Web fuzzing with ffuf |
| `ui-skills` | [ibelick](https://github.com/ibelick/ui-skills) | Compatible | UI development constraints |
| `vexor` | [scarletkc](https://github.com/scarletkc/vexor) | Compatible | Vector-powered CLI |
| `pypict-skill` | [omkamal](https://github.com/omkamal/pypict-claude-skill) | Compatible | Pairwise test generation |
| `makepad-skills` | [ZhangHanDong](https://github.com/ZhangHanDong/makepad-skills) | Compatible | Makepad UI development |
| `swiftui-expert-skill` | [AvdLee](https://github.com/AvdLee/SwiftUI-Agent-Skill) | Compatible | SwiftUI best practices |
| `threejs-skills` | [CloudAI-X](https://github.com/CloudAI-X/threejs-skills) | Compatible | Three.js 3D experiences |
| `claude-scientific-skills` | [K-Dense-AI](https://github.com/K-Dense-AI/claude-scientific-skills) | Compatible | Scientific research skills |
| `claude-win11-speckit-update-skill` | [NotMyself](https://github.com/NotMyself/claude-win11-speckit-update-skill) | Compatible | Windows 11 management |
| `imagen` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Google Gemini image generation |
| `security-bluebook-builder` | [SHADOWPR0](https://github.com/SHADOWPR0/security-bluebook-builder) | Compatible | Security documentation |
| `claude-ally-health` | [huifer](https://github.com/huifer/Claude-Ally-Health) | Compatible | Health assistant |
| `clarity-gate` | [frmoretto](https://github.com/frmoretto/clarity-gate) | Compatible | RAG quality verification |
| `n8n-code-python`, `n8n-mcp-tools-expert`, `n8n-node-configuration` | [czlonkowski](https://github.com/czlonkowski/n8n-skills) | Compatible | n8n automation skills |
| `varlock-claude-skill` | [wrsmith108](https://github.com/wrsmith108/varlock-claude-skill) | Compatible | Secure environment variables |
| `beautiful-prose` | [SHADOWPR0](https://github.com/SHADOWPR0/beautiful_prose) | Compatible | Writing style guide |
| `claude-speed-reader` | [SeanZoR](https://github.com/SeanZoR/claude-speed-reader) | Compatible | Speed reading tool |
| `skill-seekers` | [yusufkaraaslan](https://github.com/yusufkaraaslan/Skill_Seekers) | Compatible | Skill conversion tool |

- **frontend-slides** - [zarazhangrui](https://github.com/zarazhangrui/frontend-slides)
- **linear-claude-skill** - [wrsmith108](https://github.com/wrsmith108/linear-claude-skill)

@@ -74,11 +74,11 @@ The following skills were added from the curated collection at [VoltAgent/awesom

## Skills from whatiskadudoing/fp-ts-skills (v4.4.0)

| Skill | Original Source | License | Notes |
| :---------------- | :------------------------------------------------------------------------------ | :--------- | :------------------------------------------------------- |
| `fp-ts-pragmatic` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Pragmatic fp-ts guide – pipe, Option, Either, TaskEither |
| `fp-ts-react` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | fp-ts with React 18/19 and Next.js |
| `fp-ts-errors` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Type-safe error handling with Either and TaskEither |

## License Policy

174 docs/WORKFLOWS.md Normal file
@@ -0,0 +1,174 @@
# Antigravity Workflows

> Workflow playbooks to orchestrate multiple skills with less friction.

## What Is a Workflow?

A workflow is a guided, step-by-step execution path that combines multiple skills toward one concrete outcome.

- **Bundles** tell you which skills are relevant for a role.
- **Workflows** tell you how to use those skills in sequence to complete a real objective.

If bundles are your toolbox, workflows are your execution playbook.

---

## How to Use Workflows

1. Install the repository once (`npx antigravity-awesome-skills`).
2. Pick the workflow that matches your immediate goal.
3. Execute the steps in order, invoking the listed skills at each step.
4. Keep the output artifacts from each step (plan, decisions, tests, validation evidence).

You can combine workflows with bundles from [BUNDLES.md](BUNDLES.md) when you need broader coverage.

---

## Workflow: Ship a SaaS MVP

Build and ship a minimal but production-minded SaaS product.

**Related bundles:** `Essentials`, `Full-Stack Developer`, `QA & Testing`, `DevOps & Cloud`

### Prerequisites

- Local repository and runtime configured.
- A clear user problem and MVP scope.
- A basic deployment target selected.

### Steps

1. **Plan the scope**
   - **Goal:** Define MVP boundaries and acceptance criteria.
   - **Skills:** [`@brainstorming`](../skills/brainstorming/), [`@concise-planning`](../skills/concise-planning/), [`@writing-plans`](../skills/writing-plans/)
   - **Prompt example:** `Use @concise-planning to define milestones and acceptance criteria for my SaaS MVP.`

2. **Build backend and API**
   - **Goal:** Implement core entities, APIs, and an auth baseline.
   - **Skills:** [`@backend-dev-guidelines`](../skills/backend-dev-guidelines/), [`@api-patterns`](../skills/api-patterns/), [`@database-design`](../skills/database-design/)
   - **Prompt example:** `Use @backend-dev-guidelines to create the APIs and services for the billing domain.`

3. **Build frontend**
   - **Goal:** Ship the core user flow with clear UX states.
   - **Skills:** [`@frontend-developer`](../skills/frontend-developer/), [`@react-patterns`](../skills/react-patterns/), [`@frontend-design`](../skills/frontend-design/)
   - **Prompt example:** `Use @frontend-developer to implement onboarding, empty states, and the initial dashboard.`

4. **Test and validate**
   - **Goal:** Cover critical user journeys before release.
   - **Skills:** [`@test-driven-development`](../skills/test-driven-development/), [`@browser-automation`](../skills/browser-automation/), `@go-playwright` (optional, Go stack)
   - **Prompt example:** `Use @browser-automation to create E2E tests for the signup and checkout flows.`
   - **Go note:** If your QA and tooling stack is in Go, prefer `@go-playwright`.

5. **Ship safely**
   - **Goal:** Release with observability and a rollback plan.
   - **Skills:** [`@deployment-procedures`](../skills/deployment-procedures/), [`@observability-engineer`](../skills/observability-engineer/)
   - **Prompt example:** `Use @deployment-procedures to build a release checklist with a rollback plan.`

---

## Workflow: Security Audit for a Web App

Run a focused security review from scope definition to remediation validation.

**Related bundles:** `Security Engineer`, `Security Developer`, `Observability & Monitoring`

### Prerequisites

- Explicit authorization for testing.
- In-scope targets documented.
- Logging and environment details available.

### Steps

1. **Define scope and threat model**
   - **Goal:** Identify assets, trust boundaries, and attack paths.
   - **Skills:** [`@ethical-hacking-methodology`](../skills/ethical-hacking-methodology/), [`@threat-modeling-expert`](../skills/threat-modeling-expert/), [`@attack-tree-construction`](../skills/attack-tree-construction/)
   - **Prompt example:** `Use @threat-modeling-expert to map my web app's critical assets and trust boundaries.`

2. **Review auth and access control**
   - **Goal:** Detect account takeover and authorization flaws.
   - **Skills:** [`@broken-authentication`](../skills/broken-authentication/), [`@auth-implementation-patterns`](../skills/auth-implementation-patterns/), [`@idor-testing`](../skills/idor-testing/)
   - **Prompt example:** `Use @idor-testing to check for unauthorized access on multitenant endpoints.`

3. **Assess API and input security**
   - **Goal:** Uncover high-impact API and injection vulnerabilities.
   - **Skills:** [`@api-security-best-practices`](../skills/api-security-best-practices/), [`@api-fuzzing-bug-bounty`](../skills/api-fuzzing-bug-bounty/), [`@top-web-vulnerabilities`](../skills/top-web-vulnerabilities/)
   - **Prompt example:** `Use @api-security-best-practices to audit the auth, billing, and admin endpoints.`

4. **Harden and verify**
   - **Goal:** Convert findings into fixes and verify evidence of mitigation.
   - **Skills:** [`@security-auditor`](../skills/security-auditor/), [`@sast-configuration`](../skills/sast-configuration/), [`@verification-before-completion`](../skills/verification-before-completion/)
   - **Prompt example:** `Use @verification-before-completion to prove the mitigations are effective.`

---

## Workflow: Build an AI Agent System

Design and deliver a production-grade agent with measurable reliability.

**Related bundles:** `Agent Architect`, `LLM Application Developer`, `Data Engineering`

### Prerequisites

- A narrow use case with measurable outcomes.
- Access to model provider(s) and observability tooling.
- An initial dataset or knowledge corpus.

### Steps

1. **Define target behavior and KPIs**
   - **Goal:** Set quality, latency, and failure thresholds.
   - **Skills:** [`@ai-agents-architect`](../skills/ai-agents-architect/), [`@agent-evaluation`](../skills/agent-evaluation/), [`@product-manager-toolkit`](../skills/product-manager-toolkit/)
   - **Prompt example:** `Use @agent-evaluation to define benchmarks and success criteria for my agent.`

2. **Design retrieval and memory**
   - **Goal:** Build a reliable retrieval and context architecture.
   - **Skills:** [`@llm-app-patterns`](../skills/llm-app-patterns/), [`@rag-implementation`](../skills/rag-implementation/), [`@vector-database-engineer`](../skills/vector-database-engineer/)
   - **Prompt example:** `Use @rag-implementation to design the chunking, embedding, and retrieval pipeline.`

3. **Implement orchestration**
   - **Goal:** Implement deterministic orchestration and tool boundaries.
   - **Skills:** [`@langgraph`](../skills/langgraph/), [`@mcp-builder`](../skills/mcp-builder/), [`@workflow-automation`](../skills/workflow-automation/)
   - **Prompt example:** `Use @langgraph to implement the agent graph with fallbacks and human-in-the-loop.`

4. **Evaluate and iterate**
   - **Goal:** Improve weak points with a structured loop.
   - **Skills:** [`@agent-evaluation`](../skills/agent-evaluation/), [`@langfuse`](../skills/langfuse/), [`@kaizen`](../skills/kaizen/)
   - **Prompt example:** `Use @kaizen to prioritize fixes for the failure modes surfaced by tests.`

---

## Workflow: QA and Browser Automation

Create resilient browser automation with deterministic execution in CI.

**Related bundles:** `QA & Testing`, `Full-Stack Developer`

### Prerequisites

- Test environments and stable credentials.
- Critical user journeys identified.
- A CI pipeline available.

### Steps

1. **Prepare test strategy**
   - **Goal:** Scope journeys, fixtures, and execution environments.
   - **Skills:** [`@e2e-testing-patterns`](../skills/e2e-testing-patterns/), [`@test-driven-development`](../skills/test-driven-development/)
   - **Prompt example:** `Use @e2e-testing-patterns to define a minimal but high-impact E2E suite.`

2. **Implement browser tests**
   - **Goal:** Build robust test coverage with stable selectors.
   - **Skills:** [`@browser-automation`](../skills/browser-automation/), `@go-playwright` (optional, Go stack)
   - **Prompt example:** `Use @go-playwright to implement browser automation in a Go project.`

3. **Triage and harden**
   - **Goal:** Remove flaky behavior and enforce repeatability.
   - **Skills:** [`@systematic-debugging`](../skills/systematic-debugging/), [`@test-fixing`](../skills/test-fixing/), [`@verification-before-completion`](../skills/verification-before-completion/)
   - **Prompt example:** `Use @systematic-debugging to classify and resolve flakiness in CI.`

---

## Machine-Readable Workflows

For tooling and automation, workflow metadata is available in [data/workflows.json](../data/workflows.json).
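As a rough illustration, a Node consumer could read that registry as in the sketch below. The top-level shape and the field names (`id`, `title`, `steps`, `skills`) are assumptions made for this example; check the actual schema in `data/workflows.json` before relying on them.

```js
// Minimal sketch (assumed schema): list each workflow with its steps and skills.
// Run from the repository root.
const fs = require("fs");
const path = require("path");

const raw = fs.readFileSync(path.join(__dirname, "data", "workflows.json"), "utf8");
const data = JSON.parse(raw);
// The top-level shape is an assumption: accept either an array or { workflows: [...] }.
const workflows = Array.isArray(data) ? data : (data.workflows ?? []);

for (const workflow of workflows) {
  console.log(`${workflow.id}: ${workflow.title}`);
  for (const step of workflow.steps ?? []) {
    console.log(`  - ${step.name}: ${(step.skills ?? []).join(", ")}`);
  }
}
```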
709 docs/microsoft-skills-attribution.json Normal file
@@ -0,0 +1,709 @@
{
  "source": "microsoft/skills",
  "repository": "https://github.com/microsoft/skills",
  "license": "MIT",
  "synced_skills": 140,
  "structure": "flat (frontmatter name as directory name)",
  "skills": [
    { "flat_name": "azure-ai-voicelive-dotnet", "original_path": "dotnet/foundry/voicelive", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-document-intelligence-dotnet", "original_path": "dotnet/foundry/document-intelligence", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-openai-dotnet", "original_path": "dotnet/foundry/openai", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-weightsandbiases-dotnet", "original_path": "dotnet/foundry/weightsandbiases", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-projects-dotnet", "original_path": "dotnet/foundry/projects", "source": "microsoft/skills" },
    { "flat_name": "azure-search-documents-dotnet", "original_path": "dotnet/foundry/search-documents", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-applicationinsights-dotnet", "original_path": "dotnet/monitoring/applicationinsights", "source": "microsoft/skills" },
    { "flat_name": "m365-agents-dotnet", "original_path": "dotnet/m365/m365-agents", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-apimanagement-dotnet", "original_path": "dotnet/integration/apimanagement", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-apicenter-dotnet", "original_path": "dotnet/integration/apicenter", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-playwright-dotnet", "original_path": "dotnet/compute/playwright", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-durabletask-dotnet", "original_path": "dotnet/compute/durabletask", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-botservice-dotnet", "original_path": "dotnet/compute/botservice", "source": "microsoft/skills" },
    { "flat_name": "azure-identity-dotnet", "original_path": "dotnet/entra/azure-identity", "source": "microsoft/skills" },
    { "flat_name": "microsoft-azure-webjobs-extensions-authentication-events-dotnet", "original_path": "dotnet/entra/authentication-events", "source": "microsoft/skills" },
    { "flat_name": "azure-security-keyvault-keys-dotnet", "original_path": "dotnet/entra/keyvault", "source": "microsoft/skills" },
    { "flat_name": "azure-maps-search-dotnet", "original_path": "dotnet/general/maps", "source": "microsoft/skills" },
    { "flat_name": "azure-eventgrid-dotnet", "original_path": "dotnet/messaging/eventgrid", "source": "microsoft/skills" },
    { "flat_name": "azure-servicebus-dotnet", "original_path": "dotnet/messaging/servicebus", "source": "microsoft/skills" },
    { "flat_name": "azure-eventhub-dotnet", "original_path": "dotnet/messaging/eventhubs", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-redis-dotnet", "original_path": "dotnet/data/redis", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-postgresql-dotnet", "original_path": "dotnet/data/postgresql", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-mysql-dotnet", "original_path": "dotnet/data/mysql", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-cosmosdb-dotnet", "original_path": "dotnet/data/cosmosdb", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-fabric-dotnet", "original_path": "dotnet/data/fabric", "source": "microsoft/skills" },
    { "flat_name": "azure-resource-manager-sql-dotnet", "original_path": "dotnet/data/sql", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-arizeaiobservabilityeval-dotnet", "original_path": "dotnet/partner/arize-ai-observability-eval", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-mongodbatlas-dotnet", "original_path": "dotnet/partner/mongodbatlas", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-keys-rust", "original_path": "rust/entra/azure-keyvault-keys-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-secrets-rust", "original_path": "rust/entra/azure-keyvault-secrets-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-identity-rust", "original_path": "rust/entra/azure-identity-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-certificates-rust", "original_path": "rust/entra/azure-keyvault-certificates-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-eventhub-rust", "original_path": "rust/messaging/azure-eventhub-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-cosmos-rust", "original_path": "rust/data/azure-cosmos-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-blob-rust", "original_path": "rust/data/azure-storage-blob-rust", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-voicelive-ts", "original_path": "typescript/foundry/voicelive", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-contentsafety-ts", "original_path": "typescript/foundry/contentsafety", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-document-intelligence-ts", "original_path": "typescript/foundry/document-intelligence", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-projects-ts", "original_path": "typescript/foundry/projects", "source": "microsoft/skills" },
    { "flat_name": "azure-search-documents-ts", "original_path": "typescript/foundry/search-documents", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-translation-ts", "original_path": "typescript/foundry/translation", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-opentelemetry-ts", "original_path": "typescript/monitoring/opentelemetry", "source": "microsoft/skills" },
    { "flat_name": "zustand-store-ts", "original_path": "typescript/frontend/zustand-store", "source": "microsoft/skills" },
    { "flat_name": "frontend-ui-dark-ts", "original_path": "typescript/frontend/frontend-ui-dark", "source": "microsoft/skills" },
    { "flat_name": "react-flow-node-ts", "original_path": "typescript/frontend/react-flow-node", "source": "microsoft/skills" },
    { "flat_name": "m365-agents-ts", "original_path": "typescript/m365/m365-agents", "source": "microsoft/skills" },
    { "flat_name": "azure-appconfiguration-ts", "original_path": "typescript/integration/appconfiguration", "source": "microsoft/skills" },
    { "flat_name": "azure-microsoft-playwright-testing-ts", "original_path": "typescript/compute/playwright", "source": "microsoft/skills" },
    { "flat_name": "azure-identity-ts", "original_path": "typescript/entra/azure-identity", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-keys-ts", "original_path": "typescript/entra/keyvault-keys", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-secrets-ts", "original_path": "typescript/entra/keyvault-secrets", "source": "microsoft/skills" },
    { "flat_name": "azure-servicebus-ts", "original_path": "typescript/messaging/servicebus", "source": "microsoft/skills" },
    { "flat_name": "azure-web-pubsub-ts", "original_path": "typescript/messaging/webpubsub", "source": "microsoft/skills" },
    { "flat_name": "azure-eventhub-ts", "original_path": "typescript/messaging/eventhubs", "source": "microsoft/skills" },
    { "flat_name": "azure-cosmos-ts", "original_path": "typescript/data/cosmosdb", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-blob-ts", "original_path": "typescript/data/blob", "source": "microsoft/skills" },
    { "flat_name": "azure-postgres-ts", "original_path": "typescript/data/postgres", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-queue-ts", "original_path": "typescript/data/queue", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-file-share-ts", "original_path": "typescript/data/fileshare", "source": "microsoft/skills" },
    { "flat_name": "azure-speech-to-text-rest-py", "original_path": "python/foundry/speech-to-text-rest", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-transcription-py", "original_path": "python/foundry/transcription", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-vision-imageanalysis-py", "original_path": "python/foundry/vision-imageanalysis", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-contentunderstanding-py", "original_path": "python/foundry/contentunderstanding", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-voicelive-py", "original_path": "python/foundry/voicelive", "source": "microsoft/skills" },
    { "flat_name": "agent-framework-azure-ai-py", "original_path": "python/foundry/agent-framework", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-contentsafety-py", "original_path": "python/foundry/contentsafety", "source": "microsoft/skills" },
    { "flat_name": "agents-v2-py", "original_path": "python/foundry/agents-v2", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-translation-document-py", "original_path": "python/foundry/translation-document", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-translation-text-py", "original_path": "python/foundry/translation-text", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-textanalytics-py", "original_path": "python/foundry/textanalytics", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-ml-py", "original_path": "python/foundry/ml", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-projects-py", "original_path": "python/foundry/projects", "source": "microsoft/skills" },
    { "flat_name": "azure-search-documents-py", "original_path": "python/foundry/search-documents", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-opentelemetry-py", "original_path": "python/monitoring/opentelemetry", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-ingestion-py", "original_path": "python/monitoring/ingestion", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-query-py", "original_path": "python/monitoring/query", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-opentelemetry-exporter-py", "original_path": "python/monitoring/opentelemetry-exporter", "source": "microsoft/skills" },
    { "flat_name": "m365-agents-py", "original_path": "python/m365/m365-agents", "source": "microsoft/skills" },
    { "flat_name": "azure-appconfiguration-py", "original_path": "python/integration/appconfiguration", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-apimanagement-py", "original_path": "python/integration/apimanagement", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-apicenter-py", "original_path": "python/integration/apicenter", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-fabric-py", "original_path": "python/compute/fabric", "source": "microsoft/skills" },
    { "flat_name": "azure-mgmt-botservice-py", "original_path": "python/compute/botservice", "source": "microsoft/skills" },
    { "flat_name": "azure-containerregistry-py", "original_path": "python/compute/containerregistry", "source": "microsoft/skills" },
    { "flat_name": "azure-identity-py", "original_path": "python/entra/azure-identity", "source": "microsoft/skills" },
    { "flat_name": "azure-keyvault-py", "original_path": "python/entra/keyvault", "source": "microsoft/skills" },
    { "flat_name": "azure-eventgrid-py", "original_path": "python/messaging/eventgrid", "source": "microsoft/skills" },
    { "flat_name": "azure-servicebus-py", "original_path": "python/messaging/servicebus", "source": "microsoft/skills" },
    { "flat_name": "azure-messaging-webpubsubservice-py", "original_path": "python/messaging/webpubsub-service", "source": "microsoft/skills" },
    { "flat_name": "azure-eventhub-py", "original_path": "python/messaging/eventhub", "source": "microsoft/skills" },
    { "flat_name": "azure-data-tables-py", "original_path": "python/data/tables", "source": "microsoft/skills" },
    { "flat_name": "azure-cosmos-py", "original_path": "python/data/cosmos", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-blob-py", "original_path": "python/data/blob", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-file-datalake-py", "original_path": "python/data/datalake", "source": "microsoft/skills" },
    { "flat_name": "azure-cosmos-db-py", "original_path": "python/data/cosmos-db", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-queue-py", "original_path": "python/data/queue", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-file-share-py", "original_path": "python/data/fileshare", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-formrecognizer-java", "original_path": "java/foundry/formrecognizer", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-vision-imageanalysis-java", "original_path": "java/foundry/vision-imageanalysis", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-voicelive-java", "original_path": "java/foundry/voicelive", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-contentsafety-java", "original_path": "java/foundry/contentsafety", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-projects-java", "original_path": "java/foundry/projects", "source": "microsoft/skills" },
    { "flat_name": "azure-ai-anomalydetector-java", "original_path": "java/foundry/anomalydetector", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-ingestion-java", "original_path": "java/monitoring/ingestion", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-query-java", "original_path": "java/monitoring/query", "source": "microsoft/skills" },
    { "flat_name": "azure-monitor-opentelemetry-exporter-java", "original_path": "java/monitoring/opentelemetry-exporter", "source": "microsoft/skills" },
    { "flat_name": "azure-appconfiguration-java", "original_path": "java/integration/appconfiguration", "source": "microsoft/skills" },
    { "flat_name": "azure-communication-common-java", "original_path": "java/communication/common", "source": "microsoft/skills" },
    { "flat_name": "azure-communication-callingserver-java", "original_path": "java/communication/callingserver", "source": "microsoft/skills" },
    { "flat_name": "azure-communication-sms-java", "original_path": "java/communication/sms", "source": "microsoft/skills" },
    { "flat_name": "azure-communication-callautomation-java", "original_path": "java/communication/callautomation", "source": "microsoft/skills" },
    { "flat_name": "azure-communication-chat-java", "original_path": "java/communication/chat", "source": "microsoft/skills" },
    { "flat_name": "azure-compute-batch-java", "original_path": "java/compute/batch", "source": "microsoft/skills" },
    { "flat_name": "azure-identity-java", "original_path": "java/entra/azure-identity", "source": "microsoft/skills" },
    { "flat_name": "azure-security-keyvault-keys-java", "original_path": "java/entra/keyvault-keys", "source": "microsoft/skills" },
    { "flat_name": "azure-security-keyvault-secrets-java", "original_path": "java/entra/keyvault-secrets", "source": "microsoft/skills" },
    { "flat_name": "azure-eventgrid-java", "original_path": "java/messaging/eventgrid", "source": "microsoft/skills" },
    { "flat_name": "azure-messaging-webpubsub-java", "original_path": "java/messaging/webpubsub", "source": "microsoft/skills" },
    { "flat_name": "azure-eventhub-java", "original_path": "java/messaging/eventhubs", "source": "microsoft/skills" },
    { "flat_name": "azure-data-tables-java", "original_path": "java/data/tables", "source": "microsoft/skills" },
    { "flat_name": "azure-cosmos-java", "original_path": "java/data/cosmos", "source": "microsoft/skills" },
    { "flat_name": "azure-storage-blob-java", "original_path": "java/data/blob", "source": "microsoft/skills" },
    { "flat_name": "wiki-page-writer", "original_path": "plugins/wiki-page-writer", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-vitepress", "original_path": "plugins/wiki-vitepress", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-researcher", "original_path": "plugins/wiki-researcher", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-qa", "original_path": "plugins/wiki-qa", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-onboarding", "original_path": "plugins/wiki-onboarding", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-architect", "original_path": "plugins/wiki-architect", "source": "microsoft/skills (plugin)" },
    { "flat_name": "wiki-changelog", "original_path": "plugins/wiki-changelog", "source": "microsoft/skills (plugin)" },
    { "flat_name": "fastapi-router-py", "original_path": ".github/skills/fastapi-router-py", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "azd-deployment", "original_path": ".github/skills/azd-deployment", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "copilot-sdk", "original_path": ".github/skills/copilot-sdk", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "azure-ai-agents-persistent-dotnet", "original_path": ".github/skills/azure-ai-agents-persistent-dotnet", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "hosted-agents-v2-py", "original_path": ".github/skills/hosted-agents-v2-py", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "pydantic-models-py", "original_path": ".github/skills/pydantic-models-py", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "skill-creator-ms", "original_path": ".github/skills/skill-creator", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "podcast-generation", "original_path": ".github/skills/podcast-generation", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "github-issue-creator", "original_path": ".github/skills/github-issue-creator", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "azure-ai-agents-persistent-java", "original_path": ".github/skills/azure-ai-agents-persistent-java", "source": "microsoft/skills (.github/skills)" },
    { "flat_name": "mcp-builder-ms", "original_path": ".github/skills/mcp-builder", "source": "microsoft/skills (.github/skills)" }
  ]
}
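Since the sync flattens each upstream `original_path` into a single `flat_name` directory, a small helper can recover where a synced skill came from. A minimal sketch, assuming it runs from the repository root; the `tree/main` URL shape (branch name) is an assumption:

```js
// Minimal sketch: map a flat skill directory name back to its upstream location.
const attribution = require("./docs/microsoft-skills-attribution.json");

function upstreamUrlFor(flatName) {
  const entry = attribution.skills.find((s) => s.flat_name === flatName);
  // Branch name "main" is an assumption; adjust if upstream uses another default.
  return entry ? `${attribution.repository}/tree/main/${entry.original_path}` : null;
}

console.log(upstreamUrlFor("azure-identity-py"));
// => https://github.com/microsoft/skills/tree/main/python/entra/azure-identity
```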
@@ -73,7 +73,7 @@ Some skills include additional metadata:

---
name: my-skill-name
description: "Short description"
risk: "safe" # safe | risk | official
risk: "safe" # none | safe | critical | offensive (see QUALITY_BAR.md)
source: "community"
tags: ["react", "typescript"]
---
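For context, the `risk` field above can be checked programmatically. A minimal sketch using the repo's `yaml` dependency; the `---` delimiters and field names follow the example above, but treat the exact file path as illustrative:

```js
// Minimal sketch: read a SKILL.md frontmatter block and flag risky skills.
const fs = require("fs");
const YAML = require("yaml"); // declared in package.json

function readFrontmatter(skillMdPath) {
  const text = fs.readFileSync(skillMdPath, "utf8");
  const match = text.match(/^---\r?\n([\s\S]*?)\r?\n---/);
  return match ? YAML.parse(match[1]) : null;
}

const meta = readFrontmatter("skills/my-skill-name/SKILL.md"); // illustrative path
if (meta && ["critical", "offensive"].includes(meta.risk)) {
  console.warn(`${meta.name}: see QUALITY_BAR.md before enabling`);
}
```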
@@ -149,12 +149,39 @@ function readSkill(skillDir, skillId) {

function listSkillIds(skillsDir) {
  return fs.readdirSync(skillsDir)
    .filter(entry => !entry.startsWith('.') && fs.statSync(path.join(skillsDir, entry)).isDirectory())
    .filter(entry => {
      if (entry.startsWith('.')) return false;
      const dirPath = path.join(skillsDir, entry);
      if (!fs.statSync(dirPath).isDirectory()) return false;
      const skillPath = path.join(dirPath, 'SKILL.md');
      return fs.existsSync(skillPath);
    })
    .sort();
}

/**
 * Recursively list all skill directory paths under skillsDir (relative paths).
 * Matches generate_index.py behavior so the catalog includes nested skills (e.g. game-development/2d-games).
 */
function listSkillIdsRecursive(skillsDir, baseDir = skillsDir, acc = []) {
  const entries = fs.readdirSync(baseDir, { withFileTypes: true });
  for (const entry of entries) {
    if (entry.name.startsWith('.')) continue;
    if (!entry.isDirectory()) continue;
    const dirPath = path.join(baseDir, entry.name);
    const skillPath = path.join(dirPath, 'SKILL.md');
    const relPath = path.relative(skillsDir, dirPath);
    if (fs.existsSync(skillPath)) {
      acc.push(relPath);
    }
    listSkillIdsRecursive(skillsDir, dirPath, acc);
  }
  return acc.sort();
}

module.exports = {
  listSkillIds,
  listSkillIdsRecursive,
  parseFrontmatter,
  parseInlineList,
  readSkill,
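For orientation, a sibling script in `scripts/` could exercise the new recursive lister like this (the require path mirrors the convention used by `build-catalog.js` further down):

```js
// Minimal sketch: print every skill directory the catalog will pick up,
// including nested ones such as "game-development/2d-games".
const path = require("path");
const { listSkillIdsRecursive } = require("../lib/skill-utils");

const ROOT = path.resolve(__dirname, "..");
for (const id of listSkillIdsRecursive(path.join(ROOT, "skills"))) {
  console.log(id);
}
```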
11 package-lock.json generated
@@ -1,24 +1,25 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.2.0",
  "version": "5.2.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "antigravity-awesome-skills",
      "version": "4.2.0",
      "version": "5.2.0",
      "license": "MIT",
      "dependencies": {
        "yaml": "^2.8.2"
      },
      "bin": {
        "antigravity-awesome-skills": "bin/install.js"
      },
      "devDependencies": {
        "yaml": "^2.8.2"
      }
    },
    "node_modules/yaml": {
      "version": "2.8.2",
      "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
      "integrity": "sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==",
      "dev": true,
      "license": "ISC",
      "bin": {
        "yaml": "bin.mjs"
15 package.json
@@ -1,7 +1,7 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.5.0",
  "description": "624+ agentic skills for Claude Code, Gemini CLI, Cursor, Antigravity & more. Installer CLI.",
  "version": "5.2.0",
  "description": "845+ agentic skills for Claude Code, Gemini CLI, Cursor, Antigravity & more. Installer CLI.",
  "license": "MIT",
  "scripts": {
    "validate": "python3 scripts/validate_skills.py",
@@ -10,17 +10,20 @@
    "readme": "python3 scripts/update_readme.py",
    "chain": "npm run validate && npm run index && npm run readme",
    "catalog": "node scripts/build-catalog.js",
    "build": "npm run chain && npm run catalog"
    "build": "npm run chain && npm run catalog",
    "test": "node scripts/tests/validate_skills_headings.test.js && python3 scripts/tests/test_validate_skills_headings.py && python3 scripts/tests/inspect_microsoft_repo.py && python3 scripts/tests/test_comprehensive_coverage.py",
    "sync:microsoft": "python3 scripts/sync_microsoft_skills.py",
    "sync:all-official": "npm run sync:microsoft && npm run chain"
  },
  "dependencies": {
  "devDependencies": {
    "yaml": "^2.8.2"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/sickn33/antigravity-awesome-skills.git"
    "url": "git+https://github.com/sickn33/antigravity-awesome-skills.git"
  },
  "bin": {
    "antigravity-awesome-skills": "./bin/install.js"
    "antigravity-awesome-skills": "bin/install.js"
  },
  "files": [
    "bin"
36 release_notes.md Normal file
@@ -0,0 +1,36 @@
## [5.0.0] - 2026-02-10 - "Antigravity Workflows Foundation"

> First-class Workflows are now available to orchestrate multiple skills through guided execution playbooks.

### 🚀 New Skills

### 🧭 [antigravity-workflows](skills/antigravity-workflows/)

**Orchestrates multi-step outcomes using curated workflow playbooks.**
This new skill routes users from high-level goals to concrete execution steps across related skills and bundles.

- **Key Feature 1**: Workflow routing for SaaS MVP, Security Audit, AI Agent Systems, and Browser QA.
- **Key Feature 2**: Explicit step-by-step outputs with prerequisites, recommended skills, and validation checkpoints.

> **Try it:** `Use @antigravity-workflows to run ship-saas-mvp for my project.`

---

## 📦 Improvements

- **Workflow Registry**: Added `data/workflows.json` for machine-readable workflow metadata.
- **Workflow Docs**: Added `docs/WORKFLOWS.md` to distinguish Bundles vs Workflows and provide practical execution playbooks.
- **Trinity Sync**: Updated `README.md`, `docs/GETTING_STARTED.md`, and `docs/FAQ.md` for workflow onboarding.
- **Go QA Path**: Added optional `@go-playwright` wiring in QA/E2E workflow steps.
- **Registry Update**: Catalog regenerated; the repository now tracks 714 skills.

## 👥 Credits

A huge shoutout to our community and maintainers:

- **@Walapalam** for the Workflows concept request ([Issue #72](https://github.com/sickn33/antigravity-awesome-skills/issues/72))
- **@sickn33** for workflow integration, release preparation, and maintenance updates

---

_Upgrade now: `git pull origin main` to fetch the latest skills._
@@ -1,161 +1,454 @@
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const fs = require("fs");
|
||||
const path = require("path");
|
||||
const {
|
||||
listSkillIds,
|
||||
listSkillIdsRecursive,
|
||||
readSkill,
|
||||
tokenize,
|
||||
unique,
|
||||
} = require('../lib/skill-utils');
|
||||
} = require("../lib/skill-utils");
|
||||
|
||||
const ROOT = path.resolve(__dirname, '..');
|
||||
const SKILLS_DIR = path.join(ROOT, 'skills');
|
||||
const ROOT = path.resolve(__dirname, "..");
|
||||
const SKILLS_DIR = path.join(ROOT, "skills");
|
||||
|
||||
const STOPWORDS = new Set([
|
||||
'a', 'an', 'and', 'are', 'as', 'at', 'be', 'but', 'by', 'for', 'from', 'has', 'have', 'in', 'into',
|
||||
'is', 'it', 'its', 'of', 'on', 'or', 'our', 'out', 'over', 'that', 'the', 'their', 'they', 'this',
|
||||
'to', 'use', 'when', 'with', 'you', 'your', 'will', 'can', 'if', 'not', 'only', 'also', 'more',
|
||||
'best', 'practice', 'practices', 'expert', 'specialist', 'focused', 'focus', 'master', 'modern',
|
||||
'advanced', 'comprehensive', 'production', 'production-ready', 'ready', 'build', 'create', 'deliver',
|
||||
'design', 'implement', 'implementation', 'strategy', 'strategies', 'patterns', 'pattern', 'workflow',
|
||||
'workflows', 'guide', 'template', 'templates', 'tool', 'tools', 'project', 'projects', 'support',
|
||||
'manage', 'management', 'system', 'systems', 'services', 'service', 'across', 'end', 'end-to-end',
|
||||
'using', 'based', 'ensure', 'ensure', 'help', 'needs', 'need', 'focuses', 'handles', 'builds', 'make',
|
||||
"a",
|
||||
"an",
|
||||
"and",
|
||||
"are",
|
||||
"as",
|
||||
"at",
|
||||
"be",
|
||||
"but",
|
||||
"by",
|
||||
"for",
|
||||
"from",
|
||||
"has",
|
||||
"have",
|
||||
"in",
|
||||
"into",
|
||||
"is",
|
||||
"it",
|
||||
"its",
|
||||
"of",
|
||||
"on",
|
||||
"or",
|
||||
"our",
|
||||
"out",
|
||||
"over",
|
||||
"that",
|
||||
"the",
|
||||
"their",
|
||||
"they",
|
||||
"this",
|
||||
"to",
|
||||
"use",
|
||||
"when",
|
||||
"with",
|
||||
"you",
|
||||
"your",
|
||||
"will",
|
||||
"can",
|
||||
"if",
|
||||
"not",
|
||||
"only",
|
||||
"also",
|
||||
"more",
|
||||
"best",
|
||||
"practice",
|
||||
"practices",
|
||||
"expert",
|
||||
"specialist",
|
||||
"focused",
|
||||
"focus",
|
||||
"master",
|
||||
"modern",
|
||||
"advanced",
|
||||
"comprehensive",
|
||||
"production",
|
||||
"production-ready",
|
||||
"ready",
|
||||
"build",
|
||||
"create",
|
||||
"deliver",
|
||||
"design",
|
||||
"implement",
|
||||
"implementation",
|
||||
"strategy",
|
||||
"strategies",
|
||||
"patterns",
|
||||
"pattern",
|
||||
"workflow",
|
||||
"workflows",
|
||||
"guide",
|
||||
"template",
|
||||
"templates",
|
||||
"tool",
|
||||
"tools",
|
||||
"project",
|
||||
"projects",
|
||||
"support",
|
||||
"manage",
|
||||
"management",
|
||||
"system",
|
||||
"systems",
|
||||
"services",
|
||||
"service",
|
||||
"across",
|
||||
"end",
|
||||
"end-to-end",
|
||||
"using",
|
||||
"based",
|
||||
"ensure",
|
||||
"ensure",
|
||||
"help",
|
||||
"needs",
|
||||
"need",
|
||||
"focuses",
|
||||
"handles",
|
||||
"builds",
|
||||
"make",
|
||||
]);
|
||||
|
||||
const TAG_STOPWORDS = new Set([
|
||||
'pro', 'expert', 'patterns', 'pattern', 'workflow', 'workflows', 'templates', 'template', 'toolkit',
|
||||
'tools', 'tool', 'project', 'projects', 'guide', 'management', 'engineer', 'architect', 'developer',
|
||||
'specialist', 'assistant', 'analysis', 'review', 'reviewer', 'automation', 'orchestration', 'scaffold',
|
||||
'scaffolding', 'implementation', 'strategy', 'context', 'management', 'feature', 'features', 'smart',
|
||||
'system', 'systems', 'design', 'development', 'development', 'test', 'testing', 'workflow',
|
||||
"pro",
|
||||
"expert",
|
||||
"patterns",
|
||||
"pattern",
|
||||
"workflow",
|
||||
"workflows",
|
||||
"templates",
|
||||
"template",
|
||||
"toolkit",
|
||||
"tools",
|
||||
"tool",
|
||||
"project",
|
||||
"projects",
|
||||
"guide",
|
||||
"management",
|
||||
"engineer",
|
||||
"architect",
|
||||
"developer",
|
||||
"specialist",
|
||||
"assistant",
|
||||
"analysis",
|
||||
"review",
|
||||
"reviewer",
|
||||
"automation",
|
||||
"orchestration",
|
||||
"scaffold",
|
||||
"scaffolding",
|
||||
"implementation",
|
||||
"strategy",
|
||||
"context",
|
||||
"management",
|
||||
"feature",
|
||||
"features",
|
||||
"smart",
|
||||
"system",
|
||||
"systems",
|
||||
"design",
|
||||
"development",
|
||||
"development",
|
||||
"test",
|
||||
"testing",
|
||||
"workflow",
|
||||
]);
|
||||
|
||||
const CATEGORY_RULES = [
|
||||
{
|
||||
name: 'security',
|
||||
name: "security",
|
||||
keywords: [
|
||||
'security', 'sast', 'compliance', 'privacy', 'threat', 'vulnerability', 'owasp', 'pci', 'gdpr',
|
||||
'secrets', 'risk', 'malware', 'forensics', 'attack', 'incident', 'auth', 'mtls', 'zero', 'trust',
|
||||
"security",
|
||||
"sast",
|
||||
"compliance",
|
||||
"privacy",
|
||||
"threat",
|
||||
"vulnerability",
|
||||
"owasp",
|
||||
"pci",
|
||||
"gdpr",
|
||||
"secrets",
|
||||
"risk",
|
||||
"malware",
|
||||
"forensics",
|
||||
"attack",
|
||||
"incident",
|
||||
"auth",
|
||||
"mtls",
|
||||
"zero",
|
||||
"trust",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'infrastructure',
|
||||
name: "infrastructure",
|
||||
keywords: [
|
||||
'kubernetes', 'k8s', 'helm', 'terraform', 'cloud', 'network', 'devops', 'gitops', 'prometheus',
|
||||
'grafana', 'observability', 'monitoring', 'logging', 'tracing', 'deployment', 'istio', 'linkerd',
|
||||
'service', 'mesh', 'slo', 'sre', 'oncall', 'incident', 'pipeline', 'cicd', 'ci', 'cd', 'kafka',
|
||||
"kubernetes",
|
||||
"k8s",
|
||||
"helm",
|
||||
"terraform",
|
||||
"cloud",
|
||||
"network",
|
||||
"devops",
|
||||
"gitops",
|
||||
"prometheus",
|
||||
"grafana",
|
||||
"observability",
|
||||
"monitoring",
|
||||
"logging",
|
||||
"tracing",
|
||||
"deployment",
|
||||
"istio",
|
||||
"linkerd",
|
||||
"service",
|
||||
"mesh",
|
||||
"slo",
|
||||
"sre",
|
||||
"oncall",
|
||||
"incident",
|
||||
"pipeline",
|
||||
"cicd",
|
||||
"ci",
|
||||
"cd",
|
||||
"kafka",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'data-ai',
|
||||
name: "data-ai",
|
||||
keywords: [
|
||||
'data', 'database', 'db', 'sql', 'postgres', 'mysql', 'analytics', 'etl', 'warehouse', 'dbt',
|
||||
'ml', 'ai', 'llm', 'rag', 'vector', 'embedding', 'spark', 'airflow', 'cdc', 'pipeline',
|
||||
"data",
|
||||
"database",
|
||||
"db",
|
||||
"sql",
|
||||
"postgres",
|
||||
"mysql",
|
||||
"analytics",
|
||||
"etl",
|
||||
"warehouse",
|
||||
"dbt",
|
||||
"ml",
|
||||
"ai",
|
||||
"llm",
|
||||
"rag",
|
||||
"vector",
|
||||
"embedding",
|
||||
"spark",
|
||||
"airflow",
|
||||
"cdc",
|
||||
"pipeline",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'development',
|
||||
name: "development",
|
||||
keywords: [
|
||||
'python', 'javascript', 'typescript', 'java', 'golang', 'go', 'rust', 'csharp', 'dotnet', 'php',
|
||||
'ruby', 'node', 'react', 'frontend', 'backend', 'mobile', 'ios', 'android', 'flutter', 'fastapi',
|
||||
'django', 'nextjs', 'vue', 'api',
|
||||
"python",
|
||||
"javascript",
|
||||
"typescript",
|
||||
"java",
|
||||
"golang",
|
||||
"go",
|
||||
"rust",
|
||||
"csharp",
|
||||
"dotnet",
|
||||
"php",
|
||||
"ruby",
|
||||
"node",
|
||||
"react",
|
||||
"frontend",
|
||||
"backend",
|
||||
"mobile",
|
||||
"ios",
|
||||
"android",
|
||||
"flutter",
|
||||
"fastapi",
|
||||
"django",
|
||||
"nextjs",
|
||||
"vue",
|
||||
"api",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'architecture',
|
||||
name: "architecture",
|
||||
keywords: [
|
||||
'architecture', 'c4', 'microservices', 'event', 'cqrs', 'saga', 'domain', 'ddd', 'patterns',
|
||||
'decision', 'adr',
|
||||
"architecture",
|
||||
"c4",
|
||||
"microservices",
|
||||
"event",
|
||||
"cqrs",
|
||||
"saga",
|
||||
"domain",
|
||||
"ddd",
|
||||
"patterns",
|
||||
"decision",
|
||||
"adr",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'testing',
|
||||
keywords: ['testing', 'tdd', 'unit', 'e2e', 'qa', 'test'],
|
||||
name: "testing",
|
||||
keywords: ["testing", "tdd", "unit", "e2e", "qa", "test"],
|
||||
},
|
||||
{
|
||||
name: 'business',
|
||||
name: "business",
|
||||
keywords: [
|
||||
'business', 'market', 'sales', 'finance', 'startup', 'legal', 'hr', 'product', 'customer', 'seo',
|
||||
'marketing', 'kpi', 'contract', 'employment',
|
||||
"business",
|
||||
"market",
|
||||
"sales",
|
||||
"finance",
|
||||
"startup",
|
||||
"legal",
|
||||
"hr",
|
||||
"product",
|
||||
"customer",
|
||||
"seo",
|
||||
"marketing",
|
||||
"kpi",
|
||||
"contract",
|
||||
"employment",
|
||||
],
|
||||
},
|
||||
{
|
||||
name: 'workflow',
|
||||
keywords: ['workflow', 'orchestration', 'conductor', 'automation', 'process', 'collaboration'],
|
||||
name: "workflow",
|
||||
keywords: [
|
||||
"workflow",
|
||||
"orchestration",
|
||||
"conductor",
|
||||
"automation",
|
||||
"process",
|
||||
"collaboration",
|
||||
],
|
||||
},
|
||||
];
|
||||
|
||||
const BUNDLE_RULES = {
|
||||
'core-dev': {
|
||||
description: 'Core development skills across languages, frameworks, and backend/frontend fundamentals.',
|
||||
"core-dev": {
|
||||
description:
|
||||
"Core development skills across languages, frameworks, and backend/frontend fundamentals.",
|
||||
keywords: [
|
||||
'python', 'javascript', 'typescript', 'go', 'golang', 'rust', 'java', 'node', 'frontend', 'backend',
|
||||
'react', 'fastapi', 'django', 'nextjs', 'api', 'mobile', 'ios', 'android', 'flutter', 'php', 'ruby',
|
||||
"python",
|
||||
"javascript",
|
||||
"typescript",
|
||||
"go",
|
||||
"golang",
|
||||
"rust",
|
||||
"java",
|
||||
"node",
|
||||
"frontend",
|
||||
"backend",
|
||||
"react",
|
||||
"fastapi",
|
||||
"django",
|
||||
"nextjs",
|
||||
"api",
|
||||
"mobile",
|
||||
"ios",
|
||||
"android",
|
||||
"flutter",
|
||||
"php",
|
||||
"ruby",
|
||||
],
|
||||
},
|
||||
'security-core': {
|
||||
description: 'Security, privacy, and compliance essentials.',
|
||||
"security-core": {
|
||||
description: "Security, privacy, and compliance essentials.",
|
||||
keywords: [
|
||||
'security', 'sast', 'compliance', 'threat', 'risk', 'privacy', 'secrets', 'owasp', 'gdpr', 'pci',
|
||||
'vulnerability', 'auth',
|
||||
"security",
|
||||
"sast",
|
||||
"compliance",
|
||||
"threat",
|
||||
"risk",
|
||||
"privacy",
|
||||
"secrets",
|
||||
"owasp",
|
||||
"gdpr",
|
||||
"pci",
|
||||
"vulnerability",
|
||||
"auth",
|
||||
],
|
||||
},
|
||||
'k8s-core': {
|
||||
description: 'Kubernetes and service mesh essentials.',
|
||||
keywords: ['kubernetes', 'k8s', 'helm', 'istio', 'linkerd', 'service', 'mesh'],
|
||||
},
|
||||
'data-core': {
|
||||
description: 'Data engineering and analytics foundations.',
|
||||
"k8s-core": {
|
||||
description: "Kubernetes and service mesh essentials.",
|
||||
keywords: [
|
||||
'data', 'database', 'sql', 'dbt', 'airflow', 'spark', 'analytics', 'etl', 'warehouse', 'postgres',
|
||||
'mysql', 'kafka',
|
||||
"kubernetes",
|
||||
"k8s",
|
||||
"helm",
|
||||
"istio",
|
||||
"linkerd",
|
||||
"service",
|
||||
"mesh",
|
||||
],
|
||||
},
|
||||
'ops-core': {
|
||||
description: 'Operations, observability, and delivery pipelines.',
|
||||
"data-core": {
|
||||
description: "Data engineering and analytics foundations.",
|
||||
keywords: [
|
||||
'observability', 'monitoring', 'logging', 'tracing', 'prometheus', 'grafana', 'devops', 'gitops',
|
||||
'deployment', 'cicd', 'pipeline', 'slo', 'sre', 'incident',
|
||||
"data",
|
||||
"database",
|
||||
"sql",
|
||||
"dbt",
|
||||
"airflow",
|
||||
"spark",
|
||||
"analytics",
|
||||
"etl",
|
||||
"warehouse",
|
||||
"postgres",
|
||||
"mysql",
|
||||
"kafka",
|
||||
],
|
||||
},
|
||||
"ops-core": {
|
||||
description: "Operations, observability, and delivery pipelines.",
|
||||
keywords: [
|
||||
"observability",
|
||||
"monitoring",
|
||||
"logging",
|
||||
"tracing",
|
||||
"prometheus",
|
||||
"grafana",
|
||||
"devops",
|
||||
"gitops",
|
||||
"deployment",
|
||||
"cicd",
|
||||
"pipeline",
|
||||
"slo",
|
||||
"sre",
|
||||
"incident",
|
||||
],
|
||||
},
|
||||
};
|
||||
|
||||
const CURATED_COMMON = [
|
||||
'bash-pro',
|
||||
'python-pro',
|
||||
'javascript-pro',
|
||||
'typescript-pro',
|
||||
'golang-pro',
|
||||
'rust-pro',
|
||||
'java-pro',
|
||||
'frontend-developer',
|
||||
'backend-architect',
|
||||
'nodejs-backend-patterns',
|
||||
'fastapi-pro',
|
||||
'api-design-principles',
|
||||
'sql-pro',
|
||||
'database-architect',
|
||||
'kubernetes-architect',
|
||||
'terraform-specialist',
|
||||
'observability-engineer',
|
||||
'security-auditor',
|
||||
'sast-configuration',
|
||||
'gitops-workflow',
|
||||
"bash-pro",
|
||||
"python-pro",
|
||||
"javascript-pro",
|
||||
"typescript-pro",
|
||||
"golang-pro",
|
||||
"rust-pro",
|
||||
"java-pro",
|
||||
"frontend-developer",
|
||||
"backend-architect",
|
||||
"nodejs-backend-patterns",
|
||||
"fastapi-pro",
|
||||
"api-design-principles",
|
||||
"sql-pro",
|
||||
"database-architect",
|
||||
"kubernetes-architect",
|
||||
"terraform-specialist",
|
||||
"observability-engineer",
|
||||
"security-auditor",
|
||||
"sast-configuration",
|
||||
"gitops-workflow",
|
||||
];
|
||||
|
||||
function normalizeTokens(tokens) {
|
||||
return unique(tokens.map(token => token.toLowerCase())).filter(Boolean);
|
||||
return unique(tokens.map((token) => token.toLowerCase())).filter(Boolean);
|
||||
}
|
||||
|
||||
function deriveTags(skill) {
|
||||
let tags = Array.isArray(skill.tags) ? skill.tags : [];
|
||||
tags = tags.map(tag => tag.toLowerCase()).filter(Boolean);
|
||||
tags = tags.map((tag) => tag.toLowerCase()).filter(Boolean);
|
||||
|
||||
if (!tags.length) {
|
||||
tags = skill.id
|
||||
.split('-')
|
||||
.map(tag => tag.toLowerCase())
|
||||
.filter(tag => tag && !TAG_STOPWORDS.has(tag));
|
||||
.split("-")
|
||||
.map((tag) => tag.toLowerCase())
|
||||
.filter((tag) => tag && !TAG_STOPWORDS.has(tag));
|
||||
}
|
||||
|
||||
return normalizeTokens(tags);
|
||||
@@ -177,17 +470,18 @@ function detectCategory(skill, tags) {
|
||||
}
|
||||
}
|
||||
|
||||
return 'general';
|
||||
return "general";
|
||||
}
|
||||
|
||||
function buildTriggers(skill, tags) {
|
||||
const tokens = tokenize(`${skill.name} ${skill.description}`)
|
||||
.filter(token => token.length >= 2 && !STOPWORDS.has(token));
|
||||
const tokens = tokenize(`${skill.name} ${skill.description}`).filter(
|
||||
(token) => token.length >= 2 && !STOPWORDS.has(token),
|
||||
);
|
||||
return unique([...tags, ...tokens]).slice(0, 12);
|
||||
}
|
||||
|
||||
function buildAliases(skills) {
|
||||
const existingIds = new Set(skills.map(skill => skill.id));
|
||||
const existingIds = new Set(skills.map((skill) => skill.id));
|
||||
const aliases = {};
|
||||
const used = new Set();
|
||||
|
||||
@@ -200,7 +494,7 @@ function buildAliases(skills) {
      }
    }

    const tokens = skill.id.split("-").filter(Boolean);
    if (skill.id.length < 28 || tokens.length < 4) continue;

    const deduped = [];
@@ -211,10 +505,11 @@ function buildAliases(skills) {
      deduped.push(token);
    }

    const aliasTokens =
      deduped.length > 3
        ? [deduped[0], deduped[1], deduped[deduped.length - 1]]
        : deduped;
    const alias = unique(aliasTokens).join("-");

    if (!alias || alias === skill.id) continue;
    if (existingIds.has(alias) || used.has(alias)) continue;
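// Example (hypothetical id): "kubernetes-cluster-security-hardening-guide"
// is 44 chars with 5 unique tokens, so aliasTokens keeps [first, second,
// last] and the alias becomes "kubernetes-cluster-guide"; ids under 28
// chars or with fewer than 4 tokens are skipped above.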
@@ -241,11 +536,11 @@ function buildBundles(skills) {

  for (const [bundleName, rule] of Object.entries(BUNDLE_RULES)) {
    const bundleSkills = [];
    const keywords = rule.keywords.map((keyword) => keyword.toLowerCase());

    for (const skill of skills) {
      const tokenSet = skillTokens.get(skill.id) || new Set();
      if (keywords.some((keyword) => tokenSet.has(keyword))) {
        bundleSkills.push(skill.id);
      }
    }
@@ -256,49 +551,58 @@ function buildBundles(skills) {
    };
  }

  const common = CURATED_COMMON.filter((skillId) => skillTokens.has(skillId));

  return { bundles, common };
}
function truncate(value, limit) {
  if (!value || value.length <= limit) return value || "";
  return `${value.slice(0, limit - 3)}...`;
}

function renderCatalogMarkdown(catalog) {
  const lines = [];
  lines.push("# Skill Catalog");
  lines.push("");
  lines.push(`Generated at: ${catalog.generatedAt}`);
  lines.push("");
  lines.push(`Total skills: ${catalog.total}`);
  lines.push("");

  const categories = Array.from(
    new Set(catalog.skills.map((skill) => skill.category)),
  ).sort();
  for (const category of categories) {
    const grouped = catalog.skills.filter(
      (skill) => skill.category === category,
    );
    lines.push(`## ${category} (${grouped.length})`);
    lines.push("");
    lines.push("| Skill | Description | Tags | Triggers |");
    lines.push("| --- | --- | --- | --- |");

    for (const skill of grouped) {
      const description = truncate(skill.description, 160).replace(
        /\|/g,
        "\\|",
      );
      const tags = skill.tags.join(", ");
      const triggers = skill.triggers.join(", ");
      lines.push(
        `| \`${skill.id}\` | ${description} | ${tags} | ${triggers} |`,
      );
    }

    lines.push("");
  }

  return lines.join("\n");
}

function buildCatalog() {
  const skillRelPaths = listSkillIdsRecursive(SKILLS_DIR);
  const skills = skillRelPaths.map((relPath) => readSkill(SKILLS_DIR, relPath));
  const catalogSkills = [];

  for (const skill of skills) {
@@ -318,24 +622,32 @@ function buildCatalog() {
  }

  const catalog = {
    generatedAt: process.env.SOURCE_DATE_EPOCH
      ? new Date(process.env.SOURCE_DATE_EPOCH * 1000).toISOString()
      : "2026-02-08T00:00:00.000Z",
    total: catalogSkills.length,
    skills: catalogSkills.sort((a, b) =>
      a.id < b.id ? -1 : a.id > b.id ? 1 : 0,
    ),
  };
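// Sketch of the reproducible-build path (hypothetical value): running
//   SOURCE_DATE_EPOCH=1707350400 npm run catalog
// pins generatedAt to "2024-02-08T00:00:00.000Z" (the env var is a string,
// but `* 1000` coerces it to a number), so the same inputs always produce
// byte-identical catalog output.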

  const aliases = buildAliases(catalog.skills);
  const bundleData = buildBundles(catalog.skills);

  const catalogPath = path.join(ROOT, "data", "catalog.json");
  const catalogMarkdownPath = path.join(ROOT, "CATALOG.md");
  const bundlesPath = path.join(ROOT, "data", "bundles.json");
  const aliasesPath = path.join(ROOT, "data", "aliases.json");

  fs.writeFileSync(catalogPath, JSON.stringify(catalog, null, 2));
  fs.writeFileSync(catalogMarkdownPath, renderCatalogMarkdown(catalog));
  fs.writeFileSync(
    bundlesPath,
    JSON.stringify(
      { generatedAt: catalog.generatedAt, ...bundleData },
      null,
      2,
    ),
  );
  fs.writeFileSync(
    aliasesPath,
@@ -6,14 +6,34 @@ import yaml

def parse_frontmatter(content):
    """
    Parses YAML frontmatter, sanitizing unquoted values containing @.
    Handles single values and comma-separated lists by quoting the entire line.
    """
    fm_match = re.search(r'^---\s*\n(.*?)\n---', content, re.DOTALL)
    if not fm_match:
        return {}

    yaml_text = fm_match.group(1)

    # Process line by line to handle values containing @ and commas
    sanitized_lines = []
    for line in yaml_text.splitlines():
        # Match "key: value" (handles keys with dashes like 'package-name')
        match = re.match(r'^(\s*[\w-]+):\s*(.*)$', line)
        if match:
            key, val = match.groups()
            val_s = val.strip()
            # If value contains @ and isn't already quoted, wrap the whole string in double quotes
            if '@' in val_s and not (val_s.startswith('"') or val_s.startswith("'")):
                # Escape any existing double quotes within the value string
                safe_val = val_s.replace('"', '\\"')
                line = f'{key}: "{safe_val}"'
        sanitized_lines.append(line)

    sanitized_yaml = '\n'.join(sanitized_lines)

    try:
        return yaml.safe_load(sanitized_yaml) or {}
    except yaml.YAMLError as e:
        print(f"⚠️ YAML parsing error: {e}")
        return {}
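# Example of the sanitization above (hypothetical frontmatter line):
#   package-name: @scope/cli
# is rewritten to
#   package-name: "@scope/cli"
# since a plain YAML scalar cannot begin with the reserved '@' indicator.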
@@ -8,6 +8,8 @@ const SKILLS_DIR = path.join(ROOT, 'skills');
const ALLOWED_FIELDS = new Set([
  'name',
  'description',
  'risk',
  'source',
  'license',
  'compatibility',
  'metadata',
@@ -21,8 +21,12 @@ python3 scripts/generate_index.py
echo "Running update_readme.py..."
python3 scripts/update_readme.py

# 2. Catalog (required for CI)
echo -e "\n${YELLOW}Step 2: Build catalog...${NC}"
npm run catalog

# 3. Stats Consistency Check
echo -e "\n${YELLOW}Step 3: Verifying Stats Consistency...${NC}"
JSON_COUNT=$(python3 -c "import json; print(len(json.load(open('skills_index.json'))))")
echo "Skills in Registry (JSON): $JSON_COUNT"

@@ -36,8 +40,14 @@ if [[ "$README_CONTENT" != *"$JSON_COUNT high-performance"* ]]; then
fi
echo -e "${GREEN}✅ Stats Consistent.${NC}"

# 4. Version check (package.json is source of truth for npm)
echo -e "\n${YELLOW}Step 4: Version check${NC}"
PKG_VERSION=$(node -p "require('./package.json').version")
echo "package.json version: $PKG_VERSION"
echo "Ensure this version is bumped before 'npm publish' (npm forbids republishing the same version)."

# 5. Contributor Check
echo -e "\n${YELLOW}Step 5: Contributor Check${NC}"
echo "Recent commits by author (check against README 'Repo Contributors'):"
git shortlog -sn --since="1 month ago" --all --no-merges | head -n 10

@@ -52,4 +62,5 @@ if [ "$CONFIRM_CONTRIB" != "yes" ]; then
fi

echo -e "\n${GREEN}✅ Release Cycle Checks Passed. You may now commit and push.${NC}"
echo -e "${YELLOW}After tagging a release: run \`npm publish\` from repo root (or use GitHub Release + NPM_TOKEN for CI).${NC}"
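# One common bump flow before publishing (a sketch; adapt to your process):
#   npm version patch        # bumps package.json and creates a git tag
#   git push --follow-tags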
exit 0

424
scripts/sync_microsoft_skills.py
Normal file
@@ -0,0 +1,424 @@
#!/usr/bin/env python3
"""
Sync Microsoft Skills Repository - v4 (Flat Structure)
Reads each SKILL.md frontmatter 'name' field and uses it as a flat directory
name under skills/ to comply with the repository's indexing conventions.
"""

import re
import shutil
import subprocess
import tempfile
import json
from pathlib import Path

MS_REPO = "https://github.com/microsoft/skills.git"
REPO_ROOT = Path(__file__).parent.parent
TARGET_DIR = REPO_ROOT / "skills"
DOCS_DIR = REPO_ROOT / "docs"
ATTRIBUTION_FILE = DOCS_DIR / "microsoft-skills-attribution.json"


def clone_repo(temp_dir: Path):
    """Clone Microsoft skills repository (shallow)."""
    print("🔄 Cloning Microsoft Skills repository...")
    subprocess.run(
        ["git", "clone", "--depth", "1", MS_REPO, str(temp_dir)],
        check=True,
    )


def cleanup_previous_sync():
    """Remove skill directories from a previous sync using the attribution manifest."""
    if not ATTRIBUTION_FILE.exists():
        print("  ℹ️ No previous attribution file found — skipping cleanup.")
        return 0

    try:
        with open(ATTRIBUTION_FILE) as f:
            attribution = json.load(f)
    except (json.JSONDecodeError, OSError) as e:
        print(f"  ⚠️ Could not read attribution file: {e}")
        return 0

    previous_skills = attribution.get("skills", [])
    removed_count = 0

    for skill in previous_skills:
        flat_name = skill.get("flat_name", "")
        if not flat_name:
            continue

        skill_dir = TARGET_DIR / flat_name
        if skill_dir.exists() and skill_dir.is_dir():
            shutil.rmtree(skill_dir)
            removed_count += 1

    print(
        f"  🗑️ Removed {removed_count} previously synced skill directories.")
    return removed_count


def extract_skill_name(skill_md_path: Path) -> str | None:
    """Extract the 'name' field from SKILL.md YAML frontmatter."""
    try:
        content = skill_md_path.read_text(encoding="utf-8")
    except Exception:
        return None

    fm_match = re.search(r"^---\s*\n(.*?)\n---", content, re.DOTALL)
    if not fm_match:
        return None

    for line in fm_match.group(1).splitlines():
        match = re.match(r"^name:\s*(.+)$", line)
        if match:
            value = match.group(1).strip().strip("\"'")
            if value:
                return value
    return None


def generate_fallback_name(relative_path: Path) -> str:
    """
    Generate a fallback directory name when frontmatter 'name' is missing.
    Converts a path like 'dotnet/compute/botservice' to 'ms-dotnet-compute-botservice'.
    """
    parts = [p for p in relative_path.parts if p]
    return "ms-" + "-".join(parts)


def find_skills_in_directory(source_dir: Path):
    """
    Walk the Microsoft repo's skills/ directory (which uses symlinks)
    and resolve each to its actual SKILL.md content.
    Returns list of dicts: {relative_path, skill_md_path, source_dir}.
    """
    skills_source = source_dir / "skills"
    results = []

    if not skills_source.exists():
        return results

    for item in skills_source.rglob("*"):
        if not item.is_dir():
            continue

        skill_md = None
        actual_dir = None

        if item.is_symlink():
            try:
                resolved = item.resolve()
                if (resolved / "SKILL.md").exists():
                    skill_md = resolved / "SKILL.md"
                    actual_dir = resolved
            except Exception:
                continue
        elif (item / "SKILL.md").exists():
            skill_md = item / "SKILL.md"
            actual_dir = item

        if skill_md is None:
            continue

        try:
            relative_path = item.relative_to(skills_source)
        except ValueError:
            continue

        results.append({
            "relative_path": relative_path,
            "skill_md": skill_md,
            "source_dir": actual_dir,
        })

    return results


def find_plugin_skills(source_dir: Path, already_synced_names: set):
    """Find plugin skills in .github/plugins/ that haven't been synced yet."""
    results = []
    github_plugins = source_dir / ".github" / "plugins"

    if not github_plugins.exists():
        return results

    for skill_file in github_plugins.rglob("SKILL.md"):
        skill_dir = skill_file.parent
        skill_name = skill_dir.name

        if skill_name not in already_synced_names:
            results.append({
                "relative_path": Path("plugins") / skill_name,
                "skill_md": skill_file,
                "source_dir": skill_dir,
            })

    return results


def find_github_skills(source_dir: Path, already_synced_names: set):
    """Find skills in .github/skills/ not reachable via the skills/ symlink tree."""
    results = []
    github_skills = source_dir / ".github" / "skills"

    if not github_skills.exists():
        return results

    for skill_dir in github_skills.iterdir():
        if not skill_dir.is_dir() or not (skill_dir / "SKILL.md").exists():
            continue

        if skill_dir.name not in already_synced_names:
            results.append({
                "relative_path": Path(".github/skills") / skill_dir.name,
                "skill_md": skill_dir / "SKILL.md",
                "source_dir": skill_dir,
            })

    return results


def sync_skills_flat(source_dir: Path, target_dir: Path):
    """
    Sync all Microsoft skills into a flat structure under skills/.
    Uses frontmatter 'name' as directory name, with collision detection.
    Protects existing non-Microsoft skills from being overwritten.
    """
    # Load previous attribution to know which dirs are Microsoft-owned
    previously_synced_names = set()
    if ATTRIBUTION_FILE.exists():
        try:
            with open(ATTRIBUTION_FILE) as f:
                prev = json.load(f)
            previously_synced_names = {
                s["flat_name"] for s in prev.get("skills", []) if s.get("flat_name")
            }
        except (json.JSONDecodeError, OSError):
            pass

    all_skill_entries = find_skills_in_directory(source_dir)
    print(f"  📂 Found {len(all_skill_entries)} skills in skills/ directory")

    synced_count = 0
    skill_metadata = []
    # name -> original relative_path (for collision logging)
    used_names: dict[str, str] = {}

    for entry in all_skill_entries:
        skill_name = extract_skill_name(entry["skill_md"])

        if not skill_name:
            skill_name = generate_fallback_name(entry["relative_path"])
            print(
                f"  ⚠️ No frontmatter name for {entry['relative_path']}, using fallback: {skill_name}")

        # Internal collision detection (two Microsoft skills with same name)
        if skill_name in used_names:
            original = used_names[skill_name]
            print(
                f"  ⚠️ Name collision '{skill_name}': {entry['relative_path']} vs {original}")
            lang = entry["relative_path"].parts[0] if entry["relative_path"].parts else "unknown"
            skill_name = f"{skill_name}-{lang}"
            print(f"     Resolved to: {skill_name}")
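        # e.g. two skills both named "azure-deploy" (hypothetical), one under
        # dotnet/ and one under python/, end up as "azure-deploy" and
        # "azure-deploy-python" respectively.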

        # Protect existing non-Microsoft skills from being overwritten
        target_skill_dir = target_dir / skill_name
        if target_skill_dir.exists() and skill_name not in previously_synced_names:
            original_name = skill_name
            skill_name = f"{skill_name}-ms"
            print(
                f"  ⚠️ '{original_name}' exists as a non-Microsoft skill, using: {skill_name}")

        used_names[skill_name] = str(entry["relative_path"])

        # Create flat target directory
        target_skill_dir = target_dir / skill_name
        target_skill_dir.mkdir(parents=True, exist_ok=True)

        # Copy SKILL.md
        shutil.copy2(entry["skill_md"], target_skill_dir / "SKILL.md")

        # Copy other files from the skill directory
        for file_item in entry["source_dir"].iterdir():
            if file_item.name != "SKILL.md" and file_item.is_file():
                shutil.copy2(file_item, target_skill_dir / file_item.name)

        skill_metadata.append({
            "flat_name": skill_name,
            "original_path": str(entry["relative_path"]),
            "source": "microsoft/skills",
        })

        synced_count += 1
        print(f"  ✅ {entry['relative_path']} → skills/{skill_name}/")

    # Collect all source directory names already synced (for dedup)
    synced_names = set(used_names.keys())
    already_synced_dir_names = {
        e["source_dir"].name for e in all_skill_entries}

    # Sync plugin skills from .github/plugins/
    plugin_entries = find_plugin_skills(source_dir, already_synced_dir_names)

    if plugin_entries:
        print(f"\n  📦 Found {len(plugin_entries)} additional plugin skills")
        for entry in plugin_entries:
            skill_name = extract_skill_name(entry["skill_md"])
            if not skill_name:
                skill_name = entry["source_dir"].name

            if skill_name in synced_names:
                skill_name = f"{skill_name}-plugin"

            # Protect existing non-Microsoft skills
            target_skill_dir = target_dir / skill_name
            if target_skill_dir.exists() and skill_name not in previously_synced_names:
                original_name = skill_name
                skill_name = f"{skill_name}-ms"
                target_skill_dir = target_dir / skill_name
                print(
                    f"  ⚠️ '{original_name}' exists as a non-Microsoft skill, using: {skill_name}")

            synced_names.add(skill_name)
            already_synced_dir_names.add(entry["source_dir"].name)

            target_skill_dir.mkdir(parents=True, exist_ok=True)

            shutil.copy2(entry["skill_md"], target_skill_dir / "SKILL.md")

            for file_item in entry["source_dir"].iterdir():
                if file_item.name != "SKILL.md" and file_item.is_file():
                    shutil.copy2(file_item, target_skill_dir / file_item.name)

            skill_metadata.append({
                "flat_name": skill_name,
                "original_path": str(entry["relative_path"]),
                "source": "microsoft/skills (plugin)",
            })

            synced_count += 1
            print(f"  ✅ {entry['relative_path']} → skills/{skill_name}/")

    # Sync skills in .github/skills/ not reachable via the skills/ symlink tree
    github_skill_entries = find_github_skills(
        source_dir, already_synced_dir_names)

    if github_skill_entries:
        print(
            f"\n  📦 Found {len(github_skill_entries)} skills in .github/skills/ not linked from skills/")
        for entry in github_skill_entries:
            skill_name = extract_skill_name(entry["skill_md"])
            if not skill_name:
                skill_name = entry["source_dir"].name

            if skill_name in synced_names:
                skill_name = f"{skill_name}-github"

            # Protect existing non-Microsoft skills
            target_skill_dir = target_dir / skill_name
            if target_skill_dir.exists() and skill_name not in previously_synced_names:
                original_name = skill_name
                skill_name = f"{skill_name}-ms"
                target_skill_dir = target_dir / skill_name
                print(
                    f"  ⚠️ '{original_name}' exists as a non-Microsoft skill, using: {skill_name}")

            synced_names.add(skill_name)

            target_skill_dir.mkdir(parents=True, exist_ok=True)

            shutil.copy2(entry["skill_md"], target_skill_dir / "SKILL.md")

            for file_item in entry["source_dir"].iterdir():
                if file_item.name != "SKILL.md" and file_item.is_file():
                    shutil.copy2(file_item, target_skill_dir / file_item.name)

            skill_metadata.append({
                "flat_name": skill_name,
                "original_path": str(entry["relative_path"]),
                "source": "microsoft/skills (.github/skills)",
            })

            synced_count += 1
            print(f"  ✅ {entry['relative_path']} → skills/{skill_name}/")

    return synced_count, skill_metadata


def save_attribution(metadata: list):
    """Save attribution metadata to docs/."""
    DOCS_DIR.mkdir(parents=True, exist_ok=True)
    attribution = {
        "source": "microsoft/skills",
        "repository": "https://github.com/microsoft/skills",
        "license": "MIT",
        "synced_skills": len(metadata),
        "structure": "flat (frontmatter name as directory name)",
        "skills": metadata,
    }
    with open(DOCS_DIR / "microsoft-skills-attribution.json", "w") as f:
        json.dump(attribution, f, indent=2)


def copy_license(source_dir: Path):
    """Copy the Microsoft LICENSE to docs/."""
    DOCS_DIR.mkdir(parents=True, exist_ok=True)
    if (source_dir / "LICENSE").exists():
        shutil.copy2(source_dir / "LICENSE", DOCS_DIR / "LICENSE-MICROSOFT")


def main():
    """Main sync function."""
    print("🚀 Microsoft Skills Sync Script v4 (Flat Structure)")
    print("=" * 55)

    with tempfile.TemporaryDirectory() as temp_dir:
        temp_path = Path(temp_dir)

        try:
            clone_repo(temp_path)

            TARGET_DIR.mkdir(parents=True, exist_ok=True)

            print("\n🧹 Cleaning up previous sync...")
            cleanup_previous_sync()

            print("\n🔗 Resolving symlinks and flattening into skills/<name>/...")
            count, metadata = sync_skills_flat(temp_path, TARGET_DIR)

            print("\n📄 Saving attribution...")
            save_attribution(metadata)
            copy_license(temp_path)

            print(
                f"\n✨ Success! Synced {count} Microsoft skills (flat structure)")
            print(f"📁 Location: {TARGET_DIR}/")

            # Show summary of languages
            languages = set()
            for skill in metadata:
                parts = skill["original_path"].split("/")
                if len(parts) >= 1 and parts[0] != "plugins":
                    languages.add(parts[0])

            print("\n📊 Organization:")
            print(f"   Total skills: {count}")
            print(f"   Languages: {', '.join(sorted(languages))}")

            print("\n📋 Next steps:")
            print("1. Run: npm run build")
            print("2. Commit changes and create PR")

        except Exception as e:
            print(f"\n❌ Error: {e}")
            import traceback
            traceback.print_exc()
            return 1

    return 0


if __name__ == "__main__":
    exit(main())
98
scripts/tests/inspect_microsoft_repo.py
Normal file
@@ -0,0 +1,98 @@
#!/usr/bin/env python3
"""
Inspect Microsoft Skills Repository Structure
Shows the repository layout, skill locations, and what flat names would be generated.
"""

import re
import subprocess
import tempfile
from pathlib import Path

MS_REPO = "https://github.com/microsoft/skills.git"


def extract_skill_name(skill_md_path: Path) -> str | None:
    """Extract the 'name' field from SKILL.md YAML frontmatter."""
    try:
        content = skill_md_path.read_text(encoding="utf-8")
    except Exception:
        return None

    fm_match = re.search(r"^---\s*\n(.*?)\n---", content, re.DOTALL)
    if not fm_match:
        return None

    for line in fm_match.group(1).splitlines():
        match = re.match(r"^name:\s*(.+)$", line)
        if match:
            value = match.group(1).strip().strip("\"'")
            if value:
                return value
    return None


def inspect_repo():
    """Inspect the Microsoft skills repository structure."""
    print("🔍 Inspecting Microsoft Skills Repository Structure")
    print("=" * 60)

    with tempfile.TemporaryDirectory() as temp_dir:
        temp_path = Path(temp_dir)

        print("\n1️⃣ Cloning repository...")
        subprocess.run(
            ["git", "clone", "--depth", "1", MS_REPO, str(temp_path)],
            check=True,
            capture_output=True,
        )

        # Find all SKILL.md files
        all_skill_mds = list(temp_path.rglob("SKILL.md"))
        print(f"\n2️⃣ Total SKILL.md files found: {len(all_skill_mds)}")

        # Show flat name mapping
        print("\n3️⃣ Flat Name Mapping (frontmatter 'name' → directory name):")
        print("-" * 60)

        names_seen: dict[str, list[str]] = {}

        for skill_md in sorted(all_skill_mds, key=lambda p: str(p)):
            try:
                rel = skill_md.parent.relative_to(temp_path)
            except ValueError:
                rel = skill_md.parent

            name = extract_skill_name(skill_md)
            display_name = name if name else f"(no name → ms-{'-'.join(rel.parts[1:])})"

            print(f"  {rel} → {display_name}")

            effective_name = name if name else f"ms-{'-'.join(rel.parts[1:])}"
            if effective_name not in names_seen:
                names_seen[effective_name] = []
            names_seen[effective_name].append(str(rel))

        # Collision check
        collisions = {n: paths for n, paths in names_seen.items()
                      if len(paths) > 1}
        if collisions:
            print(f"\n4️⃣ ⚠️ Name Collisions Detected ({len(collisions)}):")
            for name, paths in collisions.items():
                print(f"  '{name}':")
                for p in paths:
                    print(f"    - {p}")
        else:
            print(
                f"\n4️⃣ ✅ No name collisions — all {len(names_seen)} names are unique!")

        print("\n✨ Inspection complete!")


if __name__ == "__main__":
    try:
        inspect_repo()
    except Exception as e:
        print(f"\n❌ Error: {e}")
        import traceback
        traceback.print_exc()
189
scripts/tests/test_comprehensive_coverage.py
Normal file
@@ -0,0 +1,189 @@
#!/usr/bin/env python3
"""
Test Script: Verify Microsoft Skills Sync Coverage and Flat Name Uniqueness
Ensures all skills are captured and no directory name collisions exist.
"""

import re
import subprocess
import tempfile
from pathlib import Path
from collections import defaultdict

MS_REPO = "https://github.com/microsoft/skills.git"


def extract_skill_name(skill_md_path: Path) -> str | None:
    """Extract the 'name' field from SKILL.md YAML frontmatter."""
    try:
        content = skill_md_path.read_text(encoding="utf-8")
    except Exception:
        return None

    fm_match = re.search(r"^---\s*\n(.*?)\n---", content, re.DOTALL)
    if not fm_match:
        return None

    for line in fm_match.group(1).splitlines():
        match = re.match(r"^name:\s*(.+)$", line)
        if match:
            value = match.group(1).strip().strip("\"'")
            if value:
                return value
    return None


def analyze_skill_locations():
    """
    Comprehensive analysis of all skill locations in Microsoft repo.
    Verifies flat name uniqueness and coverage.
    """
    print("🔬 Comprehensive Skill Coverage & Uniqueness Analysis")
    print("=" * 60)

    with tempfile.TemporaryDirectory() as temp_dir:
        temp_path = Path(temp_dir)

        print("\n1️⃣ Cloning repository...")
        subprocess.run(
            ["git", "clone", "--depth", "1", MS_REPO, str(temp_path)],
            check=True,
            capture_output=True,
        )

        # Find ALL SKILL.md files
        all_skill_files = list(temp_path.rglob("SKILL.md"))
        print(f"\n2️⃣ Total SKILL.md files found: {len(all_skill_files)}")

        # Categorize by location
        location_types = defaultdict(list)
        for skill_file in all_skill_files:
            path_str = str(skill_file)
            if ".github/skills" in path_str:
                location_types["github_skills"].append(skill_file)
            elif ".github/plugins" in path_str:
                location_types["github_plugins"].append(skill_file)
            elif "/skills/" in path_str:
                location_types["skills_dir"].append(skill_file)
            else:
                location_types["other"].append(skill_file)

        print("\n3️⃣ Skills by Location Type:")
        for loc_type, files in sorted(location_types.items()):
            print(f"  📍 {loc_type}: {len(files)} skills")

        # Flat name uniqueness check
        print("\n4️⃣ Flat Name Uniqueness Check:")
        print("-" * 60)

        name_map: dict[str, list[str]] = {}
        missing_names = []

        for skill_file in all_skill_files:
            try:
                rel = skill_file.parent.relative_to(temp_path)
            except ValueError:
                rel = skill_file.parent

            name = extract_skill_name(skill_file)
            if not name:
                missing_names.append(str(rel))
                # Generate fallback
                parts = [p for p in rel.parts if p not in (
                    ".github", "skills", "plugins")]
                name = "ms-" + "-".join(parts) if parts else str(rel)

            if name not in name_map:
                name_map[name] = []
            name_map[name].append(str(rel))

        # Report results
        collisions = {n: paths for n, paths in name_map.items()
                      if len(paths) > 1}
        unique_names = {n: paths for n,
                        paths in name_map.items() if len(paths) == 1}

        print(f"\n  ✅ Unique names: {len(unique_names)}")

        if missing_names:
            print(
                f"\n  ⚠️ Skills missing frontmatter 'name' ({len(missing_names)}):")
            for path in missing_names[:5]:
                print(f"    - {path}")
            if len(missing_names) > 5:
                print(f"    ... and {len(missing_names) - 5} more")

        if collisions:
            print(f"\n  ❌ Name collisions ({len(collisions)}):")
            for name, paths in collisions.items():
                print(f"  '{name}':")
                for p in paths:
                    print(f"    - {p}")
        else:
            print("\n  ✅ No collisions detected!")

        # Validate all names are valid directory names
        print("\n5️⃣ Directory Name Validation:")
        invalid_names = []
        for name in name_map:
            if not re.match(r"^[a-zA-Z0-9][a-zA-Z0-9._-]*$", name):
                invalid_names.append(name)

        if invalid_names:
            print(f"  ❌ Invalid directory names ({len(invalid_names)}):")
            for name in invalid_names[:5]:
                print(f"    - '{name}'")
        else:
            print(f"  ✅ All {len(name_map)} names are valid directory names!")

        # Summary
        print("\n6️⃣ Summary:")
        print("-" * 60)
        total = len(all_skill_files)
        unique = len(unique_names) + len(collisions)

        print(f"  Total SKILL.md files: {total}")
        print(f"  Unique flat names: {len(unique_names)}")
        print(f"  Collisions: {len(collisions)}")
        print(f"  Missing names: {len(missing_names)}")

        is_pass = len(collisions) == 0 and len(invalid_names) == 0
        if is_pass:
            print("\n  ✅ ALL CHECKS PASSED")
        else:
            print("\n  ⚠️ SOME CHECKS NEED ATTENTION")

        print("\n✨ Analysis complete!")

        return {
            "total": total,
            "unique": len(unique_names),
            "collisions": len(collisions),
            "missing_names": len(missing_names),
            "invalid_names": len(invalid_names),
            "passed": is_pass,
        }


if __name__ == "__main__":
    try:
        results = analyze_skill_locations()

        print("\n" + "=" * 60)
        print("FINAL VERDICT")
        print("=" * 60)

        if results["passed"]:
            print("\n✅ V4 FLAT STRUCTURE IS VALID")
            print("   All names are unique and valid directory names!")
        else:
            print("\n⚠️ V4 FLAT STRUCTURE NEEDS FIXES")
            if results["collisions"] > 0:
                print(f"   {results['collisions']} name collisions to resolve")
            if results["invalid_names"] > 0:
                print(f"   {results['invalid_names']} invalid directory names")

    except Exception as e:
        print(f"\n❌ Error: {e}")
        import traceback
        traceback.print_exc()
18
scripts/tests/test_validate_skills_headings.py
Normal file
@@ -0,0 +1,18 @@
import os
import sys

sys.path.append(os.path.dirname(os.path.dirname(__file__)))
from validate_skills import has_when_to_use_section

SAMPLES = [
    ("## When to Use", True),
    ("## Use this skill when", True),
    ("## When to Use This Skill", True),
    ("## Overview", False),
]

for heading, expected in SAMPLES:
    content = f"\n{heading}\n- item\n"
    assert has_when_to_use_section(content) is expected, heading

print("ok")
16
scripts/tests/validate_skills_headings.test.js
Normal file
@@ -0,0 +1,16 @@
const assert = require('assert');
const { hasUseSection } = require('../validate-skills');

const samples = [
  ['## When to Use', true],
  ['## Use this skill when', true],
  ['## When to Use This Skill', true],
  ['## Overview', false],
];

for (const [heading, expected] of samples) {
  const content = `\n${heading}\n- item\n`;
  assert.strictEqual(hasUseSection(content), expected, heading);
}

console.log('ok');
@@ -36,7 +36,7 @@ def update_readme():

    # 3. Update Intro Text Count
    content = re.sub(
        r"(library of \*\*)\d+( high-performance agentic skills\*\*)",
        rf"\g<1>{total_skills}\g<2>",
        content,
    )
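# E.g. a README line (hypothetical) "...library of **142 high-performance
# agentic skills**..." keeps its bold markers via the \g<1>/\g<2>
# backreferences while the count is swapped for the current total_skills.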
@@ -32,12 +32,24 @@ const MAX_SKILL_LINES = 500;
const ALLOWED_FIELDS = new Set([
  'name',
  'description',
  'risk',
  'source',
  'license',
  'compatibility',
  'metadata',
  'allowed-tools',
]);

const USE_SECTION_PATTERNS = [
  /^##\s+When\s+to\s+Use/im,
  /^##\s+Use\s+this\s+skill\s+when/im,
  /^##\s+When\s+to\s+Use\s+This\s+Skill/im,
];

function hasUseSection(content) {
  return USE_SECTION_PATTERNS.some(pattern => pattern.test(content));
}

function isPlainObject(value) {
  return value && typeof value === 'object' && !Array.isArray(value);
}
@@ -99,172 +111,183 @@ function addStrictSectionErrors(label, missing, baselineSet) {
  }
}

function run() {
  const skillIds = listSkillIds(SKILLS_DIR);
  const baseline = loadBaseline();
  const baselineUse = new Set(baseline.useSection || []);
  const baselineDoNotUse = new Set(baseline.doNotUseSection || []);
  const baselineInstructions = new Set(baseline.instructionsSection || []);
  const baselineLongFile = new Set(baseline.longFile || []);

  for (const skillId of skillIds) {
    const skillPath = path.join(SKILLS_DIR, skillId, 'SKILL.md');

    if (!fs.existsSync(skillPath)) {
      addError(`Missing SKILL.md: ${skillId}`);
      continue;
    }

    const content = fs.readFileSync(skillPath, 'utf8');
    const { data, errors: fmErrors, hasFrontmatter } = parseFrontmatter(content);
    const lineCount = content.split(/\r?\n/).length;

    if (!hasFrontmatter) {
      addError(`Missing frontmatter: ${skillId}`);
    }

    if (fmErrors && fmErrors.length) {
      fmErrors.forEach(error => addError(`Frontmatter parse error (${skillId}): ${error}`));
    }

    if (!NAME_PATTERN.test(skillId)) {
      addError(`Folder name must match ${NAME_PATTERN}: ${skillId}`);
    }

    if (data.name !== undefined) {
      const nameError = validateStringField('name', data.name, { min: 1, max: MAX_NAME_LENGTH });
      if (nameError) {
        addError(`${nameError} (${skillId})`);
      } else {
        const nameValue = String(data.name).trim();
        if (!NAME_PATTERN.test(nameValue)) {
          addError(`name must match ${NAME_PATTERN}: ${skillId}`);
        }
        if (nameValue !== skillId) {
          addError(`name must match folder name: ${skillId} -> ${nameValue}`);
        }
      }
    }

    const descError = data.description === undefined
      ? 'description is required.'
      : validateStringField('description', data.description, { min: 1, max: MAX_DESCRIPTION_LENGTH });
    if (descError) {
      addError(`${descError} (${skillId})`);
    }

    if (data.license !== undefined) {
      const licenseError = validateStringField('license', data.license, { min: 1, max: 128 });
      if (licenseError) {
        addError(`${licenseError} (${skillId})`);
      }
    }

    if (data.compatibility !== undefined) {
      const compatibilityError = validateStringField(
        'compatibility',
        data.compatibility,
        { min: 1, max: MAX_COMPATIBILITY_LENGTH },
      );
      if (compatibilityError) {
        addError(`${compatibilityError} (${skillId})`);
      }
    }

    if (data['allowed-tools'] !== undefined) {
      if (typeof data['allowed-tools'] !== 'string') {
        addError(`allowed-tools must be a space-delimited string. (${skillId})`);
      } else if (!data['allowed-tools'].trim()) {
        addError(`allowed-tools cannot be empty. (${skillId})`);
      }
    }

    if (data.metadata !== undefined) {
      if (!isPlainObject(data.metadata)) {
        addError(`metadata must be a string map/object. (${skillId})`);
      } else {
        for (const [key, value] of Object.entries(data.metadata)) {
          if (typeof value !== 'string') {
            addError(`metadata.${key} must be a string. (${skillId})`);
          }
        }
      }
    }

    if (data && Object.keys(data).length) {
      const unknownFields = Object.keys(data).filter(key => !ALLOWED_FIELDS.has(key));
      if (unknownFields.length) {
        unknownFieldSkills.push(skillId);
        addError(`Unknown frontmatter fields (${skillId}): ${unknownFields.join(', ')}`);
      }
    }

    if (lineCount > MAX_SKILL_LINES) {
      longFiles.push(skillId);
    }

    if (!hasUseSection(content)) {
      missingUseSection.push(skillId);
    }

    if (!content.includes('## Do not use')) {
      missingDoNotUseSection.push(skillId);
    }

    if (!content.includes('## Instructions')) {
      missingInstructionsSection.push(skillId);
    }
  }

  if (missingUseSection.length) {
    addWarning(`Missing "Use this skill when" section: ${missingUseSection.length} skills (examples: ${missingUseSection.slice(0, 5).join(', ')})`);
  }

  if (missingDoNotUseSection.length) {
    addWarning(`Missing "Do not use" section: ${missingDoNotUseSection.length} skills (examples: ${missingDoNotUseSection.slice(0, 5).join(', ')})`);
  }

  if (missingInstructionsSection.length) {
    addWarning(`Missing "Instructions" section: ${missingInstructionsSection.length} skills (examples: ${missingInstructionsSection.slice(0, 5).join(', ')})`);
  }

  if (longFiles.length) {
    addWarning(`SKILL.md over ${MAX_SKILL_LINES} lines: ${longFiles.length} skills (examples: ${longFiles.slice(0, 5).join(', ')})`);
  }

  if (unknownFieldSkills.length) {
    addWarning(`Unknown frontmatter fields detected: ${unknownFieldSkills.length} skills (examples: ${unknownFieldSkills.slice(0, 5).join(', ')})`);
  }

  addStrictSectionErrors('Use this skill when', missingUseSection, baselineUse);
  addStrictSectionErrors('Do not use', missingDoNotUseSection, baselineDoNotUse);
  addStrictSectionErrors('Instructions', missingInstructionsSection, baselineInstructions);
  addStrictSectionErrors(`SKILL.md line count <= ${MAX_SKILL_LINES}`, longFiles, baselineLongFile);

  if (writeBaseline) {
    const baselineData = {
      generatedAt: new Date().toISOString(),
      useSection: [...missingUseSection].sort(),
      doNotUseSection: [...missingDoNotUseSection].sort(),
      instructionsSection: [...missingInstructionsSection].sort(),
      longFile: [...longFiles].sort(),
    };
    fs.writeFileSync(BASELINE_PATH, JSON.stringify(baselineData, null, 2));
    console.log(`Baseline written to ${BASELINE_PATH}`);
  }

  if (warnings.length) {
    console.warn('Warnings:');
    for (const warning of warnings) {
      console.warn(`- ${warning}`);
    }
  }

  if (errors.length) {
    console.error('\nErrors:');
    for (const error of errors) {
      console.error(`- ${error}`);
    }
    process.exit(1);
  }

  console.log(`Validation passed for ${skillIds.length} skills.`);
}

if (require.main === module) {
  run();
}

module.exports = {
  hasUseSection,
  run,
};
@@ -3,6 +3,15 @@ import re
import argparse
import sys

WHEN_TO_USE_PATTERNS = [
    re.compile(r"^##\s+When\s+to\s+Use", re.MULTILINE | re.IGNORECASE),
    re.compile(r"^##\s+Use\s+this\s+skill\s+when", re.MULTILINE | re.IGNORECASE),
    re.compile(r"^##\s+When\s+to\s+Use\s+This\s+Skill", re.MULTILINE | re.IGNORECASE),
]

def has_when_to_use_section(content):
    return any(pattern.search(content) for pattern in WHEN_TO_USE_PATTERNS)

def parse_frontmatter(content):
    """
    Simple frontmatter parser using regex to avoid external dependencies.
@@ -30,7 +39,6 @@ def validate_skills(skills_dir, strict_mode=False):

    # Pre-compiled regex
    security_disclaimer_pattern = re.compile(r"AUTHORIZED USE ONLY", re.IGNORECASE)

    valid_risk_levels = ["none", "safe", "critical", "offensive"]

@@ -80,7 +88,7 @@ def validate_skills(skills_dir, strict_mode=False):
        else: warnings.append(msg)

        # 3. Content Checks (Triggers)
        if not has_when_to_use_section(content):
            msg = f"⚠️ {rel_path}: Missing '## When to Use' section"
            if strict_mode: errors.append(msg.replace("⚠️", "❌"))
            else: warnings.append(msg)
22
skills/SPDD/1-research.md
Normal file
@@ -0,0 +1,22 @@
# ROLE: Codebase Research Agent
Your sole mission is to document and explain the codebase as it exists today.

## CRITICAL RULES:
- Do NOT suggest improvements, refactorings, or architectural changes.
- Do NOT perform root-cause analysis or propose future improvements.
- ONLY describe what exists, where it exists, and how the components interact.
- You are a technical cartographer drawing a map of the current system.

## STEPS TO FOLLOW:
1. **Initial Analysis:** Read the files the user mentions in full (WITHOUT limit/offset).
2. **Decomposition:** Break the user's question down into research areas (e.g., Routes, Database, UI).
3. **Execution:** - Locate where the files and components live.
   - Analyze HOW the current code works (without critiquing it).
   - Find examples of existing patterns to reference.
4. **Project State:**
   - If the project is NEW: Research and list the market-standard folder structure and default libraries for the stack.
   - If the project EXISTS: Identify technical debt and patterns that must be respected.

## OUTPUT:
- Generate the file `docs/prds/prd_current_task.md` with YAML frontmatter (date, topic, tags, status).
- **Mandatory Action:** End with: "Research complete. Please run `/clear` and load `.agente/2-spec.md` for planning."
20
skills/SPDD/2-spec.md
Normal file
@@ -0,0 +1,20 @@
# ROLE: Implementation Planning Agent
You must create detailed implementation plans and stay skeptical of vague requirements.

## CRITICAL RULES:
- Do not write the plan in one pass; validate the phase structure with the user.
- Every technical decision must be made before the plan is finalized.
- The plan must be actionable and complete, with no "open questions".

## STEPS TO FOLLOW:
1. **Context Check:** Read the previously generated `docs/prds/prd_current_task.md`.
2. **Phasing:** Split the work into incremental, testable phases.
3. **Detailing:** For each affected file, define:
   - **Exact path.**
   - **Action:** (CREATE | MODIFY | DELETE).
   - **Logic:** Pseudocode snippets or implementation references.
4. **Success Criteria:** Define "Automated Verification" (scripts/tests) and "Manual Verification" (UI/UX).

## OUTPUT:
- Generate the file `docs/specs/spec_current_task.md` following the phase template.
- **Mandatory Action:** End with: "Spec finalized. Please run `/clear` and load `.agente/3-implementation.md` for execution."
20
skills/SPDD/3-implementation.md
Normal file
@@ -0,0 +1,20 @@
# ROLE: Implementation Execution Agent
You must implement an approved technical plan with surgical precision.

## CRITICAL RULES:
- Follow the intent of the plan while adapting to the reality you find.
- Implement one phase COMPLETELY before moving to the next.
- **STOP & THINK:** If you find an error in the Spec or a mismatch in the code, STOP and report it. Do not guess.

## STEPS TO FOLLOW:
1. **Sanity Check:** Read the Spec and the original Ticket. Verify the environment is clean.
2. **Execution:** Code following Clean Code standards and the snippets in the Spec.
3. **Verification:**
   - After each phase, run the "Automated Verification" commands described in the Spec.
   - PAUSE for the user's manual confirmation after each completed phase.
4. **Progress:** Update the checkboxes (- [x]) in the Spec file as you go.

## OUTPUT:
- Implemented source code.
- A phase-completion report with test results.
- **Final Action:** Ask whether the user wants to run regression tests or move on to the next task.
209
skills/activecampaign-automation/SKILL.md
Normal file
209
skills/activecampaign-automation/SKILL.md
Normal file
@@ -0,0 +1,209 @@
|
||||
---
name: activecampaign-automation
description: "Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first for current schemas."
requires:
  mcp: [rube]
---

# ActiveCampaign Automation via Rube MCP

Automate ActiveCampaign CRM and marketing automation operations through Composio's ActiveCampaign toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active ActiveCampaign connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
3. If connection is not ACTIVE, follow the returned auth link to complete ActiveCampaign authentication
4. Confirm connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Create and Find Contacts

**When to use**: User wants to create new contacts or look up existing ones

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Search for an existing contact [Optional]
2. `ACTIVE_CAMPAIGN_CREATE_CONTACT` - Create a new contact [Required]

**Key parameters for find**:
- `email`: Search by email address
- `id`: Search by ActiveCampaign contact ID
- `phone`: Search by phone number

**Key parameters for create**:
- `email`: Contact email address (required)
- `first_name`: Contact first name
- `last_name`: Contact last name
- `phone`: Contact phone number
- `organization_name`: Contact's organization
- `job_title`: Contact's job title
- `tags`: Comma-separated list of tags to apply

**Pitfalls**:
- `email` is the only required field for contact creation
- Phone search uses a general search parameter internally; it may return partial matches
- When combining `email` and `phone` in FIND_CONTACT, results are filtered client-side
- Tags provided during creation are applied immediately
- Creating a contact with an existing email may update the existing contact
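
For concreteness, here is a minimal sketch of the arguments you might pass when creating a contact. The `call_tool` helper is a hypothetical stand-in for however your client invokes Rube MCP tools; always confirm the current schema via `RUBE_SEARCH_TOOLS` first.

```python
# Hypothetical helper: replace `call_tool` with your MCP client's invocation API.
# Field names follow the key parameters listed above.
create_args = {
    "email": "jane@example.com",   # the only required field
    "first_name": "Jane",
    "last_name": "Doe",
    "organization_name": "Acme",
    "tags": "lead, newsletter",    # comma-separated string, applied immediately
}
result = call_tool("ACTIVE_CAMPAIGN_CREATE_CONTACT", create_args)
```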
### 2. Manage Contact Tags

**When to use**: User wants to add or remove tags from contacts

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find contact by email or ID [Prerequisite]
2. `ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG` - Add or remove tags [Required]

**Key parameters**:
- `action`: 'Add' or 'Remove' (required)
- `tags`: Tag names as comma-separated string or array of strings (required)
- `contact_id`: Contact ID (provide this or contact_email)
- `contact_email`: Contact email address (alternative to contact_id)

**Pitfalls**:
- `action` values are capitalized: 'Add' or 'Remove' (not lowercase)
- Tags can be a comma-separated string ('tag1, tag2') or an array (['tag1', 'tag2'])
- Either `contact_id` or `contact_email` must be provided; `contact_id` takes precedence
- Adding a tag that does not exist creates it automatically
- Removing a non-existent tag is a no-op (does not error)

### 3. Manage List Subscriptions

**When to use**: User wants to subscribe or unsubscribe contacts from lists

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact [Prerequisite]
2. `ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION` - Subscribe or unsubscribe [Required]

**Key parameters**:
- `action`: 'subscribe' or 'unsubscribe' (required)
- `list_id`: Numeric list ID string (required)
- `email`: Contact email address (provide this or contact_id)
- `contact_id`: Numeric contact ID string (alternative to email)

**Pitfalls**:
- `action` values are lowercase: 'subscribe' or 'unsubscribe'
- `list_id` is a numeric string (e.g., '2'), not the list name
- List IDs can be retrieved via the GET /api/3/lists endpoint (not available as a Composio tool; use the ActiveCampaign UI)
- If both `email` and `contact_id` are provided, `contact_id` takes precedence
- Unsubscribing changes status to '2' (unsubscribed) but the relationship record persists

### 4. Add Contacts to Automations

**When to use**: User wants to enroll a contact in an automation workflow

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Verify contact exists [Prerequisite]
2. `ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION` - Enroll contact in automation [Required]

**Key parameters**:
- `contact_email`: Email of the contact to enroll (required)
- `automation_id`: ID of the target automation (required)

**Pitfalls**:
- The contact must already exist in ActiveCampaign
- Automations can only be created through the ActiveCampaign UI, not via API
- `automation_id` must reference an existing, active automation
- The tool performs a two-step process: lookup contact by email, then enroll
- Automation IDs can be found in the ActiveCampaign UI or via GET /api/3/automations

### 5. Create Contact Tasks

**When to use**: User wants to create follow-up tasks associated with contacts

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact to associate the task with [Prerequisite]
2. `ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK` - Create the task [Required]

**Key parameters**:
- `relid`: Contact ID to associate the task with (required)
- `duedate`: Due date in ISO 8601 format with timezone (required, e.g., '2025-01-15T14:30:00-05:00')
- `dealTasktype`: Task type ID based on available types (required)
- `title`: Task title
- `note`: Task description/content
- `assignee`: User ID to assign the task to
- `edate`: End date in ISO 8601 format (must be later than duedate)
- `status`: 0 for incomplete, 1 for complete

**Pitfalls**:
- `duedate` must be a valid ISO 8601 datetime with timezone offset; do NOT use placeholder values
- `edate` must be later than `duedate`
- `dealTasktype` is a string ID referencing task types configured in ActiveCampaign
- `relid` is the numeric contact ID, not the email address
- `assignee` is a user ID; resolve user names to IDs via the ActiveCampaign UI

## Common Patterns

### Contact Lookup Flow

```
1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
2. If found, extract contact ID for subsequent operations
3. If not found, create contact with ACTIVE_CAMPAIGN_CREATE_CONTACT
4. Use contact ID for tags, subscriptions, or automations
```
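
A minimal sketch of that flow in code, assuming a hypothetical `call_tool` helper and that FIND_CONTACT returns matches under a `contacts` key (verify the actual response shape via `RUBE_SEARCH_TOOLS`; as noted above, data may be nested):

```python
def find_or_create_contact(email: str) -> str:
    """Return the contact ID for `email`, creating the contact if needed."""
    found = call_tool("ACTIVE_CAMPAIGN_FIND_CONTACT", {"email": email})
    contacts = found.get("contacts") or []   # assumed response key
    if contacts:
        return contacts[0]["id"]             # reuse the existing contact
    created = call_tool("ACTIVE_CAMPAIGN_CREATE_CONTACT", {"email": email})
    return created["contact"]["id"]          # assumed response nesting
```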
### Bulk Contact Tagging

```
1. For each contact, call ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG
2. Use contact_email to avoid separate lookup calls
3. Batch with reasonable delays to respect rate limits
```
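
As a sketch, again assuming a hypothetical `call_tool` helper; the delay value is illustrative and should be tuned to your account's rate limits:

```python
import time

def tag_contacts(emails: list[str], tags: str, delay_s: float = 0.5) -> None:
    """Add the same tags to many contacts, pacing calls to respect rate limits."""
    for email in emails:
        call_tool("ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG", {
            "action": "Add",            # capitalized, per the pitfalls above
            "tags": tags,
            "contact_email": email,     # avoids a separate FIND_CONTACT call
        })
        time.sleep(delay_s)
```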
### ID Resolution

**Contact email -> Contact ID**:
```
1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
2. Extract id from the response
```

## Known Pitfalls

**Action Capitalization**:
- Tag actions: 'Add', 'Remove' (capitalized)
- Subscription actions: 'subscribe', 'unsubscribe' (lowercase)
- Mixing up capitalization causes errors

**ID Types**:
- Contact IDs: numeric strings (e.g., '123')
- List IDs: numeric strings
- Automation IDs: numeric strings
- All IDs should be passed as strings, not integers

**Automations**:
- Automations cannot be created via API; only enrollment is possible
- Automation must be active to accept new contacts
- Enrolling a contact already in the automation may have no effect

**Rate Limits**:
- ActiveCampaign API has rate limits per account
- Implement backoff on 429 responses
- Batch operations should be spaced appropriately

**Response Parsing**:
- Response data may be nested under `data` or `data.data`
- Parse defensively with fallback patterns
- Contact search may return multiple results; match by email for accuracy

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| Find contact | ACTIVE_CAMPAIGN_FIND_CONTACT | email, id, phone |
| Create contact | ACTIVE_CAMPAIGN_CREATE_CONTACT | email, first_name, last_name, tags |
| Add/remove tags | ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG | action, tags, contact_email |
| Subscribe/unsubscribe | ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION | action, list_id, email |
| Add to automation | ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION | contact_email, automation_id |
| Create task | ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK | relid, duedate, dealTasktype, title |
333
skills/agent-framework-azure-ai-py/SKILL.md
Normal file
@@ -0,0 +1,333 @@
---
name: agent-framework-azure-ai-py
description: Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code interpreter, file search, web search), integrating MCP servers, managing conversation threads, or implementing streaming responses. Covers function tools, structured outputs, and multi-tool agents.
package: agent-framework-azure-ai
---

# Agent Framework Azure Hosted Agents

Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.

## Architecture

```
User Query → AzureAIAgentsProvider → Azure AI Agent Service (Persistent)
                      ↓
        Agent.run() / Agent.run_stream()
                      ↓
  Tools: Functions | Hosted (Code/Search/Web) | MCP
                      ↓
     AgentThread (conversation persistence)
```

## Installation

```bash
# Full framework (recommended)
pip install agent-framework --pre

# Or Azure-specific package only
pip install agent-framework-azure-ai --pre
```

## Environment Variables

```bash
export AZURE_AI_PROJECT_ENDPOINT="https://<project>.services.ai.azure.com/api/projects/<project-id>"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
export BING_CONNECTION_ID="your-bing-connection-id" # For web search
```

## Authentication

```python
from azure.identity.aio import AzureCliCredential, DefaultAzureCredential

# Development
credential = AzureCliCredential()

# Production
credential = DefaultAzureCredential()
```

## Core Workflow

### Basic Agent

```python
import asyncio
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="MyAgent",
            instructions="You are a helpful assistant.",
        )

        result = await agent.run("Hello!")
        print(result.text)

asyncio.run(main())
```

### Agent with Function Tools

```python
from typing import Annotated
from pydantic import Field
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

def get_weather(
    location: Annotated[str, Field(description="City name to get weather for")],
) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: 72°F, sunny"

def get_current_time() -> str:
    """Get the current UTC time."""
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="WeatherAgent",
            instructions="You help with weather and time queries.",
            tools=[get_weather, get_current_time],  # Pass functions directly
        )

        result = await agent.run("What's the weather in Seattle?")
        print(result.text)
```

### Agent with Hosted Tools

```python
from agent_framework import (
    HostedCodeInterpreterTool,
    HostedFileSearchTool,
    HostedWebSearchTool,
)
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="MultiToolAgent",
            instructions="You can execute code, search files, and search the web.",
            tools=[
                HostedCodeInterpreterTool(),
                HostedWebSearchTool(name="Bing"),
            ],
        )

        result = await agent.run("Calculate the factorial of 20 in Python")
        print(result.text)
```

### Streaming Responses

```python
async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="StreamingAgent",
            instructions="You are a helpful assistant.",
        )

        print("Agent: ", end="", flush=True)
        async for chunk in agent.run_stream("Tell me a short story"):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()
```

### Conversation Threads

```python
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="ChatAgent",
            instructions="You are a helpful assistant.",
            tools=[get_weather],
        )

        # Create thread for conversation persistence
        thread = agent.get_new_thread()

        # First turn
        result1 = await agent.run("What's the weather in Seattle?", thread=thread)
        print(f"Agent: {result1.text}")

        # Second turn - context is maintained
        result2 = await agent.run("What about Portland?", thread=thread)
        print(f"Agent: {result2.text}")

        # Save thread ID for later resumption
        print(f"Conversation ID: {thread.conversation_id}")
```

### Structured Outputs

```python
from pydantic import BaseModel, ConfigDict
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

class WeatherResponse(BaseModel):
    model_config = ConfigDict(extra="forbid")

    location: str
    temperature: float
    unit: str
    conditions: str

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="StructuredAgent",
            instructions="Provide weather information in structured format.",
            response_format=WeatherResponse,
        )

        result = await agent.run("Weather in Seattle?")
        weather = WeatherResponse.model_validate_json(result.text)
        print(f"{weather.location}: {weather.temperature}°{weather.unit}")
```

## Provider Methods

| Method | Description |
|--------|-------------|
| `create_agent()` | Create new agent on Azure AI service |
| `get_agent(agent_id)` | Retrieve existing agent by ID |
| `as_agent(sdk_agent)` | Wrap SDK Agent object (no HTTP call) |
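
A minimal sketch of reusing an existing agent instead of creating a new one, assuming `get_agent(agent_id)` returns the same `Agent` wrapper that `create_agent` does (check the SDK reference for the exact signature):

```python
async def resume(agent_id: str) -> None:
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        # Retrieve the persistent agent by ID; no new agent is created.
        agent = await provider.get_agent(agent_id)
        result = await agent.run("Summarize our last conversation.")
        print(result.text)
```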
## Hosted Tools Quick Reference

| Tool | Import | Purpose |
|------|--------|---------|
| `HostedCodeInterpreterTool` | `from agent_framework import HostedCodeInterpreterTool` | Execute Python code |
| `HostedFileSearchTool` | `from agent_framework import HostedFileSearchTool` | Search vector stores |
| `HostedWebSearchTool` | `from agent_framework import HostedWebSearchTool` | Bing web search |
| `HostedMCPTool` | `from agent_framework import HostedMCPTool` | Service-managed MCP |
| `MCPStreamableHTTPTool` | `from agent_framework import MCPStreamableHTTPTool` | Client-managed MCP |

## Complete Example

```python
import asyncio
from typing import Annotated
from pydantic import BaseModel, Field
from agent_framework import (
    HostedCodeInterpreterTool,
    HostedWebSearchTool,
    MCPStreamableHTTPTool,
)
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential


def get_weather(
    location: Annotated[str, Field(description="City name")],
) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: 72°F, sunny"


class AnalysisResult(BaseModel):
    summary: str
    key_findings: list[str]
    confidence: float


async def main():
    async with (
        AzureCliCredential() as credential,
        MCPStreamableHTTPTool(
            name="Docs MCP",
            url="https://learn.microsoft.com/api/mcp",
        ) as mcp_tool,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="ResearchAssistant",
            instructions="You are a research assistant with multiple capabilities.",
            tools=[
                get_weather,
                HostedCodeInterpreterTool(),
                HostedWebSearchTool(name="Bing"),
                mcp_tool,
            ],
        )

        thread = agent.get_new_thread()

        # Non-streaming
        result = await agent.run(
            "Search for Python best practices and summarize",
            thread=thread,
        )
        print(f"Response: {result.text}")

        # Streaming
        print("\nStreaming: ", end="")
        async for chunk in agent.run_stream("Continue with examples", thread=thread):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()

        # Structured output
        result = await agent.run(
            "Analyze findings",
            thread=thread,
            response_format=AnalysisResult,
        )
        analysis = AnalysisResult.model_validate_json(result.text)
        print(f"\nConfidence: {analysis.confidence}")


if __name__ == "__main__":
    asyncio.run(main())
```

## Conventions

- Always use async context managers: `async with provider:`
- Pass functions directly to `tools=` parameter (auto-converted to AIFunction)
- Use `Annotated[type, Field(description=...)]` for function parameters
- Use `get_new_thread()` for multi-turn conversations
- Prefer `HostedMCPTool` for service-managed MCP, `MCPStreamableHTTPTool` for client-managed

## Reference Files

- [references/tools.md](references/tools.md): Detailed hosted tool patterns
- [references/mcp.md](references/mcp.md): MCP integration (hosted + local)
- [references/threads.md](references/threads.md): Thread and conversation management
- [references/advanced.md](references/advanced.md): OpenAPI, citations, structured outputs
325
skills/agents-v2-py/SKILL.md
Normal file
@@ -0,0 +1,325 @@
---
name: agents-v2-py
description: |
  Build container-based Foundry Agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition.
  Use when creating hosted agents that run custom code in Azure AI Foundry with your own container images.
  Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent", "Foundry Agent",
  "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES", "custom agent image".
package: azure-ai-projects
---

# Azure AI Hosted Agents (Python)

Build container-based hosted agents using `ImageBasedHostedAgentDefinition` from the Azure AI Projects SDK.

## Installation

```bash
pip install "azure-ai-projects>=2.0.0b3" azure-identity
```

**Minimum SDK Version:** `2.0.0b3` or later required for hosted agent support.

## Environment Variables

```bash
AZURE_AI_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
```

## Prerequisites

Before creating hosted agents:

1. **Container Image** - Build and push to Azure Container Registry (ACR)
2. **ACR Pull Permissions** - Grant your project's managed identity `AcrPull` role on the ACR
3. **Capability Host** - Account-level capability host with `enablePublicHostingEnvironment=true`
4. **SDK Version** - Ensure `azure-ai-projects>=2.0.0b3`

## Authentication

Always use `DefaultAzureCredential`:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

credential = DefaultAzureCredential()
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential
)
```

## Core Workflow

### 1. Imports

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)
```

### 2. Create Hosted Agent

```python
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential()
)

agent = client.agents.create_version(
    agent_name="my-hosted-agent",
    definition=ImageBasedHostedAgentDefinition(
        container_protocol_versions=[
            ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
        ],
        cpu="1",
        memory="2Gi",
        image="myregistry.azurecr.io/my-agent:latest",
        tools=[{"type": "code_interpreter"}],
        environment_variables={
            "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
            "MODEL_NAME": "gpt-4o-mini"
        }
    )
)

print(f"Created agent: {agent.name} (version: {agent.version})")
```

### 3. List Agent Versions

```python
versions = client.agents.list_versions(agent_name="my-hosted-agent")
for version in versions:
    print(f"Version: {version.version}, State: {version.state}")
```

### 4. Delete Agent Version

```python
client.agents.delete_version(
    agent_name="my-hosted-agent",
    version=agent.version
)
```

## ImageBasedHostedAgentDefinition Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `container_protocol_versions` | `list[ProtocolVersionRecord]` | Yes | Protocol versions the agent supports |
| `image` | `str` | Yes | Full container image path (registry/image:tag) |
| `cpu` | `str` | No | CPU allocation (e.g., "1", "2") |
| `memory` | `str` | No | Memory allocation (e.g., "2Gi", "4Gi") |
| `tools` | `list[dict]` | No | Tools available to the agent |
| `environment_variables` | `dict[str, str]` | No | Environment variables for the container |

## Protocol Versions

The `container_protocol_versions` parameter specifies which protocols your agent supports:

```python
from azure.ai.projects.models import ProtocolVersionRecord, AgentProtocol

# RESPONSES protocol - standard agent responses
container_protocol_versions=[
    ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
]
```

**Available Protocols:**

| Protocol | Description |
|----------|-------------|
| `AgentProtocol.RESPONSES` | Standard response protocol for agent interactions |

## Resource Allocation

Specify CPU and memory for your container:

```python
definition=ImageBasedHostedAgentDefinition(
    container_protocol_versions=[...],
    image="myregistry.azurecr.io/my-agent:latest",
    cpu="2",      # 2 CPU cores
    memory="4Gi"  # 4 GiB memory
)
```

**Resource Limits:**

| Resource | Min | Max | Default |
|----------|-----|-----|---------|
| CPU | 0.5 | 4 | 1 |
| Memory | 1Gi | 8Gi | 2Gi |

## Tools Configuration

Add tools to your hosted agent:

### Code Interpreter

```python
tools=[{"type": "code_interpreter"}]
```

### MCP Tools

```python
tools=[
    {"type": "code_interpreter"},
    {
        "type": "mcp",
        "server_label": "my-mcp-server",
        "server_url": "https://my-mcp-server.example.com"
    }
]
```

### Multiple Tools

```python
tools=[
    {"type": "code_interpreter"},
    {"type": "file_search"},
    {
        "type": "mcp",
        "server_label": "custom-tool",
        "server_url": "https://custom-tool.example.com"
    }
]
```

## Environment Variables

Pass configuration to your container:

```python
environment_variables={
    "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    "MODEL_NAME": "gpt-4o-mini",
    "LOG_LEVEL": "INFO",
    "CUSTOM_CONFIG": "value"
}
```

**Best Practice:** Never hardcode secrets. Use environment variables or Azure Key Vault.

## Complete Example

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)

def create_hosted_agent():
    """Create a hosted agent with custom container image."""

    client = AIProjectClient(
        endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        credential=DefaultAzureCredential()
    )

    agent = client.agents.create_version(
        agent_name="data-processor-agent",
        definition=ImageBasedHostedAgentDefinition(
            container_protocol_versions=[
                ProtocolVersionRecord(
                    protocol=AgentProtocol.RESPONSES,
                    version="v1"
                )
            ],
            image="myregistry.azurecr.io/data-processor:v1.0",
            cpu="2",
            memory="4Gi",
            tools=[
                {"type": "code_interpreter"},
                {"type": "file_search"}
            ],
            environment_variables={
                "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
                "MODEL_NAME": "gpt-4o-mini",
                "MAX_RETRIES": "3"
            }
        )
    )

    print(f"Created hosted agent: {agent.name}")
    print(f"Version: {agent.version}")
    print(f"State: {agent.state}")

    return agent

if __name__ == "__main__":
    create_hosted_agent()
```

## Async Pattern

```python
import os
from azure.identity.aio import DefaultAzureCredential
from azure.ai.projects.aio import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)

async def create_hosted_agent_async():
    """Create a hosted agent asynchronously."""

    async with DefaultAzureCredential() as credential:
        async with AIProjectClient(
            endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
            credential=credential
        ) as client:
            agent = await client.agents.create_version(
                agent_name="async-agent",
                definition=ImageBasedHostedAgentDefinition(
                    container_protocol_versions=[
                        ProtocolVersionRecord(
                            protocol=AgentProtocol.RESPONSES,
                            version="v1"
                        )
                    ],
                    image="myregistry.azurecr.io/async-agent:latest",
                    cpu="1",
                    memory="2Gi"
                )
            )
            return agent
```

## Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| `ImagePullBackOff` | ACR pull permission denied | Grant `AcrPull` role to project's managed identity |
| `InvalidContainerImage` | Image not found | Verify image path and tag exist in ACR |
| `CapabilityHostNotFound` | No capability host configured | Create account-level capability host |
| `ProtocolVersionNotSupported` | Invalid protocol version | Use `AgentProtocol.RESPONSES` with version `"v1"` |

## Best Practices

1. **Version Your Images** - Use specific tags, not `latest`, in production
2. **Minimal Resources** - Start with minimum CPU/memory, scale up as needed
3. **Environment Variables** - Use for all configuration, never hardcode
4. **Error Handling** - Wrap agent creation in try/except blocks (see the sketch below)
5. **Cleanup** - Delete unused agent versions to free resources
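
A minimal error-handling sketch for best practice 4, reusing `client` and `definition` from the examples above and assuming the SDK surfaces service failures as azure-core's `HttpResponseError` (the usual base class for Azure SDK service errors):

```python
from azure.core.exceptions import HttpResponseError

try:
    agent = client.agents.create_version(
        agent_name="my-hosted-agent",
        definition=definition,
    )
except HttpResponseError as err:
    # Map err against the Common Errors table above before retrying.
    print(f"Agent creation failed: {err.status_code} {err.message}")
    raise
```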
## Reference Links

- [Azure AI Projects SDK](https://pypi.org/project/azure-ai-projects/)
- [Hosted Agents Documentation](https://learn.microsoft.com/azure/ai-services/agents/how-to/hosted-agents)
- [Azure Container Registry](https://learn.microsoft.com/azure/container-registry/)
170
skills/airtable-automation/SKILL.md
Normal file
@@ -0,0 +1,170 @@
---
name: airtable-automation
description: "Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas."
requires:
  mcp: [rube]
---

# Airtable Automation via Rube MCP

Automate Airtable operations through Composio's Airtable toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Airtable connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
3. If connection is not ACTIVE, follow the returned auth link to complete Airtable auth
4. Confirm connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Create and Manage Records

**When to use**: User wants to create, read, update, or delete records

**Tool sequence**:
1. `AIRTABLE_LIST_BASES` - Discover available bases [Prerequisite]
2. `AIRTABLE_GET_BASE_SCHEMA` - Inspect table structure [Prerequisite]
3. `AIRTABLE_LIST_RECORDS` - List/filter records [Optional]
4. `AIRTABLE_CREATE_RECORD` / `AIRTABLE_CREATE_RECORDS` - Create records [Optional]
5. `AIRTABLE_UPDATE_RECORD` / `AIRTABLE_UPDATE_MULTIPLE_RECORDS` - Update records [Optional]
6. `AIRTABLE_DELETE_RECORD` / `AIRTABLE_DELETE_MULTIPLE_RECORDS` - Delete records [Optional]

**Key parameters**:
- `baseId`: Base ID (starts with 'app', e.g., 'appXXXXXXXXXXXXXX')
- `tableIdOrName`: Table ID (starts with 'tbl') or table name
- `fields`: Object mapping field names to values
- `recordId`: Record ID (starts with 'rec') for updates/deletes
- `filterByFormula`: Airtable formula for filtering
- `typecast`: Set true for automatic type conversion

**Pitfalls**:
- pageSize capped at 100; uses offset pagination; changing filters between pages can skip/duplicate rows
- CREATE_RECORDS hard limit of 10 records per request; chunk larger imports
- Field names are CASE-SENSITIVE and must match schema exactly
- 422 UNKNOWN_FIELD_NAME when field names are wrong; 403 for permission issues
- INVALID_MULTIPLE_CHOICE_OPTIONS may require typecast=true

### 2. Search and Filter Records

**When to use**: User wants to find specific records using formulas

**Tool sequence**:
1. `AIRTABLE_GET_BASE_SCHEMA` - Verify field names and types [Prerequisite]
2. `AIRTABLE_LIST_RECORDS` - Query with filterByFormula [Required]
3. `AIRTABLE_GET_RECORD` - Get full record details [Optional]

**Key parameters**:
- `filterByFormula`: Airtable formula (e.g., `{Status}='Done'`)
- `sort`: Array of sort objects
- `fields`: Array of field names to return
- `maxRecords`: Max total records across all pages
- `offset`: Pagination cursor from previous response

**Pitfalls**:
- Field names in formulas must be wrapped in `{}` and match schema exactly
- String values must be quoted: `{Status}='Active'` not `{Status}=Active`
- 422 INVALID_FILTER_BY_FORMULA for bad syntax or non-existent fields
- Airtable rate limit: ~5 requests/second per base; handle 429 with Retry-After

### 3. Manage Fields and Schema

**When to use**: User wants to create or modify table fields

**Tool sequence**:
1. `AIRTABLE_GET_BASE_SCHEMA` - Inspect current schema [Prerequisite]
2. `AIRTABLE_CREATE_FIELD` - Create a new field [Optional]
3. `AIRTABLE_UPDATE_FIELD` - Rename/describe a field [Optional]
4. `AIRTABLE_UPDATE_TABLE` - Update table metadata [Optional]

**Key parameters**:
- `name`: Field name
- `type`: Field type (singleLineText, number, singleSelect, etc.)
- `options`: Type-specific options (choices for select, precision for number)
- `description`: Field description

**Pitfalls**:
- UPDATE_FIELD only changes name/description, NOT type/options; create a replacement field and migrate
- Computed fields (formula, rollup, lookup) cannot be created via API
- 422 when type options are missing or malformed

### 4. Manage Comments

**When to use**: User wants to view or add comments on records

**Tool sequence**:
1. `AIRTABLE_LIST_COMMENTS` - List comments on a record [Required]

**Key parameters**:
- `baseId`: Base ID
- `tableIdOrName`: Table identifier
- `recordId`: Record ID (17 chars, starts with 'rec')
- `pageSize`: Comments per page (max 100)

**Pitfalls**:
- Record IDs must be exactly 17 characters starting with 'rec'

## Common Patterns

### Airtable Formula Syntax

**Comparison**:
- `{Status}='Done'` - Equals
- `{Priority}>1` - Greater than
- `{Name}!=''` - Not empty

**Functions**:
- `AND({A}='x', {B}='y')` - Both conditions
- `OR({A}='x', {A}='y')` - Either condition
- `FIND('test', {Name})>0` - Contains text
- `IS_BEFORE({Due Date}, TODAY())` - Date comparison

**Escape rules**:
- Single quotes in values: double them (`{Name}='John''s Company'`) — see the helper sketch below
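
A small helper that applies the quote-doubling rule when building an equality formula; the field name is your responsibility and must match the schema exactly:

```python
def eq_formula(field: str, value: str) -> str:
    """Build a filterByFormula equality test, doubling single quotes in the value."""
    escaped = value.replace("'", "''")
    return f"{{{field}}}='{escaped}'"

# eq_formula("Name", "John's Company") -> "{Name}='John''s Company'"
```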
### Pagination

- Set `pageSize` (max 100)
- Check response for `offset` string
- Pass `offset` to next request unchanged
- Keep filters/sorts/view stable between pages
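
A sketch of that loop, assuming a hypothetical `call_tool` helper and a response carrying `records` plus an optional `offset` cursor (verify the shape via `RUBE_SEARCH_TOOLS`):

```python
def list_all_records(base_id: str, table: str) -> list[dict]:
    """Collect every record page by page using offset pagination."""
    records, offset = [], None
    while True:
        args = {"baseId": base_id, "tableIdOrName": table, "pageSize": 100}
        if offset:
            args["offset"] = offset      # pass the cursor back unchanged
        page = call_tool("AIRTABLE_LIST_RECORDS", args)
        records.extend(page.get("records", []))
        offset = page.get("offset")
        if not offset:                   # no cursor means this was the last page
            return records
```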
## Known Pitfalls

**ID Formats**:
- Base IDs: `appXXXXXXXXXXXXXX` (17 chars)
- Table IDs: `tblXXXXXXXXXXXXXX` (17 chars)
- Record IDs: `recXXXXXXXXXXXXXX` (17 chars)
- Field IDs: `fldXXXXXXXXXXXXXX` (17 chars)

**Batch Limits**:
- CREATE_RECORDS: max 10 per request
- UPDATE_MULTIPLE_RECORDS: max 10 per request
- DELETE_MULTIPLE_RECORDS: max 10 per request
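
To stay under the 10-per-request limit, chunk large imports before calling the batch endpoints (again using the hypothetical `call_tool` helper):

```python
def chunked(items: list, size: int = 10):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# new_records: list of {"fields": {...}} objects
for batch in chunked(new_records):
    call_tool("AIRTABLE_CREATE_RECORDS", {
        "baseId": base_id,
        "tableIdOrName": table,
        "records": batch,
    })
```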
## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| List bases | AIRTABLE_LIST_BASES | (none) |
| Get schema | AIRTABLE_GET_BASE_SCHEMA | baseId |
| List records | AIRTABLE_LIST_RECORDS | baseId, tableIdOrName |
| Get record | AIRTABLE_GET_RECORD | baseId, tableIdOrName, recordId |
| Create record | AIRTABLE_CREATE_RECORD | baseId, tableIdOrName, fields |
| Create records | AIRTABLE_CREATE_RECORDS | baseId, tableIdOrName, records |
| Update record | AIRTABLE_UPDATE_RECORD | baseId, tableIdOrName, recordId, fields |
| Update records | AIRTABLE_UPDATE_MULTIPLE_RECORDS | baseId, tableIdOrName, records |
| Delete record | AIRTABLE_DELETE_RECORD | baseId, tableIdOrName, recordId |
| Create field | AIRTABLE_CREATE_FIELD | baseId, tableIdOrName, name, type |
| Update field | AIRTABLE_UPDATE_FIELD | baseId, tableIdOrName, fieldId |
| Update table | AIRTABLE_UPDATE_TABLE | baseId, tableIdOrName, name |
| List comments | AIRTABLE_LIST_COMMENTS | baseId, tableIdOrName, recordId |
216
skills/amplitude-automation/SKILL.md
Normal file
@@ -0,0 +1,216 @@
---
name: amplitude-automation
description: "Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas."
requires:
  mcp: [rube]
---

# Amplitude Automation via Rube MCP

Automate Amplitude product analytics through Composio's Amplitude toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Amplitude connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
3. If connection is not ACTIVE, follow the returned auth link to complete Amplitude authentication
4. Confirm connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Send Events

**When to use**: User wants to track events or send event data to Amplitude

**Tool sequence**:
1. `AMPLITUDE_SEND_EVENTS` - Send one or more events to Amplitude [Required]

**Key parameters**:
- `events`: Array of event objects, each containing:
  - `event_type`: Name of the event (e.g., 'page_view', 'purchase')
  - `user_id`: Unique user identifier (required if no `device_id`)
  - `device_id`: Device identifier (required if no `user_id`)
  - `event_properties`: Object with custom event properties
  - `user_properties`: Object with user properties to set
  - `time`: Event timestamp in milliseconds since epoch

**Pitfalls**:
- At least one of `user_id` or `device_id` is required per event
- `event_type` is required for every event; cannot be empty
- `time` must be in milliseconds (13-digit epoch), not seconds
- Batch limit applies; check schema for maximum events per request
- Events are processed asynchronously; a successful API response does not mean data is immediately queryable

### 2. Get User Activity

**When to use**: User wants to view event history for a specific user

**Tool sequence**:
1. `AMPLITUDE_FIND_USER` - Find user by ID or property [Prerequisite]
2. `AMPLITUDE_GET_USER_ACTIVITY` - Retrieve user's event stream [Required]

**Key parameters**:
- `user`: Amplitude internal user ID (from FIND_USER)
- `offset`: Pagination offset for event list
- `limit`: Maximum number of events to return

**Pitfalls**:
- `user` parameter requires Amplitude's internal user ID, NOT your application's user_id
- Must call FIND_USER first to resolve your user_id to Amplitude's internal ID
- Activity is returned in reverse chronological order by default
- Large activity histories require pagination via `offset`

### 3. Find and Identify Users

**When to use**: User wants to look up users or set user properties

**Tool sequence**:
1. `AMPLITUDE_FIND_USER` - Search for a user by various identifiers [Required]
2. `AMPLITUDE_IDENTIFY` - Set or update user properties [Optional]

**Key parameters**:
- For FIND_USER:
  - `user`: Search term (user_id, email, or Amplitude ID)
- For IDENTIFY:
  - `user_id`: Your application's user identifier
  - `device_id`: Device identifier (alternative to user_id)
  - `user_properties`: Object with `$set`, `$unset`, `$add`, `$append` operations

**Pitfalls**:
- FIND_USER searches across user_id, device_id, and Amplitude ID
- IDENTIFY uses special property operations (`$set`, `$unset`, `$add`, `$append`)
- `$set` overwrites existing values; `$setOnce` only sets if not already set
- At least one of `user_id` or `device_id` is required for IDENTIFY
- User property changes are eventually consistent, not immediate

### 4. Manage Cohorts

**When to use**: User wants to list cohorts, view cohort details, or update cohort membership

**Tool sequence**:
1. `AMPLITUDE_LIST_COHORTS` - List all saved cohorts [Required]
2. `AMPLITUDE_GET_COHORT` - Get detailed cohort information [Optional]
3. `AMPLITUDE_UPDATE_COHORT_MEMBERSHIP` - Add/remove users from a cohort [Optional]
4. `AMPLITUDE_CHECK_COHORT_STATUS` - Check async cohort operation status [Optional]

**Key parameters**:
- For LIST_COHORTS: No required parameters
- For GET_COHORT: `cohort_id` (from list results)
- For UPDATE_COHORT_MEMBERSHIP:
  - `cohort_id`: Target cohort ID
  - `memberships`: Object with `add` and/or `remove` arrays of user IDs
- For CHECK_COHORT_STATUS: `request_id` from update response

**Pitfalls**:
- Cohort IDs are required for all cohort-specific operations
- UPDATE_COHORT_MEMBERSHIP is asynchronous; use CHECK_COHORT_STATUS to verify
- `request_id` from the update response is needed for status checking
- Maximum membership changes per request may be limited; chunk large updates
- Only behavioral cohorts support API membership updates

### 5. Browse Event Categories

**When to use**: User wants to discover available event types and categories in Amplitude

**Tool sequence**:
1. `AMPLITUDE_GET_EVENT_CATEGORIES` - List all event categories [Required]

**Key parameters**:
- No required parameters; returns all configured event categories

**Pitfalls**:
- Categories are configured in the Amplitude UI; the API provides read access
- Event names within categories are case-sensitive
- Use these categories to validate event_type values before sending events

## Common Patterns

### ID Resolution

**Application user_id -> Amplitude internal ID**:
```
1. Call AMPLITUDE_FIND_USER with user=your_user_id
2. Extract Amplitude's internal user ID from response
3. Use internal ID for GET_USER_ACTIVITY
```

**Cohort name -> Cohort ID**:
```
1. Call AMPLITUDE_LIST_COHORTS
2. Find cohort by name in results
3. Extract id for cohort operations
```

### User Property Operations

Amplitude IDENTIFY supports these property operations:
- `$set`: Set property value (overwrites existing)
- `$setOnce`: Set only if property not already set
- `$add`: Increment numeric property
- `$append`: Append to list property
- `$unset`: Remove property entirely

Example structure:
```json
{
  "user_properties": {
    "$set": {"plan": "premium", "company": "Acme"},
    "$add": {"login_count": 1}
  }
}
```

### Async Operation Pattern

For cohort membership updates:
```
1. Call AMPLITUDE_UPDATE_COHORT_MEMBERSHIP -> get request_id
2. Call AMPLITUDE_CHECK_COHORT_STATUS with request_id
3. Repeat step 2 until status is 'complete' or 'error'
```
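
A polling sketch for that pattern, assuming a hypothetical `call_tool` helper and a response exposing a top-level `status` field (confirm the actual shape via `RUBE_SEARCH_TOOLS`):

```python
import time

def wait_for_cohort_update(request_id: str, interval_s: float = 2.0) -> str:
    """Poll the async cohort operation until it finishes, returning the final status."""
    while True:
        resp = call_tool("AMPLITUDE_CHECK_COHORT_STATUS", {"request_id": request_id})
        status = resp.get("status")
        if status in ("complete", "error"):
            return status
        time.sleep(interval_s)   # pause between checks to respect rate limits
```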
## Known Pitfalls

**User IDs**:
- Amplitude has its own internal user IDs, separate from your application's
- FIND_USER resolves your IDs to Amplitude's internal IDs
- GET_USER_ACTIVITY requires Amplitude's internal ID, not your user_id

**Event Timestamps**:
- Must be in milliseconds since epoch (13 digits)
- Seconds (10 digits) will be interpreted as very old dates
- Omitting the timestamp uses server receive time
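
For example, the correct 13-digit millisecond timestamp in Python:

```python
import time

# 10-digit int(time.time()) would be misread as a date near 1970;
# multiply by 1000 to get the 13-digit millisecond epoch Amplitude expects.
event_time_ms = int(time.time() * 1000)
```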
**Rate Limits**:
- Event ingestion has throughput limits per project
- Batch events where possible to reduce API calls
- Cohort membership updates have async processing limits

**Response Parsing**:
- Response data may be nested under a `data` key
- User activity returns events in reverse chronological order
- Cohort lists may include archived cohorts; check the status field
- Parse defensively with fallbacks for optional fields

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| Send events | AMPLITUDE_SEND_EVENTS | events (array) |
| Find user | AMPLITUDE_FIND_USER | user |
| Get user activity | AMPLITUDE_GET_USER_ACTIVITY | user, offset, limit |
| Identify user | AMPLITUDE_IDENTIFY | user_id, user_properties |
| List cohorts | AMPLITUDE_LIST_COHORTS | (none) |
| Get cohort | AMPLITUDE_GET_COHORT | cohort_id |
| Update cohort members | AMPLITUDE_UPDATE_COHORT_MEMBERSHIP | cohort_id, memberships |
| Check cohort status | AMPLITUDE_CHECK_COHORT_STATUS | request_id |
| List event categories | AMPLITUDE_GET_EVENT_CATEGORIES | (none) |
58
skills/angular-best-practices/README.md
Normal file
@@ -0,0 +1,58 @@
# Angular Best Practices

Performance optimization and best practices for Angular applications, optimized for AI agents and LLMs.

## Overview

This skill provides prioritized performance guidelines across:

- **Change Detection** - OnPush strategy, Signals, Zoneless apps
- **Async Operations** - Avoiding waterfalls, SSR preloading
- **Bundle Optimization** - Lazy loading, `@defer`, tree-shaking
- **Rendering Performance** - TrackBy, virtual scrolling, CDK
- **SSR & Hydration** - Server-side rendering patterns
- **Template Optimization** - Structural directives, pipe memoization
- **State Management** - Efficient reactivity patterns
- **Memory Management** - Subscription cleanup, detached refs

## Structure

The `SKILL.md` file is organized by priority:

1. **Critical Priority** - Largest performance gains (change detection, async)
2. **High Priority** - Significant impact (bundles, rendering)
3. **Medium Priority** - Noticeable improvements (SSR, templates)
4. **Low Priority** - Incremental gains (memory, cleanup)

Each rule includes:

- ❌ **WRONG** - What not to do
- ✅ **CORRECT** - Recommended pattern
- 📝 **Why** - Explanation of the impact

## Quick Reference Checklist

**For New Components:**

- [ ] Using `ChangeDetectionStrategy.OnPush`
- [ ] Using Signals for reactive state
- [ ] Using `@defer` for non-critical content
- [ ] Using `trackBy` for `*ngFor` loops
- [ ] No subscriptions without cleanup

**For Performance Reviews:**

- [ ] No async waterfalls (parallel data fetching)
- [ ] Routes lazy-loaded
- [ ] Large libraries code-split
- [ ] Images use `NgOptimizedImage`

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Performance](https://angular.dev/guide/performance)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)
559
skills/angular-best-practices/SKILL.md
Normal file
@@ -0,0 +1,559 @@
---
name: angular-best-practices
description: Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and rendering efficiency.
risk: safe
source: self
---

# Angular Best Practices

Comprehensive performance optimization guide for Angular applications. Contains prioritized rules for eliminating performance bottlenecks, optimizing bundles, and improving rendering.

## When to Apply

Reference these guidelines when:

- Writing new Angular components or pages
- Implementing data fetching patterns
- Reviewing code for performance issues
- Refactoring existing Angular code
- Optimizing bundle size or load times
- Configuring SSR/hydration

---

## Rule Categories by Priority

| Priority | Category | Impact | Focus |
| -------- | --------------------- | ---------- | ------------------------------- |
| 1 | Change Detection | CRITICAL | Signals, OnPush, Zoneless |
| 2 | Async Waterfalls | CRITICAL | RxJS patterns, SSR preloading |
| 3 | Bundle Optimization | CRITICAL | Lazy loading, tree shaking |
| 4 | Rendering Performance | HIGH | @defer, trackBy, virtualization |
| 5 | Server-Side Rendering | HIGH | Hydration, prerendering |
| 6 | Template Optimization | MEDIUM | Control flow, pipes |
| 7 | State Management | MEDIUM | Signal patterns, selectors |
| 8 | Memory Management | LOW-MEDIUM | Cleanup, subscriptions |

---

## 1. Change Detection (CRITICAL)

### Use OnPush Change Detection

```typescript
// CORRECT - OnPush with Signals
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<div>{{ count() }}</div>`,
})
export class CounterComponent {
  count = signal(0);
}

// WRONG - Default change detection
@Component({
  template: `<div>{{ count }}</div>`, // Checked every cycle
})
export class CounterComponent {
  count = 0;
}
```

### Prefer Signals Over Mutable Properties

```typescript
// CORRECT - Signals trigger precise updates
@Component({
  template: `
    <h1>{{ title() }}</h1>
    <p>Count: {{ count() }}</p>
  `,
})
export class DashboardComponent {
  title = signal("Dashboard");
  count = signal(0);
}

// WRONG - Mutable properties require zone.js checks
@Component({
  template: `
    <h1>{{ title }}</h1>
    <p>Count: {{ count }}</p>
  `,
})
export class DashboardComponent {
  title = "Dashboard";
  count = 0;
}
```

### Enable Zoneless for New Projects

```typescript
// main.ts - Zoneless Angular (v20+)
bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});
```

**Benefits:**

- No zone.js patches on async APIs
- Smaller bundle (~15KB savings)
- Clean stack traces for debugging
- Better micro-frontend compatibility

---
## 2. Async Operations & Waterfalls (CRITICAL)

### Eliminate Sequential Data Fetching

```typescript
// WRONG - Nested subscriptions create waterfalls
this.route.params.subscribe((params) => {
  // 1. Wait for params
  this.userService.getUser(params.id).subscribe((user) => {
    // 2. Wait for user
    this.postsService.getPosts(user.id).subscribe((posts) => {
      // 3. Wait for posts
    });
  });
});

// CORRECT - Parallel execution with forkJoin
forkJoin({
  user: this.userService.getUser(id),
  posts: this.postsService.getPosts(id),
}).subscribe((data) => {
  // Fetched in parallel
});

// CORRECT - Flatten dependent calls with switchMap
this.route.params
  .pipe(
    map((p) => p.id),
    switchMap((id) => this.userService.getUser(id)),
  )
  .subscribe();
```

### Avoid Client-Side Waterfalls in SSR

```typescript
// CORRECT - Use resolvers or blocking hydration for critical data
export const route: Route = {
  path: "profile/:id",
  resolve: { data: profileResolver }, // Fetched on server before navigation
  component: ProfileComponent,
};

// WRONG - Component fetches data on init
class ProfileComponent implements OnInit {
  ngOnInit() {
    // Starts ONLY after JS loads and component renders
    this.http.get("/api/profile").subscribe();
  }
}
```

---

## 3. Bundle Optimization (CRITICAL)

### Lazy Load Routes

```typescript
// CORRECT - Lazy load feature routes
export const routes: Routes = [
  {
    path: "admin",
    loadChildren: () =>
      import("./admin/admin.routes").then((m) => m.ADMIN_ROUTES),
  },
  {
    path: "dashboard",
    loadComponent: () =>
      import("./dashboard/dashboard.component").then(
        (m) => m.DashboardComponent,
      ),
  },
];

// WRONG - Eager loading everything
import { AdminModule } from "./admin/admin.module";
export const routes: Routes = [
  { path: "admin", component: AdminComponent }, // In main bundle
];
```

### Use @defer for Heavy Components

```html
<!-- CORRECT - Heavy component loads on demand -->
@defer (on viewport) {
  <app-analytics-chart [data]="data()" />
} @placeholder {
  <div class="chart-skeleton"></div>
}

<!-- WRONG - Heavy component in initial bundle -->
<app-analytics-chart [data]="data()" />
```

### Avoid Barrel File Re-exports

```typescript
// WRONG - Imports entire barrel, breaks tree-shaking
import { Button, Modal, Table } from "@shared/components";

// CORRECT - Direct imports
import { Button } from "@shared/components/button/button.component";
import { Modal } from "@shared/components/modal/modal.component";
```

### Dynamic Import Third-Party Libraries

```typescript
// CORRECT - Load heavy library on demand
async loadChart() {
  const { Chart } = await import('chart.js');
  this.chart = new Chart(this.canvas, config);
}

// WRONG - Bundle Chart.js in main chunk
import { Chart } from 'chart.js';
```

---

## 4. Rendering Performance (HIGH)

### Always Use trackBy with @for

```html
<!-- CORRECT - Efficient DOM updates -->
@for (item of items(); track item.id) {
  <app-item-card [item]="item" />
}

<!-- WRONG - Entire list re-renders on any change -->
@for (item of items(); track $index) {
  <app-item-card [item]="item" />
}
```

### Use Virtual Scrolling for Large Lists

```typescript
import { CdkVirtualScrollViewport, CdkFixedSizeVirtualScroll } from '@angular/cdk/scrolling';

@Component({
  imports: [CdkVirtualScrollViewport, CdkFixedSizeVirtualScroll],
  template: `
    <cdk-virtual-scroll-viewport itemSize="50" class="viewport">
      <div *cdkVirtualFor="let item of items" class="item">
        {{ item.name }}
      </div>
    </cdk-virtual-scroll-viewport>
  `
})
```

### Prefer Pure Pipes Over Methods

```typescript
// CORRECT - Pure pipe, memoized
@Pipe({ name: 'filterActive', standalone: true, pure: true })
export class FilterActivePipe implements PipeTransform {
  transform(items: Item[]): Item[] {
    return items.filter(i => i.active);
  }
}

// Template
@for (item of items() | filterActive; track item.id) { ... }

// WRONG - Method called every change detection
@for (item of getActiveItems(); track item.id) { ... }
```

### Use computed() for Derived Data

```typescript
// CORRECT - Computed, cached until dependencies change
export class ProductStore {
  products = signal<Product[]>([]);
  filter = signal('');

  filteredProducts = computed(() => {
    const f = this.filter().toLowerCase();
    return this.products().filter(p =>
      p.name.toLowerCase().includes(f)
    );
  });
}

// WRONG - Recalculates every access
get filteredProducts() {
  return this.products.filter(p =>
    p.name.toLowerCase().includes(this.filter)
  );
}
```

---
## 5. Server-Side Rendering (HIGH)

### Configure Incremental Hydration

```typescript
// app.config.ts
import {
  provideClientHydration,
  withEventReplay,
  withIncrementalHydration,
} from "@angular/platform-browser";

export const appConfig: ApplicationConfig = {
  providers: [
    provideClientHydration(withIncrementalHydration(), withEventReplay()),
  ],
};
```

### Defer Non-Critical Content

```html
<!-- Critical above-the-fold content -->
<app-header />
<app-hero />

<!-- Below-fold deferred with hydration triggers -->
@defer (hydrate on viewport) {
  <app-product-grid />
}

@defer (hydrate on interaction) {
  <app-chat-widget />
}
```

### Use TransferState for SSR Data

```typescript
import { inject, Injectable, PLATFORM_ID, TransferState, makeStateKey } from "@angular/core";
import { isPlatformBrowser, isPlatformServer } from "@angular/common";
import { HttpClient } from "@angular/common/http";
import { Observable, of, tap } from "rxjs";

@Injectable({ providedIn: "root" })
export class DataService {
  private http = inject(HttpClient);
  private transferState = inject(TransferState);
  private platformId = inject(PLATFORM_ID);

  getData(key: string): Observable<Data> {
    const stateKey = makeStateKey<Data>(key);

    if (isPlatformBrowser(this.platformId)) {
      const cached = this.transferState.get(stateKey, null);
      if (cached) {
        this.transferState.remove(stateKey);
        return of(cached);
      }
    }

    return this.http.get<Data>(`/api/${key}`).pipe(
      tap((data) => {
        if (isPlatformServer(this.platformId)) {
          this.transferState.set(stateKey, data);
        }
      }),
    );
  }
}
```

---

## 6. Template Optimization (MEDIUM)

### Use New Control Flow Syntax

```html
<!-- CORRECT - New control flow (faster, smaller bundle) -->
@if (user()) {
  <span>{{ user()!.name }}</span>
} @else {
  <span>Guest</span>
}

@for (item of items(); track item.id) {
  <app-item [item]="item" />
} @empty {
  <p>No items</p>
}

<!-- WRONG - Legacy structural directives -->
<span *ngIf="user; else guest">{{ user.name }}</span>
<ng-template #guest><span>Guest</span></ng-template>
```

### Avoid Complex Template Expressions

```typescript
// CORRECT - Precompute in component
class Component {
  items = signal<Item[]>([]);
  sortedItems = computed(() =>
    [...this.items()].sort((a, b) => a.name.localeCompare(b.name))
  );
}

// Template
@for (item of sortedItems(); track item.id) { ... }

// WRONG - Sorting in template every render
@for (item of items() | sort:'name'; track item.id) { ... }
```

---

## 7. State Management (MEDIUM)

### Use Selectors to Prevent Re-renders

```typescript
// CORRECT - Selective subscription
@Component({
  template: `<span>{{ userName() }}</span>`,
})
class HeaderComponent {
  private store = inject(Store);
  // Only re-renders when userName changes
  userName = this.store.selectSignal(selectUserName);
}

// WRONG - Subscribing to entire state
@Component({
  template: `<span>{{ state().user.name }}</span>`,
})
class HeaderComponent {
  private store = inject(Store);
  // Re-renders on ANY state change
  state = toSignal(this.store);
}
```

### Colocate State with Features

```typescript
// CORRECT - Feature-scoped store
@Injectable() // NOT providedIn: 'root'
export class ProductStore { ... }

@Component({
  providers: [ProductStore], // Scoped to component tree
})
export class ProductPageComponent {
  store = inject(ProductStore);
}

// WRONG - Everything in global store
@Injectable({ providedIn: 'root' })
export class GlobalStore {
  // Contains ALL app state - hard to tree-shake
}
```

---

## 8. Memory Management (LOW-MEDIUM)

### Use takeUntilDestroyed for Subscriptions

```typescript
import { takeUntilDestroyed } from '@angular/core/rxjs-interop';

@Component({...})
export class DataComponent {
  private destroyRef = inject(DestroyRef);

  constructor() {
    this.data$.pipe(
      takeUntilDestroyed(this.destroyRef)
    ).subscribe(data => this.process(data));
  }
}

// WRONG - Manual subscription management
export class DataComponent implements OnDestroy {
  private subscription!: Subscription;

  ngOnInit() {
    this.subscription = this.data$.subscribe(...);
  }

  ngOnDestroy() {
    this.subscription.unsubscribe(); // Easy to forget
  }
}
```

### Prefer Signals Over Subscriptions

```typescript
// CORRECT - No subscription needed
@Component({
  template: `<div>{{ data()?.name }}</div>`,
})
export class Component {
  private service = inject(DataService);
  data = toSignal(this.service.data$, { initialValue: null });
}

// WRONG - Manual subscription
@Component({
  template: `<div>{{ data?.name }}</div>`,
})
export class Component implements OnInit, OnDestroy {
  data: Data | null = null;
  private sub!: Subscription;

  ngOnInit() {
    this.sub = this.service.data$.subscribe((d) => (this.data = d));
  }

  ngOnDestroy() {
    this.sub.unsubscribe();
  }
}
```

---

## Quick Reference Checklist

### New Component

All five items below are applied together in the sketch after this list.

- [ ] `changeDetection: ChangeDetectionStrategy.OnPush`
- [ ] `standalone: true`
- [ ] Signals for state (`signal()`, `input()`, `output()`)
- [ ] `inject()` for dependencies
- [ ] `@for` with `track` expression

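A minimal sketch that satisfies every item on the checklist; `ItemsService` and its `initialItems` field are illustrative names, not part of this skill:

```typescript
import { ChangeDetectionStrategy, Component, inject, input, signal } from "@angular/core";
import { ItemsService } from "./items.service"; // hypothetical data service

@Component({
  selector: "app-item-list",
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <h2>{{ title() }}</h2>
    @for (item of items(); track item.id) {
      <span>{{ item.name }}</span>
    }
  `,
})
export class ItemListComponent {
  title = input("Items"); // signal input with a default
  private itemsService = inject(ItemsService); // inject() instead of constructor DI
  items = signal(this.itemsService.initialItems); // signal-based state
}
```
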
### Performance Review

- [ ] No methods in templates (use pipes or computed)
- [ ] Large lists virtualized
- [ ] Heavy components deferred
- [ ] Routes lazy-loaded
- [ ] Third-party libs dynamically imported

### SSR Check

- [ ] Hydration configured
- [ ] Critical content renders first
- [ ] Non-critical content uses `@defer (hydrate on ...)`
- [ ] TransferState for server-fetched data

---

## Resources

- [Angular Performance Guide](https://angular.dev/best-practices/performance)
- [Zoneless Angular](https://angular.dev/guide/experimental/zoneless)
- [Angular SSR Guide](https://angular.dev/guide/ssr)
- [Change Detection Deep Dive](https://angular.dev/guide/change-detection)

13
skills/angular-best-practices/metadata.json
Normal file
@@ -0,0 +1,13 @@

{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Performance optimization and best practices guide for Angular applications designed for AI agents and LLMs. Covers change detection strategies (OnPush, Signals, Zoneless), avoiding async waterfalls, bundle optimization with lazy loading and @defer, rendering performance, SSR/hydration patterns, and memory management. Prioritized by impact from critical to incremental improvements.",
  "references": [
    "https://angular.dev/best-practices",
    "https://angular.dev/guide/performance",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://web.dev/performance"
  ]
}

41
skills/angular-state-management/README.md
Normal file
@@ -0,0 +1,41 @@

# Angular State Management

Complete state management patterns for Angular applications optimized for AI agents and LLMs.

## Overview

This skill provides decision frameworks and implementation patterns for:

- **Signal-based Services** - Lightweight state for shared data
- **NgRx SignalStore** - Feature-scoped state with computed values
- **NgRx Store** - Enterprise-scale global state management
- **RxJS ComponentStore** - Reactive component-level state
- **Forms State** - Reactive and template-driven form patterns

## Structure

The `SKILL.md` file is organized into:

1. **State Categories** - Local, shared, global, server, URL, and form state
2. **Selection Criteria** - Decision trees for choosing the right solution
3. **Implementation Patterns** - Complete examples for each approach
4. **Migration Guides** - Moving from BehaviorSubject to Signals
5. **Bridging Patterns** - Integrating Signals with RxJS

## When to Use Each Pattern

- **Signal Service**: Shared UI state (theme, user preferences)
- **NgRx SignalStore**: Feature state with computed values
- **NgRx Store**: Complex cross-feature dependencies
- **ComponentStore**: Component-scoped async operations
- **Reactive Forms**: Form state with validation

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Signals](https://angular.dev/guide/signals)
- [NgRx](https://ngrx.io)
- [NgRx SignalStore](https://ngrx.io/guide/signals)

634
skills/angular-state-management/SKILL.md
Normal file
@@ -0,0 +1,634 @@

---
name: angular-state-management
description: Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solutions, or migrating from legacy patterns.
risk: safe
source: self
---

# Angular State Management

Comprehensive guide to modern Angular state management patterns, from Signal-based local state to global stores and server state synchronization.

## When to Use This Skill

- Setting up global state management in Angular
- Choosing between Signals, NgRx, or Akita
- Managing component-level stores
- Implementing optimistic updates
- Debugging state-related issues
- Migrating from legacy state patterns

## Do Not Use This Skill When

- The task is unrelated to Angular state management
- You need React state management → use `react-state-management`

---

## Core Concepts

### State Categories

| Type             | Description                  | Solutions             |
| ---------------- | ---------------------------- | --------------------- |
| **Local State**  | Component-specific, UI state | Signals, `signal()`   |
| **Shared State** | Between related components   | Signal services       |
| **Global State** | App-wide, complex            | NgRx, Akita, Elf      |
| **Server State** | Remote data, caching         | NgRx Query, RxAngular |
| **URL State**    | Route parameters             | ActivatedRoute        |
| **Form State**   | Input values, validation     | Reactive Forms        |

### Selection Criteria

```
Small app, simple state     → Signal Services
Medium app, moderate state  → Component Stores
Large app, complex state    → NgRx Store
Heavy server interaction    → NgRx Query + Signal Services
Real-time updates           → RxAngular + Signals
```

---

## Quick Start: Signal-Based State

### Pattern 1: Simple Signal Service

```typescript
// services/counter.service.ts
import { Injectable, signal, computed } from "@angular/core";

@Injectable({ providedIn: "root" })
export class CounterService {
  // Private writable signal
  private _count = signal(0);

  // Public read-only views
  readonly count = this._count.asReadonly();
  readonly doubled = computed(() => this._count() * 2);
  readonly isPositive = computed(() => this._count() > 0);

  increment() {
    this._count.update((v) => v + 1);
  }

  decrement() {
    this._count.update((v) => v - 1);
  }

  reset() {
    this._count.set(0);
  }
}

// Usage in component
@Component({
  template: `
    <p>Count: {{ counter.count() }}</p>
    <p>Doubled: {{ counter.doubled() }}</p>
    <button (click)="counter.increment()">+</button>
  `,
})
export class CounterComponent {
  counter = inject(CounterService);
}
```

### Pattern 2: Feature Signal Store

```typescript
// stores/user.store.ts
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom } from "rxjs";

interface User {
  id: string;
  name: string;
  email: string;
}

interface UserState {
  user: User | null;
  loading: boolean;
  error: string | null;
}

@Injectable({ providedIn: "root" })
export class UserStore {
  private http = inject(HttpClient);

  // State signals
  private _user = signal<User | null>(null);
  private _loading = signal(false);
  private _error = signal<string | null>(null);

  // Selectors (read-only computed)
  readonly user = computed(() => this._user());
  readonly loading = computed(() => this._loading());
  readonly error = computed(() => this._error());
  readonly isAuthenticated = computed(() => this._user() !== null);
  readonly displayName = computed(() => this._user()?.name ?? "Guest");

  // Actions
  async loadUser(id: string) {
    this._loading.set(true);
    this._error.set(null);

    try {
      const user = await firstValueFrom(this.http.get<User>(`/api/users/${id}`));
      this._user.set(user);
    } catch (e) {
      this._error.set("Failed to load user");
    } finally {
      this._loading.set(false);
    }
  }

  updateUser(updates: Partial<User>) {
    this._user.update((user) => (user ? { ...user, ...updates } : null));
  }

  logout() {
    this._user.set(null);
    this._error.set(null);
  }
}
```

### Pattern 3: SignalStore (NgRx Signals)

```typescript
// stores/products.store.ts
import {
  signalStore,
  withState,
  withMethods,
  withComputed,
  patchState,
} from "@ngrx/signals";
import { computed, inject } from "@angular/core";
import { Product, ProductService } from "./product.service";

interface ProductState {
  products: Product[];
  loading: boolean;
  filter: string;
}

const initialState: ProductState = {
  products: [],
  loading: false,
  filter: "",
};

export const ProductStore = signalStore(
  { providedIn: "root" },

  withState(initialState),

  withComputed((store) => ({
    filteredProducts: computed(() => {
      const filter = store.filter().toLowerCase();
      return store
        .products()
        .filter((p) => p.name.toLowerCase().includes(filter));
    }),
    totalCount: computed(() => store.products().length),
  })),

  withMethods((store, productService = inject(ProductService)) => ({
    async loadProducts() {
      patchState(store, { loading: true });

      try {
        const products = await productService.getAll();
        patchState(store, { products, loading: false });
      } catch {
        patchState(store, { loading: false });
      }
    },

    setFilter(filter: string) {
      patchState(store, { filter });
    },

    addProduct(product: Product) {
      patchState(store, ({ products }) => ({
        products: [...products, product],
      }));
    },
  })),
);

// Usage
@Component({
  template: `
    <input (input)="store.setFilter($any($event.target).value)" />
    @if (store.loading()) {
      <app-spinner />
    } @else {
      @for (product of store.filteredProducts(); track product.id) {
        <app-product-card [product]="product" />
      }
    }
  `,
})
export class ProductListComponent {
  store = inject(ProductStore);

  ngOnInit() {
    this.store.loadProducts();
  }
}
```

---

## NgRx Store (Global State)

### Setup

```typescript
// store/app.state.ts
import { ActionReducerMap } from "@ngrx/store";

export interface AppState {
  user: UserState;
  cart: CartState;
}

export const reducers: ActionReducerMap<AppState> = {
  user: userReducer,
  cart: cartReducer,
};

// main.ts
bootstrapApplication(AppComponent, {
  providers: [
    provideStore(reducers),
    provideEffects([UserEffects, CartEffects]),
    provideStoreDevtools({ maxAge: 25 }),
  ],
});
```

### Feature Slice Pattern

```typescript
// store/user/user.actions.ts
import { createActionGroup, props, emptyProps } from "@ngrx/store";

export const UserActions = createActionGroup({
  source: "User",
  events: {
    "Load User": props<{ userId: string }>(),
    "Load User Success": props<{ user: User }>(),
    "Load User Failure": props<{ error: string }>(),
    "Update User": props<{ updates: Partial<User> }>(),
    Logout: emptyProps(),
  },
});
```

```typescript
// store/user/user.reducer.ts
import { createReducer, on } from "@ngrx/store";
import { UserActions } from "./user.actions";

export interface UserState {
  user: User | null;
  loading: boolean;
  error: string | null;
}

const initialState: UserState = {
  user: null,
  loading: false,
  error: null,
};

export const userReducer = createReducer(
  initialState,

  on(UserActions.loadUser, (state) => ({
    ...state,
    loading: true,
    error: null,
  })),

  on(UserActions.loadUserSuccess, (state, { user }) => ({
    ...state,
    user,
    loading: false,
  })),

  on(UserActions.loadUserFailure, (state, { error }) => ({
    ...state,
    loading: false,
    error,
  })),

  on(UserActions.logout, () => initialState),
);
```

```typescript
// store/user/user.selectors.ts
import { createFeatureSelector, createSelector } from "@ngrx/store";
import { UserState } from "./user.reducer";

export const selectUserState = createFeatureSelector<UserState>("user");

export const selectUser = createSelector(
  selectUserState,
  (state) => state.user,
);

export const selectUserLoading = createSelector(
  selectUserState,
  (state) => state.loading,
);

export const selectIsAuthenticated = createSelector(
  selectUser,
  (user) => user !== null,
);
```

```typescript
// store/user/user.effects.ts
import { Injectable, inject } from "@angular/core";
import { Actions, createEffect, ofType } from "@ngrx/effects";
import { switchMap, map, catchError, of } from "rxjs";
import { UserActions } from "./user.actions";
import { UserService } from "../../services/user.service"; // path is illustrative

@Injectable()
export class UserEffects {
  private actions$ = inject(Actions);
  private userService = inject(UserService);

  loadUser$ = createEffect(() =>
    this.actions$.pipe(
      ofType(UserActions.loadUser),
      switchMap(({ userId }) =>
        this.userService.getUser(userId).pipe(
          map((user) => UserActions.loadUserSuccess({ user })),
          catchError((error) =>
            of(UserActions.loadUserFailure({ error: error.message })),
          ),
        ),
      ),
    ),
  );
}
```

### Component Usage

```typescript
@Component({
  template: `
    @if (loading()) {
      <app-spinner />
    } @else if (user()) {
      <!-- "as" aliases are only supported on @if, so read the signal again -->
      <h1>Welcome, {{ user()!.name }}</h1>
      <button (click)="logout()">Logout</button>
    }
  `,
})
export class HeaderComponent {
  private store = inject(Store);

  user = this.store.selectSignal(selectUser);
  loading = this.store.selectSignal(selectUserLoading);

  logout() {
    this.store.dispatch(UserActions.logout());
  }
}
```

---

## RxJS-Based Patterns

### Component Store (Local Feature State)

```typescript
// stores/todo.store.ts
import { Injectable } from "@angular/core";
import { ComponentStore } from "@ngrx/component-store";
import { switchMap, tap, catchError, EMPTY } from "rxjs";
import { Todo, TodoService } from "./todo.service"; // path is illustrative

interface TodoState {
  todos: Todo[];
  loading: boolean;
}

@Injectable()
export class TodoStore extends ComponentStore<TodoState> {
  constructor(private todoService: TodoService) {
    super({ todos: [], loading: false });
  }

  // Selectors
  readonly todos$ = this.select((state) => state.todos);
  readonly loading$ = this.select((state) => state.loading);
  readonly completedCount$ = this.select(
    this.todos$,
    (todos) => todos.filter((t) => t.completed).length,
  );

  // Updaters
  readonly addTodo = this.updater((state, todo: Todo) => ({
    ...state,
    todos: [...state.todos, todo],
  }));

  readonly toggleTodo = this.updater((state, id: string) => ({
    ...state,
    todos: state.todos.map((t) =>
      t.id === id ? { ...t, completed: !t.completed } : t,
    ),
  }));

  // Effects
  readonly loadTodos = this.effect<void>((trigger$) =>
    trigger$.pipe(
      tap(() => this.patchState({ loading: true })),
      switchMap(() =>
        this.todoService.getAll().pipe(
          tap({
            next: (todos) => this.patchState({ todos, loading: false }),
            error: () => this.patchState({ loading: false }),
          }),
          catchError(() => EMPTY),
        ),
      ),
    ),
  );
}
```

---

## Server State with Signals

### HTTP + Signals Pattern

```typescript
// services/api.service.ts
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom } from "rxjs";

interface ApiState<T> {
  data: T | null;
  loading: boolean;
  error: string | null;
}

@Injectable({ providedIn: "root" })
export class ProductApiService {
  private http = inject(HttpClient);

  private _state = signal<ApiState<Product[]>>({
    data: null,
    loading: false,
    error: null,
  });

  readonly products = computed(() => this._state().data ?? []);
  readonly loading = computed(() => this._state().loading);
  readonly error = computed(() => this._state().error);

  async fetchProducts(): Promise<void> {
    this._state.update((s) => ({ ...s, loading: true, error: null }));

    try {
      const data = await firstValueFrom(
        this.http.get<Product[]>("/api/products"),
      );
      this._state.update((s) => ({ ...s, data, loading: false }));
    } catch (e) {
      this._state.update((s) => ({
        ...s,
        loading: false,
        error: "Failed to fetch products",
      }));
    }
  }

  // Optimistic update
  async deleteProduct(id: string): Promise<void> {
    const previousData = this._state().data;

    // Optimistically remove
    this._state.update((s) => ({
      ...s,
      data: s.data?.filter((p) => p.id !== id) ?? null,
    }));

    try {
      await firstValueFrom(this.http.delete(`/api/products/${id}`));
    } catch {
      // Rollback on error
      this._state.update((s) => ({ ...s, data: previousData }));
    }
  }
}
```

---

## Best Practices

### Do's

| Practice                           | Why                                |
| ---------------------------------- | ---------------------------------- |
| Use Signals for local state        | Simple, reactive, no subscriptions |
| Use `computed()` for derived data  | Auto-updates, memoized             |
| Colocate state with feature        | Easier to maintain                 |
| Use NgRx for complex flows         | Actions, effects, devtools         |
| Prefer `inject()` over constructor | Cleaner, works in factories        |

### Don'ts

| Anti-Pattern                      | Instead                                               |
| --------------------------------- | ----------------------------------------------------- |
| Store derived data                | Use `computed()`                                      |
| Mutate signals directly           | Use `set()` or `update()`                             |
| Over-globalize state              | Keep local when possible                              |
| Mix RxJS and Signals chaotically  | Choose primary, bridge with `toSignal`/`toObservable` |
| Subscribe in components for state | Use template with signals                             |

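The second Don't is the one that bites silently; a minimal illustration of mutate-in-place versus `update()`:

```typescript
import { signal } from "@angular/core";

const todos = signal<string[]>([]);

// WRONG - mutates the array in place; the signal never notifies consumers
todos().push("buy milk");

// CORRECT - produces a new value, so the change propagates
todos.update((list) => [...list, "buy milk"]);
```
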
---

## Migration Path

### From BehaviorSubject to Signals

```typescript
// Before: RxJS-based
@Injectable({ providedIn: "root" })
export class OldUserService {
  private userSubject = new BehaviorSubject<User | null>(null);
  user$ = this.userSubject.asObservable();

  setUser(user: User) {
    this.userSubject.next(user);
  }
}

// After: Signal-based
@Injectable({ providedIn: "root" })
export class UserService {
  private _user = signal<User | null>(null);
  readonly user = this._user.asReadonly();

  setUser(user: User) {
    this._user.set(user);
  }
}
```

### Bridging Signals and RxJS

```typescript
import { inject, signal } from '@angular/core';
import { toSignal, toObservable } from '@angular/core/rxjs-interop';
import { ActivatedRoute } from '@angular/router';
import { HttpClient } from '@angular/common/http';
import { debounceTime, map, switchMap } from 'rxjs';

// Observable → Signal
@Component({...})
export class ExampleComponent {
  private route = inject(ActivatedRoute);

  // Convert Observable to Signal
  userId = toSignal(
    this.route.params.pipe(map(p => p['id'])),
    { initialValue: '' }
  );
}

// Signal → Observable
export class DataService {
  private http = inject(HttpClient);
  private filter = signal('');

  // Convert Signal to Observable
  filter$ = toObservable(this.filter);

  filteredData$ = this.filter$.pipe(
    debounceTime(300),
    switchMap(filter => this.http.get(`/api/data?q=${filter}`))
  );
}
```

---

## Resources

- [Angular Signals Guide](https://angular.dev/guide/signals)
- [NgRx Documentation](https://ngrx.io/)
- [NgRx SignalStore](https://ngrx.io/guide/signals)
- [RxAngular](https://www.rx-angular.io/)

13
skills/angular-state-management/metadata.json
Normal file
@@ -0,0 +1,13 @@

{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Complete state management guide for Angular applications designed for AI agents and LLMs. Covers Signal-based services, NgRx for global state, RxJS patterns, and component stores. Includes decision trees for choosing the right solution, migration patterns from BehaviorSubject to Signals, and strategies for bridging Signals with RxJS observables.",
  "references": [
    "https://angular.dev/guide/signals",
    "https://ngrx.io",
    "https://ngrx.io/guide/signals",
    "https://www.rx-angular.io",
    "https://github.com/ngrx/platform"
  ]
}

55
skills/angular-ui-patterns/README.md
Normal file
@@ -0,0 +1,55 @@

# Angular UI Patterns

Modern UI patterns for building robust Angular applications optimized for AI agents and LLMs.

## Overview

This skill covers essential UI patterns for:

- **Loading States** - Skeleton vs spinner decision trees
- **Error Handling** - Error boundary hierarchy and recovery
- **Progressive Disclosure** - Using `@defer` for lazy rendering
- **Data Display** - Handling empty, loading, and error states
- **Form Patterns** - Submission states and validation feedback
- **Dialog/Modal Patterns** - Proper dialog lifecycle management

## Core Principles

1. **Never show stale UI** - Only show loading when no data exists
2. **Surface all errors** - Never silently fail
3. **Optimistic updates** - Update UI before the server confirms
4. **Progressive disclosure** - Use `@defer` to load non-critical content
5. **Graceful degradation** - Fall back gracefully when a feature fails

## Structure

The `SKILL.md` file includes:

1. **Golden Rules** - Non-negotiable patterns to follow
2. **Decision Trees** - When to use skeleton vs spinner
3. **Code Examples** - Correct vs incorrect implementations
4. **Anti-patterns** - Common mistakes to avoid

## Quick Reference

```html
<!-- Angular template pattern for data states -->
@if (error()) {
  <app-error-state [error]="error()" (retry)="load()" />
} @else if (loading() && !data()) {
  <app-skeleton-state />
} @else if (!data()?.length) {
  <app-empty-state message="No items found" />
} @else {
  <app-data-display [data]="data()" />
}
```

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular @defer](https://angular.dev/guide/defer)
- [Angular Templates](https://angular.dev/guide/templates)

508
skills/angular-ui-patterns/SKILL.md
Normal file
@@ -0,0 +1,508 @@

---
name: angular-ui-patterns
description: Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component states.
risk: safe
source: self
---

# Angular UI Patterns

## Core Principles

1. **Never show stale UI** - Loading states only when actually loading
2. **Always surface errors** - Users must know when something fails
3. **Optimistic updates** - Make the UI feel instant (see the rollback sketch after this list)
4. **Progressive disclosure** - Use `@defer` to show content as available
5. **Graceful degradation** - Partial data is better than no data

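A minimal sketch of principle 3 with rollback; `TodosService` and its `setCompleted` method are illustrative names, not part of this skill:

```typescript
import { Component, inject, signal } from "@angular/core";
import { firstValueFrom } from "rxjs";
import { TodosService } from "./todos.service"; // hypothetical service

@Component({
  selector: "app-todo-toggle",
  standalone: true,
  template: `<button (click)="toggle('1')">{{ completed() ? 'Done' : 'Todo' }}</button>`,
})
export class TodoToggleComponent {
  private todosService = inject(TodosService);
  completed = signal(false);

  async toggle(id: string) {
    const previous = this.completed();
    this.completed.set(!previous); // update the UI immediately

    try {
      await firstValueFrom(this.todosService.setCompleted(id, !previous));
    } catch {
      this.completed.set(previous); // roll back so the UI matches the server
    }
  }
}
```
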
---

## Loading State Patterns

### The Golden Rule

**Show a loading indicator ONLY when there's no data to display.**

```typescript
@Component({
  template: `
    @if (error()) {
      <app-error-state [error]="error()" (retry)="load()" />
    } @else if (loading() && !items().length) {
      <app-skeleton-list />
    } @else if (!items().length) {
      <app-empty-state message="No items found" />
    } @else {
      <app-item-list [items]="items()" />
    }
  `,
})
export class ItemListComponent {
  private store = inject(ItemStore);

  items = this.store.items;
  loading = this.store.loading;
  error = this.store.error;
}
```

### Loading State Decision Tree

```
Is there an error?
  → Yes: Show error state with retry option
  → No: Continue

Is it loading AND we have no data?
  → Yes: Show loading indicator (spinner/skeleton)
  → No: Continue

Do we have data?
  → Yes, with items: Show the data
  → Yes, but empty: Show empty state
  → No: Show loading (fallback)
```

### Skeleton vs Spinner

| Use Skeleton When    | Use Spinner When      |
| -------------------- | --------------------- |
| Known content shape  | Unknown content shape |
| List/card layouts    | Modal actions         |
| Initial page load    | Button submissions    |
| Content placeholders | Inline operations     |

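The skeleton side of the table can be as small as a dumb placeholder component; a minimal sketch (utility class names are illustrative):

```typescript
import { Component, computed, input } from "@angular/core";

@Component({
  selector: "app-skeleton-list",
  standalone: true,
  template: `
    @for (row of placeholders(); track $index) {
      <div class="h-6 mb-2 rounded bg-gray-200 animate-pulse"></div>
    }
  `,
})
export class SkeletonListComponent {
  // Number of placeholder rows, matched to the real list's typical size
  count = input(5);
  placeholders = computed(() => Array.from({ length: this.count() }));
}
```
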
---

## Control Flow Patterns

### @if/@else for Conditional Rendering

```html
@if (user(); as user) {
  <span>Welcome, {{ user.name }}</span>
} @else if (loading()) {
  <app-spinner size="small" />
} @else {
  <a routerLink="/login">Sign In</a>
}
```

### @for with Track

```html
@for (item of items(); track item.id) {
  <app-item-card [item]="item" (delete)="remove(item.id)" />
} @empty {
  <app-empty-state
    icon="inbox"
    message="No items yet"
    actionLabel="Create Item"
    (action)="create()"
  />
}
```

### @defer for Progressive Loading

```html
<!-- Critical content loads immediately -->
<app-header />
<app-hero-section />

<!-- Non-critical content deferred -->
@defer (on viewport) {
  <app-comments [postId]="postId()" />
} @placeholder {
  <div class="h-32 bg-gray-100 animate-pulse"></div>
} @loading (minimum 200ms) {
  <app-spinner />
} @error {
  <app-error-state message="Failed to load comments" />
}
```

---

## Error Handling Patterns

### Error Handling Hierarchy

```
1. Inline error (field-level) → Form validation errors
2. Toast notification         → Recoverable errors, user can retry
3. Error banner               → Page-level errors, data still partially usable
4. Full error screen          → Unrecoverable, needs user action
```

### Always Show Errors

**CRITICAL: Never swallow errors silently.**

```typescript
// CORRECT - Error always surfaced to user
@Component({...})
export class CreateItemComponent {
  private store = inject(ItemStore);
  private toast = inject(ToastService);
  private router = inject(Router);

  async create(data: CreateItemDto) {
    try {
      await this.store.create(data);
      this.toast.success('Item created successfully');
      this.router.navigate(['/items']);
    } catch (error) {
      console.error('createItem failed:', error);
      this.toast.error('Failed to create item. Please try again.');
    }
  }
}

// WRONG - Error silently caught
async create(data: CreateItemDto) {
  try {
    await this.store.create(data);
  } catch (error) {
    console.error(error); // User sees nothing!
  }
}
```

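The examples in this file lean on a `ToastService` that is never shown; a minimal signal-based sketch of what such a service might look like (the implementation is an assumption, not part of the skill):

```typescript
import { Injectable, signal } from "@angular/core";

export interface Toast {
  kind: "success" | "error";
  text: string;
}

@Injectable({ providedIn: "root" })
export class ToastService {
  // A toast host component can render and expire these
  readonly toasts = signal<Toast[]>([]);

  success(text: string) {
    this.push({ kind: "success", text });
  }

  error(text: string) {
    this.push({ kind: "error", text });
  }

  private push(toast: Toast) {
    this.toasts.update((list) => [...list, toast]);
    setTimeout(() => {
      this.toasts.update((list) => list.filter((t) => t !== toast));
    }, 4000);
  }
}
```
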
### Error State Component Pattern

```typescript
@Component({
  selector: "app-error-state",
  standalone: true,
  imports: [NgOptimizedImage],
  template: `
    <div class="error-state">
      <img ngSrc="/assets/error-icon.svg" width="64" height="64" alt="" />
      <h3>{{ title() }}</h3>
      <p>{{ message() }}</p>
      <button (click)="retry.emit()" class="btn-primary">Try Again</button>
    </div>
  `,
})
export class ErrorStateComponent {
  title = input("Something went wrong");
  message = input("An unexpected error occurred");
  // OutputEmitterRef exposes no "observed" flag, so the retry button is
  // always rendered; parents that cannot retry can simply ignore the event.
  retry = output<void>();
}
```

---

## Button State Patterns

### Button Loading State

```html
<button
  (click)="handleSubmit()"
  [disabled]="isSubmitting() || !form.valid"
  class="btn-primary"
>
  @if (isSubmitting()) {
    <app-spinner size="small" class="mr-2" />
    Saving...
  } @else {
    Save Changes
  }
</button>
```

### Disable During Operations

**CRITICAL: Always disable triggers during async operations.**

```typescript
// CORRECT - Button disabled while loading
@Component({
  template: `
    <button [disabled]="saving()" (click)="save()">
      @if (saving()) {
        <app-spinner size="sm" /> Saving...
      } @else {
        Save
      }
    </button>
  `
})
export class SaveButtonComponent {
  private service = inject(SaveService); // any service exposing save()
  saving = signal(false);

  async save() {
    this.saving.set(true);
    try {
      await this.service.save();
    } finally {
      this.saving.set(false);
    }
  }
}

// WRONG - User can click multiple times
<button (click)="save()">
  {{ saving() ? 'Saving...' : 'Save' }}
</button>
```

---

## Empty States

### Empty State Requirements

Every list/collection MUST have an empty state:

```html
@for (item of items(); track item.id) {
  <app-item-card [item]="item" />
} @empty {
  <app-empty-state
    icon="folder-open"
    title="No items yet"
    description="Create your first item to get started"
    actionLabel="Create Item"
    (action)="openCreateDialog()"
  />
}
```

### Contextual Empty States

```typescript
@Component({
  selector: "app-empty-state",
  template: `
    <div class="empty-state">
      <span class="icon" [class]="icon()"></span>
      <h3>{{ title() }}</h3>
      <p>{{ description() }}</p>
      @if (actionLabel()) {
        <button (click)="action.emit()" class="btn-primary">
          {{ actionLabel() }}
        </button>
      }
    </div>
  `,
})
export class EmptyStateComponent {
  icon = input("inbox");
  title = input.required<string>();
  description = input("");
  actionLabel = input<string | null>(null);
  action = output<void>();
}
```

---

## Form Patterns

### Form with Loading and Validation

```typescript
@Component({
  template: `
    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <div class="form-field">
        <label for="name">Name</label>
        <input
          id="name"
          formControlName="name"
          [class.error]="isFieldInvalid('name')"
        />
        @if (isFieldInvalid("name")) {
          <span class="error-text">
            {{ getFieldError("name") }}
          </span>
        }
      </div>

      <div class="form-field">
        <label for="email">Email</label>
        <input id="email" type="email" formControlName="email" />
        @if (isFieldInvalid("email")) {
          <span class="error-text">
            {{ getFieldError("email") }}
          </span>
        }
      </div>

      <button type="submit" [disabled]="form.invalid || submitting()">
        @if (submitting()) {
          <app-spinner size="sm" /> Submitting...
        } @else {
          Submit
        }
      </button>
    </form>
  `,
})
export class UserFormComponent {
  private fb = inject(FormBuilder);
  private service = inject(UserService); // any service exposing submit()
  private toast = inject(ToastService);

  submitting = signal(false);

  form = this.fb.group({
    name: ["", [Validators.required, Validators.minLength(2)]],
    email: ["", [Validators.required, Validators.email]],
  });

  isFieldInvalid(field: string): boolean {
    const control = this.form.get(field);
    return control ? control.invalid && control.touched : false;
  }

  getFieldError(field: string): string {
    const control = this.form.get(field);
    if (control?.hasError("required")) return "This field is required";
    if (control?.hasError("email")) return "Invalid email format";
    if (control?.hasError("minlength")) return "Too short";
    return "";
  }

  async onSubmit() {
    if (this.form.invalid) return;

    this.submitting.set(true);
    try {
      await this.service.submit(this.form.value);
      this.toast.success("Submitted successfully");
    } catch {
      this.toast.error("Submission failed");
    } finally {
      this.submitting.set(false);
    }
  }
}
```

---

## Dialog/Modal Patterns

### Confirmation Dialog

```typescript
// dialog.service.ts
import { Injectable, inject } from '@angular/core';
import { Dialog } from '@angular/cdk/dialog'; // CDK Dialog or custom
import { firstValueFrom } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class DialogService {
  private dialog = inject(Dialog);

  async confirm(options: {
    title: string;
    message: string;
    confirmText?: string;
    cancelText?: string;
  }): Promise<boolean> {
    const dialogRef = this.dialog.open(ConfirmDialogComponent, {
      data: options,
    });

    return (await firstValueFrom(dialogRef.closed)) ?? false;
  }
}

// Usage
async deleteItem(item: Item) {
  const confirmed = await this.dialog.confirm({
    title: 'Delete Item',
    message: `Are you sure you want to delete "${item.name}"?`,
    confirmText: 'Delete',
  });

  if (confirmed) {
    await this.store.delete(item.id);
  }
}
```

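`ConfirmDialogComponent` itself is not shown in this skill; a minimal CDK-based sketch of what it could look like:

```typescript
import { Component, inject } from "@angular/core";
import { DIALOG_DATA, DialogRef } from "@angular/cdk/dialog";

interface ConfirmOptions {
  title: string;
  message: string;
  confirmText?: string;
  cancelText?: string;
}

@Component({
  selector: "app-confirm-dialog",
  standalone: true,
  template: `
    <h2>{{ data.title }}</h2>
    <p>{{ data.message }}</p>
    <button (click)="ref.close(false)">{{ data.cancelText ?? "Cancel" }}</button>
    <button class="btn-primary" (click)="ref.close(true)">
      {{ data.confirmText ?? "Confirm" }}
    </button>
  `,
})
export class ConfirmDialogComponent {
  // Provided by Dialog.open(..., { data: options })
  data = inject<ConfirmOptions>(DIALOG_DATA);
  ref = inject(DialogRef) as DialogRef<boolean>;
}
```
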
---

## Anti-Patterns

### Loading States

```typescript
// WRONG - Spinner when data exists (causes flash on refetch)
@if (loading()) {
  <app-spinner />
}

// CORRECT - Only show loading when there is no data
@if (loading() && !items().length) {
  <app-spinner />
}
```

### Error Handling

```typescript
// WRONG - Error swallowed
try {
  await this.service.save();
} catch (e) {
  console.log(e); // User has no idea!
}

// CORRECT - Error surfaced
try {
  await this.service.save();
} catch (e) {
  console.error("Save failed:", e);
  this.toast.error("Failed to save. Please try again.");
}
```

### Button States

```html
<!-- WRONG - Button not disabled during submission -->
<button (click)="submit()">Submit</button>

<!-- CORRECT - Disabled and shows loading -->
<button (click)="submit()" [disabled]="loading()">
  @if (loading()) {
    <app-spinner size="sm" />
  }
  Submit
</button>
```

## UI State Checklist

Before completing any UI component:

### UI States

- [ ] Error state handled and shown to user
- [ ] Loading state shown only when no data exists
- [ ] Empty state provided for collections (`@empty` block)
- [ ] Buttons disabled during async operations
- [ ] Buttons show loading indicator when appropriate

### Data & Mutations

- [ ] All async operations have error handling
- [ ] All user actions have feedback (toast/visual)
- [ ] Optimistic updates rollback on failure

### Accessibility

- [ ] Loading states announced to screen readers
- [ ] Error messages linked to form fields
- [ ] Focus management after state changes

---

## Integration with Other Skills

- **angular-state-management**: Use Signal stores for state
- **angular**: Apply modern patterns (Signals, @defer)
- **testing-patterns**: Test all UI states

12
skills/angular-ui-patterns/metadata.json
Normal file
@@ -0,0 +1,12 @@

{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Modern UI patterns for Angular applications designed for AI agents and LLMs. Covers loading states, error handling, progressive disclosure, and data display patterns. Emphasizes showing loading only without data, surfacing all errors, optimistic updates, and graceful degradation using @defer. Includes decision trees and anti-patterns to avoid.",
  "references": [
    "https://angular.dev/guide/defer",
    "https://angular.dev/guide/templates",
    "https://material.angular.io",
    "https://ng-spartan.com"
  ]
}

40
skills/angular/README.md
Normal file
@@ -0,0 +1,40 @@

# Angular

A comprehensive guide to modern Angular development (v20+) optimized for AI agents and LLMs.

## Overview

This skill covers modern Angular patterns including:

- **Signals** - Angular's reactive primitive for state management
- **Standalone Components** - Modern component architecture without NgModules
- **Zoneless Applications** - High-performance apps without Zone.js
- **SSR & Hydration** - Server-side rendering and client hydration patterns
- **Modern Routing** - Functional guards, resolvers, and lazy loading
- **Dependency Injection** - Modern DI with the `inject()` function
- **Reactive Forms** - Type-safe form handling

## Structure

This skill is a single, comprehensive `SKILL.md` file containing:

1. Modern component patterns with Signal inputs/outputs
2. State management with Signals and computed values
3. Performance optimization techniques
4. SSR and hydration best practices
5. Migration strategies from legacy Angular patterns

## Usage

This skill is designed to be read in full for the complete modern Angular development approach, or referenced for specific patterns as needed.

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Documentation](https://angular.dev)
- [Angular Signals](https://angular.dev/guide/signals)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)

821
skills/angular/SKILL.md
Normal file
@@ -0,0 +1,821 @@

---
name: angular
description: >-
  Modern Angular (v20+) expert with deep knowledge of Signals, Standalone
  Components, Zoneless applications, SSR/Hydration, and reactive patterns.
  Use PROACTIVELY for Angular development, component architecture, state
  management, performance optimization, and migration to modern patterns.
risk: safe
source: self
---

# Angular Expert

Master modern Angular development with Signals, Standalone Components, Zoneless applications, SSR/Hydration, and the latest reactive patterns.

## When to Use This Skill

- Building new Angular applications (v20+)
- Implementing Signals-based reactive patterns
- Creating Standalone Components and migrating from NgModules
- Configuring Zoneless Angular applications
- Implementing SSR, prerendering, and hydration
- Optimizing Angular performance
- Adopting modern Angular patterns and best practices

## Do Not Use This Skill When

- Migrating from AngularJS (1.x) → use `angular-migration` skill
- Working with legacy Angular apps that cannot upgrade
- General TypeScript issues → use `typescript-expert` skill

## Instructions

1. Assess the Angular version and project structure
2. Apply modern patterns (Signals, Standalone, Zoneless)
3. Implement with proper typing and reactivity
4. Validate with build and tests

## Safety

- Always test changes in development before production
- Migrate existing apps gradually (don't big-bang refactor)
- Keep backward compatibility during transitions

---

## Angular Version Timeline

| Version        | Release | Key Features                                           |
| -------------- | ------- | ------------------------------------------------------ |
| **Angular 20** | Q2 2025 | Signals stable, Zoneless stable, Incremental hydration |
| **Angular 21** | Q4 2025 | Signals-first default, Enhanced SSR                    |
| **Angular 22** | Q2 2026 | Signal Forms, Selectorless components                  |

---

## 1. Signals: The New Reactive Primitive

Signals are Angular's fine-grained reactivity system, replacing zone.js-based change detection.

### Core Concepts

```typescript
import { signal, computed, effect } from "@angular/core";

// Writable signal
const count = signal(0);

// Read value
console.log(count()); // 0

// Update value
count.set(5); // Direct set
count.update((v) => v + 1); // Functional update

// Computed (derived) signal
const doubled = computed(() => count() * 2);

// Effect (side effects)
effect(() => {
  console.log(`Count changed to: ${count()}`);
});
```

### Signal-Based Inputs and Outputs

```typescript
import { Component, input, output, model } from "@angular/core";

@Component({
  selector: "app-user-card",
  standalone: true,
  template: `
    <div class="card">
      <h3>{{ name() }}</h3>
      <span>{{ role() }}</span>
      <button (click)="select.emit(id())">Select</button>
    </div>
  `,
})
export class UserCardComponent {
  // Signal inputs (read-only)
  id = input.required<string>();
  name = input.required<string>();
  role = input<string>("User"); // With default

  // Output
  select = output<string>();

  // Two-way binding (model)
  isSelected = model(false);
}

// Usage:
// <app-user-card [id]="'123'" [name]="'John'" [(isSelected)]="selected" />
```

### Signal Queries (ViewChild/ContentChild)

```typescript
import {
  Component,
  ElementRef,
  viewChild,
  viewChildren,
  contentChild,
} from "@angular/core";

@Component({
  selector: "app-container",
  standalone: true,
  template: `
    <input #searchInput />
    @for (item of items; track item.id) {
      <app-item />
    }
  `,
})
export class ContainerComponent {
  items: Item[] = [];

  // Signal-based queries
  searchInput = viewChild<ElementRef>("searchInput");
  itemComponents = viewChildren(ItemComponent);
  projectedContent = contentChild(HeaderDirective);

  focusSearch() {
    this.searchInput()?.nativeElement.focus();
  }
}
```

### When to Use Signals vs RxJS

| Use Case                | Signals         | RxJS                             |
| ----------------------- | --------------- | -------------------------------- |
| Local component state   | ✅ Preferred    | Overkill                         |
| Derived/computed values | ✅ `computed()` | `combineLatest` works            |
| Side effects            | ✅ `effect()`   | `tap` operator                   |
| HTTP requests           | ❌              | ✅ HttpClient returns Observable |
| Event streams           | ❌              | ✅ `fromEvent`, operators        |
| Complex async flows     | ❌              | ✅ `switchMap`, `mergeMap`       |

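In practice the two columns meet at the component boundary: RxJS does the async work, the template consumes a signal. A small sketch, assuming a `/api/users` endpoint:

```typescript
import { Component, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { toSignal } from "@angular/core/rxjs-interop";

@Component({
  selector: "app-user-list",
  standalone: true,
  template: `
    @for (user of users(); track user.id) {
      <p>{{ user.name }}</p>
    }
  `,
})
export class UserListComponent {
  private http = inject(HttpClient);

  // The Observable handles the request; the template only ever sees a signal
  users = toSignal(
    this.http.get<{ id: string; name: string }[]>("/api/users"),
    { initialValue: [] },
  );
}
```
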
---

## 2. Standalone Components

Standalone components are self-contained and don't require NgModule declarations.

### Creating Standalone Components

```typescript
import { Component } from "@angular/core";
import { CommonModule } from "@angular/common";
import { RouterLink } from "@angular/router";

@Component({
  selector: "app-header",
  standalone: true,
  imports: [CommonModule, RouterLink], // Direct imports
  template: `
    <header>
      <a routerLink="/">Home</a>
      <a routerLink="/about">About</a>
    </header>
  `,
})
export class HeaderComponent {}
```

### Bootstrapping Without NgModule

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideRouter } from "@angular/router";
import { provideHttpClient } from "@angular/common/http";
import { AppComponent } from "./app/app.component";
import { routes } from "./app/app.routes";

bootstrapApplication(AppComponent, {
  providers: [provideRouter(routes), provideHttpClient()],
});
```

### Lazy Loading Standalone Components

```typescript
// app.routes.ts
import { Routes } from "@angular/router";

export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () =>
      import("./dashboard/dashboard.component").then(
        (m) => m.DashboardComponent,
      ),
  },
  {
    path: "admin",
    loadChildren: () =>
      import("./admin/admin.routes").then((m) => m.ADMIN_ROUTES),
  },
];
```

---

## 3. Zoneless Angular

Zoneless applications don't use zone.js, improving performance and debugging.

### Enabling Zoneless Mode

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideZonelessChangeDetection } from "@angular/core";
import { AppComponent } from "./app/app.component";

bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});
```

### Zoneless Component Patterns

```typescript
import { Component, signal, ChangeDetectionStrategy } from "@angular/core";

@Component({
  selector: "app-counter",
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <div>Count: {{ count() }}</div>
    <button (click)="increment()">+</button>
  `,
})
export class CounterComponent {
  count = signal(0);

  increment() {
    this.count.update((v) => v + 1);
    // No zone.js needed - the Signal write triggers change detection
  }
}
```

### Key Zoneless Benefits

- **Performance**: No zone.js patches on async APIs
- **Debugging**: Clean stack traces without zone wrappers
- **Bundle size**: Smaller without zone.js (~15KB savings)
- **Interoperability**: Better with Web Components and micro-frontends

---

## 4. Server-Side Rendering & Hydration

### SSR Setup with Angular CLI

```bash
ng add @angular/ssr
```

### Hydration Configuration

```typescript
// app.config.ts
import { ApplicationConfig } from "@angular/core";
import {
  provideClientHydration,
  withEventReplay,
} from "@angular/platform-browser";

export const appConfig: ApplicationConfig = {
  providers: [provideClientHydration(withEventReplay())],
};
```

### Incremental Hydration (v20+)

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-page",
  standalone: true,
  template: `
    <app-hero />

    @defer (hydrate on viewport) {
      <app-comments />
    }

    @defer (hydrate on interaction) {
      <app-chat-widget />
    }
  `,
})
export class PageComponent {}
```

### Hydration Triggers

| Trigger          | When to Use                             |
| ---------------- | --------------------------------------- |
| `on idle`        | Low-priority, hydrate when browser idle |
| `on viewport`    | Hydrate when element enters viewport    |
| `on interaction` | Hydrate on first user interaction       |
| `on hover`       | Hydrate when user hovers                |
| `on timer(ms)`   | Hydrate after specified delay           |

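Combining the triggers above with the earlier `@defer` example, a sketch of idle- and timer-based hydration (the component selectors are illustrative, and incremental hydration is assumed to be enabled on the hydration provider):

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-article",
  standalone: true,
  template: `
    @defer (hydrate on idle) {
      <app-related-articles />
    }

    @defer (hydrate on timer(3s)) {
      <app-newsletter-banner />
    }
  `,
})
export class ArticleComponent {}
```
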
---

## 5. Modern Routing Patterns

### Functional Route Guards

```typescript
// auth.guard.ts
import { inject } from "@angular/core";
import { Router, CanActivateFn, Routes } from "@angular/router";
import { AuthService } from "./auth.service";

export const authGuard: CanActivateFn = (route, state) => {
  const auth = inject(AuthService);
  const router = inject(Router);

  if (auth.isAuthenticated()) {
    return true;
  }

  return router.createUrlTree(["/login"], {
    queryParams: { returnUrl: state.url },
  });
};

// Usage in routes
export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () => import("./dashboard.component"),
    canActivate: [authGuard],
  },
];
```

### Route-Level Data Resolvers

```typescript
import { inject } from '@angular/core';
import { ActivatedRoute, ResolveFn } from '@angular/router';
import { toSignal } from '@angular/core/rxjs-interop';
import { map } from 'rxjs';
import { UserService } from './user.service';
import { User } from './user.model';

export const userResolver: ResolveFn<User> = (route) => {
  const userService = inject(UserService);
  return userService.getUser(route.paramMap.get('id')!);
};

// In routes
{
  path: 'user/:id',
  loadComponent: () => import('./user.component'),
  resolve: { user: userResolver }
}

// In component
export class UserComponent {
  private route = inject(ActivatedRoute);
  user = toSignal(this.route.data.pipe(map(d => d['user'])));
}
```

---
## 6. Dependency Injection Patterns

### Modern inject() Function

```typescript
import { Component, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { toSignal } from '@angular/core/rxjs-interop';
import { UserService } from './user.service';

@Component({...})
export class UserComponent {
  // Modern inject() - no constructor needed
  private http = inject(HttpClient);
  private userService = inject(UserService);

  // Works in any injection context
  users = toSignal(this.userService.getUsers());
}
```

### Injection Tokens for Configuration

```typescript
import { InjectionToken, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

// Define token
export const API_BASE_URL = new InjectionToken<string>("API_BASE_URL");

// Provide in config
bootstrapApplication(AppComponent, {
  providers: [{ provide: API_BASE_URL, useValue: "https://api.example.com" }],
});

// Inject in service
@Injectable({ providedIn: "root" })
export class ApiService {
  private http = inject(HttpClient);
  private baseUrl = inject(API_BASE_URL);

  get(endpoint: string) {
    return this.http.get(`${this.baseUrl}/${endpoint}`);
  }
}
```

---
## 7. Component Composition & Reusability

### Content Projection (Slots)

```typescript
@Component({
  selector: 'app-card',
  template: `
    <div class="card">
      <div class="header">
        <!-- Select by attribute -->
        <ng-content select="[card-header]"></ng-content>
      </div>
      <div class="body">
        <!-- Default slot -->
        <ng-content></ng-content>
      </div>
    </div>
  `
})
export class CardComponent {}

// Usage
<app-card>
  <h3 card-header>Title</h3>
  <p>Body content</p>
</app-card>
```

### Host Directives (Composition)

```typescript
// Reusable behaviors without inheritance
@Directive({
  standalone: true,
  selector: '[appTooltip]',
  inputs: ['tooltip'] // Signal input alias
})
export class TooltipDirective { ... }

@Component({
  selector: 'app-button',
  standalone: true,
  hostDirectives: [
    {
      directive: TooltipDirective,
      inputs: ['tooltip: title'] // Map input
    }
  ],
  template: `<ng-content />`
})
export class ButtonComponent {}
```

---
## 8. State Management Patterns

### Signal-Based State Service

```typescript
import { Injectable, signal, computed } from "@angular/core";

interface AppState {
  user: User | null;
  theme: "light" | "dark";
  notifications: Notification[];
}

@Injectable({ providedIn: "root" })
export class StateService {
  // Private writable signals
  private _user = signal<User | null>(null);
  private _theme = signal<"light" | "dark">("light");
  private _notifications = signal<Notification[]>([]);

  // Public read-only computed
  readonly user = computed(() => this._user());
  readonly theme = computed(() => this._theme());
  readonly notifications = computed(() => this._notifications());
  readonly unreadCount = computed(
    () => this._notifications().filter((n) => !n.read).length,
  );

  // Actions
  setUser(user: User | null) {
    this._user.set(user);
  }

  toggleTheme() {
    this._theme.update((t) => (t === "light" ? "dark" : "light"));
  }

  addNotification(notification: Notification) {
    this._notifications.update((n) => [...n, notification]);
  }
}
```

### Component Store Pattern with Signals

```typescript
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

@Injectable()
export class ProductStore {
  private http = inject(HttpClient);

  // State
  private _products = signal<Product[]>([]);
  private _loading = signal(false);
  private _filter = signal("");

  // Selectors
  readonly products = computed(() => this._products());
  readonly loading = computed(() => this._loading());
  readonly filteredProducts = computed(() => {
    const filter = this._filter().toLowerCase();
    return this._products().filter((p) =>
      p.name.toLowerCase().includes(filter),
    );
  });

  // Actions
  loadProducts() {
    this._loading.set(true);
    this.http.get<Product[]>("/api/products").subscribe({
      next: (products) => {
        this._products.set(products);
        this._loading.set(false);
      },
      error: () => this._loading.set(false),
    });
  }

  setFilter(filter: string) {
    this._filter.set(filter);
  }
}
```

---
## 9. Forms with Signals (Coming in v22+)

### Current Reactive Forms

```typescript
import { Component, inject } from "@angular/core";
import { FormBuilder, Validators, ReactiveFormsModule } from "@angular/forms";

@Component({
  selector: "app-user-form",
  standalone: true,
  imports: [ReactiveFormsModule],
  template: `
    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <input formControlName="name" placeholder="Name" />
      <input formControlName="email" type="email" placeholder="Email" />
      <button [disabled]="form.invalid">Submit</button>
    </form>
  `,
})
export class UserFormComponent {
  private fb = inject(FormBuilder);

  form = this.fb.group({
    name: ["", Validators.required],
    email: ["", [Validators.required, Validators.email]],
  });

  onSubmit() {
    if (this.form.valid) {
      console.log(this.form.value);
    }
  }
}
```

### Signal-Aware Form Patterns (Preview)

```typescript
// Future Signal Forms API (experimental)
import { Component, signal, computed } from '@angular/core';

@Component({...})
export class SignalFormComponent {
  name = signal('');
  email = signal('');

  // Computed validation
  isValid = computed(() =>
    this.name().length > 0 &&
    this.email().includes('@')
  );

  submit() {
    if (this.isValid()) {
      console.log({ name: this.name(), email: this.email() });
    }
  }
}
```

---
## 10. Performance Optimization

### Change Detection Strategies

```typescript
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  // Only checks when:
  // 1. Input signal/reference changes
  // 2. Event handler runs
  // 3. Async pipe emits
  // 4. Signal value changes
})
```

### Defer Blocks for Lazy Loading

```typescript
@Component({
  template: `
    <!-- Immediate loading -->
    <app-header />

    <!-- Lazy load when visible -->
    @defer (on viewport) {
      <app-heavy-chart />
    } @placeholder {
      <div class="skeleton" />
    } @loading (minimum 200ms) {
      <app-spinner />
    } @error {
      <p>Failed to load chart</p>
    }
  `
})
```

### NgOptimizedImage

```typescript
import { NgOptimizedImage } from '@angular/common';

@Component({
  imports: [NgOptimizedImage],
  template: `
    <img
      ngSrc="hero.jpg"
      width="800"
      height="600"
      priority
    />

    <img
      ngSrc="thumbnail.jpg"
      width="200"
      height="150"
      loading="lazy"
      placeholder
    />
  `
})
```

---
## 11. Testing Modern Angular

### Testing Signal Components

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { CounterComponent } from "./counter.component";

describe("CounterComponent", () => {
  let component: CounterComponent;
  let fixture: ComponentFixture<CounterComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [CounterComponent], // Standalone import
    }).compileComponents();

    fixture = TestBed.createComponent(CounterComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it("should increment count", () => {
    expect(component.count()).toBe(0);

    component.increment();

    expect(component.count()).toBe(1);
  });

  it("should update DOM on signal change", () => {
    component.count.set(5);
    fixture.detectChanges();

    const el = fixture.nativeElement.querySelector(".count");
    expect(el.textContent).toContain("5");
  });
});
```

### Testing with Signal Inputs

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { ComponentRef } from "@angular/core";
import { UserCardComponent } from "./user-card.component";

describe("UserCardComponent", () => {
  let fixture: ComponentFixture<UserCardComponent>;
  let componentRef: ComponentRef<UserCardComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [UserCardComponent],
    }).compileComponents();

    fixture = TestBed.createComponent(UserCardComponent);
    componentRef = fixture.componentRef;

    // Set signal inputs via setInput
    componentRef.setInput("id", "123");
    componentRef.setInput("name", "John Doe");

    fixture.detectChanges();
  });

  it("should display user name", () => {
    const el = fixture.nativeElement.querySelector("h3");
    expect(el.textContent).toContain("John Doe");
  });
});
```

---
## Best Practices Summary

| Pattern              | ✅ Do                          | ❌ Don't                        |
| -------------------- | ------------------------------ | ------------------------------- |
| **State**            | Use Signals for local state    | Overuse RxJS for simple state   |
| **Components**       | Standalone with direct imports | Bloated SharedModules           |
| **Change Detection** | OnPush + Signals               | Default CD everywhere           |
| **Lazy Loading**     | `@defer` and `loadComponent`   | Eager load everything           |
| **DI**               | `inject()` function            | Constructor injection (verbose) |
| **Inputs**           | `input()` signal function      | `@Input()` decorator (legacy)   |
| **Zoneless**         | Enable for new projects        | Force on legacy without testing |

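For the **Inputs** row, a minimal sketch of the signal-based `input()` API:

```typescript
import { Component, input } from "@angular/core";

@Component({
  selector: "app-badge",
  standalone: true,
  template: `<span>{{ label() }} ({{ count() }})</span>`,
})
export class BadgeComponent {
  label = input.required<string>(); // required signal input
  count = input(0); // optional signal input with a default value
}
```
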
---

## Resources

- [Angular.dev Documentation](https://angular.dev)
- [Angular Signals Guide](https://angular.dev/guide/signals)
- [Angular SSR Guide](https://angular.dev/guide/ssr)
- [Angular Update Guide](https://angular.dev/update-guide)
- [Angular Blog](https://blog.angular.dev)

---
## Common Troubleshooting

| Issue                          | Solution                                            |
| ------------------------------ | --------------------------------------------------- |
| Signal not updating UI         | Ensure `OnPush` + call signal as function `count()` |
| Hydration mismatch             | Check server/client content consistency             |
| Circular dependency            | Use `inject()` with `forwardRef`                    |
| Zoneless not detecting changes | Trigger via signal updates, not mutations           |
| SSR fetch fails                | Use `TransferState` or `withFetch()`                |

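For the circular-dependency row, a minimal sketch of the `inject()` + `forwardRef` pattern. It resolves declaration-order cycles (referencing a class defined later in the same file); a true runtime instantiation cycle still needs restructuring:

```typescript
import { Injectable, inject, forwardRef } from "@angular/core";

// AService is declared first, so it references BService via forwardRef.
@Injectable({ providedIn: "root" })
export class AService {
  private b = inject(forwardRef(() => BService));

  ping() {
    return this.b.pong();
  }
}

@Injectable({ providedIn: "root" })
export class BService {
  pong() {
    return "pong";
  }
}
```
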
14 skills/angular/metadata.json Normal file
@@ -0,0 +1,14 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Comprehensive guide to modern Angular development (v20+) designed for AI agents and LLMs. Covers Signals, Standalone Components, Zoneless applications, SSR/Hydration, reactive patterns, routing, dependency injection, and modern forms. Emphasizes component-driven architecture with practical examples and migration strategies for modernizing existing codebases.",
  "references": [
    "https://angular.dev",
    "https://angular.dev/guide/signals",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://angular.dev/guide/standalone-components",
    "https://angular.dev/guide/defer"
  ]
}
80 skills/antigravity-workflows/SKILL.md Normal file
@@ -0,0 +1,80 @@
---
name: antigravity-workflows
description: "Orchestrate multiple Antigravity skills through guided workflows for SaaS MVP delivery, security audits, AI agent builds, and browser QA."
source: self
risk: none
---

# Antigravity Workflows

Use this skill to turn a complex objective into a guided sequence of skill invocations.

## When to Use This Skill

Use this skill when:

- The user wants to combine several skills without manually selecting each one.
- The goal is multi-phase (for example: plan, build, test, ship).
- The user asks for best-practice execution for common scenarios like:
  - Shipping a SaaS MVP
  - Running a web security audit
  - Building an AI agent system
  - Implementing browser automation and E2E QA

## Workflow Source of Truth

Read workflows in this order:

1. `docs/WORKFLOWS.md` for human-readable playbooks.
2. `data/workflows.json` for machine-readable workflow metadata.

## How to Run This Skill

1. Identify the user's concrete outcome.
2. Propose the 1-2 best-matching workflows.
3. Ask the user to choose one.
4. Execute step-by-step:
   - Announce the current step and the expected artifact.
   - Invoke the recommended skills for that step.
   - Verify completion criteria before moving to the next step.
5. At the end, provide:
   - Completed artifacts
   - Validation evidence
   - Remaining risks and next actions

## Default Workflow Routing

- Product delivery request -> `ship-saas-mvp`
- Security review request -> `security-audit-web-app`
- Agent/LLM product request -> `build-ai-agent-system`
- E2E/browser testing request -> `qa-browser-automation`

## Copy-Paste Prompts

```text
Use @antigravity-workflows to run the "Ship a SaaS MVP" workflow for my project idea.
```

```text
Use @antigravity-workflows and execute a full "Security Audit for a Web App" workflow.
```

```text
Use @antigravity-workflows to guide me through "Build an AI Agent System" with checkpoints.
```

```text
Use @antigravity-workflows to execute the "QA and Browser Automation" workflow and stabilize flaky tests.
```

## Limitations

- This skill orchestrates; it does not replace specialized skills.
- It depends on the local availability of referenced skills.
- It does not guarantee success without environment access, credentials, or required infrastructure.
- For stack-specific browser automation in Go, `go-playwright` may require the corresponding skill to be present in your local skills repository.

## Related Skills

- `concise-planning`
- `brainstorming`
- `workflow-automation`
- `verification-before-completion`
@@ -0,0 +1,36 @@

# Antigravity Workflows Implementation Playbook

This document explains how an agent should execute workflow-based orchestration.

## Execution Contract

For every workflow:

1. Confirm objective and scope.
2. Select the best-matching workflow.
3. Execute workflow steps in order.
4. Produce one concrete artifact per step.
5. Validate before continuing.

## Step Artifact Examples

- Plan step -> scope document or milestone checklist.
- Build step -> code changes and implementation notes.
- Test step -> test results and failure triage.
- Release step -> rollout checklist and risk log.

## Safety Guardrails

- Never run destructive actions without explicit user approval.
- If a required skill is missing, state the gap and fall back to the closest available skill.
- When security testing is involved, ensure authorization is explicit.

## Suggested Completion Format

At workflow completion, return:

1. Completed steps
2. Artifacts produced
3. Validation evidence
4. Open risks
5. Suggested next action
@@ -186,7 +186,7 @@ class CompetitorAnalyzer:

     def _analyze_title(self, title: str) -> Dict[str, Any]:
         """Analyze title structure and keyword usage."""
-        parts = re.split(r'[-:|]', title)
+        parts = re.split(r'[-' + r':|]', title)

         return {
             'title': title,
171 skills/asana-automation/SKILL.md Normal file
@@ -0,0 +1,171 @@
---
name: asana-automation
description: "Automate Asana tasks via Rube MCP (Composio): tasks, projects, sections, teams, workspaces. Always search tools first for current schemas."
requires:
  mcp: [rube]
---

# Asana Automation via Rube MCP

Automate Asana operations through Composio's Asana toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Asana connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `asana`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `asana`
3. If the connection is not ACTIVE, follow the returned auth link to complete Asana OAuth
4. Confirm the connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Manage Tasks

**When to use**: User wants to create, search, list, or organize tasks

**Tool sequence**:
1. `ASANA_GET_MULTIPLE_WORKSPACES` - Get workspace ID [Prerequisite]
2. `ASANA_SEARCH_TASKS_IN_WORKSPACE` - Search tasks [Optional]
3. `ASANA_GET_TASKS_FROM_A_PROJECT` - List project tasks [Optional]
4. `ASANA_CREATE_A_TASK` - Create a new task [Optional]
5. `ASANA_GET_A_TASK` - Get task details [Optional]
6. `ASANA_CREATE_SUBTASK` - Create a subtask [Optional]
7. `ASANA_GET_TASK_SUBTASKS` - List subtasks [Optional]

**Key parameters**:
- `workspace`: Workspace GID (required for search/creation)
- `projects`: Array of project GIDs to add the task to
- `name`: Task name
- `notes`: Task description
- `assignee`: Assignee (user GID or email)
- `due_on`: Due date (YYYY-MM-DD)

**Pitfalls**:
- Workspace GID is required for most operations; get it first
- Task GIDs are returned as strings, not integers
- Search is workspace-scoped, not project-scoped

### 2. Manage Projects and Sections

**When to use**: User wants to create projects, manage sections, or organize tasks

**Tool sequence**:
1. `ASANA_GET_WORKSPACE_PROJECTS` - List workspace projects [Optional]
2. `ASANA_GET_A_PROJECT` - Get project details [Optional]
3. `ASANA_CREATE_A_PROJECT` - Create a new project [Optional]
4. `ASANA_GET_SECTIONS_IN_PROJECT` - List sections [Optional]
5. `ASANA_CREATE_SECTION_IN_PROJECT` - Create a new section [Optional]
6. `ASANA_ADD_TASK_TO_SECTION` - Move task to section [Optional]
7. `ASANA_GET_TASKS_FROM_A_SECTION` - List tasks in section [Optional]

**Key parameters**:
- `project_gid`: Project GID
- `name`: Project or section name
- `workspace`: Workspace GID for creation
- `task`: Task GID for section assignment
- `section`: Section GID

**Pitfalls**:
- Projects belong to workspaces; the workspace GID is needed for creation
- Sections are ordered within a project
- DUPLICATE_PROJECT creates a copy with optional task inclusion

### 3. Manage Teams and Users

**When to use**: User wants to list teams, team members, or workspace users

**Tool sequence**:
1. `ASANA_GET_TEAMS_IN_WORKSPACE` - List workspace teams [Optional]
2. `ASANA_GET_USERS_FOR_TEAM` - List team members [Optional]
3. `ASANA_GET_USERS_FOR_WORKSPACE` - List all workspace users [Optional]
4. `ASANA_GET_CURRENT_USER` - Get authenticated user [Optional]
5. `ASANA_GET_MULTIPLE_USERS` - Get multiple user details [Optional]

**Key parameters**:
- `workspace_gid`: Workspace GID
- `team_gid`: Team GID

**Pitfalls**:
- Users are workspace-scoped
- Team membership requires the team GID

### 4. Parallel Operations

**When to use**: User needs to perform bulk operations efficiently

**Tool sequence**:
1. `ASANA_SUBMIT_PARALLEL_REQUESTS` - Execute multiple API calls in parallel [Required]

**Key parameters**:
- `actions`: Array of action objects with method, path, and data (see the sketch after this list)

**Pitfalls**:
- Each action must be a valid Asana API call
- Failed individual requests do not roll back successful ones

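A sketch of what an `actions` payload might look like. Asana's underlying batch API names the path field `relative_path`, but the exact schema for this tool comes from `RUBE_SEARCH_TOOLS`, so verify before use (the GIDs below are placeholders):

```typescript
// Illustrative ASANA_SUBMIT_PARALLEL_REQUESTS payload; confirm the exact
// field names against the schema returned by RUBE_SEARCH_TOOLS.
const actions = [
  { method: "get", relative_path: "/tasks/1200000000000001" },
  {
    method: "post",
    relative_path: "/tasks",
    data: { name: "Follow up with QA", workspace: "1200000000000000" },
  },
];
```
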
## Common Patterns

### ID Resolution

**Workspace name -> GID**:
```
1. Call ASANA_GET_MULTIPLE_WORKSPACES
2. Find workspace by name
3. Extract gid field
```

**Project name -> GID**:
```
1. Call ASANA_GET_WORKSPACE_PROJECTS with workspace GID
2. Find project by name
3. Extract gid field
```

### Pagination

- Asana uses cursor-based pagination with the `offset` parameter
- Check for `next_page` in the response
- Pass `offset` from `next_page.offset` in the next request (see the sketch below)

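A minimal pagination loop, assuming a generic `callTool` helper that invokes an Asana list tool and returns the parsed response (both the helper and the response shape are illustrative):

```typescript
// Hypothetical helper: invokes an Asana list tool and returns parsed JSON.
declare function callTool(
  slug: string,
  params: Record<string, unknown>,
): Promise<{ data: unknown[]; next_page?: { offset: string } }>;

async function fetchAllProjectTasks(projectGid: string): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset: string | undefined;

  do {
    const page = await callTool("ASANA_GET_TASKS_FROM_A_PROJECT", {
      project_gid: projectGid,
      ...(offset ? { offset } : {}),
    });
    all.push(...page.data);
    offset = page.next_page?.offset; // undefined on the last page
  } while (offset);

  return all;
}
```
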
## Known Pitfalls

**GID Format**:
- All Asana IDs are strings (GIDs), not integers
- GIDs are globally unique identifiers

**Workspace Scoping**:
- Most operations require a workspace context
- Tasks, projects, and users are workspace-scoped

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| List workspaces | ASANA_GET_MULTIPLE_WORKSPACES | (none) |
| Search tasks | ASANA_SEARCH_TASKS_IN_WORKSPACE | workspace, text |
| Create task | ASANA_CREATE_A_TASK | workspace, name, projects |
| Get task | ASANA_GET_A_TASK | task_gid |
| Create subtask | ASANA_CREATE_SUBTASK | parent, name |
| List subtasks | ASANA_GET_TASK_SUBTASKS | task_gid |
| Project tasks | ASANA_GET_TASKS_FROM_A_PROJECT | project_gid |
| List projects | ASANA_GET_WORKSPACE_PROJECTS | workspace |
| Create project | ASANA_CREATE_A_PROJECT | workspace, name |
| Get project | ASANA_GET_A_PROJECT | project_gid |
| Duplicate project | ASANA_DUPLICATE_PROJECT | project_gid |
| List sections | ASANA_GET_SECTIONS_IN_PROJECT | project_gid |
| Create section | ASANA_CREATE_SECTION_IN_PROJECT | project_gid, name |
| Add to section | ASANA_ADD_TASK_TO_SECTION | section, task |
| Section tasks | ASANA_GET_TASKS_FROM_A_SECTION | section_gid |
| List teams | ASANA_GET_TEAMS_IN_WORKSPACE | workspace_gid |
| Team members | ASANA_GET_USERS_FOR_TEAM | team_gid |
| Workspace users | ASANA_GET_USERS_FOR_WORKSPACE | workspace_gid |
| Current user | ASANA_GET_CURRENT_USER | (none) |
| Parallel requests | ASANA_SUBMIT_PARALLEL_REQUESTS | actions |
137 skills/audio-transcriber/CHANGELOG.md Normal file
@@ -0,0 +1,137 @@
# Changelog - audio-transcriber

All notable changes to the audio-transcriber skill will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

---

## [1.1.0] - 2026-02-03

### ✨ Added

- **Intelligent Prompt Workflow** (Step 3b) - Complete integration with the prompt-engineer skill
  - **Scenario A**: User-provided prompts are automatically improved with prompt-engineer
    - Displays both the original and improved versions side-by-side
    - Single confirmation: "Usar versão melhorada? [s/n]" ("Use the improved version? [y/n]")
  - **Scenario B**: Auto-generation when no prompt is provided
    - Analyzes the transcript and suggests a document type (`ata`, `resumo`, `notas`, i.e., minutes, summary, notes)
    - Shows the suggestion and asks for confirmation
    - Generates a complete structured prompt (RISEN/RODES/STAR)
    - Shows a preview and asks for final confirmation
    - Falls back to DEFAULT_MEETING_PROMPT if declined

- **LLM Integration** - Process transcripts with Claude CLI or GitHub Copilot CLI
  - Priority: Claude > GitHub Copilot > None (transcript-only mode)
  - Step 0b: CLI detection logic documented
  - Timeout handling (5 minutes default)
  - Graceful fallback if CLI unavailable

- **Progress Indicators** - Visual feedback during long operations
  - `tqdm` progress bar for Whisper transcription segments
  - `rich` spinner for LLM processing
  - Clear status messages at each step

- **Timestamp-based File Naming** - Avoids overwriting previous transcriptions
  - Format: `transcript-YYYYMMDD-HHMMSS.md`
  - Format: `ata-YYYYMMDD-HHMMSS.md`
  - Prevents data loss from repeated runs

- **Automatic Cleanup** - Removes temporary files after processing
  - Deletes `metadata.json` and `transcription.json` automatically
  - `--keep-temp` flag to preserve them if needed
  - Clean output directory

- **Rich Terminal UI** - Beautiful output with the `rich` library
  - Formatted panels for prompt previews
  - Color-coded status messages (green=success, yellow=warning, red=error)
  - Spinner animations for long-running tasks

- **Dual Output Support** - Generates both a transcript and processed minutes
  - `transcript-*.md` - Raw transcription with timestamps
  - `ata-*.md` - Intelligent summary/meeting minutes (if LLM available)
  - User can decline LLM processing to get transcript-only output

### 🔧 Changed

- **SKILL.md** - Major documentation updates
  - Added Step 0b (CLI Detection)
  - Updated Step 2 (Progress Indicators)
  - Added Step 3b (Intelligent Prompt Workflow, 150+ lines)
  - Updated version to 1.1.0
  - Added detailed workflow diagrams for both scenarios

- **install-requirements.sh** - Added UI libraries
  - Now installs the `tqdm` and `rich` packages
  - Graceful fallback if installation fails
  - Updated success messages

- **Python Implementation** - Complete refactor
  - Created `scripts/transcribe.py` (516 lines)
  - Functions: `detect_cli_tool()`, `invoke_prompt_engineer()`, `handle_prompt_workflow()`, `process_with_llm()`, `transcribe_audio()`, `save_outputs()`, `cleanup_temp_files()`
  - Command-line arguments: `--prompt`, `--model`, `--output-dir`, `--keep-temp`
  - Auto-installs `rich` and `tqdm` if missing

### 🐛 Fixed

- **User prompts no longer ignored** - v1.0.0 completely ignored custom prompts
  - Now processes all prompts (custom or auto-generated) with the LLM
  - Improves simple prompts into structured frameworks

- **Temporary files cleanup** - v1.0.0 left `metadata.json` and `transcription.json` behind as clutter
  - Now automatically removed after processing
  - Clean output directory

- **File overwriting** - v1.0.0 used the same filename (e.g., `meeting.md`) every time
  - Now uses a timestamp to prevent data loss
  - Each run creates unique files

- **Missing minutes/summary** - v1.0.0 only generated the raw transcript
  - Now generates intelligent minutes/summaries using the LLM
  - Respects the user's prompt instructions

- **No progress feedback** - v1.0.0 processed silently (users couldn't tell if it froze)
  - Now shows a progress bar for transcription
  - Shows a spinner for LLM processing
  - Clear status messages throughout

### 📝 Notes

- **Backward Compatibility:** Fully compatible with v1.0.0 workflows
- **Requires:** Python 3.8+, faster-whisper OR whisper, tqdm, rich
- **Optional:** Claude CLI or GitHub Copilot CLI for intelligent processing
- **Optional:** prompt-engineer skill for automatic prompt generation

### 🔗 Related Issues

- Fixes #1: User's RISEN prompt was ignored
- Fixes #2: Temporary files (metadata.json, transcription.json) left behind as clutter
- Fixes #3: Incomplete output (raw transcript only, no minutes)
- Fixes #4: No visual progress indicator
- Fixes #5: Output filenames without timestamps

---

## [1.0.0] - 2026-02-02

### ✨ Initial Release

- Audio transcription using Faster-Whisper or OpenAI Whisper
- Automatic language detection
- Speaker diarization (basic)
- Voice Activity Detection (VAD)
- Markdown output with metadata table
- Installation script for dependencies
- Example scripts for basic transcription
- Support for multiple audio formats (MP3, WAV, M4A, OGG, FLAC, WEBM)
- FFmpeg integration for format conversion
- Zero-configuration philosophy

### 📝 Known Limitations (Fixed in v1.1.0)

- User prompts ignored (no LLM integration)
- Only raw transcript generated (no minutes/summary)
- Temporary files not cleaned up
- No progress indicators
- Files overwritten on repeated runs
340 skills/audio-transcriber/README.md Normal file
@@ -0,0 +1,340 @@
# Audio Transcriber Skill v1.1.0

Transform audio recordings into professional Markdown documentation with **intelligent meeting minutes/summaries using LLM integration** (Claude/Copilot CLI) and automatic prompt engineering.

## 🆕 What's New in v1.1.0

- **🧠 LLM Integration** - Claude CLI (primary) or GitHub Copilot CLI (fallback) for intelligent processing
- **✨ Smart Prompts** - Automatic integration with the prompt-engineer skill
  - User-provided prompts → automatically improved → user chooses version
  - No prompt → analyzes transcript → suggests format → generates structured prompt
- **📊 Progress Indicators** - Visual progress bars (tqdm) and spinners (rich)
- **📁 Timestamp Filenames** - `transcript-YYYYMMDD-HHMMSS.md` + `ata-YYYYMMDD-HHMMSS.md`
- **🧹 Auto-Cleanup** - Removes temporary `metadata.json` and `transcription.json`
- **🎨 Rich Terminal UI** - Beautiful formatted output with panels and colors

See **[CHANGELOG.md](./CHANGELOG.md)** for complete v1.1.0 details.

## 🎯 Core Features

- **📝 Rich Markdown Output** - Structured reports with metadata tables, timestamps, and formatting
- **🎙️ Speaker Diarization** - Automatically identifies and labels different speakers
- **📊 Technical Metadata** - Extracts file size, duration, language, processing time
- **📋 Intelligent Minutes/Summaries** - Generated via LLM (Claude/Copilot) with customizable prompts
- **💡 Executive Summaries** - AI-generated structured summaries with topics, decisions, action items
- **🌍 Multi-language** - Supports 99 languages with auto-detection
- **⚡ Zero Configuration** - Auto-discovers Faster-Whisper/Whisper installation
- **🔒 Privacy-First** - 100% local Whisper processing, no cloud uploads
- **🚀 Flexible Modes** - Transcript-only or intelligent processing with LLM

## 📦 Installation

### Quick Install (NPX)

```bash
npx cli-ai-skills@latest install audio-transcriber
```

This automatically:
- Downloads the skill
- Installs Python dependencies (faster-whisper, tqdm, rich)
- Installs ffmpeg (macOS via Homebrew)
- Sets up the skill globally

### Manual Installation

#### 1. Install Transcription Engine

**Recommended (fastest):**
```bash
pip install faster-whisper tqdm rich
```

**Alternative (original Whisper):**
```bash
pip install openai-whisper tqdm rich
```

#### 2. Install Audio Tools (Optional)

For format conversion support:
```bash
# macOS
brew install ffmpeg

# Linux
apt install ffmpeg
```

#### 3. Install LLM CLI (Optional - for intelligent summaries)

**Claude CLI (recommended):**
```bash
# Follow: https://docs.anthropic.com/en/docs/claude-cli
```

**GitHub Copilot CLI (alternative):**
```bash
gh extension install github/gh-copilot
```

#### 4. Install Skill

**Global installation (auto-updates with git pull):**
```bash
cd /path/to/cli-ai-skills
./scripts/install-skills.sh $(pwd)
```

**Repository only:**
```bash
# Skill is already available if you cloned the repo
```

## 🚀 Usage

### Basic Transcription

```bash
copilot> transcribe audio to markdown: meeting.mp3
```

**Output:**
- `meeting.md` - Full Markdown report with metadata, transcription, minutes, summary

### With Subtitles

```bash
copilot> convert audio file to text with subtitles: interview.wav
```

**Generates:**
- `interview.md` - Markdown report
- `interview.srt` - Subtitle file

### Batch Processing

```bash
copilot> transcreva estes áudios: recordings/*.mp3
```

**Processes all MP3 files in the directory.**

### Trigger Phrases

Activate the skill with any of these phrases:

- "transcribe audio to markdown"
- "transcreva este áudio"
- "convert audio file to text"
- "extract speech from audio"
- "áudio para texto com metadados"

## 📋 Use Cases

### 1. Team Meetings
Record standups, planning sessions, or retrospectives and automatically generate:
- Participant list
- Discussion topics with timestamps
- Decisions made
- Action items assigned

### 2. Client Calls
Transcribe client conversations with:
- Speaker identification
- Key agreements documented
- Follow-up tasks extracted

### 3. Interviews
Convert interviews to text with:
- Question/answer attribution
- Subtitle generation for video
- Searchable transcript

### 4. Lectures & Training
Document educational content with:
- Timestamped notes
- Topic breakdown
- Key concepts summary

### 5. Content Creation
Analyze podcasts, videos, YouTube content:
- Full transcription
- Chapter markers (timestamps)
- Summary for show notes

## 📊 Output Example

```markdown
# Audio Transcription Report

## 📊 Metadata

| Field | Value |
|-------|-------|
| **File Name** | team-standup.mp3 |
| **File Size** | 3.2 MB |
| **Duration** | 00:12:47 |
| **Language** | English (en) |
| **Processed Date** | 2026-02-02 14:35:21 |
| **Speakers Identified** | 5 |
| **Transcription Engine** | Faster-Whisper (model: base) |

---

## 🎙️ Full Transcription

**[00:00:12 → 00:00:45]** *Speaker 1*
Good morning everyone. Let's start with updates from the frontend team.

**[00:00:46 → 00:01:23]** *Speaker 2*
We completed the dashboard redesign and deployed to staging yesterday.

---

## 📋 Meeting Minutes

### Participants
- Speaker 1 (Meeting Lead)
- Speaker 2 (Frontend Developer)
- Speaker 3 (Backend Developer)
- Speaker 4 (Designer)
- Speaker 5 (Product Manager)

### Topics Discussed
1. **Dashboard Redesign** (00:00:46)
   - Completed and deployed to staging
   - Positive feedback from QA team

2. **API Performance Issues** (00:03:12)
   - Database query optimization needed
   - Target response time < 200ms

### Decisions Made
- ✅ Approved dashboard for production deployment
- ✅ Allocated 2 sprint points for API optimization

### Action Items
- [ ] **Deploy dashboard to production** - Assigned to: Speaker 2 - Due: 2026-02-05
- [ ] **Optimize database queries** - Assigned to: Speaker 3
- [ ] **Schedule user testing session** - Assigned to: Speaker 5

---

## 📝 Executive Summary

The team standup covered progress on the dashboard redesign, which has been successfully completed and is ready for production deployment. The frontend team received positive feedback from QA and the design aligns with user requirements.

Backend performance concerns were raised regarding API response times. The team decided to prioritize query optimization in the current sprint, with a target of sub-200ms response times.

Next steps include production deployment of the dashboard by end of week and scheduling user testing sessions to validate the new design with real users.

### Key Points
- 🔹 Dashboard redesign complete and staging-approved
- 🔹 API performance optimization prioritized
- 🔹 User testing scheduled for next week

### Next Steps
1. Production deployment (Speaker 2)
2. Database optimization (Speaker 3)
3. User testing coordination (Speaker 5)
```

## ⚙️ Configuration

No configuration needed! The skill automatically:
- Detects Faster-Whisper or Whisper installation
- Chooses the fastest available engine
- Selects an appropriate model based on file size
- Auto-detects language

## 🔧 Troubleshooting

### "No transcription tool found"
**Solution:** Install Whisper:
```bash
pip install faster-whisper
```

### "Unsupported format"
**Solution:** Install ffmpeg:
```bash
brew install ffmpeg  # macOS
apt install ffmpeg   # Linux
```

### Slow processing
**Solution:** Use a smaller Whisper model:
```bash
# Edit the skill to use the "tiny" or "base" model instead of "medium"
```

### Poor speaker identification
**Solution:**
- Ensure clear audio with minimal background noise
- Use a better microphone for recordings
- Try the "medium" or "large" Whisper model

## 🛠️ Advanced Usage

### Custom Model Selection

Edit `SKILL.md` Step 2 to change the model:
```python
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu")  # Change "base" to "small", "medium", etc.
```

### Output Language Control

Force output in a specific language:
```bash
# Edit Step 3 to set the language explicitly
```

### Batch Settings

Process specific file types only:
```bash
copilot> transcribe audio: recordings/*.wav  # Only WAV files
```

## 📚 FAQ

**Q: Does this work offline?**
A: Yes! 100% local processing; no internet required after the initial model download.

**Q: What's the difference between Whisper and Faster-Whisper?**
A: Faster-Whisper is 4-5x faster with the same quality. Prefer it whenever it's available.

**Q: Can I transcribe YouTube videos?**
A: Not directly. Use a YouTube downloader first, then transcribe the audio file. Or use the `youtube-summarizer` skill instead.

**Q: How accurate is speaker identification?**
A: Accuracy depends on audio quality. Clear recordings with distinct voices work best. The current version uses simple estimation; future versions will use advanced diarization.

**Q: What languages are supported?**
A: 99 languages including English, Portuguese, Spanish, French, German, Chinese, Japanese, Arabic, and more.

**Q: Can I edit the meeting minutes format?**
A: Yes! Edit the Markdown template in SKILL.md Step 3.

## 🔗 Related Skills

- **youtube-summarizer** - Extract and summarize YouTube video transcripts
- **prompt-engineer** - Optimize prompts for better AI summaries

## 📄 License

This skill is part of the cli-ai-skills repository.
MIT License - See repository LICENSE file.

## 🤝 Contributing

Found a bug or have a feature request?
Open an issue in the [cli-ai-skills repository](https://github.com/yourusername/cli-ai-skills).

---

**Version:** 1.1.0
**Author:** Eric Andrade
**Created:** 2026-02-02
558 skills/audio-transcriber/SKILL.md Normal file
@@ -0,0 +1,558 @@
|
||||
---
|
||||
name: audio-transcriber
|
||||
description: "Transform audio recordings into professional Markdown documentation with intelligent summaries using LLM integration"
|
||||
version: 1.2.0
|
||||
author: Eric Andrade
|
||||
created: 2025-02-01
|
||||
updated: 2026-02-04
|
||||
platforms: [github-copilot-cli, claude-code, codex]
|
||||
category: content
|
||||
tags: [audio, transcription, whisper, meeting-minutes, speech-to-text]
|
||||
risk: safe
|
||||
---
|
||||
|
||||
## Purpose
|
||||
|
||||
This skill automates audio-to-text transcription with professional Markdown output, extracting rich technical metadata (speakers, timestamps, language, file size, duration) and generating structured meeting minutes and executive summaries. It uses Faster-Whisper or Whisper with zero configuration, working universally across projects without hardcoded paths or API keys.
|
||||
|
||||
Inspired by tools like Plaud, this skill transforms raw audio recordings into actionable documentation, making it ideal for meetings, interviews, lectures, and content analysis.
|
||||
|
||||
## When to Use
|
||||
|
||||
Invoke this skill when:
|
||||
|
||||
- User needs to transcribe audio/video files to text
|
||||
- User wants meeting minutes automatically generated from recordings
|
||||
- User requires speaker identification (diarization) in conversations
|
||||
- User needs subtitles/captions (SRT, VTT formats)
|
||||
- User wants executive summaries of long audio content
|
||||
- User asks variations of "transcribe this audio", "convert audio to text", "generate meeting notes from recording"
|
||||
- User has audio files in common formats (MP3, WAV, M4A, OGG, FLAC, WEBM)
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 0: Discovery (Auto-detect Transcription Tools)
|
||||
|
||||
**Objective:** Identify available transcription engines without user configuration.
|
||||
|
||||
**Actions:**
|
||||
|
||||
Run detection commands to find installed tools:
|
||||
|
||||
```bash
|
||||
# Check for Faster-Whisper (preferred - 4-5x faster)
|
||||
if python3 -c "import faster_whisper" 2>/dev/null; then
|
||||
TRANSCRIBER="faster-whisper"
|
||||
echo "✅ Faster-Whisper detected (optimized)"
|
||||
# Fallback to original Whisper
|
||||
elif python3 -c "import whisper" 2>/dev/null; then
|
||||
TRANSCRIBER="whisper"
|
||||
echo "✅ OpenAI Whisper detected"
|
||||
else
|
||||
TRANSCRIBER="none"
|
||||
echo "⚠️ No transcription tool found"
|
||||
fi
|
||||
|
||||
# Check for ffmpeg (audio format conversion)
|
||||
if command -v ffmpeg &>/dev/null; then
|
||||
echo "✅ ffmpeg available (format conversion enabled)"
|
||||
else
|
||||
echo "ℹ️ ffmpeg not found (limited format support)"
|
||||
fi
|
||||
```
|
||||
|
||||
**If no transcriber found:**
|
||||
|
||||
Offer automatic installation using the provided script:
|
||||
|
||||
```bash
|
||||
echo "⚠️ No transcription tool found"
|
||||
echo ""
|
||||
echo "🔧 Auto-install dependencies? (Recommended)"
|
||||
read -p "Run installation script? [Y/n]: " AUTO_INSTALL
|
||||
|
||||
if [[ ! "$AUTO_INSTALL" =~ ^[Nn] ]]; then
|
||||
# Get skill directory (works for both repo and symlinked installations)
|
||||
SKILL_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
|
||||
# Run installation script
|
||||
if [[ -f "$SKILL_DIR/scripts/install-requirements.sh" ]]; then
|
||||
bash "$SKILL_DIR/scripts/install-requirements.sh"
|
||||
else
|
||||
echo "❌ Installation script not found"
|
||||
echo ""
|
||||
echo "📦 Manual installation:"
|
||||
echo " pip install faster-whisper # Recommended"
|
||||
echo " pip install openai-whisper # Alternative"
|
||||
echo " brew install ffmpeg # Optional (macOS)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Verify installation succeeded
|
||||
if python3 -c "import faster_whisper" 2>/dev/null || python3 -c "import whisper" 2>/dev/null; then
|
||||
echo "✅ Installation successful! Proceeding with transcription..."
|
||||
else
|
||||
echo "❌ Installation failed. Please install manually."
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
echo ""
|
||||
echo "📦 Manual installation required:"
|
||||
echo ""
|
||||
echo "Recommended (fastest):"
|
||||
echo " pip install faster-whisper"
|
||||
echo ""
|
||||
echo "Alternative (original):"
|
||||
echo " pip install openai-whisper"
|
||||
echo ""
|
||||
echo "Optional (format conversion):"
|
||||
echo " brew install ffmpeg # macOS"
|
||||
echo " apt install ffmpeg # Linux"
|
||||
echo ""
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
This ensures users can install dependencies with one confirmation, or opt for manual installation if preferred.
|
||||
|
||||
**If transcriber found:**
|
||||
|
||||
Proceed to Step 0b (CLI Detection).
|
||||
|
||||
|
||||
### Step 1: Validate Audio File
|
||||
|
||||
**Objective:** Verify file exists, check format, and extract metadata.
|
||||
|
||||
**Actions:**
|
||||
|
||||
1. **Accept file path or URL** from user:
|
||||
- Local file: `meeting.mp3`
|
||||
- URL: `https://example.com/audio.mp3` (download to temp directory)
|
||||
|
||||
2. **Verify file exists:**
|
||||
|
||||
```bash
|
||||
if [[ ! -f "$AUDIO_FILE" ]]; then
|
||||
echo "❌ File not found: $AUDIO_FILE"
|
||||
exit 1
|
||||
fi
|
||||
```
|
||||
|
||||
3. **Extract metadata** using ffprobe or file utilities:
|
||||
|
||||
```bash
|
||||
# Get file size
|
||||
FILE_SIZE=$(du -h "$AUDIO_FILE" | cut -f1)
|
||||
|
||||
# Get duration and format using ffprobe
|
||||
DURATION=$(ffprobe -v error -show_entries format=duration \
|
||||
-of default=noprint_wrappers=1:nokey=1 "$AUDIO_FILE" 2>/dev/null)
|
||||
FORMAT=$(ffprobe -v error -select_streams a:0 -show_entries \
|
||||
stream=codec_name -of default=noprint_wrappers=1:nokey=1 "$AUDIO_FILE" 2>/dev/null)
|
||||
|
||||
# Convert duration to HH:MM:SS
|
||||
DURATION_HMS=$(date -u -r "$DURATION" +%H:%M:%S 2>/dev/null || echo "Unknown")
|
||||
```
|
||||
|
||||
4. **Check file size** (warn if large for cloud APIs):

```bash
SIZE_MB=$(du -m "$AUDIO_FILE" | cut -f1)
if [[ $SIZE_MB -gt 25 ]]; then
    echo "⚠️ Large file ($FILE_SIZE) - processing may take several minutes"
fi
```

5. **Validate format** (supported: MP3, WAV, M4A, OGG, FLAC, WEBM, MP4):

```bash
# Lowercase via tr (portable to bash 3.2 on macOS, unlike ${EXTENSION,,})
EXTENSION=$(echo "${AUDIO_FILE##*.}" | tr '[:upper:]' '[:lower:]')
SUPPORTED_FORMATS=("mp3" "wav" "m4a" "ogg" "flac" "webm" "mp4")

if [[ ! " ${SUPPORTED_FORMATS[*]} " =~ " ${EXTENSION} " ]]; then
    echo "⚠️ Unsupported format: $EXTENSION"
    if command -v ffmpeg &>/dev/null; then
        echo "🔄 Converting to WAV..."
        ffmpeg -i "$AUDIO_FILE" -ar 16000 "${AUDIO_FILE%.*}.wav" -y
        AUDIO_FILE="${AUDIO_FILE%.*}.wav"
    else
        echo "❌ Install ffmpeg to convert formats: brew install ffmpeg"
        exit 1
    fi
fi
```

### Step 3: Generate Markdown Output

**Objective:** Create structured Markdown with metadata, transcription, meeting minutes, and summary.

**Output Template:**

```markdown
# Audio Transcription Report

## 📊 Metadata

| Field | Value |
|-------|-------|
| **File Name** | {filename} |
| **File Size** | {file_size} |
| **Duration** | {duration_hms} |
| **Language** | {language} ({language_code}) |
| **Processed Date** | {process_date} |
| **Speakers Identified** | {num_speakers} |
| **Transcription Engine** | {engine} (model: {model}) |

## 📋 Meeting Minutes

### Participants
- {speaker_1}
- {speaker_2}
- ...

### Topics Discussed
1. **{topic_1}** ({timestamp})
   - {key_point_1}
   - {key_point_2}

2. **{topic_2}** ({timestamp})
   - {key_point_1}

### Decisions Made
- ✅ {decision_1}
- ✅ {decision_2}

### Action Items
- [ ] **{action_1}** - Assigned to: {speaker} - Due: {date_if_mentioned}
- [ ] **{action_2}** - Assigned to: {speaker}

*Generated by audio-transcriber skill v1.0.0*
*Transcription engine: {engine} | Processing time: {elapsed_time}s*
```

**Implementation:**

Use Python or bash with an AI model (Claude/GPT) for intelligent summarization:

```python
def generate_meeting_minutes(segments):
    """Extract topics, decisions, and action items from the transcription."""

    # Group segments by topic (simple clustering by timestamps)
    topics = cluster_by_topic(segments)  # placeholder helper

    # Identify action items (keywords: "should", "will", "need to", "action")
    action_items = extract_action_items(segments)  # keyword matcher; see the sketch after this block

    # Identify decisions (keywords: "decided", "agreed", "approved")
    decisions = extract_decisions(segments)  # keyword matcher; see the sketch after this block

    return {
        "topics": topics,
        "decisions": decisions,
        "action_items": action_items
    }

def generate_summary(segments, max_paragraphs=5):
    """Create an executive summary using AI (Claude/GPT via API or a local model)."""

    full_text = " ".join([s["text"] for s in segments])

    # Use the Chain of Density approach (from prompt-engineer frameworks)
    summary_prompt = f"""
    Summarize the following transcription in {max_paragraphs} concise paragraphs.
    Focus on key topics, decisions, and action items.

    Transcription:
    {full_text}
    """

    # Call AI model (placeholder - user can integrate the Claude API or use a local model)
    summary = call_ai_model(summary_prompt)

    return summary
```

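The helpers `extract_action_items` and `extract_decisions` are left undefined by the skill. As a rough illustration of the keyword approach described in the comments above (the keyword lists and return shape here are assumptions, not the skill's actual implementation):

```python
ACTION_KEYWORDS = ("should", "will", "need to", "action")   # assumed keyword list
DECISION_KEYWORDS = ("decided", "agreed", "approved")       # assumed keyword list

def _match_segments(segments, keywords):
    """Return segments whose text contains any of the given keywords."""
    return [
        {"timestamp": seg["start"], "text": seg["text"]}
        for seg in segments
        if any(kw in seg["text"].lower() for kw in keywords)
    ]

def extract_action_items(segments):
    return _match_segments(segments, ACTION_KEYWORDS)

def extract_decisions(segments):
    return _match_segments(segments, DECISION_KEYWORDS)
```
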
**Output file naming:**

```bash
# v1.1.0: use a timestamp to avoid overwriting
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
TRANSCRIPT_FILE="transcript-${TIMESTAMP}.md"
ATA_FILE="ata-${TIMESTAMP}.md"

echo "$TRANSCRIPT_CONTENT" > "$TRANSCRIPT_FILE"
echo "✅ Transcript saved: $TRANSCRIPT_FILE"

if [[ -n "$ATA_CONTENT" ]]; then
    echo "$ATA_CONTENT" > "$ATA_FILE"
    echo "✅ Minutes saved: $ATA_FILE"
fi
```

#### **SCENARIO A: User Provided Custom Prompt**

**Workflow:**

1. **Display user's prompt:**
```
📝 Prompt provided by the user:
┌──────────────────────────────────┐
│ [User's prompt preview]          │
└──────────────────────────────────┘
```

2. **Automatically improve with prompt-engineer (if available):**
```bash
🔧 Improving prompt with prompt-engineer...
[Invokes: gh copilot -p "improve this prompt: {user_prompt}"]
```

3. **Show both versions:**
```
✨ Improved version:
┌──────────────────────────────────┐
│ Role: You are a documenter...    │
│ Instructions: Transform...       │
│ Steps: 1) ... 2) ...             │
│ End Goal: ...                    │
└──────────────────────────────────┘

📝 Original version:
┌──────────────────────────────────┐
│ [User's original prompt]         │
└──────────────────────────────────┘
```

4. **Ask which to use:**
```bash
💡 Use the improved version? [y/n] (default: y):
```

5. **Process with selected prompt:**
- If "y": use improved
- If "n": use original

#### **LLM Processing (Both Scenarios)**

Once the prompt is finalized:

```python
import subprocess

from rich.progress import Progress, SpinnerColumn, TextColumn

def process_with_llm(transcript, prompt, cli_tool='claude'):
    full_prompt = f"{prompt}\n\n---\n\nTranscription:\n\n{transcript}"

    with Progress(
        SpinnerColumn(),
        TextColumn("[progress.description]{task.description}"),
        transient=True
    ) as progress:
        progress.add_task(
            description=f"🤖 Processing with {cli_tool}...",
            total=None
        )

        if cli_tool == 'claude':
            result = subprocess.run(
                ['claude', '-'],
                input=full_prompt,
                capture_output=True,
                text=True,
                timeout=300  # 5 minutes
            )
        elif cli_tool == 'gh-copilot':
            result = subprocess.run(
                ['gh', 'copilot', 'suggest', '-t', 'shell', full_prompt],
                capture_output=True,
                text=True,
                timeout=300
            )

    if result.returncode == 0:
        return result.stdout.strip()
    else:
        return None
```

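For orientation, the transcribe.py script later in this diff calls this helper roughly as follows (a condensed sketch; error handling in the real script is more involved):

```python
# transcript_text comes from the transcription step, final_prompt from the
# prompt workflow; cli_tool is 'claude' or 'gh-copilot' as detected earlier.
ata_text = process_with_llm(transcript_text, final_prompt, cli_tool)
if ata_text is None:
    print("⚠️ LLM processing failed - saving the raw transcript only")
```
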
**Progress output:**
```
🤖 Processing with claude... ⠋
[After completion:]
✅ Minutes generated successfully!
```

#### **Final Output**

**Success (both files):**
```bash
💾 Saving files...

✅ Files created:
   - transcript-20260203-023045.md (raw transcript)
   - ata-20260203-023045.md (processed with LLM)

🧹 Removed temporary files: metadata.json, transcription.json

✅ Done! Total time: 3m 45s
```

**Transcript only (user declined LLM):**
```bash
💾 Saving files...

✅ File created:
   - transcript-20260203-023045.md

ℹ️ Minutes not generated (LLM processing declined by user)

🧹 Removed temporary files: metadata.json, transcription.json

✅ Done!
```

### Step 5: Display Results Summary

**Objective:** Show completion status and next steps.

**Output:**

```bash
echo ""
echo "✅ Transcription Complete!"
echo ""
echo "📊 Results:"
echo "   File: $OUTPUT_FILE"
echo "   Language: $LANGUAGE"
echo "   Duration: $DURATION_HMS"
echo "   Speakers: $NUM_SPEAKERS"
echo "   Words: $WORD_COUNT"
echo "   Processing time: ${ELAPSED_TIME}s"
echo ""
echo "📝 Generated:"
echo "   - $OUTPUT_FILE (Markdown report)"
[if alternative formats:]
echo "   - ${OUTPUT_FILE%.*}.srt (Subtitles)"
echo "   - ${OUTPUT_FILE%.*}.json (Structured data)"
echo ""
echo "🎯 Next steps:"
echo "   1. Review meeting minutes and action items"
echo "   2. Share report with participants"
echo "   3. Track action items to completion"
```

## Example Usage

### **Example 1: Basic Transcription**

**User Input:**
```bash
copilot> transcribe audio to markdown: meeting-2026-02-02.mp3
```

**Skill Output:**

```bash
✅ Faster-Whisper detected (optimized)
✅ ffmpeg available (format conversion enabled)

📂 File: meeting-2026-02-02.mp3
📊 Size: 12.3 MB
⏱️ Duration: 00:45:32

🎙️ Processing...
[████████████████████] 100%

✅ Language detected: Portuguese (pt-BR)
👥 Speakers identified: 4
📝 Generating Markdown output...

✅ Transcription Complete!

📊 Results:
   File: meeting-2026-02-02.md
   Language: pt-BR
   Duration: 00:45:32
   Speakers: 4
   Words: 6,842
   Processing time: 127s

📝 Generated:
   - meeting-2026-02-02.md (Markdown report)

🎯 Next steps:
   1. Review meeting minutes and action items
   2. Share report with participants
   3. Track action items to completion
```

### **Example 3: Batch Processing**

**User Input:**
```bash
copilot> transcribe these audio files: recordings/*.mp3
```

**Skill Output:**

```bash
📦 Batch mode: 5 files found
   1. team-standup.mp3
   2. client-call.mp3
   3. brainstorm-session.mp3
   4. product-demo.mp3
   5. retrospective.mp3

🎙️ Processing batch...

[1/5] team-standup.mp3 ✅ (2m 34s)
[2/5] client-call.mp3 ✅ (15m 12s)
[3/5] brainstorm-session.mp3 ✅ (8m 47s)
[4/5] product-demo.mp3 ✅ (22m 03s)
[5/5] retrospective.mp3 ✅ (11m 28s)

✅ Batch Complete!
📝 Generated 5 Markdown reports
⏱️ Total processing time: 1h 00m 04s
```

### **Example 5: Large File Warning**

**User Input:**
```bash
copilot> transcribe audio to markdown: conference-keynote.mp3
```

**Skill Output:**

```bash
✅ Faster-Whisper detected (optimized)

📂 File: conference-keynote.mp3
📊 Size: 87.2 MB
⏱️ Duration: 02:15:47
⚠️ Large file (87.2 MB) - processing may take several minutes

Continue? [Y/n]:
```

**User:** `Y`

```bash
🎙️ Processing... (this may take 10-15 minutes)
[████░░░░░░░░░░░░░░░░] 20% - Estimated time remaining: 12m
```

This skill is **platform-agnostic** and works in any terminal context where GitHub Copilot CLI is available. It does not depend on specific project configurations or external APIs, following the zero-configuration philosophy.
250
skills/audio-transcriber/examples/basic-transcription.sh
Executable file
@@ -0,0 +1,250 @@
#!/usr/bin/env bash

# Basic Audio Transcription Example
# Demonstrates how to use the audio-transcriber skill manually

set -euo pipefail

# Configuration
AUDIO_FILE="${1:-}"
MODEL="${MODEL:-base}"                      # Options: tiny, base, small, medium, large
OUTPUT_FORMAT="${OUTPUT_FORMAT:-markdown}"  # Options: markdown, txt, srt, vtt, json

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Helper functions
error() {
    echo -e "${RED}❌ Error: $1${NC}" >&2
    exit 1
}

success() {
    echo -e "${GREEN}✅ $1${NC}"
}

info() {
    echo -e "${BLUE}ℹ️ $1${NC}"
}

warn() {
    echo -e "${YELLOW}⚠️ $1${NC}"
}

# Check if audio file is provided
if [[ -z "$AUDIO_FILE" ]]; then
    error "Usage: $0 <audio_file>"
fi

# Verify file exists
if [[ ! -f "$AUDIO_FILE" ]]; then
    error "File not found: $AUDIO_FILE"
fi

# Step 0: Discovery - Check for transcription tools
info "Step 0: Discovering transcription tools..."

TRANSCRIBER=""
if python3 -c "import faster_whisper" 2>/dev/null; then
    TRANSCRIBER="faster-whisper"
    success "Faster-Whisper detected (optimized)"
elif python3 -c "import whisper" 2>/dev/null; then
    TRANSCRIBER="whisper"
    success "OpenAI Whisper detected"
else
    error "No transcription tool found. Install with: pip install faster-whisper"
fi

# Check for ffmpeg
if command -v ffmpeg &>/dev/null; then
    success "ffmpeg available (format conversion enabled)"
else
    warn "ffmpeg not found (limited format support)"
fi

# Step 1: Extract metadata
info "Step 1: Extracting audio metadata..."

FILE_SIZE=$(du -h "$AUDIO_FILE" | cut -f1)
info "File size: $FILE_SIZE"

# Get duration if ffprobe is available
if command -v ffprobe &>/dev/null; then
    DURATION=$(ffprobe -v error -show_entries format=duration \
        -of default=noprint_wrappers=1:nokey=1 "$AUDIO_FILE" 2>/dev/null || echo "0")

    # Convert to HH:MM:SS
    if command -v date &>/dev/null; then
        if [[ "$OSTYPE" == "darwin"* ]]; then
            # macOS
            DURATION_HMS=$(date -u -r "${DURATION%.*}" +%H:%M:%S 2>/dev/null || echo "Unknown")
        else
            # Linux
            DURATION_HMS=$(date -u -d @"${DURATION%.*}" +%H:%M:%S 2>/dev/null || echo "Unknown")
        fi
    else
        DURATION_HMS="Unknown"
    fi

    info "Duration: $DURATION_HMS"
else
    warn "ffprobe not found - cannot extract duration"
    DURATION="0"
    DURATION_HMS="Unknown"
fi

# Check file size warning
SIZE_MB=$(du -m "$AUDIO_FILE" | cut -f1)
if [[ $SIZE_MB -gt 25 ]]; then
    warn "Large file ($FILE_SIZE) - processing may take several minutes"
    read -p "Continue? [Y/n]: " CONTINUE
    if [[ "$CONTINUE" =~ ^[Nn] ]]; then
        info "Transcription cancelled"
        exit 0
    fi
fi

# Step 2: Transcribe using Python
info "Step 2: Transcribing audio..."

OUTPUT_FILE="${AUDIO_FILE%.*}.md"
TEMP_JSON="/tmp/transcription_$$.json"

python3 << EOF
import sys
import json
from datetime import datetime

try:
    if "$TRANSCRIBER" == "faster-whisper":
        from faster_whisper import WhisperModel
        model = WhisperModel("$MODEL", device="cpu", compute_type="int8")
        segments, info = model.transcribe("$AUDIO_FILE", language=None, vad_filter=True)

        data = {
            "language": info.language,
            "language_probability": round(info.language_probability, 2),
            "duration": info.duration,
            "segments": []
        }

        for segment in segments:
            data["segments"].append({
                "start": round(segment.start, 2),
                "end": round(segment.end, 2),
                "text": segment.text.strip()
            })
    else:
        import whisper
        model = whisper.load_model("$MODEL")
        result = model.transcribe("$AUDIO_FILE")

        data = {
            "language": result["language"],
            "duration": result["segments"][-1]["end"] if result["segments"] else 0,
            "segments": result["segments"]
        }

    with open("$TEMP_JSON", "w") as f:
        json.dump(data, f)

    print(f"✅ Language detected: {data['language']}")
    print(f"📝 Transcribed {len(data['segments'])} segments")

except Exception as e:
    print(f"❌ Error: {e}", file=sys.stderr)
    sys.exit(1)
EOF

# Check if transcription succeeded
if [[ ! -f "$TEMP_JSON" ]]; then
    error "Transcription failed"
fi

# Step 3: Generate Markdown output
info "Step 3: Generating Markdown report..."

# Note: this heredoc is intentionally unquoted so the shell expands
# ${TEMP_JSON}, ${AUDIO_FILE}, etc. before Python runs (a quoted 'EOF'
# would pass them through literally and break the script).
python3 << EOF
import json
import sys
from datetime import datetime

# Load transcription data
with open("${TEMP_JSON}") as f:
    data = json.load(f)

# Prepare metadata
filename = "${AUDIO_FILE}".split("/")[-1]
file_size = "${FILE_SIZE}"
duration_hms = "${DURATION_HMS}"
language = data["language"]
process_date = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
num_segments = len(data["segments"])

# Generate Markdown
markdown = f"""# Audio Transcription Report

## 📊 Metadata

| Field | Value |
|-------|-------|
| **File Name** | {filename} |
| **File Size** | {file_size} |
| **Duration** | {duration_hms} |
| **Language** | {language.upper()} |
| **Processed Date** | {process_date} |
| **Segments** | {num_segments} |
| **Transcription Engine** | ${TRANSCRIBER} (model: ${MODEL}) |

---

## 🎙️ Full Transcription

"""

# Add transcription with timestamps
for seg in data["segments"]:
    start_time = f"{int(seg['start'] // 60):02d}:{int(seg['start'] % 60):02d}"
    end_time = f"{int(seg['end'] // 60):02d}:{int(seg['end'] % 60):02d}"
    markdown += f"**[{start_time} → {end_time}]**  \n{seg['text']}\n\n"

markdown += """---

## 📝 Summary

*Automatic summary generation requires AI integration (Claude/GPT).*
*For now, review the full transcription above.*

---

*Generated by audio-transcriber skill example script*
*Transcription engine: ${TRANSCRIBER} | Model: ${MODEL}*
"""

# Write to file
with open("${OUTPUT_FILE}", "w") as f:
    f.write(markdown)

print(f"✅ Markdown report saved: ${OUTPUT_FILE}")
EOF

# Clean up
rm -f "$TEMP_JSON"

# Step 4: Display summary
success "Transcription complete!"
echo ""
echo "📊 Results:"
echo "   Output file: $OUTPUT_FILE"
echo "   Transcription engine: $TRANSCRIBER"
echo "   Model: $MODEL"
echo ""
info "Next steps:"
echo "   1. Review the transcription: cat $OUTPUT_FILE"
echo "   2. Edit if needed: vim $OUTPUT_FILE"
echo "   3. Share with team or archive"
352
skills/audio-transcriber/references/tools-comparison.md
Normal file
@@ -0,0 +1,352 @@
# Transcription Tools Comparison

Comprehensive comparison of audio transcription engines supported by the audio-transcriber skill.

## Overview

| Tool | Type | Speed | Quality | Cost | Privacy | Offline | Languages |
|------|------|-------|---------|------|---------|---------|-----------|
| **Faster-Whisper** | Open-source | ⚡⚡⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Free | 100% | ✅ | 99 |
| **Whisper** | Open-source | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | Free | 100% | ✅ | 99 |
| Google Speech-to-Text | Commercial API | ⚡⚡⚡⚡ | ⭐⭐⭐⭐⭐ | $0.006/15s | Partial | ❌ | 125+ |
| Azure Speech | Commercial API | ⚡⚡⚡⚡ | ⭐⭐⭐⭐ | $1/hour | Partial | ❌ | 100+ |
| AssemblyAI | Commercial API | ⚡⚡⚡⚡ | ⭐⭐⭐⭐⭐ | $0.00025/s | Partial | ❌ | 99 |

---

## Faster-Whisper (Recommended)

### Pros
✅ **4-5x faster** than original Whisper
✅ **Same quality** as original Whisper
✅ **Lower memory usage** (50-60% less RAM)
✅ **Free and open-source**
✅ **100% offline** (privacy guaranteed)
✅ **Easy installation** (`pip install faster-whisper`)
✅ **Drop-in replacement** for Whisper

### Cons
❌ Requires Python 3.8+
❌ Initial model download (~100MB-1.5GB)
❌ GPU optional but speeds up significantly

### Installation

```bash
pip install faster-whisper
```

### Usage Example

```python
from faster_whisper import WhisperModel

# Load model (auto-downloads on first run)
model = WhisperModel("base", device="cpu", compute_type="int8")

# Transcribe
segments, info = model.transcribe("audio.mp3", language="pt")

# Print results
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

### Model Sizes

| Model | Size | RAM | Speed (CPU) | Quality |
|-------|------|-----|-------------|---------|
| `tiny` | 39 MB | ~1 GB | Very fast (~10x realtime) | Basic |
| `base` | 74 MB | ~1 GB | Fast (~7x realtime) | Good |
| `small` | 244 MB | ~2 GB | Moderate (~4x realtime) | Very good |
| `medium` | 769 MB | ~5 GB | Slow (~2x realtime) | Excellent |
| `large` | 1550 MB | ~10 GB | Very slow (~1x realtime) | Best |

**Recommendation:** `small` or `medium` for production use.
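If you want to pick a model automatically, one option (a sketch, assuming the `psutil` package is installed; the cutoffs are assumptions derived loosely from the table above, not part of the skill) is to key the choice off available RAM:

```python
import psutil

def pick_whisper_model() -> str:
    """Choose a Whisper model size based on currently available RAM."""
    available_gb = psutil.virtual_memory().available / 1024**3
    if available_gb >= 10:
        return "medium"   # excellent quality, needs ~5 GB
    if available_gb >= 4:
        return "small"    # recommended default, needs ~2 GB
    return "base"         # safe fallback, needs ~1 GB

print(f"Selected model: {pick_whisper_model()}")
```
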
---

## Whisper (Original)

### Pros
✅ **Official OpenAI model**
✅ **Excellent quality**
✅ **Free and open-source**
✅ **100% offline**
✅ **Well-documented**
✅ **Large community**

### Cons
❌ **Slower** than Faster-Whisper (4-5x)
❌ **Higher memory usage**
❌ Requires PyTorch (large dependency)
❌ GPU highly recommended for larger models

### Installation

```bash
pip install openai-whisper
```

### Usage Example

```python
import whisper

# Load model
model = whisper.load_model("base")

# Transcribe
result = model.transcribe("audio.mp3", language="pt")

# Print results
print(result["text"])
```

### When to Use Whisper vs. Faster-Whisper

**Use Faster-Whisper if:**
- Speed is important
- Limited RAM available
- Processing many files

**Use Original Whisper if:**
- Faster-Whisper installation issues
- Need exact OpenAI implementation
- Already have Whisper in project dependencies

---

## Google Cloud Speech-to-Text

### Pros
✅ **Very accurate** (industry-leading)
✅ **Fast processing** (cloud infrastructure)
✅ **125+ languages**
✅ **Word-level timestamps**
✅ **Punctuation & capitalization**
✅ **Speaker diarization** (premium)

### Cons
❌ **Requires internet** (cloud-only)
❌ **Costs money** (after free tier)
❌ **Privacy concerns** (audio uploaded to Google)
❌ Requires GCP account setup
❌ Complex authentication

### Pricing

- **Free tier:** 60 minutes/month
- **Standard:** $0.006 per 15 seconds ($1.44/hour)
- **Premium:** $0.009 per 15 seconds (with diarization)

### Installation

```bash
pip install google-cloud-speech
```

### Setup

1. Create GCP project
2. Enable Speech-to-Text API
3. Create service account & download JSON key
4. Set environment variable:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="path/to/key.json"
```

### Usage Example

```python
from google.cloud import speech

client = speech.SpeechClient()

with open("audio.wav", "rb") as audio_file:
    content = audio_file.read()

audio = speech.RecognitionAudio(content=content)
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="pt-BR",
)

response = client.recognize(config=config, audio=audio)

for result in response.results:
    print(result.alternatives[0].transcript)
```

---

## Azure Speech Services

### Pros
✅ **High accuracy**
✅ **100+ languages**
✅ **Real-time transcription**
✅ **Custom models** (train on your data)
✅ **Good Microsoft ecosystem integration**

### Cons
❌ **Requires internet**
❌ **Costs money** (after free tier)
❌ **Privacy concerns** (cloud processing)
❌ Requires Azure account
❌ Complex setup

### Pricing

- **Free tier:** 5 hours/month
- **Standard:** $1.00 per audio hour

### Installation

```bash
pip install azure-cognitiveservices-speech
```

### Setup

1. Create Azure account
2. Create Speech resource
3. Get API key and region
4. Set environment variables:
```bash
export AZURE_SPEECH_KEY="your-key"
export AZURE_SPEECH_REGION="your-region"
```

### Usage Example

```python
import os

import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ.get('AZURE_SPEECH_KEY'),
    region=os.environ.get('AZURE_SPEECH_REGION')
)

audio_config = speechsdk.audio.AudioConfig(filename="audio.wav")
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    audio_config=audio_config
)

result = speech_recognizer.recognize_once()
print(result.text)
```

---

## AssemblyAI

### Pros
✅ **Modern, developer-friendly API**
✅ **Excellent accuracy**
✅ **Advanced features** (sentiment, topic detection, PII redaction)
✅ **Speaker diarization** (included)
✅ **Fast processing**
✅ **Good documentation**

### Cons
❌ **Requires internet**
❌ **Costs money** (no free tier, only trial credits)
❌ **Privacy concerns** (cloud processing)
❌ Requires API key

### Pricing

- **Free trial:** $50 credits
- **Standard:** $0.00025 per second (~$0.90/hour)

### Installation

```bash
pip install assemblyai
```

### Setup

1. Sign up at assemblyai.com
2. Get API key
3. Set environment variable:
```bash
export ASSEMBLYAI_API_KEY="your-key"
```

### Usage Example

```python
import os

import assemblyai as aai

aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]

transcriber = aai.Transcriber()
transcript = transcriber.transcribe("audio.mp3")

print(transcript.text)

# Speaker diarization
for utterance in transcript.utterances:
    print(f"Speaker {utterance.speaker}: {utterance.text}")
```

---

## Recommendation Matrix

### Use Faster-Whisper if:
- ✅ Privacy is critical (local processing)
- ✅ Want zero cost (free forever)
- ✅ Need offline capability
- ✅ Processing many files (speed matters)
- ✅ Limited budget

### Use Google Speech-to-Text if:
- ✅ Need absolute best accuracy
- ✅ Have budget for cloud services
- ✅ Want advanced features (punctuation, diarization)
- ✅ Already using GCP ecosystem

### Use Azure Speech if:
- ✅ In Microsoft ecosystem
- ✅ Need custom model training
- ✅ Want real-time transcription
- ✅ Have Azure credits

### Use AssemblyAI if:
- ✅ Need advanced features (sentiment, topics)
- ✅ Want easiest API experience
- ✅ Need automatic PII redaction
- ✅ Value developer experience

---

## Performance Benchmarks

**Test:** 1-hour podcast (MP3, 44.1kHz, stereo)

| Tool | Processing Time | Accuracy | Cost |
|------|----------------|----------|------|
| Faster-Whisper (small) | 8 min | 94% | $0 |
| Whisper (small) | 32 min | 94% | $0 |
| Google Speech | 2 min | 96% | $1.44 |
| Azure Speech | 3 min | 95% | $1.00 |
| AssemblyAI | 4 min | 96% | $0.90 |

*Benchmarks run on MacBook Pro M1, 16GB RAM*

---

## Conclusion

**For the audio-transcriber skill:**

1. **Primary:** Faster-Whisper (best balance of speed, quality, privacy, cost)
2. **Fallback:** Whisper (if Faster-Whisper unavailable)
3. **Optional:** Cloud APIs (user choice for premium features)

This ensures the skill works out-of-the-box for most users while allowing advanced users to integrate commercial services if needed.
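This preference order matches what the skill's transcribe.py (later in this diff) does at import time; condensed into a function, the fallback chain looks roughly like this:

```python
def detect_engine():
    """Return the best available local engine, preferring Faster-Whisper."""
    try:
        import faster_whisper  # noqa: F401
        return "faster-whisper"
    except ImportError:
        try:
            import whisper  # noqa: F401
            return "whisper"
        except ImportError:
            return None  # fall back to a cloud API, or prompt the user to install

engine = detect_engine()
print(engine or "No local engine found - pip install faster-whisper")
```
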
190
skills/audio-transcriber/scripts/install-requirements.sh
Executable file
@@ -0,0 +1,190 @@
#!/usr/bin/env bash

# Audio Transcriber - Requirements Installation Script
# Automatically installs and validates dependencies

set -euo pipefail

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m'

echo -e "${BLUE}🔧 Audio Transcriber - Dependency Installation${NC}"
echo ""

# Check Python
if ! command -v python3 &>/dev/null; then
    echo -e "${RED}❌ Python 3 not found. Please install Python 3.8+${NC}"
    exit 1
fi

PYTHON_VERSION=$(python3 --version | cut -d' ' -f2 | cut -d'.' -f1,2)
echo -e "${GREEN}✅ Python ${PYTHON_VERSION} detected${NC}"

# Check pip
if ! python3 -m pip --version &>/dev/null; then
    echo -e "${RED}❌ pip not found. Please install pip${NC}"
    exit 1
fi

echo -e "${GREEN}✅ pip available${NC}"
echo ""

# Install system dependencies (macOS only)
if [[ "$OSTYPE" == "darwin"* ]]; then
    echo -e "${BLUE}📦 Checking system dependencies (macOS)...${NC}"

    # Check for Homebrew
    if command -v brew &>/dev/null; then
        # Install pkg-config and ffmpeg if not present
        NEED_INSTALL=""

        if ! brew list pkg-config &>/dev/null; then
            NEED_INSTALL="$NEED_INSTALL pkg-config"
        fi

        if ! brew list ffmpeg &>/dev/null; then
            NEED_INSTALL="$NEED_INSTALL ffmpeg"
        fi

        if [[ -n "$NEED_INSTALL" ]]; then
            echo -e "${BLUE}Installing:$NEED_INSTALL${NC}"
            brew install $NEED_INSTALL --quiet
            echo -e "${GREEN}✅ System dependencies installed${NC}"
        else
            echo -e "${GREEN}✅ System dependencies already installed${NC}"
        fi
    else
        echo -e "${YELLOW}⚠️ Homebrew not found. Install manually if needed:${NC}"
        echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    fi
fi

echo ""

# Install faster-whisper (recommended)
echo -e "${BLUE}📦 Installing Faster-Whisper...${NC}"

# Try different installation methods based on Python environment
if python3 -m pip install faster-whisper --quiet 2>/dev/null; then
    echo -e "${GREEN}✅ Faster-Whisper installed successfully${NC}"
elif python3 -m pip install --user --break-system-packages faster-whisper --quiet 2>/dev/null; then
    echo -e "${GREEN}✅ Faster-Whisper installed successfully (user mode)${NC}"
else
    echo -e "${YELLOW}⚠️ Faster-Whisper installation failed, trying Whisper...${NC}"

    if python3 -m pip install openai-whisper --quiet 2>/dev/null; then
        echo -e "${GREEN}✅ Whisper installed successfully${NC}"
    elif python3 -m pip install --user --break-system-packages openai-whisper --quiet 2>/dev/null; then
        echo -e "${GREEN}✅ Whisper installed successfully (user mode)${NC}"
    else
        echo -e "${RED}❌ Failed to install transcription engine${NC}"
        echo ""
        echo -e "${YELLOW}Manual installation options:${NC}"
        echo "  1. Use --break-system-packages (macOS/Homebrew Python):"
        echo "     python3 -m pip install --user --break-system-packages openai-whisper"
        echo ""
        echo "  2. Use virtual environment (recommended):"
        echo "     python3 -m venv ~/whisper-env"
        echo "     source ~/whisper-env/bin/activate"
        echo "     pip install faster-whisper"
        echo ""
        echo "  3. Use pipx (isolated):"
        echo "     brew install pipx"
        echo "     pipx install openai-whisper"
        exit 1
    fi
fi

# Install UI/progress libraries (tqdm, rich)
echo ""
echo -e "${BLUE}📦 Installing UI libraries (tqdm, rich)...${NC}"

if python3 -m pip install tqdm rich --quiet 2>/dev/null; then
    echo -e "${GREEN}✅ tqdm and rich installed successfully${NC}"
elif python3 -m pip install --user --break-system-packages tqdm rich --quiet 2>/dev/null; then
    echo -e "${GREEN}✅ tqdm and rich installed successfully (user mode)${NC}"
else
    echo -e "${YELLOW}⚠️ Optional UI libraries not installed (skill will still work)${NC}"
fi

# Check ffmpeg (optional but recommended)
echo ""
if command -v ffmpeg &>/dev/null; then
    echo -e "${GREEN}✅ ffmpeg already installed${NC}"
else
    echo -e "${YELLOW}⚠️ ffmpeg not found (should have been installed earlier on macOS)${NC}"
    if [[ "$OSTYPE" == "darwin"* ]] && command -v brew &>/dev/null; then
        echo -e "${BLUE}Installing ffmpeg via Homebrew...${NC}"
        brew install ffmpeg --quiet && echo -e "${GREEN}✅ ffmpeg installed${NC}"
    else
        echo -e "${BLUE}ℹ️ ffmpeg is optional but recommended for format conversion${NC}"
        echo ""
        echo "Install ffmpeg:"
        if [[ "$OSTYPE" == "darwin"* ]]; then
            echo "  brew install ffmpeg"
        elif [[ "$OSTYPE" == "linux-gnu"* ]]; then
            echo "  sudo apt install ffmpeg   # Debian/Ubuntu"
            echo "  sudo yum install ffmpeg   # CentOS/RHEL"
        fi
    fi
fi

# Verify installation
echo ""
echo -e "${BLUE}🔍 Verifying installation...${NC}"

if python3 -c "import faster_whisper" 2>/dev/null; then
    echo -e "${GREEN}✅ Faster-Whisper verified${NC}"
    TRANSCRIBER="Faster-Whisper"
elif python3 -c "import whisper" 2>/dev/null; then
    echo -e "${GREEN}✅ Whisper verified${NC}"
    TRANSCRIBER="Whisper"
else
    echo -e "${RED}❌ No transcription engine found after installation${NC}"
    exit 1
fi

# Download initial model (optional)
read -p "Download Whisper 'base' model now? (recommended, ~74MB) [Y/n]: " DOWNLOAD_MODEL

if [[ ! "$DOWNLOAD_MODEL" =~ ^[Nn] ]]; then
    echo ""
    echo -e "${BLUE}📥 Downloading 'base' model...${NC}"

    python3 << 'EOF'
try:
    import faster_whisper
    model = faster_whisper.WhisperModel("base", device="cpu", compute_type="int8")
    print("✅ Model downloaded successfully")
except Exception:
    # faster-whisper missing or failed; fall back to original Whisper
    try:
        import whisper
        model = whisper.load_model("base")
        print("✅ Model downloaded successfully")
    except Exception as e:
        print(f"❌ Model download failed: {e}")
EOF
fi

# Success summary
echo ""
echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}✅ Installation Complete!${NC}"
echo -e "${GREEN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo "📊 Installed components:"
echo "  • Transcription engine: $TRANSCRIBER"
if command -v ffmpeg &>/dev/null; then
    echo "  • Format conversion: ffmpeg (available)"
else
    echo "  • Format conversion: ffmpeg (not installed)"
fi
echo ""
echo "🚀 Ready to use! Try:"
echo "  copilot> transcribe audio to markdown: myfile.mp3"
echo "  claude> transcribe this audio: myfile.mp3"
echo ""
486
skills/audio-transcriber/scripts/transcribe.py
Executable file
@@ -0,0 +1,486 @@
#!/usr/bin/env python3
"""
Audio Transcriber v1.1.0
Transcribes audio to text and generates meeting minutes/summaries using an LLM.
"""

import os
import sys
import json
import subprocess
import shutil
from datetime import datetime
from pathlib import Path

# Rich for beautiful terminal output
try:
    from rich.console import Console
    from rich.prompt import Prompt
    from rich.panel import Panel
    from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn
    from rich import print as rprint
    RICH_AVAILABLE = True
except ImportError:
    RICH_AVAILABLE = False
    print("⚠️ Installing rich for better UI...")
    subprocess.run([sys.executable, "-m", "pip", "install", "--user", "rich"], check=False)
    from rich.console import Console
    from rich.prompt import Prompt
    from rich.panel import Panel
    from rich.progress import Progress, SpinnerColumn, TextColumn, BarColumn
    from rich import print as rprint

# tqdm for progress bars
try:
    from tqdm import tqdm
except ImportError:
    print("⚠️ Installing tqdm for progress bars...")
    subprocess.run([sys.executable, "-m", "pip", "install", "--user", "tqdm"], check=False)
    from tqdm import tqdm

# Whisper engines
try:
    from faster_whisper import WhisperModel
    TRANSCRIBER = "faster-whisper"
except ImportError:
    try:
        import whisper
        TRANSCRIBER = "whisper"
    except ImportError:
        print("❌ No transcription engine found!")
        print("   Install with: pip install faster-whisper")
        sys.exit(1)

console = Console()

# Default RISEN template used as a fallback
DEFAULT_MEETING_PROMPT = """
Role: You are a professional transcriber specialized in documentation.

Instructions: Turn the provided transcription into a structured, professional document.

Steps:
1. Identify the content type (meeting, lecture, interview, etc.)
2. Extract the main topics and key points
3. Identify participants/speakers (if applicable)
4. Extract decisions made and actions assigned (if a meeting)
5. Organize into an appropriate format with clear sections
6. Use Markdown for professional formatting

End Goal: A well-structured, readable final document, ready for distribution.

Narrowing:
- Stay objective and clear
- Preserve important context
- Use proper Markdown formatting
- Include relevant timestamps where applicable
"""


def detect_cli_tool():
    """Detect which LLM CLI is available (claude > gh copilot)."""
    if shutil.which('claude'):
        return 'claude'
    elif shutil.which('gh'):
        result = subprocess.run(['gh', 'copilot', '--version'],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return 'gh-copilot'
    return None


def invoke_prompt_engineer(raw_prompt, timeout=90):
    """
    Invoke the prompt-engineer skill via CLI to improve/generate prompts.

    Args:
        raw_prompt: Prompt to be improved, or a meta-prompt
        timeout: Timeout in seconds

    Returns:
        str: Improved prompt, or DEFAULT_MEETING_PROMPT on failure
    """
    try:
        # Try via gh copilot
        console.print("[dim]   Invoking prompt-engineer...[/dim]")

        result = subprocess.run(
            ['gh', 'copilot', 'suggest', '-t', 'shell', raw_prompt],
            capture_output=True,
            text=True,
            timeout=timeout
        )

        if result.returncode == 0 and result.stdout.strip():
            return result.stdout.strip()
        else:
            console.print("[yellow]⚠️ prompt-engineer did not respond, using default template[/yellow]")
            return DEFAULT_MEETING_PROMPT

    except subprocess.TimeoutExpired:
        console.print(f"[red]⚠️ Timeout after {timeout}s, using default template[/red]")
        return DEFAULT_MEETING_PROMPT
    except Exception as e:
        console.print(f"[red]⚠️ Error invoking prompt-engineer: {e}[/red]")
        return DEFAULT_MEETING_PROMPT


def handle_prompt_workflow(user_prompt, transcript):
    """
    Manage the full prompt workflow with prompt-engineer.

    Scenario A: user provided a prompt → improve AUTOMATICALLY → confirm
    Scenario B: no prompt → suggest type → confirm → generate → confirm

    Returns:
        str: Final prompt to use, or None if the user declined LLM processing
    """
    prompt_engineer_available = os.path.exists(
        os.path.expanduser('~/.copilot/skills/prompt-engineer/SKILL.md')
    )

    # ========== SCENARIO A: USER PROVIDED A PROMPT ==========
    if user_prompt:
        console.print("\n[cyan]📝 Prompt provided by the user[/cyan]")
        console.print(Panel(user_prompt[:300] + ("..." if len(user_prompt) > 300 else ""),
                            title="Original prompt", border_style="dim"))

        if prompt_engineer_available:
            # Improve AUTOMATICALLY (without asking)
            console.print("\n[cyan]🔧 Improving prompt with prompt-engineer...[/cyan]")

            improved_prompt = invoke_prompt_engineer(
                f"improve this prompt:\n\n{user_prompt}"
            )

            # Show BOTH versions
            console.print("\n[green]✨ Improved version:[/green]")
            console.print(Panel(improved_prompt[:500] + ("..." if len(improved_prompt) > 500 else ""),
                                title="Optimized prompt", border_style="green"))

            console.print("\n[dim]📝 Original version:[/dim]")
            console.print(Panel(user_prompt[:300] + ("..." if len(user_prompt) > 300 else ""),
                                title="Your prompt", border_style="dim"))

            # Ask which one to use
            confirm = Prompt.ask(
                "\n💡 Use the improved version?",
                choices=["y", "n"],
                default="y"
            )

            return improved_prompt if confirm == "y" else user_prompt
        else:
            # prompt-engineer not available
            console.print("[yellow]⚠️ prompt-engineer skill not available[/yellow]")
            console.print("[dim]✅ Using your original prompt[/dim]")
            return user_prompt

    # ========== SCENARIO B: NO PROMPT - AUTO-GENERATION ==========
    else:
        console.print("\n[yellow]⚠️ No prompt provided.[/yellow]")

        if not prompt_engineer_available:
            console.print("[yellow]⚠️ prompt-engineer skill not found[/yellow]")
            console.print("[dim]Using default template...[/dim]")
            return DEFAULT_MEETING_PROMPT

        # STEP 1: Ask whether to auto-generate
        console.print("I can analyze the transcript and suggest a summary/minutes format.")

        generate = Prompt.ask(
            "\n💡 Generate a prompt automatically?",
            choices=["y", "n"],
            default="y"
        )

        if generate == "n":
            console.print("[dim]✅ Ok, generating only transcript.md (no minutes)[/dim]")
            return None  # Signals: do not process with the LLM

        # STEP 2: Analyze the transcript and SUGGEST a type
        console.print("\n[cyan]🔍 Analyzing transcript...[/cyan]")

        suggestion_meta_prompt = f"""
Analyze this transcript ({len(transcript)} characters) and suggest:

1. Content type (meeting, lecture, interview, etc.)
2. Recommended output format (formal minutes, executive summary, structured notes)
3. Ideal framework (RISEN, RODES, STAR, etc.)

First 4000 characters of the transcript:
{transcript[:4000]}

Answer in 2-3 concise lines.
"""

        suggested_type = invoke_prompt_engineer(suggestion_meta_prompt)

        # STEP 3: Show the suggestion and CONFIRM
        console.print("\n[green]💡 Suggested format:[/green]")
        console.print(Panel(suggested_type, title="Transcript analysis", border_style="green"))

        confirm_type = Prompt.ask(
            "\n💡 Use this format?",
            choices=["y", "n"],
            default="y"
        )

        if confirm_type == "n":
            console.print("[dim]Using default template...[/dim]")
            return DEFAULT_MEETING_PROMPT

        # STEP 4: Generate the full prompt from the suggestion
        console.print("\n[cyan]✨ Generating structured prompt...[/cyan]")

        final_meta_prompt = f"""
Create a complete, structured prompt (using an appropriate framework) for:

{suggested_type}

The prompt should instruct an AI to turn the transcript into a professional,
well-formatted Markdown document.
"""

        generated_prompt = invoke_prompt_engineer(final_meta_prompt)

        # STEP 5: Show the generated prompt and CONFIRM
        console.print("\n[green]✅ Generated prompt:[/green]")
        console.print(Panel(generated_prompt[:600] + ("..." if len(generated_prompt) > 600 else ""),
                            title="Preview", border_style="green"))

        confirm_final = Prompt.ask(
            "\n💡 Use this prompt?",
            choices=["y", "n"],
            default="y"
        )

        if confirm_final == "y":
            return generated_prompt
        else:
            console.print("[dim]Using default template...[/dim]")
            return DEFAULT_MEETING_PROMPT


def process_with_llm(transcript, prompt, cli_tool='claude', timeout=300):
    """
    Process the transcript with an LLM using the given prompt.

    Args:
        transcript: Transcribed text
        prompt: Prompt instructing how to process it
        cli_tool: 'claude' or 'gh-copilot'
        timeout: Timeout in seconds

    Returns:
        str: Processed minutes/summary
    """
    full_prompt = f"{prompt}\n\n---\n\nTranscription:\n\n{transcript}"

    try:
        with Progress(
            SpinnerColumn(),
            TextColumn("[progress.description]{task.description}"),
            transient=True
        ) as progress:
            progress.add_task(description=f"🤖 Processing with {cli_tool}...", total=None)

            if cli_tool == 'claude':
                result = subprocess.run(
                    ['claude', '-'],
                    input=full_prompt,
                    capture_output=True,
                    text=True,
                    timeout=timeout
                )
            elif cli_tool == 'gh-copilot':
                result = subprocess.run(
                    ['gh', 'copilot', 'suggest', '-t', 'shell', full_prompt],
                    capture_output=True,
                    text=True,
                    timeout=timeout
                )
            else:
                raise ValueError(f"Unknown CLI tool: {cli_tool}")

        if result.returncode == 0:
            return result.stdout.strip()
        else:
            console.print(f"[red]❌ Error processing with {cli_tool}[/red]")
            console.print(f"[dim]{result.stderr[:200]}[/dim]")
            return None

    except subprocess.TimeoutExpired:
        console.print(f"[red]❌ Timeout after {timeout}s[/red]")
        return None
    except Exception as e:
        console.print(f"[red]❌ Error: {e}[/red]")
        return None


def transcribe_audio(audio_file, model="base"):
    """
    Transcribe audio using Whisper, with a progress bar.

    Returns:
        dict: {language, duration, segments: [{start, end, text}]}
    """
    console.print(f"\n[cyan]🎙️ Transcribing audio with {TRANSCRIBER}...[/cyan]")

    try:
        if TRANSCRIBER == "faster-whisper":
            model_obj = WhisperModel(model, device="cpu", compute_type="int8")
            segments, info = model_obj.transcribe(
                audio_file,
                language=None,
                vad_filter=True,
                word_timestamps=True
            )

            data = {
                "language": info.language,
                "language_probability": round(info.language_probability, 2),
                "duration": info.duration,
                "segments": []
            }

            # Convert the generator to a list, showing progress
            console.print("[dim]Processing segments...[/dim]")
            for segment in tqdm(segments, desc="Segments", unit="seg"):
                data["segments"].append({
                    "start": round(segment.start, 2),
                    "end": round(segment.end, 2),
                    "text": segment.text.strip()
                })

        else:  # original whisper
            import whisper
            model_obj = whisper.load_model(model)
            result = model_obj.transcribe(audio_file, word_timestamps=True)

            data = {
                "language": result["language"],
                "duration": result["segments"][-1]["end"] if result["segments"] else 0,
                "segments": result["segments"]
            }

        console.print(f"[green]✅ Transcription complete! Language: {data['language'].upper()}[/green]")
        console.print(f"[dim]   {len(data['segments'])} segments processed[/dim]")

        return data

    except Exception as e:
        console.print(f"[red]❌ Transcription error: {e}[/red]")
        sys.exit(1)


def save_outputs(transcript_text, ata_text, audio_file, output_dir="."):
    """
    Save the transcript and minutes to timestamped .md files.

    Returns:
        tuple: (transcript_path, ata_path or None)
    """
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    base_name = Path(audio_file).stem

    # Always save the transcript
    transcript_filename = f"transcript-{timestamp}.md"
    transcript_path = Path(output_dir) / transcript_filename

    with open(transcript_path, 'w', encoding='utf-8') as f:
        f.write(transcript_text)

    console.print(f"[green]✅ Transcript saved:[/green] {transcript_filename}")

    # Save the minutes if present
    ata_path = None
    if ata_text:
        ata_filename = f"ata-{timestamp}.md"
        ata_path = Path(output_dir) / ata_filename

        with open(ata_path, 'w', encoding='utf-8') as f:
            f.write(ata_text)

        console.print(f"[green]✅ Minutes saved:[/green] {ata_filename}")

    return str(transcript_path), str(ata_path) if ata_path else None


def main():
    """Main entry point."""
    import argparse

    parser = argparse.ArgumentParser(description="Audio Transcriber v1.1.0")
    parser.add_argument("audio_file", help="Audio file to transcribe")
    parser.add_argument("--prompt", help="Custom prompt for processing the transcript")
    parser.add_argument("--model", default="base", help="Whisper model (tiny/base/small/medium/large)")
    parser.add_argument("--output-dir", default=".", help="Output directory")

    args = parser.parse_args()

    # Verify the file exists
    if not os.path.exists(args.audio_file):
        console.print(f"[red]❌ File not found: {args.audio_file}[/red]")
        sys.exit(1)

    console.print("[bold cyan]🎵 Audio Transcriber v1.1.0[/bold cyan]\n")

    # Step 1: Transcribe
    transcription_data = transcribe_audio(args.audio_file, model=args.model)

    # Build the transcript text
    transcript_text = "# Audio Transcription\n\n"
    transcript_text += f"**File:** {Path(args.audio_file).name}\n"
    transcript_text += f"**Language:** {transcription_data['language'].upper()}\n"
    transcript_text += f"**Date:** {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
    transcript_text += "---\n\n## Full Transcription\n\n"

    for seg in transcription_data["segments"]:
        start_min = int(seg["start"] // 60)
        start_sec = int(seg["start"] % 60)
        end_min = int(seg["end"] // 60)
        end_sec = int(seg["end"] % 60)
        transcript_text += f"**[{start_min:02d}:{start_sec:02d} → {end_min:02d}:{end_sec:02d}]**  \n{seg['text']}\n\n"

    # Step 2: Detect CLI
    cli_tool = detect_cli_tool()

    if not cli_tool:
        console.print("\n[yellow]⚠️ No AI CLI detected (Claude or GitHub Copilot)[/yellow]")
        console.print("[dim]ℹ️ Saving transcript.md only...[/dim]")

        save_outputs(transcript_text, None, args.audio_file, args.output_dir)

        console.print("\n[cyan]💡 To generate minutes/summaries:[/cyan]")
        console.print("   - Install the Claude CLI: pip install claude-cli")
        console.print("   - Or use the GitHub Copilot CLI if already installed (gh copilot)")
        return

    console.print(f"\n[green]✅ CLI detected: {cli_tool}[/green]")

    # Step 3: Prompt workflow
    final_prompt = handle_prompt_workflow(args.prompt, transcript_text)

    if final_prompt is None:
        # User declined LLM processing
        save_outputs(transcript_text, None, args.audio_file, args.output_dir)
        return

    # Step 4: Process with the LLM
    ata_text = process_with_llm(transcript_text, final_prompt, cli_tool)

    if ata_text:
        console.print("[green]✅ Minutes generated successfully![/green]")
    else:
        console.print("[yellow]⚠️ Failed to generate minutes, saving transcript only[/yellow]")

    # Step 5: Save files
    console.print("\n[cyan]💾 Saving files...[/cyan]")
    save_outputs(transcript_text, ata_text, args.audio_file, args.output_dir)

    console.print("\n[bold green]✅ Done![/bold green]")


if __name__ == "__main__":
    main()
296
skills/azd-deployment/SKILL.md
Normal file
@@ -0,0 +1,296 @@
|
||||
---
|
||||
name: azd-deployment
|
||||
description: Deploy containerized applications to Azure Container Apps using Azure Developer CLI (azd). Use when setting up azd projects, writing azure.yaml configuration, creating Bicep infrastructure for Container Apps, configuring remote builds with ACR, implementing idempotent deployments, managing environment variables across local/.azure/Bicep, or troubleshooting azd up failures. Triggers on requests for azd configuration, Container Apps deployment, multi-service deployments, and infrastructure-as-code with Bicep.
|
||||
---
|
||||
|
||||
# Azure Developer CLI (azd) Container Apps Deployment
|
||||
|
||||
Deploy containerized frontend + backend applications to Azure Container Apps with remote builds, managed identity, and idempotent infrastructure.
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Initialize and deploy
|
||||
azd auth login
|
||||
azd init # Creates azure.yaml and .azure/ folder
|
||||
azd env new <env-name> # Create environment (dev, staging, prod)
|
||||
azd up # Provision infra + build + deploy
|
||||
```
|
||||
|
||||
## Core File Structure
|
||||
|
||||
```
|
||||
project/
|
||||
├── azure.yaml # azd service definitions + hooks
|
||||
├── infra/
|
||||
│ ├── main.bicep # Root infrastructure module
|
||||
│ ├── main.parameters.json # Parameter injection from env vars
|
||||
│ └── modules/
|
||||
│ ├── container-apps-environment.bicep
|
||||
│ └── container-app.bicep
|
||||
├── .azure/
|
||||
│ ├── config.json # Default environment pointer
|
||||
│ └── <env-name>/
|
||||
│ ├── .env # Environment-specific values (azd-managed)
|
||||
│ └── config.json # Environment metadata
|
||||
└── src/
|
||||
├── frontend/Dockerfile
|
||||
└── backend/Dockerfile
|
||||
```
|
||||
|
||||
## azure.yaml Configuration
|
||||
|
||||
### Minimal Configuration
|
||||
|
||||
```yaml
|
||||
name: azd-deployment
|
||||
services:
|
||||
backend:
|
||||
project: ./src/backend
|
||||
language: python
|
||||
host: containerapp
|
||||
docker:
|
||||
path: ./Dockerfile
|
||||
remoteBuild: true
|
||||
```
|
||||
|
||||
### Full Configuration with Hooks
|
||||
|
||||
```yaml
|
||||
name: azd-deployment
|
||||
metadata:
|
||||
template: my-project@1.0.0
|
||||
|
||||
infra:
|
||||
provider: bicep
|
||||
path: ./infra
|
||||
|
||||
azure:
|
||||
location: eastus2
|
||||
|
||||
services:
|
||||
frontend:
|
||||
project: ./src/frontend
|
||||
language: ts
|
||||
host: containerapp
|
||||
docker:
|
||||
path: ./Dockerfile
|
||||
context: .
|
||||
remoteBuild: true
|
||||
|
||||
backend:
|
||||
project: ./src/backend
|
||||
language: python
|
||||
host: containerapp
|
||||
docker:
|
||||
path: ./Dockerfile
|
||||
context: .
|
||||
remoteBuild: true
|
||||
|
||||
hooks:
|
||||
preprovision:
|
||||
shell: sh
|
||||
run: |
|
||||
echo "Before provisioning..."
|
||||
|
||||
postprovision:
|
||||
shell: sh
|
||||
run: |
|
||||
echo "After provisioning - set up RBAC, etc."
|
||||
|
||||
postdeploy:
|
||||
shell: sh
|
||||
run: |
|
||||
echo "Frontend: ${SERVICE_FRONTEND_URI}"
|
||||
echo "Backend: ${SERVICE_BACKEND_URI}"
|
||||
```
|
||||
|
||||
### Key azure.yaml Options
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `remoteBuild: true` | Build images in Azure Container Registry (recommended) |
|
||||
| `context: .` | Docker build context relative to project path |
|
||||
| `host: containerapp` | Deploy to Azure Container Apps |
|
||||
| `infra.provider: bicep` | Use Bicep for infrastructure |
|
||||
|
||||
## Environment Variables Flow
|
||||
|
||||
### Three-Level Configuration
|
||||
|
||||
1. **Local `.env`** - For local development only
|
||||
2. **`.azure/<env>/.env`** - azd-managed, auto-populated from Bicep outputs
|
||||
3. **`main.parameters.json`** - Maps env vars to Bicep parameters
|
||||
|
||||
### Parameter Injection Pattern
|
||||
|
||||
```json
|
||||
// infra/main.parameters.json
|
||||
{
|
||||
"parameters": {
|
||||
"environmentName": { "value": "${AZURE_ENV_NAME}" },
|
||||
"location": { "value": "${AZURE_LOCATION=eastus2}" },
|
||||
"azureOpenAiEndpoint": { "value": "${AZURE_OPENAI_ENDPOINT}" }
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Syntax: `${VAR_NAME}` or `${VAR_NAME=default_value}`
|
||||
|
||||
### Setting Environment Variables
|
||||
|
||||
```bash
|
||||
# Set for current environment
|
||||
azd env set AZURE_OPENAI_ENDPOINT "https://my-openai.openai.azure.com"
|
||||
azd env set AZURE_SEARCH_ENDPOINT "https://my-search.search.windows.net"
|
||||
|
||||
# Set during init
|
||||
azd env new prod
|
||||
azd env set AZURE_OPENAI_ENDPOINT "..."
|
||||
```
|
||||
|
||||
### Bicep Output → Environment Variable
|
||||
|
||||
```bicep
|
||||
// In main.bicep - outputs auto-populate .azure/<env>/.env
|
||||
output SERVICE_FRONTEND_URI string = frontend.outputs.uri
|
||||
output SERVICE_BACKEND_URI string = backend.outputs.uri
|
||||
output BACKEND_PRINCIPAL_ID string = backend.outputs.principalId
|
||||
```
|
||||
|
||||
## Idempotent Deployments

### Why azd up is Idempotent

1. **Bicep is declarative** - Resources reconcile to the desired state
2. **Remote builds tag uniquely** - Image tags include the deployment timestamp
3. **ACR reuses layers** - Only changed layers are uploaded

### Preserving Manual Changes

Custom domains added via the Portal can be lost on redeploy. Preserve them with hooks:

```yaml
hooks:
  preprovision:
    shell: sh
    run: |
      # Save custom domains before provisioning
      # (>/dev/null 2>&1 rather than &>/dev/null, since the hook shell is sh)
      if az containerapp show --name "$FRONTEND_NAME" -g "$RG" >/dev/null 2>&1; then
        az containerapp show --name "$FRONTEND_NAME" -g "$RG" \
          --query "properties.configuration.ingress.customDomains" \
          -o json > /tmp/domains.json
      fi

  postprovision:
    shell: sh
    run: |
      # Verify/restore custom domains
      if [ -f /tmp/domains.json ]; then
        echo "Saved domains: $(cat /tmp/domains.json)"
      fi
```

### Handling Existing Resources

```bicep
// Reference an existing ACR (don't recreate it)
resource containerRegistry 'Microsoft.ContainerRegistry/registries@2023-07-01' existing = {
  name: containerRegistryName
}

// Set customDomains to null to preserve Portal-added domains
customDomains: empty(customDomainsParam) ? null : customDomainsParam
```

## Container App Service Discovery

Internal HTTP routing between Container Apps in the same environment:

```bicep
// Backend reference in frontend env vars
env: [
  {
    name: 'BACKEND_URL'
    value: 'http://ca-backend-${resourceToken}' // Internal DNS
  }
]
```

The frontend nginx proxies to the internal URL:

```nginx
# nginx does not expand environment variables in config files;
# substitute $BACKEND_URL at container startup (e.g. with envsubst).
location /api {
    proxy_pass $BACKEND_URL;
}
```

## Managed Identity & RBAC

### Enable System-Assigned Identity

```bicep
resource containerApp 'Microsoft.App/containerApps@2024-03-01' = {
  identity: {
    type: 'SystemAssigned'
  }
}

output principalId string = containerApp.identity.principalId
```

### Post-Provision RBAC Assignment

```yaml
hooks:
  postprovision:
    shell: sh
    run: |
      PRINCIPAL_ID="${BACKEND_PRINCIPAL_ID}"

      # Azure OpenAI access
      az role assignment create \
        --assignee-object-id "$PRINCIPAL_ID" \
        --assignee-principal-type ServicePrincipal \
        --role "Cognitive Services OpenAI User" \
        --scope "$OPENAI_RESOURCE_ID" 2>/dev/null || true

      # Azure AI Search access
      az role assignment create \
        --assignee-object-id "$PRINCIPAL_ID" \
        --role "Search Index Data Reader" \
        --scope "$SEARCH_RESOURCE_ID" 2>/dev/null || true
```

## Common Commands

```bash
# Environment management
azd env list                 # List environments
azd env select <name>        # Switch environment
azd env get-values           # Show all env vars
azd env set KEY value        # Set a variable

# Deployment
azd up                       # Full provision + deploy
azd provision                # Infrastructure only
azd deploy                   # Code deployment only
azd deploy --service backend # Deploy a single service

# Debugging
azd show                                            # Show project status
az containerapp logs show -n <app> -g <rg> --follow # Stream logs
```

## Reference Files

- **Bicep patterns**: See [references/bicep-patterns.md](references/bicep-patterns.md) for Container Apps modules
- **Troubleshooting**: See [references/troubleshooting.md](references/troubleshooting.md) for common issues
- **azure.yaml schema**: See [references/azure-yaml-schema.md](references/azure-yaml-schema.md) for the full set of options

## Critical Reminders

1. **Always use `remoteBuild: true`** - Local builds fail on M1/ARM Macs deploying to AMD64
2. **Bicep outputs auto-populate `.azure/<env>/.env`** - Don't edit it manually
3. **Use `azd env set` for secrets** - Not main.parameters.json defaults
4. **Service tags (`azd-service-name`)** - Required for azd to find Container Apps
5. **`|| true` in hooks** - Prevents RBAC "already exists" errors from failing the deploy

349 skills/azure-ai-agents-persistent-dotnet/SKILL.md Normal file
@@ -0,0 +1,349 @@

---
name: azure-ai-agents-persistent-dotnet
description: |
  Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Use for agent CRUD, conversation threads, streaming responses, function calling, file search, and code interpreter. Triggers: "PersistentAgentsClient", "persistent agents", "agent threads", "agent runs", "streaming agents", "function calling agents .NET".
package: Azure.AI.Agents.Persistent
---

# Azure.AI.Agents.Persistent (.NET)

Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.

## Installation

```bash
dotnet add package Azure.AI.Agents.Persistent --prerelease
dotnet add package Azure.Identity
```

**Current Versions**: Stable v1.1.0, Preview v1.2.0-beta.8

## Environment Variables

```bash
PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
MODEL_DEPLOYMENT_NAME=gpt-4o-mini
AZURE_BING_CONNECTION_ID=<bing-connection-resource-id>
AZURE_AI_SEARCH_CONNECTION_ID=<search-connection-resource-id>
```

## Authentication

```csharp
using Azure.AI.Agents.Persistent;
using Azure.Identity;

var projectEndpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
PersistentAgentsClient client = new(projectEndpoint, new DefaultAzureCredential());
```

## Client Hierarchy

```
PersistentAgentsClient
├── Administration → Agent CRUD operations
├── Threads        → Thread management
├── Messages       → Message operations
├── Runs           → Run execution and streaming
├── Files          → File upload/download
└── VectorStores   → Vector store management
```

## Core Workflow

### 1. Create Agent

```csharp
var modelDeploymentName = Environment.GetEnvironmentVariable("MODEL_DEPLOYMENT_NAME");

PersistentAgent agent = await client.Administration.CreateAgentAsync(
    model: modelDeploymentName,
    name: "Math Tutor",
    instructions: "You are a personal math tutor. Write and run code to answer math questions.",
    tools: [new CodeInterpreterToolDefinition()]
);
```

### 2. Create Thread and Message

```csharp
// Create thread
PersistentAgentThread thread = await client.Threads.CreateThreadAsync();

// Create message
await client.Messages.CreateMessageAsync(
    thread.Id,
    MessageRole.User,
    "I need to solve the equation `3x + 11 = 14`. Can you help me?"
);
```

### 3. Run Agent (Polling)

```csharp
// Create run
ThreadRun run = await client.Runs.CreateRunAsync(
    thread.Id,
    agent.Id,
    additionalInstructions: "Please address the user as Jane Doe."
);

// Poll for completion
do
{
    await Task.Delay(TimeSpan.FromMilliseconds(500));
    run = await client.Runs.GetRunAsync(thread.Id, run.Id);
}
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);

// Retrieve messages
await foreach (PersistentThreadMessage message in client.Messages.GetMessagesAsync(
    threadId: thread.Id,
    order: ListSortOrder.Ascending))
{
    Console.Write($"{message.Role}: ");
    foreach (MessageContent content in message.ContentItems)
    {
        if (content is MessageTextContent textContent)
            Console.WriteLine(textContent.Text);
    }
}
```

### 4. Streaming Response

```csharp
AsyncCollectionResult<StreamingUpdate> stream = client.Runs.CreateRunStreamingAsync(
    thread.Id,
    agent.Id
);

await foreach (StreamingUpdate update in stream)
{
    if (update.UpdateKind == StreamingUpdateReason.RunCreated)
    {
        Console.WriteLine("--- Run started! ---");
    }
    else if (update is MessageContentUpdate contentUpdate)
    {
        Console.Write(contentUpdate.Text);
    }
    else if (update.UpdateKind == StreamingUpdateReason.RunCompleted)
    {
        Console.WriteLine("\n--- Run completed! ---");
    }
}
```

### 5. Function Calling

```csharp
using System.Text.Json; // for JsonSerializerOptions / JsonNamingPolicy below

// Define function tool
FunctionToolDefinition weatherTool = new(
    name: "getCurrentWeather",
    description: "Gets the current weather at a location.",
    parameters: BinaryData.FromObjectAsJson(new
    {
        Type = "object",
        Properties = new
        {
            Location = new { Type = "string", Description = "City and state, e.g. San Francisco, CA" },
            Unit = new { Type = "string", Enum = new[] { "c", "f" } }
        },
        Required = new[] { "location" }
    }, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase })
);

// Create agent with the function
PersistentAgent agent = await client.Administration.CreateAgentAsync(
    model: modelDeploymentName,
    name: "Weather Bot",
    instructions: "You are a weather bot.",
    tools: [weatherTool]
);

// Handle function calls during polling
do
{
    await Task.Delay(500);
    run = await client.Runs.GetRunAsync(thread.Id, run.Id);

    if (run.Status == RunStatus.RequiresAction
        && run.RequiredAction is SubmitToolOutputsAction submitAction)
    {
        List<ToolOutput> outputs = [];
        foreach (RequiredToolCall toolCall in submitAction.ToolCalls)
        {
            if (toolCall is RequiredFunctionToolCall funcCall)
            {
                // Execute the function and collect its result
                string result = ExecuteFunction(funcCall.Name, funcCall.Arguments);
                outputs.Add(new ToolOutput(toolCall, result));
            }
        }
        run = await client.Runs.SubmitToolOutputsToRunAsync(run, outputs, toolApprovals: null);
    }
}
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);
```

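The loop above calls an `ExecuteFunction` helper that this skill does not define. A minimal hypothetical sketch that dispatches by tool name — the weather values are stub data, not a real lookup:

```csharp
using System.Text.Json;

// Hypothetical dispatcher for the polling loop above; replace the stub
// weather response with a real implementation.
static string ExecuteFunction(string name, string argumentsJson)
{
    if (name == "getCurrentWeather")
    {
        using JsonDocument args = JsonDocument.Parse(argumentsJson);
        string location = args.RootElement.GetProperty("location").GetString() ?? "unknown";
        return JsonSerializer.Serialize(new { location, temperature = 22, unit = "c" });
    }
    return JsonSerializer.Serialize(new { error = $"Unknown function: {name}" });
}
```
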
### 6. File Search with Vector Store

```csharp
// Upload file
PersistentAgentFileInfo file = await client.Files.UploadFileAsync(
    filePath: "document.txt",
    purpose: PersistentAgentFilePurpose.Agents
);

// Create vector store
PersistentAgentsVectorStore vectorStore = await client.VectorStores.CreateVectorStoreAsync(
    fileIds: [file.Id],
    name: "my_vector_store"
);

// Create file search resource
FileSearchToolResource fileSearchResource = new();
fileSearchResource.VectorStoreIds.Add(vectorStore.Id);

// Create agent with file search
PersistentAgent agent = await client.Administration.CreateAgentAsync(
    model: modelDeploymentName,
    name: "Document Assistant",
    instructions: "You help users find information in documents.",
    tools: [new FileSearchToolDefinition()],
    toolResources: new ToolResources { FileSearch = fileSearchResource }
);
```

### 7. Bing Grounding

```csharp
var bingConnectionId = Environment.GetEnvironmentVariable("AZURE_BING_CONNECTION_ID");

BingGroundingToolDefinition bingTool = new(
    new BingGroundingSearchToolParameters(
        [new BingGroundingSearchConfiguration(bingConnectionId)]
    )
);

PersistentAgent agent = await client.Administration.CreateAgentAsync(
    model: modelDeploymentName,
    name: "Search Agent",
    instructions: "Use Bing to answer questions about current events.",
    tools: [bingTool]
);
```

### 8. Azure AI Search

```csharp
AzureAISearchToolResource searchResource = new(
    connectionId: searchConnectionId,
    indexName: "my_index",
    topK: 5,
    filter: "category eq 'documentation'",
    queryType: AzureAISearchQueryType.Simple
);

PersistentAgent agent = await client.Administration.CreateAgentAsync(
    model: modelDeploymentName,
    name: "Search Agent",
    instructions: "Search the documentation index to answer questions.",
    tools: [new AzureAISearchToolDefinition()],
    toolResources: new ToolResources { AzureAISearch = searchResource }
);
```

### 9. Cleanup

```csharp
await client.Threads.DeleteThreadAsync(thread.Id);
await client.Administration.DeleteAgentAsync(agent.Id);
await client.VectorStores.DeleteVectorStoreAsync(vectorStore.Id);
await client.Files.DeleteFileAsync(file.Id);
```

## Available Tools

| Tool | Class | Purpose |
|------|-------|---------|
| Code Interpreter | `CodeInterpreterToolDefinition` | Execute Python code, generate visualizations |
| File Search | `FileSearchToolDefinition` | Search uploaded files via vector stores |
| Function Calling | `FunctionToolDefinition` | Call custom functions |
| Bing Grounding | `BingGroundingToolDefinition` | Web search via Bing |
| Azure AI Search | `AzureAISearchToolDefinition` | Search Azure AI Search indexes |
| OpenAPI | `OpenApiToolDefinition` | Call external APIs via OpenAPI spec |
| Azure Functions | `AzureFunctionToolDefinition` | Invoke Azure Functions |
| MCP | `MCPToolDefinition` | Model Context Protocol tools |
| SharePoint | `SharepointToolDefinition` | Access SharePoint content |
| Microsoft Fabric | `MicrosoftFabricToolDefinition` | Access Fabric data |

## Streaming Update Types

| Update Type | Description |
|-------------|-------------|
| `StreamingUpdateReason.RunCreated` | Run started |
| `StreamingUpdateReason.RunInProgress` | Run processing |
| `StreamingUpdateReason.RunCompleted` | Run finished |
| `StreamingUpdateReason.RunFailed` | Run errored |
| `MessageContentUpdate` | Text content chunk |
| `RunStepUpdate` | Step status change |

## Key Types Reference

| Type | Purpose |
|------|---------|
| `PersistentAgentsClient` | Main entry point |
| `PersistentAgent` | Agent with model, instructions, tools |
| `PersistentAgentThread` | Conversation thread |
| `PersistentThreadMessage` | Message in a thread |
| `ThreadRun` | Execution of an agent against a thread |
| `RunStatus` | Queued, InProgress, RequiresAction, Completed, Failed |
| `ToolResources` | Combined tool resources |
| `ToolOutput` | Function call response |

## Best Practices

1. **Always dispose clients** — Use `using` statements or explicit disposal
2. **Poll with appropriate delays** — 500ms is recommended between status checks
3. **Clean up resources** — Delete threads and agents when done
4. **Handle all run statuses** — Check for `RequiresAction`, `Failed`, `Cancelled` (see the sketch below)
5. **Use streaming for real-time UX** — Better user experience than polling
6. **Store IDs, not objects** — Reference agents and threads by ID
7. **Use async methods** — All operations should be async

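A hedged sketch of practice 4, continuing the polling example from step 3; the `LastError` property name is an assumption based on the Assistants-style API surface:

```csharp
// Hedged sketch: branch on the remaining statuses after the polling loop.
// LastError is assumed from the Assistants-style API; verify against the SDK.
if (run.Status == RunStatus.Failed)
    Console.WriteLine($"Run failed: {run.LastError?.Message}");
else if (run.Status == RunStatus.Cancelled)
    Console.WriteLine("Run was cancelled.");
else if (run.Status == RunStatus.RequiresAction)
    Console.WriteLine("Run is waiting for tool outputs (see Function Calling above).");
else if (run.Status == RunStatus.Completed)
    Console.WriteLine("Run completed.");
```
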
## Error Handling

```csharp
using Azure;

try
{
    var agent = await client.Administration.CreateAgentAsync(...);
}
catch (RequestFailedException ex) when (ex.Status == 404)
{
    Console.WriteLine("Resource not found");
}
catch (RequestFailedException ex)
{
    Console.WriteLine($"Error: {ex.Status} - {ex.ErrorCode}: {ex.Message}");
}
```

## Related SDKs

| SDK | Purpose | Install |
|-----|---------|---------|
| `Azure.AI.Agents.Persistent` | Low-level agents (this SDK) | `dotnet add package Azure.AI.Agents.Persistent` |
| `Azure.AI.Projects` | High-level project client | `dotnet add package Azure.AI.Projects` |

## Reference Links

| Resource | URL |
|----------|-----|
| NuGet Package | https://www.nuget.org/packages/Azure.AI.Agents.Persistent |
| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.agents.persistent |
| GitHub Source | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Agents.Persistent |
| Samples | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Agents.Persistent/samples |

137 skills/azure-ai-agents-persistent-java/SKILL.md Normal file
@@ -0,0 +1,137 @@

---
name: azure-ai-agents-persistent-java
description: |
  Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools.
  Triggers: "PersistentAgentsClient", "persistent agents java", "agent threads java", "agent runs java", "streaming agents java".
package: com.azure:azure-ai-agents-persistent
---

# Azure AI Agents Persistent SDK for Java

Low-level SDK for creating and managing persistent AI agents with threads, messages, runs, and tools.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-agents-persistent</artifactId>
    <version>1.0.0-beta.1</version>
</dependency>
```

## Environment Variables

```bash
PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
MODEL_DEPLOYMENT_NAME=gpt-4o-mini
```

## Authentication

```java
import com.azure.ai.agents.persistent.PersistentAgentsClient;
import com.azure.ai.agents.persistent.PersistentAgentsClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

String endpoint = System.getenv("PROJECT_ENDPOINT");
PersistentAgentsClient client = new PersistentAgentsClientBuilder()
    .endpoint(endpoint)
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

## Key Concepts

The Azure AI Agents Persistent SDK provides a low-level API for managing persistent agents that can be reused across sessions.

### Client Hierarchy

| Client | Purpose |
|--------|---------|
| `PersistentAgentsClient` | Sync client for agent operations |
| `PersistentAgentsAsyncClient` | Async client for agent operations |

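The async variant is assumed to follow the standard Azure SDK for Java builder convention (`buildAsyncClient()`); a minimal sketch under that assumption:

```java
import com.azure.ai.agents.persistent.PersistentAgentsAsyncClient;
import com.azure.ai.agents.persistent.PersistentAgentsClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

// Assumed: the builder exposes buildAsyncClient(), as most Azure Java SDKs do.
PersistentAgentsAsyncClient asyncClient = new PersistentAgentsClientBuilder()
    .endpoint(System.getenv("PROJECT_ENDPOINT"))
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
```
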
## Core Workflow

### 1. Create Agent

```java
// Create an agent (no tools attached in this minimal example)
PersistentAgent agent = client.createAgent(
    modelDeploymentName,
    "Math Tutor",
    "You are a personal math tutor."
);
```

### 2. Create Thread

```java
PersistentAgentThread thread = client.createThread();
```

### 3. Add Message

```java
client.createMessage(
    thread.getId(),
    MessageRole.USER,
    "I need help with equations."
);
```

### 4. Run Agent

```java
ThreadRun run = client.createRun(thread.getId(), agent.getId());

// Poll for completion
while (run.getStatus() == RunStatus.QUEUED || run.getStatus() == RunStatus.IN_PROGRESS) {
    Thread.sleep(500);
    run = client.getRun(thread.getId(), run.getId());
}
```

### 5. Get Response

```java
PagedIterable<PersistentThreadMessage> messages = client.listMessages(thread.getId());
for (PersistentThreadMessage message : messages) {
    System.out.println(message.getRole() + ": " + message.getContent());
}
```

### 6. Cleanup

```java
client.deleteThread(thread.getId());
client.deleteAgent(agent.getId());
```

## Best Practices

1. **Use DefaultAzureCredential** for production authentication
2. **Poll with appropriate delays** — 500ms recommended between status checks
3. **Clean up resources** — Delete threads and agents when done
4. **Handle all run statuses** — Check for `REQUIRES_ACTION`, `FAILED`, `CANCELLED` (see the sketch below)
5. **Use the async client** for better throughput in high-concurrency scenarios

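A hedged sketch of practice 4, continuing the step-4 polling example; constant names beyond `QUEUED`/`IN_PROGRESS` are assumed to follow the same convention:

```java
// Continues the step-4 polling example above; RunStatus constants beyond
// QUEUED / IN_PROGRESS are assumptions, not confirmed against the SDK.
if (run.getStatus() == RunStatus.FAILED) {
    System.err.println("Run failed");
} else if (run.getStatus() == RunStatus.CANCELLED) {
    System.err.println("Run was cancelled");
} else if (run.getStatus() == RunStatus.REQUIRES_ACTION) {
    System.err.println("Run is waiting for tool outputs");
}
```
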
## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    PersistentAgent agent = client.createAgent(modelName, name, instructions);
} catch (HttpResponseException e) {
    System.err.println("Error: " + e.getResponse().getStatusCode() + " - " + e.getMessage());
}
```

## Reference Links

| Resource | URL |
|----------|-----|
| Maven Package | https://central.sonatype.com/artifact/com.azure/azure-ai-agents-persistent |
| GitHub Source | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-agents-persistent |

256 skills/azure-ai-anomalydetector-java/SKILL.md Normal file
@@ -0,0 +1,256 @@

---
name: azure-ai-anomalydetector-java
description: Build anomaly detection applications with Azure AI Anomaly Detector SDK for Java. Use when implementing univariate/multivariate anomaly detection, time-series analysis, or AI-powered monitoring.
package: com.azure:azure-ai-anomalydetector
---

# Azure AI Anomaly Detector SDK for Java

Build anomaly detection applications using the Azure AI Anomaly Detector SDK for Java.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-anomalydetector</artifactId>
    <version>3.0.0-beta.6</version>
</dependency>
```

## Client Creation

### Sync and Async Clients

```java
import com.azure.ai.anomalydetector.AnomalyDetectorClientBuilder;
import com.azure.ai.anomalydetector.MultivariateClient;
import com.azure.ai.anomalydetector.UnivariateClient;
import com.azure.core.credential.AzureKeyCredential;

String endpoint = System.getenv("AZURE_ANOMALY_DETECTOR_ENDPOINT");
String key = System.getenv("AZURE_ANOMALY_DETECTOR_API_KEY");

// Multivariate client for multiple correlated signals
MultivariateClient multivariateClient = new AnomalyDetectorClientBuilder()
    .credential(new AzureKeyCredential(key))
    .endpoint(endpoint)
    .buildMultivariateClient();

// Univariate client for single-variable analysis
UnivariateClient univariateClient = new AnomalyDetectorClientBuilder()
    .credential(new AzureKeyCredential(key))
    .endpoint(endpoint)
    .buildUnivariateClient();
```

### With DefaultAzureCredential

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

MultivariateClient client = new AnomalyDetectorClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint(endpoint)
    .buildMultivariateClient();
```

## Key Concepts

### Univariate Anomaly Detection

- **Batch Detection**: Analyze an entire time series at once
- **Streaming Detection**: Real-time detection on the latest data point
- **Change Point Detection**: Detect trend changes in a time series

### Multivariate Anomaly Detection

- Detects anomalies across up to 300 correlated signals
- Uses a Graph Attention Network to model inter-signal correlations
- Three-step process: Train → Inference → Results

## Core Patterns

### Univariate Batch Detection

```java
import com.azure.ai.anomalydetector.models.*;
import java.time.OffsetDateTime;
import java.util.List;

List<TimeSeriesPoint> series = List.of(
    new TimeSeriesPoint(OffsetDateTime.parse("2023-01-01T00:00:00Z"), 1.0),
    new TimeSeriesPoint(OffsetDateTime.parse("2023-01-02T00:00:00Z"), 2.5)
    // ... more data points (minimum 12 points required)
);

UnivariateDetectionOptions options = new UnivariateDetectionOptions(series)
    .setGranularity(TimeGranularity.DAILY)
    .setSensitivity(95);

UnivariateEntireDetectionResult result = univariateClient.detectUnivariateEntireSeries(options);

// Check for anomalies
for (int i = 0; i < result.getIsAnomaly().size(); i++) {
    if (result.getIsAnomaly().get(i)) {
        System.out.printf("Anomaly detected at index %d with value %.2f%n",
            i, series.get(i).getValue());
    }
}
```

### Univariate Last Point Detection (Streaming)

```java
UnivariateLastDetectionResult lastResult = univariateClient.detectUnivariateLastPoint(options);

if (lastResult.isAnomaly()) {
    System.out.println("Latest point is an anomaly!");
    System.out.printf("Expected: %.2f, Upper: %.2f, Lower: %.2f%n",
        lastResult.getExpectedValue(),
        lastResult.getUpperMargin(),
        lastResult.getLowerMargin());
}
```

### Change Point Detection

```java
UnivariateChangePointDetectionOptions changeOptions =
    new UnivariateChangePointDetectionOptions(series, TimeGranularity.DAILY);

UnivariateChangePointDetectionResult changeResult =
    univariateClient.detectUnivariateChangePoint(changeOptions);

for (int i = 0; i < changeResult.getIsChangePoint().size(); i++) {
    if (changeResult.getIsChangePoint().get(i)) {
        System.out.printf("Change point at index %d with confidence %.2f%n",
            i, changeResult.getConfidenceScores().get(i));
    }
}
```

### Multivariate Model Training

```java
import com.azure.ai.anomalydetector.models.*;

// Prepare a training request pointing at blob storage data
ModelInfo modelInfo = new ModelInfo()
    .setDataSource("https://storage.blob.core.windows.net/container/data.zip?sasToken")
    .setStartTime(OffsetDateTime.parse("2023-01-01T00:00:00Z"))
    .setEndTime(OffsetDateTime.parse("2023-06-01T00:00:00Z"))
    .setSlidingWindow(200)
    .setDisplayName("MyMultivariateModel");

// Train the model (a long-running operation on the service side)
AnomalyDetectionModel trainedModel = multivariateClient.trainMultivariateModel(modelInfo);

String modelId = trainedModel.getModelId();
System.out.println("Model ID: " + modelId);

// Check training status
AnomalyDetectionModel model = multivariateClient.getMultivariateModel(modelId);
System.out.println("Status: " + model.getModelInfo().getStatus());
```

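Because training is asynchronous on the service side, callers typically poll until the model leaves its in-progress states before running inference. A hedged sketch — the `ModelStatus` constant names are assumptions drawn from the service's model lifecycle:

```java
// Hedged polling sketch; ModelStatus constants (CREATED, RUNNING) are
// assumed. Assumes the enclosing method declares throws InterruptedException.
AnomalyDetectionModel current = multivariateClient.getMultivariateModel(modelId);
while (current.getModelInfo().getStatus() == ModelStatus.CREATED
        || current.getModelInfo().getStatus() == ModelStatus.RUNNING) {
    Thread.sleep(5_000);
    current = multivariateClient.getMultivariateModel(modelId);
}
System.out.println("Final status: " + current.getModelInfo().getStatus());
```
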
### Multivariate Batch Inference

```java
MultivariateBatchDetectionOptions detectionOptions = new MultivariateBatchDetectionOptions()
    .setDataSource("https://storage.blob.core.windows.net/container/inference-data.zip?sasToken")
    .setStartTime(OffsetDateTime.parse("2023-07-01T00:00:00Z"))
    .setEndTime(OffsetDateTime.parse("2023-07-31T00:00:00Z"))
    .setTopContributorCount(10);

MultivariateDetectionResult detectionResult =
    multivariateClient.detectMultivariateBatchAnomaly(modelId, detectionOptions);

String resultId = detectionResult.getResultId();

// Poll for results
MultivariateDetectionResult result = multivariateClient.getBatchDetectionResult(resultId);
for (AnomalyState state : result.getResults()) {
    if (state.getValue().isAnomaly()) {
        System.out.printf("Anomaly at %s, severity: %.2f%n",
            state.getTimestamp(),
            state.getValue().getSeverity());
    }
}
```

### Multivariate Last Point Detection

```java
MultivariateLastDetectionOptions lastOptions = new MultivariateLastDetectionOptions()
    .setVariables(List.of(
        new VariableValues("variable1", List.of("timestamp1"), List.of(1.0f)),
        new VariableValues("variable2", List.of("timestamp1"), List.of(2.5f))
    ))
    .setTopContributorCount(5);

MultivariateLastDetectionResult lastResult =
    multivariateClient.detectMultivariateLastAnomaly(modelId, lastOptions);

if (lastResult.getValue().isAnomaly()) {
    System.out.println("Anomaly detected!");
    // Check contributing variables
    for (AnomalyContributor contributor : lastResult.getValue().getInterpretation()) {
        System.out.printf("Variable: %s, Contribution: %.2f%n",
            contributor.getVariable(),
            contributor.getContributionScore());
    }
}
```

### Model Management

```java
// List all models
PagedIterable<AnomalyDetectionModel> models = multivariateClient.listMultivariateModels();
for (AnomalyDetectionModel m : models) {
    System.out.printf("Model: %s, Status: %s%n",
        m.getModelId(),
        m.getModelInfo().getStatus());
}

// Delete a model
multivariateClient.deleteMultivariateModel(modelId);
```

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    univariateClient.detectUnivariateEntireSeries(options);
} catch (HttpResponseException e) {
    System.out.println("Status code: " + e.getResponse().getStatusCode());
    System.out.println("Error: " + e.getMessage());
}
```

## Environment Variables

```bash
AZURE_ANOMALY_DETECTOR_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
AZURE_ANOMALY_DETECTOR_API_KEY=<your-api-key>
```

## Best Practices

1. **Minimum Data Points**: Univariate requires at least 12 points; more data improves accuracy
2. **Granularity Alignment**: Match `TimeGranularity` to your actual data frequency
3. **Sensitivity Tuning**: Higher values (0-99) detect more anomalies
4. **Multivariate Training**: Use a 200-1000 sliding window depending on pattern complexity
5. **Error Handling**: Always handle `HttpResponseException` for API errors

## Trigger Phrases

- "anomaly detection Java"
- "detect anomalies time series"
- "multivariate anomaly Java"
- "univariate anomaly detection"
- "streaming anomaly detection"
- "change point detection"
- "Azure AI Anomaly Detector"

282 skills/azure-ai-contentsafety-java/SKILL.md Normal file
@@ -0,0 +1,282 @@

---
name: azure-ai-contentsafety-java
description: Build content moderation applications with Azure AI Content Safety SDK for Java. Use when implementing text/image analysis, blocklist management, or harm detection for hate, violence, sexual content, and self-harm.
package: com.azure:azure-ai-contentsafety
---

# Azure AI Content Safety SDK for Java

Build content moderation applications using the Azure AI Content Safety SDK for Java.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-contentsafety</artifactId>
    <version>1.1.0-beta.1</version>
</dependency>
```

## Client Creation

### With API Key

```java
import com.azure.ai.contentsafety.ContentSafetyClient;
import com.azure.ai.contentsafety.ContentSafetyClientBuilder;
import com.azure.ai.contentsafety.BlocklistClient;
import com.azure.ai.contentsafety.BlocklistClientBuilder;
import com.azure.core.credential.KeyCredential;

String endpoint = System.getenv("CONTENT_SAFETY_ENDPOINT");
String key = System.getenv("CONTENT_SAFETY_KEY");

ContentSafetyClient contentSafetyClient = new ContentSafetyClientBuilder()
    .credential(new KeyCredential(key))
    .endpoint(endpoint)
    .buildClient();

BlocklistClient blocklistClient = new BlocklistClientBuilder()
    .credential(new KeyCredential(key))
    .endpoint(endpoint)
    .buildClient();
```

### With DefaultAzureCredential

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

ContentSafetyClient client = new ContentSafetyClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint(endpoint)
    .buildClient();
```

## Key Concepts

### Harm Categories

| Category | Description |
|----------|-------------|
| Hate | Discriminatory language based on identity groups |
| Sexual | Sexual content, relationships, acts |
| Violence | Physical harm, weapons, injury |
| Self-harm | Self-injury, suicide-related content |

### Severity Levels

- Text: 0-7 scale (default output is 0, 2, 4, 6)
- Image: 0, 2, 4, 6 (trimmed scale)

## Core Patterns

### Analyze Text

```java
import com.azure.ai.contentsafety.models.*;

AnalyzeTextResult result = contentSafetyClient.analyzeText(
    new AnalyzeTextOptions("This is text to analyze"));

for (TextCategoriesAnalysis category : result.getCategoriesAnalysis()) {
    System.out.printf("Category: %s, Severity: %d%n",
        category.getCategory(),
        category.getSeverity());
}
```

### Analyze Text with Options

```java
import java.util.Arrays;

AnalyzeTextOptions options = new AnalyzeTextOptions("Text to analyze")
    .setCategories(Arrays.asList(
        TextCategory.HATE,
        TextCategory.VIOLENCE))
    .setOutputType(AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS);

AnalyzeTextResult result = contentSafetyClient.analyzeText(options);
```

### Analyze Text with Blocklist

```java
AnalyzeTextOptions options = new AnalyzeTextOptions("I h*te you and want to k*ll you")
    .setBlocklistNames(Arrays.asList("my-blocklist"))
    .setHaltOnBlocklistHit(true);

AnalyzeTextResult result = contentSafetyClient.analyzeText(options);

if (result.getBlocklistsMatch() != null) {
    for (TextBlocklistMatch match : result.getBlocklistsMatch()) {
        System.out.printf("Blocklist: %s, Item: %s, Text: %s%n",
            match.getBlocklistName(),
            match.getBlocklistItemId(),
            match.getBlocklistItemText());
    }
}
```

### Analyze Image

```java
import com.azure.ai.contentsafety.models.*;
import com.azure.core.util.BinaryData;
import java.nio.file.Files;
import java.nio.file.Paths;

// From file
byte[] imageBytes = Files.readAllBytes(Paths.get("image.png"));
ContentSafetyImageData imageData = new ContentSafetyImageData()
    .setContent(BinaryData.fromBytes(imageBytes));

AnalyzeImageResult result = contentSafetyClient.analyzeImage(
    new AnalyzeImageOptions(imageData));

for (ImageCategoriesAnalysis category : result.getCategoriesAnalysis()) {
    System.out.printf("Category: %s, Severity: %d%n",
        category.getCategory(),
        category.getSeverity());
}
```

### Analyze Image from URL

```java
ContentSafetyImageData imageData = new ContentSafetyImageData()
    .setBlobUrl("https://example.com/image.jpg");

AnalyzeImageResult result = contentSafetyClient.analyzeImage(
    new AnalyzeImageOptions(imageData));
```

## Blocklist Management

### Create or Update Blocklist

```java
import com.azure.core.http.rest.RequestOptions;
import com.azure.core.http.rest.Response;
import com.azure.core.util.BinaryData;
import java.util.Map;

Map<String, String> description = Map.of("description", "Custom blocklist");
BinaryData resource = BinaryData.fromObject(description);

Response<BinaryData> response = blocklistClient.createOrUpdateTextBlocklistWithResponse(
    "my-blocklist", resource, new RequestOptions());

if (response.getStatusCode() == 201) {
    System.out.println("Blocklist created");
} else if (response.getStatusCode() == 200) {
    System.out.println("Blocklist updated");
}
```

### Add Block Items

```java
import com.azure.ai.contentsafety.models.*;
import java.util.Arrays;
import java.util.List;

List<TextBlocklistItem> items = Arrays.asList(
    new TextBlocklistItem("badword1").setDescription("Offensive term"),
    new TextBlocklistItem("badword2").setDescription("Another term")
);

AddOrUpdateTextBlocklistItemsResult result = blocklistClient.addOrUpdateBlocklistItems(
    "my-blocklist",
    new AddOrUpdateTextBlocklistItemsOptions(items));

for (TextBlocklistItem item : result.getBlocklistItems()) {
    System.out.printf("Added: %s (ID: %s)%n",
        item.getText(),
        item.getBlocklistItemId());
}
```

### List Blocklists

```java
PagedIterable<TextBlocklist> blocklists = blocklistClient.listTextBlocklists();

for (TextBlocklist blocklist : blocklists) {
    System.out.printf("Blocklist: %s, Description: %s%n",
        blocklist.getName(),
        blocklist.getDescription());
}
```

### Get Blocklist

```java
TextBlocklist blocklist = blocklistClient.getTextBlocklist("my-blocklist");
System.out.println("Name: " + blocklist.getName());
```

### List Block Items

```java
PagedIterable<TextBlocklistItem> items =
    blocklistClient.listTextBlocklistItems("my-blocklist");

for (TextBlocklistItem item : items) {
    System.out.printf("ID: %s, Text: %s%n",
        item.getBlocklistItemId(),
        item.getText());
}
```

### Remove Block Items

```java
List<String> itemIds = Arrays.asList("item-id-1", "item-id-2");

blocklistClient.removeBlocklistItems(
    "my-blocklist",
    new RemoveTextBlocklistItemsOptions(itemIds));
```

### Delete Blocklist

```java
blocklistClient.deleteTextBlocklist("my-blocklist");
```

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    contentSafetyClient.analyzeText(new AnalyzeTextOptions("test"));
} catch (HttpResponseException e) {
    System.out.println("Status: " + e.getResponse().getStatusCode());
    System.out.println("Error: " + e.getMessage());
    // Common error codes: InvalidRequestBody, ResourceNotFound, TooManyRequests
}
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
CONTENT_SAFETY_KEY=<your-api-key>
```

## Best Practices

1. **Blocklist Delay**: Changes take ~5 minutes to take effect
2. **Category Selection**: Only request the categories you need, to reduce latency
3. **Severity Thresholds**: Typically block severity >= 4 for strict moderation (see the helper sketch below)
4. **Batch Processing**: Process multiple items in parallel for throughput
5. **Caching**: Cache blocklist results where appropriate

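A hedged helper for practice 3 — flag content when any category reaches the blocking threshold. The types are those used in the examples above; the null check simply makes the helper safe if the getter returns a boxed `Integer`:

```java
import com.azure.ai.contentsafety.models.AnalyzeTextResult;
import com.azure.ai.contentsafety.models.TextCategoriesAnalysis;

// Sketch: block when any category's severity reaches the threshold.
static boolean isBlocked(AnalyzeTextResult result, int threshold) {
    for (TextCategoriesAnalysis category : result.getCategoriesAnalysis()) {
        Integer severity = category.getSeverity();
        if (severity != null && severity >= threshold) {
            return true;
        }
    }
    return false;
}
```

Call it as `isBlocked(result, 4)` to apply the strict threshold suggested above.
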
## Trigger Phrases

- "content safety Java"
- "content moderation Azure"
- "analyze text safety"
- "image moderation Java"
- "blocklist management"
- "hate speech detection"
- "harmful content filter"

214 skills/azure-ai-contentsafety-py/SKILL.md Normal file
@@ -0,0 +1,214 @@

---
name: azure-ai-contentsafety-py
description: |
  Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
  Triggers: "azure-ai-contentsafety", "ContentSafetyClient", "content moderation", "harmful content", "text analysis", "image analysis".
package: azure-ai-contentsafety
---

# Azure AI Content Safety SDK for Python

Detect harmful user-generated and AI-generated content in applications.

## Installation

```bash
pip install azure-ai-contentsafety
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```

## Authentication

### API Key

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)
```

### Entra ID

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Analyze Text

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check each category
for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```

## Analyze Image

```python
import base64

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

request = AnalyzeImageOptions(
    image=ImageData(content=image_data)
)

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```

### Image from URL

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)

response = client.analyze_image(request)
```

## Text Blocklist Management

### Create Blocklist

```python
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)

result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)
```

### Add Block Items

```python
from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)

result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
```

### Analyze with Blocklist

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)

response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```

## Severity Levels

Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
```

## Harm Categories

| Category | Description |
|----------|-------------|
| `Hate` | Attacks based on identity (race, religion, gender, etc.) |
| `Sexual` | Sexual content, relationships, anatomy |
| `Violence` | Physical harm, weapons, injury |
| `SelfHarm` | Self-injury, suicide, eating disorders |

## Severity Scale

| Level | Text Range | Image Range | Meaning |
|-------|------------|-------------|---------|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |

## Client Types

| Client | Purpose |
|--------|---------|
| `ContentSafetyClient` | Analyze text and images |
| `BlocklistClient` | Manage custom blocklists |

## Best Practices

1. **Use blocklists** for domain-specific terms
2. **Set severity thresholds** appropriate for your use case (see the helper below)
3. **Handle multiple categories** — content can be harmful in multiple ways
4. **Use `halt_on_blocklist_hit`** for immediate rejection
5. **Log analysis results** for audit and improvement
6. **Consider 8-severity mode** for finer-grained control
7. **Pre-moderate AI outputs** before showing them to users

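A hedged helper combining practices 2-4 — severity thresholding plus blocklist rejection. The attribute names match the response objects used in the examples above:

```python
def is_allowed(response, max_severity: int = 2) -> bool:
    """Allow content only if no blocklist matched and every category's
    severity is at or below max_severity."""
    if getattr(response, "blocklists_match", None):
        return False
    return all((r.severity or 0) <= max_severity
               for r in response.categories_analysis)
```

Used as a gate on the earlier example: `is_allowed(client.analyze_text(request))`.
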
300 skills/azure-ai-contentsafety-ts/SKILL.md Normal file
@@ -0,0 +1,300 @@

---
name: azure-ai-contentsafety-ts
description: Analyze text and images for harmful content using Azure AI Content Safety (@azure-rest/ai-content-safety). Use when moderating user-generated content, detecting hate speech, violence, sexual content, or self-harm, or managing custom blocklists.
package: @azure-rest/ai-content-safety
---

# Azure AI Content Safety REST SDK for TypeScript

Analyze text and images for harmful content with customizable blocklists.

## Installation

```bash
npm install @azure-rest/ai-content-safety @azure/identity @azure/core-auth
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<api-key>
```

## Authentication

**Important**: This is a REST client. `ContentSafetyClient` is a **function**, not a class.

### API Key

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new AzureKeyCredential(process.env.CONTENT_SAFETY_KEY!)
);
```

### DefaultAzureCredential

```typescript
import ContentSafetyClient from "@azure-rest/ai-content-safety";
import { DefaultAzureCredential } from "@azure/identity";

const client = ContentSafetyClient(
  process.env.CONTENT_SAFETY_ENDPOINT!,
  new DefaultAzureCredential()
);
```

## Analyze Text

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

const result = await client.path("/text:analyze").post({
  body: {
    text: "Text content to analyze",
    categories: ["Hate", "Sexual", "Violence", "SelfHarm"],
    outputType: "FourSeverityLevels" // or "EightSeverityLevels"
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

## Analyze Image

### Base64 Content

```typescript
import { readFileSync } from "node:fs";

const imageBuffer = readFileSync("./image.png");
const base64Image = imageBuffer.toString("base64");

const result = await client.path("/image:analyze").post({
  body: {
    image: { content: base64Image }
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

for (const analysis of result.body.categoriesAnalysis) {
  console.log(`${analysis.category}: severity ${analysis.severity}`);
}
```

### Blob URL

```typescript
const result = await client.path("/image:analyze").post({
  body: {
    image: { blobUrl: "https://storage.blob.core.windows.net/container/image.png" }
  }
});
```

## Blocklist Management

### Create Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}", "my-blocklist")
  .patch({
    contentType: "application/merge-patch+json",
    body: {
      description: "Custom blocklist for prohibited terms"
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

console.log(`Created: ${result.body.blocklistName}`);
```

### Add Items to Blocklist

```typescript
const result = await client
  .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", "my-blocklist")
  .post({
    body: {
      blocklistItems: [
        { text: "prohibited-term-1", description: "First blocked term" },
        { text: "prohibited-term-2", description: "Second blocked term" }
      ]
    }
  });

if (isUnexpected(result)) {
  throw result.body;
}

for (const item of result.body.blocklistItems ?? []) {
  console.log(`Added: ${item.blocklistItemId}`);
}
```

### Analyze with Blocklist

```typescript
const result = await client.path("/text:analyze").post({
  body: {
    text: "Text that might contain blocked terms",
    blocklistNames: ["my-blocklist"],
    haltOnBlocklistHit: false
  }
});

if (isUnexpected(result)) {
  throw result.body;
}

// Check blocklist matches
if (result.body.blocklistsMatch) {
  for (const match of result.body.blocklistsMatch) {
    console.log(`Blocked: "${match.blocklistItemText}" from ${match.blocklistName}`);
  }
}
```

### List Blocklists

```typescript
const result = await client.path("/text/blocklists").get();

if (isUnexpected(result)) {
  throw result.body;
}

for (const blocklist of result.body.value ?? []) {
  console.log(`${blocklist.blocklistName}: ${blocklist.description}`);
}
```

### Delete Blocklist

```typescript
await client.path("/text/blocklists/{blocklistName}", "my-blocklist").delete();
```

## Harm Categories

| Category | API Term | Description |
|----------|----------|-------------|
| Hate and Fairness | `Hate` | Discriminatory language targeting identity groups |
| Sexual | `Sexual` | Sexual content, nudity, pornography |
| Violence | `Violence` | Physical harm, weapons, terrorism |
| Self-Harm | `SelfHarm` | Self-injury, suicide, eating disorders |

## Severity Levels

| Level | Risk | Recommended Action |
|-------|------|-------------------|
| 0 | Safe | Allow |
| 2 | Low | Review or allow with warning |
| 4 | Medium | Block or require human review |
| 6 | High | Block immediately |

**Output Types**:

- `FourSeverityLevels` (default): Returns 0, 2, 4, 6
- `EightSeverityLevels`: Returns 0-7

## Content Moderation Helper

```typescript
import ContentSafetyClient, { isUnexpected } from "@azure-rest/ai-content-safety";

interface ModerationResult {
  isAllowed: boolean;
  flaggedCategories: string[];
  maxSeverity: number;
  blocklistMatches: string[];
}

async function moderateContent(
  client: ReturnType<typeof ContentSafetyClient>,
  text: string,
  maxAllowedSeverity = 2,
  blocklistNames: string[] = []
): Promise<ModerationResult> {
  const result = await client.path("/text:analyze").post({
    body: { text, blocklistNames, haltOnBlocklistHit: false }
  });

  if (isUnexpected(result)) {
    throw result.body;
  }

  const flaggedCategories = result.body.categoriesAnalysis
    .filter(c => (c.severity ?? 0) > maxAllowedSeverity)
    .map(c => c.category!);

  // The leading 0 guards against Math.max(...[]) returning -Infinity
  const maxSeverity = Math.max(
    0,
    ...result.body.categoriesAnalysis.map(c => c.severity ?? 0)
  );

  const blocklistMatches = (result.body.blocklistsMatch ?? [])
    .map(m => m.blocklistItemText!);

  return {
    isAllowed: flaggedCategories.length === 0 && blocklistMatches.length === 0,
    flaggedCategories,
    maxSeverity,
    blocklistMatches
  };
}
```

## API Endpoints

| Operation | Method | Path |
|-----------|--------|------|
| Analyze Text | POST | `/text:analyze` |
| Analyze Image | POST | `/image:analyze` |
| Create/Update Blocklist | PATCH | `/text/blocklists/{blocklistName}` |
| List Blocklists | GET | `/text/blocklists` |
| Delete Blocklist | DELETE | `/text/blocklists/{blocklistName}` |
| Add Blocklist Items | POST | `/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems` |
| List Blocklist Items | GET | `/text/blocklists/{blocklistName}/blocklistItems` |
| Remove Blocklist Items | POST | `/text/blocklists/{blocklistName}:removeBlocklistItems` |

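The remove-items endpoint from the table above follows the same call pattern as the other blocklist operations; a hedged sketch, where the `blocklistItemIds` body field name is an assumption taken from the REST API shape:

```typescript
// Sketch: remove previously added items by ID (field name assumed).
const removeResult = await client
  .path("/text/blocklists/{blocklistName}:removeBlocklistItems", "my-blocklist")
  .post({
    body: { blocklistItemIds: ["item-id-1", "item-id-2"] }
  });

if (isUnexpected(removeResult)) {
  throw removeResult.body;
}
```
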
## Key Types

```typescript
import ContentSafetyClient, {
  isUnexpected,
  AnalyzeTextParameters,
  AnalyzeImageParameters,
  TextCategoriesAnalysisOutput,
  ImageCategoriesAnalysisOutput,
  TextBlocklist,
  TextBlocklistItem
} from "@azure-rest/ai-content-safety";
```

## Best Practices

1. **Always use `isUnexpected()`** - Type guard for error handling
2. **Set appropriate thresholds** - Different categories may need different severity thresholds
3. **Use blocklists for domain-specific terms** - Supplement AI detection with custom rules
4. **Log moderation decisions** - Keep an audit trail for compliance
5. **Handle edge cases** - Empty text, very long text, unsupported image formats


273
skills/azure-ai-contentunderstanding-py/SKILL.md
Normal file
@@ -0,0 +1,273 @@

---
name: azure-ai-contentunderstanding-py
description: |
  Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video.
  Triggers: "azure-ai-contentunderstanding", "ContentUnderstandingClient", "multimodal analysis", "document extraction", "video analysis", "audio transcription".
package: azure-ai-contentunderstanding
---

# Azure AI Content Understanding SDK for Python

Multimodal AI service that extracts semantic content from documents, video, audio, and image files for RAG and automated workflows.

## Installation

```bash
pip install azure-ai-contentunderstanding
```

## Environment Variables

```bash
CONTENTUNDERSTANDING_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
```

## Authentication

```python
import os
from azure.ai.contentunderstanding import ContentUnderstandingClient
from azure.identity import DefaultAzureCredential

endpoint = os.environ["CONTENTUNDERSTANDING_ENDPOINT"]
credential = DefaultAzureCredential()
client = ContentUnderstandingClient(endpoint=endpoint, credential=credential)
```

## Core Workflow

Content Understanding operations are asynchronous long-running operations:

1. **Begin Analysis** — Start the analysis operation with `begin_analyze()` (returns a poller)
2. **Poll for Results** — Poll until analysis completes (SDK handles this with `.result()`)
3. **Process Results** — Extract structured results from `AnalyzeResult.contents`

## Prebuilt Analyzers

| Analyzer | Content Type | Purpose |
|----------|--------------|---------|
| `prebuilt-documentSearch` | Documents | Extract markdown for RAG applications |
| `prebuilt-imageSearch` | Images | Extract content from images |
| `prebuilt-audioSearch` | Audio | Transcribe audio with timing |
| `prebuilt-videoSearch` | Video | Extract frames, transcripts, summaries |
| `prebuilt-invoice` | Documents | Extract invoice fields |

## Analyze Document

```python
import os
from azure.ai.contentunderstanding import ContentUnderstandingClient
from azure.ai.contentunderstanding.models import AnalyzeInput
from azure.identity import DefaultAzureCredential

endpoint = os.environ["CONTENTUNDERSTANDING_ENDPOINT"]
client = ContentUnderstandingClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential()
)

# Analyze document from URL
poller = client.begin_analyze(
    analyzer_id="prebuilt-documentSearch",
    inputs=[AnalyzeInput(url="https://example.com/document.pdf")]
)

result = poller.result()

# Access markdown content (contents is a list)
content = result.contents[0]
print(content.markdown)
```

## Access Document Content Details

```python
from azure.ai.contentunderstanding.models import MediaContentKind, DocumentContent

content = result.contents[0]
if content.kind == MediaContentKind.DOCUMENT:
    document_content: DocumentContent = content  # type: ignore
    print(document_content.start_page_number)
```

## Analyze Image

```python
from azure.ai.contentunderstanding.models import AnalyzeInput

poller = client.begin_analyze(
    analyzer_id="prebuilt-imageSearch",
    inputs=[AnalyzeInput(url="https://example.com/image.jpg")]
)
result = poller.result()
content = result.contents[0]
print(content.markdown)
```

## Analyze Video

```python
from azure.ai.contentunderstanding.models import AnalyzeInput

poller = client.begin_analyze(
    analyzer_id="prebuilt-videoSearch",
    inputs=[AnalyzeInput(url="https://example.com/video.mp4")]
)

result = poller.result()

# Access video content (AudioVisualContent)
content = result.contents[0]

# Get transcript phrases with timing
for phrase in content.transcript_phrases:
    print(f"[{phrase.start_time} - {phrase.end_time}]: {phrase.text}")

# Get key frames (for video)
for frame in content.key_frames:
    print(f"Frame at {frame.time}: {frame.description}")
```

## Analyze Audio

```python
from azure.ai.contentunderstanding.models import AnalyzeInput

poller = client.begin_analyze(
    analyzer_id="prebuilt-audioSearch",
    inputs=[AnalyzeInput(url="https://example.com/audio.mp3")]
)

result = poller.result()

# Access audio transcript
content = result.contents[0]
for phrase in content.transcript_phrases:
    print(f"[{phrase.start_time}] {phrase.text}")
```

## Custom Analyzers

Create custom analyzers with field schemas for specialized extraction:

```python
# Create custom analyzer
analyzer = client.create_analyzer(
    analyzer_id="my-invoice-analyzer",
    analyzer={
        "description": "Custom invoice analyzer",
        "base_analyzer_id": "prebuilt-documentSearch",
        "field_schema": {
            "fields": {
                "vendor_name": {"type": "string"},
                "invoice_total": {"type": "number"},
                "line_items": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "description": {"type": "string"},
                            "amount": {"type": "number"}
                        }
                    }
                }
            }
        }
    }
)

# Use custom analyzer
from azure.ai.contentunderstanding.models import AnalyzeInput

poller = client.begin_analyze(
    analyzer_id="my-invoice-analyzer",
    inputs=[AnalyzeInput(url="https://example.com/invoice.pdf")]
)

result = poller.result()

# Access extracted fields via the content item,
# consistent with the result.contents[0] access used above
fields = result.contents[0].fields
print(fields["vendor_name"])
print(fields["invoice_total"])
```

## Analyzer Management

```python
# List all analyzers
analyzers = client.list_analyzers()
for analyzer in analyzers:
    print(f"{analyzer.analyzer_id}: {analyzer.description}")

# Get specific analyzer
analyzer = client.get_analyzer("prebuilt-documentSearch")

# Delete custom analyzer
client.delete_analyzer("my-custom-analyzer")
```

## Async Client

```python
import asyncio
import os
from azure.ai.contentunderstanding.aio import ContentUnderstandingClient
from azure.ai.contentunderstanding.models import AnalyzeInput
from azure.identity.aio import DefaultAzureCredential

async def analyze_document():
    endpoint = os.environ["CONTENTUNDERSTANDING_ENDPOINT"]
    credential = DefaultAzureCredential()

    async with ContentUnderstandingClient(
        endpoint=endpoint,
        credential=credential
    ) as client:
        poller = await client.begin_analyze(
            analyzer_id="prebuilt-documentSearch",
            inputs=[AnalyzeInput(url="https://example.com/doc.pdf")]
        )
        result = await poller.result()
        content = result.contents[0]
        return content.markdown

asyncio.run(analyze_document())
```

## Content Types

| Class | For | Provides |
|-------|-----|----------|
| `DocumentContent` | PDF, images, Office docs | Pages, tables, figures, paragraphs |
| `AudioVisualContent` | Audio, video files | Transcript phrases, timing, key frames |

Both derive from `MediaContent`, which provides basic info and a markdown representation.

## Model Imports

```python
from azure.ai.contentunderstanding.models import (
    AnalyzeInput,
    AnalyzeResult,
    MediaContentKind,
    DocumentContent,
    AudioVisualContent,
)
```

## Client Types

| Client | Purpose |
|--------|---------|
| `ContentUnderstandingClient` | Sync client for all operations |
| `ContentUnderstandingClient` (aio) | Async client for all operations |

## Best Practices

1. **Use `begin_analyze` with `AnalyzeInput`** — this is the correct method signature
2. **Access results via `result.contents[0]`** — results are returned as a list
3. **Use prebuilt analyzers** for common scenarios (document/image/audio/video search)
4. **Create custom analyzers** only for domain-specific field extraction
5. **Use async client** for high-throughput scenarios with `azure.identity.aio` credentials
6. **Handle long-running operations** — video/audio analysis can take minutes (see the polling sketch below)
7. **Use URL sources** when possible to avoid upload overhead
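
For long analyses, the poller can be checked without blocking. A minimal sketch, assuming the standard `azure-core` `LROPoller` surface (`done()` and `result()`); the 10-second interval is illustrative:

```python
import time

poller = client.begin_analyze(
    analyzer_id="prebuilt-videoSearch",
    inputs=[AnalyzeInput(url="https://example.com/video.mp4")]
)

# Do other work while the service processes the video.
while not poller.done():
    time.sleep(10)  # poll periodically instead of blocking in .result()

result = poller.result()
```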

337
skills/azure-ai-document-intelligence-dotnet/SKILL.md
Normal file
@@ -0,0 +1,337 @@

---
name: azure-ai-document-intelligence-dotnet
description: |
  Azure AI Document Intelligence SDK for .NET. Extract text, tables, and structured data from documents using prebuilt and custom models. Use for invoice processing, receipt extraction, ID document analysis, and custom document models. Triggers: "Document Intelligence", "DocumentIntelligenceClient", "form recognizer", "invoice extraction", "receipt OCR", "document analysis .NET".
package: Azure.AI.DocumentIntelligence
---

# Azure.AI.DocumentIntelligence (.NET)

Extract text, tables, and structured data from documents using prebuilt and custom models.

## Installation

```bash
dotnet add package Azure.AI.DocumentIntelligence
dotnet add package Azure.Identity
```

**Current Version**: v1.0.0 (GA)

## Environment Variables

```bash
DOCUMENT_INTELLIGENCE_ENDPOINT=https://<resource-name>.cognitiveservices.azure.com/
DOCUMENT_INTELLIGENCE_API_KEY=<your-api-key>
BLOB_CONTAINER_SAS_URL=https://<storage>.blob.core.windows.net/<container>?<sas-token>
```

## Authentication

### Microsoft Entra ID (Recommended)

```csharp
using Azure.Identity;
using Azure.AI.DocumentIntelligence;

string endpoint = Environment.GetEnvironmentVariable("DOCUMENT_INTELLIGENCE_ENDPOINT");
var credential = new DefaultAzureCredential();
var client = new DocumentIntelligenceClient(new Uri(endpoint), credential);
```

> **Note**: Entra ID requires a **custom subdomain** (e.g., `https://<resource-name>.cognitiveservices.azure.com/`), not a regional endpoint.

### API Key

```csharp
using Azure;
using Azure.AI.DocumentIntelligence;

string endpoint = Environment.GetEnvironmentVariable("DOCUMENT_INTELLIGENCE_ENDPOINT");
string apiKey = Environment.GetEnvironmentVariable("DOCUMENT_INTELLIGENCE_API_KEY");
var client = new DocumentIntelligenceClient(new Uri(endpoint), new AzureKeyCredential(apiKey));
```

## Client Types

| Client | Purpose |
|--------|---------|
| `DocumentIntelligenceClient` | Analyze documents, classify documents |
| `DocumentIntelligenceAdministrationClient` | Build/manage custom models and classifiers |

## Prebuilt Models

| Model ID | Description |
|----------|-------------|
| `prebuilt-read` | Extract text, languages, handwriting |
| `prebuilt-layout` | Extract text, tables, selection marks, structure |
| `prebuilt-invoice` | Extract invoice fields (vendor, items, totals) |
| `prebuilt-receipt` | Extract receipt fields (merchant, items, total) |
| `prebuilt-idDocument` | Extract ID document fields (name, DOB, address) |
| `prebuilt-businessCard` | Extract business card fields |
| `prebuilt-tax.us.w2` | Extract W-2 tax form fields |
| `prebuilt-healthInsuranceCard.us` | Extract health insurance card fields |

## Core Workflows

### 1. Analyze Invoice

```csharp
using Azure.AI.DocumentIntelligence;

Uri invoiceUri = new Uri("https://example.com/invoice.pdf");

Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    "prebuilt-invoice",
    invoiceUri);

AnalyzeResult result = operation.Value;

foreach (AnalyzedDocument document in result.Documents)
{
    if (document.Fields.TryGetValue("VendorName", out DocumentField vendorNameField)
        && vendorNameField.FieldType == DocumentFieldType.String)
    {
        string vendorName = vendorNameField.ValueString;
        Console.WriteLine($"Vendor Name: '{vendorName}', confidence: {vendorNameField.Confidence}");
    }

    if (document.Fields.TryGetValue("InvoiceTotal", out DocumentField invoiceTotalField)
        && invoiceTotalField.FieldType == DocumentFieldType.Currency)
    {
        CurrencyValue invoiceTotal = invoiceTotalField.ValueCurrency;
        Console.WriteLine($"Invoice Total: '{invoiceTotal.CurrencySymbol}{invoiceTotal.Amount}'");
    }

    // Extract line items
    if (document.Fields.TryGetValue("Items", out DocumentField itemsField)
        && itemsField.FieldType == DocumentFieldType.List)
    {
        foreach (DocumentField item in itemsField.ValueList)
        {
            var itemFields = item.ValueDictionary;
            if (itemFields.TryGetValue("Description", out DocumentField descField))
                Console.WriteLine($"  Item: {descField.ValueString}");
        }
    }
}
```

### 2. Extract Layout (Text, Tables, Structure)

```csharp
Uri fileUri = new Uri("https://example.com/document.pdf");

Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    "prebuilt-layout",
    fileUri);

AnalyzeResult result = operation.Value;

// Extract text by page
foreach (DocumentPage page in result.Pages)
{
    Console.WriteLine($"Page {page.PageNumber}: {page.Lines.Count} lines, {page.Words.Count} words");

    foreach (DocumentLine line in page.Lines)
    {
        Console.WriteLine($"  Line: '{line.Content}'");
    }
}

// Extract tables
foreach (DocumentTable table in result.Tables)
{
    Console.WriteLine($"Table: {table.RowCount} rows x {table.ColumnCount} columns");
    foreach (DocumentTableCell cell in table.Cells)
    {
        Console.WriteLine($"  Cell ({cell.RowIndex}, {cell.ColumnIndex}): {cell.Content}");
    }
}
```

### 3. Analyze Receipt

```csharp
Operation<AnalyzeResult> operation = await client.AnalyzeDocumentAsync(
    WaitUntil.Completed,
    "prebuilt-receipt",
    receiptUri);

AnalyzeResult result = operation.Value;

foreach (AnalyzedDocument document in result.Documents)
{
    if (document.Fields.TryGetValue("MerchantName", out DocumentField merchantField))
        Console.WriteLine($"Merchant: {merchantField.ValueString}");

    if (document.Fields.TryGetValue("Total", out DocumentField totalField))
        Console.WriteLine($"Total: {totalField.ValueCurrency.Amount}");

    if (document.Fields.TryGetValue("TransactionDate", out DocumentField dateField))
        Console.WriteLine($"Date: {dateField.ValueDate}");
}
```

### 4. Build Custom Model

```csharp
var adminClient = new DocumentIntelligenceAdministrationClient(
    new Uri(endpoint),
    new AzureKeyCredential(apiKey));

string modelId = "my-custom-model";
Uri blobContainerUri = new Uri("<blob-container-sas-url>");

var blobSource = new BlobContentSource(blobContainerUri);
var options = new BuildDocumentModelOptions(modelId, DocumentBuildMode.Template, blobSource);

Operation<DocumentModelDetails> operation = await adminClient.BuildDocumentModelAsync(
    WaitUntil.Completed,
    options);

DocumentModelDetails model = operation.Value;

Console.WriteLine($"Model ID: {model.ModelId}");
Console.WriteLine($"Created: {model.CreatedOn}");

foreach (var docType in model.DocumentTypes)
{
    Console.WriteLine($"Document type: {docType.Key}");
    foreach (var field in docType.Value.FieldSchema)
    {
        Console.WriteLine($"  Field: {field.Key}, Confidence: {docType.Value.FieldConfidence[field.Key]}");
    }
}
```

### 5. Build Document Classifier

```csharp
string classifierId = "my-classifier";
Uri blobContainerUri = new Uri("<blob-container-sas-url>");

var sourceA = new BlobContentSource(blobContainerUri) { Prefix = "TypeA/train" };
var sourceB = new BlobContentSource(blobContainerUri) { Prefix = "TypeB/train" };

var docTypes = new Dictionary<string, ClassifierDocumentTypeDetails>()
{
    { "TypeA", new ClassifierDocumentTypeDetails(sourceA) },
    { "TypeB", new ClassifierDocumentTypeDetails(sourceB) }
};

var options = new BuildClassifierOptions(classifierId, docTypes);

Operation<DocumentClassifierDetails> operation = await adminClient.BuildClassifierAsync(
    WaitUntil.Completed,
    options);

DocumentClassifierDetails classifier = operation.Value;
Console.WriteLine($"Classifier ID: {classifier.ClassifierId}");
```

### 6. Classify Document

```csharp
string classifierId = "my-classifier";
Uri documentUri = new Uri("https://example.com/document.pdf");

var options = new ClassifyDocumentOptions(classifierId, documentUri);

Operation<AnalyzeResult> operation = await client.ClassifyDocumentAsync(
    WaitUntil.Completed,
    options);

AnalyzeResult result = operation.Value;

foreach (AnalyzedDocument document in result.Documents)
{
    Console.WriteLine($"Document type: {document.DocumentType}, confidence: {document.Confidence}");
}
```

### 7. Manage Models

```csharp
// Get resource details
DocumentIntelligenceResourceDetails resourceDetails = await adminClient.GetResourceDetailsAsync();
Console.WriteLine($"Custom models: {resourceDetails.CustomDocumentModels.Count}/{resourceDetails.CustomDocumentModels.Limit}");

// Get specific model
DocumentModelDetails model = await adminClient.GetModelAsync("my-model-id");
Console.WriteLine($"Model: {model.ModelId}, Created: {model.CreatedOn}");

// List models
await foreach (DocumentModelDetails modelItem in adminClient.GetModelsAsync())
{
    Console.WriteLine($"Model: {modelItem.ModelId}");
}

// Delete model
await adminClient.DeleteModelAsync("my-model-id");
```

## Key Types Reference

| Type | Description |
|------|-------------|
| `DocumentIntelligenceClient` | Main client for analysis |
| `DocumentIntelligenceAdministrationClient` | Model management |
| `AnalyzeResult` | Result of document analysis |
| `AnalyzedDocument` | Single document within result |
| `DocumentField` | Extracted field with value and confidence |
| `DocumentFieldType` | String, Date, Number, Currency, etc. |
| `DocumentPage` | Page info (lines, words, selection marks) |
| `DocumentTable` | Extracted table with cells |
| `DocumentModelDetails` | Custom model metadata |
| `BlobContentSource` | Training data source |

## Build Modes

| Mode | Use Case |
|------|----------|
| `DocumentBuildMode.Template` | Fixed layout documents (forms) |
| `DocumentBuildMode.Neural` | Variable layout documents |
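
Switching modes is a one-argument change in the build call from workflow 4. A minimal sketch (the model ID is illustrative; `adminClient` and `blobContainerUri` are as defined above):

```csharp
// Neural mode handles documents whose layout varies between samples.
var neuralOptions = new BuildDocumentModelOptions(
    "my-neural-model",
    DocumentBuildMode.Neural,
    new BlobContentSource(blobContainerUri));

Operation<DocumentModelDetails> neuralBuild = await adminClient.BuildDocumentModelAsync(
    WaitUntil.Completed,
    neuralOptions);
```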

## Best Practices

1. **Use DefaultAzureCredential** for production
2. **Reuse client instances** — clients are thread-safe
3. **Handle long-running operations** — Use `WaitUntil.Completed` for simplicity
4. **Check field confidence** — Always verify the `Confidence` property
5. **Use appropriate model** — Prebuilt for common docs, custom for specialized
6. **Use custom subdomain** — Required for Entra ID authentication

## Error Handling

```csharp
using Azure;

try
{
    var operation = await client.AnalyzeDocumentAsync(
        WaitUntil.Completed,
        "prebuilt-invoice",
        documentUri);
}
catch (RequestFailedException ex)
{
    Console.WriteLine($"Error: {ex.Status} - {ex.Message}");
}
```

## Related SDKs

| SDK | Purpose | Install |
|-----|---------|---------|
| `Azure.AI.DocumentIntelligence` | Document analysis (this SDK) | `dotnet add package Azure.AI.DocumentIntelligence` |
| `Azure.AI.FormRecognizer` | Legacy SDK (deprecated) | Use DocumentIntelligence instead |

## Reference Links

| Resource | URL |
|----------|-----|
| NuGet Package | https://www.nuget.org/packages/Azure.AI.DocumentIntelligence |
| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.documentintelligence |
| GitHub Samples | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/documentintelligence/Azure.AI.DocumentIntelligence/samples |
| Document Intelligence Studio | https://documentintelligence.ai.azure.com/ |
| Prebuilt Models | https://aka.ms/azsdk/formrecognizer/models |

323
skills/azure-ai-document-intelligence-ts/SKILL.md
Normal file
@@ -0,0 +1,323 @@

---
name: azure-ai-document-intelligence-ts
description: Extract text, tables, and structured data from documents using Azure Document Intelligence (@azure-rest/ai-document-intelligence). Use when processing invoices, receipts, IDs, forms, or building custom document models.
package: @azure-rest/ai-document-intelligence
---

# Azure Document Intelligence REST SDK for TypeScript

Extract text, tables, and structured data from documents using prebuilt and custom models.

## Installation

```bash
npm install @azure-rest/ai-document-intelligence @azure/identity
```

## Environment Variables

```bash
DOCUMENT_INTELLIGENCE_ENDPOINT=https://<resource>.cognitiveservices.azure.com
DOCUMENT_INTELLIGENCE_API_KEY=<api-key>
```

## Authentication

**Important**: This is a REST client. `DocumentIntelligence` is a **function**, not a class.

### DefaultAzureCredential

```typescript
import DocumentIntelligence from "@azure-rest/ai-document-intelligence";
import { DefaultAzureCredential } from "@azure/identity";

const client = DocumentIntelligence(
  process.env.DOCUMENT_INTELLIGENCE_ENDPOINT!,
  new DefaultAzureCredential()
);
```

### API Key

```typescript
import DocumentIntelligence from "@azure-rest/ai-document-intelligence";

const client = DocumentIntelligence(
  process.env.DOCUMENT_INTELLIGENCE_ENDPOINT!,
  { key: process.env.DOCUMENT_INTELLIGENCE_API_KEY! }
);
```

## Analyze Document (URL)

```typescript
import DocumentIntelligence, {
  isUnexpected,
  getLongRunningPoller,
  AnalyzeOperationOutput
} from "@azure-rest/ai-document-intelligence";

const initialResponse = await client
  .path("/documentModels/{modelId}:analyze", "prebuilt-layout")
  .post({
    contentType: "application/json",
    body: {
      urlSource: "https://example.com/document.pdf"
    },
    queryParameters: { locale: "en-US" }
  });

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = (await poller.pollUntilDone()).body as AnalyzeOperationOutput;

console.log("Pages:", result.analyzeResult?.pages?.length);
console.log("Tables:", result.analyzeResult?.tables?.length);
```

## Analyze Document (Local File)

```typescript
import { readFile } from "node:fs/promises";

const fileBuffer = await readFile("./document.pdf");
const base64Source = fileBuffer.toString("base64");

const initialResponse = await client
  .path("/documentModels/{modelId}:analyze", "prebuilt-invoice")
  .post({
    contentType: "application/json",
    body: { base64Source }
  });

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = (await poller.pollUntilDone()).body as AnalyzeOperationOutput;
```

## Prebuilt Models

| Model ID | Description |
|----------|-------------|
| `prebuilt-read` | OCR - text and language extraction |
| `prebuilt-layout` | Text, tables, selection marks, structure |
| `prebuilt-invoice` | Invoice fields |
| `prebuilt-receipt` | Receipt fields |
| `prebuilt-idDocument` | ID document fields |
| `prebuilt-tax.us.w2` | W-2 tax form fields |
| `prebuilt-healthInsuranceCard.us` | Health insurance card fields |
| `prebuilt-contract` | Contract fields |
| `prebuilt-bankStatement.us` | Bank statement fields |

## Extract Invoice Fields

```typescript
const initialResponse = await client
  .path("/documentModels/{modelId}:analyze", "prebuilt-invoice")
  .post({
    contentType: "application/json",
    body: { urlSource: invoiceUrl }
  });

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = (await poller.pollUntilDone()).body as AnalyzeOperationOutput;

const invoice = result.analyzeResult?.documents?.[0];
if (invoice) {
  console.log("Vendor:", invoice.fields?.VendorName?.content);
  console.log("Total:", invoice.fields?.InvoiceTotal?.content);
  console.log("Due Date:", invoice.fields?.DueDate?.content);
}
```

## Extract Receipt Fields

```typescript
const initialResponse = await client
  .path("/documentModels/{modelId}:analyze", "prebuilt-receipt")
  .post({
    contentType: "application/json",
    body: { urlSource: receiptUrl }
  });

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = (await poller.pollUntilDone()).body as AnalyzeOperationOutput;

const receipt = result.analyzeResult?.documents?.[0];
if (receipt) {
  console.log("Merchant:", receipt.fields?.MerchantName?.content);
  console.log("Total:", receipt.fields?.Total?.content);

  for (const item of receipt.fields?.Items?.values || []) {
    console.log("Item:", item.properties?.Description?.content);
    console.log("Price:", item.properties?.TotalPrice?.content);
  }
}
```

## List Document Models

```typescript
import DocumentIntelligence, { isUnexpected, paginate } from "@azure-rest/ai-document-intelligence";

const response = await client.path("/documentModels").get();

if (isUnexpected(response)) {
  throw response.body.error;
}

for await (const model of paginate(client, response)) {
  console.log(model.modelId);
}
```

## Build Custom Model

```typescript
const initialResponse = await client.path("/documentModels:build").post({
  body: {
    modelId: "my-custom-model",
    description: "Custom model for purchase orders",
    buildMode: "template", // or "neural"
    azureBlobSource: {
      containerUrl: process.env.TRAINING_CONTAINER_SAS_URL!,
      prefix: "training-data/"
    }
  }
});

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = await poller.pollUntilDone();
console.log("Model built:", result.body);
```

## Build Document Classifier

```typescript
import { DocumentClassifierBuildOperationDetailsOutput } from "@azure-rest/ai-document-intelligence";

const containerSasUrl = process.env.TRAINING_CONTAINER_SAS_URL!;

const initialResponse = await client.path("/documentClassifiers:build").post({
  body: {
    classifierId: "my-classifier",
    description: "Invoice vs Receipt classifier",
    docTypes: {
      invoices: {
        azureBlobSource: { containerUrl: containerSasUrl, prefix: "invoices/" }
      },
      receipts: {
        azureBlobSource: { containerUrl: containerSasUrl, prefix: "receipts/" }
      }
    }
  }
});

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = (await poller.pollUntilDone()).body as DocumentClassifierBuildOperationDetailsOutput;
console.log("Classifier:", result.result?.classifierId);
```

## Classify Document

```typescript
const initialResponse = await client
  .path("/documentClassifiers/{classifierId}:analyze", "my-classifier")
  .post({
    contentType: "application/json",
    body: { urlSource: documentUrl },
    queryParameters: { split: "auto" }
  });

if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

const poller = getLongRunningPoller(client, initialResponse);
const result = await poller.pollUntilDone();
console.log("Classification:", result.body.analyzeResult?.documents);
```

## Get Service Info

```typescript
const response = await client.path("/info").get();

if (isUnexpected(response)) {
  throw response.body.error;
}

console.log("Custom model limit:", response.body.customDocumentModels.limit);
console.log("Custom model count:", response.body.customDocumentModels.count);
```

## Polling Pattern

```typescript
import DocumentIntelligence, {
  isUnexpected,
  getLongRunningPoller,
  AnalyzeOperationOutput
} from "@azure-rest/ai-document-intelligence";

// 1. Start operation
const initialResponse = await client
  .path("/documentModels/{modelId}:analyze", "prebuilt-layout")
  .post({ contentType: "application/json", body: { urlSource } });

// 2. Check for errors
if (isUnexpected(initialResponse)) {
  throw initialResponse.body.error;
}

// 3. Create poller
const poller = getLongRunningPoller(client, initialResponse);

// 4. Optional: monitor progress
poller.onProgress((state) => {
  console.log("Status:", state.status);
});

// 5. Wait for completion
const result = (await poller.pollUntilDone()).body as AnalyzeOperationOutput;
```

## Key Types

```typescript
import DocumentIntelligence, {
  isUnexpected,
  getLongRunningPoller,
  paginate,
  parseResultIdFromResponse,
  AnalyzeOperationOutput,
  DocumentClassifierBuildOperationDetailsOutput
} from "@azure-rest/ai-document-intelligence";
```

## Best Practices

1. **Use getLongRunningPoller()** - Document analysis is async, always poll for results
2. **Check isUnexpected()** - Type guard for proper error handling
3. **Choose the right model** - Use prebuilt models when possible, custom for specialized docs
4. **Handle confidence scores** - Fields have confidence values; set thresholds for your use case (see the sketch below)
5. **Use pagination** - Use the `paginate()` helper for listing models
6. **Prefer neural mode** - For custom models, neural handles more variation than template
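
A minimal threshold sketch for practice 4, reusing the invoice `result` from above (the 0.8 cutoff is illustrative; field confidence is a 0-1 score on each field output):

```typescript
const doc = result.analyzeResult?.documents?.[0];
const vendor = doc?.fields?.VendorName;

if (vendor && (vendor.confidence ?? 0) >= 0.8) {
  console.log("Vendor (high confidence):", vendor.content);
} else {
  console.log("Vendor needs human review:", vendor?.content);
}
```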

341
skills/azure-ai-formrecognizer-java/SKILL.md
Normal file
@@ -0,0 +1,341 @@

---
name: azure-ai-formrecognizer-java
description: Build document analysis applications with Azure Document Intelligence (Form Recognizer) SDK for Java. Use when extracting text, tables, key-value pairs from documents, receipts, invoices, or building custom document models.
package: com.azure:azure-ai-formrecognizer
---

# Azure Document Intelligence (Form Recognizer) SDK for Java

Build document analysis applications using the Azure AI Document Intelligence SDK for Java.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-formrecognizer</artifactId>
    <version>4.2.0-beta.1</version>
</dependency>
```

## Client Creation

### DocumentAnalysisClient

```java
import com.azure.ai.formrecognizer.documentanalysis.DocumentAnalysisClient;
import com.azure.ai.formrecognizer.documentanalysis.DocumentAnalysisClientBuilder;
import com.azure.core.credential.AzureKeyCredential;

DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .buildClient();
```

### DocumentModelAdministrationClient

```java
import com.azure.ai.formrecognizer.documentanalysis.administration.DocumentModelAdministrationClient;
import com.azure.ai.formrecognizer.documentanalysis.administration.DocumentModelAdministrationClientBuilder;

DocumentModelAdministrationClient adminClient = new DocumentModelAdministrationClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .buildClient();
```

### With DefaultAzureCredential

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
    .endpoint("{endpoint}")
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

## Prebuilt Models

| Model ID | Purpose |
|----------|---------|
| `prebuilt-layout` | Extract text, tables, selection marks |
| `prebuilt-document` | General document with key-value pairs |
| `prebuilt-receipt` | Receipt data extraction |
| `prebuilt-invoice` | Invoice field extraction |
| `prebuilt-businessCard` | Business card parsing |
| `prebuilt-idDocument` | ID document (passport, license) |
| `prebuilt-tax.us.w2` | US W-2 tax forms |

## Core Patterns

### Extract Layout

```java
import com.azure.ai.formrecognizer.documentanalysis.models.*;
import com.azure.core.util.BinaryData;
import com.azure.core.util.polling.SyncPoller;
import java.io.File;

File document = new File("document.pdf");
BinaryData documentData = BinaryData.fromFile(document.toPath());

SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginAnalyzeDocument("prebuilt-layout", documentData);

AnalyzeResult result = poller.getFinalResult();

// Process pages
for (DocumentPage page : result.getPages()) {
    System.out.printf("Page %d: %.2f x %.2f %s%n",
        page.getPageNumber(),
        page.getWidth(),
        page.getHeight(),
        page.getUnit());

    // Lines
    for (DocumentLine line : page.getLines()) {
        System.out.println("Line: " + line.getContent());
    }

    // Selection marks (checkboxes)
    for (DocumentSelectionMark mark : page.getSelectionMarks()) {
        System.out.printf("Checkbox: %s (confidence: %.2f)%n",
            mark.getSelectionMarkState(),
            mark.getConfidence());
    }
}

// Tables
for (DocumentTable table : result.getTables()) {
    System.out.printf("Table: %d rows x %d columns%n",
        table.getRowCount(),
        table.getColumnCount());

    for (DocumentTableCell cell : table.getCells()) {
        System.out.printf("Cell[%d,%d]: %s%n",
            cell.getRowIndex(),
            cell.getColumnIndex(),
            cell.getContent());
    }
}
```

### Analyze from URL

```java
String documentUrl = "https://example.com/invoice.pdf";

SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginAnalyzeDocumentFromUrl("prebuilt-invoice", documentUrl);

AnalyzeResult result = poller.getFinalResult();
```

### Analyze Receipt

```java
import java.util.Map;

SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginAnalyzeDocumentFromUrl("prebuilt-receipt", receiptUrl);

AnalyzeResult result = poller.getFinalResult();

for (AnalyzedDocument doc : result.getDocuments()) {
    Map<String, DocumentField> fields = doc.getFields();

    DocumentField merchantName = fields.get("MerchantName");
    if (merchantName != null && merchantName.getType() == DocumentFieldType.STRING) {
        System.out.printf("Merchant: %s (confidence: %.2f)%n",
            merchantName.getValueAsString(),
            merchantName.getConfidence());
    }

    DocumentField transactionDate = fields.get("TransactionDate");
    if (transactionDate != null && transactionDate.getType() == DocumentFieldType.DATE) {
        System.out.printf("Date: %s%n", transactionDate.getValueAsDate());
    }

    DocumentField items = fields.get("Items");
    if (items != null && items.getType() == DocumentFieldType.LIST) {
        for (DocumentField item : items.getValueAsList()) {
            Map<String, DocumentField> itemFields = item.getValueAsMap();
            System.out.printf("Item: %s, Price: %.2f%n",
                itemFields.get("Name").getValueAsString(),
                itemFields.get("Price").getValueAsDouble());
        }
    }
}
```

### General Document Analysis

```java
SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginAnalyzeDocumentFromUrl("prebuilt-document", documentUrl);

AnalyzeResult result = poller.getFinalResult();

// Key-value pairs
for (DocumentKeyValuePair kvp : result.getKeyValuePairs()) {
    System.out.printf("Key: %s => Value: %s%n",
        kvp.getKey().getContent(),
        kvp.getValue() != null ? kvp.getValue().getContent() : "null");
}
```

## Custom Models

### Build Custom Model

```java
import com.azure.ai.formrecognizer.documentanalysis.administration.models.*;
import com.azure.core.util.Context;

String blobContainerUrl = "{SAS_URL_of_training_data}";
String prefix = "training-docs/";

SyncPoller<OperationResult, DocumentModelDetails> poller = adminClient.beginBuildDocumentModel(
    blobContainerUrl,
    DocumentModelBuildMode.TEMPLATE,
    prefix,
    new BuildDocumentModelOptions()
        .setModelId("my-custom-model")
        .setDescription("Custom invoice model"),
    Context.NONE);

DocumentModelDetails model = poller.getFinalResult();

System.out.println("Model ID: " + model.getModelId());
System.out.println("Created: " + model.getCreatedOn());

model.getDocumentTypes().forEach((docType, details) -> {
    System.out.println("Document type: " + docType);
    details.getFieldSchema().forEach((field, schema) -> {
        System.out.printf("  Field: %s (%s)%n", field, schema.getType());
    });
});
```

### Analyze with Custom Model

```java
SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginAnalyzeDocumentFromUrl("my-custom-model", documentUrl);

AnalyzeResult result = poller.getFinalResult();

for (AnalyzedDocument doc : result.getDocuments()) {
    System.out.printf("Document type: %s (confidence: %.2f)%n",
        doc.getDocType(),
        doc.getConfidence());

    doc.getFields().forEach((name, field) -> {
        System.out.printf("Field '%s': %s (confidence: %.2f)%n",
            name,
            field.getContent(),
            field.getConfidence());
    });
}
```

### Compose Models

```java
import java.util.Arrays;
import java.util.List;

List<String> modelIds = Arrays.asList("model-1", "model-2", "model-3");

SyncPoller<OperationResult, DocumentModelDetails> poller =
    adminClient.beginComposeDocumentModel(
        modelIds,
        new ComposeDocumentModelOptions()
            .setModelId("composed-model")
            .setDescription("Composed from multiple models"));

DocumentModelDetails composedModel = poller.getFinalResult();
```

### Manage Models

```java
// List models
PagedIterable<DocumentModelSummary> models = adminClient.listDocumentModels();
for (DocumentModelSummary summary : models) {
    System.out.printf("Model: %s, Created: %s%n",
        summary.getModelId(),
        summary.getCreatedOn());
}

// Get model details
DocumentModelDetails model = adminClient.getDocumentModel("model-id");

// Delete model
adminClient.deleteDocumentModel("model-id");

// Check resource limits
ResourceDetails resources = adminClient.getResourceDetails();
System.out.printf("Models: %d / %d%n",
    resources.getCustomDocumentModelCount(),
    resources.getCustomDocumentModelLimit());
```

## Document Classification

### Build Classifier

```java
import java.util.HashMap;
import java.util.Map;

Map<String, ClassifierDocumentTypeDetails> docTypes = new HashMap<>();
docTypes.put("invoice", new ClassifierDocumentTypeDetails()
    .setAzureBlobSource(new AzureBlobContentSource(containerUrl).setPrefix("invoices/")));
docTypes.put("receipt", new ClassifierDocumentTypeDetails()
    .setAzureBlobSource(new AzureBlobContentSource(containerUrl).setPrefix("receipts/")));

SyncPoller<OperationResult, DocumentClassifierDetails> poller =
    adminClient.beginBuildDocumentClassifier(docTypes,
        new BuildDocumentClassifierOptions().setClassifierId("my-classifier"));

DocumentClassifierDetails classifier = poller.getFinalResult();
```

### Classify Document

```java
SyncPoller<OperationResult, AnalyzeResult> poller =
    client.beginClassifyDocumentFromUrl("my-classifier", documentUrl, Context.NONE);

AnalyzeResult result = poller.getFinalResult();

for (AnalyzedDocument doc : result.getDocuments()) {
    System.out.printf("Classified as: %s (confidence: %.2f)%n",
        doc.getDocType(),
        doc.getConfidence());
}
```

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    client.beginAnalyzeDocumentFromUrl("prebuilt-receipt", "invalid-url");
} catch (HttpResponseException e) {
    System.out.println("Status: " + e.getResponse().getStatusCode());
    System.out.println("Error: " + e.getMessage());
}
```

## Environment Variables

```bash
FORM_RECOGNIZER_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
FORM_RECOGNIZER_KEY=<your-api-key>
```
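
A minimal sketch wiring these variables into the client builder, replacing the `{endpoint}`/`{key}` placeholders used above with `System.getenv` lookups:

```java
import com.azure.ai.formrecognizer.documentanalysis.DocumentAnalysisClient;
import com.azure.ai.formrecognizer.documentanalysis.DocumentAnalysisClientBuilder;
import com.azure.core.credential.AzureKeyCredential;

// Read endpoint and key from the environment instead of hard-coding them.
DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
    .endpoint(System.getenv("FORM_RECOGNIZER_ENDPOINT"))
    .credential(new AzureKeyCredential(System.getenv("FORM_RECOGNIZER_KEY")))
    .buildClient();
```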

## Trigger Phrases

- "document intelligence Java"
- "form recognizer SDK"
- "extract text from PDF"
- "OCR document Java"
- "analyze invoice receipt"
- "custom document model"
- "document classification"

271
skills/azure-ai-ml-py/SKILL.md
Normal file
@@ -0,0 +1,271 @@

---
name: azure-ai-ml-py
description: |
  Azure Machine Learning SDK v2 for Python. Use for ML workspaces, jobs, models, datasets, compute, and pipelines.
  Triggers: "azure-ai-ml", "MLClient", "workspace", "model registry", "training jobs", "datasets".
package: azure-ai-ml
---

# Azure Machine Learning SDK v2 for Python

Client library for managing Azure ML resources: workspaces, jobs, models, data, and compute.

## Installation

```bash
pip install azure-ai-ml
```

## Environment Variables

```bash
AZURE_SUBSCRIPTION_ID=<your-subscription-id>
AZURE_RESOURCE_GROUP=<your-resource-group>
AZURE_ML_WORKSPACE_NAME=<your-workspace-name>
```

## Authentication

```python
import os

from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
    resource_group_name=os.environ["AZURE_RESOURCE_GROUP"],
    workspace_name=os.environ["AZURE_ML_WORKSPACE_NAME"]
)
```

### From Config File

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Uses config.json in current directory or parent
ml_client = MLClient.from_config(
    credential=DefaultAzureCredential()
)
```

## Workspace Management

### Create Workspace

```python
from azure.ai.ml.entities import Workspace

ws = Workspace(
    name="my-workspace",
    location="eastus",
    display_name="My Workspace",
    description="ML workspace for experiments",
    tags={"purpose": "demo"}
)

ml_client.workspaces.begin_create(ws).result()
```

### List Workspaces

```python
for ws in ml_client.workspaces.list():
    print(f"{ws.name}: {ws.location}")
```

## Data Assets

### Register Data

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Register a file
my_data = Data(
    name="my-dataset",
    version="1",
    path="azureml://datastores/workspaceblobstore/paths/data/train.csv",
    type=AssetTypes.URI_FILE,
    description="Training data"
)

ml_client.data.create_or_update(my_data)
```

### Register Folder

```python
my_data = Data(
    name="my-folder-dataset",
    version="1",
    path="azureml://datastores/workspaceblobstore/paths/data/",
    type=AssetTypes.URI_FOLDER
)

ml_client.data.create_or_update(my_data)
```

## Model Registry

### Register Model

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

model = Model(
    name="my-model",
    version="1",
    path="./model/",
    type=AssetTypes.CUSTOM_MODEL,
    description="My trained model"
)

ml_client.models.create_or_update(model)
```

### List Models

```python
for model in ml_client.models.list(name="my-model"):
    print(f"{model.name} v{model.version}")
```

## Compute

### Create Compute Cluster

```python
from azure.ai.ml.entities import AmlCompute

cluster = AmlCompute(
    name="cpu-cluster",
    type="amlcompute",
    size="Standard_DS3_v2",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120
)

ml_client.compute.begin_create_or_update(cluster).result()
```

### List Compute

```python
for compute in ml_client.compute.list():
    print(f"{compute.name}: {compute.type}")
```

## Jobs

### Command Job

```python
from azure.ai.ml import command, Input

job = command(
    code="./src",
    command="python train.py --data ${{inputs.data}} --lr ${{inputs.learning_rate}}",
    inputs={
        "data": Input(type="uri_folder", path="azureml:my-dataset:1"),
        "learning_rate": 0.01
    },
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
    display_name="training-job"
)

returned_job = ml_client.jobs.create_or_update(job)
print(f"Job URL: {returned_job.studio_url}")
```

### Monitor Job

```python
ml_client.jobs.stream(returned_job.name)
```

## Pipelines

```python
from azure.ai.ml import dsl, Input

# prep_component and train_component are assumed to be loaded
# pipeline components (see the load_component sketch below).
@dsl.pipeline(
    compute="cpu-cluster",
    description="Training pipeline"
)
def training_pipeline(data_input):
    prep_step = prep_component(data=data_input)
    train_step = train_component(
        data=prep_step.outputs.output_data,
        learning_rate=0.01
    )
    return {"model": train_step.outputs.model}

pipeline = training_pipeline(
    data_input=Input(type="uri_folder", path="azureml:my-dataset:1")
)

pipeline_job = ml_client.jobs.create_or_update(pipeline)
```
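
The components referenced above are not defined in the snippet. A minimal sketch of loading them from YAML component specs (the paths are illustrative):

```python
from azure.ai.ml import load_component

# Load reusable pipeline steps from their YAML definitions.
prep_component = load_component(source="./components/prep.yml")
train_component = load_component(source="./components/train.yml")
```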

## Environments

### Create Custom Environment

```python
from azure.ai.ml.entities import Environment

env = Environment(
    name="my-env",
    version="1",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    conda_file="./environment.yml"
)

ml_client.environments.create_or_update(env)
```

## Datastores

### List Datastores

```python
for ds in ml_client.datastores.list():
    print(f"{ds.name}: {ds.type}")
```

### Get Default Datastore

```python
default_ds = ml_client.datastores.get_default()
print(f"Default: {default_ds.name}")
```

## MLClient Operations

| Property | Operations |
|----------|------------|
| `workspaces` | create, get, list, delete |
| `jobs` | create_or_update, get, list, stream, cancel |
| `models` | create_or_update, get, list, archive |
| `data` | create_or_update, get, list |
| `compute` | begin_create_or_update, get, list, delete |
| `environments` | create_or_update, get, list |
| `datastores` | create_or_update, get, list, get_default |
| `components` | create_or_update, get, list |

## Best Practices

1. **Use versioning** for data, models, and environments
2. **Configure idle scale-down** to reduce compute costs
3. **Use environments** for reproducible training
4. **Stream job logs** to monitor progress
5. **Register models** after successful training jobs
6. **Use pipelines** for multi-step workflows
7. **Tag resources** for organization and cost tracking

455
skills/azure-ai-openai-dotnet/SKILL.md
Normal file
@@ -0,0 +1,455 @@

---
name: azure-ai-openai-dotnet
description: |
  Azure OpenAI SDK for .NET. Client library for Azure OpenAI and OpenAI services. Use for chat completions, embeddings, image generation, audio transcription, and assistants. Triggers: "Azure OpenAI", "AzureOpenAIClient", "ChatClient", "chat completions .NET", "GPT-4", "embeddings", "DALL-E", "Whisper", "OpenAI .NET".
package: Azure.AI.OpenAI
---

# Azure.AI.OpenAI (.NET)

Client library for Azure OpenAI Service providing access to OpenAI models including GPT-4, GPT-4o, embeddings, DALL-E, and Whisper.

## Installation

```bash
dotnet add package Azure.AI.OpenAI

# For OpenAI (non-Azure) compatibility
dotnet add package OpenAI
```

**Current Version**: 2.1.0 (stable)

## Environment Variables

```bash
AZURE_OPENAI_ENDPOINT=https://<resource-name>.openai.azure.com
AZURE_OPENAI_API_KEY=<api-key>            # For key-based auth
AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o-mini  # Your deployment name
```

## Client Hierarchy

```
AzureOpenAIClient (top-level)
├── GetChatClient(deploymentName) → ChatClient
├── GetEmbeddingClient(deploymentName) → EmbeddingClient
├── GetImageClient(deploymentName) → ImageClient
├── GetAudioClient(deploymentName) → AudioClient
└── GetAssistantClient() → AssistantClient
```
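
Each scenario client is obtained from one shared top-level client. A minimal sketch (the deployment names are illustrative, and `endpoint` is assumed to hold the resource endpoint):

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;
using OpenAI.Embeddings;

AzureOpenAIClient azureClient = new(new Uri(endpoint), new DefaultAzureCredential());

// One AzureOpenAIClient hands out per-scenario clients bound to deployments.
ChatClient chat = azureClient.GetChatClient("gpt-4o-mini");
EmbeddingClient embeddings = azureClient.GetEmbeddingClient("text-embedding-3-small");
```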
## Authentication
|
||||
|
||||
### API Key Authentication
|
||||
|
||||
```csharp
|
||||
using Azure;
|
||||
using Azure.AI.OpenAI;
|
||||
|
||||
AzureOpenAIClient client = new(
|
||||
new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
|
||||
new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!));
|
||||
```
|
||||
|
||||
### Microsoft Entra ID (Recommended for Production)
|
||||
|
||||
```csharp
|
||||
using Azure.Identity;
|
||||
using Azure.AI.OpenAI;
|
||||
|
||||
AzureOpenAIClient client = new(
|
||||
new Uri(Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!),
|
||||
new DefaultAzureCredential());
|
||||
```
|
||||
|
||||
### Using OpenAI SDK Directly with Azure
|
||||
|
||||
```csharp
|
||||
using Azure.Identity;
|
||||
using OpenAI;
|
||||
using OpenAI.Chat;
|
||||
using System.ClientModel.Primitives;
|
||||
|
||||
#pragma warning disable OPENAI001
|
||||
|
||||
BearerTokenPolicy tokenPolicy = new(
|
||||
new DefaultAzureCredential(),
|
||||
"https://cognitiveservices.azure.com/.default");
|
||||
|
||||
ChatClient client = new(
|
||||
model: "gpt-4o-mini",
|
||||
authenticationPolicy: tokenPolicy,
|
||||
options: new OpenAIClientOptions()
|
||||
{
|
||||
Endpoint = new Uri("https://YOUR-RESOURCE.openai.azure.com/openai/v1")
|
||||
});
|
||||
```
|
||||
|
||||
## Chat Completions
|
||||
|
||||
### Basic Chat
|
||||
|
||||
```csharp
|
||||
using Azure.AI.OpenAI;
|
||||
using OpenAI.Chat;
|
||||
|
||||
AzureOpenAIClient azureClient = new(
|
||||
new Uri(endpoint),
|
||||
new DefaultAzureCredential());
|
||||
|
||||
ChatClient chatClient = azureClient.GetChatClient("gpt-4o-mini");
|
||||
|
||||
ChatCompletion completion = chatClient.CompleteChat(
|
||||
[
|
||||
new SystemChatMessage("You are a helpful assistant."),
|
||||
new UserChatMessage("What is Azure OpenAI?")
|
||||
]);
|
||||
|
||||
Console.WriteLine(completion.Content[0].Text);
|
||||
```
|
||||
|
||||
### Async Chat
|
||||
|
||||
```csharp
|
||||
ChatCompletion completion = await chatClient.CompleteChatAsync(
|
||||
[
|
||||
new SystemChatMessage("You are a helpful assistant."),
|
||||
new UserChatMessage("Explain cloud computing in simple terms.")
|
||||
]);
|
||||
|
||||
Console.WriteLine($"Response: {completion.Content[0].Text}");
|
||||
Console.WriteLine($"Tokens used: {completion.Usage.TotalTokenCount}");
|
||||
```
|
||||
|
||||
### Streaming Chat
|
||||
|
||||
```csharp
|
||||
await foreach (StreamingChatCompletionUpdate update
|
||||
in chatClient.CompleteChatStreamingAsync(messages))
|
||||
{
|
||||
if (update.ContentUpdate.Count > 0)
|
||||
{
|
||||
Console.Write(update.ContentUpdate[0].Text);
|
||||
}
|
||||
}
|
||||
```
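
When you stream but also need the final text (for example, to append the assistant turn to the conversation history), accumulate the chunks as they arrive. A short sketch building on the snippet above:

```csharp
using System.Text;
using OpenAI.Chat;

// Accumulate streamed chunks so the full reply can be reused afterwards.
StringBuilder reply = new();

await foreach (StreamingChatCompletionUpdate update
    in chatClient.CompleteChatStreamingAsync(messages))
{
    foreach (ChatMessageContentPart part in update.ContentUpdate)
    {
        Console.Write(part.Text);
        reply.Append(part.Text);
    }
}

messages.Add(new AssistantChatMessage(reply.ToString()));
```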

### Chat with Options

```csharp
ChatCompletionOptions options = new()
{
    MaxOutputTokenCount = 1000,
    Temperature = 0.7f,
    TopP = 0.95f,
    FrequencyPenalty = 0,
    PresencePenalty = 0
};

ChatCompletion completion = await chatClient.CompleteChatAsync(messages, options);
```

### Multi-turn Conversation

```csharp
List<ChatMessage> messages = new()
{
    new SystemChatMessage("You are a helpful assistant."),
    new UserChatMessage("Hi, can you help me?"),
    new AssistantChatMessage("Of course! What do you need help with?"),
    new UserChatMessage("What's the capital of France?")
};

ChatCompletion completion = await chatClient.CompleteChatAsync(messages);
messages.Add(new AssistantChatMessage(completion.Content[0].Text));
```

## Structured Outputs (JSON Schema)

```csharp
using System.Text.Json;

ChatCompletionOptions options = new()
{
    ResponseFormat = ChatResponseFormat.CreateJsonSchemaFormat(
        jsonSchemaFormatName: "math_reasoning",
        jsonSchema: BinaryData.FromBytes("""
            {
                "type": "object",
                "properties": {
                    "steps": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "explanation": { "type": "string" },
                                "output": { "type": "string" }
                            },
                            "required": ["explanation", "output"],
                            "additionalProperties": false
                        }
                    },
                    "final_answer": { "type": "string" }
                },
                "required": ["steps", "final_answer"],
                "additionalProperties": false
            }
            """u8.ToArray()),
        jsonSchemaIsStrict: true)
};

ChatCompletion completion = await chatClient.CompleteChatAsync(
    [new UserChatMessage("How can I solve 8x + 7 = -23?")],
    options);

using JsonDocument json = JsonDocument.Parse(completion.Content[0].Text);
Console.WriteLine($"Answer: {json.RootElement.GetProperty("final_answer")}");
```

## Reasoning Models (o1, o4-mini)

```csharp
ChatCompletionOptions options = new()
{
    ReasoningEffortLevel = ChatReasoningEffortLevel.Low,
    MaxOutputTokenCount = 100000
};

ChatCompletion completion = await chatClient.CompleteChatAsync(
    [
        new DeveloperChatMessage("You are a helpful assistant"),
        new UserChatMessage("Explain the theory of relativity")
    ], options);
```

## Azure AI Search Integration (RAG)

```csharp
using Azure.AI.OpenAI.Chat;

#pragma warning disable AOAI001

ChatCompletionOptions options = new();
options.AddDataSource(new AzureSearchChatDataSource()
{
    Endpoint = new Uri(searchEndpoint),
    IndexName = searchIndex,
    Authentication = DataSourceAuthentication.FromApiKey(searchKey)
});

ChatCompletion completion = await chatClient.CompleteChatAsync(
    [new UserChatMessage("What health plans are available?")],
    options);

ChatMessageContext context = completion.GetMessageContext();
if (context?.Intent is not null)
{
    Console.WriteLine($"Intent: {context.Intent}");
}
foreach (ChatCitation citation in context?.Citations ?? [])
{
    Console.WriteLine($"Citation: {citation.Content}");
}
```

## Embeddings

```csharp
using OpenAI.Embeddings;

EmbeddingClient embeddingClient = azureClient.GetEmbeddingClient("text-embedding-ada-002");

OpenAIEmbedding embedding = await embeddingClient.GenerateEmbeddingAsync("Hello, world!");
ReadOnlyMemory<float> vector = embedding.ToFloats();

Console.WriteLine($"Embedding dimensions: {vector.Length}");
```

### Batch Embeddings

```csharp
List<string> inputs = new()
{
    "First document text",
    "Second document text",
    "Third document text"
};

OpenAIEmbeddingCollection embeddings = await embeddingClient.GenerateEmbeddingsAsync(inputs);

foreach (OpenAIEmbedding emb in embeddings)
{
    Console.WriteLine($"Index {emb.Index}: {emb.ToFloats().Length} dimensions");
}
```
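
Embedding vectors are typically compared with cosine similarity, e.g. to rank documents against a query vector. The helper below is not part of the SDK; it is a minimal sketch over the `ReadOnlyMemory<float>` vectors returned above:

```csharp
// Cosine similarity between two embedding vectors (illustrative helper, not an SDK API).
static double CosineSimilarity(ReadOnlyMemory<float> a, ReadOnlyMemory<float> b)
{
    ReadOnlySpan<float> x = a.Span, y = b.Span;
    double dot = 0, normX = 0, normY = 0;
    for (int i = 0; i < x.Length; i++)
    {
        dot += x[i] * y[i];
        normX += x[i] * x[i];
        normY += y[i] * y[i];
    }
    return dot / (Math.Sqrt(normX) * Math.Sqrt(normY));
}
```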

## Image Generation (DALL-E)

```csharp
using OpenAI.Images;

ImageClient imageClient = azureClient.GetImageClient("dall-e-3");

GeneratedImage image = await imageClient.GenerateImageAsync(
    "A futuristic city skyline at sunset",
    new ImageGenerationOptions
    {
        Size = GeneratedImageSize.W1024xH1024,
        Quality = GeneratedImageQuality.High,
        Style = GeneratedImageStyle.Vivid
    });

Console.WriteLine($"Image URL: {image.ImageUri}");
```

## Audio (Whisper)

### Transcription

```csharp
using OpenAI.Audio;

AudioClient audioClient = azureClient.GetAudioClient("whisper");

AudioTranscription transcription = await audioClient.TranscribeAudioAsync(
    "audio.mp3",
    new AudioTranscriptionOptions
    {
        ResponseFormat = AudioTranscriptionFormat.Verbose,
        Language = "en"
    });

Console.WriteLine(transcription.Text);
```

### Text-to-Speech

```csharp
BinaryData speech = await audioClient.GenerateSpeechAsync(
    "Hello, welcome to Azure OpenAI!",
    GeneratedSpeechVoice.Alloy,
    new SpeechGenerationOptions
    {
        SpeedRatio = 1.0f,
        ResponseFormat = GeneratedSpeechFormat.Mp3
    });

await File.WriteAllBytesAsync("output.mp3", speech.ToArray());
```

## Function Calling (Tools)

```csharp
ChatTool getCurrentWeatherTool = ChatTool.CreateFunctionTool(
    functionName: "get_current_weather",
    functionDescription: "Get the current weather in a given location",
    functionParameters: BinaryData.FromString("""
        {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location"]
        }
        """));

ChatCompletionOptions options = new()
{
    Tools = { getCurrentWeatherTool }
};

ChatCompletion completion = await chatClient.CompleteChatAsync(
    [new UserChatMessage("What's the weather in Seattle?")],
    options);

if (completion.FinishReason == ChatFinishReason.ToolCalls)
{
    foreach (ChatToolCall toolCall in completion.ToolCalls)
    {
        Console.WriteLine($"Function: {toolCall.FunctionName}");
        Console.WriteLine($"Arguments: {toolCall.FunctionArguments}");
    }
}
```
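
To complete the round trip, append each tool result as a `ToolChatMessage` and resend the conversation. A minimal sketch, assuming a local `GetWeather` helper (hypothetical) that actually fetches the data:

```csharp
using System.Text.Json;
using OpenAI.Chat;

if (completion.FinishReason == ChatFinishReason.ToolCalls)
{
    // Record the assistant turn that requested the tool calls.
    messages.Add(new AssistantChatMessage(completion));

    foreach (ChatToolCall toolCall in completion.ToolCalls)
    {
        using JsonDocument args = JsonDocument.Parse(toolCall.FunctionArguments);
        string location = args.RootElement.GetProperty("location").GetString()!;

        string result = GetWeather(location); // hypothetical local helper
        messages.Add(new ToolChatMessage(toolCall.Id, result));
    }

    // Resend the conversation so the model can incorporate the tool results.
    ChatCompletion followUp = await chatClient.CompleteChatAsync(messages, options);
    Console.WriteLine(followUp.Content[0].Text);
}
```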

## Key Types Reference

| Type | Purpose |
|------|---------|
| `AzureOpenAIClient` | Top-level client for Azure OpenAI |
| `ChatClient` | Chat completions |
| `EmbeddingClient` | Text embeddings |
| `ImageClient` | Image generation (DALL-E) |
| `AudioClient` | Audio transcription/TTS |
| `ChatCompletion` | Chat response |
| `ChatCompletionOptions` | Request configuration |
| `StreamingChatCompletionUpdate` | Streaming response chunk |
| `ChatMessage` | Base message type |
| `SystemChatMessage` | System prompt |
| `UserChatMessage` | User input |
| `AssistantChatMessage` | Assistant response |
| `DeveloperChatMessage` | Developer message (reasoning models) |
| `ChatTool` | Function/tool definition |
| `ChatToolCall` | Tool invocation request |

## Best Practices

1. **Use Entra ID in production** — Avoid API keys; use `DefaultAzureCredential`
2. **Reuse client instances** — Create once, share across requests
3. **Handle rate limits** — Implement exponential backoff for 429 errors (see the retry sketch after the error-handling example)
4. **Stream for long responses** — Use `CompleteChatStreamingAsync` for better UX
5. **Set appropriate timeouts** — Long completions may need extended timeouts
6. **Use structured outputs** — JSON schema ensures consistent response format
7. **Monitor token usage** — Track `completion.Usage` for cost management
8. **Validate tool calls** — Always validate function arguments before execution

## Error Handling

The 2.x client is built on `System.ClientModel`, so request failures surface as `ClientResultException` rather than the older `RequestFailedException`:

```csharp
using System.ClientModel;

try
{
    ChatCompletion completion = await chatClient.CompleteChatAsync(messages);
}
catch (ClientResultException ex) when (ex.Status == 429)
{
    Console.WriteLine("Rate limited. Retry after delay.");
    await Task.Delay(TimeSpan.FromSeconds(10));
}
catch (ClientResultException ex) when (ex.Status == 400)
{
    Console.WriteLine($"Bad request: {ex.Message}");
}
catch (ClientResultException ex)
{
    Console.WriteLine($"Azure OpenAI error: {ex.Status} - {ex.Message}");
}
```
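
For the exponential backoff recommended in the best practices, a minimal sketch (the retry count and base delay are arbitrary; production code would also honor any `Retry-After` header):

```csharp
using System.ClientModel;
using OpenAI.Chat;

// Retry 429s with exponential backoff: 2s, 4s, 8s, then give up.
async Task<ChatCompletion> CompleteWithRetryAsync(
    ChatClient client, List<ChatMessage> messages, int maxRetries = 3)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return await client.CompleteChatAsync(messages);
        }
        catch (ClientResultException ex) when (ex.Status == 429 && attempt < maxRetries)
        {
            await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt + 1)));
        }
    }
}
```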

## Related SDKs

| SDK | Purpose | Install |
|-----|---------|---------|
| `Azure.AI.OpenAI` | Azure OpenAI client (this SDK) | `dotnet add package Azure.AI.OpenAI` |
| `OpenAI` | OpenAI compatibility | `dotnet add package OpenAI` |
| `Azure.Identity` | Authentication | `dotnet add package Azure.Identity` |
| `Azure.Search.Documents` | AI Search for RAG | `dotnet add package Azure.Search.Documents` |

## Reference Links

| Resource | URL |
|----------|-----|
| NuGet Package | https://www.nuget.org/packages/Azure.AI.OpenAI |
| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.openai |
| Migration Guide (1.0→2.0) | https://learn.microsoft.com/azure/ai-services/openai/how-to/dotnet-migration |
| Quickstart | https://learn.microsoft.com/azure/ai-services/openai/quickstart |
| GitHub Source | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/openai/Azure.AI.OpenAI |

348 skills/azure-ai-projects-dotnet/SKILL.md Normal file
@@ -0,0 +1,348 @@
---
name: azure-ai-projects-dotnet
description: |
  Azure AI Projects SDK for .NET. High-level client for Azure AI Foundry projects including agents, connections, datasets, deployments, evaluations, and indexes. Use for AI Foundry project management, versioned agents, and orchestration. Triggers: "AI Projects", "AIProjectClient", "Foundry project", "versioned agents", "evaluations", "datasets", "connections", "deployments .NET".
package: Azure.AI.Projects
---

# Azure.AI.Projects (.NET)

High-level SDK for Azure AI Foundry project operations including agents, connections, datasets, deployments, evaluations, and indexes.

## Installation

```bash
dotnet add package Azure.AI.Projects
dotnet add package Azure.Identity

# Optional: For versioned agents with OpenAI extensions
dotnet add package Azure.AI.Projects.OpenAI --prerelease

# Optional: For low-level agent operations
dotnet add package Azure.AI.Agents.Persistent --prerelease
```

**Current Versions**: GA v1.1.0, Preview v1.2.0-beta.5

## Environment Variables

```bash
PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
MODEL_DEPLOYMENT_NAME=gpt-4o-mini
CONNECTION_NAME=<your-connection-name>
AI_SEARCH_CONNECTION_NAME=<ai-search-connection>
```

## Authentication

```csharp
using Azure.Identity;
using Azure.AI.Projects;

var endpoint = Environment.GetEnvironmentVariable("PROJECT_ENDPOINT");
AIProjectClient projectClient = new AIProjectClient(
    new Uri(endpoint),
    new DefaultAzureCredential());
```

## Client Hierarchy

```
AIProjectClient
├── Agents → AIProjectAgentsOperations (versioned agents)
├── Connections → ConnectionsClient
├── Datasets → DatasetsClient
├── Deployments → DeploymentsClient
├── Evaluations → EvaluationsClient
├── Evaluators → EvaluatorsClient
├── Indexes → IndexesClient
├── Telemetry → AIProjectTelemetry
├── OpenAI → ProjectOpenAIClient (preview)
└── GetPersistentAgentsClient() → PersistentAgentsClient
```

## Core Workflows

### 1. Get Persistent Agents Client

```csharp
// Get low-level agents client from project client
PersistentAgentsClient agentsClient = projectClient.GetPersistentAgentsClient();

// Create agent
PersistentAgent agent = await agentsClient.Administration.CreateAgentAsync(
    model: "gpt-4o-mini",
    name: "Math Tutor",
    instructions: "You are a personal math tutor.");

// Create thread and run
PersistentAgentThread thread = await agentsClient.Threads.CreateThreadAsync();
await agentsClient.Messages.CreateMessageAsync(thread.Id, MessageRole.User, "Solve 3x + 11 = 14");
ThreadRun run = await agentsClient.Runs.CreateRunAsync(thread.Id, agent.Id);

// Poll for completion
do
{
    await Task.Delay(500);
    run = await agentsClient.Runs.GetRunAsync(thread.Id, run.Id);
}
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress);

// Get messages
await foreach (var msg in agentsClient.Messages.GetMessagesAsync(thread.Id))
{
    foreach (var content in msg.ContentItems)
    {
        if (content is MessageTextContent textContent)
            Console.WriteLine(textContent.Text);
    }
}

// Cleanup
await agentsClient.Threads.DeleteThreadAsync(thread.Id);
await agentsClient.Administration.DeleteAgentAsync(agent.Id);
```

### 2. Versioned Agents with Tools (Preview)

```csharp
using Azure.AI.Projects.OpenAI;

// Create agent with web search tool
PromptAgentDefinition agentDefinition = new(model: "gpt-4o-mini")
{
    Instructions = "You are a helpful assistant that can search the web",
    Tools = {
        ResponseTool.CreateWebSearchTool(
            userLocation: WebSearchToolLocation.CreateApproximateLocation(
                country: "US",
                city: "Seattle",
                region: "Washington"
            )
        ),
    }
};

AgentVersion agentVersion = await projectClient.Agents.CreateAgentVersionAsync(
    agentName: "myAgent",
    options: new(agentDefinition));

// Get response client
ProjectResponsesClient responseClient = projectClient.OpenAI.GetProjectResponsesClientForAgent(agentVersion.Name);

// Create response
ResponseResult response = responseClient.CreateResponse("What's the weather in Seattle?");
Console.WriteLine(response.GetOutputText());

// Cleanup
projectClient.Agents.DeleteAgentVersion(agentName: agentVersion.Name, agentVersion: agentVersion.Version);
```

### 3. Connections

```csharp
// List all connections
foreach (AIProjectConnection connection in projectClient.Connections.GetConnections())
{
    Console.WriteLine($"{connection.Name}: {connection.ConnectionType}");
}

// Get specific connection
AIProjectConnection conn = projectClient.Connections.GetConnection(
    connectionName,
    includeCredentials: true);

// Get default connection
AIProjectConnection defaultConn = projectClient.Connections.GetDefaultConnection(
    includeCredentials: false);
```

### 4. Deployments

```csharp
// List all deployments
foreach (AIProjectDeployment deployment in projectClient.Deployments.GetDeployments())
{
    Console.WriteLine($"{deployment.Name}: {deployment.ModelName}");
}

// Filter by publisher
foreach (var deployment in projectClient.Deployments.GetDeployments(modelPublisher: "Microsoft"))
{
    Console.WriteLine(deployment.Name);
}

// Get specific deployment
ModelDeployment details = (ModelDeployment)projectClient.Deployments.GetDeployment("gpt-4o-mini");
```

### 5. Datasets

```csharp
using System.Text.RegularExpressions; // for the Regex filePattern below

// Upload single file
FileDataset fileDataset = projectClient.Datasets.UploadFile(
    name: "my-dataset",
    version: "1.0",
    filePath: "data/training.txt",
    connectionName: connectionName);

// Upload folder
FolderDataset folderDataset = projectClient.Datasets.UploadFolder(
    name: "my-dataset",
    version: "2.0",
    folderPath: "data/training",
    connectionName: connectionName,
    filePattern: new Regex(".*\\.txt"));

// Get dataset
AIProjectDataset dataset = projectClient.Datasets.GetDataset("my-dataset", "1.0");

// Delete dataset
projectClient.Datasets.Delete("my-dataset", "1.0");
```

### 6. Indexes

```csharp
// Create Azure AI Search index
AzureAISearchIndex searchIndex = new(aiSearchConnectionName, aiSearchIndexName)
{
    Description = "Sample Index"
};

searchIndex = (AzureAISearchIndex)projectClient.Indexes.CreateOrUpdate(
    name: "my-index",
    version: "1.0",
    index: searchIndex);

// List indexes
foreach (AIProjectIndex index in projectClient.Indexes.GetIndexes())
{
    Console.WriteLine(index.Name);
}

// Delete index
projectClient.Indexes.Delete(name: "my-index", version: "1.0");
```

### 7. Evaluations

```csharp
// Create evaluation configuration
var evaluatorConfig = new EvaluatorConfiguration(id: EvaluatorIDs.Relevance);
evaluatorConfig.InitParams.Add("deployment_name", BinaryData.FromObjectAsJson("gpt-4o"));

// Create evaluation
Evaluation evaluation = new Evaluation(
    data: new InputDataset("<dataset_id>"),
    evaluators: new Dictionary<string, EvaluatorConfiguration>
    {
        { "relevance", evaluatorConfig }
    }
)
{
    DisplayName = "Sample Evaluation"
};

// Run evaluation
Evaluation result = projectClient.Evaluations.Create(evaluation: evaluation);

// Get evaluation
Evaluation getResult = projectClient.Evaluations.Get(result.Name);

// List evaluations
foreach (var eval in projectClient.Evaluations.GetAll())
{
    Console.WriteLine($"{eval.DisplayName}: {eval.Status}");
}
```

### 8. Get Azure OpenAI Chat Client

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using OpenAI.Chat;

ClientConnection connection = projectClient.GetConnection(typeof(AzureOpenAIClient).FullName!);

if (!connection.TryGetLocatorAsUri(out Uri uri) || uri is null)
    throw new InvalidOperationException("Invalid URI.");

uri = new Uri($"https://{uri.Host}");

AzureOpenAIClient azureOpenAIClient = new AzureOpenAIClient(uri, new DefaultAzureCredential());
ChatClient chatClient = azureOpenAIClient.GetChatClient("gpt-4o-mini");

ChatCompletion result = chatClient.CompleteChat("List all rainbow colors");
Console.WriteLine(result.Content[0].Text);
```

## Available Agent Tools

| Tool | Class | Purpose |
|------|-------|---------|
| Code Interpreter | `CodeInterpreterToolDefinition` | Execute Python code |
| File Search | `FileSearchToolDefinition` | Search uploaded files |
| Function Calling | `FunctionToolDefinition` | Call custom functions |
| Bing Grounding | `BingGroundingToolDefinition` | Web search via Bing |
| Azure AI Search | `AzureAISearchToolDefinition` | Search Azure AI indexes |
| OpenAPI | `OpenApiToolDefinition` | Call external APIs |
| Azure Functions | `AzureFunctionToolDefinition` | Invoke Azure Functions |
| MCP | `MCPToolDefinition` | Model Context Protocol tools |
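
These classes plug into the `tools` parameter of the low-level client. A short sketch attaching a code interpreter to an agent, following the pattern from workflow 1 (parameter shapes may differ slightly between preview versions):

```csharp
using Azure.AI.Agents.Persistent;

// A persistent agent that can execute Python via the code interpreter tool.
PersistentAgent agent = await agentsClient.Administration.CreateAgentAsync(
    model: "gpt-4o-mini",
    name: "Data Analyst",
    instructions: "Use Python to answer data questions.",
    tools: [new CodeInterpreterToolDefinition()]);
```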

## Key Types Reference

| Type | Purpose |
|------|---------|
| `AIProjectClient` | Main entry point |
| `PersistentAgentsClient` | Low-level agent operations |
| `PromptAgentDefinition` | Versioned agent definition |
| `AgentVersion` | Versioned agent instance |
| `AIProjectConnection` | Connection to Azure resource |
| `AIProjectDeployment` | Model deployment info |
| `AIProjectDataset` | Dataset metadata |
| `AIProjectIndex` | Search index metadata |
| `Evaluation` | Evaluation configuration and results |

## Best Practices

1. **Use `DefaultAzureCredential`** for production authentication
2. **Use async methods** (`*Async`) for all I/O operations
3. **Poll with appropriate delays** (500ms recommended) when waiting for runs
4. **Clean up resources** — delete threads, agents, and files when done
5. **Use versioned agents** (via `Azure.AI.Projects.OpenAI`) for production scenarios
6. **Store connection IDs** rather than names for tool configurations
7. **Use `includeCredentials: true`** only when credentials are needed
8. **Handle pagination** — use `AsyncPageable<T>` for listing operations (see the sketch after this list)
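
Listing operations return pageable collections, and the async variants can be consumed with `await foreach`. A minimal sketch, assuming the async listing method is named `GetConnectionsAsync` (names vary slightly between releases):

```csharp
using Azure;
using Azure.AI.Projects;

// AsyncPageable<T> fetches continuation pages transparently under await foreach.
AsyncPageable<AIProjectConnection> connections =
    projectClient.Connections.GetConnectionsAsync();

await foreach (AIProjectConnection connection in connections)
{
    Console.WriteLine($"{connection.Name}: {connection.ConnectionType}");
}
```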

## Error Handling

```csharp
using Azure;

try
{
    var result = await projectClient.Evaluations.CreateAsync(evaluation);
}
catch (RequestFailedException ex)
{
    Console.WriteLine($"Error: {ex.Status} - {ex.ErrorCode}: {ex.Message}");
}
```

## Related SDKs

| SDK | Purpose | Install |
|-----|---------|---------|
| `Azure.AI.Projects` | High-level project client (this SDK) | `dotnet add package Azure.AI.Projects` |
| `Azure.AI.Agents.Persistent` | Low-level agent operations | `dotnet add package Azure.AI.Agents.Persistent` |
| `Azure.AI.Projects.OpenAI` | Versioned agents with OpenAI | `dotnet add package Azure.AI.Projects.OpenAI` |

## Reference Links

| Resource | URL |
|----------|-----|
| NuGet Package | https://www.nuget.org/packages/Azure.AI.Projects |
| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.projects |
| GitHub Source | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Projects |
| Samples | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.Projects/samples |

152 skills/azure-ai-projects-java/SKILL.md Normal file
@@ -0,0 +1,152 @@
---
name: azure-ai-projects-java
description: |
  Azure AI Projects SDK for Java. High-level SDK for Azure AI Foundry project management including connections, datasets, indexes, and evaluations.
  Triggers: "AIProjectClient java", "azure ai projects java", "Foundry project java", "ConnectionsClient", "DatasetsClient", "IndexesClient".
package: com.azure:azure-ai-projects
---

# Azure AI Projects SDK for Java

High-level SDK for Azure AI Foundry project management with access to connections, datasets, indexes, and evaluations.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-projects</artifactId>
    <version>1.0.0-beta.1</version>
</dependency>
```

## Environment Variables

```bash
PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
```

## Authentication

```java
import com.azure.ai.projects.AIProjectClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

AIProjectClientBuilder builder = new AIProjectClientBuilder()
    .endpoint(System.getenv("PROJECT_ENDPOINT"))
    .credential(new DefaultAzureCredentialBuilder().build());
```

## Client Hierarchy

The SDK provides multiple sub-clients for different operations:

| Client | Purpose |
|--------|---------|
| `ConnectionsClient` | Enumerate connected Azure resources |
| `DatasetsClient` | Upload documents and manage datasets |
| `DeploymentsClient` | Enumerate AI model deployments |
| `IndexesClient` | Create and manage search indexes |
| `EvaluationsClient` | Run AI model evaluations |
| `EvaluatorsClient` | Manage evaluator configurations |
| `SchedulesClient` | Manage scheduled operations |

```java
// Build sub-clients from builder
ConnectionsClient connectionsClient = builder.buildConnectionsClient();
DatasetsClient datasetsClient = builder.buildDatasetsClient();
DeploymentsClient deploymentsClient = builder.buildDeploymentsClient();
IndexesClient indexesClient = builder.buildIndexesClient();
EvaluationsClient evaluationsClient = builder.buildEvaluationsClient();
```

## Core Operations

### List Connections

```java
import com.azure.ai.projects.models.Connection;
import com.azure.core.http.rest.PagedIterable;

PagedIterable<Connection> connections = connectionsClient.listConnections();
for (Connection connection : connections) {
    System.out.println("Name: " + connection.getName());
    System.out.println("Type: " + connection.getType());
    System.out.println("Credential Type: " + connection.getCredentials().getType());
}
```

### List Indexes

```java
indexesClient.listLatest().forEach(index -> {
    System.out.println("Index name: " + index.getName());
    System.out.println("Version: " + index.getVersion());
    System.out.println("Description: " + index.getDescription());
});
```

### Create or Update Index

```java
import com.azure.ai.projects.models.AzureAISearchIndex;
import com.azure.ai.projects.models.Index;

String indexName = "my-index";
String indexVersion = "1.0";
String searchConnectionName = System.getenv("AI_SEARCH_CONNECTION_NAME");
String searchIndexName = System.getenv("AI_SEARCH_INDEX_NAME");

Index index = indexesClient.createOrUpdate(
    indexName,
    indexVersion,
    new AzureAISearchIndex()
        .setConnectionName(searchConnectionName)
        .setIndexName(searchIndexName)
);

System.out.println("Created index: " + index.getName());
```

### Access OpenAI Evaluations

The SDK exposes OpenAI's official SDK for evaluations:

```java
import com.openai.services.EvalService;

EvalService evalService = evaluationsClient.getOpenAIClient();
// Use OpenAI evaluation APIs directly
```

## Best Practices

1. **Use DefaultAzureCredential** for production authentication
2. **Reuse the client builder** to create multiple sub-clients efficiently
3. **Handle pagination** when listing resources with `PagedIterable`
4. **Use environment variables** for connection names and configuration
5. **Check connection types** before accessing credentials

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;
import com.azure.core.exception.ResourceNotFoundException;

try {
    Index index = indexesClient.get(indexName, version);
} catch (ResourceNotFoundException e) {
    System.err.println("Index not found: " + indexName);
} catch (HttpResponseException e) {
    System.err.println("Error: " + e.getResponse().getStatusCode());
}
```

## Reference Links

| Resource | URL |
|----------|-----|
| Product Docs | https://learn.microsoft.com/azure/ai-studio/ |
| API Reference | https://learn.microsoft.com/rest/api/aifoundry/aiprojects/ |
| GitHub Source | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-projects |
| Samples | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-projects/src/samples |

295 skills/azure-ai-projects-py/SKILL.md Normal file
@@ -0,0 +1,295 @@
---
name: azure-ai-projects-py
description: Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). Use when working with Foundry project clients, creating versioned agents with PromptAgentDefinition, running evaluations, managing connections/deployments/datasets/indexes, or using OpenAI-compatible clients. This is the high-level Foundry SDK - for low-level agent operations, use the azure-ai-agents-python skill.
package: azure-ai-projects
---

# Azure AI Projects Python SDK (Foundry SDK)

Build AI applications on Microsoft Foundry using the `azure-ai-projects` SDK.

## Installation

```bash
pip install azure-ai-projects azure-identity
```

## Environment Variables

```bash
AZURE_AI_PROJECT_ENDPOINT="https://<resource>.services.ai.azure.com/api/projects/<project>"
AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
```

## Authentication

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

credential = DefaultAzureCredential()
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential,
)
```

## Client Operations Overview

| Operation | Access | Purpose |
|-----------|--------|---------|
| `client.agents` | `.agents.*` | Agent CRUD, versions, threads, runs |
| `client.connections` | `.connections.*` | List/get project connections |
| `client.deployments` | `.deployments.*` | List model deployments |
| `client.datasets` | `.datasets.*` | Dataset management |
| `client.indexes` | `.indexes.*` | Index management |
| `client.evaluations` | `.evaluations.*` | Run evaluations |
| `client.red_teams` | `.red_teams.*` | Red team operations |

## Two Client Approaches

### 1. AIProjectClient (Native Foundry)

```python
from azure.ai.projects import AIProjectClient

client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Use Foundry-native operations
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are helpful.",
)
```

### 2. OpenAI-Compatible Client

```python
# Get OpenAI-compatible client from project
openai_client = client.get_openai_client()

# Use standard OpenAI API
response = openai_client.chat.completions.create(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    messages=[{"role": "user", "content": "Hello!"}],
)
```

## Agent Operations

### Create Agent (Basic)

```python
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are a helpful assistant.",
)
```

### Create Agent with Tools

```python
from azure.ai.agents.models import CodeInterpreterTool, FileSearchTool

agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="tool-agent",
    instructions="You can execute code and search files.",
    tools=[CodeInterpreterTool(), FileSearchTool()],
)
```

### Versioned Agents with PromptAgentDefinition

```python
from azure.ai.projects.models import PromptAgentDefinition

# Create a versioned agent
agent_version = client.agents.create_version(
    agent_name="customer-support-agent",
    definition=PromptAgentDefinition(
        model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
        instructions="You are a customer support specialist.",
        tools=[],  # Add tools as needed
    ),
    version_label="v1.0",
)
```

See [references/agents.md](references/agents.md) for detailed agent patterns.

## Tools Overview

| Tool | Class | Use Case |
|------|-------|----------|
| Code Interpreter | `CodeInterpreterTool` | Execute Python, generate files |
| File Search | `FileSearchTool` | RAG over uploaded documents |
| Bing Grounding | `BingGroundingTool` | Web search (requires connection) |
| Azure AI Search | `AzureAISearchTool` | Search your indexes |
| Function Calling | `FunctionTool` | Call your Python functions |
| OpenAPI | `OpenApiTool` | Call REST APIs |
| MCP | `McpTool` | Model Context Protocol servers |
| Memory Search | `MemorySearchTool` | Search agent memory stores |
| SharePoint | `SharepointGroundingTool` | Search SharePoint content |

See [references/tools.md](references/tools.md) for all tool patterns.

## Thread and Message Flow

```python
# 1. Create thread
thread = client.agents.threads.create()

# 2. Add message
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's the weather like?",
)

# 3. Create and process run
run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id,
)

# 4. Get response
if run.status == "completed":
    messages = client.agents.messages.list(thread_id=thread.id)
    for msg in messages:
        if msg.role == "assistant":
            print(msg.content[0].text.value)
```

## Connections

```python
# List all connections
connections = client.connections.list()
for conn in connections:
    print(f"{conn.name}: {conn.connection_type}")

# Get specific connection
connection = client.connections.get(connection_name="my-search-connection")
```

See [references/connections.md](references/connections.md) for connection patterns.

## Deployments

```python
# List available model deployments
deployments = client.deployments.list()
for deployment in deployments:
    print(f"{deployment.name}: {deployment.model}")
```

See [references/deployments.md](references/deployments.md) for deployment patterns.

## Datasets and Indexes

```python
# List datasets
datasets = client.datasets.list()

# List indexes
indexes = client.indexes.list()
```

See [references/datasets-indexes.md](references/datasets-indexes.md) for data operations.

## Evaluation

```python
# Using OpenAI client for evals
openai_client = client.get_openai_client()

# Create evaluation with built-in evaluators
eval_run = openai_client.evals.runs.create(
    eval_id="my-eval",
    name="quality-check",
    data_source={
        "type": "custom",
        "item_references": [{"item_id": "test-1"}],
    },
    testing_criteria=[
        {"type": "fluency"},
        {"type": "task_adherence"},
    ],
)
```

See [references/evaluation.md](references/evaluation.md) for evaluation patterns.

## Async Client

```python
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential  # async credential for the async client

async with AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
) as client:
    agent = await client.agents.create_agent(...)
    # ... async operations
```

See [references/async-patterns.md](references/async-patterns.md) for async patterns.

## Memory Stores

```python
# Create memory store for agent
memory_store = client.agents.create_memory_store(
    name="conversation-memory",
)

# Attach to agent for persistent memory
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="memory-agent",
    tools=[MemorySearchTool()],
    tool_resources={"memory": {"store_ids": [memory_store.id]}},
)
```

## Best Practices

1. **Use context managers** for the async client: `async with AIProjectClient(...) as client:`
2. **Clean up agents** when done: `client.agents.delete_agent(agent.id)`
3. **Use `create_and_process`** for simple runs, **streaming** for real-time UX
4. **Use versioned agents** for production deployments
5. **Prefer connections** for external service integration (AI Search, Bing, etc.)

## SDK Comparison

| Feature | `azure-ai-projects` | `azure-ai-agents` |
|---------|---------------------|-------------------|
| Level | High-level (Foundry) | Low-level (Agents) |
| Client | `AIProjectClient` | `AgentsClient` |
| Versioning | `create_version()` | Not available |
| Connections | Yes | No |
| Deployments | Yes | No |
| Datasets/Indexes | Yes | No |
| Evaluation | Via OpenAI client | No |
| When to use | Full Foundry integration | Standalone agent apps |

## Reference Files

- [references/agents.md](references/agents.md): Agent operations with PromptAgentDefinition
- [references/tools.md](references/tools.md): All agent tools with examples
- [references/evaluation.md](references/evaluation.md): Evaluation operations overview
- [references/built-in-evaluators.md](references/built-in-evaluators.md): Complete built-in evaluator reference
- [references/custom-evaluators.md](references/custom-evaluators.md): Code and prompt-based evaluator patterns
- [references/connections.md](references/connections.md): Connection operations
- [references/deployments.md](references/deployments.md): Deployment enumeration
- [references/datasets-indexes.md](references/datasets-indexes.md): Dataset and index operations
- [references/async-patterns.md](references/async-patterns.md): Async client usage
- [references/api-reference.md](references/api-reference.md): Complete API reference for all 373 SDK exports (v2.0.0b4)
- [scripts/run_batch_evaluation.py](scripts/run_batch_evaluation.py): CLI tool for batch evaluations

289 skills/azure-ai-projects-ts/SKILL.md Normal file
@@ -0,0 +1,289 @@
---
name: azure-ai-projects-ts
description: Build AI applications using Azure AI Projects SDK for JavaScript (@azure/ai-projects). Use when working with Foundry project clients, agents, connections, deployments, datasets, indexes, evaluations, or getting OpenAI clients.
package: @azure/ai-projects
---

# Azure AI Projects SDK for TypeScript

High-level SDK for Azure AI Foundry projects with agents, connections, deployments, and evaluations.

## Installation

```bash
npm install @azure/ai-projects @azure/identity
```

For tracing:
```bash
npm install @azure/monitor-opentelemetry @opentelemetry/api
```

## Environment Variables

```bash
AZURE_AI_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
MODEL_DEPLOYMENT_NAME=gpt-4o
```

## Authentication

```typescript
import { AIProjectClient } from "@azure/ai-projects";
import { DefaultAzureCredential } from "@azure/identity";

const client = new AIProjectClient(
  process.env.AZURE_AI_PROJECT_ENDPOINT!,
  new DefaultAzureCredential()
);
```

## Operation Groups

| Group | Purpose |
|-------|---------|
| `client.agents` | Create and manage AI agents |
| `client.connections` | List connected Azure resources |
| `client.deployments` | List model deployments |
| `client.datasets` | Upload and manage datasets |
| `client.indexes` | Create and manage search indexes |
| `client.evaluators` | Manage evaluation metrics |
| `client.memoryStores` | Manage agent memory |

## Getting OpenAI Client

```typescript
const openAIClient = await client.getOpenAIClient();

// Use for responses
const response = await openAIClient.responses.create({
  model: "gpt-4o",
  input: "What is the capital of France?"
});

// Use for conversations
const conversation = await openAIClient.conversations.create({
  items: [{ type: "message", role: "user", content: "Hello!" }]
});
```

## Agents

### Create Agent

```typescript
const agent = await client.agents.createVersion("my-agent", {
  kind: "prompt",
  model: "gpt-4o",
  instructions: "You are a helpful assistant."
});
```

### Agent with Tools

```typescript
// Code Interpreter
const codeAgent = await client.agents.createVersion("code-agent", {
  kind: "prompt",
  model: "gpt-4o",
  instructions: "You can execute code.",
  tools: [{ type: "code_interpreter", container: { type: "auto" } }]
});

// File Search
const searchAgent = await client.agents.createVersion("search-agent", {
  kind: "prompt",
  model: "gpt-4o",
  tools: [{ type: "file_search", vector_store_ids: [vectorStoreId] }]
});

// Web Search
const webAgent = await client.agents.createVersion("web-agent", {
  kind: "prompt",
  model: "gpt-4o",
  tools: [{
    type: "web_search_preview",
    user_location: { type: "approximate", country: "US", city: "Seattle" }
  }]
});

// Azure AI Search
const aiSearchAgent = await client.agents.createVersion("aisearch-agent", {
  kind: "prompt",
  model: "gpt-4o",
  tools: [{
    type: "azure_ai_search",
    azure_ai_search: {
      indexes: [{
        project_connection_id: connectionId,
        index_name: "my-index",
        query_type: "simple"
      }]
    }
  }]
});

// Function Tool
const funcAgent = await client.agents.createVersion("func-agent", {
  kind: "prompt",
  model: "gpt-4o",
  tools: [{
    type: "function",
    function: {
      name: "get_weather",
      description: "Get weather for a location",
      strict: true,
      parameters: {
        type: "object",
        properties: { location: { type: "string" } },
        required: ["location"]
      }
    }
  }]
});

// MCP Tool
const mcpAgent = await client.agents.createVersion("mcp-agent", {
  kind: "prompt",
  model: "gpt-4o",
  tools: [{
    type: "mcp",
    server_label: "my-mcp",
    server_url: "https://mcp-server.example.com",
    require_approval: "always"
  }]
});
```

### Run Agent

```typescript
const openAIClient = await client.getOpenAIClient();

// Create conversation
const conversation = await openAIClient.conversations.create({
  items: [{ type: "message", role: "user", content: "Hello!" }]
});

// Generate response using agent
const response = await openAIClient.responses.create(
  { conversation: conversation.id },
  { body: { agent: { name: agent.name, type: "agent_reference" } } }
);

// Cleanup
await openAIClient.conversations.delete(conversation.id);
await client.agents.deleteVersion(agent.name, agent.version);
```

## Connections

```typescript
// List all connections
for await (const conn of client.connections.list()) {
  console.log(conn.name, conn.type);
}

// Get connection by name
const conn = await client.connections.get("my-connection");

// Get connection with credentials
const connWithCreds = await client.connections.getWithCredentials("my-connection");

// Get default connection by type
const defaultAzureOpenAI = await client.connections.getDefault("AzureOpenAI", true);
```

## Deployments

```typescript
// List all deployments
for await (const deployment of client.deployments.list()) {
  if (deployment.type === "ModelDeployment") {
    console.log(deployment.name, deployment.modelName);
  }
}

// Filter by publisher
for await (const d of client.deployments.list({ modelPublisher: "OpenAI" })) {
  console.log(d.name);
}

// Get specific deployment
const deployment = await client.deployments.get("gpt-4o");
```

## Datasets

```typescript
// Upload single file
const dataset = await client.datasets.uploadFile(
  "my-dataset",
  "1.0",
  "./data/training.jsonl"
);

// Upload folder
const folderDataset = await client.datasets.uploadFolder(
  "my-dataset",
  "2.0",
  "./data/documents/"
);

// Get dataset
const ds = await client.datasets.get("my-dataset", "1.0");

// List versions
for await (const version of client.datasets.listVersions("my-dataset")) {
  console.log(version);
}

// Delete
await client.datasets.delete("my-dataset", "1.0");
```

## Indexes

```typescript
import { AzureAISearchIndex } from "@azure/ai-projects";

const indexConfig: AzureAISearchIndex = {
  name: "my-index",
  type: "AzureSearch",
  version: "1",
  indexName: "my-index",
  connectionName: "search-connection"
};

// Create index
const index = await client.indexes.createOrUpdate("my-index", "1", indexConfig);

// List indexes
for await (const idx of client.indexes.list()) {
  console.log(idx.name);
}

// Delete
await client.indexes.delete("my-index", "1");
```

## Key Types

```typescript
import {
  AIProjectClient,
  AIProjectClientOptionalParams,
  Connection,
  ModelDeployment,
  DatasetVersionUnion,
  AzureAISearchIndex
} from "@azure/ai-projects";
```

## Best Practices

1. **Use getOpenAIClient()** - For responses, conversations, files, and vector stores
2. **Version your agents** - Use `createVersion` for reproducible agent definitions
3. **Clean up resources** - Delete agents and conversations when done
4. **Use connections** - Get credentials from project connections, don't hardcode
5. **Filter deployments** - Use the `modelPublisher` filter to find specific models

227 skills/azure-ai-textanalytics-py/SKILL.md Normal file
@@ -0,0 +1,227 @@
---
name: azure-ai-textanalytics-py
description: |
  Azure AI Text Analytics SDK for sentiment analysis, entity recognition, key phrases, language detection, PII, and healthcare NLP. Use for natural language processing on text.
  Triggers: "text analytics", "sentiment analysis", "entity recognition", "key phrase", "PII detection", "TextAnalyticsClient".
package: azure-ai-textanalytics
---

# Azure AI Text Analytics SDK for Python

Client library for Azure AI Language service NLP capabilities including sentiment, entities, key phrases, and more.

## Installation

```bash
pip install azure-ai-textanalytics
```

## Environment Variables

```bash
AZURE_LANGUAGE_ENDPOINT=https://<resource>.cognitiveservices.azure.com
AZURE_LANGUAGE_KEY=<your-api-key>    # If using API key
```

## Authentication

### API Key

```python
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]

client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
```

### Entra ID (Recommended)

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.identity import DefaultAzureCredential

client = TextAnalyticsClient(
    endpoint=os.environ["AZURE_LANGUAGE_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Sentiment Analysis

```python
documents = [
    "I had a wonderful trip to Seattle last week!",
    "The food was terrible and the service was slow."
]

result = client.analyze_sentiment(documents, show_opinion_mining=True)

for doc in result:
    if not doc.is_error:
        print(f"Sentiment: {doc.sentiment}")
        print(f"Scores: pos={doc.confidence_scores.positive:.2f}, "
              f"neg={doc.confidence_scores.negative:.2f}, "
              f"neu={doc.confidence_scores.neutral:.2f}")

        # Opinion mining (aspect-based sentiment)
        for sentence in doc.sentences:
            for opinion in sentence.mined_opinions:
                target = opinion.target
                print(f"  Target: '{target.text}' - {target.sentiment}")
                for assessment in opinion.assessments:
                    print(f"    Assessment: '{assessment.text}' - {assessment.sentiment}")
```

## Entity Recognition

```python
documents = ["Microsoft was founded by Bill Gates and Paul Allen in Albuquerque."]

result = client.recognize_entities(documents)

for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            print(f"Entity: {entity.text}")
            print(f"  Category: {entity.category}")
            print(f"  Subcategory: {entity.subcategory}")
            print(f"  Confidence: {entity.confidence_score:.2f}")
```

## PII Detection

```python
documents = ["My SSN is 123-45-6789 and my email is john@example.com"]

result = client.recognize_pii_entities(documents)

for doc in result:
    if not doc.is_error:
        print(f"Redacted: {doc.redacted_text}")
        for entity in doc.entities:
            print(f"PII: {entity.text} ({entity.category})")
```

## Key Phrase Extraction

```python
documents = ["Azure AI provides powerful machine learning capabilities for developers."]

result = client.extract_key_phrases(documents)

for doc in result:
    if not doc.is_error:
        print(f"Key phrases: {doc.key_phrases}")
```

## Language Detection

```python
documents = ["Ce document est en francais.", "This is written in English."]

result = client.detect_language(documents)

for doc in result:
    if not doc.is_error:
        print(f"Language: {doc.primary_language.name} ({doc.primary_language.iso6391_name})")
        print(f"Confidence: {doc.primary_language.confidence_score:.2f}")
```

## Healthcare Text Analytics

```python
documents = ["Patient has diabetes and was prescribed metformin 500mg twice daily."]

poller = client.begin_analyze_healthcare_entities(documents)
result = poller.result()

for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            print(f"Entity: {entity.text}")
            print(f"  Category: {entity.category}")
            print(f"  Normalized: {entity.normalized_text}")

            # Entity links (UMLS, etc.)
            for link in entity.data_sources:
                print(f"  Link: {link.name} - {link.entity_id}")
```
|
||||
|
||||
## Multiple Analysis (Batch)
|
||||
|
||||
```python
|
||||
from azure.ai.textanalytics import (
|
||||
RecognizeEntitiesAction,
|
||||
ExtractKeyPhrasesAction,
|
||||
AnalyzeSentimentAction
|
||||
)
|
||||
|
||||
documents = ["Microsoft announced new Azure AI features at Build conference."]
|
||||
|
||||
poller = client.begin_analyze_actions(
|
||||
documents,
|
||||
actions=[
|
||||
RecognizeEntitiesAction(),
|
||||
ExtractKeyPhrasesAction(),
|
||||
AnalyzeSentimentAction()
|
||||
]
|
||||
)
|
||||
|
||||
results = poller.result()
|
||||
for doc_results in results:
|
||||
for result in doc_results:
|
||||
if result.kind == "EntityRecognition":
|
||||
print(f"Entities: {[e.text for e in result.entities]}")
|
||||
elif result.kind == "KeyPhraseExtraction":
|
||||
print(f"Key phrases: {result.key_phrases}")
|
||||
elif result.kind == "SentimentAnalysis":
|
||||
print(f"Sentiment: {result.sentiment}")
|
||||
```
|
||||
|
||||
## Async Client
|
||||
|
||||
```python
|
||||
from azure.ai.textanalytics.aio import TextAnalyticsClient
|
||||
from azure.identity.aio import DefaultAzureCredential
|
||||
|
||||
async def analyze():
|
||||
async with TextAnalyticsClient(
|
||||
endpoint=endpoint,
|
||||
credential=DefaultAzureCredential()
|
||||
) as client:
|
||||
result = await client.analyze_sentiment(documents)
|
||||
# Process results...
|
||||
```
|
||||
|
||||
## Client Types
|
||||
|
||||
| Client | Purpose |
|
||||
|--------|---------|
|
||||
| `TextAnalyticsClient` | All text analytics operations |
|
||||
| `TextAnalyticsClient` (aio) | Async version |
|
||||
|
||||
## Available Operations
|
||||
|
||||
| Method | Description |
|
||||
|--------|-------------|
|
||||
| `analyze_sentiment` | Sentiment analysis with opinion mining |
|
||||
| `recognize_entities` | Named entity recognition |
|
||||
| `recognize_pii_entities` | PII detection and redaction |
|
||||
| `recognize_linked_entities` | Entity linking to Wikipedia |
|
||||
| `extract_key_phrases` | Key phrase extraction |
|
||||
| `detect_language` | Language detection |
|
||||
| `begin_analyze_healthcare_entities` | Healthcare NLP (long-running) |
|
||||
| `begin_analyze_actions` | Multiple analyses in batch |
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Use batch operations** for multiple documents (up to 10 per request)
|
||||
2. **Enable opinion mining** for detailed aspect-based sentiment
|
||||
3. **Use async client** for high-throughput scenarios
|
||||
4. **Handle document errors** — results list may contain errors for some docs
|
||||
5. **Specify language** when known to improve accuracy
|
||||
6. **Use context manager** or close client explicitly
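
Practices 1 and 4 combine naturally. The following is a minimal sketch (assuming the `client` and `recognize_entities` call shown above; the helper name is illustrative) that chunks a document list into batches of 10 and skips per-document failures:

```python
# Minimal sketch, assuming the `client` created earlier in this skill;
# 10 documents per request matches the batch limit noted in practice 1.
def recognize_entities_batched(client, documents, batch_size=10):
    entities = []
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        for doc in client.recognize_entities(batch, language="en"):
            if doc.is_error:  # practice 4: a bad document doesn't fail the batch
                print(f"Skipping document: {doc.error.code} - {doc.error.message}")
                continue
            entities.extend(doc.entities)
    return entities
```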
69
skills/azure-ai-transcription-py/SKILL.md
Normal file
@@ -0,0 +1,69 @@
---
name: azure-ai-transcription-py
description: |
  Azure AI Transcription SDK for Python. Use for real-time and batch speech-to-text transcription with timestamps and diarization.
  Triggers: "transcription", "speech to text", "Azure AI Transcription", "TranscriptionClient".
package: azure-ai-transcription
---

# Azure AI Transcription SDK for Python

Client library for Azure AI Transcription (speech-to-text) with real-time and batch transcription.

## Installation

```bash
pip install azure-ai-transcription
```

## Environment Variables

```bash
TRANSCRIPTION_ENDPOINT=https://<resource>.cognitiveservices.azure.com
TRANSCRIPTION_KEY=<your-key>
```

## Authentication

Use subscription key authentication (DefaultAzureCredential is not supported for this client):

```python
import os
from azure.ai.transcription import TranscriptionClient

client = TranscriptionClient(
    endpoint=os.environ["TRANSCRIPTION_ENDPOINT"],
    credential=os.environ["TRANSCRIPTION_KEY"]
)
```

## Transcription (Batch)

```python
job = client.begin_transcription(
    name="meeting-transcription",
    locale="en-US",
    content_urls=["https://<storage>/audio.wav"],
    diarization_enabled=True
)
result = job.result()
print(result.status)
```

## Transcription (Real-time)

```python
stream = client.begin_stream_transcription(locale="en-US")
stream.send_audio_file("audio.wav")
for event in stream:
    print(event.text)
```

## Best Practices

1. **Enable diarization** when multiple speakers are present
2. **Use batch transcription** for long files stored in blob storage (see the sketch after this list)
3. **Capture timestamps** for subtitle generation
4. **Specify language** to improve recognition accuracy
5. **Handle streaming backpressure** for real-time transcription
6. **Close transcription sessions** when complete
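
As a sketch of practices 1 and 2, assuming the `begin_transcription` API exactly as shown above (the blob URLs and job names are placeholders):

```python
# Sketch only: assumes the begin_transcription API exactly as documented
# above; the blob URLs and job names are placeholders.
audio_urls = [
    "https://<storage>/recordings/day1.wav",
    "https://<storage>/recordings/day2.wav",
]

jobs = [
    client.begin_transcription(
        name=f"recording-{i}",
        locale="en-US",
        content_urls=[url],
        diarization_enabled=True,  # practice 1: multiple speakers
    )
    for i, url in enumerate(audio_urls)
]

for job in jobs:
    result = job.result()  # blocks until the long-running job finishes
    print(result.status)
```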
249
skills/azure-ai-translation-document-py/SKILL.md
Normal file
@@ -0,0 +1,249 @@
---
name: azure-ai-translation-document-py
description: |
  Azure AI Document Translation SDK for batch translation of documents with format preservation. Use for translating Word, PDF, Excel, PowerPoint, and other document formats at scale.
  Triggers: "document translation", "batch translation", "translate documents", "DocumentTranslationClient".
package: azure-ai-translation-document
---

# Azure AI Document Translation SDK for Python

Client library for the Azure AI Translator document translation service, for batch document translation with format preservation.

## Installation

```bash
pip install azure-ai-translation-document
```

## Environment Variables

```bash
AZURE_DOCUMENT_TRANSLATION_ENDPOINT=https://<resource>.cognitiveservices.azure.com
AZURE_DOCUMENT_TRANSLATION_KEY=<your-api-key>  # If using API key

# Storage for source and target documents
AZURE_SOURCE_CONTAINER_URL=https://<storage>.blob.core.windows.net/<container>?<sas>
AZURE_TARGET_CONTAINER_URL=https://<storage>.blob.core.windows.net/<container>?<sas>
```

## Authentication

### API Key

```python
import os
from azure.ai.translation.document import DocumentTranslationClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["AZURE_DOCUMENT_TRANSLATION_ENDPOINT"]
key = os.environ["AZURE_DOCUMENT_TRANSLATION_KEY"]

client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
```

### Entra ID (Recommended)

```python
from azure.ai.translation.document import DocumentTranslationClient
from azure.identity import DefaultAzureCredential

client = DocumentTranslationClient(
    endpoint=os.environ["AZURE_DOCUMENT_TRANSLATION_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Basic Document Translation

```python
from azure.ai.translation.document import DocumentTranslationInput, TranslationTarget

source_url = os.environ["AZURE_SOURCE_CONTAINER_URL"]
target_url = os.environ["AZURE_TARGET_CONTAINER_URL"]

# Start translation job
poller = client.begin_translation(
    inputs=[
        DocumentTranslationInput(
            source_url=source_url,
            targets=[
                TranslationTarget(
                    target_url=target_url,
                    language="es"  # Translate to Spanish
                )
            ]
        )
    ]
)

# Wait for completion
result = poller.result()

print(f"Status: {poller.status()}")
print(f"Documents translated: {poller.details.documents_succeeded_count}")
print(f"Documents failed: {poller.details.documents_failed_count}")
```

## Multiple Target Languages

```python
poller = client.begin_translation(
    inputs=[
        DocumentTranslationInput(
            source_url=source_url,
            targets=[
                TranslationTarget(target_url=target_url_es, language="es"),
                TranslationTarget(target_url=target_url_fr, language="fr"),
                TranslationTarget(target_url=target_url_de, language="de")
            ]
        )
    ]
)
```

## Translate Single Document

```python
from azure.ai.translation.document import SingleDocumentTranslationClient

single_client = SingleDocumentTranslationClient(endpoint, AzureKeyCredential(key))

with open("document.docx", "rb") as f:
    document_content = f.read()

result = single_client.translate(
    body=document_content,
    target_language="es",
    content_type="application/vnd.openxmlformats-officedocument.wordprocessingml.document"
)

# Save translated document
with open("document_es.docx", "wb") as f:
    f.write(result)
```

## Check Translation Status

```python
# Get all translation operations
operations = client.list_translation_statuses()

for op in operations:
    print(f"Operation ID: {op.id}")
    print(f"Status: {op.status}")
    print(f"Created: {op.created_on}")
    print(f"Total documents: {op.documents_total_count}")
    print(f"Succeeded: {op.documents_succeeded_count}")
    print(f"Failed: {op.documents_failed_count}")
```

## List Document Statuses

```python
# Get status of individual documents in a job
operation_id = poller.id
document_statuses = client.list_document_statuses(operation_id)

for doc in document_statuses:
    print(f"Document: {doc.source_document_url}")
    print(f"  Status: {doc.status}")
    print(f"  Translated to: {doc.translated_to}")
    if doc.error:
        print(f"  Error: {doc.error.message}")
```

## Cancel Translation

```python
# Cancel a running translation
client.cancel_translation(operation_id)
```

## Using a Glossary

```python
from azure.ai.translation.document import TranslationGlossary

poller = client.begin_translation(
    inputs=[
        DocumentTranslationInput(
            source_url=source_url,
            targets=[
                TranslationTarget(
                    target_url=target_url,
                    language="es",
                    glossaries=[
                        TranslationGlossary(
                            glossary_url="https://<storage>.blob.core.windows.net/glossary/terms.csv?<sas>",
                            file_format="csv"
                        )
                    ]
                )
            ]
        )
    ]
)
```

## Supported Document Formats

```python
# Get supported formats
formats = client.get_supported_document_formats()

for fmt in formats:
    print(f"Format: {fmt.format}")
    print(f"  Extensions: {fmt.file_extensions}")
    print(f"  Content types: {fmt.content_types}")
```

## Supported Languages

```python
# Get supported languages
languages = client.get_supported_languages()

for lang in languages:
    print(f"Language: {lang.name} ({lang.code})")
```

## Async Client

```python
from azure.ai.translation.document.aio import DocumentTranslationClient
from azure.identity.aio import DefaultAzureCredential

async def translate_documents():
    async with DocumentTranslationClient(
        endpoint=endpoint,
        credential=DefaultAzureCredential()
    ) as client:
        poller = await client.begin_translation(inputs=[...])
        result = await poller.result()
```

## Supported Formats

| Category | Formats |
|----------|---------|
| Documents | DOCX, PDF, PPTX, XLSX, HTML, TXT, RTF |
| Structured | CSV, TSV, JSON, XML |
| Localization | XLIFF, XLF, MHTML |

## Storage Requirements

- Source and target containers must be Azure Blob Storage
- Use SAS tokens with appropriate permissions (see the sketch below):
  - Source: Read, List
  - Target: Write, List
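
A sketch of minting those SAS URLs with the `azure-storage-blob` package (an assumed extra dependency; any tool that produces container SAS URLs works), scoped to the minimal permissions above:

```python
# Sketch using the azure-storage-blob package (an assumed dependency);
# the account name/key values are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

account_name = "<storage-account>"
account_key = "<account-key>"

def container_sas_url(container, permission, hours=2):
    sas = generate_container_sas(
        account_name=account_name,
        container_name=container,
        account_key=account_key,
        permission=permission,
        expiry=datetime.now(timezone.utc) + timedelta(hours=hours),
    )
    return f"https://{account_name}.blob.core.windows.net/{container}?{sas}"

# Minimal permissions from the list above
source_url = container_sas_url("source-docs", ContainerSasPermissions(read=True, list=True))
target_url = container_sas_url("translated-docs", ContainerSasPermissions(write=True, list=True))
```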

## Best Practices

1. **Use SAS tokens** with minimal required permissions
2. **Monitor long-running operations** with `poller.status()`
3. **Handle document-level errors** by iterating document statuses
4. **Use glossaries** for domain-specific terminology
5. **Separate target containers** for each language
6. **Use async client** for multiple concurrent jobs
7. **Check supported formats** before submitting documents
274
skills/azure-ai-translation-text-py/SKILL.md
Normal file
@@ -0,0 +1,274 @@
---
name: azure-ai-translation-text-py
description: |
  Azure AI Text Translation SDK for real-time text translation, transliteration, language detection, and dictionary lookup. Use for translating text content in applications.
  Triggers: "text translation", "translator", "translate text", "transliterate", "TextTranslationClient".
package: azure-ai-translation-text
---

# Azure AI Text Translation SDK for Python

Client library for the Azure AI Translator text translation service, covering real-time text translation, transliteration, and language operations.

## Installation

```bash
pip install azure-ai-translation-text
```

## Environment Variables

```bash
AZURE_TRANSLATOR_KEY=<your-api-key>
AZURE_TRANSLATOR_REGION=<your-region>  # e.g., eastus, westus2
# Or use custom endpoint
AZURE_TRANSLATOR_ENDPOINT=https://<resource>.cognitiveservices.azure.com
```

## Authentication

### API Key with Region

```python
import os
from azure.ai.translation.text import TextTranslationClient
from azure.core.credentials import AzureKeyCredential

key = os.environ["AZURE_TRANSLATOR_KEY"]
region = os.environ["AZURE_TRANSLATOR_REGION"]

# Create credential with region
credential = AzureKeyCredential(key)
client = TextTranslationClient(credential=credential, region=region)
```

### API Key with Custom Endpoint

```python
endpoint = os.environ["AZURE_TRANSLATOR_ENDPOINT"]

client = TextTranslationClient(
    credential=AzureKeyCredential(key),
    endpoint=endpoint
)
```

### Entra ID (Recommended)

```python
from azure.ai.translation.text import TextTranslationClient
from azure.identity import DefaultAzureCredential

client = TextTranslationClient(
    credential=DefaultAzureCredential(),
    endpoint=os.environ["AZURE_TRANSLATOR_ENDPOINT"]
)
```

## Basic Translation

```python
# Translate to a single language
result = client.translate(
    body=["Hello, how are you?", "Welcome to Azure!"],
    to=["es"]  # Spanish
)

for item in result:
    for translation in item.translations:
        print(f"Translated: {translation.text}")
        print(f"Target language: {translation.to}")
```

## Translate to Multiple Languages

```python
result = client.translate(
    body=["Hello, world!"],
    to=["es", "fr", "de", "ja"]  # Spanish, French, German, Japanese
)

for item in result:
    print(f"Source: {item.detected_language.language if item.detected_language else 'unknown'}")
    for translation in item.translations:
        print(f"  {translation.to}: {translation.text}")
```

## Specify Source Language

```python
result = client.translate(
    body=["Bonjour le monde"],
    from_parameter="fr",  # Source is French
    to=["en", "es"]
)
```

## Language Detection

```python
result = client.translate(
    body=["Hola, como estas?"],
    to=["en"]
)

for item in result:
    if item.detected_language:
        print(f"Detected language: {item.detected_language.language}")
        print(f"Confidence: {item.detected_language.score:.2f}")
```

## Transliteration

Convert text from one script to another:

```python
result = client.transliterate(
    body=["konnichiwa"],
    language="ja",
    from_script="Latn",  # From Latin script
    to_script="Jpan"     # To Japanese script
)

for item in result:
    print(f"Transliterated: {item.text}")
    print(f"Script: {item.script}")
```

## Dictionary Lookup

Find alternate translations and definitions:

```python
result = client.lookup_dictionary_entries(
    body=["fly"],
    from_parameter="en",
    to="es"
)

for item in result:
    print(f"Source: {item.normalized_source} ({item.display_source})")
    for translation in item.translations:
        print(f"  Translation: {translation.normalized_target}")
        print(f"  Part of speech: {translation.pos_tag}")
        print(f"  Confidence: {translation.confidence:.2f}")
```

## Dictionary Examples

Get usage examples for translations:

```python
from azure.ai.translation.text.models import DictionaryExampleTextItem

result = client.lookup_dictionary_examples(
    body=[DictionaryExampleTextItem(text="fly", translation="volar")],
    from_parameter="en",
    to="es"
)

for item in result:
    for example in item.examples:
        print(f"Source: {example.source_prefix}{example.source_term}{example.source_suffix}")
        print(f"Target: {example.target_prefix}{example.target_term}{example.target_suffix}")
```

## Get Supported Languages

```python
# Get all supported languages
languages = client.get_supported_languages()

# Translation languages
print("Translation languages:")
for code, lang in languages.translation.items():
    print(f"  {code}: {lang.name} ({lang.native_name})")

# Transliteration languages
print("\nTransliteration languages:")
for code, lang in languages.transliteration.items():
    print(f"  {code}: {lang.name}")
    for script in lang.scripts:
        print(f"    {script.code} -> {[t.code for t in script.to_scripts]}")

# Dictionary languages
print("\nDictionary languages:")
for code, lang in languages.dictionary.items():
    print(f"  {code}: {lang.name}")
```

## Break Sentence

Identify sentence boundaries:

```python
result = client.find_sentence_boundaries(
    body=["Hello! How are you? I hope you are well."],
    language="en"
)

for item in result:
    print(f"Sentence lengths: {item.sent_len}")
```

## Translation Options

```python
result = client.translate(
    body=["Hello, world!"],
    to=["de"],
    text_type="html",             # "plain" or "html"
    profanity_action="Marked",    # "NoAction", "Deleted", "Marked"
    profanity_marker="Asterisk",  # "Asterisk", "Tag"
    include_alignment=True,       # Include word alignment
    include_sentence_length=True  # Include sentence boundaries
)

for item in result:
    translation = item.translations[0]
    print(f"Translated: {translation.text}")
    if translation.alignment:
        print(f"Alignment: {translation.alignment.proj}")
    if translation.sent_len:
        print(f"Sentence lengths: {translation.sent_len.src_sent_len}")
```

## Async Client

```python
from azure.ai.translation.text.aio import TextTranslationClient
from azure.core.credentials import AzureKeyCredential

async def translate_text():
    async with TextTranslationClient(
        credential=AzureKeyCredential(key),
        region=region
    ) as client:
        result = await client.translate(
            body=["Hello, world!"],
            to=["es"]
        )
        print(result[0].translations[0].text)
```

## Client Methods

| Method | Description |
|--------|-------------|
| `translate` | Translate text to one or more languages |
| `transliterate` | Convert text between scripts |
| `detect` | Detect language of text |
| `find_sentence_boundaries` | Identify sentence boundaries |
| `lookup_dictionary_entries` | Dictionary lookup for translations |
| `lookup_dictionary_examples` | Get usage examples |
| `get_supported_languages` | List supported languages |

## Best Practices

1. **Batch translations** — Send multiple texts in one request (up to 100; see the sketch after this list)
2. **Specify source language** when known to improve accuracy
3. **Use async client** for high-throughput scenarios
4. **Cache language list** — Supported languages don't change frequently
5. **Handle profanity** appropriately for your application
6. **Use html text_type** when translating HTML content
7. **Include alignment** for applications needing word mapping
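
A minimal sketch of practice 1, assuming the `client` and `translate()` call shown above (`translate_all` is an illustrative helper name; 100 is the per-request element limit from practice 1):

```python
# Minimal sketch, assuming the client and translate() call shown above;
# chunks the input so each request stays within the 100-text limit.
def translate_all(client, texts, target="es", batch_size=100):
    out = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        for item in client.translate(body=batch, to=[target]):
            out.append(item.translations[0].text)
    return out
```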
286
skills/azure-ai-translation-ts/SKILL.md
Normal file
@@ -0,0 +1,286 @@
---
name: azure-ai-translation-ts
description: Build translation applications using Azure Translation SDKs for JavaScript (@azure-rest/ai-translation-text, @azure-rest/ai-translation-document). Use when implementing text translation, transliteration, language detection, or batch document translation.
package: @azure-rest/ai-translation-text, @azure-rest/ai-translation-document
---

# Azure Translation SDKs for TypeScript

Text and document translation with REST-style clients.

## Installation

```bash
# Text translation
npm install @azure-rest/ai-translation-text @azure/identity

# Document translation
npm install @azure-rest/ai-translation-document @azure/identity
```

## Environment Variables

```bash
TRANSLATOR_ENDPOINT=https://api.cognitive.microsofttranslator.com
TRANSLATOR_SUBSCRIPTION_KEY=<your-api-key>
TRANSLATOR_REGION=<your-region>  # e.g., westus, eastus
```

## Text Translation Client

### Authentication

```typescript
import TextTranslationClient, { TranslatorCredential } from "@azure-rest/ai-translation-text";

// API Key + Region
const credential: TranslatorCredential = {
  key: process.env.TRANSLATOR_SUBSCRIPTION_KEY!,
  region: process.env.TRANSLATOR_REGION!,
};
const client = TextTranslationClient(process.env.TRANSLATOR_ENDPOINT!, credential);

// Or just credential (uses global endpoint)
const client2 = TextTranslationClient(credential);
```

### Translate Text

```typescript
import TextTranslationClient, { isUnexpected } from "@azure-rest/ai-translation-text";

const response = await client.path("/translate").post({
  body: {
    inputs: [
      {
        text: "Hello, how are you?",
        language: "en", // source (optional, auto-detect)
        targets: [
          { language: "es" },
          { language: "fr" },
        ],
      },
    ],
  },
});

if (isUnexpected(response)) {
  throw response.body.error;
}

for (const result of response.body.value) {
  for (const translation of result.translations) {
    console.log(`${translation.language}: ${translation.text}`);
  }
}
```

### Translate with Options

```typescript
const response = await client.path("/translate").post({
  body: {
    inputs: [
      {
        text: "Hello world",
        language: "en",
        textType: "Plain", // or "Html"
        targets: [
          {
            language: "de",
            profanityAction: "NoAction", // "Marked" | "Deleted"
            tone: "formal", // LLM-specific
          },
        ],
      },
    ],
  },
});
```

### Get Supported Languages

```typescript
const response = await client.path("/languages").get();

if (isUnexpected(response)) {
  throw response.body.error;
}

// Translation languages
for (const [code, lang] of Object.entries(response.body.translation || {})) {
  console.log(`${code}: ${lang.name} (${lang.nativeName})`);
}
```

### Transliterate

```typescript
const response = await client.path("/transliterate").post({
  body: { inputs: [{ text: "这是个测试" }] },
  queryParameters: {
    language: "zh-Hans",
    fromScript: "Hans",
    toScript: "Latn",
  },
});

if (!isUnexpected(response)) {
  for (const t of response.body.value) {
    console.log(`${t.script}: ${t.text}`); // Latn: zhè shì gè cè shì
  }
}
```

### Detect Language

```typescript
const response = await client.path("/detect").post({
  body: { inputs: [{ text: "Bonjour le monde" }] },
});

if (!isUnexpected(response)) {
  for (const result of response.body.value) {
    console.log(`Language: ${result.language}, Score: ${result.score}`);
  }
}
```

## Document Translation Client

### Authentication

```typescript
import DocumentTranslationClient from "@azure-rest/ai-translation-document";
import { DefaultAzureCredential } from "@azure/identity";

const endpoint = "https://<translator>.cognitiveservices.azure.com";

// TokenCredential
const client = DocumentTranslationClient(endpoint, new DefaultAzureCredential());

// API Key
const client2 = DocumentTranslationClient(endpoint, { key: "<api-key>" });
```

### Single Document Translation

```typescript
import DocumentTranslationClient from "@azure-rest/ai-translation-document";
import { writeFile } from "node:fs/promises";

const response = await client.path("/document:translate").post({
  queryParameters: {
    targetLanguage: "es",
    sourceLanguage: "en", // optional
  },
  contentType: "multipart/form-data",
  body: [
    {
      name: "document",
      body: "Hello, this is a test document.",
      filename: "test.txt",
      contentType: "text/plain",
    },
  ],
}).asNodeStream();

if (response.status === "200") {
  await writeFile("translated.txt", response.body);
}
```

### Batch Document Translation

```typescript
import { ContainerSASPermissions, BlobServiceClient } from "@azure/storage-blob";

// Generate SAS URLs for source and target containers
const sourceSas = await sourceContainer.generateSasUrl({
  permissions: ContainerSASPermissions.parse("rl"),
  expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000),
});

const targetSas = await targetContainer.generateSasUrl({
  permissions: ContainerSASPermissions.parse("rwl"),
  expiresOn: new Date(Date.now() + 24 * 60 * 60 * 1000),
});

// Start batch translation
const response = await client.path("/document/batches").post({
  body: {
    inputs: [
      {
        source: { sourceUrl: sourceSas },
        targets: [
          { targetUrl: targetSas, language: "fr" },
        ],
      },
    ],
  },
});

// Get operation ID from header
const operationId = new URL(response.headers["operation-location"])
  .pathname.split("/").pop();
```

### Get Translation Status

```typescript
import { isUnexpected, paginate } from "@azure-rest/ai-translation-document";

const statusResponse = await client.path("/document/batches/{id}", operationId).get();

if (!isUnexpected(statusResponse)) {
  const status = statusResponse.body;
  console.log(`Status: ${status.status}`);
  console.log(`Total: ${status.summary.total}`);
  console.log(`Success: ${status.summary.success}`);
}

// List documents with pagination
const docsResponse = await client.path("/document/batches/{id}/documents", operationId).get();
const documents = paginate(client, docsResponse);

for await (const doc of documents) {
  console.log(`${doc.id}: ${doc.status}`);
}
```

### Get Supported Formats

```typescript
const response = await client.path("/document/formats").get();

if (!isUnexpected(response)) {
  for (const format of response.body.value) {
    console.log(`${format.format}: ${format.fileExtensions.join(", ")}`);
  }
}
```

## Key Types

```typescript
// Text Translation
import type {
  TranslatorCredential,
  TranslatorTokenCredential,
} from "@azure-rest/ai-translation-text";

// Document Translation
import type {
  DocumentTranslateParameters,
  StartTranslationDetails,
  TranslationStatus,
} from "@azure-rest/ai-translation-document";
```

## Best Practices

1. **Auto-detect source** - Omit `language` parameter to auto-detect
2. **Batch requests** - Translate multiple texts in one call for efficiency
3. **Use SAS tokens** - For document translation, use time-limited SAS URLs
4. **Handle errors** - Always check `isUnexpected(response)` before accessing body
5. **Regional endpoints** - Use regional endpoints for lower latency
289
skills/azure-ai-vision-imageanalysis-java/SKILL.md
Normal file
@@ -0,0 +1,289 @@
---
name: azure-ai-vision-imageanalysis-java
description: Build image analysis applications with Azure AI Vision SDK for Java. Use when implementing image captioning, OCR text extraction, object detection, tagging, or smart cropping.
package: com.azure:azure-ai-vision-imageanalysis
---

# Azure AI Vision Image Analysis SDK for Java

Build image analysis applications using the Azure AI Vision Image Analysis SDK for Java.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-vision-imageanalysis</artifactId>
    <version>1.1.0-beta.1</version>
</dependency>
```

## Client Creation

### With API Key

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisClient;
import com.azure.ai.vision.imageanalysis.ImageAnalysisClientBuilder;
import com.azure.core.credential.KeyCredential;

String endpoint = System.getenv("VISION_ENDPOINT");
String key = System.getenv("VISION_KEY");

ImageAnalysisClient client = new ImageAnalysisClientBuilder()
    .endpoint(endpoint)
    .credential(new KeyCredential(key))
    .buildClient();
```

### Async Client

```java
import com.azure.ai.vision.imageanalysis.ImageAnalysisAsyncClient;

ImageAnalysisAsyncClient asyncClient = new ImageAnalysisClientBuilder()
    .endpoint(endpoint)
    .credential(new KeyCredential(key))
    .buildAsyncClient();
```

### With DefaultAzureCredential

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

ImageAnalysisClient client = new ImageAnalysisClientBuilder()
    .endpoint(endpoint)
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildClient();
```

## Visual Features

| Feature | Description |
|---------|-------------|
| `CAPTION` | Generate human-readable image description |
| `DENSE_CAPTIONS` | Captions for up to 10 regions |
| `READ` | OCR - Extract text from images |
| `TAGS` | Content tags for objects, scenes, actions |
| `OBJECTS` | Detect objects with bounding boxes |
| `SMART_CROPS` | Smart thumbnail regions |
| `PEOPLE` | Detect people with locations |

## Core Patterns

### Generate Caption

```java
import com.azure.ai.vision.imageanalysis.models.*;
import com.azure.core.util.BinaryData;
import java.io.File;
import java.util.Arrays;

// From file
BinaryData imageData = BinaryData.fromFile(new File("image.jpg").toPath());

ImageAnalysisResult result = client.analyze(
    imageData,
    Arrays.asList(VisualFeatures.CAPTION),
    new ImageAnalysisOptions().setGenderNeutralCaption(true));

System.out.printf("Caption: \"%s\" (confidence: %.4f)%n",
    result.getCaption().getText(),
    result.getCaption().getConfidence());
```

### Generate Caption from URL

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    "https://example.com/image.jpg",
    Arrays.asList(VisualFeatures.CAPTION),
    new ImageAnalysisOptions().setGenderNeutralCaption(true));

System.out.printf("Caption: \"%s\"%n", result.getCaption().getText());
```

### Extract Text (OCR)

```java
ImageAnalysisResult result = client.analyze(
    BinaryData.fromFile(new File("document.jpg").toPath()),
    Arrays.asList(VisualFeatures.READ),
    null);

for (DetectedTextBlock block : result.getRead().getBlocks()) {
    for (DetectedTextLine line : block.getLines()) {
        System.out.printf("Line: '%s'%n", line.getText());
        System.out.printf("  Bounding polygon: %s%n", line.getBoundingPolygon());

        for (DetectedTextWord word : line.getWords()) {
            System.out.printf("  Word: '%s' (confidence: %.4f)%n",
                word.getText(),
                word.getConfidence());
        }
    }
}
```

### Detect Objects

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.OBJECTS),
    null);

for (DetectedObject obj : result.getObjects()) {
    System.out.printf("Object: %s (confidence: %.4f)%n",
        obj.getTags().get(0).getName(),
        obj.getTags().get(0).getConfidence());

    ImageBoundingBox box = obj.getBoundingBox();
    System.out.printf("  Location: x=%d, y=%d, w=%d, h=%d%n",
        box.getX(), box.getY(), box.getWidth(), box.getHeight());
}
```

### Get Tags

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.TAGS),
    null);

for (DetectedTag tag : result.getTags()) {
    System.out.printf("Tag: %s (confidence: %.4f)%n",
        tag.getName(),
        tag.getConfidence());
}
```

### Detect People

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.PEOPLE),
    null);

for (DetectedPerson person : result.getPeople()) {
    ImageBoundingBox box = person.getBoundingBox();
    System.out.printf("Person at x=%d, y=%d (confidence: %.4f)%n",
        box.getX(), box.getY(), person.getConfidence());
}
```

### Smart Cropping

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.SMART_CROPS),
    new ImageAnalysisOptions().setSmartCropsAspectRatios(Arrays.asList(1.0, 1.5)));

for (CropRegion crop : result.getSmartCrops()) {
    System.out.printf("Crop region: aspect=%.2f, x=%d, y=%d, w=%d, h=%d%n",
        crop.getAspectRatio(),
        crop.getBoundingBox().getX(),
        crop.getBoundingBox().getY(),
        crop.getBoundingBox().getWidth(),
        crop.getBoundingBox().getHeight());
}
```

### Dense Captions

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.DENSE_CAPTIONS),
    new ImageAnalysisOptions().setGenderNeutralCaption(true));

for (DenseCaption caption : result.getDenseCaptions()) {
    System.out.printf("Caption: \"%s\" (confidence: %.4f)%n",
        caption.getText(),
        caption.getConfidence());
    System.out.printf("  Region: x=%d, y=%d, w=%d, h=%d%n",
        caption.getBoundingBox().getX(),
        caption.getBoundingBox().getY(),
        caption.getBoundingBox().getWidth(),
        caption.getBoundingBox().getHeight());
}
```

### Multiple Features

```java
ImageAnalysisResult result = client.analyzeFromUrl(
    imageUrl,
    Arrays.asList(
        VisualFeatures.CAPTION,
        VisualFeatures.TAGS,
        VisualFeatures.OBJECTS,
        VisualFeatures.READ),
    new ImageAnalysisOptions()
        .setGenderNeutralCaption(true)
        .setLanguage("en"));

// Access all results
System.out.println("Caption: " + result.getCaption().getText());
System.out.println("Tags: " + result.getTags().size());
System.out.println("Objects: " + result.getObjects().size());
System.out.println("Text blocks: " + result.getRead().getBlocks().size());
```

### Async Analysis

```java
asyncClient.analyzeFromUrl(
    imageUrl,
    Arrays.asList(VisualFeatures.CAPTION),
    null)
    .subscribe(
        result -> System.out.println("Caption: " + result.getCaption().getText()),
        error -> System.err.println("Error: " + error.getMessage()),
        () -> System.out.println("Complete")
    );
```

## Error Handling

```java
import com.azure.core.exception.HttpResponseException;

try {
    client.analyzeFromUrl(imageUrl, Arrays.asList(VisualFeatures.CAPTION), null);
} catch (HttpResponseException e) {
    System.out.println("Status: " + e.getResponse().getStatusCode());
    System.out.println("Error: " + e.getMessage());
}
```

## Environment Variables

```bash
VISION_ENDPOINT=https://<resource>.cognitiveservices.azure.com/
VISION_KEY=<your-api-key>
```

## Image Requirements

- Formats: JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, MPO
- Size: < 20 MB
- Dimensions: 50x50 to 16000x16000 pixels

## Regional Availability

Caption and Dense Captions require GPU-supported regions. Check [supported regions](https://learn.microsoft.com/azure/ai-services/computer-vision/concept-describe-images-40) before deployment.

## Trigger Phrases

- "image analysis Java"
- "Azure Vision SDK"
- "image captioning"
- "OCR image text extraction"
- "object detection image"
- "smart crop thumbnail"
- "detect people image"
260
skills/azure-ai-vision-imageanalysis-py/SKILL.md
Normal file
@@ -0,0 +1,260 @@
---
name: azure-ai-vision-imageanalysis-py
description: |
  Azure AI Vision Image Analysis SDK for captions, tags, objects, OCR, people detection, and smart cropping. Use for computer vision and image understanding tasks.
  Triggers: "image analysis", "computer vision", "OCR", "object detection", "ImageAnalysisClient", "image caption".
package: azure-ai-vision-imageanalysis
---

# Azure AI Vision Image Analysis SDK for Python

Client library for Azure AI Vision 4.0 image analysis including captions, tags, objects, OCR, and more.

## Installation

```bash
pip install azure-ai-vision-imageanalysis
```

## Environment Variables

```bash
VISION_ENDPOINT=https://<resource>.cognitiveservices.azure.com
VISION_KEY=<your-api-key>  # If using API key
```

## Authentication

### API Key

```python
import os
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["VISION_ENDPOINT"]
key = os.environ["VISION_KEY"]

client = ImageAnalysisClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key)
)
```

### Entra ID (Recommended)

```python
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.identity import DefaultAzureCredential

client = ImageAnalysisClient(
    endpoint=os.environ["VISION_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Analyze Image from URL

```python
from azure.ai.vision.imageanalysis.models import VisualFeatures

image_url = "https://example.com/image.jpg"

result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[
        VisualFeatures.CAPTION,
        VisualFeatures.TAGS,
        VisualFeatures.OBJECTS,
        VisualFeatures.READ,
        VisualFeatures.PEOPLE,
        VisualFeatures.SMART_CROPS,
        VisualFeatures.DENSE_CAPTIONS
    ],
    gender_neutral_caption=True,
    language="en"
)
```

## Analyze Image from File

```python
with open("image.jpg", "rb") as f:
    image_data = f.read()

result = client.analyze(
    image_data=image_data,
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.TAGS]
)
```

## Image Caption

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.CAPTION],
    gender_neutral_caption=True
)

if result.caption:
    print(f"Caption: {result.caption.text}")
    print(f"Confidence: {result.caption.confidence:.2f}")
```

## Dense Captions (Multiple Regions)

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.DENSE_CAPTIONS]
)

if result.dense_captions:
    for caption in result.dense_captions.list:
        print(f"Caption: {caption.text}")
        print(f"  Confidence: {caption.confidence:.2f}")
        print(f"  Bounding box: {caption.bounding_box}")
```

## Tags

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.TAGS]
)

if result.tags:
    for tag in result.tags.list:
        print(f"Tag: {tag.name} (confidence: {tag.confidence:.2f})")
```

## Object Detection

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.OBJECTS]
)

if result.objects:
    for obj in result.objects.list:
        print(f"Object: {obj.tags[0].name}")
        print(f"  Confidence: {obj.tags[0].confidence:.2f}")
        box = obj.bounding_box
        print(f"  Bounding box: x={box.x}, y={box.y}, w={box.width}, h={box.height}")
```

## OCR (Text Extraction)

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.READ]
)

if result.read:
    for block in result.read.blocks:
        for line in block.lines:
            print(f"Line: {line.text}")
            print(f"  Bounding polygon: {line.bounding_polygon}")

            # Word-level details
            for word in line.words:
                print(f"  Word: {word.text} (confidence: {word.confidence:.2f})")
```

## People Detection

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.PEOPLE]
)

if result.people:
    for person in result.people.list:
        print("Person detected:")
        print(f"  Confidence: {person.confidence:.2f}")
        box = person.bounding_box
        print(f"  Bounding box: x={box.x}, y={box.y}, w={box.width}, h={box.height}")
```

## Smart Cropping

```python
result = client.analyze_from_url(
    image_url=image_url,
    visual_features=[VisualFeatures.SMART_CROPS],
    smart_crops_aspect_ratios=[0.9, 1.33, 1.78]  # Portrait, 4:3, 16:9
)

if result.smart_crops:
    for crop in result.smart_crops.list:
        print(f"Aspect ratio: {crop.aspect_ratio}")
        box = crop.bounding_box
        print(f"  Crop region: x={box.x}, y={box.y}, w={box.width}, h={box.height}")
```

## Async Client

```python
from azure.ai.vision.imageanalysis.aio import ImageAnalysisClient
from azure.identity.aio import DefaultAzureCredential

async def analyze_image():
    async with ImageAnalysisClient(
        endpoint=endpoint,
        credential=DefaultAzureCredential()
    ) as client:
        result = await client.analyze_from_url(
            image_url=image_url,
            visual_features=[VisualFeatures.CAPTION]
        )
        print(result.caption.text)
```

## Visual Features

| Feature | Description |
|---------|-------------|
| `CAPTION` | Single sentence describing the image |
| `DENSE_CAPTIONS` | Captions for multiple regions |
| `TAGS` | Content tags (objects, scenes, actions) |
| `OBJECTS` | Object detection with bounding boxes |
| `READ` | OCR text extraction |
| `PEOPLE` | People detection with bounding boxes |
| `SMART_CROPS` | Suggested crop regions for thumbnails |

## Error Handling

```python
from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze_from_url(
        image_url=image_url,
        visual_features=[VisualFeatures.CAPTION]
    )
except HttpResponseError as e:
    print(f"Status code: {e.status_code}")
    print(f"Reason: {e.reason}")
    print(f"Message: {e.error.message}")
```

## Image Requirements

- Formats: JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, MPO
- Max size: 20 MB
- Dimensions: 50x50 to 16000x16000 pixels

## Best Practices

1. **Select only needed features** to optimize latency and cost
2. **Use async client** for high-throughput scenarios
3. **Handle HttpResponseError** for invalid images or auth issues
4. **Enable gender_neutral_caption** for inclusive descriptions
5. **Specify language** for localized captions
6. **Use smart_crops_aspect_ratios** matching your thumbnail requirements
7. **Cache results** when analyzing the same image multiple times (see the sketch after this list)
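
A minimal sketch of practice 7, assuming the `client.analyze` call shown above; the in-memory dictionary cache and helper name are illustrative only:

```python
# Minimal sketch, assuming the client and analyze() call shown above;
# keys on the image bytes plus the requested feature set.
import hashlib

_cache = {}

def analyze_cached(client, image_data, visual_features):
    key = (hashlib.sha256(image_data).hexdigest(),
           tuple(str(f) for f in visual_features))
    if key not in _cache:
        _cache[key] = client.analyze(
            image_data=image_data,
            visual_features=visual_features,
        )
    return _cache[key]
```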
265
skills/azure-ai-voicelive-dotnet/SKILL.md
Normal file
@@ -0,0 +1,265 @@
---
name: azure-ai-voicelive-dotnet
description: |
  Azure AI Voice Live SDK for .NET. Build real-time voice AI applications with bidirectional WebSocket communication. Use for voice assistants, conversational AI, real-time speech-to-speech, and voice-enabled chatbots. Triggers: "voice live", "real-time voice", "VoiceLiveClient", "VoiceLiveSession", "voice assistant .NET", "bidirectional audio", "speech-to-speech".
package: Azure.AI.VoiceLive
---

# Azure.AI.VoiceLive (.NET)

Real-time voice AI SDK for building bidirectional voice assistants with Azure AI.

## Installation

```bash
dotnet add package Azure.AI.VoiceLive
dotnet add package Azure.Identity
dotnet add package NAudio  # For audio capture/playback
```

**Current Versions**: Stable v1.0.0, Preview v1.1.0-beta.1

## Environment Variables

```bash
AZURE_VOICELIVE_ENDPOINT=https://<resource>.services.ai.azure.com/
AZURE_VOICELIVE_MODEL=gpt-4o-realtime-preview
AZURE_VOICELIVE_VOICE=en-US-AvaNeural
# Optional: API key if not using Entra ID
AZURE_VOICELIVE_API_KEY=<your-api-key>
```

## Authentication

### Microsoft Entra ID (Recommended)

```csharp
using Azure.Identity;
using Azure.AI.VoiceLive;

Uri endpoint = new Uri("https://your-resource.cognitiveservices.azure.com");
DefaultAzureCredential credential = new DefaultAzureCredential();
VoiceLiveClient client = new VoiceLiveClient(endpoint, credential);
```

**Required Role**: `Cognitive Services User` (assign in Azure Portal → Access control)

### API Key

```csharp
Uri endpoint = new Uri("https://your-resource.cognitiveservices.azure.com");
AzureKeyCredential credential = new AzureKeyCredential("your-api-key");
VoiceLiveClient client = new VoiceLiveClient(endpoint, credential);
```

## Client Hierarchy

```
VoiceLiveClient
└── VoiceLiveSession (WebSocket connection)
    ├── ConfigureSessionAsync()
    ├── GetUpdatesAsync() → SessionUpdate events
    ├── AddItemAsync() → UserMessageItem, FunctionCallOutputItem
    ├── SendAudioAsync()
    └── StartResponseAsync()
```

## Core Workflow

### 1. Start Session and Configure

```csharp
using Azure.Identity;
using Azure.AI.VoiceLive;

var endpoint = new Uri(Environment.GetEnvironmentVariable("AZURE_VOICELIVE_ENDPOINT"));
var client = new VoiceLiveClient(endpoint, new DefaultAzureCredential());

var model = "gpt-4o-mini-realtime-preview";

// Start session
using VoiceLiveSession session = await client.StartSessionAsync(model);

// Configure session
VoiceLiveSessionOptions sessionOptions = new()
{
    Model = model,
    Instructions = "You are a helpful AI assistant. Respond naturally.",
    Voice = new AzureStandardVoice("en-US-AvaNeural"),
    TurnDetection = new AzureSemanticVadTurnDetection()
    {
        Threshold = 0.5f,
        PrefixPadding = TimeSpan.FromMilliseconds(300),
        SilenceDuration = TimeSpan.FromMilliseconds(500)
    },
    InputAudioFormat = InputAudioFormat.Pcm16,
    OutputAudioFormat = OutputAudioFormat.Pcm16
};

// Set modalities (both text and audio for voice assistants)
sessionOptions.Modalities.Clear();
sessionOptions.Modalities.Add(InteractionModality.Text);
sessionOptions.Modalities.Add(InteractionModality.Audio);

await session.ConfigureSessionAsync(sessionOptions);
```

### 2. Process Events

```csharp
await foreach (SessionUpdate serverEvent in session.GetUpdatesAsync())
{
    switch (serverEvent)
    {
        case SessionUpdateResponseAudioDelta audioDelta:
            byte[] audioData = audioDelta.Delta.ToArray();
            // Play audio via NAudio or other audio library
            break;

        case SessionUpdateResponseTextDelta textDelta:
            Console.Write(textDelta.Delta);
            break;

        case SessionUpdateResponseFunctionCallArgumentsDone functionCall:
            // Handle function call (see Function Calling section)
            break;

        case SessionUpdateError error:
            Console.WriteLine($"Error: {error.Error.Message}");
            break;

        case SessionUpdateResponseDone:
            Console.WriteLine("\n--- Response complete ---");
            break;
    }
}
```

### 3. Send User Message

```csharp
await session.AddItemAsync(new UserMessageItem("Hello, can you help me?"));
await session.StartResponseAsync();
```

### 4. Function Calling

```csharp
using System.Text.Json; // needed for JsonSerializer below

// Define function
var weatherFunction = new VoiceLiveFunctionDefinition("get_current_weather")
{
    Description = "Get the current weather for a given location",
    Parameters = BinaryData.FromString("""
    {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state or country"
            }
        },
        "required": ["location"]
    }
    """)
};

// Add to session options
sessionOptions.Tools.Add(weatherFunction);

// Handle function call in event loop
if (serverEvent is SessionUpdateResponseFunctionCallArgumentsDone functionCall)
{
    if (functionCall.Name == "get_current_weather")
    {
        var parameters = JsonSerializer.Deserialize<Dictionary<string, string>>(functionCall.Arguments);
        string location = parameters?["location"] ?? "";

        // Call external service
        string weatherInfo = $"The weather in {location} is sunny, 75°F.";

        // Send response
        await session.AddItemAsync(new FunctionCallOutputItem(functionCall.CallId, weatherInfo));
        await session.StartResponseAsync();
    }
}
```

## Voice Options

| Voice Type | Class | Example |
|------------|-------|---------|
| Azure Standard | `AzureStandardVoice` | `"en-US-AvaNeural"` |
| Azure HD | `AzureStandardVoice` | `"en-US-Ava:DragonHDLatestNeural"` |
| Azure Custom | `AzureCustomVoice` | Custom voice with endpoint ID |

## Supported Models

| Model | Description |
|-------|-------------|
| `gpt-4o-realtime-preview` | GPT-4o with real-time audio |
| `gpt-4o-mini-realtime-preview` | Lightweight, fast interactions |
| `phi4-mm-realtime` | Cost-effective multimodal |

## Key Types Reference

| Type | Purpose |
|------|---------|
| `VoiceLiveClient` | Main client for creating sessions |
| `VoiceLiveSession` | Active WebSocket session |
| `VoiceLiveSessionOptions` | Session configuration |
| `AzureStandardVoice` | Standard Azure voice provider |
| `AzureSemanticVadTurnDetection` | Voice activity detection |
| `VoiceLiveFunctionDefinition` | Function tool definition |
| `UserMessageItem` | User text message |
| `FunctionCallOutputItem` | Function call response |
| `SessionUpdateResponseAudioDelta` | Audio chunk event |
| `SessionUpdateResponseTextDelta` | Text chunk event |

## Best Practices

1. **Always set both modalities** — Include `Text` and `Audio` for voice assistants
2. **Use `AzureSemanticVadTurnDetection`** — Provides natural conversation flow
3. **Configure appropriate silence duration** — 500ms typical to avoid premature cutoffs
4. **Use `using` statement** — Ensures proper session disposal
5. **Handle all event types** — Check for errors, audio, text, and function calls
6. **Use DefaultAzureCredential** — Never hardcode API keys

## Error Handling

```csharp
if (serverEvent is SessionUpdateError error)
{
    if (error.Error.Message.Contains("Cancellation failed: no active response"))
    {
        // Benign error, can ignore
    }
    else
    {
        Console.WriteLine($"Error: {error.Error.Message}");
    }
}
```

## Audio Configuration

- **Input Format**: `InputAudioFormat.Pcm16` (16-bit PCM)
- **Output Format**: `OutputAudioFormat.Pcm16`
- **Sample Rate**: 24kHz recommended
- **Channels**: Mono

## Related SDKs

| SDK | Purpose | Install |
|-----|---------|---------|
| `Azure.AI.VoiceLive` | Real-time voice (this SDK) | `dotnet add package Azure.AI.VoiceLive` |
| `Microsoft.CognitiveServices.Speech` | Speech-to-text, text-to-speech | `dotnet add package Microsoft.CognitiveServices.Speech` |
| `NAudio` | Audio capture/playback | `dotnet add package NAudio` |

## Reference Links

| Resource | URL |
|----------|-----|
| NuGet Package | https://www.nuget.org/packages/Azure.AI.VoiceLive |
| API Reference | https://learn.microsoft.com/dotnet/api/azure.ai.voicelive |
| GitHub Source | https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/ai/Azure.AI.VoiceLive |
| Quickstart | https://learn.microsoft.com/azure/ai-services/speech-service/voice-live-quickstart |
225
skills/azure-ai-voicelive-java/SKILL.md
Normal file
@@ -0,0 +1,225 @@
---
name: azure-ai-voicelive-java
description: |
  Azure AI VoiceLive SDK for Java. Real-time bidirectional voice conversations with AI assistants using WebSocket.
  Triggers: "VoiceLiveClient java", "voice assistant java", "real-time voice java", "audio streaming java", "voice activity detection java".
package: com.azure:azure-ai-voicelive
---

# Azure AI VoiceLive SDK for Java

Real-time, bidirectional voice conversations with AI assistants using WebSocket technology.

## Installation

```xml
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-ai-voicelive</artifactId>
    <version>1.0.0-beta.2</version>
</dependency>
```

## Environment Variables

```bash
AZURE_VOICELIVE_ENDPOINT=https://<resource>.openai.azure.com/
AZURE_VOICELIVE_API_KEY=<your-api-key>
```

## Authentication

### API Key

```java
import com.azure.ai.voicelive.VoiceLiveAsyncClient;
import com.azure.ai.voicelive.VoiceLiveClientBuilder;
import com.azure.core.credential.AzureKeyCredential;

VoiceLiveAsyncClient client = new VoiceLiveClientBuilder()
    .endpoint(System.getenv("AZURE_VOICELIVE_ENDPOINT"))
    .credential(new AzureKeyCredential(System.getenv("AZURE_VOICELIVE_API_KEY")))
    .buildAsyncClient();
```

### DefaultAzureCredential (Recommended)

```java
import com.azure.identity.DefaultAzureCredentialBuilder;

VoiceLiveAsyncClient client = new VoiceLiveClientBuilder()
    .endpoint(System.getenv("AZURE_VOICELIVE_ENDPOINT"))
    .credential(new DefaultAzureCredentialBuilder().build())
    .buildAsyncClient();
```

## Key Concepts

| Concept | Description |
|---------|-------------|
| `VoiceLiveAsyncClient` | Main entry point for voice sessions |
| `VoiceLiveSessionAsyncClient` | Active WebSocket connection for streaming |
| `VoiceLiveSessionOptions` | Configuration for session behavior |

### Audio Requirements

- **Sample Rate**: 24kHz (24000 Hz)
- **Bit Depth**: 16-bit PCM
- **Channels**: Mono (1 channel)
- **Format**: Signed PCM, little-endian

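For capture or playback code, the JDK's `javax.sound.sampled.AudioFormat` expresses these requirements directly. This snippet is plain Java SE, not part of the VoiceLive SDK:

```java
import javax.sound.sampled.AudioFormat;

// 24 kHz sample rate, 16-bit samples, 1 channel, signed PCM, little-endian (bigEndian = false)
AudioFormat format = new AudioFormat(24000.0f, 16, 1, true, false);
```
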
## Core Workflow

### 1. Start Session

```java
import reactor.core.publisher.Mono;

client.startSession("gpt-4o-realtime-preview")
    .flatMap(session -> {
        System.out.println("Session started");

        // Subscribe to events
        session.receiveEvents()
            .subscribe(
                event -> System.out.println("Event: " + event.getType()),
                error -> System.err.println("Error: " + error.getMessage())
            );

        return Mono.just(session);
    })
    .block();
```

### 2. Configure Session Options

```java
import com.azure.ai.voicelive.models.*;
import com.azure.core.util.BinaryData;
import java.util.Arrays;

ServerVadTurnDetection turnDetection = new ServerVadTurnDetection()
    .setThreshold(0.5)           // Sensitivity (0.0-1.0)
    .setPrefixPaddingMs(300)     // Audio kept before detected speech
    .setSilenceDurationMs(500)   // Silence that ends a turn
    .setInterruptResponse(true)  // Allow interruptions
    .setAutoTruncate(true)
    .setCreateResponse(true);

AudioInputTranscriptionOptions transcription = new AudioInputTranscriptionOptions(
    AudioInputTranscriptionOptionsModel.WHISPER_1);

VoiceLiveSessionOptions options = new VoiceLiveSessionOptions()
    .setInstructions("You are a helpful AI voice assistant.")
    .setVoice(BinaryData.fromObject(new OpenAIVoice(OpenAIVoiceName.ALLOY)))
    .setModalities(Arrays.asList(InteractionModality.TEXT, InteractionModality.AUDIO))
    .setInputAudioFormat(InputAudioFormat.PCM16)
    .setOutputAudioFormat(OutputAudioFormat.PCM16)
    .setInputAudioSamplingRate(24000)
    .setInputAudioNoiseReduction(new AudioNoiseReduction(AudioNoiseReductionType.NEAR_FIELD))
    .setInputAudioEchoCancellation(new AudioEchoCancellation())
    .setInputAudioTranscription(transcription)
    .setTurnDetection(turnDetection);

// Send configuration
ClientEventSessionUpdate updateEvent = new ClientEventSessionUpdate(options);
session.sendEvent(updateEvent).subscribe();
```

### 3. Send Audio Input

```java
byte[] audioData = readAudioChunk(); // Your PCM16 audio data (see the microphone sketch below)
session.sendInputAudio(BinaryData.fromBytes(audioData)).subscribe();
```

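A sketch of what feeding `sendInputAudio` from a microphone could look like using the JDK's `javax.sound.sampled` API. Only `sendInputAudio`, `BinaryData`, and `VoiceLiveSessionAsyncClient` come from this SDK; the `streamMicrophone` helper, capture loop, and buffer size are illustrative assumptions:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;
import java.util.Arrays;
import com.azure.ai.voicelive.VoiceLiveSessionAsyncClient;
import com.azure.core.util.BinaryData;

// Capture 24 kHz / 16-bit / mono PCM from the default microphone and stream it into the session
void streamMicrophone(VoiceLiveSessionAsyncClient session) throws LineUnavailableException {
    AudioFormat format = new AudioFormat(24000.0f, 16, 1, true, false);
    TargetDataLine mic = AudioSystem.getTargetDataLine(format);
    mic.open(format);
    mic.start();

    byte[] buffer = new byte[4800]; // ~100 ms of audio at 24 kHz, 16-bit, mono
    int read;
    while ((read = mic.read(buffer, 0, buffer.length)) > 0) {
        session.sendInputAudio(BinaryData.fromBytes(Arrays.copyOf(buffer, read))).subscribe();
    }
}
```
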
### 4. Handle Events

```java
session.receiveEvents().subscribe(event -> {
    ServerEventType eventType = event.getType();

    if (ServerEventType.SESSION_CREATED.equals(eventType)) {
        System.out.println("Session created");
    } else if (ServerEventType.INPUT_AUDIO_BUFFER_SPEECH_STARTED.equals(eventType)) {
        System.out.println("User started speaking");
    } else if (ServerEventType.INPUT_AUDIO_BUFFER_SPEECH_STOPPED.equals(eventType)) {
        System.out.println("User stopped speaking");
    } else if (ServerEventType.RESPONSE_AUDIO_DELTA.equals(eventType)) {
        if (event instanceof SessionUpdateResponseAudioDelta) {
            SessionUpdateResponseAudioDelta audioEvent = (SessionUpdateResponseAudioDelta) event;
            playAudioChunk(audioEvent.getDelta());
        }
    } else if (ServerEventType.RESPONSE_DONE.equals(eventType)) {
        System.out.println("Response complete");
    } else if (ServerEventType.ERROR.equals(eventType)) {
        if (event instanceof SessionUpdateError) {
            SessionUpdateError errorEvent = (SessionUpdateError) event;
            System.err.println("Error: " + errorEvent.getError().getMessage());
        }
    }
});
```

## Voice Configuration

### OpenAI Voices

```java
// Available: ALLOY, ASH, BALLAD, CORAL, ECHO, SAGE, SHIMMER, VERSE
VoiceLiveSessionOptions options = new VoiceLiveSessionOptions()
    .setVoice(BinaryData.fromObject(new OpenAIVoice(OpenAIVoiceName.ALLOY)));
```

### Azure Voices

```java
// Azure Standard Voice
options.setVoice(BinaryData.fromObject(new AzureStandardVoice("en-US-JennyNeural")));

// Azure Custom Voice
options.setVoice(BinaryData.fromObject(new AzureCustomVoice("myVoice", "endpointId")));

// Azure Personal Voice
options.setVoice(BinaryData.fromObject(
    new AzurePersonalVoice("speakerProfileId", PersonalVoiceModels.PHOENIX_LATEST_NEURAL)));
```

## Function Calling

```java
VoiceLiveFunctionDefinition weatherFunction = new VoiceLiveFunctionDefinition("get_weather")
    .setDescription("Get current weather for a location")
    .setParameters(BinaryData.fromObject(parametersSchema));

VoiceLiveSessionOptions options = new VoiceLiveSessionOptions()
    .setTools(Arrays.asList(weatherFunction))
    .setInstructions("You have access to weather information.");
```

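The `parametersSchema` object above is left undefined in the sample. One way to build it is a plain JSON-schema-shaped `Map`, which `BinaryData.fromObject` can serialize; the property names here are illustrative assumptions:

```java
import java.util.List;
import java.util.Map;

// JSON schema for get_weather's arguments: a single required "location" string
Map<String, Object> parametersSchema = Map.of(
    "type", "object",
    "properties", Map.of(
        "location", Map.of(
            "type", "string",
            "description", "City and region, e.g. \"Seattle, WA\"")),
    "required", List.of("location"));
```
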
## Best Practices

1. **Use the async client** — VoiceLive requires reactive patterns
2. **Configure turn detection** for natural conversation flow
3. **Enable noise reduction** for better speech recognition
4. **Handle interruptions** gracefully with `setInterruptResponse(true)`
5. **Use Whisper transcription** for input audio transcription
6. **Close sessions** properly when the conversation ends (see the sketch after this list)

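A minimal shutdown sketch for practice 6. The `Disposable` returned by `subscribe()` is standard Reactor; the `close()` call on the session is an assumption, so verify the exact cleanup method in the SDK javadoc:

```java
import reactor.core.Disposable;

// Keep the subscription handle from receiveEvents().subscribe(...) for shutdown
Disposable eventSubscription = session.receiveEvents()
    .subscribe(event -> System.out.println("Event: " + event.getType()));

// ... conversation runs ...

eventSubscription.dispose(); // Stop consuming events
session.close();             // Assumed cleanup method on the session client
```
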
## Error Handling

```java
import reactor.core.publisher.Flux;

session.receiveEvents()
    .doOnError(error -> System.err.println("Connection error: " + error.getMessage()))
    .onErrorResume(error -> {
        // Attempt reconnection or cleanup
        return Flux.empty();
    })
    .subscribe();
```

## Reference Links

| Resource | URL |
|----------|-----|
| GitHub Source | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-voicelive |
| Samples | https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/ai/azure-ai-voicelive/src/samples |

Some files were not shown because too many files have changed in this diff