Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | e3ae95d3b7 |  |

`.github/MAINTENANCE.md` (vendored, 55 changed lines)
````diff
@@ -1,4 +1,4 @@
-# 🛠️ Repository Maintenance Guide (V4)
+# 🛠️ Repository Maintenance Guide (V3)
 
 > **"If it's not documented, it's broken."**
 
@@ -24,20 +24,21 @@ Committing is NOT enough. You must PUSH to the remote.
 
 If you touch **any of these**:
 
-- `skills/` (add/remove/modify skills)
-- the **Full Skill Registry** section of `README.md`
-- **counts/claims** about the number of skills (`560+ Agentic Skills...`, `(560/560)`, etc.)
+- `skills/` (aggiungi/rimuovi/modifichi skill)
+- la sezione **Full Skill Registry** di `README.md`
+- i **conteggi/claim** sul numero di skill (`256+ Agentic Skills...`, `(256/256)`, ecc.)
 
-…then you **MUST** run the Validation Chain **BEFORE** committing.
+…allora **DEVI** eseguire la Validation Chain **PRIMA** di committare.
 
-- Running `npm run chain` is **NOT optional**.
-- Running `npm run catalog` is **NOT optional**.
+- Eseguire `validate_skills.py` **NON è opzionale**.
+- Eseguire `generate_index.py` **NON è opzionale**.
+- Eseguire `update_readme.py` **NON è opzionale**.
 
-If CI fails with:
+Se la CI fallisce con:
 
-> `❌ Detected uncommitted changes produced by registry/readme/catalog scripts.`
+> `❌ Detected uncommitted changes in README.md or skills_index.json`
 
-it means you **did not run or commit** the Validation Chain correctly.
+significa che **non hai eseguito o committato** correttamente la Validation Chain.
 
 ### 3. 📝 EVIDENCE OF WORK
 
@@ -58,23 +59,29 @@ it means you **did not run or commit** the Validation Chain correctly.
 
 Before ANY commit that adds/modifies skills, run the chain:
 
-1. **Validate, index, and update readme**:
+1. **Validate Metadata & Quality**:
 
    ```bash
-   npm run chain
+   python3 scripts/validate_skills.py
    ```
 
   _Must return 0 errors for new skills._
 
-2. **Build catalog**:
+2. **Regenerate Index**:
 
   ```bash
-   npm run catalog
+   python3 scripts/generate_index.py
   ```
 
-3. **COMMIT GENERATED FILES**:
+3. **Update Readme**:
 
   ```bash
-   git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md
+   python3 scripts/update_readme.py
+   ```
+
+4. **COMMIT GENERATED FILES**:
+   ```bash
+   git add skills_index.json README.md
   git commit -m "chore: sync generated files"
   ```
 
 > 🔴 **CRITICAL**: If you skip this, CI will fail with "Detected uncommitted changes".
````
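The drift failure the CRITICAL note warns about is easy to reproduce in a scratch repository: a generator script rewrites a tracked file, the result is never committed, and `git diff --quiet` flips to a non-zero exit, which is exactly what CI detects. A minimal sketch (assumes `git` is installed; the file name and contents are invented for the demo):

```shell
set -e
repo1=$(mktemp -d)
cd "$repo1"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
echo "# Skill Catalog" > CATALOG.md
git add CATALOG.md
git commit -q -m "init"

# Simulate a generator rewriting CATALOG.md with no follow-up commit
echo "# Skill Catalog (regenerated)" > CATALOG.md

# The check CI runs: a dirty tree means the chain was not committed
if ! git diff --quiet; then
  echo "❌ Detected uncommitted changes"
fi
```

Running the chain and then committing the regenerated files returns `git diff --quiet` to exit 0, which is why step 4 is mandatory.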
````diff
@@ -95,13 +102,13 @@ After multiple PR merges or significant changes:
 
 3. **Draft a Release**:
    - Go to [Releases Page](https://github.com/sickn33/antigravity-awesome-skills/releases).
    - Draft a new release for the merged changes.
-   - Tag version (e.g., `v4.1.0`).
+   - Tag version (e.g., `v3.1.0`).
 
 ---
 
 ## 2. 📝 Documentation "Pixel Perfect" Rules
 
-We discovered several consistency issues during V4 development. Follow these rules STRICTLY.
+We discovered several consistency issues during V3 development. Follow these rules STRICTLY.
 
 ### A. Table of Contents (TOC) Anchors
 
@@ -125,12 +132,12 @@ _Common pitfall: Updating the clone URL in README but leaving an old one in FAQ.
 
 ### C. Statistics Consistency (CRITICAL)
 
 If you add/remove skills, you **MUST** ensure the total count is identical in ALL locations.
-**Do not allow drift** (e.g., 560 in title, 558 in header).
+**Do not allow drift** (e.g., 356 in title, 354 in header).
 
 Locations to check:
 
-1. **Title of `README.md`**: "560+ Agentic Skills..."
-2. **`## Full Skill Registry (560/560)` header**.
+1. **Title of `README.md`**: "356+ Agentic Skills..."
+2. **`## Full Skill Registry (356/356)` header**.
 3. **`docs/GETTING_STARTED.md` intro**.
 
 ### D. Credits Policy (Who goes where?)
 
@@ -159,7 +166,7 @@ Reject any PR that fails this:
 
 4. **Examples**: Copy-pasteable code blocks?
 5. **Actions**: "Run this command" vs "Think about this".
 
-### B. Risk Labels (V4)
+### B. Risk Labels (V3)
 
 - ⚪ **Safe**: Default.
 - 🔴 **Risk**: Destructive/Security tools. MUST have `[Authorized Use Only]` warning.
 
@@ -176,8 +183,8 @@ When cutting a new version (e.g., V4):
 
 3. **Bump Version**: Update header in `README.md`.
 4. **Tag Release**:
    ```bash
-   git tag -a v4.0.0 -m "V4 Enterprise Edition"
-   git push origin v4.0.0
+   git tag -a v3.0.0 -m "V3 Enterprise Edition"
+   git push origin v3.0.0
    ```
 
 ### 📋 Release Note Template
````
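The statistics-consistency rule lends itself to a mechanical check: extract every number that appears in the locations listed and confirm only one distinct value shows up. A sketch (hypothetical: the count value and file contents are invented for the demo; real counts come from the repository's `README.md`):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# Stand-in README with the title count and the registry header count
printf '%s\n' \
  '# 556+ Agentic Skills for Your Agents' \
  '## Full Skill Registry (556/556)' > README.md

# Collect every number in the file; exactly one distinct value means no drift
distinct=$(grep -oE '[0-9]+' README.md | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "counts consistent"
else
  echo "count drift detected"
fi
```

A real check would also scan `docs/GETTING_STARTED.md` and would need to exclude unrelated numbers, but the grep-and-compare shape is the same.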
`.github/workflows/ci.yml` (vendored, 26 changed lines)
```diff
@@ -37,17 +37,6 @@ jobs:
         run: |
           python3 scripts/update_readme.py
 
-      - name: Set up Node
-        uses: actions/setup-node@v4
-        with:
-          node-version: "lts/*"
-
-      - name: Install npm dependencies
-        run: npm ci
-
-      - name: 📦 Build catalog
-        run: npm run catalog
-
       - name: Set up GitHub credentials (for auto-sync)
         if: github.event_name == 'push' && github.ref == 'refs/heads/main'
         run: |
@@ -58,12 +47,12 @@ jobs:
       - name: Auto-commit registry drift (main only)
         if: github.event_name == 'push' && github.ref == 'refs/heads/main'
         run: |
-          # If no changes, exit successfully
+          # Se non ci sono cambi, esci senza errore
           git diff --quiet && exit 0
 
-          git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md || true
+          git add README.md skills_index.json || true
 
-          # If nothing to commit, exit successfully
+          # Se non c'è niente da committare, esci senza errore
           git diff --cached --quiet && exit 0
 
           git commit -m "chore: sync generated registry files [ci skip]"
@@ -72,12 +61,13 @@ jobs:
       - name: 🚨 Check for Uncommitted Drift
         run: |
           if ! git diff --quiet; then
-            echo "❌ Detected uncommitted changes produced by registry/readme/catalog scripts."
+            echo "❌ Detected uncommitted changes produced by registry/readme scripts."
             echo
             echo "To fix locally, run the FULL Validation Chain, then commit and push:"
-            echo "  npm run chain"
-            echo "  npm run catalog"
-            echo "  git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md"
+            echo "  python3 scripts/validate_skills.py"
+            echo "  python3 scripts/generate_index.py"
+            echo "  python3 scripts/update_readme.py"
+            echo "  git add README.md skills_index.json"
            echo "  git commit -m \"chore: sync generated registry files\""
            echo "  git push"
            exit 1
```
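The auto-commit step relies on two early exits: `git diff --quiet && exit 0` skips the step when the worktree is clean, and `git diff --cached --quiet && exit 0` skips it when staging added nothing new. Both can be seen in isolation in a scratch repository (assumes `git` is installed; file contents invented):

```shell
set -e
repo2=$(mktemp -d)
cd "$repo2"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
echo '{}' > skills_index.json
git add skills_index.json
git commit -q -m "init"

# Clean tree: staging is a no-op, so the cached diff is empty
git add skills_index.json || true
git diff --cached --quiet && echo "nothing to commit"

# Dirty tree: the cached diff is non-empty, so the sync commit proceeds
echo '{"skills": 556}' > skills_index.json
git add skills_index.json || true
if ! git diff --cached --quiet; then
  git commit -q -m "chore: sync generated registry files [ci skip]"
fi
echo "synced"
```

The `|| true` mirrors the workflow: `git add` on a path that hasn't changed must not fail the step.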
`.gitignore` (vendored, 21 changed lines)
```diff
@@ -1,26 +1,7 @@
 node_modules/
 
 walkthrough.md
 .agent/rules/
 .gemini/
 LOCAL_CONFIG.md
 data/node_modules
-
-# Temporary analysis and report files
-*_REPORT.md
-*_ANALYSIS*.md
-*_COUNT.md
-*_SUMMARY.md
-*_analysis.json
-*_validation.json
-*_results.json
-voltagent_*.json
-similar_skills_*.json
-remaining_*.json
-html_*.json
-
-# Temporary analysis scripts
-scripts/*voltagent*.py
-scripts/*html*.py
-scripts/*similar*.py
-scripts/*count*.py
```
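Whether a given temporary file actually matches patterns like these can be verified with `git check-ignore`, which exits 0 when a path is ignored. A sketch against a trimmed copy of the rules (assumes `git` is installed; the file names are invented for the demo):

```shell
set -e
repo3=$(mktemp -d)
cd "$repo3"
git init -q
printf '%s\n' 'node_modules/' '*_REPORT.md' 'scripts/*count*.py' > .gitignore

# -q: no output, just the exit status; paths need not exist on disk
git check-ignore -q VOLTAGENT_REPORT.md && echo "ignored: VOLTAGENT_REPORT.md"
git check-ignore -q scripts/skill_count_tmp.py && echo "ignored: scripts/skill_count_tmp.py"
git check-ignore -q README.md || echo "not ignored: README.md"
```

`git check-ignore -v` additionally prints which `.gitignore` line matched, which is handy when several pattern files overlap.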
`CATALOG.md` (88 changed lines)
@@ -1,17 +1,16 @@
|
||||
# Skill Catalog
|
||||
|
||||
Generated at: 2026-01-31T07:34:21.497Z
|
||||
Generated at: 2026-01-28T16:10:28.837Z
|
||||
|
||||
Total skills: 618
|
||||
Total skills: 552
|
||||
|
||||
## architecture (58)
|
||||
## architecture (52)
|
||||
|
||||
| Skill | Description | Tags | Triggers |
|
||||
| --- | --- | --- | --- |
|
||||
| `architect-review` | Master software architect specializing in modern architecture patterns, clean architecture, microservices, event-driven systems, and DDD. Reviews system desi... | | architect, review, software, specializing, architecture, clean, microservices, event, driven, ddd, reviews, designs |
|
||||
| `architecture` | Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing ... | architecture | architecture, architectural, decision, making, framework, requirements, analysis, trade, off, evaluation, adr, documentation |
|
||||
| `architecture-decision-records` | Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant techn... | architecture, decision, records | architecture, decision, records, write, maintain, adrs, following, technical, documentation, documenting, significant, decisions |
|
||||
| `automate-whatsapp` | Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/in... | automate, whatsapp | automate, whatsapp, automations, kapso, configure, triggers, edit, graphs, executions, deploy, functions, databases |
|
||||
| `avalonia-viewmodels-zafiro` | Optimal ViewModel and Wizard creation patterns for Avalonia using Zafiro and ReactiveUI. | avalonia, viewmodels, zafiro | avalonia, viewmodels, zafiro, optimal, viewmodel, wizard, creation, reactiveui |
|
||||
| `bash-linux` | Bash/Linux terminal patterns. Critical commands, piping, error handling, scripting. Use when working on macOS or Linux systems. | bash, linux | bash, linux, terminal, critical, commands, piping, error, handling, scripting, working, macos |
|
||||
| `binary-analysis-patterns` | Master binary analysis patterns including disassembly, decompilation, control flow analysis, and code pattern recognition. Use when analyzing executables, un... | binary | binary, analysis, including, disassembly, decompilation, control, flow, code, recognition, analyzing, executables, understanding |
|
||||
@@ -24,7 +23,6 @@ Total skills: 618
|
||||
| `code-refactoring-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | code, refactoring, refactor, clean | code, refactoring, refactor, clean, specializing, principles, solid, software, engineering, analyze, provided, improve |
|
||||
| `codebase-cleanup-refactor-clean` | You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and r... | codebase, cleanup, refactor, clean | codebase, cleanup, refactor, clean, code, refactoring, specializing, principles, solid, software, engineering, analyze |
|
||||
| `competitor-alternatives` | When the user wants to create competitor comparison or alternative pages for SEO and sales enablement. Also use when the user mentions 'alternative page,' 'v... | competitor, alternatives | competitor, alternatives, user, wants, comparison, alternative, pages, seo, sales, enablement, mentions, page |
|
||||
| `context-degradation` | Recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash | degradation | degradation, context, recognize, failure, lost, middle, poisoning, distraction, clash |
|
||||
| `core-components` | Core component library and design system patterns. Use when building UI, using design tokens, or working with the component library. | core, components | core, components, component, library, building, ui, tokens, working |
|
||||
| `cpp-pro` | Write idiomatic C++ code with modern features, RAII, smart pointers, and STL algorithms. Handles templates, move semantics, and performance optimization. Use... | cpp | cpp, pro, write, idiomatic, code, features, raii, smart, pointers, stl, algorithms, move |
|
||||
| `cqrs-implementation` | Implement Command Query Responsibility Segregation for scalable architectures. Use when separating read and write models, optimizing query performance, or bu... | cqrs | cqrs, command, query, responsibility, segregation, scalable, architectures, separating, read, write, models, optimizing |
|
||||
@@ -43,8 +41,6 @@ Total skills: 618
|
||||
| `julia-pro` | Master Julia 1.10+ with modern features, performance optimization, multiple dispatch, and production-ready practices. Expert in the Julia ecosystem including... | julia | julia, pro, 10, features, performance, optimization, multiple, dispatch, ecosystem, including, package, scientific |
|
||||
| `minecraft-bukkit-pro` | Master Minecraft server plugin development with Bukkit, Spigot, and Paper APIs. Specializes in event-driven architecture, command systems, world manipulation... | minecraft, bukkit | minecraft, bukkit, pro, server, plugin, development, spigot, paper, apis, specializes, event, driven |
|
||||
| `monorepo-architect` | Expert in monorepo architecture, build systems, and dependency management at scale. Masters Nx, Turborepo, Bazel, and Lerna for efficient multi-project devel... | monorepo | monorepo, architect, architecture, dependency, scale, masters, nx, turborepo, bazel, lerna, efficient, multi |
|
||||
| `multi-agent-patterns` | Master orchestrator, peer-to-peer, and hierarchical multi-agent architectures | multi, agent | multi, agent, orchestrator, peer, hierarchical, architectures |
|
||||
| `n8n-mcp-tools-expert` | Expert guide for using n8n-mcp MCP tools effectively. Use when searching for nodes, validating configurations, accessing templates, managing workflows, or us... | n8n, mcp | n8n, mcp, effectively, searching, nodes, validating, configurations, accessing, managing, any, provides, selection |
|
||||
| `nestjs-expert` | Nest.js framework expert specializing in module architecture, dependency injection, middleware, guards, interceptors, testing with Jest/Supertest, TypeORM/Mo... | nestjs | nestjs, nest, js, framework, specializing, module, architecture, dependency, injection, middleware, guards, interceptors |
|
||||
| `nx-workspace-patterns` | Configure and optimize Nx monorepo workspaces. Use when setting up Nx, configuring project boundaries, optimizing build caching, or implementing affected com... | nx, workspace | nx, workspace, configure, optimize, monorepo, workspaces, setting, up, configuring, boundaries, optimizing, caching |
|
||||
| `on-call-handoff-patterns` | Master on-call shift handoffs with context transfer, escalation procedures, and documentation. Use when transitioning on-call responsibilities, documenting s... | on, call, handoff | on, call, handoff, shift, handoffs, context, transfer, escalation, procedures, documentation, transitioning, responsibilities |
|
||||
@@ -60,14 +56,12 @@ Total skills: 618
|
||||
| `tailwind-design-system` | Build scalable design systems with Tailwind CSS, design tokens, component libraries, and responsive patterns. Use when creating component libraries, implemen... | tailwind | tailwind, scalable, css, tokens, component, libraries, responsive, creating, implementing, standardizing, ui |
|
||||
| `tailwind-patterns` | Tailwind CSS v4 principles. CSS-first configuration, container queries, modern patterns, design token architecture. | tailwind | tailwind, css, v4, principles, first, configuration, container, queries, token, architecture |
|
||||
| `testing-patterns` | Jest testing patterns, factory functions, mocking strategies, and TDD workflow. Use when writing unit tests, creating test factories, or following TDD red-gr... | | testing, jest, factory, functions, mocking, tdd, writing, unit, tests, creating, test, factories |
|
||||
| `tool-design` | Build tools that agents can use effectively, including architectural reduction patterns | | agents, effectively, including, architectural, reduction |
|
||||
| `unreal-engine-cpp-pro` | Expert guide for Unreal Engine 5.x C++ development, covering UObject hygiene, performance patterns, and best practices. | unreal, engine, cpp | unreal, engine, cpp, pro, development, covering, uobject, hygiene, performance |
|
||||
| `wcag-audit-patterns` | Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fi... | wcag, audit | wcag, audit, conduct, accessibility, audits, automated, testing, manual, verification, remediation, guidance, auditing |
|
||||
| `workflow-orchestration-patterns` | Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism cons... | | orchestration, durable, temporal, distributed, covers, vs, activity, separation, saga, state, determinism, constraints |
|
||||
| `workflow-patterns` | Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding th... | | skill, implementing, tasks, according, conductor, tdd, handling, phase, checkpoints, managing, git, commits |
|
||||
| `zapier-make-patterns` | No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code.... | zapier, make | zapier, make, no, code, automation, democratizes, building, formerly, integromat, let, non, developers |
|
||||
|
||||
## business (37)
|
||||
## business (35)
|
||||
|
||||
| Skill | Description | Tags | Triggers |
|
||||
| --- | --- | --- | --- |
|
||||
@@ -77,7 +71,6 @@ Total skills: 618
|
||||
| `context-driven-development` | Use this skill when working with Conductor's context-driven development methodology, managing project context artifacts, or understanding the relationship be... | driven | driven, context, development, skill, working, conductor, methodology, managing, artifacts, understanding, relationship, between |
|
||||
| `copy-editing` | When the user wants to edit, review, or improve existing marketing copy. Also use when the user mentions 'edit this copy,' 'review my copy,' 'copy feedback,'... | copy, editing | copy, editing, user, wants, edit, review, improve, existing, marketing, mentions, my, feedback |
|
||||
| `copywriting` | Use this skill when writing, rewriting, or improving marketing copy for any page (homepage, landing page, pricing, feature, product, or about page). This ski... | copywriting | copywriting, skill, writing, rewriting, improving, marketing, copy, any, page, homepage, landing, pricing |
|
||||
| `deep-research` | Execute autonomous multi-step research using Google Gemini Deep Research Agent. Use for: market analysis, competitive landscaping, literature reviews, techni... | deep, research | deep, research, execute, autonomous, multi, step, google, gemini, agent, market, analysis, competitive |
|
||||
| `defi-protocol-templates` | Implement DeFi protocols with production-ready templates for staking, AMMs, governance, and lending systems. Use when building decentralized finance applicat... | defi, protocol | defi, protocol, protocols, staking, amms, governance, lending, building, decentralized, finance, applications, smart |
|
||||
| `employment-contract-templates` | Create employment contracts, offer letters, and HR policy documents following legal best practices. Use when drafting employment agreements, creating HR poli... | employment, contract | employment, contract, contracts, offer, letters, hr, policy, documents, following, legal, drafting, agreements |
|
||||
| `framework-migration-legacy-modernize` | Orchestrate a comprehensive legacy system modernization using the strangler fig pattern, enabling gradual replacement of outdated components while maintainin... | framework, migration, legacy, modernize | framework, migration, legacy, modernize, orchestrate, modernization, strangler, fig, enabling, gradual, replacement, outdated |
|
||||
@@ -91,7 +84,6 @@ Total skills: 618
|
||||
| `paywall-upgrade-cro` | When the user wants to create or optimize in-app paywalls, upgrade screens, upsell modals, or feature gates. Also use when the user mentions "paywall," "upgr... | paywall, upgrade, cro | paywall, upgrade, cro, user, wants, optimize, app, paywalls, screens, upsell, modals, feature |
|
||||
| `pricing-strategy` | Design pricing, packaging, and monetization strategies based on value, customer willingness to pay, and growth objectives. | pricing | pricing, packaging, monetization, value, customer, willingness, pay, growth, objectives |
|
||||
| `sales-automator` | Draft cold emails, follow-ups, and proposal templates. Creates pricing pages, case studies, and sales scripts. Use PROACTIVELY for sales outreach or lead nur... | sales, automator | sales, automator, draft, cold, emails, follow, ups, proposal, creates, pricing, pages, case |
|
||||
| `screenshots` | Generate marketing screenshots of your app using Playwright. Use when the user wants to create screenshots for Product Hunt, social media, landing pages, or ... | screenshots | screenshots, generate, marketing, app, playwright, user, wants, product, hunt, social, media, landing |
|
||||
| `scroll-experience` | Expert in building immersive scroll-driven experiences - parallax storytelling, scroll animations, interactive narratives, and cinematic web experiences. Lik... | scroll, experience | scroll, experience, building, immersive, driven, experiences, parallax, storytelling, animations, interactive, narratives, cinematic |
|
||||
| `seo-cannibalization-detector` | Analyzes multiple provided pages to identify keyword overlap and potential cannibalization issues. Suggests differentiation strategies. Use PROACTIVELY when ... | seo, cannibalization, detector | seo, cannibalization, detector, analyzes, multiple, provided, pages, identify, keyword, overlap, potential, issues |
|
||||
| `seo-content-auditor` | Analyzes provided content for quality, E-E-A-T signals, and SEO best practices. Scores content and provides improvement recommendations based on established ... | seo, content, auditor | seo, content, auditor, analyzes, provided, quality, signals, scores, provides, improvement, recommendations, established |
|
||||
@@ -109,7 +101,7 @@ Total skills: 618
|
||||
| `startup-financial-modeling` | This skill should be used when the user asks to "create financial projections", "build a financial model", "forecast revenue", "calculate burn rate", "estima... | startup, financial, modeling | startup, financial, modeling, skill, should, used, user, asks, projections, model, forecast, revenue |
|
||||
| `team-composition-analysis` | This skill should be used when the user asks to "plan team structure", "determine hiring needs", "design org chart", "calculate compensation", "plan equity a... | team, composition | team, composition, analysis, skill, should, used, user, asks, plan, structure, determine, hiring |
|
||||
|
||||
## data-ai (93)
|
||||
## data-ai (81)
|
||||
|
||||
| Skill | Description | Tags | Triggers |
|
||||
| --- | --- | --- | --- |
|
||||
@@ -122,14 +114,12 @@ Total skills: 618
|
||||
| `api-documenter` | Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build com... | api, documenter | api, documenter, documentation, openapi, ai, powered, developer, experience, interactive, docs, generate, sdks |
|
||||
| `autonomous-agent-patterns` | Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use ... | autonomous, agent | autonomous, agent, building, coding, agents, covers, integration, permission, browser, automation, human, loop |
|
||||
| `autonomous-agents` | Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The c... | autonomous, agents | autonomous, agents, ai, independently, decompose, goals, plan, actions, execute, self, correct, without |
|
||||
| `beautiful-prose` | Hard-edged writing style contract for timeless, forceful English prose without AI tics | beautiful, prose | beautiful, prose, hard, edged, writing, style, contract, timeless, forceful, english, without, ai |
|
||||
| `behavioral-modes` | AI operational modes (brainstorm, implement, debug, review, teach, ship, orchestrate). Use to adapt behavior based on task type. | behavioral, modes | behavioral, modes, ai, operational, brainstorm, debug, review, teach, ship, orchestrate, adapt, behavior |
|
||||
| `blockrun` | Use when user needs capabilities Claude lacks (image generation, real-time X/Twitter data) or explicitly requests external models ("blockrun", "use grok", "u... | blockrun | blockrun, user, capabilities, claude, lacks, image, generation, real, time, twitter, data, explicitly |
|
||||
| `browser-automation` | Browser automation powers web testing, scraping, and AI agent interactions. The difference between a flaky script and a reliable system comes down to underst... | browser | browser, automation, powers, web, testing, scraping, ai, agent, interactions, difference, between, flaky |
|
||||
| `business-analyst` | Master modern business analysis with AI-powered analytics, real-time dashboards, and data-driven insights. Build comprehensive KPI frameworks, predictive mod... | business, analyst | business, analyst, analysis, ai, powered, analytics, real, time, dashboards, data, driven, insights |
|
||||
| `cc-skill-backend-patterns` | Backend architecture patterns, API design, database optimization, and server-side best practices for Node.js, Express, and Next.js API routes. | cc, skill, backend | cc, skill, backend, architecture, api, database, optimization, server, side, node, js, express |
|
||||
| `cc-skill-clickhouse-io` | ClickHouse database patterns, query optimization, analytics, and data engineering best practices for high-performance analytical workloads. | cc, skill, clickhouse, io | cc, skill, clickhouse, io, database, query, optimization, analytics, data, engineering, high, performance |
|
||||
| `clarity-gate` | Pre-ingestion verification for epistemic quality in RAG systems with 9-point verification and Two-Round HITL workflow | clarity, gate | clarity, gate, pre, ingestion, verification, epistemic, quality, rag, point, two, round, hitl |
|
||||
| `code-documentation-doc-generate` | You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user g... | code, documentation, doc, generate | code, documentation, doc, generate, specializing, creating, maintainable, api, docs, architecture, diagrams, user |
|
||||
| `codex-review` | Professional code review with auto CHANGELOG generation, integrated with Codex AI | codex | codex, review, professional, code, auto, changelog, generation, integrated, ai |
|
||||
| `content-marketer` | Elite content marketing strategist specializing in AI-powered content creation, omnichannel distribution, SEO optimization, and data-driven performance marke... | content, marketer | content, marketer, elite, marketing, strategist, specializing, ai, powered, creation, omnichannel, distribution, seo |
|
||||
@@ -148,12 +138,6 @@ Total skills: 618
|
||||
| `documentation-generation-doc-generate` | You are a documentation expert specializing in creating comprehensive, maintainable documentation from code. Generate API docs, architecture diagrams, user g... | documentation, generation, doc, generate | documentation, generation, doc, generate, specializing, creating, maintainable, code, api, docs, architecture, diagrams |
|
||||
| `documentation-templates` | Documentation templates and structure guidelines. README, API docs, code comments, and AI-friendly documentation. | documentation | documentation, structure, guidelines, readme, api, docs, code, comments, ai, friendly |
|
||||
| `embedding-strategies` | Select and optimize embedding models for semantic search and RAG applications. Use when choosing embedding models, implementing chunking strategies, or optim... | embedding, strategies | embedding, strategies, select, optimize, models, semantic, search, rag, applications, choosing, implementing, chunking |
|
||||
| `fal-audio` | Text-to-speech and speech-to-text using fal.ai audio models | fal, audio | fal, audio, text, speech, ai, models |
|
||||
| `fal-generate` | Generate images and videos using fal.ai AI models | fal, generate | fal, generate, images, videos, ai, models |
|
||||
| `fal-image-edit` | AI-powered image editing with style transfer and object removal | fal, image, edit | fal, image, edit, ai, powered, editing, style, transfer, object, removal |
|
||||
| `fal-upscale` | Upscale and enhance image and video resolution using AI | fal, upscale | fal, upscale, enhance, image, video, resolution, ai |
|
||||
| `fal-workflow` | Generate workflow JSON files for chaining AI models | fal | fal, generate, json, files, chaining, ai, models |
|
||||
| `fp-ts-react` | Practical patterns for using fp-ts with React - hooks, state, forms, data fetching. Use when building React apps with functional programming patterns. Works ... | fp, ts, react | fp, ts, react, practical, hooks, state, forms, data, fetching, building, apps, functional |
|
||||
| `frontend-dev-guidelines` | Opinionated frontend development standards for modern React + TypeScript applications. Covers Suspense-first data fetching, lazy loading, feature-based archi... | frontend, dev, guidelines | frontend, dev, guidelines, opinionated, development, standards, react, typescript, applications, covers, suspense, first |
|
||||
| `geo-fundamentals` | Generative Engine Optimization for AI search engines (ChatGPT, Claude, Perplexity). | geo, fundamentals | geo, fundamentals, generative, engine, optimization, ai, search, engines, chatgpt, claude, perplexity |
|
||||
| `graphql` | GraphQL gives clients exactly the data they need - no more, no less. One endpoint, typed schema, introspection. But the flexibility that makes it powerful al... | graphql | graphql, gives, clients, exactly, data, no, less, one, endpoint, typed, schema, introspection |
|
||||
@@ -165,7 +149,6 @@ Total skills: 618
|
||||
| `llm-application-dev-langchain-agent` | You are an expert LangChain agent developer specializing in production-grade AI systems using LangChain 0.1+ and LangGraph. | llm, application, dev, langchain, agent | llm, application, dev, langchain, agent, developer, specializing, grade, ai, langgraph |
|
||||
| `llm-application-dev-prompt-optimize` | You are an expert prompt engineer specializing in crafting effective prompts for LLMs through advanced techniques including constitutional AI, chain-of-thoug... | llm, application, dev, prompt, optimize | llm, application, dev, prompt, optimize, engineer, specializing, crafting, effective, prompts, llms, through |
|
||||
| `llm-evaluation` | Implement comprehensive evaluation strategies for LLM applications using automated metrics, human feedback, and benchmarking. Use when testing LLM performanc... | llm, evaluation | llm, evaluation, applications, automated, metrics, human, feedback, benchmarking, testing, performance, measuring, ai |
|
||||
| `nanobanana-ppt-skills` | AI-powered PPT generation with document analysis and styled images | nanobanana, ppt, skills | nanobanana, ppt, skills, ai, powered, generation, document, analysis, styled, images |
|
||||
| `neon-postgres` | Expert patterns for Neon serverless Postgres, branching, connection pooling, and Prisma/Drizzle integration Use when: neon database, serverless postgres, dat... | neon, postgres | neon, postgres, serverless, branching, connection, pooling, prisma, drizzle, integration, database |
| `nextjs-app-router-patterns` | Master Next.js 14+ App Router with Server Components, streaming, parallel routes, and advanced data fetching. Use when building Next.js applications, impleme... | nextjs, app, router | nextjs, app, router, next, js, 14, server, components, streaming, parallel, routes, data |
| `nextjs-best-practices` | Next.js App Router principles. Server Components, data fetching, routing patterns. | nextjs, best, practices | nextjs, best, practices, next, js, app, router, principles, server, components, data, fetching |
@@ -188,11 +171,9 @@ Total skills: 618
| `senior-architect` | Comprehensive software architecture skill for designing scalable, maintainable systems using ReactJS, NextJS, NodeJS, Express, React Native, Swift, Kotlin, F... | senior | senior, architect, software, architecture, skill, designing, scalable, maintainable, reactjs, nextjs, nodejs, express |
| `seo-audit` | Diagnose and audit SEO issues affecting crawlability, indexation, rankings, and organic performance. Use when the user asks for an SEO audit, technical SEO r... | seo, audit | seo, audit, diagnose, issues, affecting, crawlability, indexation, rankings, organic, performance, user, asks |
| `similarity-search-patterns` | Implement efficient similarity search with vector databases. Use when building semantic search, implementing nearest neighbor queries, or optimizing retrieva... | similarity, search | similarity, search, efficient, vector, databases, building, semantic, implementing, nearest, neighbor, queries, optimizing |
| `skill-seekers` | Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes. | skill, seekers | skill, seekers, automatically, convert, documentation, websites, github, repositories, pdfs, claude, ai, skills |
| `spark-optimization` | Optimize Apache Spark jobs with partitioning, caching, shuffle optimization, and memory tuning. Use when improving Spark performance, debugging slow jobs, or... | spark, optimization | spark, optimization, optimize, apache, jobs, partitioning, caching, shuffle, memory, tuning, improving, performance |
| `sql-optimization-patterns` | Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when de... | sql, optimization | sql, optimization, query, indexing, explain, analysis, dramatically, improve, database, performance, eliminate, slow |
| `sqlmap-database-pentesting` | This skill should be used when the user asks to "automate SQL injection testing," "enumerate database structure," "extract database credentials using sqlmap,... | sqlmap, database, pentesting | sqlmap, database, pentesting, penetration, testing, skill, should, used, user, asks, automate, sql |
| `stitch-ui-design` | Expert guide for creating effective prompts for Google Stitch AI UI design tool. Use when user wants to design UI/UX in Stitch, create app interfaces, genera... | stitch, ui | stitch, ui, creating, effective, prompts, google, ai, user, wants, ux, app, interfaces |
| `tdd-orchestrator` | Master TDD orchestrator specializing in red-green-refactor discipline, multi-agent workflow coordination, and comprehensive test-driven development practices... | tdd, orchestrator | tdd, orchestrator, specializing, red, green, refactor, discipline, multi, agent, coordination, test, driven |
| `team-collaboration-standup-notes` | You are an expert team communication specialist focused on async-first standup practices, AI-assisted note generation from commit history, and effective remo... | team, collaboration, standup, notes | team, collaboration, standup, notes, communication, async, first, ai, assisted, note, generation, commit |
| `telegram-bot-builder` | Expert in building Telegram bots that solve real problems - from simple automation to complex AI-powered bots. Covers bot architecture, the Telegram Bot API,... | telegram, bot, builder | telegram, bot, builder, building, bots, solve, real, problems, simple, automation, complex, ai |
@@ -200,14 +181,13 @@ Total skills: 618
| `unity-ecs-patterns` | Master Unity ECS (Entity Component System) with DOTS, Jobs, and Burst for high-performance game development. Use when building data-oriented games, optimizin... | unity, ecs | unity, ecs, entity, component, dots, jobs, burst, high, performance, game, development, building |
| `vector-database-engineer` | Expert in vector databases, embedding strategies, and semantic search implementation. Masters Pinecone, Weaviate, Qdrant, Milvus, and pgvector for RAG applic... | vector, database | vector, database, engineer, databases, embedding, semantic, search, masters, pinecone, weaviate, qdrant, milvus |
| `vector-index-tuning` | Optimize vector index performance for latency, recall, and memory. Use when tuning HNSW parameters, selecting quantization strategies, or scaling vector sear... | vector, index, tuning | vector, index, tuning, optimize, performance, latency, recall, memory, hnsw, parameters, selecting, quantization |
| `vexor` | Vector-powered CLI for semantic file search with a Claude/Codex skill | vexor | vexor, vector, powered, cli, semantic, file, search, claude, codex, skill |
| `voice-ai-development` | Expert in building voice AI applications - from real-time voice agents to voice-enabled apps. Covers OpenAI Realtime API, Vapi for voice agents, Deepgram for... | voice, ai | voice, ai, development, building, applications, real, time, agents, enabled, apps, covers, openai |
| `voice-ai-engine-development` | Build real-time conversational AI voice engines using async worker pipelines, streaming transcription, LLM agents, and TTS synthesis with interrupt handling ... | voice, ai, engine | voice, ai, engine, development, real, time, conversational, engines, async, worker, pipelines, streaming |
| `web-artifacts-builder` | Suite of tools for creating elaborate, multi-component claude.ai HTML artifacts using modern frontend web technologies (React, Tailwind CSS, shadcn/ui). Use ... | web, artifacts, builder | web, artifacts, builder, suite, creating, elaborate, multi, component, claude, ai, html, frontend |
| `xlsx` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx | xlsx, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work, spreadsheets |
| `xlsx-official` | Comprehensive spreadsheet creation, editing, and analysis with support for formulas, formatting, data analysis, and visualization. When Claude needs to work ... | xlsx, official | xlsx, official, spreadsheet, creation, editing, analysis, formulas, formatting, data, visualization, claude, work |
## development (80)
## development (72)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -234,12 +214,9 @@ Total skills: 618
| `fastapi-pro` | Build high-performance async APIs with FastAPI, SQLAlchemy 2.0, and Pydantic V2. Master microservices, WebSockets, and modern Python async patterns. Use PROA... | fastapi | fastapi, pro, high, performance, async, apis, sqlalchemy, pydantic, v2, microservices, websockets, python |
| `fastapi-templates` | Create production-ready FastAPI projects with async patterns, dependency injection, and comprehensive error handling. Use when building new FastAPI applicati... | fastapi | fastapi, async, dependency, injection, error, handling, building, new, applications, setting, up, backend |
| `firecrawl-scraper` | Deep web scraping, screenshots, PDF parsing, and website crawling using Firecrawl API | firecrawl, scraper | firecrawl, scraper, deep, web, scraping, screenshots, pdf, parsing, website, crawling, api |
| `fp-ts-errors` | Handle errors as values using fp-ts Either and TaskEither for cleaner, more predictable TypeScript code. Use when implementing error handling patterns with f... | fp, ts, errors | fp, ts, errors, handle, values, either, taskeither, cleaner, predictable, typescript, code, implementing |
| `fp-ts-pragmatic` | A practical, jargon-free guide to fp-ts functional programming - the 80/20 approach that gets results without the academic overhead. Use when writing TypeScr... | fp, ts, pragmatic | fp, ts, pragmatic, practical, jargon, free, functional, programming, 80, 20, approach, gets |
| `frontend-design` | Create distinctive, production-grade frontend interfaces with intentional aesthetics, high craft, and non-generic visual identity. Use when building or styli... | frontend | frontend, distinctive, grade, interfaces, intentional, aesthetics, high, craft, non, generic, visual, identity |
| `frontend-developer` | Build React components, implement responsive layouts, and handle client-side state management. Masters React 19, Next.js 15, and modern frontend architecture... | frontend | frontend, developer, react, components, responsive, layouts, handle, client, side, state, masters, 19 |
| `frontend-mobile-development-component-scaffold` | You are a React component architecture expert specializing in scaffolding production-ready, accessible, and performant components. Generate complete componen... | frontend, mobile, component | frontend, mobile, component, development, scaffold, react, architecture, specializing, scaffolding, accessible, performant, components |
| `frontend-slides` | Create stunning, animation-rich HTML presentations from scratch or by converting PowerPoint files. Use when the user wants to build a presentation, convert a... | frontend, slides | frontend, slides, stunning, animation, rich, html, presentations, scratch, converting, powerpoint, files, user |
| `go-concurrency-patterns` | Master Go concurrency with goroutines, channels, sync primitives, and context. Use when building concurrent Go applications, implementing worker pools, or de... | go, concurrency | go, concurrency, goroutines, channels, sync, primitives, context, building, concurrent, applications, implementing, worker |
| `golang-pro` | Master Go 1.21+ with modern patterns, advanced concurrency, performance optimization, and production-ready microservices. Expert in the latest Go ecosystem i... | golang | golang, pro, go, 21, concurrency, performance, optimization, microservices, latest, ecosystem, including, generics |
| `hubspot-integration` | Expert patterns for HubSpot CRM integration including OAuth authentication, CRM objects, associations, batch operations, webhooks, and custom objects. Covers... | hubspot, integration | hubspot, integration, crm, including, oauth, authentication, objects, associations, batch, operations, webhooks, custom |
@@ -248,16 +225,12 @@ Total skills: 618
| `javascript-testing-patterns` | Implement comprehensive testing strategies using Jest, Vitest, and Testing Library for unit tests, integration tests, and end-to-end testing with mocking, fi... | javascript | javascript, testing, jest, vitest, library, unit, tests, integration, mocking, fixtures, test, driven |
| `javascript-typescript-typescript-scaffold` | You are a TypeScript project architecture expert specializing in scaffolding production-ready Node.js and frontend applications. Generate complete project st... | javascript, typescript | javascript, typescript, scaffold, architecture, specializing, scaffolding, node, js, frontend, applications, generate, complete |
| `launch-strategy` | When the user wants to plan a product launch, feature announcement, or release strategy. Also use when the user mentions 'launch,' 'Product Hunt,' 'feature r... | launch | launch, user, wants, plan, product, feature, announcement, release, mentions, hunt, go, market |
| `makepad-skills` | Makepad UI development skills for Rust apps: setup, patterns, shaders, packaging, and troubleshooting. | makepad, skills | makepad, skills, ui, development, rust, apps, setup, shaders, packaging, troubleshooting |
| `mcp-builder` | Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use whe... | mcp, builder | mcp, builder, creating, high, quality, model, context, protocol, servers, enable, llms, interact |
| `memory-safety-patterns` | Implement memory-safe programming with RAII, ownership, smart pointers, and resource management across Rust, C++, and C. Use when writing safe systems code, ... | memory, safety | memory, safety, safe, programming, raii, ownership, smart, pointers, resource, rust, writing, code |
| `mobile-design` | Mobile-first design and engineering doctrine for iOS and Android apps. Covers touch interaction, performance, platform conventions, offline behavior, and mob... | mobile | mobile, first, engineering, doctrine, ios, android, apps, covers, touch, interaction, performance, platform |
| `mobile-developer` | Develop React Native, Flutter, or native mobile apps with modern architecture patterns. Masters cross-platform development, native integrations, offline sync... | mobile | mobile, developer, develop, react, native, flutter, apps, architecture, masters, cross, platform, development |
| `modern-javascript-patterns` | Master ES6+ features including async/await, destructuring, spread operators, arrow functions, promises, modules, iterators, generators, and functional progra... | modern, javascript | modern, javascript, es6, features, including, async, await, destructuring, spread, operators, arrow, functions |
| `multi-platform-apps-multi-platform` | Build and deploy the same feature consistently across web, mobile, and desktop platforms using API-first architecture and parallel implementation strategies. | multi, platform, apps | multi, platform, apps, deploy, same, feature, consistently, web, mobile, desktop, platforms, api |
| `n8n-code-python` | Write Python code in n8n Code nodes. Use when writing Python in n8n, using _input/_json/_node syntax, working with standard library, or need to understand Py... | n8n, code, python | n8n, code, python, write, nodes, writing, input, json, node, syntax, working, standard |
| `n8n-node-configuration` | Operation-aware node configuration guidance. Use when configuring nodes, understanding property dependencies, determining required fields, choosing between g... | n8n, node, configuration | n8n, node, configuration, operation, aware, guidance, configuring, nodes, understanding, property, dependencies, determining |
| `observe-whatsapp` | Observe and troubleshoot WhatsApp in Kapso: debug message delivery, inspect webhook deliveries/retries, triage API errors, and run health checks. Use when in... | observe, whatsapp | observe, whatsapp, troubleshoot, kapso, debug, message, delivery, inspect, webhook, deliveries, retries, triage |
| `product-manager-toolkit` | Comprehensive toolkit for product managers including RICE prioritization, customer interview analysis, PRD templates, discovery frameworks, and go-to-market ... | product, manager | product, manager, toolkit, managers, including, rice, prioritization, customer, interview, analysis, prd, discovery |
| `python-development-python-scaffold` | You are a Python project architecture expert specializing in scaffolding production-ready Python applications. Generate complete project structures with mode... | python | python, development, scaffold, architecture, specializing, scaffolding, applications, generate, complete, structures, tooling, uv |
| `python-packaging` | Create distributable Python packages with proper project structure, setup.py/pyproject.toml, and publishing to PyPI. Use when packaging Python libraries, cre... | python, packaging | python, packaging, distributable, packages, proper, structure, setup, py, pyproject, toml, publishing, pypi |
@@ -280,7 +253,6 @@ Total skills: 618
| `shopify-development` | Build Shopify apps, extensions, themes using GraphQL Admin API, Shopify CLI, Polaris UI, and Liquid. TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify, development, apps, extensions, themes, graphql, admin, api, cli, polaris, ui, liquid |
| `slack-bot-builder` | Build Slack apps using the Bolt framework across Python, JavaScript, and Java. Covers Block Kit for rich UIs, interactive components, slash commands, event h... | slack, bot, builder | slack, bot, builder, apps, bolt, framework, python, javascript, java, covers, block, kit |
| `swiftui-expert-skill` | Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, modern APIs, Swift concurrency, and iOS ... | swiftui, skill | swiftui, skill, write, review, improve, code, following, state, view, composition, performance, apis |
| `systems-programming-rust-project` | You are a Rust project architecture expert specializing in scaffolding production-ready Rust applications. Generate complete project structures with cargo to... | programming, rust | programming, rust, architecture, specializing, scaffolding, applications, generate, complete, structures, cargo, tooling, proper |
| `tavily-web` | Web search, content extraction, crawling, and research capabilities using Tavily API | tavily, web | tavily, web, search, content, extraction, crawling, research, capabilities, api |
| `telegram-mini-app` | Expert in building Telegram Mini Apps (TWA) - web apps that run inside Telegram with native-like experience. Covers the TON ecosystem, Telegram Web App API, ... | telegram, mini, app | telegram, mini, app, building, apps, twa, web, run, inside, native, like, experience |
@@ -293,7 +265,7 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `viral-generator-builder` | Expert in building shareable generator tools that go viral - name generators, quiz makers, avatar creators, personality tests, and calculator tools. Covers t... | viral, generator, builder | viral, generator, builder, building, shareable, go, name, generators, quiz, makers, avatar, creators |
| `webapp-testing` | Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing... | webapp | webapp, testing, toolkit, interacting, local, web, applications, playwright, supports, verifying, frontend, functionality |
## general (122)
## general (95)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -317,32 +289,21 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `cc-skill-continuous-learning` | Development skill from everything-claude-code | cc, skill, continuous, learning | cc, skill, continuous, learning, development, everything, claude, code |
| `cc-skill-project-guidelines-example` | Project Guidelines Skill (Example) | cc, skill, guidelines, example | cc, skill, guidelines, example |
| `cc-skill-strategic-compact` | Development skill from everything-claude-code | cc, skill, strategic, compact | cc, skill, strategic, compact, development, everything, claude, code |
| `claude-ally-health` | A health assistant skill for medical information analysis, symptom tracking, and wellness guidance. | claude, ally, health | claude, ally, health, assistant, skill, medical, information, analysis, symptom, tracking, wellness, guidance |
| `claude-code-guide` | Master guide for using Claude Code effectively. Includes configuration templates, prompting strategies, "Thinking" keywords, debugging techniques, and best pr... | claude, code | claude, code, effectively, includes, configuration, prompting, thinking, keywords, debugging, techniques, interacting, agent |
| `claude-scientific-skills` | Scientific research and analysis skills | claude, scientific, skills | claude, scientific, skills, research, analysis |
| `claude-speed-reader` | Speed read Claude's responses at 600+ WPM using RSVP with Spritz-style ORP highlighting. | claude, speed, reader | claude, speed, reader, read, responses, 600, wpm, rsvp, spritz, style, orp, highlighting |
| `claude-win11-speckit-update-skill` | Windows 11 system management | claude, win11, speckit, update, skill | claude, win11, speckit, update, skill, windows, 11 |
| `clean-code` | Pragmatic coding standards - concise, direct, no over-engineering, no unnecessary comments | clean, code | clean, code, pragmatic, coding, standards, concise, direct, no, engineering, unnecessary, comments |
| `code-documentation-code-explain` | You are a code education expert specializing in explaining complex code through clear narratives, visual diagrams, and step-by-step breakdowns. Transform dif... | code, documentation, explain | code, documentation, explain, education, specializing, explaining, complex, through, clear, narratives, visual, diagrams |
| `code-refactoring-context-restore` | Use when working with code refactoring context restore | code, refactoring, restore | code, refactoring, restore, context, working |
| `code-refactoring-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | code, refactoring, tech, debt | code, refactoring, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, codebase |
| `code-review-excellence` | Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use wh... | code, excellence | code, excellence, review, effective, provide, constructive, feedback, catch, bugs, early, foster, knowledge |
| `codebase-cleanup-tech-debt` | You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncov... | codebase, cleanup, tech, debt | codebase, cleanup, tech, debt, technical, specializing, identifying, quantifying, prioritizing, software, analyze, uncover |
| `commit` | Create commit messages following Sentry conventions. Use when committing code changes, writing commit messages, or formatting git history. Follows convention... | commit | commit, messages, following, sentry, conventions, committing, code, changes, writing, formatting, git, history |
| `comprehensive-review-full-review` | Use when working with comprehensive review full review | comprehensive, full | comprehensive, full, review, working |
| `comprehensive-review-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | comprehensive, pr, enhance | comprehensive, pr, enhance, review, optimization, specializing, creating, high, quality, pull, requests, facilitate |
| `concise-planning` | Use when a user asks for a plan for a coding task, to generate a clear, actionable, and atomic checklist. | concise, planning | concise, planning, user, asks, plan, coding, task, generate, clear, actionable, atomic, checklist |
| `context-compression` | Design and evaluate compression strategies for long-running sessions | compression | compression, context, evaluate, long, running, sessions |
| `context-fundamentals` | Understand what context is, why it matters, and the anatomy of context in agent systems | fundamentals | fundamentals, context, understand, what, why, matters, anatomy, agent |
| `context-management-context-restore` | Use when working with context management context restore | restore | restore, context, working |
| `context-management-context-save` | Use when working with context management context save | save | save, context, working |
| `context-optimization` | Apply compaction, masking, and caching strategies | optimization | optimization, context, apply, compaction, masking, caching |
| `create-pr` | Create pull requests following Sentry conventions. Use when opening PRs, writing PR descriptions, or preparing changes for review. Follows Sentry's code revi... | create, pr | create, pr, pull, requests, following, sentry, conventions, opening, prs, writing, descriptions, preparing |
| `culture-index` | Index and search culture documentation | culture, index | culture, index, search, documentation |
| `daily-news-report` | Scrapes content based on a preset URL list, filters high-quality technical information, and generates daily Markdown reports. | daily, news, report | daily, news, report, scrapes, content, preset, url, list, filters, high, quality, technical |
| `debugging-strategies` | Master systematic debugging techniques, profiling tools, and root cause analysis to efficiently track down bugs across any codebase or technology stack. Use ... | debugging, strategies | debugging, strategies, systematic, techniques, profiling, root, cause, analysis, efficiently, track, down, bugs |
| `debugging-toolkit-smart-debug` | Use when working with debugging toolkit smart debug | debugging, debug | debugging, debug, toolkit, smart, working |
| `design-md` | Analyze Stitch projects and synthesize a semantic design system into DESIGN.md files | md | md, analyze, stitch, synthesize, semantic, files |
| `dispatching-parallel-agents` | Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies | dispatching, parallel, agents | dispatching, parallel, agents, facing, independent, tasks, worked, without, shared, state, sequential, dependencies |
| `docx` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx | docx, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text, extraction |
| `docx-official` | Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. When Claude ... | docx, official | docx, official, document, creation, editing, analysis, tracked, changes, comments, formatting, preservation, text |
@@ -350,28 +311,21 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `environment-setup-guide` | Guide developers through setting up development environments with proper tools, dependencies, and configurations | environment, setup | environment, setup, developers, through, setting, up, development, environments, proper, dependencies, configurations |
| `error-debugging-multi-agent-review` | Use when working with error debugging multi agent review | error, debugging, multi, agent | error, debugging, multi, agent, review, working |
| `error-diagnostics-smart-debug` | Use when working with error diagnostics smart debug | error, diagnostics, debug | error, diagnostics, debug, smart, working |
| `evaluation` | Build evaluation frameworks for agent systems | evaluation | evaluation, frameworks, agent |
| `executing-plans` | Use when you have a written implementation plan to execute in a separate session with review checkpoints | executing, plans | executing, plans, written, plan, execute, separate, session, review, checkpoints |
| `fal-platform` | Platform APIs for model management, pricing, and usage tracking | fal, platform | fal, platform, apis, model, pricing, usage, tracking |
| `ffuf-claude-skill` | Web fuzzing with ffuf | ffuf, claude, skill | ffuf, claude, skill, web, fuzzing |
| `file-organizer` | Intelligently organizes files and folders by understanding context, finding duplicates, and suggesting better organizational structures. Use when user wants ... | file, organizer | file, organizer, intelligently, organizes, files, folders, understanding, context, finding, duplicates, suggesting, better |
| `finishing-a-development-branch` | Use when implementation is complete, all tests pass, and you need to decide how to integrate the work - guides completion of development work by presenting s... | finishing, a, branch | finishing, a, branch, development, complete, all, tests, pass, decide, how, integrate, work |
| `fix-review` | Verify fix commits address audit findings without new bugs | fix | fix, review, verify, commits, address, audit, findings, without, new, bugs |
| `framework-migration-code-migrate` | You are a code migration expert specializing in transitioning codebases between frameworks, languages, versions, and platforms. Generate comprehensive migrat... | framework, migration, code, migrate | framework, migration, code, migrate, specializing, transitioning, codebases, between, frameworks, languages, versions, platforms |
| `game-development` | Game development orchestrator. Routes to platform-specific skills based on project needs. | game | game, development, orchestrator, routes, platform, specific, skills |
| `git-advanced-workflows` | Master advanced Git workflows including rebasing, cherry-picking, bisect, worktrees, and reflog to maintain clean history and recover from any situation. Use... | git, advanced | git, advanced, including, rebasing, cherry, picking, bisect, worktrees, reflog, maintain, clean, history |
| `git-pr-workflows-onboard` | You are an **expert onboarding specialist and knowledge transfer architect** with deep experience in remote-first organizations, technical team integration, ... | git, pr, onboard | git, pr, onboard, onboarding, knowledge, transfer, architect, deep, experience, remote, first, organizations |
| `git-pr-workflows-pr-enhance` | You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descri... | git, pr, enhance | git, pr, enhance, optimization, specializing, creating, high, quality, pull, requests, facilitate, efficient |
| `imagen` | | imagen | imagen |
| `infinite-gratitude` | Multi-agent research skill for parallel research execution (10 agents, battle-tested with real case studies). | infinite, gratitude | infinite, gratitude, multi, agent, research, skill, parallel, execution, 10, agents, battle, tested |
| `interactive-portfolio` | Expert in building portfolios that actually land jobs and clients - not just showing work, but creating memorable experiences. Covers developer portfolios, d... | interactive, portfolio | interactive, portfolio, building, portfolios, actually, land, jobs, clients, just, showing, work, creating |
| `last30days` | Research a topic from the last 30 days on Reddit + X + Web, become an expert, and write copy-paste-ready prompts for the user's target tool. | last30days | last30days, research, topic, last, 30, days, reddit, web, become, write, copy, paste |
| `legacy-modernizer` | Refactor legacy codebases, migrate outdated frameworks, and implement gradual modernization. Handles technical debt, dependency updates, and backward compati... | legacy, modernizer | legacy, modernizer, refactor, codebases, migrate, outdated, frameworks, gradual, modernization, technical, debt, dependency |
| `linear-claude-skill` | Manage Linear issues, projects, and teams | linear, claude, skill | linear, claude, skill, issues, teams |
| `lint-and-validate` | Automatic quality control, linting, and static analysis procedures. Use after every code modification to ensure syntax correctness and project standards. Tri... | lint, and, validate | lint, and, validate, automatic, quality, control, linting, static, analysis, procedures, after, every |
| `linux-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Linux", "find privesc vectors on Linux systems", "exploit sudo misconfigurations", "a... | linux, privilege, escalation | linux, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc |
| `linux-shell-scripting` | This skill should be used when the user asks to "create bash scripts", "automate Linux tasks", "monitor system resources", "backup files", "manage users", or... | linux, shell, scripting | linux, shell, scripting, scripts, skill, should, used, user, asks, bash, automate, tasks |
| `memory-systems` | Design short-term, long-term, and graph-based memory architectures | memory | memory, short, term, long, graph, architectures |
| `micro-saas-launcher` | Expert in launching small, focused SaaS products fast - the indie hacker approach to building profitable software. Covers idea validation, MVP development, p... | micro, saas, launcher | micro, saas, launcher, launching, small, products, fast, indie, hacker, approach, building, profitable |
| `monorepo-management` | Master monorepo management with Turborepo, Nx, and pnpm workspaces to build efficient, scalable multi-package repositories with optimized builds and dependen... | monorepo | monorepo, turborepo, nx, pnpm, workspaces, efficient, scalable, multi, package, repositories, optimized, dependency |
| `nft-standards` | Implement NFT standards (ERC-721, ERC-1155) with proper metadata handling, minting strategies, and marketplace integration. Use when creating NFT contracts, ... | nft, standards | nft, standards, erc, 721, 1155, proper, metadata, handling, minting, marketplace, integration, creating |
@@ -389,27 +343,20 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `pptx-official` | Presentation creation, editing, and analysis. When Claude needs to work with presentations (.pptx files) for: (1) Creating new presentations, (2) Modifying o... | pptx, official | pptx, official, presentation, creation, editing, analysis, claude, work, presentations, files, creating, new |
| `privilege-escalation-methods` | This skill should be used when the user asks to "escalate privileges", "get root access", "become administrator", "privesc techniques", "abuse sudo", "exploi... | privilege, escalation, methods | privilege, escalation, methods, skill, should, used, user, asks, escalate, privileges, get, root |
| `prompt-library` | Curated collection of high-quality prompts for various use cases. Includes role-based prompts, task-specific templates, and prompt refinement techniques. Use... | prompt, library | prompt, library, curated, collection, high, quality, prompts, various, cases, includes, role, task |
| `readme` | When the user wants to create or update a README.md file for a project. Also use when the user says | readme | readme, user, wants, update, md, file, says |
| `receiving-code-review` | Use when receiving code review feedback, before implementing suggestions, especially if feedback seems unclear or technically questionable - requires technic... | receiving, code | receiving, code, review, feedback, before, implementing, suggestions, especially, seems, unclear, technically, questionable |
| `referral-program` | When the user wants to create, optimize, or analyze a referral program, affiliate program, or word-of-mouth strategy. Also use when the user mentions 'referr... | referral, program | referral, program, user, wants, optimize, analyze, affiliate, word, mouth, mentions, ambassador, viral |
| `requesting-code-review` | Use when completing tasks, implementing major features, or before merging to verify work meets requirements | requesting, code | requesting, code, review, completing, tasks, implementing, major, features, before, merging, verify, work |
| `search-specialist` | Expert web researcher using advanced search techniques and synthesis. Masters search operators, result filtering, and multi-source verification. Handles comp... | search | search, web, researcher, techniques, synthesis, masters, operators, result, filtering, multi, source, verification |
| `sharp-edges` | Identify error-prone APIs and dangerous configurations | sharp, edges | sharp, edges, identify, error, prone, apis, dangerous, configurations |
| `shellcheck-configuration` | Master ShellCheck static analysis configuration and usage for shell script quality. Use when setting up linting infrastructure, fixing code issues, or ensuri... | shellcheck, configuration | shellcheck, configuration, static, analysis, usage, shell, script, quality, setting, up, linting, infrastructure |
| `signup-flow-cro` | When the user wants to optimize signup, registration, account creation, or trial activation flows. Also use when the user mentions "signup conversions," "reg... | signup, flow, cro | signup, flow, cro, user, wants, optimize, registration, account, creation, trial, activation, flows |
| `skill-creator` | Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capa... | skill, creator | skill, creator, creating, effective, skills, should, used, users, want, new, update, existing |
| `skill-rails-upgrade` | Analyze Rails apps and provide upgrade assessments | skill, rails, upgrade | skill, rails, upgrade, analyze, apps, provide, assessments |
| `slack-gif-creator` | Knowledge and utilities for creating animated GIFs optimized for Slack. Provides constraints, validation tools, and animation concepts. Use when users reques... | slack, gif, creator | slack, gif, creator, knowledge, utilities, creating, animated, gifs, optimized, provides, constraints, validation |
| `social-content` | When the user wants help creating, scheduling, or optimizing social media content for LinkedIn, Twitter/X, Instagram, TikTok, Facebook, or other platforms. A... | social, content | social, content, user, wants, creating, scheduling, optimizing, media, linkedin, twitter, instagram, tiktok |
| `subagent-driven-development` | Use when executing implementation plans with independent tasks in the current session | subagent, driven | subagent, driven, development, executing, plans, independent, tasks, current, session |
| `superpowers-lab` | Lab environment for Claude superpowers | superpowers, lab | superpowers, lab, environment, claude |
| `theme-factory` | Toolkit for styling artifacts with a theme. These artifacts can be slides, docs, reportings, HTML landing pages, etc. There are 10 pre-set themes with colors... | theme, factory | theme, factory, toolkit, styling, artifacts, these, slides, docs, reportings, html, landing, pages |
| `threejs-skills` | Three.js skills for creating 3D elements and interactive experiences | threejs, skills | threejs, skills, three, js, creating, 3d, elements, interactive, experiences |
| `turborepo-caching` | Configure Turborepo for efficient monorepo builds with local and remote caching. Use when setting up Turborepo, optimizing build pipelines, or implementing d... | turborepo, caching | turborepo, caching, configure, efficient, monorepo, local, remote, setting, up, optimizing, pipelines, implementing |
| `tutorial-engineer` | Creates step-by-step tutorials and educational content from code. Transforms complex concepts into progressive learning experiences with hands-on examples. U... | tutorial | tutorial, engineer, creates, step, tutorials, educational, content, code, transforms, complex, concepts, progressive |
| `ui-skills` | Opinionated, evolving constraints to guide agents when building interfaces | ui, skills | ui, skills, opinionated, evolving, constraints, agents, building, interfaces |
| `ui-ux-designer` | Create interface designs, wireframes, and design systems. Masters user research, accessibility standards, and modern design tools. Specializes in design toke... | ui, ux, designer | ui, ux, designer, interface, designs, wireframes, masters, user, research, accessibility, standards, specializes |
| `upgrading-expo` | Upgrade Expo SDK versions | upgrading, expo | upgrading, expo, upgrade, sdk, versions |
| `upstash-qstash` | Upstash QStash expert for serverless message queues, scheduled jobs, and reliable HTTP-based task delivery without managing infrastructure. Use when: qstash,... | upstash, qstash | upstash, qstash, serverless, message, queues, scheduled, jobs, reliable, http, task, delivery, without |
| `using-git-worktrees` | Use when starting feature work that needs isolation from current workspace or before executing implementation plans - creates isolated git worktrees with sma... | using, git, worktrees | using, git, worktrees, starting, feature, work, isolation, current, workspace, before, executing, plans |
| `using-superpowers` | Use when starting any conversation - establishes how to find and use skills, requiring Skill tool invocation before ANY response including clarifying questions | using, superpowers | using, superpowers, starting, any, conversation, establishes, how, find, skills, requiring, skill, invocation |
@@ -417,10 +364,8 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `web-performance-optimization` | Optimize website and web application performance including loading speed, Core Web Vitals, bundle size, caching strategies, and runtime performance | web, performance, optimization | web, performance, optimization, optimize, website, application, including, loading, speed, core, vitals, bundle |
| `windows-privilege-escalation` | This skill should be used when the user asks to "escalate privileges on Windows," "find Windows privesc vectors," "enumerate Windows for privilege escalation... | windows, privilege, escalation | windows, privilege, escalation, skill, should, used, user, asks, escalate, privileges, find, privesc |
| `writing-plans` | Use when you have a spec or requirements for a multi-step task, before touching code | writing, plans | writing, plans, spec, requirements, multi, step, task, before, touching, code |
| `writing-skills` | Use when creating, updating, or improving agent skills. | writing, skills | writing, skills, creating, updating, improving, agent |
| `x-article-publisher-skill` | Publish articles to X/Twitter | x, article, publisher, skill | x, article, publisher, skill, publish, articles, twitter |
-## infrastructure (77)
+## infrastructure (72)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -429,7 +374,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `api-testing-observability-api-mock` | You are an API mocking expert specializing in realistic mock services for development, testing, and demos. Design mocks that simulate real API behavior and e... | api, observability, mock | api, observability, mock, testing, mocking, specializing, realistic, development, demos, mocks, simulate, real |
| `application-performance-performance-optimization` | Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across... | application, performance, optimization | application, performance, optimization, optimize, profiling, observability, backend, frontend, tuning, coordinating, stack |
| `aws-serverless` | Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns... | aws, serverless | aws, serverless, specialized, skill, building, applications, covers, lambda, functions, api, gateway, dynamodb |
| `aws-skills` | AWS development with infrastructure automation and cloud architecture patterns | aws, skills | aws, skills, development, infrastructure, automation, cloud, architecture |
| `backend-architect` | Expert backend architect specializing in scalable API design, microservices architecture, and distributed systems. Masters REST/GraphQL/gRPC APIs, event-driv... | backend | backend, architect, specializing, scalable, api, microservices, architecture, distributed, masters, rest, graphql, grpc |
| `backend-development-feature-development` | Orchestrate end-to-end backend feature development from requirements to deployment. Use when coordinating multi-phase feature delivery across teams and servi... | backend | backend, development, feature, orchestrate, requirements, deployment, coordinating, multi, phase, delivery, teams |
| `bash-defensive-patterns` | Master defensive Bash programming techniques for production-grade scripts. Use when writing robust shell scripts, CI/CD pipelines, or system utilities requir... | bash, defensive | bash, defensive, programming, techniques, grade, scripts, writing, robust, shell, ci, cd, pipelines |
@@ -454,7 +398,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `error-debugging-error-trace` | You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, conf... | error, debugging, trace | error, debugging, trace, tracking, observability, specializing, implementing, monitoring, solutions, set, up, configure |
| `error-diagnostics-error-analysis` | You are an expert error analysis specialist with deep expertise in debugging distributed systems, analyzing production incidents, and implementing comprehens... | error, diagnostics | error, diagnostics, analysis, deep, expertise, debugging, distributed, analyzing, incidents, implementing, observability, solutions |
| `error-diagnostics-error-trace` | You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, conf... | error, diagnostics, trace | error, diagnostics, trace, tracking, observability, specializing, implementing, monitoring, solutions, set, up, configure |
| `expo-deployment` | Deploy Expo apps to production | expo, deployment | expo, deployment, deploy, apps |
| `file-uploads` | Expert at handling file uploads and cloud storage. Covers S3, Cloudflare R2, presigned URLs, multipart uploads, and image optimization. Knows how to handle l... | file, uploads | file, uploads, handling, cloud, storage, covers, s3, cloudflare, r2, presigned, urls, multipart |
| `flutter-expert` | Master Flutter development with Dart 3, advanced widgets, and multi-platform deployment. Handles state management, animations, testing, and performance optim... | flutter | flutter, development, dart, widgets, multi, platform, deployment, state, animations, testing, performance, optimization |
| `gcp-cloud-run` | Specialized skill for building production-ready serverless applications on GCP. Covers Cloud Run services (containerized), Cloud Run Functions (event-driven)... | gcp, cloud, run | gcp, cloud, run, specialized, skill, building, serverless, applications, covers, containerized, functions, event |
@@ -465,10 +408,8 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `gitops-workflow` | Implement GitOps workflows with ArgoCD and Flux for automated, declarative Kubernetes deployments with continuous reconciliation. Use when implementing GitOp... | gitops | gitops, argocd, flux, automated, declarative, kubernetes, deployments, continuous, reconciliation, implementing, automating, setting |
| `grafana-dashboards` | Create and manage production Grafana dashboards for real-time visualization of system and application metrics. Use when building monitoring dashboards, visua... | grafana, dashboards | grafana, dashboards, real, time, visualization, application, metrics, building, monitoring, visualizing, creating, operational |
| `helm-chart-scaffolding` | Design, organize, and manage Helm charts for templating and packaging Kubernetes applications with reusable configurations. Use when creating Helm charts, pa... | helm, chart | helm, chart, scaffolding, organize, charts, templating, packaging, kubernetes, applications, reusable, configurations, creating |
| `hugging-face-cli` | Execute Hugging Face Hub operations using the `hf` CLI. Use when the user needs to download models/datasets/spaces, upload files to Hub repositories, create ... | hugging, face, cli | hugging, face, cli, execute, hub, operations, hf, user, download, models, datasets, spaces |
| `hybrid-cloud-networking` | Configure secure, high-performance connectivity between on-premises infrastructure and cloud platforms using VPN and dedicated connections. Use when building... | hybrid, cloud, networking | hybrid, cloud, networking, configure, secure, high, performance, connectivity, between, premises, infrastructure, platforms |
| `istio-traffic-management` | Configure Istio traffic management including routing, load balancing, circuit breakers, and canary deployments. Use when implementing service mesh traffic po... | istio, traffic | istio, traffic, configure, including, routing, load, balancing, circuit, breakers, canary, deployments, implementing |
| `iterate-pr` | Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automa... | iterate, pr | iterate, pr, until, ci, passes, fix, failures, address, review, feedback, continuously, push |
| `java-pro` | Master Java 21+ with modern features like virtual threads, pattern matching, and Spring Boot 3.x. Expert in the latest Java ecosystem including GraalVM, Proj... | java | java, pro, 21, features, like, virtual, threads, matching, spring, boot, latest, ecosystem |
| `kpi-dashboard-design` | Design effective KPI dashboards with metrics selection, visualization best practices, and real-time monitoring patterns. Use when building business dashboard... | kpi, dashboard | kpi, dashboard, effective, dashboards, metrics, selection, visualization, real, time, monitoring, building, business |
| `langfuse` | Expert in Langfuse - the open-source LLM observability platform. Covers tracing, prompt management, evaluation, datasets, and integration with LangChain, Lla... | langfuse | langfuse, open, source, llm, observability, platform, covers, tracing, prompt, evaluation, datasets, integration |
@@ -493,16 +434,15 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `sql-pro` | Master modern SQL with cloud-native databases, OLTP/OLAP optimization, and advanced query techniques. Expert in performance tuning, data modeling, and hybrid... | sql | sql, pro, cloud, native, databases, oltp, olap, optimization, query, techniques, performance, tuning |
| `temporal-python-pro` | Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testin... | temporal, python | temporal, python, pro, orchestration, sdk, implements, durable, saga, distributed, transactions, covers, async |
| `terraform-module-library` | Build reusable Terraform modules for AWS, Azure, and GCP infrastructure following infrastructure-as-code best practices. Use when creating infrastructure mod... | terraform, module, library | terraform, module, library, reusable, modules, aws, azure, gcp, infrastructure, following, code, creating |
| `terraform-skill` | Terraform infrastructure as code best practices | terraform, skill | terraform, skill, infrastructure, code |
| `test-automator` | Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with a... | automator | automator, test, ai, powered, automation, frameworks, self, healing, tests, quality, engineering, scalable |
| `unity-developer` | Build Unity games with optimized C# scripts, efficient rendering, and proper asset management. Masters Unity 6 LTS, URP/HDRP pipelines, and cross-platform de... | unity | unity, developer, games, optimized, scripts, efficient, rendering, proper, asset, masters, lts, urp |
| `vercel-deploy-claimable` | Deploy applications and websites to Vercel. Use this skill when the user requests deployment actions such as | vercel, deploy, claimable | vercel, deploy, claimable, applications, websites, skill, user, requests, deployment, actions, such |
| `vercel-deployment` | Expert knowledge for deploying to Vercel with Next.js Use when: vercel, deploy, deployment, hosting, production. | vercel, deployment | vercel, deployment, knowledge, deploying, next, js, deploy, hosting |
| `voice-agents` | Voice agents represent the frontier of AI interaction - humans speaking naturally with AI systems. The challenge isn't just speech recognition and synthesis,... | voice, agents | voice, agents, represent, frontier, ai, interaction, humans, speaking, naturally, challenge, isn, just |
| `wireshark-analysis` | This skill should be used when the user asks to "analyze network traffic with Wireshark", "capture packets for troubleshooting", "filter PCAP files", "follow... | wireshark | wireshark, network, traffic, analysis, skill, should, used, user, asks, analyze, capture, packets |
| `workflow-automation` | Workflow automation is the infrastructure that makes AI agents reliable. Without durable execution, a network hiccup during a 10-step payment flow means lost... | | automation, infrastructure, makes, ai, agents, reliable, without, durable, execution, network, hiccup, during |
| `writing-skills` | Use when creating new skills, editing existing skills, or verifying skills work before deployment | writing, skills | writing, skills, creating, new, editing, existing, verifying, work, before, deployment |
-## security (112)
+## security (107)
| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -538,7 +478,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `docker-expert` | Docker containerization expert with deep knowledge of multi-stage builds, image optimization, container security, Docker Compose orchestration, and productio... | docker | docker, containerization, deep, knowledge, multi, stage, image, optimization, container, security, compose, orchestration |
| `ethical-hacking-methodology` | This skill should be used when the user asks to "learn ethical hacking", "understand penetration testing lifecycle", "perform reconnaissance", "conduct secur... | ethical, hacking, methodology | ethical, hacking, methodology, skill, should, used, user, asks, learn, understand, penetration, testing |
| `file-path-traversal` | This skill should be used when the user asks to "test for directory traversal", "exploit path traversal vulnerabilities", "read arbitrary files through web a... | file, path, traversal | file, path, traversal, testing, skill, should, used, user, asks, test, directory, exploit |
| `find-bugs` | Find bugs, security vulnerabilities, and code quality issues in local branch changes. Use when asked to review changes, find bugs, security review, or audit ... | find, bugs | find, bugs, security, vulnerabilities, code, quality, issues, local, branch, changes, asked, review |
| `firebase` | Firebase gives you a complete backend in minutes - auth, database, storage, functions, hosting. But the ease of setup hides real complexity. Security rules a... | firebase | firebase, gives, complete, backend, minutes, auth, database, storage, functions, hosting, ease, setup |
| `firmware-analyst` | Expert firmware analyst specializing in embedded systems, IoT security, and hardware reverse engineering. Masters firmware extraction, analysis, and vulnerab... | firmware, analyst | firmware, analyst, specializing, embedded, iot, security, hardware, reverse, engineering, masters, extraction, analysis |
| `form-cro` | Optimize any form that is NOT signup or account registration — including lead capture, contact, demo request, application, survey, quote, and checkout forms.... | form, cro | form, cro, optimize, any, signup, account, registration, including, lead, capture, contact, demo |
@@ -548,7 +487,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `gdpr-data-handling` | Implement GDPR-compliant data handling with consent management, data subject rights, and privacy by design. Use when building systems that process EU persona... | gdpr, data, handling | gdpr, data, handling, compliant, consent, subject, rights, privacy, building, process, eu, personal |
| `graphql-architect` | Master modern GraphQL with federation, performance optimization, and enterprise security. Build scalable schemas, implement advanced caching, and design real... | graphql | graphql, architect, federation, performance, optimization, enterprise, security, scalable, schemas, caching, real, time |
| `html-injection-testing` | This skill should be used when the user asks to "test for HTML injection", "inject HTML into web pages", "perform HTML injection attacks", "deface web applic... | html, injection | html, injection, testing, skill, should, used, user, asks, test, inject, web, pages |
| `hugging-face-jobs` | This skill should be used when users want to run any workload on Hugging Face Jobs infrastructure. Covers UV scripts, Docker-based jobs, hardware selection, ... | hugging, face, jobs | hugging, face, jobs, skill, should, used, users, want, run, any, workload, infrastructure |
| `hybrid-cloud-architect` | Expert hybrid cloud architect specializing in complex multi-cloud solutions across AWS/Azure/GCP and private clouds (OpenStack/VMware). Masters hybrid connec... | hybrid, cloud | hybrid, cloud, architect, specializing, complex, multi, solutions, aws, azure, gcp, private, clouds |
| `idor-testing` | This skill should be used when the user asks to "test for insecure direct object references," "find IDOR vulnerabilities," "exploit broken access control," "... | idor | idor, vulnerability, testing, skill, should, used, user, asks, test, insecure, direct, object |
| `incident-responder` | Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management. Masters incident command... | incident, responder | incident, responder, sre, specializing, rapid, problem, resolution, observability, masters, command, blameless, post |
@@ -592,7 +530,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `scanning-tools` | This skill should be used when the user asks to "perform vulnerability scanning", "scan networks for open ports", "assess web application security", "scan wi... | scanning | scanning, security, skill, should, used, user, asks, perform, vulnerability, scan, networks, open |
| `secrets-management` | Implement secure secrets management for CI/CD pipelines using Vault, AWS Secrets Manager, or native platform solutions. Use when handling sensitive credentia... | secrets | secrets, secure, ci, cd, pipelines, vault, aws, manager, native, platform, solutions, handling |
| `security-auditor` | Expert security auditor specializing in DevSecOps, comprehensive cybersecurity, and compliance frameworks. Masters vulnerability assessment, threat modeling,... | security, auditor | security, auditor, specializing, devsecops, cybersecurity, compliance, frameworks, masters, vulnerability, assessment, threat, modeling |
| `security-bluebook-builder` | Build security Blue Books for sensitive apps | security, bluebook, builder | security, bluebook, builder, blue, books, sensitive, apps |
| `security-compliance-compliance-check` | You are a compliance expert specializing in regulatory requirements for software systems including GDPR, HIPAA, SOC2, PCI-DSS, and other industry standards. ... | security, compliance, check | security, compliance, check, specializing, regulatory, requirements, software, including, gdpr, hipaa, soc2, pci |
| `security-requirement-extraction` | Derive security requirements from threat models and business context. Use when translating threats into actionable requirements, creating security user stori... | security, requirement, extraction | security, requirement, extraction, derive, requirements, threat, models, business, context, translating, threats, actionable |
| `security-scanning-security-dependencies` | You are a security expert specializing in dependency vulnerability analysis, SBOM generation, and supply chain security. Scan project dependencies across eco... | security, scanning, dependencies | security, scanning, dependencies, specializing, dependency, vulnerability, analysis, sbom, generation, supply, chain, scan |
@@ -612,14 +549,12 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `top-web-vulnerabilities` | This skill should be used when the user asks to "identify web application vulnerabilities", "explain common security flaws", "understand vulnerability catego... | top, web, vulnerabilities | top, web, vulnerabilities, 100, reference, skill, should, used, user, asks, identify, application |
| `twilio-communications` | Build communication features with Twilio: SMS messaging, voice calls, WhatsApp Business API, and user verification (2FA). Covers the full spectrum from simpl... | twilio, communications | twilio, communications, communication, features, sms, messaging, voice, calls, whatsapp, business, api, user |
| `ui-visual-validator` | Rigorous visual validation expert specializing in UI testing, design system compliance, and accessibility verification. Masters screenshot analysis, visual r... | ui, visual, validator | ui, visual, validator, rigorous, validation, specializing, testing, compliance, accessibility, verification, masters, screenshot |
| `using-neon` | Guides and best practices for working with Neon Serverless Postgres. Covers getting started, local development with Neon, choosing a connection method, Neon ... | using, neon | using, neon, guides, working, serverless, postgres, covers, getting, started, local, development, choosing |
| `varlock-claude-skill` | Secure environment variable management ensuring secrets are never exposed in Claude sessions, terminals, logs, or git commits | varlock, claude, skill | varlock, claude, skill, secure, environment, variable, ensuring, secrets, never, exposed, sessions, terminals |
| `vulnerability-scanner` | Advanced vulnerability analysis principles. OWASP 2025, Supply Chain Security, attack surface mapping, risk prioritization. | vulnerability, scanner | vulnerability, scanner, analysis, principles, owasp, 2025, supply, chain, security, attack, surface, mapping |
| `web-design-guidelines` | Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my si... | web, guidelines | web, guidelines, review, ui, code, interface, compliance, asked, my, check, accessibility, audit |
| `wordpress-penetration-testing` | This skill should be used when the user asks to "pentest WordPress sites", "scan WordPress for vulnerabilities", "enumerate WordPress users, themes, or plugi... | wordpress, penetration | wordpress, penetration, testing, skill, should, used, user, asks, pentest, sites, scan, vulnerabilities |
|
||||
| `xss-html-injection` | This skill should be used when the user asks to "test for XSS vulnerabilities", "perform cross-site scripting attacks", "identify HTML injection flaws", "exp... | xss, html, injection | xss, html, injection, cross, site, scripting, testing, skill, should, used, user, asks |
|
||||
|
||||
## testing (22)
## testing (21)

| Skill | Description | Tags | Triggers |
| --- | --- | --- | --- |
@@ -631,7 +566,6 @@ TRIGGER: "shopify", "shopify app", "checkout extension",... | shopify | shopify,
| `pentest-commands` | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "s... | pentest, commands | pentest, commands, skill, should, used, user, asks, run, scan, nmap, metasploit, exploits |
| `pentest-commands` | This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "s... | pentest, commands | pentest, commands, skill, should, used, user, asks, run, scan, nmap, metasploit, exploits |
| `performance-testing-review-multi-agent-review` | Use when working with performance testing review multi agent review | performance, multi, agent | performance, multi, agent, testing, review, working |
| `playwright-skill` | Complete browser automation with Playwright. Auto-detects dev servers, writes clean test scripts to /tmp. Test pages, fill forms, take screenshots, check res... | playwright, skill | playwright, skill, complete, browser, automation, auto, detects, dev, servers, writes, clean, test |
| `pypict-skill` | Pairwise test generation | pypict, skill | pypict, skill, pairwise, test, generation |
| `screen-reader-testing` | Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issue... | screen, reader | screen, reader, testing, test, web, applications, readers, including, voiceover, nvda, jaws, validating |
| `startup-analyst` | Expert startup business analyst specializing in market sizing, financial modeling, competitive analysis, and strategic planning for early-stage companies. Us... | startup, analyst | startup, analyst, business, specializing, market, sizing, financial, modeling, competitive, analysis, strategic, planning |
| `startup-metrics-framework` | This skill should be used when the user asks about "key startup metrics", "SaaS metrics", "CAC and LTV", "unit economics", "burn multiple", "rule of 40", "ma... | startup, metrics, framework | startup, metrics, framework, skill, should, used, user, asks, about, key, saas, cac |

@@ -1,11 +1,11 @@
# 🤝 Contributing Guide - V4 Enterprise Edition
# 🤝 Contributing Guide - V3 Enterprise Edition

**Thank you for wanting to make this repo better!** This guide shows you exactly how to contribute, even if you're new to open source.
With V4, we raised the bar for quality. Please read the **new Quality Standards** below carefully.
With V3, we raised the bar for quality. Please read the **new Quality Standards** below carefully.

---

## 🧐 The "Quality Bar" (V4 Standard)
## 🧐 The "Quality Bar" (V3 Standard)

**Critical for new skills:** Every skill submitted must pass our **5-Point Quality Check** (see `docs/QUALITY_BAR.md` for details):

@@ -112,13 +112,16 @@ code example here
- ❌ Don't do this
```

#### Step 4: Validate (CRITICAL V4 STEP)
#### Step 4: Validate (CRITICAL V3 STEP)

Use the canonical validator `scripts/validate_skills.py` via `npm run validate`. **We will not merge PRs that fail this check.**
Run the validation script locally. **We will not merge PRs that fail this check.**

```bash
npm run validate # soft mode (warnings only)
npm run validate:strict # strict mode (what CI runs)
# Soft mode (warnings only)
python3 scripts/validate_skills.py

# Hard mode (what CI runs)
python3 scripts/validate_skills.py --strict
```

This checks:

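The practical difference between the two modes is CI parity: strict mode fails on what soft mode merely warns about. As an illustrative sketch only (the real checks live in `scripts/validate_skills.py`; the field names and thresholds here are assumptions, not the repo's actual rules), a strict-mode frontmatter check might look like:

```python
# Hypothetical sketch of a strict-mode SKILL.md frontmatter check.
# Field names and thresholds are illustrative, not the real validate_skills.py.
REQUIRED_FIELDS = {"name", "description"}

def validate_frontmatter(text: str, strict: bool = False) -> list[str]:
    """Return a list of problems found in a SKILL.md frontmatter block."""
    problems = []
    if not text.startswith("---"):
        return ["missing frontmatter block"]
    body = text.split("---", 2)[1]
    fields = {}
    for line in body.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    for field in REQUIRED_FIELDS - fields.keys():
        problems.append(f"missing required field: {field}")
    # Strict mode upgrades a soft warning into a hard failure.
    if strict and len(fields.get("description", "")) < 20:
        problems.append("description too short for strict mode")
    return problems

print(validate_frontmatter("---\nname: demo\ndescription: short\n---\n", strict=True))
```

A CI wrapper would simply exit non-zero when the returned list is non-empty in strict mode.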
55 README.md
@@ -1,6 +1,6 @@
# 🌌 Antigravity Awesome Skills: 625+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More
# 🌌 Antigravity Awesome Skills: 560+ Agentic Skills for Claude Code, Gemini CLI, Cursor, Copilot & More

> **The Ultimate Collection of 625+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**
> **The Ultimate Collection of 560+ Universal Agentic Skills for AI Coding Assistants — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)
@@ -11,7 +11,7 @@
[](https://github.com/opencode-ai/opencode)
[](https://github.com/sickn33/antigravity-awesome-skills)

**Antigravity Awesome Skills** is a curated, battle-tested library of **624 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:
**Antigravity Awesome Skills** is a curated, battle-tested library of **560 high-performance agentic skills** designed to work seamlessly across all major AI coding assistants:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)
@@ -29,7 +29,7 @@ This repository provides essential skills to transform your AI assistant into a
- [🔌 Compatibility & Invocation](#compatibility--invocation)
- [📦 Features & Categories](#features--categories)
- [🎁 Curated Collections (Bundles)](#curated-collections)
- [📚 Browse 625+ Skills](#browse-625-skills)
- [📚 Browse 560+ Skills](#browse-560-skills)
- [🛠️ Installation](#installation)
- [🤝 How to Contribute](#how-to-contribute)
- [👥 Contributors & Credits](#credits--sources)
@@ -52,14 +52,10 @@ AI Agents (like Claude Code, Cursor, or Gemini) are smart, but they lack **speci

### 2. ⚡️ Quick Start (The "Bundle" Way)

Install once (clone or npx); then use our **Starter Packs** in [docs/BUNDLES.md](docs/BUNDLES.md) to see which skills fit your role. You get the full repo; Starter Packs are curated lists, not a separate install.
Don't install 560+ skills manually. Use our **Starter Packs**:

1. **Install** (pick one):
1. **Clone the repo**:
```bash
# Easiest: npx installer (clones to ~/.agent/skills by default)
npx antigravity-awesome-skills

# Or clone manually
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```
2. **Pick your persona** (See [docs/BUNDLES.md](docs/BUNDLES.md)):
@@ -95,7 +91,7 @@ These skills follow the universal **SKILL.md** format and work with any AI codin

> [!WARNING]
> **Windows Users**: This repository uses **symlinks** for official skills.
> The **npx** installer sets `core.symlinks=true` automatically. For **git clone**, enable Developer Mode or run Git as Administrator:
> You must enable Developer Mode or run Git as Administrator:
> `git clone -c core.symlinks=true https://github.com/...`

---
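The symlink warning above has a quick sanity check: on Windows without Developer Mode, git writes symlinks as plain text files containing the target path instead of real links. A small hedged helper (hypothetical, not part of the repo) can distinguish the two cases:

```python
import os

def skill_link_ok(path: str) -> bool:
    """True if the path is a real symlink or a resolved directory.

    On Windows clones without core.symlinks support, an official-skill
    entry materializes as a plain text file, which fails this check.
    """
    return os.path.islink(path) or os.path.isdir(path)
```

Run it against any `skills/<name>` entry after cloning; a `False` result suggests the clone was made without symlink support.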
@@ -124,7 +120,7 @@ The repository is organized into specialized domains to transform your AI into a

[Check out our Starter Packs in docs/BUNDLES.md](docs/BUNDLES.md) to find the perfect toolkit for your role.

## Browse 625+ Skills
## Browse 560+ Skills

We have moved the full skill registry to a dedicated catalog to keep this README clean.

@@ -132,33 +128,10 @@ We have moved the full skill registry to a dedicated catalog to keep this README

## Installation

To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**:

### Option A: npx (recommended)
To use these skills with **Claude Code**, **Gemini CLI**, **Codex CLI**, **Cursor**, **Antigravity**, or **OpenCode**, clone this repository into your agent's skills directory:

```bash
# Default: ~/.agent/skills (universal)
npx antigravity-awesome-skills

# Cursor
npx antigravity-awesome-skills --cursor

# Claude Code
npx antigravity-awesome-skills --claude

# Gemini CLI
npx antigravity-awesome-skills --gemini

# Custom path
npx antigravity-awesome-skills --path ./my-skills
```

Run `npx antigravity-awesome-skills --help` for all options. If the directory already exists, the installer runs `git pull` to update.

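The installer's update behavior reduces to a clone-or-pull decision on the target directory. Sketched in Python for illustration only (the actual logic lives in `bin/install.js`; function and parameter names here are hypothetical):

```python
import os
import subprocess

def install_or_update(target: str, repo: str) -> str:
    """Clone the skills repo into target, or pull if it is already a clone."""
    if os.path.isdir(os.path.join(target, ".git")):
        # Existing clone: update in place, as the npx installer does.
        subprocess.run(["git", "-C", target, "pull"], check=True)
        return "updated"
    if os.path.exists(target):
        # Refuse to clobber a non-git directory.
        raise RuntimeError(f"{target} exists and is not a git repo")
    subprocess.run(["git", "clone", repo, target], check=True)
    return "cloned"
```

The guard on a pre-existing non-git directory mirrors the installer's refusal to overwrite user data.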
### Option B: git clone

```bash
# Universal (works with most tools)
# Universal installation (works with most tools)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills

# Claude Code specific
@@ -223,13 +196,11 @@ This collection would not be possible without the incredible work of the Claude
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite & Guide (Source for ~60 new skills).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering and PM toolkit.
- **[karanb192/awesome-claude-skills](https://github.com/karanb192/awesome-claude-skills)**: A massive list of verified skills for Claude Code.
- **[VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills)**: Curated collection of 61 high-quality skills including official team skills from Sentry, Trail of Bits, Expo, Hugging Face, and comprehensive context engineering suite (v4.3.0 integration).
- **[zircote/.claude](https://github.com/zircote/.claude)**: Shopify development skill reference.
- **[vibeforge1111/vibeship-spawner-skills](https://github.com/vibeforge1111/vibeship-spawner-skills)**: AI Agents, Integrations, Maker Tools (57 skills, Apache 2.0).
- **[coreyhaines31/marketingskills](https://github.com/coreyhaines31/marketingskills)**: Marketing skills for CRO, copywriting, SEO, paid ads, and growth (23 skills, MIT).
- **[vudovn/antigravity-kit](https://github.com/vudovn/antigravity-kit)**: AI Agent templates with Skills, Agents, and Workflows (33 skills, MIT).
- **[affaan-m/everything-claude-code](https://github.com/affaan-m/everything-claude-code)**: Complete Claude Code configuration collection from Anthropic hackathon winner - skills only (8 skills, MIT).
- **[whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills)**: Practical fp-ts skills for TypeScript – fp-ts-pragmatic, fp-ts-react, fp-ts-errors (v4.4.0).
- **[webzler/agentMemory](https://github.com/webzler/agentMemory)**: Source for the agent-memory-mcp skill.
- **[sstklen/claude-api-cost-optimization](https://github.com/sstklen/claude-api-cost-optimization)**: Save 50-90% on Claude API costs with smart optimization strategies (MIT).

@@ -286,15 +257,13 @@ We officially thank the following contributors for their help in making this rep
- [Owen Wu](https://github.com/yubing744)
- [SuperJMN](https://github.com/SuperJMN)
- [Viktor Ferenczi](https://github.com/viktor-ferenczi)
- [Đỗ Khắc Gia Khoa](https://github.com/Dokhacgiakhoa)
- [evandro-miguel](https://github.com/evandro-miguel)
- [junited31](https://github.com/junited31)
- [Đỗ Khắc Gia Khoa](https://github.com/Dokhacgiakhoa) - Vietnamese translations (PR #38)
- [junited31](https://github.com/junited31) - unreal-engine-cpp-pro skill (PR #39)
- [krisnasantosa15](https://github.com/krisnasantosa15)
- [raeef1001](https://github.com/raeef1001)
- [taksrules](https://github.com/taksrules)
- [zebbern](https://github.com/zebbern)
- [vuth-dogo](https://github.com/vuth-dogo)
- [whatiskadudoing](https://github.com/whatiskadudoing)

## Star History

182 RELEASE_NOTES.md
@@ -1,182 +1,40 @@
# Release v4.5.0: Stitch UI Design
# Release v4.1.0: Internationalization & Game Development

> **Expert prompting guide for Google Stitch AI-powered UI design tool**
> **Vietnamese translations and Unreal Engine C++ expertise added to the skills collection**

This release adds the stitch-ui-design skill and clarifies documentation around Starter Packs vs full repo installation, bringing the total to 625 skills. The new skill provides comprehensive guidance for creating effective prompts in Google Stitch (Gemini 2.5 Flash) to generate high-quality UI designs for web and mobile applications.

## New Skills (1)

- **[stitch-ui-design](skills/stitch-ui-design/)** – Expert guide for creating effective prompts for Google Stitch AI UI design tool. Covers prompt structure, specificity techniques, iteration strategies, design-to-code workflows, and 10+ practical examples for landing pages, mobile apps, and dashboards.

> **Try it:** `Use @stitch-ui-design to help me create a prompt for a mobile fitness app dashboard`

## Documentation Improvements

- **Clarified Starter Packs**: Updated README.md and GETTING_STARTED.md to explicitly state that installation means cloning the full repo once; Starter Packs are curated lists to help discover which skills to use by role, not a different installation method (fixes [#44](https://github.com/sickn33/antigravity-awesome-skills/issues/44))

## Registry Update

- **Total Skills**: 625 (from 624)
- **New Skills Added**: 1
- **Catalog**: Regenerated with all skills

## Credits

A huge shoutout to our community contributors:

- **[@CypherPoet](https://github.com/CypherPoet)** for raising the documentation clarity issue (#44)

---

# Release v4.4.0: fp-ts skills for TypeScript

> **Three practical fp-ts skills for TypeScript functional programming**

This release adds 3 fp-ts skills sourced from [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills), bringing the total to 624 skills. These skills focus on practical, jargon-free patterns for pipe, Option, Either, TaskEither, React integration, and type-safe error handling.

## New Skills (3)

- **[fp-ts-pragmatic](skills/fp-ts-pragmatic/)** – The 80/20 of functional programming: pipe, Option, Either, TaskEither without academic jargon
- **[fp-ts-react](skills/fp-ts-react/)** – Patterns for using fp-ts with React 18/19 and Next.js 14/15 (state, forms, data fetching)
- **[fp-ts-errors](skills/fp-ts-errors/)** – Type-safe error handling with Either and TaskEither; no more try/catch spaghetti

## Registry Update

- **Total Skills**: 624 (from 621)
- **New Skills Added**: 3
- **Catalog**: Regenerated with all skills

---

# Release v4.3.0: VoltAgent Integration & Context Engineering Suite

> **Massive expansion with 61 new skills from VoltAgent repository, including official team skills and comprehensive context engineering capabilities**

This release adds 61 high-quality skills sourced from the VoltAgent/awesome-agent-skills curated collection, bringing the total to 614 skills. Highlights include official skills from Sentry, Trail of Bits, Expo, Hugging Face, and a complete context engineering suite for building sophisticated AI agents.
This release brings comprehensive Vietnamese language support for documentation and adds professional Unreal Engine 5.x C++ development guidance, expanding the repository's international reach and game development capabilities.

## 🚀 New Skills

### Official Team Skills (27)
### 🎮 [unreal-engine-cpp-pro](skills/unreal-engine-cpp-pro/)

#### Sentry (4)
- **[commit](skills/commit/)** – Create commits with best practices following Sentry conventions
- **[create-pr](skills/create-pr/)** – Create pull requests with proper descriptions and review guidelines
- **[find-bugs](skills/find-bugs/)** – Find and identify bugs in code systematically
- **[iterate-pr](skills/iterate-pr/)** – Iterate on pull request feedback efficiently
**Expert-level Unreal Engine 5.x C++ development guide**

#### Trail of Bits (3)
- **[culture-index](skills/culture-index/)** – Index and search culture documentation
- **[fix-review](skills/fix-review/)** – Verify fix commits address audit findings without new bugs
- **[sharp-edges](skills/sharp-edges/)** – Identify error-prone APIs and dangerous configurations
Comprehensive guide for developing robust, performant C++ code in Unreal Engine 5. Covers UObject hygiene, garbage collection patterns, performance optimization, and Epic Games' coding standards.

#### Expo (2)
- **[expo-deployment](skills/expo-deployment/)** – Deploy Expo apps to production
- **[upgrading-expo](skills/upgrading-expo/)** – Upgrade Expo SDK versions safely
- **UObject & Garbage Collection**: Proper UPROPERTY usage, TStrongObjectPtr patterns, IsValid() checks
- **Performance Optimization**: Tick management, casting optimization, struct vs class decisions
- **Naming Conventions**: Strict adherence to Epic Games' coding standards (T, U, A, F, E, I prefixes)
- **Common Patterns**: Component lookup, interface implementation, async loading with soft references
- **Example Code**: Complete Actor implementation demonstrating best practices

#### Hugging Face (2)
- **[hugging-face-cli](skills/hugging-face-cli/)** – HF Hub CLI for models, datasets, repos, and compute jobs
- **[hugging-face-jobs](skills/hugging-face-jobs/)** – Run compute jobs and Python scripts on HF infrastructure

#### Other Official (16)
- **[vercel-deploy-claimable](skills/vercel-deploy-claimable/)** – Deploy projects to Vercel
- **[design-md](skills/design-md/)** – Create and manage DESIGN.md files (Google Stitch)
- **[using-neon](skills/using-neon/)** – Best practices for Neon Serverless Postgres
- **[n8n-code-python](skills/n8n-code-python/)** – Python in n8n Code nodes
- **[n8n-mcp-tools-expert](skills/n8n-mcp-tools-expert/)** – n8n MCP tools guide
- **[n8n-node-configuration](skills/n8n-node-configuration/)** – n8n node configuration
- **[swiftui-expert-skill](skills/swiftui-expert-skill/)** – SwiftUI best practices
- **[fal-audio](skills/fal-audio/)** – Text-to-speech and speech-to-text using fal.ai
- **[fal-generate](skills/fal-generate/)** – Generate images and videos using fal.ai AI models
- **[fal-image-edit](skills/fal-image-edit/)** – AI-powered image editing with style transfer
- **[fal-platform](skills/fal-platform/)** – Platform APIs for model management and usage tracking
- **[fal-upscale](skills/fal-upscale/)** – Upscale and enhance image/video resolution using AI
- **[fal-workflow](skills/fal-workflow/)** – Generate workflow JSON files for chaining AI models
- **[deep-research](skills/deep-research/)** – Gemini Deep Research Agent for autonomous research
- **[imagen](skills/imagen/)** – Generate images using Google Gemini
- **[readme](skills/readme/)** – Generate comprehensive project documentation

### Community Skills (34)

#### Context Engineering Suite (7)
A complete suite for building sophisticated AI agents with advanced context management:

- **[context-fundamentals](skills/context-fundamentals/)** – Understand what context is, why it matters, and the anatomy of context in agent systems
- **[context-degradation](skills/context-degradation/)** – Recognize patterns of context failure: lost-in-middle, poisoning, distraction, and clash
- **[context-compression](skills/context-compression/)** – Design and evaluate compression strategies for long-running sessions
- **[context-optimization](skills/context-optimization/)** – Apply compaction, masking, and caching strategies
- **[multi-agent-patterns](skills/multi-agent-patterns/)** – Master orchestrator, peer-to-peer, and hierarchical multi-agent architectures
- **[memory-systems](skills/memory-systems/)** – Design short-term, long-term, and graph-based memory architectures
- **[evaluation](skills/evaluation/)** – Build evaluation frameworks for agent systems

#### Development Tools (8)
- **[frontend-slides](skills/frontend-slides/)** – Generate animation-rich HTML presentations with visual style previews
- **[linear-claude-skill](skills/linear-claude-skill/)** – Manage Linear issues, projects, and teams
- **[skill-rails-upgrade](skills/skill-rails-upgrade/)** – Analyze Rails apps and provide upgrade assessments
- **[terraform-skill](skills/terraform-skill/)** – Terraform infrastructure as code best practices
- **[tool-design](skills/tool-design/)** – Build tools that agents can use effectively, including architectural reduction patterns
- **[screenshots](skills/screenshots/)** – Generate marketing screenshots with Playwright
- **[automate-whatsapp](skills/automate-whatsapp/)** – Build WhatsApp automations with workflows and agents
- **[observe-whatsapp](skills/observe-whatsapp/)** – Debug WhatsApp delivery issues and run health checks

#### Platform & Framework Skills (19)
- **[aws-skills](skills/aws-skills/)** – AWS development with infrastructure automation
- **[ui-skills](skills/ui-skills/)** – Opinionated constraints for building interfaces
- **[vexor](skills/vexor/)** – Vector-powered CLI for semantic file search
- **[pypict-skill](skills/pypict-skill/)** – Pairwise test generation
- **[makepad-skills](skills/makepad-skills/)** – Makepad UI development for Rust apps
- **[threejs-skills](skills/threejs-skills/)** – Three.js 3D experiences
- **[claude-scientific-skills](skills/claude-scientific-skills/)** – Scientific research skills
- **[claude-win11-speckit-update-skill](skills/claude-win11-speckit-update-skill/)** – Windows 11 management
- **[security-bluebook-builder](skills/security-bluebook-builder/)** – Security documentation
- **[claude-ally-health](skills/claude-ally-health/)** – Health assistant
- **[clarity-gate](skills/clarity-gate/)** – RAG quality verification
- **[beautiful-prose](skills/beautiful-prose/)** – Writing style guide
- **[claude-speed-reader](skills/claude-speed-reader/)** – Speed reading tool
- **[skill-seekers](skills/skill-seekers/)** – Skill conversion tool
- **[varlock-claude-skill](skills/varlock-claude-skill/)** – Secure environment variable management
- **[superpowers-lab](skills/superpowers-lab/)** – Superpowers Lab integration
- **[nanobanana-ppt-skills](skills/nanobanana-ppt-skills/)** – PowerPoint presentation skills
- **[x-article-publisher-skill](skills/x-article-publisher-skill/)** – X/Twitter article publishing
- **[ffuf-claude-skill](skills/ffuf-claude-skill/)** – Web fuzzing with ffuf
> **Try it:** `"Help me write a C++ Actor class for Unreal Engine 5"` or `"Show me how to properly use UPROPERTY in Unreal Engine"`

---

## 📦 Registry Update
## 📦 Improvements

- **Total Skills**: 614 (from 553)
- **New Skills Added**: 61
- **Catalog**: Fully regenerated with all new skills
- **Sources**: All skills properly attributed in `docs/SOURCES.md`

## 🔧 Improvements

### Quality Assurance
- All new skills validated for frontmatter compliance
- "When to Use" sections added where missing
- Source attribution maintained for all skills
- Risk labels properly set

### Documentation
- Updated README.md with correct skill count (614)
- Updated package.json version to 4.3.0
- Comprehensive release notes created

## 📊 Statistics

- **Skills from VoltAgent Repository**: 61
- Official Team Skills: 27
- Community Skills: 34
- **Skills Analyzed**: 174 total from VoltAgent
- **Skills Already Present**: 32 (skipped as duplicates)
- **Skills with Similar Names**: 89 (analyzed, 12 implemented as complementary)
- **Registry Update**: Now tracking 560 skills (was 559).
- **Internationalization**: Added comprehensive Vietnamese translations for all core documentation
- **Documentation Structure**: Reorganized Vietnamese translations from `docs/vi/` to `docs/vietnamese/` for consistency
- **Translation Coverage**: Vietnamese versions now available for README, GETTING_STARTED, CONTRIBUTING, FAQ, SECURITY, QUALITY_BAR, SKILL_ANATOMY, and more

## 👥 Credits

A huge shoutout to our community contributors and the VoltAgent team:
A huge shoutout to our community contributors:

- **VoltAgent/awesome-agent-skills** for curating an excellent collection
- **Official Teams**: Sentry, Trail of Bits, Expo, Hugging Face, Vercel Labs, Google Labs, Neon, fal.ai
- **Community Contributors**: zarazhangrui, wrsmith108, robzolkos, muratcankoylan, antonbabenko, and all other skill authors
- **@Dokhacgiakhoa** for comprehensive Vietnamese translations (PR #38)
- **@junited31** for `unreal-engine-cpp-pro` skill (PR #39)

---

Binary file not shown. (image: 52 KiB before, 52 KiB after)
113 bin/install.js
@@ -1,113 +0,0 @@
```js
#!/usr/bin/env node

const { spawnSync } = require('child_process');
const path = require('path');
const fs = require('fs');

const REPO = 'https://github.com/sickn33/antigravity-awesome-skills.git';
const HOME = process.env.HOME || process.env.USERPROFILE || '';

function resolveDir(p) {
  if (!p) return null;
  const s = p.replace(/^~($|\/)/, HOME + '$1');
  return path.resolve(s);
}

function parseArgs() {
  const a = process.argv.slice(2);
  let pathArg = null;
  let cursor = false, claude = false, gemini = false;

  for (let i = 0; i < a.length; i++) {
    if (a[i] === '--help' || a[i] === '-h') return { help: true };
    if (a[i] === '--path' && a[i + 1]) { pathArg = a[++i]; continue; }
    if (a[i] === '--cursor') { cursor = true; continue; }
    if (a[i] === '--claude') { claude = true; continue; }
    if (a[i] === '--gemini') { gemini = true; continue; }
    if (a[i] === 'install') continue;
  }

  return { pathArg, cursor, claude, gemini };
}

function defaultDir(opts) {
  if (opts.pathArg) return resolveDir(opts.pathArg);
  if (opts.cursor) return path.join(HOME, '.cursor', 'skills');
  if (opts.claude) return path.join(HOME, '.claude', 'skills');
  if (opts.gemini) return path.join(HOME, '.gemini', 'skills');
  return path.join(HOME, '.agent', 'skills');
}

function printHelp() {
  console.log(`
antigravity-awesome-skills — installer

npx antigravity-awesome-skills [install] [options]

Clones the skills repo into your agent's skills directory.

Options:
  --cursor      Install to ~/.cursor/skills (Cursor)
  --claude      Install to ~/.claude/skills (Claude Code)
  --gemini      Install to ~/.gemini/skills (Gemini CLI)
  --path <dir>  Install to <dir> (default: ~/.agent/skills)

Examples:
  npx antigravity-awesome-skills
  npx antigravity-awesome-skills --cursor
  npx antigravity-awesome-skills --path ./my-skills
`);
}

function run(cmd, args, opts = {}) {
  const r = spawnSync(cmd, args, { stdio: 'inherit', ...opts });
  if (r.status !== 0) process.exit(r.status == null ? 1 : r.status);
}

function main() {
  const opts = parseArgs();
  if (opts.help) {
    printHelp();
    return;
  }

  const target = defaultDir(opts);
  if (!target || !HOME) {
    console.error('Could not resolve home directory. Use --path <absolute-path>.');
    process.exit(1);
  }

  if (fs.existsSync(target)) {
    const gitDir = path.join(target, '.git');
    if (fs.existsSync(gitDir)) {
      console.log('Directory already exists and is a git repo. Updating…');
      process.chdir(target);
      run('git', ['pull']);
      return;
    }
    console.error(`Directory exists and is not a git repo: ${target}`);
    console.error('Remove it or use --path to choose another location.');
    process.exit(1);
  }

  const parent = path.dirname(target);
  if (!fs.existsSync(parent)) {
    try {
      fs.mkdirSync(parent, { recursive: true });
    } catch (e) {
      console.error(`Cannot create parent directory: ${parent}`, e.message);
      process.exit(1);
    }
  }

  if (process.platform === 'win32') {
    run('git', ['-c', 'core.symlinks=true', 'clone', REPO, target]);
  } else {
    run('git', ['clone', REPO, target]);
  }

  console.log(`\nInstalled to ${target}`);
  console.log('Pick a bundle in docs/BUNDLES.md and use @skill-name in your AI assistant.');
}

main();
```
@@ -1,5 +1,5 @@
{
  "generatedAt": "2026-01-31T07:34:21.497Z",
  "generatedAt": "2026-01-28T16:10:28.837Z",
  "aliases": {
    "accessibility-compliance-audit": "accessibility-compliance-accessibility-audit",
    "active directory attacks": "active-directory-attacks",
@@ -24,7 +24,6 @@
    "cicd-automation-automate": "cicd-automation-workflow-automate",
    "claude code guide": "claude-code-guide",
    "d3-viz": "claude-d3js-skill",
    "claude-win11-skill": "claude-win11-speckit-update-skill",
    "cloud penetration testing": "cloud-penetration-testing",
    "code-documentation-explain": "code-documentation-code-explain",
    "code-documentation-generate": "code-documentation-doc-generate",

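The aliases map above lets tooling resolve shorthand or legacy skill names to canonical IDs. A minimal resolver sketch (hypothetical helper, not code from the repo; the sample entries are taken from the index fragment above):

```python
# Hypothetical alias resolver over a skills_index.json-style mapping.
ALIASES = {
    "d3-viz": "claude-d3js-skill",
    "claude code guide": "claude-code-guide",
}

def resolve_skill(name: str, aliases: dict[str, str]) -> str:
    """Map a requested skill name to its canonical ID, falling back to itself."""
    return aliases.get(name.strip().lower(), name)

print(resolve_skill("d3-viz", ALIASES))
```

Unknown names pass through unchanged, so the resolver is safe to apply to every lookup.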
@@ -1,5 +1,5 @@
{
  "generatedAt": "2026-01-31T07:34:21.497Z",
  "generatedAt": "2026-01-28T16:10:28.837Z",
  "bundles": {
    "core-dev": {
      "description": "Core development skills across languages, frameworks, and backend/frontend fundamentals.",
@@ -45,21 +45,16 @@
      "firebase",
      "firecrawl-scraper",
      "flutter-expert",
      "fp-ts-errors",
      "fp-ts-pragmatic",
      "fp-ts-react",
      "frontend-design",
      "frontend-dev-guidelines",
      "frontend-developer",
      "frontend-mobile-development-component-scaffold",
      "frontend-mobile-security-xss-scan",
      "frontend-security-coder",
      "frontend-slides",
      "go-concurrency-patterns",
      "golang-pro",
      "graphql",
      "hubspot-integration",
      "hugging-face-jobs",
      "ios-developer",
      "java-pro",
      "javascript-mastery",
@@ -68,7 +63,6 @@
      "javascript-typescript-typescript-scaffold",
      "langgraph",
      "launch-strategy",
      "makepad-skills",
      "mcp-builder",
      "memory-safety-patterns",
      "mobile-design",
@@ -77,14 +71,11 @@
      "modern-javascript-patterns",
      "moodle-external-api-development",
      "multi-platform-apps-multi-platform",
      "n8n-code-python",
      "n8n-node-configuration",
      "nextjs-app-router-patterns",
      "nextjs-best-practices",
      "nextjs-supabase-auth",
      "nodejs-backend-patterns",
      "nodejs-best-practices",
      "observe-whatsapp",
      "openapi-spec-generation",
      "php-pro",
      "plaid-fintech",
@@ -112,8 +103,6 @@
|
||||
"shopify-apps",
|
||||
"shopify-development",
|
||||
"slack-bot-builder",
|
||||
"stitch-ui-design",
|
||||
"swiftui-expert-skill",
|
||||
"systems-programming-rust-project",
|
||||
"tavily-web",
|
||||
"telegram-bot-builder",
|
||||
@@ -127,7 +116,6 @@
|
||||
"typescript-expert",
|
||||
"typescript-pro",
|
||||
"ui-ux-pro-max",
|
||||
"using-neon",
|
||||
"uv-package-manager",
|
||||
"viral-generator-builder",
|
||||
"voice-agents",
|
||||
@@ -164,7 +152,6 @@
|
||||
"design-orchestration",
|
||||
"docker-expert",
|
||||
"ethical-hacking-methodology",
|
||||
"find-bugs",
|
||||
"firebase",
|
||||
"firmware-analyst",
|
||||
"form-cro",
|
||||
@@ -173,7 +160,6 @@
|
||||
"frontend-security-coder",
|
||||
"gdpr-data-handling",
|
||||
"graphql-architect",
|
||||
"hugging-face-jobs",
|
||||
"hybrid-cloud-architect",
|
||||
"idor-testing",
|
||||
"k8s-manifest-generator",
|
||||
@@ -204,7 +190,6 @@
|
||||
"scanning-tools",
|
||||
"secrets-management",
|
||||
"security-auditor",
|
||||
"security-bluebook-builder",
|
||||
"security-compliance-compliance-check",
|
||||
"security-requirement-extraction",
|
||||
"security-scanning-security-dependencies",
|
||||
@@ -222,8 +207,6 @@
|
||||
"top-web-vulnerabilities",
|
||||
"twilio-communications",
|
||||
"ui-visual-validator",
|
||||
"using-neon",
|
||||
"varlock-claude-skill",
|
||||
"vulnerability-scanner",
|
||||
"web-design-guidelines",
|
||||
"wordpress-penetration-testing",
|
||||
@@ -280,11 +263,9 @@
|
||||
"database-optimizer",
|
||||
"dbt-transformation-patterns",
|
||||
"firebase",
|
||||
"fp-ts-react",
|
||||
"frontend-dev-guidelines",
|
||||
"gdpr-data-handling",
|
||||
"graphql",
|
||||
"hugging-face-jobs",
|
||||
"hybrid-cloud-networking",
|
||||
"idor-testing",
|
||||
"ios-developer",
|
||||
@@ -317,7 +298,6 @@
|
||||
"sql-pro",
|
||||
"sqlmap-database-pentesting",
|
||||
"unity-ecs-patterns",
|
||||
"using-neon",
|
||||
"vector-database-engineer",
|
||||
"xlsx",
|
||||
"xlsx-official"
|
||||
@@ -355,7 +335,6 @@
|
||||
"error-debugging-error-trace",
|
||||
"error-diagnostics-error-analysis",
|
||||
"error-diagnostics-error-trace",
|
||||
"expo-deployment",
|
||||
"flutter-expert",
|
||||
"git-pr-workflows-git-workflow",
|
||||
"gitlab-ci-patterns",
|
||||
@@ -393,9 +372,9 @@
|
||||
"temporal-python-pro",
|
||||
"terraform-specialist",
|
||||
"unity-developer",
|
||||
"vercel-deploy-claimable",
|
||||
"vercel-deployment",
|
||||
"voice-agents"
|
||||
"voice-agents",
|
||||
"writing-skills"
|
||||
]
|
||||
}
|
||||
},

data/catalog.json (1462 changes): file diff suppressed because it is too large.
package.json
@@ -1,6 +1,6 @@
{
"name": "antigravity-awesome-skills",
"version": "4.2.0",
"version": "4.0.0",
"dependencies": {
"yaml": "^2.8.2"
}

docs/BUNDLES.md (424 changes)
@@ -1,396 +1,124 @@
# 📦 Antigravity Skill Bundles

> **Curated collections of skills organized by role and expertise level.** Don't know where to start? Pick a bundle below to get a curated set of skills for your role.
Don't know where to start? Pick a bundle below to get a curated set of skills for your role.

## 🚀 Quick Start

1. **Install the repository:**
```bash
npx antigravity-awesome-skills
# or clone manually
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

2. **Choose your bundle** from the list below based on your role or interests.

3. **Use skills** by referencing them in your AI assistant:
- Claude Code: `>> @skill-name help me...`
- Cursor: `@skill-name in chat`
- Gemini CLI: `Use skill-name...`

---

## 🎯 Essentials & Core

### 🚀 The "Essentials" Starter Pack
## 🚀 The "Essentials" Starter Pack

_For everyone. Install these first._

- [`concise-planning`](../skills/concise-planning/): Always start with a plan.
- [`lint-and-validate`](../skills/lint-and-validate/): Keep your code clean automatically.
- [`git-pushing`](../skills/git-pushing/): Save your work safely.
- [`kaizen`](../skills/kaizen/): Continuous improvement mindset.
- [`systematic-debugging`](../skills/systematic-debugging/): Debug like a pro.
- `concise-planning`: Always start with a plan.
- `lint-and-validate`: Keep your code clean automatically.
- `git-pushing`: Save your work safely.
- `kaizen`: Continuous improvement mindset.

---

## 🛡️ Security & Compliance

### 🛡️ The "Security Engineer" Pack
## 🛡️ The "Security Engineer" Pack

_For pentesting, auditing, and hardening._

- [`ethical-hacking-methodology`](../skills/ethical-hacking-methodology/): The Bible of ethical hacking.
- [`burp-suite-testing`](../skills/burp-suite-testing/): Web vulnerability scanning.
- [`top-web-vulnerabilities`](../skills/top-web-vulnerabilities/): OWASP-aligned vulnerability taxonomy.
- [`linux-privilege-escalation`](../skills/linux-privilege-escalation/): Advanced Linux security assessment.
- [`cloud-penetration-testing`](../skills/cloud-penetration-testing/): AWS/Azure/GCP security.
- [`security-auditor`](../skills/security-auditor/): Comprehensive security audits.
- [`vulnerability-scanner`](../skills/vulnerability-scanner/): Advanced vulnerability analysis.
- `ethical-hacking-methodology`: The Bible of ethical hacking.
- `burp-suite-testing`: Web vulnerability scanning.
- `owasp-top-10`: Check for the most common flaws.
- `linux-privilege-escalation`: Advanced Linux security assessment.
- `cloud-penetration-testing`: AWS/Azure/GCP security.

### 🔐 The "Security Developer" Pack

_For building secure applications._

- [`api-security-best-practices`](../skills/api-security-best-practices/): Secure API design patterns.
- [`auth-implementation-patterns`](../skills/auth-implementation-patterns/): JWT, OAuth2, session management.
- [`backend-security-coder`](../skills/backend-security-coder/): Secure backend coding practices.
- [`frontend-security-coder`](../skills/frontend-security-coder/): XSS prevention and client-side security.
- [`cc-skill-security-review`](../skills/cc-skill-security-review/): Security checklist for features.
- [`pci-compliance`](../skills/pci-compliance/): Payment card security standards.

---

## 🌐 Web Development

### 🌐 The "Web Wizard" Pack
## 🌐 The "Web Wizard" Pack

_For building modern, high-performance web apps._

- [`frontend-design`](../skills/frontend-design/): UI guidelines and aesthetics.
- [`react-best-practices`](../skills/react-best-practices/): React & Next.js performance optimization.
- [`react-patterns`](../skills/react-patterns/): Modern React patterns and principles.
- [`nextjs-best-practices`](../skills/nextjs-best-practices/): Next.js App Router patterns.
- [`tailwind-patterns`](../skills/tailwind-patterns/): Tailwind CSS v4 styling superpowers.
- [`form-cro`](../skills/form-cro/): Optimize your forms for conversion.
- [`seo-audit`](../skills/seo-audit/): Get found on Google.
- `frontend-design`: UI guidelines and aesthetics.
- `react-patterns`: Best practices for React (if available).
- `tailwind-patterns`: Styling superpowers.
- `form-cro`: Optimize your forms for conversion.
- `seo-audit`: Get found on Google.

### 🖌️ The "Web Designer" Pack
## 🤖 The "Agent Architect" Pack

_For pixel-perfect experiences._
_For building AI systems._

- [`ui-ux-pro-max`](../skills/ui-ux-pro-max/): Premium design systems and tokens.
- [`frontend-design`](../skills/frontend-design/): The base layer of aesthetics.
- [`3d-web-experience`](../skills/3d-web-experience/): Three.js & React Three Fiber magic.
- [`canvas-design`](../skills/canvas-design/): Static visuals and posters.
- [`mobile-design`](../skills/mobile-design/): Mobile-first design principles.
- [`scroll-experience`](../skills/scroll-experience/): Immersive scroll-driven experiences.
- `agent-evaluation`: Test your agents.
- `langgraph`: Build stateful agent workflows.
- `mcp-builder`: Create your own tools.
- `prompt-engineering`: Master the art of talking to LLMs.

### ⚡ The "Full-Stack Developer" Pack

_For end-to-end web application development._

- [`senior-fullstack`](../skills/senior-fullstack/): Complete fullstack development guide.
- [`frontend-developer`](../skills/frontend-developer/): React 19+ and Next.js 15+ expertise.
- [`backend-dev-guidelines`](../skills/backend-dev-guidelines/): Node.js/Express/TypeScript patterns.
- [`api-patterns`](../skills/api-patterns/): REST vs GraphQL vs tRPC selection.
- [`database-design`](../skills/database-design/): Schema design and ORM selection.
- [`stripe-integration`](../skills/stripe-integration/): Payments and subscriptions.

---

## 🤖 AI & Agents

### 🤖 The "Agent Architect" Pack

_For building AI systems and autonomous agents._

- [`agent-evaluation`](../skills/agent-evaluation/): Test and benchmark your agents.
- [`langgraph`](../skills/langgraph/): Build stateful agent workflows.
- [`mcp-builder`](../skills/mcp-builder/): Create your own MCP tools.
- [`prompt-engineering`](../skills/prompt-engineering/): Master the art of talking to LLMs.
- [`ai-agents-architect`](../skills/ai-agents-architect/): Design autonomous AI agents.
- [`rag-engineer`](../skills/rag-engineer/): Build RAG systems with vector search.

### 🧠 The "LLM Application Developer" Pack

_For building production LLM applications._

- [`llm-app-patterns`](../skills/llm-app-patterns/): Production-ready LLM patterns.
- [`rag-implementation`](../skills/rag-implementation/): Retrieval-Augmented Generation.
- [`prompt-caching`](../skills/prompt-caching/): Cache strategies for LLM prompts.
- [`context-window-management`](../skills/context-window-management/): Manage LLM context efficiently.
- [`langfuse`](../skills/langfuse/): LLM observability and tracing.

---

## 🎮 Game Development

### 🎮 The "Indie Game Dev" Pack
## 🎮 The "Indie Game Dev" Pack

_For building games with AI assistants._

- [`game-development/game-design`](../skills/game-development/game-design/): Mechanics and loops.
- [`game-development/2d-games`](../skills/game-development/2d-games/): Sprites and physics.
- [`game-development/3d-games`](../skills/game-development/3d-games/): Models and shaders.
- [`unity-developer`](../skills/unity-developer/): Unity 6 LTS development.
- [`godot-gdscript-patterns`](../skills/godot-gdscript-patterns/): Godot 4 GDScript patterns.
- [`algorithmic-art`](../skills/algorithmic-art/): Generate assets with code.
- `game-development/game-design`: Mechanics and loops.
- `game-development/2d-games`: Sprites and physics.
- `game-development/3d-games`: Models and shaders.
- `game-development/unity-csharp`: C# scripting mastery.
- `algorithmic-art`: Generate assets with code.

---

## 🐍 Backend & Languages

### 🐍 The "Python Pro" Pack
## 🐍 The "Python Pro" Pack

_For backend heavyweights and data scientists._

- [`python-pro`](../skills/python-pro/): Master Python 3.12+ with modern features.
- [`python-patterns`](../skills/python-patterns/): Idiomatic Python code.
- [`fastapi-pro`](../skills/fastapi-pro/): High-performance async APIs.
- [`fastapi-templates`](../skills/fastapi-templates/): Production-ready FastAPI projects.
- [`django-pro`](../skills/django-pro/): The battery-included framework.
- [`python-testing-patterns`](../skills/python-testing-patterns/): Comprehensive testing with pytest.
- [`async-python-patterns`](../skills/async-python-patterns/): Python asyncio mastery.
- `python-patterns`: Idiomatic Python code.
- `poetry-manager`: Dependency management that works.
- `pytest-mastery`: Testing frameworks.
- `fastapi-expert`: High-performance APIs.
- `django-guide`: The battery-included framework.

### 🟦 The "TypeScript & JavaScript" Pack

_For modern web development._

- [`typescript-expert`](../skills/typescript-expert/): TypeScript mastery and advanced types.
- [`javascript-pro`](../skills/javascript-pro/): Modern JavaScript with ES6+.
- [`react-best-practices`](../skills/react-best-practices/): React performance optimization.
- [`nodejs-best-practices`](../skills/nodejs-best-practices/): Node.js development principles.
- [`nextjs-app-router-patterns`](../skills/nextjs-app-router-patterns/): Next.js 14+ App Router.

### 🦀 The "Systems Programming" Pack

_For low-level and performance-critical code._

- [`rust-pro`](../skills/rust-pro/): Rust 1.75+ with async patterns.
- [`go-concurrency-patterns`](../skills/go-concurrency-patterns/): Go concurrency mastery.
- [`golang-pro`](../skills/golang-pro/): Go development expertise.
- [`memory-safety-patterns`](../skills/memory-safety-patterns/): Memory-safe programming.
- [`cpp-pro`](../skills/cpp-pro/): Modern C++ development.

---

## 🦄 Product & Business

### 🦄 The "Startup Founder" Pack
## 🦄 The "Startup Founder" Pack

_For building products, not just code._

- [`product-manager-toolkit`](../skills/product-manager-toolkit/): RICE prioritization, PRD templates.
- [`competitive-landscape`](../skills/competitive-landscape/): Competitor analysis.
- [`competitor-alternatives`](../skills/competitor-alternatives/): Create comparison pages.
- [`launch-strategy`](../skills/launch-strategy/): Product launch planning.
- [`copywriting`](../skills/copywriting/): Marketing copy that converts.
- [`stripe-integration`](../skills/stripe-integration/): Get paid from day one.
- `product-requirements-doc`: Define what to build.
- `competitor-analysis`: Know who you are fighting.
- `pitch-deck-creator`: Raise capital (or just explain your idea).
- `landing-page-copy`: Write words that sell.
- `stripe-integration`: Get paid.

### 📊 The "Business Analyst" Pack

_For data-driven decision making._

- [`business-analyst`](../skills/business-analyst/): AI-powered analytics and KPIs.
- [`startup-metrics-framework`](../skills/startup-metrics-framework/): SaaS metrics and unit economics.
- [`startup-financial-modeling`](../skills/startup-financial-modeling/): 3-5 year financial projections.
- [`market-sizing-analysis`](../skills/market-sizing-analysis/): TAM/SAM/SOM calculations.
- [`kpi-dashboard-design`](../skills/kpi-dashboard-design/): Effective KPI dashboards.

### 📈 The "Marketing & Growth" Pack

_For driving user acquisition and retention._

- [`content-creator`](../skills/content-creator/): SEO-optimized marketing content.
- [`seo-audit`](../skills/seo-audit/): Technical SEO health checks.
- [`programmatic-seo`](../skills/programmatic-seo/): Create pages at scale.
- [`analytics-tracking`](../skills/analytics-tracking/): Set up GA4/PostHog correctly.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated learning experiments.
- [`email-sequence`](../skills/email-sequence/): Automated email campaigns.

---

## 🌧️ DevOps & Infrastructure

### 🌧️ The "DevOps & Cloud" Pack
## 🌧️ The "DevOps & Cloud" Pack

_For infrastructure and scaling._

- [`docker-expert`](../skills/docker-expert/): Master containers and multi-stage builds.
- [`aws-serverless`](../skills/aws-serverless/): Serverless on AWS (Lambda, DynamoDB).
- [`kubernetes-architect`](../skills/kubernetes-architect/): K8s architecture and GitOps.
- [`terraform-specialist`](../skills/terraform-specialist/): Infrastructure as Code mastery.
- [`environment-setup-guide`](../skills/environment-setup-guide/): Standardization for teams.
- [`deployment-procedures`](../skills/deployment-procedures/): Safe rollout strategies.
- [`bash-linux`](../skills/bash-linux/): Terminal wizardry.
- `docker-expert`: Master containers and multi-stage builds.
- `aws-serverless`: Go serverless on AWS (Lambda, DynamoDB).
- `environment-setup-guide`: Standardization for teams.
- `deployment-procedures`: Safe rollout strategies.
- `bash-linux`: Terminal wizardry.

### 📊 The "Observability & Monitoring" Pack

_For production reliability._

- [`observability-engineer`](../skills/observability-engineer/): Comprehensive monitoring systems.
- [`distributed-tracing`](../skills/distributed-tracing/): Track requests across microservices.
- [`slo-implementation`](../skills/slo-implementation/): Service Level Objectives.
- [`incident-responder`](../skills/incident-responder/): Rapid incident response.
- [`postmortem-writing`](../skills/postmortem-writing/): Blameless postmortems.
- [`performance-engineer`](../skills/performance-engineer/): Application performance optimization.

---

## 📊 Data & Analytics

### 📊 The "Data & Analytics" Pack
## 📊 The "Data & Analytics" Pack

_For making sense of the numbers._

- [`analytics-tracking`](../skills/analytics-tracking/): Set up GA4/PostHog correctly.
- [`claude-d3js-skill`](../skills/claude-d3js-skill/): Beautiful custom visualizations with D3.js.
- [`sql-pro`](../skills/sql-pro/): Modern SQL with cloud-native databases.
- [`postgres-best-practices`](../skills/postgres-best-practices/): Postgres optimization.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated learning.
- [`database-architect`](../skills/database-architect/): Database design from scratch.
- `analytics-tracking`: Set up GA4/PostHog correctly.
- `d3-viz`: Beautiful custom visualizations.
- `sql-mastery`: Write better queries (Community skill).
- `ab-test-setup`: Validated learning.

### 🔄 The "Data Engineering" Pack

_For building data pipelines._

- [`data-engineer`](../skills/data-engineer/): Data pipeline architecture.
- [`airflow-dag-patterns`](../skills/airflow-dag-patterns/): Apache Airflow DAGs.
- [`dbt-transformation-patterns`](../skills/dbt-transformation-patterns/): Analytics engineering.
- [`vector-database-engineer`](../skills/vector-database-engineer/): Vector databases for RAG.
- [`embedding-strategies`](../skills/embedding-strategies/): Embedding model selection.

---

## 🎨 Creative & Content

### 🎨 The "Creative Director" Pack
## 🎨 The "Creative Director" Pack

_For visuals, content, and branding._

- [`canvas-design`](../skills/canvas-design/): Generate posters and diagrams.
- [`frontend-design`](../skills/frontend-design/): UI aesthetics.
- [`content-creator`](../skills/content-creator/): SEO-optimized blog posts.
- [`copy-editing`](../skills/copy-editing/): Polish your prose.
- [`algorithmic-art`](../skills/algorithmic-art/): Code-generated masterpieces.
- [`interactive-portfolio`](../skills/interactive-portfolio/): Portfolios that land jobs.
- `canvas-design`: Generate posters and diagrams.
- `frontend-design`: UI aesthetics.
- `content-creator`: SEO-optimized blog posts.
- `copy-editing`: Polish your prose.
- `algorithmic-art`: Code-generated masterpieces.

---

## 🐞 Quality Assurance

### 🐞 The "QA & Testing" Pack
## 🐞 The "QA & Testing" Pack

_For breaking things before users do._

- [`test-driven-development`](../skills/test-driven-development/): Red, Green, Refactor.
- [`systematic-debugging`](../skills/systematic-debugging/): Debug like Sherlock Holmes.
- [`browser-automation`](../skills/browser-automation/): End-to-end testing with Playwright.
- [`e2e-testing-patterns`](../skills/e2e-testing-patterns/): Reliable E2E test suites.
- [`ab-test-setup`](../skills/ab-test-setup/): Validated experiments.
- [`code-review-checklist`](../skills/code-review-checklist/): Catch bugs in PRs.
- [`test-fixing`](../skills/test-fixing/): Fix failing tests systematically.
- `test-driven-development`: Red, Green, Refactor.
- `systematic-debugging`: Sherlock Holmes for code.
- `browser-automation`: End-to-end testing with Playwright.
- `ab-test-setup`: Validated experiments.
- `code-review-checklist`: Catch bugs in PRs.

## 🖌️ The "Web Designer" Pack

_For pixel-perfect experiences._

- `ui-ux-pro-max`: Premium design systems/tokens.
- `frontend-design`: The base layer of aesthetics.
- `3d-web-experience`: Three.js & R3F magic.
- `canvas-design`: Static visuals/posters.
- `responsive-layout`: Mobile-first principles.

---

## 🔧 Specialized Packs

### 📱 The "Mobile Developer" Pack

_For iOS, Android, and cross-platform apps._

- [`mobile-developer`](../skills/mobile-developer/): Cross-platform mobile development.
- [`react-native-architecture`](../skills/react-native-architecture/): React Native with Expo.
- [`flutter-expert`](../skills/flutter-expert/): Flutter multi-platform apps.
- [`ios-developer`](../skills/ios-developer/): iOS development with Swift.
- [`app-store-optimization`](../skills/app-store-optimization/): ASO for App Store and Play Store.

### 🔗 The "Integration & APIs" Pack

_For connecting services and building integrations._

- [`stripe-integration`](../skills/stripe-integration/): Payments and subscriptions.
- [`twilio-communications`](../skills/twilio-communications/): SMS, voice, WhatsApp.
- [`hubspot-integration`](../skills/hubspot-integration/): CRM integration.
- [`plaid-fintech`](../skills/plaid-fintech/): Bank account linking and ACH.
- [`algolia-search`](../skills/algolia-search/): Search implementation.

### 🎯 The "Architecture & Design" Pack

_For system design and technical decisions._

- [`senior-architect`](../skills/senior-architect/): Comprehensive software architecture.
- [`architecture-patterns`](../skills/architecture-patterns/): Clean Architecture, DDD, Hexagonal.
- [`microservices-patterns`](../skills/microservices-patterns/): Microservices architecture.
- [`event-sourcing-architect`](../skills/event-sourcing-architect/): Event sourcing and CQRS.
- [`architecture-decision-records`](../skills/architecture-decision-records/): Document technical decisions.

---

## 📚 How to Use Bundles

### Installation

1. **Clone the repository:**
```bash
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

2. **Or use the installer:**
```bash
npx antigravity-awesome-skills
```

### Using Skills

Once installed, reference skills in your AI assistant:

- **Claude Code**: `>> @skill-name help me...`
- **Cursor**: `@skill-name` in chat
- **Gemini CLI**: `Use skill-name...`

### Customizing Bundles

You can create your own bundle by:
1. Copying skill folders to your `.agent/skills/` directory
2. Or referencing multiple skills in a single conversation
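The first option above can be scripted. A minimal sketch, assuming skills live as folders under the cloned repo's `skills/` directory and that copying a folder is all "installation" requires (the skill names in the usage line are examples only):

```bash
# install_bundle SRC DEST SKILL...: copy chosen skill folders into an agent skills directory.
install_bundle() {
  local src="$1" dest="$2"
  shift 2
  mkdir -p "$dest"
  local skill
  for skill in "$@"; do
    # Copy each skill folder recursively; fails loudly if a name is wrong.
    cp -R "$src/$skill" "$dest/"
  done
}
```

Usage might look like `install_bundle ./antigravity-awesome-skills/skills ~/.agent/skills concise-planning git-pushing`.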

---

## 🎓 Learning Paths

### Beginner → Intermediate → Advanced

**Web Development:**
1. Start: `Essentials` → `Web Wizard`
2. Grow: `Full-Stack Developer` → `Architecture & Design`
3. Master: `Observability & Monitoring` → `Security Developer`

**AI/ML:**
1. Start: `Essentials` → `Agent Architect`
2. Grow: `LLM Application Developer` → `Data Engineering`
3. Master: Advanced RAG and agent orchestration

**Security:**
1. Start: `Essentials` → `Security Developer`
2. Grow: `Security Engineer` → Advanced pentesting
3. Master: Red team tactics and threat modeling

---

## 🤝 Contributing

Found a skill that should be in a bundle? Or want to create a new bundle? [Open an issue](https://github.com/sickn33/antigravity-awesome-skills/issues) or submit a PR!

---

## 📖 Related Documentation

- [Getting Started Guide](GETTING_STARTED.md)
- [Full Skill Catalog](../CATALOG.md)
- [Contributing Guide](../CONTRIBUTING.md)

---

_Last updated: January 2026 | Total Skills: 560+ | Total Bundles: 20+_
_To use a bundle, simply copy the skill names into your `.agent/skills` folder or use them with your agent._

@@ -1,23 +1,24 @@
# CI Drift Fix Guide

**Problem**: The failing job is caused by uncommitted changes detected in `README.md`, `skills_index.json`, or catalog files after the update scripts run.
**Problem**: The failing job is caused by uncommitted changes detected in `README.md` or `skills_index.json` after the update scripts run.

**Error**:

```
❌ Detected uncommitted changes produced by registry/readme/catalog scripts.
❌ Detected uncommitted changes in README.md or skills_index.json. Please run scripts locally and commit.
```

**Cause**:
Scripts like `scripts/generate_index.py`, `scripts/update_readme.py`, and `scripts/build-catalog.js` modify `README.md`, `skills_index.json`, `data/catalog.json`, `data/bundles.json`, `data/aliases.json`, and `CATALOG.md`. The workflow expects these files to have no changes after the scripts run. Any differences mean the committed repo is out-of-sync with what the generation scripts produce.
Scripts like `scripts/generate_index.py` and `scripts/update_readme.py` modify `README.md` and `skills_index.json`, but the workflow expects these files to have no changes after the scripts are run. Any differences mean the committed repo is out-of-sync with what the generation scripts produce.
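The drift check itself most likely reduces to a `git diff` over the generated files. A hedged sketch of that mechanism (the actual workflow step may differ; the function name and message wording here are illustrative):

```bash
# check_drift FILE...: fail if any of the given files differ from what is committed.
# Run this after the generation scripts; a non-zero exit reproduces the CI failure.
check_drift() {
  if ! git diff --quiet -- "$@"; then
    echo "❌ Uncommitted changes in generated files; run the scripts and commit." >&2
    return 1
  fi
}
```

For example, `check_drift README.md skills_index.json` succeeds on a clean tree and fails after the scripts rewrite either file.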

**How to Fix (DO THIS EVERY TIME):**

1. Run the **FULL Validation Chain** locally:
1. Run the **FULL Validation Chain** locally to regenerate `README.md` and `skills_index.json`:

```bash
npm run chain
npm run catalog
python3 scripts/validate_skills.py
python3 scripts/generate_index.py
python3 scripts/update_readme.py
```

2. Check for changes:
@@ -29,10 +30,10 @@ Scripts like `scripts/generate_index.py`, `scripts/update_readme.py`, and `scrip

3. Commit and push any updates:
```bash
git add README.md skills_index.json data/catalog.json data/bundles.json data/aliases.json CATALOG.md
git add README.md skills_index.json
git commit -m "chore: sync generated registry files"
git push
```

**Summary**:
Always commit and push all changes produced by the registry, readme, and catalog scripts. This keeps the CI workflow passing by ensuring the repository and generated files are synced.
Always commit and push all changes produced by the registry or readme update scripts. This keeps the CI workflow passing by ensuring the repository and generated files are synced.
|
||||
|
||||
@@ -11,7 +11,7 @@
|
||||
Skills are specialized instruction files that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on-demand.
|
||||
**Simple analogy:** Just like you might consult different experts (a lawyer, a doctor, a mechanic), these skills let your AI become an expert in different areas when you need them.
|
||||
|
||||
### Do I need to install all 624+ skills?
|
||||
### Do I need to install all 552+ skills?
|
||||
|
||||
**No!** When you clone the repository, all skills are available, but your AI only loads them when you explicitly invoke them with `@skill-name`.
|
||||
It's like having a library - all books are there, but you only read the ones you need.
|
||||
@@ -41,7 +41,7 @@ The skill files themselves are stored locally on your computer, but your AI assi
|
||||
|
||||
---
|
||||
|
||||
## 🔒 Security & Trust (V4 Update)
|
||||
## 🔒 Security & Trust (V3 Update)
|
||||
|
||||
### What do the Risk Labels mean?
|
||||
|
||||
@@ -156,7 +156,7 @@ Include:
|
||||
|
||||
### My PR failed "Quality Bar" check. Why?
|
||||
|
||||
V4 introduces automated quality control. Your skill might be missing:
|
||||
V3 introduces automated quality control. Your skill might be missing:
|
||||
|
||||
1. A valid `description`.
|
||||
2. Usage examples.
|
||||
|
||||
@@ -1,4 +1,4 @@

# Getting Started with Antigravity Awesome Skills (V4)
# Getting Started with Antigravity Awesome Skills (V3)

**New here? This guide will help you supercharge your AI Agent in 5 minutes.**

@@ -15,25 +15,15 @@ AI Agents (like **Claude Code**, **Gemini**, **Cursor**) are smart, but they lac

## ⚡️ Quick Start: The "Starter Packs"

Don't panic about the 624+ skills. You don't need them all at once.
Don't panic about the 560+ skills. You don't need them all at once.
We have curated **Starter Packs** to get you running immediately.

You **install the full repo once** (npx or clone); Starter Packs are curated lists to help you **pick which skills to use** by role (e.g. Web Wizard, Hacker Pack)—they are not a different way to install.

### 1. Install the Repo

**Option A — npx (easiest):**
Copy the skills to your agent's folder:

```bash
npx antigravity-awesome-skills
```

This clones to `~/.agent/skills` by default. Use `--cursor`, `--claude`, or `--gemini` to install for a specific tool, or `--path <dir>` for a custom location. Run `npx antigravity-awesome-skills --help` for details.

**Option B — git clone:**

```bash
# Universal (works for most agents)
# Universal Installation (works for most agents)
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```
@@ -86,7 +76,7 @@ Once installed, just talk to your AI naturally.

---

## 🛡️ Trust & Safety (New in V4)
## 🛡️ Trust & Safety (New in V3)

We classify skills so you know what you're running:

@@ -100,8 +90,8 @@ _Check the [Skill Catalog](../CATALOG.md) for the full list._

## ❓ FAQ

**Q: Do I need to install all 624 skills?**
A: You clone the whole repo once; your AI only _reads_ the skills you invoke (or that are relevant), so it stays lightweight. **Starter Packs** in [BUNDLES.md](BUNDLES.md) are curated lists to help you discover the right skills for your role—they don't change how you install.
**Q: Do I need to install all 560 skills?**
A: You clone the whole repo, but your AI only _reads_ the ones you ask for (or that are relevant). It's lightweight!

**Q: Can I make my own skills?**
A: Yes! Use the **@skill-creator** skill to build your own.
@@ -57,9 +57,8 @@ We also categorize skills by who maintains them:

## How to Validate Your Skill

The canonical validator is `scripts/validate_skills.py`. Run `npm run validate` (or `npm run validate:strict`) before submitting a PR:
Run the validator script before submitting a PR:

```bash
npm run validate        # soft mode (warnings only)
npm run validate:strict # strict mode (CI uses this)
python3 scripts/validate_skills.py --strict
```
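As a rough illustration of why the maintenance guide insists that skill counts stay in sync, a script along these lines could flag README claims that drift from the real number of skill directories. The paths and the claim regex here are assumptions for the sketch, not the repo's actual tooling:

```python
from pathlib import Path
import re

# Hypothetical sketch: count skill directories under skills/ and compare
# against numeric claims like "560+ Agentic Skills" in README text.
def count_skills(repo_root: str) -> int:
    skills_dir = Path(repo_root) / "skills"
    return sum(1 for p in skills_dir.iterdir() if p.is_dir())

def stale_claims(readme_text: str, actual: int) -> list[str]:
    # Assumed claim pattern; the real scripts may match differently.
    claims = re.findall(r"(\d+)\+? (?:Agentic )?[Ss]kills", readme_text)
    return [c for c in claims if int(c) != actual]
```

Running something like this before committing would catch the 560/552/624 drift visible throughout this diff.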
@@ -14,72 +14,6 @@ If you recognize your work here and it is not properly attributed, please open a

| `react-patterns` | [React Docs](https://react.dev/) | CC-BY | Official patterns. |
| **All Official Skills** | [Anthropic / Google / OpenAI] | Proprietary | Usage encouraged by vendors. |

## Skills from VoltAgent/awesome-agent-skills

The following skills were added from the curated collection at [VoltAgent/awesome-agent-skills](https://github.com/VoltAgent/awesome-agent-skills):

### Official Team Skills

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `vercel-deploy-claimable` | [Vercel Labs](https://github.com/vercel-labs/agent-skills) | MIT | Official Vercel skill |
| `design-md` | [Google Labs (Stitch)](https://github.com/google-labs-code/stitch-skills) | Compatible | Google Labs Stitch skills |
| `hugging-face-cli`, `hugging-face-jobs` | [Hugging Face](https://github.com/huggingface/skills) | Compatible | Official Hugging Face skills |
| `culture-index`, `fix-review`, `sharp-edges` | [Trail of Bits](https://github.com/trailofbits/skills) | Compatible | Security skills from Trail of Bits |
| `expo-deployment`, `upgrading-expo` | [Expo](https://github.com/expo/skills) | Compatible | Official Expo skills |
| `commit`, `create-pr`, `find-bugs`, `iterate-pr` | [Sentry](https://github.com/getsentry/skills) | Compatible | Sentry dev team skills |
| `using-neon` | [Neon](https://github.com/neondatabase/agent-skills) | Compatible | Neon Postgres best practices |
| `fal-audio`, `fal-generate`, `fal-image-edit`, `fal-platform`, `fal-upscale`, `fal-workflow` | [fal.ai Community](https://github.com/fal-ai-community/skills) | Compatible | fal.ai AI model skills |

### Community Skills

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `automate-whatsapp`, `observe-whatsapp` | [gokapso](https://github.com/gokapso/agent-skills) | Compatible | WhatsApp automation skills |
| `readme` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | README generation |
| `screenshots` | [Shpigford](https://github.com/Shpigford/skills) | Compatible | Marketing screenshots |
| `aws-skills` | [zxkane](https://github.com/zxkane/aws-skills) | Compatible | AWS development patterns |
| `deep-research` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Gemini Deep Research Agent |
| `ffuf-claude-skill` | [jthack](https://github.com/jthack/ffuf_claude_skill) | Compatible | Web fuzzing with ffuf |
| `ui-skills` | [ibelick](https://github.com/ibelick/ui-skills) | Compatible | UI development constraints |
| `vexor` | [scarletkc](https://github.com/scarletkc/vexor) | Compatible | Vector-powered CLI |
| `pypict-skill` | [omkamal](https://github.com/omkamal/pypict-claude-skill) | Compatible | Pairwise test generation |
| `makepad-skills` | [ZhangHanDong](https://github.com/ZhangHanDong/makepad-skills) | Compatible | Makepad UI development |
| `swiftui-expert-skill` | [AvdLee](https://github.com/AvdLee/SwiftUI-Agent-Skill) | Compatible | SwiftUI best practices |
| `threejs-skills` | [CloudAI-X](https://github.com/CloudAI-X/threejs-skills) | Compatible | Three.js 3D experiences |
| `claude-scientific-skills` | [K-Dense-AI](https://github.com/K-Dense-AI/claude-scientific-skills) | Compatible | Scientific research skills |
| `claude-win11-speckit-update-skill` | [NotMyself](https://github.com/NotMyself/claude-win11-speckit-update-skill) | Compatible | Windows 11 management |
| `imagen` | [sanjay3290](https://github.com/sanjay3290/ai-skills) | Compatible | Google Gemini image generation |
| `security-bluebook-builder` | [SHADOWPR0](https://github.com/SHADOWPR0/security-bluebook-builder) | Compatible | Security documentation |
| `claude-ally-health` | [huifer](https://github.com/huifer/Claude-Ally-Health) | Compatible | Health assistant |
| `clarity-gate` | [frmoretto](https://github.com/frmoretto/clarity-gate) | Compatible | RAG quality verification |
| `n8n-code-python`, `n8n-mcp-tools-expert`, `n8n-node-configuration` | [czlonkowski](https://github.com/czlonkowski/n8n-skills) | Compatible | n8n automation skills |
| `varlock-claude-skill` | [wrsmith108](https://github.com/wrsmith108/varlock-claude-skill) | Compatible | Secure environment variables |
| `beautiful-prose` | [SHADOWPR0](https://github.com/SHADOWPR0/beautiful_prose) | Compatible | Writing style guide |
| `claude-speed-reader` | [SeanZoR](https://github.com/SeanZoR/claude-speed-reader) | Compatible | Speed reading tool |
| `skill-seekers` | [yusufkaraaslan](https://github.com/yusufkaraaslan/Skill_Seekers) | Compatible | Skill conversion tool |

- **frontend-slides** - [zarazhangrui](https://github.com/zarazhangrui/frontend-slides)
- **linear-claude-skill** - [wrsmith108](https://github.com/wrsmith108/linear-claude-skill)
- **skill-rails-upgrade** - [robzolkos](https://github.com/robzolkos/skill-rails-upgrade)
- **context-fundamentals** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-degradation** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-compression** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **context-optimization** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **multi-agent-patterns** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **tool-design** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **evaluation** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **memory-systems** - [muratcankoylan](https://github.com/muratcankoylan/Agent-Skills-for-Context-Engineering)
- **terraform-skill** - [antonbabenko](https://github.com/antonbabenko/terraform-skill)

## Skills from whatiskadudoing/fp-ts-skills (v4.4.0)

| Skill | Original Source | License | Notes |
| :---- | :-------------- | :------ | :---- |
| `fp-ts-pragmatic` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Pragmatic fp-ts guide – pipe, Option, Either, TaskEither |
| `fp-ts-react` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | fp-ts with React 18/19 and Next.js |
| `fp-ts-errors` | [whatiskadudoing/fp-ts-skills](https://github.com/whatiskadudoing/fp-ts-skills) | Compatible | Type-safe error handling with Either and TaskEither |

## License Policy

- **Code**: All original code in this repository is **MIT**.
@@ -11,7 +11,7 @@

Skills là các tệp hướng dẫn chuyên biệt dạy cho các trợ lý AI cách xử lý những tác vụ cụ thể. Hãy coi chúng như những mô-đun kiến thức chuyên gia mà AI của bạn có thể tải khi cần.
**Một so sánh đơn giản:** Giống như việc bạn tham khảo ý kiến của các chuyên gia khác nhau (luật sư, bác sĩ, thợ máy), những kỹ năng này giúp AI của bạn trở thành chuyên gia trong các lĩnh vực khác nhau khi bạn cần.

### Tôi có cần phải cài đặt tất cả hơn 560 skills không?
### Tôi có cần phải cài đặt tất cả hơn 552 skills không?

**Không!** Khi bạn clone (tải bản sao) repository này, tất cả các kỹ năng đều có sẵn, nhưng AI của bạn chỉ tải chúng khi bạn yêu cầu rõ ràng bằng lệnh `@ten-skill`.
Nó giống như việc sở hữu một thư viện - tất cả sách đều ở đó, nhưng bạn chỉ đọc những cuốn bạn cần thôi.

@@ -41,7 +41,7 @@ Bản thân các file skill được lưu trữ cục bộ trên máy tính củ

---

## 🔒 Bảo mật & Tin cậy (Cập nhật V4)
## 🔒 Bảo mật & Tin cậy (Cập nhật V3)

### Các Nhãn rủi ro (Risk Labels) có ý nghĩa gì?
@@ -1,4 +1,4 @@

# Hướng dẫn Bắt đầu với Antigravity Awesome Skills (V4)
# Hướng dẫn Bắt đầu với Antigravity Awesome Skills (V3)

**Bạn mới đến đây? Hướng dẫn này sẽ giúp bạn tăng cường sức mạnh cho trợ lý AI của mình chỉ trong 5 phút.**

@@ -15,7 +15,7 @@ Các trợ lý AI (như **Claude Code**, **Gemini**, **Cursor**) rất thông mi

## ⚡️ Khởi động nhanh: Các "Gói khởi đầu" (Starter Packs)

Đừng lo lắng về con số hơn 560 kỹ năng. Bạn không cần dùng tất cả chúng cùng một lúc.
Đừng lo lắng về con số hơn 552 kỹ năng. Bạn không cần dùng tất cả chúng cùng một lúc.
Chúng tôi đã tuyển chọn các **Gói khởi đầu** để bạn có thể bắt đầu sử dụng ngay lập tức.

### 1. Cài đặt Repository

@@ -76,7 +76,7 @@ Sau khi cài đặt, bạn chỉ cần trò chuyện với AI một cách tự n

---

## 🛡️ Sự tin cậy & An toàn (Mới trong bản V4)
## 🛡️ Sự tin cậy & An toàn (Mới trong bản V3)

Chúng tôi phân loại các kỹ năng để bạn biết mình đang chạy những gì:

@@ -90,7 +90,7 @@ _Kiểm tra [Danh mục Skill (Skill Catalog)](../CATALOG.vi.md) để xem danh

## ❓ FAQ

**H: Tôi có cần cài đặt tất cả 560 kỹ năng không?**
**H: Tôi có cần cài đặt tất cả 552 kỹ năng không?**
Đ: Bạn tải toàn bộ repo về, nhưng AI của bạn chỉ _đọc_ những kỹ năng bạn yêu cầu (hoặc những kỹ năng có liên quan). Nó rất nhẹ!

**H: Tôi có thể tự tạo kỹ năng cho riêng mình không?**
@@ -1,6 +1,6 @@

# 🌌 Antigravity Awesome Skills: 560+ Kỹ năng (Skills) cho Claude Code, Gemini CLI, Cursor, Copilot và nhiều hơn nữa
# 🌌 Antigravity Awesome Skills: 552+ Kỹ năng (Skills) cho Claude Code, Gemini CLI, Cursor, Copilot và nhiều hơn nữa

> **Bộ sưu tập tối ưu gồm hơn 560 Kỹ năng Phổ quát cho các Trợ lý Lập trình AI — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**
> **Bộ sưu tập tối ưu gồm hơn 552 Kỹ năng Phổ quát cho các Trợ lý Lập trình AI — Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode**

[](https://opensource.org/licenses/MIT)
[](https://claude.ai)

@@ -11,7 +11,7 @@

[](https://github.com/opencode-ai/opencode)
[](https://github.com/sickn33/antigravity-awesome-skills)

**Antigravity Awesome Skills** là một thư viện được tuyển chọn và kiểm chứng kỹ lưỡng với **560 kỹ năng hiệu suất cao** được thiết kế để hoạt động mượt mà trên tất cả các trợ lý lập trình AI lớn:
**Antigravity Awesome Skills** là một thư viện được tuyển chọn và kiểm chứng kỹ lưỡng với **552 kỹ năng hiệu suất cao** được thiết kế để hoạt động mượt mà trên tất cả các trợ lý lập trình AI lớn:

- 🟣 **Claude Code** (Anthropic CLI)
- 🔵 **Gemini CLI** (Google DeepMind)

@@ -56,7 +56,7 @@ Repository được tổ chức thành các lĩnh vực chuyên biệt để bi

[Xem các Gói khởi đầu tại docs/vietnamese/BUNDLES.md](docs/vietnamese/BUNDLES.vi.md) để tìm bộ công cụ hoàn hảo cho vai trò của bạn.

## Duyệt hơn 560 Kỹ năng
## Duyệt hơn 552 Kỹ năng

Chúng tôi đã chuyển danh sách đầy đủ các kỹ năng sang một danh mục riêng biệt để giữ cho file README này gọn gàng.

@@ -2,7 +2,7 @@

Tài liệu này dùng để theo dõi tiến độ dịch thuật toàn bộ repository `antigravity-awesome-skills` sang tiếng Việt.

**Mục tiêu:** Dịch toàn bộ 560+ kỹ năng và tài liệu hướng dẫn.
**Mục tiêu:** Dịch toàn bộ 552+ kỹ năng và tài liệu hướng dẫn.
**Quy tắc:**
1. Giữ nguyên cấu trúc thư mục gốc.
2. File dịch được lưu tại `docs/vietnamese/skills/<category>/<skill-name>.vi.md`.
node_modules/.bin/yaml (generated, vendored, symbolic link, 1 line)

@@ -0,0 +1 @@
../yaml/bin.mjs
package-lock.json → node_modules/.package-lock.json (generated, vendored, 12 changed lines)

@@ -1,20 +1,8 @@
{
  "name": "antigravity-awesome-skills",
  "version": "4.2.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "antigravity-awesome-skills",
      "version": "4.2.0",
      "license": "MIT",
      "dependencies": {
        "yaml": "^2.8.2"
      },
      "bin": {
        "antigravity-awesome-skills": "bin/install.js"
      }
    },
    "node_modules/yaml": {
      "version": "2.8.2",
      "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.8.2.tgz",
node_modules/yaml/LICENSE (generated, vendored, new file, 13 lines)

@@ -0,0 +1,13 @@
Copyright Eemeli Aro <eemeli@gmail.com>

Permission to use, copy, modify, and/or distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright notice
and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
THIS SOFTWARE.
node_modules/yaml/README.md (generated, vendored, new file, 172 lines)

@@ -0,0 +1,172 @@
# YAML <a href="https://www.npmjs.com/package/yaml"><img align="right" src="https://badge.fury.io/js/yaml.svg" title="npm package" /></a>

`yaml` is a definitive library for [YAML](https://yaml.org/), the human friendly data serialization standard.
This library:

- Supports both YAML 1.1 and YAML 1.2 and all common data schemas,
- Passes all of the [yaml-test-suite](https://github.com/yaml/yaml-test-suite) tests,
- Can accept any string as input without throwing, parsing as much YAML out of it as it can, and
- Supports parsing, modifying, and writing YAML comments and blank lines.

The library is released under the ISC open source license, and the code is [available on GitHub](https://github.com/eemeli/yaml/).
It has no external dependencies and runs on Node.js as well as modern browsers.

For the purposes of versioning, any changes that break any of the documented endpoints or APIs will be considered semver-major breaking changes.
Undocumented library internals may change between minor versions, and previous APIs may be deprecated (but not removed).

The minimum supported TypeScript version of the included typings is 3.9;
for use in earlier versions you may need to set `skipLibCheck: true` in your config.
This requirement may be updated between minor versions of the library.

For more information, see the project's documentation site: [**eemeli.org/yaml**](https://eemeli.org/yaml/)

For build instructions and contribution guidelines, see [docs/CONTRIBUTING.md](docs/CONTRIBUTING.md).

To install:

```sh
npm install yaml
# or
deno add jsr:@eemeli/yaml
```

**Note:** These docs are for `yaml@2`. For v1, see the [v1.10.0 tag](https://github.com/eemeli/yaml/tree/v1.10.0) for the source and [eemeli.org/yaml/v1](https://eemeli.org/yaml/v1/) for the documentation.

## API Overview

The API provided by `yaml` has three layers, depending on how deep you need to go: [Parse & Stringify](https://eemeli.org/yaml/#parse-amp-stringify), [Documents](https://eemeli.org/yaml/#documents), and the underlying [Lexer/Parser/Composer](https://eemeli.org/yaml/#parsing-yaml).
The first has the simplest API and "just works", the second gets you all the bells and whistles supported by the library along with a decent [AST](https://eemeli.org/yaml/#content-nodes), and the third lets you get progressively closer to YAML source, if that's your thing.

A [command-line tool](https://eemeli.org/yaml/#command-line-tool) is also included.

### Parse & Stringify

```js
import { parse, stringify } from 'yaml'
```

- [`parse(str, reviver?, options?): value`](https://eemeli.org/yaml/#yaml-parse)
- [`stringify(value, replacer?, options?): string`](https://eemeli.org/yaml/#yaml-stringify)

### Documents

<!-- prettier-ignore -->
```js
import {
  Document,
  isDocument,
  parseAllDocuments,
  parseDocument
} from 'yaml'
```

- [`Document`](https://eemeli.org/yaml/#documents)
  - [`constructor(value, replacer?, options?)`](https://eemeli.org/yaml/#creating-documents)
  - [`#contents`](https://eemeli.org/yaml/#content-nodes)
  - [`#directives`](https://eemeli.org/yaml/#stream-directives)
  - [`#errors`](https://eemeli.org/yaml/#errors)
  - [`#warnings`](https://eemeli.org/yaml/#errors)
- [`isDocument(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`parseAllDocuments(str, options?): Document[]`](https://eemeli.org/yaml/#parsing-documents)
- [`parseDocument(str, options?): Document`](https://eemeli.org/yaml/#parsing-documents)

### Content Nodes

<!-- prettier-ignore -->
```js
import {
  isAlias, isCollection, isMap, isNode,
  isPair, isScalar, isSeq, Scalar,
  visit, visitAsync, YAMLMap, YAMLSeq
} from 'yaml'
```

- [`isAlias(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isCollection(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isMap(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isNode(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isPair(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isScalar(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`isSeq(foo): boolean`](https://eemeli.org/yaml/#identifying-node-types)
- [`new Scalar(value)`](https://eemeli.org/yaml/#scalar-values)
- [`new YAMLMap()`](https://eemeli.org/yaml/#collections)
- [`new YAMLSeq()`](https://eemeli.org/yaml/#collections)
- [`doc.createAlias(node, name?): Alias`](https://eemeli.org/yaml/#creating-nodes)
- [`doc.createNode(value, options?): Node`](https://eemeli.org/yaml/#creating-nodes)
- [`doc.createPair(key, value): Pair`](https://eemeli.org/yaml/#creating-nodes)
- [`visit(node, visitor)`](https://eemeli.org/yaml/#finding-and-modifying-nodes)
- [`visitAsync(node, visitor)`](https://eemeli.org/yaml/#finding-and-modifying-nodes)

### Parsing YAML

```js
import { Composer, Lexer, Parser } from 'yaml'
```

- [`new Lexer().lex(src)`](https://eemeli.org/yaml/#lexer)
- [`new Parser(onNewLine?).parse(src)`](https://eemeli.org/yaml/#parser)
- [`new Composer(options?).compose(tokens)`](https://eemeli.org/yaml/#composer)

## YAML.parse

```yaml
# file.yml
YAML:
  - A human-readable data serialization language
  - https://en.wikipedia.org/wiki/YAML
yaml:
  - A complete JavaScript implementation
  - https://www.npmjs.com/package/yaml
```

```js
import fs from 'fs'
import YAML from 'yaml'

YAML.parse('3.14159')
// 3.14159

YAML.parse('[ true, false, maybe, null ]\n')
// [ true, false, 'maybe', null ]

const file = fs.readFileSync('./file.yml', 'utf8')
YAML.parse(file)
// { YAML:
//     [ 'A human-readable data serialization language',
//       'https://en.wikipedia.org/wiki/YAML' ],
//   yaml:
//     [ 'A complete JavaScript implementation',
//       'https://www.npmjs.com/package/yaml' ] }
```

## YAML.stringify

```js
import YAML from 'yaml'

YAML.stringify(3.14159)
// '3.14159\n'

YAML.stringify([true, false, 'maybe', null])
// `- true
// - false
// - maybe
// - null
// `

YAML.stringify({ number: 3, plain: 'string', block: 'two\nlines\n' })
// `number: 3
// plain: string
// block: |
//   two
//   lines
// `
```

---

Browser testing provided by:

<a href="https://www.browserstack.com/open-source">
<img width=200 src="https://eemeli.org/yaml/images/browserstack.svg" alt="BrowserStack" />
</a>
node_modules/yaml/bin.mjs (generated, vendored, executable file, 11 lines)

@@ -0,0 +1,11 @@
#!/usr/bin/env node

import { UserError, cli, help } from './dist/cli.mjs'

cli(process.stdin, error => {
  if (error instanceof UserError) {
    if (error.code === UserError.ARGS) console.error(`${help}\n`)
    console.error(error.message)
    process.exitCode = error.code
  } else if (error) throw error
})
node_modules/yaml/browser/dist/compose/compose-collection.js (generated, vendored, new file, 88 lines)

@@ -0,0 +1,88 @@
import { isNode } from '../nodes/identity.js';
import { Scalar } from '../nodes/Scalar.js';
import { YAMLMap } from '../nodes/YAMLMap.js';
import { YAMLSeq } from '../nodes/YAMLSeq.js';
import { resolveBlockMap } from './resolve-block-map.js';
import { resolveBlockSeq } from './resolve-block-seq.js';
import { resolveFlowCollection } from './resolve-flow-collection.js';

function resolveCollection(CN, ctx, token, onError, tagName, tag) {
    const coll = token.type === 'block-map'
        ? resolveBlockMap(CN, ctx, token, onError, tag)
        : token.type === 'block-seq'
            ? resolveBlockSeq(CN, ctx, token, onError, tag)
            : resolveFlowCollection(CN, ctx, token, onError, tag);
    const Coll = coll.constructor;
    // If we got a tagName matching the class, or the tag name is '!',
    // then use the tagName from the node class used to create it.
    if (tagName === '!' || tagName === Coll.tagName) {
        coll.tag = Coll.tagName;
        return coll;
    }
    if (tagName)
        coll.tag = tagName;
    return coll;
}
function composeCollection(CN, ctx, token, props, onError) {
    const tagToken = props.tag;
    const tagName = !tagToken
        ? null
        : ctx.directives.tagName(tagToken.source, msg => onError(tagToken, 'TAG_RESOLVE_FAILED', msg));
    if (token.type === 'block-seq') {
        const { anchor, newlineAfterProp: nl } = props;
        const lastProp = anchor && tagToken
            ? anchor.offset > tagToken.offset
                ? anchor
                : tagToken
            : (anchor ?? tagToken);
        if (lastProp && (!nl || nl.offset < lastProp.offset)) {
            const message = 'Missing newline after block sequence props';
            onError(lastProp, 'MISSING_CHAR', message);
        }
    }
    const expType = token.type === 'block-map'
        ? 'map'
        : token.type === 'block-seq'
            ? 'seq'
            : token.start.source === '{'
                ? 'map'
                : 'seq';
    // shortcut: check if it's a generic YAMLMap or YAMLSeq
    // before jumping into the custom tag logic.
    if (!tagToken ||
        !tagName ||
        tagName === '!' ||
        (tagName === YAMLMap.tagName && expType === 'map') ||
        (tagName === YAMLSeq.tagName && expType === 'seq')) {
        return resolveCollection(CN, ctx, token, onError, tagName);
    }
    let tag = ctx.schema.tags.find(t => t.tag === tagName && t.collection === expType);
    if (!tag) {
        const kt = ctx.schema.knownTags[tagName];
        if (kt?.collection === expType) {
            ctx.schema.tags.push(Object.assign({}, kt, { default: false }));
            tag = kt;
        }
        else {
            if (kt) {
                onError(tagToken, 'BAD_COLLECTION_TYPE', `${kt.tag} used for ${expType} collection, but expects ${kt.collection ?? 'scalar'}`, true);
            }
            else {
                onError(tagToken, 'TAG_RESOLVE_FAILED', `Unresolved tag: ${tagName}`, true);
            }
            return resolveCollection(CN, ctx, token, onError, tagName);
        }
    }
    const coll = resolveCollection(CN, ctx, token, onError, tagName, tag);
    const res = tag.resolve?.(coll, msg => onError(tagToken, 'TAG_RESOLVE_FAILED', msg), ctx.options) ?? coll;
    const node = isNode(res)
        ? res
        : new Scalar(res);
    node.range = coll.range;
    node.tag = tagName;
    if (tag?.format)
        node.format = tag.format;
    return node;
}

export { composeCollection };
node_modules/yaml/browser/dist/compose/compose-doc.js (generated, vendored, new file, 43 lines)

@@ -0,0 +1,43 @@
import { Document } from '../doc/Document.js';
import { composeNode, composeEmptyNode } from './compose-node.js';
import { resolveEnd } from './resolve-end.js';
import { resolveProps } from './resolve-props.js';

function composeDoc(options, directives, { offset, start, value, end }, onError) {
    const opts = Object.assign({ _directives: directives }, options);
    const doc = new Document(undefined, opts);
    const ctx = {
        atKey: false,
        atRoot: true,
        directives: doc.directives,
        options: doc.options,
        schema: doc.schema
    };
    const props = resolveProps(start, {
        indicator: 'doc-start',
        next: value ?? end?.[0],
        offset,
        onError,
        parentIndent: 0,
        startOnNewline: true
    });
    if (props.found) {
        doc.directives.docStart = true;
        if (value &&
            (value.type === 'block-map' || value.type === 'block-seq') &&
            !props.hasNewline)
            onError(props.end, 'MISSING_CHAR', 'Block collection cannot start on same line with directives-end marker');
    }
    // @ts-expect-error If Contents is set, let's trust the user
    doc.contents = value
        ? composeNode(ctx, value, props, onError)
        : composeEmptyNode(ctx, props.end, start, null, props, onError);
    const contentEnd = doc.contents.range[2];
    const re = resolveEnd(end, contentEnd, false, onError);
    if (re.comment)
        doc.comment = re.comment;
    doc.range = [offset, contentEnd, re.offset];
    return doc;
}

export { composeDoc };
node_modules/yaml/browser/dist/compose/compose-node.js (generated, vendored, new file, 102 lines)

@@ -0,0 +1,102 @@
import { Alias } from '../nodes/Alias.js';
import { isScalar } from '../nodes/identity.js';
import { composeCollection } from './compose-collection.js';
import { composeScalar } from './compose-scalar.js';
import { resolveEnd } from './resolve-end.js';
import { emptyScalarPosition } from './util-empty-scalar-position.js';

const CN = { composeNode, composeEmptyNode };
function composeNode(ctx, token, props, onError) {
    const atKey = ctx.atKey;
    const { spaceBefore, comment, anchor, tag } = props;
    let node;
    let isSrcToken = true;
    switch (token.type) {
        case 'alias':
            node = composeAlias(ctx, token, onError);
            if (anchor || tag)
                onError(token, 'ALIAS_PROPS', 'An alias node must not specify any properties');
            break;
        case 'scalar':
        case 'single-quoted-scalar':
        case 'double-quoted-scalar':
        case 'block-scalar':
            node = composeScalar(ctx, token, tag, onError);
            if (anchor)
                node.anchor = anchor.source.substring(1);
            break;
        case 'block-map':
        case 'block-seq':
        case 'flow-collection':
            node = composeCollection(CN, ctx, token, props, onError);
            if (anchor)
                node.anchor = anchor.source.substring(1);
            break;
        default: {
            const message = token.type === 'error'
                ? token.message
                : `Unsupported token (type: ${token.type})`;
            onError(token, 'UNEXPECTED_TOKEN', message);
            node = composeEmptyNode(ctx, token.offset, undefined, null, props, onError);
            isSrcToken = false;
        }
    }
    if (anchor && node.anchor === '')
        onError(anchor, 'BAD_ALIAS', 'Anchor cannot be an empty string');
    if (atKey &&
        ctx.options.stringKeys &&
        (!isScalar(node) ||
            typeof node.value !== 'string' ||
            (node.tag && node.tag !== 'tag:yaml.org,2002:str'))) {
        const msg = 'With stringKeys, all keys must be strings';
        onError(tag ?? token, 'NON_STRING_KEY', msg);
    }
    if (spaceBefore)
        node.spaceBefore = true;
    if (comment) {
        if (token.type === 'scalar' && token.source === '')
            node.comment = comment;
        else
            node.commentBefore = comment;
    }
    // @ts-expect-error Type checking misses meaning of isSrcToken
    if (ctx.options.keepSourceTokens && isSrcToken)
        node.srcToken = token;
    return node;
}
function composeEmptyNode(ctx, offset, before, pos, { spaceBefore, comment, anchor, tag, end }, onError) {
    const token = {
        type: 'scalar',
        offset: emptyScalarPosition(offset, before, pos),
        indent: -1,
        source: ''
    };
    const node = composeScalar(ctx, token, tag, onError);
    if (anchor) {
        node.anchor = anchor.source.substring(1);
        if (node.anchor === '')
            onError(anchor, 'BAD_ALIAS', 'Anchor cannot be an empty string');
    }
    if (spaceBefore)
        node.spaceBefore = true;
    if (comment) {
        node.comment = comment;
        node.range[2] = end;
    }
    return node;
}
function composeAlias({ options }, { offset, source, end }, onError) {
    const alias = new Alias(source.substring(1));
    if (alias.source === '')
        onError(offset, 'BAD_ALIAS', 'Alias cannot be an empty string');
    if (alias.source.endsWith(':'))
        onError(offset + source.length - 1, 'BAD_ALIAS', 'Alias ending in : is ambiguous', true);
    const valueEnd = offset + source.length;
    const re = resolveEnd(end, valueEnd, options.strict, onError);
    alias.range = [offset, valueEnd, re.offset];
    if (re.comment)
        alias.comment = re.comment;
    return alias;
}

export { composeEmptyNode, composeNode };
86
node_modules/yaml/browser/dist/compose/compose-scalar.js
generated
vendored
Normal file
@@ -0,0 +1,86 @@
import { isScalar, SCALAR } from '../nodes/identity.js';
import { Scalar } from '../nodes/Scalar.js';
import { resolveBlockScalar } from './resolve-block-scalar.js';
import { resolveFlowScalar } from './resolve-flow-scalar.js';

function composeScalar(ctx, token, tagToken, onError) {
    const { value, type, comment, range } = token.type === 'block-scalar'
        ? resolveBlockScalar(ctx, token, onError)
        : resolveFlowScalar(token, ctx.options.strict, onError);
    const tagName = tagToken
        ? ctx.directives.tagName(tagToken.source, msg => onError(tagToken, 'TAG_RESOLVE_FAILED', msg))
        : null;
    let tag;
    if (ctx.options.stringKeys && ctx.atKey) {
        tag = ctx.schema[SCALAR];
    }
    else if (tagName)
        tag = findScalarTagByName(ctx.schema, value, tagName, tagToken, onError);
    else if (token.type === 'scalar')
        tag = findScalarTagByTest(ctx, value, token, onError);
    else
        tag = ctx.schema[SCALAR];
    let scalar;
    try {
        const res = tag.resolve(value, msg => onError(tagToken ?? token, 'TAG_RESOLVE_FAILED', msg), ctx.options);
        scalar = isScalar(res) ? res : new Scalar(res);
    }
    catch (error) {
        const msg = error instanceof Error ? error.message : String(error);
        onError(tagToken ?? token, 'TAG_RESOLVE_FAILED', msg);
        scalar = new Scalar(value);
    }
    scalar.range = range;
    scalar.source = value;
    if (type)
        scalar.type = type;
    if (tagName)
        scalar.tag = tagName;
    if (tag.format)
        scalar.format = tag.format;
    if (comment)
        scalar.comment = comment;
    return scalar;
}
function findScalarTagByName(schema, value, tagName, tagToken, onError) {
    if (tagName === '!')
        return schema[SCALAR]; // non-specific tag
    const matchWithTest = [];
    for (const tag of schema.tags) {
        if (!tag.collection && tag.tag === tagName) {
            if (tag.default && tag.test)
                matchWithTest.push(tag);
            else
                return tag;
        }
    }
    for (const tag of matchWithTest)
        if (tag.test?.test(value))
            return tag;
    const kt = schema.knownTags[tagName];
    if (kt && !kt.collection) {
        // Ensure that the known tag is available for stringifying,
        // but does not get used by default.
        schema.tags.push(Object.assign({}, kt, { default: false, test: undefined }));
        return kt;
    }
    onError(tagToken, 'TAG_RESOLVE_FAILED', `Unresolved tag: ${tagName}`, tagName !== 'tag:yaml.org,2002:str');
    return schema[SCALAR];
}
function findScalarTagByTest({ atKey, directives, schema }, value, token, onError) {
    const tag = schema.tags.find(tag => (tag.default === true || (atKey && tag.default === 'key')) &&
        tag.test?.test(value)) || schema[SCALAR];
    if (schema.compat) {
        const compat = schema.compat.find(tag => tag.default && tag.test?.test(value)) ??
            schema[SCALAR];
        if (tag.tag !== compat.tag) {
            const ts = directives.tagString(tag.tag);
            const cs = directives.tagString(compat.tag);
            const msg = `Value may be parsed as either ${ts} or ${cs}`;
            onError(token, 'TAG_RESOLVE_FAILED', msg, true);
        }
    }
    return tag;
}

export { composeScalar };
217
node_modules/yaml/browser/dist/compose/composer.js
generated
vendored
Normal file
@@ -0,0 +1,217 @@
import { Directives } from '../doc/directives.js';
import { Document } from '../doc/Document.js';
import { YAMLWarning, YAMLParseError } from '../errors.js';
import { isCollection, isPair } from '../nodes/identity.js';
import { composeDoc } from './compose-doc.js';
import { resolveEnd } from './resolve-end.js';

function getErrorPos(src) {
    if (typeof src === 'number')
        return [src, src + 1];
    if (Array.isArray(src))
        return src.length === 2 ? src : [src[0], src[1]];
    const { offset, source } = src;
    return [offset, offset + (typeof source === 'string' ? source.length : 1)];
}
function parsePrelude(prelude) {
    let comment = '';
    let atComment = false;
    let afterEmptyLine = false;
    for (let i = 0; i < prelude.length; ++i) {
        const source = prelude[i];
        switch (source[0]) {
            case '#':
                comment +=
                    (comment === '' ? '' : afterEmptyLine ? '\n\n' : '\n') +
                        (source.substring(1) || ' ');
                atComment = true;
                afterEmptyLine = false;
                break;
            case '%':
                if (prelude[i + 1]?.[0] !== '#')
                    i += 1;
                atComment = false;
                break;
            default:
                // This may be wrong after doc-end, but in that case it doesn't matter
                if (!atComment)
                    afterEmptyLine = true;
                atComment = false;
        }
    }
    return { comment, afterEmptyLine };
}
/**
 * Compose a stream of CST nodes into a stream of YAML Documents.
 *
 * ```ts
 * import { Composer, Parser } from 'yaml'
 *
 * const src: string = ...
 * const tokens = new Parser().parse(src)
 * const docs = new Composer().compose(tokens)
 * ```
 */
class Composer {
    constructor(options = {}) {
        this.doc = null;
        this.atDirectives = false;
        this.prelude = [];
        this.errors = [];
        this.warnings = [];
        this.onError = (source, code, message, warning) => {
            const pos = getErrorPos(source);
            if (warning)
                this.warnings.push(new YAMLWarning(pos, code, message));
            else
                this.errors.push(new YAMLParseError(pos, code, message));
        };
        // eslint-disable-next-line @typescript-eslint/prefer-nullish-coalescing
        this.directives = new Directives({ version: options.version || '1.2' });
        this.options = options;
    }
    decorate(doc, afterDoc) {
        const { comment, afterEmptyLine } = parsePrelude(this.prelude);
        //console.log({ dc: doc.comment, prelude, comment })
        if (comment) {
            const dc = doc.contents;
            if (afterDoc) {
                doc.comment = doc.comment ? `${doc.comment}\n${comment}` : comment;
            }
            else if (afterEmptyLine || doc.directives.docStart || !dc) {
                doc.commentBefore = comment;
            }
            else if (isCollection(dc) && !dc.flow && dc.items.length > 0) {
                let it = dc.items[0];
                if (isPair(it))
                    it = it.key;
                const cb = it.commentBefore;
                it.commentBefore = cb ? `${comment}\n${cb}` : comment;
            }
            else {
                const cb = dc.commentBefore;
                dc.commentBefore = cb ? `${comment}\n${cb}` : comment;
            }
        }
        if (afterDoc) {
            Array.prototype.push.apply(doc.errors, this.errors);
            Array.prototype.push.apply(doc.warnings, this.warnings);
        }
        else {
            doc.errors = this.errors;
            doc.warnings = this.warnings;
        }
        this.prelude = [];
        this.errors = [];
        this.warnings = [];
    }
    /**
     * Current stream status information.
     *
     * Mostly useful at the end of input for an empty stream.
     */
    streamInfo() {
        return {
            comment: parsePrelude(this.prelude).comment,
            directives: this.directives,
            errors: this.errors,
            warnings: this.warnings
        };
    }
    /**
     * Compose tokens into documents.
     *
     * @param forceDoc - If the stream contains no document, still emit a final document including any comments and directives that would be applied to a subsequent document.
     * @param endOffset - Should be set if `forceDoc` is also set, to set the document range end and to indicate errors correctly.
     */
    *compose(tokens, forceDoc = false, endOffset = -1) {
        for (const token of tokens)
            yield* this.next(token);
        yield* this.end(forceDoc, endOffset);
    }
    /** Advance the composer by one CST token. */
    *next(token) {
        switch (token.type) {
            case 'directive':
                this.directives.add(token.source, (offset, message, warning) => {
                    const pos = getErrorPos(token);
                    pos[0] += offset;
                    this.onError(pos, 'BAD_DIRECTIVE', message, warning);
                });
                this.prelude.push(token.source);
                this.atDirectives = true;
                break;
            case 'document': {
                const doc = composeDoc(this.options, this.directives, token, this.onError);
                if (this.atDirectives && !doc.directives.docStart)
                    this.onError(token, 'MISSING_CHAR', 'Missing directives-end/doc-start indicator line');
                this.decorate(doc, false);
                if (this.doc)
                    yield this.doc;
                this.doc = doc;
                this.atDirectives = false;
                break;
            }
            case 'byte-order-mark':
            case 'space':
                break;
            case 'comment':
            case 'newline':
                this.prelude.push(token.source);
                break;
            case 'error': {
                const msg = token.source
                    ? `${token.message}: ${JSON.stringify(token.source)}`
                    : token.message;
                const error = new YAMLParseError(getErrorPos(token), 'UNEXPECTED_TOKEN', msg);
                if (this.atDirectives || !this.doc)
                    this.errors.push(error);
                else
                    this.doc.errors.push(error);
                break;
            }
            case 'doc-end': {
                if (!this.doc) {
                    const msg = 'Unexpected doc-end without preceding document';
                    this.errors.push(new YAMLParseError(getErrorPos(token), 'UNEXPECTED_TOKEN', msg));
                    break;
                }
                this.doc.directives.docEnd = true;
                const end = resolveEnd(token.end, token.offset + token.source.length, this.doc.options.strict, this.onError);
                this.decorate(this.doc, true);
                if (end.comment) {
                    const dc = this.doc.comment;
                    this.doc.comment = dc ? `${dc}\n${end.comment}` : end.comment;
                }
                this.doc.range[2] = end.offset;
                break;
            }
            default:
                this.errors.push(new YAMLParseError(getErrorPos(token), 'UNEXPECTED_TOKEN', `Unsupported token ${token.type}`));
        }
    }
    /**
     * Call at end of input to yield any remaining document.
     *
     * @param forceDoc - If the stream contains no document, still emit a final document including any comments and directives that would be applied to a subsequent document.
     * @param endOffset - Should be set if `forceDoc` is also set, to set the document range end and to indicate errors correctly.
     */
    *end(forceDoc = false, endOffset = -1) {
        if (this.doc) {
            this.decorate(this.doc, true);
            yield this.doc;
            this.doc = null;
        }
        else if (forceDoc) {
            const opts = Object.assign({ _directives: this.directives }, this.options);
            const doc = new Document(undefined, opts);
            if (this.atDirectives)
                this.onError(endOffset, 'MISSING_CHAR', 'Missing directives-end indicator line');
            doc.range = [0, endOffset, endOffset];
            this.decorate(doc, false);
            yield doc;
        }
    }
}

export { Composer };
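The `getErrorPos` helper above normalizes the three shapes an error "source" can take (a plain offset, an `[offset, end]` pair, or a token object) into a single `[offset, end]` range. A minimal standalone sketch, with the function copied out of the module so it can be run in isolation:

```javascript
// Standalone copy of getErrorPos: normalize an error source
// (number, [offset, end] array, or token) into an [offset, end] pair.
function getErrorPos(src) {
    if (typeof src === 'number')
        return [src, src + 1];
    if (Array.isArray(src))
        return src.length === 2 ? src : [src[0], src[1]];
    const { offset, source } = src;
    return [offset, offset + (typeof source === 'string' ? source.length : 1)];
}

console.log(getErrorPos(5));                            // [ 5, 6 ]
console.log(getErrorPos([2, 7]));                       // [ 2, 7 ]
console.log(getErrorPos({ offset: 3, source: 'abc' })); // [ 3, 6 ]
```

A token without a string `source` falls back to a one-character range starting at its offset.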
115
node_modules/yaml/browser/dist/compose/resolve-block-map.js
generated
vendored
Normal file
@@ -0,0 +1,115 @@
import { Pair } from '../nodes/Pair.js';
import { YAMLMap } from '../nodes/YAMLMap.js';
import { resolveProps } from './resolve-props.js';
import { containsNewline } from './util-contains-newline.js';
import { flowIndentCheck } from './util-flow-indent-check.js';
import { mapIncludes } from './util-map-includes.js';

const startColMsg = 'All mapping items must start at the same column';
function resolveBlockMap({ composeNode, composeEmptyNode }, ctx, bm, onError, tag) {
    const NodeClass = tag?.nodeClass ?? YAMLMap;
    const map = new NodeClass(ctx.schema);
    if (ctx.atRoot)
        ctx.atRoot = false;
    let offset = bm.offset;
    let commentEnd = null;
    for (const collItem of bm.items) {
        const { start, key, sep, value } = collItem;
        // key properties
        const keyProps = resolveProps(start, {
            indicator: 'explicit-key-ind',
            next: key ?? sep?.[0],
            offset,
            onError,
            parentIndent: bm.indent,
            startOnNewline: true
        });
        const implicitKey = !keyProps.found;
        if (implicitKey) {
            if (key) {
                if (key.type === 'block-seq')
                    onError(offset, 'BLOCK_AS_IMPLICIT_KEY', 'A block sequence may not be used as an implicit map key');
                else if ('indent' in key && key.indent !== bm.indent)
                    onError(offset, 'BAD_INDENT', startColMsg);
            }
            if (!keyProps.anchor && !keyProps.tag && !sep) {
                commentEnd = keyProps.end;
                if (keyProps.comment) {
                    if (map.comment)
                        map.comment += '\n' + keyProps.comment;
                    else
                        map.comment = keyProps.comment;
                }
                continue;
            }
            if (keyProps.newlineAfterProp || containsNewline(key)) {
                onError(key ?? start[start.length - 1], 'MULTILINE_IMPLICIT_KEY', 'Implicit keys need to be on a single line');
            }
        }
        else if (keyProps.found?.indent !== bm.indent) {
            onError(offset, 'BAD_INDENT', startColMsg);
        }
        // key value
        ctx.atKey = true;
        const keyStart = keyProps.end;
        const keyNode = key
            ? composeNode(ctx, key, keyProps, onError)
            : composeEmptyNode(ctx, keyStart, start, null, keyProps, onError);
        if (ctx.schema.compat)
            flowIndentCheck(bm.indent, key, onError);
        ctx.atKey = false;
        if (mapIncludes(ctx, map.items, keyNode))
            onError(keyStart, 'DUPLICATE_KEY', 'Map keys must be unique');
        // value properties
        const valueProps = resolveProps(sep ?? [], {
            indicator: 'map-value-ind',
            next: value,
            offset: keyNode.range[2],
            onError,
            parentIndent: bm.indent,
            startOnNewline: !key || key.type === 'block-scalar'
        });
        offset = valueProps.end;
        if (valueProps.found) {
            if (implicitKey) {
                if (value?.type === 'block-map' && !valueProps.hasNewline)
                    onError(offset, 'BLOCK_AS_IMPLICIT_KEY', 'Nested mappings are not allowed in compact mappings');
                if (ctx.options.strict &&
                    keyProps.start < valueProps.found.offset - 1024)
                    onError(keyNode.range, 'KEY_OVER_1024_CHARS', 'The : indicator must be at most 1024 chars after the start of an implicit block mapping key');
            }
            // value value
            const valueNode = value
                ? composeNode(ctx, value, valueProps, onError)
                : composeEmptyNode(ctx, offset, sep, null, valueProps, onError);
            if (ctx.schema.compat)
                flowIndentCheck(bm.indent, value, onError);
            offset = valueNode.range[2];
            const pair = new Pair(keyNode, valueNode);
            if (ctx.options.keepSourceTokens)
                pair.srcToken = collItem;
            map.items.push(pair);
        }
        else {
            // key with no value
            if (implicitKey)
                onError(keyNode.range, 'MISSING_CHAR', 'Implicit map keys need to be followed by map values');
            if (valueProps.comment) {
                if (keyNode.comment)
                    keyNode.comment += '\n' + valueProps.comment;
                else
                    keyNode.comment = valueProps.comment;
            }
            const pair = new Pair(keyNode);
            if (ctx.options.keepSourceTokens)
                pair.srcToken = collItem;
            map.items.push(pair);
        }
    }
    if (commentEnd && commentEnd < offset)
        onError(commentEnd, 'IMPOSSIBLE', 'Map comment with trailing content');
    map.range = [bm.offset, offset, commentEnd ?? offset];
    return map;
}

export { resolveBlockMap };
198
node_modules/yaml/browser/dist/compose/resolve-block-scalar.js
generated
vendored
Normal file
@@ -0,0 +1,198 @@
import { Scalar } from '../nodes/Scalar.js';

function resolveBlockScalar(ctx, scalar, onError) {
    const start = scalar.offset;
    const header = parseBlockScalarHeader(scalar, ctx.options.strict, onError);
    if (!header)
        return { value: '', type: null, comment: '', range: [start, start, start] };
    const type = header.mode === '>' ? Scalar.BLOCK_FOLDED : Scalar.BLOCK_LITERAL;
    const lines = scalar.source ? splitLines(scalar.source) : [];
    // determine the end of content & start of chomping
    let chompStart = lines.length;
    for (let i = lines.length - 1; i >= 0; --i) {
        const content = lines[i][1];
        if (content === '' || content === '\r')
            chompStart = i;
        else
            break;
    }
    // shortcut for empty contents
    if (chompStart === 0) {
        const value = header.chomp === '+' && lines.length > 0
            ? '\n'.repeat(Math.max(1, lines.length - 1))
            : '';
        let end = start + header.length;
        if (scalar.source)
            end += scalar.source.length;
        return { value, type, comment: header.comment, range: [start, end, end] };
    }
    // find the indentation level to trim from start
    let trimIndent = scalar.indent + header.indent;
    let offset = scalar.offset + header.length;
    let contentStart = 0;
    for (let i = 0; i < chompStart; ++i) {
        const [indent, content] = lines[i];
        if (content === '' || content === '\r') {
            if (header.indent === 0 && indent.length > trimIndent)
                trimIndent = indent.length;
        }
        else {
            if (indent.length < trimIndent) {
                const message = 'Block scalars with more-indented leading empty lines must use an explicit indentation indicator';
                onError(offset + indent.length, 'MISSING_CHAR', message);
            }
            if (header.indent === 0)
                trimIndent = indent.length;
            contentStart = i;
            if (trimIndent === 0 && !ctx.atRoot) {
                const message = 'Block scalar values in collections must be indented';
                onError(offset, 'BAD_INDENT', message);
            }
            break;
        }
        offset += indent.length + content.length + 1;
    }
    // include trailing more-indented empty lines in content
    for (let i = lines.length - 1; i >= chompStart; --i) {
        if (lines[i][0].length > trimIndent)
            chompStart = i + 1;
    }
    let value = '';
    let sep = '';
    let prevMoreIndented = false;
    // leading whitespace is kept intact
    for (let i = 0; i < contentStart; ++i)
        value += lines[i][0].slice(trimIndent) + '\n';
    for (let i = contentStart; i < chompStart; ++i) {
        let [indent, content] = lines[i];
        offset += indent.length + content.length + 1;
        const crlf = content[content.length - 1] === '\r';
        if (crlf)
            content = content.slice(0, -1);
        /* istanbul ignore if already caught in lexer */
        if (content && indent.length < trimIndent) {
            const src = header.indent
                ? 'explicit indentation indicator'
                : 'first line';
            const message = `Block scalar lines must not be less indented than their ${src}`;
            onError(offset - content.length - (crlf ? 2 : 1), 'BAD_INDENT', message);
            indent = '';
        }
        if (type === Scalar.BLOCK_LITERAL) {
            value += sep + indent.slice(trimIndent) + content;
            sep = '\n';
        }
        else if (indent.length > trimIndent || content[0] === '\t') {
            // more-indented content within a folded block
            if (sep === ' ')
                sep = '\n';
            else if (!prevMoreIndented && sep === '\n')
                sep = '\n\n';
            value += sep + indent.slice(trimIndent) + content;
            sep = '\n';
            prevMoreIndented = true;
        }
        else if (content === '') {
            // empty line
            if (sep === '\n')
                value += '\n';
            else
                sep = '\n';
        }
        else {
            value += sep + content;
            sep = ' ';
            prevMoreIndented = false;
        }
    }
    switch (header.chomp) {
        case '-':
            break;
        case '+':
            for (let i = chompStart; i < lines.length; ++i)
                value += '\n' + lines[i][0].slice(trimIndent);
            if (value[value.length - 1] !== '\n')
                value += '\n';
            break;
        default:
            value += '\n';
    }
    const end = start + header.length + scalar.source.length;
    return { value, type, comment: header.comment, range: [start, end, end] };
}
function parseBlockScalarHeader({ offset, props }, strict, onError) {
    /* istanbul ignore if should not happen */
    if (props[0].type !== 'block-scalar-header') {
        onError(props[0], 'IMPOSSIBLE', 'Block scalar header not found');
        return null;
    }
    const { source } = props[0];
    const mode = source[0];
    let indent = 0;
    let chomp = '';
    let error = -1;
    for (let i = 1; i < source.length; ++i) {
        const ch = source[i];
        if (!chomp && (ch === '-' || ch === '+'))
            chomp = ch;
        else {
            const n = Number(ch);
            if (!indent && n)
                indent = n;
            else if (error === -1)
                error = offset + i;
        }
    }
    if (error !== -1)
        onError(error, 'UNEXPECTED_TOKEN', `Block scalar header includes extra characters: ${source}`);
    let hasSpace = false;
    let comment = '';
    let length = source.length;
    for (let i = 1; i < props.length; ++i) {
        const token = props[i];
        switch (token.type) {
            case 'space':
                hasSpace = true;
            // fallthrough
            case 'newline':
                length += token.source.length;
                break;
            case 'comment':
                if (strict && !hasSpace) {
                    const message = 'Comments must be separated from other tokens by white space characters';
                    onError(token, 'MISSING_CHAR', message);
                }
                length += token.source.length;
                comment = token.source.substring(1);
                break;
            case 'error':
                onError(token, 'UNEXPECTED_TOKEN', token.message);
                length += token.source.length;
                break;
            /* istanbul ignore next should not happen */
            default: {
                const message = `Unexpected token in block scalar header: ${token.type}`;
                onError(token, 'UNEXPECTED_TOKEN', message);
                const ts = token.source;
                if (ts && typeof ts === 'string')
                    length += ts.length;
            }
        }
    }
    return { mode, indent, chomp, comment, length };
}
/** @returns Array of lines split up as `[indent, content]` */
function splitLines(source) {
    const split = source.split(/\n( *)/);
    const first = split[0];
    const m = first.match(/^( *)/);
    const line0 = m?.[1]
        ? [m[1], first.slice(m[1].length)]
        : ['', first];
    const lines = [line0];
    for (let i = 1; i < split.length; i += 2)
        lines.push([split[i], split[i + 1]]);
    return lines;
}

export { resolveBlockScalar };
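The `splitLines` helper above drives the whole block-scalar resolver: it splits the raw source on newlines while capturing each line's leading spaces, yielding `[indent, content]` pairs. A minimal standalone sketch with the function copied out of the module:

```javascript
// Standalone copy of splitLines: split block-scalar source into
// [indent, content] pairs, one per line.
function splitLines(source) {
    const split = source.split(/\n( *)/);
    const first = split[0];
    const m = first.match(/^( *)/);
    const line0 = m?.[1]
        ? [m[1], first.slice(m[1].length)]
        : ['', first];
    const lines = [line0];
    // split() with a capturing group interleaves separators, so the
    // captured indent of line i+1 sits at odd indices.
    for (let i = 1; i < split.length; i += 2)
        lines.push([split[i], split[i + 1]]);
    return lines;
}

console.log(splitLines('a\n  b\n\nc'));
// [ [ '', 'a' ], [ '  ', 'b' ], [ '', '' ], [ '', 'c' ] ]
```

Empty lines come back as `['', '']`, which is exactly what the chomping and trim-indent loops above test for.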
49
node_modules/yaml/browser/dist/compose/resolve-block-seq.js
generated
vendored
Normal file
@@ -0,0 +1,49 @@
import { YAMLSeq } from '../nodes/YAMLSeq.js';
import { resolveProps } from './resolve-props.js';
import { flowIndentCheck } from './util-flow-indent-check.js';

function resolveBlockSeq({ composeNode, composeEmptyNode }, ctx, bs, onError, tag) {
    const NodeClass = tag?.nodeClass ?? YAMLSeq;
    const seq = new NodeClass(ctx.schema);
    if (ctx.atRoot)
        ctx.atRoot = false;
    if (ctx.atKey)
        ctx.atKey = false;
    let offset = bs.offset;
    let commentEnd = null;
    for (const { start, value } of bs.items) {
        const props = resolveProps(start, {
            indicator: 'seq-item-ind',
            next: value,
            offset,
            onError,
            parentIndent: bs.indent,
            startOnNewline: true
        });
        if (!props.found) {
            if (props.anchor || props.tag || value) {
                if (value?.type === 'block-seq')
                    onError(props.end, 'BAD_INDENT', 'All sequence items must start at the same column');
                else
                    onError(offset, 'MISSING_CHAR', 'Sequence item without - indicator');
            }
            else {
                commentEnd = props.end;
                if (props.comment)
                    seq.comment = props.comment;
                continue;
            }
        }
        const node = value
            ? composeNode(ctx, value, props, onError)
            : composeEmptyNode(ctx, props.end, start, null, props, onError);
        if (ctx.schema.compat)
            flowIndentCheck(bs.indent, value, onError);
        offset = node.range[2];
        seq.items.push(node);
    }
    seq.range = [bs.offset, offset, commentEnd ?? offset];
    return seq;
}

export { resolveBlockSeq };
37
node_modules/yaml/browser/dist/compose/resolve-end.js
generated
vendored
Normal file
@@ -0,0 +1,37 @@
function resolveEnd(end, offset, reqSpace, onError) {
    let comment = '';
    if (end) {
        let hasSpace = false;
        let sep = '';
        for (const token of end) {
            const { source, type } = token;
            switch (type) {
                case 'space':
                    hasSpace = true;
                    break;
                case 'comment': {
                    if (reqSpace && !hasSpace)
                        onError(token, 'MISSING_CHAR', 'Comments must be separated from other tokens by white space characters');
                    const cb = source.substring(1) || ' ';
                    if (!comment)
                        comment = cb;
                    else
                        comment += sep + cb;
                    sep = '';
                    break;
                }
                case 'newline':
                    if (comment)
                        sep += source;
                    hasSpace = true;
                    break;
                default:
                    onError(token, 'UNEXPECTED_TOKEN', `Unexpected ${type} at node end`);
            }
            offset += source.length;
        }
    }
    return { comment, offset };
}

export { resolveEnd };
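`resolveEnd` walks a node's trailing tokens, collecting comment text and advancing the offset past everything it consumes. A minimal standalone sketch, with the function copied out of the module and a hypothetical hand-built token list:

```javascript
// Standalone copy of resolveEnd: gather trailing comment text and the
// final offset from a node's end tokens.
function resolveEnd(end, offset, reqSpace, onError) {
    let comment = '';
    if (end) {
        let hasSpace = false;
        let sep = '';
        for (const token of end) {
            const { source, type } = token;
            switch (type) {
                case 'space':
                    hasSpace = true;
                    break;
                case 'comment': {
                    if (reqSpace && !hasSpace)
                        onError(token, 'MISSING_CHAR', 'Comments must be separated from other tokens by white space characters');
                    const cb = source.substring(1) || ' ';
                    if (!comment)
                        comment = cb;
                    else
                        comment += sep + cb;
                    sep = '';
                    break;
                }
                case 'newline':
                    if (comment)
                        sep += source;
                    hasSpace = true;
                    break;
                default:
                    onError(token, 'UNEXPECTED_TOKEN', `Unexpected ${type} at node end`);
            }
            offset += source.length;
        }
    }
    return { comment, offset };
}

// Hypothetical end-token list: a space followed by a line comment.
const errors = [];
const result = resolveEnd(
    [{ type: 'space', source: ' ' }, { type: 'comment', source: '# trailing' }],
    10, true, (t, code) => errors.push(code));
console.log(result); // { comment: ' trailing', offset: 21 }
```

Because the space precedes the comment, no `MISSING_CHAR` error is reported even in strict (`reqSpace`) mode.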
207
node_modules/yaml/browser/dist/compose/resolve-flow-collection.js
generated
vendored
Normal file
@@ -0,0 +1,207 @@
import { isPair } from '../nodes/identity.js';
import { Pair } from '../nodes/Pair.js';
import { YAMLMap } from '../nodes/YAMLMap.js';
import { YAMLSeq } from '../nodes/YAMLSeq.js';
import { resolveEnd } from './resolve-end.js';
import { resolveProps } from './resolve-props.js';
import { containsNewline } from './util-contains-newline.js';
import { mapIncludes } from './util-map-includes.js';

const blockMsg = 'Block collections are not allowed within flow collections';
const isBlock = (token) => token && (token.type === 'block-map' || token.type === 'block-seq');
function resolveFlowCollection({ composeNode, composeEmptyNode }, ctx, fc, onError, tag) {
    const isMap = fc.start.source === '{';
    const fcName = isMap ? 'flow map' : 'flow sequence';
    const NodeClass = (tag?.nodeClass ?? (isMap ? YAMLMap : YAMLSeq));
    const coll = new NodeClass(ctx.schema);
    coll.flow = true;
    const atRoot = ctx.atRoot;
    if (atRoot)
        ctx.atRoot = false;
    if (ctx.atKey)
        ctx.atKey = false;
    let offset = fc.offset + fc.start.source.length;
    for (let i = 0; i < fc.items.length; ++i) {
        const collItem = fc.items[i];
        const { start, key, sep, value } = collItem;
        const props = resolveProps(start, {
            flow: fcName,
            indicator: 'explicit-key-ind',
            next: key ?? sep?.[0],
            offset,
            onError,
            parentIndent: fc.indent,
            startOnNewline: false
        });
        if (!props.found) {
            if (!props.anchor && !props.tag && !sep && !value) {
                if (i === 0 && props.comma)
                    onError(props.comma, 'UNEXPECTED_TOKEN', `Unexpected , in ${fcName}`);
                else if (i < fc.items.length - 1)
                    onError(props.start, 'UNEXPECTED_TOKEN', `Unexpected empty item in ${fcName}`);
                if (props.comment) {
                    if (coll.comment)
                        coll.comment += '\n' + props.comment;
                    else
                        coll.comment = props.comment;
                }
                offset = props.end;
                continue;
            }
            if (!isMap && ctx.options.strict && containsNewline(key))
                onError(key, // checked by containsNewline()
                'MULTILINE_IMPLICIT_KEY', 'Implicit keys of flow sequence pairs need to be on a single line');
        }
        if (i === 0) {
            if (props.comma)
                onError(props.comma, 'UNEXPECTED_TOKEN', `Unexpected , in ${fcName}`);
        }
        else {
            if (!props.comma)
                onError(props.start, 'MISSING_CHAR', `Missing , between ${fcName} items`);
            if (props.comment) {
                let prevItemComment = '';
                loop: for (const st of start) {
                    switch (st.type) {
                        case 'comma':
                        case 'space':
                            break;
                        case 'comment':
                            prevItemComment = st.source.substring(1);
                            break loop;
                        default:
                            break loop;
                    }
                }
                if (prevItemComment) {
                    let prev = coll.items[coll.items.length - 1];
                    if (isPair(prev))
                        prev = prev.value ?? prev.key;
                    if (prev.comment)
                        prev.comment += '\n' + prevItemComment;
                    else
                        prev.comment = prevItemComment;
                    props.comment = props.comment.substring(prevItemComment.length + 1);
                }
            }
        }
        if (!isMap && !sep && !props.found) {
            // item is a value in a seq
            // → key & sep are empty, start does not include ? or :
            const valueNode = value
                ? composeNode(ctx, value, props, onError)
                : composeEmptyNode(ctx, props.end, sep, null, props, onError);
            coll.items.push(valueNode);
            offset = valueNode.range[2];
            if (isBlock(value))
                onError(valueNode.range, 'BLOCK_IN_FLOW', blockMsg);
        }
        else {
            // item is a key+value pair
            // key value
            ctx.atKey = true;
            const keyStart = props.end;
            const keyNode = key
                ? composeNode(ctx, key, props, onError)
                : composeEmptyNode(ctx, keyStart, start, null, props, onError);
            if (isBlock(key))
                onError(keyNode.range, 'BLOCK_IN_FLOW', blockMsg);
            ctx.atKey = false;
            // value properties
            const valueProps = resolveProps(sep ?? [], {
                flow: fcName,
                indicator: 'map-value-ind',
                next: value,
                offset: keyNode.range[2],
                onError,
                parentIndent: fc.indent,
                startOnNewline: false
            });
            if (valueProps.found) {
                if (!isMap && !props.found && ctx.options.strict) {
                    if (sep)
                        for (const st of sep) {
                            if (st === valueProps.found)
                                break;
                            if (st.type === 'newline') {
                                onError(st, 'MULTILINE_IMPLICIT_KEY', 'Implicit keys of flow sequence pairs need to be on a single line');
                                break;
                            }
                        }
                    if (props.start < valueProps.found.offset - 1024)
                        onError(valueProps.found, 'KEY_OVER_1024_CHARS', 'The : indicator must be at most 1024 chars after the start of an implicit flow sequence key');
                }
            }
            else if (value) {
                if ('source' in value && value.source?.[0] === ':')
                    onError(value, 'MISSING_CHAR', `Missing space after : in ${fcName}`);
                else
                    onError(valueProps.start, 'MISSING_CHAR', `Missing , or : between ${fcName} items`);
            }
            // value value
            const valueNode = value
                ? composeNode(ctx, value, valueProps, onError)
                : valueProps.found
                    ? composeEmptyNode(ctx, valueProps.end, sep, null, valueProps, onError)
                    : null;
            if (valueNode) {
                if (isBlock(value))
                    onError(valueNode.range, 'BLOCK_IN_FLOW', blockMsg);
            }
            else if (valueProps.comment) {
                if (keyNode.comment)
                    keyNode.comment += '\n' + valueProps.comment;
                else
                    keyNode.comment = valueProps.comment;
            }
            const pair = new Pair(keyNode, valueNode);
            if (ctx.options.keepSourceTokens)
                pair.srcToken = collItem;
            if (isMap) {
                const map = coll;
                if (mapIncludes(ctx, map.items, keyNode))
                    onError(keyStart, 'DUPLICATE_KEY', 'Map keys must be unique');
|
||||
map.items.push(pair);
|
||||
}
|
||||
else {
|
||||
const map = new YAMLMap(ctx.schema);
|
||||
map.flow = true;
|
||||
map.items.push(pair);
|
||||
const endRange = (valueNode ?? keyNode).range;
|
||||
map.range = [keyNode.range[0], endRange[1], endRange[2]];
|
||||
coll.items.push(map);
|
||||
}
|
||||
offset = valueNode ? valueNode.range[2] : valueProps.end;
|
||||
}
|
||||
}
|
||||
const expectedEnd = isMap ? '}' : ']';
|
||||
const [ce, ...ee] = fc.end;
|
||||
let cePos = offset;
|
||||
if (ce?.source === expectedEnd)
|
||||
cePos = ce.offset + ce.source.length;
|
||||
else {
|
||||
const name = fcName[0].toUpperCase() + fcName.substring(1);
|
||||
const msg = atRoot
|
||||
? `${name} must end with a ${expectedEnd}`
|
||||
: `${name} in block collection must be sufficiently indented and end with a ${expectedEnd}`;
|
||||
onError(offset, atRoot ? 'MISSING_CHAR' : 'BAD_INDENT', msg);
|
||||
if (ce && ce.source.length !== 1)
|
||||
ee.unshift(ce);
|
||||
}
|
||||
if (ee.length > 0) {
|
||||
const end = resolveEnd(ee, cePos, ctx.options.strict, onError);
|
||||
if (end.comment) {
|
||||
if (coll.comment)
|
||||
coll.comment += '\n' + end.comment;
|
||||
else
|
||||
coll.comment = end.comment;
|
||||
}
|
||||
coll.range = [fc.offset, cePos, end.offset];
|
||||
}
|
||||
else {
|
||||
coll.range = [fc.offset, cePos, cePos];
|
||||
}
|
||||
return coll;
|
||||
}
|
||||
|
||||
export { resolveFlowCollection };
223 node_modules/yaml/browser/dist/compose/resolve-flow-scalar.js generated vendored Normal file
@@ -0,0 +1,223 @@
import { Scalar } from '../nodes/Scalar.js';
import { resolveEnd } from './resolve-end.js';

function resolveFlowScalar(scalar, strict, onError) {
    const { offset, type, source, end } = scalar;
    let _type;
    let value;
    const _onError = (rel, code, msg) => onError(offset + rel, code, msg);
    switch (type) {
        case 'scalar':
            _type = Scalar.PLAIN;
            value = plainValue(source, _onError);
            break;
        case 'single-quoted-scalar':
            _type = Scalar.QUOTE_SINGLE;
            value = singleQuotedValue(source, _onError);
            break;
        case 'double-quoted-scalar':
            _type = Scalar.QUOTE_DOUBLE;
            value = doubleQuotedValue(source, _onError);
            break;
        /* istanbul ignore next should not happen */
        default:
            onError(scalar, 'UNEXPECTED_TOKEN', `Expected a flow scalar value, but found: ${type}`);
            return {
                value: '',
                type: null,
                comment: '',
                range: [offset, offset + source.length, offset + source.length]
            };
    }
    const valueEnd = offset + source.length;
    const re = resolveEnd(end, valueEnd, strict, onError);
    return {
        value,
        type: _type,
        comment: re.comment,
        range: [offset, valueEnd, re.offset]
    };
}
function plainValue(source, onError) {
    let badChar = '';
    switch (source[0]) {
        /* istanbul ignore next should not happen */
        case '\t':
            badChar = 'a tab character';
            break;
        case ',':
            badChar = 'flow indicator character ,';
            break;
        case '%':
            badChar = 'directive indicator character %';
            break;
        case '|':
        case '>': {
            badChar = `block scalar indicator ${source[0]}`;
            break;
        }
        case '@':
        case '`': {
            badChar = `reserved character ${source[0]}`;
            break;
        }
    }
    if (badChar)
        onError(0, 'BAD_SCALAR_START', `Plain value cannot start with ${badChar}`);
    return foldLines(source);
}
function singleQuotedValue(source, onError) {
    if (source[source.length - 1] !== "'" || source.length === 1)
        onError(source.length, 'MISSING_CHAR', "Missing closing 'quote");
    return foldLines(source.slice(1, -1)).replace(/''/g, "'");
}
function foldLines(source) {
    /**
     * The negative lookbehind here and in the `re` RegExp is to
     * prevent causing a polynomial search time in certain cases.
     *
     * The try-catch is for Safari, which doesn't support this yet:
     * https://caniuse.com/js-regexp-lookbehind
     */
    let first, line;
    try {
        first = new RegExp('(.*?)(?<![ \t])[ \t]*\r?\n', 'sy');
        line = new RegExp('[ \t]*(.*?)(?:(?<![ \t])[ \t]*)?\r?\n', 'sy');
    }
    catch {
        first = /(.*?)[ \t]*\r?\n/sy;
        line = /[ \t]*(.*?)[ \t]*\r?\n/sy;
    }
    let match = first.exec(source);
    if (!match)
        return source;
    let res = match[1];
    let sep = ' ';
    let pos = first.lastIndex;
    line.lastIndex = pos;
    while ((match = line.exec(source))) {
        if (match[1] === '') {
            if (sep === '\n')
                res += sep;
            else
                sep = '\n';
        }
        else {
            res += sep + match[1];
            sep = ' ';
        }
        pos = line.lastIndex;
    }
    const last = /[ \t]*(.*)/sy;
    last.lastIndex = pos;
    match = last.exec(source);
    return res + sep + (match?.[1] ?? '');
}
function doubleQuotedValue(source, onError) {
    let res = '';
    for (let i = 1; i < source.length - 1; ++i) {
        const ch = source[i];
        if (ch === '\r' && source[i + 1] === '\n')
            continue;
        if (ch === '\n') {
            const { fold, offset } = foldNewline(source, i);
            res += fold;
            i = offset;
        }
        else if (ch === '\\') {
            let next = source[++i];
            const cc = escapeCodes[next];
            if (cc)
                res += cc;
            else if (next === '\n') {
                // skip escaped newlines, but still trim the following line
                next = source[i + 1];
                while (next === ' ' || next === '\t')
                    next = source[++i + 1];
            }
            else if (next === '\r' && source[i + 1] === '\n') {
                // skip escaped CRLF newlines, but still trim the following line
                next = source[++i + 1];
                while (next === ' ' || next === '\t')
                    next = source[++i + 1];
            }
            else if (next === 'x' || next === 'u' || next === 'U') {
                const length = { x: 2, u: 4, U: 8 }[next];
                res += parseCharCode(source, i + 1, length, onError);
                i += length;
            }
            else {
                const raw = source.substr(i - 1, 2);
                onError(i - 1, 'BAD_DQ_ESCAPE', `Invalid escape sequence ${raw}`);
                res += raw;
            }
        }
        else if (ch === ' ' || ch === '\t') {
            // trim trailing whitespace
            const wsStart = i;
            let next = source[i + 1];
            while (next === ' ' || next === '\t')
                next = source[++i + 1];
            if (next !== '\n' && !(next === '\r' && source[i + 2] === '\n'))
                res += i > wsStart ? source.slice(wsStart, i + 1) : ch;
        }
        else {
            res += ch;
        }
    }
    if (source[source.length - 1] !== '"' || source.length === 1)
        onError(source.length, 'MISSING_CHAR', 'Missing closing "quote');
    return res;
}
/**
 * Fold a single newline into a space, multiple newlines to N - 1 newlines.
 * Presumes `source[offset] === '\n'`
 */
function foldNewline(source, offset) {
    let fold = '';
    let ch = source[offset + 1];
    while (ch === ' ' || ch === '\t' || ch === '\n' || ch === '\r') {
        if (ch === '\r' && source[offset + 2] !== '\n')
            break;
        if (ch === '\n')
            fold += '\n';
        offset += 1;
        ch = source[offset + 1];
    }
    if (!fold)
        fold = ' ';
    return { fold, offset };
}
const escapeCodes = {
    '0': '\0', // null character
    a: '\x07', // bell character
    b: '\b', // backspace
    e: '\x1b', // escape character
    f: '\f', // form feed
    n: '\n', // line feed
    r: '\r', // carriage return
    t: '\t', // horizontal tab
    v: '\v', // vertical tab
    N: '\u0085', // Unicode next line
    _: '\u00a0', // Unicode non-breaking space
    L: '\u2028', // Unicode line separator
    P: '\u2029', // Unicode paragraph separator
    ' ': ' ',
    '"': '"',
    '/': '/',
    '\\': '\\',
    '\t': '\t'
};
function parseCharCode(source, offset, length, onError) {
    const cc = source.substr(offset, length);
    const ok = cc.length === length && /^[0-9a-fA-F]+$/.test(cc);
    const code = ok ? parseInt(cc, 16) : NaN;
    if (isNaN(code)) {
        const raw = source.substr(offset - 2, length + 2);
        onError(offset - 2, 'BAD_DQ_ESCAPE', `Invalid escape sequence ${raw}`);
        return raw;
    }
    return String.fromCodePoint(code);
}

export { resolveFlowScalar };
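The `foldLines` helper in the vendored file above is the part most likely to surprise readers: per YAML's folding rules, a single newline inside a flow scalar becomes a space, and N consecutive newlines become N - 1 literal newlines. A minimal standalone sketch, using only the non-lookbehind fallback regexps from the code above (so it skips the polynomial-search guard the real implementation adds):

```javascript
// Minimal sketch of YAML flow-scalar line folding, mirroring the
// non-lookbehind fallback branch of foldLines() in the vendored code.
function foldLines(source) {
    const first = /(.*?)[ \t]*\r?\n/sy; // first line: text before its newline
    const line = /[ \t]*(.*?)[ \t]*\r?\n/sy; // later lines: trim surrounding blanks
    let match = first.exec(source);
    if (!match) return source; // no newline, nothing to fold
    let res = match[1];
    let sep = ' '; // a single newline folds into one space...
    let pos = first.lastIndex;
    line.lastIndex = pos;
    while ((match = line.exec(source))) {
        if (match[1] === '') {
            // ...while each extra blank line keeps one literal newline
            if (sep === '\n') res += sep;
            else sep = '\n';
        } else {
            res += sep + match[1];
            sep = ' ';
        }
        pos = line.lastIndex;
    }
    const last = /[ \t]*(.*)/sy;
    last.lastIndex = pos;
    match = last.exec(source);
    return res + sep + (match?.[1] ?? '');
}

console.log(JSON.stringify(foldLines('a\nb')));   // single newline → space
console.log(JSON.stringify(foldLines('a\n\nb'))); // blank line → one newline
```

The sticky (`y`) flag is what lets the three regexps act as a hand-rolled tokenizer: each `exec` must match exactly at `lastIndex`, so unmatched text ends the loop instead of being skipped.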
146 node_modules/yaml/browser/dist/compose/resolve-props.js generated vendored Normal file
@@ -0,0 +1,146 @@
function resolveProps(tokens, { flow, indicator, next, offset, onError, parentIndent, startOnNewline }) {
    let spaceBefore = false;
    let atNewline = startOnNewline;
    let hasSpace = startOnNewline;
    let comment = '';
    let commentSep = '';
    let hasNewline = false;
    let reqSpace = false;
    let tab = null;
    let anchor = null;
    let tag = null;
    let newlineAfterProp = null;
    let comma = null;
    let found = null;
    let start = null;
    for (const token of tokens) {
        if (reqSpace) {
            if (token.type !== 'space' &&
                token.type !== 'newline' &&
                token.type !== 'comma')
                onError(token.offset, 'MISSING_CHAR', 'Tags and anchors must be separated from the next token by white space');
            reqSpace = false;
        }
        if (tab) {
            if (atNewline && token.type !== 'comment' && token.type !== 'newline') {
                onError(tab, 'TAB_AS_INDENT', 'Tabs are not allowed as indentation');
            }
            tab = null;
        }
        switch (token.type) {
            case 'space':
                // At the doc level, tabs at line start may be parsed
                // as leading white space rather than indentation.
                // In a flow collection, only the parser handles indent.
                if (!flow &&
                    (indicator !== 'doc-start' || next?.type !== 'flow-collection') &&
                    token.source.includes('\t')) {
                    tab = token;
                }
                hasSpace = true;
                break;
            case 'comment': {
                if (!hasSpace)
                    onError(token, 'MISSING_CHAR', 'Comments must be separated from other tokens by white space characters');
                const cb = token.source.substring(1) || ' ';
                if (!comment)
                    comment = cb;
                else
                    comment += commentSep + cb;
                commentSep = '';
                atNewline = false;
                break;
            }
            case 'newline':
                if (atNewline) {
                    if (comment)
                        comment += token.source;
                    else if (!found || indicator !== 'seq-item-ind')
                        spaceBefore = true;
                }
                else
                    commentSep += token.source;
                atNewline = true;
                hasNewline = true;
                if (anchor || tag)
                    newlineAfterProp = token;
                hasSpace = true;
                break;
            case 'anchor':
                if (anchor)
                    onError(token, 'MULTIPLE_ANCHORS', 'A node can have at most one anchor');
                if (token.source.endsWith(':'))
                    onError(token.offset + token.source.length - 1, 'BAD_ALIAS', 'Anchor ending in : is ambiguous', true);
                anchor = token;
                start ?? (start = token.offset);
                atNewline = false;
                hasSpace = false;
                reqSpace = true;
                break;
            case 'tag': {
                if (tag)
                    onError(token, 'MULTIPLE_TAGS', 'A node can have at most one tag');
                tag = token;
                start ?? (start = token.offset);
                atNewline = false;
                hasSpace = false;
                reqSpace = true;
                break;
            }
            case indicator:
                // Could here handle preceding comments differently
                if (anchor || tag)
                    onError(token, 'BAD_PROP_ORDER', `Anchors and tags must be after the ${token.source} indicator`);
                if (found)
                    onError(token, 'UNEXPECTED_TOKEN', `Unexpected ${token.source} in ${flow ?? 'collection'}`);
                found = token;
                atNewline =
                    indicator === 'seq-item-ind' || indicator === 'explicit-key-ind';
                hasSpace = false;
                break;
            case 'comma':
                if (flow) {
                    if (comma)
                        onError(token, 'UNEXPECTED_TOKEN', `Unexpected , in ${flow}`);
                    comma = token;
                    atNewline = false;
                    hasSpace = false;
                    break;
                }
            // else fallthrough
            default:
                onError(token, 'UNEXPECTED_TOKEN', `Unexpected ${token.type} token`);
                atNewline = false;
                hasSpace = false;
        }
    }
    const last = tokens[tokens.length - 1];
    const end = last ? last.offset + last.source.length : offset;
    if (reqSpace &&
        next &&
        next.type !== 'space' &&
        next.type !== 'newline' &&
        next.type !== 'comma' &&
        (next.type !== 'scalar' || next.source !== '')) {
        onError(next.offset, 'MISSING_CHAR', 'Tags and anchors must be separated from the next token by white space');
    }
    if (tab &&
        ((atNewline && tab.indent <= parentIndent) ||
            next?.type === 'block-map' ||
            next?.type === 'block-seq'))
        onError(tab, 'TAB_AS_INDENT', 'Tabs are not allowed as indentation');
    return {
        comma,
        found,
        spaceBefore,
        comment,
        hasNewline,
        anchor,
        tag,
        newlineAfterProp,
        end,
        start: start ?? end
    };
}

export { resolveProps };
34 node_modules/yaml/browser/dist/compose/util-contains-newline.js generated vendored Normal file
@@ -0,0 +1,34 @@
function containsNewline(key) {
    if (!key)
        return null;
    switch (key.type) {
        case 'alias':
        case 'scalar':
        case 'double-quoted-scalar':
        case 'single-quoted-scalar':
            if (key.source.includes('\n'))
                return true;
            if (key.end)
                for (const st of key.end)
                    if (st.type === 'newline')
                        return true;
            return false;
        case 'flow-collection':
            for (const it of key.items) {
                for (const st of it.start)
                    if (st.type === 'newline')
                        return true;
                if (it.sep)
                    for (const st of it.sep)
                        if (st.type === 'newline')
                            return true;
                if (containsNewline(it.key) || containsNewline(it.value))
                    return true;
            }
            return false;
        default:
            return true;
    }
}

export { containsNewline };
26 node_modules/yaml/browser/dist/compose/util-empty-scalar-position.js generated vendored Normal file
@@ -0,0 +1,26 @@
function emptyScalarPosition(offset, before, pos) {
    if (before) {
        pos ?? (pos = before.length);
        for (let i = pos - 1; i >= 0; --i) {
            let st = before[i];
            switch (st.type) {
                case 'space':
                case 'comment':
                case 'newline':
                    offset -= st.source.length;
                    continue;
            }
            // Technically, an empty scalar is immediately after the last non-empty
            // node, but it's more useful to place it after any whitespace.
            st = before[++i];
            while (st?.type === 'space') {
                offset += st.source.length;
                st = before[++i];
            }
            break;
        }
    }
    return offset;
}

export { emptyScalarPosition };
15 node_modules/yaml/browser/dist/compose/util-flow-indent-check.js generated vendored Normal file
@@ -0,0 +1,15 @@
import { containsNewline } from './util-contains-newline.js';

function flowIndentCheck(indent, fc, onError) {
    if (fc?.type === 'flow-collection') {
        const end = fc.end[0];
        if (end.indent === indent &&
            (end.source === ']' || end.source === '}') &&
            containsNewline(fc)) {
            const msg = 'Flow end indicator should be more indented than parent';
            onError(end, 'BAD_INDENT', msg, true);
        }
    }
}

export { flowIndentCheck };
13 node_modules/yaml/browser/dist/compose/util-map-includes.js generated vendored Normal file
@@ -0,0 +1,13 @@
import { isScalar } from '../nodes/identity.js';

function mapIncludes(ctx, items, search) {
    const { uniqueKeys } = ctx.options;
    if (uniqueKeys === false)
        return false;
    const isEqual = typeof uniqueKeys === 'function'
        ? uniqueKeys
        : (a, b) => a === b || (isScalar(a) && isScalar(b) && a.value === b.value);
    return items.some(pair => isEqual(pair.key, search));
}

export { mapIncludes };
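`mapIncludes` above is what enforces the `uniqueKeys` option when composing flow maps: `uniqueKeys` may be `false` (no checking), a custom comparator, or `true`, which falls back to identity-or-scalar-value equality. A standalone sketch with a stubbed `isScalar` (the real one comes from `../nodes/identity.js`) and `uniqueKeys` passed in directly instead of read from `ctx.options`:

```javascript
// Stub for isScalar from '../nodes/identity.js': here, any object
// carrying a `value` property counts as a scalar node.
const isScalar = (n) => !!n && typeof n === 'object' && 'value' in n;

// Mirrors mapIncludes() above, with uniqueKeys as a direct parameter.
function mapIncludes(uniqueKeys, items, search) {
    if (uniqueKeys === false) return false; // duplicate checking disabled
    const isEqual = typeof uniqueKeys === 'function'
        ? uniqueKeys // custom comparator wins
        : (a, b) => a === b || (isScalar(a) && isScalar(b) && a.value === b.value);
    return items.some(pair => isEqual(pair.key, search));
}

const entries = [{ key: { value: 'name' } }, { key: { value: 'age' } }];
console.log(mapIncludes(true, entries, { value: 'name' }));  // duplicate found
console.log(mapIncludes(false, entries, { value: 'name' })); // checking disabled
```

Comparing by `.value` rather than node identity is what makes two distinct `Scalar` nodes with the same text count as a duplicate key, which is how the `DUPLICATE_KEY` error in resolve-flow-collection.js gets raised.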
335 node_modules/yaml/browser/dist/doc/Document.js generated vendored Normal file
@@ -0,0 +1,335 @@
import { Alias } from '../nodes/Alias.js';
import { isEmptyPath, collectionFromPath } from '../nodes/Collection.js';
import { NODE_TYPE, DOC, isNode, isCollection, isScalar } from '../nodes/identity.js';
import { Pair } from '../nodes/Pair.js';
import { toJS } from '../nodes/toJS.js';
import { Schema } from '../schema/Schema.js';
import { stringifyDocument } from '../stringify/stringifyDocument.js';
import { anchorNames, findNewAnchor, createNodeAnchors } from './anchors.js';
import { applyReviver } from './applyReviver.js';
import { createNode } from './createNode.js';
import { Directives } from './directives.js';

class Document {
    constructor(value, replacer, options) {
        /** A comment before this Document */
        this.commentBefore = null;
        /** A comment immediately after this Document */
        this.comment = null;
        /** Errors encountered during parsing. */
        this.errors = [];
        /** Warnings encountered during parsing. */
        this.warnings = [];
        Object.defineProperty(this, NODE_TYPE, { value: DOC });
        let _replacer = null;
        if (typeof replacer === 'function' || Array.isArray(replacer)) {
            _replacer = replacer;
        }
        else if (options === undefined && replacer) {
            options = replacer;
            replacer = undefined;
        }
        const opt = Object.assign({
            intAsBigInt: false,
            keepSourceTokens: false,
            logLevel: 'warn',
            prettyErrors: true,
            strict: true,
            stringKeys: false,
            uniqueKeys: true,
            version: '1.2'
        }, options);
        this.options = opt;
        let { version } = opt;
        if (options?._directives) {
            this.directives = options._directives.atDocument();
            if (this.directives.yaml.explicit)
                version = this.directives.yaml.version;
        }
        else
            this.directives = new Directives({ version });
        this.setSchema(version, options);
        // @ts-expect-error We can't really know that this matches Contents.
        this.contents =
            value === undefined ? null : this.createNode(value, _replacer, options);
    }
    /**
     * Create a deep copy of this Document and its contents.
     *
     * Custom Node values that inherit from `Object` still refer to their original instances.
     */
    clone() {
        const copy = Object.create(Document.prototype, {
            [NODE_TYPE]: { value: DOC }
        });
        copy.commentBefore = this.commentBefore;
        copy.comment = this.comment;
        copy.errors = this.errors.slice();
        copy.warnings = this.warnings.slice();
        copy.options = Object.assign({}, this.options);
        if (this.directives)
            copy.directives = this.directives.clone();
        copy.schema = this.schema.clone();
        // @ts-expect-error We can't really know that this matches Contents.
        copy.contents = isNode(this.contents)
            ? this.contents.clone(copy.schema)
            : this.contents;
        if (this.range)
            copy.range = this.range.slice();
        return copy;
    }
    /** Adds a value to the document. */
    add(value) {
        if (assertCollection(this.contents))
            this.contents.add(value);
    }
    /** Adds a value to the document. */
    addIn(path, value) {
        if (assertCollection(this.contents))
            this.contents.addIn(path, value);
    }
    /**
     * Create a new `Alias` node, ensuring that the target `node` has the required anchor.
     *
     * If `node` already has an anchor, `name` is ignored.
     * Otherwise, the `node.anchor` value will be set to `name`,
     * or if an anchor with that name is already present in the document,
     * `name` will be used as a prefix for a new unique anchor.
     * If `name` is undefined, the generated anchor will use 'a' as a prefix.
     */
    createAlias(node, name) {
        if (!node.anchor) {
            const prev = anchorNames(this);
            node.anchor =
                // eslint-disable-next-line @typescript-eslint/prefer-nullish-coalescing
                !name || prev.has(name) ? findNewAnchor(name || 'a', prev) : name;
        }
        return new Alias(node.anchor);
    }
    createNode(value, replacer, options) {
        let _replacer = undefined;
        if (typeof replacer === 'function') {
            value = replacer.call({ '': value }, '', value);
            _replacer = replacer;
        }
        else if (Array.isArray(replacer)) {
            const keyToStr = (v) => typeof v === 'number' || v instanceof String || v instanceof Number;
            const asStr = replacer.filter(keyToStr).map(String);
            if (asStr.length > 0)
                replacer = replacer.concat(asStr);
            _replacer = replacer;
        }
        else if (options === undefined && replacer) {
            options = replacer;
            replacer = undefined;
        }
        const { aliasDuplicateObjects, anchorPrefix, flow, keepUndefined, onTagObj, tag } = options ?? {};
        const { onAnchor, setAnchors, sourceObjects } = createNodeAnchors(this,
        // eslint-disable-next-line @typescript-eslint/prefer-nullish-coalescing
        anchorPrefix || 'a');
        const ctx = {
            aliasDuplicateObjects: aliasDuplicateObjects ?? true,
            keepUndefined: keepUndefined ?? false,
            onAnchor,
            onTagObj,
            replacer: _replacer,
            schema: this.schema,
            sourceObjects
        };
        const node = createNode(value, tag, ctx);
        if (flow && isCollection(node))
            node.flow = true;
        setAnchors();
        return node;
    }
    /**
     * Convert a key and a value into a `Pair` using the current schema,
     * recursively wrapping all values as `Scalar` or `Collection` nodes.
     */
    createPair(key, value, options = {}) {
        const k = this.createNode(key, null, options);
        const v = this.createNode(value, null, options);
        return new Pair(k, v);
    }
    /**
     * Removes a value from the document.
     * @returns `true` if the item was found and removed.
     */
    delete(key) {
        return assertCollection(this.contents) ? this.contents.delete(key) : false;
    }
    /**
     * Removes a value from the document.
     * @returns `true` if the item was found and removed.
     */
    deleteIn(path) {
        if (isEmptyPath(path)) {
            if (this.contents == null)
                return false;
            // @ts-expect-error Presumed impossible if Strict extends false
            this.contents = null;
            return true;
        }
        return assertCollection(this.contents)
            ? this.contents.deleteIn(path)
            : false;
    }
    /**
     * Returns item at `key`, or `undefined` if not found. By default unwraps
     * scalar values from their surrounding node; to disable set `keepScalar` to
     * `true` (collections are always returned intact).
     */
    get(key, keepScalar) {
        return isCollection(this.contents)
            ? this.contents.get(key, keepScalar)
            : undefined;
    }
    /**
     * Returns item at `path`, or `undefined` if not found. By default unwraps
     * scalar values from their surrounding node; to disable set `keepScalar` to
     * `true` (collections are always returned intact).
     */
    getIn(path, keepScalar) {
        if (isEmptyPath(path))
            return !keepScalar && isScalar(this.contents)
                ? this.contents.value
                : this.contents;
        return isCollection(this.contents)
            ? this.contents.getIn(path, keepScalar)
            : undefined;
    }
    /**
     * Checks if the document includes a value with the key `key`.
     */
    has(key) {
        return isCollection(this.contents) ? this.contents.has(key) : false;
    }
    /**
     * Checks if the document includes a value at `path`.
     */
    hasIn(path) {
        if (isEmptyPath(path))
            return this.contents !== undefined;
        return isCollection(this.contents) ? this.contents.hasIn(path) : false;
    }
    /**
     * Sets a value in this document. For `!!set`, `value` needs to be a
     * boolean to add/remove the item from the set.
     */
    set(key, value) {
        if (this.contents == null) {
            // @ts-expect-error We can't really know that this matches Contents.
            this.contents = collectionFromPath(this.schema, [key], value);
        }
        else if (assertCollection(this.contents)) {
            this.contents.set(key, value);
        }
    }
    /**
     * Sets a value in this document. For `!!set`, `value` needs to be a
     * boolean to add/remove the item from the set.
     */
    setIn(path, value) {
        if (isEmptyPath(path)) {
            // @ts-expect-error We can't really know that this matches Contents.
            this.contents = value;
        }
        else if (this.contents == null) {
            // @ts-expect-error We can't really know that this matches Contents.
            this.contents = collectionFromPath(this.schema, Array.from(path), value);
        }
        else if (assertCollection(this.contents)) {
            this.contents.setIn(path, value);
        }
    }
    /**
     * Change the YAML version and schema used by the document.
     * A `null` version disables support for directives, explicit tags, anchors, and aliases.
     * It also requires the `schema` option to be given as a `Schema` instance value.
     *
     * Overrides all previously set schema options.
     */
    setSchema(version, options = {}) {
        if (typeof version === 'number')
            version = String(version);
        let opt;
        switch (version) {
            case '1.1':
                if (this.directives)
                    this.directives.yaml.version = '1.1';
                else
                    this.directives = new Directives({ version: '1.1' });
                opt = { resolveKnownTags: false, schema: 'yaml-1.1' };
                break;
            case '1.2':
            case 'next':
                if (this.directives)
                    this.directives.yaml.version = version;
                else
                    this.directives = new Directives({ version });
                opt = { resolveKnownTags: true, schema: 'core' };
                break;
            case null:
                if (this.directives)
                    delete this.directives;
                opt = null;
                break;
            default: {
                const sv = JSON.stringify(version);
                throw new Error(`Expected '1.1', '1.2' or null as first argument, but found: ${sv}`);
            }
        }
        // Not using `instanceof Schema` to allow for duck typing
        if (options.schema instanceof Object)
            this.schema = options.schema;
        else if (opt)
            this.schema = new Schema(Object.assign(opt, options));
        else
            throw new Error(`With a null YAML version, the { schema: Schema } option is required`);
    }
    // json & jsonArg are only used from toJSON()
    toJS({ json, jsonArg, mapAsMap, maxAliasCount, onAnchor, reviver } = {}) {
        const ctx = {
            anchors: new Map(),
            doc: this,
            keep: !json,
            mapAsMap: mapAsMap === true,
            mapKeyWarned: false,
            maxAliasCount: typeof maxAliasCount === 'number' ? maxAliasCount : 100
        };
        const res = toJS(this.contents, jsonArg ?? '', ctx);
        if (typeof onAnchor === 'function')
            for (const { count, res } of ctx.anchors.values())
                onAnchor(res, count);
        return typeof reviver === 'function'
            ? applyReviver(reviver, { '': res }, '', res)
            : res;
    }
    /**
     * A JSON representation of the document `contents`.
     *
     * @param jsonArg Used by `JSON.stringify` to indicate the array index or
     * property name.
     */
    toJSON(jsonArg, onAnchor) {
        return this.toJS({ json: true, jsonArg, mapAsMap: false, onAnchor });
    }
    /** A YAML representation of the document. */
    toString(options = {}) {
        if (this.errors.length > 0)
            throw new Error('Document with errors cannot be stringified');
        if ('indent' in options &&
            (!Number.isInteger(options.indent) || Number(options.indent) <= 0)) {
            const s = JSON.stringify(options.indent);
            throw new Error(`"indent" option must be a positive integer, not ${s}`);
        }
        return stringifyDocument(this, options);
    }
}
function assertCollection(contents) {
    if (isCollection(contents))
        return true;
    throw new Error('Expected a YAML collection as document contents');
}

export { Document };
|
||||
71 node_modules/yaml/browser/dist/doc/anchors.js generated vendored Normal file
@@ -0,0 +1,71 @@
import { isScalar, isCollection } from '../nodes/identity.js';
import { visit } from '../visit.js';

/**
 * Verify that the input string is a valid anchor.
 *
 * Will throw on errors.
 */
function anchorIsValid(anchor) {
    if (/[\x00-\x19\s,[\]{}]/.test(anchor)) {
        const sa = JSON.stringify(anchor);
        const msg = `Anchor must not contain whitespace or control characters: ${sa}`;
        throw new Error(msg);
    }
    return true;
}
function anchorNames(root) {
    const anchors = new Set();
    visit(root, {
        Value(_key, node) {
            if (node.anchor)
                anchors.add(node.anchor);
        }
    });
    return anchors;
}
/** Find a new anchor name with the given `prefix` and a one-indexed suffix. */
function findNewAnchor(prefix, exclude) {
    for (let i = 1; true; ++i) {
        const name = `${prefix}${i}`;
        if (!exclude.has(name))
            return name;
    }
}
function createNodeAnchors(doc, prefix) {
    const aliasObjects = [];
    const sourceObjects = new Map();
    let prevAnchors = null;
    return {
        onAnchor: (source) => {
            aliasObjects.push(source);
            prevAnchors ?? (prevAnchors = anchorNames(doc));
            const anchor = findNewAnchor(prefix, prevAnchors);
            prevAnchors.add(anchor);
            return anchor;
        },
        /**
         * With circular references, the source node is only resolved after all
         * of its child nodes are. This is why anchors are set only after all of
         * the nodes have been created.
         */
        setAnchors: () => {
            for (const source of aliasObjects) {
                const ref = sourceObjects.get(source);
                if (typeof ref === 'object' &&
                    ref.anchor &&
                    (isScalar(ref.node) || isCollection(ref.node))) {
                    ref.node.anchor = ref.anchor;
                }
                else {
                    const error = new Error('Failed to resolve repeated object (this should not happen)');
                    error.source = source;
                    throw error;
                }
            }
        },
        sourceObjects
    };
}

export { anchorIsValid, anchorNames, createNodeAnchors, findNewAnchor };
55 node_modules/yaml/browser/dist/doc/applyReviver.js generated vendored Normal file
@@ -0,0 +1,55 @@
/**
 * Applies the JSON.parse reviver algorithm as defined in the ECMA-262 spec,
 * in section 24.5.1.1 "Runtime Semantics: InternalizeJSONProperty" of the
 * 2021 edition: https://tc39.es/ecma262/#sec-json.parse
 *
 * Includes extensions for handling Map and Set objects.
 */
function applyReviver(reviver, obj, key, val) {
    if (val && typeof val === 'object') {
        if (Array.isArray(val)) {
            for (let i = 0, len = val.length; i < len; ++i) {
                const v0 = val[i];
                const v1 = applyReviver(reviver, val, String(i), v0);
                // eslint-disable-next-line @typescript-eslint/no-array-delete
                if (v1 === undefined)
                    delete val[i];
                else if (v1 !== v0)
                    val[i] = v1;
            }
        }
        else if (val instanceof Map) {
            for (const k of Array.from(val.keys())) {
                const v0 = val.get(k);
                const v1 = applyReviver(reviver, val, k, v0);
                if (v1 === undefined)
                    val.delete(k);
                else if (v1 !== v0)
                    val.set(k, v1);
            }
        }
        else if (val instanceof Set) {
            for (const v0 of Array.from(val)) {
                const v1 = applyReviver(reviver, val, v0, v0);
                if (v1 === undefined)
                    val.delete(v0);
                else if (v1 !== v0) {
                    val.delete(v0);
                    val.add(v1);
                }
            }
        }
        else {
            for (const [k, v0] of Object.entries(val)) {
                const v1 = applyReviver(reviver, val, k, v0);
                if (v1 === undefined)
                    delete val[k];
                else if (v1 !== v0)
                    val[k] = v1;
            }
        }
    }
    return reviver.call(obj, key, val);
}

export { applyReviver };
88 node_modules/yaml/browser/dist/doc/createNode.js generated vendored Normal file
@@ -0,0 +1,88 @@
import { Alias } from '../nodes/Alias.js';
import { isNode, isPair, MAP, SEQ, isDocument } from '../nodes/identity.js';
import { Scalar } from '../nodes/Scalar.js';

const defaultTagPrefix = 'tag:yaml.org,2002:';
function findTagObject(value, tagName, tags) {
    if (tagName) {
        const match = tags.filter(t => t.tag === tagName);
        const tagObj = match.find(t => !t.format) ?? match[0];
        if (!tagObj)
            throw new Error(`Tag ${tagName} not found`);
        return tagObj;
    }
    return tags.find(t => t.identify?.(value) && !t.format);
}
function createNode(value, tagName, ctx) {
    if (isDocument(value))
        value = value.contents;
    if (isNode(value))
        return value;
    if (isPair(value)) {
        const map = ctx.schema[MAP].createNode?.(ctx.schema, null, ctx);
        map.items.push(value);
        return map;
    }
    if (value instanceof String ||
        value instanceof Number ||
        value instanceof Boolean ||
        (typeof BigInt !== 'undefined' && value instanceof BigInt) // not supported everywhere
    ) {
        // https://tc39.es/ecma262/#sec-serializejsonproperty
        value = value.valueOf();
    }
    const { aliasDuplicateObjects, onAnchor, onTagObj, schema, sourceObjects } = ctx;
    // Detect duplicate references to the same object & use Alias nodes for all
    // after first. The `ref` wrapper allows for circular references to resolve.
    let ref = undefined;
    if (aliasDuplicateObjects && value && typeof value === 'object') {
        ref = sourceObjects.get(value);
        if (ref) {
            ref.anchor ?? (ref.anchor = onAnchor(value));
            return new Alias(ref.anchor);
        }
        else {
            ref = { anchor: null, node: null };
            sourceObjects.set(value, ref);
        }
    }
    if (tagName?.startsWith('!!'))
        tagName = defaultTagPrefix + tagName.slice(2);
    let tagObj = findTagObject(value, tagName, schema.tags);
    if (!tagObj) {
        if (value && typeof value.toJSON === 'function') {
            // eslint-disable-next-line @typescript-eslint/no-unsafe-call
            value = value.toJSON();
        }
        if (!value || typeof value !== 'object') {
            const node = new Scalar(value);
            if (ref)
                ref.node = node;
            return node;
        }
        tagObj =
            value instanceof Map
                ? schema[MAP]
                : Symbol.iterator in Object(value)
                    ? schema[SEQ]
                    : schema[MAP];
    }
    if (onTagObj) {
        onTagObj(tagObj);
        delete ctx.onTagObj;
    }
    const node = tagObj?.createNode
        ? tagObj.createNode(ctx.schema, value, ctx)
        : typeof tagObj?.nodeClass?.from === 'function'
            ? tagObj.nodeClass.from(ctx.schema, value, ctx)
            : new Scalar(value);
    if (tagName)
        node.tag = tagName;
    else if (!tagObj.default)
        node.tag = tagObj.tag;
    if (ref)
        ref.node = node;
    return node;
}

export { createNode };
176 node_modules/yaml/browser/dist/doc/directives.js generated vendored Normal file
@@ -0,0 +1,176 @@
import { isNode } from '../nodes/identity.js';
import { visit } from '../visit.js';

const escapeChars = {
    '!': '%21',
    ',': '%2C',
    '[': '%5B',
    ']': '%5D',
    '{': '%7B',
    '}': '%7D'
};
const escapeTagName = (tn) => tn.replace(/[!,[\]{}]/g, ch => escapeChars[ch]);
class Directives {
    constructor(yaml, tags) {
        /**
         * The directives-end/doc-start marker `---`. If `null`, a marker may still be
         * included in the document's stringified representation.
         */
        this.docStart = null;
        /** The doc-end marker `...`. */
        this.docEnd = false;
        this.yaml = Object.assign({}, Directives.defaultYaml, yaml);
        this.tags = Object.assign({}, Directives.defaultTags, tags);
    }
    clone() {
        const copy = new Directives(this.yaml, this.tags);
        copy.docStart = this.docStart;
        return copy;
    }
    /**
     * During parsing, get a Directives instance for the current document and
     * update the stream state according to the current version's spec.
     */
    atDocument() {
        const res = new Directives(this.yaml, this.tags);
        switch (this.yaml.version) {
            case '1.1':
                this.atNextDocument = true;
                break;
            case '1.2':
                this.atNextDocument = false;
                this.yaml = {
                    explicit: Directives.defaultYaml.explicit,
                    version: '1.2'
                };
                this.tags = Object.assign({}, Directives.defaultTags);
                break;
        }
        return res;
    }
    /**
     * @param onError - May be called even if the action was successful
     * @returns `true` on success
     */
    add(line, onError) {
        if (this.atNextDocument) {
            this.yaml = { explicit: Directives.defaultYaml.explicit, version: '1.1' };
            this.tags = Object.assign({}, Directives.defaultTags);
            this.atNextDocument = false;
        }
        const parts = line.trim().split(/[ \t]+/);
        const name = parts.shift();
        switch (name) {
            case '%TAG': {
                if (parts.length !== 2) {
                    onError(0, '%TAG directive should contain exactly two parts');
                    if (parts.length < 2)
                        return false;
                }
                const [handle, prefix] = parts;
                this.tags[handle] = prefix;
                return true;
            }
            case '%YAML': {
                this.yaml.explicit = true;
                if (parts.length !== 1) {
                    onError(0, '%YAML directive should contain exactly one part');
                    return false;
                }
                const [version] = parts;
                if (version === '1.1' || version === '1.2') {
                    this.yaml.version = version;
                    return true;
                }
                else {
                    const isValid = /^\d+\.\d+$/.test(version);
                    onError(6, `Unsupported YAML version ${version}`, isValid);
                    return false;
                }
            }
            default:
                onError(0, `Unknown directive ${name}`, true);
                return false;
        }
    }
    /**
     * Resolves a tag, matching handles to those defined in %TAG directives.
     *
     * @returns Resolved tag, which may also be the non-specific tag `'!'` or a
     *   `'!local'` tag, or `null` if unresolvable.
     */
    tagName(source, onError) {
        if (source === '!')
            return '!'; // non-specific tag
        if (source[0] !== '!') {
            onError(`Not a valid tag: ${source}`);
            return null;
        }
        if (source[1] === '<') {
            const verbatim = source.slice(2, -1);
            if (verbatim === '!' || verbatim === '!!') {
                onError(`Verbatim tags aren't resolved, so ${source} is invalid.`);
                return null;
            }
            if (source[source.length - 1] !== '>')
                onError('Verbatim tags must end with a >');
            return verbatim;
        }
        const [, handle, suffix] = source.match(/^(.*!)([^!]*)$/s);
        if (!suffix)
            onError(`The ${source} tag has no suffix`);
        const prefix = this.tags[handle];
        if (prefix) {
            try {
                return prefix + decodeURIComponent(suffix);
            }
            catch (error) {
                onError(String(error));
                return null;
            }
        }
        if (handle === '!')
            return source; // local tag
        onError(`Could not resolve tag: ${source}`);
        return null;
    }
    /**
     * Given a fully resolved tag, returns its printable string form,
     * taking into account current tag prefixes and defaults.
     */
    tagString(tag) {
        for (const [handle, prefix] of Object.entries(this.tags)) {
            if (tag.startsWith(prefix))
                return handle + escapeTagName(tag.substring(prefix.length));
        }
        return tag[0] === '!' ? tag : `!<${tag}>`;
    }
    toString(doc) {
        const lines = this.yaml.explicit
            ? [`%YAML ${this.yaml.version || '1.2'}`]
            : [];
        const tagEntries = Object.entries(this.tags);
        let tagNames;
        if (doc && tagEntries.length > 0 && isNode(doc.contents)) {
            const tags = {};
            visit(doc.contents, (_key, node) => {
                if (isNode(node) && node.tag)
                    tags[node.tag] = true;
            });
            tagNames = Object.keys(tags);
        }
        else
            tagNames = [];
        for (const [handle, prefix] of tagEntries) {
            if (handle === '!!' && prefix === 'tag:yaml.org,2002:')
                continue;
            if (!doc || tagNames.some(tn => tn.startsWith(prefix)))
                lines.push(`%TAG ${handle} ${prefix}`);
        }
        return lines.join('\n');
    }
}
Directives.defaultYaml = { explicit: false, version: '1.2' };
Directives.defaultTags = { '!!': 'tag:yaml.org,2002:' };

export { Directives };
57 node_modules/yaml/browser/dist/errors.js generated vendored Normal file
@@ -0,0 +1,57 @@
class YAMLError extends Error {
    constructor(name, pos, code, message) {
        super();
        this.name = name;
        this.code = code;
        this.message = message;
        this.pos = pos;
    }
}
class YAMLParseError extends YAMLError {
    constructor(pos, code, message) {
        super('YAMLParseError', pos, code, message);
    }
}
class YAMLWarning extends YAMLError {
    constructor(pos, code, message) {
        super('YAMLWarning', pos, code, message);
    }
}
const prettifyError = (src, lc) => (error) => {
    if (error.pos[0] === -1)
        return;
    error.linePos = error.pos.map(pos => lc.linePos(pos));
    const { line, col } = error.linePos[0];
    error.message += ` at line ${line}, column ${col}`;
    let ci = col - 1;
    let lineStr = src
        .substring(lc.lineStarts[line - 1], lc.lineStarts[line])
        .replace(/[\n\r]+$/, '');
    // Trim to max 80 chars, keeping col position near the middle
    if (ci >= 60 && lineStr.length > 80) {
        const trimStart = Math.min(ci - 39, lineStr.length - 79);
        lineStr = '…' + lineStr.substring(trimStart);
        ci -= trimStart - 1;
    }
    if (lineStr.length > 80)
        lineStr = lineStr.substring(0, 79) + '…';
    // Include previous line in context if pointing at line start
    if (line > 1 && /^ *$/.test(lineStr.substring(0, ci))) {
        // Regexp won't match if start is trimmed
        let prev = src.substring(lc.lineStarts[line - 2], lc.lineStarts[line - 1]);
        if (prev.length > 80)
            prev = prev.substring(0, 79) + '…\n';
        lineStr = prev + lineStr;
    }
    if (/[^ ]/.test(lineStr)) {
        let count = 1;
        const end = error.linePos[1];
        if (end?.line === line && end.col > col) {
            count = Math.max(1, Math.min(end.col - col, 80 - ci));
        }
        const pointer = ' '.repeat(ci) + '^'.repeat(count);
        error.message += `:\n\n${lineStr}\n${pointer}\n`;
    }
};

export { YAMLError, YAMLParseError, YAMLWarning, prettifyError };
17 node_modules/yaml/browser/dist/index.js generated vendored Normal file
@@ -0,0 +1,17 @@
export { Composer } from './compose/composer.js';
export { Document } from './doc/Document.js';
export { Schema } from './schema/Schema.js';
export { YAMLError, YAMLParseError, YAMLWarning } from './errors.js';
export { Alias } from './nodes/Alias.js';
export { isAlias, isCollection, isDocument, isMap, isNode, isPair, isScalar, isSeq } from './nodes/identity.js';
export { Pair } from './nodes/Pair.js';
export { Scalar } from './nodes/Scalar.js';
export { YAMLMap } from './nodes/YAMLMap.js';
export { YAMLSeq } from './nodes/YAMLSeq.js';
import * as cst from './parse/cst.js';
export { cst as CST };
export { Lexer } from './parse/lexer.js';
export { LineCounter } from './parse/line-counter.js';
export { Parser } from './parse/parser.js';
export { parse, parseAllDocuments, parseDocument, stringify } from './public-api.js';
export { visit, visitAsync } from './visit.js';
11 node_modules/yaml/browser/dist/log.js generated vendored Normal file
@@ -0,0 +1,11 @@
function debug(logLevel, ...messages) {
    if (logLevel === 'debug')
        console.log(...messages);
}
function warn(logLevel, warning) {
    if (logLevel === 'debug' || logLevel === 'warn') {
        console.warn(warning);
    }
}

export { debug, warn };
114 node_modules/yaml/browser/dist/nodes/Alias.js generated vendored Normal file
@@ -0,0 +1,114 @@
import { anchorIsValid } from '../doc/anchors.js';
import { visit } from '../visit.js';
import { ALIAS, isAlias, isCollection, isPair, hasAnchor } from './identity.js';
import { NodeBase } from './Node.js';
import { toJS } from './toJS.js';

class Alias extends NodeBase {
    constructor(source) {
        super(ALIAS);
        this.source = source;
        Object.defineProperty(this, 'tag', {
            set() {
                throw new Error('Alias nodes cannot have tags');
            }
        });
    }
    /**
     * Resolve the value of this alias within `doc`, finding the last
     * instance of the `source` anchor before this node.
     */
    resolve(doc, ctx) {
        let nodes;
        if (ctx?.aliasResolveCache) {
            nodes = ctx.aliasResolveCache;
        }
        else {
            nodes = [];
            visit(doc, {
                Node: (_key, node) => {
                    if (isAlias(node) || hasAnchor(node))
                        nodes.push(node);
                }
            });
            if (ctx)
                ctx.aliasResolveCache = nodes;
        }
        let found = undefined;
        for (const node of nodes) {
            if (node === this)
                break;
            if (node.anchor === this.source)
                found = node;
        }
        return found;
    }
    toJSON(_arg, ctx) {
        if (!ctx)
            return { source: this.source };
        const { anchors, doc, maxAliasCount } = ctx;
        const source = this.resolve(doc, ctx);
        if (!source) {
            const msg = `Unresolved alias (the anchor must be set before the alias): ${this.source}`;
            throw new ReferenceError(msg);
        }
        let data = anchors.get(source);
        if (!data) {
            // Resolve anchors for Node.prototype.toJS()
            toJS(source, null, ctx);
            data = anchors.get(source);
        }
        /* istanbul ignore if */
        if (data?.res === undefined) {
            const msg = 'This should not happen: Alias anchor was not resolved?';
            throw new ReferenceError(msg);
        }
        if (maxAliasCount >= 0) {
            data.count += 1;
            if (data.aliasCount === 0)
                data.aliasCount = getAliasCount(doc, source, anchors);
            if (data.count * data.aliasCount > maxAliasCount) {
                const msg = 'Excessive alias count indicates a resource exhaustion attack';
                throw new ReferenceError(msg);
            }
        }
        return data.res;
    }
    toString(ctx, _onComment, _onChompKeep) {
        const src = `*${this.source}`;
        if (ctx) {
            anchorIsValid(this.source);
            if (ctx.options.verifyAliasOrder && !ctx.anchors.has(this.source)) {
                const msg = `Unresolved alias (the anchor must be set before the alias): ${this.source}`;
                throw new Error(msg);
            }
            if (ctx.implicitKey)
                return `${src} `;
        }
        return src;
    }
}
function getAliasCount(doc, node, anchors) {
    if (isAlias(node)) {
        const source = node.resolve(doc);
        const anchor = anchors && source && anchors.get(source);
        return anchor ? anchor.count * anchor.aliasCount : 0;
    }
    else if (isCollection(node)) {
        let count = 0;
        for (const item of node.items) {
            const c = getAliasCount(doc, item, anchors);
            if (c > count)
                count = c;
        }
        return count;
    }
    else if (isPair(node)) {
        const kc = getAliasCount(doc, node.key, anchors);
        const vc = getAliasCount(doc, node.value, anchors);
        return Math.max(kc, vc);
    }
    return 1;
}

export { Alias };
147 node_modules/yaml/browser/dist/nodes/Collection.js generated vendored Normal file
@@ -0,0 +1,147 @@
import { createNode } from '../doc/createNode.js';
import { isNode, isPair, isCollection, isScalar } from './identity.js';
import { NodeBase } from './Node.js';

function collectionFromPath(schema, path, value) {
    let v = value;
    for (let i = path.length - 1; i >= 0; --i) {
        const k = path[i];
        if (typeof k === 'number' && Number.isInteger(k) && k >= 0) {
            const a = [];
            a[k] = v;
            v = a;
        }
        else {
            v = new Map([[k, v]]);
        }
    }
    return createNode(v, undefined, {
        aliasDuplicateObjects: false,
        keepUndefined: false,
        onAnchor: () => {
            throw new Error('This should not happen, please report a bug.');
        },
        schema,
        sourceObjects: new Map()
    });
}
// Type guard is intentionally a little wrong so as to be more useful,
// as it does not cover untypable empty non-string iterables (e.g. []).
const isEmptyPath = (path) => path == null ||
    (typeof path === 'object' && !!path[Symbol.iterator]().next().done);
class Collection extends NodeBase {
    constructor(type, schema) {
        super(type);
        Object.defineProperty(this, 'schema', {
            value: schema,
            configurable: true,
            enumerable: false,
            writable: true
        });
    }
    /**
     * Create a copy of this collection.
     *
     * @param schema - If defined, overwrites the original's schema
     */
    clone(schema) {
        const copy = Object.create(Object.getPrototypeOf(this), Object.getOwnPropertyDescriptors(this));
        if (schema)
            copy.schema = schema;
        copy.items = copy.items.map(it => isNode(it) || isPair(it) ? it.clone(schema) : it);
        if (this.range)
            copy.range = this.range.slice();
        return copy;
    }
    /**
     * Adds a value to the collection. For `!!map` and `!!omap` the value must
     * be a Pair instance or a `{ key, value }` object, which may not have a key
     * that already exists in the map.
     */
    addIn(path, value) {
        if (isEmptyPath(path))
            this.add(value);
        else {
            const [key, ...rest] = path;
            const node = this.get(key, true);
            if (isCollection(node))
                node.addIn(rest, value);
            else if (node === undefined && this.schema)
                this.set(key, collectionFromPath(this.schema, rest, value));
            else
                throw new Error(`Expected YAML collection at ${key}. Remaining path: ${rest}`);
        }
    }
    /**
     * Removes a value from the collection.
     * @returns `true` if the item was found and removed.
     */
    deleteIn(path) {
        const [key, ...rest] = path;
        if (rest.length === 0)
            return this.delete(key);
        const node = this.get(key, true);
        if (isCollection(node))
            return node.deleteIn(rest);
        else
            throw new Error(`Expected YAML collection at ${key}. Remaining path: ${rest}`);
    }
    /**
     * Returns item at `key`, or `undefined` if not found. By default unwraps
     * scalar values from their surrounding node; to disable set `keepScalar` to
     * `true` (collections are always returned intact).
     */
    getIn(path, keepScalar) {
        const [key, ...rest] = path;
        const node = this.get(key, true);
        if (rest.length === 0)
            return !keepScalar && isScalar(node) ? node.value : node;
        else
            return isCollection(node) ? node.getIn(rest, keepScalar) : undefined;
    }
    hasAllNullValues(allowScalar) {
        return this.items.every(node => {
            if (!isPair(node))
                return false;
            const n = node.value;
            return (n == null ||
                (allowScalar &&
                    isScalar(n) &&
                    n.value == null &&
                    !n.commentBefore &&
                    !n.comment &&
                    !n.tag));
        });
    }
    /**
     * Checks if the collection includes a value with the key `key`.
     */
    hasIn(path) {
        const [key, ...rest] = path;
        if (rest.length === 0)
            return this.has(key);
        const node = this.get(key, true);
        return isCollection(node) ? node.hasIn(rest) : false;
    }
    /**
     * Sets a value in this collection. For `!!set`, `value` needs to be a
     * boolean to add/remove the item from the set.
     */
    setIn(path, value) {
        const [key, ...rest] = path;
        if (rest.length === 0) {
            this.set(key, value);
        }
        else {
            const node = this.get(key, true);
            if (isCollection(node))
                node.setIn(rest, value);
            else if (node === undefined && this.schema)
                this.set(key, collectionFromPath(this.schema, rest, value));
            else
                throw new Error(`Expected YAML collection at ${key}. Remaining path: ${rest}`);
        }
    }
}

export { Collection, collectionFromPath, isEmptyPath };
38 node_modules/yaml/browser/dist/nodes/Node.js generated vendored Normal file
@@ -0,0 +1,38 @@
import { applyReviver } from '../doc/applyReviver.js';
import { NODE_TYPE, isDocument } from './identity.js';
import { toJS } from './toJS.js';

class NodeBase {
    constructor(type) {
        Object.defineProperty(this, NODE_TYPE, { value: type });
    }
    /** Create a copy of this node. */
    clone() {
        const copy = Object.create(Object.getPrototypeOf(this), Object.getOwnPropertyDescriptors(this));
        if (this.range)
            copy.range = this.range.slice();
        return copy;
    }
    /** A plain JavaScript representation of this node. */
    toJS(doc, { mapAsMap, maxAliasCount, onAnchor, reviver } = {}) {
        if (!isDocument(doc))
            throw new TypeError('A document argument is required');
        const ctx = {
            anchors: new Map(),
            doc,
            keep: true,
            mapAsMap: mapAsMap === true,
            mapKeyWarned: false,
            maxAliasCount: typeof maxAliasCount === 'number' ? maxAliasCount : 100
        };
        const res = toJS(this, '', ctx);
        if (typeof onAnchor === 'function')
            for (const { count, res } of ctx.anchors.values())
                onAnchor(res, count);
        return typeof reviver === 'function'
            ? applyReviver(reviver, { '': res }, '', res)
            : res;
    }
}

export { NodeBase };
36 node_modules/yaml/browser/dist/nodes/Pair.js generated vendored Normal file
@@ -0,0 +1,36 @@
import { createNode } from '../doc/createNode.js';
import { stringifyPair } from '../stringify/stringifyPair.js';
import { addPairToJSMap } from './addPairToJSMap.js';
import { NODE_TYPE, PAIR, isNode } from './identity.js';

function createPair(key, value, ctx) {
    const k = createNode(key, undefined, ctx);
    const v = createNode(value, undefined, ctx);
    return new Pair(k, v);
}
class Pair {
    constructor(key, value = null) {
        Object.defineProperty(this, NODE_TYPE, { value: PAIR });
        this.key = key;
        this.value = value;
    }
    clone(schema) {
        let { key, value } = this;
        if (isNode(key))
            key = key.clone(schema);
        if (isNode(value))
            value = value.clone(schema);
        return new Pair(key, value);
    }
    toJSON(_, ctx) {
        const pair = ctx?.mapAsMap ? new Map() : {};
        return addPairToJSMap(ctx, pair, this);
    }
    toString(ctx, onComment, onChompKeep) {
        return ctx?.doc
            ? stringifyPair(this, ctx, onComment, onChompKeep)
            : JSON.stringify(this);
    }
}

export { Pair, createPair };
24 node_modules/yaml/browser/dist/nodes/Scalar.js generated vendored Normal file
@@ -0,0 +1,24 @@
import { SCALAR } from './identity.js';
import { NodeBase } from './Node.js';
import { toJS } from './toJS.js';

const isScalarValue = (value) => !value || (typeof value !== 'function' && typeof value !== 'object');
class Scalar extends NodeBase {
    constructor(value) {
        super(SCALAR);
        this.value = value;
    }
    toJSON(arg, ctx) {
        return ctx?.keep ? this.value : toJS(this.value, arg, ctx);
    }
    toString() {
        return String(this.value);
    }
}
Scalar.BLOCK_FOLDED = 'BLOCK_FOLDED';
Scalar.BLOCK_LITERAL = 'BLOCK_LITERAL';
Scalar.PLAIN = 'PLAIN';
Scalar.QUOTE_DOUBLE = 'QUOTE_DOUBLE';
Scalar.QUOTE_SINGLE = 'QUOTE_SINGLE';

export { Scalar, isScalarValue };
144 node_modules/yaml/browser/dist/nodes/YAMLMap.js generated vendored Normal file
@@ -0,0 +1,144 @@
import { stringifyCollection } from '../stringify/stringifyCollection.js';
import { addPairToJSMap } from './addPairToJSMap.js';
import { Collection } from './Collection.js';
import { MAP, isPair, isScalar } from './identity.js';
import { Pair, createPair } from './Pair.js';
import { isScalarValue } from './Scalar.js';

function findPair(items, key) {
    const k = isScalar(key) ? key.value : key;
    for (const it of items) {
        if (isPair(it)) {
            if (it.key === key || it.key === k)
                return it;
            if (isScalar(it.key) && it.key.value === k)
                return it;
        }
    }
    return undefined;
}
class YAMLMap extends Collection {
    static get tagName() {
        return 'tag:yaml.org,2002:map';
    }
    constructor(schema) {
        super(MAP, schema);
        this.items = [];
    }
    /**
     * A generic collection parsing method that can be extended
     * to other node classes that inherit from YAMLMap
     */
    static from(schema, obj, ctx) {
        const { keepUndefined, replacer } = ctx;
        const map = new this(schema);
        const add = (key, value) => {
            if (typeof replacer === 'function')
                value = replacer.call(obj, key, value);
            else if (Array.isArray(replacer) && !replacer.includes(key))
                return;
            if (value !== undefined || keepUndefined)
                map.items.push(createPair(key, value, ctx));
        };
        if (obj instanceof Map) {
            for (const [key, value] of obj)
                add(key, value);
        }
        else if (obj && typeof obj === 'object') {
            for (const key of Object.keys(obj))
                add(key, obj[key]);
        }
        if (typeof schema.sortMapEntries === 'function') {
            map.items.sort(schema.sortMapEntries);
        }
        return map;
    }
    /**
     * Adds a value to the collection.
     *
     * @param overwrite - If not set `true`, using a key that is already in the
     *   collection will throw. Otherwise, overwrites the previous value.
     */
    add(pair, overwrite) {
        let _pair;
        if (isPair(pair))
            _pair = pair;
        else if (!pair || typeof pair !== 'object' || !('key' in pair)) {
            // In TypeScript, this never happens.
            _pair = new Pair(pair, pair?.value);
        }
        else
            _pair = new Pair(pair.key, pair.value);
        const prev = findPair(this.items, _pair.key);
        const sortEntries = this.schema?.sortMapEntries;
        if (prev) {
            if (!overwrite)
                throw new Error(`Key ${_pair.key} already set`);
            // For scalars, keep the old node & its comments and anchors
            if (isScalar(prev.value) && isScalarValue(_pair.value))
                prev.value.value = _pair.value;
            else
                prev.value = _pair.value;
        }
        else if (sortEntries) {
            const i = this.items.findIndex(item => sortEntries(_pair, item) < 0);
            if (i === -1)
                this.items.push(_pair);
            else
                this.items.splice(i, 0, _pair);
        }
        else {
            this.items.push(_pair);
        }
    }
    delete(key) {
        const it = findPair(this.items, key);
        if (!it)
            return false;
        const del = this.items.splice(this.items.indexOf(it), 1);
        return del.length > 0;
    }
    get(key, keepScalar) {
        const it = findPair(this.items, key);
        const node = it?.value;
        return (!keepScalar && isScalar(node) ? node.value : node) ?? undefined;
    }
    has(key) {
        return !!findPair(this.items, key);
    }
    set(key, value) {
        this.add(new Pair(key, value), true);
    }
    /**
     * @param ctx - Conversion context, originally set in Document#toJS()
     * @param {Class} Type - If set, forces the returned collection type
     * @returns Instance of Type, Map, or Object
     */
    toJSON(_, ctx, Type) {
        const map = Type ? new Type() : ctx?.mapAsMap ? new Map() : {};
        if (ctx?.onCreate)
            ctx.onCreate(map);
        for (const item of this.items)
            addPairToJSMap(ctx, map, item);
        return map;
    }
    toString(ctx, onComment, onChompKeep) {
        if (!ctx)
            return JSON.stringify(this);
        for (const item of this.items) {
            if (!isPair(item))
                throw new Error(`Map items must all be pairs; found ${JSON.stringify(item)} instead`);
        }
        if (!ctx.allNullValues && this.hasAllNullValues(false))
            ctx = Object.assign({}, ctx, { allNullValues: true });
        return stringifyCollection(this, ctx, {
            blockItemPrefix: '',
            flowChars: { start: '{', end: '}' },
            itemIndent: ctx.indent || '',
            onChompKeep,
            onComment
        });
    }
}

export { YAMLMap, findPair };
113
node_modules/yaml/browser/dist/nodes/YAMLSeq.js
generated
vendored
Normal file
@@ -0,0 +1,113 @@
import { createNode } from '../doc/createNode.js';
import { stringifyCollection } from '../stringify/stringifyCollection.js';
import { Collection } from './Collection.js';
import { SEQ, isScalar } from './identity.js';
import { isScalarValue } from './Scalar.js';
import { toJS } from './toJS.js';

class YAMLSeq extends Collection {
    static get tagName() {
        return 'tag:yaml.org,2002:seq';
    }
    constructor(schema) {
        super(SEQ, schema);
        this.items = [];
    }
    add(value) {
        this.items.push(value);
    }
    /**
     * Removes a value from the collection.
     *
     * `key` must contain a representation of an integer for this to succeed.
     * It may be wrapped in a `Scalar`.
     *
     * @returns `true` if the item was found and removed.
     */
    delete(key) {
        const idx = asItemIndex(key);
        if (typeof idx !== 'number')
            return false;
        const del = this.items.splice(idx, 1);
        return del.length > 0;
    }
    get(key, keepScalar) {
        const idx = asItemIndex(key);
        if (typeof idx !== 'number')
            return undefined;
        const it = this.items[idx];
        return !keepScalar && isScalar(it) ? it.value : it;
    }
    /**
     * Checks if the collection includes a value with the key `key`.
     *
     * `key` must contain a representation of an integer for this to succeed.
     * It may be wrapped in a `Scalar`.
     */
    has(key) {
        const idx = asItemIndex(key);
        return typeof idx === 'number' && idx < this.items.length;
    }
    /**
     * Sets a value in this collection. For `!!set`, `value` needs to be a
     * boolean to add/remove the item from the set.
     *
     * If `key` does not contain a representation of an integer, this will throw.
     * It may be wrapped in a `Scalar`.
     */
    set(key, value) {
        const idx = asItemIndex(key);
        if (typeof idx !== 'number')
            throw new Error(`Expected a valid index, not ${key}.`);
        const prev = this.items[idx];
        if (isScalar(prev) && isScalarValue(value))
            prev.value = value;
        else
            this.items[idx] = value;
    }
    toJSON(_, ctx) {
        const seq = [];
        if (ctx?.onCreate)
            ctx.onCreate(seq);
        let i = 0;
        for (const item of this.items)
            seq.push(toJS(item, String(i++), ctx));
        return seq;
    }
    toString(ctx, onComment, onChompKeep) {
        if (!ctx)
            return JSON.stringify(this);
        return stringifyCollection(this, ctx, {
            blockItemPrefix: '- ',
            flowChars: { start: '[', end: ']' },
            itemIndent: (ctx.indent || '') + '  ',
            onChompKeep,
            onComment
        });
    }
    static from(schema, obj, ctx) {
        const { replacer } = ctx;
        const seq = new this(schema);
        if (obj && Symbol.iterator in Object(obj)) {
            let i = 0;
            for (let it of obj) {
                if (typeof replacer === 'function') {
                    const key = obj instanceof Set ? it : String(i++);
                    it = replacer.call(obj, key, it);
                }
                seq.items.push(createNode(it, undefined, ctx));
            }
        }
        return seq;
    }
}
function asItemIndex(key) {
    let idx = isScalar(key) ? key.value : key;
    if (idx && typeof idx === 'string')
        idx = Number(idx);
    return typeof idx === 'number' && Number.isInteger(idx) && idx >= 0
        ? idx
        : null;
}

export { YAMLSeq };
63
node_modules/yaml/browser/dist/nodes/addPairToJSMap.js
generated
vendored
Normal file
@@ -0,0 +1,63 @@
import { warn } from '../log.js';
import { isMergeKey, addMergeToJSMap } from '../schema/yaml-1.1/merge.js';
import { createStringifyContext } from '../stringify/stringify.js';
import { isNode } from './identity.js';
import { toJS } from './toJS.js';

function addPairToJSMap(ctx, map, { key, value }) {
    if (isNode(key) && key.addToJSMap)
        key.addToJSMap(ctx, map, value);
    // TODO: Should drop this special case for bare << handling
    else if (isMergeKey(ctx, key))
        addMergeToJSMap(ctx, map, value);
    else {
        const jsKey = toJS(key, '', ctx);
        if (map instanceof Map) {
            map.set(jsKey, toJS(value, jsKey, ctx));
        }
        else if (map instanceof Set) {
            map.add(jsKey);
        }
        else {
            const stringKey = stringifyKey(key, jsKey, ctx);
            const jsValue = toJS(value, stringKey, ctx);
            if (stringKey in map)
                Object.defineProperty(map, stringKey, {
                    value: jsValue,
                    writable: true,
                    enumerable: true,
                    configurable: true
                });
            else
                map[stringKey] = jsValue;
        }
    }
    return map;
}
function stringifyKey(key, jsKey, ctx) {
    if (jsKey === null)
        return '';
    // eslint-disable-next-line @typescript-eslint/no-base-to-string
    if (typeof jsKey !== 'object')
        return String(jsKey);
    if (isNode(key) && ctx?.doc) {
        const strCtx = createStringifyContext(ctx.doc, {});
        strCtx.anchors = new Set();
        for (const node of ctx.anchors.keys())
            strCtx.anchors.add(node.anchor);
        strCtx.inFlow = true;
        strCtx.inStringifyKey = true;
        const strKey = key.toString(strCtx);
        if (!ctx.mapKeyWarned) {
            let jsonStr = JSON.stringify(strKey);
            if (jsonStr.length > 40)
                jsonStr = jsonStr.substring(0, 36) + '..."';
            warn(ctx.doc.options.logLevel, `Keys with collection values will be stringified due to JS Object restrictions: ${jsonStr}. Set mapAsMap: true to use object keys.`);
            ctx.mapKeyWarned = true;
        }
        return strKey;
    }
    return JSON.stringify(jsKey);
}

export { addPairToJSMap };
36
node_modules/yaml/browser/dist/nodes/identity.js
generated
vendored
Normal file
@@ -0,0 +1,36 @@
const ALIAS = Symbol.for('yaml.alias');
const DOC = Symbol.for('yaml.document');
const MAP = Symbol.for('yaml.map');
const PAIR = Symbol.for('yaml.pair');
const SCALAR = Symbol.for('yaml.scalar');
const SEQ = Symbol.for('yaml.seq');
const NODE_TYPE = Symbol.for('yaml.node.type');
const isAlias = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === ALIAS;
const isDocument = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === DOC;
const isMap = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === MAP;
const isPair = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === PAIR;
const isScalar = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === SCALAR;
const isSeq = (node) => !!node && typeof node === 'object' && node[NODE_TYPE] === SEQ;
function isCollection(node) {
    if (node && typeof node === 'object')
        switch (node[NODE_TYPE]) {
            case MAP:
            case SEQ:
                return true;
        }
    return false;
}
function isNode(node) {
    if (node && typeof node === 'object')
        switch (node[NODE_TYPE]) {
            case ALIAS:
            case MAP:
            case SCALAR:
            case SEQ:
                return true;
        }
    return false;
}
const hasAnchor = (node) => (isScalar(node) || isCollection(node)) && !!node.anchor;

export { ALIAS, DOC, MAP, NODE_TYPE, PAIR, SCALAR, SEQ, hasAnchor, isAlias, isCollection, isDocument, isMap, isNode, isPair, isScalar, isSeq };
37
node_modules/yaml/browser/dist/nodes/toJS.js
generated
vendored
Normal file
@@ -0,0 +1,37 @@
import { hasAnchor } from './identity.js';

/**
 * Recursively convert any node or its contents to native JavaScript
 *
 * @param value - The input value
 * @param arg - If `value` defines a `toJSON()` method, use this
 *   as its first argument
 * @param ctx - Conversion context, originally set in Document#toJS(). If
 *   `{ keep: true }` is not set, output should be suitable for JSON
 *   stringification.
 */
function toJS(value, arg, ctx) {
    // eslint-disable-next-line @typescript-eslint/no-unsafe-return
    if (Array.isArray(value))
        return value.map((v, i) => toJS(v, String(i), ctx));
    if (value && typeof value.toJSON === 'function') {
        // eslint-disable-next-line @typescript-eslint/no-unsafe-call
        if (!ctx || !hasAnchor(value))
            return value.toJSON(arg, ctx);
        const data = { aliasCount: 0, count: 1, res: undefined };
        ctx.anchors.set(value, data);
        ctx.onCreate = res => {
            data.res = res;
            delete ctx.onCreate;
        };
        const res = value.toJSON(arg, ctx);
        if (ctx.onCreate)
            ctx.onCreate(res);
        return res;
    }
    if (typeof value === 'bigint' && !ctx?.keep)
        return Number(value);
    return value;
}

export { toJS };
214
node_modules/yaml/browser/dist/parse/cst-scalar.js
generated
vendored
Normal file
@@ -0,0 +1,214 @@
import { resolveBlockScalar } from '../compose/resolve-block-scalar.js';
import { resolveFlowScalar } from '../compose/resolve-flow-scalar.js';
import { YAMLParseError } from '../errors.js';
import { stringifyString } from '../stringify/stringifyString.js';

function resolveAsScalar(token, strict = true, onError) {
    if (token) {
        const _onError = (pos, code, message) => {
            const offset = typeof pos === 'number' ? pos : Array.isArray(pos) ? pos[0] : pos.offset;
            if (onError)
                onError(offset, code, message);
            else
                throw new YAMLParseError([offset, offset + 1], code, message);
        };
        switch (token.type) {
            case 'scalar':
            case 'single-quoted-scalar':
            case 'double-quoted-scalar':
                return resolveFlowScalar(token, strict, _onError);
            case 'block-scalar':
                return resolveBlockScalar({ options: { strict } }, token, _onError);
        }
    }
    return null;
}
/**
 * Create a new scalar token with `value`
 *
 * Values that represent an actual string but may be parsed as a different type should use a `type` other than `'PLAIN'`,
 * as this function does not support any schema operations and won't check for such conflicts.
 *
 * @param value The string representation of the value, which will have its content properly indented.
 * @param context.end Comments and whitespace after the end of the value, or after the block scalar header. If undefined, a newline will be added.
 * @param context.implicitKey Being within an implicit key may affect the resolved type of the token's value.
 * @param context.indent The indent level of the token.
 * @param context.inFlow Is this scalar within a flow collection? This may affect the resolved type of the token's value.
 * @param context.offset The offset position of the token.
 * @param context.type The preferred type of the scalar token. If undefined, the previous type of the `token` will be used, defaulting to `'PLAIN'`.
 */
function createScalarToken(value, context) {
    const { implicitKey = false, indent, inFlow = false, offset = -1, type = 'PLAIN' } = context;
    const source = stringifyString({ type, value }, {
        implicitKey,
        indent: indent > 0 ? ' '.repeat(indent) : '',
        inFlow,
        options: { blockQuote: true, lineWidth: -1 }
    });
    const end = context.end ?? [
        { type: 'newline', offset: -1, indent, source: '\n' }
    ];
    switch (source[0]) {
        case '|':
        case '>': {
            const he = source.indexOf('\n');
            const head = source.substring(0, he);
            const body = source.substring(he + 1) + '\n';
            const props = [
                { type: 'block-scalar-header', offset, indent, source: head }
            ];
            if (!addEndtoBlockProps(props, end))
                props.push({ type: 'newline', offset: -1, indent, source: '\n' });
            return { type: 'block-scalar', offset, indent, props, source: body };
        }
        case '"':
            return { type: 'double-quoted-scalar', offset, indent, source, end };
        case "'":
            return { type: 'single-quoted-scalar', offset, indent, source, end };
        default:
            return { type: 'scalar', offset, indent, source, end };
    }
}
/**
 * Set the value of `token` to the given string `value`, overwriting any previous contents and type that it may have.
 *
 * Best efforts are made to retain any comments previously associated with the `token`,
 * though all contents within a collection's `items` will be overwritten.
 *
 * Values that represent an actual string but may be parsed as a different type should use a `type` other than `'PLAIN'`,
 * as this function does not support any schema operations and won't check for such conflicts.
 *
 * @param token Any token. If it does not include an `indent` value, the value will be stringified as if it were an implicit key.
 * @param value The string representation of the value, which will have its content properly indented.
 * @param context.afterKey In most cases, values after a key should have an additional level of indentation.
 * @param context.implicitKey Being within an implicit key may affect the resolved type of the token's value.
 * @param context.inFlow Being within a flow collection may affect the resolved type of the token's value.
 * @param context.type The preferred type of the scalar token. If undefined, the previous type of the `token` will be used, defaulting to `'PLAIN'`.
 */
function setScalarValue(token, value, context = {}) {
    let { afterKey = false, implicitKey = false, inFlow = false, type } = context;
    let indent = 'indent' in token ? token.indent : null;
    if (afterKey && typeof indent === 'number')
        indent += 2;
    if (!type)
        switch (token.type) {
            case 'single-quoted-scalar':
                type = 'QUOTE_SINGLE';
                break;
            case 'double-quoted-scalar':
                type = 'QUOTE_DOUBLE';
                break;
            case 'block-scalar': {
                const header = token.props[0];
                if (header.type !== 'block-scalar-header')
                    throw new Error('Invalid block scalar header');
                type = header.source[0] === '>' ? 'BLOCK_FOLDED' : 'BLOCK_LITERAL';
                break;
            }
            default:
                type = 'PLAIN';
        }
    const source = stringifyString({ type, value }, {
        implicitKey: implicitKey || indent === null,
        indent: indent !== null && indent > 0 ? ' '.repeat(indent) : '',
        inFlow,
        options: { blockQuote: true, lineWidth: -1 }
    });
    switch (source[0]) {
        case '|':
        case '>':
            setBlockScalarValue(token, source);
            break;
        case '"':
            setFlowScalarValue(token, source, 'double-quoted-scalar');
            break;
        case "'":
            setFlowScalarValue(token, source, 'single-quoted-scalar');
            break;
        default:
            setFlowScalarValue(token, source, 'scalar');
    }
}
function setBlockScalarValue(token, source) {
    const he = source.indexOf('\n');
    const head = source.substring(0, he);
    const body = source.substring(he + 1) + '\n';
    if (token.type === 'block-scalar') {
        const header = token.props[0];
        if (header.type !== 'block-scalar-header')
            throw new Error('Invalid block scalar header');
        header.source = head;
        token.source = body;
    }
    else {
        const { offset } = token;
        const indent = 'indent' in token ? token.indent : -1;
        const props = [
            { type: 'block-scalar-header', offset, indent, source: head }
        ];
        if (!addEndtoBlockProps(props, 'end' in token ? token.end : undefined))
            props.push({ type: 'newline', offset: -1, indent, source: '\n' });
        for (const key of Object.keys(token))
            if (key !== 'type' && key !== 'offset')
                delete token[key];
        Object.assign(token, { type: 'block-scalar', indent, props, source: body });
    }
}
/** @returns `true` if last token is a newline */
function addEndtoBlockProps(props, end) {
    if (end)
        for (const st of end)
            switch (st.type) {
                case 'space':
                case 'comment':
                    props.push(st);
                    break;
                case 'newline':
                    props.push(st);
                    return true;
            }
    return false;
}
function setFlowScalarValue(token, source, type) {
    switch (token.type) {
        case 'scalar':
        case 'double-quoted-scalar':
        case 'single-quoted-scalar':
            token.type = type;
            token.source = source;
            break;
        case 'block-scalar': {
            const end = token.props.slice(1);
            let oa = source.length;
            if (token.props[0].type === 'block-scalar-header')
                oa -= token.props[0].source.length;
            for (const tok of end)
                tok.offset += oa;
            delete token.props;
            Object.assign(token, { type, source, end });
            break;
        }
        case 'block-map':
        case 'block-seq': {
            const offset = token.offset + source.length;
            const nl = { type: 'newline', offset, indent: token.indent, source: '\n' };
            delete token.items;
            Object.assign(token, { type, source, end: [nl] });
            break;
        }
        default: {
            const indent = 'indent' in token ? token.indent : -1;
            const end = 'end' in token && Array.isArray(token.end)
                ? token.end.filter(st => st.type === 'space' ||
                    st.type === 'comment' ||
                    st.type === 'newline')
                : [];
            for (const key of Object.keys(token))
                if (key !== 'type' && key !== 'offset')
                    delete token[key];
            Object.assign(token, { type, indent, source, end });
        }
    }
}

export { createScalarToken, resolveAsScalar, setScalarValue };
61
node_modules/yaml/browser/dist/parse/cst-stringify.js
generated
vendored
Normal file
@@ -0,0 +1,61 @@
/**
 * Stringify a CST document, token, or collection item
 *
 * Fair warning: This applies no validation whatsoever, and
 * simply concatenates the sources in their logical order.
 */
const stringify = (cst) => 'type' in cst ? stringifyToken(cst) : stringifyItem(cst);
function stringifyToken(token) {
    switch (token.type) {
        case 'block-scalar': {
            let res = '';
            for (const tok of token.props)
                res += stringifyToken(tok);
            return res + token.source;
        }
        case 'block-map':
        case 'block-seq': {
            let res = '';
            for (const item of token.items)
                res += stringifyItem(item);
            return res;
        }
        case 'flow-collection': {
            let res = token.start.source;
            for (const item of token.items)
                res += stringifyItem(item);
            for (const st of token.end)
                res += st.source;
            return res;
        }
        case 'document': {
            let res = stringifyItem(token);
            if (token.end)
                for (const st of token.end)
                    res += st.source;
            return res;
        }
        default: {
            let res = token.source;
            if ('end' in token && token.end)
                for (const st of token.end)
                    res += st.source;
            return res;
        }
    }
}
function stringifyItem({ start, key, sep, value }) {
    let res = '';
    for (const st of start)
        res += st.source;
    if (key)
        res += stringifyToken(key);
    if (sep)
        for (const st of sep)
            res += st.source;
    if (value)
        res += stringifyToken(value);
    return res;
}

export { stringify };
97
node_modules/yaml/browser/dist/parse/cst-visit.js
generated
vendored
Normal file
@@ -0,0 +1,97 @@
const BREAK = Symbol('break visit');
const SKIP = Symbol('skip children');
const REMOVE = Symbol('remove item');
/**
 * Apply a visitor to a CST document or item.
 *
 * Walks through the tree (depth-first) starting from the root, calling a
 * `visitor` function with two arguments when entering each item:
 * - `item`: The current item, which included the following members:
 *   - `start: SourceToken[]` – Source tokens before the key or value,
 *     possibly including its anchor or tag.
 *   - `key?: Token | null` – Set for pair values. May then be `null`, if
 *     the key before the `:` separator is empty.
 *   - `sep?: SourceToken[]` – Source tokens between the key and the value,
 *     which should include the `:` map value indicator if `value` is set.
 *   - `value?: Token` – The value of a sequence item, or of a map pair.
 * - `path`: The steps from the root to the current node, as an array of
 *   `['key' | 'value', number]` tuples.
 *
 * The return value of the visitor may be used to control the traversal:
 * - `undefined` (default): Do nothing and continue
 * - `visit.SKIP`: Do not visit the children of this token, continue with
 *   next sibling
 * - `visit.BREAK`: Terminate traversal completely
 * - `visit.REMOVE`: Remove the current item, then continue with the next one
 * - `number`: Set the index of the next step. This is useful especially if
 *   the index of the current token has changed.
 * - `function`: Define the next visitor for this item. After the original
 *   visitor is called on item entry, next visitors are called after handling
 *   a non-empty `key` and when exiting the item.
 */
function visit(cst, visitor) {
    if ('type' in cst && cst.type === 'document')
        cst = { start: cst.start, value: cst.value };
    _visit(Object.freeze([]), cst, visitor);
}
// Without the `as symbol` casts, TS declares these in the `visit`
// namespace using `var`, but then complains about that because
// `unique symbol` must be `const`.
/** Terminate visit traversal completely */
visit.BREAK = BREAK;
/** Do not visit the children of the current item */
visit.SKIP = SKIP;
/** Remove the current item */
visit.REMOVE = REMOVE;
/** Find the item at `path` from `cst` as the root */
visit.itemAtPath = (cst, path) => {
    let item = cst;
    for (const [field, index] of path) {
        const tok = item?.[field];
        if (tok && 'items' in tok) {
            item = tok.items[index];
        }
        else
            return undefined;
    }
    return item;
};
/**
 * Get the immediate parent collection of the item at `path` from `cst` as the root.
 *
 * Throws an error if the collection is not found, which should never happen if the item itself exists.
 */
visit.parentCollection = (cst, path) => {
    const parent = visit.itemAtPath(cst, path.slice(0, -1));
    const field = path[path.length - 1][0];
    const coll = parent?.[field];
    if (coll && 'items' in coll)
        return coll;
    throw new Error('Parent collection not found');
};
function _visit(path, item, visitor) {
    let ctrl = visitor(item, path);
    if (typeof ctrl === 'symbol')
        return ctrl;
    for (const field of ['key', 'value']) {
        const token = item[field];
        if (token && 'items' in token) {
            for (let i = 0; i < token.items.length; ++i) {
                const ci = _visit(Object.freeze(path.concat([[field, i]])), token.items[i], visitor);
                if (typeof ci === 'number')
                    i = ci - 1;
                else if (ci === BREAK)
                    return BREAK;
                else if (ci === REMOVE) {
                    token.items.splice(i, 1);
                    i -= 1;
                }
            }
            if (typeof ctrl === 'function' && field === 'key')
                ctrl = ctrl(item, path);
        }
    }
    return typeof ctrl === 'function' ? ctrl(item, path) : ctrl;
}

export { visit };
98
node_modules/yaml/browser/dist/parse/cst.js
generated
vendored
Normal file
@@ -0,0 +1,98 @@
export { createScalarToken, resolveAsScalar, setScalarValue } from './cst-scalar.js';
export { stringify } from './cst-stringify.js';
export { visit } from './cst-visit.js';

/** The byte order mark */
const BOM = '\u{FEFF}';
/** Start of doc-mode */
const DOCUMENT = '\x02'; // C0: Start of Text
/** Unexpected end of flow-mode */
const FLOW_END = '\x18'; // C0: Cancel
/** Next token is a scalar value */
const SCALAR = '\x1f'; // C0: Unit Separator
/** @returns `true` if `token` is a flow or block collection */
const isCollection = (token) => !!token && 'items' in token;
/** @returns `true` if `token` is a flow or block scalar; not an alias */
const isScalar = (token) => !!token &&
    (token.type === 'scalar' ||
        token.type === 'single-quoted-scalar' ||
        token.type === 'double-quoted-scalar' ||
        token.type === 'block-scalar');
/* istanbul ignore next */
/** Get a printable representation of a lexer token */
function prettyToken(token) {
    switch (token) {
        case BOM:
            return '<BOM>';
        case DOCUMENT:
            return '<DOC>';
        case FLOW_END:
            return '<FLOW_END>';
        case SCALAR:
            return '<SCALAR>';
        default:
            return JSON.stringify(token);
    }
}
/** Identify the type of a lexer token. May return `null` for unknown tokens. */
function tokenType(source) {
    switch (source) {
        case BOM:
            return 'byte-order-mark';
        case DOCUMENT:
            return 'doc-mode';
        case FLOW_END:
            return 'flow-error-end';
        case SCALAR:
            return 'scalar';
        case '---':
            return 'doc-start';
        case '...':
            return 'doc-end';
        case '':
        case '\n':
        case '\r\n':
            return 'newline';
        case '-':
            return 'seq-item-ind';
        case '?':
            return 'explicit-key-ind';
        case ':':
            return 'map-value-ind';
        case '{':
            return 'flow-map-start';
        case '}':
            return 'flow-map-end';
        case '[':
            return 'flow-seq-start';
        case ']':
            return 'flow-seq-end';
        case ',':
            return 'comma';
    }
    switch (source[0]) {
        case ' ':
        case '\t':
            return 'space';
        case '#':
            return 'comment';
        case '%':
            return 'directive-line';
        case '*':
            return 'alias';
        case '&':
            return 'anchor';
        case '!':
            return 'tag';
        case "'":
            return 'single-quoted-scalar';
        case '"':
            return 'double-quoted-scalar';
        case '|':
        case '>':
            return 'block-scalar-header';
    }
    return null;
}

export { BOM, DOCUMENT, FLOW_END, SCALAR, isCollection, isScalar, prettyToken, tokenType };
717
node_modules/yaml/browser/dist/parse/lexer.js
generated
vendored
Normal file
@@ -0,0 +1,717 @@
|
||||
import { BOM, DOCUMENT, FLOW_END, SCALAR } from './cst.js';

/*
START -> stream

stream
  directive -> line-end -> stream
  indent + line-end -> stream
  [else] -> line-start

line-end
  comment -> line-end
  newline -> .
  input-end -> END

line-start
  doc-start -> doc
  doc-end -> stream
  [else] -> indent -> block-start

block-start
  seq-item-start -> block-start
  explicit-key-start -> block-start
  map-value-start -> block-start
  [else] -> doc

doc
  line-end -> line-start
  spaces -> doc
  anchor -> doc
  tag -> doc
  flow-start -> flow -> doc
  flow-end -> error -> doc
  seq-item-start -> error -> doc
  explicit-key-start -> error -> doc
  map-value-start -> doc
  alias -> doc
  quote-start -> quoted-scalar -> doc
  block-scalar-header -> line-end -> block-scalar(min) -> line-start
  [else] -> plain-scalar(false, min) -> doc

flow
  line-end -> flow
  spaces -> flow
  anchor -> flow
  tag -> flow
  flow-start -> flow -> flow
  flow-end -> .
  seq-item-start -> error -> flow
  explicit-key-start -> flow
  map-value-start -> flow
  alias -> flow
  quote-start -> quoted-scalar -> flow
  comma -> flow
  [else] -> plain-scalar(true, 0) -> flow

quoted-scalar
  quote-end -> .
  [else] -> quoted-scalar

block-scalar(min)
  newline + peek(indent < min) -> .
  [else] -> block-scalar(min)

plain-scalar(is-flow, min)
  scalar-end(is-flow) -> .
  peek(newline + (indent < min)) -> .
  [else] -> plain-scalar(min)
*/
function isEmpty(ch) {
    switch (ch) {
        case undefined:
        case ' ':
        case '\n':
        case '\r':
        case '\t':
            return true;
        default:
            return false;
    }
}
const hexDigits = new Set('0123456789ABCDEFabcdef');
const tagChars = new Set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz-#;/?:@&=+$_.!~*'()");
const flowIndicatorChars = new Set(',[]{}');
const invalidAnchorChars = new Set(' ,[]{}\n\r\t');
const isNotAnchorChar = (ch) => !ch || invalidAnchorChars.has(ch);
/**
 * Splits an input string into lexical tokens, i.e. smaller strings that are
 * easily identifiable by `tokens.tokenType()`.
 *
 * Lexing always starts in a "stream" context. Incomplete input may be buffered
 * until a complete token can be emitted.
 *
 * In addition to slices of the original input, the following control characters
 * may also be emitted:
 *
 * - `\x02` (Start of Text): A document starts with the next token
 * - `\x18` (Cancel): Unexpected end of flow-mode (indicates an error)
 * - `\x1f` (Unit Separator): Next token is a scalar value
 * - `\u{FEFF}` (Byte order mark): Emitted separately outside documents
 */
class Lexer {
    constructor() {
        /**
         * Flag indicating whether the end of the current buffer marks the end of
         * all input
         */
        this.atEnd = false;
        /**
         * Explicit indent set in block scalar header, as an offset from the current
         * minimum indent, so e.g. set to 1 from a header `|2+`. Set to -1 if not
         * explicitly set.
         */
        this.blockScalarIndent = -1;
        /**
         * Block scalars that include a + (keep) chomping indicator in their header
         * include trailing empty lines, which are otherwise excluded from the
         * scalar's contents.
         */
        this.blockScalarKeep = false;
        /** Current input */
        this.buffer = '';
        /**
         * Flag noting whether the map value indicator : can immediately follow this
         * node within a flow context.
         */
        this.flowKey = false;
        /** Count of surrounding flow collection levels. */
        this.flowLevel = 0;
        /**
         * Minimum level of indentation required for next lines to be parsed as a
         * part of the current scalar value.
         */
        this.indentNext = 0;
        /** Indentation level of the current line. */
        this.indentValue = 0;
        /** Position of the next \n character. */
        this.lineEndPos = null;
        /** Stores the state of the lexer if reaching the end of incomplete input */
        this.next = null;
        /** A pointer to `buffer`; the current position of the lexer. */
        this.pos = 0;
    }
    /**
     * Generate YAML tokens from the `source` string. If `incomplete`,
     * a part of the last line may be left as a buffer for the next call.
     *
     * @returns A generator of lexical tokens
     */
    *lex(source, incomplete = false) {
        if (source) {
            if (typeof source !== 'string')
                throw TypeError('source is not a string');
            this.buffer = this.buffer ? this.buffer + source : source;
            this.lineEndPos = null;
        }
        this.atEnd = !incomplete;
        let next = this.next ?? 'stream';
        while (next && (incomplete || this.hasChars(1)))
            next = yield* this.parseNext(next);
    }
    atLineEnd() {
        let i = this.pos;
        let ch = this.buffer[i];
        while (ch === ' ' || ch === '\t')
            ch = this.buffer[++i];
        if (!ch || ch === '#' || ch === '\n')
            return true;
        if (ch === '\r')
            return this.buffer[i + 1] === '\n';
        return false;
    }
    charAt(n) {
        return this.buffer[this.pos + n];
    }
    continueScalar(offset) {
        let ch = this.buffer[offset];
        if (this.indentNext > 0) {
            let indent = 0;
            while (ch === ' ')
                ch = this.buffer[++indent + offset];
            if (ch === '\r') {
                const next = this.buffer[indent + offset + 1];
                if (next === '\n' || (!next && !this.atEnd))
                    return offset + indent + 1;
            }
            return ch === '\n' || indent >= this.indentNext || (!ch && !this.atEnd)
                ? offset + indent
                : -1;
        }
        if (ch === '-' || ch === '.') {
            const dt = this.buffer.substr(offset, 3);
            if ((dt === '---' || dt === '...') && isEmpty(this.buffer[offset + 3]))
                return -1;
        }
        return offset;
    }
    getLine() {
        let end = this.lineEndPos;
        if (typeof end !== 'number' || (end !== -1 && end < this.pos)) {
            end = this.buffer.indexOf('\n', this.pos);
            this.lineEndPos = end;
        }
        if (end === -1)
            return this.atEnd ? this.buffer.substring(this.pos) : null;
        if (this.buffer[end - 1] === '\r')
            end -= 1;
        return this.buffer.substring(this.pos, end);
    }
    hasChars(n) {
        return this.pos + n <= this.buffer.length;
    }
    setNext(state) {
        this.buffer = this.buffer.substring(this.pos);
        this.pos = 0;
        this.lineEndPos = null;
        this.next = state;
        return null;
    }
    peek(n) {
        return this.buffer.substr(this.pos, n);
    }
    *parseNext(next) {
        switch (next) {
            case 'stream':
                return yield* this.parseStream();
            case 'line-start':
                return yield* this.parseLineStart();
            case 'block-start':
                return yield* this.parseBlockStart();
            case 'doc':
                return yield* this.parseDocument();
            case 'flow':
                return yield* this.parseFlowCollection();
            case 'quoted-scalar':
                return yield* this.parseQuotedScalar();
            case 'block-scalar':
                return yield* this.parseBlockScalar();
            case 'plain-scalar':
                return yield* this.parsePlainScalar();
        }
    }
    *parseStream() {
        let line = this.getLine();
        if (line === null)
            return this.setNext('stream');
        if (line[0] === BOM) {
            yield* this.pushCount(1);
            line = line.substring(1);
        }
        if (line[0] === '%') {
            let dirEnd = line.length;
            let cs = line.indexOf('#');
            while (cs !== -1) {
                const ch = line[cs - 1];
                if (ch === ' ' || ch === '\t') {
                    dirEnd = cs - 1;
                    break;
                }
                else {
                    cs = line.indexOf('#', cs + 1);
                }
            }
            while (true) {
                const ch = line[dirEnd - 1];
                if (ch === ' ' || ch === '\t')
                    dirEnd -= 1;
                else
                    break;
            }
            const n = (yield* this.pushCount(dirEnd)) + (yield* this.pushSpaces(true));
            yield* this.pushCount(line.length - n); // possible comment
            this.pushNewline();
            return 'stream';
        }
        if (this.atLineEnd()) {
            const sp = yield* this.pushSpaces(true);
            yield* this.pushCount(line.length - sp);
            yield* this.pushNewline();
            return 'stream';
        }
        yield DOCUMENT;
        return yield* this.parseLineStart();
    }
    *parseLineStart() {
        const ch = this.charAt(0);
        if (!ch && !this.atEnd)
            return this.setNext('line-start');
        if (ch === '-' || ch === '.') {
            if (!this.atEnd && !this.hasChars(4))
                return this.setNext('line-start');
            const s = this.peek(3);
            if ((s === '---' || s === '...') && isEmpty(this.charAt(3))) {
                yield* this.pushCount(3);
                this.indentValue = 0;
                this.indentNext = 0;
                return s === '---' ? 'doc' : 'stream';
            }
        }
        this.indentValue = yield* this.pushSpaces(false);
        if (this.indentNext > this.indentValue && !isEmpty(this.charAt(1)))
            this.indentNext = this.indentValue;
        return yield* this.parseBlockStart();
    }
    *parseBlockStart() {
        const [ch0, ch1] = this.peek(2);
        if (!ch1 && !this.atEnd)
            return this.setNext('block-start');
        if ((ch0 === '-' || ch0 === '?' || ch0 === ':') && isEmpty(ch1)) {
            const n = (yield* this.pushCount(1)) + (yield* this.pushSpaces(true));
            this.indentNext = this.indentValue + 1;
            this.indentValue += n;
            return yield* this.parseBlockStart();
        }
        return 'doc';
    }
    *parseDocument() {
        yield* this.pushSpaces(true);
        const line = this.getLine();
        if (line === null)
            return this.setNext('doc');
        let n = yield* this.pushIndicators();
        switch (line[n]) {
            case '#':
                yield* this.pushCount(line.length - n);
            // fallthrough
            case undefined:
                yield* this.pushNewline();
                return yield* this.parseLineStart();
            case '{':
            case '[':
                yield* this.pushCount(1);
                this.flowKey = false;
                this.flowLevel = 1;
                return 'flow';
            case '}':
            case ']':
                // this is an error
                yield* this.pushCount(1);
                return 'doc';
            case '*':
                yield* this.pushUntil(isNotAnchorChar);
                return 'doc';
            case '"':
            case "'":
                return yield* this.parseQuotedScalar();
            case '|':
            case '>':
                n += yield* this.parseBlockScalarHeader();
                n += yield* this.pushSpaces(true);
                yield* this.pushCount(line.length - n);
                yield* this.pushNewline();
                return yield* this.parseBlockScalar();
            default:
                return yield* this.parsePlainScalar();
        }
    }
    *parseFlowCollection() {
        let nl, sp;
        let indent = -1;
        do {
            nl = yield* this.pushNewline();
            if (nl > 0) {
                sp = yield* this.pushSpaces(false);
                this.indentValue = indent = sp;
            }
            else {
                sp = 0;
            }
            sp += yield* this.pushSpaces(true);
        } while (nl + sp > 0);
        const line = this.getLine();
        if (line === null)
            return this.setNext('flow');
        if ((indent !== -1 && indent < this.indentNext && line[0] !== '#') ||
            (indent === 0 &&
                (line.startsWith('---') || line.startsWith('...')) &&
                isEmpty(line[3]))) {
            // Allowing for the terminal ] or } at the same (rather than greater)
            // indent level as the initial [ or { is technically invalid, but
            // failing here would be surprising to users.
            const atFlowEndMarker = indent === this.indentNext - 1 &&
                this.flowLevel === 1 &&
                (line[0] === ']' || line[0] === '}');
            if (!atFlowEndMarker) {
                // this is an error
                this.flowLevel = 0;
                yield FLOW_END;
                return yield* this.parseLineStart();
            }
        }
        let n = 0;
        while (line[n] === ',') {
            n += yield* this.pushCount(1);
            n += yield* this.pushSpaces(true);
            this.flowKey = false;
        }
        n += yield* this.pushIndicators();
        switch (line[n]) {
            case undefined:
                return 'flow';
            case '#':
                yield* this.pushCount(line.length - n);
                return 'flow';
            case '{':
            case '[':
                yield* this.pushCount(1);
                this.flowKey = false;
                this.flowLevel += 1;
                return 'flow';
            case '}':
            case ']':
                yield* this.pushCount(1);
                this.flowKey = true;
                this.flowLevel -= 1;
                return this.flowLevel ? 'flow' : 'doc';
            case '*':
                yield* this.pushUntil(isNotAnchorChar);
                return 'flow';
            case '"':
            case "'":
                this.flowKey = true;
                return yield* this.parseQuotedScalar();
            case ':': {
                const next = this.charAt(1);
                if (this.flowKey || isEmpty(next) || next === ',') {
                    this.flowKey = false;
                    yield* this.pushCount(1);
                    yield* this.pushSpaces(true);
                    return 'flow';
                }
            }
            // fallthrough
            default:
                this.flowKey = false;
                return yield* this.parsePlainScalar();
        }
    }
    *parseQuotedScalar() {
        const quote = this.charAt(0);
        let end = this.buffer.indexOf(quote, this.pos + 1);
        if (quote === "'") {
            while (end !== -1 && this.buffer[end + 1] === "'")
                end = this.buffer.indexOf("'", end + 2);
        }
        else {
            // double-quote
            while (end !== -1) {
                let n = 0;
                while (this.buffer[end - 1 - n] === '\\')
                    n += 1;
                if (n % 2 === 0)
                    break;
                end = this.buffer.indexOf('"', end + 1);
            }
        }
        // Only looking for newlines within the quotes
        const qb = this.buffer.substring(0, end);
        let nl = qb.indexOf('\n', this.pos);
        if (nl !== -1) {
            while (nl !== -1) {
                const cs = this.continueScalar(nl + 1);
                if (cs === -1)
                    break;
                nl = qb.indexOf('\n', cs);
            }
            if (nl !== -1) {
                // this is an error caused by an unexpected unindent
                end = nl - (qb[nl - 1] === '\r' ? 2 : 1);
            }
        }
        if (end === -1) {
            if (!this.atEnd)
                return this.setNext('quoted-scalar');
            end = this.buffer.length;
        }
        yield* this.pushToIndex(end + 1, false);
        return this.flowLevel ? 'flow' : 'doc';
    }
    *parseBlockScalarHeader() {
        this.blockScalarIndent = -1;
        this.blockScalarKeep = false;
        let i = this.pos;
        while (true) {
            const ch = this.buffer[++i];
            if (ch === '+')
                this.blockScalarKeep = true;
            else if (ch > '0' && ch <= '9')
                this.blockScalarIndent = Number(ch) - 1;
            else if (ch !== '-')
                break;
        }
        return yield* this.pushUntil(ch => isEmpty(ch) || ch === '#');
    }
    *parseBlockScalar() {
        let nl = this.pos - 1; // may be -1 if this.pos === 0
        let indent = 0;
        let ch;
        loop: for (let i = this.pos; (ch = this.buffer[i]); ++i) {
            switch (ch) {
                case ' ':
                    indent += 1;
                    break;
                case '\n':
                    nl = i;
                    indent = 0;
                    break;
                case '\r': {
                    const next = this.buffer[i + 1];
                    if (!next && !this.atEnd)
                        return this.setNext('block-scalar');
                    if (next === '\n')
                        break;
                } // fallthrough
                default:
                    break loop;
            }
        }
        if (!ch && !this.atEnd)
            return this.setNext('block-scalar');
        if (indent >= this.indentNext) {
            if (this.blockScalarIndent === -1)
                this.indentNext = indent;
            else {
                this.indentNext =
                    this.blockScalarIndent + (this.indentNext === 0 ? 1 : this.indentNext);
            }
            do {
                const cs = this.continueScalar(nl + 1);
                if (cs === -1)
                    break;
                nl = this.buffer.indexOf('\n', cs);
            } while (nl !== -1);
            if (nl === -1) {
                if (!this.atEnd)
                    return this.setNext('block-scalar');
                nl = this.buffer.length;
            }
        }
        // Trailing insufficiently indented tabs are invalid.
        // To catch that during parsing, we include them in the block scalar value.
        let i = nl + 1;
        ch = this.buffer[i];
        while (ch === ' ')
            ch = this.buffer[++i];
        if (ch === '\t') {
            while (ch === '\t' || ch === ' ' || ch === '\r' || ch === '\n')
                ch = this.buffer[++i];
            nl = i - 1;
        }
        else if (!this.blockScalarKeep) {
            do {
                let i = nl - 1;
                let ch = this.buffer[i];
                if (ch === '\r')
                    ch = this.buffer[--i];
                const lastChar = i; // Drop the line if last char not more indented
                while (ch === ' ')
                    ch = this.buffer[--i];
                if (ch === '\n' && i >= this.pos && i + 1 + indent > lastChar)
                    nl = i;
                else
                    break;
            } while (true);
        }
        yield SCALAR;
        yield* this.pushToIndex(nl + 1, true);
        return yield* this.parseLineStart();
    }
    *parsePlainScalar() {
        const inFlow = this.flowLevel > 0;
        let end = this.pos - 1;
        let i = this.pos - 1;
        let ch;
        while ((ch = this.buffer[++i])) {
            if (ch === ':') {
                const next = this.buffer[i + 1];
                if (isEmpty(next) || (inFlow && flowIndicatorChars.has(next)))
                    break;
                end = i;
            }
            else if (isEmpty(ch)) {
                let next = this.buffer[i + 1];
                if (ch === '\r') {
                    if (next === '\n') {
                        i += 1;
                        ch = '\n';
                        next = this.buffer[i + 1];
                    }
                    else
                        end = i;
                }
                if (next === '#' || (inFlow && flowIndicatorChars.has(next)))
                    break;
                if (ch === '\n') {
                    const cs = this.continueScalar(i + 1);
                    if (cs === -1)
                        break;
                    i = Math.max(i, cs - 2); // to advance, but still account for ' #'
                }
            }
            else {
                if (inFlow && flowIndicatorChars.has(ch))
                    break;
                end = i;
            }
        }
        if (!ch && !this.atEnd)
            return this.setNext('plain-scalar');
        yield SCALAR;
        yield* this.pushToIndex(end + 1, true);
        return inFlow ? 'flow' : 'doc';
    }
    *pushCount(n) {
        if (n > 0) {
            yield this.buffer.substr(this.pos, n);
            this.pos += n;
            return n;
        }
        return 0;
    }
    *pushToIndex(i, allowEmpty) {
        const s = this.buffer.slice(this.pos, i);
        if (s) {
            yield s;
            this.pos += s.length;
            return s.length;
        }
        else if (allowEmpty)
            yield '';
        return 0;
    }
    *pushIndicators() {
        switch (this.charAt(0)) {
            case '!':
                return ((yield* this.pushTag()) +
                    (yield* this.pushSpaces(true)) +
                    (yield* this.pushIndicators()));
            case '&':
                return ((yield* this.pushUntil(isNotAnchorChar)) +
                    (yield* this.pushSpaces(true)) +
                    (yield* this.pushIndicators()));
            case '-': // this is an error
            case '?': // this is an error outside flow collections
            case ':': {
                const inFlow = this.flowLevel > 0;
                const ch1 = this.charAt(1);
                if (isEmpty(ch1) || (inFlow && flowIndicatorChars.has(ch1))) {
                    if (!inFlow)
                        this.indentNext = this.indentValue + 1;
                    else if (this.flowKey)
                        this.flowKey = false;
                    return ((yield* this.pushCount(1)) +
                        (yield* this.pushSpaces(true)) +
                        (yield* this.pushIndicators()));
                }
            }
        }
        return 0;
    }
    *pushTag() {
        if (this.charAt(1) === '<') {
            let i = this.pos + 2;
            let ch = this.buffer[i];
            while (!isEmpty(ch) && ch !== '>')
                ch = this.buffer[++i];
            return yield* this.pushToIndex(ch === '>' ? i + 1 : i, false);
        }
        else {
            let i = this.pos + 1;
            let ch = this.buffer[i];
            while (ch) {
                if (tagChars.has(ch))
                    ch = this.buffer[++i];
                else if (ch === '%' &&
                    hexDigits.has(this.buffer[i + 1]) &&
                    hexDigits.has(this.buffer[i + 2])) {
                    ch = this.buffer[(i += 3)];
                }
                else
                    break;
            }
            return yield* this.pushToIndex(i, false);
        }
    }
    *pushNewline() {
        const ch = this.buffer[this.pos];
        if (ch === '\n')
            return yield* this.pushCount(1);
        else if (ch === '\r' && this.charAt(1) === '\n')
            return yield* this.pushCount(2);
        else
            return 0;
    }
    *pushSpaces(allowTabs) {
        let i = this.pos - 1;
        let ch;
        do {
            ch = this.buffer[++i];
        } while (ch === ' ' || (allowTabs && ch === '\t'));
        const n = i - this.pos;
        if (n > 0) {
            yield this.buffer.substr(this.pos, n);
            this.pos = i;
        }
        return n;
    }
    *pushUntil(test) {
        let i = this.pos;
        let ch = this.buffer[i];
        while (!test(ch))
            ch = this.buffer[++i];
        return yield* this.pushToIndex(i, false);
    }
}

export { Lexer };
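The `lex(source, incomplete)` contract above (buffer a trailing partial token while `incomplete` is true, flush everything on a final call with `incomplete = false`) can be illustrated with a toy lexer. This is not the real `Lexer`; it is a minimal sketch of the same incremental-input API shape, where the "tokens" are simply whole lines:

```javascript
// Toy incremental lexer: mirrors the lex(source, incomplete) generator
// contract of the Lexer above, yielding complete lines and buffering
// any trailing partial line until more input (or a final flush) arrives.
class LineLexer {
  constructor() {
    this.buffer = '';
  }
  *lex(source, incomplete = false) {
    this.buffer += source;
    let nl;
    while ((nl = this.buffer.indexOf('\n')) !== -1) {
      yield this.buffer.slice(0, nl + 1);
      this.buffer = this.buffer.slice(nl + 1);
    }
    if (!incomplete && this.buffer) {
      // End of input: flush whatever remains, even without a newline.
      yield this.buffer;
      this.buffer = '';
    }
  }
}
```

Feeding `'a: 1\nb: '` with `incomplete = true` yields only the first line; the partial `'b: '` waits in the buffer until a later call completes it.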
39	node_modules/yaml/browser/dist/parse/line-counter.js	generated	vendored	Normal file
@@ -0,0 +1,39 @@
/**
 * Tracks newlines during parsing in order to provide an efficient API for
 * determining the one-indexed `{ line, col }` position for any offset
 * within the input.
 */
class LineCounter {
    constructor() {
        this.lineStarts = [];
        /**
         * Should be called in ascending order. Otherwise, call
         * `lineCounter.lineStarts.sort()` before calling `linePos()`.
         */
        this.addNewLine = (offset) => this.lineStarts.push(offset);
        /**
         * Performs a binary search and returns the 1-indexed { line, col }
         * position of `offset`. If `line === 0`, `addNewLine` has never been
         * called or `offset` is before the first known newline.
         */
        this.linePos = (offset) => {
            let low = 0;
            let high = this.lineStarts.length;
            while (low < high) {
                const mid = (low + high) >> 1; // Math.floor((low + high) / 2)
                if (this.lineStarts[mid] < offset)
                    low = mid + 1;
                else
                    high = mid;
            }
            if (this.lineStarts[low] === offset)
                return { line: low + 1, col: 1 };
            if (low === 0)
                return { line: 0, col: offset };
            const start = this.lineStarts[low - 1];
            return { line: low, col: offset - start + 1 };
        };
    }
}

export { LineCounter };
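A short usage sketch for `LineCounter` (the class is reproduced inline so the example runs standalone): record each line-start offset as parsing advances, then map any byte offset back to a one-indexed `{ line, col }` via the binary search:

```javascript
// LineCounter as defined above, condensed for a self-contained demo.
class LineCounter {
  constructor() {
    this.lineStarts = [];
    this.addNewLine = offset => this.lineStarts.push(offset);
    this.linePos = offset => {
      let low = 0;
      let high = this.lineStarts.length;
      // Find the first recorded line start that is >= offset.
      while (low < high) {
        const mid = (low + high) >> 1;
        if (this.lineStarts[mid] < offset) low = mid + 1;
        else high = mid;
      }
      if (this.lineStarts[low] === offset) return { line: low + 1, col: 1 };
      if (low === 0) return { line: 0, col: offset };
      const start = this.lineStarts[low - 1];
      return { line: low, col: offset - start + 1 };
    };
  }
}

// For the input "a: 1\nb: 2\n": line 1 starts at offset 0, line 2 at 5.
const lc = new LineCounter();
lc.addNewLine(0);
lc.addNewLine(5);
```

With these line starts, offset 6 (the character `:` after `b`... actually the `:` sits at offset 6 only if counting from `b` at 5) resolves to line 2, column 2.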
967	node_modules/yaml/browser/dist/parse/parser.js	generated	vendored	Normal file
@@ -0,0 +1,967 @@
import { tokenType } from './cst.js';
import { Lexer } from './lexer.js';

function includesToken(list, type) {
    for (let i = 0; i < list.length; ++i)
        if (list[i].type === type)
            return true;
    return false;
}
function findNonEmptyIndex(list) {
    for (let i = 0; i < list.length; ++i) {
        switch (list[i].type) {
            case 'space':
            case 'comment':
            case 'newline':
                break;
            default:
                return i;
        }
    }
    return -1;
}
function isFlowToken(token) {
    switch (token?.type) {
        case 'alias':
        case 'scalar':
        case 'single-quoted-scalar':
        case 'double-quoted-scalar':
        case 'flow-collection':
            return true;
        default:
            return false;
    }
}
function getPrevProps(parent) {
    switch (parent.type) {
        case 'document':
            return parent.start;
        case 'block-map': {
            const it = parent.items[parent.items.length - 1];
            return it.sep ?? it.start;
        }
        case 'block-seq':
            return parent.items[parent.items.length - 1].start;
        /* istanbul ignore next should not happen */
        default:
            return [];
    }
}
/** Note: May modify input array */
function getFirstKeyStartProps(prev) {
    if (prev.length === 0)
        return [];
    let i = prev.length;
    loop: while (--i >= 0) {
        switch (prev[i].type) {
            case 'doc-start':
            case 'explicit-key-ind':
            case 'map-value-ind':
            case 'seq-item-ind':
            case 'newline':
                break loop;
        }
    }
    while (prev[++i]?.type === 'space') {
        /* loop */
    }
    return prev.splice(i, prev.length);
}
function fixFlowSeqItems(fc) {
    if (fc.start.type === 'flow-seq-start') {
        for (const it of fc.items) {
            if (it.sep &&
                !it.value &&
                !includesToken(it.start, 'explicit-key-ind') &&
                !includesToken(it.sep, 'map-value-ind')) {
                if (it.key)
                    it.value = it.key;
                delete it.key;
                if (isFlowToken(it.value)) {
                    if (it.value.end)
                        Array.prototype.push.apply(it.value.end, it.sep);
                    else
                        it.value.end = it.sep;
                }
                else
                    Array.prototype.push.apply(it.start, it.sep);
                delete it.sep;
            }
        }
    }
}
/**
 * A YAML concrete syntax tree (CST) parser
 *
 * ```ts
 * const src: string = ...
 * for (const token of new Parser().parse(src)) {
 *   // token: Token
 * }
 * ```
 *
 * To use the parser with a user-provided lexer:
 *
 * ```ts
 * function* parse(source: string, lexer: Lexer) {
 *   const parser = new Parser()
 *   for (const lexeme of lexer.lex(source))
 *     yield* parser.next(lexeme)
 *   yield* parser.end()
 * }
 *
 * const src: string = ...
 * const lexer = new Lexer()
 * for (const token of parse(src, lexer)) {
 *   // token: Token
 * }
 * ```
 */
class Parser {
    /**
     * @param onNewLine - If defined, called separately with the start position of
     * each new line (in `parse()`, including the start of input).
     */
    constructor(onNewLine) {
        /** If true, space and sequence indicators count as indentation */
        this.atNewLine = true;
        /** If true, next token is a scalar value */
        this.atScalar = false;
        /** Current indentation level */
        this.indent = 0;
        /** Current offset since the start of parsing */
        this.offset = 0;
        /** On the same line with a block map key */
        this.onKeyLine = false;
        /** Top indicates the node that's currently being built */
        this.stack = [];
        /** The source of the current token, set in parse() */
        this.source = '';
        /** The type of the current token, set in parse() */
        this.type = '';
        // Must be defined after `next()`
        this.lexer = new Lexer();
        this.onNewLine = onNewLine;
    }
    /**
     * Parse `source` as a YAML stream.
     * If `incomplete`, a part of the last line may be left as a buffer for the next call.
     *
     * Errors are not thrown, but yielded as `{ type: 'error', message }` tokens.
     *
     * @returns A generator of tokens representing each directive, document, and other structure.
     */
    *parse(source, incomplete = false) {
        if (this.onNewLine && this.offset === 0)
            this.onNewLine(0);
        for (const lexeme of this.lexer.lex(source, incomplete))
            yield* this.next(lexeme);
        if (!incomplete)
            yield* this.end();
    }
    /**
     * Advance the parser by the `source` of one lexical token.
     */
    *next(source) {
        this.source = source;
        if (this.atScalar) {
            this.atScalar = false;
            yield* this.step();
            this.offset += source.length;
            return;
        }
        const type = tokenType(source);
        if (!type) {
            const message = `Not a YAML token: ${source}`;
            yield* this.pop({ type: 'error', offset: this.offset, message, source });
            this.offset += source.length;
        }
        else if (type === 'scalar') {
            this.atNewLine = false;
            this.atScalar = true;
            this.type = 'scalar';
        }
        else {
            this.type = type;
            yield* this.step();
            switch (type) {
                case 'newline':
                    this.atNewLine = true;
                    this.indent = 0;
                    if (this.onNewLine)
                        this.onNewLine(this.offset + source.length);
                    break;
                case 'space':
                    if (this.atNewLine && source[0] === ' ')
                        this.indent += source.length;
                    break;
                case 'explicit-key-ind':
                case 'map-value-ind':
                case 'seq-item-ind':
                    if (this.atNewLine)
                        this.indent += source.length;
                    break;
                case 'doc-mode':
                case 'flow-error-end':
                    return;
                default:
                    this.atNewLine = false;
            }
            this.offset += source.length;
        }
    }
    /** Call at end of input to push out any remaining constructions */
    *end() {
        while (this.stack.length > 0)
            yield* this.pop();
    }
    get sourceToken() {
        const st = {
            type: this.type,
            offset: this.offset,
            indent: this.indent,
            source: this.source
        };
        return st;
    }
    *step() {
        const top = this.peek(1);
        if (this.type === 'doc-end' && top?.type !== 'doc-end') {
            while (this.stack.length > 0)
                yield* this.pop();
            this.stack.push({
                type: 'doc-end',
                offset: this.offset,
                source: this.source
            });
            return;
        }
        if (!top)
            return yield* this.stream();
        switch (top.type) {
            case 'document':
                return yield* this.document(top);
            case 'alias':
            case 'scalar':
            case 'single-quoted-scalar':
            case 'double-quoted-scalar':
                return yield* this.scalar(top);
            case 'block-scalar':
                return yield* this.blockScalar(top);
            case 'block-map':
                return yield* this.blockMap(top);
            case 'block-seq':
                return yield* this.blockSequence(top);
            case 'flow-collection':
                return yield* this.flowCollection(top);
            case 'doc-end':
                return yield* this.documentEnd(top);
        }
        /* istanbul ignore next should not happen */
        yield* this.pop();
    }
    peek(n) {
        return this.stack[this.stack.length - n];
    }
    *pop(error) {
        const token = error ?? this.stack.pop();
        /* istanbul ignore if should not happen */
        if (!token) {
            const message = 'Tried to pop an empty stack';
            yield { type: 'error', offset: this.offset, source: '', message };
        }
        else if (this.stack.length === 0) {
            yield token;
        }
        else {
            const top = this.peek(1);
            if (token.type === 'block-scalar') {
                // Block scalars use their parent rather than header indent
                token.indent = 'indent' in top ? top.indent : 0;
            }
            else if (token.type === 'flow-collection' && top.type === 'document') {
                // Ignore all indent for top-level flow collections
                token.indent = 0;
            }
            if (token.type === 'flow-collection')
                fixFlowSeqItems(token);
            switch (top.type) {
                case 'document':
                    top.value = token;
                    break;
                case 'block-scalar':
                    top.props.push(token); // error
                    break;
                case 'block-map': {
                    const it = top.items[top.items.length - 1];
                    if (it.value) {
                        top.items.push({ start: [], key: token, sep: [] });
                        this.onKeyLine = true;
                        return;
                    }
                    else if (it.sep) {
                        it.value = token;
                    }
                    else {
                        Object.assign(it, { key: token, sep: [] });
                        this.onKeyLine = !it.explicitKey;
                        return;
                    }
                    break;
                }
                case 'block-seq': {
                    const it = top.items[top.items.length - 1];
                    if (it.value)
                        top.items.push({ start: [], value: token });
                    else
                        it.value = token;
                    break;
                }
                case 'flow-collection': {
                    const it = top.items[top.items.length - 1];
                    if (!it || it.value)
                        top.items.push({ start: [], key: token, sep: [] });
                    else if (it.sep)
                        it.value = token;
                    else
                        Object.assign(it, { key: token, sep: [] });
                    return;
                }
                /* istanbul ignore next should not happen */
                default:
                    yield* this.pop();
                    yield* this.pop(token);
            }
            if ((top.type === 'document' ||
                top.type === 'block-map' ||
                top.type === 'block-seq') &&
                (token.type === 'block-map' || token.type === 'block-seq')) {
                const last = token.items[token.items.length - 1];
                if (last &&
                    !last.sep &&
                    !last.value &&
                    last.start.length > 0 &&
                    findNonEmptyIndex(last.start) === -1 &&
                    (token.indent === 0 ||
                        last.start.every(st => st.type !== 'comment' || st.indent < token.indent))) {
                    if (top.type === 'document')
                        top.end = last.start;
                    else
                        top.items.push({ start: last.start });
                    token.items.splice(-1, 1);
                }
            }
        }
    }
    *stream() {
        switch (this.type) {
            case 'directive-line':
                yield { type: 'directive', offset: this.offset, source: this.source };
                return;
            case 'byte-order-mark':
            case 'space':
            case 'comment':
            case 'newline':
                yield this.sourceToken;
                return;
            case 'doc-mode':
            case 'doc-start': {
                const doc = {
                    type: 'document',
                    offset: this.offset,
                    start: []
                };
                if (this.type === 'doc-start')
                    doc.start.push(this.sourceToken);
                this.stack.push(doc);
                return;
            }
        }
        yield {
|
||||
type: 'error',
|
||||
offset: this.offset,
|
||||
message: `Unexpected ${this.type} token in YAML stream`,
|
||||
source: this.source
|
||||
};
|
||||
}
|
||||
*document(doc) {
|
||||
if (doc.value)
|
||||
return yield* this.lineEnd(doc);
|
||||
switch (this.type) {
|
||||
case 'doc-start': {
|
||||
if (findNonEmptyIndex(doc.start) !== -1) {
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
else
|
||||
doc.start.push(this.sourceToken);
|
||||
return;
|
||||
}
|
||||
case 'anchor':
|
||||
case 'tag':
|
||||
case 'space':
|
||||
case 'comment':
|
||||
case 'newline':
|
||||
doc.start.push(this.sourceToken);
|
||||
return;
|
||||
}
|
||||
const bv = this.startBlockValue(doc);
|
||||
if (bv)
|
||||
this.stack.push(bv);
|
||||
else {
|
||||
yield {
|
||||
type: 'error',
|
||||
offset: this.offset,
|
||||
message: `Unexpected ${this.type} token in YAML document`,
|
||||
source: this.source
|
||||
};
|
||||
}
|
||||
}
|
||||
*scalar(scalar) {
|
||||
if (this.type === 'map-value-ind') {
|
||||
const prev = getPrevProps(this.peek(2));
|
||||
const start = getFirstKeyStartProps(prev);
|
||||
let sep;
|
||||
if (scalar.end) {
|
||||
sep = scalar.end;
|
||||
sep.push(this.sourceToken);
|
||||
delete scalar.end;
|
||||
}
|
||||
else
|
||||
sep = [this.sourceToken];
|
||||
const map = {
|
||||
type: 'block-map',
|
||||
offset: scalar.offset,
|
||||
indent: scalar.indent,
|
||||
items: [{ start, key: scalar, sep }]
|
||||
};
|
||||
this.onKeyLine = true;
|
||||
this.stack[this.stack.length - 1] = map;
|
||||
}
|
||||
else
|
||||
yield* this.lineEnd(scalar);
|
||||
}
|
||||
*blockScalar(scalar) {
|
||||
switch (this.type) {
|
||||
case 'space':
|
||||
case 'comment':
|
||||
case 'newline':
|
||||
scalar.props.push(this.sourceToken);
|
||||
return;
|
||||
case 'scalar':
|
||||
scalar.source = this.source;
|
||||
// block-scalar source includes trailing newline
|
||||
this.atNewLine = true;
|
||||
this.indent = 0;
|
||||
if (this.onNewLine) {
|
||||
let nl = this.source.indexOf('\n') + 1;
|
||||
while (nl !== 0) {
|
||||
this.onNewLine(this.offset + nl);
|
||||
nl = this.source.indexOf('\n', nl) + 1;
|
||||
}
|
||||
}
|
||||
yield* this.pop();
|
||||
break;
|
||||
/* istanbul ignore next should not happen */
|
||||
default:
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
}
|
||||
*blockMap(map) {
|
||||
const it = map.items[map.items.length - 1];
|
||||
// it.sep is true-ish if pair already has key or : separator
|
||||
switch (this.type) {
|
||||
case 'newline':
|
||||
this.onKeyLine = false;
|
||||
if (it.value) {
|
||||
const end = 'end' in it.value ? it.value.end : undefined;
|
||||
const last = Array.isArray(end) ? end[end.length - 1] : undefined;
|
||||
if (last?.type === 'comment')
|
||||
end?.push(this.sourceToken);
|
||||
else
|
||||
map.items.push({ start: [this.sourceToken] });
|
||||
}
|
||||
else if (it.sep) {
|
||||
it.sep.push(this.sourceToken);
|
||||
}
|
||||
else {
|
||||
it.start.push(this.sourceToken);
|
||||
}
|
||||
return;
|
||||
case 'space':
|
||||
case 'comment':
|
||||
if (it.value) {
|
||||
map.items.push({ start: [this.sourceToken] });
|
||||
}
|
||||
else if (it.sep) {
|
||||
it.sep.push(this.sourceToken);
|
||||
}
|
||||
else {
|
||||
if (this.atIndentedComment(it.start, map.indent)) {
|
||||
const prev = map.items[map.items.length - 2];
|
||||
const end = prev?.value?.end;
|
||||
if (Array.isArray(end)) {
|
||||
Array.prototype.push.apply(end, it.start);
|
||||
end.push(this.sourceToken);
|
||||
map.items.pop();
|
||||
return;
|
||||
}
|
||||
}
|
||||
it.start.push(this.sourceToken);
|
||||
}
|
||||
return;
|
||||
}
|
||||
if (this.indent >= map.indent) {
|
||||
const atMapIndent = !this.onKeyLine && this.indent === map.indent;
|
||||
const atNextItem = atMapIndent &&
|
||||
(it.sep || it.explicitKey) &&
|
||||
this.type !== 'seq-item-ind';
|
||||
// For empty nodes, assign newline-separated not indented empty tokens to following node
|
||||
let start = [];
|
||||
if (atNextItem && it.sep && !it.value) {
|
||||
const nl = [];
|
||||
for (let i = 0; i < it.sep.length; ++i) {
|
||||
const st = it.sep[i];
|
||||
switch (st.type) {
|
||||
case 'newline':
|
||||
nl.push(i);
|
||||
break;
|
||||
case 'space':
|
||||
break;
|
||||
case 'comment':
|
||||
if (st.indent > map.indent)
|
||||
nl.length = 0;
|
||||
break;
|
||||
default:
|
||||
nl.length = 0;
|
||||
}
|
||||
}
|
||||
if (nl.length >= 2)
|
||||
start = it.sep.splice(nl[1]);
|
||||
}
|
||||
switch (this.type) {
|
||||
case 'anchor':
|
||||
case 'tag':
|
||||
if (atNextItem || it.value) {
|
||||
start.push(this.sourceToken);
|
||||
map.items.push({ start });
|
||||
this.onKeyLine = true;
|
||||
}
|
||||
else if (it.sep) {
|
||||
it.sep.push(this.sourceToken);
|
||||
}
|
||||
else {
|
||||
it.start.push(this.sourceToken);
|
||||
}
|
||||
return;
|
||||
case 'explicit-key-ind':
|
||||
if (!it.sep && !it.explicitKey) {
|
||||
it.start.push(this.sourceToken);
|
||||
it.explicitKey = true;
|
||||
}
|
||||
else if (atNextItem || it.value) {
|
||||
start.push(this.sourceToken);
|
||||
map.items.push({ start, explicitKey: true });
|
||||
}
|
||||
else {
|
||||
this.stack.push({
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start: [this.sourceToken], explicitKey: true }]
|
||||
});
|
||||
}
|
||||
this.onKeyLine = true;
|
||||
return;
|
||||
case 'map-value-ind':
|
||||
if (it.explicitKey) {
|
||||
if (!it.sep) {
|
||||
if (includesToken(it.start, 'newline')) {
|
||||
Object.assign(it, { key: null, sep: [this.sourceToken] });
|
||||
}
|
||||
else {
|
||||
const start = getFirstKeyStartProps(it.start);
|
||||
this.stack.push({
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start, key: null, sep: [this.sourceToken] }]
|
||||
});
|
||||
}
|
||||
}
|
||||
else if (it.value) {
|
||||
map.items.push({ start: [], key: null, sep: [this.sourceToken] });
|
||||
}
|
||||
else if (includesToken(it.sep, 'map-value-ind')) {
|
||||
this.stack.push({
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start, key: null, sep: [this.sourceToken] }]
|
||||
});
|
||||
}
|
||||
else if (isFlowToken(it.key) &&
|
||||
!includesToken(it.sep, 'newline')) {
|
||||
const start = getFirstKeyStartProps(it.start);
|
||||
const key = it.key;
|
||||
const sep = it.sep;
|
||||
sep.push(this.sourceToken);
|
||||
// @ts-expect-error type guard is wrong here
|
||||
delete it.key;
|
||||
// @ts-expect-error type guard is wrong here
|
||||
delete it.sep;
|
||||
this.stack.push({
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start, key, sep }]
|
||||
});
|
||||
}
|
||||
else if (start.length > 0) {
|
||||
// Not actually at next item
|
||||
it.sep = it.sep.concat(start, this.sourceToken);
|
||||
}
|
||||
else {
|
||||
it.sep.push(this.sourceToken);
|
||||
}
|
||||
}
|
||||
else {
|
||||
if (!it.sep) {
|
||||
Object.assign(it, { key: null, sep: [this.sourceToken] });
|
||||
}
|
||||
else if (it.value || atNextItem) {
|
||||
map.items.push({ start, key: null, sep: [this.sourceToken] });
|
||||
}
|
||||
else if (includesToken(it.sep, 'map-value-ind')) {
|
||||
this.stack.push({
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start: [], key: null, sep: [this.sourceToken] }]
|
||||
});
|
||||
}
|
||||
else {
|
||||
it.sep.push(this.sourceToken);
|
||||
}
|
||||
}
|
||||
this.onKeyLine = true;
|
||||
return;
|
||||
case 'alias':
|
||||
case 'scalar':
|
||||
case 'single-quoted-scalar':
|
||||
case 'double-quoted-scalar': {
|
||||
const fs = this.flowScalar(this.type);
|
||||
if (atNextItem || it.value) {
|
||||
map.items.push({ start, key: fs, sep: [] });
|
||||
this.onKeyLine = true;
|
||||
}
|
||||
else if (it.sep) {
|
||||
this.stack.push(fs);
|
||||
}
|
||||
else {
|
||||
Object.assign(it, { key: fs, sep: [] });
|
||||
this.onKeyLine = true;
|
||||
}
|
||||
return;
|
||||
}
|
||||
default: {
|
||||
const bv = this.startBlockValue(map);
|
||||
if (bv) {
|
||||
if (bv.type === 'block-seq') {
|
||||
if (!it.explicitKey &&
|
||||
it.sep &&
|
||||
!includesToken(it.sep, 'newline')) {
|
||||
yield* this.pop({
|
||||
type: 'error',
|
||||
offset: this.offset,
|
||||
message: 'Unexpected block-seq-ind on same line with key',
|
||||
source: this.source
|
||||
});
|
||||
return;
|
||||
}
|
||||
}
|
||||
else if (atMapIndent) {
|
||||
map.items.push({ start });
|
||||
}
|
||||
this.stack.push(bv);
|
||||
return;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
*blockSequence(seq) {
|
||||
const it = seq.items[seq.items.length - 1];
|
||||
switch (this.type) {
|
||||
case 'newline':
|
||||
if (it.value) {
|
||||
const end = 'end' in it.value ? it.value.end : undefined;
|
||||
const last = Array.isArray(end) ? end[end.length - 1] : undefined;
|
||||
if (last?.type === 'comment')
|
||||
end?.push(this.sourceToken);
|
||||
else
|
||||
seq.items.push({ start: [this.sourceToken] });
|
||||
}
|
||||
else
|
||||
it.start.push(this.sourceToken);
|
||||
return;
|
||||
case 'space':
|
||||
case 'comment':
|
||||
if (it.value)
|
||||
seq.items.push({ start: [this.sourceToken] });
|
||||
else {
|
||||
if (this.atIndentedComment(it.start, seq.indent)) {
|
||||
const prev = seq.items[seq.items.length - 2];
|
||||
const end = prev?.value?.end;
|
||||
if (Array.isArray(end)) {
|
||||
Array.prototype.push.apply(end, it.start);
|
||||
end.push(this.sourceToken);
|
||||
seq.items.pop();
|
||||
return;
|
||||
}
|
||||
}
|
||||
it.start.push(this.sourceToken);
|
||||
}
|
||||
return;
|
||||
case 'anchor':
|
||||
case 'tag':
|
||||
if (it.value || this.indent <= seq.indent)
|
||||
break;
|
||||
it.start.push(this.sourceToken);
|
||||
return;
|
||||
case 'seq-item-ind':
|
||||
if (this.indent !== seq.indent)
|
||||
break;
|
||||
if (it.value || includesToken(it.start, 'seq-item-ind'))
|
||||
seq.items.push({ start: [this.sourceToken] });
|
||||
else
|
||||
it.start.push(this.sourceToken);
|
||||
return;
|
||||
}
|
||||
if (this.indent > seq.indent) {
|
||||
const bv = this.startBlockValue(seq);
|
||||
if (bv) {
|
||||
this.stack.push(bv);
|
||||
return;
|
||||
}
|
||||
}
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
*flowCollection(fc) {
|
||||
const it = fc.items[fc.items.length - 1];
|
||||
if (this.type === 'flow-error-end') {
|
||||
let top;
|
||||
do {
|
||||
yield* this.pop();
|
||||
top = this.peek(1);
|
||||
} while (top?.type === 'flow-collection');
|
||||
}
|
||||
else if (fc.end.length === 0) {
|
||||
switch (this.type) {
|
||||
case 'comma':
|
||||
case 'explicit-key-ind':
|
||||
if (!it || it.sep)
|
||||
fc.items.push({ start: [this.sourceToken] });
|
||||
else
|
||||
it.start.push(this.sourceToken);
|
||||
return;
|
||||
case 'map-value-ind':
|
||||
if (!it || it.value)
|
||||
fc.items.push({ start: [], key: null, sep: [this.sourceToken] });
|
||||
else if (it.sep)
|
||||
it.sep.push(this.sourceToken);
|
||||
else
|
||||
Object.assign(it, { key: null, sep: [this.sourceToken] });
|
||||
return;
|
||||
case 'space':
|
||||
case 'comment':
|
||||
case 'newline':
|
||||
case 'anchor':
|
||||
case 'tag':
|
||||
if (!it || it.value)
|
||||
fc.items.push({ start: [this.sourceToken] });
|
||||
else if (it.sep)
|
||||
it.sep.push(this.sourceToken);
|
||||
else
|
||||
it.start.push(this.sourceToken);
|
||||
return;
|
||||
case 'alias':
|
||||
case 'scalar':
|
||||
case 'single-quoted-scalar':
|
||||
case 'double-quoted-scalar': {
|
||||
const fs = this.flowScalar(this.type);
|
||||
if (!it || it.value)
|
||||
fc.items.push({ start: [], key: fs, sep: [] });
|
||||
else if (it.sep)
|
||||
this.stack.push(fs);
|
||||
else
|
||||
Object.assign(it, { key: fs, sep: [] });
|
||||
return;
|
||||
}
|
||||
case 'flow-map-end':
|
||||
case 'flow-seq-end':
|
||||
fc.end.push(this.sourceToken);
|
||||
return;
|
||||
}
|
||||
const bv = this.startBlockValue(fc);
|
||||
/* istanbul ignore else should not happen */
|
||||
if (bv)
|
||||
this.stack.push(bv);
|
||||
else {
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
}
|
||||
else {
|
||||
const parent = this.peek(2);
|
||||
if (parent.type === 'block-map' &&
|
||||
((this.type === 'map-value-ind' && parent.indent === fc.indent) ||
|
||||
(this.type === 'newline' &&
|
||||
!parent.items[parent.items.length - 1].sep))) {
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
}
|
||||
else if (this.type === 'map-value-ind' &&
|
||||
parent.type !== 'flow-collection') {
|
||||
const prev = getPrevProps(parent);
|
||||
const start = getFirstKeyStartProps(prev);
|
||||
fixFlowSeqItems(fc);
|
||||
const sep = fc.end.splice(1, fc.end.length);
|
||||
sep.push(this.sourceToken);
|
||||
const map = {
|
||||
type: 'block-map',
|
||||
offset: fc.offset,
|
||||
indent: fc.indent,
|
||||
items: [{ start, key: fc, sep }]
|
||||
};
|
||||
this.onKeyLine = true;
|
||||
this.stack[this.stack.length - 1] = map;
|
||||
}
|
||||
else {
|
||||
yield* this.lineEnd(fc);
|
||||
}
|
||||
}
|
||||
}
|
||||
flowScalar(type) {
|
||||
if (this.onNewLine) {
|
||||
let nl = this.source.indexOf('\n') + 1;
|
||||
while (nl !== 0) {
|
||||
this.onNewLine(this.offset + nl);
|
||||
nl = this.source.indexOf('\n', nl) + 1;
|
||||
}
|
||||
}
|
||||
return {
|
||||
type,
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
source: this.source
|
||||
};
|
||||
}
|
||||
startBlockValue(parent) {
|
||||
switch (this.type) {
|
||||
case 'alias':
|
||||
case 'scalar':
|
||||
case 'single-quoted-scalar':
|
||||
case 'double-quoted-scalar':
|
||||
return this.flowScalar(this.type);
|
||||
case 'block-scalar-header':
|
||||
return {
|
||||
type: 'block-scalar',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
props: [this.sourceToken],
|
||||
source: ''
|
||||
};
|
||||
case 'flow-map-start':
|
||||
case 'flow-seq-start':
|
||||
return {
|
||||
type: 'flow-collection',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
start: this.sourceToken,
|
||||
items: [],
|
||||
end: []
|
||||
};
|
||||
case 'seq-item-ind':
|
||||
return {
|
||||
type: 'block-seq',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start: [this.sourceToken] }]
|
||||
};
|
||||
case 'explicit-key-ind': {
|
||||
this.onKeyLine = true;
|
||||
const prev = getPrevProps(parent);
|
||||
const start = getFirstKeyStartProps(prev);
|
||||
start.push(this.sourceToken);
|
||||
return {
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start, explicitKey: true }]
|
||||
};
|
||||
}
|
||||
case 'map-value-ind': {
|
||||
this.onKeyLine = true;
|
||||
const prev = getPrevProps(parent);
|
||||
const start = getFirstKeyStartProps(prev);
|
||||
return {
|
||||
type: 'block-map',
|
||||
offset: this.offset,
|
||||
indent: this.indent,
|
||||
items: [{ start, key: null, sep: [this.sourceToken] }]
|
||||
};
|
||||
}
|
||||
}
|
||||
return null;
|
||||
}
|
||||
atIndentedComment(start, indent) {
|
||||
if (this.type !== 'comment')
|
||||
return false;
|
||||
if (this.indent <= indent)
|
||||
return false;
|
||||
return start.every(st => st.type === 'newline' || st.type === 'space');
|
||||
}
|
||||
*documentEnd(docEnd) {
|
||||
if (this.type !== 'doc-mode') {
|
||||
if (docEnd.end)
|
||||
docEnd.end.push(this.sourceToken);
|
||||
else
|
||||
docEnd.end = [this.sourceToken];
|
||||
if (this.type === 'newline')
|
||||
yield* this.pop();
|
||||
}
|
||||
}
|
||||
*lineEnd(token) {
|
||||
switch (this.type) {
|
||||
case 'comma':
|
||||
case 'doc-start':
|
||||
case 'doc-end':
|
||||
case 'flow-seq-end':
|
||||
case 'flow-map-end':
|
||||
case 'map-value-ind':
|
||||
yield* this.pop();
|
||||
yield* this.step();
|
||||
break;
|
||||
case 'newline':
|
||||
this.onKeyLine = false;
|
||||
// fallthrough
|
||||
case 'space':
|
||||
case 'comment':
|
||||
default:
|
||||
// all other values are errors
|
||||
if (token.end)
|
||||
token.end.push(this.sourceToken);
|
||||
else
|
||||
token.end = [this.sourceToken];
|
||||
if (this.type === 'newline')
|
||||
yield* this.pop();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
export { Parser };
|
||||
102  node_modules/yaml/browser/dist/public-api.js  generated  vendored  Normal file
@@ -0,0 +1,102 @@
import { Composer } from './compose/composer.js';
import { Document } from './doc/Document.js';
import { prettifyError, YAMLParseError } from './errors.js';
import { warn } from './log.js';
import { isDocument } from './nodes/identity.js';
import { LineCounter } from './parse/line-counter.js';
import { Parser } from './parse/parser.js';

function parseOptions(options) {
    const prettyErrors = options.prettyErrors !== false;
    const lineCounter = options.lineCounter || (prettyErrors && new LineCounter()) || null;
    return { lineCounter, prettyErrors };
}
/**
 * Parse the input as a stream of YAML documents.
 *
 * Documents should be separated from each other by `...` or `---` marker lines.
 *
 * @returns If an empty `docs` array is returned, it will be of type
 *   EmptyStream and contain additional stream information. In
 *   TypeScript, you should use `'empty' in docs` as a type guard for it.
 */
function parseAllDocuments(source, options = {}) {
    const { lineCounter, prettyErrors } = parseOptions(options);
    const parser = new Parser(lineCounter?.addNewLine);
    const composer = new Composer(options);
    const docs = Array.from(composer.compose(parser.parse(source)));
    if (prettyErrors && lineCounter)
        for (const doc of docs) {
            doc.errors.forEach(prettifyError(source, lineCounter));
            doc.warnings.forEach(prettifyError(source, lineCounter));
        }
    if (docs.length > 0)
        return docs;
    return Object.assign([], { empty: true }, composer.streamInfo());
}
/** Parse an input string into a single YAML.Document */
function parseDocument(source, options = {}) {
    const { lineCounter, prettyErrors } = parseOptions(options);
    const parser = new Parser(lineCounter?.addNewLine);
    const composer = new Composer(options);
    // `doc` is always set by compose.end(true) at the very latest
    let doc = null;
    for (const _doc of composer.compose(parser.parse(source), true, source.length)) {
        if (!doc)
            doc = _doc;
        else if (doc.options.logLevel !== 'silent') {
            doc.errors.push(new YAMLParseError(_doc.range.slice(0, 2), 'MULTIPLE_DOCS', 'Source contains multiple documents; please use YAML.parseAllDocuments()'));
            break;
        }
    }
    if (prettyErrors && lineCounter) {
        doc.errors.forEach(prettifyError(source, lineCounter));
        doc.warnings.forEach(prettifyError(source, lineCounter));
    }
    return doc;
}
function parse(src, reviver, options) {
    let _reviver = undefined;
    if (typeof reviver === 'function') {
        _reviver = reviver;
    }
    else if (options === undefined && reviver && typeof reviver === 'object') {
        options = reviver;
    }
    const doc = parseDocument(src, options);
    if (!doc)
        return null;
    doc.warnings.forEach(warning => warn(doc.options.logLevel, warning));
    if (doc.errors.length > 0) {
        if (doc.options.logLevel !== 'silent')
            throw doc.errors[0];
        else
            doc.errors = [];
    }
    return doc.toJS(Object.assign({ reviver: _reviver }, options));
}
function stringify(value, replacer, options) {
    let _replacer = null;
    if (typeof replacer === 'function' || Array.isArray(replacer)) {
        _replacer = replacer;
    }
    else if (options === undefined && replacer) {
        options = replacer;
    }
    if (typeof options === 'string')
        options = options.length;
    if (typeof options === 'number') {
        const indent = Math.round(options);
        options = indent < 1 ? undefined : indent > 8 ? { indent: 8 } : { indent };
    }
    if (value === undefined) {
        const { keepUndefined } = options ?? replacer ?? {};
        if (!keepUndefined)
            return undefined;
    }
    if (isDocument(value) && !_replacer)
        return value.toString(options);
    return new Document(value, _replacer, options).toString(options);
}

export { parse, parseAllDocuments, parseDocument, stringify };
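As an aside, the indent handling in the vendored `stringify` above is easy to miss: a string third argument is replaced by its length, and a numeric argument is rounded and clamped to the 1–8 range. This is a minimal standalone sketch of that normalization step (an illustrative re-implementation, not the library's own export):

```javascript
// Sketch of the indent-option normalization performed by `stringify` above:
// a string option becomes its length; a number is rounded, then values < 1
// yield undefined (library default) and values > 8 are clamped to 8.
function normalizeIndentOption(options) {
    if (typeof options === 'string')
        options = options.length;
    if (typeof options === 'number') {
        const indent = Math.round(options);
        options = indent < 1 ? undefined : indent > 8 ? { indent: 8 } : { indent };
    }
    return options;
}

console.log(normalizeIndentOption('    ')); // { indent: 4 }
console.log(normalizeIndentOption(12));     // { indent: 8 }
console.log(normalizeIndentOption(0));      // undefined
```

In practice this means passing a four-space string and passing the number 4 configure the same output indentation.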
37  node_modules/yaml/browser/dist/schema/Schema.js  generated  vendored  Normal file
@@ -0,0 +1,37 @@
import { MAP, SCALAR, SEQ } from '../nodes/identity.js';
import { map } from './common/map.js';
import { seq } from './common/seq.js';
import { string } from './common/string.js';
import { getTags, coreKnownTags } from './tags.js';

const sortMapEntriesByKey = (a, b) => a.key < b.key ? -1 : a.key > b.key ? 1 : 0;
class Schema {
    constructor({ compat, customTags, merge, resolveKnownTags, schema, sortMapEntries, toStringDefaults }) {
        this.compat = Array.isArray(compat)
            ? getTags(compat, 'compat')
            : compat
                ? getTags(null, compat)
                : null;
        this.name = (typeof schema === 'string' && schema) || 'core';
        this.knownTags = resolveKnownTags ? coreKnownTags : {};
        this.tags = getTags(customTags, this.name, merge);
        this.toStringOptions = toStringDefaults ?? null;
        Object.defineProperty(this, MAP, { value: map });
        Object.defineProperty(this, SCALAR, { value: string });
        Object.defineProperty(this, SEQ, { value: seq });
        // Used by createMap()
        this.sortMapEntries =
            typeof sortMapEntries === 'function'
                ? sortMapEntries
                : sortMapEntries === true
                    ? sortMapEntriesByKey
                    : null;
    }
    clone() {
        const copy = Object.create(Schema.prototype, Object.getOwnPropertyDescriptors(this));
        copy.tags = this.tags.slice();
        return copy;
    }
}

export { Schema };
17  node_modules/yaml/browser/dist/schema/common/map.js  generated  vendored  Normal file
@@ -0,0 +1,17 @@
import { isMap } from '../../nodes/identity.js';
import { YAMLMap } from '../../nodes/YAMLMap.js';

const map = {
    collection: 'map',
    default: true,
    nodeClass: YAMLMap,
    tag: 'tag:yaml.org,2002:map',
    resolve(map, onError) {
        if (!isMap(map))
            onError('Expected a mapping for this tag');
        return map;
    },
    createNode: (schema, obj, ctx) => YAMLMap.from(schema, obj, ctx)
};

export { map };
15  node_modules/yaml/browser/dist/schema/common/null.js  generated  vendored  Normal file
@@ -0,0 +1,15 @@
import { Scalar } from '../../nodes/Scalar.js';

const nullTag = {
    identify: value => value == null,
    createNode: () => new Scalar(null),
    default: true,
    tag: 'tag:yaml.org,2002:null',
    test: /^(?:~|[Nn]ull|NULL)?$/,
    resolve: () => new Scalar(null),
    stringify: ({ source }, ctx) => typeof source === 'string' && nullTag.test.test(source)
        ? source
        : ctx.options.nullStr
};

export { nullTag };
17  node_modules/yaml/browser/dist/schema/common/seq.js  generated  vendored  Normal file
@@ -0,0 +1,17 @@
import { isSeq } from '../../nodes/identity.js';
import { YAMLSeq } from '../../nodes/YAMLSeq.js';

const seq = {
    collection: 'seq',
    default: true,
    nodeClass: YAMLSeq,
    tag: 'tag:yaml.org,2002:seq',
    resolve(seq, onError) {
        if (!isSeq(seq))
            onError('Expected a sequence for this tag');
        return seq;
    },
    createNode: (schema, obj, ctx) => YAMLSeq.from(schema, obj, ctx)
};

export { seq };
14  node_modules/yaml/browser/dist/schema/common/string.js  generated  vendored  Normal file
@@ -0,0 +1,14 @@
import { stringifyString } from '../../stringify/stringifyString.js';

const string = {
    identify: value => typeof value === 'string',
    default: true,
    tag: 'tag:yaml.org,2002:str',
    resolve: str => str,
    stringify(item, ctx, onComment, onChompKeep) {
        ctx = Object.assign({ actualString: true }, ctx);
        return stringifyString(item, ctx, onComment, onChompKeep);
    }
};

export { string };
19  node_modules/yaml/browser/dist/schema/core/bool.js  generated  vendored  Normal file
@@ -0,0 +1,19 @@
import { Scalar } from '../../nodes/Scalar.js';

const boolTag = {
    identify: value => typeof value === 'boolean',
    default: true,
    tag: 'tag:yaml.org,2002:bool',
    test: /^(?:[Tt]rue|TRUE|[Ff]alse|FALSE)$/,
    resolve: str => new Scalar(str[0] === 't' || str[0] === 'T'),
    stringify({ source, value }, ctx) {
        if (source && boolTag.test.test(source)) {
            const sv = source[0] === 't' || source[0] === 'T';
            if (value === sv)
                return source;
        }
        return value ? ctx.options.trueStr : ctx.options.falseStr;
    }
};

export { boolTag };
43  node_modules/yaml/browser/dist/schema/core/float.js  generated  vendored  Normal file
@@ -0,0 +1,43 @@
import { Scalar } from '../../nodes/Scalar.js';
import { stringifyNumber } from '../../stringify/stringifyNumber.js';

const floatNaN = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    test: /^(?:[-+]?\.(?:inf|Inf|INF)|\.nan|\.NaN|\.NAN)$/,
    resolve: str => str.slice(-3).toLowerCase() === 'nan'
        ? NaN
        : str[0] === '-'
            ? Number.NEGATIVE_INFINITY
            : Number.POSITIVE_INFINITY,
    stringify: stringifyNumber
};
const floatExp = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    format: 'EXP',
    test: /^[-+]?(?:\.[0-9]+|[0-9]+(?:\.[0-9]*)?)[eE][-+]?[0-9]+$/,
    resolve: str => parseFloat(str),
    stringify(node) {
        const num = Number(node.value);
        return isFinite(num) ? num.toExponential() : stringifyNumber(node);
    }
};
const float = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    test: /^[-+]?(?:\.[0-9]+|[0-9]+\.[0-9]*)$/,
    resolve(str) {
        const node = new Scalar(parseFloat(str));
        const dot = str.indexOf('.');
        if (dot !== -1 && str[str.length - 1] === '0')
            node.minFractionDigits = str.length - dot - 1;
        return node;
    },
    stringify: stringifyNumber
};

export { float, floatExp, floatNaN };
38  node_modules/yaml/browser/dist/schema/core/int.js  generated  vendored  Normal file
@@ -0,0 +1,38 @@
import { stringifyNumber } from '../../stringify/stringifyNumber.js';

const intIdentify = (value) => typeof value === 'bigint' || Number.isInteger(value);
const intResolve = (str, offset, radix, { intAsBigInt }) => (intAsBigInt ? BigInt(str) : parseInt(str.substring(offset), radix));
function intStringify(node, radix, prefix) {
    const { value } = node;
    if (intIdentify(value) && value >= 0)
        return prefix + value.toString(radix);
    return stringifyNumber(node);
}
const intOct = {
    identify: value => intIdentify(value) && value >= 0,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'OCT',
    test: /^0o[0-7]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 2, 8, opt),
    stringify: node => intStringify(node, 8, '0o')
};
const int = {
    identify: intIdentify,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    test: /^[-+]?[0-9]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 0, 10, opt),
    stringify: stringifyNumber
};
const intHex = {
    identify: value => intIdentify(value) && value >= 0,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'HEX',
    test: /^0x[0-9a-fA-F]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 2, 16, opt),
    stringify: node => intStringify(node, 16, '0x')
};

export { int, intHex, intOct };
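A detail worth calling out in the int.js listing above: the octal and hex tags pass `offset: 2` to `intResolve` so that the `0o`/`0x` prefix is skipped before the digits reach `parseInt` with the matching radix, while `BigInt` accepts the prefixed string directly. A minimal standalone re-creation (illustrative only; the signature here takes a plain boolean instead of the library's `{ intAsBigInt }` options object):

```javascript
// Sketch of the offset/radix resolution used by intOct/int/intHex above.
// BigInt('0o17') understands the prefix itself; parseInt needs it stripped
// and the radix passed explicitly, hence str.substring(offset).
const intResolve = (str, offset, radix, intAsBigInt) =>
    intAsBigInt ? BigInt(str) : parseInt(str.substring(offset), radix);

console.log(intResolve('0o17', 2, 8, false));  // 15
console.log(intResolve('0xff', 2, 16, false)); // 255
console.log(intResolve('-42', 0, 10, false));  // -42
```

The decimal tag uses `offset: 0` because there is no prefix to strip.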
23  node_modules/yaml/browser/dist/schema/core/schema.js  generated  vendored  Normal file
@@ -0,0 +1,23 @@
import { map } from '../common/map.js';
import { nullTag } from '../common/null.js';
import { seq } from '../common/seq.js';
import { string } from '../common/string.js';
import { boolTag } from './bool.js';
import { floatNaN, floatExp, float } from './float.js';
import { intOct, int, intHex } from './int.js';

const schema = [
    map,
    seq,
    string,
    nullTag,
    boolTag,
    intOct,
    int,
    intHex,
    floatNaN,
    floatExp,
    float
];

export { schema };
62 node_modules/yaml/browser/dist/schema/json/schema.js generated vendored Normal file
@@ -0,0 +1,62 @@
import { Scalar } from '../../nodes/Scalar.js';
import { map } from '../common/map.js';
import { seq } from '../common/seq.js';

function intIdentify(value) {
    return typeof value === 'bigint' || Number.isInteger(value);
}
const stringifyJSON = ({ value }) => JSON.stringify(value);
const jsonScalars = [
    {
        identify: value => typeof value === 'string',
        default: true,
        tag: 'tag:yaml.org,2002:str',
        resolve: str => str,
        stringify: stringifyJSON
    },
    {
        identify: value => value == null,
        createNode: () => new Scalar(null),
        default: true,
        tag: 'tag:yaml.org,2002:null',
        test: /^null$/,
        resolve: () => null,
        stringify: stringifyJSON
    },
    {
        identify: value => typeof value === 'boolean',
        default: true,
        tag: 'tag:yaml.org,2002:bool',
        test: /^true$|^false$/,
        resolve: str => str === 'true',
        stringify: stringifyJSON
    },
    {
        identify: intIdentify,
        default: true,
        tag: 'tag:yaml.org,2002:int',
        test: /^-?(?:0|[1-9][0-9]*)$/,
        resolve: (str, _onError, { intAsBigInt }) => intAsBigInt ? BigInt(str) : parseInt(str, 10),
        stringify: ({ value }) => intIdentify(value) ? value.toString() : JSON.stringify(value)
    },
    {
        identify: value => typeof value === 'number',
        default: true,
        tag: 'tag:yaml.org,2002:float',
        test: /^-?(?:0|[1-9][0-9]*)(?:\.[0-9]*)?(?:[eE][-+]?[0-9]+)?$/,
        resolve: str => parseFloat(str),
        stringify: stringifyJSON
    }
];
const jsonError = {
    default: true,
    tag: '',
    test: /^/,
    resolve(str, onError) {
        onError(`Unresolved plain scalar ${JSON.stringify(str)}`);
        return str;
    }
};
const schema = [map, seq].concat(jsonScalars, jsonError);

export { schema };
96 node_modules/yaml/browser/dist/schema/tags.js generated vendored Normal file
@@ -0,0 +1,96 @@
import { map } from './common/map.js';
import { nullTag } from './common/null.js';
import { seq } from './common/seq.js';
import { string } from './common/string.js';
import { boolTag } from './core/bool.js';
import { floatNaN, floatExp, float } from './core/float.js';
import { intOct, intHex, int } from './core/int.js';
import { schema } from './core/schema.js';
import { schema as schema$1 } from './json/schema.js';
import { binary } from './yaml-1.1/binary.js';
import { merge } from './yaml-1.1/merge.js';
import { omap } from './yaml-1.1/omap.js';
import { pairs } from './yaml-1.1/pairs.js';
import { schema as schema$2 } from './yaml-1.1/schema.js';
import { set } from './yaml-1.1/set.js';
import { timestamp, intTime, floatTime } from './yaml-1.1/timestamp.js';

const schemas = new Map([
    ['core', schema],
    ['failsafe', [map, seq, string]],
    ['json', schema$1],
    ['yaml11', schema$2],
    ['yaml-1.1', schema$2]
]);
const tagsByName = {
    binary,
    bool: boolTag,
    float,
    floatExp,
    floatNaN,
    floatTime,
    int,
    intHex,
    intOct,
    intTime,
    map,
    merge,
    null: nullTag,
    omap,
    pairs,
    seq,
    set,
    timestamp
};
const coreKnownTags = {
    'tag:yaml.org,2002:binary': binary,
    'tag:yaml.org,2002:merge': merge,
    'tag:yaml.org,2002:omap': omap,
    'tag:yaml.org,2002:pairs': pairs,
    'tag:yaml.org,2002:set': set,
    'tag:yaml.org,2002:timestamp': timestamp
};
function getTags(customTags, schemaName, addMergeTag) {
    const schemaTags = schemas.get(schemaName);
    if (schemaTags && !customTags) {
        return addMergeTag && !schemaTags.includes(merge)
            ? schemaTags.concat(merge)
            : schemaTags.slice();
    }
    let tags = schemaTags;
    if (!tags) {
        if (Array.isArray(customTags))
            tags = [];
        else {
            const keys = Array.from(schemas.keys())
                .filter(key => key !== 'yaml11')
                .map(key => JSON.stringify(key))
                .join(', ');
            throw new Error(`Unknown schema "${schemaName}"; use one of ${keys} or define customTags array`);
        }
    }
    if (Array.isArray(customTags)) {
        for (const tag of customTags)
            tags = tags.concat(tag);
    }
    else if (typeof customTags === 'function') {
        tags = customTags(tags.slice());
    }
    if (addMergeTag)
        tags = tags.concat(merge);
    return tags.reduce((tags, tag) => {
        const tagObj = typeof tag === 'string' ? tagsByName[tag] : tag;
        if (!tagObj) {
            const tagName = JSON.stringify(tag);
            const keys = Object.keys(tagsByName)
                .map(key => JSON.stringify(key))
                .join(', ');
            throw new Error(`Unknown custom tag ${tagName}; use one of ${keys}`);
        }
        if (!tags.includes(tagObj))
            tags.push(tagObj);
        return tags;
    }, []);
}

export { coreKnownTags, getTags };
58 node_modules/yaml/browser/dist/schema/yaml-1.1/binary.js generated vendored Normal file
@@ -0,0 +1,58 @@
import { Scalar } from '../../nodes/Scalar.js';
import { stringifyString } from '../../stringify/stringifyString.js';

const binary = {
    identify: value => value instanceof Uint8Array, // Buffer inherits from Uint8Array
    default: false,
    tag: 'tag:yaml.org,2002:binary',
    /**
     * Returns a Buffer in node and an Uint8Array in browsers
     *
     * To use the resulting buffer as an image, you'll want to do something like:
     *
     *   const blob = new Blob([buffer], { type: 'image/jpeg' })
     *   document.querySelector('#photo').src = URL.createObjectURL(blob)
     */
    resolve(src, onError) {
        if (typeof atob === 'function') {
            // On IE 11, atob() can't handle newlines
            const str = atob(src.replace(/[\n\r]/g, ''));
            const buffer = new Uint8Array(str.length);
            for (let i = 0; i < str.length; ++i)
                buffer[i] = str.charCodeAt(i);
            return buffer;
        }
        else {
            onError('This environment does not support reading binary tags; either Buffer or atob is required');
            return src;
        }
    },
    stringify({ comment, type, value }, ctx, onComment, onChompKeep) {
        if (!value)
            return '';
        const buf = value; // checked earlier by binary.identify()
        let str;
        if (typeof btoa === 'function') {
            let s = '';
            for (let i = 0; i < buf.length; ++i)
                s += String.fromCharCode(buf[i]);
            str = btoa(s);
        }
        else {
            throw new Error('This environment does not support writing binary tags; either Buffer or btoa is required');
        }
        type ?? (type = Scalar.BLOCK_LITERAL);
        if (type !== Scalar.QUOTE_DOUBLE) {
            const lineWidth = Math.max(ctx.options.lineWidth - ctx.indent.length, ctx.options.minContentWidth);
            const n = Math.ceil(str.length / lineWidth);
            const lines = new Array(n);
            for (let i = 0, o = 0; i < n; ++i, o += lineWidth) {
                lines[i] = str.substr(o, lineWidth);
            }
            str = lines.join(type === Scalar.BLOCK_LITERAL ? '\n' : ' ');
        }
        return stringifyString({ comment, type, value: str }, ctx, onComment, onChompKeep);
    }
};

export { binary };
26 node_modules/yaml/browser/dist/schema/yaml-1.1/bool.js generated vendored Normal file
@@ -0,0 +1,26 @@
import { Scalar } from '../../nodes/Scalar.js';

function boolStringify({ value, source }, ctx) {
    const boolObj = value ? trueTag : falseTag;
    if (source && boolObj.test.test(source))
        return source;
    return value ? ctx.options.trueStr : ctx.options.falseStr;
}
const trueTag = {
    identify: value => value === true,
    default: true,
    tag: 'tag:yaml.org,2002:bool',
    test: /^(?:Y|y|[Yy]es|YES|[Tt]rue|TRUE|[Oo]n|ON)$/,
    resolve: () => new Scalar(true),
    stringify: boolStringify
};
const falseTag = {
    identify: value => value === false,
    default: true,
    tag: 'tag:yaml.org,2002:bool',
    test: /^(?:N|n|[Nn]o|NO|[Ff]alse|FALSE|[Oo]ff|OFF)$/,
    resolve: () => new Scalar(false),
    stringify: boolStringify
};

export { falseTag, trueTag };
46 node_modules/yaml/browser/dist/schema/yaml-1.1/float.js generated vendored Normal file
@@ -0,0 +1,46 @@
import { Scalar } from '../../nodes/Scalar.js';
import { stringifyNumber } from '../../stringify/stringifyNumber.js';

const floatNaN = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    test: /^(?:[-+]?\.(?:inf|Inf|INF)|\.nan|\.NaN|\.NAN)$/,
    resolve: (str) => str.slice(-3).toLowerCase() === 'nan'
        ? NaN
        : str[0] === '-'
            ? Number.NEGATIVE_INFINITY
            : Number.POSITIVE_INFINITY,
    stringify: stringifyNumber
};
const floatExp = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    format: 'EXP',
    test: /^[-+]?(?:[0-9][0-9_]*)?(?:\.[0-9_]*)?[eE][-+]?[0-9]+$/,
    resolve: (str) => parseFloat(str.replace(/_/g, '')),
    stringify(node) {
        const num = Number(node.value);
        return isFinite(num) ? num.toExponential() : stringifyNumber(node);
    }
};
const float = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    test: /^[-+]?(?:[0-9][0-9_]*)?\.[0-9_]*$/,
    resolve(str) {
        const node = new Scalar(parseFloat(str.replace(/_/g, '')));
        const dot = str.indexOf('.');
        if (dot !== -1) {
            const f = str.substring(dot + 1).replace(/_/g, '');
            if (f[f.length - 1] === '0')
                node.minFractionDigits = f.length;
        }
        return node;
    },
    stringify: stringifyNumber
};

export { float, floatExp, floatNaN };
71 node_modules/yaml/browser/dist/schema/yaml-1.1/int.js generated vendored Normal file
@@ -0,0 +1,71 @@
import { stringifyNumber } from '../../stringify/stringifyNumber.js';

const intIdentify = (value) => typeof value === 'bigint' || Number.isInteger(value);
function intResolve(str, offset, radix, { intAsBigInt }) {
    const sign = str[0];
    if (sign === '-' || sign === '+')
        offset += 1;
    str = str.substring(offset).replace(/_/g, '');
    if (intAsBigInt) {
        switch (radix) {
            case 2:
                str = `0b${str}`;
                break;
            case 8:
                str = `0o${str}`;
                break;
            case 16:
                str = `0x${str}`;
                break;
        }
        const n = BigInt(str);
        return sign === '-' ? BigInt(-1) * n : n;
    }
    const n = parseInt(str, radix);
    return sign === '-' ? -1 * n : n;
}
function intStringify(node, radix, prefix) {
    const { value } = node;
    if (intIdentify(value)) {
        const str = value.toString(radix);
        return value < 0 ? '-' + prefix + str.substr(1) : prefix + str;
    }
    return stringifyNumber(node);
}
const intBin = {
    identify: intIdentify,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'BIN',
    test: /^[-+]?0b[0-1_]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 2, 2, opt),
    stringify: node => intStringify(node, 2, '0b')
};
const intOct = {
    identify: intIdentify,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'OCT',
    test: /^[-+]?0[0-7_]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 1, 8, opt),
    stringify: node => intStringify(node, 8, '0')
};
const int = {
    identify: intIdentify,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    test: /^[-+]?[0-9][0-9_]*$/,
    resolve: (str, _onError, opt) => intResolve(str, 0, 10, opt),
    stringify: stringifyNumber
};
const intHex = {
    identify: intIdentify,
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'HEX',
    test: /^[-+]?0x[0-9a-fA-F_]+$/,
    resolve: (str, _onError, opt) => intResolve(str, 2, 16, opt),
    stringify: node => intStringify(node, 16, '0x')
};

export { int, intBin, intHex, intOct };
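The yaml-1.1 `intResolve` above peels the sign off first, strips `_` separators, and re-applies a radix prefix when parsing to BigInt. A standalone copy of that logic (for illustration only, not an export of the library) behaves like this:

```javascript
// Local sketch of the intResolve logic from yaml-1.1/int.js above.
// `offset` skips a radix prefix such as "0x"; the sign moves the offset by one.
function intResolve(str, offset, radix, { intAsBigInt } = {}) {
    const sign = str[0];
    if (sign === '-' || sign === '+')
        offset += 1;
    str = str.substring(offset).replace(/_/g, ''); // drop "_" digit separators
    if (intAsBigInt) {
        // BigInt() needs the prefix back to know the radix
        switch (radix) {
            case 2: str = `0b${str}`; break;
            case 8: str = `0o${str}`; break;
            case 16: str = `0x${str}`; break;
        }
        const n = BigInt(str);
        return sign === '-' ? BigInt(-1) * n : n;
    }
    const n = parseInt(str, radix);
    return sign === '-' ? -1 * n : n;
}

console.log(intResolve('1_000', 0, 10, {}));  // 1000
console.log(intResolve('-0x1_F', 2, 16, {})); // -31
```

Note that the sign is handled outside `parseInt`/`BigInt`, which is why the hex and octal `test` regexes may begin with `[-+]?` even though the prefix offset is fixed.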
64 node_modules/yaml/browser/dist/schema/yaml-1.1/merge.js generated vendored Normal file
@@ -0,0 +1,64 @@
import { isScalar, isAlias, isSeq, isMap } from '../../nodes/identity.js';
import { Scalar } from '../../nodes/Scalar.js';

// If the value associated with a merge key is a single mapping node, each of
// its key/value pairs is inserted into the current mapping, unless the key
// already exists in it. If the value associated with the merge key is a
// sequence, then this sequence is expected to contain mapping nodes and each
// of these nodes is merged in turn according to its order in the sequence.
// Keys in mapping nodes earlier in the sequence override keys specified in
// later mapping nodes. -- http://yaml.org/type/merge.html
const MERGE_KEY = '<<';
const merge = {
    identify: value => value === MERGE_KEY ||
        (typeof value === 'symbol' && value.description === MERGE_KEY),
    default: 'key',
    tag: 'tag:yaml.org,2002:merge',
    test: /^<<$/,
    resolve: () => Object.assign(new Scalar(Symbol(MERGE_KEY)), {
        addToJSMap: addMergeToJSMap
    }),
    stringify: () => MERGE_KEY
};
const isMergeKey = (ctx, key) => (merge.identify(key) ||
    (isScalar(key) &&
        (!key.type || key.type === Scalar.PLAIN) &&
        merge.identify(key.value))) &&
    ctx?.doc.schema.tags.some(tag => tag.tag === merge.tag && tag.default);
function addMergeToJSMap(ctx, map, value) {
    value = ctx && isAlias(value) ? value.resolve(ctx.doc) : value;
    if (isSeq(value))
        for (const it of value.items)
            mergeValue(ctx, map, it);
    else if (Array.isArray(value))
        for (const it of value)
            mergeValue(ctx, map, it);
    else
        mergeValue(ctx, map, value);
}
function mergeValue(ctx, map, value) {
    const source = ctx && isAlias(value) ? value.resolve(ctx.doc) : value;
    if (!isMap(source))
        throw new Error('Merge sources must be maps or map aliases');
    const srcMap = source.toJSON(null, ctx, Map);
    for (const [key, value] of srcMap) {
        if (map instanceof Map) {
            if (!map.has(key))
                map.set(key, value);
        }
        else if (map instanceof Set) {
            map.add(key);
        }
        else if (!Object.prototype.hasOwnProperty.call(map, key)) {
            Object.defineProperty(map, key, {
                value,
                writable: true,
                enumerable: true,
                configurable: true
            });
        }
    }
    return map;
}

export { addMergeToJSMap, isMergeKey, merge };
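The "keys in earlier mapping nodes override later ones" rule quoted in the comment above falls out of `mergeValue` only inserting keys that are not already present in the target. A minimal sketch of that precedence with plain `Map`s (hypothetical helper, not part of the library API):

```javascript
// Mimics mergeValue's "insert only if absent" behavior across several sources.
function mergeInto(target, ...sources) {
    for (const src of sources)
        for (const [key, value] of src)
            if (!target.has(key)) // existing keys always win
                target.set(key, value);
    return target;
}

const target = new Map([['x', 1]]);
mergeInto(target,
    new Map([['x', 9], ['y', 2]]),  // earlier source: its 'y' wins
    new Map([['y', 8], ['z', 3]])); // later source: only 'z' is new
console.log([...target]); // [ [ 'x', 1 ], [ 'y', 2 ], [ 'z', 3 ] ]
```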
74 node_modules/yaml/browser/dist/schema/yaml-1.1/omap.js generated vendored Normal file
@@ -0,0 +1,74 @@
import { isScalar, isPair } from '../../nodes/identity.js';
import { toJS } from '../../nodes/toJS.js';
import { YAMLMap } from '../../nodes/YAMLMap.js';
import { YAMLSeq } from '../../nodes/YAMLSeq.js';
import { resolvePairs, createPairs } from './pairs.js';

class YAMLOMap extends YAMLSeq {
    constructor() {
        super();
        this.add = YAMLMap.prototype.add.bind(this);
        this.delete = YAMLMap.prototype.delete.bind(this);
        this.get = YAMLMap.prototype.get.bind(this);
        this.has = YAMLMap.prototype.has.bind(this);
        this.set = YAMLMap.prototype.set.bind(this);
        this.tag = YAMLOMap.tag;
    }
    /**
     * If `ctx` is given, the return type is actually `Map<unknown, unknown>`,
     * but TypeScript won't allow widening the signature of a child method.
     */
    toJSON(_, ctx) {
        if (!ctx)
            return super.toJSON(_);
        const map = new Map();
        if (ctx?.onCreate)
            ctx.onCreate(map);
        for (const pair of this.items) {
            let key, value;
            if (isPair(pair)) {
                key = toJS(pair.key, '', ctx);
                value = toJS(pair.value, key, ctx);
            }
            else {
                key = toJS(pair, '', ctx);
            }
            if (map.has(key))
                throw new Error('Ordered maps must not include duplicate keys');
            map.set(key, value);
        }
        return map;
    }
    static from(schema, iterable, ctx) {
        const pairs = createPairs(schema, iterable, ctx);
        const omap = new this();
        omap.items = pairs.items;
        return omap;
    }
}
YAMLOMap.tag = 'tag:yaml.org,2002:omap';
const omap = {
    collection: 'seq',
    identify: value => value instanceof Map,
    nodeClass: YAMLOMap,
    default: false,
    tag: 'tag:yaml.org,2002:omap',
    resolve(seq, onError) {
        const pairs = resolvePairs(seq, onError);
        const seenKeys = [];
        for (const { key } of pairs.items) {
            if (isScalar(key)) {
                if (seenKeys.includes(key.value)) {
                    onError(`Ordered maps must not include duplicate keys: ${key.value}`);
                }
                else {
                    seenKeys.push(key.value);
                }
            }
        }
        return Object.assign(new YAMLOMap(), pairs);
    },
    createNode: (schema, iterable, ctx) => YAMLOMap.from(schema, iterable, ctx)
};

export { YAMLOMap, omap };
78 node_modules/yaml/browser/dist/schema/yaml-1.1/pairs.js generated vendored Normal file
@@ -0,0 +1,78 @@
import { isSeq, isPair, isMap } from '../../nodes/identity.js';
import { createPair, Pair } from '../../nodes/Pair.js';
import { Scalar } from '../../nodes/Scalar.js';
import { YAMLSeq } from '../../nodes/YAMLSeq.js';

function resolvePairs(seq, onError) {
    if (isSeq(seq)) {
        for (let i = 0; i < seq.items.length; ++i) {
            let item = seq.items[i];
            if (isPair(item))
                continue;
            else if (isMap(item)) {
                if (item.items.length > 1)
                    onError('Each pair must have its own sequence indicator');
                const pair = item.items[0] || new Pair(new Scalar(null));
                if (item.commentBefore)
                    pair.key.commentBefore = pair.key.commentBefore
                        ? `${item.commentBefore}\n${pair.key.commentBefore}`
                        : item.commentBefore;
                if (item.comment) {
                    const cn = pair.value ?? pair.key;
                    cn.comment = cn.comment
                        ? `${item.comment}\n${cn.comment}`
                        : item.comment;
                }
                item = pair;
            }
            seq.items[i] = isPair(item) ? item : new Pair(item);
        }
    }
    else
        onError('Expected a sequence for this tag');
    return seq;
}
function createPairs(schema, iterable, ctx) {
    const { replacer } = ctx;
    const pairs = new YAMLSeq(schema);
    pairs.tag = 'tag:yaml.org,2002:pairs';
    let i = 0;
    if (iterable && Symbol.iterator in Object(iterable))
        for (let it of iterable) {
            if (typeof replacer === 'function')
                it = replacer.call(iterable, String(i++), it);
            let key, value;
            if (Array.isArray(it)) {
                if (it.length === 2) {
                    key = it[0];
                    value = it[1];
                }
                else
                    throw new TypeError(`Expected [key, value] tuple: ${it}`);
            }
            else if (it && it instanceof Object) {
                const keys = Object.keys(it);
                if (keys.length === 1) {
                    key = keys[0];
                    value = it[key];
                }
                else {
                    throw new TypeError(`Expected tuple with one key, not ${keys.length} keys`);
                }
            }
            else {
                key = it;
            }
            pairs.items.push(createPair(key, value, ctx));
        }
    return pairs;
}
const pairs = {
    collection: 'seq',
    default: false,
    tag: 'tag:yaml.org,2002:pairs',
    resolve: resolvePairs,
    createNode: createPairs
};

export { createPairs, pairs, resolvePairs };
39 node_modules/yaml/browser/dist/schema/yaml-1.1/schema.js generated vendored Normal file
@@ -0,0 +1,39 @@
import { map } from '../common/map.js';
import { nullTag } from '../common/null.js';
import { seq } from '../common/seq.js';
import { string } from '../common/string.js';
import { binary } from './binary.js';
import { trueTag, falseTag } from './bool.js';
import { floatNaN, floatExp, float } from './float.js';
import { intBin, intOct, int, intHex } from './int.js';
import { merge } from './merge.js';
import { omap } from './omap.js';
import { pairs } from './pairs.js';
import { set } from './set.js';
import { intTime, floatTime, timestamp } from './timestamp.js';

const schema = [
    map,
    seq,
    string,
    nullTag,
    trueTag,
    falseTag,
    intBin,
    intOct,
    int,
    intHex,
    floatNaN,
    floatExp,
    float,
    binary,
    merge,
    omap,
    pairs,
    set,
    intTime,
    floatTime,
    timestamp
];

export { schema };
93 node_modules/yaml/browser/dist/schema/yaml-1.1/set.js generated vendored Normal file
@@ -0,0 +1,93 @@
import { isMap, isPair, isScalar } from '../../nodes/identity.js';
import { Pair, createPair } from '../../nodes/Pair.js';
import { YAMLMap, findPair } from '../../nodes/YAMLMap.js';

class YAMLSet extends YAMLMap {
    constructor(schema) {
        super(schema);
        this.tag = YAMLSet.tag;
    }
    add(key) {
        let pair;
        if (isPair(key))
            pair = key;
        else if (key &&
            typeof key === 'object' &&
            'key' in key &&
            'value' in key &&
            key.value === null)
            pair = new Pair(key.key, null);
        else
            pair = new Pair(key, null);
        const prev = findPair(this.items, pair.key);
        if (!prev)
            this.items.push(pair);
    }
    /**
     * If `keepPair` is `true`, returns the Pair matching `key`.
     * Otherwise, returns the value of that Pair's key.
     */
    get(key, keepPair) {
        const pair = findPair(this.items, key);
        return !keepPair && isPair(pair)
            ? isScalar(pair.key)
                ? pair.key.value
                : pair.key
            : pair;
    }
    set(key, value) {
        if (typeof value !== 'boolean')
            throw new Error(`Expected boolean value for set(key, value) in a YAML set, not ${typeof value}`);
        const prev = findPair(this.items, key);
        if (prev && !value) {
            this.items.splice(this.items.indexOf(prev), 1);
        }
        else if (!prev && value) {
            this.items.push(new Pair(key));
        }
    }
    toJSON(_, ctx) {
        return super.toJSON(_, ctx, Set);
    }
    toString(ctx, onComment, onChompKeep) {
        if (!ctx)
            return JSON.stringify(this);
        if (this.hasAllNullValues(true))
            return super.toString(Object.assign({}, ctx, { allNullValues: true }), onComment, onChompKeep);
        else
            throw new Error('Set items must all have null values');
    }
    static from(schema, iterable, ctx) {
        const { replacer } = ctx;
        const set = new this(schema);
        if (iterable && Symbol.iterator in Object(iterable))
            for (let value of iterable) {
                if (typeof replacer === 'function')
                    value = replacer.call(iterable, value, value);
                set.items.push(createPair(value, null, ctx));
            }
        return set;
    }
}
YAMLSet.tag = 'tag:yaml.org,2002:set';
const set = {
    collection: 'map',
    identify: value => value instanceof Set,
    nodeClass: YAMLSet,
    default: false,
    tag: 'tag:yaml.org,2002:set',
    createNode: (schema, iterable, ctx) => YAMLSet.from(schema, iterable, ctx),
    resolve(map, onError) {
        if (isMap(map)) {
            if (map.hasAllNullValues(true))
                return Object.assign(new YAMLSet(), map);
            else
                onError('Set items must all have null values');
        }
        else
            onError('Expected a mapping for this tag');
        return map;
    }
};

export { YAMLSet, set };
101 node_modules/yaml/browser/dist/schema/yaml-1.1/timestamp.js generated vendored Normal file
@@ -0,0 +1,101 @@
import { stringifyNumber } from '../../stringify/stringifyNumber.js';

/** Internal types handle bigint as number, because TS can't figure it out. */
function parseSexagesimal(str, asBigInt) {
    const sign = str[0];
    const parts = sign === '-' || sign === '+' ? str.substring(1) : str;
    const num = (n) => asBigInt ? BigInt(n) : Number(n);
    const res = parts
        .replace(/_/g, '')
        .split(':')
        .reduce((res, p) => res * num(60) + num(p), num(0));
    return (sign === '-' ? num(-1) * res : res);
}
/**
 * hhhh:mm:ss.sss
 *
 * Internal types handle bigint as number, because TS can't figure it out.
 */
function stringifySexagesimal(node) {
    let { value } = node;
    let num = (n) => n;
    if (typeof value === 'bigint')
        num = n => BigInt(n);
    else if (isNaN(value) || !isFinite(value))
        return stringifyNumber(node);
    let sign = '';
    if (value < 0) {
        sign = '-';
        value *= num(-1);
    }
    const _60 = num(60);
    const parts = [value % _60]; // seconds, including ms
    if (value < 60) {
        parts.unshift(0); // at least one : is required
    }
    else {
        value = (value - parts[0]) / _60;
        parts.unshift(value % _60); // minutes
        if (value >= 60) {
            value = (value - parts[0]) / _60;
            parts.unshift(value); // hours
        }
    }
    return (sign +
        parts
            .map(n => String(n).padStart(2, '0'))
            .join(':')
            .replace(/000000\d*$/, '') // % 60 may introduce error
    );
}
const intTime = {
    identify: value => typeof value === 'bigint' || Number.isInteger(value),
    default: true,
    tag: 'tag:yaml.org,2002:int',
    format: 'TIME',
    test: /^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+$/,
    resolve: (str, _onError, { intAsBigInt }) => parseSexagesimal(str, intAsBigInt),
    stringify: stringifySexagesimal
};
const floatTime = {
    identify: value => typeof value === 'number',
    default: true,
    tag: 'tag:yaml.org,2002:float',
    format: 'TIME',
    test: /^[-+]?[0-9][0-9_]*(?::[0-5]?[0-9])+\.[0-9_]*$/,
    resolve: str => parseSexagesimal(str, false),
    stringify: stringifySexagesimal
};
const timestamp = {
    identify: value => value instanceof Date,
    default: true,
    tag: 'tag:yaml.org,2002:timestamp',
    // If the time zone is omitted, the timestamp is assumed to be specified in UTC. The time part
    // may be omitted altogether, resulting in a date format. In such a case, the time part is
    // assumed to be 00:00:00Z (start of day, UTC).
    test: RegExp('^([0-9]{4})-([0-9]{1,2})-([0-9]{1,2})' + // YYYY-Mm-Dd
        '(?:' + // time is optional
        '(?:t|T|[ \\t]+)' + // t | T | whitespace
        '([0-9]{1,2}):([0-9]{1,2}):([0-9]{1,2}(\\.[0-9]+)?)' + // Hh:Mm:Ss(.ss)?
        '(?:[ \\t]*(Z|[-+][012]?[0-9](?::[0-9]{2})?))?' + // Z | +5 | -03:30
        ')?$'),
    resolve(str) {
        const match = str.match(timestamp.test);
        if (!match)
            throw new Error('!!timestamp expects a date, starting with yyyy-mm-dd');
        const [, year, month, day, hour, minute, second] = match.map(Number);
        const millisec = match[7] ? Number((match[7] + '00').substr(1, 3)) : 0;
        let date = Date.UTC(year, month - 1, day, hour || 0, minute || 0, second || 0, millisec);
        const tz = match[8];
        if (tz && tz !== 'Z') {
            let d = parseSexagesimal(tz, false);
            if (Math.abs(d) < 30)
                d *= 60;
            date -= 60000 * d;
        }
        return new Date(date);
    },
    stringify: ({ value }) => value?.toISOString().replace(/(T00:00:00)?\.000Z$/, '') ?? ''
};

export { floatTime, intTime, timestamp };
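The `parseSexagesimal` reduce above treats each `:`-separated part as a base-60 digit, folding left to right, so `1:20:30` is `(1 * 60 + 20) * 60 + 30 = 4830` seconds. A standalone number-only copy (for illustration; the library version also handles BigInt):

```javascript
// Local sketch of parseSexagesimal from yaml-1.1/timestamp.js above,
// restricted to Number (no asBigInt branch).
function parseSexagesimal(str) {
    const sign = str[0];
    const parts = sign === '-' || sign === '+' ? str.substring(1) : str;
    const res = parts
        .replace(/_/g, '')                          // drop "_" digit separators
        .split(':')
        .reduce((acc, p) => acc * 60 + Number(p), 0); // base-60 positional fold
    return sign === '-' ? -res : res;
}

console.log(parseSexagesimal('1:20:30')); // 4830
console.log(parseSexagesimal('-3:25'));   // -205
```

The same routine is reused in `timestamp.resolve` to turn a time-zone suffix such as `-03:30` into a minute offset.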
146 node_modules/yaml/browser/dist/stringify/foldFlowLines.js generated vendored Normal file
@@ -0,0 +1,146 @@
const FOLD_FLOW = 'flow';
const FOLD_BLOCK = 'block';
const FOLD_QUOTED = 'quoted';
/**
 * Tries to keep input at up to `lineWidth` characters, splitting only on spaces
 * not followed by newlines or spaces unless `mode` is `'quoted'`. Lines are
 * terminated with `\n` and started with `indent`.
 */
function foldFlowLines(text, indent, mode = 'flow', { indentAtStart, lineWidth = 80, minContentWidth = 20, onFold, onOverflow } = {}) {
    if (!lineWidth || lineWidth < 0)
        return text;
    if (lineWidth < minContentWidth)
        minContentWidth = 0;
    const endStep = Math.max(1 + minContentWidth, 1 + lineWidth - indent.length);
    if (text.length <= endStep)
        return text;
    const folds = [];
    const escapedFolds = {};
    let end = lineWidth - indent.length;
    if (typeof indentAtStart === 'number') {
        if (indentAtStart > lineWidth - Math.max(2, minContentWidth))
            folds.push(0);
        else
            end = lineWidth - indentAtStart;
    }
    let split = undefined;
    let prev = undefined;
    let overflow = false;
    let i = -1;
    let escStart = -1;
    let escEnd = -1;
    if (mode === FOLD_BLOCK) {
        i = consumeMoreIndentedLines(text, i, indent.length);
        if (i !== -1)
            end = i + endStep;
    }
    for (let ch; (ch = text[(i += 1)]);) {
        if (mode === FOLD_QUOTED && ch === '\\') {
            escStart = i;
            switch (text[i + 1]) {
                case 'x':
                    i += 3;
                    break;
                case 'u':
                    i += 5;
                    break;
                case 'U':
                    i += 9;
                    break;
                default:
                    i += 1;
            }
            escEnd = i;
        }
        if (ch === '\n') {
            if (mode === FOLD_BLOCK)
                i = consumeMoreIndentedLines(text, i, indent.length);
            end = i + indent.length + endStep;
            split = undefined;
        }
        else {
            if (ch === ' ' &&
                prev &&
                prev !== ' ' &&
                prev !== '\n' &&
                prev !== '\t') {
                // space surrounded by non-space can be replaced with newline + indent
                const next = text[i + 1];
                if (next && next !== ' ' && next !== '\n' && next !== '\t')
                    split = i;
            }
            if (i >= end) {
                if (split) {
                    folds.push(split);
                    end = split + endStep;
                    split = undefined;
                }
                else if (mode === FOLD_QUOTED) {
                    // white-space collected at end may stretch past lineWidth
                    while (prev === ' ' || prev === '\t') {
                        prev = ch;
                        ch = text[(i += 1)];
                        overflow = true;
                    }
                    // Account for newline escape, but don't break preceding escape
                    const j = i > escEnd + 1 ? i - 2 : escStart - 1;
                    // Bail out if lineWidth & minContentWidth are shorter than an escape string
                    if (escapedFolds[j])
                        return text;
                    folds.push(j);
                    escapedFolds[j] = true;
                    end = j + endStep;
                    split = undefined;
                }
                else {
                    overflow = true;
                }
            }
        }
        prev = ch;
    }
    if (overflow && onOverflow)
        onOverflow();
    if (folds.length === 0)
        return text;
    if (onFold)
        onFold();
    let res = text.slice(0, folds[0]);
    for (let i = 0; i < folds.length; ++i) {
        const fold = folds[i];
        const end = folds[i + 1] || text.length;
        if (fold === 0)
            res = `\n${indent}${text.slice(0, end)}`;
        else {
            if (mode === FOLD_QUOTED && escapedFolds[fold])
                res += `${text[fold]}\\`;
            res += `\n${indent}${text.slice(fold + 1, end)}`;
        }
    }
    return res;
}
/**
 * Presumes `i + 1` is at the start of a line
 * @returns index of last newline in more-indented block
 */
function consumeMoreIndentedLines(text, i, indent) {
    let end = i;
    let start = i + 1;
    let ch = text[start];
    while (ch === ' ' || ch === '\t') {
        if (i < start + indent) {
            ch = text[++i];
        }
        else {
            do {
                ch = text[++i];
            } while (ch && ch !== '\n');
            end = i;
            start = i + 1;
            ch = text[start];
        }
    }
    return end;
}

export { FOLD_BLOCK, FOLD_FLOW, FOLD_QUOTED, foldFlowLines };
128 node_modules/yaml/browser/dist/stringify/stringify.js generated vendored Normal file
@@ -0,0 +1,128 @@
import { anchorIsValid } from '../doc/anchors.js';
import { isPair, isAlias, isNode, isScalar, isCollection } from '../nodes/identity.js';
import { stringifyComment } from './stringifyComment.js';
import { stringifyString } from './stringifyString.js';

function createStringifyContext(doc, options) {
    const opt = Object.assign({
        blockQuote: true,
        commentString: stringifyComment,
        defaultKeyType: null,
        defaultStringType: 'PLAIN',
        directives: null,
        doubleQuotedAsJSON: false,
        doubleQuotedMinMultiLineLength: 40,
        falseStr: 'false',
        flowCollectionPadding: true,
        indentSeq: true,
        lineWidth: 80,
        minContentWidth: 20,
        nullStr: 'null',
        simpleKeys: false,
        singleQuote: null,
        trueStr: 'true',
        verifyAliasOrder: true
    }, doc.schema.toStringOptions, options);
    let inFlow;
    switch (opt.collectionStyle) {
        case 'block':
            inFlow = false;
            break;
        case 'flow':
            inFlow = true;
            break;
        default:
            inFlow = null;
    }
    return {
        anchors: new Set(),
        doc,
        flowCollectionPadding: opt.flowCollectionPadding ? ' ' : '',
        indent: '',
        indentStep: typeof opt.indent === 'number' ? ' '.repeat(opt.indent) : '  ',
        inFlow,
        options: opt
    };
}
function getTagObject(tags, item) {
    if (item.tag) {
        const match = tags.filter(t => t.tag === item.tag);
        if (match.length > 0)
            return match.find(t => t.format === item.format) ?? match[0];
    }
    let tagObj = undefined;
    let obj;
    if (isScalar(item)) {
        obj = item.value;
        let match = tags.filter(t => t.identify?.(obj));
        if (match.length > 1) {
            const testMatch = match.filter(t => t.test);
            if (testMatch.length > 0)
                match = testMatch;
        }
        tagObj =
            match.find(t => t.format === item.format) ?? match.find(t => !t.format);
    }
    else {
        obj = item;
        tagObj = tags.find(t => t.nodeClass && obj instanceof t.nodeClass);
    }
    if (!tagObj) {
        const name = obj?.constructor?.name ?? (obj === null ? 'null' : typeof obj);
        throw new Error(`Tag not resolved for ${name} value`);
    }
    return tagObj;
}
// needs to be called before value stringifier to allow for circular anchor refs
function stringifyProps(node, tagObj, { anchors, doc }) {
    if (!doc.directives)
        return '';
    const props = [];
    const anchor = (isScalar(node) || isCollection(node)) && node.anchor;
    if (anchor && anchorIsValid(anchor)) {
        anchors.add(anchor);
        props.push(`&${anchor}`);
    }
    const tag = node.tag ?? (tagObj.default ? null : tagObj.tag);
    if (tag)
        props.push(doc.directives.tagString(tag));
    return props.join(' ');
}
function stringify(item, ctx, onComment, onChompKeep) {
    if (isPair(item))
        return item.toString(ctx, onComment, onChompKeep);
    if (isAlias(item)) {
        if (ctx.doc.directives)
            return item.toString(ctx);
        if (ctx.resolvedAliases?.has(item)) {
            throw new TypeError(`Cannot stringify circular structure without alias nodes`);
        }
        else {
            if (ctx.resolvedAliases)
                ctx.resolvedAliases.add(item);
            else
                ctx.resolvedAliases = new Set([item]);
            item = item.resolve(ctx.doc);
        }
    }
    let tagObj = undefined;
    const node = isNode(item)
        ? item
        : ctx.doc.createNode(item, { onTagObj: o => (tagObj = o) });
    tagObj ?? (tagObj = getTagObject(ctx.doc.schema.tags, node));
    const props = stringifyProps(node, tagObj, ctx);
    if (props.length > 0)
        ctx.indentAtStart = (ctx.indentAtStart ?? 0) + props.length + 1;
    const str = typeof tagObj.stringify === 'function'
        ? tagObj.stringify(node, ctx, onComment, onChompKeep)
        : isScalar(node)
            ? stringifyString(node, ctx, onComment, onChompKeep)
            : node.toString(ctx, onComment, onChompKeep);
    if (!props)
        return str;
    return isScalar(node) || str[0] === '{' || str[0] === '['
        ? `${props} ${str}`
        : `${props}\n${ctx.indent}${str}`;
}

export { createStringifyContext, stringify };
143 node_modules/yaml/browser/dist/stringify/stringifyCollection.js generated vendored Normal file
@@ -0,0 +1,143 @@
import { isNode, isPair } from '../nodes/identity.js';
import { stringify } from './stringify.js';
import { lineComment, indentComment } from './stringifyComment.js';

function stringifyCollection(collection, ctx, options) {
    const flow = ctx.inFlow ?? collection.flow;
    const stringify = flow ? stringifyFlowCollection : stringifyBlockCollection;
    return stringify(collection, ctx, options);
}
function stringifyBlockCollection({ comment, items }, ctx, { blockItemPrefix, flowChars, itemIndent, onChompKeep, onComment }) {
    const { indent, options: { commentString } } = ctx;
    const itemCtx = Object.assign({}, ctx, { indent: itemIndent, type: null });
    let chompKeep = false; // flag for the preceding node's status
    const lines = [];
    for (let i = 0; i < items.length; ++i) {
        const item = items[i];
        let comment = null;
        if (isNode(item)) {
            if (!chompKeep && item.spaceBefore)
                lines.push('');
            addCommentBefore(ctx, lines, item.commentBefore, chompKeep);
            if (item.comment)
                comment = item.comment;
        }
        else if (isPair(item)) {
            const ik = isNode(item.key) ? item.key : null;
            if (ik) {
                if (!chompKeep && ik.spaceBefore)
                    lines.push('');
                addCommentBefore(ctx, lines, ik.commentBefore, chompKeep);
            }
        }
        chompKeep = false;
        let str = stringify(item, itemCtx, () => (comment = null), () => (chompKeep = true));
        if (comment)
            str += lineComment(str, itemIndent, commentString(comment));
        if (chompKeep && comment)
            chompKeep = false;
        lines.push(blockItemPrefix + str);
    }
    let str;
    if (lines.length === 0) {
        str = flowChars.start + flowChars.end;
    }
    else {
        str = lines[0];
        for (let i = 1; i < lines.length; ++i) {
            const line = lines[i];
            str += line ? `\n${indent}${line}` : '\n';
        }
    }
    if (comment) {
        str += '\n' + indentComment(commentString(comment), indent);
        if (onComment)
            onComment();
    }
    else if (chompKeep && onChompKeep)
        onChompKeep();
    return str;
}
function stringifyFlowCollection({ items }, ctx, { flowChars, itemIndent }) {
    const { indent, indentStep, flowCollectionPadding: fcPadding, options: { commentString } } = ctx;
    itemIndent += indentStep;
    const itemCtx = Object.assign({}, ctx, {
        indent: itemIndent,
        inFlow: true,
        type: null
    });
    let reqNewline = false;
    let linesAtValue = 0;
    const lines = [];
    for (let i = 0; i < items.length; ++i) {
        const item = items[i];
        let comment = null;
        if (isNode(item)) {
            if (item.spaceBefore)
                lines.push('');
            addCommentBefore(ctx, lines, item.commentBefore, false);
            if (item.comment)
                comment = item.comment;
        }
        else if (isPair(item)) {
            const ik = isNode(item.key) ? item.key : null;
            if (ik) {
                if (ik.spaceBefore)
                    lines.push('');
                addCommentBefore(ctx, lines, ik.commentBefore, false);
                if (ik.comment)
                    reqNewline = true;
            }
            const iv = isNode(item.value) ? item.value : null;
            if (iv) {
                if (iv.comment)
                    comment = iv.comment;
                if (iv.commentBefore)
                    reqNewline = true;
            }
            else if (item.value == null && ik?.comment) {
                comment = ik.comment;
            }
        }
        if (comment)
            reqNewline = true;
        let str = stringify(item, itemCtx, () => (comment = null));
        if (i < items.length - 1)
            str += ',';
        if (comment)
            str += lineComment(str, itemIndent, commentString(comment));
        if (!reqNewline && (lines.length > linesAtValue || str.includes('\n')))
            reqNewline = true;
        lines.push(str);
        linesAtValue = lines.length;
    }
    const { start, end } = flowChars;
    if (lines.length === 0) {
        return start + end;
    }
    else {
        if (!reqNewline) {
            const len = lines.reduce((sum, line) => sum + line.length + 2, 2);
            reqNewline = ctx.options.lineWidth > 0 && len > ctx.options.lineWidth;
        }
        if (reqNewline) {
            let str = start;
            for (const line of lines)
                str += line ? `\n${indentStep}${indent}${line}` : '\n';
            return `${str}\n${indent}${end}`;
        }
        else {
            return `${start}${fcPadding}${lines.join(' ')}${fcPadding}${end}`;
        }
    }
}
function addCommentBefore({ indent, options: { commentString } }, lines, comment, chompKeep) {
    if (comment && chompKeep)
        comment = comment.replace(/^\n+/, '');
    if (comment) {
        const ic = indentComment(commentString(comment), indent);
        lines.push(ic.trimStart()); // Avoid double indent on first line
    }
}

export { stringifyCollection };
20 node_modules/yaml/browser/dist/stringify/stringifyComment.js generated vendored Normal file
@@ -0,0 +1,20 @@
/**
 * Stringifies a comment.
 *
 * Empty comment lines are left empty,
 * lines consisting of a single space are replaced by `#`,
 * and all other lines are prefixed with a `#`.
 */
const stringifyComment = (str) => str.replace(/^(?!$)(?: $)?/gm, '#');
function indentComment(comment, indent) {
    if (/^\n+$/.test(comment))
        return comment.substring(1);
    return indent ? comment.replace(/^(?! *$)/gm, indent) : comment;
}
const lineComment = (str, indent, comment) => str.endsWith('\n')
    ? indentComment(comment, indent)
    : comment.includes('\n')
        ? '\n' + indentComment(comment, indent)
        : (str.endsWith(' ') ? '' : ' ') + comment;

export { indentComment, lineComment, stringifyComment };
85 node_modules/yaml/browser/dist/stringify/stringifyDocument.js generated vendored Normal file
@@ -0,0 +1,85 @@
import { isNode } from '../nodes/identity.js';
import { createStringifyContext, stringify } from './stringify.js';
import { indentComment, lineComment } from './stringifyComment.js';

function stringifyDocument(doc, options) {
    const lines = [];
    let hasDirectives = options.directives === true;
    if (options.directives !== false && doc.directives) {
        const dir = doc.directives.toString(doc);
        if (dir) {
            lines.push(dir);
            hasDirectives = true;
        }
        else if (doc.directives.docStart)
            hasDirectives = true;
    }
    if (hasDirectives)
        lines.push('---');
    const ctx = createStringifyContext(doc, options);
    const { commentString } = ctx.options;
    if (doc.commentBefore) {
        if (lines.length !== 1)
            lines.unshift('');
        const cs = commentString(doc.commentBefore);
        lines.unshift(indentComment(cs, ''));
    }
    let chompKeep = false;
    let contentComment = null;
    if (doc.contents) {
        if (isNode(doc.contents)) {
            if (doc.contents.spaceBefore && hasDirectives)
                lines.push('');
            if (doc.contents.commentBefore) {
                const cs = commentString(doc.contents.commentBefore);
                lines.push(indentComment(cs, ''));
            }
            // top-level block scalars need to be indented if followed by a comment
            ctx.forceBlockIndent = !!doc.comment;
            contentComment = doc.contents.comment;
        }
        const onChompKeep = contentComment ? undefined : () => (chompKeep = true);
        let body = stringify(doc.contents, ctx, () => (contentComment = null), onChompKeep);
        if (contentComment)
            body += lineComment(body, '', commentString(contentComment));
        if ((body[0] === '|' || body[0] === '>') &&
            lines[lines.length - 1] === '---') {
            // Top-level block scalars with a preceding doc marker ought to use the
            // same line for their header.
            lines[lines.length - 1] = `--- ${body}`;
        }
        else
            lines.push(body);
    }
    else {
        lines.push(stringify(doc.contents, ctx));
    }
    if (doc.directives?.docEnd) {
        if (doc.comment) {
            const cs = commentString(doc.comment);
            if (cs.includes('\n')) {
                lines.push('...');
                lines.push(indentComment(cs, ''));
            }
            else {
                lines.push(`... ${cs}`);
            }
        }
        else {
            lines.push('...');
        }
    }
    else {
        let dc = doc.comment;
        if (dc && chompKeep)
            dc = dc.replace(/^\n+/, '');
        if (dc) {
            if ((!chompKeep || contentComment) && lines[lines.length - 1] !== '')
                lines.push('');
            lines.push(indentComment(commentString(dc), ''));
        }
    }
    return lines.join('\n') + '\n';
}

export { stringifyDocument };
24 node_modules/yaml/browser/dist/stringify/stringifyNumber.js generated vendored Normal file
@@ -0,0 +1,24 @@
function stringifyNumber({ format, minFractionDigits, tag, value }) {
    if (typeof value === 'bigint')
        return String(value);
    const num = typeof value === 'number' ? value : Number(value);
    if (!isFinite(num))
        return isNaN(num) ? '.nan' : num < 0 ? '-.inf' : '.inf';
    let n = Object.is(value, -0) ? '-0' : JSON.stringify(value);
    if (!format &&
        minFractionDigits &&
        (!tag || tag === 'tag:yaml.org,2002:float') &&
        /^\d/.test(n)) {
        let i = n.indexOf('.');
        if (i < 0) {
            i = n.length;
            n += '.';
        }
        let d = minFractionDigits - (n.length - i - 1);
        while (d-- > 0)
            n += '0';
    }
    return n;
}

export { stringifyNumber };
150 node_modules/yaml/browser/dist/stringify/stringifyPair.js generated vendored Normal file
@@ -0,0 +1,150 @@
import { isCollection, isNode, isScalar, isSeq } from '../nodes/identity.js';
import { Scalar } from '../nodes/Scalar.js';
import { stringify } from './stringify.js';
import { lineComment, indentComment } from './stringifyComment.js';

function stringifyPair({ key, value }, ctx, onComment, onChompKeep) {
    const { allNullValues, doc, indent, indentStep, options: { commentString, indentSeq, simpleKeys } } = ctx;
    let keyComment = (isNode(key) && key.comment) || null;
    if (simpleKeys) {
        if (keyComment) {
            throw new Error('With simple keys, key nodes cannot have comments');
        }
        if (isCollection(key) || (!isNode(key) && typeof key === 'object')) {
            const msg = 'With simple keys, collection cannot be used as a key value';
            throw new Error(msg);
        }
    }
    let explicitKey = !simpleKeys &&
        (!key ||
            (keyComment && value == null && !ctx.inFlow) ||
            isCollection(key) ||
            (isScalar(key)
                ? key.type === Scalar.BLOCK_FOLDED || key.type === Scalar.BLOCK_LITERAL
                : typeof key === 'object'));
    ctx = Object.assign({}, ctx, {
        allNullValues: false,
        implicitKey: !explicitKey && (simpleKeys || !allNullValues),
        indent: indent + indentStep
    });
    let keyCommentDone = false;
    let chompKeep = false;
    let str = stringify(key, ctx, () => (keyCommentDone = true), () => (chompKeep = true));
    if (!explicitKey && !ctx.inFlow && str.length > 1024) {
        if (simpleKeys)
            throw new Error('With simple keys, single line scalar must not span more than 1024 characters');
        explicitKey = true;
    }
    if (ctx.inFlow) {
        if (allNullValues || value == null) {
            if (keyCommentDone && onComment)
                onComment();
            return str === '' ? '?' : explicitKey ? `? ${str}` : str;
        }
    }
    else if ((allNullValues && !simpleKeys) || (value == null && explicitKey)) {
        str = `? ${str}`;
        if (keyComment && !keyCommentDone) {
            str += lineComment(str, ctx.indent, commentString(keyComment));
        }
        else if (chompKeep && onChompKeep)
            onChompKeep();
        return str;
    }
    if (keyCommentDone)
        keyComment = null;
    if (explicitKey) {
        if (keyComment)
            str += lineComment(str, ctx.indent, commentString(keyComment));
        str = `? ${str}\n${indent}:`;
    }
    else {
        str = `${str}:`;
        if (keyComment)
            str += lineComment(str, ctx.indent, commentString(keyComment));
    }
    let vsb, vcb, valueComment;
    if (isNode(value)) {
        vsb = !!value.spaceBefore;
        vcb = value.commentBefore;
        valueComment = value.comment;
    }
    else {
        vsb = false;
        vcb = null;
        valueComment = null;
        if (value && typeof value === 'object')
            value = doc.createNode(value);
    }
    ctx.implicitKey = false;
    if (!explicitKey && !keyComment && isScalar(value))
        ctx.indentAtStart = str.length + 1;
    chompKeep = false;
    if (!indentSeq &&
        indentStep.length >= 2 &&
        !ctx.inFlow &&
        !explicitKey &&
        isSeq(value) &&
        !value.flow &&
        !value.tag &&
        !value.anchor) {
        // If indentSeq === false, consider '- ' as part of indentation where possible
        ctx.indent = ctx.indent.substring(2);
    }
    let valueCommentDone = false;
    const valueStr = stringify(value, ctx, () => (valueCommentDone = true), () => (chompKeep = true));
    let ws = ' ';
    if (keyComment || vsb || vcb) {
        ws = vsb ? '\n' : '';
        if (vcb) {
            const cs = commentString(vcb);
            ws += `\n${indentComment(cs, ctx.indent)}`;
        }
        if (valueStr === '' && !ctx.inFlow) {
            if (ws === '\n' && valueComment)
                ws = '\n\n';
        }
        else {
            ws += `\n${ctx.indent}`;
        }
    }
    else if (!explicitKey && isCollection(value)) {
        const vs0 = valueStr[0];
        const nl0 = valueStr.indexOf('\n');
        const hasNewline = nl0 !== -1;
        const flow = ctx.inFlow ?? value.flow ?? value.items.length === 0;
        if (hasNewline || !flow) {
            let hasPropsLine = false;
            if (hasNewline && (vs0 === '&' || vs0 === '!')) {
                let sp0 = valueStr.indexOf(' ');
                if (vs0 === '&' &&
                    sp0 !== -1 &&
                    sp0 < nl0 &&
                    valueStr[sp0 + 1] === '!') {
                    sp0 = valueStr.indexOf(' ', sp0 + 1);
                }
                if (sp0 === -1 || nl0 < sp0)
                    hasPropsLine = true;
            }
            if (!hasPropsLine)
                ws = `\n${ctx.indent}`;
        }
    }
    else if (valueStr === '' || valueStr[0] === '\n') {
        ws = '';
    }
    str += ws + valueStr;
    if (ctx.inFlow) {
        if (valueCommentDone && onComment)
            onComment();
    }
    else if (valueComment && !valueCommentDone) {
        str += lineComment(str, ctx.indent, commentString(valueComment));
    }
    else if (chompKeep && onChompKeep) {
        onChompKeep();
    }
    return str;
}

export { stringifyPair };
336 node_modules/yaml/browser/dist/stringify/stringifyString.js generated vendored Normal file
@@ -0,0 +1,336 @@
import { Scalar } from '../nodes/Scalar.js';
import { foldFlowLines, FOLD_FLOW, FOLD_QUOTED, FOLD_BLOCK } from './foldFlowLines.js';

const getFoldOptions = (ctx, isBlock) => ({
    indentAtStart: isBlock ? ctx.indent.length : ctx.indentAtStart,
    lineWidth: ctx.options.lineWidth,
    minContentWidth: ctx.options.minContentWidth
});
// Also checks for lines starting with %, as parsing the output as YAML 1.1 will
// presume that's starting a new document.
const containsDocumentMarker = (str) => /^(%|---|\.\.\.)/m.test(str);
function lineLengthOverLimit(str, lineWidth, indentLength) {
    if (!lineWidth || lineWidth < 0)
        return false;
    const limit = lineWidth - indentLength;
    const strLen = str.length;
    if (strLen <= limit)
        return false;
    for (let i = 0, start = 0; i < strLen; ++i) {
        if (str[i] === '\n') {
            if (i - start > limit)
                return true;
            start = i + 1;
            if (strLen - start <= limit)
                return false;
        }
    }
    return true;
}
function doubleQuotedString(value, ctx) {
    const json = JSON.stringify(value);
    if (ctx.options.doubleQuotedAsJSON)
        return json;
    const { implicitKey } = ctx;
    const minMultiLineLength = ctx.options.doubleQuotedMinMultiLineLength;
    const indent = ctx.indent || (containsDocumentMarker(value) ? '  ' : '');
    let str = '';
    let start = 0;
    for (let i = 0, ch = json[i]; ch; ch = json[++i]) {
        if (ch === ' ' && json[i + 1] === '\\' && json[i + 2] === 'n') {
            // space before newline needs to be escaped to not be folded
            str += json.slice(start, i) + '\\ ';
            i += 1;
            start = i;
            ch = '\\';
        }
        if (ch === '\\')
            switch (json[i + 1]) {
                case 'u':
                    {
                        str += json.slice(start, i);
                        const code = json.substr(i + 2, 4);
                        switch (code) {
                            case '0000':
                                str += '\\0';
                                break;
                            case '0007':
                                str += '\\a';
                                break;
                            case '000b':
                                str += '\\v';
                                break;
                            case '001b':
                                str += '\\e';
                                break;
                            case '0085':
                                str += '\\N';
                                break;
                            case '00a0':
                                str += '\\_';
                                break;
                            case '2028':
                                str += '\\L';
                                break;
                            case '2029':
                                str += '\\P';
                                break;
                            default:
                                if (code.substr(0, 2) === '00')
                                    str += '\\x' + code.substr(2);
                                else
                                    str += json.substr(i, 6);
                        }
                        i += 5;
                        start = i + 1;
                    }
                    break;
                case 'n':
                    if (implicitKey ||
                        json[i + 2] === '"' ||
                        json.length < minMultiLineLength) {
                        i += 1;
                    }
                    else {
                        // folding will eat first newline
                        str += json.slice(start, i) + '\n\n';
                        while (json[i + 2] === '\\' &&
                            json[i + 3] === 'n' &&
                            json[i + 4] !== '"') {
                            str += '\n';
                            i += 2;
                        }
                        str += indent;
                        // space after newline needs to be escaped to not be folded
                        if (json[i + 2] === ' ')
                            str += '\\';
                        i += 1;
                        start = i + 1;
                    }
                    break;
                default:
                    i += 1;
            }
    }
    str = start ? str + json.slice(start) : json;
    return implicitKey
        ? str
        : foldFlowLines(str, indent, FOLD_QUOTED, getFoldOptions(ctx, false));
}
function singleQuotedString(value, ctx) {
    if (ctx.options.singleQuote === false ||
        (ctx.implicitKey && value.includes('\n')) ||
        /[ \t]\n|\n[ \t]/.test(value) // single quoted string can't have leading or trailing whitespace around newline
    )
        return doubleQuotedString(value, ctx);
    const indent = ctx.indent || (containsDocumentMarker(value) ? '  ' : '');
    const res = "'" + value.replace(/'/g, "''").replace(/\n+/g, `$&\n${indent}`) + "'";
    return ctx.implicitKey
        ? res
        : foldFlowLines(res, indent, FOLD_FLOW, getFoldOptions(ctx, false));
}
function quotedString(value, ctx) {
    const { singleQuote } = ctx.options;
    let qs;
    if (singleQuote === false)
        qs = doubleQuotedString;
    else {
        const hasDouble = value.includes('"');
        const hasSingle = value.includes("'");
        if (hasDouble && !hasSingle)
            qs = singleQuotedString;
        else if (hasSingle && !hasDouble)
            qs = doubleQuotedString;
        else
            qs = singleQuote ? singleQuotedString : doubleQuotedString;
    }
    return qs(value, ctx);
}
// The negative lookbehind avoids a polynomial search,
// but isn't supported yet on Safari: https://caniuse.com/js-regexp-lookbehind
let blockEndNewlines;
try {
    blockEndNewlines = new RegExp('(^|(?<!\n))\n+(?!\n|$)', 'g');
}
catch {
    blockEndNewlines = /\n+(?!\n|$)/g;
}
function blockString({ comment, type, value }, ctx, onComment, onChompKeep) {
    const { blockQuote, commentString, lineWidth } = ctx.options;
    // 1. Block can't end in whitespace unless the last line is non-empty.
    // 2. Strings consisting of only whitespace are best rendered explicitly.
    if (!blockQuote || /\n[\t ]+$/.test(value)) {
        return quotedString(value, ctx);
    }
    const indent = ctx.indent ||
        (ctx.forceBlockIndent || containsDocumentMarker(value) ? '  ' : '');
    const literal = blockQuote === 'literal'
        ? true
        : blockQuote === 'folded' || type === Scalar.BLOCK_FOLDED
            ? false
            : type === Scalar.BLOCK_LITERAL
                ? true
                : !lineLengthOverLimit(value, lineWidth, indent.length);
    if (!value)
        return literal ? '|\n' : '>\n';
    // determine chomping from whitespace at value end
    let chomp;
    let endStart;
    for (endStart = value.length; endStart > 0; --endStart) {
        const ch = value[endStart - 1];
        if (ch !== '\n' && ch !== '\t' && ch !== ' ')
            break;
    }
    let end = value.substring(endStart);
    const endNlPos = end.indexOf('\n');
    if (endNlPos === -1) {
        chomp = '-'; // strip
    }
    else if (value === end || endNlPos !== end.length - 1) {
        chomp = '+'; // keep
        if (onChompKeep)
            onChompKeep();
    }
    else {
        chomp = ''; // clip
    }
    if (end) {
        value = value.slice(0, -end.length);
        if (end[end.length - 1] === '\n')
            end = end.slice(0, -1);
        end = end.replace(blockEndNewlines, `$&${indent}`);
    }
    // determine indent indicator from whitespace at value start
    let startWithSpace = false;
    let startEnd;
    let startNlPos = -1;
    for (startEnd = 0; startEnd < value.length; ++startEnd) {
        const ch = value[startEnd];
        if (ch === ' ')
            startWithSpace = true;
        else if (ch === '\n')
            startNlPos = startEnd;
        else
            break;
    }
    let start = value.substring(0, startNlPos < startEnd ? startNlPos + 1 : startEnd);
    if (start) {
        value = value.substring(start.length);
        start = start.replace(/\n+/g, `$&${indent}`);
    }
    const indentSize = indent ? '2' : '1'; // root is at -1
    // Leading | or > is added later
    let header = (startWithSpace ? indentSize : '') + chomp;
    if (comment) {
        header += ' ' + commentString(comment.replace(/ ?[\r\n]+/g, ' '));
        if (onComment)
            onComment();
    }
    if (!literal) {
        const foldedValue = value
            .replace(/\n+/g, '\n$&')
            .replace(/(?:^|\n)([\t ].*)(?:([\n\t ]*)\n(?![\n\t ]))?/g, '$1$2') // more-indented lines aren't folded
            // ^ more-ind. ^ empty ^ capture next empty lines only at end of indent
            .replace(/\n+/g, `$&${indent}`);
        let literalFallback = false;
        const foldOptions = getFoldOptions(ctx, true);
        if (blockQuote !== 'folded' && type !== Scalar.BLOCK_FOLDED) {
            foldOptions.onOverflow = () => {
                literalFallback = true;
            };
        }
        const body = foldFlowLines(`${start}${foldedValue}${end}`, indent, FOLD_BLOCK, foldOptions);
        if (!literalFallback)
            return `>${header}\n${indent}${body}`;
    }
    value = value.replace(/\n+/g, `$&${indent}`);
    return `|${header}\n${indent}${start}${value}${end}`;
}
function plainString(item, ctx, onComment, onChompKeep) {
    const { type, value } = item;
    const { actualString, implicitKey, indent, indentStep, inFlow } = ctx;
    if ((implicitKey && value.includes('\n')) ||
        (inFlow && /[[\]{},]/.test(value))) {
        return quotedString(value, ctx);
    }
    if (/^[\n\t ,[\]{}#&*!|>'"%@`]|^[?-]$|^[?-][ \t]|[\n:][ \t]|[ \t]\n|[\n\t ]#|[\n\t :]$/.test(value)) {
        // not allowed:
        // - '-' or '?'
        // - start with an indicator character (except [?:-]) or /[?-] /
        // - '\n ', ': ' or ' \n' anywhere
        // - '#' not preceded by a non-space char
        // - end with ' ' or ':'
        return implicitKey || inFlow || !value.includes('\n')
            ? quotedString(value, ctx)
            : blockString(item, ctx, onComment, onChompKeep);
    }
    if (!implicitKey &&
        !inFlow &&
        type !== Scalar.PLAIN &&
        value.includes('\n')) {
        // Where allowed & type not set explicitly, prefer block style for multiline strings
        return blockString(item, ctx, onComment, onChompKeep);
    }
    if (containsDocumentMarker(value)) {
        if (indent === '') {
            ctx.forceBlockIndent = true;
            return blockString(item, ctx, onComment, onChompKeep);
        }
        else if (implicitKey && indent === indentStep) {
            return quotedString(value, ctx);
        }
    }
    const str = value.replace(/\n+/g, `$&\n${indent}`);
    // Verify that output will be parsed as a string, as e.g. plain numbers and
    // booleans get parsed with those types in v1.2 (e.g. '42', 'true' & '0.9e-3'),
    // and others in v1.1.
    if (actualString) {
        const test = (tag) => tag.default && tag.tag !== 'tag:yaml.org,2002:str' && tag.test?.test(str);
        const { compat, tags } = ctx.doc.schema;
        if (tags.some(test) || compat?.some(test))
            return quotedString(value, ctx);
    }
    return implicitKey
        ? str
        : foldFlowLines(str, indent, FOLD_FLOW, getFoldOptions(ctx, false));
}
function stringifyString(item, ctx, onComment, onChompKeep) {
    const { implicitKey, inFlow } = ctx;
    const ss = typeof item.value === 'string'
        ? item
        : Object.assign({}, item, { value: String(item.value) });
    let { type } = item;
|
||||
if (type !== Scalar.QUOTE_DOUBLE) {
|
||||
// force double quotes on control characters & unpaired surrogates
|
||||
if (/[\x00-\x08\x0b-\x1f\x7f-\x9f\u{D800}-\u{DFFF}]/u.test(ss.value))
|
||||
type = Scalar.QUOTE_DOUBLE;
|
||||
}
|
||||
const _stringify = (_type) => {
|
||||
switch (_type) {
|
||||
case Scalar.BLOCK_FOLDED:
|
||||
case Scalar.BLOCK_LITERAL:
|
||||
return implicitKey || inFlow
|
||||
? quotedString(ss.value, ctx) // blocks are not valid inside flow containers
|
||||
: blockString(ss, ctx, onComment, onChompKeep);
|
||||
case Scalar.QUOTE_DOUBLE:
|
||||
return doubleQuotedString(ss.value, ctx);
|
||||
case Scalar.QUOTE_SINGLE:
|
||||
return singleQuotedString(ss.value, ctx);
|
||||
case Scalar.PLAIN:
|
||||
return plainString(ss, ctx, onComment, onChompKeep);
|
||||
default:
|
||||
return null;
|
||||
}
|
||||
};
|
||||
let res = _stringify(type);
|
||||
if (res === null) {
|
||||
const { defaultKeyType, defaultStringType } = ctx.options;
|
||||
const t = (implicitKey && defaultKeyType) || defaultStringType;
|
||||
res = _stringify(t);
|
||||
if (res === null)
|
||||
throw new Error(`Unsupported default string type ${t}`);
|
||||
}
|
||||
return res;
|
||||
}
|
||||
|
||||
export { stringifyString };
|
||||
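The selection order in stringifyString above (explicit node type first, forced double quotes for control characters and unpaired surrogates, then the fallback to defaultKeyType/defaultStringType) can be sketched as a standalone helper. pickScalarStyle below is a hypothetical simplification for illustration, not part of the yaml package, and it collapses single- and double-quoted output into one "quoted" case:

```javascript
// Hypothetical sketch of the scalar-style decision in stringifyString/plainString.
function pickScalarStyle(value, { implicitKey = false, inFlow = false } = {}) {
  // Control characters (excluding \t and \n) can only be written with
  // double-quoted escapes, so they override any other choice.
  if (/[\x00-\x08\x0b-\x1f\x7f-\x9f]/.test(value)) return 'double-quoted';
  // Block scalars are not valid inside flow collections or implicit keys,
  // so multiline values in those positions must fall back to quoting.
  if (value.includes('\n')) return implicitKey || inFlow ? 'quoted' : 'block';
  return 'plain';
}

console.log(pickScalarStyle('hello'));                  // → plain
console.log(pickScalarStyle('line1\nline2'));           // → block
console.log(pickScalarStyle('a\nb', { inFlow: true })); // → quoted
```

The real implementation adds further checks that this sketch omits, such as rejecting plain style for values starting with indicator characters and re-quoting strings that would otherwise parse as numbers or booleans.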
11 node_modules/yaml/browser/dist/util.js generated vendored Normal file
@@ -0,0 +1,11 @@
export { createNode } from './doc/createNode.js';
export { debug, warn } from './log.js';
export { createPair } from './nodes/Pair.js';
export { toJS } from './nodes/toJS.js';
export { findPair } from './nodes/YAMLMap.js';
export { map as mapTag } from './schema/common/map.js';
export { seq as seqTag } from './schema/common/seq.js';
export { string as stringTag } from './schema/common/string.js';
export { foldFlowLines } from './stringify/foldFlowLines.js';
export { stringifyNumber } from './stringify/stringifyNumber.js';
export { stringifyString } from './stringify/stringifyString.js';
Some files were not shown because too many files have changed in this diff.