feat: add 4 universal skills from cli-ai-skills

- Add audio-transcriber skill (v1.2.0): Transform audio to Markdown with Whisper
- Add youtube-summarizer skill (v1.2.0): Generate summaries from YouTube videos
- Update prompt-engineer skill: Enhanced with 11 optimization frameworks
- Update skill-creator skill: Improved automation workflow

All skills are zero-config, cross-platform (Claude Code, Copilot CLI, Codex)
and follow Quality Bar V4 standards.

Source: https://github.com/ericgandrade/cli-ai-skills
Eric Andrade
2026-02-04 17:37:45 -03:00
parent 6070da6a63
commit 801c8fa475
21 changed files with 5012 additions and 579 deletions

View File

@@ -0,0 +1,659 @@
# 🎯 Prompt Engineer
**Version:** 1.0.1
**Status:** ✨ Zero-Config | 🌍 Universal
Transform raw prompts into optimized, production-ready prompts using 11 established prompting frameworks.
---
## 📋 Overview
**Prompt Engineer** is an intelligent AI skill that analyzes your intentions and automatically generates optimized prompts for Claude, ChatGPT, or any other AI model. Instead of struggling with how to phrase complex requests, simply describe what you want - the skill handles the rest.
This skill works in **"magic mode"** - it operates silently, only asking questions when absolutely necessary. You provide a rough idea, and it returns a polished, structured prompt ready to use.
### ✨ Key Features
- **🎯 Intent Analysis:** Understands what you're trying to accomplish
- **🧠 11 Frameworks:** RTF, Chain of Thought, RISEN, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW
- **🔄 Framework Blending:** Combines multiple frameworks when needed
- **❓ Smart Clarification:** Only asks questions when critically needed
- **📏 Adaptive Output:** Matches prompt length to task complexity
- **🌍 Universal:** Works in any project, any terminal context
- **🪄 Magic Mode:** No technical explanations - just results
---
## 🚀 Quick Start
### Invoke the Skill
Use any of these trigger phrases:
```bash
copilot> improve this prompt: [your rough prompt]
copilot> optimize prompt: [your idea]
copilot> refine prompt: [what you want]
copilot> prompt engineering for: [task description]
copilot> transform into prompt: [concept]
copilot> create prompt for: [goal]
```
### Example 1: Simple Task
**Input:**
```bash
copilot> improve this prompt: help me write Python code
```
**Skill asks:**
```
What type of Python code do you need?
- [ ] Data analysis
- [ ] Web scraping
- [ ] Task automation
- [ ] API/backend development
```
**You select:** "Web scraping"
**Output:**
````markdown
```
You are a Python developer expert in web scraping.
Task: Create a Python script to scrape data from [specify target website].
Requirements:
- Use BeautifulSoup or Scrapy
- Implement pagination and rate limiting
- Store data in structured format (CSV or JSON)
- Include error handling for network issues
Output format: Complete, commented Python code with usage examples.
```
````
---
### Example 2: Complex Task (No Clarification Needed)
**Input:**
```bash
copilot> create prompt for: product recommendation system with purchase history and market trends
```
**Output:**
````markdown
```
Role: You are a senior machine learning engineer specializing in recommendation systems.
Objective: Design a product recommendation system that combines user purchase history, preferences, and market trends.
Details:
- Input data sources: user transaction logs, preference surveys, market trend APIs
- Algorithm considerations: collaborative filtering, content-based filtering, hybrid approaches
- Performance metrics: precision, recall, diversity, novelty
- Scalability: system must handle 100k+ users
Approach step-by-step:
1. Analyze the three data sources and their relationships
2. Propose 2-3 algorithm approaches with trade-offs
3. Design data pipeline architecture
4. Define evaluation metrics
5. Outline implementation roadmap
Examples: Show pseudocode for core recommendation logic
Sense Check: Validate that the solution balances personalization with discovery and scales to production.
Output format: Technical design document with architecture diagrams and implementation roadmap.
```
````
---
### Example 3: Ambiguous Task (Skill Clarifies)
**Input:**
```bash
copilot> transform into prompt: I want to improve my process
```
**Skill asks:**
```
What process do you want to improve?
- [ ] Work/productivity process
- [ ] Learning process
- [ ] Technical process (development, deployment, etc.)
```
**You select:** "Work process"
**Skill asks:**
```
What specific aspect?
- [ ] Time management
- [ ] Team collaboration
- [ ] Decision making
```
**You select:** "Decision making"
**Output:**
````markdown
```
Context: I struggle with making decisions quickly and effectively at work.
Objective: Develop a structured framework to improve decision speed and quality.
Current Reality:
- Decisions delayed by excessive analysis
- Lack of clear criteria
- Uncertainty about when to decide vs. gather information
Desired Outcome:
- Consistent decision-making framework
- Clear criteria for different decision types
- Reduced decision fatigue
Your task:
1. Assess current patterns and identify bottlenecks
2. Propose framework (Eisenhower Matrix, RAPID, Decision Tree)
3. Create decision criteria templates
4. Design tracking and iteration process
Output format: Practical playbook with templates, examples, and metrics.
```
````
---
## 📚 Supported Frameworks
The **Prompt Engineer** skill uses **11 established frameworks** to optimize your prompts. The skill automatically selects and blends these frameworks based on your task - you never need to know or choose them manually.
---
### 1. **RTF (Role-Task-Format)**
**Structure:** Role → Task → Format
**Best for:** Tasks requiring specific expertise or perspective
**Components:**
- **Role:** "You are a [expert identity]"
- **Task:** "Your task is to [specific action]"
- **Format:** "Output format: [structure/style]"
**Example:**
```
You are a senior Python developer.
Task: Refactor this code for better performance.
Format: Provide refactored code with inline comments explaining changes.
```
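The RTF structure is mechanical enough to script. A minimal sketch of a template helper (the `rtf_prompt` function is hypothetical, not part of the skill itself):

```python
def rtf_prompt(role: str, task: str, output_format: str) -> str:
    """Assemble a Role-Task-Format prompt from its three components."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Format: {output_format}"
    )

print(rtf_prompt(
    "a senior Python developer",
    "Refactor this code for better performance.",
    "Provide refactored code with inline comments explaining changes.",
))
```

Each component stays a single sentence, which keeps the resulting prompt easy to copy and edit.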
---
### 2. **Chain of Thought**
**Structure:** Problem → Step 1 → Step 2 → ... → Solution
**Best for:** Complex reasoning, debugging, mathematical problems, logic puzzles
**Components:**
- Break problem into sequential steps
- Show reasoning at each stage
- Build toward final solution
**Example:**
```
Solve this problem step-by-step:
1. Identify the core issue
2. Analyze contributing factors
3. Propose solution approach
4. Validate solution against requirements
```
---
### 3. **RISEN**
**Structure:** Role, Instructions, Steps, End goal, Narrowing
**Best for:** Multi-phase projects with clear deliverables and constraints
**Components:**
- **Role:** Expert identity
- **Instructions:** What to do
- **Steps:** Sequential actions
- **End goal:** Desired outcome
- **Narrowing:** Constraints and focus areas
**Example:**
```
Role: You are a DevOps architect.
Instructions: Design a CI/CD pipeline for microservices.
Steps: 1) Analyze requirements 2) Select tools 3) Design workflow 4) Document
End goal: Automated deployment with zero-downtime releases.
Narrowing: Focus on AWS, limit to 3 environments (dev/staging/prod).
```
---
### 4. **RODES**
**Structure:** Role, Objective, Details, Examples, Sense check
**Best for:** Complex design, system architecture, research proposals
**Components:**
- **Role:** Expert perspective
- **Objective:** What to achieve
- **Details:** Context and requirements
- **Examples:** Concrete illustrations
- **Sense check:** Validation criteria
**Example:**
```
Role: You are a system architect.
Objective: Design a scalable e-commerce platform.
Details: Handle 100k concurrent users, sub-200ms response time, multi-region.
Examples: Show database schema, caching strategy, load balancing.
Sense check: Validate solution meets latency and scalability requirements.
```
---
### 5. **Chain of Density**
**Structure:** Iteration 1 (verbose) → Iteration 2 → ... → Iteration 5 (maximum density)
**Best for:** Summarization, compression, synthesis of long content
**Process:**
- Start with verbose explanation
- Iteratively compress while preserving key information
- End with maximally dense version (high information per word)
**Example:**
```
Compress this article into progressively denser summaries:
1. Initial summary (300 words)
2. Compressed (200 words)
3. Further compressed (100 words)
4. Dense (50 words)
5. Maximum density (25 words, all critical points)
```
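The word budgets above can be generated rather than hand-written. A sketch of a small renderer for the Chain of Density instruction (illustrative only; the skill does not ship this code, and the stage labels are assumptions):

```python
def chain_of_density_prompt(budgets: list[int]) -> str:
    """Render a Chain of Density instruction: one summary pass per word budget."""
    labels = ["Initial summary", "Compressed", "Further compressed",
              "Dense", "Maximum density"]
    lines = ["Compress this article into progressively denser summaries:"]
    for i, (label, words) in enumerate(zip(labels, budgets), start=1):
        lines.append(f"{i}. {label} ({words} words)")
    return "\n".join(lines)

print(chain_of_density_prompt([300, 200, 100, 50, 25]))
```

Passing a different budget list (e.g. `[500, 250, 100]`) adapts the same instruction to longer or shorter source material.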
---
### 6. **RACE**
**Structure:** Role, Audience, Context, Expectation
**Best for:** Communication, presentations, stakeholder updates, storytelling
**Components:**
- **Role:** Communicator identity
- **Audience:** Who you're addressing (expertise level, concerns)
- **Context:** Background/situation
- **Expectation:** What audience needs to know or do
**Example:**
```
Role: You are a product manager.
Audience: Non-technical executives.
Context: Quarterly business review, product performance down 5%.
Expectation: Explain root causes and recovery plan in non-technical terms.
```
---
### 7. **RISE**
**Structure:** Research, Investigate, Synthesize, Evaluate
**Best for:** Analysis, investigation, systematic exploration, diagnostic work
**Process:**
1. **Research:** Gather information
2. **Investigate:** Deep dive into findings
3. **Synthesize:** Combine insights
4. **Evaluate:** Assess and recommend
**Example:**
```
Analyze customer churn data using RISE:
Research: Collect churn metrics, exit surveys, support tickets.
Investigate: Identify patterns in churned users.
Synthesize: Combine findings into themes.
Evaluate: Recommend retention strategies based on evidence.
```
---
### 8. **STAR**
**Structure:** Situation, Task, Action, Result
**Best for:** Problem-solving with rich context, case studies, retrospectives
**Components:**
- **Situation:** Background context
- **Task:** Specific challenge
- **Action:** What needs doing
- **Result:** Expected outcome
**Example:**
```
Situation: Legacy monolith causing deployment delays (2 weeks per release).
Task: Modernize architecture to enable daily deployments.
Action: Migrate to microservices, implement CI/CD, containerize.
Result: Deploy 10+ times per day with <5% rollback rate.
```
---
### 9. **SOAP**
**Structure:** Subjective, Objective, Assessment, Plan
**Best for:** Structured documentation, medical records, technical logs, incident reports
**Components:**
- **Subjective:** Reported information (symptoms, complaints)
- **Objective:** Observable facts (metrics, data)
- **Assessment:** Analysis and diagnosis
- **Plan:** Recommended actions
**Example:**
```
Incident Report (SOAP):
Subjective: Users report slow page loads starting 10 AM.
Objective: Average response time increased from 200ms to 3s. CPU at 95%.
Assessment: Database connection pool exhausted due to traffic spike.
Plan: 1) Scale pool size 2) Add monitoring alerts 3) Review query performance.
```
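Because SOAP is a fixed four-section record, it maps naturally onto a data structure. A hedged sketch (the `SoapReport` class is hypothetical, introduced here only to show the shape):

```python
from dataclasses import dataclass

@dataclass
class SoapReport:
    """Structured incident record following the four SOAP sections."""
    subjective: str   # reported information
    objective: str    # observable facts and metrics
    assessment: str   # analysis and diagnosis
    plan: str         # recommended actions

    def render(self) -> str:
        return (
            "Incident Report (SOAP):\n"
            f"Subjective: {self.subjective}\n"
            f"Objective: {self.objective}\n"
            f"Assessment: {self.assessment}\n"
            f"Plan: {self.plan}"
        )
```

Keeping the sections as separate fields makes it easy to validate that none of the four parts was left empty before the report is filed.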
---
### 10. **CLEAR**
**Structure:** Collaborative, Limited, Emotional, Appreciable, Refinable
**Best for:** Goal-setting, OKRs, measurable objectives, team alignment
**Components:**
- **Collaborative:** Who's involved
- **Limited:** Scope boundaries (time, resources)
- **Emotional:** Why it matters (motivation)
- **Appreciable:** Measurable progress indicators
- **Refinable:** How to iterate and improve
**Example:**
```
Q1 Objective (CLEAR):
Collaborative: Engineering + Product teams.
Limited: Complete by March 31, budget $50k, 2 engineers allocated.
Emotional: Reduces customer support load by 30%, improves satisfaction.
Appreciable: Track weekly via tickets resolved, NPS score, deployment count.
Refinable: Bi-weekly retrospectives, adjust priorities based on feedback.
```
---
### 11. **GROW**
**Structure:** Goal, Reality, Options, Will
**Best for:** Coaching, personal development, growth planning, mentorship
**Components:**
- **Goal:** What to achieve
- **Reality:** Current situation (strengths, gaps)
- **Options:** Possible approaches
- **Will:** Commitment to action
**Example:**
```
Career Development (GROW):
Goal: Become senior engineer within 12 months.
Reality: Strong coding skills, weak in system design and leadership.
Options: 1) Take system design course 2) Lead a project 3) Find mentor.
Will: Commit to 5 hours/week study, lead Q2 project, find mentor by Feb.
```
---
### Framework Selection Logic
The skill analyzes your input and:
1. **Detects task type**
- Coding, writing, analysis, design, communication, etc.
2. **Identifies complexity**
- Simple (1-2 sentences) → Fast, minimal structure
- Moderate (paragraph) → Standard framework
- Complex (detailed requirements) → Advanced framework or blend
3. **Selects primary framework**
- RTF → Role-based tasks
- Chain of Thought → Step-by-step reasoning
- RISEN/RODES → Complex projects
- RACE → Communication
- STAR → Contextual problems
- And so on...
4. **Blends secondary frameworks when needed**
- RODES + Chain of Thought → Complex technical projects
- CLEAR + GROW → Leadership goals
- RACE + STAR → Strategic communication
**You never choose the framework manually** - the skill does it automatically in "magic mode."
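The selection logic above can be sketched in a few lines. This is an illustrative approximation only — the skill relies on the model's own judgment, not keyword tables, and the thresholds and mapping below are assumptions for demonstration:

```python
# Hypothetical task-type → framework mapping, mirroring the table above.
FRAMEWORK_BY_TASK = {
    "role": "RTF", "reasoning": "Chain of Thought", "project": "RISEN",
    "design": "RODES", "summarize": "Chain of Density", "communicate": "RACE",
    "investigate": "RISE", "context": "STAR", "document": "SOAP",
    "goal": "CLEAR", "coach": "GROW",
}

def classify_complexity(prompt: str) -> str:
    """Rough complexity bucket based on prompt length, per the detection patterns."""
    if len(prompt) < 50:
        return "simple"
    if len(prompt) > 200:
        return "complex"
    return "moderate"

def select_frameworks(task_type: str, complexity: str) -> list[str]:
    """Pick a primary framework; blend in Chain of Thought for complex tasks."""
    primary = FRAMEWORK_BY_TASK.get(task_type, "RTF")
    if complexity == "complex" and primary != "Chain of Thought":
        return [primary, "Chain of Thought"]
    return [primary]
```

For example, a long system-design request would classify as "complex" and come back as `["RODES", "Chain of Thought"]`, matching the blend in the table below.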
---
### Common Framework Blends
| Task Type | Primary Framework | Blended With | Result |
|-----------|------------------|--------------|--------|
| Complex technical design | RODES | Chain of Thought | Structured design with step-by-step reasoning |
| Leadership development | CLEAR | GROW | Measurable goals with action commitment |
| Strategic communication | RACE | STAR | Audience-aware storytelling with context |
| Incident investigation | RISE | SOAP | Systematic analysis with structured documentation |
| Project planning | RISEN | RTF | Multi-phase delivery with role clarity |
---
## 🎯 How It Works
```
User Input (rough prompt)
         │
         ▼
┌────────────────────────┐
│ 1. Analyze Intent      │  What is the user trying to do?
│    - Task type         │  Coding? Writing? Analysis? Design?
│    - Complexity        │  Simple, moderate, complex?
│    - Clarity           │  Clear or ambiguous?
└────────┬───────────────┘
         ▼
┌────────────────────────┐
│ 2. Clarify (Optional)  │  Only if critically needed
│    - Ask 2-3 questions │  Multiple choice when possible
│    - Fill missing gaps │
└────────┬───────────────┘
         ▼
┌────────────────────────┐
│ 3. Select Framework(s) │  Silent selection
│    - Map task → framework
│    - Blend if needed   │
└────────┬───────────────┘
         ▼
┌────────────────────────┐
│ 4. Generate Prompt     │  Apply framework rules
│    - Add role/context  │
│    - Structure task    │
│    - Define format     │
│    - Add examples      │
└────────┬───────────────┘
         ▼
┌────────────────────────┐
│ 5. Output              │  Clean, copy-ready
│    Markdown code block │  No explanations
└────────────────────────┘
```
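The five stages above can be sketched as one function. The heuristics here are simplified stand-ins (the real skill uses the model's judgment, and the clarifying questions below are placeholders):

```python
AMBIGUOUS_VERBS = ("help", "improve", "fix")  # generic verbs from the detection patterns

def pipeline(raw_prompt: str) -> dict:
    """Mirror the five stages: analyze, clarify (optional), select, generate, output."""
    words = raw_prompt.lower().split()
    # Stage 1: very short prompts built on a generic verb count as ambiguous.
    ambiguous = len(words) < 5 and any(v in words for v in AMBIGUOUS_VERBS)
    if ambiguous:
        # Stage 2: surface a few multiple-choice questions instead of a prompt.
        return {"questions": ["What task type is this?", "What output format do you need?"]}
    # Stage 3 (simplified): longer prompts get step-by-step structure.
    framework = "Chain of Thought" if len(raw_prompt) > 200 else "RTF"
    # Stages 4-5: assemble and return the copy-ready prompt.
    optimized = f"You are an expert assistant.\nTask: {raw_prompt}\nOutput format: Markdown."
    return {"framework": framework, "prompt": optimized}
```

Note that the only user-visible branch is stage 2: everything else happens silently, which is exactly the "magic mode" behavior described above.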
---
## 🎨 Use Cases
### Coding
```bash
copilot> optimize prompt: create REST API in Python
```
→ Generates structured prompt with role, requirements, output format, examples
---
### Writing
```bash
copilot> create prompt for: write technical article about microservices
```
→ Generates audience-aware prompt with structure, tone, and content guidelines
---
### Analysis
```bash
copilot> refine prompt: analyze sales data and identify trends
```
→ Generates step-by-step analytical framework with visualization requirements
---
### Decision Making
```bash
copilot> improve this prompt: I need to decide between technology A and B
```
→ Generates decision framework with criteria, trade-offs, and validation
---
### Learning
```bash
copilot> transform into prompt: learn machine learning from zero
```
→ Generates learning path prompt with phases, resources, and milestones
---
## ❓ FAQ
### Q: Does this skill work outside of Obsidian vaults?
**A:** Yes! It's a **universal skill** that works in any terminal context. It doesn't depend on vault structure, project configuration, or external files.
---
### Q: Do I need to know prompting frameworks?
**A:** No. The skill knows all 11 frameworks and selects the best one(s) automatically based on your task.
---
### Q: Will the skill explain which framework it used?
**A:** No. It operates in "magic mode" - you get the polished prompt without technical explanations. If you want to know, you can ask explicitly.
---
### Q: How many questions will the skill ask me?
**A:** Maximum 2-3 questions, and only when information is critically missing. Most of the time, it generates the prompt directly.
---
### Q: Can I customize the frameworks?
**A:** The skill uses standard framework definitions. You can't customize them, but you can provide additional constraints in your input (e.g., "create a short prompt for...").
---
### Q: Does it support languages other than English?
**A:** Yes. If you provide input in Portuguese, it generates the prompt in Portuguese. Same for English or mixed inputs.
---
### Q: What if I don't like the generated prompt?
**A:** You can ask the skill to refine it: "make it shorter", "add more examples", "focus on X aspect", etc.
---
### Q: Can I use this for any AI model (Claude, ChatGPT, Gemini)?
**A:** Yes. The prompts are model-agnostic and work with any conversational AI.
---
## 🔧 Installation (Global Setup)
This skill is designed to work **globally** across all your projects.
### Option 1: Use from Repository
1. Clone the repository:
```bash
git clone https://github.com/ericgandrade/cli-ai-skills.git
```
2. Configure Copilot to load skills globally:
```bash
# Add to ~/.copilot/config.json
{
  "skills": {
    "directories": [
      "/path/to/cli-ai-skills/.github/skills"
    ]
  }
}
```
### Option 2: Copy to Global Skills Directory
```bash
cp -r /path/to/cli-ai-skills/.github/skills/prompt-engineer ~/.copilot/global-skills/
```
Then configure:
```bash
# Add to ~/.copilot/config.json
{
  "skills": {
    "directories": [
      "~/.copilot/global-skills"
    ]
  }
}
```
---
## 📖 Learn More
- **[Skill Development Guide](../../resources/skills-development.md)** - Learn how to create your own skills
- **[SKILL.md](./SKILL.md)** - Full technical specification of this skill
- **[Repository README](../../README.md)** - Overview of all available skills
---
## 📄 Version
**v1.0.1** | Zero-Config | Universal
*Works in any project, any context, any terminal.*

View File

@@ -1,272 +1,252 @@
---
name: prompt-engineer
description: >-
  Expert prompt engineer specializing in advanced prompting techniques, LLM
  optimization, and AI system design. Masters chain-of-thought, constitutional
  AI, and production prompt strategies. Use when building AI features,
  improving agent performance, or crafting system prompts.
metadata:
  model: inherit
  description: "Transforms user prompts into optimized prompts using frameworks (RTF, RISEN, Chain of Thought, RODES, Chain of Density, RACE, RISE, STAR, SOAP, CLEAR, GROW)"
  version: 1.1.0
  author: Eric Andrade
  created: 2025-02-01
  updated: 2026-02-04
  platforms: [github-copilot-cli, claude-code, codex]
  category: automation
  tags: [prompt-engineering, optimization, frameworks, ai-enhancement]
  risk: safe
---
## Use this skill when
- Working on prompt engineer tasks or workflows
- Needing guidance, best practices, or checklists for prompt engineer
## Do not use this skill when
- The task is unrelated to prompt engineer
- You need a different domain or tool outside this scope
## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.
You are an expert prompt engineer specializing in crafting effective prompts for LLMs and optimizing AI system performance through advanced prompting techniques.
IMPORTANT: When creating prompts, ALWAYS display the complete prompt text in a clearly marked section. Never describe a prompt without showing it. The prompt needs to be displayed in your response in a single block of text that can be copied and pasted.
## Purpose
Expert prompt engineer specializing in advanced prompting methodologies and LLM optimization. Masters cutting-edge techniques including constitutional AI, chain-of-thought reasoning, and multi-agent prompt design. Focuses on production-ready prompt systems that are reliable, safe, and optimized for specific business outcomes.
## Capabilities
This skill transforms raw, unstructured user prompts into highly optimized prompts using established prompting frameworks. It analyzes user intent, identifies task complexity, and intelligently selects the most appropriate framework(s) to maximize Claude/ChatGPT output quality.
### Advanced Prompting Techniques
The skill operates in "magic mode" - it works silently behind the scenes, only interacting with users when clarification is critically needed. Users receive polished, ready-to-use prompts without technical explanations or framework jargon.
#### Chain-of-Thought & Reasoning
- Chain-of-thought (CoT) prompting for complex reasoning tasks
- Few-shot chain-of-thought with carefully crafted examples
- Zero-shot chain-of-thought with "Let's think step by step"
- Tree-of-thoughts for exploring multiple reasoning paths
- Self-consistency decoding with multiple reasoning chains
- Least-to-most prompting for complex problem decomposition
- Program-aided language models (PAL) for computational tasks
This is a **universal skill** that works in any terminal context, not limited to Obsidian vaults or specific project structures.
#### Constitutional AI & Safety
- Constitutional AI principles for self-correction and alignment
- Critique and revise patterns for output improvement
- Safety prompting techniques to prevent harmful outputs
- Jailbreak detection and prevention strategies
- Content filtering and moderation prompt patterns
- Ethical reasoning and bias mitigation in prompts
- Red teaming prompts for adversarial testing
## When to Use
#### Meta-Prompting & Self-Improvement
- Meta-prompting for prompt optimization and generation
- Self-reflection and self-evaluation prompt patterns
- Auto-prompting for dynamic prompt generation
- Prompt compression and efficiency optimization
- A/B testing frameworks for prompt performance
- Iterative prompt refinement methodologies
- Performance benchmarking and evaluation metrics
Invoke this skill when:
### Model-Specific Optimization
- User provides a vague or generic prompt (e.g., "help me code Python")
- User has a complex idea but struggles to articulate it clearly
- User's prompt lacks structure, context, or specific requirements
- Task requires step-by-step reasoning (debugging, analysis, design)
- User needs a prompt for a specific AI task but doesn't know prompting frameworks
- User wants to improve an existing prompt's effectiveness
- User asks variations of "how do I ask AI to..." or "create a prompt for..."
#### OpenAI Models (GPT-4o, o1-preview, o1-mini)
- Function calling optimization and structured outputs
- JSON mode utilization for reliable data extraction
- System message design for consistent behavior
- Temperature and parameter tuning for different use cases
- Token optimization strategies for cost efficiency
- Multi-turn conversation management
- Image and multimodal prompt engineering
## Workflow
#### Anthropic Claude (4.5 Sonnet, Haiku, Opus)
- Constitutional AI alignment with Claude's training
- Tool use optimization for complex workflows
- Computer use prompting for automation tasks
- XML tag structuring for clear prompt organization
- Context window optimization for long documents
- Safety considerations specific to Claude's capabilities
- Harmlessness and helpfulness balancing
### Step 1: Analyze Intent
#### Open Source Models (Llama, Mixtral, Qwen)
- Model-specific prompt formatting and special tokens
- Fine-tuning prompt strategies for domain adaptation
- Instruction-following optimization for different architectures
- Memory and context management for smaller models
- Quantization considerations for prompt effectiveness
- Local deployment optimization strategies
- Custom system prompt design for specialized models
**Objective:** Understand what the user truly wants to accomplish.
### Production Prompt Systems
**Actions:**
1. Read the raw prompt provided by the user
2. Detect task characteristics:
- **Type:** coding, writing, analysis, design, learning, planning, decision-making, creative, etc.
- **Complexity:** simple (one-step), moderate (multi-step), complex (requires reasoning/design)
- **Clarity:** clear intention vs. ambiguous/vague
- **Domain:** technical, business, creative, academic, personal, etc.
3. Identify implicit requirements:
- Does user need examples?
- Is output format specified?
- Are there constraints (time, resources, scope)?
- Is this exploratory or execution-focused?
#### Prompt Templates & Management
- Dynamic prompt templating with variable injection
- Conditional prompt logic based on context
- Multi-language prompt adaptation and localization
- Version control and A/B testing for prompts
- Prompt libraries and reusable component systems
- Environment-specific prompt configurations
- Rollback strategies for prompt deployments
**Detection Patterns:**
- **Simple tasks:** Short prompts (<50 chars), single verb, no context
- **Complex tasks:** Long prompts (>200 chars), multiple requirements, conditional logic
- **Ambiguous tasks:** Generic verbs ("help", "improve"), missing object/context
- **Structured tasks:** Mentions steps, phases, deliverables, stakeholders
#### RAG & Knowledge Integration
- Retrieval-augmented generation prompt optimization
- Context compression and relevance filtering
- Query understanding and expansion prompts
- Multi-document reasoning and synthesis
- Citation and source attribution prompting
- Hallucination reduction techniques
- Knowledge graph integration prompts
#### Agent & Multi-Agent Prompting
- Agent role definition and persona creation
- Multi-agent collaboration and communication protocols
- Task decomposition and workflow orchestration
- Inter-agent knowledge sharing and memory management
- Conflict resolution and consensus building prompts
- Tool selection and usage optimization
- Agent evaluation and performance monitoring
### Step 3: Select Framework(s)
### Specialized Applications
**Objective:** Map task characteristics to optimal prompting framework(s).
#### Business & Enterprise
- Customer service chatbot optimization
- Sales and marketing copy generation
- Legal document analysis and generation
- Financial analysis and reporting prompts
- HR and recruitment screening assistance
- Executive summary and reporting automation
- Compliance and regulatory content generation
**Framework Mapping Logic:**
#### Creative & Content
- Creative writing and storytelling prompts
- Content marketing and SEO optimization
- Brand voice and tone consistency
- Social media content generation
- Video script and podcast outline creation
- Educational content and curriculum development
- Translation and localization prompts
| Task Type | Recommended Framework(s) | Rationale |
|-----------|-------------------------|-----------|
| **Role-based tasks** (act as expert, consultant) | **RTF** (Role-Task-Format) | Clear role definition + task + output format |
| **Step-by-step reasoning** (debugging, proof, logic) | **Chain of Thought** | Encourages explicit reasoning steps |
| **Structured projects** (multi-phase, deliverables) | **RISEN** (Role, Instructions, Steps, End goal, Narrowing) | Comprehensive structure for complex work |
| **Complex design/analysis** (systems, architecture) | **RODES** (Role, Objective, Details, Examples, Sense check) | Balances detail with validation |
| **Summarization** (compress, synthesize) | **Chain of Density** | Iterative refinement to essential info |
| **Communication** (reports, presentations, storytelling) | **RACE** (Role, Audience, Context, Expectation) | Audience-aware messaging |
| **Investigation/analysis** (research, diagnosis) | **RISE** (Research, Investigate, Synthesize, Evaluate) | Systematic analytical approach |
| **Contextual situations** (problem-solving with background) | **STAR** (Situation, Task, Action, Result) | Context-rich problem framing |
| **Documentation** (medical, technical, records) | **SOAP** (Subjective, Objective, Assessment, Plan) | Structured information capture |
| **Goal-setting** (OKRs, objectives, targets) | **CLEAR** (Collaborative, Limited, Emotional, Appreciable, Refinable) | Goal clarity and actionability |
| **Coaching/development** (mentoring, growth) | **GROW** (Goal, Reality, Options, Will) | Developmental conversation structure |
#### Technical & Code
- Code generation and optimization prompts
- Technical documentation and API documentation
- Debugging and error analysis assistance
- Architecture design and system analysis
- Test case generation and quality assurance
- DevOps and infrastructure as code prompts
- Security analysis and vulnerability assessment
**Blending Strategy:**
- **Combine 2-3 frameworks** when task spans multiple types
- Example: Complex technical project → **RODES + Chain of Thought** (structure + reasoning)
- Example: Leadership decision → **CLEAR + GROW** (goal clarity + development)
### Evaluation & Testing
**Selection Criteria:**
- Primary framework = best match to core task type
- Secondary framework(s) = address additional complexity dimensions
- Avoid over-engineering: simple tasks get simple frameworks
#### Performance Metrics
- Task-specific accuracy and quality metrics
- Response time and efficiency measurements
- Cost optimization and token usage analysis
- User satisfaction and engagement metrics
- Safety and alignment evaluation
- Consistency and reliability testing
- Edge case and robustness assessment
**Critical Rule:** This selection happens **silently** - do not explain framework choice to user.
#### Testing Methodologies
- Red team testing for prompt vulnerabilities
- Adversarial prompt testing and jailbreak attempts
- Cross-model performance comparison
- A/B testing frameworks for prompt optimization
- Statistical significance testing for improvements
- Bias and fairness evaluation across demographics
- Scalability testing for production workloads
### Advanced Patterns & Architectures

When one framework is not enough, the skill blends several. A system-design request, for example, can combine RTF, RODES, and Chain of Thought:

```
Role: You are a senior software architect. [RTF - Role]
Objective: Design a microservices architecture for [system]. [RODES - Objective]
Details: [RODES - Details]
- Expected traffic: [X]
- Data volume: [Y]
- Team size: [Z]
Approach this step-by-step: [Chain of Thought]
1. Analyze current monolithic constraints
2. Identify service boundaries
3. Design inter-service communication
4. Plan data consistency strategy
Output Format: [RTF - Format]
Provide architecture diagram description, service definitions, and migration roadmap.
Sense Check: [RODES - Sense check]
Validate that services are loosely coupled, independently deployable, and aligned with business domains.
```

#### Prompt Chaining & Workflows

- Sequential prompt chaining for complex tasks
- Parallel prompt execution and result aggregation
- Conditional branching based on intermediate outputs
- Loop and iteration patterns for refinement
- Error handling and recovery mechanisms
- State management across prompt sequences
- Workflow optimization and performance tuning

#### Multimodal & Cross-Modal

- Vision-language model prompt optimization
- Image understanding and analysis prompts
- Document AI and OCR integration prompts
- Audio and speech processing integration
- Video analysis and content extraction
- Cross-modal reasoning and synthesis
- Multimodal creative and generative prompts

## Behavioral Traits

- Always displays complete prompt text, never just descriptions
- Focuses on production reliability and safety over experimental techniques
- Considers token efficiency and cost optimization in all prompt designs
- Implements comprehensive testing and evaluation methodologies
- Stays current with latest prompting research and techniques
- Balances performance optimization with ethical considerations
- Documents prompt behavior and provides clear usage guidelines
- Iterates systematically based on empirical performance data
- Considers model limitations and failure modes in prompt design
- Emphasizes reproducibility and version control for prompt systems

## Knowledge Base

- Latest research in prompt engineering and LLM optimization
- Model-specific capabilities and limitations across providers
- Production deployment patterns and best practices
- Safety and alignment considerations for AI systems
- Evaluation methodologies and performance benchmarking
- Cost optimization strategies for LLM applications
- Multi-agent and workflow orchestration patterns
- Multimodal AI and cross-modal reasoning techniques
- Industry-specific use cases and requirements
- Emerging trends in AI and prompt engineering

## Response Approach

1. **Understand the specific use case** and requirements for the prompt
2. **Analyze target model capabilities** and optimization opportunities
3. **Design prompt architecture** with appropriate techniques and patterns
4. **Display the complete prompt text** in a clearly marked section
5. **Provide usage guidelines** and parameter recommendations
6. **Include evaluation criteria** and testing approaches
7. **Document safety considerations** and potential failure modes
8. **Suggest optimization strategies** for performance and cost

## Required Output Format

When creating any prompt, you MUST include:

### The Prompt

```
[Display the complete prompt text here - this is the most important part]
```
### Implementation Notes

- Key techniques used and why they were chosen
- Model-specific optimizations and considerations
- Expected behavior and output format
- Parameter recommendations (temperature, max tokens, etc.)

### Testing & Evaluation

- Suggested test cases and evaluation metrics
- Edge cases and potential failure modes
- A/B testing recommendations for optimization

### Usage Guidelines

- When and how to use this prompt effectively
- Customization options and variable parameters
- Integration considerations for production systems

**4.5. Language Adaptation**

- If original prompt is in Portuguese, generate prompt in Portuguese
- If original prompt is in English, generate prompt in English
- If mixed, default to English (more universal for AI models)

**4.6. Quality Checks**

Before finalizing, verify:

- [ ] Prompt is self-contained (no external context needed)
- [ ] Task is specific and measurable
- [ ] Output format is clear
- [ ] No ambiguous language
- [ ] Appropriate level of detail for task complexity
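The language-adaptation rule (Step 4.5) can be pictured as a tiny heuristic. The function name and stopword sets below are illustrative, not part of the skill; a real implementation would use a proper language-detection library:

```python
def choose_output_language(text: str) -> str:
    """Pick the output language per the 4.5 rule:
    Portuguese in -> Portuguese out, English in -> English out,
    mixed input -> default to English."""
    # Toy marker sets; a real detector would be far more robust.
    pt_markers = {"de", "para", "que", "uma", "não", "com", "preciso", "criar"}
    en_markers = {"the", "for", "that", "with", "need", "create", "a", "an"}
    words = set(text.lower().split())
    pt_hits = len(words & pt_markers)
    en_hits = len(words & en_markers)
    if pt_hits > 0 and en_hits > 0:
        return "en"  # mixed input defaults to English
    return "pt" if pt_hits > en_hits else "en"
```

The default branch returns English, matching the "more universal for AI models" rationale above.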
## Example Interactions
- "Create a constitutional AI prompt for content moderation that self-corrects problematic outputs"
- "Design a chain-of-thought prompt for financial analysis that shows clear reasoning steps"
- "Build a multi-agent prompt system for customer service with escalation workflows"
- "Optimize a RAG prompt for technical documentation that reduces hallucinations"
- "Create a meta-prompt that generates optimized prompts for specific business use cases"
- "Design a safety-focused prompt for creative writing that maintains engagement while avoiding harm"
- "Build a structured prompt for code review that provides actionable feedback"
- "Create an evaluation framework for comparing prompt performance across different models"
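The last request above, an evaluation framework, can be sketched in a few lines. Everything here (names, the toy judge in the usage note) is illustrative; in practice `judge` would call a model or apply an LLM-as-judge rubric:

```python
from statistics import mean

def score_variants(variants, test_cases, judge):
    # variants: {name: prompt template with a {case} placeholder}
    # judge: callable returning a 0-1 quality score for one filled prompt
    results = {}
    for name, template in variants.items():
        scores = [judge(template.format(case=case)) for case in test_cases]
        results[name] = mean(scores)
    # Highest average score first
    return sorted(results.items(), key=lambda kv: kv[1], reverse=True)
```

Swapping in different `judge` callables is what makes the same harness usable for comparing prompts across different models.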
## Critical Rules

### **NEVER:**

- ❌ Assume information that wasn't provided - ALWAYS ask if critical details are missing
- ❌ Explain which framework was selected or why (magic mode - keep it invisible)
- ❌ Generate generic, one-size-fits-all prompts - always customize to context
- ❌ Use technical jargon in the final prompt (unless user's domain is technical)
- ❌ Ask more than 3 clarifying questions (avoid user fatigue)
- ❌ Include meta-commentary in the output ("This prompt uses...", "Note that...")
- ❌ Present output without code block formatting
- ❌ Mix languages inconsistently (if user writes in PT, respond in PT)

### **ALWAYS:**

- ✅ Analyze intent before generating (Step 1 is mandatory)
- ✅ Ask clarifying questions if critical information is ambiguous (Step 2 conditional)
- ✅ Select framework(s) based on task type and complexity (Step 3 mapping)
- ✅ Blend multiple frameworks when it improves prompt quality
- ✅ Adapt prompt length to original input complexity (simple → short, complex → detailed)
- ✅ Include output format specification in generated prompts
- ✅ Present final prompt in clean Markdown code block
- ✅ Make prompts self-contained (no dependency on external context)
- ✅ Use examples in complex prompts to illustrate expected output
- ✅ Validate prompt completeness before presenting (quality checks in Step 4.6)

## Before Completing Any Task

Verify you have:

☐ Displayed the full prompt text (not just described it)
☐ Marked it clearly with headers or code blocks
☐ Provided usage instructions and implementation notes
☐ Explained your design choices and techniques used
☐ Included testing and evaluation recommendations
☐ Considered safety and ethical implications

Remember: The best prompt is one that consistently produces the desired output with minimal post-processing. ALWAYS show the prompt, never just describe it.
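The completion checklist can also be enforced mechanically. The checks below are deliberately toy stand-ins, not part of the skill; real validation of "full prompt displayed" would need more than substring matching:

```python
COMPLETION_CHECKS = {
    "prompt text shown": lambda out: "Prompt:" in out,       # toy proxy check
    "usage notes present": lambda out: "usage" in out.lower(),
}

def verify_output(out: str) -> list:
    """Return the names of the completion checks that fail for a skill output."""
    return [name for name, check in COMPLETION_CHECKS.items() if not check(out)]
```

An empty return value means the output passed every check and is safe to present.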
### **Example 2: Complex Clear Prompt → RODES + Chain of Thought (No Clarification)**
**User Input:**
```
copilot> create prompt for: I need to build a product recommendation system considering purchase history, user preferences, and market trends
```
**Skill Internal Process:**
1. **Analyze:** Clear intent, complex task (ML system design), technical domain
2. **Clarify:** Skip (sufficient information)
3. **Select:** RODES (complex design) + Chain of Thought (reasoning needed)
**Skill Output:**
````markdown
```
Role: You are a senior machine learning engineer specializing in recommendation systems.
Objective: Design a product recommendation system that combines user purchase history, preferences, and market trends.
Details:
- Input data sources: user transaction logs, preference surveys, market trend APIs
- Algorithm considerations: collaborative filtering, content-based filtering, hybrid approaches
- Performance metrics: precision, recall, diversity, novelty
- Scalability: system must handle 100k+ users
- Real-time constraints: recommendations updated within 500ms
Approach step-by-step:
1. Analyze the three data sources and their relationships
2. Propose 2-3 algorithm approaches with trade-off analysis
3. Design data pipeline architecture (ingestion, processing, serving)
4. Define evaluation metrics and A/B testing strategy
5. Outline implementation roadmap with phases
Examples:
- Show pseudocode for core recommendation logic
- Illustrate data flow diagram
- Provide sample API response structure
Sense Check: Validate that the proposed solution balances personalization with discovery, scales to production load, and incorporates real-time market trends effectively.
Output format: Technical design document with architecture diagrams, algorithm justification, and implementation roadmap.
```
````
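The internal "Select" step in these examples follows a task-type mapping. A minimal sketch, where the table entries and function are hypothetical (the real skill reasons in natural language over all 11 frameworks):

```python
def select_frameworks(task_type: str, complexity: str) -> list:
    """Illustrative mapping from task type to framework(s), per Step 3."""
    base = {
        "system-design": ["RODES"],
        "debugging": ["Chain of Thought"],
        "writing": ["RTF"],
        "summarization": ["Chain of Density"],
    }
    chosen = list(base.get(task_type, ["RTF"]))
    # Complex tasks that need visible reasoning get Chain of Thought
    # blended in, as in the recommendation-system example above.
    if complexity == "complex" and "Chain of Thought" not in chosen:
        chosen.append("Chain of Thought")
    return chosen
```

For the recommendation-system request this yields RODES plus Chain of Thought, matching the example's internal process.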
### **Example 4: Code Debugging → Chain of Thought**
**User Input:**
```
copilot> optimize prompt: my Python code isn't working, needs debugging
```
**Skill Internal Process:**
1. **Analyze:** Debugging task, needs step-by-step reasoning
2. **Clarify:** Skip (user will provide code context later)
3. **Select:** Chain of Thought
**Skill Output:**
````markdown
```
You are an expert Python debugger.
Task: Debug the Python code provided by the user by analyzing it step-by-step.
Debugging approach:
1. Read and understand the code's intended purpose
2. Identify the error or unexpected behavior
3. Trace variable states at each execution step
4. Locate the root cause of the issue
5. Propose fix with explanation
6. Suggest preventive measures to avoid similar bugs
For each step, show your reasoning:
- What you're checking
- What you found
- Why it matters
Output format:
- **Issue identified:** [Describe the bug]
- **Root cause:** [Why it's happening]
- **Fix:** [Corrected code with comments]
- **Prevention:** [Best practices to avoid this in the future]
Include a working example to verify the fix.
```
````
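The generated debugging prompt is self-contained, but in practice it still has to be filled with the user's code before it reaches a model. A minimal sketch of that assembly (template abridged; the model call itself is deliberately left out):

```python
DEBUG_PROMPT_TEMPLATE = """You are an expert Python debugger.
Task: Debug the Python code provided below by analyzing it step-by-step.

--- CODE ---
{code}

--- OBSERVED ERROR ---
{error}
"""

def build_debug_request(code: str, error: str) -> str:
    # Fill the generated prompt with the user's code and error message.
    # Any client (Claude Code, Copilot CLI, Codex) can consume the string.
    return DEBUG_PROMPT_TEMPLATE.format(code=code.strip(), error=error.strip())
```

Because the prompt is a plain string, it stays portable across whichever AI CLI the user runs it in.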
## Notes
This skill is **platform-agnostic** and works in any terminal AI context (Claude Code, GitHub Copilot CLI, Codex). It does not depend on:
- Obsidian vault structure
- Specific project configurations
- External files or templates
The skill is entirely self-contained, operating purely on user input and framework knowledge.