feat: Add Official Microsoft & Gemini Skills (845+ Total)
## 🚀 Impact

Significantly expands the capabilities of **Antigravity Awesome Skills** by integrating official skill collections from **Microsoft** and **Google Gemini**. This update increases the total skill count to **845+**, making the library even more comprehensive for AI coding assistants.

## ✨ Key Changes

### 1. New Official Skills

- **Microsoft Skills**: Added a massive collection of official skills from [microsoft/skills](https://github.com/microsoft/skills).
  - Includes Azure, .NET, Python, TypeScript, and Semantic Kernel skills.
  - Preserves the original directory structure under `skills/official/microsoft/`.
  - Includes plugin skills from the `.github/plugins` directory.
- **Gemini Skills**: Added official Gemini API development skills under `skills/gemini-api-dev/`.

### 2. New Scripts & Tooling

- **`scripts/sync_microsoft_skills.py`**: A robust synchronization script that:
  - Clones the official Microsoft repository.
  - Preserves the original directory hierarchy.
  - Handles symlinks and plugin locations.
  - Generates attribution metadata.
- **`scripts/tests/inspect_microsoft_repo.py`**: Debug tool to inspect the remote repository structure.
- **`scripts/tests/test_comprehensive_coverage.py`**: Verification script to ensure 100% of skills are captured during sync.

### 3. Core Improvements

- **`scripts/generate_index.py`**: Enhanced frontmatter parsing to safely handle unquoted values containing `@` symbols and commas, fixing issues with some Microsoft skill descriptions; an illustrative sketch follows below.
- **`package.json`**: Added `sync:microsoft` and `sync:all-official` scripts for easy maintenance.

### 4. Documentation

- Updated `README.md` to reflect the new skill count (845+) and added Microsoft/Gemini to the provider list.
- Updated `CATALOG.md` and `skills_index.json` with the new skills.

## 🧪 Verification

- Ran `scripts/tests/test_comprehensive_coverage.py` to verify all Microsoft skills are detected.
- Validated the `generate_index.py` fixes by successfully indexing the new skills.
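
For context on the `generate_index.py` change: frontmatter whose unquoted scalars contain `@` or commas (common in Microsoft skill descriptions) can be misparsed by naive YAML handling. Below is a minimal sketch of the kind of pre-quoting pass involved, assuming a regex-based approach; the function name and regex are illustrative, not the actual implementation.

```python
import re

def quote_risky_scalars(frontmatter: str) -> str:
    """Quote unquoted `key: value` scalars containing '@' or ',' so they
    parse as plain strings. Illustrative only; the real logic in
    scripts/generate_index.py may differ."""
    def fix(match):
        key, value = match.group(1), match.group(2).strip()
        # Skip values that are already quoted or use block/flow syntax.
        if value and value[0] not in "\"'|>[{" and ("@" in value or "," in value):
            return f'{key}: "{value}"'
        return match.group(0)
    return re.sub(r"^([\w-]+):[ \t](.+)$", fix, frontmatter, flags=re.M)

# An unquoted description with '@' and commas now survives YAML parsing:
print(quote_risky_scalars("description: Uses @azure/identity, msal, etc."))
# -> description: "Uses @azure/identity, msal, etc."
```
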
**New file:** `skills/official/microsoft/python/foundry/contentsafety/SKILL.md` (214 lines)
---
name: azure-ai-contentsafety-py
description: |
  Azure AI Content Safety SDK for Python. Use for detecting harmful content in text and images with multi-severity classification.
  Triggers: "azure-ai-contentsafety", "ContentSafetyClient", "content moderation", "harmful content", "text analysis", "image analysis".
package: azure-ai-contentsafety
---

# Azure AI Content Safety SDK for Python

Detect harmful user-generated and AI-generated content in applications.

## Installation

```bash
pip install azure-ai-contentsafety
```

## Environment Variables

```bash
CONTENT_SAFETY_ENDPOINT=https://<resource>.cognitiveservices.azure.com
CONTENT_SAFETY_KEY=<your-api-key>
```

## Authentication

### API Key

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)
```

### Entra ID

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.identity import DefaultAzureCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=DefaultAzureCredential()
)
```

## Analyze Text

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

request = AnalyzeTextOptions(text="Your text content to analyze")
response = client.analyze_text(request)

# Check the severity reported for each harm category
for category in [TextCategory.HATE, TextCategory.SELF_HARM,
                 TextCategory.SEXUAL, TextCategory.VIOLENCE]:
    result = next((r for r in response.categories_analysis
                   if r.category == category), None)
    if result:
        print(f"{category}: severity {result.severity}")
```

## Analyze Image

```python
import base64
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# From file
with open("image.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

request = AnalyzeImageOptions(
    image=ImageData(content=image_data)
)

response = client.analyze_image(request)

for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```

### Image from URL

```python
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Reuses the `client` created in the Analyze Image example above
request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)

response = client.analyze_image(request)
```

## Text Blocklist Management

### Create Blocklist

```python
import os

from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import TextBlocklist
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
key = os.environ["CONTENT_SAFETY_KEY"]
blocklist_client = BlocklistClient(endpoint, AzureKeyCredential(key))

blocklist = TextBlocklist(
    blocklist_name="my-blocklist",
    description="Custom terms to block"
)

result = blocklist_client.create_or_update_text_blocklist(
    blocklist_name="my-blocklist",
    options=blocklist
)
```

### Add Block Items

```python
from azure.ai.contentsafety.models import AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem

items = AddOrUpdateTextBlocklistItemsOptions(
    blocklist_items=[
        TextBlocklistItem(text="blocked-term-1"),
        TextBlocklistItem(text="blocked-term-2")
    ]
)

result = blocklist_client.add_or_update_blocklist_items(
    blocklist_name="my-blocklist",
    options=items
)
```

### Analyze with Blocklist

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# `client` is a ContentSafetyClient (see Analyze Text above)
request = AnalyzeTextOptions(
    text="Text containing blocked-term-1",
    blocklist_names=["my-blocklist"],
    halt_on_blocklist_hit=True
)

response = client.analyze_text(request)

if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocked: {match.blocklist_item_text}")
```

## Severity Levels

Text analysis returns 4 severity levels (0, 2, 4, 6) by default. For 8 levels (0-7):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

request = AnalyzeTextOptions(
    text="Your text",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
```

## Harm Categories

| Category | Description |
|----------|-------------|
| `Hate` | Attacks based on identity (race, religion, gender, etc.) |
| `Sexual` | Sexual content, relationships, anatomy |
| `Violence` | Physical harm, weapons, injury |
| `SelfHarm` | Self-injury, suicide, eating disorders |
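
A convenient way to consume the per-category results above is to pivot `categories_analysis` into a plain dict; a minimal sketch (the helper name is illustrative, not part of the SDK):

```python
def severities_by_category(response) -> dict:
    """Map each analyzed category name to its reported severity (0 if unset)."""
    return {str(r.category): (r.severity or 0)
            for r in response.categories_analysis}

# e.g. {'Hate': 0, 'SelfHarm': 0, 'Sexual': 2, 'Violence': 0}
```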

## Severity Scale

| Level | Text Range | Image Range | Meaning |
|-------|------------|-------------|---------|
| 0 | Safe | Safe | No harmful content |
| 2 | Low | Low | Mild references |
| 4 | Medium | Medium | Moderate content |
| 6 | High | High | Severe content |
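
In practice the scale above is consumed as a threshold check. A minimal sketch of such a gate (the threshold value and function name are illustrative assumptions, not SDK API):

```python
BLOCK_THRESHOLD = 4  # assumption: block at Medium or above; tune per use case

def is_allowed(response) -> bool:
    """Return False if any harm category reaches the blocking threshold."""
    return all((r.severity or 0) < BLOCK_THRESHOLD
               for r in response.categories_analysis)
```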

## Client Types

| Client | Purpose |
|--------|---------|
| `ContentSafetyClient` | Analyze text and images |
| `BlocklistClient` | Manage custom blocklists |
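
Both clients take the same endpoint/credential pair, so a single credential can back both; a minimal sketch:

```python
import os

from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
credential = AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])

content_client = ContentSafetyClient(endpoint, credential)   # text/image analysis
blocklist_client = BlocklistClient(endpoint, credential)     # blocklist management
```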

## Best Practices

1. **Use blocklists** for domain-specific terms
2. **Set severity thresholds** appropriate for your use case
3. **Handle multiple categories**: content can be harmful in multiple ways
4. **Use `halt_on_blocklist_hit`** for immediate rejection
5. **Log analysis results** for audit and improvement
6. **Consider 8-severity mode** for finer-grained control
7. **Pre-moderate AI outputs** before showing to users
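
Items 1, 2, 4, and 5 combine naturally into a single moderation gate. A minimal sketch under the assumptions that a blocklist named `my-blocklist` exists and that severity 4 is the blocking threshold (the `moderate` helper is illustrative, not SDK API):

```python
import logging
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

logger = logging.getLogger("moderation")

client = ContentSafetyClient(
    os.environ["CONTENT_SAFETY_ENDPOINT"],
    AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"])
)

def moderate(text: str, threshold: int = 4) -> bool:
    """Return True if the text passes moderation."""
    response = client.analyze_text(AnalyzeTextOptions(
        text=text,
        blocklist_names=["my-blocklist"],  # 1: domain-specific terms
        halt_on_blocklist_hit=True         # 4: reject immediately on a hit
    ))
    if response.blocklists_match:
        logger.warning("blocklist hit: %s",
                       [m.blocklist_item_text for m in response.blocklists_match])
        return False
    worst = max(((r.severity or 0) for r in response.categories_analysis),
                default=0)
    logger.info("max severity: %d", worst)  # 5: log for audit
    return worst < threshold                # 2: severity threshold policy
```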