crawl4ai/tests/browser/test_browser_context_id.py
Nasrin f6f7f1b551 Release v0.8.0: Crash Recovery, Prefetch Mode & Security Fixes (#1712), 2026-01-17 14:19:15 +01:00
* Fix: Use correct URL variable for raw HTML extraction (#1116)

- Prevents full HTML content from being passed as URL to extraction strategies
- Added unit tests to verify raw HTML and regular URL processing

Fix: Wrong URL variable used for extraction of raw HTML

* Fix #1181: Preserve whitespace in code blocks during HTML scraping

  The remove_empty_elements_fast() method was removing whitespace-only
  span elements inside <pre> and <code> tags, causing import statements
  like "import torch" to become "importtorch". Now skips elements inside
  code blocks where whitespace is significant.

* Refactor Pydantic model configuration to use ConfigDict for arbitrary types

* Fix EmbeddingStrategy: Uncomment response handling for the variations and clean up mock data. ref #1621

* Fix: permission issues with .cache/url_seeder and other runtime cache dirs. ref #1638

* fix: ensure BrowserConfig.to_dict serializes proxy_config

* feat: make LLM backoff configurable end-to-end

- extend LLMConfig with backoff delay/attempt/factor fields and thread them
  through LLMExtractionStrategy, LLMContentFilter, table extraction, and
  Docker API handlers
- expose the backoff parameter knobs on perform_completion_with_backoff/aperform_completion_with_backoff
  and document them in the md_v2 guides
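
Example (a hedged sketch; the exact backoff field names below are illustrative, not copied from the diff):

  from crawl4ai import LLMConfig, LLMExtractionStrategy

  llm_config = LLMConfig(
      provider="openai/gpt-4o-mini",
      api_token="sk-...",
      # hypothetical names for the new backoff knobs
      backoff_base_delay=2.0,    # initial wait between retries, in seconds
      backoff_max_attempts=5,    # retry budget before giving up
      backoff_factor=2.0,        # exponential multiplier per retry
  )
  strategy = LLMExtractionStrategy(llm_config=llm_config, instruction="Extract product names")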

* reproduced AttributeError from #1642

* pass timeout parameter to docker client request

* added missing deep crawling objects to init

* generalized query in ContentRelevanceFilter to be a str or list

* import modules from enhanceable deserialization

* parameterized tests

* Fix: capture current page URL to reflect JavaScript navigation and add test for delayed redirects. ref #1268

* refactor: replace PyPDF2 with pypdf across the codebase. ref #1412

* Add browser_context_id and target_id parameters to BrowserConfig

Enable Crawl4AI to connect to pre-created CDP browser contexts, which is
essential for cloud browser services that pre-create isolated contexts.

Changes:
- Add browser_context_id and target_id parameters to BrowserConfig
- Update from_kwargs() and to_dict() methods
- Modify BrowserManager.start() to use existing context when provided
- Add _get_page_by_target_id() helper method
- Update get_page() to handle pre-existing targets
- Add test for browser_context_id functionality

This enables cloud services to:
1. Create isolated CDP contexts before Crawl4AI connects
2. Pass context/target IDs to BrowserConfig
3. Have Crawl4AI reuse existing contexts instead of creating new ones

* Add cdp_cleanup_on_close flag to prevent memory leaks in cloud/server scenarios

* Fix: add cdp_cleanup_on_close to from_kwargs

* Fix: find context by target_id for concurrent CDP connections

* Fix: use target_id to find correct page in get_page

* Fix: use CDP to find context by browserContextId for concurrent sessions

* Revert context matching attempts - Playwright cannot see CDP-created contexts

* Add create_isolated_context flag for concurrent CDP crawls

When True, forces creation of a new browser context instead of reusing
the default context. Essential for concurrent crawls on the same browser
to prevent navigation conflicts.

* Add context caching to create_isolated_context branch

Uses contexts_by_config cache (same as non-CDP mode) to reuse contexts
for multiple URLs with same config. Still creates new page per crawl
for navigation isolation. Benefits batch/deep crawls.
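
Example (sketch; the flag names follow the commits above, the CDP endpoint is hypothetical):

  from crawl4ai.async_configs import BrowserConfig

  browser_config = BrowserConfig(
      cdp_url="ws://browser-pool.internal:9222/devtools/browser/abc123",
      create_isolated_context=True,   # new context per connection instead of reusing the default
      cdp_cleanup_on_close=True,      # release pages/contexts when the crawler closes
  )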

* Add init_scripts support to BrowserConfig for pre-page-load JS injection

This adds the ability to inject JavaScript that runs before any page loads,
useful for stealth evasions (canvas/audio fingerprinting, userAgentData).

- Add init_scripts parameter to BrowserConfig (list of JS strings)
- Apply init_scripts in setup_context() via context.add_init_script()
- Update from_kwargs() and to_dict() for serialization
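
Example (hedged sketch; the script body is illustrative):

  from crawl4ai.async_configs import BrowserConfig

  stealth_js = "Object.defineProperty(navigator, 'webdriver', { get: () => undefined });"
  browser_config = BrowserConfig(
      headless=True,
      init_scripts=[stealth_js],  # each string is applied via context.add_init_script()
  )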

* Fix CDP connection handling: support WS URLs and proper cleanup

Changes to browser_manager.py:

1. _verify_cdp_ready(): Support multiple URL formats
   - WebSocket URLs (ws://, wss://): Skip HTTP verification, Playwright handles directly
   - HTTP URLs with query params: Properly parse with urlparse to preserve query string
   - Fixes issue where naive f"{cdp_url}/json/version" broke WS URLs and query params

2. close(): Proper cleanup when cdp_cleanup_on_close=True
   - Close all sessions (pages)
   - Close all contexts
   - Call browser.close() to disconnect (doesn't terminate browser, just releases connection)
   - Wait 1 second for CDP connection to fully release
   - Stop Playwright instance to prevent memory leaks

This enables:
- Connecting to specific browsers via WS URL
- Reusing the same browser with multiple sequential connections
- No user wait needed between connections (internal 1s delay handles it)

Added tests/browser/test_cdp_cleanup_reuse.py with comprehensive tests.
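
The URL handling in item 1 can be sketched roughly as follows (illustrative, not the shipped code; the helper name is hypothetical):

  from urllib.parse import urlparse

  def probe_url_for_cdp(cdp_url: str):
      """Return None for WS URLs (Playwright connects directly), else the /json/version probe URL."""
      parsed = urlparse(cdp_url)
      if parsed.scheme in ("ws", "wss"):
          return None  # skip HTTP verification
      # Rebuild the URL so an existing query string survives the /json/version suffix
      base = f"{parsed.scheme}://{parsed.netloc}{parsed.path.rstrip('/')}/json/version"
      return f"{base}?{parsed.query}" if parsed.query else base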

* Update gitignore

* Some debugging for caching

* Add _generate_screenshot_from_html for raw: and file:// URLs

Implements the missing method that was being called but never defined.
Now raw: and file:// URLs can generate screenshots by:
1. Loading HTML into a browser page via page.set_content()
2. Taking screenshot using existing take_screenshot() method
3. Cleaning up the page afterward

This enables cached HTML to be rendered with screenshots in crawl4ai-cloud.

* Add PDF and MHTML support for raw: and file:// URLs

- Replace _generate_screenshot_from_html with _generate_media_from_html
- New method handles screenshot, PDF, and MHTML in one browser session
- Update raw: and file:// URL handlers to use new method
- Enables cached HTML to generate all media types
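
Minimal sketch of the idea, assuming a Playwright page object (the real method also captures MHTML and cleans up the page afterward):

  async def render_media_from_html(page, html: str):
      # Load local/cached HTML without a network navigation
      await page.set_content(html, wait_until="domcontentloaded")
      screenshot = await page.screenshot(full_page=True)
      pdf = await page.pdf()  # Chromium only
      return screenshot, pdf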

* Add crash recovery for deep crawl strategies

Add optional resume_state and on_state_change parameters to all deep
crawl strategies (BFS, DFS, Best-First) for cloud deployment crash
recovery.

Features:
- resume_state: Pass saved state to resume from checkpoint
- on_state_change: Async callback fired after each URL for real-time
  state persistence to external storage (Redis, DB, etc.)
- export_state(): Get last captured state manually
- Zero overhead when features are disabled (None defaults)

State includes visited URLs, pending queue/stack, depths, and
pages_crawled count. All state is JSON-serializable.
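
Hedged usage sketch (parameter names follow this commit; the import path and callback wiring are assumptions):

  from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

  saved_state = None  # or a dict loaded from Redis/DB after a crash

  async def save_state(state: dict):
      # persist the JSON-serializable state after every crawled URL
      ...

  strategy = BFSDeepCrawlStrategy(
      max_depth=2,
      resume_state=saved_state,
      on_state_change=save_state,
  )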

* Fix: HTTP strategy raw: URL parsing truncates at # character

The AsyncHTTPCrawlerStrategy.crawl() method used urlparse() to extract
content from raw: URLs. This caused HTML with CSS color codes like #eee
to be truncated because # is treated as a URL fragment delimiter.

Before: raw:body{background:#eee} -> parsed.path = 'body{background:'
After:  raw:body{background:#eee} -> raw_content = 'body{background:#eee}'

Fix: Strip the raw: or raw:// prefix directly instead of using urlparse,
matching how the browser strategy handles it.
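
The approach boils down to plain prefix stripping, roughly (function name is illustrative):

  def extract_raw_content(url: str) -> str:
      for prefix in ("raw://", "raw:"):
          if url.startswith(prefix):
              return url[len(prefix):]
      raise ValueError("not a raw: URL")

  assert extract_raw_content("raw:body{background:#eee}") == "body{background:#eee}"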

* Add base_url parameter to CrawlerRunConfig for raw HTML processing

When processing raw: HTML (e.g., from cache), the URL parameter is meaningless
for markdown link resolution. This adds a base_url parameter that can be set
explicitly to provide proper URL resolution context.

Changes:
- Add base_url parameter to CrawlerRunConfig.__init__
- Add base_url to CrawlerRunConfig.from_kwargs
- Update aprocess_html to use base_url for markdown generation

Usage:
  config = CrawlerRunConfig(base_url='https://example.com')
  result = await crawler.arun(url='raw:{html}', config=config)

* Add prefetch mode for two-phase deep crawling

- Add `prefetch` parameter to CrawlerRunConfig
- Add `quick_extract_links()` function for fast link extraction
- Add short-circuit in aprocess_html() for prefetch mode
- Add 42 tests (unit, integration, regression)
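
Hedged sketch of the two-phase pattern (the prefetch flag comes from this commit; everything else is standard usage):

  from crawl4ai import CrawlerRunConfig

  # Phase 1: discover links only, skipping markdown generation and extraction
  prefetch_config = CrawlerRunConfig(prefetch=True)
  # Phase 2: full processing for the URLs kept from phase 1
  full_config = CrawlerRunConfig()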

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Updates on proxy rotation and proxy configuration

* Add proxy support to HTTP crawler strategy

* Add browser pipeline support for raw:/file:// URLs

- Add process_in_browser parameter to CrawlerRunConfig
- Route raw:/file:// URLs through _crawl_web() when browser operations needed
- Use page.set_content() instead of goto() for local content
- Fix cookie handling for non-HTTP URLs in browser_manager
- Auto-detect browser requirements: js_code, wait_for, screenshot, etc.
- Maintain fast path for raw:/file:// without browser params

Fixes #310
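
Hedged example, assuming an AsyncWebCrawler instance named crawler (the flag name comes from this commit):

  config = CrawlerRunConfig(
      process_in_browser=True,   # force raw:/file:// through the browser pipeline
      screenshot=True,           # would also be auto-detected as a browser requirement
  )
  result = await crawler.arun(url="raw:<html><body>Hello</body></html>", config=config)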

* Add smart TTL cache for sitemap URL seeder

- Add cache_ttl_hours and validate_sitemap_lastmod params to SeedingConfig
- New JSON cache format with metadata (version, created_at, lastmod, url_count)
- Cache validation by TTL expiry and sitemap lastmod comparison
- Auto-migration from old .jsonl to new .json format
- Fixes bug where incomplete cache was used indefinitely
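
Hedged example (the two new parameters come from this commit; other values are illustrative):

  from crawl4ai import SeedingConfig

  seeding_config = SeedingConfig(
      source="sitemap",
      cache_ttl_hours=24,             # refresh the cached sitemap after a day
      validate_sitemap_lastmod=True,  # also invalidate when the sitemap's lastmod advances
  )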

* Update URL seeder docs with smart TTL cache parameters

- Add cache_ttl_hours and validate_sitemap_lastmod to parameter table
- Document smart TTL cache validation with examples
- Add cache-related troubleshooting entries
- Update key features summary

* Add MEMORY.md to gitignore

* Docs: Add multi-sample schema generation section

Add documentation explaining how to pass multiple HTML samples
to generate_schema() for stable selectors that work across pages
with varying DOM structures.

Includes:
- Problem explanation (fragile nth-child selectors)
- Solution with code example
- Key points for multi-sample queries
- Comparison table of fragile vs stable selectors
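
Hedged sketch of the documented pattern, assuming generate_schema() accepts a list of HTML samples as the new section describes:

  from crawl4ai import JsonCssExtractionStrategy, LLMConfig

  schema = JsonCssExtractionStrategy.generate_schema(
      html=[sample_page_1, sample_page_2, sample_page_3],  # pages with varying DOM structures
      query="Extract product name and price",
      llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
  )
  strategy = JsonCssExtractionStrategy(schema)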

* Fix critical RCE and LFI vulnerabilities in Docker API deployment

Security fixes for vulnerabilities reported by ProjectDiscovery:

1. Remote Code Execution via Hooks (CVE pending)
   - Remove __import__ from allowed_builtins in hook_manager.py
   - Prevents arbitrary module imports (os, subprocess, etc.)
   - Hooks now disabled by default via CRAWL4AI_HOOKS_ENABLED env var

2. Local File Inclusion via file:// URLs (CVE pending)
   - Add URL scheme validation to /execute_js, /screenshot, /pdf, /html
   - Block file://, javascript:, data: and other dangerous schemes
   - Only allow http://, https://, and raw: (where appropriate)

3. Security hardening
   - Add CRAWL4AI_HOOKS_ENABLED=false as default (opt-in for hooks)
   - Add security warning comments in config.yml
   - Add validate_url_scheme() helper for consistent validation (sketched below)

Testing:
   - Add unit tests (test_security_fixes.py) - 16 tests
   - Add integration tests (run_security_tests.py) for live server

Affected endpoints:
   - POST /crawl (hooks disabled by default)
   - POST /crawl/stream (hooks disabled by default)
   - POST /execute_js (URL validation added)
   - POST /screenshot (URL validation added)
   - POST /pdf (URL validation added)
   - POST /html (URL validation added)

Breaking changes:
   - Hooks require CRAWL4AI_HOOKS_ENABLED=true to function
   - file:// URLs no longer work on API endpoints (use library directly)
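
The validate_url_scheme() helper could look roughly like this (a sketch, not the shipped implementation):

  from urllib.parse import urlparse

  ALLOWED_SCHEMES = {"http", "https"}

  def validate_url_scheme(url: str, allow_raw: bool = False) -> None:
      if allow_raw and url.startswith("raw:"):
          return
      scheme = urlparse(url).scheme.lower()
      if scheme not in ALLOWED_SCHEMES:
          raise ValueError(f"URL scheme '{scheme or 'none'}' is not allowed")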

* Enhance authentication flow by implementing JWT token retrieval and adding authorization headers to API requests

* Add release notes for v0.7.9, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

* Add release notes for v0.8.0, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

Documentation for v0.8.0 release:

- SECURITY.md: Security policy and vulnerability reporting guidelines
- RELEASE_NOTES_v0.8.0.md: Comprehensive release notes
- migration/v0.8.0-upgrade-guide.md: Step-by-step migration guide
- security/GHSA-DRAFT-RCE-LFI.md: GitHub security advisory drafts
- CHANGELOG.md: Updated with v0.8.0 changes

Breaking changes documented:
- Docker API hooks disabled by default (CRAWL4AI_HOOKS_ENABLED)
- file:// URLs blocked on Docker API endpoints

Security fixes credited to Neo by ProjectDiscovery

* Add examples for deep crawl crash recovery and prefetch mode in documentation

* Release v0.8.0: The v0.8.0 Update

- Updated version to 0.8.0
- Added comprehensive demo and release notes
- Updated all documentation

* Update security researcher acknowledgment with a hyperlink for Neo by ProjectDiscovery

* Add async agenerate_schema method for schema generation

- Extract prompt building to shared _build_schema_prompt() method
- Add agenerate_schema() async version using aperform_completion_with_backoff
- Refactor generate_schema() to use shared prompt builder
- Fixes Gemini/Vertex AI compatibility in async contexts (FastAPI)

* Fix: Enable litellm.drop_params for O-series/GPT-5 model compatibility

O-series (o1, o3) and GPT-5 models only support temperature=1.
Setting litellm.drop_params=True auto-drops unsupported parameters
instead of throwing UnsupportedParamsError.

Fixes temperature=0.01 error for these models in LLM extraction.
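
The change amounts to enabling LiteLLM's module-level switch, roughly:

  import litellm

  litellm.drop_params = True  # silently drop params (e.g. temperature) that a model rejects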

---------

Co-authored-by: rbushria <rbushri@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: Chris Murphy <chris.murphy@klaviyo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

"""Test for browser_context_id and target_id parameters.
These tests verify that Crawl4AI can connect to and use pre-created
browser contexts, which is essential for cloud browser services that
pre-create isolated contexts for each user.
The flow being tested:
1. Start a browser with CDP
2. Create a context via raw CDP commands (simulating cloud service)
3. Create a page/target in that context
4. Have Crawl4AI connect using browser_context_id and target_id
5. Verify Crawl4AI uses the existing context/page instead of creating new ones
"""
import asyncio
import json
import os
import sys
import websockets
# Add the project root to Python path if running directly
if __name__ == "__main__":
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from crawl4ai.browser_manager import BrowserManager, ManagedBrowser
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
# Create a logger for clear terminal output
logger = AsyncLogger(verbose=True, log_file=None)
class CDPContextCreator:
"""
Helper class to create browser contexts via raw CDP commands.
This simulates what a cloud browser service would do.
"""
def __init__(self, cdp_url: str):
self.cdp_url = cdp_url
self._message_id = 0
self._ws = None
self._pending_responses = {}
self._receiver_task = None
async def connect(self):
"""Establish WebSocket connection to browser."""
# Convert HTTP URL to WebSocket URL if needed
ws_url = self.cdp_url.replace("http://", "ws://").replace("https://", "wss://")
if not ws_url.endswith("/devtools/browser"):
# Get the browser websocket URL from /json/version
import aiohttp
async with aiohttp.ClientSession() as session:
async with session.get(f"{self.cdp_url}/json/version") as response:
data = await response.json()
ws_url = data.get("webSocketDebuggerUrl", ws_url)
self._ws = await websockets.connect(ws_url, max_size=None, ping_interval=None)
self._receiver_task = asyncio.create_task(self._receive_messages())
logger.info(f"Connected to CDP at {ws_url}", tag="CDP")
async def disconnect(self):
"""Close WebSocket connection."""
if self._receiver_task:
self._receiver_task.cancel()
try:
await self._receiver_task
except asyncio.CancelledError:
pass
if self._ws:
await self._ws.close()
self._ws = None
async def _receive_messages(self):
"""Background task to receive CDP messages."""
try:
async for message in self._ws:
data = json.loads(message)
msg_id = data.get('id')
if msg_id is not None and msg_id in self._pending_responses:
self._pending_responses[msg_id].set_result(data)
except asyncio.CancelledError:
pass
except Exception as e:
logger.error(f"CDP receiver error: {e}", tag="CDP")
async def _send_command(self, method: str, params: dict = None) -> dict:
"""Send CDP command and wait for response."""
self._message_id += 1
msg_id = self._message_id
message = {
"id": msg_id,
"method": method,
"params": params or {}
}
future = asyncio.get_event_loop().create_future()
self._pending_responses[msg_id] = future
try:
await self._ws.send(json.dumps(message))
response = await asyncio.wait_for(future, timeout=30.0)
if 'error' in response:
raise Exception(f"CDP error: {response['error']}")
return response.get('result', {})
finally:
self._pending_responses.pop(msg_id, None)
async def create_context(self) -> dict:
"""
Create an isolated browser context with a blank page.
Returns:
dict with browser_context_id, target_id, and cdp_session_id
"""
await self.connect()
# 1. Create isolated browser context
result = await self._send_command("Target.createBrowserContext", {
"disposeOnDetach": False # Keep context alive
})
browser_context_id = result["browserContextId"]
logger.info(f"Created browser context: {browser_context_id}", tag="CDP")
# 2. Create a new page (target) in the context
result = await self._send_command("Target.createTarget", {
"url": "about:blank",
"browserContextId": browser_context_id
})
target_id = result["targetId"]
logger.info(f"Created target: {target_id}", tag="CDP")
# 3. Attach to the target to get a session ID
result = await self._send_command("Target.attachToTarget", {
"targetId": target_id,
"flatten": True
})
cdp_session_id = result["sessionId"]
logger.info(f"Attached to target, sessionId: {cdp_session_id}", tag="CDP")
return {
"browser_context_id": browser_context_id,
"target_id": target_id,
"cdp_session_id": cdp_session_id
}
async def get_targets(self) -> list:
"""Get list of all targets in the browser."""
result = await self._send_command("Target.getTargets")
return result.get("targetInfos", [])
async def dispose_context(self, browser_context_id: str):
"""Dispose of a browser context."""
try:
await self._send_command("Target.disposeBrowserContext", {
"browserContextId": browser_context_id
})
logger.info(f"Disposed browser context: {browser_context_id}", tag="CDP")
except Exception as e:
logger.warning(f"Error disposing context: {e}", tag="CDP")
async def test_browser_context_id_basic():
    """
    Test that BrowserConfig accepts browser_context_id and target_id parameters.
    """
    logger.info("Testing BrowserConfig browser_context_id parameter", tag="TEST")
    try:
        # Test that BrowserConfig accepts the new parameters
        config = BrowserConfig(
            cdp_url="http://localhost:9222",
            browser_context_id="test-context-id",
            target_id="test-target-id",
            headless=True
        )
        # Verify parameters are set correctly
        assert config.browser_context_id == "test-context-id", "browser_context_id not set"
        assert config.target_id == "test-target-id", "target_id not set"
        # Test from_kwargs
        config2 = BrowserConfig.from_kwargs({
            "cdp_url": "http://localhost:9222",
            "browser_context_id": "test-context-id-2",
            "target_id": "test-target-id-2"
        })
        assert config2.browser_context_id == "test-context-id-2", "browser_context_id not set via from_kwargs"
        assert config2.target_id == "test-target-id-2", "target_id not set via from_kwargs"
        # Test to_dict
        config_dict = config.to_dict()
        assert config_dict.get("browser_context_id") == "test-context-id", "browser_context_id not in to_dict"
        assert config_dict.get("target_id") == "test-target-id", "target_id not in to_dict"
        logger.success("BrowserConfig browser_context_id test passed", tag="TEST")
        return True
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        return False

async def test_pre_created_context_usage():
    """
    Test that Crawl4AI uses a pre-created browser context instead of creating a new one.

    This simulates the cloud browser service flow:
    1. Start browser with CDP
    2. Create context via raw CDP (simulating cloud service)
    3. Have Crawl4AI connect with browser_context_id
    4. Verify it uses existing context
    """
    logger.info("Testing pre-created context usage", tag="TEST")
    # Start a managed browser first
    browser_config_initial = BrowserConfig(
        use_managed_browser=True,
        headless=True,
        debugging_port=9226,  # Use unique port
        verbose=True
    )
    managed_browser = ManagedBrowser(browser_config=browser_config_initial, logger=logger)
    cdp_creator = None
    manager = None
    context_info = None
    try:
        # Start the browser
        cdp_url = await managed_browser.start()
        logger.info(f"Browser started at {cdp_url}", tag="TEST")
        # Create a context via raw CDP (simulating cloud service)
        cdp_creator = CDPContextCreator(cdp_url)
        context_info = await cdp_creator.create_context()
        logger.info(f"Pre-created context: {context_info['browser_context_id']}", tag="TEST")
        logger.info(f"Pre-created target: {context_info['target_id']}", tag="TEST")
        # Get initial target count
        targets_before = await cdp_creator.get_targets()
        initial_target_count = len(targets_before)
        logger.info(f"Initial target count: {initial_target_count}", tag="TEST")
        # Now create BrowserManager with browser_context_id and target_id
        browser_config = BrowserConfig(
            cdp_url=cdp_url,
            browser_context_id=context_info['browser_context_id'],
            target_id=context_info['target_id'],
            headless=True,
            verbose=True
        )
        manager = BrowserManager(browser_config=browser_config, logger=logger)
        await manager.start()
        logger.info("BrowserManager started with pre-created context", tag="TEST")
        # Get a page
        crawler_config = CrawlerRunConfig()
        page, context = await manager.get_page(crawler_config)
        # Navigate to a test page
        await page.goto("https://example.com", wait_until="domcontentloaded")
        title = await page.title()
        logger.info(f"Page title: {title}", tag="TEST")
        # Get target count after
        targets_after = await cdp_creator.get_targets()
        final_target_count = len(targets_after)
        logger.info(f"Final target count: {final_target_count}", tag="TEST")
        # Verify: target count should not have increased significantly
        # (allow for 1 extra target for internal use, but not many more)
        target_diff = final_target_count - initial_target_count
        logger.info(f"Target count difference: {target_diff}", tag="TEST")
        # Success criteria:
        # 1. Page navigation worked
        # 2. Target count didn't explode (reused existing context)
        success = title == "Example Domain" and target_diff <= 1
        if success:
            logger.success("Pre-created context usage test passed", tag="TEST")
        else:
            logger.error(f"Test failed - Title: {title}, Target diff: {target_diff}", tag="TEST")
        return success
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        import traceback
        traceback.print_exc()
        return False
    finally:
        # Cleanup
        if manager:
            try:
                await manager.close()
            except:
                pass
        if cdp_creator and context_info:
            try:
                await cdp_creator.dispose_context(context_info['browser_context_id'])
                await cdp_creator.disconnect()
            except:
                pass
        if managed_browser:
            try:
                await managed_browser.cleanup()
            except:
                pass

async def test_context_isolation():
    """
    Test that using browser_context_id actually provides isolation.
    Create two contexts and verify they don't share state.
    """
    logger.info("Testing context isolation with browser_context_id", tag="TEST")
    browser_config_initial = BrowserConfig(
        use_managed_browser=True,
        headless=True,
        debugging_port=9227,
        verbose=True
    )
    managed_browser = ManagedBrowser(browser_config=browser_config_initial, logger=logger)
    cdp_creator = None
    cdp_creator2 = None
    manager1 = None
    manager2 = None
    context_info_1 = None
    context_info_2 = None
    try:
        # Start the browser
        cdp_url = await managed_browser.start()
        logger.info(f"Browser started at {cdp_url}", tag="TEST")
        # Create two separate contexts
        cdp_creator = CDPContextCreator(cdp_url)
        context_info_1 = await cdp_creator.create_context()
        logger.info(f"Context 1: {context_info_1['browser_context_id']}", tag="TEST")
        # Need to reconnect for second context (or use same connection)
        await cdp_creator.disconnect()
        cdp_creator2 = CDPContextCreator(cdp_url)
        context_info_2 = await cdp_creator2.create_context()
        logger.info(f"Context 2: {context_info_2['browser_context_id']}", tag="TEST")
        # Verify contexts are different
        assert context_info_1['browser_context_id'] != context_info_2['browser_context_id'], \
            "Contexts should have different IDs"
        # Connect with first context
        browser_config_1 = BrowserConfig(
            cdp_url=cdp_url,
            browser_context_id=context_info_1['browser_context_id'],
            target_id=context_info_1['target_id'],
            headless=True
        )
        manager1 = BrowserManager(browser_config=browser_config_1, logger=logger)
        await manager1.start()
        # Set a cookie in context 1
        page1, ctx1 = await manager1.get_page(CrawlerRunConfig())
        await page1.goto("https://example.com", wait_until="domcontentloaded")
        await ctx1.add_cookies([{
            "name": "test_isolation",
            "value": "context_1_value",
            "domain": "example.com",
            "path": "/"
        }])
        cookies1 = await ctx1.cookies(["https://example.com"])
        cookie1_value = next((c["value"] for c in cookies1 if c["name"] == "test_isolation"), None)
        logger.info(f"Cookie in context 1: {cookie1_value}", tag="TEST")
        # Connect with second context
        browser_config_2 = BrowserConfig(
            cdp_url=cdp_url,
            browser_context_id=context_info_2['browser_context_id'],
            target_id=context_info_2['target_id'],
            headless=True
        )
        manager2 = BrowserManager(browser_config=browser_config_2, logger=logger)
        await manager2.start()
        # Check cookies in context 2 - should not have the cookie from context 1
        page2, ctx2 = await manager2.get_page(CrawlerRunConfig())
        await page2.goto("https://example.com", wait_until="domcontentloaded")
        cookies2 = await ctx2.cookies(["https://example.com"])
        cookie2_value = next((c["value"] for c in cookies2 if c["name"] == "test_isolation"), None)
        logger.info(f"Cookie in context 2: {cookie2_value}", tag="TEST")
        # Verify isolation
        isolation_works = cookie1_value == "context_1_value" and cookie2_value is None
        if isolation_works:
            logger.success("Context isolation test passed", tag="TEST")
        else:
            logger.error(f"Isolation failed - Cookie1: {cookie1_value}, Cookie2: {cookie2_value}", tag="TEST")
        return isolation_works
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        import traceback
        traceback.print_exc()
        return False
    finally:
        # Cleanup
        for mgr in [manager1, manager2]:
            if mgr:
                try:
                    await mgr.close()
                except:
                    pass
        for ctx_info, creator in [(context_info_1, cdp_creator), (context_info_2, cdp_creator2)]:
            if ctx_info and creator:
                try:
                    await creator.dispose_context(ctx_info['browser_context_id'])
                    await creator.disconnect()
                except:
                    pass
        if managed_browser:
            try:
                await managed_browser.cleanup()
            except:
                pass

async def run_tests():
    """Run all browser_context_id tests."""
    results = []
    logger.info("Running browser_context_id tests", tag="SUITE")
    # Basic parameter test
    results.append(("browser_context_id_basic", await test_browser_context_id_basic()))
    # Pre-created context usage test
    results.append(("pre_created_context_usage", await test_pre_created_context_usage()))
    # Note: Context isolation test is commented out because isolation is enforced
    # at the CDP level by the cloud browser service, not at the Playwright level.
    # When multiple BrowserManagers connect to the same browser, Playwright sees
    # all contexts. In production, each worker gets exactly one pre-created context.
    # results.append(("context_isolation", await test_context_isolation()))
    # Print summary
    total = len(results)
    passed = sum(1 for _, r in results if r)
    logger.info("=" * 50, tag="SUMMARY")
    logger.info(f"Test Results: {passed}/{total} passed", tag="SUMMARY")
    logger.info("=" * 50, tag="SUMMARY")
    for name, result in results:
        status = "PASSED" if result else "FAILED"
        logger.info(f"  {name}: {status}", tag="SUMMARY")
    if passed == total:
        logger.success("All tests passed!", tag="SUMMARY")
        return True
    else:
        logger.error(f"{total - passed} tests failed", tag="SUMMARY")
        return False


if __name__ == "__main__":
    success = asyncio.run(run_tests())
    sys.exit(0 if success else 1)