feat(core): Release v0.3.73 with Browser Takeover and Docker Support

Major changes:
- Add browser takeover feature using CDP for authentic browsing
- Implement Docker support with full API server documentation
- Enhance Mockdown with tag preservation system
- Improve parallel crawling performance

This release focuses on authenticity and scalability: crawlers can now attach
to a user's own browser session, and the project ships with containerized
deployment options. Breaking changes include modified browser handling and a
revised API response structure.

See CHANGELOG.md for detailed migration guide.
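A takeover of this kind typically works by attaching to a Chrome instance that was started with `--remote-debugging-port`, so the crawler inherits the user's real profile, cookies, and fingerprint instead of launching a fresh (and more detectable) browser. The sketch below shows the general pattern; the helper names (`cdp_endpoint`, `takeover`) are illustrative and not crawl4ai's actual API, though `connect_over_cdp` is a real Playwright method.

```python
# Hypothetical sketch: attaching to a user's already-running Chrome over CDP.
# Helper names are illustrative, not crawl4ai's public API.

def cdp_endpoint(host: str = "localhost", port: int = 9222) -> str:
    """Build the DevTools endpoint URL for a Chrome started with
    --remote-debugging-port=<port>."""
    return f"http://{host}:{port}"

def takeover(endpoint: str) -> str:
    """Attach to the existing browser and return the active page's title."""
    # connect_over_cdp attaches to a running browser rather than
    # launching a new instance, so the user's session is reused.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(endpoint)
        context = browser.contexts[0]  # the user's existing profile/context
        page = context.pages[0] if context.pages else context.new_page()
        return page.title()

if __name__ == "__main__":
    print(takeover(cdp_endpoint()))
```

Start Chrome with e.g. `chrome --remote-debugging-port=9222` before attaching; the CDP connection will fail if no browser is listening on that port.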
Author: UncleCode
Date:   2024-11-05 20:04:18 +08:00
Parent: c4c6227962
Commit: 67a23c3182

18 changed files with 1066 additions and 61263 deletions


@@ -31,9 +31,11 @@ with open("crawl4ai/_version.py") as f:
 # Define the requirements for different environments
 default_requirements = requirements
-torch_requirements = ["torch", "nltk", "spacy", "scikit-learn"]
-transformer_requirements = ["transformers", "tokenizers", "onnxruntime"]
-cosine_similarity_requirements = ["torch", "transformers", "nltk", "spacy"]
+# torch_requirements = ["torch", "nltk", "spacy", "scikit-learn"]
+# transformer_requirements = ["transformers", "tokenizers", "onnxruntime"]
+torch_requirements = ["torch", "nltk", "scikit-learn"]
+transformer_requirements = ["transformers", "tokenizers"]
+cosine_similarity_requirements = ["torch", "transformers", "nltk"]
 sync_requirements = ["selenium"]
 def install_playwright():
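Requirement lists like these are usually wired into setuptools `extras_require` so each dependency group becomes an opt-in install target. The sketch below shows that mapping; the extra names used here (`torch`, `transformer`, `cosine`, `sync`, `all`) are assumptions for illustration, so check setup.py for the names the project actually registers.

```python
# Sketch: mapping the trimmed requirement lists onto setuptools extras.
# Extra names below are illustrative; see setup.py for the real ones.
torch_requirements = ["torch", "nltk", "scikit-learn"]
transformer_requirements = ["transformers", "tokenizers"]
cosine_similarity_requirements = ["torch", "transformers", "nltk"]
sync_requirements = ["selenium"]

extras_require = {
    "torch": torch_requirements,
    "transformer": transformer_requirements,
    "cosine": cosine_similarity_requirements,
    "sync": sync_requirements,
    # "all" pulls in every optional group, deduplicated and sorted
    "all": sorted({pkg
                   for group in (torch_requirements,
                                 transformer_requirements,
                                 cosine_similarity_requirements,
                                 sync_requirements)
                   for pkg in group}),
}
```

With extras defined this way, a user installs only what they need, e.g. `pip install "crawl4ai[torch]"`, and the base install stays free of heavy ML dependencies like `spacy` and `onnxruntime`, which this commit drops from the optional groups.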