docs: Add guide on how to avoid getting blocked #569

Closed
41 changes: 21 additions & 20 deletions README.md
````diff
@@ -129,38 +129,39 @@ The [`PlaywrightCrawler`](https://crawlee.dev/python/api/class/PlaywrightCrawler)
 ```python
 import asyncio
 
-from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext
+from crawlee import EnqueueStrategy
+from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext
+
+# Define the maximum depth for the crawler
+MAX_DEPTH = 3
 
 async def main() -> None:
-    crawler = PlaywrightCrawler(
-        # Limit the crawl to max requests. Remove or increase it for crawling all links.
+    crawler = BeautifulSoupCrawler(
+        # Limit the crawl to max requests. Adjust as needed.
         max_requests_per_crawl=10,
     )
 
     # Define the default request handler, which will be called for every request.
     @crawler.router.default_handler
-    async def request_handler(context: PlaywrightCrawlingContext) -> None:
-        context.log.info(f'Processing {context.request.url} ...')
-
-        # Extract data from the page.
-        data = {
-            'url': context.request.url,
-            'title': await context.page.title(),
-        }
-
-        # Push the extracted data to the default dataset.
-        await context.push_data(data)
-
-        # Enqueue all links found on the page.
-        await context.enqueue_links()
+    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
+        current_depth = context.request.meta.get('depth', 0)
+
+        context.log.info(f'Processing {context.request.url} at depth {current_depth} ...')
+
+        # Only enqueue links if the current depth is less than the maximum
+        if current_depth < MAX_DEPTH:
+            # Enqueue all links found on the page, incrementing the depth
+            await context.enqueue_links(
+                strategy=EnqueueStrategy.ALL,
+                meta={'depth': current_depth + 1}  # Increment depth for enqueued links
+            )
 
-    # Run the crawler with the initial list of requests.
-    await crawler.run(['https://crawlee.dev'])
+    # Start with a depth of 0 for the initial URL.
+    await crawler.run([{'url': 'https://crawlee.dev', 'meta': {'depth': 0}}])
 
 if __name__ == '__main__':
     asyncio.run(main())
 ```
````
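A note on the proposed code: Crawlee for Python keeps per-request state in `Request.user_data` rather than a `meta` attribute, `enqueue_links` has no documented `meta` keyword, and `crawler.run()` takes URL strings or `Request` objects, not plain dicts. Below is a minimal sketch of the same depth-limiting idea against `user_data` instead; the `depth` key and `MAX_DEPTH` constant are conventions carried over from the diff, not Crawlee built-ins, and the `user_data` keyword on `enqueue_links` is an assumption to verify against your installed version's docs.

```python
import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext

MAX_DEPTH = 3  # maximum link depth to follow (convention from the diff, not a Crawlee built-in)


async def main() -> None:
    crawler = BeautifulSoupCrawler(
        # Limit the crawl to max requests. Adjust as needed.
        max_requests_per_crawl=10,
    )

    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        # user_data is empty for the seed URL, so depth defaults to 0.
        current_depth = context.request.user_data.get('depth', 0)
        context.log.info(f'Processing {context.request.url} at depth {current_depth} ...')

        if current_depth < MAX_DEPTH:
            # Assumed: enqueue_links copies this user_data onto every
            # enqueued request (check the docs for your Crawlee version).
            await context.enqueue_links(user_data={'depth': current_depth + 1})

    # The seed request needs no explicit depth; the handler defaults it to 0.
    await crawler.run(['https://crawlee.dev'])


if __name__ == '__main__':
    asyncio.run(main())
```

Carrying the depth inside `user_data` keeps the handler stateless: the counter travels with each request through the queue instead of living in crawler-level state.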

### More examples
30 changes: 30 additions & 0 deletions website/roa-loader/package-lock.json

Some generated files are not rendered by default.

3 changes: 2 additions & 1 deletion website/roa-loader/package.json
```diff
@@ -10,6 +10,7 @@
   "author": "",
   "license": "ISC",
   "dependencies": {
-    "loader-utils": "^3.2.1"
+    "loader-utils": "^3.2.1",
+    "roa-loader": "file:"
   }
 }
```