There are two ways to use the Crawl API:

  1. API-Based collection
  2. No-Code collection (via Control Panel)

API-Based collection

  1. Trigger a data collection job via a simple HTTP POST
  2. Specify the URLs to crawl and the output format
  3. Receive a snapshot_id to retrieve the results later
Code Example

```sh
curl "https://api.brightdata.com/datasets/v3/trigger?dataset_id=<dataset_id>&include_errors=<true/false>&custom_output_fields=<custom_output_fields>" \
  -H "Authorization: Bearer API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"url":"https://example.com"},{"url":"https://example.com/1"}]'
```

Query Parameters:

dataset_id (query, required)
  Your dataset ID (e.g., gd_m6gjtfmeh43we6cqc)

include_errors (query, default: "true")
  Include error logs in the results

custom_output_fields (query)
  Output formats to include: markdown, html, ld_json, etc.
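The trigger call above can also be sketched in Python with only the standard library. The helper below builds the same POST request from the parameters described in this section; the function name and structure are illustrative, not part of the API itself:

```python
import json
import urllib.request

API_BASE = "https://api.brightdata.com/datasets/v3"

def build_trigger_request(api_token, dataset_id, urls, include_errors=True):
    """Build the POST request that triggers a Crawl API collection job.

    `api_token` and `dataset_id` come from your account; `urls` is the
    list of pages to collect.
    """
    endpoint = (
        f"{API_BASE}/trigger"
        f"?dataset_id={dataset_id}"
        f"&include_errors={str(include_errors).lower()}"
    )
    # The request body is a JSON array of {"url": ...} objects,
    # matching the curl example above.
    payload = json.dumps([{"url": u} for u in urls]).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request requires a valid token and dataset ID:
# req = build_trigger_request("API_TOKEN", "gd_m6gjtfmeh43we6cqc",
#                             ["https://example.com", "https://example.com/1"])
# with urllib.request.urlopen(req) as resp:
#     snapshot_id = json.load(resp)["snapshot_id"]
```

Separating request construction from sending makes the call easy to inspect or test before spending quota on a real job.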

Choose the format that best fits your workflow. For example, markdown output looks like this:

```md
# Main Article Title

This is the introduction paragraph with **bold text** and *italics*.

## Subheading

- List item one
- List item two

> This is a blockquote from the article

[Link text](https://example.com/more-info)
![Image description](https://example.com/image.jpg)
```

Delivery

Deliver results to:

  • Webhooks
  • External storage (S3, GCS, etc.)
  • Direct download via API or Control Panel
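As a minimal sketch of direct download via the API: assuming the results of a finished job are served from a v3 snapshot endpoint at `/datasets/v3/snapshot/<snapshot_id>` and that the API signals an in-progress job with HTTP 202 (both assumptions; check the API reference for the documented paths and status codes), a poll-and-download helper might look like this:

```python
import json
import time
import urllib.request

API_BASE = "https://api.brightdata.com/datasets/v3"

def snapshot_request(api_token, snapshot_id, fmt="json"):
    """Build the GET request for a snapshot (assumed endpoint path)."""
    return urllib.request.Request(
        f"{API_BASE}/snapshot/{snapshot_id}?format={fmt}",
        headers={"Authorization": f"Bearer {api_token}"},
    )

def download_when_ready(api_token, snapshot_id, retries=10, delay=30):
    """Poll until the snapshot is ready, then return the parsed records.

    Treating HTTP 202 as "still running" is an assumption made for this
    sketch, not documented behavior.
    """
    for _ in range(retries):
        with urllib.request.urlopen(snapshot_request(api_token, snapshot_id)) as resp:
            if resp.status != 202:  # assumed: 202 means the job is still running
                return json.load(resp)
        time.sleep(delay)
    raise TimeoutError(f"snapshot {snapshot_id} not ready after {retries} attempts")
```

For production use, webhooks or external storage delivery avoid polling entirely; this loop is only a fallback for the direct-download path.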

No-Code Scraper (Control Panel)

Use our Control Panel to launch crawls without writing a single line of code. Steps:

  1. Open the Crawl API Control Panel
  2. Enter the target domain or URLs
  3. Choose your output format
  4. Start the crawl
  5. Download results directly from the dashboard