How it works
You send one or more YouTube URLs to the Bright Data YouTube Scraper API. Bright Data handles the scraping infrastructure and returns clean, structured data. Use the dataset_id parameter to specify the data type (videos, channels, or comments) and receive results in JSON, NDJSON, or CSV.
What the response looks like
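Each successful record comes back as a structured object. The snippet below sketches an illustrative video record in Python; the field names here are examples, not the exact schema (see the API reference for the authoritative field list):

```python
import json

# Illustrative video record -- field names are examples, not the exact schema.
sample_record = {
    "url": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "title": "Example video title",
    "views": 1234567,
    "likes": 45678,
    "description": "Example description",
    "duration": "3:32",
}

# The API can also return NDJSON (one record per line) or CSV.
print(json.dumps(sample_record, indent=2))
```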
Supported data types
Videos
Titles, views, likes, descriptions, durations, and thumbnails. Discover videos by keyword, hashtag, or the Explore page.
Channels
Subscriber counts, video counts, descriptions, and verification status. Discover channels by keyword.
Comments
Comment text, likes, replies, and commenter details for any video.
Request methods
The Bright Data YouTube Scraper API supports two request methods: synchronous requests, which return results directly in small batches, and asynchronous requests, which trigger a job whose results you collect when ready and which support large batches and discovery. Choose based on your volume and latency needs.
Learn more in Understanding sync vs. async requests.
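As a sketch of what a trigger call looks like, the snippet below builds (but does not send) a scrape request with Python's standard library. The token and dataset_id are placeholders, and the endpoint path and parameter names are assumptions based on Bright Data's dataset API conventions; check the API reference for the exact values:

```python
import json
import urllib.request

# Placeholder values -- substitute your own token and the dataset_id
# for the data type you need (videos, channels, or comments).
API_TOKEN = "YOUR_API_TOKEN"
DATASET_ID = "gd_example_youtube_videos"

def build_trigger_request(urls, dataset_id=DATASET_ID, fmt="json"):
    """Build (but do not send) a scrape request for a batch of YouTube URLs."""
    endpoint = (
        "https://api.brightdata.com/datasets/v3/trigger"
        f"?dataset_id={dataset_id}&format={fmt}"
    )
    body = json.dumps([{"url": u} for u in urls]).encode()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request(["https://www.youtube.com/watch?v=dQw4w9WgXcQ"])
# To actually send the request: urllib.request.urlopen(req)
print(req.full_url)
```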
Capabilities and limits
| Capability | Detail |
|---|---|
| Output formats | JSON, NDJSON, CSV |
| Max URLs per sync request | 20 |
| Max URLs per async request | 5,000 |
| Data freshness | Real-time (scraped on demand) |
| Delivery options | API download, webhook, Amazon S3, Snowflake, Azure, GCS |
| Pricing | Pay per successful record (see pricing) |
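The URL limits above imply a simple batching rule when you have a large URL list: split sync submissions into groups of at most 20 and async submissions into groups of at most 5,000. A minimal helper (limit values taken from the table above):

```python
def chunk_urls(urls, mode="sync"):
    """Split a URL list into batches that each fit in one request.

    Limits come from the capabilities table: 20 URLs per sync request,
    5,000 per async request.
    """
    limit = 20 if mode == "sync" else 5000
    return [urls[i:i + limit] for i in range(0, len(urls), limit)]

urls = [f"https://www.youtube.com/watch?v=vid{i}" for i in range(45)]
batches = chunk_urls(urls)
# 45 URLs split into sync batches of 20, 20, and 5
print([len(b) for b in batches])
```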
Common questions
Is the data scraped in real time?
Yes. Each request triggers a live scrape. There is no cached or stale data. Response times vary by endpoint: channels typically return in 10-30 seconds (sync), while discovery requests may take longer depending on result volume.
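Because response times vary, async jobs are typically polled until results are ready. The sketch below shows a generic polling loop; the status strings and the idea of injecting a `get_status` callable are assumptions for illustration, since a real implementation would call the job's progress endpoint:

```python
import time

def wait_for_snapshot(get_status, timeout=300, interval=5):
    """Poll an async job until it is ready or the timeout elapses.

    `get_status` is any callable returning a status string; a real
    implementation would fetch the job's progress from the API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == "ready":
            return True
        if status == "failed":
            raise RuntimeError("scrape job failed")
        time.sleep(interval)
    raise TimeoutError("snapshot not ready in time")

# Simulated job that becomes ready on the third poll:
statuses = iter(["running", "running", "ready"])
print(wait_for_snapshot(lambda: next(statuses), interval=0))
```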
What is the difference between URL collection and discovery?
URL collection scrapes a specific YouTube page you provide (e.g., a channel URL). Discovery finds YouTube content matching search criteria (e.g., videos by keyword or hashtag) and scrapes the results. Discovery is only available via async requests.
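The two modes differ mainly in their request bodies: URL collection takes page URLs, while discovery takes search criteria. The field names below are hypothetical; consult the API reference for the exact parameters each dataset accepts:

```python
# Hypothetical request bodies -- actual field names may differ.

# URL collection: scrape specific pages you already have.
collect_payload = [
    {"url": "https://www.youtube.com/@SomeChannel/about"},
]

# Discovery: find and scrape content matching search criteria (async only).
discover_payload = [
    {"keyword": "machine learning tutorial"},
]

print(collect_payload, discover_payload)
```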
How is this different from scraping using proxies or Web Unlocker?
When scraping with proxies or Web Unlocker, you still need to write and maintain
your own parsing logic and update it whenever YouTube changes its page structure.
The YouTube Scraper API handles the entire stack: proxy rotation, anti-bot bypassing,
and parsing. You simply send a YouTube URL and get clean, structured JSON back, with
no scraping infrastructure or parser maintenance required on your end.
Next steps
Quickstart
Scrape your first YouTube channel in 5 minutes.
Send your first request
Full code examples in cURL, Python, and Node.js.
API reference
Endpoint specs, parameters, and response schemas.