Global options
These flags work with any command:
| Flag | Description |
|---|---|
| -k, --api-key <key> | Override API key for this request |
| --timing | Show request timing info |
| -v, --version | Show CLI version |
brightdata login
Authenticate with Bright Data. Opens the browser for OAuth by default.
| Flag | Description |
|---|---|
| -k, --api-key <key> | Use API key directly (skips browser) |
| -c, --customer-id <id> | Bright Data account ID (optional) |
| -d, --device | Use device flow for SSH/headless environments |
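A quick sketch of the three authentication paths (the placeholder <key> is your own API key):

```shell
brightdata login            # opens the browser for OAuth
brightdata login -k <key>   # non-interactive, skips the browser
brightdata login -d         # device flow for SSH/headless machines
```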
brightdata logout
Clear stored credentials.
brightdata scrape <url>
Scrape any URL using Bright Data’s Web Unlocker. Handles CAPTCHAs, JavaScript rendering, and anti-bot protections automatically.
| Flag | Description |
|---|---|
| -f, --format <fmt> | markdown (default), html, screenshot, json |
| --country <code> | ISO country code for geo-targeting (e.g. us, de, jp) |
| --zone <name> | Web Unlocker zone name |
| --mobile | Use a mobile user agent |
| --async | Submit async, return a snapshot ID |
| -o, --output <path> | Write output to file |
| --json | Force JSON output |
| --pretty | Pretty-print JSON output |
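For example (the URL is a placeholder; flags as listed above):

```shell
# Scrape a page as markdown, geo-targeted to Germany
brightdata scrape https://example.com --country de -f markdown -o page.md

# Capture a screenshot of the same page instead
brightdata scrape https://example.com -f screenshot -o page.png
```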
brightdata search <query>
Search Google, Bing, or Yandex via Bright Data’s SERP API. Google returns structured JSON with organic results, ads, People Also Ask, and related searches. Bing and Yandex return markdown by default.
| Flag | Description |
|---|---|
| --engine <name> | google (default), bing, yandex |
| --country <code> | Localized results (e.g. us, de) |
| --language <code> | Language code (e.g. en, fr) |
| --page <n> | Page number, 0-indexed (default: 0) |
| --type <type> | web (default), news, images, shopping |
| --device <type> | desktop, mobile |
| --zone <name> | SERP zone name |
| -o, --output <path> | Write output to file |
| --json | Force JSON output |
| --pretty | Pretty-print JSON output |
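An illustrative query combining the flags above (the search terms are arbitrary):

```shell
# Second page of German-localized Google news results, pretty-printed JSON
brightdata search "web scraping tools" --engine google --country de --language de --type news --page 1 --pretty
```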
brightdata discover <query>
AI-powered web discovery. Submit a query with optional intent, and Bright Data finds, ranks, and optionally extracts full-page content for each result.
| Flag | Description |
|---|---|
| --intent <text> | AI intent to evaluate and rank result relevance |
| --country <code> | ISO country code (default: US) |
| --city <name> | City for localized results (e.g. "New York") |
| --language <code> | Language code (default: en) |
| --num-results <n> | Number of results to return |
| --filter-keywords <words> | Comma-separated keywords that must appear in results |
| --include-content | Include full page content (markdown) in each result |
| --no-remove-duplicates | Keep duplicate results |
| --start-date <date> | Only content updated from date (YYYY-MM-DD) |
| --end-date <date> | Only content updated until date (YYYY-MM-DD) |
| --timeout <seconds> | Polling timeout (default: 600) |
| -o, --output <path> | Write output to file |
| --json / --pretty | JSON output (raw / indented) |
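A hedged example (query and intent text are arbitrary):

```shell
# --intent guides how results are evaluated and ranked
brightdata discover "best espresso machines" \
  --intent "independent reviews with price comparisons" \
  --num-results 10 --include-content -o results.json
```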
brightdata pipelines <type> [params...] [options]
Extract structured data from 40+ platforms. Triggers an async collection job, polls until results are ready, and returns the data.
| Flag | Description |
|---|---|
| --format <fmt> | json (default), csv, ndjson, jsonl |
| --timeout <seconds> | Polling timeout (default: 600) |
| -o, --output <path> | Write output to file |
| --json | Force JSON output |
| --pretty | Pretty-print JSON output |
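For example, using the amazon_product type from the tables below (the product URL is a placeholder):

```shell
# amazon_product takes a single <url> parameter
brightdata pipelines amazon_product https://www.amazon.com/dp/EXAMPLE --format json -o product.json
```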
Supported platforms
E-Commerce
| Type | Platform | Parameters |
|---|---|---|
| amazon_product | Amazon product page | <url> |
| amazon_product_reviews | Amazon reviews | <url> |
| amazon_product_search | Amazon search results | <keyword> <domain_url> |
| walmart_product | Walmart product page | <url> |
| walmart_seller | Walmart seller profile | <url> |
| ebay_product | eBay listing | <url> |
| bestbuy_products | Best Buy | <url> |
| etsy_products | Etsy | <url> |
| homedepot_products | Home Depot | <url> |
| zara_products | Zara | <url> |
| google_shopping | Google Shopping | <url> |
Professional Networks
| Type | Platform | Parameters |
|---|---|---|
| linkedin_person_profile | LinkedIn person | <url> |
| linkedin_company_profile | LinkedIn company | <url> |
| linkedin_job_listings | LinkedIn jobs | <url> |
| linkedin_posts | LinkedIn posts | <url> |
| linkedin_people_search | LinkedIn people search | <url> <first_name> <last_name> |
| crunchbase_company | Crunchbase | <url> |
| zoominfo_company_profile | ZoomInfo | <url> |
Social Media
| Type | Platform | Parameters |
|---|---|---|
| instagram_profiles | Instagram profile | <url> |
| instagram_posts | Instagram posts | <url> |
| instagram_reels | Instagram reels | <url> |
| instagram_comments | Instagram comments | <url> |
| facebook_posts | Facebook posts | <url> |
| facebook_marketplace_listings | Facebook Marketplace | <url> |
| facebook_company_reviews | Facebook company reviews | <url> [num_reviews] |
| facebook_events | Facebook events | <url> |
| tiktok_profiles | TikTok profile | <url> |
| tiktok_posts | TikTok posts | <url> |
| tiktok_shop | TikTok Shop | <url> |
| tiktok_comments | TikTok comments | <url> |
| x_posts | X posts | <url> |
| youtube_profiles | YouTube profile | <url> |
| youtube_videos | YouTube videos | <url> |
| youtube_comments | YouTube comments | <url> [num_comments] |
| reddit_posts | Reddit posts | <url> |
Maps, Reviews & Other
| Type | Platform | Parameters |
|---|---|---|
| google_maps_reviews | Google Maps reviews | <url> [days_limit] |
| google_play_store | Google Play | <url> |
| apple_app_store | Apple App Store | <url> |
| github_repository_file | GitHub repository files | <url> |
| yahoo_finance_business | Yahoo Finance | <url> |
| zillow_properties_listing | Zillow | <url> |
| booking_hotel_listings | Booking.com | <url> |
brightdata status <job-id>
Check the status of an async snapshot job (from --async scrapes or pipeline collections).
| Flag | Description |
|---|---|
| --wait | Poll until the job completes |
| --timeout <seconds> | Polling timeout (default: 600) |
| -o, --output <path> | Write output to file |
| --json / --pretty | JSON output |
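A typical async round trip looks like this (the <job-id> placeholder is whatever ID the first command prints):

```shell
brightdata scrape https://example.com --async        # prints a snapshot ID
brightdata status <job-id> --wait --timeout 300 -o result.md
```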
brightdata browser
Control a real browser session powered by Bright Data’s Scraping Browser. A lightweight local daemon holds the browser connection open between commands, giving you persistent state without reconnecting on every call.
Global flags
These flags work with every browser subcommand:
| Flag | Description |
|---|---|
| --session <name> | Session name for running multiple isolated sessions in parallel (default: default) |
| --country <code> | Geo-target by ISO country code. On open, changing country reconnects the browser |
| --zone <name> | Scraping Browser zone (default: cli_browser) |
| --timeout <ms> | IPC command timeout in milliseconds (default: 30000) |
| --idle-timeout <ms> | Daemon auto-shutdown after idle (default: 600000 / 10 min) |
| --json / --pretty | JSON output |
| -o, --output <path> | Write output to file |
Subcommands
browser open
Navigate to a URL. Starts the daemon and browser session automatically if not already running.
| Flag | Description |
|---|---|
| --country <code> | Geo-targeting. Reconnects the browser if the country changes on an existing session |
| --zone <name> | Browser zone name |
| --idle-timeout <ms> | Daemon idle timeout for this session |
browser snapshot
Capture the page as a text accessibility tree. This is the primary way AI agents read page content, and it is far more token-efficient than raw HTML. Each interactive element gets a ref (e.g. e1, e2) that you pass to click, type, fill, and other interaction commands.
| Flag | Description |
|---|---|
| --compact | Only interactive elements and their ancestors (70-90% fewer tokens) |
| --interactive | Only interactive elements, as a flat list |
| --depth <n> | Limit tree depth |
| --selector <sel> | Scope snapshot to elements matching a CSS selector |
| --wrap | Wrap output in content boundaries (useful for AI agent prompt injection safety) |
browser screenshot
Capture a PNG screenshot of the current viewport.
| Flag | Description |
|---|---|
| [path] | Where to save the PNG (default: temp directory) |
| --full-page | Capture the full scrollable page, not just the viewport |
| --base64 | Output base64-encoded PNG data instead of saving to a file |
browser click / type / fill
Interact with elements using their snapshot ref values.
| Flag (for type) | Description |
|---|---|
| --append | Append to existing value using key-by-key simulation |
| --submit | Press Enter after typing |
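A sketch of a full interaction loop. The refs e1 and e2 are whatever the snapshot reports for your page, and the <ref> <text> argument order shown here is an assumption:

```shell
brightdata browser open https://example.com/login
brightdata browser snapshot --compact                    # note the refs of the input and button
brightdata browser type e1 "user@example.com" --submit   # assumed <ref> <text> order
brightdata browser screenshot after-login.png
brightdata browser close
```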
browser scroll
Scroll the viewport or scroll an element into view.
| Flag | Description |
|---|---|
| --direction <dir> | up, down, left, right (default: down) |
| --distance <px> | Pixels to scroll (default: 300) |
| --ref <ref> | Scroll this element into view instead of the viewport |
browser get text / get html
Get text or HTML content from the page or a specific element.
browser network / cookies / status
Inspect network requests, cookies, or the browser session status.
browser back / forward / reload
Navigation controls.
browser close
Close a session and stop its daemon.
Element ref values (e.g. e1, e3) are re-assigned on every snapshot call. After navigating or clicking, take a fresh snapshot before using refs again.
brightdata zones
List and inspect Bright Data proxy zones.
brightdata budget
View account balance and per-zone cost/bandwidth. Read-only.
| Subcommand | Description |
|---|---|
| (none) | Quick account balance |
| balance | Balance + pending charges |
| zones | Cost & bandwidth table for all zones |
| zone <name> | Detailed cost & bandwidth for one zone |
| Flag | Description |
|---|---|
| --from <datetime> | Start of date range (e.g. 2024-01-01T00:00:00) |
| --to <datetime> | End of date range |
| --json / --pretty | JSON output |
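For example (the datetimes are illustrative):

```shell
# Cost & bandwidth for all zones over January 2024
brightdata budget zones --from 2024-01-01T00:00:00 --to 2024-01-31T23:59:59 --pretty
```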
brightdata config
View and manage CLI configuration.
| Subcommand | Description |
|---|---|
| (none) | Show all config |
| get <key> | Get a single value |
| set <key> <value> | Set a value |
| Config Key | Description |
|---|---|
| default_zone_unlocker | Default zone for scrape and search |
| default_zone_serp | Override zone for search only |
| default_format | Default output format: markdown or json |
| api_url | Override API base URL |
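For example (my_unlocker is a hypothetical zone name):

```shell
brightdata config set default_zone_unlocker my_unlocker
brightdata config get default_zone_unlocker
brightdata config set default_format markdown
```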
brightdata init
Interactive setup wizard. Walks through authentication, zone selection, and default configuration.
| Flag | Description |
|---|---|
| --skip-auth | Skip the authentication step |
| -k, --api-key <key> | Provide API key directly |
brightdata skill
Install Bright Data AI agent skills into coding agents (Claude Code, Cursor, Copilot, etc.).
| Subcommand | Description |
|---|---|
| add | Interactive picker - choose skills + target agents |
| add <name> | Install a specific skill directly |
| list | List all available skills |
Available skills: search, scrape, data-feeds, bright-data-mcp, bright-data-best-practices
brightdata add mcp
Add the Bright Data MCP server to Claude Code, Cursor, or Codex. Uses the API key stored by brightdata login.
| Flag | Description |
|---|---|
| --agent <agents> | Comma-separated targets: claude-code, cursor, codex |
| --global | Install to the agent’s global config file |
| --project | Install to the current project’s config file |
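For example:

```shell
# Install the MCP server globally for both Claude Code and Cursor
brightdata add mcp --agent claude-code,cursor --global
```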
Config file locations
| Agent | Global path | Project path |
|---|---|---|
| Claude Code | ~/.claude.json | .claude/settings.json |
| Cursor | ~/.cursor/mcp.json | .cursor/mcp.json |
| Codex | $CODEX_HOME/mcp.json or ~/.codex/mcp.json | Not supported |
The server is registered under mcpServers["bright-data"]. Existing config is preserved - only the bright-data key is added or replaced.