
Global options

These flags work with any command:
| Flag | Description |
| --- | --- |
| `-k, --api-key <key>` | Override the API key for this request |
| `--timing` | Show request timing info |
| `-v, --version` | Show the CLI version |
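For instance, the global flags combine with any command (the URL here is a placeholder, and the examples assume the flags behave as listed above):

```shell
# One-off request with a different API key, printing timing info
brightdata scrape https://example.com --api-key <other-key> --timing

# Check the installed CLI version
brightdata --version
```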

brightdata login

Authenticate with Bright Data. Opens the browser for OAuth by default.
| Flag | Description |
| --- | --- |
| `-k, --api-key <key>` | Use an API key directly (skips the browser) |
| `-c, --customer-id <id>` | Bright Data account ID (optional) |
| `-d, --device` | Use the device flow for SSH/headless environments |
brightdata login                        # Browser OAuth (recommended)
brightdata login --device               # Headless/SSH environments
brightdata login --api-key <key>        # Direct API key
On first login, the CLI automatically creates cli_unlocker and cli_browser proxy zones and sets sensible defaults.

brightdata logout

Clear stored credentials.
brightdata logout

brightdata scrape <url>

Scrape any URL using Bright Data’s Web Unlocker. Handles CAPTCHAs, JavaScript rendering, and anti-bot protections automatically.
| Flag | Description |
| --- | --- |
| `-f, --format <fmt>` | `markdown` (default), `html`, `screenshot`, `json` |
| `--country <code>` | ISO country code for geo-targeting (e.g. `us`, `de`, `jp`) |
| `--zone <name>` | Web Unlocker zone name |
| `--mobile` | Use a mobile user agent |
| `--async` | Submit asynchronously and return a snapshot ID |
| `-o, --output <path>` | Write output to a file |
| `--json` | Force JSON output |
| `--pretty` | Pretty-print JSON output |
brightdata scrape https://news.ycombinator.com
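A few more invocations combining the flags above (URLs are placeholders):

```shell
# Render as HTML instead of markdown, geo-targeted to Germany
brightdata scrape https://example.com --format html --country de

# Capture a screenshot and write it to a file
brightdata scrape https://example.com --format screenshot -o page.png

# Submit asynchronously; the returned snapshot ID can be checked with `brightdata status`
brightdata scrape https://example.com --async
```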

brightdata search <query>

Search Google, Bing, or Yandex via Bright Data’s SERP API. Google returns structured JSON with organic results, ads, People Also Ask, and related searches. Bing and Yandex return markdown by default.
| Flag | Description |
| --- | --- |
| `--engine <name>` | `google` (default), `bing`, `yandex` |
| `--country <code>` | Localized results (e.g. `us`, `de`) |
| `--language <code>` | Language code (e.g. `en`, `fr`) |
| `--page <n>` | Page number, 0-indexed (default: 0) |
| `--type <type>` | `web` (default), `news`, `images`, `shopping` |
| `--device <type>` | `desktop`, `mobile` |
| `--zone <name>` | SERP zone name |
| `-o, --output <path>` | Write output to a file |
| `--json` | Force JSON output |
| `--pretty` | Pretty-print JSON output |
brightdata search "typescript best practices"
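The flags above can be mixed freely; for example (queries are illustrative):

```shell
# Bing results, localized to Germany in German
brightdata search "typescript best practices" --engine bing --country de --language de

# Second page of Google News results, pretty-printed JSON
brightdata search "web scraping" --type news --page 1 --pretty
```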

brightdata discover <query>

AI-powered web discovery. Submit a query with optional intent, and Bright Data finds, ranks, and optionally extracts full-page content for each result.
| Flag | Description |
| --- | --- |
| `--intent <text>` | AI intent used to evaluate and rank result relevance |
| `--country <code>` | ISO country code (default: `US`) |
| `--city <name>` | City for localized results (e.g. `"New York"`) |
| `--language <code>` | Language code (default: `en`) |
| `--num-results <n>` | Number of results to return |
| `--filter-keywords <words>` | Comma-separated keywords that must appear in results |
| `--include-content` | Include full page content (markdown) in each result |
| `--no-remove-duplicates` | Keep duplicate results |
| `--start-date <date>` | Only content updated from this date (YYYY-MM-DD) |
| `--end-date <date>` | Only content updated until this date (YYYY-MM-DD) |
| `--timeout <seconds>` | Polling timeout (default: 600) |
| `-o, --output <path>` | Write output to a file |
| `--json` / `--pretty` | JSON output (raw / indented) |
brightdata discover "AI trends"
For best results with --intent, use a structured formula: describe your persona, what to prioritize, the depth of analysis, and what to exclude. See the Discover API reference for detailed guidance.
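A sketch of that structured formula in practice (the query and intent text are hypothetical):

```shell
# Persona + priorities + exclusions packed into --intent, scoped to recent content
brightdata discover "open-source vector databases" \
  --intent "As a backend engineer, prioritize benchmarks and production case studies; exclude marketing pages" \
  --num-results 10 \
  --include-content \
  --start-date 2024-01-01
```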

brightdata pipelines <type> [params...] [options]

Extract structured data from 40+ platforms. Triggers an async collection job, polls until results are ready, and returns the data.
| Flag | Description |
| --- | --- |
| `--format <fmt>` | `json` (default), `csv`, `ndjson`, `jsonl` |
| `--timeout <seconds>` | Polling timeout (default: 600) |
| `-o, --output <path>` | Write output to a file |
| `--json` | Force JSON output |
| `--pretty` | Pretty-print JSON output |
# List all available pipeline types
brightdata pipelines list
brightdata pipelines linkedin_person_profile "https://linkedin.com/in/username"
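Pipeline types differ in their positional parameters (see the tables below); for example (URLs and keywords are placeholders):

```shell
# amazon_product_search takes two positional parameters: <keyword> <domain_url>
brightdata pipelines amazon_product_search "mechanical keyboard" "https://www.amazon.com"

# Export a collection as CSV with a shorter polling timeout
brightdata pipelines linkedin_company_profile "https://linkedin.com/company/example" \
  --format csv --timeout 300 -o company.csv
```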

Supported platforms

| Type | Platform | Parameters |
| --- | --- | --- |
| `amazon_product` | Amazon product page | `<url>` |
| `amazon_product_reviews` | Amazon reviews | `<url>` |
| `amazon_product_search` | Amazon search results | `<keyword> <domain_url>` |
| `walmart_product` | Walmart product page | `<url>` |
| `walmart_seller` | Walmart seller profile | `<url>` |
| `ebay_product` | eBay listing | `<url>` |
| `bestbuy_products` | Best Buy | `<url>` |
| `etsy_products` | Etsy | `<url>` |
| `homedepot_products` | Home Depot | `<url>` |
| `zara_products` | Zara | `<url>` |
| `google_shopping` | Google Shopping | `<url>` |
| Type | Platform | Parameters |
| --- | --- | --- |
| `linkedin_person_profile` | LinkedIn person | `<url>` |
| `linkedin_company_profile` | LinkedIn company | `<url>` |
| `linkedin_job_listings` | LinkedIn jobs | `<url>` |
| `linkedin_posts` | LinkedIn posts | `<url>` |
| `linkedin_people_search` | LinkedIn people search | `<url> <first_name> <last_name>` |
| `crunchbase_company` | Crunchbase | `<url>` |
| `zoominfo_company_profile` | ZoomInfo | `<url>` |
| Type | Platform | Parameters |
| --- | --- | --- |
| `instagram_profiles` | Instagram profiles | `<url>` |
| `instagram_posts` | Instagram posts | `<url>` |
| `instagram_reels` | Instagram reels | `<url>` |
| `instagram_comments` | Instagram comments | `<url>` |
| `facebook_posts` | Facebook posts | `<url>` |
| `facebook_marketplace_listings` | Facebook Marketplace | `<url>` |
| `facebook_company_reviews` | Facebook reviews | `<url> [num_reviews]` |
| `facebook_events` | Facebook events | `<url>` |
| `tiktok_profiles` | TikTok profiles | `<url>` |
| `tiktok_posts` | TikTok posts | `<url>` |
| `tiktok_shop` | TikTok shop | `<url>` |
| `tiktok_comments` | TikTok comments | `<url>` |
| `x_posts` | X (Twitter) posts | `<url>` |
| `youtube_profiles` | YouTube channels | `<url>` |
| `youtube_videos` | YouTube videos | `<url>` |
| `youtube_comments` | YouTube comments | `<url> [num_comments]` |
| `reddit_posts` | Reddit posts | `<url>` |
| Type | Platform | Parameters |
| --- | --- | --- |
| `google_maps_reviews` | Google Maps reviews | `<url> [days_limit]` |
| `google_play_store` | Google Play | `<url>` |
| `apple_app_store` | Apple App Store | `<url>` |
| `github_repository_file` | GitHub repository files | `<url>` |
| `yahoo_finance_business` | Yahoo Finance | `<url>` |
| `zillow_properties_listing` | Zillow | `<url>` |
| `booking_hotel_listings` | Booking.com | `<url>` |
Run brightdata pipelines list in your terminal to see all available types at any time.

brightdata status <job-id>

Check the status of an async snapshot job (from --async scrapes or pipeline collections).
| Flag | Description |
| --- | --- |
| `--wait` | Poll until the job completes |
| `--timeout <seconds>` | Polling timeout (default: 600) |
| `-o, --output <path>` | Write output to a file |
| `--json` / `--pretty` | JSON output |
brightdata status s_abc123xyz
brightdata status s_abc123xyz --wait --pretty
brightdata status s_abc123xyz --wait --timeout 300

brightdata browser

Control a real browser session powered by Bright Data’s Scraping Browser. A lightweight local daemon holds the browser connection open between commands, giving you persistent state without reconnecting on every call.
brightdata browser <subcommand> [options]

Global flags

These flags work with every browser subcommand:
| Flag | Description |
| --- | --- |
| `--session <name>` | Session name for running multiple isolated sessions in parallel (default: `default`) |
| `--country <code>` | Geo-target by ISO country code. On `open`, changing the country reconnects the browser |
| `--zone <name>` | Scraping Browser zone (default: `cli_browser`) |
| `--timeout <ms>` | IPC command timeout in milliseconds (default: 30000) |
| `--idle-timeout <ms>` | Daemon auto-shutdown after idle (default: 600000 / 10 min) |
| `--json` / `--pretty` | JSON output |
| `-o, --output <path>` | Write output to a file |

Subcommands

Navigate to a URL. Starts the daemon and browser session automatically if not already running.
brightdata browser open <url>
brightdata browser open https://amazon.com --country us --session shop
| Flag | Description |
| --- | --- |
| `--country <code>` | Geo-targeting. Reconnects the browser if the country changes on an existing session |
| `--zone <name>` | Browser zone name |
| `--idle-timeout <ms>` | Daemon idle timeout for this session |
Capture the page as a text accessibility tree. This is the primary way AI agents read page content, and it is far more token-efficient than raw HTML. Each interactive element gets a ref (e.g. `e1`, `e2`) that you pass to `click`, `type`, `fill`, and other interaction commands.
brightdata browser snapshot
brightdata browser snapshot --compact          # Interactive elements + ancestors only
brightdata browser snapshot --interactive      # Interactive elements as a flat list
brightdata browser snapshot --depth 3          # Limit tree depth
brightdata browser snapshot --selector "main"  # Scope to a CSS subtree
Example output:
Page: Example Domain
URL: https://example.com

- heading "Example Domain" [level=1]
- paragraph "This domain is for use in illustrative examples."
- link "More information..." [ref=e1]
| Flag | Description |
| --- | --- |
| `--compact` | Only interactive elements and their ancestors (70-90% fewer tokens) |
| `--interactive` | Only interactive elements, as a flat list |
| `--depth <n>` | Limit tree depth |
| `--selector <sel>` | Scope the snapshot to elements matching a CSS selector |
| `--wrap` | Wrap output in content boundaries (useful for AI-agent prompt-injection safety) |
Capture a PNG screenshot of the current viewport.
brightdata browser screenshot
brightdata browser screenshot ./result.png
brightdata browser screenshot --full-page -o page.png
brightdata browser screenshot --base64
| Flag | Description |
| --- | --- |
| `[path]` | Where to save the PNG (default: temp directory) |
| `--full-page` | Capture the full scrollable page, not just the viewport |
| `--base64` | Output base64-encoded PNG data instead of saving to a file |
Interact with elements using their snapshot ref values.
# Click an element
brightdata browser click e3

# Type text into a field (clears first by default)
brightdata browser type e5 "search query"
brightdata browser type e5 " more text" --append    # Append to existing value
brightdata browser type e5 "search query" --submit  # Press Enter after typing

# Fill a form field directly (no keyboard simulation)
brightdata browser fill e2 "user@example.com"

# Select a dropdown option by visible label
brightdata browser select e4 "United States"

# Check / uncheck a checkbox or radio button
brightdata browser check e7
brightdata browser uncheck e7

# Hover over an element
brightdata browser hover e2
| Flag (for `type`) | Description |
| --- | --- |
| `--append` | Append to the existing value using key-by-key simulation |
| `--submit` | Press Enter after typing |
Scroll the viewport or scroll an element into view.
brightdata browser scroll                          # Scroll down 300px (default)
brightdata browser scroll --direction up
brightdata browser scroll --direction down --distance 600
brightdata browser scroll --ref e10                # Scroll element into view
| Flag | Description |
| --- | --- |
| `--direction <dir>` | `up`, `down`, `left`, `right` (default: `down`) |
| `--distance <px>` | Pixels to scroll (default: 300) |
| `--ref <ref>` | Scroll this element into view instead of the viewport |
Get text or HTML content from the page or a specific element.
# Text content
brightdata browser get text            # Full page text
brightdata browser get text "h1"       # Text of the first h1
brightdata browser get text "#price"   # Text inside #price

# HTML content
brightdata browser get html              # Full page outer HTML
brightdata browser get html ".product"   # innerHTML of .product
Inspect session state.
# HTTP requests captured since last navigation
brightdata browser network

# Cookies for the active session
brightdata browser cookies

# Current session state
brightdata browser status
brightdata browser status --session shop --pretty

# List all active sessions
brightdata browser sessions
Navigation controls.
brightdata browser back
brightdata browser forward
brightdata browser reload
Close a session and stop its daemon.
brightdata browser close                    # Close the default session
brightdata browser close --session shop     # Close a named session
brightdata browser close --all              # Close all active sessions
Element ref values (e.g. e1, e3) are re-assigned on every snapshot call. After navigating or clicking, take a fresh snapshot before using refs again.

brightdata zones

List and inspect Bright Data proxy zones.
brightdata zones                        # List all active zones
brightdata zones info <name>            # Full details for a zone
brightdata zones --json -o zones.json   # Export as JSON
brightdata zones info my_zone --pretty  # Pretty-print zone info

brightdata budget

View account balance and per-zone cost/bandwidth. Read-only.
| Subcommand | Description |
| --- | --- |
| (none) | Quick account balance |
| `balance` | Balance plus pending charges |
| `zones` | Cost and bandwidth table for all zones |
| `zone <name>` | Detailed cost and bandwidth for one zone |
| Flag | Description |
| --- | --- |
| `--from <datetime>` | Start of the date range (e.g. `2024-01-01T00:00:00`) |
| `--to <datetime>` | End of the date range |
| `--json` / `--pretty` | JSON output |
brightdata budget
brightdata budget balance
brightdata budget zones
brightdata budget zone my_zone
brightdata budget zones --from 2024-01-01T00:00:00 --to 2024-02-01T00:00:00

brightdata config

View and manage CLI configuration.
| Subcommand | Description |
| --- | --- |
| (none) | Show all config |
| `get <key>` | Get a single value |
| `set <key> <value>` | Set a value |
| Config key | Description |
| --- | --- |
| `default_zone_unlocker` | Default zone for `scrape` and `search` |
| `default_zone_serp` | Override zone for `search` only |
| `default_format` | Default output format: `markdown` or `json` |
| `api_url` | Override the API base URL |
brightdata config
brightdata config set default_zone_unlocker my_zone
brightdata config set default_format json
brightdata config get default_zone_unlocker

brightdata init

Interactive setup wizard. Walks through authentication, zone selection, and default configuration.
| Flag | Description |
| --- | --- |
| `--skip-auth` | Skip the authentication step |
| `-k, --api-key <key>` | Provide an API key directly |
brightdata init

brightdata skill

Install Bright Data AI agent skills into coding agents (Claude Code, Cursor, Copilot, etc.).
| Subcommand | Description |
| --- | --- |
| `add` | Interactive picker: choose skills and target agents |
| `add <name>` | Install a specific skill directly |
| `list` | List all available skills |
Available skills: search, scrape, data-feeds, bright-data-mcp, bright-data-best-practices
brightdata skill add              # Interactive picker
brightdata skill add scrape       # Direct install
brightdata skill list             # See what's available

brightdata add mcp

Add the Bright Data MCP server to Claude Code, Cursor, or Codex. Uses the API key stored by brightdata login.
brightdata add mcp                               # Interactive agent + scope prompts
brightdata add mcp --agent claude-code --global
brightdata add mcp --agent claude-code,cursor --project
brightdata add mcp --agent codex --global
| Flag | Description |
| --- | --- |
| `--agent <agents>` | Comma-separated targets: `claude-code`, `cursor`, `codex` |
| `--global` | Install to the agent's global config file |
| `--project` | Install to the current project's config file |

Config file locations

| Agent | Global path | Project path |
| --- | --- | --- |
| Claude Code | `~/.claude.json` | `.claude/settings.json` |
| Cursor | `~/.cursor/mcp.json` | `.cursor/mcp.json` |
| Codex | `$CODEX_HOME/mcp.json` or `~/.codex/mcp.json` | Not supported |
The command writes the MCP server entry under mcpServers["bright-data"]. Existing config is preserved - only the bright-data key is added or replaced.
brightdata add mcp uses the API key stored by brightdata login. It does not read BRIGHTDATA_API_KEY or the --api-key flag, so run brightdata login first.