FAQ: MCP
Find answers to common questions about integrating, configuring, and using Bright Data’s MCP server.
Bright Data MCP (Model Context Protocol) is a server that enables LLMs, AI agents, and applications to access, discover, and extract web data in real-time. It allows MCP clients like Claude Desktop, Cursor, and Windsurf to search the web, navigate websites, take actions, and retrieve data without getting blocked, making it perfect for web scraping tasks.
The MCP server provides advanced capabilities, including:
- Bypassing geo-restrictions to access content regardless of location
- Navigating websites with bot detection protection using Web Unlocker technology
- Structured data extraction from platforms such as Amazon, LinkedIn, Instagram, and many others
- Remote browser automation for complex web interactions
- Access to a global IP network to avoid blocking or rate limiting
You can get your API token from the user settings page in your Bright Data account.
Make sure you have an account on brightdata.com - new users get free credit for testing, and pay-as-you-go options are available after that.
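Once you have the token, a typical client configuration (Claude Desktop, Cursor, or similar) looks roughly like the following; this is a sketch that assumes the npx-based launch of the `@brightdata/mcp` package referenced later on this page:

```json
{
  "mcpServers": {
    "Bright Data": {
      "command": "npx",
      "args": ["@brightdata/mcp"],
      "env": {
        "API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```

Most clients need to be restarted before a newly added server appears in their tool list.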
To enable browser control tools:
- Visit your Bright Data control panel at brightdata.com/cp/zones
- Create a new ‘Browser API’ zone
- Once created, copy the authentication string from the Browser API overview tab
- The authentication string will be formatted like: `brd-customer-[your-customer-ID]-zone-[your-zone-ID]:[your-password]`
- Add this authentication string to your MCP configuration as the `BROWSER_AUTH` environment variable, as shown in the example below
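For example, building on the configuration sketch above (all values are placeholders):

```json
{
  "mcpServers": {
    "Bright Data": {
      "command": "npx",
      "args": ["@brightdata/mcp"],
      "env": {
        "API_TOKEN": "<your-api-token>",
        "BROWSER_AUTH": "brd-customer-[your-customer-ID]-zone-[your-zone-ID]:[your-password]"
      }
    }
  }
}
```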
Bright Data MCP offers a comprehensive set of tools, including:
Search and basic scraping:
- `search_engine`: Scrape search results from Google, Bing, or Yandex
- `scrape_as_markdown`: Scrape a webpage and get results in Markdown format
- `scrape_as_html`: Scrape a webpage and get results in HTML format
- `session_stats`: View tool usage during the current session
Structured data extraction:
- Amazon: `web_data_amazon_product`, `web_data_amazon_product_reviews`
- LinkedIn: `web_data_linkedin_person_profile`, `web_data_linkedin_company_profile`
- ZoomInfo: `web_data_zoominfo_company_profile`
- Instagram: `web_data_instagram_profiles`, `web_data_instagram_posts`, `web_data_instagram_reels`, `web_data_instagram_comments`
- Facebook: `web_data_facebook_posts`, `web_data_facebook_marketplace_listings`, `web_data_facebook_company_reviews`
- Various others: `web_data_x_posts`, `web_data_zillow_properties_listing`, `web_data_booking_hotel_listings`, `web_data_youtube_videos`
Browser automation:
- `scraping_browser_navigate`: Navigate to a URL
- `scraping_browser_go_back` / `scraping_browser_go_forward`: Navigate browser history
- `scraping_browser_click`: Click on an element
- `scraping_browser_links`: Get all links on the current page
- `scraping_browser_type`: Type text into an element
- `scraping_browser_wait_for`: Wait for an element to appear
- `scraping_browser_screenshot`: Take a screenshot
- `scraping_browser_get_html` / `scraping_browser_get_text`: Get page content
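To illustrate how these tools are consumed, here is a minimal sketch using the TypeScript MCP SDK (`@modelcontextprotocol/sdk`). The `query` argument name for `search_engine` is an assumption for illustration:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Launch the Bright Data MCP server over stdio and connect a client to it.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["@brightdata/mcp"],
    env: { ...process.env, API_TOKEN: "<your-api-token>" } as Record<string, string>,
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Call the search_engine tool; the argument name is assumed here.
  const result = await client.callTool({
    name: "search_engine",
    arguments: { query: "best web scraping practices" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```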
Some web data tools can take longer to execute, especially when dealing with complex websites. To ensure your agent can consume the data:
- Set a high enough timeout in your agent settings (180 seconds is recommended for 99% of requests; see the sketch after this list)
- For particularly slow sites, you may need to increase this value further
- Use the specialized `web_data_*` tools when available, as they're often faster than general scraping
- For browser automation sequences, keep operations close together in time
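If your agent framework exposes per-request timeouts, here is a rough sketch using the TypeScript MCP SDK (reusing the connected `client` from the earlier example); the `url` argument for the tool is an assumption:

```typescript
import { CallToolResultSchema } from "@modelcontextprotocol/sdk/types.js";

// Allow up to 180 seconds for a slow structured-extraction tool,
// instead of the SDK's shorter default request timeout.
const product = await client.callTool(
  { name: "web_data_amazon_product", arguments: { url: "https://www.amazon.com/dp/EXAMPLE" } },
  CallToolResultSchema,
  { timeout: 180_000 },
);
```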
Always treat scraped web content as untrusted data. Never use raw scraped content directly in LLM prompts to avoid potential prompt injection risks.
Instead:
- Filter and validate all web data before passing it to the LLM
- Use structured data extraction rather than raw text when possible
- Be cautious with executing JavaScript from scraped content
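As a concrete example, a hypothetical pre-processing helper (the tag-stripping rules, length cap, and delimiters are illustrative choices, not part of the MCP server):

```typescript
// Hypothetical helper: reduce scraped content to plain, length-capped text and
// wrap it in delimiters so the model treats it as data rather than instructions.
function sanitizeScraped(raw: string, maxLen = 20_000): string {
  const noScripts = raw.replace(/<script[\s\S]*?<\/script>/gi, "");
  const noTags = noScripts.replace(/<[^>]+>/g, " ");
  const collapsed = noTags.replace(/\s+/g, " ").trim().slice(0, maxLen);
  return `<untrusted_web_content>\n${collapsed}\n</untrusted_web_content>`;
}
```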
This error occurs when your system cannot find the `npx` command. To fix it:
- Find your Node.js path:
  - On macOS: Run `which node` in Terminal
  - On Windows: Run `where node` in Command Prompt
- Update your MCP configuration to use the full path instead of `npx` (replace `/usr/local/bin/node` below with your actual Node.js path):
```json
{
  "mcpServers": {
    "Bright Data": {
      "command": "/usr/local/bin/node",
      "args": ["node_modules/@brightdata/mcp/index.js"],
      "env": {
        "API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```
You can use the `session_stats` tool to check your usage during the current session. For comprehensive usage tracking and billing information, log in to your Bright Data account dashboard.
The `session_stats` tool will show you information about:
- Number of requests made during the current session
- Tools used and their frequency
Yes, you can try Bright Data MCP using the playground on Smithery, which provides an easy way to explore its capabilities without any local setup. Just sign in and start experimenting with web data collection!