Find answers to common questions about Bright Data’s Browser API, including supported languages, debugging tips, and integration guidelines.
Can I choose the country that the Browser API will scrape from?
This is possible, but not recommended. The Browser API utilises Bright Data’s full suite of unblocking capabilities, which automatically chooses the best IP type and location to get you the page you want to access.
If you still need the Browser API to launch from a specific country, add the -country flag after your USER credentials within the Bright Data endpoint, followed by the 2-letter ISO code for that country.
For example, Browser API using Puppeteer in the USA:
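A minimal sketch of what this looks like in Node.js with Puppeteer; the customer ID, zone name, and password below are placeholders to replace with your own credentials:

```javascript
// Sketch: connecting Puppeteer to Browser API pinned to the USA.
// "-country-us" appended after the zone name is the country flag described above.
// const puppeteer = require('puppeteer-core'); // uncomment to actually connect

// Build the websocket endpoint with a 2-letter ISO country code appended.
function buildEndpoint(user, pass, country) {
  return `wss://${user}-country-${country}:${pass}@brd.superproxy.io:9222`;
}

const wsEndpoint = buildEndpoint(
  'brd-customer-<id>-zone-<zone>', // placeholder username
  '<password>',                    // placeholder password
  'us'                             // 2-letter ISO code for the USA
);

// To connect (requires valid credentials):
// const browser = await puppeteer.connect({ browserWSEndpoint: wsEndpoint });
```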
EU region
You can target the entire European Union region in the same manner as “Country” above by adding “eu” after “country” in your request: “-country-eu”. Requests sent using -country-eu will use an IP from one of the countries below, all of which are included automatically within “eu”: AL, AZ, KG, BA, UZ, BI, XK, SM, DE, AT, CH, UK, GB, IE, IM, FR, ES, NL, IT, PT, BE, AD, MT, MC, MA, LU, TN, DZ, GI, LI, SE, DK, FI, NO, AX, IS, GG, JE, EU, GL, VA, FX, FO.
Need Browser API to target a specific geographical radius of proxies?
Check out our Proxy.setLocation feature.
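A sketch of how the Proxy.setLocation command can be sent over a raw CDP session; the parameter names (lat, lon, distance) follow Bright Data’s documentation, but verify them against the current Proxy.setLocation reference:

```javascript
// Sketch: target a geographic radius with Bright Data's custom
// Proxy.setLocation CDP command (assumed parameter names: lat, lon, distance).
async function setProxyLocation(page, { lat, lon, distance }) {
  // Open a raw CDP session so we can send the non-standard command.
  const client = await page.target().createCDPSession();
  // distance is the allowed radius, in km, around the lat/lon center point.
  return client.send('Proxy.setLocation', { lat, lon, distance });
}
```

For example, `await setProxyLocation(page, { lat: 40.73, lon: -73.93, distance: 50 });` would request peers within roughly 50 km of New York City.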
Which programming languages, libraries, and browser automation tools are supported by Browser API?
Bright Data’s Browser API is compatible with a wide variety of programming languages, libraries, and browser automation tools, offering full native support for Node.js, Python, and Java/C# through puppeteer, playwright, and selenium respectively.
Other languages can usually be integrated as well via the third-party libraries listed below, enabling you to incorporate Browser API directly into your existing tech stack.
Language/Platform | puppeteer | playwright | selenium
---|---|---|---
Python | N/A | *playwright-python | *Selenium WebDriver
JS / Node | *Native | *Native | *WebDriverJS
Java | Puppeteer Java | Playwright for Java | *Native
Ruby | Puppeteer-Ruby | playwright-ruby-client | Selenium WebDriver for Ruby
C# | *.NET: Puppeteer Sharp | Playwright for .NET | *Selenium WebDriver for .NET
Go | chromedp | playwright-go | Selenium WebDriver for Go

*Full support
How can I debug what's happening behind the scenes during my Browser API session?
You can monitor a live Browser API session by launching the Browser API Debugger on your local machine. This is similar to setting headless to ‘false’ in Puppeteer.
The Browser API Debugger serves as a valuable resource, enabling you to inspect, analyze, and fine-tune your code alongside Chrome Dev Tools, resulting in better control, visibility, and efficiency.
The Browser API Debugger can be launched via two methods:
Manually via Control Panel
Remotely via your script.
The Browser API Debugger can be easily accessed within your Bright Data Control Panel. Follow these steps:
Within the control panel, go to My Proxies view
Click on your Browser API proxy
Click on the Overview tab
On the right side, click on the “Chrome Dev Tools Debugger” button
Getting Started with the Debugger & Chrome Dev Tools
Open a Browser API Session
Launch the Debugger
Connect with your live browser sessions
To access and launch the debugger session directly from your script, send the CDP command Page.inspect.
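A sketch of sending that command from a Puppeteer script; the response shape (a url field) follows Bright Data’s documentation, so verify it against the current reference:

```javascript
// Sketch: fetch the debugger URL from inside a running script by sending the
// Page.inspect CDP command (Bright Data-specific).
async function getDebuggerUrl(page) {
  const client = await page.target().createCDPSession();
  // Standard CDP: find the id of the page's main frame.
  const { frameTree: { frame } } = await client.send('Page.getFrameTree');
  // Bright Data-specific: ask for the devtools inspect URL of that frame.
  const { url } = await client.send('Page.inspect', { frameId: frame.id });
  return url; // open this URL in a local Chrome to watch the live session
}
```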
Leveraging Chrome Dev Tools
How can I automatically launch devtools locally to view my live browser session?
If you would like to automatically launch devtools on every session to view your live browser session, you can integrate the following code snippet:
How can I get a screenshot of what's happening in the browser?
You can easily trigger a screenshot of the browser at any time by adding the following to your code:
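For Node.js, this is the standard Puppeteer screenshot call; it works the same against Browser API’s remote browser as against a local one:

```javascript
// Capture the current page to a PNG file.
async function capture(page, path) {
  // fullPage captures the whole scrollable page rather than just the viewport.
  await page.screenshot({ path, fullPage: true });
}
```

For example, `await capture(page, 'screenshot.png');` at any point in your script.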
To take screenshots in Python and C#, see here.
See our full section on opening devtools automatically.
Why does the initial navigation for certain pages take longer than others?
What are the most Common Error codes?
Error Code | Meaning | What can you do about it? |
Unexpected server response: 407 | An issue with the remote browser’s port | Check your remote browser’s port. The correct port for Browser API is 9222. |
Unexpected server response: 403 | Authentication Error | Check authentication credentials (username, password) and check that you are using the correct “Browser API” zone from Bright Data control panel |
Unexpected server response: 503 | Service Unavailable | We are likely scaling browsers right now to meet demand. Try to reconnect in 1 minute. |
I can't seem to establish a connection with Browser API, do I have a connection issue?
If you’re experiencing connection issues, you can test your local Browser API connection with a simple curl to the following endpoint:
https://brd.superproxy.io:9222
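A sketch of that check; the credentials are placeholders to replace with your Browser API zone’s username and password:

```shell
# Quick reachability check against the Browser API endpoint.
ENDPOINT="https://brd.superproxy.io:9222"
AUTH="brd-customer-<id>-zone-<zone>:<password>"
# Any HTTP response at all means the endpoint is reachable from your machine;
# a timeout points at a local network or firewall problem. -m caps the wait.
curl -sv -m 10 -u "$AUTH" "$ENDPOINT" || true
```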
How do I integrate Browser API with .NET Puppeteer Sharp?
Integrating the Browser API with C# requires patching the PuppeteerSharp library to add support for websocket authentication. This can be done as follows:
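A sketch of the patch, based on Bright Data’s documented PuppeteerSharp workaround: PuppeteerSharp does not send the user:password portion of a wss:// URL, so you supply a custom WebSocketFactory that adds the Authorization header itself. Verify the exact API surface against the PuppeteerSharp version you use.

```csharp
using System;
using System.Net.WebSockets;
using System.Text;
using PuppeteerSharp;

// Placeholder credentials; replace with your Browser API zone's username/password.
var wsEndpoint = "wss://brd-customer-<id>-zone-<zone>:<password>@brd.superproxy.io:9222";

var options = new ConnectOptions
{
    BrowserWSEndpoint = wsEndpoint,
    // PuppeteerSharp ignores credentials embedded in the URL, so attach them
    // as an HTTP Basic auth header on the websocket handshake ourselves.
    WebSocketFactory = async (uri, socketOptions, cancellationToken) =>
    {
        var socket = new ClientWebSocket();
        var authBytes = Encoding.UTF8.GetBytes(new Uri(wsEndpoint).UserInfo);
        socket.Options.SetRequestHeader(
            "Authorization", "Basic " + Convert.ToBase64String(authBytes));
        await socket.ConnectAsync(uri, cancellationToken);
        return socket;
    },
};

var browser = await Puppeteer.ConnectAsync(options);
```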
How does the Browser API pricing work?
Browser API pricing is simple: you only pay for gigabytes of traffic that you transferred through the Browser API.
There is no cost for instances or time using the Browser API - only traffic.
Traffic is billed at the same rate regardless of the country you use. Because you pay by traffic, you will likely want to minimize it.
The only exception to this is premium domains, which cost more per gigabyte, because Bright Data needs to invest a significantly higher amount of effort and resources to unblock. You can find more information about premium domains in your Browser API configuration pages.
What are some tips for reducing bandwidth while scraping with Browser API?
When optimizing your web scraping projects, conserving bandwidth is key.
Explore our tips and guidelines below on effective bandwidth-saving techniques that you can utilize within your script to ensure efficient and resource-friendly scraping.
A typical inefficiency when loading a browser is the unnecessary downloading of media content, such as images and videos, from your targeted domains. Learn below how to easily avoid this by excluding them right from within your script.
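In Puppeteer this can be sketched with standard request interception; apply it cautiously for the reasons discussed in this section:

```javascript
// Sketch: skip image, video, and font downloads via request interception.
async function blockMedia(page) {
  await page.setRequestInterception(true);
  page.on('request', (req) => {
    // Abort bandwidth-heavy resource types; let everything else through.
    if (['image', 'media', 'font'].includes(req.resourceType())) {
      req.abort();
    } else {
      req.continue();
    }
  });
}
```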
Given that anti-bot systems expect specific resources to load for particular domains, approach resource-blocking cautiously, as it can directly impact Browser API’s ability to successfully load your target domains. If you encounter issues after applying resource blocks, first revert your blocking logic and confirm the issues persist before contacting our support team.
Blocking media type requests alone may not always reduce your bandwidth usage. Some websites have ad spaces that continuously refresh ads, and others use live bidding mechanisms that constantly search for new ads if one fails to load properly.
In such cases, it’s important to identify and block these specific network requests. Doing so will decrease the number of network requests and, consequently, lower your bandwidth usage.
One common inefficiency in scraping jobs is the repeated downloading of the same page during a single session.
Leveraging cached pages - a version of a previously scraped page - can significantly increase your scraping efficiency, as it can be used to avoid repeated network requests to the same domain. Not only does it save on bandwidth by avoiding redundant fetches, but it also ensures faster and more responsive interactions with the preloaded content.
A single Browser API session can persist for up to 30 minutes. This duration allows you ample opportunity to revisit and re-navigate the page as needed within the same session, eliminating the need for redundant sessions on identical pages during your scraping job.
Example: In a multi-step web scraping workflow, you often gather links from a page and then dive into each link for more detailed data extraction.
You’ll often need to revisit the initial page for cross-referencing or validation. By leveraging caching, these revisits don’t trigger new network requests as the data is simply loaded from the cache.
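One simple way to sketch this caching pattern is an in-memory map keyed by URL, so revisits during a session read the already-fetched HTML instead of triggering a new navigation:

```javascript
// In-memory cache of page HTML, keyed by URL, for a single scraping session.
const pageCache = new Map();

async function getHtml(page, url) {
  if (pageCache.has(url)) return pageCache.get(url); // cache hit: no navigation
  await page.goto(url, { waitUntil: 'domcontentloaded' });
  const html = await page.content();
  pageCache.set(url, html);
  return html;
}
```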
Is password typing allowed with Browser API?
Bright Data is committed to collecting only publicly available data. To uphold this commitment, Browser API is configured by default to prevent any attempts to log in to accounts by disabling password entry. This restriction helps ensure that no non-public data—including any data accessible only behind a login—is scraped. For your reference, please review our Acceptable Use Policy at https://brightdata.com/acceptable-use-policy .
In certain cases, it may be possible to override this default block. If you require an exception, you must first complete our Know-Your-Customer (KYC) process available at https://brightdata.com/cp/kyc. Once you have completed the process, please contact our compliance department directly at compliance@brightdata.com to submit your request (you could also request the permissions during your KYC process).
How can I keep the same IP address in Browser API sessions?
The Browser API supports maintaining the same IP address across multiple browser sessions using a custom CDP function. This allows you to reuse the same proxy peer for consecutive requests by associating them with the same session ID.
For implementation details and sample code, see our documentation on Session Persistence.