When navigating a page with Scraping Browser, our integrated CAPTCHA solver automatically solves all CAPTCHAs by default. You can monitor this auto-solving process in your code with the following custom CDP events.

Once a CAPTCHA is solved, if there is a form to submit, it will be submitted by default.
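
For example, with Playwright you can open a CDP session on the page and listen for the solver's events while navigating. This is a minimal sketch: the event names used below (Captcha.detected, Captcha.solveFinished, Captcha.solveFailed) and their payloads are assumptions for illustration only; confirm them against the CDP events reference for your zone.

const playwright = require('playwright');

(async () => {
    // Connect to Scraping Browser over CDP (replace USER:PASS with your zone credentials).
    const browser = await playwright.chromium.connectOverCDP(
        'wss://USER:PASS@brd.superproxy.io:9222');
    const page = await browser.newPage();

    // Open a raw CDP session so the solver's custom events can be observed.
    const client = await page.context().newCDPSession(page);
    client.on('Captcha.detected', (e) => console.log('CAPTCHA detected:', e));        // assumed event name
    client.on('Captcha.solveFinished', (e) => console.log('CAPTCHA solved:', e));     // assumed event name
    client.on('Captcha.solveFailed', (e) => console.log('CAPTCHA solve failed:', e)); // assumed event name

    await page.goto('https://example.com', { timeout: 2 * 60 * 1000 });
    await browser.close();
})();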

CAPTCHA Solver - Automatic Solve

CAPTCHA Solver - Manual Control

If you would like to manually configure or fully disable our default CAPTCHA solver, and instead invoke the solver yourself or solve CAPTCHAs on your own, see the following CDP commands and functionality.
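
As a rough sketch, continuing from the connection snippet above (with page and client already created), invoking the solver on demand might look like this. The command name Captcha.solve and its detectTimeout option are assumptions for illustration; check the CDP commands reference for the exact signature.

// Navigate first, then trigger the solver manually over the CDP session.
await page.goto('https://example.com', { timeout: 2 * 60 * 1000 });
// 'Captcha.solve' and 'detectTimeout' are assumed names - verify them against the CDP reference.
const result = await client.send('Captcha.solve', { detectTimeout: 30 * 1000 });
console.log('Manual solve result:', result);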

Disable CAPTCHA Solving

By default, as part of our full proxy unblocking solution, Scraping Browser also solves any CAPTCHAs encountered while fulfilling your request.

When the CAPTCHA solver is disabled, our unlocker algorithm still takes care of the entire ever-changing flow of finding the best proxy network, customizing headers, fingerprinting, and more, but intentionally does not solve CAPTCHAs automatically. This gives your team a lightweight, streamlined solution that broadens the scope of your potential scraping opportunities. A sketch of disabling the solver over CDP follows the list below.

Best for:

  • Scraping data from websites without getting blocked
  • Emulating real-user web behavior
  • Teams that don’t have an unblocking infrastructure in-house and don’t want their scraper to solve CAPTCHAs automatically
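
Continuing from the connection snippet above, disabling automatic solving for the session and handling CAPTCHAs yourself might look roughly like this. The Captcha.setAutoSolve command and its autoSolve parameter are assumptions for illustration; confirm the exact command in the CDP reference before relying on it.

// Turn automatic CAPTCHA solving off for this session ('Captcha.setAutoSolve' is an assumed name).
await client.send('Captcha.setAutoSolve', { autoSolve: false });
await page.goto('https://example.com', { timeout: 2 * 60 * 1000 });
// From here on, any CAPTCHA encountered on the page is yours to detect and solve.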

Code Examples

Examples of Bright Data's Scraping Browser usage with common browser-control libraries.

Please make sure to install the required libraries before continuing.
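
For the Node.js example below, that means installing Playwright, for example:

npm install playwright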

Simple scraping of targeted page

Select your preferred tech stack. The example below uses Node.js with Playwright.

#!/usr/bin/env node
const playwright = require('playwright');
const {
    AUTH = 'USER:PASS',
    TARGET_URL = 'https://example.com',
} = process.env;

async function scrape(url = TARGET_URL) {
    if (AUTH === 'USER:PASS') {
        throw new Error(`Provide Scraping Browser credentials in the AUTH`
            + ` environment variable or update the script.`);
    }
    console.log(`Connecting to Browser...`);
    const endpointURL = `wss://${AUTH}@brd.superproxy.io:9222`;
    const browser = await playwright.chromium.connectOverCDP(endpointURL);
    try {
        console.log(`Connected! Navigating to ${url}...`);
        const page = await browser.newPage();
        await page.goto(url, { timeout: 2 * 60 * 1000 });
        console.log(`Navigated! Scraping page content...`);
        const data = await page.content();
        console.log(`Scraped! Data: ${data}`);
    } finally {
        await browser.close();
    }
}

if (require.main === module) {
    scrape().catch(error => {
        console.error(error.stack || error.message || error);
        process.exit(1);
    });
}
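
To run the script, provide your Scraping Browser credentials in the AUTH environment variable (USER:PASS is your zone's username and password), for example: AUTH='USER:PASS' node scrape.js.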