This guide walks through building a custom web scraper in the Bright Data Scraper Studio IDE from scratch. You will write interaction code that navigates the target site, parser code that extracts structured fields, and then save the scraper to production and configure delivery. By the end, you will have a runnable scraper you can trigger by API, manually, or on a schedule. Time to complete: about 15 to 30 minutes per scraper, depending on site complexity.

Prerequisites

  • An active Bright Data account with access to Scraper Studio
  • Basic JavaScript familiarity (variables, functions, control flow)
  • A target URL you want to scrape
If you prefer describing the scraper in plain language instead of writing code, use the Scraper Studio AI Agent. The agent generates the same kind of scraper the IDE would produce.

How do I build a scraper in the IDE?

1

Open the Scraper Studio IDE

Go to brightdata.com/cp/scrapers, click Scraper Studio, then click Develop a web scraper (IDE) to open an empty scraper.
2

Start from scratch or pick a template

Choose a template from the Templates panel if your target site has a matching starter, or start from a blank scraper. Templates are pre-built scrapers for common patterns and sites; they are a fast way to learn the idioms Bright Data Scraper Studio expects.
3

Write the interaction code

Interaction code navigates the target site and passes the loaded page to your parser code. Use the Interaction code editor on the left. A minimal interaction script:
navigate(input.url);    // load the page for this input
wait('.product-title'); // block until the selector appears

let data = parse();     // run the parser code against the loaded page
collect(data);          // emit the record to the dataset
For a multi-page scrape, fan out with next_stage():
navigate(input.url);
wait('.listing');
let listings = parse().listings; // parser returns the listing URLs
for (let url of listings) {
  next_stage({url});             // queue each URL as an input for the next stage
}
See Scraper Studio functions for every interaction command.
4

Write the parser code

Parser code reads the HTML of the loaded page and returns a structured record. Use Cheerio’s jQuery-like $ selector.
return {
  title: $('h1').text_sane(),
  price: new Money(+$('.price').text().replace(/\D+/g, ''), 'USD'),
  image: new Image($('img.product').attr('src')),
  listings: $('.listing a').toArray().map(el => $(el).attr('href')),
};
Parser code returns data to whichever interaction function called parse(). See Scraper Studio functions for the parser helpers Bright Data Scraper Studio provides.
5

Choose a worker type

In the Settings panel, pick the worker type:
  • Code worker (faster, cheaper): for static HTML pages and public JSON endpoints
  • Browser worker: for JavaScript-rendered pages, clicks, scrolling, popups, or captured background traffic
Start with Code worker. Switch to Browser worker if you need any function from the browser-only list.
6

Run a preview

Click the Preview button to run the scraper against a single test input. The results appear in the Output tab. Use the Run log and Browser network tabs to debug failed runs.
Expected result: the Output tab shows a structured record with the fields defined in your parser code.
7

Save to production

Click Save to Production in the top-right corner. The scraper appears under My Scrapers in the control panel and can be triggered by API, manually, or on a schedule.
8

Configure delivery

Open the scraper in My Scrapers, click Delivery preferences, and choose a destination (API download, webhook, S3, GCS, Azure, SFTP, or email) and a format (JSON, NDJSON, CSV, XLSX, Parquet). See Initiate collection and delivery for every option.
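If you choose NDJSON, each delivered line is one self-contained JSON record, which makes large deliveries easy to stream. A minimal reader sketch (the filename is hypothetical; the format follows the standard newline-delimited JSON convention):

```python
import json

def read_ndjson(path):
    """Parse an NDJSON delivery: one JSON record per non-empty line."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

# Hypothetical delivered file:
# records = read_ndjson("delivery.ndjson")
```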
9

Initiate the scraper

Trigger the first production run using the initiation method that matches your workflow: an API call, a manual run from My Scrapers, or a schedule.
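As a sketch of the API route, the request below builds (but does not send) a trigger call. The endpoint path, `collector` query parameter, and payload shape are assumptions for illustration; copy the exact request from your scraper's initiate-by-API panel in the control panel.

```python
import json
import urllib.request

API_TOKEN = "YOUR_API_TOKEN"    # placeholder: your account API token
SCRAPER_ID = "YOUR_SCRAPER_ID"  # placeholder: the saved scraper's ID

def build_trigger_request(scraper_id, inputs):
    """Build a POST request that starts a scraper run for a list of inputs.

    The URL below is an assumed endpoint -- verify it against the request
    shown in your control panel before using it.
    """
    url = f"https://api.brightdata.com/dca/trigger?collector={scraper_id}"
    return urllib.request.Request(
        url,
        data=json.dumps(inputs).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_trigger_request(SCRAPER_ID, [{"url": "https://example.com/product/1"}])
# Send with urllib.request.urlopen(req) once the token and ID are real.
```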

Frequently asked questions

How do I debug a scraper that fails in production?
Open the scraper in the Bright Data Scraper Studio IDE and check the Last errors tab. Every failed input is stored with its exact error message and error code (up to the most recent 1,000 failures). Re-run the failing input from the IDE to reproduce the problem locally, fix the interaction or parser code, and save a new production version.

Can I edit a scraper that was created from a template or with the AI Agent?
Yes. Every scraper in Bright Data Scraper Studio, regardless of how it was created, can be opened and edited in the IDE. You can change extraction logic, tweak selectors, add or remove output fields, and change the worker type.

How do I add new fields to the output schema?
Click Edit Schema in the IDE’s output schema panel and add the new fields, or return the new fields from your parser code; Bright Data Scraper Studio will prompt you to update the schema when you save to production.

When should I use collect() versus set_lines()?
Use collect() to append one record at a time; it is the default way to emit data. Use set_lines() when you are collecting records progressively and want the most recent snapshot delivered even if a later step throws an error. Every call to set_lines() overrides the previous one. See collect and set_lines.
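The append-versus-override difference can be sketched with local stand-ins (these are plain functions written here for illustration, not the real Scraper Studio runtime):

```javascript
// Local stand-ins that mimic the semantics described above.
const dataset = [];   // collect() appends: every record is kept
let snapshot = [];    // set_lines() replaces the whole snapshot each call

function collect(record) { dataset.push(record); }
function set_lines(records) { snapshot = records; }

collect({ title: 'A' });
collect({ title: 'B' });                       // dataset now holds both records

set_lines([{ title: 'A' }]);
set_lines([{ title: 'A' }, { title: 'B' }]);   // overrides the previous snapshot
```

If a later step throws, the records already passed to set_lines() are what gets delivered, which is why it suits progressive collection.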

Scraper Studio functions

Full reference for interaction and parser commands

Best practices

Recommended patterns for fast, reliable scrapers

Scraper Studio IDE interface

Reference for every panel and control in the IDE

Self-Healing tool

Fix broken scrapers and add fields with plain-language prompts