When writing scraper code in the IDE, the system auto-saves the scraper as a draft to the development environment. From inside the IDE, you can run one page at a time to sample how your scraper will behave. To get a full production run, save the scraper to production by clicking the ‘Save to production’ button at the top right corner of the IDE screen. All scrapers appear under the My scrapers tab in the control panel; any inactive scraper is shown in a faded state.

Initiate scraper

To start collecting the data, choose one of three options:

You can start a data collection through the API without accessing the Bright Data control panel; see the Getting started with API documentation.

Before initiating an API request, create an API token. To create one, go to:
Dashboard side menu > Settings > Account settings > API tokens
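
Once created, the token is passed as a Bearer token in the Authorization header of every API request. A minimal sketch, assuming a placeholder endpoint (copy the exact URL from the API request preview in the IDE):

```bash
# Minimal sketch: authenticate an API request with the token.
# API_TOKEN and COLLECTOR_ID are placeholders; the endpoint is
# illustrative - copy the real one from the IDE's API preview.
curl -H "Authorization: Bearer API_TOKEN" \
  "https://api.brightdata.com/dca/trigger?collector=COLLECTOR_ID"
```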

  1. Set Up Inputs Manually - provide the input manually or through the API request
  2. Trigger behavior - you can add several requests that run in parallel and are activated in the order they are defined. You can also add another job run to the queue and run more than two jobs simultaneously.
  3. Preview of the API Request - Bright Data provides you with a REST API call to initiate the scraper. Select the “Linux Bash” viewer for cURL commands. As soon as you send the request, you will receive a job ID (see the sketch after this list).
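
The exact command is generated for you in the preview; the following is only a hedged sketch of its typical shape, where COLLECTOR_ID, API_TOKEN, and the JSON input body are placeholders and the queue_next parameter relates to the trigger behavior described above:

```bash
# Illustrative trigger request - copy the real command from the
# "Preview of the API Request" step in the IDE. Placeholders:
# COLLECTOR_ID, API_TOKEN, and the JSON input body.
curl "https://api.brightdata.com/dca/trigger?collector=COLLECTOR_ID&queue_next=1" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer API_TOKEN" \
  -d '[{"url": "https://example.com/target-page"}]'

# The response contains the job ID that identifies this run, e.g.:
# {"collection_id":"j_abc123..."}
```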

You will receive the data according to the delivery preferences defined earlier.

A ‘Receive data’ API call is required to retrieve the data when the delivery preference is set to API download.
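
As a hedged sketch of such a call (the endpoint shape follows the Getting started with API documentation; JOB_ID is the ID returned by the trigger request and API_TOKEN is your token):

```bash
# Illustrative "Receive data" request for the API download
# delivery option. JOB_ID and API_TOKEN are placeholders.
curl "https://api.brightdata.com/dca/dataset?id=JOB_ID" \
  -H "Authorization: Bearer API_TOKEN"
```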

Delivery Options

You can set your delivery preferences for the dataset. To do that, click the scraper row in the ‘My scrapers’ tab, then click ‘Delivery preferences’.

Output schema

The schema defines the data point structure and how the data will be organized. You can change the schema structure and modify the data points to suit your needs: re-order fields, set default values, and add additional data to your output configuration. You can add new field names by going into the advanced settings and editing the code.

Input / Output schema - choose the tab you’d like to configure
Custom validation - validate the schema
Parsed data - data points collected by the scraper
Add new field - if you need an additional data point, you can add fields and define the field name and type
Additional data - additional information you can add to the schema (timestamp, screenshot, etc.)