Getting started
Introduction to Bright Data Scraper Studio
A guided tour of the Bright Data Scraper Studio IDE: templates, proxy networks, and how to build multiple scrapers in one flow. Start here if you have never built a scraper on Bright Data before.
Pair with: Understanding Scraper Studio
Platform overview: project setup, proxies, and automation
A walkthrough of essential Bright Data Scraper Studio features: project setup, proxy creation and management, and scheduling automated collection runs.
Pair with: Scraper Studio IDE interface
Building scrapers
Scrape data from Amazon search results
How to scrape Amazon search results, build a scraper that walks paginated results, and rely on the Bright Data proxy network to avoid blocks.
Pair with: Develop a scraper
Walk search results with for loops
How to use a for loop to navigate hundreds of result pages, extract links to each listing, and collect data with parse() and collect(). Covers proxy manager setup, basic templates, input targeting, running and testing code, and reading logs.
Pair with: Web scraping basics
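The page-walking pattern that guide teaches can be sketched in plain Python. This is a minimal illustration, not the Scraper Studio API: fetch_page() is a hypothetical stand-in for navigating to a results page, and the dict append stands in for the IDE's parse() and collect() helpers.

```python
def fetch_page(page):
    """Hypothetical stand-in for navigating to one page of search results.

    Returns the listing URLs found on that page, or an empty list once
    pagination is exhausted (here we pretend the site has 3 pages).
    """
    if page > 3:
        return []
    return [f"https://example.com/listing/{page}-{i}" for i in range(5)]

def scrape_search_results(max_pages=100):
    records = []
    # Walk paginated results with a for loop, as the guide describes.
    for page in range(1, max_pages + 1):
        links = fetch_page(page)
        if not links:  # stop once a page yields no listings
            break
        for url in links:
            # In the IDE this is where parse() extracts fields and
            # collect() stores the record; here we just append a dict.
            records.append({"url": url, "page": page})
    return records

results = scrape_search_results()
print(len(results))  # 3 pages x 5 listings = 15 records
```

The loop stops on the first empty page rather than trusting a fixed page count, which is the safer pattern when a site's total result count varies between runs.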
Use templates and deploy multiple scrapers
How to start from an IDE template, build a Walmart scraper, deploy it to the proxy network, and collect the data into a single API response.
Pair with: Scraper Studio functions
Build a Python scraper end-to-end
How to build a Python project from a Bright Data Scraper Studio template: modifying the code, previewing results, configuring delivery, and retrieving data via API.
Pair with: Initiate collection and delivery
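The trigger-and-retrieve flow that walkthrough ends with can be sketched as two functions. The endpoint paths and payload shape below are illustrative assumptions, not the documented Bright Data API; the HTTP call is abstracted behind a `send` callable so the flow can be followed (and exercised) without credentials.

```python
import json

API_BASE = "https://api.brightdata.com"  # assumed base URL for illustration

def build_trigger_request(api_token, scraper_id, inputs):
    """Assemble the HTTP request that would start a collection run."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/trigger",  # placeholder path, not the real endpoint
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"scraper": scraper_id, "input": inputs}),
    }

def retrieve_results(send, snapshot_id, max_polls=10):
    """Poll until the run finishes, then fetch the collected rows.

    `send` is any callable that performs an HTTP request dict and returns
    parsed JSON, so a real client or a test stub can be plugged in.
    """
    for _ in range(max_polls):
        status = send({"method": "GET", "url": f"{API_BASE}/status/{snapshot_id}"})
        if status["state"] == "done":
            data = send({"method": "GET", "url": f"{API_BASE}/data/{snapshot_id}"})
            return data["rows"]
    raise TimeoutError("collection run did not finish in time")
```

Separating request construction from transport keeps the token handling and polling logic testable; swap `send` for a real HTTP client when wiring it up against the actual API described in the guide.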
Handling challenging sites
Scrape dynamic sites with utility functions
How to scrape a heavily dynamic site with Bright Data's browser helpers: waiting for grids to render, looping over grid cells, and using next_stage() to fan out detail collection.
Pair with: Best practices
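The fan-out pattern that entry refers to can be modeled outside the IDE as a two-stage pipeline. This is a hedged sketch: in Scraper Studio, next_stage() queues a URL for the next collection stage, whereas here the queue is an explicit deque and fetch_detail is a hypothetical callable supplied by the caller.

```python
from collections import deque

def run_two_stage_scrape(grid_urls, fetch_detail):
    """Model the grid-then-detail fan-out as an explicit work queue."""
    stage_two = deque()
    # Stage 1: loop over the rendered grid cells and fan out one detail
    # job per cell -- the role next_stage() plays in the IDE.
    for url in grid_urls:
        stage_two.append({"url": url})
    # Stage 2: visit each queued detail page and collect its record.
    records = []
    while stage_two:
        job = stage_two.popleft()
        records.append(fetch_detail(job["url"]))
    return records
```

Keeping the stages separate means the cheap grid pass finishes (and can be retried) independently of the slower per-listing detail requests.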
Automate Airbnb with a Python scraper
How to customize a Bright Data Scraper Studio template to scrape Airbnb, plus the API flow for triggering the scraper and retrieving results.
Pair with: Scraper Studio AI Agent
Debugging and delivery
Debug a scraper and configure delivery
How to debug a scraper in the IDE, read the run log and errors, and choose a delivery method (API endpoint or Amazon S3).
Pair with: Scraper Studio IDE interface
End-to-end data engineering with Tableau
A longer walkthrough: proxy manager, API integrations, building a scraper from a template, using input for products or categories, running code, and reading logs for debugging.
Pair with: Features reference
Related
Understanding Scraper Studio
How Bright Data Scraper Studio works and when to use it
Develop a scraper
Step-by-step walkthrough of building a scraper in the IDE