Find answers to common questions about Bright Data’s Web Scraper IDE, including setup, troubleshooting, and best practices for developing custom data scrapers.
What is a Bright Data Web Scraper?
What is Web Scraper IDE?
What is an “input” when using a Web Scraper?
What is an “output” when using a Web Scraper?
How many free records are included with my free trial?
Why did I receive more statistic records than inputs?
What are the most frequent data points collected from social media?
Can I collect data from multiple platforms?
Can I add additional information to my Web Scraper?
What is a search scraper?
What is a discovery scraper?
Can I change the code in the IDE by myself?
What are the options to initiate requests?
How to start using the Web Scraper?
What is a queue request?
What is a CPM?
When building a scraper, what is considered a billable event?
navigate()
request()
load_more()
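For context, navigate(), request(), and load_more() are interaction-code commands in the Web Scraper IDE, and each call that triggers a page load or HTTP request is counted as a billable event. Below is a minimal, illustrative sketch only; the surrounding helpers (wait, parse, collect) and the input object are assumptions and their exact signatures may differ in your IDE version.

```js
// Illustrative IDE interaction-code sketch; helper signatures are assumptions.
// Each navigate()/request()/load_more() call below is a billable event.
navigate(input.url);        // billable: loads the listing page
wait('.product-card');      // waiting on the loaded page is not billable
load_more('.show-more');    // billable: fetches an additional batch of results
collect(parse());           // parsing and collecting results are not billable
```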
How can I confirm that someone is working on the new Web Scraper I requested?
How to report an issue on the Web Scraper IDE?
Select the job ID of the dataset with the issue
Select the type of issue:
Data
Collection and Delivery
Other
(Parsing issues) Use the red “bug” icon to mark where the incorrect results appear
(Parsing issues) Enter the results you expected to receive
Write a description of what went wrong and the URL where the data was collected
If needed, attach an image to support your report
I updated the input/output schema of my managed scraper. Can I use it while Bright Data updates my scraper?
override_incompatible_schema=1
override_incompatible_input_schema=1
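These flags are passed when triggering the scraper so that your updated schema is used before Bright Data finishes updating the managed scraper; based on the flag names, override_incompatible_input_schema=1 appears to apply to input-schema changes only. The sketch below is a hypothetical trigger request: the endpoint path, collector ID, and API token are placeholders, and only the override_incompatible_schema=1 flag comes from this FAQ, so check the trigger API docs for the exact URL and parameters.

```js
// Hypothetical trigger call; endpoint, collector ID, and token are placeholders.
// Requires Node 18+ for the built-in fetch API.
async function triggerWithNewSchema() {
  const res = await fetch(
    'https://api.brightdata.com/dca/trigger' +
      '?collector=YOUR_COLLECTOR_ID&override_incompatible_schema=1',
    {
      method: 'POST',
      headers: {
        Authorization: 'Bearer YOUR_API_TOKEN',
        'Content-Type': 'application/json',
      },
      // Inputs shaped to match your *updated* input schema
      body: JSON.stringify([{ url: 'https://example.com/item' }]),
    }
  );
  console.log(await res.json()); // typically returns the ID of the queued job
}

triggerWithNewSchema();
```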
How can I debug real-time scrapers?
What should I do if I face an issue with a Web Scraper?
When “reporting an issue”, what information should I include in my report?
What is a Data Collector?
How to create a Data Collector?
Any system limitations?
How do I use the AI code generator?