Prerequisites
- A Bright Data account (sign up free)
- The URL of the website you want to scrape
Build your first scraper with the AI Agent
Open Scraper Studio
In the Bright Data control panel, click Scrapers in the left menu and open Scraper Studio.

Enter the target website URL
Paste the URL of the page you want to scrape into the chat input. Along with the URL, add any context that helps the AI build a more accurate scraper on the first try. The more context you provide, the better the generated code.
Useful context to include:

- Specific fields you need: “I need price, title, and stock status”
- Where the data lives on the page: “prices are in the product detail panel, not the listing page”
- Actions required to reach the data: “click ‘Show more’ to load full descriptions”
- CSS selectors, if you know them: `.product-price span.amount`
- Page load behavior, if the site is slow or lazy-loads content: “results load dynamically, give it extra time”
Expected result: the AI Agent acknowledges the URL and may ask one or two clarifying questions about the data you want.
Answer the AI's questions
Respond in plain language.
Expected result: the AI Agent generates a schema, a structured list of fields with data types that will become your scraper’s output.
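As a rough sketch of what a generated schema represents (the field names, types, and JSON layout below are illustrative, not a fixed Bright Data file format), a schema for a product page might describe fields like:

```json
{
  "fields": [
    { "name": "title", "type": "string" },
    { "name": "price", "type": "number" },
    { "name": "in_stock", "type": "boolean" },
    { "name": "image_urls", "type": "array" }
  ]
}
```

Each field becomes a column or key in the scraper’s output, so reviewing names and types now saves rework later.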
Review and approve the schema
Read through the generated schema. You have four options:

- Approve: click Approve to accept the schema as-is
- Decline: type feedback in the chat (for example, “Remove the image field and add a rating field”) and the AI regenerates the schema
- Edit inline: modify the schema directly without going back to the chat
- Upload your own schema: bring your own schema file; download the example file to see the correct format

Additional controls:

- Edit a field (pencil icon): change a field’s name or data type
- Delete a field (trash icon): remove fields you do not need
- Add a field (plus button): add new fields to the schema
- Start from scratch: clears every field so you can build the schema manually from an empty state
- Reset the schema: discards inline changes and returns to the original AI-generated schema
Expected result: once approved, the AI Agent starts generating the scraper code.
Wait for code generation
The AI writes the full scraper, including extraction logic, navigation handling, data validation, and error handling. This takes a few minutes.
Expected result: a confirmation popup appears indicating your scraper is ready.

Run your scraper
Click Try it out to open the Initiate Manually page. Review the collection settings and click Start to begin data collection.
You can also choose an alternative initiation method:

- Initiate by API: trigger the scraper programmatically without opening the control panel
- Schedule: run the scraper on a daily, weekly, or custom interval
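For the API option, the control panel’s Initiate by API tab shows the exact endpoint, token, and scraper ID to use. As a minimal sketch, assuming a generic POST trigger endpoint and placeholder credentials (copy the real values from the tab):

```javascript
// Sketch of building an API trigger request. API_TOKEN and SCRAPER_ID are
// placeholders -- take the real values from the "Initiate by API" tab.
const API_TOKEN = "YOUR_API_TOKEN";
const SCRAPER_ID = "YOUR_SCRAPER_ID";

function buildTriggerRequest(urls) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    // One input object per target URL, matching the scraper's input schema.
    body: JSON.stringify(urls.map((url) => ({ url }))),
  };
}

const req = buildTriggerRequest(["https://example.com/product/123"]);
console.log(req.method); // "POST"
// To actually run it, pass `req` to fetch() with the endpoint URL
// shown in the Initiate by API tab.
```

The same request shape works from any HTTP client; only the endpoint URL and auth token come from your account.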
Expected result: the scraper collects data. Monitor progress from the Runs dashboard and download results in JSON, NDJSON, CSV, or XLSX once the job completes.
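If you download results as NDJSON, each line of the file is one standalone JSON record. A minimal parsing sketch (the `title` and `price` fields are illustrative, not a fixed schema):

```javascript
// Parse NDJSON: one JSON object per line, blank lines ignored.
function parseNdjson(text) {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// With a real download you would read the file first, e.g.:
// const text = require("fs").readFileSync("results.ndjson", "utf8");
const sample = '{"title":"Widget","price":9.99}\n{"title":"Gadget","price":19.99}\n';
const records = parseNdjson(sample);
console.log(records.length); // 2
```

NDJSON is convenient for large jobs because records can be streamed line by line instead of loading one giant JSON array into memory.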
What can the AI Agent build?
The AI Agent creates scrapers based on a specific input type and collection goal. It does not crawl an entire domain: passing a homepage URL and asking it to “scrape everything” will not produce useful results. Choose the scraper type that matches your data shape.
1. Product page (PDP) scraper
You provide a list of product page URLs. The scraper visits each URL and extracts product-level data (title, price, description, images).
Use when: you already have the URLs of the specific pages you want to scrape.
2. Discovery scraper
You provide a category page or listing page URL. The scraper collects data directly from the listing (titles, prices, ratings), without visiting individual product pages.
Use when: you need an overview of items from a category or search results page, and you do not need full product-page detail.
3. Discovery + PDP scraper
You provide a category or listing page URL. The scraper first discovers all product URLs on the page, then visits each product page to collect full detail.
Use when: you need complete product data from an entire category, not just the fields visible on the listing page.
4. Search scraper
You provide a search keyword. The AI Agent creates either a Discovery or Discovery + PDP scraper based on your stated requirements: it first finds results for the keyword, then collects data from them.
Use when: you do not have specific URLs and want to collect data from a search term.
Frequently asked questions
Can I edit the code after the AI Agent builds the scraper?
Yes. Every scraper the AI Agent generates can be opened in the Bright Data Scraper Studio IDE and edited directly. If you prefer not to write code, use the Self-Healing tool to request changes in plain language.
Does the AI Agent build scrapers for login-protected sites?
The AI Agent generates scrapers that run on Bright Data’s proxy and unblocking infrastructure, which handles most anti-bot defenses. For sites that require a logged-in session, build the scraper in the IDE and use set_session_cookie() or the authentication pattern that matches the target site.
Why didn't the AI generate what I expected?
The AI Agent relies on the context you give it. If the output is off, decline the schema and add more specifics about field names, selectors, or the exact page section where the data lives. You can also use the Self-Healing tool to refine a generated scraper after the fact.
Related
Develop a scraper with the IDE
Build a scraper by writing JavaScript directly
Self-Healing tool
Update a generated scraper with plain-language prompts