How it works
You send a prompt to the Bright Data ChatGPT Scraper API. Bright Data handles the scraping infrastructure and returns clean, structured JSON with the answer, citations, and sources. Requests use the dataset_id for ChatGPT Search and can return results in JSON, NDJSON, or CSV.
What the response looks like
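Each record pairs the answer text with the citations and sources ChatGPT used. The field names below are illustrative only, not the authoritative schema; see the API reference for the exact response shape:

```json
{
  "prompt": "best wireless earbuds under $100",
  "answer_text": "Several current reviews recommend the following models...",
  "citations": [
    {
      "title": "Example review site",
      "url": "https://example.com/earbuds-review",
      "position": 1
    }
  ],
  "sources": ["https://example.com/earbuds-review"]
}
```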
Supported capabilities
Search with Citations
Get structured answers with source citations, positions, and linked references from ChatGPT web search.
Follow-up Prompts
Send an additional prompt to get follow-up answers within the same search context.
Shopping and Map Results
Detect when ChatGPT returns shopping product cards or map-based results for location queries.
Web Search Control
Enable or disable web search to control whether ChatGPT uses live web data in its responses.
Request methods
The Bright Data ChatGPT Scraper API supports two request methods. Choose based on your volume and latency needs.
Learn more in Understanding sync vs. async requests.
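Both methods take the same input shape: a list of prompt objects plus dataset parameters. A minimal sketch of building that request body in Python follows; the endpoint URLs and field names (`web_search`, `format`, the example `dataset_id`) are assumptions for illustration, so confirm them against the API reference before use.

```python
import json

# Assumed endpoints -- verify against the Bright Data API reference.
SYNC_URL = "https://api.brightdata.com/datasets/v3/scrape"    # up to 20 inputs
ASYNC_URL = "https://api.brightdata.com/datasets/v3/trigger"  # up to 5,000 inputs


def build_request(prompts, dataset_id, web_search=True):
    """Build query params and a JSON body for a trigger request.

    Field names here are illustrative, not the authoritative schema.
    """
    inputs = [
        {"url": "https://chatgpt.com/", "prompt": p, "web_search": web_search}
        for p in prompts
    ]
    return {
        "params": {"dataset_id": dataset_id, "format": "json"},
        "body": json.dumps(inputs),
    }


req = build_request(
    ["What are the best noise-cancelling headphones?"],
    dataset_id="gd_example_chatgpt",  # placeholder dataset_id
)
```

A sync request returns the records in the HTTP response; an async request returns a snapshot ID you poll later, which is why it supports far larger batches.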
Capabilities and limits
| Capability | Detail |
|---|---|
| Output formats | JSON, NDJSON, CSV |
| Max inputs per sync request | 20 |
| Max inputs per async request | 5,000 |
| Max prompt length | 4,096 characters |
| Data freshness | Real-time (scraped on demand) |
| Context between requests | None (each request is independent) |
| Delivery options | API download, webhook, Amazon S3, Snowflake, Azure, GCS |
| Pricing | Pay per successful record (see pricing) |
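For async requests, the output format is typically chosen when downloading the finished snapshot. A small sketch of building that download URL, assuming a snapshot endpoint and a `format` query parameter (check the API reference for the real paths):

```python
# Assumed snapshot download endpoint -- verify against the API reference.
SNAPSHOT_URL = "https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}"


def download_url(snapshot_id, fmt="json"):
    """Build a snapshot download URL for one of the supported formats.

    The actual GET request would also need an Authorization header
    carrying your API token.
    """
    if fmt not in ("json", "ndjson", "csv"):
        raise ValueError(f"unsupported format: {fmt}")
    return SNAPSHOT_URL.format(snapshot_id=snapshot_id) + f"?format={fmt}"
```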
Common questions
Is the data scraped in real time?
Yes. Each request triggers a live ChatGPT search session. There is no cached or stale data. Response times vary depending on prompt complexity and whether web search is enabled.
Can I maintain conversation context across requests?
No. Each request starts a fresh ChatGPT session. There is no memory or context carried over between requests. To ask a follow-up question within a single request, use the additional_prompt field.
How is this different from scraping using proxies or Web Unlocker?
When scraping using proxies or Web Unlocker, you still need to write and maintain
your own browser automation and parsing logic, and update it whenever ChatGPT changes
its interface. The ChatGPT Scraper API handles the entire stack: proxy rotation,
anti-bot bypassing, browser automation, and parsing. You simply send a prompt and get
clean, structured JSON back with no scraping infrastructure or parser maintenance
required on your end.
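As noted above, the only way to get a follow-up answer is the additional_prompt field within a single request. A minimal input sketch, with field names that are illustrative rather than authoritative:

```python
# One request, two turns: the follow-up runs in the same search context.
# Field names are assumptions -- confirm against the API reference.
follow_up_input = {
    "url": "https://chatgpt.com/",
    "prompt": "What are the best budget laptops right now?",
    "additional_prompt": "Which of those has the longest battery life?",
    "web_search": True,
}
```

The response would then contain answers for both the initial prompt and the follow-up, without any context surviving into later requests.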
Next steps
Quickstart
Search ChatGPT with your first prompt in 5 minutes.
Send your first request
Full code examples in cURL, Python, and Node.js.
API reference
Endpoint specs, parameters, and response schemas.