How it works
You send one or more Instagram URLs to the Bright Data Instagram Scraper API. Bright Data handles the scraping infrastructure and returns clean, structured JSON. Use the dataset_id parameter to specify the data type (profiles, posts, reels, or comments) and return results in JSON, NDJSON, or CSV.
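As a minimal sketch of that flow in Python: the dataset ID below is a placeholder, and the synchronous `/datasets/v3/scrape` endpoint is an assumption to verify against the API reference.

```python
import json
import urllib.request

# Assumed placeholder values -- substitute your own API token and the
# dataset_id for the data type you want (profiles, posts, reels, comments).
API_TOKEN = "YOUR_API_TOKEN"
DATASET_ID = "gd_instagram_profiles_example"

def build_request_body(urls):
    """One input object per Instagram URL to scrape."""
    return [{"url": u} for u in urls]

body = build_request_body(["https://www.instagram.com/instagram/"])
req = urllib.request.Request(
    "https://api.brightdata.com/datasets/v3/scrape"
    f"?dataset_id={DATASET_ID}&format=json",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request; left commented out
# so the sketch runs without live credentials.
```

The response body is the structured JSON records themselves, so there is no parsing step on your side.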
What the response looks like
Supported data types
Profiles
Follower counts, bios, verification status, profile pictures. Discover profiles by username.
Posts
Captions, likes, comments, hashtags, photos, and videos. Discover posts by profile URL.
Reels
Video URLs, view counts, play counts, thumbnails. Discover reels or collect all reels from a profile.
Comments
Comment text, likes, replies, commenter details for any post or reel.
Request methods
The Bright Data Instagram Scraper API supports two request methods: synchronous requests (up to 20 URLs, results returned directly in the response) and asynchronous requests (up to 5,000 URLs, results collected once the scrape completes). Choose based on your volume and latency needs.
Learn more in Understanding sync vs. async requests.
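One simple way to route between the two methods is by batch size, using the per-request URL caps from the table below. The endpoint paths here are assumptions about Bright Data's dataset API; confirm them in the API reference.

```python
def choose_endpoint(url_count: int) -> str:
    """Pick a request method from the batch size alone.

    Sync requests are capped at 20 URLs and return results in the
    response body; async requests accept up to 5,000 URLs and return
    a snapshot ID to collect later. Paths are assumed placeholders.
    """
    if url_count <= 20:
        return "/datasets/v3/scrape"    # synchronous
    if url_count <= 5000:
        return "/datasets/v3/trigger"   # asynchronous
    raise ValueError("split the batch: async requests cap at 5,000 URLs")
```

In practice you may prefer async even for small batches when you do not need the result immediately, since it frees your client from holding a connection open.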
Capabilities and limits
| Capability | Detail |
|---|---|
| Output formats | JSON, NDJSON, CSV |
| Max URLs per sync request | 20 |
| Max URLs per async request | 5,000 |
| Data freshness | Real-time (scraped on demand) |
| Delivery options | API download, Webhook, Amazon S3, Snowflake, Azure, GCS |
| Pricing | Pay per successful record (see pricing) |
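For async requests, results are collected after the scrape finishes rather than in the initial response. A polling sketch, under assumed endpoint paths and status values (`running`/`ready`/`failed`; verify both against the API reference):

```python
import json
import time
import urllib.request

API = "https://api.brightdata.com/datasets/v3"  # assumed base path

def poll_until_ready(snapshot_id, token, interval=10, get=None):
    """Poll an async snapshot until the scrape finishes, then return
    the download URL. `get` is injectable so the loop can be tested
    without a live API call."""
    if get is None:
        def get(url):
            req = urllib.request.Request(
                url, headers={"Authorization": f"Bearer {token}"}
            )
            return json.load(urllib.request.urlopen(req))
    while True:
        status = get(f"{API}/progress/{snapshot_id}")["status"]
        if status == "ready":
            return f"{API}/snapshot/{snapshot_id}?format=json"
        if status == "failed":
            raise RuntimeError(f"snapshot {snapshot_id} failed")
        time.sleep(interval)
```

With webhook or storage delivery (S3, Snowflake, Azure, GCS) you would skip polling entirely and let Bright Data push the results to you.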
Common questions
Is the data scraped in real time?
Yes. Each request triggers a live scrape. There is no cached or stale data. Response times vary by endpoint: profiles typically return in 10-30 seconds (sync), while discovery requests may take longer depending on result volume.
What is the difference between URL collection and discovery?
URL collection scrapes a specific Instagram page you provide (e.g., a profile URL). Discovery finds Instagram content matching search criteria (e.g., all posts from a profile URL) and scrapes the results. Discovery is only available via async requests.
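In request terms, the difference shows up in the trigger URL's query string. A hedged sketch: the `type=discover_new` and `discover_by` parameter names are assumptions to verify against the API reference.

```python
from urllib.parse import urlencode

def trigger_url(dataset_id, discover_by=None):
    """Build an async trigger URL.

    Plain URL collection omits the discovery parameters; discovery
    adds type/discover_by (parameter names assumed, not confirmed).
    """
    params = {"dataset_id": dataset_id, "format": "json"}
    if discover_by:
        params.update({"type": "discover_new", "discover_by": discover_by})
    return "https://api.brightdata.com/datasets/v3/trigger?" + urlencode(params)
```

The request body is the same shape either way: for collection, each input object holds the exact page URL; for discovery, it holds the search criterion (e.g. a profile URL to expand into posts).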
How is this different from scraping using proxies or Web Unlocker?
When scraping with proxies or Web Unlocker, you still need to write and maintain your own parsing logic and update it whenever Instagram changes its page structure. The Instagram Scraper API handles the entire stack: proxy rotation, anti-bot bypassing, and parsing. You send an Instagram URL and get clean, structured JSON back, with no scraping infrastructure or parser maintenance required on your end.
Next steps
Quickstart
Scrape your first Instagram profile in 5 minutes.
Send your first request
Full code examples in cURL, Python, and Node.js.
API reference
Endpoint specs, parameters, and response schemas.