This tutorial shows you how to scrape a Reddit post and get structured JSON data using the Bright Data Reddit Scraper API.

Prerequisites

1. Get your API token

Go to the user settings page in your Bright Data account and copy your API token. If you don’t have an account yet, sign up at brightdata.com. New users get $2 in free credit for testing.
Your API token is shown only once when created. Copy and store it securely.
2. Send a request

We’ll use the Posts — Collect by URL endpoint with a synchronous request. Replace YOUR_API_TOKEN with your actual token:
curl -X POST \
  "https://api.brightdata.com/datasets/v3/scrape?dataset_id=gd_lvz8ah06191smkebj4&format=json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"url": "https://www.reddit.com/r/learnpython/comments/1asdf12/how_do_i_start_learning_python/"}]'
You should receive a 200 status code; the request typically takes 10 to 30 seconds to complete.
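If you prefer Python over cURL, the same synchronous call can be sketched with the standard library. The helper names below are illustrative, not part of any Bright Data SDK; replace the token placeholder before running:

```python
import json
import urllib.request

# Placeholder: substitute your real Bright Data API token.
API_TOKEN = "YOUR_API_TOKEN"
# Posts - Collect by URL dataset ID from this tutorial.
DATASET_ID = "gd_lvz8ah06191smkebj4"

def build_scrape_request(post_urls, api_token=API_TOKEN, dataset_id=DATASET_ID):
    """Build the endpoint URL, headers, and JSON body for a synchronous scrape call."""
    endpoint = (
        "https://api.brightdata.com/datasets/v3/scrape"
        f"?dataset_id={dataset_id}&format=json"
    )
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    body = json.dumps([{"url": u} for u in post_urls]).encode("utf-8")
    return endpoint, headers, body

def scrape_posts(post_urls):
    """Send the request and return the parsed JSON response."""
    endpoint, headers, body = build_scrape_request(post_urls)
    req = urllib.request.Request(endpoint, data=body, headers=headers, method="POST")
    # Synchronous requests time out after 1 minute on the server side.
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example call (requires a valid token):
# posts = scrape_posts([
#     "https://www.reddit.com/r/learnpython/comments/1asdf12/how_do_i_start_learning_python/"
# ])
```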
3. Review the response

The Bright Data Reddit Scraper API returns a JSON array with structured post data:
[
  {
    "post_id": "1asdf12",
    "url": "https://www.reddit.com/r/learnpython/comments/1asdf12/how_do_i_start_learning_python/",
    "user_posted": "example_user",
    "title": "How do I start learning Python?",
    "description": "I'm a complete beginner...",
    "num_upvotes": 1240,
    "num_comments": 86,
    "date_posted": "2025-03-14T18:22:00Z",
    "community_name": "learnpython",
    "community_url": "https://www.reddit.com/r/learnpython",
    "community_members_num": 1120000,
    "tag": "Tutorial"
  }
]
Each post object includes post details, community stats, engagement metrics, and attached media. See the full response schema.
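Once the JSON array is back, working with it is plain dictionary access. A small sketch using the sample response above (the `summarize_post` helper is illustrative, not part of the API):

```python
import json

# A trimmed version of the sample response from this tutorial.
sample = json.loads("""
[
  {
    "post_id": "1asdf12",
    "title": "How do I start learning Python?",
    "num_upvotes": 1240,
    "num_comments": 86,
    "community_name": "learnpython",
    "community_members_num": 1120000
  }
]
""")

def summarize_post(post):
    """Reduce one post object to a short engagement summary."""
    return {
        "title": post["title"],
        "subreddit": post["community_name"],
        "engagement": post["num_upvotes"] + post["num_comments"],
    }

summary = summarize_post(sample[0])
# summary["engagement"] is upvotes + comments = 1326
```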
You’ve successfully scraped your first Reddit post using the Bright Data Reddit Scraper API.

Common questions

Can I scrape multiple posts in one request?

Yes. Add more objects to the input array. Synchronous requests support up to 20 URLs. For larger batches, or for discovery by keyword or subreddit, use the async /trigger endpoint.
[
  {"url": "https://www.reddit.com/r/learnpython/comments/1asdf12/"},
  {"url": "https://www.reddit.com/r/python/comments/1bsdf34/"},
  {"url": "https://www.reddit.com/r/programming/comments/1csdf56/"}
]
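Because the synchronous endpoint caps at 20 URLs per call, longer lists need to be split into batches. A minimal sketch (helper names are illustrative):

```python
def to_input_objects(urls):
    """Wrap plain URLs in the input-object shape the API expects."""
    return [{"url": u} for u in urls]

def chunk_urls(urls, max_per_request=20):
    """Split a URL list into chunks no larger than the sync endpoint's 20-URL limit."""
    return [urls[i:i + max_per_request] for i in range(0, len(urls), max_per_request)]

# 45 URLs become three requests of 20, 20, and 5 URLs.
batches = chunk_urls([f"https://www.reddit.com/r/example/comments/{i}/" for i in range(45)])
```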
Can I scrape a post’s comments as well?

Yes, with the separate Comments dataset. Use dataset ID gd_lvzdpsdlw09j6t702 and pass the post URL:
curl -X POST \
  "https://api.brightdata.com/datasets/v3/scrape?dataset_id=gd_lvzdpsdlw09j6t702&format=json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"url": "https://www.reddit.com/r/learnpython/comments/1asdf12/"}]'
You can also pass a days_back parameter to limit results to comments posted within the last N days.
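In Python, the optional days_back field just becomes another key on the input object. A sketch of building that payload (the helper name is illustrative):

```python
import json

def build_comments_payload(post_url, days_back=None):
    """Build the JSON body for one Comments-dataset input; days_back is optional."""
    item = {"url": post_url}
    if days_back is not None:
        # Restrict results to comments from the last N days.
        item["days_back"] = days_back
    return json.dumps([item])

payload = build_comments_payload(
    "https://www.reddit.com/r/learnpython/comments/1asdf12/", days_back=7
)
```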
What should I do if I get an authentication error?

Verify your API token is correct and hasn’t expired. Generate a new token from Account settings. See the authentication guide for details.
What happens if my request times out?

Synchronous requests have a 1-minute timeout. If a request exceeds this limit, it automatically switches to async and returns a snapshot_id. Use the async workflow for large batches.
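Because of that fallback, client code should check whether it got post data back or a snapshot_id. A sketch of that check; the snapshot-retrieval URL shape below is an assumption on my part, so confirm the exact path in the API reference:

```python
def async_snapshot_id(response_json):
    """Return the snapshot_id if the sync call fell back to async, else None."""
    if isinstance(response_json, dict):
        return response_json.get("snapshot_id")
    return None  # a list means the synchronous result came back directly

# Assumed endpoint shape for fetching an async result later; verify against the docs.
def snapshot_url(snapshot_id):
    return f"https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}?format=json"
```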
Why does my request return no data for a post?

Verify the Reddit post URL is publicly accessible and correctly formatted. The URL should follow the pattern https://www.reddit.com/r/{subreddit}/comments/{post_id}/{slug}/. Private subreddits and deleted posts cannot be scraped.
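You can validate URLs against that pattern before sending them, which avoids wasted requests. A sketch using a regular expression derived from the pattern above (the slug segment is optional, as in the earlier multi-post examples):

```python
import re

# https://www.reddit.com/r/{subreddit}/comments/{post_id}/{slug}/ with optional slug.
POST_URL_RE = re.compile(
    r"^https://www\.reddit\.com/r/[^/]+/comments/[^/]+(/[^/]*)?/?$"
)

def is_valid_post_url(url):
    """True if the URL matches the expected Reddit post pattern."""
    return bool(POST_URL_RE.match(url))
```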

Next steps

Send your first request

Explore every endpoint with full examples in cURL, Python and Node.js.

Async batch requests

Scrape hundreds of posts or run keyword discovery in a single batch job.

API reference

Endpoint specs, parameters and response schemas.