This guide shows you how to scrape LinkedIn data at scale using the asynchronous /trigger endpoint. Use this when you have more than 20 URLs, need discovery, or want delivery to a webhook or S3.
Not sure whether to use sync or async? Read Understanding sync vs. async requests.

Prerequisites

Before you start, you need:
- A Bright Data account and an API token, sent as the Authorization: Bearer header in every request below
- The dataset ID of the scraper you are targeting (the examples use gd_l1viktl72bvl7bjuj0, as shown in the curl commands)

Step 1: Trigger the collection

Send a POST request to the /trigger endpoint with your input URLs:
curl -X POST \
  "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[
    {"url": "https://www.linkedin.com/in/satyanadella"},
    {"url": "https://www.linkedin.com/in/jeffweiner08"},
    {"url": "https://www.linkedin.com/in/rbranson"},
    {"url": "https://www.linkedin.com/in/sherylsandberg"},
    {"url": "https://www.linkedin.com/in/raboram"}
  ]'
You should see a 200 response with a snapshot_id:
{
  "snapshot_id": "s_m1a2b3c4d5e6f7g8h"
}
Save this ID. You need it to check progress and download results.
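
If you prefer to script this step, here is a minimal Python sketch of the same request using only the standard library. The helper names (`build_payload`, `trigger_collection`) are illustrative, not part of the API, and `YOUR_API_TOKEN` is a placeholder:

```python
import json
import urllib.request

API_BASE = "https://api.brightdata.com/datasets/v3"
DATASET_ID = "gd_l1viktl72bvl7bjuj0"  # dataset ID from the curl example above

def build_payload(urls):
    """Wrap each URL in the {"url": ...} input object the trigger endpoint expects."""
    return [{"url": u} for u in urls]

def trigger_collection(urls, api_token):
    """POST a batch of URLs to /trigger and return the snapshot_id."""
    req = urllib.request.Request(
        f"{API_BASE}/trigger?dataset_id={DATASET_ID}&format=json",
        data=json.dumps(build_payload(urls)).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["snapshot_id"]
```

For example, `trigger_collection(["https://www.linkedin.com/in/satyanadella"], "YOUR_API_TOKEN")` would return the snapshot ID string.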

Step 2: Monitor progress

Poll the snapshot status until it shows ready. This typically takes anywhere from 30 seconds to several minutes, depending on the number of URLs.
curl "https://api.brightdata.com/datasets/v3/progress/s_m1a2b3c4d5e6f7g8h" \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Status values:

Status       Meaning
collecting   Scraping is in progress
digesting    Data is being processed
ready        Results are available for download
failed       The collection encountered an error
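
When scripting this step, a polling loop like the following standard-library Python sketch handles the status values above. `get_progress` and `wait_until_ready` are illustrative helper names, not API calls; the status check is injected as a callable so the loop is easy to test:

```python
import json
import time
import urllib.request

API_BASE = "https://api.brightdata.com/datasets/v3"

def get_progress(snapshot_id, api_token):
    """Fetch the current status string for a snapshot from /progress."""
    req = urllib.request.Request(
        f"{API_BASE}/progress/{snapshot_id}",
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["status"]

def wait_until_ready(fetch_status, poll_interval=10, max_attempts=60):
    """Call fetch_status() until it returns 'ready'; raise on 'failed' or timeout."""
    for _ in range(max_attempts):
        status = fetch_status()
        if status == "ready":
            return status
        if status == "failed":
            raise RuntimeError("Collection failed; check the snapshot for error details")
        time.sleep(poll_interval)
    raise TimeoutError("Snapshot not ready within the polling window")
```

Usage: `wait_until_ready(lambda: get_progress("s_m1a2b3c4d5e6f7g8h", "YOUR_API_TOKEN"))`.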

Step 3: Download results

Once the status is ready, download the scraped data:
curl "https://api.brightdata.com/datasets/v3/snapshot/s_m1a2b3c4d5e6f7g8h?format=json" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -o results.json
You’ve successfully triggered, monitored, and downloaded a batch LinkedIn scraping job.
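
In a script, the download step might look like this sketch (helper names are illustrative). The response is streamed to disk rather than buffered in memory, since API downloads can reach several gigabytes:

```python
import shutil
import urllib.request

API_BASE = "https://api.brightdata.com/datasets/v3"

def snapshot_url(snapshot_id, fmt="json"):
    """Build the download URL for a finished snapshot."""
    return f"{API_BASE}/snapshot/{snapshot_id}?format={fmt}"

def download_results(snapshot_id, api_token, path="results.json"):
    """Stream the snapshot body straight to a local file."""
    req = urllib.request.Request(
        snapshot_url(snapshot_id),
        headers={"Authorization": f"Bearer {api_token}"},
    )
    with urllib.request.urlopen(req) as resp, open(path, "wb") as out:
        shutil.copyfileobj(resp, out)
```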

Skip polling with webhooks

If you don’t want to poll for status, add a webhook parameter to receive results automatically:
curl -X POST \
  "https://api.brightdata.com/datasets/v3/trigger?dataset_id=gd_l1viktl72bvl7bjuj0&format=json&webhook=https://your-server.com/webhook" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"url": "https://www.linkedin.com/in/satyanadella"}]'
See How to receive LinkedIn data via webhooks for the full setup.
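
The linked guide covers the full setup, but for local experiments a minimal receiver can be sketched with Python's built-in HTTP server. The handler and variable names are illustrative, and a production endpoint would add authentication, HTTPS, and error handling:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

snapshots = []  # delivered result batches, newest last

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept a POSTed JSON delivery and stash the parsed body."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        snapshots.append(json.loads(body))
        self.send_response(200)  # acknowledge receipt so delivery isn't retried
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

def start_receiver(port=0):
    """Run the receiver in a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), WebhookHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The `webhook` URL you pass to /trigger would then point at this server's public address.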

Limits and constraints

Constraint                             Value
Max input file size                    1 GB
Max concurrent batch requests          100
Max concurrent single-input requests   1,500
Webhook delivery size                  Up to 1 GB
API download size                      Up to 5 GB
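
To stay under the concurrency caps, one approach is to combine inputs into fewer, larger batches. A sketch of such a chunking helper (the default batch size here is an arbitrary example, not an API limit):

```python
def chunk_inputs(urls, batch_size=5000):
    """Split a large URL list into a few large batches, so you send
    fewer concurrent /trigger requests instead of many small ones."""
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]
```

Each resulting batch can then be passed to its own trigger request, as long as its serialized size stays under the 1 GB input limit.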

Troubleshooting

Too many concurrent requests: You’ve exceeded the concurrent request limit. Reduce the number of parallel requests or combine inputs into fewer, larger batches. Each batch can include up to 1 GB of input data.

Collection failed: Check that all input URLs are valid LinkedIn URLs. Review the error details in the snapshot response or in the Logs tab of your Bright Data dashboard.

Partial failures: Some URLs may fail individually while the overall job succeeds. Check the snapshot response for an errors field and retry the failed URLs in a separate request.
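
The retry advice can be sketched as a small filter. This assumes each result record keeps its input url and carries an error field on failure; adjust the field names to the actual snapshot schema:

```python
def failed_urls(records):
    """Collect input URLs from records that carry an error, so they
    can be re-submitted in a separate trigger request."""
    return [r["url"] for r in records if r.get("error")]
```

The returned list can be fed straight back into a new /trigger request.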

Next steps

Set up webhooks: Receive results without polling.

Deliver to S3: Send results directly to your S3 bucket.