Web Scraper API
Overview
We are thrilled to introduce a new product, Web Scraper API, designed to simplify and enrich your data acquisition process. This new service provides a robust, streamlined way to collect data, making it easier to generate datasets tailored to your specific needs.
Data Collection APIs
Initiate a scrape
- Choose the target website from our variety of API offerings
- Upload the desired list of inputs via JSON or CSV
- Select whether to deliver the data by webhook or by API
Via Webhook:
- Select your preferred file format (JSON, NDJSON, JSON lines, CSV)
- Set your webhook URL and Authorization header if needed
- Choose whether to send it compressed or not
- Test webhook to validate that the operation runs successfully (using sample data)
- Copy the code and run it.
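The webhook steps above can be sketched as follows. This is a minimal illustration only: the endpoint URL, dataset ID, and parameter names (`format`, `endpoint`, `auth_header`, `uncompressed_webhook`) are assumptions, not the actual API contract — copy the real code from the product UI.

```python
import json

# Hypothetical trigger endpoint; the real URL comes from your account/UI.
TRIGGER_URL = "https://api.example.com/datasets/v3/trigger"

def build_webhook_trigger(dataset_id, inputs, endpoint, auth_header=None,
                          file_format="json", compress=False):
    """Assemble query parameters and a JSON body for a webhook-delivered scrape.

    All parameter names here are illustrative assumptions.
    """
    params = {
        "dataset_id": dataset_id,
        "format": file_format,              # json | ndjson | jsonl | csv
        "endpoint": endpoint,               # your webhook URL
        "uncompressed_webhook": not compress,
    }
    if auth_header:
        params["auth_header"] = auth_header  # sent back to your webhook
    body = json.dumps(inputs)                # list of inputs (or upload a CSV)
    return params, body

params, body = build_webhook_trigger(
    dataset_id="gd_example123",
    inputs=[{"url": "https://example.com/product/1"}],
    endpoint="https://my-server.example.com/webhook",
    auth_header="Bearer secret",
    file_format="ndjson",
    compress=True,
)
# A real run would then POST it, e.g.:
# requests.post(TRIGGER_URL, params=params, data=body,
#               headers={"Authorization": "Bearer YOUR_API_TOKEN",
#                        "Content-Type": "application/json"})
```

The "Test webhook" step in the UI sends sample data to `endpoint` so you can validate parsing before triggering a full run.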
Via API:
- Select your preferred delivery location (S3, Google Cloud, Snowflake, or any other available option)
- Fill out the required credentials according to your choice
- Optionally set a notification webhook URL and Authorization header
- Select your preferred file format (JSON, NDJSON, JSON lines, CSV)
- Copy the code and run it.
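The API-delivery steps above might look like this sketch for an S3 destination. The `deliver` structure and credential field names are assumptions for illustration; use the credentials form and generated code from the product UI for the real layout.

```python
import json

def build_s3_delivery(dataset_id, inputs, bucket, aws_key, aws_secret,
                      file_format="json"):
    """Assemble query params and a body that asks for delivery into S3.

    Field names ("deliver", "type", "credentials", ...) are illustrative
    assumptions, not the documented API schema.
    """
    params = {"dataset_id": dataset_id, "format": file_format}
    body = json.dumps({
        "input": inputs,
        "deliver": {
            "type": "s3",
            "bucket": bucket,
            "credentials": {
                "aws-access-key": aws_key,
                "aws-secret-key": aws_secret,
            },
        },
    })
    return params, body

params, body = build_s3_delivery(
    dataset_id="gd_example123",
    inputs=[{"url": "https://example.com/product/1"}],
    bucket="my-results-bucket",
    aws_key="YOUR_ACCESS_KEY",
    aws_secret="YOUR_SECRET_KEY",
    file_format="csv",
)
```

Swapping the `type` and credential fields would target Google Cloud, Snowflake, or another supported destination in the same shape.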
Limit records
While running a discovery API, you can set a limit on the number of results per input.
In the example below, we set a limit of 10 results per input.
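A minimal sketch of capping a discovery run at 10 results per input — the parameter name `limit_per_input` is an assumption used for illustration:

```python
def with_record_limit(params, limit):
    """Return a copy of the trigger params with a per-input record limit.

    The "limit_per_input" key is a hypothetical parameter name.
    """
    out = dict(params)
    out["limit_per_input"] = limit
    return out

params = with_record_limit({"dataset_id": "gd_example123", "format": "json"}, 10)
```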
Management APIs
Get snapshot list
Check your snapshot history with this API. It returns a list of all available snapshots, including the snapshot ID, creation date, and status.
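A response from this API can be filtered client-side, for example to keep only finished snapshots. The sample payload below is illustrative; the real response fields are the snapshot ID, creation date, and status described above.

```python
import json

# Illustrative sample of a snapshot-list response (not real data).
sample_response = json.loads("""[
  {"id": "snap_001", "created": "2024-05-01T10:00:00Z", "status": "ready"},
  {"id": "snap_002", "created": "2024-05-02T12:30:00Z", "status": "collecting"}
]""")

def ready_snapshots(snapshots):
    """Keep only the IDs of snapshots whose collection has finished."""
    return [s["id"] for s in snapshots if s["status"] == "ready"]

ready = ready_snapshots(sample_response)
```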
Monitor Progress
Check your data collection status with this API. It returns “collecting” while gathering data, “digesting” while processing, and “ready” when the data is available.
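A typical client polls this endpoint until the status reaches “ready”. The sketch below stubs out the HTTP call and steps through the three documented states; the fetcher and its route are assumptions.

```python
import time

# Stubbed sequence of statuses a real progress endpoint might return.
_STATES = iter(["collecting", "digesting", "ready"])

def fetch_status(snapshot_id):
    """Stub standing in for a hypothetical GET /progress/<snapshot_id> call."""
    return next(_STATES)

def wait_until_ready(snapshot_id, poll_seconds=0, max_polls=10):
    """Poll the status endpoint until the snapshot is ready, or give up."""
    for _ in range(max_polls):
        status = fetch_status(snapshot_id)
        if status == "ready":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("snapshot not ready after max_polls attempts")

final = wait_until_ready("snap_001")
```

In production you would set `poll_seconds` to a sensible interval (and respect any rate limits) instead of 0.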
System limitations
File size
| Stage | Limit |
| --- | --- |
| Input | up to 1GB |
| Webhook delivery | up to 1GB |
| API download | up to 5GB (for bigger files use API delivery) |
| Delivery API | unlimited |