
The IDE

This is where you write and test your JavaScript code for web scraping. It provides a complete coding environment with built-in tools for efficient data extraction. Learn more about the basics of web scraping.
A: View templates
  • Pre-built template code is created by our scraper engineers to help you get started quickly with common websites and scraping patterns. Please note that these templates are examples only and may require adjustments if the website’s structure has changed.
B: Add another step (stage)
  • Use stages when you need to collect data across multiple pages. For example, to collect all product URLs from an Amazon search results page and then gather details for each product, you can split the work:
      ◦ Stage 1 (Discovery): Collect product URLs from the search results and pass them to Stage 2
      ◦ Stage 2 (Product Page): Visit each URL to extract product details
  • The next_stage and run_stage commands are available for passing data between stages, as shown in the sketch below.
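A minimal sketch of the two stages, assuming the IDE's navigate(), parse(), collect(), and next_stage() helpers; the input fields (search_url, url) and the parser's product_urls field are hypothetical:

```javascript
// Stage 1 (Discovery) interaction code -- a minimal sketch
navigate(input.search_url);    // load the search results page
let {product_urls} = parse();  // parser code returns the discovered links
for (let url of product_urls)
    next_stage({url});         // queue each URL as an input set for Stage 2

// Stage 2 (Product Page) interaction code
navigate(input.url);           // visit one product URL produced by Stage 1
collect(parse());              // save the parsed product fields as output
```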
C: Help (Functions)
D: Debugging Tabs
  • Input: Define your input parameters and run a test (preview) with an input set
  • Output: The extracted, structured data returned by the collector, containing all configured fields and their values from the scraped website
  • Children: The list of child input sets that will be passed to the next stage
  • Run log: The code execution log
  • Browser network: The scraper browser’s network logs (equivalent to the Network tab in the browser’s developer tools)
  • Last errors: A list of the most recent errors
  • Crawl inspector: A debugging tool that displays all pages crawled during a batch job, including both successful and failed pages. For multi-stage scrapers, use the ‘Search for children’ button to view child pages generated from each parent page. Downloaded files are also accessible here.
  • Output schema: Displays the structure of the data your collector extracts, including field names and data types. Click ‘Edit Schema’ to modify the input/output schema (an example record is sketched below).
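For instance, the Output tab of the product-page stage sketched above might show records shaped like this; the field names, types, and values are hypothetical and mirror whatever output schema you define:

```javascript
// A hypothetical output record, matching an output schema of four fields
const record = {
    title: "Wireless headphones",             // type: text
    price: 19.99,                             // type: number
    in_stock: true,                           // type: boolean
    url: "https://www.amazon.com/dp/EXAMPLE", // type: URL
};
```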
E: Input
  • Add input parameter: Define an input parameter, including its name and type
  • New input: Add an input set value to test the scraper with
  • Preview: Run a test with the selected input set (see the example below)
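For example, the two-stage scraper sketched earlier would need a single parameter, say search_url of type URL, and a test input set like this (the parameter name and value are hypothetical):

```javascript
// A hypothetical input set for previewing the Discovery stage
const test_input = {search_url: "https://www.amazon.com/s?k=headphones"};
```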
F: Settings
  • Worker: Select the worker type the scraper will use - browser or code
  • Error mode: Set how the scraper behaves when an error occurs
  • Take screenshot: Take screenshots during the preview test so you can inspect the loaded pages afterward
G: Self-healing (AI-powered scraper refactor)
  • Fix errors and modify input/output fields with AI - no coding skills required
H: Preview
  • Test the scraper with a selected input set

Dashboard - scraper menu

The scraper menu lets you perform various actions on a scraper:
  • Initiate manually - start a scraper run from the UI (instead of triggering it via API or schedule)
  • Subscription/Run on schedule - select precisely when to collect the data you need
  • Code - edit the scraper’s code within the IDE
  • Initiate by API - start a data collection without entering the control panel (see the sketch after this list)
  • Delivery preferences - set up how and where the data from a scraper run is delivered once the job finishes
  • Tickets - view the status of your tickets
  • Report an issue - use this form to report any problems with the platform, the scraper, or the dataset results
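A hedged sketch of triggering a run by API: the endpoint, SCRAPER_ID, API_TOKEN, and input fields below are placeholders, so copy the exact request from the ‘Initiate by API’ dialog in the dashboard:

```javascript
// Placeholder sketch (Node 18+, ES module): trigger a scraper run over HTTP
// instead of from the UI. Endpoint, scraper ID, and token are hypothetical.
const res = await fetch('https://api.example.com/trigger?scraper=SCRAPER_ID', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer API_TOKEN',
        'Content-Type': 'application/json',
    },
    // one input set per object, matching the scraper's input schema
    body: JSON.stringify([{search_url: 'https://www.amazon.com/s?k=headphones'}]),
});
console.log(await res.json()); // typically returns a job ID you can poll
```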