A comprehensive suite of CrewAI tools that leverage Bright Data’s powerful infrastructure for web scraping, data extraction, and search operations. These tools provide three distinct capabilities:

BrightDataDatasetTool

Extract structured data from popular data feeds (Amazon, LinkedIn, Instagram, etc.) using pre-built datasets

BrightDataSearchTool

Perform web searches across multiple search engines with geo-targeting and device simulation

BrightDataUnlockerAPITool

Scrape any website content while bypassing bot protection mechanisms
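As a rough guide to which tool fits which job, the choice above can be sketched as a simple lookup. The tool names come from the integration itself; the routing categories and the helper below are this guide's illustration, not part of the library's API:

```python
# Map a kind of task to the Bright Data tool suited for it.
# The tool class names are from the CrewAI integration; the
# task categories here are illustrative, not a library feature.
TOOL_FOR_TASK = {
    "structured_feed": "BrightDataDatasetTool",   # Amazon, LinkedIn, Instagram, etc.
    "web_search": "BrightDataSearchTool",         # search queries with geo-targeting
    "raw_scrape": "BrightDataUnlockerAPITool",    # arbitrary pages behind bot protection
}

def pick_tool(task_kind: str) -> str:
    """Return the name of the tool suited for the given task kind."""
    try:
        return TOOL_FOR_TASK[task_kind]
    except KeyError:
        raise ValueError(f"Unknown task kind: {task_kind!r}")
```

If a task doesn't fit the first two categories, the Unlocker API tool is the general-purpose fallback, since it can scrape any page.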

Steps to Get Started

To use the Bright Data tools effectively, follow these steps:
1. Obtain Your Bright Data API Key

2. Install the Bright Data Integration

Install the Bright Data integration package for CrewAI, along with aiohttp and requests, by running the following command (the quotes around the extras specifier keep shells like zsh from interpreting the brackets):
pip install 'crewai[tools]' aiohttp requests
3. Set the Environment Variables

Set your Bright Data API key and zone as environment variables:
export BRIGHT_DATA_API_KEY="your_api_key_here"
export BRIGHT_DATA_ZONE="your_zone_here"
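The tools read these variables at runtime, so a missing one typically surfaces later as an opaque authentication failure. A small sketch like the following (a suggestion of this guide, not part of the integration) can validate the configuration up front:

```python
import os

def load_brightdata_config() -> dict:
    """Read the Bright Data credentials set in the previous step.

    Raises a clear error if either variable is missing, instead of
    letting the crew fail later with an authentication error.
    """
    api_key = os.environ.get("BRIGHT_DATA_API_KEY")
    zone = os.environ.get("BRIGHT_DATA_ZONE")
    missing = [name for name, value in
               (("BRIGHT_DATA_API_KEY", api_key), ("BRIGHT_DATA_ZONE", zone))
               if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {"api_key": api_key, "zone": zone}
```

Call this once at startup, before constructing any of the tools.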
4. Select Your Preferred Bright Data Tool

The Bright Data + CrewAI integration currently supports:
# Dataset Tool - Extract Amazon Product Data
from crewai_tools import BrightDataDatasetTool

# Initialize with specific dataset and URL
tool = BrightDataDatasetTool(
    dataset_type="amazon_product",
    url="https://www.amazon.com/dp/B08QB1QMJ5/"
)
result = tool.run()
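The shape of the record returned by tool.run() depends on the dataset you chose, so inspect an actual result before writing downstream code. As a sketch of post-processing, the helper below assumes a hypothetical Amazon-style record with title, final_price, and currency keys, purely for illustration:

```python
import json

def summarize_product(raw) -> str:
    """Condense a dataset record into a one-line summary.

    The field names used here (title, final_price, currency) are
    hypothetical -- check a real result from your chosen dataset
    to see which keys it actually returns.
    """
    record = json.loads(raw) if isinstance(raw, str) else raw
    title = record.get("title", "<no title>")
    price = record.get("final_price")
    currency = record.get("currency", "")
    price_str = f"{price} {currency}".strip() if price is not None else "n/a"
    return f"{title} | {price_str}"
```

Accepting either a JSON string or an already-parsed dict keeps the helper usable regardless of whether the tool returns serialized or structured output.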

Conclusion

By integrating the Bright Data tools into your CrewAI agents, you gain access to enterprise-grade web scraping and data extraction capabilities. These tools handle complex challenges like bot protection, geo-restrictions, and data parsing, allowing you to focus on building your applications rather than managing scraping infrastructure.