In your code file, import the package and launch your first request:
from brightdata import bdclient  # Import the brightdata-sdk package

client = bdclient(api_token="your_api_key")
results = client.search("best selling shoes")
print(client.parse_content(results))
Try these examples to explore Bright Data's SDK functions from your IDE:
from brightdata import bdclient

client = bdclient(api_token="your-api-token")  # Can also be read from a .env file
results = client.search(query="best shoes of 2025")
# Try adding parameters such as search_engine="bing"/"yandex" or country="gb"
print(client.parse_content(results))  # Parse the result for readability and lower token usage
When working with multiple queries or URLs, requests are handled concurrently for optimal performance.
Search LinkedIn to get all the relevant data for your search. The response is structured data:
# Add the following after importing the package and creating the client

# Search LinkedIn profiles by name
first_names = ["James", "Idan"]
last_names = ["Smith", "Vilenski"]
result = client.search_linkedin.profiles(first_names, last_names)

# Search jobs by URL
job_urls = [
    "https://www.linkedin.com/jobs/search?keywords=Software&location=Tel%20Aviv-Yafo",
    "https://www.linkedin.com/jobs/reddit-inc.-jobs-worldwide?f_C=150573"
]
result = client.search_linkedin.jobs(url=job_urls)

# Search jobs by keyword and location
result = client.search_linkedin.jobs(
    location="Paris",
    keyword="product manager",
    country="FR",
    time_range="Past month",
    job_type="Full-time"
)

# Search posts by profile URL with a date range
result = client.search_linkedin.posts(
    profile_url="https://www.linkedin.com/in/bettywliu",
    start_date="2018-04-25T00:00:00.000Z",
    end_date="2021-05-25T00:00:00.000Z"
)

# Search posts by company URL
result = client.search_linkedin.posts(
    company_url="https://www.linkedin.com/company/bright-data"
)

# Each call returns a snapshot ID that can be used to download the content
# later via the download_snapshot function
In your IDE, hover over the brightdata package or any of its functions to view available methods, parameters, and usage examples.
# Add the following after importing the package and creating the client
result = client.search_chatGPT(
    prompt="what day is it today?"
    # prompt=["What are the top 3 programming languages in 2024?", "Best hotels in New York", "Explain quantum computing"],
    # additional_prompt=["Can you explain why?", "Are you sure?", ""]
)
client.download_content(result)  # On a timeout error, your snapshot_id is shown; download it later using download_snapshot()
client = bdclient(
    api_token="your_token",
    auto_create_zones=False,  # When True (the default), zones are created automatically
    web_unlocker_zone="custom_zone",
    serp_zone="custom_serp_zone"
)
Search
Add advanced search parameters
Search the web using the SERP API.
- `query`: Search query string or list of queries
- `search_engine`: "google", "bing", or "yandex"
- `zone`: Zone identifier (auto-configured if None)
- `format`: "json" or "raw"
- `method`: HTTP method
- `country`: Two-letter country code
- `data_format`: "markdown", "screenshot", etc.
- `async_request`: Enable async processing
- `max_workers`: Max parallel workers (default: 10)
- `timeout`: Request timeout in seconds (default: 30)
Scrape
Add advanced scrape parameters
Scrape a single URL or list of URLs using the Web Unlocker API.
- `url`: Single URL string or list of URLs
- `zone`: Zone identifier (auto-configured if None)
- `format`: "json" or "raw"
- `method`: HTTP method
- `country`: Two-letter country code
- `data_format`: "markdown", "screenshot", etc.
- `async_request`: Enable async processing
- `max_workers`: Max parallel workers (default: 10)
- `timeout`: Request timeout in seconds (default: 30)