
Install the package

Open the terminal and run:
pip install brightdata-sdk
In your code file, import the package and launch your first request:
from brightdata import bdclient # Imports the brightdata-sdk client into your code
client = bdclient(api_token="your_api_key")

results = client.search("best selling shoes")

print(client.parse_content(results))

Launch scrapes and web searches

Try these examples to call Bright Data’s SDK functions from your IDE:
from brightdata import bdclient

client = bdclient(api_token="your-api-token") # Can also be loaded from a .env file

results = client.search(query="best shoes of 2025")
# Try adding parameters like: search_engine="bing"/"yandex", country="gb"

print(client.parse_content(results)) 
# Parses the result into an easy-to-read form with low token usage
When working with multiple queries or URLs, requests are handled concurrently for optimal performance.
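To illustrate the pattern (this is a sketch of the general approach, not the SDK's internal code), the hypothetical fetch helper below shows how a batch of inputs can be dispatched to a pool of workers while preserving input order:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(query):
    # Stand-in for a single search/scrape request; the SDK issues the real HTTP call
    return f"results for {query!r}"

queries = ["best shoes of 2025", "best selling shoes", "running shoes review"]

# Dispatch all queries to a pool of worker threads; results keep the input order
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, queries))

print(results)
```

Because the pool caps concurrency at max_workers, large batches are processed in parallel without opening an unbounded number of connections.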

Use LinkedIn scrapers to get structured responses

Search LinkedIn to retrieve all the data relevant to your query; the response is returned as structured data.
# Add the following function after package import and setting the client
# Search LinkedIn profiles by name
first_names = ["James", "Idan"]
last_names = ["Smith", "Vilenski"]
result = client.search_linkedin.profiles(first_names, last_names)

# Search jobs by URL
job_urls = [
    "https://www.linkedin.com/jobs/search?keywords=Software&location=Tel%20Aviv-Yafo",
    "https://www.linkedin.com/jobs/reddit-inc.-jobs-worldwide?f_C=150573"
]
result = client.search_linkedin.jobs(url=job_urls)

# Search jobs by keyword and location
result = client.search_linkedin.jobs(
    location="Paris", 
    keyword="product manager",
    country="FR",
    time_range="Past month",
    job_type="Full-time"
)

# Search posts by profile URL with date range
result = client.search_linkedin.posts(
    profile_url="https://www.linkedin.com/in/bettywliu",
    start_date="2018-04-25T00:00:00.000Z",
    end_date="2021-05-25T00:00:00.000Z"
)
# Search posts by company URL
result = client.search_linkedin.posts(
    company_url="https://www.linkedin.com/company/bright-data"
)

# Each call returns a snapshot ID that can be used later to download the content with the download_snapshot() function
In your IDE, hover over the brightdata package or any of its functions to view available methods, parameters, and usage examples.

Use ChatGPT to collect any number of responses

# Add the following function after package import and setting the client
result = client.search_chatGPT(
    prompt="what day is it today?"
    # prompt=["What are the top 3 programming languages in 2024?", "Best hotels in New York", "Explain quantum computing"],
    # additional_prompt=["Can you explain why?", "Are you sure?", ""]  
)

client.download_content(result) # On a timeout error, your snapshot_id is shown so you can download the content later using download_snapshot()

Connect to scraping browser

Use the SDK to easily connect to Bright Data’s scraping browser:
# For Playwright (default browser_type)
from brightdata import bdclient
from playwright.sync_api import Playwright, sync_playwright

client = bdclient(
    api_token="your_api_token",
    browser_username="username-zone-browser_zone1",
    browser_password="your_password"
)

def scrape(playwright: Playwright, url='https://example.com'):
    browser = playwright.chromium.connect_over_cdp(client.connect_browser())
    try:
        print(f'Connected! Navigating to {url}...')
        page = browser.new_page()
        page.goto(url, timeout=2*60_000)
        print('Navigated! Scraping page content...')
        data = page.content()
        print(f'Scraped! Data: {data}')
    finally:
        browser.close()

def main():
    with sync_playwright() as playwright:
        scrape(playwright)

if __name__ == '__main__':
    main()

Parameters

client = bdclient(
  api_token="your_token",
  auto_create_zones=False,  # When True (the default), required zones are created automatically
  web_unlocker_zone="custom_zone",
  serp_zone="custom_serp_zone"
)
Scrape a single URL or list of URLs using the Web Unlocker API.
- `url`: Single URL string or list of URLs
- `zone`: Zone identifier (auto-configured if None)
- `format`: "json" or "raw"
- `method`: HTTP method
- `country`: Two-letter country code
- `data_format`: "markdown", "screenshot", etc.
- `async_request`: Enable async processing
- `max_workers`: Max parallel workers (default: 10)
- `timeout`: Request timeout in seconds (default: 30)
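A minimal sketch combining several of these parameters. It assumes the scrape function is exposed as client.scrape() (the method name is not shown above; check your IDE's hover hints for the exact signature):

```python
from brightdata import bdclient

client = bdclient(api_token="your_api_token")

# Scrape two URLs in parallel via the Web Unlocker API,
# returning markdown from a US exit node
results = client.scrape(
    url=["https://example.com", "https://example.com/about"],
    format="json",
    country="us",
    data_format="markdown",
    max_workers=5,
    timeout=60,
)
```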

Error handling

Enable verbose logging in the client for advanced debugging (see Client parameters), and use the list_zones() function to check which zones are available.
Create a Bright Data account and copy your API key. In your account settings, make sure the API key has admin permissions.
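For example (assuming verbose logging is enabled through a verbose flag on the client constructor, as named here):

```python
from brightdata import bdclient

# Enable verbose logging to trace requests while debugging
client = bdclient(api_token="your_api_token", verbose=True)

# List the zones available to this account before launching requests
print(client.list_zones())
```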

Resources