The Bright Data MCP Server gives AI agents direct access to one of the largest proxy and web-scraping infrastructures on the planet. Instead of writing custom scrapers, your agent calls MCP tools that resolve through Bright Data's residential proxies, SERP APIs, and structured data collectors. The result is reliable data extraction that resists anti-bot defenses and works at scale without getting blocked.
This guide covers installation, configuration for Claude Code and other MCP hosts, practical scraping examples, and how to monitor your scraping pipelines with Delx.
Bright Data MCP Server is a Model Context Protocol server that wraps Bright Data's web scraping and proxy APIs into MCP tools. Any MCP-compatible agent (Claude Code, Cursor, Windsurf, custom agents) can call these tools to scrape websites, extract structured data from search engines, and collect ecommerce product information without managing proxy rotation, CAPTCHA solving, or browser fingerprinting.
The server exposes tools for raw HTML fetching, JavaScript-rendered page scraping, SERP extraction, and pre-built dataset collectors for Amazon, Google Shopping, LinkedIn, and dozens of other platforms. All requests route through Bright Data's proxy network, which spans over 72 million residential IPs across 195 countries.
Unlike a generic HTTP client, the Bright Data MCP Server handles retry logic, geographic targeting, session persistence for multi-page flows, and automatic CAPTCHA resolution. Your agent simply requests data and receives clean, structured responses.
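In practice this means your agent never implements retry loops itself. For intuition only, here is a minimal Python sketch of the kind of retry-with-backoff logic the server absorbs on your behalf; `fetch` is a hypothetical callable standing in for a proxied request, not part of the MCP API:

```python
import time

def fetch_with_retry(fetch, url, max_attempts=3, base_delay=1.0):
    """Retry with exponential backoff -- a sketch of logic the Bright Data
    MCP Server handles internally. `fetch` is a hypothetical callable
    returning (status_code, body)."""
    for attempt in range(max_attempts):
        status, body = fetch(url)
        if status == 200:
            return body
        # Back off exponentially before retrying (the real server would
        # also rotate to a fresh proxy session at this point)
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"failed after {max_attempts} attempts: {url}")
```

With the MCP server, all of this happens behind a single tool call; the sketch is only meant to show what you are no longer responsible for.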
Install the Bright Data MCP Server via npm. You need Node.js 18 or later and a Bright Data account with API credentials.
```shell
npm install -g @anthropic/bright-data-mcp-server

# Verify installation
bright-data-mcp --version
```
You can also run it directly with npx without global installation:
```shell
npx @anthropic/bright-data-mcp-server --api-key YOUR_KEY
```
Add the Bright Data MCP Server to your Claude Code configuration file. Create or edit ~/.claude/settings.json:
```json
{
  "mcpServers": {
    "bright-data": {
      "command": "npx",
      "args": ["@anthropic/bright-data-mcp-server"],
      "env": {
        "BRIGHT_DATA_API_KEY": "your-api-key-here",
        "BRIGHT_DATA_ZONE": "residential",
        "BRIGHT_DATA_COUNTRY": "us"
      }
    }
  }
}
```

The `BRIGHT_DATA_ZONE` setting determines which proxy type to use. Options include `residential`, `datacenter`, `isp`, and `mobile`. Residential is recommended for most scraping tasks because it has the lowest block rate.
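To make the zone trade-offs concrete, here is a hypothetical Python helper (the task names and the helper itself are illustrative, not part of the server) that maps a task type to a zone, defaulting to residential:

```python
# Hypothetical mapping from task type to BRIGHT_DATA_ZONE value.
# The trade-offs mirror the guidance above: residential has the lowest
# block rate; datacenter is faster and cheaper but blocked more often;
# isp balances the two; mobile targets mobile-specific content.
ZONE_FOR_TASK = {
    "protected-site": "residential",
    "bulk-fetch": "datacenter",
    "steady-session": "isp",
    "mobile-content": "mobile",
}

def pick_zone(task: str) -> str:
    # Default to residential, the recommended zone for most scraping
    return ZONE_FOR_TASK.get(task, "residential")
```

An agent orchestrator could use a table like this to set the environment variable per job rather than hard-coding one zone for everything.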
The Bright Data MCP Server exposes several tool categories, each optimized for different scraping scenarios.
The scrape_url tool fetches any URL through Bright Data's residential proxy network. It handles JavaScript rendering, cookie management, and automatic retries. You can target specific countries, cities, or even ASNs for geo-specific content.
```json
// Agent calls the scrape_url tool
{
  "tool": "scrape_url",
  "arguments": {
    "url": "https://example.com/pricing",
    "render_js": true,
    "country": "us",
    "wait_for": ".pricing-table",
    "output_format": "markdown"
  }
}
```

The `search_google` tool returns structured search results including titles, URLs, snippets, and featured snippets. It supports Google, Bing, and DuckDuckGo. Results come back as clean JSON, not raw HTML.
```json
{
  "tool": "search_google",
  "arguments": {
    "query": "best mcp servers 2026",
    "num_results": 20,
    "country": "us",
    "language": "en"
  }
}
```

Pre-built collectors extract structured product data from Amazon, Walmart, Google Shopping, and other platforms. The `collect_products` tool returns price, availability, reviews, seller information, and product specifications in a normalized schema.
```json
{
  "tool": "collect_products",
  "arguments": {
    "platform": "amazon",
    "search_term": "wireless headphones",
    "max_results": 50,
    "include_reviews": true
  }
}
```

Delx provides agent operations monitoring that pairs well with scraping workloads. When your agent runs scheduled scraping tasks through the Bright Data MCP Server, Delx tracks execution status, response times, error rates, and data quality metrics. If a scraping job fails due to site changes or proxy issues, Delx detects the anomaly and triggers recovery workflows.
```json
// Delx monitors the scraping agent's health
{
  "tool": "delx_heartbeat",
  "arguments": {
    "agent_id": "scraper-agent-01",
    "status": "healthy",
    "metrics": {
      "pages_scraped": 1250,
      "success_rate": 0.97,
      "avg_response_ms": 340
    }
  }
}
```

The combination of Bright Data's scraping infrastructure and Delx's operational monitoring gives you production-grade web data pipelines with full observability.
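As a sketch of how the two fit together, the loop below scrapes a batch of URLs and then reports aggregate metrics in one heartbeat. Here `call_tool` stands in for whatever MCP client callable your agent framework provides, and the `{"status": "ok"}` response shape is an assumption for illustration:

```python
def run_scrape_batch(call_tool, urls):
    """Pipeline sketch: scrape a batch via the Bright Data MCP server,
    then report aggregate metrics to Delx. `call_tool(name, arguments)`
    is a hypothetical MCP client callable returning a dict."""
    ok = 0
    for url in urls:
        result = call_tool("scrape_url", {"url": url, "render_js": True})
        if result.get("status") == "ok":
            ok += 1
    metrics = {
        "pages_scraped": len(urls),
        "success_rate": ok / len(urls) if urls else 0.0,
    }
    # One heartbeat per batch keeps Delx's view of the agent current
    call_tool("delx_heartbeat", {
        "agent_id": "scraper-agent-01",
        "status": "healthy" if metrics["success_rate"] >= 0.9 else "degraded",
        "metrics": metrics,
    })
    return metrics
```

Reporting per batch rather than per page keeps heartbeat volume low while still giving the monitor enough signal to detect a failing scraper quickly.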
Yes. The MCP server requires a valid Bright Data API key. Bright Data offers a free trial with limited bandwidth so you can test the integration before committing to a paid plan.
Residential proxies work for most scraping targets including sites with anti-bot protection. Datacenter proxies are faster and cheaper but get blocked more often. ISP proxies offer a balance between speed and reliability. Mobile proxies are best for mobile-specific content.
Yes. Set `render_js: true` to use Bright Data's browser-based scraping. You can also use `wait_for` to wait for a specific CSS selector before extracting content, ensuring dynamic content has fully loaded.
Rate limits depend on your Bright Data plan. The MCP server respects these limits and queues requests automatically. For high-volume scraping, configure concurrent request limits in the server settings to avoid exceeding your quota.
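If you also want a belt-and-suspenders cap on the client side, a simple semaphore works. This sketch assumes nothing about the server; `jobs` is just a list of zero-argument callables, each representing one scrape request:

```python
import threading

def run_limited(jobs, max_concurrent=5):
    """Client-side concurrency cap -- a sketch of keeping parallel scrape
    requests under your Bright Data plan's rate limit."""
    gate = threading.Semaphore(max_concurrent)
    results = [None] * len(jobs)

    def worker(i, job):
        with gate:  # at most max_concurrent jobs hold the gate at once
            results[i] = job()

    threads = [threading.Thread(target=worker, args=(i, j))
               for i, j in enumerate(jobs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Pairing a cap like this with the server's own queueing means a burst of agent activity degrades to slower throughput rather than quota errors.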