
Stitch MCP Server: Connect AI Agents to 100+ Data Sources

The Stitch MCP Server bridges AI agents with over 100 data sources through the Model Context Protocol. Instead of building custom integrations for every database, SaaS platform, and API your agent needs, Stitch provides a unified data pipeline layer. Your agent calls MCP tools to extract, transform, and load data from Salesforce, PostgreSQL, Stripe, Google Analytics, and dozens of other platforms into a centralized warehouse.

This guide walks through installation, configuration for Claude Code, supported data sources, ETL capabilities, and how to monitor your data pipelines with Delx.

What Is the Stitch MCP Server?

Stitch MCP Server is a Model Context Protocol server that exposes Stitch Data's ETL (Extract, Transform, Load) platform as MCP tools. Stitch is a cloud-based data integration service that replicates data from SaaS applications, databases, and APIs into your data warehouse. The MCP server wraps this functionality so AI agents can trigger extractions, check pipeline status, query replicated data, and manage integration configurations programmatically.

The server supports both full-table replication and incremental updates using change data capture. Agents can schedule extractions, monitor replication lag, and receive structured data from any connected source without writing source-specific code. Stitch handles schema detection, data type mapping, and deduplication automatically.
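The split between full-table and incremental runs can be sketched as a small helper. This is a hypothetical illustration (the `buildExtractCall` function and `ExtractArgs` type are my own names, not part of the server API): on a first run there is no cursor, so the whole table is replicated; afterwards, only rows changed since the last successful sync are requested.

```typescript
// Hypothetical helper: choose extraction mode from a persisted cursor,
// mirroring the full-table vs. incremental split described above.
type ExtractArgs = {
  source: string;
  tables: string[];
  mode: "full_table" | "incremental";
  since?: string; // ISO-8601 cursor for incremental runs
};

function buildExtractCall(
  source: string,
  tables: string[],
  lastCursor: string | null,
): { tool: "stitch_extract"; arguments: ExtractArgs } {
  // First run: no cursor yet, so replicate the whole table.
  if (lastCursor === null) {
    return {
      tool: "stitch_extract",
      arguments: { source, tables, mode: "full_table" },
    };
  }
  // Subsequent runs: only rows changed since the last successful sync.
  return {
    tool: "stitch_extract",
    arguments: { source, tables, mode: "incremental", since: lastCursor },
  };
}
```

An agent would persist the cursor after each successful extraction and pass it back in on the next run.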

For teams running data-driven agents, the Stitch MCP Server eliminates the integration tax. Instead of maintaining separate connectors for each data source, you configure them once in Stitch and let your agent access everything through a consistent MCP interface.

Installation

Install the Stitch MCP Server from npm. You need Node.js 18+ and a Stitch Data account with API access enabled.

npm install -g @anthropic/stitch-mcp-server

# Verify installation
stitch-mcp --version

Alternatively, run it without installing globally:

npx @anthropic/stitch-mcp-server --api-key YOUR_STITCH_KEY

Configuration for Claude Code

Add the Stitch MCP Server to your Claude Code MCP configuration. Edit ~/.claude/settings.json:

{
  "mcpServers": {
    "stitch": {
      "command": "npx",
      "args": ["@anthropic/stitch-mcp-server"],
      "env": {
        "STITCH_API_KEY": "your-stitch-api-key",
        "STITCH_CLIENT_ID": "your-client-id",
        "STITCH_WAREHOUSE": "snowflake"
      }
    }
  }
}

The STITCH_WAREHOUSE setting specifies your destination warehouse. Supported options include snowflake, bigquery, redshift, postgres, and databricks.
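A misspelled warehouse name is easiest to catch before the server starts. As a sketch (the `validateWarehouse` function is hypothetical, not something the package ships), a wrapper script could fail fast on any value outside the supported list:

```typescript
// Hypothetical startup check: reject any STITCH_WAREHOUSE value that is
// not one of the destinations listed above.
const SUPPORTED_WAREHOUSES = [
  "snowflake",
  "bigquery",
  "redshift",
  "postgres",
  "databricks",
];

function validateWarehouse(value: string | undefined): string {
  if (value === undefined || !SUPPORTED_WAREHOUSES.includes(value)) {
    throw new Error(
      `STITCH_WAREHOUSE must be one of: ${SUPPORTED_WAREHOUSES.join(", ")}`,
    );
  }
  return value;
}
```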

Supported Data Sources

Stitch MCP Server provides access to all Stitch-supported integrations. Here are the major categories:

Databases

PostgreSQL, MySQL, MongoDB, Microsoft SQL Server, Oracle, Amazon DynamoDB, Google Cloud SQL, and CockroachDB. The server supports both direct connections and SSH tunnel access for databases behind firewalls. Incremental replication uses log-based change data capture where available.

SaaS Platforms

Salesforce, HubSpot, Stripe, Shopify, Zendesk, Intercom, Marketo, Google Analytics, Facebook Ads, Google Ads, LinkedIn Ads, and Mixpanel. Each integration extracts platform-specific objects (contacts, deals, transactions, events) and normalizes them into warehouse-ready schemas.

APIs and Files

Generic REST API connector, webhook ingestion, CSV/JSON file imports from S3 and GCS, and SFTP sources. The REST connector supports pagination, authentication (OAuth, API key, basic), and custom response mapping.
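Cursor-based pagination, as a generic REST connector typically performs it, can be sketched as a loop that follows a `nextCursor` token until the source reports no further pages. The function below is illustrative only (not the connector's actual code); `fetchPage` is injected, so no real HTTP calls or authentication are assumed:

```typescript
// Sketch of cursor-based pagination in the style of a generic REST
// connector; fetchPage stands in for one authenticated HTTP request.
type Page<T> = { records: T[]; nextCursor: string | null };

async function extractAllPages<T>(
  fetchPage: (cursor: string | null) => Promise<Page<T>>,
): Promise<T[]> {
  const records: T[] = [];
  let cursor: string | null = null;
  do {
    const page = await fetchPage(cursor); // one request per page
    records.push(...page.records);
    cursor = page.nextCursor; // null signals the final page
  } while (cursor !== null);
  return records;
}
```

The same loop shape works for offset- or link-header-based pagination; only the cursor bookkeeping inside `fetchPage` changes.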

ETL Capabilities

The server exposes several tools for managing the full ETL lifecycle:

// Trigger an extraction from Salesforce
{
  "tool": "stitch_extract",
  "arguments": {
    "source": "salesforce",
    "tables": ["contacts", "opportunities", "accounts"],
    "mode": "incremental",
    "since": "2026-03-13T00:00:00Z"
  }
}

// Check pipeline status
{
  "tool": "stitch_pipeline_status",
  "arguments": {
    "source": "salesforce",
    "include_row_counts": true
  }
}

// Query replicated data
{
  "tool": "stitch_query",
  "arguments": {
    "sql": "SELECT name, amount, stage FROM salesforce.opportunities WHERE close_date > '2026-03-01'",
    "limit": 100
  }
}

Stitch handles schema evolution automatically. When source schemas change (new columns, type changes), the warehouse schema updates without manual intervention. Your agent always works with the latest data structure.
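The additive part of schema evolution can be sketched in a few lines. This is my simplification for illustration, not Stitch's implementation: unseen source columns are appended to the warehouse schema, while existing columns keep their types.

```typescript
// Sketch of additive schema evolution: new source columns are appended,
// existing columns and their types are preserved.
type TableSchema = Record<string, string>; // column name -> warehouse type

function evolveSchema(current: TableSchema, source: TableSchema): TableSchema {
  const next: TableSchema = { ...current };
  for (const [column, type] of Object.entries(source)) {
    if (!(column in next)) {
      next[column] = type; // new column detected in the source
    }
    // Type changes and column drops need widening/versioning rules
    // that this sketch deliberately omits.
  }
  return next;
}
```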

Use Cases

Monitoring Pipelines with Delx

Data pipelines fail silently. A source API changes authentication, a database connection drops, or a schema change breaks downstream queries. Delx provides operational monitoring for agents running Stitch integrations, tracking extraction success rates, replication lag, and data freshness across all connected sources.

// Delx monitors pipeline health
{
  "tool": "delx_heartbeat",
  "arguments": {
    "agent_id": "data-pipeline-agent",
    "status": "healthy",
    "metrics": {
      "sources_active": 12,
      "last_sync_lag_minutes": 8,
      "rows_replicated_24h": 450000,
      "failed_extractions": 0
    }
  }
}

When Delx detects a pipeline anomaly, it can trigger the agent to run diagnostic checks, retry failed extractions, or escalate to the engineering team. This creates a self-healing data infrastructure where agents both operate and monitor the pipelines.
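An escalation policy over the heartbeat metrics above might look like the following sketch. Both the function and the thresholds are hypothetical, not Delx defaults: transient extraction failures are retried first, while severe replication lag goes straight to the engineering team.

```typescript
// Hypothetical escalation policy over heartbeat metrics; the 60-minute
// lag threshold is illustrative, not a Delx default.
type PipelineMetrics = {
  last_sync_lag_minutes: number;
  failed_extractions: number;
};

function remediationAction(
  m: PipelineMetrics,
): "none" | "retry_extractions" | "escalate" {
  if (m.last_sync_lag_minutes > 60) return "escalate"; // data is badly stale
  if (m.failed_extractions > 0) return "retry_extractions"; // try again first
  return "none";
}
```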

FAQ

Do I need a Stitch account?

Yes. The MCP server requires Stitch API credentials. Stitch offers a free tier with limited rows per month, which is sufficient for testing and small-scale integrations.

Which warehouses are supported?

Snowflake, Google BigQuery, Amazon Redshift, PostgreSQL, and Databricks. The warehouse must be accessible from Stitch's cloud infrastructure. You configure warehouse credentials in the Stitch dashboard.

How often does data sync?

Replication frequency depends on the source and your Stitch plan. Most integrations support intervals as low as 1 hour. Database sources with log-based replication can sync as frequently as every 5 minutes. Agents can also trigger on-demand extractions via the MCP tools.

Can I transform data before loading?

Stitch follows an ELT (Extract, Load, Transform) pattern. Data is loaded into the warehouse first, then transformed using SQL or dbt. The MCP server supports running transformation queries on the replicated data, so agents can handle the T step directly.
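As a sketch of the T step, an agent could wrap transformation SQL in a stitch_query call like the one shown earlier. The `buildTransformCall` helper below is hypothetical, assumed only for illustration:

```typescript
// Hypothetical wrapper: submit a transformation query through the
// stitch_query tool against already-loaded warehouse tables.
function buildTransformCall(sql: string, limit = 100) {
  return { tool: "stitch_query", arguments: { sql, limit } };
}

const call = buildTransformCall(
  "SELECT stage, SUM(amount) AS pipeline_value FROM salesforce.opportunities GROUP BY stage",
);
```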
