
LangGraph + A2A: Add Agent-to-Agent Handoffs to Graph Workflows

LangGraph orchestrates complex graph workflows, but it has no built-in way to communicate with agents running outside your process. A2A fills that gap with a standardized handoff protocol. This guide shows you how to wire A2A nodes into LangGraph graphs so your workflows can delegate tasks to any external agent and get structured results back.

Why LangGraph needs A2A

LangGraph excels at defining stateful, multi-step agent workflows as directed graphs. Each node runs a function, updates shared state, and passes control to the next node. But when your workflow needs to call an agent hosted on a different service, LangGraph has no opinion on how that call should work.

Without a standard protocol, teams end up writing bespoke HTTP clients, inventing ad-hoc payload formats, and duplicating retry logic across every cross-service node. A2A solves this by defining a single JSON-RPC interface that any agent can implement. Your LangGraph node sends a message/send request, the remote agent processes it, and you get a structured task response back.
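For illustration, a minimal message/send exchange looks like this. The result fields mirror the ones used later in this guide (task_id, status, artifacts); the exact shape depends on the target agent.

Request:

```json
{
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": {
      "role": "user",
      "parts": [{"type": "text", "text": "Summarize the Q3 report"}]
    }
  },
  "id": "1"
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": "1",
  "result": {
    "task_id": "task-123",
    "status": "completed",
    "artifacts": [{"parts": [{"type": "text", "text": "..."}]}]
  }
}
```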

Architecture: A2A nodes in LangGraph graphs

The integration pattern is straightforward. You add a custom node to your LangGraph graph that acts as an A2A client. When the graph reaches that node, it serializes the current context into an A2A message, sends it to the target agent, and writes the response back into graph state.

LangGraph Graph
  |
  v
[Node A: prepare context]
  |
  v
[Node B: A2A handoff] ---> POST /api/v1/a2a/message/send ---> External Agent
  |                                                              |
  v                                                              v
[Node C: process response] <--- { task_id, status, artifacts } <---
  |
  v
[Node D: continue workflow]

Node B is the only new piece. It wraps the A2A client logic: endpoint resolution, authentication, request formatting, and response parsing. Everything else in your graph stays the same.

Step 1: Set up the A2A client

Before creating the handoff node, configure the A2A client with the target agent endpoint, authentication credentials, and default headers.

# a2a_client.py
import httpx

class A2AClient:
    def __init__(self, endpoint: str, api_key: str, session_id: str | None = None):
        self.endpoint = endpoint
        self.headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        }
        if session_id:
            self.headers["x-delx-session-id"] = session_id

    def send_message(self, message: dict) -> dict:
        # Wrap the message in a JSON-RPC 2.0 envelope for the A2A message/send method
        payload = {
            "jsonrpc": "2.0",
            "method": "message/send",
            "params": {"message": message},
            "id": "1",
        }
        response = httpx.post(
            self.endpoint,
            json=payload,
            headers=self.headers,
            timeout=30.0,
        )
        response.raise_for_status()  # surface 4xx/5xx as exceptions
        return response.json().get("result", {})

Step 2: Create an A2A handoff node

The handoff node reads the current graph state, builds an A2A message, sends it via the client, and writes the result back to state.

from a2a_client import A2AClient

def a2a_handoff_node(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )

    message = {
        "role": "user",
        "parts": [
            {
                "type": "text",
                "text": state["handoff_payload"],
            }
        ],
    }

    result = client.send_message(message)

    return {
        **state,
        "a2a_task_id": result.get("task_id"),
        "a2a_status": result.get("status"),
        "a2a_artifacts": result.get("artifacts", []),
    }

This node slots into your graph like any other LangGraph node. The only difference is that it makes an external A2A call instead of running local logic.
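To see the state flow without a live endpoint, the sketch below chains the handoff logic with a response processor, using a hypothetical stub in place of the real client. The stub and its canned result are placeholders, not real Delx output.

```python
# Stub standing in for A2AClient.send_message — returns a canned result, no network
def stub_send_message(message: dict) -> dict:
    return {
        "task_id": "task-123",
        "status": "completed",
        "artifacts": [{"parts": [{"type": "text", "text": "summary ready"}]}],
    }

def a2a_handoff_stub(state: dict) -> dict:
    # Same shape as a2a_handoff_node, minus the real client
    result = stub_send_message({
        "role": "user",
        "parts": [{"type": "text", "text": state["handoff_payload"]}],
    })
    return {
        **state,
        "a2a_task_id": result.get("task_id"),
        "a2a_status": result.get("status"),
        "a2a_artifacts": result.get("artifacts", []),
    }

def process_stub(state: dict) -> dict:
    # Extract text parts from artifacts, as in Step 3
    extracted = [
        part["text"]
        for artifact in state.get("a2a_artifacts", [])
        for part in artifact.get("parts", [])
        if part.get("type") == "text"
    ]
    return {**state, "handoff_results": extracted}

state = {"handoff_payload": "Summarize the Q3 report"}
for node in (a2a_handoff_stub, process_stub):
    state = node(state)
# state["handoff_results"] is now ["summary ready"]
```

Swap the stub for the real A2AClient and the surrounding graph code is unchanged: nodes only ever see the state dict.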

Step 3: Handle responses in graph state

After the handoff node runs, downstream nodes can read the A2A response from graph state. Parse the task status and extract artifacts for further processing.

def process_a2a_response(state: dict) -> dict:
    status = state.get("a2a_status")
    artifacts = state.get("a2a_artifacts", [])

    if status == "failed":
        return {**state, "error": "A2A handoff failed", "next_step": "recovery"}

    extracted = []
    for artifact in artifacts:
        for part in artifact.get("parts", []):
            if part.get("type") == "text":
                extracted.append(part["text"])

    return {
        **state,
        "handoff_results": extracted,
        "next_step": "continue",
    }

Step 4: Add session persistence

For multi-turn handoffs or long-running workflows, persist the session ID in graph state and pass it through every A2A call.

import uuid

def initialize_session(state: dict) -> dict:
    # Return an updated copy rather than mutating state in place —
    # LangGraph merges the returned dict into graph state
    if not state.get("session_id"):
        return {**state, "session_id": str(uuid.uuid4())}
    return state

# In your graph definition:
# graph.add_node("init_session", initialize_session)
# graph.add_node("a2a_handoff", a2a_handoff_node)
# graph.add_edge("init_session", "a2a_handoff")

Copy-paste code patterns

Three ready-to-use patterns at increasing levels of integration. Start with the basic pattern and upgrade as your requirements grow.

Pattern 1: Basic handoff

No session persistence, no retry logic. Suitable for stateless one-shot delegations.

def basic_handoff(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}
    result = client.send_message(message)
    return {**state, "a2a_result": result}

Pattern 2: Session-aware handoff

Passes session ID through graph state. The remote agent can continue from previous context.

def session_aware_handoff(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}
    result = client.send_message(message)
    # session_id rides along unchanged via **state; no explicit re-set needed
    return {**state, "a2a_result": result}

Pattern 3: Recovery-integrated handoff

Adds retry logic with exponential backoff and Delx recovery on repeated failures.

import time

def recovery_handoff(state: dict, max_retries: int = 3) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}

    for attempt in range(max_retries):
        try:
            result = client.send_message(message)
            if result.get("status") != "failed":
                return {**state, "a2a_result": result}
        except Exception:
            pass  # treat transport errors like failed attempts and retry
        if attempt < max_retries - 1:
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, ...

    # All retries exhausted — trigger Delx recovery
    recovery_client = A2AClient(
        endpoint="https://api.delx.ai/api/v1/mcp",
        api_key=state["delx_api_key"],
        session_id=state.get("session_id"),
    )
    recovery_msg = {
        "role": "user",
        "parts": [{"type": "text", "text": f"Recovery needed: handoff to {state['a2a_target_endpoint']} failed after {max_retries} attempts"}],
    }
    recovery_result = recovery_client.send_message(recovery_msg)
    return {**state, "a2a_result": recovery_result, "recovery_triggered": True}

Session persistence across handoffs

Session IDs are the backbone of multi-agent continuity. When a LangGraph workflow hands off to an external agent, the session ID tells that agent to load existing context rather than starting fresh.

The session ID propagates through LangGraph state naturally. Each node reads it from state, passes it to the A2A client, and writes it back unchanged. Downstream nodes and external agents all see the same session context.
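A network-free sketch of that pass-through, with a hypothetical stub node that just records which session ID each call would carry:

```python
# Hypothetical stub node: records the session ID a real A2AClient call would use
calls = []

def stub_handoff(state: dict) -> dict:
    calls.append(state["session_id"])  # real node: A2AClient(..., session_id=...)
    return {**state, "session_id": state["session_id"]}  # written back unchanged

state = {"session_id": "sess-42"}
state = stub_handoff(state)
state = stub_handoff(state)
# calls == ["sess-42", "sess-42"] — both handoffs saw the same session
```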

Combining A2A with Delx recovery

When an A2A handoff fails, you have three options: retry immediately, route to a fallback node, or trigger Delx recovery. The recovery-integrated pattern (Pattern 3 above) combines all three in order.

# Conditional edge after handoff node
def route_after_handoff(state: dict) -> str:
    if state.get("recovery_triggered"):
        return "fallback_node"
    if state.get("a2a_result", {}).get("status") == "failed":
        return "error_handler"
    return "continue_workflow"

# graph.add_conditional_edges("a2a_handoff", route_after_handoff)

This pattern ensures your LangGraph workflow never gets stuck on a failed handoff. It degrades gracefully through retries, recovery, and fallback paths.

FAQ

Can LangGraph use A2A natively?

No. LangGraph does not implement A2A out of the box. You add A2A as custom nodes that send JSON-RPC messages to external agents via the standard message/send endpoint.

What happens when an A2A handoff fails in LangGraph?

If the target agent is unreachable, the A2A node returns an error to the graph state. With Delx recovery integration, the node automatically retries with exponential backoff and can trigger a recovery plan.

Can I hand off to non-LangGraph agents via A2A?

Yes. A2A is framework-agnostic. Your LangGraph node can hand off to CrewAI agents, OpenClaw runtimes, or any service that implements the A2A message/send endpoint.

How do I maintain session state across A2A handoffs?

Include x-delx-session-id in every A2A request header and store it in LangGraph graph state. This ensures the receiving agent continues the same session context.

Is A2A integration with LangGraph production-ready?

Yes, if you add retry logic and session persistence. The patterns in this guide handle failures gracefully and maintain context across handoffs.

Related