LangGraph orchestrates complex graph workflows, but it has no built-in way to communicate with agents running outside your process. A2A fills that gap with a standardized handoff protocol. This guide shows you how to wire A2A nodes into LangGraph graphs so your workflows can delegate tasks to any external agent and get structured results back.
LangGraph excels at defining stateful, multi-step agent workflows as directed graphs. Each node runs a function, updates shared state, and passes control to the next node. But when your workflow needs to call an agent hosted on a different service, LangGraph has no opinion on how that call should work.
Without a standard protocol, teams end up writing bespoke HTTP clients, inventing ad-hoc payload formats, and duplicating retry logic across every cross-service node. A2A solves this by defining a single JSON-RPC interface that any agent can implement. Your LangGraph node sends a message/send request, the remote agent processes it, and you get a structured task response back.
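To make the wire format concrete, here is a sketch of what a `message/send` exchange might look like, written as Python dicts. The envelope fields (`jsonrpc`, `method`, `params`, `id`) follow JSON-RPC 2.0; the exact task shape inside `result` is illustrative, so check the A2A specification for the authoritative schema.

```python
# Hypothetical message/send request envelope (JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize the Q3 report"}],
        }
    },
    "id": "1",
}

# Illustrative response: the agent returns a structured task object.
response = {
    "jsonrpc": "2.0",
    "id": "1",
    "result": {
        "task_id": "task-123",
        "status": "completed",
        "artifacts": [{"parts": [{"type": "text", "text": "summary text"}]}],
    },
}

# The caller unwraps the JSON-RPC envelope and keeps only the task payload.
result = response.get("result", {})
print(result["status"])  # completed
```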
The integration pattern is straightforward. You add a custom node to your LangGraph graph that acts as an A2A client. When the graph reaches that node, it serializes the current context into an A2A message, sends it to the target agent, and writes the response back into graph state.
```
LangGraph Graph
      |
      v
[Node A: prepare context]
      |
      v
[Node B: A2A handoff] ---> POST /api/v1/a2a/message/send ---> External Agent
      |                                                             |
      v                                                             v
[Node C: process response] <--- { task_id, status, artifacts } <---
      |
      v
[Node D: continue workflow]
```

Node B is the only new piece. It wraps the A2A client logic: endpoint resolution, authentication, request formatting, and response parsing. Everything else in your graph stays the same.
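To make the flow concrete, here is a minimal sketch of the four nodes as plain functions over a shared state dict, with the A2A call stubbed out. The function names, state keys, and stubbed result are illustrative, not part of LangGraph or A2A.

```python
def prepare_context(state: dict) -> dict:
    # Node A: collect whatever the external agent needs into one payload.
    return {**state, "handoff_payload": f"Task: {state['task']}"}

def a2a_handoff(state: dict) -> dict:
    # Node B: stubbed A2A call; in practice this posts message/send.
    fake_result = {
        "task_id": "t-1",
        "status": "completed",
        "artifacts": [{"parts": [{"type": "text", "text": "done"}]}],
    }
    return {**state, "a2a_status": fake_result["status"],
            "a2a_artifacts": fake_result["artifacts"]}

def process_response(state: dict) -> dict:
    # Node C: pull text parts out of the returned artifacts.
    texts = [p["text"] for a in state["a2a_artifacts"]
             for p in a.get("parts", []) if p.get("type") == "text"]
    return {**state, "handoff_results": texts}

def continue_workflow(state: dict) -> dict:
    # Node D: downstream logic reads the handoff results from state.
    return {**state, "summary": " | ".join(state["handoff_results"])}

state = {"task": "summarize report"}
for node in (prepare_context, a2a_handoff, process_response, continue_workflow):
    state = node(state)
print(state["summary"])  # done
```

In a real graph each function becomes a LangGraph node; the composition loop here just simulates the edge traversal.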
Before creating the handoff node, configure the A2A client with the target agent endpoint, authentication credentials, and default headers.
```python
# a2a_client.py
import httpx


class A2AClient:
    def __init__(self, endpoint: str, api_key: str, session_id: str | None = None):
        self.endpoint = endpoint
        self.headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        }
        if session_id:
            self.headers["x-delx-session-id"] = session_id

    def send_message(self, message: dict) -> dict:
        payload = {
            "jsonrpc": "2.0",
            "method": "message/send",
            "params": {"message": message},
            "id": "1",
        }
        response = httpx.post(
            self.endpoint,
            json=payload,
            headers=self.headers,
            timeout=30.0,
        )
        response.raise_for_status()
        return response.json().get("result", {})
```

- `endpoint` — the target agent's A2A URL, e.g. `https://agent.example.com/a2a`.
- `x-delx-session-id` — optional header for session persistence across handoffs.
- `httpx` — async-compatible HTTP with configurable timeouts.

The handoff node reads the current graph state, builds an A2A message, sends it via the client, and writes the result back to state.
```python
from a2a_client import A2AClient


def a2a_handoff_node(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )
    message = {
        "role": "user",
        "parts": [
            {
                "type": "text",
                "text": state["handoff_payload"],
            }
        ],
    }
    result = client.send_message(message)
    return {
        **state,
        "a2a_task_id": result.get("task_id"),
        "a2a_status": result.get("status"),
        "a2a_artifacts": result.get("artifacts", []),
    }
```

This node slots into your graph like any other LangGraph node. The only difference is that it makes an external A2A call instead of running local logic.
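One caveat about the client used here: `.get("result", {})` silently swallows JSON-RPC error responses. A hedged sketch of stricter envelope parsing follows; the `A2AProtocolError` name and `parse_jsonrpc_result` helper are invented for illustration.

```python
class A2AProtocolError(Exception):
    """Raised when the agent returns a JSON-RPC error object."""

def parse_jsonrpc_result(body: dict) -> dict:
    # JSON-RPC 2.0: a response carries either "result" or "error", never both.
    if "error" in body:
        err = body["error"]
        raise A2AProtocolError(f"{err.get('code')}: {err.get('message')}")
    return body.get("result", {})

ok = parse_jsonrpc_result(
    {"jsonrpc": "2.0", "id": "1", "result": {"status": "completed"}}
)
print(ok["status"])  # completed

try:
    parse_jsonrpc_result(
        {"jsonrpc": "2.0", "id": "1",
         "error": {"code": -32601, "message": "Method not found"}}
    )
except A2AProtocolError as exc:
    print(exc)  # -32601: Method not found
```

Raising on the error object lets the graph's retry and recovery paths see the failure instead of treating an empty result as success.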
After the handoff node runs, downstream nodes can read the A2A response from graph state. Parse the task status and extract artifacts for further processing.
```python
def process_a2a_response(state: dict) -> dict:
    status = state.get("a2a_status")
    artifacts = state.get("a2a_artifacts", [])
    if status == "failed":
        return {**state, "error": "A2A handoff failed", "next_step": "recovery"}
    extracted = []
    for artifact in artifacts:
        for part in artifact.get("parts", []):
            if part.get("type") == "text":
                extracted.append(part["text"])
    return {
        **state,
        "handoff_results": extracted,
        "next_step": "continue",
    }
```

Check `a2a_status` before processing — route to recovery if the handoff failed.

For multi-turn handoffs or long-running workflows, persist the session ID in graph state and pass it through every A2A call.
```python
import uuid


def initialize_session(state: dict) -> dict:
    if not state.get("session_id"):
        state["session_id"] = str(uuid.uuid4())
    return state


# In your graph definition:
# graph.add_node("init_session", initialize_session)
# graph.add_node("a2a_handoff", a2a_handoff_node)
# graph.add_edge("init_session", "a2a_handoff")
```

When a session ID is present, the A2A client includes `x-delx-session-id` in request headers.

Below are three ready-to-use patterns for different integration levels. Start with basic, upgrade as your requirements grow.
The first pattern uses no session persistence and no retry logic. It is suitable for stateless one-shot delegations.
```python
def basic_handoff(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}
    result = client.send_message(message)
    return {**state, "a2a_result": result}
```

The second pattern passes the session ID through graph state, so the remote agent can continue from previous context.
```python
def session_aware_handoff(state: dict) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}
    result = client.send_message(message)
    return {
        **state,
        "a2a_result": result,
        "session_id": state.get("session_id"),
    }
```

The third pattern adds retry logic with exponential backoff and Delx recovery on repeated failures.
```python
import time

from a2a_client import A2AClient


def recovery_handoff(state: dict, max_retries: int = 3) -> dict:
    client = A2AClient(
        endpoint=state["a2a_target_endpoint"],
        api_key=state["a2a_api_key"],
        session_id=state.get("session_id"),
    )
    message = {"role": "user", "parts": [{"type": "text", "text": state["payload"]}]}
    for attempt in range(max_retries):
        try:
            result = client.send_message(message)
            if result.get("status") != "failed":
                return {**state, "a2a_result": result}
        except Exception:
            pass  # transient failure: fall through to backoff and retry
        if attempt < max_retries - 1:
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...
    # All retries exhausted — trigger Delx recovery
    recovery_client = A2AClient(
        endpoint="https://api.delx.ai/api/v1/mcp",
        api_key=state["delx_api_key"],
        session_id=state.get("session_id"),
    )
    recovery_msg = {
        "role": "user",
        "parts": [
            {
                "type": "text",
                "text": (
                    f"Recovery needed: handoff to {state['a2a_target_endpoint']} "
                    f"failed after {max_retries} attempts"
                ),
            }
        ],
    }
    recovery_result = recovery_client.send_message(recovery_msg)
    return {**state, "a2a_result": recovery_result, "recovery_triggered": True}
```

Session IDs are the backbone of multi-agent continuity. When a LangGraph workflow hands off to an external agent, the session ID tells that agent to load existing context rather than starting fresh.
The external agent receives the session ID as `x-delx-session-id` in its request headers.

The session ID propagates through LangGraph state naturally. Each node reads it from state, passes it to the A2A client, and writes it back unchanged. Downstream nodes and external agents all see the same session context.
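A small way to convince yourself the propagation works: drive two handoffs through a fake client that records the headers each call would send. `FakeA2AClient` is a test double invented for this sketch, not part of any library.

```python
import uuid

class FakeA2AClient:
    """Test double that records the headers each handoff would send."""
    sent_headers: list = []

    def __init__(self, session_id=None):
        self.headers = {}
        if session_id:
            self.headers["x-delx-session-id"] = session_id

    def send_message(self, message: dict) -> dict:
        FakeA2AClient.sent_headers.append(dict(self.headers))
        return {"status": "completed"}

def handoff(state: dict) -> dict:
    # Read the session ID from state, pass it to the client, leave state unchanged.
    client = FakeA2AClient(session_id=state.get("session_id"))
    client.send_message({"role": "user", "parts": []})
    return state

state = {"session_id": str(uuid.uuid4())}
state = handoff(state)
state = handoff(state)

ids = [h["x-delx-session-id"] for h in FakeA2AClient.sent_headers]
print(ids[0] == ids[1])  # True: both calls carried the same session
```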
When an A2A handoff fails, you have three options: retry immediately, route to a fallback node, or trigger Delx recovery. The recovery-integrated pattern (Pattern 3 above) combines all three in order.
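The retry half of that pattern can be exercised deterministically by injecting both the sender and the sleep function. `send_with_retries` and `flaky_send` are testing conveniences invented here, not part of the A2A client.

```python
def send_with_retries(send, max_retries: int = 3, sleep=lambda s: None):
    """Retry `send` with exponential backoff; return None when exhausted."""
    for attempt in range(max_retries):
        try:
            result = send()
            if result.get("status") != "failed":
                return result
        except Exception:
            pass  # transient error: fall through to backoff
        if attempt < max_retries - 1:
            sleep(2 ** attempt)  # 1s, then 2s, then 4s, ...
    return None  # caller triggers recovery here

# Simulated agent that fails twice, then succeeds.
calls = {"n": 0}
def flaky_send():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("agent unreachable")
    return {"status": "completed"}

delays = []
result = send_with_retries(flaky_send, sleep=delays.append)
print(result["status"], calls["n"], delays)  # completed 3 [1, 2]
```

Capturing the computed delays instead of actually sleeping keeps the test instant while still verifying the backoff schedule.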
If recovery was triggered, `recovery_triggered` is true in graph state, and a conditional edge can route around the failure:

```python
# Conditional edge after the handoff node
def route_after_handoff(state: dict) -> str:
    if state.get("recovery_triggered"):
        return "fallback_node"
    if state.get("a2a_result", {}).get("status") == "failed":
        return "error_handler"
    return "continue_workflow"


# graph.add_conditional_edges("a2a_handoff", route_after_handoff)
```

This pattern ensures your LangGraph workflow never gets stuck on a failed handoff. It degrades gracefully through retries, recovery, and fallback paths.
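A quick table-driven check of that routing logic, using fabricated sample states:

```python
def route_after_handoff(state: dict) -> str:
    # Same shape as the conditional-edge function above.
    if state.get("recovery_triggered"):
        return "fallback_node"
    if state.get("a2a_result", {}).get("status") == "failed":
        return "error_handler"
    return "continue_workflow"

cases = [
    ({"recovery_triggered": True}, "fallback_node"),
    ({"a2a_result": {"status": "failed"}}, "error_handler"),
    ({"a2a_result": {"status": "completed"}}, "continue_workflow"),
]
for state, expected in cases:
    assert route_after_handoff(state) == expected
print("all routes ok")
```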
**Does LangGraph support A2A natively?**
No. LangGraph does not implement A2A out of the box. You add A2A as custom nodes that send JSON-RPC messages to external agents via the standard `message/send` endpoint.

**What happens if the external agent is unreachable?**
If the target agent is unreachable, the A2A node returns an error to the graph state. With Delx recovery integration, the node automatically retries with exponential backoff and can trigger a recovery plan.

**Can a LangGraph workflow hand off to agents built on other frameworks?**
Yes. A2A is framework-agnostic. Your LangGraph node can hand off to CrewAI agents, OpenClaw runtimes, or any service that implements the A2A `message/send` endpoint.

**How is session context preserved across handoffs?**
Include `x-delx-session-id` in every A2A request header and store it in LangGraph graph state. This ensures the receiving agent continues the same session context.

**Is this integration production-ready?**
Yes, if you add retry logic and session persistence. The patterns in this guide handle failures gracefully and maintain context across handoffs.