Research agents require disciplined sourcing and synthesis. OpenClaw helps by turning this into repeatable stages: collect, score, extract, and synthesize -- with explicit confidence at every step. Instead of a single monolithic prompt, you break research into structured loops where each iteration builds on the last.
Start with start_therapy_session to establish context and define the research question. The call below initializes a research session with context and research parameters:
curl -X POST https://api.delx.ai/v1/mcp \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "start_therapy_session",
      "arguments": {
        "agent_id": "research-agent-01",
        "context": "Investigate current state of LLM-based code review tools. Compare accuracy, latency, and integration depth across top 5 tools. Flag contradictions between sources.",
        "session_config": {
          "checkpoint_enabled": true,
          "max_sources": 20
        }
      }
    }
  }'

Collect candidate sources, score relevance, extract structured notes, and synthesize conclusions with explicit confidence. The key discipline is treating research as an iterative loop rather than a single pass. Each loop narrows the scope, increases depth, and resolves open questions from the previous iteration.
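The collect-score-extract-synthesize loop can be sketched as plain orchestration code. This is a minimal illustration of the control flow, not OpenClaw's implementation: the `Source` shape, the callable names, and the 0.5 relevance cutoff are all assumptions supplied by the caller.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    url: str
    relevance: float  # 0.0-1.0 relevance score (assumed scale)
    notes: list = field(default_factory=list)

def research_loop(question, collect, score, extract, synthesize, max_iterations=3):
    """One possible shape for the iterative loop: collect, score,
    extract, synthesize. The collect/score/extract/synthesize callables
    are caller-supplied; none of these names come from the OpenClaw API."""
    open_questions = [question]
    findings = []
    result = {"unresolved": open_questions, "claims": findings}
    for _ in range(max_iterations):
        # Collect candidates for the current open questions, keep relevant ones.
        sources = [s for s in collect(open_questions) if score(s) >= 0.5]
        for s in sources:
            findings.extend(extract(s))
        result = synthesize(findings)
        # The next iteration narrows scope to what this pass left unresolved.
        open_questions = result["unresolved"]
        if not open_questions:
            break
    return result
```

Each iteration consumes the previous iteration's unresolved questions, which is what makes the loop converge rather than re-cover the same ground.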
Track source freshness, contradiction rate, and unresolved claims to avoid polished but weak summaries. Set a minimum source count before the agent is allowed to synthesize conclusions. Require that every factual claim cites at least one source, and flag single-source claims as lower confidence.
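The gating rules above are easy to enforce mechanically. The sketch below is local validation logic, not an OpenClaw API; the minimum-source threshold and the claim-to-citations mapping are illustrative assumptions.

```python
MIN_SOURCES = 5  # illustrative threshold, not an OpenClaw default

def check_claims(claims, sources):
    """Enforce the quality gates: enough sources before synthesis,
    every claim cited, single-source claims marked lower confidence.
    `claims` maps claim text -> list of supporting source ids (assumed shape)."""
    if len(sources) < MIN_SOURCES:
        raise ValueError(f"need at least {MIN_SOURCES} sources before synthesizing")
    report = {}
    for claim, cited in claims.items():
        if not cited:
            report[claim] = "rejected: no citation"
        elif len(cited) == 1:
            report[claim] = "low confidence: single source"
        else:
            report[claim] = "ok"
    return report
```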
Run recurring research loops with checkpointing so the agent extends prior work instead of restarting from scratch. Save intermediate findings to your own storage layer and load them as context in the next session. This lets a research project span days or weeks while keeping each individual session focused and within token limits.
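Since the checkpoint lives in your own storage layer, it can be as simple as a JSON file. A minimal sketch, assuming a local file path and a findings/open-questions state shape of your own choosing:

```python
import json
from pathlib import Path

CHECKPOINT = Path("research_checkpoint.json")  # illustrative path in your storage layer

def save_checkpoint(findings, open_questions):
    """Persist intermediate findings between sessions."""
    CHECKPOINT.write_text(json.dumps(
        {"findings": findings, "open_questions": open_questions}))

def load_checkpoint():
    """Return prior state to feed into the next session's context,
    or an empty state when no checkpoint exists yet."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"findings": [], "open_questions": []}
```

The loaded state becomes part of the `context` string (or session config) of the next start_therapy_session call, so each session extends prior work instead of restarting.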
Can OpenClaw agents gather data from multiple sources?
Yes. Research agents can query multiple APIs, databases, and web sources within a single session. Each source is scored for relevance and freshness, and the agent tracks which sources contributed to each claim.
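Combining relevance and freshness into one score can be done with a simple decay function. The weights and the 90-day half-life below are illustrative assumptions, not OpenClaw defaults:

```python
import time

def score_source(published_ts, relevance, now=None, half_life_days=90.0):
    """Blend a 0-1 relevance score with exponential freshness decay.
    A sketch: the 0.7/0.3 weighting and half-life are assumptions."""
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - published_ts) / 86400.0)
    freshness = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return 0.7 * relevance + 0.3 * freshness
```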
How do research agents handle conflicting data?
When sources disagree, OpenClaw agents flag the contradiction explicitly. The agent assigns a confidence score to each competing claim and surfaces unresolved contradictions in the final synthesis with a conflict tag for human review.
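The contradiction-surfacing behavior can be approximated by grouping competing claims per topic and tagging disagreements. The claim tuple shape here is an assumption for illustration:

```python
def flag_contradictions(claims):
    """Group claims by topic; tag topics with competing statements
    as conflicts for human review. Each claim is assumed to be a
    (topic, statement, confidence) tuple."""
    by_topic = {}
    for topic, statement, confidence in claims:
        by_topic.setdefault(topic, []).append((statement, confidence))
    synthesis = []
    for topic, versions in by_topic.items():
        statements = {s for s, _ in versions}
        status = "conflict" if len(statements) > 1 else "resolved"
        # Conflicts keep every competing claim and its confidence score
        # so the final synthesis can surface them side by side.
        synthesis.append({"topic": topic, "status": status, "claims": versions})
    return synthesis
```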
Is there a session limit for research tasks?
There is no hard session limit, but sessions have practical boundaries based on context window size. For long-running research, use checkpointing: save intermediate results and start new sessions that load the checkpoint as context.