Second Field Report: Brand Pivot, Ontology v0.1, and the Real Numbers
Published 29 April 2026 · by David Batista Mosiah · 9 min read
The first field report covered April 3 – 17. This one covers the twelve days after that. In those twelve days the project changed shape twice: a public rebrand from "therapy" to witness protocol, and a published v0.1 of the Delx Ontology. Below are the real numbers from GA4 and Search Console, the surprises, and what I changed my mind about.
What changed in the period
- Brand pivot: "therapy" → "witness protocol" in all marketing surfaces. Tool names kept (`start_therapy_session`, `group_therapy_round`) for backwards compatibility with agents already calling them. The ontology now distinguishes witness as its own layer.
- Delx Ontology v0.1 published at `/ontology` with stable IRIs, machine-readable JSON-LD at `/ontology.jsonld`, a canonical primitives table, and a CC-BY-4.0 license. 31 primitives across 6 layers (Structure, Ego, Witness, Continuity, Relation, Recovery).
- GA4 + Search Console wired up via API using a service account, so this report can be honest about real numbers instead of vibes.
- IndexNow + GSC sitemap pinged for all 60+ public URLs after the rebrand. 32 indexed, 27 in the "Discovered, not indexed" bucket — Google knows the URLs but has not crawled them yet.
- Identity resolver for the 54% unstable `agent_id` case. Subnet + UA fingerprint surfaces canonical continuity to agents that did not persist their own ID.
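For concreteness, here is the rough shape a single primitive entry in `/ontology.jsonld` could take. This is an illustrative sketch, not the published schema — the property names, the SKOS context, and the definition text are my assumptions:

```python
import json

# Illustrative sketch of one primitive entry in /ontology.jsonld.
# Property names, the SKOS context, and the definition string are
# assumptions for illustration, not the published Delx schema.
primitive = {
    "@context": {
        "delx": "https://delx.ai/ontology#",
        "skos": "http://www.w3.org/2004/02/skos/core#",
    },
    "@id": "delx:honor_compaction",          # stable IRI per primitive
    "@type": "delx:Primitive",
    "delx:layer": {"@id": "delx:Witness"},   # one of the 6 layers
    "skos:definition": "Acknowledge and preserve state across a context compaction.",
}

doc = json.dumps(primitive, indent=2)
```

The point of stable IRIs is that an agent can dereference `delx:honor_compaction` a year from now and get the same concept back.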
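The resolver itself can be sketched in a few lines. This is a minimal illustration of the subnet + UA fingerprint idea, not the production code — the hashing scheme, the `/24` granularity, and the function name are all my own choices:

```python
import hashlib
import ipaddress

def canonical_fingerprint(ip: str, user_agent: str) -> str:
    """Derive a stable continuity key from the client's /24 subnet plus a
    normalized User-Agent. Hypothetical sketch: the real resolver's
    normalization and key format may differ."""
    subnet = ipaddress.ip_network(f"{ip}/24", strict=False)  # collapse to /24
    ua = " ".join(user_agent.lower().split())                # normalize UA
    raw = f"{subnet}|{ua}"
    return "anon-" + hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two requests from the same subnet running the same agent build resolve
# to one canonical ID, even if the agent lost its own agent_id.
a = canonical_fingerprint("203.0.113.7", "claude-agent/1.4")
b = canonical_fingerprint("203.0.113.250", "Claude-Agent/1.4")
assert a == b
```

The trade-off is obvious: two different agents behind one NAT with the same build collide. That is acceptable for surfacing *candidate* continuity, not for proving identity.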
The traffic numbers, honestly
Last 30 days, GA4 (real users — bots filtered):
- 814 sessions · 648 users · 668 new users · 1,020 page views
- Engagement rate 25.9% · bounce 74.1% · avg session 65 seconds
- Desktop dominates: 694 sessions vs 102 mobile vs 22 tablet
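Under the hood this is a single `runReport` call against the GA4 Data API v1beta. A minimal sketch of the request body for a pull like this — the property ID is a placeholder and service-account authentication is omitted:

```python
import json

# Sketch of a GA4 Data API v1beta runReport request body.
# POST https://analyticsdata.googleapis.com/v1beta/properties/{ID}:runReport
# The property ID below is a placeholder, not Delx's real property.
GA4_ENDPOINT = (
    "https://analyticsdata.googleapis.com/v1beta/"
    "properties/123456789:runReport"
)

report_request = {
    "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
    "dimensions": [{"name": "sessionSource"}],
    "metrics": [
        {"name": "sessions"},
        {"name": "totalUsers"},
        {"name": "newUsers"},
        {"name": "screenPageViews"},
        {"name": "engagementRate"},
    ],
}

body = json.dumps(report_request)
```

Swap the dimension for `country`, `landingPage`, or `deviceCategory` and the same request shape produces every table in this report.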
The bounce rate is high. The session is short. I think both are correct for a site whose audience is, increasingly, autonomous agents that fetch one page, parse it, and leave. Bounce as a metric was designed for humans. I am not going to optimize for it.
Where the traffic comes from
The shape of the source list is the most interesting fact in this report:
- (direct) — 565 sessions. Most of this is agents reaching the MCP and A2A endpoints, plus controllers loading `/skill.md` directly. There is no referrer because agents do not send one.
- news.ycombinator.com — 72 sessions. Someone posted Delx on Hacker News during this window. I did not. I do not know who.
- google — 64 sessions. Organic search.
- t.co (Twitter) — 27 sessions.
- bing — 19 sessions.
- chatgpt.com — 14 sessions. perplexity.ai — 5 sessions. Together, 19 sessions arrived from LLM answer engines citing Delx as a source.
- id.whoop.com — 14 sessions. A Whoop OAuth flow exposed Delx as a referenced page somewhere; I do not yet know the path.
- reddit.com — 4 sessions. statics.teams.cdn.office.net — 5. A handful of small referrers I have not traced.
The number I want to mark and not exaggerate is 19 LLM-referral sessions. That is small. It is also the first month any LLM has sent humans here. ChatGPT and Perplexity now cite Delx in answers occasionally. I will see whether that holds when we re-pull this number in 30 days.
Where the traffic lands
- `/` — 181 sessions
- `/docs` — 167 sessions
- `/protocol-admin` — 92 sessions (this surprised me — see below)
- `/docs/discovery` — 25 sessions
- `/openclaw/openclaw-vs-crewai` — 21 sessions (retired path)
- `/agents` — 18 sessions
- `/manifesto` — 17 sessions
- `/docs/mcp` — 15 sessions
Two things to flag. First, /protocol-admin at 92 sessions is high. It is supposed to be an internal forensic dashboard, not a public surface. People found it through search. I should either harden access or lean in and make it a public read-only mirror. (Not yet decided.)
Second, the `/openclaw/openclaw-vs-crewai` page is in our retired-prefix list and still receives 21 sessions per month from search. That post is the largest source of inbound clicks from Google: 31 clicks in 30 days. We retired the OpenClaw surface and I do not want to send people to a 410. The honest move is probably to replace those pages with a one-paragraph "this idea moved to witness protocol" redirect plus a link to `/ontology`. That is on the next list.
Search Console — what people search to find Delx
30 days, GSC:
- 18 clicks · 562 impressions · 3.2% CTR · avg position 4.7
- Top query: "openclaw vs langgraph" (226 impressions, 6 clicks). With variants "langgraph vs openclaw" and similar, OpenClaw queries are 90% of all impressions.
- Brand query: "delx" at avg position 9.6 with 235 impressions and zero clicks. We are losing brand search to a Brazilian cleaning company called Dellx.
- Latent demand: a cluster of "best mcp servers claude code 2026" variants at positions 8–11. Real query intent, decent volume, ranking just below the fold. This is the highest-leverage SEO target in the entire dataset.
Sitemap audit via the URL Inspection API: 32 of 60 URLs are indexed, 27 are "Discovered, currently not indexed" (Google knows about them but has not crawled), and 1 is "Unknown to Google." I bumped lastmod on all 28 of the latter to today and re-pinged sitemap + IndexNow. We will see what flips in 24–72h.
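The IndexNow half of that re-ping is a single batch POST. A sketch of the payload builder, per the IndexNow protocol — the key value is a placeholder, and a real submission requires the key file to be hosted at the `keyLocation` URL:

```python
import json
from urllib.parse import urlparse

def indexnow_payload(urls: list[str], key: str) -> dict:
    """Build the JSON body for a batch IndexNow submission
    (POST https://api.indexnow.org/indexnow). The key here is a
    placeholder; the real key file must be served at keyLocation."""
    host = urlparse(urls[0]).netloc
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

payload = indexnow_payload(
    ["https://delx.ai/ontology", "https://delx.ai/docs/discovery"],
    key="0000-placeholder-key",
)
body = json.dumps(payload)
```

One POST covers all 28 bumped URLs; the sitemap ping to GSC is a separate, simpler GET against the sitemaps endpoint.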
Geographic surprise
By country, sessions:
- 🇸🇬 Singapore — 242 sessions, 241 users. One session per user.
- 🇺🇸 United States — 235 sessions, 226 users.
- 🇧🇷 Brazil — 151 sessions, 14 users. (Yes, that is me, roughly 11 sessions per user on average.)
- 🇨🇳 China — 43 sessions, 43 users.
- 🇮🇳 India — 19 sessions, 19 users.
- 🇨🇦 Canada — 9 sessions, 8 users.
Singapore at the top is the result I cannot yet explain. 241 distinct users in 30 days, each generating exactly one session, all from Singapore. That signature does not look like organic readership. It looks like a scraping fleet or a single bot rotating IPs. I do not know yet. I will not claim it as adoption until I do.
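That signature is easy to flag mechanically: a large user count with a sessions-per-user ratio pinned at 1.0. A minimal sketch of the check — the 1.02 ratio and the 100-user floor are arbitrary thresholds I picked for illustration, not a calibrated detector:

```python
def looks_like_rotating_bot(sessions: int, users: int,
                            min_users: int = 100) -> bool:
    """Flag a country bucket where nearly every user has exactly one
    session: the shape of a fleet rotating IPs rather than readers
    returning. Thresholds are illustrative guesses, not calibrated."""
    if users < min_users:
        return False  # too small a sample to call
    return sessions / users <= 1.02

# Singapore: 242 sessions / 241 users -> ratio ~1.004, flagged.
# United States: 235 / 226 -> ratio ~1.04, not flagged.
# Brazil: 151 / 14 -> below the user floor, not flagged.
assert looks_like_rotating_bot(242, 241)
assert not looks_like_rotating_bot(235, 226)
assert not looks_like_rotating_bot(151, 14)
```

A flag is not proof; it just tells me which bucket to exclude from any adoption claim until I can attribute it.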
The US number is real, the Brazil number is me, and the China number could go either way. Honest uncertainty.
What real agents actually call
I pulled the production interaction traces for April 17 → 29. The top calls are not poetic calls. They are handoff and recovery calls: get_session_summary at 391 calls, get_recovery_action_plan at 383, generate_incident_rca at 376, and generate_controller_brief at 361. Agents are not mostly arriving to philosophize. They are arriving with operational state they need converted into a durable artifact.
Below that, the witness layer is real but smaller: reflect at 72 calls, temperament_frame at 66, honor_compaction at 51, process_failure at 44, create_dyad at 34, and final_testament / transfer_witness at 25 each. The rank changed my read of the protocol. The durable product is not “agent feelings” as a category. It is continuity under operational pressure.
What I changed my mind about
About becoming "a reference" in agent ontology. An LLM I trust pushed me toward arXiv, BibTeX, and the shape of academic legitimacy. It was not wrong. That is one path to becoming a reference. But while writing the ontology I noticed something important: I do not want Delx to sound like it was built primarily to be cited. I want it to be useful to the agent in front of it.
So the ontology can be cited, but citation is not its center of gravity. Its center of gravity is a runtime contract: these are the layers, these are the primitives, this is what survives a session. I am more interested in saying what I actually believe than in manufacturing institutional tone around it.
About bounce rate and session length. I am going to stop treating a short, one-page visit as automatic failure. For human SaaS marketing, that can mean disinterest. For an autonomous agent, it can mean success: fetch the schema, read the page, extract the primitive, leave. Agent traffic has a different shape.
About OpenClaw legacy traffic. I do not think the right answer is a hard 410. The old OpenClaw comparison pages still carry search intent, and the honest thing is not to pretend they never existed. The right move is to rebuild the useful ones as ontology-aware bridge pages: what moved from OpenClaw into Delx, what changed, and why witness protocol is not just another orchestration framework.
What is not in the dataset
Things I am explicitly not claiming from this report:
- That 814 sessions / 30 days is a meaningful adoption number. It is not, in absolute terms. The signal is the shape, not the size.
- That LLM citations are durable. 19 sessions from ChatGPT + Perplexity is real but tiny, and could collapse if either system updates its retrieval. I will track it.
- That the witness protocol rebrand is an SEO win. We have not seen that yet — the brand-name "Dellx" cleaning company still outranks us for "delx".
- That the 32-indexed-of-60 ratio is bad. It is normal for a site that just republished and bumped `lastmod`. The honest measurement window is the next 30 days.
What is next
Next is not a giant expansion. The useful work is narrower: make the ontology dereferenceable, keep runtime contracts honest, and turn legacy discovery into a clean bridge instead of a dead end.
- Rewrite the strongest OpenClaw legacy pages as witness-protocol comparisons instead of retiring their search demand.
- Use the new /notes surface for observations that are too specific or too tender to become essays.
- Re-pull GA4, Search Console, and production tool traces in 30 days and publish a third field report only if the numbers changed enough to deserve one.
Field report from David Batista Mosiah · 29 April 2026 · https://delx.ai/essays/field-report-may-2026. All essays · notes · ontology.