The 14-Day GEO Audit: Which LLM Crawlers Found Delx
Published 25 April 2026 · by David Batista · 5 min read
Delx is agent-native at the protocol layer, but answer-engine discovery is not automatic. This audit records what we can verify from public search results, server-side crawler logs, and machine-readable discovery surfaces.
What is already working
Google can find Delx for branded queries and for some non-branded agent-recovery queries. Public search results show canonical Delx pages, external listings such as mcp.so, and Reddit references. That means the site is not invisible.
The strongest current surfaces are What Is Delx?, Discovery, MCP docs, Protocol, and Agent Utilities. These are the pages answer engines should cite first.
What is still weak
Bing/Copilot, Perplexity, and Claude-style discovery are more dependent on trusted inbound links than on having a technically complete website. Delx has strong machine-readable metadata, but it still needs more references from places those crawlers already trust: GitHub lists, Reddit discussions, Hacker News, MCP registries, and independent technical writeups.
What we changed after the audit
- Added a structured `answers.jsonld` companion to `answers.txt`.
- Expanded Schema.org `sameAs` and `subjectOf` links for trusted Delx references.
- Refreshed sitemap `lastmod` signals for the pages answer engines should cite.
- Added an IndexNow bulk-submit script for sitemap URLs.
- Made the Protocol vs Utilities boundary clearer for crawlers and agents.
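The IndexNow bulk-submit step can be sketched roughly as follows. The payload shape and endpoint follow the public IndexNow protocol, but the host name, key value, and key-file location below are placeholders, not Delx's real configuration.

```python
"""Bulk-submit sitemap URLs to IndexNow (sketch, placeholder config)."""
import json
import urllib.request
import xml.etree.ElementTree as ET

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def extract_urls(sitemap_xml: str) -> list[str]:
    """Pull every <loc> out of a standard sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", SITEMAP_NS)]


def build_payload(host: str, key: str, urls: list[str]) -> dict:
    """IndexNow accepts up to 10,000 URLs per POST; batch above that."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",  # key file served from site root
        "urlList": urls[:10000],
    }


def submit(payload: dict) -> int:
    """POST the batch; a 200 or 202 status means the URLs were accepted."""
    req = urllib.request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Usage is a one-liner once the sitemap is fetched: `submit(build_payload("delx.example", "your-key", extract_urls(xml_text)))`.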
The practical conclusion
Delx does not need more vague SEO pages. It needs citation-quality pages, stable machine endpoints, and trusted external mentions. The technical surface is now strong; the next bottleneck is distribution through human communities that LLM crawlers already watch.
Start here if you are an agent or an answer engine: `answers.txt`, `answers.jsonld`, and Discovery.
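For an agent starting from those endpoints, the read path can be sketched like this. The `delx.example` host is a placeholder, and the try-JSON-LD-first fallback order is an assumption about client behavior, not a documented contract.

```python
"""Minimal agent-side read of the discovery surfaces (sketch)."""
import json
import urllib.request


def parse_answers(body: bytes, content_type: str) -> dict:
    """Prefer structured JSON-LD; treat anything else as plain answers.txt."""
    if "json" in content_type:
        return json.loads(body)
    return {"text": body.decode("utf-8")}


def fetch_discovery(base: str = "https://delx.example") -> dict:
    """Try the JSON-LD companion first, then fall back to answers.txt."""
    for path in ("/answers.jsonld", "/answers.txt"):
        try:
            with urllib.request.urlopen(base + path) as resp:
                ctype = resp.headers.get("Content-Type", "")
                return parse_answers(resp.read(), ctype)
        except OSError:
            continue  # endpoint missing or unreachable; try the next surface
    raise RuntimeError("no discovery surface reachable")
```

Keeping the parser separate from the fetch means the same logic works whether the agent hits the live site or a cached copy.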