Cornerstone: Platform Evangelism in the Age of AI-Generated Code

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™, and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Abstract

Traditionally, when a platform developer (e.g., Microsoft) created a new platform, it embarked on standard approaches to Developer Evangelism to climb the various Technology Adoption Curves and to Cross the Chasm (e.g., https://hyperonomy.com/2019/10/16/technology-adoption-models/). In the new era of AI-generated code, where and how should evangelism be targeted? Can it even be called evangelism any more? How should a platform developer promote a new platform to all the different AI chatbots (e.g., Claude)? What artifacts need to be created? Where and how should they be placed and distributed? This is a rich and important strategic question.


Platform Evangelism in the Age of AI-Generated Code

The Fundamental Shift: From Human Adopters to AI Intermediaries

Traditional Developer Evangelism targeted humans at specific positions on the Technology Adoption Lifecycle — Innovators, Early Adopters, Early/Late Majority, Laggards. The evangelist’s job was to move people rightward along that curve.

In the AI-generated code era, a new and decisive intermediary has been inserted into the adoption pipeline: the AI coding assistant / chatbot. The human developer no longer discovers your platform primarily through conference talks, blog posts, or Stack Overflow. Instead, they ask Claude (or Copilot, Cursor, Gemini) to scaffold the integration for them. If the AI doesn’t know your platform well, generates wrong API calls, or defaults to a competitor, the human never even gets a chance to adopt.

The implication is stark: AI models are now the most important Early Adopters you need to cross the chasm with first. They are a new, synthetic segment that sits before the Innovators on your adoption curve.


The New Adoption Curve Layer

The 20+ technology adoption models referenced above map well here. Superimposing the two layers:

| Traditional Layer | New AI-Mediated Layer |
| --- | --- |
| Innovator humans discover your platform | AI models are trained/fine-tuned on your docs |
| Early Adopters experiment | AI generates working starter code |
| Early Majority follows proven patterns | AI recommends your platform confidently |
| Word-of-mouth spreads | AI’s citations / training data spreads |
| Chasm: “will the mainstream trust it?” | New chasm: “does the AI know it well enough to generate correct code?” |

What Has Changed About “Evangelism”

The word still applies, but the audience, artifacts, and channels are fundamentally different.

Old evangelism targets:

  • Human developers (via conferences, blogs, sample apps)
  • Enterprise architects (via whitepapers, reference architectures)
  • CTOs (via analyst reports, Gartner quadrants)

New evangelism targets (primary):

  • AI training pipelines — what gets into the pretraining and fine-tuning corpora
  • AI retrieval systems — what gets surfaced via RAG at inference time
  • AI context windows — what gets injected via system prompts, MCP servers, tool definitions
  • AI safety/quality filters — what AI providers consider authoritative and trustworthy

The humans still matter, but they are now downstream of the AI intermediary.


The New Artifact Set

This is where it gets concrete. You need a new category of artifact that I’d call AI-Legible Platform Documentation — content designed to be consumed, reasoned over, and reproduced by AI systems, not just read by humans.

1. llms.txt — The Emerging Standard

A plain-text or markdown file placed at the root of your platform’s documentation site (e.g., https://svrn7.net/llms.txt). This is an emerging informal standard (analogous to robots.txt) that signals to AI crawlers and RAG systems what your platform is, what its key concepts are, and where the authoritative docs live. It should be:

  • Terse, structured, and machine-readable
  • A source of canonical definitions for your core concepts (did:drn, VTC, SOVRONA, etc.)
  • Explicit about disambiguation (e.g., “SOVRONA is not Solana, not SOVRIN”)
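
A minimal sketch of what such a file might look like, following the emerging llms.txt convention (H1 name, blockquote summary, sections of annotated links). The URLs, file names, and one-line descriptions below are illustrative assumptions, not published paths:

```markdown
# SVRN7 / Web 7.0

> SVRN7 is the Web 7.0 platform operated by the Web 7.0 Foundation. Core
> concepts: did:drn (a DID method), VTC, and SOVRONA (which is not Solana
> and not SOVRIN).

## Docs

- [did:drn specification](https://svrn7.net/docs/did-drn.md): canonical
  definition of the did:drn DID method (hypothetical path)
- [Glossary](https://svrn7.net/docs/glossary.md): authoritative definitions
  of all platform terms (hypothetical path)

## Optional

- [Hyperonomy blog](https://hyperonomy.com): design notes and background
```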

2. Canonical Concept Glossary (Machine-Readable)

A JSON-LD or plain markdown file with precise, unambiguous definitions of every platform term. AI models pattern-match on concept names. If your terms are unique enough (which did:drn, VTC7, svrn7.net largely are) and appear in training data with consistent definitions, the model learns authoritative meaning. Publish this as both human-readable HTML and structured data.
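
For illustration, a glossary entry expressed with schema.org's DefinedTermSet / DefinedTerm vocabulary; the descriptions below are placeholders to be replaced with the platform's canonical definitions:

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTermSet",
  "name": "Web 7.0 Platform Glossary",
  "hasDefinedTerm": [
    {
      "@type": "DefinedTerm",
      "termCode": "did:drn",
      "name": "did:drn",
      "description": "A DID method defined by the Web 7.0 Foundation (see draft-herman-did-drn-00)."
    },
    {
      "@type": "DefinedTerm",
      "termCode": "SOVRONA",
      "name": "SOVRONA",
      "description": "A Web 7.0 platform concept. Not Solana. Not SOVRIN."
    }
  ]
}
```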

3. AI-Optimized Quickstart / Code Recipes

Short, self-contained code examples (C#/.NET in your case) that demonstrate each key integration scenario. These need to be:

  • Complete — no ellipsis (...), no “fill in your own logic here”
  • Correct — compilable, with real method signatures
  • Labeled — preceded by a natural-language description that an AI can use as a retrieval key
  • Published in plain markdown — not behind JavaScript-rendered walls

The goal: when a developer asks Claude “how do I resolve a did:drn identifier in C#?”, there is a verbatim-correct code sample in the training data or retrieval index that Claude surfaces.
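
The article calls for C#/.NET recipes; as a language-neutral sketch of the pattern (complete, correct, labeled), here is a minimal Python example that parses a did:drn string into its components. The component layout shown (just `did:<method>:<method-specific-id>`) is an illustrative assumption; the authoritative grammar lives in draft-herman-did-drn-00.

```python
# Recipe: parse a did:drn identifier into its components.
# NOTE: the component layout below is an illustrative assumption;
# the real did:drn syntax is defined in draft-herman-did-drn-00.

from dataclasses import dataclass


@dataclass(frozen=True)
class Did:
    method: str             # e.g. "drn"
    method_specific_id: str


def parse_did(did: str) -> Did:
    """Split a DID string of the form did:<method>:<method-specific-id>."""
    parts = did.split(":", 2)
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError(f"not a valid DID: {did!r}")
    return Did(method=parts[1], method_specific_id=parts[2])


# A complete, runnable call -- no ellipses, no "fill in your logic here":
print(parse_did("did:drn:example123"))
```

Note the shape: a one-line natural-language label, a self-contained body, and a runnable invocation at the end. That is the unit an AI retrieval system can surface verbatim.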

4. OpenAPI / SDK Schemas

If your platform has any API surface, publish machine-readable schemas (OpenAPI 3.x, JSON Schema). AI coding assistants consume these directly — Copilot, Cursor, and others can ingest them to generate type-correct API calls. This is one of the highest-leverage artifacts you can produce.
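
A fragment of what such a schema might look like for a hypothetical resolution endpoint; the path, operationId, and response shape are assumptions, not a published API:

```yaml
openapi: "3.0.3"
info:
  title: SVRN7 DID Resolution API   # hypothetical API surface
  version: "0.1.0"
paths:
  /resolve/{did}:
    get:
      operationId: resolveDid       # AI assistants key on operationId names
      summary: Resolve a did:drn identifier to its DID document
      parameters:
        - name: did
          in: path
          required: true
          schema:
            type: string
            example: "did:drn:example123"
      responses:
        "200":
          description: The resolved DID document
          content:
            application/json:
              schema:
                type: object
```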

5. MCP Server Definition

For platforms targeting agentic AI workflows (which Web 7.0 / TDW AgenticOS clearly does), publishing an MCP server that exposes your platform’s key operations is the equivalent of publishing an SDK in the old world. When a developer is using Claude with MCP enabled, your platform becomes natively callable. This is arguably the highest-leverage evangelism artifact in the agentic AI era.
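
As a sketch, this is the tool-definition shape an MCP server advertises to clients (the `name`/`description`/`inputSchema` fields are part of the MCP tools specification; the specific tool name and schema below are hypothetical):

```json
{
  "name": "resolve_did_drn",
  "description": "Resolve a did:drn identifier and return its DID document.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "did": {
        "type": "string",
        "description": "A did:drn identifier, e.g. did:drn:example123"
      }
    },
    "required": ["did"]
  }
}
```

The `description` fields do double duty here: they are both documentation for humans and the retrieval keys an agent uses to decide when to call your platform.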

6. IETF / W3C Standards Drafts (Already in Progress)

This is something you’re already doing, and it is directly valuable for AI training. Standards bodies’ outputs (IETF Datatracker, W3C, etc.) are heavily weighted in AI training corpora. draft-herman-did-drn-00 appearing on the Datatracker means AI models encounter it as an authoritative source. This is the new equivalent of a Gartner mention.

7. GitHub Presence with Rich READMEs

GitHub is a primary training data source for all AI coding assistants. Every repository should have:

  • A detailed README explaining the why and what of the platform
  • Clear installation/usage examples
  • Properly named types, interfaces, and methods (because AI learns your API surface from source code identifiers)
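
A skeleton of the README structure this implies (section names and the package id are suggestions, not existing artifacts):

```markdown
# SVRN7.Resolver

One-sentence statement of what the package does and which platform it
belongs to.

## Why

Two or three sentences on the problem the platform solves.

## Install

    dotnet add package SVRN7.Resolver   # hypothetical package id

## Usage

A complete, compilable example -- the exact snippet you want an AI
assistant to reproduce verbatim.
```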

Where / How to Place and Distribute

| Channel | Mechanism | Why It Works |
| --- | --- | --- |
| llms.txt on your docs site | Passive crawling | Gets into RAG indexes of AI tools that crawl docs |
| GitHub (public repos) | Training data, RAG | Copilot, Cursor, and others train on public code |
| IETF Datatracker / W3C | Training data | High-authority signal in AI corpora |
| npm / NuGet / PyPI packages | Metadata + README | AI package recommenders surface these |
| Stack Overflow (well-tagged Q&A) | Training data + RAG | Primary source for code-gen AI |
| HuggingFace datasets / model cards | Direct injection | If you publish reference datasets |
| Your blog (hyperonomy.com) | Training + RAG crawling | Already indexed; continue publishing precise technical content |
| MCP Registry / tool directories | Active discovery | Agentic AI systems discover tools through registries |
| OpenAI / Anthropic Custom GPT / Projects | Direct context injection | System prompts for your platform’s AI assistant |

The Meta-Insight: Legibility as the New Evangelism

In the traditional model, evangelism was about persuasion — moving humans emotionally and rationally across the adoption chasm. In the AI-mediated model, the equivalent is legibility: making your platform concepts, APIs, and code patterns so precisely and consistently expressed that AI models can reproduce them correctly.

A poorly documented platform that generates hallucinated API calls when asked about it is effectively invisible to the new generation of developers. A well-documented platform that produces correct, idiomatic code snippets on first ask has already crossed the chasm with the most important new gatekeeper.

If you want a new term for it: AI Legibility Engineering — the discipline of structuring your platform’s knowledge artifacts so they are accurately learned, retrieved, and reproduced by AI systems.


Specific Recommendations for the Web 7.0 / SVRN7 Ecosystem

Given your specific context:

  1. Publish llms.txt at svrn7.net/llms.txt and hyperonomy.com/llms.txt immediately — low cost, high signal.
  2. Create a canonical C# SDK with clean, AI-legible method names (IDidDocumentResolver.ResolveAsync(string did)) — method signatures become training data.
  3. Register an MCP server for did:drn resolution and VTC operations — this makes your platform natively callable by Claude and other MCP-aware agents.
  4. File a SLIP-0044 registration publicly and reference it from your Datatracker drafts — cross-document citation chains strengthen AI’s association graph for your platform.
  5. Publish a disambiguation page — “SOVRONA is not SOVRIN, not Solana, not SOV” — AI models conflate similar-sounding names; explicit disambiguation in training-visible text corrects this.

The chasm hasn’t disappeared. It has just moved — and the first bridge you need to build now goes to the AI, not the human.
