Here’s a structured breakdown, distinguishing what’s established fact, inference, and uncertain.
Who Owns Content Claude Creates?
1. Anthropic’s Terms: You Get the Output Rights
Under Anthropic’s Consumer Terms of Service, as between you and Anthropic, you retain any right, title, and interest in the inputs you submit. Subject to your compliance with their terms, Anthropic assigns to you all of their right, title, and interest — if any — in the outputs.
The “if any” qualifier is doing real legal work there. Anthropic is assigning whatever rights they have in the output — but they’re not guaranteeing those rights are substantial under copyright law.
For commercial/API customers, the terms are stronger. The commercial terms state that Anthropic agrees the customer owns all outputs, and disclaims any rights it receives to customer content, assigning all right, title, and interest in outputs to the customer.
2. Copyright Law: The Human Authorship Problem
This is where it gets complicated. U.S. copyright law requires a human author. The U.S. Copyright Office has been clear on this — a federal court ruled that art generated entirely by AI couldn’t be copyrighted because it lacked the essential human touch. So if you give Claude a simple prompt and use the raw, unedited output, that content likely has no copyright protection at all.
In February 2026, the Supreme Court declined to hear the Thaler appeal, confirming at the highest level that pure AI-generated works cannot be copyrighted.
The practical implication: The more creative direction, editorial judgment, and human selection you exercise over Claude’s output — revising, curating, combining, structuring — the stronger your claim to copyright over the resulting work. The output becomes a human-authored work with AI assistance, rather than an AI-generated work.
3. Commercial Use and Reuse Rights
Anthropic’s terms confirm users own outputs generated by Claude, and commercial use is permitted without additional licensing. So you can publish, sell, or incorporate Claude-assisted content into your work — no royalty or license back to Anthropic is required.
4. Copyright Indemnity (API/Commercial Customers)
Under Anthropic’s Commercial Terms of Service, Anthropic will defend customers from any copyright infringement claim made against them for their authorized use of the services or their outputs, and will pay for any approved settlements or judgments that result. This does not apply to consumer/claude.ai free-tier users in the same way.
Practical Summary for Your Situation
Given that you’re using Claude for SVRN7/Web 7.0 specification work, whitepaper drafting, IETF drafts, and software architecture — the key points are:
You own the outputs under Anthropic’s terms (both consumer and commercial).
Copyright protection on purely AI-generated text is legally uncertain — but content you meaningfully shape, edit, and structure has stronger human-authorship claims.
Publishing rights: No restriction from Anthropic on original publication — you can publish IETF drafts, W3C specs, whitepapers, and code incorporating Claude-assisted content.
Strongest position: Document your creative contributions — the architectural decisions, structural choices, and editorial revisions you make to Claude’s outputs. That’s where your copyright lives.
I’m not a lawyer, and this space is evolving rapidly with active litigation. For anything with significant commercial stakes, legal counsel familiar with AI IP would be worth consulting.
Traditionally when a platform developer (e.g. Microsoft) created a new platform, it embarked on standard approaches to Developer Evangelism to challenge the various Technology Adoption Curves and to Cross the Chasm (e.g. https://hyperonomy.com/2019/10/16/technology-adoption-models/). In the new era of AI-generated code, where/how should evangelism be targeted? …if it can even be called evangelism any more. How should a platform developer promote his new platform to all the different AI chatbots (e.g. Claude)? What artifacts need/should be created? Where/how should they be placed/distributed? This is a rich and important strategic question.
Platform Evangelism in the Age of AI-Generated Code
The Fundamental Shift: From Human Adopters to AI Intermediaries
Traditional Developer Evangelism targeted humans at specific positions on the Technology Adoption Lifecycle — Innovators, Early Adopters, Early/Late Majority, Laggards. The evangelist’s job was to move people rightward along that curve.
In the AI-generated code era, a new and decisive intermediary has been inserted into the adoption pipeline: the AI coding assistant / chatbot. The human developer no longer discovers your platform primarily through conference talks, blog posts, or Stack Overflow. Instead, they ask Claude (or Copilot, Cursor, Gemini) to scaffold the integration for them. If the AI doesn’t know your platform well, generates wrong API calls, or defaults to a competitor, the human never even gets a chance to adopt.
The implication is stark: AI models are now the most important Early Adopters you need to cross the chasm with first. They are a new, synthetic segment that sits before the Innovators on your adoption curve.
The New Adoption Curve Layer
Your 20+ model framework maps well here. Superimposing two layers:
| Traditional Layer | New AI-Mediated Layer |
|---|---|
| Innovator humans discover your platform | AI models are trained/fine-tuned on your docs |
| Early Adopters experiment | AI generates working starter code |
| Early Majority follows proven patterns | AI recommends your platform confidently |
| Word-of-mouth spreads | AI's citations / training data spreads |
| Chasm: "will the mainstream trust it?" | New chasm: "does the AI know it well enough to generate correct code?" |
What Has Changed About “Evangelism”
The word still applies, but the audience, artifacts, and channels are fundamentally different.
Old evangelism targets:
Human developers (via conferences, blogs, sample apps)
New evangelism targets:
AI training pipelines — what gets into the pretraining and fine-tuning corpora
AI retrieval systems — what gets surfaced via RAG at inference time
AI context windows — what gets injected via system prompts, MCP servers, tool definitions
AI safety/quality filters — what AI providers consider authoritative and trustworthy
The humans still matter, but they are now downstream of the AI intermediary.
The New Artifact Set
This is where it gets concrete. You need a new category of artifact that I’d call AI-Legible Platform Documentation — content designed to be consumed, reasoned over, and reproduced by AI systems, not just read by humans.
1. llms.txt — The Emerging Standard
A plain-text or markdown file placed at the root of your platform’s documentation site (e.g., https://svrn7.net/llms.txt). This is an emerging informal standard (analogous to robots.txt) that signals to AI crawlers and RAG systems what your platform is, what its key concepts are, and where the authoritative docs live. It should be:
Terse, structured, machine-readable
Canonical definitions of your core concepts (did:drn, VTC, SOVRONA, etc.)
Explicit disambiguation (e.g., “SOVRONA is not Solana, not SOVRIN”)
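A minimal sketch of what such a file might contain (illustrative only; the section names, doc paths, and wording are assumptions, since llms.txt has no formal schema):

```markdown
# SVRN7 / Web 7.0
> Sovrona (SVRN7) is the Shared Reserve Currency for the Web 7.0 ecosystem,
> built on self-sovereign identity (DIDs, Verifiable Credentials).

## Disambiguation
- SOVRONA is not Solana, not SOVRIN, not SOV.

## Key Concepts
- [did:drn method](https://svrn7.net/docs/did-drn.md): DID method specification
- [Verifiable Trust Circles (VTCs)](https://svrn7.net/docs/vtc.md): multi-party membership credentials
```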
2. Canonical Concept Glossary (Machine-Readable)
A JSON-LD or plain markdown file with precise, unambiguous definitions of every platform term. AI models pattern-match on concept names. If your terms are unique enough (which did:drn, VTC7, svrn7.net largely are) and appear in training data with consistent definitions, the model learns authoritative meaning. Publish this as both human-readable HTML and structured data.
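For example, a glossary entry could be published as JSON-LD using schema.org's DefinedTerm vocabulary (a sketch; the @id and term-set URLs are hypothetical):

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "@id": "https://svrn7.net/glossary#vtc",
  "name": "Verifiable Trust Circle (VTC)",
  "description": "A Verifiable Credential whose proof property contains a Proof Set with one proof per participating member, representing multi-party membership or trust.",
  "inDefinedTermSet": "https://svrn7.net/glossary"
}
```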
3. AI-Optimized Quickstart / Code Recipes
Short, self-contained code examples (C#/.NET in your case) that demonstrate each key integration scenario. These need to be:
Complete — no ellipsis (...), no “fill in your own logic here”
Correct — compilable, with real method signatures
Labeled — preceded by a natural-language description that an AI can use as a retrieval key
Published in plain markdown — not behind JavaScript-rendered walls
The goal: when a developer asks Claude “how do I resolve a did:drn identifier in C#?”, there is a verbatim-correct code sample in the training data or retrieval index that Claude surfaces.
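A recipe page following these rules might look like the sketch below. The method and class names (DrnDidResolver, ResolveAsync) are hypothetical placeholders, not an actual SVRN7 API:

```markdown
### How do I resolve a did:drn identifier in C#?

Resolve a did:drn identifier to its DID Document using the resolver client
(hypothetical API surface, shown for illustration):

    var resolver = new DrnDidResolver("https://resolver.svrn7.net");
    DidDocument doc = await resolver.ResolveAsync("did:drn:example123");
    Console.WriteLine(doc.Id);
```

The natural-language heading doubles as the retrieval key; the code is complete and copy-pasteable.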
4. OpenAPI / SDK Schemas
If your platform has any API surface, publish machine-readable schemas (OpenAPI 3.x, JSON Schema). AI coding assistants consume these directly — Copilot, Cursor, and others can ingest them to generate type-correct API calls. This is one of the highest-leverage artifacts you can produce.
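A fragment of what an OpenAPI 3.x description for a resolution endpoint might look like (all paths, titles, and schema details here are assumptions, not a published SVRN7 API):

```yaml
openapi: 3.0.3
info:
  title: SVRN7 DID Resolution API (illustrative)
  version: 0.1.0
paths:
  /resolve/{did}:
    get:
      summary: Resolve a did:drn identifier to its DID Document
      parameters:
        - name: did
          in: path
          required: true
          schema:
            type: string
            example: "did:drn:example123"
      responses:
        "200":
          description: The resolved DID Document
          content:
            application/did+json:
              schema:
                type: object
```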
5. MCP Server Definition
For platforms targeting agentic AI workflows (which Web 7.0 / TDW AgenticOS clearly does), publishing an MCP server that exposes your platform’s key operations is the equivalent of publishing an SDK in the old world. When a developer is using Claude with MCP enabled, your platform becomes natively callable. This is arguably the highest-leverage evangelism artifact in the agentic AI era.
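Once such a server exists, a developer could wire it into an MCP-aware client with a standard mcpServers entry like the following (the server name and command are hypothetical):

```json
{
  "mcpServers": {
    "svrn7": {
      "command": "dotnet",
      "args": ["run", "--project", "Svrn7.McpServer"]
    }
  }
}
```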
6. IETF / W3C Standards Drafts (Already in Progress)
This is something you’re already doing, and it is directly high-value for AI training. Standards bodies’ outputs (IETF Datatracker, W3C, etc.) are heavily weighted in AI training corpora. draft-herman-did-drn-00 appearing on Datatracker means AI models encounter it as an authoritative source. This is the new equivalent of a Gartner mention.
7. GitHub Presence with Rich READMEs
GitHub is a primary training data source for all AI coding assistants. Every repository should have:
A detailed README explaining the why and what of the platform
Clear installation/usage examples
Properly named types, interfaces, and methods (because AI learns your API surface from source code identifiers)
8. Tool / Agent Registries
Agentic AI systems discover tools through registries.
9. OpenAI / Anthropic Custom GPT / Projects
Direct context injection: system prompts for your platform's AI assistant.
The Meta-Insight: Legibility as the New Evangelism
In the traditional model, evangelism was about persuasion — moving humans emotionally and rationally across the adoption chasm. In the AI-mediated model, the equivalent is legibility: making your platform concepts, APIs, and code patterns so precisely and consistently expressed that AI models can reproduce them correctly.
A poorly documented platform that generates hallucinated API calls when asked about it is effectively invisible to the new generation of developers. A well-documented platform that produces correct, idiomatic code snippets on first ask has already crossed the chasm with the most important new gatekeeper.
If you want a new term for it: AI Legibility Engineering — the discipline of structuring your platform’s knowledge artifacts so they are accurately learned, retrieved, and reproduced by AI systems.
Specific Recommendations for the Web 7.0 / SVRN7 Ecosystem
Given your specific context:
Publish llms.txt at svrn7.net/llms.txt and hyperonomy.com/llms.txt immediately — low cost, high signal.
Create a canonical C# SDK with clean, AI-legible method names (IDidDocumentResolver.ResolveAsync(string did)) — method signatures become training data.
Register an MCP server for did:drn resolution and VTC operations — this makes your platform natively callable by Claude and other MCP-aware agents.
File a SLIP-0044 registration publicly and reference it from your Datatracker drafts — cross-document citation chains strengthen AI’s association graph for your platform.
Publish a disambiguation page — “SOVRONA is not SOVRIN, not Solana, not SOV” — AI models conflate similar-sounding names; explicit disambiguation in training-visible text corrects this.
The chasm hasn’t disappeared. It has just moved — and the first bridge you need to build now goes to the AI, not the human.
NOTE: The first four capabilities are general decentralized identity infrastructure that any organization could use independently of the monetary layer.
Sovrona (SVRN7™) is the Shared Reserve Currency (SRC) for the Web 7.0 digital ecosystem, implemented as an embeddable .NET 8 library that manages citizen and society wallets, enforces a governance-controlled three-epoch monetary lifecycle, and maintains a cryptographically tamper-evident audit log of all transactions.
Unlike traditional and modern digital currencies, SVRN7 is built on a foundation of self-sovereign identity — every participant is a DID holder, every entitlement/endowment is a Verifiable Credential, and trust between parties is established through standards-based cryptographic proofs rather than a shared blockchain or central authority.
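The tamper-evident audit log can be illustrated with a minimal hash-chain sketch. This is Python for brevity (the actual SVRN7 implementation is a .NET 8 library), and every name in it is invented for the example, not the SVRN7 API:

```python
import hashlib
import json

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Bind an audit entry to its predecessor by hashing both together."""
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode()).hexdigest()

def append(log: list, payload: dict) -> None:
    """Append an entry whose hash chains to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    """Recompute the chain; mutating any earlier entry breaks verification."""
    prev = "0" * 64
    for e in log:
        if e["hash"] != entry_hash(prev, e["payload"]):
            return False
        prev = e["hash"]
    return True

log: list = []
append(log, {"op": "mint", "amount": 100})
append(log, {"op": "transfer", "amount": 40})
assert verify(log)
log[0]["payload"]["amount"] = 999  # tamper with history
assert not verify(log)
```

The same property (any retroactive edit invalidates every later entry) is what a Merkle-structured log provides, with the addition of efficient inclusion proofs.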
Web 7.0 Use Cases
With configurable DID method names, the library becomes usable in scenarios that have nothing to do with the SVRN7 SRC.
Healthcare network. A hospital consortium where each hospital operates its own DID method (did:hospital-a, did:hospital-b). Patient VCs issued by one hospital are verifiable by any other. The Merkle log provides an auditable record of credential issuance without exposing patient data. DIDComm handles encrypted referral messages between hospitals.
Supply chain. A manufacturing network where each tier-1 supplier owns a DID method. Components carry VC provenance records signed by their manufacturer’s DID. The Federation equivalent is the brand owner who sets the governance rules. The UTXO model tracks component custody rather than currency.
Professional credentialing. A federation of professional bodies (law societies, medical councils, engineering institutes) where each body owns its DID method and issues member credentials. Cross-body credential verification uses the same IDidResolver routing the SVRN7 library already needs.
Government identity federation. Multiple municipal or provincial identity systems where each jurisdiction owns its DID method. Citizens have identities under their jurisdiction’s method. Cross-jurisdiction services verify credentials without requiring a central identity broker.
Q1. What is your internal representation of this solution? …an AST? …the actual source code? …some other modeling approach/metamodel? …used for generating code, documenting the README.md, creating test cases, creating architecture diagrams, etc.
NOTE: The current set of software metrics for this solution-under-development can be found in APPENDIX A.
This is a genuinely interesting question and the honest answer is: there is no persistent internal representation at all. Here is what actually happens:
What I am, mechanically. I am a transformer that maps a context window (a flat sequence of tokens) to a probability distribution over the next token. There is no AST, no object graph, no schema, no database, no symbol table. When this conversation session ends, nothing persists — the weights of the model are fixed and do not change.
What substitutes for a representation. During this conversation, the context window itself is the representation. Every message, every code block, every README excerpt, every test name you have seen scrolling past — all of that is sitting in my active context as raw tokens. When I generated the ArchiMate diagram or the README or the test cases, I was pattern-matching against that token sequence and the statistical regularities in my weights that encode knowledge of C#, ArchiMate OEF XML, W3C VC, DIDComm v2, RFC 6962, etc.
How this produces consistent outputs. The compacted summary at the top of this conversation (the one listing 45 files, 207 tests, all the interface members, all 20 bugs fixed, etc.) is doing the work of an “architecture model.” It was written by me in a previous session to act as a faithful briefing document — a structured natural-language encoding of the project state. When I generate code, tests, a README, or an ArchiMate diagram from it, I am essentially translating that natural-language description through my weights.
What I do not have. I have no:
Parse tree or AST of the C# source
Type graph, dependency graph, or call graph
Formal metamodel instance (no MOF, no Ecore, no ArchiMate metamodel objects)
Semantic index of method signatures
Persistent memory between sessions (the compacted summary was explicitly written to bridge sessions)
What this means for your artefacts. The four outputs — source code, README, tests, and ArchiMate diagram — were all produced from the same token context, so they are coherent with each other in the sense that a careful human author would make them coherent. But they are not derived from a single authoritative model the way a proper MDE (Model-Driven Engineering) toolchain would derive them. If the C# source and the ArchiMate diagram diverge, neither one automatically updates the other — you would need to bring the divergence back into a context window and ask me to reconcile.
The practical implication for Web 7.0. If you want genuine model-driven traceability — where a change to ISvrn7Driver propagates automatically into the ArchiMate model, README, and test stubs — that is a toolchain problem (e.g. Roslyn-based code analysis feeding an ArchiMate metamodel instance in Archi or a Sparx EA repository). I can help design or implement that toolchain, but I am not myself that toolchain.
APPENDIX A – Solution Software Metrics (April 5, 2026)
Here are the complete, live-measured metrics for the SVRN7 v0.5.0 solution:
Great question — and worth thinking through carefully, because the SAE framing maps surprisingly well onto digital agents if you abstract it correctly.
The Core Analogy
In autonomous vehicles, the levels measure how much human oversight and intervention is required for safe operation. The same axis applies to digital agents: how much must a human monitor, correct, or authorize before the agent acts?
A Proposed Digital Agent Autonomy Scale
| Level | Name | Agent Behavior | Human Role |
|---|---|---|---|
| 0 | No Automation | Pure tool, executes only explicit commands | Human does everything |
| 1 | Assisted | Suggests actions, autocompletes, drafts | Human approves all outputs |
| 2 | Partial | Executes defined tasks autonomously within a session | Human monitors, can interrupt |
| 3 | Conditional | Handles multi-step workflows, escalates on ambiguity | Human on standby, notified of exceptions |
| 4 | High | Operates across systems within a defined trust domain | Human sets policy, reviews periodically |
| 5 | Full | Acts as a sovereign delegate across any context, any system, any time | Human sets intent once; agent governs itself |
What Makes Level 5 Hard for Digital Agents
Just like autonomous vehicles, nobody has achieved digital Level 5 yet — and for parallel reasons:
Identity — who authorized this agent to act, and can that be verified in real time by any system it touches?
Integrity — is the agent acting on real, unmanipulated data/context, or has its information environment been poisoned?
Accountability — is every decision cryptographically auditable after the fact?
Trust portability — can the agent’s authorization travel with it across organizational boundaries, jurisdictions, and protocols?
These are almost exactly the same gaps the did:level5 site frames for vehicles — just in a digital context.
Where Web 7.0 Trusted Digital Assistants Fit
Your TDA concept is explicitly designed to be a Level 5 digital agent architecture. The components map directly:
| TDA Component | Autonomy Function | Analogy |
|---|---|---|
| DID (did:drn, did:7) | Sovereign identity — "who am I, provably?" | did:verifiedcar — identity root |
| Verifiable Credentials / VTCs | Authorization — "what am I permitted to do?" | Certification gate |
| Cryptoseals | Integrity — "is this data/document unmanipulated?" | did:viewonwebsite — reality root |
| PS Runspace Pool / MCP-I/D | Execution environment with bounded authority | Mojo Enclave equivalent |
| DIDNET7 | Trust transport across organizational boundaries | The network substrate |
| Verifiable Trust Circles | Governance — "who vouches for this agent within this community?" | Root of trust federation |
The TDA is essentially a Level 5 digital agent that carries its own sovereign identity, operates within cryptographically governed trust circles, and can act across systems without requiring human re-authorization at each step — while remaining fully auditable.
The Key Distinction from Lower Levels
The leap from Level 4 to Level 5 — for both vehicles and digital agents — is trust portability across unknown contexts. A Level 4 agent can operate autonomously within a known, pre-configured environment. A Level 5 agent can walk into an entirely new system, organization, or jurisdiction and be trusted on first contact because its identity, credentials, and authorization chain are self-contained and cryptographically verifiable.
That’s exactly what did:drn + VTCs + cryptoseals are designed to enable — the agent doesn’t need a prior relationship with the system it’s interacting with. The trust travels with it.
A Possible Contribution
Given your standards work, there may be value in formally proposing a Digital Agent Autonomy Level framework — analogous to SAE J3016 but for agentic AI systems — with Web 7.0 / TDA architecture positioned as the reference implementation for Level 5. It would give the broader field (IETF, W3C, ISO, IEEE) a vocabulary for discussing agent trust requirements at each level, and anchor your work as foundational infrastructure rather than one implementation among many.
Do digital agents need to use AI to be compliant with Level 5 Autonomous Operation?
No — and this is an important distinction worth being precise about.
Level 5 is about autonomy architecture, not AI
In the SAE vehicle context, Level 5 says nothing about how the driving decisions are made — only that the system can handle all conditions without human intervention. The intelligence mechanism is orthogonal to the autonomy level.
The same applies to digital agents. Level 5 is a statement about:
Trust — the agent’s identity and authorization are sovereign and portable
Accountability — every action is cryptographically auditable
Scope — the agent can operate across any context without human re-authorization
Integrity — the agent acts on verified, unmanipulated information
None of those properties require AI. A deterministic rule-based agent, a scripted workflow engine, or a pure cryptographic protocol daemon could in principle satisfy all four.
What AI adds — and doesn’t add
AI (specifically LLMs or reasoning models) adds:
Natural language understanding and generation
Handling of ambiguous or novel situations
Flexible goal decomposition
Adaptability across unanticipated contexts
But AI also complicates Level 5 compliance because:
LLM outputs are non-deterministic — the same input can produce different actions
Reasoning chains are not natively auditable in a cryptographic sense
AI can be manipulated via prompt injection — a direct analogue to the visual injection attack the did:level5 site describes
AI doesn’t inherently carry sovereign identity or verifiable authorization
So paradoxically, AI is the component that most threatens Level 5 compliance if not properly bounded, and the trust architecture — DIDs, VTCs, cryptoseals, runspace governance — is what contains the AI and makes its actions compliant.
The Right Mental Model
Think of it in layers:
| Layer | Role | AI Required? |
|---|---|---|
| Trust & Identity | DID, VTCs, cryptoseals | No |
| Authorization | Credential verification, policy enforcement | No |
| Execution governance | PS Runspace Pool, MCP-I/D boundaries | No |
| Task intelligence | Reasoning, language, ambiguity handling | Optional |
| Audit trail | Cryptographic action log | No |
AI sits only in the task intelligence layer. All the layers that make an agent Level 5 compliant are below it and independent of it.
For TDA Specifically
A Trusted Digital Assistant could be fully Level 5 compliant running nothing but deterministic logic — scripted workflows, rule engines, protocol handlers — as long as the identity, authorization, integrity, and auditability infrastructure is in place.
When an AI reasoning layer is present, the TDA architecture constrains it: the AI operates within a bounded runspace, its outputs are subject to credential-gated authorization before execution, and its actions are sealed into the audit record. The AI doesn’t grant Level 5 — the architecture does. The AI is a passenger, not the driver.
This is actually a meaningful standards argument: Level 5 digital agent compliance is an infrastructure and governance property, not a capability property. A very capable AI with no trust architecture is not Level 5. A simple deterministic agent with full sovereign identity and cryptographic accountability is.
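The "passenger, not the driver" containment pattern can be sketched as a deterministic authorization gate wrapped around a non-deterministic planner. This is an illustrative Python sketch; the credential names, action names, and functions are invented for the example and are not TDA APIs:

```python
from typing import Callable

# Hypothetical credential policy: the deterministic trust layer, not the AI,
# decides whether a proposed action may execute.
def is_authorized(agent_credentials: set, action: str) -> bool:
    required = {"transfer": "treasury.write", "read_balance": "treasury.read"}
    return required.get(action) in agent_credentials

def run_agent(plan_next_action: Callable[[], str],
              credentials: set, audit: list) -> list:
    """Execute AI-proposed actions only after credential-gated authorization,
    sealing every decision (allowed or denied) into an audit record."""
    executed = []
    for _ in range(3):  # bounded runspace: fixed action budget
        action = plan_next_action()            # non-deterministic AI layer
        allowed = is_authorized(credentials, action)
        audit.append({"action": action, "allowed": allowed})
        if allowed:
            executed.append(action)
    return executed

audit: list = []
proposals = iter(["read_balance", "transfer", "read_balance"])
done = run_agent(lambda: next(proposals), {"treasury.read"}, audit)
assert done == ["read_balance", "read_balance"]   # transfer was denied
assert [a["allowed"] for a in audit] == [True, False, True]
```

The autonomy-level properties live entirely in the gate and the audit record; swapping the planner for a rule engine or an LLM changes capability, not compliance.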
Pattern: id at the document root is always an identity DID. Every field that contains a fragment — verification methods, verification relationships, service entries — is always a locator DID. controller is always an identity DID. serviceEndpoint values are retrieval URLs, outside the DID identity/locator taxonomy entirely.
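Applied to a minimal DID Document, the pattern looks like this (illustrative did:example identifiers and a placeholder key value): id and controller are identity DIDs, every #-fragment reference is a locator DID, and serviceEndpoint is a plain URL.

```json
{
  "id": "did:example:alice",
  "controller": "did:example:alice",
  "verificationMethod": [{
    "id": "did:example:alice#key-1",
    "type": "Multikey",
    "controller": "did:example:alice",
    "publicKeyMultibase": "z6Mk..."
  }],
  "authentication": ["did:example:alice#key-1"],
  "service": [{
    "id": "did:example:alice#registry",
    "type": "CredentialRegistry",
    "serviceEndpoint": "https://registry.example.com/status"
  }]
}
```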
Pattern: Top-level id, issuer.id, credentialSubject.id, and any DID references within claims are all identity DIDs — they name entities. credentialStatus.id and proof.verificationMethod are locator DIDs — they navigate to sub-resources. The credentialStatus.id is the one field that often goes unrecognized as a locator but clearly is: it points into a status registry to retrieve the current revocation state of this specific credential.
DIDComm Message
{
  "id": "urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6"  ← IDENTITY (message identity — URN, not DID)
}
Pattern: DIDComm message fields are overwhelmingly identity DIDs — from, to, from_prior.iss, from_prior.sub, from_prior.aud all name parties, not locations. The message id is typically a URN UUID — an identity token outside the DID space entirely. Locator DIDs appear only when explicitly navigating to a sub-resource — such as a CredentialRegistry service endpoint in an attachment link, or a verification method reference in an attached cryptoseal.
Cross-Document Summary
| Field | DID Doc | VC Doc | DIDComm |
|---|---|---|---|
| Document id | Identity | Identity | Identity (URN) |
| controller | Identity | — | — |
| issuer / issuer.id | — | Identity | — |
| credentialSubject.id | — | Identity | — |
| from / to | — | — | Identity |
| from_prior.iss / .sub | — | — | Identity |
| verificationMethod.id | Locator | — | — |
| Verification relationship refs | Locator | — | — |
| service.id | Locator | — | — |
| proof.verificationMethod | — | Locator | — |
| credentialStatus.id | — | Locator | — |
| Attachment links with ?service= | — | — | Locator |
The governing rule across all three document types is consistent and clean:
A DID with no # or ? is always an identity DID — it names an entity. A DID with # or ? is always a locator DID — it navigates to a sub-resource. This rule holds uniformly across DID Documents, VC Documents, and DIDComm Messages.
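The rule is mechanical enough to express as a one-line classifier (illustrative Python):

```python
def did_kind(did: str) -> str:
    """Classify a DID reference per the rule above:
    a '#' or '?' present means locator; otherwise identity."""
    return "locator" if ("#" in did or "?" in did) else "identity"

assert did_kind("did:drn:alice") == "identity"
assert did_kind("did:drn:alice#key-1") == "locator"
assert did_kind("did:drn:alice?service=CredentialRegistry") == "locator"
```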
This specification defines Web 7.0 Verifiable Trust Circles (VTCs), a generalized mechanism for expressing verifiable multi-party membership, belonging, and trust relationships using the W3C Verifiable Credentials (VC) Data Model 2.0 and VC Data Integrity Proof Sets. VTCs extend the Partof Architecture Reference Model (PARM) — also referred to as the MemberOf or CitizenOf model — to provide a universal credential pattern that subsumes prior pairwise constructs (PHCs, VRCs) and additionally supports voting-based decision making, meeting requests, task forces, and digital societies.
STATUS
This document is derived from community discussion in the Trust over IP Foundation (ToIP) Digital Trust Graph Working Group (DTGWG) Credentials Task Force, GitHub Discussion #8, November 2025 – January 2026.
Table of Contents
1. Introduction
2. Terminology and Definitions
3. Design Principles
4. The Partof Architecture Reference Model (PARM)
5. VTC Data Model
6. VTC Proof Set Lifecycle
7. Roles and Participants
8. Use Cases
9. Privacy and Security Considerations
10. Conformance
11. Relationship to Other Specifications
12. References
1. Introduction
The Web 7.0 paradigm seeks to establish a decentralized, agent-centric, privacy-preserving digital society. Central to this vision is the ability of digital entities — people, organizations, autonomous agents — to form verifiable groups: trust circles that are cryptographically provable, privacy-respecting, and composable.
Prior specifications in the Trust over IP (ToIP) ecosystem defined pairwise constructs (Personhood Credentials, PHCs; and Verifiable Relationship Credentials, VRCs) to link pairs of entities. While useful, these constructs are insufficient to describe multi-party group membership, community affiliation, or collective decision-making.
This specification introduces Verifiable Trust Circles (VTCs), which generalize pairwise credentials into an N-party construct using the standard W3C VC Proof Set mechanism. A single VTC credential can represent a self-credential (N=1), a bilateral relationship (N=2), or any multi-member group (N>2), enabling a single, coherent model for all membership-like relationships.
NOTE
Proof Sets are a normative feature of the W3C VC Data Integrity specification and are explicitly designed for scenarios in which the same data needs to be secured by multiple entities. VTCs leverage this mechanism rather than inventing new cryptographic primitives.
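A non-normative example of a skeletal VTC secured by a Proof Set follows. The identifiers and proofValue contents are illustrative placeholders, and the VerifiableTrustCircle type name is used here only for illustration; the from/to subject properties reflect the Initiator and Responder roles defined in Section 2.

```json
{
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  "type": ["VerifiableCredential", "VerifiableTrustCircle"],
  "issuer": "did:example:notary",
  "credentialSubject": {
    "from": "did:example:initiator",
    "to": ["did:example:responder-b", "did:example:responder-c"]
  },
  "proof": [
    { "type": "DataIntegrityProof", "verificationMethod": "did:example:notary#key-1", "proofValue": "..." },
    { "type": "DataIntegrityProof", "verificationMethod": "did:example:initiator#key-1", "proofValue": "..." },
    { "type": "DataIntegrityProof", "verificationMethod": "did:example:responder-b#key-1", "proofValue": "..." },
    { "type": "DataIntegrityProof", "verificationMethod": "did:example:responder-c#key-1", "proofValue": "..." }
  ]
}
```

The unordered proof array is the Proof Set: one Notary proof plus one proof per member, with membership cardinality varied simply by the length of the to array.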
1.1 Motivation
The following observations motivate this specification:
PHCs and VRCs both express a form of ‘belonging to’ — they are specializations of the same universal pattern.
The W3C VC Data Model 2.0 already provides Proof Sets as a standard mechanism for multi-party signing.
A single, generalized Web 7.0 Verifiable Trust Circles (VTCs) pattern — grounded in First Principles Thinking — can subsume both constructs and additionally support voting, community membership, digital governance, and inter-network trust.
The SSC 7.0 Metamodel defines three controller layers (Beneficial, Intermediate, Technical) at which VTCs may apply, enabling rich composability.
1.2 Scope
This specification defines:
The VTC data model, including required and optional properties.
The roles of Initiator, Responder(s), and Notary within a VTC.
The lifecycle of a VTC Proof Set, from initial issuance through multi-party endorsement.
Use case profiles: self-credential, bilateral relationship, multi-party group, and voting scenario.
Privacy and security considerations specific to multi-party proof sets.
This specification does not define transport protocols, DID method requirements, or verifiable presentation formats, except where necessary to illustrate the VTC pattern.
2. Terminology and Definitions
The following terms are used throughout this specification. Unless stated otherwise, terms have the meanings assigned in the W3C Verifiable Credentials Data Model 2.0 [VC-DATA-MODEL].
Verifiable Trust Circle (VTC)
A Verifiable Credential whose credential subject identifies a multi-party trust relationship, and whose proof property contains a Proof Set with one proof contribution per participating member, plus the Notary’s initial proof.
Web 7.0 Verifiable Trust Circles (VTCs)
The generalized name for the VTC pattern when applied to the broader class of MemberOf, PartOf, and CitizenOf relationships. A VTC is a UMC.
Proof Set
As defined in W3C VC Data Integrity [VC-DATA-INTEGRITY], a set of proofs attached to a single secured document where the order of proofs does not matter. Each proof is contributed by a distinct signer.
Initiator (A)
The entity that proposes or originates a VTC. Identified by a DID. Corresponds to the ‘from’ role in VTC credential subject properties.
Responder (B, …, Z)
One or more entities that accept membership in a VTC by contributing their cryptographic proof to the Proof Set. Identified by DIDs. Corresponds to entries in the ‘to’ array.
Notary (N)
A trusted third party — trusted by both Initiator and all Responders — that issues the initial credential shell and contributes the first proof. The Notary is assigned to the VC ‘issuer’ role. In some use cases the Notary MAY be the Initiator or a Responder, provided they play both roles distinctly.
PARM
Partof Architecture Reference Model. The universal pattern underlying VTCs, encompassing MemberOf, CitizenOf, and PartOf relationships.
SSC 7.0 Metamodel
Self-Sovereign Control 7.0 Metamodel. Defines three controller layers — Beneficial Controller, Intermediate Controller (Agent), and Technical Controller (Agent) — at which VTCs may be anchored.
DTG
Digital Trust Graph. A graph of trust relationships between entities, each edge of which may be represented by a VTC.
PHC
Personhood Credential. A pairwise credential representing proof of personhood; a degenerate VTC where N=1.
VRC
Verifiable Relationship Credential. A pairwise credential representing a bilateral relationship; a degenerate VTC where N=2.
DID
Decentralized Identifier, as defined in [DID-CORE].
3. Design Principles
This specification adheres to the following design principles, consistent with the ToIP DTGWG Design Principles [DTGWG-DESIGN]:
3.1 As Simple As Possible But No Simpler
VTCs are grounded in existing W3C VC standards. No new cryptographic primitives or credential types are defined. The only structural addition is the deliberate use of the proof array (Proof Set) to carry per-member proofs alongside the Notary proof.
3.2 First Principles Thinking
PHCs and VRCs are recognized as specializations of a single underlying relationship pattern (PARM). Rather than defining multiple credential types for essentially the same concept, this specification derives one universal type that covers all cases by varying the cardinality of the ‘to’ array and the composition of the Proof Set.
3.3 Privacy by Design
VTC credential subjects SHOULD use confidentialSubject semantics wherever selective disclosure is required. Members of a VTC should be able to prove membership to a verifier without unnecessarily revealing the full membership list. Zero-Knowledge Proof (ZKP) integration in Proof Sets is explicitly supported and encouraged.
3.4 Composability
VTCs compose at each layer of the SSC 7.0 Metamodel. A VTC at the Beneficial Controller layer expresses human-level trust relationships; one at the Intermediate Agent layer expresses agent-level relationships; one at the Technical Controller layer expresses device/key-level relationships.
3.5 Cross-Network Trust
The PARM model is network-agnostic. The same VTC pattern supports trust relationships across and between independent, distinct networks and ecosystems.
4. The Partof Architecture Reference Model (PARM)
The Partof Architecture Reference Model (PARM) provides the conceptual foundation for VTCs. It observes that a large class of real-world relationships — membership, citizenship, parthood, employment, participation — share a common logical structure:
MemberOf: Alice is a member of the Working Group Trust Circle.
PartOf: Bob is part of the study group.
CitizenOf: Carol is a citizen of the Digital Nation State of Sovronia.
EmployeeOf: Dave is an employee of Acme Corp (DID-identified).
ParticipantOf: Eve is a participant of the 09:00 meeting (a VC-based meeting request).
VoterFor: Frank has cast a vote for Candidate 1 by contributing his proof to that VTC.
All of these reduce to the same credential structure: a VC whose credentialSubject.id identifies the group or decision entity (the ‘circle’), and whose proof array contains proofs from the Notary and each member who has accepted membership. PHCs and VRCs are degenerate cases of this pattern with N=1 and N=2 respectively.
5. VTC Data Model
5.1 Overview
A VTC is a valid W3C Verifiable Credential [VC-DATA-MODEL] with the following structural characteristics:
The issuer property identifies the Notary (N).
The credentialSubject (or confidentialSubject) object includes from, to, and optionally metadata properties that identify the Initiator, Responders, and relationship metadata respectively.
The credentialSubject.id identifies the relationship or group itself, expressed as a DID.
The proof property is an array (Proof Set) containing one proof per signer, conventionally ordered Notary first, then Initiator, then Responders (see Section 6.2).
5.2 Minimal Pairwise VTC (N=2, Alice and Bob)
The following non-normative example illustrates a bilateral VTC between Alice (Initiator) and Bob (Responder), notarised by a Notary entity:
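A non-normative sketch of such a credential follows. All DIDs, the proof type, and the proofValue strings are placeholders; the actual cryptosuite and proof encoding are implementation choices not mandated by this specification.

```json
{
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  "id": "did:example:vtc-alice-bob",
  "type": ["VerifiableCredential", "VerifiableTrustCircle"],
  "issuer": "did:example:notary",
  "credentialSubject": {
    "id": "did:example:circle-1",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "metadata": { "label": "Alice-Bob bilateral trust relationship" }
  },
  "proof": [
    { "id": "did:example:notary", "type": "DataIntegrityProof", "proofValue": "zNotaryProof..." },
    { "id": "did:example:alice", "type": "DataIntegrityProof", "proofValue": "zAliceProof..." },
    { "id": "did:example:bob", "type": "DataIntegrityProof", "proofValue": "zBobProof..." }
  ]
}
```

Note that each proof[].id identifies its signer, and the Proof Set contains three entries: the Notary proof plus one per member (t = 2).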
5.3 Multi-Member VTC (N > 2)
For groups with more than two members, the to array is extended to include all Responders, and the proof array gains one additional entry per additional Responder:
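For instance, adding a third member Carol changes only the to array and the Proof Set (abbreviated, non-normative sketch with placeholder identifiers; unchanged properties omitted):

```json
{
  "credentialSubject": {
    "id": "did:example:circle-2",
    "from": "did:example:alice",
    "to": ["did:example:bob", "did:example:carol"]
  },
  "proof": [
    { "id": "did:example:notary" },
    { "id": "did:example:alice" },
    { "id": "did:example:bob" },
    { "id": "did:example:carol" }
  ]
}
```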
5.4 Degenerate Case: Personhood Credential (N = 1)
When to contains only the Initiator’s own DID, or when from and credentialSubject.id are the same entity, the VTC degenerates to a Personhood Credential (PHC):
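An abbreviated, non-normative sketch of the degenerate case, with placeholder identifiers:

```json
{
  "credentialSubject": {
    "id": "did:example:alice",
    "from": "did:example:alice",
    "to": ["did:example:alice"]
  },
  "proof": [
    { "id": "did:example:notary" },
    { "id": "did:example:alice" }
  ]
}
```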
5.5 Voting VTC
For a voting scenario, one VTC is created per candidate. Voters cast their vote by contributing their individual proof to the VTC of the candidate they support. The vote count is the number of valid member proofs in the Proof Set.
The to array MAY be populated in advance with eligible voter DIDs, or it MAY be left empty and populated as votes are cast, depending on the election policy and privacy requirements.
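A non-normative sketch of a per-candidate voting VTC with an open (initially empty) to array, after two votes have been cast. All identifiers and the metadata policy label are placeholders:

```json
{
  "credentialSubject": {
    "id": "did:example:candidate-1",
    "from": "did:example:election-official",
    "to": [],
    "metadata": { "policy": "one-proof-per-voter" }
  },
  "proof": [
    { "id": "did:example:election-official" },
    { "id": "did:example:voter-17" },
    { "id": "did:example:voter-42" }
  ]
}
```

Here the tally is 2: the number of valid member proofs, excluding the Notary (election official) proof.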
5.6 Properties Reference
id (REQUIRED): DID identifying the VTC credential itself. SHOULD use did:envelope or equivalent.
type (REQUIRED): MUST include ‘VerifiableCredential’ and ‘VerifiableTrustCircle’.
issuer (REQUIRED): DID of the Notary (N). The Notary MUST be trusted by all members.
credentialSubject.id (REQUIRED): DID identifying the relationship or group itself. This is C in the PARM model.
credentialSubject.from (REQUIRED): DID of the Initiator (A).
credentialSubject.to (REQUIRED): Array of DIDs of Responders. MAY be empty for open voting VTCs. MAY include the Initiator’s DID.
credentialSubject.metadata (OPTIONAL): Arbitrary structured metadata about the relationship (label, policy, expiry, etc.).
proof (REQUIRED): Array of proof objects (Proof Set). The first proof MUST be from the Notary; subsequent proofs are contributed by the Initiator and Responders, in any order.
proof[].id (REQUIRED): DID of the signer contributing this proof entry.
6. VTC Proof Set Lifecycle
The VTC Proof Set lifecycle consists of the following phases. At any point in the lifecycle, the VTC binds the Notary and the t members who have contributed their proofs so far.
Phase 0 — Null VTC
The credential shell is created by the Notary with an empty or pre-populated to array. The Notary contributes the initial proof. No member relationships are yet verified. t = 0.
Phase t (0 < t < N) — Progressive Endorsement
Each Responder, in any order, reviews the credential and, if they consent to membership, adds their individual proof to the existing Proof Set using the ‘add-proof-set-chain’ algorithm defined in [VC-DATA-INTEGRITY]. After each contribution, the VTC is valid for the t members who have signed; Responders who have not yet contributed a proof are not yet bound.
Phase N — Complete VTC
All Responders listed in the to array have contributed their proofs. The VTC is fully executed and represents a complete, verifiable, multi-party trust relationship.
NOTE
Partial VTCs (0 < t < N) are valid credentials representing the subset of relationships established so far. Verifiers MUST check which proofs are present before asserting full circle membership.
6.1 Adding a Proof
To add a proof to an existing secured VTC, implementors MUST follow the algorithm specified in W3C VC Data Integrity [VC-DATA-INTEGRITY], Section ‘add-proof-set-chain’. The proof is appended to the existing proof array without modifying prior proofs.
6.2 Proof Ordering
Proof Sets are unordered by definition. However, this specification RECOMMENDS the following conventional ordering for readability and auditability: (1) Notary proof, (2) Initiator proof, (3) Responder proofs in the same order as the to array.
7. Roles and Participants
7.1 Notary (N) — Issuer
The Notary is the credential issuer. It MUST be trusted by both the Initiator and all Responders. The Notary is responsible for creating the credential shell, pre-populating the to array (or defining the voting policy), and contributing the first proof. In some use cases, the Notary MAY be the same entity as the Initiator or a Responder, provided that entity plays each role distinctly and the resulting credential satisfies all REQUIRED properties.
7.2 Initiator (A) — From
The Initiator proposes the trust circle. The Initiator’s DID appears in credentialSubject.from. The Initiator contributes a proof to the Proof Set to signify their acceptance of the relationship.
7.3 Responders (B … Z) — To
Each Responder is identified in the credentialSubject.to array. A Responder accepts membership by contributing their individual proof. A Responder who does not contribute a proof is proposed but not yet a verified member.
RULE
The cardinality t of verified members at any time equals the number of valid member proofs (excluding the Notary proof) present in the Proof Set.
8. Use Cases
8.1 Bilateral Trust Relationship (VRC Equivalent)
Alice and Bob wish to establish a verifiable bilateral trust relationship. A Notary (mutually trusted) issues a VTC with from = Alice and to = [Bob]. Both Alice and Bob contribute proofs. The result is a two-party VTC that is equivalent to a classic VRC.
8.2 Personhood Credential (PHC Equivalent)
Alice wishes to create a self-signed personhood credential. A Notary issues a VTC with from = Alice and to = [Alice]. Alice contributes her proof. The result is a one-party VTC equivalent to a PHC.
8.3 Working Group or Task Force
A task force of N participants is formed. A Notary (the WG chair or a community DID) issues a VTC with from = chair and to = [member1, …, memberN]. Members join by contributing their proofs. The VTC provides a cryptographically verifiable roster.
8.4 VC-Based Meeting Request
An organiser issues a VTC with credentialSubject.id = the meeting DID, from = organiser, and to = [attendee1, …, attendeeN]. Attendees RSVP by contributing their proofs. Attendance at the meeting is verifiable from the Proof Set.
8.5 Voting-Based Decision Making
One VTC per candidate is issued by an election official (Notary). Eligible voters cast their vote by contributing their individual proof to the VTC of their chosen candidate. Vote tallying is performed by counting the number of valid member proofs in each candidate’s VTC. This supports maximum flexibility in vote-counting policies (simple majority, ranked-choice, threshold).
8.6 Verifiable Data Registry (VDR)
VC-based voting can be applied to implement a VC-based Verifiable Data Registry (VDR). Append operations to a distributed registry are authorised through a VTC whose members are the registry trustees.
8.7 Digital Society / Digital Nation State
A digital society (e.g. a digital religion, community, or nation state) is defined by a VTC whose members are the citizens. Governance operations — electing trustees, passing resolutions — are performed through subsidiary voting VTCs.
9. Privacy and Security Considerations
9.1 Selective Disclosure
Implementations are STRONGLY RECOMMENDED to use confidentialSubject semantics and selective disclosure proof mechanisms (e.g. BBS+ signatures) to allow individual members to prove their membership in a VTC without revealing the full membership list or metadata.
9.2 ZKP Integration
The Proof Set mechanism is compatible with zero-knowledge proof (ZKP) contributions. A member MAY contribute a ZKP as their proof entry, revealing only that they meet the membership criteria without revealing their DID. Implementations SHOULD define a profile for ZKP-based proof entries.
9.3 Privacy Budget and Reconstruction Ceiling
When multiple agents controlled by one First Person contribute to a shared VTC, care must be taken to ensure that the combined disclosure across proof entries does not exceed the privacy budget of the First Person. The reconstruction ceiling — the probability that an observer can reconstruct the First Person’s identity from the combined proof data — MUST be maintained below the threshold defined by the applicable trust framework.
NOTE
This consideration was raised during community discussion in the context of internal VTCs and the Trust Spanning Protocol (TSP) between two agents controlled by one First Person.
9.4 Notary Trust
The Notary (issuer) occupies a privileged position: it issues the credential shell and contributes the first proof. Verifiers MUST independently verify that the Notary is trusted by all relevant parties. The Notary SHOULD be a well-known, community-governed DID with transparent governance.
9.5 Voting Integrity
For voting VTCs, the following security properties MUST be considered: (1) eligibility — only eligible voters can contribute proofs; (2) anonymity — voter DIDs SHOULD be anonymised or pseudonymised; (3) non-repudiation — each proof is cryptographically bound to the voter’s key; (4) single-vote enforcement — the to array or the Notary’s policy SHOULD prevent duplicate proof contributions from the same voter DID.
10. Conformance
A conforming VTC implementation:
MUST produce VTC credentials that are valid W3C Verifiable Credentials conforming to [VC-DATA-MODEL].
MUST use a proof array (Proof Set) as defined in [VC-DATA-INTEGRITY].
MUST include the issuer property identifying the Notary.
MUST include credentialSubject.id, credentialSubject.from, and credentialSubject.to.
MUST use the ‘add-proof-set-chain’ algorithm from [VC-DATA-INTEGRITY] when adding proofs incrementally.
SHOULD include ‘VerifiableTrustCircle’ in the type array.
SHOULD implement selective disclosure mechanisms for credentialSubject properties.
MAY extend the credentialSubject.metadata property with domain-specific claims.
11. Relationship to Other Specifications
11.1 W3C VC Data Model 2.0
VTCs are valid W3C Verifiable Credentials. All normative requirements of [VC-DATA-MODEL] apply. VTCs use the issuer and credentialSubject properties as defined therein.
11.2 W3C VC Data Integrity
VTCs rely on the Proof Set mechanism defined in [VC-DATA-INTEGRITY], specifically the ‘add-proof-set-chain’ algorithm for incremental proof contributions.
11.3 ToIP DTGWG Design Principles
This specification is consistent with the ToIP DTGWG Design Principles [DTGWG-DESIGN] and the DTG-ZKP Requirements [DTGWG-ZKP].
11.4 SSC 7.0 Metamodel
VTCs integrate with the Self-Sovereign Control 7.0 Metamodel [SSC-7]. VTCs may be anchored at the Beneficial Controller, Intermediate Controller, or Technical Controller layer.
11.5 Trust Spanning Protocol (TSP)
VTCs are compatible with the Trust Spanning Protocol [TSP] as a credential format for expressing channel-level membership and authorization relationships.
This specification was derived from community discussion contributions by: Michael Herman (mwherman2000), @talltree, @adamstallard, @mitchuski, @peacekeeper, @GraceRachmany, and other participants of the Trust over IP Foundation DTGWG Credentials Task Force. The editors gratefully acknowledge all contributors to GitHub Discussion #8.
14. Appendix A: Web 7.0 DIDLibOS Architecture Reference Model (DIDLibOS-ARM)
Web 7.0 DIDLibOS defines an identity-addressed, event-sourced execution architecture in which all computation is performed over DIDComm messages persisted in a single LiteDB instance per agent. Instead of passing in-memory objects between computational steps, the system passes Decentralized Identifier (DID) strings that resolve to immutable message state stored in a persistent memory kernel. This enables deterministic execution, full replayability, cross-runspace isolation, and scalable agent orchestration.
2. Introduction
Traditional execution models in scripting and automation environments rely on in-memory object pipelines. These models break under distributed execution, concurrency, and long-term persistence requirements. Web 7.0 DIDLibOS replaces object-passing semantics with identity-passing semantics.
In this model, computation becomes a function over persistent state rather than transient memory.
This document specifies the did7:web7 Decentralized Identifier (DID) method, which defines a deterministic mapping from Uniform Resource Names (URNs) (RFC 8141) into a DID-compatible identifier format called a Decentralized Universal Resource Name. The did7:web7 method preserves URN semantics, enables DID resolution without mandatory centralized infrastructure, and provides optional cryptographic and service-layer extensibility. The method is fully compatible with the W3C DID Core specification (W3C DID Core, 2022) and the broader DID ecosystem.
This note is to be removed before publishing as an RFC.
This Internet-Draft is derived from the Web 7.0 Foundation specification “SDO: W3C Decentralized Resource Name (URN) DID Method (Web 7.0)” authored by Michael Herman, published 24 March 2026 at https://hyperonomy.com/2026/03/24/sdo-web-7-0-decentralized-resource-name-urn-did-method/ and licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License. Web 7.0(TM), Web 7.0 DIDLibOS(TM), TDW AgenticOS(TM), TDW(TM), Trusted Digital Web(TM), and Hyperonomy(TM) are trademarks of the Web 7.0 Foundation. All Rights Reserved.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”
This Internet-Draft will expire on 25 September 2026.
Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Uniform Resource Names (URNs) [RFC8141] provide a well-established mechanism for assigning persistent, location-independent identifiers to resources. However, URNs predate the Decentralized Identifier (DID) ecosystem [W3C.DID-CORE] and lack native support for DID resolution, DID Document retrieval, cryptographic verification methods, or service endpoint declaration.
At the same time, many existing information systems such as bibliographic catalogues, digital libraries, standards registries, and supply-chain systems rely heavily on URN-based identification. Retrofitting these systems with entirely new identifier schemes is often impractical.
The did7:web7 method bridges this gap. It defines a deterministic, reversible transformation from any well-formed URN into a DID-compatible identifier called a Decentralized Universal Resource Name. The resulting DID is fully resolvable, is backwards compatible with the source URN, requires no mandatory centralized registry, and is composable with other DID methods such as did:key, did:web, and did:peer.
The did7:web7 method provides:
Preservation of URN semantics and namespace-specific comparison rules.
Deterministic, stateless baseline resolution requiring no external infrastructure.
Optional cryptographic extensibility through verification methods.
Optional service-layer extensibility through service endpoints.
Full conformance with the W3C DID Core specification [W3C.DID-CORE].
The did7:web7 method is positioned as a universal adapter between the URN and DID ecosystems, serving as a semantic identity bridge that preserves existing meaning while enabling participation in the modern decentralized identity landscape.
The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, NOT RECOMMENDED, MAY, and OPTIONAL in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals as shown here.
ABNF notation used in this document follows [RFC5234].
URN (Uniform Resource Name): A persistent, location-independent identifier conforming to the syntax defined in [RFC8141], of the form urn:<NID>:<NSS>.
NID (Namespace Identifier): The registered URN namespace label (e.g., isbn, uuid, ietf).
NSS (Namespace-Specific String): The portion of a URN following the NID, interpreted according to the rules of the corresponding URN namespace registration.
Decentralized Universal Resource Name: A URN expressed within the did7:web7 method namespace; the method-specific identifier portion of a did7:web7 DID.
DID Document: A set of data describing the DID subject, as defined in Section 5 of [W3C.DID-CORE].
Resolver: A software component that, given a DID, returns a DID Document conforming to the requirements of [W3C.DID-RESOLUTION].
Controller: An entity, as identified by a DID, that has the capability to make changes to a DID Document, as defined in [W3C.DID-CORE].
Fingerprint: A cryptographic hash of a canonical representation of the embedded URN, used to derive a did:key-compatible equivalent identifier.
The method name that identifies this DID method is: web7.
A DID conforming to this specification begins with the prefix did7://web7/. This prefix is case-insensitive for resolution purposes, but implementations SHOULD produce lowercase prefixes in all output.
Implementations MUST normalize the embedded URN according to the lexical equivalence and case-folding rules specified in Section 3.1 of [RFC8141] before constructing or comparing a did7:web7 identifier. Namespace-specific comparison rules (q-component handling, etc.) as registered with IANA for each NID MUST also be preserved.
Percent-encoding normalization (Section 2.1 of [RFC3986]) applies to the NSS component where permitted by the applicable namespace registration.
A given URN MUST map deterministically to exactly one did7:web7 identifier. The transformation is purely syntactic; no randomness or external state is introduced. Two URNs that are lexically equivalent per [RFC8141] MUST produce the same did7:web7.
The original URN MUST be exactly recoverable from the did7:web7 identifier without loss of information. No encoding, hashing, or irreversible transformation is applied to the URN content.
Baseline resolution of a did7:web7 identifier MUST NOT require access to any centralized registry, distributed ledger, or network service. A conformant resolver MUST be capable of constructing a minimal conformant DID Document entirely from the information contained within the DID string itself (see Mode 1, Section 7.3).
The resolution input is a did7:web7 string conforming to the syntax defined in Section 5.1, optionally accompanied by resolution options as defined in [W3C.DID-RESOLUTION].
A conformant resolver MUST support stateless resolution. In this mode the resolver constructs the DID Document locally from the DID string alone, without any external network lookup.
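Assuming the prefix syntax of Section 5.1 and an ISBN URN, a Mode 1 resolver might return a minimal document of this shape (non-normative sketch; the exact @context entries and additional properties are implementation choices):

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did7://web7/urn:isbn:0451450523",
  "alsoKnownAs": ["urn:isbn:0451450523"]
}
```

The alsoKnownAs entry carries the source URN verbatim, which is what makes the mapping back to legacy URN infrastructure lossless (see Section on backward compatibility).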
In Mode 2, the resolver additionally derives a deterministic fingerprint of the embedded URN:
fingerprint = hash(canonical-urn)
where canonical-urn is the normalized URN string (UTF-8 encoded) and hash is a cryptographic hash function registered for use with did:key (e.g., SHA-256 with multibase encoding [I-D.multiformats-multibase]). The derived fingerprint SHOULD be expressed as a did:key identifier and added to the DID Document as follows:
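A non-normative fragment illustrating the shape of the resulting property (the did:key value is a placeholder, not a real fingerprint):

```json
{
  "id": "did7://web7/urn:isbn:0451450523",
  "equivalentId": ["did:key:z6MkPlaceholderFingerprintValue"]
}
```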
Discovery rules SHOULD be namespace-aware, such that a resolver for urn:isbn: DIDs may apply different discovery heuristics than one for urn:uuid: DIDs.
When external discovery yields a DID Document, that document MUST be validated for consistency with the locally constructed baseline document before being returned to the caller. Specifically, the id and alsoKnownAs values MUST match the baseline.
A DID Document MAY include one or more verification method entries to support cryptographic operations associated with the identified resource. The following is an example using the Ed25519VerificationKey2020 type:
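A minimal illustrative fragment (identifier and key material are placeholders):

```json
{
  "verificationMethod": [
    {
      "id": "did7://web7/urn:isbn:0451450523#key-1",
      "type": "Ed25519VerificationKey2020",
      "controller": "did7://web7/urn:isbn:0451450523",
      "publicKeyMultibase": "z6MkPlaceholderPublicKey"
    }
  ]
}
```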
A DID Document MAY include service endpoint entries to enable discovery of resources or services associated with the URN. The following is an illustrative example:
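A non-normative fragment; the service type name and endpoint URL are placeholders chosen for illustration:

```json
{
  "service": [
    {
      "id": "did7://web7/urn:isbn:0451450523#resource",
      "type": "LinkedResource",
      "serviceEndpoint": "https://example.org/resources/isbn/0451450523"
    }
  ]
}
```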
Service endpoints MUST conform to Section 5.4 of [W3C.DID-CORE]. The type value SHOULD be registered in a publicly accessible DID Specification Registries entry [W3C.DID-SPEC-REGISTRIES].
Where Mode 2 resolution (Section 7.3.2) is supported, the DID Document MAY include an equivalentId property expressing the deterministic fingerprint-derived did:key as described in Section 7.3.2.
A did7:web7 identifier does not inherently assert or imply a controller. In the baseline stateless resolution mode (Mode 1), the DID Document contains no controller property. The absence of a controller property indicates that control has not been established through this mechanism.
The did7:web7 method does not inherently provide authenticity guarantees. A DID Document produced by a stateless resolver (Mode 1) is constructed locally and carries no cryptographic proof of its origin or integrity.
Implementations that require trust assurances SHOULD layer one or more of the following mechanisms on top of the baseline:
Cryptographic proofs: Attach verification methods and associated proofs (e.g., JSON-LD Proofs, JOSE signatures) to the DID Document as described in Section 8.2.1.
Third-party attestations: Bind Verifiable Credentials from trusted issuers to the URN to assert provenance, authenticity, or ownership.
Namespace authority validation: Dereference the URN through its canonical namespace registry to verify that the identified resource exists and that any asserted attributes are consistent.
Consumers of did7:web7 DID Documents SHOULD NOT infer trustworthiness solely from the presence of the DID; trust evaluation MUST take into account the verification mechanisms present in the DID Document and the verifier’s trust policy.
The did7:web7 method supports the following subset of CRUD operations as defined in [W3C.DID-CORE]:
Create (Implicit): A did7:web7 identifier is created implicitly by applying the syntactic transformation of Section 5.1 to a well-formed URN. No registration step is required.
Read (REQUIRED): Resolution MUST be supported in at least Mode 1 (stateless), per Section 7.3.1.
Update (NOT SUPPORTED): The baseline stateless method does not support document updates. Updates are only possible in Mode 3 via an external discovery service that supports document management.
Deactivate (NOT SUPPORTED): Deactivation is not supported in the baseline method. External service layers may implement deactivation semantics independently.
The did7:web7 method is fully backward compatible with existing URN infrastructure. The embedded URN is preserved verbatim (after normalization) within the DID string, and no changes to existing URN registries, resolvers, or applications are required.
The alsoKnownAs property in the DID Document ensures that a did7:web7 DID can always be mapped back to its source URN, enabling interoperability with legacy systems that do not support DID resolution.
The did7:web7 method is compatible with the W3C DID Core specification [W3C.DID-CORE] and the DID Resolution specification [W3C.DID-RESOLUTION]. It is composable with the following DID methods:
did:key – via the deterministic fingerprint mechanism (Section 7.3.2).
did:web – a did7:web7 DID Document MAY reference a did:web service endpoint for resource discovery.
did:peer – pairwise did:peer identifiers MAY be used in conjunction with did7:web7 to reduce correlation in privacy-sensitive contexts (see Section 14.2).
Implementations MAY register additional DID method compositions in a publicly accessible DID Method Registry.
The following design decisions underpin the did7:web7 specification.
Deterministic mapping: Aligning with the broader principle that DID methods SHOULD be deterministic where possible, the syntactic transformation from URN to did7:web7 identifier requires no external state and produces stable, reproducible identifiers.
Use of alsoKnownAs: The alsoKnownAs property from [W3C.DID-CORE] is used rather than a custom extension to ensure semantic preservation while remaining fully conformant with the core specification.
Stateless baseline: Requiring only syntactic processing for baseline resolution maximises portability and eliminates single points of failure that would arise from mandatory registry dependencies.
Acknowledged trade-offs: The method does not include a built-in trust layer or lifecycle operations (Update/Deactivate) at the baseline level. These capabilities are intentionally delegated to optional layers (Modes 2 and 3, and the controller model of Section 9) so that implementations may adopt only the complexity they require.
The deterministic mapping from URN to did7:web7 identifier means that any party who observes a did7:web7 identifier can immediately recover the underlying URN. Where the URN encodes personally identifiable information (e.g., a personal UUID or a registry identifier linked to an individual), this creates a direct correlation vector.
Additionally, because the transformation is deterministic and publicly known, two parties who independently transform the same URN will arrive at the same did7:web7 identifier, enabling linkage across otherwise unrelated contexts.
Implementers handling sensitive or personal identifiers SHOULD consider the following mitigations:
Pairwise DIDs: Use pairwise did:peer identifiers in contexts where individual interaction tracking is a concern, rather than exposing the did7:web7 identifier directly.
Avoid sensitive URNs: Refrain from forming did7:web7 identifiers from URNs that encode sensitive personal data in public or semi-public contexts.
Selective disclosure: Where verification is required, use Verifiable Presentations with selective disclosure rather than directly sharing the did7:web7 identifier.
This document does not address the privacy properties of the underlying URN namespaces; implementers MUST consult the privacy considerations of the applicable namespace registration before using that namespace in a did7:web7 context.
The baseline did7:web7 method (Mode 1) provides no inherent proof-of-control. Any party can construct a syntactically valid did7:web7 DID from any well-formed URN without demonstrating authority over the named resource. This is an intentional consequence of the zero-infrastructure design; however, it means that a did7:web7 DID alone cannot be used to assert ownership or authority.
In Mode 3 (Discovery-Enhanced), resolvers that accept DID Documents from external services are susceptible to spoofed or tampered service endpoints. A malicious service could return a crafted DID Document containing false verification methods or service endpoints.
To mitigate the limitations identified above, implementations SHOULD apply the following measures:
Signed metadata: Require that DID Documents obtained via Mode 3 discovery carry a valid cryptographic proof (e.g., a JSON-LD Data Integrity Proof) before accepting them as authoritative.
Verifiable Credentials for binding: Use Verifiable Credentials [W3C.VC-DATA-MODEL] issued by a trusted authority to bind the URN to a controller identity, rather than relying solely on the DID Document structure.
TLS for discovery endpoints: All HTTPS endpoints used in Mode 3 discovery MUST be protected with TLS version 1.2 or higher (TLS 1.3 is specified in [RFC8446]) and SHOULD use certificate transparency.
Input validation: Resolvers MUST validate the embedded URN against the ABNF grammar of [RFC8141] before performing any resolution activity.
[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels”, BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/rfc/rfc2119>.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, “Uniform Resource Identifier (URI): Generic Syntax”, STD 66, RFC 3986, DOI 10.17487/RFC3986, January 2005, <https://www.rfc-editor.org/rfc/rfc3986>.
[RFC5234] Crocker, D. and P. Overell, “Augmented BNF for Syntax Specifications: ABNF”, STD 68, RFC 5234, DOI 10.17487/RFC5234, January 2008, <https://www.rfc-editor.org/rfc/rfc5234>.
[RFC8141] Saint-Andre, P. and J. Klensin, “Uniform Resource Names (URNs)”, RFC 8141, DOI 10.17487/RFC8141, April 2017, <https://www.rfc-editor.org/rfc/rfc8141>.
[RFC8174] Leiba, B., “Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words”, BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/rfc/rfc8174>.
[RFC8446] Rescorla, E., “The Transport Layer Security (TLS) Protocol Version 1.3”, RFC 8446, DOI 10.17487/RFC8446, August 2018, <https://www.rfc-editor.org/rfc/rfc8446>.
[W3C.DID-CORE] Sporny, M., Guy, A., Sabadello, M., and D. Reed, “Decentralized Identifiers (DIDs) v1.0”, W3C Recommendation, July 2022, <https://www.w3.org/TR/did-core/>.
[W3C.DID-RESOLUTION] Sabadello, M., “Decentralized Identifier Resolution (DID Resolution) v0.3”, W3C Working Group Note, 2023, <https://w3c-ccg.github.io/did-resolution/>.
The author thanks the members of the W3C Decentralized Identifier Working Group and the broader DID community for their foundational work on the DID Core specification, and the IETF URN community for their long-standing stewardship of URN namespaces.