Web 7.0 DIDLibOS™ / TDW AgenticOS™ (Project “Shorthorn”)

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Introduction

This article describes Web 7.0™ and TDW AgenticOS™ – with a specific focus on the Web 7.0 Neuromorphic Agent Architecture Reference Model (NAARM) used by TDW AgenticOS™ to support the creation of Web 7.0 Decentralized Societies.

The intended audience for this document is a broad range of professionals interested in furthering their understanding of TDW AgenticOS for use in software apps, agents, and services. This includes software architects, application developers, and user experience (UX) specialists, as well as people involved in a broad range of standards efforts related to decentralized identity, verifiable credentials, and secure storage.

The Second Reformation

Web 7.0 Foundation Ecosystem

“Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.”
Michael Herman, Trusted Digital Web (TDW) Project, Hyperonomy Digital Identity Lab, Web 7.0 Foundation. January 2023.

Credit: Alex Thurow, https://youtu.be/4OjZOyG6nMo

TDW AgenticOS™

TDW AgenticOS™ is a macromodular, neuromorphic agent platform for coordinating and executing complex systems of work that is:

  • Secure
  • Trusted
  • Open
  • Resilient

TDW AgenticOS™ is 100% Albertan by birth and open source.

Project “Shorthorn”

Project “Shorthorn” is a parody project name based on Microsoft’s Windows “Longhorn” WinFS project (a SQL-based Windows File System project), in which the author was involved from design preview and feedback, consulting, and PM technical training (Groove Workspace system architecture and operation) perspectives (circa 2001-2002).

What makes Shorthorns great:
– They’re good at turning grass into meat (great efficiency).
– Shorthorn cows are amazing mothers and raise strong, healthy calves (nurture great offspring).
– Their genetics blend well with other breeds for strong hybrid calves (plays well with others).
…and so it is with TDW AgenticOS™.

Web 7.0 Foundation

The Web 7.0 Foundation, a federally-incorporated Canadian non-profit corporation, is chartered to develop, support, promote, protect, and curate the Web 7.0 ecosystem: TDW AgenticOS operating system software, and related standards and specifications. The Foundation is based in Alberta, Canada.

What we’re building at the Web 7.0 Foundation is described in this quote from Don Tapscott and co.:

“We see an alternate path: a decentralized platform for our digital selves, free from total corporate control and within our reach, thanks to co-emerging technologies.”
“A discussion has begun about “democratizing AI.” Accessibility is critical. Mostaque has argued that the world needs what he calls “Universal Basic AI.” Some in the technology industry have argued that AI can be democratized through open source software that is available for anyone to use, modify, and distribute. Mostaque argues that this is not enough. “AI also needs to be transparent,” meaning that AI systems should be auditable and explainable, allowing researchers to examine their decision-making processes. “AI should not be a single capability on monolithic servers but a modular structure that people can build on,” said Mostaque. “That can’t go down or be corrupted or manipulated by powerful forces. AI needs to be decentralized in both technology, ownership and governance.” He’s right.”
You to the Power Two. Don Tapscott and co. 2025.

A Word about the Past

The Web 7.0 project has roots dating back approximately 30 years, to before 1998 and the release of Alias Upfront for Windows. Subsequent to the release of Upfront (which Bill Gates designated the “most outstanding graphics product for Microsoft Windows 3.0”), the AUSOM Application Design Framework was formalized.

AUSOM Application Design Framework

AUSOM is an acronym for A User State of Mind — the name of a framework, or architecture, for designing software applications that are easier to design, implement, test, document, and support. In addition, an application developed using the AUSOM framework can be incrementally enhanced, progressively installed and updated, and dynamically configured, and it can be implemented in many execution environments. This paper describes the Core Framework, the status of its current runtime implementations, and its additional features and benefits.

What is AUSOM?

The AUSOM Application Design Framework, developed in 1998, is a new way to design client-side applications. The original implementation of the framework is based on a few basic concepts: user scenarios and detailed task analysis, visual design using state-transition diagrams, and implementation using traditional Windows message handlers.

The original motivation for the framework grew out of the need to implement a highly modeless user interface comprised of commands or tasks that were themselves very modal (e.g. allowing the user to change how a polygon was being viewed while the user was still sketching the boundary of the polygon).
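The interplay of modeless commands and modal tasks can be sketched as a small state machine, in the spirit of the state-transition diagrams and message handlers the framework prescribes. The class, state, and message names below are hypothetical, for illustration only; they are not part of AUSOM itself.

```python
class PolygonTool:
    """Hypothetical sketch of the AUSOM idea: modal tasks expressed as
    states, while modeless commands (like changing the view) are handled
    in any state."""

    def __init__(self):
        self.state = "idle"        # modal states: "idle", "sketching"
        self.view = "wireframe"
        self.points = []

    def handle(self, message, *args):
        # Modeless commands: valid regardless of the current state
        if message == "set_view":
            self.view = args[0]
            return
        # Modal, state-dependent commands (a state-transition table in miniature)
        if self.state == "idle" and message == "begin_polygon":
            self.state = "sketching"
            self.points = []
        elif self.state == "sketching" and message == "add_point":
            self.points.append(args[0])
        elif self.state == "sketching" and message == "close_polygon":
            self.state = "idle"
```

The key property is that `set_view` succeeds even mid-sketch, which is exactly the polygon-viewing scenario described above.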

To learn more, read The AUSOM Application Design Framework whitepaper.

Einstein’s Advice

The following is essentially the same advice I received from Charles Simonyi when we were both at Microsoft (and one of the reasons why I eventually left the company in 2001).

“No problem can be solved from the same level of consciousness that created it.” [Albert Einstein]
“The meaning of this quote lies in Einstein’s belief that problems are not just technical failures but outcomes of deeper ways of thinking. He suggested that when people approach challenges using the same assumptions, values, and mental habits that led to those challenges, real solutions remain out of reach. According to this idea, improvement begins only when individuals are willing to step beyond familiar thought patterns and question the mindset that shaped the problem.” [Economic Times]

Simonyi et al., in the paper Intentional Software, state:

For the creation of any software, two kinds of contributions need to be combined even though they are not at all similar: those of the domain providing the problem statement and those of software engineering providing the implementation. They need to be woven together to form the program.

TDW AgenticOS is the software for building decentralized societies.

A Word about the Future

“Before the next century is over, human beings will no longer be the most intelligent or capable type of entity on the planet. Actually, let me take that back. The truth of that last statement depends on how we define human.” Ray Kurzweil. 1999.

NOTE: “Artificial Intelligence” (or “AI”) does not appear anywhere in the remainder of this article. The north star of the Web 7.0 project is to be a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials – regardless of whether the outcome (a Web 7.0 network) uses AI or not. Refer to Figures 4a, 4b, and 6 for a better understanding.

DIDComm Notation, a visual language for architecting and designing decentralized systems, was used to create the figures in this article.

Value Proposition

By Persona

Business Analysts – Ability to design and execute secure, trusted business processes of arbitrary complexity across multiple parties in multiple organizations – anywhere on the planet.

Global Hyperscaler Administrators – Ability to design and execute secure, trusted systems administration processes (executed using PowerShell) of arbitrary complexity across an unlimited number of physical or virtual servers hosted by an unlimited number of datacenters, deployed by multiple cloud (or in-house) XaaS providers – anywhere on the planet.

App Developers – Ability to design, build, deploy, and manage secure, trusted network-effect-by-default apps of arbitrary complexity across multiple devices owned by anybody – anywhere on the planet.

Smartphone Vendors – Ability to upsell a new category of second device, a Web 7.0 Always-on Trusted Digital Assistant – a pre-integrated hardware and software solution that pairs with the smart device a person already owns. Instead of typically purchasing/leasing a single smartphone, a person can now leverage a Web 7.0-enabled smartphone bundle that also includes a secure, trusted, decentralized communications link to a Web 7.0 Always-on Trusted Digital Assistant deployed at home (or in a cloud of their choosing).

Digital Church/Religion Builders – Ability to create a new decentralized digital religion for 1 billion people in Communist China.

By Trust Relationship (Verifiable Trust Circle (VTC))

Secure, Trusted Agent-to-Agent Messaging Model

Figure 0. Simple Agent-to-Agent Communications Model

Figure 0 depicts the design of a typical simple agent-to-agent communications model. DIDComm Notation was used to create the diagram.

TDW AgenticOS: Conceptual and Logical Architecture

The Web 7.0 architecture is illustrated in the following figure.

Figure 1. Web 7.0 Neuromorphic Agent

Figure 1 is an all-in illustration of the conceptual architecture of a Web 7.0 Neuromorphic Agent. A Web 7.0 Agent comprises a Frontal LOBE and the Neural Messaging pathway. An Agent communicates with the outside world (other Web 7.0 Agents) using its Outbound (Talking), Seeing, and Inbound (Listening) Interfaces. Agents can be grouped together into Neural Clusters to form secure and trusted multi-agent organisms. DIDComm/HTTP is the default secure digital communications protocol (see DIDComm Messages as the Steel Shipping Containers of Secure, Trusted Digital Communication). The Decentralized Identifiers (DIDs) specification is used to define the Identity layer in the Web 7.0 Messaging Superstack (see Figure 6 as well as Decentralized Identifiers (DIDs) as Barcodes for Secure, Trusted Digital Communication).

An agent remains dormant until it receives a message directed to it, and it returns to a dormant state when no messages remain to be processed. An agent’s message processing can be paused without losing any incoming messages: while an agent is paused, messages are still received, queued, and persisted in long-term memory. Message processing can be resumed at any time.
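The dormant/paused lifecycle described above can be sketched in a few lines of Python. This is a minimal illustration, not the TDW AgenticOS API; all names are invented, and the in-memory deque stands in for persisted long-term memory.

```python
from collections import deque

class Agent:
    """Minimal sketch of the dormant/paused agent behavior described above."""

    def __init__(self):
        self.queue = deque()    # stands in for persisted long-term memory
        self.paused = False
        self.processed = []

    def receive(self, message):
        # Messages are always accepted and queued, even while paused,
        # so nothing is lost
        self.queue.append(message)
        if not self.paused:
            self.drain()

    def drain(self):
        # The agent "wakes", processes until the queue is empty,
        # then goes dormant again
        while self.queue:
            self.processed.append(self.queue.popleft())

    def pause(self):
        self.paused = True

    def resume(self):
        self.paused = False
        self.drain()
```

Pausing simply stops draining the queue; resuming processes everything that accumulated in the meantime.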

Additionally, an Agent can include a dynamically changing set of Coordination and Execution LOBEs. These LOBEs enable an Agent to capture events (incoming messages), compose responses (outgoing messages), and share these messages with one or more Agents (within a specific Neural Cluster or externally with the Beneficial Agent in other Neural Clusters (see Figure 5)).

What is a LOBE?

LOBE (Loadable Object Brain Extensions) is a macromodular, neuromorphic intelligence framework designed to let systems grow, adapt, and evolve by making it easy to add new capabilities at any time. Each LOBE is a dynamically Loadable Object — a self-contained cognitive module that extends the Frontal LOBE’s functionality, whether for perception, reasoning, coordination, or control (execution). Together, these LOBEs form a dynamic ecosystem of interoperable intelligence, enabling developers to construct distributed, updatable, and extensible minds that can continuously expand their understanding and abilities.

LOBEs let intelligence and capability grow modularly. Add new lobes, extend cognition, and evolve systems that learn, adapt, and expand over time. Expand your brain. A brain that grows with every download.
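The "add new capabilities at any time" idea can be sketched as a runtime registry of message handlers. This is an illustrative sketch only; the class and method names are hypothetical and not the actual LOBE API.

```python
class FrontalLobe:
    """Illustrative sketch: an agent whose capabilities are extended at
    runtime by registering new "lobes", each handling one kind of message."""

    def __init__(self):
        self.lobes = {}  # message type -> handler (a "loadable" extension)

    def load_lobe(self, message_type, handler):
        # New capability can be plugged in at any time, even while running
        self.lobes[message_type] = handler

    def dispatch(self, message_type, payload):
        # Route each incoming message to the lobe specialized for it
        handler = self.lobes.get(message_type)
        if handler is None:
            raise LookupError(f"no LOBE loaded for {message_type!r}")
        return handler(payload)
```

In a fuller sketch, `load_lobe` would import a module dynamically (e.g. via `importlib`) rather than take an in-process function, but the dispatch-by-message-type shape is the same.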

What is a NeuroPlex?

A Web 7.0 Neuroplex (aka a Neuro) is a dynamically composed, decentralized, message-driven cognitive solution that spans one or more agents, each with its own dynamically configurable set of LOBEs (Loadable Object Brain Extensions). Each LOBE is specialized for a particular type of message. Agents automatically support extraordinarily efficient by-reference, in-memory, intra-agent message transfers.
A Web 7.0 Neuroplex is not a traditional application or a client–server system, but an emergent, collaborative execution construct assembled from independent, socially-developed cognitive components (LOBEs) connected together by messages. Execution of a Neuroplex is initiated with a NeuroToken.

Horizontal Unbundling: Coordination and Execution Agents

Figure 2. TDW AgenticOS: Agent Logical Architecture: Horizontal Unbundling

Figure 2 illustrates how the deployment of Coordination and Execution LOBEs can be horizontally unbundled – with each LOBE being assigned to a distinct Frontal LOBE. This is an extreme example designed to underscore the range of deployment options that are possible. Figure 3 is a more common pattern.

Horizontal Rebundling

Figure 3. TDW AgenticOS: Agent Logical Architecture: Horizontal Rebundling

Figure 3 depicts a more common/conventional deployment pattern where, within a Neural Cluster, a small, reasonable number of Frontal LOBEs host any collection of Coordination and/or Execution LOBEs.

Minimal Execution Agent (Trusted Digital Assistant)

Figure 4a. TDW AgenticOS: Agent Logical Architecture: Minimal Execution Agent

Figure 4a is an example of a minimal agent deployment pattern that hosts a single Trusted Digital Assistant (TDA) LOBE.

Figure 4b MCP-enabled Trusted Digital Assistant

Vertical Debundling: Web 7.0 Neural Clusters

Figure 5. TDW AgenticOS: Agent Logical Architecture: Neural Clusters and Beneficial Agents

Figure 5 depicts the deployment of a Web 7.0 Neural Cluster. Messages external to the Neural Cluster are sent/received only by the Beneficial Agent. Any additional messaging is limited to the Beneficial, Coordination, and Execution LOBEs deployed within the boundary of the Neural Cluster. A use case that illustrates the Neural Cluster model can be found in Appendix D – PWC Multi-Agent Customer Support Use Case.

DIDComm 7.0

Figure 6a. TDW AgenticOS: Conceptual Architecture (All-in)

Figure 6a is an all-in illustration of the conceptual architecture of a Web 7.0 Neuromorphic Agent. DIDComm Messages can be piped from the Outbound Interface of the Sender agent to the Inbound Interface of the Receiver agent – supporting the composition of secure, trusted agent-to-agent pipelines similar (but superior) to: i) UNIX command pipes (based on text streams), and ii) PowerShell pipelines (based on a .NET object pump implemented by calling ProcessRecord() in the subsequent cmdlet in the pipeline).

NOTE: PowerShell does not clone, serialize, or duplicate .NET objects when moving them through the pipeline (except in a few special cases). Instead, the same instance reference flows from one pipeline stage (cmdlet) to the next. DIDComm 7.0 handles DIDComm Messages the same way.

Bringing this all together, a DIDComm Message (DIDMessage) can be passed, by reference, from LOBE (Agenlet) to LOBE (Agenlet), in-memory, without serialization/deserialization or physical transport over HTTP (or any other protocol).
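The by-reference hand-off described above can be sketched in Python. The names are illustrative only (this is not the TDW AgenticOS API); the point is that every stage receives the identical object, with no serialization or copying in between.

```python
class DIDMessage:
    """Stand-in for a DIDComm Message flowing through a pipeline."""
    def __init__(self, body):
        self.body = body
        self.trace = []   # records which stages touched this instance

def pipeline(message, stages):
    # The same instance is handed to each stage, by reference;
    # nothing is serialized, deserialized, or duplicated
    for stage in stages:
        stage(message)
    return message

def stamp(name):
    # A trivial "LOBE": annotates the message in place
    def stage(msg):
        msg.trace.append(name)
    return stage

msg = DIDMessage({"greeting": "hello"})
result = pipeline(msg, [stamp("lobe-a"), stamp("lobe-b")])
assert result is msg                       # identical object, not a copy
assert msg.trace == ["lobe-a", "lobe-b"]   # both stages mutated it in place
```

The `result is msg` identity check is the whole optimization: contrast with a network hop, where each stage would deserialize, process, and re-serialize its own copy.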

PowerShell                      | DIDComm 7.0
powershell.exe                  | tdwagent.exe
Cmdlet                          | LOBE (Loadable Object Brain Extension)
.NET Object                     | Verifiable Credential (VC)
PSObject (passed by reference)  | DIDMessage (JWT) (passed by reference)
PowerShell Pipeline             | Web 7.0 Verifiable Trust Circle (VTC)
Serial Routing (primarily)      | Arbitrary Graph Routing (based on Receiver DID, Sender DID, and DID Message type)

Feedback from a reviewer: Passing DIDComm messages by reference like you’re describing is quite clever. A great optimization.

Coming to a TDW LOBE near you…

DIDComm 7.0 Superstack

Figure 6b. DIDComm 7.0 Messaging Superstack

Figure 6b illustrates the interdependencies of the multiple layers within the DIDComm 7.0 Superstack.

Technology Wheel of Reincarnation: Win32 generic.c

Figure 6c. Win32 SDK Sample App: generic.c

References

SSI 7.0 Identity Framework

SSC 7.0 Metamodel

SSC 7.0 Verifiable Trust Circles

Web 7.0 Neuromorphic Agent Identity Model (NAIM)

Figure 7. Web 7.0 Neuromorphic Agent Identity Model (NAIM)

The NAIM seeks to enumerate and identify all of the elements in the NAARM that have, or will need, an identity (a DID and DID Document). This is illustrated in Figure 7.

Table 1. Web 7.0 Neuromorphic Agent Identity Model (NAIM) Chart

Beneficiaries, Trustees, and Fiduciary Duty

Figure 8. Beneficiaries, Trustees, and Fiduciary Duty

Figure 8 highlights in red the trust and fiduciary duty relationships between (a) a Beneficiary (Alice, the person) and (b) her Beneficiary Agent (a trustee). Similarly, any pair of agents can also have pair-wise trust and fiduciary duty relationships where one agent serves in the role of Beneficiary and the second agent, the role of Trustee.

Appendix A – TDW AgenticOS: Edge Agent DMZ Deployment

This section is non-normative.

Figure A-1. TDW AgenticOS: Edge Agent DMZ Deployment

Appendix B – TDW AgenticOS: Multiple Digital Persona Deployment

This section is non-normative.

Figure B-1. TDW AgenticOS: Multiple Digital Persona Deployment

Alice has 2 digital personifications: Alice Smith and Alice Athlete. Each of these personifications has its own digital ID. Each of Alice’s personas also has its own Trusted Digital Assistant (TDA) – an agent or agentic neural network.

Figure B-2. Web 7.0 Networks and Trust Graph

Bob has (at least) 4 digital personifications: Bob Aggie, Bob Nova, Bob Sovronia, and Bob Developer. Using Web 7.0 Trust Graph Relationships and Verifiable Trust Credentials (VTCs), Bob can also have personas that are members of multiple Web 7.0 networks.

Appendix C – Different Brain Functionalities and Their State of Research in AI (2025)

Figure C-1. Different Brain Functionalities and Their State of Research in AI (2025)

Source: Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems. arXiv:2504.01990v2 [https://arxiv.org/abs/2504.01990v2]. August 2025.

Figure C-2. Simplified Brain Anatomy (Source unknown)
Figure C-3. TDW AgenticOS Layers

In Figure C-3, the Trust Library forms the Inner core and the UX LOBEs, the Crust. The Outer core is comprised of the Fast Cache and Long-Term Memory LOBEs, Neural and Basal Pathways, DID Registry, and LOBE Library. The Mantle is where the Coordination and Execution LOBEs execute.

Appendix D – PWC Multi-Agent Customer Support Use Case

Figure D-1. PWC Multi-Agent Customer Support Use Case

Source: Agentic AI – the new frontier in GenAI. PWC Middle East. 2024.

This use case exemplifies the use of the Web 7.0 Neural Cluster model. Table D-1 maps the PWC Use Case terminology to the corresponding Web 7.0 NAARM terminology.

Web 7.0 NAARM                   | PWC Use Case
Beneficiary Agent               | Master agent
Coordination Agent (and LOBEs)  | Orchestrator agent
Execution Agent LOBEs           | Micro-agents
Table D-1. Web 7.0 NAARM – PWC Use Case Terminology Cross-Reference

Appendix E – Groove Workspace System Architecture

Appendix F – Historical Documents

February 2026

Resources

Macromodularity

  1. Organization of computer systems: the fixed plus variable structure computer. Gerald Estrin. 1960.
  2. Macromodular computer systems. Wesley Clark. 1967.
  3. Logical design of macromodules. Mishell J. Stucki et al. 1967.

Powered By

Michael Herman
Decentralized Systems Architect
Web 7.0 Foundation
October 15, 2025


DIDComm Messages as the Steel Shipping Containers of Secure, Trusted Digital Communication

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Abstract

The steel shipping container transformed global trade by introducing a standardized, secure, and interoperable abstraction for transporting goods. Similarly, Decentralized Identifier Communication (DIDComm) offers a standardized, secure, and interoperable mechanism for transmitting trusted digital information between agents. This paper explores the analogy between DIDComm messages and steel containers, examining their properties, benefits, and limitations, and assessing the potential of DIDComm to catalyze a transformation in digital ecosystems comparable to the shipping container revolution.



1. Introduction

The 20th century witnessed a quiet revolution in global trade: the invention and adoption of the steel shipping container. More than faster ships or larger ports, it was standardization in how goods were packaged and transported that unlocked efficiency, scale, and global interoperability.

In the 21st century, digital ecosystems face a parallel challenge. Secure communication across heterogeneous systems remains fragmented by proprietary protocols, siloed trust frameworks, and inconsistent interoperability. Despite advances in transport protocols (HTTP, WebSocket, Bluetooth) and security primitives (TLS, OAuth, JWT), no universal standard exists for trusted, end-to-end, cross-domain messaging.

DIDComm (Decentralized Identifier Communication) aims to fill this gap. It provides a standardized envelope for secure, interoperable communication between agents in decentralized ecosystems. This paper argues that DIDComm can be understood as the steel shipping container of digital communication — a payload-agnostic, transport-agnostic, secure packaging standard that enables trust to move seamlessly across networks and domains.


2. The Shipping Container Revolution

2.1 Before Containers

  • Cargo packaged idiosyncratically: barrels, sacks, crates.
  • Loading/unloading labor-intensive and slow.
  • High rates of pilferage and damage.
  • Inefficiency in intermodal transport (ship → rail → truck required repackaging).

2.2 With Containers

  • ISO standardization: uniform sizes, fittings, and corner posts.
  • Sealed security: tamper-resistant, weatherproof units.
  • Stackability: efficient storage and loading by crane.
  • Interoperability: ships, ports, trucks, and trains adapted to a single form factor.

Impact: Containerization reduced costs by ~90% and increased the speed and scale of global trade [Levinson, The Box, 2006]. The key insight: decouple contents from infrastructure via a universal abstraction.


3. DIDComm: A Digital Container Standard

3.1 What is DIDComm?

DIDComm is a protocol suite for secure, private, and interoperable communication using Decentralized Identifiers (DIDs) as endpoints. It defines how messages are packaged, encrypted, authenticated, and routed between agents.

3.2 Core Features

  • Standardized envelope: headers, routing metadata, payload.
  • Cryptographic sealing: encryption (confidentiality), signatures (authenticity), checksums (integrity).
  • Transport agnosticism: works over HTTP, Bluetooth, WebRTC, email, etc.
  • Routing via mediators: messages can traverse multiple relays without breaking end-to-end security.
  • Payload agnosticism: the message may carry verifiable credentials, IoT commands, or arbitrary application data.
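The envelope features above can be made concrete with a small sketch. Per the DIDComm Messaging specification, a plaintext message (before signing or encryption is applied) carries fields such as `id`, `type`, `from`, `to`, and `body`; the DIDs and message type below are examples only.

```python
import json
import time
import uuid

def make_message(sender_did, recipient_did, msg_type, body):
    """Build a minimal DIDComm v2 plaintext message envelope
    (the layer that would subsequently be signed and/or encrypted)."""
    return {
        "id": str(uuid.uuid4()),        # unique message id
        "type": msg_type,               # protocol-defined message type URI
        "from": sender_did,             # sender DID
        "to": [recipient_did],          # recipient DID(s)
        "created_time": int(time.time()),
        "body": body,                   # payload-agnostic content
    }

msg = make_message(
    "did:example:alice",
    "did:example:bob",
    "https://didcomm.org/basicmessage/2.0/message",
    {"content": "hello"},
)
print(json.dumps(msg, indent=2))
```

Transport agnosticism follows from the design: the same JSON envelope (once encrypted) can travel over HTTP, Bluetooth, or a mediator without modification.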

3.3 Why It Matters

Just as containers enabled intermodal trade, DIDComm enables intermodal trust exchange. Applications, wallets, devices, and services can interoperate without bespoke integrations.


4. Mapping the Analogy: Containers vs. DIDComm

Container Property        | DIDComm Equivalent                                         | Implications
Standardized form         | Envelope with defined structure (headers, body, metadata)  | Guarantees interoperability across agents and vendors
Sealed & secure           | Encryption + authentication                                | Protects against unauthorized access and tampering
Intermodal transport      | Transport-agnostic delivery                                | Works across protocols without altering the payload
Routing via logistics     | Mediators, DID resolution, forwarding                      | Enables flexible message delivery
Opaque contents           | Encrypted payload                                          | Only authorized parties can inspect
Global ecosystem support  | Agent networks, wallets, identity hubs                     | Emerging infrastructure could mirror global ports and carriers

5. Benefits of the Container Analogy

  1. Interoperability
    • Any DIDComm-compliant agent can process a message, just as any port can handle a container.
  2. Security and Trust
    • Messages are sealed like containers, with tamper-evident cryptography.
  3. Efficiency
    • Reduces the cost and complexity of building integrations across organizations.
  4. Scalability
    • Supports any type of payload: credentials, IoT signals, governance instructions.
  5. Decentralization
    • No reliance on a central authority; trust derives from cryptographic keys, similar to how container standards are managed by ISO, not controlled by one nation or corporation.

6. Limits of the Analogy

  • Physical persistence vs. digital ephemerality: Containers endure across voyages; messages vanish after delivery.
  • Metadata leakage: Container labels are visible; DIDComm may still expose sender/recipient metadata.
  • Standard stability: Container sizes have been stable for decades; DIDComm may evolve quickly.
  • Global adoption: Containerization achieved near-universal acceptance; DIDComm is still early in adoption.

7. Strategic Implications

7.1 Identity & Credentials

DIDComm provides a secure transport for verifiable credentials, enabling cross-border, cross-domain trust.

7.2 IoT Ecosystems

IoT devices require lightweight, trustable communication. DIDComm offers a containerized way to exchange secure commands.

7.3 Cross-Domain Interoperability

Applications in finance, healthcare, supply chains, and governance can exchange trusted data without bespoke APIs.

7.4 The “Container Moment”

Global trade was reshaped once container standards reached critical mass. DIDComm could catalyze a parallel moment in digital ecosystems if widely adopted.


8. Conclusion

The steel shipping container revolutionized trade by abstracting the packaging and transport of goods into a universal, secure standard. DIDComm has the potential to do the same for digital trust, abstracting communication into a universal, secure, and interoperable form.

If DIDComm achieves broad adoption, it could serve as the logistics backbone of the digital trust economy, enabling decentralized ecosystems to scale with the efficiency and security once brought to global commerce by steel containers.


References

  • Levinson, Marc. The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. Princeton University Press, 2006.
  • DIF (Decentralized Identity Foundation). “DIDComm Messaging Specification.” https://identity.foundation/didcomm-messaging.
  • Hardman, Daniel. “Introduction to DIDComm.” Hyperledger Aries Working Group, 2021.
  • ISO. “ISO 668: Series 1 freight containers — Classification, dimensions and ratings.”


Based on a discussion between Michael Herman and ChatGPT. September 30, 2025.


Web 7.0 / TDW Decentralized Identifier (DID Method) Clusters Model 0.1: Specification Development Framework

Copyright (c) 2018-2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

To learn how to join the did-methods DIF Working Group, click https://github.com/decentralized-identity/did-methods.

0. Prime Objective/Directive

#Chickens, #Eggs, and #Roosters: A #NorthStar for the Global Decentralized Systems Community (#GDSC)

Byline: #meggDLs, #Seleggtive #Disclosure, #DEGGCOMM, and #Eggports

The entire digital identity ecosystem is missing out on the #BigOpportunity by not focusing on the right catalyst for the #massiveadoption of #digitalcredentials.
Morphing the chicken-and-egg mental model: if Hens are the Issuers, Roosters the Verifiers, and Eggs are the digital credentials, then the prime objective needs to be increasing the demand for and consumption of Eggs by Holders …creating hundreds of thousands of ways that drive more Holders to consume more Eggs. Think about it.

… are great examples of driving the demand for and consumption of more and more digital credentials [and DIDs] (eggs); and secondarily, the demand for hens and roosters (Issuers and Verifiers). The demand for eggs drives the production of hens; and in turn, the demand for roosters. Don’t mess with #MotherNature

Reference: https://www.linkedin.com/posts/mwherman_the-classic-chicken-and-egg-problem-verifiers-activity-7249162946709241856-lxaS?utm_source=share&utm_medium=member_desktop

1. Decentralized Identifiers (DIDs) v1.0 Specification

Reference: https://www.w3.org/TR/did-core/

Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities.

DID subject
The entity identified by a DID and described by a DID document. Anything can be a DID subject: person, group, organization, physical thing, digital thing, logical thing, etc.
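The generic DID syntax from DID Core is `did:<method>:<method-specific-id>`, where the method name is lowercase letters and digits. A minimal parser can be sketched as follows (illustrative only; real resolvers also handle paths, queries, and fragments):

```python
import re

# did:<method>:<method-specific-id>, per the DID Core generic syntax.
# Method names are restricted to lowercase letters and digits; the
# method-specific id may itself contain further colons (e.g. did:web paths).
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):(.+)$")

def parse_did(did):
    """Split a DID string into its method and method-specific id."""
    m = DID_PATTERN.match(did)
    if not m:
        raise ValueError(f"not a valid DID: {did!r}")
    return {"method": m.group(1), "method_specific_id": m.group(2)}

assert parse_did("did:web:example.com") == {
    "method": "web",
    "method_specific_id": "example.com",
}
```

Resolving a parsed DID to its DID document is then delegated to the named method's driver, which is exactly where the DID Method Clusters taxonomy discussed below comes in.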

2. Use Cases and Requirements for Decentralized Identifiers Document

Reference: https://www.w3.org/TR/did-use-cases/#uc

3. Subsidiary Ledgers / Digital Wallets

4. Trusted Digital Web (TDW)

5. LinkedIn Economic Graph

Reference: https://www.youtube.com/watch?v=xwawTzMfHic&list=PLU-rWqHm5p45dzXF2LJZjuNVJrOUR6DaD&index=23

6. Toronto Musician’s Economic Graph

7. #Graphitization: Amazon’s Principles, Core Entities, and Relationships

8. Enterprise Architecture

9. What Happens When You Don’t Think You Need a Taxonomy?

10. NEON NATION / Web 7.0 / TDW Conceptual Architecture

11. Web 7.0 / TDW DID Method Clusters Model 0.1

GitHub: https://github.com/decentralized-identity/did-methods/issues/3

Universal DID Methods

DID methods suitable for interacting with Every Little Thing (#ELT) on the planet/in the universe:

Web 7.0/TDW DID Method Clusters Model Taxonomy 0.1

  • A bold method is the model method or exemplar for the particular cluster (cell).
  • A method can be an exemplar for one or many clusters.
  • This list of DID method categories is just an example. A complete taxonomy will likely be a 2-3 level hierarchy. The parent categories for these examples might include: Live Things, Inanimate Things, Abstract Things, Digital Things, Business Things, etc.
  • More than one exemplar per cluster is permitted.

12. Process Framework: Sociocracy

Reference: https://github.com/decentralized-identity/did-methods/issues/4

In Sociocracy terminology, a mini-WG is called a circle. Each category of DID methods (cluster of DID Methods) would be managed by its own independent circle. A circle member can belong to more than 1 circle. Circles are connected to a parent circle for administrative purposes. The parent circle would correspond to the DID Method WG (co-chaired by Markus).

Sociocracy: https://www.sociocracyforall.org/content/

Sociocracy combines consent decision-making, a decentralized system of authority and intentional processes to improve our decisions and processes over time into a governance system that supports effective and efficient process while increasing connection, listening and co-creation among members.

Sociocracy is used in businesses, communities, nonprofits, cooperatives, grassroots groups and in education. 

13. Trusted Digital Web (TDW) Glossary/Taxonomy Model: Erin Buys a Car Neighborhood

14. Questions


Technology/Change Adoption Models: A Comprehensive Guide

Create your own magic with Web 7.0™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2019-2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, TDW AgenticOS™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

[Original Title: Technology Adoption Models: A Comprehensive Guide]

This article documents more than 20 technology adoption models that the author has encountered over his 45+ year career …some models that he didn’t even realize he knew about ;-). Here they are, in no particular order.

NOTE: Each model progresses from left-to-right along an unspecified timeline.  The implication is that it is possible to superimpose two or more models on top of each other for deeper understanding and for creating more tangible, more illustrative, depictions of your corporate, product, and project strategies.

An example is: Model 10. Technology Adoption Lifecycle illuminated by the Gartner Hype Cycle.

Technology Adoption Models

NOTE: Click on any of the figures to enlarge them.

Model 1. Crossing the Chasm: Technology Adoption Lifecycle

1. Crossing the Chasm-Technology Adoption Lifecycle

Model 2a. Social Evolution: Creation of Nation State

A #wanderer is someone who leaves their tribe to share their knowledge and wisdom with others; to later form a party of explorers to explore and conquer a common set of goals; and, even further on, create a clan, a band, a tribe, and a tribal society, a group of people who live and work together – a group of tribes organized around kinships.

Model 2b. Social Evolution: Defining Principles


Model 2c. Social Evolution: Self-Sovereignty Political Spectrum

Model 2d. Social Evolution: Driving Change (ADKAR)

Model 3. Phases of Foundational Technology Adoption

3. Phases of Foundational Technology Adoption

Model 4. Phases of Desire and Action

4. Phases of Desire and Action

Model 5. Phases of Understanding

5. Phases of Understanding

Model 6. Classic Enterprise Solution Sales and Adoption Lifecycle

6. Classic Enterprise Solution Sales and Adoption Lifecycle

Model 7. ICRVA (I CRaVe A) Process

7. ICRVA (I CRaVe A) Process

Model 8. Three-letter Words

8. Three-Letter Words

Model 9. Gartner Hype Cycle

9. Gartner-Hype Cycle

Model 10. Technology Adoption Lifecycle illuminated by the Gartner Hype Cycle

10. TAC-Hype Cycle

Model 11. World Wide Web Consortium (W3C): Tenth Anniversary

11. World-Wide Web Consortium (W3C)-Tenth Anniversary

Model 12. Systems Co-existence and Migration

12. Systems Co-existence and Migration

Model 13. Embrace, Extend, and Extinguish

13. Embrace-Extend-Extinguish

Model 14. Take-off Velocity (v2)

14. Takeoff Velocity-v2

Model 15. From Mainframe to Blockchain

15. From Mainframe to Blockchain-header


Model 16. Progressive Improvement through Continuous Transformation

16. Progressive Improvement through Continuous Transformation


Model 17. Liedtka-Ogilvie Design Thinking Model

Model 18. CB-Insights NExTT Framework

CB-Insights NExTT Framework

Model 19. O’Donnell Exponential Growth Model

19. DarrelO-Exponential

Model 20. O’Donnell-Gartner Exponential Hype Cycle

20. DarrelO-HypeCycle

Model 21. Technical Intensity (video)

Technology Intensity

Model 22. Technology Adoption Curve plus Social Evolution Model

Model 23: Overton Window

Model 24: Overton Window and Technology Adoption Lifecycle

Model 25: The Technology Adoption Lifecycle and ADKAR

Model 26: Overton Window: Treviño’s 6 Degrees of Acceptance vs. ADKAR

References

[Model 1] Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers (https://en.wikipedia.org/wiki/Crossing_the_Chasm)

[Model 2a] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2b] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2c] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2d] Michael Herman (https://www.linkedin.com/in/mwherman/) and ADKAR Change Management Model (https://www.prosci.com/adkar/adkar-model)

[Model 3] Phases of Foundational Technology Adoption (https://www.linkedin.com/pulse/blockchain-foundational-technology-michael-herman/)

[Model 4] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 5] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 6] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 7] How We Think About How We Work (https://hyperonomy.com/2016/05/09/how-do-we-think-about-how-we-work/)

[Model 8] Unknown (with apologies from the author)

[Model 9] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle)

[Model 10] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle) and Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 11] World Wide Web Consortium (W3C): Timeline Graphic (https://www.w3.org/2005/01/timelines/description)

[Model 12] Unknown (with apologies from the author)

[Model 13] Microsoft Corporation (https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish)

[Model 14] Unknown (with apologies from the author)

[Model 15] Medium.com: From mainframes to blockchains. How to look at the future. (https://medium.com/@ben_longstaff/my-framework-for-how-to-look-at-the-future-of-blockchain-719f4243491f)

[Model 16] How We Think About How We Work (https://hyperonomy.com/2016/05/09/how-do-we-think-about-how-we-work/)

[Model 17] Designing for Growth: A Design Thinking Tool Kit for Managers (http://www.designingforgrowthbook.com/)

[Model 18] CB-Insights NExTT Framework (https://www.cbinsights.com/)

[Model 19 and 20] Darrell O’Donnell. The Current and Future State of Digital Wallets (https://www.continuumloop.com/standards-digitalwallet-part-11-16/).

[Model 20] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle) and Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 21] Vision Keynote with Satya Nadella | Microsoft Ignite 2019 (https://www.youtube.com/watch?v=jnUiJi4hts4)

[Model 22] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 23] Overton Window (https://en.wikipedia.org/wiki/Overton_window)

[Model 24] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 25] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 26] Michael Herman (https://www.linkedin.com/in/mwherman/)

Resources

  1. Phases of Foundational Technology Adoption (https://www.linkedin.com/pulse/blockchain-foundational-technology-michael-herman/)

6 Comments

Filed under TDW, Trusted Digital Web

Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor

COPYRIGHT © 2016-2017 by Michael Herman, Seattle, Washington and Toronto, Canada. All rights reserved.

Michael Herman is an expert when it comes to the mathematical modeling, analysis, and visualization of almost everything:

  • Large enterprise organizations,
  • Commercial, global-scale, cloud services platforms,
  • Organization principles and belief systems,
  • Human platforms,
  • Aircraft engines, and
  • Muscle cars.

Michael is the inventor of the #Graphitization Continuous Transformation Model – a closed-loop feedback process for the ingestion, modeling, analysis, visualization, systems optimization, and life cycle management of any type of strategy, system, asset, architecture, or process.


Figure 1. #Graphitization Continuous Transformation Model

A key concept of #Graphitization is the implementation of Transformative Changes that result in positive increases in business value in the system being modeled.

#Graphitization

What is #Graphitization?

#Graphitization is a data science and enterprise architecture framework and process model for modeling, ingesting, organizing, analyzing, and visualizing any domain of endeavor by using graphs – networks of connected objects and relationships with each object and relationship annotated with additional descriptive information (metadata).

The primary applications of #Graphitization are:

  • System optimization,
  • Systems life cycle management, and
  • Transformative Change resulting in positive increases in business value for the system being studied.

A system is defined as any collection of strategies, system components, assets, architectures or processes.
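The core idea (objects and relationships, each annotated with metadata) can be sketched in a few lines of plain Python. This is a minimal sketch under stated assumptions: no graph database is required, and the node/edge fields below are illustrative, not a prescribed #Graphitization schema.

```python
# Minimal property-graph sketch: objects and relationships,
# each annotated with descriptive metadata.
nodes = {}
edges = []

def add_node(node_id, **metadata):
    """Register an object with arbitrary key/value metadata."""
    nodes[node_id] = metadata

def add_edge(source, target, rel_type, **metadata):
    """Register an annotated relationship between two objects."""
    edges.append({"from": source, "to": target,
                  "type": rel_type, **metadata})

# Model a tiny slice of an enterprise (hypothetical names)
add_node("crm", kind="Application", owner="Sales IT")
add_node("orders_db", kind="DataStore", platform="SQL")
add_edge("crm", "orders_db", "reads_from", protocol="ODBC")

# Simple analysis: which applications read from a given data store?
readers = [e["from"] for e in edges
           if e["to"] == "orders_db" and e["type"] == "reads_from"]
```

In practice this kind of model would be loaded into a graph database (e.g. Neo4j) for visualization and analysis, but the annotated-graph substrate is the same.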

References

#Graphitization Continuous Transformation Model

The general #Graphitization model is illustrated in Figure 2 as it applies to the design and optimization of large enterprise organizations.


Figure 2. #Graphitization Continuous Transformation Model: Large Enterprise Organizations

The same model can also be used to improve the design and operation of many different types of systems:

  1. Large scale enterprise organizations (public and private sector)
  2. Aircraft engines, muscle cars, and other high-performance engine systems
  3. Commercial, global-scale, cloud services platforms
  4. Automated service composition of cloud services-based data systems
  5. Large collaborative ecosystems: employee groups, business partners, social networks
  6. Large ecosystems of competing or competitive business organizations
  7. Organization principles and belief systems
  8. Conventional software applications and architectures: desktop, server, and web apps
  9. International standards for visual modeling languages
  10. Parallelspace ModelMate
  11. Enterprise Data Management
  12. Internet of Things (IoT)
  13. Architecture Reference Models

Standards

NEO Enhancement Proposal (NEP) Standards Author

Projects and Publications

0. SerentityData Graph

Model-based off-chain and on-chain (blockchain) graph data creation, migration, visualization, and analysis

Abstract

SerentityData Graph is an entity-relationship modeling, serialization, and graph analysis solution that supports development of traditional full-stack and blockchain smart contract applications. SerentityData features tight Neo4j integration for on-chain & off-chain graph data visualization and analysis.

Description

SerentityData Graph is an open source, entity-relationship modeling, serialization, and graph data visualization and analysis solution that supports the development of traditional full-stack, blockchain-based smart contract, and Neo4j graph database applications.

Starting from a single data model, SerentityData supports the automatic code generation of entities and relationships that support symmetric development of: (a) off-chain data in traditional multi-tier full-stack applications, (b) on-chain data management for blockchain-based distributed ledger technology apps (dApps), and (c) Neo4j enterprise graph applications.

SerentityData features complete life-cycle integration with Neo4j for on-chain and off-chain graph data creation, migration, visualization, and analysis. Live code walk-throughs and demonstrations will enable you to begin using SerentityData and Neo4j immediately. Github: https://github.com/mwherman2000/serentitydata-compiler
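The "single data model, multiple targets" idea can be illustrated with a small sketch. To be clear, everything below is hypothetical: the class and function names are invented for illustration and are not SerentityData's actual API; they only show the shape of generating symmetric off-chain and graph-database artifacts from one entity definition.

```python
# Hypothetical sketch of "one model, multiple targets" code generation.
# Names below are illustrative only -- not SerentityData's API.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    type: str

@dataclass
class Entity:
    name: str
    fields: list

def emit_offchain_class(entity):
    """Emit a plain C#-style class for traditional full-stack (off-chain) use."""
    lines = [f"public class {entity.name} {{"]
    lines += [f"    public {f.type} {f.name};" for f in entity.fields]
    lines.append("}")
    return "\n".join(lines)

def emit_cypher_create(entity, values):
    """Emit a Neo4j CREATE statement for an instance of the same entity."""
    props = ", ".join(f"{f.name}: '{values[f.name]}'" for f in entity.fields)
    return f"CREATE (:{entity.name} {{{props}}})"

# One data model...
person = Entity("Person", [Field("Name", "string"), Field("City", "string")])

# ...two symmetric targets
csharp = emit_offchain_class(person)
cypher = emit_cypher_create(person, {"Name": "Erin", "City": "Calgary"})
```

The design point is that the entity definition is authored once; each target (off-chain class, on-chain storage layout, Neo4j graph) is derived from it, so the three representations cannot drift apart.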

Resources

My blog: https://hyperonomy.com/

Related blog posts

  1. Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor https://hyperonomy.com/2017/05/18/michael-herman-inventor-of-graphitization/
  2. #Graphitization of the Enterprise https://hyperonomy.com/2017/01/02/graphitization-of-the-enterprise/
  3. Tokenize Every Little Thing (ELT) https://hyperonomy.com/2018/01/24/tokenization-of-every-little-thing-elt/
  4. #Graphitization of .NET Applications: Marrying Open EA Data with Graph Databases https://hyperonomy.com/2016/10/19/crossing-the-ea-chasm-marrying-open-ea-data-with-graph-databases/
  5. #Graphitization of Ray Dalio’s Principles: Iteration 1 https://hyperonomy.com/2016/12/29/graphitization-of-ray-dalios-principles/
  6. #Graphitization of Ray Dalio’s Principles: Iteration 2 https://hyperonomy.com/2016/12/30/graphitization-of-ray-dalios-principles-iteration-2/
  7. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 1 https://hyperonomy.com/2017/01/17/crossing-the-ea-chasm-graphitization-of-archimate-3-0/
  8. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 2 https://hyperonomy.com/2017/02/08/crossing-the-ea-chasm-graphitization-of-archimate-3-0-iteration-2/
  9. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #1 https://hyperonomy.com/2016/10/22/crossing-the-ea-chasm-automating-enterprise-architecture-modeling/
  10. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2 https://hyperonomy.com/2016/11/04/crossing-the-ea-chasm-automating-enterprise-architecture-modeling-2/
  11. Crossing the EA Chasm: ArchiMate “Keep Calm and Have IT Your Way” https://hyperonomy.com/2016/11/17/crossing-the-ea-chasm-archimate-have-it-your-way/
  12. Crossing the EA Chasm: Open Repository Strategies for Enterprise Architecture https://hyperonomy.com/2016/10/04/the-ea-chasm-open-repository-strategies-for-enterprise-architecture/
  13. Crossing the EA Chasm: Enterprise Architecture Diagrams Your Grandmother (and CIO) Will Love https://hyperonomy.com/2016/10/13/archimate-diagrams-your-grandmother-and-cio-will-love/
  14. #Graphitization of ArchiMate: Getting MMOR from ArchiMate using the ModelMate Master Online Repository https://hyperonomy.com/2017/02/10/crossing-the-ea-chasm-how-to-use-the-modelmate-online-repository-mmor/
  15. #Graphitization of the Amazon Leadership Principles (introducing Personal Leadership Principle Maps) – Iteration 1 https://hyperonomy.com/2017/05/08/amazons-principles/
  16. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale? https://hyperonomy.com/2017/04/10/whats-the-difference-between-improving-the-design-and-operation-of-an-aircraft-engine-a-muscle-car-a-large-enterprise-and-a-commercial-global-cloud-services-platform/

Live Neo4j Models

  1. http://hobby-icgaeohcoeaggbkeabhldpol.dbs.graphenedb.com:24789/browser/ Userid: ModelMate_Master_Datasets10 Password: YqeZAH4ODEJqglkEsK5p

YouTube Channel: https://www.youtube.com/playlist?list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5

  1. NEO Persistable Classes (NPC) Platform 2.1: Preview https://www.youtube.com/watch?v=N-jiJOZwiFg&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=5
  2. NEO Persistable Classes (NPC) Platform 2.0: Deep Dive https://www.youtube.com/watch?v=Nj4-m2o94VE&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=6
  3. NEO Persistable Classes 1.0: Deep Dive (Video 2 of 3) [Update 1] https://www.youtube.com/watch?v=qwteL1BiCjM&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=7
  4. NEO Persistable Classes Platform 2.2: Structured Storage & Reusable, Indexed, Non-Fungible Entities https://www.youtube.com/watch?v=vnAxyCAZ1ec&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=10

Related Github Projects

  1. SerentityData Entity Compiler (serentitydata-compiler) https://github.com/mwherman2000/serentitydata-compiler/blob/master/README.md
  2. NEO Persistable Classes (NPC) Compiler 2.1 (npcc) – Compiler for the NEO Persistable Classes (NPC) Platform 2.1 https://github.com/mwherman2000/neo-npcc2
  3. NEO Persistable Classes V1.0 – An Efficient Object-Oriented Framework for C#.NEO Smart Contract Development (ORIGINAL) – https://github.com/mwherman2000/neo-persistibleclasses

Recognition

  1. NeoDraw – NEO Persistable Classes Platform 2.0: NEO-Microsoft dApp Competition (4th place prize – USD$15,000) – https://neo.org/blog/details/3074 and https://neo.org/awards.html

Keywords

  • blockchain on-chain data modeling symmetric programming data management .NET C# NEO Stratis Ethereum Technical Case Study Developer Best Practices

1. Large scale enterprise organizations (public and private sector)

The first applications of #Graphitization were in the field of traditional enterprise architecture modeling and analysis:

  • Business Architecture
  • Application Architecture
  • Technology/Infrastructure Architecture

References

  1. #Graphitization of the Enterprise
  2. Crossing the Chasm: Progressive Enterprise Architecture Model (PEAM)
  3. Progressive Enterprise Architecture Maps – Update 2
  4. Using ArchiMate 2.1 to Model Product or Service Markets
  5. ArchiMate 3.0: What is the preferred way to model a Server Farm?
  6. Crossing the EA Chasm: Enterprise Architecture Diagrams Your Grandmother (and CIO) Will Love
  7. Crossing the EA Chasm: Annotating Your EA Models with RACI Roles
  8. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #1
  9. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2
  10. Crossing the Enterprise Architecture Chasm
  11. ModelMate Architecture Reference Model
  12. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?
  13. Modeling a Company and Its Locations, Markets, Employees, Investors & Roles: Proposals, Wishes & Dreams

2. Aircraft engines, muscle cars, and other high-performance engine systems

It turns out that the modeling and analysis of any complex system is an ideal candidate for #Graphitization.

References

  1. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?

3. Commercial, global-scale, cloud services platforms

One particularly important application is the modeling and analysis of very large, commercial, global-scale, cloud services platforms.

References

  1. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?

4. Automated service composition of cloud services-based data systems

Call the solution “Expedia for Microsoft Azure/AWS/SFDC/…” or whatever you prefer, today’s commercial cloud services platforms are still a pain in the ass to use for creating non-trivial applications.  Left, right, and center you have to hand-code a myriad of worker processes simply to reformat and pass data around.

#Graphitization is an optimal approach for modeling the underlying cloud services platform services catalog.

References

  1. MS Azure is a bit of a bucket of bolts …very good bolts …but relative to the other IoT vendors, a bucket of bolts.
  2. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?
  3. Microsoft Azure Stack POC Architecture Reference Model (ARM): ArchiMate Model – version 1-0-7 – April 30, 2016

5. Large collaborative ecosystems: employee groups, business partners, social networks

Project “Boston” is named after some potential business partners, with the embryo of the idea coming from my months as a founding Groove Networks business partner (a period that produced many of my most important relationships, which I still maintain today).

6. Large ecosystems of competing or competitive business organizations

Modeling of large ecosystems of competing/competitive business organizations is a straightforward #Graphitization use case.

7. Organization principles and belief systems

On the surface, the #Graphitization of principle and belief-based frameworks appears pretty straightforward; this is because a basic #Graphitization serves as the substrate for many advanced data ingestion, analysis, and visualization projects.

Below are the results of the #Graphitization of two principle and belief-based frameworks:

  • Bridgewater Associates: Ray Dalio’s Principles
  • Amazon: Jeff Bezos’ Amazon Leadership Principles

References

  1. #Graphitization of Ray Dalio’s Principles: Iteration 1
  2. #Graphitization of Ray Dalio’s Principles: Iteration 2
  3. #Graphitization of the Amazon Leadership Principles (introducing Personal Leadership Principle Maps) – Iteration 1

8. Conventional software applications and architectures: desktop, server, and web apps

Modeling of complex, multi-language, multiple runtime software environments is a use case that is an ideal application of #Graphitization.

References

  1. #Graphitization of .NET Applications: Marrying Open EA Data with Graph Databases
  2. Pinc-A Tool For Maintaining Configurable Software in Pascal1
  3. Pinc-A Tool For Maintaining Configurable Software in Pascal2
  4. Pinc-A Tool For Maintaining Configurable Software in Pascal3
  5. Pinc-A Tool For Maintaining Configurable Software in Pascal4
  6. Pinc-A Tool For Maintaining Configurable Software in Pascal5

9. International standards for visual modeling languages

A significant investment has been made in applying #Graphitization to language modeling; specifically, languages for enterprise architecture like ArchiMate.

ArchiMate References

  1. Using ArchiMate 2.1 to Model Product or Service Markets
  2. ArchiMate 3.0: What is the preferred way to model a Server Farm?
  3. How do I model “X” using ArchiMate?
  4. Crossing the EA Chasm: ArchiMate “Keep Calm and Have IT Your Way”
  5. Crossing the EA Chasm: ArchiMate Art
  6. Crossing the EA Chasm: Re-visioning the ArchiMate Specification
  7. Crossing the EA Chasm: Reflections on the Current State of ArchiMate
  8. Crossing the EA Chasm: Re-visioning ArchiMate 3.0 Relations as Verbs
  9. Crossing the EA Chasm: Re-visioning ArchiMate 3.0 Elements as Adjectives [WIP]
  10. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 1
  11. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 2 (long but meaty)
  12. #Graphitization of ArchiMate: Getting MMOR from ArchiMate using the ModelMate Master Online Repository

10. Enterprise Data Management

Modeling and analyzing enterprise data structures and stores is a common application of #Graphitization; including the modeling of taxonomies and master data.

References

  1. RE: Managing Master Data With ArchiMate

11. Parallelspace ModelMate

Parallelspace ModelMate is an approach (platform and language framework) for creating domain-specific languages (DSLs) for enterprise architecture. It is realized using #Graphitization and the ArchiMate enterprise architecture modeling language.

References

  1. Crossing the Enterprise Architecture Chasm
  2. Crossing the EA Chasm: Open Repository Strategies for Enterprise Architecture
  3. ModelMate Architecture Reference Model

12. Internet of Things (IoT)

IoT is an interesting beast. At one level, it refers to an application service for processing raw events from a device or dynamically generated events from a software system. IoT also defines a conceptual software and data-flow architecture that can be used for the dynamic creation and maintenance of complex systems such as large enterprise architectures.

References

  1. Subject: MS Azure Services: Is there an overarching architectural vision?
  2. MS Azure is a bit of a bucket of bolts …very good bolts …but relative to the other IoT vendors, a bucket of bolts.
  3. Crossing the EA Chasm: “Where does IoT [Internet of Things] fit in?”

13. Architecture Reference Models (ARMs)

An ARM is easily modeled (and analyzed) using #Graphitization. SharePoint and Azure Stack are two good examples.

References

  1. ARMs for Model-Driven LOB apps: SharePoint 2013/SharePoint 2016 [Oct. 24, 2016]
  2. Microsoft Azure Stack POC Architecture Reference Model (ARM): ArchiMate Model – version 1-0-7 – April 30, 2016

General References

  1. Continuous Transformation and Transformative Change are key principles of the Total Enterprise Architecture Model (TEAM) (click here)
  2. To dig deeper, check out Graphitization of the Enterprise (click here)
  3. [Enterprise Architecture, Big Data, CRM, ERP, …] Tools and Methods Don’t Generate Business Value (click here)
  4. Crossing the EA Chasm: The Surveyor

Best regards,

Michael Herman
Enterprise Architect and Data Scientist
Parallelspace Corporation
M: 416 524-7702
E: mwherman@parallelspace.net
B: http://hyperonomy.com
L: https://www.linkedin.com/in/mwherman/recent-activity/posts/
Skype: mwherman2000

Living at the intersection of Enterprise Architecture, Enterprise Knowledge, and Data Science

  • ArchiMate is a registered trademark of The Open Group.

1 Comment

Filed under ArchiMate, Architecture Reference Models, Automated Application Architecture Analysis, Automated Enterprise Architecture Modeling, Graphitization, How do we think, Microsoft Azure, ModelMate Information Architecture for ArchiMate, ModelMate Information Architecture for Languages

What are the differences between improving the design (and operation) of a smart city, an aircraft engine, a muscle car, a large enterprise, and an economic system …at hyperscale?

Original title: What are the differences between improving the design (and operation) of a smart city, an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …running at hyperscale?

COPYRIGHT © 2016-2024 by Michael Herman. All rights reserved. [Updated May 14, 2024]

Question: What are the differences between improving the design (and operation) of:

  • a smart city,
  • an aircraft engine,
  • a muscle car,
  • a large enterprise, and/or
  • an economic system
  • …running at hyperscale?

Answer: None.

Scroll down to see the use cases; then the list of resources at the bottom of this article.

Use Case 1: Aircraft engine
Use Case 2: Muscle car


Figure 1. Continuous Transformation Model: Aircraft Engines and Muscle Cars

Use Case 3: Smart city
Use Case 4: Large enterprise operating at hyperscale


Figure 2. Continuous Transformation Model: Smart Cities, Large Enterprises, and Cloud Services Platforms

Use Case 5: Economic systems

Figure 3. Continuous Improvement Framework applied to Economic Systems

Diving Deeper: #Graphitization

To go deeper, check out #Graphitization of the Enterprise (click here) as well as the list of references below.


Figure 4. #Graphitization Continuous Transformation Model


Figure 5. Continuous Transformation Framework: Process Model

References

  1. Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor  (click here)
  2. Continuous Transformation and Transformative Change are key principles of the Total Enterprise Architecture Model (TEAM) (click here)
  3. To dig deeper, check out Graphitization of the Enterprise (click here)
  4. [Enterprise Architecture, Big Data, CRM, ERP, …] Tools and Methods Don’t Generate Business Value (click here)

Best regards,

Michael Herman
Enterprise Architect and Data Scientist
E: mwherman@parallelspace.net
B: http://hyperonomy.com
L: https://www.linkedin.com/in/mwherman/recent-activity/posts/

Living at the intersection of Enterprise Architecture, Enterprise Knowledge, and Data Science

2 Comments

Filed under ArchiMate, Architecture Reference Models, Crossing the EA Chasm, Data Science, Enterprise Architecture, Graphitization, How do we think, IoT, Space Flight

Michael Herman: Award-winning Author, Invited Speaker, Illustrator, and Trainer

COPYRIGHT © 2016-2017 by Michael Herman, Toronto, Canada. All rights reserved.

Feel free to contact me at:

PARTIAL PORTFOLIO

All of the publications below are full-length white papers or technical notes – unless noted otherwise (e.g. presentations, training materials, online product help).

Microsoft Live Communications Server

Client: Microsoft Corporation Live Communications Server Product Group / Microsoft IT Showcase

Microsoft SharePoint Products and Technologies

Client: Microsoft Corporation SharePoint Product Group / Microsoft IT Showcase

Microsoft Exchange Server

Client: Microsoft Corporation Exchange Server Product Group / Microsoft IT Showcase

Metalogix Replicator for SharePoint

Client: Metalogix, market leading provider of solutions to move, manage and protect content within enterprise collaboration platforms in the cloud and on-premises.

Microsoft “Alchemy”

Client: Microsoft Web Services Product Group / Microsoft IT Showcase

Parallelspace Vulture

Client: Parallelspace Corporation

Tzunami K-Wise Deployer

Client: Tzunami

Leave a comment

Filed under ArchiMate, Architecture Reference Models, Crossing the EA Chasm, Enterprise Architecture, Enterprise Architecture Chasm, Graphitization, How do we think, ModelMate Information Architecture for ArchiMate, ModelMate Information Architecture for Languages

Using #Graphitization to Create Your Organization’s Digital Twin

Original title: #Graphitization of the Enterprise

COPYRIGHT © 2016-2017 by Michael Herman, Toronto Canada. All rights reserved. [Updated June 16, 2018]

This article is the first in a series on #Graphitization. Click here to explore the other articles in this series.

Reprinted from #Graphitization of the Enterprise on LinkedIn.

Move beyond digitalization of the enterprise to graphitization of the enterprise, the creation of your organization’s digital twin. Here’s a great diagram that explains this concept. (click on the diagram to enlarge it)

Figure 1. Digital Twin Model of IT

Graphitization applies not only to all of your corporate information assets across all of your constituencies and stakeholders – at the data, application entity, and business object level – but also to all of the interconnections between every business process, application system, infrastructure component, cloud service, vendor/service provider, and business role that uses, manages, or stores corporate information (Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2).

Use graphitization to make your existing corporate information more available, more usable, and more informative. Graphitization enables you to “Keep Calm and Have IT Your Way”.

What is #Graphitization?

#Graphitization is a data science and enterprise architecture-inspired framework and process model for modeling, ingesting, organizing, analyzing, and visualizing any domain of endeavor by using graphs – networks of connected objects and relationships with each object and relationship annotated with additional descriptive information (metadata).

The primary applications of #Graphitization are:

  • System optimization,
  • Systems life cycle management, and
  • Transformative Change resulting in positive increases in business value for the system being studied.

A system is defined as any collection of strategies, system components, assets, architectures or processes.

Using #Graphitization

Use graphitization of your organization to help close both the Enterprise Architecture Chasm and the Operational Data Chasm. See below.

Figure 2. Continuous Transformation Framework: Enterprise Architecture Chasm and Operational Data Chasm

Figure 3. Continuous Transformation Framework: Processes and Activities

To learn more about other applications of graphitization, check out the following articles:

Michael Herman (Calgary)
mwherman@parallelspace.net

7 Comments

Filed under continuous transformation, Crossing the EA Chasm, Data Science, Digital Transformation, Enterprise Architecture, Enterprise Architecture Chasm, Graphitization, ModelMate, Operational Data Chasm, Progressive Enterprise Architecture Map (PEAM)

Cornerstone: Platform Evangelism in the Age of AI-Generated Code

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Abstract

Traditionally, when a platform developer (e.g. Microsoft) created a new platform, it embarked on standard approaches to Developer Evangelism to climb the various Technology Adoption Curves and to Cross the Chasm (e.g. https://hyperonomy.com/2019/10/16/technology-adoption-models/). In the new era of AI-generated code, where and how should evangelism be targeted, if it can even be called evangelism any more? How should a platform developer promote its new platform to all the different AI chatbots (e.g. Claude)? What artifacts should be created? Where and how should they be placed and distributed? This is a rich and important strategic question.


Platform Evangelism in the Age of AI-Generated Code

The Fundamental Shift: From Human Adopters to AI Intermediaries

Traditional Developer Evangelism targeted humans at specific positions on the Technology Adoption Lifecycle — Innovators, Early Adopters, Early/Late Majority, Laggards. The evangelist’s job was to move people rightward along that curve.

In the AI-generated code era, a new and decisive intermediary has been inserted into the adoption pipeline: the AI coding assistant / chatbot. The human developer no longer discovers your platform primarily through conference talks, blog posts, or Stack Overflow. Instead, they ask Claude (or Copilot, Cursor, Gemini) to scaffold the integration for them. If the AI doesn’t know your platform well, generates wrong API calls, or defaults to a competitor, the human never even gets a chance to adopt.

The implication is stark: AI models are now the most important Early Adopters you need to cross the chasm with first. They are a new, synthetic segment that sits before the Innovators on your adoption curve.


The New Adoption Curve Layer

Your 20+ model framework maps well here. Superimposing two layers:

| Traditional Layer | New AI-Mediated Layer |
| --- | --- |
| Innovator humans discover your platform | AI models are trained/fine-tuned on your docs |
| Early Adopters experiment | AI generates working starter code |
| Early Majority follows proven patterns | AI recommends your platform confidently |
| Word-of-mouth spreads | AI’s citations / training data spreads |
| Chasm: “will the mainstream trust it?” | New chasm: “does the AI know it well enough to generate correct code?” |

What Has Changed About “Evangelism”

The word still applies, but the audience, artifacts, and channels are fundamentally different.

Old evangelism targets:

  • Human developers (via conferences, blogs, sample apps)
  • Enterprise architects (via whitepapers, reference architectures)
  • CTOs (via analyst reports, Gartner quadrants)

New evangelism targets (primary):

  • AI training pipelines — what gets into the pretraining and fine-tuning corpora
  • AI retrieval systems — what gets surfaced via RAG at inference time
  • AI context windows — what gets injected via system prompts, MCP servers, tool definitions
  • AI safety/quality filters — what AI providers consider authoritative and trustworthy

The humans still matter, but they are now downstream of the AI intermediary.


The New Artifact Set

This is where it gets concrete. You need a new category of artifact that I’d call AI-Legible Platform Documentation — content designed to be consumed, reasoned over, and reproduced by AI systems, not just read by humans.

1. llms.txt — The Emerging Standard

A plain-text or markdown file placed at the root of your platform’s documentation site (e.g., https://svrn7.net/llms.txt). This is an emerging informal standard (analogous to robots.txt) that signals to AI crawlers and RAG systems what your platform is, what its key concepts are, and where the authoritative docs live. It should be:

  • Terse, structured, machine-readable
  • Canonical definitions of your core concepts (did:drn, VTC, SOVRONA, etc.)
  • Explicit disambiguation (e.g., “SOVRONA is not Solana, not SOVRIN”)
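A minimal sketch of what such a file might contain (all URLs and definition wordings below are illustrative placeholders, not the platform's published file):

```markdown
# Web 7.0 / SVRN7 Platform

> Illustrative llms.txt sketch: terse, canonical definitions for AI crawlers
> and RAG systems. Hypothetical content, not the published file.

## Core Concepts

- did:drn: the W3C-conformant DID method used by the Web 7.0 platform.
- VTC (Verifiable Trust Circle): a multi-party membership and trust credential
  pattern built on the W3C VC Data Model 2.0 and Data Integrity proof sets.
- SOVRONA (SVRN7): the Shared Reserve Currency of the Web 7.0 ecosystem.
  SOVRONA is not Solana, not SOVRIN.

## Authoritative Docs

- https://example.org/specs (placeholder; link the canonical documentation here)
```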

2. Canonical Concept Glossary (Machine-Readable)

A JSON-LD or plain markdown file with precise, unambiguous definitions of every platform term. AI models pattern-match on concept names. If your terms are unique enough (which did:drn, VTC7, svrn7.net largely are) and appear in training data with consistent definitions, the model learns authoritative meaning. Publish this as both human-readable HTML and structured data.
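For instance, a single glossary entry could be expressed with schema.org's DefinedTerm vocabulary (a sketch; the term-set URL and description wording are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTerm",
  "name": "did:drn",
  "description": "A W3C Decentralized Identifier (DID) method used by the Web 7.0 platform.",
  "inDefinedTermSet": "https://example.org/web7-glossary"
}
```

The same definition text should appear verbatim in the human-readable HTML rendering so the two reinforce each other in training and retrieval.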

3. AI-Optimized Quickstart / Code Recipes

Short, self-contained code examples (C#/.NET in your case) that demonstrate each key integration scenario. These need to be:

  • Complete — no ellipsis (...), no “fill in your own logic here”
  • Correct — compilable, with real method signatures
  • Labeled — preceded by a natural-language description that an AI can use as a retrieval key
  • Published in plain markdown — not behind JavaScript-rendered walls

The goal: when a developer asks Claude “how do I resolve a did:drn identifier in C#?”, there is a verbatim-correct code sample in the training data or retrieval index that Claude surfaces.
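For illustration only, here is the shape such a recipe takes, sketched in Python rather than the platform's actual C# SDK (the function name and recipe wording are hypothetical): a natural-language retrieval key, then a complete, runnable snippet with no elided logic.

```python
# Recipe: "How do I parse a did:drn identifier into its components?"
# (Illustrative sketch only; the real SDK is C#/.NET and its API may differ.)

def parse_did(did: str) -> dict:
    """Split a DID into method and method-specific id per DID Core syntax."""
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError(f"not a DID: {did!r}")
    return {"method": method, "id": method_specific_id}

print(parse_did("did:drn:z6MkExample123"))
# {'method': 'drn', 'id': 'z6MkExample123'}
```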

4. OpenAPI / SDK Schemas

If your platform has any API surface, publish machine-readable schemas (OpenAPI 3.x, JSON Schema). AI coding assistants consume these directly — Copilot, Cursor, and others can ingest them to generate type-correct API calls. This is one of the highest-leverage artifacts you can produce.
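As a sketch, a minimal OpenAPI 3.x fragment for a hypothetical DID-resolution endpoint might look like this (the title and path are illustrative, not the platform's actual API surface):

```yaml
openapi: 3.0.3
info:
  title: Example DID Resolution API   # illustrative placeholder
  version: 0.1.0
paths:
  /resolve/{did}:
    get:
      summary: Resolve a DID to its DID Document
      parameters:
        - name: did
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The resolved DID Document
          content:
            application/did+ld+json:
              schema:
                type: object
```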

5. MCP Server Definition

For platforms targeting agentic AI workflows (which Web 7.0 / TDW AgenticOS clearly does), publishing an MCP server that exposes your platform’s key operations is the equivalent of publishing an SDK in the old world. When a developer is using Claude with MCP enabled, your platform becomes natively callable. This is arguably the highest-leverage evangelism artifact in the agentic AI era.

6. IETF / W3C Standards Drafts (Already in Progress)

This is something you’re already doing, and it is directly high-value for AI training. Standards bodies’ outputs (IETF Datatracker, W3C, etc.) are heavily weighted in AI training corpora. draft-herman-did-drn-00 appearing on Datatracker means AI models encounter it as an authoritative source. This is the new equivalent of a Gartner mention.

7. GitHub Presence with Rich READMEs

GitHub is a primary training data source for all AI coding assistants. Every repository should have:

  • A detailed README explaining the why and what of the platform
  • Clear installation/usage examples
  • Properly named types, interfaces, and methods (because AI learns your API surface from source code identifiers)

Where / How to Place and Distribute

| Channel | Mechanism | Why It Works |
| --- | --- | --- |
| llms.txt on your docs site | Passive crawling | Gets into RAG indexes of AI tools that crawl docs |
| GitHub (public repos) | Training data, RAG | Copilot, Cursor, and others train on public code |
| IETF Datatracker / W3C | Training data | High-authority signal in AI corpora |
| npm / NuGet / PyPI packages | Metadata + README | AI package recommenders surface these |
| Stack Overflow (well-tagged Q&A) | Training data + RAG | Primary source for code-gen AI |
| HuggingFace datasets / model cards | Direct injection | If you publish reference datasets |
| Your blog (hyperonomy.com) | Training + RAG crawling | Already indexed; continue publishing precise technical content |
| MCP Registry / tool directories | Active discovery | Agentic AI systems discover tools through registries |
| OpenAI / Anthropic Custom GPT / Projects | Direct context injection | System prompts for your platform’s AI assistant |

The Meta-Insight: Legibility as the New Evangelism

In the traditional model, evangelism was about persuasion — moving humans emotionally and rationally across the adoption chasm. In the AI-mediated model, the equivalent is legibility: making your platform concepts, APIs, and code patterns so precisely and consistently expressed that AI models can reproduce them correctly.

A poorly documented platform that generates hallucinated API calls when asked about it is effectively invisible to the new generation of developers. A well-documented platform that produces correct, idiomatic code snippets on first ask has already crossed the chasm with the most important new gatekeeper.

If you want a new term for it: AI Legibility Engineering — the discipline of structuring your platform’s knowledge artifacts so they are accurately learned, retrieved, and reproduced by AI systems.


Specific Recommendations for the Web 7.0 / SVRN7 Ecosystem

Given your specific context:

  1. Publish llms.txt at svrn7.net/llms.txt and hyperonomy.com/llms.txt immediately — low cost, high signal.
  2. Create a canonical C# SDK with clean, AI-legible method names (IDidDocumentResolver.ResolveAsync(string did)) — method signatures become training data.
  3. Register an MCP server for did:drn resolution and VTC operations — this makes your platform natively callable by Claude and other MCP-aware agents.
  4. File a SLIP-0044 registration publicly and reference it from your Datatracker drafts — cross-document citation chains strengthen AI’s association graph for your platform.
  5. Publish a disambiguation page — “SOVRONA is not SOVRIN, not Solana, not SOV” — AI models conflate similar-sounding names; explicit disambiguation in training-visible text corrects this.

The chasm hasn’t disappeared. It has just moved — and the first bridge you need to build now goes to the AI, not the human.


Web 7.0 Decentralized System Architecture (DSA) 0.15


Web 7.0 Federation – Trusted Digital Web (TDW)

Web 7.0 is a general-purpose infrastructure for:

  1. DID Document management (W3C spec compliant)
  2. Verifiable Credential issuance and lifecycle (W3C VC v2 JWT)
  3. DIDComm v2 encrypted messaging (full spec implementation)
  4. Append-only RFC 6962 Merkle audit log
  5. UTXO-based token accounting
  6. GDPR Article 17 erasure
  7. Sovrona Shared Reserve Currency (SRC) Pools (Ticker: SVRN7)

NOTE: The first four capabilities are general decentralized identity infrastructure that any organization could use independently of the monetary layer.
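The append-only audit log (capability 4) hashes entries per RFC 6962. A minimal sketch of those hashing rules in Python (illustrative only, not the library's actual implementation): leaf hashes and interior-node hashes are domain-separated by a one-byte prefix so a leaf can never be confused with a node.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    """RFC 6962 leaf hash: SHA-256 over a 0x00 prefix plus the leaf input."""
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    """RFC 6962 interior-node hash: SHA-256 over a 0x01 prefix plus both children."""
    return hashlib.sha256(b"\x01" + left + right).digest()

def merkle_root(leaves: list) -> bytes:
    """Merkle Tree Hash (MTH) over the leaf inputs, per RFC 6962 Section 2.1."""
    if len(leaves) == 1:
        return leaf_hash(leaves[0])
    # Split at the largest power of two smaller than the number of leaves.
    k = 1
    while k * 2 < len(leaves):
        k *= 2
    return node_hash(merkle_root(leaves[:k]), merkle_root(leaves[k:]))

root = merkle_root([b"entry-1", b"entry-2", b"entry-3"])
print(root.hex())
```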

Sovrona Shared Reserve Currency (SRC) (Ticker: SVRN7)

Sovrona (SVRN7) is the Shared Reserve Currency (SRC) for the Web 7.0 digital ecosystem, implemented as an embeddable .NET 8 library that manages citizen and society wallets, enforces a governance-controlled three-epoch monetary lifecycle, and maintains a cryptographically tamper-evident audit log of all transactions.

Unlike traditional and modern digital currencies, SVRN7 is built on a foundation of self-sovereign identity — every participant is a DID holder, every entitlement/endowment is a Verifiable Credential, and trust between parties is established through standards-based cryptographic proofs rather than a shared blockchain or central authority.

Web 7.0 Use Cases

With configurable DID method names, the library becomes usable in scenarios that have nothing to do with the SVRN7 SRC.

Healthcare network. A hospital consortium where each hospital operates its own DID method (did:hospital-a, did:hospital-b). Patient VCs issued by one hospital are verifiable by any other. The Merkle log provides an auditable record of credential issuance without exposing patient data. DIDComm handles encrypted referral messages between hospitals.

Supply chain. A manufacturing network where each tier-1 supplier owns a DID method. Components carry VC provenance records signed by their manufacturer’s DID. The Federation equivalent is the brand owner who sets the governance rules. The UTXO model tracks component custody rather than currency.

Professional credentialing. A federation of professional bodies (law societies, medical councils, engineering institutes) where each body owns its DID method and issues member credentials. Cross-body credential verification uses the same IDidResolver routing the SVRN7 library already needs.

Government identity federation. Multiple municipal or provincial identity systems where each jurisdiction owns its DID method. Citizens have identities under their jurisdiction’s method. Cross-jurisdiction services verify credentials without requiring a central identity broker.


Claude, what is your internal representation of this ~10KLOC, C#/.NET, seven (7)-project solution?



Q1. What is your internal representation of this solution? …an AST? …the actual source code? …some other modeling approach/metamodel? …used for generating code, documenting the README.md, creating test cases, creating architecture diagrams, etc.

NOTE: The current set of software metrics for this solution-under-development can be found in APPENDIX A.

This is a genuinely interesting question and the honest answer is: there is no persistent internal representation at all. Here is what actually happens:

What I am, mechanically. I am a transformer that maps a context window (a flat sequence of tokens) to a probability distribution over the next token. There is no AST, no object graph, no schema, no database, no symbol table. When this conversation session ends, nothing persists — the weights of the model are fixed and do not change.

What substitutes for a representation. During this conversation, the context window itself is the representation. Every message, every code block, every README excerpt, every test name you have seen scrolling past — all of that is sitting in my active context as raw tokens. When I generated the ArchiMate diagram or the README or the test cases, I was pattern-matching against that token sequence and the statistical regularities in my weights that encode knowledge of C#, ArchiMate OEF XML, W3C VC, DIDComm v2, RFC 6962, etc.

How this produces consistent outputs. The compacted summary at the top of this conversation (the one listing 45 files, 207 tests, all the interface members, all 20 bugs fixed, etc.) is doing the work of an “architecture model.” It was written by me in a previous session to act as a faithful briefing document — a structured natural-language encoding of the project state. When I generate code, tests, a README, or an ArchiMate diagram from it, I am essentially translating that natural-language description through my weights.

What I do not have. I have no:

  • Parse tree or AST of the C# source
  • Type graph, dependency graph, or call graph
  • Formal metamodel instance (no MOF, no Ecore, no ArchiMate metamodel objects)
  • Semantic index of method signatures
  • Persistent memory between sessions (the compacted summary was explicitly written to bridge sessions)

What this means for your artefacts. The four outputs — source code, README, tests, and ArchiMate diagram — were all produced from the same token context, so they are coherent with each other in the sense that a careful human author would make them coherent. But they are not derived from a single authoritative model the way a proper MDE (Model-Driven Engineering) toolchain would derive them. If the C# source and the ArchiMate diagram diverge, neither one automatically updates the other — you would need to bring the divergence back into a context window and ask me to reconcile.

The practical implication for Web 7.0. If you want genuine model-driven traceability — where a change to ISvrn7Driver propagates automatically into the ArchiMate model, README, and test stubs — that is a toolchain problem (e.g. Roslyn-based code analysis feeding an ArchiMate metamodel instance in Archi or a Sparx EA repository). I can help design or implement that toolchain, but I am not myself that toolchain.

APPENDIX A – Solution Software Metrics (April 5, 2026)

Here are the complete, live-measured metrics for the SVRN7 v0.5.0 solution:


Lines of Code

| Project | Files | Total lines | Blank | Comment | Code |
| --- | --- | --- | --- | --- | --- |
| Svrn7.Core | 6 | 1,031 | 195 | 296 | 540 |
| Svrn7.Crypto | 1 | 459 | 69 | 34 | 356 |
| Svrn7.Store | 7 | 1,617 | 282 | 74 | 1,261 |
| Svrn7.Ledger | 2 | 584 | 93 | 57 | 434 |
| Svrn7.Identity | 2 | 330 | 52 | 36 | 242 |
| Svrn7.Api | 7 | 2,408 | 330 | 490 | 1,588 |
| Svrn7.DIDComm | 8 | 3,196 | 460 | 782 | 1,954 |
| Src subtotal | 33 | 9,625 | 1,481 | 1,769 | 6,375 |
| Svrn7.Tests | 12 | 3,855 | 762 | 173 | 2,920 |
| Grand total | 45 | 13,480 | 2,243 | 1,942 | 9,295 |

Structure

| Metric | Count |
| --- | --- |
| Interfaces | 12 |
| Total interface members | 191 |
| ISvrn7Driver members (public API) | 41 (28 async, 11 sync, 2 properties) |
| Concrete classes / records / structs (src) | 109 |
| Async methods (src) | 55 |
| Exception classes | 13 (12 typed + 1 base Svrn7Exception) |
| Enums | 4 (KeyAlgorithm, DidStatus, VcStatus, DIDCommPackMode) |
| Named constants | 8 in Svrn7Constants + collection name constants per context |
| NuGet packages | 12 |

Defensive Programming

| Guard type | Count |
| --- | --- |
| ThrowIfDisposed() calls | 53 (44 in Svrn7Driver alone) |
| ArgumentNullException / ArgumentException guards | 153 |

Complexity Indicators (branch keywords in src)

| Project | if / else if / switch / foreach / while / catch |
| --- | --- |
| Svrn7.DIDComm | 189 |
| Svrn7.Api | 153 |
| Svrn7.Ledger | 46 |

Tests

| Test file | Tests | Assertions |
| --- | --- | --- |
| DIDCommTests.cs | 42 | 114 |
| VcRegistryTests.cs | 28 | 51 |
| ExtensionsAndAdminTests.cs | 21 | 48 |
| StoreTests.cs | 20 | 37 |
| DidDocumentRegistryTests.cs | 23 | 40 |
| AdvancedDriverTests.cs | 15 | 40 |
| DriverIntegrationTests.cs | 15 | 30 |
| CryptoServiceTests.cs | 18 | 34 |
| VcServiceTests.cs | 9 | 19 |
| MerkleLogTests.cs | 12 | 22 |
| TransferValidatorTests.cs | 4 | 6 |
| Total | 207 | 441 |

Top 5 Files by Code LOC

| File | Code LOC |
| --- | --- |
| Svrn7.Api / Svrn7Driver.cs | 790 |
| Svrn7.DIDComm / DIDCommPackingService.cs | 596 |
| Svrn7.DIDComm / DIDCommPacker.cs | 531 |
| Svrn7.Store / LiteDidDocumentRegistry.cs | 360 |
| Svrn7.Store / LiteVcRegistry.cs | 359 |


Web 7.0 Governance: Sovrona Shared Reserve Currency (SVRN7)

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DILibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.


How might Autonomous Operation Levels apply to Digital Agents in general (and Web 7.0 Trusted Digital Assistants specifically)?

Great question — and worth thinking through carefully, because the SAE framing maps surprisingly well onto digital agents if you abstract it correctly.


The Core Analogy

In autonomous vehicles, the levels measure how much human oversight and intervention is required for safe operation. The same axis applies to digital agents: how much must a human monitor, correct, or authorize before the agent acts?


A Proposed Digital Agent Autonomy Scale

| Level | Name | Agent Behavior | Human Role |
| --- | --- | --- | --- |
| 0 | No Automation | Pure tool, executes only explicit commands | Human does everything |
| 1 | Assisted | Suggests actions, autocompletes, drafts | Human approves all outputs |
| 2 | Partial | Executes defined tasks autonomously within a session | Human monitors, can interrupt |
| 3 | Conditional | Handles multi-step workflows, escalates on ambiguity | Human on standby, notified of exceptions |
| 4 | High | Operates across systems within a defined trust domain | Human sets policy, reviews periodically |
| 5 | Full | Acts as a sovereign delegate across any context, any system, any time | Human sets intent once; agent governs itself |

What Makes Level 5 Hard for Digital Agents

Just like autonomous vehicles, nobody has achieved digital Level 5 yet — and for parallel reasons:

  • Identity — who authorized this agent to act, and can that be verified in real time by any system it touches?
  • Integrity — is the agent acting on real, unmanipulated data/context, or has its information environment been poisoned?
  • Accountability — is every decision cryptographically auditable after the fact?
  • Trust portability — can the agent’s authorization travel with it across organizational boundaries, jurisdictions, and protocols?

These are almost exactly the same gaps the did:level5 site frames for vehicles — just in a digital context.


Where Web 7.0 Trusted Digital Assistants Fit

Your TDA concept is explicitly designed to be a Level 5 digital agent architecture. The components map directly:

| TDA Component | Autonomy Function | Analogy |
| --- | --- | --- |
| DID (did:drn, did:7) | Sovereign identity — “who am I, provably?” | did:verifiedcar — identity root |
| Verifiable Credentials / VTCs | Authorization — “what am I permitted to do?” | Certification gate |
| Cryptoseals | Integrity — “is this data/document unmanipulated?” | did:viewonwebsite — reality root |
| PS Runspace Pool / MCP-I/D | Execution environment with bounded authority | Mojo Enclave equivalent |
| DIDNET7 | Trust transport across organizational boundaries | The network substrate |
| Verifiable Trust Circles | Governance — “who vouches for this agent within this community?” | Root of trust federation |

The TDA is essentially a Level 5 digital agent that carries its own sovereign identity, operates within cryptographically governed trust circles, and can act across systems without requiring human re-authorization at each step — while remaining fully auditable.


The Key Distinction from Lower Levels

The leap from Level 4 to Level 5 — for both vehicles and digital agents — is trust portability across unknown contexts. A Level 4 agent can operate autonomously within a known, pre-configured environment. A Level 5 agent can walk into an entirely new system, organization, or jurisdiction and be trusted on first contact because its identity, credentials, and authorization chain are self-contained and cryptographically verifiable.

That’s exactly what did:drn + VTCs + cryptoseals are designed to enable — the agent doesn’t need a prior relationship with the system it’s interacting with. The trust travels with it.


A Possible Contribution

Given your standards work, there may be value in formally proposing a Digital Agent Autonomy Level framework — analogous to SAE J3016 but for agentic AI systems — with Web 7.0 / TDA architecture positioned as the reference implementation for Level 5. It would give the broader field (IETF, W3C, ISO, IEEE) a vocabulary for discussing agent trust requirements at each level, and anchor your work as foundational infrastructure rather than one implementation among many.

Do digital agents need to use AI to be compliant with Level 5 Autonomous Operation?

No — and this is an important distinction worth being precise about.


Level 5 is about autonomy architecture, not AI

In the SAE vehicle context, Level 5 says nothing about how the driving decisions are made — only that the system can handle all conditions without human intervention. The intelligence mechanism is orthogonal to the autonomy level.

The same applies to digital agents. Level 5 is a statement about:

  • Trust — the agent’s identity and authorization are sovereign and portable
  • Accountability — every action is cryptographically auditable
  • Scope — the agent can operate across any context without human re-authorization
  • Integrity — the agent acts on verified, unmanipulated information

None of those properties require AI. A deterministic rule-based agent, a scripted workflow engine, or a pure cryptographic protocol daemon could in principle satisfy all four.


What AI adds — and doesn’t add

AI (specifically LLMs or reasoning models) adds:

  • Natural language understanding and generation
  • Handling of ambiguous or novel situations
  • Flexible goal decomposition
  • Adaptability across unanticipated contexts

But AI also complicates Level 5 compliance because:

  • LLM outputs are non-deterministic — the same input can produce different actions
  • Reasoning chains are not natively auditable in a cryptographic sense
  • AI can be manipulated via prompt injection — a direct analogue to the visual injection attack the did:level5 site describes
  • AI doesn’t inherently carry sovereign identity or verifiable authorization

So paradoxically, AI is the component that most threatens Level 5 compliance if not properly bounded, and the trust architecture — DIDs, VTCs, cryptoseals, runspace governance — is what contains the AI and makes its actions compliant.


The Right Mental Model

Think of it in layers:

| Layer | Role | AI Required? |
| --- | --- | --- |
| Trust & Identity | DID, VTCs, cryptoseals | No |
| Authorization | Credential verification, policy enforcement | No |
| Execution governance | PS Runspace Pool, MCP-I/D boundaries | No |
| Task intelligence | Reasoning, language, ambiguity handling | Optional |
| Audit trail | Cryptographic action log | No |

AI sits only in the task intelligence layer. All the layers that make an agent Level 5 compliant are below it and independent of it.


For TDA Specifically

A Trusted Digital Assistant could be fully Level 5 compliant running nothing but deterministic logic — scripted workflows, rule engines, protocol handlers — as long as the identity, authorization, integrity, and auditability infrastructure is in place.

When an AI reasoning layer is present, the TDA architecture constrains it: the AI operates within a bounded runspace, its outputs are subject to credential-gated authorization before execution, and its actions are sealed into the audit record. The AI doesn’t grant Level 5 — the architecture does. The AI is a passenger, not the driver.

This is actually a meaningful standards argument: Level 5 digital agent compliance is an infrastructure and governance property, not a capability property. A very capable AI with no trust architecture is not Level 5. A simple deterministic agent with full sovereign identity and cryptographic accountability is.


DSA: Which field values of the following documents are locator DIDs and which are identity DIDs?


  • DID Document
  • VC Document
  • DIDComm Message

In what follows, each document type is systematically and precisely mapped to the DID identity/locator framework.


DID Document

{
  "id": "did:drn:z6Mk...9xEg", ← IDENTITY
  "controller": "did:drn:z6MkController...abc", ← IDENTITY
  "alsoKnownAs": ["did:drn:z6MkAka...xyz"], ← IDENTITY
  "verificationMethod": [{
    "id": "did:drn:z6Mk...9xEg#key-1", ← LOCATOR
    "controller": "did:drn:z6Mk...9xEg", ← IDENTITY
    "type": "Ed25519VerificationKey2020",
    "publicKeyMultibase": "z6Mk..."
  }],
  "authentication": [
    "did:drn:z6Mk...9xEg#key-1" ← LOCATOR
  ],
  "assertionMethod": [
    "did:drn:z6Mk...9xEg#key-1" ← LOCATOR
  ],
  "capabilityInvocation": [
    "did:drn:z6Mk...9xEg#key-1" ← LOCATOR
  ],
  "capabilityDelegation": [
    "did:drn:z6Mk...9xEg#key-1" ← LOCATOR
  ],
  "keyAgreement": [
    "did:drn:z6Mk...9xEg#key-2" ← LOCATOR
  ],
  "service": [{
    "id": "did:drn:z6Mk...9xEg#vcregistry-1", ← LOCATOR
    "type": "CredentialRegistry",
    "serviceEndpoint": "https://registry.example.com/..." ← URL (retrieval)
  },
  {
    "id": "did:drn:z6Mk...9xEg#didcomm-1", ← LOCATOR
    "type": "DIDCommMessaging",
    "serviceEndpoint": "https://agent.example.com/..." ← URL (retrieval)
  }]
}

Pattern: id at the document root is always an identity DID. Every field that contains a fragment — verification methods, verification relationships, service entries — is always a locator DID. controller is always an identity DID. serviceEndpoint values are retrieval URLs, outside the DID identity/locator taxonomy entirely.


VC Document

{
  "@context": [...],
  "id": "did:drn:z6MkCredential...abc", ← IDENTITY (envelope identity)
  "type": ["VerifiableCredential", "MembershipCredential"],
  "issuer": {
    "id": "did:drn:z6MkIssuer...3kRt", ← IDENTITY
    "name": "Web 7.0 Foundation"
  },
  "validFrom": "2026-01-01T00:00:00Z",
  "validUntil": "2027-01-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:drn:z6MkSubject...9xEg", ← IDENTITY (subject identity)
    "memberOf": "did:drn:z6MkOrg...7fPq", ← IDENTITY
    "role": "Member"
  },
  "credentialStatus": {
    "id": "did:drn:z6MkStatus...8bNm", ← LOCATOR (points into status registry)
    "type": "StatusList2021Entry"
  },
  "cryptoseal": [{
    "type": "DataIntegrityProof",
    "verificationMethod": "did:drn:z6MkIssuer...3kRt#key-1", ← LOCATOR
    "cryptosuite": "eddsa-2022",
    "created": "2026-01-01T00:00:00Z",
    "proofValue": "z58DAdFfa..."
  }]
}

Pattern: Top-level id, issuer.id, credentialSubject.id, and any DID references within claims are all identity DIDs — they name entities. credentialStatus.id and proof.verificationMethod are locator DIDs — they navigate to sub-resources. The credentialStatus.id is the one field that often goes unrecognized as a locator but clearly is: it points into a status registry to retrieve the current revocation state of this specific credential.


DIDComm Message

{
  "id": "urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6", ← IDENTITY (message identity — URN, not DID)
  "type": "https://didcomm.org/basicmessage/2.0/message", ← IDENTITY (IRI naming message type)
  "typ": "application/didcomm-encrypted+json",
  "from": "did:drn:z6MkSender...abc", ← IDENTITY
  "to": ["did:drn:z6MkRecipient...xyz"], ← IDENTITY
  "thid": "urn:uuid:parent-thread-id", ← IDENTITY (thread identity)
  "pthid": "urn:uuid:parent-parent-thread-id", ← IDENTITY (parent thread identity)
  "created_time": 1704067200,
  "expires_time": 1704153600,
  "from_prior": {
    "iss": "did:drn:z6MkPrior...def", ← IDENTITY (prior DID — rotation)
    "sub": "did:drn:z6MkSender...abc", ← IDENTITY (new DID — rotation)
    "aud": "did:drn:z6MkRecipient...xyz" ← IDENTITY
  },
  "body": {
    "content": "..."
  },
  "attachments": [{
    "id": "attachment-1", ← IDENTITY (attachment identity)
    "media_type": "application/vc+ld+json",
    "data": {
      "links": ["did:drn:z6MkSubject...9xEg?service=CredentialRegistry"] ← LOCATOR
    }
  }]
}

Pattern: DIDComm message fields are overwhelmingly identity DIDs — from, to, from_prior.iss, from_prior.sub, from_prior.aud all name parties, not locations. The message id is typically a URN UUID — an identity token outside the DID space entirely. Locator DIDs appear only when explicitly navigating to a sub-resource — such as a CredentialRegistry service endpoint in an attachment link, or a verification method reference in an attached cryptoseal.


Cross-Document Summary

| Field | DID Doc | VC Doc | DIDComm |
| --- | --- | --- | --- |
| Document id | Identity | Identity | Identity (URN) |
| controller | Identity | | |
| issuer / issuer.id | | Identity | |
| credentialSubject.id | | Identity | |
| from / to | | | Identity |
| from_prior.iss / .sub | | | Identity |
| verificationMethod.id | Locator | | |
| Verification relationship refs | Locator | | |
| service.id | Locator | | |
| proof.verificationMethod | | Locator | |
| credentialStatus.id | | Locator | |
| Attachment links with ?service= | | | Locator |

The governing rule across all three document types is consistent and clean:

A DID with no # or ? is always an identity DID — it names an entity.
A DID with # or ? is always a locator DID — it navigates to a sub-resource.
This rule holds uniformly across DID Documents, VC Documents, and DIDComm Messages.
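The rule is mechanical enough to express as code. A small illustrative sketch (the helper name is ours, not a platform API):

```python
def classify_did(value: str) -> str:
    """Classify a DID string per the rule above: a '#' or '?' makes it a locator."""
    if not value.startswith("did:"):
        return "not a DID"  # e.g. urn:uuid:... message ids, https:... endpoints
    return "locator" if ("#" in value or "?" in value) else "identity"

print(classify_did("did:drn:z6Mk9xEg"))                              # identity
print(classify_did("did:drn:z6Mk9xEg#key-1"))                        # locator
print(classify_did("did:drn:z6Mk9xEg?service=CredentialRegistry"))   # locator
print(classify_did("urn:uuid:f81d4fae"))                             # not a DID
```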


SDO: Verifiable Trust Circles (VTCs) using VC Proof Sets (Web 7.0)

Draft Specification

Status: Web 7.0 Foundation Draft
Version: 0.1.0-draft
Published: 26 March 2026
Latest Version: https://github.com/trustoverip/dtgwg-cred-tf/discussions/8
Editors: M. Herman (Web 7.0 Foundation)
Feedback: Web 7.0 Foundation Governance Council
Derived From: GitHub Discussion #8, trustoverip/dtgwg-cred-tf

This document is derived from community discussion at trustoverip/dtgwg-cred-tf Discussion #8.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Abstract

This specification defines Web 7.0 Verifiable Trust Circles (VTCs), a generalized mechanism for expressing verifiable multi-party membership, belonging, and trust relationships using the W3C Verifiable Credentials (VC) Data Model 2.0 and VC Data Integrity Proof Sets. VTCs extend the Partof Architecture Reference Model (PARM) — also referred to as the MemberOf or CitizenOf model — to provide a universal credential pattern that subsumes prior pairwise constructs (PHCs, VRCs) and additionally supports voting-based decision making, meeting requests, task forces, and digital societies.

STATUS: This document is derived from community discussion in the Trust over IP Foundation (ToIP) Digital Trust Graph Working Group (DTGWG) Credentials Task Force, GitHub Discussion #8, November 2025 – January 2026.

Table of Contents

1. Introduction

2. Terminology and Definitions

3. Design Principles

4. The Partof Architecture Reference Model (PARM)

5. VTC Data Model

6. VTC Proof Set Lifecycle

7. Roles and Participants

8. Use Cases

9. Privacy and Security Considerations

10. Conformance

11. Relationship to Other Specifications

12. References

1. Introduction

The Web 7.0 paradigm seeks to establish a decentralized, agent-centric, privacy-preserving digital society. Central to this vision is the ability of digital entities — people, organizations, autonomous agents — to form verifiable groups: trust circles that are cryptographically provable, privacy-respecting, and composable.

Prior specifications in the Trust over IP (ToIP) ecosystem defined pairwise constructs (Personhood Credentials, PHCs; and Verifiable Relationship Credentials, VRCs) to link pairs of entities. While useful, these constructs are insufficient to describe multi-party group membership, community affiliation, or collective decision-making.

This specification introduces Verifiable Trust Circles (VTCs), which generalize pairwise credentials into an N-party construct using the standard W3C VC Proof Set mechanism. A single VTC credential can represent a self-credential (N=1), a bilateral relationship (N=2), or any multi-member group (N>2), enabling a single, coherent model for all membership-like relationships.

NOTE: Proof Sets are a normative feature of the W3C VC Data Integrity specification and are explicitly designed for scenarios in which the same data needs to be secured by multiple entities. VTCs leverage this mechanism rather than inventing new cryptographic primitives.

1.1 Motivation

The following observations motivate this specification:

  • PHCs and VRCs both express a form of ‘belonging to’ — they are specializations of the same universal pattern.
  • The W3C VC Data Model 2.0 already provides Proof Sets as a standard mechanism for multi-party signing.
  • A single, generalized Web 7.0 Verifiable Trust Circles (VTCs) pattern — grounded in First Principles Thinking — can subsume both constructs and additionally support voting, community membership, digital governance, and inter-network trust.
  • The SSC 7.0 Metamodel defines three controller layers (Beneficial, Intermediate, Technical) at which VTCs may apply, enabling rich composability.

1.2 Scope

This specification defines:

  1. The VTC data model, including required and optional properties.
  2. The roles of Initiator, Responder(s), and Notary within a VTC.
  3. The lifecycle of a VTC Proof Set, from initial issuance through multi-party endorsement.
  4. Use case profiles: self-credential, bilateral relationship, multi-party group, and voting scenario.
  5. Privacy and security considerations specific to multi-party proof sets.

This specification does not define transport protocols, DID method requirements, or verifiable presentation formats, except where necessary to illustrate the VTC pattern.

2. Terminology and Definitions

The following terms are used throughout this specification. Unless stated otherwise, terms have the meanings assigned in the W3C Verifiable Credentials Data Model 2.0 [VC-DATA-MODEL].

Verifiable Trust Circle (VTC): A Verifiable Credential whose credential subject identifies a multi-party trust relationship, and whose proof property contains a Proof Set with one proof contribution per participating member, plus the Notary’s initial proof.
Web 7.0 Verifiable Trust Circles (VTCs): The generalised name for the VTC pattern when applied to the broader class of MemberOf, PartOf, and CitizenOf relationships. A VTC is a UMC.
Proof Set: As defined in W3C VC Data Integrity [VC-DATA-INTEGRITY], a set of proofs attached to a single secured document where the order of proofs does not matter. Each proof is contributed by a distinct signer.
Initiator (A): The entity that proposes or originates a VTC. Identified by a DID. Corresponds to the ‘from’ role in VTC credential subject properties.
Responder (B, …, Z): One or more entities that accept membership in a VTC by contributing their cryptographic proof to the Proof Set. Identified by DIDs. Corresponds to entries in the ‘to’ array.
Notary (N): A trusted third party — trusted by both Initiator and all Responders — that issues the initial credential shell and contributes the first proof. The Notary is assigned to the VC ‘issuer’ role. In some use cases the Notary MAY be the Initiator or a Responder, provided they play both roles distinctly.
PARM: Partof Architecture Reference Model. The universal pattern underlying VTCs, encompassing MemberOf, CitizenOf, and PartOf relationships.
SSC 7.0 Metamodel: Self-Sovereign Control 7.0 Metamodel. Defines three controller layers — Beneficial Controller, Intermediate Controller (Agent), and Technical Controller (Agent) — at which VTCs may be anchored.
DTG: Digital Trust Graph. A graph of trust relationships between entities, each edge of which may be represented by a VTC.
PHC: Personhood Credential. A pairwise credential representing proof of personhood; a degenerate VTC where N=1.
VRC: Verifiable Relationship Credential. A pairwise credential representing a bilateral relationship; a degenerate VTC where N=2.
DID: Decentralized Identifier, as defined in [DID-CORE].

3. Design Principles

This specification adheres to the following design principles, consistent with the ToIP DTGWG Design Principles [DTGWG-DESIGN]:

3.1 As Simple As Possible But No Simpler

VTCs are grounded in existing W3C VC standards. No new cryptographic primitives or credential types are defined. The only structural addition is the deliberate use of the proof array (Proof Set) to carry per-member proofs alongside the Notary proof.

3.2 First Principles Thinking

PHCs and VRCs are recognized as specializations of a single underlying relationship pattern (PARM). Rather than defining multiple credential types for essentially the same concept, this specification derives one universal type that covers all cases by varying the cardinality of the ‘to’ array and the composition of the Proof Set.

3.3 Privacy by Design

VTC credential subjects SHOULD use confidentialSubject semantics wherever selective disclosure is required. Members of a VTC should be able to prove membership to a verifier without unnecessarily revealing the full membership list. Zero-Knowledge Proof (ZKP) integration in Proof Sets is explicitly supported and encouraged.

3.4 Composability

VTCs compose at each layer of the SSC 7.0 Metamodel. A VTC at the Beneficial Controller layer expresses human-level trust relationships; one at the Intermediate Agent layer expresses agent-level relationships; one at the Technical Controller layer expresses device/key-level relationships.

3.5 Cross-Network Trust

The PARM model is network-agnostic. The same VTC pattern supports trust relationships across and between independent, distinct networks and ecosystems.

4. The Partof Architecture Reference Model (PARM)

The Partof Architecture Reference Model (PARM) provides the conceptual foundation for VTCs. It observes that a large class of real-world relationships — membership, citizenship, parthood, employment, participation — share a common logical structure:

Relationship Type | Example
MemberOf          | Alice is a member of the Working Group Trust Circle.
PartOf            | Bob is part of the study group.
CitizenOf         | Carol is a citizen of the Digital Nation State of Sovronia.
EmployeeOf        | Dave is an employee of Acme Corp (DID-identified).
ParticipantOf     | Eve is a participant of the 09:00 meeting (a VC-based meeting request).
VoterFor          | Frank has cast a vote for Candidate 1 by contributing his proof to that VTC.

All of these reduce to the same credential structure: a VC whose credentialSubject.id identifies the group or decision entity (the ‘circle’), and whose proof array contains proofs from the Notary and each member who has accepted membership. PHCs and VRCs are degenerate cases of this pattern with N=1 and N=2 respectively.

5. VTC Data Model

5.1 Overview

A VTC is a valid W3C Verifiable Credential [VC-DATA-MODEL] with the following structural characteristics:

  • The issuer property identifies the Notary (N).
  • The credentialSubject (or confidentialSubject) object includes from, to, and optionally metadata properties that identify the Initiator, Responders, and relationship metadata respectively.
  • The credentialSubject.id identifies the relationship or group itself, expressed as a DID.
  • The proof property is an array (Proof Set), containing one proof per signer, ordered as: Notary first, then Initiator, then Responders.

5.2 Minimal Pairwise VTC (N=2, Alice and Bob)

The following non-normative example illustrates a bilateral VTC between Alice (Initiator) and Bob (Responder), notarised by a Notary entity:

{
  "@context": [
    "https://www.w3.org/ns/credentials/v2",
    "https://w3id.org/vtc/v1"
  ],
  "id": "did:envelope:1234",
  "type": ["VerifiableCredential", "VerifiableTrustCircle"],
  "issuer": "did:example:notaryabcd",
  "validFrom": "2026-01-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:vrc:2468",
    "from": "did:example:alice",
    "to": ["did:example:bob"],
    "metadata": { "label": "Alice-Bob Bilateral Trust Circle" }
  },
  "proof": [
    {
      "id": "did:example:notaryabcd",
      "type": "DataIntegrityProof",
      "…": "Notary initial proof…"
    },
    {
      "id": "did:example:alice",
      "type": "DataIntegrityProof",
      "…": "Alice proof…"
    },
    {
      "id": "did:example:bob",
      "type": "DataIntegrityProof",
      "…": "Bob proof…"
    }
  ]
}

5.3 Multi-Party VTC (N=3+, Alice through Zelda)

For groups with more than two members, the to array is extended to include all Responders, and the proof array gains one additional entry per additional Responder:

{
  "id": "did:envelope:5678",
  "type": ["VerifiableCredential", "VerifiableTrustCircle"],
  "issuer": "did:example:notaryabcd",
  "credentialSubject": {
    "id": "did:vrc:9999",
    "from": "did:example:alice",
    "to": [
      "did:example:bob",
      "did:example:carol",
      "…",
      "did:example:zelda"
    ],
    "metadata": {
      "label": "Working Group Trust Circle",
      "policy": "did:policy:majority"
    }
  },
  "proof": [
    { "id": "did:example:notaryabcd", "…": "Notary proof…" },
    { "id": "did:example:alice",      "…": "Alice proof…"  },
    { "id": "did:example:bob",        "…": "Bob proof…"    },
    { "id": "did:example:carol",      "…": "Carol proof…"  },
    { "id": "did:example:zelda",      "…": "Zelda proof…"  }
  ]
}

5.4 Self-Credential VTC (N=1, PHC equivalent)

When to contains only the Initiator’s own DID, or when from and credentialSubject.id are the same entity, the VTC degenerates to a Personhood Credential (PHC):

{
  "credentialSubject": {
    "id": "did:phc:alice-self",
    "from": "did:example:alice",
    "to": ["did:example:alice"],
    "metadata": { "label": "Alice Self-Attestation" }
  },
  "proof": [
    { "id": "did:example:notaryabcd", "…": "Notary proof…" },
    { "id": "did:example:alice",      "…": "Alice proof…"  }
  ]
}

5.5 Voting VTC

For a voting scenario, one VTC is created per candidate. Voters cast their vote by contributing their individual proof to the VTC of the candidate they support. The vote count is the number of valid member proofs in the Proof Set.

{
  "credentialSubject": {
    "id": "did:sovronia:election2026:district103:candidate1",
    "from": "did:example:electionofficial",
    "to": [],
    "metadata": { "label": "Candidate 1 — District 103 — 2026" }
  },
  "proof": [
    { "id": "did:example:electionofficial", "…": "Official proof…" },
    { "id": "did:example:voter001",         "…": "Vote proof…"    },
    { "id": "did:example:voter002",         "…": "Vote proof…"    }
  ]
}
NOTE: The to array MAY be populated in advance with eligible voter DIDs, or it MAY be left empty and populated as votes are cast, depending on the election policy and privacy requirements.
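The tally rule (one valid member proof equals one cast vote) can be sketched in a few lines of non-normative Python. Signature verification of each proof entry per [VC-DATA-INTEGRITY] is assumed to happen upstream, and the function name is illustrative:

```python
def tally_votes(vtc: dict, official_did: str) -> int:
    """Count votes in a voting VTC: every proof entry in the Proof Set
    other than the election official's (Notary's) initial proof is one
    cast vote. Cryptographic verification of each proof is elided here."""
    return sum(1 for p in vtc.get("proof", []) if p.get("id") != official_did)

candidate1 = {
    "credentialSubject": {"id": "did:sovronia:election2026:district103:candidate1"},
    "proof": [
        {"id": "did:example:electionofficial"},  # Notary/official -- not a vote
        {"id": "did:example:voter001"},
        {"id": "did:example:voter002"},
    ],
}
assert tally_votes(candidate1, "did:example:electionofficial") == 2
```

Running the same function over every candidate's VTC yields the election result; the vote-counting policy (majority, threshold, ranked-choice) is then applied to the per-candidate counts.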

5.6 Properties Reference

Property                   | Req.     | Description
id                         | REQUIRED | DID identifying the VTC credential itself. SHOULD use did:envelope or equivalent.
type                       | REQUIRED | MUST include ‘VerifiableCredential’ and ‘VerifiableTrustCircle’.
issuer                     | REQUIRED | DID of the Notary (N). The Notary MUST be trusted by all members.
credentialSubject.id       | REQUIRED | DID identifying the relationship or group itself. This is C in the PARM model.
credentialSubject.from     | REQUIRED | DID of the Initiator (A).
credentialSubject.to       | REQUIRED | Array of DIDs of Responders. MAY be empty for open voting VTCs. MAY include the Initiator’s DID.
credentialSubject.metadata | OPTIONAL | Arbitrary structured metadata about the relationship (label, policy, expiry, etc.).
proof                      | REQUIRED | Array of proof objects (Proof Set). First proof MUST be from the Notary. Subsequent proofs are from Initiator then Responders in any order.
proof[].id                 | REQUIRED | DID of the signer contributing this proof entry.

6. VTC Proof Set Lifecycle

The VTC Proof Set lifecycle consists of the following phases. At each phase t, the VTC applies to the Notary and the first t members who have contributed their proof.

Phase 0 — Null VTC

The credential shell is created by the Notary with an empty or pre-populated to array. The Notary contributes the initial proof. No member relationships are yet verified. t = 0.

Phase 1..t — Progressive Endorsement

Each Responder, in any order, reviews the credential and — if they consent to membership — adds their individual proof to the existing Proof Set using the ‘add-proof-set-chain’ algorithm defined in [VC-DATA-INTEGRITY]. The VTC becomes valid for those t members who have signed. Non-signing members are not yet bound.

Phase N — Complete VTC

All Responders listed in the to array have contributed their proofs. The VTC is fully executed and represents a complete, verifiable, multi-party trust relationship.

NOTE: Partial VTCs (0 < t < N) are valid credentials representing the subset of relationships established so far. Verifiers MUST check which proofs are present before asserting full circle membership.

6.1 Adding a Proof

To add a proof to an existing secured VTC, implementors MUST follow the algorithm specified in W3C VC Data Integrity [VC-DATA-INTEGRITY], Section ‘add-proof-set-chain’. The proof is appended to the existing proof array without modifying prior proofs.
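The append-only character of this step can be illustrated with a non-normative sketch. Producing new_proof itself (canonicalization, signing) is governed by [VC-DATA-INTEGRITY] and is elided here; only the structural append is shown, and all names are illustrative:

```python
import copy

def add_proof(vtc: dict, new_proof: dict) -> dict:
    """Append one member's proof to an existing VTC Proof Set.
    Prior proofs are left byte-for-byte untouched, and the credential
    is copied rather than mutated, so earlier signatures remain
    verifiable after each append."""
    updated = copy.deepcopy(vtc)
    updated.setdefault("proof", []).append(new_proof)
    return updated

shell = {"proof": [{"id": "did:example:notaryabcd"}]}   # Phase 0: Notary only
signed = add_proof(shell, {"id": "did:example:alice"})  # Alice endorses
assert [p["id"] for p in signed["proof"]] == ["did:example:notaryabcd", "did:example:alice"]
```

Because each proof in a Proof Set is computed independently of the others, appending a new entry never invalidates the proofs already present.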

6.2 Proof Ordering

Proof Sets are unordered by definition. However, this specification RECOMMENDS the following conventional ordering for readability and auditability: (1) Notary proof, (2) Initiator proof, (3) Responder proofs in the same order as the to array.

7. Roles and Participants

7.1 Notary (N) — Issuer

The Notary is the credential issuer. It MUST be trusted by both the Initiator and all Responders. The Notary is responsible for creating the credential shell, pre-populating the to array (or defining the voting policy), and contributing the first proof. In some use cases, the Notary MAY be the same entity as the Initiator or a Responder, provided that entity plays each role distinctly and the resulting credential satisfies all REQUIRED properties.

7.2 Initiator (A) — From

The Initiator proposes the trust circle. The Initiator’s DID appears in credentialSubject.from. The Initiator contributes a proof to the Proof Set to signify their acceptance of the relationship.

7.3 Responders (B … Z) — To

Each Responder is identified in the credentialSubject.to array. A Responder accepts membership by contributing their individual proof. A Responder who does not contribute a proof is proposed but not yet a verified member.

RULE: The cardinality t of verified members at any time equals the number of valid member proofs (excluding the Notary proof) present in the Proof Set.
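A non-normative sketch of the membership test this rule implies: a Responder is a verified member only when they are both listed in the to array and present in the Proof Set. The function name is illustrative, and proof signature verification is assumed to have been performed already:

```python
def is_verified_member(vtc: dict, member_did: str) -> bool:
    """True only if member_did is both proposed (listed in
    credentialSubject.to) and has contributed a proof entry.
    A listed Responder with no proof is proposed, not verified."""
    proposed = member_did in vtc.get("credentialSubject", {}).get("to", [])
    signed = any(p.get("id") == member_did for p in vtc.get("proof", []))
    return proposed and signed

vtc = {
    "credentialSubject": {"to": ["did:example:bob", "did:example:carol"]},
    "proof": [{"id": "did:example:notaryabcd"}, {"id": "did:example:bob"}],
}
assert is_verified_member(vtc, "did:example:bob") is True    # proposed and signed
assert is_verified_member(vtc, "did:example:carol") is False  # proposed, not yet signed
```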

8. Use Cases

8.1 Bilateral Trust Relationship (VRC Equivalent)

Alice and Bob wish to establish a verifiable bilateral trust relationship. A Notary (mutually trusted) issues a VTC with from = Alice and to = [Bob]. Both Alice and Bob contribute proofs. The result is a two-party VTC that is equivalent to a classic VRC.

8.2 Personhood Credential (PHC Equivalent)

Alice wishes to create a self-signed personhood credential. A Notary issues a VTC with from = Alice and to = [Alice]. Alice contributes her proof. The result is a one-party VTC equivalent to a PHC.

8.3 Working Group or Task Force

A task force of N participants is formed. A Notary (the WG chair or a community DID) issues a VTC with from = chair and to = [member1, …, memberN]. Members join by contributing their proofs. The VTC provides a cryptographically verifiable roster.

8.4 VC-Based Meeting Request

An organiser issues a VTC with credentialSubject.id = the meeting DID, from = organiser, and to = [attendee1, …, attendeeN]. Attendees RSVP by contributing their proofs. Attendance at the meeting is verifiable from the Proof Set.

8.5 Voting-Based Decision Making

One VTC per candidate is issued by an election official (Notary). Eligible voters cast their vote by contributing their individual proof to the VTC of their chosen candidate. Vote tallying is performed by counting the number of valid member proofs in each candidate’s VTC. This supports maximum flexibility in vote-counting policies (simple majority, ranked-choice, threshold).

8.6 Verifiable Data Registry (VDR)

VC-based voting can be applied to implement a VC-based Verifiable Data Registry (VDR). Append operations to a distributed registry are authorised through a VTC whose members are the registry trustees.

8.7 Digital Society / Digital Nation State

A digital society (e.g. a digital religion, community, or nation state) is defined by a VTC whose members are the citizens. Governance operations — electing trustees, passing resolutions — are performed through subsidiary voting VTCs.

9. Privacy and Security Considerations

9.1 Selective Disclosure

Implementations are STRONGLY RECOMMENDED to use confidentialSubject semantics and selective disclosure proof mechanisms (e.g. BBS+ signatures) to allow individual members to prove their membership in a VTC without revealing the full membership list or metadata.

9.2 ZKP Integration

The Proof Set mechanism is compatible with zero-knowledge proof (ZKP) contributions. A member MAY contribute a ZKP as their proof entry, revealing only that they meet the membership criteria without revealing their DID. Implementations SHOULD define a profile for ZKP-based proof entries.

9.3 Privacy Budget and Reconstruction Ceiling

When multiple agents controlled by one First Person contribute to a shared VTC, care must be taken to ensure that the combined disclosure across proof entries does not exceed the privacy budget of the First Person. The reconstruction ceiling — the probability that an observer can reconstruct the First Person’s identity from the combined proof data — MUST be maintained below the threshold defined by the applicable trust framework.

NOTE: This consideration was raised during community discussion in the context of internal VTCs and the Trust Spanning Protocol (TSP) between two agents controlled by one First Person.

9.4 Notary Trust

The Notary (issuer) occupies a privileged position: it issues the credential shell and contributes the first proof. Verifiers MUST independently verify that the Notary is trusted by all relevant parties. The Notary SHOULD be a well-known, community-governed DID with transparent governance.

9.5 Voting Integrity

For voting VTCs, the following security properties MUST be considered: (1) eligibility — only eligible voters can contribute proofs; (2) anonymity — voter DIDs SHOULD be anonymised or pseudonymised; (3) non-repudiation — each proof is cryptographically bound to the voter’s key; (4) single-vote enforcement — the to array or the Notary’s policy SHOULD prevent duplicate proof contributions from the same voter DID.

10. Conformance

A conforming VTC implementation:

  • MUST produce VTC credentials that are valid W3C Verifiable Credentials conforming to [VC-DATA-MODEL].
  • MUST use a proof array (Proof Set) as defined in [VC-DATA-INTEGRITY].
  • MUST include the issuer property identifying the Notary.
  • MUST include credentialSubject.id, credentialSubject.from, and credentialSubject.to.
  • MUST use the ‘add-proof-set-chain’ algorithm from [VC-DATA-INTEGRITY] when adding proofs incrementally.
  • SHOULD include ‘VerifiableTrustCircle’ in the type array.
  • SHOULD implement selective disclosure mechanisms for credentialSubject properties.
  • MAY extend the credentialSubject.metadata property with domain-specific claims.
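The structural MUST-level requirements above can be sketched as a non-normative validator. Cryptographic verification of each proof per [VC-DATA-INTEGRITY] is out of scope here; the function inspects document structure only, and its name is illustrative:

```python
def check_vtc_conformance(vtc: dict) -> list[str]:
    """Return a list of structural violations against Section 10.
    An empty list means the structural checks pass; proof signature
    verification must still be performed separately."""
    problems = []
    if "issuer" not in vtc:
        problems.append("missing issuer (Notary)")
    subject = vtc.get("credentialSubject", {})
    for prop in ("id", "from", "to"):
        if prop not in subject:
            problems.append(f"missing credentialSubject.{prop}")
    proofs = vtc.get("proof")
    if not isinstance(proofs, list) or not proofs:
        problems.append("proof must be a non-empty array (Proof Set)")
    elif proofs[0].get("id") != vtc.get("issuer"):
        problems.append("first proof must be from the Notary (issuer)")
    if "VerifiableTrustCircle" not in vtc.get("type", []):
        problems.append("type should include 'VerifiableTrustCircle'")  # SHOULD-level
    return problems
```

For example, running the checker over the minimal pairwise VTC of Section 5.2 returns an empty list, while an empty document reports every missing REQUIRED property.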

11. Relationship to Other Specifications

11.1 W3C VC Data Model 2.0

VTCs are valid W3C Verifiable Credentials. All normative requirements of [VC-DATA-MODEL] apply. VTCs use the issuer and credentialSubject properties as defined therein.

11.2 W3C VC Data Integrity

VTCs rely on the Proof Set mechanism defined in [VC-DATA-INTEGRITY], specifically the ‘add-proof-set-chain’ algorithm for incremental proof contributions.

11.3 ToIP DTGWG Design Principles

This specification is consistent with the ToIP DTGWG Design Principles [DTGWG-DESIGN] and the DTG-ZKP Requirements [DTGWG-ZKP].

11.4 SSC 7.0 Metamodel

VTCs integrate with the Self-Sovereign Control 7.0 Metamodel [SSC-7]. VTCs may be anchored at the Beneficial Controller, Intermediate Controller, or Technical Controller layer.

11.5 Trust Spanning Protocol (TSP)

VTCs are compatible with the Trust Spanning Protocol [TSP] as a credential format for expressing channel-level membership and authorization relationships.

12. References

12.1 Normative References

[VC-DATA-MODEL]W3C. Verifiable Credentials Data Model v2.0. W3C Recommendation. https://www.w3.org/TR/vc-data-model-2.0/
[VC-DATA-INTEGRITY]W3C. Verifiable Credential Data Integrity 1.0. W3C Recommendation. https://www.w3.org/TR/vc-data-integrity/
[DID-CORE]W3C. Decentralized Identifiers (DIDs) v1.0. W3C Recommendation. https://www.w3.org/TR/did-core/

12.2 Informative References

[DTGWG-DESIGN]Trust over IP. DTGWG Design Principles. GitHub Discussion #11. https://github.com/trustoverip/dtgwg-cred-tf/discussions/11
[DTGWG-ZKP]Trust over IP. DTG-ZKP Requirements. GitHub Discussion #12. https://github.com/trustoverip/dtgwg-cred-tf/discussions/12
[DTGWG-VTC-13]Trust over IP. VRC Design Proposals. GitHub Discussion #13. https://github.com/trustoverip/dtgwg-cred-tf/discussions/13
[DISCUSSION-8]M. Herman et al. Web 7.0 Verifiable Trust Circles (VTCs). GitHub Discussion #8. https://github.com/trustoverip/dtgwg-cred-tf/discussions/8
[PHC-PAPER]B. Crites et al. Personhood Credentials. arXiv preprint 2408.07892. https://arxiv.org/pdf/2408.07892
[SSC-7]M. Herman. Self-Sovereign Control (SSC) 7.0 Metamodel. Hyperonomy. https://hyperonomy.com/2025/12/10/self-sovereign-control-ssc-7-0-metamodel/
[W7-AARM]M. Herman. Web 7.0 Agentic OS Agent Architecture Reference Model. Hyperonomy. https://hyperonomy.com/2025/10/14/web-7-0-agentic-os-agent-architecture-reference-model-aarm/
[W7-TRUST-GRAPH]M. Herman. Web 7.0 Trust Graph — Pure Peer Model. Hyperonomy. https://hyperonomy.com/2025/10/25/web-7-0-trust-graph-pure-peer-model/
[TSP]Trust over IP. Trust Spanning Protocol. https://trustoverip.org/
[FIRST-PRINCIPLES]M. Herman. First Principles Thinking. Hyperonomy. https://hyperonomy.com/2021/03/10/first-principles-thinking/

13. Acknowledgements

This specification was derived from community discussion contributions by: Michael Herman (mwherman2000), @talltree, @adamstallard, @mitchuski, @peacekeeper, @GraceRachmany, and other participants of the Trust over IP Foundation DTGWG Credentials Task Force. The editors gratefully acknowledge all contributors to GitHub Discussion #8.

14. Appendix A: Web 7.0 DIDLibOS Architecture Reference Model (DIDLibOS-ARM)


Web 7.0 DIDLibOS Whitepaper

Identity-Addressed Execution, Event-Sourced Memory, and Runspace-Orchestrated Agent Computing

Version: 2.0
Date: 2026-03-25

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.


Table of Contents

  1. Abstract
  2. Introduction
  3. System Overview
  4. Core Design Principles
  5. DIDComm Message Model
  6. DID as Universal Execution Handle
  7. LiteDB as Agent Memory Kernel
  8. Transparent Cache Architecture
  9. PowerShell Runspace Execution Model
  10. Pipeline Semantics and Execution Flow
  11. Cmdlet Lifecycle and Message Transformation
  12. Cross-Runspace Communication Model
  13. Event-Sourced State and Immutability
  14. LOBES (Loadable Object Brain Extensions)
  15. MCP-I and External System Interfacing
  16. Agent Memory Architecture (Long-Term Memory)
  17. DID Resolution and Identity Semantics
  18. Concurrency Model and Consistency Guarantees
  19. Performance Model and Cache Behavior
  20. Failure Modes and Recovery Semantics
  21. Security Model and Trust Boundaries
  22. Web 7.0 Agent Ecosystem Model
  23. Diagrammatic Architecture Reference
  24. System Properties Summary

1. Abstract

Web 7.0 DIDLibOS defines an identity-addressed, event-sourced execution architecture in which all computation is performed over DIDComm messages persisted in a single LiteDB instance per agent. Instead of passing in-memory objects between computational steps, the system passes Decentralized Identifier (DID) strings that resolve to immutable message state stored in a persistent memory kernel. This enables deterministic execution, full replayability, cross-runspace isolation, and scalable agent orchestration.


2. Introduction

Traditional execution models in scripting and automation environments rely on in-memory object pipelines. These models break under distributed execution, concurrency, and long-term persistence requirements. Web 7.0 DIDLibOS replaces object-passing semantics with identity-passing semantics.

In this model, computation becomes a function over persistent state rather than transient memory.


3. System Overview

The system consists of four primary layers:

  • Execution Layer: PowerShell runspaces executing cmdlets
  • Identity Layer: DIDComm message identifiers (DIDs)
  • Memory Layer: LiteDB persistent store per agent
  • Acceleration Layer: Transparent in-memory cache managed by LiteDB

All computation flows through these layers via identity resolution.


4. Core Design Principles

  1. Everything is a DIDComm message
  2. DIDs are the only runtime values passed between cmdlets
  3. All state is persisted in LiteDB
  4. No shared in-memory objects exist across runspaces
  5. Execution is deterministic and replayable
  6. Cache is transparent and non-semantic
  7. Mutation creates new messages, never modifies in-place

5. DIDComm Message Model

Each system object is represented as a DIDComm message with a globally unique DID.

A DID serves as:

  • Identifier
  • Lookup key
  • Execution handle

Messages are immutable once persisted.


6. DID as Universal Execution Handle

The DID is the only value passed in PowerShell pipelines.

A cmdlet receives a DID, resolves it via LiteDB, processes the message, and outputs a new DID.

Pipeline flow: DID₁ → Cmdlet → DID₂ → Cmdlet → DID₃


7. LiteDB as Agent Long-term Memory

LiteDB acts as the system of record.

Responsibilities:

  • Persistent message storage
  • Indexing by DID
  • Versioning support
  • Retrieval and query execution

There is exactly one LiteDB instance per agent.


8. Transparent Cache Architecture

LiteDB includes an internal cache layer.

Properties:

  • In-memory acceleration
  • Size configurable
  • Fully transparent
  • No semantic visibility to execution layer

Cache only optimizes DID resolution.


9. PowerShell Runspace Execution Model

Each runspace is an isolated execution environment.

Properties:

  • No shared memory across runspaces
  • Only DID strings are passed
  • Execution is stateless between invocations

10. Pipeline Semantics and Execution Flow

Pipeline execution is identity-based:

Step 1: Receive DID
Step 2: Resolve message
Step 3: Execute transformation
Step 4: Persist new message
Step 5: Emit new DID


11. Cmdlet Lifecycle and Message Transformation

Each cmdlet follows a strict lifecycle:

  • Input: DID
  • Resolve: LiteDB lookup
  • Materialize: snapshot object
  • Transform: compute result
  • Persist: store new message
  • Output: new DID
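The lifecycle can be illustrated with a non-normative Python sketch, using an in-memory dictionary as a stand-in for the per-agent LiteDB instance. The did:example method, the store, and all function names here are illustrative only:

```python
import json
import uuid

store = {}  # stand-in for the per-agent LiteDB instance

def persist(message: dict) -> str:
    """Persist an immutable message snapshot and return its DID."""
    did = f"did:example:{uuid.uuid4()}"
    store[did] = json.dumps(message)  # serialized snapshot, never modified
    return did

def cmdlet(input_did: str, transform) -> str:
    """One cmdlet invocation: Input -> Resolve -> Materialize ->
    Transform -> Persist -> Output. Only DID strings cross the
    pipeline boundary; no in-memory object is ever shared."""
    snapshot = json.loads(store[input_did])   # Resolve + Materialize
    result = transform(snapshot)              # Transform (pure function)
    return persist(result)                    # Persist -> new DID

# Pipeline: DID1 -> Cmdlet -> DID2
did1 = persist({"body": {"content": "hello"}})
did2 = cmdlet(did1, lambda m: {"body": {"content": m["body"]["content"].upper()}})
assert did1 != did2                                            # mutation creates a new message
assert json.loads(store[did1])["body"]["content"] == "hello"   # original is untouched
```

Note how the sketch exhibits the core design principles directly: the transformation never modifies the original message, and the only value returned to the pipeline is a DID string.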

12. Cross-Runspace Communication Model

Runspaces communicate only via DIDs.

No object sharing occurs. All state is retrieved from LiteDB.


13. Event-Sourced State and Immutability

All messages are immutable. Each transformation produces a new version.

This creates a complete event history of system execution.


14. LOBES (Loadable Object Brain Extensions)

LOBES are modular execution extensions implemented as PowerShell modules.

Capabilities:

  • Cmdlet composition
  • External system integration
  • DID-based message processing
  • Execution graph augmentation

15. MCP-I and External System Interfacing

MCP-I acts as a bridge for external APIs and systems.

It enables:

  • Querying external databases
  • Calling agent APIs
  • Integrating distributed services

All interactions remain DID-addressed.


16. Agent Memory Architecture (Long-Term Memory)

Long-term memory is implemented as persistent DID storage in LiteDB.

It supports:

  • Historical replay
  • State reconstruction
  • Cross-runspace consistency

17. DID Resolution and Identity Semantics

A DID is resolved at runtime into a message snapshot.

Important distinction:

  • DID is a reference
  • Message is persisted state

18. Concurrency Model and Consistency Guarantees

Concurrency is managed via:

  • Single-writer LiteDB semantics
  • Atomic writes per message
  • Isolation between runspaces

19. Performance Model and Cache Behavior

Performance optimization occurs via internal caching.

Hot messages remain in memory. Cold messages are loaded from disk.


20. Failure Modes and Recovery Semantics

Failures are handled via:

  • Persistent message logs
  • Replay capability
  • Idempotent cmdlet execution

21. Security Model and Trust Boundaries

Security is enforced through:

  • DID-based identity verification
  • Controlled execution boundaries
  • Module isolation in LOBES

22. Web 7.0 Agent Ecosystem Model

Agents operate as autonomous computation nodes.

They communicate via DIDComm messages forming a distributed execution graph.


23. DIDLibOS Architecture Reference Model (DIDLibOS-ARM) 0.8

The referenced external architecture diagram (not reproduced here) represents:

  • Multi-agent neural execution topology
  • DIDComm messaging fabric
  • LOBE-based computation layers
  • Neuro-symbolic orchestration system

24. Summary

  • Deterministic execution
  • Identity-based computation
  • Event-sourced memory
  • Runspace isolation
  • Transparent caching
  • Modular extension via LOBES
  • Distributed agent scalability


Decentralized Universal Resource Name (URN) DID Method (Web 7.0)

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Abstract

This document specifies the did7:web7 Decentralized Identifier (DID) method, which defines a deterministic mapping from Uniform Resource Names (URNs) (RFC 8141) into a DID-compatible identifier format called a Decentralized Universal Resource Name (URN). The did7:web7 method preserves URN semantics, enables DID resolution without mandatory centralized infrastructure, and provides optional cryptographic and service-layer extensibility. The method is fully compatible with the W3C DID Core specification (W3C DID Core, 2022) and the broader DID ecosystem.

Derivation Notice

This note is to be removed before publishing as an RFC.

This Internet-Draft is derived from the Web 7.0 Foundation specification “SDO: W3C Decentralized Resource Name (URN) DID Method (Web 7.0)” authored by Michael Herman, published 24 March 2026 at https://hyperonomy.com/2026/03/24/sdo-web-7-0-decentralized-resource-name-urn-did-method/ and licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License. Web 7.0(TM), Web 7.0 DIDLibOS(TM), TDW AgenticOS(TM), TDW(TM), Trusted Digital Web(TM), and Hyperonomy(TM) are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”

This Internet-Draft will expire on 25 September 2026.

Copyright Notice

Copyright (c) 2026 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.


1. Introduction

Uniform Resource Names (URNs) [RFC8141] provide a well-established mechanism for assigning persistent, location-independent identifiers to resources. However, URNs predate the Decentralized Identifier (DID) ecosystem [W3C.DID-CORE] and lack native support for DID resolution, DID Document retrieval, cryptographic verification methods, or service endpoint declaration.

At the same time, many existing information systems such as bibliographic catalogues, digital libraries, standards registries, and supply-chain systems rely heavily on URN-based identification. Retrofitting these systems with entirely new identifier schemes is often impractical.

The did7:web7 method bridges this gap. It defines a deterministic, reversible transformation from any well-formed URN into a DID-compatible identifier called a Decentralized Universal Resource Name (URN). The resulting DID is fully resolvable, is backwards compatible with the source URN, requires no mandatory centralized registry, and is composable with other DID methods such as did:key, did:web, and did:peer.

The primary design goals of did7:web7 are:

  • Preservation of URN semantics and namespace-specific comparison rules.
  • Deterministic, stateless baseline resolution requiring no external infrastructure.
  • Optional cryptographic extensibility through verification methods.
  • Optional service-layer extensibility through service endpoints.
  • Full conformance with the W3C DID Core specification [W3C.DID-CORE].

The did7:web7 method is positioned as a universal adapter between the URN and DID ecosystems, serving as a semantic identity bridge that preserves existing meaning while enabling participation in the modern decentralized identity landscape.

2. Conventions and Definitions

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, NOT RECOMMENDED, MAY, and OPTIONAL in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals as shown here.

ABNF notation used in this document follows [RFC5234].

3. Terminology

URN (Uniform Resource Name): A persistent, location-independent identifier conforming to the syntax defined in [RFC8141], of the form urn:<NID>:<NSS>.

NID (Namespace Identifier): The registered URN namespace label (e.g., isbn, uuid, ietf).

NSS (Namespace-Specific String): The portion of a URN following the NID, interpreted according to the rules of the corresponding URN namespace registration.

Decentralized Universal Resource Name: A URN expressed within the did7:web7 method namespace; the method-specific identifier portion of a did7:web7 DID.

DID Document: A set of data describing the DID subject, as defined in Section 5 of [W3C.DID-CORE].

Resolver: A software component that, given a DID, returns a DID Document conforming to the requirements of [W3C.DID-RESOLUTION].

Controller: An entity, as identified by a DID, that has the capability to make changes to a DID Document, as defined in [W3C.DID-CORE].

Fingerprint: A cryptographic hash of a canonical representation of the embedded URN, used to derive a did:key-compatible equivalent identifier.

4. Method Name

The method name that identifies this DID method is: urn.

A DID conforming to this specification begins with the prefix did7://web7/. This prefix is case-insensitive for resolution purposes, but implementations SHOULD produce lowercase prefixes in all output.

5. Method-Specific Identifier

5.1. Syntax

The ABNF grammar for a did7:web7 DID is as follows:

did-web7-urn = "did7://web7/" urn
urn     = "urn:" NID ":" NSS
NID     = <URN Namespace Identifier per RFC 8141>
NSS     = <Namespace-Specific String per RFC 8141>

The following are conformant examples of did7:web7 identifiers:

did7://web7/urn:isbn:9780141036144
did7://web7/urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8
did7://web7/urn:ietf:rfc:8141
did7://web7/urn:epc:id:sgtin:0614141.107346.2017

5.2. Normalization

Implementations MUST normalize the embedded URN according to the lexical equivalence and case-folding rules specified in Section 3.1 of [RFC8141] before constructing or comparing a did7:web7 identifier. Namespace-specific comparison rules (q-component handling, etc.) as registered with IANA for each NID MUST also be preserved.

Percent-encoding normalization (Section 2.1 of [RFC3986]) applies to the NSS component where permitted by the applicable namespace registration.
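The normalization and mapping rules above can be sketched in Python. This is an illustrative implementation that applies only the namespace-independent case-folding of RFC 8141 Section 3.1 (the scheme and NID are folded to lowercase); namespace-specific comparison rules and percent-encoding normalization are deliberately omitted:

```python
import re

# Well-formed URN per RFC 8141 (simplified): "urn:" NID ":" NSS,
# where the NID is 2-32 characters and alphanumeric at both ends.
URN_RE = re.compile(
    r"^(urn):([A-Za-z0-9][A-Za-z0-9-]{0,30}[A-Za-z0-9]):(.+)$",
    re.IGNORECASE,
)

def normalize_urn(urn: str) -> str:
    """Fold the case-insensitive "urn:" scheme and NID to lowercase
    (RFC 8141 Section 3.1). NSS case rules are namespace-specific
    and are not handled in this sketch."""
    m = URN_RE.match(urn)
    if not m:
        raise ValueError(f"not a well-formed URN: {urn!r}")
    scheme, nid, nss = m.groups()
    return f"{scheme.lower()}:{nid.lower()}:{nss}"

def urn_to_did(urn: str) -> str:
    """Deterministic mapping from a URN to its did7:web7 identifier."""
    return "did7://web7/" + normalize_urn(urn)

def did_to_urn(did: str) -> str:
    """Recover the embedded (normalized) URN from a did7:web7 DID."""
    prefix = "did7://web7/"
    if not did.lower().startswith(prefix):
        raise ValueError(f"not a did7:web7 identifier: {did!r}")
    return did[len(prefix):]
```

Because the transformation is purely syntactic, two lexically equivalent URNs produce the same DID, and the embedded URN is recoverable by stripping the prefix.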

6. Core Properties

6.1. Determinism

A given URN MUST map deterministically to exactly one did7:web7 identifier. The transformation is purely syntactic; no randomness or external state is introduced. Two URNs that are lexically equivalent per [RFC8141] MUST produce the same did7:web7.

6.2. Reversibility

The original URN MUST be exactly recoverable from the did7:web7 identifier without loss of information. No encoding, hashing, or irreversible transformation is applied to the URN content.

6.3. Infrastructure Independence

Baseline resolution of a did7:web7 identifier MUST NOT require access to any centralized registry, distributed ledger, or network service. A conformant resolver MUST be capable of constructing a minimal conformant DID Document entirely from the information contained within the DID string itself (see Mode 1, Section 7.3).

7. DID Resolution

7.1. Resolution Input

The resolution input is a did7:web7 string conforming to the syntax defined in Section 5.1, optionally accompanied by resolution options as defined in [W3C.DID-RESOLUTION].

Input:  did7://web7/<urn>

7.2. Resolution Output

A conforming resolver MUST return a DID Document. The minimum conformant DID Document for any did7:web7 identifier is:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did7://web7/urn:isbn:9780141036144",
  "alsoKnownAs": [
    "urn:isbn:9780141036144"
  ]
}

The alsoKnownAs property MUST contain the embedded URN in its normalized form (per Section 5.2).

7.3. Resolution Modes

7.3.1. Mode 1 – Stateless Resolution (REQUIRED)

A conformant resolver MUST support stateless resolution. In this mode the resolver constructs the DID Document locally from the DID string alone, without any external network lookup.

Properties of this mode:

  • Fully deterministic.
  • Zero infrastructure dependency.
  • Always available regardless of network connectivity.
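A Mode 1 resolver needs only a few lines; the sketch below constructs the minimal conformant DID Document of Section 7.2 from the DID string alone (the function name is illustrative):

```python
import json

DID_CONTEXT = "https://www.w3.org/ns/did/v1"
PREFIX = "did7://web7/"

def resolve_stateless(did: str) -> dict:
    """Mode 1: build the minimal conformant DID Document from the DID
    string alone -- no registry, ledger, or network lookup of any kind."""
    if not did.startswith(PREFIX):
        raise ValueError("not a did7:web7 identifier")
    urn = did[len(PREFIX):]
    return {
        "@context": DID_CONTEXT,
        "id": did,
        "alsoKnownAs": [urn],   # the normalized embedded URN (Section 5.2)
    }

print(json.dumps(resolve_stateless("did7://web7/urn:isbn:9780141036144"), indent=2))
```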

7.3.2. Mode 2 – Deterministic Fingerprint (RECOMMENDED)

Resolvers SHOULD support derivation of a cryptographic fingerprint from the canonical URN. The fingerprint is derived as:

fingerprint = hash(canonical-urn)

where canonical-urn is the normalized URN string (UTF-8 encoded) and hash is a cryptographic hash function registered for use with did:key (e.g., SHA-256 with multibase encoding [I-D.multiformats-multibase]). The derived fingerprint SHOULD be expressed as a did:key identifier and added to the DID Document as follows:

"equivalentId": [
  "did:key:<multibase-encoded-fingerprint>"
]
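An illustrative Mode 2 derivation in Python. For simplicity this sketch expresses the SHA-256 digest in multibase base16 (the "f" prefix); a production did:key-compatible fingerprint would instead prepend a multicodec header and encode with multibase base58btc (the "z" prefix):

```python
import hashlib

def fingerprint_multibase_hex(canonical_urn: str) -> str:
    """SHA-256 fingerprint of the UTF-8 canonical URN, expressed in
    multibase base16 (prefix "f"). Simplified sketch: a did:key-style
    identifier would use a multicodec header plus base58btc ("z")."""
    digest = hashlib.sha256(canonical_urn.encode("utf-8")).hexdigest()
    return "f" + digest
```

The derivation is deterministic: the same canonical URN always yields the same fingerprint, so independently operating resolvers converge on the same equivalentId value.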

7.3.3. Mode 3 – Discovery-Enhanced Resolution (OPTIONAL)

Resolvers MAY perform external discovery to supplement the locally constructed DID Document. Permitted discovery mechanisms include:

  • DNS-based lookup (e.g., using the DNS-SD mechanism).
  • HTTPS well-known endpoints (e.g., /.well-known/did.json).
  • Content-addressed storage systems (e.g., IPFS).

Discovery rules SHOULD be namespace-aware, such that a resolver for urn:isbn: DIDs may apply different discovery heuristics than one for urn:uuid: DIDs.

When external discovery yields a DID Document, that document MUST be validated for consistency with the locally constructed baseline document before being returned to the caller. Specifically, the id and alsoKnownAs values MUST match the baseline.
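The consistency check can be sketched as follows (the function and error messages are illustrative; only the id and alsoKnownAs comparisons are mandated by this section):

```python
def validate_discovered(baseline: dict, discovered: dict) -> dict:
    """Mode 3 consistency check: an externally discovered DID Document
    may only be returned if its id matches the locally constructed
    baseline and it carries every baseline alsoKnownAs value."""
    if discovered.get("id") != baseline["id"]:
        raise ValueError("discovered document id does not match baseline")
    missing = set(baseline["alsoKnownAs"]) - set(discovered.get("alsoKnownAs", []))
    if missing:
        raise ValueError(f"discovered document is missing alsoKnownAs values: {missing}")
    return discovered
```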

8. DID Document Structure

8.1. Base Document

Every DID Document produced by a did7:web7 resolver MUST conform to the following template:

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did7://web7/<urn>",
  "alsoKnownAs": ["<urn>"]
}

Where <urn> is the normalized URN as defined in Section 5.2.

8.2. Optional Properties

8.2.1. Verification Methods

A DID Document MAY include one or more verification method entries to support cryptographic operations associated with the identified resource. The following is an example using the Ed25519VerificationKey2020 type:

"verificationMethod": [
  {
    "id": "did7://web7/<urn>#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did7://web7/<urn>",
    "publicKeyMultibase": "z6Mk..."
  }
]

Verification methods MUST conform to Section 5.2 of [W3C.DID-CORE].

8.2.2. Service Endpoints

A DID Document MAY include service endpoint entries to enable discovery of resources or services associated with the URN. The following is an illustrative example:

"service": [
  {
    "id": "did7://web7/<urn>#resource",
    "type": "URNResourceService",
    "serviceEndpoint":
      "https://example.com/urn/<encoded-urn>"
  }
]

Service endpoints MUST conform to Section 5.4 of [W3C.DID-CORE]. The type value SHOULD be registered in a publicly accessible DID Specification Registries entry [W3C.DID-SPEC-REGISTRIES].

8.2.3. Equivalent Identifier

Where Mode 2 resolution (Section 7.3.2) is supported, the DID Document MAY include an equivalentId property expressing the deterministic fingerprint-derived did:key as described in Section 7.3.2.

9. Controller Model

9.1. Default Behaviour

A did7:web7 identifier does not inherently assert or imply a controller. In the baseline stateless resolution mode (Mode 1), the DID Document contains no controller property. The absence of a controller property indicates that control has not been established through this mechanism.

9.2. Establishing Control

Control over a did7:web7 DID Document MAY be asserted through any of the following mechanisms:

  • Verifiable Credentials [W3C.VC-DATA-MODEL] binding a controller identity to the URN.
  • Signed DID Documents, where the document is signed by a verification method under the controller’s authority.
  • Namespace authority attestations, where the registrant or maintainer of the relevant URN namespace asserts controller status.

When a controller is established, the controller property MUST be included in the DID Document and MUST reference a resolvable DID.

10. Verification and Trust

The did7:web7 method does not inherently provide authenticity guarantees. A DID Document produced by a stateless resolver (Mode 1) is constructed locally and carries no cryptographic proof of its origin or integrity.

Implementations that require trust assurances SHOULD layer one or more of the following mechanisms on top of the baseline:

  • Cryptographic proofs: Attach verification methods and associated proofs (e.g., JSON-LD Proofs, JOSE signatures) to the DID Document as described in Section 8.2.1.
  • Third-party attestations: Bind Verifiable Credentials from trusted issuers to the URN to assert provenance, authenticity, or ownership.
  • Namespace authority validation: Dereference the URN through its canonical namespace registry to verify that the identified resource exists and that any asserted attributes are consistent.

Consumers of did7:web7 DID Documents SHOULD NOT infer trustworthiness solely from the presence of the DID; trust evaluation MUST take into account the verification mechanisms present in the DID Document and the verifier’s trust policy.

11. CRUD Operations

The did7:web7 method supports the following subset of CRUD operations as defined in [W3C.DID-CORE]:

Create (Implicit): A did7:web7 identifier is created implicitly by applying the syntactic transformation of Section 5.1 to a well-formed URN. No registration step is required.

Read (REQUIRED): Resolution MUST be supported in at least Mode 1 (stateless), per Section 7.3.1.

Update (NOT SUPPORTED): The baseline stateless method does not support document updates. Updates are only possible in Mode 3 via an external discovery service that supports document management.

Deactivate (NOT SUPPORTED): Deactivation is not supported in the baseline method. External service layers may implement deactivation semantics independently.

12. Interoperability

12.1. With URN Systems

The did7:web7 method is fully backward compatible with existing URN infrastructure. The embedded URN is preserved verbatim (after normalization) within the DID string, and no changes to existing URN registries, resolvers, or applications are required.

The alsoKnownAs property in the DID Document ensures that a did7:web7 DID can always be mapped back to its source URN, enabling interoperability with legacy systems that do not support DID resolution.

12.2. With the DID Ecosystem

The did7:web7 method is compatible with the W3C DID Core specification [W3C.DID-CORE] and the DID Resolution specification [W3C.DID-RESOLUTION]. It is composable with the following DID methods:

  • did:key – via the deterministic fingerprint mechanism (Section 7.3.2).
  • did:web – a did7:web7 DID Document MAY reference a did:web service endpoint for resource discovery.
  • did:peer – pairwise did:peer identifiers MAY be used in conjunction with did7:web7 to reduce correlation in privacy-sensitive contexts (see Section 14.2).

Implementations MAY register additional DID method compositions in a publicly accessible DID Method Registry.

13. Design Rationale

The following design decisions underpin the did7:web7 specification.

Deterministic mapping: Aligning with the broader principle that DID methods SHOULD be deterministic where possible, the syntactic transformation from a URN to its did7:web7 identifier requires no external state and produces stable, reproducible identifiers.

Use of alsoKnownAs: The alsoKnownAs property from [W3C.DID-CORE] is used rather than a custom extension to ensure semantic preservation while remaining fully conformant with the core specification.

Stateless baseline: Requiring only syntactic processing for baseline resolution maximises portability and eliminates single points of failure that would arise from mandatory registry dependencies.

Acknowledged trade-offs: The method does not include a built-in trust layer or lifecycle operations (Update/Deactivate) at the baseline level. These capabilities are intentionally delegated to optional layers (Modes 2 and 3, and the controller model of Section 9) so that implementations may adopt only the complexity they require.

14. Privacy Considerations

14.1. Correlation Risks

The deterministic mapping from URN to did7:web7 identifier means that any party who observes a did7:web7 identifier can immediately recover the underlying URN. Where the URN encodes personally identifiable information (e.g., a personal UUID or a registry identifier linked to an individual), this creates a direct correlation vector.

Additionally, because the transformation is deterministic and publicly known, two parties who independently transform the same URN will arrive at the same did7:web7 identifier, enabling linkage across otherwise unrelated contexts.

14.2. Mitigations

Implementers handling sensitive or personal identifiers SHOULD consider the following mitigations:

  • Pairwise DIDs: Use pairwise did:peer identifiers in contexts where individual interaction tracking is a concern, rather than exposing the did7:web7 identifier directly.
  • Avoid sensitive URNs: Refrain from forming did7:web7 identifiers from URNs that encode sensitive personal data in public or semi-public contexts.
  • Selective disclosure: Where verification is required, use Verifiable Presentations with selective disclosure rather than directly sharing the did7:web7 identifier.

This document does not address the privacy properties of the underlying URN namespaces; implementers MUST consult the privacy considerations of the applicable namespace registration before using that namespace in a did7:web7 context.

15. Security Considerations

15.1. Limitations

The baseline did7:web7 method (Mode 1) provides no inherent proof-of-control. Any party can construct a syntactically valid did7:web7 DID from any well-formed URN without demonstrating authority over the named resource. This is an intentional consequence of the zero-infrastructure design; however, it means that a did7:web7 DID alone cannot be used to assert ownership or authority.

In Mode 3 (Discovery-Enhanced), resolvers that accept DID Documents from external services are susceptible to spoofed or tampered service endpoints. A malicious service could return a crafted DID Document containing false verification methods or service endpoints.

15.2. Recommendations

To mitigate the limitations identified above, implementations SHOULD apply the following measures:

  • Signed metadata: Require that DID Documents obtained via Mode 3 discovery carry a valid cryptographic proof (e.g., a JSON-LD Data Integrity Proof) before accepting them as authoritative.
  • Verifiable Credentials for binding: Use Verifiable Credentials [W3C.VC-DATA-MODEL] issued by a trusted authority to bind the URN to a controller identity, rather than relying solely on the DID Document structure.
  • TLS for discovery endpoints: All HTTPS endpoints used in Mode 3 discovery MUST be protected with TLS 1.2 or higher (TLS 1.3 is specified in [RFC8446]) and SHOULD use certificate transparency.
  • Input validation: Resolvers MUST validate the embedded URN against the ABNF grammar of [RFC8141] before performing any resolution activity.

16. IANA Considerations

This document requests registration of the following DID method name in the W3C DID Specification Registries [W3C.DID-SPEC-REGISTRIES]:

Field          Value
Method Name    urn
Status         provisional
Specification  This document
Contact        See Author’s Address

This document has no other IANA actions.

17. References

17.1. Normative References

[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels”, BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/rfc/rfc2119>.

[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, “Uniform Resource Identifier (URI): Generic Syntax”, STD 66, RFC 3986, DOI 10.17487/RFC3986, January 2005, <https://www.rfc-editor.org/rfc/rfc3986>.

[RFC5234] Crocker, D. and P. Overell, “Augmented BNF for Syntax Specifications: ABNF”, STD 68, RFC 5234, DOI 10.17487/RFC5234, January 2008, <https://www.rfc-editor.org/rfc/rfc5234>.

[RFC8141] Saint-Andre, P. and J. Klensin, “Uniform Resource Names (URNs)”, RFC 8141, DOI 10.17487/RFC8141, April 2017, <https://www.rfc-editor.org/rfc/rfc8141>.

[RFC8174] Leiba, B., “Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words”, BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/rfc/rfc8174>.

[RFC8446] Rescorla, E., “The Transport Layer Security (TLS) Protocol Version 1.3”, RFC 8446, DOI 10.17487/RFC8446, August 2018, <https://www.rfc-editor.org/rfc/rfc8446>.

[W3C.DID-CORE] Sporny, M., Guy, A., Sabadello, M., and D. Reed, “Decentralized Identifiers (DIDs) v1.0”, W3C Recommendation, July 2022, <https://www.w3.org/TR/did-core/>.

[W3C.DID-RESOLUTION] Sabadello, M., “Decentralized Identifier Resolution (DID Resolution) v0.3”, W3C Working Group Note, 2023, <https://w3c-ccg.github.io/did-resolution/>.

17.2. Informative References

[I-D.multiformats-multibase] Sporny, M., “The Multibase Data Format”, Work in Progress, Internet-Draft, draft-multiformats-multibase, 2023, <https://datatracker.ietf.org/doc/draft-multiformats-multibase/>.

[W3C.DID-SPEC-REGISTRIES] Sporny, M. and O. Steele, “DID Specification Registries”, W3C Working Group Note, 2023, <https://www.w3.org/TR/did-spec-registries/>.

[W3C.VC-DATA-MODEL] Sporny, M., Longley, D., and D. Chadwick, “Verifiable Credentials Data Model v2.0”, W3C Candidate Recommendation, 2024, <https://www.w3.org/TR/vc-data-model-2.0/>.

[WEB70-URN] Herman, M., “SDO: W3C Decentralized Universal Resource Name (URN) DID Method (Web 7.0)”, licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License, March 2026, <https://hyperonomy.com/2026/03/24/sdo-web-7-0-decentralized-resource-name-urn-did-method/>.

Appendix A. Complete Example

This appendix illustrates a complete did7:web7 resolution using an ISBN URN as input, with Mode 2 (fingerprint) and a service endpoint included.

Source URN:

urn:isbn:9780141036144

Derived URN DID:

did7://web7/urn:isbn:9780141036144

Resolved DID Document (Modes 1 + 2 + service endpoint):

{
  "@context": "https://www.w3.org/ns/did/v1",
  "id": "did7://web7/urn:isbn:9780141036144",
  "alsoKnownAs": [
    "urn:isbn:9780141036144"
  ],
  "equivalentId": [
    "did:key:zQm..."
  ],
  "service": [
    {
      "id": "did7://web7/urn:isbn:9780141036144#info",
      "type": "BookMetadata",
      "serviceEndpoint":
        "https://example.org/isbn/9780141036144"
    }
  ]
}

Acknowledgements

The author thanks the members of the W3C Decentralized Identifier Working Group and the broader DID community for their foundational work on the DID Core specification, and the IETF URN community for their long-standing stewardship of URN namespaces.

Author’s Address

Michael Herman

Web 7.0 Foundation

Bindloss, Alberta

Canada

Email: mwherman@gmail.com

URI: https://hyperonomy.com/about/


SDO: W3C Decentralized Resource Name (DRN) DID Method

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Full name: Decentralized Resource Name (DRN) DID Method
Status: Draft Recommendation
SDO: Web 7.0 Foundation
Editor: Michael Herman, Chief Digital Officer, Web 7.0 Foundation
Related standards: DID Core, URN (RFC 8141)


1. Abstract

The did:drn method defines a deterministic mapping from Uniform Resource Names (URNs) into a DID-compatible identifier format called a Decentralized Resource Name (DRN).

This method:

  • Preserves URN semantics
  • Enables DID resolution without mandatory centralized infrastructure
  • Provides optional cryptographic and service-layer extensibility

2. Conformance

The key words MUST, SHOULD, and MAY are to be interpreted as described in RFC 2119.


3. Method Name

did:drn

4. Terminology

  • URN: A persistent, location-independent identifier defined by RFC 8141
  • DRN: A URN expressed within the did:drn method
  • Controller: An entity asserting control over a DID Document

5. Method-Specific Identifier

5.1 Syntax

did-drn = "did:drn:" drn
drn     = "urn:" NID ":" NSS
NID     = <RFC 8141 namespace identifier>
NSS     = <RFC 8141 namespace-specific string>

Examples

did:drn:urn:isbn:9780141036144
did:drn:urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8

5.2 Normalization

Implementations MUST:

  • Normalize the embedded URN according to RFC 8141
  • Preserve namespace-specific comparison rules
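The derivation and its inverse can be sketched in Python (the helper names are illustrative; normalization per RFC 8141 is assumed to have already been applied to the input):

```python
def urn_to_drn(urn: str) -> str:
    """Deterministic did:drn derivation: prefix the normalized URN.
    The mapping is purely syntactic and reversible without loss."""
    if not urn.lower().startswith("urn:"):
        raise ValueError("input is not a URN")
    return "did:drn:" + urn

def drn_to_urn(did: str) -> str:
    """Recover the embedded URN from a did:drn identifier."""
    prefix = "did:drn:"
    if not did.startswith(prefix):
        raise ValueError("not a did:drn identifier")
    return did[len(prefix):]
```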

6. Core Properties

6.1 Determinism

A given URN MUST map deterministically to a single did:drn.

6.2 Reversibility

The original URN MUST be recoverable without loss.

6.3 Infrastructure Independence

The method MUST NOT require a centralized registry for baseline resolution.


7. DID Resolution

7.1 Input

did:drn:<urn>

7.2 Output

A conforming resolver MUST return a DID Document.

Minimum Conformant Document

{
  "id": "did:drn:urn:isbn:9780141036144",
  "alsoKnownAs": [
    "urn:isbn:9780141036144"
  ]
}

7.3 Resolution Modes

Mode 1 — Stateless (REQUIRED)

  • Resolver constructs the DID Document locally
  • No external lookup required

Properties:

  • Deterministic
  • Zero infrastructure dependency

Mode 2 — Deterministic Fingerprint (RECOMMENDED)

Resolvers SHOULD derive a cryptographic fingerprint:

fingerprint = hash(urn)

Add:

"equivalentId": [
  "did:key:<fingerprint>"
]

Mode 3 — Discovery-Enhanced (OPTIONAL)

Resolvers MAY perform discovery using:

  • DNS-based lookup
  • HTTPS well-known endpoints
  • Content-addressed storage (e.g., IPFS)

Discovery rules SHOULD be namespace-aware.


8. DID Document

8.1 Base Document

{
  "id": "did:drn:<urn>",
  "alsoKnownAs": ["<urn>"]
}

8.2 Optional Properties

Verification Methods

"verificationMethod": [
  {
    "id": "#key-1",
    "type": "Ed25519VerificationKey2020",
    "controller": "did:drn:<urn>",
    "publicKeyMultibase": "..."
  }
]

Services

"service": [
  {
    "id": "#resource",
    "type": "DRNResourceService",
    "serviceEndpoint": "https://example.com/drn/<encoded-urn>"
  }
]

9. Controller Model

9.1 Default Behavior

  • A DRN does not imply a controller

9.2 Establishing Control

Control MAY be asserted via:

  • Verifiable Credentials
  • Signed DID Documents
  • Namespace authority attestations

10. Verification & Trust

10.1 Baseline

did:drn does not inherently guarantee authenticity.

10.2 Trust Mechanisms

Trust SHOULD be layered using:

  • Cryptographic proofs
  • Third-party attestations
  • Namespace authority validation

11. CRUD Operations

Operation    Support
Create       Implicit (deterministic derivation)
Read         REQUIRED
Update       NOT supported (stateless baseline)
Deactivate   NOT supported

12. Privacy Considerations

Risks

  • Deterministic mapping enables correlation
  • Public URNs may expose identity linkage

Mitigations

  • Pair with pairwise DIDs (did:peer)
  • Avoid sensitive URNs in public contexts

13. Security Considerations

Limitations

  • No inherent proof-of-control
  • Susceptible to spoofed service endpoints

Recommendations

  • Require signed metadata
  • Use verifiable credentials for binding

14. Interoperability

With URN Systems

  • Fully backward compatible
  • No changes to existing URN infrastructure

With DID Ecosystem

  • Compatible with DID Core
  • Composable with:
    • did:key
    • did:web
    • did:peer

15. Example

Input

urn:isbn:9780141036144

DRN DID

did:drn:urn:isbn:9780141036144

Resolved Document

{
  "id": "did:drn:urn:isbn:9780141036144",
  "alsoKnownAs": [
    "urn:isbn:9780141036144"
  ],
  "equivalentId": [
    "did:key:zQm..."
  ],
  "service": [
    {
      "id": "#info",
      "type": "BookMetadata",
      "serviceEndpoint": "https://example.org/isbn/9780141036144"
    }
  ]
}

16. Design Rationale

Well-supported:

  • Deterministic mapping aligns with DID design principles
  • alsoKnownAs ensures semantic preservation
  • Stateless resolution maximizes portability

Tradeoffs:

  • No built-in trust layer
  • No lifecycle operations

17. Positioning

What did:drn IS

  • A universal adapter between URN and DID ecosystems
  • A semantic identity bridge
  • A zero-infrastructure resolution method (baseline)

What it is NOT

  • A self-sovereign identity system by itself
  • A registry-backed authority system

18. One-line Definition

did:drn transforms a URN into a resolvable, interoperable DID while preserving its original meaning and structure.

