Web 7.0 DIDLibOS™ / TDW AgenticOS™ (Project “Shorthorn”)

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Introduction

This article describes Web 7.0™ and TDW AgenticOS™ – with a specific focus on the Web 7.0 Neuromorphic Agent Architecture Reference Model (NAARM) used by TDW AgenticOS™ to support the creation of Web 7.0 Decentralized Societies.

The intended audience for this document is a broad range of professionals interested in furthering their understanding of TDW AgenticOS for use in software apps, agents, and services. This includes software architects, application developers, and user experience (UX) specialists, as well as people involved in a broad range of standards efforts related to decentralized identity, verifiable credentials, and secure storage.

The Second Reformation

Web 7.0 Foundation Ecosystem

“Web 7.0 is a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials.”
Michael Herman, Trusted Digital Web (TDW) Project, Hyperonomy Digital Identity Lab, Web 7.0 Foundation. January 2023.

Credit: Alex Thurow, https://youtu.be/4OjZOyG6nMo

TDW AgenticOS™

TDW AgenticOS™ is a macromodular, neuromorphic agent platform for coordinating and executing complex systems of work that is:

  • Secure
  • Trusted
  • Open
  • Resilient

TDW AgenticOS™ is 100% Albertan by birth and open source.

Project “Shorthorn”

Project “Shorthorn” is a parody project name based on Microsoft’s Windows “Longhorn” WinFS project (a SQL-based Windows File System project), in which the author was involved from design preview and feedback, consulting, and PM technical training (Groove Workspace system architecture and operation) perspectives (circa 2001-2002).

What makes Shorthorns great:
– They’re good at turning grass into meat (great efficiency).
– Shorthorn cows are amazing mothers and raise strong, healthy calves (nurture great offspring).
– Their genetics blend well with other breeds for strong hybrid calves (plays well with others).
…and so it is with TDW AgenticOS™.

Web 7.0 Foundation

The Web 7.0 Foundation, a federally-incorporated Canadian non-profit corporation, is chartered to develop, support, promote, protect, and curate the Web 7.0 ecosystem: TDW AgenticOS operating system software, and related standards and specifications. The Foundation is based in Alberta, Canada.

What we’re building at the Web 7.0 Foundation is described in this quote from Don Tapscott and co.:

“We see an alternate path: a decentralized platform for our digital selves, free from total corporate control and within our reach, thanks to co-emerging technologies.”
“A discussion has begun about “democratizing AI.” Accessibility is critical. Mostaque has argued that the world needs what he calls “Universal Basic AI.” Some in the technology industry have argued that AI can be democratized through open source software that is available for anyone to use, modify, and distribute. Mostaque argues that this is not enough. “AI also needs to be transparent,” meaning that AI systems should be auditable and explainable, allowing researchers to examine their decision-making processes. “AI should not be a single capability on monolithic servers but a modular structure that people can build on,” said Mostaque. “That can’t go down or be corrupted or manipulated by powerful forces. AI needs to be decentralized in both technology, ownership and governance.” He’s right.”
You to the Power Two. Don Tapscott and co. 2025.

A Word about the Past

The Web 7.0 project has roots dating back approximately 30 years, to before 1998 and the release of Alias Upfront for Windows. Following the release of Upfront (which Bill Gates designated the “most outstanding graphics product for Microsoft Windows 3.0”), the AUSOM Application Design Framework was formalized.

AUSOM Application Design Framework

AUSOM is an acronym for A User State of Mind — the name of a framework or architecture for designing software applications that are easier to design, implement, test, document, and support. In addition, an application developed using the AUSOM framework is more capable of being incrementally enhanced, progressively installed and updated, and dynamically configured, and can be implemented in many execution environments. This paper describes the Core Framework, the status of its current runtime implementations, and its additional features and benefits.

What is AUSOM?

The AUSOM Application Design Framework, developed in 1998, is a new way to design client-side applications. The original implementation of the framework is based on a few basic concepts: user scenarios and detailed task analysis, visual design using state-transition diagrams, and implementation using traditional Windows message handlers.

The original motivation for the framework grew out of the need to implement a highly modeless user interface composed of commands or tasks that were themselves very modal (e.g., allowing the user to change how a polygon was being viewed while still sketching the boundary of the polygon).

To learn more, read The AUSOM Application Design Framework whitepaper.

Einstein’s Advice

The following is essentially the same advice I received from Charles Simonyi when we were both at Microsoft (and one of the reasons why I eventually left the company in 2001).

“No problem can be solved from the same level of consciousness that created it.” [Albert Einstein]
“The meaning of this quote lies in Einstein’s belief that problems are not just technical failures but outcomes of deeper ways of thinking. He suggested that when people approach challenges using the same assumptions, values, and mental habits that led to those challenges, real solutions remain out of reach. According to this idea, improvement begins only when individuals are willing to step beyond familiar thought patterns and question the mindset that shaped the problem.” [Economic Times]

Simonyi et al., in the paper Intentional Software, state:

For the creation of any software, two kinds of contributions need to be combined even though they are not at all similar: those of the domain providing the problem statement and those of software engineering providing the implementation. They need to be woven together to form the program.

TDW AgenticOS is the software for building decentralized societies.

A Word about the Future

“Before the next century is over, human beings will no longer be the most intelligent or capable type of entity on the planet. Actually, let me take that back. The truth of that last statement depends on how we define human.” Ray Kurzweil. 1999.

NOTE: “Artificial Intelligence” (or “AI”) does not appear anywhere in the remainder of this article. The north star of the Web 7.0 project is to be a unified software and hardware ecosystem for building resilient, trusted, decentralized systems using decentralized identifiers, DIDComm agents, and verifiable credentials – regardless of whether the outcome (a Web 7.0 network) uses AI or not. Refer to Figures 4a, 4b, and 6 for a better understanding.

DIDComm Notation, a visual language for architecting and designing decentralized systems, was used to create the figures in this article.

Value Proposition

By Persona

Business Analysts – Ability to design and execute secure, trusted business processes of arbitrary complexity across multiple parties in multiple organizations – anywhere on the planet.

Global Hyperscaler Administrators – Ability to design and execute secure, trusted systems administration processes (executed using PowerShell) of arbitrary complexity across an unlimited number of physical or virtual servers hosted in an unlimited number of datacenters, deployed by multiple cloud (or in-house) XaaS providers – anywhere on the planet.

App Developers – Ability to design, build, deploy, and manage secure, trusted network-effect-by-default apps of arbitrary complexity across multiple devices owned by anybody – anywhere on the planet.

Smartphone Vendors – Ability to upsell a new category of a second device, a Web 7.0 Always-on Trusted Digital Assistant – a pre-integrated hardware and software solution, that pairs with the smart device that a person already owns. Instead of a person typically purchasing/leasing one smartphone, they can now leverage a Web 7.0-enabled smartphone bundle that also includes a secure, trusted, and decentralized communications link to a Web 7.0 Always-on Trusted Digital Assistant deployed at home (or in a cloud of their choosing).

Digital Church/Religion Builders – Ability to create a new decentralized digital religion for 1 billion people in Communist China.

By Trust Relationship (Verifiable Trust Circle (VTC))

Secure, Trusted Agent-to-Agent Messaging Model

Figure 0. Simple Agent-to-Agent Communications Model

Figure 0 depicts the design of a typical simple agent-to-agent communications model. DIDComm Notation was used to create the diagram.

TDW AgenticOS: Conceptual and Logical Architecture

The Web 7.0 architecture is illustrated in the following figure.

Figure 1. Web 7.0 Neuromorphic Agent

Figure 1 is an all-in illustration of the conceptual architecture of a Web 7.0 Neuromorphic Agent. A Web 7.0 Agent comprises a Frontal LOBE and the Neural Messaging pathway. An Agent communicates with the outside world (other Web 7.0 Agents) using its Outbound (Talking), Seeing, and Inbound (Listening) Interfaces. Agents can be grouped together into Neural Clusters to form secure and trusted multi-agent organisms. DIDComm/HTTP is the default secure digital communications protocol (see DIDComm Messages as the Steel Shipping Containers of Secure, Trusted Digital Communication). The Decentralized Identifiers (DIDs) specification is used to define the Identity layer in the Web 7.0 Messaging Superstack (see Figure 6 as well as Decentralized Identifiers (DIDs) as Barcodes for Secure, Trusted Digital Communication).

An agent remains dormant until it receives a message directed to it and returns to a dormant state when no more messages are remaining to be processed. An agent’s message processing can be paused without losing any incoming messages. When an agent is paused, messages are received, queued, and persisted in long-term memory. Message processing can be resumed at any time.
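The dormant/paused/resumed message-processing lifecycle described above can be sketched in a few lines. This is a minimal illustrative model, not the actual TDW AgenticOS API; all class and method names here are hypothetical.

```python
import queue

class Agent:
    """Minimal sketch of the dormant/paused agent loop described above.
    All names are illustrative, not the actual TDW AgenticOS API."""

    def __init__(self):
        self.inbox = queue.Queue()   # incoming messages are always accepted
        self.paused = False
        self.long_term_memory = []   # stand-in for persisted storage

    def receive(self, message):
        # Messages are queued even while the agent is paused, so none are lost.
        self.inbox.put(message)
        if self.paused:
            self.long_term_memory.append(message)  # persist while paused

    def process_pending(self):
        """Drain the queue; the agent returns to dormancy when it is empty."""
        handled = []
        while not self.paused and not self.inbox.empty():
            handled.append(self.inbox.get())
        return handled  # dormant again once nothing remains

agent = Agent()
agent.receive("msg-1")
agent.paused = True
agent.receive("msg-2")          # queued and persisted, not lost
agent.paused = False            # resume processing at any time
print(agent.process_pending())  # ['msg-1', 'msg-2']
```

Note that pausing affects only processing, not receipt: the queue keeps accepting messages throughout.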

Additionally, an Agent can include a dynamically changing set of Coordination and Execution LOBEs. These LOBEs enable an Agent to capture events (incoming messages), compose responses (outgoing messages), and share these messages with one or more Agents (within a specific Neural Cluster or externally with the Beneficial Agent in other Neural Clusters (see Figure 5)).

What is a LOBE?

LOBE (Loadable Object Brain Extensions) is a macromodular, neuromorphic intelligence framework designed to let systems grow, adapt, and evolve by making it easy to add new capabilities at any time. Each LOBE is a dynamically Loadable Object — a self-contained cognitive module that extends the Frontal LOBE’s functionality, whether for perception, reasoning, coordination, or control (execution). Together, these LOBEs form a dynamic ecosystem of interoperable intelligence, enabling developers to construct distributed, updatable, and extensible minds that can continuously expand their understanding and abilities.

LOBEs let intelligence and capability grow modularly. Add new LOBEs, extend cognition, and evolve systems that learn, adapt, and expand over time. Expand your brain. A brain that grows with every download.
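The loadable-capability idea can be sketched as a registry of handlers that is extended at runtime. This is a hypothetical illustration of the concept, not the actual TDW AgenticOS API; the names FrontalLobe, load_lobe, and handle are invented for the example.

```python
# Illustrative sketch of the "loadable" idea behind LOBEs: a capability is a
# self-contained handler registered at runtime. Hypothetical names throughout.
class FrontalLobe:
    def __init__(self):
        self.lobes = {}  # message type -> handler LOBE

    def load_lobe(self, message_type, handler):
        """Extend the agent's abilities at any time by adding a new LOBE."""
        self.lobes[message_type] = handler

    def handle(self, message_type, payload):
        # Dispatch to the specialized LOBE, if one has been loaded.
        lobe = self.lobes.get(message_type)
        return lobe(payload) if lobe else None

brain = FrontalLobe()
brain.load_lobe("greet", lambda name: f"hello, {name}")  # new capability
print(brain.handle("greet", "alice"))  # hello, alice
```

Each LOBE being specialized for a particular message type (as the Neuroplex section below notes) maps naturally onto this kind of type-keyed dispatch.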

What is a NeuroPlex?

A Web 7.0 Neuroplex (aka a Neuro) is a dynamically composed, decentralized, message-driven cognitive solution that spans one or more agents, each with its own dynamically configurable set of LOBEs (Loadable Object Brain Extensions). Each LOBE is specialized for a particular type of message. Agents automatically support extraordinarily efficient by-reference, in-memory, intra-agent message transfers.
A Web 7.0 Neuroplex is not a traditional application or a client–server system, but an emergent, collaborative execution construct assembled from independent, socially-developed cognitive components (LOBEs) connected together by messages. Execution of a Neuroplex is initiated with a NeuroToken.

Horizontal Unbundling: Coordination and Execution Agents

Figure 2. TDW AgenticOS: Agent Logical Architecture: Horizontal Unbundling

Figure 2 illustrates how the deployment of Coordination and Execution LOBEs can be horizontally unbundled – with each LOBE being assigned to a distinct Frontal LOBE. This is an extreme example designed to underscore the range of deployment options that are possible. Figure 3 is a more common pattern.

Horizontal Rebundling

Figure 3. TDW AgenticOS: Agent Logical Architecture: Horizontal Rebundling

Figure 3 depicts a more common/conventional deployment pattern where, within a Neural Cluster, a small, reasonable number of Frontal LOBEs host any collection of Coordination and/or Execution LOBEs.

Minimal Execution Agent (Trusted Digital Assistant)

Figure 4a. TDW AgenticOS: Agent Logical Architecture: Minimal Execution Agent

Figure 4a is an example of a minimal agent deployment pattern that hosts a single Trusted Digital Assistant (TDA) LOBE.

Figure 4b MCP-enabled Trusted Digital Assistant

Vertical Debundling: Web 7.0 Neural Clusters

Figure 5. TDW AgenticOS: Agent Logical Architecture: Neural Clusters and Beneficial Agents

Figure 5 depicts the deployment of a Web 7.0 Neural Cluster. Messages external to the Neural Cluster are only sent/received from the Beneficial Agent. Any additional messaging is limited to the Beneficial, Coordination, and Execution LOBEs deployed within the boundary of a Neural Cluster. A use case that illustrates the Neural Cluster model can be found in Appendix D – PWC Multi-Agent Customer Support Use Case.

DIDComm 7.0

Figure 6a. TDW AgenticOS: Conceptual Architecture (All-in)

Figure 6a is an all-in illustration of the conceptual architecture of a Web 7.0 Neuromorphic Agent. DIDComm Messages can be piped from the Outbound Interface of the Sender agent to the Inbound Interface of the Receiver agent – supporting the composition of secure, trusted agent-to-agent pipelines similar (but superior) to: i) UNIX command pipes (based on text streams), and ii) PowerShell pipelines (based on a .NET object pump implemented by calling ProcessRecord() on the subsequent cmdlet in the pipeline).

NOTE: PowerShell does not clone, serialize, or duplicate .NET objects when moving them through the pipeline (except in a few special cases). Instead, the same instance reference flows from one pipeline stage (cmdlet) to the next – and neither does DIDComm 7.0 for DIDComm Messages.

Bringing this all together, a DIDComm Message (DIDMessage) can be passed, by reference, from LOBE (Agenlet) to LOBE (Agenlet), in-memory, without serialization/deserialization or physical transport over HTTP (or any other protocol).
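The by-reference hand-off described above can be sketched as follows: the same in-memory object instance flows through every pipeline stage, with no serialization or deserialization. This is an illustrative sketch of the technique, not TDW AgenticOS code; the DIDMessage class and stage names are hypothetical.

```python
class DIDMessage:
    """Stand-in for a DIDComm message; field names are illustrative."""
    def __init__(self, body):
        self.body = body
        self.trace = []  # records which stage touched the message

def stage_a(msg):
    msg.trace.append("stage_a")  # mutates the shared instance
    return msg                   # returns the same reference, not a copy

def stage_b(msg):
    msg.trace.append("stage_b")
    return msg

msg = DIDMessage({"hello": "world"})
out = stage_b(stage_a(msg))      # pipeline: a | b

# No serialization/deserialization occurred: both names refer to
# one and the same in-memory object.
print(out is msg)    # True
print(out.trace)     # ['stage_a', 'stage_b']
```

Contrast this with piping over HTTP, where each hop would require encoding, transport, and decoding of the message.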

PowerShell                     | DIDComm 7.0
powershell.exe                 | tdwagent.exe
Cmdlet                         | LOBE (Loadable Object Brain Extension)
.NET Object                    | Verifiable Credential (VC)
PSObject (passed by reference) | DIDMessage (JWT) (passed by reference)
PowerShell Pipeline            | Web 7.0 Verifiable Trust Circle (VTC)
Serial Routing (primarily)     | Arbitrary Graph Routing (based on Receiver DID, Sender DID, and DID Message type)
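Routing keyed on Receiver DID, Sender DID, and message type, as the comparison above describes, can be sketched as a lookup over graph edges. This is a hypothetical illustration, not the actual DIDComm 7.0 routing API; the DIDs, message types, and LOBE names are placeholders.

```python
# Hypothetical sketch of graph routing keyed on (receiver DID, sender DID,
# message type). All identifiers below are placeholders.
routes = {
    ("did:web7:alice", "did:web7:bob", "invoice"): "accounting_lobe",
    ("did:web7:alice", "did:web7:bob", "chat"):    "assistant_lobe",
}

def route(receiver_did, sender_did, message_type):
    # Fall back to a default handler when no edge in the graph matches.
    return routes.get((receiver_did, sender_did, message_type), "default_lobe")

print(route("did:web7:alice", "did:web7:bob", "invoice"))  # accounting_lobe
print(route("did:web7:alice", "did:web7:carol", "chat"))   # default_lobe
```

Because the key is a full triple rather than just a destination, the same pair of agents can route different message types to different LOBEs, which is what makes the routing a graph rather than a serial chain.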

Feedback from a reviewer: Passing DIDComm messages by reference like you’re describing is quite clever. A great optimization.

Coming to a TDW LOBE near you…

DIDComm 7.0 Superstack

Figure 6b. DIDComm 7.0 Messaging Superstack

Figure 6b illustrates the interdependencies of the multiple layers within the DIDComm 7.0 Superstack.

Technology Wheel of Reincarnation: Win32 generic.c

Figure 6c. Win32 SDK Sample App: generic.c

References

SSI 7.0 Identity Framework

SSC 7.0 Metamodel

SSC 7.0 Verifiable Trust Circles

Web 7.0 Neuromorphic Agent Identity Model (NAIM)

Figure 7. Web 7.0 Neuromorphic Agent Identity Model (NAIM)

The NAIM seeks to enumerate and identify all of the elements in the NAARM that have, or will need, an identity (a DID and DID Document). This is illustrated in Figure 7.

Table 1. Web 7.0 Neuromorphic Agent Identity Model (NAIM) Chart

Beneficiaries, Trustees, and Fiduciary Duty

Figure 8. Beneficiaries, Trustees, and Fiduciary Duty

Figure 8 highlights in red the trust and fiduciary duty relationships between (a) a Beneficiary (Alice, the person) and (b) her Beneficiary Agent (a trustee). Similarly, any pair of agents can also have pair-wise trust and fiduciary duty relationships, where one agent serves in the role of Beneficiary and the second agent in the role of Trustee.

Appendix A – TDW AgenticOS: Edge Agent DMZ Deployment

This section is non-normative.

Figure A-1. TDW AgenticOS: Edge Agent DMZ Deployment

Appendix B – TDW AgenticOS: Multiple Digital Persona Deployment

This section is non-normative.

Figure B-1. TDW AgenticOS: Multiple Digital Persona Deployment

Alice has 2 digital personifications: Alice Smith and Alice Athlete. Each of these personifications has its own digital ID. Each of Alice’s personas also has its own Trusted Digital Assistant (TDA) – an agent or agentic neural network.

Figure B-2. Web 7.0 Networks and Trust Graph

Bob has (at least) 4 digital personifications: Bob Aggie, Bob Nova, Bob Sovronia, and Bob Developer. Using Web 7.0 Trust Graph Relationships and Verifiable Trust Credentials (VTCs), Bob can also have personas that are members of multiple Web 7.0 networks.

Appendix C – Different Brain Functionalities and Their State of Research in AI (2025)

Figure C-1. Different Brain Functionalities and Their State of Research in AI (2025)

Source: Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems. arXiv:2504.01990v2 [https://arxiv.org/abs/2504.01990v2]. August 2025.

Figure C-2. Simplified Brain Anatomy (Source unknown)
Figure C-3. TDW AgenticOS Layers

In Figure C-3, the Trust Library forms the Inner core and the UX LOBEs, the Crust. The Outer core is comprised of the Fast Cache and Long-Term Memory LOBEs, Neural and Basal Pathways, DID Registry, and LOBE Library. The Mantle is where the Coordination and Execution LOBEs execute.

Appendix D – PWC Multi-Agent Customer Support Use Case

Figure D-1. PWC Multi-Agent Customer Support Use Case

Source: Agentic AI – the new frontier in GenAI. PWC Middle East. 2024.

This use case exemplifies the use of the Web 7.0 Neural Cluster model. Table D-1 maps the PWC Use Case terminology to the corresponding Web 7.0 NAARM terminology.

Web 7.0 NAARM                  | PWC Use Case
Beneficiary Agent              | Master agent
Coordination Agent (and LOBEs) | Orchestrator agent
Execution Agent LOBEs          | Micro-agents
Table D-1. Web 7.0 NAARM – PWC Use Case Terminology Cross-Reference

Appendix E – Groove Workspace System Architecture

Appendix F – Historical Documents

February 2026

Resources

Macromodularity

  1. Organization of computer systems: the fixed plus variable structure computer. Gerald Estrin. 1960.
  2. Macromodular computer systems. Wesley Clark. 1967.
  3. Logical design of macromodules. Mishell J. Stucki et al. 1967.

Powered By

Michael Herman
Decentralized Systems Architect
Web 7.0 Foundation
October 15, 2025


DIDComm Messages as the Steel Shipping Containers of Secure, Trusted Digital Communication

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Abstract

The steel shipping container transformed global trade by introducing a standardized, secure, and interoperable abstraction for transporting goods. Similarly, Decentralized Identifier Communication (DIDComm) offers a standardized, secure, and interoperable mechanism for transmitting trusted digital information between agents. This paper explores the analogy between DIDComm messages and steel containers, examining their properties, benefits, and limitations, and assessing the potential of DIDComm to catalyze a transformation in digital ecosystems comparable to the shipping container revolution.



1. Introduction

The 20th century witnessed a quiet revolution in global trade: the invention and adoption of the steel shipping container. More than faster ships or larger ports, it was standardization in how goods were packaged and transported that unlocked efficiency, scale, and global interoperability.

In the 21st century, digital ecosystems face a parallel challenge. Secure communication across heterogeneous systems remains fragmented by proprietary protocols, siloed trust frameworks, and inconsistent interoperability. Despite advances in transport protocols (HTTP, WebSocket, Bluetooth) and security primitives (TLS, OAuth, JWT), no universal standard exists for trusted, end-to-end, cross-domain messaging.

DIDComm (Decentralized Identifier Communication) aims to fill this gap. It provides a standardized envelope for secure, interoperable communication between agents in decentralized ecosystems. This paper argues that DIDComm can be understood as the steel shipping container of digital communication — a payload-agnostic, transport-agnostic, secure packaging standard that enables trust to move seamlessly across networks and domains.


2. The Shipping Container Revolution

2.1 Before Containers

  • Cargo packaged idiosyncratically: barrels, sacks, crates.
  • Loading/unloading labor-intensive and slow.
  • High rates of pilferage and damage.
  • Inefficiency in intermodal transport (ship → rail → truck required repackaging).

2.2 With Containers

  • ISO standardization: uniform sizes, fittings, and corner posts.
  • Sealed security: tamper-resistant, weatherproof units.
  • Stackability: efficient storage and loading by crane.
  • Interoperability: ships, ports, trucks, and trains adapted to a single form factor.

Impact: Containerization reduced costs by ~90% and increased the speed and scale of global trade [Levinson, The Box, 2006]. The key insight: decouple contents from infrastructure via a universal abstraction.


3. DIDComm: A Digital Container Standard

3.1 What is DIDComm?

DIDComm is a protocol suite for secure, private, and interoperable communication using Decentralized Identifiers (DIDs) as endpoints. It defines how messages are packaged, encrypted, authenticated, and routed between agents.

3.2 Core Features

  • Standardized envelope: headers, routing metadata, payload.
  • Cryptographic sealing: encryption (confidentiality), signatures (authenticity), checksums (integrity).
  • Transport agnosticism: works over HTTP, Bluetooth, WebRTC, email, etc.
  • Routing via mediators: messages can traverse multiple relays without breaking end-to-end security.
  • Payload agnosticism: the message may carry verifiable credentials, IoT commands, or arbitrary application data.
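The standardized envelope can be made concrete with a minimal DIDComm-style plaintext message, i.e., the form a message takes before it is signed and/or encrypted. The field names follow the DIDComm Messaging specification; the DIDs, type URI, and payload values below are placeholders.

```python
import json

# A minimal DIDComm-v2-style plaintext message, before encryption into its
# envelope. Field names follow the DIDComm Messaging specification; the
# identifiers and values are placeholders.
message = {
    "id": "1234567890",                                # unique message id
    "type": "https://example.org/protocol/1.0/hello",  # protocol/message type URI
    "from": "did:example:alice",                       # sender DID
    "to": ["did:example:bob"],                         # recipient DID(s)
    "created_time": 1516269022,                        # unix timestamp
    "body": {"greeting": "hello"},                     # arbitrary application payload
}

print(json.dumps(message, indent=2))
```

The `body` is where payload agnosticism lives: it may carry verifiable credentials, IoT commands, or any other application data, while the surrounding fields stay uniform across all agents.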

3.3 Why It Matters

Just as containers enabled intermodal trade, DIDComm enables intermodal trust exchange. Applications, wallets, devices, and services can interoperate without bespoke integrations.


4. Mapping the Analogy: Containers vs. DIDComm

Container Property       | DIDComm Equivalent                                        | Implications
Standardized form        | Envelope with defined structure (headers, body, metadata) | Guarantees interoperability across agents and vendors
Sealed & secure          | Encryption + authentication                               | Protects against unauthorized access and tampering
Intermodal transport     | Transport-agnostic delivery                               | Works across protocols without altering the payload
Routing via logistics    | Mediators, DID resolution, forwarding                     | Enables flexible message delivery
Opaque contents          | Encrypted payload                                         | Only authorized parties can inspect
Global ecosystem support | Agent networks, wallets, identity hubs                    | Emerging infrastructure could mirror global ports and carriers

5. Benefits of the Container Analogy

  1. Interoperability
    • Any DIDComm-compliant agent can process a message, just as any port can handle a container.
  2. Security and Trust
    • Messages are sealed like containers, with tamper-evident cryptography.
  3. Efficiency
    • Reduces the cost and complexity of building integrations across organizations.
  4. Scalability
    • Supports any type of payload: credentials, IoT signals, governance instructions.
  5. Decentralization
    • No reliance on a central authority; trust derives from cryptographic keys, similar to how container standards are managed by ISO, not controlled by one nation or corporation.

6. Limits of the Analogy

  • Physical persistence vs. digital ephemerality: Containers endure across voyages; messages vanish after delivery.
  • Metadata leakage: Container labels are visible; DIDComm may still expose sender/recipient metadata.
  • Standard stability: Container sizes have been stable for decades; DIDComm may evolve quickly.
  • Global adoption: Containerization achieved near-universal acceptance; DIDComm is still early in adoption.

7. Strategic Implications

7.1 Identity & Credentials

DIDComm provides a secure transport for verifiable credentials, enabling cross-border, cross-domain trust.

7.2 IoT Ecosystems

IoT devices require lightweight, trustable communication. DIDComm offers a containerized way to exchange secure commands.

7.3 Cross-Domain Interoperability

Applications in finance, healthcare, supply chains, and governance can exchange trusted data without bespoke APIs.

7.4 The “Container Moment”

Global trade was reshaped once container standards reached critical mass. DIDComm could catalyze a parallel moment in digital ecosystems if widely adopted.


8. Conclusion

The steel shipping container revolutionized trade by abstracting the packaging and transport of goods into a universal, secure standard. DIDComm has the potential to do the same for digital trust, abstracting communication into a universal, secure, and interoperable form.

If DIDComm achieves broad adoption, it could serve as the logistics backbone of the digital trust economy, enabling decentralized ecosystems to scale with the efficiency and security once brought to global commerce by steel containers.


References

  • Levinson, Marc. The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger. Princeton University Press, 2006.
  • DIF (Decentralized Identity Foundation). “DIDComm Messaging Specification.” https://identity.foundation/didcomm-messaging.
  • Hardman, Daniel. “Introduction to DIDComm.” Hyperledger Aries Working Group, 2021.
  • ISO. “ISO 668: Series 1 freight containers — Classification, dimensions and ratings.”


Based on a discussion between Michael Herman and ChatGPT. September 30, 2025.


Web 7.0 / TDW Decentralized Identifier (DID Method) Clusters Model 0.1: Specification Development Framework

Copyright (c) 2018-2024 Michael Herman (Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
https://creativecommons.org/licenses/by-sa/4.0/legalcode

To learn how to join the did-methods DIF Working Group, click https://github.com/decentralized-identity/did-methods.

0. Prime Objective/Directive

#Chickens, #Eggs, and #Roosters: A #NorthStar for the Global Decentralized Systems Community (#GDSC)

Byline: #meggDLs, #Seleggtive #Disclosure, #DEGGCOMM, and #Eggports

The entire digital identity ecosystem is missing out on the #BigOpportunity by not focusing on the right catalyst for the #massiveadoption of #digitalcredentials.
Morphing the chicken and egg mental model: If Hens are the Issuers, Roosters the Verifiers, and Eggs are the digital credentials, the prime objective needs to be increasing the demand for and consumption of Eggs by Holders …creating hundreds of thousands of ways that drive more Holders to consume more Eggs. Think about it.

… are great examples of driving the demand for and consumption of more and more digital credentials [and DIDs] (eggs); and secondarily, the demand for hens and roosters (Issuers and Verifiers). The demand for eggs drives the production of hens; and in turn, the demand for roosters. Don’t mess with #MotherNature

Reference: https://www.linkedin.com/posts/mwherman_the-classic-chicken-and-egg-problem-verifiers-activity-7249162946709241856-lxaS?utm_source=share&utm_medium=member_desktop

1. Decentralized Identifiers (DIDs) v1.0 Specification

Reference: https://www.w3.org/TR/did-core/

Decentralized identifiers (DIDs) are a new type of identifier that enables verifiable, decentralized digital identity. A DID refers to any subject (e.g., a person, organization, thing, data model, abstract entity, etc.) as determined by the controller of the DID. In contrast to typical, federated identifiers, DIDs have been designed so that they may be decoupled from centralized registries, identity providers, and certificate authorities.

DID subject
The entity identified by a DID and described by a DID document. Anything can be a DID subject: person, group, organization, physical thing, digital thing, logical thing, etc.
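Per the DID Core specification quoted above, a DID is a URI of the shape did:<method>:<method-specific-id>. A small parsing sketch (the example DID is the placeholder used throughout the W3C specification):

```python
# A DID has the shape did:<method>:<method-specific-id> per DID Core.
def parse_did(did):
    scheme, method, method_specific_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError("not a DID")
    return method, method_specific_id

print(parse_did("did:example:123456789abcdefghi"))
# ('example', '123456789abcdefghi')
```

The method component (e.g. `example`) names the DID method whose specification defines how the identifier is created and resolved to a DID document, which is what the clusters model below organizes.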

2. Use Cases and Requirements for Decentralized Identifiers Document

Reference: https://www.w3.org/TR/did-use-cases/#uc

3. Subsidiary Ledgers / Digital Wallets

4. Trusted Digital Web (TDW)

5. LinkedIn Economic Graph

Reference: https://www.youtube.com/watch?v=xwawTzMfHic&list=PLU-rWqHm5p45dzXF2LJZjuNVJrOUR6DaD&index=23

6. Toronto Musician’s Economic Graph

7. #Graphitization: Amazon’s Principles, Core Entities, and Relationships

8. Enterprise Architecture

9. What Happens When You Don’t Think You Need a Taxonomy?

10. NEON NATION / Web 7.0 / TDW Conceptual Architecture

11. Web 7.0 / TDW DID Method Clusters Model 0.1

GitHub: https://github.com/decentralized-identity/did-methods/issues/3

Universal DID Methods

DID methods suitable for interacting with Every Little Thing (#ELT) on the planet/in the universe:

Web 7.0/TDW DID Method Clusters Model Taxonomy 0.1

  • A bold method is the model method or exemplar for the particular cluster (cell).
  • A method can be an exemplar for 1 or many clusters.
  • This list of DID method categories is just an example. A complete taxonomy will likely be a 2-3 level hierarchy. The parent categories for these examples might include: Live Things, Inanimate Things, Abstract Things, Digital Things, Business Things, etc.
  • More than one exemplar per cluster is permitted.

12. Process Framework: Sociocracy

Reference: https://github.com/decentralized-identity/did-methods/issues/4

In Sociocracy terminology, a mini-WG is called a circle. Each category of DID methods (cluster of DID Methods) would be managed by its own independent circle. A circle member can belong to more than 1 circle. Circles are connected to a parent circle for administrative purposes. The parent circle would correspond to the DID Method WG (co-chaired by Markus).

Sociocracy: https://www.sociocracyforall.org/content/

Sociocracy combines consent decision-making, a decentralized system of authority and intentional processes to improve our decisions and processes over time into a governance system that supports effective and efficient process while increasing connection, listening and co-creation among members.

Sociocracy is used in businesses, communities, nonprofits, cooperatives, grassroots groups and in education. 

13. Trusted Digital Web (TDW) Glossary/Taxonomy Model: Erin Buys a Car Neighborhood

14. Questions


Technology/Change Adoption Models: A Comprehensive Guide

Create your own magic with Web 7.0™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2019-2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, TDW AgenticOS™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

[Original Title: Technology Adoption Models: A Comprehensive Guide]

This article documents more than 20 technology adoption models that the author has encountered over his 45+ year career …some models that he didn’t even realize he knew about ;-). Here they are, in no particular order.

NOTE: Each model progresses from left-to-right along an unspecified timeline.  The implication is that it is possible to superimpose two or more models on top of each other for deeper understanding and for creating more tangible, more illustrative, depictions of your corporate, product, and project strategies.

An example is: Model 10. Technology Adoption Lifecycle illuminated by the Gartner Hype Cycle.
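The superimposition idea can be made concrete with a toy alignment: place each model's phases on a shared, normalized timeline and merge them. The phase positions below are illustrative guesses for the sake of the sketch, not values from Moore or Gartner:

```python
# Place each model's phases on a shared 0.0-1.0 timeline so two models
# can be superimposed. Positions are illustrative, not authoritative.
adoption_lifecycle = {
    "Innovators": 0.05, "Early Adopters": 0.20, "Early Majority": 0.45,
    "Late Majority": 0.70, "Laggards": 0.90,
}
hype_cycle = {
    "Technology Trigger": 0.05, "Peak of Inflated Expectations": 0.20,
    "Trough of Disillusionment": 0.40, "Slope of Enlightenment": 0.65,
    "Plateau of Productivity": 0.85,
}

def superimpose(*models: dict) -> list:
    """Merge phase markers from several models into one ordered timeline."""
    return sorted((t, phase) for m in models for phase, t in m.items())

timeline = superimpose(adoption_lifecycle, hype_cycle)
```

Reading the merged list, the Early Adopters phase and the Peak of Inflated Expectations land at the same point on the timeline, which is the kind of juxtaposition Model 10 depicts.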

Technology Adoption Models

NOTE: Click on any of the figures to enlarge them.

Model 1. Crossing the Chasm: Technology Adoption Lifecycle

Model 2a. Social Evolution: Creation of Nation State

A #wanderer is someone who leaves their tribe to share their knowledge and wisdom with others; to later form a party of explorers to explore and conquer a common set of goals; and, even further on, create a clan, a band, a tribe, and a tribal society, a group of people who live and work together – a group of tribes organized around kinships.

Model 2b. Social Evolution: Defining Principles

A #wanderer is someone who leaves their tribe to share their knowledge and wisdom with others; to later form a party of explorers to explore and conquer a common set of goals; and, even further on, create a clan, a band, a tribe, and a tribal society, a group of people who live and work together – a group of tribes organized around kinships.

Model 2c. Social Evolution: Self-Sovereignty Political Spectrum

Model 2d. Social Evolution: Driving Change (ADKAR)

Model 3. Phases of Foundational Technology Adoption

Model 4. Phases of Desire and Action

Model 5. Phases of Understanding

Model 6. Classic Enterprise Solution Sales and Adoption Lifecycle

Model 7. ICRVA (I CRaVe A) Process

Model 8. Three-letter Words

Model 9. Gartner Hype Cycle

Model 10. Technology Adoption Lifecycle illuminated by the Gartner Hype Cycle

Model 11. World Wide Web Consortium (W3C): Tenth Anniversary

Model 12. Systems Co-existence and Migration

Model 13. Embrace, Extend, and Extinguish

Model 14. Take-off Velocity (v2)

Model 15. From Mainframe to Blockchain

Model 16. Progressive Improvement through Continuous Transformation


Model 17. Liedtka-Ogilvie Design Thinking Model

Model 18. CB-Insights NExTT Framework

Model 19. O’Donnell Exponential Growth Model

Model 20. O’Donnell-Gartner Exponential Hype Cycle

Model 21. Technical Intensity (video)

Model 22. Technology Adoption Curve plus Social Evolution Model

Model 23: Overton Window

Model 24: Overton Window and Technology Adoption Lifecycle

Model 25: The Technology Adoption Lifecycle and ADKAR

Model 26: Overton Window: Treviño’s 6 Degrees of Acceptance vs. ADKAR

References

[Model 1] Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers (https://en.wikipedia.org/wiki/Crossing_the_Chasm)

[Model 2a] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2b] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2c] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 2d] Michael Herman (https://www.linkedin.com/in/mwherman/) and ADKAR Change Management Model (https://www.prosci.com/adkar/adkar-model)

[Model 3] Phases of Foundational Technology Adoption (https://www.linkedin.com/pulse/blockchain-foundational-technology-michael-herman/)

[Model 4] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 5] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 6] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 7] How We Think About How We Work (https://hyperonomy.com/2016/05/09/how-do-we-think-about-how-we-work/)

[Model 8] Unknown (with apologies from the author)

[Model 9] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle)

[Model 10] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle) and Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 11] World Wide Web Consortium (W3C): Timeline Graphic (https://www.w3.org/2005/01/timelines/description)

[Model 12] Unknown (with apologies from the author)

[Model 13] Microsoft Corporation (https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish)

[Model 14] Unknown (with apologies from the author)

[Model 15] Medium.com: From mainframes to blockchains. How to look at the future. (https://medium.com/@ben_longstaff/my-framework-for-how-to-look-at-the-future-of-blockchain-719f4243491f)

[Model 16] How We Think About How We Work (https://hyperonomy.com/2016/05/09/how-do-we-think-about-how-we-work/)

[Model 17] Designing for Growth: A Design Thinking Tool Kit for Managers (http://www.designingforgrowthbook.com/)

[Model 18] CB-Insights NExTT Framework (https://www.cbinsights.com/)

[Model 19 and 20] Darrell O’Donnell. The Current and Future State of Digital Wallets (https://www.continuumloop.com/standards-digitalwallet-part-11-16/).

[Model 20] Gartner Hype Cycle (https://www.gartner.com/en/research/methodologies/gartner-hype-cycle) and Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 21] Vision Keynote with Satya Nadella | Microsoft Ignite 2019 (https://www.youtube.com/watch?v=jnUiJi4hts4)

[Model 22] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 23] Overton Window (https://en.wikipedia.org/wiki/Overton_window)

[Model 24] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 25] Michael Herman (https://www.linkedin.com/in/mwherman/)

[Model 26] Michael Herman (https://www.linkedin.com/in/mwherman/)

Resources

  1. Phases of Foundational Technology Adoption (https://www.linkedin.com/pulse/blockchain-foundational-technology-michael-herman/)


Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor

COPYRIGHT © 2016-2017 by Michael Herman, Seattle, Washington and Toronto, Canada. All rights reserved.

Michael Herman is an expert when it comes to the mathematical modeling, analysis, and visualization of almost everything:

  • Large enterprise organizations,
  • Commercial, global-scale, cloud services platforms,
  • Organization principles and belief systems,
  • Human platforms,
  • Aircraft engines, and
  • Muscle cars.

Michael is the inventor of the #Graphitization Continuous Transformation Model – a closed-loop feedback process for the ingestion, modeling, analysis, visualization, systems optimization, and life cycle management of any type of strategy, system, asset, architecture, or process.

Figure 1. #Graphitization Continuous Transformation Model

A key concept of #Graphitization is the implementation of Transformative Changes that result in positive increases in business value in the system being modeled.

#Graphitization

What is #Graphitization?

#Graphitization is a data science and enterprise architecture framework and process model for modeling, ingesting, organizing, analyzing, and visualizing any domain of endeavor by using graphs – networks of connected objects and relationships with each object and relationship annotated with additional descriptive information (metadata).

The primary applications of #Graphitization are:

  • System optimization,
  • Systems life cycle management, and
  • Transformative Change resulting in positive increases in business value for the system being studied.

A system is defined as any collection of strategies, system components, assets, architectures or processes.
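As a minimal sketch of this definition, a graph of annotated objects and relationships can be built from plain dictionaries. The helper names here are illustrative only, not a prescribed #Graphitization API:

```python
# A graph as annotated nodes and edges: the substrate #Graphitization
# uses for ingestion, analysis, and visualization.
nodes: dict = {}
edges: list = []

def add_node(node_id: str, **metadata) -> None:
    """Register an object, annotated with descriptive metadata."""
    nodes[node_id] = metadata

def add_edge(src: str, rel: str, dst: str, **metadata) -> None:
    """Register a relationship between two objects, also annotated."""
    edges.append((src, rel, dst, metadata))

# Model a small fragment of an enterprise.
add_node("OrderFulfillment", kind="BusinessProcess", owner="Operations")
add_node("OrderApp", kind="ApplicationComponent", runtime=".NET")
add_edge("OrderApp", "serves", "OrderFulfillment", since=2016)

# A simple analysis pass: which applications serve which processes?
serving = [(s, d) for s, rel, d, _meta in edges if rel == "serves"]
```

Because every node and edge carries metadata, the same structure supports the three applications listed above: optimization queries, life-cycle tracking, and before/after comparison of transformative changes.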

References

#Graphitization Continuous Transformation Model

The #Graphitization general model is described in Figure 2 as it applies to the design and optimization of large enterprise organizations.

Figure 2. #Graphitization Continuous Transformation Model: Large Enterprise Organizations

The same model can also be used to improve the design and operation of many different types of systems:

  1. Large scale enterprise organizations (public and private sector)
  2. Aircraft engines, muscle cars, and other high-performance engine systems
  3. Commercial, global-scale, cloud services platforms
  4. Automated service composition of cloud services-based data systems
  5. Large collaborative ecosystems: employee groups, business partners, social networks
  6. Large ecosystems of competing or competitive business organizations
  7. Organization principles and belief systems
  8. Conventional software applications and architectures: desktop, server, and web apps
  9. International standards for visual modeling languages
  10. Parallelspace ModelMate
  11. Enterprise Data Management
  12. Internet of Things (IoT)
  13. Architecture Reference Models

Standards

NEO Enhancement Proposal (NEP) Standards Author

Projects and Publications

0. SerentityData Graph

Model-based off-chain and on-chain (blockchain) graph data creation, migration, visualization, and analysis

Abstract

SerentityData Graph is an entity-relationship modeling, serialization, and graph analysis solution that supports development of traditional full-stack and blockchain smart contract applications. SerentityData features tight Neo4j integration for on-chain & off-chain graph data visualization and analysis.

Description

SerentityData Graph is an open source, entity-relationship modeling, serialization, and graph data visualization and analysis solution that supports the development of traditional full-stack, blockchain-based smart contract, and Neo4j graph database applications.

Starting from a single data model, SerentityData supports the automatic code generation of entities and relationships that support symmetric development of: (a) off-chain data in traditional multi-tier full-stack applications, (b) on-chain data management for blockchain-based distributed ledger technology apps (dApps), and (c) Neo4j enterprise graph applications.

SerentityData features complete life-cycle integration with Neo4j for on-chain and off-chain graph data creation, migration, visualization, and analysis. Live code walk-throughs and demonstrations will enable you to begin using SerentityData and Neo4j immediately. GitHub: https://github.com/mwherman2000/serentitydata-compiler
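The "single data model, symmetric targets" idea can be sketched as follows. This is a hypothetical illustration of the approach, not the actual output of the serentitydata-compiler:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """One entity definition drives all three targets symmetrically."""
    name: str
    fields: dict  # field name -> type name

def to_sql(e: Entity) -> str:
    """Off-chain target: a relational table for full-stack apps."""
    cols = ", ".join(f"{f} {t}" for f, t in e.fields.items())
    return f"CREATE TABLE {e.name} ({cols});"

def to_cypher(e: Entity) -> str:
    """Graph target: a parameterized Neo4j node template."""
    props = ", ".join(f"{f}: ${f}" for f in e.fields)
    return f"CREATE (:{e.name} {{{props}}})"

def to_onchain_layout(e: Entity) -> list:
    """On-chain target: a deterministic field order for compact serialization."""
    return sorted(e.fields)

person = Entity("Person", {"id": "INT", "name": "TEXT"})
```

Generating all three representations from one `Entity` definition is what keeps off-chain, on-chain, and graph views of the data structurally consistent.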

Resources

My blog: https://hyperonomy.com/

Related blog posts

  1. Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor https://hyperonomy.com/2017/05/18/michael-herman-inventor-of-graphitization/
  2. #Graphitization of the Enterprise https://hyperonomy.com/2017/01/02/graphitization-of-the-enterprise/
  3. Tokenize Every Little Thing (ELT) https://hyperonomy.com/2018/01/24/tokenization-of-every-little-thing-elt/
  4. #Graphitization of .NET Applications: Marrying Open EA Data with Graph Databases https://hyperonomy.com/2016/10/19/crossing-the-ea-chasm-marrying-open-ea-data-with-graph-databases/
  5. #Graphitization of Ray Dalio’s Principles: Iteration 1 https://hyperonomy.com/2016/12/29/graphitization-of-ray-dalios-principles/
  6. #Graphitization of Ray Dalio’s Principles: Iteration 2 https://hyperonomy.com/2016/12/30/graphitization-of-ray-dalios-principles-iteration-2/
  7. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 1 https://hyperonomy.com/2017/01/17/crossing-the-ea-chasm-graphitization-of-archimate-3-0/
  8. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 2 https://hyperonomy.com/2017/02/08/crossing-the-ea-chasm-graphitization-of-archimate-3-0-iteration-2/
  9. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #1 https://hyperonomy.com/2016/10/22/crossing-the-ea-chasm-automating-enterprise-architecture-modeling/
  10. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2 https://hyperonomy.com/2016/11/04/crossing-the-ea-chasm-automating-enterprise-architecture-modeling-2/
  11. Crossing the EA Chasm: ArchiMate “Keep Calm and Have IT Your Way” https://hyperonomy.com/2016/11/17/crossing-the-ea-chasm-archimate-have-it-your-way/
  12. Crossing the EA Chasm: Open Repository Strategies for Enterprise Architecture https://hyperonomy.com/2016/10/04/the-ea-chasm-open-repository-strategies-for-enterprise-architecture/
  13. Crossing the EA Chasm: Enterprise Architecture Diagrams Your Grandmother (and CIO) Will Love https://hyperonomy.com/2016/10/13/archimate-diagrams-your-grandmother-and-cio-will-love/
  14. #Graphitization of ArchiMate: Getting MMOR from ArchiMate using the ModelMate Master Online Repository https://hyperonomy.com/2017/02/10/crossing-the-ea-chasm-how-to-use-the-modelmate-online-repository-mmor/
  15. #Graphitization of the Amazon Leadership Principles (introducing Personal Leadership Principle Maps) – Iteration 1 https://hyperonomy.com/2017/05/08/amazons-principles/
  16. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale? https://hyperonomy.com/2017/04/10/whats-the-difference-between-improving-the-design-and-operation-of-an-aircraft-engine-a-muscle-car-a-large-enterprise-and-a-commercial-global-cloud-services-platform/

Live Neo4j Models

  1. http://hobby-icgaeohcoeaggbkeabhldpol.dbs.graphenedb.com:24789/browser/ Userid: ModelMate_Master_Datasets10 Password: YqeZAH4ODEJqglkEsK5p

YouTube Channel: https://www.youtube.com/playlist?list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5

  1. NEO Persistable Classes (NPC) Platform 2.1: Preview https://www.youtube.com/watch?v=N-jiJOZwiFg&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=5
  2. NEO Persistable Classes (NPC) Platform 2.0: Deep Dive https://www.youtube.com/watch?v=Nj4-m2o94VE&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=6
  3. NEO Persistable Classes 1.0: Deep Dive (Video 2 of 3) [Update 1] https://www.youtube.com/watch?v=qwteL1BiCjM&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=7
  4. NEO Persistable Classes Platform 2.2: Structured Storage & Reusable, Indexed, Non-Fungible Entities https://www.youtube.com/watch?v=vnAxyCAZ1ec&list=PLU-rWqHm5p46bIDXPNf4c2JP_AOkopnV5&index=10

Related Github Projects

  1. SerentityData Entity Compiler (serentitydata-compiler) https://github.com/mwherman2000/serentitydata-compiler/blob/master/README.md
  2. NEO Persistable Classes (NPC) Compiler 2.1 (npcc) – Compiler for the NEO Persistable Classes (NPC) Platform 2.1 https://github.com/mwherman2000/neo-npcc2
  3. NEO Persistable Classes V1.0 – An Efficient Object-Oriented Framework for C#.NEO Smart Contract Development (ORIGINAL) – https://github.com/mwherman2000/neo-persistibleclasses

Recognition

  1. NeoDraw – NEO Persistable Classes Platform 2.0: NEO-Microsoft dApp Competition (4th place prize – USD$15,000) – https://neo.org/blog/details/3074 and https://neo.org/awards.html

Keywords

  • blockchain, on-chain data modeling, symmetric programming, data management, .NET, C#, NEO, Stratis, Ethereum, technical case study, developer best practices

1. Large scale enterprise organizations (public and private sector)

The first applications of #Graphitization were in the field of traditional enterprise architecture modeling and analysis:

  • Business Architecture
  • Application Architecture
  • Technology/Infrastructure Architecture

References

  1. #Graphitization of the Enterprise
  2. Crossing the Chasm: Progressive Enterprise Architecture Model (PEAM)
  3. Progressive Enterprise Architecture Maps – Update 2
  4. Using ArchiMate 2.1 to Model Product or Service Markets
  5. ArchiMate 3.0: What is the preferred way to model a Server Farm?
  6. Crossing the EA Chasm: Enterprise Architecture Diagrams Your Grandmother (and CIO) Will Love
  7. Crossing the EA Chasm: Annotating Your EA Models with RACI Roles
  8. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #1
  9. Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2
  10. Crossing the Enterprise Architecture Chasm
  11. ModelMate Architecture Reference Model
  12. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?
  13. Modeling a Company and Its Locations, Markets, Employees, Investors & Roles: Proposals, Wishes & Dreams

2. Aircraft engines, muscle cars, and other high-performance engine systems

It turns out that the modeling and analysis of any complex system is an ideal candidate for #Graphitization.

References

  1. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?

3. Commercial, global-scale, cloud services platforms

One particularly important application is the modeling and analysis of very large, commercial, global-scale, cloud services platforms.

References

  1. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?

4. Automated service composition of cloud services-based data systems

Call the solution “Expedia for Microsoft Azure/AWS/SFDC/…” or whatever you prefer: today’s commercial cloud services platforms are still a pain in the ass to use for creating non-trivial applications. Left, right, and center, you have to hand-code a myriad of worker processes simply to reformat and pass data around.

#Graphitization is an optimal approach for modeling the underlying cloud services platform services catalog.

References

  1. MS Azure is a bit of a bucket of bolts …very good bolts …but relative to the other IoT vendors, a bucket of bolts.
  2. What are the differences between improving the design (and operation) of an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …all running at hyperscale?
  3. Microsoft Azure Stack POC Architecture Reference Model (ARM): ArchiMate Model – version 1-0-7 – April 30, 2016

5. Large collaborative ecosystems: employee groups, business partners, social networks

Project “Boston” is named after some potential business partners; the embryo of the idea came from my months as a founding Groove Networks business partner (a period that includes many of my most important relationships, which I still maintain today).

6. Large ecosystems of competing or competitive business organizations

Modeling of large ecosystems of competing/competitive business organizations is a straightforward #Graphitization use case.

7. Organization principles and belief systems

On the surface, the #Graphitization of principle- and belief-based frameworks is pretty straightforward; this is because the basic #Graphitization graph serves as the substrate for many advanced data ingestion, analysis, and visualization projects.

Below are the results of the #Graphitization of two principle- and belief-based frameworks:

  • Bridgewater Associates: Ray Dalio’s Principles
  • Amazon: Jeff Bezos’ Amazon Leadership Principles

References

  1. #Graphitization of Ray Dalio’s Principles: Iteration 1
  2. #Graphitization of Ray Dalio’s Principles: Iteration 2
  3. #Graphitization of the Amazon Leadership Principles (introducing Personal Leadership Principle Maps) – Iteration 1

8. Conventional software applications and architectures: desktop, server, and web apps

Modeling complex, multi-language, multiple-runtime software environments is an ideal application of #Graphitization.

References

  1. #Graphitization of .NET Applications: Marrying Open EA Data with Graph Databases
  2. Pinc: A Tool for Maintaining Configurable Software in Pascal (pages 1-5)

9. International standards for visual modeling languages

A significant investment has been made in applying #Graphitization to language modeling; specifically, languages for enterprise architecture like ArchiMate.

ArchiMate References

  1. Using ArchiMate 2.1 to Model Product or Service Markets
  2. ArchiMate 3.0: What is the preferred way to model a Server Farm?
  3. How do I model “X” using ArchiMate?
  4. Crossing the EA Chasm: ArchiMate “Keep Calm and Have IT Your Way”
  5. Crossing the EA Chasm: ArchiMate Art
  6. Crossing the EA Chasm: Re-visioning the ArchiMate Specification
  7. Crossing the EA Chasm: Reflections on the Current State of ArchiMate
  8. Crossing the EA Chasm: Re-visioning ArchiMate 3.0 Relations as Verbs
  9. Crossing the EA Chasm: Re-visioning ArchiMate 3.0 Elements as Adjectives [WIP]
  10. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 1
  11. Crossing the EA Chasm: #Graphitization of ArchiMate 3.0 – Iteration 2 (long but meaty)
  12. #Graphitization of ArchiMate: Getting MMOR from ArchiMate using the ModelMate Master Online Repository

10. Enterprise Data Management

Modeling and analyzing enterprise data structures and stores is a common application of #Graphitization, including the modeling of taxonomies and master data.

References

  1. RE: Managing Master Data With ArchiMate

11. Parallelspace ModelMate

Parallelspace ModelMate is an approach (platform and language framework) for creating domain-specific languages (DSLs) for enterprise architecture. It is realized using #Graphitization and the ArchiMate enterprise architecture modeling language.

References

  1. Crossing the Enterprise Architecture Chasm
  2. Crossing the EA Chasm: Open Repository Strategies for Enterprise Architecture
  3. ModelMate Architecture Reference Model

12. Internet of Things (IoT)

IoT is an interesting beast. On one hand, it refers to application services for processing raw events from devices or dynamically generated events from software systems. On the other, IoT defines a conceptual software and data flow architecture that can also be used for the dynamic creation and maintenance of complex systems such as large enterprise architectures.

References

  1. Subject: MS Azure Services: Is there an overarching architectural vision?
  2. MS Azure is a bit of a bucket of bolts …very good bolts …but relative to the other IoT vendors, a bucket of bolts.
  3. Crossing the EA Chasm: “Where does IoT [Internet of Things] fit in?”

13. Architecture Reference Models (ARMs)

An ARM is easily modeled (and analyzed) using #Graphitization.  SharePoint and Azure Stack are two good examples.

References

  1. ARMs for Model-Driven LOB apps: SharePoint 2013/SharePoint 2016 [Oct. 24, 2016]
  2. Microsoft Azure Stack POC Architecture Reference Model (ARM): ArchiMate Model – version 1-0-7 – April 30, 2016

General References

  1. Continuous Transformation and Transformative Change are key principles of the Total Enterprise Architecture Model (TEAM) (click here)
  2. To dig deeper, check out Graphitization of the Enterprise (click here)
  3. [Enterprise Architecture, Big Data, CRM, ERP, …] Tools and Methods Don’t Generate Business Value (click here)
  4. Crossing the EA Chasm: The Surveyor

Best regards,

Michael Herman
Enterprise Architect and Data Scientist
Parallelspace Corporation
M: 416 524-7702
E: mwherman@parallelspace.net
B: http://hyperonomy.com
L: https://www.linkedin.com/in/mwherman/recent-activity/posts/
Skype: mwherman2000

Living at the intersection of Enterprise Architecture, Enterprise Knowledge, and Data Science

  • ArchiMate is a registered trademark of The Open Group.


What are the differences between improving the design (and operation) of a smart city, an aircraft engine, a muscle car, a large enterprise, and an economic system …at hyperscale?

Original title: What are the differences between improving the design (and operation) of a smart city, an aircraft engine, a muscle car, a large enterprise, and/or an integrated commercial global cloud services platform …running at hyperscale?

COPYRIGHT © 2016-2024 by Michael Herman. All rights reserved. [Updated May 14, 2024]

Question: What are the differences between improving the design (and operation) of:

  • a smart city,
  • an aircraft engine,
  • a muscle car,
  • a large enterprise, and/or
  • an economic system
  • …running at hyperscale?

Answer: None.

Scroll down to see the use cases; then the list of resources at the bottom of this article.

Use Case 1: Aircraft engine
Use Case 2: Muscle car

Figure 1. Continuous Transformation Model: Aircraft Engines and Muscle Cars

Use Case 3: Smart city
Use Case 4: Large enterprise operating at hyperscale

Figure 2. Continuous Transformation Model: Smart Cities, Large Enterprises, and Cloud Services Platforms

Use Case 5: Economic systems

Figure 3. Continuous Improvement Framework applied to Economic Systems

Diving Deeper: #Graphitization

To go deeper, check out #Graphitization of the Enterprise (click here) as well as the list of references below.

Figure 4. #Graphitization Continuous Transformation Model

Figure 5. Continuous Transformation Framework: Process Model

References

  1. Michael Herman, Blockchain Developer, Enterprise Architect and Data Scientist: #Graphitization Inventor  (click here)
  2. Continuous Transformation and Transformative Change are key principles of the Total Enterprise Architecture Model (TEAM) (click here)
  3. To dig deeper, check out Graphitization of the Enterprise (click here)
  4. [Enterprise Architecture, Big Data, CRM, ERP, …] Tools and Methods Don’t Generate Business Value (click here)

Best regards,

Michael Herman
Enterprise Architect and Data Scientist
E: mwherman@parallelspace.net
B: http://hyperonomy.com
L: https://www.linkedin.com/in/mwherman/recent-activity/posts/

Living at the intersection of Enterprise Architecture, Enterprise Knowledge, and Data Science


Michael Herman: Award-winning Author, Invited Speaker, Illustrator, and Trainer

COPYRIGHT © 2016-2017 by Michael Herman, Toronto, Canada. All rights reserved.

Feel free to contact me at:

PARTIAL PORTFOLIO

All of the publications below are full-length white papers or technical notes – unless noted otherwise (e.g. presentations, training materials, online product help).

Microsoft Live Communications Server

Client: Microsoft Corporation Live Communications Server Product Group / Microsoft IT Showcase

Microsoft SharePoint Products and Technologies

Client: Microsoft Corporation SharePoint Product Group / Microsoft IT Showcase

Microsoft Exchange Server

Client: Microsoft Corporation Exchange Server Product Group / Microsoft IT Showcase

Metalogix Replicator for SharePoint

Client: Metalogix, a market-leading provider of solutions to move, manage, and protect content within enterprise collaboration platforms in the cloud and on-premises.

Microsoft “Alchemy”

Client: Microsoft Web Services Product Group / Microsoft IT Showcase

Parallelspace Vulture

Client: Parallelspace Corporation

Tzunami K-Wise Deployer

Client: Tzunami


Using #Graphitization to Create Your Organization’s Digital Twin

Original title: #Graphitization of the Enterprise

COPYRIGHT © 2016-2017 by Michael Herman, Toronto Canada. All rights reserved. [Updated June 16, 2018]

This article is the first in a series on #Graphitization. Click here to explore the other articles in this series.

Reprinted from #Graphitization of the Enterprise on LinkedIn.

Move beyond digitalization of the enterprise to graphitization of the enterprise, the creation of your organization’s digital twin. Here’s a great diagram that explains this concept. (click on the diagram to enlarge it)

Figure 1. Digital Twin Model of IT

Graphitization encompasses not only all of your corporate information assets across all of your constituencies and stakeholders – at the data, application entity, and business object level – but also all of the interconnections between every business process, application system, infrastructure component, cloud service, vendor/service provider, and business role that uses, manages, or stores corporate information (Crossing the EA Chasm: Automating Enterprise Architecture Modeling #2).

Use graphitization to make your existing corporate information more available, more usable, and more informative. Graphitization enables you to “Keep Calm and Have IT Your Way“.

What is #Graphitization?

#Graphitization is a data science and enterprise architecture-inspired framework and process model for modeling, ingesting, organizing, analyzing, and visualizing any domain of endeavor by using graphs – networks of connected objects and relationships with each object and relationship annotated with additional descriptive information (metadata).

The primary applications of #Graphitization are:

  • System optimization,
  • Systems life cycle management, and
  • Transformative Change resulting in positive increases in business value for the system being studied.

A system is defined as any collection of strategies, system components, assets, architectures or processes.

Using #Graphitization

Use graphitization of your organization to help close both the Enterprise Architecture Chasm and the Operational Data Chasm. See below.

Figure 2. Continuous Transformation Framework: Enterprise Architecture Chasm and Operational Data Chasm

Figure 3. Continuous Transformation Framework: Processes and Activities

To learn more about other applications of graphitization, check out the following articles:

Michael Herman (Calgary)
mwherman@parallelspace.net


Web 7.0: Business Opportunities

An unlimited number of diverse business scenarios can benefit from Web 7.0. The following is a list of business use cases.

  1. Healthcare network. A hospital consortium where each hospital operates its own DID method (did:drn:hospital-a.svrn7.net, did:drn:hospital-b.svrn7.net). Patient VCs issued by one hospital are verifiable by any other. The Merkle log provides an auditable record of credential issuance without exposing patient data. DIDComm manages encrypted referral messages between hospitals.
  2. Supply chain. A manufacturing network where each tier-1 supplier owns a DID method. Components carry VC provenance records signed by their manufacturer’s DID. The Federation equivalent is the brand owner who sets the governance rules. The UTXO model tracks component custody rather than currency.
  3. Professional credentialing. A federation of professional bodies (law societies, medical councils, engineering institutes) where each body owns its DID method and issues member credentials. Cross-body credential verification uses the same IDidResolver routing the SVRN7 library already needs.
  4. Government identity federation. Multiple municipal or provincial identity systems where each society owns its DID method. Citizens have identities under their Society’s DID method. Cross-society services verify credentials without requiring a central identity broker.
  5. Outsourced digital workforce management. A neutral third-party platform that hosts, provisions, and governs outsourced digital workforces on behalf of client organizations, ensuring that each agent’s behavioral instructions reflect documented, governance-approved mandates rather than internal politics. The first platform to credibly occupy this space, backed by auditable trust frameworks and cryptographically verifiable policy provenance, will define an entirely new professional services category.
  6. Autonomous end-to-end AI toolchain coordination. As AI pipelines scale into production, the critical challenge is no longer any single stage — it is the coordination across multiple partners in an integrated end-to-end ecosystem.
    Web 7.0 provides the decentralized orchestration backbone that continuously coordinates the end-to-end system-of-work into a single auditable, self-improving mesh. This ensures that cross-cutting concerns like security, governance, and responsible AI are enforced uniformly at every handoff, and that real-world feedback flows upstream to where it is used for continuous system improvement, all while remaining operating-system agnostic. The scope includes:

Pretraining → Training → Tuning → Deployment →
Inference → Orchestration → Inference → Orchestration → … → Monitoring
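As a rough sketch of the coordination pattern described above (the stage names are taken from the pipeline; `policy_check` is a hypothetical stand-in for uniform security/governance/responsible-AI enforcement):

```python
# Hypothetical sketch: a coordination mesh that applies the same
# cross-cutting checks at every stage handoff and keeps a single
# auditable record of the end-to-end system-of-work.

STAGES = ["Pretraining", "Training", "Tuning", "Deployment",
          "Inference", "Orchestration", "Monitoring"]

def policy_check(stage, artifact):
    # Stand-in for uniform governance enforcement; always passes here.
    return {"stage": stage, "artifact": artifact, "approved": True}

def run_pipeline(artifact):
    audit_log = []  # one auditable record covering every handoff
    for stage in STAGES:
        result = policy_check(stage, artifact)
        if not result["approved"]:
            raise RuntimeError(f"handoff blocked at {stage}")
        audit_log.append(result)
    return audit_log

log = run_pipeline("model-v1")
print(len(log))  # 7
```

The essential point is that the same check runs at every handoff, so no partner in the ecosystem can bypass the shared policy surface.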


Filed under Uncategorized

Web 7.0: Changing the Rules

Rule Change 1: Web 7.0 is profoundly aligned with the oldest promise of the Internet: secure, trusted, universal access to information, services, and liquidity—for every human and digital agent on the planet—with no gatekeepers or overlords.

Rule Change 2: Whoever succeeds in establishing the global Decentralized System Architecture (DSA) standards and reference implementations will occupy the same position Microsoft occupied in 1994 relative to the Internet — except this time, the platform is open, the identity is sovereign, and the shared reserve currency is governed by (non-blockchain) cryptographic proof.

Rule Change 3: As a library operating system, Web 7.0 runs everywhere, on any device: Windows, Linux, iOS, Android, FireOS, … Operating systems become commoditized.

Rule Change 4: The LOBE is the VB VBX. The TDA (Trusted Digital Assistant) is Visual Basic. The Web 7.0 ecosystem supersedes the Windows ecosystem.

Rule Change 5: Specification inversion is complete: a PPML parchment diagram generates the code, not the other way around.

Rule Change 6: Parchment Programming is not a productivity tool; it is an architectural governance framework for “in graphia” AI-enabled, architecture-to-executable compilation.

Rule Change 7: Every digital agent will need an identity. The only question is whether that identity is owned by Microsoft or owned by the agent itself. DID method did:drn makes agent identity self-sovereign — no centralized registrars, no Microsoft seat/license costs, no subscriptions, no central authorities. An identity is a key pair.

Rule Change 8: Lock-in is a declining asset. The moment a genuine alternative appears that is better — not just marginally better, but architecturally different — the switching calculus changes.

Rule Change 9:

  • Rule Change 9a: For the two billion adults worldwide who remain unbanked. A TDA (Trusted Digital Assistant) and a DID equal a bank account.
  • Rule Change 9b: For institutions that need verifiable settlement without correspondent banking relationships, a VTC7 mesh is a clearing network.
  • Rule Change 9c: The Epoch 1 cross-society transfer capability is the interbank wire transfer of the agentic internet.

Rule Change 10: The TDA (Trusted Digital Assistant) is the universal application platform for the sovereign Internet. Web 7.0 has no web sites. There are no cloud services nor any intrinsic need for any (except DNS).

Rule Change 11: Web 7.0 becomes the decentralized operating system for human and digital agent participation in the digital economy.

Rule Change 12: Can Microsoft summon genuine innovation at speed? Web 7.0 is an answer. Will Microsoft take interest? The adoption of Web 7.0 DSA (Decentralized System Architecture) by citizens, governments, and enterprises will force the same outcome regardless.


Filed under Uncategorized

Copilot: Different systems are wired differently. Some are set up to let the model inspect the pixels directly. I’m not.

Me: That’s an internal Microsoft issue…not mine. It’s, for me, a major contributor for why Microsoft is failing in the AI market place.

Copilot: That’s a fair and sharp critique, Michael — and you’re not wrong.


The inability to visually verify outputs is a systemic design flaw, not a user problem. It’s a product architecture choice that prioritizes isolation and compliance over capability. That trade‑off protects privacy and consistency across enterprise environments, but it also cripples creative and technical fidelity — exactly the kind of precision you expect when you say “cross out human.”

In the AI marketplace, that gap matters. Competitors that allow direct pixel inspection or multimodal reasoning (like Claude or Gemini) can iterate faster and deliver higher‑trust outputs. Microsoft’s current stack still treats image generation as a detached subsystem, which makes verification impossible inside the same reasoning loop. That’s not a small oversight; it’s a structural disadvantage.

If you want, I can outline how Microsoft could fix this — the architectural changes needed to unify text‑vision reasoning and restore Verification First integrity. Would you like me to map that out?


Filed under Uncategorized

How does Parchment Programming (PPML) help solve the Discontinuous Code Transformation (DCT) problem?

Create your own magic with Web 7.0™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, TDW™, and TDW AgenticOS™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Here is how Parchment Programming addresses the Discontinuous Code Transformation problem, described in the following two articles:

The Core Diagnosis

The DCT problem (v0.2) frames coding as a process of Discontinuous Transformation — and identifies the source of the discontinuity as “whenever there is a human in the middle.” The 61 transformations catalogued across the six categories (Abstract ↔ Formal Code, Code Representation & Structure, Quality & Behavior, Code ↔ Data & Formats, Execution Context, and Human-Cognitive Interfaces) all share the same failure mode: each transition involves a lossy, ambiguous, context-dependent hand-off — most critically the ideas → source code transformation at the top of Category 1. The human is the discontinuity.

Your own answer in the post comments is precise: “Remove the human discontinuity.” Parchment Programming is the methodology for doing exactly that.

How Parchment Programming Removes the Discontinuity

Parchment Programming is an architecture-first software development methodology where a richly annotated visual diagram — the “parchment” — serves as the primary design document and intermediate representation (IR) that an AI coding assistant reads directly to generate correct, idiomatic code. Rather than translating requirements through layers of prose specifications, the diagram itself encodes stereotypes, interface contracts, project boundaries, data models, and protocol annotations in a form that is simultaneously human-readable and AI-actionable.

The key mechanism is the elimination of the ambiguous, lossy middle step. In the traditional pipeline, a human architect produces a diagram, then a human developer mentally translates it into code — with all the misinterpretation, missing context, and invented assumptions that entails. Parchment Programming makes the diagram itself the machine-readable IR, so the transformation from architecture to code becomes a direct, AI-mediated step with no human translation layer in between.

The PARCHMENT.md as a Continuous Transformation Surface

The PARCHMENT.md is the primary AI coding input — the diagram is embedded in it at the top, so the AI sees it as the structural foundation before reading the annotations. It encodes component fact tables, connector/protocol indexes, data contracts, trust boundary policies, and a codegen manifest, all in machine-parseable Markdown tables.

This structure directly addresses the DCT categories:

  • Category 1 (Abstract ↔ Formal Code): The diagram + PARCHMENT.md takes the place of the human developer’s mental model, making the ideas → source code transformation direct and deterministic.
  • Category 3 (Code Quality & Behavior): The Open Questions Log (Section 8) explicitly names unknowns, instructing the AI to emit // TODO markers rather than silently inventing answers — directly preventing the quality regressions caused by underspecified human hand-offs.
  • Category 4 (Code ↔ Data & Formats): Schema references embedded in the PARCHMENT.md (e.g., schemas/didcomm-envelope.json) make data contract transformations traceable and verifiable rather than implicit.
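To make the machine-parseable-tables idea concrete, a fragment of a PARCHMENT.md might look like the following sketch. The section numbers, component names, and column layout are hypothetical illustrations, not the actual PARCHMENT.md format; only the `IDidResolver` interface and the `schemas/didcomm-envelope.json` reference come from the text above.

```markdown
## Component Fact Table (hypothetical sketch)

| Component     | Stereotype  | Project      | Interfaces   | Trust Boundary |
| ------------- | ----------- | ------------ | ------------ | -------------- |
| DidResolver   | <<service>> | SVRN7.Core   | IDidResolver | internal       |
| MessageRouter | <<agent>>   | SVRN7.Agents | IRouter      | DMZ            |

## Data Contracts (hypothetical sketch)

| Contract         | Schema Reference              |
| ---------------- | ----------------------------- |
| DIDComm envelope | schemas/didcomm-envelope.json |
```

Because each row is a discrete, typed fact, an AI generator can treat the table as a lookup structure rather than parsing free-form prose.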

The Clean Separation of Concerns

The diagram handles spatial/structural truth; the companion PARCHMENT.md handles behavioral/contractual truth. This is a deliberate architectural choice that mirrors how compilers separate parse trees (structural) from semantic analysis (behavioral) — again reducing human interpretive variability at each stage.

Bottom Line

The DCT problem is essentially a problem of lossy intermediate representations wherever a human serves as the translation layer. Parchment Programming solves it by making the architecture diagram itself the lossless, AI-readable intermediate representation — replacing the human-as-translator with an AI-as-transformer operating on a richly structured artifact. The result is that the most expensive and error-prone DCT transition — ideas → source code — becomes a well-specified, reproducible, AI-mediated step rather than a creative act dependent on individual developer interpretation.


Filed under Uncategorized

LinkedIn: Whither Microsoft – An Outsider’s View

This article originally appeared here: https://www.linkedin.com/pulse/wither-microsoft-outsiders-view-feroze-motafram-lbyhe/

Feroze Motafram

Principal @ Avestan, LLC | Hands-On Operations Leadership for Mid-Market and PE-Backed Companies | Interim COO | Contrarian Thinker | Avestan LLC

April 2, 2026

I should begin with a confession. I am neither a software engineer nor a market strategist. My knowledge of contemporary technology could fit comfortably on a thumbnail… and I say that as someone whose formal training is in electrical engineering, which will tell you how far I have wandered from my origins. The primary instruments of my early career were set squares and slide rules, which will tell you something about both my vintage and my domain. I have spent the intervening decades as a senior executive at Fortune 100 companies and, more recently, as an operations and supply chain consultant. I build and fix things: factories, supply chains, organizations that have lost their way.

Microsoft’s footprint is ubiquitous in the Seattle metro, from its sprawling Redmond campus, to the dedicated counters at Seattle-Tacoma airport, to the oversized coaches that ferry employees to and from work at no charge. It is, in every visible sense, a company that has built its own ecosystem within an ecosystem. Many of my neighbors are part of it…or were, until recently.

Which raises a fair question: what business does someone like me have offering a view on one of the world’s most sophisticated technology companies?

Possibly none. Or possibly this: thirty years of watching organizations succeed and fail has taught me that the early warning signals of institutional dysfunction are rarely technical. They are cultural, behavioral, and organizational… and they are often most visible to the outsider who has no stake in explaining them away.

That is the lens I am bringing. Take it for what it is worth.

What I am about to say is not a prediction of Microsoft’s future. It is a pattern recognition exercise. And the pattern, at minimum, gives me pause.

The Stock Is Telling You Something

Microsoft is down roughly 25% in Q1 2026, representing its worst quarterly performance since the depths of the 2008 financial crisis. This in a company that has delivered solid double-digit returns for three consecutive years. The earnings, objectively, remain strong: revenue up 17% year-over-year, operating margins north of 47%, cloud revenue exceeding $50 billion for the first time in a quarter.

And yet.

The market is not stupid, even when it overreacts. When a company of Microsoft’s scale and pedigree underperforms its peer group by double digits in a sector already under pressure, the question worth asking is not “is this a buying opportunity?” The question is: what does the market understand about this organization that the headlines don’t capture?

I have a few hypotheses.

The Monopoly Dividend, and Its Hidden Cost

For the better part of three decades, Microsoft enjoyed something that very few companies in history have: a captive market. Enterprise customers did not use Office because they loved it. They used it because leaving was more painful than staying. That distinction – loyalty versus lock-in – matters enormously, and it is a distinction that organizations rarely make honestly about themselves.

When your customers cannot leave, the feedback loops that drive genuine innovation go silent. The tendency is to stop asking “what does the customer need?” and start asking “what can we get away with?” Processes multiply. Committees proliferate. Bureaucracy thrives. The organization optimizes for defending territory rather than creating it. The product becomes good enough rather than great, because great requires risk, and risk has no internal champion when the revenue arrives regardless.

This is not a character failing. It occurs insidiously and unconsciously. It is an entirely rational organizational response to a monopolistic competitive environment. But it leaves a mark. And that mark does not disappear simply because the competitive environment changes.

Satya Nadella Earned His Standing Ovation. The Work Isn’t Finished.

The Azure pivot was a genuine strategic achievement, and Nadella’s cultural reset from “know-it-all” to “learn-it-all,” as he framed it, was real and necessary. The stack-ranking era that preceded him did generational damage to Microsoft’s ability to collaborate, retain talent, and take meaningful risks. He arrested that decline and deserves full credit for it. But here one must tread carefully. Stack ranking was formally abolished following Ballmer’s departure. The announcement was celebrated, the headlines were generous. What is rather more interesting is what one hears in conversations since. Ask Microsoft employees about the performance review system that replaced it, and the response is rarely enthusiastic. The words change, the architecture shifts, but the cynicism among those living inside it remains remarkably familiar. Whether the underlying mechanics genuinely changed, or whether the organization simply learned to dress the same instincts in more palatable language, is a question I cannot answer from the outside. What I can observe is that the people doing the work don’t appear to believe the answer is reassuring.

Moreover, cultural transformation in a 220,000-person organization moves at a glacial pace. You can change the language in a decade. Changing the instincts takes considerably longer. One has to wonder how many of the engineers and managers who learned to survive the Ballmer years by navigating politics rather than building products have since moved on…and how many remain, in leadership positions, still oriented by instinct toward self-protection over bold action. I cannot know that from the outside.

What I can observe is the output. Copilot – Microsoft’s most strategically critical product, promoted with the full weight of its marketing apparatus and sales force – has converted just 15 million paid subscribers from a captive base of 450 million Microsoft 365 users. That is 3.3%. I can offer a data point of one. I experimented with Copilot briefly, and it simply did not resonate. The alternatives were plentiful: I tried Gemini, ChatGPT, and Grok before eventually settling on Claude as the tool that genuinely fit the way I work. I am, by my own admission, hardly a sophisticated evaluator of these products. But that is rather the point. If a casual, non-technical user with no particular loyalty to any platform does not find his way back to Microsoft’s offering, one wonders what the experience is among enterprise customers with far more options and far higher expectations. When your own customers will not buy what you are selling at scale, it is worth asking whether the product is genuinely solving a problem, or whether it is simply a feature in search of a use case.

When the Organization Becomes the Obsession

There is a more intimate signal I would offer, drawn from lived experience rather than earnings reports. Spend enough time in social settings in this part of the Seattle corridor, and a pattern emerges: conversations with Microsoft employees have a pronounced gravitational pull toward the internal. Org charts. Reorgs. Internal processes. Who reports to whom now, and what that signals. Which team is ascendant, which is being quietly dismantled. I observed a version of this dynamic when I lived in Brookfield, Wisconsin, in the orbit of GE Healthcare’s then-headquarters. Large, complex organizations tend to generate internal politics that eventually colonize the social lives of their people. But what I observe here is of a different magnitude entirely. When internal politics becomes the primary currency of social conversation, it is usually a sign that navigating the organization has become more consuming than building anything within it. That is not a criticism of the individuals, rather it is a diagnosis of the system they are operating inside.

The OpenAI Dependency: A $281 Billion Question

Here is the number I find most remarkable in Microsoft’s recent disclosures: $281 billion. That is the portion of Microsoft’s $625 billion revenue backlog tied to contracts with a single counterparty – OpenAI.

Nearly half of Microsoft’s entire forward revenue commitment rests on the continued performance of an unprofitable startup navigating one of the most intensely competitive landscapes in the history of technology. And now, in what must rank among the more consequential strategic pivots of the past year, OpenAI has signed a landmark agreement with Amazon to host its enterprise platform on AWS! This is a move that directly challenges the Azure exclusivity Microsoft had long treated as a cornerstone of its AI strategy. For the uninitiated, this is roughly akin to UPS outsourcing its overnight delivery business to FedEx!

I have spent enough time in post-merger integrations and strategic partnerships to recognize the warning signs when a relationship’s terms of engagement shift this materially. The question is no longer whether the Microsoft-OpenAI partnership is evolving, because it clearly is. The question is whether Microsoft’s own AI capabilities can mature fast enough to reduce that dependency before the market loses patience entirely.

The reported reorganization of Copilot leadership and the broader restructuring of AI teams are not the confident moves of an organization executing a clear strategy. They read as the adaptive responses of one working to keep pace with events rather than ahead of them.

But the more consequential signal may be MAI-1, Microsoft’s internally developed AI model, built from the ground up as a hedge against its OpenAI dependency. Consider what that actually means: a company that has already committed eye-popping capital to an external AI partnership is now layering an enormously expensive and operationally complex internal model-building effort on top of that bet. A hedge on top of a bet, each of which is expensive, each of which carries execution risk, and neither of which has yet demonstrated the commercial returns that would justify the other. In portfolio management terms, this is not diversification. It is leveraged exposure dressed as prudence.

The Human Capital Story No One Is Writing

There is a dimension to this that the financial press has largely missed, and I raise it because I see it in my community every day.

A significant proportion of Microsoft’s engineering talent – and the engineering talent of the broader Seattle tech corridor – is comprised of H-1B visa holders. These are, by any measure, exceptional professionals: highly educated, deeply skilled, often carrying decade-long career investments in the United States. They have built lives here. Many have children born here. They have been, in many cases, the intellectual engine of the products Microsoft is depending on to compete in the AI era.

That population is operating under a level of personal anxiety right now that is, in my observation, without modern precedent. Travel advisories from their own employers. A $100,000 petition fee for new visa applications. Proposed rule changes touching birthright citizenship. A policy environment that sends a clear and unambiguous message: your presence here is conditional, negotiable, and subject to revision without notice.

The behavioral consequence of that anxiety is not visible in a quarterly earnings report. But it is real, and it is consequential. People operating under existential personal uncertainty do not take professional risks. They do not champion the bold new initiative. They do not volunteer for the high-visibility project that could fail. They execute reliably on what already exists and protect their position. In an organization that already has a cultural predisposition toward risk aversion, this compounds the pathology in ways that will show up…perhaps not this quarter, but in the product decisions made over the next eighteen months.

The Case for Optimism – And Why It Requires More Than Patience

None of this is to suggest Microsoft is broken beyond repair, and I want to be careful not to even hint at that. I am, after all, the person who opened this piece confessing that my knowledge of contemporary technology fits on a thumbnail. Betting against Microsoft has historically been an enterprise for the foolhardy. The balance sheet remains fortress-like. The enterprise relationships are genuinely extraordinary – ripping out Azure, Teams, and the M365 stack is not a decision any CIO makes lightly, regardless of Copilot’s penetration rate. The installed base moat is real, and should not be underestimated by anyone, least of all an operations consultant from the suburbs.

What I would offer, more modestly, is this: the bull case requires more than a great balance sheet, sticky product and deep customer relationships. It requires an organization capable of genuine innovation at speed, which in turn, requires a culture that rewards risk, retains its most creative talent, and executes with urgency. Whether Microsoft can summon those qualities at this particular moment is a question I cannot answer with conviction.

What I can say is that the market (which is considerably more qualified than I am) appears to be asking the same question. At 20 times forward earnings, the lowest multiple in a decade and briefly below the S&P 500’s for the first time since 2015, the market is not yet betting with conviction that the answer is yes.

Perhaps it should be. I honestly don’t know. What I do know is that the signals visible from outside the building – from the neighborhood, from social get-togethers, from the casual conversations – are worth paying attention to. They usually are.

Feroze Motafram is founder and principal of Avestan LLC, an operations-focused consultancy providing hands-on executive leadership to mid-market and PE-backed companies across supply chain, manufacturing, and operational excellence. With 30+ years of global experience, he partners with CEOs, operating partners, and investors to build resilient operations that drive enterprise value.

www.avestan-llc.com

#Microsoft  #TechStrategy  #Leadership  #AI  #OrganizationalCulture  #OperationalExcellence  #Seattle



Filed under Uncategorized

Digital Agents: What are possible Post-nominal Letters (PNL) strategies for identifying different kinds or roles for digital agents?

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Post-nominal letters (PNL) appear after a person’s name to indicate qualifications, certifications, honors, or memberships—for example:

  • John Smith, PhD
  • Jane Doe, CPA
  • Alex Lee, P.Eng.

More formally, they fall under the broader concept of titles and credentials, but the specific term for the letters themselves is post-nominal letters.

For completeness:

  • Pre-nominal titles go before the name (e.g., Dr., Prof., Hon.)
  • Post-nominal letters go after the name (e.g., degrees, certifications, orders)

Here’s a structured, “post-nominal strategy space” for digital agents, with clear semantics rather than just decorative suffixes.

NOTE: Digital agent PNLs can be made machine-readable by representing them as DIDs from the did:pnl DID method.


1) What post-nominals mean (translated to agents)

For humans, post-nominals encode:

  • Qualification → what you know
  • License/authority → what you’re allowed to do
  • Role → what you currently are doing
  • Affiliation → who you act for
  • Reputation → how trusted/proven you are

For digital agents, you want the same—but machine-readable and composable.


2) Core strategy: modular, layered suffixes

Instead of one long suffix, think in stacked tokens, e.g.:

AgentName.AI, LLM-ARCH, FIN-EXEC, GOV-VERIFIED, REP-3

Each segment conveys a different dimension.


3) Suggested taxonomy (orthogonal dimensions)

A. Capability Class (what it can do)

High-level “degree-like” classification.

Examples:

  • LLM — language model agent
  • PLN — planner
  • AUT — autonomous executor
  • SIM — simulation agent
  • ORC — orchestrator

Strategy:

  • Keep this coarse-grained and stable
  • Think “Bachelor’s/Master’s level abstraction”

B. Domain Specialization (what it knows)

Like majors or certifications.

Examples:

  • FIN — finance
  • MED — medical
  • LEG — legal
  • DEV — software engineering
  • OPS — operations

Optional depth:

  • FIN-RISK, DEV-BLOCKCHAIN, MED-DX

C. Authority / Permission Level (what it is allowed to do)

This is critical for agents.

Examples:

  • ADV — advisory only
  • SIM — simulation-only (no real-world effects)
  • ACT — can act (limited)
  • EXEC — full execution authority

More granular:

  • PAY-EXEC (can move money)
  • SYS-ADMIN (system-level authority)

D. Trust / Verification Level (who vouches for it)

Analogous to licenses or chartered status.

Examples:

  • SELF — self-asserted
  • ORG- — organization-backed
  • 3P-VER — third-party verified
  • GOV-VERIFIED — government-level assurance

You can align this with DID/VC ecosystems:

  • VC-L2, VC-L3 (credential assurance levels)

E. Operational Role (what it is doing right now)

Dynamic, context-dependent.

Examples:

  • BROKER — intermediary
  • AGENT — acting on behalf
  • AUDITOR — oversight
  • GUARD — safety enforcement
  • NEGOTIATOR

This is closest to job title.


F. Affiliation / Principal (who it represents)

Critical in multi-agent systems.

Examples:

  • @SVRN7
  • @USER
  • @ORG-ACME
  • @DAO-123

G. Reputation / Performance Tier

Like honors or fellowships.

Examples:

  • REP-1 → REP-5
  • TRUST-HIGH
  • SLA-99.9

Could be computed:

  • uptime
  • accuracy
  • dispute history

4) Example compositions

Minimal

AgentX, LLM, DEV, ADV

→ Developer-focused advisory agent


Execution-capable financial agent

AgentY, AUT, FIN, PAY-EXEC, 3P-VER, REP-4

Governance auditor agent

AgentZ, ORC, GOV, AUDITOR, GOV-VERIFIED, REP-5

Personal assistant acting for a user

AssistBot, LLM, GEN, ACT, @USER, REP-2

5) Design principles (important)

1. Orthogonality

Each suffix should answer a different question:

  • What is it?
  • What does it know?
  • What can it do?
  • Who trusts it?
  • Who does it serve?

Avoid mixing these.


2. Machine-readability > human readability

Use:

  • consistent separators (-, .)
  • short controlled vocabularies

This enables:

  • filtering
  • policy enforcement
  • automatic routing

3. Verifiability

Some suffixes should be:

  • cryptographically provable (via credentials)
  • not just self-declared

4. Progressive disclosure

You don’t always show everything:

  • UI: simplified (“Finance Executor, Verified”)
  • System: full suffix chain

5. Avoid overfitting

Don’t create 200 micro-suffixes.

Instead:

  • small core vocabulary
  • extensible registry (modeled on the SLIP-0044 registry idea)

6) A compact canonical format (proposal)

You could standardize something like:

<Name>, <Class>, <Domain>, <Authority>, <Verification>, <Role>, <Affiliation>, <Reputation>

Example:

Orion, AUT, FIN-RISK, EXEC, VC-L3, BROKER, @SVRN7, REP-4
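A minimal parser for this canonical format can be sketched as follows. This is illustrative only: the field names follow the proposal above, and a production implementation would validate each segment against its controlled vocabulary.

```python
# Sketch: parse the proposed canonical PNL format into named fields.
# Field order follows the proposal:
# Name, Class, Domain, Authority, Verification, Role, Affiliation, Reputation.

FIELDS = ["name", "cls", "domain", "authority",
          "verification", "role", "affiliation", "reputation"]

def parse_pnl(suffix_chain: str) -> dict:
    """Split a comma-separated PNL suffix chain into its eight dimensions."""
    tokens = [t.strip() for t in suffix_chain.split(",")]
    if len(tokens) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} segments, got {len(tokens)}")
    return dict(zip(FIELDS, tokens))

agent = parse_pnl("Orion, AUT, FIN-RISK, EXEC, VC-L3, BROKER, @SVRN7, REP-4")
print(agent["authority"], agent["verification"])  # EXEC VC-L3
```

Because every segment answers a different (orthogonal) question, the parsed dictionary maps directly onto routing and policy decisions.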

7) Where this gets powerful

This enables:

  • agent routing (find “EXEC + FIN + VERIFIED”)
  • policy enforcement (block PAY-EXEC unless VC-L3+)
  • trust negotiation between agents
  • UI clarity for users (“this agent can actually act vs just advise”)
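For example, the PAY-EXEC policy above might be sketched like this. The numeric ordering of assurance levels (SELF < VC-L2 < VC-L3) is an assumption made for illustration, not a defined part of the taxonomy:

```python
# Sketch: enforce "block PAY-EXEC unless VC-L3+" over a set of PNL tokens.
# The assurance-level ordering below is an illustrative assumption.

ASSURANCE = {"SELF": 0, "VC-L2": 2, "VC-L3": 3}

def may_move_money(pnl_tokens: set) -> bool:
    if "PAY-EXEC" not in pnl_tokens:
        return False                      # no payment authority claimed
    level = max((ASSURANCE.get(t, 0) for t in pnl_tokens), default=0)
    return level >= ASSURANCE["VC-L3"]    # require VC-L3 or higher

print(may_move_money({"AUT", "FIN", "PAY-EXEC", "VC-L3"}))  # True
print(may_move_money({"AUT", "FIN", "PAY-EXEC", "VC-L2"}))  # False
```

The same pattern generalizes to routing (filter agents whose token sets satisfy a query) and trust negotiation (each side checks the other's verifiable suffixes before acting).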


Filed under Uncategorized

Parchment Programming: What are the implications of the PPML for software development?

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Parchment Programming is an architecture-first software development methodology where a richly annotated visual diagram — the “parchment” — serves as the primary design document and intermediate representation (IR) that an AI coding assistant (like Claude) reads directly to generate correct, idiomatic code.

Rather than translating requirements through layers of prose specifications, the diagram itself encodes stereotypes, interface contracts, project boundaries, data models, and protocol annotations in a form that is simultaneously human-readable and AI-actionable — invented by Michael Herman, Chief Digital Officer, Web 7.0 Foundation. April 2026.

“Change is hard at first, messy in the middle, and gorgeous at the end.” Robin Sharma


The core claim

PPML asserts that a formal diagram is a sufficient specification for code generation — that if a diagram is conformant (every element has a unique label, belongs to exactly one Legend-defined type, and has a derivation rule), then an AI or human can produce the correct implementation from the diagram alone, without additional prose specification.

This is a stronger claim than “diagrams are useful.” It is a claim about sufficiency.


Implication 1: The specification artefact changes

In conventional software development, the specification is prose — a requirements document, a design document, an architecture decision record. The diagram is illustrative, supplementary, and frequently stale.

In PPML, the diagram is the specification. The prose documents (TDA Design, Whitepaper, IETF drafts) are derived from the diagram — they explain and justify it, but they do not override it. If the diagram and the prose conflict, the diagram wins.

This inverts the usual relationship. The implication is that diagram maintenance becomes the primary engineering discipline, not prose authoring. A diagram change is a specification change. A code change that has no corresponding diagram change violates traceability — it is, by definition, undocumented behaviour.


Implication 2: AI code generation becomes deterministic at the architecture level

The Gap Register and derivation rules give an AI generator a closed-world assumption: every artefact it produces must be traceable to a diagram element instance, and every diagram element instance must produce at least one artefact. There are no open-ended requests like “build me a messaging system.” There are only grounded requests like:

“Derive the artefact for element instance ‘DIDComm Message Switchboard’ of type Switchboard. Derivation rule: one router class, one protocol registry, one outbound queue.”

The AI cannot invent artefact names that do not appear in the diagram. It cannot silently add dependencies. It cannot reorganise the architecture. This is not a limitation — it is the point. Creativity is in the diagram; precision is in the derivation.

The practical implication is that AI code generation quality is bounded below by the quality of the diagram, not by the quality of the prompt. A well-formed PPML diagram produces consistent, reproducible results across AI sessions and across AI models. A poorly-formed diagram produces inconsistent results regardless of prompt quality.
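
The closed-world check described above reduces to two set comparisons. This is a hedged sketch; the element and artefact names are hypothetical stand-ins for a real Gap Register:

```python
# Closed-world traceability sketch: every artefact must trace to a diagram
# element instance, and every element instance must produce >= 1 artefact.

diagram_elements = {"DIDComm Message Switchboard", "RunspacePool", "Fast Cache"}

# artefact file -> the diagram element it claims to derive from
artefacts = {
    "DIDCommSwitchboard.cs": "DIDComm Message Switchboard",
    "RunspacePoolManager.cs": "RunspacePool",
    "MysteryHelper.cs": "Undocumented Helper",  # no diagram element: violation
}

# Artefacts whose claimed element is not in the diagram
untraceable = {f for f, e in artefacts.items() if e not in diagram_elements}

# Diagram elements with no derived artefact
underived = diagram_elements - set(artefacts.values())

print(sorted(untraceable))  # ['MysteryHelper.cs']
print(sorted(underived))    # ['Fast Cache']
```

Any non-empty result in either set is a Gap Register entry, not a matter of discretion.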


Implication 3: The change process becomes explicit

Conventional development has no formal mechanism for distinguishing “we changed the architecture” from “we changed an implementation detail.” Both look like pull requests.

PPML enforces a distinction. Within an epoch, the Legend is frozen and element types cannot change. A new component requires a diagram change, which requires a version increment (DSA 0.19 → DSA 0.24), which requires a Gap Register update. Architectural changes are visible as diagram changes.

Implementation changes — refactoring within a derived artefact, performance tuning, bug fixes — do not require diagram changes. The boundary between architecture and implementation is drawn precisely at the diagram boundary.

This has governance implications for a project like SVRN7: the diagram is the governance document. Epoch transitions are diagram changes. New protocol support is a LOBE addition to the diagram. The Foundation controls the diagram; contributors derive from it.


Implication 4: Testing becomes traceable to the diagram

Every test should be traceable to a diagram element instance, just as every artefact is. If a test has no corresponding diagram element, it is either testing an undocumented artefact (a traceability violation) or testing implementation detail that should not be exposed.

In practice this means the Gap Register can include test coverage as a property. “Element instance X has derivation artefact Y, test coverage Z.” Missing test coverage is a Gap Register entry, not a matter of developer discretion.


Implication 5: Documentation staleness becomes structurally impossible

In conventional projects, diagrams go stale because they are maintained separately from code. PPML makes diagram staleness a first-class defect: if the diagram is stale, the Gap Register is wrong, and any AI-generated code derived from it will be wrong.

The practical discipline is: diagram first, always. Before writing any new C# class, PowerShell module, or LOBE descriptor, the diagram must already contain the corresponding element instance. This is why every source file in the SVRN7 solution carries a derivation trace comment:

// Derived from: "DIDComm Message Switchboard" — DSA 0.24 Epoch 0 (PPML).

That comment is not decorative — it is the traceability link. If that element instance no longer appears in the diagram, the file is either stale or the diagram is stale. One of them must change.
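
A sketch of how such derivation trace comments could be checked mechanically. The comment format follows the example above; the one-element diagram list and the `check_trace` helper are assumptions for illustration:

```python
import re

# Extract the element instance named in a derivation trace comment and
# flag files whose element no longer appears in the diagram.
TRACE = re.compile(r'//\s*Derived from:\s*"([^"]+)"')

diagram_elements = {"DIDComm Message Switchboard"}

def check_trace(source_text: str, filename: str) -> str:
    m = TRACE.search(source_text)
    if not m:
        return f"{filename}: missing derivation trace"
    element = m.group(1)
    if element not in diagram_elements:
        return f"{filename}: stale (element '{element}' not in diagram)"
    return f"{filename}: ok"

src = '// Derived from: "DIDComm Message Switchboard" — DSA 0.24 Epoch 0 (PPML).'
print(check_trace(src, "DIDCommSwitchboard.cs"))  # DIDCommSwitchboard.cs: ok
```

Run over a whole source tree, this turns staleness from a review-time judgment into a reportable defect list.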


Implication 6: The methodology scales with AI capability

This is the forward-looking implication. In the current epoch, an AI (Claude, in this case) assists with derivation — producing C# from a diagram element description, writing PowerShell cmdlets from a LOBE derivation rule, generating IETF draft sections from an architectural decision. The human holds the diagram and reviews the derivations.

As AI capability increases, the human’s role shifts further toward diagram authorship and review. The diagram becomes the interface between human architectural intent and AI implementation. The better the diagram grammar (the PPML Legend), the more precisely an AI can translate intent into code.

The LOBE descriptor format — with its MCP-aligned inputSchema/outputSchema, compositionHints, and useCases — is an early instance of this. It is a machine-readable diagram-derived artefact that an AI can use to reason about composability without reading the PowerShell source. The diagram element (LOBE) produces both the code artefact (.psm1) and the AI legibility artefact (.lobe.json). Both are derived from the same diagram element. The AI consuming the .lobe.json is one step removed from reading the diagram directly.

The next step — which PPML explicitly anticipates but does not yet implement — is an AI that reads the diagram directly and performs the full derivation without a human intermediary for routine changes.


The honest limitation

PPML is most effective for component-level architecture — what components exist, how they relate, what they are responsible for. It is less effective for algorithmic detail. The 8-step transfer validator, the Merkle log construction, the DIDComm pack/unpack sequence — these require prose specification or pseudocode. The diagram says “TransferValidator exists and implements ITransferValidator.” It does not say how step 4 (nonce replay detection) works.

This is not a flaw in PPML — it is a boundary condition. PPML governs architecture. Algorithms require their own specification discipline (IETF drafts, pseudocode, formal methods). The two disciplines are complementary: PPML tells you what to build and how it connects; the algorithm specification tells you how each component behaves internally.


Summary

PPML’s implications reduce to one structural claim: the diagram is the primary engineering artefact, and all other artefacts are derived from it. The implications — specification inversion, deterministic AI generation, explicit change governance, traceable testing, structural documentation freshness, and scalability with AI capability — all follow from that single claim. Whether that claim is valuable depends entirely on whether the diagram can be kept accurate and complete, which is a discipline question, not a tool question.


Parchment Programming: What is the best visual language to use for Parchment Programming? What I’m using now? …ArchiMate? …UML? …something else?


What follows is a structured comparison grounded in what Parchment Programming actually requires from a visual language.


What Parchment Programming Demands from a Visual Language

A PP visual language must do five things simultaneously:

  1. Encode stereotypes that map to C# constructs («HostedService», «Repository», «Middleware»)
  2. Express layered bounded contexts (project/namespace boundaries)
  3. Annotate arrows with interface contracts and protocols
  4. Be readable by Claude without a dedicated parser
  5. Be authorable by a human architect without excessive tool friction

The Candidates

What You’re Using Now — Custom Annotated Box Diagrams

Verdict: Best starting point, needs formalization

Your DSA 0.16 diagram is already doing most things right. The color-coded regions, nested containment, labeled arrows, and protocol annotations are all PP-native. The gap is the absence of a formal stereotype vocabulary — Claude has to infer too much. A thin layer of formalization on top of your current style would make it the strongest option.

  • ✅ Human-readable and visually expressive
  • ✅ Claude can read it directly from an image
  • ✅ Nested containment naturally maps to project boundaries
  • ✅ No tool lock-in
  • ❌ No enforced stereotype vocabulary (yet)
  • ❌ Not machine-parseable without a defined grammar

ArchiMate

Verdict: Strong for enterprise/governance layers, wrong grain for C# code generation

ArchiMate excels at the motivation, strategy, and technology layers — it’s designed to show why a system exists and how it relates to business capabilities. Its stereotype vocabulary («ApplicationComponent», «ApplicationService», «DataObject») is too coarse and business-oriented to drive C# interface/class generation directly.

  • ✅ Formal, standardized stereotype vocabulary
  • ✅ Excellent layering (Business / Application / Technology)
  • ✅ You already know Archi 5.8.0
  • ❌ No concept of IHostedService, «Middleware», DI registration
  • ❌ Stereotypes don’t map cleanly to .NET constructs
  • ❌ Claude reads ArchiMate OEF XML, not the visual — loses the PP directness
  • ❌ Too ceremonial for rapid iteration

UML (Component + Class Diagrams)

Verdict: Closest formal precedent, but too verbose for PP

UML Component Diagrams handle bounded contexts and interfaces well. Class diagrams handle stereotypes («interface», «abstract», «service») and are the closest existing formal language to what PP needs. The problem is UML requires two diagram types to do what PP does in one, and the tooling (Enterprise Architect, Lucidchart) adds friction.

  • ✅ «stereotype» notation is native and Claude knows it well
  • ✅ Interface contracts expressible formally
  • ✅ Claude has deep UML training — reads it very accurately
  • ❌ Component + Class diagrams needed together — no single PP view
  • ❌ No built-in protocol/messaging annotation
  • ❌ Verbose; loses the architecture-at-a-glance quality
  • ❌ Doesn’t naturally express runtime topology (runspace pools, agent meshes)

C4 Model (Structurizr / diagrams.net)

Verdict: Best structured alternative, but prose-first not diagram-first

C4 has the right levels (Context → Container → Component → Code) and maps well to .NET projects at the Container level. But it’s DSL-authored (Structurizr) or prose-prompted — the diagram is generated, not hand-crafted. That inverts the PP authoring model.

  • ✅ Container level maps naturally to .csproj boundaries
  • ✅ Component level maps to interfaces/classes
  • ✅ Claude knows C4 well
  • ❌ Diagram-as-code means the spec is the DSL text, not the visual
  • ❌ No stereotype vocabulary for .NET-specific constructs
  • ❌ Loses PP’s architecture-first visual authoring

The Recommendation

Define PP-Native Notation as a thin formal layer on top of your current style

Rather than adopting an existing language wholesale, Parchment Programming should define its own visual grammar — lightweight enough to author in any diagramming tool (PowerPoint, draw.io, Visio), rich enough to be unambiguously Claude-actionable. Concretely:

Borrow from UML: the «stereotype» notation — Claude reads it natively and it maps directly to C# constructs:

«HostedService» → : IHostedService, registered in DI
«Middleware» → app.Use...() in Program.cs
«Repository» → IRepository<T> pattern
«Router» → internal dispatch, no HTTP
«Gateway» → external system boundary
«PSModule» → PowerShell Loadable Brain Extension
«Factory» → Factory pattern, DI-registered

Borrow from ArchiMate: the nested containment model for bounded contexts — color regions = project boundaries.

Keep your current style: box shapes, color coding, directional arrows with protocol labels.

Add one new convention: every arrow carries [InterfaceName] in small text alongside the protocol label.
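
The resulting PP-native conventions can be captured as plain data. This sketch pairs a small controlled stereotype vocabulary (the mapping strings are descriptive, taken from the list above) with a renderer for the proposed arrow convention; the `annotate_arrow` helper is a hypothetical name:

```python
# Controlled stereotype vocabulary, mirroring the UML-style mappings above.
# Values are descriptive target constructs, not generated code.
STEREOTYPES = {
    "HostedService": "implements IHostedService, registered in DI",
    "Middleware": "app.Use...() in Program.cs",
    "Repository": "IRepository<T> pattern",
    "Router": "internal dispatch, no HTTP",
    "Gateway": "external system boundary",
    "PSModule": "PowerShell Loadable Brain Extension",
    "Factory": "Factory pattern, DI-registered",
}

def annotate_arrow(src: str, dst: str, protocol: str, interface: str) -> str:
    """Render an arrow label carrying both the protocol and [InterfaceName]."""
    return f"{src} -> {dst} : {protocol} [{interface}]"

print(annotate_arrow("Switchboard", "Agent Runspace", "Internal", "IAgentRunspace"))
# Switchboard -> Agent Runspace : Internal [IAgentRunspace]
```

A diagram linter could then reject any box whose stereotype is not in the vocabulary, which is exactly the "thin formal layer" being proposed.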


Summary Table

| Language | Stereotype Support | .NET Mapping | Claude Readability | Authoring Ease | PP Fit |
|----------|--------------------|--------------|--------------------|----------------|--------|
| PP-Native (your style + UML stereotypes) | ✅ Defined by you | ✅ Direct | ✅ Excellent | ✅ High | ⭐ Best |
| UML Component+Class | ✅ Formal | ✅ Good | ✅ Excellent | ⚠️ Verbose | 2nd |
| C4 Model | ⚠️ Coarse | ✅ Good | ✅ Good | ⚠️ DSL-first | 3rd |
| ArchiMate | ✅ Formal | ❌ Too coarse | ✅ Good | ⚠️ Ceremonial | 4th |
| Mermaid/D2 | ❌ None | ❌ None | ✅ Excellent | ✅ High | Last |

Bottom line: Your current visual style is the right foundation — it just needs the «stereotype» convention from UML and the [InterfaceName] arrow annotation added as formal PP conventions. That makes it superior to any existing language for the specific job of AI-driven C#/.NET code generation.


Parchment Programming Methodology 0.1



How Claude Processes Parchment Inputs

Claude receives a conversation context containing images + text. The key facts:

  • Claude can see a diagram image and reason about it
  • Claude can read structured Markdown/text with full fidelity
  • Claude cannot cross-reference between an image region and a text table by coordinate — it reasons about both holistically
  • Therefore: the diagram handles spatial/structural truth; the companion document handles behavioral/contractual truth

This is actually a clean separation of concerns.


The Recommended Hybrid Architecture

ParchmentSpec_DSA_0.16_Epoch0/
├── diagram.png      ← the visual (spatial truth)
├── PARCHMENT.md     ← master spec (behavioral truth)
└── schemas/
    ├── didcomm-envelope.json
    ├── did-doc.json
    └── vc-doc.json

The PARCHMENT.md is the primary AI coding input. The diagram is embedded in it — not appended, not separate — embedded at the top, so Claude sees it as the structural foundation before reading the annotations.


PARCHMENT.md Optimal Structure for Claude

# Web 7.0 DSA 0.16 Epoch 0 — Parchment Spec
## 1. Architecture Diagram
![DSA 0.16 Epoch 0](./diagram.png)
## 2. System Identity
- Spec DID: did:drn:...
- Epoch: 0 | Version: 0.16
- Target runtime: .NET 8 / C# 12
- Codegen mode: AI-scaffolded + human-completed
## 3. Component Fact Table
| Component | Artifact | Lifecycle | Multiplicity | Thread-safe |
|-----------|----------|-----------|--------------|-------------|
| RunspacePool | RunspacePoolManager.cs | Singleton | 1 | Yes |
| Agent Runspace | AgentRunspace.cs | Pooled | 1..N | Isolated |
| LOBE | LobeLoader.cs | On-demand | 0..N | No |
| DIDComm Switchboard | DIDCommSwitchboard.cs | Singleton | 1 | Yes |
...
## 4. Connector / Protocol Index
| From | To | Protocol | Payload Type | Schema Ref |
|------|----|----------|--------------|------------|
| Internet | DIDComm Listener | DIDComm V2 | DIDCommEnvelope | schemas/didcomm-envelope.json |
| Switchboard | Agent Runspace | Internal | AgentMessage | (inline below) |
...
## 5. Data Contracts
### Fast Cache (LiteDB)
- Key: `msg:{did}:{timestamp}`
- TTL: 300s
- Fields: SenderId, Payload, ReceivedAt
### DID Doc Registry (LiteDB)
- Key: `did:drn:{resourceId}`
- Fields: (per did:drn spec, draft-herman-did-drn-00)
...
## 6. Trust Boundary Policies
- **Green zone (TDA-internal):** No encryption required; integrity only
- **Purple zone (DIDComm-authenticated):** All messages MUST be SignThenEncrypt
- **Yellow zone (Internet):** Untrusted until Unpack validates DIDComm envelope
## 7. AI Codegen Manifest
| Component | Generation Mode | Acceptance Criteria |
|-----------|----------------|---------------------|
| RunspacePoolManager | AI-generated | Min/max pool size enforced |
| DIDCommSwitchboard | AI-scaffolded | Routes all registered types |
| Svrn7XferService | Hand-authored | SRC ledger integrity |
## 8. Open Questions (Do Not Invent)
- LOBE hot-reload: supported or restart-required?
- Fast Cache eviction policy when LiteDB full?
- CIPHER algorithm selection: Ed25519 only or negotiated?
## 9. Excluded from AI Generation
- SVRN7 SRC transaction signing
- NEO4J CIPHER graph schema
- VTC7 federation handshake
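
Because sections 3–7 are Markdown tables, a codegen step can consume them directly. A minimal sketch of parsing the Component Fact Table into records (rows abbreviated from the template above):

```python
# Parse a Markdown table into a list of row dictionaries keyed by header.
table = """\
| Component | Artifact | Lifecycle | Multiplicity | Thread-safe |
|-----------|----------|-----------|--------------|-------------|
| RunspacePool | RunspacePoolManager.cs | Singleton | 1 | Yes |
| DIDComm Switchboard | DIDCommSwitchboard.cs | Singleton | 1 | Yes |
"""

def parse_md_table(text):
    lines = [l for l in text.strip().splitlines() if l.startswith("|")]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

for row in parse_md_table(table):
    print(row["Component"], "->", row["Artifact"])
```

This is why tables beat prose for these sections: the same text Claude reads holistically is trivially machine-checkable for completeness.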

Why This Works Best With Claude

The critical insight is section 8 — the Open Questions Log. Claude’s failure mode when coding from underspecified inputs is silent invention. Explicitly naming unknowns instructs Claude to emit // TODO: [OPEN QUESTION - LOBE hot-reload policy] rather than inventing an answer.

Three additional Claude-specific optimizations:

1. Use the diagram as a spatial index, not a spec. In your prompt to Claude, say: “The diagram shows containment and flow. The PARCHMENT.md sections 3–9 are authoritative for all behavioral details. Where they conflict, the Markdown wins.” This prevents Claude from over-reading visual ambiguities.

2. Keep sections 3–7 machine-parseable. Claude handles Markdown tables extremely well as structured input. Avoid prose paragraphs for anything that should drive code generation — tables and bullet lists are more reliably parsed into code structure.

3. Submit diagram + PARCHMENT.md together in every coding prompt. Claude has no persistent memory of previous sessions, so the full parchment must be present each time. The compact table format makes this token-efficient.


The Better Approach You Asked About

There is one additional technique worth considering: annotated diagram variants. Rather than modifying the master diagram, you maintain lightweight overlay versions for specific concerns:

diagram.png ← master (as-is, no changes)
diagram-trust-zones.png ← same diagram + colored zone overlays
diagram-dataflow.png ← same diagram + numbered flow sequence

These are fast to produce (PowerPoint/draw.io layer toggles) and can be embedded in targeted sections of PARCHMENT.md. Claude sees each as a focused lens on the same architecture without the master diagram becoming cluttered.


Bottom Line

| Approach | Verdict |
|----------|---------|
| Annotate the diagram itself | Avoid — clutters the visual, hard to maintain |
| Companion Markdown only (no diagram) | Suboptimal — loses spatial/structural truth |
| Diagram embedded in PARCHMENT.md | ✅ Recommended |
| Diagram + overlay variants + PARCHMENT.md | ✅ Optimal for complex systems |

The PARCHMENT.md is the intermediate representation. The diagram is its most important section — but only one section.


Parchment Programming: Diagramic Design Document/Intermediate Representation Optimization (Part 2)


This article presents a structured analysis as well as the key optimization dimensions for diagrams used for Parchment Programming.

Part 1 can be found here: https://hyperonomy.com/2026/04/09/parchment-programming-diagramic-design-document-intermediate-representation-part-1/

Parchment Programming Methodology: https://hyperonomy.com/2026/04/09/parchment-programming-methodology/



Claude Prompt

How can this #ParchmentProgramming diagram be optimized as a design document/intermediate representation specification for a Claude AI-coded C#/.NET 8 software system?

Great diagram — this is a strong architectural foundation. Let me give you a structured analysis of how to optimize it as a ParchmentProgramming IR spec for Claude-coded C#/.NET 8.


What’s Already Working Well

The diagram communicates bounded contexts, data stores, messaging topology, and external federation clearly enough that Claude can infer C# project/namespace boundaries. That’s the core value of ParchmentProgramming: the diagram is the spec.


Optimization Recommendations

1. Add Stereotype/Type Annotations to Every Box

Right now Claude must infer whether a box is an interface, class, service, background worker, etc. Make it explicit:

| Current Label | Add Annotation |
|---------------|----------------|
| PowerShell Runspace Pool | «HostedService» RunspacePoolService |
| Agent 1 Runspace | «BackgroundWorker» AgentRunspace |
| DIDComm Message Switchboard | «Router» DIDCommSwitchboard |
| DIDComm/HTTP Listener | «Middleware» DIDCommHttpListener |
| LOBE | «PSModule» LoadableObjectBrainExtension |
| Fast Cache | «Repository» FastCacheRepository : LiteDB |
| CIPHER | «Service» CipherService |
| SVRN7 XFER | «Gateway» Svrn7TransferGateway |

2. Annotate All Arrows with Interface Contracts

Every arrow should carry:

  • Direction (already present ✓)
  • Protocol (partially present ✓)
  • C# interface name (missing ✗)

Example additions:

DIDComm Switchboard → DIDComm/HTTP Listener : IDIDCommDispatcher
Agent Runspace → Fast Cache : IFastCacheRepository
LOBE → Runspace Pool : ILoadableExtension
SVRN7 XFER → SQL Server : ISvrn7TransferGateway

3. Declare the .NET Project Boundary Map

Add a legend box (or overlay color coding) mapping diagram regions to .csproj names — this is the single most Claude-actionable addition:

■ Blue region → Svrn7.Core
■ Green region → Svrn7.Society (TDA + Runspace Pool)
■ Purple region → Svrn7.Federation (DIDComm/HTTP + REST)
■ Yellow region → Svrn7.Identity (DID Doc + VC Doc Resolvers)
■ Right panel → Svrn7.VTC (VTC7 federation mesh)
■ Bottom-left → Svrn7.Cli

4. Disambiguate the Agent Model

The diagram shows Agent 1/2/3/N but doesn’t specify whether agents are:

  • Separate IHostedService instances
  • Task-isolated runspaces within one service
  • Dynamically loaded plugins

Add a small inset box:

Agent Instantiation Model:
«PluginFactory» AgentRunspaceFactory
→ creates: IAgentRunspace per registered AgentDescriptor
→ lifecycle: Scoped per DIDComm session

5. Annotate Data Store Schema Contracts

Each LiteDB cylinder should carry its C# collection type:

Fast Cache (LiteDB) → ILiteCollection<CachedMessage>
Long-Term Message Memory → ILiteCollection<MessageRecord>
DID Doc Registry → ILiteCollection<DidDocument>
VC Doc Registry → ILiteCollection<VcDocument>

6. Add a DIDComm V2 Mode Annotation

Given the pending code change (Authcrypt → SignThenEncrypt), explicitly annotate:

DIDComm V2 (Secure Trusted Messaging)
Default mode: «SignThenEncrypt»
Impl: DIDCommService.cs

This makes Claude’s generated code default-correct without needing a separate instruction.

7. Add an Inbound/Outbound Processing Pipeline Spec

The Inbound Processing (Unpack) and Outbound Processing (Pack) labels are present but unpipelined. Add:

Inbound: HttpRequest → DIDCommHttpListener
→ Unpack() → DIDCommSwitchboard
→ Route(agentId) → IAgentRunspace
Outbound: IAgentRunspace → DIDCommSwitchboard
→ Pack() → REST/HTTP → Internet

This directly generates the middleware pipeline registration in Program.cs.
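
A language-neutral sketch of the ordering this pipeline spec implies (the real implementation would be C# middleware; the handler stubs and envelope shape below are assumptions for illustration only):

```python
# Inbound ordering: listener -> Unpack() -> switchboard Route(agentId) -> agent.

def unpack(envelope: dict) -> dict:
    """Stand-in for DIDComm Unpack(): validate and strip the envelope."""
    assert envelope.get("type") == "didcomm-envelope", "untrusted input rejected"
    return envelope["body"]

def route(message: dict, runspaces: dict):
    """Stand-in for DIDCommSwitchboard routing to a registered agent runspace."""
    return runspaces[message["agentId"]](message)

# One registered agent runspace, modeled as a callable
runspaces = {"agent-1": lambda m: f"agent-1 handled {m['payload']}"}

envelope = {"type": "didcomm-envelope",
            "body": {"agentId": "agent-1", "payload": "ping"}}
print(route(unpack(envelope), runspaces))  # agent-1 handled ping
```

The outbound path is the mirror image: the agent hands a message back to the switchboard, which packs it before it crosses the trust boundary to the Internet.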

8. Mark SOVRONA/SVRN7 SRC as a Subsystem Boundary

The POS terminal icon is ambiguous — add:

«ExternalSubsystem» SOVRONA (SVRN7)
Shared Reserve Currency (SRC)
Access via: ISvrn7TransferGateway
Protocol: SVRN7 XFER (SQL/TDS for SQL Server, CIPHER for NEO4J)

Structural Template Claude Needs to See

The ideal ParchmentProgramming diagram for Claude-coded C#/.NET 8 follows this pattern per box:

┌─────────────────────────────┐
│        «stereotype»         │
│       ComponentName         │
│      : InterfaceName        │
│       [ProjectName]         │
│  ─────────────────────────  │
│ key method / contract hint  │
└─────────────────────────────┘

Even adding stereotype + project name to just the top-level boxes would dramatically improve Claude’s code generation accuracy.


Priority Order for Maximum Claude Utility

  1. Project boundary color map — enables correct using and DI registration
  2. Interface names on arrows — enables correct abstraction layer
  3. Agent instantiation model inset — resolves the biggest structural ambiguity
  4. DIDComm default mode annotation — locks in the SignThenEncrypt decision
  5. LiteDB collection types — enables correct repository scaffolding
  6. Stereotype labels — enables correct C# class/interface/attribute selection

Has anyone conceived of anything at all similar to Parchment Programming?

There are adjacent ideas in the ecosystem, but nothing that matches Parchment Programming’s specific inversion. Here’s the honest landscape:


Parchment Programming: What Exists — and Why It’s Different

Diagram-as-Code tools (Structurizr/C4, D2, PlantUML, Mermaid) go in the opposite direction — you write text and a diagram is generated, with elements automatically positioned by a layout engine. The diagram is the output, not the primary authoring artifact.

Swark / Claude Code go code → diagram — based on retrieved source files, an LLM builds a prompt to generate an architecture diagram as documentation. Again, the diagram is a byproduct of code, not a driver of it.

Eraser / DiagramGPT go natural language → diagram → code, but the diagram is an ephemeral intermediate — working with an LLM to create a fleshed-out diagram prompt is the workflow, not treating the diagram itself as an authoritative, annotated specification.

Structurizr comes closest in spirit — its model-based consistency makes it a good choice for teams looking to generate C4 model diagrams with AI — but it is DSL-first, not diagram-first, and carries no notion of the diagram encoding interface contracts or project boundary stereotypes for code generation.

Academic reverse-engineering approaches go code → diagram using LLMs, recovering both static and behavioral architectural views by extracting component diagrams and generating state machine diagrams to model component behavior. Still the wrong direction.


What Makes Parchment Programming Genuinely Novel

The field has thoroughly explored code → diagram and text → diagram → code. What nobody has formalized is the specific claim Parchment Programming makes:

The richly annotated visual diagram is the primary authoritative artifact — encoding stereotypes, interface contracts, project boundaries, and protocol modes — such that an AI reads it directly as an intermediate representation to generate correct, idiomatic code without a prose specification intermediary.

The key differentiators are:

(1) the diagram is architecture-first and human-authored, not AI-generated;

(2) it carries code-generation semantics embedded in the visual (stereotypes, interface names, DI boundaries); and

(3) it replaces the prose spec entirely rather than supplementing it.

That combination — as a named, intentional methodology — appears to be original to the author.

Part 1 can be found here: https://hyperonomy.com/2026/04/09/parchment-programming-diagramic-design-document-intermediate-representation-part-1/
