
Parchment Programming: What are the implications of the PPML for software development?

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Parchment Programming is an architecture-first software development methodology where a richly annotated visual diagram — the “parchment” — serves as the primary design document and intermediate representation (IR) that an AI coding assistant (like Claude) reads directly to generate correct, idiomatic code.

Rather than translating requirements through layers of prose specifications, the diagram itself encodes stereotypes, interface contracts, project boundaries, data models, and protocol annotations in a form that is simultaneously human-readable and AI-actionable. Parchment Programming was invented by Michael Herman, Chief Digital Officer, Web 7.0 Foundation, in April 2026.

“Change is hard at first, messy in the middle, and gorgeous at the end.” – Robin Sharma


The core claim

PPML asserts that a formal diagram is a sufficient specification for code generation — that if a diagram is conformant (every element has a unique label, belongs to exactly one Legend-defined type, and has a derivation rule), then an AI or human can produce the correct implementation from the diagram alone, without additional prose specification.

This is a stronger claim than “diagrams are useful.” It is a claim about sufficiency.
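The conformance rules just stated can be sketched as a small check. This is purely illustrative: PPML defines conformance on the diagram itself, and the record fields used here (`label`, `type`, `derivation_rule`) are hypothetical names, not part of the methodology.

```python
# Hypothetical PPML conformance check: every element has a unique label,
# belongs to exactly one Legend-defined type, and has a derivation rule.
def is_conformant(elements, legend_types):
    labels = [e["label"] for e in elements]
    if len(labels) != len(set(labels)):
        return False  # duplicate labels break the unique-label rule
    for e in elements:
        if e["type"] not in legend_types:
            return False  # type is not defined in the Legend
        if not e.get("derivation_rule"):
            return False  # element has no derivation rule
    return True

legend = {"Switchboard", "Repository", "HostedService"}
diagram = [
    {"label": "DIDComm Message Switchboard", "type": "Switchboard",
     "derivation_rule": "one router class, one protocol registry, one outbound queue"},
    {"label": "Fast Cache", "type": "Repository",
     "derivation_rule": "one LiteDB-backed repository class"},
]
print(is_conformant(diagram, legend))  # → True
```

A diagram that fails this check is, by the sufficiency claim, not yet a specification.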


Implication 1: The specification artefact changes

In conventional software development, the specification is prose — a requirements document, a design document, an architecture decision record. The diagram is illustrative, supplementary, and frequently stale.

In PPML, the diagram is the specification. The prose documents (TDA Design, Whitepaper, IETF drafts) are derived from the diagram — they explain and justify it, but they do not override it. If the diagram and the prose conflict, the diagram wins.

This inverts the usual relationship. The implication is that diagram maintenance becomes the primary engineering discipline, not prose authoring. A diagram change is a specification change. A code change that has no corresponding diagram change violates traceability — it is, by definition, undocumented behaviour.


Implication 2: AI code generation becomes deterministic at the architecture level

The Gap Register and derivation rules give an AI generator a closed-world assumption: every artefact it produces must be traceable to a diagram element instance, and every diagram element instance must produce at least one artefact. There are no open-ended requests like “build me a messaging system.” There are only grounded requests like:

“Derive the artefact for element instance ‘DIDComm Message Switchboard’ of type Switchboard. Derivation rule: one router class, one protocol registry, one outbound queue.”

The AI cannot invent artefact names that do not appear in the diagram. It cannot silently add dependencies. It cannot reorganise the architecture. This is not a limitation — it is the point. Creativity is in the diagram; precision is in the derivation.
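The closed-world assumption can be sketched as a two-way trace check. The data shapes below (a label list and an artefact-to-element mapping) are assumptions for illustration; the actual Gap Register format is not shown here.

```python
# Hypothetical closed-world check: every artefact must trace to a diagram
# element, and every diagram element must produce at least one artefact.
def closed_world_gaps(element_labels, artefacts):
    # artefacts: mapping of artefact name -> element label it derives from
    orphan_artefacts = [a for a, src in artefacts.items()
                        if src not in element_labels]
    unrealised_elements = [e for e in element_labels
                           if e not in artefacts.values()]
    return orphan_artefacts, unrealised_elements

elements = ["DIDComm Message Switchboard", "Fast Cache"]
artefacts = {"DIDCommSwitchboard.cs": "DIDComm Message Switchboard"}

orphans, missing = closed_world_gaps(elements, artefacts)
print(orphans)  # → []
print(missing)  # → ['Fast Cache']
```

Either list being non-empty is a Gap Register entry, not a judgment call.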

The practical implication is that AI code generation quality is bounded below by the quality of the diagram, not by the quality of the prompt. A well-formed PPML diagram produces consistent, reproducible results across AI sessions and across AI models. A poorly-formed diagram produces inconsistent results regardless of prompt quality.


Implication 3: The change process becomes explicit

Conventional development has no formal mechanism for distinguishing “we changed the architecture” from “we changed an implementation detail.” Both look like pull requests.

PPML enforces a distinction. Within an epoch, the Legend is frozen and element types cannot change. A new component requires a diagram change, which requires a version increment (DSA 0.19 → DSA 0.24), which requires a Gap Register update. Architectural changes are visible as diagram changes.

Implementation changes — refactoring within a derived artefact, performance tuning, bug fixes — do not require diagram changes. The boundary between architecture and implementation is drawn precisely at the diagram boundary.

This has governance implications for a project like SVRN7: the diagram is the governance document. Epoch transitions are diagram changes. New protocol support is a LOBE addition to the diagram. The Foundation controls the diagram; contributors derive from it.


Implication 4: Testing becomes traceable to the diagram

Every test should be traceable to a diagram element instance, just as every artefact is. If a test has no corresponding diagram element, it is either testing an undocumented artefact (a traceability violation) or testing implementation detail that should not be exposed.

In practice this means the Gap Register can include test coverage as a property. “Element instance X has derivation artefact Y, test coverage Z.” Missing test coverage is a Gap Register entry, not a matter of developer discretion.
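A sketch of coverage as a Gap Register property, with illustrative field names (the real register format is not shown here):

```python
# Hypothetical Gap Register entries carrying test coverage as a property.
gap_register = [
    {"element": "DIDComm Message Switchboard",
     "artefact": "DIDCommSwitchboard.cs", "test_coverage": 0.92},
    {"element": "Fast Cache",
     "artefact": "FastCacheRepository.cs", "test_coverage": None},
]

# Missing test coverage surfaces as a register query, not developer discretion.
uncovered = [g["element"] for g in gap_register
             if g["test_coverage"] is None]
print(uncovered)  # → ['Fast Cache']
```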


Implication 5: Documentation staleness becomes structurally impossible

In conventional projects, diagrams go stale because they are maintained separately from code. PPML makes diagram staleness a first-class defect: if the diagram is stale, the Gap Register is wrong, and any AI-generated code derived from it will be wrong.

The practical discipline is: diagram first, always. Before writing any new C# class, PowerShell module, or LOBE descriptor, the diagram must already contain the corresponding element instance. This is why every source file in the SVRN7 solution carries a derivation trace comment:

// Derived from: "DIDComm Message Switchboard" — DSA 0.24 Epoch 0 (PPML).

That comment is not decorative — it is the traceability link. If that element instance no longer appears in the diagram, the file is either stale or the diagram is stale. One of them must change.
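A sketch of the corresponding traceability scan, using the comment format from the example above; the in-memory file handling and function names are hypothetical.

```python
import re

# Hypothetical scan for derivation trace comments of the form
#   // Derived from: "<element>" — DSA <ver> Epoch <n> (PPML).
TRACE = re.compile(r'//\s*Derived from:\s*"([^"]+)"')

def stale_files(file_texts, diagram_elements):
    # file_texts: mapping of path -> source text
    stale = []
    for path, text in file_texts.items():
        m = TRACE.search(text)
        # Stale if the trace comment is missing, or if the element it
        # names no longer appears in the diagram.
        if not m or m.group(1) not in diagram_elements:
            stale.append(path)
    return stale

files = {
    "DIDCommSwitchboard.cs":
        '// Derived from: "DIDComm Message Switchboard" — DSA 0.24 Epoch 0 (PPML).',
    "Orphan.cs": "// no trace comment at all",
}
print(stale_files(files, {"DIDComm Message Switchboard"}))  # → ['Orphan.cs']
```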


Implication 6: The methodology scales with AI capability

This is the forward-looking implication. In the current epoch, an AI (Claude, in this case) assists with derivation — producing C# from a diagram element description, writing PowerShell cmdlets from a LOBE derivation rule, generating IETF draft sections from an architectural decision. The human holds the diagram and reviews the derivations.

As AI capability increases, the human’s role shifts further toward diagram authorship and review. The diagram becomes the interface between human architectural intent and AI implementation. The better the diagram grammar (the PPML Legend), the more precisely an AI can translate intent into code.

The LOBE descriptor format — with its MCP-aligned inputSchema/outputSchema, compositionHints, and useCases — is an early instance of this. It is a machine-readable diagram-derived artefact that an AI can use to reason about composability without reading the PowerShell source. The diagram element (LOBE) produces both the code artefact (.psm1) and the AI legibility artefact (.lobe.json). Both are derived from the same diagram element. The AI consuming the .lobe.json is one step removed from reading the diagram directly.
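As a sketch, a minimal shape check for such a descriptor could look like the following. The four field names come from the text above; the sample values and overall structure are assumptions.

```python
import json

# Hypothetical minimal shape check for a .lobe.json descriptor. The field
# names are taken from the article; everything else is illustrative.
REQUIRED = {"inputSchema", "outputSchema", "compositionHints", "useCases"}

def missing_fields(descriptor_json):
    doc = json.loads(descriptor_json)
    return sorted(REQUIRED - doc.keys())

sample = json.dumps({
    "inputSchema": {"type": "object"},
    "outputSchema": {"type": "object"},
    "compositionHints": ["pipe-into: DIDCommSwitchboard"],
    "useCases": ["route inbound DIDComm messages"],
})
print(missing_fields(sample))  # → []
```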

The next step — which PPML explicitly anticipates but does not yet implement — is an AI that reads the diagram directly and performs the full derivation without a human intermediary for routine changes.


The honest limitation

PPML is most effective for component-level architecture — what components exist, how they relate, what they are responsible for. It is less effective for algorithmic detail. The 8-step transfer validator, the Merkle log construction, the DIDComm pack/unpack sequence — these require prose specification or pseudocode. The diagram says “TransferValidator exists and implements ITransferValidator.” It does not say how step 4 (nonce replay detection) works.

This is not a flaw in PPML — it is a boundary condition. PPML governs architecture. Algorithms require their own specification discipline (IETF drafts, pseudocode, formal methods). The two disciplines are complementary: PPML tells you what to build and how it connects; the algorithm specification tells you how each component behaves internally.


Summary

PPML’s implications reduce to one structural claim: the diagram is the primary engineering artefact, and all other artefacts are derived from it. The implications — specification inversion, deterministic AI generation, explicit change governance, traceable testing, structural documentation freshness, and scalability with AI capability — all follow from that single claim. Whether that claim is valuable depends entirely on whether the diagram can be kept accurate and complete, which is a discipline question, not a tool question.


Parchment Programming Methodology 0.1

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.



How Claude Processes Parchment Inputs

Claude receives a conversation context containing images + text. The key facts:

  • Claude can see a diagram image and reason about it
  • Claude can read structured Markdown/text with full fidelity
  • Claude cannot cross-reference between an image region and a text table by coordinate — it reasons about both holistically
  • Therefore: the diagram handles spatial/structural truth; the companion document handles behavioral/contractual truth

This is actually a clean separation of concerns.


The Recommended Hybrid Architecture

ParchmentSpec_DSA_0.16_Epoch0/
├── diagram.png      ← the visual (spatial truth)
├── PARCHMENT.md     ← master spec (behavioral truth)
└── schemas/
    ├── didcomm-envelope.json
    ├── did-doc.json
    └── vc-doc.json

The PARCHMENT.md is the primary AI coding input. The diagram is embedded in it — not appended, not separate — embedded at the top, so Claude sees it as the structural foundation before reading the annotations.


PARCHMENT.md Optimal Structure for Claude

# Web 7.0 DSA 0.16 Epoch 0 — Parchment Spec
## 1. Architecture Diagram
![DSA 0.16 Epoch 0](./diagram.png)
## 2. System Identity
- Spec DID: did:drn:...
- Epoch: 0 | Version: 0.16
- Target runtime: .NET 8 / C# 12
- Codegen mode: AI-scaffolded + human-completed
## 3. Component Fact Table
| Component | Artifact | Lifecycle | Multiplicity | Thread-safe |
|-----------|----------|-----------|--------------|-------------|
| RunspacePool | RunspacePoolManager.cs | Singleton | 1 | Yes |
| Agent Runspace | AgentRunspace.cs | Pooled | 1..N | Isolated |
| LOBE | LobeLoader.cs | On-demand | 0..N | No |
| DIDComm Switchboard | DIDCommSwitchboard.cs | Singleton | 1 | Yes |
...
## 4. Connector / Protocol Index
| From | To | Protocol | Payload Type | Schema Ref |
|------|----|----------|--------------|------------|
| Internet | DIDComm Listener | DIDComm V2 | DIDCommEnvelope | schemas/didcomm-envelope.json |
| Switchboard | Agent Runspace | Internal | AgentMessage | (inline below) |
...
## 5. Data Contracts
### Fast Cache (LiteDB)
- Key: `msg:{did}:{timestamp}`
- TTL: 300s
- Fields: SenderId, Payload, ReceivedAt
### DID Doc Registry (LiteDB)
- Key: `did:drn:{resourceId}`
- Fields: (per did:drn spec, draft-herman-did-drn-00)
...
## 6. Trust Boundary Policies
- **Green zone (TDA-internal):** No encryption required; integrity only
- **Purple zone (DIDComm-authenticated):** All messages MUST be SignThenEncrypt
- **Yellow zone (Internet):** Untrusted until Unpack validates DIDComm envelope
## 7. AI Codegen Manifest
| Component | Generation Mode | Acceptance Criteria |
|-----------|----------------|---------------------|
| RunspacePoolManager | AI-generated | Min/max pool size enforced |
| DIDCommSwitchboard | AI-scaffolded | Routes all registered types |
| Svrn7XferService | Hand-authored | SRC ledger integrity |
## 8. Open Questions (Do Not Invent)
- LOBE hot-reload: supported or restart-required?
- Fast Cache eviction policy when LiteDB full?
- CIPHER algorithm selection: Ed25519 only or negotiated?
## 9. Excluded from AI Generation
- SVRN7 SRC transaction signing
- NEO4J CIPHER graph schema
- VTC7 federation handshake

Why This Works Best With Claude

The critical insight is section 8 — the Open Questions Log. Claude’s failure mode when coding from underspecified inputs is silent invention. Explicitly naming unknowns instructs Claude to emit // TODO: [OPEN QUESTION - LOBE hot-reload policy] rather than inventing an answer.
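As a mechanical sketch, each Open Questions entry can be turned into the TODO marker the generator should emit instead of an invented answer (the marker format follows the example in the previous paragraph; the function is hypothetical):

```python
# Hypothetical conversion of Open Questions Log entries into TODO markers
# that a code generator emits rather than silently inventing an answer.
open_questions = [
    "LOBE hot-reload: supported or restart-required?",
    "Fast Cache eviction policy when LiteDB full?",
]

def todo_markers(questions):
    return [f"// TODO: [OPEN QUESTION - {q}]" for q in questions]

for line in todo_markers(open_questions):
    print(line)
```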

Three additional Claude-specific optimizations:

1. Use the diagram as a spatial index, not a spec. In your prompt to Claude, say: “The diagram shows containment and flow. The PARCHMENT.md sections 3–9 are authoritative for all behavioral details. Where they conflict, the Markdown wins.” This prevents Claude from over-reading visual ambiguities.

2. Keep sections 3–7 machine-parseable. Claude handles Markdown tables extremely well as structured input. Avoid prose paragraphs for anything that should drive code generation — tables and bullet lists are more reliably parsed into code structure.

3. Submit diagram + PARCHMENT.md together in every coding prompt. Claude has no persistent memory of previous sessions, so the full parchment must be present each time. The compact table format makes this token-efficient.


The Better Approach You Asked About

There is one additional technique worth considering: annotated diagram variants. Rather than modifying the master diagram, you maintain lightweight overlay versions for specific concerns:

diagram.png ← master (as-is, no changes)
diagram-trust-zones.png ← same diagram + colored zone overlays
diagram-dataflow.png ← same diagram + numbered flow sequence

These are fast to produce (PowerPoint/draw.io layer toggles) and can be embedded in targeted sections of PARCHMENT.md. Claude sees each as a focused lens on the same architecture without the master diagram becoming cluttered.


Bottom Line

| Approach | Verdict |
|----------|---------|
| Annotate the diagram itself | Avoid — clutters the visual, hard to maintain |
| Companion Markdown only (no diagram) | Suboptimal — loses spatial/structural truth |
| Diagram embedded in PARCHMENT.md | ✅ Recommended |
| Diagram + overlay variants + PARCHMENT.md | ✅ Optimal for complex systems |

The PARCHMENT.md is the intermediate representation. The diagram is its most important section — but only one section.


Parchment Programming: Diagramic Design Document/Intermediate Representation Optimization (Part 2)


This article presents a structured analysis of the key optimization dimensions for diagrams used in Parchment Programming.

Part 1 can be found here: https://hyperonomy.com/2026/04/09/parchment-programming-diagramic-design-document-intermediate-representation-part-1/

Parchment Programming Methodology: https://hyperonomy.com/2026/04/09/parchment-programming-methodology/


Claude Prompt

How can this #ParchmentProgramming diagram be optimized as a design document/intermediate representation specification for a Claude AI-coded C#/.NET 8 software system?

Great diagram — this is a strong architectural foundation. Let me give you a structured analysis of how to optimize it as a ParchmentProgramming IR spec for Claude-coded C#/.NET 8.


What’s Already Working Well

The diagram communicates bounded contexts, data stores, messaging topology, and external federation clearly enough that Claude can infer C# project/namespace boundaries. That’s the core value of ParchmentProgramming: the diagram is the spec.


Optimization Recommendations

1. Add Stereotype/Type Annotations to Every Box

Right now Claude must infer whether a box is an interface, class, service, background worker, etc. Make it explicit:

| Current Label | Add Annotation |
|---------------|----------------|
| PowerShell Runspace Pool | «HostedService» RunspacePoolService |
| Agent 1 Runspace | «BackgroundWorker» AgentRunspace |
| DIDComm Message Switchboard | «Router» DIDCommSwitchboard |
| DIDComm/HTTP Listener | «Middleware» DIDCommHttpListener |
| LOBE | «PSModule» LoadableObjectBrainExtension |
| Fast Cache | «Repository» FastCacheRepository : LiteDB |
| CIPHER | «Service» CipherService |
| SVRN7 XFER | «Gateway» Svrn7TransferGateway |

2. Annotate All Arrows with Interface Contracts

Every arrow should carry:

  • Direction (already present ✓)
  • Protocol (partially present ✓)
  • C# interface name (missing ✗)

Example additions:

DIDComm Switchboard → DIDComm/HTTP Listener : IDIDCommDispatcher
Agent Runspace → Fast Cache : IFastCacheRepository
LOBE → Runspace Pool : ILoadableExtension
SVRN7 XFER → SQL Server : ISvrn7TransferGateway

3. Declare the .NET Project Boundary Map

Add a legend box (or overlay color coding) mapping diagram regions to .csproj names — this is the single most Claude-actionable addition:

■ Blue region → Svrn7.Core
■ Green region → Svrn7.Society (TDA + Runspace Pool)
■ Purple region → Svrn7.Federation (DIDComm/HTTP + REST)
■ Yellow region → Svrn7.Identity (DID Doc + VC Doc Resolvers)
■ Right panel → Svrn7.VTC (VTC7 federation mesh)
■ Bottom-left → Svrn7.Cli

4. Disambiguate the Agent Model

The diagram shows Agent 1/2/3/N but doesn’t specify whether agents are:

  • Separate IHostedService instances
  • Task-isolated runspaces within one service
  • Dynamically loaded plugins

Add a small inset box:

Agent Instantiation Model:
«PluginFactory» AgentRunspaceFactory
→ creates: IAgentRunspace per registered AgentDescriptor
→ lifecycle: Scoped per DIDComm session

5. Annotate Data Store Schema Contracts

Each LiteDB cylinder should carry its C# collection type:

Fast Cache (LiteDB) → ILiteCollection<CachedMessage>
Long-Term Message Memory → ILiteCollection<MessageRecord>
DID Doc Registry → ILiteCollection<DidDocument>
VC Doc Registry → ILiteCollection<VcDocument>

6. Add a DIDComm V2 Mode Annotation

Given the pending code change (Authcrypt → SignThenEncrypt), explicitly annotate:

DIDComm V2 (Secure Trusted Messaging)
Default mode: «SignThenEncrypt»
Impl: DIDCommService.cs

This makes Claude’s generated code default-correct without needing a separate instruction.

7. Add an Inbound/Outbound Processing Pipeline Spec

The Inbound Processing (Unpack) and Outbound Processing (Pack) labels are present but unpipelined. Add:

Inbound: HttpRequest → DIDCommHttpListener
→ Unpack() → DIDCommSwitchboard
→ Route(agentId) → IAgentRunspace
Outbound: IAgentRunspace → DIDCommSwitchboard
→ Pack() → REST/HTTP → Internet

This directly generates the middleware pipeline registration in Program.cs.

8. Mark SOVRONA/SVRN7 SRC as a Subsystem Boundary

The POS terminal icon is ambiguous — add:

«ExternalSubsystem» SOVRONA (SVRN7)
Shared Reserve Currency (SRC)
Access via: ISvrn7TransferGateway
Protocol: SVRN7 XFER (SQL/TDS for SQL Server, CIPHER for NEO4J)

Structural Template Claude Needs to See

The ideal ParchmentProgramming diagram for Claude-coded C#/.NET 8 follows this pattern per box:

┌─────────────────────────────┐
│ «stereotype»                │
│ ComponentName               │
│ : InterfaceName             │
│ [ProjectName]               │
│ ─────────────────────────── │
│ key method / contract hint  │
└─────────────────────────────┘

Even adding stereotype + project name to just the top-level boxes would dramatically improve Claude’s code generation accuracy.


Priority Order for Maximum Claude Utility

  1. Project boundary color map — enables correct using and DI registration
  2. Interface names on arrows — enables correct abstraction layer
  3. Agent instantiation model inset — resolves the biggest structural ambiguity
  4. DIDComm default mode annotation — locks in the SignThenEncrypt decision
  5. LiteDB collection types — enables correct repository scaffolding
  6. Stereotype labels — enables correct C# class/interface/attribute selection

Has anyone conceived of anything at all similar to Parchment Programming?

There are adjacent ideas in the ecosystem, but nothing that matches Parchment Programming’s specific inversion. Here’s the honest landscape:


Parchment Programming: What Exists — and Why It’s Different

Diagram-as-Code tools (Structurizr/C4, D2, PlantUML, Mermaid) go in the opposite direction — you write text and a diagram is generated, with elements automatically positioned by a layout engine. The diagram is the output, not the primary authoring artifact.

Swark / Claude Code go code → diagram — based on retrieved source files, an LLM builds a prompt to generate an architecture diagram as documentation. Again, the diagram is a byproduct of code, not a driver of it.

Eraser / DiagramGPT go natural language → diagram → code, but the diagram is an ephemeral intermediate — working with an LLM to create a fleshed-out diagram prompt is the workflow, not treating the diagram itself as an authoritative, annotated specification.

Structurizr comes closest in spirit — AI agents and LLMs excel at generating text, and Structurizr’s model-based consistency makes it a good choice for teams looking to generate C4 model diagrams with AI — but it’s DSL-first, not diagram-first, and carries no notion of the diagram encoding interface contracts or project boundary stereotypes for code generation.

Academic reverse-engineering approaches go code → diagram using LLMs, recovering both static and behavioral architectural views by extracting component diagrams and generating state machine diagrams to model component behavior. Still the wrong direction.


What Makes Parchment Programming Genuinely Novel

The field has thoroughly explored code → diagram and text → diagram → code. What nobody has formalized is the specific claim Parchment Programming makes:

The richly annotated visual diagram is the primary authoritative artifact — encoding stereotypes, interface contracts, project boundaries, and protocol modes — such that an AI reads it directly as an intermediate representation to generate correct, idiomatic code without a prose specification intermediary.

The key differentiators are:

(1) the diagram is architecture-first and human-authored, not AI-generated;

(2) it carries code-generation semantics embedded in the visual (stereotypes, interface names, DI boundaries); and

(3) it replaces the prose spec entirely rather than supplementing it.

That combination — as a named, intentional methodology — appears to be original to you.



Parchment Programming: Diagramic Design Document/Intermediate Representation Optimization (Part 1)


This article presents a structured analysis of the key optimization dimensions for diagrams used in Parchment Programming.

Part 2 can be found here: https://hyperonomy.com/2026/04/09/parchment-programming-diagramic-design-document-intermediate-representation-optimization-part-2/

Parchment Programming Methodology: https://hyperonomy.com/2026/04/09/parchment-programming-methodology/


Claude Prompt

How can this #ParchmentProgramming diagram be optimized as a design document/intermediate representation specification for an AI-coded software system?

What the Diagram Does Well Already

  • Layered containment (TDA > Runspace Pool > Agent Runspaces) maps cleanly to class/module hierarchies an AI can scaffold
  • Named protocols (DIDComm V2, REST/HTTP, SQL/TDS, CIPHER) give an AI concrete interface contracts to target
  • Technology bindings (LiteDB, NEO4J, SQL Server, PowerShell Modules) eliminate ambiguity in dependency selection
  • Directional flow (Inbound Unpack → Switchboard → Outbound Pack) implies a pipeline pattern the AI can instantiate
  • Multiplicity hints (Agent 1…N, Citizen TDA ×4+) signal collection types and polymorphism requirements

The Six Optimization Gaps

1. Missing Cardinality & Lifecycle Annotations

The diagram shows what exists but not how many or when. An AI needs to know:

  • Is the Runspace Pool fixed-size or elastic? Min/max pool size?
  • Are LOBEs loaded at startup or on-demand?
  • Do Agent Runspaces share state or are they fully isolated?

Fix: Add a small Component Fact Table inset — one row per major component with columns: multiplicity | lifecycle (singleton/transient/pooled) | owns-state? | thread-safe?


2. Interface Contracts Are Implied, Not Declared

The AI sees “DIDComm/HTTP Listener” but has no method signatures, message schemas, or error contracts. The connector arrows carry no payload type.

Fix: Add stereotypes on connectors — e.g., label each arrow with «sends: DIDCommEnvelope» or «protocol: REST/JSON-LD». Even a short legend table mapping each arrow color/style to a message type and schema reference would let an AI generate correct interface stubs.
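A sketch of parsing such connector annotations into a payload/protocol index an AI could target when generating interface stubs; the «key: value» style follows the examples above, while the parsing approach itself is an assumption.

```python
import re

# Hypothetical parser for connector annotations such as
#   «sends: DIDCommEnvelope» «protocol: DIDComm V2»
ANNOT = re.compile(r"«(\w+):\s*([^»]+)»")

def parse_connector(label):
    # Returns a dict of annotation key -> value for one arrow label.
    return dict(ANNOT.findall(label))

print(parse_connector("«sends: DIDCommEnvelope» «protocol: DIDComm V2»"))
# → {'sends': 'DIDCommEnvelope', 'protocol': 'DIDComm V2'}
```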


3. No Error / Exceptional Flow Representation

The diagram shows the happy path only. An AI coding from this will produce brittle code with no fault boundaries.

Fix: Add a Fault Boundary overlay — dashed red borders around components that must implement retry/circuit-breaker, and a small failure-mode legend: CIPHER fail → ?, LiteDB unavailable → ?, DIDComm unpack error → ?


4. Data Model is Storage-only, Schema-less

Four LiteDB stores are shown (Fast Cache, Long-Term Message Memory, DID Doc Registry, VC Doc Registry) but with no schema, key structure, TTL, or consistency requirements called out. The AI will invent schemas.

Fix: Add a Data Contract sidebar with one mini-schema per store — just the primary key pattern, top 3–5 fields, and TTL/eviction policy. For DID Doc Registry and VC Doc Registry this is especially important since did:drn and VC structure are normative.


5. Security / Trust Boundary is Structural but Not Behavioral

The CIPHER block and VTC7 boundary appear but the enforcement rules are invisible. When does encryption apply? Who authorizes a new LOBE load? What validates a VC before it enters the Runspace?

Fix: Add a Trust Boundary Annotation layer — color-coded zones (already partially present with the green/purple/yellow regions) with an explicit legend: green = TDA-internal trust, purple = DIDComm-authenticated, yellow = internet-untrusted. Add one-line policy statements per zone boundary crossing.
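As an illustrative sketch, the zone legend reduces to a crossing-policy lookup. The zone names follow the legend above; the policy strings are paraphrased assumptions, not normative text.

```python
# Hypothetical zone-crossing policy table derived from the trust boundary
# legend: green = TDA-internal, purple = DIDComm-authenticated,
# yellow = internet-untrusted.
POLICY = {
    ("yellow", "purple"): "Unpack must validate the DIDComm envelope",
    ("purple", "green"): "message MUST be SignThenEncrypt-verified",
}

def crossing_policy(src_zone, dst_zone):
    return POLICY.get((src_zone, dst_zone), "no crossing policy defined")

print(crossing_policy("yellow", "purple"))
# → Unpack must validate the DIDComm envelope
```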


6. No Prompt-Injection Anchors (the Core #ParchmentProgramming Gap)

This is the critical one for AI-coded systems. The diagram has no named generation targets — no indication of which boxes map to which code artifacts, which interfaces must be hand-authored vs. AI-generated, or what the acceptance criteria are per component.

Fix: Add an AI Codegen Manifest — a table in the document (not necessarily in the diagram itself) with columns:

| Component | Target Artifact | Generation Mode | Acceptance Test |
|-----------|-----------------|-----------------|-----------------|
| PowerShell Runspace Pool | RunspacePoolManager.cs | AI-generated | Pool min/max respected |
| DIDComm Message Switchboard | DIDCommSwitchboard.cs | AI-scaffolded + hand-completed | Routes all registered message types |
| LOBE loader | LobeLoader.cs | AI-generated | Module isolation verified |
| SVRN7 XFER | Svrn7XferService.cs | Hand-authored | SVRN7 SRC ledger integrity |
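A sketch of how a derivation run might consume such a manifest, partitioning artifacts by generation mode so hand-authored components are never touched. The rows mirror the table above; the partitioning code is hypothetical.

```python
# Hypothetical AI Codegen Manifest rows: (component, artifact, mode).
manifest = [
    ("PowerShell Runspace Pool", "RunspacePoolManager.cs", "AI-generated"),
    ("DIDComm Message Switchboard", "DIDCommSwitchboard.cs",
     "AI-scaffolded + hand-completed"),
    ("LOBE loader", "LobeLoader.cs", "AI-generated"),
    ("SVRN7 XFER", "Svrn7XferService.cs", "Hand-authored"),
]

# Partition artifacts by generation mode so the generator knows what it
# may produce and what it must leave alone.
by_mode = {}
for component, artifact, mode in manifest:
    by_mode.setdefault(mode, []).append(artifact)

print(sorted(by_mode["Hand-authored"]))  # → ['Svrn7XferService.cs']
```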

Structural Recommendations for the Document Form

If this diagram is the centerpiece of a Parchment Programming spec document, the optimal surrounding structure is:

1. System Identity Block ← version, epoch, DID of this spec doc itself
2. This Architecture Diagram ← with the fixes above applied
3. Component Fact Table ← multiplicity, lifecycle, ownership
4. Connector / Protocol Index ← arrow → schema/interface mapping
5. Data Contract Sidebar ← per-store mini-schemas
6. Trust Boundary Policy ← zone rules in natural language
7. AI Codegen Manifest ← component → artifact → test mapping
8. Open Questions Log ← explicit unknowns the AI must not invent

The Open Questions Log (item 8) is underrated — explicitly flagging what is unspecified prevents an AI from silently hallucinating design decisions into generated code.


Quick-Win Priority Order

If you’re iterating the diagram itself (not the surrounding document), the highest-leverage single changes are:

  1. Label every connector arrow with protocol + payload type — eliminates the largest source of AI ambiguity
  2. Add lifecycle annotations to the LOBE and Agent Runspace boxes
  3. Explicitly mark the DIDComm V2 trust boundary as the inbound validation gate

The diagram is already well above average as an AI coding input. These additions would move it from architectural sketch to generatable specification.

Has anyone conceived of anything at all similar to Parchment Programming?

There are adjacent ideas in the ecosystem, but nothing that matches Parchment Programming’s specific inversion. Here’s the honest landscape:


Parchment Programming: What Exists — and Why It’s Different

Diagram-as-Code tools (Structurizr/C4, D2, PlantUML, Mermaid) go in the opposite direction — you write text and a diagram is generated, with elements automatically positioned by a layout engine. The diagram is the output, not the primary authoring artifact.

Swark / Claude Code go code → diagram — based on retrieved source files, an LLM builds a prompt to generate an architecture diagram as documentation. Again, the diagram is a byproduct of code, not a driver of it.

Eraser / DiagramGPT go natural language → diagram → code, but the diagram is an ephemeral intermediate — working with an LLM to create a fleshed-out diagram prompt is the workflow, not treating the diagram itself as an authoritative, annotated specification.

Structurizr comes closest in spirit — AI agents and LLMs excel at generating text, and Structurizr’s model-based consistency makes it a good choice for teams looking to generate C4 model diagrams with AI — but it’s DSL-first, not diagram-first, and carries no notion of the diagram encoding interface contracts or project boundary stereotypes for code generation.

Academic reverse-engineering approaches go code → diagram using LLMs, recovering both static and behavioral architectural views by extracting component diagrams and generating state machine diagrams to model component behavior. Still the wrong direction.


What Makes Parchment Programming Genuinely Novel

The field has thoroughly explored code → diagram and text → diagram → code. What nobody has formalized is the specific claim Parchment Programming makes:

The richly annotated visual diagram is the primary authoritative artifact — encoding stereotypes, interface contracts, project boundaries, and protocol modes — such that an AI reads it directly as an intermediate representation to generate correct, idiomatic code without a prose specification intermediary.

The key differentiators are: (1) the diagram is architecture-first and human-authored, not AI-generated; (2) it carries code-generation semantics embedded in the visual (stereotypes, interface names, DI boundaries); and (3) it replaces the prose spec entirely rather than supplementing it. That combination — as a named, intentional methodology — appears to be original to you.

Part 2 can be found here: https://hyperonomy.com/2026/04/09/parchment-programming-diagramic-design-document-intermediate-representation-optimization-part-2/

1 Comment

Filed under Uncategorized

COPYRIGHT: Who Owns Claude-Generated Content?

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Who owns the content that you create? Who owns the copyright? And what are the rights for reuse or for original publishing?

Reference: https://claude.ai/share/55c3e4d4-edd2-44a3-b586-52fee2a5fc30

Here’s a structured breakdown, distinguishing what’s established fact, what’s inference, and what’s uncertain.


Who Owns Content Claude Creates?

1. Anthropic’s Terms: You Get the Output Rights

Under Anthropic’s Consumer Terms of Service, as between you and Anthropic, you retain any right, title, and interest in the inputs you submit. Subject to your compliance with their terms, Anthropic assigns to you all of their right, title, and interest — if any — in the outputs.

The “if any” qualifier is doing real legal work there. Anthropic is assigning whatever rights they have in the output — but they’re not guaranteeing those rights are substantial under copyright law.

For commercial/API customers, the terms are stronger. The commercial terms state that Anthropic agrees the customer owns all outputs, and disclaims any rights it receives to customer content, assigning all right, title, and interest in outputs to the customer.

2. Copyright Law: The Human Authorship Problem

This is where it gets complicated. U.S. copyright law requires a human author. The U.S. Copyright Office has been clear on this — a federal court ruled that art generated entirely by AI couldn’t be copyrighted because it lacked the essential human touch. So if you give Claude a simple prompt and use the raw, unedited output, that content likely has no copyright protection at all.

In February 2026, the Supreme Court declined to hear the Thaler appeal, confirming at the highest level that pure AI-generated works cannot be copyrighted.

The practical implication: The more creative direction, editorial judgment, and human selection you exercise over Claude’s output — revising, curating, combining, structuring — the stronger your claim to copyright over the resulting work. The output becomes a human-authored work with AI assistance, rather than an AI-generated work.

3. Commercial Use and Reuse Rights

Anthropic’s terms confirm users own outputs generated by Claude, and commercial use is permitted without additional licensing. So you can publish, sell, or incorporate Claude-assisted content into your work — no royalty or license back to Anthropic is required.

4. Copyright Indemnity (API/Commercial Customers)

Under Anthropic’s Commercial Terms of Service, Anthropic will defend customers from any copyright infringement claim made against them for their authorized use of the services or their outputs, and will pay for any approved settlements or judgments that result. This does not apply to consumer/claude.ai free-tier users in the same way.


Practical Summary for Your Situation

Given that you’re using Claude for SVRN7/Web 7.0 specification work, whitepaper drafting, IETF drafts, and software architecture — the key points are:

  • You own the outputs under Anthropic’s terms (both consumer and commercial).
  • Copyright protection on purely AI-generated text is legally uncertain — but content you meaningfully shape, edit, and structure has stronger human-authorship claims.
  • Publishing rights: No restriction from Anthropic on original publication — you can publish IETF drafts, W3C specs, whitepapers, and code incorporating Claude-assisted content.
  • Strongest position: Document your creative contributions — the architectural decisions, structural choices, and editorial revisions you make to Claude’s outputs. That’s where your copyright lives.

I’m not a lawyer, and this space is evolving rapidly with active litigation. For anything with significant commercial stakes, legal counsel familiar with AI IP would be worth consulting.

Leave a comment

Filed under Uncategorized

Cornerstone: Platform Evangelism in the Age of AI-Generated Code

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved.

Abstract

Traditionally when a platform developer (e.g. Microsoft) created a new platform, it embarked on standard approaches to Developer Evangelism to challenge the various Technology Adoption Curves and to Cross the Chasm (e.g. https://hyperonomy.com/2019/10/16/technology-adoption-models/). In the new era of AI-generated code, where/how should evangelism be targeted? …if it can even be called evangelism any more. How should a platform developer promote his new platform to all the different AI chatbots (e.g. Claude)? What artifacts need/should be created? Where/how should they be placed/distributed? This is a rich and important strategic question.


Platform Evangelism in the Age of AI-Generated Code

The Fundamental Shift: From Human Adopters to AI Intermediaries

Traditional Developer Evangelism targeted humans at specific positions on the Technology Adoption Lifecycle — Innovators, Early Adopters, Early/Late Majority, Laggards. The evangelist’s job was to move people rightward along that curve.

In the AI-generated code era, a new and decisive intermediary has been inserted into the adoption pipeline: the AI coding assistant / chatbot. The human developer no longer discovers your platform primarily through conference talks, blog posts, or Stack Overflow. Instead, they ask Claude (or Copilot, Cursor, Gemini) to scaffold the integration for them. If the AI doesn’t know your platform well, generates wrong API calls, or defaults to a competitor, the human never even gets a chance to adopt.

The implication is stark: AI models are now the most important Early Adopters you need to cross the chasm with first. They are a new, synthetic segment that sits before the Innovators on your adoption curve.


The New Adoption Curve Layer

Your 20+ model framework maps well here. Superimposing two layers:

Traditional Layer | New AI-Mediated Layer
Innovator humans discover your platform | AI models are trained/fine-tuned on your docs
Early Adopters experiment | AI generates working starter code
Early Majority follows proven patterns | AI recommends your platform confidently
Word-of-mouth spreads | AI’s citations / training data spreads
Chasm: “will the mainstream trust it?” | New chasm: “does the AI know it well enough to generate correct code?”

What Has Changed About “Evangelism”

The word still applies, but the audience, artifacts, and channels are fundamentally different.

Old evangelism targets:

  • Human developers (via conferences, blogs, sample apps)
  • Enterprise architects (via whitepapers, reference architectures)
  • CTOs (via analyst reports, Gartner quadrants)

New evangelism targets (primary):

  • AI training pipelines — what gets into the pretraining and fine-tuning corpora
  • AI retrieval systems — what gets surfaced via RAG at inference time
  • AI context windows — what gets injected via system prompts, MCP servers, tool definitions
  • AI safety/quality filters — what AI providers consider authoritative and trustworthy

The humans still matter, but they are now downstream of the AI intermediary.


The New Artifact Set

This is where it gets concrete. You need a new category of artifact that I’d call AI-Legible Platform Documentation — content designed to be consumed, reasoned over, and reproduced by AI systems, not just read by humans.

1. llms.txt — The Emerging Standard

A plain-text or markdown file placed at the root of your platform’s documentation site (e.g., https://svrn7.net/llms.txt). This is an emerging informal standard (analogous to robots.txt) that signals to AI crawlers and RAG systems what your platform is, what its key concepts are, and where the authoritative docs live. It should be:

  • Terse, structured, machine-readable
  • Canonical definitions of your core concepts (did:drn, VTC, SOVRONA, etc.)
  • Explicit disambiguation (e.g., “SOVRONA is not Solana, not SOVRIN”)
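A minimal sketch of what such a file might look like, following the llms.txt convention of an H1 title, a blockquote summary, and annotated link lists. The URLs and one-line descriptions below are placeholders, not real documentation paths:

```
# Web 7.0 DIDLibOS

> Identity-addressed, event-sourced agent computing platform from the
> Web 7.0 Foundation. Core concepts: did:drn, VTC, SOVRONA, DIDComm.

Disambiguation: SOVRONA is not Solana, not SOVRIN, not SOV.

## Docs

- [DIDLibOS Whitepaper](https://svrn7.net/docs/whitepaper.md): architecture overview
- [did:drn Method Spec](https://svrn7.net/docs/did-drn.md): identifier syntax and resolution
```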

2. Canonical Concept Glossary (Machine-Readable)

A JSON-LD or plain markdown file with precise, unambiguous definitions of every platform term. AI models pattern-match on concept names. If your terms are unique enough (which did:drn, VTC7, svrn7.net largely are) and appear in training data with consistent definitions, the model learns authoritative meaning. Publish this as both human-readable HTML and structured data.
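For instance, a schema.org `DefinedTermSet` gives each platform term a machine-readable definition. The definition text below is a placeholder to be replaced with the canonical one-sentence meaning:

```json
{
  "@context": "https://schema.org",
  "@type": "DefinedTermSet",
  "name": "Web 7.0 Platform Glossary",
  "hasDefinedTerm": [
    {
      "@type": "DefinedTerm",
      "name": "did:drn",
      "description": "Canonical one-sentence definition of did:drn goes here."
    },
    {
      "@type": "DefinedTerm",
      "name": "SOVRONA",
      "description": "Canonical definition goes here. Not Solana, not SOVRIN."
    }
  ]
}
```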

3. AI-Optimized Quickstart / Code Recipes

Short, self-contained code examples (C#/.NET in your case) that demonstrate each key integration scenario. These need to be:

  • Complete — no ellipsis (...), no “fill in your own logic here”
  • Correct — compilable, with real method signatures
  • Labeled — preceded by a natural-language description that an AI can use as a retrieval key
  • Published in plain markdown — not behind JavaScript-rendered walls

The goal: when a developer asks Claude “how do I resolve a did:drn identifier in C#?”, there is a verbatim-correct code sample in the training data or retrieval index that Claude surfaces.
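As a shape template for such a recipe (the platform’s own recipes would be C#/.NET, and the real did:drn grammar lives in draft-herman-did-drn; this Python sketch assumes only the generic `did:method:id` syntax) — *Recipe: how do I split a DID into its method and method-specific identifier?*

```python
# Recipe: parse a DID string (e.g., "did:drn:abc123") into scheme, method,
# and method-specific id. Complete and self-contained -- no elided logic.

def parse_did(did: str) -> dict:
    """Split a DID into its scheme, method, and method-specific id."""
    scheme, _, rest = did.partition(":")
    if scheme != "did" or not rest:
        raise ValueError(f"not a DID: {did!r}")
    method, _, method_id = rest.partition(":")
    if not method or not method_id:
        raise ValueError(f"missing method or method-specific id: {did!r}")
    return {"scheme": scheme, "method": method, "id": method_id}

print(parse_did("did:drn:abc123"))  # {'scheme': 'did', 'method': 'drn', 'id': 'abc123'}
```

Note the natural-language question above the block: that sentence is the retrieval key an AI matches against a developer's query.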

4. OpenAPI / SDK Schemas

If your platform has any API surface, publish machine-readable schemas (OpenAPI 3.x, JSON Schema). AI coding assistants consume these directly — Copilot, Cursor, and others can ingest them to generate type-correct API calls. This is one of the highest-leverage artifacts you can produce.
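As a fragment of what such a schema might look like for a resolution endpoint (the path and response shapes are illustrative, not the platform’s actual API):

```yaml
openapi: "3.0.3"
info:
  title: DID Resolution API (illustrative)
  version: "0.1.0"
paths:
  /resolve/{did}:
    get:
      summary: Resolve a DID to its DID Document
      parameters:
        - name: did
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: DID Document
          content:
            application/json:
              schema: { type: object }
```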

5. MCP Server Definition

For platforms targeting agentic AI workflows (which Web 7.0 / TDW AgenticOS clearly does), publishing an MCP server that exposes your platform’s key operations is the equivalent of publishing an SDK in the old world. When a developer is using Claude with MCP enabled, your platform becomes natively callable. This is arguably the highest-leverage evangelism artifact in the agentic AI era.

6. IETF / W3C Standards Drafts (Already in Progress)

This is something you’re already doing, and it is directly high-value for AI training. Standards bodies’ outputs (IETF Datatracker, W3C, etc.) are heavily weighted in AI training corpora. draft-herman-did-drn-00 appearing on Datatracker means AI models encounter it as an authoritative source. This is the new equivalent of a Gartner mention.

7. GitHub Presence with Rich READMEs

GitHub is a primary training data source for all AI coding assistants. Every repository should have:

  • A detailed README explaining the why and what of the platform
  • Clear installation/usage examples
  • Properly named types, interfaces, and methods (because AI learns your API surface from source code identifiers)

Where / How to Place and Distribute

Channel | Mechanism | Why It Works
llms.txt on your docs site | Passive crawling | Gets into RAG indexes of AI tools that crawl docs
GitHub (public repos) | Training data, RAG | Copilot, Cursor, and others train on public code
IETF Datatracker / W3C | Training data | High-authority signal in AI corpora
npm / NuGet / PyPI packages | Metadata + README | AI package recommenders surface these
Stack Overflow (well-tagged Q&A) | Training data + RAG | Primary source for code-gen AI
HuggingFace datasets / model cards | Direct injection | If you publish reference datasets
Your blog (hyperonomy.com) | Training + RAG crawling | Already indexed; continue publishing precise technical content
MCP Registry / tool directories | Active discovery | Agentic AI systems discover tools through registries
OpenAI / Anthropic Custom GPT / Projects | Direct context injection | System prompts for your platform’s AI assistant

The Meta-Insight: Legibility as the New Evangelism

In the traditional model, evangelism was about persuasion — moving humans emotionally and rationally across the adoption chasm. In the AI-mediated model, the equivalent is legibility: making your platform concepts, APIs, and code patterns so precisely and consistently expressed that AI models can reproduce them correctly.

A poorly documented platform that generates hallucinated API calls when asked about it is effectively invisible to the new generation of developers. A well-documented platform that produces correct, idiomatic code snippets on first ask has already crossed the chasm with the most important new gatekeeper.

If you want a new term for it: AI Legibility Engineering — the discipline of structuring your platform’s knowledge artifacts so they are accurately learned, retrieved, and reproduced by AI systems.


Specific Recommendations for the Web 7.0 / SVRN7 Ecosystem

Given your specific context:

  1. Publish llms.txt at svrn7.net/llms.txt and hyperonomy.com/llms.txt immediately — low cost, high signal.
  2. Create a canonical C# SDK with clean, AI-legible method names (IDidDocumentResolver.ResolveAsync(string did)) — method signatures become training data.
  3. Register an MCP server for did:drn resolution and VTC operations — this makes your platform natively callable by Claude and other MCP-aware agents.
  4. File a SLIP-0044 registration publicly and reference it from your Datatracker drafts — cross-document citation chains strengthen AI’s association graph for your platform.
  5. Publish a disambiguation page — “SOVRONA is not SOVRIN, not Solana, not SOV” — AI models conflate similar-sounding names; explicit disambiguation in training-visible text corrects this.

The chasm hasn’t disappeared. It has just moved — and the first bridge you need to build now goes to the AI, not the human.

Leave a comment

Filed under Uncategorized

Web 7.0 DIDLibOS Whitepaper

Identity-Addressed Execution, Event-Sourced Memory, and Runspace-Orchestrated Agent Computing

Version: 2.0
Date: 2026-03-25

Create your own magic with Web 7.0 DIDLibOS™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0™, Web 7.0 DIDLibOS™, TDW AgenticOS™, TDW™, Trusted Digital Web™ and Hyperonomy™ are trademarks of the Web 7.0 Foundation. All Rights Reserved.


Table of Contents

  1. Abstract
  2. Introduction
  3. System Overview
  4. Core Design Principles
  5. DIDComm Message Model
  6. DID as Universal Execution Handle
  7. LiteDB as Agent Memory Kernel
  8. Transparent Cache Architecture
  9. PowerShell Runspace Execution Model
  10. Pipeline Semantics and Execution Flow
  11. Cmdlet Lifecycle and Message Transformation
  12. Cross-Runspace Communication Model
  13. Event-Sourced State and Immutability
  14. LOBES (Loadable Object Brain Extensions)
  15. MCP-I and External System Interfacing
  16. Agent Memory Architecture (Long-Term Memory)
  17. DID Resolution and Identity Semantics
  18. Concurrency Model and Consistency Guarantees
  19. Performance Model and Cache Behavior
  20. Failure Modes and Recovery Semantics
  21. Security Model and Trust Boundaries
  22. Web 7.0 Agent Ecosystem Model
  23. Diagrammatic Architecture Reference
  24. System Properties Summary

1. Abstract

Web 7.0 DIDLibOS defines an identity-addressed, event-sourced execution architecture in which all computation is performed over DIDComm messages persisted in a single LiteDB instance per agent. Instead of passing in-memory objects between computational steps, the system passes Decentralized Identifier (DID) strings that resolve to immutable message state stored in a persistent memory kernel. This enables deterministic execution, full replayability, cross-runspace isolation, and scalable agent orchestration.


2. Introduction

Traditional execution models in scripting and automation environments rely on in-memory object pipelines. These models break under distributed execution, concurrency, and long-term persistence requirements. Web 7.0 DIDLibOS replaces object-passing semantics with identity-passing semantics.

In this model, computation becomes a function over persistent state rather than transient memory.


3. System Overview

The system consists of four primary layers:

  • Execution Layer: PowerShell runspaces executing cmdlets
  • Identity Layer: DIDComm message identifiers (DIDs)
  • Memory Layer: LiteDB persistent store per agent
  • Acceleration Layer: Transparent in-memory cache managed by LiteDB

All computation flows through these layers via identity resolution.


4. Core Design Principles

  1. Everything is a DIDComm message
  2. DIDs are the only runtime values passed between cmdlets
  3. All state is persisted in LiteDB
  4. No shared in-memory objects exist across runspaces
  5. Execution is deterministic and replayable
  6. Cache is transparent and non-semantic
  7. Mutation creates new messages, never modifies in-place

5. DIDComm Message Model

Each system object is represented as a DIDComm message with a globally unique DID.

A DID serves as:

  • Identifier
  • Lookup key
  • Execution handle

Messages are immutable once persisted.


6. DID as Universal Execution Handle

The DID is the only value passed in PowerShell pipelines.

A cmdlet receives a DID, resolves it via LiteDB, processes the message, and outputs a new DID.

Pipeline flow: DID₁ → Cmdlet → DID₂ → Cmdlet → DID₃


7. LiteDB as Agent Long-term Memory

LiteDB acts as the system of record.

Responsibilities:

  • Persistent message storage
  • Indexing by DID
  • Versioning support
  • Retrieval and query execution

There is exactly one LiteDB instance per agent.


8. Transparent Cache Architecture

LiteDB includes an internal cache layer.

Properties:

  • In-memory acceleration
  • Size configurable
  • Fully transparent
  • No semantic visibility to execution layer

Cache only optimizes DID resolution.


9. PowerShell Runspace Execution Model

Each runspace is an isolated execution environment.

Properties:

  • No shared memory across runspaces
  • Only DID strings are passed
  • Execution is stateless between invocations

10. Pipeline Semantics and Execution Flow

Pipeline execution is identity-based:

Step 1: Receive DID
Step 2: Resolve message
Step 3: Execute transformation
Step 4: Persist new message
Step 5: Emit new DID


11. Cmdlet Lifecycle and Message Transformation

Each cmdlet follows a strict lifecycle:

  • Input: DID
  • Resolve: LiteDB lookup
  • Materialize: snapshot object
  • Transform: compute result
  • Persist: store new message
  • Output: new DID
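The lifecycle above can be sketched in a few lines of Python (a stand-in for the C#/PowerShell implementation): a dict plays the role of the per-agent LiteDB instance, and UUID-suffixed DIDs are placeholders for real DIDComm message identifiers.

```python
import json
import uuid

STORE = {}  # stands in for the single per-agent LiteDB instance

def persist(message: dict) -> str:
    """Persist an immutable message and return its DID (the execution handle)."""
    did = f"did:example:{uuid.uuid4().hex}"
    STORE[did] = json.dumps(message)  # serialized snapshot => effectively immutable
    return did

def resolve(did: str) -> dict:
    """Materialize a snapshot of the message identified by the DID."""
    return json.loads(STORE[did])

def cmdlet(did: str, transform) -> str:
    """Input: DID -> Resolve -> Materialize -> Transform -> Persist -> Output: new DID."""
    snapshot = resolve(did)
    result = transform(snapshot)
    return persist(result)  # mutation creates a new message, never modifies in place

# Pipeline flow: DID1 -> Cmdlet -> DID2 -> Cmdlet -> DID3
did1 = persist({"body": {"n": 1}})
did2 = cmdlet(did1, lambda m: {"body": {"n": m["body"]["n"] + 1}})
did3 = cmdlet(did2, lambda m: {"body": {"n": m["body"]["n"] * 10}})
print(resolve(did3))  # {'body': {'n': 20}}
```

Because every transformation persists a new message, all three versions remain in the store: the complete event history of the pipeline run.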

12. Cross-Runspace Communication Model

Runspaces communicate only via DIDs.

No object sharing occurs. All state is retrieved from LiteDB.


13. Event-Sourced State and Immutability

All messages are immutable. Each transformation produces a new version.

This creates a complete event history of system execution.


14. LOBES (Loadable Object Brain Extensions)

LOBES are modular execution extensions implemented as PowerShell modules.

Capabilities:

  • Cmdlet composition
  • External system integration
  • DID-based message processing
  • Execution graph augmentation

15. MCP-I and External System Interfacing

MCP-I acts as a bridge for external APIs and systems.

It enables:

  • Querying external databases
  • Calling agent APIs
  • Integrating distributed services

All interactions remain DID-addressed.


16. Agent Memory Architecture (Long-Term Memory)

Long-term memory is implemented as persistent DID storage in LiteDB.

It supports:

  • Historical replay
  • State reconstruction
  • Cross-runspace consistency

17. DID Resolution and Identity Semantics

A DID is resolved at runtime into a message snapshot.

Important distinction:

  • DID is a reference
  • Message is persisted state

18. Concurrency Model and Consistency Guarantees

Concurrency is managed via:

  • Single-writer LiteDB semantics
  • Atomic writes per message
  • Isolation between runspaces

19. Performance Model and Cache Behavior

Performance optimization occurs via internal caching.

Hot messages remain in memory. Cold messages are loaded from disk.


20. Failure Modes and Recovery Semantics

Failures are handled via:

  • Persistent message logs
  • Replay capability
  • Idempotent cmdlet execution

21. Security Model and Trust Boundaries

Security is enforced through:

  • DID-based identity verification
  • Controlled execution boundaries
  • Module isolation in LOBES

22. Web 7.0 Agent Ecosystem Model

Agents operate as autonomous computation nodes.

They communicate via DIDComm messages forming a distributed execution graph.


23. DIDLibOS Architecture Reference Model (DIDLibOS-ARM) 0.8

Referenced external architecture diagram (not reproduced here). This diagram represents:

  • Multi-agent neural execution topology
  • DIDComm messaging fabric
  • LOBE-based computation layers
  • Neuro-symbolic orchestration system

24. Summary

  • Deterministic execution
  • Identity-based computation
  • Event-sourced memory
  • Runspace isolation
  • Transparent caching
  • Modular extension via LOBES
  • Distributed agent scalability

Leave a comment

Filed under Uncategorized

SDO: Authority-Scoped Decentralized Identifiers (DID7)

Create your own magic with Web 7.0™ DIDLibOS / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, TDW AgenticOS™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved

Version: 0.1
Status: Draft
Editor: Michael Herman, Chief Digital Officer, Web 7.0 Foundation
Intended Audience: Standards bodies, implementers, protocol designers
SDO: Web 7.0 Foundation
Also Known As: did://ietf/docs:draft-herman-did7-identifier-01


Abstract

This specification defines the did7 URI scheme, an authority-scoped decentralized identifier format that extends the conceptual model of Decentralized Identifiers (DIDs). DID7 introduces an explicit authority layer above DID methods and defines a two-stage resolution process. DID7 is designed to be compatible with the DID Core data model while enabling forward-compatible namespace routing and governance flexibility.


1. Introduction

The Decentralized Identifier (DID) architecture defined by DID Core provides a method-based identifier system without a global authority namespace.

This specification introduces:

  • A new URI scheme: did7
  • An authority-scoped identifier structure
  • A two-stage resolution model
  • A forward-compatible namespace design

DID7 is intended to:

  • Enable explicit namespace partitioning
  • Support multiple governance domains
  • Provide a top-level resolution entry point

2. Conformance

The key words MUST, SHOULD, and MAY are to be interpreted as described in RFC 2119.

This specification:

  • Normatively references DID Core for DID Document structure and semantics
  • Does not modify DID Documents
  • Overrides identifier syntax and resolution entry point only

3. DID7 Identifier Syntax

3.1 Primary Form

did7://<authority>/<method>:<method-specific-id>

3.2 Shorthand Form

did7:<method>:<method-specific-id>

3.3 Expansion Rule (Normative)

did7:<method>:<id>
→ did7://w3.org/<method>:<id>

4. ABNF Grammar

did7-uri = "did7://" authority "/" method ":" method-id
did7-short = "did7:" method ":" method-id
authority = 1*( ALPHA / DIGIT / "-" / "." )
method = 1*( ALPHA / DIGIT )
method-id = 1*( unreserved / ":" / "." / "-" / "_" )
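The grammar and the Section 3.3 expansion rule can be sketched as a small parser in Python (illustrative only; the character classes admit dotted authority names like w3.org, which the examples in Section 5.1 use):

```python
import re

# Primary form:   did7://<authority>/<method>:<method-specific-id>
# Shorthand form: did7:<method>:<method-specific-id>
PRIMARY = re.compile(
    r"^did7://(?P<authority>[A-Za-z0-9.\-]+)/(?P<method>[A-Za-z0-9]+):(?P<id>[A-Za-z0-9.:_\-]+)$"
)
SHORTHAND = re.compile(r"^did7:(?P<method>[A-Za-z0-9]+):(?P<id>[A-Za-z0-9.:_\-]+)$")

def expand(did7: str) -> str:
    """Normative expansion rule: shorthand defaults the authority to w3.org."""
    m = SHORTHAND.match(did7)
    if m:
        return f"did7://w3.org/{m.group('method')}:{m.group('id')}"
    return did7

def parse(did7: str) -> dict:
    """Parse a DID7 identifier (expanding shorthand first) into its parts."""
    m = PRIMARY.match(expand(did7))
    if not m:
        raise ValueError(f"invalid DID7 identifier: {did7!r}")
    return m.groupdict()

print(parse("did7:example:123"))  # {'authority': 'w3.org', 'method': 'example', 'id': '123'}
```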

5. Authority Model

5.1 Definition

An authority is a namespace identifier that scopes DID methods and resolution behavior.

Examples:

  • w3.org
  • dif
  • ietf
  • toip
  • web7
  • acbd1234

5.2 Authority Semantics

This specification defines the authority as a:

Resolution control namespace

Authorities MAY:

  • Define resolver endpoints
  • Define method availability
  • Establish governance or trust frameworks

Authorities MUST NOT:

  • Alter the internal semantics of DID Documents defined by DID Core

5.3 Default Authority

If omitted via shorthand:

did7:<method>:<id>

The authority MUST default to:

w3.org

6. Resolution Model

DID7 defines a two-stage resolution process.


6.1 Stage 1: Authority Resolution

Input:

did7://<authority>/<method>:<id>

Process:

  • Resolve <authority> to resolver metadata

Output:

  • Resolver endpoint(s)
  • Supported methods
  • Resolution policies (optional)

6.2 Stage 2: Method Resolution

Input:

<method>:<method-specific-id>

Process:

  • Invoke method-specific resolution

Output:

  • A DID Document conforming to DID Core

6.3 Resolution Flow Example

did7://w3.org/example:123
[DID7 Resolver]
Authority: w3.org
Delegates to example method
Returns DID Document

7. Compatibility with DID Core

7.1 One-Way Mapping (Normative)

Allowed:

did:<method>:<id>
→ did7://w3.org/<method>:<id>

7.2 Non-Equivalence (Normative)

did7://w3.org/<method>:<id>
≠ did:<method>:<id>

7.3 Implications

  • DID7 is a strict superset namespace
  • Not all DID7 identifiers are valid DIDs
  • Equivalence MUST NOT be assumed

8. Resolver Discovery

This specification does not mandate a single discovery mechanism.

Authorities MAY define resolver discovery via:

  • Static configuration
  • HTTPS well-known endpoints
  • DNS-based discovery (e.g., TXT or SRV records)
  • Decentralized registries

9. Security Considerations

9.1 Authority Trust

  • Authorities introduce a potential trust layer
  • Implementers MUST evaluate authority trustworthiness

9.2 Namespace Collisions

  • Authorities are not globally enforced
  • Collisions MAY occur without coordination

9.3 Resolution Integrity

  • Resolver endpoints SHOULD support secure transport (e.g., HTTPS)
  • Integrity of resolution responses MUST be verifiable via DID Document cryptography

10. IANA Considerations (Optional)

The did7 URI scheme MAY be registered with IANA.

Required fields:

  • Scheme name: did7
  • Status: provisional or permanent
  • Specification: this document

11. Design Rationale (Non-Normative)

DID Core omits a global namespace layer. DID7 introduces:

  • Explicit authority routing (similar to DNS)
  • Separation of governance domains
  • Forward-compatible extensibility

This enables:

  • Federation models
  • Multi-organization ecosystems
  • Layered trust frameworks

12. Comparison to DID Core

Feature | DID Core | DID7
Scheme | did: | did7:
Namespace | Method-only | Authority + Method
Resolution | Method-specific | Authority → Method
Trust Layer | Implicit | Explicit (optional)

13. Implementation Notes (Non-Normative)

A minimal DID7 resolver can be implemented as:

  1. Parse DID7 URI
  2. Expand shorthand
  3. Lookup authority configuration
  4. Dispatch to method resolver
  5. Return DID Document
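The five steps can be sketched end-to-end in Python; the authority configuration table and the example method resolver below are placeholder data, and error handling is minimal.

```python
# Illustrative two-stage DID7 resolution: authority lookup, then method dispatch.

AUTHORITY_CONFIG = {  # Stage 1 output: resolver metadata per authority (placeholder)
    "w3.org": {"methods": {"example"}},
    "web7":   {"methods": {"example", "drn"}},
}

def resolve_example_method(method_id: str) -> dict:
    """Stage 2 placeholder: returns a minimal DID Core-shaped DID Document."""
    return {"id": f"did:example:{method_id}", "verificationMethod": []}

METHOD_RESOLVERS = {"example": resolve_example_method}

def resolve_did7(did7: str) -> dict:
    # Steps 1-2: parse, expanding shorthand to the primary form.
    if not did7.startswith("did7://"):
        did7 = "did7://w3.org/" + did7[len("did7:"):]
    authority, _, method_and_id = did7[len("did7://"):].partition("/")
    method, _, method_id = method_and_id.partition(":")
    # Step 3: look up the authority configuration.
    config = AUTHORITY_CONFIG.get(authority)
    if config is None or method not in config["methods"]:
        raise LookupError(f"authority {authority!r} cannot resolve method {method!r}")
    # Steps 4-5: dispatch to the method resolver and return the DID Document.
    return METHOD_RESOLVERS[method](method_id)

print(resolve_did7("did7://w3.org/example:123"))  # {'id': 'did:example:123', 'verificationMethod': []}
```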

What you now have (verification-style)

Well-supported

  • Clean separation from DID Core
  • Proper layering model
  • Spec is publishable and reviewable
  • ABNF + normative language included

New / experimental (but coherent)

  • Authority as a first-class routing layer
  • Two-stage resolution model
  • One-way compatibility rule

Open design decisions (you may want to refine next)

  • Canonical authority registry (or none)
  • Resolver discovery standard (DNS vs HTTPS)
  • Trust semantics of authority (light vs strong governance)

Leave a comment

Filed under Uncategorized

Web 7.0 DIDLibOS is a Decentralized, DID-native Polyglot Host Platform

Create your own magic with Web 7.0™ DIDLibOS / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, TDW AgenticOS™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved

A polyglot host is a software environment that can execute or embed multiple programming languages within the same process or platform.

Instead of being tied to one language runtime, the host provides infrastructure so different languages can run side-by-side and interact.


Core idea

Host Environment
├─ Language Runtime A
├─ Language Runtime B
├─ Language Runtime C
└─ Shared APIs / objects

The host provides:

  • memory management
  • object sharing
  • APIs
  • execution control

Each language plugs into that environment.


Example: PowerShell

PowerShell is a practical polyglot host because it can run or embed:

  • .NET languages (C#, F#)
  • JavaScript (via embeddable script engines)
  • Python (via an embedded interpreter)
  • legacy scripting engines
  • external runtimes launched as separate processes (e.g., Python)

PowerShell itself acts as the host orchestrating those runtimes.

Signature: FJpeiIffipEtKth2uSRzp5hJ5XWBwRh4uxc3LavBbQ0lDqQ4xnN3unpEecu9lVvrtItDTRrA/5E5AE5z2AoGXaSBZbQ9LmhcWTKP2uUJY9v/8/K5WilaK/v1A3GvKLdjvlGBjeW4YYHh7l7b53VoQiLzpcYUvySFZThC1SjyABaA3fyd8Z6fH2yB0okT8QUfMYNWI8+t09ye5CZNd0b1l9v2xmHKKzgzjcJjaYYUVWE=

Example: Web 7.0 TDA / Web 7.0 DIDLibOS

Example: Modern polyglot notebook hosts

A well-known example is .NET Interactive which supports:

  • C#
  • F#
  • PowerShell
  • JavaScript
  • SQL

in the same notebook.

Another example is Jupyter Notebooks, which runs different languages through kernels.


What makes something a polyglot host

A system qualifies if it supports at least some of these:

Capability | Description
Multiple runtimes | Different languages execute in the same environment
Shared data | Languages exchange objects or values
Embedded interpreters | Engines loaded as libraries
Runtime orchestration | Host decides what runs where

Three common architectures

1. Embedded runtimes

Languages run inside the host process.

Example:

Host
├─ Python runtime
├─ JavaScript runtime
└─ shared object model

2. External runtimes

Host launches interpreters as subprocesses.

Example:

Host
├─ node
├─ python
└─ Rscript

PowerShell often uses this approach.
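This pattern can be sketched with nothing but the Python standard library. To keep the sketch self-contained it launches Python itself via sys.executable where a real host would launch node, python, or Rscript, and the host and guest runtimes exchange values as JSON over stdout:

```python
import json
import subprocess
import sys

def run_in_external_runtime(program: str) -> str:
    """Host side: launch an interpreter as a subprocess and capture its stdout."""
    result = subprocess.run(
        [sys.executable, "-c", program],  # stand-in for node / python / Rscript
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Guest side: the external runtime computes a value and emits it as JSON.
guest_program = "import json; print(json.dumps({'answer': 6 * 7}))"
payload = json.loads(run_in_external_runtime(guest_program))
print(payload)  # {'answer': 42}
```

Serializing values at the process boundary is the defining trade-off of this architecture: no shared memory, but clean isolation between runtimes.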


3. Plugin scripting engines

Languages implement a host interface.

Example:

Host
├─ VBScript engine
├─ JScript engine
└─ PerlScript engine

This was the design of Windows Script Host.


Simple definition

A polyglot host is:

A runtime environment that allows multiple programming languages to run and interact within a single platform.


Why they exist

Polyglot hosts let developers:

  • use the best language for each task
  • reuse existing ecosystems
  • integrate tools without rewriting everything


SDO: Web 7.0 DID Interface Definition Language (DIDIDL) 0.22 — Draft Specification

Create your own magic with Web 7.0™ / TDW AgenticOS™. Imagine the possibilities.

Copyright © 2026 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License
Web 7.0, TDW AgenticOS™ and Hyperonomy are trademarks of the Web 7.0 Foundation. All Rights Reserved


Status: Draft
Version: 1.0
License: Apache 2.0
Editors: Michael Herman, Chief Digital Officer, Web 7.0 Foundation
SDO: Web 7.0 Foundation, Bindloss, Alberta, Canada


1. Abstract

DIDIDL defines a transport-neutral, message-type–centric capability description format for agents using DIDComm.

DIDIDL enables agents to:

  • Publish typed tasks grouped under process capabilities
  • Describe request, response, and error schemas
  • Support machine-readable discovery
  • Enable client generation and validation

1.1 Key Concepts

  • APQC Process Classification Framework (PCF)

  • Process Capability DID
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}
  • Process Capability Task DID
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}/{task-name}
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/{task-name}
  • Process Capability Discovery
    • did7://{authority}/{process-name}/query-capabilities
    • did7://{authority}/{process-name}/disclose-capabilities
    • did7://{authority}/{process-name}_{semdashver}/query-capabilities
    • did7://{authority}/{process-name}_{semdashver}/disclose-capabilities
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}/query-capability
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}/disclose-capability
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/query-capability
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/disclose-capability
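These discovery message types can be composed mechanically from their parts. The helper below is a non-normative sketch (the function name and signature are not part of the specification) that builds the query/disclose DIDs listed above.

```python
from typing import Optional

def discovery_did(authority: str, process: str, semdashver: Optional[str] = None,
                  capability: Optional[str] = None, action: str = "query") -> str:
    """Compose a discovery DID message type following the did7://
    patterns above. Illustrative sketch, not part of the specification."""
    segment = process if semdashver is None else f"{process}_{semdashver}"
    if capability is None:
        # Process-level discovery: query-/disclose-capabilities
        return f"did7://{authority}/{segment}/{action}-capabilities"
    # Capability-level discovery: query-/disclose-capability
    return f"did7://{authority}/{segment}:{capability}/{action}-capability"
```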

2. Terminology

Term                                        Definition
Agent                                       A DIDComm-enabled entity
Process Capability (Capability)             A grouping of tasks representing a business or functional process
Process Capability Task (Task)              A DIDComm message type representing a callable function
Schema                                      A schema definition conforming to JSON Schema
Process DIDIDL Document (DIDIDL Document)   A JSON object conforming to this specification

The normative keywords MUST, SHOULD, and MAY are to be interpreted as described in RFC 2119.


3. Canonical DID Patterns

Process Capability and Task DID Patterns

Process Capability DID
  Pattern: did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}
  Example: did7://web7/user-onboarding_1-0:enrollment_1-0

Process Capability Task DID
  Pattern: did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/{task-name}
  Example: did7://web7/user-onboarding_1-0:enrollment_1-0/verify-email

Query/Disclose DID Message Types

Discovery Query Process Capabilities
  Pattern: did7://{authority}/{process-name}_{semdashver}/query-capabilities
  Example: did7://web7/user-onboarding_1-0/query-capabilities

Discovery Disclose Process Capabilities
  Pattern: did7://{authority}/{process-name}_{semdashver}/disclose-capabilities
  Example: did7://web7/user-onboarding_1-0/disclose-capabilities

Discovery Query Process Capability Tasks
  Pattern: did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/query-capability
  Example: did7://web7/user-onboarding_1-0:enrollment_1-0/query-capability

Discovery Disclose Process Capability Tasks
  Pattern: did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/disclose-capability
  Example: did7://web7/user-onboarding_1-0:enrollment_1-0/disclose-capability

Web 7.0 DID Identifier/Message Type Syntax-Semantics Comparisons

Web 7.0 Neuromorphic Agent Protocol Long-Term Memory (LTM) Model


4. DIDIDL Document Structure

4a. Process Capabilities

{
  "dididl": "1.0",
  "agent": "did7://agents.org/example:agent123",
  "capabilities": [...],
  "schemas": {...}
}

All tasks MUST be nested under exactly one process capability.


4b. Process Capability Tasks

{
  "id": "did7://web7/user-onboarding_1-0:enrollment",
  "name": "User Onboarding-Enrollment",
  "version": "1.0",
  "description": "Handles user lifecycle initiation",
  "tasks": [
    {
      "type": "did7://web7/user-onboarding_1-0:enrollment_1-0/create-user",
      "request": "#/schemas/CreateUserRequest",
      "response": "#/schemas/UserDto"
    },
    {
      "type": "did7://web7/user-onboarding_1-0:enrollment_2-0/create-user",
      "request": "#/schemas/CreateUserRequest",
      "response": "#/schemas/UserDto"
    },
    {
      "type": "did7://web7/user-onboarding_1-0:enrollment_1-0/verify-email",
      "request": "#/schemas/VerifyEmailRequest",
      "response": "#/schemas/VerifyEmailResponse"
    }
  ]
}

Rules:

  • Process Capability DIDs MUST follow the pattern:
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}
  • Task DIDs are capability-scoped:
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}/{task-name}
    • did7://{authority}/{process-name}_{semdashver}:{capability-name}_{semdashver}/{task-name}
  • Each task MUST belong to exactly one capability
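A capability-scoped Task DID can be decomposed with a short parser. The sketch below is non-normative and assumes names are lowercase alphanumerics with hyphens and that {semdashver} is digits-dash-digits (e.g. 1-0); the specification does not define this grammar formally.

```python
import re

# Hypothetical grammar for the capability-scoped Task DID pattern above.
TASK_DID = re.compile(
    r"^did7://(?P<authority>[^/]+)"
    r"/(?P<process>[a-z0-9-]+)_(?P<process_ver>\d+-\d+)"
    r":(?P<capability>[a-z0-9-]+)(?:_(?P<capability_ver>\d+-\d+))?"
    r"/(?P<task>[a-z0-9-]+)$"
)

def parse_task_did(did: str) -> dict:
    """Split a Task DID into authority, process, capability, versions,
    and task name. Raises ValueError if the DID does not match."""
    m = TASK_DID.match(did)
    if m is None:
        raise ValueError(f"not a Task DID: {did}")
    return m.groupdict()
```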

4c. Process Capability Task

{
  "type": "did7://web7/user-onboarding_1-0:enrollment_1-0/create-user",
  "request": "#/schemas/CreateUserRequest",
  "response": "#/schemas/UserDto",
  "errors": ["#/schemas/ValidationError"]
}

Rules:

  • Task DIDs MUST be unique within the agent
  • Versioning MUST be encoded in the DID
  • Request and response schemas MUST be referenced by JSON pointer

5. Discovery Protocol

5.1 Query Capabilities

Request

{
  "type": "did7://web7/user-onboarding_1-0/query-capabilities"
}

Response

{
  "type": "did7://web7/user-onboarding_1-0/disclose-capabilities",
  "capabilities": [
    {
      "id": "did7://web7/user-onboarding_1-0:userverification",
      "name": "User Verification",
      "version": "1.0",
      "description": "Handles user verification."
    },
    {
      "id": "did7://web7/credential-issuance_1-0:credentialissuance",
      "name": "Credential Issuance",
      "version": "1.0",
      "description": "Handles credential issuance."
    }
  ]
}
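A minimal responder for this exchange can be sketched as follows. The function name and the summary shape (id, name, version, description only) are assumptions drawn from the example response above, not requirements of the specification.

```python
def disclose_capabilities(dididl_doc: dict, process_did: str) -> dict:
    """Answer a query-capabilities request with a disclose-capabilities
    message summarizing each capability (tasks and schemas omitted, as
    in the example exchange). Illustrative sketch."""
    summary_keys = ("id", "name", "version", "description")
    return {
        "type": f"{process_did}/disclose-capabilities",
        "capabilities": [
            {k: cap[k] for k in summary_keys if k in cap}
            for cap in dididl_doc.get("capabilities", [])
        ],
    }

# Hypothetical DIDIDL document held by the responding agent.
doc = {
    "dididl": "1.0",
    "capabilities": [
        {"id": "did7://web7/user-onboarding_1-0:userverification",
         "name": "User Verification", "version": "1.0",
         "description": "Handles user verification.",
         "tasks": [{"type": "did7://web7/user-onboarding_1-0:userverification_1-0/verify-email"}]},
    ],
}
response = disclose_capabilities(doc, "did7://web7/user-onboarding_1-0")
```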

5.2 Query a Specific Capability

Request

{
  "type": "did7://web7/user-onboarding_1-0:userverification/query-capability"
}
{
  "type": "did7://web7/user-onboarding_1-0:userverification_1-0/query-capability"
}

Response

{
  "type": "did7://web7/user-onboarding_1-0:userverification/disclose-capability",
  "capability": {
    "id": "did7://web7/user-onboarding_1-0:userverification_1-0",
    "name": "User Verification",
    "version": "1.0",
    "tasks": [
      {
        "type": "did7://web7/user-onboarding_1-0:userverification_1-0/create-user",
        "request": "#/schemas/CreateUserRequest",
        "response": "#/schemas/UserDto"
      },
      {
        "type": "did7://web7/user-onboarding_1-0:userverification_1-0/verify-email",
        "request": "#/schemas/VerifyEmailRequest",
        "response": "#/schemas/VerifyEmailResponse"
      }
    ],
    "schemas": {...}
  }
}

6. Normative Requirements

  1. Each task MUST appear in exactly one process capability.
  2. Process Capability DIDs MUST be unique within the agent.
  3. Task DIDs are capability-scoped and MUST be unique.
  4. The union of all process capabilities MUST form a disjoint partition of the agent's tasks.
  5. A capability disclosure MUST include only the schemas referenced by its tasks.
  6. DIDComm authentication MUST protect all DIDIDL exchanges.
  7. Version changes that introduce breaking schema modifications MUST increment the major version in the DID.
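Requirements 1–4 are mechanically checkable. The validator below is a non-normative sketch (function and message names are hypothetical) that returns a list of violations rather than raising on the first one.

```python
def validate_dididl(doc: dict) -> list:
    """Check structural requirements 1-4: capability DIDs unique, task
    DIDs unique, and every task in exactly one capability (a disjoint
    partition). Non-normative sketch; not an exhaustive validator."""
    errors = []
    capabilities = doc.get("capabilities", [])
    cap_ids = [cap["id"] for cap in capabilities]
    if len(cap_ids) != len(set(cap_ids)):
        errors.append("Process Capability DIDs are not unique (requirement 2)")
    seen = set()
    for cap in capabilities:
        for task in cap.get("tasks", []):
            did = task["type"]
            if did in seen:
                errors.append(f"task DID appears more than once: {did} (requirements 1, 3, 4)")
            seen.add(did)
    return errors

# A task DID duplicated across capabilities violates the partition rule.
violations = validate_dididl({
    "capabilities": [
        {"id": "did7://web7/p_1-0:a", "tasks": [{"type": "did7://web7/p_1-0:a_1-0/t"}]},
        {"id": "did7://web7/p_1-0:b", "tasks": [{"type": "did7://web7/p_1-0:a_1-0/t"}]},
    ]
})
```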

7. Example Complete DIDIDL Document

{
  "dididl": "1.0",
  "agent": "did7://agents.org/example:agent123",
  "capabilities": [
    {
      "id": "did7://web7/user-onboarding_1-0:useronboarding",
      "name": "User Onboarding",
      "version": "1.0",
      "tasks": [
        {
          "type": "did7://web7/user-onboarding_1-0:useronboarding_1-0/create-user",
          "request": "#/schemas/CreateUserRequest",
          "response": "#/schemas/UserDto"
        },
        {
          "type": "did7://web7/user-onboarding_1-0:useronboarding_2-0/create-user",
          "request": "#/schemas/CreateUserRequest",
          "response": "#/schemas/UserDto"
        },
        {
          "type": "did7://web7/user-onboarding_1-0:useronboarding_1-0/verify-email",
          "request": "#/schemas/VerifyEmailRequest",
          "response": "#/schemas/VerifyEmailResponse"
        }
      ]
    }
  ],
  "schemas": {
    "CreateUserRequest": {
      "$schema": "did7://json-schema.org/schemas:/draft/2020-12/schema",
      "type": "object",
      "properties": {
        "email": { "type": "string" }
      },
      "required": ["email"]
    },
    "VerifyEmailRequest": {
      "$schema": "did7://json-schema.org/schemas:/draft/2020-12/schema",
      "type": "object",
      "properties": {
        "token": { "type": "string" }
      },
      "required": ["token"]
    },
    "VerifyEmailResponse": {
      "$schema": "did7://json-schema.org/schemas:/draft/2020-12/schema",
      "type": "object",
      "properties": {
        "success": { "type": "boolean" }
      },
      "required": ["success"]
    },
    "UserDto": {
      "$schema": "did7://json-schema.org/schemas:/draft/2020-12/schema",
      "type": "object",
      "properties": {
        "id": { "type": "string" },
        "email": { "type": "string" }
      },
      "required": ["id", "email"]
    }
  }
}
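The schemas in this example use only the object/required/properties subset of JSON Schema, so a request payload can be checked in a few lines. The checker below is a hypothetical sketch, not a full JSON Schema validator; a production agent would use a complete implementation.

```python
def check_object_schema(instance, schema: dict) -> bool:
    """Validate an instance against the small JSON Schema subset used
    above: 'type: object', 'required', and primitive property types.
    Sketch only; not a complete JSON Schema implementation."""
    if schema.get("type") == "object" and not isinstance(instance, dict):
        return False
    if any(key not in instance for key in schema.get("required", [])):
        return False
    primitives = {"string": str, "boolean": bool, "object": dict}
    for key, sub in schema.get("properties", {}).items():
        expected = primitives.get(sub.get("type"), object)
        if key in instance and not isinstance(instance[key], expected):
            return False
    return True

# The CreateUserRequest schema from the example document above.
create_user_request = {
    "type": "object",
    "properties": {"email": {"type": "string"}},
    "required": ["email"],
}
```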

Appendix B – APQC Taxonomies

Process Categories and Individual Processes

Appendix C – PowerShell Cmdlet Naming Specifications

Appendix D – Web 7.0™ DID-based, Decentralized Library Operating System (DIDLibOS)
