
Self-Sovereign Control (SSC) 7.0 Metamodel

Also known as the Grand Scheme of Things (GST).

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

SSI 7.0 Identity Framework

Each persona (#2 through #9) has its own identity – its own set of claims projected onto it. In addition, each persona has an identifier associated with it (e.g. ALICE SMITH, ALICE DIGI, ALICE ROBOT) and possibly one or more additional identifiers (e.g. an identifier whose value is a DID). An identifier is a name or label for a persona’s identity.
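
The relationship described above lends itself to a simple data model. The following is a minimal sketch in TypeScript; all type and field names are illustrative assumptions, not part of any Web 7.0 or SSI 7.0 specification.

```typescript
// Hypothetical sketch of the persona / identity / identifier relationship.
// Type and field names are illustrative, not part of any specification.

interface Claim {
  property: string;          // e.g. "givenName"
  value: string;             // e.g. "Alice"
  issuer?: string;           // the party that projected (asserted) this claim
}

interface Identifier {
  label: string;             // a name or label for the persona's identity,
                             // e.g. "ALICE SMITH", "ALICE DIGI", "ALICE ROBOT"
  did?: string;              // optional identifier whose value is a DID
}

interface Persona {
  identifiers: Identifier[]; // one primary identifier plus zero or more additional ones
  identity: Claim[];         // the persona's identity: the set of claims projected onto it
}

// Example: the ALICE DIGI persona with a DID-valued identifier and two claims.
const aliceDigi: Persona = {
  identifiers: [{ label: "ALICE DIGI", did: "did:example:alice-digi" }],
  identity: [
    { property: "personaType", value: "digital" },
    { property: "controlledBy", value: "ALICE SMITH" },
  ],
};
```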

SSC 7.0 Metamodel

Inspired by Tim Bouma’s extended article Things in Control: Part 2 – Charting a New Policy Path.

I firmly believe we’re heading toward a definition of something called Self-Sovereign Control (SSC) that will succeed Self-Sovereign Identity (SSI). SSI has remained an unrealized concept, while SSC has real potential to become a core building block (part of the concrete foundation) for human digital identity.

Tim Bouma’s Definition of Identity

Identity, properly understood, is a capability surface: the total set of Things in Control that a person can activate.

SSC Simplified Metamodel

The following is an easier-to-digest version of the SSC Metamodel – ideal for less technical audiences.

SSC Verifiable Trust Circles (VTCs)

SSC Verifiable Trust Circles (VTCs) are based on UMCs. A VTC can have one, two, three, or more verifiable members. VTCs are circle relationships, not straight-line relationships. VTCs can live at any layer of the SSC Metamodel: Beneficial Controller, Intermediate Controller, or Technical Controller. Below is an example of a Beneficial Controller-layer VTC.

VTCs can be used to represent single-party, two-party, or multi-party membership, citizenship, and other partOf relationships. VTCs can also be used to implement and track higher-level working group, team, study group, task force, and digital nation-state processes (a sketch of a VTC data structure follows the list below):

  • Multi-person meeting requests
  • Trustee and notary elections
  • Voting-based decision-making
  • Review and approval routing workflows
  • Contract execution
  • Counter-signing
  • Polls
  • Petitions
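
As referenced above, the following is a minimal, hypothetical TypeScript sketch of how a VTC might be represented: its layer in the SSC Metamodel, its verifiable members, the partOf relationships it participates in, and the process it tracks. The names are illustrative assumptions, not part of any SSC specification.

```typescript
// Hypothetical sketch of a Verifiable Trust Circle (VTC); illustrative only.

type ControllerLayer = "Beneficial" | "Intermediate" | "Technical";

interface VtcMember {
  did: string;                // the member's identifier (e.g. a DID)
  membershipProof?: string;   // reference to a verifiable credential or signature
}

interface VerifiableTrustCircle {
  id: string;
  layer: ControllerLayer;     // the SSC Metamodel layer the circle lives at
  members: VtcMember[];       // one, two, three, or more verifiable members
  partOf?: string[];          // higher-level circles this circle is a member of
  process?: string;           // e.g. "meeting-request", "vote", "counter-signing"
}

// Example: a Beneficial Controller-layer circle running a vote among three members.
const votingCircle: VerifiableTrustCircle = {
  id: "vtc:example:study-group-vote",
  layer: "Beneficial",
  members: [
    { did: "did:example:alice" },
    { did: "did:example:bob" },
    { did: "did:example:carol" },
  ],
  process: "vote",
};
```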

SSC 7.0 Verifiable Trust Circles ChainMail (VTC-CM)

End


Microsoft Windows “Longhorn”: WinFS

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

The story is about my involvement with the planned “Longhorn” release of Microsoft Windows (circa 2000-2005), with a particular focus on the WinFS subsystem – a Windows file system built on SQL relational database technology.

InfoWorld Announcement

The author was involved with Project “Longhorn” in design-preview-and-feedback, consulting, and PM technical training (Groove Workspace system architecture and operation) roles (circa 2001-2002).

References


Consensus is for those who can’t think for themselves

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

“Consensus rarely creates truth or progress; it mostly creates more consensus.” – Michael Herman, December 2025

“While consensus has its place in stabilizing groups, it is structurally inclined to reproduce its own comfort rather than generate new understanding. At its best it harmonizes; at its worst it becomes self-referential, producing only more consensus and very little meaningful discovery.” – Michael Herman, December 2025

“Consensus is for those who can’t think for themselves.” – Michael Herman, December 2025

Related Wisdom

Margaret Thatcher: “To me, consensus seems to be the process of abandoning all beliefs, principles, values and policies in search of something in which no-one believes and to which no-one objects.”

Michael Crichton (on scientific and intellectual contexts): “Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet … In science … consensus is irrelevant.”

Bertrand Russell: “The fact that an opinion has been widely held is no evidence whatever that it is not utterly absurd.” — On the authority of group agreement being meaningless

Friedrich Nietzsche: “Madness is rare in individuals—but in groups, parties, nations, and epochs, it is the rule.” — When the group agrees, it often amplifies unexamined errors.

Abba Eban: “Consensus means that everyone agrees to say collectively what no one believes individually.”

Margaret Thatcher: “Consensus is the absence of leadership.”

Christopher Hitchens: “The herd instinct is strong in human beings… and it leads to consensus based on the path of least resistance.”

Søren Kierkegaard: “The crowd is untruth.” — Collective agreement rarely produces truth; it produces comfort.

Mark Twain: “Whenever you find yourself on the side of the majority, it is time to pause and reflect.”

Arthur Schopenhauer: “The majority of men have no opinions of their own; they simply echo what they have heard.”

Albert Einstein: “What is right is not always popular, and what is popular is not always right.”

George Bernard Shaw: “Every profession is a conspiracy against the laity.” — A critique of insider consensus reinforcing itself.

Michael Crichton: “In science, consensus is irrelevant. What counts are reproducible results.”

Thomas Paine: “A long habit of not thinking a thing wrong gives it a superficial appearance of being right.”

Buckminster Fuller: “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” — Consensus defends the status quo; innovation bypasses it.

Be a Wanderer

  • Be a wanderer
  • Be daring
    • Go where no one dared tread before
  • Be a campaigner
  • Be a warrior

ThinkDifferent ActDifferent

BeIntentional BeDisruptive


The AUSOM Application Design Framework

AUSOM is an acronym for A User State of Mind — the name of a framework, or architecture, for designing software applications that are easier to design, implement, test, document, and support. In addition, an application developed using the AUSOM framework can be incrementally enhanced, progressively installed and updated, dynamically configured, and implemented in many execution environments. This paper describes the Core Framework, the status of its current runtime implementations, and its additional features and benefits.

What is AUSOM?

The AUSOM Application Design Framework is a new way to design client-side applications. The original implementation of the framework is based on a few basic concepts: user scenarios and detailed task analysis, visual design using state-transition diagrams, and implementation using traditional Windows message handlers.

The original motivation for the framework grew out of the need to implement a highly modeless user interface composed of commands or tasks that were very modal (e.g. allowing the user to change how a polygon was being viewed while still sketching the polygon’s boundary).
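
To make the state-transition idea concrete, here is a minimal sketch (written in TypeScript rather than Windows message handlers, purely for brevity) of a dispatcher that routes incoming messages according to the current state, including the modeless view-change-while-sketching scenario described above. All names are illustrative.

```typescript
// Hypothetical sketch of the AUSOM state-transition idea: the application's behavior
// is modeled as states and transitions, and incoming UI messages are dispatched
// according to the current state. Names are illustrative; a real AUSOM application
// would drive this from Windows message handlers.

type State = "Idle" | "SketchingPolygon" | "ChangingView";
type Message = "StartSketch" | "ViewCommand" | "ViewDone" | "FinishSketch";

type TransitionTable = Record<State, Partial<Record<Message, State>>>;

const transitions: TransitionTable = {
  Idle:             { StartSketch: "SketchingPolygon", ViewCommand: "ChangingView" },
  // The modeless behavior: while sketching, a modal view-change task can still run,
  // and the user returns to sketching when it completes (simplified here).
  SketchingPolygon: { ViewCommand: "ChangingView", FinishSketch: "Idle" },
  ChangingView:     { ViewDone: "SketchingPolygon" },
};

function handleMessage(current: State, msg: Message): State {
  const next = transitions[current][msg];
  return next ?? current; // messages not valid in the current state are ignored
}

let state: State = "Idle";
state = handleMessage(state, "StartSketch"); // -> SketchingPolygon
state = handleMessage(state, "ViewCommand"); // -> ChangingView (view task, mid-sketch)
state = handleMessage(state, "ViewDone");    // -> SketchingPolygon (resume sketching)
```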

To Learn More


Who Owns Grok-Generated Content?

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities…

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Who owns the content that you create? Who owns the copyright? And what are the rights for reuse or for original publishing?

Reference: https://grok.com/share/c2hhcmQtMw_1447491f-72c2-4829-85fe-d3992f28023e

Grok: Who owns the content that Grok (I) create?

When you interact with Grok (developed by xAI), the ownership and copyright of the generated output is as follows:

  • You (the user) own the output you generate with Grok, including full copyright ownership of that specific content.
  • xAI claims no ownership or copyright over the individual outputs you create.

This is explicitly stated in xAI’s official Terms of Service (Consumer version, applicable to grok.x.ai and the apps):

“You own the Outputs you create with the Service, including the right to use, reproduce, distribute, create derivative works, display, and perform them, subject to these Terms.”

(Section 4 – “Content”, subsection “Your Outputs” in the current terms as of December 2025)

Important qualifications and rights retained by xAI

Even though you own the output, xAI retains certain rights to use it:

  1. Broad license grant back to xAI
    By using the service you automatically grant xAI an irrevocable, perpetual, worldwide, royalty-free license to:
    • use, reproduce, modify, adapt, publish, translate, distribute, display, and create derivative works of your inputs and outputs
    • for any purpose, including training and improving Grok and other xAI models, research, and commercial purposes.

“You grant xAI an irrevocable, perpetual, transferable, sublicensable, royalty-free, and worldwide license to use, copy, store, modify, distribute, reproduce, publish, display, create derivative works of, and aggregate Your Content [inputs + outputs] for any purpose…”

  2. No confidentiality
    Your prompts and the generated outputs are not treated as confidential. They may be used to train future versions of Grok (unless you opt out of data usage for training where that option is offered—currently available in settings for logged-in users).

Rights for reuse and original publishing

  • You can freely reuse, republish, sell, or commercialize the content you generate with Grok (e.g., put it in a book, blog, YouTube video, sell as NFTs, etc.) without needing xAI’s permission and without paying royalties.
  • You can claim copyright over the specific output you received (in most jurisdictions).
  • However, because Grok’s outputs are AI-generated and may resemble content it was trained on, third parties could theoretically challenge very close similarities under certain circumstances (though this is rare and not specific to Grok).

Summary table

Aspect | Who owns it? | Your rights | xAI’s rights
Specific output you generate | You (the user) | Full ownership, copyright, commercial use, etc. | Broad license to use it for any purpose (including training)
The model itself & weights | xAI | None | Exclusive
Your input prompts | You | You retain ownership | Same broad license as above

Official source (current as of December 2025):
https://x.ai/legal/terms-of-service (Consumer version) – Section 4 “Content”

So in short: You own what Grok writes for you and can do whatever you want with it, but you simultaneously give xAI very wide permission to use both your prompts and Grok’s responses forever.


Who Owns Microsoft Copilot-Generated Content?

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities…

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Who owns the content that you create? Who owns the copyright? And what are the rights for reuse or for original publishing?

Reference: https://copilot.microsoft.com/shares/Pnf1YHy8ZeN15BeMe6cJ7

Copilot: Great set of questions — these go right to the heart of how AI‑generated content fits into copyright law and publishing rights. Let’s break it down clearly:


📜 Ownership of AI‑Generated Content

  • You own the outputs: When you use Microsoft Copilot, the text, images, or other creative content I generate for you are yours to use. Microsoft does not claim ownership of the content you create with me.
  • No copyright by the AI itself: Since I’m not a human author, I cannot hold copyright. Copyright law generally requires human authorship.

⚖️ Copyright Status

  • Human authorship requirement: In most jurisdictions (e.g., U.S., Canada, EU), copyright protection applies only to works created by humans. Purely machine‑generated works without human input may not qualify for copyright.
  • Your role matters: If you provide creative direction, edits, or substantial input, that human contribution can establish copyright ownership for you. For example:
    • If you ask me to generate a logo and then refine it, your creative input strengthens your claim to copyright.
    • If you publish my output verbatim with no human modification, copyright protection may be weaker or even unavailable depending on the jurisdiction.

🔄 Rights for Reuse & Publishing

  • Reuse: You can freely reuse, adapt, and republish the content I generate for you. There are no restrictions from Microsoft on your ability to use it.
  • Original publishing: You can publish AI‑assisted content under your name, brand, or company. However:
    • Some publishers, academic institutions, and platforms require disclosure if AI was used.
    • Legal enforceability of copyright may vary depending on how much human creativity is involved.
  • Commercial use: You are allowed to use Copilot outputs commercially (e.g., marketing materials, books, designs). Microsoft does not restrict this.

✅ Practical Takeaways

  • Treat Copilot outputs as your creative material — you can publish, sell, or license them.
  • For stronger copyright protection, add human creativity (editing, refining, combining ideas).
  • Always check publisher or platform rules about AI disclosure if you plan to release content publicly.


Who Owns ChatGPT-Generated Content?

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities…

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Who owns the content that you create? Who owns the copyright? And what are the rights for reuse or for original publishing?

References:

ChatGPT: Perfect—here are all three, written in clean, legally practical language you can use immediately. These are structured to work for publishers, clients, and commercial contracts.


✅ 1. Publisher-Safe AI Disclosure Clause

(For books, articles, research, courses, or journalism)

Standard Neutral Disclosure (Recommended):

Portions of this work were developed with the assistance of artificial intelligence tools used for drafting, editing, and conceptual development. All final content, structure, interpretation, and conclusions were reviewed, directed, and approved by the author, who retains full responsibility for the work.

Strict Academic Disclosure:

Artificial intelligence tools were used solely as assistive drafting and editing instruments. All arguments, interpretations, data validation, and original contributions were independently verified and finalized by the author.

Commercial Publishing Friendly (Minimal):

This work was created using a combination of human authorship and AI-assisted drafting tools.


✅ 2. Commercial IP Warranty Clause

(For software, reports, designs, marketing assets, and IP transfer)

Strong Commercial Warranty (Recommended for B2B):

The Creator represents and warrants that, to the best of their knowledge after reasonable review, the delivered work is original, does not knowingly infringe any third-party intellectual property rights, and is free of intentional plagiarism. The Creator retains full rights to transfer ownership and grant licenses to the work.

AI-Specific Safe Addendum (Optional but Powerful):

Portions of the work may have been drafted using AI-assisted tools. The Creator affirms that all outputs were materially reviewed, edited, and curated by human authorship to ensure originality, compliance, and lawful reuse.


✅ 3. Client-Facing Ownership & Rights Clause

(For consulting, design, software, research, branding, creative services)

Full Buyout / Work-for-Hire Equivalent:

Upon full payment, all right, title, and interest in and to the delivered work, including all intellectual property rights therein, shall vest exclusively in the Client. The Creator retains no ownership interest except for portfolio display unless otherwise stated in writing.

License-Based Model (If You Want to Retain Core IP):

The Creator retains ownership of all underlying methodologies, systems, and pre-existing intellectual property. The Client is granted a perpetual, worldwide, royalty-free license to use, modify, and commercialize the delivered work.

AI Transparency Version (Modern Contracts):

The Client acknowledges that AI-assisted tools may have been used as part of the production workflow. All final deliverables are contractually guaranteed as original to the extent required under applicable law and are fully transferable as stated herein.


✅ Bonus: Platform-Neutral Rights Assertion (Optional Footer)

You can place this in reports, PDFs, and digital products:

© [Year] [Your Name or Entity]. All rights reserved. This work is original and may not be reproduced, redistributed, or sublicensed without express permission, except as permitted by law or written agreement.


⚖️ Practical Legal Strength Summary

Use Case | Covered
Books & Publishing | ✅ Yes
Software & SaaS | ✅ Yes
Consulting & Research | ✅ Yes
Marketing & Branding | ✅ Yes
Data Products & Reports | ✅ Yes
Client IP Transfers | ✅ Yes
Due Diligence / M&A | ✅ Yes


Identic AI – Powered by Web 7.0 AgenticOS™

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities…

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Resources


Flesh & blood persons, digital personas, digital agents, and Self-Sovereign Control (SSC)

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Figure 1. SSI 2.0 Identity Framework: My Personal Things

Self-Sovereign Control (SSC)

Maybe SSC is the true successor to SSI.


Updated Principles of Self-Sovereign Identity 7.0 (DRAFT Proposal #2 – 20 Principles)

Create your own magic with Web 7.0 AgenticOS™. Imagine the possibilities.

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

The original 10 principles from Christopher Allen’s 2016 “The Path to Self-Sovereign Identity” remain a powerful foundation. (Life with Alacrity)

But over the past decade — as digital identity systems have been built, debated, and challenged — new insights from academic research, real-world deployments, regulation, and evolving technology (e.g. decentralized identifiers, verifiable credentials, blockchains, privacy-preserving proofs) suggest that an updated, expanded set of principles for self-sovereign identity (SSI) could better reflect both the opportunities and the risks. Below is a proposed revised and expanded collection of SSI principles, integrating the original 10 while adding new ones to address trust, accountability, usability, governance, and compliance.


✅ What remains — Core Timeless Principles (original 10, refined)

I believe the original 10 remain largely valid; they are restated here with slight refinement for modern context:

  1. Existence (Human Primacy)
    The identity must be anchored in a real human being: digital identity cannot supplant the ineffable “I.” Systems should affirm human dignity and individuality. (Life with Alacrity)
  2. Control (User Authority & Autonomy)
    Individuals must remain the ultimate authorities over their identity data. They should be able to create, modify, hide, or revoke identity data under secure, well-understood mechanisms. (Life with Alacrity)
  3. Access (Transparency of Own Data)
    Users must have the ability to view (and where appropriate manage) all personal data and claims associated with their identity — no hidden data silos or gatekeepers. (Life with Alacrity)
  4. Transparency (Open Systems & Algorithms)
    Identity systems, their governance, and the algorithms they rely on must be open, auditable, and as independent as possible from centralized authority. (Life with Alacrity)
  5. Persistence (Longevity & Continuity)
    Identities should persist over time — ideally as long as the user desires — even if credential issuers or platforms change, while still respecting a user’s right to delete or abandon an identity. (Life with Alacrity)
  6. Portability (Mobility & Transferability)
    Identity data and claims must be transportable across platforms, jurisdictions, devices — not locked into any one vendor, provider, or ecosystem. (Life with Alacrity)
  7. Interoperability (Cross-system & Cross-jurisdiction Usability)
    Identities should work broadly across different systems, platforms, borders — enabling global, cross-context recognition while preserving user control. (Life with Alacrity)
  8. Consent (Informed & Explicit Consent for Sharing)
    Any sharing or use of identity data must occur only with the individual’s informed and explicit consent. Consent mechanisms should be clear, user-friendly, and revocable. (Life with Alacrity)
  9. Minimalization (Data Minimization & Selective Disclosure)
    Only the minimum required data should be disclosed for a given purpose (e.g. proving “over 18” without revealing date of birth). Use of selective disclosure, zero-knowledge proofs, or cryptographic safeguards is encouraged (see the sketch following this list). (Life with Alacrity)
  10. Protection (Privacy, Security & Rights Safeguards)
    Users’ rights must be protected; identity systems must defend against abuse, resist censorship and coercion, secure data, and guard against identity theft or misuse. (Life with Alacrity)
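
As referenced under Principle 9, the following is a minimal TypeScript sketch of data-minimizing selective disclosure: the wallet answers the predicate "over 18" without revealing the underlying date of birth. In a real system the response would carry a verifiable cryptographic proof (e.g. a zero-knowledge or selective-disclosure proof); here the proof field is only a placeholder, and all names are illustrative.

```typescript
// Hypothetical sketch of selective disclosure (Principle 9): the wallet answers the
// predicate "age >= 18" without disclosing the date of birth. The proof field is a
// placeholder; a real system would attach a zero-knowledge or selective-disclosure proof.

interface StoredCredential {
  dateOfBirth: string;   // ISO date, held only inside the user's wallet
}

interface DisclosureResponse {
  predicate: string;     // what was asked, e.g. "age >= 18"
  satisfied: boolean;    // the only fact disclosed to the verifier
  proof: string;         // placeholder for a cryptographic proof
}

function proveOver18(cred: StoredCredential, asOf: Date = new Date()): DisclosureResponse {
  const dob = new Date(cred.dateOfBirth);
  // A person is 18 or older if they were born on or before this cutoff date.
  const cutoff = new Date(asOf.getFullYear() - 18, asOf.getMonth(), asOf.getDate());
  return {
    predicate: "age >= 18",
    satisfied: dob.getTime() <= cutoff.getTime(),
    proof: "<proof-placeholder>",
  };
}

// The verifier sees only { predicate, satisfied, proof }, never the date of birth.
const response = proveOver18({ dateOfBirth: "1990-06-15" });
console.log(response.satisfied); // true
```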

➕ New Principles for SSI 7.0

Based on developments over the past decade — from academic critiques, real-world SSI schemes, regulatory pressures, and evolving threat models — I propose adding the following principles to make SSI more robust, practical, and trustworthy in modern contexts.

  1. Accountability & Auditability
    While decentralization and user control are central, identity systems must also provide mechanisms for accountability — for misuse, fraud, or disputes. This may include revocation registries, audit logs, and cryptographic or institutional mechanisms that allow for tracking misuse without compromising user privacy. This balances user sovereignty with real-world responsibilities. For example, a recent SSI scheme augments privacy with joint accountability using threshold-based methods to prevent attacks while preserving user control. (arXiv)
  2. Security & Resilience by Design
    SSI should rely on strong cryptographic foundations, resistance to key-compromise, replay or correlation attacks, and design for long-term resilience (including key recovery, revocation, migration, and backup). Academic surveys highlight that trust in SSI is still “complex and fragmented,” and robust security architecture is essential. (arXiv)
  3. Privacy by Default & Contextual Confidentiality
    Beyond minimal disclosure, SSI systems should default to maximum privacy: user wallets, metadata protection, unlinkability across contexts, selective disclosure by default, and privacy-preserving authentication (e.g. zero-knowledge proofs, anonymous credentials). This becomes more critical especially with cross-platform and cross-jurisdiction use (e.g. in the context of the metaverse, or global Web3). (arXiv)
  4. Usability & Accessibility
    SSI must be practical for real people — across digital-literacy levels, devices, geographies — and not just for technologists. Identity wallets and management tools should be easy to use, resilient to device loss, data corruption, or jurisdictional changes. Also, cost and usability barriers must be minimized so that SSI remains inclusive (“available to all”). This is especially important for global adoption (e.g. for refugees, under-served populations). (Sovereign Cities)
  5. Governance & Community Stewardship
    Given SSI’s public-good dimensions (interoperability, privacy, security), there need to be governance mechanisms — community, standards bodies, or decentralized governance — to steward protocols, resolve disputes, evolve standards, and ensure accountability. This ensures that identity systems remain aligned with human rights, collective interest, and social norms. Some proponents argue for turning the original principles into legally or socially codified “duties” rather than just “best practices.” (Life with Alacrity)
  6. Compliance & Ethical Legality
    SSI systems should respect applicable laws, human rights, data-protection regulations (e.g. privacy laws, nondiscrimination), and ethical norms — especially when deployed globally across jurisdictions. This principle helps bridge SSI with regulatory realities (e.g. GDPR) while preserving user sovereignty. Indeed, recent literature critiques SSI schemes on GDPR compliance. (arXiv)
  7. Recoverability & Continuity (Key/Access Loss Handling)
    Real people lose devices, forget credentials, or face emergencies. SSI must support mechanisms for recovery, transfer, or delegation — e.g. social recovery, multi-party escrow, recovery tokens — so that a user’s identity does not become irretrievably lost (see the sketch following this list). This ensures persistence doesn’t become fragility.
  8. Minimal Trust Assumptions (Decentralization of Trust)
    SSI should minimize reliance on central trusted parties; trust should be distributed, protocol-based, cryptographically verifiable, and avoid single points of failure (or control). This strengthens autonomy and reduces governance risk.
  9. Transparency of Governance & Policy (Beyond Implementation)
    Not just open code, but also open governance: decisions, upgrades, policies for revocation, dispute resolution, credential issuance — should be transparent, documented, and participatory (ideally community-driven).
  10. Inter-Community and Social Interoperability (Normalization of Claims & Contexts)
    Beyond technical interoperability, identity systems should support semantic and social interoperability: standard vocabularies, claim schemas, norms for credential issuance and verification, recognition across communities/jurisdictions. This avoids fragmentation even in decentralized ecosystems.
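
As referenced under new Principle 7, the following is a minimal TypeScript sketch of the control flow of social recovery: a user designates guardians, and a threshold of guardian approvals is required before a new key can be authorized. A real implementation would use threshold cryptography (e.g. secret sharing); this sketch illustrates only the k-of-n approval logic, and all names are illustrative.

```typescript
// Hypothetical sketch of a social-recovery flow (new Principle 7): a threshold of
// guardian approvals authorizes rotation to a new key. A real implementation would
// use threshold cryptography (e.g. secret sharing); only the approval logic is shown.

interface RecoveryPolicy {
  guardians: string[];    // guardian identifiers, e.g. DIDs
  threshold: number;      // approvals required, e.g. 3 of 5
}

interface RecoveryRequest {
  newPublicKey: string;   // the key the user wants to recover to
  approvals: Set<string>; // guardians who have approved this request
}

function approve(policy: RecoveryPolicy, req: RecoveryRequest, guardian: string): void {
  if (policy.guardians.includes(guardian)) {
    req.approvals.add(guardian); // only designated guardians may approve
  }
}

function canRecover(policy: RecoveryPolicy, req: RecoveryRequest): boolean {
  return req.approvals.size >= policy.threshold;
}

// Example: 3 of 5 guardians approve, so recovery can proceed.
const policy: RecoveryPolicy = {
  guardians: ["did:example:g1", "did:example:g2", "did:example:g3",
              "did:example:g4", "did:example:g5"],
  threshold: 3,
};
const request: RecoveryRequest = { newPublicKey: "did:key:exampleNewKey", approvals: new Set() };
["did:example:g1", "did:example:g2", "did:example:g3"].forEach(g => approve(policy, request, g));
console.log(canRecover(policy, request)); // true
```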

🧑‍🤝‍🧑 Why these additions — What changed since 2016

  • The rise of decentralized identity implementations (DIDs, verifiable credentials, wallets, blockchain-based registries) has shown the importance of security, recoverability, privacy by default, and regulatory compliance.
  • Academic scrutiny (e.g. risk of misuse, identity theft, trust fragmentation, accountability) reveals that pure decentralization without accountability can be risky. (arXiv)
  • Real-world scenarios — global mobility, cross-jurisdiction identity, refugees, displaced people — demand usability, accessibility, portability, and social interoperability to make SSI inclusive and effective.
  • Legal/regulatory frameworks (e.g. privacy laws, data protection, anti-money laundering) increasingly intersect with identity systems — forcing SSI to consider compliance, ethicality, and governance. (arXiv)
  • As identity becomes central to emerging domains (e.g. Web3, metaverse, decentralized finance), there is a need for governance, community stewardship, transparent policy, and inter-community standards to avoid fragmentation and the insecurity it drives. (arXiv)

⚠️ Challenges & Trade-offs — Why SSI 7.0 must be “balanced and diversified”

Adding more principles inevitably creates tension — greater security and governance can come at the cost of simplicity or decentralization; accountability mechanisms risk undermining privacy; recoverability may introduce attack surfaces; compliance may conflict with anonymity. A robust SSI 7.0 must therefore balance these trade-offs, offering diversified principles so that implementers can make conscious, value-driven design choices depending on the context (e.g. voting, healthcare, travel, social networking).


📝 SSI 7.0 Principles Proposal — Draft for Community Use

Here’s a draft list, which could serve as a starting point for community discussion, design, and standardization:

  1. Existence (Human Primacy)
  2. Control (User Authority & Autonomy)
  3. Access (Data Transparency to Users)
  4. Transparency (Open Systems & Algorithms)
  5. Persistence (Longevity & Continuity)
  6. Portability (Mobility & Transferability)
  7. Interoperability (Technical & Semantic)
  8. Consent (Informed, Explicit, Revocable)
  9. Minimalization (Data Minimization & Selective Disclosure)
  10. Protection (Privacy, Security & Rights Safeguards)
  11. Accountability & Auditability
  12. Security & Resilience by Design
  13. Privacy by Default & Contextual Confidentiality
  14. Usability & Accessibility
  15. Governance & Community Stewardship
  16. Compliance & Ethical Legality
  17. Recoverability & Continuity (Key/Access Loss Handling)
  18. Minimal Trust Assumptions (Decentralized Trust)
  19. Transparent Governance & Policy
  20. Inter-Community / Social Interoperability (Semantic & Contextual)

🎯 How This Might Be Used — Next Steps

  • Use this as a living framework: publish a draft “SSI 7.0 Principles” document (e.g. on GitHub) and invite contributions from technologists, regulators, civil society, and end users — just as the original author’s 2025 revision project does. (Life with Alacrity)
  • When designing or evaluating SSI systems (wallets, DIDs, credentials, revocation), use this expanded set to assess trade-offs (see the sketch following this list) — e.g. does this system meet privacy by default? Does it include recovery mechanisms? Is governance transparent?
  • For implementations in sensitive contexts (healthcare, refugees, KYC, cross-border identity) embed governance, compliance, accountability, and usability early — not as afterthoughts.
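
As referenced in the second bullet above, the principle list can double as an evaluation checklist. The following TypeScript sketch is one hypothetical way to record such an assessment; the scores and notes are invented for illustration.

```typescript
// Hypothetical sketch of using the SSI 7.0 principle list as an evaluation checklist.
// Principle names mirror the draft list above; assessments and notes are invented.

type Assessment = "meets" | "partial" | "missing";

interface PrincipleCheck {
  principle: string;
  assessment: Assessment;
  notes?: string;
}

const walletReview: PrincipleCheck[] = [
  { principle: "Privacy by Default & Contextual Confidentiality", assessment: "partial",
    notes: "Selective disclosure supported, but metadata is linkable across verifiers." },
  { principle: "Recoverability & Continuity", assessment: "missing",
    notes: "No social recovery or key-escrow mechanism." },
  { principle: "Transparent Governance & Policy", assessment: "meets" },
];

// Summarize where the design falls short of the principles.
const gaps = walletReview
  .filter(check => check.assessment !== "meets")
  .map(check => check.principle);
console.log(gaps); // the principles that are only partially met or missing
```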

Reference: https://chatgpt.com/share/692bdce6-1b24-8008-812d-d9e55bae0f2c
