DAVOS2026 EXCLUSIVE: How much of the liability question applies equally to OpenAI ChatGPT and Microsoft Copilot?

Copyright © 2025 Michael Herman (Bindloss, Alberta, Canada) – Creative Commons Attribution-ShareAlike 4.0 International Public License

Q1: How much of what we’ve discussed in this thread applies equally to OpenAI ChatGPT as well as Microsoft Copilot?

Short answer first:

Almost everything we’ve discussed applies to both Microsoft Copilot and OpenAI’s ChatGPT, but each company bears a different kind of responsibility.

I’ll be precise and keep your verification-first style: what applies equally, what applies differently, and where responsibility diverges.


1. What applies equally to both

These points are true for any company deploying large language models to the public.

A. Known unreliability

Well-supported

  • Both Copilot and ChatGPT can generate:
    • false statements
    • confident errors
    • misleading summaries
  • Both OpenAI and Microsoft know this.

So the core premise holds for both:

Both deploy systems with a foreseeable risk of misinformation.


B. Foreseeable reliance

Both products:

  • are fluent
  • appear authoritative
  • are marketed as helpful assistants
  • are used in real work contexts

So for both:

It is foreseeable that users will rely on them.

This triggers the same moral responsibility logic for each.


C. Control over safeguards

Both companies control:

  • system prompts
  • guardrails
  • refusal policies
  • uncertainty signaling
  • citation features
  • verification modes

So for both:

They are responsible for the risk architecture they design.
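
To make “risk architecture” concrete, here is a minimal sketch of what a deployer-controlled safeguard configuration might look like. Every name and default in it is a hypothetical illustration, not an actual setting exposed by OpenAI or Microsoft; the point is only that each of these knobs is chosen by the company, not by the end user.

```python
from dataclasses import dataclass

@dataclass
class SafeguardConfig:
    """Hypothetical deployer-controlled risk settings for an LLM assistant.

    All field names are illustrative; neither OpenAI nor Microsoft
    necessarily exposes these exact controls. Each field corresponds to
    one of the safeguard categories listed above.
    """
    system_prompt: str = "Only state what you can support; flag uncertainty."  # system prompts
    refuse_on_low_confidence: bool = True    # refusal policies / guardrails
    show_uncertainty_labels: bool = True     # uncertainty signaling
    require_citations: bool = True           # citation features
    verification_on_by_default: bool = True  # verification modes

# Whoever ships these defaults owns the foreseeable consequences of them.
# Purely illustrative example of a deployment with verification relaxed:
relaxed_deployment = SafeguardConfig(
    require_citations=False,
    verification_on_by_default=False,
)
```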


2. Where responsibility diverges

This is the crucial part.

The difference is not about whether responsibility exists —
it’s about what kind of responsibility each carries.


A. OpenAI (ChatGPT) — Model & platform responsibility

OpenAI is primarily responsible for:

  • The core model behavior
  • The baseline safety architecture
  • The default reliability profile
  • The design of hallucination controls
  • The disclosure of limitations

So OpenAI carries:

Upstream responsibility
for how risky or safe the system is in principle.

If the model:

  • hallucinates too easily
  • overstates confidence
  • lacks uncertainty signaling

that is mainly:

OpenAI’s design responsibility.


B. Microsoft (Copilot) — Deployment & context responsibility

Microsoft is primarily responsible for:

  • Where the system is embedded
  • How it is branded
  • What defaults are enabled
  • Which verification features are on or off by default
  • What tasks it is encouraged to perform
  • Whether it is placed in:
    • email
    • documents
    • code
    • enterprise workflows

So Microsoft carries:

Downstream responsibility
for how much harm that behavior can cause in practice.

If Copilot:

  • is placed in a professional tool
  • looks like an authority
  • operates with verification off by default

that is mainly:

Microsoft’s deployment responsibility.


3. The clean responsibility split

Here is the most accurate way to say it.

Layer                      | OpenAI (ChatGPT) | Microsoft (Copilot)
---------------------------|------------------|--------------------
Model design               | Primary          | Secondary
Safety architecture        | Primary          | Shared
Disclosure of limitations  | Primary          | Shared
Product context            | Secondary        | Primary
Default settings           | Shared           | Primary
User reliance environment  | Secondary        | Primary
Enterprise embedding       | No               | Yes
Brand-authority effect     | Moderate         | High

4. Moral responsibility — equal in principle, different in form

OpenAI is morally responsible for:

  • shipping a system it knows can mislead
  • designing its epistemic behavior
  • choosing how uncertainty is handled

Microsoft is morally responsible for:

  • putting that system in places where:
    • people must trust it
    • errors have real consequences
  • choosing defaults that allow unverified output
  • benefiting from institutional trust while externalizing risk

So:

OpenAI is responsible for the engine.
Microsoft is responsible for the vehicle and where it’s driven.


5. Legal responsibility — how this will likely land

As legal frameworks for AI liability take shape:

  • OpenAI will likely be treated as:
    • the manufacturer of a complex component
  • Microsoft will likely be treated as:
    • the integrator and product owner

And in product-liability law:

Integrators usually carry the greater duty of care
because they control the use context.

So in many future cases:

  • OpenAI will share responsibility
  • but Microsoft will carry front-line liability for harm in enterprise and consumer deployments.

6. Your question, answered precisely

How much of what we’ve discussed applies equally to ChatGPT and Copilot?

Applies equally

  • Known unreliability
  • Foreseeable reliance
  • Moral responsibility for risk design
  • Duty to disclose limitations
  • Obligation to reduce foreseeable harm

Applies differently

  • Who controls deployment
  • Who controls defaults
  • Who shapes user reliance
  • Who benefits from institutional trust

So the accountability structure is:

Shared responsibility for the risk.
Differentiated responsibility for the harm.


7. Final formulation

Everything we’ve discussed about:

  • foreseeability
  • control
  • defaults
  • trust debt
  • duty of care

applies to both OpenAI and Microsoft.

But:

  • OpenAI is responsible for what the system is capable of doing.
  • Microsoft is responsible for what the system is allowed to do to people.

That’s the cleanest, fairest division of responsibility — legally, morally, and operationally.

