
Things Playing Nice

It would be good to enable easy federation of services.

When this works, it allows groups to decide to trust and cooperate with each other. Self-organizing groups can then support one another, with a path to revoke trust when necessary.

There is also the possibility of a federated system being used to provide a more robust hosting solution for small websites, which are more vulnerable to traffic spikes and deplatforming. This is discussed in more detail in the Hosting document.

I'd like to explore what could be done with a dynamic trust system that would not force users to trust each other.

Trust by Reputation

The reputation of individual network nodes can, in some circumstances, be derived from gossip about gossip. If a subjective claim made by one peer can meaningfully be compared with other such claims, then by comparing claims gathered from many peers, any peer can decide how confident it is in a given claim.

Trust by reputation can only support things that can be independently verified by gossip.

Server health and accessibility can be independently verified, but accessibility is not objective because a network partition can cause two honest peers to give different answers to the same question.

Principals are peers that have a stake in the trust.

Remotes are peers that collaborate with a principal in a way that serves the trust.

It is expected that, for each feature, a principal will also act as another peer's remote.

Assuming the majority of remotes are trustworthy, a principal can use gossip-about-gossip to establish trust in each claim it receives. More than 50% of a principal's remotes would have to be malicious and cooperating in order to make the principal mistakenly trust a false claim.
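To make the majority rule concrete, here is a minimal sketch in Python. It is only an illustration of the idea, not an existing implementation; the function name `evaluate_claim` and the "healthy"/"unreachable" report values are assumptions chosen for the example.

```python
from collections import Counter

def evaluate_claim(claims: dict[str, str]) -> str | None:
    """Decide whether to trust a claim based on gossip from remotes.

    `claims` maps each remote's identifier to the value it reported
    (e.g. "healthy" / "unreachable" for a server-health check).
    Returns the value only if a strict majority of remotes agree on it,
    otherwise None (no trust established).
    """
    if not claims:
        return None
    tally = Counter(claims.values())
    value, votes = tally.most_common(1)[0]
    # A strict majority is required: more than half of *all* remotes,
    # not merely a plurality, must report the same value.
    if votes * 2 > len(claims):
        return value
    return None


# Example: three of four remotes report the server as healthy, so the
# principal accepts the claim despite one dissenting (or partitioned) remote.
reports = {"remote-a": "healthy", "remote-b": "healthy",
           "remote-c": "unreachable", "remote-d": "healthy"}
assert evaluate_claim(reports) == "healthy"
```

Under this rule, a single partitioned or malicious remote cannot flip the principal's decision; only a cooperating majority can.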

| Feature | Objective? | Remarks |
| --- | --- | --- |
| Server health | ✔️ | Permits malware to spam the check if it is shared. Because participants cannot be trusted, any secrets on the remote's side must be assumed insecure. |
| Static file hosting | ✔️ | Sharing illegal data via this technique creates a legal and governance issue; remotes must manage their exposure. This requires that remotes either detect illegal data (hard, perhaps impossible) or can strictly control which peers they accept as principals. A Kademlia-like Distributed Hash Table (DHT) could be used to verify file contents (see the content-addressing sketch below the table); the same keys could also be what gets resolved during addressing/naming. |
| Logging/tracing | ✔️ | Snapshotting could make this a specialized case of static file hosting, at the expense of freshness. Would this be valuable? I'm not sure I see the value. |
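As a sketch of how the DHT-based verification could work, assuming content addressing (the key a file is stored under is a hash of its bytes): any peer can then check that what a remote returned matches the key it asked for, without trusting the remote. The function names below are illustrative, not part of any existing system.

```python
import hashlib

def content_key(data: bytes) -> str:
    """Derive the DHT key for a blob from its own contents (content addressing)."""
    return hashlib.sha256(data).hexdigest()

def verify_blob(key: str, data: bytes) -> bool:
    """Check that bytes fetched from a remote actually hash to the key they
    were requested under, so a dishonest remote cannot substitute content."""
    return content_key(data) == key


blob = b"<html>hello</html>"
key = content_key(blob)               # published into the DHT / used for naming
assert verify_blob(key, blob)         # honest remote: content matches the key
assert not verify_blob(key, b"spam")  # tampered content is rejected
```

Because the key doubles as the name of the content, the same lookup can serve both addressing/naming and integrity verification.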