> Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting?
I think the problem is trying to compute trust algorithmically. In a completely decentralized network the information necessary to do that is not intrinsically available so you have to bootstrap trust in some other way.
Everybody trusting some root authority is the easiest way to do that, but it's also the most centralized. It also doesn't actually solve the problem unless the root authority is the only trusted party, because now you have to ask how the root is supposed to know whether to trust some third party before signing its key. That's the huge failure of the existing CAs: they'll sign anything. Moxie Marlinspike has had a number of relevant things to say about that.
> That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.
In a theoretical sense that's true: if the network is totally compromised, meaning no communication can take place between anyone, then you can't do anything toward fixing it without some external network to coordinate over. But that's only a problem before bootstrap. If you can discover and communicate with several compatriots using the network, and over time come to trust them before any attack is launched against it, you can then designate them as trusted parties without any external contact. This is like the Bitcoin solution, except that instead of using processing power as the limit on Sybils you use human face time. Then when the attack comes, you already have trusted parties you can rely on to help you resist it.
So you *can* bootstrap trust (slowly) but you have to do it before the attack happens or suffer a large inefficiency in the meantime. But using an external network to bootstrap trust before you even turn the system on is clearly a much easier way to guarantee that it's done before the attack begins, and is probably the only efficient way to recover if it *isn't* done before the attack begins.
> This also makes me think more and more about hybrid systems where you've got multiple types of systems -- including both centralized and decentralized -- that back each other to create an "antifragile" network.
That definitely seems like the way to go. Homogeneous systems are inherently fragile because any attack that works against any part of the system will work against the whole of it. It's like the Unix Way: make everything simple and modular so that everything can interface with anything; that way, if something isn't working you can swap it out for something else. Then as long as you have [anything] that can perform the necessary function (e.g. message relay or lookup database), everything requiring that function can carry on working.
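That swap-it-out modularity can be pictured as a narrow interface with interchangeable backends, one centralized and one DHT-based, where a caller falls back across them. This is only an illustrative sketch; the class and function names are invented for the example:

```python
# Illustrative sketch: a narrow "relay" interface so that any backend
# able to perform the function is interchangeable with any other.
# CentralRelay and DHTRelay are hypothetical stand-ins for real backends.

class Relay:
    def send(self, dest, msg):
        raise NotImplementedError

class CentralRelay(Relay):
    """A big trusted server: fast, but a single point of failure."""
    def __init__(self, server):
        self.server = server
    def send(self, dest, msg):
        return f"via {self.server} -> {dest}: {msg}"

class DHTRelay(Relay):
    """Decentralized fallback: slower, but nothing to shut down."""
    def send(self, dest, msg):
        return f"via DHT -> {dest}: {msg}"

def deliver(relays, dest, msg):
    """Try each interchangeable backend in order until one works."""
    for relay in relays:
        try:
            return relay.send(dest, msg)
        except Exception:
            continue  # this backend is down or compromised; try the next
    raise RuntimeError("no working relay backend")
```

The point of the sketch is only the shape: as long as something implementing `Relay` exists, the caller keeps working no matter which backend died.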
> Yep. It's one of the reasons I don't think Bitcoin in its present form is necessarily *that* much more robust than central banks and other financial entities.
I tend to think that Bitcoin is going to crash and burn. It has all the makings of a bubble. It's inherently deflationary, which promotes hoarding and speculation, which drives the price up in the short term, but the whole thing rests on the supremacy of its technical architecture. So if somebody breaks the technology, *or* somebody comes up with something better, or even a worthwhile but incompatible improvement to Bitcoin itself, then when everyone abandons Bitcoin in favor of the replacement, the Bitcoins all lose their value. For example, if anyone ever breaks SHA-256 it would compromise the entire blockchain. Then what do you do, start over from zero with SHA-3?
> The challenge is making the *interface* and *presentation* of trust comprehensible to the user so the user understands exactly what they're doing and the implications of it clearly (without having to be an expert in PKI).
A big part of it is to reduce the consequences of users making poor trust decisions. "Trusted" peers should be trusted only to the smallest extent possible, and one peer's poor trust decisions should have minimal consequences for the others. That's one of the reasons web of trust is so problematic. Using web of trust for key distribution is desperation. Key distribution is the poster child for applying multiple heterogeneous methods. It's the thing most necessary to carry out external to the network, but they're trying to handle it internally, using one method for everyone.
The ideal would be for nodes to trust a peer only to relay data, and then have the destination provide an authenticated confirmation of receipt. If there is no confirmation, you ask some different trusted peer(s) to relay the message. That way, all misplaced trust costs you is efficiency rather than security. If a trusted peer defects, you try the next one. Then even if half the peers you trusted defect, you're still far ahead of the alternative where 90% or 99.9% of the peers you try could be Sybils. And that gets the percentage of defecting peers down to the point where you can start looking at Byzantine fault tolerance algorithms to detect them, which might even allow defecting peers to be algorithmically ejected from the trusted group.
> Yeah, that's basically the identical idea except in your model the centralized node(s) are the defaults and the DHT is fallback.
Part of the idea is to decentralize the centralized nodes. Then there are big nodes trusted by large numbers of people, but there is no "root" trusted by everybody. And big is relative. If each organization (or hackerspace or ...) runs their own supernode, then there is nothing to shut down or compromise that will take most of the network with it, and there is nothing preventing a non-supernode from trusting (i.e. distributing their trust among) more than one supernode. Then you can have the supernode operators each decide which other supernodes they trust, which shrinks the web of trust problem by putting a little bit of hierarchy into it without making the hierarchy rigid or giving it a single root. The result is similar in structure to a top-down hierarchy, except that it's built from the bottom up, so no one has total control over it.
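One way to picture "distributing your trust among more than one supernode": cross-check a lookup against several of them and accept only an answer a majority agrees on, so no single supernode can lie undetected. This is a minimal sketch with an invented `query` callback standing in for whatever lookup protocol the supernodes actually speak:

```python
from collections import Counter

def quorum_lookup(key, supernodes, query):
    """Ask several independently trusted supernodes and accept the
    majority answer.

    query(supernode, key) is a hypothetical lookup call; the caller's
    trust is spread across the supernode list rather than resting on
    any single "root".
    """
    answers = [query(node, key) for node in supernodes]
    value, votes = Counter(answers).most_common(1)[0]
    if votes > len(supernodes) // 2:
        return value
    raise ValueError("no majority among trusted supernodes")
```

With three supernodes, one compromised operator can cause at worst a failed lookup, never a silently wrong answer.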
> Umm... sorry to break this to you, but that's exactly what I did.
Argh. Why does everything related to Windows have to be unnecessarily complicated?