We’ve had enough of digital monopolies and surveillance capitalism. We want an alternative: a world that works for everyone, true to the original intention of the web and the net.

We seek a world of open platforms and protocols, with real choice of applications and services for people. We care about privacy, transparency and autonomy. Our tools and organisations should fundamentally be accountable and resilient.
On Aug 12, 2014, at 5:23 PM, David Geib <email@example.com> wrote:

> > Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting?
>
> I think the problem is trying to compute trust algorithmically. In a completely decentralized network the information necessary to do that is not intrinsically available, so you have to bootstrap trust in some other way.
>
> Everybody trusting some root authority is the easiest way to do that, but it's also the most centralized. It also doesn't actually solve the problem unless the root authority is also the only trusted party, because now you have to ask how the root is supposed to know whether to trust some third party before signing it. That's the huge fail with the existing CAs. They'll sign anything. Moxie Marlinspike has had a number of relevant things to say about that.
>
> That's the general pattern that I see. The easiest approach is the most centralized approach... at least if you neglect the longer term systemic downsides of it. Maybe over-centralization should be considered a form of technical debt.

I agree that root CAs are horrible. I have had them do things like send me a private key unencrypted to gmail. I am not making that up. No passphrase. To gmail.

Hmm... Yeah, I think doing trust better is a must.

Btw... Some folks responded to my post lamenting that I had given up on decentralization. That's not true at all. I am just doing two things. One is trying to spin the problem around and conceptualize it differently. The other is giving the problem the respect it deserves. It's a very, very hard problem... which is part of why I like it. :)

> > That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.
> In a theoretical sense that's true, because if the network is totally compromised, meaning no communication can take place between anyone, then you can't do anything in the direction of fixing it without having some external network to use to coordinate. But that's only a problem before bootstrap. If you can discover and communicate with several compatriots using the network, and over time come to trust them before any attack is launched against the network, you can then designate them as trusted parties without any external contact. This is like the Bitcoin solution, except that instead of using processing power as the limit on Sybils you use human face time. Then when the attack comes you already have trusted parties you can rely on to help you resist it.

I'm not sure those kinds of approaches can work on a global scale. How do people in Russia or South Africa determine their trust relationship with someone in New York? I guess you could traverse the graph, but now you are back to trying to compute trust.

> So you *can* bootstrap trust (slowly), but you have to do it before the attack happens or suffer a large inefficiency in the meantime. But using an external network to bootstrap trust before you even turn the system on is clearly a much easier way to guarantee that it's done before the attack begins, and is probably the only efficient way to recover if it *isn't* done before the attack begins.

Another point on this... history has taught us that governments and very sophisticated criminals are often much further ahead of the game than we suspect. My guess is that if a genuine breakthrough in trust is made, it will be recognizable as such and those forces will get in early. The marketing industry is also very sophisticated, though not quite as cutting edge as the overworld and the underworld.

On a more pragmatic note, I think you have a chicken-or-egg problem with the idea of bootstrapping before turning the system on.
History has also demonstrated that in computing, "release early, release often" wins hands down. Everything I am familiar with, from the web to Linux to even polish-obsessed creatures like the Mac, has followed this path. If it doesn't exist yet nobody will use it, and if nobody is using it nobody will bootstrap trust for it, because nobody is using it, therefore nobody will ever use it, therefore it's a waste of time...

> Then as long as you have [anything] that can perform the necessary function (e.g. message relay or lookup database), everything requiring that function can carry on working.

You can have your cake and eat it too. It's easy. Just make two cakes: a centralized cake and a decentralized cake.

> I tend to think that Bitcoin is going to crash and burn. It has all the makings of a bubble. It's inherently deflationary, which promotes hoarding and speculation, which causes the price to increase in the short term, but the whole thing is resting on the supremacy of its technical architecture. So if somebody breaks the technology *or* somebody comes up with something better, or even a worthwhile but incompatible improvement to Bitcoin itself, then when everyone stops using Bitcoin in favor of the replacement the Bitcoins all lose their value. For example, if anyone ever breaks SHA256 it would compromise the entire blockchain. Then what do you do, start over from zero with SHA3?

I think the tech behind it is more interesting than Bitcoin itself. It reminds me of the web. Hypertext, browsers, and the new hybrid thin client model they led to was interesting. The internet was certainly damn interesting. But pets.com and flooz? Not so much.

I still need to take a deep, deep dive into the blockchain technology. I get the very basic surface of it, but I am really curious about how it might be used as part of a solution to the trust bootstrapping problem.
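The point about a SHA256 break compromising the entire blockchain comes down to how blocks link: each block commits to the hash of the one before it. Here's a deliberately tiny sketch of that linkage; `make_block` and `verify` are hypothetical helpers, and a real Bitcoin header also carries a Merkle root, timestamp, difficulty target, and nonce:

```python
import hashlib
import json

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def make_block(prev_hash: bytes, transactions: list) -> dict:
    """Toy block: its hash commits to the previous block's hash."""
    body = json.dumps(transactions).encode()
    return {
        "prev_hash": prev_hash.hex(),
        "tx": transactions,
        "hash": sha256d(prev_hash + body).hex(),
    }

def verify(chain) -> bool:
    """Recompute every link. Editing any old block (or forging a
    SHA-256 collision) invalidates all links after it."""
    for prev, block in zip(chain, chain[1:]):
        if block["prev_hash"] != prev["hash"]:
            return False
        body = json.dumps(block["tx"]).encode()
        expected = sha256d(bytes.fromhex(block["prev_hash"]) + body).hex()
        if block["hash"] != expected:
            return False
    return True

genesis = make_block(b"\x00" * 32, ["coinbase"])
b1 = make_block(bytes.fromhex(genesis["hash"]), ["alice->bob"])
b2 = make_block(bytes.fromhex(b1["hash"]), ["bob->carol"])

print(verify([genesis, b1, b2]))  # True
b1["tx"] = ["alice->mallory"]     # tamper with history...
print(verify([genesis, b1, b2]))  # False: the chain no longer links
```

The security of the whole ledger reduces to the collision resistance of that one hash function, which is exactly why "start over from zero with SHA3" is the uncomfortable answer.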
If hybrid overlapping heterogeneous solutions are the way forward for network robustness, then maybe a similar concurrent cake solution exists for trust.

At some point I think someone is going to successfully attack Bitcoin. What happens then? I don't know. It has some value as a wire transfer protocol if nothing else, but the sheen will certainly wear off.

> The ideal would be for nodes to only trust a peer to relay data, and then have the destination provide an authenticated confirmation of receipt. Then if there is no confirmation you ask some different trusted peer(s) to relay the message. That way all misplaced trust costs you is efficiency rather than security. If a trusted peer defects then you try the next one. Then even if half the peers you trusted defect, you're still far ahead of the alternative where 90% or 99.9% of the peers you try could be Sybils. And that gets the percentage of defecting peers down to the point where you can start looking at the Byzantine fault tolerance algorithms to detect them, which might even allow defecting peers to be algorithmically ejected from the trusted group.

This is basic to any relayed crypto peer-to-peer system, including the one I built. Every packet is MAC'd using a key derived from a DH agreement, etc. I think the harder thing is defending not against Sybils vs. the data itself but Sybils vs. the infrastructure. Criminals, enemy governments, authoritarian governments, etc. might just want to take the network down, exploit it to carry out a DDOS amplification attack against other targets, or make it unsuitable for a certain use case.

> Part of the idea is to decentralize the centralized nodes. Then there are big nodes trusted by large numbers of people, but there is no "root" which is trusted by everybody. And big is relative. If each organization (or hackerspace or ...) runs their own supernode then there is nothing to shut down or compromise that will take most of the network with it, and there is nothing preventing a non-supernode from trusting (i.e. distributing their trust between) more than one supernode. Then you can have the supernode operators each decide which other supernodes they trust, which shrinks the web of trust problem by putting a little bit of hierarchy into it, without making the hierarchy rigid or giving it a single root. The result is similar in structure to a top-down hierarchy except that it's built from the bottom up, so no one has total control over it.

I like this... especially the part about shrinking the problem. It reminds me of how old NNTP and IRC and similar protocols were run. You had a network of servers run by admin volunteers, so the trust problem was manageable. But there was no king per se... a bit of an oligarchy though.

> > Umm... sorry to break this to you, but that's exactly what I did.
>
> Argh. Why does everything related to Windows have to be unnecessarily complicated?

That's nothing. Get a load of what I had to pull out of my you know what to get Windows to treat a virtual network properly with regard to firewall policy. As far as I know I am the first developer to pull this off, and it's not pretty. I think I am first on this one by virtue of masochism.

https://github.com/zerotier/ZeroTierOne/commit/f8d4611d15b18bf505de9ca82d74f5102fc57024#diff-288ff5a08b3c03deb7f81b5d45228018R628
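The relay-with-authenticated-receipt idea above can be sketched roughly as follows. This is a hedged illustration, not ZeroTier's actual protocol: the `Relay`, `Destination`, and `send_via_relays` names are invented, the fixed HMAC key stands in for one derived from a DH agreement, and the strike counter is only a crude stand-in for real Byzantine fault tolerance:

```python
import hmac
import hashlib

class Relay:
    """A trusted relay. honest=False models a defector that silently
    drops traffic; the sender only observes the missing ACK."""
    def __init__(self, name: str, honest: bool = True):
        self.name, self.honest = name, honest

    def forward(self, packet: bytes, destination):
        if self.honest:
            return destination.receive(packet)  # returns an ACK MAC
        return None  # defector drops the packet

class Destination:
    def __init__(self, shared_key: bytes):
        self.key = shared_key  # e.g. derived from a DH agreement

    def receive(self, packet: bytes) -> bytes:
        # Authenticated confirmation of receipt: a MAC over the packet.
        return hmac.new(self.key, b"ACK" + packet, hashlib.sha256).digest()

def send_via_relays(packet, relays, dest, key, strikes, max_strikes=3):
    """Try trusted relays in order until the destination's ACK verifies.
    A defecting relay costs a retry (efficiency, not security), and
    repeated failures get it ejected from the trusted set."""
    expected = hmac.new(key, b"ACK" + packet, hashlib.sha256).digest()
    for relay in list(relays):
        ack = relay.forward(packet, dest)
        if ack is not None and hmac.compare_digest(ack, expected):
            return relay.name  # delivered and confirmed
        strikes[relay.name] = strikes.get(relay.name, 0) + 1
        if strikes[relay.name] >= max_strikes:
            relays.remove(relay)  # eject the persistent defector
    raise RuntimeError("no trusted relay could deliver the packet")

key = b"\x42" * 32  # stand-in for a DH-derived session key
dest = Destination(key)
relays = [Relay("sybil", honest=False), Relay("friend")]
strikes = {}
print(send_via_relays(b"hello", relays, dest, key, strikes))  # friend
```

The key property is the one the email states: because the receipt is MAC'd end to end, a misbehaving relay can only delay delivery, never forge it, so trusting the wrong peer degrades efficiency rather than security.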