David Geib [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-14 04:30:54
> That's the general pattern that I see. The easiest approach is the most centralized approach... at least if you neglect the longer term systemic downsides of it. Maybe over-centralization should be considered a form of technical debt.

It's more like a security vulnerability. Single point of failure, single point of compromise and a choke point for censorship and spying.

> I agree that root CAs are horrible. I have had them do things like send me a private key unencrypted to gmail. I am not making that up. No passphrase. To gmail.

And don't forget that they're all fully trusted. So it's completely futile to try to find a secure one because the insecure ones can still give the attackers a certificate with your name.

> Btw... Some folks responded to my post lamenting that I had given up on decentralization. That's not true at all. I am just doing two things. One is trying to spin the problem around and conceptualize it differently. The other is giving the problem the respect it deserves. It's a very, very hard problem... Which is part of why I like it. :)

It's definitely a fun problem. Part of it is to pin down just what "decentralization" is supposed to mean. If you start with the ideologically pure definition where each node is required to be totally uniform, you end up banging your head against the wall. You want a node running on batteries with an expensive bandwidth provider to be able to participate in the network, but that shouldn't exclude the possibility of usefully exploiting the greater resources of other nodes that run on AC power and have cheap wired connections. So once you admit the possibility of building a network which is both decentralized and asymmetrical, it becomes an optimization problem. How close to the platonic ideal can you get without overly compromising efficiency or availability?
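
To make that concrete, here's a rough sketch (Python, with made-up field names, not a proposal for any particular network) of what the asymmetry could look like: every node is a full participant, but only nodes that report mains power, unmetered bandwidth and good uptime volunteer for the expensive relay work.

    # A toy sketch of letting heterogeneous nodes self-select roles: every node
    # participates, but only well-resourced nodes volunteer for the expensive
    # relay/rendezvous work. All names and thresholds here are invented.

    from dataclasses import dataclass

    @dataclass
    class NodeProfile:
        on_battery: bool         # e.g. a phone
        metered_bandwidth: bool  # expensive or capped uplink
        uptime_fraction: float   # observed availability, 0.0 - 1.0

    def choose_roles(profile: NodeProfile) -> set[str]:
        roles = {"participant"}          # every node can send, receive, verify
        if not profile.on_battery and not profile.metered_bandwidth:
            roles.add("relay")           # forward traffic for weaker peers
            if profile.uptime_fraction > 0.9:
                roles.add("rendezvous")  # help intermittently-connected peers find each other
        return roles

    print(choose_roles(NodeProfile(on_battery=True,  metered_bandwidth=True,  uptime_fraction=0.3)))
    print(choose_roles(NodeProfile(on_battery=False, metered_bandwidth=False, uptime_fraction=0.99)))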

> I'm not sure those kinds of approaches can work on a global scale. How do people in Russia or South Africa determine their trust relationship with someone in New York? I guess you could traverse the graph, but now you are back to trying to compute trust

But that's the whole problem, isn't it? If you have no direct contact and you have no trusted path you really have nothing. That's why web of trust is the last resort. It's the thing that comes closest to working when nothing else will. Which is also why it's terrible. Because you only need it when nothing else works but those are also the times when web of trust is at its weakest.

The key is to find something better from the context of the relationship. Even if you live far apart you might be able to meet once and exchange keys. If you have a mutual trusted friend you can use that. If you have an existing organizational hierarchy then you can traverse that to find a trusted path. If one of you has a true broadcast medium under your control then you can broadcast your key so that anyone can get it.
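
For the "traverse the hierarchy" case, the mechanics are just a search over whatever trust edges you already know about, with a cap on path length since every extra hop dilutes trust. A minimal sketch; the graph and the names are invented:

    # Breadth-first search for a trusted path over a locally known trust graph.
    # A real system would weight edges and discount trust per hop.

    from collections import deque

    trust_graph = {
        "me":            ["alice", "employer_ca"],
        "alice":         ["bob"],
        "employer_ca":   ["moscow_office"],
        "moscow_office": ["dmitri"],
    }

    def trusted_path(source: str, target: str, max_hops: int = 4):
        queue = deque([[source]])
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            if len(path) > max_hops:          # long paths dilute trust; give up
                continue
            for peer in trust_graph.get(path[-1], []):
                if peer not in path:          # avoid cycles
                    queue.append(path + [peer])
        return None

    print(trusted_path("me", "dmitri"))   # ['me', 'employer_ca', 'moscow_office', 'dmitri']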

If you don't have *anything*, you have to ask what it is you're supposed to be trusting. If you start communicating with some John Doe on the other side of the world with no prior relationship or claim to any specific credentials, does it actually matter that he wants to call himself John Smith instead of John Doe? At that point the only thing you can really ask to be assured of is that when you communicate with "John Smith" tomorrow it's the same "John Smith" it was yesterday.
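
That guarantee is essentially what SSH gives you with trust-on-first-use: pin the key you saw the first time and complain loudly if it ever changes. A toy, in-memory sketch; a real client would persist the pins and compare full keys or strong fingerprints:

    import hashlib

    pins: dict[str, str] = {}   # claimed name -> fingerprint seen on first contact

    def fingerprint(public_key: bytes) -> str:
        return hashlib.sha256(public_key).hexdigest()

    def check_continuity(name: str, public_key: bytes) -> bool:
        fp = fingerprint(public_key)
        if name not in pins:
            pins[name] = fp      # first contact: nothing to compare against
            return True
        return pins[name] == fp  # later contacts: must be the same key as before

    assert check_continuity("John Smith", b"key-material-1")
    assert check_continuity("John Smith", b"key-material-1")
    assert not check_continuity("John Smith", b"a-different-key")  # someone else now?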

> Another point on this... History has taught us that governments and very sophisticated criminals are often much more ahead of the game than we suspect they are. My guess is that if a genuine breakthrough in trust is made it will be recognizable as such and those forces will get in early. The marketing industry is also very sophisticated, though not quite as cutting edge as the overworld and the underworld.

Oh sure. Trust is a social issue. Criminals and marketing departments (now there's a combination that fits like a glove) have engaged in social engineering forever. That's nothing new. Maybe the question is whether there are any new *solutions* to the old problems. Some combination of global instantaneous communication and digital storage might make it harder for people to behave dishonestly or inconsistently without getting caught. But then we're back to computing trust.

And maybe that's not wrong. The real problem is trying to compute trust with no points of reference. Once you have some externally sourced trust anchors, you're back to heterogeneous and hybrid solutions.
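
One way to picture "computing trust from anchors" is to seed a few anchors at full trust and let it decay outward along the edges, so trust becomes a matter of degree rather than all-or-nothing. Purely illustrative numbers and names:

    trust_edges = {
        "anchor_employer": ["alice", "bob"],
        "alice": ["carol"],
        "bob":   ["carol", "dave"],
    }

    def propagate(anchors: set[str], decay: float = 0.5, rounds: int = 3) -> dict[str, float]:
        score = {a: 1.0 for a in anchors}
        for _ in range(rounds):
            for node, peers in trust_edges.items():
                for peer in peers:
                    candidate = score.get(node, 0.0) * decay
                    if candidate > score.get(peer, 0.0):
                        score[peer] = candidate    # keep the best path found so far
        return score

    print(propagate({"anchor_employer"}))
    # alice/bob end up around 0.5, carol/dave around 0.25 -- trust decays per hop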

> On a more pragmatic note, I think you have a chicken or egg problem with the idea of bootstrapping before turning the system on.

Just the opposite. Bootstrapping first *is* the ship-early method, because you bootstrap based on existing trust networks rather than trying to construct a new one from whole cloth. The question is how to gather the existing information in a way that provides a good user experience. You can imagine something like Facebook: You need to add a couple of friends manually, but then it can start asking whether their friends are your friends. Though that obviously brings privacy implications; maybe something like homomorphic encryption could improve it?

But now it's starting to get complicated. I wonder if it makes sense to factor it out. Separate the trust network from the communications network. A personal trust graph as a local API could be extremely useful in general. And then the entities can start tagging themselves with other data like their email address, PGP key, snow key, website, etc. A little bit social network + web of trust + key:value store.
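
Something like this is the shape I have in mind for the factored-out piece: a local store that holds entities, who-vouches-for-whom edges and arbitrary key:value tags, exposed to other applications through a small API. The interface below is entirely hypothetical, just to make the shape concrete:

    from typing import Optional

    class TrustStore:
        def __init__(self):
            self.entities = {}   # entity id -> dict of key:value tags
            self.edges = {}      # entity id -> set of entities it vouches for

        def add_entity(self, entity_id: str, **tags: str) -> None:
            self.entities.setdefault(entity_id, {}).update(tags)

        def vouch(self, from_id: str, for_id: str) -> None:
            self.edges.setdefault(from_id, set()).add(for_id)

        def lookup(self, entity_id: str, tag: str) -> Optional[str]:
            return self.entities.get(entity_id, {}).get(tag)

    store = TrustStore()
    store.add_entity("alice", email="alice@example.org", pgp="0xDEADBEEF")
    store.vouch("me", "alice")
    print(store.lookup("alice", "pgp"))   # any local app could ask the same question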

> I think the tech behind it is more interesting than Bitcoin itself. It reminds me of the web. Hypertext, browsers, and the new hybrid thin client model they led to was interesting. The internet was certainly damn interesting. But pets.com and flooz? Not so much.

Agreed. It's interesting because it solves a lot of the hard problems with digital currencies but not all of them. It's clearly an evolutionary step on the road to something else. Which is what concerns me about it: Inertia and market share will allow it to survive against competitors that are only slightly better but that just means more people will have built their homes on the flood plain by the time the rain comes.

> I still need to take a deep, deep dive into the block chain technology. I get the very basic surface of it, but I am really curious about how it might be used as part of a solution to the trust bootstrapping problem. If hybrid overlapping heterogeneous solutions are the way forward for network robustness, then maybe a similar concurrent cake solution exists for trust.

Relevant: http://www.aaronsw.com/weblog/squarezooko

This is essentially the roadmap that led to namecoin, which (among other things) disproved Zooko's Triangle.

Actually that's an interesting point. Zooko's triangle was supposed to be that you couldn't have a naming system which is decentralized, has global human-readable names and is secure. And it fails through the same kind of overgeneralization we had here: you don't need centralization as long as you have trust. So bitcoin/namecoin puts its trust in the majority as determined by processing power and solves the triangle by providing trust without centralization.

An interesting question is what we might use instead of computing power to create a trust democracy that would allow the good guys to retain a majority.
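
For anyone who hasn't read the post, the core of the scheme is small enough to sketch: an append-only, hash-chained log of name registrations where the first valid claim on a name wins, and everyone replaying the same chain resolves names identically. Proof-of-work, fees and transfers are all left out, and the names are made up:

    import hashlib, json

    chain = []   # list of blocks; in namecoin this is the shared blockchain
    names = {}   # name -> public key, derived by replaying the chain

    def register(name: str, pubkey: str) -> bool:
        if name in names:                     # first come, first served
            return False
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = json.dumps({"prev": prev, "name": name, "key": pubkey}, sort_keys=True)
        chain.append({"prev": prev, "name": name, "key": pubkey,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})
        names[name] = pubkey
        return True

    print(register("example.bit", "key-A"))   # True  - the name is now claimed
    print(register("example.bit", "key-B"))   # False - later claims are rejected
    print(names["example.bit"])               # "key-A", the same answer on every node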

> This is basic to any relayed crypto peer to peer system including the one I built. Every packet is MAC'd using a key derived from a DH agreement, etc.

Right, the crypto is a solved problem. The issue is that if you send a packet to a Sybil, it throws it away. After the timeout you send the packet via some other node. If it's also a Sybil, it throws it away. If the large majority of the nodes are Sybils, that's where the inefficiency comes from. You would essentially have to broadcast the message in order to find a path that contains no Sybils. Trust should be able to solve the problem by making available several "trusted" paths, only a minority of which contain Sybils.
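
Back-of-the-envelope version: if a fraction f of the candidate relays silently drop your traffic, you burn about 1/(1-f) timeout-delayed attempts per packet before you hit an honest one. A quick simulation to put numbers on it (the fractions are arbitrary):

    import random

    def attempts_until_honest(sybil_fraction: float, rng: random.Random) -> int:
        attempts = 0
        while True:
            attempts += 1
            if rng.random() >= sybil_fraction:   # picked an honest relay this time
                return attempts

    rng = random.Random(1)
    for f in (0.5, 0.9, 0.99):
        trials = [attempts_until_honest(f, rng) for _ in range(10_000)]
        print(f"sybil fraction {f}: ~{sum(trials) / len(trials):.1f} attempts on average")
    # roughly 2, 10 and 100 attempts respectively, each one paid for with a timeout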

> I think the harder thing is defending not against Sybils vs. the data itself but Sybils vs the infrastructure. Criminals, enemy governments, authoritarian governments, etc. might just want to take the network down, exploit it to carry out a DDOS amplification attack against other targets, or make it unsuitable for a certain use case.

Some attacks are unavoidable. If the attacker has more bandwidth than the sum of the honest nodes in the network, you lose. But those are the attacks that scale inversely: the more honest nodes in the network, the harder the attack. And the more you can reduce the number of centralized choke points, the harder it is to take down the network as a whole.

Amplification is also relatively easy to mitigate. Avoid sending big packets in response to small packets. And if you have to do that, first send a small packet with a challenge that the requester has to copy back, to provide evidence that the node you're about to reply to really is the node that made the request rather than a spoofed victim. Relevant: http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-02
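
The pattern in that draft, stripped to its bones: never send a big reply to an unverified source address; first hand back a small cookie bound to the claimed address and require it to be echoed, since a spoofed victim never sees the cookie. A minimal sketch with invented message formats:

    import hmac, hashlib, os
    from typing import Optional

    SERVER_SECRET = os.urandom(32)

    def make_cookie(client_addr: str) -> str:
        # Small, stateless challenge bound to the claimed source address.
        return hmac.new(SERVER_SECRET, client_addr.encode(), hashlib.sha256).hexdigest()[:16]

    def handle_request(client_addr: str, cookie: Optional[str], big_reply: bytes) -> bytes:
        if cookie != make_cookie(client_addr):
            return b"COOKIE:" + make_cookie(client_addr).encode()   # tiny packet, nothing to amplify
        return big_reply                                            # address verified, safe to send

    first = handle_request("198.51.100.7", None, b"X" * 4096)       # a spoofer never sees this cookie
    again = handle_request("198.51.100.7", first.split(b":", 1)[1].decode(), b"X" * 4096)
    print(len(first), len(again))   # small challenge first, then the full 4096-byte reply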

It's the targeted attacks that are a bear because they're heterogeneous *attacks*. In order to contact a node, the node itself has to be online, there has to be an honest path between you and the node, and you have to be able to discover that path. So the attacker can dump traffic on the low-capacity honest paths to take them offline and then create a bunch of Sybils to make discovering the higher-capacity paths more difficult, and you have no way to distinguish between the target legitimately being offline and merely all the paths you've tried being compromised. The answer is to somehow know ex ante which paths are honest, but easier said than done.

> That's nothing. Get a load of what I had to pull out of my you know what to get windows to treat a virtual network properly with regard to firewall policy. As far as I know I am the first developer to pull this off, and it's not pretty. I think I am first on this one by virtue of masochism.

Thanks again, Microsoft. Though I think the OpenVPN users might have beat you to the equivalent solution, e.g. http://superuser.com/questions/120038/changing-network-type-from-unidentified-network-to-private-network-on-an-openvpn

(And as a public service announcement, 1.1.1.1 is no longer a "fake" address as the 1.0.0.0/8 block was assigned to APNIC. http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml)

