From: Adam Ierymenko
Subject: [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe."
Date: 2014-08-19 12:52:55
Getting to this a bit belatedly… :)

On Aug 7, 2014, at 9:32 AM, Michael Rogers <michael@briarproject.org> wrote:

> I don't think the Tsitsiklis/Xu paper tells us anything about
> centralisation vs decentralisation in general. It gives a very
> abstract model of a system where some fraction of a scarce resource
> can be allocated wherever it's needed. I'm not surprised that such a
> system has different queueing behaviour from a system with fixed
> allocation. But it seems to me that this result is a poor fit for your
> argument, in two respects.
> 
> First, the result doesn't necessarily apply beyond resource allocation
> problems - specifically, those problems where resources can be moved
> from place to place at no cost. I don't see the relevance to the
> lookup and routing problems you're aiming to solve with ZeroTier.

I have an admission to make: I did a very un-academic, right-brainy thing and made a bit of a leap. When I read “phase transition” it was an epiphany moment. Perhaps I’ve studied too much complexity and evolutionary theory, but I immediately got a mental image of a phase transition in state space, where a system takes on new properties. You see that sort of thing in those fields all the time.

But I don’t think it’s a huge leap. The question Tsitsiklis/Xu were looking at was storage allocation in a distributed storage pool (or an idealized form of that problem). Their research was backed by Google, which is obviously very interested in storage allocation problems. But I don’t think it’s a monstrous leap to go from storage allocation to bandwidth, routing, or trust. Those are all “resources,” and all of them can be moved or re-allocated. Many are dynamic rather than static resources.
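To make that intuition concrete, here's a toy discrete-time simulation in Python. It is emphatically not the Tsitsiklis/Xu model, just a cartoon of the idea: n stations each serve their own queue, and we can optionally withhold a small fraction p of the total service capacity in a central pool that always works on whichever queue is currently longest. All names and parameters here are made up for illustration.

    import random

    def simulate(n=50, steps=50_000, rho=0.96, p=0.0, mu=0.5, seed=42):
        # Toy sketch of the pooling intuition, NOT the Tsitsiklis/Xu
        # model: n stations, Bernoulli arrivals at rate rho*mu per
        # station per step.  A fraction p of total service capacity is
        # withheld from the stations and spent centrally on whichever
        # queue is currently longest.
        random.seed(seed)
        lam = rho * mu
        queues = [0] * n
        pool_tokens = 0.0
        backlog = 0
        for _ in range(steps):
            for i in range(n):
                if random.random() < lam:          # arrival at station i
                    queues[i] += 1
                if queues[i] and random.random() < mu * (1 - p):
                    queues[i] -= 1                 # local service
            pool_tokens += n * mu * p              # central capacity accrues...
            while pool_tokens >= 1.0:              # ...and drains the longest queues
                pool_tokens -= 1.0
                longest = max(range(n), key=queues.__getitem__)
                if queues[longest]:
                    queues[longest] -= 1
            backlog += sum(queues)
        return backlog / (steps * n)               # mean queue length per station

    if __name__ == "__main__":
        print("fixed allocation:", simulate(p=0.00))
        print("5% pooled:      ", simulate(p=0.05))

At a load of 0.96, the fixed-allocation run should show mean queues roughly an order of magnitude longer than the 5%-pooled run, which is the flavor of their phase-transition result.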

It’d be interesting to write these authors and ask them directly what they think. Maybe I’ll do that.

If you’ve been reading the other thread, we’re talking a lot about trust, and I’m starting to agree with David Geib that trust is probably the root of it. These other issues, like this one and the CAP theorem, are probably secondary: if trust can be solved, the rest can be tackled or the problem space can be redefined around them.

> Second, the advantage is gained by having a panoptic view of the whole
> system - far from being a blind idiot, the allocator needs to know
> what's happening everywhere, and needs to be able to send resources
> anywhere. It's more Stalin than Lovecraft.

I think it’s probably possible to have a coordinator that coordinates without knowing *much* about what it is coordinating, via careful and clever use of cryptography. I was more interested in the overarching theoretical question: is some centralization needed to achieve efficiency and the other things required for a good user experience, and if so, how much?

ZeroTier’s supernodes know that point A wants to talk to point B, and if NAT traversal is impossible and data has to be relayed then they also know how much data. But that’s all they know. They don’t know the protocol, the port, or the content of that data. They’re *pretty* blind. I have a suspicion it might be possible to do better than that, to make the blind idiot… umm… blinder.
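To illustrate, here's a sketch of that kind of blind relay in Python (using the third-party cryptography package; the names are hypothetical and this is not ZeroTier's actual code). The relay can count bytes per endpoint pair, but the payload is encrypted end to end, so protocol, port, and content never reach it:

    # pip install cryptography
    from cryptography.fernet import Fernet

    class BlindRelay:
        # Hypothetical relay in the spirit of what's described above:
        # it learns which endpoints talk and how many bytes flow, but
        # payloads are end-to-end encrypted, so it never sees port,
        # protocol, or content.
        def __init__(self):
            self.bytes_relayed = {}                # (src, dst) -> byte count

        def relay(self, src, dst, ciphertext):
            key = (src, dst)
            self.bytes_relayed[key] = self.bytes_relayed.get(key, 0) + len(ciphertext)
            return ciphertext                      # forwarded untouched

    # A and B share a key out of band (in practice, via key agreement).
    shared_key = Fernet.generate_key()
    relay = BlindRelay()

    wire = relay.relay("A", "B", Fernet(shared_key).encrypt(b"GET /secret HTTP/1.1"))
    print(Fernet(shared_key).decrypt(wire))        # B recovers the plaintext
    print(relay.bytes_relayed)                     # all the relay ever learned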

It would be significantly easier if it weren’t for NAT. NAT traversal demands a relaying maneuver that inherently exposes some metadata about the communication event taking place. But we already know NAT is evil and must be destroyed or the kittens will die.

> It's true that nobody's been able to ship a decentralised alternative
> to Facebook, Google, or Twitter. But that failure could be due to many
> reasons. Who's going to buy stock in a blind-by-design internet
> company that can't target ads at its users? How do you advertise a
> system that doesn't have a central place where people can go to join
> or find out more? How do you steer the evolution of such a system?

Sure, those are problems too. Decentralization is a multifaceted problem: technical, political, business, social, ...

But it’s not like someone’s shipped a decentralized Twitter that is equivalently fast, easy to use, etc., and it’s failed in the marketplace. It’s that nobody’s shipped it at all, and it’s not clear to me how one would build such a thing.

Keep in mind too that some of the profitability problems of decentralization are mitigated by the cost savings. A decentralized network costs orders of magnitude less to run. You don’t need data centers that consume hundreds of megawatts of power to handle every single computation and store every single bit of data. So your opportunities to monetize are lower but your costs are also lower. Do those factors balance out? Not sure. Nobody’s tried it at scale, and I strongly suspect the reason to be technical.

The bottom line is kind of this:

Decentralization and the devolution of power are things that lots of people want, and things human beings have been trying to achieve in various ways for a very long time. Most of these efforts, like democracy, republics, governmental balance of power, and anti-trust law, pre-date the Internet. Yet it never really works.

When I see something like that — repeated tries, repeated failures, but everyone still wants it — I start to suspect that there might be a law of nature at work. To give an extreme case — probably a more extreme case than this one — people have been trying to build infinite energy devices for a long time too. People would obviously love to have an infinite energy device. It would solve a lot of problems. But they never work, and in that case any physicist can tell you why.

Are there laws of nature at work here? If so, what are they? Are they as tough and unrelenting as the second law of thermodynamics, or are they something we can learn to work within or around? That’s what I want to know.

> The blind idiot god is a brilliant metaphor, and I agree it's what we
> should aim for whenever we need a touch of centralisation to solve a
> problem. But if we take into account the importance of metadata
> privacy as well as content privacy, I suspect that truly blind and
> truly idiotic gods will be very hard to design. A god that knows
> absolutely nothing can't contribute to the running of the system. So
> perhaps the first question to ask when designing a BIG is, what
> information is it acceptable for the BIG to know?

Good point about metadata privacy, but I think it’s ultimately not a factor here. Or rather… it *is* a factor here, but we have to ignore it.

The only way I know of to achieve metadata privacy with any strength beyond the most superficial sort is onion routing. Onion routing is inherently expensive. I’m not sure anyone’s going to use it for anything “routine” or huge-scale.

… that is, unless someone invents something new. I have wondered whether linear coding schemes might offer a way to make onion routing more efficient, but that would be an awfully big research project that I don’t have time to do. :)
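To show where the expense comes from, here's the textbook layered-encryption construction behind onion routing, sketched in Python with the cryptography package (an illustration of the idea, not Tor's actual protocol). The sender encrypts once per hop and every relay peels exactly one layer, so CPU, latency, and bandwidth overhead all stack with the number of hops:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # One symmetric key per hop; real onion routing negotiates these with
    # per-hop key exchanges, which is part of what makes it expensive.
    hops = [(name, Fernet(Fernet.generate_key()))
            for name in ("entry", "middle", "exit")]

    def wrap(message, route):
        for _, f in reversed(route):               # innermost layer first
            message = f.encrypt(message)
        return message                             # sender pays one encryption per hop

    onion = wrap(b"hello from nobody in particular", hops)
    for name, f in hops:
        onion = f.decrypt(onion)                   # each relay peels exactly one layer
        print(name, "now holds", len(onion), "bytes")
    print(onion)                                   # only the exit ever sees the payload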

We can get most of the way there by making metadata at least difficult to gather, and by using encryption to make whatever metadata does leak less meaningful. There’s a big difference between Google or the NSA or the Russian mob being able to know everything I’ve ever bought vs. being able to know, with some probability, when and where I’ve spent money but not on what and not how much. The latter is far less useful.
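One cheap trick in that spirit, sketched below, is to pad every message up to a fixed-size bucket before encrypting it, so that an eavesdropper's size metadata reveals only a coarse bucket rather than, say, the exact length of a purchase record. This is one common mitigation, not a complete traffic-analysis defence:

    def pad_to_bucket(payload: bytes, bucket: int = 1024) -> bytes:
        # Round the length up to the next bucket; a real scheme would
        # also encode the true length inside the ciphertext so the
        # receiver can strip the padding.
        padded_len = -(-len(payload) // bucket) * bucket
        return payload + b"\x00" * (padded_len - len(payload))

    for msg in (b"$5", b"$5,000,000"):
        print(len(msg), "->", len(pad_to_bucket(msg)))   # both pad to 1024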
