We’ve had enough of digital monopolies and surveillance capitalism. We want an alternative world that works for everyone, true to the original intention of the web and the net.
We seek a world of open platforms and protocols with real choices of applications and services for people. We care about privacy, transparency and autonomy. Our tools and organisations should fundamentally be accountable and resilient.
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while: http://adamierymenko.com/decentralization-i-want-to-believe/
Thank you so much! I thoroughly enjoyed reading that and had lots of 'this makes so much sense' moments. Time to start thinking more about 'provably minimal hubs'. :-)

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/
On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:

> 3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key. Perhaps this could also play some role in the ecosystem. I'll try to write something up.

I've been thinking about that too, but I think it's important to take a step back and think through the problem. I really want to push through the Little Centralization Paper (Tsitsiklis/Xu) a little more.

To me the key thing is this:

Our hypothetical "blind idiot God" must be as minimal as possible. That's why I said "provably minimal hub." The Tsitsiklis/Xu paper gives us a mathematical way to calculate exactly what percentage of traffic in a network must be centralized to achieve the phase transition they describe, but they do not give us an answer for what functionality is required.

Imagine a stupid-simple key-value store with PUT and GET. Each key has a corresponding public key submitted with it that can be used to authorize future updates of the same key. Keys expire after one year. That's it.

Or could we go even more minimal than that? In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer
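That stupid-simple store fits in a few lines. The sketch below is purely illustrative (not anything from ZeroTier); for brevity it stands in a hash commitment for real public-key signatures: a PUT registers the SHA-256 hash of an owner secret, and an update must present a secret hashing to the same value.

```python
import hashlib
import time

ONE_YEAR = 365 * 24 * 3600

class MinimalHub:
    """A stupid-simple key-value store: PUT, GET, one-year expiry,
    owner-authorized updates.

    Stand-in for real public-key auth: each key stores the SHA-256 hash
    of an owner secret; an update must present a secret with the same hash.
    """
    def __init__(self):
        self._store = {}  # key -> (value, owner_hash, expires_at)

    def put(self, key, value, owner_secret):
        owner_hash = hashlib.sha256(owner_secret).hexdigest()
        existing = self._store.get(key)
        if existing is not None and existing[2] > time.time():
            # Key exists and hasn't expired: only the owner may update it.
            if existing[1] != owner_hash:
                return False
        self._store[key] = (value, owner_hash, time.time() + ONE_YEAR)
        return True

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[2] <= time.time():
            return None
        return entry[0]
```

Expired keys simply become claimable again, which is the whole garbage-collection story for a hub this minimal.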
On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/

Bravo Adam,

You have succinctly put into words many of the fuzzy "hand wavy" thoughts I've been reaching for. Reading your blog post has allowed them to become more concrete.

Regarding peer-(super)peer-peer: when I read this turn of phrase I had an "aha!" moment. My work/thinking about DHT (via the drogulus project) has led me to wonder about the nature of hierarchy - when someone or some node in a network is more important than another. I skirt around it in my recent EuroPython talk. Given the way the Kademlia DHT algorithm I'm using works, I expect several things to happen in this context:

* Individual nodes prefer peers that are reliable (for example, they're always on the network and reply in a timely fashion). The reliable peers are the ones that end up in the local node's routing table (which keeps track of who is out there on the network).
* Nodes share information from their routing tables with each other to discover who else is on the network and keep themselves up-to-date (it's part of the process of a lookup in the DHT).
* I would expect (super)peers to *emerge* from such interactions (note my comments in the EuroPython talk on hierarchy based upon evidence rather than architecture).
* If a (super)peer fails or doesn't "perform", the algorithm works around it - i.e. I expect (super)peers to both emerge via evidence and for the network to ignore them if or when they fall over or die. This addresses the "how do we get rid of you?" question from the blackboard slide in my talk.

Also, I like your use of the word "feudal". I've been exchanging emails with an old school friend who (surprisingly to me at least) is interested in P2P.
Here's a quote from a recent (private) exchange via email about the EuroPython talk:

"Consider my slide about hierarchy and power: it works when the people with authority derive their power through evidence. Unfortunately, technology can be used to manipulate power in a way that is analogous to the way aristocratic power works: "Why are you my King?", "Because my father was your King!" It's the result of an imposed system (feudalism or a certain technical architecture) rather than merit or consensus of opinion based upon tangible evidence (the king has authority via accident of birth, the website has authority because of the client/server model; contrast that to a doctor who has authority because they have years of training and demonstrate a certain skill - making ill people feel better)."

Finally, you end with "in the meantime, please do your own hacking and ask your own questions". For this very reason I'm sitting in my shed on my 17th wedding anniversary hacking on the drogulus (I have a young family, a "real" job and organise UK Python community stuff - so I get very little time to work on it; this needs to change). I'd be interested to know how you'd see this sort of decentralised / peer-to-peer work being funded. The best plan I can come up with is to save money and then take some months off (likely around March next year).

Once again, congratulations on such an interesting and thought-provoking blog post..!

All the best,
Nicholas.
On 02.08.2014 21:47, Adam Ierymenko wrote:
> On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>
>> 3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key. Perhaps this could also play some role in the ecosystem. I'll try to write something up.
>
> I've been thinking about that too, but I think it's important to take a step back and think through the problem. I really want to push through the Little Centralization Paper (Tsitsiklis/Xu) a little more.
>
> To me the key thing is this:
>
> Our hypothetical "blind idiot God" must be as minimal as possible.

I'm with you. We've been toying with such an idea for a while too.

But looking into this "little centralization paper" I'm left puzzled about what *function* the centralized thing should provide. My overall impression so far is that the paper mostly concerns efficiency and load balancing. I'm not yet convinced that these are the most important points. IMHO reliability and simplicity are much more important (as you mentioned in your blog post too). I view efficiency more like an economic term applicable to central service providers operating services like FB.

I can only guess what the to-be-centralized functionality would be: #1 of your problem definition, the name lookup. Why? Because any following operation could be arranged to only ever talk to known peers.

> That's why I said "provably minimal hub." The Tsitsiklis/Xu paper gives us a mathematical way to calculate exactly what percentage of traffic in a network must be centralized to achieve the phase transition they describe, but they do not give us an answer for what functionality is required.
>
> Imagine a stupid-simple key-value store with PUT and GET. Each key has a corresponding public key submitted with it that can be used to authorize future updates of the same key. Keys expire after one year. That's it.
> Or could we go even more minimal than that?

Maybe: forget the keys, and don't allow any peer to simply update a value. (Why? Assume the "value" is the ownership record of some coin or other unique resource. How do you manage transfer? The update could be malicious.) Instead: link the value to some script, which is invoked by the "storing" (better now: "maintaining") node to compute the update upon request. (Take the script as a kind of "contract" governing the update rules. No peer simply accepts updates; they check the contract to verify the update complies with the terms.)

At this point the design is down to a simple pointer stored with the first value, pointing to a second value (the script, which in turn points to an update policy; since this is a contract, chances are that no updates are allowed). All handling of keys, expiration time etc. would suddenly be user-defined.

> In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer

I'm afraid there needs to be some compromise. That's too simple to be usable. How about allowing some kind of hashbang syntax in the script to pull in the language of the user's choice to execute the update?

/Jörg
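A toy rendering of this contract idea, offered as a sketch only (the class name and the policy interface are invented for illustration): each stored value carries an update-policy function, and the maintaining node runs that policy before accepting any update.

```python
class ContractStore:
    """Toy store where each value is governed by an update 'contract'.

    A contract is a function (old_value, new_value, request) -> bool.
    The maintaining node never applies an update directly; it asks the
    contract whether the proposed update complies with the terms.
    """
    def __init__(self):
        self._store = {}  # key -> (value, contract)

    def put_new(self, key, value, contract):
        if key in self._store:
            return False  # creation only; updates must go through the contract
        self._store[key] = (value, contract)
        return True

    def request_update(self, key, new_value, request=None):
        if key not in self._store:
            return False
        value, contract = self._store[key]
        if not contract(value, new_value, request):
            return False  # contract rejected the update
        self._store[key] = (new_value, contract)
        return True

    def get(self, key):
        entry = self._store.get(key)
        return entry[0] if entry else None

# Example contracts:
immutable = lambda old, new, req: False            # "no updates are allowed"
append_only = lambda old, new, req: new.startswith(old)
```

Keys, expiration, transfer rules and the rest all collapse into whatever the contract function decides, which is the point of pushing them out of the hub and into user-defined policy.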
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while: http://adamierymenko.com/decentralization-i-want-to-believe/
Adam,
I've got a question:
…
In this blog post you wrote:
> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.
On Friday, August 1, 2014, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
> I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:
> http://adamierymenko.com/decentralization-i-want-to-believe/
Adam, your blog post interested me a lot. Best of luck with your efforts. One quibbly question:

> efficiency, security, decentralization, pick two.
Assuming certain sorts of threats, decentralization contributes a lot to security. In those circumstances, your trichotomy devolves to a dichotomy, "efficiency or security, pick one."
Fortunately, your actual approach, the peer-(super) peer-peer idea, finesses the problem nicely. Instead of "I am Spartacus," "I am the blind idiot god." Still, might attackers find a vulnerability there? In order to assure the efficiency you desire, someone must provide some resources intended to act as the superpeer or superpeers. Attacker censors those nodes, network efficiency falls below the tolerable threshold, bad guys win. How do you plan to defend against this attack?
On Aug 3, 2014, at 1:45 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

> I'm with you.
>
> We've been toying with such an idea for a while too.
>
> But looking into this "little centralization paper" I'm left puzzled
> what *function* the centralized thing should provide?

That's what I'm scratching my head about too. Their work is so theoretical it simply doesn't specify *what* it should do, just that it should be there and its presence has an effect on the dynamics of the network. I'm toying around with some ideas, but it's still cooking.

> My over-all impression so far is, that the paper mostly concerns
> efficiency and load balancing. I'm not yet convinced that these are the
> most important points. IMHO reliability and simplicity are much more
> important (as you mentioned in your blog post too). I view efficiency
> more like an economic term applicable to central service providers
> operating services like FB.

Efficiency is really important if we want to push intelligence to the edges, which is what "decentralization" is at least partly about. Mobile makes efficiency *really* important. Anything that requires that a mobile device constantly sling packets is simply off the table, since it would kill battery life and eat up cellular data quotas.
That basically eliminates every mesh protocol I know about, every DHT, etc. from consideration for mobile.

>> In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer
>
> I'm afraid there needs to be some compromise. That's too simple to be
> usable. How about allowing some kind of hashbang syntax in the script
> to pull the language of users choice to execute the update?

I agree... I just furnished it as an example to show that the complexity *floor* for systems like this can be pretty low. Usually the practical design is less minimal than what theory allows.
Sorry, this was supposed to be a private message.
(But hitting "reply" instead of "reply to list" sends it to the list anyways.)
One more question: am I correct to understand that zerotier serves essentially the same purpose as cjdns?
https://github.com/cjdelisle/cjdns
Thanks
/Jörg
On 03.08.2014 11:31, "Jörg F. Wittenberger" wrote:
Adam,
I've got a question:
…In this blog post you wrote:
> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.
Not exactly, but close. CJDNS is a mesh protocol that creates a single L3 IPv6 network. ZeroTier One is a hybrid peer-to-peer protocol that creates virtual Ethernet networks (plural). ZeroTier is more like SDN for everyone, everywhere. (SDN is software-defined networking, and refers to the creation of software-defined virtual networks in data centers.)

I've been following CJDNS for a while. I know it's being used by several community meshnet projects. Anyone tried it? I admit I haven't yet, but I've heard it basically does work, though not perfectly. I'm curious about how large it could scale, though. I'll try it out at some point.

On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:
> Sorry, this was supposed to be a private message.
> (But hitting "reply" instead of "reply to list" sends it to the list anyways.)
>
> One more question: am I correct to understand that zerotier serves essentially the same purpose as cjdns?
>
> https://github.com/cjdelisle/cjdns
>
> Thanks
> /Jörg
>
> On 03.08.2014 11:31, "Jörg F. Wittenberger" wrote:
> Adam,
> I've got a question:
> …In this blog post you wrote:
>> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.
One line of research and technology that I personally find very exciting, and highly relevant to the idea of zero-knowledge centralization -- even though it's still some time off from being scalably useful -- is homomorphic encryption.

Homomorphic encryption is a technique where you take two inputs, encrypt them with a private key, hand them off to some other machine, have that machine perform a known computation *on the ciphertext*, and give you back the encrypted result, so you can decrypt it and get the answer. The machine that did the computation knows nothing about the inputs or the outputs -- it can only blindly operate on them.

While some techniques (like RSA) were partially homomorphic, what you need for arbitrary homomorphic computation is a system that can do both multiplication and addition (together, these are Turing-complete), and no such system was found for 40 years, until Craig Gentry's PhD thesis showed a working algorithm to do it.

The bad news is that it is many, many orders of magnitude too slow to be useful -- and it uses "lattice encryption", which requires very large private/public keys (like GBs). IBM has since scooped up Gentry, and made advances on the original scheme that have sped it up by a trillion times -- but it is still a trillion times too slow.

But someday -- and maybe someday sooner than we think, as these things go -- maybe it will be feasible to have things like zero-knowledge search engines. Maybe low-level zero-knowledge tasks, like packet switching or whatever, could be feasible much sooner.

It's something to watch!

-- Eric

On Mon, Aug 4, 2014 at 7:06 PM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
> Not exactly, but close. CJDNS is a mesh protocol that creates a single L3 IPv6 network. ZeroTier One is a hybrid peer to peer protocol that creates virtual Ethernet networks (plural). ZeroTier is more like SDN for everyone, everywhere. (SDN is software defined networking, and refers to the creation of software defined virtual networks in data centers.)
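The "partially homomorphic" property Eric mentions for RSA is easy to demonstrate: textbook (unpadded) RSA is multiplicatively homomorphic, so a machine holding only ciphertexts can compute an encrypted product without ever seeing the plaintexts. A toy sketch with insecure demo-sized primes (never use unpadded RSA or keys this small in practice):

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n == Enc(a * b mod n)
# Demo-sized primes; real RSA needs padding and 2048+ bit keys.

p, q = 61, 53
n = p * q               # 3233
phi = (p - 1) * (q - 1) # 3120
e = 17                  # public exponent, coprime with phi
d = pow(e, -1, phi)     # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 12, 7
# A third party multiplies the two ciphertexts without learning a or b...
c_product = (enc(a) * enc(b)) % n
# ...and only the key holder can decrypt the product.
assert dec(c_product) == (a * b) % n  # 84
```

Fully homomorphic schemes extend this blind arithmetic to both multiplication and addition, which is what makes arbitrary zero-knowledge computation conceivable.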
Hi Adam,

This is a great post. I share your frustration with the difficulty of building decentralised systems that are usable, efficient and secure. But I have some doubts about your argument.

I don't think the Tsitsiklis/Xu paper tells us anything about centralisation vs decentralisation in general. It gives a very abstract model of a system where some fraction of a scarce resource can be allocated wherever it's needed. I'm not surprised that such a system has different queueing behaviour from a system with fixed allocation. But it seems to me that this result is a poor fit for your argument, in two respects.

First, the result doesn't necessarily apply beyond resource allocation problems - specifically, those problems where resources can be moved from place to place at no cost. I don't see the relevance to the lookup and routing problems you're aiming to solve with ZeroTier.

Second, the advantage is gained by having a panoptic view of the whole system - far from being a blind idiot, the allocator needs to know what's happening everywhere, and needs to be able to send resources anywhere. It's more Stalin than Lovecraft.

I'm not denying that a touch of centralisation could help to make ZeroTier more usable, efficient and secure - I just don't think this paper does anything to support that contention.

You mention split-brain and internet weather as problems ZeroTier should cope with, but I'm not sure centralisation will help to solve those problems. If the network is partitioned, some nodes will lose contact with the centre - they must either stop operating until they re-establish contact, or continue to operate without the centre's guidance. A distributed system with a centre is still a distributed system - you can't escape the CAP theorem by putting a crown on one of the nodes.

It's true that nobody's been able to ship a decentralised alternative to Facebook, Google, or Twitter.
But that failure could be due to many reasons. Who's going to buy stock in a blind-by-design internet company that can't target ads at its users? How do you advertise a system that doesn't have a central place where people can go to join or find out more? How do you steer the evolution of such a system? All of these questions are easier to answer for infrastructure than for public-facing products and services.

Facebook, Google and Twitter sit on top of several layers of mostly-decentralised infrastructure. Since you're building infrastructure, I wonder whether it would be more useful to look at how centralisation vs decentralisation plays out at layers 2-4, rather than looking at the fully-centralised businesses that sit on top of those layers.

The blind idiot god is a brilliant metaphor, and I agree it's what we should aim for whenever we need a touch of centralisation to solve a problem. But if we take into account the importance of metadata privacy as well as content privacy, I suspect that truly blind and truly idiotic gods will be very hard to design. A god that knows absolutely nothing can't contribute to the running of the system. So perhaps the first question to ask when designing a BIG is: what information is it acceptable for the BIG to know?

Cheers,
Michael

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/
> I was thinking: does this almost reduce to the "hard AI problem?"
Detecting which nodes are malicious might not even be computable. It's the lack of verifiable information. Unless you have some trust anchors to create a frame of reference you can never tell who is defecting vs. who is lying about others defecting. And as I think about it, the only way to distinguish a targeted attack from a node being offline is to establish that it is online, which requires you to have a communications path to it, which would allow you to defeat the attack. So unless you can efficiently defeat the attack you can't efficiently detect whether one is occurring.
So I guess "detect then mitigate" is out. At least without manual intervention to identify that an attack is occurring.
> The Bitcoin network solves the trust problem by essentially trusting itself. If someone successfully mounted a 51% attack against Bitcoin, nothing would be broken as far as the network is concerned. But that's not what *we*, the sentient beings that use it, want. We want the network to do "the right thing," but what's that? How does the network know what the right thing is? As far as its concerned, when 51% of the network extends the block chain that's the right thing... right?
Another way of putting this is that the Bitcoin users solve the trust problem by trusting the majority, where resistance to a Sybil attack comes from allocating votes proportional to computing power. Which works great until some entity amasses enough computing power to vote itself God. And you can do similar with other scarce or expensive things. IPv4 public addresses come to mind. Useful for banning trolls on IRC but if your attacker has a Class A or a botnet you're screwed.
> You could solve that problem pragmatically though by shipping with acceptable defaults. If a user wanted to change them they could, but they don't have to.
Right.
> One idea I've had is a hybrid system combining a centralized database and a decentralized DHT. Both are available and they back each other. The central database can take over if the decentralized DHT comes under attack and the decentralized DHT will work if the central system fails or is blocked (e.g. in a censorship-heavy country).
I've been considering doing federation similar to that. You have some node which is essentially a dedicated DHT node and a bunch of clients which use it as a gateway to access the DHT instead of participating themselves. So you have a lot of ostensibly related clients all using the same gateway and when they want to contact each other they get one hop access and no Sybil exposure. And if the gateway is down the clients can still participate in the DHT themselves so it isn't a single point of failure.
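The hybrid and federated ideas above share one shape: try a preferred resolver first, fall back to the other path when it fails. A sketch under invented names (the classes and records here are illustrative stubs, not any real system's API):

```python
# Sketch of a hybrid lookup: prefer the central database, fall back to
# DHT peers (or a federated gateway) when the centre is unreachable.
# All class and function names are invented for illustration.

class CentralDB:
    def __init__(self, records=None, online=True):
        self.records = records or {}
        self.online = online

    def get(self, key):
        if not self.online:
            raise ConnectionError("central database unreachable")
        return self.records.get(key)

class DHTNode:
    def __init__(self, records=None):
        self.records = records or {}

    def get(self, key):
        return self.records.get(key)

def hybrid_lookup(key, central, dht_peers):
    # Try the centre first: one hop, no Sybil exposure.
    try:
        value = central.get(key)
        if value is not None:
            return value
    except ConnectionError:
        pass  # blocked or down; the decentralized path backs it up
    # Fall back to asking DHT peers directly.
    for peer in dht_peers:
        value = peer.get(key)
        if value is not None:
            return value
    return None
```

Each side covers the other's failure mode: the centre covers a Sybil-flooded DHT, and the DHT covers a censored or failed centre.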
> Everything related to TUN/TAP on every platform is nearly documentation-free. :)
The Linux implementation never gave me any trouble. https://www.kernel.org/doc/Documentation/networking/tuntap.txt says how to create one and then you configure it the same as eth0.
Maybe the trouble with TAP-Windows is that it's idiosyncratic (to be kind) in addition to undocumented. Have you discovered any good way to identify your TAP-Windows interface as something not to be molested by other TAP-Windows applications like OpenVPN? There is some language in the .inf about changing the component ID, which seems to imply recompiling the driver and then probably needing a code signing key from Microsoft to make it work, but there has to be some less ridiculous way of doing it than that.
On Aug 12, 2014, at 5:23 PM, David Geib <trustiosity.zrm@gmail.com> wrote:

>> Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting?
>
> I think the problem is trying to compute trust algorithmically. In a completely decentralized network the information necessary to do that is not intrinsically available, so you have to bootstrap trust in some other way.
>
> Everybody trusting some root authority is the easiest way to do that, but it's also the most centralized. It also doesn't actually solve the problem unless the root authority is also the only trusted party, because now you have to ask how the root is supposed to know whether to trust some third party before signing it. That's the huge fail with the existing CAs. They'll sign anything. Moxie Marlinspike has had a number of relevant things to say about that.

That's the general pattern that I see. The easiest approach is the most centralized approach... at least if you neglect the longer-term systemic downsides of it. Maybe over-centralization should be considered a form of technical debt.

I agree that root CAs are horrible. I have had them do things like send me a private key unencrypted to Gmail. I am not making that up. No passphrase. To Gmail. Hmm...

Yeah, I think doing trust better is a must.

Btw... some folks responded to my post lamenting that I had given up on decentralization. That's not true at all. I am just doing two things. One is trying to spin the problem around and conceptualize it differently. The other is giving the problem the respect it deserves. It's a very, very hard problem... which is part of why I like it. :)

>> That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.
> In a theoretical sense that's true, because if the network is totally compromised, meaning no communication can take place between anyone, then you can't do anything in the direction of fixing it without having some external network to use to coordinate. But that's only a problem before bootstrap. If you can discover and communicate with several compatriots using the network, and over time come to trust them before any attack is launched against the network, you can then designate them as trusted parties without any external contact. This is like the Bitcoin solution except that instead of using processing power as the limit on Sybils you use human face time. Then when the attack comes you already have trusted parties you can rely on to help you resist it.

I'm not sure those kinds of approaches can work on a global scale. How do people in Russia or South Africa determine their trust relationship with someone in New York? I guess you could traverse the graph, but now you are back to trying to compute trust.

> So you *can* bootstrap trust (slowly) but you have to do it before the attack happens or suffer a large inefficiency in the meantime. But using an external network to bootstrap trust before you even turn the system on is clearly a much easier way to guarantee that it's done before the attack begins, and is probably the only efficient way to recover if it *isn't* done before the attack begins.

Another point on this... history has taught us that governments and very sophisticated criminals are often much more ahead of the game than we suspect they are. My guess is that if a genuine breakthrough in trust is made, it will be recognizable as such and those forces will get in early. The marketing industry is also very sophisticated, though not quite as cutting edge as the overworld and the underworld.

On a more pragmatic note, I think you have a chicken-or-egg problem with the idea of bootstrapping before turning the system on.
History has also demonstrated that in computing, "release early, release often" wins hands down. Everything that I am familiar with, from the web to Linux to even polish-obsessed creatures like Mac, has followed this path. If it doesn't exist yet nobody will use it, and if nobody is using it nobody will bootstrap trust for it, because nobody is using it, therefore nobody will ever use it, therefore it's a waste of time...

> Then as long as you have [anything] that can perform the necessary function (e.g. message relay or lookup database), everything requiring that function can carry on working.

You can have your cake and eat it too. It's easy. Just make two cakes. Make a centralized cake and a decentralized cake.

> I tend to think that Bitcoin is going to crash and burn. It has all the makings of a bubble. It's inherently deflationary, which promotes hoarding and speculation, which causes the price to increase in the short term, but the whole thing is resting on the supremacy of its technical architecture. So if somebody breaks the technology *or* somebody comes up with something better, or even a worthwhile but incompatible improvement to Bitcoin itself, when everyone stops using Bitcoin in favor of the replacement the Bitcoins all lose their value. For example if anyone ever breaks SHA256 it would compromise the entire blockchain. Then what do you do, start over from zero with SHA3?

I think the tech behind it is more interesting than Bitcoin itself. It reminds me of the web. Hypertext, browsers, and the new hybrid thin client model they led to were interesting. The internet was certainly damn interesting. But pets.com and flooz? Not so much.

I still need to take a deep, deep dive into the blockchain technology. I get the very basic surface of it, but I am really curious about how it might be used as part of a solution to the trust bootstrapping problem.
If hybrid overlapping heterogeneous solutions are the way forward for network robustness, then maybe a similar concurrent-cake solution exists for trust. At some point I think someone is going to successfully attack Bitcoin. What happens then? I don't know. It has some value as a wire transfer protocol if nothing else, but the sheen will certainly wear off.

> The ideal would be for nodes to only trust a peer to relay data and then have the destination provide an authenticated confirmation of receipt. Then if there is no confirmation you ask some different trusted peer(s) to relay the message. That way all misplaced trust costs you is efficiency rather than security. If a trusted peer defects then you try the next one. Then even if half the peers you trusted will defect, you're still far ahead of the alternative where 90% or 99.9% of the peers you try could be Sybils. And that gets the percentage of defecting peers down to the point where you can start looking at the Byzantine fault tolerance algorithms to detect them, which might even allow defecting peers to be algorithmically ejected from the trusted group.

This is basic to any relayed crypto peer-to-peer system, including the one I built. Every packet is MAC'd using a key derived from a DH agreement, etc. I think the harder thing is defending not against Sybils vs. the data itself but Sybils vs. the infrastructure. Criminals, enemy governments, authoritarian governments, etc. might just want to take the network down, exploit it to carry out a DDoS amplification attack against other targets, or make it unsuitable for a certain use case.

> Part of the idea is to decentralize the centralized nodes. Then there are big nodes trusted by large numbers of people but there is no "root" which is trusted by everybody. And big is relative. If each organization (or hackerspace or ...)
> runs their own supernode then there is nothing to shut down or compromise that will take most of the network with it, and there is nothing preventing a non-supernode from trusting (i.e. distributing their trust between) more than one supernode. Then you can have the supernode operators each decide which other supernodes they trust, which shrinks the web of trust problem by putting a little bit of hierarchy into it, without making the hierarchy rigid or giving it a single root. The result is similar in structure to a top-down hierarchy except that it's built from the bottom up, so no one has total control over it.

I like this... especially the part about shrinking the problem. It reminds me of how the old NNTP and IRC and similar protocols were run. You had a network of servers run by admin volunteers, so the trust problem was manageable. But there was no king per se... a bit of an oligarchy though.

> > Umm... sorry to break this to you, but that's exactly what I did.
>
> Argh. Why does everything related to Windows have to be unnecessarily complicated?

That's nothing. Get a load of what I had to pull out of my you-know-what to get Windows to treat a virtual network properly with regard to firewall policy. As far as I know I am the first developer to pull this off, and it's not pretty. I think I am first on this one by virtue of masochism.

https://github.com/zerotier/ZeroTierOne/commit/f8d4611d15b18bf505de9ca82d74f5102fc57024#diff-288ff5a08b3c03deb7f81b5d45228018R628
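The relay-with-receipt idea quoted above (trust a peer only to relay, accept success only on an authenticated confirmation from the destination, and move on to the next trusted peer if it defects) can be sketched in a few lines. This is a toy illustration, not the ZeroTier code: the shared MAC key, the `make_receipt` format, and the callable relays are all my own assumptions standing in for a real DH-derived key and real peers.

```python
import hashlib
import hmac
import os

# Toy sketch: the sender and destination share a MAC key (in a real
# system it would be derived via a DH agreement). A defecting relay
# costs the sender a retry, not a security compromise.

def make_receipt(dest_key: bytes, message: bytes) -> bytes:
    """The destination acknowledges delivery by MACing the message."""
    return hmac.new(dest_key, b"receipt:" + message, hashlib.sha256).digest()

def relay_with_receipt(message, trusted_relays, dest_key):
    """Try trusted relays in order until an authentic receipt comes back."""
    expected = make_receipt(dest_key, message)
    for relay in trusted_relays:
        receipt = relay(message)  # relay attempts delivery, returns receipt
        if receipt is not None and hmac.compare_digest(receipt, expected):
            return relay          # this relay delivered honestly
    return None                   # every trusted relay defected

# Simulated peers: a Sybil that silently drops traffic, and an honest
# relay that actually reaches the destination.
dest_key = os.urandom(32)
sybil = lambda msg: None
honest = lambda msg: make_receipt(dest_key, msg)

winner = relay_with_receipt(b"hello", [sybil, honest], dest_key)
assert winner is honest
```

Note how misplaced trust only shows up as the wasted attempt on `sybil`; once the pool of defectors is small enough, the same receipts could feed a Byzantine-fault-tolerance style ejection scheme as the quoted message suggests.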
It's more like a security vulnerability. Single point of failure, single point of compromise and a choke point for censorship and spying.
Getting to this a bit belatedly... :)

On Aug 7, 2014, at 9:32 AM, Michael Rogers <michael@briarproject.org> wrote:

> I don't think the Tsitsiklis/Xu paper tells us anything about centralisation vs decentralisation in general. It gives a very abstract model of a system where some fraction of a scarce resource can be allocated wherever it's needed. I'm not surprised that such a system has different queueing behaviour from a system with fixed allocation. But it seems to me that this result is a poor fit for your argument, in two respects.
>
> First, the result doesn't necessarily apply beyond resource allocation problems - specifically, those problems where resources can be moved from place to place at no cost. I don't see the relevance to the lookup and routing problems you're aiming to solve with ZeroTier.

I have an admission to make. I did a very un-academic, right-brainy thing, in that I made a little bit of a leap. When I read "phase transition" it was sort of an epiphany moment. Perhaps I studied too much complexity and evolutionary theory, but I immediately got a mental image of a phase transition in state space where a system takes on new properties. You see that sort of thing in those areas all the time.

But I don't think it's a huge leap. The question Tsitsiklis/Xu were looking at was storage allocation in a distributed storage pool (or an idealized form of that problem). Their research was backed by Google, which is obviously very interested in storage allocation problems. But I don't think it's a monstrous leap to go from storage allocation problems to bandwidth, routing, or trust. Those are all "resources" and all can be moved or re-allocated. Many are dynamic rather than static resources. It'd be interesting to write these authors and ask them directly what they think. Maybe I'll do that.

If you've been reading the other thread, we're talking a lot about trust and I'm starting to agree with David Geib that trust is probably the root of it.
These other issues, such as this and the CAP theorem, are probably secondary in that if trust can be solved then these other things can be tackled, or the problem space can be redefined around them.

> Second, the advantage is gained by having a panoptic view of the whole system - far from being a blind idiot, the allocator needs to know what's happening everywhere, and needs to be able to send resources anywhere. It's more Stalin than Lovecraft.

I think it's probably possible to have a coordinator that coordinates without knowing *much* about what it is coordinating, via careful and clever use of cryptography. I was more interested in the overarching theoretical question of whether some centralization is needed to achieve efficiency and the other things that are required for a good user experience, and if so how much.

ZeroTier's supernodes know that point A wants to talk to point B, and if NAT traversal is impossible and data has to be relayed, then they also know how much data. But that's all they know. They don't know the protocol, the port, or the content of that data. They're *pretty* blind. I have a suspicion it might be possible to do better than that, to make the blind idiot... umm... blinder.

It would be significantly easier if it weren't for NAT. NAT traversal demands a relaying maneuver that inherently exposes some metadata about the communication event taking place. But we already know NAT is evil and must be destroyed or the kittens will die.

> It's true that nobody's been able to ship a decentralised alternative to Facebook, Google, or Twitter. But that failure could be due to many reasons. Who's going to buy stock in a blind-by-design internet company that can't target ads at its users? How do you advertise a system that doesn't have a central place where people can go to join or find out more? How do you steer the evolution of such a system?

Sure, those are problems too.
Decentralization is a multifaceted problem: technical, political, business, social, ... But it's not like someone's shipped a decentralized Twitter that is equivalently fast, easy to use, etc., and it's failed in the marketplace. It's that nobody's shipped it at all, and it's not clear to me how one would build such a thing.

Keep in mind too that some of the profitability problems of decentralization are mitigated by the cost savings. A decentralized network costs orders of magnitude less to run. You don't need data centers that consume hundreds of megawatts of power to handle every single computation and store every single bit of data. So your opportunities to monetize are lower, but your costs are also lower. Do those factors balance out? Not sure. Nobody's tried it at scale, and I strongly suspect the reason to be technical.

The bottom line is kind of this: decentralization and the devolution of power are something that lots of people want, and they're something human beings have been trying to achieve in various ways for a very long time. Most of these efforts, like democracy, republics, governmental balance of power, anti-trust laws, etc., pre-date the Internet. Yet it never works. When I see something like that (repeated tries, repeated failures, but everyone still wants it) I start to suspect that there might be a law of nature at work.

To give an extreme case, probably a more extreme case than this one: people have been trying to build infinite energy devices for a long time too. People would obviously love to have an infinite energy device. It would solve a lot of problems. But they never work, and in that case any physicist can tell you why.

Are there laws of nature at work here? If so, what are they? Are they as tough and unrelenting as the second law of thermodynamics, or are they something we can learn to work within or around? That's what I want to know.
> The blind idiot god is a brilliant metaphor, and I agree it's what we should aim for whenever we need a touch of centralisation to solve a problem. But if we take into account the importance of metadata privacy as well as content privacy, I suspect that truly blind and truly idiotic gods will be very hard to design. A god that knows absolutely nothing can't contribute to the running of the system. So perhaps the first question to ask when designing a BIG is, what information is it acceptable for the BIG to know?

Good point about metadata privacy, but I think it's ultimately not a factor here. Or rather... it *is* a factor here, but we have to ignore it. The only way I know of to achieve metadata privacy with any strength beyond the most superficial sort is onion routing. Onion routing is inherently expensive. I'm not sure anyone's going to use it for anything "routine" or huge-scale.

... that is, unless someone invents something new. I have wondered if linear coding schemes might offer a way to make onion routing more efficient, but that would be an awfully big research project that I don't have time to do. :)

We can get most of the way there by making it at least difficult to gather metadata, and by using encryption to make that metadata less meaningful and transparent. There's a big difference between Google or the NSA or the Russian Mob being able to know everything I've ever bought vs. them being able to know with some probability when and where I've spent money, but not what on and not how much. The latter is less useful.
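For readers unfamiliar with why onion routing is the go-to answer for metadata privacy: the sender wraps the payload in one layer of encryption per relay, so each relay can strip exactly one layer and never sees the plaintext. The sketch below is a deliberately toy illustration of just the layering idea; the SHA-256 XOR "keystream" stands in for a real cipher and must not be mistaken for actual cryptography, and the hop keys are made up.

```python
import hashlib

# Toy onion layering (NOT real crypto): each hop's "encryption" is an
# XOR against a SHA-256-derived keystream. The sender applies one layer
# per relay; stripping fewer than all layers leaves only ciphertext.

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from a key (counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(payload: bytes, hop_keys):
    """Sender: add one layer per relay, innermost layer first."""
    for key in reversed(hop_keys):
        payload = xor_layer(key, payload)
    return payload

def unwrap(onion: bytes, hop_keys):
    """Relays: each strips its own layer in path order."""
    for key in hop_keys:
        onion = xor_layer(key, onion)
    return onion

hop_keys = [b"relay-1", b"relay-2", b"relay-3"]  # assumed per-hop keys
onion = wrap(b"meet at noon", hop_keys)

assert unwrap(onion, hop_keys) == b"meet at noon"   # all layers stripped
assert unwrap(onion, hop_keys[:1]) != b"meet at noon"  # one relay alone sees nothing
```

The cost argument in the email falls straight out of this structure: every message is encrypted and transmitted once per hop, so bandwidth and latency scale with path length.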
On Wed, 2014-08-20 at 00:56 -0400, David Geib wrote:
> ...
> I don't really agree that it never works. For all the failings of free market capitalism, it's clearly better than a centrally planned economy. The thing about functioning decentralized and federated systems is that they often work so well they become invisible. Nobody notices the *absence* of a middle man.

This is a great conversation and I'm enjoying the way the ideas are flowing. This paragraph has pushed one of my buttons, so I'm weighing in.

I agree with the failure of the planned economy experiment, but I think the comparison with the free market needs expansion. It's important to emphasise that we don't actually _have_ a free market, not as Hayek and his followers envisaged. The potential for market imbalances (of power, knowledge, choice and such) is too great, so we end up with laws against fraud, weights-and-measures abuse, and goods that are not of marketable quality, and regulations to reduce power and knowledge imbalances. Of course we also have deliberate imbalances, such as immigration restrictions to control the market in workers, trade tariffs to reinforce local industry, and trade agreements to enhance power imbalances.

Most of these problems come out of the sheer size of states and corporations, and most of the normal human interactions that might protect against abuse assume relatively small groups. A sports club, a church community, even a village are all self-managing. Regulation still happens, but the detection and response are (or can be) relatively lightweight. This doesn't work even with cities, where everyone is a stranger and police are required.

To bring the point home, we can consider a market as a collection of protocols. This conversation, or the redecentralise thing, probably started by assuming these protocols all work perfectly, as per Hayek. Clearly they don't.
We need rules and regulations, we need detection and response, and the response has to have some real impact. These are, I suspect, human things. Humans are interacting, and humans need to address problems.

As a direct outcome of the human model, we might look at community size. This depends on the facilities being offered. Distributed search (YaCy, for example) could have a very large number of users. Social networks, on the other hand, might need very focused small communities. I can imagine a sort of federated facility, using something like Diaspora, where smallish groups can share a server, but servers can talk to each other in some limited way to allow for groups that overlap. Problems can then be resolved through side channels and appropriate server management tools. (And a 'server' could be a collection of distributed nodes, of course.)

OK, that'll do from me. Thanks for listening.

Mike S.
Human societies are networks too. I think this work has political and philosophical implications inasmuch as the same information theoretic principles that govern computer networks might also operate in human ones. If we can fix it here, maybe it can help us find new ways of fixing it there.
On 20.08.2014 06:56, David Geib wrote:
> Or the other way around for that matter. Look at the societies that work best and see how they do it.

BTW: That's the concept we followed when we came up with Askemos. Understanding that we currently have an internet akin to some kind of feudal society, we asked: what came next, and how did they do it? Next came democracy (again), in the form of constitutional states: balance of power, social contracts, bi- and multilateral contracts, etc. Let's not argue that we see them failing all too often. Maybe we can make real societies better (i.e., the governments less broken) once we understand how to implement these things with the rigor required in programming.

So instead of inventing anything anew, which people would then have to learn, adopt and accept, we tried to map these concepts as well as we could into a minimal language. Then we implemented a prototype interpreter for this language (BALL) to learn how this could work.

Best
/Jörg
On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
> Human societies are networks too. I think this work has political and philosophical implications inasmuch as the same information theoretic principles that govern computer networks might also operate in human ones.
> If we can fix it here, maybe it can help us find new ways of fixing it there.
And networks are human societies, every node has at least one person associated with it, trying to cooperate/communicate with at least one other. But it seems like it would be easy to push the analogy too far, as custom, law, contracts, etc. are only vaguely similar to software. I would expect at least a few very interesting and annoying differences, though maybe also some surprising and useful isomorphisms.
However, I don't understand your "vaguely similar". It seems not to be that vague. It's just a different "machine" executing it: physical hardware or human agents. But both are supposed to stick precisely to the rules until the software is changed. (And both are usually buggy.)
On Wednesday, August 20, 2014, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:
> However I don't understand you "vaguely similar". It seems not to be that vague. It's just a different "machine" executing it: physical hardware or human agents. But both are supposed to stick precisely to the rules until the software is changed. (And both are usually buggy.)
I was trying to compensate for my bias by using understatement and ambiguity. But now that you challenge me, I feel obligated to try to respond.
Has anyone written a mathematical analysis of the isomorphism, its features and limits?
Custom and law typically operate by defining constraints that must not be violated, leaving agents free to pursue arbitrary goals using arbitrary strategies within those limits. Software typically provides a menu of capabilities, defined (usually) by a sequential, goal oriented algorithm, often employing a single prechosen strategy. Constraints limit software, but do not dominate the situation as in law.
I must obey the traffic laws while driving to work. The law knows nothing about my goal. I am in charge. If/when we all have self-driving cars, traffic laws will serve no purpose, but the car has to know where I want to go, in addition to the constraints and heuristics that allow it to navigate safely there. I am still in charge, but not in control. Action in each case combines intent, strategy, resources and constraints, but the mix is different. Or maybe the level of abstraction?
I can use software to break the law, and I can use the law to break software, but it is an accident of language that I can make these statements, the meaning is not at all similar.
I would be delighted for you to convince me that I am being too pessimistic, ignorant and unimaginative. I would prefer to be on the other side of this argument.
On 19/08/14 20:52, Adam Ierymenko wrote:
> Getting to this a bit belatedly... :)

Likewise. :-)

> If you've been reading the other thread, we're talking a lot about trust and I'm starting to agree with David Geib that trust is probably the root of it. These other issues, such as this and the CAP theorem, are probably secondary in that if trust can be solved then these other things can be tackled or the problem space can be redefined around them.

I totally agree. Perhaps Tor would be an interesting example to think about, because it's decentralised at the level of resource allocation but centralised at the level of trust. The Tor directory authorities are the closest thing I can think of to a Blind Idiot God: they act as a trust anchor for the system while remaining deliberately ignorant about who uses it and how. They know even less than ZeroTier's supernodes, because they're not aware of individual flows and they don't relay any traffic themselves.

> It would be significantly easier if it weren't for NAT. NAT traversal demands a relaying maneuver that inherently exposes some metadata about the communication event taking place. But we already know NAT is evil and must be destroyed or the kittens will die.

NAT is the biggest and most underestimated obstacle for P2P systems. I'm glad you're tackling it head-on.

> Good point about metadata privacy, but I think it's ultimately not a factor here. Or rather... it *is* a factor here, but we have to ignore it.
>
> The only way I know of to achieve metadata privacy with any strength beyond the most superficial sort is onion routing. Onion routing is inherently expensive. I'm not sure anyone's going to use it for anything "routine" or huge-scale.

Onion routing will always be more expensive than direct routing, but bandwidth keeps getting cheaper, so the set of things for which onion routing is affordable will keep growing.
Latency is a bigger issue than bandwidth, in my opinion. In theory you can pass a voice packet through three relays and still deliver it to the destination in an acceptable amount of time, but the system will have to be really well engineered to minimise latency. Tor wasn't built with that in mind - and again, the question is who's going to pay an engineering team to build a decentralised anonymous voice network they can't profit from?

> ... that is unless someone invents something new. I have wondered if linear coding schemes might offer a way to make onion routing more efficient, but that would be an awfully big research project that I don't have time to do. :)

There have been some papers about anonymity systems based on secret sharing and network coding, but nothing that's been deployed as far as I know. In any case, they all used multi-hop paths, so the bandwidth and latency issues would remain.

> We can get most of the way there by making it at least difficult to gather meta-data, and by using encryption to make that meta-data less meaningful and transparent. There's a big difference between Google or the NSA or the Russian Mob being able to know everything I've ever bought vs. them being able to know with some probability when and where I've spent money but not what on and not how much. The latter is less useful.

Again, I totally agree, and I'm happy to see any progress towards somewhat-more-blind, somewhat-more-idiotic internet deities.

Cheers,
Michael
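The three-relay voice claim in the email above can be sanity-checked with back-of-envelope arithmetic. The per-hop figures below are assumptions chosen for illustration, not measurements; the only sourced number is ITU-T G.114's widely cited ~150 ms one-way guideline for interactive voice.

```python
# Back-of-envelope latency budget for voice over three onion relays.
# All per-hop numbers are assumptions; ITU-T G.114 suggests keeping
# one-way (mouth-to-ear) delay at or under roughly 150 ms.

MOUTH_TO_EAR_BUDGET_MS = 150

codec_and_jitter_ms = 40     # assumed: encoding delay + jitter buffer
per_hop_network_ms = 20      # assumed: average latency between hops
per_relay_processing_ms = 2  # assumed: decrypt-and-forward per relay
relays = 3

hops = relays + 1            # sender -> r1 -> r2 -> r3 -> receiver
total_ms = (codec_and_jitter_ms
            + hops * per_hop_network_ms
            + relays * per_relay_processing_ms)

print(total_ms)              # 126 under these assumptions: within budget
```

So the claim is plausible on paper, but the margin is thin: doubling the per-hop network latency (e.g. poorly chosen relay locations) blows the budget, which is exactly the "really well engineered" caveat in the email.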