
Redecentralize

We’ve had enough of digital monopolies and surveillance capitalism. We want an alternative world that works for everyone, true to the original intention of the web and the net.

We seek a world of open platforms and protocols with real choices of applications and services for people. We care about privacy, transparency and autonomy. Our tools and organisations should fundamentally be accountable and resilient.

Adam Ierymenko [LibreList] Thoughts on decentralization: "I want to believe." 2014-08-01 16:07:20
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/

Steve Phillips [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-01 21:02:50
Hi Adam,

Great post!  About 75% of the way through, my mindset shifted from, "I am disappointed that he has given up on decentralization" to "zero-knowledge centralization is a fucking fantastic idea."

0. Suppose I'm trying to, say, send an IM over a maximally-decentralized IM network that uses a centralized zero-knowledge server for tracking the IPs and open port numbers of people or devices connected to said network, which chat clients somehow query so they know where the IM should be sent.

In this scenario, do you think it's possible for me to get this information without the server also getting it (by decrypting the IP/port pairs however I'd decrypt them), thereby eliminating the critical zero-knowledge aspect?  Is this the kind of system and situation you have in mind?

1. Could something like the Fluidinfo API, which is world-writable (assuming it's still working), play the role of The People's Zero-Knowledge Data Store?

2. Similarly, what if we all shared some world-writable DB-backed API running on Heroku, GAE, or some other free architecture?  Couldn't that serve as such a system, which we'd only write encrypted data to?  We could even have several of these servers, which perhaps exchange information with one another (simple DB replication?), in which case we'd have a federated zero-knowledge system hosted by many providers.  (If the servers are independent and don't communicate, we could have one server that publicly lists the IPs of the other servers.)  This is basically the Fluidinfo scenario, but hosted by multiple parties.

Would either of these be helpful?

3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key.  Perhaps this could also play some role in the ecosystem.  I'll try to write something up.
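A rough sketch of one way such key-holder-only querying could work, using an HMAC-based "blind index": the client derives deterministic tokens from keywords with a secret key, and the server matches opaque tokens against opaque ciphertext blobs without ever seeing a keyword or a plaintext. (This is one illustrative technique, not necessarily the design referred to above; all names are hypothetical.)

    import hmac
    import hashlib

    def keyword_token(key: bytes, keyword: str) -> bytes:
        # Deterministic, keyed token: meaningless to anyone without `key`.
        return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

    class BlindIndexServer:
        """Stores opaque ciphertexts plus opaque keyword tokens and answers
        token queries; it never sees plaintexts or keywords."""
        def __init__(self):
            self._docs = {}    # doc_id -> ciphertext (opaque to the server)
            self._index = {}   # token  -> set of doc_ids

        def put(self, doc_id, ciphertext, tokens):
            self._docs[doc_id] = ciphertext
            for t in tokens:
                self._index.setdefault(t, set()).add(doc_id)

        def query(self, token):
            return [self._docs[d] for d in self._index.get(token, ())]

    # Client side: only holders of `key` can form tokens, so only they can search.
    key = b"32-byte shared secret goes here."
    server = BlindIndexServer()
    server.put("msg1", b"<ciphertext>", [keyword_token(key, "decentralization")])
    hits = server.query(keyword_token(key, "decentralization"))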

Thanks for jump-starting this conversation (thread), whose core focus is so critical to the future of (maximally-)decentralized systems.

--Steve


On Fri, Aug 1, 2014 at 4:07 PM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/


Tom Atkins [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-02 11:16:48
Thank you so much! I thoroughly enjoyed reading that and had lots of
'this makes so much sense' moments. Time to start thinking more about
'provably minimal hubs'. :-)

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 

Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-02 12:47:06
On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:

> 3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key.  Perhaps this could also play some role in the ecosystem.  I'll try to write something up.

I've been thinking about that too, but I think it's important to take a step back and think through the problem. I really want to push through the Little Centralization Paper (Tsitsiklis/Xu) a little more.

To me the key thing is this:

Our hypothetical "blind idiot God" must be as minimal as possible. That's why I said "provably minimal hub." The Tsitsiklis/Xu paper gives us a mathematical way to calculate exactly what percentage of traffic in a network must be centralized to achieve the phase transition they describe, but they do not give us an answer for what functionality is required.

Imagine a stupid-simple key-value store with PUT and GET. Each key has a corresponding public key submitted with it that can be used to authorize future updates of the same key. Keys expire after one year. That's it.
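A sketch of those semantics, with Ed25519 signatures via PyNaCl standing in for "a corresponding public key" (the library choice and interfaces are assumptions for illustration, not part of the proposal):

    import time
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    ONE_YEAR = 365 * 24 * 3600

    class MinimalHub:
        """PUT/GET only. The first PUT of a key binds it to a public key;
        later PUTs must be signed by that key; entries expire after a year."""
        def __init__(self):
            self._store = {}   # key -> (value, verify_key_bytes, expires_at)

        def put(self, key, value, verify_key_bytes, signature=None):
            now = time.time()
            existing = self._store.get(key)
            if existing is not None and existing[2] > now:
                # Key is still live: only the original public key may update it.
                try:
                    VerifyKey(existing[1]).verify(value, signature)
                except (BadSignatureError, TypeError, ValueError):
                    return False
                verify_key_bytes = existing[1]
            self._store[key] = (value, verify_key_bytes, now + ONE_YEAR)
            return True

        def get(self, key):
            entry = self._store.get(key)
            if entry is None or entry[2] <= time.time():
                return None
            return entry[0]

    # Usage: hub.put(b"name", b"v1", bytes(sk.verify_key)) registers a key;
    # hub.put(b"name", b"v2", bytes(sk.verify_key), sk.sign(b"v2").signature)
    # is then the only way to change it, where sk = SigningKey.generate().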

Or could we go even more minimal than that?

In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer
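For a sense of just how low that floor is, here is a toy interpreter for SUBLEQ ("subtract and branch if less than or equal to zero"), the canonical one-instruction-set computer from that article; a sketch for illustration only, nothing to do with the hub itself:

    def subleq(mem, pc=0, max_steps=10_000):
        # The entire instruction set is one instruction of three cells a, b, c:
        #   mem[b] -= mem[a]; if mem[b] <= 0, jump to c, else fall through.
        # A negative jump target halts the machine.
        for _ in range(max_steps):
            if pc < 0 or pc + 2 >= len(mem):
                break
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
        return mem

    # Copy the constant 7 (cell 15) into cell 16, using cell 17 as scratch.
    mem = [17, 17, 3,    # clear scratch
           16, 16, 6,    # clear destination
           15, 17, 9,    # scratch -= source      (scratch becomes -7)
           17, 16, 12,   # destination -= scratch (destination becomes 7)
           15, 15, -1,   # halt (jump to -1)
           7, 0, 0]      # cell 15: source, 16: destination, 17: scratch
    assert subleq(mem)[16] == 7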

Nicholas H.Tollervey [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-02 15:59:09

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some 
> thoughts I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 

Bravo Adam,

You have succinctly put into words many of the fuzzy "hand wavy"
thoughts I've been reaching for. Reading your blog post has allowed them
to become more concrete.

Regarding peer-(super)peer-peer: when I read this turn of phrase I had
an "aha!" moment. My work/thinking about DHT (via the drogulus project)
has led me to wonder about the nature of hierarchy - when someone or
some node in a network is more important than another. I skirt around it
in my recent Europython talk.

Given the way the Kademlia DHT algorithm I'm using works I expect
several things to happen in this context:

* Individual nodes prefer peers that are reliable (for example, they're
always on the network and reply in a timely fashion). The reliable peers
are the ones that end up in the local node's routing table (that keeps
track of who is out there on the network).

* Nodes share information from their routing tables with each other to
discover who else is on the network and keep themselves up-to-date (it's
part of the process of a lookup in the DHT).

* I would expect (super)peers to *emerge* from such interactions (note
my comments in the Europython talk on hierarchy based upon evidence
rather than architecture).

* If a (super)peer fails or doesn't "perform", the algorithm works
around it - i.e. I expect (super)peers to both emerge via evidence and
for the network to ignore them if or when they fall over or die. This
addresses the "how do we get rid of you?" question from the blackboard
slide in my talk.
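The first point above is essentially Kademlia's least-recently-seen eviction rule: a full bucket keeps its oldest live contacts and only drops one after it fails to respond, so proven peers accumulate in the routing table. A rough sketch of that policy (hypothetical names, not drogulus code):

    from collections import OrderedDict

    K = 20  # bucket capacity, as in the Kademlia paper

    class KBucket:
        """Contacts are kept in least- to most-recently-seen order. Long-lived,
        responsive peers stay put and are never evicted while they answer pings,
        which is how "reliable" peers come to dominate the routing table."""
        def __init__(self, ping):
            self._contacts = OrderedDict()   # node_id -> address
            self._ping = ping                # callable(node_id, address) -> bool

        def seen(self, node_id, address):
            if node_id in self._contacts:
                self._contacts.move_to_end(node_id)       # refresh recency
            elif len(self._contacts) < K:
                self._contacts[node_id] = address
            else:
                oldest_id, oldest_addr = next(iter(self._contacts.items()))
                if self._ping(oldest_id, oldest_addr):
                    # Oldest contact is alive: keep it and drop the newcomer.
                    self._contacts.move_to_end(oldest_id)
                else:
                    del self._contacts[oldest_id]
                    self._contacts[node_id] = address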

Also, I like your use of the word "feudal". I've been exchanging emails
with an old school friend who (surprisingly to me at least) is
interested in P2P. Here's a quote from a recent (private) exchange via
email about the Europython talk:

"Consider my slide about hierarchy and power: it works when the people
with authority derive their power through evidence. Unfortunately,
technology can be used to manipulate power in a way that is analogous
to the way aristocratic power works: "Why are you my King?", "Because
my father was your King!" It's the result of an imposed system
(Feudalism or a certain technical architecture) rather than merit or
consensus of opinion based upon tangible evidence (the king has
authority via accident of birth, the website has authority because of
the client/server model; contrast that to a doctor who has authority
because they have years of training and demonstrate a certain skill -
making ill people feel better)."

Finally, you end with "in the meantime, please do your own hacking and
ask your own questions". For this very reason I'm sitting in my shed on
my 17th wedding anniversary hacking on the drogulus (I have a young
family, a "real" job and organise UK Python community stuff - so I get
very little time to work on it; this needs to change). I'd be interested
to know how you'd see this sort of decentralised / peer to peer work
being funded. The best plan I can come up with is to save money and then
take some months off (likely around March next year).

Once again, congratulations on such an interesting and thought-provoking
blog post!

All the best,

Nicholas.
Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-03 10:45:39
On 02.08.2014 21:47, Adam Ierymenko wrote:
> On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>
>> 3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key.  Perhaps this could also play some role in the ecosystem.  I'll try to write something up.
> I've been thinking about that too, but I think it's important to take a step back and think through the problem. I really want to push through the Little Centralization Paper (Tsitsiklis/Xu) a little more.
>
> To me the key thing is this:
>
> Our hypothetical "blind idiot God" must be as minimal as possible.

I'm with you.

We've been toying with such an idea for a while too.

But looking into this "little centralization paper" I'm left puzzled:
what *function* should the centralized thing provide?


My overall impression so far is that the paper mostly concerns
efficiency and load balancing.  I'm not yet convinced that these are the
most important points.  IMHO reliability and simplicity are much more
important (as you mentioned in your blog post too).  I view efficiency
more like an economic term applicable to central service providers
operating services like FB.

I can only guess what the to-be-centralized functionality would be: #1 of
your problem definition, the name lookup.

Why?  Because any following operation could be arranged to only ever
talk to known peers.

>   That's why I said "provably minimal hub." The Tsitsiklis/Xu paper gives us a mathematical way to calculate exactly what percentage of traffic in a network must be centralized to achieve the phase transition they describe, but they do not give us an answer for what functionality is required.
>
> Imagine a stupid-simple key-value store with PUT and GET. Each key has a corresponding public key submitted with it that can be used to authorize future updates of the same key. Keys expire after one year. That's it.
>
> Or could we go even more minimal than that?

Maybe: forget the keys, don't allow any peer to simply update a value.  
(Why? Assume the "value" is the ownership property of some coin or other 
unique resource.  How to manage transfer? The update could be malicious.)

Instead: link the value to some script, which is invoked by the
"storing" (better: "maintaining") node to compute the update upon
request.  (Take the script as some kind of "contract" governing the
update rules.  No peer simply accepts updates; they check the contract to
verify the update complies with the terms.)

At this point the design is down to a simple pointer stored with the 
first value pointing to a second value (the script, which in turn points 
to an update policy; since this is a contract chances are that no 
updates are allowed).

All handling of keys, expiration times, etc. would suddenly be user-defined.
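A sketch of what "the value is linked to a script that governs updates" could look like, with the contract expressed as a plain function rather than a hashbang script (all names here are made up for illustration):

    class ContractStore:
        """The maintaining node never applies an update directly; it only runs
        the entry's own contract, which decides whether the value may change."""
        def __init__(self):
            self._entries = {}   # key -> (value, contract)

        def create(self, key, value, contract):
            if key in self._entries:
                raise KeyError("key already exists")
            self._entries[key] = (value, contract)

        def request_update(self, key, proposal):
            value, contract = self._entries[key]
            new_value = contract(value, proposal)     # may refuse by raising
            self._entries[key] = (new_value, contract)
            return new_value

    # Example contract: a value that may only ever grow.
    def monotonic(current, proposal):
        if proposal <= current:
            raise ValueError("update violates the contract")
        return proposal

    store = ContractStore()
    store.create("counter", 1, monotonic)
    store.request_update("counter", 5)      # accepted
    # store.request_update("counter", 3)    # would be refused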

> In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer

I'm afraid there needs to be some compromise.  That's too simple to be
usable.  How about allowing some kind of hashbang syntax in the script
to pull in the language of the user's choice to execute the update?

/Jörg


Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-03 11:31:11
Adam,

I've got a question:

On 02.08.2014 01:07, Adam Ierymenko wrote:
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/


In this blog post you wrote:

> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.

My situation: we wrote a p2p network for replicating state machines with byzantine fault tolerance.

That would be a kind of "global database no single individual controls"; I actually like your "blind idiot god" term.  We always thought of it as implementing some "general will", like the legal system in a constitutional state.  Not so different, is it?

So far we have concentrated on building a practical, working system (e.g., self-hosting).  The networking layer is just a plug-in.  And the default plugin was always intended to be replaced with state-of-the-art implementations.  It will probably not scale, and hence we never tested how it scales.  Looking at ZeroTier I'm asking: could this possibly be a transport plugin?

What we need:

A) Our identifiers are self-sealing.  That is, they are required to match some hash of the (initial) content and 4 more predefined metadata elements.  (We need this to prove their correctness, as in Ricardian Contracts etc.; a sketch of such an identifier follows below.)

So we'd need to register one such identifier per peer in a DHT.

B) We need some kind of Byzantine (Paxos-like) protocol which is capable of conveying hash-verifying agreement on the proposed update.  (This is slightly more than most Paxos implementations provide, since those are, for some reason beyond me, designed to TRUST the origin of an update.)  Fortunately we have this code.  So what we really need is "network traffic" between peers identified by some key.

I understand that ZeroTier provides (B).  But since I see "some kind" of "noise" as the identifier in ZeroTier, I'm unsure how easy it would be to get (A) too.
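A sketch of what a self-sealing identifier could look like, for item (A): the identifier is a hash over the initial content plus a handful of metadata fields, so anyone can re-derive and check it rather than trusting whoever served it (the field names below are invented, not Askemos's actual format):

    import hashlib
    import json

    def self_sealing_id(content: bytes, metadata: dict) -> str:
        # Hash of the initial content plus the predefined metadata elements.
        canonical = json.dumps(metadata, sort_keys=True).encode() + content
        return hashlib.sha256(canonical).hexdigest()

    def verify(identifier: str, content: bytes, metadata: dict) -> bool:
        return identifier == self_sealing_id(content, metadata)

    meta = {"creator": "...", "created": "2014-08-03",
            "content-type": "text/plain", "nonce": "0"}   # hypothetical fields
    ident = self_sealing_id(b"initial contract text", meta)
    assert verify(ident, b"initial contract text", meta)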


Further, I take your "capable of evolving" as a warning: how far does the implementation deviate?

Thanks

/Jörg

PS:

As you are sharing my reservations wrt. Bitcoin while at the same time looking for trust and accountability, you might want to look at how those alternatives compare.  The 51% of hash power is just one way; Byzantine agreement requires a two-thirds honest majority.  However, the latter participants are *well known* and contractually bound.  Advantages: a) speed: transactions take a fraction of a second over WAN; b) privacy: data lives precisely where you expect it to be and is not leaked elsewhere.  Downside: Bitcoin is open-join (anybody can participate), whereas with Askemos you get closed-join, like WhatsApp: the owner needs to accept the other party before messages are taken.

Actually I'm currently gathering more info towards a fair comparison.  Comments welcome:
http://ball.askemos.org/?_v=wiki&_id=1786
David Burns [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-03 11:38:08

On Friday, August 1, 2014, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/


Adam, your blog post interested me a lot. Best of luck with your efforts. One quibbly question:

>efficiency, security, decentralization, pick two.

 Assuming certain sorts of threats, decentralization contributes a lot to security. In those circumstances, your trichotomy devolves to a dichotomy, "efficiency or security, pick one." 

Fortunately, your actual approach, the peer-(super) peer-peer idea, finesses the problem nicely. Instead of "I am Spartacus," "I am the blind idiot god." Still, might attackers find a vulnerability there? In order to assure the efficiency you desire, someone must provide some resources intended to act as the superpeer or superpeers. Attacker censors those nodes, network efficiency falls below the tolerable threshold, bad guys win. How do you plan to defend against this attack?

Cheers,
Dave


--
"You can't negotiate with reality."
"You can, but it drives a really hard bargain."
Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-03 14:21:03
Sorry, this was supposed to be a private message.
(But hitting "reply" instead of "reply to list" sends it to the list anyways.)

One more question: am I correct to understand that zerotier serves essentially the same purpose as cjdns?
https://github.com/cjdelisle/cjdns

Thanks

/Jörg

On 03.08.2014 11:31, "Jörg F. Wittenberger" wrote:
Adam,

I've got a question:
In this blog post you wrote:

> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.


Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-04 15:58:16

On Aug 3, 2014, at 2:38 PM, David Burns <tdbtdb@gmail.com> wrote:


On Friday, August 1, 2014, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
I just started a personal blog, and my first post includes some thoughts I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/


Adam, your blog post interested me a lot. Best of luck with your efforts. One quibbly question:

>efficiency, security, decentralization, pick two.

 Assuming certain sorts of threats, decentralization contributes a lot to security. In those circumstances, your trichotomy devolves to a dichotomy, "efficiency or security, pick one." 

You're absolutely correct there. Decentralized systems are more robust against censorship, most naive denial of service attacks, and the failure of critical systems.

What they usually don't offer is a good user experience and high performance the other 99% of the time when everything is working well.

A decentralized network under attack will be more robust, but a centralized network *not* under attack will be faster, more consistent/reliable, easier to reach, consume fewer resources at the edge (important for mobile), and generally be easier to use... at least according to any known paradigm. Facebook is down every once in a while, but when it's up it's fast and incredibly easy to use compared to alternatives.

Everything I've written on this subject comes with a caveat: something new could be discovered tomorrow. Everything I write assumes the current state of the art, so obviously any big discoveries could change the whole picture. Personally I think a discovery in an area like graph theory that let us build *completely* center-less networks with the same performance, efficiency, and security characteristics as centralized ones would rank up there with the discovery of public key cryptography. It'd be Nobel Prize material if there were a Nobel Prize for CS.

Fortunately, your actual approach, the peer-(super) peer-peer idea, finesses the problem nicely. Instead of "I am Spartacus," "I am the blind idiot god." Still, might attackers find a vulnerability there? In order to assure the efficiency you desire, someone must provide some resources intended to act as the superpeer or superpeers. Attacker censors those nodes, network efficiency falls below the tolerable threshold, bad guys win. How do you plan to defend against this attack?

Yeah, that's basically it. All my current thinking is around the idea of minimal central hubs that allow us to have the benefits of central points without the downsides. I'm working on a follow-up blog post going into more detail about zero-knowledge hubs and what might be required there.

If I can find the time I might try to hack something up, but don't count on it in the next few months... so much other stuff going on.

-Adam

Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-04 16:04:39
On Aug 3, 2014, at 1:45 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

> On 02.08.2014 21:47, Adam Ierymenko wrote:
>> On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>> 
>>> 3. For a year or so I've had a design for a zero-knowledge server that nonetheless implements partial search/querying functionality for anyone with the key.  Perhaps this could also play some role in the ecosystem.  I'll try to write something up.
>> I've been thinking about that too, but I think it's important to take a step back and think through the problem. I really want to push through the Little Centralization Paper (Tsitsiklis/Xu) a little more.
>> 
>> To me the key thing is this:
>> 
>> Our hypothetical "blind idiot God" must be as minimal as possible.
> 
> I'm with you.
> 
> We've been toying with such an idea for a while too.
> 
> But looking into this "little centralization paper" I'm left puzzled 
> what *function* the centralized thing should provide?

That's what I'm scratching my head about too. Their work is so theoretical it simply doesn't specify *what* it should do, just that it should be there and its presence has an effect on the dynamics of the network.

I'm toying around with some ideas, but it's still cooking.

> My over-all impression so far is, that the paper mostly concerns 
> efficiency and load balancing.  I'm not yet convinced that these are the 
> most important points.  IMHO reliability and simplicity are much more 
> important (as you mentioned in your blog post too).  I view efficiency 
> more like an economic term applicable to central service providers 
> operating services like FB.

Efficiency is really important if we want to push intelligence to the edges, which is what "decentralization" is at least partly about. Mobile makes efficiency *really* important. Anything that requires that a mobile device constantly sling packets is simply off the table, since it would kill battery life and eat up cellular data quotas. That basically eliminates every mesh protocol I know about, every DHT, etc. from consideration for mobile.

>> In Turing-completeness there are shockingly minimal systems that are universal computers: https://en.wikipedia.org/wiki/One_instruction_set_computer
> 
> I'm afraid there needs to be some compromise.  That's too simple to be 
> usable.  How about allowing some kind of hashbang syntax in the script 
> to pull the language of users choice to execute the update?

I agree... I just furnished it as an example to show that the complexity *floor* for systems like this can be pretty low. Usually the practical design is less minimal than what theory allows.

Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-04 16:06:56
Not exactly, but close. CJDNS is a mesh protocol that creates a single L3 IPv6 network. ZeroTier One is a hybrid peer to peer protocol that creates virtual Ethernet networks (plural). ZeroTier is more like SDN for everyone, everywhere. (SDN is software defined networking, and refers to the creation of software defined virtual networks in data centers.)

I've been following CJDNS for a while. I know it's being used by several community meshnet projects. Anyone tried it? I admit I haven't yet, but I've heard it basically does work but not perfectly. I'm curious about how large it could scale though. I'll try it out at some point.

On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

Sorry, this was supposed to be a private message.
(But hitting "reply" instead of "reply to list" sends it to the list anyways.)

One more question: am I correct to understand that zerotier serves essentially the same purpose as cjdns?
https://github.com/cjdelisle/cjdns

Thanks

/Jörg

On 03.08.2014 11:31, "Jörg F. Wittenberger" wrote:
Adam,

I've got a question:
…
In this blog post you wrote:

> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.



Eric Mill [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-05 00:48:45
One line of research and technology that I personally find very exciting, and highly relevant to the idea of zero-knowledge centralization -- even though it's still some time off from being scalably useful -- is homomorphic encryption.

Homomorphic encryption is a technique where you take two inputs, encrypt them with a private key, hand them off to some other machine, have that machine perform a known computation *on the ciphertext*, and give you back the encrypted result, so you can decrypt it and get the answer. The machine that did the computation knows nothing about the inputs or the outputs -- it can only blindly operate on them.

While some techniques (like RSA) were partially homomorphic, what you need to make arbitrary homomorphic computation possible is a system that can do both multiplication and addition (together, these are Turing complete), and no such system was found for 40 years, until Craig Gentry's PhD thesis showed a working algorithm to do it.
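To make "partially homomorphic" concrete: textbook RSA is multiplicatively homomorphic, so a party holding only ciphertexts can compute the encryption of a product without ever seeing the factors. A toy demonstration with the classic small example parameters (never use keys remotely this small in practice):

    # Textbook RSA with toy parameters p = 61, q = 53.
    n, e, d = 3233, 17, 2753          # n = p*q, e public exponent, d private

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    m1, m2 = 7, 6
    c1, c2 = enc(m1), enc(m2)

    # The untrusted machine multiplies ciphertexts only...
    c_product = (c1 * c2) % n

    # ...and the key holder decrypts a product it never revealed.
    assert dec(c_product) == (m1 * m2) % n    # 42

Gentry-style fully homomorphic schemes add the corresponding additive property, which is what makes arbitrary computation on ciphertexts possible.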

The bad news is that it is many, many orders of magnitude too slow to be useful -- and it uses "lattice encryption", which requires very large private/public keys (gigabytes). IBM has since scooped up Gentry, and made advances on the original scheme that have sped it up by a trillion times -- but it is still a trillion times too slow.

But, someday -- and maybe someday sooner than we think, as these things go -- maybe it will be feasible to have things like zero-knowledge search engines. Maybe low-level zero-knowledge tasks, like packet-switching or whatever, could be feasible much sooner.

It's something to watch!


-- Eric


On Mon, Aug 4, 2014 at 7:06 PM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
Not exactly, but close. CJDNS is a mesh protocol that creates a single L3 IPv6 network. ZeroTier One is a hybrid peer to peer protocol that creates virtual Ethernet networks (plural). ZeroTier is more like SDN for everyone, everywhere. (SDN is software defined networking, and refers to the creation of software defined virtual networks in data centers.)

I've been following CJDNS for a while. I know it's being used by several community meshnet projects. Anyone tried it? I admit I haven't yet, but I've heard it basically does work but not perfectly. I'm curious about how large it could scale though. I'll try it out at some point.

On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

Sorry, this was supposed to be a private message.
(But hitting "reply" instead of "reply to list" sends it to the list anyways.)

One more question: am I correct to understand that zerotier serves essentially the same purpose as cjdns?
https://github.com/cjdelisle/cjdns

Thanks

/Jörg

On 03.08.2014 11:31, "Jörg F. Wittenberger" wrote:
Adam,

I've got a question:
…
In this blog post you wrote:

> I designed the protocol to be capable of evolving toward a more decentralized design in the future without disrupting existing users, but that's where it stands today.






Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-05 11:57:52
Oh definitely.

Homomorphic crypto could have a *lot* of uses. It opens up the potential for things like black box certificate authorities that could be distributed as open source software. The CA signs your key. With what? A key pair it generated internally that cannot *ever* be viewed by *anyone*. :)

-Adam

On Aug 4, 2014, at 9:48 PM, Eric Mill <eric@konklone.com> wrote:

One line of research and technology that I personally find very exciting, and highly relevant to the idea of zero-knowledge centralization -- even though it's still some time off from being scalably useful -- is homomorphic encryption.

Homomorphic encryption is a technique where you take two inputs, encrypt them with a private key, hand them off to some other machine, have that machine perform a known computation *on the ciphertext*, and give you back the encrypted result, so you can decrypt it and get the answer. The machine that did the computation knows nothing about the inputs or the outputs -- it can only blindly operate on them.

While some techniques (like RSA) were partially homomorphic, what you need to make arbitrary homomorphic computation is a system that can do both multiplication and addition (together, these are Turing complete), and no system to do this was found for 40 years, until Craig Gentry's PhD thesis showed a working algorithm to do it.

The bad news it is many many orders of magnitude too slow to be useful -- and uses "lattice encryption", which requires very large private/public keys (like GBs). IBM has since scooped up Gentry, and made advances on the original scheme that have sped it up by a trillion times -- but it is still a trillion times too slow.

But, someday -- and maybe someday sooner than we think, as these things go -- maybe it will be feasible to have things like zero-knowledge search engines. Maybe low-level zero-knowledge tasks, like packet-switching or whatever, could be feasible much sooner.

It's something to watch!


-- Eric



Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-05 12:31:45
On 05.08.2014 01:04, Adam Ierymenko wrote:
> > My over-all impression so far is, that the paper mostly concerns 
> > efficiency and load balancing.  I'm not yet convinced that these are the 
> > most important points.  IMHO reliability and simplicity are much more 
> > important (as you mentioned in your blog post too).  I view efficiency 
> > more like an economic term applicable to central service providers 
> > operating services like FB.
> Efficiency is really important if we want to push intelligence to the edges, which is what "decentralization" is at least partly about. Mobile makes efficiency *really* important. Anything that requires that a mobile device constantly sling packets is simply off the table, since it would kill battery life and eat up cellular data quotas. That basically eliminates every mesh protocol I know about, every DHT, etc. from consideration for mobile.

I did not want to say that efficiency is not important at all.

But I don't really see value in an application which is not reliable.  What's the value of a virtual asset stored on a mobile when the mobile is lost?  Manual backup is no solution.  As long as data does not outlive gadgets, there is little value left.

Michael Rogers [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-07 17:32:09

Hi Adam,

This is a great post. I share your frustration with the difficulty of
building decentralised systems that are usable, efficient and secure.
But I have some doubts about your argument.

I don't think the Tsitsiklis/Xu paper tells us anything about
centralisation vs decentralisation in general. It gives a very
abstract model of a system where some fraction of a scarce resource
can be allocated wherever it's needed. I'm not surprised that such a
system has different queueing behaviour from a system with fixed
allocation. But it seems to me that this result is a poor fit for your
argument, in two respects.

First, the result doesn't necessarily apply beyond resource allocation
problems - specifically, those problems where resources can be moved
from place to place at no cost. I don't see the relevance to the
lookup and routing problems you're aiming to solve with ZeroTier.

Second, the advantage is gained by having a panoptic view of the whole
system - far from being a blind idiot, the allocator needs to know
what's happening everywhere, and needs to be able to send resources
anywhere. It's more Stalin than Lovecraft.

I'm not denying that a touch of centralisation could help to make
ZeroTier more usable, efficient and secure - I just don't think this
paper does anything to support that contention.

You mention split-brain and internet weather as problems ZeroTier
should cope with, but I'm not sure centralisation will help to solve
those problems. If the network is partitioned, some nodes will lose
contact with the centre - they must either stop operating until they
re-establish contact, or continue to operate without the centre's
guidance. A distributed system with a centre is still a distributed
system - you can't escape the CAP theorem by putting a crown on one of
the nodes.

It's true that nobody's been able to ship a decentralised alternative
to Facebook, Google, or Twitter. But that failure could be due to many
reasons. Who's going to buy stock in a blind-by-design internet
company that can't target ads at its users? How do you advertise a
system that doesn't have a central place where people can go to join
or find out more? How do you steer the evolution of such a system?

All of these questions are easier to answer for infrastructure than
for public-facing products and services. Facebook, Google and Twitter
sit on top of several layers of mostly-decentralised infrastructure.
Since you're building infrastructure, I wonder whether it would be
more useful to look at how centralisation vs decentralisation plays
out at layers 2-4, rather than looking at the fully-centralised
businesses that sit on top of those layers.

The blind idiot god is a brilliant metaphor, and I agree it's what we
should aim for whenever we need a touch of centralisation to solve a
problem. But if we take into account the importance of metadata
privacy as well as content privacy, I suspect that truly blind and
truly idiotic gods will be very hard to design. A god that knows
absolutely nothing can't contribute to the running of the system. So
perhaps the first question to ask when designing a BIG is, what
information is it acceptable for the BIG to know?

Cheers,
Michael

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some
> thoughts I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 
Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-12 08:30:41

On Aug 5, 2014, at 11:11 PM, David Geib <trustiosity.zrm@gmail.com> wrote:

> I was thinking: does this almost reduce to the "hard AI problem?"

Detecting which nodes are malicious might not even be computable. It's the lack of verifiable information. Unless you have some trust anchors to create a frame of reference you can never tell who is defecting vs. who is lying about others defecting. And as I think about it, the only way to distinguish a targeted attack from a node being offline is to establish that it is online, which requires you to have a communications path to it, which would allow you to defeat the attack. So unless you can efficiently defeat the attack you can't efficiently detect whether one is occurring.

So I guess "detect then mitigate" is out. At least without manual intervention to identify that an attack is occurring.

I think you're ultimately right, and you've shifted my thinking just a little. The CAP theorem, while relevant, is probably not the central bugaboo. The central problem is trust.

What and who do you trust, and why, and how do you compute this?

The solution most of the Internet uses is for real-world political entities (corporations, governments, etc.) to create signing certificates. This is also the solution ZeroTier uses, more or less. Supernodes are designated as such because they're hard-coded; soon that designation will be determined by a signing certificate that I plan to put somewhere very safe (and keep encrypted) once I'm done signing the topology root dictionary.

Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting?

If there is an answer, it's going to come from game theory.

Finally, on the subject of "manual intervention..."

That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.

It reminds me of Gödel's incompleteness theorem. To intervene on behalf of a decentralized network requires that the conversation be taken somewhere *outside* that network. We see this with Bitcoin's response to GHASH.IO temporarily getting 51%. The response was rapid, and was coordinated via sites like Reddit /r/bitcoin and other things completely separate from the block chain.

This also makes me think more and more about hybrid systems where you've got multiple types of systems -- including both centralized and decentralized -- that back each other to create an "antifragile" network.

> The Bitcoin network solves the trust problem by essentially trusting itself. If someone successfully mounted a 51% attack against Bitcoin, nothing would be broken as far as the network is concerned. But that's not what *we*, the sentient beings that use it, want. We want the network to do "the right thing," but what's that? How does the network know what the right thing is? As far as its concerned, when 51% of the network extends the block chain that's the right thing... right?

Another way of putting this is that the Bitcoin users solve the trust problem by trusting the majority, where resistance to a Sybil attack comes from allocating votes proportional to computing power. Which works great until some entity amasses enough computing power to vote itself God. And you can do similar with other scarce or expensive things. IPv4 public addresses come to mind. Useful for banning trolls on IRC but if your attacker has a Class A or a botnet you're screwed.

Yep. It's one of the reasons I don't think Bitcoin in its present form is necessarily *that* much more robust than central banks and other financial entities.

Don't get me wrong. It's awesome, especially on a technical level, and it's clearly useful. I just don't think it's the absolute panacea that some think it is.

It has other issues too. A lot of people call it "anonymous virtual currency." It most certainly is not. That particular piece of Bitcoin evangelism is almost Orwellian in its doublespeak-iness. Bitcoin is the least anonymous currency ever devised. Every single transaction is recorded verbatim forever. Yes, the addresses can be anonymous but... umm... educate yourself about data de-anonymization with machine learning and data mining techniques. That's all I'm gonna say.

... and once one Bitcoin address is de-anonymized, you can begin traversing the transaction graph and de-anonymizing others. If the geeky methods fail you, you can always fall back on gumshoe detective work. "Hey dude, you sell Bitcoins on localbitcoin right? Who did you meet with on X date?"

> You could solve that problem pragmatically though by shipping with acceptable defaults. If a user wanted to change them they could, but they don't have to.

Right.

Maybe a good solution to the trust problem is exactly this:

Build in acceptable trust defaults, but let the user change them if they want or add new entities to trust if they want.

The challenge is making the *interface* and *presentation* of trust comprehensible to the user so the user understands exactly what they're doing and the implications of it clearly (without having to be an expert in PKI). Otherwise malware will game the user into trusting things they shouldn't. Of course you can never be totally safe from social engineering, but at least you should present the system in a way that makes the social engineers' job harder.

Complicated things like webs of trust are, I think, a no-go because they ask the user to solve the same non-computable trust problems a trustless network would have to solve except with lots of people and other entities. If something is non-computable for machines it is also non-computable for humans.

> One idea I've had is a hybrid system combining a centralized database and a decentralized DHT. Both are available and they back each other. The central database can take over if the decentralized DHT comes under attack and the decentralized DHT will work if the central system fails or is blocked (e.g. in a censorship-heavy country).

I've been considering doing federation similar to that. You have some node which is essentially a dedicated DHT node and a bunch of clients which use it as a gateway to access the DHT instead of participating themselves. So you have a lot of ostensibly related clients all using the same gateway and when they want to contact each other they get one hop access and no Sybil exposure. And if the gateway is down the clients can still participate in the DHT themselves so it isn't a single point of failure.

Yeah, that's basically the identical idea except in your model the centralized node(s) are the defaults and the DHT is fallback.
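A sketch of that hybrid lookup, with the preference order parameterised so either the central database or the DHT can be the default and the other the fallback (the backend interfaces here are hypothetical):

    class HybridDirectory:
        """Tries each backend in order of preference and falls through on
        failure, so the centralized database and the DHT back each other."""
        def __init__(self, *backends):
            self._backends = backends    # e.g. (central_db, dht) or (dht, central_db)

        def lookup(self, key):
            for backend in self._backends:
                try:
                    value = backend.get(key)
                    if value is not None:
                        return value
                except Exception:
                    continue             # backend down, censored, or timing out
            return None

    # directory = HybridDirectory(central_db, dht)   # central first, DHT as fallback
    # directory = HybridDirectory(dht, central_db)   # or the other way around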

> Everything related to TUN/TAP on every platform is nearly documentation-free. :)

The Linux implementation never gave me any trouble. https://www.kernel.org/doc/Documentation/networking/tuntap.txt says how to create one and then you configure it the same as eth0.
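For reference, the whole Linux-side dance that document describes fits in a few lines: open /dev/net/tun and issue TUNSETIFF (the constants below are the standard values from linux/if_tun.h; requires root and Linux):

    import fcntl
    import os
    import struct

    TUNSETIFF = 0x400454ca
    IFF_TAP   = 0x0002      # Ethernet-level device (IFF_TUN = 0x0001 for IP-level)
    IFF_NO_PI = 0x1000      # no extra packet-information header

    def open_tap(name="tap0"):
        # Create (or attach to) a TAP interface and return its file descriptor.
        # Afterwards it is configured like eth0: `ip addr add ... dev tap0` etc.
        fd = os.open("/dev/net/tun", os.O_RDWR)
        ifr = struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)
        fcntl.ioctl(fd, TUNSETIFF, ifr)
        return fd   # read() yields Ethernet frames, write() injects them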

Maybe the trouble with TAP-Windows is that it's idiosyncratic (to be kind) in addition to undocumented. Have you discovered any good way to identify your TAP-Windows interface as something not to be molested by other TAP-Windows applications like OpenVPN? There is some language in the .inf about changing the component ID which seems to imply recompiling the driver and then probably needing a code signing key from Microsoft to make it work, but there has to be some less ridiculous way of doing it than that.

Umm... sorry to break this to you, but that's exactly what I did.

I had to do it anyway because I had to add a new IOCTL to the tap driver to allow the ZeroTier service to query multicast group subscriptions at the Ethernet layer. Windows has no such thing natively, while on OSX/BSD you can get it via sysctl() and Linux exposes it in /proc.

David Geib [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-12 20:23:47
> Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting.

I think the problem is trying to compute trust algorithmically. In a completely decentralized network the information necessary to do that is not intrinsically available so you have to bootstrap trust in some other way.

Everybody trusting some root authority is the easiest way to do that but it's also the most centralized. It also doesn't actually solve the problem unless the root authority is also the only trusted party, because now you have to ask how the root is supposed to know whether to trust some third party before signing it. That's the huge fail with the existing CAs. They'll sign anything. Moxie Marlinspike has had a number of relevant things to say about that.

> That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.

In a theoretical sense that's true, because if the network is totally compromised, meaning no communication can take place between anyone, then you can't do anything in the direction of fixing it without having some external network to use to coordinate. But that's only a problem before bootstrap. If you can discover and communicate with several compatriots using the network and over time come to trust them before any attack is launched against the network, you can then designate them as trusted parties without any external contact. This is like the Bitcoin solution except that instead of using processing power as the limit on Sybils you use human face time. Then when the attack comes you already have trusted parties you can rely on to help you resist it.

So you *can* bootstrap trust (slowly) but you have to do it before the attack happens or suffer a large inefficiency in the meantime. But using an external network to bootstrap trust before you even turn the system on is clearly a much easier way to guarantee that it's done before the attack begins, and is probably the only efficient way to recover if it *isn't* done before the attack begins.

> This also makes me think more and more about hybrid systems where you've got multiple types of systems -- including both centralized and decentralized -- that back each other to create an "antifragile" network.

That definitely seems like the way to go. Homogeneous systems are inherently fragile because any attack that works against any part of the system will work against the whole of it. It's like the Unix Way: make everything simple and modular so that everything can interface with anything; that way, if something isn't working you can swap it out for something else. Then as long as you have [anything] that can perform the necessary function (e.g. message relay or lookup database), everything requiring that function can carry on working.

> Yep. It's one of the reasons I don't think Bitcoin in its present form is necessarily *that* much more robust than central banks and other financial entities.

I tend to think that Bitcoin is going to crash and burn. It has all the makings of a bubble. It's inherently deflationary, which promotes hoarding and speculation, which causes the price to increase in the short term, but the whole thing is resting on the supremacy of its technical architecture. So if somebody breaks the technology *or* somebody comes up with something better, or even a worthwhile but incompatible improvement to Bitcoin itself, then when everyone stops using Bitcoin in favor of the replacement the Bitcoins all lose their value. For example, if anyone ever breaks SHA256 it would compromise the entire blockchain. Then what do you do, start over from zero with SHA3?

> The challenge is making the *interface* and *presentation* of trust comprehensible to the user so the user understands exactly what they're doing and the implications of it clearly (without having to be an expert in PKI).

A big part of it is to reduce the consequences of users making poor trust decisions. The peers that are "trusted" should be trusted only to the smallest extent possible, and one peer making poor trust decisions should have minimal consequences for the others. That's one of the reasons web of trust is so problematic. Using web of trust for key distribution is desperation. Key distribution is the poster child for applying multiple heterogeneous methods. It's the thing most necessary to carry out external to the network, but they're trying to handle it internally using one method for everyone.

The ideal would be for nodes to only trust a peer to relay data and then have the destination provide an authenticated confirmation of receipt. Then if there is no confirmation you ask some different trusted peer(s) to relay the message. That way all misplaced trust costs you is efficiency rather than security. If a trusted peer defects then you try the next one. Then even if half the peers you trusted will defect, you're still far ahead of the alternative where 90% or 99.9% of the peers you try could be Sybils. And that gets the percentage of defecting peers down to the point where you can start looking at the Byzantine fault tolerance algorithms to detect them, which might even allow defecting peers to be algorithmically ejected from the trusted group.
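A sketch of that "trust only to relay, demand an authenticated receipt" loop, assuming the destination signs an acknowledgement the sender can verify (the relay and verification interfaces are illustrative, not any existing API):

    def send_via_relays(message, destination, relays, wait_for_ack, verify_ack,
                        timeout=5.0):
        """Try trusted relays in order. A relay is only trusted to forward;
        proof of delivery is the destination's signed receipt, so a defecting
        relay costs one timeout, never integrity."""
        for relay in relays:
            relay.forward(destination, message)        # hypothetical relay API
            ack = wait_for_ack(destination, timeout)   # signed by destination's key
            if ack is not None and verify_ack(destination, message, ack):
                return relay                           # delivery confirmed
            # No valid receipt: assume this relay failed or defected; try the next.
        raise RuntimeError("no trusted relay delivered the message")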

> Yeah, that's basically the identical idea except in your model the centralized node(s) are the defaults and the DHT is fallback.

Part of the idea is to decentralize the centralized nodes. Then there are big nodes trusted by large numbers of people but there is no "root" which is trusted by everybody. And big is relative. If each organization (or hackerspace or ...) runs their own supernode then there is nothing to shut down or compromise that will take most of the network with it, and there is nothing preventing a non-supernode from trusting (i.e. distributing their trust between) more than one supernode. Then you can have the supernode operators each decide which other supernodes they trust which shrinks the web of trust problem by putting a little bit of hierarchy into it, without making the hierarchy rigid or giving it a single root. The result is similar in structure to a top down hierarchy except that it's built from the bottom up so no one has total control over it.

> Umm... sorry to break this to you, but that's exactly what I did.

Argh. Why does everything related to Windows have to be unnecessarily complicated?

Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-13 21:04:47
On Aug 12, 2014, at 5:23 PM, David Geib <trustiosity.zrm@gmail.com> wrote:
> 
> > Trust without some centralized "god" somewhere is extraordinarily hard for the reasons you discuss. How do I trust? How do I compute trust? How do I cooperate with peers to compute trust while being sure these peers are not defecting.
> 
> I think the problem is trying to compute trust algorithmically. In a completely decentralized network the information necessary to do that is not intrinsically available so you have to bootstrap trust in some other way.
> 
> Everybody trusting some root authority is the easiest way to do that but it's also the most centralized. It also doesn't actually solve the problem unless the root authority is also the only trusted party, because now you have to ask how the root is supposed to know whether to trust some third party before signing it. That's the huge fail with the existing CAs. They'll sign anything. Moxie Marlinspike has had a number of relevant things to say about that.
> 

That's the general pattern that I see. The easiest approach is the most centralized approach... at least if you neglect the longer term systemic downsides of it. Maybe over-centralization should be considered a form of technical debt.

I agree that root CAs are horrible. I have had them do things like send me a private key unencrypted to gmail. I am not making that up. No passphrase. To gmail.

Hmm... Yeah, I think doing trust better is a must.

Btw... Some folks responded to my post lamenting that I had given up on decentralization. That's not true at all. I am just doing two things. One is trying to spin the problem around and conceptualize it differently. The other is giving the problem the respect it deserves. It's a very, very hard problem... Which is part of why I like it. :)

> > That manual intervention must by definition take place over some other network, not the network in question, since the network being intervened with may be compromised.
> 
> In a theoretical sense that's true, because if the network is totally compromised, meaning no communication can take place between anyone, then you can't do anything in the direction of fixing it without having some external network to use to coordinate. But that's only a problem before bootstrap. If you can discover and communicate with several compatriots using the network and over time come to trust them before any attack is launched against the network, you can then designate them as trusted parties without any external contact. This is like the Bitcoin solution except that instead of using processing power as the limit on Sybils you use human face time. Then when the attack comes you already have trusted parties you can rely on to help you resist it.

I'm not sure those kinds of approaches can work on a global scale. How do people in Russia or South Africa determine their trust relationship with someone in New York? I guess you could traverse the graph, but now you are back to trying to compute trust.

> So you *can* bootstrap trust (slowly) but you have to do it before the attack happens or suffer a large inefficiency in the meantime. But using an external network to bootstrap trust before you even turn the system on is clearly a much easier way to guarantee that it's done before the attack begins, and is probably the only efficient way to recover if it *isn't* done before the attack begins. 

Another point on this... History has taught us that governments and very sophisticated criminals are often much more ahead of the game than we suspect they are. My guess is that if a genuine breakthrough in trust is made it will be recognizable as such and those forces will get in early. The marketing industry is also very sophisticated, though not quite as cutting edge as the overworld and the underworld.

On a more pragmatic note, I think you have a chicken-or-egg problem with the idea of bootstrapping before turning the system on. History has also demonstrated that in computing, "release early, release often" wins hands down. Everything that I am familiar with, from the web to Linux to even polish-obsessed creatures like the Mac, has followed this path. If it doesn't exist yet nobody will use it, and if nobody is using it nobody will bootstrap trust for it because nobody is using it therefore nobody will ever use it therefore it's a waste of time...

> Then as long as you have [anything] that can perform the necessary function (e.g. message relay or lookup database), everything requiring that function can carry on working. 

You can have your cake and eat it too. It's easy. Just make two cakes. Make a centralized cake and a decentralized cake.

> I tend to think that Bitcoin is going to crash and burn. It has all the makings of a bubble. It's inherently deflationary which promotes hoarding and speculation which causes the price to increase in the short term, but the whole thing is resting on the supremacy of its technical architecture. So if somebody breaks the technology *or* somebody comes up with something better or even a worthwhile but incompatible improvement to Bitcoin itself, when everyone stops using Bitcoin in favor of the replacement the Bitcoins all lose their value. For example if anyone ever breaks SHA256 it would compromise the entire blockchain. Then what do you do, start over from zero with SHA3? 

I think the tech behind it is more interesting than Bitcoin itself. It reminds me of the web. Hypertext, browsers, and the new hybrid thin client model they led to was interesting. The internet was certainly damn interesting. But pets.com and flooz? Not so much.

I still need to take a deep, deep dive into the block chain technology. I get the very basic surface of it, but I am really curious about how it might be used as part of a solution to the trust bootstrapping problem. If hybrid overlapping heterogeneous solutions are the way forward for network robustness, then maybe a similar concurrent cake solution exists for trust.

At some point I think someone is going to successfully attack Bitcoin. What happens then? I don't know. It has some value as a wire transfer protocol if nothing else, but the sheen will certainly wear off.

> The ideal would be for nodes to only trust a peer to relay data and then have the destination provide an authenticated confirmation of receipt. Then if there is no confirmation you ask some different trusted peer(s) to relay the message. That way all misplaced trust costs you is efficiency rather than security. If a trusted peer defects then you try the next one. Then even if half the peers you trusted will defect, you're still far ahead of the alternative where 90% or 99.9% of the peers you try could be Sybils. And that gets the percentage of defecting peers down to the point where you can start looking at the Byzantine fault tolerance algorithms to detect them, which might even allow defecting peers to be algorithmically ejected from the trusted group. 

This is basic to any relayed crypto peer to peer system including the one I built. Every packet is MAC'd using a key derived from a DH agreement, etc.
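
To make that concrete, here is roughly what that pattern looks like in a few lines of Python using the pyca/cryptography package. This is purely illustrative and is not ZeroTier's actual construction or wire format:

# Sketch only: derive a per-link key from a DH agreement and MAC each packet.
# Illustrative pattern, NOT ZeroTier's actual protocol.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, hmac

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

# Each side computes the same shared secret from its own private key
# and the peer's public key.
shared = alice.exchange(bob.public_key())

# Derive a fixed-length MAC key from the raw DH output.
mac_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"example-packet-mac").derive(shared)

def mac_packet(payload: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so relays can't tamper with the packet.
    tag = hmac.HMAC(mac_key, hashes.SHA256())
    tag.update(payload)
    return payload + tag.finalize()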

I think the harder thing is defending not against Sybils vs. the data itself but Sybils vs the infrastructure. Criminals, enemy governments, authoritarian governments, etc. might just want to take the network down, exploit it to carry out a DDOS amplification attack against other targets, or make it unsuitable for a certain use case.

> Part of the idea is to decentralize the centralized nodes. Then there are big nodes trusted by large numbers of people but there is no "root" which is trusted by everybody. And big is relative. If each organization (or hackerspace or ...) runs their own supernode then there is nothing to shut down or compromise that will take most of the network with it, and there is nothing preventing a non-supernode from trusting (i.e. distributing their trust between) more than one supernode. Then you can have the supernode operators each decide which other supernodes they trust which shrinks the web of trust problem by putting a little bit of hierarchy into it, without making the hierarchy rigid or giving it a single root. The result is similar in structure to a top down hierarchy except that it's built from the bottom up so no one has total control over it. 

I like this...especially the part about shrinking the problem.

It reminds me of how old NNTP and IRC and similar protocols were run. You had a network of servers run by admin volunteers, so the trust problem was manageable. But there was no king per se... A bit of an oligarchy though.

> 
> > Umm... sorry to break this to you, but that's exactly what I did.
> 
> Argh. Why does everything related to Windows have to be unnecessarily complicated?
> 

That's nothing. Get a load of what I had to pull out of my you know what to get windows to treat a virtual network properly with regard to firewall policy. As far as I know I am the first developer to pull this off, and it's not pretty. I think I am first on this one by virtue of masochism.

https://github.com/zerotier/ZeroTierOne/commit/f8d4611d15b18bf505de9ca82d74f5102fc57024#diff-288ff5a08b3c03deb7f81b5d45228018R628
David Geib [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-14 04:30:54 (6 years 9 mons 11 days 02:50:00 ago)
> That's the general pattern that I see. The easiest approach is the most centralized approach... at least if you neglect the longer term systemic downsides of it. Maybe over-centralization should be considered a form of technical debt.

It's more like a security vulnerability. Single point of failure, single point of compromise and a choke point for censorship and spying.

> I agree that root CAs are horrible. I have had them do things like send me a private key unencrypted to gmail. I am not making that up. No passphrase. To gmail.

And don't forget that they're all fully trusted. So it's completely futile to try to find a secure one because the insecure ones can still give the attackers a certificate with your name.

> Btw... Some folks responded to my post lamenting that I had given up on decentralization. That's not true at all. I am just doing two things. One is trying to spin the problem around and conceptualize it differently. The other is giving the problem the respect it deserves. It's a very, very hard problem... Which is part of why I like it. :)

It's definitely a fun problem. Part of it is to pin down just what "decentralization" is supposed to mean. If you start with the ideologically pure definition where each node is required to be totally uniform you end up banging your head against the wall. You want a node running on batteries with an expensive bandwidth provider to be able to participate in the network but that shouldn't exclude the possibility of usefully exploiting the greater resources of other nodes that run on AC power and have cheap wired connections. So once you admit the possibility of building a network which is both decentralized and asymmetrical it becomes an optimization problem. How close to the platonic ideal can you get without overly compromising efficiency or availability?

> I'm not sure those kinds of approaches can work on a global scale. How do people in Russia or South Africa determine their trust relationship with someone in New York? I guess you could traverse the graph, but now you are back to trying to compute trust

But that's the whole problem, isn't it? If you have no direct contact and you have no trusted path you really have nothing. That's why web of trust is the last resort. It's the thing that comes closest to working when nothing else will. Which is also why it's terrible. Because you only need it when nothing else works but those are also the times when web of trust is at its weakest.

The key is to find something better from the context of the relationship. Even if you live far apart you might be able to meet once and exchange keys. If you have a mutual trusted friend you can use that. If you have an existing organizational hierarchy then you can traverse that to find a trusted path. If one of you has a true broadcast medium under your control then you can broadcast your key so that anyone can get it.

If you don't have *anything*, you have to ask what it is you're supposed to be trusting. If you start communicating with some John Doe on the other side of the world with no prior relationship or claim to any specific credentials, does it actually matter that he wants to call himself John Smith instead of John Doe? At that point the only thing you can really ask to be assured of is that when you communicate with "John Smith" tomorrow it's the same "John Smith" it was yesterday.
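
In other words, the best you can ask for is key continuity, which is basically what SSH gives you when it pins a host key on first use. A toy sketch of that check (the pin store and file name are hypothetical, not from any existing tool):

# Toy trust-on-first-use check: we can't verify who "John Smith" really is,
# but we can notice if his key changes between conversations.
# known_peers.json is a hypothetical local pin store.
import json, hashlib

PIN_FILE = "known_peers.json"

def fingerprint(pubkey_bytes: bytes) -> str:
    return hashlib.sha256(pubkey_bytes).hexdigest()

def check_continuity(peer_name: str, pubkey_bytes: bytes) -> bool:
    try:
        with open(PIN_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    fp = fingerprint(pubkey_bytes)
    if peer_name not in pins:
        pins[peer_name] = fp              # first contact: pin whatever we see
        with open(PIN_FILE, "w") as f:
            json.dump(pins, f)
        return True
    return pins[peer_name] == fp          # later contact: key must match the pin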

> Another point on this... History has taught us that governments and very sophisticated criminals are often much more ahead of the game than we suspect they are. My guess is that if a genuine breakthrough in trust is made it will be recognizable as such and those forces will get in early. The marketing industry is also very sophisticated, though not quite as cutting edge as the overworld and the underworld.

Oh sure. Trust is a social issue. Criminals and marketing departments (now there's a combination that fits like a glove) have engaged in social engineering forever. That's nothing new. Maybe the question is whether there are any new *solutions* to the old problems. Some combination of global instantaneous communication and digital storage might make it harder for people to behave dishonestly or inconsistently without getting caught. But then we're back to computing trust.

And maybe that's not wrong. The real problem is trying to compute trust with no points of reference. Once you have some externally-sourced trust anchors we're back to heterogeneous and hybrid solutions.

> On a more pragmatic note, I think you have a chicken or egg problem with the idea of bootstrapping before turning the system on.

Just the opposite. Bootstrapping first *is* the ship early method because you bootstrap based on existing trust networks rather than trying to construct a new one from whole cloth. The question is how to gather the existing information in a way that provides a good user experience. You can imagine something like Facebook: You need to add a couple of friends manually but then it can start asking whether their friends are your friends. Though that obviously brings privacy implications; maybe something like homomorphic encryption could improve it? But now it's starting to get complicated. I wonder if it makes sense to factor it out. Separate the trust network from the communications network. A personal trust graph as a local API could be extremely useful in general. And then the entities can start tagging themselves with other data like their email address, PGP key, snow key, website, etc. A little bit social network + web of trust + key:value store.
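
Very roughly, the shape I have in mind is something like this (all names are hypothetical; it just shows what a local trust-graph API plus key:value tags could look like):

# Hypothetical shape of a "personal trust graph" as a local API: a few manually
# added friends, key:value tags per entity, simple lookups. Names are made up.
from dataclasses import dataclass, field

@dataclass
class Entity:
    key_fingerprint: str
    tags: dict = field(default_factory=dict)   # e.g. {"email": ..., "pgp": ...}

class TrustGraph:
    def __init__(self):
        self.entities = {}     # name -> Entity
        self.trusted = set()   # names I vouch for directly

    def add_friend(self, name, key_fingerprint, **tags):
        self.entities[name] = Entity(key_fingerprint, dict(tags))
        self.trusted.add(name)

    def lookup(self, name, tag):
        e = self.entities.get(name)
        return e.tags.get(tag) if e else None

g = TrustGraph()
g.add_friend("alice", "ab12cd34...", email="alice@example.org", snow_key="...")
print(g.lookup("alice", "email"))       # -> alice@example.org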

> I think the tech behind it is more interesting than Bitcoin itself. It reminds me of the web. Hypertext, browsers, and the new hybrid thin client model they led to was interesting. The internet was certainly damn interesting. But pets.com and flooz? Not so much.

Agreed. It's interesting because it solves a lot of the hard problems with digital currencies but not all of them. It's clearly an evolutionary step on the road to something else. Which is what concerns me about it: Inertia and market share will allow it to survive against competitors that are only slightly better but that just means more people will have built their homes on the flood plain by the time the rain comes.

> I still need to take a deep, deep dive into the block chain technology. I get the very basic surface of it, but I am really curious about how it might be used as part of a solution to the trust bootstrapping problem. If hybrid overlapping heterogeneous solutions are the way forward for network robustness, then maybe a similar concurrent cake solution exists for trust.

Relevant: http://www.aaronsw.com/weblog/squarezooko

This is essentially the roadmap that led to namecoin, which (among other things) disproved Zooko's Triangle.

Actually that's an interesting point. Zooko's triangle was supposed to be that you couldn't have a naming system which is decentralized, has global human-readable names and is secure. And it fails by the same overgeneralization as we had here. You don't need centralization as long as you have trust. So bitcoin/namecoin puts its trust in the majority as determined by processing power and solves the triangle by providing trust without centralization.

An interesting question is what might we use instead of computing power to create a trust democracy that would allow the good guys to retain a majority.
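
The naming part itself is almost trivial once you assume the consensus problem away. A toy version of the first-come-first-served register, with the proof-of-work majority (the actual hard part) omitted entirely:

# Toy illustration of the naming idea behind squarezooko/namecoin: names bind
# to keys first-come-first-served in an append-only log. The hard part, the
# proof-of-work majority that makes the log tamper-resistant, is omitted.
import hashlib

log = []      # append-only list of (name, key fingerprint) registrations
names = {}    # materialized view: name -> key fingerprint

def register(name: str, pubkey: bytes) -> bool:
    if name in names:
        return False                             # first claim wins
    fp = hashlib.sha256(pubkey).hexdigest()
    log.append((name, fp))
    names[name] = fp
    return True

register("john-smith", b"john's public key bytes")   # True
register("john-smith", b"an attacker's key")         # False: name already taken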

> This is basic to any relayed crypto peer to peer system including the one I built. Every packet is MAC'd using a key derived from a DH agreement, etc.

Right, the crypto is a solved problem. The issue is that if you send a packet to a Sybil, it throws it away. After the timeout you send the packet via some other node. If it's also a Sybil it throws it away. If the large majority of the nodes are Sybils that's where the inefficiency comes from. You would essentially have to broadcast the message in order to find a path that contains no Sybils. Trust should be able to solve the problem by making available several "trusted" paths only a minority of which contain Sybils.
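
So the control flow is basically this (send and wait_for_ack stand in for whatever transport the network actually uses; they are hypothetical callbacks, not real APIs):

# Sketch of "misplaced trust costs efficiency, not security": try trusted
# relays one at a time and stop at the first authenticated acknowledgement.
# send and wait_for_ack are caller-supplied (hypothetical) transport callbacks.
def relay_with_fallback(packet, trusted_relays, send, wait_for_ack, timeout=2.0):
    for relay in trusted_relays:
        send(relay, packet)
        if wait_for_ack(packet, timeout):   # signed receipt from the destination
            return relay                    # this relay behaved honestly
        # No confirmation: the relay may be a Sybil or just offline; try the next.
    raise RuntimeError("no trusted relay delivered the packet")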

> I think the harder thing is defending not against Sybils vs. the data itself but Sybils vs the infrastructure. Criminals, enemy governments, authoritarian governments, etc. might just want to take the network down, exploit it to carry out a DDOS amplification attack against other targets, or make it unsuitable for a certain use case.

Some attacks are unavoidable. If the attacker has more bandwidth than the sum of the honest nodes in the network, you lose. But those are the attacks that inverse scale. The more honest nodes in the network, the harder the attack. And the more you can reduce the number of centralized choke points, the harder it is to take down the network as a whole.

Amplification is also relatively easy to mitigate. Avoid sending big packets in response to small packets. And if you have to do that, first send a small packet with a challenge that the requesting node has to copy back, as evidence that it really is reachable at the source address it claims. Relevant: http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-02
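
In sketch form the cookie exchange is only a few lines (build_response stands in for whatever large reply the protocol would normally send; it's hypothetical):

# Sketch of the anti-amplification handshake: never answer a small request with
# a big response until the requester echoes a random cookie, proving it can
# actually receive traffic at the claimed source address.
import secrets

cookies = {}   # source address -> outstanding cookie

def handle_request(addr, request, build_response):
    # build_response is a caller-supplied (hypothetical) function that produces
    # the full-size reply once the requester is verified.
    if request.get("cookie") != cookies.get(addr):
        cookies[addr] = secrets.token_hex(16)
        return {"cookie": cookies[addr]}    # tiny reply, so no amplification
    del cookies[addr]                       # cookie verified and consumed
    return build_response(request)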

It's the targeted attacks that are a bear because they're heterogeneous *attacks*. In order to contact a node, the node itself has to be online, there has to be an honest path between you and the node, and you have to be able to discover that path. So the attacker can dump traffic on the low-capacity honest paths to take them offline and then create a bunch of Sybils to make discovering the higher-capacity paths more difficult, and you have no way to distinguish between the target legitimately being offline and merely all the paths you've tried being compromised. The answer is to somehow know ex ante which paths are honest, but easier said than done.

> That's nothing. Get a load of what I had to pull out of my you know what to get windows to treat a virtual network properly with regard to firewall policy. As far as I know I am the first developer to pull this off, and it's not pretty. I think I am first on this one by virtue of masochism.

Thanks again, Microsoft. Though I think the OpenVPN users might have beaten you to the equivalent solution, e.g. http://superuser.com/questions/120038/changing-network-type-from-unidentified-network-to-private-network-on-an-openvpn

(And as a public service announcement, 1.1.1.1 is no longer a "fake" address as the 1.0.0.0/8 block was assigned to APNIC. http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml)


Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-19 12:22:38 (6 years 9 mons 5 days 18:58:00 ago)

On Aug 14, 2014, at 1:30 AM, David Geib <trustiosity.zrm@gmail.com> wrote:

It's more like a security vulnerability. Single point of failure, single point of compromise and a choke point for censorship and spying.

Not a bad way of framing it…

Try listing all the “trusted”
Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-19 12:52:55 (6 years 9 mons 5 days 18:28:00 ago)
Getting to this a bit belatedly… :)

On Aug 7, 2014, at 9:32 AM, Michael Rogers <michael@briarproject.org> wrote:

> I don't think the Tsitsiklis/Xu paper tells us anything about
> centralisation vs decentralisation in general. It gives a very
> abstract model of a system where some fraction of a scarce resource
> can be allocated wherever it's needed. I'm not surprised that such a
> system has different queueing behaviour from a system with fixed
> allocation. But it seems to me that this result is a poor fit for your
> argument, in two respects.
> 
> First, the result doesn't necessarily apply beyond resource allocation
> problems - specifically, those problems where resources can be moved
> from place to place at no cost. I don't see the relevance to the
> lookup and routing problems you're aiming to solve with ZeroTier.

I have an admission to make. I did a very un-academic right-brainy thing, in that I made a little bit of a leap. When I read “phase transition” it was sort of an epiphany moment. Perhaps I studied too much complexity and evolutionary theory, but I immediately got a mental image of a phase transition in state space where a system takes on new properties. You see that sort of thing in those areas all the time.

But I don’t think it’s a huge leap. The question Tsitsiklis/Xu were looking at was storage allocation in a distributed storage pool (or an idealized form of that problem). Their research was backed by Google, who obviously is very interested in storage allocation problems. But I don’t think it’s a monstrous leap to go from storage allocation problems to bandwidth, routing, or trust. Those are all “resources” and all can be moved or re-allocated. Many are dynamic rather than static resources.

It’d be interesting to write these authors and ask them directly what they think. Maybe I’ll do that.

If you’ve been reading the other thread, we’re talking a lot about trust and I’m starting to agree with David Geib that trust is probably the root of it. These other issues, such as this and the CAP theorem, are probably secondary in that if trust can be solved then these other things can be tackled or the problem space can be redefined around them.

> Second, the advantage is gained by having a panoptic view of the whole
> system - far from being a blind idiot, the allocator needs to know
> what's happening everywhere, and needs to be able to send resources
> anywhere. It's more Stalin than Lovecraft.

I think it’s probably possible to have a coordinator that coordinates without knowing *much* about what it is coordinating, via careful and clever use of cryptography. I was more interested in the over-arching theoretical question of whether some centralization is needed to achieve efficiency and the other things that are required for a good user experience, and if so how much.

ZeroTier’s supernodes know that point A wants to talk to point B, and if NAT traversal is impossible and data has to be relayed then they also know how much data. But that’s all they know. They don’t know the protocol, the port, or the content of that data. They’re *pretty* blind. I have a suspicion it might be possible to do better than that, to make the blind idiot… umm… blinder.

It would be significantly easier if it weren’t for NAT. NAT traversal demands a relaying maneuver that inherently exposes some metadata about the communication event taking place. But we already know NAT is evil and must be destroyed or the kittens will die.

> It's true that nobody's been able to ship a decentralised alternative
> to Facebook, Google, or Twitter. But that failure could be due to many
> reasons. Who's going to buy stock in a blind-by-design internet
> company that can't target ads at its users? How do you advertise a
> system that doesn't have a central place where people can go to join
> or find out more? How do you steer the evolution of such a system?

Sure, those are problems too. Decentralization is a multifaceted problem: technical, political, business, social, ...

But it’s not like someone’s shipped a decentralized Twitter that is equivalently fast, easy to use, etc., and it’s failed in the marketplace. It’s that nobody’s shipped it at all, and it’s not clear to me how one would build such a thing.

Keep in mind too that some of the profitability problems of decentralization are mitigated by the cost savings. A decentralized network costs orders of magnitude less to run. You don’t need data centers that consume hundreds of megawatts of power to handle every single computation and store every single bit of data. So your opportunities to monetize are lower but your costs are also lower. Do those factors balance out? Not sure. Nobody’s tried it at scale, and I strongly suspect the reason to be technical.

The bottom line is kind of this:

Decentralization and the devolution of power are something that lots of people want, and they’re something human beings have been trying to achieve in various ways for a very long time. Most of these efforts, like democracy, republics, governmental balance of power, anti-trust laws, etc., pre-date the Internet. Yet it never works.

When I see something like that — repeated tries, repeated failures, but everyone still wants it — I start to suspect that there might be a law of nature at work. To give an extreme case — probably a more extreme case than this one — people have been trying to build infinite energy devices for a long time too. People would obviously love to have an infinite energy device. It would solve a lot of problems. But they never work, and in that case any physicist can tell you why.

Are there laws of nature at work here? If so, what are they? Are they as tough and unrelenting as the second law of thermodynamics, or are they something we can learn to work within or around? That’s what I want to know.

> The blind idiot god is a brilliant metaphor, and I agree it's what we
> should aim for whenever we need a touch of centralisation to solve a
> problem. But if we take into account the importance of metadata
> privacy as well as content privacy, I suspect that truly blind and
> truly idiotic gods will be very hard to design. A god that knows
> absolutely nothing can't contribute to the running of the system. So
> perhaps the first question to ask when designing a BIG is, what
> information is it acceptable for the BIG to know?

Good point about metadata privacy, but I think it’s ultimately not a factor here. Or rather… it *is* a factor here, but we have to ignore it.

The only way I know of to achieve metadata privacy with any strength beyond the most superficial sort is onion routing. Onion routing is inherently expensive. I’m not sure anyone’s going to use it for anything “routine” or huge-scale.

… that is unless someone invents something new. I have wondered if linear coding schemes might offer a way to make onion routing more efficient, but that would be an awfully big research project that I don’t have time to do. :)

We can get most of the way there by making it at least difficult to gather meta-data, and by using encryption to make that meta-data less meaningful and transparent. There’s a big difference between Google or the NSA or the Russian Mob being able to know everything I’ve ever bought vs. them being able to know with some probability when and where I’ve spent money but not what on and not how much. The latter is less useful.

David Geib [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-20 00:56:27 (6 years 9 mons 5 days 06:24:00 ago)
> Yes, and I also think it’s a toy version of an even larger problem: how to devolve power in general.
> Human societies are networks too. I think this work has political and philosophical implications inasmuch as the same information theoretic principles that govern computer networks might also operate in human ones.
> If we can fix it here, maybe it can help us find new ways of fixing it there.

Or the other way around for that matter. Look at the societies that work best and see how they do it.

> I wonder what might be done if we could pair mesh nets with broadcast media? Has anyone looked into that? I picture a digitally encoded shortwave analog to a “numbers station” that continuously broadcasts the current mesh net consensus for trust anchor points and high-availability nodes.

I think we have two different problems here and it makes sense to distinguish them.

The first problem is the key distribution problem, which is an authentication problem. You have some name or other identity and you need a trustworthy method of obtaining the corresponding public key.

The second problem is the communication problem, which is a reliability/availability problem. You have some public key and you want to make a [more] direct connection to it so you need to identify someone or some path that can be trusted to reliably deliver the request.

Traditional broadcast media can actually solve both of them in different ways. Key distribution has the narrower solution. If you're The New York Times or CBS then you can e.g. print the QR code of your public key fingerprint on the back page of every issue. A reader who picks up an issue from a random news stand can have good confidence that the key isn't forged because distributing a hundred thousand forged copies of The New York Times every day or setting up a 50KW transmitter on a frequency allocated to CBS would be extremely conspicuous and would quickly cause the perpetrator to get sued or arrested or shut down by the FCC. But that only works if you yourself are the broadcaster (or you trust them to essentially act as a CA). And pirate radio doesn't have the same effect because the fact that the FCC will find you and eat you is *why* you can trust that a broadcast on CBS is actually from CBS. Without that it's just self-signed certificates.
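
Producing that back-page artifact is trivial, which is part of the appeal. A sketch using Python's third-party qrcode package, with a placeholder key and a made-up label format:

# Sketch of the "fingerprint on the back page" idea: hash the long-term public
# key and render the fingerprint as a QR code. Assumes the third-party qrcode
# package; the key bytes and label prefix below are placeholders.
import hashlib
import qrcode

pubkey = b"...the broadcaster's long-term public key bytes..."
fp = hashlib.sha256(pubkey).hexdigest()

img = qrcode.make("key-fingerprint:" + fp)
img.save("backpage_fingerprint.png")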

By contrast, broadcasting could theoretically solve the availability problem for everyone. If anyone can broadcast a message and have it be received by everyone else then you've essentially solved the problem. The trouble is the efficiency. That's just the nature of broadcast. NBC broadcasting TV to millions of households who aren't watching it is an enormous waste of radio spectrum but it's a sunk cost (at least until the FCC reallocates more of their spectrum). You can even do the same thing without a broadcast tower, it just has the same lack of efficiency. It's simple enough to have every node regularly tell every other node how to contact it but it's not very efficient or scalable.

> Or... we are admitting that trust is inherently asymmetrical because of course it is! Nobody trusts everyone equally. The question then is whether people need to agree at the “meta” level on some common things that they all trust, and if so how this is accomplished. Seems to me that they do otherwise cooperation becomes difficult (game theory territory).

That's probably true in an "all must agree on what protocol to use" sense but I don't think dynamic global consensus is actually required in general. The things like that which everyone has to agree about are relatively static. Meanwhile if Alice and Bob want to communicate then Alice and Bob have to agree on how to do it but that doesn't require everybody else to do it in the same way or trust the same parties.

> I wonder if we could actually define good guys in some meaningful way, like via game theory? Are they actors that tend toward cooperation in an environment of mostly cooperators?

The hard part is that the bad guys can behave identically to the good guys until they don't. So establishing a trusted identity has to be in some way difficult or expensive so that burning one would be a significant loss to an attacker.
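
One obvious way to make an identity expensive to mint, and therefore expensive to burn, is a hashcash-style proof of work bound to the public key. A toy sketch, with a difficulty value that is purely illustrative:

# Toy hashcash-style identity cost: an identity is only accepted if its public
# key plus a nonce hashes below a target, so minting (and burning) one costs
# real compute. The difficulty value is purely illustrative.
import hashlib, itertools

DIFFICULTY_BITS = 20
TARGET = 1 << (256 - DIFFICULTY_BITS)

def mint_identity(pubkey: bytes) -> int:
    for nonce in itertools.count():
        h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < TARGET:
            return nonce    # publish (pubkey, nonce); anyone can verify cheaply

def verify_identity(pubkey: bytes, nonce: int) -> bool:
    h = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < TARGET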

> The goal is just to build a system where the cost of an attack is so high

Right, of course. The trouble is there could be realistic DoS attacks within the capabilities of various notorious internet trolls which are legitimately hard to defend against.

> Decentralization and the devolution of power are something that lots of people want, and they’re something human beings have been trying to achieve in various ways for a very long time. Most of these efforts, like democracy, republics, governmental balance of power, anti-trust laws, etc., pre-date the Internet. Yet it never works.

I don't really agree that it never works. For all the failings of free market capitalism, it's clearly better than a centrally planned economy. The thing about functioning decentralized and federated systems is that they often work so well they become invisible. Nobody notices the *absence* of a middle man.

And it seems like the more centralized systems work even less. Look at Congress. Their approval ratings are lower than hemorrhoids, toenail fungus, dog poop, cockroaches and zombies. Say what you will about PGP, at least it's preferable to zombies.
MikedePlume [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-20 12:04:36 (6 years 9 mons 4 days 19:16:00 ago)
On Wed, 2014-08-20 at 00:56 -0400, David Geib wrote:
> ...

> I don't really agree that it never works. For all the failings of free
> market capitalism, it's clearly better than a centrally planned
> economy. The thing about functioning decentralized and federated
> systems is that they often work so well they become invisible. Nobody
> notices the *absence* of a middle man. 


This is a great conversation and I'm enjoying the way the ideas are
flowing.  This paragraph has pushed one of my buttons, so I'm weighing
in.

I agree with the failure of the planned economy experiment, but I
think the comparison with the free market needs expansion.  It's
important to emphasise that we don't actually _have_ a free market,
not as Hayek and his followers envisaged.  The potential for market
imbalances (of power, knowledge, choice and such) is too great, so we
end up with laws, against fraud, weights and measures abuse, and stuff
that is not marketable quality, and regulations, to reduce power and
knowledge imbalances.  Of course we also have deliberate imbalances,
such as immigration restrictions, to control the market in workers,
trade tariffs, to re-inforce local industry, and trade agreements, to
enhance power imbalances.

Most of these problems come out of the sheer size of states and
corporations, and most of the normal human interactions that might
protect against abuse assume relatively small groups.  A sports club,
church community, even a village, are all self managing.  Regulation
still happens, but the detection and response are (or can be)
relatively lightweight.  This doesn't work even with cities, where
everyone is a stranger, and police are required.

To bring the point home, we can consider a market as a collection of
protocols.  This conversation, or the re-decentralise thing, probably
started by assuming these protocols all work perfectly, as per Hayek.
Clearly that doesn't work.  We need rules and regulations, we need
detection and response, and the response has to have some real impact.
These are, I suspect, human things.  Humans are interacting, and humans
need to address problems.  As a direct outcome of the human model, we
might look at community size.  This depends on the facilities being
offered.  Distributed search, YaCy, for example, could have a very
large number of users.  Social networks, on the other hand, might need
very focused small communities.  I can imagine a sort of federated
facility, using something like Diaspora, where smallish groups can
share a server, but servers can talk to each other in some limited way
to allow for groups that overlap. Problems can then be resolved
through side channels and appropriate server management tools. (and a
'server' could be a collection of distributed nodes, of course)

OK, that'll do from me.  Thanks for listening.


Mike S.

David Burns [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-20 12:49:31 (6 years 9 mons 4 days 18:31:00 ago)

On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
Human societies are networks too. I think this work has political and philosophical implications inasmuch as the same information theoretic principles that govern computer networks might also operate in human ones.

If we can fix it here, maybe it can help us find new ways of fixing it there.


And networks are human societies, every node has at least one person associated with it, trying to cooperate/communicate with at least one other. But it seems like it would be easy to push the analogy too far, as custom, law, contracts, etc. are only vaguely similar to software. I would expect at least a few very interesting and annoying differences, though maybe also some surprising and useful isomorphisms.
Dave
Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-20 13:10:03 (6 years 9 mons 4 days 18:11:00 ago)
Am 20.08.2014 06:56, schrieb David Geib:
> Or the other way around for that matter. Look at the societies that 
> work best and see how they do it.

BTW: That's been the concept we followed when we came up with Askemos.

Understanding that we currently have an internet akin to some kind of 
feudal society, we asked: what came next and how did they do it?

Next came democracy (again), in terms of constitutional states. Balance of 
power, social contracts, bi- and multilateral contracts, etc.

Let's not argue that we see them eventually failing all too often. Maybe 
we can make real societies better (i.e., the governments less broken) 
once we understand how to implement it with the rigor required in 
programming.

So instead of inventing anything anew – which people would then have to 
learn, adopt and accept – we tried to map these concepts as well as we 
can into a minimal language.  Then we implemented a prototype interpreter 
for this language (BALL) to learn how this could work.

Best

/Jörg
Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-21 11:23:12 (6 years 9 mons 3 days 19:57:00 ago)
Am 21.08.2014 00:49, schrieb David Burns:

On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
Human societies are networks too. I think this work has political and philosophical implications inasmuch as the same information theoretic principles that govern computer networks might also operate in human ones.

If we can fix it here, maybe it can help us find new ways of fixing it there.


And networks are human societies, every node has at least one person associated with it, trying to cooperate/communicate with at least one other. But it seems like it would be easy to push the analogy too far, as custom, law, contracts, etc. are only vaguely similar to software. I would expect at least a few very interesting and annoying differences, though maybe also some surprising and useful isomorphisms.

That's pretty much our experience.

You don't want to push the analogy too far.  After all it *is* an analogy.  Not only would it be too complicated, we *know* there are inconsistencies at least in law. (Let alone custom!)  Which we might want to fix.

The useful isomorphism is pretty obvious.  At least to computer scientists, lawyers and ethics professionals as it turned out during the project.  And the rigor it enforces upon the programmer, when she must treat code as if it were a contract, did actually help in the end.  But it *is* a nightmare to the newcomer.

"Annoying differences" we did not find so far.  Widespread we found incomplete understanding of the actual business case.  We barely found a CS master student who could identify all the contracts of even a single trade transaction.  When we began the project I would have failed badly myself.

However I don't understand your "vaguely similar".  It seems not to be that vague.  It's just a different "machine" executing it: physical hardware or human agents.  But both are supposed to stick precisely to the rules until the software is changed.  (And both are usually buggy.)

/Jörg
Adam Ierymenko [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-22 13:51:28 (6 years 9 mons 2 days 17:29:00 ago)
I don’t think there’s anything wrong with plugging a project. Part of what this group is about is discussing various work going on in this area.

I’ve been following Ethereum for a long time, and I’m really fascinated by it. It strikes me as a step out beyond just currency for the block chain and into the realm of being able to truly define autonomous organizations, etc. I think “cryptocorps”
Stephan Tual [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-22 14:41:22 (6 years 9 mons 2 days 16:39:00 ago)
David Burns [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-22 20:30:40 (6 years 9 mons 2 days 10:50:00 ago)


On Wednesday, August 20, 2014, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

However I don't understand your "vaguely similar".  It seems not to be that vague.  It's just a different "machine" executing it: physical hardware or human agents.  But both are supposed to stick precisely to the rules until the software is changed.  (And both are usually buggy.)

I was trying to compensate for my bias by using understatement and ambiguity. But now that you challenge me, I feel obligated to try to respond. 

Has anyone written a mathematical analysis of the isomorphism, its features and limits? Custom and law typically operate by defining constraints that must not be violated, leaving agents free to pursue arbitrary goals using arbitrary strategies within those limits. Software typically provides a menu of capabilities, defined (usually) by a sequential, goal oriented algorithm, often employing a single prechosen strategy. Constraints limit software, but do not dominate the situation as in law. 
I must obey the traffic laws while driving to work. The law knows nothing about my goal. I am in charge. If/when we all have self-driving cars, traffic laws will serve no purpose, but the car has to know where I want to go, in addition to the constraints and heuristics that allow it to navigate safely there. I am still in charge, but not in control. Action in each case combines intent, strategy, resources and constraints, but the mix is different. Or maybe the level of abstraction?

I can use software to break the law, and I can use the law to break software, but it is an accident of language that I can make these statements, the meaning is not at all similar. 

I would be delighted for you to convince me that I am being too pessimistic, ignorant and unimaginative. I would prefer to be on the other side of this argument.
Cheers,
Dave


--
"You can't negotiate with reality."
"You can, but it drives a really hard bargain."
Stephan Tual [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-24 22:15:06 (6 years 9 mons 09:05:00 ago)
Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-25 11:58:19 (6 years 8 mons 29 days 19:22:00 ago)
Am 23.08.2014 08:30, schrieb David Burns:


On Wednesday, August 20, 2014, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

However I don't understand your "vaguely similar".  It seems not to be that vague.  It's just a different "machine" executing it: physical hardware or human agents.  But both are supposed to stick precisely to the rules until the software is changed.  (And both are usually buggy.)

I was trying to compensate for my bias by using understatement and ambiguity. But now that you challenge me, I feel obligated to try to respond. 

Has anyone written a mathematical analysis of the isomorphism, its features and limits?

We can't claim a mathematical analysis.  At least not of the full problem.

We did however build a programming environment to gather experience.  The limits of the system are rather tight: it is essentially a system to collect/assert proofs of the state of software agents.  The agent's code however is treated like a contract: no change, no upgrade.  The system actually starts by creating a social contract holding all the code required to boot the system.  By analogy this would be the constitution and the body of law a human inherits.


Custom and law typically operate by defining constraints that must not be violated, leaving agents free to pursue arbitrary goals using arbitrary strategies within those limits. Software typically provides a menu of capabilities, defined (usually) by a sequential, goal oriented algorithm, often employing a single prechosen strategy. Constraints limit software, but do not dominate the situation as in law.

At this point we might want to subclass "software".  Customer-grade software as you're talking about here looks like the assembly instructions that come with your furniture, not so much like law.  Both are expressed in words.

So "law-alike software" would probably a class of assertions.  Application code would be supposed to include checks for relevant assertions.

I must obey the traffic laws while driving to work. The law knows nothing about my goal. I am in charge. If/when we all have self-driving cars, traffic laws will serve no purpose, but the car has to know where I want to go, in addition to the constraints and heuristics that allow it to navigate safely there. I am still in charge, but not in control. Action in each case combines intent, strategy, resources and constraints, but the mix is different. Or maybe the level of abstraction?

I'd say: the level of abstraction.  We can't take the human intent out of the game.  (In our model, agents representing users may be free to send arbitrary messages.  Akin to no regulation and freedom of expression.)


I can use software to break the law, and I can use the law to break software, but it is an accident of language that I can make these statements, the meaning is not at all similar.

Again: what is software for you?  Can I use software to break software?  What is "to break"?

IMHO software is first and foremost an expression.  In some language. For which some interpreter exists. Which maintains some ongoing process.


I would be delighted for you to convince me that I am being too pessimistic, ignorant and unimaginative. I would prefer to be on the other side of this argument.

As a programmer, I'd say: given enough time I can program everything I can understand well enough to express it in a formal language.

The risk is that in formalizing law, we might discover inconsistencies in the law.  Too bad ;-)

/Jörg



Michael Rogers [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-09-12 17:14:40 (6 years 8 mons 11 days 14:06:00 ago)

On 19/08/14 20:52, Adam Ierymenko wrote:
> Getting to this a bit belatedly… :)

Likewise. :-)

> If you’ve been reading the other thread, we’re talking a lot about 
> trust and I’m starting to agree with David Geib that trust is 
> probably the root of it. These other issues, such as this and the
> CAP theorem, are probably secondary in that if trust can be solved
> then these other things can be tackled or the problem space can be 
> redefined around them.

I totally agree. Perhaps Tor would be an interesting example to think
about, because it's decentralised at the level of resource allocation
but centralised at the level of trust. The Tor directory authorities
are the closest thing I can think of to a Blind Idiot God: they act as
a trust anchor for the system while remaining deliberately ignorant
about who uses it and how. They know even less than ZeroTier's
supernodes, because they're not aware of individual flows and they
don't relay any traffic themselves.

> It would be significantly easier if it weren’t for NAT. NAT
> traversal demands a relaying maneuver that inherently exposes some
> metadata about the communication event taking place. But we already
> know NAT is evil and must be destroyed or the kittens will die.

NAT is the biggest and most underestimated obstacle for P2P systems.
I'm glad you're tackling it head-on.

> Good point about metadata privacy, but I think it’s ultimately not
> a factor here. Or rather… it *is* a factor here, but we have to
> ignore it.
> 
> The only way I know of to achieve metadata privacy with any
> strength beyond the most superficial sort is onion routing. Onion
> routing is inherently expensive. I’m not sure anyone’s going to use
> it for anything “routine” or huge-scale.

Onion routing will always be more expensive than direct routing, but
bandwidth keeps getting cheaper, so the set of things for which onion
routing is affordable will keep growing.

Latency is a bigger issue than bandwidth in my opinion. In theory you
can pass a voice packet through three relays and still deliver it to
the destination in an acceptable amount of time, but the system will
have to be really well engineered to minimise latency. Tor wasn't
built with that in mind - and again, the question is who's going to
pay an engineering team to build a decentralised anonymous voice
network they can't profit from?

> … that is unless someone invents something new. I have wondered if 
> linear coding schemes might offer a way to make onion routing more 
> efficient, but that would be an awfully big research project 
> that I don’t have time to do. :)

There have been some papers about anonymity systems based on secret
sharing and network coding, but nothing that's been deployed as far as
I know. In any case, they all used multi-hop paths so the bandwidth
and latency issues would remain.

> We can get most of the way there by making it at least difficult
> to gather meta-data, and by using encryption to make that meta-data
> less meaningful and transparent. There’s a big difference between
> Google or the NSA or the Russian Mob being able to know everything
> I’ve ever bought vs. them being able to know with some probability
> when and where I’ve spent money but not what on and not how much.
> The latter is less useful.

Again, I totally agree and I'm happy to see any progress towards
somewhat-more-blind, somewhat-more-idiotic internet deities.

Cheers,
Michael