We’ve had enough of digital monopolies and surveillance capitalism. We want an alternative world that works for everyone, just like the original intention of the web and net.

We seek a world of open platforms and protocols with real choices of applications and services for people. We care about privacy, transparency and autonomy. Our tools and organisations should fundamentally be accountable and resilient.


Jörg F. Wittenberger [LibreList] Re: [redecentralize] Thoughts on decentralization: "I want to believe." 2014-08-25 11:58:19
On 23.08.2014 08:30, David Burns wrote:

On Wednesday, August 20, 2014, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:

However, I don't understand your "vaguely similar".  It seems not to be that vague.  It's just a different "machine" executing it: physical hardware or human agents.  But both are supposed to stick precisely to the rules until the software is changed.  (And both are usually buggy.)

I was trying to compensate for my bias by using understatement and ambiguity. But now that you challenge me, I feel obligated to try to respond. 

Has anyone written a mathematical analysis of the isomorphism, its features and limits?

We can't claim a mathematical analysis.  At least not of the full problem.

We did, however, build a programming environment to gather experience.  The limits of the system are rather tight: it is essentially a system to collect/assert proofs of the state of software agents.  The agent's code, however, is treated like a contract: no change, no upgrade.  The system actually starts by creating a social contract holding all the code required to boot the system.  By analogy this would be the constitution and the body of law a human inherits.
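To make the "code as contract" idea concrete, here is a minimal sketch (not the actual system described above; the class and method names are illustrative assumptions): the agent's code is fixed at creation time, and any later attempt to execute modified code is rejected by comparing against the hash recorded when the contract was made.

```python
import hashlib

class ContractAgent:
    """Hypothetical sketch: agent code treated as a contract
    ("no change, no upgrade")."""

    def __init__(self, code: str):
        self.code = code
        # The hash plays the role of the signed contract.
        self.contract_hash = hashlib.sha256(code.encode()).hexdigest()

    def verify(self, code: str) -> bool:
        """Check that the code offered for execution is the contracted code."""
        return hashlib.sha256(code.encode()).hexdigest() == self.contract_hash

agent = ContractAgent("def act(msg): return msg.upper()")
assert agent.verify("def act(msg): return msg.upper()")      # unchanged: accepted
assert not agent.verify("def act(msg): return msg.lower()")  # modified: rejected
```

Under this reading, an "upgrade" is not a mutation of an existing agent but the creation of a new agent under a new contract.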

Custom and law typically operate by defining constraints that must not be violated, leaving agents free to pursue arbitrary goals using arbitrary strategies within those limits. Software typically provides a menu of capabilities, defined (usually) by a sequential, goal oriented algorithm, often employing a single prechosen strategy. Constraints limit software, but do not dominate the situation as in law.

At this point we might want to subclass "software".  Customer-grade software as you're talking about here looks like the assembly instructions coming with your furniture, not so much like law.  Both are expressed in words.

So "law-alike software" would probably a class of assertions.  Application code would be supposed to include checks for relevant assertions.

I must obey the traffic laws while driving to work. The law knows nothing about my goal. I am in charge. If/when we all have self-driving cars, traffic laws will serve no purpose, but the car has to know where I want to go, in addition to the constraints and heuristics that allow it to navigate safely there. I am still in charge, but not in control. Action in each case combines intent, strategy, resources and constraints, but the mix is different. Or maybe the level of abstraction?

I'd say: the level of abstraction.  We can't take the human intent out of the game.  (In our model, agents representing users may be free to send arbitrary messages.  Akin to no regulation and freedom of expression.)

I can use software to break the law, and I can use the law to break software, but it is an accident of language that I can make these statements, the meaning is not at all similar.

Again: what is software for you?  Can I use software to break software?  What is "to break"?

IMHO software is first and foremost an expression.  In some language. For which some interpreter exists. Which maintains some ongoing process.

I would be delighted for you to convince me that I am being too pessimistic, ignorant and unimaginative. I would prefer to be on the other side of this argument.

As a programmer, I'd say: given enough time I can program everything I can understand well enough to express it in a formal language.

The risk is that in formalizing law, we might discover inconsistencies in the law.  Too bad ;-)