We’ve had enough of digital monopolies and surveillance capitalism. We want an alternative world that works for everyone, true to the original intention of the web and the net.

We seek a world of open platforms and protocols with real choices of applications and services for people. We care about privacy, transparency and autonomy. Our tools and organisations should fundamentally be accountable and resilient.


Jörg F. Wittenberger [LibreList] Re: [redecentralize] Spring of User Experience 2014-03-03 14:29:51
For your pleasure an anecdote...

On 01.03.2014 at 00:53, Paul Frazee wrote:
Is anybody familiar with novel approaches to security UX that you might share? I'd enjoy some anecdotes about what's worked.

On Fri, Feb 28, 2014 at 4:46 PM, Ximin Luo <infinity0@pwned.gg> wrote:
Telegram's justifications for their security have basically been "prove me wrong". In fact, they have been proven

First let me say: it depends a lot on your definition of "worked"; this is how it worked out, even though not always as intended:

In contrast to the mentioned "prove me wrong" attitude, our project began with the proof of a security property.  We then built a system which abides by a rule set we could prove secure. Let me share the surprises once we had users...

The problem we tackled relates to permission delegation. It is easy to see that any system with an administrative super user is prone to corruption: the administrator can easily impersonate any user.  No matter how much crypto you add, there remains a point where you must trust your admin.  So the first "interesting" result: an incorruptible system must start with pair-wise symmetric permissions (independent of how permissions are represented).  Next: we know there are inalienable rights in the real world.  To model those correctly, we must prove that no operation is able to transfer _all_ of those permissions away from an owner.

That's it already: any system which allows wholesale (or transitive) permission transfer is corruptible, even if we do not yet know how to exploit the vulnerability.  (Most databases document how the admin can set up another admin account. That's precisely what must be proven to be impossible. Another example would be X.509 sub-certificate authorities: the criterion of being "incorruptible" would simply forbid sub-CAs. Period.)
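As a toy illustration of the corruptible pattern described above (the names are hypothetical, not taken from any real database): an admin role that can mint further, equal admin roles amounts to wholesale, transitive permission transfer.

```python
# Toy sketch of the corruptible pattern: a super user who can create
# further super users. Nothing here refers to a real DBMS; the
# permission names are made up for illustration.

ADMIN = {"read", "write", "grant_all"}  # "grant_all" is the escape hatch

def create_account(creator_perms: set, new_perms: set) -> set:
    """Create an account with `new_perms`, if the creator is an admin.

    Wholesale transfer: nothing stops new_perms == creator_perms.
    """
    if "grant_all" in creator_perms:
        return set(new_perms)
    raise PermissionError("only admins may create accounts")

# The documented "feature": an admin sets up another, equal admin.
shadow_admin = create_account(ADMIN, ADMIN)
assert shadow_admin == ADMIN  # the full permission set was duplicated
```

The system still "works" day to day, which is why the flaw is easy to accept: the exploit is a legitimate, documented operation.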

Left with little choice among existing systems, we built our own, in which permission delegation always transfers at most a strict subset of the permissions a user already has.
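The strict-subset rule can be sketched in a few lines. This is a minimal illustration of the invariant, not the actual Askemos implementation; the names and representation (permissions as a frozenset) are assumptions for the example.

```python
# Sketch of strict-subset delegation: a grant may only transfer a
# *proper* subset of the delegator's permissions, so no single
# operation can hand over everything an owner holds.

class DelegationError(Exception):
    pass

def delegate(owner_perms: frozenset, requested: frozenset) -> frozenset:
    """Grant `requested` permissions out of `owner_perms`.

    Rejected unless `requested` is a strict (proper) subset, which
    guarantees the owner always retains at least one permission.
    """
    if not requested < owner_perms:  # '<' on sets tests proper subset
        raise DelegationError("may only delegate a strict subset")
    return requested

alice = frozenset({"read", "write", "delegate"})
bob = delegate(alice, frozenset({"read"}))  # fine: proper subset
try:
    delegate(alice, alice)                  # wholesale transfer
except DelegationError:
    print("blocked: wholesale permission transfer")
```

Note that the check also rules out transitive escalation: whatever Bob later delegates is a strict subset of a strict subset, so the original owner's full set can never be reassembled elsewhere.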

First we met skeptics who simply claimed that "such a system will never work" and "the administrator is there for a reason".  Those we could silence by demonstrating that the implementation was at least usable.

Then we added "users": programmers & students. We wanted them to be creative, to build some applications on top, and to make the step from "technically usable" to "usable by end users" (those whom we don't want to bother with any proof, even when the math is simple enough to be understood by an 8th grader).

First surprise: after setting up their development environment, they forgot about permission handling to the extent that they never thought about adding a real user interface for it at all.  For years, that is.  Once done right, it worked for them.

Second surprise: a manager (from a partner company) was enthusiastic in the beginning about having a system which can ensure the absence of impersonation.  After all, that's the foundation for both individual responsibility and freedom. Once some application prototypes were built, the manager learned that such apps leave little room to exert coercive power over users. So he had the programmers build backdoors into the users' apps.  He still could not break the permission control, but he could circumvent it using the broken apps. (Until the first source code audit, at least.  But that would presumably be years in the future.)

Moral of the story: the well-known problem of a chain being only as strong as its weakest link.

Best Regards

askemos.org - A S(ch)KEMatic Operating System