[sugar] datastore use cases for activities

Michael Stone
Thu Sep 27 18:08:51 EDT 2007


Bert,

The executive summary is that our data model is not currently expressive
enough to record the information necessary to secure the integrity and
confidentiality of the user's data in the use cases you envision.

In the short run (i.e. up to and including FRS), Walter says that he
thinks we should provide a capability, available only to selected signed
activities, that partially compromises our ability to protect the user's
data in exchange for the freedom of access that is necessary to satisfy
a given use case.

He further suggests that we schedule a "Data Model Summit", to be held
shortly after FRS, at which we can hash out a solution to the raft of
problems we have discovered in the current model.

Below, I have included the detailed reasoning that explains, from my
perspective, what the current difficulties are.

Michael





The Intent of Bitfrost
----------------------

Bitfrost presumes that software is generally benign and that
malicious software will co-opt benign software to fulfill its author's
malicious intent.

The main goal of Bitfrost is therefore to limit the damage that can be
done by exploiting benign software, by leading the authors of that
software to shed capabilities that are unnecessary to its purpose.

The kinds of harm that Bitfrost suggests malicious authors can try to
inflict on XO users are:

  a) damage to the XO hardware
  b) damage to the structural integrity of the XO software
* c) violation of data integrity
* d) violation of privacy and of confidentiality
  e) damage to the structural integrity of the network

For our present discussion, harms (c) and (d) are the relevant ones
to examine.




The Design of Our Data Model
----------------------------

Implicit in the design of Sugar is the sense that frozen activity
sessions, as Tomeu has called them, have some sort of information domain
in which they may freely be read and a separate, perhaps overlapping,
domain in which they may be modified.

Some data, for example SSH private keys and confidential diary entries,
have read and write scopes that are as small as possible: such data
should only be accessible by explicit direction of the user.

Other data (e.g. a Paint instance that has been explicitly shared with
all and sundry on the mesh) live in a very broad read scope and a fairly
broad write scope. (The write scope is temporally limited, because we
currently only support synchronous "activity sharing", but, at any given
moment of "sharing", the write-capable scope contains all actors on the
mesh.)

So far as I know, no design presently exists for read- or write-scopes
that discriminate between machine-local actors.
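To make the distinction concrete, here is a tiny, purely illustrative
Python sketch of the model as I understand it. None of these classes or
names exist in the datastore or Bitfrost code; they are hypothetical.

  # Illustrative sketch only -- not the actual datastore or Bitfrost API.
  # A frozen activity session is modelled as data plus a read scope and a
  # (possibly different, possibly overlapping) write scope.
  from dataclasses import dataclass, field

  @dataclass(frozen=True)
  class Principal:
      """An actor that may hold access rights: the owner, a buddy on
      the mesh, or (not yet representable) a machine-local process."""
      name: str

  @dataclass
  class FrozenSession:
      """A frozen activity session with separate read and write scopes."""
      title: str
      read_scope: set = field(default_factory=set)
      write_scope: set = field(default_factory=set)

      def may_read(self, who):
          return who in self.read_scope

      def may_write(self, who):
          return who in self.write_scope

  owner = Principal("owner")

  # SSH keys or a diary entry: scopes as small as possible (owner only).
  diary = FrozenSession("diary", read_scope={owner}, write_scope={owner})

  # A Paint session shared with the mesh: a broad read scope, and, while
  # the synchronous share lasts, an equally broad write scope.
  mesh = {Principal("buddy-%d" % i) for i in range(3)}
  paint = FrozenSession("paint",
                        read_scope={owner} | mesh,
                        write_scope={owner} | mesh)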




Our Choice of Default Access Scopes
-----------------------------------

I interpret Walter's remarks in 

  http://lists.laptop.org/pipermail/sugar/2007-August/003230.html

in the light of the "Bitfrost Intent" discussion to imply that 

* the DEFAULT read and write scopes for user-created data are AS SMALL
  AS POSSIBLE.




I'll let that sink in for a little while.




(Note 1: Walter states that I'm reading too much into his words here;
however, he agrees that we lack representations for policies more
permissive than pervasive secrecy and less permissive than fully
public.)

(Note 2: Walter also points out that the as-yet unimplemented
scope-specific "bulletin boards" could be used to provide suitably sized
access scopes; I agree, with some reservations.)
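To make Note 1 concrete, the following sketch (continuing the purely
hypothetical model above) shows the only two policies we can currently
express for a new entry; everything in between is unrepresentable.

  # Illustrative sketch only; "owner" and "mesh" are hypothetical names.
  def default_scopes(owner):
      # The default for user-created data: read and write scopes as
      # small as possible -- the owner alone.
      return {owner}, {owner}

  def fully_public_scopes(owner, mesh):
      # The other expressible extreme: explicitly shared with everyone
      # currently on the mesh.
      return {owner} | set(mesh), {owner} | set(mesh)

  # A policy in between -- say "readable by my study group, writable
  # only by me" -- has no representation in the current model.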




Conclusions
-----------

No one doubts that Bitfrost is a specification of a user-controlled
digital restrictions management system; i.e. it defines and attempts to
enforce an access-control logic whose principals are groups of related
OS processes and whose resources are primarily graphs of files, and in
which the facts necessary to prove that an access check is permissible
are ultimately asserted by the XO user.

The difficulty that we're encountering here is two-fold:

D1) We have, thus far, implemented a *minimally* expressive logic.
    (Compare to the richness of a logic like Soutei [1] to get an idea
    of what we're missing.)

D2) As I suggested above, we appear, to me, to have decided that the
    default scoping of user-created data should be maximally restrictive.

The problem with (D1) is that we have no technology for expressing
scopes that contain more local processes than the "only by direct and
explicit user consent" scope provides.
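
To hint at what a richer logic could express (far simpler than, but in
the spirit of, Soutei), here is an illustrative sketch; the facts,
groups, and rules are all hypothetical, not a proposal for an API.

  # Illustrative sketch only: an access check driven by facts that the
  # XO user has asserted, over principals and resources.
  from dataclasses import dataclass
  from typing import Optional

  # Facts ultimately asserted by the XO user.
  FACTS = {
      ("member", "study-group", "buddy-1"),
      ("trusted-local-activity", "journal-process"),
  }

  @dataclass
  class Entry:
      title: str
      scope_group: Optional[str] = None   # e.g. "study-group", or None

  def may_read(principal, entry):
      # Rule 1: the owner may always read their own data.
      if principal == "owner":
          return True
      # Rule 2: members of a user-named group may read entries the user
      # has scoped to that group.
      if entry.scope_group is not None and \
              ("member", entry.scope_group, principal) in FACTS:
          return True
      # Rule 3: a machine-local process the user has marked as trusted
      # may read -- exactly the kind of local scope that (D1) says we
      # cannot express today.
      return ("trusted-local-activity", principal) in FACTS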

The problem with (D2) is that it means there will be essentially no
data on which a useful and efficient implementation of the (to my mind,
incredibly appealing) higher-order activities you propose would be
permitted to operate.


Summary
-------

My security position is that (D2) cannot safely be revised without
fixing (D1); i.e. relaxing the default scopes while our logic remains
this inexpressive would leave us unable to prevent harms (c) and (d).


References
----------

[1]: http://okmij.org/ftp/Prolog/Prolog.html#Soutei




