[Sugar-devel] GSoC proposal status
me at jvonau.ca
Tue Feb 23 05:59:56 EST 2016
> On February 22, 2016 at 1:32 AM Tony Anderson <tony_anderson at usa.net>
> Hi, Jerry
> When I am talking about Sugar, I am referring to 0.106. You will find
> ds-backup.py and ds-backup.sh in /usr/bin. The shell script determines
> whether the
> schoolserver is registered and connected and whether a backup is needed
> (> 24 hours). This script is apparently executed via
> /usr/lib/systemd/system/ds-backup.service. The rsync is accomplished in
> function rsync_to_xs in ds-backup.py. All of this seems well integrated
> into Sugar.
I take it the statements above are for the benefit of others; I
already understand how it works.
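For readers following along, the >24-hour check that ds-backup.sh performs can be sketched roughly like this in Python. The stamp path and function names here are illustrative assumptions, not the actual ds-backup code:

```python
import os
import time

# Hypothetical sketch of the ds-backup.sh age check: only back up if the
# last-backup timestamp file is missing or older than 24 hours. The path
# below is an assumption modeled on the XO layout mentioned in the thread.
BACKUP_STAMP = "/home/olpc/.sugar/default/ds_backup-done"
BACKUP_INTERVAL = 24 * 60 * 60  # 24 hours, in seconds

def backup_needed(stamp_path=BACKUP_STAMP, now=None):
    """Return True if there is no stamp or it is older than 24 hours."""
    now = time.time() if now is None else now
    try:
        last = os.path.getmtime(stamp_path)
    except OSError:
        return True  # never backed up (or stamp unreadable)
    return (now - last) > BACKUP_INTERVAL
```

The real script additionally checks that the XO is registered and that the schoolserver is reachable before attempting the rsync.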
> I don't have a school server with live backup here but my recollection
> is that 3.2.1 last year in Rwanda made snapshots and that they were
> identified by a date field.
How is ds-backup helpful to all consumers of Sugar? Non-XO users (SoaS) may
not have installed the olpc repo, where this rpm lives.
> I assume the ui interface you mention is the schoolserver gconf
> settings. In any case, the issue is no more complicated than where the
> 'nick' is stored.
Sort of: the 'nick' is stored as a user setting (was GConf, now GSettings).
However, there is a visual UI field in "About Me" where a user can change
his or her 'nick' at will, though it is populated with information gathered
at first boot.
> The user interface is to register the XO and then one can observe that
> the XO via network settings. (a view of the gconf setting).
Registration populates the "Collaboration Server" field. There is no place
within the UI to alter the 'backup_url' that ds-backup.py uses; one
has to "re-register". Registration itself has flaws: it requires name
resolution of 'schoolserver', which is fine if the schoolserver is in full
control of the network, but that is not always possible.
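To illustrate the name-resolution fragility: registration simply expects the bare name 'schoolserver' to resolve on the local network. A minimal sketch of that lookup, assuming nothing beyond the Python standard library:

```python
import socket

# Registration depends on the bare hostname 'schoolserver' resolving.
# This helper returns None when resolution fails, e.g. on networks where
# the schoolserver does not control DNS. Illustrative code, not Sugar's.
def resolve_schoolserver(hostname="schoolserver"):
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None
```

On a network the schoolserver does not control, this lookup simply fails, and registration fails with it.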
> The backups
> are stored in /library/users in a directory named from the XO
> serial-number. I don't believe there is any specific software to support
At one point there was, via xs-moodle, but that hasn't worked since a change
in the datastore way back when (0.86?). This just renders the backups less
useful to the end users, with no way to restore the data easily, leading me
to believe the sole purpose of the backups is to troll through the data.
> The idmgr (create_user) sets up a public/private key pair for
> authentication. A timestamp is stored in /usr/olpc/.sugar/defaults to
> record when the most recent backup was made.
> "You are assuming that ds-backup is used. There is now the builtin backup
> usbdrive in the controlpanel, I see no reason that functionality couldn't
> be reused with a backend that is not a usbdrive but some network based
> storage(GSOC task?). Kind of reminds me of the "Example Workflow for
> Via-School-Server Mode" where the connection was a webdav share on the
> 'schoolserver' Come to think of it that is more of a local/remote journal
> layout than a backup service but I'm just using it as an example of a
> network filesystem.
> From your earlier description of your preferred workflow for the
> maybe this is what you are seeking? Not as is but in a general sense?
> the pictures make a good visual representation of the local/remote
> journals and moving objects between them via "CopyTo". Think to be more
> effective there should be a "MoveTo" (ArchiveTo?) in the dropdown menu
> should this move forward.
> At best this 'builtin' is an independent feature. I am not sure why a
> feature in the control panel is considered more built in than the
> registration option.
> My hunch is that the Sugar implementers were not even aware of the
> ds-backup capability. I suspect this 'builtin' arose because developers
> were using
> usb drives to backup and restore the datastore.
Both originate from the same source tar file, with control panel components
being optional to the core operation of Sugar; third-party components like
switch-desktop can be dropped into a known framework within the control
panel. The current backup came out as a neutered version of what was
originally used in Dextrose-based deployments, with the user interaction
moved from the frame to the control panel and the schoolserver backend
removed.
Remember, ds-backup is unsuitable for places where there is no schoolserver,
or where a schoolserver cannot be used for whatever reason. Let's not
omit the teaching of good computer habits: making backup copies of your
work on your own. It wasn't a bad first step; it just needs a revised
schoolserver backend added back.
> In Rwanda, I have used a
> simple backup.sh and restore.sh one-liner that simply copies the
> datastore to usb and copies it back.
More scripting in place of a UI feature, but at least it teaches the CLI,
so it is not without merit.
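Those backup.sh/restore.sh one-liners just copy the datastore directory to a USB stick and back. A hedged Python equivalent, where the datastore path is the conventional XO location and is an assumption here:

```python
import os
import shutil

# Conventional XO datastore location -- an assumption; adjust as needed.
DATASTORE = "/home/olpc/.sugar/default/datastore"

def backup_datastore(usb_mount, datastore=DATASTORE):
    """Copy the datastore onto a mounted USB drive (backup.sh analogue)."""
    dest = os.path.join(usb_mount, "datastore-backup")
    shutil.copytree(datastore, dest, dirs_exist_ok=True)
    return dest

def restore_datastore(usb_mount, datastore=DATASTORE):
    """Copy the saved datastore back from the drive (restore.sh analogue)."""
    src = os.path.join(usb_mount, "datastore-backup")
    shutil.copytree(src, datastore, dirs_exist_ok=True)
    return datastore
```

As with the shell one-liners, this is a whole-datastore copy, not the per-object granularity discussed later in the thread.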
> The 'workflow' I am referring to is to enable a user to maintain a
> remote repository of his/her Journal and to provides means to keep
> the local copy as a cache of work-in-progress.
Yes I understand.
> The move to/from is as
> simple as using the existing 'keep' selection.
Sorry, I don't see 'keep' anywhere in 0.107. When does this become
available, after registration maybe? Maybe I'm a bit confused; could you
point it out for me please? I'm not being sarcastic, I'm really missing
something in the message you are trying to convey.
> The rest is done
> behind-the-scenes by software and requires no user action. User
> intervention is only needed to manage the local store (the school server
> is assumed to have enough capacity).
> This enables control with the granularity of individual Journal objects
> - not complete datastores.
I agree. The section '[UPLOAD] Choose an entry to upload to the
school-server', more than halfway down the URL I posted, shows a Journal
object being copied much like how one would copy an object to a USB drive or
"my documents" using the "CopyTo" function. Note the schoolserver icon in
the frame alongside a USB drive and my documents. Please keep in mind this
was before the current web services came into production and was testing
the connection to and from the schoolserver. What was missing in the mockup
was a MoveTo, ArchiveTo, ShareTo, or however you want to label the action in
the drop-down menu. Just pointing out that once you have a target for the
alternate datastore, life becomes much easier.
> Certainly that builtin workflow could be adapted to a school server. A
> USB drive is essentially a directory (root to the usb). It would be
> trivial to create a corresponding directory on a school server. However,
> the screenshots indicate a more complex interface than using 'keep'.
Not sure I understand what you mean; can you elaborate a bit more? If you
are referring to the entering of passwords, yes, that needed more work at
the time and should be done in the background, but the image is from
before (2012) the time
> A second question is how to share Journal objects with other users. This
> description seems to ignore the facilities already in place. A user
> should be able to send a file by existing collaboration support. At the
> moment this is done on a case by case basis using activities which
> handle a give mime_type and collaboration. There is no need to
> distinguish between transfers by mesh or school server. If an XO is
> connected to the school server, collaboration is handled through the
> server (issues with tubes aside). This is essentially invisible to the
> user - the user creates a connection and then uses the
> neighborhood/group views to join and communicate.
Let's keep the sharing of objects apart from backups; the processes are
sort of related, but that muddies the waters. However, I ported the patched
rpm from XS-6 to the XSCE for testing use a while ago. While it currently
installs, should it fail to install in the future I will not be updating it.
The rpm should be retired, or updated if somebody wants to support or
maintain that aspect on the XSCE.
> I suggested OwnCloud as a purpose built environment for sharing files
> between XOs connected to the same school server (or 'cloud'). OwnCloud
> is already available in XSCE but would need appropriate sysadmin
I welcome and await your additions to the XSCE project to support this
workflow. I'll look at what you have so far and figure out with Tim &
George the best way forward.
> This mechanism has the side benefit of giving users a
> purpose-built control over which files in their 'journal' are to be
> shared with which other users.
For the benefit of others, can you explain how?
> A more difficult question is how this workflow is migrated from a
> serial-number system to a username system (implied by SOAS and James
> Cameron's Sugar on Ubuntu). As an example, if the Journal 'bundles' are
> stored in OwnCloud by username, then there needs to be a mapping from XO
> serial number to username.
You have a blank slate; use the 'nick' as the OwnCloud username to start
testing with for now. The 'nick' should be under $HOME/.<path>/ as a
per-user setting, so a multi-user login should map nicely.
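As a starting point for that testing, the 'nick' would likely need light sanitizing before it can serve as an OwnCloud username. A hypothetical helper, not existing Sugar or XSCE code:

```python
import re

# Derive an OwnCloud-safe username from the Sugar 'nick'. The character
# whitelist and fallback name are assumptions for illustration only.
def owncloud_username(nick):
    """Lowercase the nick and replace unsafe characters with '_'."""
    name = nick.strip().lower()
    name = re.sub(r"[^a-z0-9._-]+", "_", name)
    return name or "sugar_user"
```

Because the nick lives in the per-user settings, each login on a multi-user machine would map to its own OwnCloud account this way.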
> In many deployments the laptops (XOs) are used by multiple users through
> the day. Having each user under their own username and not 'olpc' would
> help greatly by enabling existing Linux methods to be used. Each user on
> an XO would have their own home directory
> (/usr/home/username/.sugar/defaults etc.).
That is not directly Sugar-related; rather, rework olpc-dm to allow logins
on the XO, just like how Sugar behaves on Fedora once fully installed.
> However many of the folders in /usr/home/olpc would need to be moved to
> common space (e.g. Activities, Library).
That is a limitation of using Sugar bundles (.xo files) to install the
activities: that method installs the activities in the user's home
directory. I could envision installing to a "system user" and then linking
upon user creation as a workaround, though. However, both Fedora and Ubuntu
package activities using their native package formats, pulling in any
required dependencies and installing system-wide, thus available to all
users. Using bundles opens up a whole can of worms, requiring the bundle to
have all its dependencies met by the underlying OS and/or to carry
libraries in the bundle. Having this harmonized across architectures,
distributions, and releases becomes quite a dance and is a subject worthy
of its own thread.
Tossing this out for discussion, though: the activity.info file is used to
define bits for the activity (more or less metadata), so why not extend it
to define external libraries needed for operation, as proposed in 2009? Who
better to know what the dependencies are than the activity author? How to
define and pull in the dependencies would need further discussion and work.
Once defined, all distributions could consume this information to ensure
the dependency chain is met for packages released by the distributions. In
the case of bundles, installing the dependencies at bundle install time
could be handled by plugins for the respective distributions, but the data
needs to be defined first, and for the most part it already exists within
the file that defines the distro's activity package.
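To make the idea concrete, here is a hypothetical extension of activity.info with a 'depends' key (not part of the current bundle spec) and a sketch of how a distro plugin might read it. The key name, sample values, and parsing convention are all assumptions for discussion:

```python
import configparser

# A made-up activity.info carrying a hypothetical 'depends' key, with
# dependency names separated by semicolons. Package names are examples.
SAMPLE = """
[Activity]
name = Paint
bundle_id = org.laptop.Paint
activity_version = 1
depends = python3-gi; gstreamer1
"""

def read_depends(text):
    """Return the declared external dependencies as a list of names."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    raw = cp.get("Activity", "depends", fallback="")
    return [dep.strip() for dep in raw.split(";") if dep.strip()]
```

A distro-specific plugin could then map each declared name onto its own package namespace (rpm, deb, etc.) at bundle install time.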
> In this case, one could also
> imagine a common equivalent of the Documents folder which could be
> viewed in the Journal and permit files to be moved to/from the Journal
> using the existing mechanisms.)
Yes, I can, but the problem I see is that some "features" die at the design
stage, where they are discussed to death and never become integrated, yet
exist in some form in the code or in a fork. Still, things look hopeful
given all the cross-distribution work.