[sugar] Automated testing of activities
Peter Korn
Peter.Korn
Sun Jul 22 15:30:32 EDT 2007
Hi Kent,
First, sorry for taking a while to respond to this thread. I think
Fernando has already eloquently stated a good set of priorities for us:
what is most important is that the basic Sugar interface and key
text-based applications are accessible to at least a basic quality
level; games, high-quality speech, and the numerous other things we
have in the West are nice, but they are not the bar we should be
striving toward at this time.
To your questions:
> When you say "accessibility" you mean "support for vision-impaired
> users"? Are there other accessibility requirements?
> What in particular are the requirements? Support for a screen reader?
> Large fonts? Zoomable screen? Will there be a screen reader application
> for the system? Where is it coming from, and how will it interact with
> activities?
>
A good place to start is http://wiki.laptop.org/go/Accessibility, which
contains some background on the needs as well as an enumeration of the
key things we need to do to support various disabilities (physical
impairments, the full spectrum of vision impairments, hearing
impairments, etc.). There is also an accessibility section in the
Design Fundamentals page at:
http://wiki.laptop.org/go/OLPC_Human_Interface_Guidelines/Design_Fundamentals#Accessibility
Fundamentally for vision impairments (short of near or total blindness),
we need either some level of theming (with a high contrast theme as we
have in desktop GNOME), or a zoomable interface. This includes scaled
fonts, scaled images, etc. For severe vision impairments we need an
add-on screen magnifier, and for blindness we need an add-on screen
reader. For both of these, we need API hooks implemented in the
applications that allow the add-on assistive applications to get the
information they need - this is ATK, which is already in GNOME/GTK+,
plus some version of AT-SPI for inter-process communication. We already
have a GNOME application, Orca, that does a good job of screen reading.
It also supports screen magnification, though we are looking for
smoother magnification, with more features to come with a compositing
window manager. Beryl has some nice magnification features for
accessibility, and that remains a likely path for future magnification
on the desktop.
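To make the ATK piece concrete, here is a minimal sketch (in PyGTK, as
activities use today) of what an application has to do; the button and
the strings are invented purely for illustration:

    import gtk

    # A hypothetical icon-only button in an activity's toolbar.
    button = gtk.Button()
    button.set_image(gtk.image_new_from_file("chat-icon.svg"))

    # Every GTK+ widget already carries an ATK accessible object; an
    # add-on screen reader reaches it over AT-SPI, so the activity only
    # has to give it a sensible name and description.
    accessible = button.get_accessible()
    accessible.set_name("Start chat")
    accessible.set_description("Opens a chat session with a friend")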
For other disabilities, such as severe physical impairments, we also
take the add-on application approach: an application that lets someone
generate input by means other than a keyboard or mouse. This is more
than simple keyboard substitution; for an effective and efficient user
interface for someone with a severe physical impairment, you really
want an application like the GNOME On-screen Keyboard (GOK), which
extracts the user interface elements and places them on a special,
scanning keyboard for rapid selection by someone who can only move
their shoulder or make some other simple muscular gesture. Like Orca,
GOK uses ATK/AT-SPI, and so it is an option for OLPC once we figure out
the IPC mechanism to use.
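For a sense of what GOK does on today's GNOME desktop, here is a rough
sketch of the kind of AT-SPI query it makes to harvest actionable
widgets, using the pyatspi binding Orca uses; since the IPC transport on
OLPC is still an open question, treat this purely as illustration:

    import pyatspi

    def collect_buttons(node, found):
        # Walk an application's accessible tree and gather the push
        # buttons; GOK places their names on its scanning keyboard.
        for child in node:
            if child is None:
                continue
            if child.getRole() == pyatspi.ROLE_PUSH_BUTTON:
                found.append(child.name)
            collect_buttons(child, found)

    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is None:
            continue
        buttons = []
        collect_buttons(app, buttons)
        print app.name, buttons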
> How does the accessibility interact with the intent to support
> localizable or localization-free activities, in large part by leaving
> text out of the interface entirely? What is a textless application
> supposed to do in this environment?
> What parts of the system are going to have to comply with this
> requirement? The mesh view, for example? The clipboard? What about, say,
> drawing activities? The record application? Games?
>
For blind users with text-to-speech, we will need to associate some
sort of text representation with our icons, or at least an "earcon" - a
short audio WAV file. Since images come from files with filenames, we
have a quick-and-dirty text string we can use for these images. As
Fernando pointed out in his reply, even English text (so long as it
sounds different from the English text for other icons) is better than
no text for someone who doesn't otherwise speak English. It is also a
lot smaller than keeping a bunch of WAV files in flash...
Please note: I am not suggesting we need to render text to the screen;
just that we have text associated with every image a user can interact
with (in every "important" application), text which can be retrieved by
our screen reader and then spoken via text-to-speech. The GNOME
On-screen Keyboard also uses this text, by the way. And automated
testing will find that text useful too.
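A cheap way to guarantee that every interactive icon has something Orca
can speak is to fall back on a string derived from the image's filename
when no explicit label was given. A sketch, again in PyGTK with
invented names rather than real Sugar API:

    import os
    import gtk

    def icon_button(image_path, accessible_name=None):
        # Build an icon-only button whose ATK name is never empty.
        button = gtk.Button()
        button.set_image(gtk.image_new_from_file(image_path))
        if accessible_name is None:
            # e.g. "activity-record.svg" -> "activity record"
            base = os.path.splitext(os.path.basename(image_path))[0]
            accessible_name = base.replace("-", " ").replace("_", " ")
        button.get_accessible().set_name(accessible_name)
        return button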
We need to make the key tasks in the mesh view accessible. It isn't
important to convey the spatial relationships of the mesh view to a
blind person; it is important that a blind person be able to find his
friends, to have a sense of the access points (how many are there?), etc.
Don't worry about drawing activities at this point. There are a number
of things we do on the desktop to make most functions of most drawing
applications accessible to most people. But please, let's focus on the
critical things first. The same applies, even more so, to games.
Regards,
Peter Korn
Accessibility Architect,
Sun Microsystems, Inc.
> Is automated testing intended for more than just battery life testing?
> If not, is it really necessary for every activity to support it? If so,
> what do you expect to accomplish? Will it actually save more than the
> amount of time taken to implement it for a given activity?
>
> What are the time constraints?
>
> The potential scope is huge...it would be nice to understand the actual
> requirements.
>
> Kent
>
>
>
> Jim Gettys wrote:
>
>> I want to give everyone a heads up to start thinking about the following
>> issues:
>>
>> 1) we need to be able to do basic automated smoke testing of activities.
>> 2) we need to support accessibility in activities. Some of you may not
>> be aware of it, but this is a non-optional requirement in some
>> jurisdictions, most of which roughly follow US government laws in this
>> area.
>> 3) we need to be able to quantify improvements/regressions in an
>> activity's energy usage, as we optimize various code in our system.
>> 4) we need to be able to automate testing of realistic workloads (or
>> play loads :-) on our systems that roughly simulate the use of a child
>> in school, so we can see how we're doing when we change the various
>> knobs we have for controlling power usage, from the backlight, to use
>> of the DCON, to blanking the screen, to suspending aggressively, etc.
>> It is also becoming possible, as our power management infrastructure
>> improves, for applications to add hints at key points that suspending
>> might be a good thing to do.
>>
>> But if we can't reproduce results, we'll be in fog, unable to see what
>> direction to go.
>>
>> We'll therefore need to be able to script applications. So long as
>> we're on an XO-1 with its fixed screen resolution *and* you don't
>> change the UI, that's not all that hard. But we expect all of you to
>> want to tune your UIs, and we also need to ensure accessibility needs
>> get met. Future systems our software runs on may have different
>> screens; this model will break down pretty quickly.
>>
>> Note that the hooks required for accessibility also make it possible
>> to script activities by name, rather than by X/Y coordinates on the
>> screen, and to wait for results; this technology can therefore remove
>> the screen-resolution dependence of such scripting. Custom widgets you
>> build will need such hooks in any case.
>>
>> We'll shortly be setting up a battery life testing infrastructure in a
>> revived Tinderbox; with the machine we have instrumented with more than
>> 20 measurement points, we can gather a great amount of useful
>> information on the behavior of our system and of individual activities.
>>
>> At some point, we'll start asking you to generate a workload for an
>> activity, which should be able to address many of the issues above.
>> More when the infrastructure work is further along.
>>
>> - Jim
>>
>>
>>
>
>
>
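On the scripting point Jim raises above: once activities expose ATK
names, a test harness built on the existing AT-SPI tooling can drive an
activity by widget name instead of by X/Y coordinates. Dogtail is one
desktop example of this; a rough sketch, with an invented activity and
button name:

    from dogtail.tree import root

    # Find the running activity by name and press one of its buttons;
    # no screen coordinates are involved, so a UI or resolution change
    # doesn't break the script as long as the names stay stable.
    activity = root.application('Chat')
    activity.child(name='Start chat', roleName='push button').click()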