[Sugar-devel] Maintenance, reviews, QA & co

Simon Schampijer simon at schampijer.de
Sun Aug 19 12:43:26 EDT 2012


Hi Sascha,

On 08/11/2012 10:53 PM, Sascha Silbe wrote:
> Dear Sugar Labs members,
>
> == Executive Summary ==
>
> The current review process isn't a good fit for the current Sugar
> contributors and causes a lot of frustration on both sides. There are
> better ways to reach the same goals. The two major changes to the review
> process I'll propose in this email are making in-depth reviews by
> independent senior developers optional and non-blocking as well as
> accepting "no ceiling" and standards compliance patches into
> mainline. Other work to address some of the goals of the previous review
> process has already been going on for quite some time.
>
>
> == Why we should rethink our current approach ==
>
> I think it's time to take a step back and reconsider our current
> approach for reviews and QA. For the past few months to years, upstream
> reviews were primarily a QA measure. While hopefully many contributors
> learned to be better developers, the focus clearly was on making sure
> that only "good" patches were merged into mainline. That's the way many
> (maybe most) community-driven Open Source projects work: Downstream
> needs to show their contribution is of value to upstream (including not
> having a negative impact on other parts of the user base) and does not
> impair maintainability.
>
> Sugar, however, currently isn't a community project: Most of the work is
> done by a small number of organisations with a clear commercial
> focus. They have deadlines breathing down their neck and need to care
> about what their customers want, not what's in the interest of the
> general user base. The paying customers are what keeps these
> organisations alive and
> the organisations in turn are what keeps Sugar Labs alive. The small
> number and similar focus of the organisations means there's not enough
> diversity to address a more general user base.

Sugar Labs is a community project. There are two major players (OLPC 
and Activity Central) with paid developers, but there are also a lot 
of contributors from the outside. Just to name a few in the Sugar 
development team: Daniel Narvaez, Gary C Martin, Benjamin Berg... Of 
course the paid staff do have more hours per day for Sugar work, but 
that should not have any impact on the review process.

> Tight deadlines and focus on the needs of paying customers doesn't
> really mix with the style of reviews done by community-driven
> projects. This regularly results in frustration on both sides.

I disagree here. A review is a review. There are two parties involved, 
a contributor and a reviewer. The contributor is interested in getting 
quick feedback about his approach. The reviewer (maintainer) is 
interested in keeping his code base clean and in getting more 
contributions from the contributor. To handle that scenario there are 
guidelines.

> We should think about what we want to achieve and how to best achieve
> it. Maybe we still want reviews to take place, but in a different
> manner. Maybe we'll just do something else and do away with reviews. But
> simply continuing the current way is not a good idea.
>
>
> == Goals ==
>
> So what do we want to achieve? Some options:
>
> 1. Few obvious mistakes: Making sure the code doesn't contain a lot of
>     mistakes that could have been easily spotted by some other developer,
>     resulting in bugs affecting the user.
>
> 2. Few bugs affecting the user: Making sure there are few regressions
>     and new features work as expected.
>
> 3. Few hard-to-fix bugs: Making sure that the risk of introducing bugs
>     that are hard to diagnose and fix (usually race conditions) is low.
>
> 4. Maintainability: Making sure that the cost of doing code changes
>     (including QA as necessary for 1. and 2.) doesn't grow in the long
>     term.
>
> 5. Better developers: Constantly increasing our own and others'
>     abilities and knowledge.
>
>
> == Means ==
>
> There are various ways to achieve one or more of the goals above:
>
> A. Eating our own dog food
> B. Private reviews done amongst colleagues
> C. Public reviews done amongst colleagues
> D. Public in-depth reviews done by senior developers
> E. Public short reviews done by senior developers
> F. Requiring public short reviews by senior developers, usually from
>     within the same organisation, before pushing changes
> G. Requiring public in-depth reviews by independent, senior developers
>     (AKA upstream maintainers) before pushing changes
> H. Manual UI/UX tests
> J. Automated tests (UI tests, unit tests, etc.)
> K. Automated code checks (e.g. pylint, pep8; sketch below)
> L. Leveraging the community of upstream components
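
To make K concrete: a minimal sketch of a pre-patch style gate,
assuming the pep8 and pylint tools are installed; the default path is
only an example, not part of the Sugar tree.

    # Run pep8 and pylint (errors only) over the given files and
    # report whether both passed.
    import subprocess
    import sys

    def run_checks(paths):
        """Return True if both pep8 and pylint pass on the paths."""
        ok = True
        for command in (['pep8'] + paths,
                        ['pylint', '--errors-only'] + paths):
            if subprocess.call(command) != 0:
                ok = False
        return ok

    if __name__ == '__main__':
        paths = sys.argv[1:] or ['src']
        if not run_checks(paths):
            sys.exit('Style checks failed; please fix before sending.')

Wired into a pre-commit hook or a buildbot, a script like this catches
the goal-1 mistakes before any human reviewer has to.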
>
>
> == Proposal ==
>
> All of the means listed above have different trade-offs; there's no
> silver bullet. Review bandwidth of senior developers is limited, manual
> tests take so much effort that only a subset can be run regularly, and
> automated tests need to be maintained alongside the code base. We need
> a good balance to arrive at a great product.
>
> My current proposal would be:
>
> I.   Do private (B) or public (C) review amongst colleagues to address
>       goal 1 (few obvious mistakes).
> II.  Do public in-depth reviews by senior developers (D), but make them
>       optional (i.e. _not_ G) and non-blocking. Addresses goals 3 (few
>       hard-to-fix bugs), 4 (maintainability) and 5 (better developers).
> III. Require public short reviews by some senior developer
>       (E). Addresses goals 1 and, to a limited extent, 2 (few bugs
>       affecting the user), 3 and 4.

This is actually how I have always seen it, especially if code from 
long-term contributors is queued to get in. The reviewer should focus 
in those cases on the approach that is taken and have a look at the 
meat of the patch. There is no need to block, for example, because of 
the language of the description. If there is time it is welcome to 
check that as well, but it is secondary. The important thing is: make 
sure to know what you can do as a maintainer. If you have the time to 
do in-depth reviews for each patch in a timely manner, great; if not, 
better to make sure you get the short review right.

> IV.  Manual UI/UX (H) testing to the extent feasible. Addresses goal 2.
> V.   Implement automated UI and system tests (J). Strongly encourage
>       contributors to run them (successfully) before posting a patch
>       upstream. Where necessary contributors should extend the test
>       suite. Routinely run the tests against mainline across a large
>       number of supported systems (different distros, different hardware
>       including VMs, XO-* and Olidata JumPC). Addresses goals 1, 2 and to
>       some degree 3.

Sure, Daniel Narvaez has been doing great work on that end with the 
buildbot and the UI tests he is building. Of course this will help to 
increase the quality.
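
As a rough illustration, here is a minimal sketch of a GNOME a11y
(AT-SPI) based check, assuming the pyatspi bindings and a running
Sugar shell; the names are illustrative, not Daniel's actual test
suite.

    # Look up a running application on the accessibility bus. An
    # automated UI test would then walk its widget tree and drive it.
    import pyatspi

    def find_application(name):
        """Search the accessible desktop for an application by name."""
        desktop = pyatspi.Registry.getDesktop(0)
        for i in range(desktop.childCount):
            app = desktop.getChildAtIndex(i)
            if app is not None and app.name == name:
                return app
        return None

    if __name__ == '__main__':
        shell = find_application('sugar')
        assert shell is not None, 'Sugar shell not found on the a11y bus'
        print('%s exposes %d top-level frames' % (shell.name,
                                                  shell.childCount))

The point is simply that the whole UI is reachable programmatically
through the accessibility bus, so regressions can be caught without
manual clicking.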

> VI.  Accept patches into mainline that are likely to increase the number
>       of contributors using Sugar themselves (A) or to increase their
>       usage of Sugar, even if the patch doesn't directly benefit the XO
>       target user base. It should not have a negative impact on the XO
>       target user base, of course. This addresses goals 2, 3, 4 and 5.

When we set up Sugar Labs the goal was to make Sugar available on 
platforms other than the XO. That goal still stands.

> VII. Work on making Sugar more modular, using upstream components and
>       standard protocols or APIs as much as possible (L), allowing users
>       to mix and match components or simply configure them differently
>       (A) and reducing the footprint of bugs. This addresses goals 2, 3,
>       4 and 5.

Using standards and upstream components has been a goal as well, so we 
just have to continue down that road.
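
As an example of what the standard formats buy us: once Activities
ship XDG desktop entries (see Sascha's point below), they can be
checked with stock tools. A minimal sketch, assuming Python's
ConfigParser; the file path and key list are only illustrative.

    # Check an activity's desktop entry for a few expected keys.
    import ConfigParser  # 'configparser' on Python 3

    REQUIRED_KEYS = ('Type', 'Name', 'Exec', 'Icon')

    def missing_keys(path):
        parser = ConfigParser.RawConfigParser()
        parser.optionxform = str  # desktop-entry keys are case-sensitive
        parser.read(path)
        if not parser.has_section('Desktop Entry'):
            return list(REQUIRED_KEYS)
        return [key for key in REQUIRED_KEYS
                if not parser.has_option('Desktop Entry', key)]

    if __name__ == '__main__':
        missing = missing_keys('my-activity.desktop')
        if missing:
            print('missing keys: %s' % ', '.join(missing))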

> AFAICT I. is already being done. With Manuel Quiñones' appointment as
> Glucose maintainer and his subsequent actions, we're evidently also
> doing III. now. OLPC-AU and OLPC are doing IV., no change there
> either. I've been working on implementing automated tests that can be
> used for V., both system tests (the sugar-datastore test suite) and UI
> tests (GNOME a11y based, not ready yet). Similarly, I've been
> working on moving to standard protocols (e.g. XDG desktop entries for
> Activities, ICCCM compliance) and upstream components
> (e.g. gnome-session).
>
> The two major changes are making in-depth reviews by senior developers
> optional and non-blocking (II.)

But we still require public short reviews by some senior developer, 
correct?

> as well as accepting "no ceiling"
> (VI.) and standards compliance (VII.) patches.

But modularity comes at a cost. This should be decided on a 
case-by-case basis.

Regards,
    Simon


