[Sugar-devel] Maintenance, reviews, QA & co
Bernie Innocenti
bernie at codewiz.org
Sun Aug 12 14:19:35 EDT 2012
Improving the development process is as important as improving the code
itself, so I think it's good to have this sort of discussion every once
in a while.
About 2 years ago I started a similar discussion on this mailing list
because I thought that our development process was slow and frustrating.
Some things have improved since then, but I feel there's still a lot of
room for further improvement.
I haven't been involved with Sugar development for quite a while, so
I can't offer any specific advice. If the Sugar maintainers are
interested, I could share some details on the Google review process
(which is really excellent, in my opinion) or on the GCC and Linux
review processes, which might be a better fit for a free software
project.
On Sat, 2012-08-11 at 22:53 +0200, Sascha Silbe wrote:
> Dear Sugar Labs members,
>
> == Executive Summary ==
>
> The current review process isn't a good fit for the current Sugar
> contributors and causes a lot of frustration on both sides. There are
> better ways to reach the same goals. The two major changes to the review
> process I'll propose in this email are making in-depth reviews by
> independent senior developers optional and non-blocking as well as
> accepting "no ceiling" and standards compliance patches into
> mainline. Other work to address some of the goals of the previous review
> process has already been going on for quite some time.
>
>
> == Why we should rethink our current approach ==
>
> I think it's time to take a step back and reconsider our current
> approach to reviews and QA. For the past months, if not years, upstream
> reviews were primarily a QA measure. While hopefully many contributors
> learned to be better developers, the focus clearly was on making sure
> that only "good" patches were merged into mainline. That's the way many
> (maybe most) community-driven Open Source projects work: Downstream
> needs to show their contribution is of value to upstream (including not
> having a negative impact on other parts of the user base) and does not
> impair maintainability.
>
> Sugar, however, currently isn't a community project: Most of the work is
> done by a small number of organisations with a clear commercial
> focus. They have deadlines breathing down their necks and need to care
> about what their customers want, not what's in the interest of the
> general user base. The paying customers are what keeps these
> organisations alive, and the organisations in turn are what keeps
> Sugar Labs alive. The small number and similar focus of the
> organisations mean there's not enough diversity to address a more
> general user base.
>
> Tight deadlines and a focus on the needs of paying customers don't
> really mix with the style of reviews done by community-driven
> projects. This regularly results in frustration on both sides.
>
> We should think about what we want to achieve and how to best achieve
> it. Maybe we still want reviews to take place, but in a different
> manner. Maybe we'll just do something else and do away with reviews. But
> simply continuing the current way is not a good idea.
>
>
> == Goals ==
>
> So what do we want to achieve? Some options:
>
> 1. Few obvious mistakes: Making sure the code doesn't contain a lot of
> mistakes that could have been easily spotted by some other developer,
> resulting in bugs affecting the user.
>
> 2. Few bugs affecting the user: Making sure there are few regressions
> and new features work as expected.
>
> 3. Few hard to fix bugs: Making sure that the risk of introducing bugs
> that are hard to diagnose and fix (usually race conditions) is low.
>
> 4. Maintainability: Making sure that the cost of doing code changes
> (including QA as necessary for 1. and 2.) doesn't grow in the long
> term.
>
> 5. Better developers: Constantly increasing our own and others'
> abilities and knowledge.
>
>
> == Means ==
>
> There are various ways to achieve one or more of the goals above:
>
> A. Eating our own dog food
> B. Private reviews done amongst colleagues
> C. Public reviews done amongst colleagues
> D. Public in-depth reviews done by senior developers
> E. Public short reviews done by senior developers
> F. Requiring public short reviews by senior developers, usually from
> within the same organisation, before pushing changes
> G. Requiring public in-depth reviews by independent, senior developers
> (AKA upstream maintainers) before pushing changes
> H. Manual UI/UX tests
> J. Automated tests (UI tests, unit tests, etc.)
> K. Automated code checks (e.g. pylint, pep8; see the sketch after
>    this list)
> L. Leveraging the community of upstream components
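
For illustration, a minimal sketch of what an automated code check (K)
could look like when wired into a test suite, so that it runs alongside
the other tests; the 'src/' path and the test name are placeholders,
not Sugar's actual layout:

import subprocess
import unittest


class StyleCheck(unittest.TestCase):
    """Fail the suite if pep8 finds any style violations."""

    def test_pep8_clean(self):
        # pep8 exits non-zero when it reports at least one violation;
        # 'src/' is a placeholder for the real source directory.
        returncode = subprocess.call(['pep8', 'src/'])
        self.assertEqual(returncode, 0, 'pep8 reported style violations')


if __name__ == '__main__':
    unittest.main()

A pylint run could be hooked up the same way, though in practice its
output usually gets filtered rather than failing the suite on every
message.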
>
>
> == Proposal ==
>
> All of the means listed above have different trade-offs; there's no
> silver bullet. Review bandwidth of senior developers is limited, manual
> tests take so much effort that only a subset can be tested regularly,
> and automated tests need to be maintained alongside the code base. We
> need a good balance to arrive at a great product.
>
> My current proposal would be:
>
> I. Do private (B) or public (C) review amongst colleagues to address
> goal 1 (few obvious mistakes).
> II. Do public in-depth reviews by senior developers (D), but make them
> optional (i.e. _not_ G) and non-blocking. Addresses goals 3 (few
> hard to fix bugs), 4 (maintainability) and 5 (better developers).
> III. Require public short reviews by some senior developer
>      (E). Addresses goals 1 and, to a limited extent, 2 (few bugs
>      affecting the user), 3 and 4.
> IV. Manual UI/UX (H) testing to the extent feasible. Addresses goal 2.
> V. Implement automated UI and system tests (J); a rough sketch of one
>    such test follows after this list. Strongly encourage contributors
>    to run them (successfully) before posting a patch upstream. Where
>    necessary, contributors should extend the test suite. Routinely run
>    the tests against mainline across a large number of supported
>    systems (different distros, different hardware including VMs, XO-*
>    and Olidata JumPC). Addresses goals 1, 2 and to some degree 3.
> VI. Accept patches into mainline that are likely to increase the number
> of contributors using Sugar themselves (A) or to increase their
> usage of Sugar, even if the patch doesn't directly benefit the XO
> target user base. It should not have a negative impact on the XO
> target user base, of course. This addresses goals 2, 3, 4 and 5.
> VII. Work on making Sugar more modular, using upstream components and
> standard protocols or APIs as much as possible (L), allowing users
> to mix and match components or simply configure them differently
> (A) and reducing the footprint of bugs. This addresses goals 2, 3,
> 4 and 5.
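
To make V. a bit more concrete: UI tests of this kind can be driven
over the GNOME accessibility bus (AT-SPI) with pyatspi. Below is a
rough sketch under that assumption; the application name and button
label are made-up placeholders, and a real test would need Sugar
running with accessibility enabled:

import pyatspi


def find_app(name):
    """Return the accessible of the running application called `name`."""
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is not None and app.name == name:
            return app
    raise LookupError('application %r not found on the a11y bus' % name)


def click_button(app, label):
    """Find a push button by its accessible name and activate it."""
    def is_match(acc):
        return (acc.getRole() == pyatspi.ROLE_PUSH_BUTTON
                and acc.name == label)
    button = pyatspi.findDescendant(app, is_match)
    if button is None:
        raise LookupError('button %r not found' % label)
    button.queryAction().doAction(0)


if __name__ == '__main__':
    # Placeholder names, for illustration only.
    click_button(find_app('sugar'), 'Start')

Scripts like this can then be run by whatever test runner we pick and,
as proposed above, against a matrix of distros and hardware.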
>
>
> AFAICT I. is already being done. With Manuel Quiñones' appointment as
> Glucose maintainer and his subsequent actions, we're evidently also
> doing III. now. OLPC-AU and OLPC are doing IV., no change there
> either. I've been working on implementing automated tests that can be
> used for V., both system tests (sugar-datastore test suite) and UI
> tests (GNOME a11y based, not ready yet). Similarly, I've been
> working on moving to standard protocols (e.g. XDG desktop entries for
> Activities, ICCCM compliance) and upstream components
> (e.g. gnome-session).
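
As a toy illustration of the desktop-entry direction: a minimal XDG
desktop entry for an activity might look roughly like the sample below,
and a sanity check for it fits in a few lines. The activity name, the
Exec line and the set of "required" keys are assumptions made up for
this example, not Sugar's (or the freedesktop.org spec's) actual rules:

import ConfigParser  # Python 2, which Sugar targeted at the time
import StringIO

SAMPLE = """\
[Desktop Entry]
Type=Application
Name=Hello World
Exec=sugar-activity helloworld.HelloWorldActivity
Icon=activity-helloworld
Categories=Education;
"""

# Keys this toy check insists on; chosen for illustration only.
REQUIRED_KEYS = ('Type', 'Name', 'Exec', 'Icon')


def missing_keys(text):
    """Return the required keys that are absent from a desktop entry."""
    parser = ConfigParser.RawConfigParser()
    parser.readfp(StringIO.StringIO(text))
    return [key for key in REQUIRED_KEYS
            if not parser.has_option('Desktop Entry', key)]


if __name__ == '__main__':
    print(missing_keys(SAMPLE))  # prints [] for the sample above

A check like this could run as part of the automated tests (V.)
whenever an activity's desktop entry changes.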
>
> The two major changes are making in-depth reviews by senior developers
> optional and non-blocking (II.) as well as accepting "no ceiling"
> (VI.) and standards compliance (VII.) patches.
>
> Sascha
> _______________________________________________
> Sugar-devel mailing list
> Sugar-devel at lists.sugarlabs.org
> http://lists.sugarlabs.org/listinfo/sugar-devel
--
_ // Bernie Innocenti
\X/ http://codewiz.org