[Systems] Infrastructure Status Report
David Farning
dfarning at sugarlabs.org
Tue Nov 3 17:50:31 EST 2009
While SoaS and Trademarks have gotten most of the attention lately,
other parts of Sugar Labs are growing and moving forward. A lot is
happening on the infrastructure side of the project, some of it
behind the scenes and some of it more public.
--Capacity growth--
The biggest challenge faced by the team is handling capacity growth.
-Machines-
The first and most visible need is additional servers. Initially we
outsourced several of our services. While cost effective, that policy
created problems because each place we outsourced to had different
support policies and system requirements.
Now we are going through a consolidation phase; it is much easier to
maintain a consistent infrastructure. Several of our services now
need to be clustered across groups of co-located machines for load
balancing and increased reliability.
We are also seeing growth rates for Activities.sl.o that are doubling
every quarter.
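To put that in perspective (a back-of-the-envelope calculation, not a
measured figure): a load that doubles every quarter grows sixteenfold
over a year.

```python
# Back-of-the-envelope: quarterly doubling compounded over a year.
growth_per_quarter = 2   # load doubles each quarter
quarters_per_year = 4

annual_growth = growth_per_quarter ** quarters_per_year
print(annual_growth)  # -> 16, i.e. a 16x increase in one year
```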
-Administrators-
The second and less visible need is for administrators to help keep
the systems alive. Sorry Bernie, we are going to have to pull you
(kicking and screaming) from sysadmin to Infrastructure team leader.
We are going to need to work on identifying and training others so
that they can take authority and responsibility for parts of the
infrastructure.
--Specific Tasks--
There are a number of specific tasks in progress which need help.
-Launchpad-
Luke is working on migrating pieces of the infrastructure to
Launchpad. There are a number of pros and cons to this.
The big win will be using the LP bug tracker. Upstream Trac (our
current bug tracker) development has stalled, making it very difficult
to maintain dev.sl.o. Other improvements will be LP Answers and LP
Blueprints. The user-facing portion of LP is very good.
Overall the LP team has been very good to work with. But I still have
a number of reservations that I hope Luke and the LP team can take
care of:
1. Integration with the rest of the Sugar Labs services, specifically
git.sl.org and translate.sl.org.
2. Ability not to get lost in Ubuntu. There are several places where
it is very easy to unwittingly exit the Sugar project and end up
wandering around Ubuntu.
3. Ability to easily get back to the rest of the *.sl.org services.
I encourage others to get involved in this project to:
1. Ensure that it is the right thing to do.
2. Work with the Sugar and LP communities to ensure that the process
is beneficial to both parties.
3. Work with the Sugar user and developer communities to ensure that
the migration goes smoothly.
-Activities.sl.org-
I am working on separating activities.sl.o from the rest of the Sugar
Labs services.
There are several reasons for separating out a.sl.o:
1. It will be easier to grant admin authority to a.sl.o without
granting admin authority to all of the SL infrastructure.
2. A.sl.o is a resource hog. By splitting it out, we can think about
scaling a.sl.o without worrying about how it will affect the rest of
the infrastructure.
3. Security. The separation will provide a fence between a.sl.o and
the rest of the infrastructure: if one part is compromised, it will
not affect the other parts.
If anyone wants to help out, there are several interesting tasks...
1. Setting up a fresh instance of a.sl.o.
2. Load balancing and HA for the PHP front end.
3. Load balancing and HA for the MySQL database.
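As a rough sketch of what load balancing the PHP front end could look
like (the hostnames, addresses, and ports below are placeholders for
illustration, not the actual Sugar Labs topology), an HAProxy
configuration along these lines would spread requests across two
identical web servers and drop ones that fail a health check:

```
# Hypothetical HAProxy sketch -- addresses/names are illustrative only
frontend aslo_http
    bind *:80
    default_backend aslo_php

backend aslo_php
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

A similar split (one writable primary plus read replicas) is the usual
starting point for the MySQL side.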
-Beamrider-
Bernie is in the process of splitting the services on sunjammer
across two machines, sunjammer itself and beamrider. This is
primarily driven by the need to:
1. Scale sunjammer. In addition to SL stuff, sunjammer is also
hosting services for local labs and OLE.
2. Increase security and reliability. Sunjammer will remain the
'developer' machine, hosting developer accounts and testing/devel
services, while beamrider will host higher-priority services.
-Machines and Rack space-
Finally, we need to start thinking about future machine and rack space
needs. Of particular concern is finding a hosting provider that is
willing and able to host our growing number of machines in a single
facility.
david