[Systems] Discovery One--load balancing

David Farning dfarning at sugarlabs.org
Tue Nov 17 09:01:57 EST 2009


It looks like Discovery_One (aslo) is coming along pretty well.

I am happy with the overall performance.  We still have some tuning to
do, especially in the database.  When we increased the size of the
working set, the db load went up and overall performance dropped from
~25 to ~22 transactions per second.  We are using
http://people.sugarlabs.org/dfarning/working_subset.10000.aslo-proxy
as the working set.  I have Siege (the benchmark tool) set to hit the
aslo-proxy with random URLs from working_subset.10000.aslo-proxy as
fast as it can while keeping 40 concurrent connections.
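
For reference, that load can be generated with a Siege invocation
along these lines (a sketch; the exact flags in our harness may
differ, and the file is assumed to contain full URLs pointing at
aslo-proxy):

    # -b: benchmark mode, no delay between requests
    # -i: pick URLs at random from the file
    # -c 40: keep 40 concurrent simulated users
    siege -b -i -c 40 -f working_subset.10000.aslo-proxy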

The a.sl.o QUERY_CACHE is set to expire after 300 seconds, so some
viewers will be seeing pages that are up to five minutes old.

The next step is to add load balancing.  My plan is to add haproxy to
aslo-proxy on port 81.

Incoming traffic will first hit Squid on port 80.  Squid will take
care of the cached static content.  Requests that cannot be served by
Squid will be forwarded to HAProxy on port 81.  HAProxy will then
distribute the traffic across the php servers as necessary.
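
A minimal squid.conf sketch of that hand-off might look like the
following (the site name, localhost address, and accelerator options
are assumptions about our setup, not the exact config):

    # answer on port 80 as a reverse proxy (accelerator)
    http_port 80 accel defaultsite=activities.sugarlabs.org
    # anything squid cannot serve from its cache goes to haproxy on 81
    cache_peer 127.0.0.1 parent 81 0 no-query originserver name=haproxy
    cache_peer_access haproxy allow all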

Long term goal:  We should be able to clone aslo-web, calling the
clones aslo-web[*], as needed to meet php CPU requirements.

Phase one: set up haproxy so it redirects all traffic to aslo-web (the
current php server).
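
A first cut at the haproxy.cfg for this phase could be as simple as
the following (the bind port matches the plan above; the aslo-web
address is a placeholder, so treat this as a sketch rather than the
final config):

    global
        daemon

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    # phase one: send everything to the single existing php server
    listen aslo
        bind :81
        balance roundrobin
        server aslo-web 192.168.0.10:80 check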

Phase two:  Balance load between aslo-web and beamrider.  I think that
beamrider and sunjammer are configured closely enough that we can test
on beamrider without screwing up sunjammer.  NOTE:  aslo-web and
beamrider are on the same physical machine, so there will be no
performance gain.  But it will provide a platform for testing how
multiple php servers interact.
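
If phase one holds up, phase two should only need a second server line
in the same listen block, with haproxy health-checking both (again,
the addresses are placeholders):

    # phase two: round-robin across both php servers
    server aslo-web   192.168.0.10:80 check
    server beamrider  192.168.0.11:80 check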

Phase three: transition from beamrider to sunjammer.

Phase four: ?????

I hope that these steps will buy Sugar Labs 3-4 months of growing room
before things start to get critical again.

david

Luckily, I have a nice long flight tomorrow to think about the moving
pieces in Phase two :)

