[Systems] pointing activities.sugarlabs.org at the proxy

David Farning dfarning at sugarlabs.org
Mon Nov 16 00:27:37 EST 2009


On Sun, Nov 15, 2009 at 3:48 PM, Bernie Innocenti <bernie at codewiz.org> wrote:
> [cc += systems@]
>
> El Sun, 15-11-2009 a las 10:44 -0600, David Farning escribió:
>> Can you set up a few DNS names for us?
>> 140.186.70.121 aslo-proxy.sl.o
>> 140.186.70.123 aslo-web.sl.o
>> 140.186.70.125 aslo-db.sl.o
>
> Done. I also added the corresponding AAAA records as our infrastructure
> is proudly 100% IPv6 ready :-)
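
For reference, the new records in a BIND-style zone for sugarlabs.org
would look roughly like this (the AAAA addresses below are placeholders,
not the actual ones Bernie added):

    aslo-proxy   IN  A     140.186.70.121
    aslo-web     IN  A     140.186.70.123
    aslo-db      IN  A     140.186.70.125
    aslo-proxy   IN  AAAA  2001:db8::121   ; placeholder
    aslo-web     IN  AAAA  2001:db8::123   ; placeholder
    aslo-db      IN  AAAA  2001:db8::125   ; placeholder
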
>
>> The proxy at 140.186.70.121 is set up as a squid reverse proxy in
>> front of the existing activities.sugarlabs.org at 140.186.70.53.
>>
>> We are ready to move activities.sl.o from .53 to .121.
>
> Cool! But, before we go on, I was wondering if we could merge the proxy
> and the database together and/or fold the proxy into beamrider to reduce
> the number of virtual machines.

I was hoping to keep this architecture at least for the next 6 months
to see how it scales.  The main piece of information I would like to
learn is what resources each 'layer' requires.

1. aslo-proxy - This is the secure front end, which contains the squid
reverse proxy and will also contain haproxy (for HA and load
balancing).  It will sit on the public internet.  The main constraints
here will be memory and IO speed.  The big win is that we cache the
static content (images, css, and js) before it hits the php servers;
a rough config sketch follows the list below.

I will also be putting haproxy on this VM to handle future load-balancing needs.

2. aslo-web - This is where the PHP happens.  The main constraint here
appears to be CPU.

3. aslo-db - This is where MySQL and memcache will live.  MySQL is
CPU and IO bound, while memcache is memory bound.
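
To make the caching point in (1) concrete, the relevant part of
squid.conf on aslo-proxy looks roughly like the following.  The refresh
times and the 'aslo_origin' peer name are illustrative, not copied from
the live config:

    # accept traffic for the public site and hand it to the origin server
    http_port 80 accel defaultsite=activities.sugarlabs.org
    cache_peer 140.186.70.53 parent 80 0 no-query originserver name=aslo_origin
    acl aslo_site dstdomain activities.sugarlabs.org
    cache_peer_access aslo_origin allow aslo_site
    http_access allow aslo_site

    # keep static assets in the proxy cache so they never reach the php tier
    refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$  10080 90% 43200
    refresh_pattern -i \.(css|js)$                 1440 40% 40320

And once there is more than one aslo-web front end, the haproxy piece
on the same VM would be something like this (names and port are again
just examples):

    frontend aslo_in
        mode http
        bind *:8080
        default_backend aslo_web_pool

    backend aslo_web_pool
        mode http
        balance roundrobin
        server web1 140.186.70.123:80 check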

> It shouldn't detract much from scalability, but it would reduce the
> maintenance burden for the VMs by 2/3. There's also 0 security risk,
> since both mysql and squid are very secure services.

At this point the issue is not scalability but rather how to determine
our future scalability needs.  You are right that it would be easier
and more efficient to stick the whole stack on one machine.  By doing
the abstraction now, it will be easier to scale in the future.

> Also, I feel that the proxy and the database shouldn't run off a crappy
> qcow2 file, for performance reasons. As the number of aslo-web
> front-ends increases, they will probably perform a lot of disk I/O

I have not been following how you are setting up the VMs.  On my test
machine at home I have the entire disk set up as LVM on RAID (there is
a small ext2 boot partition to start dom0).  I just move and resize
memory and hard drive space as needed for individual VMs.
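
In practice that amounts to a handful of LVM commands; the volume group
name and sizes below are just examples:

    # carve out a logical volume for a new guest's disk
    lvcreate -L 20G -n aslo-db-disk vg0

    # later, grow the volume and the filesystem inside it
    lvextend -L +10G /dev/vg0/aslo-db-disk
    resize2fs /dev/vg0/aslo-db-disk

    # memory is just a number in the Xen guest config, e.g. memory = 2048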

> If we decide to keep the current architecture with 3 distinct VMs, we
> should at least set up backups for aslo-db and aslo-proxy before we
> transition them into production.

aslo-proxy is ready to go into production.  It is all set up.  I
thought you set up the backup last night.  aslo-proxy is currently
pointing at the existing aslo instance on sunjammer.  I would like to
spend a week or so tuning before pointing it at the new aslo-web
instance.
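
When that tuning is done, the cutover should amount to changing the
origin line in squid.conf on aslo-proxy, roughly:

    # today: origin is the existing ASLO instance on sunjammer
    cache_peer 140.186.70.53 parent 80 0 no-query originserver name=aslo_origin

    # after the cutover: point at aslo-web (or at haproxy once it is in the path)
    # cache_peer 140.186.70.123 parent 80 0 no-query originserver name=aslo_origin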

I would like to emphasise that the point of using a layer of VMs is
not that VMs are cool.  As Bernie correctly states, they are a pain in
the ass to set up.  The point is to ensure that I have the
architectural design and abstraction barriers right for when we need
to migrate the VMs to their own physical machines.

david

>   // Bernie Innocenti - http://codewiz.org/
>  \X/  Sugar Labs       - http://sugarlabs.org/

