[Systems] Docker backups

Bernie Innocenti bernie at sugarlabs.org
Sun Oct 26 01:15:54 EDT 2014


On 10/25/2014 03:14 PM, Sam P. wrote:
> Hi Bernie,
> 
> On Sat, Oct 25, 2014 at 11:42 AM, Bernie Innocenti <bernie at sugarlabs.org
> <mailto:bernie at sugarlabs.org>> wrote:
> 
>     I see that we have a pgsql db in there. Backing up the raw binary files
>     of the db is going to take a lot of space while not producing a
>     consistent backup. We should look for a way to dump all databases
>     owned by containers daily, before the backup starts.
> 
> 
> I have turned on the discourse daily backups.  It dumps the
> images/uploads as well as `dump.sql`.  It stores the most recent backup
> in /srv/socialhelp/d-shared/backups/default.  Can we set up a cron job to
> make sure that that is the only /srv/socialhelp thing that gets backed up?

We already have a backup script running every night between freedom and
justice, and it should pick up the new dir tonight.

Our backup script is called 'wizbackup' (made by yours truly). It runs
from /etc/cron.daily/wizbackup and reads the list of hosts from
/backup/HOSTS/.
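Longer term, instead of backing up raw pgsql data files, a nightly dump
could run right before wizbackup. A rough sketch, assuming a postgres
container named socialhelp_db (hypothetical name) and docker >= 1.3 for
`docker exec`:

```shell
#!/bin/sh
# Pre-backup hook sketch: dump all databases from the postgres container
# into the directory wizbackup already picks up. The container name,
# dump path and retention policy are assumptions, not our actual setup.
set -e
DUMPDIR=/srv/socialhelp/d-shared/backups/pgsql
mkdir -p "$DUMPDIR"

# Run pg_dumpall inside the container so client and server versions match.
docker exec socialhelp_db pg_dumpall -U postgres \
    | gzip > "$DUMPDIR/pg_dumpall-$(date +%F).sql.gz"

# Keep the last seven dumps, drop anything older.
ls -1t "$DUMPDIR"/pg_dumpall-*.sql.gz | tail -n +8 | xargs -r rm -f
```

Since run-parts executes /etc/cron.daily in lexical order, dropping this
in with a name that sorts before "wizbackup" should make it run first.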


>     Nice. I have no production images yet. I only created an ubuntu image to
>     verify that docker was working after installing it (you can wipe it
>     now).
> 
>     Question: is it to be expected that the processes in the containers are
>     also visible from the base system with ps? It's annoying, also because
>     the uids are different.
> 
> I have no idea!  Docker is using docker/libcontainer for isolation,
> which uses Linux namespaces.  Do you know anything about this?

I know the low-level concepts (cgroups, namespaces, etc), but I don't
know how this maps to docker. I guess we need to study.
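For what it's worth, what you're seeing is expected: the container's PID
namespace is a child of the host's, so the host's ps sees every container
process (under different pids and uids than inside the container). The
namespaces a process belongs to can be inspected under /proc; quick sketch:

```shell
# Print the PID and network namespace of the current shell.
# On the host these match init's namespaces; inside a container
# they point to different namespace inodes.
readlink /proc/self/ns/pid   # prints something like pid:[4026531836]
readlink /proc/self/ns/net
```

Two processes are in the same namespace exactly when these links point
to the same inode, which is how you can tell a container process apart
from a host one.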

By the way, this came up on LWN today:
  http://mjg59.dreamwidth.org/33170.html
  http://lwn.net/Articles/617842/#Comments


>     Question 2: how do we limit the resources of containers to make sure
>     they don't grow until freedom runs out of ram or disk space?
> 
> 
> We have to set options when we start the container.  Tell me how much
> ram, how many cpu cores/"shares" and I will restart discourse.

I don't know; how much memory and CPU does it use at peak? Find that
out, then set the limit at roughly 200% of the observed maximum to give
it room for growth and spikes.

The host is shared by multiple services and has limited resources. If we
overcommit, sooner or later a service will run out of control and cause
the entire host to thrash and OOM.
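Once we have the numbers, the caps go on the `docker run` command line;
a sketch with placeholder names and values (docker has -m/--memory and
-c/--cpu-shares for this):

```shell
# Sketch: re-create the discourse container with resource caps.
# The container name, image name and limits below are placeholders.
docker stop discourse && docker rm discourse
docker run -d --name discourse \
    --memory 1g \
    --cpu-shares 512 \
    local/discourse
```

--memory is a hard cap enforced by the memory cgroup; --cpu-shares is
only a relative weight, enforced under CPU contention. As far as I know
`docker run` has no per-container disk knob yet, so disk usage needs
watching separately (filesystem quotas or storage-driver limits).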

-- 
Bernie Innocenti
Sugar Labs Infrastructure Team
http://wiki.sugarlabs.org/go/Infrastructure_Team


