[Systems] Wiki speed
Sam P.
sam.parkinson3 at gmail.com
Fri Jun 26 02:44:44 EDT 2015
Hi Samuel,
Thanks for doing the research about resource limits in docker by the way!
On Fri, Jun 26, 2015 at 3:31 PM Samuel Cantero <scg at sugarlabs.org> wrote:
> On Wed, Jun 24, 2015 at 12:29 PM, Bernie Innocenti <bernie at codewiz.org>
> wrote:
>
>> On 24/06/15 04:02, Sam P. wrote:
>>
>>> Hi All,
>>>
>>> I just saw that the docker container on freedom that does the
>>> magical-wiki-visual-editor-node-js-service was not running, so that was
>>> why only the edit source thing was working :)
>>>
>>> IDK what happened there - did somebody restart docker on freedom? or
>>> restart freedom? all of the containers were down :(
>>>
>>> So about the visual editor, here were my tests on wiki-devel (running
>>> "time curl
>>> http://wiki-devel.sugarlabs.org/go/Welcome_to_the_Sugar_Labs_wiki >
>>> /dev/null" on sunjammer):
>>>
>>> * ~2sec average time with visual editor and all the others
>>> * ~1.5sec without visual editor => probs a bit of network cost, but not
>>> the whole thing
>>> * ~1.7sec without mobile front end (but with VE and all the others)
>>> * ~2sec without the media viewer lightbox (but with VE and all the
>>> others)
>>>
>>> Don't trust my test results, but it would probably help a little to
>>> move VE together with the rest of the wiki.
>>>
>>> It would be pretty cool to containerize or chuck the wiki in a new VM.
>>> That would make it easier to move the VE thing onto the same machine.
>>> Moving it onto a newer OS could also let us play with HHVM, which seems
>>> to work nicely with media wiki [1][2].
>>>
>>
>> LGTM. Can you and the other Sam (scg) work together on this?
>>
>
> Of course. I haven't been able to get in contact with Sam yet, but I have been
> working on this and I have some interesting things to tell you.
>
>>
>> I think the current docker setup is still too sketchy to run on justice
>> alongside other production VMs. We need to ensure that each container has
>> hard limits on all resources: cpu, memory, disk space and possibly even I/O
>> bandwidth.
>>
>
> Sure. I want to tell you about my research. I was reading about docker
> runtime constraints on resources. By default, memory and swap
> accounting is disabled on Ubuntu 14.04. We need to enable it in GRUB by setting
> the following line:
>
> GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
>
> Then we have to update grub (sudo update-grub) and, unfortunately, reboot
> the system. I have checked on freedom and it is not activated.
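>
> For reference, once the flag is in /etc/default/grub the remaining steps are
> roughly (a sketch, not yet run on freedom):
>
>   sudo update-grub
>   sudo reboot
>   # after the reboot, "docker info" should no longer print
>   # "WARNING: No swap limit support"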
>
Wait. Why do we need to enable this if the memory limits were already working
in your tests?
>
> Currently, we can limit the memory for a container in docker. However, by
> default, the container can use all of the memory on the host (as much
> memory as needed). Another interesting fact: the swap memory will be
> *unlimited* too.
>
> We have to change this in our current configuration because having no limit
> on memory can lead to issues where one container can easily make the whole
> system unstable and as a result unusable.
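>
> For example, something along these lines (a sketch; the image name is just a
> placeholder):
>
>   # default: no limit, the container can use all host memory and swap
>   docker run -d --name wiki some-wiki-image
>
>   # hard limit of 256 MB of memory for the container
>   docker run -d -m 256m --name wiki some-wiki-image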
>
> To check how this works in Docker, I used the *stress* tool (inside a
> container) to generate load in the containers so I could actually verify
> that the resource limits are being applied. The results are:
>
> *MEMORY*
>
> We can limit the amount of memory for a docker container (tested). By
> default, the swap allowance will be the same amount. We can set a
> different value for swap, but we cannot disable it entirely.
>
> A container with a memory limit of 256 MB will die if we stress the
> container with twice the assigned memory. The container uses all of
> the memory and all of the swap and then dies.
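>
> Roughly what I ran (a sketch; installing stress inside the container is
> abbreviated here):
>
>   # start a container with a 256 MB memory limit (the swap allowance
>   # defaults to the same amount)
>   docker run -it --rm -m 256m ubuntu:14.04 bash
>
>   # inside the container: ask for twice the limit
>   apt-get update && apt-get install -y stress
>   stress --vm 2 --vm-bytes 256M --timeout 60s
>
>   # once memory + swap are exhausted, the kernel OOM killer ends the workers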
>
Sounds great!
>
> *CPU*
>
> When I run the stress tool (inside the container) with 2 workers to impose
> load on the CPU, I can see that each process consumes 50% of a CPU. Yes, by
> default, a container can use 100% of the CPU, and if the host has many cores,
> it can use all of the available cores.
>
> Docker lets you specify a CPU share, but this is not so useful to actually
> limit the physical cores (I guess). The CPU share is just a proportion of
> CPU cycles. This is a relative weight (between containers) and has nothing
> to do with the actual processor speed. Besides, the proportion will only
> apply when CPU-intensive processes are running. When tasks in one container
> are idle, other containers can use the left-over CPU time. As you might
> guess, on an idle host, a container with low shares will still be able to
> use 100% of the CPU. On a multi-core system, the shares of CPU time are
> distributed over all CPU cores. Even if a container is limited to less than
> 100% of CPU time, it can use 100% of each individual CPU core.
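>
> For example (a sketch; 1024 is docker's default share weight):
>
>   # container "a" gets roughly twice the CPU time of container "b",
>   # but only while both are actually busy
>   docker run -d --cpu-shares 2048 --name a ubuntu:14.04 \
>       bash -c "while true; do :; done"
>   docker run -d --cpu-shares 1024 --name b ubuntu:14.04 \
>       bash -c "while true; do :; done"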
>
> As you can see, there is no way to say that a container should have access
> to only 1 GHz of CPU. So what can we do?
>
> 1- Limit the container's CPU usage as a percentage. For example, we
> can limit the container to 50% of a CPU using the CPU quota in docker.
>
> When I ran the stress tool (inside the container) with 2 workers to impose
> load on the CPU, I saw that each process consumed around 25% of CPU
> (values: 25.2 and 24.9, 25.2 and 25.2, 25.6 and 24.9, and so on). However, I
> still have to test it in a multi-core VM and do further testing.
>
> This is just a first test. I want to continue reading about CPU quota and
> CPU period constraints in Docker.
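>
> The 50% test was roughly this (a sketch; --cpu-period defaults to 100000 µs,
> and these flags may require a fairly recent docker):
>
>   # allow 50 ms of CPU time per 100 ms period, i.e. 50% of one CPU
>   docker run -it --rm --cpu-quota 50000 ubuntu:14.04 \
>       bash -c "apt-get update && apt-get install -y stress && stress --cpu 2"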
>
> 2- Pin a container to specific cores, i.e., set the CPU or CPUs on which
> the container is allowed to execute.
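>
> For example (a sketch; the flag is --cpuset on older docker and --cpuset-cpus
> on newer releases):
>
>   # allow the container to run only on cores 0 and 1
>   docker run -d --cpuset-cpus 0,1 ubuntu:14.04 bash -c "while true; do :; done"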
>
Yeah, that is probably an issue with docker. In fact, there seem to be 2
main issues with docker on SL infra now:
1. cpu limits
2. docker kills containers when you restart the daemon (i.e. installing a
docker update kills containers). That's probably why all the containers
got stopped a while back. The 'solution' to that seems to be clustering -
but that ain't gonna work for us.
So, let's look at alternatives to docker. The 2 that stand out are rkt
(CoreOS) and runc.io (the reference Open Container Project implementation).
Both of them allow you to run docker images. Both run as a standalone
program rather than a daemon, so you can use systemd for resource management
(demo on runc.io).
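For example, something like this unit snippet should be possible (a very rough
sketch; the unit name, paths and the exact runc invocation are placeholders):

  # /etc/systemd/system/wiki-ve.service
  [Service]
  WorkingDirectory=/srv/containers/wiki-ve
  ExecStart=/usr/local/bin/runc
  MemoryLimit=256M
  CPUShares=512

MemoryLimit= and CPUShares= are plain systemd directives, so we would get the
cgroup limits without depending on docker's flags.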
Freedom doesn't actually have systemd, but maybe upstart does something
similar. I will look into it, but I think that rkt or runc could be a useful
replacement for docker for our needs.
Thanks,
Sam
>
> *DISK and I/O Bandwidth*
>
> I haven't been able to work on this yet.
>
>
>> Without resource isolation, it's only a matter of time until a bug in one
>> container takes down everything, including our primary nameserver on
>> lightwave, the VMs for Paraguay and so on.
>>
>> Secondarily, we need some sort of monitoring to see things like memory
>> leaks and overloaded services. Without good graphs, setting resource limits
>> becomes a game of guessing.
>>
>
> I agree. I will search for some monitoring tools.
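>
> As a starting point, recent docker already ships a "docker stats" command
> that streams per-container CPU and memory usage, e.g.:
>
>   docker stats $(docker ps -q)
>
> but for graphs over time we would still need something like cAdvisor or munin.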
>
>> Can we solve both these problems before moving the wiki? I'm sure docker
>> deployments around the world have similar requirements and developed
>> similar solutions.
>>
>> --
>> _ // Bernie Innocenti
>> \X/ http://codewiz.org
>>
>