[Systems] {wiki, activities}.sugarlabs.org dead, no response to HTTP GET

Samuel Cantero scanterog at gmail.com
Sat Feb 27 16:10:27 EST 2016


I haven't had time lately and I couldn't check the files. It would be great
to leave a copy next time, in case someone else wants to look at them.

Certainly we can and should improve the way we manage runJobs, but I don't
think it is the root cause of our problem.

I have been reading about runJobs and checked our default values
(includes/DefaultSettings.php).

$wgJobRunRate = 1; => one job from the queue will be run on each page
request.
$wgRunJobsAsync = true; => an internal HTTP connection for handling the
execution of jobs will be opened, and MediaWiki will return the contents of
the page immediately to the client without waiting for the job to complete.
...

According to [1], we can improve performance either by processing the job
queue periodically in the background with cron (maybe every 30 minutes?)
and setting $wgJobRunRate to 0, or by reducing the calls to RunJobs.php,
lowering $wgJobRunRate to a value between 0 and 1. For example, 0.01 will
cause one job from the queue to run on average every 100 page views (it is
a probability, not a fixed schedule). A sketch of the cron variant is below.
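
A minimal sketch of the cron variant (the installation path and the
schedule here are illustrative assumptions, not our actual layout):

In LocalSettings.php:

    # Never run jobs on page views; leave the queue entirely to cron
    $wgJobRunRate = 0;

Crontab entry for the wiki user, every 30 minutes:

    */30 * * * * /usr/bin/php /srv/wiki/maintenance/runJobs.php --maxtime=1500 > /dev/null 2>&1

Capping --maxtime a bit below the cron interval should keep two runs from
piling up on top of each other.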

I understand that we want to reduce these calls because each job may take
over a second, making page loads feel somewhat sluggish. This would be
worse in our case, considering our slow disk access.

We can check our current job queue at the following URL:
http://wiki.sugarlabs.org/api.php?action=query&meta=siteinfo&siprop=statistics&format=jsonfm
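
For quick checks from a shell, something like the following would print
just the queue length (a sketch; it assumes the PHP CLI with JSON support
is available on the host, and the script name is made up):

    <?php
    // jobqueue-check.php (illustrative name): print the current job queue length
    $url = 'http://wiki.sugarlabs.org/api.php'
         . '?action=query&meta=siteinfo&siprop=statistics&format=json';
    $stats = json_decode(file_get_contents($url), true);
    echo $stats['query']['statistics']['jobs'] . " jobs queued\n";

If that number keeps growing between checks, the queue is not being
drained fast enough.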

I also understand that on every page request (or page view), MediaWiki
(PHP) opens a socket and makes an internal HTTP request in order to
execute RunJobs.php. We can check this in the apache-status output. [1]
also notes: ".. it (calling runjobs) requires loading a lot of PHP
classes in memory on a new process to execute a job, and also makes a new
HTTP request that the server must handle".

So, can we expect one Apache process for every call to RunJobs.php? What
would happen with many requests to wiki.slo?

During the last incident, were all 150 Apache processes executing the
runJobs maintenance script at the same time? Are we certain that runJobs
is the main culprit behind the Apache crashes?

This information could help us improve the wiki's performance, but I don't
know how it relates to our main problem.

[1] https://www.mediawiki.org/wiki/Manual:Job_queue

On Thu, Feb 25, 2016 at 9:37 AM, Sebastian Silva <sebastian at fuentelibre.org>
wrote:

> lol
>
> That explains why it traced back to Boston...
>
> Hanlon's Razor--"Never attribute to malice that which is adequately
> explained by stupidity."
>
> Regards,
> Sebastian
>
>
>
> On 25/02/16 04:17, Bernie Innocenti wrote:
>
> lol, 2001:4830:134:7::11 is sunjammer's ipv6 address... better not plonk
> it with iptables even if it sends nasty queries :-)
>
> So, what's calling RunJobs at high rate???
>
>
> --
> I+D SomosAzucar.Org
> "icarito" #somosazucar en Freenode IRC
> "Nadie libera a nadie, nadie se libera solo. Los seres humanos se liberan en comuniĆ³n" - P. Freire
>

