[Sugar-devel] OT: determining memory usage of short-lived processes

Martin Langhoff martin.langhoff at gmail.com
Sat Jun 6 18:28:58 EDT 2009


On Sat, Jun 6, 2009 at 9:40 PM, Sascha
Silbe <sascha-ml-ui-sugar-devel at silbe.org> wrote:
> I've examined memory usage of long-running processes (i.e. daemons and
> applications) in the past; no problems.

If you've been using top for that, there's a newer and more accurate
way to measure memory usage. Recent kernels export the 'smaps' of
every process under /proc, and scripts such as 'ps_mem.py' turn that
info into very good summaries.

Perhaps you knew about it already -- it's a good tool worthy of
promotion, so I'll mention it anyway.
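
In case a concrete illustration helps, here is a stripped-down sketch
of the idea (ps_mem.py itself does much more -- per-program grouping,
shared-memory handling, fallbacks for older kernels):

import sys

# Minimal sketch of the idea behind ps_mem.py: sum the Pss
# ("proportional set size") entries in /proc/<pid>/smaps.  Pss charges
# each shared page fractionally to every process that maps it, so the
# per-process totals add up to real RAM use.  Needs a kernel recent
# enough to export Pss (2.6.25+).
def pss_kb(pid):
    total_kb = 0
    with open('/proc/%s/smaps' % pid) as smaps:
        for line in smaps:
            if line.startswith('Pss:'):
                total_kb += int(line.split()[1])   # field is in kB
    return total_kb

if __name__ == '__main__':
    print('%d kB' % pss_kb(sys.argv[1] if len(sys.argv) > 1 else 'self'))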

> But for my VCS comparison I need to determine the peak memory usage of all
> child processes (combined), which are rather short-lived. What's the best
> way to do that? Performance is an issue as the benchmark already takes 13h
> to run on a sample set of 100 Project Gutenberg files (originally I had a
> sample size of ~800 files). That probably rules out valgrind (AFAIK it
> incurs quite a performance penalty)...

Run the whole thing under /usr/bin/time, which is different from the
shell's 'time' built-in. I used it very often in git development to
assess peak memory usage. The report is actually about page faults,
which for relatively short runs correlate almost linearly with memory
usage.
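
For example (a sketch only -- the 'git repack' command is a stand-in
for whatever your benchmark runs; %M, %F and %R are GNU time's format
directives for peak RSS in kB and major/minor page fault counts):

import subprocess

# Run one benchmark step under GNU time; its report goes to stderr.
cmd = ['/usr/bin/time', '-f', 'maxrss_kb=%M majflt=%F minflt=%R',
       'git', 'repack', '-a', '-d']        # stand-in benchmark step
report = subprocess.run(cmd, stderr=subprocess.PIPE).stderr
print(report.decode().strip())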

For long-lived processes that release memory and later allocate it
again, the relationship is less clear-cut.
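
And if your benchmark driver happens to be a Python script (an
assumption on my part), the same counters /usr/bin/time reports are
available in-process via getrusage(), without wrapping each command:

import resource
import subprocess

# Stand-in benchmark step; any waited-for child counts.
subprocess.call(['git', 'repack', '-a', '-d'])

# RUSAGE_CHILDREN accumulates over all children we have waited for.
# Caveats: ru_maxrss is the peak RSS of the *largest* single child,
# not a sum across concurrent children, and it reads 0 on kernels
# older than 2.6.32 -- one reason the page-fault counts (which do
# accumulate) are the more portable signal.  On Linux, ru_maxrss is
# reported in kB.
usage = resource.getrusage(resource.RUSAGE_CHILDREN)
print('peak RSS of largest child: %d kB' % usage.ru_maxrss)
print('page faults (major/minor): %d/%d' % (usage.ru_majflt,
                                            usage.ru_minflt))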

hth,



m
-- 
 martin.langhoff at gmail.com
 martin at laptop.org -- School Server Architect
 - ask interesting questions
 - don't get distracted with shiny stuff
 - working code first
 - http://wiki.laptop.org/go/User:Martinlanghoff

