cyrus perdition, connections not dying

Andrew Morgan morgan at orst.edu
Tue Apr 8 13:32:51 EDT 2008


On Tue, 8 Apr 2008, Ian Eiloart wrote:

> --On 7 April 2008 10:20:33 -0700 Andrew Morgan <morgan at orst.edu> wrote:
>
>> On Mon, 7 Apr 2008, Ian Eiloart wrote:
>> ......
>>> 
>>> Is there a way to limit Cyrus process sizes at all? I guess I can take a
>>> look at my compilation options to try to reduce the starting size, but
>>> can I limit the process growth?
>> 
>> How are you calculating the process size?  I'm curious to compare it with
>> my Cyrus installation.
>>
>>  	Andy
>
> I'm using the RSIZE reported in top. Here, I've got two users on a 
> development machine:
>
> % top -U cyrus -o rsize -l 1
> Processes:  100 total, 4 running, 2 stuck, 94 sleeping... 292 threads  11:27:10
>
> Load Avg:  1.46,  1.38,  1.38    CPU usage:  7.69% user, 46.15% sys, 46.15% idle
> SharedLibs: num =    7, resident =   67M code, 1600K data, 5088K linkedit.
> MemRegions: num =  9491, resident =  237M +   22M private,  124M shared.
> PhysMem:  658M wired,  561M active,   28M inactive, 1247M used, 2799M free.
> VM: 4439M + 372M   105234(0) pageins, 0(0) pageouts
>
> PID COMMAND      %CPU   TIME   #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
>  71 master       0.0%  1:50:34   1    15+    47   70M+ 2364K+   70M+ 84M+
> 30219 proxyd       0.0%  0:01.78   1    15+   105  672K+ 8168K+ 3552K+ 74M+
> 1076 proxyd       0.0%  0:00.83   1    15+   110  524K+ 8176K+ 3388K+ 74M+
> 1093 proxyd       0.0%  0:02.56   1    15+   110  516K+ 8176K+ 3332K+ 74M+
>  94 proxyd       0.0%  0:00.95   1    15+   110  524K+ 8176K+ 3332K+ 74M+
> 35737 proxyd       0.0%  0:00.37   1    15+   104  660K+ 8168K+ 3292K+ 74M+
> 7483 proxyd       0.0%  0:00.19   1    15+   107  408K+ 8172K+ 2040K+ 74M+
>  84 proxyd       0.0%  0:00.02   1    15+   107  408K+ 8172K+ 2040K+ 74M+
> 53828 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+ 74M+
> 51026 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+ 74M+
> 46021 proxyd       0.0%  0:00.27   1    15+   101  464K+ 8164K+ 2024K+ 74M+
> 44932 proxyd       0.0%  0:00.23   1    15+   101  464K+ 8164K+ 2024K+ 74M+
> 12886 proxyd       0.0%  0:00.20   1    15+   101  464K+ 8164K+ 2024K+ 74M+
> 102 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+ 74M+
> 100 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+ 74M+
>  95 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+ 74M+
>  93 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+ 74M+
>  83 proxyd       0.0%  0:00.02   1    15+   107  404K+ 8172K+ 2024K+ 74M+
> 57073 mupdate      0.0%  0:00.12   1    14+    23  136K+ 1216K+  604K+ 19M+
> 57074 mupdate      0.0%  0:00.05   1    13+    23  128K+ 1216K+  588K+ 19M+
>  88 launchd      0.0% 32:36.05   3    26+    26  120K+  296K+  464K+ 18M+

These match what I see on my Cyrus Murder machines (Linux 2.6) as well.
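
On the Linux side, something like this with procps ps pulls the 
equivalent per-process numbers -- just a rough sketch, and it assumes 
the proxy processes show up under the name "proxyd":

   # proxyd processes sorted by resident size, largest first
   ps -C proxyd -o pid,rss,vsz,args --sort=-rss | head

   # average resident size (KB) across all proxyd processes
   ps -C proxyd -o rss= | awk '{ s += $1 } END { if (NR) print s/NR " KB avg, " NR " procs" }'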

With 328 proxyd processes currently running on a frontend, top reports:

Tasks: 388 total,   1 running, 387 sleeping,   0 stopped,   0 zombie
Cpu(s): 20.4%us,  1.0%sy,  0.0%ni, 78.3%id,  0.0%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:   2076724k total,   634260k used,  1442464k free,   162364k buffers
Swap:  2000052k total,        0k used,  2000052k free,   216200k cached

That roughly agrees with a rule of thumb of about 2MB per proxyd.
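
As for Ian's original question about capping process growth: I don't 
see a knob for it in cyrus.conf, but resource limits are inherited 
across fork/exec, so you could set an address-space cap in whatever 
script starts master and let every child pick it up. A rough sketch 
only -- the master path and the limit value are placeholders to adjust 
for your installation:

   #!/bin/sh
   # Cap address space (RLIMIT_AS) for master and everything it
   # forks; the limit is inherited by the imapd/proxyd children.
   ulimit -v 102400                  # KB, i.e. ~100MB per process
   exec /usr/cyrus/bin/master -d     # path and flags vary per install

One caveat: a process that hits the cap gets malloc() failures rather 
than a clean kill, so leave plenty of headroom above your normal 
proxyd sizes.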

 	Andy

